Dec 13 01:30:23 np0005558241 kernel: Linux version 5.14.0-648.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025
Dec 13 01:30:23 np0005558241 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 13 01:30:23 np0005558241 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 13 01:30:23 np0005558241 kernel: BIOS-provided physical RAM map:
Dec 13 01:30:23 np0005558241 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 01:30:23 np0005558241 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 01:30:23 np0005558241 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 01:30:23 np0005558241 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 13 01:30:23 np0005558241 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 13 01:30:23 np0005558241 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 01:30:23 np0005558241 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 01:30:23 np0005558241 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 13 01:30:23 np0005558241 kernel: NX (Execute Disable) protection: active
Dec 13 01:30:23 np0005558241 kernel: APIC: Static calls initialized
Dec 13 01:30:23 np0005558241 kernel: SMBIOS 2.8 present.
Dec 13 01:30:23 np0005558241 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 13 01:30:23 np0005558241 kernel: Hypervisor detected: KVM
Dec 13 01:30:23 np0005558241 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:30:23 np0005558241 kernel: kvm-clock: using sched offset of 4388818770 cycles
Dec 13 01:30:23 np0005558241 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:30:23 np0005558241 kernel: tsc: Detected 2800.000 MHz processor
Dec 13 01:30:23 np0005558241 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 13 01:30:23 np0005558241 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 01:30:23 np0005558241 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 13 01:30:23 np0005558241 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 13 01:30:23 np0005558241 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 13 01:30:23 np0005558241 kernel: Using GB pages for direct mapping
Dec 13 01:30:23 np0005558241 kernel: RAMDISK: [mem 0x2d46a000-0x32a2cfff]
Dec 13 01:30:23 np0005558241 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:30:23 np0005558241 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 13 01:30:23 np0005558241 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:30:23 np0005558241 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:30:23 np0005558241 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:30:23 np0005558241 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 13 01:30:23 np0005558241 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:30:23 np0005558241 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 01:30:23 np0005558241 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 13 01:30:23 np0005558241 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 13 01:30:23 np0005558241 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 13 01:30:23 np0005558241 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 13 01:30:23 np0005558241 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 13 01:30:23 np0005558241 kernel: No NUMA configuration found
Dec 13 01:30:23 np0005558241 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 13 01:30:23 np0005558241 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec 13 01:30:23 np0005558241 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec 13 01:30:23 np0005558241 kernel: Zone ranges:
Dec 13 01:30:23 np0005558241 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:30:23 np0005558241 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 01:30:23 np0005558241 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 13 01:30:23 np0005558241 kernel:  Device   empty
Dec 13 01:30:23 np0005558241 kernel: Movable zone start for each node
Dec 13 01:30:23 np0005558241 kernel: Early memory node ranges
Dec 13 01:30:23 np0005558241 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 01:30:23 np0005558241 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 13 01:30:23 np0005558241 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 13 01:30:23 np0005558241 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 13 01:30:23 np0005558241 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:30:23 np0005558241 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 01:30:23 np0005558241 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 13 01:30:23 np0005558241 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:30:23 np0005558241 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:30:23 np0005558241 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:30:23 np0005558241 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:30:23 np0005558241 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:30:23 np0005558241 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:30:23 np0005558241 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:30:23 np0005558241 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:30:23 np0005558241 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:30:23 np0005558241 kernel: TSC deadline timer available
Dec 13 01:30:23 np0005558241 kernel: CPU topo: Max. logical packages:   8
Dec 13 01:30:23 np0005558241 kernel: CPU topo: Max. logical dies:       8
Dec 13 01:30:23 np0005558241 kernel: CPU topo: Max. dies per package:   1
Dec 13 01:30:23 np0005558241 kernel: CPU topo: Max. threads per core:   1
Dec 13 01:30:23 np0005558241 kernel: CPU topo: Num. cores per package:     1
Dec 13 01:30:23 np0005558241 kernel: CPU topo: Num. threads per package:   1
Dec 13 01:30:23 np0005558241 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 13 01:30:23 np0005558241 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:30:23 np0005558241 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 13 01:30:23 np0005558241 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 13 01:30:23 np0005558241 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 13 01:30:23 np0005558241 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 13 01:30:23 np0005558241 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 13 01:30:23 np0005558241 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 13 01:30:23 np0005558241 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 13 01:30:23 np0005558241 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 13 01:30:23 np0005558241 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 13 01:30:23 np0005558241 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 13 01:30:23 np0005558241 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:30:23 np0005558241 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:30:23 np0005558241 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 13 01:30:23 np0005558241 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 13 01:30:23 np0005558241 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 01:30:23 np0005558241 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 13 01:30:23 np0005558241 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64", will be passed to user space.
Dec 13 01:30:23 np0005558241 kernel: random: crng init done
Dec 13 01:30:23 np0005558241 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 01:30:23 np0005558241 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:30:23 np0005558241 kernel: Fallback order for Node 0: 0 
Dec 13 01:30:23 np0005558241 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 13 01:30:23 np0005558241 kernel: Policy zone: Normal
Dec 13 01:30:23 np0005558241 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:30:23 np0005558241 kernel: software IO TLB: area num 8.
Dec 13 01:30:23 np0005558241 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 13 01:30:23 np0005558241 kernel: ftrace: allocating 49357 entries in 193 pages
Dec 13 01:30:23 np0005558241 kernel: ftrace: allocated 193 pages with 3 groups
Dec 13 01:30:23 np0005558241 kernel: Dynamic Preempt: voluntary
Dec 13 01:30:23 np0005558241 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:30:23 np0005558241 kernel: rcu: 	RCU event tracing is enabled.
Dec 13 01:30:23 np0005558241 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 13 01:30:23 np0005558241 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec 13 01:30:23 np0005558241 kernel: 	Rude variant of Tasks RCU enabled.
Dec 13 01:30:23 np0005558241 kernel: 	Tracing variant of Tasks RCU enabled.
Dec 13 01:30:23 np0005558241 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:30:23 np0005558241 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 13 01:30:23 np0005558241 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 13 01:30:23 np0005558241 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 13 01:30:23 np0005558241 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 13 01:30:23 np0005558241 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 13 01:30:23 np0005558241 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:30:23 np0005558241 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 13 01:30:23 np0005558241 kernel: Console: colour VGA+ 80x25
Dec 13 01:30:23 np0005558241 kernel: printk: console [ttyS0] enabled
Dec 13 01:30:23 np0005558241 kernel: ACPI: Core revision 20230331
Dec 13 01:30:23 np0005558241 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:30:23 np0005558241 kernel: x2apic enabled
Dec 13 01:30:23 np0005558241 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:30:23 np0005558241 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:30:23 np0005558241 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Dec 13 01:30:23 np0005558241 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:30:23 np0005558241 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:30:23 np0005558241 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:30:23 np0005558241 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:30:23 np0005558241 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:30:23 np0005558241 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 13 01:30:23 np0005558241 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:30:23 np0005558241 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:30:23 np0005558241 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:30:23 np0005558241 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:30:23 np0005558241 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 01:30:23 np0005558241 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 01:30:23 np0005558241 kernel: x86/bugs: return thunk changed
Dec 13 01:30:23 np0005558241 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 01:30:23 np0005558241 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:30:23 np0005558241 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:30:23 np0005558241 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:30:23 np0005558241 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 13 01:30:23 np0005558241 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:30:23 np0005558241 kernel: Freeing SMP alternatives memory: 40K
Dec 13 01:30:23 np0005558241 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:30:23 np0005558241 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 13 01:30:23 np0005558241 kernel: landlock: Up and running.
Dec 13 01:30:23 np0005558241 kernel: Yama: becoming mindful.
Dec 13 01:30:23 np0005558241 kernel: SELinux:  Initializing.
Dec 13 01:30:23 np0005558241 kernel: LSM support for eBPF active
Dec 13 01:30:23 np0005558241 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:30:23 np0005558241 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:30:23 np0005558241 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:30:23 np0005558241 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:30:23 np0005558241 kernel: ... version:                0
Dec 13 01:30:23 np0005558241 kernel: ... bit width:              48
Dec 13 01:30:23 np0005558241 kernel: ... generic registers:      6
Dec 13 01:30:23 np0005558241 kernel: ... value mask:             0000ffffffffffff
Dec 13 01:30:23 np0005558241 kernel: ... max period:             00007fffffffffff
Dec 13 01:30:23 np0005558241 kernel: ... fixed-purpose events:   0
Dec 13 01:30:23 np0005558241 kernel: ... event mask:             000000000000003f
Dec 13 01:30:23 np0005558241 kernel: signal: max sigframe size: 1776
Dec 13 01:30:23 np0005558241 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:30:23 np0005558241 kernel: rcu: 	Max phase no-delay instances is 400.
Dec 13 01:30:23 np0005558241 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:30:23 np0005558241 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:30:23 np0005558241 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 13 01:30:23 np0005558241 kernel: smp: Brought up 1 node, 8 CPUs
Dec 13 01:30:23 np0005558241 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Dec 13 01:30:23 np0005558241 kernel: node 0 deferred pages initialised in 10ms
Dec 13 01:30:23 np0005558241 kernel: Memory: 7763972K/8388068K available (16384K kernel code, 5795K rwdata, 13916K rodata, 4192K init, 7164K bss, 618220K reserved, 0K cma-reserved)
Dec 13 01:30:23 np0005558241 kernel: devtmpfs: initialized
Dec 13 01:30:23 np0005558241 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:30:23 np0005558241 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:30:23 np0005558241 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 13 01:30:23 np0005558241 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:30:23 np0005558241 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:30:23 np0005558241 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:30:23 np0005558241 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:30:23 np0005558241 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:30:23 np0005558241 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:30:23 np0005558241 kernel: audit: type=2000 audit(1765607422.053:1): state=initialized audit_enabled=0 res=1
Dec 13 01:30:23 np0005558241 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 13 01:30:23 np0005558241 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:30:23 np0005558241 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:30:23 np0005558241 kernel: cpuidle: using governor menu
Dec 13 01:30:23 np0005558241 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:30:23 np0005558241 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:30:23 np0005558241 kernel: PCI: Using configuration type 1 for extended access
Dec 13 01:30:23 np0005558241 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:30:23 np0005558241 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:30:23 np0005558241 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:30:23 np0005558241 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:30:23 np0005558241 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:30:23 np0005558241 kernel: Demotion targets for Node 0: null
Dec 13 01:30:23 np0005558241 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:30:23 np0005558241 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:30:23 np0005558241 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:30:23 np0005558241 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:30:23 np0005558241 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:30:23 np0005558241 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:30:23 np0005558241 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:30:23 np0005558241 kernel: ACPI: Interpreter enabled
Dec 13 01:30:23 np0005558241 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 13 01:30:23 np0005558241 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:30:23 np0005558241 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:30:23 np0005558241 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:30:23 np0005558241 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 01:30:23 np0005558241 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:30:23 np0005558241 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [3] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [4] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [5] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [6] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [7] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [8] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [9] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [10] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [11] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [12] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [13] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [14] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [15] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [16] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [17] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [18] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [19] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [20] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [21] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [22] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [23] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [24] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [25] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [26] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [27] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [28] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [29] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [30] registered
Dec 13 01:30:23 np0005558241 kernel: acpiphp: Slot [31] registered
Dec 13 01:30:23 np0005558241 kernel: PCI host bridge to bus 0000:00
Dec 13 01:30:23 np0005558241 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 13 01:30:23 np0005558241 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 13 01:30:23 np0005558241 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:30:23 np0005558241 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:30:23 np0005558241 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 13 01:30:23 np0005558241 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 01:30:23 np0005558241 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:30:23 np0005558241 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:30:23 np0005558241 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:30:23 np0005558241 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:30:23 np0005558241 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 01:30:23 np0005558241 kernel: iommu: Default domain type: Translated
Dec 13 01:30:23 np0005558241 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:30:23 np0005558241 kernel: SCSI subsystem initialized
Dec 13 01:30:23 np0005558241 kernel: ACPI: bus type USB registered
Dec 13 01:30:23 np0005558241 kernel: usbcore: registered new interface driver usbfs
Dec 13 01:30:23 np0005558241 kernel: usbcore: registered new interface driver hub
Dec 13 01:30:23 np0005558241 kernel: usbcore: registered new device driver usb
Dec 13 01:30:23 np0005558241 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 01:30:23 np0005558241 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 01:30:23 np0005558241 kernel: PTP clock support registered
Dec 13 01:30:23 np0005558241 kernel: EDAC MC: Ver: 3.0.0
Dec 13 01:30:23 np0005558241 kernel: NetLabel: Initializing
Dec 13 01:30:23 np0005558241 kernel: NetLabel:  domain hash size = 128
Dec 13 01:30:23 np0005558241 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 13 01:30:23 np0005558241 kernel: NetLabel:  unlabeled traffic allowed by default
Dec 13 01:30:23 np0005558241 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:30:23 np0005558241 kernel: vgaarb: loaded
Dec 13 01:30:23 np0005558241 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:30:23 np0005558241 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:30:23 np0005558241 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:30:23 np0005558241 kernel: pnp: PnP ACPI init
Dec 13 01:30:23 np0005558241 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 01:30:23 np0005558241 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:30:23 np0005558241 kernel: NET: Registered PF_INET protocol family
Dec 13 01:30:23 np0005558241 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:30:23 np0005558241 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 01:30:23 np0005558241 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:30:23 np0005558241 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:30:23 np0005558241 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 01:30:23 np0005558241 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 01:30:23 np0005558241 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 13 01:30:23 np0005558241 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:30:23 np0005558241 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:30:23 np0005558241 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:30:23 np0005558241 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:30:23 np0005558241 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 13 01:30:23 np0005558241 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 13 01:30:23 np0005558241 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:30:23 np0005558241 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:30:23 np0005558241 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 01:30:23 np0005558241 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 01:30:23 np0005558241 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 73089 usecs
Dec 13 01:30:23 np0005558241 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:30:23 np0005558241 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 01:30:23 np0005558241 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 13 01:30:23 np0005558241 kernel: ACPI: bus type thunderbolt registered
Dec 13 01:30:23 np0005558241 kernel: Trying to unpack rootfs image as initramfs...
Dec 13 01:30:23 np0005558241 kernel: Initialise system trusted keyrings
Dec 13 01:30:23 np0005558241 kernel: Key type blacklist registered
Dec 13 01:30:23 np0005558241 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 13 01:30:23 np0005558241 kernel: zbud: loaded
Dec 13 01:30:23 np0005558241 kernel: integrity: Platform Keyring initialized
Dec 13 01:30:23 np0005558241 kernel: integrity: Machine keyring initialized
Dec 13 01:30:23 np0005558241 kernel: Freeing initrd memory: 87820K
Dec 13 01:30:23 np0005558241 kernel: NET: Registered PF_ALG protocol family
Dec 13 01:30:23 np0005558241 kernel: xor: automatically using best checksumming function   avx       
Dec 13 01:30:23 np0005558241 kernel: Key type asymmetric registered
Dec 13 01:30:23 np0005558241 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:30:23 np0005558241 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 13 01:30:23 np0005558241 kernel: io scheduler mq-deadline registered
Dec 13 01:30:23 np0005558241 kernel: io scheduler kyber registered
Dec 13 01:30:23 np0005558241 kernel: io scheduler bfq registered
Dec 13 01:30:23 np0005558241 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 13 01:30:23 np0005558241 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 13 01:30:23 np0005558241 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 13 01:30:23 np0005558241 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:30:23 np0005558241 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 01:30:23 np0005558241 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 01:30:23 np0005558241 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 01:30:23 np0005558241 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:30:23 np0005558241 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:30:23 np0005558241 kernel: Non-volatile memory driver v1.3
Dec 13 01:30:23 np0005558241 kernel: rdac: device handler registered
Dec 13 01:30:23 np0005558241 kernel: hp_sw: device handler registered
Dec 13 01:30:23 np0005558241 kernel: emc: device handler registered
Dec 13 01:30:23 np0005558241 kernel: alua: device handler registered
Dec 13 01:30:23 np0005558241 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 13 01:30:23 np0005558241 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 13 01:30:23 np0005558241 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 13 01:30:23 np0005558241 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 13 01:30:23 np0005558241 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 13 01:30:23 np0005558241 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 13 01:30:23 np0005558241 kernel: usb usb1: Product: UHCI Host Controller
Dec 13 01:30:23 np0005558241 kernel: usb usb1: Manufacturer: Linux 5.14.0-648.el9.x86_64 uhci_hcd
Dec 13 01:30:23 np0005558241 kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 13 01:30:23 np0005558241 kernel: hub 1-0:1.0: USB hub found
Dec 13 01:30:23 np0005558241 kernel: hub 1-0:1.0: 2 ports detected
Dec 13 01:30:23 np0005558241 kernel: usbcore: registered new interface driver usbserial_generic
Dec 13 01:30:23 np0005558241 kernel: usbserial: USB Serial support registered for generic
Dec 13 01:30:23 np0005558241 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:30:23 np0005558241 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:30:23 np0005558241 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:30:23 np0005558241 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:30:23 np0005558241 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 01:30:23 np0005558241 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 13 01:30:23 np0005558241 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 13 01:30:23 np0005558241 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 13 01:30:23 np0005558241 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 01:30:23 np0005558241 kernel: rtc_cmos 00:04: setting system clock to 2025-12-13T06:30:22 UTC (1765607422)
Dec 13 01:30:23 np0005558241 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 13 01:30:23 np0005558241 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 01:30:23 np0005558241 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:30:23 np0005558241 kernel: usbcore: registered new interface driver usbhid
Dec 13 01:30:23 np0005558241 kernel: usbhid: USB HID core driver
Dec 13 01:30:23 np0005558241 kernel: drop_monitor: Initializing network drop monitor service
Dec 13 01:30:23 np0005558241 kernel: Initializing XFRM netlink socket
Dec 13 01:30:23 np0005558241 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:30:23 np0005558241 kernel: Segment Routing with IPv6
Dec 13 01:30:23 np0005558241 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:30:23 np0005558241 kernel: mpls_gso: MPLS GSO support
Dec 13 01:30:23 np0005558241 kernel: IPI shorthand broadcast: enabled
Dec 13 01:30:23 np0005558241 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:30:23 np0005558241 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:30:23 np0005558241 kernel: sched_clock: Marking stable (1226069220, 159887650)->(1506132460, -120175590)
Dec 13 01:30:23 np0005558241 kernel: registered taskstats version 1
Dec 13 01:30:23 np0005558241 kernel: Loading compiled-in X.509 certificates
Dec 13 01:30:23 np0005558241 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 13 01:30:23 np0005558241 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 13 01:30:23 np0005558241 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 13 01:30:23 np0005558241 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 13 01:30:23 np0005558241 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 13 01:30:23 np0005558241 kernel: Demotion targets for Node 0: null
Dec 13 01:30:23 np0005558241 kernel: page_owner is disabled
Dec 13 01:30:23 np0005558241 kernel: Key type .fscrypt registered
Dec 13 01:30:23 np0005558241 kernel: Key type fscrypt-provisioning registered
Dec 13 01:30:23 np0005558241 kernel: Key type big_key registered
Dec 13 01:30:23 np0005558241 kernel: Key type encrypted registered
Dec 13 01:30:23 np0005558241 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:30:23 np0005558241 kernel: Loading compiled-in module X.509 certificates
Dec 13 01:30:23 np0005558241 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 13 01:30:23 np0005558241 kernel: ima: Allocated hash algorithm: sha256
Dec 13 01:30:23 np0005558241 kernel: ima: No architecture policies found
Dec 13 01:30:23 np0005558241 kernel: evm: Initialising EVM extended attributes:
Dec 13 01:30:23 np0005558241 kernel: evm: security.selinux
Dec 13 01:30:23 np0005558241 kernel: evm: security.SMACK64 (disabled)
Dec 13 01:30:23 np0005558241 kernel: evm: security.SMACK64EXEC (disabled)
Dec 13 01:30:23 np0005558241 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 13 01:30:23 np0005558241 kernel: evm: security.SMACK64MMAP (disabled)
Dec 13 01:30:23 np0005558241 kernel: evm: security.apparmor (disabled)
Dec 13 01:30:23 np0005558241 kernel: evm: security.ima
Dec 13 01:30:23 np0005558241 kernel: evm: security.capability
Dec 13 01:30:23 np0005558241 kernel: evm: HMAC attrs: 0x1
Dec 13 01:30:23 np0005558241 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 13 01:30:23 np0005558241 kernel: Running certificate verification RSA selftest
Dec 13 01:30:23 np0005558241 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 13 01:30:23 np0005558241 kernel: Running certificate verification ECDSA selftest
Dec 13 01:30:23 np0005558241 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 13 01:30:23 np0005558241 kernel: clk: Disabling unused clocks
Dec 13 01:30:23 np0005558241 kernel: Freeing unused decrypted memory: 2028K
Dec 13 01:30:23 np0005558241 kernel: Freeing unused kernel image (initmem) memory: 4192K
Dec 13 01:30:23 np0005558241 kernel: Write protecting the kernel read-only data: 30720k
Dec 13 01:30:23 np0005558241 kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Dec 13 01:30:23 np0005558241 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 13 01:30:23 np0005558241 kernel: Run /init as init process
Dec 13 01:30:23 np0005558241 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 01:30:23 np0005558241 systemd: Detected virtualization kvm.
Dec 13 01:30:23 np0005558241 systemd: Detected architecture x86-64.
Dec 13 01:30:23 np0005558241 systemd: Running in initrd.
Dec 13 01:30:23 np0005558241 systemd: No hostname configured, using default hostname.
Dec 13 01:30:23 np0005558241 systemd: Hostname set to <localhost>.
Dec 13 01:30:23 np0005558241 systemd: Initializing machine ID from VM UUID.
Dec 13 01:30:23 np0005558241 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 13 01:30:23 np0005558241 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 13 01:30:23 np0005558241 kernel: usb 1-1: Product: QEMU USB Tablet
Dec 13 01:30:23 np0005558241 kernel: usb 1-1: Manufacturer: QEMU
Dec 13 01:30:23 np0005558241 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 13 01:30:23 np0005558241 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 13 01:30:23 np0005558241 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 13 01:30:23 np0005558241 systemd: Queued start job for default target Initrd Default Target.
Dec 13 01:30:23 np0005558241 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec 13 01:30:23 np0005558241 systemd: Reached target Local Encrypted Volumes.
Dec 13 01:30:23 np0005558241 systemd: Reached target Initrd /usr File System.
Dec 13 01:30:23 np0005558241 systemd: Reached target Local File Systems.
Dec 13 01:30:23 np0005558241 systemd: Reached target Path Units.
Dec 13 01:30:23 np0005558241 systemd: Reached target Slice Units.
Dec 13 01:30:23 np0005558241 systemd: Reached target Swaps.
Dec 13 01:30:23 np0005558241 systemd: Reached target Timer Units.
Dec 13 01:30:23 np0005558241 systemd: Listening on D-Bus System Message Bus Socket.
Dec 13 01:30:23 np0005558241 systemd: Listening on Journal Socket (/dev/log).
Dec 13 01:30:23 np0005558241 systemd: Listening on Journal Socket.
Dec 13 01:30:23 np0005558241 systemd: Listening on udev Control Socket.
Dec 13 01:30:23 np0005558241 systemd: Listening on udev Kernel Socket.
Dec 13 01:30:23 np0005558241 systemd: Reached target Socket Units.
Dec 13 01:30:23 np0005558241 systemd: Starting Create List of Static Device Nodes...
Dec 13 01:30:23 np0005558241 systemd: Starting Journal Service...
Dec 13 01:30:23 np0005558241 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 13 01:30:23 np0005558241 systemd: Starting Apply Kernel Variables...
Dec 13 01:30:23 np0005558241 systemd: Starting Create System Users...
Dec 13 01:30:23 np0005558241 systemd: Starting Setup Virtual Console...
Dec 13 01:30:23 np0005558241 systemd: Finished Create List of Static Device Nodes.
Dec 13 01:30:23 np0005558241 systemd: Finished Apply Kernel Variables.
Dec 13 01:30:23 np0005558241 systemd: Finished Create System Users.
Dec 13 01:30:23 np0005558241 systemd-journald[306]: Journal started
Dec 13 01:30:23 np0005558241 systemd-journald[306]: Runtime Journal (/run/log/journal/44548dc0241a47779df7ed38fb199087) is 8.0M, max 153.6M, 145.6M free.
Dec 13 01:30:23 np0005558241 systemd-sysusers[310]: Creating group 'users' with GID 100.
Dec 13 01:30:23 np0005558241 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Dec 13 01:30:23 np0005558241 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 13 01:30:23 np0005558241 systemd: Started Journal Service.
Dec 13 01:30:23 np0005558241 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 13 01:30:23 np0005558241 systemd[1]: Starting Create Volatile Files and Directories...
Dec 13 01:30:23 np0005558241 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 13 01:30:23 np0005558241 systemd[1]: Finished Create Volatile Files and Directories.
Dec 13 01:30:23 np0005558241 systemd[1]: Finished Setup Virtual Console.
Dec 13 01:30:23 np0005558241 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 13 01:30:23 np0005558241 systemd[1]: Starting dracut cmdline hook...
Dec 13 01:30:23 np0005558241 dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Dec 13 01:30:23 np0005558241 dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 13 01:30:23 np0005558241 systemd[1]: Finished dracut cmdline hook.
Dec 13 01:30:23 np0005558241 systemd[1]: Starting dracut pre-udev hook...
Dec 13 01:30:23 np0005558241 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:30:23 np0005558241 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:30:23 np0005558241 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 13 01:30:23 np0005558241 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 01:30:23 np0005558241 kernel: RPC: Registered udp transport module.
Dec 13 01:30:23 np0005558241 kernel: RPC: Registered tcp transport module.
Dec 13 01:30:23 np0005558241 kernel: RPC: Registered tcp-with-tls transport module.
Dec 13 01:30:23 np0005558241 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 01:30:23 np0005558241 rpc.statd[442]: Version 2.5.4 starting
Dec 13 01:30:23 np0005558241 rpc.statd[442]: Initializing NSM state
Dec 13 01:30:23 np0005558241 rpc.idmapd[447]: Setting log level to 0
Dec 13 01:30:23 np0005558241 systemd[1]: Finished dracut pre-udev hook.
Dec 13 01:30:23 np0005558241 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 13 01:30:23 np0005558241 systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Dec 13 01:30:23 np0005558241 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 13 01:30:23 np0005558241 systemd[1]: Starting dracut pre-trigger hook...
Dec 13 01:30:23 np0005558241 systemd[1]: Finished dracut pre-trigger hook.
Dec 13 01:30:23 np0005558241 systemd[1]: Starting Coldplug All udev Devices...
Dec 13 01:30:23 np0005558241 systemd[1]: Created slice Slice /system/modprobe.
Dec 13 01:30:23 np0005558241 systemd[1]: Starting Load Kernel Module configfs...
Dec 13 01:30:23 np0005558241 systemd[1]: Finished Coldplug All udev Devices.
Dec 13 01:30:23 np0005558241 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:30:23 np0005558241 systemd[1]: Finished Load Kernel Module configfs.
Dec 13 01:30:23 np0005558241 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 13 01:30:23 np0005558241 systemd[1]: Reached target Network.
Dec 13 01:30:23 np0005558241 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 13 01:30:23 np0005558241 systemd[1]: Starting dracut initqueue hook...
Dec 13 01:30:23 np0005558241 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 13 01:30:23 np0005558241 kernel: scsi host0: ata_piix
Dec 13 01:30:23 np0005558241 kernel: scsi host1: ata_piix
Dec 13 01:30:23 np0005558241 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 13 01:30:23 np0005558241 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 13 01:30:23 np0005558241 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 13 01:30:23 np0005558241 kernel: vda: vda1
Dec 13 01:30:24 np0005558241 systemd[1]: Mounting Kernel Configuration File System...
Dec 13 01:30:24 np0005558241 kernel: ata1: found unknown device (class 0)
Dec 13 01:30:24 np0005558241 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 01:30:24 np0005558241 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 13 01:30:24 np0005558241 systemd-udevd[497]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:30:24 np0005558241 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 13 01:30:24 np0005558241 systemd[1]: Mounted Kernel Configuration File System.
Dec 13 01:30:24 np0005558241 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 01:30:24 np0005558241 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:30:24 np0005558241 systemd[1]: Found device /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 13 01:30:24 np0005558241 systemd[1]: Reached target Initrd Root Device.
Dec 13 01:30:24 np0005558241 systemd[1]: Reached target System Initialization.
Dec 13 01:30:24 np0005558241 systemd[1]: Reached target Basic System.
Dec 13 01:30:24 np0005558241 systemd[1]: Finished dracut initqueue hook.
Dec 13 01:30:24 np0005558241 systemd[1]: Reached target Preparation for Remote File Systems.
Dec 13 01:30:24 np0005558241 systemd[1]: Reached target Remote Encrypted Volumes.
Dec 13 01:30:24 np0005558241 systemd[1]: Reached target Remote File Systems.
Dec 13 01:30:24 np0005558241 systemd[1]: Starting dracut pre-mount hook...
Dec 13 01:30:24 np0005558241 systemd[1]: Finished dracut pre-mount hook.
Dec 13 01:30:24 np0005558241 systemd[1]: Starting File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266...
Dec 13 01:30:24 np0005558241 systemd-fsck[552]: /usr/sbin/fsck.xfs: XFS file system.
Dec 13 01:30:24 np0005558241 systemd[1]: Finished File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 13 01:30:24 np0005558241 systemd[1]: Mounting /sysroot...
Dec 13 01:30:24 np0005558241 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 13 01:30:24 np0005558241 kernel: XFS (vda1): Mounting V5 Filesystem cbdedf45-ed1d-4952-82a8-33a12c0ba266
Dec 13 01:30:24 np0005558241 kernel: XFS (vda1): Ending clean mount
Dec 13 01:30:25 np0005558241 systemd[1]: Mounted /sysroot.
Dec 13 01:30:25 np0005558241 systemd[1]: Reached target Initrd Root File System.
Dec 13 01:30:25 np0005558241 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 13 01:30:25 np0005558241 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 13 01:30:25 np0005558241 systemd[1]: Reached target Initrd File Systems.
Dec 13 01:30:25 np0005558241 systemd[1]: Reached target Initrd Default Target.
Dec 13 01:30:25 np0005558241 systemd[1]: Starting dracut mount hook...
Dec 13 01:30:25 np0005558241 systemd[1]: Finished dracut mount hook.
Dec 13 01:30:25 np0005558241 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 13 01:30:25 np0005558241 rpc.idmapd[447]: exiting on signal 15
Dec 13 01:30:25 np0005558241 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 13 01:30:25 np0005558241 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped target Network.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped target Timer Units.
Dec 13 01:30:25 np0005558241 systemd[1]: dbus.socket: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 13 01:30:25 np0005558241 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped target Initrd Default Target.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped target Basic System.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped target Initrd Root Device.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped target Initrd /usr File System.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped target Path Units.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped target Remote File Systems.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped target Slice Units.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped target Socket Units.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped target System Initialization.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped target Local File Systems.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped target Swaps.
Dec 13 01:30:25 np0005558241 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped dracut mount hook.
Dec 13 01:30:25 np0005558241 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped dracut pre-mount hook.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped target Local Encrypted Volumes.
Dec 13 01:30:25 np0005558241 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 13 01:30:25 np0005558241 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped dracut initqueue hook.
Dec 13 01:30:25 np0005558241 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped Apply Kernel Variables.
Dec 13 01:30:25 np0005558241 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped Create Volatile Files and Directories.
Dec 13 01:30:25 np0005558241 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped Coldplug All udev Devices.
Dec 13 01:30:25 np0005558241 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped dracut pre-trigger hook.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 13 01:30:25 np0005558241 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped Setup Virtual Console.
Dec 13 01:30:25 np0005558241 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 13 01:30:25 np0005558241 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 13 01:30:25 np0005558241 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Closed udev Control Socket.
Dec 13 01:30:25 np0005558241 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Closed udev Kernel Socket.
Dec 13 01:30:25 np0005558241 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped dracut pre-udev hook.
Dec 13 01:30:25 np0005558241 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped dracut cmdline hook.
Dec 13 01:30:25 np0005558241 systemd[1]: Starting Cleanup udev Database...
Dec 13 01:30:25 np0005558241 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 13 01:30:25 np0005558241 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped Create List of Static Device Nodes.
Dec 13 01:30:25 np0005558241 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Stopped Create System Users.
Dec 13 01:30:25 np0005558241 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:30:25 np0005558241 systemd[1]: Finished Cleanup udev Database.
Dec 13 01:30:25 np0005558241 systemd[1]: Reached target Switch Root.
Dec 13 01:30:25 np0005558241 systemd[1]: Starting Switch Root...
Dec 13 01:30:25 np0005558241 systemd[1]: Switching root.
Dec 13 01:30:25 np0005558241 systemd-journald[306]: Journal stopped
Dec 13 01:30:26 np0005558241 systemd-journald: Received SIGTERM from PID 1 (systemd).
Dec 13 01:30:26 np0005558241 kernel: audit: type=1404 audit(1765607425.457:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 13 01:30:26 np0005558241 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 01:30:26 np0005558241 kernel: SELinux:  policy capability open_perms=1
Dec 13 01:30:26 np0005558241 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 01:30:26 np0005558241 kernel: SELinux:  policy capability always_check_network=0
Dec 13 01:30:26 np0005558241 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 01:30:26 np0005558241 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 01:30:26 np0005558241 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 01:30:26 np0005558241 kernel: audit: type=1403 audit(1765607425.597:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:30:26 np0005558241 systemd: Successfully loaded SELinux policy in 143.343ms.
Dec 13 01:30:26 np0005558241 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 32.310ms.
Dec 13 01:30:26 np0005558241 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 01:30:26 np0005558241 systemd: Detected virtualization kvm.
Dec 13 01:30:26 np0005558241 systemd: Detected architecture x86-64.
Dec 13 01:30:26 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 01:30:26 np0005558241 systemd: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:30:26 np0005558241 systemd: Stopped Switch Root.
Dec 13 01:30:26 np0005558241 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:30:26 np0005558241 systemd: Created slice Slice /system/getty.
Dec 13 01:30:26 np0005558241 systemd: Created slice Slice /system/serial-getty.
Dec 13 01:30:26 np0005558241 systemd: Created slice Slice /system/sshd-keygen.
Dec 13 01:30:26 np0005558241 systemd: Created slice User and Session Slice.
Dec 13 01:30:26 np0005558241 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec 13 01:30:26 np0005558241 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec 13 01:30:26 np0005558241 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:30:26 np0005558241 systemd: Reached target Local Encrypted Volumes.
Dec 13 01:30:26 np0005558241 systemd: Stopped target Switch Root.
Dec 13 01:30:26 np0005558241 systemd: Stopped target Initrd File Systems.
Dec 13 01:30:26 np0005558241 systemd: Stopped target Initrd Root File System.
Dec 13 01:30:26 np0005558241 systemd: Reached target Local Integrity Protected Volumes.
Dec 13 01:30:26 np0005558241 systemd: Reached target Path Units.
Dec 13 01:30:26 np0005558241 systemd: Reached target rpc_pipefs.target.
Dec 13 01:30:26 np0005558241 systemd: Reached target Slice Units.
Dec 13 01:30:26 np0005558241 systemd: Reached target Swaps.
Dec 13 01:30:26 np0005558241 systemd: Reached target Local Verity Protected Volumes.
Dec 13 01:30:26 np0005558241 systemd: Listening on RPCbind Server Activation Socket.
Dec 13 01:30:26 np0005558241 systemd: Reached target RPC Port Mapper.
Dec 13 01:30:26 np0005558241 systemd: Listening on Process Core Dump Socket.
Dec 13 01:30:26 np0005558241 systemd: Listening on initctl Compatibility Named Pipe.
Dec 13 01:30:26 np0005558241 systemd: Listening on udev Control Socket.
Dec 13 01:30:26 np0005558241 systemd: Listening on udev Kernel Socket.
Dec 13 01:30:26 np0005558241 systemd: Mounting Huge Pages File System...
Dec 13 01:30:26 np0005558241 systemd: Mounting POSIX Message Queue File System...
Dec 13 01:30:26 np0005558241 systemd: Mounting Kernel Debug File System...
Dec 13 01:30:26 np0005558241 systemd: Mounting Kernel Trace File System...
Dec 13 01:30:26 np0005558241 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 13 01:30:26 np0005558241 systemd: Starting Create List of Static Device Nodes...
Dec 13 01:30:26 np0005558241 systemd: Starting Load Kernel Module configfs...
Dec 13 01:30:26 np0005558241 systemd: Starting Load Kernel Module drm...
Dec 13 01:30:26 np0005558241 systemd: Starting Load Kernel Module efi_pstore...
Dec 13 01:30:26 np0005558241 systemd: Starting Load Kernel Module fuse...
Dec 13 01:30:26 np0005558241 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 13 01:30:26 np0005558241 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:30:26 np0005558241 systemd: Stopped File System Check on Root Device.
Dec 13 01:30:26 np0005558241 systemd: Stopped Journal Service.
Dec 13 01:30:26 np0005558241 systemd: Starting Journal Service...
Dec 13 01:30:26 np0005558241 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 13 01:30:26 np0005558241 systemd: Starting Generate network units from Kernel command line...
Dec 13 01:30:26 np0005558241 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:30:26 np0005558241 systemd: Starting Remount Root and Kernel File Systems...
Dec 13 01:30:26 np0005558241 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:30:26 np0005558241 systemd: Starting Apply Kernel Variables...
Dec 13 01:30:26 np0005558241 kernel: fuse: init (API version 7.37)
Dec 13 01:30:26 np0005558241 systemd: Starting Coldplug All udev Devices...
Dec 13 01:30:26 np0005558241 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 13 01:30:26 np0005558241 systemd: Mounted Huge Pages File System.
Dec 13 01:30:26 np0005558241 systemd: Mounted POSIX Message Queue File System.
Dec 13 01:30:26 np0005558241 systemd: Mounted Kernel Debug File System.
Dec 13 01:30:26 np0005558241 systemd: Mounted Kernel Trace File System.
Dec 13 01:30:26 np0005558241 systemd-journald[674]: Journal started
Dec 13 01:30:26 np0005558241 systemd-journald[674]: Runtime Journal (/run/log/journal/64f1d6692049d8be5e8b216cc203502c) is 8.0M, max 153.6M, 145.6M free.
Dec 13 01:30:26 np0005558241 systemd[1]: Queued start job for default target Multi-User System.
Dec 13 01:30:26 np0005558241 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:30:26 np0005558241 systemd: Started Journal Service.
Dec 13 01:30:26 np0005558241 systemd[1]: Finished Create List of Static Device Nodes.
Dec 13 01:30:26 np0005558241 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:30:26 np0005558241 systemd[1]: Finished Load Kernel Module configfs.
Dec 13 01:30:26 np0005558241 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:30:26 np0005558241 systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 13 01:30:26 np0005558241 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:30:26 np0005558241 systemd[1]: Finished Load Kernel Module fuse.
Dec 13 01:30:26 np0005558241 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 13 01:30:26 np0005558241 systemd[1]: Finished Generate network units from Kernel command line.
Dec 13 01:30:26 np0005558241 systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 13 01:30:26 np0005558241 systemd[1]: Finished Apply Kernel Variables.
Dec 13 01:30:26 np0005558241 kernel: ACPI: bus type drm_connector registered
Dec 13 01:30:26 np0005558241 systemd[1]: Mounting FUSE Control File System...
Dec 13 01:30:26 np0005558241 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 13 01:30:26 np0005558241 systemd[1]: Starting Rebuild Hardware Database...
Dec 13 01:30:26 np0005558241 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 13 01:30:26 np0005558241 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:30:26 np0005558241 systemd[1]: Starting Load/Save OS Random Seed...
Dec 13 01:30:26 np0005558241 systemd[1]: Starting Create System Users...
Dec 13 01:30:26 np0005558241 systemd-journald[674]: Runtime Journal (/run/log/journal/64f1d6692049d8be5e8b216cc203502c) is 8.0M, max 153.6M, 145.6M free.
Dec 13 01:30:26 np0005558241 systemd-journald[674]: Received client request to flush runtime journal.
Dec 13 01:30:26 np0005558241 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:30:26 np0005558241 systemd[1]: Finished Load Kernel Module drm.
Dec 13 01:30:26 np0005558241 systemd[1]: Mounted FUSE Control File System.
Dec 13 01:30:26 np0005558241 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 13 01:30:26 np0005558241 systemd[1]: Finished Coldplug All udev Devices.
Dec 13 01:30:26 np0005558241 systemd[1]: Finished Create System Users.
Dec 13 01:30:26 np0005558241 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 13 01:30:26 np0005558241 systemd[1]: Finished Load/Save OS Random Seed.
Dec 13 01:30:26 np0005558241 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 13 01:30:26 np0005558241 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 13 01:30:26 np0005558241 systemd[1]: Reached target Preparation for Local File Systems.
Dec 13 01:30:26 np0005558241 systemd[1]: Reached target Local File Systems.
Dec 13 01:30:26 np0005558241 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 13 01:30:26 np0005558241 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 13 01:30:26 np0005558241 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:30:26 np0005558241 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 13 01:30:26 np0005558241 systemd[1]: Starting Automatic Boot Loader Update...
Dec 13 01:30:26 np0005558241 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 13 01:30:26 np0005558241 systemd[1]: Starting Create Volatile Files and Directories...
Dec 13 01:30:26 np0005558241 bootctl[690]: Couldn't find EFI system partition, skipping.
Dec 13 01:30:26 np0005558241 systemd[1]: Finished Automatic Boot Loader Update.
Dec 13 01:30:26 np0005558241 systemd[1]: Finished Create Volatile Files and Directories.
Dec 13 01:30:26 np0005558241 systemd[1]: Starting Security Auditing Service...
Dec 13 01:30:26 np0005558241 systemd[1]: Starting RPC Bind...
Dec 13 01:30:26 np0005558241 systemd[1]: Starting Rebuild Journal Catalog...
Dec 13 01:30:26 np0005558241 auditd[696]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 13 01:30:26 np0005558241 auditd[696]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 13 01:30:26 np0005558241 systemd[1]: Started RPC Bind.
Dec 13 01:30:27 np0005558241 augenrules[701]: /sbin/augenrules: No change
Dec 13 01:30:27 np0005558241 systemd[1]: Finished Rebuild Journal Catalog.
Dec 13 01:30:27 np0005558241 augenrules[716]: No rules
Dec 13 01:30:27 np0005558241 augenrules[716]: enabled 1
Dec 13 01:30:27 np0005558241 augenrules[716]: failure 1
Dec 13 01:30:27 np0005558241 augenrules[716]: pid 696
Dec 13 01:30:27 np0005558241 augenrules[716]: rate_limit 0
Dec 13 01:30:27 np0005558241 augenrules[716]: backlog_limit 8192
Dec 13 01:30:27 np0005558241 augenrules[716]: lost 0
Dec 13 01:30:27 np0005558241 augenrules[716]: backlog 0
Dec 13 01:30:27 np0005558241 augenrules[716]: backlog_wait_time 60000
Dec 13 01:30:27 np0005558241 augenrules[716]: backlog_wait_time_actual 0
Dec 13 01:30:27 np0005558241 augenrules[716]: enabled 1
Dec 13 01:30:27 np0005558241 augenrules[716]: failure 1
Dec 13 01:30:27 np0005558241 augenrules[716]: pid 696
Dec 13 01:30:27 np0005558241 augenrules[716]: rate_limit 0
Dec 13 01:30:27 np0005558241 augenrules[716]: backlog_limit 8192
Dec 13 01:30:27 np0005558241 augenrules[716]: lost 0
Dec 13 01:30:27 np0005558241 augenrules[716]: backlog 3
Dec 13 01:30:27 np0005558241 augenrules[716]: backlog_wait_time 60000
Dec 13 01:30:27 np0005558241 augenrules[716]: backlog_wait_time_actual 0
Dec 13 01:30:27 np0005558241 augenrules[716]: enabled 1
Dec 13 01:30:27 np0005558241 augenrules[716]: failure 1
Dec 13 01:30:27 np0005558241 augenrules[716]: pid 696
Dec 13 01:30:27 np0005558241 augenrules[716]: rate_limit 0
Dec 13 01:30:27 np0005558241 augenrules[716]: backlog_limit 8192
Dec 13 01:30:27 np0005558241 augenrules[716]: lost 0
Dec 13 01:30:27 np0005558241 augenrules[716]: backlog 0
Dec 13 01:30:27 np0005558241 augenrules[716]: backlog_wait_time 60000
Dec 13 01:30:27 np0005558241 augenrules[716]: backlog_wait_time_actual 0
Dec 13 01:30:27 np0005558241 systemd[1]: Started Security Auditing Service.
Dec 13 01:30:27 np0005558241 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 13 01:30:27 np0005558241 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 13 01:30:27 np0005558241 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 13 01:30:28 np0005558241 systemd[1]: Finished Rebuild Hardware Database.
Dec 13 01:30:28 np0005558241 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 13 01:30:28 np0005558241 systemd[1]: Starting Update is Completed...
Dec 13 01:30:28 np0005558241 systemd[1]: Finished Update is Completed.
Dec 13 01:30:28 np0005558241 systemd-udevd[724]: Using default interface naming scheme 'rhel-9.0'.
Dec 13 01:30:28 np0005558241 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 13 01:30:28 np0005558241 systemd[1]: Reached target System Initialization.
Dec 13 01:30:28 np0005558241 systemd[1]: Started dnf makecache --timer.
Dec 13 01:30:28 np0005558241 systemd[1]: Started Daily rotation of log files.
Dec 13 01:30:28 np0005558241 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 13 01:30:28 np0005558241 systemd[1]: Reached target Timer Units.
Dec 13 01:30:28 np0005558241 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 13 01:30:28 np0005558241 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 13 01:30:28 np0005558241 systemd[1]: Reached target Socket Units.
Dec 13 01:30:28 np0005558241 systemd[1]: Starting D-Bus System Message Bus...
Dec 13 01:30:28 np0005558241 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:30:28 np0005558241 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 13 01:30:28 np0005558241 systemd[1]: Starting Load Kernel Module configfs...
Dec 13 01:30:28 np0005558241 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:30:28 np0005558241 systemd[1]: Finished Load Kernel Module configfs.
Dec 13 01:30:28 np0005558241 systemd-udevd[740]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:30:28 np0005558241 systemd[1]: Started D-Bus System Message Bus.
Dec 13 01:30:28 np0005558241 systemd[1]: Reached target Basic System.
Dec 13 01:30:28 np0005558241 dbus-broker-lau[743]: Ready
Dec 13 01:30:28 np0005558241 systemd[1]: Starting NTP client/server...
Dec 13 01:30:28 np0005558241 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 13 01:30:28 np0005558241 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 13 01:30:28 np0005558241 systemd[1]: Starting IPv4 firewall with iptables...
Dec 13 01:30:28 np0005558241 systemd[1]: Started irqbalance daemon.
Dec 13 01:30:28 np0005558241 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 13 01:30:28 np0005558241 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 01:30:28 np0005558241 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 01:30:28 np0005558241 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 01:30:28 np0005558241 systemd[1]: Reached target sshd-keygen.target.
Dec 13 01:30:28 np0005558241 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 13 01:30:28 np0005558241 systemd[1]: Reached target User and Group Name Lookups.
Dec 13 01:30:28 np0005558241 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 13 01:30:28 np0005558241 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 13 01:30:28 np0005558241 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 01:30:28 np0005558241 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 01:30:28 np0005558241 chronyd[785]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 13 01:30:28 np0005558241 chronyd[785]: Loaded 0 symmetric keys
Dec 13 01:30:28 np0005558241 chronyd[785]: Using right/UTC timezone to obtain leap second data
Dec 13 01:30:28 np0005558241 chronyd[785]: Loaded seccomp filter (level 2)
Dec 13 01:30:28 np0005558241 systemd[1]: Starting User Login Management...
Dec 13 01:30:28 np0005558241 systemd[1]: Started NTP client/server.
Dec 13 01:30:28 np0005558241 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 13 01:30:28 np0005558241 systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 01:30:28 np0005558241 systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 13 01:30:28 np0005558241 systemd-logind[787]: New seat seat0.
Dec 13 01:30:28 np0005558241 systemd[1]: Started User Login Management.
Dec 13 01:30:28 np0005558241 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 13 01:30:28 np0005558241 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 13 01:30:28 np0005558241 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 13 01:30:28 np0005558241 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 13 01:30:28 np0005558241 kernel: Console: switching to colour dummy device 80x25
Dec 13 01:30:28 np0005558241 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 13 01:30:28 np0005558241 kernel: [drm] features: -context_init
Dec 13 01:30:28 np0005558241 kernel: [drm] number of scanouts: 1
Dec 13 01:30:28 np0005558241 kernel: [drm] number of cap sets: 0
Dec 13 01:30:28 np0005558241 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 13 01:30:28 np0005558241 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 13 01:30:28 np0005558241 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 01:30:28 np0005558241 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 13 01:30:28 np0005558241 kernel: kvm_amd: TSC scaling supported
Dec 13 01:30:28 np0005558241 kernel: kvm_amd: Nested Virtualization enabled
Dec 13 01:30:28 np0005558241 kernel: kvm_amd: Nested Paging enabled
Dec 13 01:30:28 np0005558241 kernel: kvm_amd: LBR virtualization supported
Dec 13 01:30:29 np0005558241 iptables.init[777]: iptables: Applying firewall rules: [  OK  ]
Dec 13 01:30:29 np0005558241 systemd[1]: Finished IPv4 firewall with iptables.
Dec 13 01:30:29 np0005558241 cloud-init[835]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 13 Dec 2025 06:30:29 +0000. Up 7.77 seconds.
Dec 13 01:30:29 np0005558241 systemd[1]: run-cloud\x2dinit-tmp-tmpueg3rt9z.mount: Deactivated successfully.
Dec 13 01:30:29 np0005558241 systemd[1]: Starting Hostname Service...
Dec 13 01:30:29 np0005558241 systemd[1]: Started Hostname Service.
Dec 13 01:30:29 np0005558241 systemd-hostnamed[849]: Hostname set to <np0005558241.novalocal> (static)
Dec 13 01:30:29 np0005558241 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 13 01:30:29 np0005558241 systemd[1]: Reached target Preparation for Network.
Dec 13 01:30:29 np0005558241 systemd[1]: Starting Network Manager...
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6108] NetworkManager (version 1.54.2-1.el9) is starting... (boot:9546f4fe-3402-44c6-8e6a-c5fcacf6ef13)
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6112] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6177] manager[0x560f72945000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6204] hostname: hostname: using hostnamed
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6205] hostname: static hostname changed from (none) to "np0005558241.novalocal"
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6209] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6474] manager[0x560f72945000]: rfkill: Wi-Fi hardware radio set enabled
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6474] manager[0x560f72945000]: rfkill: WWAN hardware radio set enabled
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6510] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6512] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6512] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6513] manager: Networking is enabled by state file
Dec 13 01:30:29 np0005558241 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6517] settings: Loaded settings plugin: keyfile (internal)
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6528] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6544] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6553] dhcp: init: Using DHCP client 'internal'
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6555] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6566] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6572] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6579] device (lo): Activation: starting connection 'lo' (85760078-2362-402c-a744-99103f51e310)
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6585] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6587] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6618] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6620] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6622] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6623] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6624] device (eth0): carrier: link connected
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6626] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6630] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6634] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6637] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6638] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6639] manager: NetworkManager state is now CONNECTING
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6640] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6644] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6647] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6704] dhcp4 (eth0): state changed new lease, address=38.129.56.138
Dec 13 01:30:29 np0005558241 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6718] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 13 01:30:29 np0005558241 systemd[1]: Started Network Manager.
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6740] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 01:30:29 np0005558241 systemd[1]: Reached target Network.
Dec 13 01:30:29 np0005558241 systemd[1]: Starting Network Manager Wait Online...
Dec 13 01:30:29 np0005558241 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 13 01:30:29 np0005558241 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6975] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6978] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6979] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 13 01:30:29 np0005558241 systemd[1]: Started GSSAPI Proxy Daemon.
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6985] device (lo): Activation: successful, device activated.
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6990] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6993] manager: NetworkManager state is now CONNECTED_SITE
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.6995] device (eth0): Activation: successful, device activated.
Dec 13 01:30:29 np0005558241 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 13 01:30:29 np0005558241 systemd[1]: Reached target NFS client services.
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.7000] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 13 01:30:29 np0005558241 NetworkManager[853]: <info>  [1765607429.7002] manager: startup complete
Dec 13 01:30:29 np0005558241 systemd[1]: Reached target Preparation for Remote File Systems.
Dec 13 01:30:29 np0005558241 systemd[1]: Reached target Remote File Systems.
Dec 13 01:30:29 np0005558241 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 01:30:29 np0005558241 systemd[1]: Finished Network Manager Wait Online.
Dec 13 01:30:29 np0005558241 systemd[1]: Starting Cloud-init: Network Stage...
Dec 13 01:30:30 np0005558241 cloud-init[915]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 13 Dec 2025 06:30:30 +0000. Up 8.75 seconds.
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: |  eth0  | True |        38.129.56.138         | 255.255.255.0 | global | fa:16:3e:fd:e2:7e |
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: |  eth0  | True | fe80::f816:3eff:fefd:e27e/64 |       .       |  link  | fa:16:3e:fd:e2:7e |
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec 13 01:30:30 np0005558241 cloud-init[915]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 13 01:30:36 np0005558241 chronyd[785]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Dec 13 01:30:36 np0005558241 chronyd[785]: System clock wrong by 1.415344 seconds
Dec 13 01:30:36 np0005558241 chronyd[785]: System clock was stepped by 1.415344 seconds
Dec 13 01:30:36 np0005558241 chronyd[785]: System clock TAI offset set to 37 seconds
Dec 13 01:30:40 np0005558241 irqbalance[778]: Cannot change IRQ 35 affinity: Operation not permitted
Dec 13 01:30:40 np0005558241 irqbalance[778]: IRQ 35 affinity is now unmanaged
Dec 13 01:30:40 np0005558241 irqbalance[778]: Cannot change IRQ 33 affinity: Operation not permitted
Dec 13 01:30:40 np0005558241 irqbalance[778]: IRQ 33 affinity is now unmanaged
Dec 13 01:30:40 np0005558241 irqbalance[778]: Cannot change IRQ 31 affinity: Operation not permitted
Dec 13 01:30:40 np0005558241 irqbalance[778]: IRQ 31 affinity is now unmanaged
Dec 13 01:30:40 np0005558241 irqbalance[778]: Cannot change IRQ 26 affinity: Operation not permitted
Dec 13 01:30:40 np0005558241 irqbalance[778]: IRQ 26 affinity is now unmanaged
Dec 13 01:30:40 np0005558241 irqbalance[778]: Cannot change IRQ 34 affinity: Operation not permitted
Dec 13 01:30:40 np0005558241 irqbalance[778]: IRQ 34 affinity is now unmanaged
Dec 13 01:30:40 np0005558241 irqbalance[778]: Cannot change IRQ 32 affinity: Operation not permitted
Dec 13 01:30:40 np0005558241 irqbalance[778]: IRQ 32 affinity is now unmanaged
Dec 13 01:30:41 np0005558241 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 01:30:46 np0005558241 cloud-init[915]: Generating public/private rsa key pair.
Dec 13 01:30:46 np0005558241 cloud-init[915]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 13 01:30:46 np0005558241 cloud-init[915]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 13 01:30:46 np0005558241 cloud-init[915]: The key fingerprint is:
Dec 13 01:30:46 np0005558241 cloud-init[915]: SHA256:8Svb2dBdNvJtQegvTnJ70ZkMN8aQa9ababWE6o4SVks root@np0005558241.novalocal
Dec 13 01:30:46 np0005558241 cloud-init[915]: The key's randomart image is:
Dec 13 01:30:46 np0005558241 cloud-init[915]: +---[RSA 3072]----+
Dec 13 01:30:46 np0005558241 cloud-init[915]: |              .  |
Dec 13 01:30:46 np0005558241 cloud-init[915]: |             o.  |
Dec 13 01:30:46 np0005558241 cloud-init[915]: |        .    .*. |
Dec 13 01:30:46 np0005558241 cloud-init[915]: |         E  .*.B.|
Dec 13 01:30:46 np0005558241 cloud-init[915]: |        S o +o*o&|
Dec 13 01:30:46 np0005558241 cloud-init[915]: |       o . + .+&=|
Dec 13 01:30:46 np0005558241 cloud-init[915]: |      . o +..+oo+|
Dec 13 01:30:46 np0005558241 cloud-init[915]: |       . +.== oo |
Dec 13 01:30:46 np0005558241 cloud-init[915]: |        o.+..o.  |
Dec 13 01:30:46 np0005558241 cloud-init[915]: +----[SHA256]-----+
Dec 13 01:30:46 np0005558241 cloud-init[915]: Generating public/private ecdsa key pair.
Dec 13 01:30:46 np0005558241 cloud-init[915]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 13 01:30:46 np0005558241 cloud-init[915]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 13 01:30:46 np0005558241 cloud-init[915]: The key fingerprint is:
Dec 13 01:30:46 np0005558241 cloud-init[915]: SHA256:Fbtvff9U5YvlATCWrMdBhznUAjbOO0a9Hwol2T1QWmM root@np0005558241.novalocal
Dec 13 01:30:46 np0005558241 cloud-init[915]: The key's randomart image is:
Dec 13 01:30:46 np0005558241 cloud-init[915]: +---[ECDSA 256]---+
Dec 13 01:30:46 np0005558241 cloud-init[915]: |         +*B*E   |
Dec 13 01:30:46 np0005558241 cloud-init[915]: |        + *@X..  |
Dec 13 01:30:46 np0005558241 cloud-init[915]: |         *==++  .|
Dec 13 01:30:46 np0005558241 cloud-init[915]: |        .o++. o..|
Dec 13 01:30:46 np0005558241 cloud-init[915]: |        S=o. . oo|
Dec 13 01:30:46 np0005558241 cloud-init[915]: |        . o.o.= +|
Dec 13 01:30:46 np0005558241 cloud-init[915]: |           .oo.oo|
Dec 13 01:30:46 np0005558241 cloud-init[915]: |           .   o.|
Dec 13 01:30:46 np0005558241 cloud-init[915]: |                +|
Dec 13 01:30:46 np0005558241 cloud-init[915]: +----[SHA256]-----+
Dec 13 01:30:46 np0005558241 cloud-init[915]: Generating public/private ed25519 key pair.
Dec 13 01:30:46 np0005558241 cloud-init[915]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 13 01:30:46 np0005558241 cloud-init[915]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 13 01:30:46 np0005558241 cloud-init[915]: The key fingerprint is:
Dec 13 01:30:46 np0005558241 cloud-init[915]: SHA256:55tqlF1h4ewKmsFgLswyPliSxC/PY5Pwux4xaHvMVGw root@np0005558241.novalocal
Dec 13 01:30:46 np0005558241 cloud-init[915]: The key's randomart image is:
Dec 13 01:30:46 np0005558241 cloud-init[915]: +--[ED25519 256]--+
Dec 13 01:30:46 np0005558241 cloud-init[915]: |            ..   |
Dec 13 01:30:46 np0005558241 cloud-init[915]: |.   .      oo    |
Dec 13 01:30:46 np0005558241 cloud-init[915]: | o  oE     .o.   |
Dec 13 01:30:46 np0005558241 cloud-init[915]: |.+oooo     ..    |
Dec 13 01:30:46 np0005558241 cloud-init[915]: |====. o So...    |
Dec 13 01:30:46 np0005558241 cloud-init[915]: |+=@.+  +o+..     |
Dec 13 01:30:46 np0005558241 cloud-init[915]: |.+ &  o.  o      |
Dec 13 01:30:46 np0005558241 cloud-init[915]: |  + =   .  o     |
Dec 13 01:30:46 np0005558241 cloud-init[915]: |  .+.  ...o      |
Dec 13 01:30:46 np0005558241 cloud-init[915]: +----[SHA256]-----+
Dec 13 01:30:46 np0005558241 systemd[1]: Finished Cloud-init: Network Stage.
Dec 13 01:30:46 np0005558241 systemd[1]: Reached target Cloud-config availability.
Dec 13 01:30:46 np0005558241 systemd[1]: Reached target Network is Online.
Dec 13 01:30:46 np0005558241 systemd[1]: Starting Cloud-init: Config Stage...
Dec 13 01:30:46 np0005558241 systemd[1]: Starting Crash recovery kernel arming...
Dec 13 01:30:46 np0005558241 systemd[1]: Starting Notify NFS peers of a restart...
Dec 13 01:30:46 np0005558241 systemd[1]: Starting System Logging Service...
Dec 13 01:30:46 np0005558241 sm-notify[1001]: Version 2.5.4 starting
Dec 13 01:30:46 np0005558241 systemd[1]: Starting OpenSSH server daemon...
Dec 13 01:30:46 np0005558241 systemd[1]: Starting Permit User Sessions...
Dec 13 01:30:46 np0005558241 systemd[1]: Started Notify NFS peers of a restart.
Dec 13 01:30:46 np0005558241 systemd[1]: Finished Permit User Sessions.
Dec 13 01:30:46 np0005558241 systemd[1]: Started Command Scheduler.
Dec 13 01:30:46 np0005558241 systemd[1]: Started Getty on tty1.
Dec 13 01:30:46 np0005558241 systemd[1]: Started Serial Getty on ttyS0.
Dec 13 01:30:46 np0005558241 systemd[1]: Reached target Login Prompts.
Dec 13 01:30:46 np0005558241 systemd[1]: Started OpenSSH server daemon.
Dec 13 01:30:46 np0005558241 rsyslogd[1002]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1002" x-info="https://www.rsyslog.com"] start
Dec 13 01:30:46 np0005558241 systemd[1]: Started System Logging Service.
Dec 13 01:30:46 np0005558241 rsyslogd[1002]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 13 01:30:46 np0005558241 systemd[1]: Reached target Multi-User System.
Dec 13 01:30:46 np0005558241 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 13 01:30:46 np0005558241 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 01:30:46 np0005558241 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 13 01:30:46 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 01:30:46 np0005558241 kdumpctl[1014]: kdump: No kdump initial ramdisk found.
Dec 13 01:30:46 np0005558241 kdumpctl[1014]: kdump: Rebuilding /boot/initramfs-5.14.0-648.el9.x86_64kdump.img
Dec 13 01:30:46 np0005558241 cloud-init[1087]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 13 Dec 2025 06:30:46 +0000. Up 23.90 seconds.
Dec 13 01:30:46 np0005558241 systemd[1]: Finished Cloud-init: Config Stage.
Dec 13 01:30:46 np0005558241 systemd[1]: Starting Cloud-init: Final Stage...
Dec 13 01:30:47 np0005558241 cloud-init[1096]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 13 Dec 2025 06:30:47 +0000. Up 24.45 seconds.
Dec 13 01:30:47 np0005558241 cloud-init[1146]: #############################################################
Dec 13 01:30:47 np0005558241 cloud-init[1150]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 13 01:30:48 np0005558241 cloud-init[1157]: 256 SHA256:Fbtvff9U5YvlATCWrMdBhznUAjbOO0a9Hwol2T1QWmM root@np0005558241.novalocal (ECDSA)
Dec 13 01:30:48 np0005558241 cloud-init[1159]: 256 SHA256:55tqlF1h4ewKmsFgLswyPliSxC/PY5Pwux4xaHvMVGw root@np0005558241.novalocal (ED25519)
Dec 13 01:30:48 np0005558241 cloud-init[1161]: 3072 SHA256:8Svb2dBdNvJtQegvTnJ70ZkMN8aQa9ababWE6o4SVks root@np0005558241.novalocal (RSA)
Dec 13 01:30:48 np0005558241 cloud-init[1162]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 13 01:30:48 np0005558241 cloud-init[1168]: #############################################################
Dec 13 01:30:48 np0005558241 cloud-init[1096]: Cloud-init v. 24.4-7.el9 finished at Sat, 13 Dec 2025 06:30:48 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 25.34 seconds
Dec 13 01:30:48 np0005558241 systemd[1]: Finished Cloud-init: Final Stage.
Dec 13 01:30:48 np0005558241 systemd[1]: Reached target Cloud-init target.
Dec 13 01:30:48 np0005558241 dracut[1298]: dracut-057-102.git20250818.el9
Dec 13 01:30:48 np0005558241 dracut[1300]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-648.el9.x86_64kdump.img 5.14.0-648.el9.x86_64
Dec 13 01:30:48 np0005558241 dracut[1300]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 13 01:30:48 np0005558241 dracut[1300]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 13 01:30:48 np0005558241 dracut[1300]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 13 01:30:48 np0005558241 dracut[1300]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 13 01:30:48 np0005558241 dracut[1300]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 13 01:30:48 np0005558241 dracut[1300]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 13 01:30:48 np0005558241 dracut[1300]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 13 01:30:48 np0005558241 dracut[1300]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 13 01:30:48 np0005558241 dracut[1300]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 13 01:30:48 np0005558241 dracut[1300]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 13 01:30:48 np0005558241 dracut[1300]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 13 01:30:48 np0005558241 dracut[1300]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 13 01:30:48 np0005558241 dracut[1300]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 13 01:30:48 np0005558241 dracut[1300]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 13 01:30:48 np0005558241 dracut[1300]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: memstrack is not available
Dec 13 01:30:49 np0005558241 dracut[1300]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 13 01:30:49 np0005558241 dracut[1300]: memstrack is not available
Dec 13 01:30:49 np0005558241 dracut[1300]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 13 01:30:49 np0005558241 dracut[1300]: *** Including module: systemd ***
Dec 13 01:30:50 np0005558241 dracut[1300]: *** Including module: fips ***
Dec 13 01:30:50 np0005558241 dracut[1300]: *** Including module: systemd-initrd ***
Dec 13 01:30:50 np0005558241 dracut[1300]: *** Including module: i18n ***
Dec 13 01:30:50 np0005558241 dracut[1300]: *** Including module: drm ***
Dec 13 01:30:51 np0005558241 dracut[1300]: *** Including module: prefixdevname ***
Dec 13 01:30:51 np0005558241 dracut[1300]: *** Including module: kernel-modules ***
Dec 13 01:30:51 np0005558241 kernel: block vda: the capability attribute has been deprecated.
Dec 13 01:30:51 np0005558241 dracut[1300]: *** Including module: kernel-modules-extra ***
Dec 13 01:30:51 np0005558241 dracut[1300]: *** Including module: qemu ***
Dec 13 01:30:51 np0005558241 dracut[1300]: *** Including module: fstab-sys ***
Dec 13 01:30:51 np0005558241 dracut[1300]: *** Including module: rootfs-block ***
Dec 13 01:30:51 np0005558241 dracut[1300]: *** Including module: terminfo ***
Dec 13 01:30:51 np0005558241 dracut[1300]: *** Including module: udev-rules ***
Dec 13 01:30:52 np0005558241 dracut[1300]: Skipping udev rule: 91-permissions.rules
Dec 13 01:30:52 np0005558241 dracut[1300]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 13 01:30:52 np0005558241 dracut[1300]: *** Including module: virtiofs ***
Dec 13 01:30:52 np0005558241 dracut[1300]: *** Including module: dracut-systemd ***
Dec 13 01:30:52 np0005558241 dracut[1300]: *** Including module: usrmount ***
Dec 13 01:30:52 np0005558241 dracut[1300]: *** Including module: base ***
Dec 13 01:30:52 np0005558241 dracut[1300]: *** Including module: fs-lib ***
Dec 13 01:30:52 np0005558241 dracut[1300]: *** Including module: kdumpbase ***
Dec 13 01:30:53 np0005558241 dracut[1300]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 13 01:30:53 np0005558241 dracut[1300]:  microcode_ctl module: mangling fw_dir
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: configuration "intel" is ignored
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 13 01:30:53 np0005558241 dracut[1300]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec 13 01:30:53 np0005558241 dracut[1300]: *** Including module: openssl ***
Dec 13 01:30:53 np0005558241 dracut[1300]: *** Including module: shutdown ***
Dec 13 01:30:53 np0005558241 dracut[1300]: *** Including module: squash ***
Dec 13 01:30:53 np0005558241 dracut[1300]: *** Including modules done ***
Dec 13 01:30:53 np0005558241 dracut[1300]: *** Installing kernel module dependencies ***
Dec 13 01:30:54 np0005558241 dracut[1300]: *** Installing kernel module dependencies done ***
Dec 13 01:30:54 np0005558241 dracut[1300]: *** Resolving executable dependencies ***
Dec 13 01:30:55 np0005558241 dracut[1300]: *** Resolving executable dependencies done ***
Dec 13 01:30:55 np0005558241 dracut[1300]: *** Generating early-microcode cpio image ***
Dec 13 01:30:55 np0005558241 dracut[1300]: *** Store current command line parameters ***
Dec 13 01:30:55 np0005558241 dracut[1300]: Stored kernel commandline:
Dec 13 01:30:55 np0005558241 dracut[1300]: No dracut internal kernel commandline stored in the initramfs
Dec 13 01:30:56 np0005558241 dracut[1300]: *** Install squash loader ***
Dec 13 01:30:57 np0005558241 dracut[1300]: *** Squashing the files inside the initramfs ***
Dec 13 01:30:58 np0005558241 dracut[1300]: *** Squashing the files inside the initramfs done ***
Dec 13 01:30:58 np0005558241 dracut[1300]: *** Creating image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' ***
Dec 13 01:30:58 np0005558241 dracut[1300]: *** Hardlinking files ***
Dec 13 01:30:58 np0005558241 dracut[1300]: *** Hardlinking files done ***
Dec 13 01:30:59 np0005558241 dracut[1300]: *** Creating initramfs image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' done ***
Dec 13 01:31:01 np0005558241 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 01:31:01 np0005558241 kdumpctl[1014]: kdump: kexec: loaded kdump kernel
Dec 13 01:31:01 np0005558241 kdumpctl[1014]: kdump: Starting kdump: [OK]
Dec 13 01:31:01 np0005558241 systemd[1]: Finished Crash recovery kernel arming.
Dec 13 01:31:01 np0005558241 systemd[1]: Startup finished in 1.541s (kernel) + 2.605s (initrd) + 34.493s (userspace) = 38.639s.
Dec 13 01:44:25 np0005558241 systemd[1]: Created slice User Slice of UID 1000.
Dec 13 01:44:25 np0005558241 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 13 01:44:25 np0005558241 systemd-logind[787]: New session 1 of user zuul.
Dec 13 01:44:25 np0005558241 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 13 01:44:25 np0005558241 systemd[1]: Starting User Manager for UID 1000...
Dec 13 01:44:25 np0005558241 systemd[4305]: Queued start job for default target Main User Target.
Dec 13 01:44:25 np0005558241 systemd[4305]: Created slice User Application Slice.
Dec 13 01:44:25 np0005558241 systemd[4305]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 13 01:44:25 np0005558241 systemd[4305]: Started Daily Cleanup of User's Temporary Directories.
Dec 13 01:44:25 np0005558241 systemd[4305]: Reached target Paths.
Dec 13 01:44:25 np0005558241 systemd[4305]: Reached target Timers.
Dec 13 01:44:25 np0005558241 systemd[4305]: Starting D-Bus User Message Bus Socket...
Dec 13 01:44:25 np0005558241 systemd[4305]: Starting Create User's Volatile Files and Directories...
Dec 13 01:44:25 np0005558241 systemd[4305]: Listening on D-Bus User Message Bus Socket.
Dec 13 01:44:25 np0005558241 systemd[4305]: Reached target Sockets.
Dec 13 01:44:25 np0005558241 systemd[4305]: Finished Create User's Volatile Files and Directories.
Dec 13 01:44:25 np0005558241 systemd[4305]: Reached target Basic System.
Dec 13 01:44:25 np0005558241 systemd[4305]: Reached target Main User Target.
Dec 13 01:44:25 np0005558241 systemd[4305]: Startup finished in 116ms.
Dec 13 01:44:25 np0005558241 systemd[1]: Started User Manager for UID 1000.
Dec 13 01:44:25 np0005558241 systemd[1]: Started Session 1 of User zuul.
Dec 13 01:44:25 np0005558241 python3[4390]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 01:44:29 np0005558241 python3[4418]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 01:44:35 np0005558241 python3[4476]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 01:44:35 np0005558241 python3[4516]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 13 01:44:38 np0005558241 python3[4542]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQ2YWijln/nMMoMrHrAaa4VKdfSA1p71SPqvbB4YM2TOmLASukGPcLV+cGnWqyEHDA/QMVnFSHcqBhk6xRb5Q3VMMrdPoojqSEvEzTg6je6EC+lLQltFPyH2bxoku1jaa3R2D4XpPoBC5/aHaKFod/KGnUeQ+Fs1SEvY8euMoGNp1UsuBx4RGOD6ncvq3scikW4JNPmdR+w7AHGGWpJY7P0jnGnVBMW1g8hQXj5cFab8wDhUROP6EZIWYpn00+fTfyETGmBYFIVQZcteXHQ0GP1baJcCS3H860kp/Bt5JDghZci9+Ewc0zYMoS+lmBqufZtmMU+QJnep+XZYMrdBsmsSfwCU7nHMIufkXdj4mfMOSFQ8mmw6R07RBWzH35xnrTUMq9TkSemY67xCKn2y8BceSSUyXBvjgXLdzIEIMLJWH+swJ2k8EB58d9TMWTC779/VttCggJ6YZFPHNQBhZ2gdEm3l8P8drR5cRwovprhf09row+4ch5jKzYzsl4tRU= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:38 np0005558241 python3[4566]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:44:38 np0005558241 python3[4665]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:44:39 np0005558241 python3[4736]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765608278.6647565-207-259119203660458/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=3ddab2ca0c4c47368b6566fc4bc769b0_id_rsa follow=False checksum=0205e03b52619f8753d85e123f302746184e134e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:44:39 np0005558241 python3[4859]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:44:40 np0005558241 python3[4930]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765608279.5753767-240-211832905038595/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=3ddab2ca0c4c47368b6566fc4bc769b0_id_rsa.pub follow=False checksum=27e903b403036ab6ae51b9ff242344e9fbb7b57c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:44:41 np0005558241 python3[4978]: ansible-ping Invoked with data=pong
Dec 13 01:44:42 np0005558241 python3[5002]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 01:44:44 np0005558241 python3[5060]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 13 01:44:45 np0005558241 python3[5092]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:44:45 np0005558241 python3[5116]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:44:45 np0005558241 python3[5140]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:44:46 np0005558241 python3[5164]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:44:46 np0005558241 python3[5188]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:44:46 np0005558241 python3[5212]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:44:48 np0005558241 python3[5238]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:44:49 np0005558241 python3[5316]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:44:49 np0005558241 python3[5389]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765608288.7741184-21-179195911872881/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:44:50 np0005558241 python3[5437]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:50 np0005558241 python3[5461]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:50 np0005558241 python3[5485]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:51 np0005558241 python3[5509]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:51 np0005558241 python3[5533]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:51 np0005558241 python3[5557]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:52 np0005558241 python3[5581]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:52 np0005558241 python3[5605]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:52 np0005558241 python3[5629]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:53 np0005558241 python3[5653]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:53 np0005558241 python3[5677]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:53 np0005558241 python3[5701]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:53 np0005558241 python3[5725]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:54 np0005558241 python3[5749]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:54 np0005558241 python3[5773]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:54 np0005558241 python3[5797]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:55 np0005558241 python3[5821]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:55 np0005558241 python3[5845]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:55 np0005558241 python3[5869]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:55 np0005558241 python3[5893]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:56 np0005558241 python3[5917]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:56 np0005558241 python3[5941]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:56 np0005558241 python3[5965]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:57 np0005558241 python3[5989]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:57 np0005558241 python3[6013]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:44:57 np0005558241 python3[6037]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:45:00 np0005558241 python3[6063]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 13 01:45:00 np0005558241 systemd[1]: Starting Time & Date Service...
Dec 13 01:45:00 np0005558241 systemd[1]: Started Time & Date Service.
Dec 13 01:45:00 np0005558241 systemd-timedated[6065]: Changed time zone to 'UTC' (UTC).
Dec 13 01:45:00 np0005558241 python3[6094]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:45:01 np0005558241 python3[6170]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:45:01 np0005558241 python3[6241]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765608300.88024-153-203971295638988/source _original_basename=tmpomdp1xap follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:45:01 np0005558241 python3[6341]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:45:02 np0005558241 python3[6412]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765608301.6936607-183-127123239745958/source _original_basename=tmpqeygymwf follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:45:02 np0005558241 python3[6514]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:45:03 np0005558241 python3[6587]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765608302.7181718-231-202179962561614/source _original_basename=tmpc_xbvovb follow=False checksum=2dafab32635816f7211f2969a3efc28708d9e05d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:45:03 np0005558241 python3[6635]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:45:04 np0005558241 python3[6661]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:45:04 np0005558241 python3[6741]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:45:05 np0005558241 python3[6814]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765608304.3580716-273-197731805853126/source _original_basename=tmp4_ks5_pv follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:45:05 np0005558241 python3[6865]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-20e8-d5c6-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:45:06 np0005558241 python3[6893]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-20e8-d5c6-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 13 01:45:07 np0005558241 python3[6921]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:45:30 np0005558241 systemd[1]: Starting Cleanup of Temporary Directories...
Dec 13 01:45:30 np0005558241 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 13 01:45:30 np0005558241 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 13 01:45:30 np0005558241 systemd[1]: Finished Cleanup of Temporary Directories.
Dec 13 01:45:30 np0005558241 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 13 01:46:04 np0005558241 python3[6952]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:46:37 np0005558241 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 13 01:46:37 np0005558241 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec 13 01:46:37 np0005558241 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 13 01:46:37 np0005558241 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 13 01:46:37 np0005558241 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec 13 01:46:37 np0005558241 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec 13 01:46:37 np0005558241 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec 13 01:46:37 np0005558241 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec 13 01:46:37 np0005558241 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec 13 01:46:37 np0005558241 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 13 01:46:37 np0005558241 NetworkManager[853]: <info>  [1765608397.1778] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 13 01:46:37 np0005558241 systemd-udevd[6954]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:46:37 np0005558241 NetworkManager[853]: <info>  [1765608397.1998] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 01:46:37 np0005558241 NetworkManager[853]: <info>  [1765608397.2036] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 13 01:46:37 np0005558241 NetworkManager[853]: <info>  [1765608397.2041] device (eth1): carrier: link connected
Dec 13 01:46:37 np0005558241 NetworkManager[853]: <info>  [1765608397.2043] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 13 01:46:37 np0005558241 NetworkManager[853]: <info>  [1765608397.2051] policy: auto-activating connection 'Wired connection 1' (401bfccc-67e5-32e5-96b5-b65c2c726952)
Dec 13 01:46:37 np0005558241 NetworkManager[853]: <info>  [1765608397.2055] device (eth1): Activation: starting connection 'Wired connection 1' (401bfccc-67e5-32e5-96b5-b65c2c726952)
Dec 13 01:46:37 np0005558241 NetworkManager[853]: <info>  [1765608397.2056] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 01:46:37 np0005558241 NetworkManager[853]: <info>  [1765608397.2059] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 01:46:37 np0005558241 NetworkManager[853]: <info>  [1765608397.2063] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 01:46:37 np0005558241 NetworkManager[853]: <info>  [1765608397.2068] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 13 01:46:37 np0005558241 systemd[4305]: Starting Mark boot as successful...
Dec 13 01:46:37 np0005558241 systemd[4305]: Finished Mark boot as successful.
Dec 13 01:46:38 np0005558241 python3[6981]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-b23c-7084-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:46:48 np0005558241 python3[7061]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:46:48 np0005558241 python3[7134]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765608407.7962942-102-274891525417497/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=5687557a98c8c9d28166fb5856b4a760ae0930d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:46:49 np0005558241 python3[7184]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 01:46:49 np0005558241 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 13 01:46:49 np0005558241 systemd[1]: Stopped Network Manager Wait Online.
Dec 13 01:46:49 np0005558241 systemd[1]: Stopping Network Manager Wait Online...
Dec 13 01:46:49 np0005558241 systemd[1]: Stopping Network Manager...
Dec 13 01:46:49 np0005558241 NetworkManager[853]: <info>  [1765608409.3674] caught SIGTERM, shutting down normally.
Dec 13 01:46:49 np0005558241 NetworkManager[853]: <info>  [1765608409.3682] dhcp4 (eth0): canceled DHCP transaction
Dec 13 01:46:49 np0005558241 NetworkManager[853]: <info>  [1765608409.3682] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 01:46:49 np0005558241 NetworkManager[853]: <info>  [1765608409.3682] dhcp4 (eth0): state changed no lease
Dec 13 01:46:49 np0005558241 NetworkManager[853]: <info>  [1765608409.3685] manager: NetworkManager state is now CONNECTING
Dec 13 01:46:49 np0005558241 NetworkManager[853]: <info>  [1765608409.3821] dhcp4 (eth1): canceled DHCP transaction
Dec 13 01:46:49 np0005558241 NetworkManager[853]: <info>  [1765608409.3822] dhcp4 (eth1): state changed no lease
Dec 13 01:46:49 np0005558241 NetworkManager[853]: <info>  [1765608409.3905] exiting (success)
Dec 13 01:46:49 np0005558241 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 01:46:49 np0005558241 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 01:46:49 np0005558241 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 13 01:46:49 np0005558241 systemd[1]: Stopped Network Manager.
Dec 13 01:46:49 np0005558241 systemd[1]: NetworkManager.service: Consumed 6.774s CPU time, 10.0M memory peak.
Dec 13 01:46:49 np0005558241 systemd[1]: Starting Network Manager...
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.4371] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:9546f4fe-3402-44c6-8e6a-c5fcacf6ef13)
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.4374] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.4427] manager[0x55d4e9a70000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 13 01:46:49 np0005558241 systemd[1]: Starting Hostname Service...
Dec 13 01:46:49 np0005558241 systemd[1]: Started Hostname Service.
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5377] hostname: hostname: using hostnamed
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5378] hostname: static hostname changed from (none) to "np0005558241.novalocal"
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5385] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5391] manager[0x55d4e9a70000]: rfkill: Wi-Fi hardware radio set enabled
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5392] manager[0x55d4e9a70000]: rfkill: WWAN hardware radio set enabled
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5430] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5431] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5431] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5432] manager: Networking is enabled by state file
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5436] settings: Loaded settings plugin: keyfile (internal)
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5440] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5470] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5485] dhcp: init: Using DHCP client 'internal'
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5488] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5496] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5502] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5511] device (lo): Activation: starting connection 'lo' (85760078-2362-402c-a744-99103f51e310)
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5519] device (eth0): carrier: link connected
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5525] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5532] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5532] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5539] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5547] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5554] device (eth1): carrier: link connected
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5560] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5567] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (401bfccc-67e5-32e5-96b5-b65c2c726952) (indicated)
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5567] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5573] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5581] device (eth1): Activation: starting connection 'Wired connection 1' (401bfccc-67e5-32e5-96b5-b65c2c726952)
Dec 13 01:46:49 np0005558241 systemd[1]: Started Network Manager.
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5588] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5594] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5598] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5601] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5604] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5607] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5609] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5611] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5615] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5624] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5627] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5637] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5642] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5656] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5662] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5666] device (lo): Activation: successful, device activated.
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5695] dhcp4 (eth0): state changed new lease, address=38.129.56.138
Dec 13 01:46:49 np0005558241 systemd[1]: Starting Network Manager Wait Online...
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5701] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5801] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5845] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5848] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5854] manager: NetworkManager state is now CONNECTED_SITE
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5858] device (eth0): Activation: successful, device activated.
Dec 13 01:46:49 np0005558241 NetworkManager[7195]: <info>  [1765608409.5867] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 13 01:46:49 np0005558241 python3[7268]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-b23c-7084-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:46:59 np0005558241 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 01:47:19 np0005558241 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <info>  [1765608454.7243] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 13 01:47:34 np0005558241 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 01:47:34 np0005558241 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <info>  [1765608454.7549] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <info>  [1765608454.7552] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <info>  [1765608454.7558] device (eth1): Activation: successful, device activated.
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <info>  [1765608454.7565] manager: startup complete
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <info>  [1765608454.7567] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <warn>  [1765608454.7571] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <info>  [1765608454.7579] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 13 01:47:34 np0005558241 systemd[1]: Finished Network Manager Wait Online.
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <info>  [1765608454.7732] dhcp4 (eth1): canceled DHCP transaction
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <info>  [1765608454.7733] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <info>  [1765608454.7733] dhcp4 (eth1): state changed no lease
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <info>  [1765608454.7755] policy: auto-activating connection 'ci-private-network' (0d6fb01b-2367-52c6-ae41-b26c9e844c69)
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <info>  [1765608454.7762] device (eth1): Activation: starting connection 'ci-private-network' (0d6fb01b-2367-52c6-ae41-b26c9e844c69)
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <info>  [1765608454.7764] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <info>  [1765608454.7768] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <info>  [1765608454.7779] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <info>  [1765608454.7793] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <info>  [1765608454.7864] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <info>  [1765608454.7867] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 01:47:34 np0005558241 NetworkManager[7195]: <info>  [1765608454.7877] device (eth1): Activation: successful, device activated.
Dec 13 01:47:44 np0005558241 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 01:47:48 np0005558241 python3[7376]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:47:49 np0005558241 python3[7449]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765608468.5268826-267-156655822868379/source _original_basename=tmpeygigaum follow=False checksum=dc06c4f31883ac330f2ac60d6887999b62459cf9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:48:49 np0005558241 systemd-logind[787]: Session 1 logged out. Waiting for processes to exit.
Dec 13 01:50:07 np0005558241 systemd[4305]: Created slice User Background Tasks Slice.
Dec 13 01:50:07 np0005558241 systemd[4305]: Starting Cleanup of User's Temporary Files and Directories...
Dec 13 01:50:07 np0005558241 systemd[4305]: Finished Cleanup of User's Temporary Files and Directories.
Dec 13 01:53:52 np0005558241 systemd-logind[787]: New session 3 of user zuul.
Dec 13 01:53:52 np0005558241 systemd[1]: Started Session 3 of User zuul.
Dec 13 01:53:52 np0005558241 python3[7522]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-0971-7456-000000001f59-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:53:53 np0005558241 python3[7551]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:53:53 np0005558241 python3[7577]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:53:53 np0005558241 python3[7603]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:53:54 np0005558241 python3[7629]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:53:54 np0005558241 python3[7655]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:53:54 np0005558241 python3[7733]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:53:55 np0005558241 python3[7806]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765608834.710793-484-4313116497212/source _original_basename=tmp5q8vmgv9 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:53:56 np0005558241 python3[7856]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 01:53:56 np0005558241 systemd[1]: Reloading.
Dec 13 01:53:56 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 01:53:56 np0005558241 systemd[1]: Starting dnf makecache...
Dec 13 01:53:56 np0005558241 dnf[7888]: Failed determining last makecache time.
Dec 13 01:53:57 np0005558241 dnf[7888]: CentOS Stream 9 - BaseOS                         55 kB/s | 7.3 kB     00:00
Dec 13 01:53:57 np0005558241 dnf[7888]: CentOS Stream 9 - AppStream                      70 kB/s | 7.8 kB     00:00
Dec 13 01:53:57 np0005558241 dnf[7888]: CentOS Stream 9 - CRB                            69 kB/s | 7.2 kB     00:00
Dec 13 01:53:57 np0005558241 dnf[7888]: CentOS Stream 9 - Extras packages                74 kB/s | 8.3 kB     00:00
Dec 13 01:53:57 np0005558241 python3[7921]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 13 01:53:57 np0005558241 dnf[7888]: Metadata cache created.
Dec 13 01:53:57 np0005558241 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 13 01:53:57 np0005558241 systemd[1]: Finished dnf makecache.
Dec 13 01:53:58 np0005558241 python3[7948]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:53:58 np0005558241 python3[7976]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:53:58 np0005558241 python3[8004]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:53:58 np0005558241 python3[8032]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:53:59 np0005558241 python3[8059]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-0971-7456-000000001f60-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:54:00 np0005558241 python3[8089]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 01:54:01 np0005558241 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:54:01 np0005558241 systemd[1]: session-3.scope: Consumed 4.140s CPU time.
Dec 13 01:54:01 np0005558241 systemd-logind[787]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:54:01 np0005558241 systemd-logind[787]: Removed session 3.
Dec 13 01:54:03 np0005558241 systemd-logind[787]: New session 4 of user zuul.
Dec 13 01:54:03 np0005558241 systemd[1]: Started Session 4 of User zuul.
Dec 13 01:54:04 np0005558241 python3[8124]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 01:54:21 np0005558241 kernel: SELinux:  Converting 385 SID table entries...
Dec 13 01:54:21 np0005558241 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 01:54:21 np0005558241 kernel: SELinux:  policy capability open_perms=1
Dec 13 01:54:21 np0005558241 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 01:54:21 np0005558241 kernel: SELinux:  policy capability always_check_network=0
Dec 13 01:54:21 np0005558241 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 01:54:21 np0005558241 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 01:54:21 np0005558241 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 01:54:31 np0005558241 kernel: SELinux:  Converting 385 SID table entries...
Dec 13 01:54:31 np0005558241 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 01:54:31 np0005558241 kernel: SELinux:  policy capability open_perms=1
Dec 13 01:54:31 np0005558241 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 01:54:31 np0005558241 kernel: SELinux:  policy capability always_check_network=0
Dec 13 01:54:31 np0005558241 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 01:54:31 np0005558241 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 01:54:31 np0005558241 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 01:54:42 np0005558241 kernel: SELinux:  Converting 385 SID table entries...
Dec 13 01:54:42 np0005558241 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 01:54:42 np0005558241 kernel: SELinux:  policy capability open_perms=1
Dec 13 01:54:42 np0005558241 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 01:54:42 np0005558241 kernel: SELinux:  policy capability always_check_network=0
Dec 13 01:54:42 np0005558241 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 01:54:42 np0005558241 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 01:54:42 np0005558241 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 01:54:47 np0005558241 setsebool[8184]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 13 01:54:47 np0005558241 setsebool[8184]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 13 01:55:01 np0005558241 kernel: SELinux:  Converting 388 SID table entries...
Dec 13 01:55:01 np0005558241 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 01:55:01 np0005558241 kernel: SELinux:  policy capability open_perms=1
Dec 13 01:55:01 np0005558241 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 01:55:01 np0005558241 kernel: SELinux:  policy capability always_check_network=0
Dec 13 01:55:01 np0005558241 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 01:55:01 np0005558241 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 01:55:01 np0005558241 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 01:55:34 np0005558241 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 13 01:55:34 np0005558241 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 01:55:34 np0005558241 systemd[1]: Starting man-db-cache-update.service...
Dec 13 01:55:34 np0005558241 systemd[1]: Reloading.
Dec 13 01:55:34 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 01:55:34 np0005558241 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 01:55:40 np0005558241 irqbalance[778]: Cannot change IRQ 30 affinity: Operation not permitted
Dec 13 01:55:40 np0005558241 irqbalance[778]: IRQ 30 affinity is now unmanaged
Dec 13 01:55:56 np0005558241 python3[19321]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-de60-582a-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 01:55:58 np0005558241 kernel: evm: overlay not supported
Dec 13 01:55:59 np0005558241 systemd[4305]: Starting D-Bus User Message Bus...
Dec 13 01:55:59 np0005558241 dbus-broker-launch[20526]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 13 01:55:59 np0005558241 dbus-broker-launch[20526]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 13 01:55:59 np0005558241 systemd[4305]: Started D-Bus User Message Bus.
Dec 13 01:55:59 np0005558241 dbus-broker-lau[20526]: Ready
Dec 13 01:55:59 np0005558241 systemd[4305]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 13 01:55:59 np0005558241 systemd[4305]: Created slice Slice /user.
Dec 13 01:55:59 np0005558241 systemd[4305]: podman-20094.scope: unit configures an IP firewall, but not running as root.
Dec 13 01:55:59 np0005558241 systemd[4305]: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 01:55:59 np0005558241 systemd[4305]: Started podman-20094.scope.
Dec 13 01:55:59 np0005558241 systemd[4305]: Started podman-pause-0ca46e84.scope.
Dec 13 01:56:00 np0005558241 python3[21287]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.129.56.213:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.129.56.213:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:56:00 np0005558241 python3[21287]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec 13 01:56:01 np0005558241 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:56:01 np0005558241 systemd[1]: session-4.scope: Consumed 1min 4.288s CPU time.
Dec 13 01:56:01 np0005558241 systemd-logind[787]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:56:01 np0005558241 systemd-logind[787]: Removed session 4.
Dec 13 01:56:44 np0005558241 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 01:56:44 np0005558241 systemd[1]: Finished man-db-cache-update.service.
Dec 13 01:56:44 np0005558241 systemd[1]: man-db-cache-update.service: Consumed 50.816s CPU time.
Dec 13 01:56:44 np0005558241 systemd[1]: run-rcf4be4c4266b47b6914098d73092989f.service: Deactivated successfully.
Dec 13 01:56:54 np0005558241 systemd-logind[787]: New session 5 of user zuul.
Dec 13 01:56:54 np0005558241 systemd[1]: Started Session 5 of User zuul.
Dec 13 01:56:55 np0005558241 python3[29631]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJu4PZ3lFuS5lCdCVcUZIjVLiy4aHHHZdx5PAQ8ug87NJACaAfJkkt1G2orl0kblFT9F/7YAuZKNhxIyy1JDOGM= zuul@np0005558307.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:56:55 np0005558241 python3[29657]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJu4PZ3lFuS5lCdCVcUZIjVLiy4aHHHZdx5PAQ8ug87NJACaAfJkkt1G2orl0kblFT9F/7YAuZKNhxIyy1JDOGM= zuul@np0005558307.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:56:56 np0005558241 python3[29683]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005558241.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 13 01:57:00 np0005558241 python3[29717]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJu4PZ3lFuS5lCdCVcUZIjVLiy4aHHHZdx5PAQ8ug87NJACaAfJkkt1G2orl0kblFT9F/7YAuZKNhxIyy1JDOGM= zuul@np0005558307.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 01:57:00 np0005558241 python3[29795]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 01:57:01 np0005558241 python3[29868]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765609020.3795323-135-112315062661233/source _original_basename=tmp1k_n_n5m follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 01:57:02 np0005558241 python3[29918]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec 13 01:57:02 np0005558241 systemd[1]: Starting Hostname Service...
Dec 13 01:57:02 np0005558241 systemd[1]: Started Hostname Service.
Dec 13 01:57:02 np0005558241 systemd-hostnamed[29922]: Changed pretty hostname to 'compute-0'
Dec 13 01:57:02 np0005558241 systemd-hostnamed[29922]: Hostname set to <compute-0> (static)
Dec 13 01:57:02 np0005558241 NetworkManager[7195]: <info>  [1765609022.1624] hostname: static hostname changed from "np0005558241.novalocal" to "compute-0"
Dec 13 01:57:02 np0005558241 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 01:57:02 np0005558241 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 01:57:02 np0005558241 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:57:02 np0005558241 systemd[1]: session-5.scope: Consumed 2.405s CPU time.
Dec 13 01:57:02 np0005558241 systemd-logind[787]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:57:02 np0005558241 systemd-logind[787]: Removed session 5.
Dec 13 01:57:12 np0005558241 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 01:57:32 np0005558241 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 02:03:33 np0005558241 systemd-logind[787]: New session 6 of user zuul.
Dec 13 02:03:33 np0005558241 systemd[1]: Started Session 6 of User zuul.
Dec 13 02:03:33 np0005558241 python3[30034]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:03:35 np0005558241 python3[30150]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:03:35 np0005558241 python3[30223]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765609414.9331803-33568-10829302105274/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:03:36 np0005558241 python3[30249]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:03:36 np0005558241 python3[30322]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765609414.9331803-33568-10829302105274/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:03:36 np0005558241 python3[30348]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:03:36 np0005558241 python3[30421]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765609414.9331803-33568-10829302105274/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:03:37 np0005558241 python3[30447]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:03:37 np0005558241 python3[30520]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765609414.9331803-33568-10829302105274/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:03:37 np0005558241 python3[30546]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:03:38 np0005558241 python3[30619]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765609414.9331803-33568-10829302105274/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:03:38 np0005558241 python3[30645]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:03:38 np0005558241 python3[30719]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765609414.9331803-33568-10829302105274/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:03:39 np0005558241 python3[30745]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:03:39 np0005558241 python3[30818]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765609414.9331803-33568-10829302105274/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:03:57 np0005558241 python3[30876]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:08:57 np0005558241 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 02:08:57 np0005558241 systemd[1]: session-6.scope: Consumed 5.003s CPU time.
Dec 13 02:08:57 np0005558241 systemd-logind[787]: Session 6 logged out. Waiting for processes to exit.
Dec 13 02:08:57 np0005558241 systemd-logind[787]: Removed session 6.
Dec 13 02:17:14 np0005558241 systemd-logind[787]: New session 7 of user zuul.
Dec 13 02:17:14 np0005558241 systemd[1]: Started Session 7 of User zuul.
Dec 13 02:17:15 np0005558241 python3.9[31040]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:17:16 np0005558241 python3.9[31221]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:17:17 np0005558241 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 02:17:17 np0005558241 systemd[1]: session-7.scope: Consumed 1.619s CPU time.
Dec 13 02:17:17 np0005558241 systemd-logind[787]: Session 7 logged out. Waiting for processes to exit.
Dec 13 02:17:17 np0005558241 systemd-logind[787]: Removed session 7.
Dec 13 02:17:29 np0005558241 systemd-logind[787]: New session 8 of user zuul.
Dec 13 02:17:29 np0005558241 systemd[1]: Started Session 8 of User zuul.
Dec 13 02:17:30 np0005558241 python3.9[31406]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:17:31 np0005558241 python3.9[31587]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:17:43 np0005558241 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 02:17:43 np0005558241 systemd[1]: session-8.scope: Consumed 1.644s CPU time.
Dec 13 02:17:43 np0005558241 systemd-logind[787]: Session 8 logged out. Waiting for processes to exit.
Dec 13 02:17:43 np0005558241 systemd-logind[787]: Removed session 8.
Dec 13 02:18:06 np0005558241 systemd-logind[787]: New session 9 of user zuul.
Dec 13 02:18:06 np0005558241 systemd[1]: Started Session 9 of User zuul.
Dec 13 02:18:07 np0005558241 python3.9[31773]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:18:08 np0005558241 python3.9[31954]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:18:09 np0005558241 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 02:18:09 np0005558241 systemd[1]: session-9.scope: Consumed 1.658s CPU time.
Dec 13 02:18:09 np0005558241 systemd-logind[787]: Session 9 logged out. Waiting for processes to exit.
Dec 13 02:18:09 np0005558241 systemd-logind[787]: Removed session 9.
Dec 13 02:18:52 np0005558241 systemd-logind[787]: New session 10 of user zuul.
Dec 13 02:18:52 np0005558241 systemd[1]: Started Session 10 of User zuul.
Dec 13 02:18:53 np0005558241 python3.9[32139]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:18:54 np0005558241 python3.9[32320]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:18:55 np0005558241 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 02:18:55 np0005558241 systemd[1]: session-10.scope: Consumed 1.594s CPU time.
Dec 13 02:18:55 np0005558241 systemd-logind[787]: Session 10 logged out. Waiting for processes to exit.
Dec 13 02:18:55 np0005558241 systemd-logind[787]: Removed session 10.
Dec 13 02:20:18 np0005558241 systemd-logind[787]: New session 11 of user zuul.
Dec 13 02:20:18 np0005558241 systemd[1]: Started Session 11 of User zuul.
Dec 13 02:20:19 np0005558241 python3.9[32504]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:20:20 np0005558241 python3.9[32685]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:20:28 np0005558241 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 02:20:28 np0005558241 systemd[1]: session-11.scope: Consumed 8.327s CPU time.
Dec 13 02:20:28 np0005558241 systemd-logind[787]: Session 11 logged out. Waiting for processes to exit.
Dec 13 02:20:28 np0005558241 systemd-logind[787]: Removed session 11.
Dec 13 02:20:44 np0005558241 systemd-logind[787]: New session 12 of user zuul.
Dec 13 02:20:44 np0005558241 systemd[1]: Started Session 12 of User zuul.
Dec 13 02:20:45 np0005558241 python3.9[32895]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 13 02:20:46 np0005558241 python3.9[33069]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:20:47 np0005558241 python3.9[33221]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:20:48 np0005558241 python3.9[33374]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:20:49 np0005558241 python3.9[33526]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:50 np0005558241 python3.9[33678]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:20:51 np0005558241 python3.9[33801]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610450.0129466-73-252055742922017/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:52 np0005558241 python3.9[33953]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:20:53 np0005558241 python3.9[34109]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:20:54 np0005558241 python3.9[34261]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:20:55 np0005558241 python3.9[34411]: ansible-ansible.builtin.service_facts Invoked
Dec 13 02:20:59 np0005558241 python3.9[34664]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:20:59 np0005558241 python3.9[34814]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:21:01 np0005558241 python3.9[34968]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:21:02 np0005558241 python3.9[35126]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:21:03 np0005558241 python3.9[35210]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:21:47 np0005558241 systemd[1]: Reloading.
Dec 13 02:21:47 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:21:47 np0005558241 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 13 02:21:47 np0005558241 systemd[1]: Reloading.
Dec 13 02:21:47 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:21:47 np0005558241 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 13 02:21:47 np0005558241 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 13 02:21:47 np0005558241 systemd[1]: Reloading.
Dec 13 02:21:47 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:21:48 np0005558241 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 13 02:21:48 np0005558241 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Dec 13 02:21:48 np0005558241 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Dec 13 02:21:48 np0005558241 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Dec 13 02:22:57 np0005558241 kernel: SELinux:  Converting 2718 SID table entries...
Dec 13 02:22:57 np0005558241 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 02:22:57 np0005558241 kernel: SELinux:  policy capability open_perms=1
Dec 13 02:22:57 np0005558241 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 02:22:57 np0005558241 kernel: SELinux:  policy capability always_check_network=0
Dec 13 02:22:57 np0005558241 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 02:22:57 np0005558241 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 02:22:57 np0005558241 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 02:22:58 np0005558241 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 13 02:22:58 np0005558241 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 02:22:58 np0005558241 systemd[1]: Starting man-db-cache-update.service...
Dec 13 02:22:58 np0005558241 systemd[1]: Reloading.
Dec 13 02:22:58 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:22:58 np0005558241 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 02:22:59 np0005558241 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 02:22:59 np0005558241 systemd[1]: Finished man-db-cache-update.service.
Dec 13 02:22:59 np0005558241 systemd[1]: man-db-cache-update.service: Consumed 1.438s CPU time.
Dec 13 02:22:59 np0005558241 systemd[1]: run-r4d15df4d7a6b44669c3bfddcd0e5d574.service: Deactivated successfully.
Dec 13 02:22:59 np0005558241 python3.9[36715]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:23:02 np0005558241 python3.9[36997]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 13 02:23:03 np0005558241 python3.9[37149]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 13 02:23:06 np0005558241 python3.9[37302]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:07 np0005558241 python3.9[37454]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 13 02:23:08 np0005558241 python3.9[37606]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:23:11 np0005558241 python3.9[37758]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:23:14 np0005558241 python3.9[37881]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610589.1107953-236-177261514159637/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8420c3cfe35a04f9d22dc8ceccdc23770479ce6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:15 np0005558241 python3.9[38033]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:23:16 np0005558241 python3.9[38185]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:23:17 np0005558241 python3.9[38338]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:23:18 np0005558241 python3.9[38490]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 13 02:23:18 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 02:23:19 np0005558241 python3.9[38644]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 02:23:20 np0005558241 python3.9[38802]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 13 02:23:21 np0005558241 python3.9[38962]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 13 02:23:22 np0005558241 python3.9[39115]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 02:23:23 np0005558241 python3.9[39273]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 13 02:23:24 np0005558241 python3.9[39425]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:23:28 np0005558241 python3.9[39578]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:23:29 np0005558241 python3.9[39730]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:23:29 np0005558241 python3.9[39853]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610608.741934-355-74775124856629/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:23:31 np0005558241 python3.9[40005]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:23:31 np0005558241 systemd[1]: Starting Load Kernel Modules...
Dec 13 02:23:31 np0005558241 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:23:31 np0005558241 kernel: Bridge firewalling registered
Dec 13 02:23:31 np0005558241 systemd-modules-load[40009]: Inserted module 'br_netfilter'
Dec 13 02:23:31 np0005558241 systemd[1]: Finished Load Kernel Modules.
Dec 13 02:23:32 np0005558241 python3.9[40166]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:23:32 np0005558241 python3.9[40289]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610611.501861-378-278979686428787/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:23:33 np0005558241 python3.9[40441]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:23:39 np0005558241 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Dec 13 02:23:39 np0005558241 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Dec 13 02:23:39 np0005558241 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 02:23:39 np0005558241 systemd[1]: Starting man-db-cache-update.service...
Dec 13 02:23:39 np0005558241 systemd[1]: Reloading.
Dec 13 02:23:39 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:23:40 np0005558241 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 02:23:43 np0005558241 python3.9[43000]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:23:44 np0005558241 python3.9[43992]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 13 02:23:45 np0005558241 python3.9[44398]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:23:46 np0005558241 python3.9[44634]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:23:46 np0005558241 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 13 02:23:47 np0005558241 systemd[1]: Starting Authorization Manager...
Dec 13 02:23:47 np0005558241 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 13 02:23:47 np0005558241 polkitd[44851]: Started polkitd version 0.117
Dec 13 02:23:47 np0005558241 systemd[1]: Started Authorization Manager.
Dec 13 02:23:48 np0005558241 python3.9[45021]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:23:48 np0005558241 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 13 02:23:48 np0005558241 systemd[1]: tuned.service: Deactivated successfully.
Dec 13 02:23:48 np0005558241 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 13 02:23:48 np0005558241 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 13 02:23:49 np0005558241 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 13 02:23:50 np0005558241 python3.9[45183]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 13 02:23:52 np0005558241 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 02:23:52 np0005558241 systemd[1]: Finished man-db-cache-update.service.
Dec 13 02:23:52 np0005558241 systemd[1]: man-db-cache-update.service: Consumed 4.974s CPU time.
Dec 13 02:23:52 np0005558241 systemd[1]: run-r81e7752e9499481b8d1931146c4a4d3d.service: Deactivated successfully.
Dec 13 02:23:52 np0005558241 python3.9[45336]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:23:52 np0005558241 systemd[1]: Reloading.
Dec 13 02:23:52 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:23:53 np0005558241 python3.9[45525]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:23:53 np0005558241 systemd[1]: Reloading.
Dec 13 02:23:54 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:23:55 np0005558241 python3.9[45715]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:23:55 np0005558241 python3.9[45869]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:23:55 np0005558241 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 13 02:23:56 np0005558241 python3.9[46022]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:23:58 np0005558241 python3.9[46184]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:23:59 np0005558241 python3.9[46337]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:23:59 np0005558241 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 02:23:59 np0005558241 systemd[1]: Stopped Apply Kernel Variables.
Dec 13 02:23:59 np0005558241 systemd[1]: Stopping Apply Kernel Variables...
Dec 13 02:23:59 np0005558241 systemd[1]: Starting Apply Kernel Variables...
Dec 13 02:23:59 np0005558241 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 02:23:59 np0005558241 systemd[1]: Finished Apply Kernel Variables.
Dec 13 02:24:00 np0005558241 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 02:24:00 np0005558241 systemd[1]: session-12.scope: Consumed 2min 17.327s CPU time.
Dec 13 02:24:00 np0005558241 systemd-logind[787]: Session 12 logged out. Waiting for processes to exit.
Dec 13 02:24:00 np0005558241 systemd-logind[787]: Removed session 12.
Dec 13 02:24:20 np0005558241 systemd-logind[787]: New session 13 of user zuul.
Dec 13 02:24:20 np0005558241 systemd[1]: Started Session 13 of User zuul.
Dec 13 02:24:21 np0005558241 python3.9[46520]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:24:23 np0005558241 python3.9[46676]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 13 02:24:24 np0005558241 python3.9[46829]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 02:24:25 np0005558241 python3.9[46987]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 13 02:24:27 np0005558241 python3.9[47147]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:24:28 np0005558241 python3.9[47231]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 13 02:24:32 np0005558241 python3.9[47397]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:24:45 np0005558241 kernel: SELinux:  Converting 2730 SID table entries...
Dec 13 02:24:45 np0005558241 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 02:24:45 np0005558241 kernel: SELinux:  policy capability open_perms=1
Dec 13 02:24:45 np0005558241 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 02:24:45 np0005558241 kernel: SELinux:  policy capability always_check_network=0
Dec 13 02:24:45 np0005558241 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 02:24:45 np0005558241 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 02:24:45 np0005558241 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 02:24:47 np0005558241 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 13 02:24:47 np0005558241 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 13 02:24:49 np0005558241 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 02:24:49 np0005558241 systemd[1]: Starting man-db-cache-update.service...
Dec 13 02:24:49 np0005558241 systemd[1]: Reloading.
Dec 13 02:24:49 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:24:49 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:24:49 np0005558241 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 02:24:53 np0005558241 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 02:24:53 np0005558241 systemd[1]: Finished man-db-cache-update.service.
Dec 13 02:24:53 np0005558241 systemd[1]: run-r8f1ae9957c3c455b83981dfe30ad8914.service: Deactivated successfully.
Dec 13 02:24:54 np0005558241 python3.9[48495]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 02:24:54 np0005558241 systemd[1]: Reloading.
Dec 13 02:24:54 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:24:54 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:24:54 np0005558241 systemd[1]: Starting Open vSwitch Database Unit...
Dec 13 02:24:54 np0005558241 chown[48537]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 13 02:24:54 np0005558241 ovs-ctl[48542]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 13 02:24:55 np0005558241 ovs-ctl[48542]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 13 02:24:55 np0005558241 ovs-ctl[48542]: Starting ovsdb-server [  OK  ]
Dec 13 02:24:55 np0005558241 ovs-vsctl[48591]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 13 02:24:55 np0005558241 ovs-vsctl[48607]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"0c490016-f399-44ae-a985-d9ff6e29d8d2\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 13 02:24:55 np0005558241 ovs-ctl[48542]: Configuring Open vSwitch system IDs [  OK  ]
Dec 13 02:24:55 np0005558241 ovs-ctl[48542]: Enabling remote OVSDB managers [  OK  ]
Dec 13 02:24:55 np0005558241 ovs-vsctl[48617]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 13 02:24:55 np0005558241 systemd[1]: Started Open vSwitch Database Unit.
Dec 13 02:24:55 np0005558241 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 13 02:24:55 np0005558241 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 13 02:24:55 np0005558241 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 13 02:24:55 np0005558241 kernel: openvswitch: Open vSwitch switching datapath
Dec 13 02:24:55 np0005558241 ovs-ctl[48663]: Inserting openvswitch module [  OK  ]
Dec 13 02:24:55 np0005558241 ovs-ctl[48631]: Starting ovs-vswitchd [  OK  ]
Dec 13 02:24:55 np0005558241 ovs-ctl[48631]: Enabling remote OVSDB managers [  OK  ]
Dec 13 02:24:55 np0005558241 ovs-vsctl[48680]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 13 02:24:55 np0005558241 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 13 02:24:55 np0005558241 systemd[1]: Starting Open vSwitch...
Dec 13 02:24:55 np0005558241 systemd[1]: Finished Open vSwitch.
Dec 13 02:24:56 np0005558241 python3.9[48832]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:24:57 np0005558241 python3.9[48984]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 13 02:25:01 np0005558241 kernel: SELinux:  Converting 2744 SID table entries...
Dec 13 02:25:01 np0005558241 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 02:25:01 np0005558241 kernel: SELinux:  policy capability open_perms=1
Dec 13 02:25:01 np0005558241 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 02:25:01 np0005558241 kernel: SELinux:  policy capability always_check_network=0
Dec 13 02:25:01 np0005558241 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 02:25:01 np0005558241 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 02:25:01 np0005558241 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 02:25:02 np0005558241 python3.9[49139]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:25:03 np0005558241 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 13 02:25:04 np0005558241 python3.9[49297]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:25:06 np0005558241 python3.9[49450]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:25:07 np0005558241 python3.9[49737]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 13 02:25:08 np0005558241 python3.9[49887]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:25:09 np0005558241 python3.9[50041]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:25:11 np0005558241 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 02:25:11 np0005558241 systemd[1]: Starting man-db-cache-update.service...
Dec 13 02:25:11 np0005558241 systemd[1]: Reloading.
Dec 13 02:25:11 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:25:11 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:25:12 np0005558241 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 02:25:12 np0005558241 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 02:25:12 np0005558241 systemd[1]: Finished man-db-cache-update.service.
Dec 13 02:25:12 np0005558241 systemd[1]: run-r4788423f3f6b4ec1ac39000fe8a08ebd.service: Deactivated successfully.
Dec 13 02:25:13 np0005558241 python3.9[50358]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:25:13 np0005558241 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 13 02:25:13 np0005558241 systemd[1]: Stopped Network Manager Wait Online.
Dec 13 02:25:13 np0005558241 systemd[1]: Stopping Network Manager Wait Online...
Dec 13 02:25:13 np0005558241 systemd[1]: Stopping Network Manager...
Dec 13 02:25:13 np0005558241 NetworkManager[7195]: <info>  [1765610713.3988] caught SIGTERM, shutting down normally.
Dec 13 02:25:13 np0005558241 NetworkManager[7195]: <info>  [1765610713.4015] dhcp4 (eth0): canceled DHCP transaction
Dec 13 02:25:13 np0005558241 NetworkManager[7195]: <info>  [1765610713.4015] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 02:25:13 np0005558241 NetworkManager[7195]: <info>  [1765610713.4015] dhcp4 (eth0): state changed no lease
Dec 13 02:25:13 np0005558241 NetworkManager[7195]: <info>  [1765610713.4024] manager: NetworkManager state is now CONNECTED_SITE
Dec 13 02:25:13 np0005558241 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 02:25:13 np0005558241 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 02:25:13 np0005558241 NetworkManager[7195]: <info>  [1765610713.8968] exiting (success)
Dec 13 02:25:13 np0005558241 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 13 02:25:13 np0005558241 systemd[1]: Stopped Network Manager.
Dec 13 02:25:13 np0005558241 systemd[1]: NetworkManager.service: Consumed 15.701s CPU time, 4.1M memory peak, read 0B from disk, written 29.0K to disk.
Dec 13 02:25:13 np0005558241 systemd[1]: Starting Network Manager...
Dec 13 02:25:13 np0005558241 NetworkManager[50376]: <info>  [1765610713.9615] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:9546f4fe-3402-44c6-8e6a-c5fcacf6ef13)
Dec 13 02:25:13 np0005558241 NetworkManager[50376]: <info>  [1765610713.9616] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 13 02:25:13 np0005558241 NetworkManager[50376]: <info>  [1765610713.9678] manager[0x55af8692f000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 13 02:25:13 np0005558241 systemd[1]: Starting Hostname Service...
Dec 13 02:25:14 np0005558241 systemd[1]: Started Hostname Service.
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.0832] hostname: hostname: using hostnamed
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.0833] hostname: static hostname changed from (none) to "compute-0"
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.0836] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.0841] manager[0x55af8692f000]: rfkill: Wi-Fi hardware radio set enabled
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.0842] manager[0x55af8692f000]: rfkill: WWAN hardware radio set enabled
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.0875] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-ovs.so)
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.0889] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.0890] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.0891] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.0892] manager: Networking is enabled by state file
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.0895] settings: Loaded settings plugin: keyfile (internal)
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.0902] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.0947] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.0963] dhcp: init: Using DHCP client 'internal'
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.0967] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.0978] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.0988] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1001] device (lo): Activation: starting connection 'lo' (85760078-2362-402c-a744-99103f51e310)
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1013] device (eth0): carrier: link connected
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1020] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1028] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1029] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1042] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1056] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1067] device (eth1): carrier: link connected
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1074] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1082] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (0d6fb01b-2367-52c6-ae41-b26c9e844c69) (indicated)
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1083] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1093] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1105] device (eth1): Activation: starting connection 'ci-private-network' (0d6fb01b-2367-52c6-ae41-b26c9e844c69)
Dec 13 02:25:14 np0005558241 systemd[1]: Started Network Manager.
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1112] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1127] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1132] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1135] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1147] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1151] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1152] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1154] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1156] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1161] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1164] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1169] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1178] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1185] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1185] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1188] device (lo): Activation: successful, device activated.
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1193] dhcp4 (eth0): state changed new lease, address=38.129.56.138
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.1197] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 13 02:25:14 np0005558241 systemd[1]: Starting Network Manager Wait Online...
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.6447] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.6473] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.6481] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.6484] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.6488] device (eth1): Activation: successful, device activated.
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.7139] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.7143] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.7148] manager: NetworkManager state is now CONNECTED_SITE
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.7151] device (eth0): Activation: successful, device activated.
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.7159] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 13 02:25:14 np0005558241 NetworkManager[50376]: <info>  [1765610714.7251] manager: startup complete
Dec 13 02:25:14 np0005558241 systemd[1]: Finished Network Manager Wait Online.
Dec 13 02:25:15 np0005558241 python3.9[50585]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:25:24 np0005558241 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 02:25:27 np0005558241 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 02:25:27 np0005558241 systemd[1]: Starting man-db-cache-update.service...
Dec 13 02:25:27 np0005558241 systemd[1]: Reloading.
Dec 13 02:25:27 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:25:27 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:25:27 np0005558241 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 02:25:35 np0005558241 python3.9[51044]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:25:36 np0005558241 python3.9[51196]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:25:36 np0005558241 python3.9[51350]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:25:37 np0005558241 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 02:25:37 np0005558241 systemd[1]: Finished man-db-cache-update.service.
Dec 13 02:25:37 np0005558241 systemd[1]: run-r26eb3f4d943148e5b90eb9c2e96b9976.service: Deactivated successfully.
Dec 13 02:25:37 np0005558241 python3.9[51503]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:25:38 np0005558241 python3.9[51655]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:25:39 np0005558241 python3.9[51807]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:25:39 np0005558241 python3.9[51959]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:25:40 np0005558241 python3.9[52082]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610739.4557645-229-29205160372141/.source _original_basename=.4v83lejd follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:25:41 np0005558241 python3.9[52234]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:25:42 np0005558241 python3.9[52386]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 13 02:25:43 np0005558241 python3.9[52538]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:25:44 np0005558241 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 02:25:45 np0005558241 python3.9[52967]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 13 02:25:46 np0005558241 ansible-async_wrapper.py[53142]: Invoked with j606618159670 300 /home/zuul/.ansible/tmp/ansible-tmp-1765610746.0651836-295-119552970107569/AnsiballZ_edpm_os_net_config.py _
Dec 13 02:25:46 np0005558241 ansible-async_wrapper.py[53145]: Starting module and watcher
Dec 13 02:25:46 np0005558241 ansible-async_wrapper.py[53145]: Start watching 53146 (300)
Dec 13 02:25:46 np0005558241 ansible-async_wrapper.py[53146]: Start module (53146)
Dec 13 02:25:46 np0005558241 ansible-async_wrapper.py[53142]: Return async_wrapper task started.
Dec 13 02:25:47 np0005558241 python3.9[53147]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec 13 02:25:47 np0005558241 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 13 02:25:47 np0005558241 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 13 02:25:47 np0005558241 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 13 02:25:47 np0005558241 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 13 02:25:47 np0005558241 kernel: cfg80211: failed to load regulatory.db
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.2564] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.2585] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3257] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3258] audit: op="connection-add" uuid="acdd3ce7-1fdd-4b6d-9cd5-a91423c24565" name="br-ex-br" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3274] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3275] audit: op="connection-add" uuid="1ef56763-fd86-4b83-a7dc-9f686ef72f40" name="br-ex-port" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3289] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3291] audit: op="connection-add" uuid="e4e683a1-db7c-41b4-b902-edaf4216ef88" name="eth1-port" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3303] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3304] audit: op="connection-add" uuid="b2c455ac-56fb-403b-9db1-e22bc7524e4e" name="vlan20-port" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3317] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3318] audit: op="connection-add" uuid="315e05e9-505b-4c10-a607-83edc4d0272e" name="vlan21-port" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3331] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3333] audit: op="connection-add" uuid="14dfc029-c410-4905-bc2c-71ea72696e73" name="vlan22-port" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3345] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3346] audit: op="connection-add" uuid="2e9761a8-9953-4e1e-907f-d47e8cf9c41d" name="vlan23-port" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3366] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3383] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3384] audit: op="connection-add" uuid="f0641231-4988-4098-a985-6401e92e8f31" name="br-ex-if" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3593] audit: op="connection-update" uuid="0d6fb01b-2367-52c6-ae41-b26c9e844c69" name="ci-private-network" args="ipv4.method,ipv4.addresses,ipv4.never-default,ipv4.routing-rules,ipv4.dns,ipv4.routes,connection.port-type,connection.slave-type,connection.timestamp,connection.controller,connection.master,ovs-external-ids.data,ipv6.method,ipv6.addresses,ipv6.addr-gen-mode,ipv6.routing-rules,ipv6.dns,ipv6.routes,ovs-interface.type" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3614] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3616] audit: op="connection-add" uuid="d42675ae-0e46-47f2-80e4-533254c3c623" name="vlan20-if" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3631] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3633] audit: op="connection-add" uuid="1a51e2a1-4271-4ecd-9673-76014d78ab80" name="vlan21-if" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3648] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3650] audit: op="connection-add" uuid="e09a3c11-0d9f-4c8b-b2c4-c1f4bdc81e37" name="vlan22-if" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3666] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3667] audit: op="connection-add" uuid="06fbdfca-3fee-47b1-ab56-ab4d1961e3bd" name="vlan23-if" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3679] audit: op="connection-delete" uuid="401bfccc-67e5-32e5-96b5-b65c2c726952" name="Wired connection 1" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3689] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <warn>  [1765610749.3692] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3699] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3703] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (acdd3ce7-1fdd-4b6d-9cd5-a91423c24565)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3704] audit: op="connection-activate" uuid="acdd3ce7-1fdd-4b6d-9cd5-a91423c24565" name="br-ex-br" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3706] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <warn>  [1765610749.3706] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3712] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3716] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (1ef56763-fd86-4b83-a7dc-9f686ef72f40)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3717] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <warn>  [1765610749.3718] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3722] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3726] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (e4e683a1-db7c-41b4-b902-edaf4216ef88)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3727] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <warn>  [1765610749.3728] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3733] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3736] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (b2c455ac-56fb-403b-9db1-e22bc7524e4e)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3739] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <warn>  [1765610749.3739] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3744] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3748] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (315e05e9-505b-4c10-a607-83edc4d0272e)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3750] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <warn>  [1765610749.3751] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3756] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3760] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (14dfc029-c410-4905-bc2c-71ea72696e73)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3762] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <warn>  [1765610749.3763] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3768] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3772] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (2e9761a8-9953-4e1e-907f-d47e8cf9c41d)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3773] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3776] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3778] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3784] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <warn>  [1765610749.3785] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3788] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3793] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (f0641231-4988-4098-a985-6401e92e8f31)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3794] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3797] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3799] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3800] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3801] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3810] device (eth1): disconnecting for new activation request.
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3811] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3813] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3815] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3816] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3818] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <warn>  [1765610749.3819] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3822] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3825] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (d42675ae-0e46-47f2-80e4-533254c3c623)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3826] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3828] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3830] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3832] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3834] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <warn>  [1765610749.3835] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3838] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3843] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (1a51e2a1-4271-4ecd-9673-76014d78ab80)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3844] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3847] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3850] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3851] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3854] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <warn>  [1765610749.3855] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3859] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3865] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (e09a3c11-0d9f-4c8b-b2c4-c1f4bdc81e37)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3866] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3869] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3871] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3872] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3875] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <warn>  [1765610749.3876] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3879] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3884] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (06fbdfca-3fee-47b1-ab56-ab4d1961e3bd)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3885] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3888] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3890] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3892] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3894] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3907] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,802-3-ethernet.mtu,ipv6.method,ipv6.addr-gen-mode" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3909] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3913] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3915] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3922] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3925] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3928] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3932] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3934] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3939] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 kernel: ovs-system: entered promiscuous mode
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3943] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3948] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3950] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3955] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3961] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 kernel: Timeout policy base is empty
Dec 13 02:25:49 np0005558241 systemd-udevd[53153]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3965] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3967] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3973] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3977] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3980] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3982] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3988] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3993] dhcp4 (eth0): canceled DHCP transaction
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3993] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3993] dhcp4 (eth0): state changed no lease
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.3995] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.4007] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.4011] audit: op="device-reapply" interface="eth1" ifindex=3 pid=53148 uid=0 result="fail" reason="Device is not activated"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.4016] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 13 02:25:49 np0005558241 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 02:25:49 np0005558241 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 02:25:49 np0005558241 kernel: br-ex: entered promiscuous mode
Dec 13 02:25:49 np0005558241 kernel: vlan20: entered promiscuous mode
Dec 13 02:25:49 np0005558241 systemd-udevd[53154]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:25:49 np0005558241 kernel: vlan21: entered promiscuous mode
Dec 13 02:25:49 np0005558241 systemd-udevd[53152]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.6552] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.6557] dhcp4 (eth0): state changed new lease, address=38.129.56.138
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.6569] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.6578] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.6590] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.6597] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.6604] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 13 02:25:49 np0005558241 kernel: vlan22: entered promiscuous mode
Dec 13 02:25:49 np0005558241 kernel: vlan23: entered promiscuous mode
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7439] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7632] device (eth1): Activation: starting connection 'ci-private-network' (0d6fb01b-2367-52c6-ae41-b26c9e844c69)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7641] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7644] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7648] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7650] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7652] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7654] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7657] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7660] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7673] device (eth1): disconnecting for new activation request.
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7674] audit: op="connection-activate" uuid="0d6fb01b-2367-52c6-ae41-b26c9e844c69" name="ci-private-network" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7724] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7732] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7735] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7739] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7745] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7749] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7753] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7758] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7763] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7767] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7777] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7782] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7787] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7794] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7801] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7806] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7858] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=53148 uid=0 result="success"
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7859] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7870] device (eth1): Activation: starting connection 'ci-private-network' (0d6fb01b-2367-52c6-ae41-b26c9e844c69)
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7878] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7908] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7914] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7927] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7938] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7964] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7974] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7982] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7987] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.7997] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.8022] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.8032] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.8040] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.8052] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.8055] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.8064] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.8071] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.8081] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.8089] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.8100] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.8105] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.8108] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.8111] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.8119] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.8129] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.8137] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.8146] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 02:25:49 np0005558241 NetworkManager[50376]: <info>  [1765610749.8152] device (eth1): Activation: successful, device activated.
Dec 13 02:25:50 np0005558241 python3.9[53510]: ansible-ansible.legacy.async_status Invoked with jid=j606618159670.53142 mode=status _async_dir=/root/.ansible_async
Dec 13 02:25:51 np0005558241 NetworkManager[50376]: <info>  [1765610751.1916] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=53148 uid=0 result="success"
Dec 13 02:25:51 np0005558241 NetworkManager[50376]: <info>  [1765610751.3783] checkpoint[0x55af86903950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 13 02:25:51 np0005558241 NetworkManager[50376]: <info>  [1765610751.3785] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=53148 uid=0 result="success"
Dec 13 02:25:51 np0005558241 NetworkManager[50376]: <info>  [1765610751.7195] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=53148 uid=0 result="success"
Dec 13 02:25:51 np0005558241 NetworkManager[50376]: <info>  [1765610751.7216] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=53148 uid=0 result="success"
Dec 13 02:25:51 np0005558241 ansible-async_wrapper.py[53145]: 53146 still running (300)
Dec 13 02:25:52 np0005558241 NetworkManager[50376]: <info>  [1765610752.2005] audit: op="networking-control" arg="global-dns-configuration" pid=53148 uid=0 result="success"
Dec 13 02:25:52 np0005558241 NetworkManager[50376]: <info>  [1765610752.2151] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec 13 02:25:52 np0005558241 NetworkManager[50376]: <info>  [1765610752.2378] audit: op="networking-control" arg="global-dns-configuration" pid=53148 uid=0 result="success"
Dec 13 02:25:52 np0005558241 NetworkManager[50376]: <info>  [1765610752.2412] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=53148 uid=0 result="success"
Dec 13 02:25:52 np0005558241 NetworkManager[50376]: <info>  [1765610752.4040] checkpoint[0x55af86903a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 13 02:25:52 np0005558241 NetworkManager[50376]: <info>  [1765610752.4046] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=53148 uid=0 result="success"
Dec 13 02:25:52 np0005558241 ansible-async_wrapper.py[53146]: Module complete (53146)
Dec 13 02:25:54 np0005558241 python3.9[53617]: ansible-ansible.legacy.async_status Invoked with jid=j606618159670.53142 mode=status _async_dir=/root/.ansible_async
Dec 13 02:25:55 np0005558241 python3.9[53716]: ansible-ansible.legacy.async_status Invoked with jid=j606618159670.53142 mode=cleanup _async_dir=/root/.ansible_async
Dec 13 02:25:56 np0005558241 python3.9[53868]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:25:56 np0005558241 python3.9[53991]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610755.614127-322-146277746632199/.source.returncode _original_basename=.ylv2ea87 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:25:57 np0005558241 ansible-async_wrapper.py[53145]: Done in kid B.
Dec 13 02:25:57 np0005558241 python3.9[54144]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:25:58 np0005558241 python3.9[54267]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610757.0235846-338-222339438540608/.source.cfg _original_basename=.a4mvmj0n follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:25:59 np0005558241 python3.9[54419]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:25:59 np0005558241 systemd[1]: Reloading Network Manager...
Dec 13 02:25:59 np0005558241 NetworkManager[50376]: <info>  [1765610759.2780] audit: op="reload" arg="0" pid=54423 uid=0 result="success"
Dec 13 02:25:59 np0005558241 NetworkManager[50376]: <info>  [1765610759.2787] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 13 02:25:59 np0005558241 systemd[1]: Reloaded Network Manager.
Dec 13 02:25:59 np0005558241 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 02:25:59 np0005558241 systemd[1]: session-13.scope: Consumed 53.283s CPU time.
Dec 13 02:25:59 np0005558241 systemd-logind[787]: Session 13 logged out. Waiting for processes to exit.
Dec 13 02:25:59 np0005558241 systemd-logind[787]: Removed session 13.
Dec 13 02:26:06 np0005558241 systemd-logind[787]: New session 14 of user zuul.
Dec 13 02:26:06 np0005558241 systemd[1]: Started Session 14 of User zuul.
Dec 13 02:26:07 np0005558241 python3.9[54608]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:26:08 np0005558241 python3.9[54762]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:26:09 np0005558241 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 02:26:10 np0005558241 python3.9[54957]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:26:11 np0005558241 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 02:26:11 np0005558241 systemd[1]: session-14.scope: Consumed 2.737s CPU time.
Dec 13 02:26:11 np0005558241 systemd-logind[787]: Session 14 logged out. Waiting for processes to exit.
Dec 13 02:26:11 np0005558241 systemd-logind[787]: Removed session 14.
Dec 13 02:26:17 np0005558241 systemd-logind[787]: New session 15 of user zuul.
Dec 13 02:26:17 np0005558241 systemd[1]: Started Session 15 of User zuul.
Dec 13 02:26:18 np0005558241 python3.9[55138]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:26:19 np0005558241 python3.9[55292]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:26:20 np0005558241 python3.9[55449]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:26:21 np0005558241 python3.9[55533]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:26:24 np0005558241 python3.9[55686]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:26:26 np0005558241 python3.9[55881]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:27 np0005558241 python3.9[56033]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:26:28 np0005558241 systemd[1]: var-lib-containers-storage-overlay-compat2063838020-merged.mount: Deactivated successfully.
Dec 13 02:26:29 np0005558241 podman[56034]: 2025-12-13 07:26:29.616976019 +0000 UTC m=+1.810295944 system refresh
Dec 13 02:26:29 np0005558241 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:26:30 np0005558241 python3.9[56196]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:31 np0005558241 python3.9[56319]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610789.9054606-79-110785402220446/.source.json follow=False _original_basename=podman_network_config.j2 checksum=7a247bd1e520a2945b16dcf9dc4b1700d3381611 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:32 np0005558241 python3.9[56471]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:33 np0005558241 python3.9[56594]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610791.6504836-94-159355057764958/.source.conf follow=False _original_basename=registries.conf.j2 checksum=bf131db1250b222e56a9a1fb5f7d68c79af7f303 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:26:34 np0005558241 python3.9[56746]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:26:34 np0005558241 python3.9[56898]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:26:35 np0005558241 python3.9[57050]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:26:36 np0005558241 python3.9[57202]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:26:37 np0005558241 python3.9[57354]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:26:39 np0005558241 python3.9[57507]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:26:40 np0005558241 python3.9[57661]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:26:40 np0005558241 python3.9[57813]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:26:41 np0005558241 python3.9[57965]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:26:42 np0005558241 python3.9[58118]: ansible-service_facts Invoked
Dec 13 02:26:42 np0005558241 network[58135]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 02:26:42 np0005558241 network[58136]: 'network-scripts' will be removed from distribution in near future.
Dec 13 02:26:42 np0005558241 network[58137]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 02:26:49 np0005558241 python3.9[58589]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:26:51 np0005558241 python3.9[58742]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 13 02:26:52 np0005558241 python3.9[58894]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:53 np0005558241 python3.9[59019]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610812.3768182-238-261597979248782/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:54 np0005558241 python3.9[59173]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:26:54 np0005558241 python3.9[59298]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610813.8309112-253-84339956854384/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:56 np0005558241 python3.9[59452]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:26:57 np0005558241 python3.9[59606]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:26:58 np0005558241 python3.9[59690]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:26:59 np0005558241 python3.9[59844]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:27:00 np0005558241 python3.9[59928]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:27:00 np0005558241 chronyd[785]: chronyd exiting
Dec 13 02:27:00 np0005558241 systemd[1]: Stopping NTP client/server...
Dec 13 02:27:00 np0005558241 systemd[1]: chronyd.service: Deactivated successfully.
Dec 13 02:27:00 np0005558241 systemd[1]: Stopped NTP client/server.
Dec 13 02:27:00 np0005558241 systemd[1]: Starting NTP client/server...
Dec 13 02:27:00 np0005558241 chronyd[59937]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 13 02:27:00 np0005558241 chronyd[59937]: Frequency -25.159 +/- 0.229 ppm read from /var/lib/chrony/drift
Dec 13 02:27:00 np0005558241 chronyd[59937]: Loaded seccomp filter (level 2)
Dec 13 02:27:00 np0005558241 systemd[1]: Started NTP client/server.
Dec 13 02:27:01 np0005558241 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 02:27:01 np0005558241 systemd[1]: session-15.scope: Consumed 28.862s CPU time.
Dec 13 02:27:01 np0005558241 systemd-logind[787]: Session 15 logged out. Waiting for processes to exit.
Dec 13 02:27:01 np0005558241 systemd-logind[787]: Removed session 15.
Dec 13 02:27:10 np0005558241 systemd-logind[787]: New session 16 of user zuul.
Dec 13 02:27:10 np0005558241 systemd[1]: Started Session 16 of User zuul.
Dec 13 02:27:11 np0005558241 python3.9[60118]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:12 np0005558241 python3.9[60270]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:13 np0005558241 python3.9[60393]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610831.6551962-34-206686735848836/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:13 np0005558241 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 02:27:13 np0005558241 systemd[1]: session-16.scope: Consumed 1.966s CPU time.
Dec 13 02:27:13 np0005558241 systemd-logind[787]: Session 16 logged out. Waiting for processes to exit.
Dec 13 02:27:13 np0005558241 systemd-logind[787]: Removed session 16.
Dec 13 02:27:34 np0005558241 systemd-logind[787]: New session 17 of user zuul.
Dec 13 02:27:34 np0005558241 systemd[1]: Started Session 17 of User zuul.
Dec 13 02:27:35 np0005558241 python3.9[60571]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:27:36 np0005558241 python3.9[60727]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:37 np0005558241 python3.9[60902]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:38 np0005558241 python3.9[61025]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765610856.8296666-41-106852821459007/.source.json _original_basename=.evc50_vg follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:40 np0005558241 python3.9[61177]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:40 np0005558241 python3.9[61300]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610859.6243045-64-257177240908849/.source _original_basename=.t7hiznkt follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:41 np0005558241 python3.9[61452]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:27:42 np0005558241 python3.9[61604]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:43 np0005558241 python3.9[61727]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610862.0617466-88-32255338486548/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:27:44 np0005558241 python3.9[61879]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:45 np0005558241 python3.9[62002]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765610863.686796-88-32765446600783/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:27:45 np0005558241 python3.9[62154]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:46 np0005558241 python3.9[62306]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:47 np0005558241 python3.9[62429]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610866.2406447-125-86325718225656/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:48 np0005558241 python3.9[62581]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:49 np0005558241 python3.9[62704]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610867.7497058-140-280620875683806/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:50 np0005558241 python3.9[62856]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:27:50 np0005558241 systemd[1]: Reloading.
Dec 13 02:27:50 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:27:50 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:27:50 np0005558241 systemd[1]: Reloading.
Dec 13 02:27:50 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:27:50 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:27:51 np0005558241 systemd[1]: Starting EDPM Container Shutdown...
Dec 13 02:27:51 np0005558241 systemd[1]: Finished EDPM Container Shutdown.
Dec 13 02:27:51 np0005558241 python3.9[63083]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:52 np0005558241 python3.9[63206]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610871.3829625-163-229492342113541/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:53 np0005558241 python3.9[63358]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:27:54 np0005558241 python3.9[63481]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610872.9060376-178-207223548971050/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:27:55 np0005558241 python3.9[63633]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:27:55 np0005558241 systemd[1]: Reloading.
Dec 13 02:27:55 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:27:55 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:27:55 np0005558241 systemd[1]: Reloading.
Dec 13 02:27:55 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:27:55 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:27:55 np0005558241 systemd[1]: Starting Create netns directory...
Dec 13 02:27:55 np0005558241 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 02:27:55 np0005558241 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 02:27:55 np0005558241 systemd[1]: Finished Create netns directory.
Dec 13 02:27:56 np0005558241 python3.9[63858]: ansible-ansible.builtin.service_facts Invoked
Dec 13 02:27:56 np0005558241 network[63875]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 02:27:56 np0005558241 network[63876]: 'network-scripts' will be removed from distribution in near future.
Dec 13 02:27:56 np0005558241 network[63877]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 02:28:01 np0005558241 python3.9[64139]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:28:01 np0005558241 systemd[1]: Reloading.
Dec 13 02:28:01 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:28:01 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:28:01 np0005558241 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 13 02:28:01 np0005558241 iptables.init[64180]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 13 02:28:01 np0005558241 iptables.init[64180]: iptables: Flushing firewall rules: [  OK  ]
Dec 13 02:28:01 np0005558241 systemd[1]: iptables.service: Deactivated successfully.
Dec 13 02:28:01 np0005558241 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 13 02:28:02 np0005558241 python3.9[64376]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:28:03 np0005558241 python3.9[64530]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:28:03 np0005558241 systemd[1]: Reloading.
Dec 13 02:28:04 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:28:04 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:28:04 np0005558241 systemd[1]: Starting Netfilter Tables...
Dec 13 02:28:04 np0005558241 systemd[1]: Finished Netfilter Tables.
Dec 13 02:28:05 np0005558241 python3.9[64723]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:28:06 np0005558241 python3.9[64876]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:07 np0005558241 python3.9[65001]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610885.8330083-247-264946330324715/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:08 np0005558241 python3.9[65154]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:28:08 np0005558241 systemd[1]: Reloading OpenSSH server daemon...
Dec 13 02:28:08 np0005558241 systemd[1]: Reloaded OpenSSH server daemon.
Dec 13 02:28:09 np0005558241 python3.9[65310]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:09 np0005558241 python3.9[65462]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:10 np0005558241 python3.9[65585]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610889.2339346-278-99391212984307/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:11 np0005558241 python3.9[65737]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 13 02:28:11 np0005558241 systemd[1]: Starting Time & Date Service...
Dec 13 02:28:11 np0005558241 systemd[1]: Started Time & Date Service.
Dec 13 02:28:12 np0005558241 python3.9[65893]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:13 np0005558241 python3.9[66045]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:13 np0005558241 python3.9[66168]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610892.6679392-313-45284575802054/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:14 np0005558241 python3.9[66320]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:15 np0005558241 python3.9[66443]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765610894.1988537-328-121970632197943/.source.yaml _original_basename=.vb62sh5f follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:16 np0005558241 python3.9[66595]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:16 np0005558241 python3.9[66718]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610895.6417532-343-84393646521794/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:17 np0005558241 python3.9[66870]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:28:18 np0005558241 python3.9[67023]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:28:19 np0005558241 python3[67176]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 13 02:28:20 np0005558241 python3.9[67328]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:20 np0005558241 python3.9[67451]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610899.502142-382-26473721119392/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:21 np0005558241 python3.9[67603]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:22 np0005558241 python3.9[67727]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610900.8639634-397-176169653852559/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:22 np0005558241 python3.9[67879]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:23 np0005558241 python3.9[68002]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610902.371246-412-9420407986231/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:24 np0005558241 python3.9[68154]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:24 np0005558241 python3.9[68277]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610903.764948-427-268275539833180/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:25 np0005558241 python3.9[68429]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:28:26 np0005558241 python3.9[68552]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765610905.1666653-442-221661006170083/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:27 np0005558241 python3.9[68704]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:27 np0005558241 python3.9[68856]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:28:28 np0005558241 python3.9[69015]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:29 np0005558241 python3.9[69168]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:30 np0005558241 python3.9[69320]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:32 np0005558241 python3.9[69472]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 13 02:28:32 np0005558241 python3.9[69625]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 13 02:28:33 np0005558241 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 02:28:33 np0005558241 systemd[1]: session-17.scope: Consumed 43.316s CPU time.
Dec 13 02:28:33 np0005558241 systemd-logind[787]: Session 17 logged out. Waiting for processes to exit.
Dec 13 02:28:33 np0005558241 systemd-logind[787]: Removed session 17.
Dec 13 02:28:38 np0005558241 systemd-logind[787]: New session 18 of user zuul.
Dec 13 02:28:38 np0005558241 systemd[1]: Started Session 18 of User zuul.
Dec 13 02:28:39 np0005558241 python3.9[69807]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 13 02:28:40 np0005558241 python3.9[69959]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:28:41 np0005558241 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 13 02:28:42 np0005558241 python3.9[70113]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:28:43 np0005558241 python3.9[70265]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuiiurzJawdjQUeleljBiSOhGha2e1B2btKk1yfXMVN6SlAG7Fo+tpstAA0GHhdjs686GNYQ5oIzK/3xUtTv0Ld63YzgMVucWJKTjU8ChnIwAosyDYKaiVH2QPz1TXD3FZiOfMw5yLsK+Ha0vde4EmlqaVCGmbOvxvAAho+rthUql+33JjQkYxMCKLy/kTXHQ8BfZk4puxMSZG+LRCIdmFXZpGQCte+0L+T7WTjWfUlk4RhoD21anOaolpaE7eixUwz3blgpjOeJuvB4IU88ecjgUIWNPLhz7thzuYjBw7l8q7KkPAvt4lcEV1NG4U5a5AtgE7cZjejHF9H8vPQ+aSo7el+YImpcI6QeYfZIoinToUOql53fCcG2oF6xcRh8dRTTAspGHELURaoGIIjtYWC6K8ueDlP6lFT5IDFyOjY+U2zU8E+09kbE/LKHTixWVDGA7N4CNu1JzGUt13kg7YkVrP6VYIgeaqf5B3lgaxSn5NjcVB/9yUne0wihcMDtk=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDu6G0+tHL0ibqZrMcAzREzzy/eq1WSdRdm3gwyJVIYs#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPkz157vedTD2Vnzbot0Swfx31EDYyup+EGwSclOeVmGP8/QLCKwJ0UWdGUGTXpBBeBPh/usx8UND+o/lWLPhBo=#012 create=True mode=0644 path=/tmp/ansible.uhmntv57 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:44 np0005558241 python3.9[70417]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.uhmntv57' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:28:45 np0005558241 python3.9[70571]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.uhmntv57 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:45 np0005558241 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 02:28:45 np0005558241 systemd[1]: session-18.scope: Consumed 4.490s CPU time.
Dec 13 02:28:45 np0005558241 systemd-logind[787]: Session 18 logged out. Waiting for processes to exit.
Dec 13 02:28:45 np0005558241 systemd-logind[787]: Removed session 18.
Dec 13 02:28:52 np0005558241 systemd-logind[787]: New session 19 of user zuul.
Dec 13 02:28:52 np0005558241 systemd[1]: Started Session 19 of User zuul.
Dec 13 02:28:53 np0005558241 python3.9[70749]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:28:54 np0005558241 python3.9[70905]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 13 02:28:55 np0005558241 python3.9[71059]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:28:56 np0005558241 python3.9[71212]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:28:57 np0005558241 python3.9[71365]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:28:58 np0005558241 python3.9[71519]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:28:59 np0005558241 python3.9[71674]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:28:59 np0005558241 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 02:28:59 np0005558241 systemd[1]: session-19.scope: Consumed 5.429s CPU time.
Dec 13 02:28:59 np0005558241 systemd-logind[787]: Session 19 logged out. Waiting for processes to exit.
Dec 13 02:28:59 np0005558241 systemd-logind[787]: Removed session 19.
Dec 13 02:29:05 np0005558241 systemd-logind[787]: New session 20 of user zuul.
Dec 13 02:29:05 np0005558241 systemd[1]: Started Session 20 of User zuul.
Dec 13 02:29:07 np0005558241 python3.9[71852]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:29:08 np0005558241 python3.9[72008]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:29:09 np0005558241 python3.9[72092]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 13 02:29:10 np0005558241 chronyd[59937]: Selected source 142.4.192.253 (pool.ntp.org)
Dec 13 02:29:11 np0005558241 python3.9[72243]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:29:12 np0005558241 python3.9[72394]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 02:29:13 np0005558241 python3.9[72544]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:29:13 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 02:29:14 np0005558241 python3.9[72695]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:29:14 np0005558241 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 02:29:14 np0005558241 systemd[1]: session-20.scope: Consumed 6.898s CPU time.
Dec 13 02:29:14 np0005558241 systemd-logind[787]: Session 20 logged out. Waiting for processes to exit.
Dec 13 02:29:14 np0005558241 systemd-logind[787]: Removed session 20.
Dec 13 02:29:23 np0005558241 systemd-logind[787]: New session 21 of user zuul.
Dec 13 02:29:23 np0005558241 systemd[1]: Started Session 21 of User zuul.
Dec 13 02:29:30 np0005558241 python3[73461]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:29:32 np0005558241 python3[73557]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 02:29:34 np0005558241 python3[73584]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 02:29:34 np0005558241 python3[73610]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
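The `#012` sequences inside these `_raw_params` values are octal byte escapes (012 octal = `\n`) produced when rsyslog/journald flattens a multi-line shell script onto one log line. A minimal sketch of decoding one back into the original script (the raw string is taken verbatim from the line above):

```python
# Decode rsyslog/journald "#NNN" octal escapes (e.g. #012 = "\n") to
# recover the multi-line shell script logged in a single _raw_params line.
import re

def decode_syslog_escapes(s: str) -> str:
    # "#NNN" encodes one byte as three octal digits.
    return re.sub(r"#(\d{3})", lambda m: chr(int(m.group(1), 8)), s)

raw = ("dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G"
       "#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk")
print(decode_syslog_escapes(raw))
```

Decoded, the task is a three-line script: create a sparse 20 GiB backing file, attach it to `/dev/loop3`, then list block devices.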
Dec 13 02:29:34 np0005558241 kernel: loop: module loaded
Dec 13 02:29:34 np0005558241 kernel: loop3: detected capacity change from 0 to 41943040
Dec 13 02:29:35 np0005558241 python3[73645]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:29:35 np0005558241 lvm[73648]: PV /dev/loop3 not used.
Dec 13 02:29:35 np0005558241 lvm[73650]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:29:35 np0005558241 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 13 02:29:35 np0005558241 lvm[73652]:  0 logical volume(s) in volume group "ceph_vg0" now active
Dec 13 02:29:35 np0005558241 lvm[73653]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:29:35 np0005558241 lvm[73653]: VG ceph_vg0 finished
Dec 13 02:29:35 np0005558241 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec 13 02:29:35 np0005558241 lvm[73661]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:29:35 np0005558241 lvm[73661]: VG ceph_vg0 finished
Dec 13 02:29:36 np0005558241 python3[73739]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:29:36 np0005558241 python3[73812]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765610975.8910868-36347-227476722181367/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:29:37 np0005558241 python3[73862]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:29:37 np0005558241 systemd[1]: Reloading.
Dec 13 02:29:37 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:29:37 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:29:37 np0005558241 systemd[1]: Starting Ceph OSD losetup...
Dec 13 02:29:37 np0005558241 bash[73902]: /dev/loop3: [64513]:4327942 (/var/lib/ceph-osd-0.img)
Dec 13 02:29:38 np0005558241 systemd[1]: Finished Ceph OSD losetup.
Dec 13 02:29:38 np0005558241 lvm[73904]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:29:38 np0005558241 lvm[73904]: VG ceph_vg0 finished
Dec 13 02:29:38 np0005558241 python3[73930]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 02:29:40 np0005558241 python3[73957]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 02:29:40 np0005558241 python3[73983]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:29:40 np0005558241 kernel: loop4: detected capacity change from 0 to 41943040
Dec 13 02:29:40 np0005558241 python3[74014]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:29:40 np0005558241 lvm[74017]: PV /dev/loop4 not used.
Dec 13 02:29:40 np0005558241 lvm[74019]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:29:41 np0005558241 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec 13 02:29:41 np0005558241 lvm[74023]:  1 logical volume(s) in volume group "ceph_vg1" now active
Dec 13 02:29:41 np0005558241 lvm[74032]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:29:41 np0005558241 lvm[74032]: VG ceph_vg1 finished
Dec 13 02:29:41 np0005558241 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec 13 02:29:41 np0005558241 python3[74110]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:29:42 np0005558241 python3[74183]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765610981.4394794-36374-106427151614726/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:29:42 np0005558241 python3[74233]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:29:42 np0005558241 systemd[1]: Reloading.
Dec 13 02:29:43 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:29:43 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:29:43 np0005558241 systemd[1]: Starting Ceph OSD losetup...
Dec 13 02:29:43 np0005558241 bash[74273]: /dev/loop4: [64513]:4327944 (/var/lib/ceph-osd-1.img)
Dec 13 02:29:43 np0005558241 systemd[1]: Finished Ceph OSD losetup.
Dec 13 02:29:43 np0005558241 lvm[74274]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:29:43 np0005558241 lvm[74274]: VG ceph_vg1 finished
Dec 13 02:29:43 np0005558241 python3[74300]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 02:29:45 np0005558241 python3[74327]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 02:29:45 np0005558241 python3[74353]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:29:45 np0005558241 kernel: loop5: detected capacity change from 0 to 41943040
Dec 13 02:29:45 np0005558241 python3[74385]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:29:46 np0005558241 lvm[74388]: PV /dev/loop5 not used.
Dec 13 02:29:46 np0005558241 lvm[74390]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:29:46 np0005558241 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Dec 13 02:29:46 np0005558241 lvm[74393]:  1 logical volume(s) in volume group "ceph_vg2" now active
Dec 13 02:29:46 np0005558241 lvm[74400]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:29:46 np0005558241 lvm[74400]: VG ceph_vg2 finished
Dec 13 02:29:46 np0005558241 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Dec 13 02:29:46 np0005558241 python3[74478]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:29:47 np0005558241 python3[74553]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765610986.4739084-36401-112888203974969/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:29:47 np0005558241 python3[74603]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:29:47 np0005558241 systemd[1]: Reloading.
Dec 13 02:29:48 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:29:48 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:29:48 np0005558241 systemd[1]: Starting Ceph OSD losetup...
Dec 13 02:29:48 np0005558241 bash[74643]: /dev/loop5: [64513]:4327965 (/var/lib/ceph-osd-2.img)
Dec 13 02:29:48 np0005558241 systemd[1]: Finished Ceph OSD losetup.
Dec 13 02:29:48 np0005558241 lvm[74644]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:29:48 np0005558241 lvm[74644]: VG ceph_vg2 finished
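The three OSD blocks above (loop3/ceph_vg0, loop4/ceph_vg1, loop5/ceph_vg2) repeat one parameterized sequence. A sketch regenerating the logged commands for OSD index n, with the device offset and naming taken from the log:

```python
# Regenerate the per-OSD loopback + LVM command sequence observed in the
# log. Loop numbering starts at /dev/loop3 for OSD 0, per the log lines.
def osd_setup_commands(n: int) -> list[str]:
    img = f"/var/lib/ceph-osd-{n}.img"
    loop = f"/dev/loop{3 + n}"
    return [
        # seek=20G with count=0 creates a sparse 20 GiB file
        f"dd if=/dev/zero of={img} bs=1 count=0 seek=20G",
        f"losetup {loop} {img}",
        f"pvcreate {loop}",
        f"vgcreate ceph_vg{n} {loop}",
        f"lvcreate -n ceph_lv{n} -l +100%FREE ceph_vg{n}",
    ]

for cmd in osd_setup_commands(0):
    print(cmd)
```

The 20 GiB sparse size matches the kernel's "detected capacity change from 0 to 41943040" messages (41943040 sectors x 512 bytes = 20 GiB).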
Dec 13 02:29:50 np0005558241 python3[74668]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:29:52 np0005558241 python3[74761]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 02:29:55 np0005558241 python3[74818]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 02:30:00 np0005558241 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 02:30:00 np0005558241 systemd[1]: Starting man-db-cache-update.service...
Dec 13 02:30:01 np0005558241 python3[74936]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 02:30:01 np0005558241 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 02:30:01 np0005558241 systemd[1]: Finished man-db-cache-update.service.
Dec 13 02:30:01 np0005558241 systemd[1]: run-rb57c0bf858b149818c1323d4c9c094a8.service: Deactivated successfully.
Dec 13 02:30:02 np0005558241 python3[74965]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:30:02 np0005558241 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:30:03 np0005558241 python3[75005]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:30:03 np0005558241 python3[75031]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:30:04 np0005558241 python3[75109]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:30:04 np0005558241 python3[75182]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765611003.9186263-36549-87088805753587/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:30:05 np0005558241 python3[75284]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:30:06 np0005558241 python3[75357]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765611005.2972388-36567-233537461891324/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:30:06 np0005558241 python3[75407]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 02:30:07 np0005558241 python3[75435]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 02:30:07 np0005558241 python3[75463]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 02:30:07 np0005558241 python3[75491]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
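The logged bootstrap command contains `\--` sequences (e.g. `\--single-host-defaults`), which appear to be shell line-continuation backslashes flattened onto one line, plus a trailing `#012` newline escape. A sketch of normalizing the logged string back to a single clean command line (the interpretation of the backslashes is an assumption; the flags themselves are verbatim from the log):

```python
# Normalize a cephadm bootstrap command as captured in the journal:
# decode "#NNN" octal escapes, drop leftover "\--" continuation
# backslashes, and collapse the result onto one line.
import re

def normalize_logged_command(s: str) -> str:
    s = re.sub(r"#(\d{3})", lambda m: chr(int(m.group(1), 8)), s)
    s = s.replace("\\--", "--")    # assumed line-continuation residue
    return " ".join(s.split())     # collapse newlines/extra spaces

raw = (r"/usr/sbin/cephadm bootstrap --skip-firewalld "
       r"\--single-host-defaults \--skip-dashboard "
       "--mon-ip 192.168.122.100#012")
print(normalize_logged_command(raw))
```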
Dec 13 02:30:08 np0005558241 systemd[1]: Created slice User Slice of UID 42477.
Dec 13 02:30:08 np0005558241 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 13 02:30:08 np0005558241 systemd-logind[787]: New session 22 of user ceph-admin.
Dec 13 02:30:08 np0005558241 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 13 02:30:08 np0005558241 systemd[1]: Starting User Manager for UID 42477...
Dec 13 02:30:08 np0005558241 systemd[75499]: Queued start job for default target Main User Target.
Dec 13 02:30:08 np0005558241 systemd[75499]: Created slice User Application Slice.
Dec 13 02:30:08 np0005558241 systemd[75499]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 13 02:30:08 np0005558241 systemd[75499]: Started Daily Cleanup of User's Temporary Directories.
Dec 13 02:30:08 np0005558241 systemd[75499]: Reached target Paths.
Dec 13 02:30:08 np0005558241 systemd[75499]: Reached target Timers.
Dec 13 02:30:08 np0005558241 systemd[75499]: Starting D-Bus User Message Bus Socket...
Dec 13 02:30:08 np0005558241 systemd[75499]: Starting Create User's Volatile Files and Directories...
Dec 13 02:30:08 np0005558241 systemd[75499]: Listening on D-Bus User Message Bus Socket.
Dec 13 02:30:08 np0005558241 systemd[75499]: Reached target Sockets.
Dec 13 02:30:08 np0005558241 systemd[75499]: Finished Create User's Volatile Files and Directories.
Dec 13 02:30:08 np0005558241 systemd[75499]: Reached target Basic System.
Dec 13 02:30:08 np0005558241 systemd[75499]: Reached target Main User Target.
Dec 13 02:30:08 np0005558241 systemd[75499]: Startup finished in 195ms.
Dec 13 02:30:08 np0005558241 systemd[1]: Started User Manager for UID 42477.
Dec 13 02:30:08 np0005558241 systemd[1]: Started Session 22 of User ceph-admin.
Dec 13 02:30:08 np0005558241 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 02:30:08 np0005558241 systemd-logind[787]: Session 22 logged out. Waiting for processes to exit.
Dec 13 02:30:08 np0005558241 systemd-logind[787]: Removed session 22.
Dec 13 02:30:08 np0005558241 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:30:08 np0005558241 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:30:11 np0005558241 systemd[1]: var-lib-containers-storage-overlay-compat1429064485-merged.mount: Deactivated successfully.
Dec 13 02:30:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay-compat1429064485-lower\x2dmapped.mount: Deactivated successfully.
Dec 13 02:30:18 np0005558241 systemd[1]: Stopping User Manager for UID 42477...
Dec 13 02:30:18 np0005558241 systemd[75499]: Activating special unit Exit the Session...
Dec 13 02:30:18 np0005558241 systemd[75499]: Stopped target Main User Target.
Dec 13 02:30:18 np0005558241 systemd[75499]: Stopped target Basic System.
Dec 13 02:30:18 np0005558241 systemd[75499]: Stopped target Paths.
Dec 13 02:30:18 np0005558241 systemd[75499]: Stopped target Sockets.
Dec 13 02:30:18 np0005558241 systemd[75499]: Stopped target Timers.
Dec 13 02:30:18 np0005558241 systemd[75499]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 13 02:30:18 np0005558241 systemd[75499]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 13 02:30:18 np0005558241 systemd[75499]: Closed D-Bus User Message Bus Socket.
Dec 13 02:30:18 np0005558241 systemd[75499]: Stopped Create User's Volatile Files and Directories.
Dec 13 02:30:18 np0005558241 systemd[75499]: Removed slice User Application Slice.
Dec 13 02:30:18 np0005558241 systemd[75499]: Reached target Shutdown.
Dec 13 02:30:18 np0005558241 systemd[75499]: Finished Exit the Session.
Dec 13 02:30:18 np0005558241 systemd[75499]: Reached target Exit the Session.
Dec 13 02:30:18 np0005558241 systemd[1]: user@42477.service: Deactivated successfully.
Dec 13 02:30:18 np0005558241 systemd[1]: Stopped User Manager for UID 42477.
Dec 13 02:30:18 np0005558241 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 13 02:30:18 np0005558241 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 13 02:30:18 np0005558241 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 13 02:30:18 np0005558241 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 13 02:30:18 np0005558241 systemd[1]: Removed slice User Slice of UID 42477.
Dec 13 02:30:41 np0005558241 podman[75593]: 2025-12-13 07:30:41.534155174 +0000 UTC m=+32.384059243 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:30:41 np0005558241 podman[75661]: 2025-12-13 07:30:41.64398716 +0000 UTC m=+0.079533787 container create ebe97279505515e3bc38b5858ee226b2cc2368977f45f86b122a15902e1b97c3 (image=quay.io/ceph/ceph:v20, name=funny_easley, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:30:41 np0005558241 podman[75661]: 2025-12-13 07:30:41.604836388 +0000 UTC m=+0.040383065 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:41 np0005558241 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 13 02:30:41 np0005558241 systemd[1]: Started libpod-conmon-ebe97279505515e3bc38b5858ee226b2cc2368977f45f86b122a15902e1b97c3.scope.
Dec 13 02:30:41 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:30:41 np0005558241 podman[75661]: 2025-12-13 07:30:41.916514687 +0000 UTC m=+0.352061304 container init ebe97279505515e3bc38b5858ee226b2cc2368977f45f86b122a15902e1b97c3 (image=quay.io/ceph/ceph:v20, name=funny_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:30:41 np0005558241 podman[75661]: 2025-12-13 07:30:41.928551669 +0000 UTC m=+0.364098296 container start ebe97279505515e3bc38b5858ee226b2cc2368977f45f86b122a15902e1b97c3 (image=quay.io/ceph/ceph:v20, name=funny_easley, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 02:30:41 np0005558241 podman[75661]: 2025-12-13 07:30:41.986443721 +0000 UTC m=+0.421990358 container attach ebe97279505515e3bc38b5858ee226b2cc2368977f45f86b122a15902e1b97c3 (image=quay.io/ceph/ceph:v20, name=funny_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 02:30:42 np0005558241 funny_easley[75677]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Dec 13 02:30:42 np0005558241 systemd[1]: libpod-ebe97279505515e3bc38b5858ee226b2cc2368977f45f86b122a15902e1b97c3.scope: Deactivated successfully.
Dec 13 02:30:42 np0005558241 podman[75661]: 2025-12-13 07:30:42.06013363 +0000 UTC m=+0.495680257 container died ebe97279505515e3bc38b5858ee226b2cc2368977f45f86b122a15902e1b97c3 (image=quay.io/ceph/ceph:v20, name=funny_easley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 02:30:42 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6af2d6e5954a52293cfd15dba7d48b77f5998a0240a7f943b4e8a4f3e830d3a0-merged.mount: Deactivated successfully.
Dec 13 02:30:42 np0005558241 podman[75661]: 2025-12-13 07:30:42.276398196 +0000 UTC m=+0.711944813 container remove ebe97279505515e3bc38b5858ee226b2cc2368977f45f86b122a15902e1b97c3 (image=quay.io/ceph/ceph:v20, name=funny_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:30:42 np0005558241 systemd[1]: libpod-conmon-ebe97279505515e3bc38b5858ee226b2cc2368977f45f86b122a15902e1b97c3.scope: Deactivated successfully.
Dec 13 02:30:42 np0005558241 podman[75696]: 2025-12-13 07:30:42.363633485 +0000 UTC m=+0.057458203 container create ae29cee87bb3c4b3fc198affe9a08c0dbaeb898a165a7a5bf442981deb3f2470 (image=quay.io/ceph/ceph:v20, name=elegant_leavitt, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 02:30:42 np0005558241 systemd[1]: Started libpod-conmon-ae29cee87bb3c4b3fc198affe9a08c0dbaeb898a165a7a5bf442981deb3f2470.scope.
Dec 13 02:30:42 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:30:42 np0005558241 podman[75696]: 2025-12-13 07:30:42.341803367 +0000 UTC m=+0.035628125 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:42 np0005558241 podman[75696]: 2025-12-13 07:30:42.442659437 +0000 UTC m=+0.136484195 container init ae29cee87bb3c4b3fc198affe9a08c0dbaeb898a165a7a5bf442981deb3f2470 (image=quay.io/ceph/ceph:v20, name=elegant_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 02:30:42 np0005558241 podman[75696]: 2025-12-13 07:30:42.448444722 +0000 UTC m=+0.142269470 container start ae29cee87bb3c4b3fc198affe9a08c0dbaeb898a165a7a5bf442981deb3f2470 (image=quay.io/ceph/ceph:v20, name=elegant_leavitt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 02:30:42 np0005558241 podman[75696]: 2025-12-13 07:30:42.452219527 +0000 UTC m=+0.146044305 container attach ae29cee87bb3c4b3fc198affe9a08c0dbaeb898a165a7a5bf442981deb3f2470 (image=quay.io/ceph/ceph:v20, name=elegant_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:30:42 np0005558241 elegant_leavitt[75712]: 167 167
Dec 13 02:30:42 np0005558241 systemd[1]: libpod-ae29cee87bb3c4b3fc198affe9a08c0dbaeb898a165a7a5bf442981deb3f2470.scope: Deactivated successfully.
Dec 13 02:30:42 np0005558241 podman[75696]: 2025-12-13 07:30:42.454289159 +0000 UTC m=+0.148113907 container died ae29cee87bb3c4b3fc198affe9a08c0dbaeb898a165a7a5bf442981deb3f2470 (image=quay.io/ceph/ceph:v20, name=elegant_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:30:42 np0005558241 podman[75696]: 2025-12-13 07:30:42.503733849 +0000 UTC m=+0.197558587 container remove ae29cee87bb3c4b3fc198affe9a08c0dbaeb898a165a7a5bf442981deb3f2470 (image=quay.io/ceph/ceph:v20, name=elegant_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:30:42 np0005558241 systemd[1]: libpod-conmon-ae29cee87bb3c4b3fc198affe9a08c0dbaeb898a165a7a5bf442981deb3f2470.scope: Deactivated successfully.
Dec 13 02:30:42 np0005558241 podman[75728]: 2025-12-13 07:30:42.587348807 +0000 UTC m=+0.057111174 container create 6ed42121618973d805a2e226205de7d27b554556c6d0668bd4e94c781a3d72f7 (image=quay.io/ceph/ceph:v20, name=quirky_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 02:30:42 np0005558241 systemd[1]: Started libpod-conmon-6ed42121618973d805a2e226205de7d27b554556c6d0668bd4e94c781a3d72f7.scope.
Dec 13 02:30:42 np0005558241 podman[75728]: 2025-12-13 07:30:42.561429177 +0000 UTC m=+0.031191594 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:42 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:30:42 np0005558241 podman[75728]: 2025-12-13 07:30:42.684022873 +0000 UTC m=+0.153785310 container init 6ed42121618973d805a2e226205de7d27b554556c6d0668bd4e94c781a3d72f7 (image=quay.io/ceph/ceph:v20, name=quirky_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:30:42 np0005558241 podman[75728]: 2025-12-13 07:30:42.692317831 +0000 UTC m=+0.162080198 container start 6ed42121618973d805a2e226205de7d27b554556c6d0668bd4e94c781a3d72f7 (image=quay.io/ceph/ceph:v20, name=quirky_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 02:30:42 np0005558241 podman[75728]: 2025-12-13 07:30:42.697110481 +0000 UTC m=+0.166872908 container attach 6ed42121618973d805a2e226205de7d27b554556c6d0668bd4e94c781a3d72f7 (image=quay.io/ceph/ceph:v20, name=quirky_germain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 02:30:42 np0005558241 quirky_germain[75745]: AQAiFj1pOBtWKxAApIBnBO2zAVBiQJHyz+6Fbg==
Dec 13 02:30:42 np0005558241 systemd[1]: libpod-6ed42121618973d805a2e226205de7d27b554556c6d0668bd4e94c781a3d72f7.scope: Deactivated successfully.
Dec 13 02:30:42 np0005558241 podman[75728]: 2025-12-13 07:30:42.73336089 +0000 UTC m=+0.203123257 container died 6ed42121618973d805a2e226205de7d27b554556c6d0668bd4e94c781a3d72f7 (image=quay.io/ceph/ceph:v20, name=quirky_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:30:42 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f55b24e1040cc6de983a307bcfe2154d63e0ca3a7564bab4cb882b0570e828b9-merged.mount: Deactivated successfully.
Dec 13 02:30:42 np0005558241 podman[75728]: 2025-12-13 07:30:42.784738659 +0000 UTC m=+0.254501036 container remove 6ed42121618973d805a2e226205de7d27b554556c6d0668bd4e94c781a3d72f7 (image=quay.io/ceph/ceph:v20, name=quirky_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:30:42 np0005558241 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:30:42 np0005558241 systemd[1]: libpod-conmon-6ed42121618973d805a2e226205de7d27b554556c6d0668bd4e94c781a3d72f7.scope: Deactivated successfully.
Dec 13 02:30:42 np0005558241 podman[75764]: 2025-12-13 07:30:42.865106926 +0000 UTC m=+0.058842638 container create 8455685bda230714b24b52ada7997c8e64d6dde05d037be4e609c08e2f457bdd (image=quay.io/ceph/ceph:v20, name=funny_shaw, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 02:30:42 np0005558241 systemd[1]: Started libpod-conmon-8455685bda230714b24b52ada7997c8e64d6dde05d037be4e609c08e2f457bdd.scope.
Dec 13 02:30:42 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:30:42 np0005558241 podman[75764]: 2025-12-13 07:30:42.832426346 +0000 UTC m=+0.026162148 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:42 np0005558241 podman[75764]: 2025-12-13 07:30:42.931519632 +0000 UTC m=+0.125255434 container init 8455685bda230714b24b52ada7997c8e64d6dde05d037be4e609c08e2f457bdd (image=quay.io/ceph/ceph:v20, name=funny_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:30:42 np0005558241 podman[75764]: 2025-12-13 07:30:42.935873651 +0000 UTC m=+0.129609363 container start 8455685bda230714b24b52ada7997c8e64d6dde05d037be4e609c08e2f457bdd (image=quay.io/ceph/ceph:v20, name=funny_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:30:42 np0005558241 podman[75764]: 2025-12-13 07:30:42.939713817 +0000 UTC m=+0.133449529 container attach 8455685bda230714b24b52ada7997c8e64d6dde05d037be4e609c08e2f457bdd (image=quay.io/ceph/ceph:v20, name=funny_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:30:42 np0005558241 funny_shaw[75779]: AQAiFj1pj+snORAAvTyylusQYR634AeS82+RxQ==
Dec 13 02:30:42 np0005558241 systemd[1]: libpod-8455685bda230714b24b52ada7997c8e64d6dde05d037be4e609c08e2f457bdd.scope: Deactivated successfully.
Dec 13 02:30:42 np0005558241 podman[75764]: 2025-12-13 07:30:42.963493694 +0000 UTC m=+0.157229436 container died 8455685bda230714b24b52ada7997c8e64d6dde05d037be4e609c08e2f457bdd (image=quay.io/ceph/ceph:v20, name=funny_shaw, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:30:43 np0005558241 podman[75764]: 2025-12-13 07:30:43.006559164 +0000 UTC m=+0.200294906 container remove 8455685bda230714b24b52ada7997c8e64d6dde05d037be4e609c08e2f457bdd (image=quay.io/ceph/ceph:v20, name=funny_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:30:43 np0005558241 systemd[1]: libpod-conmon-8455685bda230714b24b52ada7997c8e64d6dde05d037be4e609c08e2f457bdd.scope: Deactivated successfully.
Dec 13 02:30:43 np0005558241 podman[75797]: 2025-12-13 07:30:43.073835392 +0000 UTC m=+0.043282797 container create 5189648a3f5ecf767a28451f45dbced14d082dd3be47681f75d59849a007f665 (image=quay.io/ceph/ceph:v20, name=elegant_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:30:43 np0005558241 systemd[1]: Started libpod-conmon-5189648a3f5ecf767a28451f45dbced14d082dd3be47681f75d59849a007f665.scope.
Dec 13 02:30:43 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:30:43 np0005558241 podman[75797]: 2025-12-13 07:30:43.056930168 +0000 UTC m=+0.026377553 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:43 np0005558241 podman[75797]: 2025-12-13 07:30:43.158121147 +0000 UTC m=+0.127568552 container init 5189648a3f5ecf767a28451f45dbced14d082dd3be47681f75d59849a007f665 (image=quay.io/ceph/ceph:v20, name=elegant_sammet, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 02:30:43 np0005558241 podman[75797]: 2025-12-13 07:30:43.167049761 +0000 UTC m=+0.136497166 container start 5189648a3f5ecf767a28451f45dbced14d082dd3be47681f75d59849a007f665 (image=quay.io/ceph/ceph:v20, name=elegant_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:30:43 np0005558241 podman[75797]: 2025-12-13 07:30:43.172034896 +0000 UTC m=+0.141482341 container attach 5189648a3f5ecf767a28451f45dbced14d082dd3be47681f75d59849a007f665 (image=quay.io/ceph/ceph:v20, name=elegant_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 02:30:43 np0005558241 elegant_sammet[75813]: AQAjFj1pjeofDBAAzG3kj8Cwcn34jWLNyOGyGQ==
Dec 13 02:30:43 np0005558241 systemd[1]: libpod-5189648a3f5ecf767a28451f45dbced14d082dd3be47681f75d59849a007f665.scope: Deactivated successfully.
Dec 13 02:30:43 np0005558241 podman[75797]: 2025-12-13 07:30:43.209230569 +0000 UTC m=+0.178677974 container died 5189648a3f5ecf767a28451f45dbced14d082dd3be47681f75d59849a007f665 (image=quay.io/ceph/ceph:v20, name=elegant_sammet, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:30:43 np0005558241 podman[75797]: 2025-12-13 07:30:43.262736031 +0000 UTC m=+0.232183426 container remove 5189648a3f5ecf767a28451f45dbced14d082dd3be47681f75d59849a007f665 (image=quay.io/ceph/ceph:v20, name=elegant_sammet, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 02:30:43 np0005558241 systemd[1]: libpod-conmon-5189648a3f5ecf767a28451f45dbced14d082dd3be47681f75d59849a007f665.scope: Deactivated successfully.
Dec 13 02:30:43 np0005558241 podman[75832]: 2025-12-13 07:30:43.356819512 +0000 UTC m=+0.062151991 container create 5a00c2073226ca828e795822705b14bc8b80fed4c09a611963bc9289702e2a8b (image=quay.io/ceph/ceph:v20, name=nostalgic_austin, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 02:30:43 np0005558241 systemd[1]: Started libpod-conmon-5a00c2073226ca828e795822705b14bc8b80fed4c09a611963bc9289702e2a8b.scope.
Dec 13 02:30:43 np0005558241 podman[75832]: 2025-12-13 07:30:43.329916967 +0000 UTC m=+0.035249506 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:43 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:30:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d34ed816d4b854d7060bdcaa9d0fcbd703d12f5200259e63bc8f158879f798/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:43 np0005558241 podman[75832]: 2025-12-13 07:30:43.450250576 +0000 UTC m=+0.155583065 container init 5a00c2073226ca828e795822705b14bc8b80fed4c09a611963bc9289702e2a8b (image=quay.io/ceph/ceph:v20, name=nostalgic_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:30:43 np0005558241 podman[75832]: 2025-12-13 07:30:43.458591385 +0000 UTC m=+0.163923874 container start 5a00c2073226ca828e795822705b14bc8b80fed4c09a611963bc9289702e2a8b (image=quay.io/ceph/ceph:v20, name=nostalgic_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:30:43 np0005558241 podman[75832]: 2025-12-13 07:30:43.463225891 +0000 UTC m=+0.168558380 container attach 5a00c2073226ca828e795822705b14bc8b80fed4c09a611963bc9289702e2a8b (image=quay.io/ceph/ceph:v20, name=nostalgic_austin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:30:43 np0005558241 nostalgic_austin[75848]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec 13 02:30:43 np0005558241 nostalgic_austin[75848]: setting min_mon_release = tentacle
Dec 13 02:30:43 np0005558241 nostalgic_austin[75848]: /usr/bin/monmaptool: set fsid to 18ee9de6-e00b-571b-ab9b-b7aab06174df
Dec 13 02:30:43 np0005558241 nostalgic_austin[75848]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec 13 02:30:43 np0005558241 systemd[1]: libpod-5a00c2073226ca828e795822705b14bc8b80fed4c09a611963bc9289702e2a8b.scope: Deactivated successfully.
Dec 13 02:30:43 np0005558241 podman[75832]: 2025-12-13 07:30:43.509339418 +0000 UTC m=+0.214671867 container died 5a00c2073226ca828e795822705b14bc8b80fed4c09a611963bc9289702e2a8b (image=quay.io/ceph/ceph:v20, name=nostalgic_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:30:43 np0005558241 podman[75832]: 2025-12-13 07:30:43.55684489 +0000 UTC m=+0.262177369 container remove 5a00c2073226ca828e795822705b14bc8b80fed4c09a611963bc9289702e2a8b (image=quay.io/ceph/ceph:v20, name=nostalgic_austin, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 02:30:43 np0005558241 systemd[1]: libpod-conmon-5a00c2073226ca828e795822705b14bc8b80fed4c09a611963bc9289702e2a8b.scope: Deactivated successfully.
Dec 13 02:30:43 np0005558241 podman[75867]: 2025-12-13 07:30:43.633468612 +0000 UTC m=+0.049519073 container create dc6a9c90d3006ee285597298c70f6191f183d760e66db9b2a052c0cbc7466649 (image=quay.io/ceph/ceph:v20, name=confident_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:30:43 np0005558241 systemd[1]: Started libpod-conmon-dc6a9c90d3006ee285597298c70f6191f183d760e66db9b2a052c0cbc7466649.scope.
Dec 13 02:30:43 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:30:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7c0a2c2b4c0eb8de6f8890e48fe71a031399757afdc15730565f34c5cbd9b6/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7c0a2c2b4c0eb8de6f8890e48fe71a031399757afdc15730565f34c5cbd9b6/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7c0a2c2b4c0eb8de6f8890e48fe71a031399757afdc15730565f34c5cbd9b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7c0a2c2b4c0eb8de6f8890e48fe71a031399757afdc15730565f34c5cbd9b6/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:43 np0005558241 podman[75867]: 2025-12-13 07:30:43.610806044 +0000 UTC m=+0.026856575 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:43 np0005558241 podman[75867]: 2025-12-13 07:30:43.715420738 +0000 UTC m=+0.131471289 container init dc6a9c90d3006ee285597298c70f6191f183d760e66db9b2a052c0cbc7466649 (image=quay.io/ceph/ceph:v20, name=confident_matsumoto, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 02:30:43 np0005558241 podman[75867]: 2025-12-13 07:30:43.726138247 +0000 UTC m=+0.142188738 container start dc6a9c90d3006ee285597298c70f6191f183d760e66db9b2a052c0cbc7466649 (image=quay.io/ceph/ceph:v20, name=confident_matsumoto, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 02:30:43 np0005558241 podman[75867]: 2025-12-13 07:30:43.730448715 +0000 UTC m=+0.146499206 container attach dc6a9c90d3006ee285597298c70f6191f183d760e66db9b2a052c0cbc7466649 (image=quay.io/ceph/ceph:v20, name=confident_matsumoto, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 02:30:43 np0005558241 systemd[1]: libpod-dc6a9c90d3006ee285597298c70f6191f183d760e66db9b2a052c0cbc7466649.scope: Deactivated successfully.
Dec 13 02:30:43 np0005558241 podman[75867]: 2025-12-13 07:30:43.844344643 +0000 UTC m=+0.260395124 container died dc6a9c90d3006ee285597298c70f6191f183d760e66db9b2a052c0cbc7466649 (image=quay.io/ceph/ceph:v20, name=confident_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:30:43 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9b7c0a2c2b4c0eb8de6f8890e48fe71a031399757afdc15730565f34c5cbd9b6-merged.mount: Deactivated successfully.
Dec 13 02:30:43 np0005558241 podman[75867]: 2025-12-13 07:30:43.894274046 +0000 UTC m=+0.310324537 container remove dc6a9c90d3006ee285597298c70f6191f183d760e66db9b2a052c0cbc7466649 (image=quay.io/ceph/ceph:v20, name=confident_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 02:30:43 np0005558241 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:30:43 np0005558241 systemd[1]: libpod-conmon-dc6a9c90d3006ee285597298c70f6191f183d760e66db9b2a052c0cbc7466649.scope: Deactivated successfully.
Dec 13 02:30:43 np0005558241 systemd[1]: Reloading.
Dec 13 02:30:44 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:30:44 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:30:44 np0005558241 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:30:44 np0005558241 systemd[1]: Reloading.
Dec 13 02:30:44 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:30:44 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:30:44 np0005558241 systemd[1]: Reached target All Ceph clusters and services.
Dec 13 02:30:44 np0005558241 systemd[1]: Reloading.
Dec 13 02:30:44 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:30:44 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:30:44 np0005558241 systemd[1]: Reached target Ceph cluster 18ee9de6-e00b-571b-ab9b-b7aab06174df.
Dec 13 02:30:44 np0005558241 systemd[1]: Reloading.
Dec 13 02:30:45 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:30:45 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:30:45 np0005558241 systemd[1]: Reloading.
Dec 13 02:30:45 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:30:45 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:30:45 np0005558241 systemd[1]: Created slice Slice /system/ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df.
Dec 13 02:30:45 np0005558241 systemd[1]: Reached target System Time Set.
Dec 13 02:30:45 np0005558241 systemd[1]: Reached target System Time Synchronized.
Dec 13 02:30:45 np0005558241 systemd[1]: Starting Ceph mon.compute-0 for 18ee9de6-e00b-571b-ab9b-b7aab06174df...
Dec 13 02:30:45 np0005558241 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:30:45 np0005558241 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:30:45 np0005558241 podman[76158]: 2025-12-13 07:30:45.802432197 +0000 UTC m=+0.044896838 container create 14c5adf7f4afdfda4837a5fabd5e3c5bd258e89eddfd619be40472bb06b53885 (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 02:30:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f4c833f06e2d2c0281f07546afb140e880c06d6e8776a32b1952c9f4dbcdcf6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f4c833f06e2d2c0281f07546afb140e880c06d6e8776a32b1952c9f4dbcdcf6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f4c833f06e2d2c0281f07546afb140e880c06d6e8776a32b1952c9f4dbcdcf6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f4c833f06e2d2c0281f07546afb140e880c06d6e8776a32b1952c9f4dbcdcf6/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:45 np0005558241 podman[76158]: 2025-12-13 07:30:45.78464693 +0000 UTC m=+0.027111651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:45 np0005558241 podman[76158]: 2025-12-13 07:30:45.892931527 +0000 UTC m=+0.135396188 container init 14c5adf7f4afdfda4837a5fabd5e3c5bd258e89eddfd619be40472bb06b53885 (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:30:45 np0005558241 podman[76158]: 2025-12-13 07:30:45.904832486 +0000 UTC m=+0.147297127 container start 14c5adf7f4afdfda4837a5fabd5e3c5bd258e89eddfd619be40472bb06b53885 (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:30:45 np0005558241 bash[76158]: 14c5adf7f4afdfda4837a5fabd5e3c5bd258e89eddfd619be40472bb06b53885
Dec 13 02:30:45 np0005558241 systemd[1]: Started Ceph mon.compute-0 for 18ee9de6-e00b-571b-ab9b-b7aab06174df.
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: pidfile_write: ignore empty --pid-file
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: load: jerasure load: lrc 
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: RocksDB version: 7.9.2
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Git sha 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: DB SUMMARY
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: DB Session ID:  5AMLSHCCW6458DF68WBB
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: CURRENT file:  CURRENT
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                         Options.error_if_exists: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                       Options.create_if_missing: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                                     Options.env: 0x564e0479b440
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                                Options.info_log: 0x564e051273e0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                              Options.statistics: (nil)
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                               Options.use_fsync: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                              Options.db_log_dir: 
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                                 Options.wal_dir: 
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                    Options.write_buffer_manager: 0x564e050a6140
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                  Options.unordered_write: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                               Options.row_cache: None
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                              Options.wal_filter: None
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.two_write_queues: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.wal_compression: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.atomic_flush: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.max_background_jobs: 2
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.max_background_compactions: -1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.max_subcompactions: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.max_total_wal_size: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                          Options.max_open_files: -1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:       Options.compaction_readahead_size: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Compression algorithms supported:
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: 	kZSTD supported: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: 	kXpressCompression supported: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: 	kBZip2Compression supported: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: 	kLZ4Compression supported: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: 	kZlibCompression supported: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: 	kLZ4HCCompression supported: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: 	kSnappyCompression supported: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:           Options.merge_operator: 
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:        Options.compaction_filter: None
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564e050b2600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564e050978d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:        Options.write_buffer_size: 33554432
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:  Options.max_write_buffer_number: 2
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:          Options.compression: NoCompression
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.num_levels: 7
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e7ffae10-18d5-4cdc-93fc-8955429ef551
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611045976573, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611045979118, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "5AMLSHCCW6458DF68WBB", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611045979265, "job": 1, "event": "recovery_finished"}
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x564e050c4e00
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: DB pointer 0x564e05210000
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564e050978d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: mon.compute-0@-1(???) e0 preinit fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: mon.compute-0@0(probing) e0 win_standalone_election
Dec 13 02:30:45 np0005558241 ceph-mon[76178]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: log_channel(cluster) log [DBG] : fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: log_channel(cluster) log [DBG] : last_changed 2025-12-13T07:30:43.504837+0000
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: log_channel(cluster) log [DBG] : created 2025-12-13T07:30:43.504837+0000
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2025-12-13T07:30:43.785191Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025,kernel_version=5.14.0-648.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864308,os=Linux}
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).mds e1 new map
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).mds e1 print_map#012e1#012btime 2025-12-13T07:30:46:025708+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: log_channel(cluster) log [DBG] : fsmap 
Dec 13 02:30:46 np0005558241 podman[76179]: 2025-12-13 07:30:46.039176226 +0000 UTC m=+0.077968687 container create db2f7da019c6e522b81819175414fb5a3f7474b50d5b5e125606039da5d23018 (image=quay.io/ceph/ceph:v20, name=optimistic_swartz, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mkfs 18ee9de6-e00b-571b-ab9b-b7aab06174df
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 02:30:46 np0005558241 systemd[1]: Started libpod-conmon-db2f7da019c6e522b81819175414fb5a3f7474b50d5b5e125606039da5d23018.scope.
Dec 13 02:30:46 np0005558241 podman[76179]: 2025-12-13 07:30:46.009382699 +0000 UTC m=+0.048175180 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:46 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:30:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4286a141d557cb2b8688a66a470affe71d9b686f93482adb2ddb515f8cf28288/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4286a141d557cb2b8688a66a470affe71d9b686f93482adb2ddb515f8cf28288/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4286a141d557cb2b8688a66a470affe71d9b686f93482adb2ddb515f8cf28288/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:46 np0005558241 podman[76179]: 2025-12-13 07:30:46.154648803 +0000 UTC m=+0.193441274 container init db2f7da019c6e522b81819175414fb5a3f7474b50d5b5e125606039da5d23018 (image=quay.io/ceph/ceph:v20, name=optimistic_swartz, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 02:30:46 np0005558241 podman[76179]: 2025-12-13 07:30:46.167164437 +0000 UTC m=+0.205956908 container start db2f7da019c6e522b81819175414fb5a3f7474b50d5b5e125606039da5d23018 (image=quay.io/ceph/ceph:v20, name=optimistic_swartz, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:30:46 np0005558241 podman[76179]: 2025-12-13 07:30:46.171176828 +0000 UTC m=+0.209969359 container attach db2f7da019c6e522b81819175414fb5a3f7474b50d5b5e125606039da5d23018 (image=quay.io/ceph/ceph:v20, name=optimistic_swartz, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4271261512' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 13 02:30:46 np0005558241 optimistic_swartz[76233]:  cluster:
Dec 13 02:30:46 np0005558241 optimistic_swartz[76233]:    id:     18ee9de6-e00b-571b-ab9b-b7aab06174df
Dec 13 02:30:46 np0005558241 optimistic_swartz[76233]:    health: HEALTH_OK
Dec 13 02:30:46 np0005558241 optimistic_swartz[76233]: 
Dec 13 02:30:46 np0005558241 optimistic_swartz[76233]:  services:
Dec 13 02:30:46 np0005558241 optimistic_swartz[76233]:    mon: 1 daemons, quorum compute-0 (age 0.383629s) [leader: compute-0]
Dec 13 02:30:46 np0005558241 optimistic_swartz[76233]:    mgr: no daemons active
Dec 13 02:30:46 np0005558241 optimistic_swartz[76233]:    osd: 0 osds: 0 up, 0 in
Dec 13 02:30:46 np0005558241 optimistic_swartz[76233]: 
Dec 13 02:30:46 np0005558241 optimistic_swartz[76233]:  data:
Dec 13 02:30:46 np0005558241 optimistic_swartz[76233]:    pools:   0 pools, 0 pgs
Dec 13 02:30:46 np0005558241 optimistic_swartz[76233]:    objects: 0 objects, 0 B
Dec 13 02:30:46 np0005558241 optimistic_swartz[76233]:    usage:   0 B used, 0 B / 0 B avail
Dec 13 02:30:46 np0005558241 optimistic_swartz[76233]:    pgs:     
Dec 13 02:30:46 np0005558241 optimistic_swartz[76233]: 
Dec 13 02:30:46 np0005558241 systemd[1]: libpod-db2f7da019c6e522b81819175414fb5a3f7474b50d5b5e125606039da5d23018.scope: Deactivated successfully.
Dec 13 02:30:46 np0005558241 podman[76179]: 2025-12-13 07:30:46.417871227 +0000 UTC m=+0.456663698 container died db2f7da019c6e522b81819175414fb5a3f7474b50d5b5e125606039da5d23018 (image=quay.io/ceph/ceph:v20, name=optimistic_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:30:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4286a141d557cb2b8688a66a470affe71d9b686f93482adb2ddb515f8cf28288-merged.mount: Deactivated successfully.
Dec 13 02:30:46 np0005558241 podman[76179]: 2025-12-13 07:30:46.467210945 +0000 UTC m=+0.506003406 container remove db2f7da019c6e522b81819175414fb5a3f7474b50d5b5e125606039da5d23018 (image=quay.io/ceph/ceph:v20, name=optimistic_swartz, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 02:30:46 np0005558241 systemd[1]: libpod-conmon-db2f7da019c6e522b81819175414fb5a3f7474b50d5b5e125606039da5d23018.scope: Deactivated successfully.
Dec 13 02:30:46 np0005558241 podman[76274]: 2025-12-13 07:30:46.574490656 +0000 UTC m=+0.074116070 container create a1b85ef5cec13c62b42815a4aad70781b5c21b2e430900280539be55d614fc13 (image=quay.io/ceph/ceph:v20, name=wonderful_saha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 02:30:46 np0005558241 systemd[1]: Started libpod-conmon-a1b85ef5cec13c62b42815a4aad70781b5c21b2e430900280539be55d614fc13.scope.
Dec 13 02:30:46 np0005558241 podman[76274]: 2025-12-13 07:30:46.543609011 +0000 UTC m=+0.043234475 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:46 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:30:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a4b6d754ab051d98011e3bd9c76f66705cc74b738b068739f5f3ae1b348814/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a4b6d754ab051d98011e3bd9c76f66705cc74b738b068739f5f3ae1b348814/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a4b6d754ab051d98011e3bd9c76f66705cc74b738b068739f5f3ae1b348814/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a4b6d754ab051d98011e3bd9c76f66705cc74b738b068739f5f3ae1b348814/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:46 np0005558241 podman[76274]: 2025-12-13 07:30:46.684971328 +0000 UTC m=+0.184596792 container init a1b85ef5cec13c62b42815a4aad70781b5c21b2e430900280539be55d614fc13 (image=quay.io/ceph/ceph:v20, name=wonderful_saha, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:30:46 np0005558241 podman[76274]: 2025-12-13 07:30:46.694260631 +0000 UTC m=+0.193886045 container start a1b85ef5cec13c62b42815a4aad70781b5c21b2e430900280539be55d614fc13 (image=quay.io/ceph/ceph:v20, name=wonderful_saha, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 02:30:46 np0005558241 podman[76274]: 2025-12-13 07:30:46.698412265 +0000 UTC m=+0.198037669 container attach a1b85ef5cec13c62b42815a4aad70781b5c21b2e430900280539be55d614fc13 (image=quay.io/ceph/ceph:v20, name=wonderful_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 13 02:30:46 np0005558241 ceph-mon[76178]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3585518581' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 02:30:47 np0005558241 ceph-mon[76178]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3585518581' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 13 02:30:47 np0005558241 wonderful_saha[76291]: 
Dec 13 02:30:47 np0005558241 wonderful_saha[76291]: [global]
Dec 13 02:30:47 np0005558241 wonderful_saha[76291]: #011fsid = 18ee9de6-e00b-571b-ab9b-b7aab06174df
Dec 13 02:30:47 np0005558241 wonderful_saha[76291]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 13 02:30:47 np0005558241 wonderful_saha[76291]: #011osd_crush_chooseleaf_type = 0
Dec 13 02:30:47 np0005558241 ceph-mon[76178]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 02:30:47 np0005558241 systemd[1]: libpod-a1b85ef5cec13c62b42815a4aad70781b5c21b2e430900280539be55d614fc13.scope: Deactivated successfully.
Dec 13 02:30:47 np0005558241 podman[76274]: 2025-12-13 07:30:47.45075313 +0000 UTC m=+0.950378544 container died a1b85ef5cec13c62b42815a4aad70781b5c21b2e430900280539be55d614fc13 (image=quay.io/ceph/ceph:v20, name=wonderful_saha, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:30:47 np0005558241 systemd[1]: var-lib-containers-storage-overlay-17a4b6d754ab051d98011e3bd9c76f66705cc74b738b068739f5f3ae1b348814-merged.mount: Deactivated successfully.
Dec 13 02:30:47 np0005558241 podman[76274]: 2025-12-13 07:30:47.503114082 +0000 UTC m=+1.002739456 container remove a1b85ef5cec13c62b42815a4aad70781b5c21b2e430900280539be55d614fc13 (image=quay.io/ceph/ceph:v20, name=wonderful_saha, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 02:30:47 np0005558241 systemd[1]: libpod-conmon-a1b85ef5cec13c62b42815a4aad70781b5c21b2e430900280539be55d614fc13.scope: Deactivated successfully.
Dec 13 02:30:47 np0005558241 podman[76328]: 2025-12-13 07:30:47.569330194 +0000 UTC m=+0.044496298 container create bfe2f521cbd4f964e0b28c33b649d820471ed56e1aa63c9858e1631c5886bee3 (image=quay.io/ceph/ceph:v20, name=jolly_chaplygin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:30:47 np0005558241 systemd[1]: Started libpod-conmon-bfe2f521cbd4f964e0b28c33b649d820471ed56e1aa63c9858e1631c5886bee3.scope.
Dec 13 02:30:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:30:47 np0005558241 podman[76328]: 2025-12-13 07:30:47.54844694 +0000 UTC m=+0.023613074 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/342933fa57e9d1a40439467070b94bdbd3229eba8677d4989eeb3d5b79ae3162/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/342933fa57e9d1a40439467070b94bdbd3229eba8677d4989eeb3d5b79ae3162/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/342933fa57e9d1a40439467070b94bdbd3229eba8677d4989eeb3d5b79ae3162/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/342933fa57e9d1a40439467070b94bdbd3229eba8677d4989eeb3d5b79ae3162/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:47 np0005558241 podman[76328]: 2025-12-13 07:30:47.658607203 +0000 UTC m=+0.133773357 container init bfe2f521cbd4f964e0b28c33b649d820471ed56e1aa63c9858e1631c5886bee3 (image=quay.io/ceph/ceph:v20, name=jolly_chaplygin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 02:30:47 np0005558241 podman[76328]: 2025-12-13 07:30:47.672742438 +0000 UTC m=+0.147908572 container start bfe2f521cbd4f964e0b28c33b649d820471ed56e1aa63c9858e1631c5886bee3 (image=quay.io/ceph/ceph:v20, name=jolly_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:30:47 np0005558241 podman[76328]: 2025-12-13 07:30:47.678279127 +0000 UTC m=+0.153445311 container attach bfe2f521cbd4f964e0b28c33b649d820471ed56e1aa63c9858e1631c5886bee3 (image=quay.io/ceph/ceph:v20, name=jolly_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 02:30:47 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:30:47 np0005558241 ceph-mon[76178]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3850758469' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:30:47 np0005558241 systemd[1]: libpod-bfe2f521cbd4f964e0b28c33b649d820471ed56e1aa63c9858e1631c5886bee3.scope: Deactivated successfully.
Dec 13 02:30:47 np0005558241 podman[76328]: 2025-12-13 07:30:47.915277873 +0000 UTC m=+0.390444007 container died bfe2f521cbd4f964e0b28c33b649d820471ed56e1aa63c9858e1631c5886bee3 (image=quay.io/ceph/ceph:v20, name=jolly_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 02:30:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-342933fa57e9d1a40439467070b94bdbd3229eba8677d4989eeb3d5b79ae3162-merged.mount: Deactivated successfully.
Dec 13 02:30:48 np0005558241 podman[76328]: 2025-12-13 07:30:48.241520447 +0000 UTC m=+0.716686581 container remove bfe2f521cbd4f964e0b28c33b649d820471ed56e1aa63c9858e1631c5886bee3 (image=quay.io/ceph/ceph:v20, name=jolly_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 02:30:48 np0005558241 systemd[1]: libpod-conmon-bfe2f521cbd4f964e0b28c33b649d820471ed56e1aa63c9858e1631c5886bee3.scope: Deactivated successfully.
Dec 13 02:30:48 np0005558241 systemd[1]: Stopping Ceph mon.compute-0 for 18ee9de6-e00b-571b-ab9b-b7aab06174df...
Dec 13 02:30:48 np0005558241 ceph-mon[76178]: from='client.? 192.168.122.100:0/3585518581' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 02:30:48 np0005558241 ceph-mon[76178]: from='client.? 192.168.122.100:0/3585518581' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 13 02:30:48 np0005558241 ceph-mon[76178]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 13 02:30:48 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 13 02:30:48 np0005558241 ceph-mon[76178]: mon.compute-0@0(leader) e1 shutdown
Dec 13 02:30:48 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0[76174]: 2025-12-13T07:30:48.512+0000 7f38b9681640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 13 02:30:48 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0[76174]: 2025-12-13T07:30:48.512+0000 7f38b9681640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 13 02:30:48 np0005558241 ceph-mon[76178]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 13 02:30:48 np0005558241 ceph-mon[76178]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 13 02:30:48 np0005558241 podman[76411]: 2025-12-13 07:30:48.564157881 +0000 UTC m=+0.099306532 container died 14c5adf7f4afdfda4837a5fabd5e3c5bd258e89eddfd619be40472bb06b53885 (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:30:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3f4c833f06e2d2c0281f07546afb140e880c06d6e8776a32b1952c9f4dbcdcf6-merged.mount: Deactivated successfully.
Dec 13 02:30:48 np0005558241 podman[76411]: 2025-12-13 07:30:48.61074758 +0000 UTC m=+0.145896231 container remove 14c5adf7f4afdfda4837a5fabd5e3c5bd258e89eddfd619be40472bb06b53885 (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 02:30:48 np0005558241 bash[76411]: ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0
Dec 13 02:30:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:30:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 02:30:48 np0005558241 systemd[1]: ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df@mon.compute-0.service: Deactivated successfully.
Dec 13 02:30:48 np0005558241 systemd[1]: Stopped Ceph mon.compute-0 for 18ee9de6-e00b-571b-ab9b-b7aab06174df.
Dec 13 02:30:48 np0005558241 systemd[1]: ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df@mon.compute-0.service: Consumed 1.204s CPU time.
Dec 13 02:30:48 np0005558241 systemd[1]: Starting Ceph mon.compute-0 for 18ee9de6-e00b-571b-ab9b-b7aab06174df...
Dec 13 02:30:49 np0005558241 podman[76517]: 2025-12-13 07:30:49.073646283 +0000 UTC m=+0.047924753 container create 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 02:30:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323c31212320f079f010d1773f6e7ad8033a5123ce8c69e72d32639952562ff2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323c31212320f079f010d1773f6e7ad8033a5123ce8c69e72d32639952562ff2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323c31212320f079f010d1773f6e7ad8033a5123ce8c69e72d32639952562ff2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323c31212320f079f010d1773f6e7ad8033a5123ce8c69e72d32639952562ff2/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:49 np0005558241 podman[76517]: 2025-12-13 07:30:49.054623156 +0000 UTC m=+0.028901616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:49 np0005558241 podman[76517]: 2025-12-13 07:30:49.156317818 +0000 UTC m=+0.130596338 container init 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:30:49 np0005558241 podman[76517]: 2025-12-13 07:30:49.164638756 +0000 UTC m=+0.138917216 container start 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Dec 13 02:30:49 np0005558241 bash[76517]: 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd
Dec 13 02:30:49 np0005558241 systemd[1]: Started Ceph mon.compute-0 for 18ee9de6-e00b-571b-ab9b-b7aab06174df.
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: pidfile_write: ignore empty --pid-file
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: load: jerasure load: lrc 
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: RocksDB version: 7.9.2
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Git sha 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: DB SUMMARY
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: DB Session ID:  DFSB2OF23V678EVCNB4J
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: CURRENT file:  CURRENT
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 61643 ; 
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                         Options.error_if_exists: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                       Options.create_if_missing: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                                     Options.env: 0x564513649440
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                                Options.info_log: 0x564515a95e80
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                              Options.statistics: (nil)
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                               Options.use_fsync: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                              Options.db_log_dir: 
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                                 Options.wal_dir: 
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                    Options.write_buffer_manager: 0x564515ae0140
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                  Options.unordered_write: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                               Options.row_cache: None
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                              Options.wal_filter: None
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.two_write_queues: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.wal_compression: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.atomic_flush: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.max_background_jobs: 2
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.max_background_compactions: -1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.max_subcompactions: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.max_total_wal_size: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                          Options.max_open_files: -1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:       Options.compaction_readahead_size: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Compression algorithms supported:
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: 	kZSTD supported: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: 	kXpressCompression supported: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: 	kBZip2Compression supported: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: 	kLZ4Compression supported: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: 	kZlibCompression supported: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: 	kLZ4HCCompression supported: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: 	kSnappyCompression supported: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:           Options.merge_operator: 
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:        Options.compaction_filter: None
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564515aeca00)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x564515ad18d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:        Options.write_buffer_size: 33554432
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:  Options.max_write_buffer_number: 2
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:          Options.compression: NoCompression
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.num_levels: 7
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e7ffae10-18d5-4cdc-93fc-8955429ef551
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611049250748, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611049257491, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 61236, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 150, "table_properties": {"data_size": 59695, "index_size": 183, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3459, "raw_average_key_size": 30, "raw_value_size": 56998, "raw_average_value_size": 504, "num_data_blocks": 9, "num_entries": 113, "num_filter_entries": 113, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611049, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611049257957, "job": 1, "event": "recovery_finished"}
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec 13 02:30:49 np0005558241 podman[76538]: 2025-12-13 07:30:49.258612864 +0000 UTC m=+0.057680768 container create fa0267e27aa03768984f40de0a67974c1f7c6d92836f6622172c5921b8556de5 (image=quay.io/ceph/ceph:v20, name=festive_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x564515afee00
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: DB pointer 0x564515c48000
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   61.70 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0   61.70 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 1.79 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 1.79 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 8.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.38 KB,7.15256e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: mon.compute-0@-1(???) e1 preinit fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: mon.compute-0@-1(???).mds e1 new map
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2025-12-13T07:30:46:025708+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : last_changed 2025-12-13T07:30:43.504837+0000
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : created 2025-12-13T07:30:43.504837+0000
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : fsmap 
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 13 02:30:49 np0005558241 systemd[1]: Started libpod-conmon-fa0267e27aa03768984f40de0a67974c1f7c6d92836f6622172c5921b8556de5.scope.
Dec 13 02:30:49 np0005558241 podman[76538]: 2025-12-13 07:30:49.229938754 +0000 UTC m=+0.029006708 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:30:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8297c544597b137bf713dbb0da9046c629dfddfbbb2f6851789d6929eccade/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8297c544597b137bf713dbb0da9046c629dfddfbbb2f6851789d6929eccade/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c8297c544597b137bf713dbb0da9046c629dfddfbbb2f6851789d6929eccade/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 02:30:49 np0005558241 podman[76538]: 2025-12-13 07:30:49.383061206 +0000 UTC m=+0.182129150 container init fa0267e27aa03768984f40de0a67974c1f7c6d92836f6622172c5921b8556de5 (image=quay.io/ceph/ceph:v20, name=festive_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:30:49 np0005558241 podman[76538]: 2025-12-13 07:30:49.394083402 +0000 UTC m=+0.193151276 container start fa0267e27aa03768984f40de0a67974c1f7c6d92836f6622172c5921b8556de5 (image=quay.io/ceph/ceph:v20, name=festive_zhukovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 02:30:49 np0005558241 podman[76538]: 2025-12-13 07:30:49.397524739 +0000 UTC m=+0.196592693 container attach fa0267e27aa03768984f40de0a67974c1f7c6d92836f6622172c5921b8556de5 (image=quay.io/ceph/ceph:v20, name=festive_zhukovsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 02:30:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Dec 13 02:30:49 np0005558241 systemd[1]: libpod-fa0267e27aa03768984f40de0a67974c1f7c6d92836f6622172c5921b8556de5.scope: Deactivated successfully.
Dec 13 02:30:49 np0005558241 podman[76538]: 2025-12-13 07:30:49.65903592 +0000 UTC m=+0.458103834 container died fa0267e27aa03768984f40de0a67974c1f7c6d92836f6622172c5921b8556de5 (image=quay.io/ceph/ceph:v20, name=festive_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:30:49 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9c8297c544597b137bf713dbb0da9046c629dfddfbbb2f6851789d6929eccade-merged.mount: Deactivated successfully.
Dec 13 02:30:49 np0005558241 podman[76538]: 2025-12-13 07:30:49.709543737 +0000 UTC m=+0.508611651 container remove fa0267e27aa03768984f40de0a67974c1f7c6d92836f6622172c5921b8556de5 (image=quay.io/ceph/ceph:v20, name=festive_zhukovsky, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 02:30:49 np0005558241 systemd[1]: libpod-conmon-fa0267e27aa03768984f40de0a67974c1f7c6d92836f6622172c5921b8556de5.scope: Deactivated successfully.
Dec 13 02:30:49 np0005558241 podman[76631]: 2025-12-13 07:30:49.799813951 +0000 UTC m=+0.064154000 container create da873ac9b251a43b030b73c111aed6995a5e6b45190247ee137b56a921d9095f (image=quay.io/ceph/ceph:v20, name=tender_margulis, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 02:30:49 np0005558241 systemd[1]: Started libpod-conmon-da873ac9b251a43b030b73c111aed6995a5e6b45190247ee137b56a921d9095f.scope.
Dec 13 02:30:49 np0005558241 podman[76631]: 2025-12-13 07:30:49.773304346 +0000 UTC m=+0.037644445 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:30:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/833168ba9d122876449a7598e419f89f02b25891400b440eb4e6613b24e4e668/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/833168ba9d122876449a7598e419f89f02b25891400b440eb4e6613b24e4e668/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/833168ba9d122876449a7598e419f89f02b25891400b440eb4e6613b24e4e668/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:49 np0005558241 podman[76631]: 2025-12-13 07:30:49.905524614 +0000 UTC m=+0.169864663 container init da873ac9b251a43b030b73c111aed6995a5e6b45190247ee137b56a921d9095f (image=quay.io/ceph/ceph:v20, name=tender_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:30:49 np0005558241 podman[76631]: 2025-12-13 07:30:49.915672438 +0000 UTC m=+0.180012487 container start da873ac9b251a43b030b73c111aed6995a5e6b45190247ee137b56a921d9095f (image=quay.io/ceph/ceph:v20, name=tender_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 02:30:49 np0005558241 podman[76631]: 2025-12-13 07:30:49.92014405 +0000 UTC m=+0.184484139 container attach da873ac9b251a43b030b73c111aed6995a5e6b45190247ee137b56a921d9095f (image=quay.io/ceph/ceph:v20, name=tender_margulis, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 02:30:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Dec 13 02:30:50 np0005558241 systemd[1]: libpod-da873ac9b251a43b030b73c111aed6995a5e6b45190247ee137b56a921d9095f.scope: Deactivated successfully.
Dec 13 02:30:50 np0005558241 podman[76631]: 2025-12-13 07:30:50.1688579 +0000 UTC m=+0.433197909 container died da873ac9b251a43b030b73c111aed6995a5e6b45190247ee137b56a921d9095f (image=quay.io/ceph/ceph:v20, name=tender_margulis, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 02:30:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-833168ba9d122876449a7598e419f89f02b25891400b440eb4e6613b24e4e668-merged.mount: Deactivated successfully.
Dec 13 02:30:50 np0005558241 podman[76631]: 2025-12-13 07:30:50.220544797 +0000 UTC m=+0.484884836 container remove da873ac9b251a43b030b73c111aed6995a5e6b45190247ee137b56a921d9095f (image=quay.io/ceph/ceph:v20, name=tender_margulis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:30:50 np0005558241 systemd[1]: libpod-conmon-da873ac9b251a43b030b73c111aed6995a5e6b45190247ee137b56a921d9095f.scope: Deactivated successfully.
Dec 13 02:30:50 np0005558241 systemd[1]: Reloading.
Dec 13 02:30:50 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:30:50 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:30:50 np0005558241 systemd[1]: Reloading.
Dec 13 02:30:50 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:30:50 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:30:50 np0005558241 systemd[1]: Starting Ceph mgr.compute-0.vndjzx for 18ee9de6-e00b-571b-ab9b-b7aab06174df...
Dec 13 02:30:51 np0005558241 podman[76812]: 2025-12-13 07:30:51.136250339 +0000 UTC m=+0.060903629 container create 15db9c4588cdf80b2e8f43ff990db510a0f55f9f61d30bfd705370393006dfb7 (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 02:30:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5183ca31d5026685d8b961b67a8dcd78fff3cfa729c2bdb7ce917c0fee863f17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5183ca31d5026685d8b961b67a8dcd78fff3cfa729c2bdb7ce917c0fee863f17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5183ca31d5026685d8b961b67a8dcd78fff3cfa729c2bdb7ce917c0fee863f17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5183ca31d5026685d8b961b67a8dcd78fff3cfa729c2bdb7ce917c0fee863f17/merged/var/lib/ceph/mgr/ceph-compute-0.vndjzx supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:51 np0005558241 podman[76812]: 2025-12-13 07:30:51.108430601 +0000 UTC m=+0.033084011 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:51 np0005558241 podman[76812]: 2025-12-13 07:30:51.22117926 +0000 UTC m=+0.145832570 container init 15db9c4588cdf80b2e8f43ff990db510a0f55f9f61d30bfd705370393006dfb7 (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:30:51 np0005558241 podman[76812]: 2025-12-13 07:30:51.235109339 +0000 UTC m=+0.159762639 container start 15db9c4588cdf80b2e8f43ff990db510a0f55f9f61d30bfd705370393006dfb7 (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 02:30:51 np0005558241 bash[76812]: 15db9c4588cdf80b2e8f43ff990db510a0f55f9f61d30bfd705370393006dfb7
Dec 13 02:30:51 np0005558241 systemd[1]: Started Ceph mgr.compute-0.vndjzx for 18ee9de6-e00b-571b-ab9b-b7aab06174df.
Dec 13 02:30:51 np0005558241 ceph-mgr[76830]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:30:51 np0005558241 ceph-mgr[76830]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 13 02:30:51 np0005558241 ceph-mgr[76830]: pidfile_write: ignore empty --pid-file
Dec 13 02:30:51 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'alerts'
Dec 13 02:30:51 np0005558241 podman[76831]: 2025-12-13 07:30:51.359657314 +0000 UTC m=+0.070709555 container create 75221fc9ad6b8146f72a1a1173bf73678e2fea63d59f743046a1bc1254e8e74b (image=quay.io/ceph/ceph:v20, name=infallible_bartik, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec 13 02:30:51 np0005558241 systemd[1]: Started libpod-conmon-75221fc9ad6b8146f72a1a1173bf73678e2fea63d59f743046a1bc1254e8e74b.scope.
Dec 13 02:30:51 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'balancer'
Dec 13 02:30:51 np0005558241 podman[76831]: 2025-12-13 07:30:51.331726763 +0000 UTC m=+0.042779084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:51 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:30:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11287be40d4585f02bcba64a7835619c77429c769e070a11d109dc4c3377cdf2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11287be40d4585f02bcba64a7835619c77429c769e070a11d109dc4c3377cdf2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11287be40d4585f02bcba64a7835619c77429c769e070a11d109dc4c3377cdf2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:51 np0005558241 podman[76831]: 2025-12-13 07:30:51.478784723 +0000 UTC m=+0.189836994 container init 75221fc9ad6b8146f72a1a1173bf73678e2fea63d59f743046a1bc1254e8e74b (image=quay.io/ceph/ceph:v20, name=infallible_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:30:51 np0005558241 podman[76831]: 2025-12-13 07:30:51.491213955 +0000 UTC m=+0.202266206 container start 75221fc9ad6b8146f72a1a1173bf73678e2fea63d59f743046a1bc1254e8e74b (image=quay.io/ceph/ceph:v20, name=infallible_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 02:30:51 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'cephadm'
Dec 13 02:30:51 np0005558241 podman[76831]: 2025-12-13 07:30:51.495607825 +0000 UTC m=+0.206660076 container attach 75221fc9ad6b8146f72a1a1173bf73678e2fea63d59f743046a1bc1254e8e74b (image=quay.io/ceph/ceph:v20, name=infallible_bartik, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:30:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 13 02:30:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209290898' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]: 
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]: {
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    "fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    "health": {
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "status": "HEALTH_OK",
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "checks": {},
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "mutes": []
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    },
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    "election_epoch": 5,
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    "quorum": [
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        0
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    ],
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    "quorum_names": [
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "compute-0"
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    ],
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    "quorum_age": 2,
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    "monmap": {
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "epoch": 1,
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "min_mon_release_name": "tentacle",
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "num_mons": 1
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    },
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    "osdmap": {
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "epoch": 1,
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "num_osds": 0,
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "num_up_osds": 0,
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "osd_up_since": 0,
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "num_in_osds": 0,
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "osd_in_since": 0,
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "num_remapped_pgs": 0
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    },
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    "pgmap": {
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "pgs_by_state": [],
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "num_pgs": 0,
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "num_pools": 0,
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "num_objects": 0,
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "data_bytes": 0,
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "bytes_used": 0,
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "bytes_avail": 0,
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "bytes_total": 0
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    },
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    "fsmap": {
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "epoch": 1,
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "btime": "2025-12-13T07:30:46:025708+0000",
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "by_rank": [],
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "up:standby": 0
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    },
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    "mgrmap": {
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "available": false,
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "num_standbys": 0,
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "modules": [
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:            "iostat",
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:            "nfs"
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        ],
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "services": {}
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    },
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    "servicemap": {
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "epoch": 1,
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "modified": "2025-12-13T07:30:46.031379+0000",
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:        "services": {}
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    },
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]:    "progress_events": {}
Dec 13 02:30:51 np0005558241 infallible_bartik[76866]: }
Dec 13 02:30:51 np0005558241 systemd[1]: libpod-75221fc9ad6b8146f72a1a1173bf73678e2fea63d59f743046a1bc1254e8e74b.scope: Deactivated successfully.
Dec 13 02:30:51 np0005558241 podman[76831]: 2025-12-13 07:30:51.739046622 +0000 UTC m=+0.450098863 container died 75221fc9ad6b8146f72a1a1173bf73678e2fea63d59f743046a1bc1254e8e74b (image=quay.io/ceph/ceph:v20, name=infallible_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 02:30:51 np0005558241 systemd[1]: var-lib-containers-storage-overlay-11287be40d4585f02bcba64a7835619c77429c769e070a11d109dc4c3377cdf2-merged.mount: Deactivated successfully.
Dec 13 02:30:51 np0005558241 podman[76831]: 2025-12-13 07:30:51.777187899 +0000 UTC m=+0.488240140 container remove 75221fc9ad6b8146f72a1a1173bf73678e2fea63d59f743046a1bc1254e8e74b (image=quay.io/ceph/ceph:v20, name=infallible_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 02:30:51 np0005558241 systemd[1]: libpod-conmon-75221fc9ad6b8146f72a1a1173bf73678e2fea63d59f743046a1bc1254e8e74b.scope: Deactivated successfully.
Dec 13 02:30:52 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'crash'
Dec 13 02:30:52 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'dashboard'
Dec 13 02:30:52 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'devicehealth'
Dec 13 02:30:53 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'diskprediction_local'
Dec 13 02:30:53 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx[76826]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 13 02:30:53 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx[76826]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 13 02:30:53 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx[76826]:  from numpy import show_config as show_numpy_config
Dec 13 02:30:53 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'influx'
Dec 13 02:30:53 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'insights'
Dec 13 02:30:53 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'iostat'
Dec 13 02:30:53 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'k8sevents'
Dec 13 02:30:53 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'localpool'
Dec 13 02:30:53 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'mds_autoscaler'
Dec 13 02:30:53 np0005558241 podman[76917]: 2025-12-13 07:30:53.852479404 +0000 UTC m=+0.043196714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:54 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'mirroring'
Dec 13 02:30:54 np0005558241 podman[76917]: 2025-12-13 07:30:54.042660536 +0000 UTC m=+0.233377786 container create 79ccbb5d16eca432f05cd67b6a1333150caeb62b854cf5d5d16580c752700a80 (image=quay.io/ceph/ceph:v20, name=wonderful_taussig, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 02:30:54 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'nfs'
Dec 13 02:30:54 np0005558241 systemd[1]: Started libpod-conmon-79ccbb5d16eca432f05cd67b6a1333150caeb62b854cf5d5d16580c752700a80.scope.
Dec 13 02:30:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:30:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d6fea970932aa1032ab699f5a3c19fe3c7152c938f80fc56a06aa483b1ec745/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d6fea970932aa1032ab699f5a3c19fe3c7152c938f80fc56a06aa483b1ec745/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d6fea970932aa1032ab699f5a3c19fe3c7152c938f80fc56a06aa483b1ec745/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:54 np0005558241 podman[76917]: 2025-12-13 07:30:54.158701927 +0000 UTC m=+0.349419227 container init 79ccbb5d16eca432f05cd67b6a1333150caeb62b854cf5d5d16580c752700a80 (image=quay.io/ceph/ceph:v20, name=wonderful_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 02:30:54 np0005558241 podman[76917]: 2025-12-13 07:30:54.171892208 +0000 UTC m=+0.362609468 container start 79ccbb5d16eca432f05cd67b6a1333150caeb62b854cf5d5d16580c752700a80 (image=quay.io/ceph/ceph:v20, name=wonderful_taussig, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 02:30:54 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'orchestrator'
Dec 13 02:30:54 np0005558241 podman[76917]: 2025-12-13 07:30:54.372113441 +0000 UTC m=+0.562830701 container attach 79ccbb5d16eca432f05cd67b6a1333150caeb62b854cf5d5d16580c752700a80 (image=quay.io/ceph/ceph:v20, name=wonderful_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 13 02:30:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 13 02:30:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1178273061' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]: 
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]: {
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    "fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    "health": {
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "status": "HEALTH_OK",
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "checks": {},
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "mutes": []
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    },
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    "election_epoch": 5,
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    "quorum": [
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        0
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    ],
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    "quorum_names": [
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "compute-0"
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    ],
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    "quorum_age": 5,
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    "monmap": {
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "epoch": 1,
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "min_mon_release_name": "tentacle",
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "num_mons": 1
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    },
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    "osdmap": {
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "epoch": 1,
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "num_osds": 0,
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "num_up_osds": 0,
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "osd_up_since": 0,
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "num_in_osds": 0,
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "osd_in_since": 0,
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "num_remapped_pgs": 0
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    },
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    "pgmap": {
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "pgs_by_state": [],
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "num_pgs": 0,
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "num_pools": 0,
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "num_objects": 0,
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "data_bytes": 0,
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "bytes_used": 0,
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "bytes_avail": 0,
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "bytes_total": 0
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    },
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    "fsmap": {
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "epoch": 1,
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "btime": "2025-12-13T07:30:46:025708+0000",
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "by_rank": [],
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "up:standby": 0
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    },
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    "mgrmap": {
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "available": false,
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "num_standbys": 0,
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "modules": [
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:            "iostat",
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:            "nfs"
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        ],
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "services": {}
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    },
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    "servicemap": {
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "epoch": 1,
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "modified": "2025-12-13T07:30:46.031379+0000",
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:        "services": {}
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    },
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]:    "progress_events": {}
Dec 13 02:30:54 np0005558241 wonderful_taussig[76934]: }
Dec 13 02:30:54 np0005558241 systemd[1]: libpod-79ccbb5d16eca432f05cd67b6a1333150caeb62b854cf5d5d16580c752700a80.scope: Deactivated successfully.
Dec 13 02:30:54 np0005558241 podman[76917]: 2025-12-13 07:30:54.41670608 +0000 UTC m=+0.607423330 container died 79ccbb5d16eca432f05cd67b6a1333150caeb62b854cf5d5d16580c752700a80 (image=quay.io/ceph/ceph:v20, name=wonderful_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 02:30:54 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'osd_perf_query'
Dec 13 02:30:54 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9d6fea970932aa1032ab699f5a3c19fe3c7152c938f80fc56a06aa483b1ec745-merged.mount: Deactivated successfully.
Dec 13 02:30:54 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'osd_support'
Dec 13 02:30:54 np0005558241 podman[76917]: 2025-12-13 07:30:54.630897973 +0000 UTC m=+0.821615233 container remove 79ccbb5d16eca432f05cd67b6a1333150caeb62b854cf5d5d16580c752700a80 (image=quay.io/ceph/ceph:v20, name=wonderful_taussig, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:30:54 np0005558241 systemd[1]: libpod-conmon-79ccbb5d16eca432f05cd67b6a1333150caeb62b854cf5d5d16580c752700a80.scope: Deactivated successfully.
Dec 13 02:30:54 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'pg_autoscaler'
Dec 13 02:30:54 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'progress'
Dec 13 02:30:54 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'prometheus'
Dec 13 02:30:55 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'rbd_support'
Dec 13 02:30:55 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'rgw'
Dec 13 02:30:55 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'rook'
Dec 13 02:30:56 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'selftest'
Dec 13 02:30:56 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'smb'
Dec 13 02:30:56 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'snap_schedule'
Dec 13 02:30:56 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'stats'
Dec 13 02:30:56 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'status'
Dec 13 02:30:56 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'telegraf'
Dec 13 02:30:56 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'telemetry'
Dec 13 02:30:56 np0005558241 podman[76974]: 2025-12-13 07:30:56.689575061 +0000 UTC m=+0.027659135 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:56 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'test_orchestrator'
Dec 13 02:30:56 np0005558241 podman[76974]: 2025-12-13 07:30:56.944201979 +0000 UTC m=+0.282286093 container create 1a65896932fb34e8af3b378e86eef1cb31e645a8bd15fb74a2d91233e91cd396 (image=quay.io/ceph/ceph:v20, name=romantic_proskuriakova, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 02:30:57 np0005558241 systemd[1]: Started libpod-conmon-1a65896932fb34e8af3b378e86eef1cb31e645a8bd15fb74a2d91233e91cd396.scope.
Dec 13 02:30:57 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:30:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f519ad13d008141e47260a3cc47e7918261b3becd2be4e4018d57ab6b933a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f519ad13d008141e47260a3cc47e7918261b3becd2be4e4018d57ab6b933a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58f519ad13d008141e47260a3cc47e7918261b3becd2be4e4018d57ab6b933a2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:57 np0005558241 podman[76974]: 2025-12-13 07:30:57.085966335 +0000 UTC m=+0.424050509 container init 1a65896932fb34e8af3b378e86eef1cb31e645a8bd15fb74a2d91233e91cd396 (image=quay.io/ceph/ceph:v20, name=romantic_proskuriakova, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'volumes'
Dec 13 02:30:57 np0005558241 podman[76974]: 2025-12-13 07:30:57.096013767 +0000 UTC m=+0.434097881 container start 1a65896932fb34e8af3b378e86eef1cb31e645a8bd15fb74a2d91233e91cd396 (image=quay.io/ceph/ceph:v20, name=romantic_proskuriakova, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec 13 02:30:57 np0005558241 podman[76974]: 2025-12-13 07:30:57.116116112 +0000 UTC m=+0.454200266 container attach 1a65896932fb34e8af3b378e86eef1cb31e645a8bd15fb74a2d91233e91cd396 (image=quay.io/ceph/ceph:v20, name=romantic_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1503248960' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]: 
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]: {
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    "fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    "health": {
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "status": "HEALTH_OK",
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "checks": {},
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "mutes": []
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    },
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    "election_epoch": 5,
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    "quorum": [
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        0
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    ],
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    "quorum_names": [
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "compute-0"
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    ],
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    "quorum_age": 8,
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    "monmap": {
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "epoch": 1,
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "min_mon_release_name": "tentacle",
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "num_mons": 1
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    },
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    "osdmap": {
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "epoch": 1,
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "num_osds": 0,
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "num_up_osds": 0,
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "osd_up_since": 0,
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "num_in_osds": 0,
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "osd_in_since": 0,
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "num_remapped_pgs": 0
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    },
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    "pgmap": {
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "pgs_by_state": [],
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "num_pgs": 0,
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "num_pools": 0,
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "num_objects": 0,
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "data_bytes": 0,
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "bytes_used": 0,
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "bytes_avail": 0,
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "bytes_total": 0
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    },
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    "fsmap": {
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "epoch": 1,
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "btime": "2025-12-13T07:30:46:025708+0000",
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "by_rank": [],
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "up:standby": 0
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    },
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    "mgrmap": {
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "available": false,
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "num_standbys": 0,
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "modules": [
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:            "iostat",
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:            "nfs"
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        ],
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "services": {}
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    },
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    "servicemap": {
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "epoch": 1,
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "modified": "2025-12-13T07:30:46.031379+0000",
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:        "services": {}
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    },
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]:    "progress_events": {}
Dec 13 02:30:57 np0005558241 romantic_proskuriakova[76990]: }
Dec 13 02:30:57 np0005558241 systemd[1]: libpod-1a65896932fb34e8af3b378e86eef1cb31e645a8bd15fb74a2d91233e91cd396.scope: Deactivated successfully.
Dec 13 02:30:57 np0005558241 podman[76974]: 2025-12-13 07:30:57.337410334 +0000 UTC m=+0.675494408 container died 1a65896932fb34e8af3b378e86eef1cb31e645a8bd15fb74a2d91233e91cd396 (image=quay.io/ceph/ceph:v20, name=romantic_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: ms_deliver_dispatch: unhandled message 0x55d452613860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.vndjzx
Dec 13 02:30:57 np0005558241 systemd[1]: var-lib-containers-storage-overlay-58f519ad13d008141e47260a3cc47e7918261b3becd2be4e4018d57ab6b933a2-merged.mount: Deactivated successfully.
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: mgr handle_mgr_map Activating!
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: mgr handle_mgr_map I am now activating
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.vndjzx(active, starting, since 0.0899616s)
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/573820518' entity='mgr.compute-0.vndjzx' cmd={"prefix": "mds metadata"} : dispatch
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).mds e1 all = 1
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/573820518' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/573820518' entity='mgr.compute-0.vndjzx' cmd={"prefix": "mon metadata"} : dispatch
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/573820518' entity='mgr.compute-0.vndjzx' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.vndjzx", "id": "compute-0.vndjzx"} v 0)
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/573820518' entity='mgr.compute-0.vndjzx' cmd={"prefix": "mgr metadata", "who": "compute-0.vndjzx", "id": "compute-0.vndjzx"} : dispatch
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: balancer
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [balancer INFO root] Starting
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: log_channel(cluster) log [INF] : Manager daemon compute-0.vndjzx is now available
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: crash
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:30:57
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [balancer INFO root] No pools available
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: devicehealth
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [devicehealth INFO root] Starting
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: iostat
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: nfs
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: orchestrator
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: pg_autoscaler
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: progress
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [progress INFO root] Loading...
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [progress INFO root] No stored events to load
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [progress INFO root] Loaded [] historic events
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [progress INFO root] Loaded OSDMap, ready.
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] recovery thread starting
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] starting setup
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: rbd_support
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: status
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vndjzx/mirror_snapshot_schedule"} v 0)
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/573820518' entity='mgr.compute-0.vndjzx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vndjzx/mirror_snapshot_schedule"} : dispatch
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: telemetry
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] PerfHandler: starting
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TaskHandler: starting
Dec 13 02:30:57 np0005558241 podman[76974]: 2025-12-13 07:30:57.528339164 +0000 UTC m=+0.866423248 container remove 1a65896932fb34e8af3b378e86eef1cb31e645a8bd15fb74a2d91233e91cd396 (image=quay.io/ceph/ceph:v20, name=romantic_proskuriakova, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vndjzx/trash_purge_schedule"} v 0)
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/573820518' entity='mgr.compute-0.vndjzx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vndjzx/trash_purge_schedule"} : dispatch
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/573820518' entity='mgr.compute-0.vndjzx' 
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] setup complete
Dec 13 02:30:57 np0005558241 systemd[1]: libpod-conmon-1a65896932fb34e8af3b378e86eef1cb31e645a8bd15fb74a2d91233e91cd396.scope: Deactivated successfully.
Dec 13 02:30:57 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: volumes
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/573820518' entity='mgr.compute-0.vndjzx' 
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Dec 13 02:30:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/573820518' entity='mgr.compute-0.vndjzx' 
Dec 13 02:30:58 np0005558241 ceph-mon[76537]: Activating manager daemon compute-0.vndjzx
Dec 13 02:30:58 np0005558241 ceph-mon[76537]: Manager daemon compute-0.vndjzx is now available
Dec 13 02:30:58 np0005558241 ceph-mon[76537]: from='mgr.14102 192.168.122.100:0/573820518' entity='mgr.compute-0.vndjzx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vndjzx/mirror_snapshot_schedule"} : dispatch
Dec 13 02:30:58 np0005558241 ceph-mon[76537]: from='mgr.14102 192.168.122.100:0/573820518' entity='mgr.compute-0.vndjzx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vndjzx/trash_purge_schedule"} : dispatch
Dec 13 02:30:58 np0005558241 ceph-mon[76537]: from='mgr.14102 192.168.122.100:0/573820518' entity='mgr.compute-0.vndjzx' 
Dec 13 02:30:58 np0005558241 ceph-mon[76537]: from='mgr.14102 192.168.122.100:0/573820518' entity='mgr.compute-0.vndjzx' 
Dec 13 02:30:58 np0005558241 ceph-mon[76537]: from='mgr.14102 192.168.122.100:0/573820518' entity='mgr.compute-0.vndjzx' 
Dec 13 02:30:58 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.vndjzx(active, since 1.60654s)
Dec 13 02:30:59 np0005558241 ceph-mgr[76830]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:30:59 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:30:59 np0005558241 podman[77107]: 2025-12-13 07:30:59.640141374 +0000 UTC m=+0.066501170 container create 253cedd480389deec36a6253891f7f47e25cee792e05c994d2b373936d5cd8d9 (image=quay.io/ceph/ceph:v20, name=funny_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:30:59 np0005558241 systemd[1]: Started libpod-conmon-253cedd480389deec36a6253891f7f47e25cee792e05c994d2b373936d5cd8d9.scope.
Dec 13 02:30:59 np0005558241 podman[77107]: 2025-12-13 07:30:59.609858624 +0000 UTC m=+0.036218480 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:30:59 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:30:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28d0bd3c131c29a7cc0fb67c94d6d2101e53fa3124dbac455f33b42b05e1b7b7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28d0bd3c131c29a7cc0fb67c94d6d2101e53fa3124dbac455f33b42b05e1b7b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28d0bd3c131c29a7cc0fb67c94d6d2101e53fa3124dbac455f33b42b05e1b7b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:30:59 np0005558241 podman[77107]: 2025-12-13 07:30:59.735216449 +0000 UTC m=+0.161576265 container init 253cedd480389deec36a6253891f7f47e25cee792e05c994d2b373936d5cd8d9 (image=quay.io/ceph/ceph:v20, name=funny_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:30:59 np0005558241 podman[77107]: 2025-12-13 07:30:59.746315947 +0000 UTC m=+0.172675743 container start 253cedd480389deec36a6253891f7f47e25cee792e05c994d2b373936d5cd8d9 (image=quay.io/ceph/ceph:v20, name=funny_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:30:59 np0005558241 podman[77107]: 2025-12-13 07:30:59.750838901 +0000 UTC m=+0.177198707 container attach 253cedd480389deec36a6253891f7f47e25cee792e05c994d2b373936d5cd8d9 (image=quay.io/ceph/ceph:v20, name=funny_wright, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:00 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.vndjzx(active, since 2s)
Dec 13 02:31:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 13 02:31:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1584475163' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 02:31:00 np0005558241 funny_wright[77123]: 
Dec 13 02:31:00 np0005558241 funny_wright[77123]: {
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    "fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    "health": {
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "status": "HEALTH_OK",
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "checks": {},
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "mutes": []
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    },
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    "election_epoch": 5,
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    "quorum": [
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        0
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    ],
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    "quorum_names": [
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "compute-0"
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    ],
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    "quorum_age": 10,
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    "monmap": {
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "epoch": 1,
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "min_mon_release_name": "tentacle",
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "num_mons": 1
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    },
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    "osdmap": {
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "epoch": 1,
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "num_osds": 0,
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "num_up_osds": 0,
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "osd_up_since": 0,
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "num_in_osds": 0,
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "osd_in_since": 0,
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "num_remapped_pgs": 0
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    },
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    "pgmap": {
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "pgs_by_state": [],
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "num_pgs": 0,
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "num_pools": 0,
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "num_objects": 0,
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "data_bytes": 0,
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "bytes_used": 0,
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "bytes_avail": 0,
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "bytes_total": 0
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    },
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    "fsmap": {
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "epoch": 1,
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    "btime": "2025-12-13T07:30:46.025708+0000",
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "by_rank": [],
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "up:standby": 0
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    },
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    "mgrmap": {
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "available": true,
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "num_standbys": 0,
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "modules": [
Dec 13 02:31:00 np0005558241 funny_wright[77123]:            "iostat",
Dec 13 02:31:00 np0005558241 funny_wright[77123]:            "nfs"
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        ],
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "services": {}
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    },
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    "servicemap": {
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "epoch": 1,
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "modified": "2025-12-13T07:30:46.031379+0000",
Dec 13 02:31:00 np0005558241 funny_wright[77123]:        "services": {}
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    },
Dec 13 02:31:00 np0005558241 funny_wright[77123]:    "progress_events": {}
Dec 13 02:31:00 np0005558241 funny_wright[77123]: }
Dec 13 02:31:00 np0005558241 systemd[1]: libpod-253cedd480389deec36a6253891f7f47e25cee792e05c994d2b373936d5cd8d9.scope: Deactivated successfully.
Dec 13 02:31:00 np0005558241 podman[77107]: 2025-12-13 07:31:00.260170509 +0000 UTC m=+0.686530305 container died 253cedd480389deec36a6253891f7f47e25cee792e05c994d2b373936d5cd8d9 (image=quay.io/ceph/ceph:v20, name=funny_wright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 02:31:00 np0005558241 systemd[1]: var-lib-containers-storage-overlay-28d0bd3c131c29a7cc0fb67c94d6d2101e53fa3124dbac455f33b42b05e1b7b7-merged.mount: Deactivated successfully.
Dec 13 02:31:00 np0005558241 podman[77107]: 2025-12-13 07:31:00.309961558 +0000 UTC m=+0.736321344 container remove 253cedd480389deec36a6253891f7f47e25cee792e05c994d2b373936d5cd8d9 (image=quay.io/ceph/ceph:v20, name=funny_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 02:31:00 np0005558241 systemd[1]: libpod-conmon-253cedd480389deec36a6253891f7f47e25cee792e05c994d2b373936d5cd8d9.scope: Deactivated successfully.
Dec 13 02:31:00 np0005558241 podman[77161]: 2025-12-13 07:31:00.402002977 +0000 UTC m=+0.059371680 container create 7c3b7619978e0ab2b1f160a8fa5d2c8837864108018096d1715c259fdb049617 (image=quay.io/ceph/ceph:v20, name=practical_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:31:00 np0005558241 systemd[1]: Started libpod-conmon-7c3b7619978e0ab2b1f160a8fa5d2c8837864108018096d1715c259fdb049617.scope.
Dec 13 02:31:00 np0005558241 podman[77161]: 2025-12-13 07:31:00.378058517 +0000 UTC m=+0.035427200 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:00 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d287ffb377c47b1acf614a8f28a4dfed7efb93fa3a550c1b850230ffedba9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d287ffb377c47b1acf614a8f28a4dfed7efb93fa3a550c1b850230ffedba9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d287ffb377c47b1acf614a8f28a4dfed7efb93fa3a550c1b850230ffedba9a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d287ffb377c47b1acf614a8f28a4dfed7efb93fa3a550c1b850230ffedba9a/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:00 np0005558241 podman[77161]: 2025-12-13 07:31:00.495459912 +0000 UTC m=+0.152828595 container init 7c3b7619978e0ab2b1f160a8fa5d2c8837864108018096d1715c259fdb049617 (image=quay.io/ceph/ceph:v20, name=practical_elgamal, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 02:31:00 np0005558241 podman[77161]: 2025-12-13 07:31:00.507776071 +0000 UTC m=+0.165144774 container start 7c3b7619978e0ab2b1f160a8fa5d2c8837864108018096d1715c259fdb049617 (image=quay.io/ceph/ceph:v20, name=practical_elgamal, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:00 np0005558241 podman[77161]: 2025-12-13 07:31:00.511624558 +0000 UTC m=+0.168993241 container attach 7c3b7619978e0ab2b1f160a8fa5d2c8837864108018096d1715c259fdb049617 (image=quay.io/ceph/ceph:v20, name=practical_elgamal, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 13 02:31:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3896581973' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 02:31:00 np0005558241 practical_elgamal[77177]: 
Dec 13 02:31:00 np0005558241 practical_elgamal[77177]: [global]
Dec 13 02:31:00 np0005558241 practical_elgamal[77177]: 	fsid = 18ee9de6-e00b-571b-ab9b-b7aab06174df
Dec 13 02:31:00 np0005558241 practical_elgamal[77177]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 13 02:31:00 np0005558241 practical_elgamal[77177]: 	osd_crush_chooseleaf_type = 0
Dec 13 02:31:00 np0005558241 systemd[1]: libpod-7c3b7619978e0ab2b1f160a8fa5d2c8837864108018096d1715c259fdb049617.scope: Deactivated successfully.
Dec 13 02:31:00 np0005558241 podman[77161]: 2025-12-13 07:31:00.922395423 +0000 UTC m=+0.579764126 container died 7c3b7619978e0ab2b1f160a8fa5d2c8837864108018096d1715c259fdb049617 (image=quay.io/ceph/ceph:v20, name=practical_elgamal, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 02:31:01 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3896581973' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 02:31:01 np0005558241 systemd[1]: var-lib-containers-storage-overlay-75d287ffb377c47b1acf614a8f28a4dfed7efb93fa3a550c1b850230ffedba9a-merged.mount: Deactivated successfully.
Dec 13 02:31:01 np0005558241 podman[77161]: 2025-12-13 07:31:01.402363885 +0000 UTC m=+1.059732548 container remove 7c3b7619978e0ab2b1f160a8fa5d2c8837864108018096d1715c259fdb049617 (image=quay.io/ceph/ceph:v20, name=practical_elgamal, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:01 np0005558241 ceph-mgr[76830]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:31:01 np0005558241 systemd[1]: libpod-conmon-7c3b7619978e0ab2b1f160a8fa5d2c8837864108018096d1715c259fdb049617.scope: Deactivated successfully.
Dec 13 02:31:01 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:01 np0005558241 podman[77215]: 2025-12-13 07:31:01.502671971 +0000 UTC m=+0.071158076 container create f45a7d4c3af5933d235119a959d0b655642c0ec3074cc555d4a477845d457dd5 (image=quay.io/ceph/ceph:v20, name=ecstatic_agnesi, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:01 np0005558241 systemd[1]: Started libpod-conmon-f45a7d4c3af5933d235119a959d0b655642c0ec3074cc555d4a477845d457dd5.scope.
Dec 13 02:31:01 np0005558241 podman[77215]: 2025-12-13 07:31:01.469392256 +0000 UTC m=+0.037878381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97aef311373f803313728f7e88c715a1b09917f9a37e566e1ab0b3e76e70d72e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97aef311373f803313728f7e88c715a1b09917f9a37e566e1ab0b3e76e70d72e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97aef311373f803313728f7e88c715a1b09917f9a37e566e1ab0b3e76e70d72e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:01 np0005558241 podman[77215]: 2025-12-13 07:31:01.607772538 +0000 UTC m=+0.176258683 container init f45a7d4c3af5933d235119a959d0b655642c0ec3074cc555d4a477845d457dd5 (image=quay.io/ceph/ceph:v20, name=ecstatic_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:31:01 np0005558241 podman[77215]: 2025-12-13 07:31:01.614427125 +0000 UTC m=+0.182913190 container start f45a7d4c3af5933d235119a959d0b655642c0ec3074cc555d4a477845d457dd5 (image=quay.io/ceph/ceph:v20, name=ecstatic_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 02:31:01 np0005558241 podman[77215]: 2025-12-13 07:31:01.617750338 +0000 UTC m=+0.186236503 container attach f45a7d4c3af5933d235119a959d0b655642c0ec3074cc555d4a477845d457dd5 (image=quay.io/ceph/ceph:v20, name=ecstatic_agnesi, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 02:31:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Dec 13 02:31:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2024297309' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Dec 13 02:31:02 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/2024297309' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Dec 13 02:31:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2024297309' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 13 02:31:02 np0005558241 ceph-mgr[76830]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 13 02:31:02 np0005558241 ceph-mgr[76830]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 13 02:31:02 np0005558241 ceph-mgr[76830]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 13 02:31:02 np0005558241 ceph-mgr[76830]: mgr respawn  1: '-n'
Dec 13 02:31:02 np0005558241 ceph-mgr[76830]: mgr respawn  2: 'mgr.compute-0.vndjzx'
Dec 13 02:31:02 np0005558241 ceph-mgr[76830]: mgr respawn  3: '-f'
Dec 13 02:31:02 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.vndjzx(active, since 5s)
Dec 13 02:31:02 np0005558241 systemd[1]: libpod-f45a7d4c3af5933d235119a959d0b655642c0ec3074cc555d4a477845d457dd5.scope: Deactivated successfully.
Dec 13 02:31:02 np0005558241 podman[77215]: 2025-12-13 07:31:02.63746951 +0000 UTC m=+1.205955605 container died f45a7d4c3af5933d235119a959d0b655642c0ec3074cc555d4a477845d457dd5 (image=quay.io/ceph/ceph:v20, name=ecstatic_agnesi, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 02:31:02 np0005558241 systemd[1]: var-lib-containers-storage-overlay-97aef311373f803313728f7e88c715a1b09917f9a37e566e1ab0b3e76e70d72e-merged.mount: Deactivated successfully.
Dec 13 02:31:02 np0005558241 podman[77215]: 2025-12-13 07:31:02.701755503 +0000 UTC m=+1.270241608 container remove f45a7d4c3af5933d235119a959d0b655642c0ec3074cc555d4a477845d457dd5 (image=quay.io/ceph/ceph:v20, name=ecstatic_agnesi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:31:02 np0005558241 systemd[1]: libpod-conmon-f45a7d4c3af5933d235119a959d0b655642c0ec3074cc555d4a477845d457dd5.scope: Deactivated successfully.
Dec 13 02:31:02 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx[76826]: ignoring --setuser ceph since I am not root
Dec 13 02:31:02 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx[76826]: ignoring --setgroup ceph since I am not root
Dec 13 02:31:02 np0005558241 ceph-mgr[76830]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 13 02:31:02 np0005558241 ceph-mgr[76830]: pidfile_write: ignore empty --pid-file
Dec 13 02:31:02 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'alerts'
Dec 13 02:31:02 np0005558241 podman[77269]: 2025-12-13 07:31:02.807782363 +0000 UTC m=+0.066429508 container create 354a61a89afe1bd449af89643093dab856120f29778cd11d0ebcfcbe8cc87650 (image=quay.io/ceph/ceph:v20, name=beautiful_knuth, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:02 np0005558241 systemd[1]: Started libpod-conmon-354a61a89afe1bd449af89643093dab856120f29778cd11d0ebcfcbe8cc87650.scope.
Dec 13 02:31:02 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ea22a148542d1430bc39260f5d4b2ff8a297af541fd66948f3bfd54dc3a8fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ea22a148542d1430bc39260f5d4b2ff8a297af541fd66948f3bfd54dc3a8fc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ea22a148542d1430bc39260f5d4b2ff8a297af541fd66948f3bfd54dc3a8fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:02 np0005558241 podman[77269]: 2025-12-13 07:31:02.783597446 +0000 UTC m=+0.042244631 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:02 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'balancer'
Dec 13 02:31:02 np0005558241 podman[77269]: 2025-12-13 07:31:02.903809432 +0000 UTC m=+0.162456627 container init 354a61a89afe1bd449af89643093dab856120f29778cd11d0ebcfcbe8cc87650 (image=quay.io/ceph/ceph:v20, name=beautiful_knuth, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Dec 13 02:31:02 np0005558241 podman[77269]: 2025-12-13 07:31:02.914441539 +0000 UTC m=+0.173088714 container start 354a61a89afe1bd449af89643093dab856120f29778cd11d0ebcfcbe8cc87650 (image=quay.io/ceph/ceph:v20, name=beautiful_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:31:02 np0005558241 podman[77269]: 2025-12-13 07:31:02.919520566 +0000 UTC m=+0.178167761 container attach 354a61a89afe1bd449af89643093dab856120f29778cd11d0ebcfcbe8cc87650 (image=quay.io/ceph/ceph:v20, name=beautiful_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 02:31:02 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'cephadm'
Dec 13 02:31:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 13 02:31:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1841903886' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 13 02:31:03 np0005558241 beautiful_knuth[77305]: {
Dec 13 02:31:03 np0005558241 beautiful_knuth[77305]:    "epoch": 5,
Dec 13 02:31:03 np0005558241 beautiful_knuth[77305]:    "available": true,
Dec 13 02:31:03 np0005558241 beautiful_knuth[77305]:    "active_name": "compute-0.vndjzx",
Dec 13 02:31:03 np0005558241 beautiful_knuth[77305]:    "num_standby": 0
Dec 13 02:31:03 np0005558241 beautiful_knuth[77305]: }
Dec 13 02:31:03 np0005558241 systemd[1]: libpod-354a61a89afe1bd449af89643093dab856120f29778cd11d0ebcfcbe8cc87650.scope: Deactivated successfully.
Dec 13 02:31:03 np0005558241 podman[77269]: 2025-12-13 07:31:03.42986089 +0000 UTC m=+0.688508065 container died 354a61a89afe1bd449af89643093dab856120f29778cd11d0ebcfcbe8cc87650 (image=quay.io/ceph/ceph:v20, name=beautiful_knuth, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:03 np0005558241 systemd[1]: var-lib-containers-storage-overlay-59ea22a148542d1430bc39260f5d4b2ff8a297af541fd66948f3bfd54dc3a8fc-merged.mount: Deactivated successfully.
Dec 13 02:31:03 np0005558241 podman[77269]: 2025-12-13 07:31:03.485079435 +0000 UTC m=+0.743726610 container remove 354a61a89afe1bd449af89643093dab856120f29778cd11d0ebcfcbe8cc87650 (image=quay.io/ceph/ceph:v20, name=beautiful_knuth, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:03 np0005558241 systemd[1]: libpod-conmon-354a61a89afe1bd449af89643093dab856120f29778cd11d0ebcfcbe8cc87650.scope: Deactivated successfully.
Dec 13 02:31:03 np0005558241 podman[77356]: 2025-12-13 07:31:03.565757749 +0000 UTC m=+0.058651483 container create ed7ad6a8b80614e6db845a6a5d38fceeedbe529cbd7b715ddeef31d7031b1a8c (image=quay.io/ceph/ceph:v20, name=brave_meninsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:03 np0005558241 systemd[1]: Started libpod-conmon-ed7ad6a8b80614e6db845a6a5d38fceeedbe529cbd7b715ddeef31d7031b1a8c.scope.
Dec 13 02:31:03 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/2024297309' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 13 02:31:03 np0005558241 podman[77356]: 2025-12-13 07:31:03.538551216 +0000 UTC m=+0.031444990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:03 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2578320ba7cd4e3c25f6674989ead80a733d14c1d703bed7ef2b015fd860541c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2578320ba7cd4e3c25f6674989ead80a733d14c1d703bed7ef2b015fd860541c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2578320ba7cd4e3c25f6674989ead80a733d14c1d703bed7ef2b015fd860541c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:03 np0005558241 podman[77356]: 2025-12-13 07:31:03.667578763 +0000 UTC m=+0.160472497 container init ed7ad6a8b80614e6db845a6a5d38fceeedbe529cbd7b715ddeef31d7031b1a8c (image=quay.io/ceph/ceph:v20, name=brave_meninsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:03 np0005558241 podman[77356]: 2025-12-13 07:31:03.677184024 +0000 UTC m=+0.170077748 container start ed7ad6a8b80614e6db845a6a5d38fceeedbe529cbd7b715ddeef31d7031b1a8c (image=quay.io/ceph/ceph:v20, name=brave_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:31:03 np0005558241 podman[77356]: 2025-12-13 07:31:03.681286227 +0000 UTC m=+0.174180021 container attach ed7ad6a8b80614e6db845a6a5d38fceeedbe529cbd7b715ddeef31d7031b1a8c (image=quay.io/ceph/ceph:v20, name=brave_meninsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 02:31:03 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'crash'
Dec 13 02:31:03 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'dashboard'
Dec 13 02:31:04 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'devicehealth'
Dec 13 02:31:04 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'diskprediction_local'
Dec 13 02:31:04 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx[76826]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 13 02:31:04 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx[76826]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 13 02:31:04 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx[76826]:  from numpy import show_config as show_numpy_config
Dec 13 02:31:04 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'influx'
Dec 13 02:31:04 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'insights'
Dec 13 02:31:04 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'iostat'
Dec 13 02:31:04 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'k8sevents'
Dec 13 02:31:05 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'localpool'
Dec 13 02:31:05 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'mds_autoscaler'
Dec 13 02:31:05 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'mirroring'
Dec 13 02:31:05 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'nfs'
Dec 13 02:31:05 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'orchestrator'
Dec 13 02:31:06 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'osd_perf_query'
Dec 13 02:31:06 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'osd_support'
Dec 13 02:31:06 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'pg_autoscaler'
Dec 13 02:31:06 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'progress'
Dec 13 02:31:06 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'prometheus'
Dec 13 02:31:06 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'rbd_support'
Dec 13 02:31:06 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'rgw'
Dec 13 02:31:07 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'rook'
Dec 13 02:31:07 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'selftest'
Dec 13 02:31:07 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'smb'
Dec 13 02:31:07 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'snap_schedule'
Dec 13 02:31:08 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'stats'
Dec 13 02:31:08 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'status'
Dec 13 02:31:08 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'telegraf'
Dec 13 02:31:08 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'telemetry'
Dec 13 02:31:08 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'test_orchestrator'
Dec 13 02:31:08 np0005558241 ceph-mgr[76830]: mgr[py] Loading python module 'volumes'
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: log_channel(cluster) log [INF] : Active manager daemon compute-0.vndjzx restarted
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.vndjzx
Dec 13 02:31:08 np0005558241 ceph-mgr[76830]: ms_deliver_dispatch: unhandled message 0x5555ed86c000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec 13 02:31:08 np0005558241 ceph-mgr[76830]: mgr handle_mgr_map Activating!
Dec 13 02:31:08 np0005558241 ceph-mgr[76830]: mgr handle_mgr_map I am now activating
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.vndjzx(active, starting, since 0.0153937s)
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.vndjzx", "id": "compute-0.vndjzx"} v 0)
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "mgr metadata", "who": "compute-0.vndjzx", "id": "compute-0.vndjzx"} : dispatch
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "mds metadata"} : dispatch
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).mds e1 all = 1
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "mon metadata"} : dispatch
Dec 13 02:31:08 np0005558241 ceph-mgr[76830]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:31:08 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: balancer
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: log_channel(cluster) log [INF] : Manager daemon compute-0.vndjzx is now available
Dec 13 02:31:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Starting
Dec 13 02:31:08 np0005558241 ceph-mgr[76830]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:31:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:31:08
Dec 13 02:31:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:31:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:31:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] No pools available
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: Active manager daemon compute-0.vndjzx restarted
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: Activating manager daemon compute-0.vndjzx
Dec 13 02:31:08 np0005558241 ceph-mon[76537]: Manager daemon compute-0.vndjzx is now available
Dec 13 02:31:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019917780 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:09 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.vndjzx(active, since 1.02759s)
Dec 13 02:31:09 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 13 02:31:09 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 13 02:31:09 np0005558241 brave_meninsky[77373]: {
Dec 13 02:31:09 np0005558241 brave_meninsky[77373]:    "mgrmap_epoch": 7,
Dec 13 02:31:09 np0005558241 brave_meninsky[77373]:    "initialized": true
Dec 13 02:31:09 np0005558241 brave_meninsky[77373]: }
Dec 13 02:31:09 np0005558241 systemd[1]: libpod-ed7ad6a8b80614e6db845a6a5d38fceeedbe529cbd7b715ddeef31d7031b1a8c.scope: Deactivated successfully.
Dec 13 02:31:09 np0005558241 podman[77356]: 2025-12-13 07:31:09.946065367 +0000 UTC m=+6.438959061 container died ed7ad6a8b80614e6db845a6a5d38fceeedbe529cbd7b715ddeef31d7031b1a8c (image=quay.io/ceph/ceph:v20, name=brave_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 02:31:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Dec 13 02:31:09 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2578320ba7cd4e3c25f6674989ead80a733d14c1d703bed7ef2b015fd860541c-merged.mount: Deactivated successfully.
Dec 13 02:31:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Dec 13 02:31:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:09 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec 13 02:31:09 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec 13 02:31:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Dec 13 02:31:09 np0005558241 podman[77356]: 2025-12-13 07:31:09.994522693 +0000 UTC m=+6.487416427 container remove ed7ad6a8b80614e6db845a6a5d38fceeedbe529cbd7b715ddeef31d7031b1a8c (image=quay.io/ceph/ceph:v20, name=brave_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:10 np0005558241 systemd[1]: libpod-conmon-ed7ad6a8b80614e6db845a6a5d38fceeedbe529cbd7b715ddeef31d7031b1a8c.scope: Deactivated successfully.
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: cephadm
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: crash
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: devicehealth
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [devicehealth INFO root] Starting
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: iostat
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: nfs
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: orchestrator
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: pg_autoscaler
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: progress
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [progress INFO root] Loading...
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [progress INFO root] No stored events to load
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [progress INFO root] Loaded [] historic events
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [progress INFO root] Loaded OSDMap, ready.
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] recovery thread starting
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] starting setup
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: rbd_support
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: status
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: telemetry
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vndjzx/mirror_snapshot_schedule"} v 0)
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vndjzx/mirror_snapshot_schedule"} : dispatch
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] PerfHandler: starting
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TaskHandler: starting
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vndjzx/trash_purge_schedule"} v 0)
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vndjzx/trash_purge_schedule"} : dispatch
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] setup complete
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: mgr load Constructed class from module: volumes
Dec 13 02:31:10 np0005558241 podman[77443]: 2025-12-13 07:31:10.069235247 +0000 UTC m=+0.052102148 container create 4ec17ef1fa0710f54f3c6608f78564b980a72e14c2c062d671f22bf1a21a78e2 (image=quay.io/ceph/ceph:v20, name=exciting_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:31:10 np0005558241 systemd[1]: Started libpod-conmon-4ec17ef1fa0710f54f3c6608f78564b980a72e14c2c062d671f22bf1a21a78e2.scope.
Dec 13 02:31:10 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8c9da6651276e2dea3fede0789b6f78095473a91fb8d7843a7fcede7f17ef11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8c9da6651276e2dea3fede0789b6f78095473a91fb8d7843a7fcede7f17ef11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8c9da6651276e2dea3fede0789b6f78095473a91fb8d7843a7fcede7f17ef11/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:10 np0005558241 podman[77443]: 2025-12-13 07:31:10.041969803 +0000 UTC m=+0.024836754 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:10 np0005558241 podman[77443]: 2025-12-13 07:31:10.141404178 +0000 UTC m=+0.124271069 container init 4ec17ef1fa0710f54f3c6608f78564b980a72e14c2c062d671f22bf1a21a78e2 (image=quay.io/ceph/ceph:v20, name=exciting_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 02:31:10 np0005558241 podman[77443]: 2025-12-13 07:31:10.151553112 +0000 UTC m=+0.134419983 container start 4ec17ef1fa0710f54f3c6608f78564b980a72e14c2c062d671f22bf1a21a78e2 (image=quay.io/ceph/ceph:v20, name=exciting_greider, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 02:31:10 np0005558241 podman[77443]: 2025-12-13 07:31:10.155540122 +0000 UTC m=+0.138406993 container attach 4ec17ef1fa0710f54f3c6608f78564b980a72e14c2c062d671f22bf1a21a78e2 (image=quay.io/ceph/ceph:v20, name=exciting_greider, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1124510084' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Dec 13 02:31:10 np0005558241 ceph-mgr[76830]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: Found migration_current of "None". Setting to last migration.
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vndjzx/mirror_snapshot_schedule"} : dispatch
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vndjzx/trash_purge_schedule"} : dispatch
Dec 13 02:31:10 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/1124510084' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Dec 13 02:31:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1124510084' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Dec 13 02:31:11 np0005558241 exciting_greider[77536]: module 'orchestrator' is already enabled (always-on)
Dec 13 02:31:11 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.vndjzx(active, since 2s)
Dec 13 02:31:11 np0005558241 systemd[1]: libpod-4ec17ef1fa0710f54f3c6608f78564b980a72e14c2c062d671f22bf1a21a78e2.scope: Deactivated successfully.
Dec 13 02:31:11 np0005558241 conmon[77536]: conmon 4ec17ef1fa0710f54f3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ec17ef1fa0710f54f3c6608f78564b980a72e14c2c062d671f22bf1a21a78e2.scope/container/memory.events
Dec 13 02:31:11 np0005558241 podman[77443]: 2025-12-13 07:31:11.055019908 +0000 UTC m=+1.037886809 container died 4ec17ef1fa0710f54f3c6608f78564b980a72e14c2c062d671f22bf1a21a78e2 (image=quay.io/ceph/ceph:v20, name=exciting_greider, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 02:31:11 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c8c9da6651276e2dea3fede0789b6f78095473a91fb8d7843a7fcede7f17ef11-merged.mount: Deactivated successfully.
Dec 13 02:31:11 np0005558241 podman[77443]: 2025-12-13 07:31:11.104825738 +0000 UTC m=+1.087692649 container remove 4ec17ef1fa0710f54f3c6608f78564b980a72e14c2c062d671f22bf1a21a78e2 (image=quay.io/ceph/ceph:v20, name=exciting_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:11 np0005558241 systemd[1]: libpod-conmon-4ec17ef1fa0710f54f3c6608f78564b980a72e14c2c062d671f22bf1a21a78e2.scope: Deactivated successfully.
Dec 13 02:31:11 np0005558241 podman[77573]: 2025-12-13 07:31:11.202728634 +0000 UTC m=+0.062944070 container create 6f93af6d642d72de08fceb4c87f1e4e7fa2738dc51c3af000bf74cb1d28a7626 (image=quay.io/ceph/ceph:v20, name=goofy_driscoll, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:31:11 np0005558241 systemd[1]: Started libpod-conmon-6f93af6d642d72de08fceb4c87f1e4e7fa2738dc51c3af000bf74cb1d28a7626.scope.
Dec 13 02:31:11 np0005558241 podman[77573]: 2025-12-13 07:31:11.176534527 +0000 UTC m=+0.036750053 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:11 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5b2a089feaab8aeec40ff4b3233de8585b9506aa39602a30e8d612073e4a67/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5b2a089feaab8aeec40ff4b3233de8585b9506aa39602a30e8d612073e4a67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5b2a089feaab8aeec40ff4b3233de8585b9506aa39602a30e8d612073e4a67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:11 np0005558241 podman[77573]: 2025-12-13 07:31:11.293916352 +0000 UTC m=+0.154131868 container init 6f93af6d642d72de08fceb4c87f1e4e7fa2738dc51c3af000bf74cb1d28a7626 (image=quay.io/ceph/ceph:v20, name=goofy_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:11 np0005558241 podman[77573]: 2025-12-13 07:31:11.303236146 +0000 UTC m=+0.163451612 container start 6f93af6d642d72de08fceb4c87f1e4e7fa2738dc51c3af000bf74cb1d28a7626 (image=quay.io/ceph/ceph:v20, name=goofy_driscoll, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:11 np0005558241 podman[77573]: 2025-12-13 07:31:11.307609145 +0000 UTC m=+0.167824661 container attach 6f93af6d642d72de08fceb4c87f1e4e7fa2738dc51c3af000bf74cb1d28a7626 (image=quay.io/ceph/ceph:v20, name=goofy_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 02:31:11 np0005558241 ceph-mgr[76830]: [cephadm INFO cherrypy.error] [13/Dec/2025:07:31:11] ENGINE Bus STARTING
Dec 13 02:31:11 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : [13/Dec/2025:07:31:11] ENGINE Bus STARTING
Dec 13 02:31:11 np0005558241 ceph-mgr[76830]: [cephadm INFO cherrypy.error] [13/Dec/2025:07:31:11] ENGINE Serving on http://192.168.122.100:8765
Dec 13 02:31:11 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : [13/Dec/2025:07:31:11] ENGINE Serving on http://192.168.122.100:8765
Dec 13 02:31:11 np0005558241 ceph-mgr[76830]: [cephadm INFO cherrypy.error] [13/Dec/2025:07:31:11] ENGINE Serving on https://192.168.122.100:7150
Dec 13 02:31:11 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : [13/Dec/2025:07:31:11] ENGINE Serving on https://192.168.122.100:7150
Dec 13 02:31:11 np0005558241 ceph-mgr[76830]: [cephadm INFO cherrypy.error] [13/Dec/2025:07:31:11] ENGINE Bus STARTED
Dec 13 02:31:11 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : [13/Dec/2025:07:31:11] ENGINE Bus STARTED
Dec 13 02:31:11 np0005558241 ceph-mgr[76830]: [cephadm INFO cherrypy.error] [13/Dec/2025:07:31:11] ENGINE Client ('192.168.122.100', 53152) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 13 02:31:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 02:31:11 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : [13/Dec/2025:07:31:11] ENGINE Client ('192.168.122.100', 53152) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 13 02:31:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 02:31:11 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:31:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Dec 13 02:31:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 02:31:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 02:31:11 np0005558241 systemd[1]: libpod-6f93af6d642d72de08fceb4c87f1e4e7fa2738dc51c3af000bf74cb1d28a7626.scope: Deactivated successfully.
Dec 13 02:31:11 np0005558241 podman[77573]: 2025-12-13 07:31:11.969814629 +0000 UTC m=+0.830030085 container died 6f93af6d642d72de08fceb4c87f1e4e7fa2738dc51c3af000bf74cb1d28a7626 (image=quay.io/ceph/ceph:v20, name=goofy_driscoll, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 02:31:12 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:12 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/1124510084' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Dec 13 02:31:12 np0005558241 ceph-mon[76537]: [13/Dec/2025:07:31:11] ENGINE Bus STARTING
Dec 13 02:31:12 np0005558241 ceph-mon[76537]: [13/Dec/2025:07:31:11] ENGINE Serving on http://192.168.122.100:8765
Dec 13 02:31:12 np0005558241 ceph-mon[76537]: [13/Dec/2025:07:31:11] ENGINE Serving on https://192.168.122.100:7150
Dec 13 02:31:12 np0005558241 ceph-mon[76537]: [13/Dec/2025:07:31:11] ENGINE Bus STARTED
Dec 13 02:31:12 np0005558241 ceph-mon[76537]: [13/Dec/2025:07:31:11] ENGINE Client ('192.168.122.100', 53152) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 13 02:31:12 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ce5b2a089feaab8aeec40ff4b3233de8585b9506aa39602a30e8d612073e4a67-merged.mount: Deactivated successfully.
Dec 13 02:31:12 np0005558241 podman[77573]: 2025-12-13 07:31:12.112621851 +0000 UTC m=+0.972837297 container remove 6f93af6d642d72de08fceb4c87f1e4e7fa2738dc51c3af000bf74cb1d28a7626 (image=quay.io/ceph/ceph:v20, name=goofy_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:12 np0005558241 systemd[1]: libpod-conmon-6f93af6d642d72de08fceb4c87f1e4e7fa2738dc51c3af000bf74cb1d28a7626.scope: Deactivated successfully.
Dec 13 02:31:12 np0005558241 podman[77651]: 2025-12-13 07:31:12.20424495 +0000 UTC m=+0.064471878 container create e28738b6de457194a3b6a97d00890e91aa8b063c57434d23f5ee9224d0f6fe94 (image=quay.io/ceph/ceph:v20, name=gallant_nobel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 02:31:12 np0005558241 systemd[1]: Started libpod-conmon-e28738b6de457194a3b6a97d00890e91aa8b063c57434d23f5ee9224d0f6fe94.scope.
Dec 13 02:31:12 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47db926bfd73536d02dc6ca8924383402ba602148612113be956e3016d23e10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47db926bfd73536d02dc6ca8924383402ba602148612113be956e3016d23e10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47db926bfd73536d02dc6ca8924383402ba602148612113be956e3016d23e10/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:12 np0005558241 podman[77651]: 2025-12-13 07:31:12.176920345 +0000 UTC m=+0.037147303 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:12 np0005558241 podman[77651]: 2025-12-13 07:31:12.281645932 +0000 UTC m=+0.141872880 container init e28738b6de457194a3b6a97d00890e91aa8b063c57434d23f5ee9224d0f6fe94 (image=quay.io/ceph/ceph:v20, name=gallant_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:12 np0005558241 podman[77651]: 2025-12-13 07:31:12.288533655 +0000 UTC m=+0.148760593 container start e28738b6de457194a3b6a97d00890e91aa8b063c57434d23f5ee9224d0f6fe94 (image=quay.io/ceph/ceph:v20, name=gallant_nobel, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:31:12 np0005558241 podman[77651]: 2025-12-13 07:31:12.292358621 +0000 UTC m=+0.152585589 container attach e28738b6de457194a3b6a97d00890e91aa8b063c57434d23f5ee9224d0f6fe94 (image=quay.io/ceph/ceph:v20, name=gallant_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 02:31:12 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:31:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Dec 13 02:31:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:12 np0005558241 ceph-mgr[76830]: [cephadm INFO root] Set ssh ssh_user
Dec 13 02:31:12 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec 13 02:31:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Dec 13 02:31:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:12 np0005558241 ceph-mgr[76830]: [cephadm INFO root] Set ssh ssh_config
Dec 13 02:31:12 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec 13 02:31:12 np0005558241 ceph-mgr[76830]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec 13 02:31:12 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec 13 02:31:12 np0005558241 gallant_nobel[77668]: ssh user set to ceph-admin. sudo will be used
Dec 13 02:31:12 np0005558241 systemd[1]: libpod-e28738b6de457194a3b6a97d00890e91aa8b063c57434d23f5ee9224d0f6fe94.scope: Deactivated successfully.
Dec 13 02:31:12 np0005558241 podman[77651]: 2025-12-13 07:31:12.72366532 +0000 UTC m=+0.583892248 container died e28738b6de457194a3b6a97d00890e91aa8b063c57434d23f5ee9224d0f6fe94 (image=quay.io/ceph/ceph:v20, name=gallant_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:31:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d47db926bfd73536d02dc6ca8924383402ba602148612113be956e3016d23e10-merged.mount: Deactivated successfully.
Dec 13 02:31:12 np0005558241 podman[77651]: 2025-12-13 07:31:12.764435173 +0000 UTC m=+0.624662111 container remove e28738b6de457194a3b6a97d00890e91aa8b063c57434d23f5ee9224d0f6fe94 (image=quay.io/ceph/ceph:v20, name=gallant_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:31:12 np0005558241 systemd[1]: libpod-conmon-e28738b6de457194a3b6a97d00890e91aa8b063c57434d23f5ee9224d0f6fe94.scope: Deactivated successfully.
Dec 13 02:31:12 np0005558241 podman[77704]: 2025-12-13 07:31:12.851539959 +0000 UTC m=+0.057575306 container create 633342bd8004ec8432a4d85f0ff9e0bce135f7f84a5bb763069d418161e063c0 (image=quay.io/ceph/ceph:v20, name=tender_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:31:12 np0005558241 systemd[1]: Started libpod-conmon-633342bd8004ec8432a4d85f0ff9e0bce135f7f84a5bb763069d418161e063c0.scope.
Dec 13 02:31:12 np0005558241 ceph-mgr[76830]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:31:12 np0005558241 podman[77704]: 2025-12-13 07:31:12.822457409 +0000 UTC m=+0.028492836 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:12 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c678dc3ab2799dba0cb441cc9440f5a6676d94033a6d371b89ab65be98a93bb/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c678dc3ab2799dba0cb441cc9440f5a6676d94033a6d371b89ab65be98a93bb/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c678dc3ab2799dba0cb441cc9440f5a6676d94033a6d371b89ab65be98a93bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c678dc3ab2799dba0cb441cc9440f5a6676d94033a6d371b89ab65be98a93bb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c678dc3ab2799dba0cb441cc9440f5a6676d94033a6d371b89ab65be98a93bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:12 np0005558241 podman[77704]: 2025-12-13 07:31:12.938641204 +0000 UTC m=+0.144676591 container init 633342bd8004ec8432a4d85f0ff9e0bce135f7f84a5bb763069d418161e063c0 (image=quay.io/ceph/ceph:v20, name=tender_tu, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 02:31:12 np0005558241 podman[77704]: 2025-12-13 07:31:12.948625264 +0000 UTC m=+0.154660611 container start 633342bd8004ec8432a4d85f0ff9e0bce135f7f84a5bb763069d418161e063c0 (image=quay.io/ceph/ceph:v20, name=tender_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:31:12 np0005558241 podman[77704]: 2025-12-13 07:31:12.952703467 +0000 UTC m=+0.158738824 container attach 633342bd8004ec8432a4d85f0ff9e0bce135f7f84a5bb763069d418161e063c0 (image=quay.io/ceph/ceph:v20, name=tender_tu, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Dec 13 02:31:13 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:31:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Dec 13 02:31:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:13 np0005558241 ceph-mgr[76830]: [cephadm INFO root] Set ssh ssh_identity_key
Dec 13 02:31:13 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec 13 02:31:13 np0005558241 ceph-mgr[76830]: [cephadm INFO root] Set ssh private key
Dec 13 02:31:13 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Set ssh private key
Dec 13 02:31:13 np0005558241 systemd[1]: libpod-633342bd8004ec8432a4d85f0ff9e0bce135f7f84a5bb763069d418161e063c0.scope: Deactivated successfully.
Dec 13 02:31:13 np0005558241 podman[77748]: 2025-12-13 07:31:13.432971235 +0000 UTC m=+0.022836053 container died 633342bd8004ec8432a4d85f0ff9e0bce135f7f84a5bb763069d418161e063c0 (image=quay.io/ceph/ceph:v20, name=tender_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 02:31:13 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6c678dc3ab2799dba0cb441cc9440f5a6676d94033a6d371b89ab65be98a93bb-merged.mount: Deactivated successfully.
Dec 13 02:31:13 np0005558241 podman[77748]: 2025-12-13 07:31:13.479704708 +0000 UTC m=+0.069569526 container remove 633342bd8004ec8432a4d85f0ff9e0bce135f7f84a5bb763069d418161e063c0 (image=quay.io/ceph/ceph:v20, name=tender_tu, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:31:13 np0005558241 systemd[1]: libpod-conmon-633342bd8004ec8432a4d85f0ff9e0bce135f7f84a5bb763069d418161e063c0.scope: Deactivated successfully.
Dec 13 02:31:13 np0005558241 podman[77763]: 2025-12-13 07:31:13.569015639 +0000 UTC m=+0.051574765 container create 07d4630a6dda72c60ecc3e18aa21513dfc955609c29f4fb992374abd687a896e (image=quay.io/ceph/ceph:v20, name=trusting_agnesi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:31:13 np0005558241 systemd[1]: Started libpod-conmon-07d4630a6dda72c60ecc3e18aa21513dfc955609c29f4fb992374abd687a896e.scope.
Dec 13 02:31:13 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:13 np0005558241 podman[77763]: 2025-12-13 07:31:13.548447673 +0000 UTC m=+0.031006839 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea4f9fe68eb26d078257eb5a953af5b61f7c9ab89794eb9e56780a48c94b02fe/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea4f9fe68eb26d078257eb5a953af5b61f7c9ab89794eb9e56780a48c94b02fe/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea4f9fe68eb26d078257eb5a953af5b61f7c9ab89794eb9e56780a48c94b02fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea4f9fe68eb26d078257eb5a953af5b61f7c9ab89794eb9e56780a48c94b02fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea4f9fe68eb26d078257eb5a953af5b61f7c9ab89794eb9e56780a48c94b02fe/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:13 np0005558241 podman[77763]: 2025-12-13 07:31:13.683173283 +0000 UTC m=+0.165732499 container init 07d4630a6dda72c60ecc3e18aa21513dfc955609c29f4fb992374abd687a896e (image=quay.io/ceph/ceph:v20, name=trusting_agnesi, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:13 np0005558241 podman[77763]: 2025-12-13 07:31:13.693507492 +0000 UTC m=+0.176066608 container start 07d4630a6dda72c60ecc3e18aa21513dfc955609c29f4fb992374abd687a896e (image=quay.io/ceph/ceph:v20, name=trusting_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:13 np0005558241 podman[77763]: 2025-12-13 07:31:13.701435661 +0000 UTC m=+0.183995017 container attach 07d4630a6dda72c60ecc3e18aa21513dfc955609c29f4fb992374abd687a896e (image=quay.io/ceph/ceph:v20, name=trusting_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 02:31:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:13 np0005558241 ceph-mon[76537]: Set ssh ssh_user
Dec 13 02:31:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:13 np0005558241 ceph-mon[76537]: Set ssh ssh_config
Dec 13 02:31:13 np0005558241 ceph-mon[76537]: ssh user set to ceph-admin. sudo will be used
Dec 13 02:31:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:14 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:14 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:31:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Dec 13 02:31:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:14 np0005558241 ceph-mgr[76830]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec 13 02:31:14 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec 13 02:31:14 np0005558241 systemd[1]: libpod-07d4630a6dda72c60ecc3e18aa21513dfc955609c29f4fb992374abd687a896e.scope: Deactivated successfully.
Dec 13 02:31:14 np0005558241 podman[77763]: 2025-12-13 07:31:14.143584823 +0000 UTC m=+0.626143979 container died 07d4630a6dda72c60ecc3e18aa21513dfc955609c29f4fb992374abd687a896e (image=quay.io/ceph/ceph:v20, name=trusting_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 02:31:14 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ea4f9fe68eb26d078257eb5a953af5b61f7c9ab89794eb9e56780a48c94b02fe-merged.mount: Deactivated successfully.
Dec 13 02:31:14 np0005558241 podman[77763]: 2025-12-13 07:31:14.193195088 +0000 UTC m=+0.675754234 container remove 07d4630a6dda72c60ecc3e18aa21513dfc955609c29f4fb992374abd687a896e (image=quay.io/ceph/ceph:v20, name=trusting_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 02:31:14 np0005558241 systemd[1]: libpod-conmon-07d4630a6dda72c60ecc3e18aa21513dfc955609c29f4fb992374abd687a896e.scope: Deactivated successfully.
Dec 13 02:31:14 np0005558241 podman[77817]: 2025-12-13 07:31:14.292465169 +0000 UTC m=+0.068775467 container create 9e5e7150e9f24df3a1dfcf7353cc68b1ed70efab2abad4baf02d5e7033b2f744 (image=quay.io/ceph/ceph:v20, name=jovial_dirac, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 02:31:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052847 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:14 np0005558241 systemd[1]: Started libpod-conmon-9e5e7150e9f24df3a1dfcf7353cc68b1ed70efab2abad4baf02d5e7033b2f744.scope.
Dec 13 02:31:14 np0005558241 podman[77817]: 2025-12-13 07:31:14.264716682 +0000 UTC m=+0.041027110 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:14 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdda300a357d7785f211593417454d7b0156dfe8563bbfe4ee74a4d1d97e70c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdda300a357d7785f211593417454d7b0156dfe8563bbfe4ee74a4d1d97e70c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdda300a357d7785f211593417454d7b0156dfe8563bbfe4ee74a4d1d97e70c1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:14 np0005558241 podman[77817]: 2025-12-13 07:31:14.385063882 +0000 UTC m=+0.161374270 container init 9e5e7150e9f24df3a1dfcf7353cc68b1ed70efab2abad4baf02d5e7033b2f744 (image=quay.io/ceph/ceph:v20, name=jovial_dirac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:14 np0005558241 podman[77817]: 2025-12-13 07:31:14.395278488 +0000 UTC m=+0.171588786 container start 9e5e7150e9f24df3a1dfcf7353cc68b1ed70efab2abad4baf02d5e7033b2f744 (image=quay.io/ceph/ceph:v20, name=jovial_dirac, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 02:31:14 np0005558241 podman[77817]: 2025-12-13 07:31:14.399462553 +0000 UTC m=+0.175772891 container attach 9e5e7150e9f24df3a1dfcf7353cc68b1ed70efab2abad4baf02d5e7033b2f744 (image=quay.io/ceph/ceph:v20, name=jovial_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 02:31:14 np0005558241 ceph-mon[76537]: Set ssh ssh_identity_key
Dec 13 02:31:14 np0005558241 ceph-mon[76537]: Set ssh private key
Dec 13 02:31:14 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:14 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:31:14 np0005558241 jovial_dirac[77833]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpUDWxvfF9I6E50IqT3M97ZcjOj8tbqb6nWw1ybE/DR6RrVweTO00eeAnaqMtftp/0Op6CeAPwgRi3Mp3fXkjYvWzvXlcqp0yxfUd/zKhCMUywq4jnOB/kT/vyQsmX+8NyKVGAMDWuzlugrzZxM8uCMrJDJwJhtA1/BrgEP8UntRKqMfqy40ia0PsleKp4NK1mSbLmpR8PimZaZGy1kCPs43gwcp9VDlxKYg2A6j3bE0p55OGw+LsgL6c4Sw8jL3D+Xczn53z9tq0FHIpR7R6Xc9/tgN2TE5S9HLnmePtVUDHykK3uEkrrTBymAuZS0V0luZKq8ySzS3q5+6TqR1RKlefP+Z7XYgdzLFjjUSMYeMhmWGYp4BXVTpshk3Ut/lq6iHJSgD49rDV32z+fYLJrtiFA3X1Hmvtwi9/5gFMuueBYu/exAt7ImNMXisLkG1EIO+mpwPZzkErVRy93DE17qw1n25ie4yiVmn2YnoPnsptpAunMRGVTY7VtLr13jpM= zuul@controller
Dec 13 02:31:14 np0005558241 systemd[1]: libpod-9e5e7150e9f24df3a1dfcf7353cc68b1ed70efab2abad4baf02d5e7033b2f744.scope: Deactivated successfully.
Dec 13 02:31:14 np0005558241 podman[77817]: 2025-12-13 07:31:14.826911147 +0000 UTC m=+0.603221465 container died 9e5e7150e9f24df3a1dfcf7353cc68b1ed70efab2abad4baf02d5e7033b2f744 (image=quay.io/ceph/ceph:v20, name=jovial_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 02:31:14 np0005558241 systemd[1]: var-lib-containers-storage-overlay-bdda300a357d7785f211593417454d7b0156dfe8563bbfe4ee74a4d1d97e70c1-merged.mount: Deactivated successfully.
Dec 13 02:31:14 np0005558241 podman[77817]: 2025-12-13 07:31:14.869853004 +0000 UTC m=+0.646163332 container remove 9e5e7150e9f24df3a1dfcf7353cc68b1ed70efab2abad4baf02d5e7033b2f744 (image=quay.io/ceph/ceph:v20, name=jovial_dirac, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 02:31:14 np0005558241 systemd[1]: libpod-conmon-9e5e7150e9f24df3a1dfcf7353cc68b1ed70efab2abad4baf02d5e7033b2f744.scope: Deactivated successfully.
Dec 13 02:31:14 np0005558241 ceph-mgr[76830]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:31:14 np0005558241 podman[77870]: 2025-12-13 07:31:14.948039386 +0000 UTC m=+0.052044607 container create 0612999290f0de5bc6914d574f00d3b8e840c11c34f5f17695662a5a65477c8a (image=quay.io/ceph/ceph:v20, name=thirsty_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 02:31:14 np0005558241 systemd[1]: Started libpod-conmon-0612999290f0de5bc6914d574f00d3b8e840c11c34f5f17695662a5a65477c8a.scope.
Dec 13 02:31:15 np0005558241 podman[77870]: 2025-12-13 07:31:14.920827843 +0000 UTC m=+0.024833114 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:15 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcef050f2b42acb06cdf65f51180253a4d906d55c4cc0e677255b0135178a29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcef050f2b42acb06cdf65f51180253a4d906d55c4cc0e677255b0135178a29/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcef050f2b42acb06cdf65f51180253a4d906d55c4cc0e677255b0135178a29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:15 np0005558241 podman[77870]: 2025-12-13 07:31:15.034243708 +0000 UTC m=+0.138248919 container init 0612999290f0de5bc6914d574f00d3b8e840c11c34f5f17695662a5a65477c8a (image=quay.io/ceph/ceph:v20, name=thirsty_mcnulty, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:31:15 np0005558241 podman[77870]: 2025-12-13 07:31:15.046170818 +0000 UTC m=+0.150176029 container start 0612999290f0de5bc6914d574f00d3b8e840c11c34f5f17695662a5a65477c8a (image=quay.io/ceph/ceph:v20, name=thirsty_mcnulty, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:31:15 np0005558241 podman[77870]: 2025-12-13 07:31:15.05025833 +0000 UTC m=+0.154263551 container attach 0612999290f0de5bc6914d574f00d3b8e840c11c34f5f17695662a5a65477c8a (image=quay.io/ceph/ceph:v20, name=thirsty_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 02:31:15 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:31:15 np0005558241 systemd-logind[787]: New session 24 of user ceph-admin.
Dec 13 02:31:15 np0005558241 systemd[1]: Created slice User Slice of UID 42477.
Dec 13 02:31:15 np0005558241 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 13 02:31:15 np0005558241 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 13 02:31:15 np0005558241 ceph-mon[76537]: Set ssh ssh_identity_pub
Dec 13 02:31:15 np0005558241 systemd[1]: Starting User Manager for UID 42477...
Dec 13 02:31:15 np0005558241 systemd-logind[787]: New session 26 of user ceph-admin.
Dec 13 02:31:15 np0005558241 systemd[77917]: Queued start job for default target Main User Target.
Dec 13 02:31:15 np0005558241 systemd[77917]: Created slice User Application Slice.
Dec 13 02:31:15 np0005558241 systemd[77917]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 13 02:31:15 np0005558241 systemd[77917]: Started Daily Cleanup of User's Temporary Directories.
Dec 13 02:31:15 np0005558241 systemd[77917]: Reached target Paths.
Dec 13 02:31:15 np0005558241 systemd[77917]: Reached target Timers.
Dec 13 02:31:15 np0005558241 systemd[77917]: Starting D-Bus User Message Bus Socket...
Dec 13 02:31:15 np0005558241 systemd[77917]: Starting Create User's Volatile Files and Directories...
Dec 13 02:31:15 np0005558241 systemd[77917]: Listening on D-Bus User Message Bus Socket.
Dec 13 02:31:15 np0005558241 systemd[77917]: Finished Create User's Volatile Files and Directories.
Dec 13 02:31:15 np0005558241 systemd[77917]: Reached target Sockets.
Dec 13 02:31:15 np0005558241 systemd[77917]: Reached target Basic System.
Dec 13 02:31:15 np0005558241 systemd[77917]: Reached target Main User Target.
Dec 13 02:31:15 np0005558241 systemd[77917]: Startup finished in 197ms.
Dec 13 02:31:15 np0005558241 systemd[1]: Started User Manager for UID 42477.
Dec 13 02:31:15 np0005558241 systemd[1]: Started Session 24 of User ceph-admin.
Dec 13 02:31:15 np0005558241 systemd[1]: Started Session 26 of User ceph-admin.
Dec 13 02:31:16 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:16 np0005558241 systemd-logind[787]: New session 27 of user ceph-admin.
Dec 13 02:31:16 np0005558241 systemd[1]: Started Session 27 of User ceph-admin.
Dec 13 02:31:16 np0005558241 systemd-logind[787]: New session 28 of user ceph-admin.
Dec 13 02:31:16 np0005558241 systemd[1]: Started Session 28 of User ceph-admin.
Dec 13 02:31:16 np0005558241 ceph-mgr[76830]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:31:16 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec 13 02:31:16 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec 13 02:31:17 np0005558241 systemd-logind[787]: New session 29 of user ceph-admin.
Dec 13 02:31:17 np0005558241 systemd[1]: Started Session 29 of User ceph-admin.
Dec 13 02:31:17 np0005558241 systemd-logind[787]: New session 30 of user ceph-admin.
Dec 13 02:31:17 np0005558241 systemd[1]: Started Session 30 of User ceph-admin.
Dec 13 02:31:17 np0005558241 ceph-mon[76537]: Deploying cephadm binary to compute-0
Dec 13 02:31:17 np0005558241 systemd-logind[787]: New session 31 of user ceph-admin.
Dec 13 02:31:17 np0005558241 systemd[1]: Started Session 31 of User ceph-admin.
Dec 13 02:31:18 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:18 np0005558241 systemd-logind[787]: New session 32 of user ceph-admin.
Dec 13 02:31:18 np0005558241 systemd[1]: Started Session 32 of User ceph-admin.
Dec 13 02:31:18 np0005558241 systemd-logind[787]: New session 33 of user ceph-admin.
Dec 13 02:31:18 np0005558241 systemd[1]: Started Session 33 of User ceph-admin.
Dec 13 02:31:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054705 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:19 np0005558241 ceph-mgr[76830]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:31:19 np0005558241 systemd-logind[787]: New session 34 of user ceph-admin.
Dec 13 02:31:19 np0005558241 systemd[1]: Started Session 34 of User ceph-admin.
Dec 13 02:31:20 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:20 np0005558241 systemd-logind[787]: New session 35 of user ceph-admin.
Dec 13 02:31:20 np0005558241 systemd[1]: Started Session 35 of User ceph-admin.
Dec 13 02:31:21 np0005558241 systemd-logind[787]: New session 36 of user ceph-admin.
Dec 13 02:31:21 np0005558241 systemd[1]: Started Session 36 of User ceph-admin.
Dec 13 02:31:21 np0005558241 ceph-mgr[76830]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:31:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 02:31:22 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:22 np0005558241 ceph-mgr[76830]: [cephadm INFO root] Added host compute-0
Dec 13 02:31:22 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 13 02:31:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 02:31:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 02:31:22 np0005558241 thirsty_mcnulty[77887]: Added host 'compute-0' with addr '192.168.122.100'
Dec 13 02:31:22 np0005558241 systemd[1]: libpod-0612999290f0de5bc6914d574f00d3b8e840c11c34f5f17695662a5a65477c8a.scope: Deactivated successfully.
Dec 13 02:31:22 np0005558241 podman[77870]: 2025-12-13 07:31:22.36346689 +0000 UTC m=+7.467472081 container died 0612999290f0de5bc6914d574f00d3b8e840c11c34f5f17695662a5a65477c8a (image=quay.io/ceph/ceph:v20, name=thirsty_mcnulty, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 02:31:23 np0005558241 ceph-mgr[76830]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:31:23 np0005558241 systemd[1]: var-lib-containers-storage-overlay-bfcef050f2b42acb06cdf65f51180253a4d906d55c4cc0e677255b0135178a29-merged.mount: Deactivated successfully.
Dec 13 02:31:23 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:23 np0005558241 ceph-mon[76537]: Added host compute-0
Dec 13 02:31:24 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:24 np0005558241 podman[77870]: 2025-12-13 07:31:24.214638714 +0000 UTC m=+9.318643925 container remove 0612999290f0de5bc6914d574f00d3b8e840c11c34f5f17695662a5a65477c8a (image=quay.io/ceph/ceph:v20, name=thirsty_mcnulty, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:24 np0005558241 systemd[1]: libpod-conmon-0612999290f0de5bc6914d574f00d3b8e840c11c34f5f17695662a5a65477c8a.scope: Deactivated successfully.
Dec 13 02:31:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:24 np0005558241 podman[78346]: 2025-12-13 07:31:24.268973121 +0000 UTC m=+0.029538358 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:24 np0005558241 podman[78346]: 2025-12-13 07:31:24.760055219 +0000 UTC m=+0.520620416 container create 4772ef346fd112cd62d43922e21b93a2185544f3e6eda23c651b4b7dc368e94d (image=quay.io/ceph/ceph:v20, name=quizzical_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:31:25 np0005558241 systemd[1]: Started libpod-conmon-4772ef346fd112cd62d43922e21b93a2185544f3e6eda23c651b4b7dc368e94d.scope.
Dec 13 02:31:25 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:25 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb3951abff129310d7abf619c268ba65d88aab14e9a40e2394390a54c0067e0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:25 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb3951abff129310d7abf619c268ba65d88aab14e9a40e2394390a54c0067e0f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:25 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb3951abff129310d7abf619c268ba65d88aab14e9a40e2394390a54c0067e0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:25 np0005558241 ceph-mgr[76830]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:31:25 np0005558241 podman[78346]: 2025-12-13 07:31:25.590430638 +0000 UTC m=+1.350995845 container init 4772ef346fd112cd62d43922e21b93a2185544f3e6eda23c651b4b7dc368e94d (image=quay.io/ceph/ceph:v20, name=quizzical_carson, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:31:25 np0005558241 podman[78346]: 2025-12-13 07:31:25.599795269 +0000 UTC m=+1.360360466 container start 4772ef346fd112cd62d43922e21b93a2185544f3e6eda23c651b4b7dc368e94d (image=quay.io/ceph/ceph:v20, name=quizzical_carson, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 02:31:25 np0005558241 podman[78346]: 2025-12-13 07:31:25.834558387 +0000 UTC m=+1.595123604 container attach 4772ef346fd112cd62d43922e21b93a2185544f3e6eda23c651b4b7dc368e94d (image=quay.io/ceph/ceph:v20, name=quizzical_carson, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:31:26 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:31:26 np0005558241 ceph-mgr[76830]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec 13 02:31:26 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec 13 02:31:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 13 02:31:26 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:26 np0005558241 quizzical_carson[78374]: Scheduled mon update...
Dec 13 02:31:26 np0005558241 systemd[1]: libpod-4772ef346fd112cd62d43922e21b93a2185544f3e6eda23c651b4b7dc368e94d.scope: Deactivated successfully.
Dec 13 02:31:26 np0005558241 podman[78402]: 2025-12-13 07:31:26.1255717 +0000 UTC m=+0.038897548 container died 4772ef346fd112cd62d43922e21b93a2185544f3e6eda23c651b4b7dc368e94d (image=quay.io/ceph/ceph:v20, name=quizzical_carson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 02:31:26 np0005558241 systemd[1]: var-lib-containers-storage-overlay-eb3951abff129310d7abf619c268ba65d88aab14e9a40e2394390a54c0067e0f-merged.mount: Deactivated successfully.
Dec 13 02:31:26 np0005558241 podman[78357]: 2025-12-13 07:31:26.197217054 +0000 UTC m=+1.906868767 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:26 np0005558241 podman[78402]: 2025-12-13 07:31:26.214369816 +0000 UTC m=+0.127695634 container remove 4772ef346fd112cd62d43922e21b93a2185544f3e6eda23c651b4b7dc368e94d (image=quay.io/ceph/ceph:v20, name=quizzical_carson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:31:26 np0005558241 systemd[1]: libpod-conmon-4772ef346fd112cd62d43922e21b93a2185544f3e6eda23c651b4b7dc368e94d.scope: Deactivated successfully.
Dec 13 02:31:26 np0005558241 podman[78427]: 2025-12-13 07:31:26.256439891 +0000 UTC m=+0.018945667 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:26 np0005558241 podman[78427]: 2025-12-13 07:31:26.384768 +0000 UTC m=+0.147273786 container create 6250e5c72cb7b4d7f486cbeffa8ebc3a8d08a11fd63df322ddee88bc56bdedf4 (image=quay.io/ceph/ceph:v20, name=bold_edison, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 02:31:26 np0005558241 systemd[1]: Started libpod-conmon-6250e5c72cb7b4d7f486cbeffa8ebc3a8d08a11fd63df322ddee88bc56bdedf4.scope.
Dec 13 02:31:26 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af6748ff2084eb7375fd02ef4138c9594ddcbb4f591d9e3b89d4fd6b7aa6296/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af6748ff2084eb7375fd02ef4138c9594ddcbb4f591d9e3b89d4fd6b7aa6296/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af6748ff2084eb7375fd02ef4138c9594ddcbb4f591d9e3b89d4fd6b7aa6296/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:26 np0005558241 podman[78445]: 2025-12-13 07:31:26.464036321 +0000 UTC m=+0.183732303 container create c55b47f2d43a7387eb661453e371f6e7dc20f10c5ce007658724ea74c051f176 (image=quay.io/ceph/ceph:v20, name=festive_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:31:26 np0005558241 podman[78427]: 2025-12-13 07:31:26.517711762 +0000 UTC m=+0.280217608 container init 6250e5c72cb7b4d7f486cbeffa8ebc3a8d08a11fd63df322ddee88bc56bdedf4 (image=quay.io/ceph/ceph:v20, name=bold_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:26 np0005558241 podman[78445]: 2025-12-13 07:31:26.427480911 +0000 UTC m=+0.147176953 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:26 np0005558241 podman[78427]: 2025-12-13 07:31:26.525801701 +0000 UTC m=+0.288307487 container start 6250e5c72cb7b4d7f486cbeffa8ebc3a8d08a11fd63df322ddee88bc56bdedf4 (image=quay.io/ceph/ceph:v20, name=bold_edison, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:31:26 np0005558241 podman[78427]: 2025-12-13 07:31:26.53060175 +0000 UTC m=+0.293107596 container attach 6250e5c72cb7b4d7f486cbeffa8ebc3a8d08a11fd63df322ddee88bc56bdedf4 (image=quay.io/ceph/ceph:v20, name=bold_edison, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:26 np0005558241 systemd[1]: Started libpod-conmon-c55b47f2d43a7387eb661453e371f6e7dc20f10c5ce007658724ea74c051f176.scope.
Dec 13 02:31:26 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:26 np0005558241 podman[78445]: 2025-12-13 07:31:26.636795453 +0000 UTC m=+0.356491465 container init c55b47f2d43a7387eb661453e371f6e7dc20f10c5ce007658724ea74c051f176 (image=quay.io/ceph/ceph:v20, name=festive_khorana, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:26 np0005558241 podman[78445]: 2025-12-13 07:31:26.645666942 +0000 UTC m=+0.365362924 container start c55b47f2d43a7387eb661453e371f6e7dc20f10c5ce007658724ea74c051f176 (image=quay.io/ceph/ceph:v20, name=festive_khorana, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:31:26 np0005558241 podman[78445]: 2025-12-13 07:31:26.650778807 +0000 UTC m=+0.370474839 container attach c55b47f2d43a7387eb661453e371f6e7dc20f10c5ce007658724ea74c051f176 (image=quay.io/ceph/ceph:v20, name=festive_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Dec 13 02:31:26 np0005558241 festive_khorana[78468]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Dec 13 02:31:26 np0005558241 systemd[1]: libpod-c55b47f2d43a7387eb661453e371f6e7dc20f10c5ce007658724ea74c051f176.scope: Deactivated successfully.
Dec 13 02:31:26 np0005558241 podman[78492]: 2025-12-13 07:31:26.809439763 +0000 UTC m=+0.028672387 container died c55b47f2d43a7387eb661453e371f6e7dc20f10c5ce007658724ea74c051f176 (image=quay.io/ceph/ceph:v20, name=festive_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 02:31:26 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d46e584f7710fc3f285f3d967f0806dc3cb1384f12272bf8f8cb9d5dcf2dd658-merged.mount: Deactivated successfully.
Dec 13 02:31:26 np0005558241 podman[78492]: 2025-12-13 07:31:26.849292203 +0000 UTC m=+0.068524797 container remove c55b47f2d43a7387eb661453e371f6e7dc20f10c5ce007658724ea74c051f176 (image=quay.io/ceph/ceph:v20, name=festive_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:31:26 np0005558241 systemd[1]: libpod-conmon-c55b47f2d43a7387eb661453e371f6e7dc20f10c5ce007658724ea74c051f176.scope: Deactivated successfully.
Dec 13 02:31:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Dec 13 02:31:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:26 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:31:26 np0005558241 ceph-mgr[76830]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec 13 02:31:26 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec 13 02:31:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 02:31:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:26 np0005558241 bold_edison[78461]: Scheduled mgr update...
Dec 13 02:31:27 np0005558241 systemd[1]: libpod-6250e5c72cb7b4d7f486cbeffa8ebc3a8d08a11fd63df322ddee88bc56bdedf4.scope: Deactivated successfully.
Dec 13 02:31:27 np0005558241 podman[78427]: 2025-12-13 07:31:27.012014128 +0000 UTC m=+0.774519914 container died 6250e5c72cb7b4d7f486cbeffa8ebc3a8d08a11fd63df322ddee88bc56bdedf4 (image=quay.io/ceph/ceph:v20, name=bold_edison, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 02:31:27 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9af6748ff2084eb7375fd02ef4138c9594ddcbb4f591d9e3b89d4fd6b7aa6296-merged.mount: Deactivated successfully.
Dec 13 02:31:27 np0005558241 ceph-mon[76537]: Saving service mon spec with placement count:5
Dec 13 02:31:27 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:27 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:27 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:27 np0005558241 podman[78427]: 2025-12-13 07:31:27.069992395 +0000 UTC m=+0.832498181 container remove 6250e5c72cb7b4d7f486cbeffa8ebc3a8d08a11fd63df322ddee88bc56bdedf4 (image=quay.io/ceph/ceph:v20, name=bold_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:31:27 np0005558241 systemd[1]: libpod-conmon-6250e5c72cb7b4d7f486cbeffa8ebc3a8d08a11fd63df322ddee88bc56bdedf4.scope: Deactivated successfully.
Dec 13 02:31:27 np0005558241 podman[78569]: 2025-12-13 07:31:27.181795737 +0000 UTC m=+0.077433107 container create 255d5a8edc8cf0bf426734c55e02ff07d31651996e7883b7879fb378d02650a8 (image=quay.io/ceph/ceph:v20, name=elastic_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:31:27 np0005558241 systemd[1]: Started libpod-conmon-255d5a8edc8cf0bf426734c55e02ff07d31651996e7883b7879fb378d02650a8.scope.
Dec 13 02:31:27 np0005558241 podman[78569]: 2025-12-13 07:31:27.145033802 +0000 UTC m=+0.040671222 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:27 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dd4516d0857fcb9671c89d24f413a836aec0732864f191c35e33ae1d523f657/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dd4516d0857fcb9671c89d24f413a836aec0732864f191c35e33ae1d523f657/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dd4516d0857fcb9671c89d24f413a836aec0732864f191c35e33ae1d523f657/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:27 np0005558241 podman[78569]: 2025-12-13 07:31:27.282184688 +0000 UTC m=+0.177822118 container init 255d5a8edc8cf0bf426734c55e02ff07d31651996e7883b7879fb378d02650a8 (image=quay.io/ceph/ceph:v20, name=elastic_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 02:31:27 np0005558241 podman[78569]: 2025-12-13 07:31:27.294540382 +0000 UTC m=+0.190177752 container start 255d5a8edc8cf0bf426734c55e02ff07d31651996e7883b7879fb378d02650a8 (image=quay.io/ceph/ceph:v20, name=elastic_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:27 np0005558241 podman[78569]: 2025-12-13 07:31:27.299587716 +0000 UTC m=+0.195225096 container attach 255d5a8edc8cf0bf426734c55e02ff07d31651996e7883b7879fb378d02650a8 (image=quay.io/ceph/ceph:v20, name=elastic_golick, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 02:31:27 np0005558241 ceph-mgr[76830]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 02:31:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:31:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:27 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:31:27 np0005558241 ceph-mgr[76830]: [cephadm INFO root] Saving service crash spec with placement *
Dec 13 02:31:27 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec 13 02:31:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 13 02:31:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:27 np0005558241 elastic_golick[78587]: Scheduled crash update...
Dec 13 02:31:27 np0005558241 systemd[1]: libpod-255d5a8edc8cf0bf426734c55e02ff07d31651996e7883b7879fb378d02650a8.scope: Deactivated successfully.
Dec 13 02:31:28 np0005558241 podman[78679]: 2025-12-13 07:31:28.041289233 +0000 UTC m=+0.042081307 container died 255d5a8edc8cf0bf426734c55e02ff07d31651996e7883b7879fb378d02650a8 (image=quay.io/ceph/ceph:v20, name=elastic_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:28 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:28 np0005558241 ceph-mon[76537]: Saving service mgr spec with placement count:2
Dec 13 02:31:28 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:28 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:28 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9dd4516d0857fcb9671c89d24f413a836aec0732864f191c35e33ae1d523f657-merged.mount: Deactivated successfully.
Dec 13 02:31:28 np0005558241 podman[78679]: 2025-12-13 07:31:28.092960934 +0000 UTC m=+0.093753008 container remove 255d5a8edc8cf0bf426734c55e02ff07d31651996e7883b7879fb378d02650a8 (image=quay.io/ceph/ceph:v20, name=elastic_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 02:31:28 np0005558241 systemd[1]: libpod-conmon-255d5a8edc8cf0bf426734c55e02ff07d31651996e7883b7879fb378d02650a8.scope: Deactivated successfully.
Dec 13 02:31:28 np0005558241 podman[78696]: 2025-12-13 07:31:28.188416214 +0000 UTC m=+0.057718212 container create e4e91efa8ec404f489698c38050c7776764bb277133925e0f9003624f2ca2120 (image=quay.io/ceph/ceph:v20, name=vibrant_jemison, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 02:31:28 np0005558241 systemd[1]: Started libpod-conmon-e4e91efa8ec404f489698c38050c7776764bb277133925e0f9003624f2ca2120.scope.
Dec 13 02:31:28 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef310fb43df24554e5b9450423588503e970ca65776b35bac21e5fe59715ce78/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef310fb43df24554e5b9450423588503e970ca65776b35bac21e5fe59715ce78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef310fb43df24554e5b9450423588503e970ca65776b35bac21e5fe59715ce78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:28 np0005558241 podman[78696]: 2025-12-13 07:31:28.16590963 +0000 UTC m=+0.035211628 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:28 np0005558241 podman[78696]: 2025-12-13 07:31:28.280977852 +0000 UTC m=+0.150279850 container init e4e91efa8ec404f489698c38050c7776764bb277133925e0f9003624f2ca2120 (image=quay.io/ceph/ceph:v20, name=vibrant_jemison, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:28 np0005558241 podman[78696]: 2025-12-13 07:31:28.288548438 +0000 UTC m=+0.157850426 container start e4e91efa8ec404f489698c38050c7776764bb277133925e0f9003624f2ca2120 (image=quay.io/ceph/ceph:v20, name=vibrant_jemison, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:28 np0005558241 podman[78696]: 2025-12-13 07:31:28.292037984 +0000 UTC m=+0.161339992 container attach e4e91efa8ec404f489698c38050c7776764bb277133925e0f9003624f2ca2120 (image=quay.io/ceph/ceph:v20, name=vibrant_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:31:28 np0005558241 podman[78781]: 2025-12-13 07:31:28.553816978 +0000 UTC m=+0.098014404 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:31:28 np0005558241 podman[78781]: 2025-12-13 07:31:28.673542735 +0000 UTC m=+0.217740151 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 02:31:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Dec 13 02:31:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3458793741' entity='client.admin' 
Dec 13 02:31:28 np0005558241 systemd[1]: libpod-e4e91efa8ec404f489698c38050c7776764bb277133925e0f9003624f2ca2120.scope: Deactivated successfully.
Dec 13 02:31:28 np0005558241 podman[78696]: 2025-12-13 07:31:28.748794997 +0000 UTC m=+0.618097005 container died e4e91efa8ec404f489698c38050c7776764bb277133925e0f9003624f2ca2120 (image=quay.io/ceph/ceph:v20, name=vibrant_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 02:31:28 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ef310fb43df24554e5b9450423588503e970ca65776b35bac21e5fe59715ce78-merged.mount: Deactivated successfully.
Dec 13 02:31:28 np0005558241 podman[78696]: 2025-12-13 07:31:28.798149122 +0000 UTC m=+0.667451130 container remove e4e91efa8ec404f489698c38050c7776764bb277133925e0f9003624f2ca2120 (image=quay.io/ceph/ceph:v20, name=vibrant_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 02:31:28 np0005558241 systemd[1]: libpod-conmon-e4e91efa8ec404f489698c38050c7776764bb277133925e0f9003624f2ca2120.scope: Deactivated successfully.
Dec 13 02:31:28 np0005558241 podman[78853]: 2025-12-13 07:31:28.885401559 +0000 UTC m=+0.053119808 container create 4c3637cf5bd6556e710dde518b260619c7dc96e59092ad10b765403500dc6bd7 (image=quay.io/ceph/ceph:v20, name=priceless_edison, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:28 np0005558241 systemd[1]: Started libpod-conmon-4c3637cf5bd6556e710dde518b260619c7dc96e59092ad10b765403500dc6bd7.scope.
Dec 13 02:31:28 np0005558241 podman[78853]: 2025-12-13 07:31:28.863443569 +0000 UTC m=+0.031161848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:28 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:31:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e08e9cc7b356da5d74341e1539067512de7b309ed2713c392227636981f4c96/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e08e9cc7b356da5d74341e1539067512de7b309ed2713c392227636981f4c96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e08e9cc7b356da5d74341e1539067512de7b309ed2713c392227636981f4c96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:28 np0005558241 podman[78853]: 2025-12-13 07:31:28.985157625 +0000 UTC m=+0.152875864 container init 4c3637cf5bd6556e710dde518b260619c7dc96e59092ad10b765403500dc6bd7 (image=quay.io/ceph/ceph:v20, name=priceless_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:28 np0005558241 podman[78853]: 2025-12-13 07:31:28.991479511 +0000 UTC m=+0.159197750 container start 4c3637cf5bd6556e710dde518b260619c7dc96e59092ad10b765403500dc6bd7 (image=quay.io/ceph/ceph:v20, name=priceless_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 02:31:28 np0005558241 podman[78853]: 2025-12-13 07:31:28.995745886 +0000 UTC m=+0.163464135 container attach 4c3637cf5bd6556e710dde518b260619c7dc96e59092ad10b765403500dc6bd7 (image=quay.io/ceph/ceph:v20, name=priceless_edison, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:29 np0005558241 ceph-mgr[76830]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec 13 02:31:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:31:29 np0005558241 ceph-mon[76537]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 13 02:31:29 np0005558241 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 78973 (sysctl)
Dec 13 02:31:29 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:31:29 np0005558241 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 13 02:31:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Dec 13 02:31:29 np0005558241 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 13 02:31:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:29 np0005558241 systemd[1]: libpod-4c3637cf5bd6556e710dde518b260619c7dc96e59092ad10b765403500dc6bd7.scope: Deactivated successfully.
Dec 13 02:31:29 np0005558241 podman[78853]: 2025-12-13 07:31:29.41396864 +0000 UTC m=+0.581686919 container died 4c3637cf5bd6556e710dde518b260619c7dc96e59092ad10b765403500dc6bd7 (image=quay.io/ceph/ceph:v20, name=priceless_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 02:31:29 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3e08e9cc7b356da5d74341e1539067512de7b309ed2713c392227636981f4c96-merged.mount: Deactivated successfully.
Dec 13 02:31:29 np0005558241 podman[78853]: 2025-12-13 07:31:29.463583191 +0000 UTC m=+0.631301420 container remove 4c3637cf5bd6556e710dde518b260619c7dc96e59092ad10b765403500dc6bd7 (image=quay.io/ceph/ceph:v20, name=priceless_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:29 np0005558241 systemd[1]: libpod-conmon-4c3637cf5bd6556e710dde518b260619c7dc96e59092ad10b765403500dc6bd7.scope: Deactivated successfully.
Dec 13 02:31:29 np0005558241 podman[78992]: 2025-12-13 07:31:29.522953873 +0000 UTC m=+0.040961640 container create 0f4b4fc7acebff5f8e34a3732b66bd868a2e503da1836f2b4ee51660a75ce571 (image=quay.io/ceph/ceph:v20, name=ecstatic_driscoll, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:29 np0005558241 systemd[1]: Started libpod-conmon-0f4b4fc7acebff5f8e34a3732b66bd868a2e503da1836f2b4ee51660a75ce571.scope.
Dec 13 02:31:29 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2e307104eb7716634c46fd4ab6458779e2de8dc9584f81bc6da7fb08088beb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2e307104eb7716634c46fd4ab6458779e2de8dc9584f81bc6da7fb08088beb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2e307104eb7716634c46fd4ab6458779e2de8dc9584f81bc6da7fb08088beb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:29 np0005558241 podman[78992]: 2025-12-13 07:31:29.504899008 +0000 UTC m=+0.022906735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:29 np0005558241 podman[78992]: 2025-12-13 07:31:29.620296529 +0000 UTC m=+0.138304346 container init 0f4b4fc7acebff5f8e34a3732b66bd868a2e503da1836f2b4ee51660a75ce571 (image=quay.io/ceph/ceph:v20, name=ecstatic_driscoll, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:29 np0005558241 podman[78992]: 2025-12-13 07:31:29.632811807 +0000 UTC m=+0.150819574 container start 0f4b4fc7acebff5f8e34a3732b66bd868a2e503da1836f2b4ee51660a75ce571 (image=quay.io/ceph/ceph:v20, name=ecstatic_driscoll, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:29 np0005558241 podman[78992]: 2025-12-13 07:31:29.637448831 +0000 UTC m=+0.155456578 container attach 0f4b4fc7acebff5f8e34a3732b66bd868a2e503da1836f2b4ee51660a75ce571 (image=quay.io/ceph/ceph:v20, name=ecstatic_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 02:31:29 np0005558241 ceph-mon[76537]: Saving service crash spec with placement *
Dec 13 02:31:29 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3458793741' entity='client.admin' 
Dec 13 02:31:29 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:29 np0005558241 ceph-mon[76537]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 13 02:31:29 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:30 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:30 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:31:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 02:31:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:30 np0005558241 ceph-mgr[76830]: [cephadm INFO root] Added label _admin to host compute-0
Dec 13 02:31:30 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec 13 02:31:30 np0005558241 ecstatic_driscoll[79012]: Added label _admin to host compute-0
Dec 13 02:31:30 np0005558241 systemd[1]: libpod-0f4b4fc7acebff5f8e34a3732b66bd868a2e503da1836f2b4ee51660a75ce571.scope: Deactivated successfully.
Dec 13 02:31:30 np0005558241 podman[78992]: 2025-12-13 07:31:30.167978239 +0000 UTC m=+0.685985996 container died 0f4b4fc7acebff5f8e34a3732b66bd868a2e503da1836f2b4ee51660a75ce571 (image=quay.io/ceph/ceph:v20, name=ecstatic_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 02:31:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7c2e307104eb7716634c46fd4ab6458779e2de8dc9584f81bc6da7fb08088beb-merged.mount: Deactivated successfully.
Dec 13 02:31:30 np0005558241 podman[78992]: 2025-12-13 07:31:30.214618677 +0000 UTC m=+0.732626404 container remove 0f4b4fc7acebff5f8e34a3732b66bd868a2e503da1836f2b4ee51660a75ce571 (image=quay.io/ceph/ceph:v20, name=ecstatic_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 02:31:30 np0005558241 systemd[1]: libpod-conmon-0f4b4fc7acebff5f8e34a3732b66bd868a2e503da1836f2b4ee51660a75ce571.scope: Deactivated successfully.
Dec 13 02:31:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:31:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:30 np0005558241 podman[79132]: 2025-12-13 07:31:30.32362778 +0000 UTC m=+0.070990508 container create bad5e132c5f129c5ac0a2a1c18c932190dab0d9fec6aed67d391a52c4bdecbae (image=quay.io/ceph/ceph:v20, name=zealous_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:30 np0005558241 systemd[1]: Started libpod-conmon-bad5e132c5f129c5ac0a2a1c18c932190dab0d9fec6aed67d391a52c4bdecbae.scope.
Dec 13 02:31:30 np0005558241 podman[79132]: 2025-12-13 07:31:30.294067692 +0000 UTC m=+0.041430460 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:30 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1e3e9d1bed6dc1ce2ba54fcbb340f63e20acf816fb73d5510f2b45c8686b709/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1e3e9d1bed6dc1ce2ba54fcbb340f63e20acf816fb73d5510f2b45c8686b709/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1e3e9d1bed6dc1ce2ba54fcbb340f63e20acf816fb73d5510f2b45c8686b709/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:30 np0005558241 podman[79132]: 2025-12-13 07:31:30.434134439 +0000 UTC m=+0.181497197 container init bad5e132c5f129c5ac0a2a1c18c932190dab0d9fec6aed67d391a52c4bdecbae (image=quay.io/ceph/ceph:v20, name=zealous_mayer, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:30 np0005558241 podman[79132]: 2025-12-13 07:31:30.442513105 +0000 UTC m=+0.189875833 container start bad5e132c5f129c5ac0a2a1c18c932190dab0d9fec6aed67d391a52c4bdecbae (image=quay.io/ceph/ceph:v20, name=zealous_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:31:30 np0005558241 podman[79132]: 2025-12-13 07:31:30.44676777 +0000 UTC m=+0.194130598 container attach bad5e132c5f129c5ac0a2a1c18c932190dab0d9fec6aed67d391a52c4bdecbae (image=quay.io/ceph/ceph:v20, name=zealous_mayer, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:31:30 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:30 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:30 np0005558241 podman[79235]: 2025-12-13 07:31:30.818562221 +0000 UTC m=+0.054354679 container create f5b1d4327121d4df4c4ae2a766276756763ad10389ba4fdfb0eade206bafd4a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_montalcini, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 02:31:30 np0005558241 systemd[1]: Started libpod-conmon-f5b1d4327121d4df4c4ae2a766276756763ad10389ba4fdfb0eade206bafd4a9.scope.
Dec 13 02:31:30 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:30 np0005558241 podman[79235]: 2025-12-13 07:31:30.794499179 +0000 UTC m=+0.030291677 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:31:30 np0005558241 podman[79235]: 2025-12-13 07:31:30.905663235 +0000 UTC m=+0.141455783 container init f5b1d4327121d4df4c4ae2a766276756763ad10389ba4fdfb0eade206bafd4a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:30 np0005558241 podman[79235]: 2025-12-13 07:31:30.914122763 +0000 UTC m=+0.149915261 container start f5b1d4327121d4df4c4ae2a766276756763ad10389ba4fdfb0eade206bafd4a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_montalcini, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 02:31:30 np0005558241 trusting_montalcini[79251]: 167 167
Dec 13 02:31:30 np0005558241 systemd[1]: libpod-f5b1d4327121d4df4c4ae2a766276756763ad10389ba4fdfb0eade206bafd4a9.scope: Deactivated successfully.
Dec 13 02:31:30 np0005558241 podman[79235]: 2025-12-13 07:31:30.919831124 +0000 UTC m=+0.155623662 container attach f5b1d4327121d4df4c4ae2a766276756763ad10389ba4fdfb0eade206bafd4a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:31:30 np0005558241 podman[79235]: 2025-12-13 07:31:30.920243594 +0000 UTC m=+0.156036092 container died f5b1d4327121d4df4c4ae2a766276756763ad10389ba4fdfb0eade206bafd4a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay-de09102049af7ec8bf71ad4b1aa93c5bc4d787c092a8b283b958c7ba0adb24b2-merged.mount: Deactivated successfully.
Dec 13 02:31:30 np0005558241 podman[79235]: 2025-12-13 07:31:30.966616755 +0000 UTC m=+0.202409253 container remove f5b1d4327121d4df4c4ae2a766276756763ad10389ba4fdfb0eade206bafd4a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_montalcini, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:31:30 np0005558241 systemd[1]: libpod-conmon-f5b1d4327121d4df4c4ae2a766276756763ad10389ba4fdfb0eade206bafd4a9.scope: Deactivated successfully.
Dec 13 02:31:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Dec 13 02:31:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2807679039' entity='client.admin' 
Dec 13 02:31:31 np0005558241 zealous_mayer[79172]: set mgr/dashboard/cluster/status
Dec 13 02:31:31 np0005558241 systemd[1]: libpod-bad5e132c5f129c5ac0a2a1c18c932190dab0d9fec6aed67d391a52c4bdecbae.scope: Deactivated successfully.
Dec 13 02:31:31 np0005558241 podman[79132]: 2025-12-13 07:31:31.064432943 +0000 UTC m=+0.811795631 container died bad5e132c5f129c5ac0a2a1c18c932190dab0d9fec6aed67d391a52c4bdecbae (image=quay.io/ceph/ceph:v20, name=zealous_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:31 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a1e3e9d1bed6dc1ce2ba54fcbb340f63e20acf816fb73d5510f2b45c8686b709-merged.mount: Deactivated successfully.
Dec 13 02:31:31 np0005558241 podman[79132]: 2025-12-13 07:31:31.100939512 +0000 UTC m=+0.848302200 container remove bad5e132c5f129c5ac0a2a1c18c932190dab0d9fec6aed67d391a52c4bdecbae (image=quay.io/ceph/ceph:v20, name=zealous_mayer, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 02:31:31 np0005558241 systemd[1]: libpod-conmon-bad5e132c5f129c5ac0a2a1c18c932190dab0d9fec6aed67d391a52c4bdecbae.scope: Deactivated successfully.
Dec 13 02:31:31 np0005558241 systemd[1]: Reloading.
Dec 13 02:31:31 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:31:31 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:31:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:31:31 np0005558241 podman[79327]: 2025-12-13 07:31:31.638942174 +0000 UTC m=+0.064783606 container create 701277758e1ae6be351d8957bb17f10b8063fe040f72b7ec08e760020a542a9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 02:31:31 np0005558241 systemd[1]: Started libpod-conmon-701277758e1ae6be351d8957bb17f10b8063fe040f72b7ec08e760020a542a9c.scope.
Dec 13 02:31:31 np0005558241 podman[79327]: 2025-12-13 07:31:31.614644096 +0000 UTC m=+0.040485598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:31:31 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9865d90a3a23828b9e24c4f1c322a752fd571191fbd973895d0fdafab0777903/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:31 np0005558241 ceph-mon[76537]: Added label _admin to host compute-0
Dec 13 02:31:31 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/2807679039' entity='client.admin' 
Dec 13 02:31:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9865d90a3a23828b9e24c4f1c322a752fd571191fbd973895d0fdafab0777903/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9865d90a3a23828b9e24c4f1c322a752fd571191fbd973895d0fdafab0777903/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9865d90a3a23828b9e24c4f1c322a752fd571191fbd973895d0fdafab0777903/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:31 np0005558241 podman[79327]: 2025-12-13 07:31:31.760140697 +0000 UTC m=+0.185982199 container init 701277758e1ae6be351d8957bb17f10b8063fe040f72b7ec08e760020a542a9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sinoussi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:31 np0005558241 podman[79327]: 2025-12-13 07:31:31.777213018 +0000 UTC m=+0.203054450 container start 701277758e1ae6be351d8957bb17f10b8063fe040f72b7ec08e760020a542a9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sinoussi, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:31:31 np0005558241 podman[79327]: 2025-12-13 07:31:31.781058392 +0000 UTC m=+0.206899824 container attach 701277758e1ae6be351d8957bb17f10b8063fe040f72b7ec08e760020a542a9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:32 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:32 np0005558241 python3[79374]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:31:32 np0005558241 podman[79380]: 2025-12-13 07:31:32.167603417 +0000 UTC m=+0.062426268 container create e240e157b8ceb63713884a1bdf0fcb767f85871b939614a223528855d05027f1 (image=quay.io/ceph/ceph:v20, name=stoic_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 02:31:32 np0005558241 systemd[1]: Started libpod-conmon-e240e157b8ceb63713884a1bdf0fcb767f85871b939614a223528855d05027f1.scope.
Dec 13 02:31:32 np0005558241 podman[79380]: 2025-12-13 07:31:32.139578317 +0000 UTC m=+0.034401218 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:32 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40926cf6575eb849816a222857fdd860d5be948a867c128ed338553fbf394de/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40926cf6575eb849816a222857fdd860d5be948a867c128ed338553fbf394de/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:32 np0005558241 podman[79380]: 2025-12-13 07:31:32.280505526 +0000 UTC m=+0.175328427 container init e240e157b8ceb63713884a1bdf0fcb767f85871b939614a223528855d05027f1 (image=quay.io/ceph/ceph:v20, name=stoic_euclid, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 02:31:32 np0005558241 podman[79380]: 2025-12-13 07:31:32.291578078 +0000 UTC m=+0.186400939 container start e240e157b8ceb63713884a1bdf0fcb767f85871b939614a223528855d05027f1 (image=quay.io/ceph/ceph:v20, name=stoic_euclid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 02:31:32 np0005558241 podman[79380]: 2025-12-13 07:31:32.295941626 +0000 UTC m=+0.190764497 container attach e240e157b8ceb63713884a1bdf0fcb767f85871b939614a223528855d05027f1 (image=quay.io/ceph/ceph:v20, name=stoic_euclid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]: [
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:    {
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:        "available": false,
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:        "being_replaced": false,
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:        "ceph_device_lvm": false,
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:        "lsm_data": {},
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:        "lvs": [],
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:        "path": "/dev/sr0",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:        "rejected_reasons": [
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "Has a FileSystem",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "Insufficient space (<5GB)"
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:        ],
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:        "sys_api": {
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "actuators": null,
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "device_nodes": [
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:                "sr0"
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            ],
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "devname": "sr0",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "human_readable_size": "482.00 KB",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "id_bus": "ata",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "model": "QEMU DVD-ROM",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "nr_requests": "2",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "parent": "/dev/sr0",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "partitions": {},
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "path": "/dev/sr0",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "removable": "1",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "rev": "2.5+",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "ro": "0",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "rotational": "1",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "sas_address": "",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "sas_device_handle": "",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "scheduler_mode": "mq-deadline",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "sectors": 0,
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "sectorsize": "2048",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "size": 493568.0,
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "support_discard": "2048",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "type": "disk",
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:            "vendor": "QEMU"
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:        }
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]:    }
Dec 13 02:31:32 np0005558241 boring_sinoussi[79344]: ]
Dec 13 02:31:32 np0005558241 systemd[1]: libpod-701277758e1ae6be351d8957bb17f10b8063fe040f72b7ec08e760020a542a9c.scope: Deactivated successfully.
Dec 13 02:31:32 np0005558241 podman[79327]: 2025-12-13 07:31:32.379961474 +0000 UTC m=+0.805802966 container died 701277758e1ae6be351d8957bb17f10b8063fe040f72b7ec08e760020a542a9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 02:31:32 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9865d90a3a23828b9e24c4f1c322a752fd571191fbd973895d0fdafab0777903-merged.mount: Deactivated successfully.
Dec 13 02:31:32 np0005558241 podman[79327]: 2025-12-13 07:31:32.432024055 +0000 UTC m=+0.857865517 container remove 701277758e1ae6be351d8957bb17f10b8063fe040f72b7ec08e760020a542a9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sinoussi, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 13 02:31:32 np0005558241 systemd[1]: libpod-conmon-701277758e1ae6be351d8957bb17f10b8063fe040f72b7ec08e760020a542a9c.scope: Deactivated successfully.
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:31:32 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 13 02:31:32 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:31:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3489057679' entity='client.admin' 
Dec 13 02:31:32 np0005558241 systemd[1]: libpod-e240e157b8ceb63713884a1bdf0fcb767f85871b939614a223528855d05027f1.scope: Deactivated successfully.
Dec 13 02:31:32 np0005558241 podman[79380]: 2025-12-13 07:31:32.782098092 +0000 UTC m=+0.676920943 container died e240e157b8ceb63713884a1bdf0fcb767f85871b939614a223528855d05027f1 (image=quay.io/ceph/ceph:v20, name=stoic_euclid, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:31:32 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d40926cf6575eb849816a222857fdd860d5be948a867c128ed338553fbf394de-merged.mount: Deactivated successfully.
Dec 13 02:31:32 np0005558241 podman[79380]: 2025-12-13 07:31:32.83076543 +0000 UTC m=+0.725588281 container remove e240e157b8ceb63713884a1bdf0fcb767f85871b939614a223528855d05027f1 (image=quay.io/ceph/ceph:v20, name=stoic_euclid, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 02:31:32 np0005558241 systemd[1]: libpod-conmon-e240e157b8ceb63713884a1bdf0fcb767f85871b939614a223528855d05027f1.scope: Deactivated successfully.
Dec 13 02:31:33 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/config/ceph.conf
Dec 13 02:31:33 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/config/ceph.conf
Dec 13 02:31:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:31:33 np0005558241 ceph-mon[76537]: Updating compute-0:/etc/ceph/ceph.conf
Dec 13 02:31:33 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3489057679' entity='client.admin' 
Dec 13 02:31:34 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:34 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 13 02:31:34 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 13 02:31:34 np0005558241 ansible-async_wrapper.py[80607]: Invoked with j881929918461 30 /home/zuul/.ansible/tmp/ansible-tmp-1765611093.2734208-36608-112346933693784/AnsiballZ_command.py _
Dec 13 02:31:34 np0005558241 ansible-async_wrapper.py[80657]: Starting module and watcher
Dec 13 02:31:34 np0005558241 ansible-async_wrapper.py[80657]: Start watching 80660 (30)
Dec 13 02:31:34 np0005558241 ansible-async_wrapper.py[80660]: Start module (80660)
Dec 13 02:31:34 np0005558241 ansible-async_wrapper.py[80607]: Return async_wrapper task started.
Dec 13 02:31:34 np0005558241 python3[80668]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:31:34 np0005558241 podman[80723]: 2025-12-13 07:31:34.256048991 +0000 UTC m=+0.059371752 container create 0d86541a155c206c3b565300579a9ab2e8c9bacd7e6b9336abb39290f4fcb7d4 (image=quay.io/ceph/ceph:v20, name=gifted_hoover, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 02:31:34 np0005558241 systemd[1]: Started libpod-conmon-0d86541a155c206c3b565300579a9ab2e8c9bacd7e6b9336abb39290f4fcb7d4.scope.
Dec 13 02:31:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:34 np0005558241 podman[80723]: 2025-12-13 07:31:34.234534552 +0000 UTC m=+0.037857403 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:34 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65a3612303d38fceda27e97c477536329cdfc023c477a977b7740984c24d91f6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65a3612303d38fceda27e97c477536329cdfc023c477a977b7740984c24d91f6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:34 np0005558241 podman[80723]: 2025-12-13 07:31:34.351198823 +0000 UTC m=+0.154521594 container init 0d86541a155c206c3b565300579a9ab2e8c9bacd7e6b9336abb39290f4fcb7d4 (image=quay.io/ceph/ceph:v20, name=gifted_hoover, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:31:34 np0005558241 podman[80723]: 2025-12-13 07:31:34.361969518 +0000 UTC m=+0.165292279 container start 0d86541a155c206c3b565300579a9ab2e8c9bacd7e6b9336abb39290f4fcb7d4 (image=quay.io/ceph/ceph:v20, name=gifted_hoover, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 02:31:34 np0005558241 podman[80723]: 2025-12-13 07:31:34.365450764 +0000 UTC m=+0.168773525 container attach 0d86541a155c206c3b565300579a9ab2e8c9bacd7e6b9336abb39290f4fcb7d4 (image=quay.io/ceph/ceph:v20, name=gifted_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:31:34 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/config/ceph.client.admin.keyring
Dec 13 02:31:34 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/config/ceph.client.admin.keyring
Dec 13 02:31:34 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 02:31:34 np0005558241 gifted_hoover[80778]: 
Dec 13 02:31:34 np0005558241 gifted_hoover[80778]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 13 02:31:34 np0005558241 ceph-mon[76537]: Updating compute-0:/var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/config/ceph.conf
Dec 13 02:31:34 np0005558241 ceph-mon[76537]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 13 02:31:34 np0005558241 ceph-mon[76537]: Updating compute-0:/var/lib/ceph/18ee9de6-e00b-571b-ab9b-b7aab06174df/config/ceph.client.admin.keyring
Dec 13 02:31:34 np0005558241 systemd[1]: libpod-0d86541a155c206c3b565300579a9ab2e8c9bacd7e6b9336abb39290f4fcb7d4.scope: Deactivated successfully.
Dec 13 02:31:34 np0005558241 podman[80723]: 2025-12-13 07:31:34.777880846 +0000 UTC m=+0.581203607 container died 0d86541a155c206c3b565300579a9ab2e8c9bacd7e6b9336abb39290f4fcb7d4 (image=quay.io/ceph/ceph:v20, name=gifted_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-65a3612303d38fceda27e97c477536329cdfc023c477a977b7740984c24d91f6-merged.mount: Deactivated successfully.
Dec 13 02:31:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:31:35 np0005558241 podman[80723]: 2025-12-13 07:31:35.426838999 +0000 UTC m=+1.230161800 container remove 0d86541a155c206c3b565300579a9ab2e8c9bacd7e6b9336abb39290f4fcb7d4 (image=quay.io/ceph/ceph:v20, name=gifted_hoover, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 02:31:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:31:35 np0005558241 ansible-async_wrapper.py[80660]: Module complete (80660)
Dec 13 02:31:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:31:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:31:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:35 np0005558241 ceph-mgr[76830]: [progress INFO root] update: starting ev c1076e02-11da-4666-8dec-dae66346fdf3 (Updating crash deployment (+1 -> 1))
Dec 13 02:31:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 13 02:31:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 13 02:31:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 13 02:31:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:31:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:31:35 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec 13 02:31:35 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec 13 02:31:35 np0005558241 systemd[1]: libpod-conmon-0d86541a155c206c3b565300579a9ab2e8c9bacd7e6b9336abb39290f4fcb7d4.scope: Deactivated successfully.
Dec 13 02:31:35 np0005558241 python3[81204]: ansible-ansible.legacy.async_status Invoked with jid=j881929918461.80607 mode=status _async_dir=/root/.ansible_async
Dec 13 02:31:35 np0005558241 python3[81303]: ansible-ansible.legacy.async_status Invoked with jid=j881929918461.80607 mode=cleanup _async_dir=/root/.ansible_async
Dec 13 02:31:36 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:36 np0005558241 podman[81345]: 2025-12-13 07:31:36.088054514 +0000 UTC m=+0.047885369 container create 1a9e8a9705bfa1f3e8e02f21ded731fec8fb08ecde7dbebafe89d9dace82d3f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:31:36 np0005558241 systemd[1]: Started libpod-conmon-1a9e8a9705bfa1f3e8e02f21ded731fec8fb08ecde7dbebafe89d9dace82d3f2.scope.
Dec 13 02:31:36 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:36 np0005558241 podman[81345]: 2025-12-13 07:31:36.061026269 +0000 UTC m=+0.020857114 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:31:36 np0005558241 podman[81345]: 2025-12-13 07:31:36.169651233 +0000 UTC m=+0.129482118 container init 1a9e8a9705bfa1f3e8e02f21ded731fec8fb08ecde7dbebafe89d9dace82d3f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_murdock, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 02:31:36 np0005558241 podman[81345]: 2025-12-13 07:31:36.175823165 +0000 UTC m=+0.135654020 container start 1a9e8a9705bfa1f3e8e02f21ded731fec8fb08ecde7dbebafe89d9dace82d3f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_murdock, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 02:31:36 np0005558241 podman[81345]: 2025-12-13 07:31:36.179765962 +0000 UTC m=+0.139596857 container attach 1a9e8a9705bfa1f3e8e02f21ded731fec8fb08ecde7dbebafe89d9dace82d3f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_murdock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 02:31:36 np0005558241 determined_murdock[81360]: 167 167
Dec 13 02:31:36 np0005558241 systemd[1]: libpod-1a9e8a9705bfa1f3e8e02f21ded731fec8fb08ecde7dbebafe89d9dace82d3f2.scope: Deactivated successfully.
Dec 13 02:31:36 np0005558241 podman[81345]: 2025-12-13 07:31:36.182028947 +0000 UTC m=+0.141859802 container died 1a9e8a9705bfa1f3e8e02f21ded731fec8fb08ecde7dbebafe89d9dace82d3f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:36 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8a2159a9d637457c32c057109cf1866a5203a4c0794dcfda43b4e10e33585c2f-merged.mount: Deactivated successfully.
Dec 13 02:31:36 np0005558241 podman[81345]: 2025-12-13 07:31:36.234054448 +0000 UTC m=+0.193885293 container remove 1a9e8a9705bfa1f3e8e02f21ded731fec8fb08ecde7dbebafe89d9dace82d3f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:36 np0005558241 systemd[1]: libpod-conmon-1a9e8a9705bfa1f3e8e02f21ded731fec8fb08ecde7dbebafe89d9dace82d3f2.scope: Deactivated successfully.
Dec 13 02:31:36 np0005558241 systemd[1]: Reloading.
Dec 13 02:31:36 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:31:36 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:31:36 np0005558241 python3[81405]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 02:31:36 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:36 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:36 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:36 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 13 02:31:36 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 13 02:31:36 np0005558241 ceph-mon[76537]: Deploying daemon crash.compute-0 on compute-0
Dec 13 02:31:36 np0005558241 systemd[1]: Reloading.
Dec 13 02:31:36 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:31:36 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:31:36 np0005558241 systemd[1]: Starting Ceph crash.compute-0 for 18ee9de6-e00b-571b-ab9b-b7aab06174df...
Dec 13 02:31:37 np0005558241 python3[81527]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:31:37 np0005558241 podman[81552]: 2025-12-13 07:31:37.221741549 +0000 UTC m=+0.066266502 container create c0aeb8ea08bbc6b4687625494fd869598a0b555b397fdb0287620c6e2cf0cdf5 (image=quay.io/ceph/ceph:v20, name=sad_banzai, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:37 np0005558241 podman[81563]: 2025-12-13 07:31:37.238360988 +0000 UTC m=+0.066076647 container create 429f444b2870dc42ba1d2bf13eba5ed26e057f206ba5b0286590ff37b172c648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-crash-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 02:31:37 np0005558241 systemd[1]: Started libpod-conmon-c0aeb8ea08bbc6b4687625494fd869598a0b555b397fdb0287620c6e2cf0cdf5.scope.
Dec 13 02:31:37 np0005558241 podman[81552]: 2025-12-13 07:31:37.193906844 +0000 UTC m=+0.038431837 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79eefe4c2e3c7bcca36c0543a1bea78faebed1b7174fe131611e6a79d73a5dd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79eefe4c2e3c7bcca36c0543a1bea78faebed1b7174fe131611e6a79d73a5dd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79eefe4c2e3c7bcca36c0543a1bea78faebed1b7174fe131611e6a79d73a5dd9/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79eefe4c2e3c7bcca36c0543a1bea78faebed1b7174fe131611e6a79d73a5dd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:37 np0005558241 podman[81563]: 2025-12-13 07:31:37.209631971 +0000 UTC m=+0.037347660 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:31:37 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5267eaffd85b9da3f50ab6837c910aa781491c77295030c5869442f432a1b48f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5267eaffd85b9da3f50ab6837c910aa781491c77295030c5869442f432a1b48f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5267eaffd85b9da3f50ab6837c910aa781491c77295030c5869442f432a1b48f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:37 np0005558241 podman[81563]: 2025-12-13 07:31:37.317761192 +0000 UTC m=+0.145476911 container init 429f444b2870dc42ba1d2bf13eba5ed26e057f206ba5b0286590ff37b172c648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-crash-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Dec 13 02:31:37 np0005558241 podman[81552]: 2025-12-13 07:31:37.324299973 +0000 UTC m=+0.168824926 container init c0aeb8ea08bbc6b4687625494fd869598a0b555b397fdb0287620c6e2cf0cdf5 (image=quay.io/ceph/ceph:v20, name=sad_banzai, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:37 np0005558241 podman[81563]: 2025-12-13 07:31:37.331551242 +0000 UTC m=+0.159266881 container start 429f444b2870dc42ba1d2bf13eba5ed26e057f206ba5b0286590ff37b172c648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:37 np0005558241 podman[81552]: 2025-12-13 07:31:37.336455302 +0000 UTC m=+0.180980255 container start c0aeb8ea08bbc6b4687625494fd869598a0b555b397fdb0287620c6e2cf0cdf5 (image=quay.io/ceph/ceph:v20, name=sad_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 02:31:37 np0005558241 bash[81563]: 429f444b2870dc42ba1d2bf13eba5ed26e057f206ba5b0286590ff37b172c648
Dec 13 02:31:37 np0005558241 podman[81552]: 2025-12-13 07:31:37.344855669 +0000 UTC m=+0.189380652 container attach c0aeb8ea08bbc6b4687625494fd869598a0b555b397fdb0287620c6e2cf0cdf5 (image=quay.io/ceph/ceph:v20, name=sad_banzai, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:37 np0005558241 systemd[1]: Started Ceph crash.compute-0 for 18ee9de6-e00b-571b-ab9b-b7aab06174df.
Dec 13 02:31:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:31:37 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-crash-compute-0[81588]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 13 02:31:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:31:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:31:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 13 02:31:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:37 np0005558241 ceph-mgr[76830]: [progress INFO root] complete: finished ev c1076e02-11da-4666-8dec-dae66346fdf3 (Updating crash deployment (+1 -> 1))
Dec 13 02:31:37 np0005558241 ceph-mgr[76830]: [progress INFO root] Completed event c1076e02-11da-4666-8dec-dae66346fdf3 (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec 13 02:31:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 13 02:31:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 13 02:31:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:37 np0005558241 ceph-mgr[76830]: [progress INFO root] update: starting ev 2891c02e-71f0-4319-8e48-57cabdbf557e (Updating mgr deployment (+1 -> 2))
Dec 13 02:31:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.twmpkk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 13 02:31:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.twmpkk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 13 02:31:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.twmpkk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 13 02:31:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 02:31:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "mgr services"} : dispatch
Dec 13 02:31:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:31:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:31:37 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.twmpkk on compute-0
Dec 13 02:31:37 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.twmpkk on compute-0
Dec 13 02:31:37 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-crash-compute-0[81588]: 2025-12-13T07:31:37.524+0000 7f825d132640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 13 02:31:37 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-crash-compute-0[81588]: 2025-12-13T07:31:37.524+0000 7f825d132640 -1 AuthRegistry(0x7f8258052930) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 13 02:31:37 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-crash-compute-0[81588]: 2025-12-13T07:31:37.525+0000 7f825d132640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 13 02:31:37 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-crash-compute-0[81588]: 2025-12-13T07:31:37.525+0000 7f825d132640 -1 AuthRegistry(0x7f825d130fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 13 02:31:37 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-crash-compute-0[81588]: 2025-12-13T07:31:37.531+0000 7f8257fff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 13 02:31:37 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-crash-compute-0[81588]: 2025-12-13T07:31:37.531+0000 7f825d132640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 13 02:31:37 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-crash-compute-0[81588]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 13 02:31:37 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-crash-compute-0[81588]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 13 02:31:37 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 02:31:37 np0005558241 sad_banzai[81587]: 
Dec 13 02:31:37 np0005558241 sad_banzai[81587]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 13 02:31:37 np0005558241 systemd[1]: libpod-c0aeb8ea08bbc6b4687625494fd869598a0b555b397fdb0287620c6e2cf0cdf5.scope: Deactivated successfully.
Dec 13 02:31:37 np0005558241 podman[81552]: 2025-12-13 07:31:37.827912808 +0000 UTC m=+0.672437791 container died c0aeb8ea08bbc6b4687625494fd869598a0b555b397fdb0287620c6e2cf0cdf5 (image=quay.io/ceph/ceph:v20, name=sad_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:37 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5267eaffd85b9da3f50ab6837c910aa781491c77295030c5869442f432a1b48f-merged.mount: Deactivated successfully.
Dec 13 02:31:37 np0005558241 podman[81552]: 2025-12-13 07:31:37.878282098 +0000 UTC m=+0.722807041 container remove c0aeb8ea08bbc6b4687625494fd869598a0b555b397fdb0287620c6e2cf0cdf5 (image=quay.io/ceph/ceph:v20, name=sad_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:31:37 np0005558241 systemd[1]: libpod-conmon-c0aeb8ea08bbc6b4687625494fd869598a0b555b397fdb0287620c6e2cf0cdf5.scope: Deactivated successfully.
Dec 13 02:31:38 np0005558241 podman[81731]: 2025-12-13 07:31:38.031343355 +0000 UTC m=+0.064211041 container create 8a9c739c910df811407651c90048e02f0598d3735f73cbe2cbf5019dff95dfb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 02:31:38 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:38 np0005558241 systemd[1]: Started libpod-conmon-8a9c739c910df811407651c90048e02f0598d3735f73cbe2cbf5019dff95dfb8.scope.
Dec 13 02:31:38 np0005558241 podman[81731]: 2025-12-13 07:31:38.003180142 +0000 UTC m=+0.036047848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:31:38 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:38 np0005558241 podman[81731]: 2025-12-13 07:31:38.1233593 +0000 UTC m=+0.156226966 container init 8a9c739c910df811407651c90048e02f0598d3735f73cbe2cbf5019dff95dfb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 02:31:38 np0005558241 podman[81731]: 2025-12-13 07:31:38.132784602 +0000 UTC m=+0.165652278 container start 8a9c739c910df811407651c90048e02f0598d3735f73cbe2cbf5019dff95dfb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 02:31:38 np0005558241 podman[81731]: 2025-12-13 07:31:38.138157195 +0000 UTC m=+0.171024871 container attach 8a9c739c910df811407651c90048e02f0598d3735f73cbe2cbf5019dff95dfb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 02:31:38 np0005558241 practical_brahmagupta[81748]: 167 167
Dec 13 02:31:38 np0005558241 systemd[1]: libpod-8a9c739c910df811407651c90048e02f0598d3735f73cbe2cbf5019dff95dfb8.scope: Deactivated successfully.
Dec 13 02:31:38 np0005558241 podman[81731]: 2025-12-13 07:31:38.154174759 +0000 UTC m=+0.187042435 container died 8a9c739c910df811407651c90048e02f0598d3735f73cbe2cbf5019dff95dfb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brahmagupta, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:38 np0005558241 systemd[1]: var-lib-containers-storage-overlay-340d14d640f3118f49da1f0a77afef2b1bbf63e44b747de53ea07f5854f9fc74-merged.mount: Deactivated successfully.
Dec 13 02:31:38 np0005558241 podman[81731]: 2025-12-13 07:31:38.206273441 +0000 UTC m=+0.239141117 container remove 8a9c739c910df811407651c90048e02f0598d3735f73cbe2cbf5019dff95dfb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brahmagupta, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:38 np0005558241 systemd[1]: libpod-conmon-8a9c739c910df811407651c90048e02f0598d3735f73cbe2cbf5019dff95dfb8.scope: Deactivated successfully.
Dec 13 02:31:38 np0005558241 systemd[1]: Reloading.
Dec 13 02:31:38 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:31:38 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:31:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.twmpkk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 13 02:31:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.twmpkk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 13 02:31:38 np0005558241 ceph-mon[76537]: Deploying daemon mgr.compute-0.twmpkk on compute-0
Dec 13 02:31:38 np0005558241 systemd[1]: Reloading.
Dec 13 02:31:38 np0005558241 python3[81829]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:31:38 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:31:38 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:31:38 np0005558241 podman[81865]: 2025-12-13 07:31:38.758838752 +0000 UTC m=+0.049312205 container create d93a7d5c8ec759cc3faa9331c07f7481825197c6a2521988645efe55e65770ba (image=quay.io/ceph/ceph:v20, name=nostalgic_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:38 np0005558241 podman[81865]: 2025-12-13 07:31:38.732316329 +0000 UTC m=+0.022789792 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:38 np0005558241 systemd[1]: Started libpod-conmon-d93a7d5c8ec759cc3faa9331c07f7481825197c6a2521988645efe55e65770ba.scope.
Dec 13 02:31:38 np0005558241 systemd[1]: Starting Ceph mgr.compute-0.twmpkk for 18ee9de6-e00b-571b-ab9b-b7aab06174df...
Dec 13 02:31:38 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c6cf9d6d7611509f1a933990363cfa8236c74914c480da5bf66e3e1ff1e9c7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c6cf9d6d7611509f1a933990363cfa8236c74914c480da5bf66e3e1ff1e9c7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c6cf9d6d7611509f1a933990363cfa8236c74914c480da5bf66e3e1ff1e9c7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:39 np0005558241 podman[81865]: 2025-12-13 07:31:39.029543335 +0000 UTC m=+0.320016798 container init d93a7d5c8ec759cc3faa9331c07f7481825197c6a2521988645efe55e65770ba (image=quay.io/ceph/ceph:v20, name=nostalgic_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:39 np0005558241 podman[81865]: 2025-12-13 07:31:39.04479163 +0000 UTC m=+0.335265083 container start d93a7d5c8ec759cc3faa9331c07f7481825197c6a2521988645efe55e65770ba (image=quay.io/ceph/ceph:v20, name=nostalgic_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 02:31:39 np0005558241 ansible-async_wrapper.py[80657]: Done in kid B.
Dec 13 02:31:39 np0005558241 podman[81865]: 2025-12-13 07:31:39.190480906 +0000 UTC m=+0.480954419 container attach d93a7d5c8ec759cc3faa9331c07f7481825197c6a2521988645efe55e65770ba (image=quay.io/ceph/ceph:v20, name=nostalgic_knuth, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 02:31:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:31:39 np0005558241 podman[81959]: 2025-12-13 07:31:39.413507096 +0000 UTC m=+0.080363729 container create cc59ce30e6bba06f2809459bae79a9b57fce982a5fdf5aac0de3313a604cdfab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-twmpkk, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 02:31:39 np0005558241 podman[81959]: 2025-12-13 07:31:39.367215767 +0000 UTC m=+0.034072480 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:31:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Dec 13 02:31:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/205462099' entity='client.admin' 
Dec 13 02:31:39 np0005558241 systemd[1]: libpod-d93a7d5c8ec759cc3faa9331c07f7481825197c6a2521988645efe55e65770ba.scope: Deactivated successfully.
Dec 13 02:31:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a29b8e677bd59f33e8bd00ead64379c78df19e24e482e09384975749f4d0767b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a29b8e677bd59f33e8bd00ead64379c78df19e24e482e09384975749f4d0767b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a29b8e677bd59f33e8bd00ead64379c78df19e24e482e09384975749f4d0767b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a29b8e677bd59f33e8bd00ead64379c78df19e24e482e09384975749f4d0767b/merged/var/lib/ceph/mgr/ceph-compute-0.twmpkk supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:39 np0005558241 podman[81959]: 2025-12-13 07:31:39.569117196 +0000 UTC m=+0.235973919 container init cc59ce30e6bba06f2809459bae79a9b57fce982a5fdf5aac0de3313a604cdfab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-twmpkk, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:39 np0005558241 podman[81959]: 2025-12-13 07:31:39.57657748 +0000 UTC m=+0.243434123 container start cc59ce30e6bba06f2809459bae79a9b57fce982a5fdf5aac0de3313a604cdfab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-twmpkk, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:31:39 np0005558241 bash[81959]: cc59ce30e6bba06f2809459bae79a9b57fce982a5fdf5aac0de3313a604cdfab
Dec 13 02:31:39 np0005558241 systemd[1]: Started Ceph mgr.compute-0.twmpkk for 18ee9de6-e00b-571b-ab9b-b7aab06174df.
Dec 13 02:31:39 np0005558241 podman[81979]: 2025-12-13 07:31:39.611381477 +0000 UTC m=+0.050105765 container died d93a7d5c8ec759cc3faa9331c07f7481825197c6a2521988645efe55e65770ba (image=quay.io/ceph/ceph:v20, name=nostalgic_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 02:31:39 np0005558241 ceph-mgr[81986]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:31:39 np0005558241 ceph-mgr[81986]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 13 02:31:39 np0005558241 ceph-mgr[81986]: pidfile_write: ignore empty --pid-file
Dec 13 02:31:39 np0005558241 systemd[1]: var-lib-containers-storage-overlay-83c6cf9d6d7611509f1a933990363cfa8236c74914c480da5bf66e3e1ff1e9c7-merged.mount: Deactivated successfully.
Dec 13 02:31:39 np0005558241 podman[81979]: 2025-12-13 07:31:39.653736979 +0000 UTC m=+0.092461197 container remove d93a7d5c8ec759cc3faa9331c07f7481825197c6a2521988645efe55e65770ba (image=quay.io/ceph/ceph:v20, name=nostalgic_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:31:39 np0005558241 systemd[1]: libpod-conmon-d93a7d5c8ec759cc3faa9331c07f7481825197c6a2521988645efe55e65770ba.scope: Deactivated successfully.
Dec 13 02:31:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:31:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:31:39 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'alerts'
Dec 13 02:31:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 02:31:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:39 np0005558241 ceph-mgr[76830]: [progress INFO root] complete: finished ev 2891c02e-71f0-4319-8e48-57cabdbf557e (Updating mgr deployment (+1 -> 2))
Dec 13 02:31:39 np0005558241 ceph-mgr[76830]: [progress INFO root] Completed event 2891c02e-71f0-4319-8e48-57cabdbf557e (Updating mgr deployment (+1 -> 2)) in 2 seconds
Dec 13 02:31:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 02:31:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:39 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'balancer'
Dec 13 02:31:39 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'cephadm'
Dec 13 02:31:40 np0005558241 python3[82098]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:31:40 np0005558241 ceph-mgr[76830]: [progress INFO root] Writing back 2 completed events
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 02:31:40 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:31:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:31:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:31:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:31:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:31:40 np0005558241 podman[82118]: 2025-12-13 07:31:40.086284996 +0000 UTC m=+0.050441083 container create c63d7ba9496e9d19a179a385f164ff49cc094a58f3f96e29fe614e47b6e2a247 (image=quay.io/ceph/ceph:v20, name=nice_rhodes, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:40 np0005558241 systemd[1]: Started libpod-conmon-c63d7ba9496e9d19a179a385f164ff49cc094a58f3f96e29fe614e47b6e2a247.scope.
Dec 13 02:31:40 np0005558241 podman[82118]: 2025-12-13 07:31:40.065406382 +0000 UTC m=+0.029562489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:40 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01b06fe93f583afdd60f1b273495a7c25df9e38657f3413dc6047a760f9f6763/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01b06fe93f583afdd60f1b273495a7c25df9e38657f3413dc6047a760f9f6763/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01b06fe93f583afdd60f1b273495a7c25df9e38657f3413dc6047a760f9f6763/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:40 np0005558241 podman[82118]: 2025-12-13 07:31:40.21159036 +0000 UTC m=+0.175746457 container init c63d7ba9496e9d19a179a385f164ff49cc094a58f3f96e29fe614e47b6e2a247 (image=quay.io/ceph/ceph:v20, name=nice_rhodes, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:40 np0005558241 podman[82118]: 2025-12-13 07:31:40.221727279 +0000 UTC m=+0.185883356 container start c63d7ba9496e9d19a179a385f164ff49cc094a58f3f96e29fe614e47b6e2a247 (image=quay.io/ceph/ceph:v20, name=nice_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 02:31:40 np0005558241 podman[82118]: 2025-12-13 07:31:40.224803475 +0000 UTC m=+0.188959562 container attach c63d7ba9496e9d19a179a385f164ff49cc094a58f3f96e29fe614e47b6e2a247 (image=quay.io/ceph/ceph:v20, name=nice_rhodes, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:40 np0005558241 podman[82191]: 2025-12-13 07:31:40.383925902 +0000 UTC m=+0.065017902 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 02:31:40 np0005558241 podman[82191]: 2025-12-13 07:31:40.464800282 +0000 UTC m=+0.145892292 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/205462099' entity='client.admin' 
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:40 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'crash'
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Dec 13 02:31:40 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'dashboard'
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3599247619' entity='client.admin' 
Dec 13 02:31:40 np0005558241 podman[82118]: 2025-12-13 07:31:40.688038627 +0000 UTC m=+0.652194744 container died c63d7ba9496e9d19a179a385f164ff49cc094a58f3f96e29fe614e47b6e2a247 (image=quay.io/ceph/ceph:v20, name=nice_rhodes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:40 np0005558241 systemd[1]: libpod-c63d7ba9496e9d19a179a385f164ff49cc094a58f3f96e29fe614e47b6e2a247.scope: Deactivated successfully.
Dec 13 02:31:40 np0005558241 systemd[1]: var-lib-containers-storage-overlay-01b06fe93f583afdd60f1b273495a7c25df9e38657f3413dc6047a760f9f6763-merged.mount: Deactivated successfully.
Dec 13 02:31:40 np0005558241 podman[82118]: 2025-12-13 07:31:40.731420065 +0000 UTC m=+0.695576142 container remove c63d7ba9496e9d19a179a385f164ff49cc094a58f3f96e29fe614e47b6e2a247 (image=quay.io/ceph/ceph:v20, name=nice_rhodes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:31:40 np0005558241 systemd[1]: libpod-conmon-c63d7ba9496e9d19a179a385f164ff49cc094a58f3f96e29fe614e47b6e2a247.scope: Deactivated successfully.
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:40 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec 13 02:31:40 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:31:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:31:40 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 13 02:31:40 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec 13 02:31:41 np0005558241 python3[82386]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:31:41 np0005558241 podman[82435]: 2025-12-13 07:31:41.227004492 +0000 UTC m=+0.059858054 container create 11a629584af8cd9410420cf58bdae0fb0a04c7ff1279af066a774c527b5edc25 (image=quay.io/ceph/ceph:v20, name=silly_zhukovsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 02:31:41 np0005558241 systemd[1]: Started libpod-conmon-11a629584af8cd9410420cf58bdae0fb0a04c7ff1279af066a774c527b5edc25.scope.
Dec 13 02:31:41 np0005558241 podman[82435]: 2025-12-13 07:31:41.208034105 +0000 UTC m=+0.040887687 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:41 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:41 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718760bda2c9ad28a12da67ac092211ca5385d3026f5747ef187db2ad9c41524/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:41 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718760bda2c9ad28a12da67ac092211ca5385d3026f5747ef187db2ad9c41524/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:41 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718760bda2c9ad28a12da67ac092211ca5385d3026f5747ef187db2ad9c41524/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:41 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'devicehealth'
Dec 13 02:31:41 np0005558241 podman[82435]: 2025-12-13 07:31:41.342947396 +0000 UTC m=+0.175800978 container init 11a629584af8cd9410420cf58bdae0fb0a04c7ff1279af066a774c527b5edc25 (image=quay.io/ceph/ceph:v20, name=silly_zhukovsky, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 02:31:41 np0005558241 podman[82435]: 2025-12-13 07:31:41.349410615 +0000 UTC m=+0.182264207 container start 11a629584af8cd9410420cf58bdae0fb0a04c7ff1279af066a774c527b5edc25 (image=quay.io/ceph/ceph:v20, name=silly_zhukovsky, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 02:31:41 np0005558241 podman[82435]: 2025-12-13 07:31:41.353779073 +0000 UTC m=+0.186632675 container attach 11a629584af8cd9410420cf58bdae0fb0a04c7ff1279af066a774c527b5edc25 (image=quay.io/ceph/ceph:v20, name=silly_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:31:41 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'diskprediction_local'
Dec 13 02:31:41 np0005558241 podman[82472]: 2025-12-13 07:31:41.502093503 +0000 UTC m=+0.049632892 container create 9b2e8f6c16d32073bb38bfcbc0bd68fb33cad14e4b78cc3b2430affc23916fe5 (image=quay.io/ceph/ceph:v20, name=heuristic_germain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:41 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-twmpkk[81975]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 13 02:31:41 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-twmpkk[81975]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 13 02:31:41 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-twmpkk[81975]:  from numpy import show_config as show_numpy_config
Dec 13 02:31:41 np0005558241 systemd[1]: Started libpod-conmon-9b2e8f6c16d32073bb38bfcbc0bd68fb33cad14e4b78cc3b2430affc23916fe5.scope.
Dec 13 02:31:41 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'influx'
Dec 13 02:31:41 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:41 np0005558241 podman[82472]: 2025-12-13 07:31:41.478493533 +0000 UTC m=+0.026032932 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:41 np0005558241 podman[82472]: 2025-12-13 07:31:41.58565034 +0000 UTC m=+0.133189789 container init 9b2e8f6c16d32073bb38bfcbc0bd68fb33cad14e4b78cc3b2430affc23916fe5 (image=quay.io/ceph/ceph:v20, name=heuristic_germain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:31:41 np0005558241 podman[82472]: 2025-12-13 07:31:41.595485342 +0000 UTC m=+0.143024721 container start 9b2e8f6c16d32073bb38bfcbc0bd68fb33cad14e4b78cc3b2430affc23916fe5 (image=quay.io/ceph/ceph:v20, name=heuristic_germain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 02:31:41 np0005558241 podman[82472]: 2025-12-13 07:31:41.599400249 +0000 UTC m=+0.146939628 container attach 9b2e8f6c16d32073bb38bfcbc0bd68fb33cad14e4b78cc3b2430affc23916fe5 (image=quay.io/ceph/ceph:v20, name=heuristic_germain, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:31:41 np0005558241 heuristic_germain[82507]: 167 167
Dec 13 02:31:41 np0005558241 systemd[1]: libpod-9b2e8f6c16d32073bb38bfcbc0bd68fb33cad14e4b78cc3b2430affc23916fe5.scope: Deactivated successfully.
Dec 13 02:31:41 np0005558241 podman[82472]: 2025-12-13 07:31:41.60149455 +0000 UTC m=+0.149033929 container died 9b2e8f6c16d32073bb38bfcbc0bd68fb33cad14e4b78cc3b2430affc23916fe5 (image=quay.io/ceph/ceph:v20, name=heuristic_germain, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:31:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay-700efc6fd27aa3d74cb64dace48dcdeb3098588712454ebced9fadb69df690ad-merged.mount: Deactivated successfully.
Dec 13 02:31:41 np0005558241 podman[82472]: 2025-12-13 07:31:41.646698653 +0000 UTC m=+0.194238002 container remove 9b2e8f6c16d32073bb38bfcbc0bd68fb33cad14e4b78cc3b2430affc23916fe5 (image=quay.io/ceph/ceph:v20, name=heuristic_germain, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:41 np0005558241 systemd[1]: libpod-conmon-9b2e8f6c16d32073bb38bfcbc0bd68fb33cad14e4b78cc3b2430affc23916fe5.scope: Deactivated successfully.
Dec 13 02:31:41 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'insights'
Dec 13 02:31:41 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3599247619' entity='client.admin' 
Dec 13 02:31:41 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:41 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:41 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:31:41 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:41 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 13 02:31:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:31:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:31:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:41 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.vndjzx (unknown last config time)...
Dec 13 02:31:41 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.vndjzx (unknown last config time)...
Dec 13 02:31:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.vndjzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 13 02:31:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.vndjzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 13 02:31:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 02:31:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "mgr services"} : dispatch
Dec 13 02:31:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:31:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:31:41 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.vndjzx on compute-0
Dec 13 02:31:41 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.vndjzx on compute-0
Dec 13 02:31:41 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'iostat'
Dec 13 02:31:41 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'k8sevents'
Dec 13 02:31:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Dec 13 02:31:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2616130795' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Dec 13 02:31:42 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:42 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'localpool'
Dec 13 02:31:42 np0005558241 podman[82591]: 2025-12-13 07:31:42.227579851 +0000 UTC m=+0.063197337 container create 7ff2137b39d8e9d113a0e9523d88146a1af23bb4965e583edcbfde446bfea294 (image=quay.io/ceph/ceph:v20, name=sleepy_wiles, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Dec 13 02:31:42 np0005558241 systemd[1]: Started libpod-conmon-7ff2137b39d8e9d113a0e9523d88146a1af23bb4965e583edcbfde446bfea294.scope.
Dec 13 02:31:42 np0005558241 podman[82591]: 2025-12-13 07:31:42.201790146 +0000 UTC m=+0.037407672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:42 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'mds_autoscaler'
Dec 13 02:31:42 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:42 np0005558241 podman[82591]: 2025-12-13 07:31:42.334203055 +0000 UTC m=+0.169820611 container init 7ff2137b39d8e9d113a0e9523d88146a1af23bb4965e583edcbfde446bfea294 (image=quay.io/ceph/ceph:v20, name=sleepy_wiles, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:42 np0005558241 podman[82591]: 2025-12-13 07:31:42.345580025 +0000 UTC m=+0.181197491 container start 7ff2137b39d8e9d113a0e9523d88146a1af23bb4965e583edcbfde446bfea294 (image=quay.io/ceph/ceph:v20, name=sleepy_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 02:31:42 np0005558241 podman[82591]: 2025-12-13 07:31:42.349689366 +0000 UTC m=+0.185306912 container attach 7ff2137b39d8e9d113a0e9523d88146a1af23bb4965e583edcbfde446bfea294 (image=quay.io/ceph/ceph:v20, name=sleepy_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 02:31:42 np0005558241 sleepy_wiles[82608]: 167 167
Dec 13 02:31:42 np0005558241 systemd[1]: libpod-7ff2137b39d8e9d113a0e9523d88146a1af23bb4965e583edcbfde446bfea294.scope: Deactivated successfully.
Dec 13 02:31:42 np0005558241 podman[82591]: 2025-12-13 07:31:42.35225942 +0000 UTC m=+0.187876906 container died 7ff2137b39d8e9d113a0e9523d88146a1af23bb4965e583edcbfde446bfea294 (image=quay.io/ceph/ceph:v20, name=sleepy_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 13 02:31:42 np0005558241 systemd[1]: var-lib-containers-storage-overlay-23ee4f722c718f8e7bdcc68bc35d4ee2b94e6c224163ecc9c53dac2aed8e77ea-merged.mount: Deactivated successfully.
Dec 13 02:31:42 np0005558241 podman[82591]: 2025-12-13 07:31:42.404016483 +0000 UTC m=+0.239633969 container remove 7ff2137b39d8e9d113a0e9523d88146a1af23bb4965e583edcbfde446bfea294 (image=quay.io/ceph/ceph:v20, name=sleepy_wiles, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:42 np0005558241 systemd[1]: libpod-conmon-7ff2137b39d8e9d113a0e9523d88146a1af23bb4965e583edcbfde446bfea294.scope: Deactivated successfully.
Dec 13 02:31:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:31:42 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:31:42 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:42 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'mirroring'
Dec 13 02:31:42 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'nfs'
Dec 13 02:31:42 np0005558241 ceph-mon[76537]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec 13 02:31:42 np0005558241 ceph-mon[76537]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 13 02:31:42 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:42 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:42 np0005558241 ceph-mon[76537]: Reconfiguring mgr.compute-0.vndjzx (unknown last config time)...
Dec 13 02:31:42 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.vndjzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 13 02:31:42 np0005558241 ceph-mon[76537]: Reconfiguring daemon mgr.compute-0.vndjzx on compute-0
Dec 13 02:31:42 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/2616130795' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Dec 13 02:31:42 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:42 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec 13 02:31:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:31:42 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2616130795' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 13 02:31:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec 13 02:31:42 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec 13 02:31:42 np0005558241 silly_zhukovsky[82452]: set require_min_compat_client to mimic
Dec 13 02:31:42 np0005558241 systemd[1]: libpod-11a629584af8cd9410420cf58bdae0fb0a04c7ff1279af066a774c527b5edc25.scope: Deactivated successfully.
Dec 13 02:31:42 np0005558241 podman[82435]: 2025-12-13 07:31:42.740147757 +0000 UTC m=+1.573001309 container died 11a629584af8cd9410420cf58bdae0fb0a04c7ff1279af066a774c527b5edc25 (image=quay.io/ceph/ceph:v20, name=silly_zhukovsky, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:31:42 np0005558241 systemd[1]: var-lib-containers-storage-overlay-718760bda2c9ad28a12da67ac092211ca5385d3026f5747ef187db2ad9c41524-merged.mount: Deactivated successfully.
Dec 13 02:31:42 np0005558241 podman[82435]: 2025-12-13 07:31:42.806187273 +0000 UTC m=+1.639040865 container remove 11a629584af8cd9410420cf58bdae0fb0a04c7ff1279af066a774c527b5edc25 (image=quay.io/ceph/ceph:v20, name=silly_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:42 np0005558241 systemd[1]: libpod-conmon-11a629584af8cd9410420cf58bdae0fb0a04c7ff1279af066a774c527b5edc25.scope: Deactivated successfully.
Dec 13 02:31:42 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'orchestrator'
Dec 13 02:31:43 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'osd_perf_query'
Dec 13 02:31:43 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'osd_support'
Dec 13 02:31:43 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'pg_autoscaler'
Dec 13 02:31:43 np0005558241 podman[82732]: 2025-12-13 07:31:43.219572548 +0000 UTC m=+0.084290786 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:31:43 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'progress'
Dec 13 02:31:43 np0005558241 podman[82732]: 2025-12-13 07:31:43.333275746 +0000 UTC m=+0.197993974 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 02:31:43 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'prometheus'
Dec 13 02:31:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:31:43 np0005558241 python3[82802]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:31:43 np0005558241 podman[82841]: 2025-12-13 07:31:43.685425364 +0000 UTC m=+0.080586835 container create f9ff232bc578b5b431de21eec94ad000956fdead460a73cb163b9ede811fc6ba (image=quay.io/ceph/ceph:v20, name=frosty_swirles, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:43 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'rbd_support'
Dec 13 02:31:43 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/2616130795' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 13 02:31:43 np0005558241 systemd[1]: Started libpod-conmon-f9ff232bc578b5b431de21eec94ad000956fdead460a73cb163b9ede811fc6ba.scope.
Dec 13 02:31:43 np0005558241 podman[82841]: 2025-12-13 07:31:43.654872082 +0000 UTC m=+0.050033543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:43 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc7fd628072b17d9700eaac5328a6218489c5beb5e46715a4a910c5feda57fad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc7fd628072b17d9700eaac5328a6218489c5beb5e46715a4a910c5feda57fad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc7fd628072b17d9700eaac5328a6218489c5beb5e46715a4a910c5feda57fad/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:43 np0005558241 podman[82841]: 2025-12-13 07:31:43.790817548 +0000 UTC m=+0.185979019 container init f9ff232bc578b5b431de21eec94ad000956fdead460a73cb163b9ede811fc6ba (image=quay.io/ceph/ceph:v20, name=frosty_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Dec 13 02:31:43 np0005558241 podman[82841]: 2025-12-13 07:31:43.80103324 +0000 UTC m=+0.196194711 container start f9ff232bc578b5b431de21eec94ad000956fdead460a73cb163b9ede811fc6ba (image=quay.io/ceph/ceph:v20, name=frosty_swirles, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 02:31:43 np0005558241 podman[82841]: 2025-12-13 07:31:43.805647923 +0000 UTC m=+0.200809394 container attach f9ff232bc578b5b431de21eec94ad000956fdead460a73cb163b9ede811fc6ba (image=quay.io/ceph/ceph:v20, name=frosty_swirles, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:43 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'rgw'
Dec 13 02:31:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:31:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:31:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:31:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:31:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:31:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:31:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:31:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:44 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:44 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'rook'
Dec 13 02:31:44 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:44 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'selftest'
Dec 13 02:31:44 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'smb'
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:44 np0005558241 ceph-mgr[76830]: [cephadm INFO root] Added host compute-0
Dec 13 02:31:44 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 13 02:31:44 np0005558241 ceph-mgr[76830]: [cephadm INFO root] Saving service mon spec with placement compute-0
Dec 13 02:31:44 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:44 np0005558241 ceph-mgr[76830]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Dec 13 02:31:44 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 02:31:44 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'snap_schedule'
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:44 np0005558241 ceph-mgr[76830]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec 13 02:31:44 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec 13 02:31:44 np0005558241 ceph-mgr[76830]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Dec 13 02:31:44 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:44 np0005558241 frosty_swirles[82881]: Added host 'compute-0' with addr '192.168.122.100'
Dec 13 02:31:44 np0005558241 frosty_swirles[82881]: Scheduled mon update...
Dec 13 02:31:44 np0005558241 frosty_swirles[82881]: Scheduled mgr update...
Dec 13 02:31:44 np0005558241 frosty_swirles[82881]: Scheduled osd.default_drive_group update...
Dec 13 02:31:44 np0005558241 podman[82841]: 2025-12-13 07:31:44.968648618 +0000 UTC m=+1.363810069 container died f9ff232bc578b5b431de21eec94ad000956fdead460a73cb163b9ede811fc6ba (image=quay.io/ceph/ceph:v20, name=frosty_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 02:31:44 np0005558241 systemd[1]: libpod-f9ff232bc578b5b431de21eec94ad000956fdead460a73cb163b9ede811fc6ba.scope: Deactivated successfully.
Dec 13 02:31:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:44 np0005558241 ceph-mgr[76830]: [progress INFO root] update: starting ev d8d81bf7-76f3-499a-b332-8637dc48afa3 (Updating mgr deployment (-1 -> 1))
Dec 13 02:31:44 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.twmpkk from compute-0 -- ports [8765]
Dec 13 02:31:44 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.twmpkk from compute-0 -- ports [8765]
Dec 13 02:31:44 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'stats'
Dec 13 02:31:45 np0005558241 systemd[1]: var-lib-containers-storage-overlay-fc7fd628072b17d9700eaac5328a6218489c5beb5e46715a4a910c5feda57fad-merged.mount: Deactivated successfully.
Dec 13 02:31:45 np0005558241 podman[82841]: 2025-12-13 07:31:45.020552526 +0000 UTC m=+1.415713957 container remove f9ff232bc578b5b431de21eec94ad000956fdead460a73cb163b9ede811fc6ba (image=quay.io/ceph/ceph:v20, name=frosty_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:31:45 np0005558241 systemd[1]: libpod-conmon-f9ff232bc578b5b431de21eec94ad000956fdead460a73cb163b9ede811fc6ba.scope: Deactivated successfully.
Dec 13 02:31:45 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'status'
Dec 13 02:31:45 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'telegraf'
Dec 13 02:31:45 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'telemetry'
Dec 13 02:31:45 np0005558241 ceph-mgr[81986]: mgr[py] Loading python module 'test_orchestrator'
Dec 13 02:31:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:31:45 np0005558241 systemd[1]: Stopping Ceph mgr.compute-0.twmpkk for 18ee9de6-e00b-571b-ab9b-b7aab06174df...
Dec 13 02:31:45 np0005558241 python3[83117]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:31:45 np0005558241 podman[83148]: 2025-12-13 07:31:45.594225426 +0000 UTC m=+0.040421096 container create 854050f889fd4098c77ee3346b258fbb812817be407b7ef3abe2b46476d6d61c (image=quay.io/ceph/ceph:v20, name=infallible_engelbart, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:45 np0005558241 podman[83135]: 2025-12-13 07:31:45.637595604 +0000 UTC m=+0.116900439 container died cc59ce30e6bba06f2809459bae79a9b57fce982a5fdf5aac0de3313a604cdfab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-twmpkk, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:45 np0005558241 systemd[1]: Started libpod-conmon-854050f889fd4098c77ee3346b258fbb812817be407b7ef3abe2b46476d6d61c.scope.
Dec 13 02:31:45 np0005558241 podman[83148]: 2025-12-13 07:31:45.576735716 +0000 UTC m=+0.022931416 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:31:45 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f65c1dcc1510f3f997610b23736a7c757b95c8f074c20a24d1bcf7f984fcf18/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f65c1dcc1510f3f997610b23736a7c757b95c8f074c20a24d1bcf7f984fcf18/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f65c1dcc1510f3f997610b23736a7c757b95c8f074c20a24d1bcf7f984fcf18/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:45 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a29b8e677bd59f33e8bd00ead64379c78df19e24e482e09384975749f4d0767b-merged.mount: Deactivated successfully.
Dec 13 02:31:45 np0005558241 podman[83148]: 2025-12-13 07:31:45.718808113 +0000 UTC m=+0.165003803 container init 854050f889fd4098c77ee3346b258fbb812817be407b7ef3abe2b46476d6d61c (image=quay.io/ceph/ceph:v20, name=infallible_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 02:31:45 np0005558241 podman[83135]: 2025-12-13 07:31:45.725821065 +0000 UTC m=+0.205125890 container remove cc59ce30e6bba06f2809459bae79a9b57fce982a5fdf5aac0de3313a604cdfab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-twmpkk, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 02:31:45 np0005558241 bash[83135]: ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-twmpkk
Dec 13 02:31:45 np0005558241 systemd[1]: ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df@mgr.compute-0.twmpkk.service: Main process exited, code=exited, status=143/n/a
Dec 13 02:31:45 np0005558241 podman[83148]: 2025-12-13 07:31:45.733603317 +0000 UTC m=+0.179798997 container start 854050f889fd4098c77ee3346b258fbb812817be407b7ef3abe2b46476d6d61c (image=quay.io/ceph/ceph:v20, name=infallible_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 02:31:45 np0005558241 podman[83148]: 2025-12-13 07:31:45.749120599 +0000 UTC m=+0.195316299 container attach 854050f889fd4098c77ee3346b258fbb812817be407b7ef3abe2b46476d6d61c (image=quay.io/ceph/ceph:v20, name=infallible_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 02:31:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:45 np0005558241 ceph-mon[76537]: Added host compute-0
Dec 13 02:31:45 np0005558241 ceph-mon[76537]: Saving service mon spec with placement compute-0
Dec 13 02:31:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:31:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:45 np0005558241 ceph-mon[76537]: Saving service mgr spec with placement compute-0
Dec 13 02:31:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:45 np0005558241 ceph-mon[76537]: Marking host: compute-0 for OSDSpec preview refresh.
Dec 13 02:31:45 np0005558241 ceph-mon[76537]: Saving service osd.default_drive_group spec with placement compute-0
Dec 13 02:31:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:45 np0005558241 ceph-mon[76537]: Removing daemon mgr.compute-0.twmpkk from compute-0 -- ports [8765]
Dec 13 02:31:45 np0005558241 systemd[1]: ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df@mgr.compute-0.twmpkk.service: Failed with result 'exit-code'.
Dec 13 02:31:45 np0005558241 systemd[1]: Stopped Ceph mgr.compute-0.twmpkk for 18ee9de6-e00b-571b-ab9b-b7aab06174df.
Dec 13 02:31:45 np0005558241 systemd[1]: ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df@mgr.compute-0.twmpkk.service: Consumed 7.001s CPU time, 430.9M memory peak, read 0B from disk, written 753.0K to disk.
Dec 13 02:31:45 np0005558241 systemd[1]: Reloading.
Dec 13 02:31:46 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:46 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:31:46 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 13 02:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2936510444' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 02:31:46 np0005558241 infallible_engelbart[83172]: 
Dec 13 02:31:46 np0005558241 infallible_engelbart[83172]: {"fsid":"18ee9de6-e00b-571b-ab9b-b7aab06174df","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":56,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-13T07:30:46:025708+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-13T07:30:46.031379+0000","services":{}},"progress_events":{"d8d81bf7-76f3-499a-b332-8637dc48afa3":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec 13 02:31:46 np0005558241 podman[83148]: 2025-12-13 07:31:46.275851354 +0000 UTC m=+0.722047054 container died 854050f889fd4098c77ee3346b258fbb812817be407b7ef3abe2b46476d6d61c (image=quay.io/ceph/ceph:v20, name=infallible_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 02:31:46 np0005558241 systemd[1]: libpod-854050f889fd4098c77ee3346b258fbb812817be407b7ef3abe2b46476d6d61c.scope: Deactivated successfully.
Dec 13 02:31:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1f65c1dcc1510f3f997610b23736a7c757b95c8f074c20a24d1bcf7f984fcf18-merged.mount: Deactivated successfully.
Dec 13 02:31:46 np0005558241 podman[83148]: 2025-12-13 07:31:46.37155499 +0000 UTC m=+0.817750680 container remove 854050f889fd4098c77ee3346b258fbb812817be407b7ef3abe2b46476d6d61c (image=quay.io/ceph/ceph:v20, name=infallible_engelbart, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:46 np0005558241 systemd[1]: libpod-conmon-854050f889fd4098c77ee3346b258fbb812817be407b7ef3abe2b46476d6d61c.scope: Deactivated successfully.
Dec 13 02:31:46 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.twmpkk
Dec 13 02:31:46 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.twmpkk
Dec 13 02:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.twmpkk"} v 0)
Dec 13 02:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.twmpkk"} : dispatch
Dec 13 02:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.twmpkk"}]': finished
Dec 13 02:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 02:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:46 np0005558241 ceph-mgr[76830]: [progress INFO root] complete: finished ev d8d81bf7-76f3-499a-b332-8637dc48afa3 (Updating mgr deployment (-1 -> 1))
Dec 13 02:31:46 np0005558241 ceph-mgr[76830]: [progress INFO root] Completed event d8d81bf7-76f3-499a-b332-8637dc48afa3 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Dec 13 02:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 02:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:31:46 np0005558241 podman[83348]: 2025-12-13 07:31:46.923989647 +0000 UTC m=+0.038458547 container create 5ed552993eb1e17e40d4b156a64bbf59bcc4444a701990aee82b09816a5063d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 02:31:46 np0005558241 systemd[1]: Started libpod-conmon-5ed552993eb1e17e40d4b156a64bbf59bcc4444a701990aee82b09816a5063d3.scope.
Dec 13 02:31:46 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:47 np0005558241 podman[83348]: 2025-12-13 07:31:46.906944668 +0000 UTC m=+0.021413608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:31:47 np0005558241 podman[83348]: 2025-12-13 07:31:47.007064112 +0000 UTC m=+0.121533042 container init 5ed552993eb1e17e40d4b156a64bbf59bcc4444a701990aee82b09816a5063d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_torvalds, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:47 np0005558241 podman[83348]: 2025-12-13 07:31:47.017981091 +0000 UTC m=+0.132450031 container start 5ed552993eb1e17e40d4b156a64bbf59bcc4444a701990aee82b09816a5063d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_torvalds, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 02:31:47 np0005558241 interesting_torvalds[83364]: 167 167
Dec 13 02:31:47 np0005558241 systemd[1]: libpod-5ed552993eb1e17e40d4b156a64bbf59bcc4444a701990aee82b09816a5063d3.scope: Deactivated successfully.
Dec 13 02:31:47 np0005558241 podman[83348]: 2025-12-13 07:31:47.035145723 +0000 UTC m=+0.149614663 container attach 5ed552993eb1e17e40d4b156a64bbf59bcc4444a701990aee82b09816a5063d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_torvalds, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 02:31:47 np0005558241 podman[83348]: 2025-12-13 07:31:47.036144348 +0000 UTC m=+0.150613288 container died 5ed552993eb1e17e40d4b156a64bbf59bcc4444a701990aee82b09816a5063d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:47 np0005558241 systemd[1]: var-lib-containers-storage-overlay-35be5164007b6be3406e7c38398ed17dd327a53b66041ff342ecaf762e8bb62f-merged.mount: Deactivated successfully.
Dec 13 02:31:47 np0005558241 podman[83348]: 2025-12-13 07:31:47.099007285 +0000 UTC m=+0.213476225 container remove 5ed552993eb1e17e40d4b156a64bbf59bcc4444a701990aee82b09816a5063d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_torvalds, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:31:47 np0005558241 systemd[1]: libpod-conmon-5ed552993eb1e17e40d4b156a64bbf59bcc4444a701990aee82b09816a5063d3.scope: Deactivated successfully.
Dec 13 02:31:47 np0005558241 ceph-mon[76537]: Removing key for mgr.compute-0.twmpkk
Dec 13 02:31:47 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.twmpkk"} : dispatch
Dec 13 02:31:47 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.twmpkk"}]': finished
Dec 13 02:31:47 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:47 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:47 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:31:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:31:47 np0005558241 podman[83388]: 2025-12-13 07:31:47.355219382 +0000 UTC m=+0.080880702 container create cc6dff49cb391c6a329aa9190429d190d3f81ae57d22d252520ae295b995e4c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kilby, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 02:31:47 np0005558241 systemd[1]: Started libpod-conmon-cc6dff49cb391c6a329aa9190429d190d3f81ae57d22d252520ae295b995e4c3.scope.
Dec 13 02:31:47 np0005558241 podman[83388]: 2025-12-13 07:31:47.322490106 +0000 UTC m=+0.048151516 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:31:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/698d7e26ea4c3f0f4e835ea15ca25181e165f8d326a21ba78ef71b8b2d3aa7be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/698d7e26ea4c3f0f4e835ea15ca25181e165f8d326a21ba78ef71b8b2d3aa7be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/698d7e26ea4c3f0f4e835ea15ca25181e165f8d326a21ba78ef71b8b2d3aa7be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/698d7e26ea4c3f0f4e835ea15ca25181e165f8d326a21ba78ef71b8b2d3aa7be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/698d7e26ea4c3f0f4e835ea15ca25181e165f8d326a21ba78ef71b8b2d3aa7be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:47 np0005558241 podman[83388]: 2025-12-13 07:31:47.474485177 +0000 UTC m=+0.200146577 container init cc6dff49cb391c6a329aa9190429d190d3f81ae57d22d252520ae295b995e4c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 02:31:47 np0005558241 podman[83388]: 2025-12-13 07:31:47.486016691 +0000 UTC m=+0.211678051 container start cc6dff49cb391c6a329aa9190429d190d3f81ae57d22d252520ae295b995e4c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 02:31:47 np0005558241 podman[83388]: 2025-12-13 07:31:47.505329376 +0000 UTC m=+0.230990716 container attach cc6dff49cb391c6a329aa9190429d190d3f81ae57d22d252520ae295b995e4c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kilby, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:31:48 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:48 np0005558241 exciting_kilby[83404]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:31:48 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:31:48 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:31:48 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 6879a430-e4fa-44e8-aed5-572a3459a9c1
Dec 13 02:31:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "6879a430-e4fa-44e8-aed5-572a3459a9c1"} v 0)
Dec 13 02:31:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/845790135' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "6879a430-e4fa-44e8-aed5-572a3459a9c1"} : dispatch
Dec 13 02:31:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec 13 02:31:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:31:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/845790135' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6879a430-e4fa-44e8-aed5-572a3459a9c1"}]': finished
Dec 13 02:31:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec 13 02:31:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec 13 02:31:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:31:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:31:49 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:31:49 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec 13 02:31:49 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 13 02:31:49 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 13 02:31:49 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 13 02:31:49 np0005558241 lvm[83498]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:31:49 np0005558241 lvm[83498]: VG ceph_vg0 finished
Dec 13 02:31:49 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec 13 02:31:49 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/845790135' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "6879a430-e4fa-44e8-aed5-572a3459a9c1"} : dispatch
Dec 13 02:31:49 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/845790135' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6879a430-e4fa-44e8-aed5-572a3459a9c1"}]': finished
Dec 13 02:31:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:31:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 13 02:31:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3060733204' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 13 02:31:49 np0005558241 exciting_kilby[83404]: stderr: got monmap epoch 1
Dec 13 02:31:49 np0005558241 exciting_kilby[83404]: --> Creating keyring file for osd.0
Dec 13 02:31:49 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec 13 02:31:49 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec 13 02:31:49 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 6879a430-e4fa-44e8-aed5-572a3459a9c1 --setuser ceph --setgroup ceph
Dec 13 02:31:50 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:50 np0005558241 ceph-mgr[76830]: [progress INFO root] Writing back 3 completed events
Dec 13 02:31:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 02:31:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:50 np0005558241 ceph-mon[76537]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 13 02:31:50 np0005558241 ceph-mon[76537]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 13 02:31:50 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:31:50 np0005558241 exciting_kilby[83404]: stderr: 2025-12-13T07:31:49.787+0000 7fe23f4908c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Dec 13 02:31:50 np0005558241 exciting_kilby[83404]: stderr: 2025-12-13T07:31:49.807+0000 7fe23f4908c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec 13 02:31:50 np0005558241 exciting_kilby[83404]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 13 02:31:50 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 02:31:50 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 13 02:31:50 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 13 02:31:50 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 13 02:31:50 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 13 02:31:50 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 02:31:50 np0005558241 exciting_kilby[83404]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 13 02:31:50 np0005558241 exciting_kilby[83404]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 13 02:31:50 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:31:50 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:31:51 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a9563d22-728f-45a3-a243-89e4c60c7325
Dec 13 02:31:51 np0005558241 ceph-mon[76537]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 13 02:31:51 np0005558241 ceph-mon[76537]: Cluster is now healthy
Dec 13 02:31:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:31:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "a9563d22-728f-45a3-a243-89e4c60c7325"} v 0)
Dec 13 02:31:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1783907456' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "a9563d22-728f-45a3-a243-89e4c60c7325"} : dispatch
Dec 13 02:31:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec 13 02:31:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:31:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1783907456' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a9563d22-728f-45a3-a243-89e4c60c7325"}]': finished
Dec 13 02:31:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec 13 02:31:51 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec 13 02:31:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:31:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:31:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:31:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:31:51 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:31:51 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:31:51 np0005558241 lvm[84434]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:31:51 np0005558241 lvm[84434]: VG ceph_vg1 finished
Dec 13 02:31:51 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec 13 02:31:51 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec 13 02:31:51 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 13 02:31:51 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 13 02:31:51 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec 13 02:31:52 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 13 02:31:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1082248329' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 13 02:31:52 np0005558241 exciting_kilby[83404]: stderr: got monmap epoch 1
Dec 13 02:31:52 np0005558241 exciting_kilby[83404]: --> Creating keyring file for osd.1
Dec 13 02:31:52 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec 13 02:31:52 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec 13 02:31:52 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid a9563d22-728f-45a3-a243-89e4c60c7325 --setuser ceph --setgroup ceph
Dec 13 02:31:52 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/1783907456' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "a9563d22-728f-45a3-a243-89e4c60c7325"} : dispatch
Dec 13 02:31:52 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/1783907456' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a9563d22-728f-45a3-a243-89e4c60c7325"}]': finished
Dec 13 02:31:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:31:53 np0005558241 exciting_kilby[83404]: stderr: 2025-12-13T07:31:52.296+0000 7f50366b68c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Dec 13 02:31:53 np0005558241 exciting_kilby[83404]: stderr: 2025-12-13T07:31:52.315+0000 7f50366b68c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec 13 02:31:53 np0005558241 exciting_kilby[83404]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec 13 02:31:53 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 02:31:53 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 13 02:31:53 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 13 02:31:53 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 13 02:31:53 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 13 02:31:53 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 02:31:53 np0005558241 exciting_kilby[83404]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 13 02:31:53 np0005558241 exciting_kilby[83404]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec 13 02:31:53 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:31:53 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:31:53 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new cfcb57a7-9012-4974-af2d-c0c014a6860e
Dec 13 02:31:54 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "cfcb57a7-9012-4974-af2d-c0c014a6860e"} v 0)
Dec 13 02:31:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1093016742' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "cfcb57a7-9012-4974-af2d-c0c014a6860e"} : dispatch
Dec 13 02:31:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec 13 02:31:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:31:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1093016742' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "cfcb57a7-9012-4974-af2d-c0c014a6860e"}]': finished
Dec 13 02:31:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Dec 13 02:31:54 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Dec 13 02:31:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:31:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:31:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:31:54 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:31:54 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:31:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:31:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:31:54 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:31:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:31:54 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/1093016742' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "cfcb57a7-9012-4974-af2d-c0c014a6860e"} : dispatch
Dec 13 02:31:54 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/1093016742' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "cfcb57a7-9012-4974-af2d-c0c014a6860e"}]': finished
Dec 13 02:31:54 np0005558241 lvm[85377]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:31:54 np0005558241 lvm[85377]: VG ceph_vg2 finished
Dec 13 02:31:54 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Dec 13 02:31:54 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Dec 13 02:31:54 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 13 02:31:54 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 13 02:31:54 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Dec 13 02:31:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 13 02:31:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3452958506' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 13 02:31:55 np0005558241 exciting_kilby[83404]: stderr: got monmap epoch 1
Dec 13 02:31:55 np0005558241 exciting_kilby[83404]: --> Creating keyring file for osd.2
Dec 13 02:31:55 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Dec 13 02:31:55 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Dec 13 02:31:55 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid cfcb57a7-9012-4974-af2d-c0c014a6860e --setuser ceph --setgroup ceph
Dec 13 02:31:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:31:56 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:56 np0005558241 exciting_kilby[83404]: stderr: 2025-12-13T07:31:55.279+0000 7f485a7f58c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Dec 13 02:31:56 np0005558241 exciting_kilby[83404]: stderr: 2025-12-13T07:31:55.302+0000 7f485a7f58c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Dec 13 02:31:56 np0005558241 exciting_kilby[83404]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Dec 13 02:31:56 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 02:31:56 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 13 02:31:56 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 13 02:31:56 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 13 02:31:56 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 13 02:31:56 np0005558241 exciting_kilby[83404]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 02:31:56 np0005558241 exciting_kilby[83404]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 13 02:31:56 np0005558241 exciting_kilby[83404]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Dec 13 02:31:56 np0005558241 systemd[1]: libpod-cc6dff49cb391c6a329aa9190429d190d3f81ae57d22d252520ae295b995e4c3.scope: Deactivated successfully.
Dec 13 02:31:56 np0005558241 systemd[1]: libpod-cc6dff49cb391c6a329aa9190429d190d3f81ae57d22d252520ae295b995e4c3.scope: Consumed 6.812s CPU time.
Dec 13 02:31:56 np0005558241 podman[86292]: 2025-12-13 07:31:56.820919337 +0000 UTC m=+0.057104487 container died cc6dff49cb391c6a329aa9190429d190d3f81ae57d22d252520ae295b995e4c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kilby, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:31:56 np0005558241 systemd[1]: var-lib-containers-storage-overlay-698d7e26ea4c3f0f4e835ea15ca25181e165f8d326a21ba78ef71b8b2d3aa7be-merged.mount: Deactivated successfully.
Dec 13 02:31:56 np0005558241 podman[86292]: 2025-12-13 07:31:56.897393719 +0000 UTC m=+0.133578849 container remove cc6dff49cb391c6a329aa9190429d190d3f81ae57d22d252520ae295b995e4c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kilby, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 02:31:56 np0005558241 systemd[1]: libpod-conmon-cc6dff49cb391c6a329aa9190429d190d3f81ae57d22d252520ae295b995e4c3.scope: Deactivated successfully.
Dec 13 02:31:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:31:57 np0005558241 podman[86367]: 2025-12-13 07:31:57.444631139 +0000 UTC m=+0.059701841 container create 4c841627ab56c802fee84224b4a3867e3bbce560cd8b883115b5ab181e4bb19f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_goodall, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:31:57 np0005558241 systemd[1]: Started libpod-conmon-4c841627ab56c802fee84224b4a3867e3bbce560cd8b883115b5ab181e4bb19f.scope.
Dec 13 02:31:57 np0005558241 podman[86367]: 2025-12-13 07:31:57.423043277 +0000 UTC m=+0.038113959 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:31:57 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:57 np0005558241 podman[86367]: 2025-12-13 07:31:57.564280764 +0000 UTC m=+0.179351476 container init 4c841627ab56c802fee84224b4a3867e3bbce560cd8b883115b5ab181e4bb19f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_goodall, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:57 np0005558241 podman[86367]: 2025-12-13 07:31:57.575859879 +0000 UTC m=+0.190930571 container start 4c841627ab56c802fee84224b4a3867e3bbce560cd8b883115b5ab181e4bb19f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:31:57 np0005558241 podman[86367]: 2025-12-13 07:31:57.581400225 +0000 UTC m=+0.196470897 container attach 4c841627ab56c802fee84224b4a3867e3bbce560cd8b883115b5ab181e4bb19f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:31:57 np0005558241 vibrant_goodall[86383]: 167 167
Dec 13 02:31:57 np0005558241 systemd[1]: libpod-4c841627ab56c802fee84224b4a3867e3bbce560cd8b883115b5ab181e4bb19f.scope: Deactivated successfully.
Dec 13 02:31:57 np0005558241 podman[86367]: 2025-12-13 07:31:57.583928498 +0000 UTC m=+0.198999170 container died 4c841627ab56c802fee84224b4a3867e3bbce560cd8b883115b5ab181e4bb19f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 02:31:57 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4a57d76f4c658fb0cfaf2f91ea2e29c8805dcdaca264fc7343588a38873a7491-merged.mount: Deactivated successfully.
Dec 13 02:31:57 np0005558241 podman[86367]: 2025-12-13 07:31:57.65796969 +0000 UTC m=+0.273040362 container remove 4c841627ab56c802fee84224b4a3867e3bbce560cd8b883115b5ab181e4bb19f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Dec 13 02:31:57 np0005558241 systemd[1]: libpod-conmon-4c841627ab56c802fee84224b4a3867e3bbce560cd8b883115b5ab181e4bb19f.scope: Deactivated successfully.
Dec 13 02:31:57 np0005558241 podman[86407]: 2025-12-13 07:31:57.856803764 +0000 UTC m=+0.069189724 container create bc625ced78b89ace2df61f6876a00e768c02011675abccc7efc78e048b8dfa55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 02:31:57 np0005558241 podman[86407]: 2025-12-13 07:31:57.811871788 +0000 UTC m=+0.024257788 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:31:57 np0005558241 systemd[1]: Started libpod-conmon-bc625ced78b89ace2df61f6876a00e768c02011675abccc7efc78e048b8dfa55.scope.
Dec 13 02:31:57 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3798e9ab5a0451fb089d9d4818fac3a11e1c1d33c05b166be66eb56f3ab5e98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3798e9ab5a0451fb089d9d4818fac3a11e1c1d33c05b166be66eb56f3ab5e98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3798e9ab5a0451fb089d9d4818fac3a11e1c1d33c05b166be66eb56f3ab5e98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3798e9ab5a0451fb089d9d4818fac3a11e1c1d33c05b166be66eb56f3ab5e98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:57 np0005558241 podman[86407]: 2025-12-13 07:31:57.970145514 +0000 UTC m=+0.182531494 container init bc625ced78b89ace2df61f6876a00e768c02011675abccc7efc78e048b8dfa55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wing, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:31:57 np0005558241 podman[86407]: 2025-12-13 07:31:57.988647679 +0000 UTC m=+0.201033639 container start bc625ced78b89ace2df61f6876a00e768c02011675abccc7efc78e048b8dfa55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wing, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 02:31:58 np0005558241 podman[86407]: 2025-12-13 07:31:58.004425778 +0000 UTC m=+0.216811788 container attach bc625ced78b89ace2df61f6876a00e768c02011675abccc7efc78e048b8dfa55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:58 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]: {
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:    "0": [
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:        {
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "devices": [
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "/dev/loop3"
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            ],
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "lv_name": "ceph_lv0",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "lv_size": "21470642176",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "name": "ceph_lv0",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "tags": {
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.cluster_name": "ceph",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.crush_device_class": "",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.encrypted": "0",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.objectstore": "bluestore",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.osd_id": "0",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.type": "block",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.vdo": "0",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.with_tpm": "0"
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            },
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "type": "block",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "vg_name": "ceph_vg0"
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:        }
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:    ],
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:    "1": [
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:        {
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "devices": [
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "/dev/loop4"
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            ],
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "lv_name": "ceph_lv1",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "lv_size": "21470642176",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "name": "ceph_lv1",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "tags": {
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.cluster_name": "ceph",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.crush_device_class": "",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.encrypted": "0",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.objectstore": "bluestore",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.osd_id": "1",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.type": "block",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.vdo": "0",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.with_tpm": "0"
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            },
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "type": "block",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "vg_name": "ceph_vg1"
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:        }
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:    ],
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:    "2": [
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:        {
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "devices": [
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "/dev/loop5"
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            ],
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "lv_name": "ceph_lv2",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "lv_size": "21470642176",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "name": "ceph_lv2",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "tags": {
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.cluster_name": "ceph",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.crush_device_class": "",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.encrypted": "0",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.objectstore": "bluestore",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.osd_id": "2",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.type": "block",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.vdo": "0",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:                "ceph.with_tpm": "0"
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            },
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "type": "block",
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:            "vg_name": "ceph_vg2"
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:        }
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]:    ]
Dec 13 02:31:58 np0005558241 heuristic_wing[86423]: }
Dec 13 02:31:58 np0005558241 systemd[1]: libpod-bc625ced78b89ace2df61f6876a00e768c02011675abccc7efc78e048b8dfa55.scope: Deactivated successfully.
Dec 13 02:31:58 np0005558241 podman[86407]: 2025-12-13 07:31:58.353556741 +0000 UTC m=+0.565942731 container died bc625ced78b89ace2df61f6876a00e768c02011675abccc7efc78e048b8dfa55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:31:58 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a3798e9ab5a0451fb089d9d4818fac3a11e1c1d33c05b166be66eb56f3ab5e98-merged.mount: Deactivated successfully.
Dec 13 02:31:58 np0005558241 podman[86407]: 2025-12-13 07:31:58.417544426 +0000 UTC m=+0.629930396 container remove bc625ced78b89ace2df61f6876a00e768c02011675abccc7efc78e048b8dfa55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_wing, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 02:31:58 np0005558241 systemd[1]: libpod-conmon-bc625ced78b89ace2df61f6876a00e768c02011675abccc7efc78e048b8dfa55.scope: Deactivated successfully.
Dec 13 02:31:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 13 02:31:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 13 02:31:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:31:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:31:58 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Dec 13 02:31:58 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Dec 13 02:31:58 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 13 02:31:59 np0005558241 podman[86535]: 2025-12-13 07:31:59.162682786 +0000 UTC m=+0.085505916 container create 9075a92ef891b0fea7e3f2f4c1a2b8d60a2a835ddc887642ce08d8a72353b800 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_swanson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:31:59 np0005558241 podman[86535]: 2025-12-13 07:31:59.103675464 +0000 UTC m=+0.026498614 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:31:59 np0005558241 systemd[1]: Started libpod-conmon-9075a92ef891b0fea7e3f2f4c1a2b8d60a2a835ddc887642ce08d8a72353b800.scope.
Dec 13 02:31:59 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:59 np0005558241 podman[86535]: 2025-12-13 07:31:59.256242819 +0000 UTC m=+0.179065959 container init 9075a92ef891b0fea7e3f2f4c1a2b8d60a2a835ddc887642ce08d8a72353b800 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 02:31:59 np0005558241 podman[86535]: 2025-12-13 07:31:59.268191063 +0000 UTC m=+0.191014203 container start 9075a92ef891b0fea7e3f2f4c1a2b8d60a2a835ddc887642ce08d8a72353b800 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 02:31:59 np0005558241 dazzling_swanson[86552]: 167 167
Dec 13 02:31:59 np0005558241 podman[86535]: 2025-12-13 07:31:59.274051827 +0000 UTC m=+0.196874957 container attach 9075a92ef891b0fea7e3f2f4c1a2b8d60a2a835ddc887642ce08d8a72353b800 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 02:31:59 np0005558241 systemd[1]: libpod-9075a92ef891b0fea7e3f2f4c1a2b8d60a2a835ddc887642ce08d8a72353b800.scope: Deactivated successfully.
Dec 13 02:31:59 np0005558241 conmon[86552]: conmon 9075a92ef891b0fea7e3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9075a92ef891b0fea7e3f2f4c1a2b8d60a2a835ddc887642ce08d8a72353b800.scope/container/memory.events
Dec 13 02:31:59 np0005558241 podman[86535]: 2025-12-13 07:31:59.276440266 +0000 UTC m=+0.199263406 container died 9075a92ef891b0fea7e3f2f4c1a2b8d60a2a835ddc887642ce08d8a72353b800 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 02:31:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:31:59 np0005558241 systemd[1]: var-lib-containers-storage-overlay-cadd8a9ce5840d3222e708be50f4e7de348d95341e8ec1ad0cd1a46dbb250b50-merged.mount: Deactivated successfully.
Dec 13 02:31:59 np0005558241 podman[86535]: 2025-12-13 07:31:59.332923826 +0000 UTC m=+0.255746926 container remove 9075a92ef891b0fea7e3f2f4c1a2b8d60a2a835ddc887642ce08d8a72353b800 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:31:59 np0005558241 systemd[1]: libpod-conmon-9075a92ef891b0fea7e3f2f4c1a2b8d60a2a835ddc887642ce08d8a72353b800.scope: Deactivated successfully.
Dec 13 02:31:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:31:59 np0005558241 ceph-mon[76537]: Deploying daemon osd.0 on compute-0
Dec 13 02:31:59 np0005558241 podman[86580]: 2025-12-13 07:31:59.685739241 +0000 UTC m=+0.054914463 container create fb90eaeec43d29855f46fa9beaea66605d7dceb96e068fa5b57590bfaf409b42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:31:59 np0005558241 systemd[1]: Started libpod-conmon-fb90eaeec43d29855f46fa9beaea66605d7dceb96e068fa5b57590bfaf409b42.scope.
Dec 13 02:31:59 np0005558241 podman[86580]: 2025-12-13 07:31:59.659257869 +0000 UTC m=+0.028433151 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:31:59 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:31:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a885da06da16170fed3bc53200e4960a594dc5b23cf7d70090959c181e3e78a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a885da06da16170fed3bc53200e4960a594dc5b23cf7d70090959c181e3e78a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a885da06da16170fed3bc53200e4960a594dc5b23cf7d70090959c181e3e78a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a885da06da16170fed3bc53200e4960a594dc5b23cf7d70090959c181e3e78a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a885da06da16170fed3bc53200e4960a594dc5b23cf7d70090959c181e3e78a6/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:31:59 np0005558241 podman[86580]: 2025-12-13 07:31:59.8116616 +0000 UTC m=+0.180836842 container init fb90eaeec43d29855f46fa9beaea66605d7dceb96e068fa5b57590bfaf409b42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:31:59 np0005558241 podman[86580]: 2025-12-13 07:31:59.817553145 +0000 UTC m=+0.186728357 container start fb90eaeec43d29855f46fa9beaea66605d7dceb96e068fa5b57590bfaf409b42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate-test, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:31:59 np0005558241 podman[86580]: 2025-12-13 07:31:59.821235556 +0000 UTC m=+0.190410818 container attach fb90eaeec43d29855f46fa9beaea66605d7dceb96e068fa5b57590bfaf409b42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:31:59 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate-test[86596]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 13 02:31:59 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate-test[86596]:                            [--no-systemd] [--no-tmpfs]
Dec 13 02:31:59 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate-test[86596]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 13 02:32:00 np0005558241 systemd[1]: libpod-fb90eaeec43d29855f46fa9beaea66605d7dceb96e068fa5b57590bfaf409b42.scope: Deactivated successfully.
Dec 13 02:32:00 np0005558241 podman[86601]: 2025-12-13 07:32:00.049218907 +0000 UTC m=+0.024902694 container died fb90eaeec43d29855f46fa9beaea66605d7dceb96e068fa5b57590bfaf409b42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate-test, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:32:00 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:32:00 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a885da06da16170fed3bc53200e4960a594dc5b23cf7d70090959c181e3e78a6-merged.mount: Deactivated successfully.
Dec 13 02:32:00 np0005558241 podman[86601]: 2025-12-13 07:32:00.128050018 +0000 UTC m=+0.103733835 container remove fb90eaeec43d29855f46fa9beaea66605d7dceb96e068fa5b57590bfaf409b42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:32:00 np0005558241 systemd[1]: libpod-conmon-fb90eaeec43d29855f46fa9beaea66605d7dceb96e068fa5b57590bfaf409b42.scope: Deactivated successfully.
Dec 13 02:32:00 np0005558241 systemd[1]: Reloading.
Dec 13 02:32:00 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:32:00 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:32:00 np0005558241 systemd[1]: Reloading.
Dec 13 02:32:00 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:32:00 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:32:01 np0005558241 systemd[1]: Starting Ceph osd.0 for 18ee9de6-e00b-571b-ab9b-b7aab06174df...
Dec 13 02:32:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:32:01 np0005558241 podman[86760]: 2025-12-13 07:32:01.451484383 +0000 UTC m=+0.058609674 container create 39c63b1e7d4af0b15bbcb415baa628da0da612c43cdd046d0e8667e74e2f12ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:32:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:01 np0005558241 podman[86760]: 2025-12-13 07:32:01.41804929 +0000 UTC m=+0.025174571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a6b4d610ff6dacaff195c4657e1b39788d74fc0a9cfa218bde8d77e46628b81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a6b4d610ff6dacaff195c4657e1b39788d74fc0a9cfa218bde8d77e46628b81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a6b4d610ff6dacaff195c4657e1b39788d74fc0a9cfa218bde8d77e46628b81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a6b4d610ff6dacaff195c4657e1b39788d74fc0a9cfa218bde8d77e46628b81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a6b4d610ff6dacaff195c4657e1b39788d74fc0a9cfa218bde8d77e46628b81/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:01 np0005558241 podman[86760]: 2025-12-13 07:32:01.530718223 +0000 UTC m=+0.137843514 container init 39c63b1e7d4af0b15bbcb415baa628da0da612c43cdd046d0e8667e74e2f12ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:32:01 np0005558241 podman[86760]: 2025-12-13 07:32:01.54321063 +0000 UTC m=+0.150335931 container start 39c63b1e7d4af0b15bbcb415baa628da0da612c43cdd046d0e8667e74e2f12ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:32:01 np0005558241 podman[86760]: 2025-12-13 07:32:01.547334012 +0000 UTC m=+0.154459303 container attach 39c63b1e7d4af0b15bbcb415baa628da0da612c43cdd046d0e8667e74e2f12ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 02:32:01 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate[86775]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:01 np0005558241 bash[86760]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:01 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate[86775]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:01 np0005558241 bash[86760]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:02 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:32:02 np0005558241 lvm[86859]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:32:02 np0005558241 lvm[86859]: VG ceph_vg0 finished
Dec 13 02:32:02 np0005558241 lvm[86862]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:32:02 np0005558241 lvm[86862]: VG ceph_vg2 finished
Dec 13 02:32:02 np0005558241 lvm[86863]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:32:02 np0005558241 lvm[86863]: VG ceph_vg1 finished
Dec 13 02:32:02 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate[86775]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 02:32:02 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate[86775]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:02 np0005558241 bash[86760]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 02:32:02 np0005558241 bash[86760]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:02 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate[86775]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:02 np0005558241 bash[86760]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:02 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate[86775]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 02:32:02 np0005558241 bash[86760]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 02:32:02 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate[86775]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 13 02:32:02 np0005558241 bash[86760]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 13 02:32:02 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate[86775]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 13 02:32:02 np0005558241 bash[86760]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 13 02:32:02 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate[86775]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 13 02:32:02 np0005558241 bash[86760]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 13 02:32:02 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate[86775]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 13 02:32:02 np0005558241 bash[86760]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 13 02:32:02 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate[86775]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 02:32:02 np0005558241 bash[86760]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 02:32:02 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate[86775]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 13 02:32:02 np0005558241 bash[86760]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 13 02:32:02 np0005558241 systemd[1]: libpod-39c63b1e7d4af0b15bbcb415baa628da0da612c43cdd046d0e8667e74e2f12ca.scope: Deactivated successfully.
Dec 13 02:32:02 np0005558241 systemd[1]: libpod-39c63b1e7d4af0b15bbcb415baa628da0da612c43cdd046d0e8667e74e2f12ca.scope: Consumed 1.765s CPU time.
Dec 13 02:32:02 np0005558241 podman[86965]: 2025-12-13 07:32:02.786184654 +0000 UTC m=+0.031290911 container died 39c63b1e7d4af0b15bbcb415baa628da0da612c43cdd046d0e8667e74e2f12ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 02:32:02 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9a6b4d610ff6dacaff195c4657e1b39788d74fc0a9cfa218bde8d77e46628b81-merged.mount: Deactivated successfully.
Dec 13 02:32:02 np0005558241 podman[86965]: 2025-12-13 07:32:02.830285129 +0000 UTC m=+0.075391396 container remove 39c63b1e7d4af0b15bbcb415baa628da0da612c43cdd046d0e8667e74e2f12ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:32:03 np0005558241 podman[87022]: 2025-12-13 07:32:03.151498496 +0000 UTC m=+0.065178816 container create b92982ec38e91e908ffa2dc704ae3a7a1920c78954c233fb46a1029a658d540d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/687879dd897d5b0c306ad8add31cd098ec3559cd3f75c1f3e668a8e0e2c905ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/687879dd897d5b0c306ad8add31cd098ec3559cd3f75c1f3e668a8e0e2c905ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/687879dd897d5b0c306ad8add31cd098ec3559cd3f75c1f3e668a8e0e2c905ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/687879dd897d5b0c306ad8add31cd098ec3559cd3f75c1f3e668a8e0e2c905ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/687879dd897d5b0c306ad8add31cd098ec3559cd3f75c1f3e668a8e0e2c905ff/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:03 np0005558241 podman[87022]: 2025-12-13 07:32:03.129934735 +0000 UTC m=+0.043615055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:03 np0005558241 podman[87022]: 2025-12-13 07:32:03.231222098 +0000 UTC m=+0.144902458 container init b92982ec38e91e908ffa2dc704ae3a7a1920c78954c233fb46a1029a658d540d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:32:03 np0005558241 podman[87022]: 2025-12-13 07:32:03.246539665 +0000 UTC m=+0.160219985 container start b92982ec38e91e908ffa2dc704ae3a7a1920c78954c233fb46a1029a658d540d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:32:03 np0005558241 bash[87022]: b92982ec38e91e908ffa2dc704ae3a7a1920c78954c233fb46a1029a658d540d
Dec 13 02:32:03 np0005558241 systemd[1]: Started Ceph osd.0 for 18ee9de6-e00b-571b-ab9b-b7aab06174df.
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: pidfile_write: ignore empty --pid-file
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:32:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:32:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:32:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 13 02:32:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:32:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:32:03 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec 13 02:32:03 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec 13 02:32:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a400 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1a000 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: load: jerasure load: lrc 
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55fffeb1bc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55ffff7b1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55ffff7b1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55ffff7b1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55ffff7b1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs mount
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs mount shared_bdev_used = 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: RocksDB version: 7.9.2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Git sha 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: DB SUMMARY
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: DB Session ID:  FN8W1JYA7UBD349BZ5CK
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: CURRENT file:  CURRENT
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                         Options.error_if_exists: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.create_if_missing: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                                     Options.env: 0x55fffe9abea0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                                Options.info_log: 0x55ffffa328a0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                              Options.statistics: (nil)
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.use_fsync: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                              Options.db_log_dir: 
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.write_buffer_manager: 0x55fffea0cb40
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.unordered_write: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.row_cache: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                              Options.wal_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.two_write_queues: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.wal_compression: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.atomic_flush: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.max_background_jobs: 4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.max_background_compactions: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.max_subcompactions: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.max_open_files: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Compression algorithms supported:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: 	kZSTD supported: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: 	kXpressCompression supported: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: 	kBZip2Compression supported: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: 	kLZ4Compression supported: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: 	kZlibCompression supported: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: 	kLZ4HCCompression supported: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: 	kSnappyCompression supported: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa32c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fffe9af8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa32c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fffe9af8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa32c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fffe9af8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa32c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fffe9af8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa32c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fffe9af8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa32c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fffe9af8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa32c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fffe9af8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa32c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fffe9afa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa32c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fffe9afa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa32c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fffe9afa30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b4a4a955-5ea0-4f49-953b-34f7e737aea2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611123667406, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611123669285, "job": 1, "event": "recovery_finished"}
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: freelist init
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: freelist _read_cfg
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs umount
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55ffff7b1800 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55ffff7b1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55ffff7b1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55ffff7b1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bdev(0x55ffff7b1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs mount
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluefs mount shared_bdev_used = 27262976
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: RocksDB version: 7.9.2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Git sha 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: DB SUMMARY
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: DB Session ID:  FN8W1JYA7UBD349BZ5CL
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: CURRENT file:  CURRENT
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                         Options.error_if_exists: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.create_if_missing: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                                     Options.env: 0x55ffff7f7dc0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                                Options.info_log: 0x55ffffa33340
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                              Options.statistics: (nil)
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.use_fsync: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                              Options.db_log_dir: 
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.write_buffer_manager: 0x55fffea0d900
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.unordered_write: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.row_cache: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                              Options.wal_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.two_write_queues: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.wal_compression: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.atomic_flush: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.max_background_jobs: 4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.max_background_compactions: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.max_subcompactions: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.max_open_files: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Compression algorithms supported:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: #011kZSTD supported: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: #011kXpressCompression supported: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: #011kBZip2Compression supported: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: #011kLZ4Compression supported: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: #011kZlibCompression supported: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: #011kLZ4HCCompression supported: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: #011kSnappyCompression supported: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa7f680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fffe9af8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa7f680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fffe9af8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa7f680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fffe9af8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa7f680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fffe9af8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa7f680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fffe9af8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa7f680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fffe9af8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa7f680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fffe9af8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa7f800)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fffe9af4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa7f800)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fffe9af4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ffffa7f800)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fffe9af4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b4a4a955-5ea0-4f49-953b-34f7e737aea2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611123730921, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611123737658, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611123, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b4a4a955-5ea0-4f49-953b-34f7e737aea2", "db_session_id": "FN8W1JYA7UBD349BZ5CL", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611123741501, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611123, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b4a4a955-5ea0-4f49-953b-34f7e737aea2", "db_session_id": "FN8W1JYA7UBD349BZ5CL", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611123746738, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611123, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b4a4a955-5ea0-4f49-953b-34f7e737aea2", "db_session_id": "FN8W1JYA7UBD349BZ5CL", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611123749511, "job": 1, "event": "recovery_finished"}
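The RocksDB `EVENT_LOG_v1` entries above (recovery_started, table_file_creation, recovery_finished) embed a JSON payload after a fixed marker, so they can be extracted from the journal mechanically. A minimal sketch — the regex and helper name are ours, not a Ceph or RocksDB API:

```python
import json
import re

# Everything from the first '{' after the EVENT_LOG_v1 marker to the end of
# the line is a JSON object emitted by RocksDB's event logger.
EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

def parse_events(lines):
    """Yield each EVENT_LOG_v1 payload found in `lines` as a dict."""
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            yield json.loads(m.group(1))

sample = ('Dec 13 02:32:03 host ceph-osd[87041]: rocksdb: EVENT_LOG_v1 '
          '{"time_micros": 1765611123749511, "job": 1, "event": "recovery_finished"}')
events = list(parse_events([sample]))
# events[0]["event"] == "recovery_finished"
```

Filtering on the `"event"` key then separates table-file creations from recovery markers when summarizing a boot sequence like this one.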
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ffffc17c00
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: DB pointer 0x55ffffbec000
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
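The `_open_db` line above logs the effective RocksDB options as one flat comma-joined `key=value` string. For these particular options no value contains a comma, so a naive split recovers a dict; this is an ad-hoc sketch for inspecting such lines, not a Ceph utility:

```python
# Option string as logged by bluestore _open_db (copied from the line above).
opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
        "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
        "write_buffer_size=16777216,max_background_jobs=4,"
        "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
        "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
        "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

# split(",") is safe only because none of these values embeds a comma.
parsed = dict(kv.split("=", 1) for kv in opts.split(","))
# parsed["compression"] == "kLZ4Compression"
```

Values stay strings (note the mixed units: raw bytes for `write_buffer_size`, a `2MB` suffix for `compaction_readahead_size`), so any numeric comparison needs its own unit handling.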
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.1 total, 0.1 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 460.80 MB usag
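Log lines in this capture encode embedded control characters octally (`#011` = TAB, `#012` = newline), the rsyslog convention for multi-line daemon output such as RocksDB stats dumps. A small helper to undo the escaping when post-processing such logs — our own sketch, assuming the standard three-octal-digit form:

```python
import re

def unescape_syslog(msg: str) -> str:
    # rsyslog control-character escaping writes each control byte as '#'
    # followed by three octal digits (#011 = TAB, #012 = LF). A literal
    # "#012" in the original message would also be expanded; that
    # ambiguity is inherent to the escaping scheme.
    return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), msg)

recovered = unescape_syslog("** DB Stats **#012Uptime(secs): 0.1 total#011(interval)")
```

Running journal messages through this filter turns a one-line stats dump back into the multi-line table RocksDB originally wrote to its LOG file.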
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: bluestore.MempoolThread fragmentation_score=0.000124 took=0.000006s
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: _get_class not permitted to load lua
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: _get_class not permitted to load sdk
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: osd.0 0 load_pgs
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: osd.0 0 load_pgs opened 0 pgs
Dec 13 02:32:03 np0005558241 ceph-osd[87041]: osd.0 0 log_to_monitors true
Dec 13 02:32:03 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0[87037]: 2025-12-13T07:32:03.783+0000 7fdeb95238c0 -1 osd.0 0 log_to_monitors true
Dec 13 02:32:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Dec 13 02:32:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3865363678,v1:192.168.122.100:6803/3865363678]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Dec 13 02:32:04 np0005558241 podman[87579]: 2025-12-13 07:32:04.001300993 +0000 UTC m=+0.063895224 container create e8ea08ed32f399c4d140473c40e1f89c880fc53c840a6fab86c14a87e824034e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_margulis, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 02:32:04 np0005558241 systemd[1]: Started libpod-conmon-e8ea08ed32f399c4d140473c40e1f89c880fc53c840a6fab86c14a87e824034e.scope.
Dec 13 02:32:04 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:32:04 np0005558241 podman[87579]: 2025-12-13 07:32:03.974808671 +0000 UTC m=+0.037402922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:04 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:04 np0005558241 podman[87579]: 2025-12-13 07:32:04.106248416 +0000 UTC m=+0.168842627 container init e8ea08ed32f399c4d140473c40e1f89c880fc53c840a6fab86c14a87e824034e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:32:04 np0005558241 podman[87579]: 2025-12-13 07:32:04.116834837 +0000 UTC m=+0.179429068 container start e8ea08ed32f399c4d140473c40e1f89c880fc53c840a6fab86c14a87e824034e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_margulis, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:32:04 np0005558241 podman[87579]: 2025-12-13 07:32:04.121005319 +0000 UTC m=+0.183599550 container attach e8ea08ed32f399c4d140473c40e1f89c880fc53c840a6fab86c14a87e824034e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_margulis, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 02:32:04 np0005558241 beautiful_margulis[87595]: 167 167
Dec 13 02:32:04 np0005558241 systemd[1]: libpod-e8ea08ed32f399c4d140473c40e1f89c880fc53c840a6fab86c14a87e824034e.scope: Deactivated successfully.
Dec 13 02:32:04 np0005558241 conmon[87595]: conmon e8ea08ed32f399c4d140 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e8ea08ed32f399c4d140473c40e1f89c880fc53c840a6fab86c14a87e824034e.scope/container/memory.events
Dec 13 02:32:04 np0005558241 podman[87579]: 2025-12-13 07:32:04.123948412 +0000 UTC m=+0.186542613 container died e8ea08ed32f399c4d140473c40e1f89c880fc53c840a6fab86c14a87e824034e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 02:32:04 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1e40f8eddc349252e7259a35e600151572c428e316bc5669f8d958f2d49e8a50-merged.mount: Deactivated successfully.
Dec 13 02:32:04 np0005558241 podman[87579]: 2025-12-13 07:32:04.169888182 +0000 UTC m=+0.232482393 container remove e8ea08ed32f399c4d140473c40e1f89c880fc53c840a6fab86c14a87e824034e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_margulis, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:32:04 np0005558241 systemd[1]: libpod-conmon-e8ea08ed32f399c4d140473c40e1f89c880fc53c840a6fab86c14a87e824034e.scope: Deactivated successfully.
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: Deploying daemon osd.1 on compute-0
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: from='osd.0 [v2:192.168.122.100:6802/3865363678,v1:192.168.122.100:6803/3865363678]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3865363678,v1:192.168.122.100:6803/3865363678]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3865363678,v1:192.168.122.100:6803/3865363678]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:04 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:32:04 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:04 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:32:04 np0005558241 podman[87624]: 2025-12-13 07:32:04.543267983 +0000 UTC m=+0.068283922 container create 605a704b4e3bc5fec5b44219aa29cba7b18001c2088bfd9348cc3af210d642d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate-test, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:32:04 np0005558241 systemd[1]: Started libpod-conmon-605a704b4e3bc5fec5b44219aa29cba7b18001c2088bfd9348cc3af210d642d2.scope.
Dec 13 02:32:04 np0005558241 podman[87624]: 2025-12-13 07:32:04.516717999 +0000 UTC m=+0.041734028 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:04 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ed5c3c0c11cbc2ccfe8f85e6baa1daf81aaa5ca353b63f0ef808967c794cf56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ed5c3c0c11cbc2ccfe8f85e6baa1daf81aaa5ca353b63f0ef808967c794cf56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ed5c3c0c11cbc2ccfe8f85e6baa1daf81aaa5ca353b63f0ef808967c794cf56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ed5c3c0c11cbc2ccfe8f85e6baa1daf81aaa5ca353b63f0ef808967c794cf56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ed5c3c0c11cbc2ccfe8f85e6baa1daf81aaa5ca353b63f0ef808967c794cf56/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:04 np0005558241 podman[87624]: 2025-12-13 07:32:04.642568527 +0000 UTC m=+0.167584546 container init 605a704b4e3bc5fec5b44219aa29cba7b18001c2088bfd9348cc3af210d642d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:04 np0005558241 podman[87624]: 2025-12-13 07:32:04.654620514 +0000 UTC m=+0.179636443 container start 605a704b4e3bc5fec5b44219aa29cba7b18001c2088bfd9348cc3af210d642d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:32:04 np0005558241 podman[87624]: 2025-12-13 07:32:04.658034188 +0000 UTC m=+0.183050157 container attach 605a704b4e3bc5fec5b44219aa29cba7b18001c2088bfd9348cc3af210d642d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate-test, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:32:04 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 13 02:32:04 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 13 02:32:04 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate-test[87640]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 13 02:32:04 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate-test[87640]:                            [--no-systemd] [--no-tmpfs]
Dec 13 02:32:04 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate-test[87640]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 13 02:32:04 np0005558241 systemd[1]: libpod-605a704b4e3bc5fec5b44219aa29cba7b18001c2088bfd9348cc3af210d642d2.scope: Deactivated successfully.
Dec 13 02:32:04 np0005558241 podman[87624]: 2025-12-13 07:32:04.86896887 +0000 UTC m=+0.393984829 container died 605a704b4e3bc5fec5b44219aa29cba7b18001c2088bfd9348cc3af210d642d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate-test, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:32:05 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0ed5c3c0c11cbc2ccfe8f85e6baa1daf81aaa5ca353b63f0ef808967c794cf56-merged.mount: Deactivated successfully.
Dec 13 02:32:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec 13 02:32:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:32:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:32:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3865363678,v1:192.168.122.100:6803/3865363678]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 02:32:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Dec 13 02:32:05 np0005558241 ceph-osd[87041]: osd.0 0 done with init, starting boot process
Dec 13 02:32:05 np0005558241 ceph-osd[87041]: osd.0 0 start_boot
Dec 13 02:32:05 np0005558241 ceph-osd[87041]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 13 02:32:05 np0005558241 ceph-osd[87041]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 13 02:32:05 np0005558241 ceph-osd[87041]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 13 02:32:05 np0005558241 ceph-osd[87041]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 13 02:32:05 np0005558241 ceph-osd[87041]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec 13 02:32:05 np0005558241 podman[87624]: 2025-12-13 07:32:05.365399779 +0000 UTC m=+0.890415748 container remove 605a704b4e3bc5fec5b44219aa29cba7b18001c2088bfd9348cc3af210d642d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate-test, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:05 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Dec 13 02:32:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:32:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:32:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:32:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:32:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:05 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:32:05 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:32:05 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:05 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3865363678; not ready for session (expect reconnect)
Dec 13 02:32:05 np0005558241 ceph-mon[76537]: from='osd.0 [v2:192.168.122.100:6802/3865363678,v1:192.168.122.100:6803/3865363678]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 13 02:32:05 np0005558241 ceph-mon[76537]: from='osd.0 [v2:192.168.122.100:6802/3865363678,v1:192.168.122.100:6803/3865363678]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 02:32:05 np0005558241 systemd[1]: libpod-conmon-605a704b4e3bc5fec5b44219aa29cba7b18001c2088bfd9348cc3af210d642d2.scope: Deactivated successfully.
Dec 13 02:32:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:32:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:32:05 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:32:05 np0005558241 systemd[1]: Reloading.
Dec 13 02:32:05 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:32:05 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:32:06 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:32:06 np0005558241 systemd[1]: Reloading.
Dec 13 02:32:06 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:32:06 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:32:06 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3865363678; not ready for session (expect reconnect)
Dec 13 02:32:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:32:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:32:06 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:32:06 np0005558241 ceph-mon[76537]: from='osd.0 [v2:192.168.122.100:6802/3865363678,v1:192.168.122.100:6803/3865363678]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 02:32:06 np0005558241 systemd[1]: Starting Ceph osd.1 for 18ee9de6-e00b-571b-ab9b-b7aab06174df...
Dec 13 02:32:06 np0005558241 podman[87802]: 2025-12-13 07:32:06.747329563 +0000 UTC m=+0.077967031 container create 7dc34fc49b215f52226b437d30416f7ae0ed11b19579f1db3058152aefb0ef83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:32:06 np0005558241 podman[87802]: 2025-12-13 07:32:06.698693785 +0000 UTC m=+0.029331333 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:06 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cb1e67b9e30b2f5591cd329d903bea772a5d28ddf89a1faa3bb1ecabc81124b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cb1e67b9e30b2f5591cd329d903bea772a5d28ddf89a1faa3bb1ecabc81124b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cb1e67b9e30b2f5591cd329d903bea772a5d28ddf89a1faa3bb1ecabc81124b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cb1e67b9e30b2f5591cd329d903bea772a5d28ddf89a1faa3bb1ecabc81124b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cb1e67b9e30b2f5591cd329d903bea772a5d28ddf89a1faa3bb1ecabc81124b/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:06 np0005558241 podman[87802]: 2025-12-13 07:32:06.887604175 +0000 UTC m=+0.218241683 container init 7dc34fc49b215f52226b437d30416f7ae0ed11b19579f1db3058152aefb0ef83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:32:06 np0005558241 podman[87802]: 2025-12-13 07:32:06.904965323 +0000 UTC m=+0.235602781 container start 7dc34fc49b215f52226b437d30416f7ae0ed11b19579f1db3058152aefb0ef83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 02:32:06 np0005558241 podman[87802]: 2025-12-13 07:32:06.930140032 +0000 UTC m=+0.260777510 container attach 7dc34fc49b215f52226b437d30416f7ae0ed11b19579f1db3058152aefb0ef83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Dec 13 02:32:07 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate[87815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:07 np0005558241 bash[87802]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:07 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate[87815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:07 np0005558241 bash[87802]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:32:07 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3865363678; not ready for session (expect reconnect)
Dec 13 02:32:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:32:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:32:07 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:32:07 np0005558241 lvm[87903]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:32:07 np0005558241 lvm[87903]: VG ceph_vg1 finished
Dec 13 02:32:07 np0005558241 lvm[87900]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:32:07 np0005558241 lvm[87900]: VG ceph_vg0 finished
Dec 13 02:32:07 np0005558241 lvm[87905]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:32:07 np0005558241 lvm[87905]: VG ceph_vg2 finished
Dec 13 02:32:07 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate[87815]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 02:32:07 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate[87815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:07 np0005558241 bash[87802]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 02:32:07 np0005558241 bash[87802]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:07 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate[87815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:07 np0005558241 bash[87802]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:07 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate[87815]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 02:32:07 np0005558241 bash[87802]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 02:32:07 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate[87815]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 13 02:32:07 np0005558241 bash[87802]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 13 02:32:08 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:32:08 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate[87815]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 13 02:32:08 np0005558241 bash[87802]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 13 02:32:08 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate[87815]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 13 02:32:08 np0005558241 bash[87802]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 13 02:32:08 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate[87815]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 13 02:32:08 np0005558241 bash[87802]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 13 02:32:08 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate[87815]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 02:32:08 np0005558241 bash[87802]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 02:32:08 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate[87815]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 13 02:32:08 np0005558241 bash[87802]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 13 02:32:08 np0005558241 systemd[1]: libpod-7dc34fc49b215f52226b437d30416f7ae0ed11b19579f1db3058152aefb0ef83.scope: Deactivated successfully.
Dec 13 02:32:08 np0005558241 podman[87802]: 2025-12-13 07:32:08.136938877 +0000 UTC m=+1.467576345 container died 7dc34fc49b215f52226b437d30416f7ae0ed11b19579f1db3058152aefb0ef83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 02:32:08 np0005558241 systemd[1]: libpod-7dc34fc49b215f52226b437d30416f7ae0ed11b19579f1db3058152aefb0ef83.scope: Consumed 1.686s CPU time.
Dec 13 02:32:08 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5cb1e67b9e30b2f5591cd329d903bea772a5d28ddf89a1faa3bb1ecabc81124b-merged.mount: Deactivated successfully.
Dec 13 02:32:08 np0005558241 podman[87802]: 2025-12-13 07:32:08.296824932 +0000 UTC m=+1.627462420 container remove 7dc34fc49b215f52226b437d30416f7ae0ed11b19579f1db3058152aefb0ef83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 02:32:08 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3865363678; not ready for session (expect reconnect)
Dec 13 02:32:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:32:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:32:08 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:32:08 np0005558241 podman[88066]: 2025-12-13 07:32:08.654533557 +0000 UTC m=+0.087512455 container create 060b4383c272bb14b4b8d5a00bd5d2ea997c3cda9bfb70195001c443cbfeaf1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 02:32:08 np0005558241 podman[88066]: 2025-12-13 07:32:08.604516816 +0000 UTC m=+0.037495754 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54058a81f2b39fce3ec5b7da4a083148592f8f62caf90b38a2e57177c686b4c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54058a81f2b39fce3ec5b7da4a083148592f8f62caf90b38a2e57177c686b4c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54058a81f2b39fce3ec5b7da4a083148592f8f62caf90b38a2e57177c686b4c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54058a81f2b39fce3ec5b7da4a083148592f8f62caf90b38a2e57177c686b4c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54058a81f2b39fce3ec5b7da4a083148592f8f62caf90b38a2e57177c686b4c9/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:08 np0005558241 podman[88066]: 2025-12-13 07:32:08.780471537 +0000 UTC m=+0.213450405 container init 060b4383c272bb14b4b8d5a00bd5d2ea997c3cda9bfb70195001c443cbfeaf1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:08 np0005558241 podman[88066]: 2025-12-13 07:32:08.797932346 +0000 UTC m=+0.230911204 container start 060b4383c272bb14b4b8d5a00bd5d2ea997c3cda9bfb70195001c443cbfeaf1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:32:08 np0005558241 bash[88066]: 060b4383c272bb14b4b8d5a00bd5d2ea997c3cda9bfb70195001c443cbfeaf1f
Dec 13 02:32:08 np0005558241 systemd[1]: Started Ceph osd.1 for 18ee9de6-e00b-571b-ab9b-b7aab06174df.
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: pidfile_write: ignore empty --pid-file
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:32:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:32:08
Dec 13 02:32:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:32:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:32:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] No pools available
Dec 13 02:32:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 13 02:32:08 np0005558241 ceph-osd[88086]: bdev(0x561827920400 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827920000 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: load: jerasure load: lrc 
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:32:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x561827921c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x5618285c1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x5618285c1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x5618285c1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x5618285c1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluefs mount
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluefs mount shared_bdev_used = 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: RocksDB version: 7.9.2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Git sha 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: DB SUMMARY
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: DB Session ID:  AFXX6BFY6PFLWIDD2D1Q
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: CURRENT file:  CURRENT
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                         Options.error_if_exists: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.create_if_missing: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                                     Options.env: 0x5618277b1ea0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                                Options.info_log: 0x56182880c8a0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                              Options.statistics: (nil)
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.use_fsync: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                              Options.db_log_dir: 
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.write_buffer_manager: 0x5618286b2b40
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.unordered_write: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.row_cache: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                              Options.wal_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.two_write_queues: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.wal_compression: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.atomic_flush: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.max_background_jobs: 4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.max_background_compactions: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.max_subcompactions: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.max_open_files: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Compression algorithms supported:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: 	kZSTD supported: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: 	kXpressCompression supported: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: 	kBZip2Compression supported: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: 	kLZ4Compression supported: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: 	kZlibCompression supported: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: 	kLZ4HCCompression supported: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: 	kSnappyCompression supported: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618277b58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618277b58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618277b58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618277b58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618277b58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618277b58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880cc60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618277b58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880cc80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618277b5a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880cc80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618277b5a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880cc80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618277b5a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d48cc4ea-9ee4-4a53-ae04-9522915729ba
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611129212365, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611129214458, "job": 1, "event": "recovery_finished"}
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: freelist init
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: freelist _read_cfg
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluefs umount
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x5618285c1800 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x5618285c1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x5618285c1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x5618285c1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bdev(0x5618285c1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluefs mount
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluefs mount shared_bdev_used = 27262976
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: RocksDB version: 7.9.2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Git sha 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: DB SUMMARY
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: DB Session ID:  AFXX6BFY6PFLWIDD2D1R
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: CURRENT file:  CURRENT
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                         Options.error_if_exists: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.create_if_missing: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                                     Options.env: 0x5618277b1ce0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                                Options.info_log: 0x56182880ca20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                              Options.statistics: (nil)
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.use_fsync: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                              Options.db_log_dir: 
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.write_buffer_manager: 0x5618286b2b40
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.unordered_write: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.row_cache: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                              Options.wal_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.two_write_queues: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.wal_compression: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.atomic_flush: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.max_background_jobs: 4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.max_background_compactions: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.max_subcompactions: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.max_open_files: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Compression algorithms supported:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: 	kZSTD supported: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: 	kXpressCompression supported: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: 	kBZip2Compression supported: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: 	kLZ4Compression supported: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: 	kZlibCompression supported: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: 	kLZ4HCCompression supported: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: 	kSnappyCompression supported: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880cbc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618277b58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880cbc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618277b58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880cbc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618277b58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880cbc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618277b58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880cbc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618277b58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880cbc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5618277b58d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880cbc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5618277b58d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880d0c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5618277b5a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880d0c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5618277b5a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56182880d0c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5618277b5a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d48cc4ea-9ee4-4a53-ae04-9522915729ba
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611129262626, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 02:32:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:32:09 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3865363678; not ready for session (expect reconnect)
Dec 13 02:32:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:32:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:32:09 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:32:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 13 02:32:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 13 02:32:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:32:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:32:09 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Dec 13 02:32:09 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Dec 13 02:32:09 np0005558241 ceph-osd[88086]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611129923751, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611129, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d48cc4ea-9ee4-4a53-ae04-9522915729ba", "db_session_id": "AFXX6BFY6PFLWIDD2D1R", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:32:10 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:32:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:32:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:32:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:32:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:32:10 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:32:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:32:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:32:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:32:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:32:10 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3865363678; not ready for session (expect reconnect)
Dec 13 02:32:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:32:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:32:10 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:32:10 np0005558241 ceph-osd[88086]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611130432224, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611129, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d48cc4ea-9ee4-4a53-ae04-9522915729ba", "db_session_id": "AFXX6BFY6PFLWIDD2D1R", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:32:10 np0005558241 podman[88593]: 2025-12-13 07:32:10.495248433 +0000 UTC m=+0.026786410 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:10 np0005558241 ceph-osd[88086]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611130795214, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611130, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d48cc4ea-9ee4-4a53-ae04-9522915729ba", "db_session_id": "AFXX6BFY6PFLWIDD2D1R", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:32:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 13 02:32:10 np0005558241 ceph-osd[88086]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611130801415, "job": 1, "event": "recovery_finished"}
Dec 13 02:32:10 np0005558241 ceph-osd[88086]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 13 02:32:10 np0005558241 podman[88593]: 2025-12-13 07:32:10.801689646 +0000 UTC m=+0.333227563 container create bfcf55646ab3d054c15238aae064fa8d8826718f8bfc2878f492e5b1614ac55d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:32:10 np0005558241 systemd[1]: Started libpod-conmon-bfcf55646ab3d054c15238aae064fa8d8826718f8bfc2878f492e5b1614ac55d.scope.
Dec 13 02:32:10 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:11 np0005558241 podman[88593]: 2025-12-13 07:32:11.025038553 +0000 UTC m=+0.556576540 container init bfcf55646ab3d054c15238aae064fa8d8826718f8bfc2878f492e5b1614ac55d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:32:11 np0005558241 podman[88593]: 2025-12-13 07:32:11.036727571 +0000 UTC m=+0.568265498 container start bfcf55646ab3d054c15238aae064fa8d8826718f8bfc2878f492e5b1614ac55d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 02:32:11 np0005558241 elegant_matsumoto[88610]: 167 167
Dec 13 02:32:11 np0005558241 systemd[1]: libpod-bfcf55646ab3d054c15238aae064fa8d8826718f8bfc2878f492e5b1614ac55d.scope: Deactivated successfully.
Dec 13 02:32:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561828a14000
Dec 13 02:32:11 np0005558241 ceph-osd[88086]: rocksdb: DB pointer 0x5618289c6000
Dec 13 02:32:11 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 02:32:11 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec 13 02:32:11 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec 13 02:32:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:32:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1.8 total, 1.8 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.8 total, 1.8 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.7 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.8 total, 1.8 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.8 total, 1.8 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 460.80 MB usag
Dec 13 02:32:11 np0005558241 podman[88593]: 2025-12-13 07:32:11.076871169 +0000 UTC m=+0.608409096 container attach bfcf55646ab3d054c15238aae064fa8d8826718f8bfc2878f492e5b1614ac55d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:32:11 np0005558241 podman[88593]: 2025-12-13 07:32:11.077483184 +0000 UTC m=+0.609021151 container died bfcf55646ab3d054c15238aae064fa8d8826718f8bfc2878f492e5b1614ac55d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_matsumoto, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 02:32:11 np0005558241 ceph-osd[88086]: bluestore.MempoolThread fragmentation_score=0.000124 took=0.000006s
Dec 13 02:32:11 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 13 02:32:11 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 13 02:32:11 np0005558241 ceph-osd[88086]: _get_class not permitted to load lua
Dec 13 02:32:11 np0005558241 ceph-osd[88086]: _get_class not permitted to load sdk
Dec 13 02:32:11 np0005558241 ceph-osd[88086]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 13 02:32:11 np0005558241 ceph-osd[88086]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 13 02:32:11 np0005558241 ceph-osd[88086]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 13 02:32:11 np0005558241 ceph-osd[88086]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 13 02:32:11 np0005558241 ceph-osd[88086]: osd.1 0 load_pgs
Dec 13 02:32:11 np0005558241 ceph-osd[88086]: osd.1 0 load_pgs opened 0 pgs
Dec 13 02:32:11 np0005558241 ceph-osd[88086]: osd.1 0 log_to_monitors true
Dec 13 02:32:11 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1[88082]: 2025-12-13T07:32:11.089+0000 7f7b183e78c0 -1 osd.1 0 log_to_monitors true
Dec 13 02:32:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Dec 13 02:32:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1857527260,v1:192.168.122.100:6807/1857527260]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Dec 13 02:32:11 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8bbfa500bd6e6ba34f2e940a9d8a199a7273847b475b58585ed549511b6cf526-merged.mount: Deactivated successfully.
Dec 13 02:32:11 np0005558241 podman[88593]: 2025-12-13 07:32:11.343997414 +0000 UTC m=+0.875535331 container remove bfcf55646ab3d054c15238aae064fa8d8826718f8bfc2878f492e5b1614ac55d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:32:11 np0005558241 systemd[1]: libpod-conmon-bfcf55646ab3d054c15238aae064fa8d8826718f8bfc2878f492e5b1614ac55d.scope: Deactivated successfully.
Dec 13 02:32:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:32:11 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3865363678; not ready for session (expect reconnect)
Dec 13 02:32:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:32:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:32:11 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:32:11 np0005558241 podman[88675]: 2025-12-13 07:32:11.749406403 +0000 UTC m=+0.070630420 container create 301f095f3e80940bead46c32bd2205f1f653b288190cefde518dfe3ccd0f8346 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate-test, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:32:11 np0005558241 podman[88675]: 2025-12-13 07:32:11.718753348 +0000 UTC m=+0.039977385 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec 13 02:32:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:32:12 np0005558241 systemd[1]: Started libpod-conmon-301f095f3e80940bead46c32bd2205f1f653b288190cefde518dfe3ccd0f8346.scope.
Dec 13 02:32:12 np0005558241 ceph-mon[76537]: Deploying daemon osd.2 on compute-0
Dec 13 02:32:12 np0005558241 ceph-mon[76537]: from='osd.1 [v2:192.168.122.100:6806/1857527260,v1:192.168.122.100:6807/1857527260]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Dec 13 02:32:12 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efd8eeee9b6833a09b9228b9961961e5d9584bf9a93d2883714a01f034069ed8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efd8eeee9b6833a09b9228b9961961e5d9584bf9a93d2883714a01f034069ed8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efd8eeee9b6833a09b9228b9961961e5d9584bf9a93d2883714a01f034069ed8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efd8eeee9b6833a09b9228b9961961e5d9584bf9a93d2883714a01f034069ed8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efd8eeee9b6833a09b9228b9961961e5d9584bf9a93d2883714a01f034069ed8/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:12 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:32:12 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 13 02:32:12 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 13 02:32:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1857527260,v1:192.168.122.100:6807/1857527260]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 13 02:32:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Dec 13 02:32:12 np0005558241 podman[88675]: 2025-12-13 07:32:12.164460519 +0000 UTC m=+0.485684566 container init 301f095f3e80940bead46c32bd2205f1f653b288190cefde518dfe3ccd0f8346 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate-test, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 02:32:12 np0005558241 podman[88675]: 2025-12-13 07:32:12.177834378 +0000 UTC m=+0.499058385 container start 301f095f3e80940bead46c32bd2205f1f653b288190cefde518dfe3ccd0f8346 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:12 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Dec 13 02:32:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 13 02:32:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1857527260,v1:192.168.122.100:6807/1857527260]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 02:32:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 13 02:32:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:32:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:32:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:32:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:32:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:12 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:32:12 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:12 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:32:12 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate-test[88692]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 13 02:32:12 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate-test[88692]:                            [--no-systemd] [--no-tmpfs]
Dec 13 02:32:12 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate-test[88692]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 13 02:32:12 np0005558241 systemd[1]: libpod-301f095f3e80940bead46c32bd2205f1f653b288190cefde518dfe3ccd0f8346.scope: Deactivated successfully.
Dec 13 02:32:12 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3865363678; not ready for session (expect reconnect)
Dec 13 02:32:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:32:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:32:12 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:32:12 np0005558241 podman[88675]: 2025-12-13 07:32:12.421415454 +0000 UTC m=+0.742639541 container attach 301f095f3e80940bead46c32bd2205f1f653b288190cefde518dfe3ccd0f8346 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate-test, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:32:12 np0005558241 podman[88675]: 2025-12-13 07:32:12.423798112 +0000 UTC m=+0.745022129 container died 301f095f3e80940bead46c32bd2205f1f653b288190cefde518dfe3ccd0f8346 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate-test, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 02:32:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay-efd8eeee9b6833a09b9228b9961961e5d9584bf9a93d2883714a01f034069ed8-merged.mount: Deactivated successfully.
Dec 13 02:32:12 np0005558241 podman[88675]: 2025-12-13 07:32:12.810311817 +0000 UTC m=+1.131535834 container remove 301f095f3e80940bead46c32bd2205f1f653b288190cefde518dfe3ccd0f8346 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 02:32:12 np0005558241 systemd[1]: libpod-conmon-301f095f3e80940bead46c32bd2205f1f653b288190cefde518dfe3ccd0f8346.scope: Deactivated successfully.
Dec 13 02:32:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec 13 02:32:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:32:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:32:13 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3865363678; not ready for session (expect reconnect)
Dec 13 02:32:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:32:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:32:13 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:32:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1857527260,v1:192.168.122.100:6807/1857527260]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 02:32:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e10 e10: 3 total, 0 up, 3 in
Dec 13 02:32:13 np0005558241 ceph-osd[88086]: osd.1 0 done with init, starting boot process
Dec 13 02:32:13 np0005558241 ceph-osd[88086]: osd.1 0 start_boot
Dec 13 02:32:13 np0005558241 ceph-osd[88086]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 13 02:32:13 np0005558241 ceph-osd[88086]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 13 02:32:13 np0005558241 ceph-osd[88086]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 13 02:32:13 np0005558241 ceph-osd[88086]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 13 02:32:13 np0005558241 ceph-osd[88086]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec 13 02:32:14 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:32:14 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3865363678; not ready for session (expect reconnect)
Dec 13 02:32:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:32:15 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3865363678; not ready for session (expect reconnect)
Dec 13 02:32:15 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 0 up, 3 in
Dec 13 02:32:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:32:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:32:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:32:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:32:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:15 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:32:15 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:15 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:32:15 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1857527260; not ready for session (expect reconnect)
Dec 13 02:32:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:32:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:32:15 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:32:15 np0005558241 ceph-mon[76537]: from='osd.1 [v2:192.168.122.100:6806/1857527260,v1:192.168.122.100:6807/1857527260]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 13 02:32:15 np0005558241 ceph-mon[76537]: from='osd.1 [v2:192.168.122.100:6806/1857527260,v1:192.168.122.100:6807/1857527260]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 02:32:15 np0005558241 systemd[1]: Reloading.
Dec 13 02:32:15 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:32:15 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:32:16 np0005558241 ceph-mgr[76830]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 02:32:16 np0005558241 systemd[1]: Reloading.
Dec 13 02:32:16 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:32:16 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:32:16 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3865363678; not ready for session (expect reconnect)
Dec 13 02:32:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:32:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:32:16 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:32:16 np0005558241 systemd[1]: Starting Ceph osd.2 for 18ee9de6-e00b-571b-ab9b-b7aab06174df...
Dec 13 02:32:16 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1857527260; not ready for session (expect reconnect)
Dec 13 02:32:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:32:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:32:16 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:32:16 np0005558241 ceph-osd[87041]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 2.739 iops: 701.070 elapsed_sec: 4.279
Dec 13 02:32:16 np0005558241 ceph-osd[87041]: log_channel(cluster) log [WRN] : OSD bench result of 701.069588 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 02:32:16 np0005558241 ceph-osd[87041]: osd.0 0 waiting for initial osdmap
Dec 13 02:32:16 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0[87037]: 2025-12-13T07:32:16.805+0000 7fdeb54a5640 -1 osd.0 0 waiting for initial osdmap
Dec 13 02:32:16 np0005558241 ceph-osd[87041]: osd.0 10 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 13 02:32:16 np0005558241 ceph-osd[87041]: osd.0 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 13 02:32:16 np0005558241 ceph-osd[87041]: osd.0 10 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 13 02:32:16 np0005558241 ceph-osd[87041]: osd.0 10 check_osdmap_features require_osd_release unknown -> tentacle
Dec 13 02:32:16 np0005558241 python3[88837]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:32:16 np0005558241 ceph-mon[76537]: from='osd.1 [v2:192.168.122.100:6806/1857527260,v1:192.168.122.100:6807/1857527260]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 02:32:16 np0005558241 ceph-osd[87041]: osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 02:32:16 np0005558241 ceph-osd[87041]: osd.0 10 set_numa_affinity not setting numa affinity
Dec 13 02:32:16 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-0[87037]: 2025-12-13T07:32:16.875+0000 7fdeb02aa640 -1 osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 02:32:16 np0005558241 ceph-osd[87041]: osd.0 10 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec 13 02:32:16 np0005558241 podman[88874]: 2025-12-13 07:32:16.933918654 +0000 UTC m=+0.068642250 container create 7b44e023bd444ad59bb2c0bdf476222085596d5c0611b5e6b2eaa1cc36d4614f (image=quay.io/ceph/ceph:v20, name=ecstatic_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Dec 13 02:32:16 np0005558241 podman[88874]: 2025-12-13 07:32:16.892762701 +0000 UTC m=+0.027486297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:32:17 np0005558241 podman[88891]: 2025-12-13 07:32:17.00851381 +0000 UTC m=+0.102269858 container create 8ea8e49de37d0e6b89e75f85f81c1c3f6678afdf6a7bccd959df33571d2d881f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 02:32:17 np0005558241 systemd[1]: Started libpod-conmon-7b44e023bd444ad59bb2c0bdf476222085596d5c0611b5e6b2eaa1cc36d4614f.scope.
Dec 13 02:32:17 np0005558241 podman[88891]: 2025-12-13 07:32:16.942815053 +0000 UTC m=+0.036571141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:17 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/653d0ba2f9a10be32719b6f7ccceb7500c82f14cff3cda512f4b5b7c6d082312/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/653d0ba2f9a10be32719b6f7ccceb7500c82f14cff3cda512f4b5b7c6d082312/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/653d0ba2f9a10be32719b6f7ccceb7500c82f14cff3cda512f4b5b7c6d082312/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:17 np0005558241 podman[88874]: 2025-12-13 07:32:17.098564256 +0000 UTC m=+0.233287852 container init 7b44e023bd444ad59bb2c0bdf476222085596d5c0611b5e6b2eaa1cc36d4614f (image=quay.io/ceph/ceph:v20, name=ecstatic_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:32:17 np0005558241 podman[88874]: 2025-12-13 07:32:17.106121712 +0000 UTC m=+0.240845278 container start 7b44e023bd444ad59bb2c0bdf476222085596d5c0611b5e6b2eaa1cc36d4614f (image=quay.io/ceph/ceph:v20, name=ecstatic_montalcini, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:32:17 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b289d4900eb1aeda26129debafcd7e33407806689aa5dab4675c0e6e2789d4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b289d4900eb1aeda26129debafcd7e33407806689aa5dab4675c0e6e2789d4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b289d4900eb1aeda26129debafcd7e33407806689aa5dab4675c0e6e2789d4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b289d4900eb1aeda26129debafcd7e33407806689aa5dab4675c0e6e2789d4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b289d4900eb1aeda26129debafcd7e33407806689aa5dab4675c0e6e2789d4d/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:17 np0005558241 podman[88874]: 2025-12-13 07:32:17.136436308 +0000 UTC m=+0.271159874 container attach 7b44e023bd444ad59bb2c0bdf476222085596d5c0611b5e6b2eaa1cc36d4614f (image=quay.io/ceph/ceph:v20, name=ecstatic_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:32:17 np0005558241 podman[88891]: 2025-12-13 07:32:17.148860714 +0000 UTC m=+0.242616832 container init 8ea8e49de37d0e6b89e75f85f81c1c3f6678afdf6a7bccd959df33571d2d881f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:32:17 np0005558241 podman[88891]: 2025-12-13 07:32:17.161970607 +0000 UTC m=+0.255726665 container start 8ea8e49de37d0e6b89e75f85f81c1c3f6678afdf6a7bccd959df33571d2d881f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 02:32:17 np0005558241 podman[88891]: 2025-12-13 07:32:17.192929889 +0000 UTC m=+0.286686037 container attach 8ea8e49de37d0e6b89e75f85f81c1c3f6678afdf6a7bccd959df33571d2d881f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:32:17 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate[88915]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:17 np0005558241 bash[88891]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 02:32:17 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate[88915]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:17 np0005558241 bash[88891]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:17 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3865363678; not ready for session (expect reconnect)
Dec 13 02:32:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:32:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:32:17 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 02:32:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 13 02:32:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2746100340' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 02:32:17 np0005558241 ecstatic_montalcini[88910]: 
Dec 13 02:32:17 np0005558241 ecstatic_montalcini[88910]: {"fsid":"18ee9de6-e00b-571b-ab9b-b7aab06174df","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":88,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":10,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1765611114,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-13T07:30:46:025708+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-13T07:32:11.360598+0000","services":{}},"progress_events":{}}
Dec 13 02:32:17 np0005558241 systemd[1]: libpod-7b44e023bd444ad59bb2c0bdf476222085596d5c0611b5e6b2eaa1cc36d4614f.scope: Deactivated successfully.
Dec 13 02:32:17 np0005558241 podman[88874]: 2025-12-13 07:32:17.631733939 +0000 UTC m=+0.766457525 container died 7b44e023bd444ad59bb2c0bdf476222085596d5c0611b5e6b2eaa1cc36d4614f (image=quay.io/ceph/ceph:v20, name=ecstatic_montalcini, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:32:17 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1857527260; not ready for session (expect reconnect)
Dec 13 02:32:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:32:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:32:17 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:32:17 np0005558241 systemd[1]: var-lib-containers-storage-overlay-653d0ba2f9a10be32719b6f7ccceb7500c82f14cff3cda512f4b5b7c6d082312-merged.mount: Deactivated successfully.
Dec 13 02:32:17 np0005558241 podman[88874]: 2025-12-13 07:32:17.805619519 +0000 UTC m=+0.940343065 container remove 7b44e023bd444ad59bb2c0bdf476222085596d5c0611b5e6b2eaa1cc36d4614f (image=quay.io/ceph/ceph:v20, name=ecstatic_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 02:32:17 np0005558241 systemd[1]: libpod-conmon-7b44e023bd444ad59bb2c0bdf476222085596d5c0611b5e6b2eaa1cc36d4614f.scope: Deactivated successfully.
Dec 13 02:32:17 np0005558241 ceph-osd[87041]: osd.0 10 tick checking mon for new map
Dec 13 02:32:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec 13 02:32:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:32:17 np0005558241 lvm[89033]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:32:17 np0005558241 lvm[89032]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:32:17 np0005558241 lvm[89033]: VG ceph_vg0 finished
Dec 13 02:32:17 np0005558241 lvm[89032]: VG ceph_vg1 finished
Dec 13 02:32:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Dec 13 02:32:17 np0005558241 ceph-mon[76537]: OSD bench result of 701.069588 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 02:32:17 np0005558241 lvm[89036]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:32:17 np0005558241 lvm[89036]: VG ceph_vg2 finished
Dec 13 02:32:17 np0005558241 ceph-osd[87041]: osd.0 11 state: booting -> active
Dec 13 02:32:17 np0005558241 ceph-mon[76537]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3865363678,v1:192.168.122.100:6803/3865363678] boot
Dec 13 02:32:17 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Dec 13 02:32:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 02:32:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 02:32:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:32:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:32:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:17 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:32:17 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:18 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate[88915]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 02:32:18 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate[88915]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:18 np0005558241 bash[88891]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 02:32:18 np0005558241 bash[88891]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:18 np0005558241 ceph-mgr[76830]: [devicehealth INFO root] creating mgr pool
Dec 13 02:32:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Dec 13 02:32:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Dec 13 02:32:18 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate[88915]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:18 np0005558241 bash[88891]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 02:32:18 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate[88915]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 02:32:18 np0005558241 bash[88891]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 02:32:18 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate[88915]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 13 02:32:18 np0005558241 bash[88891]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 13 02:32:18 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate[88915]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 13 02:32:18 np0005558241 bash[88891]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 13 02:32:18 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate[88915]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 13 02:32:18 np0005558241 bash[88891]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 13 02:32:18 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate[88915]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 13 02:32:18 np0005558241 bash[88891]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 13 02:32:18 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate[88915]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 02:32:18 np0005558241 bash[88891]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 02:32:18 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate[88915]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 13 02:32:18 np0005558241 bash[88891]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 13 02:32:18 np0005558241 systemd[1]: libpod-8ea8e49de37d0e6b89e75f85f81c1c3f6678afdf6a7bccd959df33571d2d881f.scope: Deactivated successfully.
Dec 13 02:32:18 np0005558241 systemd[1]: libpod-8ea8e49de37d0e6b89e75f85f81c1c3f6678afdf6a7bccd959df33571d2d881f.scope: Consumed 1.690s CPU time.
Dec 13 02:32:18 np0005558241 podman[89143]: 2025-12-13 07:32:18.397655682 +0000 UTC m=+0.028641316 container died 8ea8e49de37d0e6b89e75f85f81c1c3f6678afdf6a7bccd959df33571d2d881f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:32:18 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9b289d4900eb1aeda26129debafcd7e33407806689aa5dab4675c0e6e2789d4d-merged.mount: Deactivated successfully.
Dec 13 02:32:18 np0005558241 podman[89143]: 2025-12-13 07:32:18.612542811 +0000 UTC m=+0.243528445 container remove 8ea8e49de37d0e6b89e75f85f81c1c3f6678afdf6a7bccd959df33571d2d881f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2-activate, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 02:32:18 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1857527260; not ready for session (expect reconnect)
Dec 13 02:32:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:32:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:32:18 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:32:18 np0005558241 podman[89202]: 2025-12-13 07:32:18.860859153 +0000 UTC m=+0.092539469 container create fee4e182b257bd045f2ce8dced211f24985d9c6b59fc214e8aed6e029e549d8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 02:32:18 np0005558241 podman[89202]: 2025-12-13 07:32:18.794353036 +0000 UTC m=+0.026033372 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1f0b49abe4691b121100bf616540bdcd89365d2d7053b1511fde4292d0395a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1f0b49abe4691b121100bf616540bdcd89365d2d7053b1511fde4292d0395a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1f0b49abe4691b121100bf616540bdcd89365d2d7053b1511fde4292d0395a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1f0b49abe4691b121100bf616540bdcd89365d2d7053b1511fde4292d0395a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c1f0b49abe4691b121100bf616540bdcd89365d2d7053b1511fde4292d0395a/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec 13 02:32:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 02:32:18 np0005558241 ceph-mon[76537]: osd.0 [v2:192.168.122.100:6802/3865363678,v1:192.168.122.100:6803/3865363678] boot
Dec 13 02:32:18 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Dec 13 02:32:18 np0005558241 podman[89202]: 2025-12-13 07:32:18.977879633 +0000 UTC m=+0.209559989 container init fee4e182b257bd045f2ce8dced211f24985d9c6b59fc214e8aed6e029e549d8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:32:18 np0005558241 podman[89202]: 2025-12-13 07:32:18.986008733 +0000 UTC m=+0.217689089 container start fee4e182b257bd045f2ce8dced211f24985d9c6b59fc214e8aed6e029e549d8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 02:32:19 np0005558241 bash[89202]: fee4e182b257bd045f2ce8dced211f24985d9c6b59fc214e8aed6e029e549d8a
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:19 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Dec 13 02:32:19 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:19 np0005558241 systemd[1]: Started Ceph osd.2 for 18ee9de6-e00b-571b-ab9b-b7aab06174df.
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: pidfile_write: ignore empty --pid-file
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:32:19 np0005558241 ceph-osd[87041]: osd.0 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 13 02:32:19 np0005558241 ceph-osd[87041]: osd.0 12 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 13 02:32:19 np0005558241 ceph-osd[87041]: osd.0 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e400 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108e000 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: load: jerasure load: lrc 
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:32:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b5108fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b51d25800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b51d25800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b51d25800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b51d25800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs mount
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs mount shared_bdev_used = 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: RocksDB version: 7.9.2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Git sha 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: DB SUMMARY
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: DB Session ID:  K00KBWTL26CKQCU0G8KL
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: CURRENT file:  CURRENT
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                         Options.error_if_exists: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.create_if_missing: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                                     Options.env: 0x562b50f1fea0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                                Options.info_log: 0x562b51f708a0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                              Options.statistics: (nil)
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.use_fsync: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                              Options.db_log_dir: 
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.write_buffer_manager: 0x562b50f84b40
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.unordered_write: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.row_cache: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                              Options.wal_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.two_write_queues: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.wal_compression: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.atomic_flush: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.max_background_jobs: 4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.max_background_compactions: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.max_subcompactions: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.max_open_files: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Compression algorithms supported:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: 	kZSTD supported: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: 	kXpressCompression supported: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: 	kBZip2Compression supported: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: 	kLZ4Compression supported: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: 	kZlibCompression supported: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: 	kLZ4HCCompression supported: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: 	kSnappyCompression supported: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f70c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562b50f238d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f70c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562b50f238d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f70c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562b50f238d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f70c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562b50f238d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f70c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562b50f238d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f70c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562b50f238d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f70c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562b50f238d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f70c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562b50f23a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f70c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562b50f23a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f70c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562b50f23a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 580a6e92-95b0-4b00-af53-1c3cf49b6236
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611139426954, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611139429209, "job": 1, "event": "recovery_finished"}
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: freelist init
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: freelist _read_cfg
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs umount
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b51d25800 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b51d25800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b51d25800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b51d25800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bdev(0x562b51d25800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs mount
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluefs mount shared_bdev_used = 27262976
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: RocksDB version: 7.9.2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Git sha 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: DB SUMMARY
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: DB Session ID:  K00KBWTL26CKQCU0G8KK
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: CURRENT file:  CURRENT
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                         Options.error_if_exists: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.create_if_missing: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                                     Options.env: 0x562b52140a80
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                                Options.info_log: 0x562b51f70960
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                              Options.statistics: (nil)
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.use_fsync: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                              Options.db_log_dir: 
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.write_buffer_manager: 0x562b50f85900
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.unordered_write: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.row_cache: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                              Options.wal_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.two_write_queues: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.wal_compression: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.atomic_flush: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.max_background_jobs: 4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.max_background_compactions: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.max_subcompactions: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.max_open_files: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Compression algorithms supported:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: 	kZSTD supported: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: 	kXpressCompression supported: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: 	kBZip2Compression supported: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: 	kLZ4Compression supported: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: 	kZlibCompression supported: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: 	kLZ4HCCompression supported: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: 	kSnappyCompression supported: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f70bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562b50f238d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f70bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562b50f238d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f70bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562b50f238d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f70bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562b50f238d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f70bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562b50f238d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f70bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562b50f238d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f70bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562b50f238d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f710c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562b50f23a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f710c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562b50f23a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:           Options.merge_operator: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562b51f710c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562b50f23a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.compression: LZ4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.num_levels: 7
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.bloom_locality: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                               Options.ttl: 2592000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                       Options.enable_blob_files: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                           Options.min_blob_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 580a6e92-95b0-4b00-af53-1c3cf49b6236
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611139494794, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611139527914, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611139, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "580a6e92-95b0-4b00-af53-1c3cf49b6236", "db_session_id": "K00KBWTL26CKQCU0G8KK", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611139563393, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1592, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 466, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611139, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "580a6e92-95b0-4b00-af53-1c3cf49b6236", "db_session_id": "K00KBWTL26CKQCU0G8KK", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:32:19 np0005558241 podman[89699]: 2025-12-13 07:32:19.569244999 +0000 UTC m=+0.050780051 container create 1264dfbdc8dc22ab48bff0f45e17ccb85f4b6b70e91391b9b443309e96203962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_jackson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611139582773, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611139, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "580a6e92-95b0-4b00-af53-1c3cf49b6236", "db_session_id": "K00KBWTL26CKQCU0G8KK", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611139600765, "job": 1, "event": "recovery_finished"}
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 13 02:32:19 np0005558241 systemd[1]: Started libpod-conmon-1264dfbdc8dc22ab48bff0f45e17ccb85f4b6b70e91391b9b443309e96203962.scope.
Dec 13 02:32:19 np0005558241 podman[89699]: 2025-12-13 07:32:19.541616959 +0000 UTC m=+0.023152031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:19 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562b5218a000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: DB pointer 0x562b5212a000
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 
collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: bluestore.MempoolThread fragmentation_score=0.000124 took=0.000006s
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: _get_class not permitted to load lua
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: _get_class not permitted to load sdk
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: osd.2 0 load_pgs
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: osd.2 0 load_pgs opened 0 pgs
Dec 13 02:32:19 np0005558241 ceph-osd[89221]: osd.2 0 log_to_monitors true
Dec 13 02:32:19 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2[89217]: 2025-12-13T07:32:19.682+0000 7fb1f38798c0 -1 osd.2 0 log_to_monitors true
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3792529097,v1:192.168.122.100:6811/3792529097]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Dec 13 02:32:19 np0005558241 podman[89699]: 2025-12-13 07:32:19.712027424 +0000 UTC m=+0.193562496 container init 1264dfbdc8dc22ab48bff0f45e17ccb85f4b6b70e91391b9b443309e96203962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_jackson, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 02:32:19 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1857527260; not ready for session (expect reconnect)
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:32:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:32:19 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:32:19 np0005558241 podman[89699]: 2025-12-13 07:32:19.723802883 +0000 UTC m=+0.205337935 container start 1264dfbdc8dc22ab48bff0f45e17ccb85f4b6b70e91391b9b443309e96203962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_jackson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:32:19 np0005558241 loving_jackson[89715]: 167 167
Dec 13 02:32:19 np0005558241 systemd[1]: libpod-1264dfbdc8dc22ab48bff0f45e17ccb85f4b6b70e91391b9b443309e96203962.scope: Deactivated successfully.
Dec 13 02:32:19 np0005558241 conmon[89715]: conmon 1264dfbdc8dc22ab48bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1264dfbdc8dc22ab48bff0f45e17ccb85f4b6b70e91391b9b443309e96203962.scope/container/memory.events
Dec 13 02:32:19 np0005558241 podman[89699]: 2025-12-13 07:32:19.744540784 +0000 UTC m=+0.226075836 container attach 1264dfbdc8dc22ab48bff0f45e17ccb85f4b6b70e91391b9b443309e96203962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 02:32:19 np0005558241 podman[89699]: 2025-12-13 07:32:19.74480204 +0000 UTC m=+0.226337092 container died 1264dfbdc8dc22ab48bff0f45e17ccb85f4b6b70e91391b9b443309e96203962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:32:19 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e7f2c9e81c0d0e4ab2b1fb1756060ac4fbf5a0f32c6efb47703b62c8f6d003b6-merged.mount: Deactivated successfully.
Dec 13 02:32:19 np0005558241 podman[89699]: 2025-12-13 07:32:19.981659901 +0000 UTC m=+0.463194993 container remove 1264dfbdc8dc22ab48bff0f45e17ccb85f4b6b70e91391b9b443309e96203962 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:32:19 np0005558241 systemd[1]: libpod-conmon-1264dfbdc8dc22ab48bff0f45e17ccb85f4b6b70e91391b9b443309e96203962.scope: Deactivated successfully.
Dec 13 02:32:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec 13 02:32:20 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 13 02:32:20 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Dec 13 02:32:20 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:20 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:20 np0005558241 ceph-mon[76537]: from='osd.2 [v2:192.168.122.100:6810/3792529097,v1:192.168.122.100:6811/3792529097]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Dec 13 02:32:20 np0005558241 podman[89774]: 2025-12-13 07:32:20.171230467 +0000 UTC m=+0.027790865 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:20 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 13 02:32:20 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 13 02:32:20 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1857527260; not ready for session (expect reconnect)
Dec 13 02:32:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:32:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:32:20 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:32:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 13 02:32:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3792529097,v1:192.168.122.100:6811/3792529097]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 13 02:32:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Dec 13 02:32:20 np0005558241 podman[89774]: 2025-12-13 07:32:20.733784282 +0000 UTC m=+0.590344620 container create 41d3d3d65d8ce762e98533063377768de09df677dcbc360c97431c9b50b33215 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mcclintock, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:32:20 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Dec 13 02:32:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 13 02:32:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3792529097,v1:192.168.122.100:6811/3792529097]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 02:32:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 13 02:32:20 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:32:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:32:20 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:32:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:20 np0005558241 systemd[1]: Started libpod-conmon-41d3d3d65d8ce762e98533063377768de09df677dcbc360c97431c9b50b33215.scope.
Dec 13 02:32:20 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b95bba386601a62212e72ccd4ce6c598a9e78d5e0d7e933ff0eea043f37d89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b95bba386601a62212e72ccd4ce6c598a9e78d5e0d7e933ff0eea043f37d89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b95bba386601a62212e72ccd4ce6c598a9e78d5e0d7e933ff0eea043f37d89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b95bba386601a62212e72ccd4ce6c598a9e78d5e0d7e933ff0eea043f37d89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:20 np0005558241 podman[89774]: 2025-12-13 07:32:20.851190972 +0000 UTC m=+0.707751290 container init 41d3d3d65d8ce762e98533063377768de09df677dcbc360c97431c9b50b33215 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mcclintock, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:32:20 np0005558241 podman[89774]: 2025-12-13 07:32:20.862004758 +0000 UTC m=+0.718565056 container start 41d3d3d65d8ce762e98533063377768de09df677dcbc360c97431c9b50b33215 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 02:32:20 np0005558241 podman[89774]: 2025-12-13 07:32:20.892848447 +0000 UTC m=+0.749408745 container attach 41d3d3d65d8ce762e98533063377768de09df677dcbc360c97431c9b50b33215 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: from='osd.2 [v2:192.168.122.100:6810/3792529097,v1:192.168.122.100:6811/3792529097]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: from='osd.2 [v2:192.168.122.100:6810/3792529097,v1:192.168.122.100:6811/3792529097]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 02:32:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 13 02:32:21 np0005558241 lvm[89868]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:32:21 np0005558241 lvm[89868]: VG ceph_vg0 finished
Dec 13 02:32:21 np0005558241 lvm[89869]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:32:21 np0005558241 lvm[89869]: VG ceph_vg1 finished
Dec 13 02:32:21 np0005558241 lvm[89871]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:32:21 np0005558241 lvm[89871]: VG ceph_vg2 finished
Dec 13 02:32:21 np0005558241 ceph-osd[88086]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 25.576 iops: 6547.567 elapsed_sec: 0.458
Dec 13 02:32:21 np0005558241 ceph-osd[88086]: log_channel(cluster) log [WRN] : OSD bench result of 6547.566748 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 02:32:21 np0005558241 ceph-osd[88086]: osd.1 0 waiting for initial osdmap
Dec 13 02:32:21 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1[88082]: 2025-12-13T07:32:21.591+0000 7f7b14369640 -1 osd.1 0 waiting for initial osdmap
Dec 13 02:32:21 np0005558241 ceph-osd[88086]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 13 02:32:21 np0005558241 ceph-osd[88086]: osd.1 13 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 13 02:32:21 np0005558241 ceph-osd[88086]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 13 02:32:21 np0005558241 ceph-osd[88086]: osd.1 13 check_osdmap_features require_osd_release unknown -> tentacle
Dec 13 02:32:21 np0005558241 sad_mcclintock[89790]: {}
Dec 13 02:32:21 np0005558241 ceph-osd[88086]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 02:32:21 np0005558241 ceph-osd[88086]: osd.1 13 set_numa_affinity not setting numa affinity
Dec 13 02:32:21 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-1[88082]: 2025-12-13T07:32:21.626+0000 7f7b0f16e640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 02:32:21 np0005558241 ceph-osd[88086]: osd.1 13 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Dec 13 02:32:21 np0005558241 systemd[1]: libpod-41d3d3d65d8ce762e98533063377768de09df677dcbc360c97431c9b50b33215.scope: Deactivated successfully.
Dec 13 02:32:21 np0005558241 systemd[1]: libpod-41d3d3d65d8ce762e98533063377768de09df677dcbc360c97431c9b50b33215.scope: Consumed 1.236s CPU time.
Dec 13 02:32:21 np0005558241 podman[89774]: 2025-12-13 07:32:21.644972 +0000 UTC m=+1.501532298 container died 41d3d3d65d8ce762e98533063377768de09df677dcbc360c97431c9b50b33215 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:32:21 np0005558241 systemd[1]: var-lib-containers-storage-overlay-26b95bba386601a62212e72ccd4ce6c598a9e78d5e0d7e933ff0eea043f37d89-merged.mount: Deactivated successfully.
Dec 13 02:32:21 np0005558241 podman[89774]: 2025-12-13 07:32:21.693954896 +0000 UTC m=+1.550515194 container remove 41d3d3d65d8ce762e98533063377768de09df677dcbc360c97431c9b50b33215 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mcclintock, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:32:21 np0005558241 systemd[1]: libpod-conmon-41d3d3d65d8ce762e98533063377768de09df677dcbc360c97431c9b50b33215.scope: Deactivated successfully.
Dec 13 02:32:21 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1857527260; not ready for session (expect reconnect)
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:32:21 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3792529097,v1:192.168.122.100:6811/3792529097]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Dec 13 02:32:21 np0005558241 ceph-osd[89221]: osd.2 0 done with init, starting boot process
Dec 13 02:32:21 np0005558241 ceph-osd[89221]: osd.2 0 start_boot
Dec 13 02:32:21 np0005558241 ceph-osd[89221]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 13 02:32:21 np0005558241 ceph-osd[89221]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 13 02:32:21 np0005558241 ceph-osd[89221]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 13 02:32:21 np0005558241 ceph-osd[89221]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 13 02:32:21 np0005558241 ceph-osd[89221]: osd.2 0  bench count 12288000 bsize 4 KiB
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1857527260,v1:192.168.122.100:6807/1857527260] boot
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:32:21 np0005558241 ceph-osd[88086]: osd.1 14 state: booting -> active
Dec 13 02:32:21 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[12,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:32:21 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3792529097; not ready for session (expect reconnect)
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:21 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:32:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:22 np0005558241 ceph-mon[76537]: OSD bench result of 6547.566748 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 02:32:22 np0005558241 ceph-mon[76537]: from='osd.2 [v2:192.168.122.100:6810/3792529097,v1:192.168.122.100:6811/3792529097]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 02:32:22 np0005558241 ceph-mon[76537]: osd.1 [v2:192.168.122.100:6806/1857527260,v1:192.168.122.100:6807/1857527260] boot
Dec 13 02:32:22 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:22 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:22 np0005558241 podman[90004]: 2025-12-13 07:32:22.624543401 +0000 UTC m=+0.138912510 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:32:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec 13 02:32:22 np0005558241 podman[90004]: 2025-12-13 07:32:22.741549921 +0000 UTC m=+0.255918980 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 02:32:22 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3792529097; not ready for session (expect reconnect)
Dec 13 02:32:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:22 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Dec 13 02:32:22 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Dec 13 02:32:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:22 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:22 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=14/15 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[12,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:32:22 np0005558241 ceph-mgr[76830]: [devicehealth INFO root] creating main.db for devicehealth
Dec 13 02:32:23 np0005558241 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 02:32:23 np0005558241 ceph-mgr[76830]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Dec 13 02:32:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 13 02:32:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 13 02:32:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 13 02:32:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 13 02:32:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 13 02:32:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:32:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:32:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:23 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3792529097; not ready for session (expect reconnect)
Dec 13 02:32:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:23 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec 13 02:32:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Dec 13 02:32:23 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Dec 13 02:32:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:23 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:24 np0005558241 podman[90231]: 2025-12-13 07:32:24.23009583 +0000 UTC m=+0.073940421 container create c65048eb1aaaae39cca8649f2acf75761e44d87cd9f651b9f6e3a0c853a8591b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_satoshi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:32:24 np0005558241 ceph-mon[76537]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 13 02:32:24 np0005558241 ceph-mon[76537]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 13 02:32:24 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:24 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:24 np0005558241 podman[90231]: 2025-12-13 07:32:24.194137655 +0000 UTC m=+0.037982256 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:24 np0005558241 systemd[1]: Started libpod-conmon-c65048eb1aaaae39cca8649f2acf75761e44d87cd9f651b9f6e3a0c853a8591b.scope.
Dec 13 02:32:24 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:24 np0005558241 podman[90231]: 2025-12-13 07:32:24.398362951 +0000 UTC m=+0.242207592 container init c65048eb1aaaae39cca8649f2acf75761e44d87cd9f651b9f6e3a0c853a8591b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 02:32:24 np0005558241 podman[90231]: 2025-12-13 07:32:24.409787733 +0000 UTC m=+0.253632324 container start c65048eb1aaaae39cca8649f2acf75761e44d87cd9f651b9f6e3a0c853a8591b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_satoshi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Dec 13 02:32:24 np0005558241 stoic_satoshi[90247]: 167 167
Dec 13 02:32:24 np0005558241 systemd[1]: libpod-c65048eb1aaaae39cca8649f2acf75761e44d87cd9f651b9f6e3a0c853a8591b.scope: Deactivated successfully.
Dec 13 02:32:24 np0005558241 podman[90231]: 2025-12-13 07:32:24.427277113 +0000 UTC m=+0.271121724 container attach c65048eb1aaaae39cca8649f2acf75761e44d87cd9f651b9f6e3a0c853a8591b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_satoshi, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:32:24 np0005558241 podman[90231]: 2025-12-13 07:32:24.428605456 +0000 UTC m=+0.272450047 container died c65048eb1aaaae39cca8649f2acf75761e44d87cd9f651b9f6e3a0c853a8591b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_satoshi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:32:24 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c6f886713257ec9effea441697bfb9a68690767d088677d6fd36f9aafac155ab-merged.mount: Deactivated successfully.
Dec 13 02:32:24 np0005558241 podman[90231]: 2025-12-13 07:32:24.573889592 +0000 UTC m=+0.417734193 container remove c65048eb1aaaae39cca8649f2acf75761e44d87cd9f651b9f6e3a0c853a8591b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_satoshi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 02:32:24 np0005558241 systemd[1]: libpod-conmon-c65048eb1aaaae39cca8649f2acf75761e44d87cd9f651b9f6e3a0c853a8591b.scope: Deactivated successfully.
Dec 13 02:32:24 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3792529097; not ready for session (expect reconnect)
Dec 13 02:32:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:24 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:24 np0005558241 podman[90273]: 2025-12-13 07:32:24.844229726 +0000 UTC m=+0.086488040 container create 58cff493293bc14ceb32b934abf7dd0eac71d5046dbaabf203eef7772695367f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_volhard, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 02:32:24 np0005558241 podman[90273]: 2025-12-13 07:32:24.801764081 +0000 UTC m=+0.044022485 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:24 np0005558241 systemd[1]: Started libpod-conmon-58cff493293bc14ceb32b934abf7dd0eac71d5046dbaabf203eef7772695367f.scope.
Dec 13 02:32:24 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9940bbcc364f33b3c3faaae0d4f937e91bfd0f43d292baa165ad41578b2e570c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9940bbcc364f33b3c3faaae0d4f937e91bfd0f43d292baa165ad41578b2e570c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9940bbcc364f33b3c3faaae0d4f937e91bfd0f43d292baa165ad41578b2e570c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9940bbcc364f33b3c3faaae0d4f937e91bfd0f43d292baa165ad41578b2e570c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:25 np0005558241 podman[90273]: 2025-12-13 07:32:25.008028698 +0000 UTC m=+0.250287112 container init 58cff493293bc14ceb32b934abf7dd0eac71d5046dbaabf203eef7772695367f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:32:25 np0005558241 podman[90273]: 2025-12-13 07:32:25.022177791 +0000 UTC m=+0.264436115 container start 58cff493293bc14ceb32b934abf7dd0eac71d5046dbaabf203eef7772695367f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_volhard, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:25 np0005558241 podman[90273]: 2025-12-13 07:32:25.047363634 +0000 UTC m=+0.289621958 container attach 58cff493293bc14ceb32b934abf7dd0eac71d5046dbaabf203eef7772695367f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_volhard, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:32:25 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.vndjzx(active, since 76s)
Dec 13 02:32:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v45: 1 pgs: 1 creating+peering; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]: [
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:    {
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:        "available": false,
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:        "being_replaced": false,
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:        "ceph_device_lvm": false,
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:        "lsm_data": {},
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:        "lvs": [],
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:        "path": "/dev/sr0",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:        "rejected_reasons": [
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "Has a FileSystem",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "Insufficient space (<5GB)"
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:        ],
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:        "sys_api": {
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "actuators": null,
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "device_nodes": [
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:                "sr0"
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            ],
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "devname": "sr0",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "human_readable_size": "482.00 KB",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "id_bus": "ata",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "model": "QEMU DVD-ROM",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "nr_requests": "2",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "parent": "/dev/sr0",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "partitions": {},
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "path": "/dev/sr0",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "removable": "1",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "rev": "2.5+",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "ro": "0",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "rotational": "1",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "sas_address": "",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "sas_device_handle": "",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "scheduler_mode": "mq-deadline",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "sectors": 0,
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "sectorsize": "2048",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "size": 493568.0,
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "support_discard": "2048",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "type": "disk",
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:            "vendor": "QEMU"
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:        }
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]:    }
Dec 13 02:32:25 np0005558241 goofy_volhard[90289]: ]
Dec 13 02:32:25 np0005558241 systemd[1]: libpod-58cff493293bc14ceb32b934abf7dd0eac71d5046dbaabf203eef7772695367f.scope: Deactivated successfully.
Dec 13 02:32:25 np0005558241 podman[90273]: 2025-12-13 07:32:25.608238873 +0000 UTC m=+0.850497237 container died 58cff493293bc14ceb32b934abf7dd0eac71d5046dbaabf203eef7772695367f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_volhard, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:32:25 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9940bbcc364f33b3c3faaae0d4f937e91bfd0f43d292baa165ad41578b2e570c-merged.mount: Deactivated successfully.
Dec 13 02:32:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:25 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3792529097; not ready for session (expect reconnect)
Dec 13 02:32:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:25 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:25 np0005558241 podman[90273]: 2025-12-13 07:32:25.787272133 +0000 UTC m=+1.029530467 container remove 58cff493293bc14ceb32b934abf7dd0eac71d5046dbaabf203eef7772695367f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Dec 13 02:32:25 np0005558241 systemd[1]: libpod-conmon-58cff493293bc14ceb32b934abf7dd0eac71d5046dbaabf203eef7772695367f.scope: Deactivated successfully.
Dec 13 02:32:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:32:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:32:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:32:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:32:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 13 02:32:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 13 02:32:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 13 02:32:26 np0005558241 ceph-mgr[76830]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43687k
Dec 13 02:32:26 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43687k
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 13 02:32:26 np0005558241 ceph-mgr[76830]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44736375: error parsing value: Value '44736375' is below minimum 939524096
Dec 13 02:32:26 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44736375: error parsing value: Value '44736375' is below minimum 939524096
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:32:26 np0005558241 podman[91016]: 2025-12-13 07:32:26.564423188 +0000 UTC m=+0.040582201 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:26 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3792529097; not ready for session (expect reconnect)
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:26 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:26 np0005558241 podman[91016]: 2025-12-13 07:32:26.916301253 +0000 UTC m=+0.392460186 container create 49c7ecfd0f137e52a4d9ea2f2499967d1e77bc75f7d418aed224854e55a18525 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 02:32:27 np0005558241 systemd[1]: Started libpod-conmon-49c7ecfd0f137e52a4d9ea2f2499967d1e77bc75f7d418aed224854e55a18525.scope.
Dec 13 02:32:27 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 creating+peering; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 13 02:32:27 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3792529097; not ready for session (expect reconnect)
Dec 13 02:32:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:27 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:27 np0005558241 podman[91016]: 2025-12-13 07:32:27.858855225 +0000 UTC m=+1.335014228 container init 49c7ecfd0f137e52a4d9ea2f2499967d1e77bc75f7d418aed224854e55a18525 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_pascal, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 02:32:27 np0005558241 ceph-mon[76537]: Adjusting osd_memory_target on compute-0 to 43687k
Dec 13 02:32:27 np0005558241 ceph-mon[76537]: Unable to set osd_memory_target on compute-0 to 44736375: error parsing value: Value '44736375' is below minimum 939524096
Dec 13 02:32:27 np0005558241 podman[91016]: 2025-12-13 07:32:27.867690267 +0000 UTC m=+1.343849200 container start 49c7ecfd0f137e52a4d9ea2f2499967d1e77bc75f7d418aed224854e55a18525 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_pascal, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:32:27 np0005558241 brave_pascal[91032]: 167 167
Dec 13 02:32:27 np0005558241 systemd[1]: libpod-49c7ecfd0f137e52a4d9ea2f2499967d1e77bc75f7d418aed224854e55a18525.scope: Deactivated successfully.
Dec 13 02:32:27 np0005558241 conmon[91032]: conmon 49c7ecfd0f137e52a4d9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-49c7ecfd0f137e52a4d9ea2f2499967d1e77bc75f7d418aed224854e55a18525.scope/container/memory.events
Dec 13 02:32:28 np0005558241 podman[91016]: 2025-12-13 07:32:28.173506463 +0000 UTC m=+1.649665476 container attach 49c7ecfd0f137e52a4d9ea2f2499967d1e77bc75f7d418aed224854e55a18525 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_pascal, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 02:32:28 np0005558241 podman[91016]: 2025-12-13 07:32:28.17414862 +0000 UTC m=+1.650307593 container died 49c7ecfd0f137e52a4d9ea2f2499967d1e77bc75f7d418aed224854e55a18525 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_pascal, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 02:32:28 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e5d3e412957d54e181cfac7acac6abb13a5427f50931a53f99f21a3b727fcfa7-merged.mount: Deactivated successfully.
Dec 13 02:32:28 np0005558241 podman[91016]: 2025-12-13 07:32:28.453134523 +0000 UTC m=+1.929293486 container remove 49c7ecfd0f137e52a4d9ea2f2499967d1e77bc75f7d418aed224854e55a18525 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 02:32:28 np0005558241 systemd[1]: libpod-conmon-49c7ecfd0f137e52a4d9ea2f2499967d1e77bc75f7d418aed224854e55a18525.scope: Deactivated successfully.
Dec 13 02:32:28 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3792529097; not ready for session (expect reconnect)
Dec 13 02:32:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:28 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:28 np0005558241 podman[91058]: 2025-12-13 07:32:28.666362243 +0000 UTC m=+0.042333175 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 13 02:32:29 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3792529097; not ready for session (expect reconnect)
Dec 13 02:32:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:29 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:29 np0005558241 podman[91058]: 2025-12-13 07:32:29.928859097 +0000 UTC m=+1.304829969 container create 12ec54b059880e6eba9da41b140f6df3ed86f145ff88555baea835d8d6932f47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_nash, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 02:32:30 np0005558241 systemd[1]: Started libpod-conmon-12ec54b059880e6eba9da41b140f6df3ed86f145ff88555baea835d8d6932f47.scope.
Dec 13 02:32:30 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f87f907a76341b70902a79afe83c580f14eedcf4c0027ab0c48e3e7f12fa0887/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f87f907a76341b70902a79afe83c580f14eedcf4c0027ab0c48e3e7f12fa0887/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f87f907a76341b70902a79afe83c580f14eedcf4c0027ab0c48e3e7f12fa0887/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f87f907a76341b70902a79afe83c580f14eedcf4c0027ab0c48e3e7f12fa0887/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f87f907a76341b70902a79afe83c580f14eedcf4c0027ab0c48e3e7f12fa0887/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:30 np0005558241 podman[91058]: 2025-12-13 07:32:30.490330501 +0000 UTC m=+1.866301403 container init 12ec54b059880e6eba9da41b140f6df3ed86f145ff88555baea835d8d6932f47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_nash, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:32:30 np0005558241 podman[91058]: 2025-12-13 07:32:30.498273031 +0000 UTC m=+1.874243883 container start 12ec54b059880e6eba9da41b140f6df3ed86f145ff88555baea835d8d6932f47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_nash, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 02:32:30 np0005558241 podman[91058]: 2025-12-13 07:32:30.503603935 +0000 UTC m=+1.879574797 container attach 12ec54b059880e6eba9da41b140f6df3ed86f145ff88555baea835d8d6932f47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_nash, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:30 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3792529097; not ready for session (expect reconnect)
Dec 13 02:32:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:30 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:31 np0005558241 romantic_nash[91074]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:32:31 np0005558241 romantic_nash[91074]: --> All data devices are unavailable
Dec 13 02:32:31 np0005558241 systemd[1]: libpod-12ec54b059880e6eba9da41b140f6df3ed86f145ff88555baea835d8d6932f47.scope: Deactivated successfully.
Dec 13 02:32:31 np0005558241 podman[91094]: 2025-12-13 07:32:31.090842336 +0000 UTC m=+0.025943023 container died 12ec54b059880e6eba9da41b140f6df3ed86f145ff88555baea835d8d6932f47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_nash, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:32:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 13 02:32:31 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f87f907a76341b70902a79afe83c580f14eedcf4c0027ab0c48e3e7f12fa0887-merged.mount: Deactivated successfully.
Dec 13 02:32:31 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3792529097; not ready for session (expect reconnect)
Dec 13 02:32:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:31 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:32 np0005558241 podman[91094]: 2025-12-13 07:32:32.028792912 +0000 UTC m=+0.963893429 container remove 12ec54b059880e6eba9da41b140f6df3ed86f145ff88555baea835d8d6932f47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Dec 13 02:32:32 np0005558241 systemd[1]: libpod-conmon-12ec54b059880e6eba9da41b140f6df3ed86f145ff88555baea835d8d6932f47.scope: Deactivated successfully.
Dec 13 02:32:32 np0005558241 ceph-osd[89221]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 2.291 iops: 586.568 elapsed_sec: 5.114
Dec 13 02:32:32 np0005558241 ceph-osd[89221]: log_channel(cluster) log [WRN] : OSD bench result of 586.568079 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 02:32:32 np0005558241 ceph-osd[89221]: osd.2 0 waiting for initial osdmap
Dec 13 02:32:32 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2[89217]: 2025-12-13T07:32:32.113+0000 7fb1ef7fb640 -1 osd.2 0 waiting for initial osdmap
Dec 13 02:32:32 np0005558241 ceph-osd[89221]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 13 02:32:32 np0005558241 ceph-osd[89221]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 13 02:32:32 np0005558241 ceph-osd[89221]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 13 02:32:32 np0005558241 ceph-osd[89221]: osd.2 16 check_osdmap_features require_osd_release unknown -> tentacle
Dec 13 02:32:32 np0005558241 ceph-osd[89221]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 02:32:32 np0005558241 ceph-osd[89221]: osd.2 16 set_numa_affinity not setting numa affinity
Dec 13 02:32:32 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-osd-2[89217]: 2025-12-13T07:32:32.201+0000 7fb1ea600640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 02:32:32 np0005558241 ceph-osd[89221]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Dec 13 02:32:32 np0005558241 podman[91169]: 2025-12-13 07:32:32.628559719 +0000 UTC m=+0.042770616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:32 np0005558241 ceph-mgr[76830]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3792529097; not ready for session (expect reconnect)
Dec 13 02:32:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:32 np0005558241 ceph-mgr[76830]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 02:32:32 np0005558241 podman[91169]: 2025-12-13 07:32:32.856791946 +0000 UTC m=+0.271002783 container create 3ff67d93edf27317dc6de9915d1f50cca5f795fc5ca96093048b972a56369116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_blackburn, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Dec 13 02:32:32 np0005558241 systemd[1]: Started libpod-conmon-3ff67d93edf27317dc6de9915d1f50cca5f795fc5ca96093048b972a56369116.scope.
Dec 13 02:32:32 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:32 np0005558241 podman[91169]: 2025-12-13 07:32:32.969497409 +0000 UTC m=+0.383708306 container init 3ff67d93edf27317dc6de9915d1f50cca5f795fc5ca96093048b972a56369116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_blackburn, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 02:32:32 np0005558241 podman[91169]: 2025-12-13 07:32:32.980040634 +0000 UTC m=+0.394251481 container start 3ff67d93edf27317dc6de9915d1f50cca5f795fc5ca96093048b972a56369116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 02:32:32 np0005558241 objective_blackburn[91186]: 167 167
Dec 13 02:32:32 np0005558241 systemd[1]: libpod-3ff67d93edf27317dc6de9915d1f50cca5f795fc5ca96093048b972a56369116.scope: Deactivated successfully.
Dec 13 02:32:33 np0005558241 podman[91169]: 2025-12-13 07:32:33.131595133 +0000 UTC m=+0.545806030 container attach 3ff67d93edf27317dc6de9915d1f50cca5f795fc5ca96093048b972a56369116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_blackburn, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:32:33 np0005558241 podman[91169]: 2025-12-13 07:32:33.132402384 +0000 UTC m=+0.546613231 container died 3ff67d93edf27317dc6de9915d1f50cca5f795fc5ca96093048b972a56369116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_blackburn, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:32:33 np0005558241 ceph-osd[89221]: osd.2 16 tick checking mon for new map
Dec 13 02:32:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec 13 02:32:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Dec 13 02:32:33 np0005558241 ceph-mon[76537]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/3792529097,v1:192.168.122.100:6811/3792529097] boot
Dec 13 02:32:33 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Dec 13 02:32:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 02:32:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 02:32:33 np0005558241 ceph-osd[89221]: osd.2 17 state: booting -> active
Dec 13 02:32:33 np0005558241 ceph-mon[76537]: OSD bench result of 586.568079 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 02:32:33 np0005558241 systemd[1]: var-lib-containers-storage-overlay-bea0e4bf056ad4a0382097d6b5b5f9826fc6e399e38ba3384d810b29abddc9a5-merged.mount: Deactivated successfully.
Dec 13 02:32:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 13 02:32:33 np0005558241 podman[91169]: 2025-12-13 07:32:33.648745073 +0000 UTC m=+1.062955920 container remove 3ff67d93edf27317dc6de9915d1f50cca5f795fc5ca96093048b972a56369116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:32:33 np0005558241 systemd[1]: libpod-conmon-3ff67d93edf27317dc6de9915d1f50cca5f795fc5ca96093048b972a56369116.scope: Deactivated successfully.
Dec 13 02:32:33 np0005558241 podman[91212]: 2025-12-13 07:32:33.87021622 +0000 UTC m=+0.044562101 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:34 np0005558241 podman[91212]: 2025-12-13 07:32:34.190331966 +0000 UTC m=+0.364677797 container create 1b07fceb52a06470bf9904ed9daef7bdbd9e31906f123914ea8ced87efc97253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 02:32:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec 13 02:32:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Dec 13 02:32:34 np0005558241 systemd[1]: Started libpod-conmon-1b07fceb52a06470bf9904ed9daef7bdbd9e31906f123914ea8ced87efc97253.scope.
Dec 13 02:32:34 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Dec 13 02:32:34 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a625d817a87ff0ab42e4bf6c4de80ba1c217380129dabb25d54d8e9ea1fa137/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a625d817a87ff0ab42e4bf6c4de80ba1c217380129dabb25d54d8e9ea1fa137/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a625d817a87ff0ab42e4bf6c4de80ba1c217380129dabb25d54d8e9ea1fa137/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a625d817a87ff0ab42e4bf6c4de80ba1c217380129dabb25d54d8e9ea1fa137/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:34 np0005558241 ceph-mon[76537]: osd.2 [v2:192.168.122.100:6810/3792529097,v1:192.168.122.100:6811/3792529097] boot
Dec 13 02:32:34 np0005558241 podman[91212]: 2025-12-13 07:32:34.491045475 +0000 UTC m=+0.665391376 container init 1b07fceb52a06470bf9904ed9daef7bdbd9e31906f123914ea8ced87efc97253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_northcutt, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 02:32:34 np0005558241 podman[91212]: 2025-12-13 07:32:34.502286518 +0000 UTC m=+0.676632369 container start 1b07fceb52a06470bf9904ed9daef7bdbd9e31906f123914ea8ced87efc97253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 02:32:34 np0005558241 podman[91212]: 2025-12-13 07:32:34.531617795 +0000 UTC m=+0.705963726 container attach 1b07fceb52a06470bf9904ed9daef7bdbd9e31906f123914ea8ced87efc97253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]: {
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:    "0": [
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:        {
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "devices": [
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "/dev/loop3"
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            ],
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "lv_name": "ceph_lv0",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "lv_size": "21470642176",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "name": "ceph_lv0",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "tags": {
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.cluster_name": "ceph",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.crush_device_class": "",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.encrypted": "0",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.objectstore": "bluestore",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.osd_id": "0",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.type": "block",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.vdo": "0",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.with_tpm": "0"
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            },
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "type": "block",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "vg_name": "ceph_vg0"
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:        }
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:    ],
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:    "1": [
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:        {
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "devices": [
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "/dev/loop4"
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            ],
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "lv_name": "ceph_lv1",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "lv_size": "21470642176",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "name": "ceph_lv1",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "tags": {
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.cluster_name": "ceph",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.crush_device_class": "",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.encrypted": "0",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.objectstore": "bluestore",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.osd_id": "1",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.type": "block",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.vdo": "0",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.with_tpm": "0"
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            },
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "type": "block",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "vg_name": "ceph_vg1"
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:        }
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:    ],
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:    "2": [
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:        {
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "devices": [
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "/dev/loop5"
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            ],
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "lv_name": "ceph_lv2",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "lv_size": "21470642176",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "name": "ceph_lv2",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "tags": {
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.cluster_name": "ceph",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.crush_device_class": "",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.encrypted": "0",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.objectstore": "bluestore",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.osd_id": "2",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.type": "block",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.vdo": "0",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:                "ceph.with_tpm": "0"
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            },
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "type": "block",
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:            "vg_name": "ceph_vg2"
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:        }
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]:    ]
Dec 13 02:32:34 np0005558241 nice_northcutt[91228]: }
Dec 13 02:32:34 np0005558241 systemd[1]: libpod-1b07fceb52a06470bf9904ed9daef7bdbd9e31906f123914ea8ced87efc97253.scope: Deactivated successfully.
Dec 13 02:32:34 np0005558241 podman[91212]: 2025-12-13 07:32:34.857925307 +0000 UTC m=+1.032271138 container died 1b07fceb52a06470bf9904ed9daef7bdbd9e31906f123914ea8ced87efc97253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_northcutt, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:35 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4a625d817a87ff0ab42e4bf6c4de80ba1c217380129dabb25d54d8e9ea1fa137-merged.mount: Deactivated successfully.
Dec 13 02:32:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:35 np0005558241 podman[91212]: 2025-12-13 07:32:35.37124509 +0000 UTC m=+1.545590941 container remove 1b07fceb52a06470bf9904ed9daef7bdbd9e31906f123914ea8ced87efc97253 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:32:35 np0005558241 systemd[1]: libpod-conmon-1b07fceb52a06470bf9904ed9daef7bdbd9e31906f123914ea8ced87efc97253.scope: Deactivated successfully.
Dec 13 02:32:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:36 np0005558241 podman[91314]: 2025-12-13 07:32:35.950961082 +0000 UTC m=+0.044097709 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:36 np0005558241 podman[91314]: 2025-12-13 07:32:36.308736556 +0000 UTC m=+0.401873133 container create 49d36886350d74f2d80b82f696f2479f6f952d56bd2ef97183b1f53dd9ce1618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 02:32:36 np0005558241 systemd[1]: Started libpod-conmon-49d36886350d74f2d80b82f696f2479f6f952d56bd2ef97183b1f53dd9ce1618.scope.
Dec 13 02:32:36 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:36 np0005558241 podman[91314]: 2025-12-13 07:32:36.411911219 +0000 UTC m=+0.505047836 container init 49d36886350d74f2d80b82f696f2479f6f952d56bd2ef97183b1f53dd9ce1618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:36 np0005558241 podman[91314]: 2025-12-13 07:32:36.423332426 +0000 UTC m=+0.516469003 container start 49d36886350d74f2d80b82f696f2479f6f952d56bd2ef97183b1f53dd9ce1618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:32:36 np0005558241 intelligent_ardinghelli[91331]: 167 167
Dec 13 02:32:36 np0005558241 podman[91314]: 2025-12-13 07:32:36.428275321 +0000 UTC m=+0.521411958 container attach 49d36886350d74f2d80b82f696f2479f6f952d56bd2ef97183b1f53dd9ce1618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ardinghelli, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:32:36 np0005558241 systemd[1]: libpod-49d36886350d74f2d80b82f696f2479f6f952d56bd2ef97183b1f53dd9ce1618.scope: Deactivated successfully.
Dec 13 02:32:36 np0005558241 podman[91314]: 2025-12-13 07:32:36.429282666 +0000 UTC m=+0.522419253 container died 49d36886350d74f2d80b82f696f2479f6f952d56bd2ef97183b1f53dd9ce1618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:32:36 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d9511b026300225f89f760ac67f2933c62f39a3f6459065de0d215a938849488-merged.mount: Deactivated successfully.
Dec 13 02:32:36 np0005558241 podman[91314]: 2025-12-13 07:32:36.943261996 +0000 UTC m=+1.036398583 container remove 49d36886350d74f2d80b82f696f2479f6f952d56bd2ef97183b1f53dd9ce1618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:32:37 np0005558241 systemd[1]: libpod-conmon-49d36886350d74f2d80b82f696f2479f6f952d56bd2ef97183b1f53dd9ce1618.scope: Deactivated successfully.
Dec 13 02:32:37 np0005558241 podman[91354]: 2025-12-13 07:32:37.200488232 +0000 UTC m=+0.117460564 container create 305baf2e543923003a7bf04afe9a55730a46ce57a302f115507eb9e6711263ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_robinson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:32:37 np0005558241 podman[91354]: 2025-12-13 07:32:37.124466341 +0000 UTC m=+0.041438723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:37 np0005558241 systemd[1]: Started libpod-conmon-305baf2e543923003a7bf04afe9a55730a46ce57a302f115507eb9e6711263ce.scope.
Dec 13 02:32:37 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4adf36bf834e292b7ad251b54e1cffc57e56904865fd57f48983ec92945ab113/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4adf36bf834e292b7ad251b54e1cffc57e56904865fd57f48983ec92945ab113/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4adf36bf834e292b7ad251b54e1cffc57e56904865fd57f48983ec92945ab113/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4adf36bf834e292b7ad251b54e1cffc57e56904865fd57f48983ec92945ab113/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:37 np0005558241 podman[91354]: 2025-12-13 07:32:37.316633791 +0000 UTC m=+0.233606153 container init 305baf2e543923003a7bf04afe9a55730a46ce57a302f115507eb9e6711263ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Dec 13 02:32:37 np0005558241 podman[91354]: 2025-12-13 07:32:37.328199712 +0000 UTC m=+0.245172044 container start 305baf2e543923003a7bf04afe9a55730a46ce57a302f115507eb9e6711263ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_robinson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 02:32:37 np0005558241 podman[91354]: 2025-12-13 07:32:37.362172446 +0000 UTC m=+0.279144798 container attach 305baf2e543923003a7bf04afe9a55730a46ce57a302f115507eb9e6711263ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:32:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:38 np0005558241 lvm[91450]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:32:38 np0005558241 lvm[91450]: VG ceph_vg1 finished
Dec 13 02:32:38 np0005558241 lvm[91449]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:32:38 np0005558241 lvm[91449]: VG ceph_vg0 finished
Dec 13 02:32:38 np0005558241 lvm[91452]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:32:38 np0005558241 lvm[91452]: VG ceph_vg2 finished
Dec 13 02:32:38 np0005558241 lvm[91454]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:32:38 np0005558241 lvm[91454]: VG ceph_vg2 finished
Dec 13 02:32:38 np0005558241 naughty_robinson[91371]: {}
Dec 13 02:32:38 np0005558241 systemd[1]: libpod-305baf2e543923003a7bf04afe9a55730a46ce57a302f115507eb9e6711263ce.scope: Deactivated successfully.
Dec 13 02:32:38 np0005558241 podman[91354]: 2025-12-13 07:32:38.206759336 +0000 UTC m=+1.123731668 container died 305baf2e543923003a7bf04afe9a55730a46ce57a302f115507eb9e6711263ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_robinson, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 02:32:38 np0005558241 systemd[1]: libpod-305baf2e543923003a7bf04afe9a55730a46ce57a302f115507eb9e6711263ce.scope: Consumed 1.398s CPU time.
Dec 13 02:32:38 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4adf36bf834e292b7ad251b54e1cffc57e56904865fd57f48983ec92945ab113-merged.mount: Deactivated successfully.
Dec 13 02:32:38 np0005558241 podman[91354]: 2025-12-13 07:32:38.589537797 +0000 UTC m=+1.506510139 container remove 305baf2e543923003a7bf04afe9a55730a46ce57a302f115507eb9e6711263ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:32:38 np0005558241 systemd[1]: libpod-conmon-305baf2e543923003a7bf04afe9a55730a46ce57a302f115507eb9e6711263ce.scope: Deactivated successfully.
Dec 13 02:32:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:32:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:32:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:32:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:32:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:32:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:32:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:32:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:32:40 np0005558241 podman[91588]: 2025-12-13 07:32:40.791050776 +0000 UTC m=+1.277082872 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Dec 13 02:32:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:41 np0005558241 podman[91588]: 2025-12-13 07:32:41.748606346 +0000 UTC m=+2.234638392 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:32:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:32:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:32:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:32:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:32:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:32:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:32:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:32:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:32:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:32:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:32:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:32:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:32:46 np0005558241 podman[91883]: 2025-12-13 07:32:46.155547062 +0000 UTC m=+0.038996552 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:46 np0005558241 podman[91883]: 2025-12-13 07:32:46.62137139 +0000 UTC m=+0.504820830 container create ab4bc8ad580c8a09a3a43f82b5284aeca679ee4347006e268aeab37359ce669b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tesla, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 02:32:46 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:32:46 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:46 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:32:46 np0005558241 systemd[1]: Started libpod-conmon-ab4bc8ad580c8a09a3a43f82b5284aeca679ee4347006e268aeab37359ce669b.scope.
Dec 13 02:32:46 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:46 np0005558241 podman[91883]: 2025-12-13 07:32:46.737733276 +0000 UTC m=+0.621182756 container init ab4bc8ad580c8a09a3a43f82b5284aeca679ee4347006e268aeab37359ce669b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tesla, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:32:46 np0005558241 podman[91883]: 2025-12-13 07:32:46.747730197 +0000 UTC m=+0.631179667 container start ab4bc8ad580c8a09a3a43f82b5284aeca679ee4347006e268aeab37359ce669b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tesla, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 02:32:46 np0005558241 podman[91883]: 2025-12-13 07:32:46.752187829 +0000 UTC m=+0.635637359 container attach ab4bc8ad580c8a09a3a43f82b5284aeca679ee4347006e268aeab37359ce669b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 02:32:46 np0005558241 jolly_tesla[91899]: 167 167
Dec 13 02:32:46 np0005558241 systemd[1]: libpod-ab4bc8ad580c8a09a3a43f82b5284aeca679ee4347006e268aeab37359ce669b.scope: Deactivated successfully.
Dec 13 02:32:46 np0005558241 podman[91883]: 2025-12-13 07:32:46.754295512 +0000 UTC m=+0.637744982 container died ab4bc8ad580c8a09a3a43f82b5284aeca679ee4347006e268aeab37359ce669b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tesla, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 02:32:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0f58482303629d2e7799ec4f376cd288100b1cf07480a5456864fec7e231bb0e-merged.mount: Deactivated successfully.
Dec 13 02:32:46 np0005558241 podman[91883]: 2025-12-13 07:32:46.806964506 +0000 UTC m=+0.690413966 container remove ab4bc8ad580c8a09a3a43f82b5284aeca679ee4347006e268aeab37359ce669b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:46 np0005558241 systemd[1]: libpod-conmon-ab4bc8ad580c8a09a3a43f82b5284aeca679ee4347006e268aeab37359ce669b.scope: Deactivated successfully.
Dec 13 02:32:46 np0005558241 podman[91924]: 2025-12-13 07:32:46.98973581 +0000 UTC m=+0.058208464 container create 455ac45e32dd9282d8c695e7ba0f982d4443529aa052f71bf1d4d06f7849d75f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 02:32:47 np0005558241 systemd[1]: Started libpod-conmon-455ac45e32dd9282d8c695e7ba0f982d4443529aa052f71bf1d4d06f7849d75f.scope.
Dec 13 02:32:47 np0005558241 podman[91924]: 2025-12-13 07:32:46.966743872 +0000 UTC m=+0.035216546 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc9a06cd01b47a80f9c627e43f6fc001ba3a95cbe8f276af68b138551922f8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc9a06cd01b47a80f9c627e43f6fc001ba3a95cbe8f276af68b138551922f8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc9a06cd01b47a80f9c627e43f6fc001ba3a95cbe8f276af68b138551922f8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc9a06cd01b47a80f9c627e43f6fc001ba3a95cbe8f276af68b138551922f8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acc9a06cd01b47a80f9c627e43f6fc001ba3a95cbe8f276af68b138551922f8a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:47 np0005558241 podman[91924]: 2025-12-13 07:32:47.096819262 +0000 UTC m=+0.165291966 container init 455ac45e32dd9282d8c695e7ba0f982d4443529aa052f71bf1d4d06f7849d75f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:32:47 np0005558241 podman[91924]: 2025-12-13 07:32:47.111191963 +0000 UTC m=+0.179664627 container start 455ac45e32dd9282d8c695e7ba0f982d4443529aa052f71bf1d4d06f7849d75f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ramanujan, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:32:47 np0005558241 podman[91924]: 2025-12-13 07:32:47.116139078 +0000 UTC m=+0.184611732 container attach 455ac45e32dd9282d8c695e7ba0f982d4443529aa052f71bf1d4d06f7849d75f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ramanujan, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:32:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:47 np0005558241 dazzling_ramanujan[91941]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:32:47 np0005558241 dazzling_ramanujan[91941]: --> All data devices are unavailable
Dec 13 02:32:47 np0005558241 systemd[1]: libpod-455ac45e32dd9282d8c695e7ba0f982d4443529aa052f71bf1d4d06f7849d75f.scope: Deactivated successfully.
Dec 13 02:32:47 np0005558241 podman[91924]: 2025-12-13 07:32:47.724009928 +0000 UTC m=+0.792482582 container died 455ac45e32dd9282d8c695e7ba0f982d4443529aa052f71bf1d4d06f7849d75f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:32:47 np0005558241 systemd[1]: var-lib-containers-storage-overlay-acc9a06cd01b47a80f9c627e43f6fc001ba3a95cbe8f276af68b138551922f8a-merged.mount: Deactivated successfully.
Dec 13 02:32:47 np0005558241 podman[91924]: 2025-12-13 07:32:47.790583571 +0000 UTC m=+0.859056195 container remove 455ac45e32dd9282d8c695e7ba0f982d4443529aa052f71bf1d4d06f7849d75f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:32:47 np0005558241 systemd[1]: libpod-conmon-455ac45e32dd9282d8c695e7ba0f982d4443529aa052f71bf1d4d06f7849d75f.scope: Deactivated successfully.
Dec 13 02:32:48 np0005558241 python3[92025]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:32:48 np0005558241 podman[92052]: 2025-12-13 07:32:48.220503048 +0000 UTC m=+0.079334315 container create 8746857e965bb1a035d4cce8f3c3d6270719c45cf617d9b0bbf3d46eee8988d1 (image=quay.io/ceph/ceph:v20, name=pensive_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:32:48 np0005558241 systemd[1]: Started libpod-conmon-8746857e965bb1a035d4cce8f3c3d6270719c45cf617d9b0bbf3d46eee8988d1.scope.
Dec 13 02:32:48 np0005558241 podman[92052]: 2025-12-13 07:32:48.182861482 +0000 UTC m=+0.041692809 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:32:48 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c7ef7b896b72c61338576c6189e14d3fd5061af900d02879fb2f65ec174fc3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c7ef7b896b72c61338576c6189e14d3fd5061af900d02879fb2f65ec174fc3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c7ef7b896b72c61338576c6189e14d3fd5061af900d02879fb2f65ec174fc3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:48 np0005558241 podman[92052]: 2025-12-13 07:32:48.311785023 +0000 UTC m=+0.170616290 container init 8746857e965bb1a035d4cce8f3c3d6270719c45cf617d9b0bbf3d46eee8988d1 (image=quay.io/ceph/ceph:v20, name=pensive_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:48 np0005558241 podman[92052]: 2025-12-13 07:32:48.320994974 +0000 UTC m=+0.179826201 container start 8746857e965bb1a035d4cce8f3c3d6270719c45cf617d9b0bbf3d46eee8988d1 (image=quay.io/ceph/ceph:v20, name=pensive_morse, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 02:32:48 np0005558241 podman[92052]: 2025-12-13 07:32:48.32522846 +0000 UTC m=+0.184059727 container attach 8746857e965bb1a035d4cce8f3c3d6270719c45cf617d9b0bbf3d46eee8988d1 (image=quay.io/ceph/ceph:v20, name=pensive_morse, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:32:48 np0005558241 podman[92083]: 2025-12-13 07:32:48.396238885 +0000 UTC m=+0.062976524 container create 6654c2c0439c31aac005273d8533ec3286f919bc2d1074fa83a1da13f1ad4366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:32:48 np0005558241 systemd[1]: Started libpod-conmon-6654c2c0439c31aac005273d8533ec3286f919bc2d1074fa83a1da13f1ad4366.scope.
Dec 13 02:32:48 np0005558241 podman[92083]: 2025-12-13 07:32:48.368153089 +0000 UTC m=+0.034890738 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:48 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:48 np0005558241 podman[92083]: 2025-12-13 07:32:48.484764911 +0000 UTC m=+0.151502610 container init 6654c2c0439c31aac005273d8533ec3286f919bc2d1074fa83a1da13f1ad4366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:32:48 np0005558241 podman[92083]: 2025-12-13 07:32:48.494161507 +0000 UTC m=+0.160899136 container start 6654c2c0439c31aac005273d8533ec3286f919bc2d1074fa83a1da13f1ad4366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_visvesvaraya, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:32:48 np0005558241 podman[92083]: 2025-12-13 07:32:48.498192418 +0000 UTC m=+0.164930047 container attach 6654c2c0439c31aac005273d8533ec3286f919bc2d1074fa83a1da13f1ad4366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_visvesvaraya, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 02:32:48 np0005558241 peaceful_visvesvaraya[92102]: 167 167
Dec 13 02:32:48 np0005558241 systemd[1]: libpod-6654c2c0439c31aac005273d8533ec3286f919bc2d1074fa83a1da13f1ad4366.scope: Deactivated successfully.
Dec 13 02:32:48 np0005558241 conmon[92102]: conmon 6654c2c0439c31aac005 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6654c2c0439c31aac005273d8533ec3286f919bc2d1074fa83a1da13f1ad4366.scope/container/memory.events
Dec 13 02:32:48 np0005558241 podman[92083]: 2025-12-13 07:32:48.500895496 +0000 UTC m=+0.167633135 container died 6654c2c0439c31aac005273d8533ec3286f919bc2d1074fa83a1da13f1ad4366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:32:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d5d8a313ffc21b1f89f0b220e8b251073bda4a99ee9732ed7847168a360e87a3-merged.mount: Deactivated successfully.
Dec 13 02:32:48 np0005558241 podman[92083]: 2025-12-13 07:32:48.552468102 +0000 UTC m=+0.219205721 container remove 6654c2c0439c31aac005273d8533ec3286f919bc2d1074fa83a1da13f1ad4366 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 02:32:48 np0005558241 systemd[1]: libpod-conmon-6654c2c0439c31aac005273d8533ec3286f919bc2d1074fa83a1da13f1ad4366.scope: Deactivated successfully.
Dec 13 02:32:48 np0005558241 podman[92143]: 2025-12-13 07:32:48.776444582 +0000 UTC m=+0.055645549 container create db00a2df3be581a3b6c1f7b59e7a463e2e98a1604e8ab7792ff2205258c60083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ride, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:48 np0005558241 systemd[1]: Started libpod-conmon-db00a2df3be581a3b6c1f7b59e7a463e2e98a1604e8ab7792ff2205258c60083.scope.
Dec 13 02:32:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 13 02:32:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4063785381' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 02:32:48 np0005558241 pensive_morse[92071]: 
Dec 13 02:32:48 np0005558241 pensive_morse[92071]: {"fsid":"18ee9de6-e00b-571b-ab9b-b7aab06174df","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":119,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":18,"num_osds":3,"num_up_osds":3,"osd_up_since":1765611153,"num_in_osds":3,"osd_in_since":1765611114,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":83406848,"bytes_avail":64328519680,"bytes_total":64411926528},"fsmap":{"epoch":1,"btime":"2025-12-13T07:30:46:025708+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-13T07:32:11.360598+0000","services":{}},"progress_events":{}}
Dec 13 02:32:48 np0005558241 podman[92143]: 2025-12-13 07:32:48.757501976 +0000 UTC m=+0.036702963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:48 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a57533cd97965eb3ba66e99da143cb19f99b8f244a82d53fb6ccea56766f75d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a57533cd97965eb3ba66e99da143cb19f99b8f244a82d53fb6ccea56766f75d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a57533cd97965eb3ba66e99da143cb19f99b8f244a82d53fb6ccea56766f75d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a57533cd97965eb3ba66e99da143cb19f99b8f244a82d53fb6ccea56766f75d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:48 np0005558241 systemd[1]: libpod-8746857e965bb1a035d4cce8f3c3d6270719c45cf617d9b0bbf3d46eee8988d1.scope: Deactivated successfully.
Dec 13 02:32:48 np0005558241 podman[92052]: 2025-12-13 07:32:48.866031064 +0000 UTC m=+0.724862331 container died 8746857e965bb1a035d4cce8f3c3d6270719c45cf617d9b0bbf3d46eee8988d1 (image=quay.io/ceph/ceph:v20, name=pensive_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:32:48 np0005558241 podman[92143]: 2025-12-13 07:32:48.884401396 +0000 UTC m=+0.163602443 container init db00a2df3be581a3b6c1f7b59e7a463e2e98a1604e8ab7792ff2205258c60083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ride, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:48 np0005558241 podman[92143]: 2025-12-13 07:32:48.897595978 +0000 UTC m=+0.176796975 container start db00a2df3be581a3b6c1f7b59e7a463e2e98a1604e8ab7792ff2205258c60083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ride, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 02:32:48 np0005558241 podman[92143]: 2025-12-13 07:32:48.902225864 +0000 UTC m=+0.181426911 container attach db00a2df3be581a3b6c1f7b59e7a463e2e98a1604e8ab7792ff2205258c60083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 02:32:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-31c7ef7b896b72c61338576c6189e14d3fd5061af900d02879fb2f65ec174fc3-merged.mount: Deactivated successfully.
Dec 13 02:32:48 np0005558241 podman[92052]: 2025-12-13 07:32:48.929732315 +0000 UTC m=+0.788563582 container remove 8746857e965bb1a035d4cce8f3c3d6270719c45cf617d9b0bbf3d46eee8988d1 (image=quay.io/ceph/ceph:v20, name=pensive_morse, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 02:32:48 np0005558241 systemd[1]: libpod-conmon-8746857e965bb1a035d4cce8f3c3d6270719c45cf617d9b0bbf3d46eee8988d1.scope: Deactivated successfully.
Dec 13 02:32:49 np0005558241 serene_ride[92160]: {
Dec 13 02:32:49 np0005558241 serene_ride[92160]:    "0": [
Dec 13 02:32:49 np0005558241 serene_ride[92160]:        {
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "devices": [
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "/dev/loop3"
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            ],
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "lv_name": "ceph_lv0",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "lv_size": "21470642176",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "name": "ceph_lv0",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "tags": {
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.cluster_name": "ceph",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.crush_device_class": "",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.encrypted": "0",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.objectstore": "bluestore",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.osd_id": "0",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.type": "block",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.vdo": "0",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.with_tpm": "0"
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            },
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "type": "block",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "vg_name": "ceph_vg0"
Dec 13 02:32:49 np0005558241 serene_ride[92160]:        }
Dec 13 02:32:49 np0005558241 serene_ride[92160]:    ],
Dec 13 02:32:49 np0005558241 serene_ride[92160]:    "1": [
Dec 13 02:32:49 np0005558241 serene_ride[92160]:        {
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "devices": [
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "/dev/loop4"
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            ],
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "lv_name": "ceph_lv1",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "lv_size": "21470642176",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "name": "ceph_lv1",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "tags": {
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.cluster_name": "ceph",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.crush_device_class": "",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.encrypted": "0",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.objectstore": "bluestore",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.osd_id": "1",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.type": "block",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.vdo": "0",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.with_tpm": "0"
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            },
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "type": "block",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "vg_name": "ceph_vg1"
Dec 13 02:32:49 np0005558241 serene_ride[92160]:        }
Dec 13 02:32:49 np0005558241 serene_ride[92160]:    ],
Dec 13 02:32:49 np0005558241 serene_ride[92160]:    "2": [
Dec 13 02:32:49 np0005558241 serene_ride[92160]:        {
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "devices": [
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "/dev/loop5"
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            ],
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "lv_name": "ceph_lv2",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "lv_size": "21470642176",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "name": "ceph_lv2",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "tags": {
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.cluster_name": "ceph",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.crush_device_class": "",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.encrypted": "0",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.objectstore": "bluestore",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.osd_id": "2",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.type": "block",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.vdo": "0",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:                "ceph.with_tpm": "0"
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            },
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "type": "block",
Dec 13 02:32:49 np0005558241 serene_ride[92160]:            "vg_name": "ceph_vg2"
Dec 13 02:32:49 np0005558241 serene_ride[92160]:        }
Dec 13 02:32:49 np0005558241 serene_ride[92160]:    ]
Dec 13 02:32:49 np0005558241 serene_ride[92160]: }
Dec 13 02:32:49 np0005558241 systemd[1]: libpod-db00a2df3be581a3b6c1f7b59e7a463e2e98a1604e8ab7792ff2205258c60083.scope: Deactivated successfully.
Dec 13 02:32:49 np0005558241 podman[92143]: 2025-12-13 07:32:49.249635146 +0000 UTC m=+0.528836123 container died db00a2df3be581a3b6c1f7b59e7a463e2e98a1604e8ab7792ff2205258c60083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ride, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:49 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0a57533cd97965eb3ba66e99da143cb19f99b8f244a82d53fb6ccea56766f75d-merged.mount: Deactivated successfully.
Dec 13 02:32:49 np0005558241 podman[92143]: 2025-12-13 07:32:49.312522906 +0000 UTC m=+0.591723903 container remove db00a2df3be581a3b6c1f7b59e7a463e2e98a1604e8ab7792ff2205258c60083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ride, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 02:32:49 np0005558241 systemd[1]: libpod-conmon-db00a2df3be581a3b6c1f7b59e7a463e2e98a1604e8ab7792ff2205258c60083.scope: Deactivated successfully.
Dec 13 02:32:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:49 np0005558241 python3[92220]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:32:49 np0005558241 podman[92267]: 2025-12-13 07:32:49.583936799 +0000 UTC m=+0.048292825 container create 26404542d2dc62b99ca060a400c14e4b7849a965e1d8abad8e77d5bdde341ca8 (image=quay.io/ceph/ceph:v20, name=quizzical_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 02:32:49 np0005558241 systemd[1]: Started libpod-conmon-26404542d2dc62b99ca060a400c14e4b7849a965e1d8abad8e77d5bdde341ca8.scope.
Dec 13 02:32:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6661f2c6f347d27d4c03f4350bb046f295c108bba894d16c0aa1a3944c91cc0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6661f2c6f347d27d4c03f4350bb046f295c108bba894d16c0aa1a3944c91cc0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:49 np0005558241 podman[92267]: 2025-12-13 07:32:49.569139737 +0000 UTC m=+0.033495773 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:32:49 np0005558241 podman[92267]: 2025-12-13 07:32:49.679694966 +0000 UTC m=+0.144051022 container init 26404542d2dc62b99ca060a400c14e4b7849a965e1d8abad8e77d5bdde341ca8 (image=quay.io/ceph/ceph:v20, name=quizzical_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 02:32:49 np0005558241 podman[92267]: 2025-12-13 07:32:49.693819231 +0000 UTC m=+0.158175257 container start 26404542d2dc62b99ca060a400c14e4b7849a965e1d8abad8e77d5bdde341ca8 (image=quay.io/ceph/ceph:v20, name=quizzical_visvesvaraya, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 02:32:49 np0005558241 podman[92267]: 2025-12-13 07:32:49.697580946 +0000 UTC m=+0.161936972 container attach 26404542d2dc62b99ca060a400c14e4b7849a965e1d8abad8e77d5bdde341ca8 (image=quay.io/ceph/ceph:v20, name=quizzical_visvesvaraya, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 02:32:49 np0005558241 podman[92320]: 2025-12-13 07:32:49.976180869 +0000 UTC m=+0.111854293 container create 3cfeb52b1b6d2bfe7edaa06e6fecd354d7945a5c4d005ad46c73a6af7ab54f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_feynman, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 02:32:49 np0005558241 podman[92320]: 2025-12-13 07:32:49.895388228 +0000 UTC m=+0.031061662 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:50 np0005558241 systemd[1]: Started libpod-conmon-3cfeb52b1b6d2bfe7edaa06e6fecd354d7945a5c4d005ad46c73a6af7ab54f55.scope.
Dec 13 02:32:50 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:50 np0005558241 podman[92320]: 2025-12-13 07:32:50.104880764 +0000 UTC m=+0.240554238 container init 3cfeb52b1b6d2bfe7edaa06e6fecd354d7945a5c4d005ad46c73a6af7ab54f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_feynman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:32:50 np0005558241 podman[92320]: 2025-12-13 07:32:50.114870085 +0000 UTC m=+0.250543519 container start 3cfeb52b1b6d2bfe7edaa06e6fecd354d7945a5c4d005ad46c73a6af7ab54f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_feynman, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:32:50 np0005558241 agitated_feynman[92336]: 167 167
Dec 13 02:32:50 np0005558241 systemd[1]: libpod-3cfeb52b1b6d2bfe7edaa06e6fecd354d7945a5c4d005ad46c73a6af7ab54f55.scope: Deactivated successfully.
Dec 13 02:32:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 02:32:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2391058913' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:32:50 np0005558241 podman[92320]: 2025-12-13 07:32:50.158694757 +0000 UTC m=+0.294368231 container attach 3cfeb52b1b6d2bfe7edaa06e6fecd354d7945a5c4d005ad46c73a6af7ab54f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_feynman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 02:32:50 np0005558241 podman[92320]: 2025-12-13 07:32:50.159057556 +0000 UTC m=+0.294730990 container died 3cfeb52b1b6d2bfe7edaa06e6fecd354d7945a5c4d005ad46c73a6af7ab54f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 02:32:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-00a3a0955fe7fc1550a878d4487ed8d792f7406aadc4abf4a4ed2c7ee2a0e46d-merged.mount: Deactivated successfully.
Dec 13 02:32:50 np0005558241 podman[92320]: 2025-12-13 07:32:50.224137272 +0000 UTC m=+0.359810696 container remove 3cfeb52b1b6d2bfe7edaa06e6fecd354d7945a5c4d005ad46c73a6af7ab54f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_feynman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:32:50 np0005558241 systemd[1]: libpod-conmon-3cfeb52b1b6d2bfe7edaa06e6fecd354d7945a5c4d005ad46c73a6af7ab54f55.scope: Deactivated successfully.
Dec 13 02:32:50 np0005558241 podman[92363]: 2025-12-13 07:32:50.46234303 +0000 UTC m=+0.064043741 container create b45c0883de111f7020dd41f9768513b785916500372bcb96294b8abe2730ee35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 02:32:50 np0005558241 systemd[1]: Started libpod-conmon-b45c0883de111f7020dd41f9768513b785916500372bcb96294b8abe2730ee35.scope.
Dec 13 02:32:50 np0005558241 podman[92363]: 2025-12-13 07:32:50.436613813 +0000 UTC m=+0.038314554 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:32:50 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a668e4b0ec5cec5d877ee67f6630910667086ab6a018cc688d957b5797ba7d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a668e4b0ec5cec5d877ee67f6630910667086ab6a018cc688d957b5797ba7d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a668e4b0ec5cec5d877ee67f6630910667086ab6a018cc688d957b5797ba7d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a668e4b0ec5cec5d877ee67f6630910667086ab6a018cc688d957b5797ba7d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:50 np0005558241 podman[92363]: 2025-12-13 07:32:50.581827383 +0000 UTC m=+0.183528094 container init b45c0883de111f7020dd41f9768513b785916500372bcb96294b8abe2730ee35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_grothendieck, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:32:50 np0005558241 podman[92363]: 2025-12-13 07:32:50.60240914 +0000 UTC m=+0.204109841 container start b45c0883de111f7020dd41f9768513b785916500372bcb96294b8abe2730ee35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 02:32:50 np0005558241 podman[92363]: 2025-12-13 07:32:50.614524455 +0000 UTC m=+0.216225216 container attach b45c0883de111f7020dd41f9768513b785916500372bcb96294b8abe2730ee35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 02:32:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec 13 02:32:50 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/2391058913' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:32:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2391058913' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:32:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Dec 13 02:32:50 np0005558241 quizzical_visvesvaraya[92285]: pool 'vms' created
Dec 13 02:32:50 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Dec 13 02:32:50 np0005558241 systemd[1]: libpod-26404542d2dc62b99ca060a400c14e4b7849a965e1d8abad8e77d5bdde341ca8.scope: Deactivated successfully.
Dec 13 02:32:50 np0005558241 podman[92267]: 2025-12-13 07:32:50.713678317 +0000 UTC m=+1.178034353 container died 26404542d2dc62b99ca060a400c14e4b7849a965e1d8abad8e77d5bdde341ca8 (image=quay.io/ceph/ceph:v20, name=quizzical_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:32:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d6661f2c6f347d27d4c03f4350bb046f295c108bba894d16c0aa1a3944c91cc0-merged.mount: Deactivated successfully.
Dec 13 02:32:50 np0005558241 podman[92267]: 2025-12-13 07:32:50.785090692 +0000 UTC m=+1.249446728 container remove 26404542d2dc62b99ca060a400c14e4b7849a965e1d8abad8e77d5bdde341ca8 (image=quay.io/ceph/ceph:v20, name=quizzical_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:32:50 np0005558241 systemd[1]: libpod-conmon-26404542d2dc62b99ca060a400c14e4b7849a965e1d8abad8e77d5bdde341ca8.scope: Deactivated successfully.
Dec 13 02:32:51 np0005558241 python3[92431]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:32:51 np0005558241 podman[92460]: 2025-12-13 07:32:51.147764359 +0000 UTC m=+0.021293956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:32:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v61: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:51 np0005558241 lvm[92506]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:32:51 np0005558241 lvm[92509]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:32:51 np0005558241 lvm[92509]: VG ceph_vg2 finished
Dec 13 02:32:51 np0005558241 lvm[92506]: VG ceph_vg0 finished
Dec 13 02:32:51 np0005558241 lvm[92508]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:32:51 np0005558241 lvm[92508]: VG ceph_vg1 finished
Dec 13 02:32:51 np0005558241 serene_grothendieck[92379]: {}
Dec 13 02:32:51 np0005558241 podman[92460]: 2025-12-13 07:32:51.492759101 +0000 UTC m=+0.366288708 container create 74fd704b48730581ab044817a5540b7093b60fb2a69572f56da756f9e3cbb560 (image=quay.io/ceph/ceph:v20, name=dreamy_bhabha, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:32:51 np0005558241 systemd[1]: libpod-b45c0883de111f7020dd41f9768513b785916500372bcb96294b8abe2730ee35.scope: Deactivated successfully.
Dec 13 02:32:51 np0005558241 systemd[1]: libpod-b45c0883de111f7020dd41f9768513b785916500372bcb96294b8abe2730ee35.scope: Consumed 1.393s CPU time.
Dec 13 02:32:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:51 np0005558241 systemd[1]: Started libpod-conmon-74fd704b48730581ab044817a5540b7093b60fb2a69572f56da756f9e3cbb560.scope.
Dec 13 02:32:51 np0005558241 podman[92363]: 2025-12-13 07:32:51.942550517 +0000 UTC m=+1.544251218 container died b45c0883de111f7020dd41f9768513b785916500372bcb96294b8abe2730ee35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_grothendieck, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:32:51 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce7824ca30529c95e73d0e762e375919e9d341603f1750078ca7719bf0f022ff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce7824ca30529c95e73d0e762e375919e9d341603f1750078ca7719bf0f022ff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:52 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:32:52 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/2391058913' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:32:52 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8a668e4b0ec5cec5d877ee67f6630910667086ab6a018cc688d957b5797ba7d1-merged.mount: Deactivated successfully.
Dec 13 02:32:52 np0005558241 podman[92512]: 2025-12-13 07:32:52.826512887 +0000 UTC m=+1.289265519 container remove b45c0883de111f7020dd41f9768513b785916500372bcb96294b8abe2730ee35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 02:32:52 np0005558241 systemd[1]: libpod-conmon-b45c0883de111f7020dd41f9768513b785916500372bcb96294b8abe2730ee35.scope: Deactivated successfully.
Dec 13 02:32:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:32:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:32:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:53 np0005558241 podman[92460]: 2025-12-13 07:32:53.242510464 +0000 UTC m=+2.116040121 container init 74fd704b48730581ab044817a5540b7093b60fb2a69572f56da756f9e3cbb560 (image=quay.io/ceph/ceph:v20, name=dreamy_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec 13 02:32:53 np0005558241 podman[92460]: 2025-12-13 07:32:53.254208048 +0000 UTC m=+2.127737645 container start 74fd704b48730581ab044817a5540b7093b60fb2a69572f56da756f9e3cbb560 (image=quay.io/ceph/ceph:v20, name=dreamy_bhabha, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:32:53 np0005558241 podman[92460]: 2025-12-13 07:32:53.258948957 +0000 UTC m=+2.132478624 container attach 74fd704b48730581ab044817a5540b7093b60fb2a69572f56da756f9e3cbb560 (image=quay.io/ceph/ceph:v20, name=dreamy_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:32:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v62: 2 pgs: 1 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec 13 02:32:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Dec 13 02:32:53 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Dec 13 02:32:53 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:32:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 02:32:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2925833257' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:32:54 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:54 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:32:54 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/2925833257' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:32:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec 13 02:32:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2925833257' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:32:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Dec 13 02:32:54 np0005558241 dreamy_bhabha[92529]: pool 'volumes' created
Dec 13 02:32:54 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Dec 13 02:32:54 np0005558241 systemd[1]: libpod-74fd704b48730581ab044817a5540b7093b60fb2a69572f56da756f9e3cbb560.scope: Deactivated successfully.
Dec 13 02:32:54 np0005558241 podman[92460]: 2025-12-13 07:32:54.912683997 +0000 UTC m=+3.786213604 container died 74fd704b48730581ab044817a5540b7093b60fb2a69572f56da756f9e3cbb560 (image=quay.io/ceph/ceph:v20, name=dreamy_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 02:32:54 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ce7824ca30529c95e73d0e762e375919e9d341603f1750078ca7719bf0f022ff-merged.mount: Deactivated successfully.
Dec 13 02:32:54 np0005558241 podman[92460]: 2025-12-13 07:32:54.970058129 +0000 UTC m=+3.843587736 container remove 74fd704b48730581ab044817a5540b7093b60fb2a69572f56da756f9e3cbb560 (image=quay.io/ceph/ceph:v20, name=dreamy_bhabha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:32:54 np0005558241 systemd[1]: libpod-conmon-74fd704b48730581ab044817a5540b7093b60fb2a69572f56da756f9e3cbb560.scope: Deactivated successfully.
Dec 13 02:32:55 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/2925833257' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:32:55 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:32:55 np0005558241 python3[92620]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:32:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v65: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:55 np0005558241 podman[92621]: 2025-12-13 07:32:55.460119408 +0000 UTC m=+0.084690240 container create 2edef8112af3cae7e93c7774ad08ff67057b792d0c714c90de2c44a506a57971 (image=quay.io/ceph/ceph:v20, name=gracious_knuth, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:32:55 np0005558241 systemd[1]: Started libpod-conmon-2edef8112af3cae7e93c7774ad08ff67057b792d0c714c90de2c44a506a57971.scope.
Dec 13 02:32:55 np0005558241 podman[92621]: 2025-12-13 07:32:55.428166384 +0000 UTC m=+0.052737326 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:32:55 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/591332fa36d2234950ec220da0eeeaacc54a4ae4180933395571f855961147e3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/591332fa36d2234950ec220da0eeeaacc54a4ae4180933395571f855961147e3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:55 np0005558241 podman[92621]: 2025-12-13 07:32:55.591688185 +0000 UTC m=+0.216259067 container init 2edef8112af3cae7e93c7774ad08ff67057b792d0c714c90de2c44a506a57971 (image=quay.io/ceph/ceph:v20, name=gracious_knuth, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:32:55 np0005558241 podman[92621]: 2025-12-13 07:32:55.603177164 +0000 UTC m=+0.227748006 container start 2edef8112af3cae7e93c7774ad08ff67057b792d0c714c90de2c44a506a57971 (image=quay.io/ceph/ceph:v20, name=gracious_knuth, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 02:32:55 np0005558241 podman[92621]: 2025-12-13 07:32:55.607853671 +0000 UTC m=+0.232424513 container attach 2edef8112af3cae7e93c7774ad08ff67057b792d0c714c90de2c44a506a57971 (image=quay.io/ceph/ceph:v20, name=gracious_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 02:32:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 02:32:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3646306472' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:32:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec 13 02:32:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3646306472' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:32:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Dec 13 02:32:56 np0005558241 gracious_knuth[92636]: pool 'backups' created
Dec 13 02:32:56 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3646306472' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:32:56 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 22 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:32:56 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Dec 13 02:32:56 np0005558241 systemd[1]: libpod-2edef8112af3cae7e93c7774ad08ff67057b792d0c714c90de2c44a506a57971.scope: Deactivated successfully.
Dec 13 02:32:56 np0005558241 podman[92621]: 2025-12-13 07:32:56.344505347 +0000 UTC m=+0.969076179 container died 2edef8112af3cae7e93c7774ad08ff67057b792d0c714c90de2c44a506a57971 (image=quay.io/ceph/ceph:v20, name=gracious_knuth, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 02:32:56 np0005558241 systemd[1]: var-lib-containers-storage-overlay-591332fa36d2234950ec220da0eeeaacc54a4ae4180933395571f855961147e3-merged.mount: Deactivated successfully.
Dec 13 02:32:56 np0005558241 podman[92621]: 2025-12-13 07:32:56.388850032 +0000 UTC m=+1.013420854 container remove 2edef8112af3cae7e93c7774ad08ff67057b792d0c714c90de2c44a506a57971 (image=quay.io/ceph/ceph:v20, name=gracious_knuth, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 02:32:56 np0005558241 systemd[1]: libpod-conmon-2edef8112af3cae7e93c7774ad08ff67057b792d0c714c90de2c44a506a57971.scope: Deactivated successfully.
Dec 13 02:32:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:32:56 np0005558241 python3[92700]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:32:56 np0005558241 podman[92701]: 2025-12-13 07:32:56.848317242 +0000 UTC m=+0.069736154 container create 0487fc8caaa5661d51583b2860bb79cc1bef7e97b810b9782b7b582e4fad7206 (image=quay.io/ceph/ceph:v20, name=tender_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 02:32:56 np0005558241 systemd[1]: Started libpod-conmon-0487fc8caaa5661d51583b2860bb79cc1bef7e97b810b9782b7b582e4fad7206.scope.
Dec 13 02:32:56 np0005558241 podman[92701]: 2025-12-13 07:32:56.818855561 +0000 UTC m=+0.040274513 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:32:56 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5679db2b479eb929992f7776db335e981fc273eda88f19ad3f18d37f788b9a0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5679db2b479eb929992f7776db335e981fc273eda88f19ad3f18d37f788b9a0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:56 np0005558241 podman[92701]: 2025-12-13 07:32:56.942574681 +0000 UTC m=+0.163993573 container init 0487fc8caaa5661d51583b2860bb79cc1bef7e97b810b9782b7b582e4fad7206 (image=quay.io/ceph/ceph:v20, name=tender_gagarin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 02:32:56 np0005558241 podman[92701]: 2025-12-13 07:32:56.951793213 +0000 UTC m=+0.173212085 container start 0487fc8caaa5661d51583b2860bb79cc1bef7e97b810b9782b7b582e4fad7206 (image=quay.io/ceph/ceph:v20, name=tender_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:32:56 np0005558241 podman[92701]: 2025-12-13 07:32:56.955160567 +0000 UTC m=+0.176579469 container attach 0487fc8caaa5661d51583b2860bb79cc1bef7e97b810b9782b7b582e4fad7206 (image=quay.io/ceph/ceph:v20, name=tender_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:32:57 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:32:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec 13 02:32:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Dec 13 02:32:57 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Dec 13 02:32:57 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3646306472' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:32:57 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:32:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v68: 4 pgs: 2 unknown, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 02:32:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4093842433' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:32:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec 13 02:32:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4093842433' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:32:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Dec 13 02:32:58 np0005558241 tender_gagarin[92716]: pool 'images' created
Dec 13 02:32:58 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/4093842433' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:32:58 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Dec 13 02:32:58 np0005558241 systemd[1]: libpod-0487fc8caaa5661d51583b2860bb79cc1bef7e97b810b9782b7b582e4fad7206.scope: Deactivated successfully.
Dec 13 02:32:58 np0005558241 podman[92701]: 2025-12-13 07:32:58.369328845 +0000 UTC m=+1.590747787 container died 0487fc8caaa5661d51583b2860bb79cc1bef7e97b810b9782b7b582e4fad7206 (image=quay.io/ceph/ceph:v20, name=tender_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:32:58 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e5679db2b479eb929992f7776db335e981fc273eda88f19ad3f18d37f788b9a0-merged.mount: Deactivated successfully.
Dec 13 02:32:58 np0005558241 podman[92701]: 2025-12-13 07:32:58.449139421 +0000 UTC m=+1.670558303 container remove 0487fc8caaa5661d51583b2860bb79cc1bef7e97b810b9782b7b582e4fad7206 (image=quay.io/ceph/ceph:v20, name=tender_gagarin, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:32:58 np0005558241 systemd[1]: libpod-conmon-0487fc8caaa5661d51583b2860bb79cc1bef7e97b810b9782b7b582e4fad7206.scope: Deactivated successfully.
Dec 13 02:32:58 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:32:58 np0005558241 python3[92781]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:32:58 np0005558241 podman[92782]: 2025-12-13 07:32:58.877604112 +0000 UTC m=+0.061495277 container create fd2fed37a82d6c3481710ddafa6343d123bcf54cb25d2e822215d28f4a4715a3 (image=quay.io/ceph/ceph:v20, name=sweet_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec 13 02:32:58 np0005558241 systemd[1]: Started libpod-conmon-fd2fed37a82d6c3481710ddafa6343d123bcf54cb25d2e822215d28f4a4715a3.scope.
Dec 13 02:32:58 np0005558241 podman[92782]: 2025-12-13 07:32:58.8445078 +0000 UTC m=+0.028399015 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:32:58 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:32:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/834f218bd14563536505a1f5e4421cf9f9d974e8d0b257f9ea9fff0b4b4575cc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/834f218bd14563536505a1f5e4421cf9f9d974e8d0b257f9ea9fff0b4b4575cc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:32:58 np0005558241 podman[92782]: 2025-12-13 07:32:58.967589904 +0000 UTC m=+0.151481099 container init fd2fed37a82d6c3481710ddafa6343d123bcf54cb25d2e822215d28f4a4715a3 (image=quay.io/ceph/ceph:v20, name=sweet_borg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 02:32:58 np0005558241 podman[92782]: 2025-12-13 07:32:58.974438166 +0000 UTC m=+0.158329331 container start fd2fed37a82d6c3481710ddafa6343d123bcf54cb25d2e822215d28f4a4715a3 (image=quay.io/ceph/ceph:v20, name=sweet_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:32:58 np0005558241 podman[92782]: 2025-12-13 07:32:58.979225096 +0000 UTC m=+0.163116301 container attach fd2fed37a82d6c3481710ddafa6343d123bcf54cb25d2e822215d28f4a4715a3 (image=quay.io/ceph/ceph:v20, name=sweet_borg, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:32:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec 13 02:32:59 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/4093842433' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:32:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Dec 13 02:32:59 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Dec 13 02:32:59 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:32:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v71: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:32:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 02:32:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2672843360' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:33:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec 13 02:33:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2672843360' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:33:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Dec 13 02:33:00 np0005558241 sweet_borg[92797]: pool 'cephfs.cephfs.meta' created
Dec 13 02:33:00 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Dec 13 02:33:00 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/2672843360' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:33:00 np0005558241 systemd[1]: libpod-fd2fed37a82d6c3481710ddafa6343d123bcf54cb25d2e822215d28f4a4715a3.scope: Deactivated successfully.
Dec 13 02:33:00 np0005558241 podman[92782]: 2025-12-13 07:33:00.408114703 +0000 UTC m=+1.592005858 container died fd2fed37a82d6c3481710ddafa6343d123bcf54cb25d2e822215d28f4a4715a3 (image=quay.io/ceph/ceph:v20, name=sweet_borg, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:00 np0005558241 systemd[1]: var-lib-containers-storage-overlay-834f218bd14563536505a1f5e4421cf9f9d974e8d0b257f9ea9fff0b4b4575cc-merged.mount: Deactivated successfully.
Dec 13 02:33:00 np0005558241 podman[92782]: 2025-12-13 07:33:00.470650465 +0000 UTC m=+1.654541630 container remove fd2fed37a82d6c3481710ddafa6343d123bcf54cb25d2e822215d28f4a4715a3 (image=quay.io/ceph/ceph:v20, name=sweet_borg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 02:33:00 np0005558241 systemd[1]: libpod-conmon-fd2fed37a82d6c3481710ddafa6343d123bcf54cb25d2e822215d28f4a4715a3.scope: Deactivated successfully.
Dec 13 02:33:00 np0005558241 python3[92861]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:00 np0005558241 podman[92862]: 2025-12-13 07:33:00.928428092 +0000 UTC m=+0.068776220 container create 18016c5025ad401d10c7f1395b510766f37025afcd265aa0bde3d0b6f363dc35 (image=quay.io/ceph/ceph:v20, name=vigilant_pike, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:33:00 np0005558241 systemd[1]: Started libpod-conmon-18016c5025ad401d10c7f1395b510766f37025afcd265aa0bde3d0b6f363dc35.scope.
Dec 13 02:33:00 np0005558241 podman[92862]: 2025-12-13 07:33:00.898403597 +0000 UTC m=+0.038751785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da59decf37f177b462735a610865717a7f484f08d89f3f6e5efdd91ac77378ed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da59decf37f177b462735a610865717a7f484f08d89f3f6e5efdd91ac77378ed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:01 np0005558241 podman[92862]: 2025-12-13 07:33:01.110418346 +0000 UTC m=+0.250766534 container init 18016c5025ad401d10c7f1395b510766f37025afcd265aa0bde3d0b6f363dc35 (image=quay.io/ceph/ceph:v20, name=vigilant_pike, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:01 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:01 np0005558241 podman[92862]: 2025-12-13 07:33:01.122058799 +0000 UTC m=+0.262406897 container start 18016c5025ad401d10c7f1395b510766f37025afcd265aa0bde3d0b6f363dc35 (image=quay.io/ceph/ceph:v20, name=vigilant_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:01 np0005558241 podman[92862]: 2025-12-13 07:33:01.126827239 +0000 UTC m=+0.267175427 container attach 18016c5025ad401d10c7f1395b510766f37025afcd265aa0bde3d0b6f363dc35 (image=quay.io/ceph/ceph:v20, name=vigilant_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v73: 6 pgs: 1 unknown, 1 creating+peering, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec 13 02:33:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Dec 13 02:33:01 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Dec 13 02:33:01 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 27 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:01 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/2672843360' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:33:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 02:33:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3855451313' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:33:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec 13 02:33:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3855451313' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:33:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Dec 13 02:33:02 np0005558241 vigilant_pike[92878]: pool 'cephfs.cephfs.data' created
Dec 13 02:33:02 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Dec 13 02:33:02 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3855451313' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 02:33:02 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3855451313' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 02:33:02 np0005558241 systemd[1]: libpod-18016c5025ad401d10c7f1395b510766f37025afcd265aa0bde3d0b6f363dc35.scope: Deactivated successfully.
Dec 13 02:33:02 np0005558241 podman[92862]: 2025-12-13 07:33:02.428739024 +0000 UTC m=+1.569087162 container died 18016c5025ad401d10c7f1395b510766f37025afcd265aa0bde3d0b6f363dc35 (image=quay.io/ceph/ceph:v20, name=vigilant_pike, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:02 np0005558241 systemd[1]: var-lib-containers-storage-overlay-da59decf37f177b462735a610865717a7f484f08d89f3f6e5efdd91ac77378ed-merged.mount: Deactivated successfully.
Dec 13 02:33:02 np0005558241 podman[92862]: 2025-12-13 07:33:02.489306067 +0000 UTC m=+1.629654195 container remove 18016c5025ad401d10c7f1395b510766f37025afcd265aa0bde3d0b6f363dc35 (image=quay.io/ceph/ceph:v20, name=vigilant_pike, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 02:33:02 np0005558241 systemd[1]: libpod-conmon-18016c5025ad401d10c7f1395b510766f37025afcd265aa0bde3d0b6f363dc35.scope: Deactivated successfully.
Dec 13 02:33:02 np0005558241 python3[92943]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:03 np0005558241 podman[92944]: 2025-12-13 07:33:03.052140755 +0000 UTC m=+0.065025826 container create 7205de40ab50520b3d5a61fdd3537db10ed5c89d36b55e5fb18ba5e9b53c05ab (image=quay.io/ceph/ceph:v20, name=intelligent_jackson, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 02:33:03 np0005558241 systemd[1]: Started libpod-conmon-7205de40ab50520b3d5a61fdd3537db10ed5c89d36b55e5fb18ba5e9b53c05ab.scope.
Dec 13 02:33:03 np0005558241 podman[92944]: 2025-12-13 07:33:03.020705115 +0000 UTC m=+0.033590246 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:03 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f608d4098a381b07a3c44fff9e2917b086cd9a92244ad7ee7ac57e33f42e4f3b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f608d4098a381b07a3c44fff9e2917b086cd9a92244ad7ee7ac57e33f42e4f3b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:03 np0005558241 podman[92944]: 2025-12-13 07:33:03.158687443 +0000 UTC m=+0.171572514 container init 7205de40ab50520b3d5a61fdd3537db10ed5c89d36b55e5fb18ba5e9b53c05ab (image=quay.io/ceph/ceph:v20, name=intelligent_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:03 np0005558241 podman[92944]: 2025-12-13 07:33:03.169879165 +0000 UTC m=+0.182764236 container start 7205de40ab50520b3d5a61fdd3537db10ed5c89d36b55e5fb18ba5e9b53c05ab (image=quay.io/ceph/ceph:v20, name=intelligent_jackson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Dec 13 02:33:03 np0005558241 podman[92944]: 2025-12-13 07:33:03.176325547 +0000 UTC m=+0.189210638 container attach 7205de40ab50520b3d5a61fdd3537db10ed5c89d36b55e5fb18ba5e9b53c05ab (image=quay.io/ceph/ceph:v20, name=intelligent_jackson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:03 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=0/0 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 2 unknown, 1 creating+peering, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec 13 02:33:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Dec 13 02:33:03 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Dec 13 02:33:03 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 29 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Dec 13 02:33:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3223251088' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Dec 13 02:33:04 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3223251088' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Dec 13 02:33:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec 13 02:33:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3223251088' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 13 02:33:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Dec 13 02:33:04 np0005558241 intelligent_jackson[92959]: enabled application 'rbd' on pool 'vms'
Dec 13 02:33:04 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Dec 13 02:33:04 np0005558241 systemd[1]: libpod-7205de40ab50520b3d5a61fdd3537db10ed5c89d36b55e5fb18ba5e9b53c05ab.scope: Deactivated successfully.
Dec 13 02:33:04 np0005558241 podman[92944]: 2025-12-13 07:33:04.477012151 +0000 UTC m=+1.489897172 container died 7205de40ab50520b3d5a61fdd3537db10ed5c89d36b55e5fb18ba5e9b53c05ab (image=quay.io/ceph/ceph:v20, name=intelligent_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:33:04 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f608d4098a381b07a3c44fff9e2917b086cd9a92244ad7ee7ac57e33f42e4f3b-merged.mount: Deactivated successfully.
Dec 13 02:33:04 np0005558241 podman[92944]: 2025-12-13 07:33:04.524608947 +0000 UTC m=+1.537493978 container remove 7205de40ab50520b3d5a61fdd3537db10ed5c89d36b55e5fb18ba5e9b53c05ab (image=quay.io/ceph/ceph:v20, name=intelligent_jackson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 02:33:04 np0005558241 systemd[1]: libpod-conmon-7205de40ab50520b3d5a61fdd3537db10ed5c89d36b55e5fb18ba5e9b53c05ab.scope: Deactivated successfully.
Dec 13 02:33:04 np0005558241 python3[93021]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:05 np0005558241 podman[93022]: 2025-12-13 07:33:05.009955417 +0000 UTC m=+0.068221455 container create f87cb84163b542bc9953c67aabf102588845c51231611412020edf5f726f33d6 (image=quay.io/ceph/ceph:v20, name=fervent_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 02:33:05 np0005558241 systemd[1]: Started libpod-conmon-f87cb84163b542bc9953c67aabf102588845c51231611412020edf5f726f33d6.scope.
Dec 13 02:33:05 np0005558241 podman[93022]: 2025-12-13 07:33:04.981142473 +0000 UTC m=+0.039408561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:05 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef7ca115e32f88be58fedaeed08b23b3d2b852f9da9e6e5e11eae9fa58c20536/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef7ca115e32f88be58fedaeed08b23b3d2b852f9da9e6e5e11eae9fa58c20536/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:05 np0005558241 podman[93022]: 2025-12-13 07:33:05.110509665 +0000 UTC m=+0.168775703 container init f87cb84163b542bc9953c67aabf102588845c51231611412020edf5f726f33d6 (image=quay.io/ceph/ceph:v20, name=fervent_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:05 np0005558241 podman[93022]: 2025-12-13 07:33:05.11985813 +0000 UTC m=+0.178124158 container start f87cb84163b542bc9953c67aabf102588845c51231611412020edf5f726f33d6 (image=quay.io/ceph/ceph:v20, name=fervent_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 02:33:05 np0005558241 podman[93022]: 2025-12-13 07:33:05.124033055 +0000 UTC m=+0.182299083 container attach f87cb84163b542bc9953c67aabf102588845c51231611412020edf5f726f33d6 (image=quay.io/ceph/ceph:v20, name=fervent_carson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:33:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:05 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3223251088' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 13 02:33:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Dec 13 02:33:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2709406670' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Dec 13 02:33:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec 13 02:33:06 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/2709406670' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Dec 13 02:33:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2709406670' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 13 02:33:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Dec 13 02:33:06 np0005558241 fervent_carson[93038]: enabled application 'rbd' on pool 'volumes'
Dec 13 02:33:06 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Dec 13 02:33:06 np0005558241 systemd[1]: libpod-f87cb84163b542bc9953c67aabf102588845c51231611412020edf5f726f33d6.scope: Deactivated successfully.
Dec 13 02:33:06 np0005558241 podman[93022]: 2025-12-13 07:33:06.509228734 +0000 UTC m=+1.567494762 container died f87cb84163b542bc9953c67aabf102588845c51231611412020edf5f726f33d6 (image=quay.io/ceph/ceph:v20, name=fervent_carson, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:06 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ef7ca115e32f88be58fedaeed08b23b3d2b852f9da9e6e5e11eae9fa58c20536-merged.mount: Deactivated successfully.
Dec 13 02:33:06 np0005558241 podman[93022]: 2025-12-13 07:33:06.562883903 +0000 UTC m=+1.621149931 container remove f87cb84163b542bc9953c67aabf102588845c51231611412020edf5f726f33d6 (image=quay.io/ceph/ceph:v20, name=fervent_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:33:06 np0005558241 systemd[1]: libpod-conmon-f87cb84163b542bc9953c67aabf102588845c51231611412020edf5f726f33d6.scope: Deactivated successfully.
Dec 13 02:33:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:06 np0005558241 python3[93099]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:07 np0005558241 podman[93100]: 2025-12-13 07:33:07.052410108 +0000 UTC m=+0.075187061 container create 3cb986833ed4501ed3dcdcf44ac5b0adf6edbe7a0685935319cfc3a3a62bc1ae (image=quay.io/ceph/ceph:v20, name=dazzling_saha, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 02:33:07 np0005558241 systemd[1]: Started libpod-conmon-3cb986833ed4501ed3dcdcf44ac5b0adf6edbe7a0685935319cfc3a3a62bc1ae.scope.
Dec 13 02:33:07 np0005558241 podman[93100]: 2025-12-13 07:33:07.02106027 +0000 UTC m=+0.043837273 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:07 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba089870e751f56326daa562a30d2d17de4cc269ab25ff91b8c35c155584946/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba089870e751f56326daa562a30d2d17de4cc269ab25ff91b8c35c155584946/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:07 np0005558241 podman[93100]: 2025-12-13 07:33:07.157776576 +0000 UTC m=+0.180553569 container init 3cb986833ed4501ed3dcdcf44ac5b0adf6edbe7a0685935319cfc3a3a62bc1ae (image=quay.io/ceph/ceph:v20, name=dazzling_saha, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:07 np0005558241 podman[93100]: 2025-12-13 07:33:07.168236139 +0000 UTC m=+0.191013102 container start 3cb986833ed4501ed3dcdcf44ac5b0adf6edbe7a0685935319cfc3a3a62bc1ae (image=quay.io/ceph/ceph:v20, name=dazzling_saha, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:07 np0005558241 podman[93100]: 2025-12-13 07:33:07.172336842 +0000 UTC m=+0.195113855 container attach 3cb986833ed4501ed3dcdcf44ac5b0adf6edbe7a0685935319cfc3a3a62bc1ae (image=quay.io/ceph/ceph:v20, name=dazzling_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 02:33:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:07 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/2709406670' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 13 02:33:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Dec 13 02:33:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/317929508' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Dec 13 02:33:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec 13 02:33:08 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/317929508' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Dec 13 02:33:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/317929508' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 13 02:33:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Dec 13 02:33:08 np0005558241 dazzling_saha[93115]: enabled application 'rbd' on pool 'backups'
Dec 13 02:33:08 np0005558241 systemd[1]: libpod-3cb986833ed4501ed3dcdcf44ac5b0adf6edbe7a0685935319cfc3a3a62bc1ae.scope: Deactivated successfully.
Dec 13 02:33:08 np0005558241 podman[93100]: 2025-12-13 07:33:08.904997445 +0000 UTC m=+1.927774378 container died 3cb986833ed4501ed3dcdcf44ac5b0adf6edbe7a0685935319cfc3a3a62bc1ae (image=quay.io/ceph/ceph:v20, name=dazzling_saha, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 02:33:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:33:08
Dec 13 02:33:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:33:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Some PGs (0.142857) are unknown; try again later
Dec 13 02:33:09 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Dec 13 02:33:09 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2ba089870e751f56326daa562a30d2d17de4cc269ab25ff91b8c35c155584946-merged.mount: Deactivated successfully.
Dec 13 02:33:09 np0005558241 podman[93100]: 2025-12-13 07:33:09.211246043 +0000 UTC m=+2.234022986 container remove 3cb986833ed4501ed3dcdcf44ac5b0adf6edbe7a0685935319cfc3a3a62bc1ae (image=quay.io/ceph/ceph:v20, name=dazzling_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:09 np0005558241 systemd[1]: libpod-conmon-3cb986833ed4501ed3dcdcf44ac5b0adf6edbe7a0685935319cfc3a3a62bc1ae.scope: Deactivated successfully.
Dec 13 02:33:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v83: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:09 np0005558241 python3[93177]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:09 np0005558241 podman[93178]: 2025-12-13 07:33:09.676058267 +0000 UTC m=+0.069787675 container create 911689e7e42f505242248060fa4ddd93db5b8cf695f6021ab8b9989d607798af (image=quay.io/ceph/ceph:v20, name=kind_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:33:09 np0005558241 systemd[1]: Started libpod-conmon-911689e7e42f505242248060fa4ddd93db5b8cf695f6021ab8b9989d607798af.scope.
Dec 13 02:33:09 np0005558241 podman[93178]: 2025-12-13 07:33:09.645801347 +0000 UTC m=+0.039530775 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:09 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/483cfc6c3e54a1f8c90833e43efedbd66880c66a84f02316076cb5ab713c3b9b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/483cfc6c3e54a1f8c90833e43efedbd66880c66a84f02316076cb5ab713c3b9b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:09 np0005558241 podman[93178]: 2025-12-13 07:33:09.78398722 +0000 UTC m=+0.177716638 container init 911689e7e42f505242248060fa4ddd93db5b8cf695f6021ab8b9989d607798af (image=quay.io/ceph/ceph:v20, name=kind_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:09 np0005558241 podman[93178]: 2025-12-13 07:33:09.793289544 +0000 UTC m=+0.187018922 container start 911689e7e42f505242248060fa4ddd93db5b8cf695f6021ab8b9989d607798af (image=quay.io/ceph/ceph:v20, name=kind_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 02:33:09 np0005558241 podman[93178]: 2025-12-13 07:33:09.797006997 +0000 UTC m=+0.190736375 container attach 911689e7e42f505242248060fa4ddd93db5b8cf695f6021ab8b9989d607798af (image=quay.io/ceph/ceph:v20, name=kind_engelbart, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 02:33:09 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/317929508' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 02:33:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:33:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:33:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec 13 02:33:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3211443820' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Dec 13 02:33:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec 13 02:33:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:33:10 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3211443820' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Dec 13 02:33:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:33:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3211443820' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 13 02:33:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Dec 13 02:33:10 np0005558241 kind_engelbart[93194]: enabled application 'rbd' on pool 'images'
Dec 13 02:33:10 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Dec 13 02:33:10 np0005558241 ceph-mgr[76830]: [progress INFO root] update: starting ev 0de9bdfc-5719-4ffe-be1a-f1d9d5b2d13f (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 13 02:33:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:33:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:33:10 np0005558241 systemd[1]: libpod-911689e7e42f505242248060fa4ddd93db5b8cf695f6021ab8b9989d607798af.scope: Deactivated successfully.
Dec 13 02:33:10 np0005558241 podman[93178]: 2025-12-13 07:33:10.949216669 +0000 UTC m=+1.342946057 container died 911689e7e42f505242248060fa4ddd93db5b8cf695f6021ab8b9989d607798af (image=quay.io/ceph/ceph:v20, name=kind_engelbart, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:10 np0005558241 systemd[1]: var-lib-containers-storage-overlay-483cfc6c3e54a1f8c90833e43efedbd66880c66a84f02316076cb5ab713c3b9b-merged.mount: Deactivated successfully.
Dec 13 02:33:11 np0005558241 podman[93178]: 2025-12-13 07:33:11.00496738 +0000 UTC m=+1.398696758 container remove 911689e7e42f505242248060fa4ddd93db5b8cf695f6021ab8b9989d607798af (image=quay.io/ceph/ceph:v20, name=kind_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 02:33:11 np0005558241 systemd[1]: libpod-conmon-911689e7e42f505242248060fa4ddd93db5b8cf695f6021ab8b9989d607798af.scope: Deactivated successfully.
Dec 13 02:33:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v85: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:33:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:33:11 np0005558241 python3[93256]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:11 np0005558241 podman[93257]: 2025-12-13 07:33:11.465468076 +0000 UTC m=+0.061710862 container create d3e64cec3a18e3ebe62d8cff028d7ad99dd9762a9ae1397ee6cb869e84c44487 (image=quay.io/ceph/ceph:v20, name=confident_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 02:33:11 np0005558241 systemd[1]: Started libpod-conmon-d3e64cec3a18e3ebe62d8cff028d7ad99dd9762a9ae1397ee6cb869e84c44487.scope.
Dec 13 02:33:11 np0005558241 podman[93257]: 2025-12-13 07:33:11.434681732 +0000 UTC m=+0.030924568 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:11 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8ce4b04f065adb20b33cf061d234d527cd498359ed63874e2cd4cbcf8bab6a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8ce4b04f065adb20b33cf061d234d527cd498359ed63874e2cd4cbcf8bab6a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:11 np0005558241 podman[93257]: 2025-12-13 07:33:11.566554357 +0000 UTC m=+0.162797143 container init d3e64cec3a18e3ebe62d8cff028d7ad99dd9762a9ae1397ee6cb869e84c44487 (image=quay.io/ceph/ceph:v20, name=confident_fermat, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:11 np0005558241 podman[93257]: 2025-12-13 07:33:11.573529942 +0000 UTC m=+0.169772718 container start d3e64cec3a18e3ebe62d8cff028d7ad99dd9762a9ae1397ee6cb869e84c44487 (image=quay.io/ceph/ceph:v20, name=confident_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 02:33:11 np0005558241 podman[93257]: 2025-12-13 07:33:11.577781749 +0000 UTC m=+0.174024595 container attach d3e64cec3a18e3ebe62d8cff028d7ad99dd9762a9ae1397ee6cb869e84c44487 (image=quay.io/ceph/ceph:v20, name=confident_fermat, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 02:33:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec 13 02:33:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:33:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:33:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Dec 13 02:33:11 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:33:11 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3211443820' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 13 02:33:11 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:33:11 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:33:11 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Dec 13 02:33:11 np0005558241 ceph-mgr[76830]: [progress INFO root] update: starting ev bf3023ed-a4c5-49ad-aec9-49e042f01b47 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 13 02:33:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:33:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:33:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Dec 13 02:33:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2071165402' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 34 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=34 pruub=13.458908081s) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active pruub 65.881416321s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 34 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=34 pruub=13.458908081s) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown pruub 65.881416321s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec 13 02:33:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:33:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2071165402' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 13 02:33:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec 13 02:33:12 np0005558241 confident_fermat[93272]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec 13 02:33:12 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.1d( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.1e( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.1c( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.b( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.1f( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.a( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.9( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.8( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.6( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.5( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.1( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.2( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.3( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.4( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.7( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.c( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.d( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.e( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.10( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.f( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.11( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.12( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.13( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.16( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.15( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-mgr[76830]: [progress INFO root] update: starting ev 7b1c5351-c6da-4202-b803-c5345f2667d3 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.14( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.18( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:33:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.17( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.19( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.1a( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.1b( empty local-lis/les=19/20 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.1d( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:33:12 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:33:12 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:33:12 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/2071165402' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.1f( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.a( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.9( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.8( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.1c( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.b( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.6( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.1e( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.3( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.2( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.5( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.4( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.0( empty local-lis/les=34/35 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.1( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.d( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.c( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.10( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.e( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.12( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.11( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.f( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.13( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.15( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.7( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.16( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.18( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.14( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.1a( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.19( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.17( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 35 pg[2.1b( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=19/19 les/c/f=20/20/0 sis=34) [2] r=0 lpr=34 pi=[19,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:12 np0005558241 systemd[1]: libpod-d3e64cec3a18e3ebe62d8cff028d7ad99dd9762a9ae1397ee6cb869e84c44487.scope: Deactivated successfully.
Dec 13 02:33:12 np0005558241 podman[93257]: 2025-12-13 07:33:12.971562034 +0000 UTC m=+1.567804840 container died d3e64cec3a18e3ebe62d8cff028d7ad99dd9762a9ae1397ee6cb869e84c44487 (image=quay.io/ceph/ceph:v20, name=confident_fermat, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 02:33:13 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e8ce4b04f065adb20b33cf061d234d527cd498359ed63874e2cd4cbcf8bab6a2-merged.mount: Deactivated successfully.
Dec 13 02:33:13 np0005558241 podman[93257]: 2025-12-13 07:33:13.024628869 +0000 UTC m=+1.620871645 container remove d3e64cec3a18e3ebe62d8cff028d7ad99dd9762a9ae1397ee6cb869e84c44487 (image=quay.io/ceph/ceph:v20, name=confident_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:13 np0005558241 systemd[1]: libpod-conmon-d3e64cec3a18e3ebe62d8cff028d7ad99dd9762a9ae1397ee6cb869e84c44487.scope: Deactivated successfully.
Dec 13 02:33:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v88: 38 pgs: 31 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:33:13 np0005558241 python3[93336]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:13 np0005558241 podman[93337]: 2025-12-13 07:33:13.495239698 +0000 UTC m=+0.063677471 container create 3486f4cf52c3957a017a54b162810c1c7718e14670ca2fa54e0280c50b08e90e (image=quay.io/ceph/ceph:v20, name=wonderful_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 02:33:13 np0005558241 systemd[1]: Started libpod-conmon-3486f4cf52c3957a017a54b162810c1c7718e14670ca2fa54e0280c50b08e90e.scope.
Dec 13 02:33:13 np0005558241 podman[93337]: 2025-12-13 07:33:13.469622384 +0000 UTC m=+0.038060237 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:13 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4211c26407d1d1d4d80e907af358a446b571090e1652bbcb32b6f4bad3fa302f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4211c26407d1d1d4d80e907af358a446b571090e1652bbcb32b6f4bad3fa302f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:13 np0005558241 podman[93337]: 2025-12-13 07:33:13.591402526 +0000 UTC m=+0.159840309 container init 3486f4cf52c3957a017a54b162810c1c7718e14670ca2fa54e0280c50b08e90e (image=quay.io/ceph/ceph:v20, name=wonderful_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:13 np0005558241 podman[93337]: 2025-12-13 07:33:13.600191087 +0000 UTC m=+0.168628890 container start 3486f4cf52c3957a017a54b162810c1c7718e14670ca2fa54e0280c50b08e90e (image=quay.io/ceph/ceph:v20, name=wonderful_cohen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 02:33:13 np0005558241 podman[93337]: 2025-12-13 07:33:13.60548758 +0000 UTC m=+0.173925363 container attach 3486f4cf52c3957a017a54b162810c1c7718e14670ca2fa54e0280c50b08e90e (image=quay.io/ceph/ceph:v20, name=wonderful_cohen, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec 13 02:33:13 np0005558241 ceph-mgr[76830]: [progress INFO root] update: starting ev bf097fe0-7730-4645-a7d8-5604a61bfd00 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/2071165402' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:33:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:33:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Dec 13 02:33:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/763853033' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 36 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=36 pruub=14.828155518s) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active pruub 85.551963806s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 36 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=13.820656776s) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active pruub 77.245323181s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 36 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=36 pruub=14.828155518s) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown pruub 85.551963806s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 36 pg[3.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=13.820656776s) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown pruub 77.245323181s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec 13 02:33:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:33:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/763853033' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 13 02:33:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec 13 02:33:14 np0005558241 wonderful_cohen[93352]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec 13 02:33:14 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec 13 02:33:14 np0005558241 ceph-mgr[76830]: [progress INFO root] update: starting ev 402e4749-397e-4d5f-b2a0-5e3d9f66d6d7 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 13 02:33:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:33:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.1f( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.1c( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.1d( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.1e( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.b( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.7( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.6( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.1b( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.8( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.a( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.5( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.1a( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.9( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.4( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.19( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.3( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.2( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.1( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.d( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.e( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.f( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.c( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.11( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.10( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.12( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.13( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.14( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.15( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.16( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.17( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.18( empty local-lis/les=22/23 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.1d( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.1c( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.1e( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.1f( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.1d( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.9( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.1b( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.8( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.7( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.6( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.5( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.3( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.1( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.2( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.4( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.1f( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.b( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.c( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.d( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.e( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.f( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.11( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.10( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.12( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.13( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.14( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.1c( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.15( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.1e( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.16( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.b( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.7( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.17( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.6( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.18( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.a( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.1b( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.5( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.19( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.1a( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.4( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.9( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.19( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.3( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=36/37 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.8( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.2( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.1a( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.1d( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.a( empty local-lis/les=21/22 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.d( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.f( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.1( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.c( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.11( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.10( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.12( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.13( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.14( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.15( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.17( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.16( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.18( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 37 pg[4.e( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=22/22 les/c/f=23/23/0 sis=36) [0] r=0 lpr=36 pi=[22,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.1c( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.1e( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.9( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.1b( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.6( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.7( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.5( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.3( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.8( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.1( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.4( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=36/37 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.1f( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.d( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.e( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.c( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.f( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.11( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.10( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.b( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.12( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.14( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.15( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.13( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.18( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.16( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.17( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.2( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.1a( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.a( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 37 pg[3.19( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=21/21 les/c/f=22/22/0 sis=36) [1] r=0 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:14 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/763853033' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Dec 13 02:33:14 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:33:14 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/763853033' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 13 02:33:14 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:33:14 np0005558241 systemd[1]: libpod-3486f4cf52c3957a017a54b162810c1c7718e14670ca2fa54e0280c50b08e90e.scope: Deactivated successfully.
Dec 13 02:33:14 np0005558241 podman[93337]: 2025-12-13 07:33:14.975437255 +0000 UTC m=+1.543875058 container died 3486f4cf52c3957a017a54b162810c1c7718e14670ca2fa54e0280c50b08e90e (image=quay.io/ceph/ceph:v20, name=wonderful_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 02:33:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4211c26407d1d1d4d80e907af358a446b571090e1652bbcb32b6f4bad3fa302f-merged.mount: Deactivated successfully.
Dec 13 02:33:15 np0005558241 podman[93337]: 2025-12-13 07:33:15.03172555 +0000 UTC m=+1.600163323 container remove 3486f4cf52c3957a017a54b162810c1c7718e14670ca2fa54e0280c50b08e90e (image=quay.io/ceph/ceph:v20, name=wonderful_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 02:33:15 np0005558241 systemd[1]: libpod-conmon-3486f4cf52c3957a017a54b162810c1c7718e14670ca2fa54e0280c50b08e90e.scope: Deactivated successfully.
Dec 13 02:33:15 np0005558241 ceph-mgr[76830]: [progress WARNING root] Starting Global Recovery Event,62 pgs not in active + clean state
Dec 13 02:33:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v91: 100 pgs: 62 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:33:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:33:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:33:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:33:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec 13 02:33:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:33:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:33:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:33:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec 13 02:33:15 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec 13 02:33:15 np0005558241 ceph-mgr[76830]: [progress INFO root] update: starting ev 4b7c72cb-1e91-4777-851a-e2f1523766b3 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 13 02:33:15 np0005558241 ceph-mgr[76830]: [progress INFO root] complete: finished ev 0de9bdfc-5719-4ffe-be1a-f1d9d5b2d13f (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 13 02:33:15 np0005558241 ceph-mgr[76830]: [progress INFO root] Completed event 0de9bdfc-5719-4ffe-be1a-f1d9d5b2d13f (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Dec 13 02:33:15 np0005558241 ceph-mgr[76830]: [progress INFO root] complete: finished ev bf3023ed-a4c5-49ad-aec9-49e042f01b47 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 13 02:33:15 np0005558241 ceph-mgr[76830]: [progress INFO root] Completed event bf3023ed-a4c5-49ad-aec9-49e042f01b47 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Dec 13 02:33:15 np0005558241 ceph-mgr[76830]: [progress INFO root] complete: finished ev 7b1c5351-c6da-4202-b803-c5345f2667d3 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 13 02:33:15 np0005558241 ceph-mgr[76830]: [progress INFO root] Completed event 7b1c5351-c6da-4202-b803-c5345f2667d3 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Dec 13 02:33:15 np0005558241 ceph-mgr[76830]: [progress INFO root] complete: finished ev bf097fe0-7730-4645-a7d8-5604a61bfd00 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 13 02:33:15 np0005558241 ceph-mgr[76830]: [progress INFO root] Completed event bf097fe0-7730-4645-a7d8-5604a61bfd00 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Dec 13 02:33:15 np0005558241 ceph-mgr[76830]: [progress INFO root] complete: finished ev 402e4749-397e-4d5f-b2a0-5e3d9f66d6d7 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec 13 02:33:15 np0005558241 ceph-mgr[76830]: [progress INFO root] Completed event 402e4749-397e-4d5f-b2a0-5e3d9f66d6d7 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Dec 13 02:33:15 np0005558241 ceph-mgr[76830]: [progress INFO root] complete: finished ev 4b7c72cb-1e91-4777-851a-e2f1523766b3 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 13 02:33:15 np0005558241 ceph-mgr[76830]: [progress INFO root] Completed event 4b7c72cb-1e91-4777-851a-e2f1523766b3 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Dec 13 02:33:15 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:33:15 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:33:15 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:33:15 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:33:15 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:33:16 np0005558241 python3[93464]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:33:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:16 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 38 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.547197342s) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active pruub 81.628005981s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:16 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 38 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=14.509831429s) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active pruub 71.690345764s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:16 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 38 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.547197342s) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown pruub 81.628005981s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:16 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 38 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=14.509831429s) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown pruub 71.690345764s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:16 np0005558241 python3[93535]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765611196.2156935-36723-109646357859437/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:33:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec 13 02:33:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec 13 02:33:17 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.1d( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.1c( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.1e( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.1f( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.10( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.14( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.13( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.12( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.11( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.15( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.16( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.17( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.8( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.9( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.a( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.b( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.7( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.6( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.5( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.4( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.3( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.f( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.1( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.2( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.e( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.c( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.d( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.1b( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.1a( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.1d( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.18( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.19( empty local-lis/les=24/25 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.1a( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.15( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.14( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.17( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.11( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.16( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.10( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.13( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.12( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.d( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.c( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.f( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.e( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.3( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.1( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.1b( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.b( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.6( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.18( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.7( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.8( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.19( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.4( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.9( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.1e( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.2( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.a( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.1f( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.1c( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.1d( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.1c( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.1e( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.1f( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.13( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.14( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.12( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.10( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.15( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.16( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.17( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.a( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.8( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.9( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.b( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=38/39 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.4( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.5( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.6( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.7( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.1a( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.5( empty local-lis/les=26/27 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.11( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.1( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.f( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.3( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.2( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.c( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.d( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.1b( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.1a( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.19( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.18( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 39 pg[5.e( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=24/24 les/c/f=25/25/0 sis=38) [2] r=0 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.15( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.14( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.17( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.11( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.16( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.10( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.12( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.c( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.d( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.f( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.e( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.0( empty local-lis/les=38/39 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.3( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.13( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.1( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.1b( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.b( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.6( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.18( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.7( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.8( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.19( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.9( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.4( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.1e( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.a( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.2( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.1f( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.1c( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.1d( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 39 pg[6.5( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=26/26 les/c/f=27/27/0 sis=38) [0] r=0 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v94: 162 pgs: 124 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:33:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:33:17 np0005558241 python3[93637]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:33:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec 13 02:33:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:33:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec 13 02:33:18 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec 13 02:33:18 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 40 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=9.432550430s) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active pruub 76.367774963s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:18 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:33:18 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 40 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=40 pruub=9.432550430s) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown pruub 76.367774963s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:18 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec 13 02:33:18 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec 13 02:33:18 np0005558241 python3[93712]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765611197.2689674-36737-274618135181823/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=fa3f913c036e3166487fe98b2b0944a8388c970e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:33:18 np0005558241 python3[93762]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:18 np0005558241 podman[93763]: 2025-12-13 07:33:18.559564208 +0000 UTC m=+0.077413717 container create fa60e3945f508072e863c6446cb15f4fce1cf4f2769a790985b14a0de3e7dee4 (image=quay.io/ceph/ceph:v20, name=serene_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:33:18 np0005558241 systemd[77917]: Starting Mark boot as successful...
Dec 13 02:33:18 np0005558241 systemd[77917]: Finished Mark boot as successful.
Dec 13 02:33:18 np0005558241 systemd[1]: Started libpod-conmon-fa60e3945f508072e863c6446cb15f4fce1cf4f2769a790985b14a0de3e7dee4.scope.
Dec 13 02:33:18 np0005558241 podman[93763]: 2025-12-13 07:33:18.527221645 +0000 UTC m=+0.045071194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:18 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daab70444f84faea8a6f5606ee77ee5f0f559ba9d1d517606f8d24dc0c3dee31/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daab70444f84faea8a6f5606ee77ee5f0f559ba9d1d517606f8d24dc0c3dee31/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daab70444f84faea8a6f5606ee77ee5f0f559ba9d1d517606f8d24dc0c3dee31/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:18 np0005558241 podman[93763]: 2025-12-13 07:33:18.660361972 +0000 UTC m=+0.178211481 container init fa60e3945f508072e863c6446cb15f4fce1cf4f2769a790985b14a0de3e7dee4 (image=quay.io/ceph/ceph:v20, name=serene_shaw, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:33:18 np0005558241 podman[93763]: 2025-12-13 07:33:18.666404594 +0000 UTC m=+0.184254093 container start fa60e3945f508072e863c6446cb15f4fce1cf4f2769a790985b14a0de3e7dee4 (image=quay.io/ceph/ceph:v20, name=serene_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 02:33:18 np0005558241 podman[93763]: 2025-12-13 07:33:18.672126807 +0000 UTC m=+0.189976316 container attach fa60e3945f508072e863c6446cb15f4fce1cf4f2769a790985b14a0de3e7dee4 (image=quay.io/ceph/ceph:v20, name=serene_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Dec 13 02:33:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec 13 02:33:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec 13 02:33:19 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.1e( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.1d( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.1c( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.13( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.12( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.11( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.10( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.17( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.16( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.15( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.b( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.14( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.9( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.a( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.8( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.f( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.6( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.4( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.7( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.d( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.19( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=28/29 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.17( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.10( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.12( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.16( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.1d( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.b( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.1e( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.14( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.0( empty local-lis/les=40/41 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.7( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.d( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.19( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=28/28 les/c/f=29/29/0 sis=40) [1] r=0 lpr=40 pi=[28,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 13 02:33:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/40267836' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 02:33:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/40267836' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 13 02:33:19 np0005558241 serene_shaw[93780]: 
Dec 13 02:33:19 np0005558241 serene_shaw[93780]: [global]
Dec 13 02:33:19 np0005558241 serene_shaw[93780]: #011fsid = 18ee9de6-e00b-571b-ab9b-b7aab06174df
Dec 13 02:33:19 np0005558241 serene_shaw[93780]: #011mon_host = 192.168.122.100
Dec 13 02:33:19 np0005558241 serene_shaw[93780]: #011rgw_keystone_api_version = 3
Dec 13 02:33:19 np0005558241 systemd[1]: libpod-fa60e3945f508072e863c6446cb15f4fce1cf4f2769a790985b14a0de3e7dee4.scope: Deactivated successfully.
Dec 13 02:33:19 np0005558241 podman[93763]: 2025-12-13 07:33:19.15002252 +0000 UTC m=+0.667872049 container died fa60e3945f508072e863c6446cb15f4fce1cf4f2769a790985b14a0de3e7dee4 (image=quay.io/ceph/ceph:v20, name=serene_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 02:33:19 np0005558241 systemd[1]: var-lib-containers-storage-overlay-daab70444f84faea8a6f5606ee77ee5f0f559ba9d1d517606f8d24dc0c3dee31-merged.mount: Deactivated successfully.
Dec 13 02:33:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v97: 193 pgs: 31 unknown, 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:19 np0005558241 podman[93763]: 2025-12-13 07:33:19.468117216 +0000 UTC m=+0.985966715 container remove fa60e3945f508072e863c6446cb15f4fce1cf4f2769a790985b14a0de3e7dee4 (image=quay.io/ceph/ceph:v20, name=serene_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 02:33:19 np0005558241 systemd[1]: libpod-conmon-fa60e3945f508072e863c6446cb15f4fce1cf4f2769a790985b14a0de3e7dee4.scope: Deactivated successfully.
Dec 13 02:33:19 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec 13 02:33:19 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec 13 02:33:19 np0005558241 python3[93927]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:20 np0005558241 ceph-mgr[76830]: [progress INFO root] Writing back 9 completed events
Dec 13 02:33:20 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec 13 02:33:20 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec 13 02:33:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 02:33:20 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/40267836' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 02:33:20 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/40267836' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 13 02:33:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:20 np0005558241 podman[93932]: 2025-12-13 07:33:20.932945827 +0000 UTC m=+1.192277561 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:20 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec 13 02:33:20 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec 13 02:33:20 np0005558241 podman[93946]: 2025-12-13 07:33:20.970104041 +0000 UTC m=+1.113793198 container create 4eaf636b39d04da69faab027ef39e82e3e2c7b01109497d5ef1b7533db0ed8aa (image=quay.io/ceph/ceph:v20, name=crazy_snyder, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 02:33:21 np0005558241 podman[93946]: 2025-12-13 07:33:20.921216082 +0000 UTC m=+1.064905299 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:21 np0005558241 systemd[1]: Started libpod-conmon-4eaf636b39d04da69faab027ef39e82e3e2c7b01109497d5ef1b7533db0ed8aa.scope.
Dec 13 02:33:21 np0005558241 podman[93932]: 2025-12-13 07:33:21.05359019 +0000 UTC m=+1.312921884 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:33:21 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ad32aa81158e35818aff490b0f9f987987e1af7a0dcea80f4c81c003121854/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ad32aa81158e35818aff490b0f9f987987e1af7a0dcea80f4c81c003121854/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ad32aa81158e35818aff490b0f9f987987e1af7a0dcea80f4c81c003121854/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:21 np0005558241 podman[93946]: 2025-12-13 07:33:21.093333459 +0000 UTC m=+1.237022676 container init 4eaf636b39d04da69faab027ef39e82e3e2c7b01109497d5ef1b7533db0ed8aa (image=quay.io/ceph/ceph:v20, name=crazy_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:21 np0005558241 podman[93946]: 2025-12-13 07:33:21.107145056 +0000 UTC m=+1.250834223 container start 4eaf636b39d04da69faab027ef39e82e3e2c7b01109497d5ef1b7533db0ed8aa (image=quay.io/ceph/ceph:v20, name=crazy_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:21 np0005558241 podman[93946]: 2025-12-13 07:33:21.111398583 +0000 UTC m=+1.255087740 container attach 4eaf636b39d04da69faab027ef39e82e3e2c7b01109497d5ef1b7533db0ed8aa (image=quay.io/ceph/ceph:v20, name=crazy_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 31 unknown, 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:21 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Dec 13 02:33:21 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/470931444' entity='client.admin' 
Dec 13 02:33:21 np0005558241 crazy_snyder[93968]: set ssl_option
Dec 13 02:33:21 np0005558241 systemd[1]: libpod-4eaf636b39d04da69faab027ef39e82e3e2c7b01109497d5ef1b7533db0ed8aa.scope: Deactivated successfully.
Dec 13 02:33:21 np0005558241 podman[93946]: 2025-12-13 07:33:21.637284641 +0000 UTC m=+1.780973768 container died 4eaf636b39d04da69faab027ef39e82e3e2c7b01109497d5ef1b7533db0ed8aa (image=quay.io/ceph/ceph:v20, name=crazy_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 02:33:21 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Dec 13 02:33:21 np0005558241 systemd[1]: var-lib-containers-storage-overlay-21ad32aa81158e35818aff490b0f9f987987e1af7a0dcea80f4c81c003121854-merged.mount: Deactivated successfully.
Dec 13 02:33:21 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec 13 02:33:21 np0005558241 podman[93946]: 2025-12-13 07:33:21.6893323 +0000 UTC m=+1.833021457 container remove 4eaf636b39d04da69faab027ef39e82e3e2c7b01109497d5ef1b7533db0ed8aa (image=quay.io/ceph/ceph:v20, name=crazy_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:33:21 np0005558241 systemd[1]: libpod-conmon-4eaf636b39d04da69faab027ef39e82e3e2c7b01109497d5ef1b7533db0ed8aa.scope: Deactivated successfully.
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/470931444' entity='client.admin' 
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:33:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:33:21 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Dec 13 02:33:21 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Dec 13 02:33:22 np0005558241 python3[94166]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:22 np0005558241 podman[94212]: 2025-12-13 07:33:22.188332893 +0000 UTC m=+0.068017911 container create e1cd197ca816a6ab7b2ef726e14c27818a2d277847d8e9e36d92fba53fbcf363 (image=quay.io/ceph/ceph:v20, name=festive_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 02:33:22 np0005558241 systemd[1]: Started libpod-conmon-e1cd197ca816a6ab7b2ef726e14c27818a2d277847d8e9e36d92fba53fbcf363.scope.
Dec 13 02:33:22 np0005558241 podman[94212]: 2025-12-13 07:33:22.16154196 +0000 UTC m=+0.041227078 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:22 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c4627a33ae05e05632c0cd8aca8fe0dbcb4d56217fee4d39dc3405b12e4c1b1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c4627a33ae05e05632c0cd8aca8fe0dbcb4d56217fee4d39dc3405b12e4c1b1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c4627a33ae05e05632c0cd8aca8fe0dbcb4d56217fee4d39dc3405b12e4c1b1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:22 np0005558241 podman[94212]: 2025-12-13 07:33:22.301239561 +0000 UTC m=+0.180924599 container init e1cd197ca816a6ab7b2ef726e14c27818a2d277847d8e9e36d92fba53fbcf363 (image=quay.io/ceph/ceph:v20, name=festive_saha, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:22 np0005558241 podman[94212]: 2025-12-13 07:33:22.311292944 +0000 UTC m=+0.190978002 container start e1cd197ca816a6ab7b2ef726e14c27818a2d277847d8e9e36d92fba53fbcf363 (image=quay.io/ceph/ceph:v20, name=festive_saha, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:22 np0005558241 podman[94212]: 2025-12-13 07:33:22.315756436 +0000 UTC m=+0.195441464 container attach e1cd197ca816a6ab7b2ef726e14c27818a2d277847d8e9e36d92fba53fbcf363 (image=quay.io/ceph/ceph:v20, name=festive_saha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:22 np0005558241 podman[94244]: 2025-12-13 07:33:22.429807663 +0000 UTC m=+0.068448162 container create d03f2524d537b52d75e5d500d3ae9a4c8f3ff10111e1d895ae936527c4ac750e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_merkle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:33:22 np0005558241 systemd[1]: Started libpod-conmon-d03f2524d537b52d75e5d500d3ae9a4c8f3ff10111e1d895ae936527c4ac750e.scope.
Dec 13 02:33:22 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:22 np0005558241 podman[94244]: 2025-12-13 07:33:22.400933677 +0000 UTC m=+0.039574256 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:22 np0005558241 podman[94244]: 2025-12-13 07:33:22.49930034 +0000 UTC m=+0.137940919 container init d03f2524d537b52d75e5d500d3ae9a4c8f3ff10111e1d895ae936527c4ac750e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:22 np0005558241 podman[94244]: 2025-12-13 07:33:22.503999628 +0000 UTC m=+0.142640127 container start d03f2524d537b52d75e5d500d3ae9a4c8f3ff10111e1d895ae936527c4ac750e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_merkle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:22 np0005558241 podman[94244]: 2025-12-13 07:33:22.507629639 +0000 UTC m=+0.146270188 container attach d03f2524d537b52d75e5d500d3ae9a4c8f3ff10111e1d895ae936527c4ac750e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_merkle, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:22 np0005558241 sweet_merkle[94279]: 167 167
Dec 13 02:33:22 np0005558241 systemd[1]: libpod-d03f2524d537b52d75e5d500d3ae9a4c8f3ff10111e1d895ae936527c4ac750e.scope: Deactivated successfully.
Dec 13 02:33:22 np0005558241 conmon[94279]: conmon d03f2524d537b52d75e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d03f2524d537b52d75e5d500d3ae9a4c8f3ff10111e1d895ae936527c4ac750e.scope/container/memory.events
Dec 13 02:33:22 np0005558241 podman[94244]: 2025-12-13 07:33:22.512168913 +0000 UTC m=+0.150809442 container died d03f2524d537b52d75e5d500d3ae9a4c8f3ff10111e1d895ae936527c4ac750e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_merkle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 02:33:22 np0005558241 systemd[1]: var-lib-containers-storage-overlay-795642abf8654d9d7024a22c92162111ef9609f71bc366d75804e34ee08345d1-merged.mount: Deactivated successfully.
Dec 13 02:33:22 np0005558241 podman[94244]: 2025-12-13 07:33:22.559887993 +0000 UTC m=+0.198528502 container remove d03f2524d537b52d75e5d500d3ae9a4c8f3ff10111e1d895ae936527c4ac750e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:33:22 np0005558241 systemd[1]: libpod-conmon-d03f2524d537b52d75e5d500d3ae9a4c8f3ff10111e1d895ae936527c4ac750e.scope: Deactivated successfully.
Dec 13 02:33:22 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec 13 02:33:22 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec 13 02:33:22 np0005558241 podman[94304]: 2025-12-13 07:33:22.754217638 +0000 UTC m=+0.067441987 container create 583de0746514348c162e771a59046c4c83ff26abebf4f9b3ffc46a15cc90d57b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:22 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:33:22 np0005558241 ceph-mgr[76830]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Dec 13 02:33:22 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec 13 02:33:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 13 02:33:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:22 np0005558241 festive_saha[94228]: Scheduled rgw.rgw update...
Dec 13 02:33:22 np0005558241 systemd[1]: Started libpod-conmon-583de0746514348c162e771a59046c4c83ff26abebf4f9b3ffc46a15cc90d57b.scope.
Dec 13 02:33:22 np0005558241 systemd[1]: libpod-e1cd197ca816a6ab7b2ef726e14c27818a2d277847d8e9e36d92fba53fbcf363.scope: Deactivated successfully.
Dec 13 02:33:22 np0005558241 conmon[94228]: conmon e1cd197ca816a6ab7b2e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1cd197ca816a6ab7b2ef726e14c27818a2d277847d8e9e36d92fba53fbcf363.scope/container/memory.events
Dec 13 02:33:22 np0005558241 podman[94212]: 2025-12-13 07:33:22.803349943 +0000 UTC m=+0.683034961 container died e1cd197ca816a6ab7b2ef726e14c27818a2d277847d8e9e36d92fba53fbcf363 (image=quay.io/ceph/ceph:v20, name=festive_saha, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 02:33:22 np0005558241 podman[94304]: 2025-12-13 07:33:22.729206609 +0000 UTC m=+0.042431008 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:22 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c1610753e3572d0ca766fcd8452b70babd2b5adabc10dd9f2209aeddc4b4b1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c1610753e3572d0ca766fcd8452b70babd2b5adabc10dd9f2209aeddc4b4b1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c1610753e3572d0ca766fcd8452b70babd2b5adabc10dd9f2209aeddc4b4b1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c1610753e3572d0ca766fcd8452b70babd2b5adabc10dd9f2209aeddc4b4b1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c1610753e3572d0ca766fcd8452b70babd2b5adabc10dd9f2209aeddc4b4b1b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:22 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8c4627a33ae05e05632c0cd8aca8fe0dbcb4d56217fee4d39dc3405b12e4c1b1-merged.mount: Deactivated successfully.
Dec 13 02:33:22 np0005558241 podman[94304]: 2025-12-13 07:33:22.853343209 +0000 UTC m=+0.166567598 container init 583de0746514348c162e771a59046c4c83ff26abebf4f9b3ffc46a15cc90d57b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 02:33:22 np0005558241 podman[94304]: 2025-12-13 07:33:22.870394998 +0000 UTC m=+0.183619327 container start 583de0746514348c162e771a59046c4c83ff26abebf4f9b3ffc46a15cc90d57b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ganguly, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:22 np0005558241 podman[94304]: 2025-12-13 07:33:22.874571173 +0000 UTC m=+0.187795582 container attach 583de0746514348c162e771a59046c4c83ff26abebf4f9b3ffc46a15cc90d57b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ganguly, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:22 np0005558241 podman[94212]: 2025-12-13 07:33:22.884452051 +0000 UTC m=+0.764137069 container remove e1cd197ca816a6ab7b2ef726e14c27818a2d277847d8e9e36d92fba53fbcf363 (image=quay.io/ceph/ceph:v20, name=festive_saha, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 02:33:22 np0005558241 systemd[1]: libpod-conmon-e1cd197ca816a6ab7b2ef726e14c27818a2d277847d8e9e36d92fba53fbcf363.scope: Deactivated successfully.
Dec 13 02:33:22 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:33:22 np0005558241 ceph-mon[76537]: Saving service rgw.rgw spec with placement compute-0
Dec 13 02:33:22 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:33:23 np0005558241 quirky_ganguly[94322]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:33:23 np0005558241 quirky_ganguly[94322]: --> All data devices are unavailable
Dec 13 02:33:23 np0005558241 systemd[1]: libpod-583de0746514348c162e771a59046c4c83ff26abebf4f9b3ffc46a15cc90d57b.scope: Deactivated successfully.
Dec 13 02:33:23 np0005558241 conmon[94322]: conmon 583de0746514348c162e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-583de0746514348c162e771a59046c4c83ff26abebf4f9b3ffc46a15cc90d57b.scope/container/memory.events
Dec 13 02:33:23 np0005558241 podman[94304]: 2025-12-13 07:33:23.453060384 +0000 UTC m=+0.766284713 container died 583de0746514348c162e771a59046c4c83ff26abebf4f9b3ffc46a15cc90d57b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ganguly, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 02:33:23 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7c1610753e3572d0ca766fcd8452b70babd2b5adabc10dd9f2209aeddc4b4b1b-merged.mount: Deactivated successfully.
Dec 13 02:33:23 np0005558241 podman[94304]: 2025-12-13 07:33:23.504524018 +0000 UTC m=+0.817748367 container remove 583de0746514348c162e771a59046c4c83ff26abebf4f9b3ffc46a15cc90d57b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_ganguly, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 02:33:23 np0005558241 systemd[1]: libpod-conmon-583de0746514348c162e771a59046c4c83ff26abebf4f9b3ffc46a15cc90d57b.scope: Deactivated successfully.
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec 13 02:33:23 np0005558241 python3[94489]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:33:23 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.031144142s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.191452026s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.031104088s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.191452026s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.15( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.079575539s) [2] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 89.239868164s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.14( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.079504967s) [2] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 89.239906311s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.15( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.079472542s) [2] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 89.239868164s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.14( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.079463959s) [2] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 89.239906311s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.17( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.079382896s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 89.239944458s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.17( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.079363823s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 89.239944458s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.030652046s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.191268921s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.030638695s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.191268921s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.11( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.079313278s) [2] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 89.239974976s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.030571938s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.191268921s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.11( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.079287529s) [2] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 89.239974976s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.030218124s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.190971375s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.030205727s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.190971375s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.030488968s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.191268921s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.13( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.079340935s) [2] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 89.240150452s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.13( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.079330444s) [2] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 89.240150452s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.030454636s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.191268921s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.030190468s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.191040039s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.079099655s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 89.240066528s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.029996872s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.190956116s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.079087257s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 89.240066528s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.029972076s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.190956116s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.030373573s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.191459656s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.030314445s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.191459656s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.c( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.078893661s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 89.240051270s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.c( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.078868866s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 89.240051270s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.030179024s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.191040039s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.f( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.078819275s) [2] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 89.240074158s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.029693604s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.190956116s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.f( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.078806877s) [2] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 89.240074158s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.078767776s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 89.240112305s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.078758240s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 89.240112305s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.029596329s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.190956116s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.030561447s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.191268921s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.025777817s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.187286377s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.025766373s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.187286377s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.029307365s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.190963745s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.029294968s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.190963745s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.1( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.078407288s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 89.240165710s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.1( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.078383446s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 89.240165710s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.025307655s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.187118530s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.025296211s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.187118530s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.6( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.078392029s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 89.240318298s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.6( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.078380585s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 89.240318298s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.025166512s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.187156677s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.b( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.078267097s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 89.240295410s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.025143623s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.187156677s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.b( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.078253746s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 89.240295410s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.024986267s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.187126160s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.024974823s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.187126160s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.024966240s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.187171936s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.024920464s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.187171936s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.024727821s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.187011719s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.024716377s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.187011719s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.024713516s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.187088013s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.024702072s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.187088013s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.077989578s) [2] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 89.240394592s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.077967644s) [2] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 89.240394592s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.4( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.077948570s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 89.240447998s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.4( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.077937126s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 89.240447998s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.024406433s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.187004089s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.024394989s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.187004089s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.024616241s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.187271118s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.1e( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.077732086s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 89.240470886s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.1e( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.077722549s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 89.240470886s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.024096489s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 95.186927795s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.024084091s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.186927795s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=36/37 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.024584770s) [1] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 95.187271118s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.1f( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.077476501s) [2] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 89.240524292s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.1f( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.077239037s) [2] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 89.240524292s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.1c( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.077465057s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 89.240570068s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.1c( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.077162743s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 89.240570068s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.077032089s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 89.240493774s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.077009201s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 89.240493774s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.1d( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.077079773s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 89.240585327s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[6.1d( empty local-lis/les=38/39 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.077052116s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 89.240585327s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[6.1e( empty local-lis/les=0/0 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[6.f( empty local-lis/les=0/0 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [2] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[6.c( empty local-lis/les=0/0 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=0/0 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[6.8( empty local-lis/les=0/0 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [2] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[6.14( empty local-lis/les=0/0 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [2] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[6.15( empty local-lis/les=0/0 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [2] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[6.11( empty local-lis/les=0/0 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [2] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[6.13( empty local-lis/les=0/0 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [2] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[6.1f( empty local-lis/les=0/0 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [2] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.061486244s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.330886841s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.061463356s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.330886841s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.064623833s) [0] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.334068298s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.064593315s) [0] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.334068298s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.005872726s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.275428772s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.005861282s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.275428772s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.005803108s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.275444031s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.005617142s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.275276184s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.005601883s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.275276184s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.005767822s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.275444031s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.005720139s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.275436401s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.005698204s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.275436401s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.005441666s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.275268555s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.005431175s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.275268555s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.064250946s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.334167480s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.064242363s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.334167480s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.064113617s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.334121704s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.065975189s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.335983276s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.064103127s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.334121704s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.065950394s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.335983276s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.005123138s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.275238037s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.064017296s) [0] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.334159851s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.064066887s) [0] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.334228516s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.064054489s) [0] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.334228516s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.063988686s) [0] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.334159851s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.004947662s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.275222778s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.063969612s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.334251404s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.004903793s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.275222778s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.004915237s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.275260925s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.063903809s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.334251404s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.004894257s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.275260925s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.004782677s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.275222778s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.004754066s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.275222778s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.063827515s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.334342957s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.004642487s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.275169373s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.063811302s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.334342957s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.004631042s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.275169373s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.063770294s) [0] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.334434509s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.004351616s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.275054932s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.063747406s) [0] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.334434509s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.004518509s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.275245667s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.004323959s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.275054932s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.063650131s) [0] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.334403992s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.004490852s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.275245667s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.063597679s) [0] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.334403992s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.004195213s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.275039673s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.004182816s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.275039673s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.063496590s) [0] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.334396362s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.004161835s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.275093079s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.063473701s) [0] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.334396362s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.065035820s) [0] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.335983276s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.065025330s) [0] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.335983276s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.004143715s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.275093079s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[6.2( empty local-lis/les=0/0 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[6.6( empty local-lis/les=0/0 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[6.4( empty local-lis/les=0/0 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=0/0 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.065103531s) [0] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.336135864s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.065093040s) [0] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.336135864s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.004003525s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.275077820s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.064902306s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.336006165s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[6.e( empty local-lis/les=0/0 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.064885139s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.336006165s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[6.b( empty local-lis/les=0/0 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.003520966s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.274650574s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.003222466s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.274360657s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.003977776s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.275077820s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.003211975s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.274360657s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[6.17( empty local-lis/les=0/0 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.003498077s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.274650574s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.003057480s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.274291992s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.003017426s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.274291992s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.003271103s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.274642944s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.003261566s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.274642944s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=0/0 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.064705849s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.336151123s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.064697266s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.336151123s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[6.1d( empty local-lis/les=0/0 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.002829552s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.274360657s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.002820969s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.274360657s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[6.1c( empty local-lis/les=0/0 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=12.996954918s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.268547058s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=12.996944427s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.268547058s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.064547539s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.336227417s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.064538002s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.336227417s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.002423286s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.274215698s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.002410889s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.274215698s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.064409256s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.336242676s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.064385414s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.336242676s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.064365387s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.336250305s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.064354897s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.336250305s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [0] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.001607895s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 active pruub 77.274291992s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.001569748s) [1] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.274291992s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.062609673s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 active pruub 73.336013794s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=38/39 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42 pruub=9.062569618s) [1] r=-1 lpr=42 pi=[38,42)/1 crt=0'0 unknown NOTIFY pruub 73.336013794s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=34/35 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42 pruub=13.001566887s) [0] r=-1 lpr=42 pi=[34,42)/1 crt=0'0 unknown NOTIFY pruub 77.275238037s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [0] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [0] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.103343964s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.978134155s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.103265762s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.978134155s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.016919136s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.891921997s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.016901970s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.891921997s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.103000641s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.978126526s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.016719818s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.891876221s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.102976799s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.978126526s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [0] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [0] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [0] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [0] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [0] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.016702652s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.891876221s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.016550064s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.891860962s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.102837563s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.978149414s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.016530991s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.891860962s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.102818489s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.978149414s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.016304016s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.891845703s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.016289711s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.891845703s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.016138077s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.891723633s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.016119003s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.891723633s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.102755547s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.978401184s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.102743149s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.978401184s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.015946388s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.891716003s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.015952110s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.891723633s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.015934944s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.891716003s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.015934944s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.891723633s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.107129097s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.982986450s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.107117653s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.982986450s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.015772820s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.891716003s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.106914520s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.982849121s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.015763283s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.891716003s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.106897354s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.982849121s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.106849670s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.982879639s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.106839180s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.982879639s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.106842995s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.982948303s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.106824875s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.982948303s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.106927872s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.983062744s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.106916428s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.983062744s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.106788635s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.983070374s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.015591621s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.891914368s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.106770515s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.983070374s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.015579224s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.891914368s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.015115738s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.891532898s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.015106201s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.891532898s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.106596947s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.983047485s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.106575966s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.983047485s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.014998436s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.891517639s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.014984131s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.891517639s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.014832497s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.891456604s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.106432915s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.983078003s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.014813423s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.891456604s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.106417656s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.983078003s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.106236458s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.983108521s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.106218338s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.983108521s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.014457703s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.891448975s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.106435776s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.983459473s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.014437675s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.891448975s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.106419563s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.983459473s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.014388084s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.891525269s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.105962753s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.983108521s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.014369011s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.891525269s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.105948448s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.983108521s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.014038086s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.891304016s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.013995171s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.891288757s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.014021873s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.891304016s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.013975143s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.891288757s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.017409325s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.894851685s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.105943680s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.983406067s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.017392159s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.894851685s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.105927467s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.983406067s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.105885506s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.983436584s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.105874062s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.983436584s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.013700485s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.891296387s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.013685226s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.891296387s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.105445862s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.983116150s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.105433464s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.983116150s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.009546280s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.887268066s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.009529114s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.887268066s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.105571747s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.983383179s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.105561256s) [2] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.983383179s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.105547905s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 active pruub 83.983436584s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.013378143s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.891281128s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.013360023s) [2] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.891281128s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.013604164s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 active pruub 87.891563416s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=36/37 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42 pruub=15.013590813s) [0] r=-1 lpr=42 pi=[36,42)/1 crt=0'0 unknown NOTIFY pruub 87.891563416s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=40/41 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42 pruub=11.105371475s) [0] r=-1 lpr=42 pi=[40,42)/1 crt=0'0 unknown NOTIFY pruub 83.983436584s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:23 np0005558241 podman[94503]: 2025-12-13 07:33:23.981322143 +0000 UTC m=+0.075037917 container create bb890639d9ec3c86f7a3a701fd41bf100c81f858a6957eb9b01cd7b9af8fd209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_noether, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 02:33:24 np0005558241 systemd[1]: Started libpod-conmon-bb890639d9ec3c86f7a3a701fd41bf100c81f858a6957eb9b01cd7b9af8fd209.scope.
Dec 13 02:33:24 np0005558241 podman[94503]: 2025-12-13 07:33:23.936344543 +0000 UTC m=+0.030060347 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:24 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:24 np0005558241 podman[94503]: 2025-12-13 07:33:24.064641857 +0000 UTC m=+0.158357641 container init bb890639d9ec3c86f7a3a701fd41bf100c81f858a6957eb9b01cd7b9af8fd209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_noether, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:24 np0005558241 podman[94503]: 2025-12-13 07:33:24.072103635 +0000 UTC m=+0.165819399 container start bb890639d9ec3c86f7a3a701fd41bf100c81f858a6957eb9b01cd7b9af8fd209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 02:33:24 np0005558241 podman[94503]: 2025-12-13 07:33:24.076932636 +0000 UTC m=+0.170648450 container attach bb890639d9ec3c86f7a3a701fd41bf100c81f858a6957eb9b01cd7b9af8fd209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:24 np0005558241 great_noether[94542]: 167 167
Dec 13 02:33:24 np0005558241 systemd[1]: libpod-bb890639d9ec3c86f7a3a701fd41bf100c81f858a6957eb9b01cd7b9af8fd209.scope: Deactivated successfully.
Dec 13 02:33:24 np0005558241 podman[94503]: 2025-12-13 07:33:24.080143907 +0000 UTC m=+0.173859671 container died bb890639d9ec3c86f7a3a701fd41bf100c81f858a6957eb9b01cd7b9af8fd209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:33:24 np0005558241 systemd[1]: var-lib-containers-storage-overlay-56b7e7eb7ae83ab4b1740b08711a11b643674b4f4c6c40cc44ffab7c3fedc3c3-merged.mount: Deactivated successfully.
Dec 13 02:33:24 np0005558241 podman[94503]: 2025-12-13 07:33:24.123151858 +0000 UTC m=+0.216867642 container remove bb890639d9ec3c86f7a3a701fd41bf100c81f858a6957eb9b01cd7b9af8fd209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_noether, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:33:24 np0005558241 systemd[1]: libpod-conmon-bb890639d9ec3c86f7a3a701fd41bf100c81f858a6957eb9b01cd7b9af8fd209.scope: Deactivated successfully.
Dec 13 02:33:24 np0005558241 python3[94606]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765611203.6503146-36778-33797347754738/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:33:24 np0005558241 podman[94612]: 2025-12-13 07:33:24.296407533 +0000 UTC m=+0.060094421 container create 42bb68742b01336c867c1e14f1106125be91331eb4ff7de1f08d6059c2ecede2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 02:33:24 np0005558241 systemd[1]: Started libpod-conmon-42bb68742b01336c867c1e14f1106125be91331eb4ff7de1f08d6059c2ecede2.scope.
Dec 13 02:33:24 np0005558241 podman[94612]: 2025-12-13 07:33:24.266458781 +0000 UTC m=+0.030145739 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:24 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9687d241b8671a3fc0d4b4d6058d3ceaf95a839ce4a54b519d57dae12f5e6ff5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9687d241b8671a3fc0d4b4d6058d3ceaf95a839ce4a54b519d57dae12f5e6ff5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9687d241b8671a3fc0d4b4d6058d3ceaf95a839ce4a54b519d57dae12f5e6ff5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9687d241b8671a3fc0d4b4d6058d3ceaf95a839ce4a54b519d57dae12f5e6ff5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:24 np0005558241 podman[94612]: 2025-12-13 07:33:24.411903267 +0000 UTC m=+0.175590185 container init 42bb68742b01336c867c1e14f1106125be91331eb4ff7de1f08d6059c2ecede2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_murdock, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 02:33:24 np0005558241 podman[94612]: 2025-12-13 07:33:24.419755074 +0000 UTC m=+0.183441962 container start 42bb68742b01336c867c1e14f1106125be91331eb4ff7de1f08d6059c2ecede2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_murdock, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 02:33:24 np0005558241 podman[94612]: 2025-12-13 07:33:24.422952354 +0000 UTC m=+0.186639242 container attach 42bb68742b01336c867c1e14f1106125be91331eb4ff7de1f08d6059c2ecede2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_murdock, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 02:33:24 np0005558241 loving_murdock[94630]: {
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:    "0": [
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:        {
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "devices": [
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "/dev/loop3"
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            ],
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "lv_name": "ceph_lv0",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "lv_size": "21470642176",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "name": "ceph_lv0",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "tags": {
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.cluster_name": "ceph",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.crush_device_class": "",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.encrypted": "0",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.objectstore": "bluestore",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.osd_id": "0",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.type": "block",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.vdo": "0",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.with_tpm": "0"
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            },
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "type": "block",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "vg_name": "ceph_vg0"
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:        }
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:    ],
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:    "1": [
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:        {
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "devices": [
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "/dev/loop4"
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            ],
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "lv_name": "ceph_lv1",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "lv_size": "21470642176",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "name": "ceph_lv1",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "tags": {
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.cluster_name": "ceph",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.crush_device_class": "",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.encrypted": "0",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.objectstore": "bluestore",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.osd_id": "1",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.type": "block",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.vdo": "0",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.with_tpm": "0"
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            },
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "type": "block",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "vg_name": "ceph_vg1"
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:        }
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:    ],
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:    "2": [
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:        {
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "devices": [
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "/dev/loop5"
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            ],
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "lv_name": "ceph_lv2",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "lv_size": "21470642176",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "name": "ceph_lv2",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "tags": {
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.cluster_name": "ceph",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.crush_device_class": "",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.encrypted": "0",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.objectstore": "bluestore",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.osd_id": "2",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.type": "block",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.vdo": "0",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:                "ceph.with_tpm": "0"
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            },
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "type": "block",
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:            "vg_name": "ceph_vg2"
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:        }
Dec 13 02:33:24 np0005558241 loving_murdock[94630]:    ]
Dec 13 02:33:24 np0005558241 loving_murdock[94630]: }
Dec 13 02:33:24 np0005558241 systemd[1]: libpod-42bb68742b01336c867c1e14f1106125be91331eb4ff7de1f08d6059c2ecede2.scope: Deactivated successfully.
Dec 13 02:33:24 np0005558241 podman[94612]: 2025-12-13 07:33:24.749346629 +0000 UTC m=+0.513033557 container died 42bb68742b01336c867c1e14f1106125be91331eb4ff7de1f08d6059c2ecede2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_murdock, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:24 np0005558241 python3[94685]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:24 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9687d241b8671a3fc0d4b4d6058d3ceaf95a839ce4a54b519d57dae12f5e6ff5-merged.mount: Deactivated successfully.
Dec 13 02:33:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec 13 02:33:25 np0005558241 podman[94612]: 2025-12-13 07:33:25.064063139 +0000 UTC m=+0.827750027 container remove 42bb68742b01336c867c1e14f1106125be91331eb4ff7de1f08d6059c2ecede2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Dec 13 02:33:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec 13 02:33:25 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [0] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [0] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [0] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [0] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [0] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [0] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [0] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [0] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [0] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [0] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[6.f( empty local-lis/les=42/43 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [2] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:33:25 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:33:25 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:33:25 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:33:25 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:33:25 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[6.8( empty local-lis/les=42/43 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [2] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[6.14( empty local-lis/les=42/43 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [2] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[6.15( empty local-lis/les=42/43 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [2] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[6.11( empty local-lis/les=42/43 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [2] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[6.13( empty local-lis/les=42/43 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [2] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[6.1f( empty local-lis/les=42/43 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [2] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=36/21 lis/c=36/36 les/c/f=37/37/0 sis=42) [2] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=40/28 lis/c=40/40 les/c/f=41/41/0 sis=42) [2] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[6.c( empty local-lis/les=42/43 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=42/43 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[6.4( empty local-lis/les=42/43 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[6.1e( empty local-lis/les=42/43 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[6.1( empty local-lis/les=42/43 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=42/43 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[6.d( empty local-lis/les=42/43 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[6.b( empty local-lis/les=42/43 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[6.6( empty local-lis/les=42/43 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[6.17( empty local-lis/les=42/43 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[6.1d( empty local-lis/les=42/43 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[6.1c( empty local-lis/les=42/43 n=0 ec=38/26 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=34/19 lis/c=34/34 les/c/f=35/35/0 sis=42) [1] r=0 lpr=42 pi=[34,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=38/24 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=36/22 lis/c=36/36 les/c/f=37/37/0 sis=42) [1] r=0 lpr=42 pi=[36,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:25 np0005558241 systemd[1]: libpod-conmon-42bb68742b01336c867c1e14f1106125be91331eb4ff7de1f08d6059c2ecede2.scope: Deactivated successfully.
Dec 13 02:33:25 np0005558241 podman[94695]: 2025-12-13 07:33:25.172224298 +0000 UTC m=+0.375108049 container create 7dbeb94c6fa271774f94920856911014e19bb311d64ab77fabb9320a30fd9ddc (image=quay.io/ceph/ceph:v20, name=keen_lehmann, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 02:33:25 np0005558241 podman[94695]: 2025-12-13 07:33:25.153438286 +0000 UTC m=+0.356322067 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v102: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:25 np0005558241 systemd[1]: Started libpod-conmon-7dbeb94c6fa271774f94920856911014e19bb311d64ab77fabb9320a30fd9ddc.scope.
Dec 13 02:33:25 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:25 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0680fde3d78b2fc5f143e1307f49969d7633e8b3d55265999059cf18b9aa8b3c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:25 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0680fde3d78b2fc5f143e1307f49969d7633e8b3d55265999059cf18b9aa8b3c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:25 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0680fde3d78b2fc5f143e1307f49969d7633e8b3d55265999059cf18b9aa8b3c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:25 np0005558241 podman[94695]: 2025-12-13 07:33:25.529275073 +0000 UTC m=+0.732158894 container init 7dbeb94c6fa271774f94920856911014e19bb311d64ab77fabb9320a30fd9ddc (image=quay.io/ceph/ceph:v20, name=keen_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 02:33:25 np0005558241 podman[94695]: 2025-12-13 07:33:25.54030158 +0000 UTC m=+0.743185361 container start 7dbeb94c6fa271774f94920856911014e19bb311d64ab77fabb9320a30fd9ddc (image=quay.io/ceph/ceph:v20, name=keen_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:25 np0005558241 podman[94695]: 2025-12-13 07:33:25.556296872 +0000 UTC m=+0.759180723 container attach 7dbeb94c6fa271774f94920856911014e19bb311d64ab77fabb9320a30fd9ddc (image=quay.io/ceph/ceph:v20, name=keen_lehmann, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 02:33:25 np0005558241 podman[94780]: 2025-12-13 07:33:25.682148476 +0000 UTC m=+0.085142761 container create 5321c5dcf5d2233673d4bc06695b55c9cd074586376b401c6fe1d595db3184e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec 13 02:33:25 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec 13 02:33:25 np0005558241 podman[94780]: 2025-12-13 07:33:25.623412499 +0000 UTC m=+0.026406874 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:25 np0005558241 systemd[1]: Started libpod-conmon-5321c5dcf5d2233673d4bc06695b55c9cd074586376b401c6fe1d595db3184e7.scope.
Dec 13 02:33:25 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:25 np0005558241 podman[94780]: 2025-12-13 07:33:25.844832795 +0000 UTC m=+0.247827180 container init 5321c5dcf5d2233673d4bc06695b55c9cd074586376b401c6fe1d595db3184e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:33:25 np0005558241 podman[94780]: 2025-12-13 07:33:25.855548655 +0000 UTC m=+0.258542980 container start 5321c5dcf5d2233673d4bc06695b55c9cd074586376b401c6fe1d595db3184e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:25 np0005558241 podman[94780]: 2025-12-13 07:33:25.86133552 +0000 UTC m=+0.264329855 container attach 5321c5dcf5d2233673d4bc06695b55c9cd074586376b401c6fe1d595db3184e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_thompson, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Dec 13 02:33:25 np0005558241 suspicious_thompson[94815]: 167 167
Dec 13 02:33:25 np0005558241 systemd[1]: libpod-5321c5dcf5d2233673d4bc06695b55c9cd074586376b401c6fe1d595db3184e7.scope: Deactivated successfully.
Dec 13 02:33:25 np0005558241 podman[94780]: 2025-12-13 07:33:25.86294689 +0000 UTC m=+0.265941235 container died 5321c5dcf5d2233673d4bc06695b55c9cd074586376b401c6fe1d595db3184e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_thompson, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 02:33:25 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d76a45fce5ff52d0db7efdf0aa4c297db7081b2f71ea0ac052289468a7554a0a-merged.mount: Deactivated successfully.
Dec 13 02:33:25 np0005558241 ceph-mgr[76830]: [progress INFO root] Completed event 05196af1-dc3f-4568-ab3a-77d9e440ebf7 (Global Recovery Event) in 11 seconds
Dec 13 02:33:25 np0005558241 podman[94780]: 2025-12-13 07:33:25.923260287 +0000 UTC m=+0.326254582 container remove 5321c5dcf5d2233673d4bc06695b55c9cd074586376b401c6fe1d595db3184e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 02:33:25 np0005558241 systemd[1]: libpod-conmon-5321c5dcf5d2233673d4bc06695b55c9cd074586376b401c6fe1d595db3184e7.scope: Deactivated successfully.
Dec 13 02:33:26 np0005558241 podman[94839]: 2025-12-13 07:33:26.088316236 +0000 UTC m=+0.042317245 container create 826eb793d1f2dfa77897145ab4f6d70687a7c9d8fc076498f00b10cc1e0e2d53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 02:33:26 np0005558241 systemd[1]: Started libpod-conmon-826eb793d1f2dfa77897145ab4f6d70687a7c9d8fc076498f00b10cc1e0e2d53.scope.
Dec 13 02:33:26 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:33:26 np0005558241 ceph-mgr[76830]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 13 02:33:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Dec 13 02:33:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Dec 13 02:33:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Dec 13 02:33:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Dec 13 02:33:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Dec 13 02:33:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Dec 13 02:33:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec 13 02:33:26 np0005558241 ceph-mon[76537]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 13 02:33:26 np0005558241 ceph-mon[76537]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 13 02:33:26 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0[76533]: 2025-12-13T07:33:26.155+0000 7f7b155da640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 13 02:33:26 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 13 02:33:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).mds e2 new map
Dec 13 02:33:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).mds e2 print_map
e2
btime 2025-12-13T07:33:26:155918+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	2
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-12-13T07:33:26.155600+0000
modified	2025-12-13T07:33:26.155600+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds	1
in
up	{}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	0
qdb_cluster	leader: 0 members:
Dec 13 02:33:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec 13 02:33:26 np0005558241 podman[94839]: 2025-12-13 07:33:26.070131518 +0000 UTC m=+0.024132517 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45cfab49d234def2da92efc15412c0a8969dd1a7686af44dd34140c6bc32bf26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:26 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec 13 02:33:26 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec 13 02:33:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45cfab49d234def2da92efc15412c0a8969dd1a7686af44dd34140c6bc32bf26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45cfab49d234def2da92efc15412c0a8969dd1a7686af44dd34140c6bc32bf26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:26 np0005558241 ceph-mgr[76830]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec 13 02:33:26 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec 13 02:33:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 13 02:33:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45cfab49d234def2da92efc15412c0a8969dd1a7686af44dd34140c6bc32bf26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:26 np0005558241 ceph-mgr[76830]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 13 02:33:26 np0005558241 podman[94839]: 2025-12-13 07:33:26.185874878 +0000 UTC m=+0.139875907 container init 826eb793d1f2dfa77897145ab4f6d70687a7c9d8fc076498f00b10cc1e0e2d53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:26 np0005558241 podman[94839]: 2025-12-13 07:33:26.192938245 +0000 UTC m=+0.146939284 container start 826eb793d1f2dfa77897145ab4f6d70687a7c9d8fc076498f00b10cc1e0e2d53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 02:33:26 np0005558241 podman[94839]: 2025-12-13 07:33:26.198210908 +0000 UTC m=+0.152211927 container attach 826eb793d1f2dfa77897145ab4f6d70687a7c9d8fc076498f00b10cc1e0e2d53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:26 np0005558241 systemd[1]: libpod-7dbeb94c6fa271774f94920856911014e19bb311d64ab77fabb9320a30fd9ddc.scope: Deactivated successfully.
Dec 13 02:33:26 np0005558241 podman[94695]: 2025-12-13 07:33:26.210343873 +0000 UTC m=+1.413227654 container died 7dbeb94c6fa271774f94920856911014e19bb311d64ab77fabb9320a30fd9ddc (image=quay.io/ceph/ceph:v20, name=keen_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 02:33:26 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0680fde3d78b2fc5f143e1307f49969d7633e8b3d55265999059cf18b9aa8b3c-merged.mount: Deactivated successfully.
Dec 13 02:33:26 np0005558241 podman[94695]: 2025-12-13 07:33:26.258451552 +0000 UTC m=+1.461335313 container remove 7dbeb94c6fa271774f94920856911014e19bb311d64ab77fabb9320a30fd9ddc (image=quay.io/ceph/ceph:v20, name=keen_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 02:33:26 np0005558241 systemd[1]: libpod-conmon-7dbeb94c6fa271774f94920856911014e19bb311d64ab77fabb9320a30fd9ddc.scope: Deactivated successfully.
Dec 13 02:33:26 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec 13 02:33:26 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec 13 02:33:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:26 np0005558241 python3[94910]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:26 np0005558241 podman[94933]: 2025-12-13 07:33:26.704030613 +0000 UTC m=+0.049393673 container create c9efc88489d12a6fe8b8f9f2c1b82094d709289dd553fecf1184011380481ae9 (image=quay.io/ceph/ceph:v20, name=eloquent_mccarthy, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:26 np0005558241 systemd[1]: Started libpod-conmon-c9efc88489d12a6fe8b8f9f2c1b82094d709289dd553fecf1184011380481ae9.scope.
Dec 13 02:33:26 np0005558241 podman[94933]: 2025-12-13 07:33:26.685250181 +0000 UTC m=+0.030613281 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:26 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab8a85bbf2bca28e87d780e4303f2e5f91c0fbbc8f73e10d8da2231ae93dac21/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab8a85bbf2bca28e87d780e4303f2e5f91c0fbbc8f73e10d8da2231ae93dac21/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab8a85bbf2bca28e87d780e4303f2e5f91c0fbbc8f73e10d8da2231ae93dac21/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:26 np0005558241 podman[94933]: 2025-12-13 07:33:26.802631281 +0000 UTC m=+0.147994411 container init c9efc88489d12a6fe8b8f9f2c1b82094d709289dd553fecf1184011380481ae9 (image=quay.io/ceph/ceph:v20, name=eloquent_mccarthy, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:26 np0005558241 podman[94933]: 2025-12-13 07:33:26.80932863 +0000 UTC m=+0.154691690 container start c9efc88489d12a6fe8b8f9f2c1b82094d709289dd553fecf1184011380481ae9 (image=quay.io/ceph/ceph:v20, name=eloquent_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 02:33:26 np0005558241 podman[94933]: 2025-12-13 07:33:26.81331678 +0000 UTC m=+0.158679870 container attach c9efc88489d12a6fe8b8f9f2c1b82094d709289dd553fecf1184011380481ae9 (image=quay.io/ceph/ceph:v20, name=eloquent_mccarthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:26 np0005558241 lvm[94991]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:33:26 np0005558241 lvm[94991]: VG ceph_vg0 finished
Dec 13 02:33:26 np0005558241 lvm[95011]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:33:26 np0005558241 lvm[95011]: VG ceph_vg1 finished
Dec 13 02:33:26 np0005558241 lvm[95013]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:33:26 np0005558241 lvm[95013]: VG ceph_vg2 finished
Dec 13 02:33:26 np0005558241 lvm[95014]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:33:26 np0005558241 lvm[95014]: VG ceph_vg0 finished
Dec 13 02:33:27 np0005558241 busy_ramanujan[94856]: {}
Dec 13 02:33:27 np0005558241 systemd[1]: libpod-826eb793d1f2dfa77897145ab4f6d70687a7c9d8fc076498f00b10cc1e0e2d53.scope: Deactivated successfully.
Dec 13 02:33:27 np0005558241 podman[94839]: 2025-12-13 07:33:27.089761329 +0000 UTC m=+1.043762328 container died 826eb793d1f2dfa77897145ab4f6d70687a7c9d8fc076498f00b10cc1e0e2d53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ramanujan, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:27 np0005558241 systemd[1]: libpod-826eb793d1f2dfa77897145ab4f6d70687a7c9d8fc076498f00b10cc1e0e2d53.scope: Consumed 1.378s CPU time.
Dec 13 02:33:27 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Dec 13 02:33:27 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Dec 13 02:33:27 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Dec 13 02:33:27 np0005558241 ceph-mon[76537]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 13 02:33:27 np0005558241 ceph-mon[76537]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 13 02:33:27 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 13 02:33:27 np0005558241 ceph-mon[76537]: Saving service mds.cephfs spec with placement compute-0
Dec 13 02:33:27 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:27 np0005558241 systemd[1]: var-lib-containers-storage-overlay-45cfab49d234def2da92efc15412c0a8969dd1a7686af44dd34140c6bc32bf26-merged.mount: Deactivated successfully.
Dec 13 02:33:27 np0005558241 podman[94839]: 2025-12-13 07:33:27.135185571 +0000 UTC m=+1.089186570 container remove 826eb793d1f2dfa77897145ab4f6d70687a7c9d8fc076498f00b10cc1e0e2d53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:33:27 np0005558241 systemd[1]: libpod-conmon-826eb793d1f2dfa77897145ab4f6d70687a7c9d8fc076498f00b10cc1e0e2d53.scope: Deactivated successfully.
Dec 13 02:33:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:33:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:33:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:27 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14240 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 02:33:27 np0005558241 ceph-mgr[76830]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec 13 02:33:27 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec 13 02:33:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 13 02:33:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:27 np0005558241 eloquent_mccarthy[94974]: Scheduled mds.cephfs update...
Dec 13 02:33:27 np0005558241 systemd[1]: libpod-c9efc88489d12a6fe8b8f9f2c1b82094d709289dd553fecf1184011380481ae9.scope: Deactivated successfully.
Dec 13 02:33:27 np0005558241 podman[94933]: 2025-12-13 07:33:27.346324668 +0000 UTC m=+0.691687818 container died c9efc88489d12a6fe8b8f9f2c1b82094d709289dd553fecf1184011380481ae9 (image=quay.io/ceph/ceph:v20, name=eloquent_mccarthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 02:33:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:27 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ab8a85bbf2bca28e87d780e4303f2e5f91c0fbbc8f73e10d8da2231ae93dac21-merged.mount: Deactivated successfully.
Dec 13 02:33:27 np0005558241 podman[94933]: 2025-12-13 07:33:27.404610223 +0000 UTC m=+0.749973323 container remove c9efc88489d12a6fe8b8f9f2c1b82094d709289dd553fecf1184011380481ae9 (image=quay.io/ceph/ceph:v20, name=eloquent_mccarthy, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 02:33:27 np0005558241 systemd[1]: libpod-conmon-c9efc88489d12a6fe8b8f9f2c1b82094d709289dd553fecf1184011380481ae9.scope: Deactivated successfully.
Dec 13 02:33:27 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec 13 02:33:27 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec 13 02:33:28 np0005558241 podman[95222]: 2025-12-13 07:33:28.156898633 +0000 UTC m=+0.193670559 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 02:33:28 np0005558241 python3[95255]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 02:33:28 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:28 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:28 np0005558241 ceph-mon[76537]: Saving service mds.cephfs spec with placement compute-0
Dec 13 02:33:28 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:28 np0005558241 podman[95286]: 2025-12-13 07:33:28.471485771 +0000 UTC m=+0.186380116 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:33:28 np0005558241 podman[95222]: 2025-12-13 07:33:28.631330157 +0000 UTC m=+0.668102043 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:28 np0005558241 python3[95348]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765611207.8023794-36808-38914182176244/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=a94eaee1cc0bd48006b45780280e826c7650ff45 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:33:29 np0005558241 python3[95464]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:29 np0005558241 podman[95487]: 2025-12-13 07:33:29.35783767 +0000 UTC m=+0.115497854 container create e59fa7345f27077c0c2936f3c6e314a2335e229f12c99b56b28fae28ddf25fa0 (image=quay.io/ceph/ceph:v20, name=upbeat_banach, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:33:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:29 np0005558241 podman[95487]: 2025-12-13 07:33:29.289909642 +0000 UTC m=+0.047569876 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:29 np0005558241 systemd[1]: Started libpod-conmon-e59fa7345f27077c0c2936f3c6e314a2335e229f12c99b56b28fae28ddf25fa0.scope.
Dec 13 02:33:29 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085268e6943c84784bfc9224b740e725d5b7486e6ac6f39da6ed9a0f44c06009/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085268e6943c84784bfc9224b740e725d5b7486e6ac6f39da6ed9a0f44c06009/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:29 np0005558241 podman[95487]: 2025-12-13 07:33:29.521064673 +0000 UTC m=+0.278724907 container init e59fa7345f27077c0c2936f3c6e314a2335e229f12c99b56b28fae28ddf25fa0 (image=quay.io/ceph/ceph:v20, name=upbeat_banach, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:29 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec 13 02:33:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:33:29 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec 13 02:33:29 np0005558241 podman[95487]: 2025-12-13 07:33:29.536093661 +0000 UTC m=+0.293753855 container start e59fa7345f27077c0c2936f3c6e314a2335e229f12c99b56b28fae28ddf25fa0 (image=quay.io/ceph/ceph:v20, name=upbeat_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 02:33:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:33:29 np0005558241 podman[95487]: 2025-12-13 07:33:29.585240576 +0000 UTC m=+0.342900780 container attach e59fa7345f27077c0c2936f3c6e314a2335e229f12c99b56b28fae28ddf25fa0 (image=quay.io/ceph/ceph:v20, name=upbeat_banach, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:29 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec 13 02:33:29 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec 13 02:33:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:33:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:33:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:33:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:33:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:33:29 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:33:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:33:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:33:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:33:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:33:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:33:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Dec 13 02:33:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3212917766' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Dec 13 02:33:30 np0005558241 podman[95618]: 2025-12-13 07:33:30.250364095 +0000 UTC m=+0.030646481 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3212917766' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 13 02:33:30 np0005558241 podman[95618]: 2025-12-13 07:33:30.367406138 +0000 UTC m=+0.147688534 container create 20105e2a91005d03c63d80bde88fbdcf2e0ad4a25f8bf667e99d644b161f33b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_lumiere, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:30 np0005558241 systemd[1]: libpod-e59fa7345f27077c0c2936f3c6e314a2335e229f12c99b56b28fae28ddf25fa0.scope: Deactivated successfully.
Dec 13 02:33:30 np0005558241 podman[95487]: 2025-12-13 07:33:30.421110999 +0000 UTC m=+1.178771213 container died e59fa7345f27077c0c2936f3c6e314a2335e229f12c99b56b28fae28ddf25fa0 (image=quay.io/ceph/ceph:v20, name=upbeat_banach, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:30 np0005558241 systemd[1]: Started libpod-conmon-20105e2a91005d03c63d80bde88fbdcf2e0ad4a25f8bf667e99d644b161f33b1.scope.
Dec 13 02:33:30 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:30 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec 13 02:33:30 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec 13 02:33:30 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec 13 02:33:30 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec 13 02:33:30 np0005558241 podman[95618]: 2025-12-13 07:33:30.685497207 +0000 UTC m=+0.465779623 container init 20105e2a91005d03c63d80bde88fbdcf2e0ad4a25f8bf667e99d644b161f33b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:30 np0005558241 podman[95618]: 2025-12-13 07:33:30.697816772 +0000 UTC m=+0.478099148 container start 20105e2a91005d03c63d80bde88fbdcf2e0ad4a25f8bf667e99d644b161f33b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_lumiere, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 02:33:30 np0005558241 brave_lumiere[95646]: 167 167
Dec 13 02:33:30 np0005558241 systemd[1]: libpod-20105e2a91005d03c63d80bde88fbdcf2e0ad4a25f8bf667e99d644b161f33b1.scope: Deactivated successfully.
Dec 13 02:33:30 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:30 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:33:30 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:30 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:33:30 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3212917766' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Dec 13 02:33:30 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3212917766' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 13 02:33:30 np0005558241 podman[95618]: 2025-12-13 07:33:30.805765786 +0000 UTC m=+0.586048192 container attach 20105e2a91005d03c63d80bde88fbdcf2e0ad4a25f8bf667e99d644b161f33b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:30 np0005558241 podman[95618]: 2025-12-13 07:33:30.806942595 +0000 UTC m=+0.587225001 container died 20105e2a91005d03c63d80bde88fbdcf2e0ad4a25f8bf667e99d644b161f33b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:30 np0005558241 ceph-mgr[76830]: [progress INFO root] Writing back 10 completed events
Dec 13 02:33:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 02:33:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7f7729279a56ea05db8367520a53d9e222c2d47865210b3716f02ec8a37fb976-merged.mount: Deactivated successfully.
Dec 13 02:33:31 np0005558241 podman[95618]: 2025-12-13 07:33:31.022526315 +0000 UTC m=+0.802808721 container remove 20105e2a91005d03c63d80bde88fbdcf2e0ad4a25f8bf667e99d644b161f33b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:31 np0005558241 systemd[1]: var-lib-containers-storage-overlay-085268e6943c84784bfc9224b740e725d5b7486e6ac6f39da6ed9a0f44c06009-merged.mount: Deactivated successfully.
Dec 13 02:33:31 np0005558241 podman[95487]: 2025-12-13 07:33:31.164596493 +0000 UTC m=+1.922256687 container remove e59fa7345f27077c0c2936f3c6e314a2335e229f12c99b56b28fae28ddf25fa0 (image=quay.io/ceph/ceph:v20, name=upbeat_banach, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:33:31 np0005558241 systemd[1]: libpod-conmon-20105e2a91005d03c63d80bde88fbdcf2e0ad4a25f8bf667e99d644b161f33b1.scope: Deactivated successfully.
Dec 13 02:33:31 np0005558241 systemd[1]: libpod-conmon-e59fa7345f27077c0c2936f3c6e314a2335e229f12c99b56b28fae28ddf25fa0.scope: Deactivated successfully.
Dec 13 02:33:31 np0005558241 podman[95672]: 2025-12-13 07:33:31.224882067 +0000 UTC m=+0.031514792 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:31 np0005558241 podman[95672]: 2025-12-13 07:33:31.438294962 +0000 UTC m=+0.244927687 container create f65e11f7fd4c4bc5453a175858bbed04de1d95ec8d3a3750e46e7c38f11283e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:31 np0005558241 systemd[1]: Started libpod-conmon-f65e11f7fd4c4bc5453a175858bbed04de1d95ec8d3a3750e46e7c38f11283e7.scope.
Dec 13 02:33:31 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fc36967da0692ee97172063bc9bb482110ef1714e54c77fb1f9895153da1dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fc36967da0692ee97172063bc9bb482110ef1714e54c77fb1f9895153da1dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fc36967da0692ee97172063bc9bb482110ef1714e54c77fb1f9895153da1dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fc36967da0692ee97172063bc9bb482110ef1714e54c77fb1f9895153da1dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fc36967da0692ee97172063bc9bb482110ef1714e54c77fb1f9895153da1dd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:31 np0005558241 python3[95716]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:32 np0005558241 podman[95672]: 2025-12-13 07:33:32.024504721 +0000 UTC m=+0.831137506 container init f65e11f7fd4c4bc5453a175858bbed04de1d95ec8d3a3750e46e7c38f11283e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lamarr, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:32 np0005558241 podman[95672]: 2025-12-13 07:33:32.037600935 +0000 UTC m=+0.844233660 container start f65e11f7fd4c4bc5453a175858bbed04de1d95ec8d3a3750e46e7c38f11283e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lamarr, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:33:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:32 np0005558241 podman[95718]: 2025-12-13 07:33:32.037168825 +0000 UTC m=+0.034306391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:32 np0005558241 podman[95672]: 2025-12-13 07:33:32.15570409 +0000 UTC m=+0.962336825 container attach f65e11f7fd4c4bc5453a175858bbed04de1d95ec8d3a3750e46e7c38f11283e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lamarr, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 02:33:32 np0005558241 podman[95718]: 2025-12-13 07:33:32.244799716 +0000 UTC m=+0.241937282 container create f5f23fd50f2d03ade59335e198dfe2ed807f02fff77a32401ad3a3f73ea469da (image=quay.io/ceph/ceph:v20, name=elegant_galileo, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:33:32 np0005558241 systemd[1]: Started libpod-conmon-f5f23fd50f2d03ade59335e198dfe2ed807f02fff77a32401ad3a3f73ea469da.scope.
Dec 13 02:33:32 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279e4a6524523153ede6a8c6852713d933333d0f574748c0b3018ba8b0c9a162/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279e4a6524523153ede6a8c6852713d933333d0f574748c0b3018ba8b0c9a162/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:32 np0005558241 podman[95718]: 2025-12-13 07:33:32.416214372 +0000 UTC m=+0.413351918 container init f5f23fd50f2d03ade59335e198dfe2ed807f02fff77a32401ad3a3f73ea469da (image=quay.io/ceph/ceph:v20, name=elegant_galileo, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 02:33:32 np0005558241 podman[95718]: 2025-12-13 07:33:32.424094017 +0000 UTC m=+0.421231533 container start f5f23fd50f2d03ade59335e198dfe2ed807f02fff77a32401ad3a3f73ea469da (image=quay.io/ceph/ceph:v20, name=elegant_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 02:33:32 np0005558241 podman[95718]: 2025-12-13 07:33:32.459681628 +0000 UTC m=+0.456819254 container attach f5f23fd50f2d03ade59335e198dfe2ed807f02fff77a32401ad3a3f73ea469da (image=quay.io/ceph/ceph:v20, name=elegant_galileo, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 02:33:32 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec 13 02:33:32 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec 13 02:33:32 np0005558241 sweet_lamarr[95712]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:33:32 np0005558241 sweet_lamarr[95712]: --> All data devices are unavailable
Dec 13 02:33:32 np0005558241 systemd[1]: libpod-f65e11f7fd4c4bc5453a175858bbed04de1d95ec8d3a3750e46e7c38f11283e7.scope: Deactivated successfully.
Dec 13 02:33:32 np0005558241 podman[95672]: 2025-12-13 07:33:32.600970418 +0000 UTC m=+1.407603143 container died f65e11f7fd4c4bc5453a175858bbed04de1d95ec8d3a3750e46e7c38f11283e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:33:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 13 02:33:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3395721764' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 02:33:32 np0005558241 elegant_galileo[95740]: 
Dec 13 02:33:32 np0005558241 elegant_galileo[95740]: {"fsid":"18ee9de6-e00b-571b-ab9b-b7aab06174df","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":163,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":44,"num_osds":3,"num_up_osds":3,"osd_up_since":1765611153,"num_in_osds":3,"osd_in_since":1765611114,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84312064,"bytes_avail":64327614464,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2025-12-13T07:33:26:155918+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-12-13T07:33:23.379449+0000","services":{"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Dec 13 02:33:32 np0005558241 systemd[1]: libpod-f5f23fd50f2d03ade59335e198dfe2ed807f02fff77a32401ad3a3f73ea469da.scope: Deactivated successfully.
Dec 13 02:33:33 np0005558241 systemd[1]: var-lib-containers-storage-overlay-31fc36967da0692ee97172063bc9bb482110ef1714e54c77fb1f9895153da1dd-merged.mount: Deactivated successfully.
Dec 13 02:33:33 np0005558241 podman[95718]: 2025-12-13 07:33:33.001561679 +0000 UTC m=+0.998699215 container died f5f23fd50f2d03ade59335e198dfe2ed807f02fff77a32401ad3a3f73ea469da (image=quay.io/ceph/ceph:v20, name=elegant_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 02:33:33 np0005558241 systemd[1]: var-lib-containers-storage-overlay-279e4a6524523153ede6a8c6852713d933333d0f574748c0b3018ba8b0c9a162-merged.mount: Deactivated successfully.
Dec 13 02:33:33 np0005558241 podman[95718]: 2025-12-13 07:33:33.303216841 +0000 UTC m=+1.300354377 container remove f5f23fd50f2d03ade59335e198dfe2ed807f02fff77a32401ad3a3f73ea469da (image=quay.io/ceph/ceph:v20, name=elegant_galileo, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:33 np0005558241 systemd[1]: libpod-conmon-f5f23fd50f2d03ade59335e198dfe2ed807f02fff77a32401ad3a3f73ea469da.scope: Deactivated successfully.
Dec 13 02:33:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:33 np0005558241 podman[95672]: 2025-12-13 07:33:33.429396676 +0000 UTC m=+2.236029401 container remove f65e11f7fd4c4bc5453a175858bbed04de1d95ec8d3a3750e46e7c38f11283e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lamarr, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:33 np0005558241 systemd[1]: libpod-conmon-f65e11f7fd4c4bc5453a175858bbed04de1d95ec8d3a3750e46e7c38f11283e7.scope: Deactivated successfully.
Dec 13 02:33:33 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec 13 02:33:33 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec 13 02:33:33 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 02:33:33 np0005558241 python3[95848]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:33 np0005558241 podman[95879]: 2025-12-13 07:33:33.795219866 +0000 UTC m=+0.091612550 container create 6bd1b5115ff378f6e387db1324cedcbe92ad8122fdf26169ecfc3d3307546106 (image=quay.io/ceph/ceph:v20, name=busy_bartik, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:33 np0005558241 podman[95879]: 2025-12-13 07:33:33.72957917 +0000 UTC m=+0.025971854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:33 np0005558241 systemd[1]: Started libpod-conmon-6bd1b5115ff378f6e387db1324cedcbe92ad8122fdf26169ecfc3d3307546106.scope.
Dec 13 02:33:33 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e169e50f9afde6c82b1cf310619b28a32919033885fb60e767e5294666e9975/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e169e50f9afde6c82b1cf310619b28a32919033885fb60e767e5294666e9975/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:33 np0005558241 podman[95879]: 2025-12-13 07:33:33.973873971 +0000 UTC m=+0.270266675 container init 6bd1b5115ff378f6e387db1324cedcbe92ad8122fdf26169ecfc3d3307546106 (image=quay.io/ceph/ceph:v20, name=busy_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec 13 02:33:33 np0005558241 podman[95879]: 2025-12-13 07:33:33.983666264 +0000 UTC m=+0.280058958 container start 6bd1b5115ff378f6e387db1324cedcbe92ad8122fdf26169ecfc3d3307546106 (image=quay.io/ceph/ceph:v20, name=busy_bartik, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:33:34 np0005558241 podman[95879]: 2025-12-13 07:33:34.011883473 +0000 UTC m=+0.308276137 container attach 6bd1b5115ff378f6e387db1324cedcbe92ad8122fdf26169ecfc3d3307546106 (image=quay.io/ceph/ceph:v20, name=busy_bartik, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:34 np0005558241 podman[95909]: 2025-12-13 07:33:34.109210273 +0000 UTC m=+0.192462658 container create 71b904ad59a777b91b5b7861629a26a4175ba70bc2d7eee456503a90cf56edcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:34 np0005558241 podman[95909]: 2025-12-13 07:33:34.026870464 +0000 UTC m=+0.110122869 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:34 np0005558241 systemd[1]: Started libpod-conmon-71b904ad59a777b91b5b7861629a26a4175ba70bc2d7eee456503a90cf56edcc.scope.
Dec 13 02:33:34 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:34 np0005558241 podman[95909]: 2025-12-13 07:33:34.321962462 +0000 UTC m=+0.405214877 container init 71b904ad59a777b91b5b7861629a26a4175ba70bc2d7eee456503a90cf56edcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:34 np0005558241 podman[95909]: 2025-12-13 07:33:34.33236787 +0000 UTC m=+0.415620255 container start 71b904ad59a777b91b5b7861629a26a4175ba70bc2d7eee456503a90cf56edcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_lehmann, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:34 np0005558241 gracious_lehmann[95945]: 167 167
Dec 13 02:33:34 np0005558241 systemd[1]: libpod-71b904ad59a777b91b5b7861629a26a4175ba70bc2d7eee456503a90cf56edcc.scope: Deactivated successfully.
Dec 13 02:33:34 np0005558241 podman[95909]: 2025-12-13 07:33:34.36263369 +0000 UTC m=+0.445886125 container attach 71b904ad59a777b91b5b7861629a26a4175ba70bc2d7eee456503a90cf56edcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 02:33:34 np0005558241 podman[95909]: 2025-12-13 07:33:34.363104021 +0000 UTC m=+0.446356416 container died 71b904ad59a777b91b5b7861629a26a4175ba70bc2d7eee456503a90cf56edcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_lehmann, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a3f842ade3df5ea25207d169dc59f5bf0a6e542c1259a19308695dc8b23aad1a-merged.mount: Deactivated successfully.
Dec 13 02:33:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 02:33:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2043839' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 02:33:34 np0005558241 busy_bartik[95906]: 
Dec 13 02:33:34 np0005558241 busy_bartik[95906]: {"epoch":1,"fsid":"18ee9de6-e00b-571b-ab9b-b7aab06174df","modified":"2025-12-13T07:30:43.504837Z","created":"2025-12-13T07:30:43.504837Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Dec 13 02:33:34 np0005558241 busy_bartik[95906]: dumped monmap epoch 1
Dec 13 02:33:34 np0005558241 systemd[1]: libpod-6bd1b5115ff378f6e387db1324cedcbe92ad8122fdf26169ecfc3d3307546106.scope: Deactivated successfully.
Dec 13 02:33:34 np0005558241 podman[95879]: 2025-12-13 07:33:34.558783118 +0000 UTC m=+0.855175822 container died 6bd1b5115ff378f6e387db1324cedcbe92ad8122fdf26169ecfc3d3307546106 (image=quay.io/ceph/ceph:v20, name=busy_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 02:33:34 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec 13 02:33:34 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec 13 02:33:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2e169e50f9afde6c82b1cf310619b28a32919033885fb60e767e5294666e9975-merged.mount: Deactivated successfully.
Dec 13 02:33:34 np0005558241 podman[95879]: 2025-12-13 07:33:34.950127791 +0000 UTC m=+1.246520445 container remove 6bd1b5115ff378f6e387db1324cedcbe92ad8122fdf26169ecfc3d3307546106 (image=quay.io/ceph/ceph:v20, name=busy_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:35 np0005558241 podman[95909]: 2025-12-13 07:33:35.023553649 +0000 UTC m=+1.106806044 container remove 71b904ad59a777b91b5b7861629a26a4175ba70bc2d7eee456503a90cf56edcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_lehmann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 02:33:35 np0005558241 systemd[1]: libpod-conmon-71b904ad59a777b91b5b7861629a26a4175ba70bc2d7eee456503a90cf56edcc.scope: Deactivated successfully.
Dec 13 02:33:35 np0005558241 systemd[1]: libpod-conmon-6bd1b5115ff378f6e387db1324cedcbe92ad8122fdf26169ecfc3d3307546106.scope: Deactivated successfully.
Dec 13 02:33:35 np0005558241 podman[95985]: 2025-12-13 07:33:35.260831505 +0000 UTC m=+0.077462259 container create 0dd4d17dd2570aa452d6b10f7402f88573d71ff444d91dcb504846228bd8ebab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_elbakyan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:35 np0005558241 podman[95985]: 2025-12-13 07:33:35.213018722 +0000 UTC m=+0.029649516 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:35 np0005558241 systemd[1]: Started libpod-conmon-0dd4d17dd2570aa452d6b10f7402f88573d71ff444d91dcb504846228bd8ebab.scope.
Dec 13 02:33:35 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8905f1d0cd039881eff20d083d1c69920f6cd375e863535bf5b3632a3abbdb47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8905f1d0cd039881eff20d083d1c69920f6cd375e863535bf5b3632a3abbdb47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8905f1d0cd039881eff20d083d1c69920f6cd375e863535bf5b3632a3abbdb47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8905f1d0cd039881eff20d083d1c69920f6cd375e863535bf5b3632a3abbdb47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:35 np0005558241 podman[95985]: 2025-12-13 07:33:35.413865276 +0000 UTC m=+0.230496070 container init 0dd4d17dd2570aa452d6b10f7402f88573d71ff444d91dcb504846228bd8ebab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_elbakyan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:35 np0005558241 podman[95985]: 2025-12-13 07:33:35.427196617 +0000 UTC m=+0.243827361 container start 0dd4d17dd2570aa452d6b10f7402f88573d71ff444d91dcb504846228bd8ebab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 02:33:35 np0005558241 podman[95985]: 2025-12-13 07:33:35.472603611 +0000 UTC m=+0.289234365 container attach 0dd4d17dd2570aa452d6b10f7402f88573d71ff444d91dcb504846228bd8ebab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_elbakyan, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 02:33:35 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec 13 02:33:35 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec 13 02:33:35 np0005558241 python3[96032]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]: {
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:    "0": [
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:        {
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "devices": [
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "/dev/loop3"
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            ],
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "lv_name": "ceph_lv0",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "lv_size": "21470642176",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "name": "ceph_lv0",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "tags": {
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.cluster_name": "ceph",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.crush_device_class": "",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.encrypted": "0",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.objectstore": "bluestore",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.osd_id": "0",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.type": "block",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.vdo": "0",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.with_tpm": "0"
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            },
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "type": "block",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "vg_name": "ceph_vg0"
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:        }
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:    ],
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:    "1": [
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:        {
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "devices": [
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "/dev/loop4"
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            ],
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "lv_name": "ceph_lv1",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "lv_size": "21470642176",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "name": "ceph_lv1",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "tags": {
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.cluster_name": "ceph",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.crush_device_class": "",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.encrypted": "0",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.objectstore": "bluestore",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.osd_id": "1",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.type": "block",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.vdo": "0",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.with_tpm": "0"
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            },
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "type": "block",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "vg_name": "ceph_vg1"
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:        }
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:    ],
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:    "2": [
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:        {
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "devices": [
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "/dev/loop5"
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            ],
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "lv_name": "ceph_lv2",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "lv_size": "21470642176",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "name": "ceph_lv2",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "tags": {
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.cluster_name": "ceph",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.crush_device_class": "",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.encrypted": "0",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.objectstore": "bluestore",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.osd_id": "2",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.type": "block",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.vdo": "0",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:                "ceph.with_tpm": "0"
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            },
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "type": "block",
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:            "vg_name": "ceph_vg2"
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:        }
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]:    ]
Dec 13 02:33:35 np0005558241 optimistic_elbakyan[96002]: }
Dec 13 02:33:35 np0005558241 podman[96035]: 2025-12-13 07:33:35.774967549 +0000 UTC m=+0.110739023 container create d99140b9fd10479e3658d99584379981b2996f22022715ad97e9cb7d356becc9 (image=quay.io/ceph/ceph:v20, name=hardcore_carver, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:33:35 np0005558241 systemd[1]: libpod-0dd4d17dd2570aa452d6b10f7402f88573d71ff444d91dcb504846228bd8ebab.scope: Deactivated successfully.
Dec 13 02:33:35 np0005558241 podman[95985]: 2025-12-13 07:33:35.798700077 +0000 UTC m=+0.615330821 container died 0dd4d17dd2570aa452d6b10f7402f88573d71ff444d91dcb504846228bd8ebab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_elbakyan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:33:35 np0005558241 podman[96035]: 2025-12-13 07:33:35.732810775 +0000 UTC m=+0.068582269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:35 np0005558241 systemd[1]: Started libpod-conmon-d99140b9fd10479e3658d99584379981b2996f22022715ad97e9cb7d356becc9.scope.
Dec 13 02:33:35 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2925209ef1675380ee7618328c3d077cddfee2fcf0ce4610b9e0c13bcd8052d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2925209ef1675380ee7618328c3d077cddfee2fcf0ce4610b9e0c13bcd8052d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:35 np0005558241 podman[96035]: 2025-12-13 07:33:35.908501936 +0000 UTC m=+0.244273420 container init d99140b9fd10479e3658d99584379981b2996f22022715ad97e9cb7d356becc9 (image=quay.io/ceph/ceph:v20, name=hardcore_carver, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:33:35 np0005558241 podman[96035]: 2025-12-13 07:33:35.916480334 +0000 UTC m=+0.252251828 container start d99140b9fd10479e3658d99584379981b2996f22022715ad97e9cb7d356becc9 (image=quay.io/ceph/ceph:v20, name=hardcore_carver, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:35 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8905f1d0cd039881eff20d083d1c69920f6cd375e863535bf5b3632a3abbdb47-merged.mount: Deactivated successfully.
Dec 13 02:33:36 np0005558241 podman[95985]: 2025-12-13 07:33:36.094313158 +0000 UTC m=+0.910943922 container remove 0dd4d17dd2570aa452d6b10f7402f88573d71ff444d91dcb504846228bd8ebab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:36 np0005558241 podman[96035]: 2025-12-13 07:33:36.164872766 +0000 UTC m=+0.500644240 container attach d99140b9fd10479e3658d99584379981b2996f22022715ad97e9cb7d356becc9 (image=quay.io/ceph/ceph:v20, name=hardcore_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:36 np0005558241 systemd[1]: libpod-conmon-0dd4d17dd2570aa452d6b10f7402f88573d71ff444d91dcb504846228bd8ebab.scope: Deactivated successfully.
Dec 13 02:33:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Dec 13 02:33:36 np0005558241 hardcore_carver[96064]: [client.openstack]
Dec 13 02:33:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2708139544' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Dec 13 02:33:36 np0005558241 hardcore_carver[96064]: #011key = AQDvFT1pAAAAABAAqIZJ9wGdbwkz/sMiYoPyyQ==
Dec 13 02:33:36 np0005558241 hardcore_carver[96064]: #011caps mgr = "allow *"
Dec 13 02:33:36 np0005558241 hardcore_carver[96064]: #011caps mon = "profile rbd"
Dec 13 02:33:36 np0005558241 hardcore_carver[96064]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec 13 02:33:36 np0005558241 systemd[1]: libpod-d99140b9fd10479e3658d99584379981b2996f22022715ad97e9cb7d356becc9.scope: Deactivated successfully.
Dec 13 02:33:36 np0005558241 podman[96035]: 2025-12-13 07:33:36.477934 +0000 UTC m=+0.813705484 container died d99140b9fd10479e3658d99584379981b2996f22022715ad97e9cb7d356becc9 (image=quay.io/ceph/ceph:v20, name=hardcore_carver, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:33:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:36 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a2925209ef1675380ee7618328c3d077cddfee2fcf0ce4610b9e0c13bcd8052d-merged.mount: Deactivated successfully.
Dec 13 02:33:36 np0005558241 podman[96035]: 2025-12-13 07:33:36.923373833 +0000 UTC m=+1.259145287 container remove d99140b9fd10479e3658d99584379981b2996f22022715ad97e9cb7d356becc9 (image=quay.io/ceph/ceph:v20, name=hardcore_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 02:33:36 np0005558241 systemd[1]: libpod-conmon-d99140b9fd10479e3658d99584379981b2996f22022715ad97e9cb7d356becc9.scope: Deactivated successfully.
Dec 13 02:33:37 np0005558241 podman[96165]: 2025-12-13 07:33:37.030799723 +0000 UTC m=+0.039790086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:37 np0005558241 podman[96165]: 2025-12-13 07:33:37.15665807 +0000 UTC m=+0.165648433 container create 36819f99932f928c3e66f204e9516b4fb846f683aabc371b8ad2d73ca898a06f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_sinoussi, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 02:33:37 np0005558241 systemd[1]: Started libpod-conmon-36819f99932f928c3e66f204e9516b4fb846f683aabc371b8ad2d73ca898a06f.scope.
Dec 13 02:33:37 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:37 np0005558241 podman[96165]: 2025-12-13 07:33:37.402992311 +0000 UTC m=+0.411982754 container init 36819f99932f928c3e66f204e9516b4fb846f683aabc371b8ad2d73ca898a06f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_sinoussi, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:37 np0005558241 podman[96165]: 2025-12-13 07:33:37.413779229 +0000 UTC m=+0.422769602 container start 36819f99932f928c3e66f204e9516b4fb846f683aabc371b8ad2d73ca898a06f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_sinoussi, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:37 np0005558241 nifty_sinoussi[96181]: 167 167
Dec 13 02:33:37 np0005558241 systemd[1]: libpod-36819f99932f928c3e66f204e9516b4fb846f683aabc371b8ad2d73ca898a06f.scope: Deactivated successfully.
Dec 13 02:33:37 np0005558241 podman[96165]: 2025-12-13 07:33:37.650818939 +0000 UTC m=+0.659809362 container attach 36819f99932f928c3e66f204e9516b4fb846f683aabc371b8ad2d73ca898a06f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_sinoussi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 02:33:37 np0005558241 podman[96165]: 2025-12-13 07:33:37.651502326 +0000 UTC m=+0.660492699 container died 36819f99932f928c3e66f204e9516b4fb846f683aabc371b8ad2d73ca898a06f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 02:33:37 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/2708139544' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Dec 13 02:33:38 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec 13 02:33:38 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec 13 02:33:38 np0005558241 systemd[1]: var-lib-containers-storage-overlay-42cc815fb3ac18082f5d27f7552b2e5e43cf3fe8fead579504859615cf06ad4d-merged.mount: Deactivated successfully.
Dec 13 02:33:38 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec 13 02:33:38 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec 13 02:33:38 np0005558241 ansible-async_wrapper.py[96347]: Invoked with j660110470274 30 /home/zuul/.ansible/tmp/ansible-tmp-1765611217.99456-36880-277382966404223/AnsiballZ_command.py _
Dec 13 02:33:38 np0005558241 ansible-async_wrapper.py[96350]: Starting module and watcher
Dec 13 02:33:38 np0005558241 ansible-async_wrapper.py[96350]: Start watching 96351 (30)
Dec 13 02:33:38 np0005558241 ansible-async_wrapper.py[96351]: Start module (96351)
Dec 13 02:33:38 np0005558241 ansible-async_wrapper.py[96347]: Return async_wrapper task started.
Dec 13 02:33:38 np0005558241 podman[96165]: 2025-12-13 07:33:38.652983711 +0000 UTC m=+1.661974084 container remove 36819f99932f928c3e66f204e9516b4fb846f683aabc371b8ad2d73ca898a06f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_sinoussi, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:38 np0005558241 systemd[1]: libpod-conmon-36819f99932f928c3e66f204e9516b4fb846f683aabc371b8ad2d73ca898a06f.scope: Deactivated successfully.
Dec 13 02:33:38 np0005558241 python3[96352]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:38 np0005558241 podman[96360]: 2025-12-13 07:33:38.864696474 +0000 UTC m=+0.032790973 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:39 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec 13 02:33:39 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec 13 02:33:39 np0005558241 podman[96360]: 2025-12-13 07:33:39.22264893 +0000 UTC m=+0.390743399 container create b929a281adb53478b37ffc8385349047176199e3872fd76c869f4cae69911fb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_stonebraker, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 02:33:39 np0005558241 podman[96367]: 2025-12-13 07:33:39.243420445 +0000 UTC m=+0.387468738 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v110: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:39 np0005558241 systemd[1]: Started libpod-conmon-b929a281adb53478b37ffc8385349047176199e3872fd76c869f4cae69911fb2.scope.
Dec 13 02:33:39 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40948a3b5f9518d54bb376c26cb080724b3b6d60bcc35b74ffd4785675eb890b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40948a3b5f9518d54bb376c26cb080724b3b6d60bcc35b74ffd4785675eb890b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40948a3b5f9518d54bb376c26cb080724b3b6d60bcc35b74ffd4785675eb890b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40948a3b5f9518d54bb376c26cb080724b3b6d60bcc35b74ffd4785675eb890b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:39 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec 13 02:33:39 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec 13 02:33:39 np0005558241 podman[96367]: 2025-12-13 07:33:39.816051456 +0000 UTC m=+0.960099669 container create ba0888d7c697769e075ecbb76bce319ebd8a260a350e0b71392a53e808a446c4 (image=quay.io/ceph/ceph:v20, name=charming_jemison, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 02:33:39 np0005558241 systemd[1]: Started libpod-conmon-ba0888d7c697769e075ecbb76bce319ebd8a260a350e0b71392a53e808a446c4.scope.
Dec 13 02:33:39 np0005558241 python3[96440]: ansible-ansible.legacy.async_status Invoked with jid=j660110470274.96347 mode=status _async_dir=/root/.ansible_async
Dec 13 02:33:39 np0005558241 podman[96360]: 2025-12-13 07:33:39.974222133 +0000 UTC m=+1.142316712 container init b929a281adb53478b37ffc8385349047176199e3872fd76c869f4cae69911fb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_stonebraker, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:39 np0005558241 podman[96360]: 2025-12-13 07:33:39.981394311 +0000 UTC m=+1.149488800 container start b929a281adb53478b37ffc8385349047176199e3872fd76c869f4cae69911fb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:40 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871fe3efcd1a31f35129fbe606f709ad7d80b46e3317f4eb9e6e04361e090d04/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871fe3efcd1a31f35129fbe606f709ad7d80b46e3317f4eb9e6e04361e090d04/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:33:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:33:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:33:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:33:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:33:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:33:40 np0005558241 podman[96360]: 2025-12-13 07:33:40.086503414 +0000 UTC m=+1.254597863 container attach b929a281adb53478b37ffc8385349047176199e3872fd76c869f4cae69911fb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:33:40 np0005558241 podman[96367]: 2025-12-13 07:33:40.361040384 +0000 UTC m=+1.505088647 container init ba0888d7c697769e075ecbb76bce319ebd8a260a350e0b71392a53e808a446c4 (image=quay.io/ceph/ceph:v20, name=charming_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 02:33:40 np0005558241 podman[96367]: 2025-12-13 07:33:40.367882734 +0000 UTC m=+1.511930967 container start ba0888d7c697769e075ecbb76bce319ebd8a260a350e0b71392a53e808a446c4 (image=quay.io/ceph/ceph:v20, name=charming_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:40 np0005558241 podman[96367]: 2025-12-13 07:33:40.452338905 +0000 UTC m=+1.596387138 container attach ba0888d7c697769e075ecbb76bce319ebd8a260a350e0b71392a53e808a446c4 (image=quay.io/ceph/ceph:v20, name=charming_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 02:33:40 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 02:33:40 np0005558241 charming_jemison[96443]: 
Dec 13 02:33:40 np0005558241 charming_jemison[96443]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 13 02:33:40 np0005558241 lvm[96543]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:33:40 np0005558241 lvm[96543]: VG ceph_vg1 finished
Dec 13 02:33:40 np0005558241 systemd[1]: libpod-ba0888d7c697769e075ecbb76bce319ebd8a260a350e0b71392a53e808a446c4.scope: Deactivated successfully.
Dec 13 02:33:40 np0005558241 podman[96367]: 2025-12-13 07:33:40.803867292 +0000 UTC m=+1.947915505 container died ba0888d7c697769e075ecbb76bce319ebd8a260a350e0b71392a53e808a446c4 (image=quay.io/ceph/ceph:v20, name=charming_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:33:40 np0005558241 lvm[96542]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:33:40 np0005558241 lvm[96542]: VG ceph_vg0 finished
Dec 13 02:33:40 np0005558241 lvm[96546]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:33:40 np0005558241 lvm[96546]: VG ceph_vg2 finished
Dec 13 02:33:40 np0005558241 elegant_stonebraker[96389]: {}
Dec 13 02:33:40 np0005558241 systemd[1]: var-lib-containers-storage-overlay-871fe3efcd1a31f35129fbe606f709ad7d80b46e3317f4eb9e6e04361e090d04-merged.mount: Deactivated successfully.
Dec 13 02:33:40 np0005558241 systemd[1]: libpod-b929a281adb53478b37ffc8385349047176199e3872fd76c869f4cae69911fb2.scope: Deactivated successfully.
Dec 13 02:33:40 np0005558241 systemd[1]: libpod-b929a281adb53478b37ffc8385349047176199e3872fd76c869f4cae69911fb2.scope: Consumed 1.511s CPU time.
Dec 13 02:33:41 np0005558241 podman[96367]: 2025-12-13 07:33:41.050327746 +0000 UTC m=+2.194375999 container remove ba0888d7c697769e075ecbb76bce319ebd8a260a350e0b71392a53e808a446c4 (image=quay.io/ceph/ceph:v20, name=charming_jemison, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:41 np0005558241 systemd[1]: libpod-conmon-ba0888d7c697769e075ecbb76bce319ebd8a260a350e0b71392a53e808a446c4.scope: Deactivated successfully.
Dec 13 02:33:41 np0005558241 podman[96360]: 2025-12-13 07:33:41.061955534 +0000 UTC m=+2.230050013 container died b929a281adb53478b37ffc8385349047176199e3872fd76c869f4cae69911fb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_stonebraker, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 02:33:41 np0005558241 ansible-async_wrapper.py[96351]: Module complete (96351)
Dec 13 02:33:41 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec 13 02:33:41 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec 13 02:33:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay-40948a3b5f9518d54bb376c26cb080724b3b6d60bcc35b74ffd4785675eb890b-merged.mount: Deactivated successfully.
Dec 13 02:33:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v111: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:41 np0005558241 python3[96621]: ansible-ansible.legacy.async_status Invoked with jid=j660110470274.96347 mode=status _async_dir=/root/.ansible_async
Dec 13 02:33:41 np0005558241 podman[96360]: 2025-12-13 07:33:41.471528608 +0000 UTC m=+2.639623057 container remove b929a281adb53478b37ffc8385349047176199e3872fd76c869f4cae69911fb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_stonebraker, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:33:41 np0005558241 systemd[1]: libpod-conmon-b929a281adb53478b37ffc8385349047176199e3872fd76c869f4cae69911fb2.scope: Deactivated successfully.
Dec 13 02:33:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:33:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:33:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:41 np0005558241 ceph-mgr[76830]: [progress INFO root] update: starting ev c723b99d-a8e3-4686-8a16-5caed4ff7fe9 (Updating rgw.rgw deployment (+1 -> 1))
Dec 13 02:33:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.tiyxaw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 13 02:33:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.tiyxaw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Dec 13 02:33:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.tiyxaw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 13 02:33:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 13 02:33:41 np0005558241 python3[96673]: ansible-ansible.legacy.async_status Invoked with jid=j660110470274.96347 mode=cleanup _async_dir=/root/.ansible_async
Dec 13 02:33:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:33:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:33:41 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.tiyxaw on compute-0
Dec 13 02:33:41 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.tiyxaw on compute-0
Dec 13 02:33:42 np0005558241 podman[96788]: 2025-12-13 07:33:42.32473052 +0000 UTC m=+0.033165822 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:42 np0005558241 podman[96788]: 2025-12-13 07:33:42.444705172 +0000 UTC m=+0.153140404 container create dfdb8973b4f44910aebb8692674736e154c01f157c52d606853dfbf5aa2ca7e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_mccarthy, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:33:42 np0005558241 python3[96790]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:42 np0005558241 systemd[1]: Started libpod-conmon-dfdb8973b4f44910aebb8692674736e154c01f157c52d606853dfbf5aa2ca7e9.scope.
Dec 13 02:33:42 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec 13 02:33:42 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec 13 02:33:42 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:42 np0005558241 podman[96803]: 2025-12-13 07:33:42.563415402 +0000 UTC m=+0.071743978 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:42 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:42 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:42 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.tiyxaw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Dec 13 02:33:42 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.tiyxaw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 13 02:33:42 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:42 np0005558241 ceph-mon[76537]: Deploying daemon rgw.rgw.compute-0.tiyxaw on compute-0
Dec 13 02:33:42 np0005558241 podman[96803]: 2025-12-13 07:33:42.758620086 +0000 UTC m=+0.266948612 container create 6d344f55197eb4e26f2a1061add6517ba5ee6167816994a934b5db28252fa1a5 (image=quay.io/ceph/ceph:v20, name=boring_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:42 np0005558241 systemd[1]: Started libpod-conmon-6d344f55197eb4e26f2a1061add6517ba5ee6167816994a934b5db28252fa1a5.scope.
Dec 13 02:33:42 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab8abd7caaa765179316e7e746889fd4bfa4b4d838c8b85ed2eff9c9eb878acb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab8abd7caaa765179316e7e746889fd4bfa4b4d838c8b85ed2eff9c9eb878acb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:42 np0005558241 podman[96788]: 2025-12-13 07:33:42.924623097 +0000 UTC m=+0.633058299 container init dfdb8973b4f44910aebb8692674736e154c01f157c52d606853dfbf5aa2ca7e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_mccarthy, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:42 np0005558241 podman[96788]: 2025-12-13 07:33:42.939340192 +0000 UTC m=+0.647775414 container start dfdb8973b4f44910aebb8692674736e154c01f157c52d606853dfbf5aa2ca7e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_mccarthy, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:42 np0005558241 loving_mccarthy[96815]: 167 167
Dec 13 02:33:42 np0005558241 systemd[1]: libpod-dfdb8973b4f44910aebb8692674736e154c01f157c52d606853dfbf5aa2ca7e9.scope: Deactivated successfully.
Dec 13 02:33:42 np0005558241 podman[96803]: 2025-12-13 07:33:42.995979934 +0000 UTC m=+0.504308460 container init 6d344f55197eb4e26f2a1061add6517ba5ee6167816994a934b5db28252fa1a5 (image=quay.io/ceph/ceph:v20, name=boring_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:33:43 np0005558241 podman[96803]: 2025-12-13 07:33:43.005301885 +0000 UTC m=+0.513630381 container start 6d344f55197eb4e26f2a1061add6517ba5ee6167816994a934b5db28252fa1a5 (image=quay.io/ceph/ceph:v20, name=boring_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 02:33:43 np0005558241 podman[96803]: 2025-12-13 07:33:43.068617023 +0000 UTC m=+0.576945519 container attach 6d344f55197eb4e26f2a1061add6517ba5ee6167816994a934b5db28252fa1a5 (image=quay.io/ceph/ceph:v20, name=boring_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:43 np0005558241 podman[96788]: 2025-12-13 07:33:43.119018022 +0000 UTC m=+0.827453214 container attach dfdb8973b4f44910aebb8692674736e154c01f157c52d606853dfbf5aa2ca7e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:43 np0005558241 podman[96788]: 2025-12-13 07:33:43.121515754 +0000 UTC m=+0.829950946 container died dfdb8973b4f44910aebb8692674736e154c01f157c52d606853dfbf5aa2ca7e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_mccarthy, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:43 np0005558241 systemd[1]: var-lib-containers-storage-overlay-de9a2ba4a5e4a0df2003b141dc3882186df85f7eaf22457ce94559d01f9cdc7c-merged.mount: Deactivated successfully.
Dec 13 02:33:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v112: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:43 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14252 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 02:33:43 np0005558241 boring_ritchie[96822]: 
Dec 13 02:33:43 np0005558241 boring_ritchie[96822]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 13 02:33:43 np0005558241 systemd[1]: libpod-6d344f55197eb4e26f2a1061add6517ba5ee6167816994a934b5db28252fa1a5.scope: Deactivated successfully.
Dec 13 02:33:43 np0005558241 podman[96788]: 2025-12-13 07:33:43.509412181 +0000 UTC m=+1.217847403 container remove dfdb8973b4f44910aebb8692674736e154c01f157c52d606853dfbf5aa2ca7e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:43 np0005558241 podman[96803]: 2025-12-13 07:33:43.522043284 +0000 UTC m=+1.030371810 container died 6d344f55197eb4e26f2a1061add6517ba5ee6167816994a934b5db28252fa1a5 (image=quay.io/ceph/ceph:v20, name=boring_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 02:33:43 np0005558241 ansible-async_wrapper.py[96350]: Done in kid B.
Dec 13 02:33:43 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ab8abd7caaa765179316e7e746889fd4bfa4b4d838c8b85ed2eff9c9eb878acb-merged.mount: Deactivated successfully.
Dec 13 02:33:43 np0005558241 systemd[1]: libpod-conmon-dfdb8973b4f44910aebb8692674736e154c01f157c52d606853dfbf5aa2ca7e9.scope: Deactivated successfully.
Dec 13 02:33:44 np0005558241 systemd[1]: Reloading.
Dec 13 02:33:44 np0005558241 podman[96862]: 2025-12-13 07:33:44.087309134 +0000 UTC m=+0.604336819 container remove 6d344f55197eb4e26f2a1061add6517ba5ee6167816994a934b5db28252fa1a5 (image=quay.io/ceph/ceph:v20, name=boring_ritchie, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 02:33:44 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:33:44 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Dec 13 02:33:44 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:33:44 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Dec 13 02:33:44 np0005558241 systemd[1]: libpod-conmon-6d344f55197eb4e26f2a1061add6517ba5ee6167816994a934b5db28252fa1a5.scope: Deactivated successfully.
Dec 13 02:33:44 np0005558241 systemd[1]: Reloading.
Dec 13 02:33:44 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Dec 13 02:33:44 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:33:44 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:33:44 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Dec 13 02:33:44 np0005558241 systemd[1]: Starting Ceph rgw.rgw.compute-0.tiyxaw for 18ee9de6-e00b-571b-ab9b-b7aab06174df...
Dec 13 02:33:45 np0005558241 podman[97025]: 2025-12-13 07:33:44.994488243 +0000 UTC m=+0.034786313 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:45 np0005558241 podman[97025]: 2025-12-13 07:33:45.110798653 +0000 UTC m=+0.151096683 container create 2114a4ca5e939913a498ac0bd1c5d054b879c4ca471e6554115d443aa0517746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-rgw-rgw-compute-0-tiyxaw, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:33:45 np0005558241 python3[97026]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a897bfafefe0de19c501ca82c3d66a9eb81470b4915aea211bc10decac6c26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a897bfafefe0de19c501ca82c3d66a9eb81470b4915aea211bc10decac6c26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a897bfafefe0de19c501ca82c3d66a9eb81470b4915aea211bc10decac6c26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a897bfafefe0de19c501ca82c3d66a9eb81470b4915aea211bc10decac6c26/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.tiyxaw supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:45 np0005558241 podman[97039]: 2025-12-13 07:33:45.216719637 +0000 UTC m=+0.050331658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:45 np0005558241 podman[97039]: 2025-12-13 07:33:45.344773728 +0000 UTC m=+0.178385699 container create 4d7beb368b37737889dc705d57afc89b52fb5574b9c1372ebe34b72e4eb92a5a (image=quay.io/ceph/ceph:v20, name=heuristic_taussig, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v113: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:45 np0005558241 systemd[1]: Started libpod-conmon-4d7beb368b37737889dc705d57afc89b52fb5574b9c1372ebe34b72e4eb92a5a.scope.
Dec 13 02:33:45 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73e1d5118423c2a07991c996ce975b451492f61cb5e51d91515e3f08d223d0ef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73e1d5118423c2a07991c996ce975b451492f61cb5e51d91515e3f08d223d0ef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:45 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec 13 02:33:45 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec 13 02:33:45 np0005558241 podman[97025]: 2025-12-13 07:33:45.848977267 +0000 UTC m=+0.889275387 container init 2114a4ca5e939913a498ac0bd1c5d054b879c4ca471e6554115d443aa0517746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-rgw-rgw-compute-0-tiyxaw, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 02:33:45 np0005558241 podman[97025]: 2025-12-13 07:33:45.861555148 +0000 UTC m=+0.901853208 container start 2114a4ca5e939913a498ac0bd1c5d054b879c4ca471e6554115d443aa0517746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-rgw-rgw-compute-0-tiyxaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:33:45 np0005558241 radosgw[97065]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:33:45 np0005558241 radosgw[97065]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Dec 13 02:33:45 np0005558241 radosgw[97065]: framework: beast
Dec 13 02:33:45 np0005558241 radosgw[97065]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec 13 02:33:45 np0005558241 radosgw[97065]: init_numa not setting numa affinity
Dec 13 02:33:46 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec 13 02:33:46 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec 13 02:33:46 np0005558241 bash[97025]: 2114a4ca5e939913a498ac0bd1c5d054b879c4ca471e6554115d443aa0517746
Dec 13 02:33:46 np0005558241 systemd[1]: Started Ceph rgw.rgw.compute-0.tiyxaw for 18ee9de6-e00b-571b-ab9b-b7aab06174df.
Dec 13 02:33:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec 13 02:33:46 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec 13 02:33:46 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec 13 02:33:46 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec 13 02:33:46 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec 13 02:33:46 np0005558241 podman[97039]: 2025-12-13 07:33:46.650524128 +0000 UTC m=+1.484136149 container init 4d7beb368b37737889dc705d57afc89b52fb5574b9c1372ebe34b72e4eb92a5a (image=quay.io/ceph/ceph:v20, name=heuristic_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 02:33:46 np0005558241 podman[97039]: 2025-12-13 07:33:46.664012402 +0000 UTC m=+1.497624373 container start 4d7beb368b37737889dc705d57afc89b52fb5574b9c1372ebe34b72e4eb92a5a (image=quay.io/ceph/ceph:v20, name=heuristic_taussig, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 02:33:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:33:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v114: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:47 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Dec 13 02:33:47 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14257 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 02:33:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} v 0)
Dec 13 02:33:47 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} : dispatch
Dec 13 02:33:47 np0005558241 heuristic_taussig[97061]: 
Dec 13 02:33:47 np0005558241 heuristic_taussig[97061]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Dec 13 02:33:47 np0005558241 systemd[1]: libpod-4d7beb368b37737889dc705d57afc89b52fb5574b9c1372ebe34b72e4eb92a5a.scope: Deactivated successfully.
Dec 13 02:33:47 np0005558241 podman[97039]: 2025-12-13 07:33:47.90305447 +0000 UTC m=+2.736666441 container attach 4d7beb368b37737889dc705d57afc89b52fb5574b9c1372ebe34b72e4eb92a5a (image=quay.io/ceph/ceph:v20, name=heuristic_taussig, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 02:33:47 np0005558241 podman[97039]: 2025-12-13 07:33:47.904596628 +0000 UTC m=+2.738208629 container died 4d7beb368b37737889dc705d57afc89b52fb5574b9c1372ebe34b72e4eb92a5a (image=quay.io/ceph/ceph:v20, name=heuristic_taussig, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:47 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Dec 13 02:33:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec 13 02:33:48 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec 13 02:33:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Dec 13 02:33:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3865107080' entity='client.rgw.rgw.compute-0.tiyxaw' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Dec 13 02:33:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:33:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-73e1d5118423c2a07991c996ce975b451492f61cb5e51d91515e3f08d223d0ef-merged.mount: Deactivated successfully.
Dec 13 02:33:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 13 02:33:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:48 np0005558241 ceph-mgr[76830]: [progress INFO root] complete: finished ev c723b99d-a8e3-4686-8a16-5caed4ff7fe9 (Updating rgw.rgw deployment (+1 -> 1))
Dec 13 02:33:48 np0005558241 ceph-mgr[76830]: [progress INFO root] Completed event c723b99d-a8e3-4686-8a16-5caed4ff7fe9 (Updating rgw.rgw deployment (+1 -> 1)) in 7 seconds
Dec 13 02:33:48 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Dec 13 02:33:48 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec 13 02:33:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 13 02:33:48 np0005558241 podman[97039]: 2025-12-13 07:33:48.427025757 +0000 UTC m=+3.260637688 container remove 4d7beb368b37737889dc705d57afc89b52fb5574b9c1372ebe34b72e4eb92a5a (image=quay.io/ceph/ceph:v20, name=heuristic_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 13 02:33:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:48 np0005558241 ceph-mgr[76830]: [progress INFO root] update: starting ev b1dc0d02-3e33-4df5-b835-bd796bd19cdd (Updating mds.cephfs deployment (+1 -> 1))
Dec 13 02:33:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gfdyct", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 13 02:33:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gfdyct", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 13 02:33:48 np0005558241 systemd[1]: libpod-conmon-4d7beb368b37737889dc705d57afc89b52fb5574b9c1372ebe34b72e4eb92a5a.scope: Deactivated successfully.
Dec 13 02:33:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gfdyct", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 13 02:33:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:33:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:33:48 np0005558241 ceph-mgr[76830]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.gfdyct on compute-0
Dec 13 02:33:48 np0005558241 ceph-mgr[76830]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.gfdyct on compute-0
Dec 13 02:33:48 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec 13 02:33:48 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec 13 02:33:48 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 45 pg[8.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec 13 02:33:49 np0005558241 podman[97217]: 2025-12-13 07:33:49.038467141 +0000 UTC m=+0.028033495 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:49 np0005558241 podman[97217]: 2025-12-13 07:33:49.300644085 +0000 UTC m=+0.290210369 container create 4a39f693c2d921f43e6dde8a21183de9fa69415d18fc5fd5fa0ab3b8f627c768 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3865107080' entity='client.rgw.rgw.compute-0.tiyxaw' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 13 02:33:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec 13 02:33:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec 13 02:33:49 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 46 pg[8.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:49 np0005558241 systemd[1]: Started libpod-conmon-4a39f693c2d921f43e6dde8a21183de9fa69415d18fc5fd5fa0ab3b8f627c768.scope.
Dec 13 02:33:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v117: 194 pgs: 1 unknown, 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:49 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3865107080' entity='client.rgw.rgw.compute-0.tiyxaw' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Dec 13 02:33:49 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:49 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:49 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:49 np0005558241 ceph-mon[76537]: Saving service rgw.rgw spec with placement compute-0
Dec 13 02:33:49 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:49 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:49 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gfdyct", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 13 02:33:49 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.gfdyct", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 13 02:33:49 np0005558241 ceph-mon[76537]: Deploying daemon mds.cephfs.compute-0.gfdyct on compute-0
Dec 13 02:33:49 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3865107080' entity='client.rgw.rgw.compute-0.tiyxaw' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 13 02:33:49 np0005558241 podman[97217]: 2025-12-13 07:33:49.442593291 +0000 UTC m=+0.432159675 container init 4a39f693c2d921f43e6dde8a21183de9fa69415d18fc5fd5fa0ab3b8f627c768 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 02:33:49 np0005558241 podman[97217]: 2025-12-13 07:33:49.45789171 +0000 UTC m=+0.447458014 container start 4a39f693c2d921f43e6dde8a21183de9fa69415d18fc5fd5fa0ab3b8f627c768 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 02:33:49 np0005558241 mystifying_pascal[97259]: 167 167
Dec 13 02:33:49 np0005558241 systemd[1]: libpod-4a39f693c2d921f43e6dde8a21183de9fa69415d18fc5fd5fa0ab3b8f627c768.scope: Deactivated successfully.
Dec 13 02:33:49 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec 13 02:33:49 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec 13 02:33:49 np0005558241 python3[97256]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:49 np0005558241 podman[97217]: 2025-12-13 07:33:49.700036837 +0000 UTC m=+0.689603161 container attach 4a39f693c2d921f43e6dde8a21183de9fa69415d18fc5fd5fa0ab3b8f627c768 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 02:33:49 np0005558241 podman[97217]: 2025-12-13 07:33:49.700842697 +0000 UTC m=+0.690409011 container died 4a39f693c2d921f43e6dde8a21183de9fa69415d18fc5fd5fa0ab3b8f627c768 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_pascal, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-352d8a99c53cfa3cdae9999ac950d9e9fe8d738b3ffec71cc9982c746ed1d700-merged.mount: Deactivated successfully.
Dec 13 02:33:50 np0005558241 podman[97217]: 2025-12-13 07:33:50.312059544 +0000 UTC m=+1.301625858 container remove 4a39f693c2d921f43e6dde8a21183de9fa69415d18fc5fd5fa0ab3b8f627c768 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_pascal, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec 13 02:33:50 np0005558241 podman[97275]: 2025-12-13 07:33:50.367798085 +0000 UTC m=+0.828095570 container create f57189ce17ca600c7fd0301dfda9a01c9c7749cf70c5d28cd1a6ca338c6c161d (image=quay.io/ceph/ceph:v20, name=hungry_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 02:33:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec 13 02:33:50 np0005558241 systemd[1]: libpod-conmon-4a39f693c2d921f43e6dde8a21183de9fa69415d18fc5fd5fa0ab3b8f627c768.scope: Deactivated successfully.
Dec 13 02:33:50 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec 13 02:33:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec 13 02:33:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3148396081' entity='client.rgw.rgw.compute-0.tiyxaw' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Dec 13 02:33:50 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 47 pg[9.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:50 np0005558241 podman[97275]: 2025-12-13 07:33:50.315760676 +0000 UTC m=+0.776058161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:50 np0005558241 systemd[1]: Started libpod-conmon-f57189ce17ca600c7fd0301dfda9a01c9c7749cf70c5d28cd1a6ca338c6c161d.scope.
Dec 13 02:33:50 np0005558241 systemd[1]: Reloading.
Dec 13 02:33:50 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3148396081' entity='client.rgw.rgw.compute-0.tiyxaw' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Dec 13 02:33:50 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:33:50 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:33:50 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8345d1e77d162236f469c3a5c800a6706644447f518bbadffa87f9f9ffee8e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8345d1e77d162236f469c3a5c800a6706644447f518bbadffa87f9f9ffee8e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:50 np0005558241 podman[97275]: 2025-12-13 07:33:50.721816983 +0000 UTC m=+1.182114468 container init f57189ce17ca600c7fd0301dfda9a01c9c7749cf70c5d28cd1a6ca338c6c161d (image=quay.io/ceph/ceph:v20, name=hungry_goldberg, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 02:33:50 np0005558241 podman[97275]: 2025-12-13 07:33:50.729706248 +0000 UTC m=+1.190003703 container start f57189ce17ca600c7fd0301dfda9a01c9c7749cf70c5d28cd1a6ca338c6c161d (image=quay.io/ceph/ceph:v20, name=hungry_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 02:33:50 np0005558241 podman[97275]: 2025-12-13 07:33:50.73584794 +0000 UTC m=+1.196145555 container attach f57189ce17ca600c7fd0301dfda9a01c9c7749cf70c5d28cd1a6ca338c6c161d (image=quay.io/ceph/ceph:v20, name=hungry_goldberg, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:50 np0005558241 systemd[1]: Reloading.
Dec 13 02:33:50 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:33:50 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:33:50 np0005558241 ceph-mgr[76830]: [progress INFO root] Writing back 11 completed events
Dec 13 02:33:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 02:33:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:50 np0005558241 ceph-mgr[76830]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Dec 13 02:33:51 np0005558241 systemd[1]: Starting Ceph mds.cephfs.compute-0.gfdyct for 18ee9de6-e00b-571b-ab9b-b7aab06174df...
Dec 13 02:33:51 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 02:33:51 np0005558241 hungry_goldberg[97856]: 
Dec 13 02:33:51 np0005558241 hungry_goldberg[97856]: [{"container_id": "429f444b2870", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.18%", "created": "2025-12-13T07:31:37.349307Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-12-13T07:31:37.425910Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T07:33:29.526659Z", "memory_usage": 7808745, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2025-12-13T07:31:37.217184Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df@crash.compute-0", "version": "20.2.0"}, {"container_id": "15db9c4588cd", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "13.56%", "created": "2025-12-13T07:30:51.255158Z", "daemon_id": "compute-0.vndjzx", "daemon_name": "mgr.compute-0.vndjzx", "daemon_type": "mgr", "events": ["2025-12-13T07:31:42.492366Z daemon:mgr.compute-0.vndjzx [INFO] \"Reconfigured mgr.compute-0.vndjzx on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T07:33:29.526523Z", "memory_usage": 549558681, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-13T07:30:51.116891Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df@mgr.compute-0.vndjzx", "version": "20.2.0"}, {"container_id": "63e25f0ba271", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.45%", "created": "2025-12-13T07:30:45.925025Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-12-13T07:31:41.704117Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T07:33:29.526345Z", "memory_request": 2147483648, "memory_usage": 40244346, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2025-12-13T07:30:49.058552Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df@mon.compute-0", "version": "20.2.0"}, {"container_id": "b92982ec38e9", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.67%", "created": "2025-12-13T07:32:03.267945Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-12-13T07:32:03.346015Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T07:33:29.526796Z", "memory_request": 4294967296, "memory_usage": 69793218, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-13T07:32:03.135085Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df@osd.0", "version": "20.2.0"}, {"container_id": "060b4383c272", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.64%", "created": "2025-12-13T07:32:08.850082Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-12-13T07:32:09.838772Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T07:33:29.526930Z", "memory_request": 4294967296, "memory_usage": 69793218, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-13T07:32:08.612269Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df@osd.1", "version": "20.2.0"}, {"container_id": "fee4e182b257", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.92%", "created": "2025-12-13T07:32:19.031332Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-12-13T07:32:19.162639Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T07:33:29.527100Z", "memory_request": 4294967296, "memory_usage": 66385346, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-13T07:32:18.798277Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df@osd.2", "version": "20.2.0"}, {"daemon_id": "rgw.compute-0.tiyxaw", "daemon_name": "rgw.rgw.compute-0.tiyxaw", "daemon_type": "rgw", "events": ["2025-12-13T07:33:48.408851Z daemon:rgw.rgw.compute-0.tiyxaw [INFO] \"Deployed rgw.rgw.compute-0.tiyxaw on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "pending_daemon_config": true, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Dec 13 02:33:51 np0005558241 systemd[1]: libpod-f57189ce17ca600c7fd0301dfda9a01c9c7749cf70c5d28cd1a6ca338c6c161d.scope: Deactivated successfully.
Dec 13 02:33:51 np0005558241 conmon[97856]: conmon f57189ce17ca600c7fd0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f57189ce17ca600c7fd0301dfda9a01c9c7749cf70c5d28cd1a6ca338c6c161d.scope/container/memory.events
Dec 13 02:33:51 np0005558241 podman[97275]: 2025-12-13 07:33:51.258922326 +0000 UTC m=+1.719219771 container died f57189ce17ca600c7fd0301dfda9a01c9c7749cf70c5d28cd1a6ca338c6c161d (image=quay.io/ceph/ceph:v20, name=hungry_goldberg, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:33:51 np0005558241 systemd[1]: var-lib-containers-storage-overlay-be8345d1e77d162236f469c3a5c800a6706644447f518bbadffa87f9f9ffee8e-merged.mount: Deactivated successfully.
Dec 13 02:33:51 np0005558241 podman[97275]: 2025-12-13 07:33:51.337236695 +0000 UTC m=+1.797534150 container remove f57189ce17ca600c7fd0301dfda9a01c9c7749cf70c5d28cd1a6ca338c6c161d (image=quay.io/ceph/ceph:v20, name=hungry_goldberg, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:51 np0005558241 systemd[1]: libpod-conmon-f57189ce17ca600c7fd0301dfda9a01c9c7749cf70c5d28cd1a6ca338c6c161d.scope: Deactivated successfully.
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3148396081' entity='client.rgw.rgw.compute-0.tiyxaw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec 13 02:33:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v120: 195 pgs: 2 unknown, 1 active+clean+scrubbing, 192 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:33:51 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 48 pg[9.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:51 np0005558241 podman[98016]: 2025-12-13 07:33:51.433273874 +0000 UTC m=+0.060287094 container create 6909b2060376c945095e492f38557b52e386dd36ed037641e560ef4dbd0b0ba7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mds-cephfs-compute-0-gfdyct, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0169ed6bc5d8bbbe4ac64ead18cd1fcdca8ea253bcd66e13bf615891bb39721a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0169ed6bc5d8bbbe4ac64ead18cd1fcdca8ea253bcd66e13bf615891bb39721a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0169ed6bc5d8bbbe4ac64ead18cd1fcdca8ea253bcd66e13bf615891bb39721a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0169ed6bc5d8bbbe4ac64ead18cd1fcdca8ea253bcd66e13bf615891bb39721a/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.gfdyct supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:51 np0005558241 podman[98016]: 2025-12-13 07:33:51.493835114 +0000 UTC m=+0.120848364 container init 6909b2060376c945095e492f38557b52e386dd36ed037641e560ef4dbd0b0ba7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mds-cephfs-compute-0-gfdyct, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:51 np0005558241 podman[98016]: 2025-12-13 07:33:51.503024742 +0000 UTC m=+0.130037962 container start 6909b2060376c945095e492f38557b52e386dd36ed037641e560ef4dbd0b0ba7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mds-cephfs-compute-0-gfdyct, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 02:33:51 np0005558241 podman[98016]: 2025-12-13 07:33:51.409324571 +0000 UTC m=+0.036337821 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:51 np0005558241 bash[98016]: 6909b2060376c945095e492f38557b52e386dd36ed037641e560ef4dbd0b0ba7
Dec 13 02:33:51 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec 13 02:33:51 np0005558241 systemd[1]: Started Ceph mds.cephfs.compute-0.gfdyct for 18ee9de6-e00b-571b-ab9b-b7aab06174df.
Dec 13 02:33:51 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec 13 02:33:51 np0005558241 ceph-mds[98037]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:33:51 np0005558241 ceph-mds[98037]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Dec 13 02:33:51 np0005558241 ceph-mds[98037]: main not setting numa affinity
Dec 13 02:33:51 np0005558241 ceph-mds[98037]: pidfile_write: ignore empty --pid-file
Dec 13 02:33:51 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mds-cephfs-compute-0-gfdyct[98033]: starting mds.cephfs.compute-0.gfdyct at 
Dec 13 02:33:51 np0005558241 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct Updating MDS map to version 2 from mon.0
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:51 np0005558241 ceph-mgr[76830]: [progress INFO root] complete: finished ev b1dc0d02-3e33-4df5-b835-bd796bd19cdd (Updating mds.cephfs deployment (+1 -> 1))
Dec 13 02:33:51 np0005558241 ceph-mgr[76830]: [progress INFO root] Completed event b1dc0d02-3e33-4df5-b835-bd796bd19cdd (Updating mds.cephfs deployment (+1 -> 1)) in 3 seconds
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:51 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec 13 02:33:51 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3148396081' entity='client.rgw.rgw.compute-0.tiyxaw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:52 np0005558241 python3[98174]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).mds e2 assigned standby [v2:192.168.122.100:6814/2145339145,v1:192.168.122.100:6815/2145339145] as mds.0
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.gfdyct assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).mds e3 new map
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2025-12-13T07:33:52:345462+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0113#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-13T07:33:26.155600+0000#012modified#0112025-12-13T07:33:52.345453+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14264}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-0.gfdyct{0:14264} state up:creating seq 1 addr [v2:192.168.122.100:6814/2145339145,v1:192.168.122.100:6815/2145339145] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Dec 13 02:33:52 np0005558241 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct Updating MDS map to version 3 from mon.0
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2145339145,v1:192.168.122.100:6815/2145339145] up:boot
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.gfdyct=up:creating}
Dec 13 02:33:52 np0005558241 ceph-mds[98037]: mds.0.3 handle_mds_map I am now mds.0.3
Dec 13 02:33:52 np0005558241 ceph-mds[98037]: mds.0.3 handle_mds_map state change up:standby --> up:creating
Dec 13 02:33:52 np0005558241 ceph-mds[98037]: mds.0.cache creating system inode with ino:0x1
Dec 13 02:33:52 np0005558241 ceph-mds[98037]: mds.0.cache creating system inode with ino:0x100
Dec 13 02:33:52 np0005558241 ceph-mds[98037]: mds.0.cache creating system inode with ino:0x600
Dec 13 02:33:52 np0005558241 ceph-mds[98037]: mds.0.cache creating system inode with ino:0x601
Dec 13 02:33:52 np0005558241 ceph-mds[98037]: mds.0.cache creating system inode with ino:0x602
Dec 13 02:33:52 np0005558241 ceph-mds[98037]: mds.0.cache creating system inode with ino:0x603
Dec 13 02:33:52 np0005558241 ceph-mds[98037]: mds.0.cache creating system inode with ino:0x604
Dec 13 02:33:52 np0005558241 ceph-mds[98037]: mds.0.cache creating system inode with ino:0x605
Dec 13 02:33:52 np0005558241 ceph-mds[98037]: mds.0.cache creating system inode with ino:0x606
Dec 13 02:33:52 np0005558241 ceph-mds[98037]: mds.0.cache creating system inode with ino:0x607
Dec 13 02:33:52 np0005558241 ceph-mds[98037]: mds.0.cache creating system inode with ino:0x608
Dec 13 02:33:52 np0005558241 ceph-mds[98037]: mds.0.cache creating system inode with ino:0x609
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.gfdyct"} v 0)
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.gfdyct"} : dispatch
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).mds e3 all = 0
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec 13 02:33:52 np0005558241 ceph-mds[98037]: mds.0.3 creating_done
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.gfdyct is now active in filesystem cephfs as rank 0
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3148396081' entity='client.rgw.rgw.compute-0.tiyxaw' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Dec 13 02:33:52 np0005558241 podman[98200]: 2025-12-13 07:33:52.405799331 +0000 UTC m=+0.084413832 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 02:33:52 np0005558241 podman[98214]: 2025-12-13 07:33:52.416709541 +0000 UTC m=+0.053157887 container create c15619787a2f0cd0fde577fa569b105a561420ea715d8a2140d6a96e1b49519c (image=quay.io/ceph/ceph:v20, name=elated_euler, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 02:33:52 np0005558241 systemd[1]: Started libpod-conmon-c15619787a2f0cd0fde577fa569b105a561420ea715d8a2140d6a96e1b49519c.scope.
Dec 13 02:33:52 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 49 pg[10.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [2] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:52 np0005558241 podman[98214]: 2025-12-13 07:33:52.389716313 +0000 UTC m=+0.026164649 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:52 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5ecfa2c117bddde1cfa914254a65cd6ea0ca7b59bc3822f2aec8a94731ae64/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5ecfa2c117bddde1cfa914254a65cd6ea0ca7b59bc3822f2aec8a94731ae64/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:52 np0005558241 podman[98200]: 2025-12-13 07:33:52.513500749 +0000 UTC m=+0.192115250 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 02:33:52 np0005558241 podman[98214]: 2025-12-13 07:33:52.513478458 +0000 UTC m=+0.149926834 container init c15619787a2f0cd0fde577fa569b105a561420ea715d8a2140d6a96e1b49519c (image=quay.io/ceph/ceph:v20, name=elated_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 02:33:52 np0005558241 podman[98214]: 2025-12-13 07:33:52.524554452 +0000 UTC m=+0.161002768 container start c15619787a2f0cd0fde577fa569b105a561420ea715d8a2140d6a96e1b49519c (image=quay.io/ceph/ceph:v20, name=elated_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:52 np0005558241 podman[98214]: 2025-12-13 07:33:52.529546656 +0000 UTC m=+0.165994972 container attach c15619787a2f0cd0fde577fa569b105a561420ea715d8a2140d6a96e1b49519c (image=quay.io/ceph/ceph:v20, name=elated_euler, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: daemon mds.cephfs.compute-0.gfdyct assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: Cluster is now healthy
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: daemon mds.cephfs.compute-0.gfdyct is now active in filesystem cephfs as rank 0
Dec 13 02:33:52 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3148396081' entity='client.rgw.rgw.compute-0.tiyxaw' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Dec 13 02:33:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 13 02:33:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/891962200' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 02:33:53 np0005558241 elated_euler[98246]: 
Dec 13 02:33:53 np0005558241 elated_euler[98246]: {"fsid":"18ee9de6-e00b-571b-ab9b-b7aab06174df","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":183,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":49,"num_osds":3,"num_up_osds":3,"osd_up_since":1765611153,"num_in_osds":3,"osd_in_since":1765611114,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":192},{"state_name":"unknown","count":2},{"state_name":"active+clean+scrubbing","count":1}],"num_pgs":195,"num_pools":9,"num_objects":2,"data_bytes":459280,"bytes_used":84275200,"bytes_avail":64327651328,"bytes_total":64411926528,"unknown_pgs_ratio":0.010256410576403141},"fsmap":{"epoch":3,"btime":"2025-12-13T07:33:52:345462+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.gfdyct","status":"up:creating","gid":14264}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-12-13T07:33:35.383502+0000","services":{}},"progress_events":{"64d33a97-d822-42ca-8db8-e155e3382cea":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true},"b1dc0d02-3e33-4df5-b835-bd796bd19cdd":{"message":"Updating mds.cephfs deployment (+1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec 13 02:33:53 np0005558241 systemd[1]: libpod-c15619787a2f0cd0fde577fa569b105a561420ea715d8a2140d6a96e1b49519c.scope: Deactivated successfully.
Dec 13 02:33:53 np0005558241 podman[98214]: 2025-12-13 07:33:53.042551742 +0000 UTC m=+0.679000118 container died c15619787a2f0cd0fde577fa569b105a561420ea715d8a2140d6a96e1b49519c (image=quay.io/ceph/ceph:v20, name=elated_euler, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 02:33:53 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ec5ecfa2c117bddde1cfa914254a65cd6ea0ca7b59bc3822f2aec8a94731ae64-merged.mount: Deactivated successfully.
Dec 13 02:33:53 np0005558241 podman[98214]: 2025-12-13 07:33:53.098364344 +0000 UTC m=+0.734812670 container remove c15619787a2f0cd0fde577fa569b105a561420ea715d8a2140d6a96e1b49519c (image=quay.io/ceph/ceph:v20, name=elated_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 02:33:53 np0005558241 systemd[1]: libpod-conmon-c15619787a2f0cd0fde577fa569b105a561420ea715d8a2140d6a96e1b49519c.scope: Deactivated successfully.
Dec 13 02:33:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).mds e4 new map
Dec 13 02:33:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2025-12-13T07:33:53:357390+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-13T07:33:26.155600+0000#012modified#0112025-12-13T07:33:53.357387+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14264}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 14264 members: 14264#012[mds.cephfs.compute-0.gfdyct{0:14264} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2145339145,v1:192.168.122.100:6815/2145339145] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Dec 13 02:33:53 np0005558241 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct Updating MDS map to version 4 from mon.0
Dec 13 02:33:53 np0005558241 ceph-mds[98037]: mds.0.3 handle_mds_map I am now mds.0.3
Dec 13 02:33:53 np0005558241 ceph-mds[98037]: mds.0.3 handle_mds_map state change up:creating --> up:active
Dec 13 02:33:53 np0005558241 ceph-mds[98037]: mds.0.3 recovery_done -- successful recovery!
Dec 13 02:33:53 np0005558241 ceph-mds[98037]: mds.0.3 active_start
Dec 13 02:33:53 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2145339145,v1:192.168.122.100:6815/2145339145] up:active
Dec 13 02:33:53 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.gfdyct=up:active}
Dec 13 02:33:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec 13 02:33:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v122: 196 pgs: 1 unknown, 195 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 6 op/s
Dec 13 02:33:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3148396081' entity='client.rgw.rgw.compute-0.tiyxaw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 13 02:33:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec 13 02:33:53 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec 13 02:33:53 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 50 pg[10.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [2] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:53 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec 13 02:33:53 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec 13 02:33:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:33:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:33:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:53 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3148396081' entity='client.rgw.rgw.compute-0.tiyxaw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 13 02:33:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:54 np0005558241 python3[98530]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:54 np0005558241 podman[98543]: 2025-12-13 07:33:54.108578324 +0000 UTC m=+0.037984402 container create 8bdf10a8834db7e04847388a039c5fc43acd78b3e455350c65a02f80aafd248e (image=quay.io/ceph/ceph:v20, name=loving_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 02:33:54 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec 13 02:33:54 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec 13 02:33:54 np0005558241 systemd[1]: Started libpod-conmon-8bdf10a8834db7e04847388a039c5fc43acd78b3e455350c65a02f80aafd248e.scope.
Dec 13 02:33:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcd84dba00af1812bc735653fddc13f7853b4830eeb1f28e1401c30aae7a5ba5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcd84dba00af1812bc735653fddc13f7853b4830eeb1f28e1401c30aae7a5ba5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:54 np0005558241 podman[98543]: 2025-12-13 07:33:54.182670999 +0000 UTC m=+0.112077127 container init 8bdf10a8834db7e04847388a039c5fc43acd78b3e455350c65a02f80aafd248e (image=quay.io/ceph/ceph:v20, name=loving_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:54 np0005558241 podman[98543]: 2025-12-13 07:33:54.09186919 +0000 UTC m=+0.021275288 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:54 np0005558241 podman[98543]: 2025-12-13 07:33:54.190302958 +0000 UTC m=+0.119709036 container start 8bdf10a8834db7e04847388a039c5fc43acd78b3e455350c65a02f80aafd248e (image=quay.io/ceph/ceph:v20, name=loving_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 02:33:54 np0005558241 podman[98543]: 2025-12-13 07:33:54.196492191 +0000 UTC m=+0.125898309 container attach 8bdf10a8834db7e04847388a039c5fc43acd78b3e455350c65a02f80aafd248e (image=quay.io/ceph/ceph:v20, name=loving_snyder, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3148396081' entity='client.rgw.rgw.compute-0.tiyxaw' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Dec 13 02:33:54 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec 13 02:33:54 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec 13 02:33:54 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/386218196' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 02:33:54 np0005558241 loving_snyder[98562]: 
Dec 13 02:33:54 np0005558241 systemd[1]: libpod-8bdf10a8834db7e04847388a039c5fc43acd78b3e455350c65a02f80aafd248e.scope: Deactivated successfully.
Dec 13 02:33:54 np0005558241 loving_snyder[98562]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.tiyxaw","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Dec 13 02:33:54 np0005558241 podman[98543]: 2025-12-13 07:33:54.611960082 +0000 UTC m=+0.541366170 container died 8bdf10a8834db7e04847388a039c5fc43acd78b3e455350c65a02f80aafd248e (image=quay.io/ceph/ceph:v20, name=loving_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:33:54 np0005558241 systemd[1]: var-lib-containers-storage-overlay-fcd84dba00af1812bc735653fddc13f7853b4830eeb1f28e1401c30aae7a5ba5-merged.mount: Deactivated successfully.
Dec 13 02:33:54 np0005558241 podman[98543]: 2025-12-13 07:33:54.654674419 +0000 UTC m=+0.584080487 container remove 8bdf10a8834db7e04847388a039c5fc43acd78b3e455350c65a02f80aafd248e (image=quay.io/ceph/ceph:v20, name=loving_snyder, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 02:33:54 np0005558241 systemd[1]: libpod-conmon-8bdf10a8834db7e04847388a039c5fc43acd78b3e455350c65a02f80aafd248e.scope: Deactivated successfully.
Dec 13 02:33:54 np0005558241 podman[98675]: 2025-12-13 07:33:54.69909904 +0000 UTC m=+0.038448324 container create ea1c87b529980bd76c3fdd096ed96b566219347793cb9c77efaa1062ad6fa196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 13 02:33:54 np0005558241 systemd[1]: Started libpod-conmon-ea1c87b529980bd76c3fdd096ed96b566219347793cb9c77efaa1062ad6fa196.scope.
Dec 13 02:33:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:54 np0005558241 podman[98675]: 2025-12-13 07:33:54.761102275 +0000 UTC m=+0.100451559 container init ea1c87b529980bd76c3fdd096ed96b566219347793cb9c77efaa1062ad6fa196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_maxwell, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:33:54 np0005558241 podman[98675]: 2025-12-13 07:33:54.766031027 +0000 UTC m=+0.105380301 container start ea1c87b529980bd76c3fdd096ed96b566219347793cb9c77efaa1062ad6fa196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:33:54 np0005558241 determined_maxwell[98691]: 167 167
Dec 13 02:33:54 np0005558241 podman[98675]: 2025-12-13 07:33:54.769287198 +0000 UTC m=+0.108636492 container attach ea1c87b529980bd76c3fdd096ed96b566219347793cb9c77efaa1062ad6fa196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 02:33:54 np0005558241 systemd[1]: libpod-ea1c87b529980bd76c3fdd096ed96b566219347793cb9c77efaa1062ad6fa196.scope: Deactivated successfully.
Dec 13 02:33:54 np0005558241 podman[98675]: 2025-12-13 07:33:54.771089653 +0000 UTC m=+0.110438937 container died ea1c87b529980bd76c3fdd096ed96b566219347793cb9c77efaa1062ad6fa196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 02:33:54 np0005558241 podman[98675]: 2025-12-13 07:33:54.68256586 +0000 UTC m=+0.021915164 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:54 np0005558241 systemd[1]: var-lib-containers-storage-overlay-fd7247325f202babb321a11547240703833d6db667d05294b81efe20dbd3db80-merged.mount: Deactivated successfully.
Dec 13 02:33:54 np0005558241 podman[98675]: 2025-12-13 07:33:54.809929975 +0000 UTC m=+0.149279279 container remove ea1c87b529980bd76c3fdd096ed96b566219347793cb9c77efaa1062ad6fa196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:33:54 np0005558241 systemd[1]: libpod-conmon-ea1c87b529980bd76c3fdd096ed96b566219347793cb9c77efaa1062ad6fa196.scope: Deactivated successfully.
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:33:54 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3148396081' entity='client.rgw.rgw.compute-0.tiyxaw' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Dec 13 02:33:54 np0005558241 podman[98715]: 2025-12-13 07:33:54.998175857 +0000 UTC m=+0.062974991 container create 2c999ee1d59d424898351db1224f1758e1887f1532e4bc28c8f30a0aa94c28aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:55 np0005558241 systemd[1]: Started libpod-conmon-2c999ee1d59d424898351db1224f1758e1887f1532e4bc28c8f30a0aa94c28aa.scope.
Dec 13 02:33:55 np0005558241 podman[98715]: 2025-12-13 07:33:54.971584659 +0000 UTC m=+0.036383863 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:55 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33929251d14eadc404f7e3aada5819975bcedd0bc0a7fbbe3ba34bab0437cb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33929251d14eadc404f7e3aada5819975bcedd0bc0a7fbbe3ba34bab0437cb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33929251d14eadc404f7e3aada5819975bcedd0bc0a7fbbe3ba34bab0437cb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33929251d14eadc404f7e3aada5819975bcedd0bc0a7fbbe3ba34bab0437cb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33929251d14eadc404f7e3aada5819975bcedd0bc0a7fbbe3ba34bab0437cb5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:55 np0005558241 podman[98715]: 2025-12-13 07:33:55.097986469 +0000 UTC m=+0.162785683 container init 2c999ee1d59d424898351db1224f1758e1887f1532e4bc28c8f30a0aa94c28aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:33:55 np0005558241 podman[98715]: 2025-12-13 07:33:55.111104744 +0000 UTC m=+0.175903888 container start 2c999ee1d59d424898351db1224f1758e1887f1532e4bc28c8f30a0aa94c28aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:55 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec 13 02:33:55 np0005558241 podman[98715]: 2025-12-13 07:33:55.118244951 +0000 UTC m=+0.183044115 container attach 2c999ee1d59d424898351db1224f1758e1887f1532e4bc28c8f30a0aa94c28aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:33:55 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec 13 02:33:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v125: 197 pgs: 1 unknown, 196 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Dec 13 02:33:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec 13 02:33:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3148396081' entity='client.rgw.rgw.compute-0.tiyxaw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 13 02:33:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec 13 02:33:55 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec 13 02:33:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 13 02:33:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3148396081' entity='client.rgw.rgw.compute-0.tiyxaw' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Dec 13 02:33:55 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:33:55 np0005558241 python3[98770]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:55 np0005558241 sleepy_clarke[98731]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:33:55 np0005558241 sleepy_clarke[98731]: --> All data devices are unavailable
Dec 13 02:33:55 np0005558241 systemd[1]: libpod-2c999ee1d59d424898351db1224f1758e1887f1532e4bc28c8f30a0aa94c28aa.scope: Deactivated successfully.
Dec 13 02:33:55 np0005558241 podman[98715]: 2025-12-13 07:33:55.724544808 +0000 UTC m=+0.789344012 container died 2c999ee1d59d424898351db1224f1758e1887f1532e4bc28c8f30a0aa94c28aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:55 np0005558241 podman[98776]: 2025-12-13 07:33:55.76180195 +0000 UTC m=+0.082222117 container create 24915151cd6f467200e5a6c8720fc81cab71b896ddc3f59c135bc15bd6a9463e (image=quay.io/ceph/ceph:v20, name=fervent_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 02:33:55 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d33929251d14eadc404f7e3aada5819975bcedd0bc0a7fbbe3ba34bab0437cb5-merged.mount: Deactivated successfully.
Dec 13 02:33:55 np0005558241 podman[98715]: 2025-12-13 07:33:55.795632608 +0000 UTC m=+0.860431732 container remove 2c999ee1d59d424898351db1224f1758e1887f1532e4bc28c8f30a0aa94c28aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:55 np0005558241 systemd[1]: Started libpod-conmon-24915151cd6f467200e5a6c8720fc81cab71b896ddc3f59c135bc15bd6a9463e.scope.
Dec 13 02:33:55 np0005558241 systemd[1]: libpod-conmon-2c999ee1d59d424898351db1224f1758e1887f1532e4bc28c8f30a0aa94c28aa.scope: Deactivated successfully.
Dec 13 02:33:55 np0005558241 podman[98776]: 2025-12-13 07:33:55.722121368 +0000 UTC m=+0.042541515 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:55 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddeab7fc787bcb1fcd059390602778d4d0d819b55712fe8863a6ddb67979e5d8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddeab7fc787bcb1fcd059390602778d4d0d819b55712fe8863a6ddb67979e5d8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:55 np0005558241 podman[98776]: 2025-12-13 07:33:55.876433349 +0000 UTC m=+0.196853557 container init 24915151cd6f467200e5a6c8720fc81cab71b896ddc3f59c135bc15bd6a9463e (image=quay.io/ceph/ceph:v20, name=fervent_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:55 np0005558241 podman[98776]: 2025-12-13 07:33:55.887268398 +0000 UTC m=+0.207688545 container start 24915151cd6f467200e5a6c8720fc81cab71b896ddc3f59c135bc15bd6a9463e (image=quay.io/ceph/ceph:v20, name=fervent_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:33:55 np0005558241 podman[98776]: 2025-12-13 07:33:55.891749479 +0000 UTC m=+0.212169626 container attach 24915151cd6f467200e5a6c8720fc81cab71b896ddc3f59c135bc15bd6a9463e (image=quay.io/ceph/ceph:v20, name=fervent_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 02:33:55 np0005558241 ceph-mgr[76830]: [progress INFO root] Writing back 12 completed events
Dec 13 02:33:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 02:33:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Dec 13 02:33:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1346767696' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Dec 13 02:33:56 np0005558241 fervent_moore[98806]: mimic
Dec 13 02:33:56 np0005558241 systemd[1]: libpod-24915151cd6f467200e5a6c8720fc81cab71b896ddc3f59c135bc15bd6a9463e.scope: Deactivated successfully.
Dec 13 02:33:56 np0005558241 podman[98776]: 2025-12-13 07:33:56.333605663 +0000 UTC m=+0.654025830 container died 24915151cd6f467200e5a6c8720fc81cab71b896ddc3f59c135bc15bd6a9463e (image=quay.io/ceph/ceph:v20, name=fervent_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 02:33:56 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ddeab7fc787bcb1fcd059390602778d4d0d819b55712fe8863a6ddb67979e5d8-merged.mount: Deactivated successfully.
Dec 13 02:33:56 np0005558241 podman[98891]: 2025-12-13 07:33:56.392119002 +0000 UTC m=+0.085887008 container create 9251bb99fc4db3ee0e79652ffe1aa540f77cd54d5a4e974de3dbed3d9d372de0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dirac, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 02:33:56 np0005558241 podman[98776]: 2025-12-13 07:33:56.397608978 +0000 UTC m=+0.718029095 container remove 24915151cd6f467200e5a6c8720fc81cab71b896ddc3f59c135bc15bd6a9463e (image=quay.io/ceph/ceph:v20, name=fervent_moore, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 02:33:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec 13 02:33:56 np0005558241 systemd[1]: libpod-conmon-24915151cd6f467200e5a6c8720fc81cab71b896ddc3f59c135bc15bd6a9463e.scope: Deactivated successfully.
Dec 13 02:33:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3148396081' entity='client.rgw.rgw.compute-0.tiyxaw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 13 02:33:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec 13 02:33:56 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec 13 02:33:56 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3148396081' entity='client.rgw.rgw.compute-0.tiyxaw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 13 02:33:56 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3148396081' entity='client.rgw.rgw.compute-0.tiyxaw' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Dec 13 02:33:56 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:56 np0005558241 systemd[1]: Started libpod-conmon-9251bb99fc4db3ee0e79652ffe1aa540f77cd54d5a4e974de3dbed3d9d372de0.scope.
Dec 13 02:33:56 np0005558241 podman[98891]: 2025-12-13 07:33:56.353860924 +0000 UTC m=+0.047629020 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:56 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:56 np0005558241 podman[98891]: 2025-12-13 07:33:56.480774878 +0000 UTC m=+0.174542944 container init 9251bb99fc4db3ee0e79652ffe1aa540f77cd54d5a4e974de3dbed3d9d372de0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 02:33:56 np0005558241 podman[98891]: 2025-12-13 07:33:56.487683089 +0000 UTC m=+0.181451125 container start 9251bb99fc4db3ee0e79652ffe1aa540f77cd54d5a4e974de3dbed3d9d372de0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dirac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:33:56 np0005558241 jolly_dirac[98920]: 167 167
Dec 13 02:33:56 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Dec 13 02:33:56 np0005558241 systemd[1]: libpod-9251bb99fc4db3ee0e79652ffe1aa540f77cd54d5a4e974de3dbed3d9d372de0.scope: Deactivated successfully.
Dec 13 02:33:56 np0005558241 podman[98891]: 2025-12-13 07:33:56.495064382 +0000 UTC m=+0.188832388 container attach 9251bb99fc4db3ee0e79652ffe1aa540f77cd54d5a4e974de3dbed3d9d372de0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 02:33:56 np0005558241 podman[98891]: 2025-12-13 07:33:56.495620055 +0000 UTC m=+0.189388091 container died 9251bb99fc4db3ee0e79652ffe1aa540f77cd54d5a4e974de3dbed3d9d372de0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dirac, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:56 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Dec 13 02:33:56 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0dbea1185086b6743619db9c150f1fbacfd268a0f677b433eeecd4588f289e72-merged.mount: Deactivated successfully.
Dec 13 02:33:56 np0005558241 podman[98891]: 2025-12-13 07:33:56.546762372 +0000 UTC m=+0.240530408 container remove 9251bb99fc4db3ee0e79652ffe1aa540f77cd54d5a4e974de3dbed3d9d372de0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 02:33:56 np0005558241 systemd[1]: libpod-conmon-9251bb99fc4db3ee0e79652ffe1aa540f77cd54d5a4e974de3dbed3d9d372de0.scope: Deactivated successfully.
Dec 13 02:33:56 np0005558241 radosgw[97065]: v1 topic migration: starting v1 topic migration..
Dec 13 02:33:56 np0005558241 radosgw[97065]: v1 topic migration: finished v1 topic migration
Dec 13 02:33:56 np0005558241 radosgw[97065]: framework: beast
Dec 13 02:33:56 np0005558241 radosgw[97065]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec 13 02:33:56 np0005558241 radosgw[97065]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec 13 02:33:56 np0005558241 radosgw[97065]: starting handler: beast
Dec 13 02:33:56 np0005558241 podman[98962]: 2025-12-13 07:33:56.774206575 +0000 UTC m=+0.062677973 container create a09e7df961fd83cdb0b579ea09d97557fbd581c054e4139550bc5309a6be05cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_albattani, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:56 np0005558241 radosgw[97065]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 02:33:56 np0005558241 radosgw[97065]: mgrc service_daemon_register rgw.14260 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.tiyxaw,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025,kernel_version=5.14.0-648.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864308,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=6d6c1e0c-c3a9-48aa-bd3f-12e44515fcdf,zone_name=default,zonegroup_id=0afa7a3e-962f-4160-a32b-7eb904821ac1,zonegroup_name=default}
Dec 13 02:33:56 np0005558241 systemd[1]: Started libpod-conmon-a09e7df961fd83cdb0b579ea09d97557fbd581c054e4139550bc5309a6be05cf.scope.
Dec 13 02:33:56 np0005558241 podman[98962]: 2025-12-13 07:33:56.7497681 +0000 UTC m=+0.038239588 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:56 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0b6e23046c76af59adf7e1ef40e87c228a21ddc667c0b9d2f156c06c71221b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0b6e23046c76af59adf7e1ef40e87c228a21ddc667c0b9d2f156c06c71221b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0b6e23046c76af59adf7e1ef40e87c228a21ddc667c0b9d2f156c06c71221b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0b6e23046c76af59adf7e1ef40e87c228a21ddc667c0b9d2f156c06c71221b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:56 np0005558241 podman[98962]: 2025-12-13 07:33:56.888427784 +0000 UTC m=+0.176899182 container init a09e7df961fd83cdb0b579ea09d97557fbd581c054e4139550bc5309a6be05cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_albattani, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 02:33:56 np0005558241 podman[98962]: 2025-12-13 07:33:56.898113444 +0000 UTC m=+0.186584832 container start a09e7df961fd83cdb0b579ea09d97557fbd581c054e4139550bc5309a6be05cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_albattani, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:56 np0005558241 podman[98962]: 2025-12-13 07:33:56.901443537 +0000 UTC m=+0.189914925 container attach a09e7df961fd83cdb0b579ea09d97557fbd581c054e4139550bc5309a6be05cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_albattani, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:33:57 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec 13 02:33:57 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]: {
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:    "0": [
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:        {
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "devices": [
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "/dev/loop3"
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            ],
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "lv_name": "ceph_lv0",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "lv_size": "21470642176",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "name": "ceph_lv0",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "tags": {
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.cluster_name": "ceph",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.crush_device_class": "",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.encrypted": "0",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.objectstore": "bluestore",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.osd_id": "0",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.type": "block",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.vdo": "0",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.with_tpm": "0"
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            },
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "type": "block",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "vg_name": "ceph_vg0"
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:        }
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:    ],
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:    "1": [
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:        {
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "devices": [
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "/dev/loop4"
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            ],
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "lv_name": "ceph_lv1",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "lv_size": "21470642176",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "name": "ceph_lv1",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "tags": {
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.cluster_name": "ceph",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.crush_device_class": "",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.encrypted": "0",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.objectstore": "bluestore",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.osd_id": "1",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.type": "block",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.vdo": "0",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.with_tpm": "0"
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            },
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "type": "block",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "vg_name": "ceph_vg1"
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:        }
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:    ],
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:    "2": [
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:        {
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "devices": [
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "/dev/loop5"
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            ],
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "lv_name": "ceph_lv2",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "lv_size": "21470642176",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "name": "ceph_lv2",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "tags": {
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.cluster_name": "ceph",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.crush_device_class": "",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.encrypted": "0",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.objectstore": "bluestore",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.osd_id": "2",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.type": "block",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.vdo": "0",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:                "ceph.with_tpm": "0"
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            },
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "type": "block",
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:            "vg_name": "ceph_vg2"
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:        }
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]:    ]
Dec 13 02:33:57 np0005558241 adoring_albattani[98995]: }
Dec 13 02:33:57 np0005558241 systemd[1]: libpod-a09e7df961fd83cdb0b579ea09d97557fbd581c054e4139550bc5309a6be05cf.scope: Deactivated successfully.
Dec 13 02:33:57 np0005558241 podman[98962]: 2025-12-13 07:33:57.287581059 +0000 UTC m=+0.576052457 container died a09e7df961fd83cdb0b579ea09d97557fbd581c054e4139550bc5309a6be05cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_albattani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 02:33:57 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2e0b6e23046c76af59adf7e1ef40e87c228a21ddc667c0b9d2f156c06c71221b-merged.mount: Deactivated successfully.
Dec 13 02:33:57 np0005558241 podman[98962]: 2025-12-13 07:33:57.334635995 +0000 UTC m=+0.623107393 container remove a09e7df961fd83cdb0b579ea09d97557fbd581c054e4139550bc5309a6be05cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_albattani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 02:33:57 np0005558241 systemd[1]: libpod-conmon-a09e7df961fd83cdb0b579ea09d97557fbd581c054e4139550bc5309a6be05cf.scope: Deactivated successfully.
Dec 13 02:33:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:33:57 np0005558241 ceph-mds[98037]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 13 02:33:57 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mds-cephfs-compute-0-gfdyct[98033]: 2025-12-13T07:33:57.369+0000 7f57b9b52640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 13 02:33:57 np0005558241 python3[99029]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:33:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v128: 197 pgs: 1 unknown, 196 active+clean; 453 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s wr, 8 op/s
Dec 13 02:33:57 np0005558241 podman[99042]: 2025-12-13 07:33:57.437137844 +0000 UTC m=+0.042873643 container create 20ce8de476ffa216aefc402109358b8dce7b8286d018ecbce2347f6f7289fcf1 (image=quay.io/ceph/ceph:v20, name=boring_ritchie, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 02:33:57 np0005558241 ceph-mon[76537]: from='client.? 192.168.122.100:0/3148396081' entity='client.rgw.rgw.compute-0.tiyxaw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 13 02:33:57 np0005558241 systemd[1]: Started libpod-conmon-20ce8de476ffa216aefc402109358b8dce7b8286d018ecbce2347f6f7289fcf1.scope.
Dec 13 02:33:57 np0005558241 podman[99042]: 2025-12-13 07:33:57.417788494 +0000 UTC m=+0.023524273 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:33:57 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/832eaf633d1cc1274ae565005343c8a5540b6d36864f70db86be852808b3aa0c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/832eaf633d1cc1274ae565005343c8a5540b6d36864f70db86be852808b3aa0c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:57 np0005558241 podman[99042]: 2025-12-13 07:33:57.528718002 +0000 UTC m=+0.134453781 container init 20ce8de476ffa216aefc402109358b8dce7b8286d018ecbce2347f6f7289fcf1 (image=quay.io/ceph/ceph:v20, name=boring_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 02:33:57 np0005558241 podman[99042]: 2025-12-13 07:33:57.537153641 +0000 UTC m=+0.142889390 container start 20ce8de476ffa216aefc402109358b8dce7b8286d018ecbce2347f6f7289fcf1 (image=quay.io/ceph/ceph:v20, name=boring_ritchie, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:57 np0005558241 podman[99042]: 2025-12-13 07:33:57.540185536 +0000 UTC m=+0.145921285 container attach 20ce8de476ffa216aefc402109358b8dce7b8286d018ecbce2347f6f7289fcf1 (image=quay.io/ceph/ceph:v20, name=boring_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 02:33:57 np0005558241 podman[99143]: 2025-12-13 07:33:57.769213438 +0000 UTC m=+0.044545024 container create bdc188fce9bc63a9b6cae1b52f8b4af679360029d9f8aee2fd39a33e0dd0d136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:33:57 np0005558241 systemd[1]: Started libpod-conmon-bdc188fce9bc63a9b6cae1b52f8b4af679360029d9f8aee2fd39a33e0dd0d136.scope.
Dec 13 02:33:57 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:57 np0005558241 podman[99143]: 2025-12-13 07:33:57.743514792 +0000 UTC m=+0.018846398 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:57 np0005558241 podman[99143]: 2025-12-13 07:33:57.85166546 +0000 UTC m=+0.126997126 container init bdc188fce9bc63a9b6cae1b52f8b4af679360029d9f8aee2fd39a33e0dd0d136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 02:33:57 np0005558241 podman[99143]: 2025-12-13 07:33:57.859110335 +0000 UTC m=+0.134441961 container start bdc188fce9bc63a9b6cae1b52f8b4af679360029d9f8aee2fd39a33e0dd0d136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 02:33:57 np0005558241 compassionate_hamilton[99159]: 167 167
Dec 13 02:33:57 np0005558241 systemd[1]: libpod-bdc188fce9bc63a9b6cae1b52f8b4af679360029d9f8aee2fd39a33e0dd0d136.scope: Deactivated successfully.
Dec 13 02:33:57 np0005558241 podman[99143]: 2025-12-13 07:33:57.86819413 +0000 UTC m=+0.143525806 container attach bdc188fce9bc63a9b6cae1b52f8b4af679360029d9f8aee2fd39a33e0dd0d136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 02:33:57 np0005558241 podman[99143]: 2025-12-13 07:33:57.868632241 +0000 UTC m=+0.143963857 container died bdc188fce9bc63a9b6cae1b52f8b4af679360029d9f8aee2fd39a33e0dd0d136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Dec 13 02:33:57 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f3e14a62d15d4e4534b92c8124868d3af5cf1a1c10c3780741f5b68fbd0eb567-merged.mount: Deactivated successfully.
Dec 13 02:33:57 np0005558241 podman[99143]: 2025-12-13 07:33:57.925660613 +0000 UTC m=+0.200992239 container remove bdc188fce9bc63a9b6cae1b52f8b4af679360029d9f8aee2fd39a33e0dd0d136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:33:57 np0005558241 systemd[1]: libpod-conmon-bdc188fce9bc63a9b6cae1b52f8b4af679360029d9f8aee2fd39a33e0dd0d136.scope: Deactivated successfully.
Dec 13 02:33:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Dec 13 02:33:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3830586787' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Dec 13 02:33:58 np0005558241 boring_ritchie[99095]: 
Dec 13 02:33:58 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec 13 02:33:58 np0005558241 systemd[1]: libpod-20ce8de476ffa216aefc402109358b8dce7b8286d018ecbce2347f6f7289fcf1.scope: Deactivated successfully.
Dec 13 02:33:58 np0005558241 boring_ritchie[99095]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Dec 13 02:33:58 np0005558241 podman[99042]: 2025-12-13 07:33:58.064857111 +0000 UTC m=+0.670592870 container died 20ce8de476ffa216aefc402109358b8dce7b8286d018ecbce2347f6f7289fcf1 (image=quay.io/ceph/ceph:v20, name=boring_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 02:33:58 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec 13 02:33:58 np0005558241 systemd[1]: var-lib-containers-storage-overlay-832eaf633d1cc1274ae565005343c8a5540b6d36864f70db86be852808b3aa0c-merged.mount: Deactivated successfully.
Dec 13 02:33:58 np0005558241 podman[99042]: 2025-12-13 07:33:58.119055533 +0000 UTC m=+0.724791302 container remove 20ce8de476ffa216aefc402109358b8dce7b8286d018ecbce2347f6f7289fcf1 (image=quay.io/ceph/ceph:v20, name=boring_ritchie, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 13 02:33:58 np0005558241 systemd[1]: libpod-conmon-20ce8de476ffa216aefc402109358b8dce7b8286d018ecbce2347f6f7289fcf1.scope: Deactivated successfully.
Dec 13 02:33:58 np0005558241 podman[99198]: 2025-12-13 07:33:58.172042265 +0000 UTC m=+0.058096149 container create 629bcd7549882dfe7c61ccf576fafa2e29da2b826ab9573d0a3da43cc519abdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 02:33:58 np0005558241 systemd[1]: Started libpod-conmon-629bcd7549882dfe7c61ccf576fafa2e29da2b826ab9573d0a3da43cc519abdd.scope.
Dec 13 02:33:58 np0005558241 podman[99198]: 2025-12-13 07:33:58.147052856 +0000 UTC m=+0.033106820 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:33:58 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:33:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc3ba65a331c39e4d0b5cea04d697a69b0b244606b858a38b47625cdb2dbaa7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc3ba65a331c39e4d0b5cea04d697a69b0b244606b858a38b47625cdb2dbaa7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc3ba65a331c39e4d0b5cea04d697a69b0b244606b858a38b47625cdb2dbaa7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc3ba65a331c39e4d0b5cea04d697a69b0b244606b858a38b47625cdb2dbaa7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:33:58 np0005558241 podman[99198]: 2025-12-13 07:33:58.29091438 +0000 UTC m=+0.176968354 container init 629bcd7549882dfe7c61ccf576fafa2e29da2b826ab9573d0a3da43cc519abdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 02:33:58 np0005558241 podman[99198]: 2025-12-13 07:33:58.304849255 +0000 UTC m=+0.190903169 container start 629bcd7549882dfe7c61ccf576fafa2e29da2b826ab9573d0a3da43cc519abdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_williams, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:33:58 np0005558241 podman[99198]: 2025-12-13 07:33:58.309315175 +0000 UTC m=+0.195369089 container attach 629bcd7549882dfe7c61ccf576fafa2e29da2b826ab9573d0a3da43cc519abdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_williams, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:59 np0005558241 lvm[99293]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:33:59 np0005558241 lvm[99293]: VG ceph_vg1 finished
Dec 13 02:33:59 np0005558241 lvm[99294]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:33:59 np0005558241 lvm[99294]: VG ceph_vg0 finished
Dec 13 02:33:59 np0005558241 lvm[99296]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:33:59 np0005558241 lvm[99296]: VG ceph_vg2 finished
Dec 13 02:33:59 np0005558241 strange_williams[99215]: {}
Dec 13 02:33:59 np0005558241 systemd[1]: libpod-629bcd7549882dfe7c61ccf576fafa2e29da2b826ab9573d0a3da43cc519abdd.scope: Deactivated successfully.
Dec 13 02:33:59 np0005558241 podman[99198]: 2025-12-13 07:33:59.193270889 +0000 UTC m=+1.079324773 container died 629bcd7549882dfe7c61ccf576fafa2e29da2b826ab9573d0a3da43cc519abdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:33:59 np0005558241 systemd[1]: libpod-629bcd7549882dfe7c61ccf576fafa2e29da2b826ab9573d0a3da43cc519abdd.scope: Consumed 1.415s CPU time.
Dec 13 02:33:59 np0005558241 systemd[1]: var-lib-containers-storage-overlay-fcc3ba65a331c39e4d0b5cea04d697a69b0b244606b858a38b47625cdb2dbaa7-merged.mount: Deactivated successfully.
Dec 13 02:33:59 np0005558241 podman[99198]: 2025-12-13 07:33:59.240486408 +0000 UTC m=+1.126540322 container remove 629bcd7549882dfe7c61ccf576fafa2e29da2b826ab9573d0a3da43cc519abdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 02:33:59 np0005558241 systemd[1]: libpod-conmon-629bcd7549882dfe7c61ccf576fafa2e29da2b826ab9573d0a3da43cc519abdd.scope: Deactivated successfully.
Dec 13 02:33:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:33:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:33:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v129: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 13 KiB/s wr, 240 op/s
Dec 13 02:33:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:33:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:33:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:59 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:59 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:59 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:59 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:33:59 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 13 02:33:59 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 13 02:34:00 np0005558241 podman[99431]: 2025-12-13 07:34:00.198066715 +0000 UTC m=+0.078792662 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:34:00 np0005558241 podman[99431]: 2025-12-13 07:34:00.355614517 +0000 UTC m=+0.236340424 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:34:00 np0005558241 ceph-mgr[76830]: [progress INFO root] Completed event 64d33a97-d822-42ca-8db8-e155e3382cea (Global Recovery Event) in 10 seconds
Dec 13 02:34:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:34:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:34:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:34:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:34:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:34:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:34:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:34:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:34:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:34:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:34:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:34:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:34:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:34:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:34:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:34:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:34:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 9.4 KiB/s wr, 203 op/s
Dec 13 02:34:01 np0005558241 podman[99682]: 2025-12-13 07:34:01.846488102 +0000 UTC m=+0.054274435 container create cb851c9c094da54ba8c2c0da6a2374972d1aedfb09dc5e62c4b70341d0ca9d2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_torvalds, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:34:01 np0005558241 systemd[1]: Started libpod-conmon-cb851c9c094da54ba8c2c0da6a2374972d1aedfb09dc5e62c4b70341d0ca9d2a.scope.
Dec 13 02:34:01 np0005558241 podman[99682]: 2025-12-13 07:34:01.828594019 +0000 UTC m=+0.036380342 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:34:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:34:01 np0005558241 podman[99682]: 2025-12-13 07:34:01.949604296 +0000 UTC m=+0.157390639 container init cb851c9c094da54ba8c2c0da6a2374972d1aedfb09dc5e62c4b70341d0ca9d2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_torvalds, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:34:01 np0005558241 podman[99682]: 2025-12-13 07:34:01.958293081 +0000 UTC m=+0.166079444 container start cb851c9c094da54ba8c2c0da6a2374972d1aedfb09dc5e62c4b70341d0ca9d2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:34:01 np0005558241 podman[99682]: 2025-12-13 07:34:01.962477565 +0000 UTC m=+0.170263898 container attach cb851c9c094da54ba8c2c0da6a2374972d1aedfb09dc5e62c4b70341d0ca9d2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 02:34:01 np0005558241 nice_torvalds[99699]: 167 167
Dec 13 02:34:01 np0005558241 systemd[1]: libpod-cb851c9c094da54ba8c2c0da6a2374972d1aedfb09dc5e62c4b70341d0ca9d2a.scope: Deactivated successfully.
Dec 13 02:34:01 np0005558241 podman[99682]: 2025-12-13 07:34:01.964792512 +0000 UTC m=+0.172578855 container died cb851c9c094da54ba8c2c0da6a2374972d1aedfb09dc5e62c4b70341d0ca9d2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_torvalds, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:34:01 np0005558241 systemd[1]: var-lib-containers-storage-overlay-76f19f726637254653a9390c4c300dd62ada99bcda66ebc5de12288deaa66719-merged.mount: Deactivated successfully.
Dec 13 02:34:02 np0005558241 podman[99682]: 2025-12-13 07:34:02.008117425 +0000 UTC m=+0.215903768 container remove cb851c9c094da54ba8c2c0da6a2374972d1aedfb09dc5e62c4b70341d0ca9d2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:34:02 np0005558241 systemd[1]: libpod-conmon-cb851c9c094da54ba8c2c0da6a2374972d1aedfb09dc5e62c4b70341d0ca9d2a.scope: Deactivated successfully.
Dec 13 02:34:02 np0005558241 podman[99723]: 2025-12-13 07:34:02.179444189 +0000 UTC m=+0.061477664 container create fd9942b372c389ad40cf0bb06961b3e2f3b96207e6c908747b22f2159b1fa77c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_khayyam, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:34:02 np0005558241 podman[99723]: 2025-12-13 07:34:02.155907536 +0000 UTC m=+0.037941101 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:34:02 np0005558241 systemd[1]: Started libpod-conmon-fd9942b372c389ad40cf0bb06961b3e2f3b96207e6c908747b22f2159b1fa77c.scope.
Dec 13 02:34:02 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:34:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f77bf6aa67b64522a442f36bbdc395278b4ea4d2329b782ac025e43b856cbac0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f77bf6aa67b64522a442f36bbdc395278b4ea4d2329b782ac025e43b856cbac0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f77bf6aa67b64522a442f36bbdc395278b4ea4d2329b782ac025e43b856cbac0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f77bf6aa67b64522a442f36bbdc395278b4ea4d2329b782ac025e43b856cbac0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f77bf6aa67b64522a442f36bbdc395278b4ea4d2329b782ac025e43b856cbac0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:34:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:34:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:34:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:34:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:34:02 np0005558241 podman[99723]: 2025-12-13 07:34:02.328449559 +0000 UTC m=+0.210483084 container init fd9942b372c389ad40cf0bb06961b3e2f3b96207e6c908747b22f2159b1fa77c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:34:02 np0005558241 podman[99723]: 2025-12-13 07:34:02.340168329 +0000 UTC m=+0.222201844 container start fd9942b372c389ad40cf0bb06961b3e2f3b96207e6c908747b22f2159b1fa77c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_khayyam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:34:02 np0005558241 podman[99723]: 2025-12-13 07:34:02.346587728 +0000 UTC m=+0.228621313 container attach fd9942b372c389ad40cf0bb06961b3e2f3b96207e6c908747b22f2159b1fa77c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_khayyam, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:34:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:02 np0005558241 confident_khayyam[99739]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:34:02 np0005558241 confident_khayyam[99739]: --> All data devices are unavailable
Dec 13 02:34:02 np0005558241 systemd[1]: libpod-fd9942b372c389ad40cf0bb06961b3e2f3b96207e6c908747b22f2159b1fa77c.scope: Deactivated successfully.
Dec 13 02:34:02 np0005558241 podman[99723]: 2025-12-13 07:34:02.966294317 +0000 UTC m=+0.848327802 container died fd9942b372c389ad40cf0bb06961b3e2f3b96207e6c908747b22f2159b1fa77c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_khayyam, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 02:34:02 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f77bf6aa67b64522a442f36bbdc395278b4ea4d2329b782ac025e43b856cbac0-merged.mount: Deactivated successfully.
Dec 13 02:34:03 np0005558241 podman[99723]: 2025-12-13 07:34:03.023797901 +0000 UTC m=+0.905831416 container remove fd9942b372c389ad40cf0bb06961b3e2f3b96207e6c908747b22f2159b1fa77c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_khayyam, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:34:03 np0005558241 systemd[1]: libpod-conmon-fd9942b372c389ad40cf0bb06961b3e2f3b96207e6c908747b22f2159b1fa77c.scope: Deactivated successfully.
Dec 13 02:34:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v131: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 8.2 KiB/s wr, 178 op/s
Dec 13 02:34:03 np0005558241 podman[99833]: 2025-12-13 07:34:03.530765967 +0000 UTC m=+0.055597738 container create f4460d34ec43ab17c9c0124f6a8cf475012ef0f5c0455def3831603cc3f6d1ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chatterjee, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:34:03 np0005558241 systemd[1]: Started libpod-conmon-f4460d34ec43ab17c9c0124f6a8cf475012ef0f5c0455def3831603cc3f6d1ac.scope.
Dec 13 02:34:03 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:34:03 np0005558241 podman[99833]: 2025-12-13 07:34:03.511637874 +0000 UTC m=+0.036469685 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:34:03 np0005558241 podman[99833]: 2025-12-13 07:34:03.626759465 +0000 UTC m=+0.151591286 container init f4460d34ec43ab17c9c0124f6a8cf475012ef0f5c0455def3831603cc3f6d1ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:34:03 np0005558241 podman[99833]: 2025-12-13 07:34:03.637644745 +0000 UTC m=+0.162476536 container start f4460d34ec43ab17c9c0124f6a8cf475012ef0f5c0455def3831603cc3f6d1ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:34:03 np0005558241 podman[99833]: 2025-12-13 07:34:03.642258789 +0000 UTC m=+0.167090650 container attach f4460d34ec43ab17c9c0124f6a8cf475012ef0f5c0455def3831603cc3f6d1ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chatterjee, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 02:34:03 np0005558241 zealous_chatterjee[99849]: 167 167
Dec 13 02:34:03 np0005558241 systemd[1]: libpod-f4460d34ec43ab17c9c0124f6a8cf475012ef0f5c0455def3831603cc3f6d1ac.scope: Deactivated successfully.
Dec 13 02:34:03 np0005558241 conmon[99849]: conmon f4460d34ec43ab17c9c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f4460d34ec43ab17c9c0124f6a8cf475012ef0f5c0455def3831603cc3f6d1ac.scope/container/memory.events
Dec 13 02:34:03 np0005558241 podman[99854]: 2025-12-13 07:34:03.707391892 +0000 UTC m=+0.042537604 container died f4460d34ec43ab17c9c0124f6a8cf475012ef0f5c0455def3831603cc3f6d1ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:34:03 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c8ea0812947cd8bc0ced2b23b8bdaf793f7bc45597817fdaa683401d4d502744-merged.mount: Deactivated successfully.
Dec 13 02:34:03 np0005558241 podman[99854]: 2025-12-13 07:34:03.756011846 +0000 UTC m=+0.091157528 container remove f4460d34ec43ab17c9c0124f6a8cf475012ef0f5c0455def3831603cc3f6d1ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 02:34:03 np0005558241 systemd[1]: libpod-conmon-f4460d34ec43ab17c9c0124f6a8cf475012ef0f5c0455def3831603cc3f6d1ac.scope: Deactivated successfully.
Dec 13 02:34:04 np0005558241 podman[99876]: 2025-12-13 07:34:04.017579065 +0000 UTC m=+0.071109352 container create 9e26c947295e8b3836ecc43a58e1fcc07d87dfbc241854104803bc151002878e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_pare, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 02:34:04 np0005558241 systemd[1]: Started libpod-conmon-9e26c947295e8b3836ecc43a58e1fcc07d87dfbc241854104803bc151002878e.scope.
Dec 13 02:34:04 np0005558241 podman[99876]: 2025-12-13 07:34:03.987253664 +0000 UTC m=+0.040784011 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:34:04 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:34:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82856dc38a0df8db370c87bac3a9f047e111f9ab96d4db0172ae9abe027332be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82856dc38a0df8db370c87bac3a9f047e111f9ab96d4db0172ae9abe027332be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82856dc38a0df8db370c87bac3a9f047e111f9ab96d4db0172ae9abe027332be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82856dc38a0df8db370c87bac3a9f047e111f9ab96d4db0172ae9abe027332be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:04 np0005558241 podman[99876]: 2025-12-13 07:34:04.127289192 +0000 UTC m=+0.180819519 container init 9e26c947295e8b3836ecc43a58e1fcc07d87dfbc241854104803bc151002878e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 02:34:04 np0005558241 podman[99876]: 2025-12-13 07:34:04.139970546 +0000 UTC m=+0.193500833 container start 9e26c947295e8b3836ecc43a58e1fcc07d87dfbc241854104803bc151002878e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 02:34:04 np0005558241 podman[99876]: 2025-12-13 07:34:04.144366925 +0000 UTC m=+0.197897212 container attach 9e26c947295e8b3836ecc43a58e1fcc07d87dfbc241854104803bc151002878e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 13 02:34:04 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec 13 02:34:04 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec 13 02:34:04 np0005558241 happy_pare[99892]: {
Dec 13 02:34:04 np0005558241 happy_pare[99892]:    "0": [
Dec 13 02:34:04 np0005558241 happy_pare[99892]:        {
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "devices": [
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "/dev/loop3"
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            ],
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "lv_name": "ceph_lv0",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "lv_size": "21470642176",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "name": "ceph_lv0",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "tags": {
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.cluster_name": "ceph",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.crush_device_class": "",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.encrypted": "0",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.objectstore": "bluestore",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.osd_id": "0",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.type": "block",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.vdo": "0",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.with_tpm": "0"
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            },
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "type": "block",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "vg_name": "ceph_vg0"
Dec 13 02:34:04 np0005558241 happy_pare[99892]:        }
Dec 13 02:34:04 np0005558241 happy_pare[99892]:    ],
Dec 13 02:34:04 np0005558241 happy_pare[99892]:    "1": [
Dec 13 02:34:04 np0005558241 happy_pare[99892]:        {
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "devices": [
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "/dev/loop4"
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            ],
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "lv_name": "ceph_lv1",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "lv_size": "21470642176",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "name": "ceph_lv1",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "tags": {
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.cluster_name": "ceph",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.crush_device_class": "",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.encrypted": "0",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.objectstore": "bluestore",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.osd_id": "1",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.type": "block",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.vdo": "0",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.with_tpm": "0"
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            },
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "type": "block",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "vg_name": "ceph_vg1"
Dec 13 02:34:04 np0005558241 happy_pare[99892]:        }
Dec 13 02:34:04 np0005558241 happy_pare[99892]:    ],
Dec 13 02:34:04 np0005558241 happy_pare[99892]:    "2": [
Dec 13 02:34:04 np0005558241 happy_pare[99892]:        {
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "devices": [
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "/dev/loop5"
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            ],
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "lv_name": "ceph_lv2",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "lv_size": "21470642176",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "name": "ceph_lv2",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "tags": {
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.cluster_name": "ceph",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.crush_device_class": "",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.encrypted": "0",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.objectstore": "bluestore",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.osd_id": "2",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.type": "block",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.vdo": "0",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:                "ceph.with_tpm": "0"
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            },
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "type": "block",
Dec 13 02:34:04 np0005558241 happy_pare[99892]:            "vg_name": "ceph_vg2"
Dec 13 02:34:04 np0005558241 happy_pare[99892]:        }
Dec 13 02:34:04 np0005558241 happy_pare[99892]:    ]
Dec 13 02:34:04 np0005558241 happy_pare[99892]: }
Dec 13 02:34:04 np0005558241 systemd[1]: libpod-9e26c947295e8b3836ecc43a58e1fcc07d87dfbc241854104803bc151002878e.scope: Deactivated successfully.
Dec 13 02:34:04 np0005558241 podman[99876]: 2025-12-13 07:34:04.503899399 +0000 UTC m=+0.557429676 container died 9e26c947295e8b3836ecc43a58e1fcc07d87dfbc241854104803bc151002878e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 02:34:04 np0005558241 systemd[1]: var-lib-containers-storage-overlay-82856dc38a0df8db370c87bac3a9f047e111f9ab96d4db0172ae9abe027332be-merged.mount: Deactivated successfully.
Dec 13 02:34:04 np0005558241 podman[99876]: 2025-12-13 07:34:04.56087755 +0000 UTC m=+0.614407797 container remove 9e26c947295e8b3836ecc43a58e1fcc07d87dfbc241854104803bc151002878e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:34:04 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Dec 13 02:34:04 np0005558241 systemd[1]: libpod-conmon-9e26c947295e8b3836ecc43a58e1fcc07d87dfbc241854104803bc151002878e.scope: Deactivated successfully.
Dec 13 02:34:04 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Dec 13 02:34:05 np0005558241 podman[99976]: 2025-12-13 07:34:05.106116684 +0000 UTC m=+0.069200755 container create 8a2243fbc0ef67a24c9a27d8cb725923271e13e1413cb74c681930075251c211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_zhukovsky, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 02:34:05 np0005558241 systemd[1]: Started libpod-conmon-8a2243fbc0ef67a24c9a27d8cb725923271e13e1413cb74c681930075251c211.scope.
Dec 13 02:34:05 np0005558241 podman[99976]: 2025-12-13 07:34:05.07444759 +0000 UTC m=+0.037531721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:34:05 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:34:05 np0005558241 podman[99976]: 2025-12-13 07:34:05.203730712 +0000 UTC m=+0.166814833 container init 8a2243fbc0ef67a24c9a27d8cb725923271e13e1413cb74c681930075251c211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:34:05 np0005558241 podman[99976]: 2025-12-13 07:34:05.215155085 +0000 UTC m=+0.178239166 container start 8a2243fbc0ef67a24c9a27d8cb725923271e13e1413cb74c681930075251c211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 02:34:05 np0005558241 jolly_zhukovsky[99993]: 167 167
Dec 13 02:34:05 np0005558241 podman[99976]: 2025-12-13 07:34:05.220288992 +0000 UTC m=+0.183373073 container attach 8a2243fbc0ef67a24c9a27d8cb725923271e13e1413cb74c681930075251c211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_zhukovsky, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:34:05 np0005558241 systemd[1]: libpod-8a2243fbc0ef67a24c9a27d8cb725923271e13e1413cb74c681930075251c211.scope: Deactivated successfully.
Dec 13 02:34:05 np0005558241 conmon[99993]: conmon 8a2243fbc0ef67a24c9a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a2243fbc0ef67a24c9a27d8cb725923271e13e1413cb74c681930075251c211.scope/container/memory.events
Dec 13 02:34:05 np0005558241 podman[99976]: 2025-12-13 07:34:05.222060166 +0000 UTC m=+0.185144237 container died 8a2243fbc0ef67a24c9a27d8cb725923271e13e1413cb74c681930075251c211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_zhukovsky, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:34:05 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6ffbebc5e874ae0ce2ff3bae944999605cc7ae2981bd8f7cf2649fd0b81790f7-merged.mount: Deactivated successfully.
Dec 13 02:34:05 np0005558241 podman[99976]: 2025-12-13 07:34:05.27914138 +0000 UTC m=+0.242225461 container remove 8a2243fbc0ef67a24c9a27d8cb725923271e13e1413cb74c681930075251c211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_zhukovsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:34:05 np0005558241 systemd[1]: libpod-conmon-8a2243fbc0ef67a24c9a27d8cb725923271e13e1413cb74c681930075251c211.scope: Deactivated successfully.
Dec 13 02:34:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 6.6 KiB/s wr, 142 op/s
Dec 13 02:34:05 np0005558241 podman[100017]: 2025-12-13 07:34:05.486588248 +0000 UTC m=+0.055888326 container create 29393f1ee432e7602ae5bf399a7da6e661a07820eb75f6756cc6169f73e02688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:34:05 np0005558241 systemd[1]: Started libpod-conmon-29393f1ee432e7602ae5bf399a7da6e661a07820eb75f6756cc6169f73e02688.scope.
Dec 13 02:34:05 np0005558241 podman[100017]: 2025-12-13 07:34:05.457552169 +0000 UTC m=+0.026852227 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:34:05 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:34:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2485de4c6451483d3c87ff09bf6618f2e583cb47ed2cee1861bbfae0e2df98d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2485de4c6451483d3c87ff09bf6618f2e583cb47ed2cee1861bbfae0e2df98d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2485de4c6451483d3c87ff09bf6618f2e583cb47ed2cee1861bbfae0e2df98d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2485de4c6451483d3c87ff09bf6618f2e583cb47ed2cee1861bbfae0e2df98d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:34:05 np0005558241 podman[100017]: 2025-12-13 07:34:05.598592632 +0000 UTC m=+0.167892750 container init 29393f1ee432e7602ae5bf399a7da6e661a07820eb75f6756cc6169f73e02688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_hermann, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:34:05 np0005558241 podman[100017]: 2025-12-13 07:34:05.612904596 +0000 UTC m=+0.182204674 container start 29393f1ee432e7602ae5bf399a7da6e661a07820eb75f6756cc6169f73e02688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:34:05 np0005558241 podman[100017]: 2025-12-13 07:34:05.617793187 +0000 UTC m=+0.187093255 container attach 29393f1ee432e7602ae5bf399a7da6e661a07820eb75f6756cc6169f73e02688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 02:34:05 np0005558241 ceph-mgr[76830]: [progress INFO root] Writing back 13 completed events
Dec 13 02:34:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 02:34:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:34:06 np0005558241 lvm[100112]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:34:06 np0005558241 lvm[100113]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:34:06 np0005558241 lvm[100113]: VG ceph_vg1 finished
Dec 13 02:34:06 np0005558241 lvm[100112]: VG ceph_vg0 finished
Dec 13 02:34:06 np0005558241 lvm[100115]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:34:06 np0005558241 lvm[100115]: VG ceph_vg2 finished
Dec 13 02:34:06 np0005558241 lvm[100116]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:34:06 np0005558241 lvm[100116]: VG ceph_vg0 finished
Dec 13 02:34:06 np0005558241 bold_hermann[100034]: {}
Dec 13 02:34:06 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:34:06 np0005558241 systemd[1]: libpod-29393f1ee432e7602ae5bf399a7da6e661a07820eb75f6756cc6169f73e02688.scope: Deactivated successfully.
Dec 13 02:34:06 np0005558241 systemd[1]: libpod-29393f1ee432e7602ae5bf399a7da6e661a07820eb75f6756cc6169f73e02688.scope: Consumed 1.477s CPU time.
Dec 13 02:34:06 np0005558241 podman[100017]: 2025-12-13 07:34:06.497900135 +0000 UTC m=+1.067200213 container died 29393f1ee432e7602ae5bf399a7da6e661a07820eb75f6756cc6169f73e02688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_hermann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:34:06 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d2485de4c6451483d3c87ff09bf6618f2e583cb47ed2cee1861bbfae0e2df98d-merged.mount: Deactivated successfully.
Dec 13 02:34:06 np0005558241 podman[100017]: 2025-12-13 07:34:06.692546146 +0000 UTC m=+1.261846224 container remove 29393f1ee432e7602ae5bf399a7da6e661a07820eb75f6756cc6169f73e02688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 02:34:06 np0005558241 systemd[1]: libpod-conmon-29393f1ee432e7602ae5bf399a7da6e661a07820eb75f6756cc6169f73e02688.scope: Deactivated successfully.
Dec 13 02:34:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:34:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:34:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:34:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:34:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v133: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 6.0 KiB/s wr, 129 op/s
Dec 13 02:34:07 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:34:07 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:34:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:34:08
Dec 13 02:34:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:34:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:34:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'default.rgw.log', 'backups', 'volumes', 'default.rgw.control']
Dec 13 02:34:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:34:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.5 KiB/s wr, 118 op/s
Dec 13 02:34:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:34:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:34:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:34:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:34:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:34:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:34:10 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Dec 13 02:34:10 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Dec 13 02:34:10 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec 13 02:34:10 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec 13 02:34:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v135: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:11 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 13 02:34:11 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 13 02:34:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:12 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec 13 02:34:12 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec 13 02:34:13 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 13 02:34:13 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 13 02:34:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v136: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:13 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 13 02:34:13 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec 13 02:34:14 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec 13 02:34:14 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec 13 02:34:15 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec 13 02:34:15 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v137: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0931826417176156e-06 of space, bias 4.0, pg target 0.0013118191700611387 quantized to 16 (current 32)
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:34:15 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 02:34:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:34:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:34:16 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec 13 02:34:16 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec 13 02:34:16 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec 13 02:34:16 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec 13 02:34:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec 13 02:34:16 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:34:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:34:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec 13 02:34:16 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec 13 02:34:16 np0005558241 ceph-mgr[76830]: [progress INFO root] update: starting ev 55a33a5c-1377-4c8f-a0bf-f479208f4111 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 13 02:34:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:34:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:34:17 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec 13 02:34:17 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec 13 02:34:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:34:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:34:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v139: 197 pgs: 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:17 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec 13 02:34:17 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec 13 02:34:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec 13 02:34:17 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:34:17 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:34:17 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:34:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:34:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:34:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec 13 02:34:17 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec 13 02:34:17 np0005558241 ceph-mgr[76830]: [progress INFO root] update: starting ev b14b2073-ac1d-4d13-9740-d93731fcef70 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 13 02:34:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:34:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:34:18 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec 13 02:34:18 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec 13 02:34:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec 13 02:34:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:34:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec 13 02:34:19 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 55 pg[8.0( v 46'6 (0'0,46'6] local-lis/les=45/46 n=6 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=55 pruub=10.233946800s) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 46'5 mlcod 46'5 active pruub 138.261245728s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:19 np0005558241 ceph-mgr[76830]: [progress INFO root] update: starting ev 4d51569e-4aad-476a-8349-6cd08f2c3fe7 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 13 02:34:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Dec 13 02:34:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:34:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.0( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=55 pruub=10.233946800s) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 46'5 mlcod 0'0 unknown pruub 138.261245728s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:34:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x56182ad4fd40) split_cache   moving buffer(0x561829bce680 space 0x56182b2bf440 0x0~2e clean)
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x56182ad4fd40) split_cache   moving buffer(0x561829b16c80 space 0x56182943d140 0x0~424 clean)
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.c( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.2( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.5( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.f( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.19( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.12( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.a( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.b( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.10( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.3( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.16( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.1b( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.7( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.e( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.11( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.17( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.13( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.1a( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.1( v 46'6 (0'0,46'6] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.15( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.4( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.d( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.8( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.14( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.6( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.9( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.18( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.1c( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.1d( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.1e( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 56 pg[8.1f( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v142: 228 pgs: 31 unknown, 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:34:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:34:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:34:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec 13 02:34:19 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec 13 02:34:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec 13 02:34:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:34:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:34:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:34:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec 13 02:34:20 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec 13 02:34:20 np0005558241 ceph-mgr[76830]: [progress INFO root] update: starting ev c9e5111b-d3ab-466f-9697-79ff2c857f90 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 13 02:34:20 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 57 pg[10.0( v 52'18 (0'0,52'18] local-lis/les=49/50 n=9 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=13.302277565s) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 52'17 mlcod 52'17 active pruub 133.734542847s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:20 np0005558241 ceph-mgr[76830]: [progress INFO root] complete: finished ev 55a33a5c-1377-4c8f-a0bf-f479208f4111 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 13 02:34:20 np0005558241 ceph-mgr[76830]: [progress INFO root] Completed event 55a33a5c-1377-4c8f-a0bf-f479208f4111 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Dec 13 02:34:20 np0005558241 ceph-mgr[76830]: [progress INFO root] complete: finished ev b14b2073-ac1d-4d13-9740-d93731fcef70 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 13 02:34:20 np0005558241 ceph-mgr[76830]: [progress INFO root] Completed event b14b2073-ac1d-4d13-9740-d93731fcef70 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Dec 13 02:34:20 np0005558241 ceph-mgr[76830]: [progress INFO root] complete: finished ev 4d51569e-4aad-476a-8349-6cd08f2c3fe7 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 13 02:34:20 np0005558241 ceph-mgr[76830]: [progress INFO root] Completed event 4d51569e-4aad-476a-8349-6cd08f2c3fe7 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Dec 13 02:34:20 np0005558241 ceph-mgr[76830]: [progress INFO root] complete: finished ev c9e5111b-d3ab-466f-9697-79ff2c857f90 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 13 02:34:20 np0005558241 ceph-mgr[76830]: [progress INFO root] Completed event c9e5111b-d3ab-466f-9697-79ff2c857f90 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Dec 13 02:34:20 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 57 pg[10.0( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=13.302277565s) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 52'17 mlcod 0'0 unknown pruub 133.734542847s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[9.0( v 53'483 (0'0,53'483] local-lis/les=47/48 n=210 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=11.277227402s) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 53'482 mlcod 53'482 active pruub 140.317016602s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.10( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.17( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.15( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.14( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.16( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.1( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.2( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.3( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.c( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.e( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.d( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.8( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.a( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.b( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.f( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.7( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.9( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.6( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.0( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 46'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.5( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.4( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.18( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.1b( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.1d( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.1f( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.1c( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.1e( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.13( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.11( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.19( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.1a( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[8.12( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 57 pg[9.0( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=11.277227402s) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 53'482 mlcod 0'0 unknown pruub 140.317016602s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:20 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:34:20 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 13 02:34:20 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:34:20 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:34:20 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 13 02:34:20 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:34:20 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829a35500 space 0x561829f78840 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829a6af00 space 0x56182b0e6840 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829bd5300 space 0x56182b105740 0x0~98 clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b4d000 space 0x561829f79140 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829a35400 space 0x56182b10da40 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829a57380 space 0x56182b124540 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b16800 space 0x561829f3d440 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829bd4980 space 0x56182a094b40 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b44580 space 0x561829b74e40 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829bd9500 space 0x56182a002840 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829bd4f00 space 0x56182a095440 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829bcf100 space 0x56182ad73a40 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829bcf500 space 0x56182b0b9740 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b44380 space 0x561829b75740 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b7d300 space 0x56182a093a40 0x0~98 clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b45d00 space 0x56182b0e4240 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829a57180 space 0x56182b07a540 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b92000 space 0x56182b10c840 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b44180 space 0x56182b0ea240 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x5618299c1100 space 0x56182b2c0540 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829c1ae00 space 0x56182b0e4b40 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b16980 space 0x56182a095d40 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b16f00 space 0x561829f3cb40 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b6b600 space 0x5618285af740 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829bd4800 space 0x56182a092540 0x0~98 clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b17100 space 0x561829f3dd40 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b7d880 space 0x56182b108240 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829c04c00 space 0x56182b116840 0x0~98 clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829a39d00 space 0x56182b0e7740 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829bd4280 space 0x56182b10d140 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b7d700 space 0x56182b11f440 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b6b400 space 0x56182b0e6240 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b4de00 space 0x56182a002e40 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b96200 space 0x56182b0ebd40 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b16180 space 0x56182b058e40 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b6a100 space 0x56182b054540 0x0~98 clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829c04d80 space 0x56182b059740 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b4cb00 space 0x56182b11e540 0x0~98 clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561828605c00 space 0x561829b75440 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b6a900 space 0x561829f79a40 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b6be00 space 0x56182b339a40 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b6b800 space 0x5618285aee40 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b4db80 space 0x561829f78240 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829bcf200 space 0x56182ad73140 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b44200 space 0x561829d2bd40 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829a57300 space 0x56182b07cb40 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b4c400 space 0x56182b055a40 0x0~98 clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b6bc80 space 0x56182b339140 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829bd4c80 space 0x56182b0e5a40 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b8c600 space 0x56182b0e5440 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b6f800 space 0x56182b07d440 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b21d80 space 0x56182a003140 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829a41c80 space 0x561829d2b440 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b1cd80 space 0x56182a094240 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829be6c00 space 0x561829f7c840 0x0~9a clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829b96a00 space 0x56182b0e5d40 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829a6ac80 space 0x56182b058540 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829bd9380 space 0x56182b057440 0x0~98 clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x561829add440) split_cache   moving buffer(0x561829c04080 space 0x56182b0eab40 0x0~6e clean)
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec 13 02:34:20 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec 13 02:34:20 np0005558241 ceph-mgr[76830]: [progress INFO root] Writing back 17 completed events
Dec 13 02:34:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 02:34:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:34:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec 13 02:34:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.b( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.1e( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.1b( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.d( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.13( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.a( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.12( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.10( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.11( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.1f( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.1c( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.1d( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.1a( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.19( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.18( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.7( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.6( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.5( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.4( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.f( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.e( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.8( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.c( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.9( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.1( v 52'18 (0'0,52'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.3( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.2( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.14( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.15( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.16( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.17( v 52'18 lc 0'0 (0'0,52'18] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.b( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.15( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.14( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.16( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.17( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.1e( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.1b( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.13( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.a( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.12( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.10( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.1f( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.11( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.1c( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.1d( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.19( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.18( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.7( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.6( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.1a( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.4( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.3( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.2( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.d( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.c( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.5( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.f( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.11( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.f( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.9( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.d( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.b( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.e( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.a( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.8( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.1( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.6( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.7( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.4( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.5( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.1a( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.19( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.1e( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.1f( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.1c( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.18( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.1d( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.12( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.13( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.10( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.1b( v 53'483 lc 0'0 (0'0,53'483] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.e( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.0( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 52'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.1( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.c( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.3( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.15( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.2( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.16( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.14( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.17( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.9( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 58 pg[10.8( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.0( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 53'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.14( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.2( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.a( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.e( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.1( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.5( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.11( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.4( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.1a( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.19( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.12( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.13( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.10( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.18( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.1b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.1e( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 58 pg[9.1c( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=53'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:21 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:34:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v145: 290 pgs: 93 unknown, 197 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 02:34:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:34:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec 13 02:34:22 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 02:34:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:34:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec 13 02:34:22 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec 13 02:34:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:23 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec 13 02:34:23 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec 13 02:34:23 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 02:34:23 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec 13 02:34:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v147: 321 pgs: 62 unknown, 259 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:23 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec 13 02:34:24 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 59 pg[11.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=11.324398041s) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active pruub 144.350738525s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:24 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 59 pg[11.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=11.324398041s) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown pruub 144.350738525s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec 13 02:34:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec 13 02:34:24 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.16( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.15( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.14( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.13( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.2( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.17( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.1( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.f( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.e( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.d( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.b( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.9( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.c( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.8( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.a( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.3( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.4( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.5( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.6( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.7( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.18( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.1a( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.1b( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.1c( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.1d( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.1e( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.10( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.11( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.12( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.1f( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.19( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.16( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.15( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.14( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.13( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.17( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.2( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.0( empty local-lis/les=59/60 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.d( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.9( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.c( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.a( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.8( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.3( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.4( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.5( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.7( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.6( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.18( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.1a( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.1c( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.1b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.1d( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.1e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.10( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.12( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.11( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.19( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.1f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 60 pg[11.1( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v149: 321 pgs: 31 unknown, 290 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v150: 321 pgs: 31 unknown, 290 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:27 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 13 02:34:27 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 13 02:34:28 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec 13 02:34:28 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec 13 02:34:28 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 13 02:34:28 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 13 02:34:29 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec 13 02:34:29 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec 13 02:34:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v151: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:34:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:34:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:34:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:34:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Dec 13 02:34:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec 13 02:34:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:34:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:34:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec 13 02:34:30 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec 13 02:34:30 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec 13 02:34:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:34:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:34:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 13 02:34:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:34:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec 13 02:34:31 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.d scrub starts
Dec 13 02:34:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v153: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.d( v 59'19 (0'0,59'19] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.642704964s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 52'18 active pruub 145.445739746s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.b( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.637641907s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 active pruub 145.440704346s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.b( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.637593269s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 unknown NOTIFY pruub 145.440704346s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.d( v 59'19 (0'0,59'19] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.642609596s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 52'18 unknown NOTIFY pruub 145.445739746s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.1e( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.641002655s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 active pruub 145.444122314s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.1e( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.640925407s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 unknown NOTIFY pruub 145.444122314s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.13( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.640663147s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 active pruub 145.444152832s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.12( v 59'19 (0'0,59'19] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.640766144s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 52'18 active pruub 145.444305420s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.12( v 59'19 (0'0,59'19] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.640729904s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 52'18 unknown NOTIFY pruub 145.444305420s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.13( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.640600204s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 unknown NOTIFY pruub 145.444152832s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.10( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.640669823s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 active pruub 145.444366455s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.10( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.640581131s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 unknown NOTIFY pruub 145.444366455s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.1a( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.640857697s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 active pruub 145.444885254s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.19( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.640789986s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 active pruub 145.444854736s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.1a( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.640834808s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 unknown NOTIFY pruub 145.444885254s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.19( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.640766144s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 unknown NOTIFY pruub 145.444854736s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.7( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.640482903s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 active pruub 145.444869995s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.7( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.640419960s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 unknown NOTIFY pruub 145.444869995s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.4( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.640308380s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 active pruub 145.444900513s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.4( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.640275955s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 unknown NOTIFY pruub 145.444900513s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.8( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.645506859s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 active pruub 145.450195312s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.8( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.645475388s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 unknown NOTIFY pruub 145.450195312s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.f( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.640759468s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 active pruub 145.445602417s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.6( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.639974594s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 active pruub 145.444885254s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.f( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.640690804s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 unknown NOTIFY pruub 145.445602417s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.9( v 59'19 (0'0,59'19] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.645003319s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 52'18 active pruub 145.449966431s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.6( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.639933586s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 unknown NOTIFY pruub 145.444885254s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.9( v 59'19 (0'0,59'19] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.644973755s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 52'18 unknown NOTIFY pruub 145.449966431s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.e( v 59'19 (0'0,59'19] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.644454002s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 52'18 active pruub 145.449584961s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Dec 13 02:34:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.e( v 59'19 (0'0,59'19] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.644406319s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 52'18 unknown NOTIFY pruub 145.449584961s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.1( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.644381523s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 active pruub 145.449615479s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.1( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.644359589s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 unknown NOTIFY pruub 145.449615479s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.2( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.644467354s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 active pruub 145.449859619s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.2( v 52'18 (0'0,52'18] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.644439697s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 unknown NOTIFY pruub 145.449859619s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.14( v 59'19 (0'0,59'19] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.644423485s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 52'18 active pruub 145.449890137s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.14( v 59'19 (0'0,59'19] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.644393921s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 52'18 unknown NOTIFY pruub 145.449890137s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.11( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.639141083s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 active pruub 145.444732666s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.11( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.639109612s) [1] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 unknown NOTIFY pruub 145.444732666s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.15( v 59'19 (0'0,59'19] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.643919945s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 52'18 active pruub 145.449859619s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.17( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.643947601s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 active pruub 145.449920654s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.17( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.643923759s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 unknown NOTIFY pruub 145.449920654s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.15( v 59'19 (0'0,59'19] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.643866539s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 52'18 unknown NOTIFY pruub 145.449859619s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.16( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.643745422s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 active pruub 145.449890137s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[10.16( v 52'18 (0'0,52'18] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.643725395s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 unknown NOTIFY pruub 145.449890137s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.d scrub ok
Dec 13 02:34:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec 13 02:34:31 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[10.9( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[10.8( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[10.13( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[10.15( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[10.4( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[10.10( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[10.7( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[10.11( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[10.17( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[10.d( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[10.1a( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[10.e( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[10.6( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[10.16( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[10.1( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[10.2( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[10.b( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[10.f( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[10.12( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[10.14( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.17( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.409269333s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226043701s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.14( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.226127625s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.043060303s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.14( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.226058960s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.043060303s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.17( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.409231186s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226043701s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.15( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.225359917s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.043014526s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.234888077s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 154.052551270s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.15( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.225331306s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.043014526s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.234856606s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 154.052551270s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.206557274s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.024444580s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.15( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.205913544s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.024353027s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.10( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.224416733s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.043060303s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[8.15( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.10( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.224388123s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.043060303s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.205513954s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.024444580s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.2( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.407052040s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226043701s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.230975151s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 154.049972534s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.2( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.407011032s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226043701s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.11( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.234251976s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 154.053421021s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.230926514s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 154.049972534s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.15( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.205780029s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.024353027s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.11( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.234222412s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 154.053421021s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.2( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.223250389s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.043151855s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.232605934s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 154.052520752s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[11.2( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.2( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.223213196s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.043151855s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.232574463s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 154.052520752s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.406240463s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226531982s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.405800819s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226226807s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.c( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.222778320s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.043243408s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.c( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.222750664s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.043243408s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.406199455s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226531982s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[11.15( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.405319214s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226242065s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.405730247s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226226807s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.405247688s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226242065s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.d( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.222191811s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.043228149s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.d( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.222160339s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.043228149s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.d( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.405042648s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226287842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.d( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.405018806s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226287842s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[8.2( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.231225967s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 154.052627563s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.e( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.221613884s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.043228149s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.231225967s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 154.052917480s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.231148720s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 154.052917480s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.e( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.221420288s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.043228149s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.404793739s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226531982s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[8.d( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.230962753s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 154.052627563s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.230913162s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 154.053207397s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.230883598s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 154.053207397s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.404356003s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226531982s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[11.d( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.9( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.403511047s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226303101s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.9( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.403463364s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226303101s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.229890823s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 154.052932739s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.229856491s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 154.052932739s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[11.b( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.8( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.403141975s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226623535s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.f( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.222648621s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.045959473s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.8( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.403113365s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226623535s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.f( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.222450256s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.045959473s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.9( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.221913338s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.045959473s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.9( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.221885681s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.045959473s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.3( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.402213097s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226654053s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.3( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.402173996s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226654053s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.1( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.228365898s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 154.053161621s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.1( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.228338242s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 154.053161621s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.401635170s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226669312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=9.401610374s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226669312s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.b( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.220555305s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.045989990s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.b( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=12.220439911s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.045989990s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[11.9( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[11.8( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[11.3( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:31 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:34:31 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:34:31 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec 13 02:34:31 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.6( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.814816475s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.045989990s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.6( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.814754486s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.045989990s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.6( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.995391846s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226715088s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.6( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.995287895s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226715088s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.4( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.814311028s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.046112061s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.4( v 46'6 (0'0,46'6] local-lis/les=55/57 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.814281464s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.046112061s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[11.17( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.5( v 59'484 (0'0,59'484] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.821356773s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 53'483 active pruub 154.053237915s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.5( v 59'484 (0'0,59'484] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.821295738s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 53'483 unknown NOTIFY pruub 154.053237915s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.821524620s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 154.053253174s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.1b( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.814188004s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.046264648s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.821189880s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 154.053253174s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.1b( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.814163208s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.046264648s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.994623184s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226791382s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.994565964s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226791382s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.994498253s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226806641s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.994476318s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226806641s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.19( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.820990562s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 154.053482056s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.19( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.820956230s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 154.053482056s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.994151115s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226791382s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.18( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.813541412s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.046203613s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.994104385s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226791382s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.18( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.813503265s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.046203613s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.1f( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.813538551s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.046264648s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.1f( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.813508987s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.046264648s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.823781967s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 154.056762695s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.823739052s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 154.056762695s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.1d( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.813084602s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.046234131s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.1d( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.813057899s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.046234131s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.993652344s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226867676s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.1f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.993727684s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226959229s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.993591309s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226867676s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.823472977s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 154.056793213s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.1f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.993630409s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226959229s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[8.4( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.823446274s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 154.056793213s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.1c( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.812859535s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.046279907s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[8.1b( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.1c( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.812832832s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.046279907s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.10( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.993269920s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226882935s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.11( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.993267059s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226913452s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.12( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.812702179s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.046432495s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.18( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.992956161s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226730347s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.13( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.822972298s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 154.056823730s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[11.1a( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[8.14( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.10( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.992656708s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226882935s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.11( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.992637634s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226913452s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.12( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.812118530s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.046432495s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.18( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.992238045s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226730347s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.11( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.811770439s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.046417236s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.13( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.822155952s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 154.056823730s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.11( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.811726570s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.046417236s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[11.1b( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.1b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.822350502s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 active pruub 154.057113647s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[9.1b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.822249413s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 154.057113647s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.1a( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.811496735s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 active pruub 153.046401978s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[8.1a( v 46'6 (0'0,46'6] local-lis/les=55/57 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61 pruub=11.811470032s) [0] r=-1 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 unknown NOTIFY pruub 153.046401978s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[8.10( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.991430283s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226898193s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.19( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.991368294s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 active pruub 150.226943970s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.991378784s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226898193s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 61 pg[11.19( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=8.991339684s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 unknown NOTIFY pruub 150.226943970s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[11.14( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[11.1c( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[9.11( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[11.1e( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[9.3( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[8.c( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[11.1( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[11.e( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[11.1f( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[8.1c( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[11.f( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[11.11( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[8.12( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[8.e( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[11.18( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[9.9( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[9.d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[8.11( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[9.b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 61 pg[11.12( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[8.f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[8.9( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[9.1( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[11.4( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[8.b( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[8.6( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[11.6( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[9.5( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[8.18( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[8.1f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[8.1d( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[9.1d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[11.10( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[9.1b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[8.1a( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 61 pg[11.19( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec 13 02:34:32 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec 13 02:34:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v154: 321 pgs: 16 unknown, 43 peering, 262 active+clean; 457 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 13 02:34:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec 13 02:34:35 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec 13 02:34:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v156: 321 pgs: 16 unknown, 66 peering, 239 active+clean; 457 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:35 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec 13 02:34:35 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec 13 02:34:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec 13 02:34:36 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec 13 02:34:36 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 13 02:34:36 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec 13 02:34:36 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:34:36 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:34:36 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 13 02:34:36 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:34:36 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec 13 02:34:36 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Dec 13 02:34:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec 13 02:34:37 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec 13 02:34:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v158: 321 pgs: 16 unknown, 66 peering, 239 active+clean; 457 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.11( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.11( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.1( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.1( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.5( v 59'484 (0'0,59'484] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.5( v 59'484 (0'0,59'484] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.19( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.19( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.1b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.13( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.13( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[9.1b( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.11( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.11( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.5( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.5( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.9( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.9( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.1( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.1( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.3( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.3( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.1d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.1d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.1b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.1b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:37 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] r=-1 lpr=63 pi=[57,63)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:37 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[10.14( v 59'19 lc 50'7 (0'0,59'19] local-lis/les=61/63 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=59'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:38 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.a scrub starts
Dec 13 02:34:38 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[11.1a( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:38 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[8.10( v 46'6 (0'0,46'6] local-lis/les=61/63 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:38 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.a scrub ok
Dec 13 02:34:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec 13 02:34:39 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[10.f( v 52'18 (0'0,52'18] local-lis/les=61/63 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[10.12( v 59'19 lc 52'17 (0'0,59'19] local-lis/les=61/63 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=59'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[10.b( v 52'18 (0'0,52'18] local-lis/les=61/63 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[10.2( v 52'18 (0'0,52'18] local-lis/les=61/63 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[10.19( v 52'18 (0'0,52'18] local-lis/les=61/63 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[10.11( v 52'18 (0'0,52'18] local-lis/les=61/63 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[10.1a( v 52'18 (0'0,52'18] local-lis/les=61/63 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[10.10( v 52'18 (0'0,52'18] local-lis/les=61/63 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[10.13( v 52'18 (0'0,52'18] local-lis/les=61/63 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 63 pg[10.6( v 52'18 (0'0,52'18] local-lis/les=61/63 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[11.12( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[8.11( v 46'6 (0'0,46'6] local-lis/les=61/63 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[11.b( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[11.1f( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[11.1e( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[8.1c( v 46'6 (0'0,46'6] local-lis/les=61/63 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[11.1c( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[11.11( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[11.1b( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[11.18( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[8.12( v 46'6 (0'0,46'6] local-lis/les=61/63 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[8.4( v 46'6 (0'0,46'6] local-lis/les=61/63 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[11.d( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[8.d( v 46'6 (0'0,46'6] local-lis/les=61/63 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[8.2( v 46'6 (0'0,46'6] local-lis/les=61/63 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[11.9( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[11.8( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[11.2( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[11.15( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[11.3( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[8.15( v 46'6 (0'0,46'6] local-lis/les=61/63 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 63 pg[8.1b( v 46'6 (0'0,46'6] local-lis/les=61/63 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[10.9( v 59'19 lc 50'8 (0'0,59'19] local-lis/les=61/63 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=59'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[11.4( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[8.b( v 46'6 (0'0,46'6] local-lis/les=61/63 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[10.8( v 52'18 (0'0,52'18] local-lis/les=61/63 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[11.14( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[11.10( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[10.4( v 52'18 (0'0,52'18] local-lis/les=61/63 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[8.9( v 46'6 (0'0,46'6] local-lis/les=61/63 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[10.15( v 59'19 lc 50'3 (0'0,59'19] local-lis/les=61/63 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=59'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[11.6( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[10.7( v 52'18 (0'0,52'18] local-lis/les=61/63 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[10.17( v 52'18 (0'0,52'18] local-lis/les=61/63 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[8.6( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=61/63 n=1 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[8.f( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=61/63 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[10.d( v 59'19 lc 50'5 (0'0,59'19] local-lis/les=61/63 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=59'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[8.e( v 46'6 (0'0,46'6] local-lis/les=61/63 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[8.c( v 46'6 (0'0,46'6] local-lis/les=61/63 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[10.e( v 59'19 lc 50'4 (0'0,59'19] local-lis/les=61/63 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=59'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[11.f( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[11.1( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[10.1e( v 52'18 (0'0,52'18] local-lis/les=61/63 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[11.e( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[8.1a( v 46'6 (0'0,46'6] local-lis/les=61/63 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[10.16( v 52'18 (0'0,52'18] local-lis/les=61/63 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[11.17( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[11.19( empty local-lis/les=61/63 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61) [0] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[10.1( v 52'18 (0'0,52'18] local-lis/les=61/63 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=52'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[8.18( v 46'6 (0'0,46'6] local-lis/les=61/63 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[8.14( v 46'6 (0'0,46'6] local-lis/les=61/63 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[8.1d( v 46'6 (0'0,46'6] local-lis/les=61/63 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 63 pg[8.1f( v 46'6 (0'0,46'6] local-lis/les=61/63 n=0 ec=55/45 lis/c=55/55 les/c/f=57/57/0 sis=61) [0] r=0 lpr=61 pi=[55,61)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v159: 321 pgs: 16 unknown, 63 activating, 1 active+recovering, 241 active+clean; 457 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 8/144 objects degraded (5.556%)
Dec 13 02:34:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec 13 02:34:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 13 02:34:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:34:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:34:40 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec 13 02:34:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:34:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:34:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:34:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:34:40 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec 13 02:34:40 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec 13 02:34:41 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 64 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:41 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 64 pg[9.11( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:41 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 64 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:41 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 64 pg[9.5( v 59'484 (0'0,59'484] local-lis/les=63/64 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[57,63)/1 crt=59'484 lcod 53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:41 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 64 pg[9.13( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:41 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 64 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:41 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 64 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:41 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 64 pg[9.1( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:41 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 64 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:41 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 64 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:41 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 64 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:41 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 64 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:41 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 64 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:41 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 64 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:41 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 64 pg[9.19( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:41 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 64 pg[9.1b( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0]/[1] async=[0] r=0 lpr=63 pi=[57,63)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v161: 321 pgs: 16 unknown, 63 activating, 1 active+recovering, 241 active+clean; 457 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 8/144 objects degraded (5.556%)
Dec 13 02:34:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec 13 02:34:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec 13 02:34:43 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec 13 02:34:43 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 65 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=13.990648270s) [0] async=[0] r=-1 lpr=65 pi=[57,65)/1 crt=53'483 lcod 0'0 active pruub 166.177520752s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:43 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 65 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=65 pruub=13.989913940s) [0] r=-1 lpr=65 pi=[57,65)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 166.177520752s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:43 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 65 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:43 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 65 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v163: 321 pgs: 1 active+recovering+remapped, 14 active+recovery_wait+remapped, 1 active+remapped, 53 activating, 252 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.4 KiB/s wr, 117 op/s; 7/249 objects degraded (2.811%); 92/249 objects misplaced (36.948%); 87 B/s, 1 objects/s recovering
Dec 13 02:34:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec 13 02:34:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec 13 02:34:45 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 66 pg[9.11( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:45 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 66 pg[9.11( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:45 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec 13 02:34:45 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 66 pg[9.11( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=66 pruub=11.919470787s) [0] async=[0] r=-1 lpr=66 pi=[57,66)/1 crt=53'483 lcod 0'0 active pruub 166.177520752s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:45 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 66 pg[9.11( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=66 pruub=11.919262886s) [0] r=-1 lpr=66 pi=[57,66)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 166.177520752s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v165: 321 pgs: 1 active+recovering+remapped, 14 active+recovery_wait+remapped, 1 peering, 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.5 KiB/s wr, 118 op/s; 92/249 objects misplaced (36.948%); 92 B/s, 2 objects/s recovering
Dec 13 02:34:45 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 66 pg[9.b( v 53'483 (0'0,53'483] local-lis/les=65/66 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=65) [0] r=0 lpr=65 pi=[57,65)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec 13 02:34:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec 13 02:34:46 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec 13 02:34:46 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 67 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=67 pruub=10.951075554s) [0] async=[0] r=-1 lpr=67 pi=[57,67)/1 crt=53'483 lcod 0'0 active pruub 166.258895874s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:46 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 67 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=67 pruub=10.950965881s) [0] r=-1 lpr=67 pi=[57,67)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 166.258895874s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:46 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 67 pg[9.13( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=67 pruub=10.950407028s) [0] async=[0] r=-1 lpr=67 pi=[57,67)/1 crt=53'483 lcod 0'0 active pruub 166.259201050s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:46 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 67 pg[9.13( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=67 pruub=10.950233459s) [0] r=-1 lpr=67 pi=[57,67)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 166.259201050s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:46 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 67 pg[9.13( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=67) [0] r=0 lpr=67 pi=[57,67)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:46 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 67 pg[9.13( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=67) [0] r=0 lpr=67 pi=[57,67)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:46 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 67 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=67) [0] r=0 lpr=67 pi=[57,67)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:46 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 67 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=67) [0] r=0 lpr=67 pi=[57,67)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:46 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 67 pg[9.11( v 53'483 (0'0,53'483] local-lis/les=66/67 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec 13 02:34:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec 13 02:34:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v167: 321 pgs: 1 active+recovering+remapped, 14 active+recovery_wait+remapped, 1 peering, 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.5 KiB/s wr, 118 op/s; 92/249 objects misplaced (36.948%); 92 B/s, 2 objects/s recovering
Dec 13 02:34:47 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec 13 02:34:47 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 68 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=68 pruub=9.712835312s) [0] async=[0] r=-1 lpr=68 pi=[57,68)/1 crt=53'483 lcod 0'0 active pruub 166.259490967s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:47 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 68 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=68 pruub=9.712712288s) [0] r=-1 lpr=68 pi=[57,68)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 166.259490967s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:47 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 68 pg[9.1( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=68 pruub=9.710652351s) [0] async=[0] r=-1 lpr=68 pi=[57,68)/1 crt=53'483 lcod 0'0 active pruub 166.259429932s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:47 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 68 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=68 pruub=9.710237503s) [0] async=[0] r=-1 lpr=68 pi=[57,68)/1 crt=53'483 lcod 0'0 active pruub 166.259323120s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:47 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 68 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=68 pruub=9.710184097s) [0] r=-1 lpr=68 pi=[57,68)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 166.259323120s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:47 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 68 pg[9.1( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=68 pruub=9.710396767s) [0] r=-1 lpr=68 pi=[57,68)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 166.259429932s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:47 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 68 pg[9.1( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=68) [0] r=0 lpr=68 pi=[57,68)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:47 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 68 pg[9.1( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=68) [0] r=0 lpr=68 pi=[57,68)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:47 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 68 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=68) [0] r=0 lpr=68 pi=[57,68)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:47 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 68 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=68) [0] r=0 lpr=68 pi=[57,68)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:47 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 68 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=68) [0] r=0 lpr=68 pi=[57,68)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:47 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 68 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=68) [0] r=0 lpr=68 pi=[57,68)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:47 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 68 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=67/68 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=67) [0] r=0 lpr=67 pi=[57,67)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:47 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 68 pg[9.13( v 53'483 (0'0,53'483] local-lis/les=67/68 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=67) [0] r=0 lpr=67 pi=[57,67)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec 13 02:34:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec 13 02:34:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec 13 02:34:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v170: 321 pgs: 1 active+recovering+remapped, 7 active+recovery_wait+remapped, 1 active+remapped, 3 peering, 309 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 49/249 objects misplaced (19.679%); 281 B/s, 5 objects/s recovering
Dec 13 02:34:49 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 69 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=69 pruub=15.796628952s) [0] async=[0] r=-1 lpr=69 pi=[57,69)/1 crt=53'483 lcod 0'0 active pruub 174.259338379s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:49 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 69 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=69 pruub=15.796526909s) [0] r=-1 lpr=69 pi=[57,69)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 174.259338379s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec 13 02:34:49 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 69 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=69) [0] r=0 lpr=69 pi=[57,69)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:49 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 69 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=69) [0] r=0 lpr=69 pi=[57,69)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec 13 02:34:49 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 69 pg[9.1d( v 53'483 (0'0,53'483] local-lis/les=68/69 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=68) [0] r=0 lpr=68 pi=[57,68)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:49 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 69 pg[9.1( v 53'483 (0'0,53'483] local-lis/les=68/69 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=68) [0] r=0 lpr=68 pi=[57,68)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:49 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 69 pg[9.3( v 53'483 (0'0,53'483] local-lis/les=68/69 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=68) [0] r=0 lpr=68 pi=[57,68)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec 13 02:34:50 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 70 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=70 pruub=15.298364639s) [0] async=[0] r=-1 lpr=70 pi=[57,70)/1 crt=53'483 lcod 0'0 active pruub 174.259719849s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:50 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 70 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=70 pruub=15.298248291s) [0] r=-1 lpr=70 pi=[57,70)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 174.259719849s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:50 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 70 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=70 pruub=15.296979904s) [0] async=[0] r=-1 lpr=70 pi=[57,70)/1 crt=53'483 lcod 0'0 active pruub 174.259719849s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:50 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 70 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=70 pruub=15.296885490s) [0] r=-1 lpr=70 pi=[57,70)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 174.259719849s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:50 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 70 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=70) [0] r=0 lpr=70 pi=[57,70)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:50 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 70 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=70) [0] r=0 lpr=70 pi=[57,70)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:50 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 70 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=70) [0] r=0 lpr=70 pi=[57,70)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:50 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 70 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=70) [0] r=0 lpr=70 pi=[57,70)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:50 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec 13 02:34:50 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec 13 02:34:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec 13 02:34:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v172: 321 pgs: 1 active+recovering+remapped, 7 active+recovery_wait+remapped, 1 active+remapped, 3 peering, 309 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 49/249 objects misplaced (19.679%); 245 B/s, 4 objects/s recovering
Dec 13 02:34:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec 13 02:34:51 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec 13 02:34:51 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 71 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=71 pruub=13.380564690s) [0] async=[0] r=-1 lpr=71 pi=[57,71)/1 crt=53'483 lcod 0'0 active pruub 174.259841919s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:51 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 71 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=71 pruub=13.380158424s) [0] r=-1 lpr=71 pi=[57,71)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 174.259841919s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:52 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 71 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=71) [0] r=0 lpr=71 pi=[57,71)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:52 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 71 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=71) [0] r=0 lpr=71 pi=[57,71)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:52 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec 13 02:34:52 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec 13 02:34:52 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 71 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=69/71 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=69) [0] r=0 lpr=69 pi=[57,69)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:52 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 71 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=70/71 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=70) [0] r=0 lpr=70 pi=[57,70)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:52 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 71 pg[9.9( v 53'483 (0'0,53'483] local-lis/les=70/71 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=70) [0] r=0 lpr=70 pi=[57,70)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec 13 02:34:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v174: 321 pgs: 2 active+recovering+remapped, 5 active+recovery_wait+remapped, 2 active+remapped, 3 peering, 309 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 38/249 objects misplaced (15.261%); 281 B/s, 5 objects/s recovering
Dec 13 02:34:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec 13 02:34:54 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 72 pg[9.5( v 64'485 (0'0,64'485] local-lis/les=0/0 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=72) [0] r=0 lpr=72 pi=[57,72)/1 pct=0'0 crt=59'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:54 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 72 pg[9.5( v 64'485 (0'0,64'485] local-lis/les=0/0 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=72) [0] r=0 lpr=72 pi=[57,72)/1 crt=59'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:34:54 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec 13 02:34:54 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 72 pg[9.5( v 64'485 (0'0,64'485] local-lis/les=63/64 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=72 pruub=10.615837097s) [0] async=[0] r=-1 lpr=72 pi=[57,72)/1 crt=59'484 lcod 59'484 active pruub 174.259109497s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:34:54 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 72 pg[9.5( v 64'485 (0'0,64'485] local-lis/les=63/64 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=72 pruub=10.615653992s) [0] r=-1 lpr=72 pi=[57,72)/1 crt=59'484 lcod 59'484 unknown NOTIFY pruub 174.259109497s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:34:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v176: 321 pgs: 1 active+recovering+remapped, 3 active+recovery_wait+remapped, 1 active+remapped, 1 peering, 315 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 26/249 objects misplaced (10.442%); 249 B/s, 6 objects/s recovering
Dec 13 02:34:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:34:55 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 72 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=71/72 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=71) [0] r=0 lpr=71 pi=[57,71)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec 13 02:34:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec 13 02:34:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v178: 321 pgs: 1 active+recovering+remapped, 3 active+recovery_wait+remapped, 1 active+remapped, 1 peering, 315 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 26/249 objects misplaced (10.442%); 249 B/s, 6 objects/s recovering
Dec 13 02:34:57 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec 13 02:34:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec 13 02:34:59 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 73 pg[9.5( v 64'485 (0'0,64'485] local-lis/les=72/73 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=72) [0] r=0 lpr=72 pi=[57,72)/1 crt=64'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:34:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v179: 321 pgs: 1 active+recovering+remapped, 2 active+recovery_wait+remapped, 1 active+remapped, 1 activating, 316 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 14/249 objects misplaced (5.622%); 262 B/s, 7 objects/s recovering
Dec 13 02:34:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec 13 02:35:00 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 74 pg[9.19( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=74) [0] r=0 lpr=74 pi=[57,74)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:00 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 74 pg[9.19( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=74) [0] r=0 lpr=74 pi=[57,74)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:00 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec 13 02:35:00 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 74 pg[9.19( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=74 pruub=12.487049103s) [0] async=[0] r=-1 lpr=74 pi=[57,74)/1 crt=53'483 lcod 0'0 active pruub 182.260025024s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:00 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 74 pg[9.19( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=74 pruub=12.486922264s) [0] r=-1 lpr=74 pi=[57,74)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 182.260025024s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec 13 02:35:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v181: 321 pgs: 1 active+recovering+remapped, 2 active+recovery_wait+remapped, 1 active+remapped, 1 activating, 316 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 14/249 objects misplaced (5.622%); 74 B/s, 2 objects/s recovering
Dec 13 02:35:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec 13 02:35:03 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec 13 02:35:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v183: 321 pgs: 1 active+recovering+remapped, 3 active+remapped, 1 activating, 316 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 4/249 objects misplaced (1.606%); 170 B/s, 4 objects/s recovering
Dec 13 02:35:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec 13 02:35:03 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 75 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=75 pruub=9.417336464s) [0] async=[0] r=-1 lpr=75 pi=[57,75)/1 crt=53'483 lcod 0'0 active pruub 182.260009766s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:03 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 75 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=75 pruub=9.417219162s) [0] r=-1 lpr=75 pi=[57,75)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 182.260009766s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:04 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec 13 02:35:04 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec 13 02:35:04 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 75 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=75) [0] r=0 lpr=75 pi=[57,75)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:04 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 75 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=75) [0] r=0 lpr=75 pi=[57,75)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec 13 02:35:04 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec 13 02:35:04 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 76 pg[9.1b( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:04 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 76 pg[9.1b( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:05 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 76 pg[9.1b( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=76 pruub=8.297765732s) [0] async=[0] r=-1 lpr=76 pi=[57,76)/1 crt=53'483 lcod 0'0 active pruub 182.260025024s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:05 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 76 pg[9.1b( v 53'483 (0'0,53'483] local-lis/les=63/64 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=76 pruub=8.297500610s) [0] r=-1 lpr=76 pi=[57,76)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 182.260025024s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:05 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 76 pg[9.19( v 53'483 (0'0,53'483] local-lis/les=74/76 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=74) [0] r=0 lpr=74 pi=[57,74)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v185: 321 pgs: 1 active+recovering+remapped, 1 peering, 2 active+remapped, 317 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 4/249 objects misplaced (1.606%); 18 B/s, 0 objects/s recovering
Dec 13 02:35:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec 13 02:35:05 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec 13 02:35:05 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec 13 02:35:06 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec 13 02:35:06 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec 13 02:35:06 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec 13 02:35:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec 13 02:35:06 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec 13 02:35:07 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec 13 02:35:07 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec 13 02:35:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v187: 321 pgs: 1 active+recovering+remapped, 1 peering, 2 active+remapped, 317 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 4/249 objects misplaced (1.606%); 18 B/s, 0 objects/s recovering
Dec 13 02:35:07 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec 13 02:35:07 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 77 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=77 pruub=13.559630394s) [0] async=[0] r=-1 lpr=77 pi=[57,77)/1 crt=53'483 lcod 0'0 active pruub 190.260192871s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:07 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 77 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=63/64 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=77 pruub=13.559527397s) [0] r=-1 lpr=77 pi=[57,77)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 190.260192871s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:07 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec 13 02:35:07 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec 13 02:35:08 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec 13 02:35:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:35:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:35:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:35:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:35:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:35:08 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec 13 02:35:08 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 77 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=77) [0] r=0 lpr=77 pi=[57,77)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:08 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 77 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=77) [0] r=0 lpr=77 pi=[57,77)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec 13 02:35:08 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 77 pg[9.1b( v 53'483 (0'0,53'483] local-lis/les=76/77 n=6 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=76) [0] r=0 lpr=76 pi=[57,76)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:08 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec 13 02:35:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:35:08
Dec 13 02:35:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:35:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Some PGs (0.003115) are inactive; try again later
Dec 13 02:35:09 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 77 pg[9.d( v 53'483 (0'0,53'483] local-lis/les=75/77 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=75) [0] r=0 lpr=75 pi=[57,75)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:09 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec 13 02:35:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec 13 02:35:09 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec 13 02:35:09 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec 13 02:35:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v188: 321 pgs: 1 activating, 1 peering, 319 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 0 objects/s recovering
Dec 13 02:35:09 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 13 02:35:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:35:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:35:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:35:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:35:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:35:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:35:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:35:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:35:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:35:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:35:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:35:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:35:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:35:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:35:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:35:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:35:10 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 13 02:35:10 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec 13 02:35:10 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec 13 02:35:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:35:10 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec 13 02:35:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:35:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:35:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:35:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:35:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:35:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:35:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:11 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 78 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=77/78 n=7 ec=57/47 lis/c=63/57 les/c/f=64/58/0 sis=77) [0] r=0 lpr=77 pi=[57,77)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v190: 321 pgs: 1 activating, 1 peering, 319 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Dec 13 02:35:11 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec 13 02:35:11 np0005558241 podman[100299]: 2025-12-13 07:35:11.633540871 +0000 UTC m=+0.041000954 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:35:11 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:35:11 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:35:11 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec 13 02:35:11 np0005558241 podman[100299]: 2025-12-13 07:35:11.743954048 +0000 UTC m=+0.151414051 container create 805911f6762334f2012a8bbe8e1d1723389f889f2edfd044c6349d0c16eb7b7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 02:35:11 np0005558241 systemd[1]: Started libpod-conmon-805911f6762334f2012a8bbe8e1d1723389f889f2edfd044c6349d0c16eb7b7b.scope.
Dec 13 02:35:11 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:35:11 np0005558241 podman[100299]: 2025-12-13 07:35:11.844998172 +0000 UTC m=+0.252458185 container init 805911f6762334f2012a8bbe8e1d1723389f889f2edfd044c6349d0c16eb7b7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec 13 02:35:11 np0005558241 podman[100299]: 2025-12-13 07:35:11.853448213 +0000 UTC m=+0.260908166 container start 805911f6762334f2012a8bbe8e1d1723389f889f2edfd044c6349d0c16eb7b7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:35:11 np0005558241 podman[100299]: 2025-12-13 07:35:11.857793581 +0000 UTC m=+0.265253564 container attach 805911f6762334f2012a8bbe8e1d1723389f889f2edfd044c6349d0c16eb7b7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_noyce, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:35:11 np0005558241 heuristic_noyce[100315]: 167 167
Dec 13 02:35:11 np0005558241 systemd[1]: libpod-805911f6762334f2012a8bbe8e1d1723389f889f2edfd044c6349d0c16eb7b7b.scope: Deactivated successfully.
Dec 13 02:35:11 np0005558241 podman[100299]: 2025-12-13 07:35:11.858822107 +0000 UTC m=+0.266282070 container died 805911f6762334f2012a8bbe8e1d1723389f889f2edfd044c6349d0c16eb7b7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:35:11 np0005558241 systemd[1]: var-lib-containers-storage-overlay-571095980a63fcdca46496f6511bdf5744ffe6d3f242c2095385e6ed07d54ba2-merged.mount: Deactivated successfully.
Dec 13 02:35:11 np0005558241 podman[100299]: 2025-12-13 07:35:11.911367409 +0000 UTC m=+0.318827412 container remove 805911f6762334f2012a8bbe8e1d1723389f889f2edfd044c6349d0c16eb7b7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:35:11 np0005558241 systemd[1]: libpod-conmon-805911f6762334f2012a8bbe8e1d1723389f889f2edfd044c6349d0c16eb7b7b.scope: Deactivated successfully.
Dec 13 02:35:12 np0005558241 podman[100339]: 2025-12-13 07:35:12.097511417 +0000 UTC m=+0.057357333 container create 7bfca34853d5a8c17b3888fb931371882568ae29786413966437bf18de7f1774 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 02:35:12 np0005558241 systemd[1]: Started libpod-conmon-7bfca34853d5a8c17b3888fb931371882568ae29786413966437bf18de7f1774.scope.
Dec 13 02:35:12 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:35:12 np0005558241 podman[100339]: 2025-12-13 07:35:12.07040855 +0000 UTC m=+0.030254506 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:35:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6459e497b2484a5f9a180065de19b5b52522461a4bc069feda60f3722272f793/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6459e497b2484a5f9a180065de19b5b52522461a4bc069feda60f3722272f793/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6459e497b2484a5f9a180065de19b5b52522461a4bc069feda60f3722272f793/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6459e497b2484a5f9a180065de19b5b52522461a4bc069feda60f3722272f793/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6459e497b2484a5f9a180065de19b5b52522461a4bc069feda60f3722272f793/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:12 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec 13 02:35:12 np0005558241 podman[100339]: 2025-12-13 07:35:12.182987941 +0000 UTC m=+0.142833867 container init 7bfca34853d5a8c17b3888fb931371882568ae29786413966437bf18de7f1774 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_margulis, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 02:35:12 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec 13 02:35:12 np0005558241 podman[100339]: 2025-12-13 07:35:12.193930965 +0000 UTC m=+0.153776901 container start 7bfca34853d5a8c17b3888fb931371882568ae29786413966437bf18de7f1774 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Dec 13 02:35:12 np0005558241 podman[100339]: 2025-12-13 07:35:12.198635092 +0000 UTC m=+0.158480998 container attach 7bfca34853d5a8c17b3888fb931371882568ae29786413966437bf18de7f1774 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_margulis, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec 13 02:35:12 np0005558241 brave_margulis[100356]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:35:12 np0005558241 brave_margulis[100356]: --> All data devices are unavailable
Dec 13 02:35:12 np0005558241 systemd[1]: libpod-7bfca34853d5a8c17b3888fb931371882568ae29786413966437bf18de7f1774.scope: Deactivated successfully.
Dec 13 02:35:12 np0005558241 podman[100339]: 2025-12-13 07:35:12.724693907 +0000 UTC m=+0.684539863 container died 7bfca34853d5a8c17b3888fb931371882568ae29786413966437bf18de7f1774 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_margulis, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:35:12 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:35:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6459e497b2484a5f9a180065de19b5b52522461a4bc069feda60f3722272f793-merged.mount: Deactivated successfully.
Dec 13 02:35:12 np0005558241 podman[100339]: 2025-12-13 07:35:12.941140792 +0000 UTC m=+0.900986738 container remove 7bfca34853d5a8c17b3888fb931371882568ae29786413966437bf18de7f1774 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 02:35:12 np0005558241 systemd[1]: libpod-conmon-7bfca34853d5a8c17b3888fb931371882568ae29786413966437bf18de7f1774.scope: Deactivated successfully.
Dec 13 02:35:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v191: 321 pgs: 1 activating, 1 peering, 319 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec 13 02:35:13 np0005558241 podman[100450]: 2025-12-13 07:35:13.476993372 +0000 UTC m=+0.033686372 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:35:13 np0005558241 podman[100450]: 2025-12-13 07:35:13.650916155 +0000 UTC m=+0.207609145 container create 455b30f648a43a140d42a2ba96f37327667fa02037e24e08d5ef266e867bbb70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wescoff, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 02:35:13 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec 13 02:35:13 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec 13 02:35:13 np0005558241 systemd[1]: Started libpod-conmon-455b30f648a43a140d42a2ba96f37327667fa02037e24e08d5ef266e867bbb70.scope.
Dec 13 02:35:13 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:35:14 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec 13 02:35:14 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec 13 02:35:14 np0005558241 podman[100450]: 2025-12-13 07:35:14.373264652 +0000 UTC m=+0.929957662 container init 455b30f648a43a140d42a2ba96f37327667fa02037e24e08d5ef266e867bbb70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:35:14 np0005558241 podman[100450]: 2025-12-13 07:35:14.385238711 +0000 UTC m=+0.941931691 container start 455b30f648a43a140d42a2ba96f37327667fa02037e24e08d5ef266e867bbb70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wescoff, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:35:14 np0005558241 magical_wescoff[100467]: 167 167
Dec 13 02:35:14 np0005558241 systemd[1]: libpod-455b30f648a43a140d42a2ba96f37327667fa02037e24e08d5ef266e867bbb70.scope: Deactivated successfully.
Dec 13 02:35:15 np0005558241 podman[100450]: 2025-12-13 07:35:15.019714004 +0000 UTC m=+1.576407074 container attach 455b30f648a43a140d42a2ba96f37327667fa02037e24e08d5ef266e867bbb70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wescoff, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:35:15 np0005558241 podman[100450]: 2025-12-13 07:35:15.021307654 +0000 UTC m=+1.578000664 container died 455b30f648a43a140d42a2ba96f37327667fa02037e24e08d5ef266e867bbb70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wescoff, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 02:35:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3b08c1e151cc1da3f9c9579edef5233397e47cb15e11b0f44564d18c2d447a24-merged.mount: Deactivated successfully.
Dec 13 02:35:15 np0005558241 podman[100450]: 2025-12-13 07:35:15.066533013 +0000 UTC m=+1.623226003 container remove 455b30f648a43a140d42a2ba96f37327667fa02037e24e08d5ef266e867bbb70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 02:35:15 np0005558241 systemd[1]: libpod-conmon-455b30f648a43a140d42a2ba96f37327667fa02037e24e08d5ef266e867bbb70.scope: Deactivated successfully.
Dec 13 02:35:15 np0005558241 podman[100491]: 2025-12-13 07:35:15.293632153 +0000 UTC m=+0.057228740 container create add876e14e71bf7f090434a566f8136ef7e04ea21b1a1c0a39520afead7c13f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_sutherland, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:35:15 np0005558241 systemd[1]: Started libpod-conmon-add876e14e71bf7f090434a566f8136ef7e04ea21b1a1c0a39520afead7c13f5.scope.
Dec 13 02:35:15 np0005558241 podman[100491]: 2025-12-13 07:35:15.269746307 +0000 UTC m=+0.033342884 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:35:15 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:35:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7337851d1b0ccf944fb39197b77c96dfd880d6af421660579bedefd756273c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7337851d1b0ccf944fb39197b77c96dfd880d6af421660579bedefd756273c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7337851d1b0ccf944fb39197b77c96dfd880d6af421660579bedefd756273c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7337851d1b0ccf944fb39197b77c96dfd880d6af421660579bedefd756273c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:15 np0005558241 podman[100491]: 2025-12-13 07:35:15.403816895 +0000 UTC m=+0.167413462 container init add876e14e71bf7f090434a566f8136ef7e04ea21b1a1c0a39520afead7c13f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:35:15 np0005558241 podman[100491]: 2025-12-13 07:35:15.412915442 +0000 UTC m=+0.176512039 container start add876e14e71bf7f090434a566f8136ef7e04ea21b1a1c0a39520afead7c13f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 02:35:15 np0005558241 podman[100491]: 2025-12-13 07:35:15.417299401 +0000 UTC m=+0.180895998 container attach add876e14e71bf7f090434a566f8136ef7e04ea21b1a1c0a39520afead7c13f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:35:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v192: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 49 B/s, 1 objects/s recovering
Dec 13 02:35:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Dec 13 02:35:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]: {
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:    "0": [
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:        {
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "devices": [
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "/dev/loop3"
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            ],
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "lv_name": "ceph_lv0",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "lv_size": "21470642176",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "name": "ceph_lv0",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "tags": {
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.cluster_name": "ceph",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.crush_device_class": "",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.encrypted": "0",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.objectstore": "bluestore",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.osd_id": "0",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.type": "block",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.vdo": "0",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.with_tpm": "0"
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            },
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "type": "block",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "vg_name": "ceph_vg0"
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:        }
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:    ],
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:    "1": [
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:        {
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "devices": [
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "/dev/loop4"
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            ],
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "lv_name": "ceph_lv1",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "lv_size": "21470642176",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "name": "ceph_lv1",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "tags": {
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.cluster_name": "ceph",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.crush_device_class": "",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.encrypted": "0",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.objectstore": "bluestore",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.osd_id": "1",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.type": "block",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.vdo": "0",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.with_tpm": "0"
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            },
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "type": "block",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "vg_name": "ceph_vg1"
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:        }
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:    ],
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:    "2": [
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:        {
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "devices": [
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "/dev/loop5"
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            ],
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "lv_name": "ceph_lv2",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "lv_size": "21470642176",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "name": "ceph_lv2",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "tags": {
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.cluster_name": "ceph",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.crush_device_class": "",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.encrypted": "0",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.objectstore": "bluestore",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.osd_id": "2",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.type": "block",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.vdo": "0",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:                "ceph.with_tpm": "0"
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            },
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "type": "block",
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:            "vg_name": "ceph_vg2"
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:        }
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]:    ]
Dec 13 02:35:15 np0005558241 beautiful_sutherland[100507]: }
Dec 13 02:35:15 np0005558241 systemd[1]: libpod-add876e14e71bf7f090434a566f8136ef7e04ea21b1a1c0a39520afead7c13f5.scope: Deactivated successfully.
Dec 13 02:35:15 np0005558241 podman[100491]: 2025-12-13 07:35:15.755334422 +0000 UTC m=+0.518930979 container died add876e14e71bf7f090434a566f8136ef7e04ea21b1a1c0a39520afead7c13f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_sutherland, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 02:35:15 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec 13 02:35:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay-da7337851d1b0ccf944fb39197b77c96dfd880d6af421660579bedefd756273c-merged.mount: Deactivated successfully.
Dec 13 02:35:16 np0005558241 podman[100491]: 2025-12-13 07:35:16.013024236 +0000 UTC m=+0.776620843 container remove add876e14e71bf7f090434a566f8136ef7e04ea21b1a1c0a39520afead7c13f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 02:35:16 np0005558241 systemd[1]: libpod-conmon-add876e14e71bf7f090434a566f8136ef7e04ea21b1a1c0a39520afead7c13f5.scope: Deactivated successfully.
Dec 13 02:35:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec 13 02:35:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 13 02:35:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec 13 02:35:16 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec 13 02:35:16 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 13 02:35:16 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 13 02:35:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:16 np0005558241 podman[100592]: 2025-12-13 07:35:16.510847776 +0000 UTC m=+0.032663187 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:35:16 np0005558241 podman[100592]: 2025-12-13 07:35:16.618386542 +0000 UTC m=+0.140201902 container create 51294a44f4862606d1691441a1205cbf4b64f21dbe3a2f73b074f1311ef45583 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 02:35:16 np0005558241 systemd[1]: Started libpod-conmon-51294a44f4862606d1691441a1205cbf4b64f21dbe3a2f73b074f1311ef45583.scope.
Dec 13 02:35:16 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:35:16 np0005558241 podman[100592]: 2025-12-13 07:35:16.949733606 +0000 UTC m=+0.471549016 container init 51294a44f4862606d1691441a1205cbf4b64f21dbe3a2f73b074f1311ef45583 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cannon, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:35:16 np0005558241 podman[100592]: 2025-12-13 07:35:16.96074322 +0000 UTC m=+0.482558530 container start 51294a44f4862606d1691441a1205cbf4b64f21dbe3a2f73b074f1311ef45583 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 02:35:16 np0005558241 podman[100592]: 2025-12-13 07:35:16.965577601 +0000 UTC m=+0.487393011 container attach 51294a44f4862606d1691441a1205cbf4b64f21dbe3a2f73b074f1311ef45583 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cannon, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 02:35:16 np0005558241 practical_cannon[100608]: 167 167
Dec 13 02:35:16 np0005558241 systemd[1]: libpod-51294a44f4862606d1691441a1205cbf4b64f21dbe3a2f73b074f1311ef45583.scope: Deactivated successfully.
Dec 13 02:35:16 np0005558241 conmon[100608]: conmon 51294a44f4862606d169 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-51294a44f4862606d1691441a1205cbf4b64f21dbe3a2f73b074f1311ef45583.scope/container/memory.events
Dec 13 02:35:16 np0005558241 podman[100592]: 2025-12-13 07:35:16.971213332 +0000 UTC m=+0.493028722 container died 51294a44f4862606d1691441a1205cbf4b64f21dbe3a2f73b074f1311ef45583 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cannon, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:35:17 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec 13 02:35:17 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec 13 02:35:17 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d000e21c67e7e9f9c685edd94c8c3745726c8c0b3976f969fcb8458f41af7513-merged.mount: Deactivated successfully.
Dec 13 02:35:17 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 13 02:35:17 np0005558241 podman[100592]: 2025-12-13 07:35:17.219622275 +0000 UTC m=+0.741437635 container remove 51294a44f4862606d1691441a1205cbf4b64f21dbe3a2f73b074f1311ef45583 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cannon, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:35:17 np0005558241 systemd[1]: libpod-conmon-51294a44f4862606d1691441a1205cbf4b64f21dbe3a2f73b074f1311ef45583.scope: Deactivated successfully.
Dec 13 02:35:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v194: 321 pgs: 321 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 32 B/s, 0 objects/s recovering
Dec 13 02:35:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Dec 13 02:35:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec 13 02:35:17 np0005558241 podman[100631]: 2025-12-13 07:35:17.483448883 +0000 UTC m=+0.075397334 container create ccde95da8e352496925c263cd4f7c05daf9696a375ecabfef029e6c05ea747d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_joliot, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 02:35:17 np0005558241 systemd[1]: Started libpod-conmon-ccde95da8e352496925c263cd4f7c05daf9696a375ecabfef029e6c05ea747d8.scope.
Dec 13 02:35:17 np0005558241 podman[100631]: 2025-12-13 07:35:17.451840203 +0000 UTC m=+0.043788704 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:35:17 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:35:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcf00e447f3c497379196f8a71b044ee851278f7b37c5af4e53d164f4a1df9dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcf00e447f3c497379196f8a71b044ee851278f7b37c5af4e53d164f4a1df9dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcf00e447f3c497379196f8a71b044ee851278f7b37c5af4e53d164f4a1df9dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcf00e447f3c497379196f8a71b044ee851278f7b37c5af4e53d164f4a1df9dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:17 np0005558241 podman[100631]: 2025-12-13 07:35:17.594632359 +0000 UTC m=+0.186580860 container init ccde95da8e352496925c263cd4f7c05daf9696a375ecabfef029e6c05ea747d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_joliot, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:35:17 np0005558241 podman[100631]: 2025-12-13 07:35:17.612350831 +0000 UTC m=+0.204299252 container start ccde95da8e352496925c263cd4f7c05daf9696a375ecabfef029e6c05ea747d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:35:17 np0005558241 podman[100631]: 2025-12-13 07:35:17.617329666 +0000 UTC m=+0.209278147 container attach ccde95da8e352496925c263cd4f7c05daf9696a375ecabfef029e6c05ea747d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_joliot, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:35:17 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec 13 02:35:17 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec 13 02:35:18 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec 13 02:35:18 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec 13 02:35:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec 13 02:35:18 np0005558241 lvm[100725]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:35:18 np0005558241 lvm[100729]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:35:18 np0005558241 lvm[100729]: VG ceph_vg2 finished
Dec 13 02:35:18 np0005558241 lvm[100725]: VG ceph_vg0 finished
Dec 13 02:35:18 np0005558241 lvm[100728]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:35:18 np0005558241 lvm[100728]: VG ceph_vg1 finished
Dec 13 02:35:18 np0005558241 busy_joliot[100648]: {}
Dec 13 02:35:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 13 02:35:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec 13 02:35:18 np0005558241 systemd[1]: libpod-ccde95da8e352496925c263cd4f7c05daf9696a375ecabfef029e6c05ea747d8.scope: Deactivated successfully.
Dec 13 02:35:18 np0005558241 systemd[1]: libpod-ccde95da8e352496925c263cd4f7c05daf9696a375ecabfef029e6c05ea747d8.scope: Consumed 1.528s CPU time.
Dec 13 02:35:18 np0005558241 podman[100732]: 2025-12-13 07:35:18.604054214 +0000 UTC m=+0.030660626 container died ccde95da8e352496925c263cd4f7c05daf9696a375ecabfef029e6c05ea747d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 02:35:18 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec 13 02:35:18 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec 13 02:35:18 np0005558241 systemd[1]: var-lib-containers-storage-overlay-fcf00e447f3c497379196f8a71b044ee851278f7b37c5af4e53d164f4a1df9dd-merged.mount: Deactivated successfully.
Dec 13 02:35:18 np0005558241 podman[100732]: 2025-12-13 07:35:18.758625224 +0000 UTC m=+0.185231616 container remove ccde95da8e352496925c263cd4f7c05daf9696a375ecabfef029e6c05ea747d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_joliot, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:35:18 np0005558241 systemd[1]: libpod-conmon-ccde95da8e352496925c263cd4f7c05daf9696a375ecabfef029e6c05ea747d8.scope: Deactivated successfully.
Dec 13 02:35:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:35:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:35:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:35:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:35:19 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec 13 02:35:19 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec 13 02:35:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v196: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 32 B/s, 0 objects/s recovering
Dec 13 02:35:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec 13 02:35:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 13 02:35:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 13 02:35:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:35:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:35:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 13 02:35:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.302072527881028e-06 of space, bias 4.0, pg target 0.0015624870334572335 quantized to 16 (current 32)
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:35:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:35:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 13 02:35:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec 13 02:35:20 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec 13 02:35:20 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 13 02:35:21 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec 13 02:35:21 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec 13 02:35:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v198: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Dec 13 02:35:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec 13 02:35:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec 13 02:35:22 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec 13 02:35:22 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec 13 02:35:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 13 02:35:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec 13 02:35:22 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec 13 02:35:22 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec 13 02:35:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 82 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=82 pruub=9.911016464s) [2] r=-1 lpr=82 pi=[57,82)/1 crt=53'483 lcod 0'0 active pruub 202.054107666s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 82 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=82 pruub=9.910705566s) [2] r=-1 lpr=82 pi=[57,82)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 202.054107666s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 82 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=82) [2] r=0 lpr=82 pi=[57,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 82 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=82 pruub=9.909561157s) [2] r=-1 lpr=82 pi=[57,82)/1 crt=53'483 lcod 0'0 active pruub 202.054336548s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 82 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=82 pruub=9.909518242s) [2] r=-1 lpr=82 pi=[57,82)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 202.054336548s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 82 pg[9.e( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=82 pruub=9.910298347s) [2] r=-1 lpr=82 pi=[57,82)/1 crt=53'483 lcod 0'0 active pruub 202.054122925s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 82 pg[9.e( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=82 pruub=9.909004211s) [2] r=-1 lpr=82 pi=[57,82)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 202.054122925s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 82 pg[9.6( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=82) [2] r=0 lpr=82 pi=[57,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 82 pg[9.1e( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=82 pruub=9.911881447s) [2] r=-1 lpr=82 pi=[57,82)/1 crt=53'483 lcod 0'0 active pruub 202.057846069s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 82 pg[9.1e( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=82 pruub=9.911739349s) [2] r=-1 lpr=82 pi=[57,82)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 202.057846069s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 82 pg[9.e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=82) [2] r=0 lpr=82 pi=[57,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 82 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=82) [2] r=0 lpr=82 pi=[57,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v200: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Dec 13 02:35:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec 13 02:35:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec 13 02:35:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 13 02:35:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec 13 02:35:23 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 13 02:35:23 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec 13 02:35:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 83 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=70/71 n=6 ec=57/47 lis/c=70/70 les/c/f=71/71/0 sis=83 pruub=9.341098785s) [2] r=-1 lpr=83 pi=[70,83)/1 crt=53'483 active pruub 209.128875732s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 83 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=70/71 n=6 ec=57/47 lis/c=70/70 les/c/f=71/71/0 sis=83 pruub=9.341064453s) [2] r=-1 lpr=83 pi=[70,83)/1 crt=53'483 unknown NOTIFY pruub 209.128875732s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 83 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=77/78 n=7 ec=57/47 lis/c=77/77 les/c/f=78/78/0 sis=83 pruub=11.822781563s) [2] r=-1 lpr=83 pi=[77,83)/1 crt=53'483 active pruub 211.610946655s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 83 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=69/71 n=6 ec=57/47 lis/c=69/69 les/c/f=71/71/0 sis=83 pruub=9.340552330s) [2] r=-1 lpr=83 pi=[69,83)/1 crt=53'483 active pruub 209.128952026s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 83 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=67/68 n=7 ec=57/47 lis/c=67/67 les/c/f=68/68/0 sis=83 pruub=12.402116776s) [2] r=-1 lpr=83 pi=[67,83)/1 crt=53'483 active pruub 212.190536499s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 83 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=67/68 n=7 ec=57/47 lis/c=67/67 les/c/f=68/68/0 sis=83 pruub=12.402101517s) [2] r=-1 lpr=83 pi=[67,83)/1 crt=53'483 unknown NOTIFY pruub 212.190536499s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 83 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=69/71 n=6 ec=57/47 lis/c=69/69 les/c/f=71/71/0 sis=83 pruub=9.340498924s) [2] r=-1 lpr=83 pi=[69,83)/1 crt=53'483 unknown NOTIFY pruub 209.128952026s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 83 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=77/78 n=7 ec=57/47 lis/c=77/77 les/c/f=78/78/0 sis=83 pruub=11.822570801s) [2] r=-1 lpr=83 pi=[77,83)/1 crt=53'483 unknown NOTIFY pruub 211.610946655s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:23 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec 13 02:35:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 83 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=69/69 les/c/f=71/71/0 sis=83) [2] r=0 lpr=83 pi=[69,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 83 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=77/77 les/c/f=78/78/0 sis=83) [2] r=0 lpr=83 pi=[77,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 83 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=67/67 les/c/f=68/68/0 sis=83) [2] r=0 lpr=83 pi=[67,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 83 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=70/70 les/c/f=71/71/0 sis=83) [2] r=0 lpr=83 pi=[70,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 83 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[57,83)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 83 pg[9.e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[57,83)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 83 pg[9.e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[57,83)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 83 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[57,83)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 83 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[57,83)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 83 pg[9.6( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[57,83)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 83 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[57,83)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 83 pg[9.6( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[57,83)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 83 pg[9.1e( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=0 lpr=83 pi=[57,83)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 83 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=0 lpr=83 pi=[57,83)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 83 pg[9.e( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=0 lpr=83 pi=[57,83)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 83 pg[9.1e( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=0 lpr=83 pi=[57,83)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 83 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=0 lpr=83 pi=[57,83)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 83 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=0 lpr=83 pi=[57,83)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 83 pg[9.e( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=0 lpr=83 pi=[57,83)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:23 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 83 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] r=0 lpr=83 pi=[57,83)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:24 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 13 02:35:24 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec 13 02:35:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec 13 02:35:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec 13 02:35:24 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec 13 02:35:24 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 84 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=70/71 n=6 ec=57/47 lis/c=70/70 les/c/f=71/71/0 sis=84) [2]/[0] r=0 lpr=84 pi=[70,84)/1 crt=53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:24 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 84 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=70/71 n=6 ec=57/47 lis/c=70/70 les/c/f=71/71/0 sis=84) [2]/[0] r=0 lpr=84 pi=[70,84)/1 crt=53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:24 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 84 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=77/78 n=7 ec=57/47 lis/c=77/77 les/c/f=78/78/0 sis=84) [2]/[0] r=0 lpr=84 pi=[77,84)/1 crt=53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:24 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 84 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=77/78 n=7 ec=57/47 lis/c=77/77 les/c/f=78/78/0 sis=84) [2]/[0] r=0 lpr=84 pi=[77,84)/1 crt=53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:24 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 84 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=69/71 n=6 ec=57/47 lis/c=69/69 les/c/f=71/71/0 sis=84) [2]/[0] r=0 lpr=84 pi=[69,84)/1 crt=53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:24 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 84 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=69/71 n=6 ec=57/47 lis/c=69/69 les/c/f=71/71/0 sis=84) [2]/[0] r=0 lpr=84 pi=[69,84)/1 crt=53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:24 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 84 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=67/68 n=7 ec=57/47 lis/c=67/67 les/c/f=68/68/0 sis=84) [2]/[0] r=0 lpr=84 pi=[67,84)/1 crt=53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:24 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 84 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=67/68 n=7 ec=57/47 lis/c=67/67 les/c/f=68/68/0 sis=84) [2]/[0] r=0 lpr=84 pi=[67,84)/1 crt=53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:24 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 84 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=69/69 les/c/f=71/71/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[69,84)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:24 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 84 pg[9.17( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=69/69 les/c/f=71/71/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[69,84)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:24 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 84 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=67/67 les/c/f=68/68/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[67,84)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:24 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 84 pg[9.7( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=67/67 les/c/f=68/68/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[67,84)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:24 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 84 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=70/70 les/c/f=71/71/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[70,84)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:24 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 84 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=77/77 les/c/f=78/78/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[77,84)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:24 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 84 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=70/70 les/c/f=71/71/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[70,84)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:24 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 84 pg[9.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=77/77 les/c/f=78/78/0 sis=84) [2]/[0] r=-1 lpr=84 pi=[77,84)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:24 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 13 02:35:24 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 84 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=83/84 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[57,83)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:24 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 84 pg[9.1e( v 53'483 (0'0,53'483] local-lis/les=83/84 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[57,83)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:24 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 84 pg[9.e( v 53'483 (0'0,53'483] local-lis/les=83/84 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[57,83)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:24 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 84 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=83/84 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[57,83)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v203: 321 pgs: 4 unknown, 317 active+clean; 460 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec 13 02:35:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec 13 02:35:25 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec 13 02:35:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 85 pg[9.e( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 85 pg[9.e( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 85 pg[9.1e( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 85 pg[9.1e( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 85 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 85 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 85 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 85 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 85 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=83/84 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85 pruub=15.009618759s) [2] async=[2] r=-1 lpr=85 pi=[57,85)/1 crt=53'483 lcod 0'0 active pruub 209.508880615s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 85 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=83/84 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85 pruub=15.009551048s) [2] r=-1 lpr=85 pi=[57,85)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 209.508880615s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 85 pg[9.1e( v 53'483 (0'0,53'483] local-lis/les=83/84 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85 pruub=15.009481430s) [2] async=[2] r=-1 lpr=85 pi=[57,85)/1 crt=53'483 lcod 0'0 active pruub 209.508850098s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 85 pg[9.e( v 53'483 (0'0,53'483] local-lis/les=83/84 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85 pruub=15.009437561s) [2] async=[2] r=-1 lpr=85 pi=[57,85)/1 crt=53'483 lcod 0'0 active pruub 209.508865356s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 85 pg[9.1e( v 53'483 (0'0,53'483] local-lis/les=83/84 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85 pruub=15.009353638s) [2] r=-1 lpr=85 pi=[57,85)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 209.508850098s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 85 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=83/84 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85 pruub=15.009313583s) [2] async=[2] r=-1 lpr=85 pi=[57,85)/1 crt=53'483 lcod 0'0 active pruub 209.508819580s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 85 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=83/84 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85 pruub=15.009287834s) [2] r=-1 lpr=85 pi=[57,85)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 209.508819580s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:25 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 85 pg[9.e( v 53'483 (0'0,53'483] local-lis/les=83/84 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85 pruub=15.009020805s) [2] r=-1 lpr=85 pi=[57,85)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 209.508865356s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 85 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=84/85 n=7 ec=57/47 lis/c=77/77 les/c/f=78/78/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[77,84)/1 crt=53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 85 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=84/85 n=6 ec=57/47 lis/c=69/69 les/c/f=71/71/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[69,84)/1 crt=53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 85 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=84/85 n=7 ec=57/47 lis/c=67/67 les/c/f=68/68/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[67,84)/1 crt=53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 85 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=84/85 n=6 ec=57/47 lis/c=70/70 les/c/f=71/71/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[70,84)/1 crt=53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec 13 02:35:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec 13 02:35:26 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec 13 02:35:26 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 86 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=84/85 n=6 ec=57/47 lis/c=84/70 les/c/f=85/71/0 sis=86 pruub=15.221201897s) [2] async=[2] r=-1 lpr=86 pi=[70,86)/1 crt=53'483 active pruub 217.814590454s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:26 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 86 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=84/85 n=6 ec=57/47 lis/c=84/70 les/c/f=85/71/0 sis=86 pruub=15.221107483s) [2] r=-1 lpr=86 pi=[70,86)/1 crt=53'483 unknown NOTIFY pruub 217.814590454s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:26 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 86 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=84/85 n=7 ec=57/47 lis/c=84/77 les/c/f=85/78/0 sis=86 pruub=15.220626831s) [2] async=[2] r=-1 lpr=86 pi=[77,86)/1 crt=53'483 active pruub 217.814468384s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:26 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 86 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=84/85 n=7 ec=57/47 lis/c=84/77 les/c/f=85/78/0 sis=86 pruub=15.220532417s) [2] r=-1 lpr=86 pi=[77,86)/1 crt=53'483 unknown NOTIFY pruub 217.814468384s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:26 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 86 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=84/85 n=7 ec=57/47 lis/c=84/67 les/c/f=85/68/0 sis=86 pruub=15.220313072s) [2] async=[2] r=-1 lpr=86 pi=[67,86)/1 crt=53'483 active pruub 217.814529419s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:26 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 86 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=84/85 n=7 ec=57/47 lis/c=84/67 les/c/f=85/68/0 sis=86 pruub=15.220244408s) [2] r=-1 lpr=86 pi=[67,86)/1 crt=53'483 unknown NOTIFY pruub 217.814529419s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:26 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 86 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=84/85 n=6 ec=57/47 lis/c=84/69 les/c/f=85/71/0 sis=86 pruub=15.220070839s) [2] async=[2] r=-1 lpr=86 pi=[69,86)/1 crt=53'483 active pruub 217.814498901s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:26 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 86 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=84/85 n=6 ec=57/47 lis/c=84/69 les/c/f=85/71/0 sis=86 pruub=15.219786644s) [2] r=-1 lpr=86 pi=[69,86)/1 crt=53'483 unknown NOTIFY pruub 217.814498901s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:26 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 86 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=84/70 les/c/f=85/71/0 sis=86) [2] r=0 lpr=86 pi=[70,86)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:26 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 86 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=84/70 les/c/f=85/71/0 sis=86) [2] r=0 lpr=86 pi=[70,86)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:26 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 86 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=84/67 les/c/f=85/68/0 sis=86) [2] r=0 lpr=86 pi=[67,86)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:26 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 86 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=84/67 les/c/f=85/68/0 sis=86) [2] r=0 lpr=86 pi=[67,86)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:26 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 86 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=84/77 les/c/f=85/78/0 sis=86) [2] r=0 lpr=86 pi=[77,86)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:26 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 86 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=84/77 les/c/f=85/78/0 sis=86) [2] r=0 lpr=86 pi=[77,86)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:26 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 86 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=84/69 les/c/f=85/71/0 sis=86) [2] r=0 lpr=86 pi=[69,86)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:26 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 86 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=84/69 les/c/f=85/71/0 sis=86) [2] r=0 lpr=86 pi=[69,86)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:26 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 86 pg[9.e( v 53'483 (0'0,53'483] local-lis/les=85/86 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:26 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 86 pg[9.1e( v 53'483 (0'0,53'483] local-lis/les=85/86 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:26 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 86 pg[9.6( v 53'483 (0'0,53'483] local-lis/les=85/86 n=7 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:26 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 86 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=85/86 n=6 ec=57/47 lis/c=83/57 les/c/f=84/58/0 sis=85) [2] r=0 lpr=85 pi=[57,85)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec 13 02:35:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec 13 02:35:27 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec 13 02:35:27 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 87 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=86/87 n=6 ec=57/47 lis/c=84/70 les/c/f=85/71/0 sis=86) [2] r=0 lpr=86 pi=[70,86)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:27 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 87 pg[9.7( v 53'483 (0'0,53'483] local-lis/les=86/87 n=7 ec=57/47 lis/c=84/67 les/c/f=85/68/0 sis=86) [2] r=0 lpr=86 pi=[67,86)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:27 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 87 pg[9.17( v 53'483 (0'0,53'483] local-lis/les=86/87 n=6 ec=57/47 lis/c=84/69 les/c/f=85/71/0 sis=86) [2] r=0 lpr=86 pi=[69,86)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:27 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 87 pg[9.f( v 53'483 (0'0,53'483] local-lis/les=86/87 n=7 ec=57/47 lis/c=84/77 les/c/f=85/78/0 sis=86) [2] r=0 lpr=86 pi=[77,86)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v207: 321 pgs: 4 unknown, 317 active+clean; 460 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:29 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec 13 02:35:29 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec 13 02:35:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v208: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.6 KiB/s wr, 36 op/s; 647 B/s, 14 objects/s recovering
Dec 13 02:35:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Dec 13 02:35:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec 13 02:35:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec 13 02:35:29 np0005558241 python3[100797]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:35:29 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec 13 02:35:29 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec 13 02:35:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 13 02:35:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec 13 02:35:29 np0005558241 podman[100798]: 2025-12-13 07:35:29.748835421 +0000 UTC m=+0.026799890 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:35:30 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec 13 02:35:30 np0005558241 podman[100798]: 2025-12-13 07:35:30.630260679 +0000 UTC m=+0.908225188 container create e557c8ba8e669ce8f79f7e643d2397804a2932dc828f77cfc216564e9a8bbc0a (image=quay.io/ceph/ceph:v20, name=recursing_pike, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:35:30 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec 13 02:35:31 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec 13 02:35:31 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec 13 02:35:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v210: 321 pgs: 321 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.4 KiB/s wr, 30 op/s; 537 B/s, 12 objects/s recovering
Dec 13 02:35:31 np0005558241 systemd[1]: Started libpod-conmon-e557c8ba8e669ce8f79f7e643d2397804a2932dc828f77cfc216564e9a8bbc0a.scope.
Dec 13 02:35:31 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:35:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bc9dbc3626777e29899529216e92282c146ec67c0b28d37538bb3dee021ff0f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bc9dbc3626777e29899529216e92282c146ec67c0b28d37538bb3dee021ff0f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:31 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec 13 02:35:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 88 pg[9.8( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=88) [2] r=0 lpr=88 pi=[57,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 88 pg[9.18( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=88) [2] r=0 lpr=88 pi=[57,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Dec 13 02:35:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec 13 02:35:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec 13 02:35:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 88 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=88 pruub=9.279875755s) [2] r=-1 lpr=88 pi=[57,88)/1 crt=53'483 lcod 0'0 active pruub 210.054428101s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 88 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=88 pruub=9.279828072s) [2] r=-1 lpr=88 pi=[57,88)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 210.054428101s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 88 pg[9.18( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=88 pruub=9.283068657s) [2] r=-1 lpr=88 pi=[57,88)/1 crt=53'483 lcod 0'0 active pruub 210.058105469s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:31 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 88 pg[9.18( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=88 pruub=9.283038139s) [2] r=-1 lpr=88 pi=[57,88)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 210.058105469s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:31 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec 13 02:35:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 13 02:35:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec 13 02:35:32 np0005558241 podman[100798]: 2025-12-13 07:35:32.112880342 +0000 UTC m=+2.390844921 container init e557c8ba8e669ce8f79f7e643d2397804a2932dc828f77cfc216564e9a8bbc0a (image=quay.io/ceph/ceph:v20, name=recursing_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:35:32 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec 13 02:35:32 np0005558241 podman[100798]: 2025-12-13 07:35:32.122867751 +0000 UTC m=+2.400832260 container start e557c8ba8e669ce8f79f7e643d2397804a2932dc828f77cfc216564e9a8bbc0a (image=quay.io/ceph/ceph:v20, name=recursing_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:35:32 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 89 pg[9.8( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[57,89)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:32 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 89 pg[9.8( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[57,89)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:32 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 89 pg[9.18( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[57,89)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:32 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 89 pg[9.18( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[57,89)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:32 np0005558241 podman[100798]: 2025-12-13 07:35:32.139606388 +0000 UTC m=+2.417570907 container attach e557c8ba8e669ce8f79f7e643d2397804a2932dc828f77cfc216564e9a8bbc0a (image=quay.io/ceph/ceph:v20, name=recursing_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:35:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 13 02:35:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec 13 02:35:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 89 pg[9.18( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=89) [2]/[1] r=0 lpr=89 pi=[57,89)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 89 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=89) [2]/[1] r=0 lpr=89 pi=[57,89)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 89 pg[9.18( v 53'483 (0'0,53'483] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=89) [2]/[1] r=0 lpr=89 pi=[57,89)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:32 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 89 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=89) [2]/[1] r=0 lpr=89 pi=[57,89)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec 13 02:35:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec 13 02:35:33 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec 13 02:35:33 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 13 02:35:33 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 90 pg[9.18( v 53'483 (0'0,53'483] local-lis/les=89/90 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=89) [2]/[1] async=[2] r=0 lpr=89 pi=[57,89)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:33 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 90 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=89/90 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=89) [2]/[1] async=[2] r=0 lpr=89 pi=[57,89)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:33 np0005558241 recursing_pike[100813]: could not fetch user info: no user info saved
Dec 13 02:35:33 np0005558241 systemd[1]: libpod-e557c8ba8e669ce8f79f7e643d2397804a2932dc828f77cfc216564e9a8bbc0a.scope: Deactivated successfully.
Dec 13 02:35:33 np0005558241 podman[100798]: 2025-12-13 07:35:33.238321234 +0000 UTC m=+3.516285703 container died e557c8ba8e669ce8f79f7e643d2397804a2932dc828f77cfc216564e9a8bbc0a (image=quay.io/ceph/ceph:v20, name=recursing_pike, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 02:35:33 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3bc9dbc3626777e29899529216e92282c146ec67c0b28d37538bb3dee021ff0f-merged.mount: Deactivated successfully.
Dec 13 02:35:33 np0005558241 podman[100798]: 2025-12-13 07:35:33.283205585 +0000 UTC m=+3.561170054 container remove e557c8ba8e669ce8f79f7e643d2397804a2932dc828f77cfc216564e9a8bbc0a (image=quay.io/ceph/ceph:v20, name=recursing_pike, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:35:33 np0005558241 systemd[1]: libpod-conmon-e557c8ba8e669ce8f79f7e643d2397804a2932dc828f77cfc216564e9a8bbc0a.scope: Deactivated successfully.
Dec 13 02:35:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v213: 321 pgs: 2 remapped+peering, 319 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.3 KiB/s wr, 36 op/s; 526 B/s, 11 objects/s recovering
Dec 13 02:35:33 np0005558241 python3[100936]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 18ee9de6-e00b-571b-ab9b-b7aab06174df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:35:33 np0005558241 podman[100937]: 2025-12-13 07:35:33.669733326 +0000 UTC m=+0.043867776 container create ef168163582800dae860de218ab19654789f8b9a56251f8588bf3d1a15afb563 (image=quay.io/ceph/ceph:v20, name=optimistic_mendel, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:35:33 np0005558241 systemd[1]: Started libpod-conmon-ef168163582800dae860de218ab19654789f8b9a56251f8588bf3d1a15afb563.scope.
Dec 13 02:35:33 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:35:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fa42d7fb4a0f5f166feaa33abc7a3e3ac4c38907d8d426c7bc82f8efc2ada7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fa42d7fb4a0f5f166feaa33abc7a3e3ac4c38907d8d426c7bc82f8efc2ada7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:35:33 np0005558241 podman[100937]: 2025-12-13 07:35:33.650060445 +0000 UTC m=+0.024194935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 02:35:33 np0005558241 podman[100937]: 2025-12-13 07:35:33.761241221 +0000 UTC m=+0.135375711 container init ef168163582800dae860de218ab19654789f8b9a56251f8588bf3d1a15afb563 (image=quay.io/ceph/ceph:v20, name=optimistic_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 02:35:33 np0005558241 podman[100937]: 2025-12-13 07:35:33.76800487 +0000 UTC m=+0.142139340 container start ef168163582800dae860de218ab19654789f8b9a56251f8588bf3d1a15afb563 (image=quay.io/ceph/ceph:v20, name=optimistic_mendel, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 02:35:33 np0005558241 podman[100937]: 2025-12-13 07:35:33.771846756 +0000 UTC m=+0.145981236 container attach ef168163582800dae860de218ab19654789f8b9a56251f8588bf3d1a15afb563 (image=quay.io/ceph/ceph:v20, name=optimistic_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:35:33 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec 13 02:35:33 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]: {
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "user_id": "openstack",
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "display_name": "openstack",
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "email": "",
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "suspended": 0,
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "max_buckets": 1000,
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "subusers": [],
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "keys": [
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:        {
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:            "user": "openstack",
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:            "access_key": "DNW3FLC1NZ95G9UTQ4FL",
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:            "secret_key": "2YADrUuiy8F4LpvMv1UPqCgik1Fhl6MHg60vjuRO",
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:            "active": true,
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:            "create_date": "2025-12-13T07:35:34.006031Z"
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:        }
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    ],
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "swift_keys": [],
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "caps": [],
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "op_mask": "read, write, delete",
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "default_placement": "",
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "default_storage_class": "",
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "placement_tags": [],
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "bucket_quota": {
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:        "enabled": false,
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:        "check_on_raw": false,
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:        "max_size": -1,
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:        "max_size_kb": 0,
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:        "max_objects": -1
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    },
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "user_quota": {
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:        "enabled": false,
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:        "check_on_raw": false,
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:        "max_size": -1,
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:        "max_size_kb": 0,
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:        "max_objects": -1
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    },
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "temp_url_keys": [],
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "type": "rgw",
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "mfa_ids": [],
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "account_id": "",
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "path": "/",
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "create_date": "2025-12-13T07:35:34.005580Z",
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "tags": [],
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]:    "group_ids": []
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]: }
Dec 13 02:35:34 np0005558241 optimistic_mendel[100952]: 
Dec 13 02:35:34 np0005558241 podman[100937]: 2025-12-13 07:35:34.041460737 +0000 UTC m=+0.415595187 container died ef168163582800dae860de218ab19654789f8b9a56251f8588bf3d1a15afb563 (image=quay.io/ceph/ceph:v20, name=optimistic_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:35:34 np0005558241 systemd[1]: libpod-ef168163582800dae860de218ab19654789f8b9a56251f8588bf3d1a15afb563.scope: Deactivated successfully.
Dec 13 02:35:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d7fa42d7fb4a0f5f166feaa33abc7a3e3ac4c38907d8d426c7bc82f8efc2ada7-merged.mount: Deactivated successfully.
Dec 13 02:35:34 np0005558241 podman[100937]: 2025-12-13 07:35:34.088928543 +0000 UTC m=+0.463063013 container remove ef168163582800dae860de218ab19654789f8b9a56251f8588bf3d1a15afb563 (image=quay.io/ceph/ceph:v20, name=optimistic_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:35:34 np0005558241 systemd[1]: libpod-conmon-ef168163582800dae860de218ab19654789f8b9a56251f8588bf3d1a15afb563.scope: Deactivated successfully.
Dec 13 02:35:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec 13 02:35:34 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec 13 02:35:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec 13 02:35:34 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec 13 02:35:34 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec 13 02:35:34 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 91 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=89/90 n=7 ec=57/47 lis/c=89/57 les/c/f=90/58/0 sis=91 pruub=14.954468727s) [2] async=[2] r=-1 lpr=91 pi=[57,91)/1 crt=53'483 lcod 0'0 active pruub 218.086654663s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:34 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 91 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=89/90 n=7 ec=57/47 lis/c=89/57 les/c/f=90/58/0 sis=91 pruub=14.954370499s) [2] r=-1 lpr=91 pi=[57,91)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 218.086654663s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:34 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 91 pg[9.18( v 90'487 (0'0,90'487] local-lis/les=89/90 n=6 ec=57/47 lis/c=89/57 les/c/f=90/58/0 sis=91 pruub=14.949507713s) [2] async=[2] r=-1 lpr=91 pi=[57,91)/1 crt=90'486 lcod 90'486 active pruub 218.082916260s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:34 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 91 pg[9.18( v 90'487 (0'0,90'487] local-lis/les=89/90 n=6 ec=57/47 lis/c=89/57 les/c/f=90/58/0 sis=91 pruub=14.949386597s) [2] r=-1 lpr=91 pi=[57,91)/1 crt=90'486 lcod 90'486 unknown NOTIFY pruub 218.082916260s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:34 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 91 pg[9.18( v 90'487 (0'0,90'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=89/57 les/c/f=90/58/0 sis=91) [2] r=0 lpr=91 pi=[57,91)/1 pct=0'0 crt=90'486 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:34 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 91 pg[9.18( v 90'487 (0'0,90'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=89/57 les/c/f=90/58/0 sis=91) [2] r=0 lpr=91 pi=[57,91)/1 crt=90'486 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:34 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 91 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=89/57 les/c/f=90/58/0 sis=91) [2] r=0 lpr=91 pi=[57,91)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:34 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 91 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=89/57 les/c/f=90/58/0 sis=91) [2] r=0 lpr=91 pi=[57,91)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec 13 02:35:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v215: 321 pgs: 2 remapped+peering, 319 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 21 op/s
Dec 13 02:35:35 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec 13 02:35:35 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec 13 02:35:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec 13 02:35:36 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec 13 02:35:36 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 92 pg[9.18( v 90'487 (0'0,90'487] local-lis/les=91/92 n=6 ec=57/47 lis/c=89/57 les/c/f=90/58/0 sis=91) [2] r=0 lpr=91 pi=[57,91)/1 crt=90'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:36 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec 13 02:35:36 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec 13 02:35:36 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 92 pg[9.8( v 53'483 (0'0,53'483] local-lis/les=91/92 n=7 ec=57/47 lis/c=89/57 les/c/f=90/58/0 sis=91) [2] r=0 lpr=91 pi=[57,91)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v217: 321 pgs: 2 remapped+peering, 319 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 21 op/s
Dec 13 02:35:37 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec 13 02:35:37 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec 13 02:35:37 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec 13 02:35:37 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec 13 02:35:39 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 13 02:35:39 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 13 02:35:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v218: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 325 B/s wr, 38 op/s; 83 B/s, 2 objects/s recovering
Dec 13 02:35:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:35:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:35:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:35:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:35:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:35:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:35:40 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec 13 02:35:40 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec 13 02:35:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v219: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 255 B/s wr, 30 op/s; 65 B/s, 1 objects/s recovering
Dec 13 02:35:41 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec 13 02:35:42 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec 13 02:35:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v220: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 221 B/s wr, 18 op/s; 57 B/s, 1 objects/s recovering
Dec 13 02:35:43 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec 13 02:35:43 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec 13 02:35:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Dec 13 02:35:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 13 02:35:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Dec 13 02:35:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 13 02:35:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Dec 13 02:35:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 13 02:35:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:43 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec 13 02:35:43 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec 13 02:35:43 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec 13 02:35:43 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 13 02:35:43 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 13 02:35:43 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 13 02:35:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec 13 02:35:43 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec 13 02:35:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 13 02:35:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 13 02:35:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 13 02:35:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec 13 02:35:43 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec 13 02:35:44 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 13 02:35:44 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 13 02:35:44 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 13 02:35:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v222: 321 pgs: 4 active+clean+scrubbing, 317 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 9.8 KiB/s rd, 219 B/s wr, 18 op/s; 56 B/s, 1 objects/s recovering
Dec 13 02:35:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Dec 13 02:35:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec 13 02:35:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec 13 02:35:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec 13 02:35:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 13 02:35:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec 13 02:35:45 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec 13 02:35:46 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec 13 02:35:46 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec 13 02:35:46 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 13 02:35:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v224: 321 pgs: 4 active+clean+scrubbing, 317 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Dec 13 02:35:47 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec 13 02:35:47 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec 13 02:35:47 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec 13 02:35:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec 13 02:35:47 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 13 02:35:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec 13 02:35:47 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec 13 02:35:47 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec 13 02:35:48 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec 13 02:35:48 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec 13 02:35:48 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 95 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=95 pruub=8.501245499s) [2] r=-1 lpr=95 pi=[57,95)/1 crt=53'483 lcod 0'0 active pruub 226.054428101s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:48 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 95 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=95 pruub=8.501214981s) [2] r=-1 lpr=95 pi=[57,95)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 226.054428101s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:48 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 95 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=95 pruub=8.503978729s) [2] r=-1 lpr=95 pi=[57,95)/1 crt=90'486 lcod 90'486 active pruub 226.058288574s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:48 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 95 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=95 pruub=8.503946304s) [2] r=-1 lpr=95 pi=[57,95)/1 crt=90'486 lcod 90'486 unknown NOTIFY pruub 226.058288574s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:48 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 95 pg[9.c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=95) [2] r=0 lpr=95 pi=[57,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:48 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 95 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=95) [2] r=0 lpr=95 pi=[57,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec 13 02:35:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec 13 02:35:48 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec 13 02:35:48 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 96 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=96) [2]/[1] r=-1 lpr=96 pi=[57,96)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:48 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 96 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=96) [2]/[1] r=-1 lpr=96 pi=[57,96)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:48 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 96 pg[9.c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=96) [2]/[1] r=-1 lpr=96 pi=[57,96)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:48 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 96 pg[9.c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=96) [2]/[1] r=-1 lpr=96 pi=[57,96)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:48 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 96 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=96) [2]/[1] r=0 lpr=96 pi=[57,96)/1 crt=90'486 lcod 90'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:48 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 96 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=57/58 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=96) [2]/[1] r=0 lpr=96 pi=[57,96)/1 crt=90'486 lcod 90'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:48 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 96 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=96) [2]/[1] r=0 lpr=96 pi=[57,96)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:48 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 96 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=57/58 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=96) [2]/[1] r=0 lpr=96 pi=[57,96)/1 crt=53'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:48 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 13 02:35:49 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec 13 02:35:49 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec 13 02:35:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v227: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Dec 13 02:35:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec 13 02:35:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec 13 02:35:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 13 02:35:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec 13 02:35:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec 13 02:35:49 np0005558241 systemd-logind[787]: New session 37 of user zuul.
Dec 13 02:35:49 np0005558241 systemd[1]: Started Session 37 of User zuul.
Dec 13 02:35:49 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec 13 02:35:49 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 13 02:35:50 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 97 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=96/97 n=6 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[57,96)/1 crt=90'487 lcod 90'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:50 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 97 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=96/97 n=7 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=96) [2]/[1] async=[2] r=0 lpr=96 pi=[57,96)/1 crt=53'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec 13 02:35:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec 13 02:35:50 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec 13 02:35:50 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 98 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=96/57 les/c/f=97/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 pct=0'0 crt=90'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:50 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 98 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=96/57 les/c/f=97/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=90'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:50 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 98 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=96/57 les/c/f=97/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:50 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 98 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=0/0 n=7 ec=57/47 lis/c=96/57 les/c/f=97/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:35:50 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 98 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=96/97 n=6 ec=57/47 lis/c=96/57 les/c/f=97/58/0 sis=98 pruub=15.273664474s) [2] async=[2] r=-1 lpr=98 pi=[57,98)/1 crt=90'487 lcod 90'486 active pruub 235.138702393s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:50 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 98 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=96/97 n=6 ec=57/47 lis/c=96/57 les/c/f=97/58/0 sis=98 pruub=15.273543358s) [2] r=-1 lpr=98 pi=[57,98)/1 crt=90'487 lcod 90'486 unknown NOTIFY pruub 235.138702393s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:50 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 98 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=96/97 n=7 ec=57/47 lis/c=96/57 les/c/f=97/58/0 sis=98 pruub=15.273633957s) [2] async=[2] r=-1 lpr=98 pi=[57,98)/1 crt=53'483 lcod 0'0 active pruub 235.140304565s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:35:50 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 98 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=96/97 n=7 ec=57/47 lis/c=96/57 les/c/f=97/58/0 sis=98 pruub=15.273186684s) [2] r=-1 lpr=98 pi=[57,98)/1 crt=53'483 lcod 0'0 unknown NOTIFY pruub 235.140304565s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:35:51 np0005558241 python3.9[101204]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:35:51 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Dec 13 02:35:51 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Dec 13 02:35:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v230: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Dec 13 02:35:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec 13 02:35:51 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec 13 02:35:51 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec 13 02:35:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec 13 02:35:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 13 02:35:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec 13 02:35:51 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec 13 02:35:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec 13 02:35:51 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 99 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=98/99 n=6 ec=57/47 lis/c=96/57 les/c/f=97/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=90'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:51 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 99 pg[9.c( v 53'483 (0'0,53'483] local-lis/les=98/99 n=7 ec=57/47 lis/c=96/57 les/c/f=97/58/0 sis=98) [2] r=0 lpr=98 pi=[57,98)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:35:52 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec 13 02:35:52 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec 13 02:35:52 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec 13 02:35:52 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec 13 02:35:52 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 13 02:35:52 np0005558241 python3.9[101422]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:35:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v232: 321 pgs: 321 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:35:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Dec 13 02:35:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec 13 02:35:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec 13 02:35:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 13 02:35:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec 13 02:35:53 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec 13 02:35:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec 13 02:35:54 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec 13 02:35:54 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec 13 02:35:54 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 13 02:35:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v234: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 73 B/s, 2 objects/s recovering
Dec 13 02:35:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Dec 13 02:35:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Dec 13 02:35:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec 13 02:35:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 13 02:35:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec 13 02:35:56 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec 13 02:35:56 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Dec 13 02:35:56 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 13 02:35:56 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 13 02:35:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v236: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Dec 13 02:35:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Dec 13 02:35:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Dec 13 02:35:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec 13 02:35:57 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 13 02:35:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 13 02:35:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec 13 02:35:58 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec 13 02:35:58 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Dec 13 02:35:58 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 13 02:35:58 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec 13 02:35:58 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec 13 02:35:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:35:58 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec 13 02:35:58 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec 13 02:35:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v238: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Dec 13 02:35:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Dec 13 02:35:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Dec 13 02:35:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec 13 02:35:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 13 02:35:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec 13 02:35:59 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec 13 02:35:59 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Dec 13 02:36:00 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 13 02:36:01 np0005558241 systemd[1]: session-37.scope: Deactivated successfully.
Dec 13 02:36:01 np0005558241 systemd[1]: session-37.scope: Consumed 8.852s CPU time.
Dec 13 02:36:01 np0005558241 systemd-logind[787]: Session 37 logged out. Waiting for processes to exit.
Dec 13 02:36:01 np0005558241 systemd-logind[787]: Removed session 37.
Dec 13 02:36:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v240: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Dec 13 02:36:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Dec 13 02:36:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec 13 02:36:01 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Dec 13 02:36:01 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 13 02:36:01 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 13 02:36:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 13 02:36:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec 13 02:36:02 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec 13 02:36:02 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 104 pg[9.13( v 90'485 (0'0,90'485] local-lis/les=67/68 n=6 ec=57/47 lis/c=67/67 les/c/f=68/68/0 sis=104 pruub=13.952581406s) [2] r=-1 lpr=104 pi=[67,104)/1 crt=89'484 lcod 89'484 active pruub 252.191665649s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:02 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 104 pg[9.13( v 90'485 (0'0,90'485] local-lis/les=67/68 n=6 ec=57/47 lis/c=67/67 les/c/f=68/68/0 sis=104 pruub=13.952522278s) [2] r=-1 lpr=104 pi=[67,104)/1 crt=89'484 lcod 89'484 unknown NOTIFY pruub 252.191665649s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:02 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 104 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=67/67 les/c/f=68/68/0 sis=104) [2] r=0 lpr=104 pi=[67,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:02 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Dec 13 02:36:02 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Dec 13 02:36:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 13 02:36:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec 13 02:36:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec 13 02:36:03 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec 13 02:36:03 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 105 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=67/67 les/c/f=68/68/0 sis=105) [2]/[0] r=-1 lpr=105 pi=[67,105)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:03 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 105 pg[9.13( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=67/67 les/c/f=68/68/0 sis=105) [2]/[0] r=-1 lpr=105 pi=[67,105)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:03 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 105 pg[9.13( v 90'485 (0'0,90'485] local-lis/les=67/68 n=6 ec=57/47 lis/c=67/67 les/c/f=68/68/0 sis=105) [2]/[0] r=0 lpr=105 pi=[67,105)/1 crt=89'484 lcod 89'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:03 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 105 pg[9.13( v 90'485 (0'0,90'485] local-lis/les=67/68 n=6 ec=57/47 lis/c=67/67 les/c/f=68/68/0 sis=105) [2]/[0] r=0 lpr=105 pi=[67,105)/1 crt=89'484 lcod 89'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v243: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Dec 13 02:36:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Dec 13 02:36:03 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec 13 02:36:03 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec 13 02:36:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:36:03 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Dec 13 02:36:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec 13 02:36:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 13 02:36:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec 13 02:36:04 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec 13 02:36:04 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 106 pg[9.13( v 90'485 (0'0,90'485] local-lis/les=105/106 n=6 ec=57/47 lis/c=67/67 les/c/f=68/68/0 sis=105) [2]/[0] async=[2] r=0 lpr=105 pi=[67,105)/1 crt=90'485 lcod 89'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:04 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Dec 13 02:36:04 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Dec 13 02:36:04 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 13 02:36:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec 13 02:36:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec 13 02:36:05 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec 13 02:36:05 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 107 pg[9.13( v 90'485 (0'0,90'485] local-lis/les=105/106 n=6 ec=57/47 lis/c=105/67 les/c/f=106/68/0 sis=107 pruub=15.100477219s) [2] async=[2] r=-1 lpr=107 pi=[67,107)/1 crt=90'485 lcod 89'484 active pruub 256.369873047s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:05 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 107 pg[9.13( v 90'485 (0'0,90'485] local-lis/les=105/106 n=6 ec=57/47 lis/c=105/67 les/c/f=106/68/0 sis=107 pruub=15.100369453s) [2] r=-1 lpr=107 pi=[67,107)/1 crt=90'485 lcod 89'484 unknown NOTIFY pruub 256.369873047s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:05 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 107 pg[9.13( v 90'485 (0'0,90'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=105/67 les/c/f=106/68/0 sis=107) [2] r=0 lpr=107 pi=[67,107)/1 pct=0'0 crt=90'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:05 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 107 pg[9.13( v 90'485 (0'0,90'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=105/67 les/c/f=106/68/0 sis=107) [2] r=0 lpr=107 pi=[67,107)/1 crt=90'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v246: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec 13 02:36:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec 13 02:36:06 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec 13 02:36:06 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 108 pg[9.13( v 90'485 (0'0,90'485] local-lis/les=107/108 n=6 ec=57/47 lis/c=105/67 les/c/f=106/68/0 sis=107) [2] r=0 lpr=107 pi=[67,107)/1 crt=90'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:06 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Dec 13 02:36:06 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Dec 13 02:36:06 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec 13 02:36:06 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec 13 02:36:07 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Dec 13 02:36:07 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Dec 13 02:36:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v248: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:36:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:36:08
Dec 13 02:36:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:36:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Some PGs (0.003115) are inactive; try again later
Dec 13 02:36:09 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Dec 13 02:36:09 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Dec 13 02:36:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v249: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 0 objects/s recovering
Dec 13 02:36:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Dec 13 02:36:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Dec 13 02:36:09 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Dec 13 02:36:09 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Dec 13 02:36:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:36:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:36:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:36:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:36:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:36:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:36:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:36:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:36:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:36:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:36:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:36:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:36:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:36:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:36:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:36:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:36:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec 13 02:36:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Dec 13 02:36:10 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec 13 02:36:10 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Dec 13 02:36:10 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec 13 02:36:11 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Dec 13 02:36:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 13 02:36:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec 13 02:36:11 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec 13 02:36:11 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 109 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=71/72 n=6 ec=57/47 lis/c=71/71 les/c/f=72/72/0 sis=109 pruub=12.098935127s) [1] r=-1 lpr=109 pi=[71,109)/1 crt=53'483 active pruub 259.713195801s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:11 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 109 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=71/72 n=6 ec=57/47 lis/c=71/71 les/c/f=72/72/0 sis=109 pruub=12.098618507s) [1] r=-1 lpr=109 pi=[71,109)/1 crt=53'483 unknown NOTIFY pruub 259.713195801s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:11 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 109 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=71/71 les/c/f=72/72/0 sis=109) [1] r=0 lpr=109 pi=[71,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v251: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 40 B/s, 0 objects/s recovering
Dec 13 02:36:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Dec 13 02:36:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Dec 13 02:36:11 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 13 02:36:11 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Dec 13 02:36:11 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Dec 13 02:36:11 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 13 02:36:11 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 13 02:36:11 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Dec 13 02:36:12 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec 13 02:36:12 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec 13 02:36:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec 13 02:36:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 13 02:36:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec 13 02:36:12 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec 13 02:36:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 110 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=85/86 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=110 pruub=9.910621643s) [0] r=-1 lpr=110 pi=[85,110)/1 crt=53'483 active pruub 242.698455811s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:12 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 110 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=85/86 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=110 pruub=9.910564423s) [0] r=-1 lpr=110 pi=[85,110)/1 crt=53'483 unknown NOTIFY pruub 242.698455811s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:12 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 110 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=71/71 les/c/f=72/72/0 sis=110) [1]/[0] r=-1 lpr=110 pi=[71,110)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:12 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 110 pg[9.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=71/71 les/c/f=72/72/0 sis=110) [1]/[0] r=-1 lpr=110 pi=[71,110)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:12 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 110 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=110) [0] r=0 lpr=110 pi=[85,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:12 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 110 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=71/72 n=6 ec=57/47 lis/c=71/71 les/c/f=72/72/0 sis=110) [1]/[0] r=0 lpr=110 pi=[71,110)/1 crt=53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:12 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 110 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=71/72 n=6 ec=57/47 lis/c=71/71 les/c/f=72/72/0 sis=110) [1]/[0] r=0 lpr=110 pi=[71,110)/1 crt=53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:12 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 13 02:36:12 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec 13 02:36:12 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec 13 02:36:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v253: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 35 B/s, 0 objects/s recovering
Dec 13 02:36:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Dec 13 02:36:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Dec 13 02:36:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec 13 02:36:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 13 02:36:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec 13 02:36:13 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec 13 02:36:13 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 111 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[85,111)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:13 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 111 pg[9.16( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[85,111)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:13 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 111 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=85/86 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=111) [0]/[2] r=0 lpr=111 pi=[85,111)/1 crt=53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:13 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 111 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=85/86 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=111) [0]/[2] r=0 lpr=111 pi=[85,111)/1 crt=53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:13 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 111 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=110/111 n=6 ec=57/47 lis/c=71/71 les/c/f=72/72/0 sis=110) [1]/[0] async=[1] r=0 lpr=110 pi=[71,110)/1 crt=53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Dec 13 02:36:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 13 02:36:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:36:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec 13 02:36:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec 13 02:36:13 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 112 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=110/71 les/c/f=111/72/0 sis=112) [1] r=0 lpr=112 pi=[71,112)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:13 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 112 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=110/71 les/c/f=111/72/0 sis=112) [1] r=0 lpr=112 pi=[71,112)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:13 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec 13 02:36:13 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 112 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=110/111 n=6 ec=57/47 lis/c=110/71 les/c/f=111/72/0 sis=112 pruub=15.740199089s) [1] async=[1] r=-1 lpr=112 pi=[71,112)/1 crt=53'483 active pruub 265.755676270s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:13 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 112 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=110/111 n=6 ec=57/47 lis/c=110/71 les/c/f=111/72/0 sis=112 pruub=15.740078926s) [1] r=-1 lpr=112 pi=[71,112)/1 crt=53'483 unknown NOTIFY pruub 265.755676270s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:13 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 112 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=111/112 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=111) [0]/[2] async=[0] r=0 lpr=111 pi=[85,111)/1 crt=53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:14 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Dec 13 02:36:14 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Dec 13 02:36:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec 13 02:36:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec 13 02:36:14 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec 13 02:36:14 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 113 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=111/112 n=6 ec=57/47 lis/c=111/85 les/c/f=112/86/0 sis=113 pruub=15.003973961s) [0] async=[0] r=-1 lpr=113 pi=[85,113)/1 crt=53'483 active pruub 250.122680664s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:14 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 113 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=111/112 n=6 ec=57/47 lis/c=111/85 les/c/f=112/86/0 sis=113 pruub=15.003801346s) [0] r=-1 lpr=113 pi=[85,113)/1 crt=53'483 unknown NOTIFY pruub 250.122680664s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 113 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=111/85 les/c/f=112/86/0 sis=113) [0] r=0 lpr=113 pi=[85,113)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:14 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 113 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=111/85 les/c/f=112/86/0 sis=113) [0] r=0 lpr=113 pi=[85,113)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:14 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 113 pg[9.15( v 53'483 (0'0,53'483] local-lis/les=112/113 n=6 ec=57/47 lis/c=110/71 les/c/f=111/72/0 sis=112) [1] r=0 lpr=112 pi=[71,112)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:15 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec 13 02:36:15 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec 13 02:36:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 1 activating+remapped, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4/251 objects misplaced (1.594%)
Dec 13 02:36:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec 13 02:36:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec 13 02:36:15 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec 13 02:36:15 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 114 pg[9.16( v 53'483 (0'0,53'483] local-lis/les=113/114 n=6 ec=57/47 lis/c=111/85 les/c/f=112/86/0 sis=113) [0] r=0 lpr=113 pi=[85,113)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:15 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Dec 13 02:36:15 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Dec 13 02:36:16 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Dec 13 02:36:16 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Dec 13 02:36:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v259: 321 pgs: 1 activating+remapped, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4/251 objects misplaced (1.594%)
Dec 13 02:36:17 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec 13 02:36:17 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec 13 02:36:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:36:18 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec 13 02:36:18 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec 13 02:36:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Dec 13 02:36:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Dec 13 02:36:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Dec 13 02:36:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec 13 02:36:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 13 02:36:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec 13 02:36:19 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec 13 02:36:19 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec 13 02:36:19 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec 13 02:36:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:36:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:36:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:36:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:36:20 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Dec 13 02:36:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:36:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:36:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:36:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:36:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:36:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:36:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:36:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3189482845707067e-06 of space, bias 4.0, pg target 0.001582737941484848 quantized to 16 (current 32)
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:36:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:36:20 np0005558241 podman[101625]: 2025-12-13 07:36:20.542114911 +0000 UTC m=+0.026886162 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:36:20 np0005558241 podman[101625]: 2025-12-13 07:36:20.913974551 +0000 UTC m=+0.398745732 container create d210b9238b068b005559d2de189342554a4a0b950d1d37cd9a30f4c58f300e42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 02:36:20 np0005558241 systemd[77917]: Created slice User Background Tasks Slice.
Dec 13 02:36:20 np0005558241 systemd[77917]: Starting Cleanup of User's Temporary Files and Directories...
Dec 13 02:36:20 np0005558241 systemd[77917]: Finished Cleanup of User's Temporary Files and Directories.
Dec 13 02:36:21 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 13 02:36:21 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:36:21 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:36:21 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:36:21 np0005558241 systemd[1]: Started libpod-conmon-d210b9238b068b005559d2de189342554a4a0b950d1d37cd9a30f4c58f300e42.scope.
Dec 13 02:36:21 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:36:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 32 B/s, 1 objects/s recovering
Dec 13 02:36:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Dec 13 02:36:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Dec 13 02:36:21 np0005558241 systemd-logind[787]: New session 38 of user zuul.
Dec 13 02:36:21 np0005558241 systemd[1]: Started Session 38 of User zuul.
Dec 13 02:36:21 np0005558241 podman[101625]: 2025-12-13 07:36:21.644153242 +0000 UTC m=+1.128924433 container init d210b9238b068b005559d2de189342554a4a0b950d1d37cd9a30f4c58f300e42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 02:36:21 np0005558241 podman[101625]: 2025-12-13 07:36:21.655064904 +0000 UTC m=+1.139836075 container start d210b9238b068b005559d2de189342554a4a0b950d1d37cd9a30f4c58f300e42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_kalam, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:36:21 np0005558241 systemd[1]: libpod-d210b9238b068b005559d2de189342554a4a0b950d1d37cd9a30f4c58f300e42.scope: Deactivated successfully.
Dec 13 02:36:21 np0005558241 romantic_kalam[101643]: 167 167
Dec 13 02:36:21 np0005558241 conmon[101643]: conmon d210b9238b068b005559 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d210b9238b068b005559d2de189342554a4a0b950d1d37cd9a30f4c58f300e42.scope/container/memory.events
Dec 13 02:36:21 np0005558241 podman[101625]: 2025-12-13 07:36:21.982395242 +0000 UTC m=+1.467166453 container attach d210b9238b068b005559d2de189342554a4a0b950d1d37cd9a30f4c58f300e42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_kalam, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 02:36:21 np0005558241 podman[101625]: 2025-12-13 07:36:21.984649058 +0000 UTC m=+1.469420259 container died d210b9238b068b005559d2de189342554a4a0b950d1d37cd9a30f4c58f300e42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_kalam, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:36:22 np0005558241 systemd[1]: var-lib-containers-storage-overlay-bd7a5224e9c1bd76a3cf43e78bdd65040d6d2316f36fdadea4c05d4a475bb0fc-merged.mount: Deactivated successfully.
Dec 13 02:36:22 np0005558241 podman[101625]: 2025-12-13 07:36:22.05643761 +0000 UTC m=+1.541208781 container remove d210b9238b068b005559d2de189342554a4a0b950d1d37cd9a30f4c58f300e42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 02:36:22 np0005558241 systemd[1]: libpod-conmon-d210b9238b068b005559d2de189342554a4a0b950d1d37cd9a30f4c58f300e42.scope: Deactivated successfully.
Dec 13 02:36:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec 13 02:36:22 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Dec 13 02:36:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 13 02:36:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec 13 02:36:22 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec 13 02:36:22 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 116 pg[9.19( v 90'487 (0'0,90'487] local-lis/les=74/76 n=6 ec=57/47 lis/c=74/74 les/c/f=76/76/0 sis=116 pruub=10.918578148s) [2] r=-1 lpr=116 pi=[74,116)/1 crt=90'486 lcod 90'486 active pruub 269.358703613s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:22 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 116 pg[9.19( v 90'487 (0'0,90'487] local-lis/les=74/76 n=6 ec=57/47 lis/c=74/74 les/c/f=76/76/0 sis=116 pruub=10.918490410s) [2] r=-1 lpr=116 pi=[74,116)/1 crt=90'486 lcod 90'486 unknown NOTIFY pruub 269.358703613s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:22 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 116 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=74/74 les/c/f=76/76/0 sis=116) [2] r=0 lpr=116 pi=[74,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:22 np0005558241 podman[101772]: 2025-12-13 07:36:22.279777733 +0000 UTC m=+0.043511807 container create 94c0bb9df0801a4dae241d6a4b4f03cce754a6bbd4fbd6a1e680bb2378deb6e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 02:36:22 np0005558241 systemd[1]: Started libpod-conmon-94c0bb9df0801a4dae241d6a4b4f03cce754a6bbd4fbd6a1e680bb2378deb6e4.scope.
Dec 13 02:36:22 np0005558241 podman[101772]: 2025-12-13 07:36:22.26162803 +0000 UTC m=+0.025362124 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:36:22 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:36:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c764f1c586e8f712b246da64eb2f9ce01d53fbe3a8b0237bf050378444882f8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:36:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c764f1c586e8f712b246da64eb2f9ce01d53fbe3a8b0237bf050378444882f8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:36:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c764f1c586e8f712b246da64eb2f9ce01d53fbe3a8b0237bf050378444882f8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:36:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c764f1c586e8f712b246da64eb2f9ce01d53fbe3a8b0237bf050378444882f8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:36:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c764f1c586e8f712b246da64eb2f9ce01d53fbe3a8b0237bf050378444882f8e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:36:22 np0005558241 python3.9[101841]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 13 02:36:22 np0005558241 podman[101772]: 2025-12-13 07:36:22.820458716 +0000 UTC m=+0.584192810 container init 94c0bb9df0801a4dae241d6a4b4f03cce754a6bbd4fbd6a1e680bb2378deb6e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:36:22 np0005558241 podman[101772]: 2025-12-13 07:36:22.834242289 +0000 UTC m=+0.597976373 container start 94c0bb9df0801a4dae241d6a4b4f03cce754a6bbd4fbd6a1e680bb2378deb6e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:36:22 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Dec 13 02:36:22 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Dec 13 02:36:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec 13 02:36:23 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Dec 13 02:36:23 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Dec 13 02:36:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 28 B/s, 1 objects/s recovering
Dec 13 02:36:23 np0005558241 dreamy_jennings[101812]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:36:23 np0005558241 dreamy_jennings[101812]: --> All data devices are unavailable
Dec 13 02:36:23 np0005558241 systemd[1]: libpod-94c0bb9df0801a4dae241d6a4b4f03cce754a6bbd4fbd6a1e680bb2378deb6e4.scope: Deactivated successfully.
Dec 13 02:36:23 np0005558241 conmon[101812]: conmon 94c0bb9df0801a4dae24 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94c0bb9df0801a4dae241d6a4b4f03cce754a6bbd4fbd6a1e680bb2378deb6e4.scope/container/memory.events
Dec 13 02:36:23 np0005558241 podman[101772]: 2025-12-13 07:36:23.573392174 +0000 UTC m=+1.337126268 container attach 94c0bb9df0801a4dae241d6a4b4f03cce754a6bbd4fbd6a1e680bb2378deb6e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jennings, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:36:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Dec 13 02:36:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Dec 13 02:36:23 np0005558241 podman[101772]: 2025-12-13 07:36:23.575305352 +0000 UTC m=+1.339039446 container died 94c0bb9df0801a4dae241d6a4b4f03cce754a6bbd4fbd6a1e680bb2378deb6e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 02:36:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec 13 02:36:23 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec 13 02:36:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 117 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=74/74 les/c/f=76/76/0 sis=117) [2]/[0] r=-1 lpr=117 pi=[74,117)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:23 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 117 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=74/74 les/c/f=76/76/0 sis=117) [2]/[0] r=-1 lpr=117 pi=[74,117)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:23 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 13 02:36:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 117 pg[9.19( v 90'487 (0'0,90'487] local-lis/les=74/76 n=6 ec=57/47 lis/c=74/74 les/c/f=76/76/0 sis=117) [2]/[0] r=0 lpr=117 pi=[74,117)/1 crt=90'486 lcod 90'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:23 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 117 pg[9.19( v 90'487 (0'0,90'487] local-lis/les=74/76 n=6 ec=57/47 lis/c=74/74 les/c/f=76/76/0 sis=117) [2]/[0] r=0 lpr=117 pi=[74,117)/1 crt=90'486 lcod 90'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:23 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c764f1c586e8f712b246da64eb2f9ce01d53fbe3a8b0237bf050378444882f8e-merged.mount: Deactivated successfully.
Dec 13 02:36:23 np0005558241 podman[101772]: 2025-12-13 07:36:23.654039047 +0000 UTC m=+1.417773161 container remove 94c0bb9df0801a4dae241d6a4b4f03cce754a6bbd4fbd6a1e680bb2378deb6e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jennings, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 02:36:23 np0005558241 systemd[1]: libpod-conmon-94c0bb9df0801a4dae241d6a4b4f03cce754a6bbd4fbd6a1e680bb2378deb6e4.scope: Deactivated successfully.
Dec 13 02:36:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:36:23 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec 13 02:36:23 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec 13 02:36:23 np0005558241 python3.9[102045]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:36:24 np0005558241 podman[102113]: 2025-12-13 07:36:24.226498301 +0000 UTC m=+0.070682565 container create bf43c2e8bd4c7f945c07d6d2172c59509f76063b777d2971298910b64fc8111e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_brattain, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 02:36:24 np0005558241 systemd[1]: Started libpod-conmon-bf43c2e8bd4c7f945c07d6d2172c59509f76063b777d2971298910b64fc8111e.scope.
Dec 13 02:36:24 np0005558241 podman[102113]: 2025-12-13 07:36:24.20081295 +0000 UTC m=+0.044997224 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:36:24 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:36:24 np0005558241 podman[102113]: 2025-12-13 07:36:24.331882651 +0000 UTC m=+0.176066925 container init bf43c2e8bd4c7f945c07d6d2172c59509f76063b777d2971298910b64fc8111e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_brattain, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:36:24 np0005558241 podman[102113]: 2025-12-13 07:36:24.345779238 +0000 UTC m=+0.189963492 container start bf43c2e8bd4c7f945c07d6d2172c59509f76063b777d2971298910b64fc8111e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_brattain, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:36:24 np0005558241 podman[102113]: 2025-12-13 07:36:24.350238389 +0000 UTC m=+0.194422643 container attach bf43c2e8bd4c7f945c07d6d2172c59509f76063b777d2971298910b64fc8111e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:36:24 np0005558241 dazzling_brattain[102132]: 167 167
Dec 13 02:36:24 np0005558241 systemd[1]: libpod-bf43c2e8bd4c7f945c07d6d2172c59509f76063b777d2971298910b64fc8111e.scope: Deactivated successfully.
Dec 13 02:36:24 np0005558241 podman[102113]: 2025-12-13 07:36:24.354915296 +0000 UTC m=+0.199099560 container died bf43c2e8bd4c7f945c07d6d2172c59509f76063b777d2971298910b64fc8111e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_brattain, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 02:36:24 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2e8a5e067780b13078321f1fc4b5eb125aa260afad13dee834434d2150108369-merged.mount: Deactivated successfully.
Dec 13 02:36:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec 13 02:36:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 13 02:36:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec 13 02:36:24 np0005558241 podman[102113]: 2025-12-13 07:36:24.588825083 +0000 UTC m=+0.433009307 container remove bf43c2e8bd4c7f945c07d6d2172c59509f76063b777d2971298910b64fc8111e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 02:36:24 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec 13 02:36:24 np0005558241 systemd[1]: libpod-conmon-bf43c2e8bd4c7f945c07d6d2172c59509f76063b777d2971298910b64fc8111e.scope: Deactivated successfully.
Dec 13 02:36:24 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Dec 13 02:36:24 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 13 02:36:24 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 118 pg[9.19( v 90'487 (0'0,90'487] local-lis/les=117/118 n=6 ec=57/47 lis/c=74/74 les/c/f=76/76/0 sis=117) [2]/[0] async=[2] r=0 lpr=117 pi=[74,117)/1 crt=90'487 lcod 90'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:24 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Dec 13 02:36:24 np0005558241 podman[102230]: 2025-12-13 07:36:24.827729695 +0000 UTC m=+0.069053965 container create 7f91c13e9748ea89b052898f8af960d36c3b65242cc9cd1364f73666ba3c9c64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:36:24 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Dec 13 02:36:24 np0005558241 systemd[1]: Started libpod-conmon-7f91c13e9748ea89b052898f8af960d36c3b65242cc9cd1364f73666ba3c9c64.scope.
Dec 13 02:36:24 np0005558241 podman[102230]: 2025-12-13 07:36:24.79827918 +0000 UTC m=+0.039603500 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:36:24 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:36:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0148e9462f62b4e54163b4867e900009beb19cfb433bbc5971878733738226/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:36:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0148e9462f62b4e54163b4867e900009beb19cfb433bbc5971878733738226/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:36:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0148e9462f62b4e54163b4867e900009beb19cfb433bbc5971878733738226/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:36:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0148e9462f62b4e54163b4867e900009beb19cfb433bbc5971878733738226/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:36:24 np0005558241 podman[102230]: 2025-12-13 07:36:24.958702163 +0000 UTC m=+0.200026433 container init 7f91c13e9748ea89b052898f8af960d36c3b65242cc9cd1364f73666ba3c9c64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_cannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:36:24 np0005558241 podman[102230]: 2025-12-13 07:36:24.971151723 +0000 UTC m=+0.212475963 container start 7f91c13e9748ea89b052898f8af960d36c3b65242cc9cd1364f73666ba3c9c64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_cannon, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:36:24 np0005558241 podman[102230]: 2025-12-13 07:36:24.975125133 +0000 UTC m=+0.216449373 container attach 7f91c13e9748ea89b052898f8af960d36c3b65242cc9cd1364f73666ba3c9c64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_cannon, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]: {
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:    "0": [
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:        {
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "devices": [
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "/dev/loop3"
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            ],
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "lv_name": "ceph_lv0",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "lv_size": "21470642176",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "name": "ceph_lv0",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "tags": {
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.cluster_name": "ceph",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.crush_device_class": "",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.encrypted": "0",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.objectstore": "bluestore",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.osd_id": "0",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.type": "block",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.vdo": "0",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.with_tpm": "0"
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            },
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "type": "block",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "vg_name": "ceph_vg0"
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:        }
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:    ],
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:    "1": [
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:        {
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "devices": [
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "/dev/loop4"
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            ],
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "lv_name": "ceph_lv1",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "lv_size": "21470642176",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "name": "ceph_lv1",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "tags": {
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.cluster_name": "ceph",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.crush_device_class": "",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.encrypted": "0",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.objectstore": "bluestore",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.osd_id": "1",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.type": "block",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.vdo": "0",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.with_tpm": "0"
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            },
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "type": "block",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "vg_name": "ceph_vg1"
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:        }
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:    ],
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:    "2": [
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:        {
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "devices": [
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "/dev/loop5"
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            ],
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "lv_name": "ceph_lv2",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "lv_size": "21470642176",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "name": "ceph_lv2",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "tags": {
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.cluster_name": "ceph",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.crush_device_class": "",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.encrypted": "0",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.objectstore": "bluestore",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.osd_id": "2",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.type": "block",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.vdo": "0",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:                "ceph.with_tpm": "0"
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            },
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "type": "block",
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:            "vg_name": "ceph_vg2"
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:        }
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]:    ]
Dec 13 02:36:25 np0005558241 relaxed_cannon[102264]: }
Dec 13 02:36:25 np0005558241 systemd[1]: libpod-7f91c13e9748ea89b052898f8af960d36c3b65242cc9cd1364f73666ba3c9c64.scope: Deactivated successfully.
Dec 13 02:36:25 np0005558241 podman[102230]: 2025-12-13 07:36:25.328816259 +0000 UTC m=+0.570140509 container died 7f91c13e9748ea89b052898f8af960d36c3b65242cc9cd1364f73666ba3c9c64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_cannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:36:25 np0005558241 python3.9[102328]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:36:25 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4d0148e9462f62b4e54163b4867e900009beb19cfb433bbc5971878733738226-merged.mount: Deactivated successfully.
Dec 13 02:36:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:25 np0005558241 podman[102230]: 2025-12-13 07:36:25.5973716 +0000 UTC m=+0.838695840 container remove 7f91c13e9748ea89b052898f8af960d36c3b65242cc9cd1364f73666ba3c9c64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_cannon, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:36:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Dec 13 02:36:25 np0005558241 systemd[1]: libpod-conmon-7f91c13e9748ea89b052898f8af960d36c3b65242cc9cd1364f73666ba3c9c64.scope: Deactivated successfully.
Dec 13 02:36:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Dec 13 02:36:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 119 pg[9.19( v 90'487 (0'0,90'487] local-lis/les=117/118 n=6 ec=57/47 lis/c=117/74 les/c/f=118/76/0 sis=119 pruub=14.753608704s) [2] async=[2] r=-1 lpr=119 pi=[74,119)/1 crt=90'487 lcod 90'486 active pruub 276.831054688s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 119 pg[9.19( v 90'487 (0'0,90'487] local-lis/les=117/118 n=6 ec=57/47 lis/c=117/74 les/c/f=118/76/0 sis=119 pruub=14.753482819s) [2] r=-1 lpr=119 pi=[74,119)/1 crt=90'487 lcod 90'486 unknown NOTIFY pruub 276.831054688s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:25 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Dec 13 02:36:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 119 pg[9.19( v 90'487 (0'0,90'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=117/74 les/c/f=118/76/0 sis=119) [2] r=0 lpr=119 pi=[74,119)/1 pct=0'0 crt=90'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:25 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 119 pg[9.19( v 90'487 (0'0,90'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=117/74 les/c/f=118/76/0 sis=119) [2] r=0 lpr=119 pi=[74,119)/1 crt=90'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:26 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 13 02:36:26 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec 13 02:36:26 np0005558241 podman[102489]: 2025-12-13 07:36:26.178375239 +0000 UTC m=+0.031192609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:36:26 np0005558241 podman[102489]: 2025-12-13 07:36:26.328330851 +0000 UTC m=+0.181148161 container create ba96e33b1bf8289388901355e1bc7405528ddb4fc01cda63829f2c953d6d2af8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_elbakyan, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:36:26 np0005558241 systemd[1]: Started libpod-conmon-ba96e33b1bf8289388901355e1bc7405528ddb4fc01cda63829f2c953d6d2af8.scope.
Dec 13 02:36:26 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:36:26 np0005558241 podman[102489]: 2025-12-13 07:36:26.424022429 +0000 UTC m=+0.276839719 container init ba96e33b1bf8289388901355e1bc7405528ddb4fc01cda63829f2c953d6d2af8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:36:26 np0005558241 podman[102489]: 2025-12-13 07:36:26.434245114 +0000 UTC m=+0.287062434 container start ba96e33b1bf8289388901355e1bc7405528ddb4fc01cda63829f2c953d6d2af8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_elbakyan, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 02:36:26 np0005558241 podman[102489]: 2025-12-13 07:36:26.438454009 +0000 UTC m=+0.291271329 container attach ba96e33b1bf8289388901355e1bc7405528ddb4fc01cda63829f2c953d6d2af8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 02:36:26 np0005558241 confident_elbakyan[102573]: 167 167
Dec 13 02:36:26 np0005558241 systemd[1]: libpod-ba96e33b1bf8289388901355e1bc7405528ddb4fc01cda63829f2c953d6d2af8.scope: Deactivated successfully.
Dec 13 02:36:26 np0005558241 podman[102489]: 2025-12-13 07:36:26.441322261 +0000 UTC m=+0.294139551 container died ba96e33b1bf8289388901355e1bc7405528ddb4fc01cda63829f2c953d6d2af8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_elbakyan, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:36:26 np0005558241 python3.9[102577]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:36:26 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5841d1d28e81e6a4e3ae2badda5b57508305d7e9522ff4ac4e966027b3c977d5-merged.mount: Deactivated successfully.
Dec 13 02:36:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Dec 13 02:36:26 np0005558241 podman[102489]: 2025-12-13 07:36:26.749689056 +0000 UTC m=+0.602506336 container remove ba96e33b1bf8289388901355e1bc7405528ddb4fc01cda63829f2c953d6d2af8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_elbakyan, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 02:36:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Dec 13 02:36:26 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Dec 13 02:36:26 np0005558241 systemd[1]: libpod-conmon-ba96e33b1bf8289388901355e1bc7405528ddb4fc01cda63829f2c953d6d2af8.scope: Deactivated successfully.
Dec 13 02:36:26 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 120 pg[9.19( v 90'487 (0'0,90'487] local-lis/les=119/120 n=6 ec=57/47 lis/c=117/74 les/c/f=118/76/0 sis=119) [2] r=0 lpr=119 pi=[74,119)/1 crt=90'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:27 np0005558241 podman[102651]: 2025-12-13 07:36:27.001640113 +0000 UTC m=+0.061182458 container create 53a5655bb8714efd7834202a6216fc39ab0ab3cda54554dd176ac7e7c4c89d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:36:27 np0005558241 systemd[1]: Started libpod-conmon-53a5655bb8714efd7834202a6216fc39ab0ab3cda54554dd176ac7e7c4c89d6c.scope.
Dec 13 02:36:27 np0005558241 podman[102651]: 2025-12-13 07:36:26.974471715 +0000 UTC m=+0.034014140 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:36:27 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:36:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac3e687aa0bd96b401852a87a749fd7be8367740d7936d2c2af101eb883ebcc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:36:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac3e687aa0bd96b401852a87a749fd7be8367740d7936d2c2af101eb883ebcc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:36:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac3e687aa0bd96b401852a87a749fd7be8367740d7936d2c2af101eb883ebcc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:36:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac3e687aa0bd96b401852a87a749fd7be8367740d7936d2c2af101eb883ebcc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:36:27 np0005558241 podman[102651]: 2025-12-13 07:36:27.099480625 +0000 UTC m=+0.159023010 container init 53a5655bb8714efd7834202a6216fc39ab0ab3cda54554dd176ac7e7c4c89d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hoover, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:36:27 np0005558241 podman[102651]: 2025-12-13 07:36:27.116166731 +0000 UTC m=+0.175709076 container start 53a5655bb8714efd7834202a6216fc39ab0ab3cda54554dd176ac7e7c4c89d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:36:27 np0005558241 podman[102651]: 2025-12-13 07:36:27.121620777 +0000 UTC m=+0.181163382 container attach 53a5655bb8714efd7834202a6216fc39ab0ab3cda54554dd176ac7e7c4c89d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec 13 02:36:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:27 np0005558241 python3.9[102789]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:36:27 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec 13 02:36:27 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec 13 02:36:27 np0005558241 lvm[102897]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:36:27 np0005558241 lvm[102897]: VG ceph_vg1 finished
Dec 13 02:36:27 np0005558241 lvm[102895]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:36:27 np0005558241 lvm[102895]: VG ceph_vg0 finished
Dec 13 02:36:27 np0005558241 lvm[102901]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:36:27 np0005558241 lvm[102901]: VG ceph_vg2 finished
Dec 13 02:36:28 np0005558241 thirsty_hoover[102697]: {}
Dec 13 02:36:28 np0005558241 systemd[1]: libpod-53a5655bb8714efd7834202a6216fc39ab0ab3cda54554dd176ac7e7c4c89d6c.scope: Deactivated successfully.
Dec 13 02:36:28 np0005558241 systemd[1]: libpod-53a5655bb8714efd7834202a6216fc39ab0ab3cda54554dd176ac7e7c4c89d6c.scope: Consumed 1.557s CPU time.
Dec 13 02:36:28 np0005558241 podman[102651]: 2025-12-13 07:36:28.076457563 +0000 UTC m=+1.135999908 container died 53a5655bb8714efd7834202a6216fc39ab0ab3cda54554dd176ac7e7c4c89d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:36:28 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1ac3e687aa0bd96b401852a87a749fd7be8367740d7936d2c2af101eb883ebcc-merged.mount: Deactivated successfully.
Dec 13 02:36:28 np0005558241 podman[102651]: 2025-12-13 07:36:28.147829634 +0000 UTC m=+1.207371979 container remove 53a5655bb8714efd7834202a6216fc39ab0ab3cda54554dd176ac7e7c4c89d6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hoover, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Dec 13 02:36:28 np0005558241 systemd[1]: libpod-conmon-53a5655bb8714efd7834202a6216fc39ab0ab3cda54554dd176ac7e7c4c89d6c.scope: Deactivated successfully.
Dec 13 02:36:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:36:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:36:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:36:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:36:28 np0005558241 python3.9[103035]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:36:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:36:28 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:36:28 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:36:28 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Dec 13 02:36:28 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Dec 13 02:36:29 np0005558241 python3.9[103197]: ansible-ansible.builtin.service_facts Invoked
Dec 13 02:36:29 np0005558241 network[103214]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 02:36:29 np0005558241 network[103215]: 'network-scripts' will be removed from distribution in near future.
Dec 13 02:36:29 np0005558241 network[103216]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 02:36:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 70 B/s, 1 objects/s recovering
Dec 13 02:36:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Dec 13 02:36:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Dec 13 02:36:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Dec 13 02:36:29 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec 13 02:36:29 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec 13 02:36:30 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec 13 02:36:30 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec 13 02:36:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 13 02:36:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Dec 13 02:36:30 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Dec 13 02:36:30 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Dec 13 02:36:31 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Dec 13 02:36:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Dec 13 02:36:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Dec 13 02:36:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Dec 13 02:36:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Dec 13 02:36:31 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Dec 13 02:36:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 13 02:36:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Dec 13 02:36:31 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec 13 02:36:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 122 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=98/99 n=6 ec=57/47 lis/c=98/98 les/c/f=99/99/0 sis=122 pruub=8.122266769s) [0] r=-1 lpr=122 pi=[98,122)/1 crt=90'487 active pruub 260.278076172s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:31 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 122 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=98/99 n=6 ec=57/47 lis/c=98/98 les/c/f=99/99/0 sis=122 pruub=8.122061729s) [0] r=-1 lpr=122 pi=[98,122)/1 crt=90'487 unknown NOTIFY pruub 260.278076172s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:31 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 122 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=98/98 les/c/f=99/99/0 sis=122) [0] r=0 lpr=122 pi=[98,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 13 02:36:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Dec 13 02:36:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 13 02:36:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Dec 13 02:36:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Dec 13 02:36:32 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Dec 13 02:36:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 123 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=98/98 les/c/f=99/99/0 sis=123) [0]/[2] r=-1 lpr=123 pi=[98,123)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:32 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 123 pg[9.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=98/98 les/c/f=99/99/0 sis=123) [0]/[2] r=-1 lpr=123 pi=[98,123)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:32 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 123 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=98/99 n=6 ec=57/47 lis/c=98/98 les/c/f=99/99/0 sis=123) [0]/[2] r=0 lpr=123 pi=[98,123)/1 crt=90'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:32 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 123 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=98/99 n=6 ec=57/47 lis/c=98/98 les/c/f=99/99/0 sis=123) [0]/[2] r=0 lpr=123 pi=[98,123)/1 crt=90'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:32 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec 13 02:36:32 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec 13 02:36:33 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec 13 02:36:33 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec 13 02:36:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Dec 13 02:36:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Dec 13 02:36:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Dec 13 02:36:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:36:33 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec 13 02:36:33 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec 13 02:36:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Dec 13 02:36:33 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec 13 02:36:33 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec 13 02:36:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 13 02:36:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Dec 13 02:36:33 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Dec 13 02:36:34 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Dec 13 02:36:34 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Dec 13 02:36:34 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Dec 13 02:36:34 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 124 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=123/124 n=6 ec=57/47 lis/c=98/98 les/c/f=99/99/0 sis=123) [0]/[2] async=[0] r=0 lpr=123 pi=[98,123)/1 crt=90'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:34 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 13 02:36:35 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec 13 02:36:35 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec 13 02:36:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v278: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Dec 13 02:36:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Dec 13 02:36:36 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Dec 13 02:36:36 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 125 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=123/124 n=6 ec=57/47 lis/c=123/98 les/c/f=124/99/0 sis=125 pruub=14.423685074s) [0] async=[0] r=-1 lpr=125 pi=[98,125)/1 crt=90'487 active pruub 271.280578613s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:36 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 125 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=123/124 n=6 ec=57/47 lis/c=123/98 les/c/f=124/99/0 sis=125 pruub=14.423545837s) [0] r=-1 lpr=125 pi=[98,125)/1 crt=90'487 unknown NOTIFY pruub 271.280578613s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:36 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 125 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=123/98 les/c/f=124/99/0 sis=125) [0] r=0 lpr=125 pi=[98,125)/1 pct=0'0 crt=90'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:36 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 125 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=0/0 n=6 ec=57/47 lis/c=123/98 les/c/f=124/99/0 sis=125) [0] r=0 lpr=125 pi=[98,125)/1 crt=90'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:37 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec 13 02:36:37 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec 13 02:36:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 1 remapped+peering, 320 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:37 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec 13 02:36:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Dec 13 02:36:37 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec 13 02:36:38 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Dec 13 02:36:38 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Dec 13 02:36:38 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 13 02:36:38 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 13 02:36:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Dec 13 02:36:38 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Dec 13 02:36:38 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 126 pg[9.1c( v 90'487 (0'0,90'487] local-lis/les=125/126 n=6 ec=57/47 lis/c=123/98 les/c/f=124/99/0 sis=125) [0] r=0 lpr=125 pi=[98,125)/1 crt=90'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:36:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Dec 13 02:36:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Dec 13 02:36:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Dec 13 02:36:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:36:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:36:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:36:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:36:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:36:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:36:40 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec 13 02:36:40 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec 13 02:36:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Dec 13 02:36:40 np0005558241 python3.9[103476]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:36:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Dec 13 02:36:40 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 13 02:36:40 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 13 02:36:41 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec 13 02:36:41 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec 13 02:36:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 13 02:36:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Dec 13 02:36:41 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Dec 13 02:36:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Dec 13 02:36:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 02:36:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:36:41 np0005558241 python3.9[103626]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:36:41 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 13 02:36:41 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 02:36:41 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec 13 02:36:42 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec 13 02:36:42 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 127 pg[9.1e( v 90'485 (0'0,90'485] local-lis/les=85/86 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=127 pruub=12.137515068s) [0] r=-1 lpr=127 pi=[85,127)/1 crt=89'484 lcod 89'484 active pruub 274.699035645s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:42 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 127 pg[9.1e( v 90'485 (0'0,90'485] local-lis/les=85/86 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=127 pruub=12.137446404s) [0] r=-1 lpr=127 pi=[85,127)/1 crt=89'484 lcod 89'484 unknown NOTIFY pruub 274.699035645s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:42 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 127 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=127) [0] r=0 lpr=127 pi=[85,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Dec 13 02:36:42 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:36:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Dec 13 02:36:42 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Dec 13 02:36:42 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=128) [0]/[2] r=-1 lpr=128 pi=[85,128)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:42 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=128) [0]/[2] r=-1 lpr=128 pi=[85,128)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:42 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 128 pg[9.1e( v 90'485 (0'0,90'485] local-lis/les=85/86 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=128) [0]/[2] r=0 lpr=128 pi=[85,128)/1 crt=89'484 lcod 89'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:42 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 128 pg[9.1e( v 90'485 (0'0,90'485] local-lis/les=85/86 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=128) [0]/[2] r=0 lpr=128 pi=[85,128)/1 crt=89'484 lcod 89'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:42 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 128 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=86/87 n=6 ec=57/47 lis/c=86/86 les/c/f=87/87/0 sis=128 pruub=12.739711761s) [1] r=-1 lpr=128 pi=[86,128)/1 crt=53'483 active pruub 275.708374023s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:42 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 128 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=86/87 n=6 ec=57/47 lis/c=86/86 les/c/f=87/87/0 sis=128 pruub=12.739518166s) [1] r=-1 lpr=128 pi=[86,128)/1 crt=53'483 unknown NOTIFY pruub 275.708374023s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:42 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 128 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=86/86 les/c/f=87/87/0 sis=128) [1] r=0 lpr=128 pi=[86,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:42 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 02:36:42 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec 13 02:36:42 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec 13 02:36:42 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec 13 02:36:42 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec 13 02:36:43 np0005558241 python3.9[103780]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:36:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 321 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 1 objects/s recovering
Dec 13 02:36:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Dec 13 02:36:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Dec 13 02:36:43 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec 13 02:36:43 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec 13 02:36:44 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Dec 13 02:36:44 np0005558241 python3.9[103938]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:36:44 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 129 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=86/87 n=6 ec=57/47 lis/c=86/86 les/c/f=87/87/0 sis=129) [1]/[2] r=0 lpr=129 pi=[86,129)/1 crt=53'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:44 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 129 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=86/87 n=6 ec=57/47 lis/c=86/86 les/c/f=87/87/0 sis=129) [1]/[2] r=0 lpr=129 pi=[86,129)/1 crt=53'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Dec 13 02:36:45 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec 13 02:36:45 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 129 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=86/86 les/c/f=87/87/0 sis=129) [1]/[2] r=-1 lpr=129 pi=[86,129)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:45 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 129 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=86/86 les/c/f=87/87/0 sis=129) [1]/[2] r=-1 lpr=129 pi=[86,129)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:45 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec 13 02:36:45 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 129 pg[9.1e( v 90'485 (0'0,90'485] local-lis/les=128/129 n=6 ec=57/47 lis/c=85/85 les/c/f=86/86/0 sis=128) [0]/[2] async=[0] r=0 lpr=128 pi=[85,128)/1 crt=90'485 lcod 89'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 1 unknown, 1 remapped+peering, 319 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:45 np0005558241 python3.9[104022]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:36:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Dec 13 02:36:45 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Dec 13 02:36:45 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 130 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=129/130 n=6 ec=57/47 lis/c=86/86 les/c/f=87/87/0 sis=129) [1]/[2] async=[1] r=0 lpr=129 pi=[86,129)/1 crt=53'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Dec 13 02:36:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Dec 13 02:36:46 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Dec 13 02:36:46 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 131 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=129/130 n=6 ec=57/47 lis/c=129/86 les/c/f=130/87/0 sis=131 pruub=15.168054581s) [1] async=[1] r=-1 lpr=131 pi=[86,131)/1 crt=53'483 active pruub 282.279846191s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:46 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 131 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=129/86 les/c/f=130/87/0 sis=131) [1] r=0 lpr=131 pi=[86,131)/1 pct=0'0 crt=53'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:46 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 131 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=0/0 n=6 ec=57/47 lis/c=129/86 les/c/f=130/87/0 sis=131) [1] r=0 lpr=131 pi=[86,131)/1 crt=53'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:46 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 131 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=129/130 n=6 ec=57/47 lis/c=129/86 les/c/f=130/87/0 sis=131 pruub=15.167943001s) [1] r=-1 lpr=131 pi=[86,131)/1 crt=53'483 unknown NOTIFY pruub 282.279846191s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:46 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 131 pg[9.1e( v 90'485 (0'0,90'485] local-lis/les=128/129 n=6 ec=57/47 lis/c=128/85 les/c/f=129/86/0 sis=131 pruub=14.661699295s) [0] async=[0] r=-1 lpr=131 pi=[85,131)/1 crt=90'485 lcod 89'484 active pruub 281.784973145s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:46 np0005558241 ceph-osd[89221]: osd.2 pg_epoch: 131 pg[9.1e( v 90'485 (0'0,90'485] local-lis/les=128/129 n=6 ec=57/47 lis/c=128/85 les/c/f=129/86/0 sis=131 pruub=14.661368370s) [0] r=-1 lpr=131 pi=[85,131)/1 crt=90'485 lcod 89'484 unknown NOTIFY pruub 281.784973145s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 02:36:46 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 131 pg[9.1e( v 90'485 (0'0,90'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=128/85 les/c/f=129/86/0 sis=131) [0] r=0 lpr=131 pi=[85,131)/1 pct=0'0 crt=90'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 02:36:46 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 131 pg[9.1e( v 90'485 (0'0,90'485] local-lis/les=0/0 n=6 ec=57/47 lis/c=128/85 les/c/f=129/86/0 sis=131) [0] r=0 lpr=131 pi=[85,131)/1 crt=90'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 02:36:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v291: 321 pgs: 1 unknown, 1 remapped+peering, 319 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:36:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Dec 13 02:36:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Dec 13 02:36:47 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Dec 13 02:36:47 np0005558241 ceph-osd[87041]: osd.0 pg_epoch: 132 pg[9.1e( v 90'485 (0'0,90'485] local-lis/les=131/132 n=6 ec=57/47 lis/c=128/85 les/c/f=129/86/0 sis=131) [0] r=0 lpr=131 pi=[85,131)/1 crt=90'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:47 np0005558241 ceph-osd[88086]: osd.1 pg_epoch: 132 pg[9.1f( v 53'483 (0'0,53'483] local-lis/les=131/132 n=6 ec=57/47 lis/c=129/86 les/c/f=130/87/0 sis=131) [1] r=0 lpr=131 pi=[86,131)/1 crt=53'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 02:36:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:36:48 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec 13 02:36:48 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec 13 02:36:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v293: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 381 B/s wr, 8 op/s; 89 B/s, 3 objects/s recovering
Dec 13 02:36:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v294: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 340 B/s wr, 7 op/s; 80 B/s, 3 objects/s recovering
Dec 13 02:36:51 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Dec 13 02:36:51 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Dec 13 02:36:52 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec 13 02:36:52 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec 13 02:36:52 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec 13 02:36:52 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec 13 02:36:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v295: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 265 B/s wr, 5 op/s; 62 B/s, 2 objects/s recovering
Dec 13 02:36:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:36:53 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 13 02:36:53 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 13 02:36:53 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec 13 02:36:53 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec 13 02:36:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v296: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 117 B/s wr, 4 op/s; 55 B/s, 2 objects/s recovering
Dec 13 02:36:56 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec 13 02:36:56 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec 13 02:36:56 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec 13 02:36:56 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec 13 02:36:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v297: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 102 B/s wr, 4 op/s; 48 B/s, 2 objects/s recovering
Dec 13 02:36:57 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Dec 13 02:36:57 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Dec 13 02:36:58 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec 13 02:36:58 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec 13 02:36:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:36:59 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Dec 13 02:36:59 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.a scrub starts
Dec 13 02:36:59 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Dec 13 02:36:59 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.a scrub ok
Dec 13 02:36:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v298: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 87 B/s wr, 3 op/s; 41 B/s, 1 objects/s recovering
Dec 13 02:37:00 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec 13 02:37:00 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec 13 02:37:00 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Dec 13 02:37:00 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Dec 13 02:37:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v299: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:01 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec 13 02:37:01 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec 13 02:37:02 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.b scrub starts
Dec 13 02:37:02 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.b scrub ok
Dec 13 02:37:03 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Dec 13 02:37:03 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Dec 13 02:37:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v300: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:03 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec 13 02:37:03 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec 13 02:37:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:37:05 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec 13 02:37:05 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec 13 02:37:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v301: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:05 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Dec 13 02:37:05 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Dec 13 02:37:06 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Dec 13 02:37:06 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Dec 13 02:37:07 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec 13 02:37:07 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec 13 02:37:07 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Dec 13 02:37:07 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Dec 13 02:37:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v302: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:07 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec 13 02:37:07 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec 13 02:37:08 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Dec 13 02:37:08 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Dec 13 02:37:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:37:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:37:08
Dec 13 02:37:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:37:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:37:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.log', 'images', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr']
Dec 13 02:37:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:37:08 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Dec 13 02:37:09 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Dec 13 02:37:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v303: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:09 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.c scrub starts
Dec 13 02:37:09 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.c scrub ok
Dec 13 02:37:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:37:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:37:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:37:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:37:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:37:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:37:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:37:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:37:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:37:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:37:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:37:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:37:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:37:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:37:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:37:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:37:10 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Dec 13 02:37:10 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Dec 13 02:37:11 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Dec 13 02:37:11 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Dec 13 02:37:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v304: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:12 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Dec 13 02:37:12 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Dec 13 02:37:12 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Dec 13 02:37:12 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Dec 13 02:37:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v305: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:37:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v306: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:15 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec 13 02:37:15 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec 13 02:37:16 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Dec 13 02:37:16 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Dec 13 02:37:17 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Dec 13 02:37:17 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Dec 13 02:37:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v307: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:17 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Dec 13 02:37:17 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Dec 13 02:37:18 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Dec 13 02:37:18 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Dec 13 02:37:18 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Dec 13 02:37:18 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Dec 13 02:37:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:37:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v308: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:19 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Dec 13 02:37:19 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:37:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:37:20 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Dec 13 02:37:20 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Dec 13 02:37:20 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Dec 13 02:37:20 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Dec 13 02:37:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v309: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:21 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Dec 13 02:37:21 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Dec 13 02:37:22 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Dec 13 02:37:22 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Dec 13 02:37:22 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.b scrub starts
Dec 13 02:37:22 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.b scrub ok
Dec 13 02:37:23 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Dec 13 02:37:23 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Dec 13 02:37:23 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Dec 13 02:37:23 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Dec 13 02:37:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v310: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:37:24 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.e scrub starts
Dec 13 02:37:24 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.e scrub ok
Dec 13 02:37:25 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.c scrub starts
Dec 13 02:37:25 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.c scrub ok
Dec 13 02:37:25 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Dec 13 02:37:25 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Dec 13 02:37:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v311: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:26 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.a scrub starts
Dec 13 02:37:26 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.a scrub ok
Dec 13 02:37:27 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Dec 13 02:37:27 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Dec 13 02:37:27 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 11.c scrub starts
Dec 13 02:37:27 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 11.c scrub ok
Dec 13 02:37:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v312: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:37:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:37:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:37:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:37:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:37:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:37:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:37:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:37:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:37:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:37:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:37:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:37:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:37:29 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Dec 13 02:37:29 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Dec 13 02:37:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v313: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:29 np0005558241 podman[104313]: 2025-12-13 07:37:29.744121758 +0000 UTC m=+0.044426928 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:37:29 np0005558241 podman[104313]: 2025-12-13 07:37:29.957448643 +0000 UTC m=+0.257753763 container create acb7688badec28931a258bd716cde83ec6fc6012f90eb2428a1dc1cb142b2f71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bose, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:37:29 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:37:29 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:37:29 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:37:30 np0005558241 systemd[1]: Started libpod-conmon-acb7688badec28931a258bd716cde83ec6fc6012f90eb2428a1dc1cb142b2f71.scope.
Dec 13 02:37:30 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:37:30 np0005558241 podman[104313]: 2025-12-13 07:37:30.095622005 +0000 UTC m=+0.395927095 container init acb7688badec28931a258bd716cde83ec6fc6012f90eb2428a1dc1cb142b2f71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 02:37:30 np0005558241 podman[104313]: 2025-12-13 07:37:30.108245391 +0000 UTC m=+0.408550491 container start acb7688badec28931a258bd716cde83ec6fc6012f90eb2428a1dc1cb142b2f71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bose, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 02:37:30 np0005558241 podman[104313]: 2025-12-13 07:37:30.112897746 +0000 UTC m=+0.413202846 container attach acb7688badec28931a258bd716cde83ec6fc6012f90eb2428a1dc1cb142b2f71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bose, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:37:30 np0005558241 hardcore_bose[104330]: 167 167
Dec 13 02:37:30 np0005558241 systemd[1]: libpod-acb7688badec28931a258bd716cde83ec6fc6012f90eb2428a1dc1cb142b2f71.scope: Deactivated successfully.
Dec 13 02:37:30 np0005558241 conmon[104330]: conmon acb7688badec28931a25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-acb7688badec28931a258bd716cde83ec6fc6012f90eb2428a1dc1cb142b2f71.scope/container/memory.events
Dec 13 02:37:30 np0005558241 podman[104313]: 2025-12-13 07:37:30.118545504 +0000 UTC m=+0.418850604 container died acb7688badec28931a258bd716cde83ec6fc6012f90eb2428a1dc1cb142b2f71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bose, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:37:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b60912eaf4c0f4e613058e005b23962427661599a582de19bf919646d99338b6-merged.mount: Deactivated successfully.
Dec 13 02:37:30 np0005558241 podman[104313]: 2025-12-13 07:37:30.167193687 +0000 UTC m=+0.467498757 container remove acb7688badec28931a258bd716cde83ec6fc6012f90eb2428a1dc1cb142b2f71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bose, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:37:30 np0005558241 systemd[1]: libpod-conmon-acb7688badec28931a258bd716cde83ec6fc6012f90eb2428a1dc1cb142b2f71.scope: Deactivated successfully.
Dec 13 02:37:30 np0005558241 podman[104354]: 2025-12-13 07:37:30.351095675 +0000 UTC m=+0.047332143 container create 85a18d369a216374f5d18a72c622eba3eceaab204554658b2e7989e86e3c9d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:37:30 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 11.f scrub starts
Dec 13 02:37:30 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 11.f scrub ok
Dec 13 02:37:30 np0005558241 systemd[1]: Started libpod-conmon-85a18d369a216374f5d18a72c622eba3eceaab204554658b2e7989e86e3c9d3e.scope.
Dec 13 02:37:30 np0005558241 podman[104354]: 2025-12-13 07:37:30.330982749 +0000 UTC m=+0.027219237 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:37:30 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:37:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fef80f3c1dbf05f69ce01c2097decd973d1e70c8b3c779f254c4b389f40e95b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:37:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fef80f3c1dbf05f69ce01c2097decd973d1e70c8b3c779f254c4b389f40e95b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:37:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fef80f3c1dbf05f69ce01c2097decd973d1e70c8b3c779f254c4b389f40e95b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:37:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fef80f3c1dbf05f69ce01c2097decd973d1e70c8b3c779f254c4b389f40e95b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:37:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fef80f3c1dbf05f69ce01c2097decd973d1e70c8b3c779f254c4b389f40e95b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:37:30 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 11.a scrub starts
Dec 13 02:37:30 np0005558241 podman[104354]: 2025-12-13 07:37:30.452795831 +0000 UTC m=+0.149032359 container init 85a18d369a216374f5d18a72c622eba3eceaab204554658b2e7989e86e3c9d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_galois, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:37:30 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 11.a scrub ok
Dec 13 02:37:30 np0005558241 podman[104354]: 2025-12-13 07:37:30.464708951 +0000 UTC m=+0.160945419 container start 85a18d369a216374f5d18a72c622eba3eceaab204554658b2e7989e86e3c9d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_galois, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 02:37:30 np0005558241 podman[104354]: 2025-12-13 07:37:30.469209473 +0000 UTC m=+0.165446051 container attach 85a18d369a216374f5d18a72c622eba3eceaab204554658b2e7989e86e3c9d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:37:30 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Dec 13 02:37:30 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Dec 13 02:37:30 np0005558241 elated_galois[104370]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:37:30 np0005558241 elated_galois[104370]: --> All data devices are unavailable
Dec 13 02:37:30 np0005558241 systemd[1]: libpod-85a18d369a216374f5d18a72c622eba3eceaab204554658b2e7989e86e3c9d3e.scope: Deactivated successfully.
Dec 13 02:37:30 np0005558241 podman[104354]: 2025-12-13 07:37:30.992022453 +0000 UTC m=+0.688258951 container died 85a18d369a216374f5d18a72c622eba3eceaab204554658b2e7989e86e3c9d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:37:31 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3fef80f3c1dbf05f69ce01c2097decd973d1e70c8b3c779f254c4b389f40e95b-merged.mount: Deactivated successfully.
Dec 13 02:37:31 np0005558241 podman[104354]: 2025-12-13 07:37:31.046540559 +0000 UTC m=+0.742777027 container remove 85a18d369a216374f5d18a72c622eba3eceaab204554658b2e7989e86e3c9d3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_galois, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:37:31 np0005558241 systemd[1]: libpod-conmon-85a18d369a216374f5d18a72c622eba3eceaab204554658b2e7989e86e3c9d3e.scope: Deactivated successfully.
Dec 13 02:37:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v314: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:31 np0005558241 podman[104466]: 2025-12-13 07:37:31.528166695 +0000 UTC m=+0.067451190 container create 73cf8c56453c041d166bd1c5ed4cb86677f3db762a5692a3967d225efa59dccf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bhaskara, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 02:37:31 np0005558241 systemd[1]: Started libpod-conmon-73cf8c56453c041d166bd1c5ed4cb86677f3db762a5692a3967d225efa59dccf.scope.
Dec 13 02:37:31 np0005558241 podman[104466]: 2025-12-13 07:37:31.49616545 +0000 UTC m=+0.035449995 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:37:31 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:37:31 np0005558241 podman[104466]: 2025-12-13 07:37:31.643329235 +0000 UTC m=+0.182613810 container init 73cf8c56453c041d166bd1c5ed4cb86677f3db762a5692a3967d225efa59dccf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bhaskara, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:37:31 np0005558241 podman[104466]: 2025-12-13 07:37:31.648960953 +0000 UTC m=+0.188245478 container start 73cf8c56453c041d166bd1c5ed4cb86677f3db762a5692a3967d225efa59dccf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bhaskara, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:37:31 np0005558241 podman[104466]: 2025-12-13 07:37:31.653138898 +0000 UTC m=+0.192423423 container attach 73cf8c56453c041d166bd1c5ed4cb86677f3db762a5692a3967d225efa59dccf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:37:31 np0005558241 vigorous_bhaskara[104482]: 167 167
Dec 13 02:37:31 np0005558241 systemd[1]: libpod-73cf8c56453c041d166bd1c5ed4cb86677f3db762a5692a3967d225efa59dccf.scope: Deactivated successfully.
Dec 13 02:37:31 np0005558241 conmon[104482]: conmon 73cf8c56453c041d166b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-73cf8c56453c041d166bd1c5ed4cb86677f3db762a5692a3967d225efa59dccf.scope/container/memory.events
Dec 13 02:37:31 np0005558241 podman[104466]: 2025-12-13 07:37:31.657179769 +0000 UTC m=+0.196464274 container died 73cf8c56453c041d166bd1c5ed4cb86677f3db762a5692a3967d225efa59dccf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 02:37:31 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9b5e7d10eca17e18baa62e854adbe15432deebc87085287a6b1b0ed1a4a60d92-merged.mount: Deactivated successfully.
Dec 13 02:37:31 np0005558241 podman[104466]: 2025-12-13 07:37:31.74279123 +0000 UTC m=+0.282075725 container remove 73cf8c56453c041d166bd1c5ed4cb86677f3db762a5692a3967d225efa59dccf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bhaskara, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 02:37:31 np0005558241 systemd[1]: libpod-conmon-73cf8c56453c041d166bd1c5ed4cb86677f3db762a5692a3967d225efa59dccf.scope: Deactivated successfully.
Dec 13 02:37:31 np0005558241 podman[104506]: 2025-12-13 07:37:31.968191538 +0000 UTC m=+0.075286747 container create 86b0be53ed1cd51abf0235db3cd30bac9a2a8916ed8a03ff5f5be9b0f434085d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 02:37:32 np0005558241 systemd[1]: Started libpod-conmon-86b0be53ed1cd51abf0235db3cd30bac9a2a8916ed8a03ff5f5be9b0f434085d.scope.
Dec 13 02:37:32 np0005558241 podman[104506]: 2025-12-13 07:37:31.93697625 +0000 UTC m=+0.044071509 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:37:32 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:37:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5d3836830828b5a61d126df42bad5e7eff53cb11768ed134bf71ca57ac3d1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:37:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5d3836830828b5a61d126df42bad5e7eff53cb11768ed134bf71ca57ac3d1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:37:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5d3836830828b5a61d126df42bad5e7eff53cb11768ed134bf71ca57ac3d1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:37:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5d3836830828b5a61d126df42bad5e7eff53cb11768ed134bf71ca57ac3d1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:37:32 np0005558241 podman[104506]: 2025-12-13 07:37:32.092489685 +0000 UTC m=+0.199584944 container init 86b0be53ed1cd51abf0235db3cd30bac9a2a8916ed8a03ff5f5be9b0f434085d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ptolemy, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 02:37:32 np0005558241 podman[104506]: 2025-12-13 07:37:32.108844876 +0000 UTC m=+0.215940085 container start 86b0be53ed1cd51abf0235db3cd30bac9a2a8916ed8a03ff5f5be9b0f434085d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ptolemy, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 02:37:32 np0005558241 podman[104506]: 2025-12-13 07:37:32.114100825 +0000 UTC m=+0.221196024 container attach 86b0be53ed1cd51abf0235db3cd30bac9a2a8916ed8a03ff5f5be9b0f434085d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ptolemy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 02:37:32 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 11.e scrub starts
Dec 13 02:37:32 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 11.e scrub ok
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]: {
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:    "0": [
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:        {
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "devices": [
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "/dev/loop3"
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            ],
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "lv_name": "ceph_lv0",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "lv_size": "21470642176",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "name": "ceph_lv0",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "tags": {
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.cluster_name": "ceph",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.crush_device_class": "",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.encrypted": "0",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.objectstore": "bluestore",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.osd_id": "0",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.type": "block",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.vdo": "0",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.with_tpm": "0"
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            },
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "type": "block",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "vg_name": "ceph_vg0"
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:        }
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:    ],
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:    "1": [
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:        {
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "devices": [
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "/dev/loop4"
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            ],
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "lv_name": "ceph_lv1",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "lv_size": "21470642176",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "name": "ceph_lv1",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "tags": {
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.cluster_name": "ceph",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.crush_device_class": "",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.encrypted": "0",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.objectstore": "bluestore",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.osd_id": "1",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.type": "block",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.vdo": "0",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.with_tpm": "0"
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            },
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "type": "block",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "vg_name": "ceph_vg1"
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:        }
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:    ],
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:    "2": [
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:        {
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "devices": [
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "/dev/loop5"
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            ],
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "lv_name": "ceph_lv2",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "lv_size": "21470642176",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "name": "ceph_lv2",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "tags": {
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.cluster_name": "ceph",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.crush_device_class": "",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.encrypted": "0",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.objectstore": "bluestore",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.osd_id": "2",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.type": "block",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.vdo": "0",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:                "ceph.with_tpm": "0"
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            },
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "type": "block",
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:            "vg_name": "ceph_vg2"
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:        }
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]:    ]
Dec 13 02:37:32 np0005558241 boring_ptolemy[104523]: }
Dec 13 02:37:32 np0005558241 systemd[1]: libpod-86b0be53ed1cd51abf0235db3cd30bac9a2a8916ed8a03ff5f5be9b0f434085d.scope: Deactivated successfully.
Dec 13 02:37:32 np0005558241 podman[104506]: 2025-12-13 07:37:32.470248037 +0000 UTC m=+0.577343236 container died 86b0be53ed1cd51abf0235db3cd30bac9a2a8916ed8a03ff5f5be9b0f434085d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 02:37:32 np0005558241 python3.9[104694]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:37:33 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9e5d3836830828b5a61d126df42bad5e7eff53cb11768ed134bf71ca57ac3d1a-merged.mount: Deactivated successfully.
Dec 13 02:37:33 np0005558241 podman[104506]: 2025-12-13 07:37:33.060492396 +0000 UTC m=+1.167587605 container remove 86b0be53ed1cd51abf0235db3cd30bac9a2a8916ed8a03ff5f5be9b0f434085d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ptolemy, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:37:33 np0005558241 systemd[1]: libpod-conmon-86b0be53ed1cd51abf0235db3cd30bac9a2a8916ed8a03ff5f5be9b0f434085d.scope: Deactivated successfully.
Dec 13 02:37:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v315: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:33 np0005558241 podman[104779]: 2025-12-13 07:37:33.565786729 +0000 UTC m=+0.044964560 container create 84b3d2e0a91cee6663b77862cdeb68988f8296fd6c8d031b5cb13a4514a4c614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:37:33 np0005558241 systemd[1]: Started libpod-conmon-84b3d2e0a91cee6663b77862cdeb68988f8296fd6c8d031b5cb13a4514a4c614.scope.
Dec 13 02:37:33 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:37:33 np0005558241 podman[104779]: 2025-12-13 07:37:33.547234279 +0000 UTC m=+0.026412130 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:37:33 np0005558241 podman[104779]: 2025-12-13 07:37:33.652408972 +0000 UTC m=+0.131586853 container init 84b3d2e0a91cee6663b77862cdeb68988f8296fd6c8d031b5cb13a4514a4c614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:37:33 np0005558241 podman[104779]: 2025-12-13 07:37:33.66201667 +0000 UTC m=+0.141194501 container start 84b3d2e0a91cee6663b77862cdeb68988f8296fd6c8d031b5cb13a4514a4c614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:37:33 np0005558241 podman[104779]: 2025-12-13 07:37:33.666575674 +0000 UTC m=+0.145753535 container attach 84b3d2e0a91cee6663b77862cdeb68988f8296fd6c8d031b5cb13a4514a4c614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_gauss, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 02:37:33 np0005558241 gallant_gauss[104885]: 167 167
Dec 13 02:37:33 np0005558241 systemd[1]: libpod-84b3d2e0a91cee6663b77862cdeb68988f8296fd6c8d031b5cb13a4514a4c614.scope: Deactivated successfully.
Dec 13 02:37:33 np0005558241 podman[104779]: 2025-12-13 07:37:33.667823472 +0000 UTC m=+0.147001293 container died 84b3d2e0a91cee6663b77862cdeb68988f8296fd6c8d031b5cb13a4514a4c614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_gauss, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 02:37:33 np0005558241 systemd[1]: var-lib-containers-storage-overlay-530f28cc287020bd7323275c1d6babf2b5b9abf73f101a9098e06406e96839df-merged.mount: Deactivated successfully.
Dec 13 02:37:33 np0005558241 podman[104779]: 2025-12-13 07:37:33.706661282 +0000 UTC m=+0.185839153 container remove 84b3d2e0a91cee6663b77862cdeb68988f8296fd6c8d031b5cb13a4514a4c614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_gauss, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:37:33 np0005558241 systemd[1]: libpod-conmon-84b3d2e0a91cee6663b77862cdeb68988f8296fd6c8d031b5cb13a4514a4c614.scope: Deactivated successfully.
Dec 13 02:37:33 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Dec 13 02:37:33 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Dec 13 02:37:33 np0005558241 podman[104932]: 2025-12-13 07:37:33.886721593 +0000 UTC m=+0.049814290 container create bd01fce52700f5f829c85857c1a829982de49d331bdefc67f9e4048144296684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldberg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:37:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:37:33 np0005558241 systemd[1]: Started libpod-conmon-bd01fce52700f5f829c85857c1a829982de49d331bdefc67f9e4048144296684.scope.
Dec 13 02:37:33 np0005558241 podman[104932]: 2025-12-13 07:37:33.867642521 +0000 UTC m=+0.030735198 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:37:33 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:37:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4d16be928bc682d879c6b6f5a7a1cbee925e479bf33757462c262f7ee799c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:37:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4d16be928bc682d879c6b6f5a7a1cbee925e479bf33757462c262f7ee799c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:37:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4d16be928bc682d879c6b6f5a7a1cbee925e479bf33757462c262f7ee799c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:37:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4d16be928bc682d879c6b6f5a7a1cbee925e479bf33757462c262f7ee799c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:37:34 np0005558241 podman[104932]: 2025-12-13 07:37:34.002876396 +0000 UTC m=+0.165969103 container init bd01fce52700f5f829c85857c1a829982de49d331bdefc67f9e4048144296684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:37:34 np0005558241 podman[104932]: 2025-12-13 07:37:34.020520576 +0000 UTC m=+0.183613233 container start bd01fce52700f5f829c85857c1a829982de49d331bdefc67f9e4048144296684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 02:37:34 np0005558241 podman[104932]: 2025-12-13 07:37:34.025335205 +0000 UTC m=+0.188427892 container attach bd01fce52700f5f829c85857c1a829982de49d331bdefc67f9e4048144296684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 02:37:34 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Dec 13 02:37:34 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Dec 13 02:37:34 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Dec 13 02:37:34 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Dec 13 02:37:34 np0005558241 lvm[105180]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:37:34 np0005558241 lvm[105178]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:37:34 np0005558241 lvm[105180]: VG ceph_vg1 finished
Dec 13 02:37:34 np0005558241 lvm[105178]: VG ceph_vg0 finished
Dec 13 02:37:34 np0005558241 lvm[105181]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:37:34 np0005558241 lvm[105181]: VG ceph_vg2 finished
Dec 13 02:37:34 np0005558241 clever_goldberg[104972]: {}
Dec 13 02:37:34 np0005558241 systemd[1]: libpod-bd01fce52700f5f829c85857c1a829982de49d331bdefc67f9e4048144296684.scope: Deactivated successfully.
Dec 13 02:37:34 np0005558241 systemd[1]: libpod-bd01fce52700f5f829c85857c1a829982de49d331bdefc67f9e4048144296684.scope: Consumed 1.462s CPU time.
Dec 13 02:37:34 np0005558241 podman[104932]: 2025-12-13 07:37:34.912182916 +0000 UTC m=+1.075275573 container died bd01fce52700f5f829c85857c1a829982de49d331bdefc67f9e4048144296684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 02:37:34 np0005558241 python3.9[105170]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 13 02:37:35 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Dec 13 02:37:35 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Dec 13 02:37:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v316: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:36 np0005558241 python3.9[105346]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 13 02:37:36 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2d4d16be928bc682d879c6b6f5a7a1cbee925e479bf33757462c262f7ee799c5-merged.mount: Deactivated successfully.
Dec 13 02:37:36 np0005558241 podman[104932]: 2025-12-13 07:37:36.344505101 +0000 UTC m=+2.507597798 container remove bd01fce52700f5f829c85857c1a829982de49d331bdefc67f9e4048144296684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:37:36 np0005558241 systemd[1]: libpod-conmon-bd01fce52700f5f829c85857c1a829982de49d331bdefc67f9e4048144296684.scope: Deactivated successfully.
Dec 13 02:37:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:37:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:37:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:37:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:37:36 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Dec 13 02:37:36 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Dec 13 02:37:36 np0005558241 python3.9[105526]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:37:36 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:37:36 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:37:37 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Dec 13 02:37:37 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Dec 13 02:37:37 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Dec 13 02:37:37 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Dec 13 02:37:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v317: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:37 np0005558241 python3.9[105678]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 13 02:37:37 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Dec 13 02:37:37 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Dec 13 02:37:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:37:39 np0005558241 python3.9[105830]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:37:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v318: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:39 np0005558241 python3.9[105982]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:37:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:37:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:37:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:37:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:37:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:37:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:37:40 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Dec 13 02:37:40 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Dec 13 02:37:41 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Dec 13 02:37:41 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Dec 13 02:37:41 np0005558241 python3.9[106060]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:37:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v319: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:42 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Dec 13 02:37:42 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Dec 13 02:37:42 np0005558241 python3.9[106212]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:37:43 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Dec 13 02:37:43 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Dec 13 02:37:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v320: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:43 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Dec 13 02:37:43 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Dec 13 02:37:43 np0005558241 python3.9[106366]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 13 02:37:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:37:44 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Dec 13 02:37:44 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Dec 13 02:37:44 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Dec 13 02:37:44 np0005558241 python3.9[106519]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 13 02:37:44 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Dec 13 02:37:45 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Dec 13 02:37:45 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Dec 13 02:37:45 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec 13 02:37:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v321: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:45 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec 13 02:37:45 np0005558241 python3.9[106672]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 02:37:46 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Dec 13 02:37:46 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Dec 13 02:37:46 np0005558241 python3.9[106824]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 13 02:37:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v322: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:47 np0005558241 python3.9[106976]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:37:47 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.d scrub starts
Dec 13 02:37:47 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.d scrub ok
Dec 13 02:37:48 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec 13 02:37:48 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec 13 02:37:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:37:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v323: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:50 np0005558241 python3.9[107129]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:37:50 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Dec 13 02:37:50 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Dec 13 02:37:50 np0005558241 python3.9[107281]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:37:51 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec 13 02:37:51 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec 13 02:37:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v324: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:51 np0005558241 python3.9[107359]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:37:52 np0005558241 python3.9[107511]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:37:53 np0005558241 python3.9[107589]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:37:53 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Dec 13 02:37:53 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Dec 13 02:37:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v325: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:53 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec 13 02:37:53 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec 13 02:37:54 np0005558241 python3.9[107741]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:37:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:37:54 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.f scrub starts
Dec 13 02:37:54 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.f scrub ok
Dec 13 02:37:54 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Dec 13 02:37:54 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Dec 13 02:37:55 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.d scrub starts
Dec 13 02:37:55 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.d scrub ok
Dec 13 02:37:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v326: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:56 np0005558241 python3.9[107892]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:37:56 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec 13 02:37:56 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec 13 02:37:57 np0005558241 python3.9[108044]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 13 02:37:57 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Dec 13 02:37:57 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Dec 13 02:37:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v327: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:57 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.f scrub starts
Dec 13 02:37:57 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.f scrub ok
Dec 13 02:37:58 np0005558241 python3.9[108194]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:37:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:37:59 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.e scrub starts
Dec 13 02:37:59 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 10.e scrub ok
Dec 13 02:37:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v328: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:37:59 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec 13 02:37:59 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec 13 02:37:59 np0005558241 python3.9[108346]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:37:59 np0005558241 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 13 02:38:00 np0005558241 systemd[1]: tuned.service: Deactivated successfully.
Dec 13 02:38:00 np0005558241 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 13 02:38:00 np0005558241 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 13 02:38:00 np0005558241 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 13 02:38:01 np0005558241 python3.9[108508]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 13 02:38:01 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Dec 13 02:38:01 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Dec 13 02:38:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v329: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v330: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:38:04 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Dec 13 02:38:04 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Dec 13 02:38:04 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec 13 02:38:04 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec 13 02:38:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v331: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:38:06.041193) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611486041665, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7348, "num_deletes": 251, "total_data_size": 9733193, "memory_usage": 9910192, "flush_reason": "Manual Compaction"}
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611486137910, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7831404, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 153, "largest_seqno": 7498, "table_properties": {"data_size": 7803386, "index_size": 18559, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 77484, "raw_average_key_size": 23, "raw_value_size": 7738627, "raw_average_value_size": 2324, "num_data_blocks": 814, "num_entries": 3329, "num_filter_entries": 3329, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611049, "oldest_key_time": 1765611049, "file_creation_time": 1765611486, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 96770 microseconds, and 30251 cpu microseconds.
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:38:06.137979) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7831404 bytes OK
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:38:06.138003) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:38:06.139565) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:38:06.139578) EVENT_LOG_v1 {"time_micros": 1765611486139574, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:38:06.139608) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9700982, prev total WAL file size 9700982, number of live WAL files 2.
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:38:06.142174) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7647KB) 13(59KB) 8(1944B)]
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611486142295, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7894584, "oldest_snapshot_seqno": -1}
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3157 keys, 7846831 bytes, temperature: kUnknown
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611486210067, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7846831, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7819191, "index_size": 18634, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7941, "raw_key_size": 75965, "raw_average_key_size": 24, "raw_value_size": 7755680, "raw_average_value_size": 2456, "num_data_blocks": 819, "num_entries": 3157, "num_filter_entries": 3157, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765611486, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:38:06.211318) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7846831 bytes
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:38:06.213540) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 116.1 rd, 115.4 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.5, 0.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3447, records dropped: 290 output_compression: NoCompression
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:38:06.213591) EVENT_LOG_v1 {"time_micros": 1765611486213560, "job": 4, "event": "compaction_finished", "compaction_time_micros": 68026, "compaction_time_cpu_micros": 17750, "output_level": 6, "num_output_files": 1, "total_output_size": 7846831, "num_input_records": 3447, "num_output_records": 3157, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611486215319, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611486215512, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611486215754, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 13 02:38:06 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:38:06.141953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:38:06 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Dec 13 02:38:06 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Dec 13 02:38:06 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Dec 13 02:38:06 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Dec 13 02:38:07 np0005558241 python3.9[108661]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:38:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v332: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:08 np0005558241 python3.9[108815]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:38:08 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 8.d scrub starts
Dec 13 02:38:08 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 8.d scrub ok
Dec 13 02:38:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:38:08
Dec 13 02:38:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:38:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:38:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'images', 'vms', '.mgr', '.rgw.root', 'backups']
Dec 13 02:38:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:38:09 np0005558241 systemd[1]: session-38.scope: Deactivated successfully.
Dec 13 02:38:09 np0005558241 systemd[1]: session-38.scope: Consumed 1min 11.098s CPU time.
Dec 13 02:38:09 np0005558241 systemd-logind[787]: Session 38 logged out. Waiting for processes to exit.
Dec 13 02:38:09 np0005558241 systemd-logind[787]: Removed session 38.
Dec 13 02:38:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:38:09 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Dec 13 02:38:09 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Dec 13 02:38:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v333: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:38:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:38:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:38:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:38:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:38:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:38:11 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec 13 02:38:11 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec 13 02:38:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v334: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:13 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Dec 13 02:38:13 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Dec 13 02:38:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v335: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:38:15 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Dec 13 02:38:15 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Dec 13 02:38:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v336: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:15 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Dec 13 02:38:15 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Dec 13 02:38:17 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Dec 13 02:38:17 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Dec 13 02:38:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v337: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:17 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Dec 13 02:38:17 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Dec 13 02:38:18 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Dec 13 02:38:18 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Dec 13 02:38:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:38:19 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Dec 13 02:38:19 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Dec 13 02:38:19 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Dec 13 02:38:19 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Dec 13 02:38:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:19 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Dec 13 02:38:19 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:38:20 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Dec 13 02:38:20 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Dec 13 02:38:20 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Dec 13 02:38:20 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Dec 13 02:38:21 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.d scrub starts
Dec 13 02:38:21 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.d scrub ok
Dec 13 02:38:21 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec 13 02:38:21 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec 13 02:38:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:21 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec 13 02:38:21 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec 13 02:38:22 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Dec 13 02:38:22 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Dec 13 02:38:22 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Dec 13 02:38:22 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Dec 13 02:38:23 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.b scrub starts
Dec 13 02:38:23 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.b scrub ok
Dec 13 02:38:23 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Dec 13 02:38:23 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Dec 13 02:38:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:38:24 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Dec 13 02:38:24 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Dec 13 02:38:25 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Dec 13 02:38:25 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Dec 13 02:38:25 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Dec 13 02:38:25 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Dec 13 02:38:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:26 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Dec 13 02:38:26 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Dec 13 02:38:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:28 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Dec 13 02:38:28 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Dec 13 02:38:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:38:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:30 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Dec 13 02:38:30 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Dec 13 02:38:31 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec 13 02:38:31 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec 13 02:38:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:32 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Dec 13 02:38:32 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Dec 13 02:38:33 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Dec 13 02:38:33 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Dec 13 02:38:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:38:35 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Dec 13 02:38:35 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Dec 13 02:38:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:38:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:38:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:38:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:38:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:38:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:38:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:38:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:38:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:38:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:38:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:38:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:38:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:37 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:38:37 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:38:37 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:38:37 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Dec 13 02:38:37 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Dec 13 02:38:37 np0005558241 podman[108985]: 2025-12-13 07:38:37.936310129 +0000 UTC m=+0.063394577 container create c5df89501f531383217359dc23c57b7c42ca00c740c2bb30a3bde8a9def18732 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 02:38:37 np0005558241 systemd[1]: Started libpod-conmon-c5df89501f531383217359dc23c57b7c42ca00c740c2bb30a3bde8a9def18732.scope.
Dec 13 02:38:38 np0005558241 podman[108985]: 2025-12-13 07:38:37.910406065 +0000 UTC m=+0.037490603 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:38:38 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:38:38 np0005558241 podman[108985]: 2025-12-13 07:38:38.042406467 +0000 UTC m=+0.169491005 container init c5df89501f531383217359dc23c57b7c42ca00c740c2bb30a3bde8a9def18732 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_moser, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:38:38 np0005558241 podman[108985]: 2025-12-13 07:38:38.057045061 +0000 UTC m=+0.184129549 container start c5df89501f531383217359dc23c57b7c42ca00c740c2bb30a3bde8a9def18732 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_moser, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:38:38 np0005558241 podman[108985]: 2025-12-13 07:38:38.062072496 +0000 UTC m=+0.189156984 container attach c5df89501f531383217359dc23c57b7c42ca00c740c2bb30a3bde8a9def18732 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_moser, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:38:38 np0005558241 confident_moser[109001]: 167 167
Dec 13 02:38:38 np0005558241 systemd[1]: libpod-c5df89501f531383217359dc23c57b7c42ca00c740c2bb30a3bde8a9def18732.scope: Deactivated successfully.
Dec 13 02:38:38 np0005558241 conmon[109001]: conmon c5df89501f5313832173 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5df89501f531383217359dc23c57b7c42ca00c740c2bb30a3bde8a9def18732.scope/container/memory.events
Dec 13 02:38:38 np0005558241 podman[108985]: 2025-12-13 07:38:38.067596003 +0000 UTC m=+0.194680461 container died c5df89501f531383217359dc23c57b7c42ca00c740c2bb30a3bde8a9def18732 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_moser, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:38:38 np0005558241 systemd[1]: var-lib-containers-storage-overlay-edf2b16d1e196dafb6a7eebe025b3602702326b5b786a4705e64407a5f795156-merged.mount: Deactivated successfully.
Dec 13 02:38:38 np0005558241 podman[108985]: 2025-12-13 07:38:38.121590386 +0000 UTC m=+0.248674844 container remove c5df89501f531383217359dc23c57b7c42ca00c740c2bb30a3bde8a9def18732 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:38:38 np0005558241 systemd[1]: libpod-conmon-c5df89501f531383217359dc23c57b7c42ca00c740c2bb30a3bde8a9def18732.scope: Deactivated successfully.
Dec 13 02:38:38 np0005558241 podman[109024]: 2025-12-13 07:38:38.357939102 +0000 UTC m=+0.071882768 container create 8eafbd538b211ffa77d7af5699adfcc7358d192820a6f0bc94232deb7d4b2486 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_noether, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 02:38:38 np0005558241 systemd[1]: Started libpod-conmon-8eafbd538b211ffa77d7af5699adfcc7358d192820a6f0bc94232deb7d4b2486.scope.
Dec 13 02:38:38 np0005558241 podman[109024]: 2025-12-13 07:38:38.330023998 +0000 UTC m=+0.043967714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:38:38 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:38:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/875e6f217dee375257d1ba63b1a71a162c0b13e32aab995ad04b9779d9b4fe9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:38:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/875e6f217dee375257d1ba63b1a71a162c0b13e32aab995ad04b9779d9b4fe9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:38:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/875e6f217dee375257d1ba63b1a71a162c0b13e32aab995ad04b9779d9b4fe9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:38:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/875e6f217dee375257d1ba63b1a71a162c0b13e32aab995ad04b9779d9b4fe9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:38:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/875e6f217dee375257d1ba63b1a71a162c0b13e32aab995ad04b9779d9b4fe9d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:38:38 np0005558241 podman[109024]: 2025-12-13 07:38:38.471747271 +0000 UTC m=+0.185690967 container init 8eafbd538b211ffa77d7af5699adfcc7358d192820a6f0bc94232deb7d4b2486 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:38:38 np0005558241 podman[109024]: 2025-12-13 07:38:38.484810876 +0000 UTC m=+0.198754502 container start 8eafbd538b211ffa77d7af5699adfcc7358d192820a6f0bc94232deb7d4b2486 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:38:38 np0005558241 podman[109024]: 2025-12-13 07:38:38.488722313 +0000 UTC m=+0.202665979 container attach 8eafbd538b211ffa77d7af5699adfcc7358d192820a6f0bc94232deb7d4b2486 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_noether, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Dec 13 02:38:39 np0005558241 upbeat_noether[109040]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:38:39 np0005558241 upbeat_noether[109040]: --> All data devices are unavailable
Dec 13 02:38:39 np0005558241 systemd-logind[787]: New session 39 of user zuul.
Dec 13 02:38:39 np0005558241 systemd[1]: Started Session 39 of User zuul.
Dec 13 02:38:39 np0005558241 systemd[1]: libpod-8eafbd538b211ffa77d7af5699adfcc7358d192820a6f0bc94232deb7d4b2486.scope: Deactivated successfully.
Dec 13 02:38:39 np0005558241 podman[109024]: 2025-12-13 07:38:39.123453544 +0000 UTC m=+0.837397430 container died 8eafbd538b211ffa77d7af5699adfcc7358d192820a6f0bc94232deb7d4b2486 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_noether, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:38:39 np0005558241 systemd[1]: var-lib-containers-storage-overlay-875e6f217dee375257d1ba63b1a71a162c0b13e32aab995ad04b9779d9b4fe9d-merged.mount: Deactivated successfully.
Dec 13 02:38:39 np0005558241 podman[109024]: 2025-12-13 07:38:39.190481581 +0000 UTC m=+0.904425247 container remove 8eafbd538b211ffa77d7af5699adfcc7358d192820a6f0bc94232deb7d4b2486 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 02:38:39 np0005558241 systemd[1]: libpod-conmon-8eafbd538b211ffa77d7af5699adfcc7358d192820a6f0bc94232deb7d4b2486.scope: Deactivated successfully.
Dec 13 02:38:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:38:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:39 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Dec 13 02:38:39 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Dec 13 02:38:39 np0005558241 podman[109191]: 2025-12-13 07:38:39.762659696 +0000 UTC m=+0.046214970 container create ce3b648c7e95f1ce957873dc18e011920462e214c07590a7a4cc148242f2c585 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 02:38:39 np0005558241 systemd[1]: Started libpod-conmon-ce3b648c7e95f1ce957873dc18e011920462e214c07590a7a4cc148242f2c585.scope.
Dec 13 02:38:39 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec 13 02:38:39 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec 13 02:38:39 np0005558241 podman[109191]: 2025-12-13 07:38:39.742796922 +0000 UTC m=+0.026352296 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:38:39 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:38:39 np0005558241 podman[109191]: 2025-12-13 07:38:39.857200226 +0000 UTC m=+0.140755520 container init ce3b648c7e95f1ce957873dc18e011920462e214c07590a7a4cc148242f2c585 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sutherland, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:38:39 np0005558241 podman[109191]: 2025-12-13 07:38:39.865549234 +0000 UTC m=+0.149104518 container start ce3b648c7e95f1ce957873dc18e011920462e214c07590a7a4cc148242f2c585 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sutherland, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 02:38:39 np0005558241 podman[109191]: 2025-12-13 07:38:39.869445631 +0000 UTC m=+0.153001005 container attach ce3b648c7e95f1ce957873dc18e011920462e214c07590a7a4cc148242f2c585 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 02:38:39 np0005558241 sad_sutherland[109254]: 167 167
Dec 13 02:38:39 np0005558241 systemd[1]: libpod-ce3b648c7e95f1ce957873dc18e011920462e214c07590a7a4cc148242f2c585.scope: Deactivated successfully.
Dec 13 02:38:39 np0005558241 podman[109191]: 2025-12-13 07:38:39.874388134 +0000 UTC m=+0.157943458 container died ce3b648c7e95f1ce957873dc18e011920462e214c07590a7a4cc148242f2c585 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sutherland, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 02:38:39 np0005558241 systemd[1]: var-lib-containers-storage-overlay-22e8a8bf581aa8e9f7701401a569bb05a1ac067c7b02c194f894205d9d67d1e9-merged.mount: Deactivated successfully.
Dec 13 02:38:39 np0005558241 podman[109191]: 2025-12-13 07:38:39.918344407 +0000 UTC m=+0.201899691 container remove ce3b648c7e95f1ce957873dc18e011920462e214c07590a7a4cc148242f2c585 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_sutherland, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 02:38:39 np0005558241 systemd[1]: libpod-conmon-ce3b648c7e95f1ce957873dc18e011920462e214c07590a7a4cc148242f2c585.scope: Deactivated successfully.
Dec 13 02:38:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:38:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:38:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:38:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:38:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:38:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:38:40 np0005558241 podman[109329]: 2025-12-13 07:38:40.120147603 +0000 UTC m=+0.064248739 container create ba1d6c8c0e008e2341736d3db687e345652c5e91d0d50232b461688860a7387a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_shirley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:38:40 np0005558241 systemd[1]: Started libpod-conmon-ba1d6c8c0e008e2341736d3db687e345652c5e91d0d50232b461688860a7387a.scope.
Dec 13 02:38:40 np0005558241 podman[109329]: 2025-12-13 07:38:40.087421279 +0000 UTC m=+0.031522425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:38:40 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:38:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c12e4ead1a9e7b439d91302bed3463b1162ff5b44b7ceb65d9539fea9354e60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:38:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c12e4ead1a9e7b439d91302bed3463b1162ff5b44b7ceb65d9539fea9354e60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:38:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c12e4ead1a9e7b439d91302bed3463b1162ff5b44b7ceb65d9539fea9354e60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:38:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c12e4ead1a9e7b439d91302bed3463b1162ff5b44b7ceb65d9539fea9354e60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:38:40 np0005558241 podman[109329]: 2025-12-13 07:38:40.237800878 +0000 UTC m=+0.181902104 container init ba1d6c8c0e008e2341736d3db687e345652c5e91d0d50232b461688860a7387a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 02:38:40 np0005558241 podman[109329]: 2025-12-13 07:38:40.2503518 +0000 UTC m=+0.194452956 container start ba1d6c8c0e008e2341736d3db687e345652c5e91d0d50232b461688860a7387a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:38:40 np0005558241 podman[109329]: 2025-12-13 07:38:40.258524933 +0000 UTC m=+0.202626159 container attach ba1d6c8c0e008e2341736d3db687e345652c5e91d0d50232b461688860a7387a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_shirley, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 02:38:40 np0005558241 python3.9[109323]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]: {
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:    "0": [
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:        {
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "devices": [
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "/dev/loop3"
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            ],
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "lv_name": "ceph_lv0",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "lv_size": "21470642176",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "name": "ceph_lv0",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "tags": {
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.cluster_name": "ceph",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.crush_device_class": "",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.encrypted": "0",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.objectstore": "bluestore",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.osd_id": "0",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.type": "block",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.vdo": "0",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.with_tpm": "0"
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            },
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "type": "block",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "vg_name": "ceph_vg0"
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:        }
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:    ],
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:    "1": [
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:        {
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "devices": [
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "/dev/loop4"
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            ],
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "lv_name": "ceph_lv1",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "lv_size": "21470642176",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "name": "ceph_lv1",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "tags": {
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.cluster_name": "ceph",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.crush_device_class": "",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.encrypted": "0",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.objectstore": "bluestore",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.osd_id": "1",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.type": "block",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.vdo": "0",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.with_tpm": "0"
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            },
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "type": "block",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "vg_name": "ceph_vg1"
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:        }
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:    ],
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:    "2": [
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:        {
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "devices": [
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "/dev/loop5"
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            ],
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "lv_name": "ceph_lv2",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "lv_size": "21470642176",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "name": "ceph_lv2",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "tags": {
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.cluster_name": "ceph",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.crush_device_class": "",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.encrypted": "0",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.objectstore": "bluestore",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.osd_id": "2",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.type": "block",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.vdo": "0",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:                "ceph.with_tpm": "0"
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            },
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "type": "block",
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:            "vg_name": "ceph_vg2"
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:        }
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]:    ]
Dec 13 02:38:40 np0005558241 zealous_shirley[109346]: }
Dec 13 02:38:40 np0005558241 systemd[1]: libpod-ba1d6c8c0e008e2341736d3db687e345652c5e91d0d50232b461688860a7387a.scope: Deactivated successfully.
Dec 13 02:38:40 np0005558241 podman[109329]: 2025-12-13 07:38:40.586572149 +0000 UTC m=+0.530673365 container died ba1d6c8c0e008e2341736d3db687e345652c5e91d0d50232b461688860a7387a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_shirley, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:38:40 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3c12e4ead1a9e7b439d91302bed3463b1162ff5b44b7ceb65d9539fea9354e60-merged.mount: Deactivated successfully.
Dec 13 02:38:40 np0005558241 podman[109329]: 2025-12-13 07:38:40.72177784 +0000 UTC m=+0.665878976 container remove ba1d6c8c0e008e2341736d3db687e345652c5e91d0d50232b461688860a7387a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_shirley, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:38:40 np0005558241 systemd[1]: libpod-conmon-ba1d6c8c0e008e2341736d3db687e345652c5e91d0d50232b461688860a7387a.scope: Deactivated successfully.
Dec 13 02:38:40 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec 13 02:38:40 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec 13 02:38:41 np0005558241 podman[109510]: 2025-12-13 07:38:41.295221537 +0000 UTC m=+0.030035078 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:38:41 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec 13 02:38:41 np0005558241 podman[109510]: 2025-12-13 07:38:41.468477165 +0000 UTC m=+0.203290666 container create 87c015ac6cc132bedc094dc97542c36b596408b5ac1c9d5b91a1d1326aabc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:38:41 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec 13 02:38:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:41 np0005558241 systemd[1]: Started libpod-conmon-87c015ac6cc132bedc094dc97542c36b596408b5ac1c9d5b91a1d1326aabc264.scope.
Dec 13 02:38:41 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:38:41 np0005558241 python3.9[109599]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 13 02:38:42 np0005558241 podman[109510]: 2025-12-13 07:38:42.004726147 +0000 UTC m=+0.739539738 container init 87c015ac6cc132bedc094dc97542c36b596408b5ac1c9d5b91a1d1326aabc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 02:38:42 np0005558241 podman[109510]: 2025-12-13 07:38:42.016611482 +0000 UTC m=+0.751425013 container start 87c015ac6cc132bedc094dc97542c36b596408b5ac1c9d5b91a1d1326aabc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Dec 13 02:38:42 np0005558241 podman[109510]: 2025-12-13 07:38:42.021990586 +0000 UTC m=+0.756804147 container attach 87c015ac6cc132bedc094dc97542c36b596408b5ac1c9d5b91a1d1326aabc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chatterjee, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 02:38:42 np0005558241 mystifying_chatterjee[109603]: 167 167
Dec 13 02:38:42 np0005558241 systemd[1]: libpod-87c015ac6cc132bedc094dc97542c36b596408b5ac1c9d5b91a1d1326aabc264.scope: Deactivated successfully.
Dec 13 02:38:42 np0005558241 podman[109510]: 2025-12-13 07:38:42.025809721 +0000 UTC m=+0.760623212 container died 87c015ac6cc132bedc094dc97542c36b596408b5ac1c9d5b91a1d1326aabc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chatterjee, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:38:42 np0005558241 systemd[1]: var-lib-containers-storage-overlay-86b413cf87494980b6c9ab28f6ab3968627661a2b35066bd2b7fd27a3f56acd2-merged.mount: Deactivated successfully.
Dec 13 02:38:42 np0005558241 podman[109510]: 2025-12-13 07:38:42.075947348 +0000 UTC m=+0.810760869 container remove 87c015ac6cc132bedc094dc97542c36b596408b5ac1c9d5b91a1d1326aabc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chatterjee, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 02:38:42 np0005558241 systemd[1]: libpod-conmon-87c015ac6cc132bedc094dc97542c36b596408b5ac1c9d5b91a1d1326aabc264.scope: Deactivated successfully.
Dec 13 02:38:42 np0005558241 podman[109652]: 2025-12-13 07:38:42.302174732 +0000 UTC m=+0.061214313 container create b9fdcd314ec1b072b8328ce9bb0f7c1e3bdba47c679b49a7982436f2c63d7c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_brahmagupta, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 02:38:42 np0005558241 systemd[1]: Started libpod-conmon-b9fdcd314ec1b072b8328ce9bb0f7c1e3bdba47c679b49a7982436f2c63d7c6c.scope.
Dec 13 02:38:42 np0005558241 podman[109652]: 2025-12-13 07:38:42.275802426 +0000 UTC m=+0.034842047 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:38:42 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:38:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1532a09e491176ce51acbbe6e8279f06f480a9d704b7e5b91c17b900573aaa4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:38:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1532a09e491176ce51acbbe6e8279f06f480a9d704b7e5b91c17b900573aaa4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:38:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1532a09e491176ce51acbbe6e8279f06f480a9d704b7e5b91c17b900573aaa4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:38:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1532a09e491176ce51acbbe6e8279f06f480a9d704b7e5b91c17b900573aaa4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:38:42 np0005558241 podman[109652]: 2025-12-13 07:38:42.413807007 +0000 UTC m=+0.172846608 container init b9fdcd314ec1b072b8328ce9bb0f7c1e3bdba47c679b49a7982436f2c63d7c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 02:38:42 np0005558241 podman[109652]: 2025-12-13 07:38:42.428462982 +0000 UTC m=+0.187502593 container start b9fdcd314ec1b072b8328ce9bb0f7c1e3bdba47c679b49a7982436f2c63d7c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:38:42 np0005558241 podman[109652]: 2025-12-13 07:38:42.433598289 +0000 UTC m=+0.192637880 container attach b9fdcd314ec1b072b8328ce9bb0f7c1e3bdba47c679b49a7982436f2c63d7c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_brahmagupta, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 02:38:42 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec 13 02:38:42 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec 13 02:38:42 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Dec 13 02:38:42 np0005558241 ceph-osd[87041]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Dec 13 02:38:43 np0005558241 python3.9[109811]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:38:43 np0005558241 lvm[109882]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:38:43 np0005558241 lvm[109882]: VG ceph_vg0 finished
Dec 13 02:38:43 np0005558241 lvm[109885]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:38:43 np0005558241 lvm[109885]: VG ceph_vg1 finished
Dec 13 02:38:43 np0005558241 lvm[109887]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:38:43 np0005558241 lvm[109887]: VG ceph_vg2 finished
Dec 13 02:38:43 np0005558241 admiring_brahmagupta[109710]: {}
Dec 13 02:38:43 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Dec 13 02:38:43 np0005558241 systemd[1]: libpod-b9fdcd314ec1b072b8328ce9bb0f7c1e3bdba47c679b49a7982436f2c63d7c6c.scope: Deactivated successfully.
Dec 13 02:38:43 np0005558241 podman[109652]: 2025-12-13 07:38:43.401836012 +0000 UTC m=+1.160875623 container died b9fdcd314ec1b072b8328ce9bb0f7c1e3bdba47c679b49a7982436f2c63d7c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 02:38:43 np0005558241 systemd[1]: libpod-b9fdcd314ec1b072b8328ce9bb0f7c1e3bdba47c679b49a7982436f2c63d7c6c.scope: Consumed 1.569s CPU time.
Dec 13 02:38:43 np0005558241 ceph-osd[88086]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Dec 13 02:38:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:43 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Dec 13 02:38:43 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Dec 13 02:38:44 np0005558241 python3.9[109976]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 13 02:38:44 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c1532a09e491176ce51acbbe6e8279f06f480a9d704b7e5b91c17b900573aaa4-merged.mount: Deactivated successfully.
Dec 13 02:38:44 np0005558241 podman[109652]: 2025-12-13 07:38:44.218905315 +0000 UTC m=+1.977944936 container remove b9fdcd314ec1b072b8328ce9bb0f7c1e3bdba47c679b49a7982436f2c63d7c6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_brahmagupta, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:38:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:38:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:38:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:38:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:38:44 np0005558241 systemd[1]: libpod-conmon-b9fdcd314ec1b072b8328ce9bb0f7c1e3bdba47c679b49a7982436f2c63d7c6c.scope: Deactivated successfully.
Dec 13 02:38:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:38:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:38:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:38:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:46 np0005558241 python3.9[110155]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:38:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:49 np0005558241 python3.9[110308]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 02:38:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:38:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:50 np0005558241 python3.9[110461]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:38:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:51 np0005558241 python3.9[110613]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 13 02:38:52 np0005558241 python3.9[110763]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:38:52 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Dec 13 02:38:52 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Dec 13 02:38:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:53 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Dec 13 02:38:53 np0005558241 python3.9[110921]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:38:53 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Dec 13 02:38:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:38:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:56 np0005558241 python3.9[111074]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:38:56 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Dec 13 02:38:56 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Dec 13 02:38:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:38:57 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec 13 02:38:57 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec 13 02:38:58 np0005558241 python3.9[111361]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 13 02:38:58 np0005558241 python3.9[111511]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:38:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:38:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:00 np0005558241 python3.9[111666]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:39:00 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Dec 13 02:39:01 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Dec 13 02:39:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:01 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.c scrub starts
Dec 13 02:39:02 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.c scrub ok
Dec 13 02:39:02 np0005558241 python3.9[111819]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:39:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:39:05 np0005558241 python3.9[111972]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:39:05 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec 13 02:39:05 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec 13 02:39:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:05 np0005558241 python3.9[112126]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec 13 02:39:06 np0005558241 systemd[1]: session-39.scope: Deactivated successfully.
Dec 13 02:39:06 np0005558241 systemd[1]: session-39.scope: Consumed 20.685s CPU time.
Dec 13 02:39:06 np0005558241 systemd-logind[787]: Session 39 logged out. Waiting for processes to exit.
Dec 13 02:39:06 np0005558241 systemd-logind[787]: Removed session 39.
Dec 13 02:39:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:39:08
Dec 13 02:39:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:39:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:39:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'default.rgw.meta', 'images']
Dec 13 02:39:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:39:09 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Dec 13 02:39:09 np0005558241 ceph-osd[89221]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Dec 13 02:39:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:39:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:39:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:39:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:39:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:39:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:39:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:39:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:39:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:18 np0005558241 systemd-logind[787]: New session 40 of user zuul.
Dec 13 02:39:18 np0005558241 systemd[1]: Started Session 40 of User zuul.
Dec 13 02:39:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:39:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:39:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:39:20 np0005558241 python3.9[112304]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:39:21 np0005558241 python3.9[112458]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:39:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:22 np0005558241 python3.9[112651]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:39:23 np0005558241 systemd[1]: session-40.scope: Deactivated successfully.
Dec 13 02:39:23 np0005558241 systemd[1]: session-40.scope: Consumed 2.810s CPU time.
Dec 13 02:39:23 np0005558241 systemd-logind[787]: Session 40 logged out. Waiting for processes to exit.
Dec 13 02:39:23 np0005558241 systemd-logind[787]: Removed session 40.
Dec 13 02:39:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:39:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:39:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:30 np0005558241 systemd-logind[787]: New session 41 of user zuul.
Dec 13 02:39:30 np0005558241 systemd[1]: Started Session 41 of User zuul.
Dec 13 02:39:31 np0005558241 python3.9[112830]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:39:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:32 np0005558241 python3.9[112984]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:39:33 np0005558241 python3.9[113140]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:39:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:39:34 np0005558241 python3.9[113224]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:39:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:36 np0005558241 python3.9[113377]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:39:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:37 np0005558241 python3.9[113572]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:39:38 np0005558241 python3.9[113724]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:39:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:39:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:39 np0005558241 python3.9[113889]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:39:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:39:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:39:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:39:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:39:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:39:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:39:40 np0005558241 python3.9[113967]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:39:41 np0005558241 python3.9[114119]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:39:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:41 np0005558241 python3.9[114197]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:39:42 np0005558241 python3.9[114349]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:39:43 np0005558241 python3.9[114501]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:39:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:44 np0005558241 python3.9[114653]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:39:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:39:44 np0005558241 python3.9[114855]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:39:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:39:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:39:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:39:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:39:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:39:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:39:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:39:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:39:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:39:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:39:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:39:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:39:45 np0005558241 podman[115102]: 2025-12-13 07:39:45.584705862 +0000 UTC m=+0.052231931 container create 2b1915707691f93d070792975af3fef91b01882dc05daa333819db547550facf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hypatia, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 02:39:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:45 np0005558241 systemd[1]: Started libpod-conmon-2b1915707691f93d070792975af3fef91b01882dc05daa333819db547550facf.scope.
Dec 13 02:39:45 np0005558241 podman[115102]: 2025-12-13 07:39:45.558695564 +0000 UTC m=+0.026221633 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:39:45 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:39:45 np0005558241 podman[115102]: 2025-12-13 07:39:45.687937959 +0000 UTC m=+0.155464068 container init 2b1915707691f93d070792975af3fef91b01882dc05daa333819db547550facf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hypatia, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 02:39:45 np0005558241 podman[115102]: 2025-12-13 07:39:45.698317349 +0000 UTC m=+0.165843448 container start 2b1915707691f93d070792975af3fef91b01882dc05daa333819db547550facf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 02:39:45 np0005558241 podman[115102]: 2025-12-13 07:39:45.70258531 +0000 UTC m=+0.170111409 container attach 2b1915707691f93d070792975af3fef91b01882dc05daa333819db547550facf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hypatia, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:39:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:39:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:39:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:39:45 np0005558241 naughty_hypatia[115119]: 167 167
Dec 13 02:39:45 np0005558241 systemd[1]: libpod-2b1915707691f93d070792975af3fef91b01882dc05daa333819db547550facf.scope: Deactivated successfully.
Dec 13 02:39:45 np0005558241 podman[115102]: 2025-12-13 07:39:45.712096238 +0000 UTC m=+0.179622317 container died 2b1915707691f93d070792975af3fef91b01882dc05daa333819db547550facf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hypatia, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:39:45 np0005558241 python3.9[115090]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:39:45 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1835626f28469f80c40b820e185761a441e36b61f1f60f9e9693e8456e53bd2f-merged.mount: Deactivated successfully.
Dec 13 02:39:45 np0005558241 podman[115102]: 2025-12-13 07:39:45.770486698 +0000 UTC m=+0.238012787 container remove 2b1915707691f93d070792975af3fef91b01882dc05daa333819db547550facf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 02:39:45 np0005558241 systemd[1]: libpod-conmon-2b1915707691f93d070792975af3fef91b01882dc05daa333819db547550facf.scope: Deactivated successfully.
Dec 13 02:39:45 np0005558241 podman[115145]: 2025-12-13 07:39:45.992890727 +0000 UTC m=+0.074023448 container create e2de8843944193b2928420b15600d8c34c700850d268333853aa8783f49b3b96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 02:39:46 np0005558241 podman[115145]: 2025-12-13 07:39:45.960402741 +0000 UTC m=+0.041535522 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:39:46 np0005558241 systemd[1]: Started libpod-conmon-e2de8843944193b2928420b15600d8c34c700850d268333853aa8783f49b3b96.scope.
Dec 13 02:39:46 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:39:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ef38c0f42fd1911ad87711f185b80f87ff4fa16655ba981f639c9a08fa6395/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:39:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ef38c0f42fd1911ad87711f185b80f87ff4fa16655ba981f639c9a08fa6395/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:39:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ef38c0f42fd1911ad87711f185b80f87ff4fa16655ba981f639c9a08fa6395/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:39:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ef38c0f42fd1911ad87711f185b80f87ff4fa16655ba981f639c9a08fa6395/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:39:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ef38c0f42fd1911ad87711f185b80f87ff4fa16655ba981f639c9a08fa6395/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:39:46 np0005558241 podman[115145]: 2025-12-13 07:39:46.165990503 +0000 UTC m=+0.247123224 container init e2de8843944193b2928420b15600d8c34c700850d268333853aa8783f49b3b96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 02:39:46 np0005558241 podman[115145]: 2025-12-13 07:39:46.184430223 +0000 UTC m=+0.265562954 container start e2de8843944193b2928420b15600d8c34c700850d268333853aa8783f49b3b96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 02:39:46 np0005558241 podman[115145]: 2025-12-13 07:39:46.188462528 +0000 UTC m=+0.269595219 container attach e2de8843944193b2928420b15600d8c34c700850d268333853aa8783f49b3b96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:39:46 np0005558241 ecstatic_mirzakhani[115162]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:39:46 np0005558241 ecstatic_mirzakhani[115162]: --> All data devices are unavailable
Dec 13 02:39:46 np0005558241 systemd[1]: libpod-e2de8843944193b2928420b15600d8c34c700850d268333853aa8783f49b3b96.scope: Deactivated successfully.
Dec 13 02:39:46 np0005558241 podman[115145]: 2025-12-13 07:39:46.778419566 +0000 UTC m=+0.859552277 container died e2de8843944193b2928420b15600d8c34c700850d268333853aa8783f49b3b96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:39:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay-55ef38c0f42fd1911ad87711f185b80f87ff4fa16655ba981f639c9a08fa6395-merged.mount: Deactivated successfully.
Dec 13 02:39:46 np0005558241 podman[115145]: 2025-12-13 07:39:46.826801745 +0000 UTC m=+0.907934436 container remove e2de8843944193b2928420b15600d8c34c700850d268333853aa8783f49b3b96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:39:46 np0005558241 systemd[1]: libpod-conmon-e2de8843944193b2928420b15600d8c34c700850d268333853aa8783f49b3b96.scope: Deactivated successfully.
Dec 13 02:39:47 np0005558241 podman[115280]: 2025-12-13 07:39:47.354697227 +0000 UTC m=+0.055751722 container create 92458dfe810ebd6f930425207ed84506c86095307fb1f15878dc0a4e02f55a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_haslett, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:39:47 np0005558241 systemd[1]: Started libpod-conmon-92458dfe810ebd6f930425207ed84506c86095307fb1f15878dc0a4e02f55a1c.scope.
Dec 13 02:39:47 np0005558241 podman[115280]: 2025-12-13 07:39:47.333416713 +0000 UTC m=+0.034471218 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:39:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:39:47 np0005558241 podman[115280]: 2025-12-13 07:39:47.457491883 +0000 UTC m=+0.158546468 container init 92458dfe810ebd6f930425207ed84506c86095307fb1f15878dc0a4e02f55a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_haslett, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 02:39:47 np0005558241 podman[115280]: 2025-12-13 07:39:47.470786509 +0000 UTC m=+0.171841004 container start 92458dfe810ebd6f930425207ed84506c86095307fb1f15878dc0a4e02f55a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_haslett, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:39:47 np0005558241 podman[115280]: 2025-12-13 07:39:47.475152412 +0000 UTC m=+0.176206947 container attach 92458dfe810ebd6f930425207ed84506c86095307fb1f15878dc0a4e02f55a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:39:47 np0005558241 loving_haslett[115296]: 167 167
Dec 13 02:39:47 np0005558241 systemd[1]: libpod-92458dfe810ebd6f930425207ed84506c86095307fb1f15878dc0a4e02f55a1c.scope: Deactivated successfully.
Dec 13 02:39:47 np0005558241 conmon[115296]: conmon 92458dfe810ebd6f9304 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92458dfe810ebd6f930425207ed84506c86095307fb1f15878dc0a4e02f55a1c.scope/container/memory.events
Dec 13 02:39:47 np0005558241 podman[115280]: 2025-12-13 07:39:47.481633661 +0000 UTC m=+0.182688156 container died 92458dfe810ebd6f930425207ed84506c86095307fb1f15878dc0a4e02f55a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_haslett, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 02:39:47 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5db6b1763f8696d91ca661f52fed5029be104279403ca6f79b0fd246088a961a-merged.mount: Deactivated successfully.
Dec 13 02:39:47 np0005558241 podman[115280]: 2025-12-13 07:39:47.522869195 +0000 UTC m=+0.223923690 container remove 92458dfe810ebd6f930425207ed84506c86095307fb1f15878dc0a4e02f55a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_haslett, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 02:39:47 np0005558241 systemd[1]: libpod-conmon-92458dfe810ebd6f930425207ed84506c86095307fb1f15878dc0a4e02f55a1c.scope: Deactivated successfully.
Dec 13 02:39:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:47 np0005558241 podman[115396]: 2025-12-13 07:39:47.76970108 +0000 UTC m=+0.059937231 container create 76f04ade67b732c7b5a6462363448f144ec1056fe30fa8a778ed80e98d6d20a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 02:39:47 np0005558241 systemd[1]: Started libpod-conmon-76f04ade67b732c7b5a6462363448f144ec1056fe30fa8a778ed80e98d6d20a8.scope.
Dec 13 02:39:47 np0005558241 podman[115396]: 2025-12-13 07:39:47.744863503 +0000 UTC m=+0.035099674 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:39:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:39:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da22cbb165c0fc81bb13b18b3e638573caadbeaa740d7aa997a9f6964174b4a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:39:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da22cbb165c0fc81bb13b18b3e638573caadbeaa740d7aa997a9f6964174b4a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:39:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da22cbb165c0fc81bb13b18b3e638573caadbeaa740d7aa997a9f6964174b4a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:39:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da22cbb165c0fc81bb13b18b3e638573caadbeaa740d7aa997a9f6964174b4a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:39:47 np0005558241 podman[115396]: 2025-12-13 07:39:47.896941322 +0000 UTC m=+0.187177513 container init 76f04ade67b732c7b5a6462363448f144ec1056fe30fa8a778ed80e98d6d20a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chaum, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 02:39:47 np0005558241 podman[115396]: 2025-12-13 07:39:47.905662569 +0000 UTC m=+0.195898720 container start 76f04ade67b732c7b5a6462363448f144ec1056fe30fa8a778ed80e98d6d20a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 02:39:47 np0005558241 podman[115396]: 2025-12-13 07:39:47.90914082 +0000 UTC m=+0.199376981 container attach 76f04ade67b732c7b5a6462363448f144ec1056fe30fa8a778ed80e98d6d20a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:39:48 np0005558241 python3.9[115468]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:39:48 np0005558241 modest_chaum[115461]: {
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:    "0": [
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:        {
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "devices": [
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "/dev/loop3"
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            ],
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "lv_name": "ceph_lv0",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "lv_size": "21470642176",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "name": "ceph_lv0",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "tags": {
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.cluster_name": "ceph",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.crush_device_class": "",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.encrypted": "0",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.objectstore": "bluestore",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.osd_id": "0",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.type": "block",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.vdo": "0",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.with_tpm": "0"
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            },
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "type": "block",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "vg_name": "ceph_vg0"
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:        }
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:    ],
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:    "1": [
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:        {
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "devices": [
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "/dev/loop4"
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            ],
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "lv_name": "ceph_lv1",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "lv_size": "21470642176",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "name": "ceph_lv1",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "tags": {
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.cluster_name": "ceph",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.crush_device_class": "",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.encrypted": "0",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.objectstore": "bluestore",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.osd_id": "1",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.type": "block",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.vdo": "0",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.with_tpm": "0"
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            },
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "type": "block",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "vg_name": "ceph_vg1"
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:        }
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:    ],
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:    "2": [
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:        {
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "devices": [
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "/dev/loop5"
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            ],
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "lv_name": "ceph_lv2",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "lv_size": "21470642176",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "name": "ceph_lv2",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "tags": {
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.cluster_name": "ceph",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.crush_device_class": "",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.encrypted": "0",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.objectstore": "bluestore",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.osd_id": "2",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.type": "block",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.vdo": "0",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:                "ceph.with_tpm": "0"
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            },
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "type": "block",
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:            "vg_name": "ceph_vg2"
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:        }
Dec 13 02:39:48 np0005558241 modest_chaum[115461]:    ]
Dec 13 02:39:48 np0005558241 modest_chaum[115461]: }
Dec 13 02:39:48 np0005558241 systemd[1]: libpod-76f04ade67b732c7b5a6462363448f144ec1056fe30fa8a778ed80e98d6d20a8.scope: Deactivated successfully.
Dec 13 02:39:48 np0005558241 podman[115396]: 2025-12-13 07:39:48.306674497 +0000 UTC m=+0.596910618 container died 76f04ade67b732c7b5a6462363448f144ec1056fe30fa8a778ed80e98d6d20a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chaum, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:39:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-da22cbb165c0fc81bb13b18b3e638573caadbeaa740d7aa997a9f6964174b4a7-merged.mount: Deactivated successfully.
Dec 13 02:39:48 np0005558241 podman[115396]: 2025-12-13 07:39:48.362955492 +0000 UTC m=+0.653191613 container remove 76f04ade67b732c7b5a6462363448f144ec1056fe30fa8a778ed80e98d6d20a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 02:39:48 np0005558241 systemd[1]: libpod-conmon-76f04ade67b732c7b5a6462363448f144ec1056fe30fa8a778ed80e98d6d20a8.scope: Deactivated successfully.
Dec 13 02:39:48 np0005558241 podman[115700]: 2025-12-13 07:39:48.939578972 +0000 UTC m=+0.065807054 container create 27caf2fcb59908ecdc1141121b0b63c9acb201eb7b5061bf877e79a6bf1191f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_sanderson, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:39:48 np0005558241 systemd[1]: Started libpod-conmon-27caf2fcb59908ecdc1141121b0b63c9acb201eb7b5061bf877e79a6bf1191f0.scope.
Dec 13 02:39:49 np0005558241 podman[115700]: 2025-12-13 07:39:48.911243665 +0000 UTC m=+0.037471817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:39:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:39:49 np0005558241 podman[115700]: 2025-12-13 07:39:49.045611993 +0000 UTC m=+0.171840135 container init 27caf2fcb59908ecdc1141121b0b63c9acb201eb7b5061bf877e79a6bf1191f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_sanderson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 02:39:49 np0005558241 podman[115700]: 2025-12-13 07:39:49.057115842 +0000 UTC m=+0.183343934 container start 27caf2fcb59908ecdc1141121b0b63c9acb201eb7b5061bf877e79a6bf1191f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_sanderson, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:39:49 np0005558241 podman[115700]: 2025-12-13 07:39:49.061234139 +0000 UTC m=+0.187462221 container attach 27caf2fcb59908ecdc1141121b0b63c9acb201eb7b5061bf877e79a6bf1191f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_sanderson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:39:49 np0005558241 quizzical_sanderson[115717]: 167 167
Dec 13 02:39:49 np0005558241 systemd[1]: libpod-27caf2fcb59908ecdc1141121b0b63c9acb201eb7b5061bf877e79a6bf1191f0.scope: Deactivated successfully.
Dec 13 02:39:49 np0005558241 podman[115700]: 2025-12-13 07:39:49.064408202 +0000 UTC m=+0.190636294 container died 27caf2fcb59908ecdc1141121b0b63c9acb201eb7b5061bf877e79a6bf1191f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_sanderson, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 02:39:49 np0005558241 python3.9[115695]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:39:49 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4d2c8cf7cc1f4a0b7716db0dcdf3a7c113875bb75487bf934f1b38e442a9c906-merged.mount: Deactivated successfully.
Dec 13 02:39:49 np0005558241 podman[115700]: 2025-12-13 07:39:49.115843001 +0000 UTC m=+0.242071093 container remove 27caf2fcb59908ecdc1141121b0b63c9acb201eb7b5061bf877e79a6bf1191f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:39:49 np0005558241 systemd[1]: libpod-conmon-27caf2fcb59908ecdc1141121b0b63c9acb201eb7b5061bf877e79a6bf1191f0.scope: Deactivated successfully.
Dec 13 02:39:49 np0005558241 podman[115765]: 2025-12-13 07:39:49.311492434 +0000 UTC m=+0.051762479 container create 19161682d68e75aefdfb38dd6fd41d6ec374776922a4a27aa5bb7487afaa1f52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:39:49 np0005558241 systemd[1]: Started libpod-conmon-19161682d68e75aefdfb38dd6fd41d6ec374776922a4a27aa5bb7487afaa1f52.scope.
Dec 13 02:39:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:39:49 np0005558241 podman[115765]: 2025-12-13 07:39:49.282403197 +0000 UTC m=+0.022673232 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:39:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:39:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba3c54c2858b0bdef75cfeb3d58a6d8c0aae032a9e83b3d1782b995bf0734829/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:39:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba3c54c2858b0bdef75cfeb3d58a6d8c0aae032a9e83b3d1782b995bf0734829/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:39:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba3c54c2858b0bdef75cfeb3d58a6d8c0aae032a9e83b3d1782b995bf0734829/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:39:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba3c54c2858b0bdef75cfeb3d58a6d8c0aae032a9e83b3d1782b995bf0734829/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:39:49 np0005558241 podman[115765]: 2025-12-13 07:39:49.426798135 +0000 UTC m=+0.167068190 container init 19161682d68e75aefdfb38dd6fd41d6ec374776922a4a27aa5bb7487afaa1f52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 02:39:49 np0005558241 podman[115765]: 2025-12-13 07:39:49.447232757 +0000 UTC m=+0.187502792 container start 19161682d68e75aefdfb38dd6fd41d6ec374776922a4a27aa5bb7487afaa1f52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:39:49 np0005558241 podman[115765]: 2025-12-13 07:39:49.451327644 +0000 UTC m=+0.191597699 container attach 19161682d68e75aefdfb38dd6fd41d6ec374776922a4a27aa5bb7487afaa1f52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_pascal, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:39:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:49 np0005558241 python3.9[115923]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:39:50 np0005558241 lvm[116015]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:39:50 np0005558241 lvm[116015]: VG ceph_vg0 finished
Dec 13 02:39:50 np0005558241 lvm[116016]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:39:50 np0005558241 lvm[116016]: VG ceph_vg1 finished
Dec 13 02:39:50 np0005558241 lvm[116022]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:39:50 np0005558241 lvm[116022]: VG ceph_vg2 finished
Dec 13 02:39:50 np0005558241 eloquent_pascal[115781]: {}
Dec 13 02:39:50 np0005558241 systemd[1]: libpod-19161682d68e75aefdfb38dd6fd41d6ec374776922a4a27aa5bb7487afaa1f52.scope: Deactivated successfully.
Dec 13 02:39:50 np0005558241 systemd[1]: libpod-19161682d68e75aefdfb38dd6fd41d6ec374776922a4a27aa5bb7487afaa1f52.scope: Consumed 1.434s CPU time.
Dec 13 02:39:50 np0005558241 podman[115765]: 2025-12-13 07:39:50.334374861 +0000 UTC m=+1.074644916 container died 19161682d68e75aefdfb38dd6fd41d6ec374776922a4a27aa5bb7487afaa1f52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_pascal, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 02:39:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ba3c54c2858b0bdef75cfeb3d58a6d8c0aae032a9e83b3d1782b995bf0734829-merged.mount: Deactivated successfully.
Dec 13 02:39:50 np0005558241 python3.9[116155]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:39:50 np0005558241 podman[115765]: 2025-12-13 07:39:50.815761742 +0000 UTC m=+1.556031757 container remove 19161682d68e75aefdfb38dd6fd41d6ec374776922a4a27aa5bb7487afaa1f52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_pascal, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 02:39:50 np0005558241 systemd[1]: libpod-conmon-19161682d68e75aefdfb38dd6fd41d6ec374776922a4a27aa5bb7487afaa1f52.scope: Deactivated successfully.
Dec 13 02:39:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:39:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:39:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:39:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:39:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:51 np0005558241 python3.9[116333]: ansible-service_facts Invoked
Dec 13 02:39:51 np0005558241 network[116350]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 02:39:51 np0005558241 network[116351]: 'network-scripts' will be removed from distribution in near future.
Dec 13 02:39:51 np0005558241 network[116352]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 02:39:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:39:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:39:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:39:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:39:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:39:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:00 np0005558241 python3.9[116804]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:40:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:03 np0005558241 python3.9[116957]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 13 02:40:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:40:04 np0005558241 python3.9[117109]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:40:05 np0005558241 python3.9[117187]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:40:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:06 np0005558241 python3.9[117339]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:40:06 np0005558241 python3.9[117417]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:40:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:08 np0005558241 python3.9[117569]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:40:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:40:08
Dec 13 02:40:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:40:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:40:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['images', 'volumes', 'backups', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta']
Dec 13 02:40:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:40:09 np0005558241 python3.9[117721]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:40:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:40:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:40:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:40:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:40:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:40:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:40:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:40:10 np0005558241 python3.9[117805]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:40:11 np0005558241 systemd[1]: session-41.scope: Deactivated successfully.
Dec 13 02:40:11 np0005558241 systemd[1]: session-41.scope: Consumed 28.209s CPU time.
Dec 13 02:40:11 np0005558241 systemd-logind[787]: Session 41 logged out. Waiting for processes to exit.
Dec 13 02:40:11 np0005558241 systemd-logind[787]: Removed session 41.
Dec 13 02:40:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:40:14.387297) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611614387433, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1429, "num_deletes": 252, "total_data_size": 1991649, "memory_usage": 2028928, "flush_reason": "Manual Compaction"}
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611614431864, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1188472, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7499, "largest_seqno": 8927, "table_properties": {"data_size": 1183452, "index_size": 2160, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14447, "raw_average_key_size": 20, "raw_value_size": 1171785, "raw_average_value_size": 1686, "num_data_blocks": 101, "num_entries": 695, "num_filter_entries": 695, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611486, "oldest_key_time": 1765611486, "file_creation_time": 1765611614, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 44658 microseconds, and 7954 cpu microseconds.
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:40:14.431949) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1188472 bytes OK
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:40:14.431983) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:40:14.443296) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:40:14.443326) EVENT_LOG_v1 {"time_micros": 1765611614443315, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:40:14.443358) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1985090, prev total WAL file size 1985090, number of live WAL files 2.
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:40:14.444700) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1160KB)], [20(7662KB)]
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611614444930, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9035303, "oldest_snapshot_seqno": -1}
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3394 keys, 7085980 bytes, temperature: kUnknown
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611614576249, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7085980, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7059342, "index_size": 17056, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 81761, "raw_average_key_size": 24, "raw_value_size": 6994058, "raw_average_value_size": 2060, "num_data_blocks": 755, "num_entries": 3394, "num_filter_entries": 3394, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765611614, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:40:14.576670) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7085980 bytes
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:40:14.583865) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 68.7 rd, 53.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 7.5 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(13.6) write-amplify(6.0) OK, records in: 3852, records dropped: 458 output_compression: NoCompression
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:40:14.583904) EVENT_LOG_v1 {"time_micros": 1765611614583885, "job": 6, "event": "compaction_finished", "compaction_time_micros": 131475, "compaction_time_cpu_micros": 33245, "output_level": 6, "num_output_files": 1, "total_output_size": 7085980, "num_input_records": 3852, "num_output_records": 3394, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611614584645, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611614588007, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:40:14.444575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:40:14.588175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:40:14.588187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:40:14.588190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:40:14.588194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:40:14 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:40:14.588197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:40:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:18 np0005558241 systemd-logind[787]: New session 42 of user zuul.
Dec 13 02:40:18 np0005558241 systemd[1]: Started Session 42 of User zuul.
Dec 13 02:40:19 np0005558241 python3.9[117987]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:40:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:40:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:40:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:40:20 np0005558241 python3.9[118139]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:40:20 np0005558241 irqbalance[778]: Cannot change IRQ 28 affinity: Operation not permitted
Dec 13 02:40:20 np0005558241 irqbalance[778]: IRQ 28 affinity is now unmanaged
Dec 13 02:40:20 np0005558241 python3.9[118217]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:40:21 np0005558241 systemd[1]: session-42.scope: Deactivated successfully.
Dec 13 02:40:21 np0005558241 systemd[1]: session-42.scope: Consumed 1.940s CPU time.
Dec 13 02:40:21 np0005558241 systemd-logind[787]: Session 42 logged out. Waiting for processes to exit.
Dec 13 02:40:21 np0005558241 systemd-logind[787]: Removed session 42.
Dec 13 02:40:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:40:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:40:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:31 np0005558241 systemd-logind[787]: New session 43 of user zuul.
Dec 13 02:40:31 np0005558241 systemd[1]: Started Session 43 of User zuul.
Dec 13 02:40:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:32 np0005558241 python3.9[118395]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:40:33 np0005558241 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 02:40:33 np0005558241 systemd[1]: session-21.scope: Consumed 1min 47.872s CPU time.
Dec 13 02:40:33 np0005558241 systemd-logind[787]: Session 21 logged out. Waiting for processes to exit.
Dec 13 02:40:33 np0005558241 systemd-logind[787]: Removed session 21.
Dec 13 02:40:33 np0005558241 python3.9[118551]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:40:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:40:34 np0005558241 python3.9[118726]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:40:34 np0005558241 python3.9[118804]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.9o05q5o7 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:40:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:35 np0005558241 python3.9[118956]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:40:36 np0005558241 python3.9[119034]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.tg7z6ssi recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:40:37 np0005558241 python3.9[119186]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:40:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:38 np0005558241 python3.9[119338]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:40:38 np0005558241 python3.9[119416]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:40:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:40:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:39 np0005558241 python3.9[119568]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:40:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:40:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:40:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:40:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:40:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:40:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:40:40 np0005558241 python3.9[119646]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:40:41 np0005558241 python3.9[119798]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:40:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:41 np0005558241 python3.9[119950]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:40:42 np0005558241 python3.9[120028]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:40:43 np0005558241 python3.9[120180]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:40:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:43 np0005558241 python3.9[120258]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:40:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:40:44 np0005558241 python3.9[120410]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:40:44 np0005558241 systemd[1]: Reloading.
Dec 13 02:40:45 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:40:45 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:40:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:46 np0005558241 python3.9[120600]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:40:46 np0005558241 python3.9[120678]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:40:47 np0005558241 python3.9[120830]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:40:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:47 np0005558241 python3.9[120908]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:40:48 np0005558241 python3.9[121060]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:40:48 np0005558241 systemd[1]: Reloading.
Dec 13 02:40:49 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:40:49 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:40:49 np0005558241 systemd[1]: Starting Create netns directory...
Dec 13 02:40:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:40:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2001 writes, 9022 keys, 2001 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2001 writes, 2001 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2001 writes, 9022 keys, 2001 commit groups, 1.0 writes per commit group, ingest: 11.60 MB, 0.02 MB/s#012Interval WAL: 2001 writes, 2001 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     58.8      0.15              0.04         3    0.049       0      0       0.0       0.0#012  L6      1/0    6.76 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6     80.9     71.4      0.20              0.05         2    0.100    7299    748       0.0       0.0#012 Sum      1/0    6.76 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     46.6     66.0      0.35              0.09         5    0.069    7299    748       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7     47.4     67.0      0.34              0.09         4    0.085    7299    748       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     80.9     71.4      0.20              0.05         2    0.100    7299    748       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     60.8      0.14              0.04         2    0.071       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.3 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 308.00 MB usage: 621.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(38,529.62 KB,0.167926%) FilterBlock(6,28.61 KB,0.00907105%) IndexBlock(6,62.94 KB,0.0199553%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec 13 02:40:49 np0005558241 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 02:40:49 np0005558241 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 02:40:49 np0005558241 systemd[1]: Finished Create netns directory.
Dec 13 02:40:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:40:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:50 np0005558241 python3.9[121252]: ansible-ansible.builtin.service_facts Invoked
Dec 13 02:40:50 np0005558241 network[121269]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 02:40:50 np0005558241 network[121270]: 'network-scripts' will be removed from distribution in near future.
Dec 13 02:40:50 np0005558241 network[121271]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 02:40:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:40:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:40:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:40:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:40:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:40:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:40:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:40:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:40:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:40:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:40:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:40:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:40:52 np0005558241 podman[121437]: 2025-12-13 07:40:52.597954869 +0000 UTC m=+0.037603731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:40:53 np0005558241 podman[121437]: 2025-12-13 07:40:53.284367573 +0000 UTC m=+0.724016435 container create 1b7cf04078c8d86f3fdc11951a7971240c3bb61bfaafd1a98e4399d7cf0a86b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:40:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:40:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:40:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:40:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:53 np0005558241 systemd[1]: Started libpod-conmon-1b7cf04078c8d86f3fdc11951a7971240c3bb61bfaafd1a98e4399d7cf0a86b2.scope.
Dec 13 02:40:53 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:40:54 np0005558241 podman[121437]: 2025-12-13 07:40:54.255662507 +0000 UTC m=+1.695311349 container init 1b7cf04078c8d86f3fdc11951a7971240c3bb61bfaafd1a98e4399d7cf0a86b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_shannon, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 02:40:54 np0005558241 podman[121437]: 2025-12-13 07:40:54.268349801 +0000 UTC m=+1.707998633 container start 1b7cf04078c8d86f3fdc11951a7971240c3bb61bfaafd1a98e4399d7cf0a86b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:40:54 np0005558241 loving_shannon[121475]: 167 167
Dec 13 02:40:54 np0005558241 systemd[1]: libpod-1b7cf04078c8d86f3fdc11951a7971240c3bb61bfaafd1a98e4399d7cf0a86b2.scope: Deactivated successfully.
Dec 13 02:40:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:40:54 np0005558241 podman[121437]: 2025-12-13 07:40:54.601806565 +0000 UTC m=+2.041455417 container attach 1b7cf04078c8d86f3fdc11951a7971240c3bb61bfaafd1a98e4399d7cf0a86b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_shannon, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 02:40:54 np0005558241 podman[121437]: 2025-12-13 07:40:54.602531082 +0000 UTC m=+2.042179904 container died 1b7cf04078c8d86f3fdc11951a7971240c3bb61bfaafd1a98e4399d7cf0a86b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_shannon, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:40:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:55 np0005558241 systemd[1]: var-lib-containers-storage-overlay-fdd59215cf702fd5d90cbce18233f3463974f5e24c24c69a6ecf737241cd8a86-merged.mount: Deactivated successfully.
Dec 13 02:40:56 np0005558241 podman[121437]: 2025-12-13 07:40:56.853521353 +0000 UTC m=+4.293170175 container remove 1b7cf04078c8d86f3fdc11951a7971240c3bb61bfaafd1a98e4399d7cf0a86b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_shannon, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 02:40:56 np0005558241 systemd[1]: libpod-conmon-1b7cf04078c8d86f3fdc11951a7971240c3bb61bfaafd1a98e4399d7cf0a86b2.scope: Deactivated successfully.
Dec 13 02:40:57 np0005558241 podman[121591]: 2025-12-13 07:40:57.085052116 +0000 UTC m=+0.043883782 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:40:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:57 np0005558241 podman[121591]: 2025-12-13 07:40:57.871384642 +0000 UTC m=+0.830216218 container create d14d3b2db5b55406c6d7fc2fe13913a5c4296818b23c2b1fe47a1f645f967047 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lamarr, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 02:40:57 np0005558241 python3.9[121732]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:40:58 np0005558241 systemd[1]: Started libpod-conmon-d14d3b2db5b55406c6d7fc2fe13913a5c4296818b23c2b1fe47a1f645f967047.scope.
Dec 13 02:40:58 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:40:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/724e40522d6e2bf6615c41eafc8c5beb61adbb6e41424efcd8b047208cdde485/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:40:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/724e40522d6e2bf6615c41eafc8c5beb61adbb6e41424efcd8b047208cdde485/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:40:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/724e40522d6e2bf6615c41eafc8c5beb61adbb6e41424efcd8b047208cdde485/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:40:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/724e40522d6e2bf6615c41eafc8c5beb61adbb6e41424efcd8b047208cdde485/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:40:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/724e40522d6e2bf6615c41eafc8c5beb61adbb6e41424efcd8b047208cdde485/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:40:58 np0005558241 python3.9[121815]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:40:59 np0005558241 podman[121591]: 2025-12-13 07:40:59.04706199 +0000 UTC m=+2.005893616 container init d14d3b2db5b55406c6d7fc2fe13913a5c4296818b23c2b1fe47a1f645f967047 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 02:40:59 np0005558241 podman[121591]: 2025-12-13 07:40:59.05920301 +0000 UTC m=+2.018034586 container start d14d3b2db5b55406c6d7fc2fe13913a5c4296818b23c2b1fe47a1f645f967047 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 02:40:59 np0005558241 podman[121591]: 2025-12-13 07:40:59.400279186 +0000 UTC m=+2.359110852 container attach d14d3b2db5b55406c6d7fc2fe13913a5c4296818b23c2b1fe47a1f645f967047 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:40:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:40:59 np0005558241 python3.9[121969]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:40:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:40:59 np0005558241 amazing_lamarr[121737]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:40:59 np0005558241 amazing_lamarr[121737]: --> All data devices are unavailable
Dec 13 02:40:59 np0005558241 systemd[1]: libpod-d14d3b2db5b55406c6d7fc2fe13913a5c4296818b23c2b1fe47a1f645f967047.scope: Deactivated successfully.
Dec 13 02:40:59 np0005558241 podman[121591]: 2025-12-13 07:40:59.735955762 +0000 UTC m=+2.694787328 container died d14d3b2db5b55406c6d7fc2fe13913a5c4296818b23c2b1fe47a1f645f967047 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:41:00 np0005558241 systemd[1]: var-lib-containers-storage-overlay-724e40522d6e2bf6615c41eafc8c5beb61adbb6e41424efcd8b047208cdde485-merged.mount: Deactivated successfully.
Dec 13 02:41:00 np0005558241 python3.9[122146]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:41:00 np0005558241 podman[121591]: 2025-12-13 07:41:00.358252141 +0000 UTC m=+3.317083707 container remove d14d3b2db5b55406c6d7fc2fe13913a5c4296818b23c2b1fe47a1f645f967047 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lamarr, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Dec 13 02:41:00 np0005558241 systemd[1]: libpod-conmon-d14d3b2db5b55406c6d7fc2fe13913a5c4296818b23c2b1fe47a1f645f967047.scope: Deactivated successfully.
Dec 13 02:41:00 np0005558241 python3.9[122274]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:41:00 np0005558241 podman[122296]: 2025-12-13 07:41:00.92000895 +0000 UTC m=+0.094288408 container create 1f72988619b1e464ef676219b8954de2eeb70f7b904f5a75df527c9b32c9c92a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:41:00 np0005558241 podman[122296]: 2025-12-13 07:41:00.848296633 +0000 UTC m=+0.022576071 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:41:01 np0005558241 systemd[1]: Started libpod-conmon-1f72988619b1e464ef676219b8954de2eeb70f7b904f5a75df527c9b32c9c92a.scope.
Dec 13 02:41:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:41:01 np0005558241 podman[122296]: 2025-12-13 07:41:01.388925886 +0000 UTC m=+0.563205394 container init 1f72988619b1e464ef676219b8954de2eeb70f7b904f5a75df527c9b32c9c92a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 02:41:01 np0005558241 podman[122296]: 2025-12-13 07:41:01.40329288 +0000 UTC m=+0.577572358 container start 1f72988619b1e464ef676219b8954de2eeb70f7b904f5a75df527c9b32c9c92a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:41:01 np0005558241 podman[122296]: 2025-12-13 07:41:01.408831493 +0000 UTC m=+0.583110971 container attach 1f72988619b1e464ef676219b8954de2eeb70f7b904f5a75df527c9b32c9c92a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 02:41:01 np0005558241 epic_zhukovsky[122328]: 167 167
Dec 13 02:41:01 np0005558241 systemd[1]: libpod-1f72988619b1e464ef676219b8954de2eeb70f7b904f5a75df527c9b32c9c92a.scope: Deactivated successfully.
Dec 13 02:41:01 np0005558241 podman[122296]: 2025-12-13 07:41:01.416595299 +0000 UTC m=+0.590874767 container died 1f72988619b1e464ef676219b8954de2eeb70f7b904f5a75df527c9b32c9c92a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 02:41:01 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d9ce85c8dd9112f05bd9fdab491b536f95ca208b86072c49f6cda466abaf7348-merged.mount: Deactivated successfully.
Dec 13 02:41:01 np0005558241 podman[122296]: 2025-12-13 07:41:01.556430497 +0000 UTC m=+0.730709965 container remove 1f72988619b1e464ef676219b8954de2eeb70f7b904f5a75df527c9b32c9c92a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_zhukovsky, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:41:01 np0005558241 systemd[1]: libpod-conmon-1f72988619b1e464ef676219b8954de2eeb70f7b904f5a75df527c9b32c9c92a.scope: Deactivated successfully.
Dec 13 02:41:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:01 np0005558241 podman[122483]: 2025-12-13 07:41:01.775461141 +0000 UTC m=+0.076886172 container create 32a0756fd97254e3a4f5b0cef9ce6f4e41daaa0f62296f1bad594537a7e524c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_clarke, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 02:41:01 np0005558241 python3.9[122477]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 13 02:41:01 np0005558241 podman[122483]: 2025-12-13 07:41:01.734446469 +0000 UTC m=+0.035871520 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:41:01 np0005558241 systemd[1]: Starting Time & Date Service...
Dec 13 02:41:01 np0005558241 systemd[1]: Started libpod-conmon-32a0756fd97254e3a4f5b0cef9ce6f4e41daaa0f62296f1bad594537a7e524c7.scope.
Dec 13 02:41:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:41:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77406013b2e83fd1bbfa7da93cfc233bb17ba5036755c18786af01942d83f359/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:41:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77406013b2e83fd1bbfa7da93cfc233bb17ba5036755c18786af01942d83f359/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:41:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77406013b2e83fd1bbfa7da93cfc233bb17ba5036755c18786af01942d83f359/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:41:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77406013b2e83fd1bbfa7da93cfc233bb17ba5036755c18786af01942d83f359/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:41:01 np0005558241 systemd[1]: Started Time & Date Service.
Dec 13 02:41:01 np0005558241 podman[122483]: 2025-12-13 07:41:01.971052761 +0000 UTC m=+0.272477872 container init 32a0756fd97254e3a4f5b0cef9ce6f4e41daaa0f62296f1bad594537a7e524c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:41:01 np0005558241 podman[122483]: 2025-12-13 07:41:01.981356385 +0000 UTC m=+0.282781426 container start 32a0756fd97254e3a4f5b0cef9ce6f4e41daaa0f62296f1bad594537a7e524c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_clarke, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 02:41:01 np0005558241 podman[122483]: 2025-12-13 07:41:01.987085386 +0000 UTC m=+0.288510447 container attach 32a0756fd97254e3a4f5b0cef9ce6f4e41daaa0f62296f1bad594537a7e524c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]: {
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:    "0": [
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:        {
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "devices": [
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "/dev/loop3"
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            ],
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "lv_name": "ceph_lv0",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "lv_size": "21470642176",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "name": "ceph_lv0",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "tags": {
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.cluster_name": "ceph",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.crush_device_class": "",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.encrypted": "0",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.objectstore": "bluestore",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.osd_id": "0",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.type": "block",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.vdo": "0",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.with_tpm": "0"
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            },
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "type": "block",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "vg_name": "ceph_vg0"
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:        }
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:    ],
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:    "1": [
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:        {
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "devices": [
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "/dev/loop4"
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            ],
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "lv_name": "ceph_lv1",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "lv_size": "21470642176",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "name": "ceph_lv1",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "tags": {
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.cluster_name": "ceph",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.crush_device_class": "",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.encrypted": "0",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.objectstore": "bluestore",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.osd_id": "1",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.type": "block",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.vdo": "0",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.with_tpm": "0"
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            },
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "type": "block",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "vg_name": "ceph_vg1"
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:        }
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:    ],
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:    "2": [
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:        {
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "devices": [
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "/dev/loop5"
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            ],
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "lv_name": "ceph_lv2",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "lv_size": "21470642176",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "name": "ceph_lv2",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "tags": {
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.cluster_name": "ceph",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.crush_device_class": "",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.encrypted": "0",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.objectstore": "bluestore",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.osd_id": "2",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.type": "block",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.vdo": "0",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:                "ceph.with_tpm": "0"
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            },
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "type": "block",
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:            "vg_name": "ceph_vg2"
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:        }
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]:    ]
Dec 13 02:41:02 np0005558241 hopeful_clarke[122502]: }
Dec 13 02:41:02 np0005558241 systemd[1]: libpod-32a0756fd97254e3a4f5b0cef9ce6f4e41daaa0f62296f1bad594537a7e524c7.scope: Deactivated successfully.
Dec 13 02:41:02 np0005558241 podman[122483]: 2025-12-13 07:41:02.335753497 +0000 UTC m=+0.637178538 container died 32a0756fd97254e3a4f5b0cef9ce6f4e41daaa0f62296f1bad594537a7e524c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_clarke, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 02:41:02 np0005558241 python3.9[122675]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:41:03 np0005558241 systemd[1]: var-lib-containers-storage-overlay-77406013b2e83fd1bbfa7da93cfc233bb17ba5036755c18786af01942d83f359-merged.mount: Deactivated successfully.
Dec 13 02:41:03 np0005558241 podman[122483]: 2025-12-13 07:41:03.457043809 +0000 UTC m=+1.758468880 container remove 32a0756fd97254e3a4f5b0cef9ce6f4e41daaa0f62296f1bad594537a7e524c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_clarke, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 02:41:03 np0005558241 systemd[1]: libpod-conmon-32a0756fd97254e3a4f5b0cef9ce6f4e41daaa0f62296f1bad594537a7e524c7.scope: Deactivated successfully.
Dec 13 02:41:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:03 np0005558241 python3.9[122852]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:41:03 np0005558241 podman[122906]: 2025-12-13 07:41:03.973845024 +0000 UTC m=+0.078460500 container create 8ab8b5114d1649107faf06a926d7f96b499266250198ccd65a884c3a34c9e6b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 02:41:04 np0005558241 systemd[1]: Started libpod-conmon-8ab8b5114d1649107faf06a926d7f96b499266250198ccd65a884c3a34c9e6b7.scope.
Dec 13 02:41:04 np0005558241 podman[122906]: 2025-12-13 07:41:03.920773119 +0000 UTC m=+0.025388645 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:41:04 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:41:04 np0005558241 podman[122906]: 2025-12-13 07:41:04.068442219 +0000 UTC m=+0.173057775 container init 8ab8b5114d1649107faf06a926d7f96b499266250198ccd65a884c3a34c9e6b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_dirac, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:41:04 np0005558241 podman[122906]: 2025-12-13 07:41:04.077061661 +0000 UTC m=+0.181677127 container start 8ab8b5114d1649107faf06a926d7f96b499266250198ccd65a884c3a34c9e6b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_dirac, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 02:41:04 np0005558241 podman[122906]: 2025-12-13 07:41:04.080401013 +0000 UTC m=+0.185016499 container attach 8ab8b5114d1649107faf06a926d7f96b499266250198ccd65a884c3a34c9e6b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:41:04 np0005558241 thirsty_dirac[122956]: 167 167
Dec 13 02:41:04 np0005558241 systemd[1]: libpod-8ab8b5114d1649107faf06a926d7f96b499266250198ccd65a884c3a34c9e6b7.scope: Deactivated successfully.
Dec 13 02:41:04 np0005558241 podman[122906]: 2025-12-13 07:41:04.084647167 +0000 UTC m=+0.189262663 container died 8ab8b5114d1649107faf06a926d7f96b499266250198ccd65a884c3a34c9e6b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_dirac, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:41:04 np0005558241 systemd[1]: var-lib-containers-storage-overlay-047c09c5e73a99912e24e1ff64bb86e26bc2c169c3e458b1b6265a935908d16d-merged.mount: Deactivated successfully.
Dec 13 02:41:04 np0005558241 podman[122906]: 2025-12-13 07:41:04.140402018 +0000 UTC m=+0.245017504 container remove 8ab8b5114d1649107faf06a926d7f96b499266250198ccd65a884c3a34c9e6b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 13 02:41:04 np0005558241 systemd[1]: libpod-conmon-8ab8b5114d1649107faf06a926d7f96b499266250198ccd65a884c3a34c9e6b7.scope: Deactivated successfully.
Dec 13 02:41:04 np0005558241 python3.9[122996]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:41:04 np0005558241 podman[123007]: 2025-12-13 07:41:04.298169296 +0000 UTC m=+0.043172432 container create 74f7786b8875eadc5c77459715fcab5b2ae67c54bc43374064e31b53d0b68c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:41:04 np0005558241 systemd[1]: Started libpod-conmon-74f7786b8875eadc5c77459715fcab5b2ae67c54bc43374064e31b53d0b68c85.scope.
Dec 13 02:41:04 np0005558241 podman[123007]: 2025-12-13 07:41:04.278693247 +0000 UTC m=+0.023696363 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:41:04 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:41:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd12f82da58553077d38315a0607d0e87f235c5e3a9feacfd594445c9a6bfb3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:41:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd12f82da58553077d38315a0607d0e87f235c5e3a9feacfd594445c9a6bfb3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:41:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd12f82da58553077d38315a0607d0e87f235c5e3a9feacfd594445c9a6bfb3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:41:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd12f82da58553077d38315a0607d0e87f235c5e3a9feacfd594445c9a6bfb3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:41:04 np0005558241 podman[123007]: 2025-12-13 07:41:04.398488372 +0000 UTC m=+0.143491508 container init 74f7786b8875eadc5c77459715fcab5b2ae67c54bc43374064e31b53d0b68c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_turing, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:41:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:41:04 np0005558241 podman[123007]: 2025-12-13 07:41:04.406475759 +0000 UTC m=+0.151478875 container start 74f7786b8875eadc5c77459715fcab5b2ae67c54bc43374064e31b53d0b68c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_turing, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 02:41:04 np0005558241 podman[123007]: 2025-12-13 07:41:04.410663011 +0000 UTC m=+0.155666167 container attach 74f7786b8875eadc5c77459715fcab5b2ae67c54bc43374064e31b53d0b68c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 02:41:04 np0005558241 python3.9[123205]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:41:05 np0005558241 lvm[123288]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:41:05 np0005558241 lvm[123286]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:41:05 np0005558241 lvm[123288]: VG ceph_vg1 finished
Dec 13 02:41:05 np0005558241 lvm[123286]: VG ceph_vg0 finished
Dec 13 02:41:05 np0005558241 lvm[123306]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:41:05 np0005558241 lvm[123306]: VG ceph_vg2 finished
Dec 13 02:41:05 np0005558241 boring_turing[123033]: {}
Dec 13 02:41:05 np0005558241 systemd[1]: libpod-74f7786b8875eadc5c77459715fcab5b2ae67c54bc43374064e31b53d0b68c85.scope: Deactivated successfully.
Dec 13 02:41:05 np0005558241 systemd[1]: libpod-74f7786b8875eadc5c77459715fcab5b2ae67c54bc43374064e31b53d0b68c85.scope: Consumed 1.430s CPU time.
Dec 13 02:41:05 np0005558241 podman[123007]: 2025-12-13 07:41:05.328537185 +0000 UTC m=+1.073540301 container died 74f7786b8875eadc5c77459715fcab5b2ae67c54bc43374064e31b53d0b68c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_turing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:41:05 np0005558241 systemd[1]: var-lib-containers-storage-overlay-fd12f82da58553077d38315a0607d0e87f235c5e3a9feacfd594445c9a6bfb3f-merged.mount: Deactivated successfully.
Dec 13 02:41:05 np0005558241 podman[123007]: 2025-12-13 07:41:05.39385163 +0000 UTC m=+1.138854746 container remove 74f7786b8875eadc5c77459715fcab5b2ae67c54bc43374064e31b53d0b68c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_turing, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:41:05 np0005558241 systemd[1]: libpod-conmon-74f7786b8875eadc5c77459715fcab5b2ae67c54bc43374064e31b53d0b68c85.scope: Deactivated successfully.
Dec 13 02:41:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:41:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:41:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:41:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:41:05 np0005558241 python3.9[123337]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.zr4pyuje recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:41:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:06 np0005558241 python3.9[123527]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:41:06 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:41:06 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:41:06 np0005558241 python3.9[123605]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:41:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:07 np0005558241 python3.9[123757]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:41:08 np0005558241 python3[123910]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 13 02:41:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:41:08
Dec 13 02:41:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:41:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:41:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', '.rgw.root', 'default.rgw.log', 'images']
Dec 13 02:41:08 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:41:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:41:09 np0005558241 python3.9[124062]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:41:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:09 np0005558241 python3.9[124140]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:41:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:41:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:41:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:41:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:41:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:41:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:41:10 np0005558241 python3.9[124292]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:41:11 np0005558241 python3.9[124370]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:41:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:11 np0005558241 python3.9[124522]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:41:12 np0005558241 python3.9[124600]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:41:13 np0005558241 python3.9[124752]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:41:13 np0005558241 python3.9[124830]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:41:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:14 np0005558241 python3.9[124982]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:41:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:41:15 np0005558241 python3.9[125060]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:41:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:15 np0005558241 python3.9[125212]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:41:17 np0005558241 python3.9[125367]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:41:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:17 np0005558241 python3.9[125519]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:41:18 np0005558241 python3.9[125671]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:41:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:41:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:20 np0005558241 python3.9[125823]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:41:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:41:20 np0005558241 python3.9[125975]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 13 02:41:21 np0005558241 systemd[1]: session-43.scope: Deactivated successfully.
Dec 13 02:41:21 np0005558241 systemd[1]: session-43.scope: Consumed 34.107s CPU time.
Dec 13 02:41:21 np0005558241 systemd-logind[787]: Session 43 logged out. Waiting for processes to exit.
Dec 13 02:41:21 np0005558241 systemd-logind[787]: Removed session 43.
Dec 13 02:41:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:41:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:41:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:32 np0005558241 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 13 02:41:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:34 np0005558241 systemd-logind[787]: New session 44 of user zuul.
Dec 13 02:41:34 np0005558241 systemd[1]: Started Session 44 of User zuul.
Dec 13 02:41:34 np0005558241 python3.9[126158]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 13 02:41:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:35 np0005558241 python3.9[126310]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:41:36 np0005558241 python3.9[126464]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec 13 02:41:37 np0005558241 python3.9[126616]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.as07zxp_ follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:41:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:38 np0005558241 python3.9[126741]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.as07zxp_ mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765611697.0606284-44-37787356945462/.source.as07zxp_ _original_basename=.y5n_kdpd follow=False checksum=1f4e1c3a60532cd7eb23f957244f50a4f0b26266 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:41:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:39 np0005558241 python3.9[126893]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:41:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:41:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:41:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:41:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:41:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:41:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:41:40 np0005558241 ceph-mds[98037]: mds.beacon.cephfs.compute-0.gfdyct missed beacon ack from the monitors
Dec 13 02:41:40 np0005558241 python3.9[127045]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuiiurzJawdjQUeleljBiSOhGha2e1B2btKk1yfXMVN6SlAG7Fo+tpstAA0GHhdjs686GNYQ5oIzK/3xUtTv0Ld63YzgMVucWJKTjU8ChnIwAosyDYKaiVH2QPz1TXD3FZiOfMw5yLsK+Ha0vde4EmlqaVCGmbOvxvAAho+rthUql+33JjQkYxMCKLy/kTXHQ8BfZk4puxMSZG+LRCIdmFXZpGQCte+0L+T7WTjWfUlk4RhoD21anOaolpaE7eixUwz3blgpjOeJuvB4IU88ecjgUIWNPLhz7thzuYjBw7l8q7KkPAvt4lcEV1NG4U5a5AtgE7cZjejHF9H8vPQ+aSo7el+YImpcI6QeYfZIoinToUOql53fCcG2oF6xcRh8dRTTAspGHELURaoGIIjtYWC6K8ueDlP6lFT5IDFyOjY+U2zU8E+09kbE/LKHTixWVDGA7N4CNu1JzGUt13kg7YkVrP6VYIgeaqf5B3lgaxSn5NjcVB/9yUne0wihcMDtk=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDu6G0+tHL0ibqZrMcAzREzzy/eq1WSdRdm3gwyJVIYs#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPkz157vedTD2Vnzbot0Swfx31EDYyup+EGwSclOeVmGP8/QLCKwJ0UWdGUGTXpBBeBPh/usx8UND+o/lWLPhBo=#012 create=True mode=0644 path=/tmp/ansible.as07zxp_ state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:41:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).mds e4 check_health: resetting beacon timeouts due to mon delay (slow election?) of 1e+01 seconds
Dec 13 02:41:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:41:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:42 np0005558241 python3.9[127197]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.as07zxp_' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:41:42 np0005558241 python3.9[127351]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.as07zxp_ state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:41:43 np0005558241 systemd[1]: session-44.scope: Deactivated successfully.
Dec 13 02:41:43 np0005558241 systemd[1]: session-44.scope: Consumed 6.365s CPU time.
Dec 13 02:41:43 np0005558241 systemd-logind[787]: Session 44 logged out. Waiting for processes to exit.
Dec 13 02:41:43 np0005558241 systemd-logind[787]: Removed session 44.
Dec 13 02:41:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:41:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:49 np0005558241 systemd-logind[787]: New session 45 of user zuul.
Dec 13 02:41:49 np0005558241 systemd[1]: Started Session 45 of User zuul.
Dec 13 02:41:50 np0005558241 python3.9[127531]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:41:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:41:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:51 np0005558241 python3.9[127687]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 13 02:41:52 np0005558241 python3.9[127841]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:41:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:41:53.711822) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611713711891, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 939, "num_deletes": 251, "total_data_size": 1358968, "memory_usage": 1381496, "flush_reason": "Manual Compaction"}
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611713722028, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1346785, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8928, "largest_seqno": 9866, "table_properties": {"data_size": 1342126, "index_size": 2247, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 9622, "raw_average_key_size": 18, "raw_value_size": 1332885, "raw_average_value_size": 2598, "num_data_blocks": 105, "num_entries": 513, "num_filter_entries": 513, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611614, "oldest_key_time": 1765611614, "file_creation_time": 1765611713, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 10344 microseconds, and 3939 cpu microseconds.
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:41:53.722176) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1346785 bytes OK
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:41:53.722204) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:41:53.723829) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:41:53.723851) EVENT_LOG_v1 {"time_micros": 1765611713723845, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:41:53.723876) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1354472, prev total WAL file size 1354472, number of live WAL files 2.
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:41:53.724960) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1315KB)], [23(6919KB)]
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611713725058, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8432765, "oldest_snapshot_seqno": -1}
Dec 13 02:41:53 np0005558241 python3.9[127994]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3394 keys, 6718156 bytes, temperature: kUnknown
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611713786006, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6718156, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6692585, "index_size": 15990, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 82482, "raw_average_key_size": 24, "raw_value_size": 6628325, "raw_average_value_size": 1952, "num_data_blocks": 696, "num_entries": 3394, "num_filter_entries": 3394, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765611713, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:41:53.786322) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6718156 bytes
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:41:53.787898) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.1 rd, 110.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 6.8 +0.0 blob) out(6.4 +0.0 blob), read-write-amplify(11.2) write-amplify(5.0) OK, records in: 3907, records dropped: 513 output_compression: NoCompression
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:41:53.787926) EVENT_LOG_v1 {"time_micros": 1765611713787914, "job": 8, "event": "compaction_finished", "compaction_time_micros": 61062, "compaction_time_cpu_micros": 30833, "output_level": 6, "num_output_files": 1, "total_output_size": 6718156, "num_input_records": 3907, "num_output_records": 3394, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611713788617, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765611713790068, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:41:53.724773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:41:53.790262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:41:53.790269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:41:53.790271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:41:53.790273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:41:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:41:53.790274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:41:54 np0005558241 python3.9[128147]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:41:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:41:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:41:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:42:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:42:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 5509 writes, 24K keys, 5509 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5509 writes, 795 syncs, 6.93 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5509 writes, 24K keys, 5509 commit groups, 1.0 writes per commit group, ingest: 18.82 MB, 0.03 MB/s
Interval WAL: 5509 writes, 795 syncs, 6.93 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec 13 02:42:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:42:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:42:06 np0005558241 python3.9[128370]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:42:07 np0005558241 systemd[1]: session-45.scope: Deactivated successfully.
Dec 13 02:42:07 np0005558241 systemd[1]: session-45.scope: Consumed 4.167s CPU time.
Dec 13 02:42:07 np0005558241 systemd-logind[787]: Session 45 logged out. Waiting for processes to exit.
Dec 13 02:42:07 np0005558241 systemd-logind[787]: Removed session 45.
Dec 13 02:42:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:42:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:42:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:42:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:42:08
Dec 13 02:42:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:42:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:42:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['backups', 'vms', '.mgr', 'default.rgw.log', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta']
Dec 13 02:42:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:42:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:42:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:42:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:42:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:42:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:42:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:09 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:42:09 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:42:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:42:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:42:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:42:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:42:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:42:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:42:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:42:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:42:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:42:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:42:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:42:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:42:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:42:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:42:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 601.8 total, 600.0 interval
Cumulative writes: 6768 writes, 28K keys, 6768 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 6768 writes, 1166 syncs, 5.80 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6768 writes, 28K keys, 6768 commit groups, 1.0 writes per commit group, ingest: 19.92 MB, 0.03 MB/s
Interval WAL: 6768 writes, 1166 syncs, 5.80 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 601.8 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.7 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 601.8 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 601.8 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec 13 02:42:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:42:11 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:42:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:11 np0005558241 podman[128540]: 2025-12-13 07:42:11.584674887 +0000 UTC m=+0.039053658 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:42:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:13 np0005558241 podman[128540]: 2025-12-13 07:42:13.681810005 +0000 UTC m=+2.136188696 container create 9af81e6daea362b6d6d91e57046263edd864ac7717a98f06320e28837c0e741b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 02:42:13 np0005558241 systemd[1]: Started libpod-conmon-9af81e6daea362b6d6d91e57046263edd864ac7717a98f06320e28837c0e741b.scope.
Dec 13 02:42:13 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:42:14 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:42:14 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:42:14 np0005558241 podman[128540]: 2025-12-13 07:42:14.256446232 +0000 UTC m=+2.710824983 container init 9af81e6daea362b6d6d91e57046263edd864ac7717a98f06320e28837c0e741b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_kirch, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 02:42:14 np0005558241 podman[128540]: 2025-12-13 07:42:14.269224752 +0000 UTC m=+2.723603433 container start 9af81e6daea362b6d6d91e57046263edd864ac7717a98f06320e28837c0e741b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 02:42:14 np0005558241 goofy_kirch[128556]: 167 167
Dec 13 02:42:14 np0005558241 systemd[1]: libpod-9af81e6daea362b6d6d91e57046263edd864ac7717a98f06320e28837c0e741b.scope: Deactivated successfully.
Dec 13 02:42:14 np0005558241 conmon[128556]: conmon 9af81e6daea362b6d6d9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9af81e6daea362b6d6d91e57046263edd864ac7717a98f06320e28837c0e741b.scope/container/memory.events
Dec 13 02:42:14 np0005558241 podman[128540]: 2025-12-13 07:42:14.658384364 +0000 UTC m=+3.112763085 container attach 9af81e6daea362b6d6d91e57046263edd864ac7717a98f06320e28837c0e741b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_kirch, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 02:42:14 np0005558241 podman[128540]: 2025-12-13 07:42:14.659779768 +0000 UTC m=+3.114158489 container died 9af81e6daea362b6d6d91e57046263edd864ac7717a98f06320e28837c0e741b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_kirch, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 02:42:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay-774f4b60c691ad9a1d05c34e60a61c8d0866edbac7975e121ebff6b69f99174f-merged.mount: Deactivated successfully.
Dec 13 02:42:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:42:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:19 np0005558241 podman[128540]: 2025-12-13 07:42:19.016036185 +0000 UTC m=+7.470414916 container remove 9af81e6daea362b6d6d91e57046263edd864ac7717a98f06320e28837c0e741b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 02:42:19 np0005558241 systemd[1]: libpod-conmon-9af81e6daea362b6d6d91e57046263edd864ac7717a98f06320e28837c0e741b.scope: Deactivated successfully.
Dec 13 02:42:19 np0005558241 podman[128581]: 2025-12-13 07:42:19.211474752 +0000 UTC m=+0.042560262 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:42:19 np0005558241 podman[128581]: 2025-12-13 07:42:19.658559248 +0000 UTC m=+0.489644758 container create 03c4857ae8c9f4311695b6b55897a14d81382b89becfef3aa25d109e4acb92ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 02:42:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:42:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.2 total, 600.0 interval
Cumulative writes: 5527 writes, 24K keys, 5527 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5527 writes, 782 syncs, 7.07 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5527 writes, 24K keys, 5527 commit groups, 1.0 writes per commit group, ingest: 18.79 MB, 0.03 MB/s
Interval WAL: 5527 writes, 782 syncs, 7.07 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec 13 02:42:19 np0005558241 systemd[1]: Started libpod-conmon-03c4857ae8c9f4311695b6b55897a14d81382b89becfef3aa25d109e4acb92ef.scope.
Dec 13 02:42:19 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:42:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06cc83afb01f7433cc9f10348ed6f798775403012dbfd5884e9fc62145cf19a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:42:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06cc83afb01f7433cc9f10348ed6f798775403012dbfd5884e9fc62145cf19a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:42:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06cc83afb01f7433cc9f10348ed6f798775403012dbfd5884e9fc62145cf19a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:42:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06cc83afb01f7433cc9f10348ed6f798775403012dbfd5884e9fc62145cf19a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:42:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06cc83afb01f7433cc9f10348ed6f798775403012dbfd5884e9fc62145cf19a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:42:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:42:20 np0005558241 podman[128581]: 2025-12-13 07:42:20.299415211 +0000 UTC m=+1.130500691 container init 03c4857ae8c9f4311695b6b55897a14d81382b89becfef3aa25d109e4acb92ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_kilby, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:42:20 np0005558241 podman[128581]: 2025-12-13 07:42:20.309445634 +0000 UTC m=+1.140531104 container start 03c4857ae8c9f4311695b6b55897a14d81382b89becfef3aa25d109e4acb92ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_kilby, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:42:20 np0005558241 podman[128581]: 2025-12-13 07:42:20.799628024 +0000 UTC m=+1.630713534 container attach 03c4857ae8c9f4311695b6b55897a14d81382b89becfef3aa25d109e4acb92ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_kilby, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 02:42:20 np0005558241 crazy_kilby[128597]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:42:20 np0005558241 crazy_kilby[128597]: --> All data devices are unavailable
Dec 13 02:42:20 np0005558241 systemd[1]: libpod-03c4857ae8c9f4311695b6b55897a14d81382b89becfef3aa25d109e4acb92ef.scope: Deactivated successfully.
Dec 13 02:42:20 np0005558241 podman[128581]: 2025-12-13 07:42:20.908014211 +0000 UTC m=+1.739099691 container died 03c4857ae8c9f4311695b6b55897a14d81382b89becfef3aa25d109e4acb92ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:42:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:42:21 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a06cc83afb01f7433cc9f10348ed6f798775403012dbfd5884e9fc62145cf19a-merged.mount: Deactivated successfully.
Dec 13 02:42:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:21 np0005558241 podman[128581]: 2025-12-13 07:42:21.7063602 +0000 UTC m=+2.537445700 container remove 03c4857ae8c9f4311695b6b55897a14d81382b89becfef3aa25d109e4acb92ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_kilby, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:42:21 np0005558241 systemd[1]: libpod-conmon-03c4857ae8c9f4311695b6b55897a14d81382b89becfef3aa25d109e4acb92ef.scope: Deactivated successfully.
Dec 13 02:42:22 np0005558241 podman[128691]: 2025-12-13 07:42:22.204902422 +0000 UTC m=+0.058378906 container create 4a7a02e0d0f0da1ec71b9ed1713a1d02cffed72823adf79c179557c003b0301e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_galileo, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 02:42:22 np0005558241 systemd[1]: Started libpod-conmon-4a7a02e0d0f0da1ec71b9ed1713a1d02cffed72823adf79c179557c003b0301e.scope.
Dec 13 02:42:22 np0005558241 podman[128691]: 2025-12-13 07:42:22.169027622 +0000 UTC m=+0.022504146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:42:22 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:42:22 np0005558241 podman[128691]: 2025-12-13 07:42:22.745124445 +0000 UTC m=+0.598600929 container init 4a7a02e0d0f0da1ec71b9ed1713a1d02cffed72823adf79c179557c003b0301e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 02:42:22 np0005558241 podman[128691]: 2025-12-13 07:42:22.751226433 +0000 UTC m=+0.604702927 container start 4a7a02e0d0f0da1ec71b9ed1713a1d02cffed72823adf79c179557c003b0301e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_galileo, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:42:22 np0005558241 systemd[1]: libpod-4a7a02e0d0f0da1ec71b9ed1713a1d02cffed72823adf79c179557c003b0301e.scope: Deactivated successfully.
Dec 13 02:42:22 np0005558241 gifted_galileo[128708]: 167 167
Dec 13 02:42:22 np0005558241 conmon[128708]: conmon 4a7a02e0d0f0da1ec71b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4a7a02e0d0f0da1ec71b9ed1713a1d02cffed72823adf79c179557c003b0301e.scope/container/memory.events
Dec 13 02:42:23 np0005558241 podman[128691]: 2025-12-13 07:42:23.052851363 +0000 UTC m=+0.906327847 container attach 4a7a02e0d0f0da1ec71b9ed1713a1d02cffed72823adf79c179557c003b0301e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:42:23 np0005558241 podman[128691]: 2025-12-13 07:42:23.054178315 +0000 UTC m=+0.907654809 container died 4a7a02e0d0f0da1ec71b9ed1713a1d02cffed72823adf79c179557c003b0301e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 02:42:23 np0005558241 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 02:42:23 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6f47780be44d25e765e5f08a08dd836e4b0a7cbb5b68e17abd7a9c5dc230504b-merged.mount: Deactivated successfully.
Dec 13 02:42:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:23 np0005558241 podman[128691]: 2025-12-13 07:42:23.931774215 +0000 UTC m=+1.785250729 container remove 4a7a02e0d0f0da1ec71b9ed1713a1d02cffed72823adf79c179557c003b0301e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 02:42:23 np0005558241 systemd[1]: libpod-conmon-4a7a02e0d0f0da1ec71b9ed1713a1d02cffed72823adf79c179557c003b0301e.scope: Deactivated successfully.
Dec 13 02:42:24 np0005558241 podman[128731]: 2025-12-13 07:42:24.085983062 +0000 UTC m=+0.027849126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:42:24 np0005558241 podman[128731]: 2025-12-13 07:42:24.49851776 +0000 UTC m=+0.440383774 container create f13c9e96e7e2acd344519b7c4275736ef78fcec11d017a8a00cc46262eddc1af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:42:24 np0005558241 systemd[1]: Started libpod-conmon-f13c9e96e7e2acd344519b7c4275736ef78fcec11d017a8a00cc46262eddc1af.scope.
Dec 13 02:42:24 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:42:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cfae815cde23fda9800ad4d36ba1604a5c68a376323b46ca8b28ad03e6c21bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:42:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cfae815cde23fda9800ad4d36ba1604a5c68a376323b46ca8b28ad03e6c21bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:42:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cfae815cde23fda9800ad4d36ba1604a5c68a376323b46ca8b28ad03e6c21bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:42:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cfae815cde23fda9800ad4d36ba1604a5c68a376323b46ca8b28ad03e6c21bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:42:25 np0005558241 systemd-logind[787]: New session 46 of user zuul.
Dec 13 02:42:25 np0005558241 systemd[1]: Started Session 46 of User zuul.
Dec 13 02:42:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:25 np0005558241 podman[128731]: 2025-12-13 07:42:25.908737518 +0000 UTC m=+1.850603592 container init f13c9e96e7e2acd344519b7c4275736ef78fcec11d017a8a00cc46262eddc1af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Dec 13 02:42:25 np0005558241 podman[128731]: 2025-12-13 07:42:25.92779761 +0000 UTC m=+1.869663594 container start f13c9e96e7e2acd344519b7c4275736ef78fcec11d017a8a00cc46262eddc1af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 02:42:26 np0005558241 magical_tu[128747]: {
Dec 13 02:42:26 np0005558241 magical_tu[128747]:    "0": [
Dec 13 02:42:26 np0005558241 magical_tu[128747]:        {
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "devices": [
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "/dev/loop3"
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            ],
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "lv_name": "ceph_lv0",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "lv_size": "21470642176",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "name": "ceph_lv0",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "tags": {
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.cluster_name": "ceph",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.crush_device_class": "",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.encrypted": "0",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.objectstore": "bluestore",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.osd_id": "0",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.type": "block",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.vdo": "0",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.with_tpm": "0"
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            },
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "type": "block",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "vg_name": "ceph_vg0"
Dec 13 02:42:26 np0005558241 magical_tu[128747]:        }
Dec 13 02:42:26 np0005558241 magical_tu[128747]:    ],
Dec 13 02:42:26 np0005558241 magical_tu[128747]:    "1": [
Dec 13 02:42:26 np0005558241 magical_tu[128747]:        {
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "devices": [
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "/dev/loop4"
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            ],
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "lv_name": "ceph_lv1",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "lv_size": "21470642176",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "name": "ceph_lv1",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "tags": {
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.cluster_name": "ceph",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.crush_device_class": "",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.encrypted": "0",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.objectstore": "bluestore",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.osd_id": "1",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.type": "block",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.vdo": "0",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.with_tpm": "0"
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            },
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "type": "block",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "vg_name": "ceph_vg1"
Dec 13 02:42:26 np0005558241 magical_tu[128747]:        }
Dec 13 02:42:26 np0005558241 magical_tu[128747]:    ],
Dec 13 02:42:26 np0005558241 magical_tu[128747]:    "2": [
Dec 13 02:42:26 np0005558241 magical_tu[128747]:        {
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "devices": [
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "/dev/loop5"
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            ],
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "lv_name": "ceph_lv2",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "lv_size": "21470642176",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "name": "ceph_lv2",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "tags": {
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.cluster_name": "ceph",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.crush_device_class": "",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.encrypted": "0",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.objectstore": "bluestore",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.osd_id": "2",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.type": "block",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.vdo": "0",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:                "ceph.with_tpm": "0"
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            },
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "type": "block",
Dec 13 02:42:26 np0005558241 magical_tu[128747]:            "vg_name": "ceph_vg2"
Dec 13 02:42:26 np0005558241 magical_tu[128747]:        }
Dec 13 02:42:26 np0005558241 magical_tu[128747]:    ]
Dec 13 02:42:26 np0005558241 magical_tu[128747]: }
Dec 13 02:42:26 np0005558241 podman[128731]: 2025-12-13 07:42:26.278188752 +0000 UTC m=+2.220054776 container attach f13c9e96e7e2acd344519b7c4275736ef78fcec11d017a8a00cc46262eddc1af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_tu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 02:42:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:42:26 np0005558241 systemd[1]: libpod-f13c9e96e7e2acd344519b7c4275736ef78fcec11d017a8a00cc46262eddc1af.scope: Deactivated successfully.
Dec 13 02:42:26 np0005558241 podman[128731]: 2025-12-13 07:42:26.304280224 +0000 UTC m=+2.246146248 container died f13c9e96e7e2acd344519b7c4275736ef78fcec11d017a8a00cc46262eddc1af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_tu, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 02:42:26 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1cfae815cde23fda9800ad4d36ba1604a5c68a376323b46ca8b28ad03e6c21bc-merged.mount: Deactivated successfully.
Dec 13 02:42:26 np0005558241 python3.9[128905]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:42:26 np0005558241 podman[128731]: 2025-12-13 07:42:26.892371308 +0000 UTC m=+2.834237292 container remove f13c9e96e7e2acd344519b7c4275736ef78fcec11d017a8a00cc46262eddc1af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_tu, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:42:26 np0005558241 systemd[1]: libpod-conmon-f13c9e96e7e2acd344519b7c4275736ef78fcec11d017a8a00cc46262eddc1af.scope: Deactivated successfully.
Dec 13 02:42:27 np0005558241 podman[129139]: 2025-12-13 07:42:27.42801865 +0000 UTC m=+0.027527858 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:42:27 np0005558241 python3.9[129128]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:42:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:28 np0005558241 podman[129139]: 2025-12-13 07:42:28.530961302 +0000 UTC m=+1.130470490 container create a44e25237155f35637b199868dc0eb2b1957c1c05380bf3dd2fbebc53f0f41d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_vaughan, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:42:28 np0005558241 python3.9[129234]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 13 02:42:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:29 np0005558241 systemd[1]: Started libpod-conmon-a44e25237155f35637b199868dc0eb2b1957c1c05380bf3dd2fbebc53f0f41d2.scope.
Dec 13 02:42:29 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:42:30 np0005558241 podman[129139]: 2025-12-13 07:42:30.034450759 +0000 UTC m=+2.633960017 container init a44e25237155f35637b199868dc0eb2b1957c1c05380bf3dd2fbebc53f0f41d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_vaughan, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:42:30 np0005558241 podman[129139]: 2025-12-13 07:42:30.044324688 +0000 UTC m=+2.643833876 container start a44e25237155f35637b199868dc0eb2b1957c1c05380bf3dd2fbebc53f0f41d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 02:42:30 np0005558241 cool_vaughan[129239]: 167 167
Dec 13 02:42:30 np0005558241 systemd[1]: libpod-a44e25237155f35637b199868dc0eb2b1957c1c05380bf3dd2fbebc53f0f41d2.scope: Deactivated successfully.
Dec 13 02:42:31 np0005558241 podman[129139]: 2025-12-13 07:42:31.018506279 +0000 UTC m=+3.618015497 container attach a44e25237155f35637b199868dc0eb2b1957c1c05380bf3dd2fbebc53f0f41d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_vaughan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:42:31 np0005558241 podman[129139]: 2025-12-13 07:42:31.01978635 +0000 UTC m=+3.619295578 container died a44e25237155f35637b199868dc0eb2b1957c1c05380bf3dd2fbebc53f0f41d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 02:42:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:42:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:32 np0005558241 python3.9[129402]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:42:32 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ef111ee5b5e074350464eea31e5faa60b0add66b8af8784c47b767fced83bd94-merged.mount: Deactivated successfully.
Dec 13 02:42:33 np0005558241 podman[129139]: 2025-12-13 07:42:33.496163847 +0000 UTC m=+6.095673045 container remove a44e25237155f35637b199868dc0eb2b1957c1c05380bf3dd2fbebc53f0f41d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 02:42:33 np0005558241 python3.9[129554]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 02:42:33 np0005558241 systemd[1]: libpod-conmon-a44e25237155f35637b199868dc0eb2b1957c1c05380bf3dd2fbebc53f0f41d2.scope: Deactivated successfully.
Dec 13 02:42:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:33 np0005558241 podman[129586]: 2025-12-13 07:42:33.704587819 +0000 UTC m=+0.058260973 container create 8233cae8cbc90c1c7da4b8d0fbc3f7b3ca24d45d798e9579a30dbffb541755d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:42:33 np0005558241 podman[129586]: 2025-12-13 07:42:33.680879894 +0000 UTC m=+0.034553018 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:42:34 np0005558241 systemd[1]: Started libpod-conmon-8233cae8cbc90c1c7da4b8d0fbc3f7b3ca24d45d798e9579a30dbffb541755d2.scope.
Dec 13 02:42:34 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:42:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e47c16cf754576ef98e4ea4235bdf1530e752a65ca49c338bf3eaac989b535/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:42:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e47c16cf754576ef98e4ea4235bdf1530e752a65ca49c338bf3eaac989b535/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:42:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e47c16cf754576ef98e4ea4235bdf1530e752a65ca49c338bf3eaac989b535/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:42:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8e47c16cf754576ef98e4ea4235bdf1530e752a65ca49c338bf3eaac989b535/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:42:34 np0005558241 python3.9[129728]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:42:35 np0005558241 podman[129586]: 2025-12-13 07:42:35.15617165 +0000 UTC m=+1.509844824 container init 8233cae8cbc90c1c7da4b8d0fbc3f7b3ca24d45d798e9579a30dbffb541755d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_carson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:42:35 np0005558241 podman[129586]: 2025-12-13 07:42:35.171687496 +0000 UTC m=+1.525360650 container start 8233cae8cbc90c1c7da4b8d0fbc3f7b3ca24d45d798e9579a30dbffb541755d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 02:42:35 np0005558241 podman[129586]: 2025-12-13 07:42:35.211914181 +0000 UTC m=+1.565587305 container attach 8233cae8cbc90c1c7da4b8d0fbc3f7b3ca24d45d798e9579a30dbffb541755d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_carson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 02:42:35 np0005558241 python3.9[129881]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:42:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:35 np0005558241 lvm[129981]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:42:35 np0005558241 lvm[129982]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:42:35 np0005558241 lvm[129981]: VG ceph_vg0 finished
Dec 13 02:42:35 np0005558241 lvm[129982]: VG ceph_vg1 finished
Dec 13 02:42:36 np0005558241 lvm[129984]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:42:36 np0005558241 lvm[129984]: VG ceph_vg2 finished
Dec 13 02:42:36 np0005558241 serene_carson[129729]: {}
Dec 13 02:42:36 np0005558241 systemd[1]: libpod-8233cae8cbc90c1c7da4b8d0fbc3f7b3ca24d45d798e9579a30dbffb541755d2.scope: Deactivated successfully.
Dec 13 02:42:36 np0005558241 systemd[1]: libpod-8233cae8cbc90c1c7da4b8d0fbc3f7b3ca24d45d798e9579a30dbffb541755d2.scope: Consumed 1.575s CPU time.
Dec 13 02:42:36 np0005558241 podman[129586]: 2025-12-13 07:42:36.168473493 +0000 UTC m=+2.522146657 container died 8233cae8cbc90c1c7da4b8d0fbc3f7b3ca24d45d798e9579a30dbffb541755d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_carson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:42:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:42:37 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f8e47c16cf754576ef98e4ea4235bdf1530e752a65ca49c338bf3eaac989b535-merged.mount: Deactivated successfully.
Dec 13 02:42:37 np0005558241 systemd[1]: session-46.scope: Deactivated successfully.
Dec 13 02:42:37 np0005558241 systemd[1]: session-46.scope: Consumed 7.091s CPU time.
Dec 13 02:42:37 np0005558241 systemd-logind[787]: Session 46 logged out. Waiting for processes to exit.
Dec 13 02:42:37 np0005558241 systemd-logind[787]: Removed session 46.
Dec 13 02:42:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:37 np0005558241 podman[129586]: 2025-12-13 07:42:37.806960504 +0000 UTC m=+4.160633638 container remove 8233cae8cbc90c1c7da4b8d0fbc3f7b3ca24d45d798e9579a30dbffb541755d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 02:42:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:42:37 np0005558241 systemd[1]: libpod-conmon-8233cae8cbc90c1c7da4b8d0fbc3f7b3ca24d45d798e9579a30dbffb541755d2.scope: Deactivated successfully.
Dec 13 02:42:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:42:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:42:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:42:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:42:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:42:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:42:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:42:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:42:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:42:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:42:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:42:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:42:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:42:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:50 np0005558241 systemd-logind[787]: New session 47 of user zuul.
Dec 13 02:42:50 np0005558241 systemd[1]: Started Session 47 of User zuul.
Dec 13 02:42:51 np0005558241 python3.9[130177]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:42:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:42:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:53 np0005558241 python3.9[130333]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:42:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:54 np0005558241 python3.9[130485]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:42:55 np0005558241 python3.9[130637]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:42:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:55 np0005558241 python3.9[130760]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611774.3227718-65-187811705968643/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=4c80d4d23b433ec175f5013efef91db17c8a7837 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:42:56 np0005558241 python3.9[130912]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:42:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:42:57 np0005558241 python3.9[131035]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611775.990696-65-15867468285284/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=9546334270ccae2e6ac497d1ae6efa6519cc5422 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:42:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:57 np0005558241 python3.9[131187]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:42:58 np0005558241 python3.9[131310]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611777.41451-65-7558681861710/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7510b5426ff8f953c3fe5b98b58333ef9d735ee6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:42:59 np0005558241 python3.9[131462]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:42:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:42:59 np0005558241 python3.9[131614]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:43:00 np0005558241 python3.9[131766]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:43:01 np0005558241 python3.9[131889]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611780.230873-124-233239972173751/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=0037a3af29210ff3ada1a2918956eff6dd9214c9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:43:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:43:02 np0005558241 python3.9[132041]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:43:02 np0005558241 python3.9[132164]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611781.6313248-124-178678612638374/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ebf6a51f7aee1ebca19477aec744cc9ac256dbb2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:43:03 np0005558241 python3.9[132316]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:43:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:03 np0005558241 python3.9[132439]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611782.7853184-124-44458662465988/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=535528b9b3cf85ad348d73a56d5cbdf767eb9069 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:43:04 np0005558241 python3.9[132591]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:43:05 np0005558241 python3.9[132743]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:43:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:05 np0005558241 python3.9[132895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:43:06 np0005558241 python3.9[133018]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611785.3145933-183-2230220402093/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=881382ef4c6eb46a16ea9a799de6982a42ac2ab6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:43:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:43:07 np0005558241 python3.9[133170]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:43:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:07 np0005558241 python3.9[133293]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611786.6150165-183-13329341050155/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ebf6a51f7aee1ebca19477aec744cc9ac256dbb2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:43:08 np0005558241 python3.9[133445]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:43:08 np0005558241 python3.9[133568]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611787.849382-183-146745768373587/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=e7f16d86ef93bf6794a14f766281b541ae644c1e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:43:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:43:09
Dec 13 02:43:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:43:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:43:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['volumes', 'images', 'backups', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta']
Dec 13 02:43:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:43:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:43:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:43:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:43:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:43:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:43:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:43:10 np0005558241 python3.9[133720]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:43:10 np0005558241 python3.9[133872]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:43:11 np0005558241 python3.9[133995]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611790.3571687-251-187763421953017/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8420c3cfe35a04f9d22dc8ceccdc23770479ce6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:43:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:43:12 np0005558241 python3.9[134147]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:43:12 np0005558241 python3.9[134299]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:43:13 np0005558241 python3.9[134422]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611792.3130379-275-168908600787870/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8420c3cfe35a04f9d22dc8ceccdc23770479ce6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:43:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:14 np0005558241 python3.9[134574]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:43:14 np0005558241 python3.9[134726]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:43:15 np0005558241 python3.9[134849]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611794.4255037-299-138002979167241/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8420c3cfe35a04f9d22dc8ceccdc23770479ce6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:43:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:16 np0005558241 python3.9[135001]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:43:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:43:16 np0005558241 python3.9[135153]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:43:17 np0005558241 python3.9[135276]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611796.3303485-323-56362847770030/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8420c3cfe35a04f9d22dc8ceccdc23770479ce6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:43:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:18 np0005558241 python3.9[135428]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:43:18 np0005558241 python3.9[135580]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:43:19 np0005558241 python3.9[135703]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611798.4742892-347-75779294186098/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8420c3cfe35a04f9d22dc8ceccdc23770479ce6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:43:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:43:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:43:20 np0005558241 python3.9[135855]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:43:21 np0005558241 python3.9[136007]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:43:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:21 np0005558241 python3.9[136130]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611800.653339-371-227706802735796/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8420c3cfe35a04f9d22dc8ceccdc23770479ce6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:43:22 np0005558241 systemd[1]: session-47.scope: Deactivated successfully.
Dec 13 02:43:22 np0005558241 systemd[1]: session-47.scope: Consumed 25.368s CPU time.
Dec 13 02:43:22 np0005558241 systemd-logind[787]: Session 47 logged out. Waiting for processes to exit.
Dec 13 02:43:22 np0005558241 systemd-logind[787]: Removed session 47.
Dec 13 02:43:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:43:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:43:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:30 np0005558241 systemd-logind[787]: New session 48 of user zuul.
Dec 13 02:43:30 np0005558241 systemd[1]: Started Session 48 of User zuul.
Dec 13 02:43:31 np0005558241 python3.9[136310]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:43:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:32 np0005558241 python3.9[136462]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:43:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:43:33 np0005558241 python3.9[136585]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765611811.8675008-34-236185394943007/.source.conf _original_basename=ceph.conf follow=False checksum=7a1becb324a58e91d816c710f0d5c5dc5eb2d159 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:43:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:33 np0005558241 python3.9[136737]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:43:34 np0005558241 python3.9[136860]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765611813.3895822-34-271587393272973/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=a94eaee1cc0bd48006b45780280e826c7650ff45 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:43:34 np0005558241 systemd[1]: session-48.scope: Deactivated successfully.
Dec 13 02:43:34 np0005558241 systemd[1]: session-48.scope: Consumed 2.757s CPU time.
Dec 13 02:43:34 np0005558241 systemd-logind[787]: Session 48 logged out. Waiting for processes to exit.
Dec 13 02:43:34 np0005558241 systemd-logind[787]: Removed session 48.
Dec 13 02:43:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:43:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 02:43:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 02:43:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:43:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:43:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:43:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:43:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:43:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:43:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:43:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:43:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:43:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:43:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:43:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:43:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 02:43:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:43:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:43:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:43:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:39 np0005558241 podman[137028]: 2025-12-13 07:43:39.795584528 +0000 UTC m=+0.045549391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:43:40 np0005558241 podman[137028]: 2025-12-13 07:43:40.01164069 +0000 UTC m=+0.261605493 container create 1ec3d39119ae0ab7af8053b5b83d2e085647b4d0676956a18991a002becfe51f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_heyrovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 02:43:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:43:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:43:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:43:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:43:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:43:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:43:40 np0005558241 systemd[1]: Started libpod-conmon-1ec3d39119ae0ab7af8053b5b83d2e085647b4d0676956a18991a002becfe51f.scope.
Dec 13 02:43:40 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:43:40 np0005558241 podman[137028]: 2025-12-13 07:43:40.56655489 +0000 UTC m=+0.816519683 container init 1ec3d39119ae0ab7af8053b5b83d2e085647b4d0676956a18991a002becfe51f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_heyrovsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:43:40 np0005558241 podman[137028]: 2025-12-13 07:43:40.576127316 +0000 UTC m=+0.826092119 container start 1ec3d39119ae0ab7af8053b5b83d2e085647b4d0676956a18991a002becfe51f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 02:43:40 np0005558241 happy_heyrovsky[137045]: 167 167
Dec 13 02:43:40 np0005558241 systemd[1]: libpod-1ec3d39119ae0ab7af8053b5b83d2e085647b4d0676956a18991a002becfe51f.scope: Deactivated successfully.
Dec 13 02:43:40 np0005558241 systemd-logind[787]: New session 49 of user zuul.
Dec 13 02:43:40 np0005558241 systemd[1]: Started Session 49 of User zuul.
Dec 13 02:43:40 np0005558241 podman[137028]: 2025-12-13 07:43:40.897227388 +0000 UTC m=+1.147192171 container attach 1ec3d39119ae0ab7af8053b5b83d2e085647b4d0676956a18991a002becfe51f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_heyrovsky, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Dec 13 02:43:40 np0005558241 podman[137028]: 2025-12-13 07:43:40.897972406 +0000 UTC m=+1.147937179 container died 1ec3d39119ae0ab7af8053b5b83d2e085647b4d0676956a18991a002becfe51f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_heyrovsky, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:43:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4c52ca7af2a4df73f23cc9d40be37706bedeaef856da3b3a905bbe179c53e56b-merged.mount: Deactivated successfully.
Dec 13 02:43:41 np0005558241 podman[137028]: 2025-12-13 07:43:41.539578398 +0000 UTC m=+1.789543161 container remove 1ec3d39119ae0ab7af8053b5b83d2e085647b4d0676956a18991a002becfe51f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:43:41 np0005558241 systemd[1]: libpod-conmon-1ec3d39119ae0ab7af8053b5b83d2e085647b4d0676956a18991a002becfe51f.scope: Deactivated successfully.
Dec 13 02:43:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:41 np0005558241 podman[137224]: 2025-12-13 07:43:41.7849632 +0000 UTC m=+0.094150075 container create a24cd7e64c1192813cc03082f9257143fa56051e28cb5613ba36ca99c6c067f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_feistel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 02:43:41 np0005558241 podman[137224]: 2025-12-13 07:43:41.739538684 +0000 UTC m=+0.048725609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:43:41 np0005558241 systemd[1]: Started libpod-conmon-a24cd7e64c1192813cc03082f9257143fa56051e28cb5613ba36ca99c6c067f7.scope.
Dec 13 02:43:41 np0005558241 python3.9[137218]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:43:41 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:43:41 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a336a9ec52094cce554a3398e7925b1400f71f33a373c06099b0c3d35df13a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:43:41 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a336a9ec52094cce554a3398e7925b1400f71f33a373c06099b0c3d35df13a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:43:41 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a336a9ec52094cce554a3398e7925b1400f71f33a373c06099b0c3d35df13a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:43:41 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a336a9ec52094cce554a3398e7925b1400f71f33a373c06099b0c3d35df13a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:43:41 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a336a9ec52094cce554a3398e7925b1400f71f33a373c06099b0c3d35df13a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:43:42 np0005558241 podman[137224]: 2025-12-13 07:43:42.05423508 +0000 UTC m=+0.363421995 container init a24cd7e64c1192813cc03082f9257143fa56051e28cb5613ba36ca99c6c067f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_feistel, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 02:43:42 np0005558241 podman[137224]: 2025-12-13 07:43:42.066708016 +0000 UTC m=+0.375894891 container start a24cd7e64c1192813cc03082f9257143fa56051e28cb5613ba36ca99c6c067f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_feistel, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:43:42 np0005558241 podman[137224]: 2025-12-13 07:43:42.160811559 +0000 UTC m=+0.469998474 container attach a24cd7e64c1192813cc03082f9257143fa56051e28cb5613ba36ca99c6c067f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_feistel, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:43:42 np0005558241 blissful_feistel[137241]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:43:42 np0005558241 blissful_feistel[137241]: --> All data devices are unavailable
Dec 13 02:43:42 np0005558241 systemd[1]: libpod-a24cd7e64c1192813cc03082f9257143fa56051e28cb5613ba36ca99c6c067f7.scope: Deactivated successfully.
Dec 13 02:43:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:43:42 np0005558241 podman[137341]: 2025-12-13 07:43:42.677006688 +0000 UTC m=+0.040033535 container died a24cd7e64c1192813cc03082f9257143fa56051e28cb5613ba36ca99c6c067f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_feistel, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 02:43:43 np0005558241 python3.9[137427]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:43:43 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2a336a9ec52094cce554a3398e7925b1400f71f33a373c06099b0c3d35df13a8-merged.mount: Deactivated successfully.
Dec 13 02:43:43 np0005558241 podman[137341]: 2025-12-13 07:43:43.228391512 +0000 UTC m=+0.591418359 container remove a24cd7e64c1192813cc03082f9257143fa56051e28cb5613ba36ca99c6c067f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:43:43 np0005558241 systemd[1]: libpod-conmon-a24cd7e64c1192813cc03082f9257143fa56051e28cb5613ba36ca99c6c067f7.scope: Deactivated successfully.
Dec 13 02:43:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:43 np0005558241 podman[137645]: 2025-12-13 07:43:43.755194252 +0000 UTC m=+0.064367883 container create b7bb50f62f99a1cb89be00f8e533bcfcc9971d08e981a891f1e3f9f14af3d2b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_engelbart, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:43:43 np0005558241 python3.9[137631]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:43:43 np0005558241 podman[137645]: 2025-12-13 07:43:43.714319387 +0000 UTC m=+0.023493038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:43:43 np0005558241 systemd[1]: Started libpod-conmon-b7bb50f62f99a1cb89be00f8e533bcfcc9971d08e981a891f1e3f9f14af3d2b1.scope.
Dec 13 02:43:43 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:43:44 np0005558241 podman[137645]: 2025-12-13 07:43:44.117291563 +0000 UTC m=+0.426465214 container init b7bb50f62f99a1cb89be00f8e533bcfcc9971d08e981a891f1e3f9f14af3d2b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_engelbart, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:43:44 np0005558241 podman[137645]: 2025-12-13 07:43:44.130132539 +0000 UTC m=+0.439306170 container start b7bb50f62f99a1cb89be00f8e533bcfcc9971d08e981a891f1e3f9f14af3d2b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_engelbart, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 02:43:44 np0005558241 systemd[1]: libpod-b7bb50f62f99a1cb89be00f8e533bcfcc9971d08e981a891f1e3f9f14af3d2b1.scope: Deactivated successfully.
Dec 13 02:43:44 np0005558241 zen_engelbart[137681]: 167 167
Dec 13 02:43:44 np0005558241 conmon[137681]: conmon b7bb50f62f99a1cb89be <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b7bb50f62f99a1cb89be00f8e533bcfcc9971d08e981a891f1e3f9f14af3d2b1.scope/container/memory.events
Dec 13 02:43:44 np0005558241 podman[137645]: 2025-12-13 07:43:44.163240743 +0000 UTC m=+0.472414394 container attach b7bb50f62f99a1cb89be00f8e533bcfcc9971d08e981a891f1e3f9f14af3d2b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 02:43:44 np0005558241 podman[137645]: 2025-12-13 07:43:44.164084953 +0000 UTC m=+0.473258584 container died b7bb50f62f99a1cb89be00f8e533bcfcc9971d08e981a891f1e3f9f14af3d2b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_engelbart, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Dec 13 02:43:44 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ddca02dc7652c2fb8725b3a56109efea85befa40d70dc6782967fe9ccabe86ef-merged.mount: Deactivated successfully.
Dec 13 02:43:44 np0005558241 python3.9[137827]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:43:44 np0005558241 podman[137645]: 2025-12-13 07:43:44.740497542 +0000 UTC m=+1.049671173 container remove b7bb50f62f99a1cb89be00f8e533bcfcc9971d08e981a891f1e3f9f14af3d2b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_engelbart, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:43:44 np0005558241 systemd[1]: libpod-conmon-b7bb50f62f99a1cb89be00f8e533bcfcc9971d08e981a891f1e3f9f14af3d2b1.scope: Deactivated successfully.
Dec 13 02:43:44 np0005558241 podman[137890]: 2025-12-13 07:43:44.925302265 +0000 UTC m=+0.050280597 container create 64f149047feab4b492265dc0d441fec45d6849eaff5f3503b05882bcc8bdab05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:43:44 np0005558241 systemd[1]: Started libpod-conmon-64f149047feab4b492265dc0d441fec45d6849eaff5f3503b05882bcc8bdab05.scope.
Dec 13 02:43:44 np0005558241 podman[137890]: 2025-12-13 07:43:44.903695604 +0000 UTC m=+0.028673926 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:43:45 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:43:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83ecabb0dbd3b7ebf16f8ee63ada2694239f43a87dec2e5d8e6873017c4126b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:43:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83ecabb0dbd3b7ebf16f8ee63ada2694239f43a87dec2e5d8e6873017c4126b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:43:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83ecabb0dbd3b7ebf16f8ee63ada2694239f43a87dec2e5d8e6873017c4126b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:43:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83ecabb0dbd3b7ebf16f8ee63ada2694239f43a87dec2e5d8e6873017c4126b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:43:45 np0005558241 podman[137890]: 2025-12-13 07:43:45.036803276 +0000 UTC m=+0.161781618 container init 64f149047feab4b492265dc0d441fec45d6849eaff5f3503b05882bcc8bdab05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:43:45 np0005558241 podman[137890]: 2025-12-13 07:43:45.046266108 +0000 UTC m=+0.171244430 container start 64f149047feab4b492265dc0d441fec45d6849eaff5f3503b05882bcc8bdab05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wilbur, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Dec 13 02:43:45 np0005558241 podman[137890]: 2025-12-13 07:43:45.073454247 +0000 UTC m=+0.198432599 container attach 64f149047feab4b492265dc0d441fec45d6849eaff5f3503b05882bcc8bdab05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wilbur, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]: {
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:    "0": [
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:        {
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "devices": [
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "/dev/loop3"
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            ],
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "lv_name": "ceph_lv0",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "lv_size": "21470642176",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "name": "ceph_lv0",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "tags": {
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.cluster_name": "ceph",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.crush_device_class": "",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.encrypted": "0",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.objectstore": "bluestore",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.osd_id": "0",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.type": "block",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.vdo": "0",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.with_tpm": "0"
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            },
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "type": "block",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "vg_name": "ceph_vg0"
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:        }
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:    ],
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:    "1": [
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:        {
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "devices": [
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "/dev/loop4"
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            ],
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "lv_name": "ceph_lv1",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "lv_size": "21470642176",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "name": "ceph_lv1",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "tags": {
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.cluster_name": "ceph",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.crush_device_class": "",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.encrypted": "0",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.objectstore": "bluestore",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.osd_id": "1",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.type": "block",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.vdo": "0",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.with_tpm": "0"
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            },
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "type": "block",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "vg_name": "ceph_vg1"
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:        }
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:    ],
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:    "2": [
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:        {
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "devices": [
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "/dev/loop5"
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            ],
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "lv_name": "ceph_lv2",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "lv_size": "21470642176",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "name": "ceph_lv2",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "tags": {
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.cluster_name": "ceph",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.crush_device_class": "",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.encrypted": "0",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.objectstore": "bluestore",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.osd_id": "2",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.type": "block",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.vdo": "0",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:                "ceph.with_tpm": "0"
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            },
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "type": "block",
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:            "vg_name": "ceph_vg2"
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:        }
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]:    ]
Dec 13 02:43:45 np0005558241 angry_wilbur[137927]: }
Dec 13 02:43:45 np0005558241 systemd[1]: libpod-64f149047feab4b492265dc0d441fec45d6849eaff5f3503b05882bcc8bdab05.scope: Deactivated successfully.
Dec 13 02:43:45 np0005558241 podman[138012]: 2025-12-13 07:43:45.462472899 +0000 UTC m=+0.031852584 container died 64f149047feab4b492265dc0d441fec45d6849eaff5f3503b05882bcc8bdab05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 02:43:45 np0005558241 python3.9[138011]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 13 02:43:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:43:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:48 np0005558241 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 13 02:43:49 np0005558241 systemd[1]: var-lib-containers-storage-overlay-83ecabb0dbd3b7ebf16f8ee63ada2694239f43a87dec2e5d8e6873017c4126b3-merged.mount: Deactivated successfully.
Dec 13 02:43:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:51 np0005558241 podman[138012]: 2025-12-13 07:43:51.459965487 +0000 UTC m=+6.029345172 container remove 64f149047feab4b492265dc0d441fec45d6849eaff5f3503b05882bcc8bdab05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wilbur, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:43:51 np0005558241 systemd[1]: libpod-conmon-64f149047feab4b492265dc0d441fec45d6849eaff5f3503b05882bcc8bdab05.scope: Deactivated successfully.
Dec 13 02:43:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:52 np0005558241 podman[138116]: 2025-12-13 07:43:51.967709979 +0000 UTC m=+0.026216686 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:43:52 np0005558241 podman[138116]: 2025-12-13 07:43:52.430443014 +0000 UTC m=+0.488949711 container create 58660787581f7aaa6fc5b3b9541767d45d5734f1503c56dba1a580668d43503c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 02:43:52 np0005558241 python3.9[138258]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:43:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:43:53 np0005558241 systemd[1]: Started libpod-conmon-58660787581f7aaa6fc5b3b9541767d45d5734f1503c56dba1a580668d43503c.scope.
Dec 13 02:43:53 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:43:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:53 np0005558241 podman[138116]: 2025-12-13 07:43:53.820986756 +0000 UTC m=+1.879493433 container init 58660787581f7aaa6fc5b3b9541767d45d5734f1503c56dba1a580668d43503c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:43:53 np0005558241 podman[138116]: 2025-12-13 07:43:53.830840688 +0000 UTC m=+1.889347345 container start 58660787581f7aaa6fc5b3b9541767d45d5734f1503c56dba1a580668d43503c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_rubin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:43:53 np0005558241 laughing_rubin[138299]: 167 167
Dec 13 02:43:53 np0005558241 systemd[1]: libpod-58660787581f7aaa6fc5b3b9541767d45d5734f1503c56dba1a580668d43503c.scope: Deactivated successfully.
Dec 13 02:43:53 np0005558241 python3.9[138347]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:43:54 np0005558241 podman[138116]: 2025-12-13 07:43:54.381460563 +0000 UTC m=+2.439967270 container attach 58660787581f7aaa6fc5b3b9541767d45d5734f1503c56dba1a580668d43503c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:43:54 np0005558241 podman[138116]: 2025-12-13 07:43:54.382971611 +0000 UTC m=+2.441478288 container died 58660787581f7aaa6fc5b3b9541767d45d5734f1503c56dba1a580668d43503c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_rubin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 02:43:55 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1aebce582995a3e67100ba7b0a250c1232395318f80f55e7f4e20e02e92a9539-merged.mount: Deactivated successfully.
Dec 13 02:43:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:55 np0005558241 podman[138116]: 2025-12-13 07:43:55.891552504 +0000 UTC m=+3.950059161 container remove 58660787581f7aaa6fc5b3b9541767d45d5734f1503c56dba1a580668d43503c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Dec 13 02:43:55 np0005558241 systemd[1]: libpod-conmon-58660787581f7aaa6fc5b3b9541767d45d5734f1503c56dba1a580668d43503c.scope: Deactivated successfully.
Dec 13 02:43:56 np0005558241 podman[138447]: 2025-12-13 07:43:56.092910393 +0000 UTC m=+0.024978905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:43:56 np0005558241 python3.9[138536]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 02:43:57 np0005558241 podman[138447]: 2025-12-13 07:43:57.252342665 +0000 UTC m=+1.184411177 container create 4e37059bc8bbec82898908aeea9f194020027f8343f023550208d6ce4a4d95ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Dec 13 02:43:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:43:58 np0005558241 systemd[1]: Started libpod-conmon-4e37059bc8bbec82898908aeea9f194020027f8343f023550208d6ce4a4d95ef.scope.
Dec 13 02:43:58 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:43:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a13324b4ab9a4e801e9fe2f02d3be10cbc9319f5bbe811a45e4571221aa9d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:43:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a13324b4ab9a4e801e9fe2f02d3be10cbc9319f5bbe811a45e4571221aa9d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:43:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a13324b4ab9a4e801e9fe2f02d3be10cbc9319f5bbe811a45e4571221aa9d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:43:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75a13324b4ab9a4e801e9fe2f02d3be10cbc9319f5bbe811a45e4571221aa9d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:43:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:43:58 np0005558241 python3[138696]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 13 02:43:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Dec 13 02:43:59 np0005558241 python3.9[138848]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:44:00 np0005558241 podman[138447]: 2025-12-13 07:44:00.35404651 +0000 UTC m=+4.286115022 container init 4e37059bc8bbec82898908aeea9f194020027f8343f023550208d6ce4a4d95ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:44:00 np0005558241 podman[138447]: 2025-12-13 07:44:00.368372822 +0000 UTC m=+4.300441314 container start 4e37059bc8bbec82898908aeea9f194020027f8343f023550208d6ce4a4d95ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:44:00 np0005558241 python3.9[139002]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:44:01 np0005558241 lvm[139155]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:44:01 np0005558241 lvm[139155]: VG ceph_vg1 finished
Dec 13 02:44:01 np0005558241 lvm[139154]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:44:01 np0005558241 lvm[139154]: VG ceph_vg0 finished
Dec 13 02:44:01 np0005558241 lvm[139157]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:44:01 np0005558241 lvm[139157]: VG ceph_vg2 finished
Dec 13 02:44:01 np0005558241 podman[138447]: 2025-12-13 07:44:01.268902269 +0000 UTC m=+5.200970761 container attach 4e37059bc8bbec82898908aeea9f194020027f8343f023550208d6ce4a4d95ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:44:01 np0005558241 determined_gagarin[138618]: {}
Dec 13 02:44:01 np0005558241 python3.9[139121]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:44:01 np0005558241 systemd[1]: libpod-4e37059bc8bbec82898908aeea9f194020027f8343f023550208d6ce4a4d95ef.scope: Deactivated successfully.
Dec 13 02:44:01 np0005558241 podman[138447]: 2025-12-13 07:44:01.346504516 +0000 UTC m=+5.278573028 container died 4e37059bc8bbec82898908aeea9f194020027f8343f023550208d6ce4a4d95ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 02:44:01 np0005558241 systemd[1]: libpod-4e37059bc8bbec82898908aeea9f194020027f8343f023550208d6ce4a4d95ef.scope: Consumed 1.606s CPU time.
Dec 13 02:44:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Dec 13 02:44:02 np0005558241 python3.9[139325]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:44:02 np0005558241 python3.9[139403]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.qct1b6up recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:44:03 np0005558241 python3.9[139556]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:44:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Dec 13 02:44:04 np0005558241 python3.9[139634]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:44:04 np0005558241 systemd[1]: var-lib-containers-storage-overlay-75a13324b4ab9a4e801e9fe2f02d3be10cbc9319f5bbe811a45e4571221aa9d7-merged.mount: Deactivated successfully.
Dec 13 02:44:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Dec 13 02:44:06 np0005558241 python3.9[139786]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:44:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Dec 13 02:44:08 np0005558241 python3[139939]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 13 02:44:08 np0005558241 ceph-mds[98037]: mds.beacon.cephfs.compute-0.gfdyct missed beacon ack from the monitors
Dec 13 02:44:08 np0005558241 python3.9[140091]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:44:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:44:09
Dec 13 02:44:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:44:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:44:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'backups']
Dec 13 02:44:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:44:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).mds e4 check_health: resetting beacon timeouts due to mon delay (slow election?) of 1e+01 seconds
Dec 13 02:44:09 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.508172989s, txc = 0x562b52470000, txc bytes = 2116, txc ios = 1, txc cost = 672116, txc onodes = 1, DB updates = 3, DB bytes = 1875, cost max = 95489052 on 2025-12-13T07:32:22.232711+0000, txc max = 156 on 2025-12-13T07:32:31.461856+0000
Dec 13 02:44:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:44:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 02:44:09 np0005558241 python3.9[140216]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611848.3436406-157-84008507947326/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:44:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:44:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:44:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:44:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:44:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:44:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:44:10 np0005558241 python3.9[140368]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:44:11 np0005558241 python3.9[140493]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611849.947917-172-22640064383536/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:44:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s
Dec 13 02:44:11 np0005558241 python3.9[140646]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:44:12 np0005558241 podman[138447]: 2025-12-13 07:44:12.280264487 +0000 UTC m=+16.212332969 container remove 4e37059bc8bbec82898908aeea9f194020027f8343f023550208d6ce4a4d95ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:44:12 np0005558241 systemd[1]: libpod-conmon-4e37059bc8bbec82898908aeea9f194020027f8343f023550208d6ce4a4d95ef.scope: Deactivated successfully.
Dec 13 02:44:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:44:12 np0005558241 python3.9[140771]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611851.2421317-187-162447920418229/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:44:13 np0005558241 python3.9[140923]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:44:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s
Dec 13 02:44:13 np0005558241 python3.9[141048]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611852.6352932-202-9543950482535/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:44:14 np0005558241 python3.9[141200]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:44:15 np0005558241 python3.9[141325]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765611853.9936495-217-196734185703588/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:44:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 02:44:16 np0005558241 python3.9[141477]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:44:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:44:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:44:16 np0005558241 python3.9[141629]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:44:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:44:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec 13 02:44:17 np0005558241 python3.9[141784]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:44:18 np0005558241 ceph-mon[76537]: log_channel(cluster) log [WRN] : Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Dec 13 02:44:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:44:18 np0005558241 python3.9[141936]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:44:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:44:19 np0005558241 ceph-mon[76537]: Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Dec 13 02:44:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:44:19 np0005558241 python3.9[142114]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:44:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 0 B/s wr, 6 op/s
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:44:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:44:20 np0005558241 python3.9[142268]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:44:21 np0005558241 python3.9[142423]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:44:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:44:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 02:44:22 np0005558241 python3.9[142573]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:44:23 np0005558241 python3.9[142726]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:cb:58:d7:dd" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:44:23 np0005558241 ovs-vsctl[142727]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:cb:58:d7:dd external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 13 02:44:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 02:44:24 np0005558241 python3.9[142879]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:44:25 np0005558241 python3.9[143034]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:44:25 np0005558241 ovs-vsctl[143035]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 13 02:44:25 np0005558241 python3.9[143185]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:44:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 0 B/s wr, 10 op/s
Dec 13 02:44:26 np0005558241 python3.9[143339]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:44:27 np0005558241 python3.9[143491]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:44:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 9 op/s
Dec 13 02:44:27 np0005558241 python3.9[143569]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:44:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:44:28 np0005558241 python3.9[143721]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:44:29 np0005558241 python3.9[143799]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:44:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 0 B/s wr, 11 op/s
Dec 13 02:44:30 np0005558241 python3.9[143951]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:44:31 np0005558241 python3.9[144103]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:44:31 np0005558241 python3.9[144181]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:44:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 0 B/s wr, 6 op/s
Dec 13 02:44:32 np0005558241 python3.9[144333]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:44:33 np0005558241 python3.9[144411]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:44:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 0 B/s wr, 6 op/s
Dec 13 02:44:33 np0005558241 python3.9[144563]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:44:33 np0005558241 systemd[1]: Reloading.
Dec 13 02:44:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:44:34 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:44:34 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:44:35 np0005558241 python3.9[144753]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:44:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 0 B/s wr, 8 op/s
Dec 13 02:44:35 np0005558241 python3.9[144832]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:44:36 np0005558241 python3.9[144984]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:44:37 np0005558241 python3.9[145062]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:44:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Dec 13 02:44:38 np0005558241 python3.9[145214]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:44:38 np0005558241 systemd[1]: Reloading.
Dec 13 02:44:38 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:44:38 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:44:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:44:39 np0005558241 systemd[1]: Starting Create netns directory...
Dec 13 02:44:39 np0005558241 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 02:44:39 np0005558241 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 02:44:39 np0005558241 systemd[1]: Finished Create netns directory.
Dec 13 02:44:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 02:44:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:44:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:44:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:44:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:44:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:44:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:44:40 np0005558241 python3.9[145408]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:44:40 np0005558241 python3.9[145560]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:44:41 np0005558241 python3.9[145683]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765611880.4048123-468-199416857739318/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:44:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s
Dec 13 02:44:42 np0005558241 python3.9[145835]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:44:43 np0005558241 python3.9[145987]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:44:43 np0005558241 python3.9[146110]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765611882.5831194-493-218056889430665/.source.json _original_basename=.dp306p0o follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:44:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s
Dec 13 02:44:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:44:44 np0005558241 python3.9[146263]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:44:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 31 op/s
Dec 13 02:44:46 np0005558241 python3.9[146690]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 13 02:44:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Dec 13 02:44:47 np0005558241 python3.9[146842]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 02:44:48 np0005558241 python3.9[146994]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 13 02:44:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:44:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Dec 13 02:44:50 np0005558241 python3[147173]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 02:44:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Dec 13 02:44:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Dec 13 02:44:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:44:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 39 op/s
Dec 13 02:44:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 0 B/s wr, 10 op/s
Dec 13 02:44:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:44:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 0 B/s wr, 10 op/s
Dec 13 02:45:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 02:45:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 02:45:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 02:45:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:08 np0005558241 ceph-mds[98037]: mds.beacon.cephfs.compute-0.gfdyct missed beacon ack from the monitors
Dec 13 02:45:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:45:09
Dec 13 02:45:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:45:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:45:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.control', 'images', 'default.rgw.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', '.rgw.root']
Dec 13 02:45:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:45:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:45:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:45:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:45:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:45:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:45:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:45:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:12 np0005558241 ceph-mds[98037]: mds.beacon.cephfs.compute-0.gfdyct missed beacon ack from the monitors
Dec 13 02:45:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:16 np0005558241 ceph-mds[98037]: mds.beacon.cephfs.compute-0.gfdyct missed beacon ack from the monitors
Dec 13 02:45:16 np0005558241 ceph-mds[98037]: mds.beacon.cephfs.compute-0.gfdyct MDS connection to Monitors appears to be laggy; 16.1464s since last acked beacon
Dec 13 02:45:16 np0005558241 ceph-mds[98037]: mds.0.3 skipping upkeep work because connection to Monitors appears laggy
Dec 13 02:45:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:18 np0005558241 ceph-mds[98037]: mds.beacon.cephfs.compute-0.gfdyct  MDS is no longer laggy
Dec 13 02:45:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:45:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:45:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).mds e4 check_health: resetting beacon timeouts due to mon delay (slow election?) of 2e+01 seconds
Dec 13 02:45:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:45:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:24 np0005558241 podman[147374]: 2025-12-13 07:45:24.452871576 +0000 UTC m=+3.273471030 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 02:45:25 np0005558241 podman[147374]: 2025-12-13 07:45:25.591754335 +0000 UTC m=+4.412353789 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:45:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:45:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:28 np0005558241 podman[147184]: 2025-12-13 07:45:28.644027326 +0000 UTC m=+37.898632445 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Dec 13 02:45:28 np0005558241 podman[147458]: 2025-12-13 07:45:28.803092963 +0000 UTC m=+0.039908524 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Dec 13 02:45:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:30 np0005558241 podman[147458]: 2025-12-13 07:45:30.905602815 +0000 UTC m=+2.142418376 container create 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 02:45:30 np0005558241 python3[147173]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de
Dec 13 02:45:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:45:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:45:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:32 np0005558241 python3.9[147772]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:45:33 np0005558241 python3.9[147926]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:45:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:34 np0005558241 python3.9[148002]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:45:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:45:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:45:34 np0005558241 python3.9[148153]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765611934.1023197-581-250658573380038/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:45:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:45:35 np0005558241 python3.9[148279]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 02:45:35 np0005558241 systemd[1]: Reloading.
Dec 13 02:45:35 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:45:35 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:45:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:45:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:45:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:45:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:45:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:45:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:45:37 np0005558241 python3.9[148420]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:45:37 np0005558241 systemd[1]: Reloading.
Dec 13 02:45:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:37 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:45:37 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:45:38 np0005558241 systemd[1]: Starting ovn_controller container...
Dec 13 02:45:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:45:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:45:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:45:39 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:45:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9187c6996e7c6cf323191ce7b42fe7a6b0c54aedb21105603671b673a898fb2d/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 13 02:45:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:45:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:45:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:45:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:45:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:45:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:45:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:45:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:45:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:45:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:45:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:45:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:45:40 np0005558241 systemd[1]: Started /usr/bin/podman healthcheck run 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1.
Dec 13 02:45:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:45:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:45:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:45:41 np0005558241 podman[148461]: 2025-12-13 07:45:41.00888901 +0000 UTC m=+2.193404762 container init 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: + sudo -E kolla_set_configs
Dec 13 02:45:41 np0005558241 podman[148461]: 2025-12-13 07:45:41.050244528 +0000 UTC m=+2.234760280 container start 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 02:45:41 np0005558241 systemd[1]: Created slice User Slice of UID 0.
Dec 13 02:45:41 np0005558241 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 13 02:45:41 np0005558241 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 13 02:45:41 np0005558241 systemd[1]: Starting User Manager for UID 0...
Dec 13 02:45:41 np0005558241 systemd[148567]: Queued start job for default target Main User Target.
Dec 13 02:45:41 np0005558241 systemd[148567]: Created slice User Application Slice.
Dec 13 02:45:41 np0005558241 systemd[148567]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 13 02:45:41 np0005558241 systemd[148567]: Started Daily Cleanup of User's Temporary Directories.
Dec 13 02:45:41 np0005558241 systemd[148567]: Reached target Paths.
Dec 13 02:45:41 np0005558241 systemd[148567]: Reached target Timers.
Dec 13 02:45:41 np0005558241 systemd[148567]: Starting D-Bus User Message Bus Socket...
Dec 13 02:45:41 np0005558241 systemd[148567]: Starting Create User's Volatile Files and Directories...
Dec 13 02:45:41 np0005558241 systemd[148567]: Listening on D-Bus User Message Bus Socket.
Dec 13 02:45:41 np0005558241 systemd[148567]: Reached target Sockets.
Dec 13 02:45:41 np0005558241 systemd[148567]: Finished Create User's Volatile Files and Directories.
Dec 13 02:45:41 np0005558241 systemd[148567]: Reached target Basic System.
Dec 13 02:45:41 np0005558241 systemd[148567]: Reached target Main User Target.
Dec 13 02:45:41 np0005558241 systemd[148567]: Startup finished in 160ms.
Dec 13 02:45:41 np0005558241 systemd[1]: Started User Manager for UID 0.
Dec 13 02:45:41 np0005558241 systemd[1]: Started Session c1 of User root.
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: INFO:__main__:Validating config file
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: INFO:__main__:Writing out command to execute
Dec 13 02:45:41 np0005558241 systemd[1]: session-c1.scope: Deactivated successfully.
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: ++ cat /run_command
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: + ARGS=
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: + sudo kolla_copy_cacerts
Dec 13 02:45:41 np0005558241 systemd[1]: Started Session c2 of User root.
Dec 13 02:45:41 np0005558241 systemd[1]: session-c2.scope: Deactivated successfully.
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: + [[ ! -n '' ]]
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: + . kolla_extend_start
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: + umask 0022
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 13 02:45:41 np0005558241 NetworkManager[50376]: <info>  [1765611941.5095] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec 13 02:45:41 np0005558241 NetworkManager[50376]: <info>  [1765611941.5108] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 02:45:41 np0005558241 NetworkManager[50376]: <warn>  [1765611941.5113] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 02:45:41 np0005558241 NetworkManager[50376]: <info>  [1765611941.5125] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec 13 02:45:41 np0005558241 NetworkManager[50376]: <info>  [1765611941.5134] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec 13 02:45:41 np0005558241 NetworkManager[50376]: <info>  [1765611941.5140] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 13 02:45:41 np0005558241 kernel: br-int: entered promiscuous mode
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00019|main|INFO|OVS feature set changed, force recompute.
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00020|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00021|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00022|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00023|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec 13 02:45:41 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:41Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec 13 02:45:41 np0005558241 systemd-udevd[148594]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:45:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:42 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:42Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 13 02:45:42 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:42Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 13 02:45:42 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:42Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 13 02:45:42 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:42Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 13 02:45:42 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:42Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 13 02:45:42 np0005558241 ovn_controller[148476]: 2025-12-13T07:45:42Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 13 02:45:42 np0005558241 NetworkManager[50376]: <info>  [1765611942.4019] manager: (ovn-201fda-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec 13 02:45:42 np0005558241 kernel: genev_sys_6081: entered promiscuous mode
Dec 13 02:45:42 np0005558241 systemd-udevd[148596]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:45:42 np0005558241 NetworkManager[50376]: <info>  [1765611942.4318] device (genev_sys_6081): carrier: link connected
Dec 13 02:45:42 np0005558241 NetworkManager[50376]: <info>  [1765611942.4325] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Dec 13 02:45:42 np0005558241 edpm-start-podman-container[148461]: ovn_controller
Dec 13 02:45:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:45:42 np0005558241 podman[148547]: 2025-12-13 07:45:42.554993127 +0000 UTC m=+1.475781206 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:45:42 np0005558241 podman[148547]: 2025-12-13 07:45:42.777891986 +0000 UTC m=+1.698680045 container create e17e12059333f05c7f40eace15d403954c05ce8e21e65db964d4ca3267b0635d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_tharp, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:45:43 np0005558241 podman[148545]: 2025-12-13 07:45:43.057613436 +0000 UTC m=+1.994825080 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 02:45:43 np0005558241 systemd[1]: Started libpod-conmon-e17e12059333f05c7f40eace15d403954c05ce8e21e65db964d4ca3267b0635d.scope.
Dec 13 02:45:43 np0005558241 edpm-start-podman-container[148460]: Creating additional drop-in dependency for "ovn_controller" (2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1)
Dec 13 02:45:43 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:45:43 np0005558241 systemd[1]: Reloading.
Dec 13 02:45:43 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:45:43 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:45:43 np0005558241 systemd[1]: Started ovn_controller container.
Dec 13 02:45:43 np0005558241 podman[148547]: 2025-12-13 07:45:43.535992857 +0000 UTC m=+2.456780956 container init e17e12059333f05c7f40eace15d403954c05ce8e21e65db964d4ca3267b0635d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_tharp, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 02:45:43 np0005558241 podman[148547]: 2025-12-13 07:45:43.549176422 +0000 UTC m=+2.469964521 container start e17e12059333f05c7f40eace15d403954c05ce8e21e65db964d4ca3267b0635d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 02:45:43 np0005558241 admiring_tharp[148637]: 167 167
Dec 13 02:45:43 np0005558241 systemd[1]: libpod-e17e12059333f05c7f40eace15d403954c05ce8e21e65db964d4ca3267b0635d.scope: Deactivated successfully.
Dec 13 02:45:43 np0005558241 conmon[148637]: conmon e17e12059333f05c7f40 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e17e12059333f05c7f40eace15d403954c05ce8e21e65db964d4ca3267b0635d.scope/container/memory.events
Dec 13 02:45:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:44 np0005558241 python3.9[148845]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:45:44 np0005558241 ovs-vsctl[148846]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 13 02:45:44 np0005558241 podman[148547]: 2025-12-13 07:45:44.347533184 +0000 UTC m=+3.268321273 container attach e17e12059333f05c7f40eace15d403954c05ce8e21e65db964d4ca3267b0635d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 02:45:44 np0005558241 podman[148547]: 2025-12-13 07:45:44.349514443 +0000 UTC m=+3.270302502 container died e17e12059333f05c7f40eace15d403954c05ce8e21e65db964d4ca3267b0635d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_tharp, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:45:44 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3bf8f8f614225b05fe3931cb60cc3bbaa7a7b4dd98d275bec281791ba2533d08-merged.mount: Deactivated successfully.
Dec 13 02:45:45 np0005558241 python3.9[148999]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:45:45 np0005558241 ovs-vsctl[149001]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 13 02:45:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:45 np0005558241 python3.9[149154]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:45:45 np0005558241 ovs-vsctl[149155]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 13 02:45:46 np0005558241 systemd-logind[787]: Session 49 logged out. Waiting for processes to exit.
Dec 13 02:45:46 np0005558241 systemd[1]: session-49.scope: Deactivated successfully.
Dec 13 02:45:46 np0005558241 systemd[1]: session-49.scope: Consumed 1min 5.262s CPU time.
Dec 13 02:45:46 np0005558241 systemd-logind[787]: Removed session 49.
Dec 13 02:45:46 np0005558241 podman[148547]: 2025-12-13 07:45:46.613347075 +0000 UTC m=+5.534135164 container remove e17e12059333f05c7f40eace15d403954c05ce8e21e65db964d4ca3267b0635d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_tharp, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:45:46 np0005558241 systemd[1]: libpod-conmon-e17e12059333f05c7f40eace15d403954c05ce8e21e65db964d4ca3267b0635d.scope: Deactivated successfully.
Dec 13 02:45:46 np0005558241 podman[149187]: 2025-12-13 07:45:46.800060353 +0000 UTC m=+0.032217164 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:45:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:45:48 np0005558241 podman[149187]: 2025-12-13 07:45:48.870689729 +0000 UTC m=+2.102846490 container create 0090d202f90dbab139d1611990e2b8017ad25d78ff264895725b660ae7e4a5e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_gould, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:45:49 np0005558241 systemd[1]: Started libpod-conmon-0090d202f90dbab139d1611990e2b8017ad25d78ff264895725b660ae7e4a5e7.scope.
Dec 13 02:45:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:45:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5d6f19d078d7ebceaf2c87823c69abe738409012c59d017f748441b6af78dfe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:45:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5d6f19d078d7ebceaf2c87823c69abe738409012c59d017f748441b6af78dfe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:45:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5d6f19d078d7ebceaf2c87823c69abe738409012c59d017f748441b6af78dfe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:45:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5d6f19d078d7ebceaf2c87823c69abe738409012c59d017f748441b6af78dfe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:45:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5d6f19d078d7ebceaf2c87823c69abe738409012c59d017f748441b6af78dfe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:45:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:50 np0005558241 podman[149187]: 2025-12-13 07:45:50.092645234 +0000 UTC m=+3.324802025 container init 0090d202f90dbab139d1611990e2b8017ad25d78ff264895725b660ae7e4a5e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 02:45:50 np0005558241 podman[149187]: 2025-12-13 07:45:50.104982228 +0000 UTC m=+3.337138989 container start 0090d202f90dbab139d1611990e2b8017ad25d78ff264895725b660ae7e4a5e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_gould, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 02:45:50 np0005558241 podman[149187]: 2025-12-13 07:45:50.37875354 +0000 UTC m=+3.610910301 container attach 0090d202f90dbab139d1611990e2b8017ad25d78ff264895725b660ae7e4a5e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:45:50 np0005558241 keen_gould[149204]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:45:50 np0005558241 keen_gould[149204]: --> All data devices are unavailable
Dec 13 02:45:50 np0005558241 systemd[1]: libpod-0090d202f90dbab139d1611990e2b8017ad25d78ff264895725b660ae7e4a5e7.scope: Deactivated successfully.
Dec 13 02:45:50 np0005558241 podman[149187]: 2025-12-13 07:45:50.740621912 +0000 UTC m=+3.972778683 container died 0090d202f90dbab139d1611990e2b8017ad25d78ff264895725b660ae7e4a5e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_gould, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 02:45:51 np0005558241 systemd[1]: Stopping User Manager for UID 0...
Dec 13 02:45:51 np0005558241 systemd[148567]: Activating special unit Exit the Session...
Dec 13 02:45:51 np0005558241 systemd[148567]: Stopped target Main User Target.
Dec 13 02:45:51 np0005558241 systemd[148567]: Stopped target Basic System.
Dec 13 02:45:51 np0005558241 systemd[148567]: Stopped target Paths.
Dec 13 02:45:51 np0005558241 systemd[148567]: Stopped target Sockets.
Dec 13 02:45:51 np0005558241 systemd[148567]: Stopped target Timers.
Dec 13 02:45:51 np0005558241 systemd[148567]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 13 02:45:51 np0005558241 systemd[148567]: Closed D-Bus User Message Bus Socket.
Dec 13 02:45:51 np0005558241 systemd[148567]: Stopped Create User's Volatile Files and Directories.
Dec 13 02:45:51 np0005558241 systemd[148567]: Removed slice User Application Slice.
Dec 13 02:45:51 np0005558241 systemd[148567]: Reached target Shutdown.
Dec 13 02:45:51 np0005558241 systemd[148567]: Finished Exit the Session.
Dec 13 02:45:51 np0005558241 systemd[148567]: Reached target Exit the Session.
Dec 13 02:45:51 np0005558241 systemd[1]: user@0.service: Deactivated successfully.
Dec 13 02:45:51 np0005558241 systemd[1]: Stopped User Manager for UID 0.
Dec 13 02:45:51 np0005558241 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 13 02:45:51 np0005558241 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 13 02:45:51 np0005558241 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 13 02:45:51 np0005558241 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 13 02:45:51 np0005558241 systemd[1]: Removed slice User Slice of UID 0.
Dec 13 02:45:51 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a5d6f19d078d7ebceaf2c87823c69abe738409012c59d017f748441b6af78dfe-merged.mount: Deactivated successfully.
Dec 13 02:45:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:52 np0005558241 podman[149187]: 2025-12-13 07:45:52.444786704 +0000 UTC m=+5.676943475 container remove 0090d202f90dbab139d1611990e2b8017ad25d78ff264895725b660ae7e4a5e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:45:52 np0005558241 systemd[1]: libpod-conmon-0090d202f90dbab139d1611990e2b8017ad25d78ff264895725b660ae7e4a5e7.scope: Deactivated successfully.
Dec 13 02:45:53 np0005558241 podman[149301]: 2025-12-13 07:45:52.9485479 +0000 UTC m=+0.024419712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:45:53 np0005558241 podman[149301]: 2025-12-13 07:45:53.147681854 +0000 UTC m=+0.223553646 container create 7849cc8eec72f5e8fcc630d2a6ee40b0707165ce356f129486155ea61986106b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_engelbart, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 02:45:53 np0005558241 systemd[1]: Started libpod-conmon-7849cc8eec72f5e8fcc630d2a6ee40b0707165ce356f129486155ea61986106b.scope.
Dec 13 02:45:53 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:45:53 np0005558241 systemd-logind[787]: New session 51 of user zuul.
Dec 13 02:45:53 np0005558241 systemd[1]: Started Session 51 of User zuul.
Dec 13 02:45:53 np0005558241 podman[149301]: 2025-12-13 07:45:53.406263892 +0000 UTC m=+0.482135714 container init 7849cc8eec72f5e8fcc630d2a6ee40b0707165ce356f129486155ea61986106b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_engelbart, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:45:53 np0005558241 podman[149301]: 2025-12-13 07:45:53.413920181 +0000 UTC m=+0.489791973 container start 7849cc8eec72f5e8fcc630d2a6ee40b0707165ce356f129486155ea61986106b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:45:53 np0005558241 systemd[1]: libpod-7849cc8eec72f5e8fcc630d2a6ee40b0707165ce356f129486155ea61986106b.scope: Deactivated successfully.
Dec 13 02:45:53 np0005558241 exciting_engelbart[149320]: 167 167
Dec 13 02:45:53 np0005558241 conmon[149320]: conmon 7849cc8eec72f5e8fcc6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7849cc8eec72f5e8fcc630d2a6ee40b0707165ce356f129486155ea61986106b.scope/container/memory.events
Dec 13 02:45:53 np0005558241 podman[149301]: 2025-12-13 07:45:53.779259518 +0000 UTC m=+0.855131320 container attach 7849cc8eec72f5e8fcc630d2a6ee40b0707165ce356f129486155ea61986106b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:45:53 np0005558241 podman[149301]: 2025-12-13 07:45:53.781683368 +0000 UTC m=+0.857555160 container died 7849cc8eec72f5e8fcc630d2a6ee40b0707165ce356f129486155ea61986106b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 02:45:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:45:54 np0005558241 python3.9[149485]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:45:55 np0005558241 python3.9[149642]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:45:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:55 np0005558241 systemd[1]: var-lib-containers-storage-overlay-eea1d6cabe06ef9f6d3e134058a15a55aec19a1ef88fe8b0f3ce0cad15786e26-merged.mount: Deactivated successfully.
Dec 13 02:45:56 np0005558241 python3.9[149797]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:45:57 np0005558241 python3.9[149949]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:45:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:45:57 np0005558241 python3.9[150101]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:45:58 np0005558241 python3.9[150253]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:45:59 np0005558241 python3.9[150403]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:45:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:45:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:00 np0005558241 python3.9[150556]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 13 02:46:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:02 np0005558241 python3.9[150706]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:46:02 np0005558241 podman[149301]: 2025-12-13 07:46:02.463836264 +0000 UTC m=+9.539708116 container remove 7849cc8eec72f5e8fcc630d2a6ee40b0707165ce356f129486155ea61986106b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_engelbart, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:46:02 np0005558241 systemd[1]: libpod-conmon-7849cc8eec72f5e8fcc630d2a6ee40b0707165ce356f129486155ea61986106b.scope: Deactivated successfully.
Dec 13 02:46:02 np0005558241 podman[150761]: 2025-12-13 07:46:02.676377639 +0000 UTC m=+0.046691511 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:46:03 np0005558241 python3.9[150848]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765611961.4869998-86-86049156952025/.source follow=False _original_basename=haproxy.j2 checksum=d225e0e1c34f765c55f17e757e326dba55238d01 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:46:03 np0005558241 podman[150761]: 2025-12-13 07:46:03.560977956 +0000 UTC m=+0.931291748 container create 2c43239c0f6c75b111f61fd7a8e2ed8849b34e961ee042fd3376987c5098d18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hodgkin, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:46:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:04 np0005558241 python3.9[150998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:46:04 np0005558241 systemd[1]: Started libpod-conmon-2c43239c0f6c75b111f61fd7a8e2ed8849b34e961ee042fd3376987c5098d18a.scope.
Dec 13 02:46:04 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:46:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc7df522f098e54d1ef9f6b0f9ede8743da3bb73eb285668664df7c4915a51bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:46:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc7df522f098e54d1ef9f6b0f9ede8743da3bb73eb285668664df7c4915a51bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:46:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc7df522f098e54d1ef9f6b0f9ede8743da3bb73eb285668664df7c4915a51bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:46:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc7df522f098e54d1ef9f6b0f9ede8743da3bb73eb285668664df7c4915a51bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:46:04 np0005558241 python3.9[151125]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765611963.4023798-101-276680484654302/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:46:05 np0005558241 podman[150761]: 2025-12-13 07:46:05.2974165 +0000 UTC m=+2.667730312 container init 2c43239c0f6c75b111f61fd7a8e2ed8849b34e961ee042fd3376987c5098d18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 02:46:05 np0005558241 podman[150761]: 2025-12-13 07:46:05.312993484 +0000 UTC m=+2.683307296 container start 2c43239c0f6c75b111f61fd7a8e2ed8849b34e961ee042fd3376987c5098d18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hodgkin, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]: {
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:    "0": [
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:        {
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "devices": [
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "/dev/loop3"
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            ],
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "lv_name": "ceph_lv0",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "lv_size": "21470642176",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "name": "ceph_lv0",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "tags": {
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.cluster_name": "ceph",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.crush_device_class": "",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.encrypted": "0",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.objectstore": "bluestore",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.osd_id": "0",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.type": "block",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.vdo": "0",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.with_tpm": "0"
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            },
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "type": "block",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "vg_name": "ceph_vg0"
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:        }
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:    ],
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:    "1": [
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:        {
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "devices": [
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "/dev/loop4"
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            ],
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "lv_name": "ceph_lv1",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "lv_size": "21470642176",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "name": "ceph_lv1",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "tags": {
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.cluster_name": "ceph",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.crush_device_class": "",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.encrypted": "0",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.objectstore": "bluestore",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.osd_id": "1",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.type": "block",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.vdo": "0",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.with_tpm": "0"
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            },
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "type": "block",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "vg_name": "ceph_vg1"
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:        }
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:    ],
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:    "2": [
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:        {
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "devices": [
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "/dev/loop5"
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            ],
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "lv_name": "ceph_lv2",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "lv_size": "21470642176",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "name": "ceph_lv2",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "tags": {
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.cluster_name": "ceph",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.crush_device_class": "",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.encrypted": "0",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.objectstore": "bluestore",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.osd_id": "2",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.type": "block",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.vdo": "0",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:                "ceph.with_tpm": "0"
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            },
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "type": "block",
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:            "vg_name": "ceph_vg2"
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:        }
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]:    ]
Dec 13 02:46:05 np0005558241 gracious_hodgkin[151054]: }
Dec 13 02:46:05 np0005558241 systemd[1]: libpod-2c43239c0f6c75b111f61fd7a8e2ed8849b34e961ee042fd3376987c5098d18a.scope: Deactivated successfully.
Dec 13 02:46:05 np0005558241 python3.9[151279]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:46:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:46:06 np0005558241 python3.9[151378]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:46:06 np0005558241 podman[150761]: 2025-12-13 07:46:06.83098807 +0000 UTC m=+4.201301882 container attach 2c43239c0f6c75b111f61fd7a8e2ed8849b34e961ee042fd3376987c5098d18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 13 02:46:06 np0005558241 podman[150761]: 2025-12-13 07:46:06.834490436 +0000 UTC m=+4.204804268 container died 2c43239c0f6c75b111f61fd7a8e2ed8849b34e961ee042fd3376987c5098d18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:46:07 np0005558241 systemd[1]: var-lib-containers-storage-overlay-fc7df522f098e54d1ef9f6b0f9ede8743da3bb73eb285668664df7c4915a51bf-merged.mount: Deactivated successfully.
Dec 13 02:46:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:08 np0005558241 podman[150761]: 2025-12-13 07:46:08.118677982 +0000 UTC m=+5.488991784 container remove 2c43239c0f6c75b111f61fd7a8e2ed8849b34e961ee042fd3376987c5098d18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:46:08 np0005558241 systemd[1]: libpod-conmon-2c43239c0f6c75b111f61fd7a8e2ed8849b34e961ee042fd3376987c5098d18a.scope: Deactivated successfully.
Dec 13 02:46:08 np0005558241 podman[151445]: 2025-12-13 07:46:08.705466744 +0000 UTC m=+0.025591021 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:46:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:46:09
Dec 13 02:46:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:46:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:46:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'volumes', 'default.rgw.log', '.rgw.root']
Dec 13 02:46:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:46:09 np0005558241 podman[151445]: 2025-12-13 07:46:09.1369254 +0000 UTC m=+0.457049647 container create da299b86a5c180ee01769bfd8aa0aeca597a9c4d826f4d13b06712907790f8a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bassi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:46:09 np0005558241 python3.9[151610]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 02:46:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:46:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:46:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:46:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:46:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:46:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:46:10 np0005558241 systemd[1]: Started libpod-conmon-da299b86a5c180ee01769bfd8aa0aeca597a9c4d826f4d13b06712907790f8a7.scope.
Dec 13 02:46:10 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:46:10 np0005558241 python3.9[151769]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:46:10 np0005558241 podman[151445]: 2025-12-13 07:46:10.874523115 +0000 UTC m=+2.194647392 container init da299b86a5c180ee01769bfd8aa0aeca597a9c4d826f4d13b06712907790f8a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bassi, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 02:46:10 np0005558241 podman[151445]: 2025-12-13 07:46:10.888197051 +0000 UTC m=+2.208321338 container start da299b86a5c180ee01769bfd8aa0aeca597a9c4d826f4d13b06712907790f8a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:46:10 np0005558241 gallant_bassi[151682]: 167 167
Dec 13 02:46:10 np0005558241 systemd[1]: libpod-da299b86a5c180ee01769bfd8aa0aeca597a9c4d826f4d13b06712907790f8a7.scope: Deactivated successfully.
Dec 13 02:46:11 np0005558241 python3.9[151903]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765611970.0946171-138-102134419031184/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:46:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:11 np0005558241 python3.9[152053]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:46:12 np0005558241 podman[151445]: 2025-12-13 07:46:12.243196712 +0000 UTC m=+3.563320969 container attach da299b86a5c180ee01769bfd8aa0aeca597a9c4d826f4d13b06712907790f8a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:46:12 np0005558241 podman[151445]: 2025-12-13 07:46:12.244548815 +0000 UTC m=+3.564673082 container died da299b86a5c180ee01769bfd8aa0aeca597a9c4d826f4d13b06712907790f8a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bassi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:46:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:46:12 np0005558241 python3.9[152174]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765611971.4487677-138-247360459549333/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:46:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:13 np0005558241 python3.9[152336]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:46:14 np0005558241 python3.9[152457]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765611973.351931-182-167796812692674/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:46:14 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1f07f9dc0713785543fe465f8b03514bc0d93dd00c6daa49c567b3e3e414a943-merged.mount: Deactivated successfully.
Dec 13 02:46:15 np0005558241 python3.9[152607]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:46:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:15 np0005558241 python3.9[152728]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765611974.67835-182-186876017389400/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:46:16 np0005558241 python3.9[152878]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:46:17 np0005558241 python3.9[153032]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:46:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:18 np0005558241 python3.9[153184]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:46:18 np0005558241 podman[151445]: 2025-12-13 07:46:18.314891575 +0000 UTC m=+9.635015862 container remove da299b86a5c180ee01769bfd8aa0aeca597a9c4d826f4d13b06712907790f8a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bassi, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 02:46:18 np0005558241 ovn_controller[148476]: 2025-12-13T07:46:18Z|00025|memory|INFO|17024 kB peak resident set size after 36.9 seconds
Dec 13 02:46:18 np0005558241 ovn_controller[148476]: 2025-12-13T07:46:18Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec 13 02:46:18 np0005558241 systemd[1]: libpod-conmon-da299b86a5c180ee01769bfd8aa0aeca597a9c4d826f4d13b06712907790f8a7.scope: Deactivated successfully.
Dec 13 02:46:18 np0005558241 podman[152200]: 2025-12-13 07:46:18.438365526 +0000 UTC m=+5.259419530 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 13 02:46:18 np0005558241 podman[153284]: 2025-12-13 07:46:18.520852188 +0000 UTC m=+0.030417420 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:46:18 np0005558241 python3.9[153279]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:46:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:46:19 np0005558241 python3.9[153450]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:46:19 np0005558241 podman[153284]: 2025-12-13 07:46:19.612945014 +0000 UTC m=+1.122510226 container create 343efc08deeaf406bebf5e8f6156bb2cb0c84f637a7f601a7b834df46c44d787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:46:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:19 np0005558241 python3.9[153528]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:46:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:46:20 np0005558241 systemd[1]: Started libpod-conmon-343efc08deeaf406bebf5e8f6156bb2cb0c84f637a7f601a7b834df46c44d787.scope.
Dec 13 02:46:20 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:46:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56300f27f9b54e4dbd42a343b8163ad618375c37d2055c88a665b6c55feb6010/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:46:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56300f27f9b54e4dbd42a343b8163ad618375c37d2055c88a665b6c55feb6010/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:46:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56300f27f9b54e4dbd42a343b8163ad618375c37d2055c88a665b6c55feb6010/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:46:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56300f27f9b54e4dbd42a343b8163ad618375c37d2055c88a665b6c55feb6010/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:46:20 np0005558241 python3.9[153686]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:46:21 np0005558241 podman[153284]: 2025-12-13 07:46:21.011724764 +0000 UTC m=+2.521289986 container init 343efc08deeaf406bebf5e8f6156bb2cb0c84f637a7f601a7b834df46c44d787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:46:21 np0005558241 podman[153284]: 2025-12-13 07:46:21.028985299 +0000 UTC m=+2.538550551 container start 343efc08deeaf406bebf5e8f6156bb2cb0c84f637a7f601a7b834df46c44d787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:46:21 np0005558241 podman[153284]: 2025-12-13 07:46:21.10091436 +0000 UTC m=+2.610479602 container attach 343efc08deeaf406bebf5e8f6156bb2cb0c84f637a7f601a7b834df46c44d787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jemison, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:46:21 np0005558241 python3.9[153840]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:46:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:21 np0005558241 python3.9[153969]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:46:21 np0005558241 lvm[153994]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:46:21 np0005558241 lvm[153991]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:46:21 np0005558241 lvm[153994]: VG ceph_vg1 finished
Dec 13 02:46:21 np0005558241 lvm[153991]: VG ceph_vg0 finished
Dec 13 02:46:21 np0005558241 lvm[153996]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:46:21 np0005558241 lvm[153996]: VG ceph_vg2 finished
Dec 13 02:46:21 np0005558241 lvm[154001]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:46:21 np0005558241 lvm[154001]: VG ceph_vg1 finished
Dec 13 02:46:21 np0005558241 zealous_jemison[153680]: {}
Dec 13 02:46:22 np0005558241 systemd[1]: libpod-343efc08deeaf406bebf5e8f6156bb2cb0c84f637a7f601a7b834df46c44d787.scope: Deactivated successfully.
Dec 13 02:46:22 np0005558241 podman[153284]: 2025-12-13 07:46:22.012823388 +0000 UTC m=+3.522388600 container died 343efc08deeaf406bebf5e8f6156bb2cb0c84f637a7f601a7b834df46c44d787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jemison, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:46:22 np0005558241 systemd[1]: libpod-343efc08deeaf406bebf5e8f6156bb2cb0c84f637a7f601a7b834df46c44d787.scope: Consumed 1.530s CPU time.
Dec 13 02:46:22 np0005558241 python3.9[154161]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:46:23 np0005558241 python3.9[154239]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:46:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:46:24 np0005558241 python3.9[154391]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:46:24 np0005558241 systemd[1]: Reloading.
Dec 13 02:46:24 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:46:24 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:46:25 np0005558241 systemd[1]: var-lib-containers-storage-overlay-56300f27f9b54e4dbd42a343b8163ad618375c37d2055c88a665b6c55feb6010-merged.mount: Deactivated successfully.
Dec 13 02:46:25 np0005558241 python3.9[154581]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:46:25 np0005558241 podman[153284]: 2025-12-13 07:46:25.685043487 +0000 UTC m=+7.194608699 container remove 343efc08deeaf406bebf5e8f6156bb2cb0c84f637a7f601a7b834df46c44d787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 02:46:25 np0005558241 systemd[1]: libpod-conmon-343efc08deeaf406bebf5e8f6156bb2cb0c84f637a7f601a7b834df46c44d787.scope: Deactivated successfully.
Dec 13 02:46:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:46:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:46:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:46:26 np0005558241 python3.9[154659]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:46:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:46:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:46:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:46:26 np0005558241 python3.9[154836]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:46:27 np0005558241 python3.9[154914]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:46:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:28 np0005558241 python3.9[155066]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:46:28 np0005558241 systemd[1]: Reloading.
Dec 13 02:46:28 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:46:28 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:46:28 np0005558241 systemd[1]: Starting Create netns directory...
Dec 13 02:46:28 np0005558241 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 02:46:28 np0005558241 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 02:46:28 np0005558241 systemd[1]: Finished Create netns directory.
Dec 13 02:46:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:46:29 np0005558241 python3.9[155259]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:46:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:30 np0005558241 python3.9[155411]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:46:31 np0005558241 python3.9[155534]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765611989.8929095-333-218678422530680/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:46:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:31 np0005558241 python3.9[155686]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:46:32 np0005558241 python3.9[155838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:46:33 np0005558241 python3.9[155961]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765611992.1144059-358-4394635199678/.source.json _original_basename=.c1pdmt4l follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:46:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:33 np0005558241 python3.9[156113]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:46:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:46:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:36 np0005558241 python3.9[156540]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 13 02:46:37 np0005558241 python3.9[156694]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 02:46:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:38 np0005558241 python3.9[156846]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 13 02:46:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:46:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:46:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:46:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:46:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:46:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:46:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:46:40 np0005558241 python3[157025]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 02:46:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:46:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:49 np0005558241 podman[157071]: 2025-12-13 07:46:49.117609654 +0000 UTC m=+0.186463066 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 02:46:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:46:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:46:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:46:58 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 02:46:58 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 02:46:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:47:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:47:09
Dec 13 02:47:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:47:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:47:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.log', 'volumes']
Dec 13 02:47:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:47:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:47:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:47:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:47:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:47:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:47:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:47:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:47:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:16 np0005558241 ceph-mds[98037]: mds.beacon.cephfs.compute-0.gfdyct missed beacon ack from the monitors
Dec 13 02:47:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:47:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:47:20 np0005558241 ceph-mds[98037]: mds.beacon.cephfs.compute-0.gfdyct missed beacon ack from the monitors
Dec 13 02:47:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:24 np0005558241 ceph-mds[98037]: mds.beacon.cephfs.compute-0.gfdyct missed beacon ack from the monitors
Dec 13 02:47:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:26 np0005558241 ceph-mds[98037]: mds.beacon.cephfs.compute-0.gfdyct MDS connection to Monitors appears to be laggy; 18.1454s since last acked beacon
Dec 13 02:47:26 np0005558241 ceph-mds[98037]: mds.0.3 skipping upkeep work because connection to Monitors appears laggy
Dec 13 02:47:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:28 np0005558241 ceph-mds[98037]: mds.beacon.cephfs.compute-0.gfdyct missed beacon ack from the monitors
Dec 13 02:47:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:31 np0005558241 ceph-mds[98037]: mds.0.3 skipping upkeep work because connection to Monitors appears laggy
Dec 13 02:47:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:32 np0005558241 ceph-mds[98037]: mds.beacon.cephfs.compute-0.gfdyct missed beacon ack from the monitors
Dec 13 02:47:32 np0005558241 ceph-mds[98037]: mds.beacon.cephfs.compute-0.gfdyct  MDS is no longer laggy
Dec 13 02:47:33 np0005558241 podman[157161]: 2025-12-13 07:47:33.010877561 +0000 UTC m=+13.100691318 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true)
Dec 13 02:47:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).mds e4 check_health: resetting beacon timeouts due to mon delay (slow election?) of 2e+01 seconds
Dec 13 02:47:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:47:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:47:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:47:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:47:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:47:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:47:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:47:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:47:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:47:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:47:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:47:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:47:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:47:35 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec 13 02:47:35 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:47:35.605390) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 02:47:35 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec 13 02:47:35 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612055605497, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2106, "num_deletes": 251, "total_data_size": 3673654, "memory_usage": 3735888, "flush_reason": "Manual Compaction"}
Dec 13 02:47:35 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec 13 02:47:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:37 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612057574809, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3587539, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9867, "largest_seqno": 11972, "table_properties": {"data_size": 3577860, "index_size": 6109, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19403, "raw_average_key_size": 19, "raw_value_size": 3558511, "raw_average_value_size": 3657, "num_data_blocks": 276, "num_entries": 973, "num_filter_entries": 973, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611714, "oldest_key_time": 1765611714, "file_creation_time": 1765612055, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:47:37 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 1969454 microseconds, and 7483 cpu microseconds.
Dec 13 02:47:37 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:47:37 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:47:37 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:47:37 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:47:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:47:37.574860) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3587539 bytes OK
Dec 13 02:47:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:47:37.574883) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec 13 02:47:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:47:37.949896) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec 13 02:47:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:47:37.949957) EVENT_LOG_v1 {"time_micros": 1765612057949944, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 02:47:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:47:37.949990) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 02:47:37 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3664685, prev total WAL file size 3672672, number of live WAL files 2.
Dec 13 02:47:37 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:47:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:47:37.951969) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec 13 02:47:37 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 02:47:37 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3503KB)], [26(6560KB)]
Dec 13 02:47:37 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612057952153, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10305695, "oldest_snapshot_seqno": -1}
Dec 13 02:47:38 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3853 keys, 8464301 bytes, temperature: kUnknown
Dec 13 02:47:38 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612058807566, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8464301, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8434451, "index_size": 19116, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 92960, "raw_average_key_size": 24, "raw_value_size": 8360854, "raw_average_value_size": 2169, "num_data_blocks": 825, "num_entries": 3853, "num_filter_entries": 3853, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765612057, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:47:38 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:47:39 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:47:38.807964) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8464301 bytes
Dec 13 02:47:39 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:47:39.009812) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 12.0 rd, 9.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 6.4 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(5.2) write-amplify(2.4) OK, records in: 4367, records dropped: 514 output_compression: NoCompression
Dec 13 02:47:39 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:47:39.009850) EVENT_LOG_v1 {"time_micros": 1765612059009833, "job": 10, "event": "compaction_finished", "compaction_time_micros": 855548, "compaction_time_cpu_micros": 26138, "output_level": 6, "num_output_files": 1, "total_output_size": 8464301, "num_input_records": 4367, "num_output_records": 3853, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 02:47:39 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:47:39 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612059010553, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec 13 02:47:39 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:47:39 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612059011499, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec 13 02:47:39 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:47:37.951869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:47:39 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:47:39.011600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:47:39 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:47:39.011610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:47:39 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:47:39.011613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:47:39 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:47:39.011616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:47:39 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:47:39.011619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:47:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:47:39 np0005558241 podman[157037]: 2025-12-13 07:47:39.701767076 +0000 UTC m=+59.289592691 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 02:47:39 np0005558241 podman[157366]: 2025-12-13 07:47:39.652309558 +0000 UTC m=+0.693601786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:47:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:47:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:47:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:47:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:47:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:47:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:47:40 np0005558241 podman[157366]: 2025-12-13 07:47:40.302584164 +0000 UTC m=+1.343876352 container create ceae33bc01d0f6e709290bd9db76f9768ad41513a479a842563c1ae28a0f2de5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:47:41 np0005558241 systemd[1]: Started libpod-conmon-ceae33bc01d0f6e709290bd9db76f9768ad41513a479a842563c1ae28a0f2de5.scope.
Dec 13 02:47:41 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:47:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:42 np0005558241 podman[157366]: 2025-12-13 07:47:42.985194132 +0000 UTC m=+4.026486370 container init ceae33bc01d0f6e709290bd9db76f9768ad41513a479a842563c1ae28a0f2de5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:47:42 np0005558241 podman[157366]: 2025-12-13 07:47:42.997768376 +0000 UTC m=+4.039060554 container start ceae33bc01d0f6e709290bd9db76f9768ad41513a479a842563c1ae28a0f2de5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:47:43 np0005558241 upbeat_golick[157393]: 167 167
Dec 13 02:47:43 np0005558241 systemd[1]: libpod-ceae33bc01d0f6e709290bd9db76f9768ad41513a479a842563c1ae28a0f2de5.scope: Deactivated successfully.
Dec 13 02:47:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:47:44 np0005558241 podman[157366]: 2025-12-13 07:47:44.692482121 +0000 UTC m=+5.733774359 container attach ceae33bc01d0f6e709290bd9db76f9768ad41513a479a842563c1ae28a0f2de5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:47:44 np0005558241 podman[157366]: 2025-12-13 07:47:44.693045175 +0000 UTC m=+5.734337353 container died ceae33bc01d0f6e709290bd9db76f9768ad41513a479a842563c1ae28a0f2de5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 02:47:45 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6b5c0b17d9b7c6a5f689f1c9eb59b90249d9151eb2422b9a2b1c57b7345f07ba-merged.mount: Deactivated successfully.
Dec 13 02:47:45 np0005558241 podman[157366]: 2025-12-13 07:47:45.555723934 +0000 UTC m=+6.597016112 container remove ceae33bc01d0f6e709290bd9db76f9768ad41513a479a842563c1ae28a0f2de5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_golick, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 02:47:45 np0005558241 systemd[1]: libpod-conmon-ceae33bc01d0f6e709290bd9db76f9768ad41513a479a842563c1ae28a0f2de5.scope: Deactivated successfully.
Dec 13 02:47:45 np0005558241 podman[157407]: 2025-12-13 07:47:45.585455144 +0000 UTC m=+3.981800167 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 02:47:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:45 np0005558241 podman[157407]: 2025-12-13 07:47:45.920873906 +0000 UTC m=+4.317218839 container create bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 02:47:45 np0005558241 python3[157025]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 02:47:46 np0005558241 podman[157444]: 2025-12-13 07:47:45.951407776 +0000 UTC m=+0.260820607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:47:46 np0005558241 podman[157444]: 2025-12-13 07:47:46.220780579 +0000 UTC m=+0.530193360 container create a3be31a93e959ce3684c822e93ed9b15e6dd203d14655e526bb2b2ea5e16c3d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_boyd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:47:46 np0005558241 systemd[1]: Started libpod-conmon-a3be31a93e959ce3684c822e93ed9b15e6dd203d14655e526bb2b2ea5e16c3d7.scope.
Dec 13 02:47:46 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:47:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6283be3b2e61097b81286ac691ad68ba1bf760639c520cd5de70aba5f27cd11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:47:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6283be3b2e61097b81286ac691ad68ba1bf760639c520cd5de70aba5f27cd11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:47:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6283be3b2e61097b81286ac691ad68ba1bf760639c520cd5de70aba5f27cd11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:47:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6283be3b2e61097b81286ac691ad68ba1bf760639c520cd5de70aba5f27cd11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:47:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6283be3b2e61097b81286ac691ad68ba1bf760639c520cd5de70aba5f27cd11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:47:47 np0005558241 podman[157444]: 2025-12-13 07:47:47.577783317 +0000 UTC m=+1.887196538 container init a3be31a93e959ce3684c822e93ed9b15e6dd203d14655e526bb2b2ea5e16c3d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Dec 13 02:47:47 np0005558241 podman[157444]: 2025-12-13 07:47:47.591882368 +0000 UTC m=+1.901295149 container start a3be31a93e959ce3684c822e93ed9b15e6dd203d14655e526bb2b2ea5e16c3d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_boyd, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:47:47 np0005558241 python3.9[157642]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:47:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:48 np0005558241 xenodochial_boyd[157486]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:47:48 np0005558241 xenodochial_boyd[157486]: --> All data devices are unavailable
Dec 13 02:47:48 np0005558241 systemd[1]: libpod-a3be31a93e959ce3684c822e93ed9b15e6dd203d14655e526bb2b2ea5e16c3d7.scope: Deactivated successfully.
Dec 13 02:47:48 np0005558241 podman[157444]: 2025-12-13 07:47:48.446031571 +0000 UTC m=+2.755444442 container attach a3be31a93e959ce3684c822e93ed9b15e6dd203d14655e526bb2b2ea5e16c3d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_boyd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:47:48 np0005558241 podman[157444]: 2025-12-13 07:47:48.448735387 +0000 UTC m=+2.758148218 container died a3be31a93e959ce3684c822e93ed9b15e6dd203d14655e526bb2b2ea5e16c3d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_boyd, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:47:48 np0005558241 python3.9[157822]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:47:49 np0005558241 python3.9[157898]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:47:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:47:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d6283be3b2e61097b81286ac691ad68ba1bf760639c520cd5de70aba5f27cd11-merged.mount: Deactivated successfully.
Dec 13 02:47:50 np0005558241 python3.9[158049]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765612069.4985776-446-138090698120134/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:47:50 np0005558241 python3.9[158126]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 02:47:50 np0005558241 systemd[1]: Reloading.
Dec 13 02:47:51 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:47:51 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:47:51 np0005558241 podman[157444]: 2025-12-13 07:47:51.167912789 +0000 UTC m=+5.477325610 container remove a3be31a93e959ce3684c822e93ed9b15e6dd203d14655e526bb2b2ea5e16c3d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 02:47:51 np0005558241 systemd[1]: libpod-conmon-a3be31a93e959ce3684c822e93ed9b15e6dd203d14655e526bb2b2ea5e16c3d7.scope: Deactivated successfully.
Dec 13 02:47:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:51 np0005558241 podman[158298]: 2025-12-13 07:47:51.989959845 +0000 UTC m=+0.082111710 container create b267fec8ae076cd75ce71bea94e923a8e3b66cd4d847a73e76807e839db2da2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:47:52 np0005558241 systemd[1]: Started libpod-conmon-b267fec8ae076cd75ce71bea94e923a8e3b66cd4d847a73e76807e839db2da2d.scope.
Dec 13 02:47:52 np0005558241 podman[158298]: 2025-12-13 07:47:51.945550789 +0000 UTC m=+0.037702674 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:47:52 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:47:52 np0005558241 podman[158298]: 2025-12-13 07:47:52.083802547 +0000 UTC m=+0.175954442 container init b267fec8ae076cd75ce71bea94e923a8e3b66cd4d847a73e76807e839db2da2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:47:52 np0005558241 podman[158298]: 2025-12-13 07:47:52.092952359 +0000 UTC m=+0.185104224 container start b267fec8ae076cd75ce71bea94e923a8e3b66cd4d847a73e76807e839db2da2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:47:52 np0005558241 podman[158298]: 2025-12-13 07:47:52.098523304 +0000 UTC m=+0.190675299 container attach b267fec8ae076cd75ce71bea94e923a8e3b66cd4d847a73e76807e839db2da2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:47:52 np0005558241 nice_gauss[158315]: 167 167
Dec 13 02:47:52 np0005558241 systemd[1]: libpod-b267fec8ae076cd75ce71bea94e923a8e3b66cd4d847a73e76807e839db2da2d.scope: Deactivated successfully.
Dec 13 02:47:52 np0005558241 conmon[158315]: conmon b267fec8ae076cd75ce7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b267fec8ae076cd75ce71bea94e923a8e3b66cd4d847a73e76807e839db2da2d.scope/container/memory.events
Dec 13 02:47:52 np0005558241 podman[158298]: 2025-12-13 07:47:52.103906234 +0000 UTC m=+0.196058109 container died b267fec8ae076cd75ce71bea94e923a8e3b66cd4d847a73e76807e839db2da2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 02:47:52 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4610c8716639eb756d97df134e065ac3cfbc4c6dca2e24967d6e7dec55151a7d-merged.mount: Deactivated successfully.
Dec 13 02:47:52 np0005558241 podman[158298]: 2025-12-13 07:47:52.16529791 +0000 UTC m=+0.257449775 container remove b267fec8ae076cd75ce71bea94e923a8e3b66cd4d847a73e76807e839db2da2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gauss, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Dec 13 02:47:52 np0005558241 systemd[1]: libpod-conmon-b267fec8ae076cd75ce71bea94e923a8e3b66cd4d847a73e76807e839db2da2d.scope: Deactivated successfully.
Dec 13 02:47:52 np0005558241 python3.9[158287]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:47:52 np0005558241 systemd[1]: Reloading.
Dec 13 02:47:52 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:47:52 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:47:52 np0005558241 podman[158341]: 2025-12-13 07:47:52.373525873 +0000 UTC m=+0.071218146 container create 2bccec48eb3392707b209c3170be98b2bcfcb8f639208e81e854add638de4976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_black, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 02:47:52 np0005558241 podman[158341]: 2025-12-13 07:47:52.347720018 +0000 UTC m=+0.045412311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:47:52 np0005558241 systemd[1]: Started libpod-conmon-2bccec48eb3392707b209c3170be98b2bcfcb8f639208e81e854add638de4976.scope.
Dec 13 02:47:52 np0005558241 systemd[1]: Starting ovn_metadata_agent container...
Dec 13 02:47:52 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:47:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f46d36e580ff42cea612a5d97df773a62e3b6034901a1d6a4abab3aab989238/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:47:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f46d36e580ff42cea612a5d97df773a62e3b6034901a1d6a4abab3aab989238/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:47:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f46d36e580ff42cea612a5d97df773a62e3b6034901a1d6a4abab3aab989238/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:47:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f46d36e580ff42cea612a5d97df773a62e3b6034901a1d6a4abab3aab989238/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:47:52 np0005558241 podman[158341]: 2025-12-13 07:47:52.672788459 +0000 UTC m=+0.370480762 container init 2bccec48eb3392707b209c3170be98b2bcfcb8f639208e81e854add638de4976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_black, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 02:47:52 np0005558241 podman[158341]: 2025-12-13 07:47:52.686609404 +0000 UTC m=+0.384301677 container start 2bccec48eb3392707b209c3170be98b2bcfcb8f639208e81e854add638de4976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_black, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 02:47:52 np0005558241 podman[158341]: 2025-12-13 07:47:52.693364337 +0000 UTC m=+0.391056630 container attach 2bccec48eb3392707b209c3170be98b2bcfcb8f639208e81e854add638de4976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_black, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:47:52 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:47:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46725ee06483d774fd006d48832ff9ebecdb109dcbb62e6c629034b97770cd5c/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 13 02:47:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46725ee06483d774fd006d48832ff9ebecdb109dcbb62e6c629034b97770cd5c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 02:47:52 np0005558241 systemd[1]: Started /usr/bin/podman healthcheck run bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7.
Dec 13 02:47:52 np0005558241 podman[158397]: 2025-12-13 07:47:52.832388394 +0000 UTC m=+0.255063167 container init bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: + sudo -E kolla_set_configs
Dec 13 02:47:52 np0005558241 podman[158397]: 2025-12-13 07:47:52.863449546 +0000 UTC m=+0.286124309 container start bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 02:47:52 np0005558241 edpm-start-podman-container[158397]: ovn_metadata_agent
Dec 13 02:47:52 np0005558241 edpm-start-podman-container[158394]: Creating additional drop-in dependency for "ovn_metadata_agent" (bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7)
Dec 13 02:47:52 np0005558241 podman[158420]: 2025-12-13 07:47:52.938127044 +0000 UTC m=+0.062263248 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: INFO:__main__:Validating config file
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: INFO:__main__:Copying service configuration files
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: INFO:__main__:Writing out command to execute
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 13 02:47:52 np0005558241 systemd[1]: Reloading.
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: ++ cat /run_command
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: + CMD=neutron-ovn-metadata-agent
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: + ARGS=
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: + sudo kolla_copy_cacerts
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: + [[ ! -n '' ]]
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: + . kolla_extend_start
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: Running command: 'neutron-ovn-metadata-agent'
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: + umask 0022
Dec 13 02:47:52 np0005558241 ovn_metadata_agent[158414]: + exec neutron-ovn-metadata-agent
Dec 13 02:47:53 np0005558241 nice_black[158393]: {
Dec 13 02:47:53 np0005558241 nice_black[158393]:    "0": [
Dec 13 02:47:53 np0005558241 nice_black[158393]:        {
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "devices": [
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "/dev/loop3"
Dec 13 02:47:53 np0005558241 nice_black[158393]:            ],
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "lv_name": "ceph_lv0",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "lv_size": "21470642176",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "name": "ceph_lv0",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "tags": {
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.cluster_name": "ceph",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.crush_device_class": "",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.encrypted": "0",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.objectstore": "bluestore",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.osd_id": "0",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.type": "block",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.vdo": "0",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.with_tpm": "0"
Dec 13 02:47:53 np0005558241 nice_black[158393]:            },
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "type": "block",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "vg_name": "ceph_vg0"
Dec 13 02:47:53 np0005558241 nice_black[158393]:        }
Dec 13 02:47:53 np0005558241 nice_black[158393]:    ],
Dec 13 02:47:53 np0005558241 nice_black[158393]:    "1": [
Dec 13 02:47:53 np0005558241 nice_black[158393]:        {
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "devices": [
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "/dev/loop4"
Dec 13 02:47:53 np0005558241 nice_black[158393]:            ],
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "lv_name": "ceph_lv1",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "lv_size": "21470642176",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "name": "ceph_lv1",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "tags": {
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.cluster_name": "ceph",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.crush_device_class": "",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.encrypted": "0",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.objectstore": "bluestore",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.osd_id": "1",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.type": "block",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.vdo": "0",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.with_tpm": "0"
Dec 13 02:47:53 np0005558241 nice_black[158393]:            },
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "type": "block",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "vg_name": "ceph_vg1"
Dec 13 02:47:53 np0005558241 nice_black[158393]:        }
Dec 13 02:47:53 np0005558241 nice_black[158393]:    ],
Dec 13 02:47:53 np0005558241 nice_black[158393]:    "2": [
Dec 13 02:47:53 np0005558241 nice_black[158393]:        {
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "devices": [
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "/dev/loop5"
Dec 13 02:47:53 np0005558241 nice_black[158393]:            ],
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "lv_name": "ceph_lv2",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "lv_size": "21470642176",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "name": "ceph_lv2",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "tags": {
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.cluster_name": "ceph",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.crush_device_class": "",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.encrypted": "0",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.objectstore": "bluestore",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.osd_id": "2",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.type": "block",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.vdo": "0",
Dec 13 02:47:53 np0005558241 nice_black[158393]:                "ceph.with_tpm": "0"
Dec 13 02:47:53 np0005558241 nice_black[158393]:            },
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "type": "block",
Dec 13 02:47:53 np0005558241 nice_black[158393]:            "vg_name": "ceph_vg2"
Dec 13 02:47:53 np0005558241 nice_black[158393]:        }
Dec 13 02:47:53 np0005558241 nice_black[158393]:    ]
Dec 13 02:47:53 np0005558241 nice_black[158393]: }
Dec 13 02:47:53 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:47:53 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:47:53 np0005558241 podman[158341]: 2025-12-13 07:47:53.073740048 +0000 UTC m=+0.771432321 container died 2bccec48eb3392707b209c3170be98b2bcfcb8f639208e81e854add638de4976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_black, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:47:53 np0005558241 systemd[1]: Started ovn_metadata_agent container.
Dec 13 02:47:53 np0005558241 systemd[1]: libpod-2bccec48eb3392707b209c3170be98b2bcfcb8f639208e81e854add638de4976.scope: Deactivated successfully.
Dec 13 02:47:53 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0f46d36e580ff42cea612a5d97df773a62e3b6034901a1d6a4abab3aab989238-merged.mount: Deactivated successfully.
Dec 13 02:47:53 np0005558241 podman[158341]: 2025-12-13 07:47:53.570951788 +0000 UTC m=+1.268644061 container remove 2bccec48eb3392707b209c3170be98b2bcfcb8f639208e81e854add638de4976 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_black, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:47:53 np0005558241 systemd[1]: libpod-conmon-2bccec48eb3392707b209c3170be98b2bcfcb8f639208e81e854add638de4976.scope: Deactivated successfully.
Dec 13 02:47:53 np0005558241 systemd[1]: session-51.scope: Deactivated successfully.
Dec 13 02:47:53 np0005558241 systemd[1]: session-51.scope: Consumed 1min 3.007s CPU time.
Dec 13 02:47:53 np0005558241 systemd-logind[787]: Session 51 logged out. Waiting for processes to exit.
Dec 13 02:47:53 np0005558241 systemd-logind[787]: Removed session 51.
Dec 13 02:47:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:54 np0005558241 podman[158602]: 2025-12-13 07:47:54.096892673 +0000 UTC m=+0.044358985 container create d97fbdc00497babbc87e61a4f845ec454509f7593a701069a93e308bf9052b64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:47:54 np0005558241 systemd[1]: Started libpod-conmon-d97fbdc00497babbc87e61a4f845ec454509f7593a701069a93e308bf9052b64.scope.
Dec 13 02:47:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:47:54 np0005558241 podman[158602]: 2025-12-13 07:47:54.079305677 +0000 UTC m=+0.026772009 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:47:54 np0005558241 podman[158602]: 2025-12-13 07:47:54.184996347 +0000 UTC m=+0.132462659 container init d97fbdc00497babbc87e61a4f845ec454509f7593a701069a93e308bf9052b64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_babbage, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 02:47:54 np0005558241 podman[158602]: 2025-12-13 07:47:54.194565849 +0000 UTC m=+0.142032151 container start d97fbdc00497babbc87e61a4f845ec454509f7593a701069a93e308bf9052b64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_babbage, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:47:54 np0005558241 podman[158602]: 2025-12-13 07:47:54.199084168 +0000 UTC m=+0.146550520 container attach d97fbdc00497babbc87e61a4f845ec454509f7593a701069a93e308bf9052b64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:47:54 np0005558241 boring_babbage[158618]: 167 167
Dec 13 02:47:54 np0005558241 systemd[1]: libpod-d97fbdc00497babbc87e61a4f845ec454509f7593a701069a93e308bf9052b64.scope: Deactivated successfully.
Dec 13 02:47:54 np0005558241 conmon[158618]: conmon d97fbdc00497babbc87e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d97fbdc00497babbc87e61a4f845ec454509f7593a701069a93e308bf9052b64.scope/container/memory.events
Dec 13 02:47:54 np0005558241 podman[158602]: 2025-12-13 07:47:54.203550086 +0000 UTC m=+0.151016408 container died d97fbdc00497babbc87e61a4f845ec454509f7593a701069a93e308bf9052b64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_babbage, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 02:47:54 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5359bade166853c235c88864dd463b43309995df47a5fb3b42b1ac91dcad9f74-merged.mount: Deactivated successfully.
Dec 13 02:47:54 np0005558241 podman[158602]: 2025-12-13 07:47:54.266747486 +0000 UTC m=+0.214213788 container remove d97fbdc00497babbc87e61a4f845ec454509f7593a701069a93e308bf9052b64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_babbage, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:47:54 np0005558241 systemd[1]: libpod-conmon-d97fbdc00497babbc87e61a4f845ec454509f7593a701069a93e308bf9052b64.scope: Deactivated successfully.
Dec 13 02:47:54 np0005558241 podman[158644]: 2025-12-13 07:47:54.45480557 +0000 UTC m=+0.048427624 container create 9a2c708b3327ff597b3bc56b7dc6aca4e100aca90d0e0e6b39a717e33f0914d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:47:54 np0005558241 systemd[1]: Started libpod-conmon-9a2c708b3327ff597b3bc56b7dc6aca4e100aca90d0e0e6b39a717e33f0914d7.scope.
Dec 13 02:47:54 np0005558241 podman[158644]: 2025-12-13 07:47:54.432135651 +0000 UTC m=+0.025757735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:47:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:47:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6338e606db1d95b63396191aeb9adbacee998bbed946d0a7e16e2deac84a4a56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:47:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6338e606db1d95b63396191aeb9adbacee998bbed946d0a7e16e2deac84a4a56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:47:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6338e606db1d95b63396191aeb9adbacee998bbed946d0a7e16e2deac84a4a56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:47:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6338e606db1d95b63396191aeb9adbacee998bbed946d0a7e16e2deac84a4a56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:47:54 np0005558241 podman[158644]: 2025-12-13 07:47:54.560013878 +0000 UTC m=+0.153635952 container init 9a2c708b3327ff597b3bc56b7dc6aca4e100aca90d0e0e6b39a717e33f0914d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_faraday, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:47:54 np0005558241 podman[158644]: 2025-12-13 07:47:54.569896787 +0000 UTC m=+0.163518841 container start 9a2c708b3327ff597b3bc56b7dc6aca4e100aca90d0e0e6b39a717e33f0914d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_faraday, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:47:54 np0005558241 podman[158644]: 2025-12-13 07:47:54.582352449 +0000 UTC m=+0.175974533 container attach 9a2c708b3327ff597b3bc56b7dc6aca4e100aca90d0e0e6b39a717e33f0914d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_faraday, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 02:47:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.310 158419 INFO neutron.common.config [-] Logging enabled!#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.312 158419 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.312 158419 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.313 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.313 158419 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.313 158419 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.313 158419 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.313 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.314 158419 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.314 158419 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.314 158419 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.314 158419 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.314 158419 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.314 158419 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.314 158419 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.315 158419 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.315 158419 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.315 158419 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.315 158419 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.315 158419 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.315 158419 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.315 158419 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.316 158419 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.316 158419 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.316 158419 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.316 158419 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.316 158419 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.316 158419 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.317 158419 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.317 158419 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.317 158419 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.317 158419 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.317 158419 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.318 158419 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.318 158419 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.318 158419 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.319 158419 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.319 158419 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.319 158419 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.319 158419 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.319 158419 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.319 158419 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.320 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.320 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.320 158419 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.320 158419 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.320 158419 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.320 158419 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.320 158419 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.321 158419 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.321 158419 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.321 158419 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.321 158419 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.321 158419 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.321 158419 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.322 158419 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.322 158419 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.322 158419 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.322 158419 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.322 158419 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.322 158419 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.322 158419 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.323 158419 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.323 158419 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.323 158419 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.323 158419 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.324 158419 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.324 158419 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.324 158419 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.324 158419 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.324 158419 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.324 158419 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.324 158419 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.325 158419 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.325 158419 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.325 158419 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.325 158419 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.325 158419 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.325 158419 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.326 158419 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.326 158419 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.326 158419 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.326 158419 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.326 158419 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.326 158419 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.326 158419 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.326 158419 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.327 158419 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.327 158419 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.327 158419 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.327 158419 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.327 158419 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.327 158419 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.327 158419 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.328 158419 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.328 158419 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.328 158419 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.328 158419 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.328 158419 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.328 158419 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.328 158419 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.328 158419 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.329 158419 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.329 158419 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.329 158419 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.329 158419 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.329 158419 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.329 158419 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.329 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.330 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.330 158419 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.330 158419 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.330 158419 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.330 158419 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.330 158419 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.330 158419 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.331 158419 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.331 158419 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.331 158419 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.331 158419 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.331 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.331 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.332 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.332 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.332 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.332 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.332 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.332 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.332 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.333 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.333 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.333 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.333 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.333 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.333 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.333 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.334 158419 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.334 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.334 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.334 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.334 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.334 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.334 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.335 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.335 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.335 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.335 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.335 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.335 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.335 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.336 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.336 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.336 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.336 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.336 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.336 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.337 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.337 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.337 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.337 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.337 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.337 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.337 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.337 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.338 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.338 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.338 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.338 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.338 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.338 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.339 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.339 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.339 158419 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.339 158419 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.339 158419 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.339 158419 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.339 158419 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.340 158419 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.340 158419 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.340 158419 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.340 158419 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.340 158419 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.340 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.340 158419 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.341 158419 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.341 158419 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.341 158419 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.341 158419 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.341 158419 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.341 158419 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.342 158419 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.342 158419 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.342 158419 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.342 158419 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.342 158419 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.342 158419 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.342 158419 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.343 158419 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.343 158419 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.343 158419 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.343 158419 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.343 158419 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.343 158419 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.343 158419 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.344 158419 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.344 158419 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.344 158419 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.344 158419 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.344 158419 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.344 158419 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.344 158419 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.345 158419 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.345 158419 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.345 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.345 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.345 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.345 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.346 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.346 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.346 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.346 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.346 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.347 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 lvm[158740]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:47:55 np0005558241 lvm[158739]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:47:55 np0005558241 lvm[158740]: VG ceph_vg1 finished
Dec 13 02:47:55 np0005558241 lvm[158739]: VG ceph_vg0 finished
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.347 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.347 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.347 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.347 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.348 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.348 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.348 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.348 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.348 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.348 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.348 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.348 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.349 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.349 158419 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.349 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.349 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.349 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.349 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.349 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.349 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.349 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.350 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.350 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.350 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.350 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.350 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.350 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.350 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.350 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.350 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.351 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.351 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.351 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.351 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.351 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.351 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.351 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.351 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.352 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.352 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.352 158419 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.352 158419 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.352 158419 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.352 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.352 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.352 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.352 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.353 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.353 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.353 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.353 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.353 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.353 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.353 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.353 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.353 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.354 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.354 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.354 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.354 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.354 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.354 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.354 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.354 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.355 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.355 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.355 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.355 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.355 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.355 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.355 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.355 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.356 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.356 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.356 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.356 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.356 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.356 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.356 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.356 158419 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.356 158419 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec 13 02:47:55 np0005558241 lvm[158742]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:47:55 np0005558241 lvm[158742]: VG ceph_vg2 finished
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.367 158419 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.368 158419 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.368 158419 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.368 158419 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.368 158419 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.387 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 0c490016-f399-44ae-a985-d9ff6e29d8d2 (UUID: 0c490016-f399-44ae-a985-d9ff6e29d8d2) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.417 158419 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.418 158419 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.418 158419 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.418 158419 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.421 158419 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.428 158419 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.438 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '0c490016-f399-44ae-a985-d9ff6e29d8d2'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], external_ids={}, name=0c490016-f399-44ae-a985-d9ff6e29d8d2, nb_cfg_timestamp=1765611949566, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.439 158419 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f5668644340>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.440 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.440 158419 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.441 158419 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.441 158419 INFO oslo_service.service [-] Starting 1 workers#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.448 158419 DEBUG oslo_service.service [-] Started child 158745 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.451 158745 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-982556'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.452 158419 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp21ja4fd7/privsep.sock']#033[00m
Dec 13 02:47:55 np0005558241 wonderful_faraday[158661]: {}
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.480 158745 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.481 158745 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.482 158745 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.488 158745 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.496 158745 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec 13 02:47:55 np0005558241 systemd[1]: libpod-9a2c708b3327ff597b3bc56b7dc6aca4e100aca90d0e0e6b39a717e33f0914d7.scope: Deactivated successfully.
Dec 13 02:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:55.505 158745 INFO eventlet.wsgi.server [-] (158745) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Dec 13 02:47:55 np0005558241 systemd[1]: libpod-9a2c708b3327ff597b3bc56b7dc6aca4e100aca90d0e0e6b39a717e33f0914d7.scope: Consumed 1.535s CPU time.
Dec 13 02:47:55 np0005558241 podman[158749]: 2025-12-13 07:47:55.559161781 +0000 UTC m=+0.031044033 container died 9a2c708b3327ff597b3bc56b7dc6aca4e100aca90d0e0e6b39a717e33f0914d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_faraday, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:47:55 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6338e606db1d95b63396191aeb9adbacee998bbed946d0a7e16e2deac84a4a56-merged.mount: Deactivated successfully.
Dec 13 02:47:55 np0005558241 podman[158749]: 2025-12-13 07:47:55.607631184 +0000 UTC m=+0.079513406 container remove 9a2c708b3327ff597b3bc56b7dc6aca4e100aca90d0e0e6b39a717e33f0914d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_faraday, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:47:55 np0005558241 systemd[1]: libpod-conmon-9a2c708b3327ff597b3bc56b7dc6aca4e100aca90d0e0e6b39a717e33f0914d7.scope: Deactivated successfully.
Dec 13 02:47:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:47:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:47:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:47:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:47:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:56 np0005558241 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 13 02:47:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:56.275 158419 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec 13 02:47:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:56.276 158419 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp21ja4fd7/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec 13 02:47:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:56.126 158790 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec 13 02:47:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:56.131 158790 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec 13 02:47:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:56.134 158790 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Dec 13 02:47:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:56.134 158790 INFO oslo.privsep.daemon [-] privsep daemon running as pid 158790#033[00m
Dec 13 02:47:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:56.278 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[da349d28-b63d-42a9-b5e2-e1deab9e5805]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 02:47:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:56.828 158790 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:47:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:56.828 158790 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:47:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:56.828 158790 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.406 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[189fb3ee-61d0-4e7a-a5db-6576f560a1f7]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 02:47:57 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:47:57 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.408 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, column=external_ids, values=({'neutron:ovn-metadata-id': 'a57ba301-a60e-5913-8705-ee7f28f19643'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.436 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.453 158419 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.453 158419 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.453 158419 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.453 158419 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.454 158419 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.454 158419 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.454 158419 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.454 158419 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.455 158419 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.455 158419 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.455 158419 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.455 158419 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.455 158419 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.456 158419 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.456 158419 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.456 158419 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.456 158419 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.457 158419 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.457 158419 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.457 158419 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.457 158419 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.457 158419 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.458 158419 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.458 158419 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.458 158419 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.459 158419 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.459 158419 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.459 158419 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.459 158419 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.459 158419 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.460 158419 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.460 158419 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.460 158419 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.460 158419 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.461 158419 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.461 158419 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.461 158419 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.461 158419 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.462 158419 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.462 158419 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.462 158419 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.462 158419 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.462 158419 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.463 158419 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.463 158419 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.463 158419 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.463 158419 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.463 158419 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.464 158419 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.464 158419 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.464 158419 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.464 158419 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.464 158419 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.465 158419 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.465 158419 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.465 158419 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.465 158419 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.465 158419 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.466 158419 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.466 158419 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.466 158419 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.466 158419 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.466 158419 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.467 158419 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.467 158419 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.467 158419 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.467 158419 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.468 158419 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.468 158419 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.468 158419 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.468 158419 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.468 158419 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.469 158419 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.469 158419 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.469 158419 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.469 158419 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.469 158419 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.470 158419 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.470 158419 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.470 158419 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.470 158419 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.470 158419 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.471 158419 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.471 158419 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.471 158419 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.471 158419 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.471 158419 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.472 158419 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.472 158419 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.472 158419 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.472 158419 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.472 158419 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.473 158419 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.473 158419 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.473 158419 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.473 158419 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.473 158419 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.474 158419 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.474 158419 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.474 158419 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.474 158419 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.474 158419 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.475 158419 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.475 158419 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.475 158419 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.475 158419 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.475 158419 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.475 158419 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.476 158419 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.476 158419 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.476 158419 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.477 158419 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.477 158419 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.477 158419 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.477 158419 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.477 158419 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.478 158419 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.478 158419 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.478 158419 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.478 158419 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.478 158419 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.479 158419 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.479 158419 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.479 158419 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.479 158419 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.480 158419 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.480 158419 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.480 158419 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.480 158419 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.480 158419 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.481 158419 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.481 158419 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.481 158419 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.481 158419 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.481 158419 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.482 158419 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.482 158419 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.482 158419 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.482 158419 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.483 158419 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.483 158419 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.483 158419 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.483 158419 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.483 158419 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.484 158419 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.484 158419 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.484 158419 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.484 158419 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.484 158419 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.485 158419 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.485 158419 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.485 158419 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.485 158419 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.485 158419 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.486 158419 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.486 158419 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.486 158419 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.486 158419 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.486 158419 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.486 158419 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.487 158419 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.487 158419 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.487 158419 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.487 158419 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.487 158419 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.488 158419 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.488 158419 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.488 158419 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.488 158419 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.488 158419 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.489 158419 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.489 158419 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.489 158419 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.489 158419 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.489 158419 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.490 158419 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.490 158419 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.490 158419 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.490 158419 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.490 158419 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.491 158419 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.491 158419 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.491 158419 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.491 158419 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.491 158419 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.492 158419 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.492 158419 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.492 158419 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.492 158419 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.492 158419 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.493 158419 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.493 158419 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.493 158419 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.493 158419 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.494 158419 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.494 158419 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.494 158419 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.494 158419 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.494 158419 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.495 158419 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.495 158419 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.495 158419 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.495 158419 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.495 158419 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.496 158419 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.496 158419 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.496 158419 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.496 158419 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.496 158419 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.497 158419 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.497 158419 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.497 158419 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.497 158419 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.497 158419 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.497 158419 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.498 158419 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.498 158419 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.498 158419 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.498 158419 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.498 158419 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.498 158419 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.498 158419 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.499 158419 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.499 158419 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.499 158419 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.499 158419 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.499 158419 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.499 158419 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.499 158419 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.500 158419 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.500 158419 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.500 158419 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.500 158419 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.500 158419 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.500 158419 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.500 158419 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.501 158419 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.501 158419 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.501 158419 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.501 158419 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.501 158419 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.502 158419 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.502 158419 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.502 158419 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.502 158419 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.502 158419 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.503 158419 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.503 158419 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.503 158419 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.503 158419 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.503 158419 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.504 158419 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.504 158419 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.504 158419 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.504 158419 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.504 158419 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.505 158419 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.505 158419 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.505 158419 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.505 158419 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.505 158419 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.506 158419 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.506 158419 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.506 158419 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.506 158419 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.506 158419 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.507 158419 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.507 158419 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.507 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.507 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.508 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.508 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.508 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.508 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.508 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.509 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.509 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.509 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.509 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.509 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.510 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.510 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.510 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.510 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.510 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.511 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.511 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.511 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.511 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.511 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.512 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.512 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.512 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.512 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.513 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.513 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.513 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.513 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.513 158419 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.514 158419 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.514 158419 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.514 158419 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.514 158419 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:47:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:47:57.515 158419 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec 13 02:47:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:59 np0005558241 systemd-logind[787]: New session 52 of user zuul.
Dec 13 02:47:59 np0005558241 systemd[1]: Started Session 52 of User zuul.
Dec 13 02:47:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:47:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:48:00 np0005558241 python3.9[158948]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:48:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:01 np0005558241 python3.9[159104]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:48:03 np0005558241 podman[159269]: 2025-12-13 07:48:03.205357338 +0000 UTC m=+0.136320672 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 13 02:48:03 np0005558241 python3.9[159270]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 02:48:03 np0005558241 systemd[1]: Reloading.
Dec 13 02:48:03 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:48:03 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:48:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:04 np0005558241 python3.9[159480]: ansible-ansible.builtin.service_facts Invoked
Dec 13 02:48:05 np0005558241 network[159497]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 02:48:05 np0005558241 network[159498]: 'network-scripts' will be removed from distribution in near future.
Dec 13 02:48:05 np0005558241 network[159499]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 02:48:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:48:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:48:09
Dec 13 02:48:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:48:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:48:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['backups', 'volumes', '.mgr', 'images', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data']
Dec 13 02:48:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:48:09 np0005558241 python3.9[159761]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:48:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:48:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:48:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:48:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:48:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:48:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:48:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:48:11 np0005558241 python3.9[159914]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:48:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:12 np0005558241 python3.9[160067]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:48:13 np0005558241 python3.9[160220]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:48:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:14 np0005558241 python3.9[160373]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:48:15 np0005558241 python3.9[160526]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:48:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:48:16 np0005558241 python3.9[160679]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:48:17 np0005558241 python3.9[160832]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:48:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:18 np0005558241 python3.9[160984]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:48:19 np0005558241 python3.9[161136]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:48:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:19 np0005558241 python3.9[161288]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:48:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:48:21 np0005558241 python3.9[161440]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:48:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:48:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:22 np0005558241 python3.9[161592]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:48:23 np0005558241 python3.9[161744]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:48:23 np0005558241 podman[161868]: 2025-12-13 07:48:23.726663347 +0000 UTC m=+0.080631063 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 02:48:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:23 np0005558241 python3.9[161912]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:48:24 np0005558241 python3.9[162065]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:48:25 np0005558241 python3.9[162217]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:48:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:26 np0005558241 python3.9[162369]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:48:27 np0005558241 python3.9[162521]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:48:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:48:27 np0005558241 python3.9[162673]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:48:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:28 np0005558241 python3.9[162825]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:48:29 np0005558241 python3.9[162977]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:48:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:30 np0005558241 python3.9[163129]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 02:48:31 np0005558241 python3.9[163281]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 02:48:31 np0005558241 systemd[1]: Reloading.
Dec 13 02:48:31 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:48:31 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:48:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:48:32 np0005558241 python3.9[163467]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:48:33 np0005558241 python3.9[163620]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:48:33 np0005558241 podman[163622]: 2025-12-13 07:48:33.549135882 +0000 UTC m=+0.114651677 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202)
Dec 13 02:48:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:34 np0005558241 python3.9[163800]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:48:34 np0005558241 python3.9[163953]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:48:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:36 np0005558241 python3.9[164106]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:48:37 np0005558241 python3.9[164259]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:48:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:48:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:38 np0005558241 python3.9[164412]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:48:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:39 np0005558241 python3.9[164565]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 13 02:48:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:48:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:48:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:48:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:48:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:48:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:48:41 np0005558241 python3.9[164718]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 02:48:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:48:42 np0005558241 python3.9[164876]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 13 02:48:43 np0005558241 python3.9[165036]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:48:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:44 np0005558241 python3.9[165120]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:48:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:48:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:48:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:53 np0005558241 podman[165134]: 2025-12-13 07:48:53.97512287 +0000 UTC m=+0.061752680 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 02:48:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:48:55.359 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:48:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:48:55.360 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:48:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:48:55.360 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:48:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:48:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:48:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:48:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:48:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:48:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:48:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:48:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 13 02:48:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:48:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 13 02:49:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:49:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:49:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:49:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:49:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:49:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:49:01 np0005558241 podman[165294]: 2025-12-13 07:49:01.448839262 +0000 UTC m=+0.037461813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:49:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 13 02:49:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:49:03 np0005558241 podman[165294]: 2025-12-13 07:49:03.585525256 +0000 UTC m=+2.174147757 container create e6bea412d84548fc0fe24d47db11999c474da453b76329d0e43c905701c4158b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_babbage, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 02:49:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Dec 13 02:49:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Dec 13 02:49:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:49:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Dec 13 02:49:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:49:09
Dec 13 02:49:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:49:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:49:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'vms', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'backups']
Dec 13 02:49:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:49:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 0 B/s wr, 1 op/s
Dec 13 02:49:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:49:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:49:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:49:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:49:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:49:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:49:10 np0005558241 systemd[1]: Started libpod-conmon-e6bea412d84548fc0fe24d47db11999c474da453b76329d0e43c905701c4158b.scope.
Dec 13 02:49:10 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:49:10 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 6.031196117s
Dec 13 02:49:10 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 6.031196594s
Dec 13 02:49:10 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.031438351s, txc = 0x561829093200, txc bytes = 1273, txc ios = 1, txc cost = 671273, txc onodes = 1, DB updates = 3, DB bytes = 1050, cost max = 95489052 on 2025-12-13T07:32:16.596576+0000, txc max = 100 on 2025-12-13T07:32:21.123438+0000
Dec 13 02:49:11 np0005558241 podman[165294]: 2025-12-13 07:49:11.250853532 +0000 UTC m=+9.839476083 container init e6bea412d84548fc0fe24d47db11999c474da453b76329d0e43c905701c4158b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:49:11 np0005558241 podman[165294]: 2025-12-13 07:49:11.265974684 +0000 UTC m=+9.854597145 container start e6bea412d84548fc0fe24d47db11999c474da453b76329d0e43c905701c4158b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_babbage, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:49:11 np0005558241 keen_babbage[165323]: 167 167
Dec 13 02:49:11 np0005558241 systemd[1]: libpod-e6bea412d84548fc0fe24d47db11999c474da453b76329d0e43c905701c4158b.scope: Deactivated successfully.
Dec 13 02:49:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:49:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 0 B/s wr, 1 op/s
Dec 13 02:49:11 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:49:11 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:49:11 np0005558241 podman[165294]: 2025-12-13 07:49:11.969450294 +0000 UTC m=+10.558072765 container attach e6bea412d84548fc0fe24d47db11999c474da453b76329d0e43c905701c4158b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_babbage, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Dec 13 02:49:11 np0005558241 podman[165294]: 2025-12-13 07:49:11.971041053 +0000 UTC m=+10.559663544 container died e6bea412d84548fc0fe24d47db11999c474da453b76329d0e43c905701c4158b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_babbage, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:49:13 np0005558241 systemd[1]: var-lib-containers-storage-overlay-49fa730bb584101abe02bf4f598ffec12e915a4a5b524ffdbbab93e1bf961a1a-merged.mount: Deactivated successfully.
Dec 13 02:49:13 np0005558241 podman[165294]: 2025-12-13 07:49:13.229556521 +0000 UTC m=+11.818179062 container remove e6bea412d84548fc0fe24d47db11999c474da453b76329d0e43c905701c4158b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:49:13 np0005558241 podman[165310]: 2025-12-13 07:49:13.233744704 +0000 UTC m=+9.597740568 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Dec 13 02:49:13 np0005558241 systemd[1]: libpod-conmon-e6bea412d84548fc0fe24d47db11999c474da453b76329d0e43c905701c4158b.scope: Deactivated successfully.
Dec 13 02:49:13 np0005558241 podman[165361]: 2025-12-13 07:49:13.383772093 +0000 UTC m=+0.027365403 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:49:13 np0005558241 podman[165361]: 2025-12-13 07:49:13.571566051 +0000 UTC m=+0.215159351 container create b4de722e2084048d07711ff5fe427260b29e3cb1b01050e7f7e3f72644a7bbdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 02:49:13 np0005558241 systemd[1]: Started libpod-conmon-b4de722e2084048d07711ff5fe427260b29e3cb1b01050e7f7e3f72644a7bbdc.scope.
Dec 13 02:49:13 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:49:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd8c720f96ab4cd048218e0b465d2108e018a8fbbaea963fb10d9c075d0d15c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:49:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd8c720f96ab4cd048218e0b465d2108e018a8fbbaea963fb10d9c075d0d15c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:49:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd8c720f96ab4cd048218e0b465d2108e018a8fbbaea963fb10d9c075d0d15c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:49:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd8c720f96ab4cd048218e0b465d2108e018a8fbbaea963fb10d9c075d0d15c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:49:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd8c720f96ab4cd048218e0b465d2108e018a8fbbaea963fb10d9c075d0d15c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:49:13 np0005558241 podman[165361]: 2025-12-13 07:49:13.812591088 +0000 UTC m=+0.456184408 container init b4de722e2084048d07711ff5fe427260b29e3cb1b01050e7f7e3f72644a7bbdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_montalcini, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 02:49:13 np0005558241 podman[165361]: 2025-12-13 07:49:13.82122227 +0000 UTC m=+0.464815560 container start b4de722e2084048d07711ff5fe427260b29e3cb1b01050e7f7e3f72644a7bbdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 02:49:13 np0005558241 podman[165361]: 2025-12-13 07:49:13.83014424 +0000 UTC m=+0.473737530 container attach b4de722e2084048d07711ff5fe427260b29e3cb1b01050e7f7e3f72644a7bbdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:49:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 02:49:14 np0005558241 ceph-mon[76537]: log_channel(cluster) log [WRN] : Health check update: 2 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Dec 13 02:49:14 np0005558241 ceph-mon[76537]: Health check update: 2 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Dec 13 02:49:14 np0005558241 agitated_montalcini[165378]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:49:14 np0005558241 agitated_montalcini[165378]: --> All data devices are unavailable
Dec 13 02:49:14 np0005558241 systemd[1]: libpod-b4de722e2084048d07711ff5fe427260b29e3cb1b01050e7f7e3f72644a7bbdc.scope: Deactivated successfully.
Dec 13 02:49:14 np0005558241 podman[165361]: 2025-12-13 07:49:14.329971251 +0000 UTC m=+0.973564571 container died b4de722e2084048d07711ff5fe427260b29e3cb1b01050e7f7e3f72644a7bbdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_montalcini, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 02:49:14 np0005558241 systemd[1]: var-lib-containers-storage-overlay-edd8c720f96ab4cd048218e0b465d2108e018a8fbbaea963fb10d9c075d0d15c-merged.mount: Deactivated successfully.
Dec 13 02:49:14 np0005558241 podman[165361]: 2025-12-13 07:49:14.41490154 +0000 UTC m=+1.058494830 container remove b4de722e2084048d07711ff5fe427260b29e3cb1b01050e7f7e3f72644a7bbdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_montalcini, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:49:14 np0005558241 systemd[1]: libpod-conmon-b4de722e2084048d07711ff5fe427260b29e3cb1b01050e7f7e3f72644a7bbdc.scope: Deactivated successfully.
Dec 13 02:49:14 np0005558241 podman[165469]: 2025-12-13 07:49:14.887253006 +0000 UTC m=+0.045972431 container create 16dc470d72b4788fce6132bcc61ddc959ed34b3122859afd2e93f44413ed7aa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_edison, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 02:49:14 np0005558241 systemd[1]: Started libpod-conmon-16dc470d72b4788fce6132bcc61ddc959ed34b3122859afd2e93f44413ed7aa7.scope.
Dec 13 02:49:14 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:49:14 np0005558241 podman[165469]: 2025-12-13 07:49:14.865601554 +0000 UTC m=+0.024320999 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:49:14 np0005558241 podman[165469]: 2025-12-13 07:49:14.974497692 +0000 UTC m=+0.133217127 container init 16dc470d72b4788fce6132bcc61ddc959ed34b3122859afd2e93f44413ed7aa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_edison, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:49:14 np0005558241 podman[165469]: 2025-12-13 07:49:14.981515644 +0000 UTC m=+0.140235099 container start 16dc470d72b4788fce6132bcc61ddc959ed34b3122859afd2e93f44413ed7aa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_edison, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:49:14 np0005558241 podman[165469]: 2025-12-13 07:49:14.985491872 +0000 UTC m=+0.144211297 container attach 16dc470d72b4788fce6132bcc61ddc959ed34b3122859afd2e93f44413ed7aa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_edison, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:49:14 np0005558241 elated_edison[165489]: 167 167
Dec 13 02:49:14 np0005558241 systemd[1]: libpod-16dc470d72b4788fce6132bcc61ddc959ed34b3122859afd2e93f44413ed7aa7.scope: Deactivated successfully.
Dec 13 02:49:14 np0005558241 podman[165469]: 2025-12-13 07:49:14.988386913 +0000 UTC m=+0.147106338 container died 16dc470d72b4788fce6132bcc61ddc959ed34b3122859afd2e93f44413ed7aa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_edison, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 02:49:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c627bfada5036a3ccc10ba20c2bb4a16d2ee4c6649639f79994697396b4c673a-merged.mount: Deactivated successfully.
Dec 13 02:49:15 np0005558241 podman[165469]: 2025-12-13 07:49:15.019992731 +0000 UTC m=+0.178712146 container remove 16dc470d72b4788fce6132bcc61ddc959ed34b3122859afd2e93f44413ed7aa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_edison, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:49:15 np0005558241 systemd[1]: libpod-conmon-16dc470d72b4788fce6132bcc61ddc959ed34b3122859afd2e93f44413ed7aa7.scope: Deactivated successfully.
Dec 13 02:49:15 np0005558241 podman[165517]: 2025-12-13 07:49:15.205113613 +0000 UTC m=+0.058448538 container create 799a365c5d0348197d9b596c6ecbbc3cf656c241af3f5a8c751f3734da3a8203 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_keldysh, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 02:49:15 np0005558241 podman[165517]: 2025-12-13 07:49:15.181698477 +0000 UTC m=+0.035033442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:49:15 np0005558241 systemd[1]: Started libpod-conmon-799a365c5d0348197d9b596c6ecbbc3cf656c241af3f5a8c751f3734da3a8203.scope.
Dec 13 02:49:15 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:49:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cccb94bfd42e6585cb669ce0c96d74f292d4a0b5b127feaee841ad88f13d8d5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:49:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cccb94bfd42e6585cb669ce0c96d74f292d4a0b5b127feaee841ad88f13d8d5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:49:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cccb94bfd42e6585cb669ce0c96d74f292d4a0b5b127feaee841ad88f13d8d5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:49:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cccb94bfd42e6585cb669ce0c96d74f292d4a0b5b127feaee841ad88f13d8d5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:49:15 np0005558241 podman[165517]: 2025-12-13 07:49:15.518423358 +0000 UTC m=+0.371758333 container init 799a365c5d0348197d9b596c6ecbbc3cf656c241af3f5a8c751f3734da3a8203 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_keldysh, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:49:15 np0005558241 podman[165517]: 2025-12-13 07:49:15.52865509 +0000 UTC m=+0.381990035 container start 799a365c5d0348197d9b596c6ecbbc3cf656c241af3f5a8c751f3734da3a8203 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_keldysh, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:49:15 np0005558241 podman[165517]: 2025-12-13 07:49:15.678221258 +0000 UTC m=+0.531556233 container attach 799a365c5d0348197d9b596c6ecbbc3cf656c241af3f5a8c751f3734da3a8203 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]: {
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:    "0": [
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:        {
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "devices": [
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "/dev/loop3"
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            ],
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "lv_name": "ceph_lv0",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "lv_size": "21470642176",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "name": "ceph_lv0",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "tags": {
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.cluster_name": "ceph",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.crush_device_class": "",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.encrypted": "0",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.objectstore": "bluestore",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.osd_id": "0",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.type": "block",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.vdo": "0",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.with_tpm": "0"
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            },
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "type": "block",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "vg_name": "ceph_vg0"
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:        }
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:    ],
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:    "1": [
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:        {
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "devices": [
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "/dev/loop4"
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            ],
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "lv_name": "ceph_lv1",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "lv_size": "21470642176",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "name": "ceph_lv1",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "tags": {
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.cluster_name": "ceph",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.crush_device_class": "",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.encrypted": "0",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.objectstore": "bluestore",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.osd_id": "1",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.type": "block",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.vdo": "0",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.with_tpm": "0"
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            },
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "type": "block",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "vg_name": "ceph_vg1"
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:        }
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:    ],
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:    "2": [
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:        {
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "devices": [
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "/dev/loop5"
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            ],
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "lv_name": "ceph_lv2",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "lv_size": "21470642176",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "name": "ceph_lv2",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "tags": {
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.cluster_name": "ceph",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.crush_device_class": "",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.encrypted": "0",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.objectstore": "bluestore",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.osd_id": "2",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.type": "block",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.vdo": "0",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:                "ceph.with_tpm": "0"
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            },
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "type": "block",
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:            "vg_name": "ceph_vg2"
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:        }
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]:    ]
Dec 13 02:49:15 np0005558241 keen_keldysh[165538]: }
Dec 13 02:49:15 np0005558241 systemd[1]: libpod-799a365c5d0348197d9b596c6ecbbc3cf656c241af3f5a8c751f3734da3a8203.scope: Deactivated successfully.
Dec 13 02:49:15 np0005558241 podman[165566]: 2025-12-13 07:49:15.890404656 +0000 UTC m=+0.025835117 container died 799a365c5d0348197d9b596c6ecbbc3cf656c241af3f5a8c751f3734da3a8203 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_keldysh, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:49:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 0 B/s wr, 3 op/s
Dec 13 02:49:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:49:17 np0005558241 systemd[1]: var-lib-containers-storage-overlay-cccb94bfd42e6585cb669ce0c96d74f292d4a0b5b127feaee841ad88f13d8d5a-merged.mount: Deactivated successfully.
Dec 13 02:49:17 np0005558241 podman[165566]: 2025-12-13 07:49:17.212137449 +0000 UTC m=+1.347567920 container remove 799a365c5d0348197d9b596c6ecbbc3cf656c241af3f5a8c751f3734da3a8203 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_keldysh, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:49:17 np0005558241 systemd[1]: libpod-conmon-799a365c5d0348197d9b596c6ecbbc3cf656c241af3f5a8c751f3734da3a8203.scope: Deactivated successfully.
Dec 13 02:49:17 np0005558241 podman[165694]: 2025-12-13 07:49:17.721713061 +0000 UTC m=+0.026262037 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:49:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 02:49:18 np0005558241 podman[165694]: 2025-12-13 07:49:18.089352002 +0000 UTC m=+0.393900888 container create 131052642c93eae9e3c83c007d8c3145b10fb4ab8e3e7059da392dbbe5caa233 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keller, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:49:18 np0005558241 systemd[1]: Started libpod-conmon-131052642c93eae9e3c83c007d8c3145b10fb4ab8e3e7059da392dbbe5caa233.scope.
Dec 13 02:49:18 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:49:18 np0005558241 podman[165694]: 2025-12-13 07:49:18.906923898 +0000 UTC m=+1.211472804 container init 131052642c93eae9e3c83c007d8c3145b10fb4ab8e3e7059da392dbbe5caa233 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 02:49:18 np0005558241 podman[165694]: 2025-12-13 07:49:18.91920817 +0000 UTC m=+1.223757106 container start 131052642c93eae9e3c83c007d8c3145b10fb4ab8e3e7059da392dbbe5caa233 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keller, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 02:49:18 np0005558241 quizzical_keller[165738]: 167 167
Dec 13 02:49:18 np0005558241 systemd[1]: libpod-131052642c93eae9e3c83c007d8c3145b10fb4ab8e3e7059da392dbbe5caa233.scope: Deactivated successfully.
Dec 13 02:49:18 np0005558241 podman[165694]: 2025-12-13 07:49:18.926330935 +0000 UTC m=+1.230879871 container attach 131052642c93eae9e3c83c007d8c3145b10fb4ab8e3e7059da392dbbe5caa233 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keller, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:49:18 np0005558241 podman[165694]: 2025-12-13 07:49:18.928442317 +0000 UTC m=+1.232991253 container died 131052642c93eae9e3c83c007d8c3145b10fb4ab8e3e7059da392dbbe5caa233 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:49:18 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e5e17640ed0cdb887031b0e6f5241a251291f18b1f143e9be09d60f56d3c19ad-merged.mount: Deactivated successfully.
Dec 13 02:49:19 np0005558241 podman[165694]: 2025-12-13 07:49:19.042270786 +0000 UTC m=+1.346819672 container remove 131052642c93eae9e3c83c007d8c3145b10fb4ab8e3e7059da392dbbe5caa233 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keller, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 02:49:19 np0005558241 systemd[1]: libpod-conmon-131052642c93eae9e3c83c007d8c3145b10fb4ab8e3e7059da392dbbe5caa233.scope: Deactivated successfully.
Dec 13 02:49:19 np0005558241 podman[165774]: 2025-12-13 07:49:19.193039164 +0000 UTC m=+0.023579291 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:49:19 np0005558241 podman[165774]: 2025-12-13 07:49:19.444678112 +0000 UTC m=+0.275218259 container create cbca332968f69a4df85ed76edcaef854a6f12d1c0efade35e14d3d22769beb64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 02:49:19 np0005558241 systemd[1]: Started libpod-conmon-cbca332968f69a4df85ed76edcaef854a6f12d1c0efade35e14d3d22769beb64.scope.
Dec 13 02:49:19 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:49:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59331d672f99184c8f40a21430c0bafdfa170df539598086163f13742ff705eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:49:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59331d672f99184c8f40a21430c0bafdfa170df539598086163f13742ff705eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:49:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59331d672f99184c8f40a21430c0bafdfa170df539598086163f13742ff705eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:49:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59331d672f99184c8f40a21430c0bafdfa170df539598086163f13742ff705eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:49:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 0 B/s wr, 14 op/s
Dec 13 02:49:20 np0005558241 podman[165774]: 2025-12-13 07:49:20.180251661 +0000 UTC m=+1.010791818 container init cbca332968f69a4df85ed76edcaef854a6f12d1c0efade35e14d3d22769beb64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cohen, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:49:20 np0005558241 podman[165774]: 2025-12-13 07:49:20.19563726 +0000 UTC m=+1.026177367 container start cbca332968f69a4df85ed76edcaef854a6f12d1c0efade35e14d3d22769beb64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cohen, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 02:49:20 np0005558241 podman[165774]: 2025-12-13 07:49:20.201119385 +0000 UTC m=+1.031659572 container attach cbca332968f69a4df85ed76edcaef854a6f12d1c0efade35e14d3d22769beb64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:49:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:49:20 np0005558241 lvm[165917]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:49:20 np0005558241 lvm[165916]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:49:20 np0005558241 lvm[165917]: VG ceph_vg1 finished
Dec 13 02:49:20 np0005558241 lvm[165916]: VG ceph_vg0 finished
Dec 13 02:49:21 np0005558241 lvm[165919]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:49:21 np0005558241 lvm[165919]: VG ceph_vg2 finished
Dec 13 02:49:21 np0005558241 bold_cohen[165805]: {}
Dec 13 02:49:21 np0005558241 systemd[1]: libpod-cbca332968f69a4df85ed76edcaef854a6f12d1c0efade35e14d3d22769beb64.scope: Deactivated successfully.
Dec 13 02:49:21 np0005558241 systemd[1]: libpod-cbca332968f69a4df85ed76edcaef854a6f12d1c0efade35e14d3d22769beb64.scope: Consumed 1.506s CPU time.
Dec 13 02:49:21 np0005558241 podman[165774]: 2025-12-13 07:49:21.115528091 +0000 UTC m=+1.946068218 container died cbca332968f69a4df85ed76edcaef854a6f12d1c0efade35e14d3d22769beb64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cohen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 02:49:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:49:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 0 B/s wr, 14 op/s
Dec 13 02:49:22 np0005558241 systemd[1]: var-lib-containers-storage-overlay-59331d672f99184c8f40a21430c0bafdfa170df539598086163f13742ff705eb-merged.mount: Deactivated successfully.
Dec 13 02:49:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 0 B/s wr, 14 op/s
Dec 13 02:49:25 np0005558241 podman[165774]: 2025-12-13 07:49:25.396740412 +0000 UTC m=+6.227280549 container remove cbca332968f69a4df85ed76edcaef854a6f12d1c0efade35e14d3d22769beb64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_cohen, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:49:25 np0005558241 systemd[1]: libpod-conmon-cbca332968f69a4df85ed76edcaef854a6f12d1c0efade35e14d3d22769beb64.scope: Deactivated successfully.
Dec 13 02:49:25 np0005558241 podman[165935]: 2025-12-13 07:49:25.548479264 +0000 UTC m=+0.624842367 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 02:49:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 10 op/s
Dec 13 02:49:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:49:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:49:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:49:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 10 op/s
Dec 13 02:49:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:49:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:49:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 02:49:30 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:49:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:49:31 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:49:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:49:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:49:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:49:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:49:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:49:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:49:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:49:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:49:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:49:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:49:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:49:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:49:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:49:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:49:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:49:44 np0005558241 podman[165988]: 2025-12-13 07:49:44.053845298 +0000 UTC m=+0.132885099 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 02:49:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:49:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:49:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:49:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:49:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:49:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:49:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:49:54 np0005558241 kernel: SELinux:  Converting 2769 SID table entries...
Dec 13 02:49:54 np0005558241 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 02:49:54 np0005558241 kernel: SELinux:  policy capability open_perms=1
Dec 13 02:49:54 np0005558241 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 02:49:54 np0005558241 kernel: SELinux:  policy capability always_check_network=0
Dec 13 02:49:54 np0005558241 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 02:49:54 np0005558241 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 02:49:54 np0005558241 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 02:49:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:49:55.360 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:49:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:49:55.362 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:49:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:49:55.362 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:49:55 np0005558241 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Dec 13 02:49:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:49:56 np0005558241 podman[166022]: 2025-12-13 07:49:56.000248451 +0000 UTC m=+0.078763308 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:49:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:49:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:49:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:50:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:50:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:50:09
Dec 13 02:50:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:50:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:50:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.control', 'backups', '.mgr']
Dec 13 02:50:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:50:09 np0005558241 kernel: SELinux:  Converting 2769 SID table entries...
Dec 13 02:50:09 np0005558241 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 02:50:09 np0005558241 kernel: SELinux:  policy capability open_perms=1
Dec 13 02:50:09 np0005558241 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 02:50:09 np0005558241 kernel: SELinux:  policy capability always_check_network=0
Dec 13 02:50:09 np0005558241 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 02:50:09 np0005558241 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 02:50:09 np0005558241 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 02:50:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:50:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:50:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:50:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:50:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:50:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:50:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:50:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:14 np0005558241 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 13 02:50:15 np0005558241 podman[166048]: 2025-12-13 07:50:15.02806169 +0000 UTC m=+0.106816938 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 02:50:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:50:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:50:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:50:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:50:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:26 np0005558241 podman[167742]: 2025-12-13 07:50:26.968374164 +0000 UTC m=+0.060269703 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 02:50:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:50:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:50:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:50:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:50:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:50:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:50:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:50:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:50:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:50:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:50:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:50:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:50:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:50:30 np0005558241 podman[170220]: 2025-12-13 07:50:30.962536828 +0000 UTC m=+0.043904291 container create 62919e26b3087daae180fe71c40b7cd84d90d4dd556000eeb339e4e72c04f170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_black, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:50:31 np0005558241 systemd[1]: Started libpod-conmon-62919e26b3087daae180fe71c40b7cd84d90d4dd556000eeb339e4e72c04f170.scope.
Dec 13 02:50:31 np0005558241 podman[170220]: 2025-12-13 07:50:30.941491841 +0000 UTC m=+0.022859324 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:50:31 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:50:31 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:50:31 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:50:31 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:50:31 np0005558241 podman[170220]: 2025-12-13 07:50:31.068919334 +0000 UTC m=+0.150286827 container init 62919e26b3087daae180fe71c40b7cd84d90d4dd556000eeb339e4e72c04f170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_black, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:50:31 np0005558241 podman[170220]: 2025-12-13 07:50:31.078365517 +0000 UTC m=+0.159732980 container start 62919e26b3087daae180fe71c40b7cd84d90d4dd556000eeb339e4e72c04f170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_black, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:50:31 np0005558241 podman[170220]: 2025-12-13 07:50:31.081838132 +0000 UTC m=+0.163205595 container attach 62919e26b3087daae180fe71c40b7cd84d90d4dd556000eeb339e4e72c04f170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_black, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:50:31 np0005558241 wizardly_black[170313]: 167 167
Dec 13 02:50:31 np0005558241 systemd[1]: libpod-62919e26b3087daae180fe71c40b7cd84d90d4dd556000eeb339e4e72c04f170.scope: Deactivated successfully.
Dec 13 02:50:31 np0005558241 podman[170220]: 2025-12-13 07:50:31.088176608 +0000 UTC m=+0.169544071 container died 62919e26b3087daae180fe71c40b7cd84d90d4dd556000eeb339e4e72c04f170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_black, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Dec 13 02:50:31 np0005558241 systemd[1]: var-lib-containers-storage-overlay-eff0b9504345828a7d5a2c6b222da36a46795b6dd4ddeb0c6d7d29646f308a28-merged.mount: Deactivated successfully.
Dec 13 02:50:31 np0005558241 podman[170220]: 2025-12-13 07:50:31.134993479 +0000 UTC m=+0.216360952 container remove 62919e26b3087daae180fe71c40b7cd84d90d4dd556000eeb339e4e72c04f170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:50:31 np0005558241 systemd[1]: libpod-conmon-62919e26b3087daae180fe71c40b7cd84d90d4dd556000eeb339e4e72c04f170.scope: Deactivated successfully.
Dec 13 02:50:31 np0005558241 podman[170478]: 2025-12-13 07:50:31.326317284 +0000 UTC m=+0.057714410 container create 72682add4d95cc9271686a41abdfdb02728ded7956457833341fb9c51090a6ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_moore, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 02:50:31 np0005558241 systemd[1]: Started libpod-conmon-72682add4d95cc9271686a41abdfdb02728ded7956457833341fb9c51090a6ba.scope.
Dec 13 02:50:31 np0005558241 podman[170478]: 2025-12-13 07:50:31.298979292 +0000 UTC m=+0.030376408 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:50:31 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:50:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed0932699773363cc175faa12397606564bad137e3784300a5b5b0d2163a0b0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:50:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed0932699773363cc175faa12397606564bad137e3784300a5b5b0d2163a0b0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:50:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed0932699773363cc175faa12397606564bad137e3784300a5b5b0d2163a0b0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:50:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed0932699773363cc175faa12397606564bad137e3784300a5b5b0d2163a0b0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:50:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed0932699773363cc175faa12397606564bad137e3784300a5b5b0d2163a0b0a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:50:31 np0005558241 podman[170478]: 2025-12-13 07:50:31.437610181 +0000 UTC m=+0.169007297 container init 72682add4d95cc9271686a41abdfdb02728ded7956457833341fb9c51090a6ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_moore, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:50:31 np0005558241 podman[170478]: 2025-12-13 07:50:31.445338791 +0000 UTC m=+0.176735887 container start 72682add4d95cc9271686a41abdfdb02728ded7956457833341fb9c51090a6ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 02:50:31 np0005558241 podman[170478]: 2025-12-13 07:50:31.452810795 +0000 UTC m=+0.184207891 container attach 72682add4d95cc9271686a41abdfdb02728ded7956457833341fb9c51090a6ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:50:31 np0005558241 distracted_moore[170567]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:50:31 np0005558241 distracted_moore[170567]: --> All data devices are unavailable
Dec 13 02:50:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:31 np0005558241 systemd[1]: libpod-72682add4d95cc9271686a41abdfdb02728ded7956457833341fb9c51090a6ba.scope: Deactivated successfully.
Dec 13 02:50:31 np0005558241 podman[170478]: 2025-12-13 07:50:31.989030284 +0000 UTC m=+0.720427390 container died 72682add4d95cc9271686a41abdfdb02728ded7956457833341fb9c51090a6ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:50:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:50:32 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ed0932699773363cc175faa12397606564bad137e3784300a5b5b0d2163a0b0a-merged.mount: Deactivated successfully.
Dec 13 02:50:32 np0005558241 podman[170478]: 2025-12-13 07:50:32.342389583 +0000 UTC m=+1.073786679 container remove 72682add4d95cc9271686a41abdfdb02728ded7956457833341fb9c51090a6ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_moore, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:50:32 np0005558241 systemd[1]: libpod-conmon-72682add4d95cc9271686a41abdfdb02728ded7956457833341fb9c51090a6ba.scope: Deactivated successfully.
Dec 13 02:50:32 np0005558241 podman[171513]: 2025-12-13 07:50:32.85417214 +0000 UTC m=+0.026985385 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:50:33 np0005558241 podman[171513]: 2025-12-13 07:50:33.218866858 +0000 UTC m=+0.391680083 container create aea1d9a79577f3dc1ddbb489bc07d385742c48a45ce96f3b049cfc7280463bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_stonebraker, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:50:33 np0005558241 systemd[1]: Started libpod-conmon-aea1d9a79577f3dc1ddbb489bc07d385742c48a45ce96f3b049cfc7280463bae.scope.
Dec 13 02:50:33 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:50:33 np0005558241 podman[171513]: 2025-12-13 07:50:33.889628524 +0000 UTC m=+1.062441769 container init aea1d9a79577f3dc1ddbb489bc07d385742c48a45ce96f3b049cfc7280463bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:50:33 np0005558241 podman[171513]: 2025-12-13 07:50:33.897913968 +0000 UTC m=+1.070727193 container start aea1d9a79577f3dc1ddbb489bc07d385742c48a45ce96f3b049cfc7280463bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_stonebraker, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 02:50:33 np0005558241 sweet_stonebraker[172065]: 167 167
Dec 13 02:50:33 np0005558241 systemd[1]: libpod-aea1d9a79577f3dc1ddbb489bc07d385742c48a45ce96f3b049cfc7280463bae.scope: Deactivated successfully.
Dec 13 02:50:33 np0005558241 conmon[172065]: conmon aea1d9a79577f3dc1ddb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aea1d9a79577f3dc1ddbb489bc07d385742c48a45ce96f3b049cfc7280463bae.scope/container/memory.events
Dec 13 02:50:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:34 np0005558241 podman[171513]: 2025-12-13 07:50:34.16354227 +0000 UTC m=+1.336355525 container attach aea1d9a79577f3dc1ddbb489bc07d385742c48a45ce96f3b049cfc7280463bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_stonebraker, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 02:50:34 np0005558241 podman[171513]: 2025-12-13 07:50:34.16436557 +0000 UTC m=+1.337178835 container died aea1d9a79577f3dc1ddbb489bc07d385742c48a45ce96f3b049cfc7280463bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:50:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2a363fabe1de7eedd1707d5510b1f3276a770f659cda2a0408c9fa5bf0bd5de9-merged.mount: Deactivated successfully.
Dec 13 02:50:34 np0005558241 podman[171513]: 2025-12-13 07:50:34.325011041 +0000 UTC m=+1.497824276 container remove aea1d9a79577f3dc1ddbb489bc07d385742c48a45ce96f3b049cfc7280463bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 02:50:34 np0005558241 systemd[1]: libpod-conmon-aea1d9a79577f3dc1ddbb489bc07d385742c48a45ce96f3b049cfc7280463bae.scope: Deactivated successfully.
Dec 13 02:50:34 np0005558241 podman[172453]: 2025-12-13 07:50:34.583513208 +0000 UTC m=+0.120369311 container create 100a2a212e13a1d8f6b27673a5551a80ad7655764da94926854b8a4fb1c8c54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 02:50:34 np0005558241 podman[172453]: 2025-12-13 07:50:34.489059406 +0000 UTC m=+0.025915549 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:50:34 np0005558241 systemd[1]: Started libpod-conmon-100a2a212e13a1d8f6b27673a5551a80ad7655764da94926854b8a4fb1c8c54f.scope.
Dec 13 02:50:34 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:50:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59b4814cfd006424724966b3af4d67661a7f0c6cb8592c279f30f28079b90107/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:50:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59b4814cfd006424724966b3af4d67661a7f0c6cb8592c279f30f28079b90107/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:50:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59b4814cfd006424724966b3af4d67661a7f0c6cb8592c279f30f28079b90107/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:50:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59b4814cfd006424724966b3af4d67661a7f0c6cb8592c279f30f28079b90107/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:50:34 np0005558241 podman[172453]: 2025-12-13 07:50:34.885555446 +0000 UTC m=+0.422411559 container init 100a2a212e13a1d8f6b27673a5551a80ad7655764da94926854b8a4fb1c8c54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 02:50:34 np0005558241 podman[172453]: 2025-12-13 07:50:34.893817819 +0000 UTC m=+0.430673912 container start 100a2a212e13a1d8f6b27673a5551a80ad7655764da94926854b8a4fb1c8c54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:50:34 np0005558241 podman[172453]: 2025-12-13 07:50:34.911274459 +0000 UTC m=+0.448130572 container attach 100a2a212e13a1d8f6b27673a5551a80ad7655764da94926854b8a4fb1c8c54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wilson, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:50:35 np0005558241 practical_wilson[172651]: {
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:    "0": [
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:        {
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "devices": [
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "/dev/loop3"
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            ],
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "lv_name": "ceph_lv0",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "lv_size": "21470642176",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "name": "ceph_lv0",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "tags": {
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.cluster_name": "ceph",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.crush_device_class": "",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.encrypted": "0",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.objectstore": "bluestore",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.osd_id": "0",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.type": "block",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.vdo": "0",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.with_tpm": "0"
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            },
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "type": "block",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "vg_name": "ceph_vg0"
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:        }
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:    ],
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:    "1": [
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:        {
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "devices": [
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "/dev/loop4"
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            ],
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "lv_name": "ceph_lv1",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "lv_size": "21470642176",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "name": "ceph_lv1",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "tags": {
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.cluster_name": "ceph",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.crush_device_class": "",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.encrypted": "0",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.objectstore": "bluestore",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.osd_id": "1",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.type": "block",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.vdo": "0",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.with_tpm": "0"
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            },
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "type": "block",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "vg_name": "ceph_vg1"
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:        }
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:    ],
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:    "2": [
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:        {
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "devices": [
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "/dev/loop5"
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            ],
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "lv_name": "ceph_lv2",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "lv_size": "21470642176",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "name": "ceph_lv2",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "tags": {
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.cluster_name": "ceph",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.crush_device_class": "",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.encrypted": "0",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.objectstore": "bluestore",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.osd_id": "2",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.type": "block",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.vdo": "0",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:                "ceph.with_tpm": "0"
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            },
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "type": "block",
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:            "vg_name": "ceph_vg2"
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:        }
Dec 13 02:50:35 np0005558241 practical_wilson[172651]:    ]
Dec 13 02:50:35 np0005558241 practical_wilson[172651]: }
Dec 13 02:50:35 np0005558241 podman[172453]: 2025-12-13 07:50:35.242887794 +0000 UTC m=+0.779743887 container died 100a2a212e13a1d8f6b27673a5551a80ad7655764da94926854b8a4fb1c8c54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wilson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:50:35 np0005558241 systemd[1]: libpod-100a2a212e13a1d8f6b27673a5551a80ad7655764da94926854b8a4fb1c8c54f.scope: Deactivated successfully.
Dec 13 02:50:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:36 np0005558241 systemd[1]: var-lib-containers-storage-overlay-59b4814cfd006424724966b3af4d67661a7f0c6cb8592c279f30f28079b90107-merged.mount: Deactivated successfully.
Dec 13 02:50:36 np0005558241 podman[172453]: 2025-12-13 07:50:36.22560517 +0000 UTC m=+1.762461263 container remove 100a2a212e13a1d8f6b27673a5551a80ad7655764da94926854b8a4fb1c8c54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wilson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 02:50:36 np0005558241 systemd[1]: libpod-conmon-100a2a212e13a1d8f6b27673a5551a80ad7655764da94926854b8a4fb1c8c54f.scope: Deactivated successfully.
Dec 13 02:50:36 np0005558241 podman[173861]: 2025-12-13 07:50:36.777961124 +0000 UTC m=+0.092232499 container create 223ca0cf219bc0b8ce8ef5820af2ba84cc145f38b3fb944846ee9905ff07bbfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_elion, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:50:36 np0005558241 podman[173861]: 2025-12-13 07:50:36.716025361 +0000 UTC m=+0.030296766 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:50:36 np0005558241 systemd[1]: Started libpod-conmon-223ca0cf219bc0b8ce8ef5820af2ba84cc145f38b3fb944846ee9905ff07bbfb.scope.
Dec 13 02:50:36 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:50:36 np0005558241 podman[173861]: 2025-12-13 07:50:36.989426044 +0000 UTC m=+0.303697439 container init 223ca0cf219bc0b8ce8ef5820af2ba84cc145f38b3fb944846ee9905ff07bbfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_elion, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 02:50:36 np0005558241 podman[173861]: 2025-12-13 07:50:36.997500113 +0000 UTC m=+0.311771488 container start 223ca0cf219bc0b8ce8ef5820af2ba84cc145f38b3fb944846ee9905ff07bbfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_elion, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:50:37 np0005558241 vigorous_elion[174006]: 167 167
Dec 13 02:50:37 np0005558241 systemd[1]: libpod-223ca0cf219bc0b8ce8ef5820af2ba84cc145f38b3fb944846ee9905ff07bbfb.scope: Deactivated successfully.
Dec 13 02:50:37 np0005558241 podman[173861]: 2025-12-13 07:50:37.045671657 +0000 UTC m=+0.359943052 container attach 223ca0cf219bc0b8ce8ef5820af2ba84cc145f38b3fb944846ee9905ff07bbfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_elion, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 02:50:37 np0005558241 podman[173861]: 2025-12-13 07:50:37.046233581 +0000 UTC m=+0.360504976 container died 223ca0cf219bc0b8ce8ef5820af2ba84cc145f38b3fb944846ee9905ff07bbfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_elion, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 02:50:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:50:37 np0005558241 systemd[1]: var-lib-containers-storage-overlay-593192f4a37c840c01c8c159d548071807a0f39c8c0cadd7f495e48d3faa8bf0-merged.mount: Deactivated successfully.
Dec 13 02:50:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:38 np0005558241 podman[173861]: 2025-12-13 07:50:38.646550256 +0000 UTC m=+1.960821631 container remove 223ca0cf219bc0b8ce8ef5820af2ba84cc145f38b3fb944846ee9905ff07bbfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_elion, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 02:50:38 np0005558241 systemd[1]: libpod-conmon-223ca0cf219bc0b8ce8ef5820af2ba84cc145f38b3fb944846ee9905ff07bbfb.scope: Deactivated successfully.
Dec 13 02:50:38 np0005558241 podman[175071]: 2025-12-13 07:50:38.810934089 +0000 UTC m=+0.023435608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:50:39 np0005558241 podman[175071]: 2025-12-13 07:50:39.016059333 +0000 UTC m=+0.228560832 container create cb1ad6005cbf6022dd38c015aa1d9586d2215225f96a74540c4bf2142cdd24a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_agnesi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:50:39 np0005558241 systemd[1]: Started libpod-conmon-cb1ad6005cbf6022dd38c015aa1d9586d2215225f96a74540c4bf2142cdd24a8.scope.
Dec 13 02:50:39 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:50:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5a218c902ce7b4150a07eb3e0ec111bb6736ac042719259dde708083132627/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:50:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5a218c902ce7b4150a07eb3e0ec111bb6736ac042719259dde708083132627/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:50:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5a218c902ce7b4150a07eb3e0ec111bb6736ac042719259dde708083132627/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:50:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5a218c902ce7b4150a07eb3e0ec111bb6736ac042719259dde708083132627/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:50:39 np0005558241 podman[175071]: 2025-12-13 07:50:39.125399262 +0000 UTC m=+0.337900791 container init cb1ad6005cbf6022dd38c015aa1d9586d2215225f96a74540c4bf2142cdd24a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_agnesi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 02:50:39 np0005558241 podman[175071]: 2025-12-13 07:50:39.134165648 +0000 UTC m=+0.346667157 container start cb1ad6005cbf6022dd38c015aa1d9586d2215225f96a74540c4bf2142cdd24a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_agnesi, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:50:39 np0005558241 podman[175071]: 2025-12-13 07:50:39.1391434 +0000 UTC m=+0.351644929 container attach cb1ad6005cbf6022dd38c015aa1d9586d2215225f96a74540c4bf2142cdd24a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_agnesi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 02:50:39 np0005558241 lvm[175811]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:50:39 np0005558241 lvm[175807]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:50:39 np0005558241 lvm[175807]: VG ceph_vg0 finished
Dec 13 02:50:39 np0005558241 lvm[175811]: VG ceph_vg1 finished
Dec 13 02:50:39 np0005558241 lvm[175813]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:50:39 np0005558241 lvm[175813]: VG ceph_vg2 finished
Dec 13 02:50:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:39 np0005558241 stoic_agnesi[175258]: {}
Dec 13 02:50:40 np0005558241 systemd[1]: libpod-cb1ad6005cbf6022dd38c015aa1d9586d2215225f96a74540c4bf2142cdd24a8.scope: Deactivated successfully.
Dec 13 02:50:40 np0005558241 systemd[1]: libpod-cb1ad6005cbf6022dd38c015aa1d9586d2215225f96a74540c4bf2142cdd24a8.scope: Consumed 1.486s CPU time.
Dec 13 02:50:40 np0005558241 podman[175071]: 2025-12-13 07:50:40.022408311 +0000 UTC m=+1.234909820 container died cb1ad6005cbf6022dd38c015aa1d9586d2215225f96a74540c4bf2142cdd24a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_agnesi, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:50:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:50:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:50:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:50:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:50:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:50:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:50:40 np0005558241 systemd[1]: var-lib-containers-storage-overlay-bf5a218c902ce7b4150a07eb3e0ec111bb6736ac042719259dde708083132627-merged.mount: Deactivated successfully.
Dec 13 02:50:40 np0005558241 podman[175071]: 2025-12-13 07:50:40.418931812 +0000 UTC m=+1.631433311 container remove cb1ad6005cbf6022dd38c015aa1d9586d2215225f96a74540c4bf2142cdd24a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_agnesi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 02:50:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:50:40 np0005558241 systemd[1]: libpod-conmon-cb1ad6005cbf6022dd38c015aa1d9586d2215225f96a74540c4bf2142cdd24a8.scope: Deactivated successfully.
Dec 13 02:50:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:50:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:50:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:50:41 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:50:41 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:50:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:50:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:46 np0005558241 podman[179351]: 2025-12-13 07:50:46.028534086 +0000 UTC m=+0.107192465 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 02:50:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:50:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:50:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 2957 writes, 13K keys, 2957 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 2957 writes, 2957 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 956 writes, 4008 keys, 956 commit groups, 1.0 writes per commit group, ingest: 6.68 MB, 0.01 MB/s#012Interval WAL: 956 writes, 956 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      6.3      2.13              0.05         5    0.425       0      0       0.0       0.0#012  L6      1/0    8.07 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1     30.5     25.7      1.12              0.11         4    0.279     15K   1775       0.0       0.0#012 Sum      1/0    8.07 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     10.5     13.0      3.24              0.16         9    0.360     15K   1775       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.1      6.2      6.6      2.90              0.07         4    0.724    8274   1027       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     30.5     25.7      1.12              0.11         4    0.279     15K   1775       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      6.3      2.12              0.05         4    0.530       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.013, interval 0.005#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.04 GB write, 0.04 MB/s write, 0.03 GB read, 0.03 MB/s read, 3.2 seconds#012Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.02 GB read, 0.03 MB/s read, 2.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 308.00 MB usage: 1.24 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(72,1.09 MB,0.353231%) FilterBlock(10,51.61 KB,0.0163636%) IndexBlock(10,107.50 KB,0.0340846%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec 13 02:50:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:50:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:50:55.362 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:50:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:50:55.363 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:50:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:50:55.363 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:50:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:57 np0005558241 podman[183527]: 2025-12-13 07:50:57.819306902 +0000 UTC m=+0.067272661 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 02:50:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:50:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:50:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:51:03.556731) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612263556777, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1571, "num_deletes": 251, "total_data_size": 2715877, "memory_usage": 2760024, "flush_reason": "Manual Compaction"}
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612263589861, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1555763, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11973, "largest_seqno": 13543, "table_properties": {"data_size": 1550331, "index_size": 2636, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13818, "raw_average_key_size": 20, "raw_value_size": 1538389, "raw_average_value_size": 2269, "num_data_blocks": 121, "num_entries": 678, "num_filter_entries": 678, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765612057, "oldest_key_time": 1765612057, "file_creation_time": 1765612263, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 33195 microseconds, and 5157 cpu microseconds.
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:51:03.589922) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1555763 bytes OK
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:51:03.589949) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:51:03.597480) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:51:03.597555) EVENT_LOG_v1 {"time_micros": 1765612263597543, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:51:03.597592) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2708973, prev total WAL file size 2708973, number of live WAL files 2.
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:51:03.598688) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1519KB)], [29(8265KB)]
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612263598718, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 10020064, "oldest_snapshot_seqno": -1}
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4098 keys, 7733610 bytes, temperature: kUnknown
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612263697733, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7733610, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7704361, "index_size": 17875, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10309, "raw_key_size": 98386, "raw_average_key_size": 24, "raw_value_size": 7628664, "raw_average_value_size": 1861, "num_data_blocks": 774, "num_entries": 4098, "num_filter_entries": 4098, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765612263, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:51:03.698046) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7733610 bytes
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:51:03.700243) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 101.1 rd, 78.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(11.4) write-amplify(5.0) OK, records in: 4531, records dropped: 433 output_compression: NoCompression
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:51:03.700272) EVENT_LOG_v1 {"time_micros": 1765612263700259, "job": 12, "event": "compaction_finished", "compaction_time_micros": 99123, "compaction_time_cpu_micros": 18508, "output_level": 6, "num_output_files": 1, "total_output_size": 7733610, "num_input_records": 4531, "num_output_records": 4098, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612263700669, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612263701783, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:51:03.598606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:51:03.701821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:51:03.701827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:51:03.701829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:51:03.701831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:51:03 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:51:03.701832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:51:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:51:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:51:09
Dec 13 02:51:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:51:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:51:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'images', 'default.rgw.log']
Dec 13 02:51:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:51:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:51:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:51:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:51:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:51:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:51:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:51:11 np0005558241 kernel: SELinux:  Converting 2770 SID table entries...
Dec 13 02:51:11 np0005558241 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 02:51:11 np0005558241 kernel: SELinux:  policy capability open_perms=1
Dec 13 02:51:11 np0005558241 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 02:51:11 np0005558241 kernel: SELinux:  policy capability always_check_network=0
Dec 13 02:51:11 np0005558241 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 02:51:11 np0005558241 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 02:51:11 np0005558241 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 02:51:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:51:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:16 np0005558241 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 13 02:51:17 np0005558241 podman[183562]: 2025-12-13 07:51:17.080873605 +0000 UTC m=+0.092922383 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 13 02:51:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:51:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:51:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:51:20 np0005558241 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Dec 13 02:51:20 np0005558241 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Dec 13 02:51:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:51:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:27 np0005558241 podman[183617]: 2025-12-13 07:51:27.956023996 +0000 UTC m=+0.072057248 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 13 02:51:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:51:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:51:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:51:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:51:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:51:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:51:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:51:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:51:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:51:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:51:42 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:51:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:51:42 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:51:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:51:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:51:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:51:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:51:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:51:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:51:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:51:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:51:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:51:45 np0005558241 podman[183816]: 2025-12-13 07:51:45.047625703 +0000 UTC m=+0.023921191 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:51:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:51:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:51:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:51:45 np0005558241 podman[183816]: 2025-12-13 07:51:45.904901009 +0000 UTC m=+0.881196477 container create 3766e62dd7c08e6508102b26eb74e5f4fb656e39eb29fd06965be53ab1c3e984 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cohen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 02:51:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:48 np0005558241 systemd[1]: Started libpod-conmon-3766e62dd7c08e6508102b26eb74e5f4fb656e39eb29fd06965be53ab1c3e984.scope.
Dec 13 02:51:48 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:51:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:51:50 np0005558241 podman[183816]: 2025-12-13 07:51:50.78179188 +0000 UTC m=+5.758087348 container init 3766e62dd7c08e6508102b26eb74e5f4fb656e39eb29fd06965be53ab1c3e984 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:51:50 np0005558241 podman[183816]: 2025-12-13 07:51:50.797572139 +0000 UTC m=+5.773867597 container start 3766e62dd7c08e6508102b26eb74e5f4fb656e39eb29fd06965be53ab1c3e984 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cohen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 02:51:50 np0005558241 elegant_cohen[183846]: 167 167
Dec 13 02:51:50 np0005558241 systemd[1]: libpod-3766e62dd7c08e6508102b26eb74e5f4fb656e39eb29fd06965be53ab1c3e984.scope: Deactivated successfully.
Dec 13 02:51:50 np0005558241 conmon[183846]: conmon 3766e62dd7c08e650810 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3766e62dd7c08e6508102b26eb74e5f4fb656e39eb29fd06965be53ab1c3e984.scope/container/memory.events
Dec 13 02:51:51 np0005558241 podman[183816]: 2025-12-13 07:51:51.241639053 +0000 UTC m=+6.217934521 container attach 3766e62dd7c08e6508102b26eb74e5f4fb656e39eb29fd06965be53ab1c3e984 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:51:51 np0005558241 podman[183816]: 2025-12-13 07:51:51.243048797 +0000 UTC m=+6.219344255 container died 3766e62dd7c08e6508102b26eb74e5f4fb656e39eb29fd06965be53ab1c3e984 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:51:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:54 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8ebabead71f5acbb9418522a10b8e14adc0b3b878fbd39fc08adc3bfeae561ff-merged.mount: Deactivated successfully.
Dec 13 02:51:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:51:55.363 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:51:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:51:55.364 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:51:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:51:55.364 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:51:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:51:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:51:58 np0005558241 podman[183816]: 2025-12-13 07:51:58.009289413 +0000 UTC m=+12.985584871 container remove 3766e62dd7c08e6508102b26eb74e5f4fb656e39eb29fd06965be53ab1c3e984 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:51:58 np0005558241 systemd[1]: libpod-conmon-3766e62dd7c08e6508102b26eb74e5f4fb656e39eb29fd06965be53ab1c3e984.scope: Deactivated successfully.
Dec 13 02:51:58 np0005558241 podman[183830]: 2025-12-13 07:51:58.0967211 +0000 UTC m=+10.175924685 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 02:51:58 np0005558241 podman[183929]: 2025-12-13 07:51:58.166032949 +0000 UTC m=+0.048213461 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 02:51:58 np0005558241 podman[183941]: 2025-12-13 07:51:58.155485548 +0000 UTC m=+0.022293910 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:51:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:00 np0005558241 podman[183941]: 2025-12-13 07:52:00.53519598 +0000 UTC m=+2.402004302 container create 663e0f8089262835fabb8df287b3e5131b13ebf15c62f2e79da62d4fb32d5fd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_keller, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:52:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:52:01 np0005558241 systemd[1]: Started libpod-conmon-663e0f8089262835fabb8df287b3e5131b13ebf15c62f2e79da62d4fb32d5fd1.scope.
Dec 13 02:52:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:52:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd8fe056b0fd7cf37d523fbc6de0cd04f13fce5f087c27d743d3863266aa94df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:52:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd8fe056b0fd7cf37d523fbc6de0cd04f13fce5f087c27d743d3863266aa94df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:52:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd8fe056b0fd7cf37d523fbc6de0cd04f13fce5f087c27d743d3863266aa94df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:52:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd8fe056b0fd7cf37d523fbc6de0cd04f13fce5f087c27d743d3863266aa94df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:52:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd8fe056b0fd7cf37d523fbc6de0cd04f13fce5f087c27d743d3863266aa94df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:52:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:02 np0005558241 podman[183941]: 2025-12-13 07:52:02.534946279 +0000 UTC m=+4.401754661 container init 663e0f8089262835fabb8df287b3e5131b13ebf15c62f2e79da62d4fb32d5fd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_keller, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:52:02 np0005558241 podman[183941]: 2025-12-13 07:52:02.54798064 +0000 UTC m=+4.414788992 container start 663e0f8089262835fabb8df287b3e5131b13ebf15c62f2e79da62d4fb32d5fd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_keller, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:52:03 np0005558241 stupefied_keller[183994]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:52:03 np0005558241 stupefied_keller[183994]: --> All data devices are unavailable
Dec 13 02:52:03 np0005558241 systemd[1]: libpod-663e0f8089262835fabb8df287b3e5131b13ebf15c62f2e79da62d4fb32d5fd1.scope: Deactivated successfully.
Dec 13 02:52:03 np0005558241 podman[183941]: 2025-12-13 07:52:03.155986678 +0000 UTC m=+5.022795040 container attach 663e0f8089262835fabb8df287b3e5131b13ebf15c62f2e79da62d4fb32d5fd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:52:03 np0005558241 podman[183941]: 2025-12-13 07:52:03.157725841 +0000 UTC m=+5.024534173 container died 663e0f8089262835fabb8df287b3e5131b13ebf15c62f2e79da62d4fb32d5fd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_keller, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 02:52:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:52:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5757 writes, 24K keys, 5757 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5757 writes, 919 syncs, 6.26 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s#012Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec 13 02:52:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:52:06 np0005558241 systemd[1]: var-lib-containers-storage-overlay-fd8fe056b0fd7cf37d523fbc6de0cd04f13fce5f087c27d743d3863266aa94df-merged.mount: Deactivated successfully.
Dec 13 02:52:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:52:09
Dec 13 02:52:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:52:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:52:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['images', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.log']
Dec 13 02:52:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:52:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:52:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:52:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:52:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:52:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:52:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:52:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:52:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1201.8 total, 600.0 interval#012Cumulative writes: 6991 writes, 29K keys, 6991 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6991 writes, 1277 syncs, 5.47 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 223 writes, 336 keys, 223 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 223 writes, 111 syncs, 2.01 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.8 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.7 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.8 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.8 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Dec 13 02:52:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:52:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:14 np0005558241 podman[183941]: 2025-12-13 07:52:14.44025957 +0000 UTC m=+16.307067942 container remove 663e0f8089262835fabb8df287b3e5131b13ebf15c62f2e79da62d4fb32d5fd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_keller, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 02:52:14 np0005558241 systemd[1]: libpod-conmon-663e0f8089262835fabb8df287b3e5131b13ebf15c62f2e79da62d4fb32d5fd1.scope: Deactivated successfully.
Dec 13 02:52:15 np0005558241 podman[184167]: 2025-12-13 07:52:14.983206214 +0000 UTC m=+0.037924267 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:52:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:16 np0005558241 podman[184167]: 2025-12-13 07:52:16.780389404 +0000 UTC m=+1.835107407 container create 39375d4e9c50c85cbb14390c4201e6587db417f8ae0e6ff4030e53946753b759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:52:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:52:17 np0005558241 systemd[1]: Started libpod-conmon-39375d4e9c50c85cbb14390c4201e6587db417f8ae0e6ff4030e53946753b759.scope.
Dec 13 02:52:17 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:52:17 np0005558241 podman[184167]: 2025-12-13 07:52:17.844037003 +0000 UTC m=+2.898755036 container init 39375d4e9c50c85cbb14390c4201e6587db417f8ae0e6ff4030e53946753b759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_agnesi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 02:52:17 np0005558241 podman[184167]: 2025-12-13 07:52:17.859622738 +0000 UTC m=+2.914340781 container start 39375d4e9c50c85cbb14390c4201e6587db417f8ae0e6ff4030e53946753b759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_agnesi, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 02:52:17 np0005558241 exciting_agnesi[184194]: 167 167
Dec 13 02:52:17 np0005558241 systemd[1]: libpod-39375d4e9c50c85cbb14390c4201e6587db417f8ae0e6ff4030e53946753b759.scope: Deactivated successfully.
Dec 13 02:52:17 np0005558241 podman[184167]: 2025-12-13 07:52:17.876571106 +0000 UTC m=+2.931289139 container attach 39375d4e9c50c85cbb14390c4201e6587db417f8ae0e6ff4030e53946753b759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 02:52:17 np0005558241 podman[184167]: 2025-12-13 07:52:17.877251743 +0000 UTC m=+2.931969756 container died 39375d4e9c50c85cbb14390c4201e6587db417f8ae0e6ff4030e53946753b759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_agnesi, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 02:52:17 np0005558241 systemd[1]: var-lib-containers-storage-overlay-03351b53cd6b179f10596f1c91c7daf8fe2923227d567e4972b414c6f83efbab-merged.mount: Deactivated successfully.
Dec 13 02:52:17 np0005558241 podman[184167]: 2025-12-13 07:52:17.924675972 +0000 UTC m=+2.979393975 container remove 39375d4e9c50c85cbb14390c4201e6587db417f8ae0e6ff4030e53946753b759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 02:52:17 np0005558241 systemd[1]: libpod-conmon-39375d4e9c50c85cbb14390c4201e6587db417f8ae0e6ff4030e53946753b759.scope: Deactivated successfully.
Dec 13 02:52:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:18 np0005558241 podman[184220]: 2025-12-13 07:52:18.074676983 +0000 UTC m=+0.023662475 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:52:18 np0005558241 podman[184220]: 2025-12-13 07:52:18.482918703 +0000 UTC m=+0.431904195 container create 70238a8ea457980b6a0faed8fa2614d192980f4071a767749879c829fb78feaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_antonelli, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec 13 02:52:18 np0005558241 systemd[1]: Started libpod-conmon-70238a8ea457980b6a0faed8fa2614d192980f4071a767749879c829fb78feaa.scope.
Dec 13 02:52:18 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:52:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee77d0ec5a69180f523eaee1576f45f24a6e04ddac96e8f1f8803fa36cd9b260/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:52:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee77d0ec5a69180f523eaee1576f45f24a6e04ddac96e8f1f8803fa36cd9b260/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:52:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee77d0ec5a69180f523eaee1576f45f24a6e04ddac96e8f1f8803fa36cd9b260/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:52:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee77d0ec5a69180f523eaee1576f45f24a6e04ddac96e8f1f8803fa36cd9b260/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:52:19 np0005558241 podman[184220]: 2025-12-13 07:52:19.207555018 +0000 UTC m=+1.156540500 container init 70238a8ea457980b6a0faed8fa2614d192980f4071a767749879c829fb78feaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_antonelli, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:52:19 np0005558241 podman[184220]: 2025-12-13 07:52:19.217206356 +0000 UTC m=+1.166191818 container start 70238a8ea457980b6a0faed8fa2614d192980f4071a767749879c829fb78feaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_antonelli, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:52:19 np0005558241 podman[184220]: 2025-12-13 07:52:19.345254505 +0000 UTC m=+1.294240017 container attach 70238a8ea457980b6a0faed8fa2614d192980f4071a767749879c829fb78feaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_antonelli, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]: {
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:    "0": [
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:        {
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "devices": [
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "/dev/loop3"
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            ],
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "lv_name": "ceph_lv0",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "lv_size": "21470642176",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "name": "ceph_lv0",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "tags": {
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.cluster_name": "ceph",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.crush_device_class": "",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.encrypted": "0",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.objectstore": "bluestore",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.osd_id": "0",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.type": "block",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.vdo": "0",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.with_tpm": "0"
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            },
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "type": "block",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "vg_name": "ceph_vg0"
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:        }
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:    ],
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:    "1": [
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:        {
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "devices": [
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "/dev/loop4"
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            ],
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "lv_name": "ceph_lv1",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "lv_size": "21470642176",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "name": "ceph_lv1",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "tags": {
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.cluster_name": "ceph",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.crush_device_class": "",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.encrypted": "0",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.objectstore": "bluestore",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.osd_id": "1",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.type": "block",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.vdo": "0",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.with_tpm": "0"
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            },
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "type": "block",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "vg_name": "ceph_vg1"
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:        }
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:    ],
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:    "2": [
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:        {
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "devices": [
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "/dev/loop5"
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            ],
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "lv_name": "ceph_lv2",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "lv_size": "21470642176",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "name": "ceph_lv2",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "tags": {
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.cluster_name": "ceph",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.crush_device_class": "",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.encrypted": "0",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.objectstore": "bluestore",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.osd_id": "2",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.type": "block",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.vdo": "0",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:                "ceph.with_tpm": "0"
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            },
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "type": "block",
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:            "vg_name": "ceph_vg2"
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:        }
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]:    ]
Dec 13 02:52:19 np0005558241 zen_antonelli[184237]: }
Dec 13 02:52:19 np0005558241 systemd[1]: libpod-70238a8ea457980b6a0faed8fa2614d192980f4071a767749879c829fb78feaa.scope: Deactivated successfully.
Dec 13 02:52:19 np0005558241 podman[184246]: 2025-12-13 07:52:19.604027017 +0000 UTC m=+0.022582608 container died 70238a8ea457980b6a0faed8fa2614d192980f4071a767749879c829fb78feaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:52:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 02:52:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.2 total, 600.0 interval#012Cumulative writes: 5755 writes, 24K keys, 5755 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5755 writes, 896 syncs, 6.42 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec 13 02:52:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:20 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ee77d0ec5a69180f523eaee1576f45f24a6e04ddac96e8f1f8803fa36cd9b260-merged.mount: Deactivated successfully.
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:52:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:52:21 np0005558241 podman[184246]: 2025-12-13 07:52:21.191704101 +0000 UTC m=+1.610259672 container remove 70238a8ea457980b6a0faed8fa2614d192980f4071a767749879c829fb78feaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_antonelli, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:52:21 np0005558241 systemd[1]: libpod-conmon-70238a8ea457980b6a0faed8fa2614d192980f4071a767749879c829fb78feaa.scope: Deactivated successfully.
Dec 13 02:52:21 np0005558241 podman[184549]: 2025-12-13 07:52:21.713892823 +0000 UTC m=+0.046365855 container create a2a7e0af1ba6f067fbe4281d6f2802f1ceed1f4b18509c9d5088ef9a1ab8e37b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 02:52:21 np0005558241 systemd[1]: Started libpod-conmon-a2a7e0af1ba6f067fbe4281d6f2802f1ceed1f4b18509c9d5088ef9a1ab8e37b.scope.
Dec 13 02:52:21 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:52:21 np0005558241 podman[184549]: 2025-12-13 07:52:21.690109566 +0000 UTC m=+0.022582628 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:52:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:52:21 np0005558241 podman[184549]: 2025-12-13 07:52:21.843575852 +0000 UTC m=+0.176048924 container init a2a7e0af1ba6f067fbe4281d6f2802f1ceed1f4b18509c9d5088ef9a1ab8e37b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 02:52:21 np0005558241 podman[184549]: 2025-12-13 07:52:21.850994675 +0000 UTC m=+0.183467717 container start a2a7e0af1ba6f067fbe4281d6f2802f1ceed1f4b18509c9d5088ef9a1ab8e37b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_roentgen, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:52:21 np0005558241 podman[184549]: 2025-12-13 07:52:21.8548686 +0000 UTC m=+0.187341642 container attach a2a7e0af1ba6f067fbe4281d6f2802f1ceed1f4b18509c9d5088ef9a1ab8e37b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_roentgen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 02:52:21 np0005558241 modest_roentgen[184633]: 167 167
Dec 13 02:52:21 np0005558241 systemd[1]: libpod-a2a7e0af1ba6f067fbe4281d6f2802f1ceed1f4b18509c9d5088ef9a1ab8e37b.scope: Deactivated successfully.
Dec 13 02:52:21 np0005558241 podman[184549]: 2025-12-13 07:52:21.858038588 +0000 UTC m=+0.190511640 container died a2a7e0af1ba6f067fbe4281d6f2802f1ceed1f4b18509c9d5088ef9a1ab8e37b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_roentgen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 02:52:21 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0e3fd28946336a197c998e790f41dd04d4be861cf9b52084a9b5fbd074337a90-merged.mount: Deactivated successfully.
Dec 13 02:52:21 np0005558241 podman[184549]: 2025-12-13 07:52:21.906913974 +0000 UTC m=+0.239387016 container remove a2a7e0af1ba6f067fbe4281d6f2802f1ceed1f4b18509c9d5088ef9a1ab8e37b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_roentgen, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 02:52:21 np0005558241 systemd[1]: libpod-conmon-a2a7e0af1ba6f067fbe4281d6f2802f1ceed1f4b18509c9d5088ef9a1ab8e37b.scope: Deactivated successfully.
Dec 13 02:52:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:22 np0005558241 podman[184821]: 2025-12-13 07:52:22.081400068 +0000 UTC m=+0.038799168 container create dc86ccd606ba33a2475d2009555294573be022694f2a74ef20fc05e221acffd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 02:52:22 np0005558241 systemd[1]: Started libpod-conmon-dc86ccd606ba33a2475d2009555294573be022694f2a74ef20fc05e221acffd2.scope.
Dec 13 02:52:22 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:52:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa531b7f14cb2549290eb85ce427b673692cc6831fc7c9d8b22ff0cbb8f266a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:52:22 np0005558241 podman[184821]: 2025-12-13 07:52:22.064797559 +0000 UTC m=+0.022196679 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:52:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa531b7f14cb2549290eb85ce427b673692cc6831fc7c9d8b22ff0cbb8f266a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:52:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa531b7f14cb2549290eb85ce427b673692cc6831fc7c9d8b22ff0cbb8f266a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:52:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa531b7f14cb2549290eb85ce427b673692cc6831fc7c9d8b22ff0cbb8f266a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:52:22 np0005558241 podman[184821]: 2025-12-13 07:52:22.174749311 +0000 UTC m=+0.132148431 container init dc86ccd606ba33a2475d2009555294573be022694f2a74ef20fc05e221acffd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 02:52:22 np0005558241 podman[184821]: 2025-12-13 07:52:22.181381674 +0000 UTC m=+0.138780774 container start dc86ccd606ba33a2475d2009555294573be022694f2a74ef20fc05e221acffd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:52:22 np0005558241 podman[184821]: 2025-12-13 07:52:22.186831639 +0000 UTC m=+0.144230739 container attach dc86ccd606ba33a2475d2009555294573be022694f2a74ef20fc05e221acffd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_spence, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 02:52:22 np0005558241 systemd[1]: Stopping OpenSSH server daemon...
Dec 13 02:52:22 np0005558241 systemd[1]: sshd.service: Deactivated successfully.
Dec 13 02:52:22 np0005558241 systemd[1]: Stopped OpenSSH server daemon.
Dec 13 02:52:22 np0005558241 systemd[1]: sshd.service: Consumed 3.378s CPU time, read 32.0K from disk, written 24.0K to disk.
Dec 13 02:52:22 np0005558241 systemd[1]: Stopped target sshd-keygen.target.
Dec 13 02:52:22 np0005558241 systemd[1]: Stopping sshd-keygen.target...
Dec 13 02:52:22 np0005558241 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 02:52:22 np0005558241 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 02:52:22 np0005558241 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 02:52:22 np0005558241 systemd[1]: Reached target sshd-keygen.target.
Dec 13 02:52:22 np0005558241 systemd[1]: Starting OpenSSH server daemon...
Dec 13 02:52:22 np0005558241 systemd[1]: Started OpenSSH server daemon.
Dec 13 02:52:22 np0005558241 lvm[185135]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:52:22 np0005558241 lvm[185135]: VG ceph_vg0 finished
Dec 13 02:52:22 np0005558241 lvm[185136]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:52:22 np0005558241 lvm[185136]: VG ceph_vg1 finished
Dec 13 02:52:22 np0005558241 lvm[185140]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:52:22 np0005558241 lvm[185140]: VG ceph_vg2 finished
Dec 13 02:52:23 np0005558241 recursing_spence[184901]: {}
Dec 13 02:52:23 np0005558241 podman[184821]: 2025-12-13 07:52:23.0896907 +0000 UTC m=+1.047089800 container died dc86ccd606ba33a2475d2009555294573be022694f2a74ef20fc05e221acffd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_spence, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 02:52:23 np0005558241 systemd[1]: libpod-dc86ccd606ba33a2475d2009555294573be022694f2a74ef20fc05e221acffd2.scope: Deactivated successfully.
Dec 13 02:52:23 np0005558241 systemd[1]: libpod-dc86ccd606ba33a2475d2009555294573be022694f2a74ef20fc05e221acffd2.scope: Consumed 1.360s CPU time.
Dec 13 02:52:23 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1aa531b7f14cb2549290eb85ce427b673692cc6831fc7c9d8b22ff0cbb8f266a-merged.mount: Deactivated successfully.
Dec 13 02:52:23 np0005558241 podman[184821]: 2025-12-13 07:52:23.145557678 +0000 UTC m=+1.102956788 container remove dc86ccd606ba33a2475d2009555294573be022694f2a74ef20fc05e221acffd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_spence, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 02:52:23 np0005558241 systemd[1]: libpod-conmon-dc86ccd606ba33a2475d2009555294573be022694f2a74ef20fc05e221acffd2.scope: Deactivated successfully.
Dec 13 02:52:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:52:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:52:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:52:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:52:23 np0005558241 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 02:52:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:24 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:52:24 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:52:24 np0005558241 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 02:52:24 np0005558241 systemd[1]: Starting man-db-cache-update.service...
Dec 13 02:52:24 np0005558241 systemd[1]: Reloading.
Dec 13 02:52:24 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:52:24 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:52:24 np0005558241 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 02:52:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:52:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:28 np0005558241 podman[188393]: 2025-12-13 07:52:28.375958338 +0000 UTC m=+0.063983649 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 02:52:28 np0005558241 podman[188368]: 2025-12-13 07:52:28.411510735 +0000 UTC m=+0.099534906 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 02:52:28 np0005558241 python3.9[188536]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 02:52:28 np0005558241 systemd[1]: Reloading.
Dec 13 02:52:28 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:52:28 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:52:29 np0005558241 python3.9[189825]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 02:52:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:30 np0005558241 systemd[1]: Reloading.
Dec 13 02:52:30 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:52:30 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:52:31 np0005558241 python3.9[191268]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 02:52:31 np0005558241 systemd[1]: Reloading.
Dec 13 02:52:31 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:52:31 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:52:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:52:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:32 np0005558241 python3.9[192286]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 02:52:32 np0005558241 systemd[1]: Reloading.
Dec 13 02:52:32 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:52:32 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:52:33 np0005558241 python3.9[193507]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:52:33 np0005558241 systemd[1]: Reloading.
Dec 13 02:52:33 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:52:33 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:52:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:35 np0005558241 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 02:52:35 np0005558241 systemd[1]: Finished man-db-cache-update.service.
Dec 13 02:52:35 np0005558241 systemd[1]: man-db-cache-update.service: Consumed 12.201s CPU time.
Dec 13 02:52:35 np0005558241 systemd[1]: run-r3d5fd280970442e6be00bd999700e839.service: Deactivated successfully.
Dec 13 02:52:35 np0005558241 python3.9[194834]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:52:35 np0005558241 systemd[1]: Reloading.
Dec 13 02:52:35 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:52:35 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:52:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:36 np0005558241 python3.9[195106]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:52:36 np0005558241 systemd[1]: Reloading.
Dec 13 02:52:36 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:52:36 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:52:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:52:37 np0005558241 python3.9[195296]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:52:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:38 np0005558241 python3.9[195451]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:52:38 np0005558241 systemd[1]: Reloading.
Dec 13 02:52:38 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:52:38 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:52:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:52:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:52:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:52:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:52:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:52:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:52:40 np0005558241 python3.9[195643]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 02:52:40 np0005558241 systemd[1]: Reloading.
Dec 13 02:52:40 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:52:40 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:52:40 np0005558241 systemd[1]: Listening on libvirt proxy daemon socket.
Dec 13 02:52:40 np0005558241 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec 13 02:52:41 np0005558241 python3.9[195836]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:52:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:52:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:42 np0005558241 python3.9[195991]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:52:43 np0005558241 python3.9[196146]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:52:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:44 np0005558241 python3.9[196301]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:52:45 np0005558241 python3.9[196456]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:52:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:46 np0005558241 python3.9[196611]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:52:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:52:47 np0005558241 python3.9[196766]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:52:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:48 np0005558241 python3.9[196921]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:52:49 np0005558241 python3.9[197076]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:52:49 np0005558241 python3.9[197231]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:52:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:50 np0005558241 python3.9[197386]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:52:51 np0005558241 python3.9[197541]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:52:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:52:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:52 np0005558241 python3.9[197696]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:52:53 np0005558241 python3.9[197851]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 02:52:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:54 np0005558241 python3.9[198006]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:52:55 np0005558241 python3.9[198158]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:52:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:52:55.365 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 02:52:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:52:55.367 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 02:52:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:52:55.367 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 02:52:55 np0005558241 python3.9[198310]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:52:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:56 np0005558241 python3.9[198462]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:52:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:52:57 np0005558241 python3.9[198614]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:52:57 np0005558241 python3.9[198766]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:52:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:52:58 np0005558241 podman[198891]: 2025-12-13 07:52:58.620420247 +0000 UTC m=+0.075159494 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 13 02:52:58 np0005558241 podman[198890]: 2025-12-13 07:52:58.656039451 +0000 UTC m=+0.104704348 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller)
Dec 13 02:52:58 np0005558241 python3.9[198956]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:52:59 np0005558241 python3.9[199089]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765612378.0656528-554-76340562804511/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:00 np0005558241 python3.9[199241]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:00 np0005558241 python3.9[199366]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765612379.7903583-554-160550970905072/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:01 np0005558241 python3.9[199518]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:53:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:02 np0005558241 python3.9[199645]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765612381.1327105-554-144186592454018/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:02 np0005558241 python3.9[199797]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:03 np0005558241 python3.9[199922]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765612382.352952-554-61295218920678/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:03 np0005558241 python3.9[200074]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:04 np0005558241 python3.9[200199]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765612383.5070992-554-107595811052551/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:05 np0005558241 python3.9[200351]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:05 np0005558241 python3.9[200476]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765612384.7367978-554-182454760879987/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:06 np0005558241 python3.9[200628]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:53:07 np0005558241 python3.9[200751]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765612386.0032392-554-131711557732035/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:07 np0005558241 python3.9[200903]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:08 np0005558241 python3.9[201028]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765612387.2586112-554-51906839103165/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:09 np0005558241 python3.9[201180]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec 13 02:53:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:53:09
Dec 13 02:53:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:53:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:53:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', '.rgw.root', '.mgr', 'default.rgw.control', 'vms', 'backups', 'images', 'default.rgw.log', 'cephfs.cephfs.data']
Dec 13 02:53:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:53:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:53:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:53:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:53:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:53:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:53:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:53:10 np0005558241 python3.9[201333]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:53:10 np0005558241 python3.9[201485]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:11 np0005558241 python3.9[201637]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:53:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:12 np0005558241 python3.9[201789]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:12 np0005558241 python3.9[201941]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:13 np0005558241 python3.9[202093]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:14 np0005558241 python3.9[202245]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:15 np0005558241 python3.9[202397]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:16 np0005558241 python3.9[202549]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:53:17 np0005558241 python3.9[202701]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:18 np0005558241 python3.9[202853]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:20 np0005558241 python3.9[203005]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:53:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:53:21 np0005558241 python3.9[203157]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:21 np0005558241 python3.9[203309]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:53:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:22 np0005558241 python3.9[203461]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:22 np0005558241 python3.9[203584]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765612401.8664665-775-111081974629760/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:23 np0005558241 python3.9[203786]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:53:24 np0005558241 python3.9[203929]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765612403.093562-775-55200115419158/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:53:25 np0005558241 python3.9[204081]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:53:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:26 np0005558241 python3.9[204204]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765612404.6135316-775-74740285714356/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:26 np0005558241 auditd[696]: Audit daemon rotating log files
Dec 13 02:53:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:53:26 np0005558241 python3.9[204404]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:53:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:53:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:53:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:53:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:53:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:53:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:53:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:53:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:53:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:53:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:53:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:53:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:53:27 np0005558241 python3.9[204560]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765612406.307757-775-75564589250316/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:27 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:53:27 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:53:27 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:53:27 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:53:27 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:53:27 np0005558241 podman[204696]: 2025-12-13 07:53:27.577323423 +0000 UTC m=+0.021919975 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:53:27 np0005558241 podman[204696]: 2025-12-13 07:53:27.673730373 +0000 UTC m=+0.118326915 container create 49705983ecab54d011d1d835487b050631783c4e56f64a32c3b6d6feeb18cd70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:53:27 np0005558241 systemd[1]: Started libpod-conmon-49705983ecab54d011d1d835487b050631783c4e56f64a32c3b6d6feeb18cd70.scope.
Dec 13 02:53:27 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:53:27 np0005558241 podman[204696]: 2025-12-13 07:53:27.819748393 +0000 UTC m=+0.264345015 container init 49705983ecab54d011d1d835487b050631783c4e56f64a32c3b6d6feeb18cd70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hypatia, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 02:53:27 np0005558241 podman[204696]: 2025-12-13 07:53:27.830316726 +0000 UTC m=+0.274913268 container start 49705983ecab54d011d1d835487b050631783c4e56f64a32c3b6d6feeb18cd70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 02:53:27 np0005558241 romantic_hypatia[204728]: 167 167
Dec 13 02:53:27 np0005558241 podman[204696]: 2025-12-13 07:53:27.83696371 +0000 UTC m=+0.281560252 container attach 49705983ecab54d011d1d835487b050631783c4e56f64a32c3b6d6feeb18cd70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hypatia, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 02:53:27 np0005558241 systemd[1]: libpod-49705983ecab54d011d1d835487b050631783c4e56f64a32c3b6d6feeb18cd70.scope: Deactivated successfully.
Dec 13 02:53:27 np0005558241 podman[204696]: 2025-12-13 07:53:27.838631832 +0000 UTC m=+0.283228374 container died 49705983ecab54d011d1d835487b050631783c4e56f64a32c3b6d6feeb18cd70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 02:53:27 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0baf8df0381af87a3d7c2cd52a950a4cbc9d64d6458e38bdf71a5d37be691dc1-merged.mount: Deactivated successfully.
Dec 13 02:53:27 np0005558241 podman[204696]: 2025-12-13 07:53:27.97646555 +0000 UTC m=+0.421062092 container remove 49705983ecab54d011d1d835487b050631783c4e56f64a32c3b6d6feeb18cd70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:53:28 np0005558241 systemd[1]: libpod-conmon-49705983ecab54d011d1d835487b050631783c4e56f64a32c3b6d6feeb18cd70.scope: Deactivated successfully.
Dec 13 02:53:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:28 np0005558241 python3.9[204808]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:28 np0005558241 podman[204816]: 2025-12-13 07:53:28.13537119 +0000 UTC m=+0.030754144 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:53:28 np0005558241 podman[204816]: 2025-12-13 07:53:28.647699074 +0000 UTC m=+0.543082058 container create c4bfdc4c6c5f8c4ebec1eabf541f78b1ac5df05331b84cbe174f088da8afefab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_carson, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:53:28 np0005558241 systemd[1]: Started libpod-conmon-c4bfdc4c6c5f8c4ebec1eabf541f78b1ac5df05331b84cbe174f088da8afefab.scope.
Dec 13 02:53:28 np0005558241 python3.9[204952]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765612407.4937432-775-216177155714032/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:28 np0005558241 podman[204953]: 2025-12-13 07:53:28.776057557 +0000 UTC m=+0.087769367 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 02:53:28 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:53:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf1589e3840ee22aefbf9b163803770022d0cf34ed912b4996fd42eb1ceea7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:53:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf1589e3840ee22aefbf9b163803770022d0cf34ed912b4996fd42eb1ceea7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:53:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf1589e3840ee22aefbf9b163803770022d0cf34ed912b4996fd42eb1ceea7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:53:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf1589e3840ee22aefbf9b163803770022d0cf34ed912b4996fd42eb1ceea7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:53:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf1589e3840ee22aefbf9b163803770022d0cf34ed912b4996fd42eb1ceea7e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:53:28 np0005558241 podman[204816]: 2025-12-13 07:53:28.821282109 +0000 UTC m=+0.716665063 container init c4bfdc4c6c5f8c4ebec1eabf541f78b1ac5df05331b84cbe174f088da8afefab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_carson, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 02:53:28 np0005558241 podman[204954]: 2025-12-13 07:53:28.82211979 +0000 UTC m=+0.128125219 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 02:53:28 np0005558241 podman[204816]: 2025-12-13 07:53:28.832269691 +0000 UTC m=+0.727652635 container start c4bfdc4c6c5f8c4ebec1eabf541f78b1ac5df05331b84cbe174f088da8afefab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_carson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec 13 02:53:28 np0005558241 podman[204816]: 2025-12-13 07:53:28.838348362 +0000 UTC m=+0.733731326 container attach c4bfdc4c6c5f8c4ebec1eabf541f78b1ac5df05331b84cbe174f088da8afefab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 02:53:29 np0005558241 awesome_carson[204986]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:53:29 np0005558241 awesome_carson[204986]: --> All data devices are unavailable
Dec 13 02:53:29 np0005558241 systemd[1]: libpod-c4bfdc4c6c5f8c4ebec1eabf541f78b1ac5df05331b84cbe174f088da8afefab.scope: Deactivated successfully.
Dec 13 02:53:29 np0005558241 podman[204816]: 2025-12-13 07:53:29.362426498 +0000 UTC m=+1.257809462 container died c4bfdc4c6c5f8c4ebec1eabf541f78b1ac5df05331b84cbe174f088da8afefab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_carson, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 02:53:29 np0005558241 python3.9[205166]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:29 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6cf1589e3840ee22aefbf9b163803770022d0cf34ed912b4996fd42eb1ceea7e-merged.mount: Deactivated successfully.
Dec 13 02:53:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:30 np0005558241 python3.9[205304]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765612408.9477537-775-224871722687792/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:30 np0005558241 python3.9[205458]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:30 np0005558241 podman[204816]: 2025-12-13 07:53:30.889137196 +0000 UTC m=+2.784520140 container remove c4bfdc4c6c5f8c4ebec1eabf541f78b1ac5df05331b84cbe174f088da8afefab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 02:53:30 np0005558241 systemd[1]: libpod-conmon-c4bfdc4c6c5f8c4ebec1eabf541f78b1ac5df05331b84cbe174f088da8afefab.scope: Deactivated successfully.
Dec 13 02:53:31 np0005558241 python3.9[205631]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765612410.260492-775-231708454675760/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:31 np0005558241 podman[205644]: 2025-12-13 07:53:31.377384972 +0000 UTC m=+0.025358780 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:53:31 np0005558241 podman[205644]: 2025-12-13 07:53:31.570690956 +0000 UTC m=+0.218664714 container create 88f7e72a7d8be4b54395b1afc98b23c356cb32b54938ad8cdc4cd3c35ad7adb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:53:31 np0005558241 systemd[1]: Started libpod-conmon-88f7e72a7d8be4b54395b1afc98b23c356cb32b54938ad8cdc4cd3c35ad7adb7.scope.
Dec 13 02:53:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:53:31 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:53:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:32 np0005558241 python3.9[205814]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:32 np0005558241 podman[205644]: 2025-12-13 07:53:32.204832271 +0000 UTC m=+0.852806059 container init 88f7e72a7d8be4b54395b1afc98b23c356cb32b54938ad8cdc4cd3c35ad7adb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cannon, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 02:53:32 np0005558241 podman[205644]: 2025-12-13 07:53:32.221475864 +0000 UTC m=+0.869449622 container start 88f7e72a7d8be4b54395b1afc98b23c356cb32b54938ad8cdc4cd3c35ad7adb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:53:32 np0005558241 ecstatic_cannon[205759]: 167 167
Dec 13 02:53:32 np0005558241 systemd[1]: libpod-88f7e72a7d8be4b54395b1afc98b23c356cb32b54938ad8cdc4cd3c35ad7adb7.scope: Deactivated successfully.
Dec 13 02:53:32 np0005558241 podman[205644]: 2025-12-13 07:53:32.397778095 +0000 UTC m=+1.045751863 container attach 88f7e72a7d8be4b54395b1afc98b23c356cb32b54938ad8cdc4cd3c35ad7adb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cannon, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 02:53:32 np0005558241 podman[205644]: 2025-12-13 07:53:32.399494558 +0000 UTC m=+1.047468336 container died 88f7e72a7d8be4b54395b1afc98b23c356cb32b54938ad8cdc4cd3c35ad7adb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 02:53:32 np0005558241 systemd[1]: var-lib-containers-storage-overlay-92843e72e8de21a26b555dee198f4488548c21eb14de1e9f20f04c991e6a92be-merged.mount: Deactivated successfully.
Dec 13 02:53:32 np0005558241 podman[205644]: 2025-12-13 07:53:32.730044565 +0000 UTC m=+1.378018323 container remove 88f7e72a7d8be4b54395b1afc98b23c356cb32b54938ad8cdc4cd3c35ad7adb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 02:53:32 np0005558241 python3.9[205950]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765612411.5654585-775-69962678251521/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:32 np0005558241 systemd[1]: libpod-conmon-88f7e72a7d8be4b54395b1afc98b23c356cb32b54938ad8cdc4cd3c35ad7adb7.scope: Deactivated successfully.
Dec 13 02:53:32 np0005558241 podman[205979]: 2025-12-13 07:53:32.925296976 +0000 UTC m=+0.061141717 container create 2a745d8a4dc930dd4002599fd5fb17a325a2d4f3400add85e01e06f8597350b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_nash, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 02:53:32 np0005558241 systemd[1]: Started libpod-conmon-2a745d8a4dc930dd4002599fd5fb17a325a2d4f3400add85e01e06f8597350b5.scope.
Dec 13 02:53:32 np0005558241 podman[205979]: 2025-12-13 07:53:32.896759969 +0000 UTC m=+0.032604820 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:53:33 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:53:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0312dbc8bafb026d61ab366abae80dee1d0143369ab834b7d088a1a42500aa3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:53:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0312dbc8bafb026d61ab366abae80dee1d0143369ab834b7d088a1a42500aa3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:53:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0312dbc8bafb026d61ab366abae80dee1d0143369ab834b7d088a1a42500aa3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:53:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0312dbc8bafb026d61ab366abae80dee1d0143369ab834b7d088a1a42500aa3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:53:33 np0005558241 podman[205979]: 2025-12-13 07:53:33.036716009 +0000 UTC m=+0.172560760 container init 2a745d8a4dc930dd4002599fd5fb17a325a2d4f3400add85e01e06f8597350b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_nash, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Dec 13 02:53:33 np0005558241 podman[205979]: 2025-12-13 07:53:33.043339354 +0000 UTC m=+0.179184095 container start 2a745d8a4dc930dd4002599fd5fb17a325a2d4f3400add85e01e06f8597350b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 02:53:33 np0005558241 podman[205979]: 2025-12-13 07:53:33.048583864 +0000 UTC m=+0.184428705 container attach 2a745d8a4dc930dd4002599fd5fb17a325a2d4f3400add85e01e06f8597350b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 02:53:33 np0005558241 jovial_nash[206043]: {
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:    "0": [
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:        {
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "devices": [
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "/dev/loop3"
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            ],
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "lv_name": "ceph_lv0",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "lv_size": "21470642176",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "name": "ceph_lv0",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "tags": {
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.cluster_name": "ceph",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.crush_device_class": "",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.encrypted": "0",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.objectstore": "bluestore",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.osd_id": "0",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.type": "block",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.vdo": "0",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.with_tpm": "0"
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            },
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "type": "block",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "vg_name": "ceph_vg0"
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:        }
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:    ],
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:    "1": [
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:        {
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "devices": [
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "/dev/loop4"
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            ],
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "lv_name": "ceph_lv1",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "lv_size": "21470642176",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "name": "ceph_lv1",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "tags": {
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.cluster_name": "ceph",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.crush_device_class": "",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.encrypted": "0",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.objectstore": "bluestore",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.osd_id": "1",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.type": "block",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.vdo": "0",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.with_tpm": "0"
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            },
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "type": "block",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "vg_name": "ceph_vg1"
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:        }
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:    ],
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:    "2": [
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:        {
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "devices": [
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "/dev/loop5"
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            ],
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "lv_name": "ceph_lv2",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "lv_size": "21470642176",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "name": "ceph_lv2",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "tags": {
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.cluster_name": "ceph",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.crush_device_class": "",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.encrypted": "0",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.objectstore": "bluestore",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.osd_id": "2",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.type": "block",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.vdo": "0",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:                "ceph.with_tpm": "0"
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            },
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "type": "block",
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:            "vg_name": "ceph_vg2"
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:        }
Dec 13 02:53:33 np0005558241 jovial_nash[206043]:    ]
Dec 13 02:53:33 np0005558241 jovial_nash[206043]: }
Dec 13 02:53:33 np0005558241 systemd[1]: libpod-2a745d8a4dc930dd4002599fd5fb17a325a2d4f3400add85e01e06f8597350b5.scope: Deactivated successfully.
Dec 13 02:53:33 np0005558241 podman[205979]: 2025-12-13 07:53:33.380146705 +0000 UTC m=+0.515991446 container died 2a745d8a4dc930dd4002599fd5fb17a325a2d4f3400add85e01e06f8597350b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:53:33 np0005558241 python3.9[206133]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:33 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0312dbc8bafb026d61ab366abae80dee1d0143369ab834b7d088a1a42500aa3a-merged.mount: Deactivated successfully.
Dec 13 02:53:33 np0005558241 podman[205979]: 2025-12-13 07:53:33.546594623 +0000 UTC m=+0.682439364 container remove 2a745d8a4dc930dd4002599fd5fb17a325a2d4f3400add85e01e06f8597350b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:53:33 np0005558241 systemd[1]: libpod-conmon-2a745d8a4dc930dd4002599fd5fb17a325a2d4f3400add85e01e06f8597350b5.scope: Deactivated successfully.
Dec 13 02:53:34 np0005558241 python3.9[206322]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765612412.9396317-775-22935490409952/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:34 np0005558241 podman[206335]: 2025-12-13 07:53:34.095927475 +0000 UTC m=+0.109015624 container create 5fa91e088ba5b25d4b4d4d34546eb03f34ed120c00faa30b23f671bc36a8fd4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_faraday, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:53:34 np0005558241 podman[206335]: 2025-12-13 07:53:34.014008313 +0000 UTC m=+0.027096472 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:53:34 np0005558241 systemd[1]: Started libpod-conmon-5fa91e088ba5b25d4b4d4d34546eb03f34ed120c00faa30b23f671bc36a8fd4d.scope.
Dec 13 02:53:34 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:53:34 np0005558241 podman[206335]: 2025-12-13 07:53:34.461731796 +0000 UTC m=+0.474819955 container init 5fa91e088ba5b25d4b4d4d34546eb03f34ed120c00faa30b23f671bc36a8fd4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_faraday, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 02:53:34 np0005558241 podman[206335]: 2025-12-13 07:53:34.475887787 +0000 UTC m=+0.488975966 container start 5fa91e088ba5b25d4b4d4d34546eb03f34ed120c00faa30b23f671bc36a8fd4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_faraday, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:53:34 np0005558241 systemd[1]: libpod-5fa91e088ba5b25d4b4d4d34546eb03f34ed120c00faa30b23f671bc36a8fd4d.scope: Deactivated successfully.
Dec 13 02:53:34 np0005558241 flamboyant_faraday[206381]: 167 167
Dec 13 02:53:34 np0005558241 conmon[206381]: conmon 5fa91e088ba5b25d4b4d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5fa91e088ba5b25d4b4d4d34546eb03f34ed120c00faa30b23f671bc36a8fd4d.scope/container/memory.events
Dec 13 02:53:34 np0005558241 podman[206335]: 2025-12-13 07:53:34.497177215 +0000 UTC m=+0.510265404 container attach 5fa91e088ba5b25d4b4d4d34546eb03f34ed120c00faa30b23f671bc36a8fd4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 02:53:34 np0005558241 podman[206335]: 2025-12-13 07:53:34.49778243 +0000 UTC m=+0.510870579 container died 5fa91e088ba5b25d4b4d4d34546eb03f34ed120c00faa30b23f671bc36a8fd4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:53:34 np0005558241 python3.9[206503]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:35 np0005558241 systemd[1]: var-lib-containers-storage-overlay-17293295bfcca7b045f7f0dfca8dcfc97d38bf211adde02df62ca9c33a30402f-merged.mount: Deactivated successfully.
Dec 13 02:53:35 np0005558241 python3.9[206640]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765612414.1660826-775-22119279947920/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:36 np0005558241 podman[206335]: 2025-12-13 07:53:36.030227119 +0000 UTC m=+2.043315268 container remove 5fa91e088ba5b25d4b4d4d34546eb03f34ed120c00faa30b23f671bc36a8fd4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_faraday, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:53:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:36 np0005558241 systemd[1]: libpod-conmon-5fa91e088ba5b25d4b4d4d34546eb03f34ed120c00faa30b23f671bc36a8fd4d.scope: Deactivated successfully.
Dec 13 02:53:36 np0005558241 python3.9[206792]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:36 np0005558241 podman[206800]: 2025-12-13 07:53:36.22381621 +0000 UTC m=+0.047913719 container create 8a254e772ff3169f5a52a3fba4251c614ddb11c484021f4f223493a68664c29e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_ride, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:53:36 np0005558241 systemd[1]: Started libpod-conmon-8a254e772ff3169f5a52a3fba4251c614ddb11c484021f4f223493a68664c29e.scope.
Dec 13 02:53:36 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:53:36 np0005558241 podman[206800]: 2025-12-13 07:53:36.204923961 +0000 UTC m=+0.029021500 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:53:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f4845e57601593e6889ec1b1a24b31e0da21cd7cced42913649d2d5bce533d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:53:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f4845e57601593e6889ec1b1a24b31e0da21cd7cced42913649d2d5bce533d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:53:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f4845e57601593e6889ec1b1a24b31e0da21cd7cced42913649d2d5bce533d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:53:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56f4845e57601593e6889ec1b1a24b31e0da21cd7cced42913649d2d5bce533d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:53:36 np0005558241 podman[206800]: 2025-12-13 07:53:36.32950005 +0000 UTC m=+0.153597589 container init 8a254e772ff3169f5a52a3fba4251c614ddb11c484021f4f223493a68664c29e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_ride, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:53:36 np0005558241 podman[206800]: 2025-12-13 07:53:36.339398366 +0000 UTC m=+0.163495875 container start 8a254e772ff3169f5a52a3fba4251c614ddb11c484021f4f223493a68664c29e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 02:53:36 np0005558241 podman[206800]: 2025-12-13 07:53:36.348493921 +0000 UTC m=+0.172591450 container attach 8a254e772ff3169f5a52a3fba4251c614ddb11c484021f4f223493a68664c29e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 02:53:36 np0005558241 python3.9[206943]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765612415.6441965-775-71547519776934/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:53:37 np0005558241 lvm[207140]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:53:37 np0005558241 lvm[207140]: VG ceph_vg0 finished
Dec 13 02:53:37 np0005558241 lvm[207141]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:53:37 np0005558241 lvm[207141]: VG ceph_vg1 finished
Dec 13 02:53:37 np0005558241 lvm[207147]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:53:37 np0005558241 lvm[207147]: VG ceph_vg2 finished
Dec 13 02:53:37 np0005558241 beautiful_ride[206862]: {}
Dec 13 02:53:37 np0005558241 systemd[1]: libpod-8a254e772ff3169f5a52a3fba4251c614ddb11c484021f4f223493a68664c29e.scope: Deactivated successfully.
Dec 13 02:53:37 np0005558241 systemd[1]: libpod-8a254e772ff3169f5a52a3fba4251c614ddb11c484021f4f223493a68664c29e.scope: Consumed 1.478s CPU time.
Dec 13 02:53:37 np0005558241 podman[206800]: 2025-12-13 07:53:37.243375602 +0000 UTC m=+1.067473111 container died 8a254e772ff3169f5a52a3fba4251c614ddb11c484021f4f223493a68664c29e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_ride, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:53:37 np0005558241 python3.9[207173]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:37 np0005558241 systemd[1]: var-lib-containers-storage-overlay-56f4845e57601593e6889ec1b1a24b31e0da21cd7cced42913649d2d5bce533d-merged.mount: Deactivated successfully.
Dec 13 02:53:37 np0005558241 podman[206800]: 2025-12-13 07:53:37.625024126 +0000 UTC m=+1.449121635 container remove 8a254e772ff3169f5a52a3fba4251c614ddb11c484021f4f223493a68664c29e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_ride, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 02:53:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:53:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:53:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:53:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:53:37 np0005558241 systemd[1]: libpod-conmon-8a254e772ff3169f5a52a3fba4251c614ddb11c484021f4f223493a68664c29e.scope: Deactivated successfully.
Dec 13 02:53:37 np0005558241 python3.9[207310]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765612416.8543026-775-156162166828985/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:38 np0005558241 python3.9[207487]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:53:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:53:39 np0005558241 python3.9[207610]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765612418.1073542-775-237732500822647/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:39 np0005558241 python3.9[207762]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:53:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:53:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:53:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:53:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:53:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:53:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:53:40 np0005558241 python3.9[207885]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765612419.472709-775-116093875586988/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:41 np0005558241 python3.9[208035]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:53:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:53:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:42 np0005558241 python3.9[208190]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 13 02:53:43 np0005558241 dbus-broker-launch[768]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec 13 02:53:44 np0005558241 python3.9[208346]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:44 np0005558241 python3.9[208498]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:45 np0005558241 python3.9[208650]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:45 np0005558241 python3.9[208802]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:46 np0005558241 python3.9[208954]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:53:47 np0005558241 python3.9[209106]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:47 np0005558241 python3.9[209258]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:48 np0005558241 python3.9[209410]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:49 np0005558241 python3.9[209562]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:49 np0005558241 python3.9[209714]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:50 np0005558241 python3.9[209866]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:53:50 np0005558241 systemd[1]: Reloading.
Dec 13 02:53:50 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:53:50 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:53:50 np0005558241 systemd[1]: Starting libvirt logging daemon socket...
Dec 13 02:53:51 np0005558241 systemd[1]: Listening on libvirt logging daemon socket.
Dec 13 02:53:51 np0005558241 systemd[1]: Starting libvirt logging daemon admin socket...
Dec 13 02:53:51 np0005558241 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 13 02:53:51 np0005558241 systemd[1]: Starting libvirt logging daemon...
Dec 13 02:53:51 np0005558241 systemd[1]: Started libvirt logging daemon.
Dec 13 02:53:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:53:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:52 np0005558241 python3.9[210059]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:53:52 np0005558241 systemd[1]: Reloading.
Dec 13 02:53:52 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:53:52 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:53:52 np0005558241 systemd[1]: Starting libvirt nodedev daemon socket...
Dec 13 02:53:52 np0005558241 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 13 02:53:52 np0005558241 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 13 02:53:52 np0005558241 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 13 02:53:52 np0005558241 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 13 02:53:52 np0005558241 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 13 02:53:52 np0005558241 systemd[1]: Starting libvirt nodedev daemon...
Dec 13 02:53:52 np0005558241 systemd[1]: Started libvirt nodedev daemon.
Dec 13 02:53:53 np0005558241 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 13 02:53:53 np0005558241 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 13 02:53:53 np0005558241 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec 13 02:53:53 np0005558241 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 13 02:53:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:54 np0005558241 python3.9[210283]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:53:54 np0005558241 systemd[1]: Reloading.
Dec 13 02:53:54 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:53:54 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:53:54 np0005558241 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 13 02:53:54 np0005558241 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 13 02:53:54 np0005558241 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 13 02:53:54 np0005558241 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 13 02:53:54 np0005558241 systemd[1]: Starting libvirt proxy daemon...
Dec 13 02:53:54 np0005558241 systemd[1]: Started libvirt proxy daemon.
Dec 13 02:53:54 np0005558241 setroubleshoot[210171]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l c380133e-52ad-4ea2-8bb6-bbfbf2696abc
Dec 13 02:53:54 np0005558241 setroubleshoot[210171]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Dec 13 02:53:54 np0005558241 setroubleshoot[210171]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l c380133e-52ad-4ea2-8bb6-bbfbf2696abc
Dec 13 02:53:54 np0005558241 setroubleshoot[210171]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Dec 13 02:53:55 np0005558241 python3.9[210497]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:53:55 np0005558241 systemd[1]: Reloading.
Dec 13 02:53:55 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:53:55 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:53:55.366 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:53:55.368 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:53:55.368 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:53:55 np0005558241 systemd[1]: Listening on libvirt locking daemon socket.
Dec 13 02:53:55 np0005558241 systemd[1]: Starting libvirt QEMU daemon socket...
Dec 13 02:53:55 np0005558241 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 02:53:55 np0005558241 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 13 02:53:55 np0005558241 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 13 02:53:55 np0005558241 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 13 02:53:55 np0005558241 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 13 02:53:55 np0005558241 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 13 02:53:55 np0005558241 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 13 02:53:55 np0005558241 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 13 02:53:55 np0005558241 systemd[1]: Starting libvirt QEMU daemon...
Dec 13 02:53:55 np0005558241 systemd[1]: Started libvirt QEMU daemon.
Dec 13 02:53:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:53:56 np0005558241 python3.9[210713]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:53:56 np0005558241 systemd[1]: Reloading.
Dec 13 02:53:56 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:53:56 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:53:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:53:57 np0005558241 systemd[1]: Starting libvirt secret daemon socket...
Dec 13 02:53:57 np0005558241 systemd[1]: Listening on libvirt secret daemon socket.
Dec 13 02:53:57 np0005558241 systemd[1]: Starting libvirt secret daemon admin socket...
Dec 13 02:53:57 np0005558241 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 13 02:53:57 np0005558241 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 13 02:53:57 np0005558241 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 13 02:53:57 np0005558241 systemd[1]: Starting libvirt secret daemon...
Dec 13 02:53:57 np0005558241 systemd[1]: Started libvirt secret daemon.
Dec 13 02:53:58 np0005558241 python3.9[210925]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:53:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 13 02:53:58 np0005558241 python3.9[211077]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 02:53:58 np0005558241 podman[211126]: 2025-12-13 07:53:58.983679917 +0000 UTC m=+0.063307450 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 13 02:53:59 np0005558241 podman[211111]: 2025-12-13 07:53:59.050974036 +0000 UTC m=+0.130546948 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 02:53:59 np0005558241 python3.9[211272]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:54:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Dec 13 02:54:00 np0005558241 python3.9[211426]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 02:54:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:54:02 np0005558241 python3.9[211576]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:54:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Dec 13 02:54:02 np0005558241 python3.9[211697]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765612441.397215-1133-143305114463016/.source.xml follow=False _original_basename=secret.xml.j2 checksum=61e3b93075eb21f285df6180cf802903fecb2258 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:54:03 np0005558241 python3.9[211849]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 18ee9de6-e00b-571b-ab9b-b7aab06174df#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:54:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 02:54:04 np0005558241 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 13 02:54:04 np0005558241 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.083s CPU time.
Dec 13 02:54:04 np0005558241 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 13 02:54:05 np0005558241 python3.9[212011]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:54:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 02:54:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:54:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 02:54:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:54:09
Dec 13 02:54:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:54:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:54:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'backups', 'vms', 'images', '.rgw.root', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log']
Dec 13 02:54:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:54:09 np0005558241 python3.9[212474]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:54:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Dec 13 02:54:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:54:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:54:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:54:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:54:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:54:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:54:10 np0005558241 python3.9[212626]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:54:10 np0005558241 python3.9[212749]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765612449.5018442-1188-252510435516957/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:54:11 np0005558241 python3.9[212901]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:54:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:54:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 3 op/s
Dec 13 02:54:12 np0005558241 python3.9[213053]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:54:13 np0005558241 python3.9[213131]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:54:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
Dec 13 02:54:14 np0005558241 python3.9[213283]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:54:15 np0005558241 python3.9[213361]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.svw6m4kk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:54:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 0 B/s wr, 13 op/s
Dec 13 02:54:16 np0005558241 python3.9[213513]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:54:16 np0005558241 python3.9[213591]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:54:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:54:16 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec 13 02:54:16 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:54:16.949695) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 02:54:16 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec 13 02:54:16 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612456949812, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1781, "num_deletes": 507, "total_data_size": 2666104, "memory_usage": 2709536, "flush_reason": "Manual Compaction"}
Dec 13 02:54:16 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612457211001, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 2602435, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13544, "largest_seqno": 15324, "table_properties": {"data_size": 2594579, "index_size": 4286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 18275, "raw_average_key_size": 18, "raw_value_size": 2577008, "raw_average_value_size": 2603, "num_data_blocks": 195, "num_entries": 990, "num_filter_entries": 990, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765612264, "oldest_key_time": 1765612264, "file_creation_time": 1765612456, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 261377 microseconds, and 8328 cpu microseconds.
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:54:17.211059) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 2602435 bytes OK
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:54:17.211102) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:54:17.313615) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:54:17.313671) EVENT_LOG_v1 {"time_micros": 1765612457313659, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:54:17.313744) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2657378, prev total WAL file size 2657378, number of live WAL files 2.
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:54:17.315052) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(2541KB)], [32(7552KB)]
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612457315191, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 10336045, "oldest_snapshot_seqno": -1}
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4061 keys, 8353631 bytes, temperature: kUnknown
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612457694263, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 8353631, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8323775, "index_size": 18610, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 99684, "raw_average_key_size": 24, "raw_value_size": 8247653, "raw_average_value_size": 2030, "num_data_blocks": 788, "num_entries": 4061, "num_filter_entries": 4061, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765612457, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:54:17.694626) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 8353631 bytes
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:54:17.705710) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 27.3 rd, 22.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 7.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(7.2) write-amplify(3.2) OK, records in: 5088, records dropped: 1027 output_compression: NoCompression
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:54:17.705787) EVENT_LOG_v1 {"time_micros": 1765612457705752, "job": 14, "event": "compaction_finished", "compaction_time_micros": 379171, "compaction_time_cpu_micros": 21035, "output_level": 6, "num_output_files": 1, "total_output_size": 8353631, "num_input_records": 5088, "num_output_records": 4061, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612457706897, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612457708579, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:54:17.314946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:54:17.708704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:54:17.708711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:54:17.708712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:54:17.708714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:54:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:54:17.708724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:54:17 np0005558241 python3.9[213743]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:54:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Dec 13 02:54:18 np0005558241 python3[213896]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 13 02:54:19 np0005558241 python3.9[214048]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Dec 13 02:54:20 np0005558241 python3.9[214126]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:54:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:54:20 np0005558241 python3.9[214278]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:54:21 np0005558241 python3.9[214356]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:54:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:54:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Dec 13 02:54:22 np0005558241 python3.9[214508]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:54:22 np0005558241 python3.9[214586]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:54:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Dec 13 02:54:24 np0005558241 python3.9[214738]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:54:24 np0005558241 python3.9[214816]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:54:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Dec 13 02:54:26 np0005558241 python3.9[214968]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:54:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:54:27 np0005558241 python3.9[215093]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765612466.0890324-1313-121522379506009/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:54:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Dec 13 02:54:28 np0005558241 python3.9[215245]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:54:28 np0005558241 python3.9[215397]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:54:29 np0005558241 podman[215525]: 2025-12-13 07:54:29.760969573 +0000 UTC m=+0.080365503 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Dec 13 02:54:29 np0005558241 podman[215524]: 2025-12-13 07:54:29.803641892 +0000 UTC m=+0.123248298 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 13 02:54:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Dec 13 02:54:30 np0005558241 python3.9[215578]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:54:31 np0005558241 python3.9[215750]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:54:31 np0005558241 python3.9[215903]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:54:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:54:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:54:32 np0005558241 python3.9[216057]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:54:33 np0005558241 python3.9[216212]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:54:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:54:34 np0005558241 python3.9[216364]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:54:35 np0005558241 python3.9[216487]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765612473.7905812-1385-91187459521990/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:54:35 np0005558241 python3.9[216639]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:54:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:54:36 np0005558241 python3.9[216762]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765612475.2173452-1400-154041429926709/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:54:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:54:37 np0005558241 python3.9[216914]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:54:37 np0005558241 python3.9[217037]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765612476.7342257-1415-79908397950419/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:54:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:54:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 02:54:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 02:54:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:54:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:54:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:54:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:54:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:54:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:54:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:54:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:54:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:54:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:54:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:54:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:54:39 np0005558241 python3.9[217291]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:54:39 np0005558241 systemd[1]: Reloading.
Dec 13 02:54:39 np0005558241 podman[217333]: 2025-12-13 07:54:39.373214049 +0000 UTC m=+0.050602386 container create 7657147fe8849597c3f466d47ad6bdc2f95a38b1ac0a8acfcb691fd919436e3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_almeida, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:54:39 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:54:39 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:54:39 np0005558241 podman[217333]: 2025-12-13 07:54:39.353143761 +0000 UTC m=+0.030532118 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:54:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 02:54:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:54:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:54:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:54:39 np0005558241 systemd[1]: Started libpod-conmon-7657147fe8849597c3f466d47ad6bdc2f95a38b1ac0a8acfcb691fd919436e3e.scope.
Dec 13 02:54:39 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:54:39 np0005558241 podman[217333]: 2025-12-13 07:54:39.758984575 +0000 UTC m=+0.436372912 container init 7657147fe8849597c3f466d47ad6bdc2f95a38b1ac0a8acfcb691fd919436e3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_almeida, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:54:39 np0005558241 systemd[1]: Reached target edpm_libvirt.target.
Dec 13 02:54:39 np0005558241 podman[217333]: 2025-12-13 07:54:39.772800157 +0000 UTC m=+0.450188484 container start 7657147fe8849597c3f466d47ad6bdc2f95a38b1ac0a8acfcb691fd919436e3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 02:54:39 np0005558241 podman[217333]: 2025-12-13 07:54:39.77813487 +0000 UTC m=+0.455523207 container attach 7657147fe8849597c3f466d47ad6bdc2f95a38b1ac0a8acfcb691fd919436e3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_almeida, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 02:54:39 np0005558241 upbeat_almeida[217385]: 167 167
Dec 13 02:54:39 np0005558241 systemd[1]: libpod-7657147fe8849597c3f466d47ad6bdc2f95a38b1ac0a8acfcb691fd919436e3e.scope: Deactivated successfully.
Dec 13 02:54:39 np0005558241 podman[217333]: 2025-12-13 07:54:39.779782581 +0000 UTC m=+0.457170918 container died 7657147fe8849597c3f466d47ad6bdc2f95a38b1ac0a8acfcb691fd919436e3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_almeida, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 02:54:39 np0005558241 systemd[1]: var-lib-containers-storage-overlay-23ae6fea5d0a73dc6b3a4789e94f26abae4070fa81a72d18f3315ccef7079143-merged.mount: Deactivated successfully.
Dec 13 02:54:39 np0005558241 podman[217333]: 2025-12-13 07:54:39.839633805 +0000 UTC m=+0.517022142 container remove 7657147fe8849597c3f466d47ad6bdc2f95a38b1ac0a8acfcb691fd919436e3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:54:39 np0005558241 systemd[1]: libpod-conmon-7657147fe8849597c3f466d47ad6bdc2f95a38b1ac0a8acfcb691fd919436e3e.scope: Deactivated successfully.
Dec 13 02:54:40 np0005558241 podman[217413]: 2025-12-13 07:54:40.027826821 +0000 UTC m=+0.045420927 container create 848946d474ba51fd49aeddf8e76e456c396fc5c5376119ef0400c107b22ce106 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:54:40 np0005558241 systemd[1]: Started libpod-conmon-848946d474ba51fd49aeddf8e76e456c396fc5c5376119ef0400c107b22ce106.scope.
Dec 13 02:54:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:54:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:54:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:54:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:54:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:54:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:54:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:54:40 np0005558241 podman[217413]: 2025-12-13 07:54:40.009784704 +0000 UTC m=+0.027378850 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:54:40 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:54:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9b74171e77442215be8149aa91ea8884f8205c2420ff6a09b9947b54d15fcd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:54:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9b74171e77442215be8149aa91ea8884f8205c2420ff6a09b9947b54d15fcd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:54:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9b74171e77442215be8149aa91ea8884f8205c2420ff6a09b9947b54d15fcd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:54:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9b74171e77442215be8149aa91ea8884f8205c2420ff6a09b9947b54d15fcd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:54:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9b74171e77442215be8149aa91ea8884f8205c2420ff6a09b9947b54d15fcd3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:54:40 np0005558241 podman[217413]: 2025-12-13 07:54:40.129615306 +0000 UTC m=+0.147209442 container init 848946d474ba51fd49aeddf8e76e456c396fc5c5376119ef0400c107b22ce106 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_neumann, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 02:54:40 np0005558241 podman[217413]: 2025-12-13 07:54:40.140084775 +0000 UTC m=+0.157678881 container start 848946d474ba51fd49aeddf8e76e456c396fc5c5376119ef0400c107b22ce106 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:54:40 np0005558241 podman[217413]: 2025-12-13 07:54:40.144563106 +0000 UTC m=+0.162157262 container attach 848946d474ba51fd49aeddf8e76e456c396fc5c5376119ef0400c107b22ce106 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 02:54:40 np0005558241 pedantic_neumann[217446]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:54:40 np0005558241 pedantic_neumann[217446]: --> All data devices are unavailable
Dec 13 02:54:40 np0005558241 systemd[1]: libpod-848946d474ba51fd49aeddf8e76e456c396fc5c5376119ef0400c107b22ce106.scope: Deactivated successfully.
Dec 13 02:54:40 np0005558241 podman[217413]: 2025-12-13 07:54:40.627465631 +0000 UTC m=+0.645059747 container died 848946d474ba51fd49aeddf8e76e456c396fc5c5376119ef0400c107b22ce106 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:54:40 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c9b74171e77442215be8149aa91ea8884f8205c2420ff6a09b9947b54d15fcd3-merged.mount: Deactivated successfully.
Dec 13 02:54:40 np0005558241 podman[217413]: 2025-12-13 07:54:40.677275366 +0000 UTC m=+0.694869472 container remove 848946d474ba51fd49aeddf8e76e456c396fc5c5376119ef0400c107b22ce106 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_neumann, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 02:54:40 np0005558241 systemd[1]: libpod-conmon-848946d474ba51fd49aeddf8e76e456c396fc5c5376119ef0400c107b22ce106.scope: Deactivated successfully.
Dec 13 02:54:41 np0005558241 podman[217663]: 2025-12-13 07:54:41.186394821 +0000 UTC m=+0.048256848 container create 1e5f43b9fe652e930ea946637b6d3e1d009e75d68df9488d6fa20e8859cb23a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_darwin, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:54:41 np0005558241 systemd[1]: Started libpod-conmon-1e5f43b9fe652e930ea946637b6d3e1d009e75d68df9488d6fa20e8859cb23a1.scope.
Dec 13 02:54:41 np0005558241 podman[217663]: 2025-12-13 07:54:41.163552694 +0000 UTC m=+0.025414771 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:54:41 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:54:41 np0005558241 podman[217663]: 2025-12-13 07:54:41.285029907 +0000 UTC m=+0.146891964 container init 1e5f43b9fe652e930ea946637b6d3e1d009e75d68df9488d6fa20e8859cb23a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:54:41 np0005558241 podman[217663]: 2025-12-13 07:54:41.293873736 +0000 UTC m=+0.155735763 container start 1e5f43b9fe652e930ea946637b6d3e1d009e75d68df9488d6fa20e8859cb23a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_darwin, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:54:41 np0005558241 podman[217663]: 2025-12-13 07:54:41.297784263 +0000 UTC m=+0.159646290 container attach 1e5f43b9fe652e930ea946637b6d3e1d009e75d68df9488d6fa20e8859cb23a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 02:54:41 np0005558241 objective_darwin[217693]: 167 167
Dec 13 02:54:41 np0005558241 systemd[1]: libpod-1e5f43b9fe652e930ea946637b6d3e1d009e75d68df9488d6fa20e8859cb23a1.scope: Deactivated successfully.
Dec 13 02:54:41 np0005558241 podman[217663]: 2025-12-13 07:54:41.302464069 +0000 UTC m=+0.164326116 container died 1e5f43b9fe652e930ea946637b6d3e1d009e75d68df9488d6fa20e8859cb23a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 02:54:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay-91096c28fe38a75f762042f780c764c0917659bf1ca06a15b26543dbcf1234b9-merged.mount: Deactivated successfully.
Dec 13 02:54:41 np0005558241 podman[217663]: 2025-12-13 07:54:41.35007422 +0000 UTC m=+0.211936247 container remove 1e5f43b9fe652e930ea946637b6d3e1d009e75d68df9488d6fa20e8859cb23a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_darwin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 02:54:41 np0005558241 systemd[1]: libpod-conmon-1e5f43b9fe652e930ea946637b6d3e1d009e75d68df9488d6fa20e8859cb23a1.scope: Deactivated successfully.
Dec 13 02:54:41 np0005558241 python3.9[217688]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 13 02:54:41 np0005558241 systemd[1]: Reloading.
Dec 13 02:54:41 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:54:41 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:54:41 np0005558241 podman[217719]: 2025-12-13 07:54:41.53318392 +0000 UTC m=+0.025169135 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:54:41 np0005558241 systemd[1]: Reloading.
Dec 13 02:54:41 np0005558241 podman[217719]: 2025-12-13 07:54:41.866953637 +0000 UTC m=+0.358938842 container create 267a24f2fe933bea67694c54520f3a538860b07ff09c7ad9051025802db4e43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 02:54:41 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:54:41 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:54:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:54:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:54:43 np0005558241 systemd[1]: Started libpod-conmon-267a24f2fe933bea67694c54520f3a538860b07ff09c7ad9051025802db4e43a.scope.
Dec 13 02:54:43 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:54:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/083422e948923769af46fe7318d5ca9bab6843e3ac80f164bd247b3ef710b7cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:54:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/083422e948923769af46fe7318d5ca9bab6843e3ac80f164bd247b3ef710b7cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:54:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/083422e948923769af46fe7318d5ca9bab6843e3ac80f164bd247b3ef710b7cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:54:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/083422e948923769af46fe7318d5ca9bab6843e3ac80f164bd247b3ef710b7cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:54:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:54:44 np0005558241 systemd[1]: session-52.scope: Deactivated successfully.
Dec 13 02:54:44 np0005558241 systemd[1]: session-52.scope: Consumed 3min 41.991s CPU time.
Dec 13 02:54:44 np0005558241 systemd-logind[787]: Session 52 logged out. Waiting for processes to exit.
Dec 13 02:54:44 np0005558241 systemd-logind[787]: Removed session 52.
Dec 13 02:54:44 np0005558241 podman[217719]: 2025-12-13 07:54:44.236862693 +0000 UTC m=+2.728847898 container init 267a24f2fe933bea67694c54520f3a538860b07ff09c7ad9051025802db4e43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_napier, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 02:54:44 np0005558241 podman[217719]: 2025-12-13 07:54:44.247725803 +0000 UTC m=+2.739710998 container start 267a24f2fe933bea67694c54520f3a538860b07ff09c7ad9051025802db4e43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_napier, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 02:54:44 np0005558241 podman[217719]: 2025-12-13 07:54:44.407260009 +0000 UTC m=+2.899245234 container attach 267a24f2fe933bea67694c54520f3a538860b07ff09c7ad9051025802db4e43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 02:54:44 np0005558241 charming_napier[217833]: {
Dec 13 02:54:44 np0005558241 charming_napier[217833]:    "0": [
Dec 13 02:54:44 np0005558241 charming_napier[217833]:        {
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "devices": [
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "/dev/loop3"
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            ],
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "lv_name": "ceph_lv0",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "lv_size": "21470642176",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "name": "ceph_lv0",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "tags": {
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.cluster_name": "ceph",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.crush_device_class": "",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.encrypted": "0",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.objectstore": "bluestore",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.osd_id": "0",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.type": "block",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.vdo": "0",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.with_tpm": "0"
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            },
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "type": "block",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "vg_name": "ceph_vg0"
Dec 13 02:54:44 np0005558241 charming_napier[217833]:        }
Dec 13 02:54:44 np0005558241 charming_napier[217833]:    ],
Dec 13 02:54:44 np0005558241 charming_napier[217833]:    "1": [
Dec 13 02:54:44 np0005558241 charming_napier[217833]:        {
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "devices": [
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "/dev/loop4"
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            ],
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "lv_name": "ceph_lv1",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "lv_size": "21470642176",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "name": "ceph_lv1",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "tags": {
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.cluster_name": "ceph",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.crush_device_class": "",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.encrypted": "0",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.objectstore": "bluestore",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.osd_id": "1",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.type": "block",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.vdo": "0",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.with_tpm": "0"
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            },
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "type": "block",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "vg_name": "ceph_vg1"
Dec 13 02:54:44 np0005558241 charming_napier[217833]:        }
Dec 13 02:54:44 np0005558241 charming_napier[217833]:    ],
Dec 13 02:54:44 np0005558241 charming_napier[217833]:    "2": [
Dec 13 02:54:44 np0005558241 charming_napier[217833]:        {
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "devices": [
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "/dev/loop5"
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            ],
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "lv_name": "ceph_lv2",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "lv_size": "21470642176",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "name": "ceph_lv2",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "tags": {
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.cluster_name": "ceph",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.crush_device_class": "",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.encrypted": "0",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.objectstore": "bluestore",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.osd_id": "2",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.type": "block",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.vdo": "0",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:                "ceph.with_tpm": "0"
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            },
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "type": "block",
Dec 13 02:54:44 np0005558241 charming_napier[217833]:            "vg_name": "ceph_vg2"
Dec 13 02:54:44 np0005558241 charming_napier[217833]:        }
Dec 13 02:54:44 np0005558241 charming_napier[217833]:    ]
Dec 13 02:54:44 np0005558241 charming_napier[217833]: }
Dec 13 02:54:44 np0005558241 systemd[1]: libpod-267a24f2fe933bea67694c54520f3a538860b07ff09c7ad9051025802db4e43a.scope: Deactivated successfully.
Dec 13 02:54:44 np0005558241 conmon[217833]: conmon 267a24f2fe933bea6769 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-267a24f2fe933bea67694c54520f3a538860b07ff09c7ad9051025802db4e43a.scope/container/memory.events
Dec 13 02:54:44 np0005558241 podman[217719]: 2025-12-13 07:54:44.673153272 +0000 UTC m=+3.165138467 container died 267a24f2fe933bea67694c54520f3a538860b07ff09c7ad9051025802db4e43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_napier, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 02:54:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:54:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:54:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-083422e948923769af46fe7318d5ca9bab6843e3ac80f164bd247b3ef710b7cc-merged.mount: Deactivated successfully.
Dec 13 02:54:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:54:49 np0005558241 podman[217719]: 2025-12-13 07:54:49.074022091 +0000 UTC m=+7.566007286 container remove 267a24f2fe933bea67694c54520f3a538860b07ff09c7ad9051025802db4e43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_napier, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 02:54:49 np0005558241 systemd[1]: libpod-conmon-267a24f2fe933bea67694c54520f3a538860b07ff09c7ad9051025802db4e43a.scope: Deactivated successfully.
Dec 13 02:54:49 np0005558241 podman[217914]: 2025-12-13 07:54:49.558773531 +0000 UTC m=+0.019926825 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:54:49 np0005558241 podman[217914]: 2025-12-13 07:54:49.818124263 +0000 UTC m=+0.279277547 container create 4e9a4e19f3d5a8f785b63b3bcf09a3ea655240b94a9e2f38720bb58196b64204 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_solomon, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 02:54:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:54:50 np0005558241 systemd[1]: Started libpod-conmon-4e9a4e19f3d5a8f785b63b3bcf09a3ea655240b94a9e2f38720bb58196b64204.scope.
Dec 13 02:54:50 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:54:50 np0005558241 podman[217914]: 2025-12-13 07:54:50.64354532 +0000 UTC m=+1.104698624 container init 4e9a4e19f3d5a8f785b63b3bcf09a3ea655240b94a9e2f38720bb58196b64204 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_solomon, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 02:54:50 np0005558241 podman[217914]: 2025-12-13 07:54:50.658286775 +0000 UTC m=+1.119440049 container start 4e9a4e19f3d5a8f785b63b3bcf09a3ea655240b94a9e2f38720bb58196b64204 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_solomon, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:54:50 np0005558241 systemd[1]: libpod-4e9a4e19f3d5a8f785b63b3bcf09a3ea655240b94a9e2f38720bb58196b64204.scope: Deactivated successfully.
Dec 13 02:54:50 np0005558241 admiring_solomon[217931]: 167 167
Dec 13 02:54:50 np0005558241 podman[217914]: 2025-12-13 07:54:50.740989996 +0000 UTC m=+1.202143360 container attach 4e9a4e19f3d5a8f785b63b3bcf09a3ea655240b94a9e2f38720bb58196b64204 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_solomon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 02:54:50 np0005558241 podman[217914]: 2025-12-13 07:54:50.742474143 +0000 UTC m=+1.203627437 container died 4e9a4e19f3d5a8f785b63b3bcf09a3ea655240b94a9e2f38720bb58196b64204 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_solomon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 02:54:51 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e610416cab27785c07bdc52ee47b0b3163fe8507ed4128fe5f53cfa06af9faba-merged.mount: Deactivated successfully.
Dec 13 02:54:51 np0005558241 podman[217914]: 2025-12-13 07:54:51.695620209 +0000 UTC m=+2.156773483 container remove 4e9a4e19f3d5a8f785b63b3bcf09a3ea655240b94a9e2f38720bb58196b64204 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:54:51 np0005558241 systemd[1]: libpod-conmon-4e9a4e19f3d5a8f785b63b3bcf09a3ea655240b94a9e2f38720bb58196b64204.scope: Deactivated successfully.
Dec 13 02:54:51 np0005558241 podman[217956]: 2025-12-13 07:54:51.943834334 +0000 UTC m=+0.121682549 container create 110d4932e9c7df321aaadf8c12c3f7d7ded32a3ccef618b0e9dae05ad635336d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_blackburn, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:54:51 np0005558241 podman[217956]: 2025-12-13 07:54:51.851297639 +0000 UTC m=+0.029145874 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:54:52 np0005558241 systemd[1]: Started libpod-conmon-110d4932e9c7df321aaadf8c12c3f7d7ded32a3ccef618b0e9dae05ad635336d.scope.
Dec 13 02:54:52 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:54:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846f27cae93bd5ece9ac86942a0c86320649ce3687c44c2217c3728376a66a39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:54:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846f27cae93bd5ece9ac86942a0c86320649ce3687c44c2217c3728376a66a39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:54:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846f27cae93bd5ece9ac86942a0c86320649ce3687c44c2217c3728376a66a39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:54:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846f27cae93bd5ece9ac86942a0c86320649ce3687c44c2217c3728376a66a39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:54:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:54:52 np0005558241 systemd-logind[787]: New session 53 of user zuul.
Dec 13 02:54:52 np0005558241 systemd[1]: Started Session 53 of User zuul.
Dec 13 02:54:52 np0005558241 podman[217956]: 2025-12-13 07:54:52.151454262 +0000 UTC m=+0.329302487 container init 110d4932e9c7df321aaadf8c12c3f7d7ded32a3ccef618b0e9dae05ad635336d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_blackburn, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:54:52 np0005558241 podman[217956]: 2025-12-13 07:54:52.160470646 +0000 UTC m=+0.338318881 container start 110d4932e9c7df321aaadf8c12c3f7d7ded32a3ccef618b0e9dae05ad635336d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:54:52 np0005558241 podman[217956]: 2025-12-13 07:54:52.257933762 +0000 UTC m=+0.435781997 container attach 110d4932e9c7df321aaadf8c12c3f7d7ded32a3ccef618b0e9dae05ad635336d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_blackburn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:54:52 np0005558241 lvm[218207]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:54:52 np0005558241 lvm[218206]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:54:52 np0005558241 lvm[218206]: VG ceph_vg1 finished
Dec 13 02:54:52 np0005558241 lvm[218203]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:54:52 np0005558241 lvm[218203]: VG ceph_vg0 finished
Dec 13 02:54:52 np0005558241 lvm[218207]: VG ceph_vg2 finished
Dec 13 02:54:53 np0005558241 nice_blackburn[217974]: {}
Dec 13 02:54:53 np0005558241 python3.9[218191]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:54:53 np0005558241 systemd[1]: libpod-110d4932e9c7df321aaadf8c12c3f7d7ded32a3ccef618b0e9dae05ad635336d.scope: Deactivated successfully.
Dec 13 02:54:53 np0005558241 systemd[1]: libpod-110d4932e9c7df321aaadf8c12c3f7d7ded32a3ccef618b0e9dae05ad635336d.scope: Consumed 1.540s CPU time.
Dec 13 02:54:53 np0005558241 podman[217956]: 2025-12-13 07:54:53.158167716 +0000 UTC m=+1.336015931 container died 110d4932e9c7df321aaadf8c12c3f7d7ded32a3ccef618b0e9dae05ad635336d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_blackburn, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 02:54:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:54:53 np0005558241 systemd[1]: var-lib-containers-storage-overlay-846f27cae93bd5ece9ac86942a0c86320649ce3687c44c2217c3728376a66a39-merged.mount: Deactivated successfully.
Dec 13 02:54:53 np0005558241 podman[217956]: 2025-12-13 07:54:53.94636869 +0000 UTC m=+2.124216905 container remove 110d4932e9c7df321aaadf8c12c3f7d7ded32a3ccef618b0e9dae05ad635336d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:54:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:54:54 np0005558241 systemd[1]: libpod-conmon-110d4932e9c7df321aaadf8c12c3f7d7ded32a3ccef618b0e9dae05ad635336d.scope: Deactivated successfully.
Dec 13 02:54:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:54:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:54:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:54:54 np0005558241 python3.9[218376]: ansible-ansible.builtin.service_facts Invoked
Dec 13 02:54:54 np0005558241 network[218393]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 02:54:54 np0005558241 network[218394]: 'network-scripts' will be removed from distribution in near future.
Dec 13 02:54:54 np0005558241 network[218395]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 02:54:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:54:54 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:54:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:54:55.368 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:54:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:54:55.370 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:54:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:54:55.370 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:54:55 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:54:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:54:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:54:58 np0005558241 python3.9[218692]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 02:54:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:54:59 np0005558241 python3.9[218776]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:54:59 np0005558241 podman[218779]: 2025-12-13 07:54:59.976135531 +0000 UTC m=+0.063271340 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 02:55:00 np0005558241 podman[218778]: 2025-12-13 07:55:00.012157324 +0000 UTC m=+0.098620456 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Dec 13 02:55:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:55:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v840: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:07 np0005558241 python3.9[218972]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:55:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v842: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:08 np0005558241 python3.9[219124]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:55:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:55:09 np0005558241 python3.9[219277]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:55:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:55:09
Dec 13 02:55:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:55:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:55:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'vms', 'volumes', '.mgr', '.rgw.root', 'backups', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Dec 13 02:55:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:55:09 np0005558241 python3.9[219429]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:55:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:55:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:55:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:55:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:55:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:55:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:55:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v843: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:55:10 np0005558241 python3.9[219582]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:55:11 np0005558241 python3.9[219705]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765612510.410622-95-188110539116202/.source.iscsi _original_basename=.dtl8l3tr follow=False checksum=2f389898b7e3159501961cd57c613abdcc9ca7fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:12 np0005558241 python3.9[219857]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:13 np0005558241 python3.9[220009]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:55:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v845: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:14 np0005558241 python3.9[220161]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:55:14 np0005558241 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 13 02:55:15 np0005558241 python3.9[220317]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:55:15 np0005558241 systemd[1]: Reloading.
Dec 13 02:55:15 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:55:15 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:55:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:16 np0005558241 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 13 02:55:16 np0005558241 systemd[1]: Starting Open-iSCSI...
Dec 13 02:55:16 np0005558241 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 02:55:16 np0005558241 systemd[1]: Started Open-iSCSI.
Dec 13 02:55:16 np0005558241 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec 13 02:55:16 np0005558241 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec 13 02:55:16 np0005558241 python3.9[220519]: ansible-ansible.builtin.service_facts Invoked
Dec 13 02:55:17 np0005558241 network[220536]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 02:55:17 np0005558241 network[220537]: 'network-scripts' will be removed from distribution in near future.
Dec 13 02:55:17 np0005558241 network[220538]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 02:55:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v848: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:55:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:55:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v849: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:23 np0005558241 python3.9[220810]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 13 02:55:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:55:23 np0005558241 python3.9[220962]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 13 02:55:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v850: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:24 np0005558241 python3.9[221118]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:55:25 np0005558241 python3.9[221241]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765612524.1841185-172-235805757863307/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v851: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:26 np0005558241 python3.9[221393]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:27 np0005558241 python3.9[221545]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:55:27 np0005558241 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 02:55:27 np0005558241 systemd[1]: Stopped Load Kernel Modules.
Dec 13 02:55:27 np0005558241 systemd[1]: Stopping Load Kernel Modules...
Dec 13 02:55:27 np0005558241 systemd[1]: Starting Load Kernel Modules...
Dec 13 02:55:27 np0005558241 systemd[1]: Finished Load Kernel Modules.
Dec 13 02:55:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v852: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:28 np0005558241 python3.9[221701]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:55:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:55:29 np0005558241 python3.9[221853]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:55:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v853: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:30 np0005558241 podman[221978]: 2025-12-13 07:55:30.26646642 +0000 UTC m=+0.069465402 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 02:55:30 np0005558241 podman[221977]: 2025-12-13 07:55:30.312933014 +0000 UTC m=+0.116181632 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 02:55:30 np0005558241 python3.9[222039]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:55:31 np0005558241 python3.9[222199]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:55:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v854: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:32 np0005558241 python3.9[222322]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765612531.3167155-230-107218168403993/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:33 np0005558241 python3.9[222474]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:55:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:55:33 np0005558241 python3.9[222627]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v855: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:34 np0005558241 python3.9[222779]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:35 np0005558241 python3.9[222931]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v856: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:36 np0005558241 python3.9[223083]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:36 np0005558241 python3.9[223235]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:37 np0005558241 python3.9[223387]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v857: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:38 np0005558241 python3.9[223539]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:55:39 np0005558241 python3.9[223691]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:55:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:55:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:55:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:55:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:55:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:55:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:55:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v858: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:40 np0005558241 python3.9[223845]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:41 np0005558241 python3.9[223997]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:55:42 np0005558241 python3.9[224149]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:55:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v859: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:42 np0005558241 python3.9[224227]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:55:43 np0005558241 python3.9[224379]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:55:43 np0005558241 python3.9[224457]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:55:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:55:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v860: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:44 np0005558241 python3.9[224609]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:45 np0005558241 python3.9[224761]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:55:45 np0005558241 python3.9[224839]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v861: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:46 np0005558241 python3.9[224991]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:55:47 np0005558241 python3.9[225069]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:48 np0005558241 python3.9[225221]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:55:48 np0005558241 systemd[1]: Reloading.
Dec 13 02:55:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v862: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:48 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:55:48 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:55:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:55:49 np0005558241 python3.9[225410]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:55:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v863: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:50 np0005558241 python3.9[225488]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:51 np0005558241 python3.9[225640]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:55:51 np0005558241 python3.9[225718]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v864: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:52 np0005558241 python3.9[225870]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:55:52 np0005558241 systemd[1]: Reloading.
Dec 13 02:55:52 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:55:52 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:55:52 np0005558241 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 13 02:55:53 np0005558241 systemd[1]: Starting Create netns directory...
Dec 13 02:55:53 np0005558241 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 02:55:53 np0005558241 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 02:55:53 np0005558241 systemd[1]: Finished Create netns directory.
Dec 13 02:55:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:55:53 np0005558241 python3.9[226063]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:55:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v865: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:54 np0005558241 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 13 02:55:54 np0005558241 python3.9[226215]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:55:55 np0005558241 python3.9[226339]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765612554.1124551-437-113034122390917/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:55:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:55:55.369 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:55:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:55:55.373 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:55:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:55:55.373 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:55:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v866: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:56 np0005558241 podman[226586]: 2025-12-13 07:55:56.134648814 +0000 UTC m=+0.061659680 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:55:56 np0005558241 python3.9[226564]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:55:56 np0005558241 podman[226586]: 2025-12-13 07:55:56.241573828 +0000 UTC m=+0.168584674 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 02:55:56 np0005558241 python3.9[226860]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:55:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:55:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:55:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:55:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:55:57 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:55:57 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:55:57 np0005558241 python3.9[227098]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765612556.3867202-462-124015632083815/.source.json _original_basename=.zllaovlt follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:55:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:55:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:55:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:55:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:55:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:55:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:55:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:55:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:55:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:55:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:55:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:55:58 np0005558241 python3.9[227327]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:55:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v867: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:55:58 np0005558241 podman[227352]: 2025-12-13 07:55:58.155487696 +0000 UTC m=+0.046542227 container create 58adc2bf7ddc6647ac3e104e1e6d197f55f3e61325a725ab86f8f214a7d93c4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hawking, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 02:55:58 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:55:58 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:55:58 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:55:58 np0005558241 systemd[1]: Started libpod-conmon-58adc2bf7ddc6647ac3e104e1e6d197f55f3e61325a725ab86f8f214a7d93c4b.scope.
Dec 13 02:55:58 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:55:58 np0005558241 podman[227352]: 2025-12-13 07:55:58.135458763 +0000 UTC m=+0.026513294 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:55:58 np0005558241 podman[227352]: 2025-12-13 07:55:58.237752063 +0000 UTC m=+0.128806604 container init 58adc2bf7ddc6647ac3e104e1e6d197f55f3e61325a725ab86f8f214a7d93c4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:55:58 np0005558241 podman[227352]: 2025-12-13 07:55:58.247492333 +0000 UTC m=+0.138546844 container start 58adc2bf7ddc6647ac3e104e1e6d197f55f3e61325a725ab86f8f214a7d93c4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:55:58 np0005558241 podman[227352]: 2025-12-13 07:55:58.25064774 +0000 UTC m=+0.141702271 container attach 58adc2bf7ddc6647ac3e104e1e6d197f55f3e61325a725ab86f8f214a7d93c4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hawking, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:55:58 np0005558241 romantic_hawking[227383]: 167 167
Dec 13 02:55:58 np0005558241 systemd[1]: libpod-58adc2bf7ddc6647ac3e104e1e6d197f55f3e61325a725ab86f8f214a7d93c4b.scope: Deactivated successfully.
Dec 13 02:55:58 np0005558241 podman[227352]: 2025-12-13 07:55:58.254927076 +0000 UTC m=+0.145981587 container died 58adc2bf7ddc6647ac3e104e1e6d197f55f3e61325a725ab86f8f214a7d93c4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 02:55:58 np0005558241 systemd[1]: var-lib-containers-storage-overlay-15b12b3ea6e9afa6bb85f1654ff2199a7b6d97f2f875c0a40dabbf57cf0e7d55-merged.mount: Deactivated successfully.
Dec 13 02:55:58 np0005558241 podman[227352]: 2025-12-13 07:55:58.306557067 +0000 UTC m=+0.197611578 container remove 58adc2bf7ddc6647ac3e104e1e6d197f55f3e61325a725ab86f8f214a7d93c4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hawking, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 02:55:58 np0005558241 systemd[1]: libpod-conmon-58adc2bf7ddc6647ac3e104e1e6d197f55f3e61325a725ab86f8f214a7d93c4b.scope: Deactivated successfully.
Dec 13 02:55:58 np0005558241 podman[227481]: 2025-12-13 07:55:58.47717553 +0000 UTC m=+0.045454771 container create d07321a0e4dbe06fe9fd9102944ba2bacbac423fe20506e4d396ec5256e828bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_kalam, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:55:58 np0005558241 systemd[1]: Started libpod-conmon-d07321a0e4dbe06fe9fd9102944ba2bacbac423fe20506e4d396ec5256e828bb.scope.
Dec 13 02:55:58 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:55:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cd4900b89b18f7dbb6a1181601b7934e725294e194d178f633d80c5d513046/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:55:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cd4900b89b18f7dbb6a1181601b7934e725294e194d178f633d80c5d513046/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:55:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cd4900b89b18f7dbb6a1181601b7934e725294e194d178f633d80c5d513046/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:55:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cd4900b89b18f7dbb6a1181601b7934e725294e194d178f633d80c5d513046/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:55:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25cd4900b89b18f7dbb6a1181601b7934e725294e194d178f633d80c5d513046/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:55:58 np0005558241 podman[227481]: 2025-12-13 07:55:58.456230234 +0000 UTC m=+0.024509505 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:55:58 np0005558241 podman[227481]: 2025-12-13 07:55:58.564748147 +0000 UTC m=+0.133027388 container init d07321a0e4dbe06fe9fd9102944ba2bacbac423fe20506e4d396ec5256e828bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_kalam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:55:58 np0005558241 podman[227481]: 2025-12-13 07:55:58.573234636 +0000 UTC m=+0.141513877 container start d07321a0e4dbe06fe9fd9102944ba2bacbac423fe20506e4d396ec5256e828bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:55:58 np0005558241 podman[227481]: 2025-12-13 07:55:58.577016529 +0000 UTC m=+0.145295970 container attach d07321a0e4dbe06fe9fd9102944ba2bacbac423fe20506e4d396ec5256e828bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_kalam, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:55:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:55:59 np0005558241 hardcore_kalam[227522]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:55:59 np0005558241 hardcore_kalam[227522]: --> All data devices are unavailable
Dec 13 02:55:59 np0005558241 systemd[1]: libpod-d07321a0e4dbe06fe9fd9102944ba2bacbac423fe20506e4d396ec5256e828bb.scope: Deactivated successfully.
Dec 13 02:55:59 np0005558241 podman[227481]: 2025-12-13 07:55:59.084800676 +0000 UTC m=+0.653079927 container died d07321a0e4dbe06fe9fd9102944ba2bacbac423fe20506e4d396ec5256e828bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 02:55:59 np0005558241 systemd[1]: var-lib-containers-storage-overlay-25cd4900b89b18f7dbb6a1181601b7934e725294e194d178f633d80c5d513046-merged.mount: Deactivated successfully.
Dec 13 02:55:59 np0005558241 podman[227481]: 2025-12-13 07:55:59.138457288 +0000 UTC m=+0.706736529 container remove d07321a0e4dbe06fe9fd9102944ba2bacbac423fe20506e4d396ec5256e828bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_kalam, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 02:55:59 np0005558241 systemd[1]: libpod-conmon-d07321a0e4dbe06fe9fd9102944ba2bacbac423fe20506e4d396ec5256e828bb.scope: Deactivated successfully.
Dec 13 02:55:59 np0005558241 podman[227767]: 2025-12-13 07:55:59.721303314 +0000 UTC m=+0.081530969 container create bda8fe257e818148ff8c30af40c6451c14879224a46475a9a08ab7d696c225f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:55:59 np0005558241 systemd[1]: Started libpod-conmon-bda8fe257e818148ff8c30af40c6451c14879224a46475a9a08ab7d696c225f3.scope.
Dec 13 02:55:59 np0005558241 podman[227767]: 2025-12-13 07:55:59.685516462 +0000 UTC m=+0.045744207 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:55:59 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:55:59 np0005558241 podman[227767]: 2025-12-13 07:55:59.824851794 +0000 UTC m=+0.185079499 container init bda8fe257e818148ff8c30af40c6451c14879224a46475a9a08ab7d696c225f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 02:55:59 np0005558241 podman[227767]: 2025-12-13 07:55:59.839561197 +0000 UTC m=+0.199788882 container start bda8fe257e818148ff8c30af40c6451c14879224a46475a9a08ab7d696c225f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 02:55:59 np0005558241 podman[227767]: 2025-12-13 07:55:59.844343864 +0000 UTC m=+0.204571559 container attach bda8fe257e818148ff8c30af40c6451c14879224a46475a9a08ab7d696c225f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mcclintock, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:55:59 np0005558241 systemd[1]: libpod-bda8fe257e818148ff8c30af40c6451c14879224a46475a9a08ab7d696c225f3.scope: Deactivated successfully.
Dec 13 02:55:59 np0005558241 ecstatic_mcclintock[227783]: 167 167
Dec 13 02:55:59 np0005558241 conmon[227783]: conmon bda8fe257e818148ff8c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bda8fe257e818148ff8c30af40c6451c14879224a46475a9a08ab7d696c225f3.scope/container/memory.events
Dec 13 02:55:59 np0005558241 podman[227767]: 2025-12-13 07:55:59.850906656 +0000 UTC m=+0.211134341 container died bda8fe257e818148ff8c30af40c6451c14879224a46475a9a08ab7d696c225f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mcclintock, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:55:59 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5fb325245acacb589c42b8f6bf448110487572290f8f9016288c647b34bebce2-merged.mount: Deactivated successfully.
Dec 13 02:55:59 np0005558241 podman[227767]: 2025-12-13 07:55:59.891550207 +0000 UTC m=+0.251777882 container remove bda8fe257e818148ff8c30af40c6451c14879224a46475a9a08ab7d696c225f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mcclintock, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:55:59 np0005558241 systemd[1]: libpod-conmon-bda8fe257e818148ff8c30af40c6451c14879224a46475a9a08ab7d696c225f3.scope: Deactivated successfully.
Dec 13 02:56:00 np0005558241 podman[227831]: 2025-12-13 07:56:00.1044116 +0000 UTC m=+0.065269819 container create c77a76fcd54ec23ee5b61b82e99e1f1b35a7ccc18174fd465216934459f8de15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keldysh, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 02:56:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v868: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:00 np0005558241 systemd[1]: Started libpod-conmon-c77a76fcd54ec23ee5b61b82e99e1f1b35a7ccc18174fd465216934459f8de15.scope.
Dec 13 02:56:00 np0005558241 podman[227831]: 2025-12-13 07:56:00.068948947 +0000 UTC m=+0.029807136 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:56:00 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:56:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79628f026fb6a9f00e3d8d4b6ce048cfea53fed338b13fdb6cb889230d8316f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:56:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79628f026fb6a9f00e3d8d4b6ce048cfea53fed338b13fdb6cb889230d8316f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:56:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79628f026fb6a9f00e3d8d4b6ce048cfea53fed338b13fdb6cb889230d8316f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:56:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b79628f026fb6a9f00e3d8d4b6ce048cfea53fed338b13fdb6cb889230d8316f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:56:00 np0005558241 podman[227831]: 2025-12-13 07:56:00.213297032 +0000 UTC m=+0.174155211 container init c77a76fcd54ec23ee5b61b82e99e1f1b35a7ccc18174fd465216934459f8de15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keldysh, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:56:00 np0005558241 podman[227831]: 2025-12-13 07:56:00.224239451 +0000 UTC m=+0.185097630 container start c77a76fcd54ec23ee5b61b82e99e1f1b35a7ccc18174fd465216934459f8de15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:56:00 np0005558241 podman[227831]: 2025-12-13 07:56:00.228331562 +0000 UTC m=+0.189189761 container attach c77a76fcd54ec23ee5b61b82e99e1f1b35a7ccc18174fd465216934459f8de15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]: {
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:    "0": [
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:        {
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "devices": [
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "/dev/loop3"
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            ],
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "lv_name": "ceph_lv0",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "lv_size": "21470642176",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "name": "ceph_lv0",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "tags": {
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.cluster_name": "ceph",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.crush_device_class": "",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.encrypted": "0",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.objectstore": "bluestore",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.osd_id": "0",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.type": "block",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.vdo": "0",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.with_tpm": "0"
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            },
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "type": "block",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "vg_name": "ceph_vg0"
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:        }
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:    ],
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:    "1": [
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:        {
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "devices": [
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "/dev/loop4"
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            ],
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "lv_name": "ceph_lv1",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "lv_size": "21470642176",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "name": "ceph_lv1",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "tags": {
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.cluster_name": "ceph",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.crush_device_class": "",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.encrypted": "0",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.objectstore": "bluestore",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.osd_id": "1",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.type": "block",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.vdo": "0",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.with_tpm": "0"
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            },
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "type": "block",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "vg_name": "ceph_vg1"
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:        }
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:    ],
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:    "2": [
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:        {
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "devices": [
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "/dev/loop5"
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            ],
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "lv_name": "ceph_lv2",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "lv_size": "21470642176",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "name": "ceph_lv2",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "tags": {
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.cluster_name": "ceph",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.crush_device_class": "",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.encrypted": "0",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.objectstore": "bluestore",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.osd_id": "2",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.type": "block",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.vdo": "0",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:                "ceph.with_tpm": "0"
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            },
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "type": "block",
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:            "vg_name": "ceph_vg2"
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:        }
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]:    ]
Dec 13 02:56:00 np0005558241 blissful_keldysh[227854]: }
Dec 13 02:56:00 np0005558241 podman[227831]: 2025-12-13 07:56:00.599835643 +0000 UTC m=+0.560693812 container died c77a76fcd54ec23ee5b61b82e99e1f1b35a7ccc18174fd465216934459f8de15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keldysh, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 02:56:00 np0005558241 systemd[1]: libpod-c77a76fcd54ec23ee5b61b82e99e1f1b35a7ccc18174fd465216934459f8de15.scope: Deactivated successfully.
Dec 13 02:56:00 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b79628f026fb6a9f00e3d8d4b6ce048cfea53fed338b13fdb6cb889230d8316f-merged.mount: Deactivated successfully.
Dec 13 02:56:00 np0005558241 podman[227831]: 2025-12-13 07:56:00.742908507 +0000 UTC m=+0.703766686 container remove c77a76fcd54ec23ee5b61b82e99e1f1b35a7ccc18174fd465216934459f8de15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keldysh, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 02:56:00 np0005558241 systemd[1]: libpod-conmon-c77a76fcd54ec23ee5b61b82e99e1f1b35a7ccc18174fd465216934459f8de15.scope: Deactivated successfully.
Dec 13 02:56:00 np0005558241 podman[227957]: 2025-12-13 07:56:00.759490915 +0000 UTC m=+0.137454997 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 13 02:56:00 np0005558241 podman[227955]: 2025-12-13 07:56:00.794513968 +0000 UTC m=+0.173728790 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 02:56:00 np0005558241 python3.9[228012]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 13 02:56:01 np0005558241 podman[228149]: 2025-12-13 07:56:01.270907081 +0000 UTC m=+0.040991520 container create cf011a8a8636dbdce16bd679148f75ff1c1adf6faaff8bfd789af98e796fd01d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_knuth, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 02:56:01 np0005558241 systemd[1]: Started libpod-conmon-cf011a8a8636dbdce16bd679148f75ff1c1adf6faaff8bfd789af98e796fd01d.scope.
Dec 13 02:56:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:56:01 np0005558241 podman[228149]: 2025-12-13 07:56:01.253732908 +0000 UTC m=+0.023817367 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:56:01 np0005558241 podman[228149]: 2025-12-13 07:56:01.353186828 +0000 UTC m=+0.123271327 container init cf011a8a8636dbdce16bd679148f75ff1c1adf6faaff8bfd789af98e796fd01d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_knuth, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:56:01 np0005558241 podman[228149]: 2025-12-13 07:56:01.362836186 +0000 UTC m=+0.132920635 container start cf011a8a8636dbdce16bd679148f75ff1c1adf6faaff8bfd789af98e796fd01d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 02:56:01 np0005558241 podman[228149]: 2025-12-13 07:56:01.368062504 +0000 UTC m=+0.138146963 container attach cf011a8a8636dbdce16bd679148f75ff1c1adf6faaff8bfd789af98e796fd01d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_knuth, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 02:56:01 np0005558241 elegant_knuth[228195]: 167 167
Dec 13 02:56:01 np0005558241 systemd[1]: libpod-cf011a8a8636dbdce16bd679148f75ff1c1adf6faaff8bfd789af98e796fd01d.scope: Deactivated successfully.
Dec 13 02:56:01 np0005558241 podman[228149]: 2025-12-13 07:56:01.369158181 +0000 UTC m=+0.139242620 container died cf011a8a8636dbdce16bd679148f75ff1c1adf6faaff8bfd789af98e796fd01d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_knuth, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:56:01 np0005558241 systemd[1]: var-lib-containers-storage-overlay-28a05ee3cfe8f306cf9fe9339f6d2c303c9dac0e37420fe8990c9bd4666824d5-merged.mount: Deactivated successfully.
Dec 13 02:56:01 np0005558241 podman[228149]: 2025-12-13 07:56:01.420867245 +0000 UTC m=+0.190951684 container remove cf011a8a8636dbdce16bd679148f75ff1c1adf6faaff8bfd789af98e796fd01d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:56:01 np0005558241 systemd[1]: libpod-conmon-cf011a8a8636dbdce16bd679148f75ff1c1adf6faaff8bfd789af98e796fd01d.scope: Deactivated successfully.
Dec 13 02:56:01 np0005558241 podman[228218]: 2025-12-13 07:56:01.586127005 +0000 UTC m=+0.045225215 container create 415ef42b8e9b8427c0b06d875dc9ae58d65f07564f41a0bc7a20534fd9d6ed39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_payne, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 02:56:01 np0005558241 systemd[1]: Started libpod-conmon-415ef42b8e9b8427c0b06d875dc9ae58d65f07564f41a0bc7a20534fd9d6ed39.scope.
Dec 13 02:56:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:56:01 np0005558241 podman[228218]: 2025-12-13 07:56:01.566479081 +0000 UTC m=+0.025577321 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:56:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b61eb91696fdc1e3117ea36406ca2156c5ac0a620f0e7c84fce13497fbc85ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:56:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b61eb91696fdc1e3117ea36406ca2156c5ac0a620f0e7c84fce13497fbc85ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:56:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b61eb91696fdc1e3117ea36406ca2156c5ac0a620f0e7c84fce13497fbc85ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:56:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b61eb91696fdc1e3117ea36406ca2156c5ac0a620f0e7c84fce13497fbc85ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:56:01 np0005558241 podman[228218]: 2025-12-13 07:56:01.683928403 +0000 UTC m=+0.143026713 container init 415ef42b8e9b8427c0b06d875dc9ae58d65f07564f41a0bc7a20534fd9d6ed39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_payne, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:56:01 np0005558241 podman[228218]: 2025-12-13 07:56:01.698828 +0000 UTC m=+0.157926220 container start 415ef42b8e9b8427c0b06d875dc9ae58d65f07564f41a0bc7a20534fd9d6ed39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:56:01 np0005558241 podman[228218]: 2025-12-13 07:56:01.703247099 +0000 UTC m=+0.162345339 container attach 415ef42b8e9b8427c0b06d875dc9ae58d65f07564f41a0bc7a20534fd9d6ed39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_payne, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:56:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v869: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:02 np0005558241 python3.9[228328]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 02:56:02 np0005558241 lvm[228414]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:56:02 np0005558241 lvm[228414]: VG ceph_vg1 finished
Dec 13 02:56:02 np0005558241 lvm[228413]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:56:02 np0005558241 lvm[228413]: VG ceph_vg0 finished
Dec 13 02:56:02 np0005558241 lvm[228417]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:56:02 np0005558241 lvm[228417]: VG ceph_vg2 finished
Dec 13 02:56:02 np0005558241 interesting_payne[228235]: {}
Dec 13 02:56:02 np0005558241 systemd[1]: libpod-415ef42b8e9b8427c0b06d875dc9ae58d65f07564f41a0bc7a20534fd9d6ed39.scope: Deactivated successfully.
Dec 13 02:56:02 np0005558241 systemd[1]: libpod-415ef42b8e9b8427c0b06d875dc9ae58d65f07564f41a0bc7a20534fd9d6ed39.scope: Consumed 1.557s CPU time.
Dec 13 02:56:02 np0005558241 podman[228218]: 2025-12-13 07:56:02.618997285 +0000 UTC m=+1.078095495 container died 415ef42b8e9b8427c0b06d875dc9ae58d65f07564f41a0bc7a20534fd9d6ed39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_payne, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 02:56:02 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9b61eb91696fdc1e3117ea36406ca2156c5ac0a620f0e7c84fce13497fbc85ae-merged.mount: Deactivated successfully.
Dec 13 02:56:02 np0005558241 podman[228218]: 2025-12-13 07:56:02.673283342 +0000 UTC m=+1.132381552 container remove 415ef42b8e9b8427c0b06d875dc9ae58d65f07564f41a0bc7a20534fd9d6ed39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_payne, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:56:02 np0005558241 systemd[1]: libpod-conmon-415ef42b8e9b8427c0b06d875dc9ae58d65f07564f41a0bc7a20534fd9d6ed39.scope: Deactivated successfully.
Dec 13 02:56:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:56:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:56:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:56:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:56:03 np0005558241 python3.9[228583]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 13 02:56:03 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:56:03 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:56:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:56:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v870: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:04 np0005558241 python3[228762]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 02:56:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v871: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:07 np0005558241 podman[228775]: 2025-12-13 07:56:07.082680028 +0000 UTC m=+2.027213902 image pull bcd3898ac099c7fff3d2ff3fc32de931119ed36068f8a2617bd8fa95e51d1b81 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f
Dec 13 02:56:07 np0005558241 podman[228831]: 2025-12-13 07:56:07.206421266 +0000 UTC m=+0.024699499 image pull bcd3898ac099c7fff3d2ff3fc32de931119ed36068f8a2617bd8fa95e51d1b81 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f
Dec 13 02:56:07 np0005558241 podman[228831]: 2025-12-13 07:56:07.844343978 +0000 UTC m=+0.662622211 container create a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:56:07 np0005558241 python3[228762]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f
Dec 13 02:56:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v872: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:08 np0005558241 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 13 02:56:08 np0005558241 systemd[1]: virtqemud.service: Deactivated successfully.
Dec 13 02:56:08 np0005558241 python3.9[229024]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:56:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:56:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:56:09
Dec 13 02:56:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:56:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:56:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.control', '.rgw.root', 'images', 'vms', 'volumes', 'cephfs.cephfs.meta']
Dec 13 02:56:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:56:09 np0005558241 python3.9[229178]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:56:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:56:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:56:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:56:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:56:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:56:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:56:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v873: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:56:10 np0005558241 python3.9[229254]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:56:11 np0005558241 python3.9[229405]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765612570.5496838-550-121851752200277/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:56:11 np0005558241 python3.9[229481]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 02:56:11 np0005558241 systemd[1]: Reloading.
Dec 13 02:56:11 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:56:11 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:56:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v874: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:56:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v875: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:16 np0005558241 python3.9[229593]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:56:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v876: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:16 np0005558241 systemd[1]: Reloading.
Dec 13 02:56:16 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:56:16 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:56:16 np0005558241 systemd[1]: Starting multipathd container...
Dec 13 02:56:16 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:56:16 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d3e0e6297197da49818712b8a82542fe026ba0658c75bd7ba0f49c1783183d0/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 13 02:56:16 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d3e0e6297197da49818712b8a82542fe026ba0658c75bd7ba0f49c1783183d0/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 13 02:56:16 np0005558241 systemd[1]: Started /usr/bin/podman healthcheck run a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3.
Dec 13 02:56:16 np0005558241 podman[229634]: 2025-12-13 07:56:16.650734832 +0000 UTC m=+0.115603448 container init a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0)
Dec 13 02:56:16 np0005558241 multipathd[229649]: + sudo -E kolla_set_configs
Dec 13 02:56:16 np0005558241 podman[229634]: 2025-12-13 07:56:16.675326858 +0000 UTC m=+0.140195464 container start a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec 13 02:56:16 np0005558241 podman[229634]: multipathd
Dec 13 02:56:16 np0005558241 systemd[1]: Started multipathd container.
Dec 13 02:56:16 np0005558241 multipathd[229649]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 02:56:16 np0005558241 multipathd[229649]: INFO:__main__:Validating config file
Dec 13 02:56:16 np0005558241 multipathd[229649]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 02:56:16 np0005558241 multipathd[229649]: INFO:__main__:Writing out command to execute
Dec 13 02:56:16 np0005558241 podman[229656]: 2025-12-13 07:56:16.743657871 +0000 UTC m=+0.056837491 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 02:56:16 np0005558241 multipathd[229649]: ++ cat /run_command
Dec 13 02:56:16 np0005558241 systemd[1]: a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3-48d2d95b15ee9713.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:56:16 np0005558241 systemd[1]: a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3-48d2d95b15ee9713.service: Failed with result 'exit-code'.
Dec 13 02:56:16 np0005558241 multipathd[229649]: + CMD='/usr/sbin/multipathd -d'
Dec 13 02:56:16 np0005558241 multipathd[229649]: + ARGS=
Dec 13 02:56:16 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 02:56:16 np0005558241 multipathd[229649]: + sudo kolla_copy_cacerts
Dec 13 02:56:16 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 02:56:16 np0005558241 multipathd[229649]: + [[ ! -n '' ]]
Dec 13 02:56:16 np0005558241 multipathd[229649]: + . kolla_extend_start
Dec 13 02:56:16 np0005558241 multipathd[229649]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 13 02:56:16 np0005558241 multipathd[229649]: Running command: '/usr/sbin/multipathd -d'
Dec 13 02:56:16 np0005558241 multipathd[229649]: + umask 0022
Dec 13 02:56:16 np0005558241 multipathd[229649]: + exec /usr/sbin/multipathd -d
Dec 13 02:56:16 np0005558241 multipathd[229649]: 5154.064969 | --------start up--------
Dec 13 02:56:16 np0005558241 multipathd[229649]: 5154.064988 | read /etc/multipath.conf
Dec 13 02:56:16 np0005558241 multipathd[229649]: 5154.072451 | path checkers start up
Dec 13 02:56:17 np0005558241 python3.9[229839]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:56:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v877: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:18 np0005558241 python3.9[229993]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:56:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:56:19 np0005558241 python3.9[230158]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:56:19 np0005558241 systemd[1]: Stopping multipathd container...
Dec 13 02:56:19 np0005558241 multipathd[229649]: 5156.513259 | exit (signal)
Dec 13 02:56:19 np0005558241 multipathd[229649]: 5156.513785 | --------shut down-------
Dec 13 02:56:19 np0005558241 systemd[1]: libpod-a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3.scope: Deactivated successfully.
Dec 13 02:56:19 np0005558241 podman[230162]: 2025-12-13 07:56:19.268381806 +0000 UTC m=+0.079103999 container died a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 02:56:19 np0005558241 systemd[1]: a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3-48d2d95b15ee9713.timer: Deactivated successfully.
Dec 13 02:56:19 np0005558241 systemd[1]: Stopped /usr/bin/podman healthcheck run a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3.
Dec 13 02:56:19 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3-userdata-shm.mount: Deactivated successfully.
Dec 13 02:56:19 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1d3e0e6297197da49818712b8a82542fe026ba0658c75bd7ba0f49c1783183d0-merged.mount: Deactivated successfully.
Dec 13 02:56:19 np0005558241 podman[230162]: 2025-12-13 07:56:19.564308364 +0000 UTC m=+0.375030557 container cleanup a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 02:56:19 np0005558241 podman[230162]: multipathd
Dec 13 02:56:19 np0005558241 podman[230189]: multipathd
Dec 13 02:56:19 np0005558241 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 13 02:56:19 np0005558241 systemd[1]: Stopped multipathd container.
Dec 13 02:56:19 np0005558241 systemd[1]: Starting multipathd container...
Dec 13 02:56:19 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:56:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d3e0e6297197da49818712b8a82542fe026ba0658c75bd7ba0f49c1783183d0/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 13 02:56:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d3e0e6297197da49818712b8a82542fe026ba0658c75bd7ba0f49c1783183d0/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 13 02:56:19 np0005558241 systemd[1]: Started /usr/bin/podman healthcheck run a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3.
Dec 13 02:56:19 np0005558241 podman[230202]: 2025-12-13 07:56:19.810021366 +0000 UTC m=+0.146825907 container init a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Dec 13 02:56:19 np0005558241 multipathd[230218]: + sudo -E kolla_set_configs
Dec 13 02:56:19 np0005558241 podman[230202]: 2025-12-13 07:56:19.833143586 +0000 UTC m=+0.169948137 container start a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 02:56:19 np0005558241 podman[230202]: multipathd
Dec 13 02:56:19 np0005558241 systemd[1]: Started multipathd container.
Dec 13 02:56:19 np0005558241 podman[230225]: 2025-12-13 07:56:19.895895191 +0000 UTC m=+0.052581586 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.license=GPLv2)
Dec 13 02:56:19 np0005558241 systemd[1]: a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3-792128872d050ccd.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:56:19 np0005558241 systemd[1]: a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3-792128872d050ccd.service: Failed with result 'exit-code'.
Dec 13 02:56:19 np0005558241 multipathd[230218]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 02:56:19 np0005558241 multipathd[230218]: INFO:__main__:Validating config file
Dec 13 02:56:19 np0005558241 multipathd[230218]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 02:56:19 np0005558241 multipathd[230218]: INFO:__main__:Writing out command to execute
Dec 13 02:56:19 np0005558241 multipathd[230218]: ++ cat /run_command
Dec 13 02:56:19 np0005558241 multipathd[230218]: + CMD='/usr/sbin/multipathd -d'
Dec 13 02:56:19 np0005558241 multipathd[230218]: + ARGS=
Dec 13 02:56:19 np0005558241 multipathd[230218]: + sudo kolla_copy_cacerts
Dec 13 02:56:19 np0005558241 multipathd[230218]: + [[ ! -n '' ]]
Dec 13 02:56:19 np0005558241 multipathd[230218]: + . kolla_extend_start
Dec 13 02:56:19 np0005558241 multipathd[230218]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 13 02:56:19 np0005558241 multipathd[230218]: Running command: '/usr/sbin/multipathd -d'
Dec 13 02:56:19 np0005558241 multipathd[230218]: + umask 0022
Dec 13 02:56:19 np0005558241 multipathd[230218]: + exec /usr/sbin/multipathd -d
Dec 13 02:56:19 np0005558241 multipathd[230218]: 5157.239242 | --------start up--------
Dec 13 02:56:19 np0005558241 multipathd[230218]: 5157.239270 | read /etc/multipath.conf
Dec 13 02:56:19 np0005558241 multipathd[230218]: 5157.246168 | path checkers start up
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v878: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:56:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:56:20 np0005558241 python3.9[230409]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:56:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v879: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:22 np0005558241 python3.9[230561]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 13 02:56:23 np0005558241 python3.9[230713]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 13 02:56:23 np0005558241 kernel: Key type psk registered
Dec 13 02:56:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:56:24 np0005558241 python3.9[230876]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:56:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v880: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:25 np0005558241 python3.9[230999]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765612583.6226187-630-92937745152707/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:56:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v881: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:26 np0005558241 python3.9[231151]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:56:27 np0005558241 python3.9[231303]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:56:27 np0005558241 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 02:56:27 np0005558241 systemd[1]: Stopped Load Kernel Modules.
Dec 13 02:56:27 np0005558241 systemd[1]: Stopping Load Kernel Modules...
Dec 13 02:56:27 np0005558241 systemd[1]: Starting Load Kernel Modules...
Dec 13 02:56:27 np0005558241 systemd[1]: Finished Load Kernel Modules.
Dec 13 02:56:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v882: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:56:29 np0005558241 python3.9[231459]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 02:56:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v883: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:30 np0005558241 podman[231462]: 2025-12-13 07:56:30.968902114 +0000 UTC m=+0.059270741 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:56:30 np0005558241 podman[231461]: 2025-12-13 07:56:30.997838527 +0000 UTC m=+0.088201714 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller)
Dec 13 02:56:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v884: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:56:32.207702) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612592207765, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1281, "num_deletes": 251, "total_data_size": 2124082, "memory_usage": 2160208, "flush_reason": "Manual Compaction"}
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612592223242, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2069063, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15325, "largest_seqno": 16605, "table_properties": {"data_size": 2062996, "index_size": 3398, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12488, "raw_average_key_size": 19, "raw_value_size": 2050827, "raw_average_value_size": 3209, "num_data_blocks": 156, "num_entries": 639, "num_filter_entries": 639, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765612458, "oldest_key_time": 1765612458, "file_creation_time": 1765612592, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 16182 microseconds, and 5350 cpu microseconds.
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:56:32.223838) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2069063 bytes OK
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:56:32.223943) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:56:32.226380) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:56:32.226439) EVENT_LOG_v1 {"time_micros": 1765612592226429, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 02:56:32 np0005558241 systemd[1]: Reloading.
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:56:32.226466) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2118320, prev total WAL file size 2118320, number of live WAL files 2.
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:56:32.227607) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2020KB)], [35(8157KB)]
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612592227685, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 10422694, "oldest_snapshot_seqno": -1}
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4186 keys, 8566017 bytes, temperature: kUnknown
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612592294302, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 8566017, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8535205, "index_size": 19246, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10501, "raw_key_size": 102745, "raw_average_key_size": 24, "raw_value_size": 8456706, "raw_average_value_size": 2020, "num_data_blocks": 813, "num_entries": 4186, "num_filter_entries": 4186, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765612592, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:56:32.294777) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 8566017 bytes
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:56:32.296864) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.1 rd, 128.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 8.0 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(9.2) write-amplify(4.1) OK, records in: 4700, records dropped: 514 output_compression: NoCompression
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:56:32.296917) EVENT_LOG_v1 {"time_micros": 1765612592296897, "job": 16, "event": "compaction_finished", "compaction_time_micros": 66773, "compaction_time_cpu_micros": 20310, "output_level": 6, "num_output_files": 1, "total_output_size": 8566017, "num_input_records": 4700, "num_output_records": 4186, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612592297733, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612592299612, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:56:32.227487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:56:32.300475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:56:32.300485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:56:32.300488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:56:32.300491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:56:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-07:56:32.300494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 02:56:32 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:56:32 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:56:32 np0005558241 systemd[1]: Reloading.
Dec 13 02:56:32 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:56:32 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:56:33 np0005558241 systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 02:56:33 np0005558241 systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 13 02:56:33 np0005558241 lvm[231619]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:56:33 np0005558241 lvm[231619]: VG ceph_vg1 finished
Dec 13 02:56:33 np0005558241 lvm[231618]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:56:33 np0005558241 lvm[231618]: VG ceph_vg0 finished
Dec 13 02:56:33 np0005558241 lvm[231622]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:56:33 np0005558241 lvm[231622]: VG ceph_vg2 finished
Dec 13 02:56:33 np0005558241 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 02:56:33 np0005558241 systemd[1]: Starting man-db-cache-update.service...
Dec 13 02:56:33 np0005558241 systemd[1]: Reloading.
Dec 13 02:56:33 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:56:33 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:56:33 np0005558241 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 02:56:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:56:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v885: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:35 np0005558241 python3.9[232780]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:56:35 np0005558241 systemd[1]: Stopping Open-iSCSI...
Dec 13 02:56:35 np0005558241 iscsid[220358]: iscsid shutting down.
Dec 13 02:56:35 np0005558241 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 02:56:35 np0005558241 systemd[1]: Stopped Open-iSCSI.
Dec 13 02:56:35 np0005558241 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 13 02:56:35 np0005558241 systemd[1]: Starting Open-iSCSI...
Dec 13 02:56:35 np0005558241 systemd[1]: Started Open-iSCSI.
Dec 13 02:56:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v886: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:36 np0005558241 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 02:56:36 np0005558241 systemd[1]: Finished man-db-cache-update.service.
Dec 13 02:56:36 np0005558241 systemd[1]: man-db-cache-update.service: Consumed 1.842s CPU time.
Dec 13 02:56:36 np0005558241 systemd[1]: run-r9e801b648f3743f5aa7f5fb120a5e4a2.service: Deactivated successfully.
Dec 13 02:56:37 np0005558241 python3.9[233128]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 02:56:38 np0005558241 python3.9[233284]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:56:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v887: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:56:39 np0005558241 python3.9[233436]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 02:56:39 np0005558241 systemd[1]: Reloading.
Dec 13 02:56:39 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:56:39 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:56:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:56:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:56:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:56:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:56:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:56:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:56:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v888: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:40 np0005558241 python3.9[233621]: ansible-ansible.builtin.service_facts Invoked
Dec 13 02:56:40 np0005558241 network[233638]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 02:56:40 np0005558241 network[233639]: 'network-scripts' will be removed from distribution in near future.
Dec 13 02:56:40 np0005558241 network[233640]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 02:56:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v889: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:56:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v890: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:44 np0005558241 python3.9[233915]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:56:45 np0005558241 python3.9[234068]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:56:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v891: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:46 np0005558241 python3.9[234221]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:56:47 np0005558241 python3.9[234374]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:56:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v892: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:48 np0005558241 python3.9[234527]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:56:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:56:49 np0005558241 python3.9[234680]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:56:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v893: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:50 np0005558241 podman[234805]: 2025-12-13 07:56:50.545976965 +0000 UTC m=+0.075702996 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 02:56:50 np0005558241 python3.9[234851]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:56:51 np0005558241 python3.9[235007]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:56:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v894: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:52 np0005558241 python3.9[235160]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:56:53 np0005558241 python3.9[235312]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:56:53 np0005558241 python3.9[235464]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:56:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:56:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v895: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:54 np0005558241 python3.9[235616]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:56:55 np0005558241 python3.9[235768]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:56:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:56:55.371 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:56:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:56:55.372 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:56:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:56:55.372 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:56:55 np0005558241 python3.9[235920]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:56:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v896: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:56 np0005558241 python3.9[236072]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:56:57 np0005558241 python3.9[236224]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:56:57 np0005558241 python3.9[236376]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:56:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v897: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:56:58 np0005558241 python3.9[236528]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:56:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:56:59 np0005558241 python3.9[236680]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:56:59 np0005558241 python3.9[236832]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:57:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v898: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:00 np0005558241 python3.9[236984]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:57:01 np0005558241 podman[237109]: 2025-12-13 07:57:01.349264685 +0000 UTC m=+0.059533568 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 13 02:57:01 np0005558241 podman[237108]: 2025-12-13 07:57:01.385280252 +0000 UTC m=+0.099438600 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true)
Dec 13 02:57:01 np0005558241 python3.9[237174]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:57:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v899: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:02 np0005558241 python3.9[237332]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:57:03 np0005558241 python3.9[237534]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:57:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:57:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:57:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:57:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:57:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:57:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:57:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:57:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:57:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:57:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:57:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:57:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:57:03 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:57:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:57:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v900: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:04 np0005558241 podman[237777]: 2025-12-13 07:57:04.117869746 +0000 UTC m=+0.027160850 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:57:04 np0005558241 python3.9[237786]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:57:04 np0005558241 podman[237777]: 2025-12-13 07:57:04.506471508 +0000 UTC m=+0.415762612 container create f6d88e2820d8c9709f6084251cfa2c5e51c3db60013f79ff16fac4446d0c42f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 02:57:05 np0005558241 python3.9[237945]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 02:57:05 np0005558241 systemd[1]: Started libpod-conmon-f6d88e2820d8c9709f6084251cfa2c5e51c3db60013f79ff16fac4446d0c42f0.scope.
Dec 13 02:57:05 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:57:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:57:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:57:05 np0005558241 podman[237777]: 2025-12-13 07:57:05.437833828 +0000 UTC m=+1.347124932 container init f6d88e2820d8c9709f6084251cfa2c5e51c3db60013f79ff16fac4446d0c42f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 02:57:05 np0005558241 podman[237777]: 2025-12-13 07:57:05.449269109 +0000 UTC m=+1.358560213 container start f6d88e2820d8c9709f6084251cfa2c5e51c3db60013f79ff16fac4446d0c42f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 02:57:05 np0005558241 frosty_elbakyan[237972]: 167 167
Dec 13 02:57:05 np0005558241 systemd[1]: libpod-f6d88e2820d8c9709f6084251cfa2c5e51c3db60013f79ff16fac4446d0c42f0.scope: Deactivated successfully.
Dec 13 02:57:05 np0005558241 podman[237777]: 2025-12-13 07:57:05.744007539 +0000 UTC m=+1.653298693 container attach f6d88e2820d8c9709f6084251cfa2c5e51c3db60013f79ff16fac4446d0c42f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_elbakyan, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:57:05 np0005558241 podman[237777]: 2025-12-13 07:57:05.744933242 +0000 UTC m=+1.654224396 container died f6d88e2820d8c9709f6084251cfa2c5e51c3db60013f79ff16fac4446d0c42f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_elbakyan, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:57:05 np0005558241 systemd[1]: var-lib-containers-storage-overlay-64f6cefc920f4e3b84858dbde5bc3b065de864b17fc3aeaa2105db6a660df49d-merged.mount: Deactivated successfully.
Dec 13 02:57:06 np0005558241 python3.9[238115]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 02:57:06 np0005558241 systemd[1]: Reloading.
Dec 13 02:57:06 np0005558241 podman[237777]: 2025-12-13 07:57:06.109578782 +0000 UTC m=+2.018869886 container remove f6d88e2820d8c9709f6084251cfa2c5e51c3db60013f79ff16fac4446d0c42f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:57:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v901: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:06 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:57:06 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:57:06 np0005558241 podman[238159]: 2025-12-13 07:57:06.307295812 +0000 UTC m=+0.033060495 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:57:06 np0005558241 podman[238159]: 2025-12-13 07:57:06.431121582 +0000 UTC m=+0.156886185 container create 185042a72d3104b536805ac8b5e7194a522ee2511de807e327ee735231fde47a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:57:06 np0005558241 systemd[1]: libpod-conmon-f6d88e2820d8c9709f6084251cfa2c5e51c3db60013f79ff16fac4446d0c42f0.scope: Deactivated successfully.
Dec 13 02:57:06 np0005558241 systemd[1]: Started libpod-conmon-185042a72d3104b536805ac8b5e7194a522ee2511de807e327ee735231fde47a.scope.
Dec 13 02:57:06 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:57:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a015345197d0e38612d5df6d76fa55bd4bf60b82d76a5cdb775c0f6410fb346e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:57:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a015345197d0e38612d5df6d76fa55bd4bf60b82d76a5cdb775c0f6410fb346e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:57:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a015345197d0e38612d5df6d76fa55bd4bf60b82d76a5cdb775c0f6410fb346e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:57:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a015345197d0e38612d5df6d76fa55bd4bf60b82d76a5cdb775c0f6410fb346e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:57:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a015345197d0e38612d5df6d76fa55bd4bf60b82d76a5cdb775c0f6410fb346e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:57:07 np0005558241 podman[238159]: 2025-12-13 07:57:07.350102077 +0000 UTC m=+1.075866720 container init 185042a72d3104b536805ac8b5e7194a522ee2511de807e327ee735231fde47a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:57:07 np0005558241 python3.9[238329]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:57:07 np0005558241 podman[238159]: 2025-12-13 07:57:07.367239019 +0000 UTC m=+1.093003652 container start 185042a72d3104b536805ac8b5e7194a522ee2511de807e327ee735231fde47a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:57:07 np0005558241 condescending_hodgkin[238251]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:57:07 np0005558241 condescending_hodgkin[238251]: --> All data devices are unavailable
Dec 13 02:57:08 np0005558241 systemd[1]: libpod-185042a72d3104b536805ac8b5e7194a522ee2511de807e327ee735231fde47a.scope: Deactivated successfully.
Dec 13 02:57:08 np0005558241 podman[238159]: 2025-12-13 07:57:08.095860586 +0000 UTC m=+1.821625189 container attach 185042a72d3104b536805ac8b5e7194a522ee2511de807e327ee735231fde47a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 02:57:08 np0005558241 podman[238159]: 2025-12-13 07:57:08.096622544 +0000 UTC m=+1.822387147 container died 185042a72d3104b536805ac8b5e7194a522ee2511de807e327ee735231fde47a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hodgkin, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:57:08 np0005558241 python3.9[238496]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:57:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v902: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:08 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a015345197d0e38612d5df6d76fa55bd4bf60b82d76a5cdb775c0f6410fb346e-merged.mount: Deactivated successfully.
Dec 13 02:57:08 np0005558241 podman[238159]: 2025-12-13 07:57:08.464792883 +0000 UTC m=+2.190557476 container remove 185042a72d3104b536805ac8b5e7194a522ee2511de807e327ee735231fde47a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 02:57:08 np0005558241 systemd[1]: libpod-conmon-185042a72d3104b536805ac8b5e7194a522ee2511de807e327ee735231fde47a.scope: Deactivated successfully.
Dec 13 02:57:08 np0005558241 python3.9[238703]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:57:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:57:09 np0005558241 podman[238736]: 2025-12-13 07:57:08.94011682 +0000 UTC m=+0.020844884 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:57:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:57:09
Dec 13 02:57:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:57:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:57:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.log', 'images', 'volumes', '.rgw.root', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data']
Dec 13 02:57:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:57:09 np0005558241 podman[238736]: 2025-12-13 07:57:09.123347003 +0000 UTC m=+0.204075077 container create 7cf568535764b083b711ea7e588cd9c8638f4777aa7b18c8377b9d7fee746d38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 02:57:09 np0005558241 python3.9[238897]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:57:09 np0005558241 systemd[1]: Started libpod-conmon-7cf568535764b083b711ea7e588cd9c8638f4777aa7b18c8377b9d7fee746d38.scope.
Dec 13 02:57:09 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:57:09 np0005558241 podman[238736]: 2025-12-13 07:57:09.827006214 +0000 UTC m=+0.907734298 container init 7cf568535764b083b711ea7e588cd9c8638f4777aa7b18c8377b9d7fee746d38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_swanson, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:57:09 np0005558241 podman[238736]: 2025-12-13 07:57:09.834895628 +0000 UTC m=+0.915623662 container start 7cf568535764b083b711ea7e588cd9c8638f4777aa7b18c8377b9d7fee746d38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_swanson, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:57:09 np0005558241 charming_swanson[238902]: 167 167
Dec 13 02:57:09 np0005558241 systemd[1]: libpod-7cf568535764b083b711ea7e588cd9c8638f4777aa7b18c8377b9d7fee746d38.scope: Deactivated successfully.
Dec 13 02:57:09 np0005558241 podman[238736]: 2025-12-13 07:57:09.870992997 +0000 UTC m=+0.951721071 container attach 7cf568535764b083b711ea7e588cd9c8638f4777aa7b18c8377b9d7fee746d38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 02:57:09 np0005558241 podman[238736]: 2025-12-13 07:57:09.872998517 +0000 UTC m=+0.953726581 container died 7cf568535764b083b711ea7e588cd9c8638f4777aa7b18c8377b9d7fee746d38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_swanson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 02:57:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:57:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:57:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:57:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:57:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:57:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:57:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v903: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:10 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7795fcc63ff4a11b4041a02a76d24bd2e4302f17d3be16f353c2f28315ebb83c-merged.mount: Deactivated successfully.
Dec 13 02:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:57:10 np0005558241 python3.9[239071]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:57:10 np0005558241 podman[238736]: 2025-12-13 07:57:10.451513566 +0000 UTC m=+1.532241620 container remove 7cf568535764b083b711ea7e588cd9c8638f4777aa7b18c8377b9d7fee746d38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:57:10 np0005558241 systemd[1]: libpod-conmon-7cf568535764b083b711ea7e588cd9c8638f4777aa7b18c8377b9d7fee746d38.scope: Deactivated successfully.
Dec 13 02:57:10 np0005558241 podman[239080]: 2025-12-13 07:57:10.610131303 +0000 UTC m=+0.026520804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:57:10 np0005558241 podman[239080]: 2025-12-13 07:57:10.845341156 +0000 UTC m=+0.261730627 container create 20868e129cbe5248f558650214e8cab385d135f47803d322d5a2dc6d3ddd3b7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_napier, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 02:57:11 np0005558241 systemd[1]: Started libpod-conmon-20868e129cbe5248f558650214e8cab385d135f47803d322d5a2dc6d3ddd3b7e.scope.
Dec 13 02:57:11 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:57:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38c9ae65bcfbcfdd71c5afdddb49c54ae5120fd227c494c1fda2d6aa52c39ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:57:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38c9ae65bcfbcfdd71c5afdddb49c54ae5120fd227c494c1fda2d6aa52c39ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:57:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38c9ae65bcfbcfdd71c5afdddb49c54ae5120fd227c494c1fda2d6aa52c39ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:57:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38c9ae65bcfbcfdd71c5afdddb49c54ae5120fd227c494c1fda2d6aa52c39ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:57:11 np0005558241 podman[239080]: 2025-12-13 07:57:11.236425599 +0000 UTC m=+0.652815090 container init 20868e129cbe5248f558650214e8cab385d135f47803d322d5a2dc6d3ddd3b7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_napier, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec 13 02:57:11 np0005558241 podman[239080]: 2025-12-13 07:57:11.244447737 +0000 UTC m=+0.660837208 container start 20868e129cbe5248f558650214e8cab385d135f47803d322d5a2dc6d3ddd3b7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:57:11 np0005558241 python3.9[239250]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:57:11 np0005558241 podman[239080]: 2025-12-13 07:57:11.314525413 +0000 UTC m=+0.730914884 container attach 20868e129cbe5248f558650214e8cab385d135f47803d322d5a2dc6d3ddd3b7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_napier, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:57:11 np0005558241 eager_napier[239246]: {
Dec 13 02:57:11 np0005558241 eager_napier[239246]:    "0": [
Dec 13 02:57:11 np0005558241 eager_napier[239246]:        {
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "devices": [
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "/dev/loop3"
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            ],
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "lv_name": "ceph_lv0",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "lv_size": "21470642176",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "name": "ceph_lv0",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "tags": {
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.cluster_name": "ceph",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.crush_device_class": "",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.encrypted": "0",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.objectstore": "bluestore",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.osd_id": "0",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.type": "block",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.vdo": "0",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.with_tpm": "0"
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            },
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "type": "block",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "vg_name": "ceph_vg0"
Dec 13 02:57:11 np0005558241 eager_napier[239246]:        }
Dec 13 02:57:11 np0005558241 eager_napier[239246]:    ],
Dec 13 02:57:11 np0005558241 eager_napier[239246]:    "1": [
Dec 13 02:57:11 np0005558241 eager_napier[239246]:        {
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "devices": [
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "/dev/loop4"
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            ],
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "lv_name": "ceph_lv1",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "lv_size": "21470642176",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "name": "ceph_lv1",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "tags": {
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.cluster_name": "ceph",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.crush_device_class": "",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.encrypted": "0",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.objectstore": "bluestore",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.osd_id": "1",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.type": "block",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.vdo": "0",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.with_tpm": "0"
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            },
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "type": "block",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "vg_name": "ceph_vg1"
Dec 13 02:57:11 np0005558241 eager_napier[239246]:        }
Dec 13 02:57:11 np0005558241 eager_napier[239246]:    ],
Dec 13 02:57:11 np0005558241 eager_napier[239246]:    "2": [
Dec 13 02:57:11 np0005558241 eager_napier[239246]:        {
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "devices": [
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "/dev/loop5"
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            ],
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "lv_name": "ceph_lv2",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "lv_size": "21470642176",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "name": "ceph_lv2",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "tags": {
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.cluster_name": "ceph",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.crush_device_class": "",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.encrypted": "0",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.objectstore": "bluestore",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.osd_id": "2",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.type": "block",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.vdo": "0",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:                "ceph.with_tpm": "0"
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            },
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "type": "block",
Dec 13 02:57:11 np0005558241 eager_napier[239246]:            "vg_name": "ceph_vg2"
Dec 13 02:57:11 np0005558241 eager_napier[239246]:        }
Dec 13 02:57:11 np0005558241 eager_napier[239246]:    ]
Dec 13 02:57:11 np0005558241 eager_napier[239246]: }
Dec 13 02:57:11 np0005558241 systemd[1]: libpod-20868e129cbe5248f558650214e8cab385d135f47803d322d5a2dc6d3ddd3b7e.scope: Deactivated successfully.
Dec 13 02:57:11 np0005558241 podman[239080]: 2025-12-13 07:57:11.608984125 +0000 UTC m=+1.025373606 container died 20868e129cbe5248f558650214e8cab385d135f47803d322d5a2dc6d3ddd3b7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_napier, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 02:57:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e38c9ae65bcfbcfdd71c5afdddb49c54ae5120fd227c494c1fda2d6aa52c39ce-merged.mount: Deactivated successfully.
Dec 13 02:57:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v904: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:12 np0005558241 podman[239080]: 2025-12-13 07:57:12.427746799 +0000 UTC m=+1.844136270 container remove 20868e129cbe5248f558650214e8cab385d135f47803d322d5a2dc6d3ddd3b7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_napier, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:57:12 np0005558241 systemd[1]: libpod-conmon-20868e129cbe5248f558650214e8cab385d135f47803d322d5a2dc6d3ddd3b7e.scope: Deactivated successfully.
Dec 13 02:57:12 np0005558241 python3.9[239421]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:57:13 np0005558241 podman[239547]: 2025-12-13 07:57:12.962244436 +0000 UTC m=+0.029537805 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:57:13 np0005558241 podman[239547]: 2025-12-13 07:57:13.193625659 +0000 UTC m=+0.260918998 container create 5cd164fb9f06d064d163ebffa5203d000f6d21fd44fd240a8359ce3313ddca55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 02:57:13 np0005558241 systemd[1]: Started libpod-conmon-5cd164fb9f06d064d163ebffa5203d000f6d21fd44fd240a8359ce3313ddca55.scope.
Dec 13 02:57:13 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:57:13 np0005558241 podman[239547]: 2025-12-13 07:57:13.337502127 +0000 UTC m=+0.404795506 container init 5cd164fb9f06d064d163ebffa5203d000f6d21fd44fd240a8359ce3313ddca55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_payne, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:57:13 np0005558241 podman[239547]: 2025-12-13 07:57:13.347308228 +0000 UTC m=+0.414601587 container start 5cd164fb9f06d064d163ebffa5203d000f6d21fd44fd240a8359ce3313ddca55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_payne, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 02:57:13 np0005558241 charming_payne[239653]: 167 167
Dec 13 02:57:13 np0005558241 systemd[1]: libpod-5cd164fb9f06d064d163ebffa5203d000f6d21fd44fd240a8359ce3313ddca55.scope: Deactivated successfully.
Dec 13 02:57:13 np0005558241 python3.9[239650]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 02:57:13 np0005558241 podman[239547]: 2025-12-13 07:57:13.402340147 +0000 UTC m=+0.469633526 container attach 5cd164fb9f06d064d163ebffa5203d000f6d21fd44fd240a8359ce3313ddca55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_payne, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 02:57:13 np0005558241 podman[239547]: 2025-12-13 07:57:13.40325766 +0000 UTC m=+0.470551019 container died 5cd164fb9f06d064d163ebffa5203d000f6d21fd44fd240a8359ce3313ddca55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 02:57:13 np0005558241 systemd[1]: var-lib-containers-storage-overlay-cad3fdf9eaa8b161e910c18050bacdf7aa5a5c93c53b8e9e3665ddd3cad95bfa-merged.mount: Deactivated successfully.
Dec 13 02:57:13 np0005558241 podman[239547]: 2025-12-13 07:57:13.734624235 +0000 UTC m=+0.801917584 container remove 5cd164fb9f06d064d163ebffa5203d000f6d21fd44fd240a8359ce3313ddca55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_payne, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:57:13 np0005558241 systemd[1]: libpod-conmon-5cd164fb9f06d064d163ebffa5203d000f6d21fd44fd240a8359ce3313ddca55.scope: Deactivated successfully.
Dec 13 02:57:13 np0005558241 podman[239704]: 2025-12-13 07:57:13.93054088 +0000 UTC m=+0.052012146 container create f51b4c89a408749425bbe6ec694076da6baec41920527b84c5423775faf0b3c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_visvesvaraya, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 02:57:13 np0005558241 systemd[1]: Started libpod-conmon-f51b4c89a408749425bbe6ec694076da6baec41920527b84c5423775faf0b3c0.scope.
Dec 13 02:57:13 np0005558241 podman[239704]: 2025-12-13 07:57:13.905299821 +0000 UTC m=+0.026771117 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:57:14 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:57:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a35af981b7d20ff6a6265e59c564ed8ddb5fa6b45380288758c765bfe498ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:57:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a35af981b7d20ff6a6265e59c564ed8ddb5fa6b45380288758c765bfe498ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:57:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a35af981b7d20ff6a6265e59c564ed8ddb5fa6b45380288758c765bfe498ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:57:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a35af981b7d20ff6a6265e59c564ed8ddb5fa6b45380288758c765bfe498ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:57:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:57:14 np0005558241 podman[239704]: 2025-12-13 07:57:14.026403101 +0000 UTC m=+0.147874397 container init f51b4c89a408749425bbe6ec694076da6baec41920527b84c5423775faf0b3c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_visvesvaraya, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:57:14 np0005558241 podman[239704]: 2025-12-13 07:57:14.035383031 +0000 UTC m=+0.156854307 container start f51b4c89a408749425bbe6ec694076da6baec41920527b84c5423775faf0b3c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 02:57:14 np0005558241 podman[239704]: 2025-12-13 07:57:14.040377893 +0000 UTC m=+0.161849479 container attach f51b4c89a408749425bbe6ec694076da6baec41920527b84c5423775faf0b3c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:57:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v905: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:14 np0005558241 lvm[239926]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:57:14 np0005558241 lvm[239924]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:57:14 np0005558241 lvm[239926]: VG ceph_vg1 finished
Dec 13 02:57:14 np0005558241 lvm[239924]: VG ceph_vg0 finished
Dec 13 02:57:14 np0005558241 python3.9[239906]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:57:14 np0005558241 lvm[239928]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:57:14 np0005558241 lvm[239928]: VG ceph_vg2 finished
Dec 13 02:57:14 np0005558241 lvm[239929]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:57:14 np0005558241 lvm[239929]: VG ceph_vg1 finished
Dec 13 02:57:14 np0005558241 lvm[239931]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:57:14 np0005558241 lvm[239931]: VG ceph_vg1 finished
Dec 13 02:57:14 np0005558241 beautiful_visvesvaraya[239719]: {}
Dec 13 02:57:14 np0005558241 systemd[1]: libpod-f51b4c89a408749425bbe6ec694076da6baec41920527b84c5423775faf0b3c0.scope: Deactivated successfully.
Dec 13 02:57:14 np0005558241 systemd[1]: libpod-f51b4c89a408749425bbe6ec694076da6baec41920527b84c5423775faf0b3c0.scope: Consumed 1.438s CPU time.
Dec 13 02:57:14 np0005558241 podman[239704]: 2025-12-13 07:57:14.957991776 +0000 UTC m=+1.079463052 container died f51b4c89a408749425bbe6ec694076da6baec41920527b84c5423775faf0b3c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_visvesvaraya, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:57:15 np0005558241 python3.9[240096]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:57:16 np0005558241 python3.9[240248]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:57:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v906: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:16 np0005558241 python3.9[240402]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:57:17 np0005558241 systemd[1]: var-lib-containers-storage-overlay-67a35af981b7d20ff6a6265e59c564ed8ddb5fa6b45380288758c765bfe498ce-merged.mount: Deactivated successfully.
Dec 13 02:57:17 np0005558241 python3.9[240554]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:57:17 np0005558241 podman[239704]: 2025-12-13 07:57:17.953301055 +0000 UTC m=+4.074772331 container remove f51b4c89a408749425bbe6ec694076da6baec41920527b84c5423775faf0b3c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:57:17 np0005558241 systemd[1]: libpod-conmon-f51b4c89a408749425bbe6ec694076da6baec41920527b84c5423775faf0b3c0.scope: Deactivated successfully.
Dec 13 02:57:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:57:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:57:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:57:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v907: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:57:18 np0005558241 python3.9[240706]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:57:18 np0005558241 python3.9[240883]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:57:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:57:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:57:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:57:19 np0005558241 python3.9[241035]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:57:20 np0005558241 python3.9[241187]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v908: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:57:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:57:20 np0005558241 podman[241311]: 2025-12-13 07:57:20.674228347 +0000 UTC m=+0.073264868 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 02:57:21 np0005558241 python3.9[241356]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:57:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v909: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:57:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v910: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v911: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:27 np0005558241 python3.9[241512]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 13 02:57:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v912: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:28 np0005558241 python3.9[241665]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 02:57:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:57:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v913: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:30 np0005558241 python3.9[241823]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 13 02:57:31 np0005558241 podman[241827]: 2025-12-13 07:57:31.976029099 +0000 UTC m=+0.062179556 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 02:57:32 np0005558241 podman[241826]: 2025-12-13 07:57:32.074524744 +0000 UTC m=+0.160609740 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Dec 13 02:57:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v914: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v915: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:57:34 np0005558241 systemd-logind[787]: New session 54 of user zuul.
Dec 13 02:57:34 np0005558241 systemd[1]: Started Session 54 of User zuul.
Dec 13 02:57:34 np0005558241 systemd[1]: session-54.scope: Deactivated successfully.
Dec 13 02:57:34 np0005558241 systemd-logind[787]: Session 54 logged out. Waiting for processes to exit.
Dec 13 02:57:34 np0005558241 systemd-logind[787]: Removed session 54.
Dec 13 02:57:35 np0005558241 python3.9[242054]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:57:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v916: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:36 np0005558241 python3.9[242175]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765612654.8721173-1249-232597118712199/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:57:37 np0005558241 python3.9[242325]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:57:37 np0005558241 python3.9[242401]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:57:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v917: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:38 np0005558241 python3.9[242551]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:57:39 np0005558241 python3.9[242672]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765612657.9962676-1249-83813858166607/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:57:40 np0005558241 python3.9[242822]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:57:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:57:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:57:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:57:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:57:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:57:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:57:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v918: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:41 np0005558241 python3.9[242943]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765612659.509749-1249-191577130498221/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:57:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v919: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:42 np0005558241 python3.9[243093]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:57:43 np0005558241 python3.9[243214]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765612661.212887-1249-107072069696811/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:57:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:57:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v920: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:44 np0005558241 python3.9[243364]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:57:45 np0005558241 python3.9[243485]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765612663.645549-1249-226043783629553/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:57:46 np0005558241 python3.9[243637]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:57:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v921: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:47 np0005558241 python3.9[243789]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:57:48 np0005558241 python3.9[243941]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:57:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v922: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:57:48 np0005558241 python3.9[244093]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:57:49 np0005558241 python3.9[244216]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1765612668.2686062-1356-220432768861211/.source _original_basename=.n3vfvnlz follow=False checksum=487ea8f8c640a80de7604f32055aca77da5e0bb8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec 13 02:57:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v923: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:50 np0005558241 python3.9[244368]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:57:50 np0005558241 podman[244494]: 2025-12-13 07:57:50.893691398 +0000 UTC m=+0.110724206 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Dec 13 02:57:51 np0005558241 python3.9[244531]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:57:51 np0005558241 python3.9[244661]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765612670.4856782-1382-25236023315805/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=209f20105d13c02e6cb251483bae1beb11a1258f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:57:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v924: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:52 np0005558241 python3.9[244811]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 02:57:53 np0005558241 python3.9[244932]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765612672.2587528-1397-178134296120622/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=0333d3a3f5c3a0526b0ebe430250032166710e8a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 02:57:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:57:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v925: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:54 np0005558241 python3.9[245084]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 13 02:57:55 np0005558241 python3.9[245236]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 02:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:57:55.371 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:57:55.372 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:57:55.372 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:57:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v926: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:56 np0005558241 python3[245388]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 02:57:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v927: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:57:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:58:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v928: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v929: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v930: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v931: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v932: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:08 np0005558241 ceph-mds[98037]: mds.beacon.cephfs.compute-0.gfdyct missed beacon ack from the monitors
Dec 13 02:58:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:58:09
Dec 13 02:58:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:58:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:58:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['images', '.rgw.root', '.mgr', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta']
Dec 13 02:58:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:58:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:58:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:58:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:58:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:58:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:58:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:58:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v933: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:58:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v934: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:12 np0005558241 ceph-mds[98037]: mds.beacon.cephfs.compute-0.gfdyct missed beacon ack from the monitors
Dec 13 02:58:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v935: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v936: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:16 np0005558241 ceph-mds[98037]: mds.beacon.cephfs.compute-0.gfdyct missed beacon ack from the monitors
Dec 13 02:58:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).mds e4 check_health: resetting beacon timeouts due to mon delay (slow election?) of 2e+01 seconds
Dec 13 02:58:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:58:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v937: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v938: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:58:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:58:20 np0005558241 podman[245442]: 2025-12-13 07:58:20.957965183 +0000 UTC m=+18.037569647 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:58:20 np0005558241 podman[245441]: 2025-12-13 07:58:20.981723266 +0000 UTC m=+18.066801884 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec 13 02:58:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:58:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v939: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v940: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v941: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v942: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:28 np0005558241 ceph-mds[98037]: mds.beacon.cephfs.compute-0.gfdyct missed beacon ack from the monitors
Dec 13 02:58:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:58:31 np0005558241 podman[245549]: 2025-12-13 07:58:31.362211796 +0000 UTC m=+10.382797658 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 02:58:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:58:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:58:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 02:58:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:58:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 02:58:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:58:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:58:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 02:58:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 02:58:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 02:58:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:58:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 02:58:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 02:58:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 02:58:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:58:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:58:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:58:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:58:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:58:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:58:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:42 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:58:42 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 02:58:42 np0005558241 podman[245400]: 2025-12-13 07:58:42.538611852 +0000 UTC m=+45.845492152 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Dec 13 02:58:42 np0005558241 podman[245680]: 2025-12-13 07:58:42.496347426 +0000 UTC m=+0.389180755 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:58:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:58:43 np0005558241 podman[245680]: 2025-12-13 07:58:43.497818713 +0000 UTC m=+1.390652002 container create 57ceaa040cc09d6cb91d7564423c4f05563e8105dadb9c053aa1a6420caf4eef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 02:58:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:46 np0005558241 systemd[1]: Started libpod-conmon-57ceaa040cc09d6cb91d7564423c4f05563e8105dadb9c053aa1a6420caf4eef.scope.
Dec 13 02:58:46 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:58:47 np0005558241 podman[245680]: 2025-12-13 07:58:47.041317676 +0000 UTC m=+4.934151055 container init 57ceaa040cc09d6cb91d7564423c4f05563e8105dadb9c053aa1a6420caf4eef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_nash, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:58:47 np0005558241 podman[245680]: 2025-12-13 07:58:47.059924943 +0000 UTC m=+4.952758262 container start 57ceaa040cc09d6cb91d7564423c4f05563e8105dadb9c053aa1a6420caf4eef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:58:47 np0005558241 gallant_nash[245709]: 167 167
Dec 13 02:58:47 np0005558241 systemd[1]: libpod-57ceaa040cc09d6cb91d7564423c4f05563e8105dadb9c053aa1a6420caf4eef.scope: Deactivated successfully.
Dec 13 02:58:47 np0005558241 podman[245723]: 2025-12-13 07:58:47.051503937 +0000 UTC m=+0.260151431 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Dec 13 02:58:48 np0005558241 podman[245680]: 2025-12-13 07:58:48.181175079 +0000 UTC m=+6.074008358 container attach 57ceaa040cc09d6cb91d7564423c4f05563e8105dadb9c053aa1a6420caf4eef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 02:58:48 np0005558241 podman[245680]: 2025-12-13 07:58:48.181747083 +0000 UTC m=+6.074580362 container died 57ceaa040cc09d6cb91d7564423c4f05563e8105dadb9c053aa1a6420caf4eef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_nash, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 02:58:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:58:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c0dc6112ded6ee92a5ccab0e9e22ff75978f14d83f487dfef6995d25cd4b9084-merged.mount: Deactivated successfully.
Dec 13 02:58:51 np0005558241 podman[245680]: 2025-12-13 07:58:51.575612497 +0000 UTC m=+9.468445796 container remove 57ceaa040cc09d6cb91d7564423c4f05563e8105dadb9c053aa1a6420caf4eef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_nash, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:58:51 np0005558241 podman[245723]: 2025-12-13 07:58:51.872780114 +0000 UTC m=+5.081427508 container create 6d8eba2a8d3d1a911a625b70b6b080009da3bb2c03eb9d5fd8e6ee6514b263ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec 13 02:58:51 np0005558241 python3[245388]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec 13 02:58:51 np0005558241 podman[245757]: 2025-12-13 07:58:51.882239726 +0000 UTC m=+0.156842477 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:58:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v954: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:52 np0005558241 podman[245757]: 2025-12-13 07:58:52.269344728 +0000 UTC m=+0.543947499 container create c3863afa275cc9afc8795bf913ed0fd2cf66197a78af7e99a99132f96d5dd567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 02:58:52 np0005558241 systemd[1]: Started libpod-conmon-c3863afa275cc9afc8795bf913ed0fd2cf66197a78af7e99a99132f96d5dd567.scope.
Dec 13 02:58:52 np0005558241 systemd[1]: libpod-conmon-57ceaa040cc09d6cb91d7564423c4f05563e8105dadb9c053aa1a6420caf4eef.scope: Deactivated successfully.
Dec 13 02:58:52 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:58:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca19db0afd20376914e0b1917c55b416d4c7e05f490ccf5f5ed447df4677cd70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:58:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca19db0afd20376914e0b1917c55b416d4c7e05f490ccf5f5ed447df4677cd70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:58:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca19db0afd20376914e0b1917c55b416d4c7e05f490ccf5f5ed447df4677cd70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:58:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca19db0afd20376914e0b1917c55b416d4c7e05f490ccf5f5ed447df4677cd70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:58:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca19db0afd20376914e0b1917c55b416d4c7e05f490ccf5f5ed447df4677cd70/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 02:58:53 np0005558241 podman[245757]: 2025-12-13 07:58:53.00545072 +0000 UTC m=+1.280053471 container init c3863afa275cc9afc8795bf913ed0fd2cf66197a78af7e99a99132f96d5dd567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_sutherland, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec 13 02:58:53 np0005558241 podman[245757]: 2025-12-13 07:58:53.013797115 +0000 UTC m=+1.288399846 container start c3863afa275cc9afc8795bf913ed0fd2cf66197a78af7e99a99132f96d5dd567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_sutherland, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:58:53 np0005558241 podman[245757]: 2025-12-13 07:58:53.255486111 +0000 UTC m=+1.530088842 container attach c3863afa275cc9afc8795bf913ed0fd2cf66197a78af7e99a99132f96d5dd567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_sutherland, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 02:58:53 np0005558241 frosty_sutherland[245786]: --> passed data devices: 0 physical, 3 LVM
Dec 13 02:58:53 np0005558241 frosty_sutherland[245786]: --> All data devices are unavailable
Dec 13 02:58:53 np0005558241 systemd[1]: libpod-c3863afa275cc9afc8795bf913ed0fd2cf66197a78af7e99a99132f96d5dd567.scope: Deactivated successfully.
Dec 13 02:58:53 np0005558241 podman[245757]: 2025-12-13 07:58:53.558325547 +0000 UTC m=+1.832928278 container died c3863afa275cc9afc8795bf913ed0fd2cf66197a78af7e99a99132f96d5dd567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_sutherland, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:58:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:58:53 np0005558241 python3.9[245982]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:58:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:55 np0005558241 python3.9[246137]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 13 02:58:55 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ca19db0afd20376914e0b1917c55b416d4c7e05f490ccf5f5ed447df4677cd70-merged.mount: Deactivated successfully.
Dec 13 02:58:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:58:55.372 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:58:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:58:55.375 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:58:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:58:55.375 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:58:56 np0005558241 python3.9[246289]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 02:58:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v956: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:58:57 np0005558241 python3[246442]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 02:58:57 np0005558241 podman[245757]: 2025-12-13 07:58:57.499453471 +0000 UTC m=+5.774056212 container remove c3863afa275cc9afc8795bf913ed0fd2cf66197a78af7e99a99132f96d5dd567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:58:57 np0005558241 systemd[1]: libpod-conmon-c3863afa275cc9afc8795bf913ed0fd2cf66197a78af7e99a99132f96d5dd567.scope: Deactivated successfully.
Dec 13 02:58:57 np0005558241 podman[246503]: 2025-12-13 07:58:57.655646761 +0000 UTC m=+0.027758892 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b
Dec 13 02:58:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:59:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v959: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:02 np0005558241 podman[246555]: 2025-12-13 07:59:02.502659249 +0000 UTC m=+0.585957250 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 02:59:02 np0005558241 podman[246554]: 2025-12-13 07:59:02.513160667 +0000 UTC m=+0.597440922 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 02:59:02 np0005558241 podman[246503]: 2025-12-13 07:59:02.616372588 +0000 UTC m=+4.988484709 container create d09d1b5b272f5302266a465e8ee172fe808bc3075bafd4679b1dd627c5537661 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, container_name=nova_compute, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2)
Dec 13 02:59:02 np0005558241 podman[246553]: 2025-12-13 07:59:02.621501534 +0000 UTC m=+0.709114600 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 02:59:02 np0005558241 python3[246442]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b kolla_start
Dec 13 02:59:02 np0005558241 podman[246609]: 2025-12-13 07:59:02.627092071 +0000 UTC m=+0.078410764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:59:02 np0005558241 podman[246609]: 2025-12-13 07:59:02.827354902 +0000 UTC m=+0.278673575 container create 680c1fdfaca5d2c327f924f515ca0fc39ca5f974b8c3d03953341b500eb56f28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_murdock, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 02:59:02 np0005558241 systemd[1]: Started libpod-conmon-680c1fdfaca5d2c327f924f515ca0fc39ca5f974b8c3d03953341b500eb56f28.scope.
Dec 13 02:59:02 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:59:03 np0005558241 podman[246609]: 2025-12-13 07:59:03.042478967 +0000 UTC m=+0.493797660 container init 680c1fdfaca5d2c327f924f515ca0fc39ca5f974b8c3d03953341b500eb56f28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 02:59:03 np0005558241 podman[246609]: 2025-12-13 07:59:03.052708978 +0000 UTC m=+0.504027661 container start 680c1fdfaca5d2c327f924f515ca0fc39ca5f974b8c3d03953341b500eb56f28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_murdock, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 02:59:03 np0005558241 keen_murdock[246647]: 167 167
Dec 13 02:59:03 np0005558241 systemd[1]: libpod-680c1fdfaca5d2c327f924f515ca0fc39ca5f974b8c3d03953341b500eb56f28.scope: Deactivated successfully.
Dec 13 02:59:03 np0005558241 podman[246609]: 2025-12-13 07:59:03.217113309 +0000 UTC m=+0.668432012 container attach 680c1fdfaca5d2c327f924f515ca0fc39ca5f974b8c3d03953341b500eb56f28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 02:59:03 np0005558241 podman[246609]: 2025-12-13 07:59:03.218331109 +0000 UTC m=+0.669649782 container died 680c1fdfaca5d2c327f924f515ca0fc39ca5f974b8c3d03953341b500eb56f28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_murdock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:59:03 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1a2194ec5d11639fcaf74ebee0ebff36c526293d96242cb8a5ce5b03950e2586-merged.mount: Deactivated successfully.
Dec 13 02:59:03 np0005558241 podman[246609]: 2025-12-13 07:59:03.433547737 +0000 UTC m=+0.884866410 container remove 680c1fdfaca5d2c327f924f515ca0fc39ca5f974b8c3d03953341b500eb56f28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_murdock, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Dec 13 02:59:03 np0005558241 systemd[1]: libpod-conmon-680c1fdfaca5d2c327f924f515ca0fc39ca5f974b8c3d03953341b500eb56f28.scope: Deactivated successfully.
Dec 13 02:59:03 np0005558241 podman[246701]: 2025-12-13 07:59:03.601187338 +0000 UTC m=+0.025760483 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:59:03 np0005558241 podman[246701]: 2025-12-13 07:59:03.866883393 +0000 UTC m=+0.291456518 container create 15ee7c38ba4e1b613e500f5940c79d5ada73976c0ad0ed4d6edbfefd102af634 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 02:59:03 np0005558241 systemd[1]: Started libpod-conmon-15ee7c38ba4e1b613e500f5940c79d5ada73976c0ad0ed4d6edbfefd102af634.scope.
Dec 13 02:59:04 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:59:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc262f3a2b0bc38cc43c6e37f448e0ad954c89e96fa3a2bc514eae4a12a6a32e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc262f3a2b0bc38cc43c6e37f448e0ad954c89e96fa3a2bc514eae4a12a6a32e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc262f3a2b0bc38cc43c6e37f448e0ad954c89e96fa3a2bc514eae4a12a6a32e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc262f3a2b0bc38cc43c6e37f448e0ad954c89e96fa3a2bc514eae4a12a6a32e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:04 np0005558241 podman[246701]: 2025-12-13 07:59:04.068825984 +0000 UTC m=+0.493399139 container init 15ee7c38ba4e1b613e500f5940c79d5ada73976c0ad0ed4d6edbfefd102af634 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cannon, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 02:59:04 np0005558241 podman[246701]: 2025-12-13 07:59:04.076754269 +0000 UTC m=+0.501327404 container start 15ee7c38ba4e1b613e500f5940c79d5ada73976c0ad0ed4d6edbfefd102af634 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:59:04 np0005558241 podman[246701]: 2025-12-13 07:59:04.088403164 +0000 UTC m=+0.512976309 container attach 15ee7c38ba4e1b613e500f5940c79d5ada73976c0ad0ed4d6edbfefd102af634 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 02:59:04 np0005558241 python3.9[246842]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:59:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]: {
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:    "0": [
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:        {
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "devices": [
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "/dev/loop3"
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            ],
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "lv_name": "ceph_lv0",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "lv_size": "21470642176",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "name": "ceph_lv0",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "tags": {
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.cluster_name": "ceph",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.crush_device_class": "",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.encrypted": "0",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.objectstore": "bluestore",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.osd_id": "0",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.type": "block",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.vdo": "0",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.with_tpm": "0"
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            },
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "type": "block",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "vg_name": "ceph_vg0"
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:        }
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:    ],
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:    "1": [
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:        {
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "devices": [
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "/dev/loop4"
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            ],
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "lv_name": "ceph_lv1",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "lv_size": "21470642176",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "name": "ceph_lv1",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "tags": {
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.cluster_name": "ceph",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.crush_device_class": "",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.encrypted": "0",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.objectstore": "bluestore",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.osd_id": "1",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.type": "block",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.vdo": "0",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.with_tpm": "0"
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            },
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "type": "block",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "vg_name": "ceph_vg1"
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:        }
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:    ],
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:    "2": [
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:        {
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "devices": [
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "/dev/loop5"
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            ],
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "lv_name": "ceph_lv2",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "lv_size": "21470642176",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "name": "ceph_lv2",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "tags": {
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.cephx_lockbox_secret": "",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.cluster_name": "ceph",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.crush_device_class": "",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.encrypted": "0",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.objectstore": "bluestore",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.osd_id": "2",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.type": "block",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.vdo": "0",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:                "ceph.with_tpm": "0"
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            },
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "type": "block",
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:            "vg_name": "ceph_vg2"
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:        }
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]:    ]
Dec 13 02:59:04 np0005558241 beautiful_cannon[246845]: }
Dec 13 02:59:04 np0005558241 systemd[1]: libpod-15ee7c38ba4e1b613e500f5940c79d5ada73976c0ad0ed4d6edbfefd102af634.scope: Deactivated successfully.
Dec 13 02:59:04 np0005558241 podman[246701]: 2025-12-13 07:59:04.393881265 +0000 UTC m=+0.818454410 container died 15ee7c38ba4e1b613e500f5940c79d5ada73976c0ad0ed4d6edbfefd102af634 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:59:04 np0005558241 python3.9[247015]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:59:05 np0005558241 systemd[1]: var-lib-containers-storage-overlay-dc262f3a2b0bc38cc43c6e37f448e0ad954c89e96fa3a2bc514eae4a12a6a32e-merged.mount: Deactivated successfully.
Dec 13 02:59:05 np0005558241 podman[246701]: 2025-12-13 07:59:05.268971664 +0000 UTC m=+1.693544789 container remove 15ee7c38ba4e1b613e500f5940c79d5ada73976c0ad0ed4d6edbfefd102af634 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cannon, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:59:05 np0005558241 systemd[1]: libpod-conmon-15ee7c38ba4e1b613e500f5940c79d5ada73976c0ad0ed4d6edbfefd102af634.scope: Deactivated successfully.
Dec 13 02:59:05 np0005558241 python3.9[247189]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765612744.9849994-1489-40815447651902/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 02:59:05 np0005558241 podman[247256]: 2025-12-13 07:59:05.723100161 +0000 UTC m=+0.023248071 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:59:05 np0005558241 podman[247256]: 2025-12-13 07:59:05.924253693 +0000 UTC m=+0.224401583 container create fc78fb92d84ffcbe159a41028cad3ada6bb74660ad4caee4e10355e668fa3456 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 02:59:05 np0005558241 systemd[1]: Started libpod-conmon-fc78fb92d84ffcbe159a41028cad3ada6bb74660ad4caee4e10355e668fa3456.scope.
Dec 13 02:59:06 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:59:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:59:06 np0005558241 podman[247256]: 2025-12-13 07:59:06.130375748 +0000 UTC m=+0.430523668 container init fc78fb92d84ffcbe159a41028cad3ada6bb74660ad4caee4e10355e668fa3456 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:59:06 np0005558241 podman[247256]: 2025-12-13 07:59:06.137857591 +0000 UTC m=+0.438005482 container start fc78fb92d84ffcbe159a41028cad3ada6bb74660ad4caee4e10355e668fa3456 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sanderson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 02:59:06 np0005558241 ecstatic_sanderson[247325]: 167 167
Dec 13 02:59:06 np0005558241 systemd[1]: libpod-fc78fb92d84ffcbe159a41028cad3ada6bb74660ad4caee4e10355e668fa3456.scope: Deactivated successfully.
Dec 13 02:59:06 np0005558241 podman[247256]: 2025-12-13 07:59:06.176656863 +0000 UTC m=+0.476804833 container attach fc78fb92d84ffcbe159a41028cad3ada6bb74660ad4caee4e10355e668fa3456 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:59:06 np0005558241 podman[247256]: 2025-12-13 07:59:06.177061453 +0000 UTC m=+0.477209343 container died fc78fb92d84ffcbe159a41028cad3ada6bb74660ad4caee4e10355e668fa3456 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sanderson, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 02:59:06 np0005558241 python3.9[247322]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 02:59:06 np0005558241 systemd[1]: Reloading.
Dec 13 02:59:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:06 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:59:06 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:59:06 np0005558241 podman[247256]: 2025-12-13 07:59:06.350818624 +0000 UTC m=+0.650966514 container remove fc78fb92d84ffcbe159a41028cad3ada6bb74660ad4caee4e10355e668fa3456 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sanderson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 02:59:06 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a163ffed57640efc9a7ee4c65c7ced743b789b23560fbcb723e61f8bd951e3fc-merged.mount: Deactivated successfully.
Dec 13 02:59:06 np0005558241 systemd[1]: libpod-conmon-fc78fb92d84ffcbe159a41028cad3ada6bb74660ad4caee4e10355e668fa3456.scope: Deactivated successfully.
Dec 13 02:59:06 np0005558241 podman[247385]: 2025-12-13 07:59:06.506134972 +0000 UTC m=+0.026818958 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 02:59:06 np0005558241 podman[247385]: 2025-12-13 07:59:06.75968994 +0000 UTC m=+0.280373906 container create 37c2ed8e9652cff1c4c25017f4d8d362ba4ec6461aa9d8c62b7ce0e71b4fe64c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:59:06 np0005558241 systemd[1]: Started libpod-conmon-37c2ed8e9652cff1c4c25017f4d8d362ba4ec6461aa9d8c62b7ce0e71b4fe64c.scope.
Dec 13 02:59:06 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:59:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98115bf28373570871c59702b6cf12b7aeaca6136c13d996d25a109d5f465dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98115bf28373570871c59702b6cf12b7aeaca6136c13d996d25a109d5f465dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98115bf28373570871c59702b6cf12b7aeaca6136c13d996d25a109d5f465dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98115bf28373570871c59702b6cf12b7aeaca6136c13d996d25a109d5f465dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:07 np0005558241 podman[247385]: 2025-12-13 07:59:07.133185779 +0000 UTC m=+0.653869765 container init 37c2ed8e9652cff1c4c25017f4d8d362ba4ec6461aa9d8c62b7ce0e71b4fe64c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 02:59:07 np0005558241 podman[247385]: 2025-12-13 07:59:07.148348351 +0000 UTC m=+0.669032317 container start 37c2ed8e9652cff1c4c25017f4d8d362ba4ec6461aa9d8c62b7ce0e71b4fe64c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sinoussi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 02:59:07 np0005558241 python3.9[247480]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 02:59:07 np0005558241 podman[247385]: 2025-12-13 07:59:07.205715198 +0000 UTC m=+0.726399194 container attach 37c2ed8e9652cff1c4c25017f4d8d362ba4ec6461aa9d8c62b7ce0e71b4fe64c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 02:59:07 np0005558241 systemd[1]: Reloading.
Dec 13 02:59:07 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 02:59:07 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 02:59:07 np0005558241 systemd[1]: Starting dnf makecache...
Dec 13 02:59:07 np0005558241 systemd[1]: Starting nova_compute container...
Dec 13 02:59:07 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:59:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d77a3e2254840fdf3a4277971a1fcecfff1df76241b5e3b893cb7c127bbb98/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d77a3e2254840fdf3a4277971a1fcecfff1df76241b5e3b893cb7c127bbb98/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d77a3e2254840fdf3a4277971a1fcecfff1df76241b5e3b893cb7c127bbb98/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d77a3e2254840fdf3a4277971a1fcecfff1df76241b5e3b893cb7c127bbb98/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d77a3e2254840fdf3a4277971a1fcecfff1df76241b5e3b893cb7c127bbb98/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:07 np0005558241 podman[247560]: 2025-12-13 07:59:07.833694146 +0000 UTC m=+0.186930375 container init d09d1b5b272f5302266a465e8ee172fe808bc3075bafd4679b1dd627c5537661 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute)
Dec 13 02:59:07 np0005558241 podman[247560]: 2025-12-13 07:59:07.844928572 +0000 UTC m=+0.198164801 container start d09d1b5b272f5302266a465e8ee172fe808bc3075bafd4679b1dd627c5537661 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec 13 02:59:07 np0005558241 nova_compute[247600]: + sudo -E kolla_set_configs
Dec 13 02:59:07 np0005558241 lvm[247618]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 02:59:07 np0005558241 lvm[247618]: VG ceph_vg0 finished
Dec 13 02:59:07 np0005558241 lvm[247621]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 02:59:07 np0005558241 lvm[247621]: VG ceph_vg1 finished
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Validating config file
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Copying service configuration files
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Deleting /etc/ceph
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Creating directory /etc/ceph
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Setting permission for /etc/ceph
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 13 02:59:07 np0005558241 lvm[247623]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 02:59:07 np0005558241 lvm[247623]: VG ceph_vg2 finished
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Writing out command to execute
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 13 02:59:07 np0005558241 nova_compute[247600]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 13 02:59:07 np0005558241 nova_compute[247600]: ++ cat /run_command
Dec 13 02:59:07 np0005558241 nova_compute[247600]: + CMD=nova-compute
Dec 13 02:59:07 np0005558241 nova_compute[247600]: + ARGS=
Dec 13 02:59:07 np0005558241 nova_compute[247600]: + sudo kolla_copy_cacerts
Dec 13 02:59:07 np0005558241 dnf[247556]: Metadata cache refreshed recently.
Dec 13 02:59:07 np0005558241 nova_compute[247600]: + [[ ! -n '' ]]
Dec 13 02:59:07 np0005558241 nova_compute[247600]: + . kolla_extend_start
Dec 13 02:59:07 np0005558241 nova_compute[247600]: + echo 'Running command: '\''nova-compute'\'''
Dec 13 02:59:07 np0005558241 nova_compute[247600]: Running command: 'nova-compute'
Dec 13 02:59:07 np0005558241 nova_compute[247600]: + umask 0022
Dec 13 02:59:07 np0005558241 nova_compute[247600]: + exec nova-compute
Dec 13 02:59:07 np0005558241 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 13 02:59:07 np0005558241 systemd[1]: Finished dnf makecache.
Dec 13 02:59:08 np0005558241 elastic_sinoussi[247449]: {}
Dec 13 02:59:08 np0005558241 systemd[1]: libpod-37c2ed8e9652cff1c4c25017f4d8d362ba4ec6461aa9d8c62b7ce0e71b4fe64c.scope: Deactivated successfully.
Dec 13 02:59:08 np0005558241 systemd[1]: libpod-37c2ed8e9652cff1c4c25017f4d8d362ba4ec6461aa9d8c62b7ce0e71b4fe64c.scope: Consumed 1.397s CPU time.
Dec 13 02:59:08 np0005558241 podman[247560]: nova_compute
Dec 13 02:59:08 np0005558241 systemd[1]: Started nova_compute container.
Dec 13 02:59:08 np0005558241 podman[247385]: 2025-12-13 07:59:08.142109949 +0000 UTC m=+1.662793935 container died 37c2ed8e9652cff1c4c25017f4d8d362ba4ec6461aa9d8c62b7ce0e71b4fe64c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:59:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v962: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:08 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f98115bf28373570871c59702b6cf12b7aeaca6136c13d996d25a109d5f465dc-merged.mount: Deactivated successfully.
Dec 13 02:59:08 np0005558241 podman[247630]: 2025-12-13 07:59:08.425885978 +0000 UTC m=+0.345433621 container remove 37c2ed8e9652cff1c4c25017f4d8d362ba4ec6461aa9d8c62b7ce0e71b4fe64c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 02:59:08 np0005558241 systemd[1]: libpod-conmon-37c2ed8e9652cff1c4c25017f4d8d362ba4ec6461aa9d8c62b7ce0e71b4fe64c.scope: Deactivated successfully.
Dec 13 02:59:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 02:59:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:59:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 02:59:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:59:09 np0005558241 python3.9[247819]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:59:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_07:59:09
Dec 13 02:59:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 02:59:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 02:59:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'images', 'backups', 'volumes', 'cephfs.cephfs.data', 'vms', '.mgr', 'cephfs.cephfs.meta']
Dec 13 02:59:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 02:59:09 np0005558241 python3.9[247970]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:59:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:59:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:59:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:59:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:59:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:59:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:59:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:59:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 02:59:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 02:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 02:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 02:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 02:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 02:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 02:59:10 np0005558241 python3.9[248120]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 02:59:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:59:11 np0005558241 nova_compute[247600]: 2025-12-13 07:59:11.588 247613 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 13 02:59:11 np0005558241 nova_compute[247600]: 2025-12-13 07:59:11.589 247613 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 13 02:59:11 np0005558241 nova_compute[247600]: 2025-12-13 07:59:11.589 247613 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 13 02:59:11 np0005558241 nova_compute[247600]: 2025-12-13 07:59:11.589 247613 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec 13 02:59:11 np0005558241 python3.9[248274]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 13 02:59:11 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 02:59:12 np0005558241 nova_compute[247600]: 2025-12-13 07:59:12.030 247613 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 02:59:12 np0005558241 nova_compute[247600]: 2025-12-13 07:59:12.061 247613 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 02:59:12 np0005558241 nova_compute[247600]: 2025-12-13 07:59:12.061 247613 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec 13 02:59:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:13 np0005558241 python3.9[248451]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 02:59:13 np0005558241 systemd[1]: Stopping nova_compute container...
Dec 13 02:59:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:14 np0005558241 systemd[1]: libpod-d09d1b5b272f5302266a465e8ee172fe808bc3075bafd4679b1dd627c5537661.scope: Deactivated successfully.
Dec 13 02:59:14 np0005558241 systemd[1]: libpod-d09d1b5b272f5302266a465e8ee172fe808bc3075bafd4679b1dd627c5537661.scope: Consumed 2.693s CPU time.
Dec 13 02:59:14 np0005558241 podman[248455]: 2025-12-13 07:59:14.224814731 +0000 UTC m=+0.521588221 container died d09d1b5b272f5302266a465e8ee172fe808bc3075bafd4679b1dd627c5537661 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible)
Dec 13 02:59:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d09d1b5b272f5302266a465e8ee172fe808bc3075bafd4679b1dd627c5537661-userdata-shm.mount: Deactivated successfully.
Dec 13 02:59:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay-25d77a3e2254840fdf3a4277971a1fcecfff1df76241b5e3b893cb7c127bbb98-merged.mount: Deactivated successfully.
Dec 13 02:59:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:59:16 np0005558241 podman[248455]: 2025-12-13 07:59:16.554486088 +0000 UTC m=+2.851259588 container cleanup d09d1b5b272f5302266a465e8ee172fe808bc3075bafd4679b1dd627c5537661 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 02:59:16 np0005558241 podman[248455]: nova_compute
Dec 13 02:59:16 np0005558241 podman[248484]: nova_compute
Dec 13 02:59:16 np0005558241 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 13 02:59:16 np0005558241 systemd[1]: Stopped nova_compute container.
Dec 13 02:59:16 np0005558241 systemd[1]: Starting nova_compute container...
Dec 13 02:59:17 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:59:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d77a3e2254840fdf3a4277971a1fcecfff1df76241b5e3b893cb7c127bbb98/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d77a3e2254840fdf3a4277971a1fcecfff1df76241b5e3b893cb7c127bbb98/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d77a3e2254840fdf3a4277971a1fcecfff1df76241b5e3b893cb7c127bbb98/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d77a3e2254840fdf3a4277971a1fcecfff1df76241b5e3b893cb7c127bbb98/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25d77a3e2254840fdf3a4277971a1fcecfff1df76241b5e3b893cb7c127bbb98/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:17 np0005558241 podman[248495]: 2025-12-13 07:59:17.300715778 +0000 UTC m=+0.622356643 container init d09d1b5b272f5302266a465e8ee172fe808bc3075bafd4679b1dd627c5537661 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 02:59:17 np0005558241 podman[248495]: 2025-12-13 07:59:17.311222926 +0000 UTC m=+0.632863781 container start d09d1b5b272f5302266a465e8ee172fe808bc3075bafd4679b1dd627c5537661 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 02:59:17 np0005558241 nova_compute[248510]: + sudo -E kolla_set_configs
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Validating config file
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Copying service configuration files
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Deleting /etc/ceph
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Creating directory /etc/ceph
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Setting permission for /etc/ceph
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Writing out command to execute
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 13 02:59:17 np0005558241 nova_compute[248510]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 13 02:59:17 np0005558241 nova_compute[248510]: ++ cat /run_command
Dec 13 02:59:17 np0005558241 nova_compute[248510]: + CMD=nova-compute
Dec 13 02:59:17 np0005558241 nova_compute[248510]: + ARGS=
Dec 13 02:59:17 np0005558241 nova_compute[248510]: + sudo kolla_copy_cacerts
Dec 13 02:59:17 np0005558241 nova_compute[248510]: + [[ ! -n '' ]]
Dec 13 02:59:17 np0005558241 nova_compute[248510]: + . kolla_extend_start
Dec 13 02:59:17 np0005558241 nova_compute[248510]: Running command: 'nova-compute'
Dec 13 02:59:17 np0005558241 nova_compute[248510]: + echo 'Running command: '\''nova-compute'\'''
Dec 13 02:59:17 np0005558241 nova_compute[248510]: + umask 0022
Dec 13 02:59:17 np0005558241 nova_compute[248510]: + exec nova-compute
Dec 13 02:59:17 np0005558241 podman[248495]: nova_compute
Dec 13 02:59:17 np0005558241 systemd[1]: Started nova_compute container.
Dec 13 02:59:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:18 np0005558241 python3.9[248674]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 13 02:59:19 np0005558241 systemd[1]: Started libpod-conmon-6d8eba2a8d3d1a911a625b70b6b080009da3bb2c03eb9d5fd8e6ee6514b263ac.scope.
Dec 13 02:59:19 np0005558241 systemd[1]: Started libcrun container.
Dec 13 02:59:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d0ec018113f0447e0d2ce342d74dac8088467624bd6b1fa92af3c649c28237e/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d0ec018113f0447e0d2ce342d74dac8088467624bd6b1fa92af3c649c28237e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d0ec018113f0447e0d2ce342d74dac8088467624bd6b1fa92af3c649c28237e/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 13 02:59:19 np0005558241 nova_compute[248510]: 2025-12-13 07:59:19.536 248514 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 13 02:59:19 np0005558241 nova_compute[248510]: 2025-12-13 07:59:19.537 248514 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 13 02:59:19 np0005558241 nova_compute[248510]: 2025-12-13 07:59:19.537 248514 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec 13 02:59:19 np0005558241 nova_compute[248510]: 2025-12-13 07:59:19.537 248514 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec 13 02:59:19 np0005558241 nova_compute[248510]: 2025-12-13 07:59:19.731 248514 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 02:59:19 np0005558241 nova_compute[248510]: 2025-12-13 07:59:19.760 248514 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 02:59:19 np0005558241 nova_compute[248510]: 2025-12-13 07:59:19.760 248514 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec 13 02:59:20 np0005558241 podman[248699]: 2025-12-13 07:59:20.020346268 +0000 UTC m=+1.240629873 container init 6d8eba2a8d3d1a911a625b70b6b080009da3bb2c03eb9d5fd8e6ee6514b263ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, container_name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm)
Dec 13 02:59:20 np0005558241 podman[248699]: 2025-12-13 07:59:20.036406912 +0000 UTC m=+1.256690487 container start 6d8eba2a8d3d1a911a625b70b6b080009da3bb2c03eb9d5fd8e6ee6514b263ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 02:59:20 np0005558241 nova_compute_init[248724]: INFO:nova_statedir:Applying nova statedir ownership
Dec 13 02:59:20 np0005558241 nova_compute_init[248724]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 13 02:59:20 np0005558241 nova_compute_init[248724]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 13 02:59:20 np0005558241 nova_compute_init[248724]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 13 02:59:20 np0005558241 nova_compute_init[248724]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 13 02:59:20 np0005558241 nova_compute_init[248724]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 13 02:59:20 np0005558241 nova_compute_init[248724]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 13 02:59:20 np0005558241 nova_compute_init[248724]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 13 02:59:20 np0005558241 nova_compute_init[248724]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 13 02:59:20 np0005558241 nova_compute_init[248724]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 13 02:59:20 np0005558241 nova_compute_init[248724]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 13 02:59:20 np0005558241 nova_compute_init[248724]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 13 02:59:20 np0005558241 nova_compute_init[248724]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 13 02:59:20 np0005558241 nova_compute_init[248724]: INFO:nova_statedir:Nova statedir ownership complete
Dec 13 02:59:20 np0005558241 systemd[1]: libpod-6d8eba2a8d3d1a911a625b70b6b080009da3bb2c03eb9d5fd8e6ee6514b263ac.scope: Deactivated successfully.
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 02:59:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 02:59:20 np0005558241 nova_compute[248510]: 2025-12-13 07:59:20.605 248514 INFO nova.virt.driver [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec 13 02:59:20 np0005558241 nova_compute[248510]: 2025-12-13 07:59:20.763 248514 INFO nova.compute.provider_config [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec 13 02:59:20 np0005558241 python3.9[248674]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 13 02:59:20 np0005558241 podman[248736]: 2025-12-13 07:59:20.852148106 +0000 UTC m=+0.032587971 container died 6d8eba2a8d3d1a911a625b70b6b080009da3bb2c03eb9d5fd8e6ee6514b263ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 02:59:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:59:21 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2d0ec018113f0447e0d2ce342d74dac8088467624bd6b1fa92af3c649c28237e-merged.mount: Deactivated successfully.
Dec 13 02:59:21 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d8eba2a8d3d1a911a625b70b6b080009da3bb2c03eb9d5fd8e6ee6514b263ac-userdata-shm.mount: Deactivated successfully.
Dec 13 02:59:22 np0005558241 systemd[1]: session-53.scope: Deactivated successfully.
Dec 13 02:59:22 np0005558241 systemd[1]: session-53.scope: Consumed 2min 29.284s CPU time.
Dec 13 02:59:22 np0005558241 systemd-logind[787]: Session 53 logged out. Waiting for processes to exit.
Dec 13 02:59:22 np0005558241 systemd-logind[787]: Removed session 53.
Dec 13 02:59:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:22 np0005558241 podman[248725]: 2025-12-13 07:59:22.843698761 +0000 UTC m=+2.731559022 container cleanup 6d8eba2a8d3d1a911a625b70b6b080009da3bb2c03eb9d5fd8e6ee6514b263ac (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 02:59:22 np0005558241 systemd[1]: libpod-conmon-6d8eba2a8d3d1a911a625b70b6b080009da3bb2c03eb9d5fd8e6ee6514b263ac.scope: Deactivated successfully.
Dec 13 02:59:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.232 248514 DEBUG oslo_concurrency.lockutils [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.234 248514 DEBUG oslo_concurrency.lockutils [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.234 248514 DEBUG oslo_concurrency.lockutils [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.235 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.235 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.235 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.235 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.235 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.236 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.236 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.236 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.236 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.236 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.237 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.237 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.237 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.237 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.237 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.237 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.238 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.238 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.238 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.238 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.238 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.239 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.239 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.239 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.239 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.239 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.240 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.240 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.240 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.240 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.241 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.241 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.241 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.241 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.241 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.241 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.242 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.242 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.242 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.242 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.242 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.243 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.243 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.243 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.243 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.244 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.244 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.244 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.244 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.244 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.245 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.245 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.245 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.245 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.245 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.246 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.246 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.246 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.246 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.246 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.246 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.246 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.247 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.247 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.247 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.247 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.247 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.247 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.247 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.248 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.248 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.248 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.248 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.248 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.248 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.249 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.249 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.249 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.249 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.250 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.250 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.250 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.250 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.250 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.250 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.251 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.251 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.251 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.251 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.251 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.251 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.252 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.252 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.252 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.252 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.252 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.252 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.253 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.253 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.253 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.253 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.253 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.253 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.254 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.254 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.254 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.254 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.254 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.254 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.254 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.255 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.255 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.255 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.255 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.255 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.255 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.256 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.256 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.256 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.256 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.256 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.256 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.256 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.257 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.257 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.257 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.257 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.257 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.257 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.257 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.258 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.258 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.258 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.258 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.258 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.258 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.259 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.259 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.259 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.259 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.259 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.259 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.259 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.260 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.260 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.260 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.260 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.260 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.260 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.261 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.261 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.261 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.261 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.261 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.261 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.261 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.262 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.262 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.262 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.262 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.262 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.262 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.263 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.263 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.263 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.263 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.263 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.263 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.264 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.264 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.264 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.264 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.264 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.264 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.265 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.265 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.265 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.265 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.265 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.266 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.266 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.266 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.266 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.266 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.266 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.267 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.267 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.267 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.267 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.267 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.268 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.268 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.268 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.268 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.268 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.268 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.268 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.269 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.269 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.269 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.269 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.269 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.269 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.270 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.270 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.270 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.270 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.270 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.270 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.270 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.271 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.271 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.271 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.271 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.271 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.272 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.272 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.272 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.272 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.272 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.272 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.272 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.273 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.273 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.273 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.273 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.273 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.273 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.273 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.274 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.274 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.274 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.274 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.274 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.274 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.275 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.275 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.275 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.275 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.275 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.275 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.275 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.276 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.276 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.276 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.276 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.277 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.277 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.277 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.277 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.277 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.277 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.278 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.278 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.278 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.278 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.278 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.278 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.279 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.279 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.279 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.279 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.279 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.279 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.280 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.280 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.280 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.280 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.280 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.280 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.280 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.281 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.281 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.281 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.281 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.281 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.281 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.282 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.282 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.282 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.282 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.282 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.282 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.282 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.283 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.283 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.283 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.283 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.283 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.283 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.283 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.284 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.284 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.284 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.284 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.284 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.284 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.285 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.285 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.285 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.285 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.285 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.285 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.286 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.286 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.286 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.286 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.286 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.286 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.286 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.287 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.287 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.287 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.287 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.287 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.287 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.288 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.288 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.288 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.288 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.288 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.288 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.288 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.289 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.289 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.289 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.289 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.289 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.289 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.289 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.290 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.290 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.290 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.290 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.290 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.290 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.290 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.291 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.291 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.291 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.291 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.291 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.291 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.292 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.292 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.292 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.292 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.292 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.292 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.293 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.293 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.293 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.293 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.293 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.293 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.294 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.294 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.294 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.294 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.294 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.295 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.295 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.295 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.295 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.295 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.295 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.295 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.296 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.296 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.296 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.296 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.296 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.296 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.296 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.297 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.297 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.297 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.297 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.297 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.297 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.298 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.298 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.298 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.298 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.298 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.299 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.299 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.299 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.299 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.299 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.300 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.300 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.300 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.300 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.300 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.301 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.301 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.301 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.301 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.301 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.301 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.302 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.302 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.302 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.302 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.302 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.302 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.303 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.303 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.303 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.303 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.303 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.303 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.304 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.304 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.304 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.304 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.304 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.304 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.305 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.305 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.305 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.305 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.305 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.305 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.306 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.306 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.306 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.306 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.306 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.307 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.307 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.307 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.307 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.307 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.307 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.308 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.308 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.308 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.308 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.308 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.309 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.309 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.309 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.309 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.309 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.309 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.310 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.310 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.310 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.310 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.310 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.311 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.311 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.311 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.311 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.311 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.312 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.312 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.312 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.312 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.312 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.313 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.313 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.313 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.313 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.313 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.313 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.314 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.314 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.314 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.314 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.314 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.314 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.315 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.315 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.315 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.315 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.315 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.316 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.316 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.316 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.316 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.316 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.316 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.317 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.317 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.317 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.317 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.317 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.317 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.318 248514 WARNING oslo_config.cfg [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 13 02:59:24 np0005558241 nova_compute[248510]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 13 02:59:24 np0005558241 nova_compute[248510]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 13 02:59:24 np0005558241 nova_compute[248510]: and ``live_migration_inbound_addr`` respectively.
Dec 13 02:59:24 np0005558241 nova_compute[248510]: ).  Its value may be silently ignored in the future.#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.318 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.318 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.318 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.319 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.319 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.319 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.319 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.319 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.320 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.320 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.320 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.320 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.320 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.321 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.321 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.321 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.321 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.321 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.321 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.rbd_secret_uuid        = 18ee9de6-e00b-571b-ab9b-b7aab06174df log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.322 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.322 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.322 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.322 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.322 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.322 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.323 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.323 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.323 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.323 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.323 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.323 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.324 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.324 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.324 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.324 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.324 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.324 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.325 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.325 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.325 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.325 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.325 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.325 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.325 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.326 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.326 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.326 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.326 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.326 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.326 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.327 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.327 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.327 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.327 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.327 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.327 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.327 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.328 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.328 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.328 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.328 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.328 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.328 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.328 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.329 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.329 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.329 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.329 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.329 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.329 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.329 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.330 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.330 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.330 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.330 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.330 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.330 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.330 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.331 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.331 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.331 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.331 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.331 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.331 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.331 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.332 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.332 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.332 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.332 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.332 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.332 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.333 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.333 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.333 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.333 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.333 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.333 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.333 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.334 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.334 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.334 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.334 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.334 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.334 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.334 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.334 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.335 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.335 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.335 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.335 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.335 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.335 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.336 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.336 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.336 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.336 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.336 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.336 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.337 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.337 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.337 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.337 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.337 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.337 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.337 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.338 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.338 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.338 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.338 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.338 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.338 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.339 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.339 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.339 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.339 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.339 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.339 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.340 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.340 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.340 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.340 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.340 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.341 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.341 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.341 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.341 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.341 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.341 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.342 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.342 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.342 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.342 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.342 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.342 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.343 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.343 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.343 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.343 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.343 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.343 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.344 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.344 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.344 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.344 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.344 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.345 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.345 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.345 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.345 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.345 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.345 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.345 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.346 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.346 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.346 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.346 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.346 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.346 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.347 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.347 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.347 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.347 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.347 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.348 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.348 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.348 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.348 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.348 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.348 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.349 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.349 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.349 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.349 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.349 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.349 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.350 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.350 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.350 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.350 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.350 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.350 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.351 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.351 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.351 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.351 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.351 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.351 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.351 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.352 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.352 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.352 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.352 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.352 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.352 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.352 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.353 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.353 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.353 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.353 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.353 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.353 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.353 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.354 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.354 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.354 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.354 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.354 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.354 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.354 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.355 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.355 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.355 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.355 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.355 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.355 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.355 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.356 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.356 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.356 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.356 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.356 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.356 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.356 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.357 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.357 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.357 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.357 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.357 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.357 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.358 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.358 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.358 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.358 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.358 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.358 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.359 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.359 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.359 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.359 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.359 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.359 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.360 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.360 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.360 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.360 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.360 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.360 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.360 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.361 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.361 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.361 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.361 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.361 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.361 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.361 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.362 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.362 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.362 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.362 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.362 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.362 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.362 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.363 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.363 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.363 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.363 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.363 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.363 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.364 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.364 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.364 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.364 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.364 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.364 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.365 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.365 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.365 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.365 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.365 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.365 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.366 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.366 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.366 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.366 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.366 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.366 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.366 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.367 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.367 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.367 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.367 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.367 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.367 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.368 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.368 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.368 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.368 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.368 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.368 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.368 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.369 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.369 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.369 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.369 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.369 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.369 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.370 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.370 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.370 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.370 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.370 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.370 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.370 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.371 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.371 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.371 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.371 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.371 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.371 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.372 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.372 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.372 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.372 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.372 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.372 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.372 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.373 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.373 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.373 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.373 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.373 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.373 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.374 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.374 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.374 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.374 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.374 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.374 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.374 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.375 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.375 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.375 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.375 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.375 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.375 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.375 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.376 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.376 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.376 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.376 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.376 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.376 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.377 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.377 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.377 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.377 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.377 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.377 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.377 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.378 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.378 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.378 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.378 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.378 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.378 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.378 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.379 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.379 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.379 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.379 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.379 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.379 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.380 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.380 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.380 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.380 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.380 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.380 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.380 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.381 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.381 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.381 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.381 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.381 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.381 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.382 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.382 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.382 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.382 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.382 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.382 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.382 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.383 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.383 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.383 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.383 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.383 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.383 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.384 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.384 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.384 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.384 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.384 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.384 248514 DEBUG oslo_service.service [None req-2baf2e19-8dc3-43f4-bb36-90eab7a73d15 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.386 248514 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.673 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.674 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.674 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.674 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec 13 02:59:24 np0005558241 systemd[1]: Starting libvirt QEMU daemon...
Dec 13 02:59:24 np0005558241 systemd[1]: Started libvirt QEMU daemon.
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.808 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f2a42d45fd0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.814 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f2a42d45fd0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.815 248514 INFO nova.virt.libvirt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Connection event '1' reason 'None'#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.976 248514 WARNING nova.virt.libvirt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec 13 02:59:24 np0005558241 nova_compute[248510]: 2025-12-13 07:59:24.976 248514 DEBUG nova.virt.libvirt.volume.mount [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec 13 02:59:25 np0005558241 nova_compute[248510]: 2025-12-13 07:59:25.769 248514 INFO nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Libvirt host capabilities <capabilities>
Dec 13 02:59:25 np0005558241 nova_compute[248510]: 
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <host>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <uuid>44548dc0-241a-4777-9df7-ed38fb199087</uuid>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <cpu>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <arch>x86_64</arch>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model>EPYC-Rome-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <vendor>AMD</vendor>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <microcode version='16777317'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <signature family='23' model='49' stepping='0'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <maxphysaddr mode='emulate' bits='40'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='x2apic'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='tsc-deadline'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='osxsave'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='hypervisor'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='tsc_adjust'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='spec-ctrl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='stibp'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='arch-capabilities'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='ssbd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='cmp_legacy'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='topoext'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='virt-ssbd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='lbrv'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='tsc-scale'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='vmcb-clean'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='pause-filter'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='pfthreshold'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='svme-addr-chk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='rdctl-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='skip-l1dfl-vmentry'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='mds-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature name='pschange-mc-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <pages unit='KiB' size='4'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <pages unit='KiB' size='2048'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <pages unit='KiB' size='1048576'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </cpu>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <power_management>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <suspend_mem/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </power_management>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <iommu support='no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <migration_features>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <live/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <uri_transports>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <uri_transport>tcp</uri_transport>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <uri_transport>rdma</uri_transport>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </uri_transports>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </migration_features>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <topology>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <cells num='1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <cell id='0'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:          <memory unit='KiB'>7864308</memory>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:          <pages unit='KiB' size='4'>1966077</pages>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:          <pages unit='KiB' size='2048'>0</pages>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:          <pages unit='KiB' size='1048576'>0</pages>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:          <distances>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:            <sibling id='0' value='10'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:          </distances>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:          <cpus num='8'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:          </cpus>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        </cell>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </cells>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </topology>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <cache>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </cache>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <secmodel>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model>selinux</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <doi>0</doi>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </secmodel>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <secmodel>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model>dac</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <doi>0</doi>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </secmodel>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  </host>
Dec 13 02:59:25 np0005558241 nova_compute[248510]: 
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <guest>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <os_type>hvm</os_type>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <arch name='i686'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <wordsize>32</wordsize>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <domain type='qemu'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <domain type='kvm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </arch>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <features>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <pae/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <nonpae/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <acpi default='on' toggle='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <apic default='on' toggle='no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <cpuselection/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <deviceboot/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <disksnapshot default='on' toggle='no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <externalSnapshot/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </features>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  </guest>
Dec 13 02:59:25 np0005558241 nova_compute[248510]: 
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <guest>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <os_type>hvm</os_type>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <arch name='x86_64'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <wordsize>64</wordsize>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <domain type='qemu'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <domain type='kvm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </arch>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <features>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <acpi default='on' toggle='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <apic default='on' toggle='no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <cpuselection/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <deviceboot/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <disksnapshot default='on' toggle='no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <externalSnapshot/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </features>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  </guest>
Dec 13 02:59:25 np0005558241 nova_compute[248510]: 
Dec 13 02:59:25 np0005558241 nova_compute[248510]: </capabilities>
Dec 13 02:59:25 np0005558241 nova_compute[248510]: 
Dec 13 02:59:25 np0005558241 nova_compute[248510]: 2025-12-13 07:59:25.776 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 13 02:59:25 np0005558241 nova_compute[248510]: 2025-12-13 07:59:25.799 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 13 02:59:25 np0005558241 nova_compute[248510]: <domainCapabilities>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <path>/usr/libexec/qemu-kvm</path>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <domain>kvm</domain>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <machine>pc-q35-rhel9.8.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <arch>i686</arch>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <vcpu max='4096'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <iothreads supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <os supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <enum name='firmware'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <loader supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>rom</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>pflash</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='readonly'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>yes</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>no</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='secure'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>no</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </loader>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  </os>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <cpu>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <mode name='host-passthrough' supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='hostPassthroughMigratable'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>on</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>off</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </mode>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <mode name='maximum' supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='maximumMigratable'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>on</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>off</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </mode>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <mode name='host-model' supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model fallback='forbid'>EPYC-Rome</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <vendor>AMD</vendor>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='x2apic'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='tsc-deadline'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='hypervisor'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='tsc_adjust'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='spec-ctrl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='stibp'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='ssbd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='cmp_legacy'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='overflow-recov'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='succor'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='ibrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='amd-ssbd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='virt-ssbd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='lbrv'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='tsc-scale'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='vmcb-clean'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='flushbyasid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='pause-filter'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='pfthreshold'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='svme-addr-chk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='lfence-always-serializing'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='disable' name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </mode>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <mode name='custom' supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-noTSX'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-noTSX'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v5'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cooperlake'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cooperlake-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cooperlake-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Denverton'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mpx'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Denverton-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mpx'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Denverton-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Denverton-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Dhyana-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Genoa'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amd-psfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='auto-ibrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='no-nested-data-bp'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='null-sel-clr-base'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='stibp-always-on'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Genoa-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amd-psfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='auto-ibrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='no-nested-data-bp'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='null-sel-clr-base'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='stibp-always-on'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Milan'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Milan-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Milan-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amd-psfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='no-nested-data-bp'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='null-sel-clr-base'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='stibp-always-on'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Rome'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Rome-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Rome-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Rome-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='GraniteRapids'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='prefetchiti'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='GraniteRapids-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='prefetchiti'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='GraniteRapids-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx10'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx10-128'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx10-256'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx10-512'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='prefetchiti'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-noTSX'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-noTSX-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-noTSX'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v5'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v6'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v7'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='IvyBridge'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='IvyBridge-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='IvyBridge-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='IvyBridge-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='KnightsMill'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-4fmaps'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-4vnniw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512er'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512pf'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='KnightsMill-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-4fmaps'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-4vnniw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512er'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512pf'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Opteron_G4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fma4'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xop'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Opteron_G4-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fma4'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xop'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Opteron_G5'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fma4'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tbm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xop'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Opteron_G5-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fma4'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tbm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xop'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='SapphireRapids'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='SapphireRapids-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='SapphireRapids-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='SapphireRapids-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='SierraForest'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-ne-convert'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cmpccxadd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='SierraForest-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-ne-convert'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cmpccxadd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v5'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Snowridge'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='core-capability'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mpx'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='split-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Snowridge-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='core-capability'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mpx'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='split-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Snowridge-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='core-capability'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='split-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Snowridge-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='core-capability'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='split-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Snowridge-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='athlon'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnow'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnowext'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='athlon-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnow'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnowext'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='core2duo'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='core2duo-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='coreduo'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='coreduo-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='n270'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='n270-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='phenom'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnow'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnowext'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='phenom-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnow'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnowext'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </mode>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <memoryBacking supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <enum name='sourceType'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <value>file</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <value>anonymous</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <value>memfd</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  </memoryBacking>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <devices>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <disk supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='diskDevice'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>disk</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>cdrom</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>floppy</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>lun</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='bus'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>fdc</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>scsi</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>usb</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>sata</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='model'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio-transitional</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio-non-transitional</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </disk>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <graphics supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>vnc</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>egl-headless</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>dbus</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <video supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='modelType'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>vga</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>cirrus</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>none</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>bochs</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>ramfb</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </video>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <hostdev supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='mode'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>subsystem</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='startupPolicy'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>default</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>mandatory</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>requisite</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>optional</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='subsysType'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>usb</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>pci</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>scsi</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='capsType'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='pciBackend'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </hostdev>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <rng supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='model'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio-transitional</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio-non-transitional</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='backendModel'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>random</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>egd</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>builtin</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </rng>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <filesystem supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='driverType'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>path</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>handle</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtiofs</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </filesystem>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <tpm supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='model'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>tpm-tis</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>tpm-crb</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='backendModel'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>emulator</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>external</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='backendVersion'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>2.0</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </tpm>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <redirdev supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='bus'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>usb</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </redirdev>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <channel supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>pty</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>unix</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </channel>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <crypto supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='model'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>qemu</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='backendModel'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>builtin</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </crypto>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <interface supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='backendType'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>default</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>passt</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </interface>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <panic supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='model'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>isa</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>hyperv</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </panic>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <console supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>null</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>vc</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>pty</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>dev</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>file</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>pipe</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>stdio</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>udp</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>tcp</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>unix</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>qemu-vdagent</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>dbus</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </console>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  </devices>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <features>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <gic supported='no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <vmcoreinfo supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <genid supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <backingStoreInput supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <backup supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <async-teardown supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <ps2 supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <sev supported='no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <sgx supported='no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <hyperv supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='features'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>relaxed</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>vapic</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>spinlocks</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>vpindex</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>runtime</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>synic</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>stimer</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>reset</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>vendor_id</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>frequencies</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>reenlightenment</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>tlbflush</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>ipi</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>avic</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>emsr_bitmap</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>xmm_input</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <defaults>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <spinlocks>4095</spinlocks>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <stimer_direct>on</stimer_direct>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <tlbflush_direct>on</tlbflush_direct>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <tlbflush_extended>on</tlbflush_extended>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </defaults>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </hyperv>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <launchSecurity supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='sectype'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>tdx</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </launchSecurity>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  </features>
Dec 13 02:59:25 np0005558241 nova_compute[248510]: </domainCapabilities>
Dec 13 02:59:25 np0005558241 nova_compute[248510]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 13 02:59:25 np0005558241 nova_compute[248510]: 2025-12-13 07:59:25.806 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 13 02:59:25 np0005558241 nova_compute[248510]: <domainCapabilities>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <path>/usr/libexec/qemu-kvm</path>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <domain>kvm</domain>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <arch>i686</arch>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <vcpu max='240'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <iothreads supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <os supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <enum name='firmware'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <loader supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>rom</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>pflash</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='readonly'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>yes</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>no</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='secure'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>no</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </loader>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  </os>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <cpu>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <mode name='host-passthrough' supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='hostPassthroughMigratable'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>on</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>off</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </mode>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <mode name='maximum' supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='maximumMigratable'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>on</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>off</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </mode>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <mode name='host-model' supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model fallback='forbid'>EPYC-Rome</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <vendor>AMD</vendor>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='x2apic'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='tsc-deadline'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='hypervisor'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='tsc_adjust'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='spec-ctrl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='stibp'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='ssbd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='cmp_legacy'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='overflow-recov'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='succor'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='ibrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='amd-ssbd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='virt-ssbd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='lbrv'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='tsc-scale'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='vmcb-clean'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='flushbyasid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='pause-filter'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='pfthreshold'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='svme-addr-chk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='lfence-always-serializing'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='disable' name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </mode>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <mode name='custom' supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-noTSX'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-noTSX'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v5'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cooperlake'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cooperlake-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cooperlake-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Denverton'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mpx'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Denverton-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mpx'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Denverton-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Denverton-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Dhyana-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Genoa'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amd-psfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='auto-ibrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='no-nested-data-bp'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='null-sel-clr-base'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='stibp-always-on'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Genoa-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amd-psfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='auto-ibrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='no-nested-data-bp'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='null-sel-clr-base'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='stibp-always-on'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Milan'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Milan-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Milan-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amd-psfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='no-nested-data-bp'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='null-sel-clr-base'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='stibp-always-on'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Rome'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Rome-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Rome-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Rome-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='GraniteRapids'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='prefetchiti'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='GraniteRapids-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='prefetchiti'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='GraniteRapids-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx10'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx10-128'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx10-256'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx10-512'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='prefetchiti'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-noTSX'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-noTSX-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-noTSX'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v5'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v6'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v7'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='IvyBridge'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='IvyBridge-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='IvyBridge-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='IvyBridge-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='KnightsMill'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-4fmaps'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-4vnniw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512er'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512pf'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='KnightsMill-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-4fmaps'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-4vnniw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512er'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512pf'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Opteron_G4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fma4'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xop'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Opteron_G4-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fma4'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xop'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Opteron_G5'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fma4'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tbm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xop'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Opteron_G5-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fma4'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tbm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xop'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='SapphireRapids'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='SapphireRapids-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='SapphireRapids-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='SapphireRapids-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='SierraForest'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-ne-convert'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cmpccxadd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='SierraForest-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-ne-convert'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cmpccxadd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v5'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Snowridge'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='core-capability'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mpx'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='split-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Snowridge-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='core-capability'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mpx'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='split-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Snowridge-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='core-capability'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='split-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Snowridge-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='core-capability'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='split-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Snowridge-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='athlon'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnow'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnowext'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='athlon-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnow'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnowext'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='core2duo'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='core2duo-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='coreduo'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='coreduo-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='n270'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='n270-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='phenom'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnow'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnowext'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='phenom-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnow'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnowext'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </mode>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <memoryBacking supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <enum name='sourceType'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <value>file</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <value>anonymous</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <value>memfd</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  </memoryBacking>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <devices>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <disk supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='diskDevice'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>disk</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>cdrom</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>floppy</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>lun</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='bus'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>ide</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>fdc</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>scsi</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>usb</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>sata</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='model'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio-transitional</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio-non-transitional</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </disk>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <graphics supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>vnc</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>egl-headless</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>dbus</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <video supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='modelType'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>vga</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>cirrus</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>none</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>bochs</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>ramfb</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </video>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <hostdev supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='mode'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>subsystem</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='startupPolicy'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>default</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>mandatory</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>requisite</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>optional</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='subsysType'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>usb</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>pci</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>scsi</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='capsType'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='pciBackend'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </hostdev>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <rng supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='model'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio-transitional</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio-non-transitional</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='backendModel'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>random</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>egd</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>builtin</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </rng>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <filesystem supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='driverType'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>path</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>handle</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtiofs</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </filesystem>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <tpm supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='model'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>tpm-tis</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>tpm-crb</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='backendModel'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>emulator</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>external</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='backendVersion'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>2.0</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </tpm>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <redirdev supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='bus'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>usb</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </redirdev>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <channel supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>pty</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>unix</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </channel>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <crypto supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='model'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>qemu</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='backendModel'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>builtin</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </crypto>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <interface supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='backendType'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>default</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>passt</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </interface>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <panic supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='model'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>isa</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>hyperv</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </panic>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <console supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>null</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>vc</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>pty</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>dev</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>file</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>pipe</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>stdio</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>udp</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>tcp</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>unix</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>qemu-vdagent</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>dbus</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </console>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  </devices>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <features>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <gic supported='no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <vmcoreinfo supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <genid supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <backingStoreInput supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <backup supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <async-teardown supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <ps2 supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <sev supported='no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <sgx supported='no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <hyperv supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='features'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>relaxed</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>vapic</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>spinlocks</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>vpindex</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>runtime</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>synic</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>stimer</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>reset</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>vendor_id</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>frequencies</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>reenlightenment</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>tlbflush</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>ipi</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>avic</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>emsr_bitmap</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>xmm_input</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <defaults>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <spinlocks>4095</spinlocks>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <stimer_direct>on</stimer_direct>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <tlbflush_direct>on</tlbflush_direct>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <tlbflush_extended>on</tlbflush_extended>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </defaults>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </hyperv>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <launchSecurity supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='sectype'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>tdx</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </launchSecurity>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  </features>
Dec 13 02:59:25 np0005558241 nova_compute[248510]: </domainCapabilities>
Dec 13 02:59:25 np0005558241 nova_compute[248510]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec 13 02:59:25 np0005558241 nova_compute[248510]: 2025-12-13 07:59:25.841 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec 13 02:59:25 np0005558241 nova_compute[248510]: 2025-12-13 07:59:25.845 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 13 02:59:25 np0005558241 nova_compute[248510]: <domainCapabilities>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <path>/usr/libexec/qemu-kvm</path>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <domain>kvm</domain>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <machine>pc-q35-rhel9.8.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <arch>x86_64</arch>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <vcpu max='4096'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <iothreads supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <os supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <enum name='firmware'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <value>efi</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <loader supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>rom</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>pflash</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='readonly'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>yes</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>no</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='secure'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>yes</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>no</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </loader>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  </os>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <cpu>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <mode name='host-passthrough' supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='hostPassthroughMigratable'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>on</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>off</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </mode>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <mode name='maximum' supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='maximumMigratable'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>on</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>off</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </mode>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <mode name='host-model' supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model fallback='forbid'>EPYC-Rome</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <vendor>AMD</vendor>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='x2apic'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='tsc-deadline'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='hypervisor'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='tsc_adjust'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='spec-ctrl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='stibp'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='ssbd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='cmp_legacy'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='overflow-recov'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='succor'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='ibrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='amd-ssbd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='virt-ssbd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='lbrv'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='tsc-scale'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='vmcb-clean'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='flushbyasid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='pause-filter'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='pfthreshold'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='svme-addr-chk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='lfence-always-serializing'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='disable' name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </mode>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <mode name='custom' supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-noTSX'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-noTSX'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v5'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cooperlake'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cooperlake-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cooperlake-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Denverton'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mpx'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Denverton-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mpx'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Denverton-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Denverton-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Dhyana-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Genoa'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amd-psfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='auto-ibrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='no-nested-data-bp'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='null-sel-clr-base'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='stibp-always-on'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Genoa-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amd-psfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='auto-ibrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='no-nested-data-bp'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='null-sel-clr-base'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='stibp-always-on'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Milan'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Milan-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Milan-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amd-psfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='no-nested-data-bp'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='null-sel-clr-base'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='stibp-always-on'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Rome'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Rome-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Rome-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Rome-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='EPYC-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='GraniteRapids'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='prefetchiti'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='GraniteRapids-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='prefetchiti'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='GraniteRapids-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx10'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx10-128'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx10-256'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx10-512'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='prefetchiti'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-noTSX'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-noTSX-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Haswell-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-noTSX'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v5'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v6'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v7'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='IvyBridge'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='IvyBridge-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='IvyBridge-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='IvyBridge-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='KnightsMill'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-4fmaps'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-4vnniw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512er'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512pf'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='KnightsMill-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-4fmaps'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-4vnniw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512er'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512pf'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Opteron_G4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fma4'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xop'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Opteron_G4-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fma4'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xop'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Opteron_G5'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fma4'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tbm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xop'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Opteron_G5-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fma4'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tbm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xop'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='SapphireRapids'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='SapphireRapids-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='SapphireRapids-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='SapphireRapids-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='SierraForest'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-ne-convert'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cmpccxadd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='SierraForest-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-ifma'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-ne-convert'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx-vnni-int8'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cmpccxadd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v5'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Snowridge'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='core-capability'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mpx'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='split-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Snowridge-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='core-capability'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='mpx'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='split-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Snowridge-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='core-capability'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='split-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Snowridge-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='core-capability'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='split-lock-detect'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Snowridge-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='athlon'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnow'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnowext'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='athlon-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnow'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnowext'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='core2duo'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='core2duo-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='coreduo'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='coreduo-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='n270'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='n270-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='phenom'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnow'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnowext'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='phenom-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnow'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='3dnowext'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </mode>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <memoryBacking supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <enum name='sourceType'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <value>file</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <value>anonymous</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <value>memfd</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  </memoryBacking>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <devices>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <disk supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='diskDevice'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>disk</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>cdrom</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>floppy</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>lun</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='bus'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>fdc</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>scsi</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>usb</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>sata</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='model'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio-transitional</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio-non-transitional</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </disk>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <graphics supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>vnc</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>egl-headless</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>dbus</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <video supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='modelType'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>vga</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>cirrus</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>none</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>bochs</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>ramfb</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </video>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <hostdev supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='mode'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>subsystem</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='startupPolicy'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>default</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>mandatory</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>requisite</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>optional</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='subsysType'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>usb</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>pci</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>scsi</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='capsType'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='pciBackend'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </hostdev>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <rng supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='model'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio-transitional</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtio-non-transitional</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='backendModel'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>random</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>egd</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>builtin</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </rng>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <filesystem supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='driverType'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>path</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>handle</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>virtiofs</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </filesystem>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <tpm supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='model'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>tpm-tis</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>tpm-crb</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='backendModel'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>emulator</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>external</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='backendVersion'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>2.0</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </tpm>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <redirdev supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='bus'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>usb</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </redirdev>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <channel supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>pty</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>unix</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </channel>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <crypto supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='model'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>qemu</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='backendModel'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>builtin</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </crypto>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <interface supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='backendType'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>default</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>passt</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </interface>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <panic supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='model'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>isa</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>hyperv</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </panic>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <console supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>null</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>vc</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>pty</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>dev</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>file</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>pipe</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>stdio</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>udp</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>tcp</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>unix</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>qemu-vdagent</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>dbus</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </console>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  </devices>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <features>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <gic supported='no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <vmcoreinfo supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <genid supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <backingStoreInput supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <backup supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <async-teardown supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <ps2 supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <sev supported='no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <sgx supported='no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <hyperv supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='features'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>relaxed</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>vapic</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>spinlocks</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>vpindex</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>runtime</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>synic</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>stimer</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>reset</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>vendor_id</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>frequencies</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>reenlightenment</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>tlbflush</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>ipi</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>avic</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>emsr_bitmap</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>xmm_input</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <defaults>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <spinlocks>4095</spinlocks>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <stimer_direct>on</stimer_direct>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <tlbflush_direct>on</tlbflush_direct>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <tlbflush_extended>on</tlbflush_extended>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </defaults>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </hyperv>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <launchSecurity supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='sectype'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>tdx</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </launchSecurity>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  </features>
Dec 13 02:59:25 np0005558241 nova_compute[248510]: </domainCapabilities>
Dec 13 02:59:25 np0005558241 nova_compute[248510]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec 13 02:59:25 np0005558241 nova_compute[248510]: 2025-12-13 07:59:25.902 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 13 02:59:25 np0005558241 nova_compute[248510]: <domainCapabilities>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <path>/usr/libexec/qemu-kvm</path>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <domain>kvm</domain>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <arch>x86_64</arch>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <vcpu max='240'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <iothreads supported='yes'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <os supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <enum name='firmware'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <loader supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>rom</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>pflash</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='readonly'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>yes</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>no</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='secure'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>no</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </loader>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  </os>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:  <cpu>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <mode name='host-passthrough' supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='hostPassthroughMigratable'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>on</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>off</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </mode>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <mode name='maximum' supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <enum name='maximumMigratable'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>on</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <value>off</value>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </mode>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <mode name='host-model' supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model fallback='forbid'>EPYC-Rome</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <vendor>AMD</vendor>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='x2apic'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='tsc-deadline'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='hypervisor'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='tsc_adjust'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='spec-ctrl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='stibp'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='ssbd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='cmp_legacy'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='overflow-recov'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='succor'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='ibrs'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='amd-ssbd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='virt-ssbd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='lbrv'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='tsc-scale'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='vmcb-clean'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='flushbyasid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='pause-filter'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='pfthreshold'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='svme-addr-chk'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='require' name='lfence-always-serializing'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <feature policy='disable' name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    </mode>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:    <mode name='custom' supported='yes'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-noTSX'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Broadwell-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-noTSX'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v1'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v2'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v3'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v4'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cascadelake-Server-v5'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      <blockers model='Cooperlake'>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:25 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Cooperlake-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Cooperlake-v2'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Denverton'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='mpx'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Denverton-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='mpx'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Denverton-v2'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Denverton-v3'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Dhyana-v2'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Genoa'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amd-psfd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='auto-ibrs'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='no-nested-data-bp'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='null-sel-clr-base'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='stibp-always-on'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Genoa-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amd-psfd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='auto-ibrs'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='no-nested-data-bp'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='null-sel-clr-base'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='stibp-always-on'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Milan'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Milan-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Milan-v2'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amd-psfd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='no-nested-data-bp'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='null-sel-clr-base'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='stibp-always-on'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Rome'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Rome-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Rome-v2'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='EPYC-Rome-v3'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='EPYC-v3'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='EPYC-v4'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='GraniteRapids'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-fp16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='prefetchiti'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='GraniteRapids-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-fp16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='prefetchiti'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='GraniteRapids-v2'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-fp16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx10'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx10-128'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx10-256'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx10-512'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='prefetchiti'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Haswell'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Haswell-IBRS'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Haswell-noTSX'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Haswell-noTSX-IBRS'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Haswell-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Haswell-v2'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Haswell-v3'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Haswell-v4'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-noTSX'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v2'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v3'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v4'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v5'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v6'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Icelake-Server-v7'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='IvyBridge'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='IvyBridge-IBRS'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='IvyBridge-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='IvyBridge-v2'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='KnightsMill'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-4fmaps'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-4vnniw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512er'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512pf'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='KnightsMill-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-4fmaps'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-4vnniw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512er'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512pf'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Opteron_G4'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fma4'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xop'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Opteron_G4-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fma4'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xop'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Opteron_G5'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fma4'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='tbm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xop'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Opteron_G5-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fma4'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='tbm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xop'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='SapphireRapids'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='SapphireRapids-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='SapphireRapids-v2'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='SapphireRapids-v3'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-bf16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-int8'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='amx-tile'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-bf16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-fp16'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512-vpopcntdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bitalg'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512ifma'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vbmi2'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrc'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fzrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='la57'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='taa-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='tsx-ldtrk'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xfd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='SierraForest'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx-ifma'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx-ne-convert'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx-vnni-int8'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='cmpccxadd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='SierraForest-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx-ifma'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx-ne-convert'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx-vnni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx-vnni-int8'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='bus-lock-detect'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='cmpccxadd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fbsdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='fsrs'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ibrs-all'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='mcdt-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pbrsb-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='psdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='sbdr-ssdp-no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='serialize'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vaes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='vpclmulqdq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-IBRS'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-v2'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-v3'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Client-v4'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-IBRS'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v2'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='hle'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='rtm'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v3'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v4'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Skylake-Server-v5'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512bw'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512cd'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512dq'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512f'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='avx512vl'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='invpcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pcid'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='pku'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Snowridge'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='core-capability'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='mpx'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='split-lock-detect'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Snowridge-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='core-capability'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='mpx'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='split-lock-detect'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Snowridge-v2'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='core-capability'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='split-lock-detect'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Snowridge-v3'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='core-capability'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='split-lock-detect'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='Snowridge-v4'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='cldemote'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='erms'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='gfni'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='movdir64b'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='movdiri'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='xsaves'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='athlon'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='3dnow'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='3dnowext'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='athlon-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='3dnow'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='3dnowext'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='core2duo'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='core2duo-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='coreduo'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='coreduo-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='n270'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='n270-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='ss'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='phenom'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='3dnow'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='3dnowext'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <blockers model='phenom-v1'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='3dnow'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <feature name='3dnowext'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </blockers>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    </mode>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:  <memoryBacking supported='yes'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <enum name='sourceType'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <value>file</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <value>anonymous</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <value>memfd</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:  </memoryBacking>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:  <devices>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <disk supported='yes'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='diskDevice'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>disk</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>cdrom</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>floppy</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>lun</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='bus'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>ide</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>fdc</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>scsi</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>virtio</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>usb</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>sata</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='model'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>virtio</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>virtio-transitional</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>virtio-non-transitional</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    </disk>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <graphics supported='yes'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>vnc</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>egl-headless</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>dbus</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <video supported='yes'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='modelType'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>vga</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>cirrus</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>virtio</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>none</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>bochs</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>ramfb</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    </video>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <hostdev supported='yes'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='mode'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>subsystem</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='startupPolicy'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>default</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>mandatory</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>requisite</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>optional</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='subsysType'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>usb</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>pci</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>scsi</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='capsType'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='pciBackend'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    </hostdev>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <rng supported='yes'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='model'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>virtio</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>virtio-transitional</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>virtio-non-transitional</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='backendModel'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>random</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>egd</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>builtin</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    </rng>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <filesystem supported='yes'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='driverType'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>path</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>handle</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>virtiofs</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    </filesystem>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <tpm supported='yes'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='model'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>tpm-tis</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>tpm-crb</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='backendModel'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>emulator</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>external</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='backendVersion'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>2.0</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    </tpm>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <redirdev supported='yes'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='bus'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>usb</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    </redirdev>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <channel supported='yes'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>pty</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>unix</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    </channel>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <crypto supported='yes'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='model'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>qemu</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='backendModel'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>builtin</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    </crypto>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <interface supported='yes'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='backendType'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>default</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>passt</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    </interface>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <panic supported='yes'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='model'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>isa</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>hyperv</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    </panic>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <console supported='yes'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='type'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>null</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>vc</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>pty</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>dev</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>file</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>pipe</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>stdio</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>udp</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>tcp</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>unix</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>qemu-vdagent</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>dbus</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    </console>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:  </devices>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:  <features>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <gic supported='no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <vmcoreinfo supported='yes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <genid supported='yes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <backingStoreInput supported='yes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <backup supported='yes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <async-teardown supported='yes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <ps2 supported='yes'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <sev supported='no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <sgx supported='no'/>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <hyperv supported='yes'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='features'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>relaxed</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>vapic</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>spinlocks</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>vpindex</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>runtime</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>synic</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>stimer</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>reset</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>vendor_id</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>frequencies</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>reenlightenment</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>tlbflush</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>ipi</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>avic</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>emsr_bitmap</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>xmm_input</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <defaults>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <spinlocks>4095</spinlocks>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <stimer_direct>on</stimer_direct>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <tlbflush_direct>on</tlbflush_direct>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <tlbflush_extended>on</tlbflush_extended>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </defaults>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    </hyperv>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    <launchSecurity supported='yes'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      <enum name='sectype'>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:        <value>tdx</value>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:      </enum>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:    </launchSecurity>
Dec 13 02:59:26 np0005558241 nova_compute[248510]:  </features>
Dec 13 02:59:26 np0005558241 nova_compute[248510]: </domainCapabilities>
Dec 13 02:59:26 np0005558241 nova_compute[248510]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 13 02:59:26 np0005558241 nova_compute[248510]: 2025-12-13 07:59:25.965 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 13 02:59:26 np0005558241 nova_compute[248510]: 2025-12-13 07:59:25.965 248514 INFO nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Secure Boot support detected
Dec 13 02:59:26 np0005558241 nova_compute[248510]: 2025-12-13 07:59:25.968 248514 INFO nova.virt.libvirt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 13 02:59:26 np0005558241 nova_compute[248510]: 2025-12-13 07:59:25.968 248514 INFO nova.virt.libvirt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 13 02:59:26 np0005558241 nova_compute[248510]: 2025-12-13 07:59:25.978 248514 DEBUG nova.virt.libvirt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 13 02:59:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:59:26 np0005558241 nova_compute[248510]: 2025-12-13 07:59:26.865 248514 INFO nova.virt.node [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Determined node identity f581abbc-7737-4dd5-b64e-9a4263571fb3 from /var/lib/nova/compute_id#033[00m
Dec 13 02:59:26 np0005558241 nova_compute[248510]: 2025-12-13 07:59:26.968 248514 WARNING nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Compute nodes ['f581abbc-7737-4dd5-b64e-9a4263571fb3'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Dec 13 02:59:27 np0005558241 nova_compute[248510]: 2025-12-13 07:59:27.119 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec 13 02:59:27 np0005558241 nova_compute[248510]: 2025-12-13 07:59:27.179 248514 WARNING nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec 13 02:59:27 np0005558241 nova_compute[248510]: 2025-12-13 07:59:27.179 248514 DEBUG oslo_concurrency.lockutils [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:59:27 np0005558241 nova_compute[248510]: 2025-12-13 07:59:27.180 248514 DEBUG oslo_concurrency.lockutils [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:59:27 np0005558241 nova_compute[248510]: 2025-12-13 07:59:27.180 248514 DEBUG oslo_concurrency.lockutils [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:59:27 np0005558241 nova_compute[248510]: 2025-12-13 07:59:27.180 248514 DEBUG nova.compute.resource_tracker [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 02:59:27 np0005558241 nova_compute[248510]: 2025-12-13 07:59:27.181 248514 DEBUG oslo_concurrency.processutils [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 02:59:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 02:59:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/721311705' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 02:59:27 np0005558241 nova_compute[248510]: 2025-12-13 07:59:27.745 248514 DEBUG oslo_concurrency.processutils [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 02:59:27 np0005558241 systemd[1]: Starting libvirt nodedev daemon...
Dec 13 02:59:27 np0005558241 systemd[1]: Started libvirt nodedev daemon.
Dec 13 02:59:28 np0005558241 nova_compute[248510]: 2025-12-13 07:59:28.110 248514 WARNING nova.virt.libvirt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 02:59:28 np0005558241 nova_compute[248510]: 2025-12-13 07:59:28.112 248514 DEBUG nova.compute.resource_tracker [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5073MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 02:59:28 np0005558241 nova_compute[248510]: 2025-12-13 07:59:28.113 248514 DEBUG oslo_concurrency.lockutils [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:59:28 np0005558241 nova_compute[248510]: 2025-12-13 07:59:28.113 248514 DEBUG oslo_concurrency.lockutils [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:59:28 np0005558241 nova_compute[248510]: 2025-12-13 07:59:28.139 248514 WARNING nova.compute.resource_tracker [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] No compute node record for compute-0.ctlplane.example.com:f581abbc-7737-4dd5-b64e-9a4263571fb3: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host f581abbc-7737-4dd5-b64e-9a4263571fb3 could not be found.#033[00m
Dec 13 02:59:28 np0005558241 nova_compute[248510]: 2025-12-13 07:59:28.167 248514 INFO nova.compute.resource_tracker [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: f581abbc-7737-4dd5-b64e-9a4263571fb3#033[00m
Dec 13 02:59:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v972: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:28 np0005558241 nova_compute[248510]: 2025-12-13 07:59:28.250 248514 DEBUG nova.compute.resource_tracker [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 02:59:28 np0005558241 nova_compute[248510]: 2025-12-13 07:59:28.251 248514 DEBUG nova.compute.resource_tracker [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 02:59:29 np0005558241 nova_compute[248510]: 2025-12-13 07:59:29.708 248514 INFO nova.scheduler.client.report [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [req-cb1e82dc-e2c1-46ac-846a-01c3c256862e] Created resource provider record via placement API for resource provider with UUID f581abbc-7737-4dd5-b64e-9a4263571fb3 and name compute-0.ctlplane.example.com.#033[00m
Dec 13 02:59:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:30 np0005558241 nova_compute[248510]: 2025-12-13 07:59:30.319 248514 DEBUG oslo_concurrency.processutils [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 02:59:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 02:59:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2862749957' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 02:59:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:59:31 np0005558241 nova_compute[248510]: 2025-12-13 07:59:31.609 248514 DEBUG oslo_concurrency.processutils [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 02:59:31 np0005558241 nova_compute[248510]: 2025-12-13 07:59:31.615 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 13 02:59:31 np0005558241 nova_compute[248510]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Dec 13 02:59:31 np0005558241 nova_compute[248510]: 2025-12-13 07:59:31.615 248514 INFO nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] kernel doesn't support AMD SEV#033[00m
Dec 13 02:59:31 np0005558241 nova_compute[248510]: 2025-12-13 07:59:31.616 248514 DEBUG nova.compute.provider_tree [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 02:59:31 np0005558241 nova_compute[248510]: 2025-12-13 07:59:31.617 248514 DEBUG nova.virt.libvirt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 02:59:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:32 np0005558241 nova_compute[248510]: 2025-12-13 07:59:32.390 248514 DEBUG nova.scheduler.client.report [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Updated inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec 13 02:59:32 np0005558241 nova_compute[248510]: 2025-12-13 07:59:32.391 248514 DEBUG nova.compute.provider_tree [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Updating resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec 13 02:59:32 np0005558241 nova_compute[248510]: 2025-12-13 07:59:32.391 248514 DEBUG nova.compute.provider_tree [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 02:59:32 np0005558241 nova_compute[248510]: 2025-12-13 07:59:32.489 248514 DEBUG nova.compute.provider_tree [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Updating resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec 13 02:59:32 np0005558241 podman[248920]: 2025-12-13 07:59:32.976743063 +0000 UTC m=+0.059789347 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 02:59:32 np0005558241 podman[248919]: 2025-12-13 07:59:32.980589968 +0000 UTC m=+0.067300742 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec 13 02:59:33 np0005558241 podman[248918]: 2025-12-13 07:59:33.017533024 +0000 UTC m=+0.102243229 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 02:59:33 np0005558241 nova_compute[248510]: 2025-12-13 07:59:33.528 248514 DEBUG nova.compute.resource_tracker [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 02:59:33 np0005558241 nova_compute[248510]: 2025-12-13 07:59:33.529 248514 DEBUG oslo_concurrency.lockutils [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 5.416s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:59:33 np0005558241 nova_compute[248510]: 2025-12-13 07:59:33.529 248514 DEBUG nova.service [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Dec 13 02:59:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:35 np0005558241 nova_compute[248510]: 2025-12-13 07:59:35.219 248514 DEBUG nova.service [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Dec 13 02:59:35 np0005558241 nova_compute[248510]: 2025-12-13 07:59:35.220 248514 DEBUG nova.servicegroup.drivers.db [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Dec 13 02:59:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:59:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v977: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:59:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:59:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:59:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:59:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 02:59:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 02:59:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:59:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v979: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:43 np0005558241 nova_compute[248510]: 2025-12-13 07:59:43.222 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:59:43 np0005558241 nova_compute[248510]: 2025-12-13 07:59:43.376 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 02:59:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:59:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 02:59:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/624606011' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 02:59:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 02:59:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/624606011' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 02:59:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 02:59:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4176195286' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 02:59:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 02:59:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4176195286' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 02:59:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 02:59:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/576422114' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 02:59:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 02:59:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/576422114' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 02:59:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:59:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:59:55.373 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 02:59:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:59:55.374 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 02:59:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 07:59:55.374 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 02:59:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 02:59:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 02:59:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:00:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:03 np0005558241 podman[248984]: 2025-12-13 08:00:03.982028057 +0000 UTC m=+0.059469070 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 03:00:04 np0005558241 podman[248982]: 2025-12-13 08:00:04.005988074 +0000 UTC m=+0.090869679 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:00:04 np0005558241 podman[248983]: 2025-12-13 08:00:04.012281819 +0000 UTC m=+0.092503540 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:00:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:00:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:00:09
Dec 13 03:00:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:00:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:00:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', '.rgw.root', 'images', 'default.rgw.meta', 'volumes', 'vms', 'backups', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta']
Dec 13 03:00:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:00:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:00:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:00:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:00:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:00:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:00:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:00:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:00:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:00:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:00:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:00:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:00:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:00:09 np0005558241 podman[249187]: 2025-12-13 08:00:09.893511958 +0000 UTC m=+0.027936936 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:00:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:00:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:00:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:00:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:00:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:00:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:00:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:00:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:00:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:00:10 np0005558241 podman[249187]: 2025-12-13 08:00:10.227353605 +0000 UTC m=+0.361778573 container create 111e1166f61f775ea8b5b58fc5885092f8e2bfcb131ee40b393eb28332669a31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:00:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:00:10 np0005558241 systemd[1]: Started libpod-conmon-111e1166f61f775ea8b5b58fc5885092f8e2bfcb131ee40b393eb28332669a31.scope.
Dec 13 03:00:10 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:00:10 np0005558241 podman[249187]: 2025-12-13 08:00:10.523183469 +0000 UTC m=+0.657608467 container init 111e1166f61f775ea8b5b58fc5885092f8e2bfcb131ee40b393eb28332669a31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_clarke, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 03:00:10 np0005558241 podman[249187]: 2025-12-13 08:00:10.53382408 +0000 UTC m=+0.668249048 container start 111e1166f61f775ea8b5b58fc5885092f8e2bfcb131ee40b393eb28332669a31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_clarke, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 03:00:10 np0005558241 podman[249187]: 2025-12-13 08:00:10.539717894 +0000 UTC m=+0.674142862 container attach 111e1166f61f775ea8b5b58fc5885092f8e2bfcb131ee40b393eb28332669a31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_clarke, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 03:00:10 np0005558241 pedantic_clarke[249203]: 167 167
Dec 13 03:00:10 np0005558241 systemd[1]: libpod-111e1166f61f775ea8b5b58fc5885092f8e2bfcb131ee40b393eb28332669a31.scope: Deactivated successfully.
Dec 13 03:00:10 np0005558241 conmon[249203]: conmon 111e1166f61f775ea8b5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-111e1166f61f775ea8b5b58fc5885092f8e2bfcb131ee40b393eb28332669a31.scope/container/memory.events
Dec 13 03:00:10 np0005558241 podman[249187]: 2025-12-13 08:00:10.541948829 +0000 UTC m=+0.676373797 container died 111e1166f61f775ea8b5b58fc5885092f8e2bfcb131ee40b393eb28332669a31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_clarke, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:00:10 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a9b4d00d7f1431d17c2278ea5c5a5b6dc2a7a126460872450556ea7e8bbe8454-merged.mount: Deactivated successfully.
Dec 13 03:00:10 np0005558241 podman[249187]: 2025-12-13 08:00:10.594821505 +0000 UTC m=+0.729246473 container remove 111e1166f61f775ea8b5b58fc5885092f8e2bfcb131ee40b393eb28332669a31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:00:10 np0005558241 systemd[1]: libpod-conmon-111e1166f61f775ea8b5b58fc5885092f8e2bfcb131ee40b393eb28332669a31.scope: Deactivated successfully.
Dec 13 03:00:10 np0005558241 podman[249227]: 2025-12-13 08:00:10.77404554 +0000 UTC m=+0.050006657 container create 42e53a1d962d2be474b0c762c46eafa5dc8e8caa1a6d44fd623c64d8cff26b1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chandrasekhar, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:00:10 np0005558241 systemd[1]: Started libpod-conmon-42e53a1d962d2be474b0c762c46eafa5dc8e8caa1a6d44fd623c64d8cff26b1c.scope.
Dec 13 03:00:10 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:00:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95c0f6fc3b1422a604b274be795a82531759c6d09813057353c591590b73d7a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:00:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95c0f6fc3b1422a604b274be795a82531759c6d09813057353c591590b73d7a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:00:10 np0005558241 podman[249227]: 2025-12-13 08:00:10.751912458 +0000 UTC m=+0.027873585 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:00:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95c0f6fc3b1422a604b274be795a82531759c6d09813057353c591590b73d7a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:00:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95c0f6fc3b1422a604b274be795a82531759c6d09813057353c591590b73d7a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:00:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95c0f6fc3b1422a604b274be795a82531759c6d09813057353c591590b73d7a2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:00:10 np0005558241 podman[249227]: 2025-12-13 08:00:10.859701671 +0000 UTC m=+0.135662838 container init 42e53a1d962d2be474b0c762c46eafa5dc8e8caa1a6d44fd623c64d8cff26b1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chandrasekhar, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:00:10 np0005558241 podman[249227]: 2025-12-13 08:00:10.869973603 +0000 UTC m=+0.145934710 container start 42e53a1d962d2be474b0c762c46eafa5dc8e8caa1a6d44fd623c64d8cff26b1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chandrasekhar, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:00:10 np0005558241 podman[249227]: 2025-12-13 08:00:10.875271233 +0000 UTC m=+0.151232370 container attach 42e53a1d962d2be474b0c762c46eafa5dc8e8caa1a6d44fd623c64d8cff26b1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 03:00:11 np0005558241 unruffled_chandrasekhar[249244]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:00:11 np0005558241 unruffled_chandrasekhar[249244]: --> All data devices are unavailable
Dec 13 03:00:11 np0005558241 systemd[1]: libpod-42e53a1d962d2be474b0c762c46eafa5dc8e8caa1a6d44fd623c64d8cff26b1c.scope: Deactivated successfully.
Dec 13 03:00:11 np0005558241 podman[249227]: 2025-12-13 08:00:11.383154077 +0000 UTC m=+0.659115204 container died 42e53a1d962d2be474b0c762c46eafa5dc8e8caa1a6d44fd623c64d8cff26b1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:00:11 np0005558241 systemd[1]: var-lib-containers-storage-overlay-95c0f6fc3b1422a604b274be795a82531759c6d09813057353c591590b73d7a2-merged.mount: Deactivated successfully.
Dec 13 03:00:11 np0005558241 podman[249227]: 2025-12-13 08:00:11.439664033 +0000 UTC m=+0.715625140 container remove 42e53a1d962d2be474b0c762c46eafa5dc8e8caa1a6d44fd623c64d8cff26b1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_chandrasekhar, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 03:00:11 np0005558241 systemd[1]: libpod-conmon-42e53a1d962d2be474b0c762c46eafa5dc8e8caa1a6d44fd623c64d8cff26b1c.scope: Deactivated successfully.
Dec 13 03:00:11 np0005558241 podman[249337]: 2025-12-13 08:00:11.923957539 +0000 UTC m=+0.061116060 container create ef951663c55b5d8577c41c42a301fb6998df3d3daa7aa33f9176c6219a0387a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_wu, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:00:11 np0005558241 systemd[1]: Started libpod-conmon-ef951663c55b5d8577c41c42a301fb6998df3d3daa7aa33f9176c6219a0387a0.scope.
Dec 13 03:00:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:00:11 np0005558241 podman[249337]: 2025-12-13 08:00:11.888176211 +0000 UTC m=+0.025334762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:00:11 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:00:12 np0005558241 podman[249337]: 2025-12-13 08:00:12.005768645 +0000 UTC m=+0.142927196 container init ef951663c55b5d8577c41c42a301fb6998df3d3daa7aa33f9176c6219a0387a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:00:12 np0005558241 podman[249337]: 2025-12-13 08:00:12.014821227 +0000 UTC m=+0.151979748 container start ef951663c55b5d8577c41c42a301fb6998df3d3daa7aa33f9176c6219a0387a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_wu, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:00:12 np0005558241 podman[249337]: 2025-12-13 08:00:12.017969274 +0000 UTC m=+0.155127905 container attach ef951663c55b5d8577c41c42a301fb6998df3d3daa7aa33f9176c6219a0387a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:00:12 np0005558241 vibrant_wu[249353]: 167 167
Dec 13 03:00:12 np0005558241 systemd[1]: libpod-ef951663c55b5d8577c41c42a301fb6998df3d3daa7aa33f9176c6219a0387a0.scope: Deactivated successfully.
Dec 13 03:00:12 np0005558241 podman[249337]: 2025-12-13 08:00:12.020531247 +0000 UTC m=+0.157689768 container died ef951663c55b5d8577c41c42a301fb6998df3d3daa7aa33f9176c6219a0387a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 03:00:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4ee6438c3cc2f198a1dcec9944a3288468d5576e1839dab7bd193b1f7073d252-merged.mount: Deactivated successfully.
Dec 13 03:00:12 np0005558241 podman[249337]: 2025-12-13 08:00:12.060904996 +0000 UTC m=+0.198063517 container remove ef951663c55b5d8577c41c42a301fb6998df3d3daa7aa33f9176c6219a0387a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_wu, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:00:12 np0005558241 systemd[1]: libpod-conmon-ef951663c55b5d8577c41c42a301fb6998df3d3daa7aa33f9176c6219a0387a0.scope: Deactivated successfully.
Dec 13 03:00:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:12 np0005558241 podman[249375]: 2025-12-13 08:00:12.214523203 +0000 UTC m=+0.028346366 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:00:12 np0005558241 podman[249375]: 2025-12-13 08:00:12.562438235 +0000 UTC m=+0.376261388 container create 555e97c763575ab794efb582d50a0289d1c3c41a296a3fb31ce1ccd23d8f1817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_khayyam, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:00:12 np0005558241 systemd[1]: Started libpod-conmon-555e97c763575ab794efb582d50a0289d1c3c41a296a3fb31ce1ccd23d8f1817.scope.
Dec 13 03:00:12 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:00:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf9d339df9c51fba8ed14d5399785922832a6e4bebe53f51d0a537da68a05766/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:00:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf9d339df9c51fba8ed14d5399785922832a6e4bebe53f51d0a537da68a05766/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:00:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf9d339df9c51fba8ed14d5399785922832a6e4bebe53f51d0a537da68a05766/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:00:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf9d339df9c51fba8ed14d5399785922832a6e4bebe53f51d0a537da68a05766/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:00:12 np0005558241 podman[249375]: 2025-12-13 08:00:12.817367706 +0000 UTC m=+0.631190879 container init 555e97c763575ab794efb582d50a0289d1c3c41a296a3fb31ce1ccd23d8f1817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 03:00:12 np0005558241 podman[249375]: 2025-12-13 08:00:12.825732641 +0000 UTC m=+0.639555784 container start 555e97c763575ab794efb582d50a0289d1c3c41a296a3fb31ce1ccd23d8f1817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_khayyam, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:00:12 np0005558241 podman[249375]: 2025-12-13 08:00:12.831198725 +0000 UTC m=+0.645021878 container attach 555e97c763575ab794efb582d50a0289d1c3c41a296a3fb31ce1ccd23d8f1817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_khayyam, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]: {
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:    "0": [
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:        {
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "devices": [
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "/dev/loop3"
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            ],
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "lv_name": "ceph_lv0",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "lv_size": "21470642176",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "name": "ceph_lv0",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "tags": {
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.cluster_name": "ceph",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.crush_device_class": "",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.encrypted": "0",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.objectstore": "bluestore",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.osd_id": "0",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.type": "block",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.vdo": "0",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.with_tpm": "0"
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            },
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "type": "block",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "vg_name": "ceph_vg0"
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:        }
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:    ],
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:    "1": [
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:        {
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "devices": [
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "/dev/loop4"
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            ],
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "lv_name": "ceph_lv1",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "lv_size": "21470642176",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "name": "ceph_lv1",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "tags": {
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.cluster_name": "ceph",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.crush_device_class": "",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.encrypted": "0",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.objectstore": "bluestore",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.osd_id": "1",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.type": "block",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.vdo": "0",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.with_tpm": "0"
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            },
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "type": "block",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "vg_name": "ceph_vg1"
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:        }
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:    ],
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:    "2": [
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:        {
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "devices": [
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "/dev/loop5"
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            ],
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "lv_name": "ceph_lv2",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "lv_size": "21470642176",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "name": "ceph_lv2",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "tags": {
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.cluster_name": "ceph",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.crush_device_class": "",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.encrypted": "0",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.objectstore": "bluestore",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.osd_id": "2",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.type": "block",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.vdo": "0",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:                "ceph.with_tpm": "0"
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            },
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "type": "block",
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:            "vg_name": "ceph_vg2"
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:        }
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]:    ]
Dec 13 03:00:13 np0005558241 suspicious_khayyam[249392]: }
Dec 13 03:00:13 np0005558241 systemd[1]: libpod-555e97c763575ab794efb582d50a0289d1c3c41a296a3fb31ce1ccd23d8f1817.scope: Deactivated successfully.
Dec 13 03:00:13 np0005558241 podman[249375]: 2025-12-13 08:00:13.166914558 +0000 UTC m=+0.980737701 container died 555e97c763575ab794efb582d50a0289d1c3c41a296a3fb31ce1ccd23d8f1817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_khayyam, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:00:13 np0005558241 systemd[1]: var-lib-containers-storage-overlay-cf9d339df9c51fba8ed14d5399785922832a6e4bebe53f51d0a537da68a05766-merged.mount: Deactivated successfully.
Dec 13 03:00:13 np0005558241 podman[249375]: 2025-12-13 08:00:13.209305357 +0000 UTC m=+1.023128500 container remove 555e97c763575ab794efb582d50a0289d1c3c41a296a3fb31ce1ccd23d8f1817 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_khayyam, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:00:13 np0005558241 systemd[1]: libpod-conmon-555e97c763575ab794efb582d50a0289d1c3c41a296a3fb31ce1ccd23d8f1817.scope: Deactivated successfully.
Dec 13 03:00:13 np0005558241 podman[249477]: 2025-12-13 08:00:13.728195192 +0000 UTC m=+0.026706996 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:00:14 np0005558241 podman[249477]: 2025-12-13 08:00:14.072211768 +0000 UTC m=+0.370723522 container create 7d9bc1543fe4cd8f22b02fec3385ebb99887391cc27dcf002c134e687aa968a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 03:00:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:14 np0005558241 systemd[1]: Started libpod-conmon-7d9bc1543fe4cd8f22b02fec3385ebb99887391cc27dcf002c134e687aa968a5.scope.
Dec 13 03:00:14 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:00:14 np0005558241 podman[249477]: 2025-12-13 08:00:14.336007116 +0000 UTC m=+0.634518910 container init 7d9bc1543fe4cd8f22b02fec3385ebb99887391cc27dcf002c134e687aa968a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ride, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 03:00:14 np0005558241 podman[249477]: 2025-12-13 08:00:14.344506325 +0000 UTC m=+0.643018079 container start 7d9bc1543fe4cd8f22b02fec3385ebb99887391cc27dcf002c134e687aa968a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ride, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 03:00:14 np0005558241 focused_ride[249494]: 167 167
Dec 13 03:00:14 np0005558241 systemd[1]: libpod-7d9bc1543fe4cd8f22b02fec3385ebb99887391cc27dcf002c134e687aa968a5.scope: Deactivated successfully.
Dec 13 03:00:14 np0005558241 podman[249477]: 2025-12-13 08:00:14.35532101 +0000 UTC m=+0.653832784 container attach 7d9bc1543fe4cd8f22b02fec3385ebb99887391cc27dcf002c134e687aa968a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:00:14 np0005558241 podman[249477]: 2025-12-13 08:00:14.356822627 +0000 UTC m=+0.655334381 container died 7d9bc1543fe4cd8f22b02fec3385ebb99887391cc27dcf002c134e687aa968a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:00:14 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3c41f736ba6aefb2c99e2a5b6c70120b8c0a87f3e15590fb52c1e9796fb3c3e9-merged.mount: Deactivated successfully.
Dec 13 03:00:14 np0005558241 podman[249477]: 2025-12-13 08:00:14.402550778 +0000 UTC m=+0.701062532 container remove 7d9bc1543fe4cd8f22b02fec3385ebb99887391cc27dcf002c134e687aa968a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ride, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 03:00:14 np0005558241 systemd[1]: libpod-conmon-7d9bc1543fe4cd8f22b02fec3385ebb99887391cc27dcf002c134e687aa968a5.scope: Deactivated successfully.
Dec 13 03:00:14 np0005558241 podman[249518]: 2025-12-13 08:00:14.568977359 +0000 UTC m=+0.047912625 container create 5063fa5634813dce54013424ad797db9d0cc8c9d0c32e9615098d0a715cb69c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hugle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 03:00:14 np0005558241 systemd[1]: Started libpod-conmon-5063fa5634813dce54013424ad797db9d0cc8c9d0c32e9615098d0a715cb69c4.scope.
Dec 13 03:00:14 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:00:14 np0005558241 podman[249518]: 2025-12-13 08:00:14.548801935 +0000 UTC m=+0.027737231 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:00:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa98f6f42b1b64212453b910c2e2a2d816418c3f21bb577d15b68c9d8c10f336/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:00:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa98f6f42b1b64212453b910c2e2a2d816418c3f21bb577d15b68c9d8c10f336/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:00:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa98f6f42b1b64212453b910c2e2a2d816418c3f21bb577d15b68c9d8c10f336/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:00:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa98f6f42b1b64212453b910c2e2a2d816418c3f21bb577d15b68c9d8c10f336/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:00:14 np0005558241 podman[249518]: 2025-12-13 08:00:14.655656265 +0000 UTC m=+0.134591551 container init 5063fa5634813dce54013424ad797db9d0cc8c9d0c32e9615098d0a715cb69c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hugle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:00:14 np0005558241 podman[249518]: 2025-12-13 08:00:14.664127243 +0000 UTC m=+0.143062509 container start 5063fa5634813dce54013424ad797db9d0cc8c9d0c32e9615098d0a715cb69c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:00:14 np0005558241 podman[249518]: 2025-12-13 08:00:14.669010983 +0000 UTC m=+0.147946259 container attach 5063fa5634813dce54013424ad797db9d0cc8c9d0c32e9615098d0a715cb69c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hugle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:00:15 np0005558241 lvm[249614]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:00:15 np0005558241 lvm[249614]: VG ceph_vg1 finished
Dec 13 03:00:15 np0005558241 lvm[249613]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:00:15 np0005558241 lvm[249613]: VG ceph_vg0 finished
Dec 13 03:00:15 np0005558241 lvm[249616]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:00:15 np0005558241 lvm[249616]: VG ceph_vg2 finished
Dec 13 03:00:15 np0005558241 strange_hugle[249535]: {}
Dec 13 03:00:15 np0005558241 systemd[1]: libpod-5063fa5634813dce54013424ad797db9d0cc8c9d0c32e9615098d0a715cb69c4.scope: Deactivated successfully.
Dec 13 03:00:15 np0005558241 systemd[1]: libpod-5063fa5634813dce54013424ad797db9d0cc8c9d0c32e9615098d0a715cb69c4.scope: Consumed 1.482s CPU time.
Dec 13 03:00:15 np0005558241 podman[249518]: 2025-12-13 08:00:15.599622963 +0000 UTC m=+1.078558239 container died 5063fa5634813dce54013424ad797db9d0cc8c9d0c32e9615098d0a715cb69c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hugle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:00:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay-fa98f6f42b1b64212453b910c2e2a2d816418c3f21bb577d15b68c9d8c10f336-merged.mount: Deactivated successfully.
Dec 13 03:00:15 np0005558241 podman[249518]: 2025-12-13 08:00:15.65211484 +0000 UTC m=+1.131050106 container remove 5063fa5634813dce54013424ad797db9d0cc8c9d0c32e9615098d0a715cb69c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hugle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:00:15 np0005558241 systemd[1]: libpod-conmon-5063fa5634813dce54013424ad797db9d0cc8c9d0c32e9615098d0a715cb69c4.scope: Deactivated successfully.
Dec 13 03:00:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:00:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:00:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:00:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:00:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:16 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:00:16 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:00:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:00:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:19 np0005558241 nova_compute[248510]: 2025-12-13 08:00:19.774 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:00:19 np0005558241 nova_compute[248510]: 2025-12-13 08:00:19.775 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:00:19 np0005558241 nova_compute[248510]: 2025-12-13 08:00:19.776 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:00:19 np0005558241 nova_compute[248510]: 2025-12-13 08:00:19.776 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:00:19 np0005558241 nova_compute[248510]: 2025-12-13 08:00:19.884 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:00:19 np0005558241 nova_compute[248510]: 2025-12-13 08:00:19.884 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:00:19 np0005558241 nova_compute[248510]: 2025-12-13 08:00:19.885 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:00:19 np0005558241 nova_compute[248510]: 2025-12-13 08:00:19.885 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:00:19 np0005558241 nova_compute[248510]: 2025-12-13 08:00:19.885 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:00:19 np0005558241 nova_compute[248510]: 2025-12-13 08:00:19.885 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:00:19 np0005558241 nova_compute[248510]: 2025-12-13 08:00:19.886 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:00:19 np0005558241 nova_compute[248510]: 2025-12-13 08:00:19.886 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:00:19 np0005558241 nova_compute[248510]: 2025-12-13 08:00:19.886 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:00:20 np0005558241 nova_compute[248510]: 2025-12-13 08:00:20.027 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:00:20 np0005558241 nova_compute[248510]: 2025-12-13 08:00:20.027 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:00:20 np0005558241 nova_compute[248510]: 2025-12-13 08:00:20.027 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:00:20 np0005558241 nova_compute[248510]: 2025-12-13 08:00:20.028 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:00:20 np0005558241 nova_compute[248510]: 2025-12-13 08:00:20.028 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:00:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:00:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:00:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1285672135' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:00:20 np0005558241 nova_compute[248510]: 2025-12-13 08:00:20.606 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:00:20 np0005558241 nova_compute[248510]: 2025-12-13 08:00:20.776 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:00:20 np0005558241 nova_compute[248510]: 2025-12-13 08:00:20.777 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5133MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:00:20 np0005558241 nova_compute[248510]: 2025-12-13 08:00:20.777 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:00:20 np0005558241 nova_compute[248510]: 2025-12-13 08:00:20.778 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:00:20 np0005558241 nova_compute[248510]: 2025-12-13 08:00:20.985 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:00:20 np0005558241 nova_compute[248510]: 2025-12-13 08:00:20.985 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:00:21 np0005558241 nova_compute[248510]: 2025-12-13 08:00:21.111 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:00:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:00:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2916365415' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:00:21 np0005558241 nova_compute[248510]: 2025-12-13 08:00:21.724 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:00:21 np0005558241 nova_compute[248510]: 2025-12-13 08:00:21.732 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:00:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:00:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:22 np0005558241 nova_compute[248510]: 2025-12-13 08:00:22.280 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:00:22 np0005558241 nova_compute[248510]: 2025-12-13 08:00:22.613 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:00:22 np0005558241 nova_compute[248510]: 2025-12-13 08:00:22.614 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:00:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec 13 03:00:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3341061223' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 13 03:00:26 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 13 03:00:26 np0005558241 ceph-mgr[76830]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 13 03:00:26 np0005558241 ceph-mgr[76830]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 13 03:00:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:00:26 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Dec 13 03:00:26 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:00:26.982489) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:00:26 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Dec 13 03:00:26 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612826982584, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1780, "num_deletes": 250, "total_data_size": 3061643, "memory_usage": 3110896, "flush_reason": "Manual Compaction"}
Dec 13 03:00:26 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Dec 13 03:00:26 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612826998548, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1738580, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16606, "largest_seqno": 18385, "table_properties": {"data_size": 1732684, "index_size": 2907, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15225, "raw_average_key_size": 20, "raw_value_size": 1719573, "raw_average_value_size": 2305, "num_data_blocks": 134, "num_entries": 746, "num_filter_entries": 746, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765612593, "oldest_key_time": 1765612593, "file_creation_time": 1765612826, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:00:26 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 16123 microseconds, and 4892 cpu microseconds.
Dec 13 03:00:26 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:00:26.998615) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1738580 bytes OK
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:00:26.998641) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:00:27.001457) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:00:27.001481) EVENT_LOG_v1 {"time_micros": 1765612827001476, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:00:27.001503) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3053997, prev total WAL file size 3053997, number of live WAL files 2.
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:00:27.002534) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1697KB)], [38(8365KB)]
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612827002653, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 10304597, "oldest_snapshot_seqno": -1}
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4512 keys, 8137599 bytes, temperature: kUnknown
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612827071362, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 8137599, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8106625, "index_size": 18576, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 109825, "raw_average_key_size": 24, "raw_value_size": 8024301, "raw_average_value_size": 1778, "num_data_blocks": 788, "num_entries": 4512, "num_filter_entries": 4512, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765612827, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:00:27.071591) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 8137599 bytes
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:00:27.074340) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.8 rd, 118.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 8.2 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(10.6) write-amplify(4.7) OK, records in: 4932, records dropped: 420 output_compression: NoCompression
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:00:27.074356) EVENT_LOG_v1 {"time_micros": 1765612827074348, "job": 18, "event": "compaction_finished", "compaction_time_micros": 68780, "compaction_time_cpu_micros": 24291, "output_level": 6, "num_output_files": 1, "total_output_size": 8137599, "num_input_records": 4932, "num_output_records": 4512, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612827074741, "job": 18, "event": "table_file_deletion", "file_number": 40}
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612827076501, "job": 18, "event": "table_file_deletion", "file_number": 38}
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:00:27.002442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:00:27.076565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:00:27.076569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:00:27.076571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:00:27.076573) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:00:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:00:27.076575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:00:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1002: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:00:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:34 np0005558241 podman[249701]: 2025-12-13 08:00:34.9763504 +0000 UTC m=+0.060988447 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 03:00:34 np0005558241 podman[249700]: 2025-12-13 08:00:34.984990382 +0000 UTC m=+0.073070103 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 13 03:00:35 np0005558241 podman[249699]: 2025-12-13 08:00:35.005767981 +0000 UTC m=+0.094396356 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 03:00:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:00:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:00:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:00:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:00:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:00:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:00:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:00:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:00:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:00:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec 13 03:00:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2288112208' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 13 03:00:48 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 13 03:00:48 np0005558241 ceph-mgr[76830]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 13 03:00:48 np0005558241 ceph-mgr[76830]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 13 03:00:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:00:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4097 writes, 18K keys, 4097 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 4097 writes, 4097 syncs, 1.00 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1140 writes, 5360 keys, 1140 commit groups, 1.0 writes per commit group, ingest: 8.01 MB, 0.01 MB/s#012Interval WAL: 1140 writes, 1140 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      8.5      2.45              0.07         9    0.273       0      0       0.0       0.0#012  L6      1/0    7.76 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   2.9     42.3     34.7      1.73              0.19         8    0.216     34K   4169       0.0       0.0#012 Sum      1/0    7.76 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.9     17.5     19.3      4.18              0.27        17    0.246     34K   4169       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.1     41.6     41.3      0.94              0.11         8    0.118     19K   2394       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     42.3     34.7      1.73              0.19         8    0.216     34K   4169       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      8.5      2.45              0.07         8    0.306       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.020, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.08 GB write, 0.04 MB/s write, 0.07 GB read, 0.04 MB/s read, 4.2 seconds#012Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 308.00 MB usage: 4.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000187 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(267,4.15 MB,1.34828%) FilterBlock(18,105.86 KB,0.0335644%) IndexBlock(18,202.38 KB,0.0641662%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec 13 03:00:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:00:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:00:55.375 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:00:55.376 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:00:55.376 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:00:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:00:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:00:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:01:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1020: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:01:04.335609) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612864335656, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 531, "num_deletes": 251, "total_data_size": 569388, "memory_usage": 579128, "flush_reason": "Manual Compaction"}
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612864344194, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 560473, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18386, "largest_seqno": 18916, "table_properties": {"data_size": 557514, "index_size": 931, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6734, "raw_average_key_size": 18, "raw_value_size": 551762, "raw_average_value_size": 1536, "num_data_blocks": 43, "num_entries": 359, "num_filter_entries": 359, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765612827, "oldest_key_time": 1765612827, "file_creation_time": 1765612864, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 8624 microseconds, and 4243 cpu microseconds.
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:01:04.344235) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 560473 bytes OK
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:01:04.344256) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:01:04.346052) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:01:04.346090) EVENT_LOG_v1 {"time_micros": 1765612864346084, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:01:04.346113) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 566380, prev total WAL file size 566380, number of live WAL files 2.
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:01:04.346581) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(547KB)], [41(7946KB)]
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612864346625, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 8698072, "oldest_snapshot_seqno": -1}
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4362 keys, 6798368 bytes, temperature: kUnknown
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612864417584, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6798368, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6769694, "index_size": 16660, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10949, "raw_key_size": 107340, "raw_average_key_size": 24, "raw_value_size": 6691262, "raw_average_value_size": 1533, "num_data_blocks": 698, "num_entries": 4362, "num_filter_entries": 4362, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765612864, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:01:04.417871) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6798368 bytes
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:01:04.419496) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.4 rd, 95.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 7.8 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(27.6) write-amplify(12.1) OK, records in: 4871, records dropped: 509 output_compression: NoCompression
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:01:04.419513) EVENT_LOG_v1 {"time_micros": 1765612864419505, "job": 20, "event": "compaction_finished", "compaction_time_micros": 71041, "compaction_time_cpu_micros": 30984, "output_level": 6, "num_output_files": 1, "total_output_size": 6798368, "num_input_records": 4871, "num_output_records": 4362, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612864419726, "job": 20, "event": "table_file_deletion", "file_number": 43}
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612864421109, "job": 20, "event": "table_file_deletion", "file_number": 41}
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:01:04.346492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:01:04.421177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:01:04.421185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:01:04.421189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:01:04.421191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:01:04 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:01:04.421194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:01:06 np0005558241 podman[249771]: 2025-12-13 08:01:06.004292007 +0000 UTC m=+0.077313357 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd)
Dec 13 03:01:06 np0005558241 podman[249770]: 2025-12-13 08:01:06.026561873 +0000 UTC m=+0.104403261 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 13 03:01:06 np0005558241 podman[249772]: 2025-12-13 08:01:06.035468272 +0000 UTC m=+0.102186627 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 13 03:01:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:01:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:01:09
Dec 13 03:01:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:01:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:01:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'backups', '.mgr', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes']
Dec 13 03:01:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:01:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:01:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:01:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:01:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:01:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:01:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:01:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1023: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:01:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:01:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1024: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:01:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2232217356' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:01:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:01:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2232217356' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:01:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:01:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:01:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:01:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:01:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:01:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:01:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:01:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:01:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:01:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:01:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:01:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:01:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:01:17 np0005558241 podman[249976]: 2025-12-13 08:01:17.467639243 +0000 UTC m=+0.024596864 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:01:17 np0005558241 podman[249976]: 2025-12-13 08:01:17.595769585 +0000 UTC m=+0.152727156 container create fb3b251d69e6c1e6c402d5518d2fe3139b2d273754192ef3ecc510d7031c7304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shamir, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 03:01:17 np0005558241 systemd[1]: Started libpod-conmon-fb3b251d69e6c1e6c402d5518d2fe3139b2d273754192ef3ecc510d7031c7304.scope.
Dec 13 03:01:17 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:01:18 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:01:18 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:01:18 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:01:18 np0005558241 podman[249976]: 2025-12-13 08:01:18.132321371 +0000 UTC m=+0.689278972 container init fb3b251d69e6c1e6c402d5518d2fe3139b2d273754192ef3ecc510d7031c7304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shamir, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:01:18 np0005558241 podman[249976]: 2025-12-13 08:01:18.140318958 +0000 UTC m=+0.697276529 container start fb3b251d69e6c1e6c402d5518d2fe3139b2d273754192ef3ecc510d7031c7304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 03:01:18 np0005558241 affectionate_shamir[249992]: 167 167
Dec 13 03:01:18 np0005558241 systemd[1]: libpod-fb3b251d69e6c1e6c402d5518d2fe3139b2d273754192ef3ecc510d7031c7304.scope: Deactivated successfully.
Dec 13 03:01:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1027: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:18 np0005558241 podman[249976]: 2025-12-13 08:01:18.767257752 +0000 UTC m=+1.324215333 container attach fb3b251d69e6c1e6c402d5518d2fe3139b2d273754192ef3ecc510d7031c7304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 03:01:18 np0005558241 podman[249976]: 2025-12-13 08:01:18.768681117 +0000 UTC m=+1.325638748 container died fb3b251d69e6c1e6c402d5518d2fe3139b2d273754192ef3ecc510d7031c7304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:01:19 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e29bef022bf4bf059056a09176f875bb4587d3d469abd763cda66f1d1de42bee-merged.mount: Deactivated successfully.
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:01:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:01:21 np0005558241 podman[249976]: 2025-12-13 08:01:21.312267821 +0000 UTC m=+3.869225392 container remove fb3b251d69e6c1e6c402d5518d2fe3139b2d273754192ef3ecc510d7031c7304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_shamir, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:01:21 np0005558241 systemd[1]: libpod-conmon-fb3b251d69e6c1e6c402d5518d2fe3139b2d273754192ef3ecc510d7031c7304.scope: Deactivated successfully.
Dec 13 03:01:21 np0005558241 podman[250016]: 2025-12-13 08:01:21.474662733 +0000 UTC m=+0.028625963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:01:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:01:22 np0005558241 podman[250016]: 2025-12-13 08:01:22.216448304 +0000 UTC m=+0.770411544 container create 0af40d55323f8d0a1a79f670a0f1d7435a652f8881e54b1938da8425687f2d58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 03:01:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:22 np0005558241 systemd[1]: Started libpod-conmon-0af40d55323f8d0a1a79f670a0f1d7435a652f8881e54b1938da8425687f2d58.scope.
Dec 13 03:01:22 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:01:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74aa8f0855fcb0bd7dc1107c705e549d8766fc5501a6305862e20cb92216344/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:01:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74aa8f0855fcb0bd7dc1107c705e549d8766fc5501a6305862e20cb92216344/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:01:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74aa8f0855fcb0bd7dc1107c705e549d8766fc5501a6305862e20cb92216344/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:01:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74aa8f0855fcb0bd7dc1107c705e549d8766fc5501a6305862e20cb92216344/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:01:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74aa8f0855fcb0bd7dc1107c705e549d8766fc5501a6305862e20cb92216344/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:01:22 np0005558241 nova_compute[248510]: 2025-12-13 08:01:22.605 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:01:22 np0005558241 nova_compute[248510]: 2025-12-13 08:01:22.606 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:01:22 np0005558241 nova_compute[248510]: 2025-12-13 08:01:22.630 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:01:22 np0005558241 nova_compute[248510]: 2025-12-13 08:01:22.631 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:01:22 np0005558241 nova_compute[248510]: 2025-12-13 08:01:22.631 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:01:22 np0005558241 nova_compute[248510]: 2025-12-13 08:01:22.649 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:01:22 np0005558241 nova_compute[248510]: 2025-12-13 08:01:22.650 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:01:22 np0005558241 nova_compute[248510]: 2025-12-13 08:01:22.650 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:01:22 np0005558241 nova_compute[248510]: 2025-12-13 08:01:22.650 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:01:22 np0005558241 nova_compute[248510]: 2025-12-13 08:01:22.651 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:01:22 np0005558241 nova_compute[248510]: 2025-12-13 08:01:22.651 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:01:22 np0005558241 nova_compute[248510]: 2025-12-13 08:01:22.651 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:01:22 np0005558241 nova_compute[248510]: 2025-12-13 08:01:22.651 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:01:22 np0005558241 nova_compute[248510]: 2025-12-13 08:01:22.651 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:01:22 np0005558241 nova_compute[248510]: 2025-12-13 08:01:22.675 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:01:22 np0005558241 nova_compute[248510]: 2025-12-13 08:01:22.675 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:01:22 np0005558241 nova_compute[248510]: 2025-12-13 08:01:22.675 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:01:22 np0005558241 nova_compute[248510]: 2025-12-13 08:01:22.676 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:01:22 np0005558241 nova_compute[248510]: 2025-12-13 08:01:22.676 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:01:22 np0005558241 podman[250016]: 2025-12-13 08:01:22.919176996 +0000 UTC m=+1.473140226 container init 0af40d55323f8d0a1a79f670a0f1d7435a652f8881e54b1938da8425687f2d58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:01:22 np0005558241 podman[250016]: 2025-12-13 08:01:22.926749502 +0000 UTC m=+1.480712712 container start 0af40d55323f8d0a1a79f670a0f1d7435a652f8881e54b1938da8425687f2d58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:01:22 np0005558241 podman[250016]: 2025-12-13 08:01:22.973059328 +0000 UTC m=+1.527022608 container attach 0af40d55323f8d0a1a79f670a0f1d7435a652f8881e54b1938da8425687f2d58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_torvalds, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:01:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:01:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1962618351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:01:23 np0005558241 nova_compute[248510]: 2025-12-13 08:01:23.290 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:01:23 np0005558241 thirsty_torvalds[250033]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:01:23 np0005558241 thirsty_torvalds[250033]: --> All data devices are unavailable
Dec 13 03:01:23 np0005558241 systemd[1]: libpod-0af40d55323f8d0a1a79f670a0f1d7435a652f8881e54b1938da8425687f2d58.scope: Deactivated successfully.
Dec 13 03:01:23 np0005558241 podman[250016]: 2025-12-13 08:01:23.43761634 +0000 UTC m=+1.991579550 container died 0af40d55323f8d0a1a79f670a0f1d7435a652f8881e54b1938da8425687f2d58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_torvalds, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 03:01:23 np0005558241 nova_compute[248510]: 2025-12-13 08:01:23.451 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:01:23 np0005558241 nova_compute[248510]: 2025-12-13 08:01:23.453 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5074MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:01:23 np0005558241 nova_compute[248510]: 2025-12-13 08:01:23.453 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:01:23 np0005558241 nova_compute[248510]: 2025-12-13 08:01:23.453 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:01:23 np0005558241 nova_compute[248510]: 2025-12-13 08:01:23.532 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:01:23 np0005558241 nova_compute[248510]: 2025-12-13 08:01:23.532 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:01:23 np0005558241 nova_compute[248510]: 2025-12-13 08:01:23.550 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:01:23 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e74aa8f0855fcb0bd7dc1107c705e549d8766fc5501a6305862e20cb92216344-merged.mount: Deactivated successfully.
Dec 13 03:01:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:01:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/379319498' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:01:24 np0005558241 nova_compute[248510]: 2025-12-13 08:01:24.115 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:01:24 np0005558241 nova_compute[248510]: 2025-12-13 08:01:24.121 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:01:24 np0005558241 nova_compute[248510]: 2025-12-13 08:01:24.141 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:01:24 np0005558241 nova_compute[248510]: 2025-12-13 08:01:24.143 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:01:24 np0005558241 nova_compute[248510]: 2025-12-13 08:01:24.144 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:01:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:24 np0005558241 podman[250016]: 2025-12-13 08:01:24.503385333 +0000 UTC m=+3.057348583 container remove 0af40d55323f8d0a1a79f670a0f1d7435a652f8881e54b1938da8425687f2d58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 03:01:24 np0005558241 systemd[1]: libpod-conmon-0af40d55323f8d0a1a79f670a0f1d7435a652f8881e54b1938da8425687f2d58.scope: Deactivated successfully.
Dec 13 03:01:25 np0005558241 podman[250171]: 2025-12-13 08:01:25.0178927 +0000 UTC m=+0.022537674 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:01:25 np0005558241 podman[250171]: 2025-12-13 08:01:25.300718415 +0000 UTC m=+0.305363379 container create a8538b60b7129ac2bcd35273ae45c36c4f3c1fb0ade304f21414f53f7a4a2ea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_cerf, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:01:25 np0005558241 systemd[1]: Started libpod-conmon-a8538b60b7129ac2bcd35273ae45c36c4f3c1fb0ade304f21414f53f7a4a2ea5.scope.
Dec 13 03:01:25 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:01:25 np0005558241 podman[250171]: 2025-12-13 08:01:25.928983372 +0000 UTC m=+0.933628346 container init a8538b60b7129ac2bcd35273ae45c36c4f3c1fb0ade304f21414f53f7a4a2ea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 03:01:25 np0005558241 podman[250171]: 2025-12-13 08:01:25.938887965 +0000 UTC m=+0.943532909 container start a8538b60b7129ac2bcd35273ae45c36c4f3c1fb0ade304f21414f53f7a4a2ea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:01:25 np0005558241 modest_cerf[250188]: 167 167
Dec 13 03:01:25 np0005558241 systemd[1]: libpod-a8538b60b7129ac2bcd35273ae45c36c4f3c1fb0ade304f21414f53f7a4a2ea5.scope: Deactivated successfully.
Dec 13 03:01:26 np0005558241 podman[250171]: 2025-12-13 08:01:26.155329033 +0000 UTC m=+1.159974007 container attach a8538b60b7129ac2bcd35273ae45c36c4f3c1fb0ade304f21414f53f7a4a2ea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 03:01:26 np0005558241 podman[250171]: 2025-12-13 08:01:26.15602636 +0000 UTC m=+1.160671304 container died a8538b60b7129ac2bcd35273ae45c36c4f3c1fb0ade304f21414f53f7a4a2ea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_cerf, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:01:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:01:27 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c20b5e7410e84f754db6e945be59663d01d264a76fa00e9439e46fd28573b388-merged.mount: Deactivated successfully.
Dec 13 03:01:27 np0005558241 podman[250171]: 2025-12-13 08:01:27.897459402 +0000 UTC m=+2.902104376 container remove a8538b60b7129ac2bcd35273ae45c36c4f3c1fb0ade304f21414f53f7a4a2ea5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:01:27 np0005558241 systemd[1]: libpod-conmon-a8538b60b7129ac2bcd35273ae45c36c4f3c1fb0ade304f21414f53f7a4a2ea5.scope: Deactivated successfully.
Dec 13 03:01:28 np0005558241 podman[250212]: 2025-12-13 08:01:28.113966142 +0000 UTC m=+0.090651164 container create a96cdd1ace966d83ee01df383fe3cc98be5b40e4ed62387710f5f828811b0e4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_dhawan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:01:28 np0005558241 podman[250212]: 2025-12-13 08:01:28.048311412 +0000 UTC m=+0.024996424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:01:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:28 np0005558241 systemd[1]: Started libpod-conmon-a96cdd1ace966d83ee01df383fe3cc98be5b40e4ed62387710f5f828811b0e4f.scope.
Dec 13 03:01:28 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:01:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/316a2915f83819e64133ce4b49e52778b94e401d054b6e6a7beebca6e336e8dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:01:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/316a2915f83819e64133ce4b49e52778b94e401d054b6e6a7beebca6e336e8dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:01:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/316a2915f83819e64133ce4b49e52778b94e401d054b6e6a7beebca6e336e8dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:01:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/316a2915f83819e64133ce4b49e52778b94e401d054b6e6a7beebca6e336e8dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:01:29 np0005558241 podman[250212]: 2025-12-13 08:01:29.266877103 +0000 UTC m=+1.243562135 container init a96cdd1ace966d83ee01df383fe3cc98be5b40e4ed62387710f5f828811b0e4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_dhawan, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:01:29 np0005558241 podman[250212]: 2025-12-13 08:01:29.276982081 +0000 UTC m=+1.253667103 container start a96cdd1ace966d83ee01df383fe3cc98be5b40e4ed62387710f5f828811b0e4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 03:01:29 np0005558241 podman[250212]: 2025-12-13 08:01:29.396986663 +0000 UTC m=+1.373671695 container attach a96cdd1ace966d83ee01df383fe3cc98be5b40e4ed62387710f5f828811b0e4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]: {
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:    "0": [
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:        {
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "devices": [
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "/dev/loop3"
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            ],
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "lv_name": "ceph_lv0",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "lv_size": "21470642176",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "name": "ceph_lv0",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "tags": {
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.cluster_name": "ceph",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.crush_device_class": "",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.encrypted": "0",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.objectstore": "bluestore",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.osd_id": "0",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.type": "block",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.vdo": "0",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.with_tpm": "0"
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            },
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "type": "block",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "vg_name": "ceph_vg0"
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:        }
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:    ],
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:    "1": [
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:        {
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "devices": [
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "/dev/loop4"
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            ],
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "lv_name": "ceph_lv1",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "lv_size": "21470642176",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "name": "ceph_lv1",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "tags": {
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.cluster_name": "ceph",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.crush_device_class": "",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.encrypted": "0",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.objectstore": "bluestore",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.osd_id": "1",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.type": "block",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.vdo": "0",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.with_tpm": "0"
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            },
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "type": "block",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "vg_name": "ceph_vg1"
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:        }
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:    ],
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:    "2": [
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:        {
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "devices": [
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "/dev/loop5"
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            ],
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "lv_name": "ceph_lv2",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "lv_size": "21470642176",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "name": "ceph_lv2",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "tags": {
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.cluster_name": "ceph",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.crush_device_class": "",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.encrypted": "0",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.objectstore": "bluestore",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.osd_id": "2",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.type": "block",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.vdo": "0",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:                "ceph.with_tpm": "0"
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            },
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "type": "block",
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:            "vg_name": "ceph_vg2"
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:        }
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]:    ]
Dec 13 03:01:29 np0005558241 thirsty_dhawan[250228]: }
Dec 13 03:01:29 np0005558241 systemd[1]: libpod-a96cdd1ace966d83ee01df383fe3cc98be5b40e4ed62387710f5f828811b0e4f.scope: Deactivated successfully.
Dec 13 03:01:29 np0005558241 podman[250212]: 2025-12-13 08:01:29.634757034 +0000 UTC m=+1.611442036 container died a96cdd1ace966d83ee01df383fe3cc98be5b40e4ed62387710f5f828811b0e4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_dhawan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:01:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay-316a2915f83819e64133ce4b49e52778b94e401d054b6e6a7beebca6e336e8dd-merged.mount: Deactivated successfully.
Dec 13 03:01:30 np0005558241 podman[250212]: 2025-12-13 08:01:30.926308586 +0000 UTC m=+2.902993578 container remove a96cdd1ace966d83ee01df383fe3cc98be5b40e4ed62387710f5f828811b0e4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_dhawan, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:01:30 np0005558241 systemd[1]: libpod-conmon-a96cdd1ace966d83ee01df383fe3cc98be5b40e4ed62387710f5f828811b0e4f.scope: Deactivated successfully.
Dec 13 03:01:31 np0005558241 podman[250311]: 2025-12-13 08:01:31.401758989 +0000 UTC m=+0.022699189 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:01:31 np0005558241 podman[250311]: 2025-12-13 08:01:31.584943875 +0000 UTC m=+0.205884065 container create f573a6240a822252646244c699d26313b6e6d7a7b6cb9291bd14e3a02f5dd772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_dubinsky, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:01:31 np0005558241 systemd[1]: Started libpod-conmon-f573a6240a822252646244c699d26313b6e6d7a7b6cb9291bd14e3a02f5dd772.scope.
Dec 13 03:01:31 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:01:31 np0005558241 podman[250311]: 2025-12-13 08:01:31.842448998 +0000 UTC m=+0.463389228 container init f573a6240a822252646244c699d26313b6e6d7a7b6cb9291bd14e3a02f5dd772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_dubinsky, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:01:31 np0005558241 podman[250311]: 2025-12-13 08:01:31.850974778 +0000 UTC m=+0.471914968 container start f573a6240a822252646244c699d26313b6e6d7a7b6cb9291bd14e3a02f5dd772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_dubinsky, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:01:31 np0005558241 strange_dubinsky[250327]: 167 167
Dec 13 03:01:31 np0005558241 systemd[1]: libpod-f573a6240a822252646244c699d26313b6e6d7a7b6cb9291bd14e3a02f5dd772.scope: Deactivated successfully.
Dec 13 03:01:31 np0005558241 podman[250311]: 2025-12-13 08:01:31.875990663 +0000 UTC m=+0.496930883 container attach f573a6240a822252646244c699d26313b6e6d7a7b6cb9291bd14e3a02f5dd772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 03:01:31 np0005558241 podman[250311]: 2025-12-13 08:01:31.877680065 +0000 UTC m=+0.498620245 container died f573a6240a822252646244c699d26313b6e6d7a7b6cb9291bd14e3a02f5dd772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 03:01:31 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3f5d0297b2458857dfb9884a347672e4b52af25c96149cc6744fb916ed999dff-merged.mount: Deactivated successfully.
Dec 13 03:01:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:01:32 np0005558241 podman[250311]: 2025-12-13 08:01:32.003188442 +0000 UTC m=+0.624128632 container remove f573a6240a822252646244c699d26313b6e6d7a7b6cb9291bd14e3a02f5dd772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Dec 13 03:01:32 np0005558241 systemd[1]: libpod-conmon-f573a6240a822252646244c699d26313b6e6d7a7b6cb9291bd14e3a02f5dd772.scope: Deactivated successfully.
Dec 13 03:01:32 np0005558241 podman[250351]: 2025-12-13 08:01:32.146799504 +0000 UTC m=+0.026023581 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:01:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:32 np0005558241 podman[250351]: 2025-12-13 08:01:32.316507628 +0000 UTC m=+0.195731705 container create adf37a22547fc58e047b7157deb11efc763dfade85fd6d9500c16c002928fff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bohr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:01:32 np0005558241 systemd[1]: Started libpod-conmon-adf37a22547fc58e047b7157deb11efc763dfade85fd6d9500c16c002928fff4.scope.
Dec 13 03:01:32 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:01:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c908c7176b0735705d9a9e1fefe8afaab96a74dac011af0f4ee8da16436add3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:01:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c908c7176b0735705d9a9e1fefe8afaab96a74dac011af0f4ee8da16436add3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:01:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c908c7176b0735705d9a9e1fefe8afaab96a74dac011af0f4ee8da16436add3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:01:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c908c7176b0735705d9a9e1fefe8afaab96a74dac011af0f4ee8da16436add3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:01:32 np0005558241 podman[250351]: 2025-12-13 08:01:32.448814142 +0000 UTC m=+0.328038239 container init adf37a22547fc58e047b7157deb11efc763dfade85fd6d9500c16c002928fff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bohr, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:01:32 np0005558241 podman[250351]: 2025-12-13 08:01:32.462865247 +0000 UTC m=+0.342089344 container start adf37a22547fc58e047b7157deb11efc763dfade85fd6d9500c16c002928fff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bohr, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:01:32 np0005558241 podman[250351]: 2025-12-13 08:01:32.562379775 +0000 UTC m=+0.441603852 container attach adf37a22547fc58e047b7157deb11efc763dfade85fd6d9500c16c002928fff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bohr, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 03:01:33 np0005558241 lvm[250446]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:01:33 np0005558241 lvm[250447]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:01:33 np0005558241 lvm[250447]: VG ceph_vg1 finished
Dec 13 03:01:33 np0005558241 lvm[250446]: VG ceph_vg0 finished
Dec 13 03:01:33 np0005558241 lvm[250449]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:01:33 np0005558241 lvm[250449]: VG ceph_vg2 finished
Dec 13 03:01:33 np0005558241 sweet_bohr[250368]: {}
Dec 13 03:01:33 np0005558241 systemd[1]: libpod-adf37a22547fc58e047b7157deb11efc763dfade85fd6d9500c16c002928fff4.scope: Deactivated successfully.
Dec 13 03:01:33 np0005558241 podman[250351]: 2025-12-13 08:01:33.304929938 +0000 UTC m=+1.184154005 container died adf37a22547fc58e047b7157deb11efc763dfade85fd6d9500c16c002928fff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bohr, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 03:01:33 np0005558241 systemd[1]: libpod-adf37a22547fc58e047b7157deb11efc763dfade85fd6d9500c16c002928fff4.scope: Consumed 1.346s CPU time.
Dec 13 03:01:33 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2c908c7176b0735705d9a9e1fefe8afaab96a74dac011af0f4ee8da16436add3-merged.mount: Deactivated successfully.
Dec 13 03:01:33 np0005558241 podman[250351]: 2025-12-13 08:01:33.631737446 +0000 UTC m=+1.510961523 container remove adf37a22547fc58e047b7157deb11efc763dfade85fd6d9500c16c002928fff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bohr, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:01:33 np0005558241 systemd[1]: libpod-conmon-adf37a22547fc58e047b7157deb11efc763dfade85fd6d9500c16c002928fff4.scope: Deactivated successfully.
Dec 13 03:01:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:01:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:01:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:01:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:01:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:34 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:01:34 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:01:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:36 np0005558241 podman[250489]: 2025-12-13 08:01:36.977937156 +0000 UTC m=+0.061365060 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Dec 13 03:01:36 np0005558241 podman[250490]: 2025-12-13 08:01:36.977918766 +0000 UTC m=+0.060583271 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 13 03:01:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:01:37 np0005558241 podman[250488]: 2025-12-13 08:01:37.004693254 +0000 UTC m=+0.087861242 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_controller)
Dec 13 03:01:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:01:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:01:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:01:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:01:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:01:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:01:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:01:40.724 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:01:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:01:40.725 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:01:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:01:40.727 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:01:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:01:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1041: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:01:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:01:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:01:55.377 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:01:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:01:55.378 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:01:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:01:55.378 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:01:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:01:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:01:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1047: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:02:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:02:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 5969 writes, 24K keys, 5969 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5969 writes, 1025 syncs, 5.82 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 03:02:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:02:07 np0005558241 podman[250554]: 2025-12-13 08:02:07.967980042 +0000 UTC m=+0.054425500 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 03:02:07 np0005558241 podman[250553]: 2025-12-13 08:02:07.973228961 +0000 UTC m=+0.065836210 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Dec 13 03:02:08 np0005558241 podman[250552]: 2025-12-13 08:02:08.015122421 +0000 UTC m=+0.110521129 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 03:02:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:02:09
Dec 13 03:02:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:02:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:02:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.control', 'images', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta']
Dec 13 03:02:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:02:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:02:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:02:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:02:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:02:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:02:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:02:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:02:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:02:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1801.8 total, 600.0 interval#012Cumulative writes: 7171 writes, 29K keys, 7171 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7171 writes, 1367 syncs, 5.25 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 03:02:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:02:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:02:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3509778777' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:02:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:02:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3509778777' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:02:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:02:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:02:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 5935 writes, 24K keys, 5935 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5935 writes, 986 syncs, 6.02 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:02:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:02:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:02:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:23 np0005558241 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 03:02:24 np0005558241 nova_compute[248510]: 2025-12-13 08:02:24.145 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:02:24 np0005558241 nova_compute[248510]: 2025-12-13 08:02:24.146 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:02:24 np0005558241 nova_compute[248510]: 2025-12-13 08:02:24.146 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:02:24 np0005558241 nova_compute[248510]: 2025-12-13 08:02:24.146 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:02:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:24 np0005558241 nova_compute[248510]: 2025-12-13 08:02:24.535 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:02:24 np0005558241 nova_compute[248510]: 2025-12-13 08:02:24.536 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:02:24 np0005558241 nova_compute[248510]: 2025-12-13 08:02:24.536 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:02:24 np0005558241 nova_compute[248510]: 2025-12-13 08:02:24.536 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:02:24 np0005558241 nova_compute[248510]: 2025-12-13 08:02:24.536 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:02:24 np0005558241 nova_compute[248510]: 2025-12-13 08:02:24.536 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:02:24 np0005558241 nova_compute[248510]: 2025-12-13 08:02:24.537 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:02:24 np0005558241 nova_compute[248510]: 2025-12-13 08:02:24.537 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:02:24 np0005558241 nova_compute[248510]: 2025-12-13 08:02:24.537 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:02:24 np0005558241 nova_compute[248510]: 2025-12-13 08:02:24.569 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:02:24 np0005558241 nova_compute[248510]: 2025-12-13 08:02:24.569 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:02:24 np0005558241 nova_compute[248510]: 2025-12-13 08:02:24.570 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:02:24 np0005558241 nova_compute[248510]: 2025-12-13 08:02:24.570 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:02:24 np0005558241 nova_compute[248510]: 2025-12-13 08:02:24.570 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:02:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:02:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/236396992' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:02:25 np0005558241 nova_compute[248510]: 2025-12-13 08:02:25.121 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:02:25 np0005558241 nova_compute[248510]: 2025-12-13 08:02:25.270 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:02:25 np0005558241 nova_compute[248510]: 2025-12-13 08:02:25.271 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5142MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:02:25 np0005558241 nova_compute[248510]: 2025-12-13 08:02:25.271 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:02:25 np0005558241 nova_compute[248510]: 2025-12-13 08:02:25.271 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:02:25 np0005558241 nova_compute[248510]: 2025-12-13 08:02:25.331 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:02:25 np0005558241 nova_compute[248510]: 2025-12-13 08:02:25.332 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:02:25 np0005558241 nova_compute[248510]: 2025-12-13 08:02:25.353 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:02:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:02:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501728964' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:02:25 np0005558241 nova_compute[248510]: 2025-12-13 08:02:25.894 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:02:25 np0005558241 nova_compute[248510]: 2025-12-13 08:02:25.900 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:02:25 np0005558241 nova_compute[248510]: 2025-12-13 08:02:25.922 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:02:25 np0005558241 nova_compute[248510]: 2025-12-13 08:02:25.923 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:02:25 np0005558241 nova_compute[248510]: 2025-12-13 08:02:25.923 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:02:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:02:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:02:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:02:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:02:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:02:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:02:35 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:02:35 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:02:35 np0005558241 podman[250872]: 2025-12-13 08:02:35.432918893 +0000 UTC m=+0.045803207 container create 69fc33403e6eea7a5f6ca11a8a956926121842c7c93b9f21512d93426def3d47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 03:02:35 np0005558241 systemd[1]: Started libpod-conmon-69fc33403e6eea7a5f6ca11a8a956926121842c7c93b9f21512d93426def3d47.scope.
Dec 13 03:02:35 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:02:35 np0005558241 podman[250872]: 2025-12-13 08:02:35.410672886 +0000 UTC m=+0.023557210 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:02:35 np0005558241 podman[250872]: 2025-12-13 08:02:35.518363465 +0000 UTC m=+0.131247799 container init 69fc33403e6eea7a5f6ca11a8a956926121842c7c93b9f21512d93426def3d47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_galileo, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:02:35 np0005558241 podman[250872]: 2025-12-13 08:02:35.525659804 +0000 UTC m=+0.138544118 container start 69fc33403e6eea7a5f6ca11a8a956926121842c7c93b9f21512d93426def3d47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 03:02:35 np0005558241 podman[250872]: 2025-12-13 08:02:35.529838527 +0000 UTC m=+0.142722841 container attach 69fc33403e6eea7a5f6ca11a8a956926121842c7c93b9f21512d93426def3d47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:02:35 np0005558241 heuristic_galileo[250888]: 167 167
Dec 13 03:02:35 np0005558241 systemd[1]: libpod-69fc33403e6eea7a5f6ca11a8a956926121842c7c93b9f21512d93426def3d47.scope: Deactivated successfully.
Dec 13 03:02:35 np0005558241 conmon[250888]: conmon 69fc33403e6eea7a5f6c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-69fc33403e6eea7a5f6ca11a8a956926121842c7c93b9f21512d93426def3d47.scope/container/memory.events
Dec 13 03:02:35 np0005558241 podman[250872]: 2025-12-13 08:02:35.535497966 +0000 UTC m=+0.148382290 container died 69fc33403e6eea7a5f6ca11a8a956926121842c7c93b9f21512d93426def3d47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_galileo, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 03:02:35 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4a91b7d676bc836b5f82e788e72f03156f2ae081f7a8e3246f7d996148c30502-merged.mount: Deactivated successfully.
Dec 13 03:02:35 np0005558241 podman[250872]: 2025-12-13 08:02:35.581037466 +0000 UTC m=+0.193921780 container remove 69fc33403e6eea7a5f6ca11a8a956926121842c7c93b9f21512d93426def3d47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_galileo, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:02:35 np0005558241 systemd[1]: libpod-conmon-69fc33403e6eea7a5f6ca11a8a956926121842c7c93b9f21512d93426def3d47.scope: Deactivated successfully.
Dec 13 03:02:35 np0005558241 podman[250911]: 2025-12-13 08:02:35.767575544 +0000 UTC m=+0.058302045 container create 748c68163e98bd6c8626ab07ac22110f1845db853b14e0756cead84274f47b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_einstein, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:02:35 np0005558241 systemd[1]: Started libpod-conmon-748c68163e98bd6c8626ab07ac22110f1845db853b14e0756cead84274f47b43.scope.
Dec 13 03:02:35 np0005558241 podman[250911]: 2025-12-13 08:02:35.746870075 +0000 UTC m=+0.037596606 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:02:35 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:02:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed41b1c36fa798d207d0b6cb8506ad21243ea148d3a59015066fc249e6034a9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:02:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed41b1c36fa798d207d0b6cb8506ad21243ea148d3a59015066fc249e6034a9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:02:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed41b1c36fa798d207d0b6cb8506ad21243ea148d3a59015066fc249e6034a9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:02:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed41b1c36fa798d207d0b6cb8506ad21243ea148d3a59015066fc249e6034a9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:02:35 np0005558241 podman[250911]: 2025-12-13 08:02:35.863589316 +0000 UTC m=+0.154315827 container init 748c68163e98bd6c8626ab07ac22110f1845db853b14e0756cead84274f47b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_einstein, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 03:02:35 np0005558241 podman[250911]: 2025-12-13 08:02:35.872760081 +0000 UTC m=+0.163486592 container start 748c68163e98bd6c8626ab07ac22110f1845db853b14e0756cead84274f47b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 03:02:35 np0005558241 podman[250911]: 2025-12-13 08:02:35.876977415 +0000 UTC m=+0.167703966 container attach 748c68163e98bd6c8626ab07ac22110f1845db853b14e0756cead84274f47b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:02:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]: [
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:    {
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:        "available": false,
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:        "being_replaced": false,
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:        "ceph_device_lvm": false,
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:        "lsm_data": {},
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:        "lvs": [],
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:        "path": "/dev/sr0",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:        "rejected_reasons": [
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "Has a FileSystem",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "Insufficient space (<5GB)"
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:        ],
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:        "sys_api": {
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "actuators": null,
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "device_nodes": [
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:                "sr0"
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            ],
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "devname": "sr0",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "human_readable_size": "482.00 KB",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "id_bus": "ata",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "model": "QEMU DVD-ROM",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "nr_requests": "2",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "parent": "/dev/sr0",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "partitions": {},
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "path": "/dev/sr0",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "removable": "1",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "rev": "2.5+",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "ro": "0",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "rotational": "1",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "sas_address": "",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "sas_device_handle": "",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "scheduler_mode": "mq-deadline",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "sectors": 0,
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "sectorsize": "2048",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "size": 493568.0,
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "support_discard": "2048",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "type": "disk",
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:            "vendor": "QEMU"
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:        }
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]:    }
Dec 13 03:02:36 np0005558241 beautiful_einstein[250927]: ]
Dec 13 03:02:36 np0005558241 systemd[1]: libpod-748c68163e98bd6c8626ab07ac22110f1845db853b14e0756cead84274f47b43.scope: Deactivated successfully.
Dec 13 03:02:36 np0005558241 podman[250911]: 2025-12-13 08:02:36.467378536 +0000 UTC m=+0.758105057 container died 748c68163e98bd6c8626ab07ac22110f1845db853b14e0756cead84274f47b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_einstein, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 03:02:36 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ed41b1c36fa798d207d0b6cb8506ad21243ea148d3a59015066fc249e6034a9c-merged.mount: Deactivated successfully.
Dec 13 03:02:36 np0005558241 podman[250911]: 2025-12-13 08:02:36.514851253 +0000 UTC m=+0.805577754 container remove 748c68163e98bd6c8626ab07ac22110f1845db853b14e0756cead84274f47b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_einstein, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:02:36 np0005558241 systemd[1]: libpod-conmon-748c68163e98bd6c8626ab07ac22110f1845db853b14e0756cead84274f47b43.scope: Deactivated successfully.
Dec 13 03:02:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:02:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:02:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:02:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:02:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:02:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:02:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:02:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:02:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:02:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:02:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:02:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:02:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:02:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:02:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:02:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:02:36 np0005558241 podman[251869]: 2025-12-13 08:02:36.988508143 +0000 UTC m=+0.038872977 container create fefa6a8b9fb423e833a300d30e818b4aaf42c5b0e869afcd14e06190497562fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_clarke, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 03:02:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:02:37 np0005558241 systemd[1]: Started libpod-conmon-fefa6a8b9fb423e833a300d30e818b4aaf42c5b0e869afcd14e06190497562fe.scope.
Dec 13 03:02:37 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:02:37 np0005558241 podman[251869]: 2025-12-13 08:02:36.9721202 +0000 UTC m=+0.022485064 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:02:37 np0005558241 podman[251869]: 2025-12-13 08:02:37.073323379 +0000 UTC m=+0.123688233 container init fefa6a8b9fb423e833a300d30e818b4aaf42c5b0e869afcd14e06190497562fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 03:02:37 np0005558241 podman[251869]: 2025-12-13 08:02:37.082254949 +0000 UTC m=+0.132619783 container start fefa6a8b9fb423e833a300d30e818b4aaf42c5b0e869afcd14e06190497562fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_clarke, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:02:37 np0005558241 podman[251869]: 2025-12-13 08:02:37.087016386 +0000 UTC m=+0.137381270 container attach fefa6a8b9fb423e833a300d30e818b4aaf42c5b0e869afcd14e06190497562fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 03:02:37 np0005558241 charming_clarke[251885]: 167 167
Dec 13 03:02:37 np0005558241 systemd[1]: libpod-fefa6a8b9fb423e833a300d30e818b4aaf42c5b0e869afcd14e06190497562fe.scope: Deactivated successfully.
Dec 13 03:02:37 np0005558241 podman[251869]: 2025-12-13 08:02:37.090125862 +0000 UTC m=+0.140490696 container died fefa6a8b9fb423e833a300d30e818b4aaf42c5b0e869afcd14e06190497562fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_clarke, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:02:37 np0005558241 systemd[1]: var-lib-containers-storage-overlay-32a4304baa61a1dcae03c565c9e16fc56d201442207b87f878280285ff27baf1-merged.mount: Deactivated successfully.
Dec 13 03:02:37 np0005558241 podman[251869]: 2025-12-13 08:02:37.13354977 +0000 UTC m=+0.183914604 container remove fefa6a8b9fb423e833a300d30e818b4aaf42c5b0e869afcd14e06190497562fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_clarke, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 03:02:37 np0005558241 systemd[1]: libpod-conmon-fefa6a8b9fb423e833a300d30e818b4aaf42c5b0e869afcd14e06190497562fe.scope: Deactivated successfully.
Dec 13 03:02:37 np0005558241 podman[251910]: 2025-12-13 08:02:37.302365192 +0000 UTC m=+0.040910447 container create 510e050c015003f2d74a0a51b408b9febbdb543fc7791a50d5873a79656b1ade (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_diffie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 03:02:37 np0005558241 systemd[1]: Started libpod-conmon-510e050c015003f2d74a0a51b408b9febbdb543fc7791a50d5873a79656b1ade.scope.
Dec 13 03:02:37 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:02:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b9848c429102cf19b3d2f3d8b5c67a725a9e9b5a7c5b467ae50401ab3c96e7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:02:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b9848c429102cf19b3d2f3d8b5c67a725a9e9b5a7c5b467ae50401ab3c96e7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:02:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b9848c429102cf19b3d2f3d8b5c67a725a9e9b5a7c5b467ae50401ab3c96e7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:02:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b9848c429102cf19b3d2f3d8b5c67a725a9e9b5a7c5b467ae50401ab3c96e7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:02:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b9848c429102cf19b3d2f3d8b5c67a725a9e9b5a7c5b467ae50401ab3c96e7d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:02:37 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:02:37 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:02:37 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:02:37 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:02:37 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:02:37 np0005558241 podman[251910]: 2025-12-13 08:02:37.284424941 +0000 UTC m=+0.022970216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:02:37 np0005558241 podman[251910]: 2025-12-13 08:02:37.384566394 +0000 UTC m=+0.123111679 container init 510e050c015003f2d74a0a51b408b9febbdb543fc7791a50d5873a79656b1ade (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_diffie, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:02:37 np0005558241 podman[251910]: 2025-12-13 08:02:37.394699203 +0000 UTC m=+0.133244468 container start 510e050c015003f2d74a0a51b408b9febbdb543fc7791a50d5873a79656b1ade (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_diffie, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:02:37 np0005558241 podman[251910]: 2025-12-13 08:02:37.398692221 +0000 UTC m=+0.137237486 container attach 510e050c015003f2d74a0a51b408b9febbdb543fc7791a50d5873a79656b1ade (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_diffie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:02:37 np0005558241 funny_diffie[251926]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:02:37 np0005558241 funny_diffie[251926]: --> All data devices are unavailable
Dec 13 03:02:37 np0005558241 systemd[1]: libpod-510e050c015003f2d74a0a51b408b9febbdb543fc7791a50d5873a79656b1ade.scope: Deactivated successfully.
Dec 13 03:02:37 np0005558241 podman[251910]: 2025-12-13 08:02:37.913632216 +0000 UTC m=+0.652177491 container died 510e050c015003f2d74a0a51b408b9febbdb543fc7791a50d5873a79656b1ade (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 03:02:37 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8b9848c429102cf19b3d2f3d8b5c67a725a9e9b5a7c5b467ae50401ab3c96e7d-merged.mount: Deactivated successfully.
Dec 13 03:02:37 np0005558241 podman[251910]: 2025-12-13 08:02:37.959194887 +0000 UTC m=+0.697740152 container remove 510e050c015003f2d74a0a51b408b9febbdb543fc7791a50d5873a79656b1ade (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:02:37 np0005558241 systemd[1]: libpod-conmon-510e050c015003f2d74a0a51b408b9febbdb543fc7791a50d5873a79656b1ade.scope: Deactivated successfully.
Dec 13 03:02:38 np0005558241 podman[251985]: 2025-12-13 08:02:38.182776486 +0000 UTC m=+0.061888183 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:02:38 np0005558241 podman[251984]: 2025-12-13 08:02:38.189353338 +0000 UTC m=+0.068588198 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 03:02:38 np0005558241 podman[251983]: 2025-12-13 08:02:38.220081723 +0000 UTC m=+0.098928554 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Dec 13 03:02:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:38 np0005558241 podman[252083]: 2025-12-13 08:02:38.452822548 +0000 UTC m=+0.043249835 container create d516457d866c1f31501b06900800e26654acda3644c60d4ea11f9310b485e58c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_liskov, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:02:38 np0005558241 systemd[1]: Started libpod-conmon-d516457d866c1f31501b06900800e26654acda3644c60d4ea11f9310b485e58c.scope.
Dec 13 03:02:38 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:02:38 np0005558241 podman[252083]: 2025-12-13 08:02:38.435980883 +0000 UTC m=+0.026408270 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:02:38 np0005558241 podman[252083]: 2025-12-13 08:02:38.540808612 +0000 UTC m=+0.131235949 container init d516457d866c1f31501b06900800e26654acda3644c60d4ea11f9310b485e58c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_liskov, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:02:38 np0005558241 podman[252083]: 2025-12-13 08:02:38.551381932 +0000 UTC m=+0.141809229 container start d516457d866c1f31501b06900800e26654acda3644c60d4ea11f9310b485e58c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_liskov, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 03:02:38 np0005558241 podman[252083]: 2025-12-13 08:02:38.555928164 +0000 UTC m=+0.146355461 container attach d516457d866c1f31501b06900800e26654acda3644c60d4ea11f9310b485e58c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_liskov, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:02:38 np0005558241 jolly_liskov[252097]: 167 167
Dec 13 03:02:38 np0005558241 systemd[1]: libpod-d516457d866c1f31501b06900800e26654acda3644c60d4ea11f9310b485e58c.scope: Deactivated successfully.
Dec 13 03:02:38 np0005558241 podman[252083]: 2025-12-13 08:02:38.558583449 +0000 UTC m=+0.149010736 container died d516457d866c1f31501b06900800e26654acda3644c60d4ea11f9310b485e58c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 03:02:38 np0005558241 systemd[1]: var-lib-containers-storage-overlay-255a19e545977ac726f1a8189cc48085fb2aa8f9a368f93ff63cfeabb2098080-merged.mount: Deactivated successfully.
Dec 13 03:02:38 np0005558241 podman[252083]: 2025-12-13 08:02:38.608056266 +0000 UTC m=+0.198483553 container remove d516457d866c1f31501b06900800e26654acda3644c60d4ea11f9310b485e58c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_liskov, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:02:38 np0005558241 systemd[1]: libpod-conmon-d516457d866c1f31501b06900800e26654acda3644c60d4ea11f9310b485e58c.scope: Deactivated successfully.
Dec 13 03:02:38 np0005558241 podman[252119]: 2025-12-13 08:02:38.805587153 +0000 UTC m=+0.046296919 container create f805eb6433ebc53ff800344029b21cce7386ce94eccd8952a54de494a92e3c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 03:02:38 np0005558241 systemd[1]: Started libpod-conmon-f805eb6433ebc53ff800344029b21cce7386ce94eccd8952a54de494a92e3c06.scope.
Dec 13 03:02:38 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:02:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91d461901a7eb2df6d0fea0f186bc2b9b809e24d105961ddda00e8a244801936/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:02:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91d461901a7eb2df6d0fea0f186bc2b9b809e24d105961ddda00e8a244801936/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:02:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91d461901a7eb2df6d0fea0f186bc2b9b809e24d105961ddda00e8a244801936/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:02:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91d461901a7eb2df6d0fea0f186bc2b9b809e24d105961ddda00e8a244801936/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:02:38 np0005558241 podman[252119]: 2025-12-13 08:02:38.785510339 +0000 UTC m=+0.026220125 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:02:39 np0005558241 podman[252119]: 2025-12-13 08:02:39.100303952 +0000 UTC m=+0.341013748 container init f805eb6433ebc53ff800344029b21cce7386ce94eccd8952a54de494a92e3c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_banach, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:02:39 np0005558241 podman[252119]: 2025-12-13 08:02:39.108025532 +0000 UTC m=+0.348735298 container start f805eb6433ebc53ff800344029b21cce7386ce94eccd8952a54de494a92e3c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_banach, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:02:39 np0005558241 podman[252119]: 2025-12-13 08:02:39.130569556 +0000 UTC m=+0.371279322 container attach f805eb6433ebc53ff800344029b21cce7386ce94eccd8952a54de494a92e3c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]: {
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:    "0": [
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:        {
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "devices": [
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "/dev/loop3"
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            ],
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "lv_name": "ceph_lv0",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "lv_size": "21470642176",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "name": "ceph_lv0",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "tags": {
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.cluster_name": "ceph",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.crush_device_class": "",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.encrypted": "0",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.objectstore": "bluestore",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.osd_id": "0",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.type": "block",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.vdo": "0",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.with_tpm": "0"
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            },
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "type": "block",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "vg_name": "ceph_vg0"
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:        }
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:    ],
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:    "1": [
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:        {
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "devices": [
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "/dev/loop4"
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            ],
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "lv_name": "ceph_lv1",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "lv_size": "21470642176",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "name": "ceph_lv1",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "tags": {
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.cluster_name": "ceph",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.crush_device_class": "",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.encrypted": "0",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.objectstore": "bluestore",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.osd_id": "1",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.type": "block",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.vdo": "0",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.with_tpm": "0"
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            },
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "type": "block",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "vg_name": "ceph_vg1"
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:        }
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:    ],
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:    "2": [
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:        {
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "devices": [
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "/dev/loop5"
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            ],
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "lv_name": "ceph_lv2",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "lv_size": "21470642176",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "name": "ceph_lv2",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "tags": {
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.cluster_name": "ceph",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.crush_device_class": "",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.encrypted": "0",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.objectstore": "bluestore",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.osd_id": "2",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.type": "block",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.vdo": "0",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:                "ceph.with_tpm": "0"
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            },
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "type": "block",
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:            "vg_name": "ceph_vg2"
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:        }
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]:    ]
Dec 13 03:02:39 np0005558241 xenodochial_banach[252136]: }
Dec 13 03:02:39 np0005558241 systemd[1]: libpod-f805eb6433ebc53ff800344029b21cce7386ce94eccd8952a54de494a92e3c06.scope: Deactivated successfully.
Dec 13 03:02:39 np0005558241 podman[252119]: 2025-12-13 08:02:39.441144505 +0000 UTC m=+0.681854291 container died f805eb6433ebc53ff800344029b21cce7386ce94eccd8952a54de494a92e3c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:02:39 np0005558241 systemd[1]: var-lib-containers-storage-overlay-91d461901a7eb2df6d0fea0f186bc2b9b809e24d105961ddda00e8a244801936-merged.mount: Deactivated successfully.
Dec 13 03:02:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:02:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:02:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:02:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:02:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:02:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:02:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:40 np0005558241 podman[252119]: 2025-12-13 08:02:40.803296516 +0000 UTC m=+2.044006292 container remove f805eb6433ebc53ff800344029b21cce7386ce94eccd8952a54de494a92e3c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:02:40 np0005558241 systemd[1]: libpod-conmon-f805eb6433ebc53ff800344029b21cce7386ce94eccd8952a54de494a92e3c06.scope: Deactivated successfully.
Dec 13 03:02:41 np0005558241 podman[252221]: 2025-12-13 08:02:41.313217867 +0000 UTC m=+0.065361128 container create 8f22145486cff1447d3ad68bff3e5d25e496ce93043b1affdd93a8ab3d8819f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 03:02:41 np0005558241 systemd[1]: Started libpod-conmon-8f22145486cff1447d3ad68bff3e5d25e496ce93043b1affdd93a8ab3d8819f7.scope.
Dec 13 03:02:41 np0005558241 podman[252221]: 2025-12-13 08:02:41.272702841 +0000 UTC m=+0.024846162 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:02:41 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:02:41 np0005558241 podman[252221]: 2025-12-13 08:02:41.557298941 +0000 UTC m=+0.309442202 container init 8f22145486cff1447d3ad68bff3e5d25e496ce93043b1affdd93a8ab3d8819f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_khayyam, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 03:02:41 np0005558241 podman[252221]: 2025-12-13 08:02:41.564715473 +0000 UTC m=+0.316858714 container start 8f22145486cff1447d3ad68bff3e5d25e496ce93043b1affdd93a8ab3d8819f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_khayyam, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True)
Dec 13 03:02:41 np0005558241 priceless_khayyam[252237]: 167 167
Dec 13 03:02:41 np0005558241 systemd[1]: libpod-8f22145486cff1447d3ad68bff3e5d25e496ce93043b1affdd93a8ab3d8819f7.scope: Deactivated successfully.
Dec 13 03:02:41 np0005558241 podman[252221]: 2025-12-13 08:02:41.74350587 +0000 UTC m=+0.495649151 container attach 8f22145486cff1447d3ad68bff3e5d25e496ce93043b1affdd93a8ab3d8819f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:02:41 np0005558241 podman[252221]: 2025-12-13 08:02:41.745290594 +0000 UTC m=+0.497433915 container died 8f22145486cff1447d3ad68bff3e5d25e496ce93043b1affdd93a8ab3d8819f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_khayyam, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:02:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9525a419acea474e11c18457de4cea50ee21f0065340027685b68efa2d4736a8-merged.mount: Deactivated successfully.
Dec 13 03:02:41 np0005558241 podman[252221]: 2025-12-13 08:02:41.827814964 +0000 UTC m=+0.579958205 container remove 8f22145486cff1447d3ad68bff3e5d25e496ce93043b1affdd93a8ab3d8819f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_khayyam, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 03:02:41 np0005558241 systemd[1]: libpod-conmon-8f22145486cff1447d3ad68bff3e5d25e496ce93043b1affdd93a8ab3d8819f7.scope: Deactivated successfully.
Dec 13 03:02:42 np0005558241 podman[252261]: 2025-12-13 08:02:42.003659509 +0000 UTC m=+0.049780916 container create 5d1f6aebc7a8f8a98961b4b2521e9ec598ba7355225a8e100f084cb40d830971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_nightingale, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:02:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:02:42 np0005558241 systemd[1]: Started libpod-conmon-5d1f6aebc7a8f8a98961b4b2521e9ec598ba7355225a8e100f084cb40d830971.scope.
Dec 13 03:02:42 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:02:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fba0b6033ca6220b809ff3fc4860d08c9034dac9ddd70ee7e9bf4b7efb60ee8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:02:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fba0b6033ca6220b809ff3fc4860d08c9034dac9ddd70ee7e9bf4b7efb60ee8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:02:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fba0b6033ca6220b809ff3fc4860d08c9034dac9ddd70ee7e9bf4b7efb60ee8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:02:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fba0b6033ca6220b809ff3fc4860d08c9034dac9ddd70ee7e9bf4b7efb60ee8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:02:42 np0005558241 podman[252261]: 2025-12-13 08:02:41.983458792 +0000 UTC m=+0.029580249 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:02:42 np0005558241 podman[252261]: 2025-12-13 08:02:42.085776269 +0000 UTC m=+0.131897696 container init 5d1f6aebc7a8f8a98961b4b2521e9ec598ba7355225a8e100f084cb40d830971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:02:42 np0005558241 podman[252261]: 2025-12-13 08:02:42.09231932 +0000 UTC m=+0.138440727 container start 5d1f6aebc7a8f8a98961b4b2521e9ec598ba7355225a8e100f084cb40d830971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_nightingale, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 03:02:42 np0005558241 podman[252261]: 2025-12-13 08:02:42.095357014 +0000 UTC m=+0.141478421 container attach 5d1f6aebc7a8f8a98961b4b2521e9ec598ba7355225a8e100f084cb40d830971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_nightingale, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:02:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:42 np0005558241 lvm[252355]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:02:42 np0005558241 lvm[252355]: VG ceph_vg0 finished
Dec 13 03:02:42 np0005558241 lvm[252356]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:02:42 np0005558241 lvm[252356]: VG ceph_vg1 finished
Dec 13 03:02:42 np0005558241 lvm[252358]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:02:42 np0005558241 lvm[252358]: VG ceph_vg2 finished
Dec 13 03:02:42 np0005558241 xenodochial_nightingale[252277]: {}
Dec 13 03:02:42 np0005558241 systemd[1]: libpod-5d1f6aebc7a8f8a98961b4b2521e9ec598ba7355225a8e100f084cb40d830971.scope: Deactivated successfully.
Dec 13 03:02:42 np0005558241 systemd[1]: libpod-5d1f6aebc7a8f8a98961b4b2521e9ec598ba7355225a8e100f084cb40d830971.scope: Consumed 1.392s CPU time.
Dec 13 03:02:42 np0005558241 podman[252261]: 2025-12-13 08:02:42.965151216 +0000 UTC m=+1.011272623 container died 5d1f6aebc7a8f8a98961b4b2521e9ec598ba7355225a8e100f084cb40d830971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_nightingale, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 03:02:43 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9fba0b6033ca6220b809ff3fc4860d08c9034dac9ddd70ee7e9bf4b7efb60ee8-merged.mount: Deactivated successfully.
Dec 13 03:02:43 np0005558241 podman[252261]: 2025-12-13 08:02:43.01248549 +0000 UTC m=+1.058606897 container remove 5d1f6aebc7a8f8a98961b4b2521e9ec598ba7355225a8e100f084cb40d830971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_nightingale, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:02:43 np0005558241 systemd[1]: libpod-conmon-5d1f6aebc7a8f8a98961b4b2521e9ec598ba7355225a8e100f084cb40d830971.scope: Deactivated successfully.
Dec 13 03:02:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:02:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:02:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:02:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:02:43 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:02:43 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:02:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:02:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:02:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:02:55.378 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:02:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:02:55.381 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:02:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:02:55.381 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:02:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:02:57.024358) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612977024409, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1153, "num_deletes": 256, "total_data_size": 1857355, "memory_usage": 1883024, "flush_reason": "Manual Compaction"}
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612977039190, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1796243, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18917, "largest_seqno": 20069, "table_properties": {"data_size": 1790695, "index_size": 2943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11281, "raw_average_key_size": 18, "raw_value_size": 1779557, "raw_average_value_size": 2960, "num_data_blocks": 135, "num_entries": 601, "num_filter_entries": 601, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765612865, "oldest_key_time": 1765612865, "file_creation_time": 1765612977, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 14894 microseconds, and 6067 cpu microseconds.
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:02:57.039249) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1796243 bytes OK
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:02:57.039274) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:02:57.042515) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:02:57.042561) EVENT_LOG_v1 {"time_micros": 1765612977042552, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:02:57.042593) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1852004, prev total WAL file size 1852004, number of live WAL files 2.
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:02:57.043502) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1754KB)], [44(6639KB)]
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612977043589, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 8594611, "oldest_snapshot_seqno": -1}
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4439 keys, 8460783 bytes, temperature: kUnknown
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612977107957, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 8460783, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8429482, "index_size": 19095, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 109989, "raw_average_key_size": 24, "raw_value_size": 8347583, "raw_average_value_size": 1880, "num_data_blocks": 803, "num_entries": 4439, "num_filter_entries": 4439, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765612977, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:02:57.108226) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8460783 bytes
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:02:57.109711) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.4 rd, 131.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 6.5 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(9.5) write-amplify(4.7) OK, records in: 4963, records dropped: 524 output_compression: NoCompression
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:02:57.109731) EVENT_LOG_v1 {"time_micros": 1765612977109722, "job": 22, "event": "compaction_finished", "compaction_time_micros": 64438, "compaction_time_cpu_micros": 22641, "output_level": 6, "num_output_files": 1, "total_output_size": 8460783, "num_input_records": 4963, "num_output_records": 4439, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612977110149, "job": 22, "event": "table_file_deletion", "file_number": 46}
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765612977111389, "job": 22, "event": "table_file_deletion", "file_number": 44}
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:02:57.043364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:02:57.111479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:02:57.111485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:02:57.111487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:02:57.111489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:02:57 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:02:57.111490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:02:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:03:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:03:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:08 np0005558241 podman[252400]: 2025-12-13 08:03:08.981104423 +0000 UTC m=+0.063311369 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:03:08 np0005558241 podman[252399]: 2025-12-13 08:03:08.984223239 +0000 UTC m=+0.065293417 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 13 03:03:09 np0005558241 podman[252398]: 2025-12-13 08:03:09.021048125 +0000 UTC m=+0.102042141 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 13 03:03:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:03:09
Dec 13 03:03:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:03:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:03:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.log', 'vms', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', '.rgw.root']
Dec 13 03:03:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:03:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:03:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:03:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:03:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:03:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:03:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:03:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:03:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:03:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:03:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/569829053' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:03:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:03:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/569829053' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:03:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:03:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:03:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:03:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:03:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:22 np0005558241 nova_compute[248510]: 2025-12-13 08:03:22.543 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:03:22 np0005558241 nova_compute[248510]: 2025-12-13 08:03:22.544 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:03:22 np0005558241 nova_compute[248510]: 2025-12-13 08:03:22.565 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:03:22 np0005558241 nova_compute[248510]: 2025-12-13 08:03:22.566 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:03:22 np0005558241 nova_compute[248510]: 2025-12-13 08:03:22.566 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:03:22 np0005558241 nova_compute[248510]: 2025-12-13 08:03:22.579 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:03:22 np0005558241 nova_compute[248510]: 2025-12-13 08:03:22.579 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:03:22 np0005558241 nova_compute[248510]: 2025-12-13 08:03:22.580 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:03:22 np0005558241 nova_compute[248510]: 2025-12-13 08:03:22.580 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:03:22 np0005558241 nova_compute[248510]: 2025-12-13 08:03:22.580 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:03:22 np0005558241 nova_compute[248510]: 2025-12-13 08:03:22.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:03:22 np0005558241 nova_compute[248510]: 2025-12-13 08:03:22.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:03:22 np0005558241 nova_compute[248510]: 2025-12-13 08:03:22.797 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:03:22 np0005558241 nova_compute[248510]: 2025-12-13 08:03:22.797 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:03:22 np0005558241 nova_compute[248510]: 2025-12-13 08:03:22.797 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:03:22 np0005558241 nova_compute[248510]: 2025-12-13 08:03:22.798 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:03:22 np0005558241 nova_compute[248510]: 2025-12-13 08:03:22.798 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:03:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:03:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1362863013' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:03:23 np0005558241 nova_compute[248510]: 2025-12-13 08:03:23.333 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:03:23 np0005558241 nova_compute[248510]: 2025-12-13 08:03:23.501 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:03:23 np0005558241 nova_compute[248510]: 2025-12-13 08:03:23.504 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5142MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:03:23 np0005558241 nova_compute[248510]: 2025-12-13 08:03:23.504 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:03:23 np0005558241 nova_compute[248510]: 2025-12-13 08:03:23.504 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:03:23 np0005558241 nova_compute[248510]: 2025-12-13 08:03:23.588 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:03:23 np0005558241 nova_compute[248510]: 2025-12-13 08:03:23.589 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:03:23 np0005558241 nova_compute[248510]: 2025-12-13 08:03:23.606 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:03:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:03:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2852615945' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:03:24 np0005558241 nova_compute[248510]: 2025-12-13 08:03:24.102 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:03:24 np0005558241 nova_compute[248510]: 2025-12-13 08:03:24.108 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:03:24 np0005558241 nova_compute[248510]: 2025-12-13 08:03:24.124 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:03:24 np0005558241 nova_compute[248510]: 2025-12-13 08:03:24.126 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:03:24 np0005558241 nova_compute[248510]: 2025-12-13 08:03:24.126 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:03:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:25 np0005558241 nova_compute[248510]: 2025-12-13 08:03:25.127 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:03:25 np0005558241 nova_compute[248510]: 2025-12-13 08:03:25.127 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:03:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:03:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:03:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:03:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:39 np0005558241 podman[252503]: 2025-12-13 08:03:39.965625048 +0000 UTC m=+0.055411154 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:03:39 np0005558241 podman[252504]: 2025-12-13 08:03:39.980808952 +0000 UTC m=+0.065359029 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:03:39 np0005558241 podman[252502]: 2025-12-13 08:03:39.99498631 +0000 UTC m=+0.088727523 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Dec 13 03:03:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:03:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:03:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:03:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:03:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:03:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:03:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:03:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:03:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:03:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:03:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:03:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:03:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:03:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:03:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:03:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:03:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:03:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:03:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:03:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:03:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:03:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:03:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:03:44 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:03:44 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:03:44 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:03:44 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:03:44 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:03:44 np0005558241 podman[252781]: 2025-12-13 08:03:44.874057249 +0000 UTC m=+0.021265574 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:03:45 np0005558241 podman[252781]: 2025-12-13 08:03:45.09893384 +0000 UTC m=+0.246142175 container create 28a324114053343020ed2b5beec07de11401f8351104e1aa2622d089aa120f82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 03:03:45 np0005558241 systemd[1]: Started libpod-conmon-28a324114053343020ed2b5beec07de11401f8351104e1aa2622d089aa120f82.scope.
Dec 13 03:03:45 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:03:45 np0005558241 podman[252781]: 2025-12-13 08:03:45.41971004 +0000 UTC m=+0.566918365 container init 28a324114053343020ed2b5beec07de11401f8351104e1aa2622d089aa120f82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_feistel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 03:03:45 np0005558241 podman[252781]: 2025-12-13 08:03:45.429456289 +0000 UTC m=+0.576664584 container start 28a324114053343020ed2b5beec07de11401f8351104e1aa2622d089aa120f82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 03:03:45 np0005558241 relaxed_feistel[252797]: 167 167
Dec 13 03:03:45 np0005558241 systemd[1]: libpod-28a324114053343020ed2b5beec07de11401f8351104e1aa2622d089aa120f82.scope: Deactivated successfully.
Dec 13 03:03:46 np0005558241 podman[252781]: 2025-12-13 08:03:46.221965941 +0000 UTC m=+1.369174276 container attach 28a324114053343020ed2b5beec07de11401f8351104e1aa2622d089aa120f82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_feistel, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:03:46 np0005558241 podman[252781]: 2025-12-13 08:03:46.223389196 +0000 UTC m=+1.370597491 container died 28a324114053343020ed2b5beec07de11401f8351104e1aa2622d089aa120f82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_feistel, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:03:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b810cc6459a193453bdb0dfab2fdf72b6420717f6c7bd19f0d58144e7df3b69b-merged.mount: Deactivated successfully.
Dec 13 03:03:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:03:47 np0005558241 podman[252781]: 2025-12-13 08:03:47.163669011 +0000 UTC m=+2.310877306 container remove 28a324114053343020ed2b5beec07de11401f8351104e1aa2622d089aa120f82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_feistel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:03:47 np0005558241 systemd[1]: libpod-conmon-28a324114053343020ed2b5beec07de11401f8351104e1aa2622d089aa120f82.scope: Deactivated successfully.
Dec 13 03:03:47 np0005558241 podman[252821]: 2025-12-13 08:03:47.314317227 +0000 UTC m=+0.029166229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:03:47 np0005558241 podman[252821]: 2025-12-13 08:03:47.565352691 +0000 UTC m=+0.280201673 container create 7946ea7dfb4c02788d5d07d2a18bdbd8e8d153eb16fcee34dc92a6cd8c4e64de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 03:03:47 np0005558241 systemd[1]: Started libpod-conmon-7946ea7dfb4c02788d5d07d2a18bdbd8e8d153eb16fcee34dc92a6cd8c4e64de.scope.
Dec 13 03:03:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:03:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c1a43f8e17bc61c5f9f1285ee001363fe6021ea182df2a32ae0c26cd949547/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:03:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c1a43f8e17bc61c5f9f1285ee001363fe6021ea182df2a32ae0c26cd949547/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:03:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c1a43f8e17bc61c5f9f1285ee001363fe6021ea182df2a32ae0c26cd949547/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:03:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c1a43f8e17bc61c5f9f1285ee001363fe6021ea182df2a32ae0c26cd949547/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:03:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c1a43f8e17bc61c5f9f1285ee001363fe6021ea182df2a32ae0c26cd949547/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:03:47 np0005558241 podman[252821]: 2025-12-13 08:03:47.980721937 +0000 UTC m=+0.695570949 container init 7946ea7dfb4c02788d5d07d2a18bdbd8e8d153eb16fcee34dc92a6cd8c4e64de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Dec 13 03:03:47 np0005558241 podman[252821]: 2025-12-13 08:03:47.98979534 +0000 UTC m=+0.704644332 container start 7946ea7dfb4c02788d5d07d2a18bdbd8e8d153eb16fcee34dc92a6cd8c4e64de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mcclintock, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:03:48 np0005558241 podman[252821]: 2025-12-13 08:03:48.112860467 +0000 UTC m=+0.827709479 container attach 7946ea7dfb4c02788d5d07d2a18bdbd8e8d153eb16fcee34dc92a6cd8c4e64de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mcclintock, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Dec 13 03:03:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:48 np0005558241 fervent_mcclintock[252838]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:03:48 np0005558241 fervent_mcclintock[252838]: --> All data devices are unavailable
Dec 13 03:03:48 np0005558241 systemd[1]: libpod-7946ea7dfb4c02788d5d07d2a18bdbd8e8d153eb16fcee34dc92a6cd8c4e64de.scope: Deactivated successfully.
Dec 13 03:03:48 np0005558241 podman[252858]: 2025-12-13 08:03:48.619120478 +0000 UTC m=+0.031953987 container died 7946ea7dfb4c02788d5d07d2a18bdbd8e8d153eb16fcee34dc92a6cd8c4e64de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mcclintock, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:03:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a2c1a43f8e17bc61c5f9f1285ee001363fe6021ea182df2a32ae0c26cd949547-merged.mount: Deactivated successfully.
Dec 13 03:03:48 np0005558241 podman[252858]: 2025-12-13 08:03:48.787700874 +0000 UTC m=+0.200534363 container remove 7946ea7dfb4c02788d5d07d2a18bdbd8e8d153eb16fcee34dc92a6cd8c4e64de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mcclintock, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:03:48 np0005558241 systemd[1]: libpod-conmon-7946ea7dfb4c02788d5d07d2a18bdbd8e8d153eb16fcee34dc92a6cd8c4e64de.scope: Deactivated successfully.
Dec 13 03:03:49 np0005558241 podman[252935]: 2025-12-13 08:03:49.2416688 +0000 UTC m=+0.022108455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:03:49 np0005558241 podman[252935]: 2025-12-13 08:03:49.65639736 +0000 UTC m=+0.436837035 container create 4b80e99b8e64080de7d41e936c14f6e00d9c756e5349a5a1838d5f82fdf3dacc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_pare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 03:03:50 np0005558241 systemd[1]: Started libpod-conmon-4b80e99b8e64080de7d41e936c14f6e00d9c756e5349a5a1838d5f82fdf3dacc.scope.
Dec 13 03:03:50 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:03:50 np0005558241 podman[252935]: 2025-12-13 08:03:50.223475437 +0000 UTC m=+1.003915102 container init 4b80e99b8e64080de7d41e936c14f6e00d9c756e5349a5a1838d5f82fdf3dacc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 03:03:50 np0005558241 podman[252935]: 2025-12-13 08:03:50.233103684 +0000 UTC m=+1.013543359 container start 4b80e99b8e64080de7d41e936c14f6e00d9c756e5349a5a1838d5f82fdf3dacc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 03:03:50 np0005558241 cranky_pare[252951]: 167 167
Dec 13 03:03:50 np0005558241 podman[252935]: 2025-12-13 08:03:50.238565658 +0000 UTC m=+1.019005333 container attach 4b80e99b8e64080de7d41e936c14f6e00d9c756e5349a5a1838d5f82fdf3dacc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_pare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 03:03:50 np0005558241 systemd[1]: libpod-4b80e99b8e64080de7d41e936c14f6e00d9c756e5349a5a1838d5f82fdf3dacc.scope: Deactivated successfully.
Dec 13 03:03:50 np0005558241 podman[252935]: 2025-12-13 08:03:50.239502282 +0000 UTC m=+1.019941947 container died 4b80e99b8e64080de7d41e936c14f6e00d9c756e5349a5a1838d5f82fdf3dacc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_pare, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Dec 13 03:03:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4910bde7f9d7e8ab74f9ee69c436274dce53f762d3d4a89c1ac67576056781ba-merged.mount: Deactivated successfully.
Dec 13 03:03:50 np0005558241 podman[252935]: 2025-12-13 08:03:50.286886167 +0000 UTC m=+1.067325812 container remove 4b80e99b8e64080de7d41e936c14f6e00d9c756e5349a5a1838d5f82fdf3dacc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_pare, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:03:50 np0005558241 systemd[1]: libpod-conmon-4b80e99b8e64080de7d41e936c14f6e00d9c756e5349a5a1838d5f82fdf3dacc.scope: Deactivated successfully.
Dec 13 03:03:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:50 np0005558241 podman[252975]: 2025-12-13 08:03:50.459182293 +0000 UTC m=+0.040908407 container create 49ce75e3f4b336ef0cffc282bbc6a205dda305b371e6ec84be12972779b50015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 03:03:50 np0005558241 systemd[1]: Started libpod-conmon-49ce75e3f4b336ef0cffc282bbc6a205dda305b371e6ec84be12972779b50015.scope.
Dec 13 03:03:50 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:03:50 np0005558241 podman[252975]: 2025-12-13 08:03:50.441714654 +0000 UTC m=+0.023440798 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:03:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f05e817d9a120d7b41c39aee919a695f1d8ac858c929873ee9b80577441661/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:03:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f05e817d9a120d7b41c39aee919a695f1d8ac858c929873ee9b80577441661/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:03:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f05e817d9a120d7b41c39aee919a695f1d8ac858c929873ee9b80577441661/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:03:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f05e817d9a120d7b41c39aee919a695f1d8ac858c929873ee9b80577441661/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:03:50 np0005558241 podman[252975]: 2025-12-13 08:03:50.560549377 +0000 UTC m=+0.142275511 container init 49ce75e3f4b336ef0cffc282bbc6a205dda305b371e6ec84be12972779b50015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:03:50 np0005558241 podman[252975]: 2025-12-13 08:03:50.56758301 +0000 UTC m=+0.149309124 container start 49ce75e3f4b336ef0cffc282bbc6a205dda305b371e6ec84be12972779b50015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_babbage, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 03:03:50 np0005558241 podman[252975]: 2025-12-13 08:03:50.572402978 +0000 UTC m=+0.154129092 container attach 49ce75e3f4b336ef0cffc282bbc6a205dda305b371e6ec84be12972779b50015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_babbage, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]: {
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:    "0": [
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:        {
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "devices": [
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "/dev/loop3"
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            ],
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "lv_name": "ceph_lv0",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "lv_size": "21470642176",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "name": "ceph_lv0",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "tags": {
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.cluster_name": "ceph",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.crush_device_class": "",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.encrypted": "0",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.objectstore": "bluestore",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.osd_id": "0",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.type": "block",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.vdo": "0",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.with_tpm": "0"
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            },
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "type": "block",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "vg_name": "ceph_vg0"
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:        }
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:    ],
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:    "1": [
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:        {
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "devices": [
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "/dev/loop4"
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            ],
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "lv_name": "ceph_lv1",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "lv_size": "21470642176",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "name": "ceph_lv1",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "tags": {
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.cluster_name": "ceph",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.crush_device_class": "",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.encrypted": "0",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.objectstore": "bluestore",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.osd_id": "1",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.type": "block",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.vdo": "0",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.with_tpm": "0"
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            },
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "type": "block",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "vg_name": "ceph_vg1"
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:        }
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:    ],
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:    "2": [
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:        {
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "devices": [
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "/dev/loop5"
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            ],
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "lv_name": "ceph_lv2",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "lv_size": "21470642176",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "name": "ceph_lv2",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "tags": {
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.cluster_name": "ceph",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.crush_device_class": "",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.encrypted": "0",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.objectstore": "bluestore",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.osd_id": "2",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.type": "block",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.vdo": "0",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:                "ceph.with_tpm": "0"
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            },
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "type": "block",
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:            "vg_name": "ceph_vg2"
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:        }
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]:    ]
Dec 13 03:03:50 np0005558241 infallible_babbage[252991]: }
Dec 13 03:03:50 np0005558241 systemd[1]: libpod-49ce75e3f4b336ef0cffc282bbc6a205dda305b371e6ec84be12972779b50015.scope: Deactivated successfully.
Dec 13 03:03:50 np0005558241 podman[252975]: 2025-12-13 08:03:50.869875135 +0000 UTC m=+0.451601249 container died 49ce75e3f4b336ef0cffc282bbc6a205dda305b371e6ec84be12972779b50015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:03:51 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f0f05e817d9a120d7b41c39aee919a695f1d8ac858c929873ee9b80577441661-merged.mount: Deactivated successfully.
Dec 13 03:03:51 np0005558241 podman[252975]: 2025-12-13 08:03:51.547619933 +0000 UTC m=+1.129346047 container remove 49ce75e3f4b336ef0cffc282bbc6a205dda305b371e6ec84be12972779b50015 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:03:51 np0005558241 systemd[1]: libpod-conmon-49ce75e3f4b336ef0cffc282bbc6a205dda305b371e6ec84be12972779b50015.scope: Deactivated successfully.
Dec 13 03:03:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:03:52 np0005558241 podman[253075]: 2025-12-13 08:03:52.029542576 +0000 UTC m=+0.026479922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:03:52 np0005558241 podman[253075]: 2025-12-13 08:03:52.327207917 +0000 UTC m=+0.324145233 container create 5a414a2362ab4e8704a54813de4b69152682d3deeeb8362202b9a114623b1729 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:03:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:52 np0005558241 systemd[1]: Started libpod-conmon-5a414a2362ab4e8704a54813de4b69152682d3deeeb8362202b9a114623b1729.scope.
Dec 13 03:03:52 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:03:52 np0005558241 podman[253075]: 2025-12-13 08:03:52.903763698 +0000 UTC m=+0.900701014 container init 5a414a2362ab4e8704a54813de4b69152682d3deeeb8362202b9a114623b1729 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_torvalds, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:03:52 np0005558241 podman[253075]: 2025-12-13 08:03:52.918738986 +0000 UTC m=+0.915676342 container start 5a414a2362ab4e8704a54813de4b69152682d3deeeb8362202b9a114623b1729 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 03:03:52 np0005558241 systemd[1]: libpod-5a414a2362ab4e8704a54813de4b69152682d3deeeb8362202b9a114623b1729.scope: Deactivated successfully.
Dec 13 03:03:52 np0005558241 cranky_torvalds[253092]: 167 167
Dec 13 03:03:52 np0005558241 conmon[253092]: conmon 5a414a2362ab4e8704a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a414a2362ab4e8704a54813de4b69152682d3deeeb8362202b9a114623b1729.scope/container/memory.events
Dec 13 03:03:53 np0005558241 podman[253075]: 2025-12-13 08:03:53.310984493 +0000 UTC m=+1.307921919 container attach 5a414a2362ab4e8704a54813de4b69152682d3deeeb8362202b9a114623b1729 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:03:53 np0005558241 podman[253075]: 2025-12-13 08:03:53.312666565 +0000 UTC m=+1.309603951 container died 5a414a2362ab4e8704a54813de4b69152682d3deeeb8362202b9a114623b1729 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_torvalds, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 03:03:53 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5cf7c160c0fce2535077e4b21153c0ac773e88d7542a809de30eb6bb6f0bbafc-merged.mount: Deactivated successfully.
Dec 13 03:03:54 np0005558241 podman[253075]: 2025-12-13 08:03:54.065703415 +0000 UTC m=+2.062640731 container remove 5a414a2362ab4e8704a54813de4b69152682d3deeeb8362202b9a114623b1729 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec 13 03:03:54 np0005558241 systemd[1]: libpod-conmon-5a414a2362ab4e8704a54813de4b69152682d3deeeb8362202b9a114623b1729.scope: Deactivated successfully.
Dec 13 03:03:54 np0005558241 podman[253117]: 2025-12-13 08:03:54.242923974 +0000 UTC m=+0.044944317 container create ce47b077ec2b437f10b6066a63c5bd43723b5dd5ac4c993f566737e5fbf2ef2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_bardeen, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:03:54 np0005558241 systemd[1]: Started libpod-conmon-ce47b077ec2b437f10b6066a63c5bd43723b5dd5ac4c993f566737e5fbf2ef2e.scope.
Dec 13 03:03:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:03:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c86087353ec0d300f8e0eff13b843463d3ca230629fe41972550c2ccc71128b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:03:54 np0005558241 podman[253117]: 2025-12-13 08:03:54.227163916 +0000 UTC m=+0.029184249 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:03:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c86087353ec0d300f8e0eff13b843463d3ca230629fe41972550c2ccc71128b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:03:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c86087353ec0d300f8e0eff13b843463d3ca230629fe41972550c2ccc71128b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:03:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c86087353ec0d300f8e0eff13b843463d3ca230629fe41972550c2ccc71128b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:03:54 np0005558241 podman[253117]: 2025-12-13 08:03:54.334820964 +0000 UTC m=+0.136841337 container init ce47b077ec2b437f10b6066a63c5bd43723b5dd5ac4c993f566737e5fbf2ef2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:03:54 np0005558241 podman[253117]: 2025-12-13 08:03:54.345022355 +0000 UTC m=+0.147042718 container start ce47b077ec2b437f10b6066a63c5bd43723b5dd5ac4c993f566737e5fbf2ef2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_bardeen, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:03:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:54 np0005558241 podman[253117]: 2025-12-13 08:03:54.348988562 +0000 UTC m=+0.151008905 container attach ce47b077ec2b437f10b6066a63c5bd43723b5dd5ac4c993f566737e5fbf2ef2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_bardeen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:03:55 np0005558241 lvm[253211]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:03:55 np0005558241 lvm[253211]: VG ceph_vg0 finished
Dec 13 03:03:55 np0005558241 lvm[253214]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:03:55 np0005558241 lvm[253214]: VG ceph_vg1 finished
Dec 13 03:03:55 np0005558241 lvm[253216]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:03:55 np0005558241 lvm[253216]: VG ceph_vg2 finished
Dec 13 03:03:55 np0005558241 tender_bardeen[253134]: {}
Dec 13 03:03:55 np0005558241 systemd[1]: libpod-ce47b077ec2b437f10b6066a63c5bd43723b5dd5ac4c993f566737e5fbf2ef2e.scope: Deactivated successfully.
Dec 13 03:03:55 np0005558241 podman[253117]: 2025-12-13 08:03:55.196370184 +0000 UTC m=+0.998390557 container died ce47b077ec2b437f10b6066a63c5bd43723b5dd5ac4c993f566737e5fbf2ef2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:03:55 np0005558241 systemd[1]: libpod-ce47b077ec2b437f10b6066a63c5bd43723b5dd5ac4c993f566737e5fbf2ef2e.scope: Consumed 1.391s CPU time.
Dec 13 03:03:55 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3c86087353ec0d300f8e0eff13b843463d3ca230629fe41972550c2ccc71128b-merged.mount: Deactivated successfully.
Dec 13 03:03:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:03:55.380 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:03:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:03:55.382 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:03:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:03:55.382 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:03:55 np0005558241 podman[253117]: 2025-12-13 08:03:55.506892921 +0000 UTC m=+1.308913294 container remove ce47b077ec2b437f10b6066a63c5bd43723b5dd5ac4c993f566737e5fbf2ef2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:03:55 np0005558241 systemd[1]: libpod-conmon-ce47b077ec2b437f10b6066a63c5bd43723b5dd5ac4c993f566737e5fbf2ef2e.scope: Deactivated successfully.
Dec 13 03:03:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:03:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:03:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:03:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:03:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:03:56 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:03:56 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:03:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:03:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:04:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 03:04:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:04:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 03:04:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 03:04:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 03:04:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:04:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 03:04:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:04:09
Dec 13 03:04:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:04:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:04:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'backups', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.meta', 'volumes']
Dec 13 03:04:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:04:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:04:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:04:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:04:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:04:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:04:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:04:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 03:04:10 np0005558241 podman[253259]: 2025-12-13 08:04:10.980997353 +0000 UTC m=+0.060447947 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 03:04:10 np0005558241 podman[253258]: 2025-12-13 08:04:10.990862246 +0000 UTC m=+0.079912296 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:04:11 np0005558241 podman[253257]: 2025-12-13 08:04:11.015293267 +0000 UTC m=+0.104236455 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:04:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:04:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Dec 13 03:04:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Dec 13 03:04:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:04:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2773861626' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:04:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:04:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2773861626' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:04:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:04:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:04:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 03:04:19 np0005558241 nova_compute[248510]: 2025-12-13 08:04:19.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:04:19 np0005558241 nova_compute[248510]: 2025-12-13 08:04:19.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 03:04:20 np0005558241 nova_compute[248510]: 2025-12-13 08:04:20.552 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 13 03:04:20 np0005558241 nova_compute[248510]: 2025-12-13 08:04:20.553 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:04:20 np0005558241 nova_compute[248510]: 2025-12-13 08:04:20.553 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:04:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:04:21 np0005558241 nova_compute[248510]: 2025-12-13 08:04:21.584 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:04:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:04:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 03:04:23 np0005558241 nova_compute[248510]: 2025-12-13 08:04:23.412 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:04:23 np0005558241 nova_compute[248510]: 2025-12-13 08:04:23.412 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:04:23 np0005558241 nova_compute[248510]: 2025-12-13 08:04:23.413 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:04:23 np0005558241 nova_compute[248510]: 2025-12-13 08:04:23.413 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:04:23 np0005558241 nova_compute[248510]: 2025-12-13 08:04:23.428 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:04:23 np0005558241 nova_compute[248510]: 2025-12-13 08:04:23.429 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:04:23 np0005558241 nova_compute[248510]: 2025-12-13 08:04:23.429 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:04:23 np0005558241 nova_compute[248510]: 2025-12-13 08:04:23.429 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:04:23 np0005558241 nova_compute[248510]: 2025-12-13 08:04:23.429 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:04:23 np0005558241 nova_compute[248510]: 2025-12-13 08:04:23.430 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:04:23 np0005558241 nova_compute[248510]: 2025-12-13 08:04:23.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:04:23 np0005558241 nova_compute[248510]: 2025-12-13 08:04:23.800 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:04:23 np0005558241 nova_compute[248510]: 2025-12-13 08:04:23.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:04:23 np0005558241 nova_compute[248510]: 2025-12-13 08:04:23.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:04:23 np0005558241 nova_compute[248510]: 2025-12-13 08:04:23.801 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:04:23 np0005558241 nova_compute[248510]: 2025-12-13 08:04:23.802 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:04:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 03:04:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:04:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4123085025' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:04:24 np0005558241 nova_compute[248510]: 2025-12-13 08:04:24.401 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:04:24 np0005558241 nova_compute[248510]: 2025-12-13 08:04:24.553 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:04:24 np0005558241 nova_compute[248510]: 2025-12-13 08:04:24.554 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5151MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:04:24 np0005558241 nova_compute[248510]: 2025-12-13 08:04:24.554 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:04:24 np0005558241 nova_compute[248510]: 2025-12-13 08:04:24.554 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:04:24 np0005558241 nova_compute[248510]: 2025-12-13 08:04:24.632 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:04:24 np0005558241 nova_compute[248510]: 2025-12-13 08:04:24.633 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:04:24 np0005558241 nova_compute[248510]: 2025-12-13 08:04:24.657 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:04:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:04:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1612252230' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:04:25 np0005558241 nova_compute[248510]: 2025-12-13 08:04:25.196 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:04:25 np0005558241 nova_compute[248510]: 2025-12-13 08:04:25.203 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:04:25 np0005558241 nova_compute[248510]: 2025-12-13 08:04:25.224 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:04:25 np0005558241 nova_compute[248510]: 2025-12-13 08:04:25.226 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:04:25 np0005558241 nova_compute[248510]: 2025-12-13 08:04:25.227 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:04:26 np0005558241 nova_compute[248510]: 2025-12-13 08:04:26.227 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:04:26 np0005558241 nova_compute[248510]: 2025-12-13 08:04:26.228 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:04:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 03:04:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:04:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 03:04:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1123: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 10 op/s
Dec 13 03:04:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:04:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:04:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:04:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:04:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:04:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:04:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:04:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:04:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:04:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:04:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:04:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:04:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1128: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:04:41 np0005558241 podman[253368]: 2025-12-13 08:04:41.967140928 +0000 UTC m=+0.055123517 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 03:04:41 np0005558241 podman[253367]: 2025-12-13 08:04:41.968667066 +0000 UTC m=+0.058868749 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:04:42 np0005558241 podman[253366]: 2025-12-13 08:04:42.043865235 +0000 UTC m=+0.136130379 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:04:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:04:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:04:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:04:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:04:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:04:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:04:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1133: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:04:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:04:52.473186) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613092473226, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1154, "num_deletes": 251, "total_data_size": 1821483, "memory_usage": 1853536, "flush_reason": "Manual Compaction"}
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613092485675, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1781314, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20070, "largest_seqno": 21223, "table_properties": {"data_size": 1775763, "index_size": 2946, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11657, "raw_average_key_size": 19, "raw_value_size": 1764680, "raw_average_value_size": 2970, "num_data_blocks": 135, "num_entries": 594, "num_filter_entries": 594, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765612977, "oldest_key_time": 1765612977, "file_creation_time": 1765613092, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 12538 microseconds, and 4747 cpu microseconds.
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:04:52.485720) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1781314 bytes OK
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:04:52.485740) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:04:52.487572) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:04:52.487594) EVENT_LOG_v1 {"time_micros": 1765613092487588, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:04:52.487613) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1816202, prev total WAL file size 1816202, number of live WAL files 2.
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:04:52.488285) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1739KB)], [47(8262KB)]
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613092488340, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 10242097, "oldest_snapshot_seqno": -1}
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4519 keys, 8356995 bytes, temperature: kUnknown
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613092546687, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 8356995, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8325311, "index_size": 19286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 112219, "raw_average_key_size": 24, "raw_value_size": 8242106, "raw_average_value_size": 1823, "num_data_blocks": 808, "num_entries": 4519, "num_filter_entries": 4519, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765613092, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:04:52.546936) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 8356995 bytes
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:04:52.548619) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.2 rd, 143.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 8.1 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(10.4) write-amplify(4.7) OK, records in: 5033, records dropped: 514 output_compression: NoCompression
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:04:52.548638) EVENT_LOG_v1 {"time_micros": 1765613092548629, "job": 24, "event": "compaction_finished", "compaction_time_micros": 58449, "compaction_time_cpu_micros": 18440, "output_level": 6, "num_output_files": 1, "total_output_size": 8356995, "num_input_records": 5033, "num_output_records": 4519, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613092549178, "job": 24, "event": "table_file_deletion", "file_number": 49}
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613092550769, "job": 24, "event": "table_file_deletion", "file_number": 47}
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:04:52.488172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:04:52.550923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:04:52.550932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:04:52.550934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:04:52.550935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:04:52 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:04:52.550937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:04:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:04:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:04:55.381 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:04:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:04:55.382 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:04:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:04:55.382 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:04:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:04:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 03:04:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 03:04:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:04:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:04:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:04:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:04:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:04:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:04:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:04:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:04:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:04:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:04:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:04:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:04:56 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 03:04:56 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:04:56 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:04:56 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:04:56 np0005558241 podman[253569]: 2025-12-13 08:04:56.902109587 +0000 UTC m=+0.057303330 container create 5b1a42ede36c30308e9bc780a6b80e17725678db7367c0e0d7d8f53f8934c26f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_turing, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:04:56 np0005558241 systemd[1]: Started libpod-conmon-5b1a42ede36c30308e9bc780a6b80e17725678db7367c0e0d7d8f53f8934c26f.scope.
Dec 13 03:04:56 np0005558241 podman[253569]: 2025-12-13 08:04:56.879344967 +0000 UTC m=+0.034538720 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:04:56 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:04:57 np0005558241 podman[253569]: 2025-12-13 08:04:57.00713828 +0000 UTC m=+0.162332023 container init 5b1a42ede36c30308e9bc780a6b80e17725678db7367c0e0d7d8f53f8934c26f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_turing, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:04:57 np0005558241 podman[253569]: 2025-12-13 08:04:57.015140107 +0000 UTC m=+0.170333830 container start 5b1a42ede36c30308e9bc780a6b80e17725678db7367c0e0d7d8f53f8934c26f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:04:57 np0005558241 podman[253569]: 2025-12-13 08:04:57.018750656 +0000 UTC m=+0.173944409 container attach 5b1a42ede36c30308e9bc780a6b80e17725678db7367c0e0d7d8f53f8934c26f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_turing, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 03:04:57 np0005558241 systemd[1]: libpod-5b1a42ede36c30308e9bc780a6b80e17725678db7367c0e0d7d8f53f8934c26f.scope: Deactivated successfully.
Dec 13 03:04:57 np0005558241 pensive_turing[253585]: 167 167
Dec 13 03:04:57 np0005558241 conmon[253585]: conmon 5b1a42ede36c30308e9b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b1a42ede36c30308e9bc780a6b80e17725678db7367c0e0d7d8f53f8934c26f.scope/container/memory.events
Dec 13 03:04:57 np0005558241 podman[253569]: 2025-12-13 08:04:57.025941233 +0000 UTC m=+0.181134996 container died 5b1a42ede36c30308e9bc780a6b80e17725678db7367c0e0d7d8f53f8934c26f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_turing, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 03:04:57 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1fb618311d4f027fc8d0d8788bec102c8f8e60d20b2ae5807009d62ea43015ac-merged.mount: Deactivated successfully.
Dec 13 03:04:57 np0005558241 podman[253569]: 2025-12-13 08:04:57.076382684 +0000 UTC m=+0.231576407 container remove 5b1a42ede36c30308e9bc780a6b80e17725678db7367c0e0d7d8f53f8934c26f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 03:04:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:04:57 np0005558241 systemd[1]: libpod-conmon-5b1a42ede36c30308e9bc780a6b80e17725678db7367c0e0d7d8f53f8934c26f.scope: Deactivated successfully.
Dec 13 03:04:57 np0005558241 podman[253609]: 2025-12-13 08:04:57.257048937 +0000 UTC m=+0.047553590 container create b437b6c747b87433866855d7322cd8318670876ad0edfde46db20e6ffba67ed0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_chebyshev, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 03:04:57 np0005558241 systemd[1]: Started libpod-conmon-b437b6c747b87433866855d7322cd8318670876ad0edfde46db20e6ffba67ed0.scope.
Dec 13 03:04:57 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:04:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731c30edfc526ac3abb903625cdc6dd7b323d3d3210fcb0a2323dec5db29dfd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:04:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731c30edfc526ac3abb903625cdc6dd7b323d3d3210fcb0a2323dec5db29dfd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:04:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731c30edfc526ac3abb903625cdc6dd7b323d3d3210fcb0a2323dec5db29dfd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:04:57 np0005558241 podman[253609]: 2025-12-13 08:04:57.236237685 +0000 UTC m=+0.026742368 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:04:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731c30edfc526ac3abb903625cdc6dd7b323d3d3210fcb0a2323dec5db29dfd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:04:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731c30edfc526ac3abb903625cdc6dd7b323d3d3210fcb0a2323dec5db29dfd4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:04:57 np0005558241 podman[253609]: 2025-12-13 08:04:57.361871175 +0000 UTC m=+0.152375868 container init b437b6c747b87433866855d7322cd8318670876ad0edfde46db20e6ffba67ed0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 03:04:57 np0005558241 podman[253609]: 2025-12-13 08:04:57.369215196 +0000 UTC m=+0.159719859 container start b437b6c747b87433866855d7322cd8318670876ad0edfde46db20e6ffba67ed0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_chebyshev, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 03:04:57 np0005558241 podman[253609]: 2025-12-13 08:04:57.446540458 +0000 UTC m=+0.237045211 container attach b437b6c747b87433866855d7322cd8318670876ad0edfde46db20e6ffba67ed0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:04:57 np0005558241 angry_chebyshev[253626]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:04:57 np0005558241 angry_chebyshev[253626]: --> All data devices are unavailable
Dec 13 03:04:57 np0005558241 systemd[1]: libpod-b437b6c747b87433866855d7322cd8318670876ad0edfde46db20e6ffba67ed0.scope: Deactivated successfully.
Dec 13 03:04:58 np0005558241 podman[253647]: 2025-12-13 08:04:58.006018858 +0000 UTC m=+0.039729518 container died b437b6c747b87433866855d7322cd8318670876ad0edfde46db20e6ffba67ed0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_chebyshev, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:04:58 np0005558241 systemd[1]: var-lib-containers-storage-overlay-731c30edfc526ac3abb903625cdc6dd7b323d3d3210fcb0a2323dec5db29dfd4-merged.mount: Deactivated successfully.
Dec 13 03:04:58 np0005558241 podman[253647]: 2025-12-13 08:04:58.324645015 +0000 UTC m=+0.358355585 container remove b437b6c747b87433866855d7322cd8318670876ad0edfde46db20e6ffba67ed0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_chebyshev, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:04:58 np0005558241 systemd[1]: libpod-conmon-b437b6c747b87433866855d7322cd8318670876ad0edfde46db20e6ffba67ed0.scope: Deactivated successfully.
Dec 13 03:04:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:04:58 np0005558241 podman[253724]: 2025-12-13 08:04:58.812758879 +0000 UTC m=+0.044613508 container create 9c96ed97592918262f8356410b28e3412ec568f2fbc16cfddf7756fca27cc6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 03:04:58 np0005558241 systemd[1]: Started libpod-conmon-9c96ed97592918262f8356410b28e3412ec568f2fbc16cfddf7756fca27cc6c4.scope.
Dec 13 03:04:58 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:04:58 np0005558241 podman[253724]: 2025-12-13 08:04:58.884808761 +0000 UTC m=+0.116663430 container init 9c96ed97592918262f8356410b28e3412ec568f2fbc16cfddf7756fca27cc6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sinoussi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:04:58 np0005558241 podman[253724]: 2025-12-13 08:04:58.791191358 +0000 UTC m=+0.023046027 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:04:58 np0005558241 podman[253724]: 2025-12-13 08:04:58.892310735 +0000 UTC m=+0.124165364 container start 9c96ed97592918262f8356410b28e3412ec568f2fbc16cfddf7756fca27cc6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sinoussi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:04:58 np0005558241 podman[253724]: 2025-12-13 08:04:58.897818081 +0000 UTC m=+0.129672750 container attach 9c96ed97592918262f8356410b28e3412ec568f2fbc16cfddf7756fca27cc6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:04:58 np0005558241 pensive_sinoussi[253740]: 167 167
Dec 13 03:04:58 np0005558241 systemd[1]: libpod-9c96ed97592918262f8356410b28e3412ec568f2fbc16cfddf7756fca27cc6c4.scope: Deactivated successfully.
Dec 13 03:04:58 np0005558241 podman[253724]: 2025-12-13 08:04:58.89902076 +0000 UTC m=+0.130875379 container died 9c96ed97592918262f8356410b28e3412ec568f2fbc16cfddf7756fca27cc6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 03:04:58 np0005558241 systemd[1]: var-lib-containers-storage-overlay-42bf34f467bcd4e6db0953e496d842f4a0f9f3c8ba22dedd0b041f7d56456347-merged.mount: Deactivated successfully.
Dec 13 03:04:58 np0005558241 podman[253724]: 2025-12-13 08:04:58.959376855 +0000 UTC m=+0.191231474 container remove 9c96ed97592918262f8356410b28e3412ec568f2fbc16cfddf7756fca27cc6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sinoussi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:04:58 np0005558241 systemd[1]: libpod-conmon-9c96ed97592918262f8356410b28e3412ec568f2fbc16cfddf7756fca27cc6c4.scope: Deactivated successfully.
Dec 13 03:04:59 np0005558241 podman[253765]: 2025-12-13 08:04:59.116890489 +0000 UTC m=+0.038333504 container create 07b364e6242486a7b608aa72283b8e1db33ad13b3f484485be5dbfd96cc5a1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dijkstra, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:04:59 np0005558241 systemd[1]: Started libpod-conmon-07b364e6242486a7b608aa72283b8e1db33ad13b3f484485be5dbfd96cc5a1cf.scope.
Dec 13 03:04:59 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:04:59 np0005558241 podman[253765]: 2025-12-13 08:04:59.102003223 +0000 UTC m=+0.023446258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:04:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/114f0b2acc4d26c82fb15c4440017da09cd84b0bdec8b37b2ca23bc36941e856/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:04:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/114f0b2acc4d26c82fb15c4440017da09cd84b0bdec8b37b2ca23bc36941e856/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:04:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/114f0b2acc4d26c82fb15c4440017da09cd84b0bdec8b37b2ca23bc36941e856/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:04:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/114f0b2acc4d26c82fb15c4440017da09cd84b0bdec8b37b2ca23bc36941e856/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:04:59 np0005558241 podman[253765]: 2025-12-13 08:04:59.217309859 +0000 UTC m=+0.138752894 container init 07b364e6242486a7b608aa72283b8e1db33ad13b3f484485be5dbfd96cc5a1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dijkstra, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 03:04:59 np0005558241 podman[253765]: 2025-12-13 08:04:59.224872035 +0000 UTC m=+0.146315050 container start 07b364e6242486a7b608aa72283b8e1db33ad13b3f484485be5dbfd96cc5a1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 03:04:59 np0005558241 podman[253765]: 2025-12-13 08:04:59.228307659 +0000 UTC m=+0.149750694 container attach 07b364e6242486a7b608aa72283b8e1db33ad13b3f484485be5dbfd96cc5a1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dijkstra, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]: {
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:    "0": [
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:        {
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "devices": [
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "/dev/loop3"
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            ],
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "lv_name": "ceph_lv0",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "lv_size": "21470642176",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "name": "ceph_lv0",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "tags": {
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.cluster_name": "ceph",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.crush_device_class": "",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.encrypted": "0",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.objectstore": "bluestore",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.osd_id": "0",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.type": "block",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.vdo": "0",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.with_tpm": "0"
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            },
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "type": "block",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "vg_name": "ceph_vg0"
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:        }
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:    ],
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:    "1": [
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:        {
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "devices": [
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "/dev/loop4"
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            ],
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "lv_name": "ceph_lv1",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "lv_size": "21470642176",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "name": "ceph_lv1",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "tags": {
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.cluster_name": "ceph",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.crush_device_class": "",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.encrypted": "0",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.objectstore": "bluestore",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.osd_id": "1",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.type": "block",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.vdo": "0",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.with_tpm": "0"
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            },
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "type": "block",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "vg_name": "ceph_vg1"
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:        }
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:    ],
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:    "2": [
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:        {
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "devices": [
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "/dev/loop5"
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            ],
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "lv_name": "ceph_lv2",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "lv_size": "21470642176",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "name": "ceph_lv2",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "tags": {
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.cluster_name": "ceph",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.crush_device_class": "",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.encrypted": "0",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.objectstore": "bluestore",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.osd_id": "2",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.type": "block",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.vdo": "0",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:                "ceph.with_tpm": "0"
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            },
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "type": "block",
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:            "vg_name": "ceph_vg2"
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:        }
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]:    ]
Dec 13 03:04:59 np0005558241 dazzling_dijkstra[253782]: }
Dec 13 03:04:59 np0005558241 systemd[1]: libpod-07b364e6242486a7b608aa72283b8e1db33ad13b3f484485be5dbfd96cc5a1cf.scope: Deactivated successfully.
Dec 13 03:04:59 np0005558241 podman[253765]: 2025-12-13 08:04:59.54387018 +0000 UTC m=+0.465313195 container died 07b364e6242486a7b608aa72283b8e1db33ad13b3f484485be5dbfd96cc5a1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dijkstra, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 03:04:59 np0005558241 systemd[1]: var-lib-containers-storage-overlay-114f0b2acc4d26c82fb15c4440017da09cd84b0bdec8b37b2ca23bc36941e856-merged.mount: Deactivated successfully.
Dec 13 03:04:59 np0005558241 podman[253765]: 2025-12-13 08:04:59.596511845 +0000 UTC m=+0.517954860 container remove 07b364e6242486a7b608aa72283b8e1db33ad13b3f484485be5dbfd96cc5a1cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:04:59 np0005558241 systemd[1]: libpod-conmon-07b364e6242486a7b608aa72283b8e1db33ad13b3f484485be5dbfd96cc5a1cf.scope: Deactivated successfully.
Dec 13 03:05:00 np0005558241 podman[253867]: 2025-12-13 08:05:00.13301162 +0000 UTC m=+0.048534524 container create ed48e7a106a300ce85d7e4855c9ab4bf7b56af0608109509ea26142d3ab340a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_hugle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:05:00 np0005558241 systemd[1]: Started libpod-conmon-ed48e7a106a300ce85d7e4855c9ab4bf7b56af0608109509ea26142d3ab340a4.scope.
Dec 13 03:05:00 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:05:00 np0005558241 podman[253867]: 2025-12-13 08:05:00.108986339 +0000 UTC m=+0.024509253 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:05:00 np0005558241 podman[253867]: 2025-12-13 08:05:00.210686021 +0000 UTC m=+0.126208925 container init ed48e7a106a300ce85d7e4855c9ab4bf7b56af0608109509ea26142d3ab340a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_hugle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 03:05:00 np0005558241 podman[253867]: 2025-12-13 08:05:00.218212296 +0000 UTC m=+0.133735200 container start ed48e7a106a300ce85d7e4855c9ab4bf7b56af0608109509ea26142d3ab340a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 03:05:00 np0005558241 sweet_hugle[253883]: 167 167
Dec 13 03:05:00 np0005558241 podman[253867]: 2025-12-13 08:05:00.222344517 +0000 UTC m=+0.137867471 container attach ed48e7a106a300ce85d7e4855c9ab4bf7b56af0608109509ea26142d3ab340a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_hugle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 03:05:00 np0005558241 systemd[1]: libpod-ed48e7a106a300ce85d7e4855c9ab4bf7b56af0608109509ea26142d3ab340a4.scope: Deactivated successfully.
Dec 13 03:05:00 np0005558241 podman[253867]: 2025-12-13 08:05:00.223361002 +0000 UTC m=+0.138883906 container died ed48e7a106a300ce85d7e4855c9ab4bf7b56af0608109509ea26142d3ab340a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_hugle, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec 13 03:05:00 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e679c8cc9cc31e73cde301c4332e4cfa04e6dec225933f6ef932e007a48ef5c5-merged.mount: Deactivated successfully.
Dec 13 03:05:00 np0005558241 podman[253867]: 2025-12-13 08:05:00.275876574 +0000 UTC m=+0.191399478 container remove ed48e7a106a300ce85d7e4855c9ab4bf7b56af0608109509ea26142d3ab340a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_hugle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 03:05:00 np0005558241 systemd[1]: libpod-conmon-ed48e7a106a300ce85d7e4855c9ab4bf7b56af0608109509ea26142d3ab340a4.scope: Deactivated successfully.
Dec 13 03:05:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:00 np0005558241 podman[253907]: 2025-12-13 08:05:00.430063486 +0000 UTC m=+0.024943064 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:05:00 np0005558241 podman[253907]: 2025-12-13 08:05:00.540013541 +0000 UTC m=+0.134893089 container create 0db77419c8326ba7aa7eabaa8a6453b5a37890ed0759db9ce5210a621ef4f4ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:05:00 np0005558241 systemd[1]: Started libpod-conmon-0db77419c8326ba7aa7eabaa8a6453b5a37890ed0759db9ce5210a621ef4f4ff.scope.
Dec 13 03:05:00 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:05:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e619b2bf58b72fe92b25c5c869595064decfe9a8fa3304d10200f54cc98e7c0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:05:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e619b2bf58b72fe92b25c5c869595064decfe9a8fa3304d10200f54cc98e7c0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:05:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e619b2bf58b72fe92b25c5c869595064decfe9a8fa3304d10200f54cc98e7c0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:05:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e619b2bf58b72fe92b25c5c869595064decfe9a8fa3304d10200f54cc98e7c0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:05:00 np0005558241 podman[253907]: 2025-12-13 08:05:00.863674231 +0000 UTC m=+0.458553819 container init 0db77419c8326ba7aa7eabaa8a6453b5a37890ed0759db9ce5210a621ef4f4ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 03:05:00 np0005558241 podman[253907]: 2025-12-13 08:05:00.87217578 +0000 UTC m=+0.467055368 container start 0db77419c8326ba7aa7eabaa8a6453b5a37890ed0759db9ce5210a621ef4f4ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 03:05:00 np0005558241 podman[253907]: 2025-12-13 08:05:00.877417449 +0000 UTC m=+0.472297007 container attach 0db77419c8326ba7aa7eabaa8a6453b5a37890ed0759db9ce5210a621ef4f4ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:05:01 np0005558241 lvm[254007]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:05:01 np0005558241 lvm[254008]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:05:01 np0005558241 lvm[254004]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:05:01 np0005558241 lvm[254007]: VG ceph_vg1 finished
Dec 13 03:05:01 np0005558241 lvm[254004]: VG ceph_vg0 finished
Dec 13 03:05:01 np0005558241 lvm[254008]: VG ceph_vg2 finished
Dec 13 03:05:01 np0005558241 elegant_morse[253927]: {}
Dec 13 03:05:01 np0005558241 systemd[1]: libpod-0db77419c8326ba7aa7eabaa8a6453b5a37890ed0759db9ce5210a621ef4f4ff.scope: Deactivated successfully.
Dec 13 03:05:01 np0005558241 systemd[1]: libpod-0db77419c8326ba7aa7eabaa8a6453b5a37890ed0759db9ce5210a621ef4f4ff.scope: Consumed 1.416s CPU time.
Dec 13 03:05:01 np0005558241 podman[253907]: 2025-12-13 08:05:01.718733831 +0000 UTC m=+1.313613389 container died 0db77419c8326ba7aa7eabaa8a6453b5a37890ed0759db9ce5210a621ef4f4ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:05:01 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e619b2bf58b72fe92b25c5c869595064decfe9a8fa3304d10200f54cc98e7c0d-merged.mount: Deactivated successfully.
Dec 13 03:05:01 np0005558241 podman[253907]: 2025-12-13 08:05:01.88781542 +0000 UTC m=+1.482694988 container remove 0db77419c8326ba7aa7eabaa8a6453b5a37890ed0759db9ce5210a621ef4f4ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:05:01 np0005558241 systemd[1]: libpod-conmon-0db77419c8326ba7aa7eabaa8a6453b5a37890ed0759db9ce5210a621ef4f4ff.scope: Deactivated successfully.
Dec 13 03:05:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:05:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:05:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:05:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:05:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:05:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:05:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:05:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1140: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:05:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:05:09
Dec 13 03:05:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:05:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:05:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['vms', '.mgr', 'backups', 'images', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'default.rgw.control']
Dec 13 03:05:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:05:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:05:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:05:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:05:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:05:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:05:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:05:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:05:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:12 np0005558241 podman[254054]: 2025-12-13 08:05:12.975600359 +0000 UTC m=+0.057604458 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:05:12 np0005558241 podman[254053]: 2025-12-13 08:05:12.993895569 +0000 UTC m=+0.073096459 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd)
Dec 13 03:05:13 np0005558241 podman[254052]: 2025-12-13 08:05:13.008747864 +0000 UTC m=+0.095904970 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Dec 13 03:05:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:05:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/219011784' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:05:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:05:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/219011784' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:05:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:05:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1147: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:05:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:05:20 np0005558241 nova_compute[248510]: 2025-12-13 08:05:20.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:05:20 np0005558241 nova_compute[248510]: 2025-12-13 08:05:20.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:05:20 np0005558241 nova_compute[248510]: 2025-12-13 08:05:20.774 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:05:20 np0005558241 nova_compute[248510]: 2025-12-13 08:05:20.819 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:05:21 np0005558241 nova_compute[248510]: 2025-12-13 08:05:21.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:05:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:05:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:22 np0005558241 nova_compute[248510]: 2025-12-13 08:05:22.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:05:22 np0005558241 nova_compute[248510]: 2025-12-13 08:05:22.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:05:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:24 np0005558241 nova_compute[248510]: 2025-12-13 08:05:24.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:05:24 np0005558241 nova_compute[248510]: 2025-12-13 08:05:24.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:05:24 np0005558241 nova_compute[248510]: 2025-12-13 08:05:24.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:05:25 np0005558241 nova_compute[248510]: 2025-12-13 08:05:25.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:05:26 np0005558241 nova_compute[248510]: 2025-12-13 08:05:26.217 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:05:26 np0005558241 nova_compute[248510]: 2025-12-13 08:05:26.218 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:05:26 np0005558241 nova_compute[248510]: 2025-12-13 08:05:26.218 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:05:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:26 np0005558241 nova_compute[248510]: 2025-12-13 08:05:26.538 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:05:26 np0005558241 nova_compute[248510]: 2025-12-13 08:05:26.539 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:05:26 np0005558241 nova_compute[248510]: 2025-12-13 08:05:26.539 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:05:26 np0005558241 nova_compute[248510]: 2025-12-13 08:05:26.539 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:05:26 np0005558241 nova_compute[248510]: 2025-12-13 08:05:26.540 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:05:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:05:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1537652622' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:05:27 np0005558241 nova_compute[248510]: 2025-12-13 08:05:27.081 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:05:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:05:27 np0005558241 nova_compute[248510]: 2025-12-13 08:05:27.235 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:05:27 np0005558241 nova_compute[248510]: 2025-12-13 08:05:27.236 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5122MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:05:27 np0005558241 nova_compute[248510]: 2025-12-13 08:05:27.236 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:05:27 np0005558241 nova_compute[248510]: 2025-12-13 08:05:27.237 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:05:27 np0005558241 nova_compute[248510]: 2025-12-13 08:05:27.605 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:05:27 np0005558241 nova_compute[248510]: 2025-12-13 08:05:27.606 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:05:27 np0005558241 nova_compute[248510]: 2025-12-13 08:05:27.730 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 13 03:05:27 np0005558241 nova_compute[248510]: 2025-12-13 08:05:27.876 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 13 03:05:27 np0005558241 nova_compute[248510]: 2025-12-13 08:05:27.877 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 03:05:27 np0005558241 nova_compute[248510]: 2025-12-13 08:05:27.905 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 13 03:05:27 np0005558241 nova_compute[248510]: 2025-12-13 08:05:27.931 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 13 03:05:27 np0005558241 nova_compute[248510]: 2025-12-13 08:05:27.953 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:05:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:05:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2713460651' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:05:28 np0005558241 nova_compute[248510]: 2025-12-13 08:05:28.506 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:05:28 np0005558241 nova_compute[248510]: 2025-12-13 08:05:28.512 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:05:28 np0005558241 nova_compute[248510]: 2025-12-13 08:05:28.589 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:05:28 np0005558241 nova_compute[248510]: 2025-12-13 08:05:28.590 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:05:28 np0005558241 nova_compute[248510]: 2025-12-13 08:05:28.590 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:05:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:05:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:05:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:05:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:05:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:05:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:05:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:05:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:05:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:05:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1159: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:43 np0005558241 podman[254161]: 2025-12-13 08:05:43.974927054 +0000 UTC m=+0.063550874 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:05:43 np0005558241 podman[254162]: 2025-12-13 08:05:43.977900507 +0000 UTC m=+0.058169442 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:05:43 np0005558241 podman[254160]: 2025-12-13 08:05:43.993048569 +0000 UTC m=+0.083704629 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:05:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:05:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:05:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1164: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:05:55.383 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:05:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:05:55.384 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:05:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:05:55.384 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:05:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:05:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:05:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:06:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:02 np0005558241 podman[254316]: 2025-12-13 08:06:02.6576354 +0000 UTC m=+0.056915671 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 03:06:02 np0005558241 podman[254316]: 2025-12-13 08:06:02.755719753 +0000 UTC m=+0.155000004 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:06:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:06:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:06:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:06:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:06:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1170: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:06:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:06:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:06:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:06:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:06:04 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:06:04 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:06:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:06:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:06:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:06:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:06:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:06:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:06:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:06:05 np0005558241 podman[254645]: 2025-12-13 08:06:05.417501724 +0000 UTC m=+0.042191469 container create 763b0f7c7e729a72889ff2394728c58d2f75ec583f7830d66a360b0e3970e597 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_benz, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 03:06:05 np0005558241 systemd[1]: Started libpod-conmon-763b0f7c7e729a72889ff2394728c58d2f75ec583f7830d66a360b0e3970e597.scope.
Dec 13 03:06:05 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:06:05 np0005558241 podman[254645]: 2025-12-13 08:06:05.399000429 +0000 UTC m=+0.023690194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:06:05 np0005558241 podman[254645]: 2025-12-13 08:06:05.507929869 +0000 UTC m=+0.132619624 container init 763b0f7c7e729a72889ff2394728c58d2f75ec583f7830d66a360b0e3970e597 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_benz, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:06:05 np0005558241 podman[254645]: 2025-12-13 08:06:05.517110585 +0000 UTC m=+0.141800320 container start 763b0f7c7e729a72889ff2394728c58d2f75ec583f7830d66a360b0e3970e597 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_benz, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:06:05 np0005558241 podman[254645]: 2025-12-13 08:06:05.520887268 +0000 UTC m=+0.145577003 container attach 763b0f7c7e729a72889ff2394728c58d2f75ec583f7830d66a360b0e3970e597 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_benz, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 03:06:05 np0005558241 heuristic_benz[254661]: 167 167
Dec 13 03:06:05 np0005558241 systemd[1]: libpod-763b0f7c7e729a72889ff2394728c58d2f75ec583f7830d66a360b0e3970e597.scope: Deactivated successfully.
Dec 13 03:06:05 np0005558241 podman[254645]: 2025-12-13 08:06:05.525203534 +0000 UTC m=+0.149893259 container died 763b0f7c7e729a72889ff2394728c58d2f75ec583f7830d66a360b0e3970e597 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 03:06:05 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f9c136880e6e7b67980b2c479d3bf0e33e5814d3efc617ee5dbec5be37f1428d-merged.mount: Deactivated successfully.
Dec 13 03:06:05 np0005558241 podman[254645]: 2025-12-13 08:06:05.567839803 +0000 UTC m=+0.192529538 container remove 763b0f7c7e729a72889ff2394728c58d2f75ec583f7830d66a360b0e3970e597 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_benz, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:06:05 np0005558241 systemd[1]: libpod-conmon-763b0f7c7e729a72889ff2394728c58d2f75ec583f7830d66a360b0e3970e597.scope: Deactivated successfully.
Dec 13 03:06:05 np0005558241 podman[254685]: 2025-12-13 08:06:05.744190812 +0000 UTC m=+0.046433014 container create 9af58326d11bdac3b397eef4aeda74267961b463028551a822e0e679bdae97fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 03:06:05 np0005558241 systemd[1]: Started libpod-conmon-9af58326d11bdac3b397eef4aeda74267961b463028551a822e0e679bdae97fe.scope.
Dec 13 03:06:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:06:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:06:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:06:05 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:06:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c844f6bdfd4d85060d64521ed39ee2a8a44b2bace626c0cc3222d2140904eba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:06:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c844f6bdfd4d85060d64521ed39ee2a8a44b2bace626c0cc3222d2140904eba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:06:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c844f6bdfd4d85060d64521ed39ee2a8a44b2bace626c0cc3222d2140904eba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:06:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c844f6bdfd4d85060d64521ed39ee2a8a44b2bace626c0cc3222d2140904eba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:06:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c844f6bdfd4d85060d64521ed39ee2a8a44b2bace626c0cc3222d2140904eba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:06:05 np0005558241 podman[254685]: 2025-12-13 08:06:05.723221556 +0000 UTC m=+0.025463788 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:06:05 np0005558241 podman[254685]: 2025-12-13 08:06:05.836238716 +0000 UTC m=+0.138480948 container init 9af58326d11bdac3b397eef4aeda74267961b463028551a822e0e679bdae97fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 03:06:05 np0005558241 podman[254685]: 2025-12-13 08:06:05.845452152 +0000 UTC m=+0.147694354 container start 9af58326d11bdac3b397eef4aeda74267961b463028551a822e0e679bdae97fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_germain, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 03:06:05 np0005558241 podman[254685]: 2025-12-13 08:06:05.850313032 +0000 UTC m=+0.152555234 container attach 9af58326d11bdac3b397eef4aeda74267961b463028551a822e0e679bdae97fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:06:06 np0005558241 infallible_germain[254701]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:06:06 np0005558241 infallible_germain[254701]: --> All data devices are unavailable
Dec 13 03:06:06 np0005558241 systemd[1]: libpod-9af58326d11bdac3b397eef4aeda74267961b463028551a822e0e679bdae97fe.scope: Deactivated successfully.
Dec 13 03:06:06 np0005558241 podman[254685]: 2025-12-13 08:06:06.353011668 +0000 UTC m=+0.655253870 container died 9af58326d11bdac3b397eef4aeda74267961b463028551a822e0e679bdae97fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:06:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:06 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8c844f6bdfd4d85060d64521ed39ee2a8a44b2bace626c0cc3222d2140904eba-merged.mount: Deactivated successfully.
Dec 13 03:06:06 np0005558241 podman[254685]: 2025-12-13 08:06:06.730593926 +0000 UTC m=+1.032836148 container remove 9af58326d11bdac3b397eef4aeda74267961b463028551a822e0e679bdae97fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_germain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:06:06 np0005558241 systemd[1]: libpod-conmon-9af58326d11bdac3b397eef4aeda74267961b463028551a822e0e679bdae97fe.scope: Deactivated successfully.
Dec 13 03:06:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:06:07 np0005558241 podman[254794]: 2025-12-13 08:06:07.185042965 +0000 UTC m=+0.040667981 container create 9c73c65eddf44674b61b4e47174d824cfe5a70b819ce07f258d333a124a4d3cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_goodall, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:06:07 np0005558241 systemd[1]: Started libpod-conmon-9c73c65eddf44674b61b4e47174d824cfe5a70b819ce07f258d333a124a4d3cb.scope.
Dec 13 03:06:07 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:06:07 np0005558241 podman[254794]: 2025-12-13 08:06:07.168286113 +0000 UTC m=+0.023911139 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:06:07 np0005558241 podman[254794]: 2025-12-13 08:06:07.271124963 +0000 UTC m=+0.126749999 container init 9c73c65eddf44674b61b4e47174d824cfe5a70b819ce07f258d333a124a4d3cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 03:06:07 np0005558241 podman[254794]: 2025-12-13 08:06:07.279466738 +0000 UTC m=+0.135091754 container start 9c73c65eddf44674b61b4e47174d824cfe5a70b819ce07f258d333a124a4d3cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_goodall, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:06:07 np0005558241 podman[254794]: 2025-12-13 08:06:07.282978415 +0000 UTC m=+0.138603431 container attach 9c73c65eddf44674b61b4e47174d824cfe5a70b819ce07f258d333a124a4d3cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 03:06:07 np0005558241 loving_goodall[254810]: 167 167
Dec 13 03:06:07 np0005558241 systemd[1]: libpod-9c73c65eddf44674b61b4e47174d824cfe5a70b819ce07f258d333a124a4d3cb.scope: Deactivated successfully.
Dec 13 03:06:07 np0005558241 podman[254794]: 2025-12-13 08:06:07.287518186 +0000 UTC m=+0.143143202 container died 9c73c65eddf44674b61b4e47174d824cfe5a70b819ce07f258d333a124a4d3cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 03:06:07 np0005558241 systemd[1]: var-lib-containers-storage-overlay-370d615a220c9c1f6c8123dec74336efa53f203f2a0beb4c2b143e83a4ff319a-merged.mount: Deactivated successfully.
Dec 13 03:06:07 np0005558241 podman[254794]: 2025-12-13 08:06:07.331822816 +0000 UTC m=+0.187447832 container remove 9c73c65eddf44674b61b4e47174d824cfe5a70b819ce07f258d333a124a4d3cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 03:06:07 np0005558241 systemd[1]: libpod-conmon-9c73c65eddf44674b61b4e47174d824cfe5a70b819ce07f258d333a124a4d3cb.scope: Deactivated successfully.
Dec 13 03:06:07 np0005558241 podman[254835]: 2025-12-13 08:06:07.522631481 +0000 UTC m=+0.048143886 container create 4c8fd437e42461fe6ef0b3785eea497c7289c3ca0aacfddd6ccbe5adeb681d37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_cannon, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:06:07 np0005558241 systemd[1]: Started libpod-conmon-4c8fd437e42461fe6ef0b3785eea497c7289c3ca0aacfddd6ccbe5adeb681d37.scope.
Dec 13 03:06:07 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:06:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6ffb8acc19acfdb3e7aec1c6444b0d6014b772e4f3674496b9b88de792930e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:06:07 np0005558241 podman[254835]: 2025-12-13 08:06:07.498728633 +0000 UTC m=+0.024241018 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:06:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6ffb8acc19acfdb3e7aec1c6444b0d6014b772e4f3674496b9b88de792930e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:06:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6ffb8acc19acfdb3e7aec1c6444b0d6014b772e4f3674496b9b88de792930e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:06:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db6ffb8acc19acfdb3e7aec1c6444b0d6014b772e4f3674496b9b88de792930e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:06:07 np0005558241 podman[254835]: 2025-12-13 08:06:07.607361075 +0000 UTC m=+0.132873460 container init 4c8fd437e42461fe6ef0b3785eea497c7289c3ca0aacfddd6ccbe5adeb681d37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:06:07 np0005558241 podman[254835]: 2025-12-13 08:06:07.617362011 +0000 UTC m=+0.142874396 container start 4c8fd437e42461fe6ef0b3785eea497c7289c3ca0aacfddd6ccbe5adeb681d37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_cannon, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:06:07 np0005558241 podman[254835]: 2025-12-13 08:06:07.621526383 +0000 UTC m=+0.147038768 container attach 4c8fd437e42461fe6ef0b3785eea497c7289c3ca0aacfddd6ccbe5adeb681d37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_cannon, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:06:07 np0005558241 elated_cannon[254851]: {
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:    "0": [
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:        {
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "devices": [
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "/dev/loop3"
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            ],
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "lv_name": "ceph_lv0",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "lv_size": "21470642176",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "name": "ceph_lv0",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "tags": {
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.cluster_name": "ceph",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.crush_device_class": "",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.encrypted": "0",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.objectstore": "bluestore",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.osd_id": "0",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.type": "block",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.vdo": "0",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.with_tpm": "0"
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            },
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "type": "block",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "vg_name": "ceph_vg0"
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:        }
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:    ],
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:    "1": [
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:        {
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "devices": [
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "/dev/loop4"
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            ],
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "lv_name": "ceph_lv1",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "lv_size": "21470642176",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "name": "ceph_lv1",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "tags": {
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.cluster_name": "ceph",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.crush_device_class": "",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.encrypted": "0",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.objectstore": "bluestore",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.osd_id": "1",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.type": "block",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.vdo": "0",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.with_tpm": "0"
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            },
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "type": "block",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "vg_name": "ceph_vg1"
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:        }
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:    ],
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:    "2": [
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:        {
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "devices": [
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "/dev/loop5"
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            ],
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "lv_name": "ceph_lv2",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "lv_size": "21470642176",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "name": "ceph_lv2",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "tags": {
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.cluster_name": "ceph",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.crush_device_class": "",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.encrypted": "0",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.objectstore": "bluestore",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.osd_id": "2",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.type": "block",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.vdo": "0",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:                "ceph.with_tpm": "0"
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            },
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "type": "block",
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:            "vg_name": "ceph_vg2"
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:        }
Dec 13 03:06:07 np0005558241 elated_cannon[254851]:    ]
Dec 13 03:06:07 np0005558241 elated_cannon[254851]: }
Dec 13 03:06:07 np0005558241 systemd[1]: libpod-4c8fd437e42461fe6ef0b3785eea497c7289c3ca0aacfddd6ccbe5adeb681d37.scope: Deactivated successfully.
Dec 13 03:06:07 np0005558241 podman[254835]: 2025-12-13 08:06:07.952424864 +0000 UTC m=+0.477937239 container died 4c8fd437e42461fe6ef0b3785eea497c7289c3ca0aacfddd6ccbe5adeb681d37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_cannon, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 03:06:07 np0005558241 systemd[1]: var-lib-containers-storage-overlay-db6ffb8acc19acfdb3e7aec1c6444b0d6014b772e4f3674496b9b88de792930e-merged.mount: Deactivated successfully.
Dec 13 03:06:08 np0005558241 podman[254835]: 2025-12-13 08:06:08.004090635 +0000 UTC m=+0.529603000 container remove 4c8fd437e42461fe6ef0b3785eea497c7289c3ca0aacfddd6ccbe5adeb681d37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_cannon, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 03:06:08 np0005558241 systemd[1]: libpod-conmon-4c8fd437e42461fe6ef0b3785eea497c7289c3ca0aacfddd6ccbe5adeb681d37.scope: Deactivated successfully.
Dec 13 03:06:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:08 np0005558241 podman[254934]: 2025-12-13 08:06:08.457763146 +0000 UTC m=+0.041510382 container create de4b03accfd4c3f033d0b7edd826c021f8bcf9747d5063cb8c6f00cc93c5a905 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Dec 13 03:06:08 np0005558241 systemd[1]: Started libpod-conmon-de4b03accfd4c3f033d0b7edd826c021f8bcf9747d5063cb8c6f00cc93c5a905.scope.
Dec 13 03:06:08 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:06:08 np0005558241 podman[254934]: 2025-12-13 08:06:08.439744233 +0000 UTC m=+0.023491499 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:06:08 np0005558241 podman[254934]: 2025-12-13 08:06:08.534446532 +0000 UTC m=+0.118193828 container init de4b03accfd4c3f033d0b7edd826c021f8bcf9747d5063cb8c6f00cc93c5a905 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_taussig, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 03:06:08 np0005558241 podman[254934]: 2025-12-13 08:06:08.539749163 +0000 UTC m=+0.123496399 container start de4b03accfd4c3f033d0b7edd826c021f8bcf9747d5063cb8c6f00cc93c5a905 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:06:08 np0005558241 podman[254934]: 2025-12-13 08:06:08.543034434 +0000 UTC m=+0.126781690 container attach de4b03accfd4c3f033d0b7edd826c021f8bcf9747d5063cb8c6f00cc93c5a905 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_taussig, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 03:06:08 np0005558241 flamboyant_taussig[254950]: 167 167
Dec 13 03:06:08 np0005558241 systemd[1]: libpod-de4b03accfd4c3f033d0b7edd826c021f8bcf9747d5063cb8c6f00cc93c5a905.scope: Deactivated successfully.
Dec 13 03:06:08 np0005558241 podman[254934]: 2025-12-13 08:06:08.545833313 +0000 UTC m=+0.129580569 container died de4b03accfd4c3f033d0b7edd826c021f8bcf9747d5063cb8c6f00cc93c5a905 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_taussig, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 03:06:08 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8965b1393d94c145ffc17f584968d5e3cdcfd5f08d44ed8fa21542cf34622f27-merged.mount: Deactivated successfully.
Dec 13 03:06:08 np0005558241 podman[254934]: 2025-12-13 08:06:08.585375375 +0000 UTC m=+0.169122601 container remove de4b03accfd4c3f033d0b7edd826c021f8bcf9747d5063cb8c6f00cc93c5a905 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_taussig, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:06:08 np0005558241 systemd[1]: libpod-conmon-de4b03accfd4c3f033d0b7edd826c021f8bcf9747d5063cb8c6f00cc93c5a905.scope: Deactivated successfully.
Dec 13 03:06:08 np0005558241 podman[254973]: 2025-12-13 08:06:08.751478282 +0000 UTC m=+0.047972192 container create e8a85da6eeaa61a04465e05cb565579762a4453cfb2ecc24b7615e4e8118d0b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_antonelli, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 03:06:08 np0005558241 systemd[1]: Started libpod-conmon-e8a85da6eeaa61a04465e05cb565579762a4453cfb2ecc24b7615e4e8118d0b1.scope.
Dec 13 03:06:08 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:06:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bb82ad9b196169b51a2ae2312dc013551642eb862d66bbaaa365d3390a6f508/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:06:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bb82ad9b196169b51a2ae2312dc013551642eb862d66bbaaa365d3390a6f508/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:06:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bb82ad9b196169b51a2ae2312dc013551642eb862d66bbaaa365d3390a6f508/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:06:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bb82ad9b196169b51a2ae2312dc013551642eb862d66bbaaa365d3390a6f508/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:06:08 np0005558241 podman[254973]: 2025-12-13 08:06:08.734211827 +0000 UTC m=+0.030705757 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:06:08 np0005558241 podman[254973]: 2025-12-13 08:06:08.83150008 +0000 UTC m=+0.127994010 container init e8a85da6eeaa61a04465e05cb565579762a4453cfb2ecc24b7615e4e8118d0b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_antonelli, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 03:06:08 np0005558241 podman[254973]: 2025-12-13 08:06:08.840748128 +0000 UTC m=+0.137242028 container start e8a85da6eeaa61a04465e05cb565579762a4453cfb2ecc24b7615e4e8118d0b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:06:08 np0005558241 podman[254973]: 2025-12-13 08:06:08.844139661 +0000 UTC m=+0.140633581 container attach e8a85da6eeaa61a04465e05cb565579762a4453cfb2ecc24b7615e4e8118d0b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_antonelli, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 03:06:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:06:09
Dec 13 03:06:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:06:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:06:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.meta', 'default.rgw.control', 'images', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'vms']
Dec 13 03:06:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:06:09 np0005558241 lvm[255067]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:06:09 np0005558241 lvm[255068]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:06:09 np0005558241 lvm[255068]: VG ceph_vg1 finished
Dec 13 03:06:09 np0005558241 lvm[255067]: VG ceph_vg0 finished
Dec 13 03:06:09 np0005558241 lvm[255070]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:06:09 np0005558241 lvm[255070]: VG ceph_vg2 finished
Dec 13 03:06:09 np0005558241 crazy_antonelli[254989]: {}
Dec 13 03:06:09 np0005558241 systemd[1]: libpod-e8a85da6eeaa61a04465e05cb565579762a4453cfb2ecc24b7615e4e8118d0b1.scope: Deactivated successfully.
Dec 13 03:06:09 np0005558241 podman[254973]: 2025-12-13 08:06:09.740203335 +0000 UTC m=+1.036697225 container died e8a85da6eeaa61a04465e05cb565579762a4453cfb2ecc24b7615e4e8118d0b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_antonelli, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:06:09 np0005558241 systemd[1]: libpod-e8a85da6eeaa61a04465e05cb565579762a4453cfb2ecc24b7615e4e8118d0b1.scope: Consumed 1.450s CPU time.
Dec 13 03:06:09 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8bb82ad9b196169b51a2ae2312dc013551642eb862d66bbaaa365d3390a6f508-merged.mount: Deactivated successfully.
Dec 13 03:06:09 np0005558241 podman[254973]: 2025-12-13 08:06:09.792111982 +0000 UTC m=+1.088605882 container remove e8a85da6eeaa61a04465e05cb565579762a4453cfb2ecc24b7615e4e8118d0b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_antonelli, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Dec 13 03:06:09 np0005558241 systemd[1]: libpod-conmon-e8a85da6eeaa61a04465e05cb565579762a4453cfb2ecc24b7615e4e8118d0b1.scope: Deactivated successfully.
Dec 13 03:06:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:06:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:06:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:06:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:06:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:06:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:06:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:06:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:06:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:06:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:06:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:06:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:06:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:06:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:06:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1227996447' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:06:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:06:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1227996447' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:06:14 np0005558241 podman[255111]: 2025-12-13 08:06:14.972668538 +0000 UTC m=+0.057926406 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 03:06:14 np0005558241 podman[255110]: 2025-12-13 08:06:14.974859762 +0000 UTC m=+0.061679709 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd)
Dec 13 03:06:15 np0005558241 podman[255109]: 2025-12-13 08:06:15.007043593 +0000 UTC m=+0.096869914 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 03:06:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1176: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:06:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:06:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:06:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:06:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:24 np0005558241 nova_compute[248510]: 2025-12-13 08:06:24.144 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:06:24 np0005558241 nova_compute[248510]: 2025-12-13 08:06:24.145 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:06:24 np0005558241 nova_compute[248510]: 2025-12-13 08:06:24.145 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:06:24 np0005558241 nova_compute[248510]: 2025-12-13 08:06:24.145 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:06:24 np0005558241 nova_compute[248510]: 2025-12-13 08:06:24.216 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:06:24 np0005558241 nova_compute[248510]: 2025-12-13 08:06:24.216 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:06:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:24 np0005558241 nova_compute[248510]: 2025-12-13 08:06:24.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:06:25 np0005558241 nova_compute[248510]: 2025-12-13 08:06:25.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:06:25 np0005558241 nova_compute[248510]: 2025-12-13 08:06:25.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:06:25 np0005558241 nova_compute[248510]: 2025-12-13 08:06:25.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:06:25 np0005558241 nova_compute[248510]: 2025-12-13 08:06:25.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:06:25 np0005558241 nova_compute[248510]: 2025-12-13 08:06:25.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:06:25 np0005558241 nova_compute[248510]: 2025-12-13 08:06:25.819 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:06:25 np0005558241 nova_compute[248510]: 2025-12-13 08:06:25.820 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:06:25 np0005558241 nova_compute[248510]: 2025-12-13 08:06:25.820 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:06:25 np0005558241 nova_compute[248510]: 2025-12-13 08:06:25.820 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:06:25 np0005558241 nova_compute[248510]: 2025-12-13 08:06:25.821 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:06:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:06:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4124205500' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:06:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:26 np0005558241 nova_compute[248510]: 2025-12-13 08:06:26.442 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:06:26 np0005558241 nova_compute[248510]: 2025-12-13 08:06:26.584 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:06:26 np0005558241 nova_compute[248510]: 2025-12-13 08:06:26.585 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5154MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:06:26 np0005558241 nova_compute[248510]: 2025-12-13 08:06:26.586 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:06:26 np0005558241 nova_compute[248510]: 2025-12-13 08:06:26.586 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:06:26 np0005558241 nova_compute[248510]: 2025-12-13 08:06:26.670 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:06:26 np0005558241 nova_compute[248510]: 2025-12-13 08:06:26.670 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:06:26 np0005558241 nova_compute[248510]: 2025-12-13 08:06:26.693 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:06:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:06:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:06:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/14282307' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:06:27 np0005558241 nova_compute[248510]: 2025-12-13 08:06:27.243 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:06:27 np0005558241 nova_compute[248510]: 2025-12-13 08:06:27.249 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:06:27 np0005558241 nova_compute[248510]: 2025-12-13 08:06:27.274 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:06:27 np0005558241 nova_compute[248510]: 2025-12-13 08:06:27.276 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:06:27 np0005558241 nova_compute[248510]: 2025-12-13 08:06:27.276 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:06:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:29 np0005558241 nova_compute[248510]: 2025-12-13 08:06:29.276 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:06:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:06:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:06:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:06:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:06:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:06:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:06:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:06:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:06:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:06:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:45 np0005558241 podman[255215]: 2025-12-13 08:06:45.990484596 +0000 UTC m=+0.067985454 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 03:06:46 np0005558241 podman[255216]: 2025-12-13 08:06:45.999859256 +0000 UTC m=+0.071985401 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 03:06:46 np0005558241 podman[255214]: 2025-12-13 08:06:46.011052242 +0000 UTC m=+0.101865947 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 03:06:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1191: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:06:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:06:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1195: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:06:55.385 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:06:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:06:55.385 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:06:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:06:55.385 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:06:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:06:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:06:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1198: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:07:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:07:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:07:09
Dec 13 03:07:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:07:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:07:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['vms', 'default.rgw.meta', '.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'volumes']
Dec 13 03:07:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:07:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:07:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:07:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:07:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:07:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:07:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:07:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:07:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:07:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:07:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:07:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:07:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:07:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:07:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:07:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:07:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:07:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:07:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:07:11 np0005558241 podman[255421]: 2025-12-13 08:07:11.176373161 +0000 UTC m=+0.069593103 container create 74989a55257911b0f771f07ce82e062094b0e44040f771736be56e5dfeb54335 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bhaskara, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:07:11 np0005558241 systemd[1]: Started libpod-conmon-74989a55257911b0f771f07ce82e062094b0e44040f771736be56e5dfeb54335.scope.
Dec 13 03:07:11 np0005558241 podman[255421]: 2025-12-13 08:07:11.128452612 +0000 UTC m=+0.021672554 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:07:11 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:07:11 np0005558241 podman[255421]: 2025-12-13 08:07:11.314753485 +0000 UTC m=+0.207973407 container init 74989a55257911b0f771f07ce82e062094b0e44040f771736be56e5dfeb54335 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bhaskara, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 03:07:11 np0005558241 podman[255421]: 2025-12-13 08:07:11.323758006 +0000 UTC m=+0.216977908 container start 74989a55257911b0f771f07ce82e062094b0e44040f771736be56e5dfeb54335 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bhaskara, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 03:07:11 np0005558241 youthful_bhaskara[255437]: 167 167
Dec 13 03:07:11 np0005558241 systemd[1]: libpod-74989a55257911b0f771f07ce82e062094b0e44040f771736be56e5dfeb54335.scope: Deactivated successfully.
Dec 13 03:07:11 np0005558241 podman[255421]: 2025-12-13 08:07:11.3405889 +0000 UTC m=+0.233808832 container attach 74989a55257911b0f771f07ce82e062094b0e44040f771736be56e5dfeb54335 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 03:07:11 np0005558241 podman[255421]: 2025-12-13 08:07:11.341879512 +0000 UTC m=+0.235099424 container died 74989a55257911b0f771f07ce82e062094b0e44040f771736be56e5dfeb54335 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bhaskara, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:07:11 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d214fac87c501d0f8a5235a64663ade9fe75648a06a6e7094af89b198abdc07b-merged.mount: Deactivated successfully.
Dec 13 03:07:11 np0005558241 podman[255421]: 2025-12-13 08:07:11.39098038 +0000 UTC m=+0.284200292 container remove 74989a55257911b0f771f07ce82e062094b0e44040f771736be56e5dfeb54335 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_bhaskara, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:07:11 np0005558241 systemd[1]: libpod-conmon-74989a55257911b0f771f07ce82e062094b0e44040f771736be56e5dfeb54335.scope: Deactivated successfully.
Dec 13 03:07:11 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:07:11 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:07:11 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:07:11 np0005558241 podman[255461]: 2025-12-13 08:07:11.583498486 +0000 UTC m=+0.074913514 container create 13f8134062c26dceaf68c6932ec1aa7160b7464f569e08a796fb00023c214d4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_greider, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 03:07:11 np0005558241 podman[255461]: 2025-12-13 08:07:11.5336502 +0000 UTC m=+0.025065258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:07:11 np0005558241 systemd[1]: Started libpod-conmon-13f8134062c26dceaf68c6932ec1aa7160b7464f569e08a796fb00023c214d4e.scope.
Dec 13 03:07:11 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:07:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf31199d32257399d5a372bcbe91fc528e69af14dd6d358f2a124d96b08d28e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:07:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf31199d32257399d5a372bcbe91fc528e69af14dd6d358f2a124d96b08d28e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:07:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf31199d32257399d5a372bcbe91fc528e69af14dd6d358f2a124d96b08d28e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:07:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf31199d32257399d5a372bcbe91fc528e69af14dd6d358f2a124d96b08d28e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:07:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf31199d32257399d5a372bcbe91fc528e69af14dd6d358f2a124d96b08d28e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:07:11 np0005558241 podman[255461]: 2025-12-13 08:07:11.720545238 +0000 UTC m=+0.211960266 container init 13f8134062c26dceaf68c6932ec1aa7160b7464f569e08a796fb00023c214d4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_greider, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:07:11 np0005558241 podman[255461]: 2025-12-13 08:07:11.727488799 +0000 UTC m=+0.218903827 container start 13f8134062c26dceaf68c6932ec1aa7160b7464f569e08a796fb00023c214d4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_greider, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:07:11 np0005558241 podman[255461]: 2025-12-13 08:07:11.732171804 +0000 UTC m=+0.223586822 container attach 13f8134062c26dceaf68c6932ec1aa7160b7464f569e08a796fb00023c214d4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_greider, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 03:07:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:07:12 np0005558241 relaxed_greider[255477]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:07:12 np0005558241 relaxed_greider[255477]: --> All data devices are unavailable
Dec 13 03:07:12 np0005558241 systemd[1]: libpod-13f8134062c26dceaf68c6932ec1aa7160b7464f569e08a796fb00023c214d4e.scope: Deactivated successfully.
Dec 13 03:07:12 np0005558241 podman[255461]: 2025-12-13 08:07:12.22590172 +0000 UTC m=+0.717316748 container died 13f8134062c26dceaf68c6932ec1aa7160b7464f569e08a796fb00023c214d4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 03:07:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7bf31199d32257399d5a372bcbe91fc528e69af14dd6d358f2a124d96b08d28e-merged.mount: Deactivated successfully.
Dec 13 03:07:12 np0005558241 podman[255461]: 2025-12-13 08:07:12.270683592 +0000 UTC m=+0.762098620 container remove 13f8134062c26dceaf68c6932ec1aa7160b7464f569e08a796fb00023c214d4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:07:12 np0005558241 systemd[1]: libpod-conmon-13f8134062c26dceaf68c6932ec1aa7160b7464f569e08a796fb00023c214d4e.scope: Deactivated successfully.
Dec 13 03:07:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:12 np0005558241 podman[255572]: 2025-12-13 08:07:12.72669963 +0000 UTC m=+0.024043842 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:07:12 np0005558241 podman[255572]: 2025-12-13 08:07:12.952009423 +0000 UTC m=+0.249353605 container create f359a13cc7919803989ce181f43a00f7099bedce1dc405c97aa5ecbd3190e3b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bartik, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 03:07:13 np0005558241 systemd[1]: Started libpod-conmon-f359a13cc7919803989ce181f43a00f7099bedce1dc405c97aa5ecbd3190e3b4.scope.
Dec 13 03:07:13 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:07:14 np0005558241 podman[255572]: 2025-12-13 08:07:14.053259415 +0000 UTC m=+1.350603617 container init f359a13cc7919803989ce181f43a00f7099bedce1dc405c97aa5ecbd3190e3b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:07:14 np0005558241 podman[255572]: 2025-12-13 08:07:14.067145366 +0000 UTC m=+1.364489538 container start f359a13cc7919803989ce181f43a00f7099bedce1dc405c97aa5ecbd3190e3b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 03:07:14 np0005558241 frosty_bartik[255588]: 167 167
Dec 13 03:07:14 np0005558241 systemd[1]: libpod-f359a13cc7919803989ce181f43a00f7099bedce1dc405c97aa5ecbd3190e3b4.scope: Deactivated successfully.
Dec 13 03:07:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1205: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:14 np0005558241 podman[255572]: 2025-12-13 08:07:14.476400953 +0000 UTC m=+1.773745225 container attach f359a13cc7919803989ce181f43a00f7099bedce1dc405c97aa5ecbd3190e3b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bartik, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 03:07:14 np0005558241 podman[255572]: 2025-12-13 08:07:14.477170802 +0000 UTC m=+1.774515024 container died f359a13cc7919803989ce181f43a00f7099bedce1dc405c97aa5ecbd3190e3b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Dec 13 03:07:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:07:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/803686102' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:07:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:07:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/803686102' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:07:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b6dffc34598dcbc4f76a8d13121c025eb7396e4e634331c071fdc8a8f6639941-merged.mount: Deactivated successfully.
Dec 13 03:07:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:16 np0005558241 podman[255572]: 2025-12-13 08:07:16.950184101 +0000 UTC m=+4.247528303 container remove f359a13cc7919803989ce181f43a00f7099bedce1dc405c97aa5ecbd3190e3b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bartik, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:07:16 np0005558241 systemd[1]: libpod-conmon-f359a13cc7919803989ce181f43a00f7099bedce1dc405c97aa5ecbd3190e3b4.scope: Deactivated successfully.
Dec 13 03:07:17 np0005558241 podman[255609]: 2025-12-13 08:07:17.000060908 +0000 UTC m=+0.075226742 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:07:17 np0005558241 podman[255608]: 2025-12-13 08:07:17.030953898 +0000 UTC m=+0.116936498 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Dec 13 03:07:17 np0005558241 podman[255607]: 2025-12-13 08:07:17.030960018 +0000 UTC m=+0.113121034 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 13 03:07:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:07:17 np0005558241 podman[255675]: 2025-12-13 08:07:17.104294462 +0000 UTC m=+0.022984456 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:07:17 np0005558241 podman[255675]: 2025-12-13 08:07:17.362968726 +0000 UTC m=+0.281658700 container create 0aa34660e82b942acd0baf7a0e68f7d4801d41b73d9c7443383c6dba7b96e78e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chandrasekhar, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 03:07:17 np0005558241 systemd[1]: Started libpod-conmon-0aa34660e82b942acd0baf7a0e68f7d4801d41b73d9c7443383c6dba7b96e78e.scope.
Dec 13 03:07:17 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:07:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b36e652787d350a5388229cf0f90084ba675371f42e0706356f6923a49f6c5f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:07:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b36e652787d350a5388229cf0f90084ba675371f42e0706356f6923a49f6c5f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:07:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b36e652787d350a5388229cf0f90084ba675371f42e0706356f6923a49f6c5f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:07:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b36e652787d350a5388229cf0f90084ba675371f42e0706356f6923a49f6c5f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:07:18 np0005558241 podman[255675]: 2025-12-13 08:07:18.090719628 +0000 UTC m=+1.009409622 container init 0aa34660e82b942acd0baf7a0e68f7d4801d41b73d9c7443383c6dba7b96e78e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chandrasekhar, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 03:07:18 np0005558241 podman[255675]: 2025-12-13 08:07:18.099041823 +0000 UTC m=+1.017731797 container start 0aa34660e82b942acd0baf7a0e68f7d4801d41b73d9c7443383c6dba7b96e78e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chandrasekhar, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 03:07:18 np0005558241 podman[255675]: 2025-12-13 08:07:18.303031811 +0000 UTC m=+1.221721815 container attach 0aa34660e82b942acd0baf7a0e68f7d4801d41b73d9c7443383c6dba7b96e78e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chandrasekhar, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]: {
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:    "0": [
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:        {
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "devices": [
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "/dev/loop3"
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            ],
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "lv_name": "ceph_lv0",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "lv_size": "21470642176",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "name": "ceph_lv0",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "tags": {
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.cluster_name": "ceph",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.crush_device_class": "",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.encrypted": "0",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.objectstore": "bluestore",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.osd_id": "0",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.type": "block",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.vdo": "0",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.with_tpm": "0"
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            },
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "type": "block",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "vg_name": "ceph_vg0"
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:        }
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:    ],
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:    "1": [
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:        {
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "devices": [
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "/dev/loop4"
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            ],
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "lv_name": "ceph_lv1",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "lv_size": "21470642176",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "name": "ceph_lv1",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "tags": {
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.cluster_name": "ceph",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.crush_device_class": "",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.encrypted": "0",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.objectstore": "bluestore",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.osd_id": "1",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.type": "block",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.vdo": "0",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.with_tpm": "0"
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            },
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "type": "block",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "vg_name": "ceph_vg1"
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:        }
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:    ],
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:    "2": [
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:        {
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "devices": [
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "/dev/loop5"
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            ],
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "lv_name": "ceph_lv2",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "lv_size": "21470642176",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "name": "ceph_lv2",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "tags": {
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.cluster_name": "ceph",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.crush_device_class": "",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.encrypted": "0",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.objectstore": "bluestore",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.osd_id": "2",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.type": "block",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.vdo": "0",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:                "ceph.with_tpm": "0"
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            },
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "type": "block",
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:            "vg_name": "ceph_vg2"
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:        }
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]:    ]
Dec 13 03:07:18 np0005558241 intelligent_chandrasekhar[255691]: }
Dec 13 03:07:18 np0005558241 systemd[1]: libpod-0aa34660e82b942acd0baf7a0e68f7d4801d41b73d9c7443383c6dba7b96e78e.scope: Deactivated successfully.
Dec 13 03:07:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1207: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:18 np0005558241 podman[255701]: 2025-12-13 08:07:18.472527721 +0000 UTC m=+0.032337077 container died 0aa34660e82b942acd0baf7a0e68f7d4801d41b73d9c7443383c6dba7b96e78e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 03:07:18 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b36e652787d350a5388229cf0f90084ba675371f42e0706356f6923a49f6c5f7-merged.mount: Deactivated successfully.
Dec 13 03:07:18 np0005558241 podman[255701]: 2025-12-13 08:07:18.937421188 +0000 UTC m=+0.497230514 container remove 0aa34660e82b942acd0baf7a0e68f7d4801d41b73d9c7443383c6dba7b96e78e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chandrasekhar, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:07:18 np0005558241 systemd[1]: libpod-conmon-0aa34660e82b942acd0baf7a0e68f7d4801d41b73d9c7443383c6dba7b96e78e.scope: Deactivated successfully.
Dec 13 03:07:19 np0005558241 podman[255778]: 2025-12-13 08:07:19.396477271 +0000 UTC m=+0.024059803 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:07:19 np0005558241 podman[255778]: 2025-12-13 08:07:19.615250113 +0000 UTC m=+0.242832615 container create 90d214c6f4b1f5c0a242d5b37b6d3696f12e5b36987134d51287c0edcd71b6aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ganguly, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 03:07:19 np0005558241 systemd[1]: Started libpod-conmon-90d214c6f4b1f5c0a242d5b37b6d3696f12e5b36987134d51287c0edcd71b6aa.scope.
Dec 13 03:07:19 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:07:20 np0005558241 podman[255778]: 2025-12-13 08:07:20.085589614 +0000 UTC m=+0.713172206 container init 90d214c6f4b1f5c0a242d5b37b6d3696f12e5b36987134d51287c0edcd71b6aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 03:07:20 np0005558241 podman[255778]: 2025-12-13 08:07:20.094653017 +0000 UTC m=+0.722235529 container start 90d214c6f4b1f5c0a242d5b37b6d3696f12e5b36987134d51287c0edcd71b6aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 03:07:20 np0005558241 compassionate_ganguly[255795]: 167 167
Dec 13 03:07:20 np0005558241 systemd[1]: libpod-90d214c6f4b1f5c0a242d5b37b6d3696f12e5b36987134d51287c0edcd71b6aa.scope: Deactivated successfully.
Dec 13 03:07:20 np0005558241 podman[255778]: 2025-12-13 08:07:20.152559222 +0000 UTC m=+0.780141734 container attach 90d214c6f4b1f5c0a242d5b37b6d3696f12e5b36987134d51287c0edcd71b6aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ganguly, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:07:20 np0005558241 podman[255778]: 2025-12-13 08:07:20.153058374 +0000 UTC m=+0.780640886 container died 90d214c6f4b1f5c0a242d5b37b6d3696f12e5b36987134d51287c0edcd71b6aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:07:20 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4ca0aef55e6a3694ae4e2f6a3f09d09036acce727404e71f4f334af5d8e2cedb-merged.mount: Deactivated successfully.
Dec 13 03:07:20 np0005558241 podman[255778]: 2025-12-13 08:07:20.214610198 +0000 UTC m=+0.842192710 container remove 90d214c6f4b1f5c0a242d5b37b6d3696f12e5b36987134d51287c0edcd71b6aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 03:07:20 np0005558241 systemd[1]: libpod-conmon-90d214c6f4b1f5c0a242d5b37b6d3696f12e5b36987134d51287c0edcd71b6aa.scope: Deactivated successfully.
Dec 13 03:07:20 np0005558241 podman[255820]: 2025-12-13 08:07:20.384658192 +0000 UTC m=+0.040854156 container create 3ff4666e7418cacfb14335c49632dfe7d5ea9ac9a4c11d3a1a6805b420dd0585 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 03:07:20 np0005558241 systemd[1]: Started libpod-conmon-3ff4666e7418cacfb14335c49632dfe7d5ea9ac9a4c11d3a1a6805b420dd0585.scope.
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:20 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:07:20 np0005558241 podman[255820]: 2025-12-13 08:07:20.36751585 +0000 UTC m=+0.023711824 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:07:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e13b0dba8d344e727014a4b6ef73304fea075325b15a2b26efddb8f8f2463e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:07:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e13b0dba8d344e727014a4b6ef73304fea075325b15a2b26efddb8f8f2463e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:07:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e13b0dba8d344e727014a4b6ef73304fea075325b15a2b26efddb8f8f2463e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:07:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e13b0dba8d344e727014a4b6ef73304fea075325b15a2b26efddb8f8f2463e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:07:20 np0005558241 podman[255820]: 2025-12-13 08:07:20.474458091 +0000 UTC m=+0.130654065 container init 3ff4666e7418cacfb14335c49632dfe7d5ea9ac9a4c11d3a1a6805b420dd0585 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 03:07:20 np0005558241 podman[255820]: 2025-12-13 08:07:20.481716769 +0000 UTC m=+0.137912733 container start 3ff4666e7418cacfb14335c49632dfe7d5ea9ac9a4c11d3a1a6805b420dd0585 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 03:07:20 np0005558241 podman[255820]: 2025-12-13 08:07:20.486175609 +0000 UTC m=+0.142371583 container attach 3ff4666e7418cacfb14335c49632dfe7d5ea9ac9a4c11d3a1a6805b420dd0585 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_dijkstra, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3236368572664593e-06 of space, bias 4.0, pg target 0.0015883642287197511 quantized to 16 (current 32)
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:07:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:07:21 np0005558241 lvm[255915]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:07:21 np0005558241 lvm[255914]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:07:21 np0005558241 lvm[255914]: VG ceph_vg0 finished
Dec 13 03:07:21 np0005558241 lvm[255915]: VG ceph_vg1 finished
Dec 13 03:07:21 np0005558241 lvm[255917]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:07:21 np0005558241 lvm[255917]: VG ceph_vg2 finished
Dec 13 03:07:21 np0005558241 frosty_dijkstra[255836]: {}
Dec 13 03:07:21 np0005558241 systemd[1]: libpod-3ff4666e7418cacfb14335c49632dfe7d5ea9ac9a4c11d3a1a6805b420dd0585.scope: Deactivated successfully.
Dec 13 03:07:21 np0005558241 systemd[1]: libpod-3ff4666e7418cacfb14335c49632dfe7d5ea9ac9a4c11d3a1a6805b420dd0585.scope: Consumed 1.365s CPU time.
Dec 13 03:07:21 np0005558241 podman[255820]: 2025-12-13 08:07:21.319617943 +0000 UTC m=+0.975813907 container died 3ff4666e7418cacfb14335c49632dfe7d5ea9ac9a4c11d3a1a6805b420dd0585 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:07:21 np0005558241 systemd[1]: var-lib-containers-storage-overlay-66e13b0dba8d344e727014a4b6ef73304fea075325b15a2b26efddb8f8f2463e-merged.mount: Deactivated successfully.
Dec 13 03:07:21 np0005558241 podman[255820]: 2025-12-13 08:07:21.534814987 +0000 UTC m=+1.191010971 container remove 3ff4666e7418cacfb14335c49632dfe7d5ea9ac9a4c11d3a1a6805b420dd0585 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_dijkstra, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:07:21 np0005558241 systemd[1]: libpod-conmon-3ff4666e7418cacfb14335c49632dfe7d5ea9ac9a4c11d3a1a6805b420dd0585.scope: Deactivated successfully.
Dec 13 03:07:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:07:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:07:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:07:21 np0005558241 nova_compute[248510]: 2025-12-13 08:07:21.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:07:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:07:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:07:22 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:07:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:23 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:07:23 np0005558241 nova_compute[248510]: 2025-12-13 08:07:23.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:07:23 np0005558241 nova_compute[248510]: 2025-12-13 08:07:23.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:07:23 np0005558241 nova_compute[248510]: 2025-12-13 08:07:23.770 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:07:23 np0005558241 nova_compute[248510]: 2025-12-13 08:07:23.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:07:23 np0005558241 nova_compute[248510]: 2025-12-13 08:07:23.791 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:07:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:25 np0005558241 nova_compute[248510]: 2025-12-13 08:07:25.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:07:25 np0005558241 nova_compute[248510]: 2025-12-13 08:07:25.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:07:25 np0005558241 nova_compute[248510]: 2025-12-13 08:07:25.821 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:07:25 np0005558241 nova_compute[248510]: 2025-12-13 08:07:25.821 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:07:25 np0005558241 nova_compute[248510]: 2025-12-13 08:07:25.821 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:07:25 np0005558241 nova_compute[248510]: 2025-12-13 08:07:25.822 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:07:25 np0005558241 nova_compute[248510]: 2025-12-13 08:07:25.822 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:07:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:07:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/445118165' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:07:26 np0005558241 nova_compute[248510]: 2025-12-13 08:07:26.429 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:07:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:26 np0005558241 nova_compute[248510]: 2025-12-13 08:07:26.590 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:07:26 np0005558241 nova_compute[248510]: 2025-12-13 08:07:26.592 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5118MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:07:26 np0005558241 nova_compute[248510]: 2025-12-13 08:07:26.592 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:07:26 np0005558241 nova_compute[248510]: 2025-12-13 08:07:26.592 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:07:26 np0005558241 nova_compute[248510]: 2025-12-13 08:07:26.796 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:07:26 np0005558241 nova_compute[248510]: 2025-12-13 08:07:26.797 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:07:26 np0005558241 nova_compute[248510]: 2025-12-13 08:07:26.824 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:07:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:07:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:07:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2812229928' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:07:27 np0005558241 nova_compute[248510]: 2025-12-13 08:07:27.390 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:07:27 np0005558241 nova_compute[248510]: 2025-12-13 08:07:27.395 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:07:27 np0005558241 nova_compute[248510]: 2025-12-13 08:07:27.419 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:07:27 np0005558241 nova_compute[248510]: 2025-12-13 08:07:27.423 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 03:07:27 np0005558241 nova_compute[248510]: 2025-12-13 08:07:27.423 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:07:28 np0005558241 nova_compute[248510]: 2025-12-13 08:07:28.423 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:07:28 np0005558241 nova_compute[248510]: 2025-12-13 08:07:28.424 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:07:28 np0005558241 nova_compute[248510]: 2025-12-13 08:07:28.424 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:07:28 np0005558241 nova_compute[248510]: 2025-12-13 08:07:28.424 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:07:28 np0005558241 nova_compute[248510]: 2025-12-13 08:07:28.425 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 03:07:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:28 np0005558241 nova_compute[248510]: 2025-12-13 08:07:28.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:07:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:07:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1214: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:07:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:07:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:07:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:07:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:07:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:07:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:07:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:07:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:07:47 np0005558241 podman[256008]: 2025-12-13 08:07:47.98810103 +0000 UTC m=+0.074964775 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.build-date=20251202)
Dec 13 03:07:47 np0005558241 podman[256009]: 2025-12-13 08:07:47.998593588 +0000 UTC m=+0.085092524 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 03:07:48 np0005558241 podman[256007]: 2025-12-13 08:07:48.002618057 +0000 UTC m=+0.089590955 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202)
Dec 13 03:07:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Dec 13 03:07:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Dec 13 03:07:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Dec 13 03:07:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Dec 13 03:07:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Dec 13 03:07:50 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Dec 13 03:07:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:07:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:07:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 895 B/s wr, 6 op/s
Dec 13 03:07:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:07:55.387 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:07:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:07:55.387 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:07:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:07:55.387 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:07:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 895 B/s wr, 6 op/s
Dec 13 03:07:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Dec 13 03:07:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Dec 13 03:07:57 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Dec 13 03:07:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 8.5 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.0 MiB/s wr, 13 op/s
Dec 13 03:08:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.2 MiB/s wr, 15 op/s
Dec 13 03:08:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:08:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Dec 13 03:08:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Dec 13 03:08:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Dec 13 03:08:02 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Dec 13 03:08:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 29 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 3.6 MiB/s wr, 26 op/s
Dec 13 03:08:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 29 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 3.3 MiB/s wr, 24 op/s
Dec 13 03:08:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:08:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Dec 13 03:08:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 33 MiB data, 170 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 2.5 MiB/s wr, 22 op/s
Dec 13 03:08:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Dec 13 03:08:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:08:09
Dec 13 03:08:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:08:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:08:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'vms', 'volumes']
Dec 13 03:08:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:08:09 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Dec 13 03:08:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:08:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:08:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:08:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:08:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:08:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:08:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:08:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:08:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:08:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:08:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:08:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:08:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:08:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:08:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:08:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:08:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 23 op/s
Dec 13 03:08:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 2.1 MiB/s wr, 13 op/s
Dec 13 03:08:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:08:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 1.2 MiB/s wr, 11 op/s
Dec 13 03:08:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:08:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3530489593' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:08:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:08:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3530489593' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:08:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 1.2 MiB/s wr, 11 op/s
Dec 13 03:08:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:08:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 819 KiB/s wr, 4 op/s
Dec 13 03:08:18 np0005558241 podman[256073]: 2025-12-13 08:08:18.967791469 +0000 UTC m=+0.056889501 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:08:18 np0005558241 podman[256074]: 2025-12-13 08:08:18.973183871 +0000 UTC m=+0.057825573 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 03:08:18 np0005558241 podman[256072]: 2025-12-13 08:08:18.9991532 +0000 UTC m=+0.088172410 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 452 B/s wr, 4 op/s
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665279136052113 of space, bias 1.0, pg target 0.1999583740815634 quantized to 32 (current 32)
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2446142247453319e-06 of space, bias 4.0, pg target 0.0014935370696943983 quantized to 16 (current 32)
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:08:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:08:21 np0005558241 nova_compute[248510]: 2025-12-13 08:08:21.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:08:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s
Dec 13 03:08:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:08:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:08:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:08:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:08:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:08:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:08:23 np0005558241 nova_compute[248510]: 2025-12-13 08:08:23.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:08:23 np0005558241 nova_compute[248510]: 2025-12-13 08:08:23.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:08:23 np0005558241 nova_compute[248510]: 2025-12-13 08:08:23.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:08:23 np0005558241 nova_compute[248510]: 2025-12-13 08:08:23.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:08:23 np0005558241 nova_compute[248510]: 2025-12-13 08:08:23.806 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:08:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Dec 13 03:08:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:08:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:08:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:08:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:08:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:08:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:08:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:08:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s rd, 938 B/s wr, 11 op/s
Dec 13 03:08:24 np0005558241 podman[256281]: 2025-12-13 08:08:24.606122528 +0000 UTC m=+0.025126929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:08:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Dec 13 03:08:24 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:08:25 np0005558241 podman[256281]: 2025-12-13 08:08:25.007306908 +0000 UTC m=+0.426311309 container create 7a432992dbb7d038ab31509c426b3ccbaf6bd814280c35368fb4fc349b630548 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hofstadter, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:08:25 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Dec 13 03:08:25 np0005558241 nova_compute[248510]: 2025-12-13 08:08:25.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:08:25 np0005558241 nova_compute[248510]: 2025-12-13 08:08:25.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:08:25 np0005558241 nova_compute[248510]: 2025-12-13 08:08:25.811 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:08:25 np0005558241 nova_compute[248510]: 2025-12-13 08:08:25.811 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:08:25 np0005558241 nova_compute[248510]: 2025-12-13 08:08:25.811 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:08:25 np0005558241 nova_compute[248510]: 2025-12-13 08:08:25.812 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:08:26 np0005558241 systemd[1]: Started libpod-conmon-7a432992dbb7d038ab31509c426b3ccbaf6bd814280c35368fb4fc349b630548.scope.
Dec 13 03:08:26 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:08:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Dec 13 03:08:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:08:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2373848915' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:08:26 np0005558241 podman[256281]: 2025-12-13 08:08:26.631936155 +0000 UTC m=+2.050940586 container init 7a432992dbb7d038ab31509c426b3ccbaf6bd814280c35368fb4fc349b630548 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hofstadter, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:08:26 np0005558241 podman[256281]: 2025-12-13 08:08:26.642693639 +0000 UTC m=+2.061698040 container start 7a432992dbb7d038ab31509c426b3ccbaf6bd814280c35368fb4fc349b630548 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hofstadter, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:08:26 np0005558241 adoring_hofstadter[256309]: 167 167
Dec 13 03:08:26 np0005558241 systemd[1]: libpod-7a432992dbb7d038ab31509c426b3ccbaf6bd814280c35368fb4fc349b630548.scope: Deactivated successfully.
Dec 13 03:08:26 np0005558241 nova_compute[248510]: 2025-12-13 08:08:26.663 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.851s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:08:26 np0005558241 nova_compute[248510]: 2025-12-13 08:08:26.816 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:08:26 np0005558241 nova_compute[248510]: 2025-12-13 08:08:26.817 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5078MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:08:26 np0005558241 nova_compute[248510]: 2025-12-13 08:08:26.817 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:08:26 np0005558241 nova_compute[248510]: 2025-12-13 08:08:26.817 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:08:26 np0005558241 nova_compute[248510]: 2025-12-13 08:08:26.893 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:08:26 np0005558241 nova_compute[248510]: 2025-12-13 08:08:26.894 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:08:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:08:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:08:26 np0005558241 nova_compute[248510]: 2025-12-13 08:08:26.909 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:08:27 np0005558241 podman[256281]: 2025-12-13 08:08:27.060878367 +0000 UTC m=+2.479882858 container attach 7a432992dbb7d038ab31509c426b3ccbaf6bd814280c35368fb4fc349b630548 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:08:27 np0005558241 podman[256281]: 2025-12-13 08:08:27.063339208 +0000 UTC m=+2.482343629 container died 7a432992dbb7d038ab31509c426b3ccbaf6bd814280c35368fb4fc349b630548 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hofstadter, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:08:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:08:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/860577750' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:08:27 np0005558241 nova_compute[248510]: 2025-12-13 08:08:27.529 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:08:27 np0005558241 nova_compute[248510]: 2025-12-13 08:08:27.535 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:08:27 np0005558241 nova_compute[248510]: 2025-12-13 08:08:27.555 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:08:27 np0005558241 nova_compute[248510]: 2025-12-13 08:08:27.556 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:08:27 np0005558241 nova_compute[248510]: 2025-12-13 08:08:27.557 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:08:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:08:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 716 B/s wr, 12 op/s
Dec 13 03:08:29 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f04565014390119f0c5af526c2c504dbdb2f90ddb24b088edb6095ad472899f3-merged.mount: Deactivated successfully.
Dec 13 03:08:29 np0005558241 nova_compute[248510]: 2025-12-13 08:08:29.557 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:08:29 np0005558241 nova_compute[248510]: 2025-12-13 08:08:29.557 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:08:29 np0005558241 nova_compute[248510]: 2025-12-13 08:08:29.557 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:08:29 np0005558241 nova_compute[248510]: 2025-12-13 08:08:29.557 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:08:29 np0005558241 nova_compute[248510]: 2025-12-13 08:08:29.558 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:08:29 np0005558241 nova_compute[248510]: 2025-12-13 08:08:29.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:08:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 716 B/s wr, 13 op/s
Dec 13 03:08:32 np0005558241 podman[256281]: 2025-12-13 08:08:32.351356347 +0000 UTC m=+7.770360758 container remove 7a432992dbb7d038ab31509c426b3ccbaf6bd814280c35368fb4fc349b630548 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:08:32 np0005558241 systemd[1]: libpod-conmon-7a432992dbb7d038ab31509c426b3ccbaf6bd814280c35368fb4fc349b630548.scope: Deactivated successfully.
Dec 13 03:08:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 716 B/s wr, 13 op/s
Dec 13 03:08:32 np0005558241 podman[256367]: 2025-12-13 08:08:32.494476458 +0000 UTC m=+0.023689814 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:08:32 np0005558241 podman[256367]: 2025-12-13 08:08:32.895606457 +0000 UTC m=+0.424819793 container create ff49ce98d1759d607985621dfcf5f946a7eeefb7a2c7b71e641040d95de7231e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:08:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:08:33 np0005558241 systemd[1]: Started libpod-conmon-ff49ce98d1759d607985621dfcf5f946a7eeefb7a2c7b71e641040d95de7231e.scope.
Dec 13 03:08:33 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:08:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5e17d37c35f2aea42b8e859d949febdf9e14a189b3c2c11dddaa52114795f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:08:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5e17d37c35f2aea42b8e859d949febdf9e14a189b3c2c11dddaa52114795f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:08:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5e17d37c35f2aea42b8e859d949febdf9e14a189b3c2c11dddaa52114795f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:08:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5e17d37c35f2aea42b8e859d949febdf9e14a189b3c2c11dddaa52114795f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:08:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5e17d37c35f2aea42b8e859d949febdf9e14a189b3c2c11dddaa52114795f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:08:34 np0005558241 podman[256367]: 2025-12-13 08:08:34.363119798 +0000 UTC m=+1.892333154 container init ff49ce98d1759d607985621dfcf5f946a7eeefb7a2c7b71e641040d95de7231e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_poincare, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:08:34 np0005558241 podman[256367]: 2025-12-13 08:08:34.372154901 +0000 UTC m=+1.901368237 container start ff49ce98d1759d607985621dfcf5f946a7eeefb7a2c7b71e641040d95de7231e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_poincare, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:08:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 204 B/s wr, 9 op/s
Dec 13 03:08:34 np0005558241 podman[256367]: 2025-12-13 08:08:34.719312571 +0000 UTC m=+2.248525907 container attach ff49ce98d1759d607985621dfcf5f946a7eeefb7a2c7b71e641040d95de7231e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 03:08:34 np0005558241 gracious_poincare[256383]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:08:34 np0005558241 gracious_poincare[256383]: --> All data devices are unavailable
Dec 13 03:08:34 np0005558241 systemd[1]: libpod-ff49ce98d1759d607985621dfcf5f946a7eeefb7a2c7b71e641040d95de7231e.scope: Deactivated successfully.
Dec 13 03:08:34 np0005558241 podman[256367]: 2025-12-13 08:08:34.886010802 +0000 UTC m=+2.415224138 container died ff49ce98d1759d607985621dfcf5f946a7eeefb7a2c7b71e641040d95de7231e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_poincare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:08:36 np0005558241 systemd[1]: var-lib-containers-storage-overlay-fc5e17d37c35f2aea42b8e859d949febdf9e14a189b3c2c11dddaa52114795f9-merged.mount: Deactivated successfully.
Dec 13 03:08:36 np0005558241 podman[256367]: 2025-12-13 08:08:36.411417769 +0000 UTC m=+3.940631145 container remove ff49ce98d1759d607985621dfcf5f946a7eeefb7a2c7b71e641040d95de7231e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:08:36 np0005558241 systemd[1]: libpod-conmon-ff49ce98d1759d607985621dfcf5f946a7eeefb7a2c7b71e641040d95de7231e.scope: Deactivated successfully.
Dec 13 03:08:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 177 B/s wr, 8 op/s
Dec 13 03:08:36 np0005558241 podman[256479]: 2025-12-13 08:08:36.902030338 +0000 UTC m=+0.025383536 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:08:37 np0005558241 podman[256479]: 2025-12-13 08:08:37.156759285 +0000 UTC m=+0.280112453 container create 17b77261bf0f50ccee1ae4e4f5305a2626adf90371d2f571d4d71da0c83ae2e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_tu, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 03:08:37 np0005558241 systemd[1]: Started libpod-conmon-17b77261bf0f50ccee1ae4e4f5305a2626adf90371d2f571d4d71da0c83ae2e2.scope.
Dec 13 03:08:37 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:08:38 np0005558241 podman[256479]: 2025-12-13 08:08:38.04192003 +0000 UTC m=+1.165273218 container init 17b77261bf0f50ccee1ae4e4f5305a2626adf90371d2f571d4d71da0c83ae2e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:08:38 np0005558241 podman[256479]: 2025-12-13 08:08:38.050196154 +0000 UTC m=+1.173549322 container start 17b77261bf0f50ccee1ae4e4f5305a2626adf90371d2f571d4d71da0c83ae2e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 03:08:38 np0005558241 inspiring_tu[256496]: 167 167
Dec 13 03:08:38 np0005558241 systemd[1]: libpod-17b77261bf0f50ccee1ae4e4f5305a2626adf90371d2f571d4d71da0c83ae2e2.scope: Deactivated successfully.
Dec 13 03:08:38 np0005558241 podman[256479]: 2025-12-13 08:08:38.174523243 +0000 UTC m=+1.297876411 container attach 17b77261bf0f50ccee1ae4e4f5305a2626adf90371d2f571d4d71da0c83ae2e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_tu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:08:38 np0005558241 podman[256479]: 2025-12-13 08:08:38.176204264 +0000 UTC m=+1.299557432 container died 17b77261bf0f50ccee1ae4e4f5305a2626adf90371d2f571d4d71da0c83ae2e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_tu, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 03:08:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:08:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Dec 13 03:08:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Dec 13 03:08:38 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e3374ef5c52f2a91bf5f35bd639616cabe08a910336402f4e4ccb92578f7530e-merged.mount: Deactivated successfully.
Dec 13 03:08:38 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Dec 13 03:08:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 33 MiB data, 170 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 307 B/s wr, 9 op/s
Dec 13 03:08:38 np0005558241 podman[256479]: 2025-12-13 08:08:38.860202761 +0000 UTC m=+1.983555929 container remove 17b77261bf0f50ccee1ae4e4f5305a2626adf90371d2f571d4d71da0c83ae2e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_tu, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 03:08:38 np0005558241 systemd[1]: libpod-conmon-17b77261bf0f50ccee1ae4e4f5305a2626adf90371d2f571d4d71da0c83ae2e2.scope: Deactivated successfully.
Dec 13 03:08:39 np0005558241 podman[256521]: 2025-12-13 08:08:39.055105356 +0000 UTC m=+0.053825745 container create 92d9de59709a41ed5b12fb2780c9b81a9d5792120c404e1c96b93d0458bc9a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_ellis, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:08:39 np0005558241 systemd[1]: Started libpod-conmon-92d9de59709a41ed5b12fb2780c9b81a9d5792120c404e1c96b93d0458bc9a77.scope.
Dec 13 03:08:39 np0005558241 podman[256521]: 2025-12-13 08:08:39.030152312 +0000 UTC m=+0.028872721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:08:39 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:08:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b2e7bdda9361fbff0599c0717e4d9b4fe6ca7afd44534a2e26c264e04be4189/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:08:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b2e7bdda9361fbff0599c0717e4d9b4fe6ca7afd44534a2e26c264e04be4189/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:08:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b2e7bdda9361fbff0599c0717e4d9b4fe6ca7afd44534a2e26c264e04be4189/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:08:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b2e7bdda9361fbff0599c0717e4d9b4fe6ca7afd44534a2e26c264e04be4189/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:08:39 np0005558241 podman[256521]: 2025-12-13 08:08:39.423356255 +0000 UTC m=+0.422076674 container init 92d9de59709a41ed5b12fb2780c9b81a9d5792120c404e1c96b93d0458bc9a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Dec 13 03:08:39 np0005558241 podman[256521]: 2025-12-13 08:08:39.430085261 +0000 UTC m=+0.428805650 container start 92d9de59709a41ed5b12fb2780c9b81a9d5792120c404e1c96b93d0458bc9a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_ellis, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 03:08:39 np0005558241 podman[256521]: 2025-12-13 08:08:39.44183826 +0000 UTC m=+0.440558659 container attach 92d9de59709a41ed5b12fb2780c9b81a9d5792120c404e1c96b93d0458bc9a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]: {
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:    "0": [
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:        {
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "devices": [
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "/dev/loop3"
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            ],
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "lv_name": "ceph_lv0",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "lv_size": "21470642176",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "name": "ceph_lv0",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "tags": {
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.cluster_name": "ceph",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.crush_device_class": "",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.encrypted": "0",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.objectstore": "bluestore",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.osd_id": "0",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.type": "block",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.vdo": "0",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.with_tpm": "0"
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            },
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "type": "block",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "vg_name": "ceph_vg0"
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:        }
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:    ],
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:    "1": [
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:        {
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "devices": [
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "/dev/loop4"
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            ],
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "lv_name": "ceph_lv1",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "lv_size": "21470642176",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "name": "ceph_lv1",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "tags": {
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.cluster_name": "ceph",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.crush_device_class": "",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.encrypted": "0",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.objectstore": "bluestore",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.osd_id": "1",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.type": "block",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.vdo": "0",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.with_tpm": "0"
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            },
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "type": "block",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "vg_name": "ceph_vg1"
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:        }
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:    ],
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:    "2": [
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:        {
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "devices": [
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "/dev/loop5"
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            ],
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "lv_name": "ceph_lv2",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "lv_size": "21470642176",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "name": "ceph_lv2",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "tags": {
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.cluster_name": "ceph",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.crush_device_class": "",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.encrypted": "0",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.objectstore": "bluestore",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.osd_id": "2",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.type": "block",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.vdo": "0",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:                "ceph.with_tpm": "0"
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            },
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "type": "block",
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:            "vg_name": "ceph_vg2"
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:        }
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]:    ]
Dec 13 03:08:39 np0005558241 wonderful_ellis[256537]: }
Dec 13 03:08:39 np0005558241 systemd[1]: libpod-92d9de59709a41ed5b12fb2780c9b81a9d5792120c404e1c96b93d0458bc9a77.scope: Deactivated successfully.
Dec 13 03:08:39 np0005558241 podman[256521]: 2025-12-13 08:08:39.78044567 +0000 UTC m=+0.779166059 container died 92d9de59709a41ed5b12fb2780c9b81a9d5792120c404e1c96b93d0458bc9a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_ellis, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:08:39 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8b2e7bdda9361fbff0599c0717e4d9b4fe6ca7afd44534a2e26c264e04be4189-merged.mount: Deactivated successfully.
Dec 13 03:08:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:08:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:08:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:08:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:08:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:08:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:08:40 np0005558241 podman[256521]: 2025-12-13 08:08:40.306731167 +0000 UTC m=+1.305451556 container remove 92d9de59709a41ed5b12fb2780c9b81a9d5792120c404e1c96b93d0458bc9a77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Dec 13 03:08:40 np0005558241 systemd[1]: libpod-conmon-92d9de59709a41ed5b12fb2780c9b81a9d5792120c404e1c96b93d0458bc9a77.scope: Deactivated successfully.
Dec 13 03:08:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 8.1 KiB/s rd, 716 B/s wr, 11 op/s
Dec 13 03:08:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Dec 13 03:08:40 np0005558241 podman[256622]: 2025-12-13 08:08:40.847332476 +0000 UTC m=+0.030974013 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:08:41 np0005558241 podman[256622]: 2025-12-13 08:08:41.229008516 +0000 UTC m=+0.412650023 container create e74d7296616ac3d273ee7d6baa7e6a069fd8338b5fb6d1ce1c9df684deebb892 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_spence, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:08:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Dec 13 03:08:41 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Dec 13 03:08:41 np0005558241 systemd[1]: Started libpod-conmon-e74d7296616ac3d273ee7d6baa7e6a069fd8338b5fb6d1ce1c9df684deebb892.scope.
Dec 13 03:08:41 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:08:41 np0005558241 podman[256622]: 2025-12-13 08:08:41.474886045 +0000 UTC m=+0.658527552 container init e74d7296616ac3d273ee7d6baa7e6a069fd8338b5fb6d1ce1c9df684deebb892 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_spence, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:08:41 np0005558241 podman[256622]: 2025-12-13 08:08:41.482738428 +0000 UTC m=+0.666379935 container start e74d7296616ac3d273ee7d6baa7e6a069fd8338b5fb6d1ce1c9df684deebb892 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_spence, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:08:41 np0005558241 determined_spence[256639]: 167 167
Dec 13 03:08:41 np0005558241 systemd[1]: libpod-e74d7296616ac3d273ee7d6baa7e6a069fd8338b5fb6d1ce1c9df684deebb892.scope: Deactivated successfully.
Dec 13 03:08:41 np0005558241 podman[256622]: 2025-12-13 08:08:41.499421078 +0000 UTC m=+0.683062585 container attach e74d7296616ac3d273ee7d6baa7e6a069fd8338b5fb6d1ce1c9df684deebb892 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_spence, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 03:08:41 np0005558241 podman[256622]: 2025-12-13 08:08:41.50069606 +0000 UTC m=+0.684337567 container died e74d7296616ac3d273ee7d6baa7e6a069fd8338b5fb6d1ce1c9df684deebb892 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_spence, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:08:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay-35d0024819f2f01e6a278c8560d04709a2c9f58503d03135fecea6771c0ff7d2-merged.mount: Deactivated successfully.
Dec 13 03:08:41 np0005558241 podman[256622]: 2025-12-13 08:08:41.738169211 +0000 UTC m=+0.921810718 container remove e74d7296616ac3d273ee7d6baa7e6a069fd8338b5fb6d1ce1c9df684deebb892 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_spence, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:08:41 np0005558241 systemd[1]: libpod-conmon-e74d7296616ac3d273ee7d6baa7e6a069fd8338b5fb6d1ce1c9df684deebb892.scope: Deactivated successfully.
Dec 13 03:08:41 np0005558241 podman[256665]: 2025-12-13 08:08:41.978260918 +0000 UTC m=+0.107704121 container create f02e959817f82a5a5c1377dbab014d8adda7c1e245c116cf97580cd41e7d3fec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:08:41 np0005558241 podman[256665]: 2025-12-13 08:08:41.901574961 +0000 UTC m=+0.031018174 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:08:42 np0005558241 systemd[1]: Started libpod-conmon-f02e959817f82a5a5c1377dbab014d8adda7c1e245c116cf97580cd41e7d3fec.scope.
Dec 13 03:08:42 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:08:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a07a01d2aebc0ad9b67867d22440697c9f84bb477614ce2fda0b88545bfec3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:08:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a07a01d2aebc0ad9b67867d22440697c9f84bb477614ce2fda0b88545bfec3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:08:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a07a01d2aebc0ad9b67867d22440697c9f84bb477614ce2fda0b88545bfec3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:08:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a07a01d2aebc0ad9b67867d22440697c9f84bb477614ce2fda0b88545bfec3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:08:42 np0005558241 podman[256665]: 2025-12-13 08:08:42.492907309 +0000 UTC m=+0.622350542 container init f02e959817f82a5a5c1377dbab014d8adda7c1e245c116cf97580cd41e7d3fec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Dec 13 03:08:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 767 B/s wr, 8 op/s
Dec 13 03:08:42 np0005558241 podman[256665]: 2025-12-13 08:08:42.500238499 +0000 UTC m=+0.629681702 container start f02e959817f82a5a5c1377dbab014d8adda7c1e245c116cf97580cd41e7d3fec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 03:08:42 np0005558241 podman[256665]: 2025-12-13 08:08:42.584941903 +0000 UTC m=+0.714385136 container attach f02e959817f82a5a5c1377dbab014d8adda7c1e245c116cf97580cd41e7d3fec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:08:42.594200) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613322594251, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2107, "num_deletes": 252, "total_data_size": 3682961, "memory_usage": 3734328, "flush_reason": "Manual Compaction"}
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613322777122, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3590055, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21224, "largest_seqno": 23330, "table_properties": {"data_size": 3580408, "index_size": 6141, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19372, "raw_average_key_size": 20, "raw_value_size": 3561166, "raw_average_value_size": 3709, "num_data_blocks": 275, "num_entries": 960, "num_filter_entries": 960, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765613094, "oldest_key_time": 1765613094, "file_creation_time": 1765613322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 182968 microseconds, and 9480 cpu microseconds.
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:08:42.777166) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3590055 bytes OK
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:08:42.777188) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:08:42.786559) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:08:42.786593) EVENT_LOG_v1 {"time_micros": 1765613322786585, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:08:42.786620) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3674080, prev total WAL file size 3674080, number of live WAL files 2.
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:08:42.787619) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3505KB)], [50(8161KB)]
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613322787665, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11947050, "oldest_snapshot_seqno": -1}
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4960 keys, 10110083 bytes, temperature: kUnknown
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613322944903, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 10110083, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10073839, "index_size": 22758, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 122009, "raw_average_key_size": 24, "raw_value_size": 9981106, "raw_average_value_size": 2012, "num_data_blocks": 955, "num_entries": 4960, "num_filter_entries": 4960, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765613322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:08:42.945248) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 10110083 bytes
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:08:42.947327) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 75.9 rd, 64.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 8.0 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5479, records dropped: 519 output_compression: NoCompression
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:08:42.947351) EVENT_LOG_v1 {"time_micros": 1765613322947340, "job": 26, "event": "compaction_finished", "compaction_time_micros": 157377, "compaction_time_cpu_micros": 20561, "output_level": 6, "num_output_files": 1, "total_output_size": 10110083, "num_input_records": 5479, "num_output_records": 4960, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613322948166, "job": 26, "event": "table_file_deletion", "file_number": 52}
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613322950475, "job": 26, "event": "table_file_deletion", "file_number": 50}
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:08:42.787556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:08:42.950756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:08:42.950762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:08:42.950764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:08:42.950765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:08:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:08:42.950767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:08:43 np0005558241 lvm[256762]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:08:43 np0005558241 lvm[256760]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:08:43 np0005558241 lvm[256762]: VG ceph_vg1 finished
Dec 13 03:08:43 np0005558241 lvm[256760]: VG ceph_vg0 finished
Dec 13 03:08:43 np0005558241 lvm[256764]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:08:43 np0005558241 lvm[256764]: VG ceph_vg2 finished
Dec 13 03:08:43 np0005558241 lvm[256765]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:08:43 np0005558241 lvm[256765]: VG ceph_vg1 finished
Dec 13 03:08:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:08:43 np0005558241 great_mclean[256682]: {}
Dec 13 03:08:43 np0005558241 systemd[1]: libpod-f02e959817f82a5a5c1377dbab014d8adda7c1e245c116cf97580cd41e7d3fec.scope: Deactivated successfully.
Dec 13 03:08:43 np0005558241 systemd[1]: libpod-f02e959817f82a5a5c1377dbab014d8adda7c1e245c116cf97580cd41e7d3fec.scope: Consumed 1.465s CPU time.
Dec 13 03:08:43 np0005558241 podman[256665]: 2025-12-13 08:08:43.438245995 +0000 UTC m=+1.567689208 container died f02e959817f82a5a5c1377dbab014d8adda7c1e245c116cf97580cd41e7d3fec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 03:08:43 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e1a07a01d2aebc0ad9b67867d22440697c9f84bb477614ce2fda0b88545bfec3-merged.mount: Deactivated successfully.
Dec 13 03:08:43 np0005558241 podman[256665]: 2025-12-13 08:08:43.580594517 +0000 UTC m=+1.710037720 container remove f02e959817f82a5a5c1377dbab014d8adda7c1e245c116cf97580cd41e7d3fec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 03:08:43 np0005558241 systemd[1]: libpod-conmon-f02e959817f82a5a5c1377dbab014d8adda7c1e245c116cf97580cd41e7d3fec.scope: Deactivated successfully.
Dec 13 03:08:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:08:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:08:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:08:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:08:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 461 KiB data, 141 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 38 op/s
Dec 13 03:08:44 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:08:44 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:08:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 461 KiB data, 141 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.0 KiB/s wr, 32 op/s
Dec 13 03:08:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 KiB/s wr, 27 op/s
Dec 13 03:08:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:08:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Dec 13 03:08:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Dec 13 03:08:48 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Dec 13 03:08:49 np0005558241 podman[256808]: 2025-12-13 08:08:49.977943295 +0000 UTC m=+0.060662763 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 13 03:08:50 np0005558241 podman[256807]: 2025-12-13 08:08:50.002847088 +0000 UTC m=+0.079140608 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd)
Dec 13 03:08:50 np0005558241 podman[256806]: 2025-12-13 08:08:50.016479693 +0000 UTC m=+0.102120173 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 03:08:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Dec 13 03:08:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 03:08:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:08:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 204 B/s wr, 1 op/s
Dec 13 03:08:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:08:55.388 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:08:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:08:55.388 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:08:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:08:55.388 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:08:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 204 B/s wr, 1 op/s
Dec 13 03:08:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 921 B/s wr, 9 op/s
Dec 13 03:08:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:08:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Dec 13 03:09:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Dec 13 03:09:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s rd, 1023 B/s wr, 9 op/s
Dec 13 03:09:00 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Dec 13 03:09:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s rd, 1023 B/s wr, 9 op/s
Dec 13 03:09:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:09:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 13 03:09:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 13 03:09:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 716 B/s wr, 5 op/s
Dec 13 03:09:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:09:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:09:09
Dec 13 03:09:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:09:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:09:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'default.rgw.log', 'backups', 'volumes', '.rgw.root', 'cephfs.cephfs.meta']
Dec 13 03:09:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:09:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:09:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:09:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:09:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:09:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:09:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:09:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:09:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:09:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:09:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:09:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:09:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:09:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:09:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:09:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:09:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:09:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 603 B/s wr, 5 op/s
Dec 13 03:09:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 511 B/s wr, 4 op/s
Dec 13 03:09:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:09:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 511 B/s wr, 4 op/s
Dec 13 03:09:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:09:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4020292572' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:09:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:09:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4020292572' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:09:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.1364976013902964e-06 of space, bias 1.0, pg target 0.0003409492804170889 quantized to 32 (current 32)
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.119202667671527e-06 of space, bias 4.0, pg target 0.0013430432012058323 quantized to 16 (current 32)
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:09:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:09:20 np0005558241 podman[256877]: 2025-12-13 08:09:20.973141577 +0000 UTC m=+0.057825264 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Dec 13 03:09:20 np0005558241 podman[256876]: 2025-12-13 08:09:20.973908165 +0000 UTC m=+0.062933619 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec 13 03:09:21 np0005558241 podman[256875]: 2025-12-13 08:09:21.004255332 +0000 UTC m=+0.095480690 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 03:09:21 np0005558241 nova_compute[248510]: 2025-12-13 08:09:21.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:09:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:09:23 np0005558241 nova_compute[248510]: 2025-12-13 08:09:23.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:09:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:24 np0005558241 nova_compute[248510]: 2025-12-13 08:09:24.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:09:24 np0005558241 nova_compute[248510]: 2025-12-13 08:09:24.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:09:24 np0005558241 nova_compute[248510]: 2025-12-13 08:09:24.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:09:24 np0005558241 nova_compute[248510]: 2025-12-13 08:09:24.790 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:09:24 np0005558241 nova_compute[248510]: 2025-12-13 08:09:24.790 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:09:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:26 np0005558241 nova_compute[248510]: 2025-12-13 08:09:26.788 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:09:26 np0005558241 nova_compute[248510]: 2025-12-13 08:09:26.831 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:09:26 np0005558241 nova_compute[248510]: 2025-12-13 08:09:26.831 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:09:26 np0005558241 nova_compute[248510]: 2025-12-13 08:09:26.831 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:09:26 np0005558241 nova_compute[248510]: 2025-12-13 08:09:26.832 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:09:26 np0005558241 nova_compute[248510]: 2025-12-13 08:09:26.832 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:09:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:09:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3788769115' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:09:27 np0005558241 nova_compute[248510]: 2025-12-13 08:09:27.535 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.703s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:09:27 np0005558241 nova_compute[248510]: 2025-12-13 08:09:27.717 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:09:27 np0005558241 nova_compute[248510]: 2025-12-13 08:09:27.718 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5115MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:09:27 np0005558241 nova_compute[248510]: 2025-12-13 08:09:27.718 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:09:27 np0005558241 nova_compute[248510]: 2025-12-13 08:09:27.718 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:09:27 np0005558241 nova_compute[248510]: 2025-12-13 08:09:27.784 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:09:27 np0005558241 nova_compute[248510]: 2025-12-13 08:09:27.785 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:09:27 np0005558241 nova_compute[248510]: 2025-12-13 08:09:27.803 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:09:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:09:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2629669499' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:09:28 np0005558241 nova_compute[248510]: 2025-12-13 08:09:28.343 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:09:28 np0005558241 nova_compute[248510]: 2025-12-13 08:09:28.350 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:09:28 np0005558241 nova_compute[248510]: 2025-12-13 08:09:28.369 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:09:28 np0005558241 nova_compute[248510]: 2025-12-13 08:09:28.372 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:09:28 np0005558241 nova_compute[248510]: 2025-12-13 08:09:28.372 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:09:28 np0005558241 nova_compute[248510]: 2025-12-13 08:09:28.373 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:09:28 np0005558241 nova_compute[248510]: 2025-12-13 08:09:28.373 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 13 03:09:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:09:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:28 np0005558241 nova_compute[248510]: 2025-12-13 08:09:28.785 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:09:28 np0005558241 nova_compute[248510]: 2025-12-13 08:09:28.812 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:09:28 np0005558241 nova_compute[248510]: 2025-12-13 08:09:28.813 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:09:28 np0005558241 nova_compute[248510]: 2025-12-13 08:09:28.813 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:09:28 np0005558241 nova_compute[248510]: 2025-12-13 08:09:28.813 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 03:09:28 np0005558241 nova_compute[248510]: 2025-12-13 08:09:28.829 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 03:09:29 np0005558241 nova_compute[248510]: 2025-12-13 08:09:29.787 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:09:29 np0005558241 nova_compute[248510]: 2025-12-13 08:09:29.788 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:09:29 np0005558241 nova_compute[248510]: 2025-12-13 08:09:29.788 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 03:09:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:30 np0005558241 nova_compute[248510]: 2025-12-13 08:09:30.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:09:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:09:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:09:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:09:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:09:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:09:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:09:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:09:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:09:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:43 np0005558241 nova_compute[248510]: 2025-12-13 08:09:43.226 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:09:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:09:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:09:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:09:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:09:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:09:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:09:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:09:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:09:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:09:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:09:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:09:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:09:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:09:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:45 np0005558241 podman[257125]: 2025-12-13 08:09:44.957586996 +0000 UTC m=+0.024003141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:09:45 np0005558241 podman[257125]: 2025-12-13 08:09:45.10770864 +0000 UTC m=+0.174124755 container create 31a97cabb5f33acc083f1c247edca6cb2fa2de14e0583ef11cdd809cdd4b2444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_chatterjee, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:09:45 np0005558241 systemd[1]: Started libpod-conmon-31a97cabb5f33acc083f1c247edca6cb2fa2de14e0583ef11cdd809cdd4b2444.scope.
Dec 13 03:09:45 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:09:45 np0005558241 podman[257125]: 2025-12-13 08:09:45.713583576 +0000 UTC m=+0.779999751 container init 31a97cabb5f33acc083f1c247edca6cb2fa2de14e0583ef11cdd809cdd4b2444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_chatterjee, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:09:45 np0005558241 podman[257125]: 2025-12-13 08:09:45.721792887 +0000 UTC m=+0.788209012 container start 31a97cabb5f33acc083f1c247edca6cb2fa2de14e0583ef11cdd809cdd4b2444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_chatterjee, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 03:09:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:09:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:09:45 np0005558241 systemd[1]: libpod-31a97cabb5f33acc083f1c247edca6cb2fa2de14e0583ef11cdd809cdd4b2444.scope: Deactivated successfully.
Dec 13 03:09:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:09:45 np0005558241 eager_chatterjee[257141]: 167 167
Dec 13 03:09:45 np0005558241 conmon[257141]: conmon 31a97cabb5f33acc083f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31a97cabb5f33acc083f1c247edca6cb2fa2de14e0583ef11cdd809cdd4b2444.scope/container/memory.events
Dec 13 03:09:45 np0005558241 podman[257125]: 2025-12-13 08:09:45.74790143 +0000 UTC m=+0.814317585 container attach 31a97cabb5f33acc083f1c247edca6cb2fa2de14e0583ef11cdd809cdd4b2444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_chatterjee, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 03:09:45 np0005558241 podman[257125]: 2025-12-13 08:09:45.748417722 +0000 UTC m=+0.814833847 container died 31a97cabb5f33acc083f1c247edca6cb2fa2de14e0583ef11cdd809cdd4b2444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_chatterjee, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:09:45 np0005558241 systemd[1]: var-lib-containers-storage-overlay-81520eddeb04881a88d9a106775a04793973dbcdfb90d3d28547eeed8e37750b-merged.mount: Deactivated successfully.
Dec 13 03:09:45 np0005558241 podman[257125]: 2025-12-13 08:09:45.852220816 +0000 UTC m=+0.918636941 container remove 31a97cabb5f33acc083f1c247edca6cb2fa2de14e0583ef11cdd809cdd4b2444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:09:45 np0005558241 systemd[1]: libpod-conmon-31a97cabb5f33acc083f1c247edca6cb2fa2de14e0583ef11cdd809cdd4b2444.scope: Deactivated successfully.
Dec 13 03:09:46 np0005558241 podman[257167]: 2025-12-13 08:09:46.045109271 +0000 UTC m=+0.053778884 container create 934316f7a182f07f1706fa68a8cb1bc5d6c2dee31df948c3a98ed78f95354fa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_cannon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:09:46 np0005558241 systemd[1]: Started libpod-conmon-934316f7a182f07f1706fa68a8cb1bc5d6c2dee31df948c3a98ed78f95354fa4.scope.
Dec 13 03:09:46 np0005558241 podman[257167]: 2025-12-13 08:09:46.017296117 +0000 UTC m=+0.025965730 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:09:46 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:09:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b5437d7ab6362fe839c2c4511c5a02d7784bf8f3808ac1b24926e2433aeb4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:09:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b5437d7ab6362fe839c2c4511c5a02d7784bf8f3808ac1b24926e2433aeb4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:09:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b5437d7ab6362fe839c2c4511c5a02d7784bf8f3808ac1b24926e2433aeb4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:09:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b5437d7ab6362fe839c2c4511c5a02d7784bf8f3808ac1b24926e2433aeb4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:09:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b5437d7ab6362fe839c2c4511c5a02d7784bf8f3808ac1b24926e2433aeb4a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:09:46 np0005558241 podman[257167]: 2025-12-13 08:09:46.142264652 +0000 UTC m=+0.150934255 container init 934316f7a182f07f1706fa68a8cb1bc5d6c2dee31df948c3a98ed78f95354fa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_cannon, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:09:46 np0005558241 podman[257167]: 2025-12-13 08:09:46.152649837 +0000 UTC m=+0.161319420 container start 934316f7a182f07f1706fa68a8cb1bc5d6c2dee31df948c3a98ed78f95354fa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Dec 13 03:09:46 np0005558241 podman[257167]: 2025-12-13 08:09:46.158288666 +0000 UTC m=+0.166958259 container attach 934316f7a182f07f1706fa68a8cb1bc5d6c2dee31df948c3a98ed78f95354fa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:09:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:46 np0005558241 upbeat_cannon[257183]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:09:46 np0005558241 upbeat_cannon[257183]: --> All data devices are unavailable
Dec 13 03:09:46 np0005558241 systemd[1]: libpod-934316f7a182f07f1706fa68a8cb1bc5d6c2dee31df948c3a98ed78f95354fa4.scope: Deactivated successfully.
Dec 13 03:09:46 np0005558241 podman[257167]: 2025-12-13 08:09:46.699671554 +0000 UTC m=+0.708341147 container died 934316f7a182f07f1706fa68a8cb1bc5d6c2dee31df948c3a98ed78f95354fa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:09:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c5b5437d7ab6362fe839c2c4511c5a02d7784bf8f3808ac1b24926e2433aeb4a-merged.mount: Deactivated successfully.
Dec 13 03:09:46 np0005558241 podman[257167]: 2025-12-13 08:09:46.757664491 +0000 UTC m=+0.766334074 container remove 934316f7a182f07f1706fa68a8cb1bc5d6c2dee31df948c3a98ed78f95354fa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:09:46 np0005558241 systemd[1]: libpod-conmon-934316f7a182f07f1706fa68a8cb1bc5d6c2dee31df948c3a98ed78f95354fa4.scope: Deactivated successfully.
Dec 13 03:09:47 np0005558241 podman[257276]: 2025-12-13 08:09:47.301607643 +0000 UTC m=+0.051932109 container create ae98b6ef2ec3d4b70a1abe990ed51b621a65a659f859b4d2f31fe1dbc2dee49b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_goldwasser, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 03:09:47 np0005558241 systemd[1]: Started libpod-conmon-ae98b6ef2ec3d4b70a1abe990ed51b621a65a659f859b4d2f31fe1dbc2dee49b.scope.
Dec 13 03:09:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:09:47 np0005558241 podman[257276]: 2025-12-13 08:09:47.279116049 +0000 UTC m=+0.029440535 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:09:47 np0005558241 podman[257276]: 2025-12-13 08:09:47.411768383 +0000 UTC m=+0.162092869 container init ae98b6ef2ec3d4b70a1abe990ed51b621a65a659f859b4d2f31fe1dbc2dee49b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:09:47 np0005558241 podman[257276]: 2025-12-13 08:09:47.420430786 +0000 UTC m=+0.170755252 container start ae98b6ef2ec3d4b70a1abe990ed51b621a65a659f859b4d2f31fe1dbc2dee49b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_goldwasser, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 03:09:47 np0005558241 podman[257276]: 2025-12-13 08:09:47.427445139 +0000 UTC m=+0.177769625 container attach ae98b6ef2ec3d4b70a1abe990ed51b621a65a659f859b4d2f31fe1dbc2dee49b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 03:09:47 np0005558241 focused_goldwasser[257292]: 167 167
Dec 13 03:09:47 np0005558241 systemd[1]: libpod-ae98b6ef2ec3d4b70a1abe990ed51b621a65a659f859b4d2f31fe1dbc2dee49b.scope: Deactivated successfully.
Dec 13 03:09:47 np0005558241 podman[257276]: 2025-12-13 08:09:47.429349805 +0000 UTC m=+0.179674271 container died ae98b6ef2ec3d4b70a1abe990ed51b621a65a659f859b4d2f31fe1dbc2dee49b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:09:47 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1bcbf864268b0ac134bc14c4f6bba5b1b403ec50dac4b38c539a8acb2ae5d1cf-merged.mount: Deactivated successfully.
Dec 13 03:09:47 np0005558241 podman[257276]: 2025-12-13 08:09:47.480251498 +0000 UTC m=+0.230575964 container remove ae98b6ef2ec3d4b70a1abe990ed51b621a65a659f859b4d2f31fe1dbc2dee49b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:09:47 np0005558241 systemd[1]: libpod-conmon-ae98b6ef2ec3d4b70a1abe990ed51b621a65a659f859b4d2f31fe1dbc2dee49b.scope: Deactivated successfully.
Dec 13 03:09:47 np0005558241 podman[257317]: 2025-12-13 08:09:47.670197921 +0000 UTC m=+0.058145462 container create 1f985a8bd3db805df34713076113b08e89e0233414ca35650807fffb5aea9bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 03:09:47 np0005558241 systemd[1]: Started libpod-conmon-1f985a8bd3db805df34713076113b08e89e0233414ca35650807fffb5aea9bad.scope.
Dec 13 03:09:47 np0005558241 podman[257317]: 2025-12-13 08:09:47.64539665 +0000 UTC m=+0.033344221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:09:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:09:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e28eebd024e21994bf1e25a9178d42ecc4d85132ffcd7fe4f5a6467f2333393/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:09:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e28eebd024e21994bf1e25a9178d42ecc4d85132ffcd7fe4f5a6467f2333393/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:09:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e28eebd024e21994bf1e25a9178d42ecc4d85132ffcd7fe4f5a6467f2333393/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:09:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e28eebd024e21994bf1e25a9178d42ecc4d85132ffcd7fe4f5a6467f2333393/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:09:47 np0005558241 podman[257317]: 2025-12-13 08:09:47.773700377 +0000 UTC m=+0.161647938 container init 1f985a8bd3db805df34713076113b08e89e0233414ca35650807fffb5aea9bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:09:47 np0005558241 podman[257317]: 2025-12-13 08:09:47.782234937 +0000 UTC m=+0.170182488 container start 1f985a8bd3db805df34713076113b08e89e0233414ca35650807fffb5aea9bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_johnson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 03:09:47 np0005558241 podman[257317]: 2025-12-13 08:09:47.787197909 +0000 UTC m=+0.175145450 container attach 1f985a8bd3db805df34713076113b08e89e0233414ca35650807fffb5aea9bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 03:09:48 np0005558241 magical_johnson[257334]: {
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:    "0": [
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:        {
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "devices": [
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "/dev/loop3"
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            ],
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "lv_name": "ceph_lv0",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "lv_size": "21470642176",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "name": "ceph_lv0",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "tags": {
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.cluster_name": "ceph",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.crush_device_class": "",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.encrypted": "0",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.objectstore": "bluestore",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.osd_id": "0",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.type": "block",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.vdo": "0",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.with_tpm": "0"
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            },
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "type": "block",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "vg_name": "ceph_vg0"
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:        }
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:    ],
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:    "1": [
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:        {
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "devices": [
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "/dev/loop4"
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            ],
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "lv_name": "ceph_lv1",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "lv_size": "21470642176",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "name": "ceph_lv1",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "tags": {
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.cluster_name": "ceph",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.crush_device_class": "",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.encrypted": "0",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.objectstore": "bluestore",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.osd_id": "1",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.type": "block",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.vdo": "0",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.with_tpm": "0"
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            },
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "type": "block",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "vg_name": "ceph_vg1"
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:        }
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:    ],
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:    "2": [
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:        {
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "devices": [
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "/dev/loop5"
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            ],
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "lv_name": "ceph_lv2",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "lv_size": "21470642176",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "name": "ceph_lv2",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "tags": {
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.cluster_name": "ceph",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.crush_device_class": "",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.encrypted": "0",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.objectstore": "bluestore",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.osd_id": "2",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.type": "block",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.vdo": "0",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:                "ceph.with_tpm": "0"
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            },
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "type": "block",
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:            "vg_name": "ceph_vg2"
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:        }
Dec 13 03:09:48 np0005558241 magical_johnson[257334]:    ]
Dec 13 03:09:48 np0005558241 magical_johnson[257334]: }
Dec 13 03:09:48 np0005558241 systemd[1]: libpod-1f985a8bd3db805df34713076113b08e89e0233414ca35650807fffb5aea9bad.scope: Deactivated successfully.
Dec 13 03:09:48 np0005558241 podman[257317]: 2025-12-13 08:09:48.110125543 +0000 UTC m=+0.498073084 container died 1f985a8bd3db805df34713076113b08e89e0233414ca35650807fffb5aea9bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_johnson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 03:09:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1e28eebd024e21994bf1e25a9178d42ecc4d85132ffcd7fe4f5a6467f2333393-merged.mount: Deactivated successfully.
Dec 13 03:09:48 np0005558241 podman[257317]: 2025-12-13 08:09:48.163489866 +0000 UTC m=+0.551437407 container remove 1f985a8bd3db805df34713076113b08e89e0233414ca35650807fffb5aea9bad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_johnson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:09:48 np0005558241 systemd[1]: libpod-conmon-1f985a8bd3db805df34713076113b08e89e0233414ca35650807fffb5aea9bad.scope: Deactivated successfully.
Dec 13 03:09:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:09:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:48 np0005558241 podman[257415]: 2025-12-13 08:09:48.719032702 +0000 UTC m=+0.052468632 container create 69b7ea17137dc9e80827bc8743390bbb38bb07eca01f6849f70819bc580e5955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Dec 13 03:09:48 np0005558241 systemd[1]: Started libpod-conmon-69b7ea17137dc9e80827bc8743390bbb38bb07eca01f6849f70819bc580e5955.scope.
Dec 13 03:09:48 np0005558241 podman[257415]: 2025-12-13 08:09:48.695038262 +0000 UTC m=+0.028474212 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:09:48 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:09:48 np0005558241 podman[257415]: 2025-12-13 08:09:48.819257038 +0000 UTC m=+0.152692978 container init 69b7ea17137dc9e80827bc8743390bbb38bb07eca01f6849f70819bc580e5955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_feynman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:09:48 np0005558241 podman[257415]: 2025-12-13 08:09:48.828564177 +0000 UTC m=+0.162000107 container start 69b7ea17137dc9e80827bc8743390bbb38bb07eca01f6849f70819bc580e5955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_feynman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 03:09:48 np0005558241 podman[257415]: 2025-12-13 08:09:48.832422492 +0000 UTC m=+0.165858452 container attach 69b7ea17137dc9e80827bc8743390bbb38bb07eca01f6849f70819bc580e5955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:09:48 np0005558241 intelligent_feynman[257431]: 167 167
Dec 13 03:09:48 np0005558241 systemd[1]: libpod-69b7ea17137dc9e80827bc8743390bbb38bb07eca01f6849f70819bc580e5955.scope: Deactivated successfully.
Dec 13 03:09:48 np0005558241 podman[257415]: 2025-12-13 08:09:48.835652201 +0000 UTC m=+0.169088141 container died 69b7ea17137dc9e80827bc8743390bbb38bb07eca01f6849f70819bc580e5955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:09:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-107db2c7858ed9122aabb33611631e2aa9ee6f30cc80985eb9d0b021bd5ac58a-merged.mount: Deactivated successfully.
Dec 13 03:09:48 np0005558241 podman[257415]: 2025-12-13 08:09:48.88927926 +0000 UTC m=+0.222715190 container remove 69b7ea17137dc9e80827bc8743390bbb38bb07eca01f6849f70819bc580e5955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Dec 13 03:09:48 np0005558241 systemd[1]: libpod-conmon-69b7ea17137dc9e80827bc8743390bbb38bb07eca01f6849f70819bc580e5955.scope: Deactivated successfully.
Dec 13 03:09:49 np0005558241 podman[257454]: 2025-12-13 08:09:49.072553689 +0000 UTC m=+0.051424956 container create e29bf28f9f3c0f503231d625134aa296f1357d382e6fb5397164d43f2a55dbe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hofstadter, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Dec 13 03:09:49 np0005558241 systemd[1]: Started libpod-conmon-e29bf28f9f3c0f503231d625134aa296f1357d382e6fb5397164d43f2a55dbe6.scope.
Dec 13 03:09:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:09:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9330e5c99d4cdc3c2f1ea20e5b645f0113b266405bb535e346354466db05cc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:09:49 np0005558241 podman[257454]: 2025-12-13 08:09:49.04982784 +0000 UTC m=+0.028699137 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:09:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9330e5c99d4cdc3c2f1ea20e5b645f0113b266405bb535e346354466db05cc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:09:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9330e5c99d4cdc3c2f1ea20e5b645f0113b266405bb535e346354466db05cc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:09:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9330e5c99d4cdc3c2f1ea20e5b645f0113b266405bb535e346354466db05cc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:09:49 np0005558241 podman[257454]: 2025-12-13 08:09:49.160330728 +0000 UTC m=+0.139202015 container init e29bf28f9f3c0f503231d625134aa296f1357d382e6fb5397164d43f2a55dbe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:09:49 np0005558241 podman[257454]: 2025-12-13 08:09:49.168325195 +0000 UTC m=+0.147196462 container start e29bf28f9f3c0f503231d625134aa296f1357d382e6fb5397164d43f2a55dbe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 03:09:49 np0005558241 podman[257454]: 2025-12-13 08:09:49.175741948 +0000 UTC m=+0.154613215 container attach e29bf28f9f3c0f503231d625134aa296f1357d382e6fb5397164d43f2a55dbe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hofstadter, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 03:09:49 np0005558241 lvm[257550]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:09:49 np0005558241 lvm[257549]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:09:49 np0005558241 lvm[257550]: VG ceph_vg1 finished
Dec 13 03:09:49 np0005558241 lvm[257549]: VG ceph_vg0 finished
Dec 13 03:09:49 np0005558241 lvm[257552]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:09:49 np0005558241 lvm[257552]: VG ceph_vg2 finished
Dec 13 03:09:50 np0005558241 affectionate_hofstadter[257471]: {}
Dec 13 03:09:50 np0005558241 systemd[1]: libpod-e29bf28f9f3c0f503231d625134aa296f1357d382e6fb5397164d43f2a55dbe6.scope: Deactivated successfully.
Dec 13 03:09:50 np0005558241 systemd[1]: libpod-e29bf28f9f3c0f503231d625134aa296f1357d382e6fb5397164d43f2a55dbe6.scope: Consumed 1.477s CPU time.
Dec 13 03:09:50 np0005558241 podman[257454]: 2025-12-13 08:09:50.050309623 +0000 UTC m=+1.029180970 container died e29bf28f9f3c0f503231d625134aa296f1357d382e6fb5397164d43f2a55dbe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hofstadter, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:09:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a9330e5c99d4cdc3c2f1ea20e5b645f0113b266405bb535e346354466db05cc0-merged.mount: Deactivated successfully.
Dec 13 03:09:50 np0005558241 podman[257454]: 2025-12-13 08:09:50.228794534 +0000 UTC m=+1.207665801 container remove e29bf28f9f3c0f503231d625134aa296f1357d382e6fb5397164d43f2a55dbe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:09:50 np0005558241 systemd[1]: libpod-conmon-e29bf28f9f3c0f503231d625134aa296f1357d382e6fb5397164d43f2a55dbe6.scope: Deactivated successfully.
Dec 13 03:09:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:09:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:09:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:09:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:09:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:09:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:09:51 np0005558241 podman[257593]: 2025-12-13 08:09:51.977199926 +0000 UTC m=+0.058363427 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:09:51 np0005558241 podman[257592]: 2025-12-13 08:09:51.987567691 +0000 UTC m=+0.074082444 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 03:09:52 np0005558241 podman[257591]: 2025-12-13 08:09:52.012178256 +0000 UTC m=+0.099028847 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 13 03:09:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:09:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:09:55.389 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:09:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:09:55.390 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:09:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:09:55.390 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:09:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:09:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:09:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:10:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:10:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:10:09
Dec 13 03:10:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:10:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:10:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['backups', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'images', 'volumes', 'cephfs.cephfs.meta', 'vms']
Dec 13 03:10:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:10:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:10:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:10:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:10:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:10:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:10:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:10:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:10:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:10:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:10:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:10:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:10:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:10:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:10:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:10:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:10:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:10:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:10:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:10:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2527834953' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:10:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:10:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2527834953' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:10:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:10:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.1364976013902964e-06 of space, bias 1.0, pg target 0.0003409492804170889 quantized to 32 (current 32)
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.119202667671527e-06 of space, bias 4.0, pg target 0.0013430432012058323 quantized to 16 (current 32)
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:10:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:10:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:22 np0005558241 nova_compute[248510]: 2025-12-13 08:10:22.800 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:10:23 np0005558241 podman[257653]: 2025-12-13 08:10:23.049704968 +0000 UTC m=+0.054491353 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 03:10:23 np0005558241 podman[257652]: 2025-12-13 08:10:23.050554709 +0000 UTC m=+0.058841670 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 03:10:23 np0005558241 podman[257651]: 2025-12-13 08:10:23.079943392 +0000 UTC m=+0.095188634 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 03:10:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:10:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:24 np0005558241 nova_compute[248510]: 2025-12-13 08:10:24.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:10:25 np0005558241 nova_compute[248510]: 2025-12-13 08:10:25.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:10:25 np0005558241 nova_compute[248510]: 2025-12-13 08:10:25.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:10:25 np0005558241 nova_compute[248510]: 2025-12-13 08:10:25.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:10:25 np0005558241 nova_compute[248510]: 2025-12-13 08:10:25.809 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:10:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:27 np0005558241 nova_compute[248510]: 2025-12-13 08:10:27.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:10:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:10:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:28 np0005558241 nova_compute[248510]: 2025-12-13 08:10:28.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:10:28 np0005558241 nova_compute[248510]: 2025-12-13 08:10:28.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:10:28 np0005558241 nova_compute[248510]: 2025-12-13 08:10:28.812 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:10:28 np0005558241 nova_compute[248510]: 2025-12-13 08:10:28.812 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:10:28 np0005558241 nova_compute[248510]: 2025-12-13 08:10:28.812 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:10:28 np0005558241 nova_compute[248510]: 2025-12-13 08:10:28.813 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:10:28 np0005558241 nova_compute[248510]: 2025-12-13 08:10:28.813 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:10:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:10:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1844708107' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:10:29 np0005558241 nova_compute[248510]: 2025-12-13 08:10:29.352 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:10:29 np0005558241 nova_compute[248510]: 2025-12-13 08:10:29.515 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:10:29 np0005558241 nova_compute[248510]: 2025-12-13 08:10:29.516 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5121MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:10:29 np0005558241 nova_compute[248510]: 2025-12-13 08:10:29.516 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:10:29 np0005558241 nova_compute[248510]: 2025-12-13 08:10:29.517 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:10:29 np0005558241 nova_compute[248510]: 2025-12-13 08:10:29.793 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:10:29 np0005558241 nova_compute[248510]: 2025-12-13 08:10:29.794 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:10:29 np0005558241 nova_compute[248510]: 2025-12-13 08:10:29.920 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 13 03:10:30 np0005558241 nova_compute[248510]: 2025-12-13 08:10:30.043 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 13 03:10:30 np0005558241 nova_compute[248510]: 2025-12-13 08:10:30.044 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 03:10:30 np0005558241 nova_compute[248510]: 2025-12-13 08:10:30.064 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 13 03:10:30 np0005558241 nova_compute[248510]: 2025-12-13 08:10:30.085 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 13 03:10:30 np0005558241 nova_compute[248510]: 2025-12-13 08:10:30.105 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:10:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:10:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1787480017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:10:30 np0005558241 nova_compute[248510]: 2025-12-13 08:10:30.681 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:10:30 np0005558241 nova_compute[248510]: 2025-12-13 08:10:30.690 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:10:30 np0005558241 nova_compute[248510]: 2025-12-13 08:10:30.709 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:10:30 np0005558241 nova_compute[248510]: 2025-12-13 08:10:30.713 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:10:30 np0005558241 nova_compute[248510]: 2025-12-13 08:10:30.714 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:10:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:10:33 np0005558241 nova_compute[248510]: 2025-12-13 08:10:33.714 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:10:33 np0005558241 nova_compute[248510]: 2025-12-13 08:10:33.715 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:10:33 np0005558241 nova_compute[248510]: 2025-12-13 08:10:33.715 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:10:33 np0005558241 nova_compute[248510]: 2025-12-13 08:10:33.715 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:10:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:10:38.548805) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613438548841, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1443, "num_deletes": 510, "total_data_size": 1863915, "memory_usage": 1898160, "flush_reason": "Manual Compaction"}
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613438570120, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1794951, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23331, "largest_seqno": 24773, "table_properties": {"data_size": 1788647, "index_size": 3058, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16548, "raw_average_key_size": 19, "raw_value_size": 1773830, "raw_average_value_size": 2045, "num_data_blocks": 137, "num_entries": 867, "num_filter_entries": 867, "num_deletions": 510, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765613323, "oldest_key_time": 1765613323, "file_creation_time": 1765613438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 21421 microseconds, and 5644 cpu microseconds.
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:10:38.570217) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1794951 bytes OK
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:10:38.570244) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:10:38.577415) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:10:38.577432) EVENT_LOG_v1 {"time_micros": 1765613438577427, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:10:38.577460) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1856448, prev total WAL file size 1856448, number of live WAL files 2.
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:10:38.578232) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1752KB)], [53(9873KB)]
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613438578328, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11905034, "oldest_snapshot_seqno": -1}
Dec 13 03:10:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4792 keys, 8355365 bytes, temperature: kUnknown
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613438648631, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 8355365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8322613, "index_size": 19650, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12037, "raw_key_size": 120096, "raw_average_key_size": 25, "raw_value_size": 8235171, "raw_average_value_size": 1718, "num_data_blocks": 815, "num_entries": 4792, "num_filter_entries": 4792, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765613438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:10:38.649129) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 8355365 bytes
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:10:38.654725) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.9 rd, 118.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 9.6 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(11.3) write-amplify(4.7) OK, records in: 5827, records dropped: 1035 output_compression: NoCompression
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:10:38.654781) EVENT_LOG_v1 {"time_micros": 1765613438654759, "job": 28, "event": "compaction_finished", "compaction_time_micros": 70495, "compaction_time_cpu_micros": 23233, "output_level": 6, "num_output_files": 1, "total_output_size": 8355365, "num_input_records": 5827, "num_output_records": 4792, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613438656238, "job": 28, "event": "table_file_deletion", "file_number": 55}
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613438658652, "job": 28, "event": "table_file_deletion", "file_number": 53}
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:10:38.578109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:10:38.658818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:10:38.658825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:10:38.658827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:10:38.658829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:10:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:10:38.658830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:10:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:10:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:10:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:10:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:10:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:10:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:10:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1318: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:10:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1320: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:10:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:10:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 5427 writes, 24K keys, 5427 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 5427 writes, 5427 syncs, 1.00 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1330 writes, 6303 keys, 1330 commit groups, 1.0 writes per commit group, ingest: 9.15 MB, 0.02 MB/s#012Interval WAL: 1330 writes, 1330 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.2      2.69              0.10        14    0.192       0      0       0.0       0.0#012  L6      1/0    7.97 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3     56.8     46.5      2.15              0.31        13    0.166     60K   7270       0.0       0.0#012 Sum      1/0    7.97 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     25.2     26.9      4.85              0.41        27    0.179     60K   7270       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.4     74.0     74.3      0.66              0.15        10    0.066     26K   3101       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     56.8     46.5      2.15              0.31        13    0.166     60K   7270       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.2      2.69              0.10        13    0.207       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.029, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.13 GB write, 0.05 MB/s write, 0.12 GB read, 0.05 MB/s read, 4.8 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 304.00 MB usage: 11.85 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000191 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(728,11.36 MB,3.73746%) FilterBlock(28,176.42 KB,0.0566734%) IndexBlock(28,323.16 KB,0.10381%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec 13 03:10:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:10:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:10:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:10:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:10:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:10:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:10:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:10:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:10:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:10:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:10:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:10:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:10:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:10:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:10:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:10:51 np0005558241 podman[257901]: 2025-12-13 08:10:51.611217062 +0000 UTC m=+0.064368655 container create 0db763abe213c85d6e7a690c553ac758f4c574da3169bb5423f39780773a794a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 03:10:51 np0005558241 systemd[1]: Started libpod-conmon-0db763abe213c85d6e7a690c553ac758f4c574da3169bb5423f39780773a794a.scope.
Dec 13 03:10:51 np0005558241 podman[257901]: 2025-12-13 08:10:51.574638572 +0000 UTC m=+0.027790175 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:10:51 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:10:51 np0005558241 podman[257901]: 2025-12-13 08:10:51.808061328 +0000 UTC m=+0.261213021 container init 0db763abe213c85d6e7a690c553ac758f4c574da3169bb5423f39780773a794a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 03:10:51 np0005558241 podman[257901]: 2025-12-13 08:10:51.818502345 +0000 UTC m=+0.271653938 container start 0db763abe213c85d6e7a690c553ac758f4c574da3169bb5423f39780773a794a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 03:10:51 np0005558241 wizardly_shaw[257918]: 167 167
Dec 13 03:10:51 np0005558241 systemd[1]: libpod-0db763abe213c85d6e7a690c553ac758f4c574da3169bb5423f39780773a794a.scope: Deactivated successfully.
Dec 13 03:10:51 np0005558241 conmon[257918]: conmon 0db763abe213c85d6e7a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0db763abe213c85d6e7a690c553ac758f4c574da3169bb5423f39780773a794a.scope/container/memory.events
Dec 13 03:10:51 np0005558241 podman[257901]: 2025-12-13 08:10:51.831516775 +0000 UTC m=+0.284668448 container attach 0db763abe213c85d6e7a690c553ac758f4c574da3169bb5423f39780773a794a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shaw, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:10:51 np0005558241 podman[257901]: 2025-12-13 08:10:51.832838258 +0000 UTC m=+0.285989851 container died 0db763abe213c85d6e7a690c553ac758f4c574da3169bb5423f39780773a794a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shaw, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 03:10:51 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ad87778f9716acbca564d39cb42cabdce3b2bc158539909e1701e04fabb3d2b6-merged.mount: Deactivated successfully.
Dec 13 03:10:51 np0005558241 podman[257901]: 2025-12-13 08:10:51.972759872 +0000 UTC m=+0.425911475 container remove 0db763abe213c85d6e7a690c553ac758f4c574da3169bb5423f39780773a794a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:10:51 np0005558241 systemd[1]: libpod-conmon-0db763abe213c85d6e7a690c553ac758f4c574da3169bb5423f39780773a794a.scope: Deactivated successfully.
Dec 13 03:10:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Dec 13 03:10:52 np0005558241 podman[257943]: 2025-12-13 08:10:52.12544178 +0000 UTC m=+0.024384081 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:10:52 np0005558241 podman[257943]: 2025-12-13 08:10:52.238620026 +0000 UTC m=+0.137562307 container create 6b9eeda6e8417c077b2ca7922e1c603ab22df9144decccbd791b1080f28d6e89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_edison, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 03:10:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Dec 13 03:10:52 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Dec 13 03:10:52 np0005558241 systemd[1]: Started libpod-conmon-6b9eeda6e8417c077b2ca7922e1c603ab22df9144decccbd791b1080f28d6e89.scope.
Dec 13 03:10:52 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:10:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea406cee8d7b97df16ef1e2567e2152122208a5a712674ae7d5cc3c550284b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:10:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea406cee8d7b97df16ef1e2567e2152122208a5a712674ae7d5cc3c550284b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:10:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea406cee8d7b97df16ef1e2567e2152122208a5a712674ae7d5cc3c550284b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:10:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea406cee8d7b97df16ef1e2567e2152122208a5a712674ae7d5cc3c550284b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:10:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea406cee8d7b97df16ef1e2567e2152122208a5a712674ae7d5cc3c550284b1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:10:52 np0005558241 podman[257943]: 2025-12-13 08:10:52.488089337 +0000 UTC m=+0.387031648 container init 6b9eeda6e8417c077b2ca7922e1c603ab22df9144decccbd791b1080f28d6e89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:10:52 np0005558241 podman[257943]: 2025-12-13 08:10:52.496157816 +0000 UTC m=+0.395100107 container start 6b9eeda6e8417c077b2ca7922e1c603ab22df9144decccbd791b1080f28d6e89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_edison, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:10:52 np0005558241 podman[257943]: 2025-12-13 08:10:52.539255207 +0000 UTC m=+0.438197518 container attach 6b9eeda6e8417c077b2ca7922e1c603ab22df9144decccbd791b1080f28d6e89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Dec 13 03:10:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 321 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:10:52 np0005558241 cool_edison[257960]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:10:52 np0005558241 cool_edison[257960]: --> All data devices are unavailable
Dec 13 03:10:53 np0005558241 systemd[1]: libpod-6b9eeda6e8417c077b2ca7922e1c603ab22df9144decccbd791b1080f28d6e89.scope: Deactivated successfully.
Dec 13 03:10:53 np0005558241 podman[257943]: 2025-12-13 08:10:53.032211881 +0000 UTC m=+0.931154162 container died 6b9eeda6e8417c077b2ca7922e1c603ab22df9144decccbd791b1080f28d6e89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:10:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:10:53 np0005558241 systemd[1]: var-lib-containers-storage-overlay-dea406cee8d7b97df16ef1e2567e2152122208a5a712674ae7d5cc3c550284b1-merged.mount: Deactivated successfully.
Dec 13 03:10:53 np0005558241 podman[257943]: 2025-12-13 08:10:53.89220068 +0000 UTC m=+1.791142961 container remove 6b9eeda6e8417c077b2ca7922e1c603ab22df9144decccbd791b1080f28d6e89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_edison, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:10:53 np0005558241 systemd[1]: libpod-conmon-6b9eeda6e8417c077b2ca7922e1c603ab22df9144decccbd791b1080f28d6e89.scope: Deactivated successfully.
Dec 13 03:10:53 np0005558241 podman[257994]: 2025-12-13 08:10:53.947226544 +0000 UTC m=+0.109186918 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 03:10:53 np0005558241 podman[257993]: 2025-12-13 08:10:53.953400486 +0000 UTC m=+0.119786989 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:10:54 np0005558241 podman[257992]: 2025-12-13 08:10:54.009043006 +0000 UTC m=+0.178156876 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 13 03:10:54 np0005558241 podman[258115]: 2025-12-13 08:10:54.392019604 +0000 UTC m=+0.047373878 container create ac31f4f6df4c3b907ba9bfabac4189677e817146cfbfc04c7c81f91dc274d8e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 03:10:54 np0005558241 systemd[1]: Started libpod-conmon-ac31f4f6df4c3b907ba9bfabac4189677e817146cfbfc04c7c81f91dc274d8e3.scope.
Dec 13 03:10:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:10:54 np0005558241 podman[258115]: 2025-12-13 08:10:54.369295054 +0000 UTC m=+0.024649348 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:10:54 np0005558241 podman[258115]: 2025-12-13 08:10:54.576758541 +0000 UTC m=+0.232112835 container init ac31f4f6df4c3b907ba9bfabac4189677e817146cfbfc04c7c81f91dc274d8e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:10:54 np0005558241 podman[258115]: 2025-12-13 08:10:54.586180463 +0000 UTC m=+0.241534737 container start ac31f4f6df4c3b907ba9bfabac4189677e817146cfbfc04c7c81f91dc274d8e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_meninsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:10:54 np0005558241 podman[258115]: 2025-12-13 08:10:54.590978821 +0000 UTC m=+0.246333115 container attach ac31f4f6df4c3b907ba9bfabac4189677e817146cfbfc04c7c81f91dc274d8e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_meninsky, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:10:54 np0005558241 gifted_meninsky[258132]: 167 167
Dec 13 03:10:54 np0005558241 systemd[1]: libpod-ac31f4f6df4c3b907ba9bfabac4189677e817146cfbfc04c7c81f91dc274d8e3.scope: Deactivated successfully.
Dec 13 03:10:54 np0005558241 podman[258115]: 2025-12-13 08:10:54.595571214 +0000 UTC m=+0.250925498 container died ac31f4f6df4c3b907ba9bfabac4189677e817146cfbfc04c7c81f91dc274d8e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_meninsky, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 03:10:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.0 MiB/s wr, 22 op/s
Dec 13 03:10:54 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1e735ad3b4be050d13f4bd50265349762ae37c80029f4fa96bdfa82a3db1464c-merged.mount: Deactivated successfully.
Dec 13 03:10:54 np0005558241 podman[258115]: 2025-12-13 08:10:54.637608629 +0000 UTC m=+0.292962903 container remove ac31f4f6df4c3b907ba9bfabac4189677e817146cfbfc04c7c81f91dc274d8e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:10:54 np0005558241 systemd[1]: libpod-conmon-ac31f4f6df4c3b907ba9bfabac4189677e817146cfbfc04c7c81f91dc274d8e3.scope: Deactivated successfully.
Dec 13 03:10:54 np0005558241 podman[258155]: 2025-12-13 08:10:54.820279406 +0000 UTC m=+0.059675890 container create 5365d4c0c890f3acb261cee4382f423f534c005bbc24758b0b9159670eb3391c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Dec 13 03:10:54 np0005558241 systemd[1]: Started libpod-conmon-5365d4c0c890f3acb261cee4382f423f534c005bbc24758b0b9159670eb3391c.scope.
Dec 13 03:10:54 np0005558241 podman[258155]: 2025-12-13 08:10:54.790934113 +0000 UTC m=+0.030330607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:10:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:10:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53cdf59be9fabc1e73893d1960b2898cf4c5d5a201f13f6ab5e3918971d89476/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:10:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53cdf59be9fabc1e73893d1960b2898cf4c5d5a201f13f6ab5e3918971d89476/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:10:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53cdf59be9fabc1e73893d1960b2898cf4c5d5a201f13f6ab5e3918971d89476/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:10:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53cdf59be9fabc1e73893d1960b2898cf4c5d5a201f13f6ab5e3918971d89476/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:10:54 np0005558241 podman[258155]: 2025-12-13 08:10:54.915209162 +0000 UTC m=+0.154605666 container init 5365d4c0c890f3acb261cee4382f423f534c005bbc24758b0b9159670eb3391c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_hypatia, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:10:54 np0005558241 podman[258155]: 2025-12-13 08:10:54.922567023 +0000 UTC m=+0.161963497 container start 5365d4c0c890f3acb261cee4382f423f534c005bbc24758b0b9159670eb3391c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Dec 13 03:10:54 np0005558241 podman[258155]: 2025-12-13 08:10:54.926825648 +0000 UTC m=+0.166222132 container attach 5365d4c0c890f3acb261cee4382f423f534c005bbc24758b0b9159670eb3391c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_hypatia, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]: {
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:    "0": [
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:        {
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "devices": [
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "/dev/loop3"
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            ],
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "lv_name": "ceph_lv0",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "lv_size": "21470642176",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "name": "ceph_lv0",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "tags": {
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.cluster_name": "ceph",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.crush_device_class": "",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.encrypted": "0",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.objectstore": "bluestore",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.osd_id": "0",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.type": "block",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.vdo": "0",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.with_tpm": "0"
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            },
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "type": "block",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "vg_name": "ceph_vg0"
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:        }
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:    ],
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:    "1": [
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:        {
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "devices": [
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "/dev/loop4"
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            ],
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "lv_name": "ceph_lv1",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "lv_size": "21470642176",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "name": "ceph_lv1",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "tags": {
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.cluster_name": "ceph",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.crush_device_class": "",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.encrypted": "0",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.objectstore": "bluestore",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.osd_id": "1",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.type": "block",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.vdo": "0",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.with_tpm": "0"
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            },
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "type": "block",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "vg_name": "ceph_vg1"
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:        }
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:    ],
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:    "2": [
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:        {
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "devices": [
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "/dev/loop5"
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            ],
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "lv_name": "ceph_lv2",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "lv_size": "21470642176",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "name": "ceph_lv2",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "tags": {
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.cluster_name": "ceph",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.crush_device_class": "",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.encrypted": "0",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.objectstore": "bluestore",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.osd_id": "2",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.type": "block",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.vdo": "0",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:                "ceph.with_tpm": "0"
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            },
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "type": "block",
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:            "vg_name": "ceph_vg2"
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:        }
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]:    ]
Dec 13 03:10:55 np0005558241 keen_hypatia[258172]: }
Dec 13 03:10:55 np0005558241 systemd[1]: libpod-5365d4c0c890f3acb261cee4382f423f534c005bbc24758b0b9159670eb3391c.scope: Deactivated successfully.
Dec 13 03:10:55 np0005558241 podman[258155]: 2025-12-13 08:10:55.249355088 +0000 UTC m=+0.488751572 container died 5365d4c0c890f3acb261cee4382f423f534c005bbc24758b0b9159670eb3391c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:10:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:10:55.391 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:10:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:10:55.393 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:10:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:10:55.394 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:10:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Dec 13 03:10:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Dec 13 03:10:56 np0005558241 systemd[1]: var-lib-containers-storage-overlay-53cdf59be9fabc1e73893d1960b2898cf4c5d5a201f13f6ab5e3918971d89476-merged.mount: Deactivated successfully.
Dec 13 03:10:56 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Dec 13 03:10:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.6 MiB/s wr, 28 op/s
Dec 13 03:10:56 np0005558241 podman[258155]: 2025-12-13 08:10:56.62745572 +0000 UTC m=+1.866852224 container remove 5365d4c0c890f3acb261cee4382f423f534c005bbc24758b0b9159670eb3391c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:10:56 np0005558241 systemd[1]: libpod-conmon-5365d4c0c890f3acb261cee4382f423f534c005bbc24758b0b9159670eb3391c.scope: Deactivated successfully.
Dec 13 03:10:57 np0005558241 podman[258255]: 2025-12-13 08:10:57.122528557 +0000 UTC m=+0.049039138 container create 6df499b12b61c41beea0b61d8b987ad9376fceaf75389e477fd2db75782856c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mclean, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 03:10:57 np0005558241 systemd[1]: Started libpod-conmon-6df499b12b61c41beea0b61d8b987ad9376fceaf75389e477fd2db75782856c5.scope.
Dec 13 03:10:57 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:10:57 np0005558241 podman[258255]: 2025-12-13 08:10:57.102217447 +0000 UTC m=+0.028728048 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:10:57 np0005558241 podman[258255]: 2025-12-13 08:10:57.20835112 +0000 UTC m=+0.134861721 container init 6df499b12b61c41beea0b61d8b987ad9376fceaf75389e477fd2db75782856c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mclean, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:10:57 np0005558241 podman[258255]: 2025-12-13 08:10:57.215634879 +0000 UTC m=+0.142145480 container start 6df499b12b61c41beea0b61d8b987ad9376fceaf75389e477fd2db75782856c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:10:57 np0005558241 podman[258255]: 2025-12-13 08:10:57.221305529 +0000 UTC m=+0.147816130 container attach 6df499b12b61c41beea0b61d8b987ad9376fceaf75389e477fd2db75782856c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 03:10:57 np0005558241 recursing_mclean[258271]: 167 167
Dec 13 03:10:57 np0005558241 systemd[1]: libpod-6df499b12b61c41beea0b61d8b987ad9376fceaf75389e477fd2db75782856c5.scope: Deactivated successfully.
Dec 13 03:10:57 np0005558241 conmon[258271]: conmon 6df499b12b61c41beea0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6df499b12b61c41beea0b61d8b987ad9376fceaf75389e477fd2db75782856c5.scope/container/memory.events
Dec 13 03:10:57 np0005558241 podman[258255]: 2025-12-13 08:10:57.22541305 +0000 UTC m=+0.151923631 container died 6df499b12b61c41beea0b61d8b987ad9376fceaf75389e477fd2db75782856c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mclean, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 03:10:57 np0005558241 systemd[1]: var-lib-containers-storage-overlay-14cc27e08ebb50667e2a591c8326758d2ead43c5e6b586ced1a899956ee35580-merged.mount: Deactivated successfully.
Dec 13 03:10:57 np0005558241 podman[258255]: 2025-12-13 08:10:57.26034416 +0000 UTC m=+0.186854741 container remove 6df499b12b61c41beea0b61d8b987ad9376fceaf75389e477fd2db75782856c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 03:10:57 np0005558241 systemd[1]: libpod-conmon-6df499b12b61c41beea0b61d8b987ad9376fceaf75389e477fd2db75782856c5.scope: Deactivated successfully.
Dec 13 03:10:57 np0005558241 podman[258296]: 2025-12-13 08:10:57.445035036 +0000 UTC m=+0.047004758 container create c4660c76c8a6319223e068fc45b852748391898b5c08bedeb90d93713846b0a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_lumiere, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:10:57 np0005558241 systemd[1]: Started libpod-conmon-c4660c76c8a6319223e068fc45b852748391898b5c08bedeb90d93713846b0a2.scope.
Dec 13 03:10:57 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:10:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de54783302b7f04eea8190257e8832371177279480d47c16700a2fba1572cc69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:10:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de54783302b7f04eea8190257e8832371177279480d47c16700a2fba1572cc69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:10:57 np0005558241 podman[258296]: 2025-12-13 08:10:57.426634093 +0000 UTC m=+0.028603835 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:10:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de54783302b7f04eea8190257e8832371177279480d47c16700a2fba1572cc69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:10:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de54783302b7f04eea8190257e8832371177279480d47c16700a2fba1572cc69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:10:57 np0005558241 podman[258296]: 2025-12-13 08:10:57.542787152 +0000 UTC m=+0.144756954 container init c4660c76c8a6319223e068fc45b852748391898b5c08bedeb90d93713846b0a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_lumiere, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:10:57 np0005558241 podman[258296]: 2025-12-13 08:10:57.551063996 +0000 UTC m=+0.153033748 container start c4660c76c8a6319223e068fc45b852748391898b5c08bedeb90d93713846b0a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:10:57 np0005558241 podman[258296]: 2025-12-13 08:10:57.569671614 +0000 UTC m=+0.171641346 container attach c4660c76c8a6319223e068fc45b852748391898b5c08bedeb90d93713846b0a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 03:10:58 np0005558241 lvm[258392]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:10:58 np0005558241 lvm[258391]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:10:58 np0005558241 lvm[258391]: VG ceph_vg0 finished
Dec 13 03:10:58 np0005558241 lvm[258392]: VG ceph_vg1 finished
Dec 13 03:10:58 np0005558241 lvm[258394]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:10:58 np0005558241 lvm[258394]: VG ceph_vg2 finished
Dec 13 03:10:58 np0005558241 vibrant_lumiere[258313]: {}
Dec 13 03:10:58 np0005558241 systemd[1]: libpod-c4660c76c8a6319223e068fc45b852748391898b5c08bedeb90d93713846b0a2.scope: Deactivated successfully.
Dec 13 03:10:58 np0005558241 systemd[1]: libpod-c4660c76c8a6319223e068fc45b852748391898b5c08bedeb90d93713846b0a2.scope: Consumed 1.514s CPU time.
Dec 13 03:10:58 np0005558241 podman[258397]: 2025-12-13 08:10:58.499506073 +0000 UTC m=+0.032616064 container died c4660c76c8a6319223e068fc45b852748391898b5c08bedeb90d93713846b0a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_lumiere, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:10:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:10:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 33 MiB data, 170 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 4.1 MiB/s wr, 29 op/s
Dec 13 03:10:58 np0005558241 systemd[1]: var-lib-containers-storage-overlay-de54783302b7f04eea8190257e8832371177279480d47c16700a2fba1572cc69-merged.mount: Deactivated successfully.
Dec 13 03:10:58 np0005558241 podman[258397]: 2025-12-13 08:10:58.75976594 +0000 UTC m=+0.292875911 container remove c4660c76c8a6319223e068fc45b852748391898b5c08bedeb90d93713846b0a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:10:58 np0005558241 systemd[1]: libpod-conmon-c4660c76c8a6319223e068fc45b852748391898b5c08bedeb90d93713846b0a2.scope: Deactivated successfully.
Dec 13 03:10:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:10:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:10:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:10:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:10:59 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:10:59 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:11:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 4.9 MiB/s wr, 45 op/s
Dec 13 03:11:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Dec 13 03:11:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:11:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Dec 13 03:11:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Dec 13 03:11:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:11:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 1.7 MiB/s wr, 12 op/s
Dec 13 03:11:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:11:09
Dec 13 03:11:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:11:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:11:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'images', 'volumes', 'vms', 'default.rgw.log', 'default.rgw.control', '.mgr']
Dec 13 03:11:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:11:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:11:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:11:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:11:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:11:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:11:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:11:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:11:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:11:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:11:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:11:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:11:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:11:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:11:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:11:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:11:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:11:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s rd, 684 KiB/s wr, 11 op/s
Dec 13 03:11:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:11:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:11:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:11:14.216 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:11:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:11:14.218 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:11:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:11:14.219 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:11:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:11:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:11:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/311987326' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:11:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:11:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/311987326' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:11:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:11:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:11:18 np0005558241 nova_compute[248510]: 2025-12-13 08:11:18.585 248514 DEBUG oslo_concurrency.lockutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Acquiring lock "7849bffe-8652-42d0-a977-826c0c452b88" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:11:18 np0005558241 nova_compute[248510]: 2025-12-13 08:11:18.585 248514 DEBUG oslo_concurrency.lockutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Lock "7849bffe-8652-42d0-a977-826c0c452b88" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:11:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:11:18 np0005558241 nova_compute[248510]: 2025-12-13 08:11:18.619 248514 DEBUG nova.compute.manager [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:11:18 np0005558241 nova_compute[248510]: 2025-12-13 08:11:18.732 248514 DEBUG oslo_concurrency.lockutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:11:18 np0005558241 nova_compute[248510]: 2025-12-13 08:11:18.733 248514 DEBUG oslo_concurrency.lockutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:11:18 np0005558241 nova_compute[248510]: 2025-12-13 08:11:18.743 248514 DEBUG nova.virt.hardware [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:11:18 np0005558241 nova_compute[248510]: 2025-12-13 08:11:18.743 248514 INFO nova.compute.claims [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:11:18 np0005558241 nova_compute[248510]: 2025-12-13 08:11:18.941 248514 DEBUG oslo_concurrency.processutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:11:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/682369851' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:11:19 np0005558241 nova_compute[248510]: 2025-12-13 08:11:19.546 248514 DEBUG oslo_concurrency.processutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:19 np0005558241 nova_compute[248510]: 2025-12-13 08:11:19.553 248514 DEBUG nova.compute.provider_tree [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:11:19 np0005558241 nova_compute[248510]: 2025-12-13 08:11:19.578 248514 DEBUG nova.scheduler.client.report [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:11:19 np0005558241 nova_compute[248510]: 2025-12-13 08:11:19.611 248514 DEBUG oslo_concurrency.lockutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:11:19 np0005558241 nova_compute[248510]: 2025-12-13 08:11:19.613 248514 DEBUG nova.compute.manager [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:11:19 np0005558241 nova_compute[248510]: 2025-12-13 08:11:19.682 248514 DEBUG nova.compute.manager [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Dec 13 03:11:19 np0005558241 nova_compute[248510]: 2025-12-13 08:11:19.705 248514 INFO nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:11:19 np0005558241 nova_compute[248510]: 2025-12-13 08:11:19.728 248514 DEBUG nova.compute.manager [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:11:19 np0005558241 nova_compute[248510]: 2025-12-13 08:11:19.823 248514 DEBUG nova.compute.manager [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:11:19 np0005558241 nova_compute[248510]: 2025-12-13 08:11:19.825 248514 DEBUG nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:11:19 np0005558241 nova_compute[248510]: 2025-12-13 08:11:19.825 248514 INFO nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Creating image(s)#033[00m
Dec 13 03:11:20 np0005558241 nova_compute[248510]: 2025-12-13 08:11:20.096 248514 DEBUG nova.storage.rbd_utils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] rbd image 7849bffe-8652-42d0-a977-826c0c452b88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:11:20 np0005558241 nova_compute[248510]: 2025-12-13 08:11:20.121 248514 DEBUG nova.storage.rbd_utils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] rbd image 7849bffe-8652-42d0-a977-826c0c452b88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:11:20 np0005558241 nova_compute[248510]: 2025-12-13 08:11:20.146 248514 DEBUG nova.storage.rbd_utils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] rbd image 7849bffe-8652-42d0-a977-826c0c452b88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:11:20 np0005558241 nova_compute[248510]: 2025-12-13 08:11:20.150 248514 DEBUG oslo_concurrency.lockutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:11:20 np0005558241 nova_compute[248510]: 2025-12-13 08:11:20.151 248514 DEBUG oslo_concurrency.lockutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1340: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000667240312728688 of space, bias 1.0, pg target 0.2001720938186064 quantized to 32 (current 32)
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.107900412961236e-06 of space, bias 4.0, pg target 0.0013294804955534833 quantized to 16 (current 32)
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:11:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:11:21 np0005558241 nova_compute[248510]: 2025-12-13 08:11:21.071 248514 DEBUG nova.virt.libvirt.imagebackend [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Image locations are: [{'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/0ed20320-9c25-4108-ad76-64b3cb3500ce/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/0ed20320-9c25-4108-ad76-64b3cb3500ce/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec 13 03:11:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:11:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:11:24 np0005558241 nova_compute[248510]: 2025-12-13 08:11:24.030 248514 DEBUG oslo_concurrency.processutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:24 np0005558241 nova_compute[248510]: 2025-12-13 08:11:24.113 248514 DEBUG oslo_concurrency.processutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301.part --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:24 np0005558241 nova_compute[248510]: 2025-12-13 08:11:24.115 248514 DEBUG nova.virt.images [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] 0ed20320-9c25-4108-ad76-64b3cb3500ce was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec 13 03:11:24 np0005558241 nova_compute[248510]: 2025-12-13 08:11:24.116 248514 DEBUG nova.privsep.utils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec 13 03:11:24 np0005558241 nova_compute[248510]: 2025-12-13 08:11:24.117 248514 DEBUG oslo_concurrency.processutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301.part /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 6 op/s
Dec 13 03:11:24 np0005558241 nova_compute[248510]: 2025-12-13 08:11:24.755 248514 DEBUG oslo_concurrency.processutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301.part /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301.converted" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:24 np0005558241 nova_compute[248510]: 2025-12-13 08:11:24.761 248514 DEBUG oslo_concurrency.processutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:24 np0005558241 nova_compute[248510]: 2025-12-13 08:11:24.779 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:11:24 np0005558241 nova_compute[248510]: 2025-12-13 08:11:24.780 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:11:24 np0005558241 nova_compute[248510]: 2025-12-13 08:11:24.822 248514 DEBUG oslo_concurrency.processutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301.converted --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:24 np0005558241 nova_compute[248510]: 2025-12-13 08:11:24.823 248514 DEBUG oslo_concurrency.lockutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:11:24 np0005558241 nova_compute[248510]: 2025-12-13 08:11:24.884 248514 DEBUG nova.storage.rbd_utils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] rbd image 7849bffe-8652-42d0-a977-826c0c452b88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:11:24 np0005558241 nova_compute[248510]: 2025-12-13 08:11:24.888 248514 DEBUG oslo_concurrency.processutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7849bffe-8652-42d0-a977-826c0c452b88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:25 np0005558241 podman[258548]: 2025-12-13 08:11:25.00124541 +0000 UTC m=+0.078294938 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 03:11:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Dec 13 03:11:25 np0005558241 podman[258549]: 2025-12-13 08:11:25.022048402 +0000 UTC m=+0.098045214 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 13 03:11:25 np0005558241 podman[258546]: 2025-12-13 08:11:25.05813523 +0000 UTC m=+0.130870451 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251202)
Dec 13 03:11:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Dec 13 03:11:25 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Dec 13 03:11:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Dec 13 03:11:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 8 op/s
Dec 13 03:11:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Dec 13 03:11:26 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Dec 13 03:11:26 np0005558241 nova_compute[248510]: 2025-12-13 08:11:26.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:11:26 np0005558241 nova_compute[248510]: 2025-12-13 08:11:26.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:11:26 np0005558241 nova_compute[248510]: 2025-12-13 08:11:26.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:11:26 np0005558241 nova_compute[248510]: 2025-12-13 08:11:26.791 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec 13 03:11:26 np0005558241 nova_compute[248510]: 2025-12-13 08:11:26.791 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:11:27 np0005558241 nova_compute[248510]: 2025-12-13 08:11:27.436 248514 DEBUG oslo_concurrency.lockutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Acquiring lock "f2ee92e0-bac6-4763-8d01-824e4f983bbe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:11:27 np0005558241 nova_compute[248510]: 2025-12-13 08:11:27.437 248514 DEBUG oslo_concurrency.lockutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "f2ee92e0-bac6-4763-8d01-824e4f983bbe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:11:27 np0005558241 nova_compute[248510]: 2025-12-13 08:11:27.461 248514 DEBUG nova.compute.manager [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:11:27 np0005558241 nova_compute[248510]: 2025-12-13 08:11:27.561 248514 DEBUG oslo_concurrency.lockutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:11:27 np0005558241 nova_compute[248510]: 2025-12-13 08:11:27.561 248514 DEBUG oslo_concurrency.lockutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:11:27 np0005558241 nova_compute[248510]: 2025-12-13 08:11:27.573 248514 DEBUG nova.virt.hardware [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:11:27 np0005558241 nova_compute[248510]: 2025-12-13 08:11:27.574 248514 INFO nova.compute.claims [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:11:27 np0005558241 nova_compute[248510]: 2025-12-13 08:11:27.722 248514 DEBUG oslo_concurrency.processutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:27 np0005558241 nova_compute[248510]: 2025-12-13 08:11:27.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:11:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:11:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/807507178' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:11:28 np0005558241 nova_compute[248510]: 2025-12-13 08:11:28.355 248514 DEBUG oslo_concurrency.processutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:28 np0005558241 nova_compute[248510]: 2025-12-13 08:11:28.361 248514 DEBUG nova.compute.provider_tree [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 03:11:28 np0005558241 nova_compute[248510]: 2025-12-13 08:11:28.396 248514 ERROR nova.scheduler.client.report [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [req-3283cba0-091e-4a46-8633-806a0851a161] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID f581abbc-7737-4dd5-b64e-9a4263571fb3.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-3283cba0-091e-4a46-8633-806a0851a161"}]}#033[00m
Dec 13 03:11:28 np0005558241 nova_compute[248510]: 2025-12-13 08:11:28.415 248514 DEBUG nova.scheduler.client.report [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 13 03:11:28 np0005558241 nova_compute[248510]: 2025-12-13 08:11:28.448 248514 DEBUG nova.scheduler.client.report [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 13 03:11:28 np0005558241 nova_compute[248510]: 2025-12-13 08:11:28.448 248514 DEBUG nova.compute.provider_tree [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 03:11:28 np0005558241 nova_compute[248510]: 2025-12-13 08:11:28.472 248514 DEBUG nova.scheduler.client.report [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 13 03:11:28 np0005558241 nova_compute[248510]: 2025-12-13 08:11:28.494 248514 DEBUG nova.scheduler.client.report [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 13 03:11:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:11:28 np0005558241 nova_compute[248510]: 2025-12-13 08:11:28.555 248514 DEBUG oslo_concurrency.processutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1346: 321 pgs: 321 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.4 KiB/s wr, 15 op/s
Dec 13 03:11:28 np0005558241 nova_compute[248510]: 2025-12-13 08:11:28.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:11:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:11:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2249759522' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.184 248514 DEBUG oslo_concurrency.processutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.191 248514 DEBUG nova.compute.provider_tree [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.239 248514 DEBUG nova.scheduler.client.report [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Updated inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with generation 8 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.240 248514 DEBUG nova.compute.provider_tree [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Updating resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 generation from 8 to 9 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.241 248514 DEBUG nova.compute.provider_tree [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.274 248514 DEBUG oslo_concurrency.lockutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.275 248514 DEBUG nova.compute.manager [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.338 248514 DEBUG nova.compute.manager [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.339 248514 DEBUG nova.network.neutron [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.377 248514 INFO nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.398 248514 DEBUG nova.compute.manager [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.446 248514 DEBUG oslo_concurrency.processutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7849bffe-8652-42d0-a977-826c0c452b88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.513 248514 DEBUG nova.storage.rbd_utils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] resizing rbd image 7849bffe-8652-42d0-a977-826c0c452b88_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.586 248514 DEBUG nova.compute.manager [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.587 248514 DEBUG nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.588 248514 INFO nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Creating image(s)#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.608 248514 DEBUG nova.storage.rbd_utils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] rbd image f2ee92e0-bac6-4763-8d01-824e4f983bbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.635 248514 DEBUG nova.storage.rbd_utils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] rbd image f2ee92e0-bac6-4763-8d01-824e4f983bbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.657 248514 DEBUG nova.storage.rbd_utils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] rbd image f2ee92e0-bac6-4763-8d01-824e4f983bbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.661 248514 DEBUG oslo_concurrency.processutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.725 248514 DEBUG oslo_concurrency.processutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.726 248514 DEBUG oslo_concurrency.lockutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.727 248514 DEBUG oslo_concurrency.lockutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.727 248514 DEBUG oslo_concurrency.lockutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.749 248514 DEBUG nova.storage.rbd_utils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] rbd image f2ee92e0-bac6-4763-8d01-824e4f983bbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.754 248514 DEBUG oslo_concurrency.processutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 f2ee92e0-bac6-4763-8d01-824e4f983bbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.779 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.780 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.808 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.810 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.810 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.997 248514 DEBUG nova.network.neutron [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Dec 13 03:11:29 np0005558241 nova_compute[248510]: 2025-12-13 08:11:29.997 248514 DEBUG nova.compute.manager [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.363 248514 DEBUG nova.objects.instance [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Lazy-loading 'migration_context' on Instance uuid 7849bffe-8652-42d0-a977-826c0c452b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.430 248514 DEBUG nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.430 248514 DEBUG nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Ensure instance console log exists: /var/lib/nova/instances/7849bffe-8652-42d0-a977-826c0c452b88/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.431 248514 DEBUG oslo_concurrency.lockutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.431 248514 DEBUG oslo_concurrency.lockutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.432 248514 DEBUG oslo_concurrency.lockutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.434 248514 DEBUG nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.439 248514 WARNING nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.470 248514 DEBUG nova.virt.libvirt.host [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.471 248514 DEBUG nova.virt.libvirt.host [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.476 248514 DEBUG nova.virt.libvirt.host [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.477 248514 DEBUG nova.virt.libvirt.host [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.478 248514 DEBUG nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.478 248514 DEBUG nova.virt.hardware [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.479 248514 DEBUG nova.virt.hardware [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.479 248514 DEBUG nova.virt.hardware [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.479 248514 DEBUG nova.virt.hardware [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.479 248514 DEBUG nova.virt.hardware [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.479 248514 DEBUG nova.virt.hardware [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.480 248514 DEBUG nova.virt.hardware [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.480 248514 DEBUG nova.virt.hardware [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.480 248514 DEBUG nova.virt.hardware [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.480 248514 DEBUG nova.virt.hardware [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.480 248514 DEBUG nova.virt.hardware [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.484 248514 DEBUG nova.privsep.utils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.484 248514 DEBUG oslo_concurrency.processutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:11:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1287218263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.552 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.742s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 58 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 977 KiB/s wr, 31 op/s
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.796 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.797 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5037MB free_disk=59.98221294302493GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.797 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.797 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.936 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 7849bffe-8652-42d0-a977-826c0c452b88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.937 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance f2ee92e0-bac6-4763-8d01-824e4f983bbe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.937 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:11:30 np0005558241 nova_compute[248510]: 2025-12-13 08:11:30.937 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:11:31 np0005558241 nova_compute[248510]: 2025-12-13 08:11:31.008 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:11:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2099017488' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:11:31 np0005558241 nova_compute[248510]: 2025-12-13 08:11:31.205 248514 DEBUG oslo_concurrency.processutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.721s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:31 np0005558241 nova_compute[248510]: 2025-12-13 08:11:31.225 248514 DEBUG nova.storage.rbd_utils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] rbd image 7849bffe-8652-42d0-a977-826c0c452b88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:11:31 np0005558241 nova_compute[248510]: 2025-12-13 08:11:31.230 248514 DEBUG oslo_concurrency.processutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:11:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2018365920' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:11:31 np0005558241 nova_compute[248510]: 2025-12-13 08:11:31.744 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.736s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:31 np0005558241 nova_compute[248510]: 2025-12-13 08:11:31.752 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:11:31 np0005558241 nova_compute[248510]: 2025-12-13 08:11:31.778 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:11:31 np0005558241 nova_compute[248510]: 2025-12-13 08:11:31.798 248514 DEBUG oslo_concurrency.processutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 f2ee92e0-bac6-4763-8d01-824e4f983bbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:11:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1256602591' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:11:31 np0005558241 nova_compute[248510]: 2025-12-13 08:11:31.869 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:11:31 np0005558241 nova_compute[248510]: 2025-12-13 08:11:31.870 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:11:31 np0005558241 nova_compute[248510]: 2025-12-13 08:11:31.904 248514 DEBUG oslo_concurrency.processutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.674s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:31 np0005558241 nova_compute[248510]: 2025-12-13 08:11:31.907 248514 DEBUG nova.objects.instance [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7849bffe-8652-42d0-a977-826c0c452b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:11:31 np0005558241 nova_compute[248510]: 2025-12-13 08:11:31.913 248514 DEBUG nova.storage.rbd_utils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] resizing rbd image f2ee92e0-bac6-4763-8d01-824e4f983bbe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.053 248514 DEBUG nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:11:32 np0005558241 nova_compute[248510]:  <uuid>7849bffe-8652-42d0-a977-826c0c452b88</uuid>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:  <name>instance-00000001</name>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <nova:name>tempest-AutoAllocateNetworkTest-server-1597879971</nova:name>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:11:30</nova:creationTime>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:11:32 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:        <nova:user uuid="033c6a0517ec454ca6e4080f7c3bbe97">tempest-AutoAllocateNetworkTest-2005483704-project-member</nova:user>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:        <nova:project uuid="e0d21752374a43158985b0e68f3c8d69">tempest-AutoAllocateNetworkTest-2005483704</nova:project>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <entry name="serial">7849bffe-8652-42d0-a977-826c0c452b88</entry>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <entry name="uuid">7849bffe-8652-42d0-a977-826c0c452b88</entry>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/7849bffe-8652-42d0-a977-826c0c452b88_disk">
Dec 13 03:11:32 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:11:32 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/7849bffe-8652-42d0-a977-826c0c452b88_disk.config">
Dec 13 03:11:32 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:11:32 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/7849bffe-8652-42d0-a977-826c0c452b88/console.log" append="off"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:11:32 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:11:32 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:11:32 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:11:32 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.149 248514 DEBUG nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.150 248514 DEBUG nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.151 248514 INFO nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Using config drive#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.183 248514 DEBUG nova.storage.rbd_utils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] rbd image 7849bffe-8652-42d0-a977-826c0c452b88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.271 248514 DEBUG nova.objects.instance [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lazy-loading 'migration_context' on Instance uuid f2ee92e0-bac6-4763-8d01-824e4f983bbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.309 248514 DEBUG nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.310 248514 DEBUG nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Ensure instance console log exists: /var/lib/nova/instances/f2ee92e0-bac6-4763-8d01-824e4f983bbe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.311 248514 DEBUG oslo_concurrency.lockutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.311 248514 DEBUG oslo_concurrency.lockutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.311 248514 DEBUG oslo_concurrency.lockutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.312 248514 DEBUG nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.317 248514 WARNING nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.321 248514 DEBUG nova.virt.libvirt.host [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.321 248514 DEBUG nova.virt.libvirt.host [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.325 248514 DEBUG nova.virt.libvirt.host [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.325 248514 DEBUG nova.virt.libvirt.host [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.325 248514 DEBUG nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.326 248514 DEBUG nova.virt.hardware [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.326 248514 DEBUG nova.virt.hardware [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.326 248514 DEBUG nova.virt.hardware [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.326 248514 DEBUG nova.virt.hardware [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.327 248514 DEBUG nova.virt.hardware [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.327 248514 DEBUG nova.virt.hardware [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.327 248514 DEBUG nova.virt.hardware [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.327 248514 DEBUG nova.virt.hardware [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.327 248514 DEBUG nova.virt.hardware [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.328 248514 DEBUG nova.virt.hardware [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.328 248514 DEBUG nova.virt.hardware [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:11:32 np0005558241 nova_compute[248510]: 2025-12-13 08:11:32.331 248514 DEBUG oslo_concurrency.processutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 58 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 977 KiB/s wr, 21 op/s
Dec 13 03:11:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:11:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/63894963' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:11:33 np0005558241 nova_compute[248510]: 2025-12-13 08:11:33.191 248514 DEBUG oslo_concurrency.processutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.860s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:33 np0005558241 nova_compute[248510]: 2025-12-13 08:11:33.217 248514 DEBUG nova.storage.rbd_utils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] rbd image f2ee92e0-bac6-4763-8d01-824e4f983bbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:11:33 np0005558241 nova_compute[248510]: 2025-12-13 08:11:33.223 248514 DEBUG oslo_concurrency.processutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:33 np0005558241 nova_compute[248510]: 2025-12-13 08:11:33.449 248514 INFO nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Creating config drive at /var/lib/nova/instances/7849bffe-8652-42d0-a977-826c0c452b88/disk.config#033[00m
Dec 13 03:11:33 np0005558241 nova_compute[248510]: 2025-12-13 08:11:33.455 248514 DEBUG oslo_concurrency.processutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7849bffe-8652-42d0-a977-826c0c452b88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeh6owsbf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:11:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Dec 13 03:11:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Dec 13 03:11:33 np0005558241 nova_compute[248510]: 2025-12-13 08:11:33.610 248514 DEBUG oslo_concurrency.processutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7849bffe-8652-42d0-a977-826c0c452b88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeh6owsbf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:33 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Dec 13 03:11:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:11:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/541985872' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:11:33 np0005558241 nova_compute[248510]: 2025-12-13 08:11:33.758 248514 DEBUG nova.storage.rbd_utils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] rbd image 7849bffe-8652-42d0-a977-826c0c452b88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:11:33 np0005558241 nova_compute[248510]: 2025-12-13 08:11:33.764 248514 DEBUG oslo_concurrency.processutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7849bffe-8652-42d0-a977-826c0c452b88/disk.config 7849bffe-8652-42d0-a977-826c0c452b88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:33 np0005558241 nova_compute[248510]: 2025-12-13 08:11:33.788 248514 DEBUG oslo_concurrency.processutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:33 np0005558241 nova_compute[248510]: 2025-12-13 08:11:33.791 248514 DEBUG nova.objects.instance [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lazy-loading 'pci_devices' on Instance uuid f2ee92e0-bac6-4763-8d01-824e4f983bbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:11:33 np0005558241 nova_compute[248510]: 2025-12-13 08:11:33.813 248514 DEBUG nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:11:33 np0005558241 nova_compute[248510]:  <uuid>f2ee92e0-bac6-4763-8d01-824e4f983bbe</uuid>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:  <name>instance-00000002</name>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-1679028893</nova:name>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:11:32</nova:creationTime>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:11:33 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:        <nova:user uuid="3d3fdbc81f6f46a591bc6a1c719e3f04">tempest-DeleteServersAdminTestJSON-1827276923-project-member</nova:user>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:        <nova:project uuid="e9147fab08c9431ebaa3462d5f0ea810">tempest-DeleteServersAdminTestJSON-1827276923</nova:project>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <entry name="serial">f2ee92e0-bac6-4763-8d01-824e4f983bbe</entry>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <entry name="uuid">f2ee92e0-bac6-4763-8d01-824e4f983bbe</entry>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/f2ee92e0-bac6-4763-8d01-824e4f983bbe_disk">
Dec 13 03:11:33 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:11:33 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/f2ee92e0-bac6-4763-8d01-824e4f983bbe_disk.config">
Dec 13 03:11:33 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:11:33 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/f2ee92e0-bac6-4763-8d01-824e4f983bbe/console.log" append="off"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:11:33 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:11:33 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:11:33 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:11:33 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:11:33 np0005558241 nova_compute[248510]: 2025-12-13 08:11:33.864 248514 DEBUG nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:11:33 np0005558241 nova_compute[248510]: 2025-12-13 08:11:33.864 248514 DEBUG nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:11:33 np0005558241 nova_compute[248510]: 2025-12-13 08:11:33.865 248514 INFO nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Using config drive#033[00m
Dec 13 03:11:33 np0005558241 nova_compute[248510]: 2025-12-13 08:11:33.908 248514 DEBUG nova.storage.rbd_utils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] rbd image f2ee92e0-bac6-4763-8d01-824e4f983bbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:11:34 np0005558241 nova_compute[248510]: 2025-12-13 08:11:34.431 248514 DEBUG oslo_concurrency.processutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7849bffe-8652-42d0-a977-826c0c452b88/disk.config 7849bffe-8652-42d0-a977-826c0c452b88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.667s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:34 np0005558241 nova_compute[248510]: 2025-12-13 08:11:34.433 248514 INFO nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Deleting local config drive /var/lib/nova/instances/7849bffe-8652-42d0-a977-826c0c452b88/disk.config because it was imported into RBD.#033[00m
Dec 13 03:11:34 np0005558241 systemd[1]: Starting libvirt secret daemon...
Dec 13 03:11:34 np0005558241 nova_compute[248510]: 2025-12-13 08:11:34.465 248514 INFO nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Creating config drive at /var/lib/nova/instances/f2ee92e0-bac6-4763-8d01-824e4f983bbe/disk.config#033[00m
Dec 13 03:11:34 np0005558241 nova_compute[248510]: 2025-12-13 08:11:34.471 248514 DEBUG oslo_concurrency.processutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f2ee92e0-bac6-4763-8d01-824e4f983bbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp05z3qi23 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:34 np0005558241 systemd[1]: Started libvirt secret daemon.
Dec 13 03:11:34 np0005558241 systemd-machined[210538]: New machine qemu-1-instance-00000001.
Dec 13 03:11:34 np0005558241 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec 13 03:11:34 np0005558241 nova_compute[248510]: 2025-12-13 08:11:34.599 248514 DEBUG oslo_concurrency.processutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f2ee92e0-bac6-4763-8d01-824e4f983bbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp05z3qi23" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 134 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Dec 13 03:11:34 np0005558241 nova_compute[248510]: 2025-12-13 08:11:34.628 248514 DEBUG nova.storage.rbd_utils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] rbd image f2ee92e0-bac6-4763-8d01-824e4f983bbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:11:34 np0005558241 nova_compute[248510]: 2025-12-13 08:11:34.634 248514 DEBUG oslo_concurrency.processutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f2ee92e0-bac6-4763-8d01-824e4f983bbe/disk.config f2ee92e0-bac6-4763-8d01-824e4f983bbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:34 np0005558241 nova_compute[248510]: 2025-12-13 08:11:34.864 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:11:34 np0005558241 nova_compute[248510]: 2025-12-13 08:11:34.865 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:11:34 np0005558241 nova_compute[248510]: 2025-12-13 08:11:34.865 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:11:34 np0005558241 nova_compute[248510]: 2025-12-13 08:11:34.865 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:11:35 np0005558241 nova_compute[248510]: 2025-12-13 08:11:35.371 248514 DEBUG oslo_concurrency.processutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f2ee92e0-bac6-4763-8d01-824e4f983bbe/disk.config f2ee92e0-bac6-4763-8d01-824e4f983bbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.737s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:35 np0005558241 nova_compute[248510]: 2025-12-13 08:11:35.372 248514 INFO nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Deleting local config drive /var/lib/nova/instances/f2ee92e0-bac6-4763-8d01-824e4f983bbe/disk.config because it was imported into RBD.#033[00m
Dec 13 03:11:35 np0005558241 systemd-machined[210538]: New machine qemu-2-instance-00000002.
Dec 13 03:11:35 np0005558241 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.089 248514 DEBUG nova.compute.manager [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.091 248514 DEBUG nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.091 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613496.0886614, 7849bffe-8652-42d0-a977-826c0c452b88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.092 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.097 248514 INFO nova.virt.libvirt.driver [-] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Instance spawned successfully.#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.098 248514 DEBUG nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.146 248514 DEBUG nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.147 248514 DEBUG nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.147 248514 DEBUG nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.148 248514 DEBUG nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.148 248514 DEBUG nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.149 248514 DEBUG nova.virt.libvirt.driver [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.154 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.159 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.202 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.203 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613496.0896902, 7849bffe-8652-42d0-a977-826c0c452b88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.203 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] VM Started (Lifecycle Event)#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.222 248514 DEBUG nova.compute.manager [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.223 248514 DEBUG nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.226 248514 INFO nova.virt.libvirt.driver [-] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Instance spawned successfully.#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.226 248514 DEBUG nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.233 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.237 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.244 248514 INFO nova.compute.manager [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Took 16.42 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.245 248514 DEBUG nova.compute.manager [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.298 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.299 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613496.2195055, f2ee92e0-bac6-4763-8d01-824e4f983bbe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.299 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.348 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.366 248514 INFO nova.compute.manager [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Took 17.67 seconds to build instance.#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.369 248514 DEBUG nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.369 248514 DEBUG nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.369 248514 DEBUG nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.370 248514 DEBUG nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.370 248514 DEBUG nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.371 248514 DEBUG nova.virt.libvirt.driver [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.377 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.460 248514 DEBUG oslo_concurrency.lockutils [None req-d0115c59-22ec-46e0-989f-25214f5ab93a 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Lock "7849bffe-8652-42d0-a977-826c0c452b88" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.489 248514 INFO nova.compute.manager [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Took 6.90 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.489 248514 DEBUG nova.compute.manager [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.492 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.493 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613496.2213798, f2ee92e0-bac6-4763-8d01-824e4f983bbe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.493 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] VM Started (Lifecycle Event)#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.536 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.542 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.581 248514 INFO nova.compute.manager [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Took 9.05 seconds to build instance.#033[00m
Dec 13 03:11:36 np0005558241 nova_compute[248510]: 2025-12-13 08:11:36.611 248514 DEBUG oslo_concurrency.lockutils [None req-512df077-a65e-4d4f-a62a-27d190baa4d5 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "f2ee92e0-bac6-4763-8d01-824e4f983bbe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:11:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 134 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 4.3 MiB/s wr, 66 op/s
Dec 13 03:11:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:11:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 134 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 151 op/s
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.435 248514 DEBUG oslo_concurrency.lockutils [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] Acquiring lock "f2ee92e0-bac6-4763-8d01-824e4f983bbe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.436 248514 DEBUG oslo_concurrency.lockutils [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] Lock "f2ee92e0-bac6-4763-8d01-824e4f983bbe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.436 248514 DEBUG oslo_concurrency.lockutils [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] Acquiring lock "f2ee92e0-bac6-4763-8d01-824e4f983bbe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.437 248514 DEBUG oslo_concurrency.lockutils [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] Lock "f2ee92e0-bac6-4763-8d01-824e4f983bbe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.437 248514 DEBUG oslo_concurrency.lockutils [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] Lock "f2ee92e0-bac6-4763-8d01-824e4f983bbe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.439 248514 INFO nova.compute.manager [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Terminating instance
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.440 248514 DEBUG oslo_concurrency.lockutils [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] Acquiring lock "refresh_cache-f2ee92e0-bac6-4763-8d01-824e4f983bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.440 248514 DEBUG oslo_concurrency.lockutils [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] Acquired lock "refresh_cache-f2ee92e0-bac6-4763-8d01-824e4f983bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.440 248514 DEBUG nova.network.neutron [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.678 248514 DEBUG oslo_concurrency.lockutils [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Acquiring lock "7849bffe-8652-42d0-a977-826c0c452b88" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.679 248514 DEBUG oslo_concurrency.lockutils [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Lock "7849bffe-8652-42d0-a977-826c0c452b88" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.679 248514 DEBUG oslo_concurrency.lockutils [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Acquiring lock "7849bffe-8652-42d0-a977-826c0c452b88-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.679 248514 DEBUG oslo_concurrency.lockutils [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Lock "7849bffe-8652-42d0-a977-826c0c452b88-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.680 248514 DEBUG oslo_concurrency.lockutils [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Lock "7849bffe-8652-42d0-a977-826c0c452b88-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.681 248514 INFO nova.compute.manager [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Terminating instance
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.682 248514 DEBUG oslo_concurrency.lockutils [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Acquiring lock "refresh_cache-7849bffe-8652-42d0-a977-826c0c452b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.682 248514 DEBUG oslo_concurrency.lockutils [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Acquired lock "refresh_cache-7849bffe-8652-42d0-a977-826c0c452b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.683 248514 DEBUG nova.network.neutron [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.851 248514 DEBUG nova.network.neutron [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:11:39 np0005558241 nova_compute[248510]: 2025-12-13 08:11:39.987 248514 DEBUG nova.network.neutron [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:11:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:11:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:11:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:11:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:11:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:11:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:11:40 np0005558241 nova_compute[248510]: 2025-12-13 08:11:40.309 248514 DEBUG nova.network.neutron [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:11:40 np0005558241 nova_compute[248510]: 2025-12-13 08:11:40.334 248514 DEBUG oslo_concurrency.lockutils [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] Releasing lock "refresh_cache-f2ee92e0-bac6-4763-8d01-824e4f983bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:11:40 np0005558241 nova_compute[248510]: 2025-12-13 08:11:40.335 248514 DEBUG nova.compute.manager [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 03:11:40 np0005558241 nova_compute[248510]: 2025-12-13 08:11:40.382 248514 DEBUG nova.network.neutron [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:11:40 np0005558241 nova_compute[248510]: 2025-12-13 08:11:40.404 248514 DEBUG oslo_concurrency.lockutils [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Releasing lock "refresh_cache-7849bffe-8652-42d0-a977-826c0c452b88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:11:40 np0005558241 nova_compute[248510]: 2025-12-13 08:11:40.404 248514 DEBUG nova.compute.manager [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 03:11:40 np0005558241 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Dec 13 03:11:40 np0005558241 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 4.549s CPU time.
Dec 13 03:11:40 np0005558241 systemd-machined[210538]: Machine qemu-2-instance-00000002 terminated.
Dec 13 03:11:40 np0005558241 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec 13 03:11:40 np0005558241 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 5.411s CPU time.
Dec 13 03:11:40 np0005558241 systemd-machined[210538]: Machine qemu-1-instance-00000001 terminated.
Dec 13 03:11:40 np0005558241 nova_compute[248510]: 2025-12-13 08:11:40.556 248514 INFO nova.virt.libvirt.driver [-] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Instance destroyed successfully.
Dec 13 03:11:40 np0005558241 nova_compute[248510]: 2025-12-13 08:11:40.557 248514 DEBUG nova.objects.instance [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] Lazy-loading 'resources' on Instance uuid f2ee92e0-bac6-4763-8d01-824e4f983bbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:11:40 np0005558241 nova_compute[248510]: 2025-12-13 08:11:40.623 248514 INFO nova.virt.libvirt.driver [-] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Instance destroyed successfully.
Dec 13 03:11:40 np0005558241 nova_compute[248510]: 2025-12-13 08:11:40.624 248514 DEBUG nova.objects.instance [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Lazy-loading 'resources' on Instance uuid 7849bffe-8652-42d0-a977-826c0c452b88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:11:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 134 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.5 MiB/s wr, 224 op/s
Dec 13 03:11:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 134 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.5 MiB/s wr, 224 op/s
Dec 13 03:11:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:11:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 42 MiB data, 183 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.4 MiB/s wr, 210 op/s
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.094 248514 INFO nova.virt.libvirt.driver [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Deleting instance files /var/lib/nova/instances/7849bffe-8652-42d0-a977-826c0c452b88_del
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.096 248514 INFO nova.virt.libvirt.driver [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Deletion of /var/lib/nova/instances/7849bffe-8652-42d0-a977-826c0c452b88_del complete
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.216 248514 INFO nova.virt.libvirt.driver [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Deleting instance files /var/lib/nova/instances/f2ee92e0-bac6-4763-8d01-824e4f983bbe_del
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.217 248514 INFO nova.virt.libvirt.driver [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Deletion of /var/lib/nova/instances/f2ee92e0-bac6-4763-8d01-824e4f983bbe_del complete
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.245 248514 DEBUG nova.virt.libvirt.host [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.246 248514 INFO nova.virt.libvirt.host [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] UEFI support detected
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.249 248514 INFO nova.compute.manager [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Took 5.84 seconds to destroy the instance on the hypervisor.
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.250 248514 DEBUG oslo.service.loopingcall [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.251 248514 DEBUG nova.compute.manager [-] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.251 248514 DEBUG nova.network.neutron [-] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.341 248514 INFO nova.compute.manager [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Took 6.01 seconds to destroy the instance on the hypervisor.
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.342 248514 DEBUG oslo.service.loopingcall [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.342 248514 DEBUG nova.compute.manager [-] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.342 248514 DEBUG nova.network.neutron [-] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.521 248514 DEBUG nova.network.neutron [-] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.543 248514 DEBUG nova.network.neutron [-] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.565 248514 INFO nova.compute.manager [-] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Took 0.31 seconds to deallocate network for instance.
Dec 13 03:11:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 42 MiB data, 183 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 26 KiB/s wr, 186 op/s
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.631 248514 DEBUG oslo_concurrency.lockutils [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.632 248514 DEBUG oslo_concurrency.lockutils [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.682 248514 DEBUG nova.network.neutron [-] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.706 248514 DEBUG nova.network.neutron [-] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.756 248514 INFO nova.compute.manager [-] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Took 0.41 seconds to deallocate network for instance.
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.782 248514 DEBUG oslo_concurrency.processutils [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:11:46 np0005558241 nova_compute[248510]: 2025-12-13 08:11:46.829 248514 DEBUG oslo_concurrency.lockutils [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:11:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:11:47 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2148103533' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:11:47 np0005558241 nova_compute[248510]: 2025-12-13 08:11:47.365 248514 DEBUG oslo_concurrency.processutils [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:11:47 np0005558241 nova_compute[248510]: 2025-12-13 08:11:47.372 248514 DEBUG nova.compute.provider_tree [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:11:47 np0005558241 nova_compute[248510]: 2025-12-13 08:11:47.705 248514 DEBUG nova.scheduler.client.report [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:11:47 np0005558241 nova_compute[248510]: 2025-12-13 08:11:47.749 248514 DEBUG oslo_concurrency.lockutils [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:11:47 np0005558241 nova_compute[248510]: 2025-12-13 08:11:47.752 248514 DEBUG oslo_concurrency.lockutils [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:11:47 np0005558241 nova_compute[248510]: 2025-12-13 08:11:47.801 248514 INFO nova.scheduler.client.report [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Deleted allocations for instance 7849bffe-8652-42d0-a977-826c0c452b88
Dec 13 03:11:47 np0005558241 nova_compute[248510]: 2025-12-13 08:11:47.833 248514 DEBUG oslo_concurrency.processutils [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:11:47 np0005558241 nova_compute[248510]: 2025-12-13 08:11:47.904 248514 DEBUG oslo_concurrency.lockutils [None req-5ea9c82a-a3a5-41d5-9670-f7ab1e81a8f8 033c6a0517ec454ca6e4080f7c3bbe97 e0d21752374a43158985b0e68f3c8d69 - - default default] Lock "7849bffe-8652-42d0-a977-826c0c452b88" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:11:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:11:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1111416140' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:11:48 np0005558241 nova_compute[248510]: 2025-12-13 08:11:48.382 248514 DEBUG oslo_concurrency.processutils [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:11:48 np0005558241 nova_compute[248510]: 2025-12-13 08:11:48.389 248514 DEBUG nova.compute.provider_tree [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:11:48 np0005558241 nova_compute[248510]: 2025-12-13 08:11:48.411 248514 DEBUG nova.scheduler.client.report [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:11:48 np0005558241 nova_compute[248510]: 2025-12-13 08:11:48.446 248514 DEBUG oslo_concurrency.lockutils [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:11:48 np0005558241 nova_compute[248510]: 2025-12-13 08:11:48.490 248514 INFO nova.scheduler.client.report [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] Deleted allocations for instance f2ee92e0-bac6-4763-8d01-824e4f983bbe
Dec 13 03:11:48 np0005558241 nova_compute[248510]: 2025-12-13 08:11:48.563 248514 DEBUG oslo_concurrency.lockutils [None req-a1088002-be6c-4a09-9235-3bccec67df38 57bec3e7965a4504a6284fe5c37129ee cc80177447e54464815fb35452343957 - - default default] Lock "f2ee92e0-bac6-4763-8d01-824e4f983bbe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:11:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:11:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1357: 321 pgs: 321 active+clean; 41 MiB data, 182 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 27 KiB/s wr, 197 op/s
Dec 13 03:11:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.7 KiB/s wr, 124 op/s
Dec 13 03:11:51 np0005558241 nova_compute[248510]: 2025-12-13 08:11:51.356 248514 DEBUG oslo_concurrency.lockutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquiring lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:11:51 np0005558241 nova_compute[248510]: 2025-12-13 08:11:51.357 248514 DEBUG oslo_concurrency.lockutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:11:51 np0005558241 nova_compute[248510]: 2025-12-13 08:11:51.382 248514 DEBUG nova.compute.manager [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:11:51 np0005558241 nova_compute[248510]: 2025-12-13 08:11:51.524 248514 DEBUG oslo_concurrency.lockutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:11:51 np0005558241 nova_compute[248510]: 2025-12-13 08:11:51.524 248514 DEBUG oslo_concurrency.lockutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:11:51 np0005558241 nova_compute[248510]: 2025-12-13 08:11:51.533 248514 DEBUG nova.virt.hardware [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:11:51 np0005558241 nova_compute[248510]: 2025-12-13 08:11:51.534 248514 INFO nova.compute.claims [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:11:51 np0005558241 nova_compute[248510]: 2025-12-13 08:11:51.688 248514 DEBUG oslo_concurrency.processutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:11:52 np0005558241 nova_compute[248510]: 2025-12-13 08:11:52.081 248514 DEBUG oslo_concurrency.lockutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Acquiring lock "0cb5f1aa-9103-4667-ac26-f23f9e43ea86" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:11:52 np0005558241 nova_compute[248510]: 2025-12-13 08:11:52.082 248514 DEBUG oslo_concurrency.lockutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "0cb5f1aa-9103-4667-ac26-f23f9e43ea86" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:11:52 np0005558241 nova_compute[248510]: 2025-12-13 08:11:52.135 248514 DEBUG nova.compute.manager [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:11:52 np0005558241 nova_compute[248510]: 2025-12-13 08:11:52.227 248514 DEBUG oslo_concurrency.lockutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:11:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:11:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1684885389' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:11:52 np0005558241 nova_compute[248510]: 2025-12-13 08:11:52.304 248514 DEBUG oslo_concurrency.processutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:11:52 np0005558241 nova_compute[248510]: 2025-12-13 08:11:52.311 248514 DEBUG nova.compute.provider_tree [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:11:52 np0005558241 nova_compute[248510]: 2025-12-13 08:11:52.344 248514 DEBUG nova.scheduler.client.report [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:11:52 np0005558241 nova_compute[248510]: 2025-12-13 08:11:52.374 248514 DEBUG oslo_concurrency.lockutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:11:52 np0005558241 nova_compute[248510]: 2025-12-13 08:11:52.376 248514 DEBUG nova.compute.manager [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:11:52 np0005558241 nova_compute[248510]: 2025-12-13 08:11:52.380 248514 DEBUG oslo_concurrency.lockutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:11:52 np0005558241 nova_compute[248510]: 2025-12-13 08:11:52.389 248514 DEBUG nova.virt.hardware [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:11:52 np0005558241 nova_compute[248510]: 2025-12-13 08:11:52.390 248514 INFO nova.compute.claims [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:11:52 np0005558241 nova_compute[248510]: 2025-12-13 08:11:52.484 248514 DEBUG nova.compute.manager [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:11:52 np0005558241 nova_compute[248510]: 2025-12-13 08:11:52.485 248514 DEBUG nova.network.neutron [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:11:52 np0005558241 nova_compute[248510]: 2025-12-13 08:11:52.530 248514 INFO nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:11:52 np0005558241 nova_compute[248510]: 2025-12-13 08:11:52.553 248514 DEBUG nova.compute.manager [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:11:52 np0005558241 nova_compute[248510]: 2025-12-13 08:11:52.591 248514 DEBUG oslo_concurrency.processutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:11:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Dec 13 03:11:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:11:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3521502352' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.182 248514 DEBUG oslo_concurrency.processutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.189 248514 DEBUG nova.compute.provider_tree [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:11:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.745 248514 DEBUG nova.compute.manager [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.747 248514 DEBUG nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.748 248514 INFO nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Creating image(s)
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.775 248514 DEBUG nova.storage.rbd_utils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] rbd image a1cbc612-63fc-4cbf-ad1e-007a93a03286_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.800 248514 DEBUG nova.storage.rbd_utils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] rbd image a1cbc612-63fc-4cbf-ad1e-007a93a03286_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.827 248514 DEBUG nova.storage.rbd_utils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] rbd image a1cbc612-63fc-4cbf-ad1e-007a93a03286_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.831 248514 DEBUG oslo_concurrency.processutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.859 248514 DEBUG nova.scheduler.client.report [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.877 248514 WARNING oslo_policy.policy [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.877 248514 WARNING oslo_policy.policy [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.879 248514 DEBUG nova.policy [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'df94630f47824853b3347bca91426ecd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '474fd43853634a5d8b3026b622723873', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.924 248514 DEBUG oslo_concurrency.processutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.925 248514 DEBUG oslo_concurrency.lockutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.926 248514 DEBUG oslo_concurrency.lockutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.926 248514 DEBUG oslo_concurrency.lockutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.950 248514 DEBUG nova.storage.rbd_utils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] rbd image a1cbc612-63fc-4cbf-ad1e-007a93a03286_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.955 248514 DEBUG oslo_concurrency.processutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 a1cbc612-63fc-4cbf-ad1e-007a93a03286_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.976 248514 DEBUG oslo_concurrency.lockutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:11:53 np0005558241 nova_compute[248510]: 2025-12-13 08:11:53.978 248514 DEBUG nova.compute.manager [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.055 248514 DEBUG nova.compute.manager [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.055 248514 DEBUG nova.network.neutron [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.093 248514 INFO nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.127 248514 DEBUG nova.compute.manager [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.250 248514 DEBUG nova.compute.manager [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.251 248514 DEBUG nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.251 248514 INFO nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Creating image(s)
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.274 248514 DEBUG nova.storage.rbd_utils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] rbd image 0cb5f1aa-9103-4667-ac26-f23f9e43ea86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.301 248514 DEBUG nova.storage.rbd_utils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] rbd image 0cb5f1aa-9103-4667-ac26-f23f9e43ea86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.327 248514 DEBUG nova.storage.rbd_utils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] rbd image 0cb5f1aa-9103-4667-ac26-f23f9e43ea86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.331 248514 DEBUG oslo_concurrency.processutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.351 248514 DEBUG oslo_concurrency.processutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 a1cbc612-63fc-4cbf-ad1e-007a93a03286_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.414 248514 DEBUG oslo_concurrency.processutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.415 248514 DEBUG oslo_concurrency.lockutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.416 248514 DEBUG oslo_concurrency.lockutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.417 248514 DEBUG oslo_concurrency.lockutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.443 248514 DEBUG nova.storage.rbd_utils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] rbd image 0cb5f1aa-9103-4667-ac26-f23f9e43ea86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.446 248514 DEBUG oslo_concurrency.processutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0cb5f1aa-9103-4667-ac26-f23f9e43ea86_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.470 248514 DEBUG nova.storage.rbd_utils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] resizing rbd image a1cbc612-63fc-4cbf-ad1e-007a93a03286_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.547 248514 DEBUG nova.objects.instance [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lazy-loading 'migration_context' on Instance uuid a1cbc612-63fc-4cbf-ad1e-007a93a03286 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.575 248514 DEBUG nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.575 248514 DEBUG nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Ensure instance console log exists: /var/lib/nova/instances/a1cbc612-63fc-4cbf-ad1e-007a93a03286/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.575 248514 DEBUG oslo_concurrency.lockutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.576 248514 DEBUG oslo_concurrency.lockutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.576 248514 DEBUG oslo_concurrency.lockutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:11:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 59 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 530 KiB/s wr, 56 op/s
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.770 248514 DEBUG oslo_concurrency.processutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0cb5f1aa-9103-4667-ac26-f23f9e43ea86_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.836 248514 DEBUG nova.storage.rbd_utils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] resizing rbd image 0cb5f1aa-9103-4667-ac26-f23f9e43ea86_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.924 248514 DEBUG nova.objects.instance [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lazy-loading 'migration_context' on Instance uuid 0cb5f1aa-9103-4667-ac26-f23f9e43ea86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.954 248514 DEBUG nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.954 248514 DEBUG nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Ensure instance console log exists: /var/lib/nova/instances/0cb5f1aa-9103-4667-ac26-f23f9e43ea86/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.955 248514 DEBUG oslo_concurrency.lockutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.955 248514 DEBUG oslo_concurrency.lockutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:11:54 np0005558241 nova_compute[248510]: 2025-12-13 08:11:54.955 248514 DEBUG oslo_concurrency.lockutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.001 248514 DEBUG nova.network.neutron [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.001 248514 DEBUG nova.compute.manager [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.003 248514 DEBUG nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.007 248514 WARNING nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.013 248514 DEBUG nova.virt.libvirt.host [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.014 248514 DEBUG nova.virt.libvirt.host [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.018 248514 DEBUG nova.virt.libvirt.host [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.018 248514 DEBUG nova.virt.libvirt.host [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.019 248514 DEBUG nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.019 248514 DEBUG nova.virt.hardware [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.020 248514 DEBUG nova.virt.hardware [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.020 248514 DEBUG nova.virt.hardware [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.020 248514 DEBUG nova.virt.hardware [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.020 248514 DEBUG nova.virt.hardware [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.021 248514 DEBUG nova.virt.hardware [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.021 248514 DEBUG nova.virt.hardware [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.021 248514 DEBUG nova.virt.hardware [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.022 248514 DEBUG nova.virt.hardware [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.022 248514 DEBUG nova.virt.hardware [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.022 248514 DEBUG nova.virt.hardware [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.025 248514 DEBUG oslo_concurrency.processutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:11:55.392 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:11:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:11:55.392 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:11:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:11:55.392 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:11:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:11:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/878344819' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.555 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765613500.5536823, f2ee92e0-bac6-4763-8d01-824e4f983bbe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.556 248514 INFO nova.compute.manager [-] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.570 248514 DEBUG oslo_concurrency.processutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.603 248514 DEBUG nova.storage.rbd_utils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] rbd image 0cb5f1aa-9103-4667-ac26-f23f9e43ea86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.609 248514 DEBUG oslo_concurrency.processutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.632 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765613500.621352, 7849bffe-8652-42d0-a977-826c0c452b88 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.633 248514 INFO nova.compute.manager [-] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.636 248514 DEBUG nova.compute.manager [None req-5c0e1681-4892-49aa-bec1-6d38ffff2c0a - - - - - -] [instance: f2ee92e0-bac6-4763-8d01-824e4f983bbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.659 248514 DEBUG nova.compute.manager [None req-dcb93fd0-20f0-4b37-a3cd-04c7f8f1b783 - - - - - -] [instance: 7849bffe-8652-42d0-a977-826c0c452b88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:11:55 np0005558241 nova_compute[248510]: 2025-12-13 08:11:55.866 248514 DEBUG nova.network.neutron [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Successfully created port: d38d76da-797b-43d4-9453-50dfde011f0d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:11:55 np0005558241 podman[259852]: 2025-12-13 08:11:55.998363294 +0000 UTC m=+0.074676360 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:11:56 np0005558241 podman[259851]: 2025-12-13 08:11:56.009346804 +0000 UTC m=+0.085373853 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:11:56 np0005558241 podman[259850]: 2025-12-13 08:11:56.033479818 +0000 UTC m=+0.107909247 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 03:11:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:11:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/416770121' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:11:56 np0005558241 nova_compute[248510]: 2025-12-13 08:11:56.174 248514 DEBUG oslo_concurrency.processutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:11:56 np0005558241 nova_compute[248510]: 2025-12-13 08:11:56.176 248514 DEBUG nova.objects.instance [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0cb5f1aa-9103-4667-ac26-f23f9e43ea86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:11:56 np0005558241 nova_compute[248510]: 2025-12-13 08:11:56.363 248514 DEBUG nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:11:56 np0005558241 nova_compute[248510]:  <uuid>0cb5f1aa-9103-4667-ac26-f23f9e43ea86</uuid>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:  <name>instance-00000004</name>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-1663785291</nova:name>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:11:55</nova:creationTime>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:11:56 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:        <nova:user uuid="3d3fdbc81f6f46a591bc6a1c719e3f04">tempest-DeleteServersAdminTestJSON-1827276923-project-member</nova:user>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:        <nova:project uuid="e9147fab08c9431ebaa3462d5f0ea810">tempest-DeleteServersAdminTestJSON-1827276923</nova:project>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <entry name="serial">0cb5f1aa-9103-4667-ac26-f23f9e43ea86</entry>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <entry name="uuid">0cb5f1aa-9103-4667-ac26-f23f9e43ea86</entry>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/0cb5f1aa-9103-4667-ac26-f23f9e43ea86_disk">
Dec 13 03:11:56 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:11:56 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/0cb5f1aa-9103-4667-ac26-f23f9e43ea86_disk.config">
Dec 13 03:11:56 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:11:56 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/0cb5f1aa-9103-4667-ac26-f23f9e43ea86/console.log" append="off"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:11:56 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:11:56 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:11:56 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:11:56 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 03:11:56 np0005558241 nova_compute[248510]: 2025-12-13 08:11:56.461 248514 DEBUG nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 03:11:56 np0005558241 nova_compute[248510]: 2025-12-13 08:11:56.462 248514 DEBUG nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 03:11:56 np0005558241 nova_compute[248510]: 2025-12-13 08:11:56.463 248514 INFO nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Using config drive
Dec 13 03:11:56 np0005558241 nova_compute[248510]: 2025-12-13 08:11:56.509 248514 DEBUG nova.storage.rbd_utils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] rbd image 0cb5f1aa-9103-4667-ac26-f23f9e43ea86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:11:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 59 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 529 KiB/s wr, 16 op/s
Dec 13 03:11:57 np0005558241 nova_compute[248510]: 2025-12-13 08:11:57.006 248514 INFO nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Creating config drive at /var/lib/nova/instances/0cb5f1aa-9103-4667-ac26-f23f9e43ea86/disk.config
Dec 13 03:11:57 np0005558241 nova_compute[248510]: 2025-12-13 08:11:57.015 248514 DEBUG oslo_concurrency.processutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0cb5f1aa-9103-4667-ac26-f23f9e43ea86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcj67e0yp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:11:57 np0005558241 nova_compute[248510]: 2025-12-13 08:11:57.567 248514 DEBUG nova.network.neutron [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Successfully updated port: d38d76da-797b-43d4-9453-50dfde011f0d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 03:11:57 np0005558241 nova_compute[248510]: 2025-12-13 08:11:57.596 248514 DEBUG oslo_concurrency.lockutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquiring lock "refresh_cache-a1cbc612-63fc-4cbf-ad1e-007a93a03286" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:11:57 np0005558241 nova_compute[248510]: 2025-12-13 08:11:57.596 248514 DEBUG oslo_concurrency.lockutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquired lock "refresh_cache-a1cbc612-63fc-4cbf-ad1e-007a93a03286" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:11:57 np0005558241 nova_compute[248510]: 2025-12-13 08:11:57.596 248514 DEBUG nova.network.neutron [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.138 248514 DEBUG nova.network.neutron [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.218 248514 DEBUG nova.compute.manager [req-2f9c517d-45ba-4baa-89e8-d38dbb68a92d req-f9d2d934-8b77-4570-810b-ee386eef35b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Received event network-changed-d38d76da-797b-43d4-9453-50dfde011f0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.219 248514 DEBUG nova.compute.manager [req-2f9c517d-45ba-4baa-89e8-d38dbb68a92d req-f9d2d934-8b77-4570-810b-ee386eef35b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Refreshing instance network info cache due to event network-changed-d38d76da-797b-43d4-9453-50dfde011f0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.219 248514 DEBUG oslo_concurrency.lockutils [req-2f9c517d-45ba-4baa-89e8-d38dbb68a92d req-f9d2d934-8b77-4570-810b-ee386eef35b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-a1cbc612-63fc-4cbf-ad1e-007a93a03286" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.356 248514 DEBUG oslo_concurrency.processutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0cb5f1aa-9103-4667-ac26-f23f9e43ea86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcj67e0yp" returned: 0 in 1.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.380 248514 DEBUG nova.storage.rbd_utils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] rbd image 0cb5f1aa-9103-4667-ac26-f23f9e43ea86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.384 248514 DEBUG oslo_concurrency.processutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0cb5f1aa-9103-4667-ac26-f23f9e43ea86/disk.config 0cb5f1aa-9103-4667-ac26-f23f9e43ea86_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.530 248514 DEBUG oslo_concurrency.processutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0cb5f1aa-9103-4667-ac26-f23f9e43ea86/disk.config 0cb5f1aa-9103-4667-ac26-f23f9e43ea86_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.531 248514 INFO nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Deleting local config drive /var/lib/nova/instances/0cb5f1aa-9103-4667-ac26-f23f9e43ea86/disk.config because it was imported into RBD.
Dec 13 03:11:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:11:58 np0005558241 systemd-machined[210538]: New machine qemu-3-instance-00000004.
Dec 13 03:11:58 np0005558241 systemd[1]: Started Virtual Machine qemu-3-instance-00000004.
Dec 13 03:11:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 120 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 3.0 MiB/s wr, 48 op/s
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.936 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613518.9354355, 0cb5f1aa-9103-4667-ac26-f23f9e43ea86 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.936 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] VM Resumed (Lifecycle Event)
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.940 248514 DEBUG nova.compute.manager [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.940 248514 DEBUG nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.944 248514 INFO nova.virt.libvirt.driver [-] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Instance spawned successfully.
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.945 248514 DEBUG nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.977 248514 DEBUG nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.978 248514 DEBUG nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.978 248514 DEBUG nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.979 248514 DEBUG nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.979 248514 DEBUG nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.980 248514 DEBUG nova.virt.libvirt.driver [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.984 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:11:58 np0005558241 nova_compute[248510]: 2025-12-13 08:11:58.987 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.060 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.061 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613518.9389937, 0cb5f1aa-9103-4667-ac26-f23f9e43ea86 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.061 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] VM Started (Lifecycle Event)
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.093 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.098 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.108 248514 INFO nova.compute.manager [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Took 4.86 seconds to spawn the instance on the hypervisor.
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.108 248514 DEBUG nova.compute.manager [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.119 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.185 248514 INFO nova.compute.manager [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Took 6.99 seconds to build instance.
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.209 248514 DEBUG oslo_concurrency.lockutils [None req-5692f6ae-5f71-4ed5-a64f-1db50dbe6c6f 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "0cb5f1aa-9103-4667-ac26-f23f9e43ea86" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.468 248514 DEBUG nova.network.neutron [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Updating instance_info_cache with network_info: [{"id": "d38d76da-797b-43d4-9453-50dfde011f0d", "address": "fa:16:3e:4d:a1:fb", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38d76da-79", "ovs_interfaceid": "d38d76da-797b-43d4-9453-50dfde011f0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.498 248514 DEBUG oslo_concurrency.lockutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Releasing lock "refresh_cache-a1cbc612-63fc-4cbf-ad1e-007a93a03286" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.500 248514 DEBUG nova.compute.manager [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Instance network_info: |[{"id": "d38d76da-797b-43d4-9453-50dfde011f0d", "address": "fa:16:3e:4d:a1:fb", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38d76da-79", "ovs_interfaceid": "d38d76da-797b-43d4-9453-50dfde011f0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.501 248514 DEBUG oslo_concurrency.lockutils [req-2f9c517d-45ba-4baa-89e8-d38dbb68a92d req-f9d2d934-8b77-4570-810b-ee386eef35b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-a1cbc612-63fc-4cbf-ad1e-007a93a03286" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.502 248514 DEBUG nova.network.neutron [req-2f9c517d-45ba-4baa-89e8-d38dbb68a92d req-f9d2d934-8b77-4570-810b-ee386eef35b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Refreshing network info cache for port d38d76da-797b-43d4-9453-50dfde011f0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.504 248514 DEBUG nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Start _get_guest_xml network_info=[{"id": "d38d76da-797b-43d4-9453-50dfde011f0d", "address": "fa:16:3e:4d:a1:fb", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38d76da-79", "ovs_interfaceid": "d38d76da-797b-43d4-9453-50dfde011f0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.509 248514 WARNING nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.514 248514 DEBUG nova.virt.libvirt.host [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.516 248514 DEBUG nova.virt.libvirt.host [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.518 248514 DEBUG nova.virt.libvirt.host [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.518 248514 DEBUG nova.virt.libvirt.host [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.519 248514 DEBUG nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.519 248514 DEBUG nova.virt.hardware [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:11:40Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1173756231',id=8,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-1380081649',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.520 248514 DEBUG nova.virt.hardware [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.520 248514 DEBUG nova.virt.hardware [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.520 248514 DEBUG nova.virt.hardware [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.521 248514 DEBUG nova.virt.hardware [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.521 248514 DEBUG nova.virt.hardware [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.521 248514 DEBUG nova.virt.hardware [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.522 248514 DEBUG nova.virt.hardware [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.522 248514 DEBUG nova.virt.hardware [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.522 248514 DEBUG nova.virt.hardware [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.522 248514 DEBUG nova.virt.hardware [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 03:11:59 np0005558241 nova_compute[248510]: 2025-12-13 08:11:59.525 248514 DEBUG oslo_concurrency.processutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:11:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:11:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:11:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:11:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:11:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:11:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:11:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:11:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:11:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:11:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:11:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:11:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:12:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:12:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2403581774' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.175 248514 DEBUG oslo_concurrency.processutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.199 248514 DEBUG nova.storage.rbd_utils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] rbd image a1cbc612-63fc-4cbf-ad1e-007a93a03286_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.204 248514 DEBUG oslo_concurrency.processutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:12:00 np0005558241 podman[260193]: 2025-12-13 08:12:00.205050006 +0000 UTC m=+0.049377167 container create e2c27eaeff8b82537b872bea0b89caf15debb28b9b2bebdcd83f2a6d4969f9ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:12:00 np0005558241 systemd[1]: Started libpod-conmon-e2c27eaeff8b82537b872bea0b89caf15debb28b9b2bebdcd83f2a6d4969f9ff.scope.
Dec 13 03:12:00 np0005558241 podman[260193]: 2025-12-13 08:12:00.182026519 +0000 UTC m=+0.026353700 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:12:00 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:12:00 np0005558241 podman[260193]: 2025-12-13 08:12:00.312533781 +0000 UTC m=+0.156860962 container init e2c27eaeff8b82537b872bea0b89caf15debb28b9b2bebdcd83f2a6d4969f9ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:12:00 np0005558241 podman[260193]: 2025-12-13 08:12:00.321713347 +0000 UTC m=+0.166040508 container start e2c27eaeff8b82537b872bea0b89caf15debb28b9b2bebdcd83f2a6d4969f9ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:12:00 np0005558241 podman[260193]: 2025-12-13 08:12:00.326628768 +0000 UTC m=+0.170955959 container attach e2c27eaeff8b82537b872bea0b89caf15debb28b9b2bebdcd83f2a6d4969f9ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:12:00 np0005558241 determined_wilson[260230]: 167 167
Dec 13 03:12:00 np0005558241 systemd[1]: libpod-e2c27eaeff8b82537b872bea0b89caf15debb28b9b2bebdcd83f2a6d4969f9ff.scope: Deactivated successfully.
Dec 13 03:12:00 np0005558241 podman[260245]: 2025-12-13 08:12:00.372433086 +0000 UTC m=+0.027560889 container died e2c27eaeff8b82537b872bea0b89caf15debb28b9b2bebdcd83f2a6d4969f9ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 03:12:00 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8599938f0aa41ae4567019bef9b89cb9a9216fdb66ab3f96ea70dd33735cd06f-merged.mount: Deactivated successfully.
Dec 13 03:12:00 np0005558241 podman[260245]: 2025-12-13 08:12:00.420510309 +0000 UTC m=+0.075638142 container remove e2c27eaeff8b82537b872bea0b89caf15debb28b9b2bebdcd83f2a6d4969f9ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 03:12:00 np0005558241 systemd[1]: libpod-conmon-e2c27eaeff8b82537b872bea0b89caf15debb28b9b2bebdcd83f2a6d4969f9ff.scope: Deactivated successfully.
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.445 248514 DEBUG oslo_concurrency.lockutils [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Acquiring lock "0cb5f1aa-9103-4667-ac26-f23f9e43ea86" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.446 248514 DEBUG oslo_concurrency.lockutils [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "0cb5f1aa-9103-4667-ac26-f23f9e43ea86" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.447 248514 DEBUG oslo_concurrency.lockutils [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Acquiring lock "0cb5f1aa-9103-4667-ac26-f23f9e43ea86-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.447 248514 DEBUG oslo_concurrency.lockutils [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "0cb5f1aa-9103-4667-ac26-f23f9e43ea86-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.447 248514 DEBUG oslo_concurrency.lockutils [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "0cb5f1aa-9103-4667-ac26-f23f9e43ea86-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.449 248514 INFO nova.compute.manager [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Terminating instance#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.450 248514 DEBUG oslo_concurrency.lockutils [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Acquiring lock "refresh_cache-0cb5f1aa-9103-4667-ac26-f23f9e43ea86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.450 248514 DEBUG oslo_concurrency.lockutils [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Acquired lock "refresh_cache-0cb5f1aa-9103-4667-ac26-f23f9e43ea86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.451 248514 DEBUG nova.network.neutron [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:12:00 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:12:00 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:12:00 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:12:00 np0005558241 podman[260273]: 2025-12-13 08:12:00.584615419 +0000 UTC m=+0.036675844 container create f3715bbb970d136455bc549fe29755a3a16b2dba44a4addd2ea5b24e5462e32c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jones, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:12:00 np0005558241 systemd[1]: Started libpod-conmon-f3715bbb970d136455bc549fe29755a3a16b2dba44a4addd2ea5b24e5462e32c.scope.
Dec 13 03:12:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1363: 321 pgs: 321 active+clean; 134 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.5 MiB/s wr, 63 op/s
Dec 13 03:12:00 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:12:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15e153d5f13b88d52f06a7e3a066d7dcba16939bd8720fbe659715ddee814d58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:12:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15e153d5f13b88d52f06a7e3a066d7dcba16939bd8720fbe659715ddee814d58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:12:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15e153d5f13b88d52f06a7e3a066d7dcba16939bd8720fbe659715ddee814d58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:12:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15e153d5f13b88d52f06a7e3a066d7dcba16939bd8720fbe659715ddee814d58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:12:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15e153d5f13b88d52f06a7e3a066d7dcba16939bd8720fbe659715ddee814d58/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:12:00 np0005558241 podman[260273]: 2025-12-13 08:12:00.5680024 +0000 UTC m=+0.020062855 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:12:00 np0005558241 podman[260273]: 2025-12-13 08:12:00.676543592 +0000 UTC m=+0.128604017 container init f3715bbb970d136455bc549fe29755a3a16b2dba44a4addd2ea5b24e5462e32c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 03:12:00 np0005558241 podman[260273]: 2025-12-13 08:12:00.683041352 +0000 UTC m=+0.135101787 container start f3715bbb970d136455bc549fe29755a3a16b2dba44a4addd2ea5b24e5462e32c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jones, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.684 248514 DEBUG nova.network.neutron [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:12:00 np0005558241 podman[260273]: 2025-12-13 08:12:00.68865169 +0000 UTC m=+0.140712135 container attach f3715bbb970d136455bc549fe29755a3a16b2dba44a4addd2ea5b24e5462e32c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 03:12:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:12:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3686746625' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.785 248514 DEBUG oslo_concurrency.processutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.787 248514 DEBUG nova.virt.libvirt.vif [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:11:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1074105427',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1074105427',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(8),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1074105427',id=3,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=8,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ4VTdpGZWW2edkCqJ93Ae0N7W5Wl1KxtxPXJB+IDKzYse7HySnAxnSN+D34LgL+vn16+92mSLvTyRP3HupOhW4VHoScrkUgJj4Ke9zalwfgWa6DU4Za2b0K2T2xL2drqg==',key_name='tempest-keypair-1114404274',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='474fd43853634a5d8b3026b622723873',ramdisk_id='',reservation_id='r-jmhsm7jq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-306379411',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-306379411-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:11:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='df94630f47824853b3347bca91426ecd',uuid=a1cbc612-63fc-4cbf-ad1e-007a93a03286,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d38d76da-797b-43d4-9453-50dfde011f0d", "address": "fa:16:3e:4d:a1:fb", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38d76da-79", "ovs_interfaceid": "d38d76da-797b-43d4-9453-50dfde011f0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.788 248514 DEBUG nova.network.os_vif_util [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Converting VIF {"id": "d38d76da-797b-43d4-9453-50dfde011f0d", "address": "fa:16:3e:4d:a1:fb", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38d76da-79", "ovs_interfaceid": "d38d76da-797b-43d4-9453-50dfde011f0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.789 248514 DEBUG nova.network.os_vif_util [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:a1:fb,bridge_name='br-int',has_traffic_filtering=True,id=d38d76da-797b-43d4-9453-50dfde011f0d,network=Network(033b8122-ba6c-4217-b5b0-bd54c6f0aec0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38d76da-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.791 248514 DEBUG nova.objects.instance [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lazy-loading 'pci_devices' on Instance uuid a1cbc612-63fc-4cbf-ad1e-007a93a03286 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.848 248514 DEBUG nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:12:00 np0005558241 nova_compute[248510]:  <uuid>a1cbc612-63fc-4cbf-ad1e-007a93a03286</uuid>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:  <name>instance-00000003</name>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1074105427</nova:name>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:11:59</nova:creationTime>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <nova:flavor name="tempest-flavor_with_ephemeral_0-1380081649">
Dec 13 03:12:00 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:        <nova:user uuid="df94630f47824853b3347bca91426ecd">tempest-ServersWithSpecificFlavorTestJSON-306379411-project-member</nova:user>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:        <nova:project uuid="474fd43853634a5d8b3026b622723873">tempest-ServersWithSpecificFlavorTestJSON-306379411</nova:project>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:        <nova:port uuid="d38d76da-797b-43d4-9453-50dfde011f0d">
Dec 13 03:12:00 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <entry name="serial">a1cbc612-63fc-4cbf-ad1e-007a93a03286</entry>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <entry name="uuid">a1cbc612-63fc-4cbf-ad1e-007a93a03286</entry>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/a1cbc612-63fc-4cbf-ad1e-007a93a03286_disk">
Dec 13 03:12:00 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:12:00 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/a1cbc612-63fc-4cbf-ad1e-007a93a03286_disk.config">
Dec 13 03:12:00 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:12:00 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:4d:a1:fb"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <target dev="tapd38d76da-79"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/a1cbc612-63fc-4cbf-ad1e-007a93a03286/console.log" append="off"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:12:00 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:12:00 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:12:00 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:12:00 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.855 248514 DEBUG nova.compute.manager [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Preparing to wait for external event network-vif-plugged-d38d76da-797b-43d4-9453-50dfde011f0d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.855 248514 DEBUG oslo_concurrency.lockutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquiring lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.855 248514 DEBUG oslo_concurrency.lockutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.855 248514 DEBUG oslo_concurrency.lockutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.856 248514 DEBUG nova.virt.libvirt.vif [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:11:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1074105427',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1074105427',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(8),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1074105427',id=3,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=8,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ4VTdpGZWW2edkCqJ93Ae0N7W5Wl1KxtxPXJB+IDKzYse7HySnAxnSN+D34LgL+vn16+92mSLvTyRP3HupOhW4VHoScrkUgJj4Ke9zalwfgWa6DU4Za2b0K2T2xL2drqg==',key_name='tempest-keypair-1114404274',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='474fd43853634a5d8b3026b622723873',ramdisk_id='',reservation_id='r-jmhsm7jq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-306379411',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-306379411-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:11:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='df94630f47824853b3347bca91426ecd',uuid=a1cbc612-63fc-4cbf-ad1e-007a93a03286,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d38d76da-797b-43d4-9453-50dfde011f0d", "address": "fa:16:3e:4d:a1:fb", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38d76da-79", "ovs_interfaceid": "d38d76da-797b-43d4-9453-50dfde011f0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.856 248514 DEBUG nova.network.os_vif_util [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Converting VIF {"id": "d38d76da-797b-43d4-9453-50dfde011f0d", "address": "fa:16:3e:4d:a1:fb", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38d76da-79", "ovs_interfaceid": "d38d76da-797b-43d4-9453-50dfde011f0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.857 248514 DEBUG nova.network.os_vif_util [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:a1:fb,bridge_name='br-int',has_traffic_filtering=True,id=d38d76da-797b-43d4-9453-50dfde011f0d,network=Network(033b8122-ba6c-4217-b5b0-bd54c6f0aec0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38d76da-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.859 248514 DEBUG os_vif [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:a1:fb,bridge_name='br-int',has_traffic_filtering=True,id=d38d76da-797b-43d4-9453-50dfde011f0d,network=Network(033b8122-ba6c-4217-b5b0-bd54c6f0aec0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38d76da-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.895 248514 DEBUG ovsdbapp.backend.ovs_idl [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.897 248514 DEBUG ovsdbapp.backend.ovs_idl [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.897 248514 DEBUG ovsdbapp.backend.ovs_idl [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.898 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.898 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [POLLOUT] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.899 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 13 03:12:00 np0005558241 nova_compute[248510]: 2025-12-13 08:12:00.900 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:01 np0005558241 nova_compute[248510]: 2025-12-13 08:12:01.084 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:01 np0005558241 nova_compute[248510]: 2025-12-13 08:12:01.085 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:12:01 np0005558241 nova_compute[248510]: 2025-12-13 08:12:01.086 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:12:01 np0005558241 nova_compute[248510]: 2025-12-13 08:12:01.087 248514 INFO oslo.privsep.daemon [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpy7s3kiic/privsep.sock']#033[00m
Dec 13 03:12:01 np0005558241 laughing_jones[260289]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:12:01 np0005558241 laughing_jones[260289]: --> All data devices are unavailable
Dec 13 03:12:01 np0005558241 systemd[1]: libpod-f3715bbb970d136455bc549fe29755a3a16b2dba44a4addd2ea5b24e5462e32c.scope: Deactivated successfully.
Dec 13 03:12:01 np0005558241 podman[260315]: 2025-12-13 08:12:01.276347296 +0000 UTC m=+0.038180241 container died f3715bbb970d136455bc549fe29755a3a16b2dba44a4addd2ea5b24e5462e32c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jones, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 03:12:01 np0005558241 systemd[1]: var-lib-containers-storage-overlay-15e153d5f13b88d52f06a7e3a066d7dcba16939bd8720fbe659715ddee814d58-merged.mount: Deactivated successfully.
Dec 13 03:12:01 np0005558241 podman[260315]: 2025-12-13 08:12:01.321281432 +0000 UTC m=+0.083114357 container remove f3715bbb970d136455bc549fe29755a3a16b2dba44a4addd2ea5b24e5462e32c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Dec 13 03:12:01 np0005558241 systemd[1]: libpod-conmon-f3715bbb970d136455bc549fe29755a3a16b2dba44a4addd2ea5b24e5462e32c.scope: Deactivated successfully.
Dec 13 03:12:01 np0005558241 nova_compute[248510]: 2025-12-13 08:12:01.814 248514 DEBUG nova.network.neutron [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:12:01 np0005558241 podman[260394]: 2025-12-13 08:12:01.8216836 +0000 UTC m=+0.028519633 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:12:01 np0005558241 podman[260394]: 2025-12-13 08:12:01.919130819 +0000 UTC m=+0.125966832 container create c3631f3bfbc39cd041122f59889f8daa9f6c76e308047cfca53fb7c0a32009be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_feynman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:12:01 np0005558241 systemd[1]: Started libpod-conmon-c3631f3bfbc39cd041122f59889f8daa9f6c76e308047cfca53fb7c0a32009be.scope.
Dec 13 03:12:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.003 248514 DEBUG oslo_concurrency.lockutils [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Releasing lock "refresh_cache-0cb5f1aa-9103-4667-ac26-f23f9e43ea86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.005 248514 DEBUG nova.compute.manager [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:12:02 np0005558241 podman[260394]: 2025-12-13 08:12:02.037594755 +0000 UTC m=+0.244430778 container init c3631f3bfbc39cd041122f59889f8daa9f6c76e308047cfca53fb7c0a32009be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_feynman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 03:12:02 np0005558241 podman[260394]: 2025-12-13 08:12:02.048232037 +0000 UTC m=+0.255068050 container start c3631f3bfbc39cd041122f59889f8daa9f6c76e308047cfca53fb7c0a32009be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 03:12:02 np0005558241 competent_feynman[260410]: 167 167
Dec 13 03:12:02 np0005558241 systemd[1]: libpod-c3631f3bfbc39cd041122f59889f8daa9f6c76e308047cfca53fb7c0a32009be.scope: Deactivated successfully.
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.077 248514 INFO oslo.privsep.daemon [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:01.887 260405 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:01.892 260405 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:01.894 260405 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:01.894 260405 INFO oslo.privsep.daemon [-] privsep daemon running as pid 260405#033[00m
Dec 13 03:12:02 np0005558241 podman[260394]: 2025-12-13 08:12:02.090248671 +0000 UTC m=+0.297084684 container attach c3631f3bfbc39cd041122f59889f8daa9f6c76e308047cfca53fb7c0a32009be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 03:12:02 np0005558241 podman[260394]: 2025-12-13 08:12:02.09226211 +0000 UTC m=+0.299098123 container died c3631f3bfbc39cd041122f59889f8daa9f6c76e308047cfca53fb7c0a32009be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_feynman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:12:02 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4d6ee693990da412f2087fb979734c49c28521ca215a9b46b405a4debc7eee6b-merged.mount: Deactivated successfully.
Dec 13 03:12:02 np0005558241 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec 13 03:12:02 np0005558241 podman[260394]: 2025-12-13 08:12:02.133541897 +0000 UTC m=+0.340377950 container remove c3631f3bfbc39cd041122f59889f8daa9f6c76e308047cfca53fb7c0a32009be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_feynman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:12:02 np0005558241 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Consumed 3.460s CPU time.
Dec 13 03:12:02 np0005558241 systemd-machined[210538]: Machine qemu-3-instance-00000004 terminated.
Dec 13 03:12:02 np0005558241 systemd[1]: libpod-conmon-c3631f3bfbc39cd041122f59889f8daa9f6c76e308047cfca53fb7c0a32009be.scope: Deactivated successfully.
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.250 248514 INFO nova.virt.libvirt.driver [-] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Instance destroyed successfully.#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.253 248514 DEBUG nova.objects.instance [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lazy-loading 'resources' on Instance uuid 0cb5f1aa-9103-4667-ac26-f23f9e43ea86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:12:02 np0005558241 podman[260444]: 2025-12-13 08:12:02.345154746 +0000 UTC m=+0.053602891 container create 75f1ba949494a461abae7b87afe13c700e8130f4b93b049fb8024295a24ff18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chebyshev, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 03:12:02 np0005558241 podman[260444]: 2025-12-13 08:12:02.3156764 +0000 UTC m=+0.024124545 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.429 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.430 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd38d76da-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.430 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd38d76da-79, col_values=(('external_ids', {'iface-id': 'd38d76da-797b-43d4-9453-50dfde011f0d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:a1:fb', 'vm-uuid': 'a1cbc612-63fc-4cbf-ad1e-007a93a03286'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.432 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:02 np0005558241 NetworkManager[50376]: <info>  [1765613522.4337] manager: (tapd38d76da-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.435 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.442 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.444 248514 INFO os_vif [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:a1:fb,bridge_name='br-int',has_traffic_filtering=True,id=d38d76da-797b-43d4-9453-50dfde011f0d,network=Network(033b8122-ba6c-4217-b5b0-bd54c6f0aec0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38d76da-79')#033[00m
Dec 13 03:12:02 np0005558241 systemd[1]: Started libpod-conmon-75f1ba949494a461abae7b87afe13c700e8130f4b93b049fb8024295a24ff18a.scope.
Dec 13 03:12:02 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:12:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0375fb4241a79e35f9805798da566694b4d0e8a21510c4a8a8b6ef5d8c3832c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:12:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0375fb4241a79e35f9805798da566694b4d0e8a21510c4a8a8b6ef5d8c3832c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:12:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0375fb4241a79e35f9805798da566694b4d0e8a21510c4a8a8b6ef5d8c3832c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:12:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0375fb4241a79e35f9805798da566694b4d0e8a21510c4a8a8b6ef5d8c3832c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:12:02 np0005558241 podman[260444]: 2025-12-13 08:12:02.543711114 +0000 UTC m=+0.252159279 container init 75f1ba949494a461abae7b87afe13c700e8130f4b93b049fb8024295a24ff18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chebyshev, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:12:02 np0005558241 podman[260444]: 2025-12-13 08:12:02.555105414 +0000 UTC m=+0.263553559 container start 75f1ba949494a461abae7b87afe13c700e8130f4b93b049fb8024295a24ff18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:12:02 np0005558241 podman[260444]: 2025-12-13 08:12:02.598929583 +0000 UTC m=+0.307377728 container attach 75f1ba949494a461abae7b87afe13c700e8130f4b93b049fb8024295a24ff18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chebyshev, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.604 248514 DEBUG nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.605 248514 DEBUG nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.605 248514 DEBUG nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] No VIF found with MAC fa:16:3e:4d:a1:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.606 248514 INFO nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Using config drive#033[00m
Dec 13 03:12:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1364: 321 pgs: 321 active+clean; 134 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.5 MiB/s wr, 61 op/s
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.643 248514 DEBUG nova.storage.rbd_utils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] rbd image a1cbc612-63fc-4cbf-ad1e-007a93a03286_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.717 248514 DEBUG nova.network.neutron [req-2f9c517d-45ba-4baa-89e8-d38dbb68a92d req-f9d2d934-8b77-4570-810b-ee386eef35b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Updated VIF entry in instance network info cache for port d38d76da-797b-43d4-9453-50dfde011f0d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.718 248514 DEBUG nova.network.neutron [req-2f9c517d-45ba-4baa-89e8-d38dbb68a92d req-f9d2d934-8b77-4570-810b-ee386eef35b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Updating instance_info_cache with network_info: [{"id": "d38d76da-797b-43d4-9453-50dfde011f0d", "address": "fa:16:3e:4d:a1:fb", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38d76da-79", "ovs_interfaceid": "d38d76da-797b-43d4-9453-50dfde011f0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.747 248514 DEBUG oslo_concurrency.lockutils [req-2f9c517d-45ba-4baa-89e8-d38dbb68a92d req-f9d2d934-8b77-4570-810b-ee386eef35b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-a1cbc612-63fc-4cbf-ad1e-007a93a03286" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.799 248514 INFO nova.virt.libvirt.driver [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Deleting instance files /var/lib/nova/instances/0cb5f1aa-9103-4667-ac26-f23f9e43ea86_del#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.800 248514 INFO nova.virt.libvirt.driver [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Deletion of /var/lib/nova/instances/0cb5f1aa-9103-4667-ac26-f23f9e43ea86_del complete#033[00m
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]: {
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:    "0": [
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:        {
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "devices": [
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "/dev/loop3"
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            ],
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "lv_name": "ceph_lv0",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "lv_size": "21470642176",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "name": "ceph_lv0",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "tags": {
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.cluster_name": "ceph",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.crush_device_class": "",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.encrypted": "0",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.objectstore": "bluestore",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.osd_id": "0",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.type": "block",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.vdo": "0",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.with_tpm": "0"
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            },
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "type": "block",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "vg_name": "ceph_vg0"
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:        }
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:    ],
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:    "1": [
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:        {
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "devices": [
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "/dev/loop4"
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            ],
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "lv_name": "ceph_lv1",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "lv_size": "21470642176",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "name": "ceph_lv1",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "tags": {
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.cluster_name": "ceph",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.crush_device_class": "",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.encrypted": "0",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.objectstore": "bluestore",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.osd_id": "1",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.type": "block",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.vdo": "0",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.with_tpm": "0"
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            },
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "type": "block",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "vg_name": "ceph_vg1"
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:        }
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:    ],
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:    "2": [
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:        {
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "devices": [
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "/dev/loop5"
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            ],
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "lv_name": "ceph_lv2",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "lv_size": "21470642176",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "name": "ceph_lv2",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "tags": {
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.cluster_name": "ceph",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.crush_device_class": "",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.encrypted": "0",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.objectstore": "bluestore",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.osd_id": "2",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.type": "block",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.vdo": "0",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:                "ceph.with_tpm": "0"
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            },
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "type": "block",
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:            "vg_name": "ceph_vg2"
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:        }
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]:    ]
Dec 13 03:12:02 np0005558241 gifted_chebyshev[260477]: }
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.876 248514 INFO nova.compute.manager [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.877 248514 DEBUG oslo.service.loopingcall [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.877 248514 DEBUG nova.compute.manager [-] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:12:02 np0005558241 nova_compute[248510]: 2025-12-13 08:12:02.877 248514 DEBUG nova.network.neutron [-] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:12:02 np0005558241 systemd[1]: libpod-75f1ba949494a461abae7b87afe13c700e8130f4b93b049fb8024295a24ff18a.scope: Deactivated successfully.
Dec 13 03:12:02 np0005558241 podman[260506]: 2025-12-13 08:12:02.941707361 +0000 UTC m=+0.026402711 container died 75f1ba949494a461abae7b87afe13c700e8130f4b93b049fb8024295a24ff18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chebyshev, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:12:03 np0005558241 nova_compute[248510]: 2025-12-13 08:12:03.041 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:03 np0005558241 nova_compute[248510]: 2025-12-13 08:12:03.047 248514 DEBUG nova.network.neutron [-] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:12:03 np0005558241 nova_compute[248510]: 2025-12-13 08:12:03.068 248514 DEBUG nova.network.neutron [-] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:12:03 np0005558241 nova_compute[248510]: 2025-12-13 08:12:03.089 248514 INFO nova.compute.manager [-] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Took 0.21 seconds to deallocate network for instance.#033[00m
Dec 13 03:12:03 np0005558241 nova_compute[248510]: 2025-12-13 08:12:03.145 248514 DEBUG oslo_concurrency.lockutils [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:03 np0005558241 nova_compute[248510]: 2025-12-13 08:12:03.146 248514 DEBUG oslo_concurrency.lockutils [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:03 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0375fb4241a79e35f9805798da566694b4d0e8a21510c4a8a8b6ef5d8c3832c3-merged.mount: Deactivated successfully.
Dec 13 03:12:03 np0005558241 podman[260506]: 2025-12-13 08:12:03.229361021 +0000 UTC m=+0.314056361 container remove 75f1ba949494a461abae7b87afe13c700e8130f4b93b049fb8024295a24ff18a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec 13 03:12:03 np0005558241 systemd[1]: libpod-conmon-75f1ba949494a461abae7b87afe13c700e8130f4b93b049fb8024295a24ff18a.scope: Deactivated successfully.
Dec 13 03:12:03 np0005558241 nova_compute[248510]: 2025-12-13 08:12:03.422 248514 DEBUG oslo_concurrency.processutils [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:12:03 np0005558241 nova_compute[248510]: 2025-12-13 08:12:03.474 248514 INFO nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Creating config drive at /var/lib/nova/instances/a1cbc612-63fc-4cbf-ad1e-007a93a03286/disk.config#033[00m
Dec 13 03:12:03 np0005558241 nova_compute[248510]: 2025-12-13 08:12:03.480 248514 DEBUG oslo_concurrency.processutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a1cbc612-63fc-4cbf-ad1e-007a93a03286/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr94pi5q6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:12:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:12:03 np0005558241 nova_compute[248510]: 2025-12-13 08:12:03.610 248514 DEBUG oslo_concurrency.processutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a1cbc612-63fc-4cbf-ad1e-007a93a03286/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr94pi5q6" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:12:03 np0005558241 nova_compute[248510]: 2025-12-13 08:12:03.678 248514 DEBUG nova.storage.rbd_utils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] rbd image a1cbc612-63fc-4cbf-ad1e-007a93a03286_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:12:03 np0005558241 nova_compute[248510]: 2025-12-13 08:12:03.686 248514 DEBUG oslo_concurrency.processutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a1cbc612-63fc-4cbf-ad1e-007a93a03286/disk.config a1cbc612-63fc-4cbf-ad1e-007a93a03286_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:12:03 np0005558241 podman[260624]: 2025-12-13 08:12:03.745369223 +0000 UTC m=+0.046551476 container create 78530873f470cfcc71e8a5b0dc5ed066b2e6cec0ce8a443028b6e611047d2026 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_goldstine, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:12:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:12:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 6842 writes, 27K keys, 6842 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6841 writes, 1400 syncs, 4.89 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 873 writes, 2358 keys, 873 commit groups, 1.0 writes per commit group, ingest: 1.34 MB, 0.00 MB/s#012Interval WAL: 872 writes, 375 syncs, 2.33 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 03:12:03 np0005558241 systemd[1]: Started libpod-conmon-78530873f470cfcc71e8a5b0dc5ed066b2e6cec0ce8a443028b6e611047d2026.scope.
Dec 13 03:12:03 np0005558241 nova_compute[248510]: 2025-12-13 08:12:03.808 248514 DEBUG oslo_concurrency.processutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a1cbc612-63fc-4cbf-ad1e-007a93a03286/disk.config a1cbc612-63fc-4cbf-ad1e-007a93a03286_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:12:03 np0005558241 nova_compute[248510]: 2025-12-13 08:12:03.810 248514 INFO nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Deleting local config drive /var/lib/nova/instances/a1cbc612-63fc-4cbf-ad1e-007a93a03286/disk.config because it was imported into RBD.#033[00m
Dec 13 03:12:03 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:12:03 np0005558241 podman[260624]: 2025-12-13 08:12:03.727704799 +0000 UTC m=+0.028887062 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:12:03 np0005558241 podman[260624]: 2025-12-13 08:12:03.833452732 +0000 UTC m=+0.134635005 container init 78530873f470cfcc71e8a5b0dc5ed066b2e6cec0ce8a443028b6e611047d2026 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_goldstine, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:12:03 np0005558241 podman[260624]: 2025-12-13 08:12:03.842563606 +0000 UTC m=+0.143745859 container start 78530873f470cfcc71e8a5b0dc5ed066b2e6cec0ce8a443028b6e611047d2026 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_goldstine, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:12:03 np0005558241 podman[260624]: 2025-12-13 08:12:03.84638696 +0000 UTC m=+0.147569233 container attach 78530873f470cfcc71e8a5b0dc5ed066b2e6cec0ce8a443028b6e611047d2026 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_goldstine, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:12:03 np0005558241 exciting_goldstine[260660]: 167 167
Dec 13 03:12:03 np0005558241 systemd[1]: libpod-78530873f470cfcc71e8a5b0dc5ed066b2e6cec0ce8a443028b6e611047d2026.scope: Deactivated successfully.
Dec 13 03:12:03 np0005558241 podman[260624]: 2025-12-13 08:12:03.849375334 +0000 UTC m=+0.150557587 container died 78530873f470cfcc71e8a5b0dc5ed066b2e6cec0ce8a443028b6e611047d2026 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 03:12:03 np0005558241 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec 13 03:12:03 np0005558241 kernel: tapd38d76da-79: entered promiscuous mode
Dec 13 03:12:03 np0005558241 NetworkManager[50376]: <info>  [1765613523.9294] manager: (tapd38d76da-79): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Dec 13 03:12:03 np0005558241 nova_compute[248510]: 2025-12-13 08:12:03.928 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:12:03Z|00027|binding|INFO|Claiming lport d38d76da-797b-43d4-9453-50dfde011f0d for this chassis.
Dec 13 03:12:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:12:03Z|00028|binding|INFO|d38d76da-797b-43d4-9453-50dfde011f0d: Claiming fa:16:3e:4d:a1:fb 10.100.0.13
Dec 13 03:12:03 np0005558241 systemd-udevd[260687]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:12:03 np0005558241 nova_compute[248510]: 2025-12-13 08:12:03.933 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:03.943 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:a1:fb 10.100.0.13'], port_security=['fa:16:3e:4d:a1:fb 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a1cbc612-63fc-4cbf-ad1e-007a93a03286', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-033b8122-ba6c-4217-b5b0-bd54c6f0aec0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '474fd43853634a5d8b3026b622723873', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd0e531d-b7ab-4913-abf7-a093cbaf1988', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d464c07e-e23a-4447-a355-481e56e0a828, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d38d76da-797b-43d4-9453-50dfde011f0d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:03.944 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d38d76da-797b-43d4-9453-50dfde011f0d in datapath 033b8122-ba6c-4217-b5b0-bd54c6f0aec0 bound to our chassis#033[00m
Dec 13 03:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:03.946 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 033b8122-ba6c-4217-b5b0-bd54c6f0aec0#033[00m
Dec 13 03:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:03.948 158419 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpah3zi4lm/privsep.sock']#033[00m
Dec 13 03:12:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:12:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3481503426' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:12:03 np0005558241 NetworkManager[50376]: <info>  [1765613523.9648] device (tapd38d76da-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:12:03 np0005558241 NetworkManager[50376]: <info>  [1765613523.9654] device (tapd38d76da-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:12:03 np0005558241 nova_compute[248510]: 2025-12-13 08:12:03.998 248514 DEBUG oslo_concurrency.processutils [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.013 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.017 248514 DEBUG nova.compute.provider_tree [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:12:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:12:04Z|00029|binding|INFO|Setting lport d38d76da-797b-43d4-9453-50dfde011f0d ovn-installed in OVS
Dec 13 03:12:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:12:04Z|00030|binding|INFO|Setting lport d38d76da-797b-43d4-9453-50dfde011f0d up in Southbound
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.020 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:04 np0005558241 systemd-machined[210538]: New machine qemu-4-instance-00000003.
Dec 13 03:12:04 np0005558241 systemd[1]: Started Virtual Machine qemu-4-instance-00000003.
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.197 248514 DEBUG nova.scheduler.client.report [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.243 248514 DEBUG oslo_concurrency.lockutils [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.280 248514 INFO nova.scheduler.client.report [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Deleted allocations for instance 0cb5f1aa-9103-4667-ac26-f23f9e43ea86#033[00m
Dec 13 03:12:04 np0005558241 systemd[1]: var-lib-containers-storage-overlay-be7293f5c4937df570c2387a866e986d6d46a58ea7d376e5d3c0f0ff3c93ff28-merged.mount: Deactivated successfully.
Dec 13 03:12:04 np0005558241 podman[260624]: 2025-12-13 08:12:04.357186933 +0000 UTC m=+0.658369186 container remove 78530873f470cfcc71e8a5b0dc5ed066b2e6cec0ce8a443028b6e611047d2026 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.365 248514 DEBUG oslo_concurrency.lockutils [None req-eabd6245-3370-44c3-abd4-bccb67484367 3d3fdbc81f6f46a591bc6a1c719e3f04 e9147fab08c9431ebaa3462d5f0ea810 - - default default] Lock "0cb5f1aa-9103-4667-ac26-f23f9e43ea86" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:04 np0005558241 systemd[1]: libpod-conmon-78530873f470cfcc71e8a5b0dc5ed066b2e6cec0ce8a443028b6e611047d2026.scope: Deactivated successfully.
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.518 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613524.5179613, a1cbc612-63fc-4cbf-ad1e-007a93a03286 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.519 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] VM Started (Lifecycle Event)#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.547 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.554 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613524.5180616, a1cbc612-63fc-4cbf-ad1e-007a93a03286 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.554 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.585 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.593 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:12:04 np0005558241 podman[260760]: 2025-12-13 08:12:04.603528987 +0000 UTC m=+0.066725053 container create e3429b449e48b114c891704d1fa140bde81be8d5c1479953996a0ff5fb036091 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.633 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:12:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 155 op/s
Dec 13 03:12:04 np0005558241 systemd[1]: Started libpod-conmon-e3429b449e48b114c891704d1fa140bde81be8d5c1479953996a0ff5fb036091.scope.
Dec 13 03:12:04 np0005558241 podman[260760]: 2025-12-13 08:12:04.568893244 +0000 UTC m=+0.032089400 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:12:04 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:12:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e75960375b674e1ee1a2da12a7f62a4aa4be802a15d7d93229245bbce7293ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:12:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e75960375b674e1ee1a2da12a7f62a4aa4be802a15d7d93229245bbce7293ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:12:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e75960375b674e1ee1a2da12a7f62a4aa4be802a15d7d93229245bbce7293ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:12:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e75960375b674e1ee1a2da12a7f62a4aa4be802a15d7d93229245bbce7293ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:12:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:04.759 158419 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec 13 03:12:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:04.761 158419 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpah3zi4lm/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec 13 03:12:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:04.590 260774 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec 13 03:12:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:04.594 260774 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec 13 03:12:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:04.596 260774 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Dec 13 03:12:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:04.596 260774 INFO oslo.privsep.daemon [-] privsep daemon running as pid 260774#033[00m
Dec 13 03:12:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:04.764 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3477436d-b56a-4b12-8f08-63ef60e16ecd]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:04 np0005558241 podman[260760]: 2025-12-13 08:12:04.775730676 +0000 UTC m=+0.238926742 container init e3429b449e48b114c891704d1fa140bde81be8d5c1479953996a0ff5fb036091 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:12:04 np0005558241 podman[260760]: 2025-12-13 08:12:04.784109892 +0000 UTC m=+0.247305958 container start e3429b449e48b114c891704d1fa140bde81be8d5c1479953996a0ff5fb036091 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 03:12:04 np0005558241 podman[260760]: 2025-12-13 08:12:04.818484948 +0000 UTC m=+0.281681034 container attach e3429b449e48b114c891704d1fa140bde81be8d5c1479953996a0ff5fb036091 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.931 248514 DEBUG nova.compute.manager [req-737d6101-1a5b-4152-a7af-420bf1c6c126 req-9e38a4ae-21a0-4e0e-97d8-8914bf94eafb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Received event network-vif-plugged-d38d76da-797b-43d4-9453-50dfde011f0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.932 248514 DEBUG oslo_concurrency.lockutils [req-737d6101-1a5b-4152-a7af-420bf1c6c126 req-9e38a4ae-21a0-4e0e-97d8-8914bf94eafb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.932 248514 DEBUG oslo_concurrency.lockutils [req-737d6101-1a5b-4152-a7af-420bf1c6c126 req-9e38a4ae-21a0-4e0e-97d8-8914bf94eafb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.932 248514 DEBUG oslo_concurrency.lockutils [req-737d6101-1a5b-4152-a7af-420bf1c6c126 req-9e38a4ae-21a0-4e0e-97d8-8914bf94eafb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.933 248514 DEBUG nova.compute.manager [req-737d6101-1a5b-4152-a7af-420bf1c6c126 req-9e38a4ae-21a0-4e0e-97d8-8914bf94eafb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Processing event network-vif-plugged-d38d76da-797b-43d4-9453-50dfde011f0d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.933 248514 DEBUG nova.compute.manager [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.938 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613524.9370701, a1cbc612-63fc-4cbf-ad1e-007a93a03286 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.938 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.941 248514 DEBUG nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.946 248514 INFO nova.virt.libvirt.driver [-] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Instance spawned successfully.#033[00m
Dec 13 03:12:04 np0005558241 nova_compute[248510]: 2025-12-13 08:12:04.947 248514 DEBUG nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:12:05 np0005558241 nova_compute[248510]: 2025-12-13 08:12:05.214 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:12:05 np0005558241 nova_compute[248510]: 2025-12-13 08:12:05.219 248514 DEBUG nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:12:05 np0005558241 nova_compute[248510]: 2025-12-13 08:12:05.220 248514 DEBUG nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:12:05 np0005558241 nova_compute[248510]: 2025-12-13 08:12:05.220 248514 DEBUG nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:12:05 np0005558241 nova_compute[248510]: 2025-12-13 08:12:05.220 248514 DEBUG nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:12:05 np0005558241 nova_compute[248510]: 2025-12-13 08:12:05.221 248514 DEBUG nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:12:05 np0005558241 nova_compute[248510]: 2025-12-13 08:12:05.221 248514 DEBUG nova.virt.libvirt.driver [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:12:05 np0005558241 nova_compute[248510]: 2025-12-13 08:12:05.225 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:12:05 np0005558241 nova_compute[248510]: 2025-12-13 08:12:05.269 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:12:05 np0005558241 nova_compute[248510]: 2025-12-13 08:12:05.304 248514 INFO nova.compute.manager [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Took 11.56 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:12:05 np0005558241 nova_compute[248510]: 2025-12-13 08:12:05.305 248514 DEBUG nova.compute.manager [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:12:05 np0005558241 nova_compute[248510]: 2025-12-13 08:12:05.374 248514 INFO nova.compute.manager [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Took 13.88 seconds to build instance.#033[00m
Dec 13 03:12:05 np0005558241 nova_compute[248510]: 2025-12-13 08:12:05.397 248514 DEBUG oslo_concurrency.lockutils [None req-2483a4ec-d695-41b8-a57f-988597ae5c57 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:05 np0005558241 lvm[260860]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:12:05 np0005558241 lvm[260862]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:12:05 np0005558241 lvm[260860]: VG ceph_vg0 finished
Dec 13 03:12:05 np0005558241 lvm[260862]: VG ceph_vg1 finished
Dec 13 03:12:05 np0005558241 lvm[260864]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:12:05 np0005558241 lvm[260864]: VG ceph_vg2 finished
Dec 13 03:12:05 np0005558241 focused_ptolemy[260779]: {}
Dec 13 03:12:05 np0005558241 systemd[1]: libpod-e3429b449e48b114c891704d1fa140bde81be8d5c1479953996a0ff5fb036091.scope: Deactivated successfully.
Dec 13 03:12:05 np0005558241 systemd[1]: libpod-e3429b449e48b114c891704d1fa140bde81be8d5c1479953996a0ff5fb036091.scope: Consumed 1.592s CPU time.
Dec 13 03:12:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:05.821 260774 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:05.822 260774 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:05.822 260774 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:05 np0005558241 podman[260867]: 2025-12-13 08:12:05.836944589 +0000 UTC m=+0.034210253 container died e3429b449e48b114c891704d1fa140bde81be8d5c1479953996a0ff5fb036091 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ptolemy, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:12:05 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9e75960375b674e1ee1a2da12a7f62a4aa4be802a15d7d93229245bbce7293ec-merged.mount: Deactivated successfully.
Dec 13 03:12:05 np0005558241 podman[260867]: 2025-12-13 08:12:05.900478123 +0000 UTC m=+0.097743777 container remove e3429b449e48b114c891704d1fa140bde81be8d5c1479953996a0ff5fb036091 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ptolemy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:12:05 np0005558241 systemd[1]: libpod-conmon-e3429b449e48b114c891704d1fa140bde81be8d5c1479953996a0ff5fb036091.scope: Deactivated successfully.
Dec 13 03:12:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:12:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:12:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:12:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:12:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1366: 321 pgs: 321 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 151 op/s
Dec 13 03:12:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:06.686 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a1fbab54-7f32-4621-8dfe-f0b7058feee1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:06.688 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap033b8122-b1 in ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:12:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:06.691 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap033b8122-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:12:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:06.691 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[75793ba5-5137-4883-be4c-7fb5e7821873]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:06.697 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ed6faf9e-6551-487c-84f1-612d7dd0a550]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:06.730 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e6eaaaa0-ae71-40ff-8d74-600ae7b60755]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:06.753 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e511a3fc-b853-4026-803e-2ab5072116fb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:06.756 158419 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpsqjvc9ow/privsep.sock']#033[00m
Dec 13 03:12:06 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:12:06 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:12:07 np0005558241 nova_compute[248510]: 2025-12-13 08:12:07.338 248514 DEBUG nova.compute.manager [req-ede1b442-a3cf-4f18-bf3a-6e89b6828076 req-3e0d3450-e6c2-48ae-9440-0952ca51c07b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Received event network-vif-plugged-d38d76da-797b-43d4-9453-50dfde011f0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:12:07 np0005558241 nova_compute[248510]: 2025-12-13 08:12:07.338 248514 DEBUG oslo_concurrency.lockutils [req-ede1b442-a3cf-4f18-bf3a-6e89b6828076 req-3e0d3450-e6c2-48ae-9440-0952ca51c07b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:07 np0005558241 nova_compute[248510]: 2025-12-13 08:12:07.338 248514 DEBUG oslo_concurrency.lockutils [req-ede1b442-a3cf-4f18-bf3a-6e89b6828076 req-3e0d3450-e6c2-48ae-9440-0952ca51c07b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:07 np0005558241 nova_compute[248510]: 2025-12-13 08:12:07.339 248514 DEBUG oslo_concurrency.lockutils [req-ede1b442-a3cf-4f18-bf3a-6e89b6828076 req-3e0d3450-e6c2-48ae-9440-0952ca51c07b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:07 np0005558241 nova_compute[248510]: 2025-12-13 08:12:07.339 248514 DEBUG nova.compute.manager [req-ede1b442-a3cf-4f18-bf3a-6e89b6828076 req-3e0d3450-e6c2-48ae-9440-0952ca51c07b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] No waiting events found dispatching network-vif-plugged-d38d76da-797b-43d4-9453-50dfde011f0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:12:07 np0005558241 nova_compute[248510]: 2025-12-13 08:12:07.339 248514 WARNING nova.compute.manager [req-ede1b442-a3cf-4f18-bf3a-6e89b6828076 req-3e0d3450-e6c2-48ae-9440-0952ca51c07b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Received unexpected event network-vif-plugged-d38d76da-797b-43d4-9453-50dfde011f0d for instance with vm_state active and task_state None.#033[00m
Dec 13 03:12:07 np0005558241 nova_compute[248510]: 2025-12-13 08:12:07.433 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:07.458 158419 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec 13 03:12:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:07.459 158419 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpsqjvc9ow/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec 13 03:12:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:07.309 260916 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec 13 03:12:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:07.313 260916 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec 13 03:12:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:07.315 260916 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec 13 03:12:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:07.315 260916 INFO oslo.privsep.daemon [-] privsep daemon running as pid 260916#033[00m
Dec 13 03:12:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:07.463 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[aaad46f0-81c2-4b8a-8712-80b22b4b1187]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:08.032 260916 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:08.032 260916 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:08.032 260916 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:08 np0005558241 nova_compute[248510]: 2025-12-13 08:12:08.043 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:12:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 88 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.1 MiB/s wr, 190 op/s
Dec 13 03:12:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:08.729 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[75875457-951b-4446-b766-12c4b86662a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:08 np0005558241 NetworkManager[50376]: <info>  [1765613528.7519] manager: (tap033b8122-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Dec 13 03:12:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:08.749 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8045f64e-3fef-40f7-9674-a56446f97f7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:08.786 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0310c63c-c2dd-4ddd-9968-41af47a61eb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:08 np0005558241 systemd-udevd[260928]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:12:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:08.791 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d5cb58c8-09c5-49a3-82f9-47c68b2caaef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:08 np0005558241 NetworkManager[50376]: <info>  [1765613528.8200] device (tap033b8122-b0): carrier: link connected
Dec 13 03:12:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:08.829 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[355abff1-f9a3-412b-8d19-e438a05ba5f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:08.852 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d4f900-fc00-41b5-baca-c9f6810c329e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap033b8122-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:ef:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610603, 'reachable_time': 24415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260946, 'error': None, 'target': 'ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:08.874 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[189ef5d9-8263-4bf8-8e08-c176d5658fb0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:ef3a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 610603, 'tstamp': 610603}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260947, 'error': None, 'target': 'ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:08.895 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5623eb6c-6222-42e0-a0cd-f9b2329d6b98]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap033b8122-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:ef:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610603, 'reachable_time': 24415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260948, 'error': None, 'target': 'ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:08.934 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[29811aef-a34a-46b7-9924-1b0bcbc03760]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:09.015 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ee47214e-aee8-4cd3-8fd5-fcc4534757b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:09.018 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap033b8122-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:09.018 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:09.019 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap033b8122-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:12:09 np0005558241 NetworkManager[50376]: <info>  [1765613529.0230] manager: (tap033b8122-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Dec 13 03:12:09 np0005558241 kernel: tap033b8122-b0: entered promiscuous mode
Dec 13 03:12:09 np0005558241 nova_compute[248510]: 2025-12-13 08:12:09.024 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:09.029 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap033b8122-b0, col_values=(('external_ids', {'iface-id': 'cc19a2a4-5a8b-432b-b3a5-c69f8b8fdfb7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:12:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:12:09Z|00031|binding|INFO|Releasing lport cc19a2a4-5a8b-432b-b3a5-c69f8b8fdfb7 from this chassis (sb_readonly=0)
Dec 13 03:12:09 np0005558241 nova_compute[248510]: 2025-12-13 08:12:09.031 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:09 np0005558241 nova_compute[248510]: 2025-12-13 08:12:09.032 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:09.033 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/033b8122-ba6c-4217-b5b0-bd54c6f0aec0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/033b8122-ba6c-4217-b5b0-bd54c6f0aec0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:09.034 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c38c277c-8dbd-4b82-82aa-b52df2eb73d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:09.036 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-033b8122-ba6c-4217-b5b0-bd54c6f0aec0
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/033b8122-ba6c-4217-b5b0-bd54c6f0aec0.pid.haproxy
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 033b8122-ba6c-4217-b5b0-bd54c6f0aec0
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:12:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:09.037 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0', 'env', 'PROCESS_TAG=haproxy-033b8122-ba6c-4217-b5b0-bd54c6f0aec0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/033b8122-ba6c-4217-b5b0-bd54c6f0aec0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:12:09 np0005558241 nova_compute[248510]: 2025-12-13 08:12:09.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:12:09
Dec 13 03:12:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:12:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:12:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'images', 'backups']
Dec 13 03:12:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:12:09 np0005558241 podman[260982]: 2025-12-13 08:12:09.428185521 +0000 UTC m=+0.061272050 container create 4feee1bdc2fb0d1d84c6e7fc4a3fd1aebdb5ef46523b592c969c20bf813ee9e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 03:12:09 np0005558241 systemd[1]: Started libpod-conmon-4feee1bdc2fb0d1d84c6e7fc4a3fd1aebdb5ef46523b592c969c20bf813ee9e0.scope.
Dec 13 03:12:09 np0005558241 podman[260982]: 2025-12-13 08:12:09.395747382 +0000 UTC m=+0.028833931 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:12:09 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:12:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29d95e16cea6e456beb4ab6fb6c1241a1e4a00ff92037dcf5991d0d96daafe9b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:12:09 np0005558241 podman[260982]: 2025-12-13 08:12:09.520744939 +0000 UTC m=+0.153831488 container init 4feee1bdc2fb0d1d84c6e7fc4a3fd1aebdb5ef46523b592c969c20bf813ee9e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:12:09 np0005558241 podman[260982]: 2025-12-13 08:12:09.529485274 +0000 UTC m=+0.162571803 container start 4feee1bdc2fb0d1d84c6e7fc4a3fd1aebdb5ef46523b592c969c20bf813ee9e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 03:12:09 np0005558241 neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0[260997]: [NOTICE]   (261001) : New worker (261003) forked
Dec 13 03:12:09 np0005558241 neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0[260997]: [NOTICE]   (261001) : Loading success.
Dec 13 03:12:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:12:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:12:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:12:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:12:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:12:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:12:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:12:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:12:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:12:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:12:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:12:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:12:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:12:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:12:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:12:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:12:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 88 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 597 KiB/s wr, 192 op/s
Dec 13 03:12:10 np0005558241 nova_compute[248510]: 2025-12-13 08:12:10.967 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:10 np0005558241 NetworkManager[50376]: <info>  [1765613530.9687] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Dec 13 03:12:10 np0005558241 NetworkManager[50376]: <info>  [1765613530.9697] device (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 03:12:10 np0005558241 NetworkManager[50376]: <warn>  [1765613530.9702] device (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 03:12:10 np0005558241 NetworkManager[50376]: <info>  [1765613530.9722] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Dec 13 03:12:10 np0005558241 NetworkManager[50376]: <info>  [1765613530.9731] device (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 03:12:10 np0005558241 NetworkManager[50376]: <warn>  [1765613530.9732] device (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 03:12:10 np0005558241 NetworkManager[50376]: <info>  [1765613530.9751] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Dec 13 03:12:10 np0005558241 NetworkManager[50376]: <info>  [1765613530.9765] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Dec 13 03:12:10 np0005558241 NetworkManager[50376]: <info>  [1765613530.9775] device (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 13 03:12:10 np0005558241 NetworkManager[50376]: <info>  [1765613530.9784] device (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 13 03:12:11 np0005558241 nova_compute[248510]: 2025-12-13 08:12:11.043 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:11 np0005558241 ovn_controller[148476]: 2025-12-13T08:12:11Z|00032|binding|INFO|Releasing lport cc19a2a4-5a8b-432b-b3a5-c69f8b8fdfb7 from this chassis (sb_readonly=0)
Dec 13 03:12:11 np0005558241 nova_compute[248510]: 2025-12-13 08:12:11.057 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:12:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2401.8 total, 600.0 interval#012Cumulative writes: 8019 writes, 31K keys, 8019 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8019 writes, 1738 syncs, 4.61 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 848 writes, 2379 keys, 848 commit groups, 1.0 writes per commit group, ingest: 1.33 MB, 0.00 MB/s#012Interval WAL: 848 writes, 371 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 03:12:12 np0005558241 nova_compute[248510]: 2025-12-13 08:12:12.436 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:12 np0005558241 nova_compute[248510]: 2025-12-13 08:12:12.453 248514 DEBUG nova.compute.manager [req-c51d260f-ac25-4b27-943d-bab2a1ffe552 req-05acb2c7-6bac-4d00-9f07-543a95a26de2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Received event network-changed-d38d76da-797b-43d4-9453-50dfde011f0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:12:12 np0005558241 nova_compute[248510]: 2025-12-13 08:12:12.454 248514 DEBUG nova.compute.manager [req-c51d260f-ac25-4b27-943d-bab2a1ffe552 req-05acb2c7-6bac-4d00-9f07-543a95a26de2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Refreshing instance network info cache due to event network-changed-d38d76da-797b-43d4-9453-50dfde011f0d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:12:12 np0005558241 nova_compute[248510]: 2025-12-13 08:12:12.454 248514 DEBUG oslo_concurrency.lockutils [req-c51d260f-ac25-4b27-943d-bab2a1ffe552 req-05acb2c7-6bac-4d00-9f07-543a95a26de2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-a1cbc612-63fc-4cbf-ad1e-007a93a03286" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:12:12 np0005558241 nova_compute[248510]: 2025-12-13 08:12:12.454 248514 DEBUG oslo_concurrency.lockutils [req-c51d260f-ac25-4b27-943d-bab2a1ffe552 req-05acb2c7-6bac-4d00-9f07-543a95a26de2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-a1cbc612-63fc-4cbf-ad1e-007a93a03286" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:12:12 np0005558241 nova_compute[248510]: 2025-12-13 08:12:12.454 248514 DEBUG nova.network.neutron [req-c51d260f-ac25-4b27-943d-bab2a1ffe552 req-05acb2c7-6bac-4d00-9f07-543a95a26de2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Refreshing network info cache for port d38d76da-797b-43d4-9453-50dfde011f0d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:12:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1369: 321 pgs: 321 active+clean; 88 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 166 op/s
Dec 13 03:12:13 np0005558241 nova_compute[248510]: 2025-12-13 08:12:13.084 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:12:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 88 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 166 op/s
Dec 13 03:12:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:12:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/310909671' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:12:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:12:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/310909671' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:12:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:16.214 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:12:16 np0005558241 nova_compute[248510]: 2025-12-13 08:12:16.215 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:16.217 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:12:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1371: 321 pgs: 321 active+clean; 88 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 72 op/s
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:12:17.077947) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613537078050, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1148, "num_deletes": 251, "total_data_size": 1691240, "memory_usage": 1713136, "flush_reason": "Manual Compaction"}
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613537095294, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1652805, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24774, "largest_seqno": 25921, "table_properties": {"data_size": 1647240, "index_size": 2960, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12186, "raw_average_key_size": 20, "raw_value_size": 1635897, "raw_average_value_size": 2690, "num_data_blocks": 132, "num_entries": 608, "num_filter_entries": 608, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765613439, "oldest_key_time": 1765613439, "file_creation_time": 1765613537, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 17392 microseconds, and 6451 cpu microseconds.
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:12:17.095344) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1652805 bytes OK
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:12:17.095368) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:12:17.097562) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:12:17.097580) EVENT_LOG_v1 {"time_micros": 1765613537097575, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:12:17.097602) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1685917, prev total WAL file size 1685917, number of live WAL files 2.
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:12:17.098456) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1614KB)], [56(8159KB)]
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613537098530, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 10008170, "oldest_snapshot_seqno": -1}
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4882 keys, 8147096 bytes, temperature: kUnknown
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613537170575, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 8147096, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8113933, "index_size": 19869, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 122747, "raw_average_key_size": 25, "raw_value_size": 8025022, "raw_average_value_size": 1643, "num_data_blocks": 818, "num_entries": 4882, "num_filter_entries": 4882, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765613537, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:12:17.171224) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 8147096 bytes
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:12:17.176424) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.2 rd, 112.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.0 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(11.0) write-amplify(4.9) OK, records in: 5400, records dropped: 518 output_compression: NoCompression
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:12:17.176441) EVENT_LOG_v1 {"time_micros": 1765613537176432, "job": 30, "event": "compaction_finished", "compaction_time_micros": 72435, "compaction_time_cpu_micros": 24999, "output_level": 6, "num_output_files": 1, "total_output_size": 8147096, "num_input_records": 5400, "num_output_records": 4882, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613537176969, "job": 30, "event": "table_file_deletion", "file_number": 58}
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613537178585, "job": 30, "event": "table_file_deletion", "file_number": 56}
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:12:17.098331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:12:17.178709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:12:17.178721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:12:17.178725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:12:17.178728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:12:17.178732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:12:17 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 13 03:12:17 np0005558241 nova_compute[248510]: 2025-12-13 08:12:17.243 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765613522.2419224, 0cb5f1aa-9103-4667-ac26-f23f9e43ea86 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:12:17 np0005558241 nova_compute[248510]: 2025-12-13 08:12:17.244 248514 INFO nova.compute.manager [-] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:12:17 np0005558241 nova_compute[248510]: 2025-12-13 08:12:17.283 248514 DEBUG nova.compute.manager [None req-b8c491b6-73cf-41b5-a04c-df5f712b39c4 - - - - - -] [instance: 0cb5f1aa-9103-4667-ac26-f23f9e43ea86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:12:17 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 13 03:12:17 np0005558241 nova_compute[248510]: 2025-12-13 08:12:17.438 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:17 np0005558241 nova_compute[248510]: 2025-12-13 08:12:17.645 248514 DEBUG nova.network.neutron [req-c51d260f-ac25-4b27-943d-bab2a1ffe552 req-05acb2c7-6bac-4d00-9f07-543a95a26de2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Updated VIF entry in instance network info cache for port d38d76da-797b-43d4-9453-50dfde011f0d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:12:17 np0005558241 nova_compute[248510]: 2025-12-13 08:12:17.646 248514 DEBUG nova.network.neutron [req-c51d260f-ac25-4b27-943d-bab2a1ffe552 req-05acb2c7-6bac-4d00-9f07-543a95a26de2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Updating instance_info_cache with network_info: [{"id": "d38d76da-797b-43d4-9453-50dfde011f0d", "address": "fa:16:3e:4d:a1:fb", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38d76da-79", "ovs_interfaceid": "d38d76da-797b-43d4-9453-50dfde011f0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:12:17 np0005558241 nova_compute[248510]: 2025-12-13 08:12:17.676 248514 DEBUG oslo_concurrency.lockutils [req-c51d260f-ac25-4b27-943d-bab2a1ffe552 req-05acb2c7-6bac-4d00-9f07-543a95a26de2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-a1cbc612-63fc-4cbf-ad1e-007a93a03286" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:12:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:12:17Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:a1:fb 10.100.0.13
Dec 13 03:12:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:12:17Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:a1:fb 10.100.0.13
Dec 13 03:12:18 np0005558241 nova_compute[248510]: 2025-12-13 08:12:18.086 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:18 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 13 03:12:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:12:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1372: 321 pgs: 321 active+clean; 107 MiB data, 204 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 93 op/s
Dec 13 03:12:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:12:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.2 total, 600.0 interval#012Cumulative writes: 6958 writes, 28K keys, 6958 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6958 writes, 1385 syncs, 5.02 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1023 writes, 3716 keys, 1023 commit groups, 1.0 writes per commit group, ingest: 3.66 MB, 0.01 MB/s#012Interval WAL: 1023 writes, 399 syncs, 2.56 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 114 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007556280587080782 of space, bias 1.0, pg target 0.22668841761242345 quantized to 32 (current 32)
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665309409948658 of space, bias 1.0, pg target 0.19995928229845975 quantized to 32 (current 32)
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0142997348728516e-06 of space, bias 4.0, pg target 0.001217159681847422 quantized to 16 (current 32)
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:12:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:12:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:22.220 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:12:22 np0005558241 nova_compute[248510]: 2025-12-13 08:12:22.441 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1374: 321 pgs: 321 active+clean; 114 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 13 03:12:23 np0005558241 nova_compute[248510]: 2025-12-13 08:12:23.089 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:23 np0005558241 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 03:12:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:12:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1375: 321 pgs: 321 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 03:12:24 np0005558241 nova_compute[248510]: 2025-12-13 08:12:24.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:12:25 np0005558241 nova_compute[248510]: 2025-12-13 08:12:25.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:12:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1376: 321 pgs: 321 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 03:12:27 np0005558241 podman[261017]: 2025-12-13 08:12:27.017221299 +0000 UTC m=+0.084568313 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:12:27 np0005558241 podman[261016]: 2025-12-13 08:12:27.026004745 +0000 UTC m=+0.101767036 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 03:12:27 np0005558241 podman[261015]: 2025-12-13 08:12:27.026415755 +0000 UTC m=+0.104584335 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 03:12:27 np0005558241 nova_compute[248510]: 2025-12-13 08:12:27.443 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:28 np0005558241 nova_compute[248510]: 2025-12-13 08:12:28.092 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:12:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 03:12:28 np0005558241 nova_compute[248510]: 2025-12-13 08:12:28.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:12:28 np0005558241 nova_compute[248510]: 2025-12-13 08:12:28.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:12:28 np0005558241 nova_compute[248510]: 2025-12-13 08:12:28.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:12:29 np0005558241 nova_compute[248510]: 2025-12-13 08:12:29.988 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-a1cbc612-63fc-4cbf-ad1e-007a93a03286" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:12:29 np0005558241 nova_compute[248510]: 2025-12-13 08:12:29.989 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-a1cbc612-63fc-4cbf-ad1e-007a93a03286" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:12:29 np0005558241 nova_compute[248510]: 2025-12-13 08:12:29.989 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:12:29 np0005558241 nova_compute[248510]: 2025-12-13 08:12:29.989 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid a1cbc612-63fc-4cbf-ad1e-007a93a03286 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:12:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 1.0 MiB/s wr, 43 op/s
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.104 248514 DEBUG oslo_concurrency.lockutils [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquiring lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.105 248514 DEBUG oslo_concurrency.lockutils [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.105 248514 DEBUG oslo_concurrency.lockutils [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquiring lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.106 248514 DEBUG oslo_concurrency.lockutils [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.106 248514 DEBUG oslo_concurrency.lockutils [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.108 248514 INFO nova.compute.manager [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Terminating instance#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.110 248514 DEBUG nova.compute.manager [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:12:31 np0005558241 kernel: tapd38d76da-79 (unregistering): left promiscuous mode
Dec 13 03:12:31 np0005558241 NetworkManager[50376]: <info>  [1765613551.1640] device (tapd38d76da-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:12:31 np0005558241 ovn_controller[148476]: 2025-12-13T08:12:31Z|00033|binding|INFO|Releasing lport d38d76da-797b-43d4-9453-50dfde011f0d from this chassis (sb_readonly=0)
Dec 13 03:12:31 np0005558241 ovn_controller[148476]: 2025-12-13T08:12:31Z|00034|binding|INFO|Setting lport d38d76da-797b-43d4-9453-50dfde011f0d down in Southbound
Dec 13 03:12:31 np0005558241 ovn_controller[148476]: 2025-12-13T08:12:31Z|00035|binding|INFO|Removing iface tapd38d76da-79 ovn-installed in OVS
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.181 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:31.191 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:a1:fb 10.100.0.13'], port_security=['fa:16:3e:4d:a1:fb 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a1cbc612-63fc-4cbf-ad1e-007a93a03286', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-033b8122-ba6c-4217-b5b0-bd54c6f0aec0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '474fd43853634a5d8b3026b622723873', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd0e531d-b7ab-4913-abf7-a093cbaf1988', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.235'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d464c07e-e23a-4447-a355-481e56e0a828, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d38d76da-797b-43d4-9453-50dfde011f0d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:12:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:31.193 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d38d76da-797b-43d4-9453-50dfde011f0d in datapath 033b8122-ba6c-4217-b5b0-bd54c6f0aec0 unbound from our chassis#033[00m
Dec 13 03:12:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:31.195 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 033b8122-ba6c-4217-b5b0-bd54c6f0aec0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:12:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:31.196 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ee20d1ed-9861-4197-8fff-70a96179f472]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:31.197 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0 namespace which is not needed anymore#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.208 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:31 np0005558241 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000003.scope: Deactivated successfully.
Dec 13 03:12:31 np0005558241 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000003.scope: Consumed 14.149s CPU time.
Dec 13 03:12:31 np0005558241 systemd-machined[210538]: Machine qemu-4-instance-00000003 terminated.
Dec 13 03:12:31 np0005558241 neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0[260997]: [NOTICE]   (261001) : haproxy version is 2.8.14-c23fe91
Dec 13 03:12:31 np0005558241 neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0[260997]: [NOTICE]   (261001) : path to executable is /usr/sbin/haproxy
Dec 13 03:12:31 np0005558241 neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0[260997]: [WARNING]  (261001) : Exiting Master process...
Dec 13 03:12:31 np0005558241 neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0[260997]: [ALERT]    (261001) : Current worker (261003) exited with code 143 (Terminated)
Dec 13 03:12:31 np0005558241 neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0[260997]: [WARNING]  (261001) : All workers exited. Exiting... (0)
Dec 13 03:12:31 np0005558241 systemd[1]: libpod-4feee1bdc2fb0d1d84c6e7fc4a3fd1aebdb5ef46523b592c969c20bf813ee9e0.scope: Deactivated successfully.
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.347 248514 INFO nova.virt.libvirt.driver [-] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Instance destroyed successfully.#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.348 248514 DEBUG nova.objects.instance [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lazy-loading 'resources' on Instance uuid a1cbc612-63fc-4cbf-ad1e-007a93a03286 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:12:31 np0005558241 podman[261104]: 2025-12-13 08:12:31.352731482 +0000 UTC m=+0.050731280 container died 4feee1bdc2fb0d1d84c6e7fc4a3fd1aebdb5ef46523b592c969c20bf813ee9e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.379 248514 DEBUG nova.virt.libvirt.vif [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:11:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1074105427',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1074105427',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(8),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1074105427',id=3,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=8,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ4VTdpGZWW2edkCqJ93Ae0N7W5Wl1KxtxPXJB+IDKzYse7HySnAxnSN+D34LgL+vn16+92mSLvTyRP3HupOhW4VHoScrkUgJj4Ke9zalwfgWa6DU4Za2b0K2T2xL2drqg==',key_name='tempest-keypair-1114404274',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:12:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='474fd43853634a5d8b3026b622723873',ramdisk_id='',reservation_id='r-jmhsm7jq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-306379411',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-306379411-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:12:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='df94630f47824853b3347bca91426ecd',uuid=a1cbc612-63fc-4cbf-ad1e-007a93a03286,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d38d76da-797b-43d4-9453-50dfde011f0d", "address": "fa:16:3e:4d:a1:fb", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38d76da-79", "ovs_interfaceid": "d38d76da-797b-43d4-9453-50dfde011f0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.381 248514 DEBUG nova.network.os_vif_util [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Converting VIF {"id": "d38d76da-797b-43d4-9453-50dfde011f0d", "address": "fa:16:3e:4d:a1:fb", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38d76da-79", "ovs_interfaceid": "d38d76da-797b-43d4-9453-50dfde011f0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.382 248514 DEBUG nova.network.os_vif_util [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:a1:fb,bridge_name='br-int',has_traffic_filtering=True,id=d38d76da-797b-43d4-9453-50dfde011f0d,network=Network(033b8122-ba6c-4217-b5b0-bd54c6f0aec0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38d76da-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.382 248514 DEBUG os_vif [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:a1:fb,bridge_name='br-int',has_traffic_filtering=True,id=d38d76da-797b-43d4-9453-50dfde011f0d,network=Network(033b8122-ba6c-4217-b5b0-bd54c6f0aec0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38d76da-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.384 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.384 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd38d76da-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:12:31 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4feee1bdc2fb0d1d84c6e7fc4a3fd1aebdb5ef46523b592c969c20bf813ee9e0-userdata-shm.mount: Deactivated successfully.
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.389 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:31 np0005558241 systemd[1]: var-lib-containers-storage-overlay-29d95e16cea6e456beb4ab6fb6c1241a1e4a00ff92037dcf5991d0d96daafe9b-merged.mount: Deactivated successfully.
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.390 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.393 248514 INFO os_vif [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:a1:fb,bridge_name='br-int',has_traffic_filtering=True,id=d38d76da-797b-43d4-9453-50dfde011f0d,network=Network(033b8122-ba6c-4217-b5b0-bd54c6f0aec0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd38d76da-79')#033[00m
Dec 13 03:12:31 np0005558241 podman[261104]: 2025-12-13 08:12:31.400866066 +0000 UTC m=+0.098865844 container cleanup 4feee1bdc2fb0d1d84c6e7fc4a3fd1aebdb5ef46523b592c969c20bf813ee9e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:12:31 np0005558241 systemd[1]: libpod-conmon-4feee1bdc2fb0d1d84c6e7fc4a3fd1aebdb5ef46523b592c969c20bf813ee9e0.scope: Deactivated successfully.
Dec 13 03:12:31 np0005558241 podman[261157]: 2025-12-13 08:12:31.464531454 +0000 UTC m=+0.040734434 container remove 4feee1bdc2fb0d1d84c6e7fc4a3fd1aebdb5ef46523b592c969c20bf813ee9e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:12:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:31.470 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[51b7bcb4-9525-49ea-aca8-75d256481b8f]: (4, ('Sat Dec 13 08:12:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0 (4feee1bdc2fb0d1d84c6e7fc4a3fd1aebdb5ef46523b592c969c20bf813ee9e0)\n4feee1bdc2fb0d1d84c6e7fc4a3fd1aebdb5ef46523b592c969c20bf813ee9e0\nSat Dec 13 08:12:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0 (4feee1bdc2fb0d1d84c6e7fc4a3fd1aebdb5ef46523b592c969c20bf813ee9e0)\n4feee1bdc2fb0d1d84c6e7fc4a3fd1aebdb5ef46523b592c969c20bf813ee9e0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:31.473 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f318733-0049-42b2-83c6-efd371d3cade]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:31.474 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap033b8122-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.477 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:31 np0005558241 kernel: tap033b8122-b0: left promiscuous mode
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.490 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:31.493 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[52bef566-afe8-4794-812e-4f279af7c969]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:31.508 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d62649-3e4c-420b-b548-92d595affe3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:31.510 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e3df8d1f-f1ed-4acb-a648-518b5e461dee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:31.528 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f359525-5b94-4fc5-a419-df9c68673275]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610593, 'reachable_time': 16991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261176, 'error': None, 'target': 'ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:31 np0005558241 systemd[1]: run-netns-ovnmeta\x2d033b8122\x2dba6c\x2d4217\x2db5b0\x2dbd54c6f0aec0.mount: Deactivated successfully.
Dec 13 03:12:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:31.537 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:12:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:31.538 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[f16e0360-8076-4632-a11a-012f4be90a6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.657 248514 INFO nova.virt.libvirt.driver [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Deleting instance files /var/lib/nova/instances/a1cbc612-63fc-4cbf-ad1e-007a93a03286_del#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.658 248514 INFO nova.virt.libvirt.driver [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Deletion of /var/lib/nova/instances/a1cbc612-63fc-4cbf-ad1e-007a93a03286_del complete#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.714 248514 DEBUG nova.compute.manager [req-bd14c803-e585-46be-8401-a4327b1c0ee2 req-33cefc0a-1ec6-4d8a-94e4-d599dc48de38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Received event network-vif-unplugged-d38d76da-797b-43d4-9453-50dfde011f0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.714 248514 DEBUG oslo_concurrency.lockutils [req-bd14c803-e585-46be-8401-a4327b1c0ee2 req-33cefc0a-1ec6-4d8a-94e4-d599dc48de38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.715 248514 DEBUG oslo_concurrency.lockutils [req-bd14c803-e585-46be-8401-a4327b1c0ee2 req-33cefc0a-1ec6-4d8a-94e4-d599dc48de38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.715 248514 DEBUG oslo_concurrency.lockutils [req-bd14c803-e585-46be-8401-a4327b1c0ee2 req-33cefc0a-1ec6-4d8a-94e4-d599dc48de38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.716 248514 DEBUG nova.compute.manager [req-bd14c803-e585-46be-8401-a4327b1c0ee2 req-33cefc0a-1ec6-4d8a-94e4-d599dc48de38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] No waiting events found dispatching network-vif-unplugged-d38d76da-797b-43d4-9453-50dfde011f0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.716 248514 DEBUG nova.compute.manager [req-bd14c803-e585-46be-8401-a4327b1c0ee2 req-33cefc0a-1ec6-4d8a-94e4-d599dc48de38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Received event network-vif-unplugged-d38d76da-797b-43d4-9453-50dfde011f0d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.755 248514 INFO nova.compute.manager [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.756 248514 DEBUG oslo.service.loopingcall [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.756 248514 DEBUG nova.compute.manager [-] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:12:31 np0005558241 nova_compute[248510]: 2025-12-13 08:12:31.756 248514 DEBUG nova.network.neutron [-] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:12:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1379: 321 pgs: 321 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 30 KiB/s wr, 5 op/s
Dec 13 03:12:33 np0005558241 nova_compute[248510]: 2025-12-13 08:12:33.094 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.173 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Updating instance_info_cache with network_info: [{"id": "d38d76da-797b-43d4-9453-50dfde011f0d", "address": "fa:16:3e:4d:a1:fb", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd38d76da-79", "ovs_interfaceid": "d38d76da-797b-43d4-9453-50dfde011f0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.212 248514 DEBUG nova.compute.manager [req-ac99d773-4c32-4f3f-8dad-1a77554d9046 req-3c298ef6-b90a-4328-bf1c-cba4a33b261a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Received event network-vif-plugged-d38d76da-797b-43d4-9453-50dfde011f0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.212 248514 DEBUG oslo_concurrency.lockutils [req-ac99d773-4c32-4f3f-8dad-1a77554d9046 req-3c298ef6-b90a-4328-bf1c-cba4a33b261a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.213 248514 DEBUG oslo_concurrency.lockutils [req-ac99d773-4c32-4f3f-8dad-1a77554d9046 req-3c298ef6-b90a-4328-bf1c-cba4a33b261a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.213 248514 DEBUG oslo_concurrency.lockutils [req-ac99d773-4c32-4f3f-8dad-1a77554d9046 req-3c298ef6-b90a-4328-bf1c-cba4a33b261a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.213 248514 DEBUG nova.compute.manager [req-ac99d773-4c32-4f3f-8dad-1a77554d9046 req-3c298ef6-b90a-4328-bf1c-cba4a33b261a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] No waiting events found dispatching network-vif-plugged-d38d76da-797b-43d4-9453-50dfde011f0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.213 248514 WARNING nova.compute.manager [req-ac99d773-4c32-4f3f-8dad-1a77554d9046 req-3c298ef6-b90a-4328-bf1c-cba4a33b261a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Received unexpected event network-vif-plugged-d38d76da-797b-43d4-9453-50dfde011f0d for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.216 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-a1cbc612-63fc-4cbf-ad1e-007a93a03286" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.217 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.218 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.218 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.218 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.219 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.259 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.260 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.261 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.261 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.262 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.411 248514 DEBUG nova.network.neutron [-] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.439 248514 INFO nova.compute.manager [-] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Took 2.68 seconds to deallocate network for instance.#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.531 248514 DEBUG oslo_concurrency.lockutils [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.532 248514 DEBUG oslo_concurrency.lockutils [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1380: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 31 KiB/s wr, 33 op/s
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.659 248514 DEBUG oslo_concurrency.processutils [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:12:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:12:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2390867027' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.855 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:12:34 np0005558241 nova_compute[248510]: 2025-12-13 08:12:34.973 248514 DEBUG nova.compute.manager [req-fb94a2cc-4210-4123-b5c0-c11daec76947 req-7bebc682-aec0-4ec9-b50a-c492e54ddbb2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Received event network-vif-deleted-d38d76da-797b-43d4-9453-50dfde011f0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:12:35 np0005558241 nova_compute[248510]: 2025-12-13 08:12:35.060 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:12:35 np0005558241 nova_compute[248510]: 2025-12-13 08:12:35.062 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4762MB free_disk=59.94263580162078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:12:35 np0005558241 nova_compute[248510]: 2025-12-13 08:12:35.062 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:12:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2572797650' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:12:35 np0005558241 nova_compute[248510]: 2025-12-13 08:12:35.219 248514 DEBUG oslo_concurrency.processutils [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:12:35 np0005558241 nova_compute[248510]: 2025-12-13 08:12:35.227 248514 DEBUG nova.compute.provider_tree [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:12:35 np0005558241 nova_compute[248510]: 2025-12-13 08:12:35.247 248514 DEBUG nova.scheduler.client.report [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:12:35 np0005558241 nova_compute[248510]: 2025-12-13 08:12:35.281 248514 DEBUG oslo_concurrency.lockutils [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:35 np0005558241 nova_compute[248510]: 2025-12-13 08:12:35.286 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:35 np0005558241 nova_compute[248510]: 2025-12-13 08:12:35.367 248514 INFO nova.scheduler.client.report [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Deleted allocations for instance a1cbc612-63fc-4cbf-ad1e-007a93a03286#033[00m
Dec 13 03:12:35 np0005558241 nova_compute[248510]: 2025-12-13 08:12:35.401 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:12:35 np0005558241 nova_compute[248510]: 2025-12-13 08:12:35.402 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:12:35 np0005558241 nova_compute[248510]: 2025-12-13 08:12:35.433 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:12:35 np0005558241 nova_compute[248510]: 2025-12-13 08:12:35.465 248514 DEBUG oslo_concurrency.lockutils [None req-2c85c1d1-099e-4e9b-9700-893e509909a1 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "a1cbc612-63fc-4cbf-ad1e-007a93a03286" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:12:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/624527205' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:12:35 np0005558241 nova_compute[248510]: 2025-12-13 08:12:35.978 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:12:35 np0005558241 nova_compute[248510]: 2025-12-13 08:12:35.983 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:12:36 np0005558241 nova_compute[248510]: 2025-12-13 08:12:36.007 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:12:36 np0005558241 nova_compute[248510]: 2025-12-13 08:12:36.040 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:12:36 np0005558241 nova_compute[248510]: 2025-12-13 08:12:36.041 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:36 np0005558241 nova_compute[248510]: 2025-12-13 08:12:36.387 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:36 np0005558241 nova_compute[248510]: 2025-12-13 08:12:36.593 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:12:36 np0005558241 nova_compute[248510]: 2025-12-13 08:12:36.594 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:12:36 np0005558241 nova_compute[248510]: 2025-12-13 08:12:36.595 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:12:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1381: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Dec 13 03:12:38 np0005558241 nova_compute[248510]: 2025-12-13 08:12:38.095 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:12:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1382: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Dec 13 03:12:39 np0005558241 nova_compute[248510]: 2025-12-13 08:12:39.768 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquiring lock "57a4022e-e14b-4094-a66f-1af491c55f6d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:39 np0005558241 nova_compute[248510]: 2025-12-13 08:12:39.768 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "57a4022e-e14b-4094-a66f-1af491c55f6d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:39 np0005558241 nova_compute[248510]: 2025-12-13 08:12:39.812 248514 DEBUG nova.compute.manager [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:12:39 np0005558241 nova_compute[248510]: 2025-12-13 08:12:39.906 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:39 np0005558241 nova_compute[248510]: 2025-12-13 08:12:39.907 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:39 np0005558241 nova_compute[248510]: 2025-12-13 08:12:39.914 248514 DEBUG nova.virt.hardware [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:12:39 np0005558241 nova_compute[248510]: 2025-12-13 08:12:39.914 248514 INFO nova.compute.claims [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:12:40 np0005558241 nova_compute[248510]: 2025-12-13 08:12:40.071 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:12:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:12:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:12:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:12:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:12:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:12:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:12:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:12:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3390668210' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:12:40 np0005558241 nova_compute[248510]: 2025-12-13 08:12:40.644 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:12:40 np0005558241 nova_compute[248510]: 2025-12-13 08:12:40.650 248514 DEBUG nova.compute.provider_tree [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:12:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1383: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 28 op/s
Dec 13 03:12:40 np0005558241 nova_compute[248510]: 2025-12-13 08:12:40.673 248514 DEBUG nova.scheduler.client.report [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:12:40 np0005558241 nova_compute[248510]: 2025-12-13 08:12:40.700 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:40 np0005558241 nova_compute[248510]: 2025-12-13 08:12:40.701 248514 DEBUG nova.compute.manager [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:12:40 np0005558241 nova_compute[248510]: 2025-12-13 08:12:40.765 248514 DEBUG nova.compute.manager [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:12:40 np0005558241 nova_compute[248510]: 2025-12-13 08:12:40.766 248514 DEBUG nova.network.neutron [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:12:40 np0005558241 nova_compute[248510]: 2025-12-13 08:12:40.804 248514 INFO nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:12:40 np0005558241 nova_compute[248510]: 2025-12-13 08:12:40.855 248514 DEBUG nova.compute.manager [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.027 248514 DEBUG nova.compute.manager [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.030 248514 DEBUG nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.032 248514 INFO nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Creating image(s)#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.060 248514 DEBUG nova.storage.rbd_utils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] rbd image 57a4022e-e14b-4094-a66f-1af491c55f6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.084 248514 DEBUG nova.storage.rbd_utils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] rbd image 57a4022e-e14b-4094-a66f-1af491c55f6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.105 248514 DEBUG nova.storage.rbd_utils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] rbd image 57a4022e-e14b-4094-a66f-1af491c55f6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.109 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.166 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.167 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.167 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.168 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.187 248514 DEBUG nova.storage.rbd_utils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] rbd image 57a4022e-e14b-4094-a66f-1af491c55f6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.191 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 57a4022e-e14b-4094-a66f-1af491c55f6d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.326 248514 DEBUG nova.policy [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'df94630f47824853b3347bca91426ecd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '474fd43853634a5d8b3026b622723873', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.390 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.584 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 57a4022e-e14b-4094-a66f-1af491c55f6d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.674 248514 DEBUG nova.storage.rbd_utils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] resizing rbd image 57a4022e-e14b-4094-a66f-1af491c55f6d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.763 248514 DEBUG nova.objects.instance [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lazy-loading 'migration_context' on Instance uuid 57a4022e-e14b-4094-a66f-1af491c55f6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.809 248514 DEBUG nova.storage.rbd_utils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] rbd image 57a4022e-e14b-4094-a66f-1af491c55f6d_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.830 248514 DEBUG nova.storage.rbd_utils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] rbd image 57a4022e-e14b-4094-a66f-1af491c55f6d_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.834 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.835 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.835 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.876 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.877 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.920 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.921 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.950 248514 DEBUG nova.storage.rbd_utils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] rbd image 57a4022e-e14b-4094-a66f-1af491c55f6d_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:12:41 np0005558241 nova_compute[248510]: 2025-12-13 08:12:41.953 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 57a4022e-e14b-4094-a66f-1af491c55f6d_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:12:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1384: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 03:12:42 np0005558241 nova_compute[248510]: 2025-12-13 08:12:42.855 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 57a4022e-e14b-4094-a66f-1af491c55f6d_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.902s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:12:42 np0005558241 nova_compute[248510]: 2025-12-13 08:12:42.937 248514 DEBUG nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:12:42 np0005558241 nova_compute[248510]: 2025-12-13 08:12:42.938 248514 DEBUG nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Ensure instance console log exists: /var/lib/nova/instances/57a4022e-e14b-4094-a66f-1af491c55f6d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:12:42 np0005558241 nova_compute[248510]: 2025-12-13 08:12:42.938 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:42 np0005558241 nova_compute[248510]: 2025-12-13 08:12:42.939 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:42 np0005558241 nova_compute[248510]: 2025-12-13 08:12:42.939 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:43 np0005558241 nova_compute[248510]: 2025-12-13 08:12:43.105 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:12:44 np0005558241 nova_compute[248510]: 2025-12-13 08:12:44.026 248514 DEBUG nova.network.neutron [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Successfully created port: 7c5b48da-bc21-4027-a29c-5db83c1a308c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:12:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1385: 321 pgs: 321 active+clean; 90 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Dec 13 03:12:45 np0005558241 nova_compute[248510]: 2025-12-13 08:12:45.888 248514 DEBUG nova.network.neutron [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Successfully updated port: 7c5b48da-bc21-4027-a29c-5db83c1a308c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:12:45 np0005558241 nova_compute[248510]: 2025-12-13 08:12:45.942 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquiring lock "refresh_cache-57a4022e-e14b-4094-a66f-1af491c55f6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:12:45 np0005558241 nova_compute[248510]: 2025-12-13 08:12:45.942 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquired lock "refresh_cache-57a4022e-e14b-4094-a66f-1af491c55f6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:12:45 np0005558241 nova_compute[248510]: 2025-12-13 08:12:45.943 248514 DEBUG nova.network.neutron [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:12:46 np0005558241 nova_compute[248510]: 2025-12-13 08:12:46.345 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765613551.3439112, a1cbc612-63fc-4cbf-ad1e-007a93a03286 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:12:46 np0005558241 nova_compute[248510]: 2025-12-13 08:12:46.345 248514 INFO nova.compute.manager [-] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:12:46 np0005558241 nova_compute[248510]: 2025-12-13 08:12:46.393 248514 DEBUG nova.compute.manager [None req-7aac3ef9-2851-4cc8-a7d0-342ceba791d6 - - - - - -] [instance: a1cbc612-63fc-4cbf-ad1e-007a93a03286] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:12:46 np0005558241 nova_compute[248510]: 2025-12-13 08:12:46.395 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:46 np0005558241 nova_compute[248510]: 2025-12-13 08:12:46.455 248514 DEBUG nova.compute.manager [req-c8dbecf7-b0d2-49e6-a5b9-c6fcea0ec027 req-2f094d55-e07d-4765-9d87-e82796b2cf87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Received event network-changed-7c5b48da-bc21-4027-a29c-5db83c1a308c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:12:46 np0005558241 nova_compute[248510]: 2025-12-13 08:12:46.455 248514 DEBUG nova.compute.manager [req-c8dbecf7-b0d2-49e6-a5b9-c6fcea0ec027 req-2f094d55-e07d-4765-9d87-e82796b2cf87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Refreshing instance network info cache due to event network-changed-7c5b48da-bc21-4027-a29c-5db83c1a308c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:12:46 np0005558241 nova_compute[248510]: 2025-12-13 08:12:46.456 248514 DEBUG oslo_concurrency.lockutils [req-c8dbecf7-b0d2-49e6-a5b9-c6fcea0ec027 req-2f094d55-e07d-4765-9d87-e82796b2cf87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-57a4022e-e14b-4094-a66f-1af491c55f6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:12:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 321 active+clean; 90 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Dec 13 03:12:46 np0005558241 nova_compute[248510]: 2025-12-13 08:12:46.990 248514 DEBUG nova.network.neutron [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.107 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:12:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1387: 321 pgs: 321 active+clean; 90 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.681 248514 DEBUG nova.network.neutron [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Updating instance_info_cache with network_info: [{"id": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "address": "fa:16:3e:6e:29:ed", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c5b48da-bc", "ovs_interfaceid": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.710 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Releasing lock "refresh_cache-57a4022e-e14b-4094-a66f-1af491c55f6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.711 248514 DEBUG nova.compute.manager [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Instance network_info: |[{"id": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "address": "fa:16:3e:6e:29:ed", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c5b48da-bc", "ovs_interfaceid": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.711 248514 DEBUG oslo_concurrency.lockutils [req-c8dbecf7-b0d2-49e6-a5b9-c6fcea0ec027 req-2f094d55-e07d-4765-9d87-e82796b2cf87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-57a4022e-e14b-4094-a66f-1af491c55f6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.712 248514 DEBUG nova.network.neutron [req-c8dbecf7-b0d2-49e6-a5b9-c6fcea0ec027 req-2f094d55-e07d-4765-9d87-e82796b2cf87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Refreshing network info cache for port 7c5b48da-bc21-4027-a29c-5db83c1a308c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.716 248514 DEBUG nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Start _get_guest_xml network_info=[{"id": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "address": "fa:16:3e:6e:29:ed", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c5b48da-bc", "ovs_interfaceid": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vdb', 'size': 1, 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.721 248514 WARNING nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.726 248514 DEBUG nova.virt.libvirt.host [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.727 248514 DEBUG nova.virt.libvirt.host [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.734 248514 DEBUG nova.virt.libvirt.host [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.735 248514 DEBUG nova.virt.libvirt.host [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.736 248514 DEBUG nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.736 248514 DEBUG nova.virt.hardware [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:11:39Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='1087082748',id=7,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-433176113',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.737 248514 DEBUG nova.virt.hardware [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.737 248514 DEBUG nova.virt.hardware [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.737 248514 DEBUG nova.virt.hardware [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.737 248514 DEBUG nova.virt.hardware [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.738 248514 DEBUG nova.virt.hardware [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.738 248514 DEBUG nova.virt.hardware [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.738 248514 DEBUG nova.virt.hardware [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.739 248514 DEBUG nova.virt.hardware [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.739 248514 DEBUG nova.virt.hardware [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.739 248514 DEBUG nova.virt.hardware [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:12:48 np0005558241 nova_compute[248510]: 2025-12-13 08:12:48.742 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:12:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:12:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4028481329' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:12:49 np0005558241 nova_compute[248510]: 2025-12-13 08:12:49.288 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:12:49 np0005558241 nova_compute[248510]: 2025-12-13 08:12:49.289 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:12:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:12:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/118883605' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:12:49 np0005558241 nova_compute[248510]: 2025-12-13 08:12:49.870 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:12:49 np0005558241 nova_compute[248510]: 2025-12-13 08:12:49.891 248514 DEBUG nova.storage.rbd_utils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] rbd image 57a4022e-e14b-4094-a66f-1af491c55f6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:12:49 np0005558241 nova_compute[248510]: 2025-12-13 08:12:49.895 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:12:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:12:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3130973070' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.501 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.502 248514 DEBUG nova.virt.libvirt.vif [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:12:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-156009774',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-156009774',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(7),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-156009774',id=5,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=7,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ4VTdpGZWW2edkCqJ93Ae0N7W5Wl1KxtxPXJB+IDKzYse7HySnAxnSN+D34LgL+vn16+92mSLvTyRP3HupOhW4VHoScrkUgJj4Ke9zalwfgWa6DU4Za2b0K2T2xL2drqg==',key_name='tempest-keypair-1114404274',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='474fd43853634a5d8b3026b622723873',ramdisk_id='',reservation_id='r-x75wb70y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-306379411',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-306379411-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:12:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='df94630f47824853b3347bca91426ecd',uuid=57a4022e-e14b-4094-a66f-1af491c55f6d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "address": "fa:16:3e:6e:29:ed", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c5b48da-bc", "ovs_interfaceid": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.503 248514 DEBUG nova.network.os_vif_util [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Converting VIF {"id": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "address": "fa:16:3e:6e:29:ed", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c5b48da-bc", "ovs_interfaceid": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.504 248514 DEBUG nova.network.os_vif_util [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:29:ed,bridge_name='br-int',has_traffic_filtering=True,id=7c5b48da-bc21-4027-a29c-5db83c1a308c,network=Network(033b8122-ba6c-4217-b5b0-bd54c6f0aec0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c5b48da-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.505 248514 DEBUG nova.objects.instance [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lazy-loading 'pci_devices' on Instance uuid 57a4022e-e14b-4094-a66f-1af491c55f6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.531 248514 DEBUG nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:  <uuid>57a4022e-e14b-4094-a66f-1af491c55f6d</uuid>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:  <name>instance-00000005</name>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-156009774</nova:name>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:12:48</nova:creationTime>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <nova:flavor name="tempest-flavor_with_ephemeral_1-433176113">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:        <nova:ephemeral>1</nova:ephemeral>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:        <nova:user uuid="df94630f47824853b3347bca91426ecd">tempest-ServersWithSpecificFlavorTestJSON-306379411-project-member</nova:user>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:        <nova:project uuid="474fd43853634a5d8b3026b622723873">tempest-ServersWithSpecificFlavorTestJSON-306379411</nova:project>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:        <nova:port uuid="7c5b48da-bc21-4027-a29c-5db83c1a308c">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <entry name="serial">57a4022e-e14b-4094-a66f-1af491c55f6d</entry>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <entry name="uuid">57a4022e-e14b-4094-a66f-1af491c55f6d</entry>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/57a4022e-e14b-4094-a66f-1af491c55f6d_disk">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/57a4022e-e14b-4094-a66f-1af491c55f6d_disk.eph0">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <target dev="vdb" bus="virtio"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/57a4022e-e14b-4094-a66f-1af491c55f6d_disk.config">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:6e:29:ed"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <target dev="tap7c5b48da-bc"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/57a4022e-e14b-4094-a66f-1af491c55f6d/console.log" append="off"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:12:50 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:12:50 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:12:50 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:12:50 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.532 248514 DEBUG nova.compute.manager [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Preparing to wait for external event network-vif-plugged-7c5b48da-bc21-4027-a29c-5db83c1a308c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.533 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquiring lock "57a4022e-e14b-4094-a66f-1af491c55f6d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.533 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "57a4022e-e14b-4094-a66f-1af491c55f6d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.533 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "57a4022e-e14b-4094-a66f-1af491c55f6d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.534 248514 DEBUG nova.virt.libvirt.vif [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:12:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-156009774',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-156009774',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(7),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-156009774',id=5,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=7,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ4VTdpGZWW2edkCqJ93Ae0N7W5Wl1KxtxPXJB+IDKzYse7HySnAxnSN+D34LgL+vn16+92mSLvTyRP3HupOhW4VHoScrkUgJj4Ke9zalwfgWa6DU4Za2b0K2T2xL2drqg==',key_name='tempest-keypair-1114404274',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='474fd43853634a5d8b3026b622723873',ramdisk_id='',reservation_id='r-x75wb70y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-306379411',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-306379411-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:12:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='df94630f47824853b3347bca91426ecd',uuid=57a4022e-e14b-4094-a66f-1af491c55f6d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "address": "fa:16:3e:6e:29:ed", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c5b48da-bc", "ovs_interfaceid": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.534 248514 DEBUG nova.network.os_vif_util [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Converting VIF {"id": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "address": "fa:16:3e:6e:29:ed", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c5b48da-bc", "ovs_interfaceid": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.535 248514 DEBUG nova.network.os_vif_util [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:29:ed,bridge_name='br-int',has_traffic_filtering=True,id=7c5b48da-bc21-4027-a29c-5db83c1a308c,network=Network(033b8122-ba6c-4217-b5b0-bd54c6f0aec0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c5b48da-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.535 248514 DEBUG os_vif [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:29:ed,bridge_name='br-int',has_traffic_filtering=True,id=7c5b48da-bc21-4027-a29c-5db83c1a308c,network=Network(033b8122-ba6c-4217-b5b0-bd54c6f0aec0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c5b48da-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.536 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.536 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.537 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.540 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.541 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c5b48da-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.541 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7c5b48da-bc, col_values=(('external_ids', {'iface-id': '7c5b48da-bc21-4027-a29c-5db83c1a308c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:29:ed', 'vm-uuid': '57a4022e-e14b-4094-a66f-1af491c55f6d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.543 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:50 np0005558241 NetworkManager[50376]: <info>  [1765613570.5444] manager: (tap7c5b48da-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.546 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:12:50 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.550 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.551 248514 INFO os_vif [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:29:ed,bridge_name='br-int',has_traffic_filtering=True,id=7c5b48da-bc21-4027-a29c-5db83c1a308c,network=Network(033b8122-ba6c-4217-b5b0-bd54c6f0aec0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c5b48da-bc')#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.651 248514 DEBUG nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.652 248514 DEBUG nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.652 248514 DEBUG nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.653 248514 DEBUG nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] No VIF found with MAC fa:16:3e:6e:29:ed, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.654 248514 INFO nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Using config drive#033[00m
Dec 13 03:12:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1388: 321 pgs: 321 active+clean; 90 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Dec 13 03:12:50 np0005558241 nova_compute[248510]: 2025-12-13 08:12:50.683 248514 DEBUG nova.storage.rbd_utils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] rbd image 57a4022e-e14b-4094-a66f-1af491c55f6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:12:51 np0005558241 nova_compute[248510]: 2025-12-13 08:12:51.221 248514 DEBUG nova.network.neutron [req-c8dbecf7-b0d2-49e6-a5b9-c6fcea0ec027 req-2f094d55-e07d-4765-9d87-e82796b2cf87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Updated VIF entry in instance network info cache for port 7c5b48da-bc21-4027-a29c-5db83c1a308c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:12:51 np0005558241 nova_compute[248510]: 2025-12-13 08:12:51.222 248514 DEBUG nova.network.neutron [req-c8dbecf7-b0d2-49e6-a5b9-c6fcea0ec027 req-2f094d55-e07d-4765-9d87-e82796b2cf87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Updating instance_info_cache with network_info: [{"id": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "address": "fa:16:3e:6e:29:ed", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c5b48da-bc", "ovs_interfaceid": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:12:51 np0005558241 nova_compute[248510]: 2025-12-13 08:12:51.244 248514 DEBUG oslo_concurrency.lockutils [req-c8dbecf7-b0d2-49e6-a5b9-c6fcea0ec027 req-2f094d55-e07d-4765-9d87-e82796b2cf87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-57a4022e-e14b-4094-a66f-1af491c55f6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:12:51 np0005558241 nova_compute[248510]: 2025-12-13 08:12:51.384 248514 INFO nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Creating config drive at /var/lib/nova/instances/57a4022e-e14b-4094-a66f-1af491c55f6d/disk.config#033[00m
Dec 13 03:12:51 np0005558241 nova_compute[248510]: 2025-12-13 08:12:51.390 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/57a4022e-e14b-4094-a66f-1af491c55f6d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2s00lzln execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:12:51 np0005558241 nova_compute[248510]: 2025-12-13 08:12:51.527 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/57a4022e-e14b-4094-a66f-1af491c55f6d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2s00lzln" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:12:51 np0005558241 nova_compute[248510]: 2025-12-13 08:12:51.554 248514 DEBUG nova.storage.rbd_utils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] rbd image 57a4022e-e14b-4094-a66f-1af491c55f6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:12:51 np0005558241 nova_compute[248510]: 2025-12-13 08:12:51.558 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/57a4022e-e14b-4094-a66f-1af491c55f6d/disk.config 57a4022e-e14b-4094-a66f-1af491c55f6d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:12:51 np0005558241 nova_compute[248510]: 2025-12-13 08:12:51.709 248514 DEBUG oslo_concurrency.processutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/57a4022e-e14b-4094-a66f-1af491c55f6d/disk.config 57a4022e-e14b-4094-a66f-1af491c55f6d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:12:51 np0005558241 nova_compute[248510]: 2025-12-13 08:12:51.710 248514 INFO nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Deleting local config drive /var/lib/nova/instances/57a4022e-e14b-4094-a66f-1af491c55f6d/disk.config because it was imported into RBD.#033[00m
Dec 13 03:12:51 np0005558241 kernel: tap7c5b48da-bc: entered promiscuous mode
Dec 13 03:12:51 np0005558241 nova_compute[248510]: 2025-12-13 08:12:51.765 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:12:51Z|00036|binding|INFO|Claiming lport 7c5b48da-bc21-4027-a29c-5db83c1a308c for this chassis.
Dec 13 03:12:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:12:51Z|00037|binding|INFO|7c5b48da-bc21-4027-a29c-5db83c1a308c: Claiming fa:16:3e:6e:29:ed 10.100.0.12
Dec 13 03:12:51 np0005558241 NetworkManager[50376]: <info>  [1765613571.7683] manager: (tap7c5b48da-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Dec 13 03:12:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:51.778 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:29:ed 10.100.0.12'], port_security=['fa:16:3e:6e:29:ed 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '57a4022e-e14b-4094-a66f-1af491c55f6d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-033b8122-ba6c-4217-b5b0-bd54c6f0aec0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '474fd43853634a5d8b3026b622723873', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd0e531d-b7ab-4913-abf7-a093cbaf1988', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d464c07e-e23a-4447-a355-481e56e0a828, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7c5b48da-bc21-4027-a29c-5db83c1a308c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:12:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:51.780 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7c5b48da-bc21-4027-a29c-5db83c1a308c in datapath 033b8122-ba6c-4217-b5b0-bd54c6f0aec0 bound to our chassis#033[00m
Dec 13 03:12:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:51.782 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 033b8122-ba6c-4217-b5b0-bd54c6f0aec0#033[00m
Dec 13 03:12:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:12:51Z|00038|binding|INFO|Setting lport 7c5b48da-bc21-4027-a29c-5db83c1a308c ovn-installed in OVS
Dec 13 03:12:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:12:51Z|00039|binding|INFO|Setting lport 7c5b48da-bc21-4027-a29c-5db83c1a308c up in Southbound
Dec 13 03:12:51 np0005558241 nova_compute[248510]: 2025-12-13 08:12:51.787 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:51.797 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d62d02e1-3847-4d6d-9ae4-c0b41a6aafbf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:51.799 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap033b8122-b1 in ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:12:51 np0005558241 systemd-udevd[261723]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:12:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:51.801 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap033b8122-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:12:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:51.801 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[245ea9e4-2605-4393-832e-991a4524a95c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:51.802 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4ded1960-e694-44a5-b9f5-fcd7accfc71a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:51 np0005558241 systemd-machined[210538]: New machine qemu-5-instance-00000005.
Dec 13 03:12:51 np0005558241 NetworkManager[50376]: <info>  [1765613571.8131] device (tap7c5b48da-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:12:51 np0005558241 NetworkManager[50376]: <info>  [1765613571.8142] device (tap7c5b48da-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:12:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:51.816 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[58608b80-9d9b-4aa4-8d8c-629673d5eb53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:51 np0005558241 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Dec 13 03:12:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:51.841 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a87c5fbd-1cac-4c74-a981-0e2ad072608c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:51.883 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[871b0ef5-ebdf-4f5a-848e-782960e59ab4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:51 np0005558241 systemd-udevd[261727]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:12:51 np0005558241 NetworkManager[50376]: <info>  [1765613571.8939] manager: (tap033b8122-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Dec 13 03:12:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:51.892 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[361a9c7b-d4fd-4dbd-bd57-8a5ec435ef53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:51.934 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[480d5358-1594-4c74-89be-2d6d7f12649e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:51.938 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5b7f8248-07af-4f39-bcaf-cda7d5117085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:51 np0005558241 NetworkManager[50376]: <info>  [1765613571.9676] device (tap033b8122-b0): carrier: link connected
Dec 13 03:12:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:51.973 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e15cbcfd-bb38-4639-b7f6-2118dfec6569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:51.993 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ddcc1604-7a55-40de-9a71-08d9f9ad074b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap033b8122-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:ef:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614918, 'reachable_time': 26511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261756, 'error': None, 'target': 'ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:52.018 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[26c28af2-c04d-4c4d-b20c-0f5b4a559edf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:ef3a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 614918, 'tstamp': 614918}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261757, 'error': None, 'target': 'ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:52.040 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2df0c56c-a8f0-4851-bef2-53b4db3dd06e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap033b8122-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:ef:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614918, 'reachable_time': 26511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261758, 'error': None, 'target': 'ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:52.079 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[721d53a7-087b-4ef3-9719-710dd336e9a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:52.150 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6f978a-2897-418c-959b-5e522f763a97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:52.152 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap033b8122-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:52.153 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:52.153 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap033b8122-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:12:52 np0005558241 NetworkManager[50376]: <info>  [1765613572.1565] manager: (tap033b8122-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Dec 13 03:12:52 np0005558241 kernel: tap033b8122-b0: entered promiscuous mode
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.156 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:52.159 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap033b8122-b0, col_values=(('external_ids', {'iface-id': 'cc19a2a4-5a8b-432b-b3a5-c69f8b8fdfb7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:12:52 np0005558241 ovn_controller[148476]: 2025-12-13T08:12:52Z|00040|binding|INFO|Releasing lport cc19a2a4-5a8b-432b-b3a5-c69f8b8fdfb7 from this chassis (sb_readonly=0)
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.161 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.175 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:52.177 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/033b8122-ba6c-4217-b5b0-bd54c6f0aec0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/033b8122-ba6c-4217-b5b0-bd54c6f0aec0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:52.177 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a295ad77-6add-4963-bb06-3cfc7c3bb1fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:52.178 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-033b8122-ba6c-4217-b5b0-bd54c6f0aec0
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/033b8122-ba6c-4217-b5b0-bd54c6f0aec0.pid.haproxy
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 033b8122-ba6c-4217-b5b0-bd54c6f0aec0
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:12:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:52.179 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0', 'env', 'PROCESS_TAG=haproxy-033b8122-ba6c-4217-b5b0-bd54c6f0aec0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/033b8122-ba6c-4217-b5b0-bd54c6f0aec0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.289 248514 DEBUG nova.compute.manager [req-4a86fb73-3cb0-43f1-b57e-3e6490b21327 req-13122c4f-f399-469b-8c0b-b79604ad8d69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Received event network-vif-plugged-7c5b48da-bc21-4027-a29c-5db83c1a308c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.290 248514 DEBUG oslo_concurrency.lockutils [req-4a86fb73-3cb0-43f1-b57e-3e6490b21327 req-13122c4f-f399-469b-8c0b-b79604ad8d69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "57a4022e-e14b-4094-a66f-1af491c55f6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.290 248514 DEBUG oslo_concurrency.lockutils [req-4a86fb73-3cb0-43f1-b57e-3e6490b21327 req-13122c4f-f399-469b-8c0b-b79604ad8d69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "57a4022e-e14b-4094-a66f-1af491c55f6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.290 248514 DEBUG oslo_concurrency.lockutils [req-4a86fb73-3cb0-43f1-b57e-3e6490b21327 req-13122c4f-f399-469b-8c0b-b79604ad8d69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "57a4022e-e14b-4094-a66f-1af491c55f6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.291 248514 DEBUG nova.compute.manager [req-4a86fb73-3cb0-43f1-b57e-3e6490b21327 req-13122c4f-f399-469b-8c0b-b79604ad8d69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Processing event network-vif-plugged-7c5b48da-bc21-4027-a29c-5db83c1a308c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:12:52 np0005558241 podman[261790]: 2025-12-13 08:12:52.582882743 +0000 UTC m=+0.053845436 container create 3b94ac3ee74ba72ccb526d18f39faec4f2ab96e075f1f78491c837613cf94072 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 03:12:52 np0005558241 systemd[1]: Started libpod-conmon-3b94ac3ee74ba72ccb526d18f39faec4f2ab96e075f1f78491c837613cf94072.scope.
Dec 13 03:12:52 np0005558241 podman[261790]: 2025-12-13 08:12:52.55431781 +0000 UTC m=+0.025280533 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:12:52 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:12:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7803cc902b5488d229e4626867ec671a293e1989d3bcb8d2b0b297ab9bfa864/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:12:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 90 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Dec 13 03:12:52 np0005558241 podman[261790]: 2025-12-13 08:12:52.676873127 +0000 UTC m=+0.147835840 container init 3b94ac3ee74ba72ccb526d18f39faec4f2ab96e075f1f78491c837613cf94072 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:12:52 np0005558241 podman[261790]: 2025-12-13 08:12:52.685162931 +0000 UTC m=+0.156125624 container start 3b94ac3ee74ba72ccb526d18f39faec4f2ab96e075f1f78491c837613cf94072 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 13 03:12:52 np0005558241 neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0[261862]: [NOTICE]   (261869) : New worker (261871) forked
Dec 13 03:12:52 np0005558241 neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0[261862]: [NOTICE]   (261869) : Loading success.
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.767 248514 DEBUG nova.compute.manager [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.768 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613572.7679985, 57a4022e-e14b-4094-a66f-1af491c55f6d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.769 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] VM Started (Lifecycle Event)#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.772 248514 DEBUG nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.775 248514 INFO nova.virt.libvirt.driver [-] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Instance spawned successfully.#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.775 248514 DEBUG nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.813 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.818 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.822 248514 DEBUG nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.823 248514 DEBUG nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.823 248514 DEBUG nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.823 248514 DEBUG nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.824 248514 DEBUG nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.824 248514 DEBUG nova.virt.libvirt.driver [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.860 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.861 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613572.7681081, 57a4022e-e14b-4094-a66f-1af491c55f6d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.861 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.902 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.906 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613572.771015, 57a4022e-e14b-4094-a66f-1af491c55f6d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.906 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.912 248514 INFO nova.compute.manager [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Took 11.88 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.913 248514 DEBUG nova.compute.manager [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.926 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.930 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:12:52 np0005558241 nova_compute[248510]: 2025-12-13 08:12:52.973 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:12:53 np0005558241 nova_compute[248510]: 2025-12-13 08:12:53.011 248514 INFO nova.compute.manager [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Took 13.13 seconds to build instance.#033[00m
Dec 13 03:12:53 np0005558241 nova_compute[248510]: 2025-12-13 08:12:53.077 248514 DEBUG oslo_concurrency.lockutils [None req-2fdb1013-70cd-4b10-a682-f828c3f72edd df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "57a4022e-e14b-4094-a66f-1af491c55f6d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:53 np0005558241 nova_compute[248510]: 2025-12-13 08:12:53.108 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:12:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1390: 321 pgs: 321 active+clean; 90 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Dec 13 03:12:55 np0005558241 nova_compute[248510]: 2025-12-13 08:12:55.083 248514 DEBUG nova.compute.manager [req-3962a9a1-3a9d-4623-b372-f5d2f15c538d req-b4e6f5ee-8a1e-4ddb-b4b0-eb2174be2dc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Received event network-vif-plugged-7c5b48da-bc21-4027-a29c-5db83c1a308c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:12:55 np0005558241 nova_compute[248510]: 2025-12-13 08:12:55.083 248514 DEBUG oslo_concurrency.lockutils [req-3962a9a1-3a9d-4623-b372-f5d2f15c538d req-b4e6f5ee-8a1e-4ddb-b4b0-eb2174be2dc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "57a4022e-e14b-4094-a66f-1af491c55f6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:55 np0005558241 nova_compute[248510]: 2025-12-13 08:12:55.083 248514 DEBUG oslo_concurrency.lockutils [req-3962a9a1-3a9d-4623-b372-f5d2f15c538d req-b4e6f5ee-8a1e-4ddb-b4b0-eb2174be2dc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "57a4022e-e14b-4094-a66f-1af491c55f6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:55 np0005558241 nova_compute[248510]: 2025-12-13 08:12:55.083 248514 DEBUG oslo_concurrency.lockutils [req-3962a9a1-3a9d-4623-b372-f5d2f15c538d req-b4e6f5ee-8a1e-4ddb-b4b0-eb2174be2dc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "57a4022e-e14b-4094-a66f-1af491c55f6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:55 np0005558241 nova_compute[248510]: 2025-12-13 08:12:55.084 248514 DEBUG nova.compute.manager [req-3962a9a1-3a9d-4623-b372-f5d2f15c538d req-b4e6f5ee-8a1e-4ddb-b4b0-eb2174be2dc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] No waiting events found dispatching network-vif-plugged-7c5b48da-bc21-4027-a29c-5db83c1a308c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:12:55 np0005558241 nova_compute[248510]: 2025-12-13 08:12:55.084 248514 WARNING nova.compute.manager [req-3962a9a1-3a9d-4623-b372-f5d2f15c538d req-b4e6f5ee-8a1e-4ddb-b4b0-eb2174be2dc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Received unexpected event network-vif-plugged-7c5b48da-bc21-4027-a29c-5db83c1a308c for instance with vm_state active and task_state None.#033[00m
Dec 13 03:12:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:55.393 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:12:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:55.394 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:12:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:12:55.395 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:12:55 np0005558241 nova_compute[248510]: 2025-12-13 08:12:55.546 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1391: 321 pgs: 321 active+clean; 90 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1012 KiB/s rd, 14 KiB/s wr, 45 op/s
Dec 13 03:12:57 np0005558241 podman[261883]: 2025-12-13 08:12:57.973297522 +0000 UTC m=+0.057272831 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:12:57 np0005558241 podman[261882]: 2025-12-13 08:12:57.998135113 +0000 UTC m=+0.084639324 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 13 03:12:58 np0005558241 podman[261881]: 2025-12-13 08:12:58.001749712 +0000 UTC m=+0.090387405 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:12:58 np0005558241 nova_compute[248510]: 2025-12-13 08:12:58.110 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:12:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:12:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1392: 321 pgs: 321 active+clean; 90 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Dec 13 03:12:59 np0005558241 nova_compute[248510]: 2025-12-13 08:12:59.656 248514 DEBUG nova.compute.manager [req-6a47bc3c-67cf-4ee6-ab3d-6a1028e1b254 req-d75f8192-acd6-49f2-bebb-962a218de187 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Received event network-changed-7c5b48da-bc21-4027-a29c-5db83c1a308c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:12:59 np0005558241 nova_compute[248510]: 2025-12-13 08:12:59.657 248514 DEBUG nova.compute.manager [req-6a47bc3c-67cf-4ee6-ab3d-6a1028e1b254 req-d75f8192-acd6-49f2-bebb-962a218de187 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Refreshing instance network info cache due to event network-changed-7c5b48da-bc21-4027-a29c-5db83c1a308c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:12:59 np0005558241 nova_compute[248510]: 2025-12-13 08:12:59.657 248514 DEBUG oslo_concurrency.lockutils [req-6a47bc3c-67cf-4ee6-ab3d-6a1028e1b254 req-d75f8192-acd6-49f2-bebb-962a218de187 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-57a4022e-e14b-4094-a66f-1af491c55f6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:12:59 np0005558241 nova_compute[248510]: 2025-12-13 08:12:59.658 248514 DEBUG oslo_concurrency.lockutils [req-6a47bc3c-67cf-4ee6-ab3d-6a1028e1b254 req-d75f8192-acd6-49f2-bebb-962a218de187 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-57a4022e-e14b-4094-a66f-1af491c55f6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:12:59 np0005558241 nova_compute[248510]: 2025-12-13 08:12:59.658 248514 DEBUG nova.network.neutron [req-6a47bc3c-67cf-4ee6-ab3d-6a1028e1b254 req-d75f8192-acd6-49f2-bebb-962a218de187 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Refreshing network info cache for port 7c5b48da-bc21-4027-a29c-5db83c1a308c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:13:00 np0005558241 nova_compute[248510]: 2025-12-13 08:13:00.550 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1393: 321 pgs: 321 active+clean; 90 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Dec 13 03:13:02 np0005558241 nova_compute[248510]: 2025-12-13 08:13:02.462 248514 DEBUG nova.network.neutron [req-6a47bc3c-67cf-4ee6-ab3d-6a1028e1b254 req-d75f8192-acd6-49f2-bebb-962a218de187 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Updated VIF entry in instance network info cache for port 7c5b48da-bc21-4027-a29c-5db83c1a308c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:13:02 np0005558241 nova_compute[248510]: 2025-12-13 08:13:02.463 248514 DEBUG nova.network.neutron [req-6a47bc3c-67cf-4ee6-ab3d-6a1028e1b254 req-d75f8192-acd6-49f2-bebb-962a218de187 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Updating instance_info_cache with network_info: [{"id": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "address": "fa:16:3e:6e:29:ed", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c5b48da-bc", "ovs_interfaceid": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:13:02 np0005558241 nova_compute[248510]: 2025-12-13 08:13:02.504 248514 DEBUG oslo_concurrency.lockutils [req-6a47bc3c-67cf-4ee6-ab3d-6a1028e1b254 req-d75f8192-acd6-49f2-bebb-962a218de187 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-57a4022e-e14b-4094-a66f-1af491c55f6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:13:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1394: 321 pgs: 321 active+clean; 90 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Dec 13 03:13:03 np0005558241 nova_compute[248510]: 2025-12-13 08:13:03.113 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:13:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1395: 321 pgs: 321 active+clean; 99 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 677 KiB/s wr, 81 op/s
Dec 13 03:13:05 np0005558241 ovn_controller[148476]: 2025-12-13T08:13:05Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6e:29:ed 10.100.0.12
Dec 13 03:13:05 np0005558241 ovn_controller[148476]: 2025-12-13T08:13:05Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6e:29:ed 10.100.0.12
Dec 13 03:13:05 np0005558241 nova_compute[248510]: 2025-12-13 08:13:05.552 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:06 np0005558241 nova_compute[248510]: 2025-12-13 08:13:06.306 248514 DEBUG oslo_concurrency.processutils [None req-9c10600c-ec49-4291-85f2-17231f2a8ecd c19599d8b3be4bceae3a6701311c5668 b111119d12184ff89f0f014679ef78c2 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:13:06 np0005558241 nova_compute[248510]: 2025-12-13 08:13:06.333 248514 DEBUG oslo_concurrency.processutils [None req-9c10600c-ec49-4291-85f2-17231f2a8ecd c19599d8b3be4bceae3a6701311c5668 b111119d12184ff89f0f014679ef78c2 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:13:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 321 active+clean; 99 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 959 KiB/s rd, 663 KiB/s wr, 36 op/s
Dec 13 03:13:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:13:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:13:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:13:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:13:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:13:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:13:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:13:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:13:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:13:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:13:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:13:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:13:07 np0005558241 podman[262088]: 2025-12-13 08:13:07.241020805 +0000 UTC m=+0.024376141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:13:07 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:13:07 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:13:07 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:13:07 np0005558241 podman[262088]: 2025-12-13 08:13:07.431864943 +0000 UTC m=+0.215220259 container create e07dde46a5e84cae7a1d8ea59629d85e61f654ca0284a1a6d66dc8a2d671c4b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:13:07 np0005558241 systemd[1]: Started libpod-conmon-e07dde46a5e84cae7a1d8ea59629d85e61f654ca0284a1a6d66dc8a2d671c4b5.scope.
Dec 13 03:13:07 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:13:07 np0005558241 podman[262088]: 2025-12-13 08:13:07.518909436 +0000 UTC m=+0.302264772 container init e07dde46a5e84cae7a1d8ea59629d85e61f654ca0284a1a6d66dc8a2d671c4b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 03:13:07 np0005558241 podman[262088]: 2025-12-13 08:13:07.527680391 +0000 UTC m=+0.311035707 container start e07dde46a5e84cae7a1d8ea59629d85e61f654ca0284a1a6d66dc8a2d671c4b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_edison, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:13:07 np0005558241 podman[262088]: 2025-12-13 08:13:07.535279378 +0000 UTC m=+0.318634734 container attach e07dde46a5e84cae7a1d8ea59629d85e61f654ca0284a1a6d66dc8a2d671c4b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:13:07 np0005558241 kind_edison[262105]: 167 167
Dec 13 03:13:07 np0005558241 systemd[1]: libpod-e07dde46a5e84cae7a1d8ea59629d85e61f654ca0284a1a6d66dc8a2d671c4b5.scope: Deactivated successfully.
Dec 13 03:13:07 np0005558241 podman[262088]: 2025-12-13 08:13:07.546350531 +0000 UTC m=+0.329705897 container died e07dde46a5e84cae7a1d8ea59629d85e61f654ca0284a1a6d66dc8a2d671c4b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_edison, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 03:13:07 np0005558241 systemd[1]: var-lib-containers-storage-overlay-04703edc8f532afeb9fcc0a495d5b2b79bb13e02af2294b810fe2ce4a41aaefb-merged.mount: Deactivated successfully.
Dec 13 03:13:07 np0005558241 podman[262088]: 2025-12-13 08:13:07.59141011 +0000 UTC m=+0.374765426 container remove e07dde46a5e84cae7a1d8ea59629d85e61f654ca0284a1a6d66dc8a2d671c4b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:13:07 np0005558241 systemd[1]: libpod-conmon-e07dde46a5e84cae7a1d8ea59629d85e61f654ca0284a1a6d66dc8a2d671c4b5.scope: Deactivated successfully.
Dec 13 03:13:07 np0005558241 podman[262130]: 2025-12-13 08:13:07.773323918 +0000 UTC m=+0.045899851 container create 610d74f22f4a60a16f99c0d864191d0bb34bc141e4fd11370f07c0e9a300ac36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_chebyshev, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:13:07 np0005558241 systemd[1]: Started libpod-conmon-610d74f22f4a60a16f99c0d864191d0bb34bc141e4fd11370f07c0e9a300ac36.scope.
Dec 13 03:13:07 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:13:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d80a953e171446d59492bb830f20d4601e91a3642f230b65eac5c47e6b1e79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:13:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d80a953e171446d59492bb830f20d4601e91a3642f230b65eac5c47e6b1e79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:13:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d80a953e171446d59492bb830f20d4601e91a3642f230b65eac5c47e6b1e79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:13:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d80a953e171446d59492bb830f20d4601e91a3642f230b65eac5c47e6b1e79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:13:07 np0005558241 podman[262130]: 2025-12-13 08:13:07.753667424 +0000 UTC m=+0.026243377 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:13:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d80a953e171446d59492bb830f20d4601e91a3642f230b65eac5c47e6b1e79/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:13:07 np0005558241 podman[262130]: 2025-12-13 08:13:07.860854023 +0000 UTC m=+0.133429976 container init 610d74f22f4a60a16f99c0d864191d0bb34bc141e4fd11370f07c0e9a300ac36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 03:13:07 np0005558241 podman[262130]: 2025-12-13 08:13:07.870123871 +0000 UTC m=+0.142699804 container start 610d74f22f4a60a16f99c0d864191d0bb34bc141e4fd11370f07c0e9a300ac36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_chebyshev, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 03:13:07 np0005558241 podman[262130]: 2025-12-13 08:13:07.873870993 +0000 UTC m=+0.146446946 container attach 610d74f22f4a60a16f99c0d864191d0bb34bc141e4fd11370f07c0e9a300ac36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:13:08 np0005558241 nova_compute[248510]: 2025-12-13 08:13:08.116 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:08 np0005558241 admiring_chebyshev[262147]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:13:08 np0005558241 admiring_chebyshev[262147]: --> All data devices are unavailable
Dec 13 03:13:08 np0005558241 systemd[1]: libpod-610d74f22f4a60a16f99c0d864191d0bb34bc141e4fd11370f07c0e9a300ac36.scope: Deactivated successfully.
Dec 13 03:13:08 np0005558241 podman[262130]: 2025-12-13 08:13:08.5676206 +0000 UTC m=+0.840196533 container died 610d74f22f4a60a16f99c0d864191d0bb34bc141e4fd11370f07c0e9a300ac36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_chebyshev, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 03:13:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:13:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1397: 321 pgs: 321 active+clean; 121 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 100 op/s
Dec 13 03:13:08 np0005558241 systemd[1]: var-lib-containers-storage-overlay-58d80a953e171446d59492bb830f20d4601e91a3642f230b65eac5c47e6b1e79-merged.mount: Deactivated successfully.
Dec 13 03:13:08 np0005558241 podman[262130]: 2025-12-13 08:13:08.876581785 +0000 UTC m=+1.149157718 container remove 610d74f22f4a60a16f99c0d864191d0bb34bc141e4fd11370f07c0e9a300ac36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 03:13:08 np0005558241 systemd[1]: libpod-conmon-610d74f22f4a60a16f99c0d864191d0bb34bc141e4fd11370f07c0e9a300ac36.scope: Deactivated successfully.
Dec 13 03:13:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:13:09
Dec 13 03:13:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:13:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:13:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.mgr', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'volumes']
Dec 13 03:13:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:13:09 np0005558241 podman[262240]: 2025-12-13 08:13:09.373979329 +0000 UTC m=+0.040872557 container create 3719b2ca3cd51e6b9b8f876b3933d32c159b66c1016923f03f69cc6e74cfefaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:13:09 np0005558241 systemd[1]: Started libpod-conmon-3719b2ca3cd51e6b9b8f876b3933d32c159b66c1016923f03f69cc6e74cfefaf.scope.
Dec 13 03:13:09 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:13:09 np0005558241 podman[262240]: 2025-12-13 08:13:09.35697186 +0000 UTC m=+0.023865118 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:13:09 np0005558241 podman[262240]: 2025-12-13 08:13:09.456187403 +0000 UTC m=+0.123080661 container init 3719b2ca3cd51e6b9b8f876b3933d32c159b66c1016923f03f69cc6e74cfefaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:13:09 np0005558241 podman[262240]: 2025-12-13 08:13:09.462803246 +0000 UTC m=+0.129696474 container start 3719b2ca3cd51e6b9b8f876b3933d32c159b66c1016923f03f69cc6e74cfefaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haibt, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:13:09 np0005558241 podman[262240]: 2025-12-13 08:13:09.466004144 +0000 UTC m=+0.132897562 container attach 3719b2ca3cd51e6b9b8f876b3933d32c159b66c1016923f03f69cc6e74cfefaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:13:09 np0005558241 reverent_haibt[262257]: 167 167
Dec 13 03:13:09 np0005558241 systemd[1]: libpod-3719b2ca3cd51e6b9b8f876b3933d32c159b66c1016923f03f69cc6e74cfefaf.scope: Deactivated successfully.
Dec 13 03:13:09 np0005558241 podman[262240]: 2025-12-13 08:13:09.470438274 +0000 UTC m=+0.137331502 container died 3719b2ca3cd51e6b9b8f876b3933d32c159b66c1016923f03f69cc6e74cfefaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 03:13:09 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0f71a1bb864df2d1b236e16a261bdb29ac10fa003726364ffef71ba3c466f317-merged.mount: Deactivated successfully.
Dec 13 03:13:09 np0005558241 podman[262240]: 2025-12-13 08:13:09.520834274 +0000 UTC m=+0.187727502 container remove 3719b2ca3cd51e6b9b8f876b3933d32c159b66c1016923f03f69cc6e74cfefaf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 03:13:09 np0005558241 systemd[1]: libpod-conmon-3719b2ca3cd51e6b9b8f876b3933d32c159b66c1016923f03f69cc6e74cfefaf.scope: Deactivated successfully.
Dec 13 03:13:09 np0005558241 podman[262283]: 2025-12-13 08:13:09.82654794 +0000 UTC m=+0.119138244 container create b871743741a13398ff80e2d449a8e281057aed1c6e2e0b1345dab9f905992bfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jackson, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 03:13:09 np0005558241 podman[262283]: 2025-12-13 08:13:09.734844542 +0000 UTC m=+0.027434866 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:13:09 np0005558241 systemd[1]: Started libpod-conmon-b871743741a13398ff80e2d449a8e281057aed1c6e2e0b1345dab9f905992bfa.scope.
Dec 13 03:13:09 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:13:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e06ff2ce18aca2321e4f9789ec7cd3e70bd726467a73d190cd83beeb093c97f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:13:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e06ff2ce18aca2321e4f9789ec7cd3e70bd726467a73d190cd83beeb093c97f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:13:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e06ff2ce18aca2321e4f9789ec7cd3e70bd726467a73d190cd83beeb093c97f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:13:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e06ff2ce18aca2321e4f9789ec7cd3e70bd726467a73d190cd83beeb093c97f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:13:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:13:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:13:10 np0005558241 podman[262283]: 2025-12-13 08:13:10.09750411 +0000 UTC m=+0.390094434 container init b871743741a13398ff80e2d449a8e281057aed1c6e2e0b1345dab9f905992bfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jackson, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 03:13:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:13:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:13:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:13:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:13:10 np0005558241 podman[262283]: 2025-12-13 08:13:10.104922872 +0000 UTC m=+0.397513176 container start b871743741a13398ff80e2d449a8e281057aed1c6e2e0b1345dab9f905992bfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jackson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 03:13:10 np0005558241 podman[262283]: 2025-12-13 08:13:10.136696584 +0000 UTC m=+0.429286918 container attach b871743741a13398ff80e2d449a8e281057aed1c6e2e0b1345dab9f905992bfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:13:10 np0005558241 zen_jackson[262300]: {
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:    "0": [
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:        {
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "devices": [
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "/dev/loop3"
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            ],
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "lv_name": "ceph_lv0",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "lv_size": "21470642176",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "name": "ceph_lv0",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "tags": {
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.cluster_name": "ceph",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.crush_device_class": "",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.encrypted": "0",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.objectstore": "bluestore",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.osd_id": "0",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.type": "block",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.vdo": "0",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.with_tpm": "0"
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            },
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "type": "block",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "vg_name": "ceph_vg0"
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:        }
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:    ],
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:    "1": [
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:        {
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "devices": [
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "/dev/loop4"
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            ],
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "lv_name": "ceph_lv1",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "lv_size": "21470642176",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "name": "ceph_lv1",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "tags": {
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.cluster_name": "ceph",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.crush_device_class": "",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.encrypted": "0",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.objectstore": "bluestore",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.osd_id": "1",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.type": "block",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.vdo": "0",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.with_tpm": "0"
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            },
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "type": "block",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "vg_name": "ceph_vg1"
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:        }
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:    ],
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:    "2": [
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:        {
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "devices": [
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "/dev/loop5"
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            ],
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "lv_name": "ceph_lv2",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "lv_size": "21470642176",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "name": "ceph_lv2",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "tags": {
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.cluster_name": "ceph",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.crush_device_class": "",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.encrypted": "0",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.objectstore": "bluestore",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.osd_id": "2",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.type": "block",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.vdo": "0",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:                "ceph.with_tpm": "0"
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            },
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "type": "block",
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:            "vg_name": "ceph_vg2"
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:        }
Dec 13 03:13:10 np0005558241 zen_jackson[262300]:    ]
Dec 13 03:13:10 np0005558241 zen_jackson[262300]: }
Dec 13 03:13:10 np0005558241 systemd[1]: libpod-b871743741a13398ff80e2d449a8e281057aed1c6e2e0b1345dab9f905992bfa.scope: Deactivated successfully.
Dec 13 03:13:10 np0005558241 podman[262283]: 2025-12-13 08:13:10.393093706 +0000 UTC m=+0.685684040 container died b871743741a13398ff80e2d449a8e281057aed1c6e2e0b1345dab9f905992bfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:13:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:13:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:13:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:13:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:13:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:13:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:13:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:13:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:13:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:13:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:13:10 np0005558241 nova_compute[248510]: 2025-12-13 08:13:10.555 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:10 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9e06ff2ce18aca2321e4f9789ec7cd3e70bd726467a73d190cd83beeb093c97f-merged.mount: Deactivated successfully.
Dec 13 03:13:10 np0005558241 podman[262283]: 2025-12-13 08:13:10.661613466 +0000 UTC m=+0.954203770 container remove b871743741a13398ff80e2d449a8e281057aed1c6e2e0b1345dab9f905992bfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 03:13:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 123 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Dec 13 03:13:10 np0005558241 systemd[1]: libpod-conmon-b871743741a13398ff80e2d449a8e281057aed1c6e2e0b1345dab9f905992bfa.scope: Deactivated successfully.
Dec 13 03:13:11 np0005558241 podman[262384]: 2025-12-13 08:13:11.219819057 +0000 UTC m=+0.045806169 container create 9c6a3ad7c30a3fc77fa4e9a554fe923ee2fbab165bb284ca5841e52883fe5c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:13:11 np0005558241 systemd[1]: Started libpod-conmon-9c6a3ad7c30a3fc77fa4e9a554fe923ee2fbab165bb284ca5841e52883fe5c6b.scope.
Dec 13 03:13:11 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:13:11 np0005558241 podman[262384]: 2025-12-13 08:13:11.202127711 +0000 UTC m=+0.028114833 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:13:11 np0005558241 podman[262384]: 2025-12-13 08:13:11.307237279 +0000 UTC m=+0.133224411 container init 9c6a3ad7c30a3fc77fa4e9a554fe923ee2fbab165bb284ca5841e52883fe5c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 03:13:11 np0005558241 podman[262384]: 2025-12-13 08:13:11.314278022 +0000 UTC m=+0.140265134 container start 9c6a3ad7c30a3fc77fa4e9a554fe923ee2fbab165bb284ca5841e52883fe5c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_bohr, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 03:13:11 np0005558241 podman[262384]: 2025-12-13 08:13:11.318808414 +0000 UTC m=+0.144795536 container attach 9c6a3ad7c30a3fc77fa4e9a554fe923ee2fbab165bb284ca5841e52883fe5c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_bohr, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:13:11 np0005558241 kind_bohr[262400]: 167 167
Dec 13 03:13:11 np0005558241 systemd[1]: libpod-9c6a3ad7c30a3fc77fa4e9a554fe923ee2fbab165bb284ca5841e52883fe5c6b.scope: Deactivated successfully.
Dec 13 03:13:11 np0005558241 podman[262384]: 2025-12-13 08:13:11.32313523 +0000 UTC m=+0.149122362 container died 9c6a3ad7c30a3fc77fa4e9a554fe923ee2fbab165bb284ca5841e52883fe5c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_bohr, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 03:13:11 np0005558241 systemd[1]: var-lib-containers-storage-overlay-901341d1773563b47a2de662402e47895b966de2db852d2cfb387ac7c1317026-merged.mount: Deactivated successfully.
Dec 13 03:13:11 np0005558241 podman[262384]: 2025-12-13 08:13:11.378208546 +0000 UTC m=+0.204195648 container remove 9c6a3ad7c30a3fc77fa4e9a554fe923ee2fbab165bb284ca5841e52883fe5c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_bohr, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:13:11 np0005558241 systemd[1]: libpod-conmon-9c6a3ad7c30a3fc77fa4e9a554fe923ee2fbab165bb284ca5841e52883fe5c6b.scope: Deactivated successfully.
Dec 13 03:13:11 np0005558241 podman[262423]: 2025-12-13 08:13:11.567761062 +0000 UTC m=+0.042034446 container create 42f7bf04af959755aafb729c42b9712c1690d9a74cd9e59760105a24829dd455 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_fermi, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:13:11 np0005558241 systemd[1]: Started libpod-conmon-42f7bf04af959755aafb729c42b9712c1690d9a74cd9e59760105a24829dd455.scope.
Dec 13 03:13:11 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:13:11 np0005558241 podman[262423]: 2025-12-13 08:13:11.550257161 +0000 UTC m=+0.024530575 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:13:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f48b78eb0db14d3b9e2f5d438eb765489daed2c0926eb7f640ed5a763432cf5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:13:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f48b78eb0db14d3b9e2f5d438eb765489daed2c0926eb7f640ed5a763432cf5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:13:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f48b78eb0db14d3b9e2f5d438eb765489daed2c0926eb7f640ed5a763432cf5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:13:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f48b78eb0db14d3b9e2f5d438eb765489daed2c0926eb7f640ed5a763432cf5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:13:11 np0005558241 podman[262423]: 2025-12-13 08:13:11.662218727 +0000 UTC m=+0.136492141 container init 42f7bf04af959755aafb729c42b9712c1690d9a74cd9e59760105a24829dd455 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 03:13:11 np0005558241 podman[262423]: 2025-12-13 08:13:11.67044583 +0000 UTC m=+0.144719214 container start 42f7bf04af959755aafb729c42b9712c1690d9a74cd9e59760105a24829dd455 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_fermi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 03:13:11 np0005558241 podman[262423]: 2025-12-13 08:13:11.675153306 +0000 UTC m=+0.149426710 container attach 42f7bf04af959755aafb729c42b9712c1690d9a74cd9e59760105a24829dd455 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_fermi, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:13:12 np0005558241 lvm[262519]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:13:12 np0005558241 lvm[262519]: VG ceph_vg0 finished
Dec 13 03:13:12 np0005558241 lvm[262520]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:13:12 np0005558241 lvm[262520]: VG ceph_vg1 finished
Dec 13 03:13:12 np0005558241 lvm[262521]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:13:12 np0005558241 lvm[262521]: VG ceph_vg2 finished
Dec 13 03:13:12 np0005558241 recursing_fermi[262439]: {}
Dec 13 03:13:12 np0005558241 systemd[1]: libpod-42f7bf04af959755aafb729c42b9712c1690d9a74cd9e59760105a24829dd455.scope: Deactivated successfully.
Dec 13 03:13:12 np0005558241 systemd[1]: libpod-42f7bf04af959755aafb729c42b9712c1690d9a74cd9e59760105a24829dd455.scope: Consumed 1.320s CPU time.
Dec 13 03:13:12 np0005558241 podman[262423]: 2025-12-13 08:13:12.465018938 +0000 UTC m=+0.939292332 container died 42f7bf04af959755aafb729c42b9712c1690d9a74cd9e59760105a24829dd455 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_fermi, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 03:13:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3f48b78eb0db14d3b9e2f5d438eb765489daed2c0926eb7f640ed5a763432cf5-merged.mount: Deactivated successfully.
Dec 13 03:13:12 np0005558241 podman[262423]: 2025-12-13 08:13:12.599736025 +0000 UTC m=+1.074009419 container remove 42f7bf04af959755aafb729c42b9712c1690d9a74cd9e59760105a24829dd455 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:13:12 np0005558241 systemd[1]: libpod-conmon-42f7bf04af959755aafb729c42b9712c1690d9a74cd9e59760105a24829dd455.scope: Deactivated successfully.
Dec 13 03:13:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:13:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:13:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:13:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 321 active+clean; 123 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Dec 13 03:13:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:13:13 np0005558241 nova_compute[248510]: 2025-12-13 08:13:13.118 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:13:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:13:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:13:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1400: 321 pgs: 321 active+clean; 123 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Dec 13 03:13:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:13:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/426747264' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:13:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:13:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/426747264' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:13:15 np0005558241 nova_compute[248510]: 2025-12-13 08:13:15.559 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.325 248514 DEBUG oslo_concurrency.lockutils [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquiring lock "57a4022e-e14b-4094-a66f-1af491c55f6d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.326 248514 DEBUG oslo_concurrency.lockutils [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "57a4022e-e14b-4094-a66f-1af491c55f6d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.327 248514 DEBUG oslo_concurrency.lockutils [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquiring lock "57a4022e-e14b-4094-a66f-1af491c55f6d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.327 248514 DEBUG oslo_concurrency.lockutils [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "57a4022e-e14b-4094-a66f-1af491c55f6d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.327 248514 DEBUG oslo_concurrency.lockutils [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "57a4022e-e14b-4094-a66f-1af491c55f6d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.328 248514 INFO nova.compute.manager [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Terminating instance#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.329 248514 DEBUG nova.compute.manager [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:13:16 np0005558241 kernel: tap7c5b48da-bc (unregistering): left promiscuous mode
Dec 13 03:13:16 np0005558241 NetworkManager[50376]: <info>  [1765613596.3935] device (tap7c5b48da-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:13:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:13:16Z|00041|binding|INFO|Releasing lport 7c5b48da-bc21-4027-a29c-5db83c1a308c from this chassis (sb_readonly=0)
Dec 13 03:13:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:13:16Z|00042|binding|INFO|Setting lport 7c5b48da-bc21-4027-a29c-5db83c1a308c down in Southbound
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.404 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:13:16Z|00043|binding|INFO|Removing iface tap7c5b48da-bc ovn-installed in OVS
Dec 13 03:13:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:16.410 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:29:ed 10.100.0.12'], port_security=['fa:16:3e:6e:29:ed 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '57a4022e-e14b-4094-a66f-1af491c55f6d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-033b8122-ba6c-4217-b5b0-bd54c6f0aec0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '474fd43853634a5d8b3026b622723873', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd0e531d-b7ab-4913-abf7-a093cbaf1988', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.235'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d464c07e-e23a-4447-a355-481e56e0a828, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7c5b48da-bc21-4027-a29c-5db83c1a308c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:13:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:16.412 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7c5b48da-bc21-4027-a29c-5db83c1a308c in datapath 033b8122-ba6c-4217-b5b0-bd54c6f0aec0 unbound from our chassis#033[00m
Dec 13 03:13:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:16.413 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 033b8122-ba6c-4217-b5b0-bd54c6f0aec0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:13:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:16.415 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[41ac9a31-211b-43cf-9f44-b6645ae8d22f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:13:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:16.416 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0 namespace which is not needed anymore#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.425 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:16 np0005558241 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Dec 13 03:13:16 np0005558241 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 13.677s CPU time.
Dec 13 03:13:16 np0005558241 systemd-machined[210538]: Machine qemu-5-instance-00000005 terminated.
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.554 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.560 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:16 np0005558241 neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0[261862]: [NOTICE]   (261869) : haproxy version is 2.8.14-c23fe91
Dec 13 03:13:16 np0005558241 neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0[261862]: [NOTICE]   (261869) : path to executable is /usr/sbin/haproxy
Dec 13 03:13:16 np0005558241 neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0[261862]: [WARNING]  (261869) : Exiting Master process...
Dec 13 03:13:16 np0005558241 neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0[261862]: [ALERT]    (261869) : Current worker (261871) exited with code 143 (Terminated)
Dec 13 03:13:16 np0005558241 neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0[261862]: [WARNING]  (261869) : All workers exited. Exiting... (0)
Dec 13 03:13:16 np0005558241 systemd[1]: libpod-3b94ac3ee74ba72ccb526d18f39faec4f2ab96e075f1f78491c837613cf94072.scope: Deactivated successfully.
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.570 248514 INFO nova.virt.libvirt.driver [-] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Instance destroyed successfully.#033[00m
Dec 13 03:13:16 np0005558241 podman[262584]: 2025-12-13 08:13:16.571325519 +0000 UTC m=+0.050692058 container died 3b94ac3ee74ba72ccb526d18f39faec4f2ab96e075f1f78491c837613cf94072 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.571 248514 DEBUG nova.objects.instance [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lazy-loading 'resources' on Instance uuid 57a4022e-e14b-4094-a66f-1af491c55f6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.590 248514 DEBUG nova.virt.libvirt.vif [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:12:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-156009774',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-156009774',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(7),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-156009774',id=5,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=7,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ4VTdpGZWW2edkCqJ93Ae0N7W5Wl1KxtxPXJB+IDKzYse7HySnAxnSN+D34LgL+vn16+92mSLvTyRP3HupOhW4VHoScrkUgJj4Ke9zalwfgWa6DU4Za2b0K2T2xL2drqg==',key_name='tempest-keypair-1114404274',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:12:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='474fd43853634a5d8b3026b622723873',ramdisk_id='',reservation_id='r-x75wb70y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-306379411',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-306379411-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:12:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='df94630f47824853b3347bca91426ecd',uuid=57a4022e-e14b-4094-a66f-1af491c55f6d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "address": "fa:16:3e:6e:29:ed", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c5b48da-bc", "ovs_interfaceid": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.591 248514 DEBUG nova.network.os_vif_util [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Converting VIF {"id": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "address": "fa:16:3e:6e:29:ed", "network": {"id": "033b8122-ba6c-4217-b5b0-bd54c6f0aec0", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-2086758929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "474fd43853634a5d8b3026b622723873", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c5b48da-bc", "ovs_interfaceid": "7c5b48da-bc21-4027-a29c-5db83c1a308c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.592 248514 DEBUG nova.network.os_vif_util [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6e:29:ed,bridge_name='br-int',has_traffic_filtering=True,id=7c5b48da-bc21-4027-a29c-5db83c1a308c,network=Network(033b8122-ba6c-4217-b5b0-bd54c6f0aec0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c5b48da-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.593 248514 DEBUG os_vif [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:29:ed,bridge_name='br-int',has_traffic_filtering=True,id=7c5b48da-bc21-4027-a29c-5db83c1a308c,network=Network(033b8122-ba6c-4217-b5b0-bd54c6f0aec0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c5b48da-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.595 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.595 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c5b48da-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.597 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.598 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.601 248514 INFO os_vif [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:29:ed,bridge_name='br-int',has_traffic_filtering=True,id=7c5b48da-bc21-4027-a29c-5db83c1a308c,network=Network(033b8122-ba6c-4217-b5b0-bd54c6f0aec0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c5b48da-bc')#033[00m
Dec 13 03:13:16 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3b94ac3ee74ba72ccb526d18f39faec4f2ab96e075f1f78491c837613cf94072-userdata-shm.mount: Deactivated successfully.
Dec 13 03:13:16 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c7803cc902b5488d229e4626867ec671a293e1989d3bcb8d2b0b297ab9bfa864-merged.mount: Deactivated successfully.
Dec 13 03:13:16 np0005558241 podman[262584]: 2025-12-13 08:13:16.632028374 +0000 UTC m=+0.111394913 container cleanup 3b94ac3ee74ba72ccb526d18f39faec4f2ab96e075f1f78491c837613cf94072 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 03:13:16 np0005558241 systemd[1]: libpod-conmon-3b94ac3ee74ba72ccb526d18f39faec4f2ab96e075f1f78491c837613cf94072.scope: Deactivated successfully.
Dec 13 03:13:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 321 active+clean; 123 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 358 KiB/s rd, 1.5 MiB/s wr, 66 op/s
Dec 13 03:13:16 np0005558241 podman[262641]: 2025-12-13 08:13:16.715476598 +0000 UTC m=+0.056695377 container remove 3b94ac3ee74ba72ccb526d18f39faec4f2ab96e075f1f78491c837613cf94072 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:13:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:16.722 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9f46d6c7-99c3-455f-b897-6aaf171abc25]: (4, ('Sat Dec 13 08:13:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0 (3b94ac3ee74ba72ccb526d18f39faec4f2ab96e075f1f78491c837613cf94072)\n3b94ac3ee74ba72ccb526d18f39faec4f2ab96e075f1f78491c837613cf94072\nSat Dec 13 08:13:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0 (3b94ac3ee74ba72ccb526d18f39faec4f2ab96e075f1f78491c837613cf94072)\n3b94ac3ee74ba72ccb526d18f39faec4f2ab96e075f1f78491c837613cf94072\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:13:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:16.725 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1a3ee90c-f201-411b-b8f8-2411cf95e509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:13:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:16.727 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap033b8122-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.731 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:16 np0005558241 kernel: tap033b8122-b0: left promiscuous mode
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.746 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:16.750 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cb855063-27bb-4d55-bbee-15da522a295c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:13:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:16.766 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[376a73c2-e1fd-4974-9766-317a8421579e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:13:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:16.768 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f6543cea-db04-4ec2-92ac-e757c8c06d2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.787 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:16.786 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:13:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:16.787 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[723eb7b5-ee0b-4944-ba5c-e49516ec606b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614909, 'reachable_time': 31314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262657, 'error': None, 'target': 'ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:13:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:16.792 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-033b8122-ba6c-4217-b5b0-bd54c6f0aec0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:13:16 np0005558241 systemd[1]: run-netns-ovnmeta\x2d033b8122\x2dba6c\x2d4217\x2db5b0\x2dbd54c6f0aec0.mount: Deactivated successfully.
Dec 13 03:13:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:16.793 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa41333-d0cd-4f66-b0db-98f0476a04fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:13:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:16.793 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.989 248514 INFO nova.virt.libvirt.driver [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Deleting instance files /var/lib/nova/instances/57a4022e-e14b-4094-a66f-1af491c55f6d_del#033[00m
Dec 13 03:13:16 np0005558241 nova_compute[248510]: 2025-12-13 08:13:16.990 248514 INFO nova.virt.libvirt.driver [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Deletion of /var/lib/nova/instances/57a4022e-e14b-4094-a66f-1af491c55f6d_del complete#033[00m
Dec 13 03:13:17 np0005558241 nova_compute[248510]: 2025-12-13 08:13:17.077 248514 INFO nova.compute.manager [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:13:17 np0005558241 nova_compute[248510]: 2025-12-13 08:13:17.078 248514 DEBUG oslo.service.loopingcall [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:13:17 np0005558241 nova_compute[248510]: 2025-12-13 08:13:17.078 248514 DEBUG nova.compute.manager [-] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:13:17 np0005558241 nova_compute[248510]: 2025-12-13 08:13:17.078 248514 DEBUG nova.network.neutron [-] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:13:17 np0005558241 nova_compute[248510]: 2025-12-13 08:13:17.210 248514 DEBUG nova.compute.manager [req-36d3ecec-a5a5-4b0e-a4ac-3a6d3fade9b5 req-94df3b06-ec94-430f-a83d-1c4759448509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Received event network-vif-unplugged-7c5b48da-bc21-4027-a29c-5db83c1a308c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:13:17 np0005558241 nova_compute[248510]: 2025-12-13 08:13:17.211 248514 DEBUG oslo_concurrency.lockutils [req-36d3ecec-a5a5-4b0e-a4ac-3a6d3fade9b5 req-94df3b06-ec94-430f-a83d-1c4759448509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "57a4022e-e14b-4094-a66f-1af491c55f6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:13:17 np0005558241 nova_compute[248510]: 2025-12-13 08:13:17.211 248514 DEBUG oslo_concurrency.lockutils [req-36d3ecec-a5a5-4b0e-a4ac-3a6d3fade9b5 req-94df3b06-ec94-430f-a83d-1c4759448509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "57a4022e-e14b-4094-a66f-1af491c55f6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:13:17 np0005558241 nova_compute[248510]: 2025-12-13 08:13:17.211 248514 DEBUG oslo_concurrency.lockutils [req-36d3ecec-a5a5-4b0e-a4ac-3a6d3fade9b5 req-94df3b06-ec94-430f-a83d-1c4759448509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "57a4022e-e14b-4094-a66f-1af491c55f6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:13:17 np0005558241 nova_compute[248510]: 2025-12-13 08:13:17.212 248514 DEBUG nova.compute.manager [req-36d3ecec-a5a5-4b0e-a4ac-3a6d3fade9b5 req-94df3b06-ec94-430f-a83d-1c4759448509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] No waiting events found dispatching network-vif-unplugged-7c5b48da-bc21-4027-a29c-5db83c1a308c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:13:17 np0005558241 nova_compute[248510]: 2025-12-13 08:13:17.212 248514 DEBUG nova.compute.manager [req-36d3ecec-a5a5-4b0e-a4ac-3a6d3fade9b5 req-94df3b06-ec94-430f-a83d-1c4759448509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Received event network-vif-unplugged-7c5b48da-bc21-4027-a29c-5db83c1a308c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:13:18 np0005558241 nova_compute[248510]: 2025-12-13 08:13:18.121 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:13:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1402: 321 pgs: 321 active+clean; 64 MiB data, 247 MiB used, 60 GiB / 60 GiB avail; 373 KiB/s rd, 1.5 MiB/s wr, 89 op/s
Dec 13 03:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:18.796 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:13:19 np0005558241 nova_compute[248510]: 2025-12-13 08:13:19.371 248514 DEBUG nova.compute.manager [req-a51036a7-595c-49a4-9e53-ad095e94af63 req-246aaf83-5501-4dcf-a7c4-81817b405003 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Received event network-vif-plugged-7c5b48da-bc21-4027-a29c-5db83c1a308c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:13:19 np0005558241 nova_compute[248510]: 2025-12-13 08:13:19.372 248514 DEBUG oslo_concurrency.lockutils [req-a51036a7-595c-49a4-9e53-ad095e94af63 req-246aaf83-5501-4dcf-a7c4-81817b405003 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "57a4022e-e14b-4094-a66f-1af491c55f6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:13:19 np0005558241 nova_compute[248510]: 2025-12-13 08:13:19.372 248514 DEBUG oslo_concurrency.lockutils [req-a51036a7-595c-49a4-9e53-ad095e94af63 req-246aaf83-5501-4dcf-a7c4-81817b405003 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "57a4022e-e14b-4094-a66f-1af491c55f6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:13:19 np0005558241 nova_compute[248510]: 2025-12-13 08:13:19.372 248514 DEBUG oslo_concurrency.lockutils [req-a51036a7-595c-49a4-9e53-ad095e94af63 req-246aaf83-5501-4dcf-a7c4-81817b405003 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "57a4022e-e14b-4094-a66f-1af491c55f6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:13:19 np0005558241 nova_compute[248510]: 2025-12-13 08:13:19.372 248514 DEBUG nova.compute.manager [req-a51036a7-595c-49a4-9e53-ad095e94af63 req-246aaf83-5501-4dcf-a7c4-81817b405003 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] No waiting events found dispatching network-vif-plugged-7c5b48da-bc21-4027-a29c-5db83c1a308c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:13:19 np0005558241 nova_compute[248510]: 2025-12-13 08:13:19.373 248514 WARNING nova.compute.manager [req-a51036a7-595c-49a4-9e53-ad095e94af63 req-246aaf83-5501-4dcf-a7c4-81817b405003 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Received unexpected event network-vif-plugged-7c5b48da-bc21-4027-a29c-5db83c1a308c for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 58 KiB/s wr, 42 op/s
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.1606402339091982e-06 of space, bias 1.0, pg target 0.0006481920701727594 quantized to 32 (current 32)
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664242682034999 of space, bias 1.0, pg target 0.19992728046104996 quantized to 32 (current 32)
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.79896168337131e-07 of space, bias 4.0, pg target 0.001175875402004557 quantized to 16 (current 32)
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:13:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:13:21 np0005558241 nova_compute[248510]: 2025-12-13 08:13:21.165 248514 DEBUG nova.network.neutron [-] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:13:21 np0005558241 nova_compute[248510]: 2025-12-13 08:13:21.259 248514 INFO nova.compute.manager [-] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Took 4.18 seconds to deallocate network for instance.#033[00m
Dec 13 03:13:21 np0005558241 nova_compute[248510]: 2025-12-13 08:13:21.359 248514 DEBUG oslo_concurrency.lockutils [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:13:21 np0005558241 nova_compute[248510]: 2025-12-13 08:13:21.360 248514 DEBUG oslo_concurrency.lockutils [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:13:21 np0005558241 nova_compute[248510]: 2025-12-13 08:13:21.499 248514 DEBUG oslo_concurrency.processutils [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:13:21 np0005558241 nova_compute[248510]: 2025-12-13 08:13:21.573 248514 DEBUG nova.compute.manager [req-02924425-f895-481b-9e0f-4a189f1eadc2 req-0b43ada2-56bd-4c1a-a7df-1f0ac0e62794 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Received event network-vif-deleted-7c5b48da-bc21-4027-a29c-5db83c1a308c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:13:21 np0005558241 nova_compute[248510]: 2025-12-13 08:13:21.597 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:13:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2484412' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:13:22 np0005558241 nova_compute[248510]: 2025-12-13 08:13:22.196 248514 DEBUG oslo_concurrency.processutils [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.697s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:13:22 np0005558241 nova_compute[248510]: 2025-12-13 08:13:22.204 248514 DEBUG nova.compute.provider_tree [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:13:22 np0005558241 nova_compute[248510]: 2025-12-13 08:13:22.228 248514 DEBUG nova.scheduler.client.report [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:13:22 np0005558241 nova_compute[248510]: 2025-12-13 08:13:22.261 248514 DEBUG oslo_concurrency.lockutils [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:13:22 np0005558241 nova_compute[248510]: 2025-12-13 08:13:22.351 248514 INFO nova.scheduler.client.report [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Deleted allocations for instance 57a4022e-e14b-4094-a66f-1af491c55f6d#033[00m
Dec 13 03:13:22 np0005558241 nova_compute[248510]: 2025-12-13 08:13:22.448 248514 DEBUG oslo_concurrency.lockutils [None req-3c1d398c-ed8a-4f41-9b49-a76f67e453b5 df94630f47824853b3347bca91426ecd 474fd43853634a5d8b3026b622723873 - - default default] Lock "57a4022e-e14b-4094-a66f-1af491c55f6d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:13:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1404: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 14 KiB/s wr, 41 op/s
Dec 13 03:13:23 np0005558241 nova_compute[248510]: 2025-12-13 08:13:23.123 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:13:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1405: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 14 KiB/s wr, 41 op/s
Dec 13 03:13:25 np0005558241 nova_compute[248510]: 2025-12-13 08:13:25.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:13:26 np0005558241 nova_compute[248510]: 2025-12-13 08:13:26.600 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 13 03:13:27 np0005558241 nova_compute[248510]: 2025-12-13 08:13:27.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:13:28 np0005558241 nova_compute[248510]: 2025-12-13 08:13:28.127 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:13:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1407: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 13 03:13:28 np0005558241 nova_compute[248510]: 2025-12-13 08:13:28.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:13:28 np0005558241 nova_compute[248510]: 2025-12-13 08:13:28.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:13:28 np0005558241 nova_compute[248510]: 2025-12-13 08:13:28.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:13:28 np0005558241 nova_compute[248510]: 2025-12-13 08:13:28.841 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:13:28 np0005558241 podman[262681]: 2025-12-13 08:13:28.985536177 +0000 UTC m=+0.067261587 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251202)
Dec 13 03:13:28 np0005558241 podman[262682]: 2025-12-13 08:13:28.985428394 +0000 UTC m=+0.062491999 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 03:13:29 np0005558241 podman[262680]: 2025-12-13 08:13:29.008887002 +0000 UTC m=+0.092457067 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:13:29 np0005558241 nova_compute[248510]: 2025-12-13 08:13:29.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:13:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1408: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 17 op/s
Dec 13 03:13:30 np0005558241 nova_compute[248510]: 2025-12-13 08:13:30.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:13:30 np0005558241 nova_compute[248510]: 2025-12-13 08:13:30.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:13:30 np0005558241 nova_compute[248510]: 2025-12-13 08:13:30.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:13:30 np0005558241 nova_compute[248510]: 2025-12-13 08:13:30.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:13:30 np0005558241 nova_compute[248510]: 2025-12-13 08:13:30.811 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:13:30 np0005558241 nova_compute[248510]: 2025-12-13 08:13:30.811 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:13:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:13:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2199012911' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:13:31 np0005558241 nova_compute[248510]: 2025-12-13 08:13:31.395 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:13:31 np0005558241 nova_compute[248510]: 2025-12-13 08:13:31.568 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765613596.5671196, 57a4022e-e14b-4094-a66f-1af491c55f6d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:13:31 np0005558241 nova_compute[248510]: 2025-12-13 08:13:31.569 248514 INFO nova.compute.manager [-] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:13:31 np0005558241 nova_compute[248510]: 2025-12-13 08:13:31.577 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:13:31 np0005558241 nova_compute[248510]: 2025-12-13 08:13:31.578 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4745MB free_disk=59.98815163690597GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:13:31 np0005558241 nova_compute[248510]: 2025-12-13 08:13:31.579 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:13:31 np0005558241 nova_compute[248510]: 2025-12-13 08:13:31.579 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:13:31 np0005558241 nova_compute[248510]: 2025-12-13 08:13:31.603 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:31 np0005558241 nova_compute[248510]: 2025-12-13 08:13:31.615 248514 DEBUG nova.compute.manager [None req-4d3771d1-5cd4-4d55-9a33-c724e5589ed3 - - - - - -] [instance: 57a4022e-e14b-4094-a66f-1af491c55f6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:13:31 np0005558241 nova_compute[248510]: 2025-12-13 08:13:31.986 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:13:31 np0005558241 nova_compute[248510]: 2025-12-13 08:13:31.987 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:13:32 np0005558241 nova_compute[248510]: 2025-12-13 08:13:32.018 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:13:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:13:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1429850595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:13:32 np0005558241 nova_compute[248510]: 2025-12-13 08:13:32.579 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:13:32 np0005558241 nova_compute[248510]: 2025-12-13 08:13:32.586 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:13:32 np0005558241 nova_compute[248510]: 2025-12-13 08:13:32.628 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:13:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:13:32 np0005558241 nova_compute[248510]: 2025-12-13 08:13:32.893 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:13:32 np0005558241 nova_compute[248510]: 2025-12-13 08:13:32.893 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:13:33 np0005558241 nova_compute[248510]: 2025-12-13 08:13:33.130 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:13:33 np0005558241 nova_compute[248510]: 2025-12-13 08:13:33.889 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:13:34 np0005558241 nova_compute[248510]: 2025-12-13 08:13:34.009 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:13:34 np0005558241 nova_compute[248510]: 2025-12-13 08:13:34.010 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:13:34 np0005558241 nova_compute[248510]: 2025-12-13 08:13:34.276 248514 DEBUG oslo_concurrency.lockutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Acquiring lock "b90d265f-9e84-486a-99dd-abe3e2bf7338" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:13:34 np0005558241 nova_compute[248510]: 2025-12-13 08:13:34.276 248514 DEBUG oslo_concurrency.lockutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Lock "b90d265f-9e84-486a-99dd-abe3e2bf7338" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:13:34 np0005558241 nova_compute[248510]: 2025-12-13 08:13:34.313 248514 DEBUG nova.compute.manager [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:13:34 np0005558241 nova_compute[248510]: 2025-12-13 08:13:34.486 248514 DEBUG oslo_concurrency.lockutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:13:34 np0005558241 nova_compute[248510]: 2025-12-13 08:13:34.487 248514 DEBUG oslo_concurrency.lockutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:13:34 np0005558241 nova_compute[248510]: 2025-12-13 08:13:34.494 248514 DEBUG nova.virt.hardware [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:13:34 np0005558241 nova_compute[248510]: 2025-12-13 08:13:34.495 248514 INFO nova.compute.claims [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:13:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1410: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:13:34 np0005558241 nova_compute[248510]: 2025-12-13 08:13:34.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:13:34 np0005558241 nova_compute[248510]: 2025-12-13 08:13:34.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:13:35 np0005558241 nova_compute[248510]: 2025-12-13 08:13:35.179 248514 DEBUG oslo_concurrency.processutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:13:35 np0005558241 nova_compute[248510]: 2025-12-13 08:13:35.705 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:35 np0005558241 nova_compute[248510]: 2025-12-13 08:13:35.711 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:13:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1689778431' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:13:35 np0005558241 nova_compute[248510]: 2025-12-13 08:13:35.933 248514 DEBUG oslo_concurrency.processutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.754s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:13:35 np0005558241 nova_compute[248510]: 2025-12-13 08:13:35.940 248514 DEBUG nova.compute.provider_tree [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:13:35 np0005558241 nova_compute[248510]: 2025-12-13 08:13:35.970 248514 DEBUG nova.scheduler.client.report [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.003 248514 DEBUG oslo_concurrency.lockutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.004 248514 DEBUG nova.compute.manager [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.313 248514 DEBUG nova.compute.manager [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.314 248514 DEBUG nova.network.neutron [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.348 248514 INFO nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.549 248514 DEBUG nova.compute.manager [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.606 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1411: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.790 248514 DEBUG nova.compute.manager [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.792 248514 DEBUG nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.793 248514 INFO nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Creating image(s)#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.827 248514 DEBUG nova.storage.rbd_utils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] rbd image b90d265f-9e84-486a-99dd-abe3e2bf7338_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.859 248514 DEBUG nova.storage.rbd_utils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] rbd image b90d265f-9e84-486a-99dd-abe3e2bf7338_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.883 248514 DEBUG nova.storage.rbd_utils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] rbd image b90d265f-9e84-486a-99dd-abe3e2bf7338_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.886 248514 DEBUG oslo_concurrency.processutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.913 248514 DEBUG nova.network.neutron [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.914 248514 DEBUG nova.compute.manager [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.959 248514 DEBUG oslo_concurrency.processutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.960 248514 DEBUG oslo_concurrency.lockutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.961 248514 DEBUG oslo_concurrency.lockutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.961 248514 DEBUG oslo_concurrency.lockutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.985 248514 DEBUG nova.storage.rbd_utils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] rbd image b90d265f-9e84-486a-99dd-abe3e2bf7338_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:13:36 np0005558241 nova_compute[248510]: 2025-12-13 08:13:36.990 248514 DEBUG oslo_concurrency.processutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b90d265f-9e84-486a-99dd-abe3e2bf7338_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.239 248514 DEBUG oslo_concurrency.processutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b90d265f-9e84-486a-99dd-abe3e2bf7338_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.301 248514 DEBUG nova.storage.rbd_utils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] resizing rbd image b90d265f-9e84-486a-99dd-abe3e2bf7338_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.370 248514 DEBUG nova.objects.instance [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Lazy-loading 'migration_context' on Instance uuid b90d265f-9e84-486a-99dd-abe3e2bf7338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.391 248514 DEBUG nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.392 248514 DEBUG nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Ensure instance console log exists: /var/lib/nova/instances/b90d265f-9e84-486a-99dd-abe3e2bf7338/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.392 248514 DEBUG oslo_concurrency.lockutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.393 248514 DEBUG oslo_concurrency.lockutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.393 248514 DEBUG oslo_concurrency.lockutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.395 248514 DEBUG nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.400 248514 WARNING nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.407 248514 DEBUG nova.virt.libvirt.host [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.407 248514 DEBUG nova.virt.libvirt.host [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.412 248514 DEBUG nova.virt.libvirt.host [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.413 248514 DEBUG nova.virt.libvirt.host [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.413 248514 DEBUG nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.413 248514 DEBUG nova.virt.hardware [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.414 248514 DEBUG nova.virt.hardware [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.414 248514 DEBUG nova.virt.hardware [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.414 248514 DEBUG nova.virt.hardware [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.414 248514 DEBUG nova.virt.hardware [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.415 248514 DEBUG nova.virt.hardware [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.415 248514 DEBUG nova.virt.hardware [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.415 248514 DEBUG nova.virt.hardware [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.415 248514 DEBUG nova.virt.hardware [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.415 248514 DEBUG nova.virt.hardware [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.416 248514 DEBUG nova.virt.hardware [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.418 248514 DEBUG oslo_concurrency.processutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:13:37 np0005558241 nova_compute[248510]: 2025-12-13 08:13:37.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:13:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:13:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2338319131' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:13:38 np0005558241 nova_compute[248510]: 2025-12-13 08:13:38.001 248514 DEBUG oslo_concurrency.processutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:13:38 np0005558241 nova_compute[248510]: 2025-12-13 08:13:38.030 248514 DEBUG nova.storage.rbd_utils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] rbd image b90d265f-9e84-486a-99dd-abe3e2bf7338_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:13:38 np0005558241 nova_compute[248510]: 2025-12-13 08:13:38.036 248514 DEBUG oslo_concurrency.processutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:13:38 np0005558241 nova_compute[248510]: 2025-12-13 08:13:38.185 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:13:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:13:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3655589471' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:13:38 np0005558241 nova_compute[248510]: 2025-12-13 08:13:38.669 248514 DEBUG oslo_concurrency.processutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:13:38 np0005558241 nova_compute[248510]: 2025-12-13 08:13:38.671 248514 DEBUG nova.objects.instance [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Lazy-loading 'pci_devices' on Instance uuid b90d265f-9e84-486a-99dd-abe3e2bf7338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:13:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 82 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 26 op/s
Dec 13 03:13:38 np0005558241 nova_compute[248510]: 2025-12-13 08:13:38.696 248514 DEBUG nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:13:38 np0005558241 nova_compute[248510]:  <uuid>b90d265f-9e84-486a-99dd-abe3e2bf7338</uuid>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:  <name>instance-00000006</name>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerDiagnosticsTest-server-875676947</nova:name>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:13:37</nova:creationTime>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:13:38 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:        <nova:user uuid="46569e00540b4b1d9cd847e87133f263">tempest-ServerDiagnosticsTest-20871084-project-member</nova:user>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:        <nova:project uuid="b6d9f3caaa984a82b8cd76e3bc11dd4f">tempest-ServerDiagnosticsTest-20871084</nova:project>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <entry name="serial">b90d265f-9e84-486a-99dd-abe3e2bf7338</entry>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <entry name="uuid">b90d265f-9e84-486a-99dd-abe3e2bf7338</entry>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b90d265f-9e84-486a-99dd-abe3e2bf7338_disk">
Dec 13 03:13:38 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:13:38 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b90d265f-9e84-486a-99dd-abe3e2bf7338_disk.config">
Dec 13 03:13:38 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:13:38 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/b90d265f-9e84-486a-99dd-abe3e2bf7338/console.log" append="off"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:13:38 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:13:38 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:13:38 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:13:38 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:13:38 np0005558241 nova_compute[248510]: 2025-12-13 08:13:38.759 248514 DEBUG nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:13:38 np0005558241 nova_compute[248510]: 2025-12-13 08:13:38.759 248514 DEBUG nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:13:38 np0005558241 nova_compute[248510]: 2025-12-13 08:13:38.760 248514 INFO nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Using config drive#033[00m
Dec 13 03:13:38 np0005558241 nova_compute[248510]: 2025-12-13 08:13:38.778 248514 DEBUG nova.storage.rbd_utils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] rbd image b90d265f-9e84-486a-99dd-abe3e2bf7338_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:13:39 np0005558241 nova_compute[248510]: 2025-12-13 08:13:39.093 248514 INFO nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Creating config drive at /var/lib/nova/instances/b90d265f-9e84-486a-99dd-abe3e2bf7338/disk.config#033[00m
Dec 13 03:13:39 np0005558241 nova_compute[248510]: 2025-12-13 08:13:39.099 248514 DEBUG oslo_concurrency.processutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b90d265f-9e84-486a-99dd-abe3e2bf7338/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwye2zn6x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:13:39 np0005558241 nova_compute[248510]: 2025-12-13 08:13:39.228 248514 DEBUG oslo_concurrency.processutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b90d265f-9e84-486a-99dd-abe3e2bf7338/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwye2zn6x" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:13:39 np0005558241 nova_compute[248510]: 2025-12-13 08:13:39.261 248514 DEBUG nova.storage.rbd_utils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] rbd image b90d265f-9e84-486a-99dd-abe3e2bf7338_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:13:39 np0005558241 nova_compute[248510]: 2025-12-13 08:13:39.265 248514 DEBUG oslo_concurrency.processutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b90d265f-9e84-486a-99dd-abe3e2bf7338/disk.config b90d265f-9e84-486a-99dd-abe3e2bf7338_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:13:39 np0005558241 nova_compute[248510]: 2025-12-13 08:13:39.407 248514 DEBUG oslo_concurrency.processutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b90d265f-9e84-486a-99dd-abe3e2bf7338/disk.config b90d265f-9e84-486a-99dd-abe3e2bf7338_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:13:39 np0005558241 nova_compute[248510]: 2025-12-13 08:13:39.407 248514 INFO nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Deleting local config drive /var/lib/nova/instances/b90d265f-9e84-486a-99dd-abe3e2bf7338/disk.config because it was imported into RBD.#033[00m
Dec 13 03:13:39 np0005558241 systemd-machined[210538]: New machine qemu-6-instance-00000006.
Dec 13 03:13:39 np0005558241 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Dec 13 03:13:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:13:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.098 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613620.096642, b90d265f-9e84-486a-99dd-abe3e2bf7338 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.099 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:13:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:13:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:13:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:13:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.103 248514 DEBUG nova.compute.manager [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.103 248514 DEBUG nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.108 248514 INFO nova.virt.libvirt.driver [-] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Instance spawned successfully.#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.108 248514 DEBUG nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.138 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.145 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.151 248514 DEBUG nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.152 248514 DEBUG nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.152 248514 DEBUG nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.153 248514 DEBUG nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.153 248514 DEBUG nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.153 248514 DEBUG nova.virt.libvirt.driver [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.198 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.199 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613620.0986745, b90d265f-9e84-486a-99dd-abe3e2bf7338 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.199 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] VM Started (Lifecycle Event)#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.231 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.236 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.241 248514 INFO nova.compute.manager [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Took 3.45 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.242 248514 DEBUG nova.compute.manager [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.275 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.329 248514 INFO nova.compute.manager [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Took 5.86 seconds to build instance.#033[00m
Dec 13 03:13:40 np0005558241 nova_compute[248510]: 2025-12-13 08:13:40.352 248514 DEBUG oslo_concurrency.lockutils [None req-4af821f7-d8be-4140-99e2-8bff2e48d539 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Lock "b90d265f-9e84-486a-99dd-abe3e2bf7338" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:13:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1413: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:13:41 np0005558241 nova_compute[248510]: 2025-12-13 08:13:41.501 248514 DEBUG nova.compute.manager [None req-f001c626-40b4-4b4d-8391-05e4c9c1e91e 383cd1888a5d4ce68ab2bc783dc6d3df 8ab272aacf374ddcaac7ab5228502fe0 - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:13:41 np0005558241 nova_compute[248510]: 2025-12-13 08:13:41.505 248514 INFO nova.compute.manager [None req-f001c626-40b4-4b4d-8391-05e4c9c1e91e 383cd1888a5d4ce68ab2bc783dc6d3df 8ab272aacf374ddcaac7ab5228502fe0 - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Retrieving diagnostics#033[00m
Dec 13 03:13:41 np0005558241 nova_compute[248510]: 2025-12-13 08:13:41.608 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:41 np0005558241 nova_compute[248510]: 2025-12-13 08:13:41.836 248514 DEBUG oslo_concurrency.lockutils [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Acquiring lock "b90d265f-9e84-486a-99dd-abe3e2bf7338" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:13:41 np0005558241 nova_compute[248510]: 2025-12-13 08:13:41.837 248514 DEBUG oslo_concurrency.lockutils [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Lock "b90d265f-9e84-486a-99dd-abe3e2bf7338" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:13:41 np0005558241 nova_compute[248510]: 2025-12-13 08:13:41.837 248514 DEBUG oslo_concurrency.lockutils [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Acquiring lock "b90d265f-9e84-486a-99dd-abe3e2bf7338-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:13:41 np0005558241 nova_compute[248510]: 2025-12-13 08:13:41.837 248514 DEBUG oslo_concurrency.lockutils [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Lock "b90d265f-9e84-486a-99dd-abe3e2bf7338-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:13:41 np0005558241 nova_compute[248510]: 2025-12-13 08:13:41.837 248514 DEBUG oslo_concurrency.lockutils [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Lock "b90d265f-9e84-486a-99dd-abe3e2bf7338-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:13:41 np0005558241 nova_compute[248510]: 2025-12-13 08:13:41.839 248514 INFO nova.compute.manager [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Terminating instance#033[00m
Dec 13 03:13:41 np0005558241 nova_compute[248510]: 2025-12-13 08:13:41.840 248514 DEBUG oslo_concurrency.lockutils [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Acquiring lock "refresh_cache-b90d265f-9e84-486a-99dd-abe3e2bf7338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:13:41 np0005558241 nova_compute[248510]: 2025-12-13 08:13:41.840 248514 DEBUG oslo_concurrency.lockutils [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Acquired lock "refresh_cache-b90d265f-9e84-486a-99dd-abe3e2bf7338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:13:41 np0005558241 nova_compute[248510]: 2025-12-13 08:13:41.840 248514 DEBUG nova.network.neutron [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:13:42 np0005558241 nova_compute[248510]: 2025-12-13 08:13:42.360 248514 DEBUG nova.network.neutron [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:13:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1414: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:13:43 np0005558241 nova_compute[248510]: 2025-12-13 08:13:43.188 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:43 np0005558241 nova_compute[248510]: 2025-12-13 08:13:43.254 248514 DEBUG nova.network.neutron [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:13:43 np0005558241 nova_compute[248510]: 2025-12-13 08:13:43.280 248514 DEBUG oslo_concurrency.lockutils [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Releasing lock "refresh_cache-b90d265f-9e84-486a-99dd-abe3e2bf7338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:13:43 np0005558241 nova_compute[248510]: 2025-12-13 08:13:43.281 248514 DEBUG nova.compute.manager [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:13:43 np0005558241 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec 13 03:13:43 np0005558241 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 3.850s CPU time.
Dec 13 03:13:43 np0005558241 systemd-machined[210538]: Machine qemu-6-instance-00000006 terminated.
Dec 13 03:13:43 np0005558241 nova_compute[248510]: 2025-12-13 08:13:43.501 248514 INFO nova.virt.libvirt.driver [-] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Instance destroyed successfully.#033[00m
Dec 13 03:13:43 np0005558241 nova_compute[248510]: 2025-12-13 08:13:43.502 248514 DEBUG nova.objects.instance [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Lazy-loading 'resources' on Instance uuid b90d265f-9e84-486a-99dd-abe3e2bf7338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:13:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:13:44 np0005558241 nova_compute[248510]: 2025-12-13 08:13:44.401 248514 INFO nova.virt.libvirt.driver [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Deleting instance files /var/lib/nova/instances/b90d265f-9e84-486a-99dd-abe3e2bf7338_del#033[00m
Dec 13 03:13:44 np0005558241 nova_compute[248510]: 2025-12-13 08:13:44.402 248514 INFO nova.virt.libvirt.driver [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Deletion of /var/lib/nova/instances/b90d265f-9e84-486a-99dd-abe3e2bf7338_del complete#033[00m
Dec 13 03:13:44 np0005558241 nova_compute[248510]: 2025-12-13 08:13:44.470 248514 INFO nova.compute.manager [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Took 1.19 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:13:44 np0005558241 nova_compute[248510]: 2025-12-13 08:13:44.471 248514 DEBUG oslo.service.loopingcall [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:13:44 np0005558241 nova_compute[248510]: 2025-12-13 08:13:44.472 248514 DEBUG nova.compute.manager [-] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:13:44 np0005558241 nova_compute[248510]: 2025-12-13 08:13:44.472 248514 DEBUG nova.network.neutron [-] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:13:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 13 03:13:44 np0005558241 nova_compute[248510]: 2025-12-13 08:13:44.810 248514 DEBUG nova.network.neutron [-] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:13:44 np0005558241 nova_compute[248510]: 2025-12-13 08:13:44.831 248514 DEBUG nova.network.neutron [-] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:13:44 np0005558241 nova_compute[248510]: 2025-12-13 08:13:44.852 248514 INFO nova.compute.manager [-] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Took 0.38 seconds to deallocate network for instance.#033[00m
Dec 13 03:13:44 np0005558241 nova_compute[248510]: 2025-12-13 08:13:44.940 248514 DEBUG oslo_concurrency.lockutils [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:13:44 np0005558241 nova_compute[248510]: 2025-12-13 08:13:44.941 248514 DEBUG oslo_concurrency.lockutils [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:13:45 np0005558241 nova_compute[248510]: 2025-12-13 08:13:45.018 248514 DEBUG oslo_concurrency.processutils [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:13:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:13:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3764999277' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:13:45 np0005558241 nova_compute[248510]: 2025-12-13 08:13:45.606 248514 DEBUG oslo_concurrency.processutils [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:13:45 np0005558241 nova_compute[248510]: 2025-12-13 08:13:45.615 248514 DEBUG nova.compute.provider_tree [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:13:45 np0005558241 nova_compute[248510]: 2025-12-13 08:13:45.643 248514 DEBUG nova.scheduler.client.report [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:13:45 np0005558241 nova_compute[248510]: 2025-12-13 08:13:45.672 248514 DEBUG oslo_concurrency.lockutils [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:13:45 np0005558241 nova_compute[248510]: 2025-12-13 08:13:45.704 248514 INFO nova.scheduler.client.report [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Deleted allocations for instance b90d265f-9e84-486a-99dd-abe3e2bf7338#033[00m
Dec 13 03:13:45 np0005558241 nova_compute[248510]: 2025-12-13 08:13:45.820 248514 DEBUG oslo_concurrency.lockutils [None req-dc87cbb2-05a5-48b5-b126-611742a7be75 46569e00540b4b1d9cd847e87133f263 b6d9f3caaa984a82b8cd76e3bc11dd4f - - default default] Lock "b90d265f-9e84-486a-99dd-abe3e2bf7338" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:13:46 np0005558241 nova_compute[248510]: 2025-12-13 08:13:46.612 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 13 03:13:48 np0005558241 nova_compute[248510]: 2025-12-13 08:13:48.242 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:13:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1417: 321 pgs: 321 active+clean; 47 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Dec 13 03:13:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1418: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 188 KiB/s wr, 100 op/s
Dec 13 03:13:51 np0005558241 nova_compute[248510]: 2025-12-13 08:13:51.615 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Dec 13 03:13:53 np0005558241 nova_compute[248510]: 2025-12-13 08:13:53.278 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:13:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1420: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Dec 13 03:13:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:55.394 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:13:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:55.394 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:13:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:13:55.394 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:13:56 np0005558241 nova_compute[248510]: 2025-12-13 08:13:56.617 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 13 03:13:58 np0005558241 nova_compute[248510]: 2025-12-13 08:13:58.365 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:13:58 np0005558241 nova_compute[248510]: 2025-12-13 08:13:58.499 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765613623.49855, b90d265f-9e84-486a-99dd-abe3e2bf7338 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:13:58 np0005558241 nova_compute[248510]: 2025-12-13 08:13:58.500 248514 INFO nova.compute.manager [-] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:13:58 np0005558241 nova_compute[248510]: 2025-12-13 08:13:58.564 248514 DEBUG nova.compute.manager [None req-e59f5167-8105-484a-a9cc-d8b669361510 - - - - - -] [instance: b90d265f-9e84-486a-99dd-abe3e2bf7338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:13:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:13:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1422: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 13 03:14:00 np0005558241 podman[263201]: 2025-12-13 08:14:00.008772412 +0000 UTC m=+0.094371344 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 03:14:00 np0005558241 podman[263202]: 2025-12-13 08:14:00.03347053 +0000 UTC m=+0.106067672 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 03:14:00 np0005558241 podman[263200]: 2025-12-13 08:14:00.0643681 +0000 UTC m=+0.147691776 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:14:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1423: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 03:14:01 np0005558241 nova_compute[248510]: 2025-12-13 08:14:01.619 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:14:01 np0005558241 nova_compute[248510]: 2025-12-13 08:14:01.756 248514 DEBUG oslo_concurrency.lockutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Acquiring lock "523b15f7-aac8-4023-957c-9ffeab4d92ff" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:01 np0005558241 nova_compute[248510]: 2025-12-13 08:14:01.756 248514 DEBUG oslo_concurrency.lockutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "523b15f7-aac8-4023-957c-9ffeab4d92ff" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:02 np0005558241 nova_compute[248510]: 2025-12-13 08:14:02.149 248514 DEBUG nova.compute.manager [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:14:02 np0005558241 nova_compute[248510]: 2025-12-13 08:14:02.417 248514 DEBUG oslo_concurrency.lockutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:02 np0005558241 nova_compute[248510]: 2025-12-13 08:14:02.418 248514 DEBUG oslo_concurrency.lockutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:02 np0005558241 nova_compute[248510]: 2025-12-13 08:14:02.426 248514 DEBUG nova.virt.hardware [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:14:02 np0005558241 nova_compute[248510]: 2025-12-13 08:14:02.427 248514 INFO nova.compute.claims [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:14:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 03:14:02 np0005558241 nova_compute[248510]: 2025-12-13 08:14:02.857 248514 DEBUG oslo_concurrency.processutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:03 np0005558241 nova_compute[248510]: 2025-12-13 08:14:03.366 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:14:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:14:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2154598895' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:14:03 np0005558241 nova_compute[248510]: 2025-12-13 08:14:03.399 248514 DEBUG oslo_concurrency.processutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:03 np0005558241 nova_compute[248510]: 2025-12-13 08:14:03.407 248514 DEBUG nova.compute.provider_tree [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:14:03 np0005558241 nova_compute[248510]: 2025-12-13 08:14:03.431 248514 DEBUG nova.scheduler.client.report [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:14:03 np0005558241 nova_compute[248510]: 2025-12-13 08:14:03.531 248514 DEBUG oslo_concurrency.lockutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:03 np0005558241 nova_compute[248510]: 2025-12-13 08:14:03.532 248514 DEBUG nova.compute.manager [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:14:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:14:03 np0005558241 nova_compute[248510]: 2025-12-13 08:14:03.626 248514 DEBUG nova.compute.manager [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:14:03 np0005558241 nova_compute[248510]: 2025-12-13 08:14:03.626 248514 DEBUG nova.network.neutron [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:14:03 np0005558241 nova_compute[248510]: 2025-12-13 08:14:03.936 248514 INFO nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:14:04 np0005558241 nova_compute[248510]: 2025-12-13 08:14:04.015 248514 DEBUG nova.compute.manager [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:14:04 np0005558241 nova_compute[248510]: 2025-12-13 08:14:04.302 248514 DEBUG nova.compute.manager [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:14:04 np0005558241 nova_compute[248510]: 2025-12-13 08:14:04.303 248514 DEBUG nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:14:04 np0005558241 nova_compute[248510]: 2025-12-13 08:14:04.304 248514 INFO nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Creating image(s)#033[00m
Dec 13 03:14:04 np0005558241 nova_compute[248510]: 2025-12-13 08:14:04.327 248514 DEBUG nova.storage.rbd_utils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] rbd image 523b15f7-aac8-4023-957c-9ffeab4d92ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:04 np0005558241 nova_compute[248510]: 2025-12-13 08:14:04.351 248514 DEBUG nova.storage.rbd_utils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] rbd image 523b15f7-aac8-4023-957c-9ffeab4d92ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:04 np0005558241 nova_compute[248510]: 2025-12-13 08:14:04.379 248514 DEBUG nova.storage.rbd_utils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] rbd image 523b15f7-aac8-4023-957c-9ffeab4d92ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:04 np0005558241 nova_compute[248510]: 2025-12-13 08:14:04.384 248514 DEBUG oslo_concurrency.processutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:04 np0005558241 nova_compute[248510]: 2025-12-13 08:14:04.446 248514 DEBUG oslo_concurrency.processutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:04 np0005558241 nova_compute[248510]: 2025-12-13 08:14:04.447 248514 DEBUG oslo_concurrency.lockutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:04 np0005558241 nova_compute[248510]: 2025-12-13 08:14:04.448 248514 DEBUG oslo_concurrency.lockutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:04 np0005558241 nova_compute[248510]: 2025-12-13 08:14:04.448 248514 DEBUG oslo_concurrency.lockutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:04 np0005558241 nova_compute[248510]: 2025-12-13 08:14:04.471 248514 DEBUG nova.storage.rbd_utils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] rbd image 523b15f7-aac8-4023-957c-9ffeab4d92ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:04 np0005558241 nova_compute[248510]: 2025-12-13 08:14:04.474 248514 DEBUG oslo_concurrency.processutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 523b15f7-aac8-4023-957c-9ffeab4d92ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1425: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.208 248514 DEBUG oslo_concurrency.processutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 523b15f7-aac8-4023-957c-9ffeab4d92ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.734s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.269 248514 DEBUG nova.network.neutron [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.270 248514 DEBUG nova.compute.manager [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.275 248514 DEBUG nova.storage.rbd_utils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] resizing rbd image 523b15f7-aac8-4023-957c-9ffeab4d92ff_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.471 248514 DEBUG nova.objects.instance [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lazy-loading 'migration_context' on Instance uuid 523b15f7-aac8-4023-957c-9ffeab4d92ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.589 248514 DEBUG nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.590 248514 DEBUG nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Ensure instance console log exists: /var/lib/nova/instances/523b15f7-aac8-4023-957c-9ffeab4d92ff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.591 248514 DEBUG oslo_concurrency.lockutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.591 248514 DEBUG oslo_concurrency.lockutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.592 248514 DEBUG oslo_concurrency.lockutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.593 248514 DEBUG nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.599 248514 WARNING nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.605 248514 DEBUG nova.virt.libvirt.host [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.606 248514 DEBUG nova.virt.libvirt.host [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.609 248514 DEBUG nova.virt.libvirt.host [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.609 248514 DEBUG nova.virt.libvirt.host [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.609 248514 DEBUG nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.610 248514 DEBUG nova.virt.hardware [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.610 248514 DEBUG nova.virt.hardware [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.610 248514 DEBUG nova.virt.hardware [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.610 248514 DEBUG nova.virt.hardware [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.611 248514 DEBUG nova.virt.hardware [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.611 248514 DEBUG nova.virt.hardware [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.611 248514 DEBUG nova.virt.hardware [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.611 248514 DEBUG nova.virt.hardware [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.611 248514 DEBUG nova.virt.hardware [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.612 248514 DEBUG nova.virt.hardware [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.612 248514 DEBUG nova.virt.hardware [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:14:05 np0005558241 nova_compute[248510]: 2025-12-13 08:14:05.614 248514 DEBUG oslo_concurrency.processutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:14:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1057586264' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:14:06 np0005558241 nova_compute[248510]: 2025-12-13 08:14:06.160 248514 DEBUG oslo_concurrency.processutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:06 np0005558241 nova_compute[248510]: 2025-12-13 08:14:06.181 248514 DEBUG nova.storage.rbd_utils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] rbd image 523b15f7-aac8-4023-957c-9ffeab4d92ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:06 np0005558241 nova_compute[248510]: 2025-12-13 08:14:06.187 248514 DEBUG oslo_concurrency.processutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:06 np0005558241 nova_compute[248510]: 2025-12-13 08:14:06.621 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:14:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1426: 321 pgs: 321 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 03:14:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:14:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2476854684' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:14:06 np0005558241 nova_compute[248510]: 2025-12-13 08:14:06.729 248514 DEBUG oslo_concurrency.processutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:06 np0005558241 nova_compute[248510]: 2025-12-13 08:14:06.731 248514 DEBUG nova.objects.instance [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lazy-loading 'pci_devices' on Instance uuid 523b15f7-aac8-4023-957c-9ffeab4d92ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:14:06 np0005558241 nova_compute[248510]: 2025-12-13 08:14:06.753 248514 DEBUG nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:14:06 np0005558241 nova_compute[248510]:  <uuid>523b15f7-aac8-4023-957c-9ffeab4d92ff</uuid>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:  <name>instance-00000007</name>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-661397542</nova:name>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:14:05</nova:creationTime>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:14:06 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:        <nova:user uuid="546de43f2a284b5589b25c3ae6f2ab3a">tempest-ServersAdminNegativeTestJSON-315495826-project-member</nova:user>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:        <nova:project uuid="66e4b42b92b04724aa8401fca41d9c82">tempest-ServersAdminNegativeTestJSON-315495826</nova:project>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <entry name="serial">523b15f7-aac8-4023-957c-9ffeab4d92ff</entry>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <entry name="uuid">523b15f7-aac8-4023-957c-9ffeab4d92ff</entry>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/523b15f7-aac8-4023-957c-9ffeab4d92ff_disk">
Dec 13 03:14:06 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:14:06 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/523b15f7-aac8-4023-957c-9ffeab4d92ff_disk.config">
Dec 13 03:14:06 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:14:06 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/523b15f7-aac8-4023-957c-9ffeab4d92ff/console.log" append="off"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:14:06 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:14:06 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:14:06 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:14:06 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 03:14:06 np0005558241 nova_compute[248510]: 2025-12-13 08:14:06.831 248514 DEBUG nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 03:14:06 np0005558241 nova_compute[248510]: 2025-12-13 08:14:06.832 248514 DEBUG nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 03:14:06 np0005558241 nova_compute[248510]: 2025-12-13 08:14:06.832 248514 INFO nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Using config drive
Dec 13 03:14:06 np0005558241 nova_compute[248510]: 2025-12-13 08:14:06.850 248514 DEBUG nova.storage.rbd_utils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] rbd image 523b15f7-aac8-4023-957c-9ffeab4d92ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:14:07 np0005558241 nova_compute[248510]: 2025-12-13 08:14:07.334 248514 INFO nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Creating config drive at /var/lib/nova/instances/523b15f7-aac8-4023-957c-9ffeab4d92ff/disk.config
Dec 13 03:14:07 np0005558241 nova_compute[248510]: 2025-12-13 08:14:07.340 248514 DEBUG oslo_concurrency.processutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/523b15f7-aac8-4023-957c-9ffeab4d92ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt0_5fpvl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:14:07 np0005558241 nova_compute[248510]: 2025-12-13 08:14:07.465 248514 DEBUG oslo_concurrency.processutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/523b15f7-aac8-4023-957c-9ffeab4d92ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt0_5fpvl" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:14:07 np0005558241 nova_compute[248510]: 2025-12-13 08:14:07.501 248514 DEBUG nova.storage.rbd_utils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] rbd image 523b15f7-aac8-4023-957c-9ffeab4d92ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:14:07 np0005558241 nova_compute[248510]: 2025-12-13 08:14:07.505 248514 DEBUG oslo_concurrency.processutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/523b15f7-aac8-4023-957c-9ffeab4d92ff/disk.config 523b15f7-aac8-4023-957c-9ffeab4d92ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:14:07 np0005558241 nova_compute[248510]: 2025-12-13 08:14:07.640 248514 DEBUG oslo_concurrency.processutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/523b15f7-aac8-4023-957c-9ffeab4d92ff/disk.config 523b15f7-aac8-4023-957c-9ffeab4d92ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:14:07 np0005558241 nova_compute[248510]: 2025-12-13 08:14:07.641 248514 INFO nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Deleting local config drive /var/lib/nova/instances/523b15f7-aac8-4023-957c-9ffeab4d92ff/disk.config because it was imported into RBD.
Dec 13 03:14:07 np0005558241 systemd-machined[210538]: New machine qemu-7-instance-00000007.
Dec 13 03:14:07 np0005558241 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.039 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613648.0394292, 523b15f7-aac8-4023-957c-9ffeab4d92ff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.040 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] VM Resumed (Lifecycle Event)
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.043 248514 DEBUG nova.compute.manager [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.043 248514 DEBUG nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.047 248514 INFO nova.virt.libvirt.driver [-] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Instance spawned successfully.
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.047 248514 DEBUG nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.118 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.123 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.126 248514 DEBUG nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.126 248514 DEBUG nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.127 248514 DEBUG nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.127 248514 DEBUG nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.128 248514 DEBUG nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.128 248514 DEBUG nova.virt.libvirt.driver [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.157 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.158 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613648.0417857, 523b15f7-aac8-4023-957c-9ffeab4d92ff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.159 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] VM Started (Lifecycle Event)
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.212 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.216 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.227 248514 INFO nova.compute.manager [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Took 3.92 seconds to spawn the instance on the hypervisor.
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.227 248514 DEBUG nova.compute.manager [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.271 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.324 248514 INFO nova.compute.manager [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Took 5.94 seconds to build instance.
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.369 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:14:08 np0005558241 nova_compute[248510]: 2025-12-13 08:14:08.602 248514 DEBUG oslo_concurrency.lockutils [None req-cb5ca552-04c1-413c-be61-65b23eeb0a07 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "523b15f7-aac8-4023-957c-9ffeab4d92ff" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:14:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:14:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 321 active+clean; 71 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 1.4 MiB/s wr, 92 op/s
Dec 13 03:14:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:14:09
Dec 13 03:14:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:14:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:14:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'images', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta']
Dec 13 03:14:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:14:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:14:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:14:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:14:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:14:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:14:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:14:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:14:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:14:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:14:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:14:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:14:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:14:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:14:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:14:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:14:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:14:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1428: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 373 KiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 13 03:14:11 np0005558241 nova_compute[248510]: 2025-12-13 08:14:11.624 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:14:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1429: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 372 KiB/s rd, 1.8 MiB/s wr, 99 op/s
Dec 13 03:14:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:14:13Z|00044|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec 13 03:14:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:14:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:14:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:14:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:14:13 np0005558241 nova_compute[248510]: 2025-12-13 08:14:13.370 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:14:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:14:14 np0005558241 podman[263840]: 2025-12-13 08:14:14.513206281 +0000 UTC m=+0.048609707 container create ef548ac0de9a019b866df493a0ad4436105c522a235fd1979eca4c46fc0d6339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:14:14 np0005558241 systemd[1]: Started libpod-conmon-ef548ac0de9a019b866df493a0ad4436105c522a235fd1979eca4c46fc0d6339.scope.
Dec 13 03:14:14 np0005558241 podman[263840]: 2025-12-13 08:14:14.490303068 +0000 UTC m=+0.025706504 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:14:14 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:14:14 np0005558241 podman[263840]: 2025-12-13 08:14:14.621047746 +0000 UTC m=+0.156451162 container init ef548ac0de9a019b866df493a0ad4436105c522a235fd1979eca4c46fc0d6339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 03:14:14 np0005558241 podman[263840]: 2025-12-13 08:14:14.631465482 +0000 UTC m=+0.166868898 container start ef548ac0de9a019b866df493a0ad4436105c522a235fd1979eca4c46fc0d6339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lumiere, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 03:14:14 np0005558241 podman[263840]: 2025-12-13 08:14:14.636304662 +0000 UTC m=+0.171708098 container attach ef548ac0de9a019b866df493a0ad4436105c522a235fd1979eca4c46fc0d6339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lumiere, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 03:14:14 np0005558241 goofy_lumiere[263856]: 167 167
Dec 13 03:14:14 np0005558241 systemd[1]: libpod-ef548ac0de9a019b866df493a0ad4436105c522a235fd1979eca4c46fc0d6339.scope: Deactivated successfully.
Dec 13 03:14:14 np0005558241 podman[263840]: 2025-12-13 08:14:14.642693019 +0000 UTC m=+0.178096435 container died ef548ac0de9a019b866df493a0ad4436105c522a235fd1979eca4c46fc0d6339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:14:14 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b269cf8257256bab68c0344245fd926b323c6b1b9964d466e064b39cd23f9862-merged.mount: Deactivated successfully.
Dec 13 03:14:14 np0005558241 podman[263840]: 2025-12-13 08:14:14.697656632 +0000 UTC m=+0.233060048 container remove ef548ac0de9a019b866df493a0ad4436105c522a235fd1979eca4c46fc0d6339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_lumiere, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:14:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1430: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Dec 13 03:14:14 np0005558241 systemd[1]: libpod-conmon-ef548ac0de9a019b866df493a0ad4436105c522a235fd1979eca4c46fc0d6339.scope: Deactivated successfully.
Dec 13 03:14:14 np0005558241 podman[263881]: 2025-12-13 08:14:14.86778223 +0000 UTC m=+0.048111406 container create ae8f61c20c84c7dff128f7c503bc9e168c9b332f044d1806090c1fc0259ec031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 03:14:14 np0005558241 systemd[1]: Started libpod-conmon-ae8f61c20c84c7dff128f7c503bc9e168c9b332f044d1806090c1fc0259ec031.scope.
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3685836713' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:14:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3685836713' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:14:14 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:14:14 np0005558241 podman[263881]: 2025-12-13 08:14:14.846140807 +0000 UTC m=+0.026470063 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:14:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc9c4227ffb037e3f59c4f33061e6333e50ba098e2f1dd7110a907b07b67ab6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:14:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc9c4227ffb037e3f59c4f33061e6333e50ba098e2f1dd7110a907b07b67ab6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:14:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc9c4227ffb037e3f59c4f33061e6333e50ba098e2f1dd7110a907b07b67ab6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:14:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc9c4227ffb037e3f59c4f33061e6333e50ba098e2f1dd7110a907b07b67ab6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:14:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc9c4227ffb037e3f59c4f33061e6333e50ba098e2f1dd7110a907b07b67ab6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:14:14 np0005558241 podman[263881]: 2025-12-13 08:14:14.956049573 +0000 UTC m=+0.136378769 container init ae8f61c20c84c7dff128f7c503bc9e168c9b332f044d1806090c1fc0259ec031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_volhard, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:14:14 np0005558241 podman[263881]: 2025-12-13 08:14:14.963916696 +0000 UTC m=+0.144245872 container start ae8f61c20c84c7dff128f7c503bc9e168c9b332f044d1806090c1fc0259ec031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_volhard, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 03:14:14 np0005558241 podman[263881]: 2025-12-13 08:14:14.967643788 +0000 UTC m=+0.147972984 container attach ae8f61c20c84c7dff128f7c503bc9e168c9b332f044d1806090c1fc0259ec031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_volhard, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 03:14:15 np0005558241 reverent_volhard[263898]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:14:15 np0005558241 reverent_volhard[263898]: --> All data devices are unavailable
Dec 13 03:14:15 np0005558241 systemd[1]: libpod-ae8f61c20c84c7dff128f7c503bc9e168c9b332f044d1806090c1fc0259ec031.scope: Deactivated successfully.
Dec 13 03:14:15 np0005558241 podman[263881]: 2025-12-13 08:14:15.466449177 +0000 UTC m=+0.646778363 container died ae8f61c20c84c7dff128f7c503bc9e168c9b332f044d1806090c1fc0259ec031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 03:14:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3fc9c4227ffb037e3f59c4f33061e6333e50ba098e2f1dd7110a907b07b67ab6-merged.mount: Deactivated successfully.
Dec 13 03:14:15 np0005558241 podman[263881]: 2025-12-13 08:14:15.523764938 +0000 UTC m=+0.704094114 container remove ae8f61c20c84c7dff128f7c503bc9e168c9b332f044d1806090c1fc0259ec031 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_volhard, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 03:14:15 np0005558241 systemd[1]: libpod-conmon-ae8f61c20c84c7dff128f7c503bc9e168c9b332f044d1806090c1fc0259ec031.scope: Deactivated successfully.
Dec 13 03:14:15 np0005558241 podman[263991]: 2025-12-13 08:14:15.994759082 +0000 UTC m=+0.047643594 container create 30b409c7eb329cd78c57e65a7efd903c15e11e50466049763723ff898c686599 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 03:14:16 np0005558241 systemd[1]: Started libpod-conmon-30b409c7eb329cd78c57e65a7efd903c15e11e50466049763723ff898c686599.scope.
Dec 13 03:14:16 np0005558241 podman[263991]: 2025-12-13 08:14:15.971876399 +0000 UTC m=+0.024760941 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:14:16 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:14:16 np0005558241 podman[263991]: 2025-12-13 08:14:16.082745438 +0000 UTC m=+0.135629950 container init 30b409c7eb329cd78c57e65a7efd903c15e11e50466049763723ff898c686599 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:14:16 np0005558241 podman[263991]: 2025-12-13 08:14:16.09053613 +0000 UTC m=+0.143420642 container start 30b409c7eb329cd78c57e65a7efd903c15e11e50466049763723ff898c686599 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_payne, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:14:16 np0005558241 podman[263991]: 2025-12-13 08:14:16.093972884 +0000 UTC m=+0.146857416 container attach 30b409c7eb329cd78c57e65a7efd903c15e11e50466049763723ff898c686599 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_payne, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 03:14:16 np0005558241 modest_payne[264007]: 167 167
Dec 13 03:14:16 np0005558241 systemd[1]: libpod-30b409c7eb329cd78c57e65a7efd903c15e11e50466049763723ff898c686599.scope: Deactivated successfully.
Dec 13 03:14:16 np0005558241 podman[263991]: 2025-12-13 08:14:16.096271421 +0000 UTC m=+0.149155943 container died 30b409c7eb329cd78c57e65a7efd903c15e11e50466049763723ff898c686599 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_payne, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:14:16 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7d3f327a87af15ff597134993d3ed10a151b11b44dc871c4f2d4a9204851cfe7-merged.mount: Deactivated successfully.
Dec 13 03:14:16 np0005558241 podman[263991]: 2025-12-13 08:14:16.129521329 +0000 UTC m=+0.182405841 container remove 30b409c7eb329cd78c57e65a7efd903c15e11e50466049763723ff898c686599 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_payne, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:14:16 np0005558241 systemd[1]: libpod-conmon-30b409c7eb329cd78c57e65a7efd903c15e11e50466049763723ff898c686599.scope: Deactivated successfully.
Dec 13 03:14:16 np0005558241 podman[264031]: 2025-12-13 08:14:16.309581072 +0000 UTC m=+0.051004137 container create 7166591d229ee04547d47ee125b3cc3cc55e8791ba728d6c8e12567575d04bd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 03:14:16 np0005558241 systemd[1]: Started libpod-conmon-7166591d229ee04547d47ee125b3cc3cc55e8791ba728d6c8e12567575d04bd9.scope.
Dec 13 03:14:16 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:14:16 np0005558241 podman[264031]: 2025-12-13 08:14:16.284169086 +0000 UTC m=+0.025592171 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:14:16 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e73a99fd2499dee7f1261f9d60c4057a522ce28fa843d2d444a592166b18f33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:14:16 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e73a99fd2499dee7f1261f9d60c4057a522ce28fa843d2d444a592166b18f33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:14:16 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e73a99fd2499dee7f1261f9d60c4057a522ce28fa843d2d444a592166b18f33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:14:16 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e73a99fd2499dee7f1261f9d60c4057a522ce28fa843d2d444a592166b18f33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:14:16 np0005558241 podman[264031]: 2025-12-13 08:14:16.398420979 +0000 UTC m=+0.139844094 container init 7166591d229ee04547d47ee125b3cc3cc55e8791ba728d6c8e12567575d04bd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_boyd, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 03:14:16 np0005558241 podman[264031]: 2025-12-13 08:14:16.406288702 +0000 UTC m=+0.147711777 container start 7166591d229ee04547d47ee125b3cc3cc55e8791ba728d6c8e12567575d04bd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:14:16 np0005558241 podman[264031]: 2025-12-13 08:14:16.411800928 +0000 UTC m=+0.153223993 container attach 7166591d229ee04547d47ee125b3cc3cc55e8791ba728d6c8e12567575d04bd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:14:16 np0005558241 nova_compute[248510]: 2025-12-13 08:14:16.627 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]: {
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:    "0": [
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:        {
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "devices": [
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "/dev/loop3"
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            ],
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "lv_name": "ceph_lv0",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "lv_size": "21470642176",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "name": "ceph_lv0",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "tags": {
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.cluster_name": "ceph",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.crush_device_class": "",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.encrypted": "0",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.objectstore": "bluestore",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.osd_id": "0",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.type": "block",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.vdo": "0",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.with_tpm": "0"
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            },
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "type": "block",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "vg_name": "ceph_vg0"
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:        }
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:    ],
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:    "1": [
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:        {
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "devices": [
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "/dev/loop4"
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            ],
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "lv_name": "ceph_lv1",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "lv_size": "21470642176",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "name": "ceph_lv1",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "tags": {
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.cluster_name": "ceph",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.crush_device_class": "",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.encrypted": "0",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.objectstore": "bluestore",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.osd_id": "1",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.type": "block",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.vdo": "0",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.with_tpm": "0"
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            },
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "type": "block",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "vg_name": "ceph_vg1"
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:        }
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:    ],
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:    "2": [
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:        {
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "devices": [
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "/dev/loop5"
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            ],
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "lv_name": "ceph_lv2",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "lv_size": "21470642176",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "name": "ceph_lv2",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "tags": {
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.cluster_name": "ceph",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.crush_device_class": "",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.encrypted": "0",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.objectstore": "bluestore",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.osd_id": "2",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.type": "block",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.vdo": "0",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:                "ceph.with_tpm": "0"
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            },
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "type": "block",
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:            "vg_name": "ceph_vg2"
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:        }
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]:    ]
Dec 13 03:14:16 np0005558241 distracted_boyd[264047]: }
Dec 13 03:14:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 13 03:14:16 np0005558241 systemd[1]: libpod-7166591d229ee04547d47ee125b3cc3cc55e8791ba728d6c8e12567575d04bd9.scope: Deactivated successfully.
Dec 13 03:14:16 np0005558241 podman[264031]: 2025-12-13 08:14:16.726235588 +0000 UTC m=+0.467658653 container died 7166591d229ee04547d47ee125b3cc3cc55e8791ba728d6c8e12567575d04bd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_boyd, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:14:16 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0e73a99fd2499dee7f1261f9d60c4057a522ce28fa843d2d444a592166b18f33-merged.mount: Deactivated successfully.
Dec 13 03:14:16 np0005558241 podman[264031]: 2025-12-13 08:14:16.805301753 +0000 UTC m=+0.546724818 container remove 7166591d229ee04547d47ee125b3cc3cc55e8791ba728d6c8e12567575d04bd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_boyd, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 03:14:16 np0005558241 systemd[1]: libpod-conmon-7166591d229ee04547d47ee125b3cc3cc55e8791ba728d6c8e12567575d04bd9.scope: Deactivated successfully.
Dec 13 03:14:17 np0005558241 nova_compute[248510]: 2025-12-13 08:14:17.009 248514 DEBUG oslo_concurrency.lockutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Acquiring lock "c46820c9-23b7-492b-bd49-f3f444722252" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:17 np0005558241 nova_compute[248510]: 2025-12-13 08:14:17.010 248514 DEBUG oslo_concurrency.lockutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "c46820c9-23b7-492b-bd49-f3f444722252" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:17 np0005558241 nova_compute[248510]: 2025-12-13 08:14:17.080 248514 DEBUG nova.compute.manager [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:14:17 np0005558241 podman[264133]: 2025-12-13 08:14:17.301216221 +0000 UTC m=+0.050056673 container create b4b8c8fc546accc95093cb3ed6c782706064ea3111f9ddd3bda223235d790876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_shaw, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:14:17 np0005558241 systemd[1]: Started libpod-conmon-b4b8c8fc546accc95093cb3ed6c782706064ea3111f9ddd3bda223235d790876.scope.
Dec 13 03:14:17 np0005558241 podman[264133]: 2025-12-13 08:14:17.277326103 +0000 UTC m=+0.026166585 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:14:17 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:14:17 np0005558241 podman[264133]: 2025-12-13 08:14:17.390580801 +0000 UTC m=+0.139421273 container init b4b8c8fc546accc95093cb3ed6c782706064ea3111f9ddd3bda223235d790876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:14:17 np0005558241 podman[264133]: 2025-12-13 08:14:17.399644524 +0000 UTC m=+0.148484976 container start b4b8c8fc546accc95093cb3ed6c782706064ea3111f9ddd3bda223235d790876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:14:17 np0005558241 systemd[1]: libpod-b4b8c8fc546accc95093cb3ed6c782706064ea3111f9ddd3bda223235d790876.scope: Deactivated successfully.
Dec 13 03:14:17 np0005558241 conmon[264149]: conmon b4b8c8fc546accc95093 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4b8c8fc546accc95093cb3ed6c782706064ea3111f9ddd3bda223235d790876.scope/container/memory.events
Dec 13 03:14:17 np0005558241 cranky_shaw[264149]: 167 167
Dec 13 03:14:17 np0005558241 podman[264133]: 2025-12-13 08:14:17.408497752 +0000 UTC m=+0.157338194 container attach b4b8c8fc546accc95093cb3ed6c782706064ea3111f9ddd3bda223235d790876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 03:14:17 np0005558241 podman[264133]: 2025-12-13 08:14:17.408992744 +0000 UTC m=+0.157833196 container died b4b8c8fc546accc95093cb3ed6c782706064ea3111f9ddd3bda223235d790876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec 13 03:14:17 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9d47775177c9bf51f1ceed2de4152eeb3d027e884f13622d666ed93880c02c4f-merged.mount: Deactivated successfully.
Dec 13 03:14:17 np0005558241 podman[264133]: 2025-12-13 08:14:17.445362669 +0000 UTC m=+0.194203121 container remove b4b8c8fc546accc95093cb3ed6c782706064ea3111f9ddd3bda223235d790876 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:14:17 np0005558241 systemd[1]: libpod-conmon-b4b8c8fc546accc95093cb3ed6c782706064ea3111f9ddd3bda223235d790876.scope: Deactivated successfully.
Dec 13 03:14:17 np0005558241 podman[264173]: 2025-12-13 08:14:17.608358732 +0000 UTC m=+0.046331562 container create eca02287763b49c34c3c91765c2389d474935a5a6d388336d09dd014ce6dd1df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 03:14:17 np0005558241 systemd[1]: Started libpod-conmon-eca02287763b49c34c3c91765c2389d474935a5a6d388336d09dd014ce6dd1df.scope.
Dec 13 03:14:17 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:14:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43b6428b1503b90ea0afc0afd2ffc5524bab19b6af04f5a2b5a5df694115bf70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:14:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43b6428b1503b90ea0afc0afd2ffc5524bab19b6af04f5a2b5a5df694115bf70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:14:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43b6428b1503b90ea0afc0afd2ffc5524bab19b6af04f5a2b5a5df694115bf70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:14:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43b6428b1503b90ea0afc0afd2ffc5524bab19b6af04f5a2b5a5df694115bf70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:14:17 np0005558241 podman[264173]: 2025-12-13 08:14:17.587205541 +0000 UTC m=+0.025178391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:14:17 np0005558241 nova_compute[248510]: 2025-12-13 08:14:17.685 248514 DEBUG oslo_concurrency.lockutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:17 np0005558241 nova_compute[248510]: 2025-12-13 08:14:17.687 248514 DEBUG oslo_concurrency.lockutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:17 np0005558241 podman[264173]: 2025-12-13 08:14:17.693431406 +0000 UTC m=+0.131404266 container init eca02287763b49c34c3c91765c2389d474935a5a6d388336d09dd014ce6dd1df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:14:17 np0005558241 nova_compute[248510]: 2025-12-13 08:14:17.698 248514 DEBUG nova.virt.hardware [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:14:17 np0005558241 nova_compute[248510]: 2025-12-13 08:14:17.699 248514 INFO nova.compute.claims [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:14:17 np0005558241 podman[264173]: 2025-12-13 08:14:17.699513815 +0000 UTC m=+0.137486645 container start eca02287763b49c34c3c91765c2389d474935a5a6d388336d09dd014ce6dd1df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_tesla, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:14:17 np0005558241 podman[264173]: 2025-12-13 08:14:17.705226546 +0000 UTC m=+0.143199376 container attach eca02287763b49c34c3c91765c2389d474935a5a6d388336d09dd014ce6dd1df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:14:17 np0005558241 nova_compute[248510]: 2025-12-13 08:14:17.968 248514 DEBUG oslo_concurrency.processutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:18 np0005558241 nova_compute[248510]: 2025-12-13 08:14:18.372 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:14:18 np0005558241 lvm[264288]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:14:18 np0005558241 lvm[264289]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:14:18 np0005558241 lvm[264289]: VG ceph_vg1 finished
Dec 13 03:14:18 np0005558241 lvm[264288]: VG ceph_vg0 finished
Dec 13 03:14:18 np0005558241 lvm[264291]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:14:18 np0005558241 lvm[264291]: VG ceph_vg2 finished
Dec 13 03:14:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:14:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3823741423' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:14:18 np0005558241 nova_compute[248510]: 2025-12-13 08:14:18.538 248514 DEBUG oslo_concurrency.processutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:18 np0005558241 nova_compute[248510]: 2025-12-13 08:14:18.544 248514 DEBUG nova.compute.provider_tree [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:14:18 np0005558241 determined_tesla[264190]: {}
Dec 13 03:14:18 np0005558241 systemd[1]: libpod-eca02287763b49c34c3c91765c2389d474935a5a6d388336d09dd014ce6dd1df.scope: Deactivated successfully.
Dec 13 03:14:18 np0005558241 podman[264173]: 2025-12-13 08:14:18.608341437 +0000 UTC m=+1.046314297 container died eca02287763b49c34c3c91765c2389d474935a5a6d388336d09dd014ce6dd1df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_tesla, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:14:18 np0005558241 systemd[1]: libpod-eca02287763b49c34c3c91765c2389d474935a5a6d388336d09dd014ce6dd1df.scope: Consumed 1.419s CPU time.
Dec 13 03:14:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:14:18 np0005558241 systemd[1]: var-lib-containers-storage-overlay-43b6428b1503b90ea0afc0afd2ffc5524bab19b6af04f5a2b5a5df694115bf70-merged.mount: Deactivated successfully.
Dec 13 03:14:18 np0005558241 podman[264173]: 2025-12-13 08:14:18.652548356 +0000 UTC m=+1.090521186 container remove eca02287763b49c34c3c91765c2389d474935a5a6d388336d09dd014ce6dd1df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_tesla, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Dec 13 03:14:18 np0005558241 nova_compute[248510]: 2025-12-13 08:14:18.654 248514 DEBUG nova.scheduler.client.report [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:14:18 np0005558241 systemd[1]: libpod-conmon-eca02287763b49c34c3c91765c2389d474935a5a6d388336d09dd014ce6dd1df.scope: Deactivated successfully.
Dec 13 03:14:18 np0005558241 nova_compute[248510]: 2025-12-13 08:14:18.682 248514 DEBUG oslo_concurrency.lockutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:18 np0005558241 nova_compute[248510]: 2025-12-13 08:14:18.683 248514 DEBUG nova.compute.manager [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:14:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:14:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1432: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 13 03:14:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:14:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:14:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:14:18 np0005558241 nova_compute[248510]: 2025-12-13 08:14:18.953 248514 DEBUG nova.compute.manager [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:14:18 np0005558241 nova_compute[248510]: 2025-12-13 08:14:18.955 248514 DEBUG nova.network.neutron [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:14:19 np0005558241 nova_compute[248510]: 2025-12-13 08:14:19.549 248514 INFO nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:14:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:14:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:14:19 np0005558241 nova_compute[248510]: 2025-12-13 08:14:19.602 248514 DEBUG nova.compute.manager [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:14:19 np0005558241 nova_compute[248510]: 2025-12-13 08:14:19.858 248514 DEBUG nova.compute.manager [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:14:19 np0005558241 nova_compute[248510]: 2025-12-13 08:14:19.860 248514 DEBUG nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:14:19 np0005558241 nova_compute[248510]: 2025-12-13 08:14:19.860 248514 INFO nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Creating image(s)#033[00m
Dec 13 03:14:19 np0005558241 nova_compute[248510]: 2025-12-13 08:14:19.883 248514 DEBUG nova.storage.rbd_utils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] rbd image c46820c9-23b7-492b-bd49-f3f444722252_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:19 np0005558241 nova_compute[248510]: 2025-12-13 08:14:19.905 248514 DEBUG nova.storage.rbd_utils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] rbd image c46820c9-23b7-492b-bd49-f3f444722252_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:19 np0005558241 nova_compute[248510]: 2025-12-13 08:14:19.928 248514 DEBUG nova.storage.rbd_utils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] rbd image c46820c9-23b7-492b-bd49-f3f444722252_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:19 np0005558241 nova_compute[248510]: 2025-12-13 08:14:19.931 248514 DEBUG oslo_concurrency.processutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:19 np0005558241 nova_compute[248510]: 2025-12-13 08:14:19.992 248514 DEBUG oslo_concurrency.processutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:19 np0005558241 nova_compute[248510]: 2025-12-13 08:14:19.994 248514 DEBUG oslo_concurrency.lockutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:19 np0005558241 nova_compute[248510]: 2025-12-13 08:14:19.995 248514 DEBUG oslo_concurrency.lockutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:19 np0005558241 nova_compute[248510]: 2025-12-13 08:14:19.996 248514 DEBUG oslo_concurrency.lockutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.019 248514 DEBUG nova.storage.rbd_utils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] rbd image c46820c9-23b7-492b-bd49-f3f444722252_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.023 248514 DEBUG oslo_concurrency.processutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c46820c9-23b7-492b-bd49-f3f444722252_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.247 248514 DEBUG nova.network.neutron [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.248 248514 DEBUG nova.compute.manager [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.310 248514 DEBUG oslo_concurrency.processutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c46820c9-23b7-492b-bd49-f3f444722252_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.373 248514 DEBUG nova.storage.rbd_utils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] resizing rbd image c46820c9-23b7-492b-bd49-f3f444722252_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.460 248514 DEBUG nova.objects.instance [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lazy-loading 'migration_context' on Instance uuid c46820c9-23b7-492b-bd49-f3f444722252 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.530 248514 DEBUG nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.531 248514 DEBUG nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Ensure instance console log exists: /var/lib/nova/instances/c46820c9-23b7-492b-bd49-f3f444722252/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.533 248514 DEBUG oslo_concurrency.lockutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.534 248514 DEBUG oslo_concurrency.lockutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.535 248514 DEBUG oslo_concurrency.lockutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.537 248514 DEBUG nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.545 248514 WARNING nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.554 248514 DEBUG nova.virt.libvirt.host [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.555 248514 DEBUG nova.virt.libvirt.host [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.559 248514 DEBUG nova.virt.libvirt.host [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.560 248514 DEBUG nova.virt.libvirt.host [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.561 248514 DEBUG nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.561 248514 DEBUG nova.virt.hardware [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.562 248514 DEBUG nova.virt.hardware [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.562 248514 DEBUG nova.virt.hardware [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.563 248514 DEBUG nova.virt.hardware [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.563 248514 DEBUG nova.virt.hardware [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.564 248514 DEBUG nova.virt.hardware [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.564 248514 DEBUG nova.virt.hardware [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.564 248514 DEBUG nova.virt.hardware [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.565 248514 DEBUG nova.virt.hardware [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.565 248514 DEBUG nova.virt.hardware [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.566 248514 DEBUG nova.virt.hardware [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:14:20 np0005558241 nova_compute[248510]: 2025-12-13 08:14:20.570 248514 DEBUG oslo_concurrency.processutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003507408832148384 of space, bias 1.0, pg target 0.10522226496445151 quantized to 32 (current 32)
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664137887777726 of space, bias 1.0, pg target 0.19992413663333178 quantized to 32 (current 32)
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.702085214425958e-07 of space, bias 4.0, pg target 0.001164250225731115 quantized to 16 (current 32)
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:14:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 364 KiB/s wr, 70 op/s
Dec 13 03:14:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:14:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4096385636' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:14:21 np0005558241 nova_compute[248510]: 2025-12-13 08:14:21.240 248514 DEBUG oslo_concurrency.processutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:21 np0005558241 nova_compute[248510]: 2025-12-13 08:14:21.259 248514 DEBUG nova.storage.rbd_utils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] rbd image c46820c9-23b7-492b-bd49-f3f444722252_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:21 np0005558241 nova_compute[248510]: 2025-12-13 08:14:21.262 248514 DEBUG oslo_concurrency.processutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:21 np0005558241 nova_compute[248510]: 2025-12-13 08:14:21.420 248514 DEBUG oslo_concurrency.lockutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Acquiring lock "3cf47630-6d1c-4617-ab97-b60185421433" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:21 np0005558241 nova_compute[248510]: 2025-12-13 08:14:21.421 248514 DEBUG oslo_concurrency.lockutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Lock "3cf47630-6d1c-4617-ab97-b60185421433" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:21 np0005558241 nova_compute[248510]: 2025-12-13 08:14:21.482 248514 DEBUG nova.compute.manager [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:14:21 np0005558241 nova_compute[248510]: 2025-12-13 08:14:21.631 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:14:21 np0005558241 nova_compute[248510]: 2025-12-13 08:14:21.661 248514 DEBUG oslo_concurrency.lockutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:21 np0005558241 nova_compute[248510]: 2025-12-13 08:14:21.662 248514 DEBUG oslo_concurrency.lockutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:21 np0005558241 nova_compute[248510]: 2025-12-13 08:14:21.671 248514 DEBUG nova.virt.hardware [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:14:21 np0005558241 nova_compute[248510]: 2025-12-13 08:14:21.672 248514 INFO nova.compute.claims [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:14:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:14:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1958915093' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:14:21 np0005558241 nova_compute[248510]: 2025-12-13 08:14:21.831 248514 DEBUG oslo_concurrency.processutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:14:21 np0005558241 nova_compute[248510]: 2025-12-13 08:14:21.833 248514 DEBUG nova.objects.instance [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lazy-loading 'pci_devices' on Instance uuid c46820c9-23b7-492b-bd49-f3f444722252 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:14:21 np0005558241 nova_compute[248510]: 2025-12-13 08:14:21.867 248514 DEBUG oslo_concurrency.processutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:14:21 np0005558241 nova_compute[248510]: 2025-12-13 08:14:21.937 248514 DEBUG nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:14:21 np0005558241 nova_compute[248510]:  <uuid>c46820c9-23b7-492b-bd49-f3f444722252</uuid>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:  <name>instance-00000008</name>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-1947268636</nova:name>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:14:20</nova:creationTime>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:14:21 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:        <nova:user uuid="546de43f2a284b5589b25c3ae6f2ab3a">tempest-ServersAdminNegativeTestJSON-315495826-project-member</nova:user>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:        <nova:project uuid="66e4b42b92b04724aa8401fca41d9c82">tempest-ServersAdminNegativeTestJSON-315495826</nova:project>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <entry name="serial">c46820c9-23b7-492b-bd49-f3f444722252</entry>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <entry name="uuid">c46820c9-23b7-492b-bd49-f3f444722252</entry>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/c46820c9-23b7-492b-bd49-f3f444722252_disk">
Dec 13 03:14:21 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:14:21 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/c46820c9-23b7-492b-bd49-f3f444722252_disk.config">
Dec 13 03:14:21 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:14:21 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/c46820c9-23b7-492b-bd49-f3f444722252/console.log" append="off"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:14:21 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:14:21 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:14:21 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:14:21 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 03:14:22 np0005558241 nova_compute[248510]: 2025-12-13 08:14:22.297 248514 DEBUG nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 03:14:22 np0005558241 nova_compute[248510]: 2025-12-13 08:14:22.298 248514 DEBUG nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 03:14:22 np0005558241 nova_compute[248510]: 2025-12-13 08:14:22.298 248514 INFO nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Using config drive
Dec 13 03:14:22 np0005558241 nova_compute[248510]: 2025-12-13 08:14:22.322 248514 DEBUG nova.storage.rbd_utils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] rbd image c46820c9-23b7-492b-bd49-f3f444722252_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:14:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:14:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/806839160' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:14:22 np0005558241 nova_compute[248510]: 2025-12-13 08:14:22.432 248514 DEBUG oslo_concurrency.processutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:14:22 np0005558241 nova_compute[248510]: 2025-12-13 08:14:22.453 248514 DEBUG nova.compute.provider_tree [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:14:22 np0005558241 nova_compute[248510]: 2025-12-13 08:14:22.476 248514 DEBUG nova.scheduler.client.report [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:14:22 np0005558241 nova_compute[248510]: 2025-12-13 08:14:22.536 248514 DEBUG oslo_concurrency.lockutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:14:22 np0005558241 nova_compute[248510]: 2025-12-13 08:14:22.537 248514 DEBUG nova.compute.manager [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:14:22 np0005558241 nova_compute[248510]: 2025-12-13 08:14:22.592 248514 INFO nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Creating config drive at /var/lib/nova/instances/c46820c9-23b7-492b-bd49-f3f444722252/disk.config
Dec 13 03:14:22 np0005558241 nova_compute[248510]: 2025-12-13 08:14:22.597 248514 DEBUG oslo_concurrency.processutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c46820c9-23b7-492b-bd49-f3f444722252/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwghqouxb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:14:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1434: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 58 op/s
Dec 13 03:14:22 np0005558241 nova_compute[248510]: 2025-12-13 08:14:22.727 248514 DEBUG oslo_concurrency.processutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c46820c9-23b7-492b-bd49-f3f444722252/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwghqouxb" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:14:22 np0005558241 nova_compute[248510]: 2025-12-13 08:14:22.753 248514 DEBUG nova.storage.rbd_utils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] rbd image c46820c9-23b7-492b-bd49-f3f444722252_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:14:22 np0005558241 nova_compute[248510]: 2025-12-13 08:14:22.757 248514 DEBUG oslo_concurrency.processutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c46820c9-23b7-492b-bd49-f3f444722252/disk.config c46820c9-23b7-492b-bd49-f3f444722252_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:14:22 np0005558241 nova_compute[248510]: 2025-12-13 08:14:22.786 248514 DEBUG nova.compute.manager [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 13 03:14:22 np0005558241 nova_compute[248510]: 2025-12-13 08:14:22.878 248514 DEBUG oslo_concurrency.processutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c46820c9-23b7-492b-bd49-f3f444722252/disk.config c46820c9-23b7-492b-bd49-f3f444722252_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:14:22 np0005558241 nova_compute[248510]: 2025-12-13 08:14:22.878 248514 INFO nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Deleting local config drive /var/lib/nova/instances/c46820c9-23b7-492b-bd49-f3f444722252/disk.config because it was imported into RBD.
Dec 13 03:14:22 np0005558241 nova_compute[248510]: 2025-12-13 08:14:22.983 248514 INFO nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:14:22 np0005558241 systemd-machined[210538]: New machine qemu-8-instance-00000008.
Dec 13 03:14:22 np0005558241 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.011 248514 DEBUG nova.compute.manager [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.337 248514 DEBUG nova.compute.manager [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.339 248514 DEBUG nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.339 248514 INFO nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Creating image(s)
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.363 248514 DEBUG nova.storage.rbd_utils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] rbd image 3cf47630-6d1c-4617-ab97-b60185421433_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.390 248514 DEBUG nova.storage.rbd_utils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] rbd image 3cf47630-6d1c-4617-ab97-b60185421433_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.414 248514 DEBUG nova.storage.rbd_utils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] rbd image 3cf47630-6d1c-4617-ab97-b60185421433_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.418 248514 DEBUG oslo_concurrency.processutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.441 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613663.3731172, c46820c9-23b7-492b-bd49-f3f444722252 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.441 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c46820c9-23b7-492b-bd49-f3f444722252] VM Resumed (Lifecycle Event)
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.442 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.444 248514 DEBUG nova.compute.manager [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.444 248514 DEBUG nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.447 248514 DEBUG oslo_concurrency.lockutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Acquiring lock "06335528-6de3-417b-837a-06e18baaf91f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.448 248514 DEBUG oslo_concurrency.lockutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Lock "06335528-6de3-417b-837a-06e18baaf91f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.452 248514 INFO nova.virt.libvirt.driver [-] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Instance spawned successfully.
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.452 248514 DEBUG nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.479 248514 DEBUG oslo_concurrency.processutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.480 248514 DEBUG oslo_concurrency.lockutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.480 248514 DEBUG oslo_concurrency.lockutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.481 248514 DEBUG oslo_concurrency.lockutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.501 248514 DEBUG nova.storage.rbd_utils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] rbd image 3cf47630-6d1c-4617-ab97-b60185421433_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.505 248514 DEBUG oslo_concurrency.processutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 3cf47630-6d1c-4617-ab97-b60185421433_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.522 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.523 248514 DEBUG nova.compute.manager [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.531 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.598 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c46820c9-23b7-492b-bd49-f3f444722252] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.599 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613663.3734217, c46820c9-23b7-492b-bd49-f3f444722252 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.599 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c46820c9-23b7-492b-bd49-f3f444722252] VM Started (Lifecycle Event)
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.603 248514 DEBUG nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.603 248514 DEBUG nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.604 248514 DEBUG nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.604 248514 DEBUG nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.604 248514 DEBUG nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.605 248514 DEBUG nova.virt.libvirt.driver [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.823 248514 DEBUG oslo_concurrency.processutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 3cf47630-6d1c-4617-ab97-b60185421433_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:14:23 np0005558241 nova_compute[248510]: 2025-12-13 08:14:23.906 248514 DEBUG nova.storage.rbd_utils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] resizing rbd image 3cf47630-6d1c-4617-ab97-b60185421433_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.002 248514 DEBUG nova.objects.instance [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Lazy-loading 'migration_context' on Instance uuid 3cf47630-6d1c-4617-ab97-b60185421433 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.028 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.031 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.051 248514 DEBUG oslo_concurrency.lockutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.051 248514 DEBUG oslo_concurrency.lockutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.058 248514 DEBUG nova.virt.hardware [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.059 248514 INFO nova.compute.claims [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.622 248514 INFO nova.compute.manager [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Took 4.76 seconds to spawn the instance on the hypervisor.
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.623 248514 DEBUG nova.compute.manager [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.633 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c46820c9-23b7-492b-bd49-f3f444722252] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.634 248514 DEBUG nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.635 248514 DEBUG nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Ensure instance console log exists: /var/lib/nova/instances/3cf47630-6d1c-4617-ab97-b60185421433/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.635 248514 DEBUG oslo_concurrency.lockutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.636 248514 DEBUG oslo_concurrency.lockutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.636 248514 DEBUG oslo_concurrency.lockutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.639 248514 DEBUG nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.644 248514 WARNING nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.650 248514 DEBUG nova.virt.libvirt.host [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.650 248514 DEBUG nova.virt.libvirt.host [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.653 248514 DEBUG nova.virt.libvirt.host [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.653 248514 DEBUG nova.virt.libvirt.host [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.654 248514 DEBUG nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.654 248514 DEBUG nova.virt.hardware [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.654 248514 DEBUG nova.virt.hardware [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.655 248514 DEBUG nova.virt.hardware [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.655 248514 DEBUG nova.virt.hardware [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.655 248514 DEBUG nova.virt.hardware [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.655 248514 DEBUG nova.virt.hardware [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.656 248514 DEBUG nova.virt.hardware [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.656 248514 DEBUG nova.virt.hardware [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.656 248514 DEBUG nova.virt.hardware [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.656 248514 DEBUG nova.virt.hardware [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.656 248514 DEBUG nova.virt.hardware [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.659 248514 DEBUG oslo_concurrency.processutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:14:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 181 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.5 MiB/s wr, 166 op/s
Dec 13 03:14:24 np0005558241 nova_compute[248510]: 2025-12-13 08:14:24.843 248514 INFO nova.compute.manager [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Took 7.19 seconds to build instance.
Dec 13 03:14:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:14:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/944908143' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:14:25 np0005558241 nova_compute[248510]: 2025-12-13 08:14:25.211 248514 DEBUG oslo_concurrency.processutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:14:25 np0005558241 nova_compute[248510]: 2025-12-13 08:14:25.234 248514 DEBUG nova.storage.rbd_utils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] rbd image 3cf47630-6d1c-4617-ab97-b60185421433_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:14:25 np0005558241 nova_compute[248510]: 2025-12-13 08:14:25.242 248514 DEBUG oslo_concurrency.processutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:14:25 np0005558241 nova_compute[248510]: 2025-12-13 08:14:25.595 248514 DEBUG oslo_concurrency.lockutils [None req-d6702f7b-c4fc-40d5-b0c7-3162184cac85 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "c46820c9-23b7-492b-bd49-f3f444722252" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:14:25 np0005558241 nova_compute[248510]: 2025-12-13 08:14:25.640 248514 DEBUG oslo_concurrency.lockutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Acquiring lock "7bb2ecd0-7c38-401e-9999-a3f457ede62f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:14:25 np0005558241 nova_compute[248510]: 2025-12-13 08:14:25.641 248514 DEBUG oslo_concurrency.lockutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "7bb2ecd0-7c38-401e-9999-a3f457ede62f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:14:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:14:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/422898917' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:14:25 np0005558241 nova_compute[248510]: 2025-12-13 08:14:25.770 248514 DEBUG nova.compute.manager [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:14:25 np0005558241 nova_compute[248510]: 2025-12-13 08:14:25.780 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:14:25 np0005558241 nova_compute[248510]: 2025-12-13 08:14:25.792 248514 DEBUG oslo_concurrency.processutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:14:25 np0005558241 nova_compute[248510]: 2025-12-13 08:14:25.795 248514 DEBUG nova.objects.instance [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Lazy-loading 'pci_devices' on Instance uuid 3cf47630-6d1c-4617-ab97-b60185421433 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:14:25 np0005558241 nova_compute[248510]: 2025-12-13 08:14:25.802 248514 DEBUG oslo_concurrency.processutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:14:26 np0005558241 nova_compute[248510]: 2025-12-13 08:14:26.116 248514 DEBUG nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:14:26 np0005558241 nova_compute[248510]:  <uuid>3cf47630-6d1c-4617-ab97-b60185421433</uuid>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:  <name>instance-00000009</name>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerDiagnosticsV248Test-server-1489268244</nova:name>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:14:24</nova:creationTime>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:14:26 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:        <nova:user uuid="89af206f7ce04f1d8b86a3ebd054cb05">tempest-ServerDiagnosticsV248Test-2121576140-project-member</nova:user>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:        <nova:project uuid="24d87ef6a6664654a40b74febd73407f">tempest-ServerDiagnosticsV248Test-2121576140</nova:project>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <entry name="serial">3cf47630-6d1c-4617-ab97-b60185421433</entry>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <entry name="uuid">3cf47630-6d1c-4617-ab97-b60185421433</entry>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/3cf47630-6d1c-4617-ab97-b60185421433_disk">
Dec 13 03:14:26 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:14:26 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/3cf47630-6d1c-4617-ab97-b60185421433_disk.config">
Dec 13 03:14:26 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:14:26 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/3cf47630-6d1c-4617-ab97-b60185421433/console.log" append="off"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:14:26 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:14:26 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:14:26 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:14:26 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:14:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:14:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1489754051' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:14:26 np0005558241 nova_compute[248510]: 2025-12-13 08:14:26.390 248514 DEBUG oslo_concurrency.processutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:26 np0005558241 nova_compute[248510]: 2025-12-13 08:14:26.397 248514 DEBUG nova.compute.provider_tree [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:14:26 np0005558241 nova_compute[248510]: 2025-12-13 08:14:26.400 248514 DEBUG oslo_concurrency.lockutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:26 np0005558241 nova_compute[248510]: 2025-12-13 08:14:26.633 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:14:26 np0005558241 nova_compute[248510]: 2025-12-13 08:14:26.707 248514 DEBUG nova.scheduler.client.report [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:14:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1436: 321 pgs: 321 active+clean; 181 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 464 KiB/s rd, 4.5 MiB/s wr, 110 op/s
Dec 13 03:14:26 np0005558241 nova_compute[248510]: 2025-12-13 08:14:26.718 248514 DEBUG nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:14:26 np0005558241 nova_compute[248510]: 2025-12-13 08:14:26.719 248514 DEBUG nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:14:26 np0005558241 nova_compute[248510]: 2025-12-13 08:14:26.720 248514 INFO nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Using config drive#033[00m
Dec 13 03:14:26 np0005558241 nova_compute[248510]: 2025-12-13 08:14:26.747 248514 DEBUG nova.storage.rbd_utils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] rbd image 3cf47630-6d1c-4617-ab97-b60185421433_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:26 np0005558241 nova_compute[248510]: 2025-12-13 08:14:26.880 248514 DEBUG oslo_concurrency.lockutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:26 np0005558241 nova_compute[248510]: 2025-12-13 08:14:26.880 248514 DEBUG nova.compute.manager [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:14:26 np0005558241 nova_compute[248510]: 2025-12-13 08:14:26.883 248514 DEBUG oslo_concurrency.lockutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.483s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:26 np0005558241 nova_compute[248510]: 2025-12-13 08:14:26.895 248514 DEBUG nova.virt.hardware [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:14:26 np0005558241 nova_compute[248510]: 2025-12-13 08:14:26.896 248514 INFO nova.compute.claims [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:14:27 np0005558241 nova_compute[248510]: 2025-12-13 08:14:27.189 248514 DEBUG nova.compute.manager [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:14:27 np0005558241 nova_compute[248510]: 2025-12-13 08:14:27.190 248514 DEBUG nova.network.neutron [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:14:27 np0005558241 nova_compute[248510]: 2025-12-13 08:14:27.468 248514 INFO nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:14:27 np0005558241 nova_compute[248510]: 2025-12-13 08:14:27.513 248514 DEBUG nova.compute.manager [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:14:27 np0005558241 nova_compute[248510]: 2025-12-13 08:14:27.578 248514 DEBUG oslo_concurrency.processutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:27 np0005558241 nova_compute[248510]: 2025-12-13 08:14:27.611 248514 INFO nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Creating config drive at /var/lib/nova/instances/3cf47630-6d1c-4617-ab97-b60185421433/disk.config#033[00m
Dec 13 03:14:27 np0005558241 nova_compute[248510]: 2025-12-13 08:14:27.619 248514 DEBUG oslo_concurrency.processutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3cf47630-6d1c-4617-ab97-b60185421433/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprep4elbm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:27 np0005558241 nova_compute[248510]: 2025-12-13 08:14:27.750 248514 DEBUG oslo_concurrency.processutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3cf47630-6d1c-4617-ab97-b60185421433/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprep4elbm" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:27 np0005558241 nova_compute[248510]: 2025-12-13 08:14:27.783 248514 DEBUG nova.storage.rbd_utils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] rbd image 3cf47630-6d1c-4617-ab97-b60185421433_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:27 np0005558241 nova_compute[248510]: 2025-12-13 08:14:27.788 248514 DEBUG oslo_concurrency.processutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3cf47630-6d1c-4617-ab97-b60185421433/disk.config 3cf47630-6d1c-4617-ab97-b60185421433_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:27 np0005558241 nova_compute[248510]: 2025-12-13 08:14:27.872 248514 DEBUG nova.compute.manager [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:14:27 np0005558241 nova_compute[248510]: 2025-12-13 08:14:27.875 248514 DEBUG nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:14:27 np0005558241 nova_compute[248510]: 2025-12-13 08:14:27.875 248514 INFO nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Creating image(s)#033[00m
Dec 13 03:14:27 np0005558241 nova_compute[248510]: 2025-12-13 08:14:27.910 248514 DEBUG nova.storage.rbd_utils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] rbd image 06335528-6de3-417b-837a-06e18baaf91f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:27 np0005558241 nova_compute[248510]: 2025-12-13 08:14:27.945 248514 DEBUG nova.storage.rbd_utils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] rbd image 06335528-6de3-417b-837a-06e18baaf91f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:27 np0005558241 nova_compute[248510]: 2025-12-13 08:14:27.974 248514 DEBUG nova.storage.rbd_utils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] rbd image 06335528-6de3-417b-837a-06e18baaf91f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:27 np0005558241 nova_compute[248510]: 2025-12-13 08:14:27.979 248514 DEBUG oslo_concurrency.processutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.006 248514 DEBUG nova.network.neutron [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.006 248514 DEBUG nova.compute.manager [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.008 248514 DEBUG oslo_concurrency.processutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3cf47630-6d1c-4617-ab97-b60185421433/disk.config 3cf47630-6d1c-4617-ab97-b60185421433_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.008 248514 INFO nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Deleting local config drive /var/lib/nova/instances/3cf47630-6d1c-4617-ab97-b60185421433/disk.config because it was imported into RBD.#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.060 248514 DEBUG oslo_concurrency.processutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.061 248514 DEBUG oslo_concurrency.lockutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.061 248514 DEBUG oslo_concurrency.lockutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.062 248514 DEBUG oslo_concurrency.lockutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:28 np0005558241 systemd-machined[210538]: New machine qemu-9-instance-00000009.
Dec 13 03:14:28 np0005558241 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.090 248514 DEBUG nova.storage.rbd_utils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] rbd image 06335528-6de3-417b-837a-06e18baaf91f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.102 248514 DEBUG oslo_concurrency.processutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 06335528-6de3-417b-837a-06e18baaf91f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:14:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/11905070' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.171 248514 DEBUG oslo_concurrency.processutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.194 248514 DEBUG nova.compute.provider_tree [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.310 248514 DEBUG nova.scheduler.client.report [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.375 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.385 248514 DEBUG oslo_concurrency.processutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 06335528-6de3-417b-837a-06e18baaf91f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.454 248514 DEBUG nova.storage.rbd_utils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] resizing rbd image 06335528-6de3-417b-837a-06e18baaf91f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.510 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613668.4686823, 3cf47630-6d1c-4617-ab97-b60185421433 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.511 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.513 248514 DEBUG nova.compute.manager [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.513 248514 DEBUG nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.516 248514 DEBUG oslo_concurrency.lockutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.517 248514 DEBUG nova.compute.manager [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.526 248514 INFO nova.virt.libvirt.driver [-] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Instance spawned successfully.#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.528 248514 DEBUG nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.580 248514 DEBUG nova.objects.instance [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Lazy-loading 'migration_context' on Instance uuid 06335528-6de3-417b-837a-06e18baaf91f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:14:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.714 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:14:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 232 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.5 MiB/s wr, 204 op/s
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.715 248514 DEBUG nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.715 248514 DEBUG nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Ensure instance console log exists: /var/lib/nova/instances/06335528-6de3-417b-837a-06e18baaf91f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.716 248514 DEBUG oslo_concurrency.lockutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.716 248514 DEBUG oslo_concurrency.lockutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.717 248514 DEBUG oslo_concurrency.lockutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.718 248514 DEBUG nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.723 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.727 248514 DEBUG nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.728 248514 DEBUG nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.728 248514 DEBUG nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.729 248514 DEBUG nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.729 248514 DEBUG nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.730 248514 DEBUG nova.virt.libvirt.driver [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.733 248514 WARNING nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.738 248514 DEBUG nova.virt.libvirt.host [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.738 248514 DEBUG nova.virt.libvirt.host [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.744 248514 DEBUG nova.virt.libvirt.host [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.745 248514 DEBUG nova.virt.libvirt.host [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.746 248514 DEBUG nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.746 248514 DEBUG nova.virt.hardware [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.746 248514 DEBUG nova.virt.hardware [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.747 248514 DEBUG nova.virt.hardware [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.747 248514 DEBUG nova.virt.hardware [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.748 248514 DEBUG nova.virt.hardware [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.748 248514 DEBUG nova.virt.hardware [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.748 248514 DEBUG nova.virt.hardware [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.748 248514 DEBUG nova.virt.hardware [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.748 248514 DEBUG nova.virt.hardware [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.749 248514 DEBUG nova.virt.hardware [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.749 248514 DEBUG nova.virt.hardware [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.752 248514 DEBUG oslo_concurrency.processutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.782 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.801 248514 DEBUG nova.compute.manager [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.801 248514 DEBUG nova.network.neutron [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.804 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.805 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613668.4707696, 3cf47630-6d1c-4617-ab97-b60185421433 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:14:28 np0005558241 nova_compute[248510]: 2025-12-13 08:14:28.805 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] VM Started (Lifecycle Event)#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.008 248514 INFO nova.compute.manager [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Took 5.67 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.008 248514 DEBUG nova.compute.manager [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.014 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.024 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.032 248514 INFO nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:14:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:14:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2121862736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.372 248514 DEBUG oslo_concurrency.processutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.392 248514 DEBUG nova.storage.rbd_utils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] rbd image 06335528-6de3-417b-837a-06e18baaf91f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.396 248514 DEBUG oslo_concurrency.processutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.545 248514 DEBUG nova.objects.instance [None req-e199101e-b86f-47ab-8524-fb4444cbc157 70fe0c5ff55c4130980c6ff1e4743d0f bad01444c3a245abb641d237f0241025 - - default default] Lazy-loading 'pci_devices' on Instance uuid c46820c9-23b7-492b-bd49-f3f444722252 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.557 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.764 248514 DEBUG nova.compute.manager [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.770 248514 DEBUG nova.network.neutron [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.770 248514 DEBUG nova.compute.manager [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.831 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613669.8307421, c46820c9-23b7-492b-bd49-f3f444722252 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.831 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c46820c9-23b7-492b-bd49-f3f444722252] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.844 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.845 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.845 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec 13 03:14:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:14:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/983245769' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.969 248514 DEBUG oslo_concurrency.processutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:29 np0005558241 nova_compute[248510]: 2025-12-13 08:14:29.970 248514 DEBUG nova.objects.instance [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 06335528-6de3-417b-837a-06e18baaf91f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.130 248514 INFO nova.compute.manager [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Took 8.50 seconds to build instance.#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.144 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.146 248514 DEBUG nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:14:30 np0005558241 nova_compute[248510]:  <uuid>06335528-6de3-417b-837a-06e18baaf91f</uuid>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:  <name>instance-0000000a</name>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerExternalEventsTest-server-519153013</nova:name>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:14:28</nova:creationTime>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:14:30 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:        <nova:user uuid="0a74cc39716242dc871d1ab627389698">tempest-ServerExternalEventsTest-1577587857-project-member</nova:user>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:        <nova:project uuid="f21bfaf361ba42a2bf41821eddd2b6f0">tempest-ServerExternalEventsTest-1577587857</nova:project>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <entry name="serial">06335528-6de3-417b-837a-06e18baaf91f</entry>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <entry name="uuid">06335528-6de3-417b-837a-06e18baaf91f</entry>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/06335528-6de3-417b-837a-06e18baaf91f_disk">
Dec 13 03:14:30 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:14:30 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/06335528-6de3-417b-837a-06e18baaf91f_disk.config">
Dec 13 03:14:30 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:14:30 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/06335528-6de3-417b-837a-06e18baaf91f/console.log" append="off"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:14:30 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:14:30 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:14:30 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:14:30 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.149 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:14:30 np0005558241 podman[265320]: 2025-12-13 08:14:30.250037927 +0000 UTC m=+0.067131605 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Dec 13 03:14:30 np0005558241 podman[265321]: 2025-12-13 08:14:30.250108849 +0000 UTC m=+0.063068875 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:14:30 np0005558241 podman[265319]: 2025-12-13 08:14:30.31146682 +0000 UTC m=+0.128491116 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.341 248514 DEBUG oslo_concurrency.lockutils [None req-fa5ce096-9270-4df1-ae3c-3dd285ee04a3 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Lock "3cf47630-6d1c-4617-ab97-b60185421433" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.437 248514 DEBUG nova.compute.manager [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.438 248514 DEBUG nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.439 248514 INFO nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Creating image(s)#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.478 248514 DEBUG nova.storage.rbd_utils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] rbd image 7bb2ecd0-7c38-401e-9999-a3f457ede62f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.504 248514 DEBUG nova.storage.rbd_utils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] rbd image 7bb2ecd0-7c38-401e-9999-a3f457ede62f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.532 248514 DEBUG nova.storage.rbd_utils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] rbd image 7bb2ecd0-7c38-401e-9999-a3f457ede62f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.538 248514 DEBUG oslo_concurrency.processutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.560 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c46820c9-23b7-492b-bd49-f3f444722252] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.567 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-523b15f7-aac8-4023-957c-9ffeab4d92ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.567 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-523b15f7-aac8-4023-957c-9ffeab4d92ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.568 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.568 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 523b15f7-aac8-4023-957c-9ffeab4d92ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.594 248514 DEBUG oslo_concurrency.processutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.595 248514 DEBUG oslo_concurrency.lockutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.596 248514 DEBUG oslo_concurrency.lockutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.596 248514 DEBUG oslo_concurrency.lockutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:30 np0005558241 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec 13 03:14:30 np0005558241 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 6.867s CPU time.
Dec 13 03:14:30 np0005558241 systemd-machined[210538]: Machine qemu-8-instance-00000008 terminated.
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.618 248514 DEBUG nova.storage.rbd_utils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] rbd image 7bb2ecd0-7c38-401e-9999-a3f457ede62f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.622 248514 DEBUG oslo_concurrency.processutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7bb2ecd0-7c38-401e-9999-a3f457ede62f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 237 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.8 MiB/s wr, 215 op/s
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.742 248514 DEBUG nova.compute.manager [None req-e199101e-b86f-47ab-8524-fb4444cbc157 70fe0c5ff55c4130980c6ff1e4743d0f bad01444c3a245abb641d237f0241025 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:14:30 np0005558241 nova_compute[248510]: 2025-12-13 08:14:30.946 248514 DEBUG oslo_concurrency.processutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7bb2ecd0-7c38-401e-9999-a3f457ede62f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.003 248514 DEBUG nova.storage.rbd_utils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] resizing rbd image 7bb2ecd0-7c38-401e-9999-a3f457ede62f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.081 248514 DEBUG nova.objects.instance [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lazy-loading 'migration_context' on Instance uuid 7bb2ecd0-7c38-401e-9999-a3f457ede62f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.084 248514 DEBUG nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.084 248514 DEBUG nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.085 248514 INFO nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Using config drive#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.101 248514 DEBUG nova.storage.rbd_utils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] rbd image 06335528-6de3-417b-837a-06e18baaf91f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.440 248514 DEBUG nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.441 248514 DEBUG nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Ensure instance console log exists: /var/lib/nova/instances/7bb2ecd0-7c38-401e-9999-a3f457ede62f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.441 248514 DEBUG oslo_concurrency.lockutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.442 248514 DEBUG oslo_concurrency.lockutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.442 248514 DEBUG oslo_concurrency.lockutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.443 248514 DEBUG nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.448 248514 WARNING nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.454 248514 DEBUG nova.virt.libvirt.host [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.455 248514 DEBUG nova.virt.libvirt.host [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.458 248514 DEBUG nova.virt.libvirt.host [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.459 248514 DEBUG nova.virt.libvirt.host [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.459 248514 DEBUG nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.460 248514 DEBUG nova.virt.hardware [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.460 248514 DEBUG nova.virt.hardware [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.461 248514 DEBUG nova.virt.hardware [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.461 248514 DEBUG nova.virt.hardware [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.461 248514 DEBUG nova.virt.hardware [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.462 248514 DEBUG nova.virt.hardware [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.462 248514 DEBUG nova.virt.hardware [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.463 248514 DEBUG nova.virt.hardware [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.463 248514 DEBUG nova.virt.hardware [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.464 248514 DEBUG nova.virt.hardware [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.464 248514 DEBUG nova.virt.hardware [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
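The debug lines above show nova.virt.hardware walking from "Flavor limits 0:0:0" (no constraints) to the single viable topology 1:1:1 for a 1-vCPU m1.nano guest. A minimal sketch of that enumeration step, with a hypothetical helper name (the real `_get_possible_cpu_topologies` in nova/virt/hardware.py also applies flavor/image preferences and ordering):

```python
from itertools import product

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    """Enumerate (sockets, cores, threads) triples whose product equals vcpus.

    Simplified sketch: with no flavor or image constraints (the 0:0:0 limits
    in the log), every factorization up to the hard maxima is a candidate.
    """
    topos = []
    for sockets, cores, threads in product(
            range(1, min(vcpus, max_sockets) + 1),
            range(1, min(vcpus, max_cores) + 1),
            range(1, min(vcpus, max_threads) + 1)):
        if sockets * cores * threads == vcpus:
            topos.append((sockets, cores, threads))
    return topos

# A 1-vCPU guest has exactly one factorization, matching "Got 1 possible
# topologies" and the chosen VirtCPUTopology(cores=1,sockets=1,threads=1).
print(possible_topologies(1))
```

For larger guests the same search yields several candidates (e.g. 4 vCPUs admits 2:2:1, 4:1:1, ...), which is why nova then sorts them against the preferred topology.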
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.468 248514 DEBUG oslo_concurrency.processutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.587 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.635 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.892 248514 INFO nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Creating config drive at /var/lib/nova/instances/06335528-6de3-417b-837a-06e18baaf91f/disk.config#033[00m
Dec 13 03:14:31 np0005558241 nova_compute[248510]: 2025-12-13 08:14:31.902 248514 DEBUG oslo_concurrency.processutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/06335528-6de3-417b-837a-06e18baaf91f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy3q8baeh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:32 np0005558241 nova_compute[248510]: 2025-12-13 08:14:32.035 248514 DEBUG oslo_concurrency.processutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/06335528-6de3-417b-837a-06e18baaf91f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy3q8baeh" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:32 np0005558241 nova_compute[248510]: 2025-12-13 08:14:32.056 248514 DEBUG nova.storage.rbd_utils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] rbd image 06335528-6de3-417b-837a-06e18baaf91f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:32 np0005558241 nova_compute[248510]: 2025-12-13 08:14:32.059 248514 DEBUG oslo_concurrency.processutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/06335528-6de3-417b-837a-06e18baaf91f/disk.config 06335528-6de3-417b-837a-06e18baaf91f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:14:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2332899944' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:14:32 np0005558241 nova_compute[248510]: 2025-12-13 08:14:32.094 248514 DEBUG oslo_concurrency.processutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:32 np0005558241 nova_compute[248510]: 2025-12-13 08:14:32.121 248514 DEBUG nova.storage.rbd_utils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] rbd image 7bb2ecd0-7c38-401e-9999-a3f457ede62f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:32 np0005558241 nova_compute[248510]: 2025-12-13 08:14:32.130 248514 DEBUG oslo_concurrency.processutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:32 np0005558241 nova_compute[248510]: 2025-12-13 08:14:32.217 248514 DEBUG oslo_concurrency.processutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/06335528-6de3-417b-837a-06e18baaf91f/disk.config 06335528-6de3-417b-837a-06e18baaf91f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:32 np0005558241 nova_compute[248510]: 2025-12-13 08:14:32.219 248514 INFO nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Deleting local config drive /var/lib/nova/instances/06335528-6de3-417b-837a-06e18baaf91f/disk.config because it was imported into RBD.#033[00m
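The config-drive sequence for instance 06335528 above has three steps: build an ISO9660 image locally with mkisofs (volume label `config-2`, which cloud-init probes for), `rbd import` it into the Ceph `vms` pool, then delete the local copy. The sketch below only reconstructs and prints the two command lines from the log for readability; it does not execute them, and the staging directory `/tmp/tmpy3q8baeh` is the ephemeral tempdir from this particular run:

```python
# Reconstruct the config-drive publish flow from the log. Commands are
# printed, not run; all paths/names are copied from the log lines above.
instance = "06335528-6de3-417b-837a-06e18baaf91f"
local_iso = f"/var/lib/nova/instances/{instance}/disk.config"

mkisofs_cmd = [
    "/usr/bin/mkisofs", "-o", local_iso,
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-quiet", "-J", "-r",
    "-V", "config-2",        # volume label cloud-init's ConfigDrive source expects
    "/tmp/tmpy3q8baeh",      # ephemeral staging dir holding the metadata tree
]
rbd_cmd = [
    "rbd", "import", "--pool", "vms", local_iso,
    f"{instance}_disk.config", "--image-format=2",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
]

for cmd in (mkisofs_cmd, rbd_cmd):
    print(" ".join(cmd))
```

Once the import returns 0, the local ISO is redundant (the guest attaches the RBD image directly, as the cdrom disk in the domain XML below confirms), hence the "Deleting local config drive" message.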
Dec 13 03:14:32 np0005558241 nova_compute[248510]: 2025-12-13 08:14:32.253 248514 DEBUG nova.compute.manager [None req-27d3b6fa-2342-40f5-9ed1-16828d648a7f 1d47f19f27c247e2a923463aae1593c9 0117f42cd01644b1932f9e0502dec820 - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:14:32 np0005558241 nova_compute[248510]: 2025-12-13 08:14:32.262 248514 INFO nova.compute.manager [None req-27d3b6fa-2342-40f5-9ed1-16828d648a7f 1d47f19f27c247e2a923463aae1593c9 0117f42cd01644b1932f9e0502dec820 - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Retrieving diagnostics#033[00m
Dec 13 03:14:32 np0005558241 systemd-machined[210538]: New machine qemu-10-instance-0000000a.
Dec 13 03:14:32 np0005558241 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Dec 13 03:14:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1439: 321 pgs: 321 active+clean; 237 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.8 MiB/s wr, 213 op/s
Dec 13 03:14:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:14:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2240251699' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:14:32 np0005558241 nova_compute[248510]: 2025-12-13 08:14:32.851 248514 DEBUG oslo_concurrency.processutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.721s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
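Each `ceph mon dump --format=json` round-trip above (0.6–0.7 s, audited on the mon side as `client.openstack ... dispatch`) is nova's rbd driver discovering monitor endpoints so it can emit `<host name="..." port="6789"/>` elements in the guest disk XML. A sketch of that parse, against a hypothetical, heavily trimmed sample of the JSON (the real document carries fsid, epoch, features, and richer `public_addrs` structures):

```python
import json

# Hypothetical trimmed shape of `ceph mon dump --format=json`; the legacy
# "addr" field carries host:port/nonce. Real output has many more fields.
MON_DUMP = json.dumps({
    "mons": [
        {"name": "compute-0", "addr": "192.168.122.100:6789/0"},
    ]
})

def monitor_endpoints(dump_json):
    """Extract (host, port) pairs, dropping the trailing /nonce if present."""
    endpoints = []
    for mon in json.loads(dump_json)["mons"]:
        addr = mon["addr"].split("/")[0]          # "192.168.122.100:6789"
        host, port = addr.rsplit(":", 1)
        endpoints.append((host, int(port)))
    return endpoints

print(monitor_endpoints(MON_DUMP))
```

The single endpoint recovered here matches the one monitor host this deployment advertises, which is exactly what appears in both `<source protocol="rbd">` stanzas of the domain XML below.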
Dec 13 03:14:32 np0005558241 nova_compute[248510]: 2025-12-13 08:14:32.854 248514 DEBUG nova.objects.instance [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lazy-loading 'pci_devices' on Instance uuid 7bb2ecd0-7c38-401e-9999-a3f457ede62f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:14:32 np0005558241 nova_compute[248510]: 2025-12-13 08:14:32.913 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.037 248514 DEBUG nova.compute.manager [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.039 248514 DEBUG nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.040 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613673.0386775, 06335528-6de3-417b-837a-06e18baaf91f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.040 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 06335528-6de3-417b-837a-06e18baaf91f] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.046 248514 INFO nova.virt.libvirt.driver [-] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Instance spawned successfully.#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.047 248514 DEBUG nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.133 248514 DEBUG nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:14:33 np0005558241 nova_compute[248510]:  <uuid>7bb2ecd0-7c38-401e-9999-a3f457ede62f</uuid>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:  <name>instance-0000000b</name>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <nova:name>tempest-LiveMigrationNegativeTest-server-2055290029</nova:name>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:14:31</nova:creationTime>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:14:33 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:        <nova:user uuid="d5a7834344c24e5094e33d0227489d0a">tempest-LiveMigrationNegativeTest-1226694505-project-member</nova:user>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:        <nova:project uuid="00b8bf8c06f844f49b8b633781cfebef">tempest-LiveMigrationNegativeTest-1226694505</nova:project>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <entry name="serial">7bb2ecd0-7c38-401e-9999-a3f457ede62f</entry>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <entry name="uuid">7bb2ecd0-7c38-401e-9999-a3f457ede62f</entry>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/7bb2ecd0-7c38-401e-9999-a3f457ede62f_disk">
Dec 13 03:14:33 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:14:33 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/7bb2ecd0-7c38-401e-9999-a3f457ede62f_disk.config">
Dec 13 03:14:33 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:14:33 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/7bb2ecd0-7c38-401e-9999-a3f457ede62f/console.log" append="off"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:14:33 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:14:33 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:14:33 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:14:33 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
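The domain XML dumped above defines two network disks: the RBD-backed root disk on `vda` (virtio) and the config-drive cdrom on `sda` (sata), both authenticated via the `openstack` cephx user. A small self-contained sketch of inspecting such XML with the standard library, using a reduced copy of the `<devices>` section from the log:

```python
import xml.etree.ElementTree as ET

# Reduced excerpt of the guest XML logged by _get_guest_xml above.
DOMAIN_XML = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/7bb2ecd0-7c38-401e-9999-a3f457ede62f_disk"/>
      <target dev="vda" bus="virtio"/>
    </disk>
    <disk type="network" device="cdrom">
      <source protocol="rbd" name="vms/7bb2ecd0-7c38-401e-9999-a3f457ede62f_disk.config"/>
      <target dev="sda" bus="sata"/>
    </disk>
  </devices>
</domain>
"""

def list_disks(xml_text):
    """Return (target dev, device kind, backing image name) per <disk>."""
    root = ET.fromstring(xml_text)
    return [(d.find("target").get("dev"),
             d.get("device"),
             d.find("source").get("name"))
            for d in root.iter("disk")]

for dev, kind, image in list_disks(DOMAIN_XML):
    print(dev, kind, image)
```

This is the same pairing the "No BDM found with device name vda/sda" lines below refer to: nova checks each target dev against block device mappings before deciding whether to build disk metadata for it.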
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.143 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.153 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.161 248514 DEBUG nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.162 248514 DEBUG nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.162 248514 DEBUG nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.163 248514 DEBUG nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.163 248514 DEBUG nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.164 248514 DEBUG nova.virt.libvirt.driver [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.206 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-523b15f7-aac8-4023-957c-9ffeab4d92ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.207 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.208 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.209 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 06335528-6de3-417b-837a-06e18baaf91f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.209 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613673.0389519, 06335528-6de3-417b-837a-06e18baaf91f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.209 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 06335528-6de3-417b-837a-06e18baaf91f] VM Started (Lifecycle Event)#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.211 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.370 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.371 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.372 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.372 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.372 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.373 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.398 248514 INFO nova.compute.manager [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Took 5.52 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.399 248514 DEBUG nova.compute.manager [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.412 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.418 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.509 248514 DEBUG nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.509 248514 DEBUG nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.510 248514 INFO nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Using config drive#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.538 248514 DEBUG nova.storage.rbd_utils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] rbd image 7bb2ecd0-7c38-401e-9999-a3f457ede62f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.608 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 06335528-6de3-417b-837a-06e18baaf91f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:14:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.841 248514 INFO nova.compute.manager [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Took 9.83 seconds to build instance.#033[00m
Dec 13 03:14:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:14:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1920579667' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:14:33 np0005558241 nova_compute[248510]: 2025-12-13 08:14:33.936 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.121 248514 DEBUG oslo_concurrency.lockutils [None req-730142c6-ee35-4ee4-b596-3317be321425 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Lock "06335528-6de3-417b-837a-06e18baaf91f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.259 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.260 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.264 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.265 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.268 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.268 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.272 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.272 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.276 248514 INFO nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Creating config drive at /var/lib/nova/instances/7bb2ecd0-7c38-401e-9999-a3f457ede62f/disk.config#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.282 248514 DEBUG oslo_concurrency.processutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7bb2ecd0-7c38-401e-9999-a3f457ede62f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuoi6m2o4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.311 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.312 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.416 248514 DEBUG oslo_concurrency.processutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7bb2ecd0-7c38-401e-9999-a3f457ede62f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuoi6m2o4" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.449 248514 DEBUG nova.storage.rbd_utils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] rbd image 7bb2ecd0-7c38-401e-9999-a3f457ede62f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.455 248514 DEBUG oslo_concurrency.processutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7bb2ecd0-7c38-401e-9999-a3f457ede62f/disk.config 7bb2ecd0-7c38-401e-9999-a3f457ede62f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.577 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.579 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4285MB free_disk=59.88849866203964GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.579 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.580 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.685 248514 DEBUG oslo_concurrency.processutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7bb2ecd0-7c38-401e-9999-a3f457ede62f/disk.config 7bb2ecd0-7c38-401e-9999-a3f457ede62f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.685 248514 INFO nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Deleting local config drive /var/lib/nova/instances/7bb2ecd0-7c38-401e-9999-a3f457ede62f/disk.config because it was imported into RBD.#033[00m
Dec 13 03:14:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 306 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 9.3 MiB/s wr, 333 op/s
Dec 13 03:14:34 np0005558241 systemd-machined[210538]: New machine qemu-11-instance-0000000b.
Dec 13 03:14:34 np0005558241 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.842 248514 DEBUG nova.compute.manager [None req-dc6f90e2-9f55-47d2-ad18-ffa28792ea15 e0886ca46eb943b2ab42d18200748d96 04792fb9b6b249138f31c463ca83cec6 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.844 248514 DEBUG nova.compute.manager [None req-dc6f90e2-9f55-47d2-ad18-ffa28792ea15 e0886ca46eb943b2ab42d18200748d96 04792fb9b6b249138f31c463ca83cec6 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.844 248514 DEBUG oslo_concurrency.lockutils [None req-dc6f90e2-9f55-47d2-ad18-ffa28792ea15 e0886ca46eb943b2ab42d18200748d96 04792fb9b6b249138f31c463ca83cec6 - - default default] Acquiring lock "refresh_cache-06335528-6de3-417b-837a-06e18baaf91f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.845 248514 DEBUG oslo_concurrency.lockutils [None req-dc6f90e2-9f55-47d2-ad18-ffa28792ea15 e0886ca46eb943b2ab42d18200748d96 04792fb9b6b249138f31c463ca83cec6 - - default default] Acquired lock "refresh_cache-06335528-6de3-417b-837a-06e18baaf91f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.845 248514 DEBUG nova.network.neutron [None req-dc6f90e2-9f55-47d2-ad18-ffa28792ea15 e0886ca46eb943b2ab42d18200748d96 04792fb9b6b249138f31c463ca83cec6 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.849 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 523b15f7-aac8-4023-957c-9ffeab4d92ff actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.850 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance c46820c9-23b7-492b-bd49-f3f444722252 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.851 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 3cf47630-6d1c-4617-ab97-b60185421433 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.851 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 06335528-6de3-417b-837a-06e18baaf91f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.851 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 7bb2ecd0-7c38-401e-9999-a3f457ede62f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.852 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.852 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.858 248514 DEBUG oslo_concurrency.lockutils [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Acquiring lock "c46820c9-23b7-492b-bd49-f3f444722252" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.859 248514 DEBUG oslo_concurrency.lockutils [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "c46820c9-23b7-492b-bd49-f3f444722252" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.860 248514 DEBUG oslo_concurrency.lockutils [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Acquiring lock "c46820c9-23b7-492b-bd49-f3f444722252-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.860 248514 DEBUG oslo_concurrency.lockutils [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "c46820c9-23b7-492b-bd49-f3f444722252-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.861 248514 DEBUG oslo_concurrency.lockutils [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "c46820c9-23b7-492b-bd49-f3f444722252-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.863 248514 INFO nova.compute.manager [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Terminating instance#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.865 248514 DEBUG oslo_concurrency.lockutils [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Acquiring lock "refresh_cache-c46820c9-23b7-492b-bd49-f3f444722252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.865 248514 DEBUG oslo_concurrency.lockutils [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Acquired lock "refresh_cache-c46820c9-23b7-492b-bd49-f3f444722252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:14:34 np0005558241 nova_compute[248510]: 2025-12-13 08:14:34.866 248514 DEBUG nova.network.neutron [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.007 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.031 248514 DEBUG nova.network.neutron [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.055 248514 DEBUG nova.network.neutron [None req-dc6f90e2-9f55-47d2-ad18-ffa28792ea15 e0886ca46eb943b2ab42d18200748d96 04792fb9b6b249138f31c463ca83cec6 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.331 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613675.3267345, 7bb2ecd0-7c38-401e-9999-a3f457ede62f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.331 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.334 248514 DEBUG nova.compute.manager [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.334 248514 DEBUG nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.343 248514 INFO nova.virt.libvirt.driver [-] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Instance spawned successfully.#033[00m
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.343 248514 DEBUG nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.374 248514 DEBUG oslo_concurrency.lockutils [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Acquiring lock "06335528-6de3-417b-837a-06e18baaf91f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.375 248514 DEBUG oslo_concurrency.lockutils [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Lock "06335528-6de3-417b-837a-06e18baaf91f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.375 248514 DEBUG oslo_concurrency.lockutils [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Acquiring lock "06335528-6de3-417b-837a-06e18baaf91f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.376 248514 DEBUG oslo_concurrency.lockutils [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Lock "06335528-6de3-417b-837a-06e18baaf91f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.376 248514 DEBUG oslo_concurrency.lockutils [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Lock "06335528-6de3-417b-837a-06e18baaf91f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.377 248514 INFO nova.compute.manager [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Terminating instance
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.378 248514 DEBUG oslo_concurrency.lockutils [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Acquiring lock "refresh_cache-06335528-6de3-417b-837a-06e18baaf91f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.389 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.395 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.400 248514 DEBUG nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.400 248514 DEBUG nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.401 248514 DEBUG nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.401 248514 DEBUG nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.401 248514 DEBUG nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.402 248514 DEBUG nova.virt.libvirt.driver [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.472 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.473 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613675.3308027, 7bb2ecd0-7c38-401e-9999-a3f457ede62f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.474 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] VM Started (Lifecycle Event)
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.557 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.561 248514 DEBUG nova.network.neutron [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.564 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.580 248514 DEBUG nova.network.neutron [None req-dc6f90e2-9f55-47d2-ad18-ffa28792ea15 e0886ca46eb943b2ab42d18200748d96 04792fb9b6b249138f31c463ca83cec6 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:14:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:14:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1321410617' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.610 248514 INFO nova.compute.manager [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Took 5.17 seconds to spawn the instance on the hypervisor.
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.611 248514 DEBUG nova.compute.manager [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.612 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.624 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.625 248514 DEBUG oslo_concurrency.lockutils [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Releasing lock "refresh_cache-c46820c9-23b7-492b-bd49-f3f444722252" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.625 248514 DEBUG nova.compute.manager [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.628 248514 DEBUG oslo_concurrency.lockutils [None req-dc6f90e2-9f55-47d2-ad18-ffa28792ea15 e0886ca46eb943b2ab42d18200748d96 04792fb9b6b249138f31c463ca83cec6 - - default default] Releasing lock "refresh_cache-06335528-6de3-417b-837a-06e18baaf91f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.630 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.631 248514 DEBUG oslo_concurrency.lockutils [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Acquired lock "refresh_cache-06335528-6de3-417b-837a-06e18baaf91f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.631 248514 DEBUG nova.network.neutron [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.637 248514 INFO nova.virt.libvirt.driver [-] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Instance destroyed successfully.
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.637 248514 DEBUG nova.objects.instance [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lazy-loading 'resources' on Instance uuid c46820c9-23b7-492b-bd49-f3f444722252 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.800 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.856 248514 INFO nova.compute.manager [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Took 9.49 seconds to build instance.
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.917 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.918 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.918 248514 DEBUG oslo_concurrency.lockutils [None req-27727d2f-a10a-4411-af55-329738f57dfd d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "7bb2ecd0-7c38-401e-9999-a3f457ede62f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.918 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.919 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.937 248514 DEBUG nova.network.neutron [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:14:35 np0005558241 nova_compute[248510]: 2025-12-13 08:14:35.970 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.209 248514 DEBUG nova.network.neutron [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.238 248514 DEBUG oslo_concurrency.lockutils [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Releasing lock "refresh_cache-06335528-6de3-417b-837a-06e18baaf91f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.239 248514 DEBUG nova.compute.manager [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 03:14:36 np0005558241 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec 13 03:14:36 np0005558241 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 3.891s CPU time.
Dec 13 03:14:36 np0005558241 systemd-machined[210538]: Machine qemu-10-instance-0000000a terminated.
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.462 248514 INFO nova.virt.libvirt.driver [-] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Instance destroyed successfully.
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.463 248514 DEBUG nova.objects.instance [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Lazy-loading 'resources' on Instance uuid 06335528-6de3-417b-837a-06e18baaf91f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.536 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.537 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.537 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.537 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.546 248514 INFO nova.virt.libvirt.driver [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Deleting instance files /var/lib/nova/instances/c46820c9-23b7-492b-bd49-f3f444722252_del
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.547 248514 INFO nova.virt.libvirt.driver [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Deletion of /var/lib/nova/instances/c46820c9-23b7-492b-bd49-f3f444722252_del complete
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.639 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:14:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 306 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.8 MiB/s wr, 225 op/s
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.740 248514 INFO nova.compute.manager [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Took 1.11 seconds to destroy the instance on the hypervisor.
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.741 248514 DEBUG oslo.service.loopingcall [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.741 248514 DEBUG nova.compute.manager [-] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.741 248514 DEBUG nova.network.neutron [-] [instance: c46820c9-23b7-492b-bd49-f3f444722252] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.799 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.818 248514 INFO nova.virt.libvirt.driver [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Deleting instance files /var/lib/nova/instances/06335528-6de3-417b-837a-06e18baaf91f_del
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.819 248514 INFO nova.virt.libvirt.driver [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Deletion of /var/lib/nova/instances/06335528-6de3-417b-837a-06e18baaf91f_del complete
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.968 248514 INFO nova.compute.manager [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Took 0.73 seconds to destroy the instance on the hypervisor.
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.969 248514 DEBUG oslo.service.loopingcall [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.969 248514 DEBUG nova.compute.manager [-] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 03:14:36 np0005558241 nova_compute[248510]: 2025-12-13 08:14:36.970 248514 DEBUG nova.network.neutron [-] [instance: 06335528-6de3-417b-837a-06e18baaf91f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 03:14:37 np0005558241 nova_compute[248510]: 2025-12-13 08:14:37.823 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:14:37 np0005558241 nova_compute[248510]: 2025-12-13 08:14:37.875 248514 DEBUG nova.network.neutron [-] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:14:37 np0005558241 nova_compute[248510]: 2025-12-13 08:14:37.894 248514 DEBUG nova.network.neutron [-] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:14:37 np0005558241 nova_compute[248510]: 2025-12-13 08:14:37.982 248514 DEBUG nova.network.neutron [-] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:14:37 np0005558241 nova_compute[248510]: 2025-12-13 08:14:37.990 248514 DEBUG nova.network.neutron [-] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:14:38 np0005558241 nova_compute[248510]: 2025-12-13 08:14:38.412 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:14:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:14:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 234 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 7.1 MiB/s rd, 4.8 MiB/s wr, 374 op/s
Dec 13 03:14:39 np0005558241 nova_compute[248510]: 2025-12-13 08:14:39.416 248514 INFO nova.compute.manager [-] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Took 2.67 seconds to deallocate network for instance.
Dec 13 03:14:39 np0005558241 nova_compute[248510]: 2025-12-13 08:14:39.443 248514 INFO nova.compute.manager [-] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Took 2.47 seconds to deallocate network for instance.
Dec 13 03:14:39 np0005558241 nova_compute[248510]: 2025-12-13 08:14:39.502 248514 DEBUG oslo_concurrency.lockutils [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:14:39 np0005558241 nova_compute[248510]: 2025-12-13 08:14:39.503 248514 DEBUG oslo_concurrency.lockutils [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:14:39 np0005558241 nova_compute[248510]: 2025-12-13 08:14:39.538 248514 DEBUG oslo_concurrency.lockutils [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:14:39 np0005558241 nova_compute[248510]: 2025-12-13 08:14:39.665 248514 DEBUG oslo_concurrency.processutils [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:14:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:14:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:14:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:14:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:14:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:14:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:14:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:14:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3751037268' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:14:40 np0005558241 nova_compute[248510]: 2025-12-13 08:14:40.250 248514 DEBUG oslo_concurrency.processutils [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:14:40 np0005558241 nova_compute[248510]: 2025-12-13 08:14:40.257 248514 DEBUG nova.compute.provider_tree [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:14:40 np0005558241 nova_compute[248510]: 2025-12-13 08:14:40.281 248514 DEBUG nova.scheduler.client.report [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:14:40 np0005558241 nova_compute[248510]: 2025-12-13 08:14:40.653 248514 DEBUG oslo_concurrency.lockutils [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:14:40 np0005558241 nova_compute[248510]: 2025-12-13 08:14:40.656 248514 DEBUG oslo_concurrency.lockutils [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:14:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 214 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.8 MiB/s wr, 310 op/s
Dec 13 03:14:40 np0005558241 nova_compute[248510]: 2025-12-13 08:14:40.918 248514 INFO nova.scheduler.client.report [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Deleted allocations for instance c46820c9-23b7-492b-bd49-f3f444722252
Dec 13 03:14:40 np0005558241 nova_compute[248510]: 2025-12-13 08:14:40.954 248514 DEBUG oslo_concurrency.processutils [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:14:41 np0005558241 nova_compute[248510]: 2025-12-13 08:14:41.033 248514 DEBUG oslo_concurrency.lockutils [None req-b3518738-58e1-4931-8c3d-5da025d314c9 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "c46820c9-23b7-492b-bd49-f3f444722252" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:14:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:14:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3064989200' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:14:41 np0005558241 nova_compute[248510]: 2025-12-13 08:14:41.517 248514 DEBUG oslo_concurrency.processutils [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:41 np0005558241 nova_compute[248510]: 2025-12-13 08:14:41.523 248514 DEBUG nova.compute.provider_tree [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:14:41 np0005558241 nova_compute[248510]: 2025-12-13 08:14:41.585 248514 DEBUG oslo_concurrency.lockutils [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Acquiring lock "523b15f7-aac8-4023-957c-9ffeab4d92ff" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:41 np0005558241 nova_compute[248510]: 2025-12-13 08:14:41.586 248514 DEBUG oslo_concurrency.lockutils [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "523b15f7-aac8-4023-957c-9ffeab4d92ff" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:41 np0005558241 nova_compute[248510]: 2025-12-13 08:14:41.586 248514 DEBUG oslo_concurrency.lockutils [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Acquiring lock "523b15f7-aac8-4023-957c-9ffeab4d92ff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:41 np0005558241 nova_compute[248510]: 2025-12-13 08:14:41.586 248514 DEBUG oslo_concurrency.lockutils [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "523b15f7-aac8-4023-957c-9ffeab4d92ff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:41 np0005558241 nova_compute[248510]: 2025-12-13 08:14:41.587 248514 DEBUG oslo_concurrency.lockutils [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "523b15f7-aac8-4023-957c-9ffeab4d92ff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:41 np0005558241 nova_compute[248510]: 2025-12-13 08:14:41.588 248514 INFO nova.compute.manager [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Terminating instance#033[00m
Dec 13 03:14:41 np0005558241 nova_compute[248510]: 2025-12-13 08:14:41.589 248514 DEBUG oslo_concurrency.lockutils [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Acquiring lock "refresh_cache-523b15f7-aac8-4023-957c-9ffeab4d92ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:14:41 np0005558241 nova_compute[248510]: 2025-12-13 08:14:41.589 248514 DEBUG oslo_concurrency.lockutils [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Acquired lock "refresh_cache-523b15f7-aac8-4023-957c-9ffeab4d92ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:14:41 np0005558241 nova_compute[248510]: 2025-12-13 08:14:41.589 248514 DEBUG nova.network.neutron [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:14:41 np0005558241 nova_compute[248510]: 2025-12-13 08:14:41.641 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:14:41 np0005558241 nova_compute[248510]: 2025-12-13 08:14:41.779 248514 DEBUG nova.scheduler.client.report [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:14:42 np0005558241 nova_compute[248510]: 2025-12-13 08:14:42.040 248514 DEBUG oslo_concurrency.lockutils [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.384s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:42 np0005558241 nova_compute[248510]: 2025-12-13 08:14:42.239 248514 DEBUG nova.network.neutron [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:14:42 np0005558241 nova_compute[248510]: 2025-12-13 08:14:42.384 248514 INFO nova.scheduler.client.report [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Deleted allocations for instance 06335528-6de3-417b-837a-06e18baaf91f#033[00m
Dec 13 03:14:42 np0005558241 nova_compute[248510]: 2025-12-13 08:14:42.455 248514 DEBUG nova.compute.manager [None req-86210972-f5c0-4aec-9aaa-3413aeb99266 1d47f19f27c247e2a923463aae1593c9 0117f42cd01644b1932f9e0502dec820 - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:14:42 np0005558241 nova_compute[248510]: 2025-12-13 08:14:42.458 248514 INFO nova.compute.manager [None req-86210972-f5c0-4aec-9aaa-3413aeb99266 1d47f19f27c247e2a923463aae1593c9 0117f42cd01644b1932f9e0502dec820 - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Retrieving diagnostics#033[00m
Dec 13 03:14:42 np0005558241 nova_compute[248510]: 2025-12-13 08:14:42.568 248514 DEBUG nova.network.neutron [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:14:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 214 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.5 MiB/s wr, 299 op/s
Dec 13 03:14:42 np0005558241 nova_compute[248510]: 2025-12-13 08:14:42.893 248514 DEBUG oslo_concurrency.lockutils [None req-9f3b31fc-326d-435d-853f-8709d6931137 0a74cc39716242dc871d1ab627389698 f21bfaf361ba42a2bf41821eddd2b6f0 - - default default] Lock "06335528-6de3-417b-837a-06e18baaf91f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:42 np0005558241 nova_compute[248510]: 2025-12-13 08:14:42.901 248514 DEBUG oslo_concurrency.lockutils [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Releasing lock "refresh_cache-523b15f7-aac8-4023-957c-9ffeab4d92ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:14:42 np0005558241 nova_compute[248510]: 2025-12-13 08:14:42.902 248514 DEBUG nova.compute.manager [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:14:43 np0005558241 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec 13 03:14:43 np0005558241 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 13.350s CPU time.
Dec 13 03:14:43 np0005558241 systemd-machined[210538]: Machine qemu-7-instance-00000007 terminated.
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.124 248514 INFO nova.virt.libvirt.driver [-] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Instance destroyed successfully.#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.124 248514 DEBUG nova.objects.instance [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lazy-loading 'resources' on Instance uuid 523b15f7-aac8-4023-957c-9ffeab4d92ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.376 248514 DEBUG oslo_concurrency.lockutils [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Acquiring lock "3cf47630-6d1c-4617-ab97-b60185421433" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.377 248514 DEBUG oslo_concurrency.lockutils [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Lock "3cf47630-6d1c-4617-ab97-b60185421433" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.377 248514 DEBUG oslo_concurrency.lockutils [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Acquiring lock "3cf47630-6d1c-4617-ab97-b60185421433-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.378 248514 DEBUG oslo_concurrency.lockutils [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Lock "3cf47630-6d1c-4617-ab97-b60185421433-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.378 248514 DEBUG oslo_concurrency.lockutils [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Lock "3cf47630-6d1c-4617-ab97-b60185421433-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.379 248514 INFO nova.compute.manager [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Terminating instance#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.380 248514 DEBUG oslo_concurrency.lockutils [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Acquiring lock "refresh_cache-3cf47630-6d1c-4617-ab97-b60185421433" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.380 248514 DEBUG oslo_concurrency.lockutils [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Acquired lock "refresh_cache-3cf47630-6d1c-4617-ab97-b60185421433" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.381 248514 DEBUG nova.network.neutron [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.435 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.483 248514 DEBUG oslo_concurrency.lockutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Acquiring lock "ddf242e8-2fff-4bae-99eb-5d4441eb8b75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.484 248514 DEBUG oslo_concurrency.lockutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "ddf242e8-2fff-4bae-99eb-5d4441eb8b75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.499 248514 INFO nova.virt.libvirt.driver [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Deleting instance files /var/lib/nova/instances/523b15f7-aac8-4023-957c-9ffeab4d92ff_del#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.500 248514 INFO nova.virt.libvirt.driver [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Deletion of /var/lib/nova/instances/523b15f7-aac8-4023-957c-9ffeab4d92ff_del complete#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.589 248514 DEBUG nova.compute.manager [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.624 248514 DEBUG nova.network.neutron [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:14:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.822 248514 INFO nova.compute.manager [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Took 0.92 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.823 248514 DEBUG oslo.service.loopingcall [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.824 248514 DEBUG nova.compute.manager [-] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:14:43 np0005558241 nova_compute[248510]: 2025-12-13 08:14:43.824 248514 DEBUG nova.network.neutron [-] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:14:44 np0005558241 nova_compute[248510]: 2025-12-13 08:14:44.023 248514 DEBUG nova.network.neutron [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:14:44 np0005558241 nova_compute[248510]: 2025-12-13 08:14:44.025 248514 DEBUG nova.network.neutron [-] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:14:44 np0005558241 nova_compute[248510]: 2025-12-13 08:14:44.051 248514 DEBUG oslo_concurrency.lockutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:44 np0005558241 nova_compute[248510]: 2025-12-13 08:14:44.052 248514 DEBUG oslo_concurrency.lockutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:44 np0005558241 nova_compute[248510]: 2025-12-13 08:14:44.059 248514 DEBUG nova.virt.hardware [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:14:44 np0005558241 nova_compute[248510]: 2025-12-13 08:14:44.059 248514 INFO nova.compute.claims [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:14:44 np0005558241 nova_compute[248510]: 2025-12-13 08:14:44.190 248514 DEBUG oslo_concurrency.lockutils [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Releasing lock "refresh_cache-3cf47630-6d1c-4617-ab97-b60185421433" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:14:44 np0005558241 nova_compute[248510]: 2025-12-13 08:14:44.191 248514 DEBUG nova.compute.manager [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:14:44 np0005558241 nova_compute[248510]: 2025-12-13 08:14:44.226 248514 DEBUG nova.network.neutron [-] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:14:44 np0005558241 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Dec 13 03:14:44 np0005558241 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 12.358s CPU time.
Dec 13 03:14:44 np0005558241 systemd-machined[210538]: Machine qemu-9-instance-00000009 terminated.
Dec 13 03:14:44 np0005558241 nova_compute[248510]: 2025-12-13 08:14:44.305 248514 INFO nova.compute.manager [-] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Took 0.48 seconds to deallocate network for instance.#033[00m
Dec 13 03:14:44 np0005558241 nova_compute[248510]: 2025-12-13 08:14:44.409 248514 INFO nova.virt.libvirt.driver [-] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Instance destroyed successfully.#033[00m
Dec 13 03:14:44 np0005558241 nova_compute[248510]: 2025-12-13 08:14:44.410 248514 DEBUG nova.objects.instance [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Lazy-loading 'resources' on Instance uuid 3cf47630-6d1c-4617-ab97-b60185421433 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:14:44 np0005558241 nova_compute[248510]: 2025-12-13 08:14:44.538 248514 DEBUG oslo_concurrency.processutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1445: 321 pgs: 321 active+clean; 193 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 4.6 MiB/s wr, 375 op/s
Dec 13 03:14:44 np0005558241 nova_compute[248510]: 2025-12-13 08:14:44.757 248514 INFO nova.virt.libvirt.driver [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Deleting instance files /var/lib/nova/instances/3cf47630-6d1c-4617-ab97-b60185421433_del#033[00m
Dec 13 03:14:44 np0005558241 nova_compute[248510]: 2025-12-13 08:14:44.758 248514 INFO nova.virt.libvirt.driver [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Deletion of /var/lib/nova/instances/3cf47630-6d1c-4617-ab97-b60185421433_del complete#033[00m
Dec 13 03:14:44 np0005558241 nova_compute[248510]: 2025-12-13 08:14:44.972 248514 DEBUG oslo_concurrency.lockutils [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:14:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4003956225' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.107 248514 DEBUG oslo_concurrency.processutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.113 248514 DEBUG nova.compute.provider_tree [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.167 248514 INFO nova.compute.manager [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.168 248514 DEBUG oslo.service.loopingcall [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.168 248514 DEBUG nova.compute.manager [-] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.168 248514 DEBUG nova.network.neutron [-] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.172 248514 DEBUG nova.scheduler.client.report [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.448 248514 DEBUG oslo_concurrency.lockutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.397s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.449 248514 DEBUG nova.compute.manager [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.452 248514 DEBUG oslo_concurrency.lockutils [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.480s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.566 248514 DEBUG nova.network.neutron [-] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.574 248514 DEBUG oslo_concurrency.processutils [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.745 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765613670.7431178, c46820c9-23b7-492b-bd49-f3f444722252 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.746 248514 INFO nova.compute.manager [-] [instance: c46820c9-23b7-492b-bd49-f3f444722252] VM Stopped (Lifecycle Event)
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.862 248514 DEBUG nova.compute.manager [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.862 248514 DEBUG nova.network.neutron [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.866 248514 DEBUG nova.network.neutron [-] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.868 248514 DEBUG nova.compute.manager [None req-aeb4dd35-a822-4747-894b-75065e965a8d - - - - - -] [instance: c46820c9-23b7-492b-bd49-f3f444722252] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.891 248514 INFO nova.compute.manager [-] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Took 0.72 seconds to deallocate network for instance.
Dec 13 03:14:45 np0005558241 nova_compute[248510]: 2025-12-13 08:14:45.897 248514 INFO nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:14:46 np0005558241 nova_compute[248510]: 2025-12-13 08:14:46.116 248514 DEBUG nova.compute.manager [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:14:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:14:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1954437065' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:14:46 np0005558241 nova_compute[248510]: 2025-12-13 08:14:46.165 248514 DEBUG oslo_concurrency.processutils [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:14:46 np0005558241 nova_compute[248510]: 2025-12-13 08:14:46.172 248514 DEBUG nova.compute.provider_tree [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:14:46 np0005558241 nova_compute[248510]: 2025-12-13 08:14:46.299 248514 DEBUG oslo_concurrency.lockutils [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:14:46 np0005558241 nova_compute[248510]: 2025-12-13 08:14:46.427 248514 DEBUG nova.scheduler.client.report [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:14:46 np0005558241 nova_compute[248510]: 2025-12-13 08:14:46.643 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:14:46 np0005558241 nova_compute[248510]: 2025-12-13 08:14:46.660 248514 DEBUG oslo_concurrency.lockutils [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:14:46 np0005558241 nova_compute[248510]: 2025-12-13 08:14:46.663 248514 DEBUG oslo_concurrency.lockutils [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:14:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 193 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.1 MiB/s wr, 255 op/s
Dec 13 03:14:46 np0005558241 nova_compute[248510]: 2025-12-13 08:14:46.786 248514 DEBUG oslo_concurrency.processutils [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:14:46 np0005558241 nova_compute[248510]: 2025-12-13 08:14:46.909 248514 INFO nova.scheduler.client.report [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Deleted allocations for instance 523b15f7-aac8-4023-957c-9ffeab4d92ff
Dec 13 03:14:46 np0005558241 nova_compute[248510]: 2025-12-13 08:14:46.941 248514 DEBUG nova.compute.manager [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:14:46 np0005558241 nova_compute[248510]: 2025-12-13 08:14:46.943 248514 DEBUG nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:14:46 np0005558241 nova_compute[248510]: 2025-12-13 08:14:46.943 248514 INFO nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Creating image(s)
Dec 13 03:14:46 np0005558241 nova_compute[248510]: 2025-12-13 08:14:46.973 248514 DEBUG nova.storage.rbd_utils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] rbd image ddf242e8-2fff-4bae-99eb-5d4441eb8b75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.002 248514 DEBUG nova.storage.rbd_utils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] rbd image ddf242e8-2fff-4bae-99eb-5d4441eb8b75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.032 248514 DEBUG nova.storage.rbd_utils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] rbd image ddf242e8-2fff-4bae-99eb-5d4441eb8b75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.038 248514 DEBUG oslo_concurrency.processutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.066 248514 DEBUG nova.network.neutron [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.067 248514 DEBUG nova.compute.manager [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.111 248514 DEBUG oslo_concurrency.processutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.112 248514 DEBUG oslo_concurrency.lockutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.113 248514 DEBUG oslo_concurrency.lockutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.113 248514 DEBUG oslo_concurrency.lockutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.144 248514 DEBUG nova.storage.rbd_utils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] rbd image ddf242e8-2fff-4bae-99eb-5d4441eb8b75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.149 248514 DEBUG oslo_concurrency.processutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 ddf242e8-2fff-4bae-99eb-5d4441eb8b75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.314 248514 DEBUG oslo_concurrency.lockutils [None req-08cbb9eb-927b-4dba-aa17-839c857236d4 546de43f2a284b5589b25c3ae6f2ab3a 66e4b42b92b04724aa8401fca41d9c82 - - default default] Lock "523b15f7-aac8-4023-957c-9ffeab4d92ff" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:14:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:14:47 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3334214654' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.367 248514 DEBUG oslo_concurrency.processutils [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.373 248514 DEBUG nova.compute.provider_tree [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.412 248514 DEBUG nova.scheduler.client.report [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.478 248514 DEBUG oslo_concurrency.processutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 ddf242e8-2fff-4bae-99eb-5d4441eb8b75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.536 248514 DEBUG nova.storage.rbd_utils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] resizing rbd image ddf242e8-2fff-4bae-99eb-5d4441eb8b75_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.611 248514 DEBUG oslo_concurrency.lockutils [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.619 248514 DEBUG nova.objects.instance [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lazy-loading 'migration_context' on Instance uuid ddf242e8-2fff-4bae-99eb-5d4441eb8b75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.640 248514 DEBUG nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.641 248514 DEBUG nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Ensure instance console log exists: /var/lib/nova/instances/ddf242e8-2fff-4bae-99eb-5d4441eb8b75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.641 248514 DEBUG oslo_concurrency.lockutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.642 248514 DEBUG oslo_concurrency.lockutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.642 248514 DEBUG oslo_concurrency.lockutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.643 248514 DEBUG nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.649 248514 INFO nova.scheduler.client.report [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Deleted allocations for instance 3cf47630-6d1c-4617-ab97-b60185421433
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.651 248514 WARNING nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.657 248514 DEBUG nova.virt.libvirt.host [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.658 248514 DEBUG nova.virt.libvirt.host [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.663 248514 DEBUG nova.virt.libvirt.host [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.663 248514 DEBUG nova.virt.libvirt.host [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.664 248514 DEBUG nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.664 248514 DEBUG nova.virt.hardware [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.664 248514 DEBUG nova.virt.hardware [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.664 248514 DEBUG nova.virt.hardware [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.664 248514 DEBUG nova.virt.hardware [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.665 248514 DEBUG nova.virt.hardware [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.665 248514 DEBUG nova.virt.hardware [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.665 248514 DEBUG nova.virt.hardware [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.665 248514 DEBUG nova.virt.hardware [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.665 248514 DEBUG nova.virt.hardware [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.666 248514 DEBUG nova.virt.hardware [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.666 248514 DEBUG nova.virt.hardware [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 03:14:47 np0005558241 nova_compute[248510]: 2025-12-13 08:14:47.668 248514 DEBUG oslo_concurrency.processutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:14:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:14:48.086 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 03:14:48 np0005558241 nova_compute[248510]: 2025-12-13 08:14:48.086 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:14:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:14:48.089 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 03:14:48 np0005558241 nova_compute[248510]: 2025-12-13 08:14:48.139 248514 DEBUG oslo_concurrency.lockutils [None req-0cffe70b-f706-4ea0-8de0-4bc8f3eb34d9 89af206f7ce04f1d8b86a3ebd054cb05 24d87ef6a6664654a40b74febd73407f - - default default] Lock "3cf47630-6d1c-4617-ab97-b60185421433" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:14:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:14:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1794023888' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:14:48 np0005558241 nova_compute[248510]: 2025-12-13 08:14:48.314 248514 DEBUG oslo_concurrency.processutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:14:48 np0005558241 nova_compute[248510]: 2025-12-13 08:14:48.342 248514 DEBUG nova.storage.rbd_utils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] rbd image ddf242e8-2fff-4bae-99eb-5d4441eb8b75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:14:48 np0005558241 nova_compute[248510]: 2025-12-13 08:14:48.348 248514 DEBUG oslo_concurrency.processutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:14:48 np0005558241 nova_compute[248510]: 2025-12-13 08:14:48.475 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:14:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:14:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1447: 321 pgs: 321 active+clean; 156 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.6 MiB/s wr, 333 op/s
Dec 13 03:14:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:14:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4039680683' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:14:48 np0005558241 nova_compute[248510]: 2025-12-13 08:14:48.901 248514 DEBUG oslo_concurrency.processutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:48 np0005558241 nova_compute[248510]: 2025-12-13 08:14:48.904 248514 DEBUG nova.objects.instance [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lazy-loading 'pci_devices' on Instance uuid ddf242e8-2fff-4bae-99eb-5d4441eb8b75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:14:49 np0005558241 nova_compute[248510]: 2025-12-13 08:14:49.609 248514 DEBUG nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:14:49 np0005558241 nova_compute[248510]:  <uuid>ddf242e8-2fff-4bae-99eb-5d4441eb8b75</uuid>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:  <name>instance-0000000c</name>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <nova:name>tempest-LiveMigrationNegativeTest-server-1482924024</nova:name>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:14:47</nova:creationTime>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:14:49 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:        <nova:user uuid="d5a7834344c24e5094e33d0227489d0a">tempest-LiveMigrationNegativeTest-1226694505-project-member</nova:user>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:        <nova:project uuid="00b8bf8c06f844f49b8b633781cfebef">tempest-LiveMigrationNegativeTest-1226694505</nova:project>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <entry name="serial">ddf242e8-2fff-4bae-99eb-5d4441eb8b75</entry>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <entry name="uuid">ddf242e8-2fff-4bae-99eb-5d4441eb8b75</entry>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/ddf242e8-2fff-4bae-99eb-5d4441eb8b75_disk">
Dec 13 03:14:49 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:14:49 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/ddf242e8-2fff-4bae-99eb-5d4441eb8b75_disk.config">
Dec 13 03:14:49 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:14:49 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/ddf242e8-2fff-4bae-99eb-5d4441eb8b75/console.log" append="off"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:14:49 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:14:49 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:14:49 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:14:49 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:14:49 np0005558241 nova_compute[248510]: 2025-12-13 08:14:49.907 248514 DEBUG nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:14:49 np0005558241 nova_compute[248510]: 2025-12-13 08:14:49.907 248514 DEBUG nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:14:49 np0005558241 nova_compute[248510]: 2025-12-13 08:14:49.908 248514 INFO nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Using config drive#033[00m
Dec 13 03:14:49 np0005558241 nova_compute[248510]: 2025-12-13 08:14:49.934 248514 DEBUG nova.storage.rbd_utils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] rbd image ddf242e8-2fff-4bae-99eb-5d4441eb8b75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:50 np0005558241 nova_compute[248510]: 2025-12-13 08:14:50.514 248514 INFO nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Creating config drive at /var/lib/nova/instances/ddf242e8-2fff-4bae-99eb-5d4441eb8b75/disk.config#033[00m
Dec 13 03:14:50 np0005558241 nova_compute[248510]: 2025-12-13 08:14:50.521 248514 DEBUG oslo_concurrency.processutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ddf242e8-2fff-4bae-99eb-5d4441eb8b75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfmz3n2uv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:50 np0005558241 nova_compute[248510]: 2025-12-13 08:14:50.657 248514 DEBUG oslo_concurrency.processutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ddf242e8-2fff-4bae-99eb-5d4441eb8b75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfmz3n2uv" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:50 np0005558241 nova_compute[248510]: 2025-12-13 08:14:50.692 248514 DEBUG nova.storage.rbd_utils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] rbd image ddf242e8-2fff-4bae-99eb-5d4441eb8b75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:14:50 np0005558241 nova_compute[248510]: 2025-12-13 08:14:50.697 248514 DEBUG oslo_concurrency.processutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ddf242e8-2fff-4bae-99eb-5d4441eb8b75/disk.config ddf242e8-2fff-4bae-99eb-5d4441eb8b75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:14:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1448: 321 pgs: 321 active+clean; 164 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 942 KiB/s rd, 6.0 MiB/s wr, 223 op/s
Dec 13 03:14:50 np0005558241 nova_compute[248510]: 2025-12-13 08:14:50.865 248514 DEBUG oslo_concurrency.processutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ddf242e8-2fff-4bae-99eb-5d4441eb8b75/disk.config ddf242e8-2fff-4bae-99eb-5d4441eb8b75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:14:50 np0005558241 nova_compute[248510]: 2025-12-13 08:14:50.866 248514 INFO nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Deleting local config drive /var/lib/nova/instances/ddf242e8-2fff-4bae-99eb-5d4441eb8b75/disk.config because it was imported into RBD.#033[00m
Dec 13 03:14:50 np0005558241 systemd-machined[210538]: New machine qemu-12-instance-0000000c.
Dec 13 03:14:50 np0005558241 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Dec 13 03:14:51 np0005558241 nova_compute[248510]: 2025-12-13 08:14:51.460 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765613676.457914, 06335528-6de3-417b-837a-06e18baaf91f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:14:51 np0005558241 nova_compute[248510]: 2025-12-13 08:14:51.460 248514 INFO nova.compute.manager [-] [instance: 06335528-6de3-417b-837a-06e18baaf91f] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:14:51 np0005558241 nova_compute[248510]: 2025-12-13 08:14:51.644 248514 DEBUG nova.compute.manager [None req-a3c67967-310e-463b-b4fd-2fa7a532550b - - - - - -] [instance: 06335528-6de3-417b-837a-06e18baaf91f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:14:51 np0005558241 nova_compute[248510]: 2025-12-13 08:14:51.646 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:14:51 np0005558241 nova_compute[248510]: 2025-12-13 08:14:51.983 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613691.9832356, ddf242e8-2fff-4bae-99eb-5d4441eb8b75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:14:51 np0005558241 nova_compute[248510]: 2025-12-13 08:14:51.984 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:14:51 np0005558241 nova_compute[248510]: 2025-12-13 08:14:51.987 248514 DEBUG nova.compute.manager [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:14:51 np0005558241 nova_compute[248510]: 2025-12-13 08:14:51.988 248514 DEBUG nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:14:51 np0005558241 nova_compute[248510]: 2025-12-13 08:14:51.993 248514 INFO nova.virt.libvirt.driver [-] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Instance spawned successfully.#033[00m
Dec 13 03:14:51 np0005558241 nova_compute[248510]: 2025-12-13 08:14:51.993 248514 DEBUG nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:14:52 np0005558241 nova_compute[248510]: 2025-12-13 08:14:52.024 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:14:52 np0005558241 nova_compute[248510]: 2025-12-13 08:14:52.028 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:14:52 np0005558241 nova_compute[248510]: 2025-12-13 08:14:52.041 248514 DEBUG nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:52 np0005558241 nova_compute[248510]: 2025-12-13 08:14:52.042 248514 DEBUG nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:52 np0005558241 nova_compute[248510]: 2025-12-13 08:14:52.042 248514 DEBUG nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:52 np0005558241 nova_compute[248510]: 2025-12-13 08:14:52.042 248514 DEBUG nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:52 np0005558241 nova_compute[248510]: 2025-12-13 08:14:52.043 248514 DEBUG nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:52 np0005558241 nova_compute[248510]: 2025-12-13 08:14:52.043 248514 DEBUG nova.virt.libvirt.driver [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:14:52 np0005558241 nova_compute[248510]: 2025-12-13 08:14:52.062 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:14:52 np0005558241 nova_compute[248510]: 2025-12-13 08:14:52.063 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613691.986874, ddf242e8-2fff-4bae-99eb-5d4441eb8b75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:14:52 np0005558241 nova_compute[248510]: 2025-12-13 08:14:52.063 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] VM Started (Lifecycle Event)#033[00m
Dec 13 03:14:52 np0005558241 nova_compute[248510]: 2025-12-13 08:14:52.088 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:14:52 np0005558241 nova_compute[248510]: 2025-12-13 08:14:52.090 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:14:52 np0005558241 nova_compute[248510]: 2025-12-13 08:14:52.119 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:14:52 np0005558241 nova_compute[248510]: 2025-12-13 08:14:52.136 248514 INFO nova.compute.manager [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Took 5.19 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:14:52 np0005558241 nova_compute[248510]: 2025-12-13 08:14:52.137 248514 DEBUG nova.compute.manager [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:14:52 np0005558241 nova_compute[248510]: 2025-12-13 08:14:52.209 248514 INFO nova.compute.manager [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Took 8.18 seconds to build instance.#033[00m
Dec 13 03:14:52 np0005558241 nova_compute[248510]: 2025-12-13 08:14:52.228 248514 DEBUG oslo_concurrency.lockutils [None req-7ade49eb-da73-4752-8441-bc27babdbe66 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "ddf242e8-2fff-4bae-99eb-5d4441eb8b75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1449: 321 pgs: 321 active+clean; 164 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 490 KiB/s rd, 6.0 MiB/s wr, 192 op/s
Dec 13 03:14:53 np0005558241 nova_compute[248510]: 2025-12-13 08:14:53.477 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:14:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:14:54 np0005558241 nova_compute[248510]: 2025-12-13 08:14:54.541 248514 DEBUG nova.objects.instance [None req-428267ba-9ae4-4d33-91c2-e6e4589dfefa 60ae329885ee4061af98c1e47dcd22d6 1df6ec087d0b4857b221dd69c7b6fc72 - - default default] Lazy-loading 'pci_devices' on Instance uuid ddf242e8-2fff-4bae-99eb-5d4441eb8b75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:14:54 np0005558241 nova_compute[248510]: 2025-12-13 08:14:54.698 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613694.6981153, ddf242e8-2fff-4bae-99eb-5d4441eb8b75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:14:54 np0005558241 nova_compute[248510]: 2025-12-13 08:14:54.698 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:14:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1450: 321 pgs: 321 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.0 MiB/s wr, 272 op/s
Dec 13 03:14:54 np0005558241 nova_compute[248510]: 2025-12-13 08:14:54.735 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:14:54 np0005558241 nova_compute[248510]: 2025-12-13 08:14:54.740 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:14:54 np0005558241 nova_compute[248510]: 2025-12-13 08:14:54.860 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Dec 13 03:14:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:14:55.396 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:14:55.397 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:14:55.397 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:55 np0005558241 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Dec 13 03:14:55 np0005558241 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 3.913s CPU time.
Dec 13 03:14:55 np0005558241 systemd-machined[210538]: Machine qemu-12-instance-0000000c terminated.
Dec 13 03:14:55 np0005558241 nova_compute[248510]: 2025-12-13 08:14:55.548 248514 DEBUG nova.compute.manager [None req-428267ba-9ae4-4d33-91c2-e6e4589dfefa 60ae329885ee4061af98c1e47dcd22d6 1df6ec087d0b4857b221dd69c7b6fc72 - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:14:56 np0005558241 nova_compute[248510]: 2025-12-13 08:14:56.649 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:14:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1451: 321 pgs: 321 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 196 op/s
Dec 13 03:14:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:14:57.092 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:14:57 np0005558241 nova_compute[248510]: 2025-12-13 08:14:57.996 248514 DEBUG oslo_concurrency.lockutils [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Acquiring lock "ddf242e8-2fff-4bae-99eb-5d4441eb8b75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:57 np0005558241 nova_compute[248510]: 2025-12-13 08:14:57.996 248514 DEBUG oslo_concurrency.lockutils [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "ddf242e8-2fff-4bae-99eb-5d4441eb8b75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:57 np0005558241 nova_compute[248510]: 2025-12-13 08:14:57.996 248514 DEBUG oslo_concurrency.lockutils [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Acquiring lock "ddf242e8-2fff-4bae-99eb-5d4441eb8b75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:14:57 np0005558241 nova_compute[248510]: 2025-12-13 08:14:57.997 248514 DEBUG oslo_concurrency.lockutils [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "ddf242e8-2fff-4bae-99eb-5d4441eb8b75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:14:57 np0005558241 nova_compute[248510]: 2025-12-13 08:14:57.997 248514 DEBUG oslo_concurrency.lockutils [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "ddf242e8-2fff-4bae-99eb-5d4441eb8b75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:14:57 np0005558241 nova_compute[248510]: 2025-12-13 08:14:57.998 248514 INFO nova.compute.manager [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Terminating instance#033[00m
Dec 13 03:14:57 np0005558241 nova_compute[248510]: 2025-12-13 08:14:57.999 248514 DEBUG oslo_concurrency.lockutils [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Acquiring lock "refresh_cache-ddf242e8-2fff-4bae-99eb-5d4441eb8b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:14:57 np0005558241 nova_compute[248510]: 2025-12-13 08:14:57.999 248514 DEBUG oslo_concurrency.lockutils [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Acquired lock "refresh_cache-ddf242e8-2fff-4bae-99eb-5d4441eb8b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:14:58 np0005558241 nova_compute[248510]: 2025-12-13 08:14:57.999 248514 DEBUG nova.network.neutron [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:14:58 np0005558241 nova_compute[248510]: 2025-12-13 08:14:58.123 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765613683.1215234, 523b15f7-aac8-4023-957c-9ffeab4d92ff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:14:58 np0005558241 nova_compute[248510]: 2025-12-13 08:14:58.123 248514 INFO nova.compute.manager [-] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:14:58 np0005558241 nova_compute[248510]: 2025-12-13 08:14:58.299 248514 DEBUG nova.compute.manager [None req-8970c850-9800-49c7-be0a-d1f7768c7418 - - - - - -] [instance: 523b15f7-aac8-4023-957c-9ffeab4d92ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:14:58 np0005558241 nova_compute[248510]: 2025-12-13 08:14:58.478 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:14:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:14:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 197 op/s
Dec 13 03:14:59 np0005558241 nova_compute[248510]: 2025-12-13 08:14:59.395 248514 DEBUG nova.network.neutron [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:14:59 np0005558241 nova_compute[248510]: 2025-12-13 08:14:59.408 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765613684.4067752, 3cf47630-6d1c-4617-ab97-b60185421433 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:14:59 np0005558241 nova_compute[248510]: 2025-12-13 08:14:59.408 248514 INFO nova.compute.manager [-] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:14:59 np0005558241 nova_compute[248510]: 2025-12-13 08:14:59.464 248514 DEBUG nova.compute.manager [None req-9fcdf8a6-ed36-4de7-8947-0215a531d1e7 - - - - - -] [instance: 3cf47630-6d1c-4617-ab97-b60185421433] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:15:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 118 op/s
Dec 13 03:15:00 np0005558241 nova_compute[248510]: 2025-12-13 08:15:00.770 248514 DEBUG nova.network.neutron [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:15:00 np0005558241 nova_compute[248510]: 2025-12-13 08:15:00.802 248514 DEBUG oslo_concurrency.lockutils [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Releasing lock "refresh_cache-ddf242e8-2fff-4bae-99eb-5d4441eb8b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:15:00 np0005558241 nova_compute[248510]: 2025-12-13 08:15:00.803 248514 DEBUG nova.compute.manager [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:15:00 np0005558241 nova_compute[248510]: 2025-12-13 08:15:00.810 248514 INFO nova.virt.libvirt.driver [-] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Instance destroyed successfully.#033[00m
Dec 13 03:15:00 np0005558241 nova_compute[248510]: 2025-12-13 08:15:00.810 248514 DEBUG nova.objects.instance [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lazy-loading 'resources' on Instance uuid ddf242e8-2fff-4bae-99eb-5d4441eb8b75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:15:00 np0005558241 podman[266433]: 2025-12-13 08:15:00.988039006 +0000 UTC m=+0.073235035 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:15:01 np0005558241 podman[266432]: 2025-12-13 08:15:01.028329638 +0000 UTC m=+0.117369822 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 03:15:01 np0005558241 podman[266431]: 2025-12-13 08:15:01.028854431 +0000 UTC m=+0.118366917 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:15:01 np0005558241 nova_compute[248510]: 2025-12-13 08:15:01.651 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:02 np0005558241 nova_compute[248510]: 2025-12-13 08:15:02.567 248514 INFO nova.virt.libvirt.driver [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Deleting instance files /var/lib/nova/instances/ddf242e8-2fff-4bae-99eb-5d4441eb8b75_del#033[00m
Dec 13 03:15:02 np0005558241 nova_compute[248510]: 2025-12-13 08:15:02.568 248514 INFO nova.virt.libvirt.driver [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Deletion of /var/lib/nova/instances/ddf242e8-2fff-4bae-99eb-5d4441eb8b75_del complete#033[00m
Dec 13 03:15:02 np0005558241 nova_compute[248510]: 2025-12-13 08:15:02.667 248514 INFO nova.compute.manager [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Took 1.86 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:15:02 np0005558241 nova_compute[248510]: 2025-12-13 08:15:02.668 248514 DEBUG oslo.service.loopingcall [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:15:02 np0005558241 nova_compute[248510]: 2025-12-13 08:15:02.668 248514 DEBUG nova.compute.manager [-] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:15:02 np0005558241 nova_compute[248510]: 2025-12-13 08:15:02.668 248514 DEBUG nova.network.neutron [-] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:15:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1454: 321 pgs: 321 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 90 KiB/s wr, 80 op/s
Dec 13 03:15:03 np0005558241 nova_compute[248510]: 2025-12-13 08:15:03.480 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:03 np0005558241 nova_compute[248510]: 2025-12-13 08:15:03.617 248514 DEBUG nova.network.neutron [-] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:15:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:15:03 np0005558241 nova_compute[248510]: 2025-12-13 08:15:03.653 248514 DEBUG nova.network.neutron [-] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:15:03 np0005558241 nova_compute[248510]: 2025-12-13 08:15:03.677 248514 INFO nova.compute.manager [-] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Took 1.01 seconds to deallocate network for instance.#033[00m
Dec 13 03:15:03 np0005558241 nova_compute[248510]: 2025-12-13 08:15:03.771 248514 DEBUG oslo_concurrency.lockutils [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:15:03 np0005558241 nova_compute[248510]: 2025-12-13 08:15:03.772 248514 DEBUG oslo_concurrency.lockutils [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:15:03 np0005558241 nova_compute[248510]: 2025-12-13 08:15:03.875 248514 DEBUG oslo_concurrency.processutils [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:15:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2001202701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:15:04 np0005558241 nova_compute[248510]: 2025-12-13 08:15:04.446 248514 DEBUG oslo_concurrency.processutils [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:04 np0005558241 nova_compute[248510]: 2025-12-13 08:15:04.452 248514 DEBUG nova.compute.provider_tree [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:15:04 np0005558241 nova_compute[248510]: 2025-12-13 08:15:04.476 248514 DEBUG nova.scheduler.client.report [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:15:04 np0005558241 nova_compute[248510]: 2025-12-13 08:15:04.508 248514 DEBUG oslo_concurrency.lockutils [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:15:04 np0005558241 nova_compute[248510]: 2025-12-13 08:15:04.555 248514 INFO nova.scheduler.client.report [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Deleted allocations for instance ddf242e8-2fff-4bae-99eb-5d4441eb8b75#033[00m
Dec 13 03:15:04 np0005558241 nova_compute[248510]: 2025-12-13 08:15:04.688 248514 DEBUG oslo_concurrency.lockutils [None req-15adbbcc-463e-465a-b7ff-df1a9ff3fe8d d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "ddf242e8-2fff-4bae-99eb-5d4441eb8b75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:15:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 321 active+clean; 121 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 92 KiB/s wr, 106 op/s
Dec 13 03:15:04 np0005558241 nova_compute[248510]: 2025-12-13 08:15:04.792 248514 DEBUG oslo_concurrency.lockutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Acquiring lock "b0993cc2-f55f-4847-84c6-dd3ab57cc25f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:15:04 np0005558241 nova_compute[248510]: 2025-12-13 08:15:04.793 248514 DEBUG oslo_concurrency.lockutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "b0993cc2-f55f-4847-84c6-dd3ab57cc25f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:15:04 np0005558241 nova_compute[248510]: 2025-12-13 08:15:04.825 248514 DEBUG nova.compute.manager [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.079 248514 DEBUG oslo_concurrency.lockutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.080 248514 DEBUG oslo_concurrency.lockutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.087 248514 DEBUG nova.virt.hardware [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.087 248514 INFO nova.compute.claims [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.314 248514 DEBUG oslo_concurrency.processutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.407 248514 DEBUG oslo_concurrency.lockutils [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Acquiring lock "7bb2ecd0-7c38-401e-9999-a3f457ede62f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.408 248514 DEBUG oslo_concurrency.lockutils [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "7bb2ecd0-7c38-401e-9999-a3f457ede62f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.408 248514 DEBUG oslo_concurrency.lockutils [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Acquiring lock "7bb2ecd0-7c38-401e-9999-a3f457ede62f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.409 248514 DEBUG oslo_concurrency.lockutils [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "7bb2ecd0-7c38-401e-9999-a3f457ede62f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.409 248514 DEBUG oslo_concurrency.lockutils [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "7bb2ecd0-7c38-401e-9999-a3f457ede62f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.411 248514 INFO nova.compute.manager [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Terminating instance
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.411 248514 DEBUG oslo_concurrency.lockutils [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Acquiring lock "refresh_cache-7bb2ecd0-7c38-401e-9999-a3f457ede62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.412 248514 DEBUG oslo_concurrency.lockutils [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Acquired lock "refresh_cache-7bb2ecd0-7c38-401e-9999-a3f457ede62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.412 248514 DEBUG nova.network.neutron [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.619 248514 DEBUG nova.network.neutron [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.702 248514 DEBUG oslo_concurrency.lockutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Acquiring lock "b1899d5c-4fd3-42a0-9b73-5a3f70253c1d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.702 248514 DEBUG oslo_concurrency.lockutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Lock "b1899d5c-4fd3-42a0-9b73-5a3f70253c1d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.733 248514 DEBUG nova.compute.manager [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:15:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:15:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/389222910' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.890 248514 DEBUG oslo_concurrency.processutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.896 248514 DEBUG nova.compute.provider_tree [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.921 248514 DEBUG nova.scheduler.client.report [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:15:05 np0005558241 nova_compute[248510]: 2025-12-13 08:15:05.965 248514 DEBUG oslo_concurrency.lockutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.011 248514 DEBUG oslo_concurrency.lockutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.012 248514 DEBUG nova.compute.manager [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.019 248514 DEBUG nova.network.neutron [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.021 248514 DEBUG oslo_concurrency.lockutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.036 248514 DEBUG nova.virt.hardware [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.037 248514 INFO nova.compute.claims [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.080 248514 DEBUG oslo_concurrency.lockutils [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Releasing lock "refresh_cache-7bb2ecd0-7c38-401e-9999-a3f457ede62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.081 248514 DEBUG nova.compute.manager [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.164 248514 DEBUG nova.compute.manager [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 13 03:15:06 np0005558241 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec 13 03:15:06 np0005558241 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 13.830s CPU time.
Dec 13 03:15:06 np0005558241 systemd-machined[210538]: Machine qemu-11-instance-0000000b terminated.
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.212 248514 INFO nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.278 248514 DEBUG nova.compute.manager [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.303 248514 INFO nova.virt.libvirt.driver [-] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Instance destroyed successfully.
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.304 248514 DEBUG nova.objects.instance [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lazy-loading 'resources' on Instance uuid 7bb2ecd0-7c38-401e-9999-a3f457ede62f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.405 248514 DEBUG oslo_concurrency.processutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.460 248514 DEBUG nova.compute.manager [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.463 248514 DEBUG nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.464 248514 INFO nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Creating image(s)
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.488 248514 DEBUG nova.storage.rbd_utils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.519 248514 DEBUG nova.storage.rbd_utils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.541 248514 DEBUG nova.storage.rbd_utils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.545 248514 DEBUG oslo_concurrency.processutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.624 248514 DEBUG oslo_concurrency.processutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.627 248514 DEBUG oslo_concurrency.lockutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.628 248514 DEBUG oslo_concurrency.lockutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.629 248514 DEBUG oslo_concurrency.lockutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.655 248514 DEBUG nova.storage.rbd_utils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.659 248514 DEBUG oslo_concurrency.processutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.684 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.694 248514 INFO nova.virt.libvirt.driver [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Deleting instance files /var/lib/nova/instances/7bb2ecd0-7c38-401e-9999-a3f457ede62f_del
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.696 248514 INFO nova.virt.libvirt.driver [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Deletion of /var/lib/nova/instances/7bb2ecd0-7c38-401e-9999-a3f457ede62f_del complete
Dec 13 03:15:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1456: 321 pgs: 321 active+clean; 121 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 12 KiB/s wr, 26 op/s
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.906 248514 INFO nova.compute.manager [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Took 0.82 seconds to destroy the instance on the hypervisor.
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.907 248514 DEBUG oslo.service.loopingcall [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.908 248514 DEBUG nova.compute.manager [-] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.908 248514 DEBUG nova.network.neutron [-] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 03:15:06 np0005558241 nova_compute[248510]: 2025-12-13 08:15:06.990 248514 DEBUG oslo_concurrency.processutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:15:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:15:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2597186170' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.044 248514 DEBUG oslo_concurrency.processutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.052 248514 DEBUG nova.storage.rbd_utils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] resizing rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.085 248514 DEBUG nova.compute.provider_tree [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.133 248514 DEBUG nova.objects.instance [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lazy-loading 'migration_context' on Instance uuid b0993cc2-f55f-4847-84c6-dd3ab57cc25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.158 248514 DEBUG nova.scheduler.client.report [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.216 248514 DEBUG nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.217 248514 DEBUG nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Ensure instance console log exists: /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.218 248514 DEBUG oslo_concurrency.lockutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.218 248514 DEBUG oslo_concurrency.lockutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.218 248514 DEBUG oslo_concurrency.lockutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.219 248514 DEBUG nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.224 248514 DEBUG oslo_concurrency.lockutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.225 248514 DEBUG nova.compute.manager [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.232 248514 WARNING nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.239 248514 DEBUG nova.virt.libvirt.host [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.240 248514 DEBUG nova.virt.libvirt.host [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.245 248514 DEBUG nova.virt.libvirt.host [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.245 248514 DEBUG nova.virt.libvirt.host [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.246 248514 DEBUG nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.246 248514 DEBUG nova.virt.hardware [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.246 248514 DEBUG nova.virt.hardware [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.247 248514 DEBUG nova.virt.hardware [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.247 248514 DEBUG nova.virt.hardware [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.247 248514 DEBUG nova.virt.hardware [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.247 248514 DEBUG nova.virt.hardware [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.248 248514 DEBUG nova.virt.hardware [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.248 248514 DEBUG nova.virt.hardware [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.248 248514 DEBUG nova.virt.hardware [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.248 248514 DEBUG nova.virt.hardware [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.249 248514 DEBUG nova.virt.hardware [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.252 248514 DEBUG oslo_concurrency.processutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.339 248514 DEBUG nova.compute.manager [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.340 248514 DEBUG nova.network.neutron [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.441 248514 INFO nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.482 248514 DEBUG nova.compute.manager [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.624 248514 DEBUG nova.network.neutron [-] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.736 248514 DEBUG nova.compute.manager [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.738 248514 DEBUG nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.739 248514 INFO nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Creating image(s)#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.763 248514 DEBUG nova.storage.rbd_utils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] rbd image b1899d5c-4fd3-42a0-9b73-5a3f70253c1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.789 248514 DEBUG nova.storage.rbd_utils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] rbd image b1899d5c-4fd3-42a0-9b73-5a3f70253c1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:15:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:15:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/51261875' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.812 248514 DEBUG nova.storage.rbd_utils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] rbd image b1899d5c-4fd3-42a0-9b73-5a3f70253c1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.819 248514 DEBUG oslo_concurrency.processutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.844 248514 DEBUG nova.network.neutron [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.845 248514 DEBUG nova.compute.manager [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.846 248514 DEBUG nova.network.neutron [-] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.848 248514 DEBUG oslo_concurrency.processutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.872 248514 DEBUG nova.storage.rbd_utils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.877 248514 DEBUG oslo_concurrency.processutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.902 248514 DEBUG oslo_concurrency.processutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.903 248514 DEBUG oslo_concurrency.lockutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.904 248514 DEBUG oslo_concurrency.lockutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.904 248514 DEBUG oslo_concurrency.lockutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.930 248514 DEBUG nova.storage.rbd_utils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] rbd image b1899d5c-4fd3-42a0-9b73-5a3f70253c1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.934 248514 DEBUG oslo_concurrency.processutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b1899d5c-4fd3-42a0-9b73-5a3f70253c1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:07 np0005558241 nova_compute[248510]: 2025-12-13 08:15:07.960 248514 INFO nova.compute.manager [-] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Took 1.05 seconds to deallocate network for instance.#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.145 248514 DEBUG oslo_concurrency.lockutils [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.147 248514 DEBUG oslo_concurrency.lockutils [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.250 248514 DEBUG oslo_concurrency.processutils [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:15:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3421277725' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.471 248514 DEBUG oslo_concurrency.processutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.474 248514 DEBUG nova.objects.instance [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lazy-loading 'pci_devices' on Instance uuid b0993cc2-f55f-4847-84c6-dd3ab57cc25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.480 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.552 248514 DEBUG nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:15:08 np0005558241 nova_compute[248510]:  <uuid>b0993cc2-f55f-4847-84c6-dd3ab57cc25f</uuid>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:  <name>instance-0000000d</name>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersAdmin275Test-server-1375343897</nova:name>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:15:07</nova:creationTime>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:15:08 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:        <nova:user uuid="227d115378154452ad3fe25dae04837b">tempest-ServersAdmin275Test-1969959405-project-member</nova:user>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:        <nova:project uuid="39b72cde121b4531a2b743b669122ebd">tempest-ServersAdmin275Test-1969959405</nova:project>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <entry name="serial">b0993cc2-f55f-4847-84c6-dd3ab57cc25f</entry>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <entry name="uuid">b0993cc2-f55f-4847-84c6-dd3ab57cc25f</entry>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk">
Dec 13 03:15:08 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:15:08 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk.config">
Dec 13 03:15:08 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:15:08 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/console.log" append="off"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:15:08 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:15:08 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:15:08 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:15:08 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:15:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.715 248514 DEBUG oslo_concurrency.processutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b1899d5c-4fd3-42a0-9b73-5a3f70253c1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.781s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1457: 321 pgs: 321 active+clean; 98 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 1.1 MiB/s wr, 74 op/s
Dec 13 03:15:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:15:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2476190431' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.789 248514 DEBUG nova.storage.rbd_utils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] resizing rbd image b1899d5c-4fd3-42a0-9b73-5a3f70253c1d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.816 248514 DEBUG oslo_concurrency.processutils [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.822 248514 DEBUG nova.compute.provider_tree [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.873 248514 DEBUG nova.objects.instance [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Lazy-loading 'migration_context' on Instance uuid b1899d5c-4fd3-42a0-9b73-5a3f70253c1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.877 248514 DEBUG nova.scheduler.client.report [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.885 248514 DEBUG nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.886 248514 DEBUG nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.886 248514 INFO nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Using config drive#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.909 248514 DEBUG nova.storage.rbd_utils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.916 248514 DEBUG nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.916 248514 DEBUG nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Ensure instance console log exists: /var/lib/nova/instances/b1899d5c-4fd3-42a0-9b73-5a3f70253c1d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.917 248514 DEBUG oslo_concurrency.lockutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.917 248514 DEBUG oslo_concurrency.lockutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.917 248514 DEBUG oslo_concurrency.lockutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.919 248514 DEBUG nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.924 248514 WARNING nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.930 248514 DEBUG nova.virt.libvirt.host [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.931 248514 DEBUG nova.virt.libvirt.host [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.933 248514 DEBUG nova.virt.libvirt.host [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.934 248514 DEBUG nova.virt.libvirt.host [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.934 248514 DEBUG nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.934 248514 DEBUG nova.virt.hardware [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.935 248514 DEBUG nova.virt.hardware [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.935 248514 DEBUG nova.virt.hardware [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.935 248514 DEBUG nova.virt.hardware [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.935 248514 DEBUG nova.virt.hardware [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.936 248514 DEBUG nova.virt.hardware [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.936 248514 DEBUG nova.virt.hardware [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.936 248514 DEBUG nova.virt.hardware [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.936 248514 DEBUG nova.virt.hardware [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.937 248514 DEBUG nova.virt.hardware [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.937 248514 DEBUG nova.virt.hardware [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.940 248514 DEBUG oslo_concurrency.processutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:08 np0005558241 nova_compute[248510]: 2025-12-13 08:15:08.969 248514 DEBUG oslo_concurrency.lockutils [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:15:09 np0005558241 nova_compute[248510]: 2025-12-13 08:15:09.011 248514 INFO nova.scheduler.client.report [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Deleted allocations for instance 7bb2ecd0-7c38-401e-9999-a3f457ede62f#033[00m
Dec 13 03:15:09 np0005558241 nova_compute[248510]: 2025-12-13 08:15:09.100 248514 DEBUG oslo_concurrency.lockutils [None req-a6b5d4ef-9435-4c9e-8f9e-b5ca293c9815 d5a7834344c24e5094e33d0227489d0a 00b8bf8c06f844f49b8b633781cfebef - - default default] Lock "7bb2ecd0-7c38-401e-9999-a3f457ede62f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:15:09 np0005558241 nova_compute[248510]: 2025-12-13 08:15:09.106 248514 INFO nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Creating config drive at /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/disk.config#033[00m
Dec 13 03:15:09 np0005558241 nova_compute[248510]: 2025-12-13 08:15:09.112 248514 DEBUG oslo_concurrency.processutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzaig15du execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:15:09
Dec 13 03:15:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:15:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:15:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', '.mgr', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'backups']
Dec 13 03:15:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:15:09 np0005558241 nova_compute[248510]: 2025-12-13 08:15:09.240 248514 DEBUG oslo_concurrency.processutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzaig15du" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:09 np0005558241 nova_compute[248510]: 2025-12-13 08:15:09.265 248514 DEBUG nova.storage.rbd_utils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:15:09 np0005558241 nova_compute[248510]: 2025-12-13 08:15:09.269 248514 DEBUG oslo_concurrency.processutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/disk.config b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:09 np0005558241 nova_compute[248510]: 2025-12-13 08:15:09.407 248514 DEBUG oslo_concurrency.processutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/disk.config b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:09 np0005558241 nova_compute[248510]: 2025-12-13 08:15:09.409 248514 INFO nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Deleting local config drive /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/disk.config because it was imported into RBD.#033[00m
Dec 13 03:15:09 np0005558241 systemd-machined[210538]: New machine qemu-13-instance-0000000d.
Dec 13 03:15:09 np0005558241 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Dec 13 03:15:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:15:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1711558082' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:15:09 np0005558241 nova_compute[248510]: 2025-12-13 08:15:09.552 248514 DEBUG oslo_concurrency.processutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:09 np0005558241 nova_compute[248510]: 2025-12-13 08:15:09.576 248514 DEBUG nova.storage.rbd_utils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] rbd image b1899d5c-4fd3-42a0-9b73-5a3f70253c1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:15:09 np0005558241 nova_compute[248510]: 2025-12-13 08:15:09.581 248514 DEBUG oslo_concurrency.processutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:15:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:15:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:15:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:15:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:15:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.141 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613710.1413782, b0993cc2-f55f-4847-84c6-dd3ab57cc25f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.142 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.146 248514 DEBUG nova.compute.manager [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.146 248514 DEBUG nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.151 248514 INFO nova.virt.libvirt.driver [-] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Instance spawned successfully.#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.151 248514 DEBUG nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:15:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:15:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4285845758' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.176 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.180 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.183 248514 DEBUG oslo_concurrency.processutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.184 248514 DEBUG nova.objects.instance [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Lazy-loading 'pci_devices' on Instance uuid b1899d5c-4fd3-42a0-9b73-5a3f70253c1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.208 248514 DEBUG nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.209 248514 DEBUG nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.209 248514 DEBUG nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.210 248514 DEBUG nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.210 248514 DEBUG nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.211 248514 DEBUG nova.virt.libvirt.driver [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.241 248514 DEBUG nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:15:10 np0005558241 nova_compute[248510]:  <uuid>b1899d5c-4fd3-42a0-9b73-5a3f70253c1d</uuid>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:  <name>instance-0000000e</name>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerDiagnosticsNegativeTest-server-725012794</nova:name>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:15:08</nova:creationTime>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:15:10 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:        <nova:user uuid="9d048243a9d0486dabf41512320e2948">tempest-ServerDiagnosticsNegativeTest-180456093-project-member</nova:user>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:        <nova:project uuid="e86f5d35accc4f5489476a565c247620">tempest-ServerDiagnosticsNegativeTest-180456093</nova:project>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <entry name="serial">b1899d5c-4fd3-42a0-9b73-5a3f70253c1d</entry>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <entry name="uuid">b1899d5c-4fd3-42a0-9b73-5a3f70253c1d</entry>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b1899d5c-4fd3-42a0-9b73-5a3f70253c1d_disk">
Dec 13 03:15:10 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:15:10 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b1899d5c-4fd3-42a0-9b73-5a3f70253c1d_disk.config">
Dec 13 03:15:10 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:15:10 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/b1899d5c-4fd3-42a0-9b73-5a3f70253c1d/console.log" append="off"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:15:10 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:15:10 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:15:10 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:15:10 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.247 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.248 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613710.142751, b0993cc2-f55f-4847-84c6-dd3ab57cc25f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.248 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] VM Started (Lifecycle Event)#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.311 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.317 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.338 248514 INFO nova.compute.manager [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Took 3.88 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.339 248514 DEBUG nova.compute.manager [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.354 248514 DEBUG nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.355 248514 DEBUG nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.355 248514 INFO nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Using config drive#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.378 248514 DEBUG nova.storage.rbd_utils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] rbd image b1899d5c-4fd3-42a0-9b73-5a3f70253c1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.385 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.482 248514 INFO nova.compute.manager [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Took 5.46 seconds to build instance.#033[00m
Dec 13 03:15:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:15:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:15:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:15:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:15:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:15:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:15:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:15:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:15:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:15:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.551 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765613695.5484297, ddf242e8-2fff-4bae-99eb-5d4441eb8b75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.552 248514 INFO nova.compute.manager [-] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.597 248514 DEBUG nova.compute.manager [None req-929157d4-77d2-4109-9acc-53f7ab494706 - - - - - -] [instance: ddf242e8-2fff-4bae-99eb-5d4441eb8b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:15:10 np0005558241 nova_compute[248510]: 2025-12-13 08:15:10.604 248514 DEBUG oslo_concurrency.lockutils [None req-c6d3a6c7-a15f-424d-8e9f-3b23aba89315 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "b0993cc2-f55f-4847-84c6-dd3ab57cc25f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:15:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1458: 321 pgs: 321 active+clean; 117 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 2.8 MiB/s wr, 93 op/s
Dec 13 03:15:11 np0005558241 nova_compute[248510]: 2025-12-13 08:15:11.687 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:12 np0005558241 nova_compute[248510]: 2025-12-13 08:15:12.628 248514 INFO nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Creating config drive at /var/lib/nova/instances/b1899d5c-4fd3-42a0-9b73-5a3f70253c1d/disk.config#033[00m
Dec 13 03:15:12 np0005558241 nova_compute[248510]: 2025-12-13 08:15:12.633 248514 DEBUG oslo_concurrency.processutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b1899d5c-4fd3-42a0-9b73-5a3f70253c1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2n3zjmsu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1459: 321 pgs: 321 active+clean; 117 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 2.8 MiB/s wr, 93 op/s
Dec 13 03:15:12 np0005558241 nova_compute[248510]: 2025-12-13 08:15:12.771 248514 DEBUG oslo_concurrency.processutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b1899d5c-4fd3-42a0-9b73-5a3f70253c1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2n3zjmsu" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:12 np0005558241 nova_compute[248510]: 2025-12-13 08:15:12.799 248514 DEBUG nova.storage.rbd_utils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] rbd image b1899d5c-4fd3-42a0-9b73-5a3f70253c1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:15:12 np0005558241 nova_compute[248510]: 2025-12-13 08:15:12.803 248514 DEBUG oslo_concurrency.processutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b1899d5c-4fd3-42a0-9b73-5a3f70253c1d/disk.config b1899d5c-4fd3-42a0-9b73-5a3f70253c1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:12 np0005558241 nova_compute[248510]: 2025-12-13 08:15:12.937 248514 DEBUG oslo_concurrency.processutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b1899d5c-4fd3-42a0-9b73-5a3f70253c1d/disk.config b1899d5c-4fd3-42a0-9b73-5a3f70253c1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:12 np0005558241 nova_compute[248510]: 2025-12-13 08:15:12.938 248514 INFO nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Deleting local config drive /var/lib/nova/instances/b1899d5c-4fd3-42a0-9b73-5a3f70253c1d/disk.config because it was imported into RBD.#033[00m
Dec 13 03:15:13 np0005558241 systemd-machined[210538]: New machine qemu-14-instance-0000000e.
Dec 13 03:15:13 np0005558241 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.340 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613713.3395967, b1899d5c-4fd3-42a0-9b73-5a3f70253c1d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.341 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.346 248514 DEBUG nova.compute.manager [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.347 248514 DEBUG nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.351 248514 INFO nova.virt.libvirt.driver [-] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Instance spawned successfully.#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.352 248514 DEBUG nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.482 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.501 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.505 248514 DEBUG nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.506 248514 DEBUG nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.506 248514 DEBUG nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.507 248514 DEBUG nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.507 248514 DEBUG nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.508 248514 DEBUG nova.virt.libvirt.driver [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.512 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.634 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.634 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613713.339737, b1899d5c-4fd3-42a0-9b73-5a3f70253c1d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.635 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] VM Started (Lifecycle Event)#033[00m
Dec 13 03:15:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.676 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.679 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.799 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.814 248514 INFO nova.compute.manager [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Took 6.08 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:15:13 np0005558241 nova_compute[248510]: 2025-12-13 08:15:13.815 248514 DEBUG nova.compute.manager [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:15:14 np0005558241 nova_compute[248510]: 2025-12-13 08:15:14.022 248514 INFO nova.compute.manager [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Took 8.11 seconds to build instance.#033[00m
Dec 13 03:15:14 np0005558241 nova_compute[248510]: 2025-12-13 08:15:14.227 248514 DEBUG oslo_concurrency.lockutils [None req-b66a7ae8-aa0a-43aa-a4df-5b1ae910dd37 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Lock "b1899d5c-4fd3-42a0-9b73-5a3f70253c1d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:15:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1460: 321 pgs: 321 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 190 op/s
Dec 13 03:15:14 np0005558241 nova_compute[248510]: 2025-12-13 08:15:14.887 248514 INFO nova.compute.manager [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Rebuilding instance#033[00m
Dec 13 03:15:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:15:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/826429947' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:15:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:15:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/826429947' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:15:15 np0005558241 nova_compute[248510]: 2025-12-13 08:15:15.840 248514 DEBUG nova.objects.instance [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lazy-loading 'trusted_certs' on Instance uuid b0993cc2-f55f-4847-84c6-dd3ab57cc25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:15:15 np0005558241 nova_compute[248510]: 2025-12-13 08:15:15.928 248514 DEBUG nova.compute.manager [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:15:16 np0005558241 nova_compute[248510]: 2025-12-13 08:15:16.615 248514 DEBUG nova.objects.instance [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lazy-loading 'pci_requests' on Instance uuid b0993cc2-f55f-4847-84c6-dd3ab57cc25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:15:16 np0005558241 nova_compute[248510]: 2025-12-13 08:15:16.689 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 164 op/s
Dec 13 03:15:16 np0005558241 nova_compute[248510]: 2025-12-13 08:15:16.901 248514 DEBUG nova.objects.instance [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lazy-loading 'pci_devices' on Instance uuid b0993cc2-f55f-4847-84c6-dd3ab57cc25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:15:17 np0005558241 nova_compute[248510]: 2025-12-13 08:15:17.050 248514 DEBUG nova.objects.instance [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lazy-loading 'resources' on Instance uuid b0993cc2-f55f-4847-84c6-dd3ab57cc25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:15:17 np0005558241 nova_compute[248510]: 2025-12-13 08:15:17.111 248514 DEBUG nova.objects.instance [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lazy-loading 'migration_context' on Instance uuid b0993cc2-f55f-4847-84c6-dd3ab57cc25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:15:17 np0005558241 nova_compute[248510]: 2025-12-13 08:15:17.391 248514 DEBUG nova.objects.instance [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 03:15:17 np0005558241 nova_compute[248510]: 2025-12-13 08:15:17.398 248514 DEBUG nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 03:15:17 np0005558241 nova_compute[248510]: 2025-12-13 08:15:17.443 248514 DEBUG oslo_concurrency.lockutils [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Acquiring lock "b1899d5c-4fd3-42a0-9b73-5a3f70253c1d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:15:17 np0005558241 nova_compute[248510]: 2025-12-13 08:15:17.444 248514 DEBUG oslo_concurrency.lockutils [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Lock "b1899d5c-4fd3-42a0-9b73-5a3f70253c1d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:15:17 np0005558241 nova_compute[248510]: 2025-12-13 08:15:17.444 248514 DEBUG oslo_concurrency.lockutils [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Acquiring lock "b1899d5c-4fd3-42a0-9b73-5a3f70253c1d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:15:17 np0005558241 nova_compute[248510]: 2025-12-13 08:15:17.445 248514 DEBUG oslo_concurrency.lockutils [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Lock "b1899d5c-4fd3-42a0-9b73-5a3f70253c1d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:15:17 np0005558241 nova_compute[248510]: 2025-12-13 08:15:17.445 248514 DEBUG oslo_concurrency.lockutils [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Lock "b1899d5c-4fd3-42a0-9b73-5a3f70253c1d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:15:17 np0005558241 nova_compute[248510]: 2025-12-13 08:15:17.447 248514 INFO nova.compute.manager [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Terminating instance
Dec 13 03:15:17 np0005558241 nova_compute[248510]: 2025-12-13 08:15:17.448 248514 DEBUG oslo_concurrency.lockutils [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Acquiring lock "refresh_cache-b1899d5c-4fd3-42a0-9b73-5a3f70253c1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:15:17 np0005558241 nova_compute[248510]: 2025-12-13 08:15:17.449 248514 DEBUG oslo_concurrency.lockutils [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Acquired lock "refresh_cache-b1899d5c-4fd3-42a0-9b73-5a3f70253c1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:15:17 np0005558241 nova_compute[248510]: 2025-12-13 08:15:17.449 248514 DEBUG nova.network.neutron [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:15:18 np0005558241 nova_compute[248510]: 2025-12-13 08:15:18.510 248514 DEBUG nova.network.neutron [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:15:18 np0005558241 nova_compute[248510]: 2025-12-13 08:15:18.515 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:15:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:15:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1462: 321 pgs: 321 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.6 MiB/s wr, 216 op/s
Dec 13 03:15:18 np0005558241 nova_compute[248510]: 2025-12-13 08:15:18.796 248514 DEBUG nova.network.neutron [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:15:18 np0005558241 nova_compute[248510]: 2025-12-13 08:15:18.842 248514 DEBUG oslo_concurrency.lockutils [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Releasing lock "refresh_cache-b1899d5c-4fd3-42a0-9b73-5a3f70253c1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:15:18 np0005558241 nova_compute[248510]: 2025-12-13 08:15:18.843 248514 DEBUG nova.compute.manager [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 03:15:18 np0005558241 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Dec 13 03:15:18 np0005558241 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 5.933s CPU time.
Dec 13 03:15:18 np0005558241 systemd-machined[210538]: Machine qemu-14-instance-0000000e terminated.
Dec 13 03:15:19 np0005558241 nova_compute[248510]: 2025-12-13 08:15:19.065 248514 INFO nova.virt.libvirt.driver [-] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Instance destroyed successfully.
Dec 13 03:15:19 np0005558241 nova_compute[248510]: 2025-12-13 08:15:19.066 248514 DEBUG nova.objects.instance [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Lazy-loading 'resources' on Instance uuid b1899d5c-4fd3-42a0-9b73-5a3f70253c1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:15:19 np0005558241 nova_compute[248510]: 2025-12-13 08:15:19.434 248514 INFO nova.virt.libvirt.driver [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Deleting instance files /var/lib/nova/instances/b1899d5c-4fd3-42a0-9b73-5a3f70253c1d_del
Dec 13 03:15:19 np0005558241 nova_compute[248510]: 2025-12-13 08:15:19.436 248514 INFO nova.virt.libvirt.driver [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Deletion of /var/lib/nova/instances/b1899d5c-4fd3-42a0-9b73-5a3f70253c1d_del complete
Dec 13 03:15:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 03:15:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 03:15:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:15:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:15:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:15:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:15:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:15:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:15:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:15:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:15:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:15:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:15:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:15:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:15:19 np0005558241 nova_compute[248510]: 2025-12-13 08:15:19.618 248514 INFO nova.compute.manager [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Took 0.77 seconds to destroy the instance on the hypervisor.
Dec 13 03:15:19 np0005558241 nova_compute[248510]: 2025-12-13 08:15:19.619 248514 DEBUG oslo.service.loopingcall [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 03:15:19 np0005558241 nova_compute[248510]: 2025-12-13 08:15:19.619 248514 DEBUG nova.compute.manager [-] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 03:15:19 np0005558241 nova_compute[248510]: 2025-12-13 08:15:19.620 248514 DEBUG nova.network.neutron [-] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 03:15:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 03:15:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:15:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:15:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:15:20 np0005558241 podman[267472]: 2025-12-13 08:15:20.067847026 +0000 UTC m=+0.065685809 container create 0ab8168a8e38cb8ec28579d92442eefbeba6c55f2415bd058c5cf77b70aeb19c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_colden, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:15:20 np0005558241 nova_compute[248510]: 2025-12-13 08:15:20.115 248514 DEBUG nova.network.neutron [-] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:15:20 np0005558241 systemd[1]: Started libpod-conmon-0ab8168a8e38cb8ec28579d92442eefbeba6c55f2415bd058c5cf77b70aeb19c.scope.
Dec 13 03:15:20 np0005558241 podman[267472]: 2025-12-13 08:15:20.039657231 +0000 UTC m=+0.037496034 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:15:20 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:15:20 np0005558241 podman[267472]: 2025-12-13 08:15:20.184676353 +0000 UTC m=+0.182515156 container init 0ab8168a8e38cb8ec28579d92442eefbeba6c55f2415bd058c5cf77b70aeb19c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_colden, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:15:20 np0005558241 podman[267472]: 2025-12-13 08:15:20.194476515 +0000 UTC m=+0.192315308 container start 0ab8168a8e38cb8ec28579d92442eefbeba6c55f2415bd058c5cf77b70aeb19c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_colden, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 03:15:20 np0005558241 podman[267472]: 2025-12-13 08:15:20.199697833 +0000 UTC m=+0.197536716 container attach 0ab8168a8e38cb8ec28579d92442eefbeba6c55f2415bd058c5cf77b70aeb19c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 03:15:20 np0005558241 quizzical_colden[267488]: 167 167
Dec 13 03:15:20 np0005558241 systemd[1]: libpod-0ab8168a8e38cb8ec28579d92442eefbeba6c55f2415bd058c5cf77b70aeb19c.scope: Deactivated successfully.
Dec 13 03:15:20 np0005558241 podman[267472]: 2025-12-13 08:15:20.203750873 +0000 UTC m=+0.201589666 container died 0ab8168a8e38cb8ec28579d92442eefbeba6c55f2415bd058c5cf77b70aeb19c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 03:15:20 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1c0d3900176d0edb059e85327a74497e129e618075e1bad561e78e0e16c0fd4a-merged.mount: Deactivated successfully.
Dec 13 03:15:20 np0005558241 podman[267472]: 2025-12-13 08:15:20.268330404 +0000 UTC m=+0.266169187 container remove 0ab8168a8e38cb8ec28579d92442eefbeba6c55f2415bd058c5cf77b70aeb19c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:15:20 np0005558241 systemd[1]: libpod-conmon-0ab8168a8e38cb8ec28579d92442eefbeba6c55f2415bd058c5cf77b70aeb19c.scope: Deactivated successfully.
Dec 13 03:15:20 np0005558241 nova_compute[248510]: 2025-12-13 08:15:20.314 248514 DEBUG nova.network.neutron [-] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:15:20 np0005558241 nova_compute[248510]: 2025-12-13 08:15:20.387 248514 INFO nova.compute.manager [-] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Took 0.77 seconds to deallocate network for instance.
Dec 13 03:15:20 np0005558241 podman[267511]: 2025-12-13 08:15:20.490839224 +0000 UTC m=+0.059876495 container create 71de9631341c30232cd962a8550ac89ffd00e3d0d0bf96a5d4b0de4971f9cee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackburn, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:15:20 np0005558241 podman[267511]: 2025-12-13 08:15:20.466371762 +0000 UTC m=+0.035409053 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:15:20 np0005558241 systemd[1]: Started libpod-conmon-71de9631341c30232cd962a8550ac89ffd00e3d0d0bf96a5d4b0de4971f9cee8.scope.
Dec 13 03:15:20 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:15:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3b5f46611ef4874770d437c1ed7e0e0c837c45f5d8dd8d41c1122af5123f03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:15:20 np0005558241 nova_compute[248510]: 2025-12-13 08:15:20.705 248514 DEBUG oslo_concurrency.lockutils [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:15:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3b5f46611ef4874770d437c1ed7e0e0c837c45f5d8dd8d41c1122af5123f03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:15:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3b5f46611ef4874770d437c1ed7e0e0c837c45f5d8dd8d41c1122af5123f03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:15:20 np0005558241 nova_compute[248510]: 2025-12-13 08:15:20.706 248514 DEBUG oslo_concurrency.lockutils [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:15:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3b5f46611ef4874770d437c1ed7e0e0c837c45f5d8dd8d41c1122af5123f03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:15:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3b5f46611ef4874770d437c1ed7e0e0c837c45f5d8dd8d41c1122af5123f03/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006995014809938026 of space, bias 1.0, pg target 0.20985044429814076 quantized to 32 (current 32)
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663470930549218 of space, bias 1.0, pg target 0.19990412791647655 quantized to 32 (current 32)
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.963246887966146e-07 of space, bias 4.0, pg target 0.0010755896265559374 quantized to 16 (current 32)
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:15:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.5 MiB/s wr, 181 op/s
Dec 13 03:15:20 np0005558241 podman[267511]: 2025-12-13 08:15:20.774884539 +0000 UTC m=+0.343921840 container init 71de9631341c30232cd962a8550ac89ffd00e3d0d0bf96a5d4b0de4971f9cee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:15:20 np0005558241 podman[267511]: 2025-12-13 08:15:20.785679965 +0000 UTC m=+0.354717246 container start 71de9631341c30232cd962a8550ac89ffd00e3d0d0bf96a5d4b0de4971f9cee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackburn, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:15:20 np0005558241 podman[267511]: 2025-12-13 08:15:20.795271422 +0000 UTC m=+0.364308693 container attach 71de9631341c30232cd962a8550ac89ffd00e3d0d0bf96a5d4b0de4971f9cee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 03:15:20 np0005558241 nova_compute[248510]: 2025-12-13 08:15:20.809 248514 DEBUG oslo_concurrency.processutils [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:15:21 np0005558241 boring_blackburn[267527]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:15:21 np0005558241 boring_blackburn[267527]: --> All data devices are unavailable
Dec 13 03:15:21 np0005558241 nova_compute[248510]: 2025-12-13 08:15:21.303 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765613706.3015218, 7bb2ecd0-7c38-401e-9999-a3f457ede62f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:15:21 np0005558241 nova_compute[248510]: 2025-12-13 08:15:21.304 248514 INFO nova.compute.manager [-] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] VM Stopped (Lifecycle Event)
Dec 13 03:15:21 np0005558241 systemd[1]: libpod-71de9631341c30232cd962a8550ac89ffd00e3d0d0bf96a5d4b0de4971f9cee8.scope: Deactivated successfully.
Dec 13 03:15:21 np0005558241 conmon[267527]: conmon 71de9631341c30232cd9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-71de9631341c30232cd962a8550ac89ffd00e3d0d0bf96a5d4b0de4971f9cee8.scope/container/memory.events
Dec 13 03:15:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:15:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4253376477' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:15:21 np0005558241 podman[267567]: 2025-12-13 08:15:21.374296254 +0000 UTC m=+0.026552085 container died 71de9631341c30232cd962a8550ac89ffd00e3d0d0bf96a5d4b0de4971f9cee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:15:21 np0005558241 nova_compute[248510]: 2025-12-13 08:15:21.383 248514 DEBUG oslo_concurrency.processutils [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:15:21 np0005558241 nova_compute[248510]: 2025-12-13 08:15:21.389 248514 DEBUG nova.compute.provider_tree [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:15:21 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3a3b5f46611ef4874770d437c1ed7e0e0c837c45f5d8dd8d41c1122af5123f03-merged.mount: Deactivated successfully.
Dec 13 03:15:21 np0005558241 podman[267567]: 2025-12-13 08:15:21.43423682 +0000 UTC m=+0.086492641 container remove 71de9631341c30232cd962a8550ac89ffd00e3d0d0bf96a5d4b0de4971f9cee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackburn, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 03:15:21 np0005558241 systemd[1]: libpod-conmon-71de9631341c30232cd962a8550ac89ffd00e3d0d0bf96a5d4b0de4971f9cee8.scope: Deactivated successfully.
Dec 13 03:15:21 np0005558241 nova_compute[248510]: 2025-12-13 08:15:21.628 248514 DEBUG nova.compute.manager [None req-025e2bbb-d615-4e11-a332-06eafa4f3c13 - - - - - -] [instance: 7bb2ecd0-7c38-401e-9999-a3f457ede62f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:15:21 np0005558241 nova_compute[248510]: 2025-12-13 08:15:21.692 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:15:21 np0005558241 podman[267643]: 2025-12-13 08:15:21.946120578 +0000 UTC m=+0.041022401 container create 809df31e4b607e516274bbbb8fe9094797766e321efedd2c26a2e7ddee188ff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 03:15:21 np0005558241 systemd[1]: Started libpod-conmon-809df31e4b607e516274bbbb8fe9094797766e321efedd2c26a2e7ddee188ff2.scope.
Dec 13 03:15:22 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:15:22 np0005558241 podman[267643]: 2025-12-13 08:15:21.928589477 +0000 UTC m=+0.023491320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:15:22 np0005558241 podman[267643]: 2025-12-13 08:15:22.036055754 +0000 UTC m=+0.130957657 container init 809df31e4b607e516274bbbb8fe9094797766e321efedd2c26a2e7ddee188ff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:15:22 np0005558241 podman[267643]: 2025-12-13 08:15:22.044181164 +0000 UTC m=+0.139082987 container start 809df31e4b607e516274bbbb8fe9094797766e321efedd2c26a2e7ddee188ff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:15:22 np0005558241 podman[267643]: 2025-12-13 08:15:22.047771922 +0000 UTC m=+0.142673835 container attach 809df31e4b607e516274bbbb8fe9094797766e321efedd2c26a2e7ddee188ff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_leakey, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 03:15:22 np0005558241 trusting_leakey[267659]: 167 167
Dec 13 03:15:22 np0005558241 systemd[1]: libpod-809df31e4b607e516274bbbb8fe9094797766e321efedd2c26a2e7ddee188ff2.scope: Deactivated successfully.
Dec 13 03:15:22 np0005558241 conmon[267659]: conmon 809df31e4b607e516274 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-809df31e4b607e516274bbbb8fe9094797766e321efedd2c26a2e7ddee188ff2.scope/container/memory.events
Dec 13 03:15:22 np0005558241 podman[267643]: 2025-12-13 08:15:22.051720879 +0000 UTC m=+0.146622702 container died 809df31e4b607e516274bbbb8fe9094797766e321efedd2c26a2e7ddee188ff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:15:22 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e039977b26d4943e7e26ee3e76ac0528acc3a04db4ded5dc7033f76cd946d1f7-merged.mount: Deactivated successfully.
Dec 13 03:15:22 np0005558241 podman[267643]: 2025-12-13 08:15:22.097439286 +0000 UTC m=+0.192341109 container remove 809df31e4b607e516274bbbb8fe9094797766e321efedd2c26a2e7ddee188ff2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_leakey, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:15:22 np0005558241 systemd[1]: libpod-conmon-809df31e4b607e516274bbbb8fe9094797766e321efedd2c26a2e7ddee188ff2.scope: Deactivated successfully.
Dec 13 03:15:22 np0005558241 nova_compute[248510]: 2025-12-13 08:15:22.179 248514 DEBUG nova.scheduler.client.report [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:15:22 np0005558241 podman[267683]: 2025-12-13 08:15:22.27668101 +0000 UTC m=+0.051090079 container create 28625a82ba9e35d06a73e3a99c52cf93a6a232bd6f8c8c2568619f1b37800164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_dubinsky, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:15:22 np0005558241 systemd[1]: Started libpod-conmon-28625a82ba9e35d06a73e3a99c52cf93a6a232bd6f8c8c2568619f1b37800164.scope.
Dec 13 03:15:22 np0005558241 podman[267683]: 2025-12-13 08:15:22.254121745 +0000 UTC m=+0.028530834 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:15:22 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:15:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fb62887136d089eabfa48cb725eeeaca6c5331c7ef724646dfc312e7f8b3f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:15:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fb62887136d089eabfa48cb725eeeaca6c5331c7ef724646dfc312e7f8b3f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:15:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fb62887136d089eabfa48cb725eeeaca6c5331c7ef724646dfc312e7f8b3f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:15:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fb62887136d089eabfa48cb725eeeaca6c5331c7ef724646dfc312e7f8b3f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:15:22 np0005558241 podman[267683]: 2025-12-13 08:15:22.372675075 +0000 UTC m=+0.147084174 container init 28625a82ba9e35d06a73e3a99c52cf93a6a232bd6f8c8c2568619f1b37800164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Dec 13 03:15:22 np0005558241 podman[267683]: 2025-12-13 08:15:22.379816601 +0000 UTC m=+0.154225690 container start 28625a82ba9e35d06a73e3a99c52cf93a6a232bd6f8c8c2568619f1b37800164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_dubinsky, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 03:15:22 np0005558241 podman[267683]: 2025-12-13 08:15:22.384156488 +0000 UTC m=+0.158565567 container attach 28625a82ba9e35d06a73e3a99c52cf93a6a232bd6f8c8c2568619f1b37800164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_dubinsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:15:22 np0005558241 nova_compute[248510]: 2025-12-13 08:15:22.606 248514 DEBUG oslo_concurrency.lockutils [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]: {
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:    "0": [
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:        {
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "devices": [
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "/dev/loop3"
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            ],
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "lv_name": "ceph_lv0",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "lv_size": "21470642176",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "name": "ceph_lv0",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "tags": {
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.cluster_name": "ceph",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.crush_device_class": "",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.encrypted": "0",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.objectstore": "bluestore",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.osd_id": "0",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.type": "block",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.vdo": "0",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.with_tpm": "0"
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            },
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "type": "block",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "vg_name": "ceph_vg0"
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:        }
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:    ],
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:    "1": [
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:        {
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "devices": [
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "/dev/loop4"
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            ],
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "lv_name": "ceph_lv1",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "lv_size": "21470642176",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "name": "ceph_lv1",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "tags": {
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.cluster_name": "ceph",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.crush_device_class": "",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.encrypted": "0",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.objectstore": "bluestore",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.osd_id": "1",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.type": "block",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.vdo": "0",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.with_tpm": "0"
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            },
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "type": "block",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "vg_name": "ceph_vg1"
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:        }
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:    ],
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:    "2": [
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:        {
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "devices": [
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "/dev/loop5"
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            ],
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "lv_name": "ceph_lv2",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "lv_size": "21470642176",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "name": "ceph_lv2",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "tags": {
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.cluster_name": "ceph",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.crush_device_class": "",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.encrypted": "0",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.objectstore": "bluestore",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.osd_id": "2",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.type": "block",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.vdo": "0",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:                "ceph.with_tpm": "0"
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            },
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "type": "block",
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:            "vg_name": "ceph_vg2"
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:        }
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]:    ]
Dec 13 03:15:22 np0005558241 determined_dubinsky[267699]: }
Dec 13 03:15:22 np0005558241 nova_compute[248510]: 2025-12-13 08:15:22.694 248514 INFO nova.scheduler.client.report [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Deleted allocations for instance b1899d5c-4fd3-42a0-9b73-5a3f70253c1d
Dec 13 03:15:22 np0005558241 systemd[1]: libpod-28625a82ba9e35d06a73e3a99c52cf93a6a232bd6f8c8c2568619f1b37800164.scope: Deactivated successfully.
Dec 13 03:15:22 np0005558241 podman[267683]: 2025-12-13 08:15:22.702449758 +0000 UTC m=+0.476858857 container died 28625a82ba9e35d06a73e3a99c52cf93a6a232bd6f8c8c2568619f1b37800164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_dubinsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 03:15:22 np0005558241 systemd[1]: var-lib-containers-storage-overlay-94fb62887136d089eabfa48cb725eeeaca6c5331c7ef724646dfc312e7f8b3f6-merged.mount: Deactivated successfully.
Dec 13 03:15:22 np0005558241 podman[267683]: 2025-12-13 08:15:22.750192874 +0000 UTC m=+0.524601943 container remove 28625a82ba9e35d06a73e3a99c52cf93a6a232bd6f8c8c2568619f1b37800164 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:15:22 np0005558241 systemd[1]: libpod-conmon-28625a82ba9e35d06a73e3a99c52cf93a6a232bd6f8c8c2568619f1b37800164.scope: Deactivated successfully.
Dec 13 03:15:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 801 KiB/s wr, 161 op/s
Dec 13 03:15:23 np0005558241 podman[267780]: 2025-12-13 08:15:23.254002683 +0000 UTC m=+0.047415819 container create 345d4e6b35c9f96eb8e9862420aae6b5703ccb3739a512f827f7532283827d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_banzai, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:15:23 np0005558241 systemd[1]: Started libpod-conmon-345d4e6b35c9f96eb8e9862420aae6b5703ccb3739a512f827f7532283827d84.scope.
Dec 13 03:15:23 np0005558241 podman[267780]: 2025-12-13 08:15:23.231852767 +0000 UTC m=+0.025265953 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:15:23 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:15:23 np0005558241 podman[267780]: 2025-12-13 08:15:23.440368463 +0000 UTC m=+0.233781709 container init 345d4e6b35c9f96eb8e9862420aae6b5703ccb3739a512f827f7532283827d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_banzai, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:15:23 np0005558241 podman[267780]: 2025-12-13 08:15:23.449182551 +0000 UTC m=+0.242595707 container start 345d4e6b35c9f96eb8e9862420aae6b5703ccb3739a512f827f7532283827d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_banzai, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 03:15:23 np0005558241 zealous_banzai[267797]: 167 167
Dec 13 03:15:23 np0005558241 systemd[1]: libpod-345d4e6b35c9f96eb8e9862420aae6b5703ccb3739a512f827f7532283827d84.scope: Deactivated successfully.
Dec 13 03:15:23 np0005558241 podman[267780]: 2025-12-13 08:15:23.460188002 +0000 UTC m=+0.253601238 container attach 345d4e6b35c9f96eb8e9862420aae6b5703ccb3739a512f827f7532283827d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_banzai, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:15:23 np0005558241 podman[267780]: 2025-12-13 08:15:23.46092515 +0000 UTC m=+0.254338336 container died 345d4e6b35c9f96eb8e9862420aae6b5703ccb3739a512f827f7532283827d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 03:15:23 np0005558241 nova_compute[248510]: 2025-12-13 08:15:23.519 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:23 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5a067a6aa5e2879ec0b0e24d5cd4631d1d931294b61c1552a19f835d78b76d74-merged.mount: Deactivated successfully.
Dec 13 03:15:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:15:23 np0005558241 podman[267780]: 2025-12-13 08:15:23.662881134 +0000 UTC m=+0.456294280 container remove 345d4e6b35c9f96eb8e9862420aae6b5703ccb3739a512f827f7532283827d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_banzai, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 03:15:23 np0005558241 systemd[1]: libpod-conmon-345d4e6b35c9f96eb8e9862420aae6b5703ccb3739a512f827f7532283827d84.scope: Deactivated successfully.
Dec 13 03:15:23 np0005558241 nova_compute[248510]: 2025-12-13 08:15:23.790 248514 DEBUG oslo_concurrency.lockutils [None req-90bfefe2-4200-4198-8959-29b2f2439f5e 9d048243a9d0486dabf41512320e2948 e86f5d35accc4f5489476a565c247620 - - default default] Lock "b1899d5c-4fd3-42a0-9b73-5a3f70253c1d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:15:23 np0005558241 podman[267819]: 2025-12-13 08:15:23.896257492 +0000 UTC m=+0.057553508 container create a797c5d8b89ce63a7f36fea76acb6be7c8c285f250da8164c31d33fd60d6c2ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_noyce, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 03:15:23 np0005558241 systemd[1]: Started libpod-conmon-a797c5d8b89ce63a7f36fea76acb6be7c8c285f250da8164c31d33fd60d6c2ff.scope.
Dec 13 03:15:23 np0005558241 podman[267819]: 2025-12-13 08:15:23.867199567 +0000 UTC m=+0.028495673 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:15:23 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:15:23 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0ce67c196ae6026403ee8bcce50b854c69f356d75759869f33291cf9f5c86c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:15:23 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0ce67c196ae6026403ee8bcce50b854c69f356d75759869f33291cf9f5c86c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:15:23 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0ce67c196ae6026403ee8bcce50b854c69f356d75759869f33291cf9f5c86c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:15:23 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0ce67c196ae6026403ee8bcce50b854c69f356d75759869f33291cf9f5c86c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:15:24 np0005558241 podman[267819]: 2025-12-13 08:15:24.395018947 +0000 UTC m=+0.556314983 container init a797c5d8b89ce63a7f36fea76acb6be7c8c285f250da8164c31d33fd60d6c2ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_noyce, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:15:24 np0005558241 podman[267819]: 2025-12-13 08:15:24.403132197 +0000 UTC m=+0.564428213 container start a797c5d8b89ce63a7f36fea76acb6be7c8c285f250da8164c31d33fd60d6c2ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_noyce, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 03:15:24 np0005558241 podman[267819]: 2025-12-13 08:15:24.406981352 +0000 UTC m=+0.568277388 container attach a797c5d8b89ce63a7f36fea76acb6be7c8c285f250da8164c31d33fd60d6c2ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_noyce, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:15:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1465: 321 pgs: 321 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.9 MiB/s wr, 253 op/s
Dec 13 03:15:25 np0005558241 lvm[267913]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:15:25 np0005558241 lvm[267913]: VG ceph_vg0 finished
Dec 13 03:15:25 np0005558241 lvm[267914]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:15:25 np0005558241 lvm[267914]: VG ceph_vg1 finished
Dec 13 03:15:25 np0005558241 lvm[267916]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:15:25 np0005558241 lvm[267916]: VG ceph_vg2 finished
Dec 13 03:15:25 np0005558241 strange_noyce[267835]: {}
Dec 13 03:15:25 np0005558241 systemd[1]: libpod-a797c5d8b89ce63a7f36fea76acb6be7c8c285f250da8164c31d33fd60d6c2ff.scope: Deactivated successfully.
Dec 13 03:15:25 np0005558241 systemd[1]: libpod-a797c5d8b89ce63a7f36fea76acb6be7c8c285f250da8164c31d33fd60d6c2ff.scope: Consumed 1.492s CPU time.
Dec 13 03:15:25 np0005558241 podman[267819]: 2025-12-13 08:15:25.371665112 +0000 UTC m=+1.532961128 container died a797c5d8b89ce63a7f36fea76acb6be7c8c285f250da8164c31d33fd60d6c2ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 03:15:25 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0c0ce67c196ae6026403ee8bcce50b854c69f356d75759869f33291cf9f5c86c-merged.mount: Deactivated successfully.
Dec 13 03:15:25 np0005558241 podman[267819]: 2025-12-13 08:15:25.420995187 +0000 UTC m=+1.582291223 container remove a797c5d8b89ce63a7f36fea76acb6be7c8c285f250da8164c31d33fd60d6c2ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_noyce, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:15:25 np0005558241 systemd[1]: libpod-conmon-a797c5d8b89ce63a7f36fea76acb6be7c8c285f250da8164c31d33fd60d6c2ff.scope: Deactivated successfully.
Dec 13 03:15:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:15:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:15:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:15:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:15:25 np0005558241 nova_compute[248510]: 2025-12-13 08:15:25.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:15:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:15:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:15:26 np0005558241 nova_compute[248510]: 2025-12-13 08:15:26.695 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 156 op/s
Dec 13 03:15:27 np0005558241 nova_compute[248510]: 2025-12-13 08:15:27.463 248514 DEBUG nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec 13 03:15:28 np0005558241 nova_compute[248510]: 2025-12-13 08:15:28.564 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:15:28 np0005558241 nova_compute[248510]: 2025-12-13 08:15:28.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:15:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 158 op/s
Dec 13 03:15:29 np0005558241 nova_compute[248510]: 2025-12-13 08:15:29.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:15:29 np0005558241 nova_compute[248510]: 2025-12-13 08:15:29.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:15:30 np0005558241 nova_compute[248510]: 2025-12-13 08:15:30.067 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:15:30 np0005558241 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec 13 03:15:30 np0005558241 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 12.685s CPU time.
Dec 13 03:15:30 np0005558241 systemd-machined[210538]: Machine qemu-13-instance-0000000d terminated.
Dec 13 03:15:30 np0005558241 nova_compute[248510]: 2025-12-13 08:15:30.481 248514 INFO nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Instance shutdown successfully after 13 seconds.#033[00m
Dec 13 03:15:30 np0005558241 nova_compute[248510]: 2025-12-13 08:15:30.488 248514 INFO nova.virt.libvirt.driver [-] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Instance destroyed successfully.#033[00m
Dec 13 03:15:30 np0005558241 nova_compute[248510]: 2025-12-13 08:15:30.494 248514 INFO nova.virt.libvirt.driver [-] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Instance destroyed successfully.#033[00m
Dec 13 03:15:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1468: 321 pgs: 321 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 787 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Dec 13 03:15:30 np0005558241 nova_compute[248510]: 2025-12-13 08:15:30.837 248514 INFO nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Deleting instance files /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f_del#033[00m
Dec 13 03:15:30 np0005558241 nova_compute[248510]: 2025-12-13 08:15:30.838 248514 INFO nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Deletion of /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f_del complete#033[00m
Dec 13 03:15:31 np0005558241 nova_compute[248510]: 2025-12-13 08:15:31.697 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:31 np0005558241 nova_compute[248510]: 2025-12-13 08:15:31.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:15:31 np0005558241 podman[267978]: 2025-12-13 08:15:31.991844709 +0000 UTC m=+0.066536459 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:15:31 np0005558241 podman[267977]: 2025-12-13 08:15:31.995044368 +0000 UTC m=+0.074683800 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 03:15:32 np0005558241 podman[267976]: 2025-12-13 08:15:32.021151271 +0000 UTC m=+0.099744258 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:15:32 np0005558241 nova_compute[248510]: 2025-12-13 08:15:32.368 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:15:32 np0005558241 nova_compute[248510]: 2025-12-13 08:15:32.368 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:15:32 np0005558241 nova_compute[248510]: 2025-12-13 08:15:32.369 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:15:32 np0005558241 nova_compute[248510]: 2025-12-13 08:15:32.370 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:15:32 np0005558241 nova_compute[248510]: 2025-12-13 08:15:32.371 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:32 np0005558241 nova_compute[248510]: 2025-12-13 08:15:32.414 248514 DEBUG nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:15:32 np0005558241 nova_compute[248510]: 2025-12-13 08:15:32.415 248514 INFO nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Creating image(s)#033[00m
Dec 13 03:15:32 np0005558241 nova_compute[248510]: 2025-12-13 08:15:32.442 248514 DEBUG nova.storage.rbd_utils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:15:32 np0005558241 nova_compute[248510]: 2025-12-13 08:15:32.474 248514 DEBUG nova.storage.rbd_utils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:15:32 np0005558241 nova_compute[248510]: 2025-12-13 08:15:32.506 248514 DEBUG nova.storage.rbd_utils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:15:32 np0005558241 nova_compute[248510]: 2025-12-13 08:15:32.512 248514 DEBUG oslo_concurrency.lockutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Acquiring lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:15:32 np0005558241 nova_compute[248510]: 2025-12-13 08:15:32.513 248514 DEBUG oslo_concurrency.lockutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:15:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1469: 321 pgs: 321 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Dec 13 03:15:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:15:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2997460922' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:15:32 np0005558241 nova_compute[248510]: 2025-12-13 08:15:32.968 248514 DEBUG nova.virt.libvirt.imagebackend [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Image locations are: [{'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/c10f1898-20b3-4bc9-8a36-2ee01b39c9ce/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/c10f1898-20b3-4bc9-8a36-2ee01b39c9ce/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec 13 03:15:32 np0005558241 nova_compute[248510]: 2025-12-13 08:15:32.972 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:33 np0005558241 nova_compute[248510]: 2025-12-13 08:15:33.169 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:15:33 np0005558241 nova_compute[248510]: 2025-12-13 08:15:33.171 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4666MB free_disk=59.942613425664604GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:15:33 np0005558241 nova_compute[248510]: 2025-12-13 08:15:33.171 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:15:33 np0005558241 nova_compute[248510]: 2025-12-13 08:15:33.172 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:15:33 np0005558241 nova_compute[248510]: 2025-12-13 08:15:33.384 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance b0993cc2-f55f-4847-84c6-dd3ab57cc25f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:15:33 np0005558241 nova_compute[248510]: 2025-12-13 08:15:33.385 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:15:33 np0005558241 nova_compute[248510]: 2025-12-13 08:15:33.385 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:15:33 np0005558241 nova_compute[248510]: 2025-12-13 08:15:33.516 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:33 np0005558241 nova_compute[248510]: 2025-12-13 08:15:33.567 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.039 248514 DEBUG oslo_concurrency.processutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:15:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2354126643' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.062 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765613719.061917, b1899d5c-4fd3-42a0-9b73-5a3f70253c1d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.063 248514 INFO nova.compute.manager [-] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.084 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.090 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.093 248514 DEBUG nova.compute.manager [None req-2bd68e65-377a-4e96-b2f6-be4ce3a67c57 - - - - - -] [instance: b1899d5c-4fd3-42a0-9b73-5a3f70253c1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.107 248514 DEBUG oslo_concurrency.processutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc.part --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.108 248514 DEBUG nova.virt.images [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] c10f1898-20b3-4bc9-8a36-2ee01b39c9ce was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.109 248514 DEBUG nova.privsep.utils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.109 248514 DEBUG oslo_concurrency.processutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc.part /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.139 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.167 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.167 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.297 248514 DEBUG oslo_concurrency.processutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc.part /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc.converted" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.308 248514 DEBUG oslo_concurrency.processutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.373 248514 DEBUG oslo_concurrency.processutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc.converted --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.376 248514 DEBUG oslo_concurrency.lockutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.412 248514 DEBUG nova.storage.rbd_utils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.417 248514 DEBUG oslo_concurrency.processutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1470: 321 pgs: 321 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 128 op/s
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.793 248514 DEBUG oslo_concurrency.processutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.376s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.856 248514 DEBUG nova.storage.rbd_utils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] resizing rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.950 248514 DEBUG nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.951 248514 DEBUG nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Ensure instance console log exists: /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.951 248514 DEBUG oslo_concurrency.lockutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.952 248514 DEBUG oslo_concurrency.lockutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.952 248514 DEBUG oslo_concurrency.lockutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.954 248514 DEBUG nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.959 248514 WARNING nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.964 248514 DEBUG nova.virt.libvirt.host [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.965 248514 DEBUG nova.virt.libvirt.host [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.969 248514 DEBUG nova.virt.libvirt.host [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.970 248514 DEBUG nova.virt.libvirt.host [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.970 248514 DEBUG nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.971 248514 DEBUG nova.virt.hardware [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.971 248514 DEBUG nova.virt.hardware [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.971 248514 DEBUG nova.virt.hardware [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.972 248514 DEBUG nova.virt.hardware [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.972 248514 DEBUG nova.virt.hardware [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.972 248514 DEBUG nova.virt.hardware [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.973 248514 DEBUG nova.virt.hardware [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.973 248514 DEBUG nova.virt.hardware [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.973 248514 DEBUG nova.virt.hardware [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.973 248514 DEBUG nova.virt.hardware [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.974 248514 DEBUG nova.virt.hardware [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.974 248514 DEBUG nova.objects.instance [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lazy-loading 'vcpu_model' on Instance uuid b0993cc2-f55f-4847-84c6-dd3ab57cc25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:15:34 np0005558241 nova_compute[248510]: 2025-12-13 08:15:34.999 248514 DEBUG oslo_concurrency.processutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:15:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2133968109' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:15:35 np0005558241 nova_compute[248510]: 2025-12-13 08:15:35.538 248514 DEBUG oslo_concurrency.processutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:35 np0005558241 nova_compute[248510]: 2025-12-13 08:15:35.562 248514 DEBUG nova.storage.rbd_utils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:15:35 np0005558241 nova_compute[248510]: 2025-12-13 08:15:35.567 248514 DEBUG oslo_concurrency.processutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:15:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/129572976' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.123 248514 DEBUG oslo_concurrency.processutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.126 248514 DEBUG nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:15:36 np0005558241 nova_compute[248510]:  <uuid>b0993cc2-f55f-4847-84c6-dd3ab57cc25f</uuid>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:  <name>instance-0000000d</name>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersAdmin275Test-server-1375343897</nova:name>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:15:34</nova:creationTime>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:15:36 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:        <nova:user uuid="227d115378154452ad3fe25dae04837b">tempest-ServersAdmin275Test-1969959405-project-member</nova:user>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:        <nova:project uuid="39b72cde121b4531a2b743b669122ebd">tempest-ServersAdmin275Test-1969959405</nova:project>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="c10f1898-20b3-4bc9-8a36-2ee01b39c9ce"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <entry name="serial">b0993cc2-f55f-4847-84c6-dd3ab57cc25f</entry>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <entry name="uuid">b0993cc2-f55f-4847-84c6-dd3ab57cc25f</entry>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk">
Dec 13 03:15:36 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:15:36 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk.config">
Dec 13 03:15:36 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:15:36 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/console.log" append="off"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:15:36 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:15:36 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:15:36 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:15:36 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.168 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.192 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.193 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.196 248514 DEBUG nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.196 248514 DEBUG nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.197 248514 INFO nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Using config drive#033[00m
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.226 248514 DEBUG nova.storage.rbd_utils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.232 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.260 248514 DEBUG nova.objects.instance [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lazy-loading 'ec2_ids' on Instance uuid b0993cc2-f55f-4847-84c6-dd3ab57cc25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.356 248514 DEBUG nova.objects.instance [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lazy-loading 'keypairs' on Instance uuid b0993cc2-f55f-4847-84c6-dd3ab57cc25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.699 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:15:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 26 KiB/s wr, 36 op/s
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.787 248514 INFO nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Creating config drive at /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/disk.config#033[00m
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.794 248514 DEBUG oslo_concurrency.processutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkgrinecm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.924 248514 DEBUG oslo_concurrency.processutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkgrinecm" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.955 248514 DEBUG nova.storage.rbd_utils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:15:36 np0005558241 nova_compute[248510]: 2025-12-13 08:15:36.960 248514 DEBUG oslo_concurrency.processutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/disk.config b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.099 248514 DEBUG oslo_concurrency.processutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/disk.config b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.100 248514 INFO nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Deleting local config drive /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/disk.config because it was imported into RBD.#033[00m
Dec 13 03:15:37 np0005558241 systemd-machined[210538]: New machine qemu-15-instance-0000000d.
Dec 13 03:15:37 np0005558241 systemd[1]: Started Virtual Machine qemu-15-instance-0000000d.
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.594 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for b0993cc2-f55f-4847-84c6-dd3ab57cc25f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.596 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613737.5935712, b0993cc2-f55f-4847-84c6-dd3ab57cc25f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.596 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.600 248514 DEBUG nova.compute.manager [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.601 248514 DEBUG nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.605 248514 INFO nova.virt.libvirt.driver [-] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Instance spawned successfully.#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.605 248514 DEBUG nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:15:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.655 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:15:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.661 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:15:37 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.666 248514 DEBUG nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.666 248514 DEBUG nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.667 248514 DEBUG nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.668 248514 DEBUG nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.669 248514 DEBUG nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.670 248514 DEBUG nova.virt.libvirt.driver [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.709 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.710 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613737.5953023, b0993cc2-f55f-4847-84c6-dd3ab57cc25f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.710 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] VM Started (Lifecycle Event)#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.752 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.756 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.763 248514 DEBUG nova.compute.manager [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.802 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.844 248514 DEBUG oslo_concurrency.lockutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.845 248514 DEBUG oslo_concurrency.lockutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:15:37 np0005558241 nova_compute[248510]: 2025-12-13 08:15:37.845 248514 DEBUG nova.objects.instance [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec 13 03:15:38 np0005558241 nova_compute[248510]: 2025-12-13 08:15:38.219 248514 DEBUG oslo_concurrency.lockutils [None req-a606dbc7-c2d4-4a13-8241-e382c0c35fa6 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:15:38 np0005558241 nova_compute[248510]: 2025-12-13 08:15:38.568 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:15:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1473: 321 pgs: 321 active+clean; 87 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 102 op/s
Dec 13 03:15:39 np0005558241 nova_compute[248510]: 2025-12-13 08:15:39.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:15:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:15:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:15:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:15:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:15:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:15:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:15:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1474: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Dec 13 03:15:41 np0005558241 nova_compute[248510]: 2025-12-13 08:15:41.527 248514 INFO nova.compute.manager [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Rebuilding instance#033[00m
Dec 13 03:15:41 np0005558241 nova_compute[248510]: 2025-12-13 08:15:41.702 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:41 np0005558241 nova_compute[248510]: 2025-12-13 08:15:41.935 248514 DEBUG nova.objects.instance [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b0993cc2-f55f-4847-84c6-dd3ab57cc25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:15:42 np0005558241 nova_compute[248510]: 2025-12-13 08:15:42.031 248514 DEBUG nova.compute.manager [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:15:42 np0005558241 nova_compute[248510]: 2025-12-13 08:15:42.111 248514 DEBUG nova.objects.instance [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Lazy-loading 'pci_requests' on Instance uuid b0993cc2-f55f-4847-84c6-dd3ab57cc25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:15:42 np0005558241 nova_compute[248510]: 2025-12-13 08:15:42.136 248514 DEBUG nova.objects.instance [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Lazy-loading 'pci_devices' on Instance uuid b0993cc2-f55f-4847-84c6-dd3ab57cc25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:15:42 np0005558241 nova_compute[248510]: 2025-12-13 08:15:42.169 248514 DEBUG nova.objects.instance [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Lazy-loading 'resources' on Instance uuid b0993cc2-f55f-4847-84c6-dd3ab57cc25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:15:42 np0005558241 nova_compute[248510]: 2025-12-13 08:15:42.194 248514 DEBUG nova.objects.instance [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Lazy-loading 'migration_context' on Instance uuid b0993cc2-f55f-4847-84c6-dd3ab57cc25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:15:42 np0005558241 nova_compute[248510]: 2025-12-13 08:15:42.214 248514 DEBUG nova.objects.instance [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec 13 03:15:42 np0005558241 nova_compute[248510]: 2025-12-13 08:15:42.220 248514 DEBUG nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec 13 03:15:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1475: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Dec 13 03:15:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Dec 13 03:15:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Dec 13 03:15:43 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Dec 13 03:15:43 np0005558241 nova_compute[248510]: 2025-12-13 08:15:43.571 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:15:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 183 op/s
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:15:46.610305) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613746610392, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2076, "num_deletes": 251, "total_data_size": 3381413, "memory_usage": 3438816, "flush_reason": "Manual Compaction"}
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613746634680, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3291336, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25922, "largest_seqno": 27997, "table_properties": {"data_size": 3282092, "index_size": 5738, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19549, "raw_average_key_size": 20, "raw_value_size": 3263368, "raw_average_value_size": 3385, "num_data_blocks": 253, "num_entries": 964, "num_filter_entries": 964, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765613538, "oldest_key_time": 1765613538, "file_creation_time": 1765613746, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 24448 microseconds, and 7791 cpu microseconds.
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:15:46.634757) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3291336 bytes OK
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:15:46.634787) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:15:46.637390) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:15:46.637424) EVENT_LOG_v1 {"time_micros": 1765613746637415, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:15:46.637455) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3372678, prev total WAL file size 3372678, number of live WAL files 2.
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:15:46.638802) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3214KB)], [59(7956KB)]
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613746638937, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 11438432, "oldest_snapshot_seqno": -1}
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5328 keys, 9576844 bytes, temperature: kUnknown
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613746699275, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9576844, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9539512, "index_size": 22886, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 132761, "raw_average_key_size": 24, "raw_value_size": 9441722, "raw_average_value_size": 1772, "num_data_blocks": 943, "num_entries": 5328, "num_filter_entries": 5328, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765613746, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:15:46.699529) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9576844 bytes
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:15:46.701227) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.3 rd, 158.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.8 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 5846, records dropped: 518 output_compression: NoCompression
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:15:46.701247) EVENT_LOG_v1 {"time_micros": 1765613746701236, "job": 32, "event": "compaction_finished", "compaction_time_micros": 60424, "compaction_time_cpu_micros": 21065, "output_level": 6, "num_output_files": 1, "total_output_size": 9576844, "num_input_records": 5846, "num_output_records": 5328, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613746701939, "job": 32, "event": "table_file_deletion", "file_number": 61}
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613746703209, "job": 32, "event": "table_file_deletion", "file_number": 59}
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:15:46.638606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:15:46.703262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:15:46.703267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:15:46.703269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:15:46.703271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:15:46 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:15:46.703272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:15:46 np0005558241 nova_compute[248510]: 2025-12-13 08:15:46.704 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1478: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 160 op/s
Dec 13 03:15:48 np0005558241 nova_compute[248510]: 2025-12-13 08:15:48.575 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:15:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.0 KiB/s wr, 99 op/s
Dec 13 03:15:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:15:49.186 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:15:49 np0005558241 nova_compute[248510]: 2025-12-13 08:15:49.187 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:15:49.187 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:15:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 174 KiB/s wr, 89 op/s
Dec 13 03:15:51 np0005558241 nova_compute[248510]: 2025-12-13 08:15:51.706 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:52 np0005558241 nova_compute[248510]: 2025-12-13 08:15:52.266 248514 DEBUG nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec 13 03:15:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 174 KiB/s wr, 89 op/s
Dec 13 03:15:53 np0005558241 nova_compute[248510]: 2025-12-13 08:15:53.639 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:15:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:15:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Dec 13 03:15:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Dec 13 03:15:53 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Dec 13 03:15:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:15:54.190 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:15:54 np0005558241 nova_compute[248510]: 2025-12-13 08:15:54.360 248514 DEBUG oslo_concurrency.lockutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Acquiring lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:15:54 np0005558241 nova_compute[248510]: 2025-12-13 08:15:54.360 248514 DEBUG oslo_concurrency.lockutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:15:54 np0005558241 nova_compute[248510]: 2025-12-13 08:15:54.605 248514 DEBUG nova.compute.manager [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:15:54 np0005558241 nova_compute[248510]: 2025-12-13 08:15:54.740 248514 DEBUG oslo_concurrency.lockutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:15:54 np0005558241 nova_compute[248510]: 2025-12-13 08:15:54.741 248514 DEBUG oslo_concurrency.lockutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:15:54 np0005558241 nova_compute[248510]: 2025-12-13 08:15:54.749 248514 DEBUG nova.virt.hardware [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:15:54 np0005558241 nova_compute[248510]: 2025-12-13 08:15:54.749 248514 INFO nova.compute.claims [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:15:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 413 KiB/s rd, 2.6 MiB/s wr, 94 op/s
Dec 13 03:15:54 np0005558241 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec 13 03:15:54 np0005558241 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Consumed 12.501s CPU time.
Dec 13 03:15:54 np0005558241 systemd-machined[210538]: Machine qemu-15-instance-0000000d terminated.
Dec 13 03:15:55 np0005558241 nova_compute[248510]: 2025-12-13 08:15:55.003 248514 DEBUG oslo_concurrency.processutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:15:55 np0005558241 nova_compute[248510]: 2025-12-13 08:15:55.282 248514 INFO nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Instance shutdown successfully after 13 seconds.#033[00m
Dec 13 03:15:55 np0005558241 nova_compute[248510]: 2025-12-13 08:15:55.289 248514 INFO nova.virt.libvirt.driver [-] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Instance destroyed successfully.#033[00m
Dec 13 03:15:55 np0005558241 nova_compute[248510]: 2025-12-13 08:15:55.295 248514 INFO nova.virt.libvirt.driver [-] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Instance destroyed successfully.#033[00m
Dec 13 03:15:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:15:55.396 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:15:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:15:55.397 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:15:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:15:55.397 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:15:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:15:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2892165032' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:15:55 np0005558241 nova_compute[248510]: 2025-12-13 08:15:55.533 248514 DEBUG oslo_concurrency.processutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:15:55 np0005558241 nova_compute[248510]: 2025-12-13 08:15:55.540 248514 DEBUG nova.compute.provider_tree [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:15:55 np0005558241 nova_compute[248510]: 2025-12-13 08:15:55.559 248514 DEBUG nova.scheduler.client.report [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:15:55 np0005558241 nova_compute[248510]: 2025-12-13 08:15:55.784 248514 DEBUG oslo_concurrency.lockutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:15:55 np0005558241 nova_compute[248510]: 2025-12-13 08:15:55.785 248514 DEBUG nova.compute.manager [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.117 248514 DEBUG nova.compute.manager [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.118 248514 DEBUG nova.network.neutron [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.240 248514 INFO nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.372 248514 DEBUG nova.compute.manager [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.538 248514 DEBUG nova.policy [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd06ea0a7e1b040b993e05a34b436154d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '081c046ea2ea42ee8e09ed5de2e8db81', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.566 248514 DEBUG nova.compute.manager [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.568 248514 DEBUG nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.569 248514 INFO nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Creating image(s)
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.600 248514 DEBUG nova.storage.rbd_utils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] rbd image d7894b40-fff7-4226-9a51-17dc5fe4bb30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.626 248514 DEBUG nova.storage.rbd_utils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] rbd image d7894b40-fff7-4226-9a51-17dc5fe4bb30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.661 248514 DEBUG nova.storage.rbd_utils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] rbd image d7894b40-fff7-4226-9a51-17dc5fe4bb30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.665 248514 DEBUG oslo_concurrency.processutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.708 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.758 248514 DEBUG oslo_concurrency.processutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.759 248514 DEBUG oslo_concurrency.lockutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.760 248514 DEBUG oslo_concurrency.lockutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.760 248514 DEBUG oslo_concurrency.lockutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.782 248514 DEBUG nova.storage.rbd_utils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] rbd image d7894b40-fff7-4226-9a51-17dc5fe4bb30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.785 248514 DEBUG oslo_concurrency.processutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 d7894b40-fff7-4226-9a51-17dc5fe4bb30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:15:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1484: 321 pgs: 321 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 413 KiB/s rd, 2.6 MiB/s wr, 94 op/s
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.839 248514 INFO nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Deleting instance files /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f_del
Dec 13 03:15:56 np0005558241 nova_compute[248510]: 2025-12-13 08:15:56.840 248514 INFO nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Deletion of /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f_del complete
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.545 248514 DEBUG oslo_concurrency.processutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 d7894b40-fff7-4226-9a51-17dc5fe4bb30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.759s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.595 248514 DEBUG nova.storage.rbd_utils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] resizing rbd image d7894b40-fff7-4226-9a51-17dc5fe4bb30_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.689 248514 DEBUG nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.690 248514 INFO nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Creating image(s)
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.723 248514 DEBUG nova.storage.rbd_utils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.756 248514 DEBUG nova.storage.rbd_utils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.787 248514 DEBUG nova.storage.rbd_utils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.792 248514 DEBUG oslo_concurrency.processutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.914 248514 DEBUG oslo_concurrency.processutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.915 248514 DEBUG oslo_concurrency.lockutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.915 248514 DEBUG oslo_concurrency.lockutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.916 248514 DEBUG oslo_concurrency.lockutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.934 248514 DEBUG nova.storage.rbd_utils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.937 248514 DEBUG oslo_concurrency.processutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.960 248514 DEBUG nova.objects.instance [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Lazy-loading 'migration_context' on Instance uuid d7894b40-fff7-4226-9a51-17dc5fe4bb30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.991 248514 DEBUG nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.992 248514 DEBUG nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Ensure instance console log exists: /var/lib/nova/instances/d7894b40-fff7-4226-9a51-17dc5fe4bb30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.992 248514 DEBUG oslo_concurrency.lockutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.993 248514 DEBUG oslo_concurrency.lockutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:15:57 np0005558241 nova_compute[248510]: 2025-12-13 08:15:57.993 248514 DEBUG oslo_concurrency.lockutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:15:58 np0005558241 nova_compute[248510]: 2025-12-13 08:15:58.153 248514 DEBUG nova.network.neutron [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Successfully created port: 99fe9517-3d12-4555-a9f8-2237cc2213c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 03:15:58 np0005558241 nova_compute[248510]: 2025-12-13 08:15:58.643 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:15:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:15:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 104 MiB data, 249 MiB used, 60 GiB / 60 GiB avail; 410 KiB/s rd, 3.6 MiB/s wr, 96 op/s
Dec 13 03:15:58 np0005558241 nova_compute[248510]: 2025-12-13 08:15:58.832 248514 DEBUG oslo_concurrency.processutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.895s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:15:58 np0005558241 nova_compute[248510]: 2025-12-13 08:15:58.894 248514 DEBUG nova.storage.rbd_utils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] resizing rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.124 248514 DEBUG nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.124 248514 DEBUG nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Ensure instance console log exists: /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.125 248514 DEBUG oslo_concurrency.lockutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.125 248514 DEBUG oslo_concurrency.lockutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.125 248514 DEBUG oslo_concurrency.lockutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.127 248514 DEBUG nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.130 248514 WARNING nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.136 248514 DEBUG nova.virt.libvirt.host [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.137 248514 DEBUG nova.virt.libvirt.host [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.140 248514 DEBUG nova.virt.libvirt.host [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.141 248514 DEBUG nova.virt.libvirt.host [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.141 248514 DEBUG nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.141 248514 DEBUG nova.virt.hardware [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.142 248514 DEBUG nova.virt.hardware [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.142 248514 DEBUG nova.virt.hardware [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.142 248514 DEBUG nova.virt.hardware [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.142 248514 DEBUG nova.virt.hardware [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.142 248514 DEBUG nova.virt.hardware [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.142 248514 DEBUG nova.virt.hardware [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.143 248514 DEBUG nova.virt.hardware [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.143 248514 DEBUG nova.virt.hardware [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.143 248514 DEBUG nova.virt.hardware [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.143 248514 DEBUG nova.virt.hardware [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.143 248514 DEBUG nova.objects.instance [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b0993cc2-f55f-4847-84c6-dd3ab57cc25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.180 248514 DEBUG oslo_concurrency.processutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:15:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:15:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/106159907' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.753 248514 DEBUG oslo_concurrency.processutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.773 248514 DEBUG nova.storage.rbd_utils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:15:59 np0005558241 nova_compute[248510]: 2025-12-13 08:15:59.777 248514 DEBUG oslo_concurrency.processutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:16:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:16:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1734648080' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:16:00 np0005558241 nova_compute[248510]: 2025-12-13 08:16:00.325 248514 DEBUG oslo_concurrency.processutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:16:00 np0005558241 nova_compute[248510]: 2025-12-13 08:16:00.328 248514 DEBUG nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:16:00 np0005558241 nova_compute[248510]:  <uuid>b0993cc2-f55f-4847-84c6-dd3ab57cc25f</uuid>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:  <name>instance-0000000d</name>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersAdmin275Test-server-1375343897</nova:name>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:15:59</nova:creationTime>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:16:00 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:        <nova:user uuid="227d115378154452ad3fe25dae04837b">tempest-ServersAdmin275Test-1969959405-project-member</nova:user>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:        <nova:project uuid="39b72cde121b4531a2b743b669122ebd">tempest-ServersAdmin275Test-1969959405</nova:project>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <entry name="serial">b0993cc2-f55f-4847-84c6-dd3ab57cc25f</entry>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <entry name="uuid">b0993cc2-f55f-4847-84c6-dd3ab57cc25f</entry>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk">
Dec 13 03:16:00 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:16:00 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk.config">
Dec 13 03:16:00 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:16:00 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/console.log" append="off"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:16:00 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:16:00 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:16:00 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:16:00 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:16:00 np0005558241 nova_compute[248510]: 2025-12-13 08:16:00.685 248514 DEBUG nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:16:00 np0005558241 nova_compute[248510]: 2025-12-13 08:16:00.685 248514 DEBUG nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:16:00 np0005558241 nova_compute[248510]: 2025-12-13 08:16:00.686 248514 INFO nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Using config drive#033[00m
Dec 13 03:16:00 np0005558241 nova_compute[248510]: 2025-12-13 08:16:00.707 248514 DEBUG nova.storage.rbd_utils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:16:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 123 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 423 KiB/s rd, 6.4 MiB/s wr, 143 op/s
Dec 13 03:16:01 np0005558241 nova_compute[248510]: 2025-12-13 08:16:01.636 248514 DEBUG nova.objects.instance [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Lazy-loading 'ec2_ids' on Instance uuid b0993cc2-f55f-4847-84c6-dd3ab57cc25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:16:01 np0005558241 nova_compute[248510]: 2025-12-13 08:16:01.674 248514 DEBUG nova.objects.instance [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Lazy-loading 'keypairs' on Instance uuid b0993cc2-f55f-4847-84c6-dd3ab57cc25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:16:01 np0005558241 nova_compute[248510]: 2025-12-13 08:16:01.711 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:02 np0005558241 nova_compute[248510]: 2025-12-13 08:16:02.336 248514 INFO nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Creating config drive at /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/disk.config#033[00m
Dec 13 03:16:02 np0005558241 nova_compute[248510]: 2025-12-13 08:16:02.342 248514 DEBUG oslo_concurrency.processutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp66oqj5gl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:16:02 np0005558241 nova_compute[248510]: 2025-12-13 08:16:02.477 248514 DEBUG oslo_concurrency.processutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp66oqj5gl" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:16:02 np0005558241 nova_compute[248510]: 2025-12-13 08:16:02.500 248514 DEBUG nova.storage.rbd_utils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] rbd image b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:16:02 np0005558241 nova_compute[248510]: 2025-12-13 08:16:02.504 248514 DEBUG oslo_concurrency.processutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/disk.config b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:16:02 np0005558241 nova_compute[248510]: 2025-12-13 08:16:02.643 248514 DEBUG oslo_concurrency.processutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/disk.config b0993cc2-f55f-4847-84c6-dd3ab57cc25f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:16:02 np0005558241 nova_compute[248510]: 2025-12-13 08:16:02.645 248514 INFO nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Deleting local config drive /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f/disk.config because it was imported into RBD.#033[00m
Dec 13 03:16:02 np0005558241 systemd-machined[210538]: New machine qemu-16-instance-0000000d.
Dec 13 03:16:02 np0005558241 systemd[1]: Started Virtual Machine qemu-16-instance-0000000d.
Dec 13 03:16:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 123 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 423 KiB/s rd, 6.4 MiB/s wr, 143 op/s
Dec 13 03:16:02 np0005558241 podman[268944]: 2025-12-13 08:16:02.801519726 +0000 UTC m=+0.062981773 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 03:16:02 np0005558241 podman[268943]: 2025-12-13 08:16:02.802797327 +0000 UTC m=+0.069996625 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:16:02 np0005558241 podman[268941]: 2025-12-13 08:16:02.85812974 +0000 UTC m=+0.127592034 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.442 248514 DEBUG nova.network.neutron [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Successfully updated port: 99fe9517-3d12-4555-a9f8-2237cc2213c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.491 248514 DEBUG oslo_concurrency.lockutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Acquiring lock "refresh_cache-d7894b40-fff7-4226-9a51-17dc5fe4bb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.491 248514 DEBUG oslo_concurrency.lockutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Acquired lock "refresh_cache-d7894b40-fff7-4226-9a51-17dc5fe4bb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.491 248514 DEBUG nova.network.neutron [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.559 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for b0993cc2-f55f-4847-84c6-dd3ab57cc25f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.561 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613763.559237, b0993cc2-f55f-4847-84c6-dd3ab57cc25f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.561 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.564 248514 DEBUG nova.compute.manager [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.564 248514 DEBUG nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.570 248514 INFO nova.virt.libvirt.driver [-] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Instance spawned successfully.#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.571 248514 DEBUG nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.643 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.701 248514 DEBUG nova.compute.manager [req-ba3843a2-d2a3-47aa-9664-1c8f28ce828f req-6b86734b-d655-41f1-be7e-5cad496fff20 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Received event network-changed-99fe9517-3d12-4555-a9f8-2237cc2213c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.703 248514 DEBUG nova.compute.manager [req-ba3843a2-d2a3-47aa-9664-1c8f28ce828f req-6b86734b-d655-41f1-be7e-5cad496fff20 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Refreshing instance network info cache due to event network-changed-99fe9517-3d12-4555-a9f8-2237cc2213c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.704 248514 DEBUG oslo_concurrency.lockutils [req-ba3843a2-d2a3-47aa-9664-1c8f28ce828f req-6b86734b-d655-41f1-be7e-5cad496fff20 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d7894b40-fff7-4226-9a51-17dc5fe4bb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.725 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.732 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.736 248514 DEBUG nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.736 248514 DEBUG nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.737 248514 DEBUG nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.737 248514 DEBUG nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.738 248514 DEBUG nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.738 248514 DEBUG nova.virt.libvirt.driver [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.800 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.801 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613763.5612736, b0993cc2-f55f-4847-84c6-dd3ab57cc25f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.801 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] VM Started (Lifecycle Event)#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.910 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.915 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.982 248514 DEBUG nova.network.neutron [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:16:03 np0005558241 nova_compute[248510]: 2025-12-13 08:16:03.987 248514 DEBUG nova.compute.manager [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:16:04 np0005558241 nova_compute[248510]: 2025-12-13 08:16:04.002 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec 13 03:16:04 np0005558241 nova_compute[248510]: 2025-12-13 08:16:04.226 248514 DEBUG oslo_concurrency.lockutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:16:04 np0005558241 nova_compute[248510]: 2025-12-13 08:16:04.227 248514 DEBUG oslo_concurrency.lockutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:16:04 np0005558241 nova_compute[248510]: 2025-12-13 08:16:04.227 248514 DEBUG nova.objects.instance [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec 13 03:16:04 np0005558241 nova_compute[248510]: 2025-12-13 08:16:04.474 248514 DEBUG oslo_concurrency.lockutils [None req-ec4b84f4-bd99-4d9a-84cd-fcb50c5c2bc3 ea16341f48e04f0eb3eade8747b01f31 d1eea0a6020b48f5a104a8a77c364a89 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:16:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1488: 321 pgs: 321 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 397 KiB/s rd, 4.7 MiB/s wr, 140 op/s
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.216 248514 DEBUG nova.network.neutron [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Updating instance_info_cache with network_info: [{"id": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "address": "fa:16:3e:80:d9:f4", "network": {"id": "5f43148c-6790-47ca-9335-d071d9be9a23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-87276096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "081c046ea2ea42ee8e09ed5de2e8db81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fe9517-3d", "ovs_interfaceid": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.266 248514 DEBUG oslo_concurrency.lockutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Releasing lock "refresh_cache-d7894b40-fff7-4226-9a51-17dc5fe4bb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.266 248514 DEBUG nova.compute.manager [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Instance network_info: |[{"id": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "address": "fa:16:3e:80:d9:f4", "network": {"id": "5f43148c-6790-47ca-9335-d071d9be9a23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-87276096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "081c046ea2ea42ee8e09ed5de2e8db81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fe9517-3d", "ovs_interfaceid": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.267 248514 DEBUG oslo_concurrency.lockutils [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Acquiring lock "b0993cc2-f55f-4847-84c6-dd3ab57cc25f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.267 248514 DEBUG oslo_concurrency.lockutils [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "b0993cc2-f55f-4847-84c6-dd3ab57cc25f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.268 248514 DEBUG oslo_concurrency.lockutils [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Acquiring lock "b0993cc2-f55f-4847-84c6-dd3ab57cc25f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.268 248514 DEBUG oslo_concurrency.lockutils [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "b0993cc2-f55f-4847-84c6-dd3ab57cc25f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.268 248514 DEBUG oslo_concurrency.lockutils [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "b0993cc2-f55f-4847-84c6-dd3ab57cc25f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.269 248514 INFO nova.compute.manager [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Terminating instance#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.270 248514 DEBUG oslo_concurrency.lockutils [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Acquiring lock "refresh_cache-b0993cc2-f55f-4847-84c6-dd3ab57cc25f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.270 248514 DEBUG oslo_concurrency.lockutils [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Acquired lock "refresh_cache-b0993cc2-f55f-4847-84c6-dd3ab57cc25f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.271 248514 DEBUG nova.network.neutron [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.272 248514 DEBUG oslo_concurrency.lockutils [req-ba3843a2-d2a3-47aa-9664-1c8f28ce828f req-6b86734b-d655-41f1-be7e-5cad496fff20 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d7894b40-fff7-4226-9a51-17dc5fe4bb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.272 248514 DEBUG nova.network.neutron [req-ba3843a2-d2a3-47aa-9664-1c8f28ce828f req-6b86734b-d655-41f1-be7e-5cad496fff20 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Refreshing network info cache for port 99fe9517-3d12-4555-a9f8-2237cc2213c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.274 248514 DEBUG nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Start _get_guest_xml network_info=[{"id": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "address": "fa:16:3e:80:d9:f4", "network": {"id": "5f43148c-6790-47ca-9335-d071d9be9a23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-87276096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "081c046ea2ea42ee8e09ed5de2e8db81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fe9517-3d", "ovs_interfaceid": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.280 248514 WARNING nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.289 248514 DEBUG nova.virt.libvirt.host [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.290 248514 DEBUG nova.virt.libvirt.host [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.295 248514 DEBUG nova.virt.libvirt.host [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.296 248514 DEBUG nova.virt.libvirt.host [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.296 248514 DEBUG nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.297 248514 DEBUG nova.virt.hardware [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.297 248514 DEBUG nova.virt.hardware [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.298 248514 DEBUG nova.virt.hardware [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.298 248514 DEBUG nova.virt.hardware [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.298 248514 DEBUG nova.virt.hardware [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.298 248514 DEBUG nova.virt.hardware [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.299 248514 DEBUG nova.virt.hardware [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.299 248514 DEBUG nova.virt.hardware [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.299 248514 DEBUG nova.virt.hardware [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.300 248514 DEBUG nova.virt.hardware [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.300 248514 DEBUG nova.virt.hardware [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.304 248514 DEBUG oslo_concurrency.processutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.519 248514 DEBUG nova.network.neutron [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.713 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 321 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 130 KiB/s rd, 3.6 MiB/s wr, 93 op/s
Dec 13 03:16:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:16:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1698475457' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.890 248514 DEBUG oslo_concurrency.processutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.916 248514 DEBUG nova.storage.rbd_utils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] rbd image d7894b40-fff7-4226-9a51-17dc5fe4bb30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.922 248514 DEBUG oslo_concurrency.processutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:16:06 np0005558241 nova_compute[248510]: 2025-12-13 08:16:06.950 248514 DEBUG nova.network.neutron [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.190 248514 DEBUG oslo_concurrency.lockutils [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Releasing lock "refresh_cache-b0993cc2-f55f-4847-84c6-dd3ab57cc25f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.191 248514 DEBUG nova.compute.manager [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:16:07 np0005558241 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec 13 03:16:07 np0005558241 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000d.scope: Consumed 4.487s CPU time.
Dec 13 03:16:07 np0005558241 systemd-machined[210538]: Machine qemu-16-instance-0000000d terminated.
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.409 248514 INFO nova.virt.libvirt.driver [-] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Instance destroyed successfully.#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.410 248514 DEBUG nova.objects.instance [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lazy-loading 'resources' on Instance uuid b0993cc2-f55f-4847-84c6-dd3ab57cc25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:16:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:16:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2355383362' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.533 248514 DEBUG oslo_concurrency.processutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.535 248514 DEBUG nova.virt.libvirt.vif [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:15:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1146892662',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1146892662',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-114689266',id=15,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='081c046ea2ea42ee8e09ed5de2e8db81',ramdisk_id='',reservation_id='r-ukx38l09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-996250676',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-996250676-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:15:56Z,user_data=None,user_id='d06ea0a7e1b040b993e05a34b436154d',uuid=d7894b40-fff7-4226-9a51-17dc5fe4bb30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "address": "fa:16:3e:80:d9:f4", "network": {"id": "5f43148c-6790-47ca-9335-d071d9be9a23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-87276096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "081c046ea2ea42ee8e09ed5de2e8db81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fe9517-3d", "ovs_interfaceid": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.535 248514 DEBUG nova.network.os_vif_util [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Converting VIF {"id": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "address": "fa:16:3e:80:d9:f4", "network": {"id": "5f43148c-6790-47ca-9335-d071d9be9a23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-87276096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "081c046ea2ea42ee8e09ed5de2e8db81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fe9517-3d", "ovs_interfaceid": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.536 248514 DEBUG nova.network.os_vif_util [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:d9:f4,bridge_name='br-int',has_traffic_filtering=True,id=99fe9517-3d12-4555-a9f8-2237cc2213c5,network=Network(5f43148c-6790-47ca-9335-d071d9be9a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fe9517-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.537 248514 DEBUG nova.objects.instance [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Lazy-loading 'pci_devices' on Instance uuid d7894b40-fff7-4226-9a51-17dc5fe4bb30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.558 248514 DEBUG nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:16:07 np0005558241 nova_compute[248510]:  <uuid>d7894b40-fff7-4226-9a51-17dc5fe4bb30</uuid>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:  <name>instance-0000000f</name>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-1146892662</nova:name>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:16:06</nova:creationTime>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:16:07 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:        <nova:user uuid="d06ea0a7e1b040b993e05a34b436154d">tempest-FloatingIPsAssociationNegativeTestJSON-996250676-project-member</nova:user>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:        <nova:project uuid="081c046ea2ea42ee8e09ed5de2e8db81">tempest-FloatingIPsAssociationNegativeTestJSON-996250676</nova:project>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:        <nova:port uuid="99fe9517-3d12-4555-a9f8-2237cc2213c5">
Dec 13 03:16:07 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <entry name="serial">d7894b40-fff7-4226-9a51-17dc5fe4bb30</entry>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <entry name="uuid">d7894b40-fff7-4226-9a51-17dc5fe4bb30</entry>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/d7894b40-fff7-4226-9a51-17dc5fe4bb30_disk">
Dec 13 03:16:07 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:16:07 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/d7894b40-fff7-4226-9a51-17dc5fe4bb30_disk.config">
Dec 13 03:16:07 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:16:07 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:80:d9:f4"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <target dev="tap99fe9517-3d"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/d7894b40-fff7-4226-9a51-17dc5fe4bb30/console.log" append="off"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:16:07 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:16:07 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:16:07 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:16:07 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.561 248514 DEBUG nova.compute.manager [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Preparing to wait for external event network-vif-plugged-99fe9517-3d12-4555-a9f8-2237cc2213c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.562 248514 DEBUG oslo_concurrency.lockutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Acquiring lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.562 248514 DEBUG oslo_concurrency.lockutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.562 248514 DEBUG oslo_concurrency.lockutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.564 248514 DEBUG nova.virt.libvirt.vif [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:15:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1146892662',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1146892662',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-114689266',id=15,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='081c046ea2ea42ee8e09ed5de2e8db81',ramdisk_id='',reservation_id='r-ukx38l09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssocia
tionNegativeTestJSON-996250676',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-996250676-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:15:56Z,user_data=None,user_id='d06ea0a7e1b040b993e05a34b436154d',uuid=d7894b40-fff7-4226-9a51-17dc5fe4bb30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "address": "fa:16:3e:80:d9:f4", "network": {"id": "5f43148c-6790-47ca-9335-d071d9be9a23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-87276096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "081c046ea2ea42ee8e09ed5de2e8db81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fe9517-3d", "ovs_interfaceid": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.565 248514 DEBUG nova.network.os_vif_util [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Converting VIF {"id": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "address": "fa:16:3e:80:d9:f4", "network": {"id": "5f43148c-6790-47ca-9335-d071d9be9a23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-87276096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "081c046ea2ea42ee8e09ed5de2e8db81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fe9517-3d", "ovs_interfaceid": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.566 248514 DEBUG nova.network.os_vif_util [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:d9:f4,bridge_name='br-int',has_traffic_filtering=True,id=99fe9517-3d12-4555-a9f8-2237cc2213c5,network=Network(5f43148c-6790-47ca-9335-d071d9be9a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fe9517-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.566 248514 DEBUG os_vif [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:d9:f4,bridge_name='br-int',has_traffic_filtering=True,id=99fe9517-3d12-4555-a9f8-2237cc2213c5,network=Network(5f43148c-6790-47ca-9335-d071d9be9a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fe9517-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.567 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.568 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.568 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.574 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.574 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99fe9517-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.575 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap99fe9517-3d, col_values=(('external_ids', {'iface-id': '99fe9517-3d12-4555-a9f8-2237cc2213c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:d9:f4', 'vm-uuid': 'd7894b40-fff7-4226-9a51-17dc5fe4bb30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.578 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:07 np0005558241 NetworkManager[50376]: <info>  [1765613767.5782] manager: (tap99fe9517-3d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.580 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.586 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.588 248514 INFO os_vif [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:d9:f4,bridge_name='br-int',has_traffic_filtering=True,id=99fe9517-3d12-4555-a9f8-2237cc2213c5,network=Network(5f43148c-6790-47ca-9335-d071d9be9a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fe9517-3d')#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.670 248514 DEBUG nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.671 248514 DEBUG nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.671 248514 DEBUG nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] No VIF found with MAC fa:16:3e:80:d9:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.671 248514 INFO nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Using config drive#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.694 248514 DEBUG nova.storage.rbd_utils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] rbd image d7894b40-fff7-4226-9a51-17dc5fe4bb30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.734 248514 INFO nova.virt.libvirt.driver [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Deleting instance files /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f_del#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.735 248514 INFO nova.virt.libvirt.driver [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Deletion of /var/lib/nova/instances/b0993cc2-f55f-4847-84c6-dd3ab57cc25f_del complete#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.877 248514 INFO nova.compute.manager [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.878 248514 DEBUG oslo.service.loopingcall [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.878 248514 DEBUG nova.compute.manager [-] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:16:07 np0005558241 nova_compute[248510]: 2025-12-13 08:16:07.879 248514 DEBUG nova.network.neutron [-] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.128 248514 DEBUG nova.network.neutron [-] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.213 248514 DEBUG nova.network.neutron [-] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.233 248514 INFO nova.compute.manager [-] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Took 0.35 seconds to deallocate network for instance.#033[00m
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.261 248514 INFO nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Creating config drive at /var/lib/nova/instances/d7894b40-fff7-4226-9a51-17dc5fe4bb30/disk.config#033[00m
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.266 248514 DEBUG oslo_concurrency.processutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d7894b40-fff7-4226-9a51-17dc5fe4bb30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphrquno0l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.293 248514 DEBUG oslo_concurrency.lockutils [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.294 248514 DEBUG oslo_concurrency.lockutils [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.358 248514 DEBUG nova.network.neutron [req-ba3843a2-d2a3-47aa-9664-1c8f28ce828f req-6b86734b-d655-41f1-be7e-5cad496fff20 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Updated VIF entry in instance network info cache for port 99fe9517-3d12-4555-a9f8-2237cc2213c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.359 248514 DEBUG nova.network.neutron [req-ba3843a2-d2a3-47aa-9664-1c8f28ce828f req-6b86734b-d655-41f1-be7e-5cad496fff20 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Updating instance_info_cache with network_info: [{"id": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "address": "fa:16:3e:80:d9:f4", "network": {"id": "5f43148c-6790-47ca-9335-d071d9be9a23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-87276096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "081c046ea2ea42ee8e09ed5de2e8db81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fe9517-3d", "ovs_interfaceid": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.381 248514 DEBUG oslo_concurrency.processutils [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.406 248514 DEBUG oslo_concurrency.processutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d7894b40-fff7-4226-9a51-17dc5fe4bb30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphrquno0l" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.408 248514 DEBUG oslo_concurrency.lockutils [req-ba3843a2-d2a3-47aa-9664-1c8f28ce828f req-6b86734b-d655-41f1-be7e-5cad496fff20 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d7894b40-fff7-4226-9a51-17dc5fe4bb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.434 248514 DEBUG nova.storage.rbd_utils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] rbd image d7894b40-fff7-4226-9a51-17dc5fe4bb30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.438 248514 DEBUG oslo_concurrency.processutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d7894b40-fff7-4226-9a51-17dc5fe4bb30/disk.config d7894b40-fff7-4226-9a51-17dc5fe4bb30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.583 248514 DEBUG oslo_concurrency.processutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d7894b40-fff7-4226-9a51-17dc5fe4bb30/disk.config d7894b40-fff7-4226-9a51-17dc5fe4bb30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.585 248514 INFO nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Deleting local config drive /var/lib/nova/instances/d7894b40-fff7-4226-9a51-17dc5fe4bb30/disk.config because it was imported into RBD.#033[00m
Dec 13 03:16:08 np0005558241 kernel: tap99fe9517-3d: entered promiscuous mode
Dec 13 03:16:08 np0005558241 NetworkManager[50376]: <info>  [1765613768.6405] manager: (tap99fe9517-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Dec 13 03:16:08 np0005558241 ovn_controller[148476]: 2025-12-13T08:16:08Z|00045|binding|INFO|Claiming lport 99fe9517-3d12-4555-a9f8-2237cc2213c5 for this chassis.
Dec 13 03:16:08 np0005558241 ovn_controller[148476]: 2025-12-13T08:16:08Z|00046|binding|INFO|99fe9517-3d12-4555-a9f8-2237cc2213c5: Claiming fa:16:3e:80:d9:f4 10.100.0.14
Dec 13 03:16:08 np0005558241 systemd-udevd[269114]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.644 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.655 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:08.660 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:d9:f4 10.100.0.14'], port_security=['fa:16:3e:80:d9:f4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd7894b40-fff7-4226-9a51-17dc5fe4bb30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f43148c-6790-47ca-9335-d071d9be9a23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '081c046ea2ea42ee8e09ed5de2e8db81', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'de101059-5f1e-4690-862b-b4a45d904f8b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0b66f82-4270-4981-ae10-dd3083c40575, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=99fe9517-3d12-4555-a9f8-2237cc2213c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:16:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:08.662 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 99fe9517-3d12-4555-a9f8-2237cc2213c5 in datapath 5f43148c-6790-47ca-9335-d071d9be9a23 bound to our chassis#033[00m
Dec 13 03:16:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:08.664 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5f43148c-6790-47ca-9335-d071d9be9a23#033[00m
Dec 13 03:16:08 np0005558241 NetworkManager[50376]: <info>  [1765613768.6649] device (tap99fe9517-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:16:08 np0005558241 NetworkManager[50376]: <info>  [1765613768.6657] device (tap99fe9517-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:16:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:16:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:08.682 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3abbf188-ca9c-438f-a6a5-0e18be7324a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:08.684 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5f43148c-61 in ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:16:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:08.686 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5f43148c-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:16:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:08.687 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf46554-a20a-40d4-8514-c2fec1ad0441]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:08 np0005558241 systemd-machined[210538]: New machine qemu-17-instance-0000000f.
Dec 13 03:16:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:08.688 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ae82d418-cb2e-4931-881c-3c4fdbf43299]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:08.704 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[9efb13e9-40f7-4b36-9b26-2a8bc37115d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:08 np0005558241 systemd[1]: Started Virtual Machine qemu-17-instance-0000000f.
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.726 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:08 np0005558241 ovn_controller[148476]: 2025-12-13T08:16:08Z|00047|binding|INFO|Setting lport 99fe9517-3d12-4555-a9f8-2237cc2213c5 ovn-installed in OVS
Dec 13 03:16:08 np0005558241 ovn_controller[148476]: 2025-12-13T08:16:08Z|00048|binding|INFO|Setting lport 99fe9517-3d12-4555-a9f8-2237cc2213c5 up in Southbound
Dec 13 03:16:08 np0005558241 nova_compute[248510]: 2025-12-13 08:16:08.729 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:08.732 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[16a18d80-6533-43dd-95e8-c5c013577774]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:08.766 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c93b8ba0-e7a7-4a53-ae8d-0b1ba1bf4c98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:08 np0005558241 NetworkManager[50376]: <info>  [1765613768.7725] manager: (tap5f43148c-60): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Dec 13 03:16:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:08.773 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[61a5e84d-94df-48fd-bcc2-f9825bfb6aab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 123 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 3.6 MiB/s wr, 106 op/s
Dec 13 03:16:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:08.803 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0510b2ff-7bce-41be-bbe6-8f89e9a64712]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:08.807 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[12e0ab98-6776-4906-820e-2508cc395506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:08 np0005558241 NetworkManager[50376]: <info>  [1765613768.8301] device (tap5f43148c-60): carrier: link connected
Dec 13 03:16:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:08.837 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[07a227b2-04bf-4a99-83b1-393eca218c25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:08.856 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7c999aa2-1324-4e87-9cab-f3ed99431357]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5f43148c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:d8:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634604, 'reachable_time': 16664, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269264, 'error': None, 'target': 'ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:08.874 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[269b3975-b7db-4a68-8942-080e83ec2933]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:d81c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634604, 'tstamp': 634604}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269265, 'error': None, 'target': 'ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:08.896 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ffcc57-d6ae-49b7-a963-238a38976d0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5f43148c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:d8:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634604, 'reachable_time': 16664, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269266, 'error': None, 'target': 'ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:08.943 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ec5aafee-1a99-4f50-b062-321c1c5ee79d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:16:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3942857617' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:16:09 np0005558241 nova_compute[248510]: 2025-12-13 08:16:09.007 248514 DEBUG oslo_concurrency.processutils [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:16:09 np0005558241 nova_compute[248510]: 2025-12-13 08:16:09.016 248514 DEBUG nova.compute.provider_tree [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:09.018 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[56499bbe-8918-4911-9076-beb5a9634470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:09.020 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f43148c-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:09.021 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:09.021 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5f43148c-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:16:09 np0005558241 nova_compute[248510]: 2025-12-13 08:16:09.023 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:09 np0005558241 NetworkManager[50376]: <info>  [1765613769.0244] manager: (tap5f43148c-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Dec 13 03:16:09 np0005558241 kernel: tap5f43148c-60: entered promiscuous mode
Dec 13 03:16:09 np0005558241 nova_compute[248510]: 2025-12-13 08:16:09.026 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:09.027 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5f43148c-60, col_values=(('external_ids', {'iface-id': 'ef0b90f4-1740-443f-9a11-5ea86a0f958c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:16:09 np0005558241 nova_compute[248510]: 2025-12-13 08:16:09.028 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:16:09Z|00049|binding|INFO|Releasing lport ef0b90f4-1740-443f-9a11-5ea86a0f958c from this chassis (sb_readonly=0)
Dec 13 03:16:09 np0005558241 nova_compute[248510]: 2025-12-13 08:16:09.044 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:09.046 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5f43148c-6790-47ca-9335-d071d9be9a23.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5f43148c-6790-47ca-9335-d071d9be9a23.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:09.047 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[709fea05-5dbb-4792-b55b-2bffb3050508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:09.047 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-5f43148c-6790-47ca-9335-d071d9be9a23
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/5f43148c-6790-47ca-9335-d071d9be9a23.pid.haproxy
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 5f43148c-6790-47ca-9335-d071d9be9a23
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:16:09 np0005558241 nova_compute[248510]: 2025-12-13 08:16:09.048 248514 DEBUG nova.scheduler.client.report [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:16:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:09.048 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23', 'env', 'PROCESS_TAG=haproxy-5f43148c-6790-47ca-9335-d071d9be9a23', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5f43148c-6790-47ca-9335-d071d9be9a23.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:16:09 np0005558241 nova_compute[248510]: 2025-12-13 08:16:09.084 248514 DEBUG oslo_concurrency.lockutils [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:16:09 np0005558241 nova_compute[248510]: 2025-12-13 08:16:09.110 248514 INFO nova.scheduler.client.report [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Deleted allocations for instance b0993cc2-f55f-4847-84c6-dd3ab57cc25f#033[00m
Dec 13 03:16:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:16:09
Dec 13 03:16:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:16:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:16:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.control', 'default.rgw.meta', '.mgr', '.rgw.root']
Dec 13 03:16:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:16:09 np0005558241 nova_compute[248510]: 2025-12-13 08:16:09.191 248514 DEBUG oslo_concurrency.lockutils [None req-1013b546-b249-434d-98a6-76f1cba6bc90 227d115378154452ad3fe25dae04837b 39b72cde121b4531a2b743b669122ebd - - default default] Lock "b0993cc2-f55f-4847-84c6-dd3ab57cc25f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:16:09 np0005558241 nova_compute[248510]: 2025-12-13 08:16:09.211 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613769.2101939, d7894b40-fff7-4226-9a51-17dc5fe4bb30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:16:09 np0005558241 nova_compute[248510]: 2025-12-13 08:16:09.212 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] VM Started (Lifecycle Event)#033[00m
Dec 13 03:16:09 np0005558241 nova_compute[248510]: 2025-12-13 08:16:09.233 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:16:09 np0005558241 nova_compute[248510]: 2025-12-13 08:16:09.238 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613769.2104077, d7894b40-fff7-4226-9a51-17dc5fe4bb30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:16:09 np0005558241 nova_compute[248510]: 2025-12-13 08:16:09.239 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:16:09 np0005558241 nova_compute[248510]: 2025-12-13 08:16:09.277 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:16:09 np0005558241 nova_compute[248510]: 2025-12-13 08:16:09.281 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:16:09 np0005558241 nova_compute[248510]: 2025-12-13 08:16:09.311 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:16:09 np0005558241 podman[269343]: 2025-12-13 08:16:09.465145236 +0000 UTC m=+0.056505353 container create cde7a50b7b9f9bea6347cfc2be038391b2611ec4644f9c143f821d42884b9f6b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 03:16:09 np0005558241 systemd[1]: Started libpod-conmon-cde7a50b7b9f9bea6347cfc2be038391b2611ec4644f9c143f821d42884b9f6b.scope.
Dec 13 03:16:09 np0005558241 podman[269343]: 2025-12-13 08:16:09.435354822 +0000 UTC m=+0.026714949 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:16:09 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:16:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d02a48ea041a1c150833fab56b2f50018a0dcdfc19ce4143cada06390b74edf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:16:09 np0005558241 podman[269343]: 2025-12-13 08:16:09.554457266 +0000 UTC m=+0.145817393 container init cde7a50b7b9f9bea6347cfc2be038391b2611ec4644f9c143f821d42884b9f6b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 03:16:09 np0005558241 podman[269343]: 2025-12-13 08:16:09.560434943 +0000 UTC m=+0.151795050 container start cde7a50b7b9f9bea6347cfc2be038391b2611ec4644f9c143f821d42884b9f6b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:16:09 np0005558241 neutron-haproxy-ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23[269358]: [NOTICE]   (269362) : New worker (269364) forked
Dec 13 03:16:09 np0005558241 neutron-haproxy-ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23[269358]: [NOTICE]   (269362) : Loading success.
Dec 13 03:16:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:16:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:16:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:16:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:16:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:16:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:16:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:16:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:16:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:16:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:16:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:16:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:16:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:16:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:16:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:16:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:16:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1491: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 170 op/s
Dec 13 03:16:11 np0005558241 nova_compute[248510]: 2025-12-13 08:16:11.386 248514 DEBUG oslo_concurrency.lockutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Acquiring lock "5e65e6d8-81fc-412d-9d08-d87023661fda" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:16:11 np0005558241 nova_compute[248510]: 2025-12-13 08:16:11.386 248514 DEBUG oslo_concurrency.lockutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Lock "5e65e6d8-81fc-412d-9d08-d87023661fda" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:16:11 np0005558241 nova_compute[248510]: 2025-12-13 08:16:11.416 248514 DEBUG nova.compute.manager [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:16:11 np0005558241 nova_compute[248510]: 2025-12-13 08:16:11.548 248514 DEBUG oslo_concurrency.lockutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:16:11 np0005558241 nova_compute[248510]: 2025-12-13 08:16:11.550 248514 DEBUG oslo_concurrency.lockutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:16:11 np0005558241 nova_compute[248510]: 2025-12-13 08:16:11.561 248514 DEBUG nova.virt.hardware [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:16:11 np0005558241 nova_compute[248510]: 2025-12-13 08:16:11.561 248514 INFO nova.compute.claims [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:16:11 np0005558241 nova_compute[248510]: 2025-12-13 08:16:11.741 248514 DEBUG oslo_concurrency.processutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.244 248514 DEBUG nova.compute.manager [req-07be3ea8-eb44-4baa-b3fa-e8dd5592113f req-ad02d4ac-6833-454d-9f7a-b1520fc5b044 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Received event network-vif-plugged-99fe9517-3d12-4555-a9f8-2237cc2213c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.246 248514 DEBUG oslo_concurrency.lockutils [req-07be3ea8-eb44-4baa-b3fa-e8dd5592113f req-ad02d4ac-6833-454d-9f7a-b1520fc5b044 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.247 248514 DEBUG oslo_concurrency.lockutils [req-07be3ea8-eb44-4baa-b3fa-e8dd5592113f req-ad02d4ac-6833-454d-9f7a-b1520fc5b044 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.247 248514 DEBUG oslo_concurrency.lockutils [req-07be3ea8-eb44-4baa-b3fa-e8dd5592113f req-ad02d4ac-6833-454d-9f7a-b1520fc5b044 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.247 248514 DEBUG nova.compute.manager [req-07be3ea8-eb44-4baa-b3fa-e8dd5592113f req-ad02d4ac-6833-454d-9f7a-b1520fc5b044 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Processing event network-vif-plugged-99fe9517-3d12-4555-a9f8-2237cc2213c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.248 248514 DEBUG nova.compute.manager [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.270 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613772.2604156, d7894b40-fff7-4226-9a51-17dc5fe4bb30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.271 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.274 248514 DEBUG nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.291 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.304 248514 INFO nova.virt.libvirt.driver [-] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Instance spawned successfully.#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.305 248514 DEBUG nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.307 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:16:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:16:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3164259002' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.341 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.345 248514 DEBUG nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.345 248514 DEBUG nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.346 248514 DEBUG nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.346 248514 DEBUG nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.347 248514 DEBUG nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.347 248514 DEBUG nova.virt.libvirt.driver [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.352 248514 DEBUG oslo_concurrency.processutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.358 248514 DEBUG nova.compute.provider_tree [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.387 248514 DEBUG nova.scheduler.client.report [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.426 248514 DEBUG oslo_concurrency.lockutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.429 248514 DEBUG nova.compute.manager [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.434 248514 INFO nova.compute.manager [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Took 15.87 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.434 248514 DEBUG nova.compute.manager [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.526 248514 INFO nova.compute.manager [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Took 17.82 seconds to build instance.#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.578 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.583 248514 DEBUG nova.compute.manager [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.583 248514 DEBUG nova.network.neutron [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.611 248514 DEBUG oslo_concurrency.lockutils [None req-b155c5c9-2ea7-4a67-9052-3aabb1f4c6a0 d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.632 248514 INFO nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.717 248514 DEBUG nova.compute.manager [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:16:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 321 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 196 KiB/s wr, 120 op/s
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.853 248514 DEBUG nova.compute.manager [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.854 248514 DEBUG nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.855 248514 INFO nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Creating image(s)#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.876 248514 DEBUG nova.storage.rbd_utils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] rbd image 5e65e6d8-81fc-412d-9d08-d87023661fda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.903 248514 DEBUG nova.storage.rbd_utils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] rbd image 5e65e6d8-81fc-412d-9d08-d87023661fda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.930 248514 DEBUG nova.storage.rbd_utils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] rbd image 5e65e6d8-81fc-412d-9d08-d87023661fda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.934 248514 DEBUG oslo_concurrency.processutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:16:12 np0005558241 nova_compute[248510]: 2025-12-13 08:16:12.970 248514 DEBUG nova.policy [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ec32d06bed00468da4407dd9585407a0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '487f5e19eb9e49e3bbae1c23af285690', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:16:13 np0005558241 nova_compute[248510]: 2025-12-13 08:16:13.004 248514 DEBUG oslo_concurrency.processutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:16:13 np0005558241 nova_compute[248510]: 2025-12-13 08:16:13.005 248514 DEBUG oslo_concurrency.lockutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:16:13 np0005558241 nova_compute[248510]: 2025-12-13 08:16:13.006 248514 DEBUG oslo_concurrency.lockutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:16:13 np0005558241 nova_compute[248510]: 2025-12-13 08:16:13.006 248514 DEBUG oslo_concurrency.lockutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:16:13 np0005558241 nova_compute[248510]: 2025-12-13 08:16:13.030 248514 DEBUG nova.storage.rbd_utils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] rbd image 5e65e6d8-81fc-412d-9d08-d87023661fda_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:16:13 np0005558241 nova_compute[248510]: 2025-12-13 08:16:13.034 248514 DEBUG oslo_concurrency.processutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 5e65e6d8-81fc-412d-9d08-d87023661fda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:16:13 np0005558241 nova_compute[248510]: 2025-12-13 08:16:13.631 248514 DEBUG oslo_concurrency.processutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 5e65e6d8-81fc-412d-9d08-d87023661fda_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:16:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:16:13 np0005558241 nova_compute[248510]: 2025-12-13 08:16:13.694 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:13 np0005558241 nova_compute[248510]: 2025-12-13 08:16:13.736 248514 DEBUG nova.storage.rbd_utils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] resizing rbd image 5e65e6d8-81fc-412d-9d08-d87023661fda_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:16:13 np0005558241 nova_compute[248510]: 2025-12-13 08:16:13.845 248514 DEBUG nova.objects.instance [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Lazy-loading 'migration_context' on Instance uuid 5e65e6d8-81fc-412d-9d08-d87023661fda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:16:13 np0005558241 nova_compute[248510]: 2025-12-13 08:16:13.870 248514 DEBUG nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:16:13 np0005558241 nova_compute[248510]: 2025-12-13 08:16:13.871 248514 DEBUG nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Ensure instance console log exists: /var/lib/nova/instances/5e65e6d8-81fc-412d-9d08-d87023661fda/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:16:13 np0005558241 nova_compute[248510]: 2025-12-13 08:16:13.872 248514 DEBUG oslo_concurrency.lockutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:16:13 np0005558241 nova_compute[248510]: 2025-12-13 08:16:13.872 248514 DEBUG oslo_concurrency.lockutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:16:13 np0005558241 nova_compute[248510]: 2025-12-13 08:16:13.872 248514 DEBUG oslo_concurrency.lockutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:16:14 np0005558241 nova_compute[248510]: 2025-12-13 08:16:14.498 248514 DEBUG nova.network.neutron [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Successfully created port: 44171d97-ddef-4b41-9740-19635fa87c3a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:16:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 112 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 893 KiB/s wr, 197 op/s
Dec 13 03:16:14 np0005558241 nova_compute[248510]: 2025-12-13 08:16:14.932 248514 DEBUG nova.compute.manager [req-9448b552-3a96-4a02-a7f4-246ac0f012ef req-9693bf36-b3ee-4598-83a3-0cb3197f636d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Received event network-vif-plugged-99fe9517-3d12-4555-a9f8-2237cc2213c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:16:14 np0005558241 nova_compute[248510]: 2025-12-13 08:16:14.933 248514 DEBUG oslo_concurrency.lockutils [req-9448b552-3a96-4a02-a7f4-246ac0f012ef req-9693bf36-b3ee-4598-83a3-0cb3197f636d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:16:14 np0005558241 nova_compute[248510]: 2025-12-13 08:16:14.933 248514 DEBUG oslo_concurrency.lockutils [req-9448b552-3a96-4a02-a7f4-246ac0f012ef req-9693bf36-b3ee-4598-83a3-0cb3197f636d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:16:14 np0005558241 nova_compute[248510]: 2025-12-13 08:16:14.934 248514 DEBUG oslo_concurrency.lockutils [req-9448b552-3a96-4a02-a7f4-246ac0f012ef req-9693bf36-b3ee-4598-83a3-0cb3197f636d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:16:14 np0005558241 nova_compute[248510]: 2025-12-13 08:16:14.934 248514 DEBUG nova.compute.manager [req-9448b552-3a96-4a02-a7f4-246ac0f012ef req-9693bf36-b3ee-4598-83a3-0cb3197f636d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] No waiting events found dispatching network-vif-plugged-99fe9517-3d12-4555-a9f8-2237cc2213c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:16:14 np0005558241 nova_compute[248510]: 2025-12-13 08:16:14.934 248514 WARNING nova.compute.manager [req-9448b552-3a96-4a02-a7f4-246ac0f012ef req-9693bf36-b3ee-4598-83a3-0cb3197f636d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Received unexpected event network-vif-plugged-99fe9517-3d12-4555-a9f8-2237cc2213c5 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:16:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:16:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3865941358' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:16:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:16:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3865941358' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:16:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 112 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 698 KiB/s wr, 166 op/s
Dec 13 03:16:17 np0005558241 nova_compute[248510]: 2025-12-13 08:16:17.156 248514 DEBUG nova.network.neutron [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Successfully updated port: 44171d97-ddef-4b41-9740-19635fa87c3a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:16:17 np0005558241 nova_compute[248510]: 2025-12-13 08:16:17.188 248514 DEBUG oslo_concurrency.lockutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Acquiring lock "refresh_cache-5e65e6d8-81fc-412d-9d08-d87023661fda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:16:17 np0005558241 nova_compute[248510]: 2025-12-13 08:16:17.188 248514 DEBUG oslo_concurrency.lockutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Acquired lock "refresh_cache-5e65e6d8-81fc-412d-9d08-d87023661fda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:16:17 np0005558241 nova_compute[248510]: 2025-12-13 08:16:17.188 248514 DEBUG nova.network.neutron [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:16:17 np0005558241 nova_compute[248510]: 2025-12-13 08:16:17.506 248514 DEBUG nova.network.neutron [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:16:17 np0005558241 nova_compute[248510]: 2025-12-13 08:16:17.580 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:18 np0005558241 nova_compute[248510]: 2025-12-13 08:16:18.271 248514 DEBUG nova.compute.manager [req-00ad32f2-40d2-4af1-98ac-7ffb3655b003 req-a71fcda8-15ed-462d-b246-c22e5d12ea13 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Received event network-changed-44171d97-ddef-4b41-9740-19635fa87c3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:16:18 np0005558241 nova_compute[248510]: 2025-12-13 08:16:18.272 248514 DEBUG nova.compute.manager [req-00ad32f2-40d2-4af1-98ac-7ffb3655b003 req-a71fcda8-15ed-462d-b246-c22e5d12ea13 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Refreshing instance network info cache due to event network-changed-44171d97-ddef-4b41-9740-19635fa87c3a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:16:18 np0005558241 nova_compute[248510]: 2025-12-13 08:16:18.273 248514 DEBUG oslo_concurrency.lockutils [req-00ad32f2-40d2-4af1-98ac-7ffb3655b003 req-a71fcda8-15ed-462d-b246-c22e5d12ea13 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-5e65e6d8-81fc-412d-9d08-d87023661fda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:16:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:16:18 np0005558241 nova_compute[248510]: 2025-12-13 08:16:18.687 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1495: 321 pgs: 321 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 188 op/s
Dec 13 03:16:18 np0005558241 nova_compute[248510]: 2025-12-13 08:16:18.974 248514 DEBUG nova.network.neutron [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Updating instance_info_cache with network_info: [{"id": "44171d97-ddef-4b41-9740-19635fa87c3a", "address": "fa:16:3e:52:8d:5c", "network": {"id": "f47cfa0b-b3fe-477a-a9e8-75d92379981e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1799598885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "487f5e19eb9e49e3bbae1c23af285690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44171d97-dd", "ovs_interfaceid": "44171d97-ddef-4b41-9740-19635fa87c3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.006 248514 DEBUG oslo_concurrency.lockutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Releasing lock "refresh_cache-5e65e6d8-81fc-412d-9d08-d87023661fda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.007 248514 DEBUG nova.compute.manager [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Instance network_info: |[{"id": "44171d97-ddef-4b41-9740-19635fa87c3a", "address": "fa:16:3e:52:8d:5c", "network": {"id": "f47cfa0b-b3fe-477a-a9e8-75d92379981e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1799598885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "487f5e19eb9e49e3bbae1c23af285690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44171d97-dd", "ovs_interfaceid": "44171d97-ddef-4b41-9740-19635fa87c3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.007 248514 DEBUG oslo_concurrency.lockutils [req-00ad32f2-40d2-4af1-98ac-7ffb3655b003 req-a71fcda8-15ed-462d-b246-c22e5d12ea13 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-5e65e6d8-81fc-412d-9d08-d87023661fda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.008 248514 DEBUG nova.network.neutron [req-00ad32f2-40d2-4af1-98ac-7ffb3655b003 req-a71fcda8-15ed-462d-b246-c22e5d12ea13 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Refreshing network info cache for port 44171d97-ddef-4b41-9740-19635fa87c3a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.011 248514 DEBUG nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Start _get_guest_xml network_info=[{"id": "44171d97-ddef-4b41-9740-19635fa87c3a", "address": "fa:16:3e:52:8d:5c", "network": {"id": "f47cfa0b-b3fe-477a-a9e8-75d92379981e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1799598885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "487f5e19eb9e49e3bbae1c23af285690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44171d97-dd", "ovs_interfaceid": "44171d97-ddef-4b41-9740-19635fa87c3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.017 248514 WARNING nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.022 248514 DEBUG nova.virt.libvirt.host [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.023 248514 DEBUG nova.virt.libvirt.host [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.027 248514 DEBUG nova.virt.libvirt.host [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.028 248514 DEBUG nova.virt.libvirt.host [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.028 248514 DEBUG nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.028 248514 DEBUG nova.virt.hardware [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.029 248514 DEBUG nova.virt.hardware [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.029 248514 DEBUG nova.virt.hardware [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.029 248514 DEBUG nova.virt.hardware [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.029 248514 DEBUG nova.virt.hardware [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.030 248514 DEBUG nova.virt.hardware [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.031 248514 DEBUG nova.virt.hardware [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.031 248514 DEBUG nova.virt.hardware [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.031 248514 DEBUG nova.virt.hardware [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.031 248514 DEBUG nova.virt.hardware [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.032 248514 DEBUG nova.virt.hardware [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.034 248514 DEBUG oslo_concurrency.processutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:16:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:16:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2234292444' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.608 248514 DEBUG oslo_concurrency.processutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.630 248514 DEBUG nova.storage.rbd_utils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] rbd image 5e65e6d8-81fc-412d-9d08-d87023661fda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:16:19 np0005558241 nova_compute[248510]: 2025-12-13 08:16:19.634 248514 DEBUG oslo_concurrency.processutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:16:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:16:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/267886290' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.192 248514 DEBUG oslo_concurrency.processutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.195 248514 DEBUG nova.virt.libvirt.vif [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:16:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1254373773',display_name='tempest-ImagesOneServerTestJSON-server-1254373773',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1254373773',id=16,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='487f5e19eb9e49e3bbae1c23af285690',ramdisk_id='',reservation_id='r-0abwyfh9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1469548079',owner_user_name='tempest-ImagesOneServerTestJSON-1469548079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:16:12Z,user_data=None,user_id='ec32d06bed00468da4407dd9585407a0',uuid=5e65e6d8-81fc-412d-9d08-d87023661fda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44171d97-ddef-4b41-9740-19635fa87c3a", "address": "fa:16:3e:52:8d:5c", "network": {"id": "f47cfa0b-b3fe-477a-a9e8-75d92379981e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1799598885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "487f5e19eb9e49e3bbae1c23af285690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44171d97-dd", "ovs_interfaceid": "44171d97-ddef-4b41-9740-19635fa87c3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.195 248514 DEBUG nova.network.os_vif_util [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Converting VIF {"id": "44171d97-ddef-4b41-9740-19635fa87c3a", "address": "fa:16:3e:52:8d:5c", "network": {"id": "f47cfa0b-b3fe-477a-a9e8-75d92379981e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1799598885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "487f5e19eb9e49e3bbae1c23af285690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44171d97-dd", "ovs_interfaceid": "44171d97-ddef-4b41-9740-19635fa87c3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.196 248514 DEBUG nova.network.os_vif_util [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:8d:5c,bridge_name='br-int',has_traffic_filtering=True,id=44171d97-ddef-4b41-9740-19635fa87c3a,network=Network(f47cfa0b-b3fe-477a-a9e8-75d92379981e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44171d97-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.198 248514 DEBUG nova.objects.instance [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5e65e6d8-81fc-412d-9d08-d87023661fda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.223 248514 DEBUG nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:16:20 np0005558241 nova_compute[248510]:  <uuid>5e65e6d8-81fc-412d-9d08-d87023661fda</uuid>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:  <name>instance-00000010</name>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <nova:name>tempest-ImagesOneServerTestJSON-server-1254373773</nova:name>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:16:19</nova:creationTime>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:16:20 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:        <nova:user uuid="ec32d06bed00468da4407dd9585407a0">tempest-ImagesOneServerTestJSON-1469548079-project-member</nova:user>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:        <nova:project uuid="487f5e19eb9e49e3bbae1c23af285690">tempest-ImagesOneServerTestJSON-1469548079</nova:project>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:        <nova:port uuid="44171d97-ddef-4b41-9740-19635fa87c3a">
Dec 13 03:16:20 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <entry name="serial">5e65e6d8-81fc-412d-9d08-d87023661fda</entry>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <entry name="uuid">5e65e6d8-81fc-412d-9d08-d87023661fda</entry>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/5e65e6d8-81fc-412d-9d08-d87023661fda_disk">
Dec 13 03:16:20 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:16:20 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/5e65e6d8-81fc-412d-9d08-d87023661fda_disk.config">
Dec 13 03:16:20 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:16:20 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:52:8d:5c"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <target dev="tap44171d97-dd"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/5e65e6d8-81fc-412d-9d08-d87023661fda/console.log" append="off"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:16:20 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:16:20 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:16:20 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:16:20 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.224 248514 DEBUG nova.compute.manager [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Preparing to wait for external event network-vif-plugged-44171d97-ddef-4b41-9740-19635fa87c3a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.225 248514 DEBUG oslo_concurrency.lockutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Acquiring lock "5e65e6d8-81fc-412d-9d08-d87023661fda-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.226 248514 DEBUG oslo_concurrency.lockutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Lock "5e65e6d8-81fc-412d-9d08-d87023661fda-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.226 248514 DEBUG oslo_concurrency.lockutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Lock "5e65e6d8-81fc-412d-9d08-d87023661fda-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.226 248514 DEBUG nova.virt.libvirt.vif [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:16:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1254373773',display_name='tempest-ImagesOneServerTestJSON-server-1254373773',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1254373773',id=16,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='487f5e19eb9e49e3bbae1c23af285690',ramdisk_id='',reservation_id='r-0abwyfh9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1469548079',owner_user_name='tempest-ImagesOneServerTestJSON-1469548079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:16:12Z,user_data=None,user_id='ec32d06bed00468da4407dd9585407a0',uuid=5e65e6d8-81fc-412d-9d08-d87023661fda,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44171d97-ddef-4b41-9740-19635fa87c3a", "address": "fa:16:3e:52:8d:5c", "network": {"id": "f47cfa0b-b3fe-477a-a9e8-75d92379981e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1799598885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "487f5e19eb9e49e3bbae1c23af285690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44171d97-dd", "ovs_interfaceid": "44171d97-ddef-4b41-9740-19635fa87c3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.227 248514 DEBUG nova.network.os_vif_util [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Converting VIF {"id": "44171d97-ddef-4b41-9740-19635fa87c3a", "address": "fa:16:3e:52:8d:5c", "network": {"id": "f47cfa0b-b3fe-477a-a9e8-75d92379981e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1799598885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "487f5e19eb9e49e3bbae1c23af285690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44171d97-dd", "ovs_interfaceid": "44171d97-ddef-4b41-9740-19635fa87c3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.228 248514 DEBUG nova.network.os_vif_util [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:8d:5c,bridge_name='br-int',has_traffic_filtering=True,id=44171d97-ddef-4b41-9740-19635fa87c3a,network=Network(f47cfa0b-b3fe-477a-a9e8-75d92379981e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44171d97-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.228 248514 DEBUG os_vif [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:8d:5c,bridge_name='br-int',has_traffic_filtering=True,id=44171d97-ddef-4b41-9740-19635fa87c3a,network=Network(f47cfa0b-b3fe-477a-a9e8-75d92379981e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44171d97-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.229 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.229 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.230 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.234 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.234 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44171d97-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.235 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap44171d97-dd, col_values=(('external_ids', {'iface-id': '44171d97-ddef-4b41-9740-19635fa87c3a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:8d:5c', 'vm-uuid': '5e65e6d8-81fc-412d-9d08-d87023661fda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:16:20 np0005558241 NetworkManager[50376]: <info>  [1765613780.2378] manager: (tap44171d97-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.239 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.245 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.246 248514 INFO os_vif [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:8d:5c,bridge_name='br-int',has_traffic_filtering=True,id=44171d97-ddef-4b41-9740-19635fa87c3a,network=Network(f47cfa0b-b3fe-477a-a9e8-75d92379981e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44171d97-dd')#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.326 248514 DEBUG nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.327 248514 DEBUG nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.327 248514 DEBUG nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] No VIF found with MAC fa:16:3e:52:8d:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.328 248514 INFO nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Using config drive#033[00m
Dec 13 03:16:20 np0005558241 nova_compute[248510]: 2025-12-13 08:16:20.346 248514 DEBUG nova.storage.rbd_utils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] rbd image 5e65e6d8-81fc-412d-9d08-d87023661fda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000697319695607537 of space, bias 1.0, pg target 0.2091959086822611 quantized to 32 (current 32)
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664356946588113 of space, bias 1.0, pg target 0.1999307083976434 quantized to 32 (current 32)
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.654297892451325e-07 of space, bias 4.0, pg target 0.001038515747094159 quantized to 16 (current 32)
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:16:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 176 op/s
Dec 13 03:16:21 np0005558241 nova_compute[248510]: 2025-12-13 08:16:21.988 248514 DEBUG nova.network.neutron [req-00ad32f2-40d2-4af1-98ac-7ffb3655b003 req-a71fcda8-15ed-462d-b246-c22e5d12ea13 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Updated VIF entry in instance network info cache for port 44171d97-ddef-4b41-9740-19635fa87c3a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:16:21 np0005558241 nova_compute[248510]: 2025-12-13 08:16:21.989 248514 DEBUG nova.network.neutron [req-00ad32f2-40d2-4af1-98ac-7ffb3655b003 req-a71fcda8-15ed-462d-b246-c22e5d12ea13 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Updating instance_info_cache with network_info: [{"id": "44171d97-ddef-4b41-9740-19635fa87c3a", "address": "fa:16:3e:52:8d:5c", "network": {"id": "f47cfa0b-b3fe-477a-a9e8-75d92379981e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1799598885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "487f5e19eb9e49e3bbae1c23af285690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44171d97-dd", "ovs_interfaceid": "44171d97-ddef-4b41-9740-19635fa87c3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:16:22 np0005558241 nova_compute[248510]: 2025-12-13 08:16:22.039 248514 DEBUG oslo_concurrency.lockutils [req-00ad32f2-40d2-4af1-98ac-7ffb3655b003 req-a71fcda8-15ed-462d-b246-c22e5d12ea13 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-5e65e6d8-81fc-412d-9d08-d87023661fda" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:16:22 np0005558241 nova_compute[248510]: 2025-12-13 08:16:22.407 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765613767.4065259, b0993cc2-f55f-4847-84c6-dd3ab57cc25f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:16:22 np0005558241 nova_compute[248510]: 2025-12-13 08:16:22.407 248514 INFO nova.compute.manager [-] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:16:22 np0005558241 nova_compute[248510]: 2025-12-13 08:16:22.608 248514 DEBUG nova.compute.manager [None req-439bbbdb-4f2a-4b55-972e-bf6b12a3bd0c - - - - - -] [instance: b0993cc2-f55f-4847-84c6-dd3ab57cc25f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:16:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1497: 321 pgs: 321 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Dec 13 03:16:22 np0005558241 nova_compute[248510]: 2025-12-13 08:16:22.903 248514 INFO nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Creating config drive at /var/lib/nova/instances/5e65e6d8-81fc-412d-9d08-d87023661fda/disk.config#033[00m
Dec 13 03:16:22 np0005558241 nova_compute[248510]: 2025-12-13 08:16:22.908 248514 DEBUG oslo_concurrency.processutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5e65e6d8-81fc-412d-9d08-d87023661fda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4vas7pa1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:16:23 np0005558241 nova_compute[248510]: 2025-12-13 08:16:23.040 248514 DEBUG oslo_concurrency.processutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5e65e6d8-81fc-412d-9d08-d87023661fda/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4vas7pa1" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:16:23 np0005558241 nova_compute[248510]: 2025-12-13 08:16:23.067 248514 DEBUG nova.storage.rbd_utils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] rbd image 5e65e6d8-81fc-412d-9d08-d87023661fda_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:16:23 np0005558241 nova_compute[248510]: 2025-12-13 08:16:23.073 248514 DEBUG oslo_concurrency.processutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5e65e6d8-81fc-412d-9d08-d87023661fda/disk.config 5e65e6d8-81fc-412d-9d08-d87023661fda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:16:23 np0005558241 nova_compute[248510]: 2025-12-13 08:16:23.227 248514 DEBUG oslo_concurrency.processutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5e65e6d8-81fc-412d-9d08-d87023661fda/disk.config 5e65e6d8-81fc-412d-9d08-d87023661fda_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:16:23 np0005558241 nova_compute[248510]: 2025-12-13 08:16:23.228 248514 INFO nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Deleting local config drive /var/lib/nova/instances/5e65e6d8-81fc-412d-9d08-d87023661fda/disk.config because it was imported into RBD.#033[00m
Dec 13 03:16:23 np0005558241 kernel: tap44171d97-dd: entered promiscuous mode
Dec 13 03:16:23 np0005558241 NetworkManager[50376]: <info>  [1765613783.2953] manager: (tap44171d97-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Dec 13 03:16:23 np0005558241 systemd-udevd[269697]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:16:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:16:23Z|00050|binding|INFO|Claiming lport 44171d97-ddef-4b41-9740-19635fa87c3a for this chassis.
Dec 13 03:16:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:16:23Z|00051|binding|INFO|44171d97-ddef-4b41-9740-19635fa87c3a: Claiming fa:16:3e:52:8d:5c 10.100.0.6
Dec 13 03:16:23 np0005558241 nova_compute[248510]: 2025-12-13 08:16:23.334 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:23 np0005558241 nova_compute[248510]: 2025-12-13 08:16:23.339 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:23 np0005558241 NetworkManager[50376]: <info>  [1765613783.3516] device (tap44171d97-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:16:23 np0005558241 NetworkManager[50376]: <info>  [1765613783.3527] device (tap44171d97-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.352 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:8d:5c 10.100.0.6'], port_security=['fa:16:3e:52:8d:5c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5e65e6d8-81fc-412d-9d08-d87023661fda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f47cfa0b-b3fe-477a-a9e8-75d92379981e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '487f5e19eb9e49e3bbae1c23af285690', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f4736b95-4339-401b-b0f3-e7f2b1cf7256', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51d7514f-435e-4096-96dd-36769afe18f3, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=44171d97-ddef-4b41-9740-19635fa87c3a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.354 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 44171d97-ddef-4b41-9740-19635fa87c3a in datapath f47cfa0b-b3fe-477a-a9e8-75d92379981e bound to our chassis#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.355 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f47cfa0b-b3fe-477a-a9e8-75d92379981e#033[00m
Dec 13 03:16:23 np0005558241 systemd-machined[210538]: New machine qemu-18-instance-00000010.
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.369 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d87d52e3-4690-462b-a04c-bc998441dbfb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.371 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf47cfa0b-b1 in ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.373 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf47cfa0b-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.373 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d86167e9-ce20-4d26-8ee7-0e7068a8f110]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.374 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a42d19ec-1f60-49e8-8e6d-67699ef88ffd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.387 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[ce23a4cf-12f6-4e0c-acd4-e513271b69a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:23 np0005558241 systemd[1]: Started Virtual Machine qemu-18-instance-00000010.
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.409 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[23f03cdf-1984-4f13-91ce-3df215802684]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:16:23Z|00052|binding|INFO|Setting lport 44171d97-ddef-4b41-9740-19635fa87c3a ovn-installed in OVS
Dec 13 03:16:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:16:23Z|00053|binding|INFO|Setting lport 44171d97-ddef-4b41-9740-19635fa87c3a up in Southbound
Dec 13 03:16:23 np0005558241 nova_compute[248510]: 2025-12-13 08:16:23.416 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.451 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8de5fc-bea3-4dba-bcb3-3ea29f47c8ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:23 np0005558241 NetworkManager[50376]: <info>  [1765613783.4613] manager: (tapf47cfa0b-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.460 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[82e75dd4-05df-4eb8-82a5-91c3eed41386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.508 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[64692d15-a288-4877-a77b-896b76ad0e42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.514 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[90c783e3-757f-47d8-947d-406d85055178]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:23 np0005558241 NetworkManager[50376]: <info>  [1765613783.5447] device (tapf47cfa0b-b0): carrier: link connected
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.548 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4f3e274c-2b84-4540-8d95-479b33aaa603]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.569 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd1bb06f-fbcf-4b3b-94aa-e1df30705354]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf47cfa0b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:8b:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636076, 'reachable_time': 18250, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269733, 'error': None, 'target': 'ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.592 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7bb45ee7-9202-4238-8dc1-2f4e58e3a99a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:8b91'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 636076, 'tstamp': 636076}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269734, 'error': None, 'target': 'ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.614 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2bccb3c7-1420-4d96-bdfb-d025f0249b91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf47cfa0b-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:8b:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636076, 'reachable_time': 18250, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269735, 'error': None, 'target': 'ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.655 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8c4232ec-397b-4a48-863c-b6a9c1dde75b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:16:23 np0005558241 nova_compute[248510]: 2025-12-13 08:16:23.688 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.750 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f95be0-8313-4938-8e4a-cc31f029f0e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.752 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf47cfa0b-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.753 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.753 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf47cfa0b-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:16:23 np0005558241 NetworkManager[50376]: <info>  [1765613783.7568] manager: (tapf47cfa0b-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Dec 13 03:16:23 np0005558241 kernel: tapf47cfa0b-b0: entered promiscuous mode
Dec 13 03:16:23 np0005558241 nova_compute[248510]: 2025-12-13 08:16:23.756 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.761 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf47cfa0b-b0, col_values=(('external_ids', {'iface-id': '529060b8-8f58-4200-8305-96486d94b419'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:16:23 np0005558241 nova_compute[248510]: 2025-12-13 08:16:23.764 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:16:23Z|00054|binding|INFO|Releasing lport 529060b8-8f58-4200-8305-96486d94b419 from this chassis (sb_readonly=0)
Dec 13 03:16:23 np0005558241 nova_compute[248510]: 2025-12-13 08:16:23.765 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.766 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f47cfa0b-b3fe-477a-a9e8-75d92379981e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f47cfa0b-b3fe-477a-a9e8-75d92379981e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.768 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[21df19b8-c63f-404b-a313-3276e58a8812]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.769 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-f47cfa0b-b3fe-477a-a9e8-75d92379981e
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/f47cfa0b-b3fe-477a-a9e8-75d92379981e.pid.haproxy
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID f47cfa0b-b3fe-477a-a9e8-75d92379981e
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:16:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:23.771 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e', 'env', 'PROCESS_TAG=haproxy-f47cfa0b-b3fe-477a-a9e8-75d92379981e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f47cfa0b-b3fe-477a-a9e8-75d92379981e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:16:23 np0005558241 nova_compute[248510]: 2025-12-13 08:16:23.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:24 np0005558241 nova_compute[248510]: 2025-12-13 08:16:24.177 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:24 np0005558241 NetworkManager[50376]: <info>  [1765613784.1785] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Dec 13 03:16:24 np0005558241 NetworkManager[50376]: <info>  [1765613784.1793] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Dec 13 03:16:24 np0005558241 nova_compute[248510]: 2025-12-13 08:16:24.292 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:16:24Z|00055|binding|INFO|Releasing lport ef0b90f4-1740-443f-9a11-5ea86a0f958c from this chassis (sb_readonly=0)
Dec 13 03:16:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:16:24Z|00056|binding|INFO|Releasing lport 529060b8-8f58-4200-8305-96486d94b419 from this chassis (sb_readonly=0)
Dec 13 03:16:24 np0005558241 nova_compute[248510]: 2025-12-13 08:16:24.307 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:24 np0005558241 podman[269767]: 2025-12-13 08:16:24.203827548 +0000 UTC m=+0.043247486 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:16:24 np0005558241 podman[269767]: 2025-12-13 08:16:24.36917615 +0000 UTC m=+0.208596078 container create 7466634db03049735d68cc46a469dbb365935f90bb962bd77a7ba4f328b68647 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 13 03:16:24 np0005558241 systemd[1]: Started libpod-conmon-7466634db03049735d68cc46a469dbb365935f90bb962bd77a7ba4f328b68647.scope.
Dec 13 03:16:24 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:16:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4649857adb74ed91cdd90063932457465b43f92da005310d777b41a71b47704/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:16:24 np0005558241 podman[269767]: 2025-12-13 08:16:24.556854363 +0000 UTC m=+0.396274281 container init 7466634db03049735d68cc46a469dbb365935f90bb962bd77a7ba4f328b68647 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:16:24 np0005558241 podman[269767]: 2025-12-13 08:16:24.563753833 +0000 UTC m=+0.403173751 container start 7466634db03049735d68cc46a469dbb365935f90bb962bd77a7ba4f328b68647 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 13 03:16:24 np0005558241 neutron-haproxy-ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e[269782]: [NOTICE]   (269786) : New worker (269795) forked
Dec 13 03:16:24 np0005558241 neutron-haproxy-ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e[269782]: [NOTICE]   (269786) : Loading success.
Dec 13 03:16:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:16:24Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:80:d9:f4 10.100.0.14
Dec 13 03:16:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:16:24Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:d9:f4 10.100.0.14
Dec 13 03:16:24 np0005558241 nova_compute[248510]: 2025-12-13 08:16:24.764 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613784.7628164, 5e65e6d8-81fc-412d-9d08-d87023661fda => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:16:24 np0005558241 nova_compute[248510]: 2025-12-13 08:16:24.765 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] VM Started (Lifecycle Event)#033[00m
Dec 13 03:16:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 151 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 119 op/s
Dec 13 03:16:25 np0005558241 nova_compute[248510]: 2025-12-13 08:16:25.238 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:26 np0005558241 podman[269934]: 2025-12-13 08:16:26.212690667 +0000 UTC m=+0.075266495 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:16:26 np0005558241 podman[269934]: 2025-12-13 08:16:26.32364051 +0000 UTC m=+0.186216318 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 03:16:26 np0005558241 nova_compute[248510]: 2025-12-13 08:16:26.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:16:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 151 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 723 KiB/s rd, 2.2 MiB/s wr, 43 op/s
Dec 13 03:16:27 np0005558241 nova_compute[248510]: 2025-12-13 08:16:27.035 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:16:27 np0005558241 nova_compute[248510]: 2025-12-13 08:16:27.042 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613784.7633562, 5e65e6d8-81fc-412d-9d08-d87023661fda => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:16:27 np0005558241 nova_compute[248510]: 2025-12-13 08:16:27.043 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:16:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:16:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:16:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:16:27 np0005558241 nova_compute[248510]: 2025-12-13 08:16:27.096 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:16:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:16:27 np0005558241 nova_compute[248510]: 2025-12-13 08:16:27.100 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:16:27 np0005558241 nova_compute[248510]: 2025-12-13 08:16:27.156 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:16:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:16:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:16:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:16:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:16:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:16:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:16:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:16:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:16:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:16:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:16:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:16:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:16:28 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:16:28 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:16:28 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:16:28 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:16:28 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:16:28 np0005558241 podman[270264]: 2025-12-13 08:16:28.313943593 +0000 UTC m=+0.045458221 container create fa0a6023a6d97a84be4a2158f82f64bbe71703f664c483addaefe10aa97f8d41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:16:28 np0005558241 systemd[1]: Started libpod-conmon-fa0a6023a6d97a84be4a2158f82f64bbe71703f664c483addaefe10aa97f8d41.scope.
Dec 13 03:16:28 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:16:28 np0005558241 podman[270264]: 2025-12-13 08:16:28.296488263 +0000 UTC m=+0.028002911 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:16:28 np0005558241 podman[270264]: 2025-12-13 08:16:28.392372865 +0000 UTC m=+0.123887513 container init fa0a6023a6d97a84be4a2158f82f64bbe71703f664c483addaefe10aa97f8d41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:16:28 np0005558241 podman[270264]: 2025-12-13 08:16:28.401643223 +0000 UTC m=+0.133157851 container start fa0a6023a6d97a84be4a2158f82f64bbe71703f664c483addaefe10aa97f8d41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:16:28 np0005558241 podman[270264]: 2025-12-13 08:16:28.405248832 +0000 UTC m=+0.136763490 container attach fa0a6023a6d97a84be4a2158f82f64bbe71703f664c483addaefe10aa97f8d41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:16:28 np0005558241 blissful_perlman[270281]: 167 167
Dec 13 03:16:28 np0005558241 systemd[1]: libpod-fa0a6023a6d97a84be4a2158f82f64bbe71703f664c483addaefe10aa97f8d41.scope: Deactivated successfully.
Dec 13 03:16:28 np0005558241 conmon[270281]: conmon fa0a6023a6d97a84be4a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fa0a6023a6d97a84be4a2158f82f64bbe71703f664c483addaefe10aa97f8d41.scope/container/memory.events
Dec 13 03:16:28 np0005558241 podman[270286]: 2025-12-13 08:16:28.458541785 +0000 UTC m=+0.030163654 container died fa0a6023a6d97a84be4a2158f82f64bbe71703f664c483addaefe10aa97f8d41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:16:28 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e13296d772e60bd19ca5db2460344074a00678456e425819325fa666a1d28a7a-merged.mount: Deactivated successfully.
Dec 13 03:16:28 np0005558241 podman[270286]: 2025-12-13 08:16:28.503284687 +0000 UTC m=+0.074906526 container remove fa0a6023a6d97a84be4a2158f82f64bbe71703f664c483addaefe10aa97f8d41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:16:28 np0005558241 systemd[1]: libpod-conmon-fa0a6023a6d97a84be4a2158f82f64bbe71703f664c483addaefe10aa97f8d41.scope: Deactivated successfully.
Dec 13 03:16:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:16:28 np0005558241 nova_compute[248510]: 2025-12-13 08:16:28.691 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:28 np0005558241 podman[270307]: 2025-12-13 08:16:28.710105581 +0000 UTC m=+0.054906673 container create 86ccebdeaced24a20a279c211901c8bd3bf2bf10e69c31f906332324aa86107e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:16:28 np0005558241 systemd[1]: Started libpod-conmon-86ccebdeaced24a20a279c211901c8bd3bf2bf10e69c31f906332324aa86107e.scope.
Dec 13 03:16:28 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:16:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9602b53c6f478848ea34b3fb4b717859d849b0671c0cb675fae0d4ac216e1039/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:16:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9602b53c6f478848ea34b3fb4b717859d849b0671c0cb675fae0d4ac216e1039/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:16:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9602b53c6f478848ea34b3fb4b717859d849b0671c0cb675fae0d4ac216e1039/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:16:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9602b53c6f478848ea34b3fb4b717859d849b0671c0cb675fae0d4ac216e1039/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:16:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9602b53c6f478848ea34b3fb4b717859d849b0671c0cb675fae0d4ac216e1039/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:16:28 np0005558241 podman[270307]: 2025-12-13 08:16:28.685175127 +0000 UTC m=+0.029976239 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:16:28 np0005558241 podman[270307]: 2025-12-13 08:16:28.789507137 +0000 UTC m=+0.134308249 container init 86ccebdeaced24a20a279c211901c8bd3bf2bf10e69c31f906332324aa86107e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 03:16:28 np0005558241 podman[270307]: 2025-12-13 08:16:28.795981656 +0000 UTC m=+0.140782748 container start 86ccebdeaced24a20a279c211901c8bd3bf2bf10e69c31f906332324aa86107e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_leavitt, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:16:28 np0005558241 podman[270307]: 2025-12-13 08:16:28.801938123 +0000 UTC m=+0.146739215 container attach 86ccebdeaced24a20a279c211901c8bd3bf2bf10e69c31f906332324aa86107e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 03:16:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 160 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 898 KiB/s rd, 3.2 MiB/s wr, 75 op/s
Dec 13 03:16:29 np0005558241 upbeat_leavitt[270323]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:16:29 np0005558241 upbeat_leavitt[270323]: --> All data devices are unavailable
Dec 13 03:16:29 np0005558241 systemd[1]: libpod-86ccebdeaced24a20a279c211901c8bd3bf2bf10e69c31f906332324aa86107e.scope: Deactivated successfully.
Dec 13 03:16:29 np0005558241 podman[270307]: 2025-12-13 08:16:29.310444207 +0000 UTC m=+0.655245339 container died 86ccebdeaced24a20a279c211901c8bd3bf2bf10e69c31f906332324aa86107e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:16:29 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9602b53c6f478848ea34b3fb4b717859d849b0671c0cb675fae0d4ac216e1039-merged.mount: Deactivated successfully.
Dec 13 03:16:29 np0005558241 podman[270307]: 2025-12-13 08:16:29.364288113 +0000 UTC m=+0.709089225 container remove 86ccebdeaced24a20a279c211901c8bd3bf2bf10e69c31f906332324aa86107e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 03:16:29 np0005558241 systemd[1]: libpod-conmon-86ccebdeaced24a20a279c211901c8bd3bf2bf10e69c31f906332324aa86107e.scope: Deactivated successfully.
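The podman lines above trace one short-lived container (`upbeat_leavitt`) through its full lifecycle: image pull, init, start, attach, died, remove. A minimal sketch for picking such event lines apart, assuming the journal prefix (`Dec 13 ... podman[270307]: `) has already been stripped so each line begins with the podman timestamp; the helper name and regex are ours, not part of podman:

```python
import re

# Matches the event records podman writes, e.g.
#   "2025-12-13 08:16:28.795981656 +0000 UTC m=+0.140782748 container start <id> (...)"
# Groups: timestamp, object kind, event verb, object id. This is a sketch
# based on the lines in this log, not a documented podman format guarantee.
EVENT_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC m=\+\S+ "
    r"(?P<object>image|container|pod|volume) (?P<event>\w+) (?P<id>\S+)"
)

def parse_podman_event(line: str):
    """Return (timestamp, object, event, id), or None if the line is not an event."""
    m = EVENT_RE.match(line)
    if not m:
        return None
    return m.group("ts"), m.group("object"), m.group("event"), m.group("id")

sample = (
    "2025-12-13 08:16:28.795981656 +0000 UTC m=+0.140782748 "
    "container start 86ccebdeaced24a20a279c211901c8bd3bf2bf10e69c31f906332324aa86107e "
    "(image=quay.io/ceph/ceph@sha256:1228..., name=upbeat_leavitt)"
)
result = parse_podman_event(sample)
```

Grouping the parsed tuples by id reconstructs each container's lifecycle ordering even when several podman processes interleave in the journal.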
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.451 248514 DEBUG nova.compute.manager [req-f69d9515-3ccb-4a7f-95e3-8ff673275c22 req-64535c3e-bec3-47a3-b3a7-b5f63f42dda3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Received event network-vif-plugged-44171d97-ddef-4b41-9740-19635fa87c3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.453 248514 DEBUG oslo_concurrency.lockutils [req-f69d9515-3ccb-4a7f-95e3-8ff673275c22 req-64535c3e-bec3-47a3-b3a7-b5f63f42dda3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5e65e6d8-81fc-412d-9d08-d87023661fda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.453 248514 DEBUG oslo_concurrency.lockutils [req-f69d9515-3ccb-4a7f-95e3-8ff673275c22 req-64535c3e-bec3-47a3-b3a7-b5f63f42dda3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5e65e6d8-81fc-412d-9d08-d87023661fda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.454 248514 DEBUG oslo_concurrency.lockutils [req-f69d9515-3ccb-4a7f-95e3-8ff673275c22 req-64535c3e-bec3-47a3-b3a7-b5f63f42dda3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5e65e6d8-81fc-412d-9d08-d87023661fda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.454 248514 DEBUG nova.compute.manager [req-f69d9515-3ccb-4a7f-95e3-8ff673275c22 req-64535c3e-bec3-47a3-b3a7-b5f63f42dda3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Processing event network-vif-plugged-44171d97-ddef-4b41-9740-19635fa87c3a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.454 248514 DEBUG nova.compute.manager [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.460 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613789.4595895, 5e65e6d8-81fc-412d-9d08-d87023661fda => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.460 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.463 248514 DEBUG nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.467 248514 INFO nova.virt.libvirt.driver [-] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Instance spawned successfully.#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.468 248514 DEBUG nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.500 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.505 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.511 248514 DEBUG nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.511 248514 DEBUG nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.512 248514 DEBUG nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.512 248514 DEBUG nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.512 248514 DEBUG nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.513 248514 DEBUG nova.virt.libvirt.driver [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.557 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.614 248514 INFO nova.compute.manager [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Took 16.76 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.614 248514 DEBUG nova.compute.manager [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.744 248514 INFO nova.compute.manager [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Took 18.22 seconds to build instance.#033[00m
Dec 13 03:16:29 np0005558241 nova_compute[248510]: 2025-12-13 08:16:29.768 248514 DEBUG oslo_concurrency.lockutils [None req-eb03bae0-db9b-4cba-b9c8-90807287b87f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Lock "5e65e6d8-81fc-412d-9d08-d87023661fda" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:16:29 np0005558241 podman[270418]: 2025-12-13 08:16:29.881320568 +0000 UTC m=+0.035352821 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:16:30 np0005558241 podman[270418]: 2025-12-13 08:16:30.127751898 +0000 UTC m=+0.281784161 container create 2dfaa963987463b1dab695f2bbca926fdcd7cf141ffe993d2e4001a8aa2f3888 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_faraday, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:16:30 np0005558241 systemd[1]: Started libpod-conmon-2dfaa963987463b1dab695f2bbca926fdcd7cf141ffe993d2e4001a8aa2f3888.scope.
Dec 13 03:16:30 np0005558241 nova_compute[248510]: 2025-12-13 08:16:30.274 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:30 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:16:30 np0005558241 podman[270418]: 2025-12-13 08:16:30.303414925 +0000 UTC m=+0.457447168 container init 2dfaa963987463b1dab695f2bbca926fdcd7cf141ffe993d2e4001a8aa2f3888 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Dec 13 03:16:30 np0005558241 podman[270418]: 2025-12-13 08:16:30.312195691 +0000 UTC m=+0.466227914 container start 2dfaa963987463b1dab695f2bbca926fdcd7cf141ffe993d2e4001a8aa2f3888 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_faraday, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 03:16:30 np0005558241 magical_faraday[270435]: 167 167
Dec 13 03:16:30 np0005558241 systemd[1]: libpod-2dfaa963987463b1dab695f2bbca926fdcd7cf141ffe993d2e4001a8aa2f3888.scope: Deactivated successfully.
Dec 13 03:16:30 np0005558241 podman[270418]: 2025-12-13 08:16:30.337752231 +0000 UTC m=+0.491784484 container attach 2dfaa963987463b1dab695f2bbca926fdcd7cf141ffe993d2e4001a8aa2f3888 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 03:16:30 np0005558241 podman[270418]: 2025-12-13 08:16:30.338435907 +0000 UTC m=+0.492468140 container died 2dfaa963987463b1dab695f2bbca926fdcd7cf141ffe993d2e4001a8aa2f3888 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 03:16:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2ead498e65dec4e83db2f6c9cd9e1aab0fff888c27bc66db877fafa9cda7ce07-merged.mount: Deactivated successfully.
Dec 13 03:16:30 np0005558241 podman[270418]: 2025-12-13 08:16:30.433429037 +0000 UTC m=+0.587461260 container remove 2dfaa963987463b1dab695f2bbca926fdcd7cf141ffe993d2e4001a8aa2f3888 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_faraday, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 03:16:30 np0005558241 systemd[1]: libpod-conmon-2dfaa963987463b1dab695f2bbca926fdcd7cf141ffe993d2e4001a8aa2f3888.scope: Deactivated successfully.
Dec 13 03:16:30 np0005558241 nova_compute[248510]: 2025-12-13 08:16:30.450 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:30 np0005558241 podman[270459]: 2025-12-13 08:16:30.63776077 +0000 UTC m=+0.055194720 container create 2cfb361c3a687faf55a94908b8de1a17ef8eea1a63df93dfaed1227ae13a022a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_tharp, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 03:16:30 np0005558241 systemd[1]: Started libpod-conmon-2cfb361c3a687faf55a94908b8de1a17ef8eea1a63df93dfaed1227ae13a022a.scope.
Dec 13 03:16:30 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:16:30 np0005558241 podman[270459]: 2025-12-13 08:16:30.616296921 +0000 UTC m=+0.033730891 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:16:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc53236740d9b34943e141178f7f893ed47e5e0ffd2de6af21649cfc48836c46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:16:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc53236740d9b34943e141178f7f893ed47e5e0ffd2de6af21649cfc48836c46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:16:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc53236740d9b34943e141178f7f893ed47e5e0ffd2de6af21649cfc48836c46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:16:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc53236740d9b34943e141178f7f893ed47e5e0ffd2de6af21649cfc48836c46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:16:30 np0005558241 podman[270459]: 2025-12-13 08:16:30.732292259 +0000 UTC m=+0.149726229 container init 2cfb361c3a687faf55a94908b8de1a17ef8eea1a63df93dfaed1227ae13a022a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:16:30 np0005558241 podman[270459]: 2025-12-13 08:16:30.739834244 +0000 UTC m=+0.157268194 container start 2cfb361c3a687faf55a94908b8de1a17ef8eea1a63df93dfaed1227ae13a022a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:16:30 np0005558241 podman[270459]: 2025-12-13 08:16:30.745141705 +0000 UTC m=+0.162575675 container attach 2cfb361c3a687faf55a94908b8de1a17ef8eea1a63df93dfaed1227ae13a022a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 03:16:30 np0005558241 nova_compute[248510]: 2025-12-13 08:16:30.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:16:30 np0005558241 nova_compute[248510]: 2025-12-13 08:16:30.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:16:30 np0005558241 nova_compute[248510]: 2025-12-13 08:16:30.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:16:30 np0005558241 nova_compute[248510]: 2025-12-13 08:16:30.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:16:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 73 op/s
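The `elegant_tharp` output that follows is the JSON form of `ceph-volume lvm list` (keys are OSD ids, values are the logical volumes backing each OSD, with cluster/OSD metadata carried in LVM tags). A minimal sketch of consuming that shape, with `raw` standing in for the captured JSON and trimmed to only the fields used here; the field names are taken from the dump itself:

```python
import json

# `raw` reproduces the structure of the ceph-volume lvm list dump in this log,
# keeping just the fields this sketch reads (devices, lv_path, tags).
raw = """
{
  "0": [{"devices": ["/dev/loop3"], "lv_path": "/dev/ceph_vg0/ceph_lv0",
         "tags": {"ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1"}}],
  "1": [{"devices": ["/dev/loop4"], "lv_path": "/dev/ceph_vg1/ceph_lv1",
         "tags": {"ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325"}}]
}
"""

# Map each OSD id to its (backing device, LV path) pairs; an OSD can in
# principle have several LVs (e.g. separate db/wal), hence the inner list.
osd_map = {
    osd_id: [(lv["devices"][0], lv["lv_path"]) for lv in lvs]
    for osd_id, lvs in json.loads(raw).items()
}
```

This also explains the earlier `upbeat_leavitt` lines ("passed data devices: 0 physical, 3 LVM" / "All data devices are unavailable"): the LVs are already tagged as Ceph block devices, so they are not candidates for new OSDs.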
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]: {
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:    "0": [
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:        {
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "devices": [
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "/dev/loop3"
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            ],
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "lv_name": "ceph_lv0",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "lv_size": "21470642176",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "name": "ceph_lv0",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "tags": {
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.cluster_name": "ceph",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.crush_device_class": "",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.encrypted": "0",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.objectstore": "bluestore",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.osd_id": "0",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.type": "block",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.vdo": "0",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.with_tpm": "0"
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            },
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "type": "block",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "vg_name": "ceph_vg0"
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:        }
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:    ],
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:    "1": [
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:        {
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "devices": [
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "/dev/loop4"
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            ],
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "lv_name": "ceph_lv1",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "lv_size": "21470642176",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "name": "ceph_lv1",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "tags": {
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.cluster_name": "ceph",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.crush_device_class": "",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.encrypted": "0",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.objectstore": "bluestore",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.osd_id": "1",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.type": "block",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.vdo": "0",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.with_tpm": "0"
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            },
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "type": "block",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "vg_name": "ceph_vg1"
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:        }
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:    ],
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:    "2": [
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:        {
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "devices": [
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "/dev/loop5"
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            ],
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "lv_name": "ceph_lv2",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "lv_size": "21470642176",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "name": "ceph_lv2",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "tags": {
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.cluster_name": "ceph",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.crush_device_class": "",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.encrypted": "0",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.objectstore": "bluestore",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.osd_id": "2",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.type": "block",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.vdo": "0",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:                "ceph.with_tpm": "0"
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            },
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "type": "block",
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:            "vg_name": "ceph_vg2"
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:        }
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]:    ]
Dec 13 03:16:31 np0005558241 elegant_tharp[270475]: }
Dec 13 03:16:31 np0005558241 systemd[1]: libpod-2cfb361c3a687faf55a94908b8de1a17ef8eea1a63df93dfaed1227ae13a022a.scope: Deactivated successfully.
Dec 13 03:16:31 np0005558241 podman[270459]: 2025-12-13 08:16:31.090908242 +0000 UTC m=+0.508342192 container died 2cfb361c3a687faf55a94908b8de1a17ef8eea1a63df93dfaed1227ae13a022a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 03:16:31 np0005558241 systemd[1]: var-lib-containers-storage-overlay-fc53236740d9b34943e141178f7f893ed47e5e0ffd2de6af21649cfc48836c46-merged.mount: Deactivated successfully.
Dec 13 03:16:31 np0005558241 podman[270459]: 2025-12-13 08:16:31.157277506 +0000 UTC m=+0.574711456 container remove 2cfb361c3a687faf55a94908b8de1a17ef8eea1a63df93dfaed1227ae13a022a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:16:31 np0005558241 systemd[1]: libpod-conmon-2cfb361c3a687faf55a94908b8de1a17ef8eea1a63df93dfaed1227ae13a022a.scope: Deactivated successfully.
Dec 13 03:16:31 np0005558241 nova_compute[248510]: 2025-12-13 08:16:31.204 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-d7894b40-fff7-4226-9a51-17dc5fe4bb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:16:31 np0005558241 nova_compute[248510]: 2025-12-13 08:16:31.204 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-d7894b40-fff7-4226-9a51-17dc5fe4bb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:16:31 np0005558241 nova_compute[248510]: 2025-12-13 08:16:31.205 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:16:31 np0005558241 nova_compute[248510]: 2025-12-13 08:16:31.205 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid d7894b40-fff7-4226-9a51-17dc5fe4bb30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:16:31 np0005558241 podman[270560]: 2025-12-13 08:16:31.697930113 +0000 UTC m=+0.054831071 container create 3d75e7e3f24ab48f443825534ea4cbb7b32ea937c690bfc42ffae2f903dc806e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_maxwell, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 03:16:31 np0005558241 systemd[1]: Started libpod-conmon-3d75e7e3f24ab48f443825534ea4cbb7b32ea937c690bfc42ffae2f903dc806e.scope.
Dec 13 03:16:31 np0005558241 podman[270560]: 2025-12-13 08:16:31.671437491 +0000 UTC m=+0.028338509 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:16:31 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:16:31 np0005558241 podman[270560]: 2025-12-13 08:16:31.787160471 +0000 UTC m=+0.144061459 container init 3d75e7e3f24ab48f443825534ea4cbb7b32ea937c690bfc42ffae2f903dc806e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 03:16:31 np0005558241 podman[270560]: 2025-12-13 08:16:31.794130013 +0000 UTC m=+0.151030971 container start 3d75e7e3f24ab48f443825534ea4cbb7b32ea937c690bfc42ffae2f903dc806e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:16:31 np0005558241 podman[270560]: 2025-12-13 08:16:31.7976776 +0000 UTC m=+0.154578558 container attach 3d75e7e3f24ab48f443825534ea4cbb7b32ea937c690bfc42ffae2f903dc806e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_maxwell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:16:31 np0005558241 sleepy_maxwell[270577]: 167 167
Dec 13 03:16:31 np0005558241 systemd[1]: libpod-3d75e7e3f24ab48f443825534ea4cbb7b32ea937c690bfc42ffae2f903dc806e.scope: Deactivated successfully.
Dec 13 03:16:31 np0005558241 podman[270560]: 2025-12-13 08:16:31.801200967 +0000 UTC m=+0.158101945 container died 3d75e7e3f24ab48f443825534ea4cbb7b32ea937c690bfc42ffae2f903dc806e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:16:31 np0005558241 systemd[1]: var-lib-containers-storage-overlay-286e133e8b9d576cdec68cc3879e55568467298e61d30ddf168a65b516a765a2-merged.mount: Deactivated successfully.
Dec 13 03:16:31 np0005558241 podman[270560]: 2025-12-13 08:16:31.85248225 +0000 UTC m=+0.209383208 container remove 3d75e7e3f24ab48f443825534ea4cbb7b32ea937c690bfc42ffae2f903dc806e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_maxwell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 03:16:31 np0005558241 systemd[1]: libpod-conmon-3d75e7e3f24ab48f443825534ea4cbb7b32ea937c690bfc42ffae2f903dc806e.scope: Deactivated successfully.
Dec 13 03:16:32 np0005558241 podman[270603]: 2025-12-13 08:16:32.076062577 +0000 UTC m=+0.064890069 container create 2df9debe47e58e65d190b568d01019a2d2e025881bb7ddbbf7bc23cda51711c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bartik, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:16:32 np0005558241 systemd[1]: Started libpod-conmon-2df9debe47e58e65d190b568d01019a2d2e025881bb7ddbbf7bc23cda51711c6.scope.
Dec 13 03:16:32 np0005558241 podman[270603]: 2025-12-13 08:16:32.04369483 +0000 UTC m=+0.032522342 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:16:32 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:16:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/169f0d1925dc270a8874ebe6d8bbedde7ec8bf5dfb691d8a0f37ede9b886671a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:16:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/169f0d1925dc270a8874ebe6d8bbedde7ec8bf5dfb691d8a0f37ede9b886671a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:16:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/169f0d1925dc270a8874ebe6d8bbedde7ec8bf5dfb691d8a0f37ede9b886671a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:16:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/169f0d1925dc270a8874ebe6d8bbedde7ec8bf5dfb691d8a0f37ede9b886671a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:16:32 np0005558241 podman[270603]: 2025-12-13 08:16:32.179429393 +0000 UTC m=+0.168256915 container init 2df9debe47e58e65d190b568d01019a2d2e025881bb7ddbbf7bc23cda51711c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:16:32 np0005558241 podman[270603]: 2025-12-13 08:16:32.187389619 +0000 UTC m=+0.176217111 container start 2df9debe47e58e65d190b568d01019a2d2e025881bb7ddbbf7bc23cda51711c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bartik, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 03:16:32 np0005558241 podman[270603]: 2025-12-13 08:16:32.201142348 +0000 UTC m=+0.189969840 container attach 2df9debe47e58e65d190b568d01019a2d2e025881bb7ddbbf7bc23cda51711c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bartik, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:16:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 13 03:16:32 np0005558241 lvm[270729]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:16:32 np0005558241 lvm[270729]: VG ceph_vg1 finished
Dec 13 03:16:32 np0005558241 lvm[270730]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:16:32 np0005558241 lvm[270730]: VG ceph_vg2 finished
Dec 13 03:16:32 np0005558241 lvm[270727]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:16:32 np0005558241 lvm[270727]: VG ceph_vg0 finished
Dec 13 03:16:32 np0005558241 elated_bartik[270620]: {}
Dec 13 03:16:33 np0005558241 podman[270698]: 2025-12-13 08:16:33.011097497 +0000 UTC m=+0.083838206 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 03:16:33 np0005558241 lvm[270753]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:16:33 np0005558241 lvm[270753]: VG ceph_vg0 finished
Dec 13 03:16:33 np0005558241 podman[270697]: 2025-12-13 08:16:33.033641862 +0000 UTC m=+0.116220803 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:16:33 np0005558241 podman[270695]: 2025-12-13 08:16:33.044026988 +0000 UTC m=+0.131144821 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:16:33 np0005558241 systemd[1]: libpod-2df9debe47e58e65d190b568d01019a2d2e025881bb7ddbbf7bc23cda51711c6.scope: Deactivated successfully.
Dec 13 03:16:33 np0005558241 systemd[1]: libpod-2df9debe47e58e65d190b568d01019a2d2e025881bb7ddbbf7bc23cda51711c6.scope: Consumed 1.451s CPU time.
Dec 13 03:16:33 np0005558241 podman[270603]: 2025-12-13 08:16:33.046633812 +0000 UTC m=+1.035461314 container died 2df9debe47e58e65d190b568d01019a2d2e025881bb7ddbbf7bc23cda51711c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bartik, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:16:33 np0005558241 systemd[1]: var-lib-containers-storage-overlay-169f0d1925dc270a8874ebe6d8bbedde7ec8bf5dfb691d8a0f37ede9b886671a-merged.mount: Deactivated successfully.
Dec 13 03:16:33 np0005558241 podman[270603]: 2025-12-13 08:16:33.178786627 +0000 UTC m=+1.167614109 container remove 2df9debe47e58e65d190b568d01019a2d2e025881bb7ddbbf7bc23cda51711c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:16:33 np0005558241 systemd[1]: libpod-conmon-2df9debe47e58e65d190b568d01019a2d2e025881bb7ddbbf7bc23cda51711c6.scope: Deactivated successfully.
Dec 13 03:16:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:16:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:16:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:16:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:16:33 np0005558241 nova_compute[248510]: 2025-12-13 08:16:33.667 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Updating instance_info_cache with network_info: [{"id": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "address": "fa:16:3e:80:d9:f4", "network": {"id": "5f43148c-6790-47ca-9335-d071d9be9a23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-87276096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "081c046ea2ea42ee8e09ed5de2e8db81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fe9517-3d", "ovs_interfaceid": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:16:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:16:33 np0005558241 nova_compute[248510]: 2025-12-13 08:16:33.692 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:33 np0005558241 nova_compute[248510]: 2025-12-13 08:16:33.706 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-d7894b40-fff7-4226-9a51-17dc5fe4bb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:16:33 np0005558241 nova_compute[248510]: 2025-12-13 08:16:33.706 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:16:33 np0005558241 nova_compute[248510]: 2025-12-13 08:16:33.721 248514 DEBUG nova.compute.manager [req-ad359151-6f7d-4160-b831-4a653edd6a63 req-f15e94b2-021d-417c-aebd-ca354e8c4516 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Received event network-changed-99fe9517-3d12-4555-a9f8-2237cc2213c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:16:33 np0005558241 nova_compute[248510]: 2025-12-13 08:16:33.722 248514 DEBUG nova.compute.manager [req-ad359151-6f7d-4160-b831-4a653edd6a63 req-f15e94b2-021d-417c-aebd-ca354e8c4516 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Refreshing instance network info cache due to event network-changed-99fe9517-3d12-4555-a9f8-2237cc2213c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:16:33 np0005558241 nova_compute[248510]: 2025-12-13 08:16:33.722 248514 DEBUG oslo_concurrency.lockutils [req-ad359151-6f7d-4160-b831-4a653edd6a63 req-f15e94b2-021d-417c-aebd-ca354e8c4516 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d7894b40-fff7-4226-9a51-17dc5fe4bb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:16:33 np0005558241 nova_compute[248510]: 2025-12-13 08:16:33.723 248514 DEBUG oslo_concurrency.lockutils [req-ad359151-6f7d-4160-b831-4a653edd6a63 req-f15e94b2-021d-417c-aebd-ca354e8c4516 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d7894b40-fff7-4226-9a51-17dc5fe4bb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:16:33 np0005558241 nova_compute[248510]: 2025-12-13 08:16:33.723 248514 DEBUG nova.network.neutron [req-ad359151-6f7d-4160-b831-4a653edd6a63 req-f15e94b2-021d-417c-aebd-ca354e8c4516 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Refreshing network info cache for port 99fe9517-3d12-4555-a9f8-2237cc2213c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:16:33 np0005558241 nova_compute[248510]: 2025-12-13 08:16:33.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:16:33 np0005558241 nova_compute[248510]: 2025-12-13 08:16:33.824 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:16:33 np0005558241 nova_compute[248510]: 2025-12-13 08:16:33.824 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:16:33 np0005558241 nova_compute[248510]: 2025-12-13 08:16:33.825 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:16:33 np0005558241 nova_compute[248510]: 2025-12-13 08:16:33.825 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:16:33 np0005558241 nova_compute[248510]: 2025-12-13 08:16:33.825 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:16:34 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:16:34 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:16:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:16:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1628346916' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:16:34 np0005558241 nova_compute[248510]: 2025-12-13 08:16:34.396 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:16:34 np0005558241 nova_compute[248510]: 2025-12-13 08:16:34.748 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:16:34 np0005558241 nova_compute[248510]: 2025-12-13 08:16:34.749 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:16:34 np0005558241 nova_compute[248510]: 2025-12-13 08:16:34.753 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:16:34 np0005558241 nova_compute[248510]: 2025-12-13 08:16:34.753 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:16:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Dec 13 03:16:34 np0005558241 nova_compute[248510]: 2025-12-13 08:16:34.945 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:16:34 np0005558241 nova_compute[248510]: 2025-12-13 08:16:34.946 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4263MB free_disk=59.92185636609793GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:16:34 np0005558241 nova_compute[248510]: 2025-12-13 08:16:34.946 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:16:34 np0005558241 nova_compute[248510]: 2025-12-13 08:16:34.946 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.040 248514 DEBUG nova.compute.manager [None req-e62919ee-b75f-454e-add6-620a246e838f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.096 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance d7894b40-fff7-4226-9a51-17dc5fe4bb30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.097 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 5e65e6d8-81fc-412d-9d08-d87023661fda actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.097 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.097 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.112 248514 INFO nova.compute.manager [None req-e62919ee-b75f-454e-add6-620a246e838f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] instance snapshotting#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.120 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.150 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.150 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.178 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.225 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.276 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.333 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.439 248514 INFO nova.virt.libvirt.driver [None req-e62919ee-b75f-454e-add6-620a246e838f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Beginning live snapshot process#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.592 248514 DEBUG nova.network.neutron [req-ad359151-6f7d-4160-b831-4a653edd6a63 req-f15e94b2-021d-417c-aebd-ca354e8c4516 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Updated VIF entry in instance network info cache for port 99fe9517-3d12-4555-a9f8-2237cc2213c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.594 248514 DEBUG nova.network.neutron [req-ad359151-6f7d-4160-b831-4a653edd6a63 req-f15e94b2-021d-417c-aebd-ca354e8c4516 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Updating instance_info_cache with network_info: [{"id": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "address": "fa:16:3e:80:d9:f4", "network": {"id": "5f43148c-6790-47ca-9335-d071d9be9a23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-87276096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "081c046ea2ea42ee8e09ed5de2e8db81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fe9517-3d", "ovs_interfaceid": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.674 248514 DEBUG oslo_concurrency.lockutils [req-ad359151-6f7d-4160-b831-4a653edd6a63 req-f15e94b2-021d-417c-aebd-ca354e8c4516 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d7894b40-fff7-4226-9a51-17dc5fe4bb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.675 248514 DEBUG nova.compute.manager [req-ad359151-6f7d-4160-b831-4a653edd6a63 req-f15e94b2-021d-417c-aebd-ca354e8c4516 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Received event network-vif-plugged-44171d97-ddef-4b41-9740-19635fa87c3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.675 248514 DEBUG oslo_concurrency.lockutils [req-ad359151-6f7d-4160-b831-4a653edd6a63 req-f15e94b2-021d-417c-aebd-ca354e8c4516 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5e65e6d8-81fc-412d-9d08-d87023661fda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.676 248514 DEBUG oslo_concurrency.lockutils [req-ad359151-6f7d-4160-b831-4a653edd6a63 req-f15e94b2-021d-417c-aebd-ca354e8c4516 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5e65e6d8-81fc-412d-9d08-d87023661fda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.676 248514 DEBUG oslo_concurrency.lockutils [req-ad359151-6f7d-4160-b831-4a653edd6a63 req-f15e94b2-021d-417c-aebd-ca354e8c4516 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5e65e6d8-81fc-412d-9d08-d87023661fda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.676 248514 DEBUG nova.compute.manager [req-ad359151-6f7d-4160-b831-4a653edd6a63 req-f15e94b2-021d-417c-aebd-ca354e8c4516 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] No waiting events found dispatching network-vif-plugged-44171d97-ddef-4b41-9740-19635fa87c3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.677 248514 WARNING nova.compute.manager [req-ad359151-6f7d-4160-b831-4a653edd6a63 req-f15e94b2-021d-417c-aebd-ca354e8c4516 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Received unexpected event network-vif-plugged-44171d97-ddef-4b41-9740-19635fa87c3a for instance with vm_state active and task_state None.#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.682 248514 DEBUG nova.virt.libvirt.imagebackend [None req-e62919ee-b75f-454e-add6-620a246e838f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec 13 03:16:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:16:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2966044842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.914 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.921 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:16:35 np0005558241 nova_compute[248510]: 2025-12-13 08:16:35.943 248514 DEBUG nova.storage.rbd_utils [None req-e62919ee-b75f-454e-add6-620a246e838f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] creating snapshot(9bee313e13b74fda872f032ec64a2e08) on rbd image(5e65e6d8-81fc-412d-9d08-d87023661fda_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:16:36 np0005558241 nova_compute[248510]: 2025-12-13 08:16:36.130 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:16:36 np0005558241 nova_compute[248510]: 2025-12-13 08:16:36.208 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:16:36 np0005558241 nova_compute[248510]: 2025-12-13 08:16:36.208 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:16:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Dec 13 03:16:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Dec 13 03:16:36 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Dec 13 03:16:36 np0005558241 nova_compute[248510]: 2025-12-13 08:16:36.356 248514 DEBUG nova.storage.rbd_utils [None req-e62919ee-b75f-454e-add6-620a246e838f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] cloning vms/5e65e6d8-81fc-412d-9d08-d87023661fda_disk@9bee313e13b74fda872f032ec64a2e08 to images/6b6e2db2-fc90-47d7-b33b-514df5aeef60 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:16:36 np0005558241 nova_compute[248510]: 2025-12-13 08:16:36.440 248514 DEBUG nova.storage.rbd_utils [None req-e62919ee-b75f-454e-add6-620a246e838f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] flattening images/6b6e2db2-fc90-47d7-b33b-514df5aeef60 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec 13 03:16:36 np0005558241 nova_compute[248510]: 2025-12-13 08:16:36.748 248514 DEBUG nova.storage.rbd_utils [None req-e62919ee-b75f-454e-add6-620a246e838f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] removing snapshot(9bee313e13b74fda872f032ec64a2e08) on rbd image(5e65e6d8-81fc-412d-9d08-d87023661fda_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec 13 03:16:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.3 MiB/s wr, 140 op/s
Dec 13 03:16:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Dec 13 03:16:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Dec 13 03:16:37 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Dec 13 03:16:37 np0005558241 nova_compute[248510]: 2025-12-13 08:16:37.343 248514 DEBUG nova.storage.rbd_utils [None req-e62919ee-b75f-454e-add6-620a246e838f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] creating snapshot(snap) on rbd image(6b6e2db2-fc90-47d7-b33b-514df5aeef60) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:16:38 np0005558241 nova_compute[248510]: 2025-12-13 08:16:38.209 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:16:38 np0005558241 nova_compute[248510]: 2025-12-13 08:16:38.209 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:16:38 np0005558241 nova_compute[248510]: 2025-12-13 08:16:38.209 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:16:38 np0005558241 nova_compute[248510]: 2025-12-13 08:16:38.209 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:16:38 np0005558241 nova_compute[248510]: 2025-12-13 08:16:38.210 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:16:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Dec 13 03:16:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Dec 13 03:16:38 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Dec 13 03:16:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:16:38 np0005558241 nova_compute[248510]: 2025-12-13 08:16:38.695 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 188 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 810 KiB/s wr, 157 op/s
Dec 13 03:16:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:16:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:16:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:16:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:16:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:16:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:16:40 np0005558241 nova_compute[248510]: 2025-12-13 08:16:40.279 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:40 np0005558241 nova_compute[248510]: 2025-12-13 08:16:40.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:16:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 213 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 109 op/s
Dec 13 03:16:41 np0005558241 nova_compute[248510]: 2025-12-13 08:16:41.580 248514 INFO nova.virt.libvirt.driver [None req-e62919ee-b75f-454e-add6-620a246e838f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Snapshot image upload complete#033[00m
Dec 13 03:16:41 np0005558241 nova_compute[248510]: 2025-12-13 08:16:41.581 248514 INFO nova.compute.manager [None req-e62919ee-b75f-454e-add6-620a246e838f ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Took 6.47 seconds to snapshot the instance on the hypervisor.#033[00m
Dec 13 03:16:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 213 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.3 MiB/s wr, 101 op/s
Dec 13 03:16:43 np0005558241 ovn_controller[148476]: 2025-12-13T08:16:43Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:52:8d:5c 10.100.0.6
Dec 13 03:16:43 np0005558241 ovn_controller[148476]: 2025-12-13T08:16:43Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:52:8d:5c 10.100.0.6
Dec 13 03:16:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:16:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Dec 13 03:16:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Dec 13 03:16:43 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Dec 13 03:16:43 np0005558241 nova_compute[248510]: 2025-12-13 08:16:43.699 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:44 np0005558241 ovn_controller[148476]: 2025-12-13T08:16:44Z|00057|binding|INFO|Releasing lport ef0b90f4-1740-443f-9a11-5ea86a0f958c from this chassis (sb_readonly=0)
Dec 13 03:16:44 np0005558241 ovn_controller[148476]: 2025-12-13T08:16:44Z|00058|binding|INFO|Releasing lport 529060b8-8f58-4200-8305-96486d94b419 from this chassis (sb_readonly=0)
Dec 13 03:16:44 np0005558241 nova_compute[248510]: 2025-12-13 08:16:44.661 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1512: 321 pgs: 321 active+clean; 245 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.2 MiB/s wr, 206 op/s
Dec 13 03:16:45 np0005558241 nova_compute[248510]: 2025-12-13 08:16:45.281 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 245 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.9 MiB/s wr, 161 op/s
Dec 13 03:16:48 np0005558241 nova_compute[248510]: 2025-12-13 08:16:48.233 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:16:48 np0005558241 nova_compute[248510]: 2025-12-13 08:16:48.700 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 246 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.2 MiB/s wr, 138 op/s
Dec 13 03:16:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:49.745 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:16:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:49.746 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:16:49 np0005558241 nova_compute[248510]: 2025-12-13 08:16:49.792 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Dec 13 03:16:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Dec 13 03:16:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Dec 13 03:16:50 np0005558241 nova_compute[248510]: 2025-12-13 08:16:50.283 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:50 np0005558241 nova_compute[248510]: 2025-12-13 08:16:50.678 248514 DEBUG nova.compute.manager [req-951f5eac-bf26-4a4c-90be-e5d3c3159f11 req-11faf737-9fab-483a-b88b-b26f6d5df854 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Received event network-changed-99fe9517-3d12-4555-a9f8-2237cc2213c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:16:50 np0005558241 nova_compute[248510]: 2025-12-13 08:16:50.678 248514 DEBUG nova.compute.manager [req-951f5eac-bf26-4a4c-90be-e5d3c3159f11 req-11faf737-9fab-483a-b88b-b26f6d5df854 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Refreshing instance network info cache due to event network-changed-99fe9517-3d12-4555-a9f8-2237cc2213c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:16:50 np0005558241 nova_compute[248510]: 2025-12-13 08:16:50.678 248514 DEBUG oslo_concurrency.lockutils [req-951f5eac-bf26-4a4c-90be-e5d3c3159f11 req-11faf737-9fab-483a-b88b-b26f6d5df854 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d7894b40-fff7-4226-9a51-17dc5fe4bb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:16:50 np0005558241 nova_compute[248510]: 2025-12-13 08:16:50.678 248514 DEBUG oslo_concurrency.lockutils [req-951f5eac-bf26-4a4c-90be-e5d3c3159f11 req-11faf737-9fab-483a-b88b-b26f6d5df854 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d7894b40-fff7-4226-9a51-17dc5fe4bb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:16:50 np0005558241 nova_compute[248510]: 2025-12-13 08:16:50.679 248514 DEBUG nova.network.neutron [req-951f5eac-bf26-4a4c-90be-e5d3c3159f11 req-11faf737-9fab-483a-b88b-b26f6d5df854 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Refreshing network info cache for port 99fe9517-3d12-4555-a9f8-2237cc2213c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:16:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 246 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 504 KiB/s rd, 3.2 MiB/s wr, 113 op/s
Dec 13 03:16:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 246 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 418 KiB/s rd, 2.4 MiB/s wr, 92 op/s
Dec 13 03:16:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:16:53 np0005558241 nova_compute[248510]: 2025-12-13 08:16:53.704 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 200 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 66 KiB/s wr, 28 op/s
Dec 13 03:16:55 np0005558241 nova_compute[248510]: 2025-12-13 08:16:55.286 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:55.397 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:16:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:55.398 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:16:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:55.398 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:16:55 np0005558241 nova_compute[248510]: 2025-12-13 08:16:55.724 248514 DEBUG nova.compute.manager [None req-91d8770c-b43c-4bce-ae39-398ceec761d9 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:16:55 np0005558241 nova_compute[248510]: 2025-12-13 08:16:55.805 248514 INFO nova.compute.manager [None req-91d8770c-b43c-4bce-ae39-398ceec761d9 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] instance snapshotting#033[00m
Dec 13 03:16:56 np0005558241 nova_compute[248510]: 2025-12-13 08:16:56.656 248514 INFO nova.virt.libvirt.driver [None req-91d8770c-b43c-4bce-ae39-398ceec761d9 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Beginning live snapshot process#033[00m
Dec 13 03:16:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 200 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 66 KiB/s wr, 28 op/s
Dec 13 03:16:57 np0005558241 nova_compute[248510]: 2025-12-13 08:16:57.264 248514 DEBUG nova.virt.libvirt.imagebackend [None req-91d8770c-b43c-4bce-ae39-398ceec761d9 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec 13 03:16:57 np0005558241 nova_compute[248510]: 2025-12-13 08:16:57.365 248514 DEBUG nova.network.neutron [req-951f5eac-bf26-4a4c-90be-e5d3c3159f11 req-11faf737-9fab-483a-b88b-b26f6d5df854 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Updated VIF entry in instance network info cache for port 99fe9517-3d12-4555-a9f8-2237cc2213c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:16:57 np0005558241 nova_compute[248510]: 2025-12-13 08:16:57.366 248514 DEBUG nova.network.neutron [req-951f5eac-bf26-4a4c-90be-e5d3c3159f11 req-11faf737-9fab-483a-b88b-b26f6d5df854 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Updating instance_info_cache with network_info: [{"id": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "address": "fa:16:3e:80:d9:f4", "network": {"id": "5f43148c-6790-47ca-9335-d071d9be9a23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-87276096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "081c046ea2ea42ee8e09ed5de2e8db81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fe9517-3d", "ovs_interfaceid": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:16:57 np0005558241 nova_compute[248510]: 2025-12-13 08:16:57.385 248514 DEBUG oslo_concurrency.lockutils [req-951f5eac-bf26-4a4c-90be-e5d3c3159f11 req-11faf737-9fab-483a-b88b-b26f6d5df854 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d7894b40-fff7-4226-9a51-17dc5fe4bb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:16:57 np0005558241 nova_compute[248510]: 2025-12-13 08:16:57.520 248514 DEBUG nova.storage.rbd_utils [None req-91d8770c-b43c-4bce-ae39-398ceec761d9 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] creating snapshot(c28169a3d09744b29b71d6d915649b10) on rbd image(5e65e6d8-81fc-412d-9d08-d87023661fda_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:16:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Dec 13 03:16:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Dec 13 03:16:58 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Dec 13 03:16:58 np0005558241 nova_compute[248510]: 2025-12-13 08:16:58.296 248514 DEBUG nova.storage.rbd_utils [None req-91d8770c-b43c-4bce-ae39-398ceec761d9 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] cloning vms/5e65e6d8-81fc-412d-9d08-d87023661fda_disk@c28169a3d09744b29b71d6d915649b10 to images/5aabf85b-f070-488a-9455-8ddbe9dc31f4 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:16:58 np0005558241 nova_compute[248510]: 2025-12-13 08:16:58.389 248514 DEBUG nova.storage.rbd_utils [None req-91d8770c-b43c-4bce-ae39-398ceec761d9 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] flattening images/5aabf85b-f070-488a-9455-8ddbe9dc31f4 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec 13 03:16:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:16:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Dec 13 03:16:58 np0005558241 nova_compute[248510]: 2025-12-13 08:16:58.708 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Dec 13 03:16:58 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Dec 13 03:16:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 200 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 27 KiB/s wr, 33 op/s
Dec 13 03:16:58 np0005558241 nova_compute[248510]: 2025-12-13 08:16:58.871 248514 DEBUG nova.storage.rbd_utils [None req-91d8770c-b43c-4bce-ae39-398ceec761d9 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] removing snapshot(c28169a3d09744b29b71d6d915649b10) on rbd image(5e65e6d8-81fc-412d-9d08-d87023661fda_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec 13 03:16:59 np0005558241 nova_compute[248510]: 2025-12-13 08:16:59.524 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:16:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Dec 13 03:16:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Dec 13 03:16:59 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Dec 13 03:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:16:59.749 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:16:59 np0005558241 nova_compute[248510]: 2025-12-13 08:16:59.760 248514 DEBUG nova.storage.rbd_utils [None req-91d8770c-b43c-4bce-ae39-398ceec761d9 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] creating snapshot(snap) on rbd image(5aabf85b-f070-488a-9455-8ddbe9dc31f4) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:17:00 np0005558241 nova_compute[248510]: 2025-12-13 08:17:00.288 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Dec 13 03:17:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Dec 13 03:17:00 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Dec 13 03:17:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 258 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 7.3 MiB/s rd, 8.3 MiB/s wr, 123 op/s
Dec 13 03:17:02 np0005558241 nova_compute[248510]: 2025-12-13 08:17:02.703 248514 DEBUG oslo_concurrency.lockutils [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Acquiring lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:02 np0005558241 nova_compute[248510]: 2025-12-13 08:17:02.704 248514 DEBUG oslo_concurrency.lockutils [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:02 np0005558241 nova_compute[248510]: 2025-12-13 08:17:02.704 248514 DEBUG oslo_concurrency.lockutils [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Acquiring lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:02 np0005558241 nova_compute[248510]: 2025-12-13 08:17:02.704 248514 DEBUG oslo_concurrency.lockutils [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:02 np0005558241 nova_compute[248510]: 2025-12-13 08:17:02.705 248514 DEBUG oslo_concurrency.lockutils [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:17:02 np0005558241 nova_compute[248510]: 2025-12-13 08:17:02.706 248514 INFO nova.compute.manager [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Terminating instance#033[00m
Dec 13 03:17:02 np0005558241 nova_compute[248510]: 2025-12-13 08:17:02.707 248514 DEBUG nova.compute.manager [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:17:02 np0005558241 kernel: tap99fe9517-3d (unregistering): left promiscuous mode
Dec 13 03:17:02 np0005558241 NetworkManager[50376]: <info>  [1765613822.7648] device (tap99fe9517-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:17:02 np0005558241 nova_compute[248510]: 2025-12-13 08:17:02.778 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:02Z|00059|binding|INFO|Releasing lport 99fe9517-3d12-4555-a9f8-2237cc2213c5 from this chassis (sb_readonly=0)
Dec 13 03:17:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:02Z|00060|binding|INFO|Setting lport 99fe9517-3d12-4555-a9f8-2237cc2213c5 down in Southbound
Dec 13 03:17:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:02Z|00061|binding|INFO|Removing iface tap99fe9517-3d ovn-installed in OVS
Dec 13 03:17:02 np0005558241 nova_compute[248510]: 2025-12-13 08:17:02.781 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:02.788 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:d9:f4 10.100.0.14'], port_security=['fa:16:3e:80:d9:f4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd7894b40-fff7-4226-9a51-17dc5fe4bb30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f43148c-6790-47ca-9335-d071d9be9a23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '081c046ea2ea42ee8e09ed5de2e8db81', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'de101059-5f1e-4690-862b-b4a45d904f8b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0b66f82-4270-4981-ae10-dd3083c40575, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=99fe9517-3d12-4555-a9f8-2237cc2213c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:17:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:02.790 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 99fe9517-3d12-4555-a9f8-2237cc2213c5 in datapath 5f43148c-6790-47ca-9335-d071d9be9a23 unbound from our chassis#033[00m
Dec 13 03:17:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:02.792 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5f43148c-6790-47ca-9335-d071d9be9a23, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:17:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:02.794 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a06fadec-a495-4fe7-94cc-338d3bde487b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:02.794 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23 namespace which is not needed anymore#033[00m
Dec 13 03:17:02 np0005558241 nova_compute[248510]: 2025-12-13 08:17:02.795 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:02 np0005558241 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Dec 13 03:17:02 np0005558241 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Consumed 15.029s CPU time.
Dec 13 03:17:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 258 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 6.4 MiB/s rd, 7.3 MiB/s wr, 106 op/s
Dec 13 03:17:02 np0005558241 systemd-machined[210538]: Machine qemu-17-instance-0000000f terminated.
Dec 13 03:17:02 np0005558241 nova_compute[248510]: 2025-12-13 08:17:02.936 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:02 np0005558241 nova_compute[248510]: 2025-12-13 08:17:02.942 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:02 np0005558241 nova_compute[248510]: 2025-12-13 08:17:02.952 248514 INFO nova.virt.libvirt.driver [-] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Instance destroyed successfully.#033[00m
Dec 13 03:17:02 np0005558241 nova_compute[248510]: 2025-12-13 08:17:02.952 248514 DEBUG nova.objects.instance [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Lazy-loading 'resources' on Instance uuid d7894b40-fff7-4226-9a51-17dc5fe4bb30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:17:02 np0005558241 neutron-haproxy-ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23[269358]: [NOTICE]   (269362) : haproxy version is 2.8.14-c23fe91
Dec 13 03:17:02 np0005558241 neutron-haproxy-ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23[269358]: [NOTICE]   (269362) : path to executable is /usr/sbin/haproxy
Dec 13 03:17:02 np0005558241 neutron-haproxy-ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23[269358]: [WARNING]  (269362) : Exiting Master process...
Dec 13 03:17:02 np0005558241 neutron-haproxy-ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23[269358]: [WARNING]  (269362) : Exiting Master process...
Dec 13 03:17:02 np0005558241 neutron-haproxy-ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23[269358]: [ALERT]    (269362) : Current worker (269364) exited with code 143 (Terminated)
Dec 13 03:17:02 np0005558241 neutron-haproxy-ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23[269358]: [WARNING]  (269362) : All workers exited. Exiting... (0)
Dec 13 03:17:02 np0005558241 systemd[1]: libpod-cde7a50b7b9f9bea6347cfc2be038391b2611ec4644f9c143f821d42884b9f6b.scope: Deactivated successfully.
Dec 13 03:17:02 np0005558241 podman[271154]: 2025-12-13 08:17:02.981184512 +0000 UTC m=+0.061909946 container died cde7a50b7b9f9bea6347cfc2be038391b2611ec4644f9c143f821d42884b9f6b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:17:02 np0005558241 nova_compute[248510]: 2025-12-13 08:17:02.995 248514 DEBUG nova.virt.libvirt.vif [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:15:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1146892662',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1146892662',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-114689266',id=15,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:16:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='081c046ea2ea42ee8e09ed5de2e8db81',ramdisk_id='',reservation_id='r-ukx38l09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-996250676',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-996250676-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:16:12Z,user_data=None,user_id='d06ea0a7e1b040b993e05a34b436154d',uuid=d7894b40-fff7-4226-9a51-17dc5fe4bb30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "address": "fa:16:3e:80:d9:f4", "network": {"id": "5f43148c-6790-47ca-9335-d071d9be9a23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-87276096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "081c046ea2ea42ee8e09ed5de2e8db81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fe9517-3d", "ovs_interfaceid": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:17:02 np0005558241 nova_compute[248510]: 2025-12-13 08:17:02.997 248514 DEBUG nova.network.os_vif_util [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Converting VIF {"id": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "address": "fa:16:3e:80:d9:f4", "network": {"id": "5f43148c-6790-47ca-9335-d071d9be9a23", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-87276096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "081c046ea2ea42ee8e09ed5de2e8db81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fe9517-3d", "ovs_interfaceid": "99fe9517-3d12-4555-a9f8-2237cc2213c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:17:02 np0005558241 nova_compute[248510]: 2025-12-13 08:17:02.998 248514 DEBUG nova.network.os_vif_util [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:80:d9:f4,bridge_name='br-int',has_traffic_filtering=True,id=99fe9517-3d12-4555-a9f8-2237cc2213c5,network=Network(5f43148c-6790-47ca-9335-d071d9be9a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fe9517-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:17:02 np0005558241 nova_compute[248510]: 2025-12-13 08:17:02.998 248514 DEBUG os_vif [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:d9:f4,bridge_name='br-int',has_traffic_filtering=True,id=99fe9517-3d12-4555-a9f8-2237cc2213c5,network=Network(5f43148c-6790-47ca-9335-d071d9be9a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fe9517-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.000 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.001 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99fe9517-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:03 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cde7a50b7b9f9bea6347cfc2be038391b2611ec4644f9c143f821d42884b9f6b-userdata-shm.mount: Deactivated successfully.
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.055 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.057 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.058 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:03 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4d02a48ea041a1c150833fab56b2f50018a0dcdfc19ce4143cada06390b74edf-merged.mount: Deactivated successfully.
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.061 248514 INFO os_vif [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:d9:f4,bridge_name='br-int',has_traffic_filtering=True,id=99fe9517-3d12-4555-a9f8-2237cc2213c5,network=Network(5f43148c-6790-47ca-9335-d071d9be9a23),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fe9517-3d')#033[00m
Dec 13 03:17:03 np0005558241 podman[271154]: 2025-12-13 08:17:03.070600352 +0000 UTC m=+0.151325786 container cleanup cde7a50b7b9f9bea6347cfc2be038391b2611ec4644f9c143f821d42884b9f6b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:17:03 np0005558241 systemd[1]: libpod-conmon-cde7a50b7b9f9bea6347cfc2be038391b2611ec4644f9c143f821d42884b9f6b.scope: Deactivated successfully.
Dec 13 03:17:03 np0005558241 podman[271207]: 2025-12-13 08:17:03.14673879 +0000 UTC m=+0.052838404 container remove cde7a50b7b9f9bea6347cfc2be038391b2611ec4644f9c143f821d42884b9f6b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 03:17:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:03.153 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[04fad9dd-8077-45b4-af7f-3d58d5932b11]: (4, ('Sat Dec 13 08:17:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23 (cde7a50b7b9f9bea6347cfc2be038391b2611ec4644f9c143f821d42884b9f6b)\ncde7a50b7b9f9bea6347cfc2be038391b2611ec4644f9c143f821d42884b9f6b\nSat Dec 13 08:17:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23 (cde7a50b7b9f9bea6347cfc2be038391b2611ec4644f9c143f821d42884b9f6b)\ncde7a50b7b9f9bea6347cfc2be038391b2611ec4644f9c143f821d42884b9f6b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:03.155 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[66803501-dd02-4587-a7eb-f7320e164293]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:03.158 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f43148c-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.160 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:03 np0005558241 kernel: tap5f43148c-60: left promiscuous mode
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.177 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:03 np0005558241 podman[271208]: 2025-12-13 08:17:03.178927563 +0000 UTC m=+0.082992237 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 03:17:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:03.180 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5197e2b3-eafa-4fea-8eb4-7999b07a00e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:03 np0005558241 podman[271210]: 2025-12-13 08:17:03.191258456 +0000 UTC m=+0.086778519 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 03:17:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:03.197 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b77b09-cc6e-4a12-b67b-eae585ace40b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:03.199 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4baffee2-8665-45a7-8ecf-0c152a8c01f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:03 np0005558241 podman[271190]: 2025-12-13 08:17:03.201188411 +0000 UTC m=+0.108286180 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:17:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:03.218 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f81ee5b9-f683-4c70-a0d3-a5becf31fd45]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634597, 'reachable_time': 37002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271286, 'error': None, 'target': 'ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:03.223 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5f43148c-6790-47ca-9335-d071d9be9a23 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:17:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:03.223 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[f540e900-c6ce-40d9-9e34-1e99856f749e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:03 np0005558241 systemd[1]: run-netns-ovnmeta\x2d5f43148c\x2d6790\x2d47ca\x2d9335\x2dd071d9be9a23.mount: Deactivated successfully.
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.391 248514 INFO nova.virt.libvirt.driver [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Deleting instance files /var/lib/nova/instances/d7894b40-fff7-4226-9a51-17dc5fe4bb30_del#033[00m
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.392 248514 INFO nova.virt.libvirt.driver [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Deletion of /var/lib/nova/instances/d7894b40-fff7-4226-9a51-17dc5fe4bb30_del complete#033[00m
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.440 248514 DEBUG nova.compute.manager [req-88cf892f-a8cc-4c01-ac91-b840a59101be req-85c5319a-6835-49dc-9033-d609ed478a23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Received event network-vif-unplugged-99fe9517-3d12-4555-a9f8-2237cc2213c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.440 248514 DEBUG oslo_concurrency.lockutils [req-88cf892f-a8cc-4c01-ac91-b840a59101be req-85c5319a-6835-49dc-9033-d609ed478a23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.440 248514 DEBUG oslo_concurrency.lockutils [req-88cf892f-a8cc-4c01-ac91-b840a59101be req-85c5319a-6835-49dc-9033-d609ed478a23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.441 248514 DEBUG oslo_concurrency.lockutils [req-88cf892f-a8cc-4c01-ac91-b840a59101be req-85c5319a-6835-49dc-9033-d609ed478a23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.441 248514 DEBUG nova.compute.manager [req-88cf892f-a8cc-4c01-ac91-b840a59101be req-85c5319a-6835-49dc-9033-d609ed478a23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] No waiting events found dispatching network-vif-unplugged-99fe9517-3d12-4555-a9f8-2237cc2213c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.441 248514 DEBUG nova.compute.manager [req-88cf892f-a8cc-4c01-ac91-b840a59101be req-85c5319a-6835-49dc-9033-d609ed478a23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Received event network-vif-unplugged-99fe9517-3d12-4555-a9f8-2237cc2213c5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.487 248514 INFO nova.virt.libvirt.driver [None req-91d8770c-b43c-4bce-ae39-398ceec761d9 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Snapshot image upload complete#033[00m
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.488 248514 INFO nova.compute.manager [None req-91d8770c-b43c-4bce-ae39-398ceec761d9 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Took 7.68 seconds to snapshot the instance on the hypervisor.#033[00m
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.503 248514 INFO nova.compute.manager [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.504 248514 DEBUG oslo.service.loopingcall [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.504 248514 DEBUG nova.compute.manager [-] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.505 248514 DEBUG nova.network.neutron [-] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:17:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:17:03 np0005558241 nova_compute[248510]: 2025-12-13 08:17:03.726 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 321 active+clean; 234 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 7.7 MiB/s rd, 7.6 MiB/s wr, 201 op/s
Dec 13 03:17:05 np0005558241 nova_compute[248510]: 2025-12-13 08:17:05.181 248514 DEBUG oslo_concurrency.lockutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:17:05 np0005558241 nova_compute[248510]: 2025-12-13 08:17:05.181 248514 DEBUG oslo_concurrency.lockutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:17:05 np0005558241 nova_compute[248510]: 2025-12-13 08:17:05.348 248514 DEBUG nova.compute.manager [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:17:05 np0005558241 nova_compute[248510]: 2025-12-13 08:17:05.374 248514 DEBUG nova.network.neutron [-] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:17:05 np0005558241 nova_compute[248510]: 2025-12-13 08:17:05.414 248514 INFO nova.compute.manager [-] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Took 1.91 seconds to deallocate network for instance.
Dec 13 03:17:05 np0005558241 nova_compute[248510]: 2025-12-13 08:17:05.465 248514 DEBUG oslo_concurrency.lockutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:17:05 np0005558241 nova_compute[248510]: 2025-12-13 08:17:05.465 248514 DEBUG oslo_concurrency.lockutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:17:05 np0005558241 nova_compute[248510]: 2025-12-13 08:17:05.477 248514 DEBUG nova.virt.hardware [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:17:05 np0005558241 nova_compute[248510]: 2025-12-13 08:17:05.477 248514 INFO nova.compute.claims [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:17:05 np0005558241 nova_compute[248510]: 2025-12-13 08:17:05.525 248514 DEBUG oslo_concurrency.lockutils [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:17:06 np0005558241 nova_compute[248510]: 2025-12-13 08:17:06.498 248514 DEBUG oslo_concurrency.processutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:17:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 234 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 153 op/s
Dec 13 03:17:07 np0005558241 nova_compute[248510]: 2025-12-13 08:17:07.052 248514 DEBUG nova.compute.manager [req-b97c853c-377e-4214-b2ff-49af97d19db9 req-05baac53-7775-464a-890d-f54428bf21d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Received event network-vif-deleted-99fe9517-3d12-4555-a9f8-2237cc2213c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:17:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:17:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/694871116' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:17:07 np0005558241 nova_compute[248510]: 2025-12-13 08:17:07.125 248514 DEBUG oslo_concurrency.processutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:17:07 np0005558241 nova_compute[248510]: 2025-12-13 08:17:07.131 248514 DEBUG nova.compute.provider_tree [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:17:07 np0005558241 nova_compute[248510]: 2025-12-13 08:17:07.150 248514 DEBUG nova.scheduler.client.report [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:17:07 np0005558241 nova_compute[248510]: 2025-12-13 08:17:07.168 248514 DEBUG nova.compute.manager [req-c87d2578-bbc8-46d3-af2b-a687a183d895 req-457b1efe-ce1c-43c2-a1b6-28fa61db622d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Received event network-vif-plugged-99fe9517-3d12-4555-a9f8-2237cc2213c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:17:07 np0005558241 nova_compute[248510]: 2025-12-13 08:17:07.169 248514 DEBUG oslo_concurrency.lockutils [req-c87d2578-bbc8-46d3-af2b-a687a183d895 req-457b1efe-ce1c-43c2-a1b6-28fa61db622d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:17:07 np0005558241 nova_compute[248510]: 2025-12-13 08:17:07.169 248514 DEBUG oslo_concurrency.lockutils [req-c87d2578-bbc8-46d3-af2b-a687a183d895 req-457b1efe-ce1c-43c2-a1b6-28fa61db622d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:17:07 np0005558241 nova_compute[248510]: 2025-12-13 08:17:07.170 248514 DEBUG oslo_concurrency.lockutils [req-c87d2578-bbc8-46d3-af2b-a687a183d895 req-457b1efe-ce1c-43c2-a1b6-28fa61db622d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:17:07 np0005558241 nova_compute[248510]: 2025-12-13 08:17:07.170 248514 DEBUG nova.compute.manager [req-c87d2578-bbc8-46d3-af2b-a687a183d895 req-457b1efe-ce1c-43c2-a1b6-28fa61db622d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] No waiting events found dispatching network-vif-plugged-99fe9517-3d12-4555-a9f8-2237cc2213c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:17:07 np0005558241 nova_compute[248510]: 2025-12-13 08:17:07.170 248514 WARNING nova.compute.manager [req-c87d2578-bbc8-46d3-af2b-a687a183d895 req-457b1efe-ce1c-43c2-a1b6-28fa61db622d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Received unexpected event network-vif-plugged-99fe9517-3d12-4555-a9f8-2237cc2213c5 for instance with vm_state deleted and task_state None.
Dec 13 03:17:07 np0005558241 nova_compute[248510]: 2025-12-13 08:17:07.186 248514 DEBUG oslo_concurrency.lockutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:17:07 np0005558241 nova_compute[248510]: 2025-12-13 08:17:07.187 248514 DEBUG nova.compute.manager [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:17:07 np0005558241 nova_compute[248510]: 2025-12-13 08:17:07.192 248514 DEBUG oslo_concurrency.lockutils [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:17:07 np0005558241 nova_compute[248510]: 2025-12-13 08:17:07.286 248514 DEBUG nova.compute.manager [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:17:07 np0005558241 nova_compute[248510]: 2025-12-13 08:17:07.286 248514 DEBUG nova.network.neutron [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:17:07 np0005558241 nova_compute[248510]: 2025-12-13 08:17:07.459 248514 INFO nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:17:07 np0005558241 nova_compute[248510]: 2025-12-13 08:17:07.473 248514 DEBUG oslo_concurrency.processutils [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:17:07 np0005558241 nova_compute[248510]: 2025-12-13 08:17:07.502 248514 DEBUG nova.compute.manager [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:17:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:17:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1634512689' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.051 248514 DEBUG oslo_concurrency.processutils [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.056 248514 DEBUG nova.compute.provider_tree [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.059 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.228 248514 DEBUG nova.scheduler.client.report [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.317 248514 DEBUG oslo_concurrency.lockutils [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.323 248514 DEBUG nova.compute.manager [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.324 248514 DEBUG nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.324 248514 INFO nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Creating image(s)
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.350 248514 DEBUG nova.storage.rbd_utils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.376 248514 DEBUG nova.storage.rbd_utils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.402 248514 DEBUG nova.storage.rbd_utils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.407 248514 DEBUG oslo_concurrency.processutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.439 248514 DEBUG nova.policy [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0d3578a9aa9a4f4facf4009a71564a31', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.444 248514 INFO nova.scheduler.client.report [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Deleted allocations for instance d7894b40-fff7-4226-9a51-17dc5fe4bb30
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.484 248514 DEBUG oslo_concurrency.processutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.485 248514 DEBUG oslo_concurrency.lockutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.486 248514 DEBUG oslo_concurrency.lockutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.487 248514 DEBUG oslo_concurrency.lockutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.510 248514 DEBUG nova.storage.rbd_utils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.514 248514 DEBUG oslo_concurrency.processutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 a14d8b88-7aec-468f-a550-881364e4d95e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.713 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:17:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:17:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Dec 13 03:17:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Dec 13 03:17:08 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.806 248514 DEBUG oslo_concurrency.processutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 a14d8b88-7aec-468f-a550-881364e4d95e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:17:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1530: 321 pgs: 321 active+clean; 200 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 112 op/s
Dec 13 03:17:08 np0005558241 nova_compute[248510]: 2025-12-13 08:17:08.886 248514 DEBUG nova.storage.rbd_utils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] resizing rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.003 248514 DEBUG oslo_concurrency.lockutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "3fffafca-321d-4611-8940-da963b356ca1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.003 248514 DEBUG oslo_concurrency.lockutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.011 248514 DEBUG nova.objects.instance [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'migration_context' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.044 248514 DEBUG oslo_concurrency.lockutils [None req-48d87259-61a6-42b7-b2d3-9dcec780845f d06ea0a7e1b040b993e05a34b436154d 081c046ea2ea42ee8e09ed5de2e8db81 - - default default] Lock "d7894b40-fff7-4226-9a51-17dc5fe4bb30" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.047 248514 DEBUG nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.048 248514 DEBUG nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Ensure instance console log exists: /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.049 248514 DEBUG oslo_concurrency.lockutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.049 248514 DEBUG oslo_concurrency.lockutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.049 248514 DEBUG oslo_concurrency.lockutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.058 248514 DEBUG nova.compute.manager [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.167 248514 DEBUG oslo_concurrency.lockutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.168 248514 DEBUG oslo_concurrency.lockutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.179 248514 DEBUG nova.virt.hardware [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.180 248514 INFO nova.compute.claims [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:17:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:17:09
Dec 13 03:17:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:17:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:17:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'volumes', '.rgw.root', '.mgr', 'backups', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control']
Dec 13 03:17:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.347 248514 DEBUG oslo_concurrency.processutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:17:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:17:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2350754642' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:17:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.912 248514 DEBUG oslo_concurrency.processutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.920 248514 DEBUG nova.compute.provider_tree [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.946 248514 DEBUG nova.scheduler.client.report [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:17:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.979 248514 DEBUG oslo_concurrency.lockutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:17:09 np0005558241 nova_compute[248510]: 2025-12-13 08:17:09.980 248514 DEBUG nova.compute.manager [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:17:10 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Dec 13 03:17:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:17:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:17:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:17:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:17:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:17:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:17:10 np0005558241 nova_compute[248510]: 2025-12-13 08:17:10.207 248514 DEBUG nova.compute.manager [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:17:10 np0005558241 nova_compute[248510]: 2025-12-13 08:17:10.208 248514 DEBUG nova.network.neutron [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:17:10 np0005558241 nova_compute[248510]: 2025-12-13 08:17:10.261 248514 INFO nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:17:10 np0005558241 nova_compute[248510]: 2025-12-13 08:17:10.265 248514 DEBUG nova.network.neutron [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Successfully created port: 7f8a109e-e262-4847-8430-ac7944dace5c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:17:10 np0005558241 nova_compute[248510]: 2025-12-13 08:17:10.358 248514 DEBUG nova.compute.manager [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:17:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:17:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:17:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:17:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:17:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:17:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:17:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:17:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:17:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:17:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:17:10 np0005558241 nova_compute[248510]: 2025-12-13 08:17:10.785 248514 DEBUG nova.policy [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0d3578a9aa9a4f4facf4009a71564a31', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:17:10 np0005558241 nova_compute[248510]: 2025-12-13 08:17:10.799 248514 DEBUG nova.compute.manager [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:17:10 np0005558241 nova_compute[248510]: 2025-12-13 08:17:10.801 248514 DEBUG nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:17:10 np0005558241 nova_compute[248510]: 2025-12-13 08:17:10.802 248514 INFO nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Creating image(s)#033[00m
Dec 13 03:17:10 np0005558241 nova_compute[248510]: 2025-12-13 08:17:10.826 248514 DEBUG nova.storage.rbd_utils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 3fffafca-321d-4611-8940-da963b356ca1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:17:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 229 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 132 op/s
Dec 13 03:17:10 np0005558241 nova_compute[248510]: 2025-12-13 08:17:10.858 248514 DEBUG nova.storage.rbd_utils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 3fffafca-321d-4611-8940-da963b356ca1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:17:10 np0005558241 nova_compute[248510]: 2025-12-13 08:17:10.890 248514 DEBUG nova.storage.rbd_utils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 3fffafca-321d-4611-8940-da963b356ca1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:17:10 np0005558241 nova_compute[248510]: 2025-12-13 08:17:10.894 248514 DEBUG oslo_concurrency.processutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:17:10 np0005558241 nova_compute[248510]: 2025-12-13 08:17:10.984 248514 DEBUG oslo_concurrency.processutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:10 np0005558241 nova_compute[248510]: 2025-12-13 08:17:10.986 248514 DEBUG oslo_concurrency.lockutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:10 np0005558241 nova_compute[248510]: 2025-12-13 08:17:10.987 248514 DEBUG oslo_concurrency.lockutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:10 np0005558241 nova_compute[248510]: 2025-12-13 08:17:10.988 248514 DEBUG oslo_concurrency.lockutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:17:11 np0005558241 nova_compute[248510]: 2025-12-13 08:17:11.087 248514 DEBUG nova.storage.rbd_utils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 3fffafca-321d-4611-8940-da963b356ca1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:17:11 np0005558241 nova_compute[248510]: 2025-12-13 08:17:11.093 248514 DEBUG oslo_concurrency.processutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 3fffafca-321d-4611-8940-da963b356ca1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:17:11 np0005558241 nova_compute[248510]: 2025-12-13 08:17:11.427 248514 DEBUG nova.network.neutron [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Successfully updated port: 7f8a109e-e262-4847-8430-ac7944dace5c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:17:11 np0005558241 nova_compute[248510]: 2025-12-13 08:17:11.452 248514 DEBUG oslo_concurrency.lockutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "refresh_cache-a14d8b88-7aec-468f-a550-881364e4d95e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:17:11 np0005558241 nova_compute[248510]: 2025-12-13 08:17:11.453 248514 DEBUG oslo_concurrency.lockutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquired lock "refresh_cache-a14d8b88-7aec-468f-a550-881364e4d95e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:17:11 np0005558241 nova_compute[248510]: 2025-12-13 08:17:11.453 248514 DEBUG nova.network.neutron [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:17:11 np0005558241 nova_compute[248510]: 2025-12-13 08:17:11.884 248514 DEBUG oslo_concurrency.processutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 3fffafca-321d-4611-8940-da963b356ca1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.792s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:11 np0005558241 nova_compute[248510]: 2025-12-13 08:17:11.951 248514 DEBUG nova.storage.rbd_utils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] resizing rbd image 3fffafca-321d-4611-8940-da963b356ca1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:17:12 np0005558241 nova_compute[248510]: 2025-12-13 08:17:12.144 248514 DEBUG nova.network.neutron [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:17:12 np0005558241 nova_compute[248510]: 2025-12-13 08:17:12.193 248514 DEBUG nova.objects.instance [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'migration_context' on Instance uuid 3fffafca-321d-4611-8940-da963b356ca1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:17:12 np0005558241 nova_compute[248510]: 2025-12-13 08:17:12.214 248514 DEBUG nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:17:12 np0005558241 nova_compute[248510]: 2025-12-13 08:17:12.215 248514 DEBUG nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Ensure instance console log exists: /var/lib/nova/instances/3fffafca-321d-4611-8940-da963b356ca1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:17:12 np0005558241 nova_compute[248510]: 2025-12-13 08:17:12.215 248514 DEBUG oslo_concurrency.lockutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:12 np0005558241 nova_compute[248510]: 2025-12-13 08:17:12.215 248514 DEBUG oslo_concurrency.lockutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:12 np0005558241 nova_compute[248510]: 2025-12-13 08:17:12.216 248514 DEBUG oslo_concurrency.lockutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:17:12 np0005558241 nova_compute[248510]: 2025-12-13 08:17:12.333 248514 DEBUG nova.network.neutron [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Successfully created port: c85d366f-1e88-41c4-9425-0e5f62b77e99 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:17:12 np0005558241 nova_compute[248510]: 2025-12-13 08:17:12.756 248514 DEBUG nova.compute.manager [req-776d5b96-dbe5-4133-b7fc-5bab10126108 req-14eff970-52e8-44c3-9f80-e61b8c5f7099 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received event network-changed-7f8a109e-e262-4847-8430-ac7944dace5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:17:12 np0005558241 nova_compute[248510]: 2025-12-13 08:17:12.756 248514 DEBUG nova.compute.manager [req-776d5b96-dbe5-4133-b7fc-5bab10126108 req-14eff970-52e8-44c3-9f80-e61b8c5f7099 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Refreshing instance network info cache due to event network-changed-7f8a109e-e262-4847-8430-ac7944dace5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:17:12 np0005558241 nova_compute[248510]: 2025-12-13 08:17:12.757 248514 DEBUG oslo_concurrency.lockutils [req-776d5b96-dbe5-4133-b7fc-5bab10126108 req-14eff970-52e8-44c3-9f80-e61b8c5f7099 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-a14d8b88-7aec-468f-a550-881364e4d95e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:17:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 229 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Dec 13 03:17:12 np0005558241 nova_compute[248510]: 2025-12-13 08:17:12.982 248514 DEBUG nova.network.neutron [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Updating instance_info_cache with network_info: [{"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.062 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.140 248514 DEBUG oslo_concurrency.lockutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Releasing lock "refresh_cache-a14d8b88-7aec-468f-a550-881364e4d95e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.141 248514 DEBUG nova.compute.manager [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance network_info: |[{"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.142 248514 DEBUG oslo_concurrency.lockutils [req-776d5b96-dbe5-4133-b7fc-5bab10126108 req-14eff970-52e8-44c3-9f80-e61b8c5f7099 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-a14d8b88-7aec-468f-a550-881364e4d95e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.142 248514 DEBUG nova.network.neutron [req-776d5b96-dbe5-4133-b7fc-5bab10126108 req-14eff970-52e8-44c3-9f80-e61b8c5f7099 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Refreshing network info cache for port 7f8a109e-e262-4847-8430-ac7944dace5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.145 248514 DEBUG nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Start _get_guest_xml network_info=[{"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.149 248514 WARNING nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.158 248514 DEBUG nova.virt.libvirt.host [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.160 248514 DEBUG nova.virt.libvirt.host [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.170 248514 DEBUG nova.virt.libvirt.host [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.171 248514 DEBUG nova.virt.libvirt.host [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.171 248514 DEBUG nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.172 248514 DEBUG nova.virt.hardware [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.172 248514 DEBUG nova.virt.hardware [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.172 248514 DEBUG nova.virt.hardware [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.173 248514 DEBUG nova.virt.hardware [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.173 248514 DEBUG nova.virt.hardware [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.173 248514 DEBUG nova.virt.hardware [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.174 248514 DEBUG nova.virt.hardware [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.174 248514 DEBUG nova.virt.hardware [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.174 248514 DEBUG nova.virt.hardware [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.174 248514 DEBUG nova.virt.hardware [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.175 248514 DEBUG nova.virt.hardware [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.178 248514 DEBUG oslo_concurrency.processutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:17:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:17:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1770217023' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.716 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.729 248514 DEBUG oslo_concurrency.processutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.752 248514 DEBUG nova.storage.rbd_utils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:17:13 np0005558241 nova_compute[248510]: 2025-12-13 08:17:13.757 248514 DEBUG oslo_concurrency.processutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:17:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:17:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/420939762' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.402 248514 DEBUG oslo_concurrency.processutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.405 248514 DEBUG nova.virt.libvirt.vif [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:17:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-247332834',display_name='tempest-ServersAdminTestJSON-server-247332834',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-247332834',id=17,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-88vex2g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTestJSON-10791
29122-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:17:07Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=a14d8b88-7aec-468f-a550-881364e4d95e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.405 248514 DEBUG nova.network.os_vif_util [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.406 248514 DEBUG nova.network.os_vif_util [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.407 248514 DEBUG nova.objects.instance [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'pci_devices' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.607 248514 DEBUG nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:17:14 np0005558241 nova_compute[248510]:  <uuid>a14d8b88-7aec-468f-a550-881364e4d95e</uuid>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:  <name>instance-00000011</name>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersAdminTestJSON-server-247332834</nova:name>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:17:13</nova:creationTime>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:17:14 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:        <nova:user uuid="0d3578a9aa9a4f4facf4009a71564a31">tempest-ServersAdminTestJSON-1079129122-project-member</nova:user>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:        <nova:project uuid="1f6b42f9712f4afea6a07d08373b56ca">tempest-ServersAdminTestJSON-1079129122</nova:project>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:        <nova:port uuid="7f8a109e-e262-4847-8430-ac7944dace5c">
Dec 13 03:17:14 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <entry name="serial">a14d8b88-7aec-468f-a550-881364e4d95e</entry>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <entry name="uuid">a14d8b88-7aec-468f-a550-881364e4d95e</entry>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/a14d8b88-7aec-468f-a550-881364e4d95e_disk">
Dec 13 03:17:14 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:17:14 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/a14d8b88-7aec-468f-a550-881364e4d95e_disk.config">
Dec 13 03:17:14 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:17:14 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:e6:0a:44"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <target dev="tap7f8a109e-e2"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/console.log" append="off"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:17:14 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:17:14 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:17:14 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:17:14 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.609 248514 DEBUG nova.compute.manager [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Preparing to wait for external event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.610 248514 DEBUG oslo_concurrency.lockutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.610 248514 DEBUG oslo_concurrency.lockutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.611 248514 DEBUG oslo_concurrency.lockutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.611 248514 DEBUG nova.virt.libvirt.vif [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:17:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-247332834',display_name='tempest-ServersAdminTestJSON-server-247332834',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-247332834',id=17,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-88vex2g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTest
JSON-1079129122-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:17:07Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=a14d8b88-7aec-468f-a550-881364e4d95e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.612 248514 DEBUG nova.network.os_vif_util [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.613 248514 DEBUG nova.network.os_vif_util [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.613 248514 DEBUG os_vif [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.614 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.614 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.615 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.620 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.620 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f8a109e-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.621 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7f8a109e-e2, col_values=(('external_ids', {'iface-id': '7f8a109e-e262-4847-8430-ac7944dace5c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:0a:44', 'vm-uuid': 'a14d8b88-7aec-468f-a550-881364e4d95e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.624 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:14 np0005558241 NetworkManager[50376]: <info>  [1765613834.6253] manager: (tap7f8a109e-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.627 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.635 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.639 248514 INFO os_vif [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2')#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.737 248514 DEBUG oslo_concurrency.lockutils [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Acquiring lock "5e65e6d8-81fc-412d-9d08-d87023661fda" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.738 248514 DEBUG oslo_concurrency.lockutils [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Lock "5e65e6d8-81fc-412d-9d08-d87023661fda" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.738 248514 DEBUG oslo_concurrency.lockutils [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Acquiring lock "5e65e6d8-81fc-412d-9d08-d87023661fda-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.738 248514 DEBUG oslo_concurrency.lockutils [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Lock "5e65e6d8-81fc-412d-9d08-d87023661fda-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.738 248514 DEBUG oslo_concurrency.lockutils [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Lock "5e65e6d8-81fc-412d-9d08-d87023661fda-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.740 248514 INFO nova.compute.manager [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Terminating instance#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.741 248514 DEBUG nova.compute.manager [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:17:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 213 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 5.3 MiB/s wr, 136 op/s
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.930 248514 DEBUG nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.930 248514 DEBUG nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.931 248514 DEBUG nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] No VIF found with MAC fa:16:3e:e6:0a:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.931 248514 INFO nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Using config drive#033[00m
Dec 13 03:17:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:17:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3606603496' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:17:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:17:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3606603496' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:17:14 np0005558241 nova_compute[248510]: 2025-12-13 08:17:14.954 248514 DEBUG nova.storage.rbd_utils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:17:14 np0005558241 kernel: tap44171d97-dd (unregistering): left promiscuous mode
Dec 13 03:17:14 np0005558241 NetworkManager[50376]: <info>  [1765613834.9981] device (tap44171d97-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.000 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:15 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:15Z|00062|binding|INFO|Releasing lport 44171d97-ddef-4b41-9740-19635fa87c3a from this chassis (sb_readonly=0)
Dec 13 03:17:15 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:15Z|00063|binding|INFO|Setting lport 44171d97-ddef-4b41-9740-19635fa87c3a down in Southbound
Dec 13 03:17:15 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:15Z|00064|binding|INFO|Removing iface tap44171d97-dd ovn-installed in OVS
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.009 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.012 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.025 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.050 248514 DEBUG nova.network.neutron [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Successfully updated port: c85d366f-1e88-41c4-9425-0e5f62b77e99 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:17:15 np0005558241 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Deactivated successfully.
Dec 13 03:17:15 np0005558241 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Consumed 16.304s CPU time.
Dec 13 03:17:15 np0005558241 systemd-machined[210538]: Machine qemu-18-instance-00000010 terminated.
Dec 13 03:17:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:15.086 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:8d:5c 10.100.0.6'], port_security=['fa:16:3e:52:8d:5c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5e65e6d8-81fc-412d-9d08-d87023661fda', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f47cfa0b-b3fe-477a-a9e8-75d92379981e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '487f5e19eb9e49e3bbae1c23af285690', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f4736b95-4339-401b-b0f3-e7f2b1cf7256', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51d7514f-435e-4096-96dd-36769afe18f3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=44171d97-ddef-4b41-9740-19635fa87c3a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:17:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:15.087 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 44171d97-ddef-4b41-9740-19635fa87c3a in datapath f47cfa0b-b3fe-477a-a9e8-75d92379981e unbound from our chassis#033[00m
Dec 13 03:17:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:15.089 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f47cfa0b-b3fe-477a-a9e8-75d92379981e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:17:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:15.090 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db9186e9-af76-47c3-a58c-7bbbeb243d49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:15.091 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e namespace which is not needed anymore#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.179 248514 INFO nova.virt.libvirt.driver [-] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Instance destroyed successfully.#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.179 248514 DEBUG nova.objects.instance [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Lazy-loading 'resources' on Instance uuid 5e65e6d8-81fc-412d-9d08-d87023661fda obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.245 248514 DEBUG oslo_concurrency.lockutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "refresh_cache-3fffafca-321d-4611-8940-da963b356ca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.246 248514 DEBUG oslo_concurrency.lockutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquired lock "refresh_cache-3fffafca-321d-4611-8940-da963b356ca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.246 248514 DEBUG nova.network.neutron [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:17:15 np0005558241 neutron-haproxy-ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e[269782]: [NOTICE]   (269786) : haproxy version is 2.8.14-c23fe91
Dec 13 03:17:15 np0005558241 neutron-haproxy-ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e[269782]: [NOTICE]   (269786) : path to executable is /usr/sbin/haproxy
Dec 13 03:17:15 np0005558241 neutron-haproxy-ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e[269782]: [WARNING]  (269786) : Exiting Master process...
Dec 13 03:17:15 np0005558241 neutron-haproxy-ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e[269782]: [WARNING]  (269786) : Exiting Master process...
Dec 13 03:17:15 np0005558241 neutron-haproxy-ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e[269782]: [ALERT]    (269786) : Current worker (269795) exited with code 143 (Terminated)
Dec 13 03:17:15 np0005558241 neutron-haproxy-ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e[269782]: [WARNING]  (269786) : All workers exited. Exiting... (0)
Dec 13 03:17:15 np0005558241 systemd[1]: libpod-7466634db03049735d68cc46a469dbb365935f90bb962bd77a7ba4f328b68647.scope: Deactivated successfully.
Dec 13 03:17:15 np0005558241 podman[271806]: 2025-12-13 08:17:15.270469337 +0000 UTC m=+0.080991636 container died 7466634db03049735d68cc46a469dbb365935f90bb962bd77a7ba4f328b68647 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.320 248514 DEBUG nova.compute.manager [req-d7dd0ac5-bd72-4d95-8f15-9d8b7cfc3805 req-8bd164a7-5dc6-4d35-8466-ff3f7ee69c45 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Received event network-changed-c85d366f-1e88-41c4-9425-0e5f62b77e99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.321 248514 DEBUG nova.compute.manager [req-d7dd0ac5-bd72-4d95-8f15-9d8b7cfc3805 req-8bd164a7-5dc6-4d35-8466-ff3f7ee69c45 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Refreshing instance network info cache due to event network-changed-c85d366f-1e88-41c4-9425-0e5f62b77e99. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.321 248514 DEBUG oslo_concurrency.lockutils [req-d7dd0ac5-bd72-4d95-8f15-9d8b7cfc3805 req-8bd164a7-5dc6-4d35-8466-ff3f7ee69c45 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3fffafca-321d-4611-8940-da963b356ca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.335 248514 DEBUG nova.virt.libvirt.vif [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:16:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1254373773',display_name='tempest-ImagesOneServerTestJSON-server-1254373773',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1254373773',id=16,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:16:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='487f5e19eb9e49e3bbae1c23af285690',ramdisk_id='',reservation_id='r-0abwyfh9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-1469548079',owner_user_name='tempest-ImagesOneServerTestJSON-1469548079-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:17:03Z,user_data=None,user_id='ec32d06bed00468da4407dd9585407a0',uuid=5e65e6d8-81fc-412d-9d08-d87023661fda,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "44171d97-ddef-4b41-9740-19635fa87c3a", "address": "fa:16:3e:52:8d:5c", "network": {"id": "f47cfa0b-b3fe-477a-a9e8-75d92379981e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1799598885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "487f5e19eb9e49e3bbae1c23af285690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44171d97-dd", "ovs_interfaceid": "44171d97-ddef-4b41-9740-19635fa87c3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.336 248514 DEBUG nova.network.os_vif_util [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Converting VIF {"id": "44171d97-ddef-4b41-9740-19635fa87c3a", "address": "fa:16:3e:52:8d:5c", "network": {"id": "f47cfa0b-b3fe-477a-a9e8-75d92379981e", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1799598885-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "487f5e19eb9e49e3bbae1c23af285690", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44171d97-dd", "ovs_interfaceid": "44171d97-ddef-4b41-9740-19635fa87c3a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.336 248514 DEBUG nova.network.os_vif_util [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:8d:5c,bridge_name='br-int',has_traffic_filtering=True,id=44171d97-ddef-4b41-9740-19635fa87c3a,network=Network(f47cfa0b-b3fe-477a-a9e8-75d92379981e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44171d97-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.337 248514 DEBUG os_vif [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:8d:5c,bridge_name='br-int',has_traffic_filtering=True,id=44171d97-ddef-4b41-9740-19635fa87c3a,network=Network(f47cfa0b-b3fe-477a-a9e8-75d92379981e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44171d97-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.339 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.339 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44171d97-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.346 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.351 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.354 248514 INFO os_vif [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:8d:5c,bridge_name='br-int',has_traffic_filtering=True,id=44171d97-ddef-4b41-9740-19635fa87c3a,network=Network(f47cfa0b-b3fe-477a-a9e8-75d92379981e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44171d97-dd')#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.369 248514 DEBUG nova.network.neutron [req-776d5b96-dbe5-4133-b7fc-5bab10126108 req-14eff970-52e8-44c3-9f80-e61b8c5f7099 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Updated VIF entry in instance network info cache for port 7f8a109e-e262-4847-8430-ac7944dace5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.369 248514 DEBUG nova.network.neutron [req-776d5b96-dbe5-4133-b7fc-5bab10126108 req-14eff970-52e8-44c3-9f80-e61b8c5f7099 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Updating instance_info_cache with network_info: [{"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:17:15 np0005558241 nova_compute[248510]: 2025-12-13 08:17:15.516 248514 DEBUG oslo_concurrency.lockutils [req-776d5b96-dbe5-4133-b7fc-5bab10126108 req-14eff970-52e8-44c3-9f80-e61b8c5f7099 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-a14d8b88-7aec-468f-a550-881364e4d95e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:17:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7466634db03049735d68cc46a469dbb365935f90bb962bd77a7ba4f328b68647-userdata-shm.mount: Deactivated successfully.
Dec 13 03:17:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d4649857adb74ed91cdd90063932457465b43f92da005310d777b41a71b47704-merged.mount: Deactivated successfully.
Dec 13 03:17:15 np0005558241 podman[271806]: 2025-12-13 08:17:15.710004342 +0000 UTC m=+0.520526641 container cleanup 7466634db03049735d68cc46a469dbb365935f90bb962bd77a7ba4f328b68647 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 03:17:15 np0005558241 systemd[1]: libpod-conmon-7466634db03049735d68cc46a469dbb365935f90bb962bd77a7ba4f328b68647.scope: Deactivated successfully.
Dec 13 03:17:16 np0005558241 nova_compute[248510]: 2025-12-13 08:17:16.303 248514 DEBUG nova.network.neutron [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:17:16 np0005558241 nova_compute[248510]: 2025-12-13 08:17:16.364 248514 INFO nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Creating config drive at /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/disk.config#033[00m
Dec 13 03:17:16 np0005558241 nova_compute[248510]: 2025-12-13 08:17:16.369 248514 DEBUG oslo_concurrency.processutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5sw0wdaz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:17:16 np0005558241 nova_compute[248510]: 2025-12-13 08:17:16.505 248514 DEBUG oslo_concurrency.processutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5sw0wdaz" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:16 np0005558241 nova_compute[248510]: 2025-12-13 08:17:16.533 248514 DEBUG nova.storage.rbd_utils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:17:16 np0005558241 nova_compute[248510]: 2025-12-13 08:17:16.538 248514 DEBUG oslo_concurrency.processutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/disk.config a14d8b88-7aec-468f-a550-881364e4d95e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:17:16 np0005558241 podman[271858]: 2025-12-13 08:17:16.544237151 +0000 UTC m=+0.807901941 container remove 7466634db03049735d68cc46a469dbb365935f90bb962bd77a7ba4f328b68647 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 03:17:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:16.551 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eae9ff5d-2c74-4922-b5ea-5581d15c4f56]: (4, ('Sat Dec 13 08:17:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e (7466634db03049735d68cc46a469dbb365935f90bb962bd77a7ba4f328b68647)\n7466634db03049735d68cc46a469dbb365935f90bb962bd77a7ba4f328b68647\nSat Dec 13 08:17:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e (7466634db03049735d68cc46a469dbb365935f90bb962bd77a7ba4f328b68647)\n7466634db03049735d68cc46a469dbb365935f90bb962bd77a7ba4f328b68647\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:16.553 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[46c791b6-33ab-4a6b-aa4a-1a5a96017d7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:16.554 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf47cfa0b-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:16 np0005558241 kernel: tapf47cfa0b-b0: left promiscuous mode
Dec 13 03:17:16 np0005558241 nova_compute[248510]: 2025-12-13 08:17:16.573 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:16.579 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb5352e-7723-48d6-853e-44acfd6a8ce8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:16.600 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2870366f-b0f5-4f68-920a-502a15954b64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:16.601 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6cb650-2859-46eb-8896-d01b9e190589]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:16.622 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[861de972-a753-45c7-b8c7-a5958c04b4b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636065, 'reachable_time': 41459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271899, 'error': None, 'target': 'ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:16 np0005558241 systemd[1]: run-netns-ovnmeta\x2df47cfa0b\x2db3fe\x2d477a\x2da9e8\x2d75d92379981e.mount: Deactivated successfully.
Dec 13 03:17:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:16.625 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f47cfa0b-b3fe-477a-a9e8-75d92379981e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:17:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:16.626 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[123d0487-88a3-432f-8940-ba44d7ceb5c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:16 np0005558241 nova_compute[248510]: 2025-12-13 08:17:16.712 248514 DEBUG nova.compute.manager [req-137f30d6-e47d-4c9d-ab2d-81cd1f5428be req-6357566b-3550-42f4-addc-1661cd7facf9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Received event network-vif-unplugged-44171d97-ddef-4b41-9740-19635fa87c3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:17:16 np0005558241 nova_compute[248510]: 2025-12-13 08:17:16.712 248514 DEBUG oslo_concurrency.lockutils [req-137f30d6-e47d-4c9d-ab2d-81cd1f5428be req-6357566b-3550-42f4-addc-1661cd7facf9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5e65e6d8-81fc-412d-9d08-d87023661fda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:16 np0005558241 nova_compute[248510]: 2025-12-13 08:17:16.713 248514 DEBUG oslo_concurrency.lockutils [req-137f30d6-e47d-4c9d-ab2d-81cd1f5428be req-6357566b-3550-42f4-addc-1661cd7facf9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5e65e6d8-81fc-412d-9d08-d87023661fda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:16 np0005558241 nova_compute[248510]: 2025-12-13 08:17:16.713 248514 DEBUG oslo_concurrency.lockutils [req-137f30d6-e47d-4c9d-ab2d-81cd1f5428be req-6357566b-3550-42f4-addc-1661cd7facf9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5e65e6d8-81fc-412d-9d08-d87023661fda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:17:16 np0005558241 nova_compute[248510]: 2025-12-13 08:17:16.713 248514 DEBUG nova.compute.manager [req-137f30d6-e47d-4c9d-ab2d-81cd1f5428be req-6357566b-3550-42f4-addc-1661cd7facf9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] No waiting events found dispatching network-vif-unplugged-44171d97-ddef-4b41-9740-19635fa87c3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:17:16 np0005558241 nova_compute[248510]: 2025-12-13 08:17:16.713 248514 DEBUG nova.compute.manager [req-137f30d6-e47d-4c9d-ab2d-81cd1f5428be req-6357566b-3550-42f4-addc-1661cd7facf9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Received event network-vif-unplugged-44171d97-ddef-4b41-9740-19635fa87c3a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:17:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1535: 321 pgs: 321 active+clean; 213 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 5.3 MiB/s wr, 114 op/s
Dec 13 03:17:17 np0005558241 nova_compute[248510]: 2025-12-13 08:17:17.455 248514 DEBUG oslo_concurrency.processutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/disk.config a14d8b88-7aec-468f-a550-881364e4d95e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.917s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:17 np0005558241 nova_compute[248510]: 2025-12-13 08:17:17.455 248514 INFO nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Deleting local config drive /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/disk.config because it was imported into RBD.#033[00m
Dec 13 03:17:17 np0005558241 kernel: tap7f8a109e-e2: entered promiscuous mode
Dec 13 03:17:17 np0005558241 NetworkManager[50376]: <info>  [1765613837.5246] manager: (tap7f8a109e-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Dec 13 03:17:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:17Z|00065|binding|INFO|Claiming lport 7f8a109e-e262-4847-8430-ac7944dace5c for this chassis.
Dec 13 03:17:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:17Z|00066|binding|INFO|7f8a109e-e262-4847-8430-ac7944dace5c: Claiming fa:16:3e:e6:0a:44 10.100.0.14
Dec 13 03:17:17 np0005558241 nova_compute[248510]: 2025-12-13 08:17:17.525 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:17 np0005558241 systemd-udevd[271773]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:17:17 np0005558241 NetworkManager[50376]: <info>  [1765613837.5428] device (tap7f8a109e-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:17:17 np0005558241 NetworkManager[50376]: <info>  [1765613837.5439] device (tap7f8a109e-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:17:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:17Z|00067|binding|INFO|Setting lport 7f8a109e-e262-4847-8430-ac7944dace5c ovn-installed in OVS
Dec 13 03:17:17 np0005558241 nova_compute[248510]: 2025-12-13 08:17:17.545 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:17 np0005558241 nova_compute[248510]: 2025-12-13 08:17:17.547 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:17 np0005558241 systemd-machined[210538]: New machine qemu-19-instance-00000011.
Dec 13 03:17:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:17Z|00068|binding|INFO|Setting lport 7f8a109e-e262-4847-8430-ac7944dace5c up in Southbound
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.577 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:0a:44 10.100.0.14'], port_security=['fa:16:3e:e6:0a:44 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a14d8b88-7aec-468f-a550-881364e4d95e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7f8a109e-e262-4847-8430-ac7944dace5c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.579 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7f8a109e-e262-4847-8430-ac7944dace5c in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 bound to our chassis#033[00m
Dec 13 03:17:17 np0005558241 systemd[1]: Started Virtual Machine qemu-19-instance-00000011.
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.580 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.595 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc274d1-d5b4-4c68-a1de-9346263f0fe1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.596 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape3139e1f-d1 in ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.599 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape3139e1f-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.599 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[593cd066-07eb-427e-b0d9-b16d4d154ee4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.600 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[042de7d5-ddb2-4c6e-bbc2-995a1829308d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.615 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[9001eead-3d9f-40d6-a6cd-dda6d0def252]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.644 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[985e2b8b-9d0f-4f1d-a298-b7ff3642f88e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.679 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f256b85e-d034-4528-ac46-6595c7626b78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:17 np0005558241 NetworkManager[50376]: <info>  [1765613837.6873] manager: (tape3139e1f-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.686 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3b27d2f1-7927-4c58-95e4-fd768cd5c453]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.722 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b87a15-5f48-4360-bc1d-37f05e7551a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.726 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4731949d-56fd-4a44-8669-3ea775cd7258]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:17 np0005558241 NetworkManager[50376]: <info>  [1765613837.7562] device (tape3139e1f-d0): carrier: link connected
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.765 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[70ec1712-415b-401a-b203-e111132e6c4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.784 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d6bb61fa-e85c-44d8-a61f-23c256e0ce54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3139e1f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:6e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641497, 'reachable_time': 35036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271963, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.805 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f1cf824c-40f8-421e-a318-0388a084b3ad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:6e4a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641497, 'tstamp': 641497}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271964, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.827 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9e33758d-11f0-4a9a-9fb7-c0d0c06f2b92]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3139e1f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:6e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641497, 'reachable_time': 35036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271965, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.870 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ca32cd56-aaab-4461-af5d-764a6d4c19d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.947 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[62b7b1f3-1a7d-4669-9e7d-d2b90ac460c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.949 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.949 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:17:17 np0005558241 nova_compute[248510]: 2025-12-13 08:17:17.950 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765613822.948226, d7894b40-fff7-4226-9a51-17dc5fe4bb30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:17:17 np0005558241 nova_compute[248510]: 2025-12-13 08:17:17.951 248514 INFO nova.compute.manager [-] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.951 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3139e1f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:17 np0005558241 nova_compute[248510]: 2025-12-13 08:17:17.953 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:17 np0005558241 NetworkManager[50376]: <info>  [1765613837.9546] manager: (tape3139e1f-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Dec 13 03:17:17 np0005558241 kernel: tape3139e1f-d0: entered promiscuous mode
Dec 13 03:17:17 np0005558241 nova_compute[248510]: 2025-12-13 08:17:17.957 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.958 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3139e1f-d0, col_values=(('external_ids', {'iface-id': 'c8850a7f-53d6-4776-8c0e-7fb474fe52de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:17 np0005558241 nova_compute[248510]: 2025-12-13 08:17:17.959 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:17Z|00069|binding|INFO|Releasing lport c8850a7f-53d6-4776-8c0e-7fb474fe52de from this chassis (sb_readonly=0)
Dec 13 03:17:17 np0005558241 nova_compute[248510]: 2025-12-13 08:17:17.983 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.986 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.987 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3380bc4d-b1d1-4f7d-bfbb-41da3f90801a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.988 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5.pid.haproxy
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:17:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:17.989 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'env', 'PROCESS_TAG=haproxy-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:17:18 np0005558241 nova_compute[248510]: 2025-12-13 08:17:18.108 248514 DEBUG nova.compute.manager [None req-19ce7018-1f91-42d2-852d-3d9f356eda6f - - - - - -] [instance: d7894b40-fff7-4226-9a51-17dc5fe4bb30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:17:18 np0005558241 podman[272031]: 2025-12-13 08:17:18.350524722 +0000 UTC m=+0.028888483 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:17:18 np0005558241 nova_compute[248510]: 2025-12-13 08:17:18.480 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613838.479456, a14d8b88-7aec-468f-a550-881364e4d95e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:17:18 np0005558241 nova_compute[248510]: 2025-12-13 08:17:18.480 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] VM Started (Lifecycle Event)#033[00m
Dec 13 03:17:18 np0005558241 nova_compute[248510]: 2025-12-13 08:17:18.717 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:17:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Dec 13 03:17:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Dec 13 03:17:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 147 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 4.3 MiB/s wr, 115 op/s
Dec 13 03:17:18 np0005558241 nova_compute[248510]: 2025-12-13 08:17:18.865 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:17:18 np0005558241 nova_compute[248510]: 2025-12-13 08:17:18.870 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613838.480206, a14d8b88-7aec-468f-a550-881364e4d95e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:17:18 np0005558241 nova_compute[248510]: 2025-12-13 08:17:18.870 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:17:18 np0005558241 podman[272031]: 2025-12-13 08:17:18.919618419 +0000 UTC m=+0.597982160 container create 563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:17:18 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:17:19.121860) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613839121964, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1460, "num_deletes": 511, "total_data_size": 1708133, "memory_usage": 1741768, "flush_reason": "Manual Compaction"}
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613839135881, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 1122379, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27998, "largest_seqno": 29457, "table_properties": {"data_size": 1116900, "index_size": 2234, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16633, "raw_average_key_size": 19, "raw_value_size": 1103225, "raw_average_value_size": 1305, "num_data_blocks": 100, "num_entries": 845, "num_filter_entries": 845, "num_deletions": 511, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765613747, "oldest_key_time": 1765613747, "file_creation_time": 1765613839, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 14089 microseconds, and 5799 cpu microseconds.
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:17:19.135943) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 1122379 bytes OK
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:17:19.135973) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:17:19.139032) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:17:19.139062) EVENT_LOG_v1 {"time_micros": 1765613839139055, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:17:19.139114) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1700534, prev total WAL file size 1700534, number of live WAL files 2.
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:17:19.139900) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303037' seq:72057594037927935, type:22 .. '6D6772737461740031323538' seq:0, type:0; will stop at (end)
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(1096KB)], [62(9352KB)]
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613839140119, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 10699223, "oldest_snapshot_seqno": -1}
Dec 13 03:17:19 np0005558241 systemd[1]: Started libpod-conmon-563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864.scope.
Dec 13 03:17:19 np0005558241 nova_compute[248510]: 2025-12-13 08:17:19.168 248514 INFO nova.virt.libvirt.driver [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Deleting instance files /var/lib/nova/instances/5e65e6d8-81fc-412d-9d08-d87023661fda_del#033[00m
Dec 13 03:17:19 np0005558241 nova_compute[248510]: 2025-12-13 08:17:19.170 248514 INFO nova.virt.libvirt.driver [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Deletion of /var/lib/nova/instances/5e65e6d8-81fc-412d-9d08-d87023661fda_del complete#033[00m
Dec 13 03:17:19 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:17:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937067be80a0f92a68338f93cc6d8182d7c2bbc7c8ec6a16edddf5b92b0e5e64/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5177 keys, 7731015 bytes, temperature: kUnknown
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613839218550, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 7731015, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7697296, "index_size": 19657, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12997, "raw_key_size": 131144, "raw_average_key_size": 25, "raw_value_size": 7604735, "raw_average_value_size": 1468, "num_data_blocks": 806, "num_entries": 5177, "num_filter_entries": 5177, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765613839, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:17:19.218925) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 7731015 bytes
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:17:19.222340) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.2 rd, 98.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(16.4) write-amplify(6.9) OK, records in: 6173, records dropped: 996 output_compression: NoCompression
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:17:19.222373) EVENT_LOG_v1 {"time_micros": 1765613839222358, "job": 34, "event": "compaction_finished", "compaction_time_micros": 78544, "compaction_time_cpu_micros": 22898, "output_level": 6, "num_output_files": 1, "total_output_size": 7731015, "num_input_records": 6173, "num_output_records": 5177, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613839222882, "job": 34, "event": "table_file_deletion", "file_number": 64}
Dec 13 03:17:19 np0005558241 podman[272031]: 2025-12-13 08:17:19.224356225 +0000 UTC m=+0.902719986 container init 563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613839225578, "job": 34, "event": "table_file_deletion", "file_number": 62}
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:17:19.139765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:17:19.225660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:17:19.225668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:17:19.225671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:17:19.225675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:17:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:17:19.225678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:17:19 np0005558241 podman[272031]: 2025-12-13 08:17:19.230122897 +0000 UTC m=+0.908486648 container start 563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 03:17:19 np0005558241 neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5[272053]: [NOTICE]   (272057) : New worker (272059) forked
Dec 13 03:17:19 np0005558241 neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5[272053]: [NOTICE]   (272057) : Loading success.
Dec 13 03:17:19 np0005558241 nova_compute[248510]: 2025-12-13 08:17:19.976 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:17:19 np0005558241 nova_compute[248510]: 2025-12-13 08:17:19.984 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:17:20 np0005558241 nova_compute[248510]: 2025-12-13 08:17:20.221 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:17:20 np0005558241 nova_compute[248510]: 2025-12-13 08:17:20.222 248514 INFO nova.compute.manager [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Took 5.48 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:17:20 np0005558241 nova_compute[248510]: 2025-12-13 08:17:20.223 248514 DEBUG oslo.service.loopingcall [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:17:20 np0005558241 nova_compute[248510]: 2025-12-13 08:17:20.223 248514 DEBUG nova.compute.manager [-] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:17:20 np0005558241 nova_compute[248510]: 2025-12-13 08:17:20.223 248514 DEBUG nova.network.neutron [-] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:17:20 np0005558241 nova_compute[248510]: 2025-12-13 08:17:20.342 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008506073479448406 of space, bias 1.0, pg target 0.2551822043834522 quantized to 32 (current 32)
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665742559545385 of space, bias 1.0, pg target 0.19997227678636154 quantized to 32 (current 32)
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.683950785990688e-07 of space, bias 4.0, pg target 0.0010420740943188826 quantized to 16 (current 32)
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:17:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1538: 321 pgs: 321 active+clean; 134 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 2.9 MiB/s wr, 115 op/s
Dec 13 03:17:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 2.9 MiB/s wr, 122 op/s
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.297 248514 DEBUG nova.network.neutron [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Updating instance_info_cache with network_info: [{"id": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "address": "fa:16:3e:25:e7:e6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc85d366f-1e", "ovs_interfaceid": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.374 248514 DEBUG oslo_concurrency.lockutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Releasing lock "refresh_cache-3fffafca-321d-4611-8940-da963b356ca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.374 248514 DEBUG nova.compute.manager [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Instance network_info: |[{"id": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "address": "fa:16:3e:25:e7:e6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc85d366f-1e", "ovs_interfaceid": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.375 248514 DEBUG oslo_concurrency.lockutils [req-d7dd0ac5-bd72-4d95-8f15-9d8b7cfc3805 req-8bd164a7-5dc6-4d35-8466-ff3f7ee69c45 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3fffafca-321d-4611-8940-da963b356ca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.375 248514 DEBUG nova.network.neutron [req-d7dd0ac5-bd72-4d95-8f15-9d8b7cfc3805 req-8bd164a7-5dc6-4d35-8466-ff3f7ee69c45 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Refreshing network info cache for port c85d366f-1e88-41c4-9425-0e5f62b77e99 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.378 248514 DEBUG nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Start _get_guest_xml network_info=[{"id": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "address": "fa:16:3e:25:e7:e6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc85d366f-1e", "ovs_interfaceid": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.383 248514 WARNING nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.391 248514 DEBUG nova.virt.libvirt.host [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.393 248514 DEBUG nova.virt.libvirt.host [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.397 248514 DEBUG nova.virt.libvirt.host [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.398 248514 DEBUG nova.virt.libvirt.host [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.398 248514 DEBUG nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.399 248514 DEBUG nova.virt.hardware [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.399 248514 DEBUG nova.virt.hardware [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.399 248514 DEBUG nova.virt.hardware [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.399 248514 DEBUG nova.virt.hardware [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.400 248514 DEBUG nova.virt.hardware [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.400 248514 DEBUG nova.virt.hardware [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.400 248514 DEBUG nova.virt.hardware [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.400 248514 DEBUG nova.virt.hardware [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.401 248514 DEBUG nova.virt.hardware [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.401 248514 DEBUG nova.virt.hardware [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.401 248514 DEBUG nova.virt.hardware [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.405 248514 DEBUG oslo_concurrency.processutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:17:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:17:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/289715063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.956 248514 DEBUG oslo_concurrency.processutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.984 248514 DEBUG nova.storage.rbd_utils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 3fffafca-321d-4611-8940-da963b356ca1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:17:23 np0005558241 nova_compute[248510]: 2025-12-13 08:17:23.993 248514 DEBUG oslo_concurrency.processutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:17:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:24Z|00070|binding|INFO|Releasing lport c8850a7f-53d6-4776-8c0e-7fb474fe52de from this chassis (sb_readonly=0)
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.229 248514 DEBUG nova.compute.manager [req-ef9078ae-f8a6-41ff-abe7-c8eb99d266a4 req-72d95ca5-e38a-4846-9d7c-86f876306e52 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Received event network-vif-plugged-44171d97-ddef-4b41-9740-19635fa87c3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.230 248514 DEBUG oslo_concurrency.lockutils [req-ef9078ae-f8a6-41ff-abe7-c8eb99d266a4 req-72d95ca5-e38a-4846-9d7c-86f876306e52 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5e65e6d8-81fc-412d-9d08-d87023661fda-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.230 248514 DEBUG oslo_concurrency.lockutils [req-ef9078ae-f8a6-41ff-abe7-c8eb99d266a4 req-72d95ca5-e38a-4846-9d7c-86f876306e52 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5e65e6d8-81fc-412d-9d08-d87023661fda-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.231 248514 DEBUG oslo_concurrency.lockutils [req-ef9078ae-f8a6-41ff-abe7-c8eb99d266a4 req-72d95ca5-e38a-4846-9d7c-86f876306e52 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5e65e6d8-81fc-412d-9d08-d87023661fda-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.231 248514 DEBUG nova.compute.manager [req-ef9078ae-f8a6-41ff-abe7-c8eb99d266a4 req-72d95ca5-e38a-4846-9d7c-86f876306e52 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] No waiting events found dispatching network-vif-plugged-44171d97-ddef-4b41-9740-19635fa87c3a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.231 248514 WARNING nova.compute.manager [req-ef9078ae-f8a6-41ff-abe7-c8eb99d266a4 req-72d95ca5-e38a-4846-9d7c-86f876306e52 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Received unexpected event network-vif-plugged-44171d97-ddef-4b41-9740-19635fa87c3a for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.238 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:24Z|00071|binding|INFO|Releasing lport c8850a7f-53d6-4776-8c0e-7fb474fe52de from this chassis (sb_readonly=0)
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.429 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:17:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2466498810' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.548 248514 DEBUG oslo_concurrency.processutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.550 248514 DEBUG nova.virt.libvirt.vif [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:17:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-364828820',display_name='tempest-ServersAdminTestJSON-server-364828820',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-364828820',id=18,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-t7ff1bcv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTestJSON-10791
29122-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:17:10Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=3fffafca-321d-4611-8940-da963b356ca1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "address": "fa:16:3e:25:e7:e6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc85d366f-1e", "ovs_interfaceid": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.551 248514 DEBUG nova.network.os_vif_util [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "address": "fa:16:3e:25:e7:e6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc85d366f-1e", "ovs_interfaceid": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.552 248514 DEBUG nova.network.os_vif_util [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:e7:e6,bridge_name='br-int',has_traffic_filtering=True,id=c85d366f-1e88-41c4-9425-0e5f62b77e99,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc85d366f-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.553 248514 DEBUG nova.objects.instance [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'pci_devices' on Instance uuid 3fffafca-321d-4611-8940-da963b356ca1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.800 248514 DEBUG nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:17:24 np0005558241 nova_compute[248510]:  <uuid>3fffafca-321d-4611-8940-da963b356ca1</uuid>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:  <name>instance-00000012</name>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersAdminTestJSON-server-364828820</nova:name>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:17:23</nova:creationTime>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:17:24 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:        <nova:user uuid="0d3578a9aa9a4f4facf4009a71564a31">tempest-ServersAdminTestJSON-1079129122-project-member</nova:user>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:        <nova:project uuid="1f6b42f9712f4afea6a07d08373b56ca">tempest-ServersAdminTestJSON-1079129122</nova:project>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:        <nova:port uuid="c85d366f-1e88-41c4-9425-0e5f62b77e99">
Dec 13 03:17:24 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <entry name="serial">3fffafca-321d-4611-8940-da963b356ca1</entry>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <entry name="uuid">3fffafca-321d-4611-8940-da963b356ca1</entry>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/3fffafca-321d-4611-8940-da963b356ca1_disk">
Dec 13 03:17:24 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:17:24 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/3fffafca-321d-4611-8940-da963b356ca1_disk.config">
Dec 13 03:17:24 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:17:24 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:25:e7:e6"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <target dev="tapc85d366f-1e"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/3fffafca-321d-4611-8940-da963b356ca1/console.log" append="off"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:17:24 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:17:24 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:17:24 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:17:24 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.802 248514 DEBUG nova.compute.manager [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Preparing to wait for external event network-vif-plugged-c85d366f-1e88-41c4-9425-0e5f62b77e99 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.802 248514 DEBUG oslo_concurrency.lockutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "3fffafca-321d-4611-8940-da963b356ca1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.802 248514 DEBUG oslo_concurrency.lockutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.803 248514 DEBUG oslo_concurrency.lockutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.804 248514 DEBUG nova.virt.libvirt.vif [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:17:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-364828820',display_name='tempest-ServersAdminTestJSON-server-364828820',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-364828820',id=18,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-t7ff1bcv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTestJSON-1079129122-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:17:10Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=3fffafca-321d-4611-8940-da963b356ca1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "address": "fa:16:3e:25:e7:e6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc85d366f-1e", "ovs_interfaceid": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.804 248514 DEBUG nova.network.os_vif_util [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "address": "fa:16:3e:25:e7:e6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc85d366f-1e", "ovs_interfaceid": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.805 248514 DEBUG nova.network.os_vif_util [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:e7:e6,bridge_name='br-int',has_traffic_filtering=True,id=c85d366f-1e88-41c4-9425-0e5f62b77e99,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc85d366f-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.805 248514 DEBUG os_vif [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:e7:e6,bridge_name='br-int',has_traffic_filtering=True,id=c85d366f-1e88-41c4-9425-0e5f62b77e99,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc85d366f-1e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.806 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.806 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.807 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.810 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.810 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc85d366f-1e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.810 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc85d366f-1e, col_values=(('external_ids', {'iface-id': 'c85d366f-1e88-41c4-9425-0e5f62b77e99', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:e7:e6', 'vm-uuid': '3fffafca-321d-4611-8940-da963b356ca1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.812 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:24 np0005558241 NetworkManager[50376]: <info>  [1765613844.8134] manager: (tapc85d366f-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.814 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.820 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:24 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.823 248514 INFO os_vif [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:e7:e6,bridge_name='br-int',has_traffic_filtering=True,id=c85d366f-1e88-41c4-9425-0e5f62b77e99,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc85d366f-1e')#033[00m
Dec 13 03:17:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1540: 321 pgs: 321 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 18 KiB/s wr, 45 op/s
Dec 13 03:17:25 np0005558241 nova_compute[248510]: 2025-12-13 08:17:24.999 248514 DEBUG nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:17:25 np0005558241 nova_compute[248510]: 2025-12-13 08:17:25.000 248514 DEBUG nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:17:25 np0005558241 nova_compute[248510]: 2025-12-13 08:17:25.000 248514 DEBUG nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] No VIF found with MAC fa:16:3e:25:e7:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:17:25 np0005558241 nova_compute[248510]: 2025-12-13 08:17:25.001 248514 INFO nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Using config drive#033[00m
Dec 13 03:17:25 np0005558241 nova_compute[248510]: 2025-12-13 08:17:25.027 248514 DEBUG nova.storage.rbd_utils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 3fffafca-321d-4611-8940-da963b356ca1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:17:25 np0005558241 nova_compute[248510]: 2025-12-13 08:17:25.402 248514 DEBUG nova.network.neutron [-] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:17:25 np0005558241 nova_compute[248510]: 2025-12-13 08:17:25.656 248514 INFO nova.compute.manager [-] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Took 5.43 seconds to deallocate network for instance.#033[00m
Dec 13 03:17:25 np0005558241 nova_compute[248510]: 2025-12-13 08:17:25.662 248514 DEBUG nova.compute.manager [req-d6b680a2-f6f7-4035-be27-0d6bc8bc971c req-b9e74818-083f-4c5f-abde-f1590072cfee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Received event network-vif-deleted-44171d97-ddef-4b41-9740-19635fa87c3a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:17:25 np0005558241 nova_compute[248510]: 2025-12-13 08:17:25.663 248514 INFO nova.compute.manager [req-d6b680a2-f6f7-4035-be27-0d6bc8bc971c req-b9e74818-083f-4c5f-abde-f1590072cfee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Neutron deleted interface 44171d97-ddef-4b41-9740-19635fa87c3a; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 03:17:25 np0005558241 nova_compute[248510]: 2025-12-13 08:17:25.663 248514 DEBUG nova.network.neutron [req-d6b680a2-f6f7-4035-be27-0d6bc8bc971c req-b9e74818-083f-4c5f-abde-f1590072cfee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:17:25 np0005558241 nova_compute[248510]: 2025-12-13 08:17:25.795 248514 DEBUG nova.compute.manager [req-d6b680a2-f6f7-4035-be27-0d6bc8bc971c req-b9e74818-083f-4c5f-abde-f1590072cfee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Detach interface failed, port_id=44171d97-ddef-4b41-9740-19635fa87c3a, reason: Instance 5e65e6d8-81fc-412d-9d08-d87023661fda could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec 13 03:17:25 np0005558241 nova_compute[248510]: 2025-12-13 08:17:25.812 248514 INFO nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Creating config drive at /var/lib/nova/instances/3fffafca-321d-4611-8940-da963b356ca1/disk.config#033[00m
Dec 13 03:17:25 np0005558241 nova_compute[248510]: 2025-12-13 08:17:25.818 248514 DEBUG oslo_concurrency.processutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3fffafca-321d-4611-8940-da963b356ca1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ec2521j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:17:25 np0005558241 nova_compute[248510]: 2025-12-13 08:17:25.955 248514 DEBUG oslo_concurrency.processutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3fffafca-321d-4611-8940-da963b356ca1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ec2521j" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.023 248514 DEBUG nova.storage.rbd_utils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 3fffafca-321d-4611-8940-da963b356ca1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.027 248514 DEBUG oslo_concurrency.processutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3fffafca-321d-4611-8940-da963b356ca1/disk.config 3fffafca-321d-4611-8940-da963b356ca1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.169 248514 DEBUG oslo_concurrency.processutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3fffafca-321d-4611-8940-da963b356ca1/disk.config 3fffafca-321d-4611-8940-da963b356ca1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.170 248514 INFO nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Deleting local config drive /var/lib/nova/instances/3fffafca-321d-4611-8940-da963b356ca1/disk.config because it was imported into RBD.#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.214 248514 DEBUG nova.network.neutron [req-d7dd0ac5-bd72-4d95-8f15-9d8b7cfc3805 req-8bd164a7-5dc6-4d35-8466-ff3f7ee69c45 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Updated VIF entry in instance network info cache for port c85d366f-1e88-41c4-9425-0e5f62b77e99. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.215 248514 DEBUG nova.network.neutron [req-d7dd0ac5-bd72-4d95-8f15-9d8b7cfc3805 req-8bd164a7-5dc6-4d35-8466-ff3f7ee69c45 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Updating instance_info_cache with network_info: [{"id": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "address": "fa:16:3e:25:e7:e6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc85d366f-1e", "ovs_interfaceid": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.219 248514 DEBUG oslo_concurrency.lockutils [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.219 248514 DEBUG oslo_concurrency.lockutils [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:26 np0005558241 kernel: tapc85d366f-1e: entered promiscuous mode
Dec 13 03:17:26 np0005558241 NetworkManager[50376]: <info>  [1765613846.2268] manager: (tapc85d366f-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Dec 13 03:17:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:26Z|00072|binding|INFO|Claiming lport c85d366f-1e88-41c4-9425-0e5f62b77e99 for this chassis.
Dec 13 03:17:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:26Z|00073|binding|INFO|c85d366f-1e88-41c4-9425-0e5f62b77e99: Claiming fa:16:3e:25:e7:e6 10.100.0.11
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.231 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:26Z|00074|binding|INFO|Setting lport c85d366f-1e88-41c4-9425-0e5f62b77e99 ovn-installed in OVS
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.248 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:26 np0005558241 systemd-udevd[272203]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:17:26 np0005558241 systemd-machined[210538]: New machine qemu-20-instance-00000012.
Dec 13 03:17:26 np0005558241 systemd[1]: Started Virtual Machine qemu-20-instance-00000012.
Dec 13 03:17:26 np0005558241 NetworkManager[50376]: <info>  [1765613846.2769] device (tapc85d366f-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:17:26 np0005558241 NetworkManager[50376]: <info>  [1765613846.2778] device (tapc85d366f-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:17:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:26.280 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:e7:e6 10.100.0.11'], port_security=['fa:16:3e:25:e7:e6 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3fffafca-321d-4611-8940-da963b356ca1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c85d366f-1e88-41c4-9425-0e5f62b77e99) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:17:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:26Z|00075|binding|INFO|Setting lport c85d366f-1e88-41c4-9425-0e5f62b77e99 up in Southbound
Dec 13 03:17:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:26.282 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c85d366f-1e88-41c4-9425-0e5f62b77e99 in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 bound to our chassis#033[00m
Dec 13 03:17:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:26.284 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5#033[00m
Dec 13 03:17:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:26.302 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ff88ea8d-73ba-4034-abcb-914e1a702e87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.307 248514 DEBUG oslo_concurrency.lockutils [req-d7dd0ac5-bd72-4d95-8f15-9d8b7cfc3805 req-8bd164a7-5dc6-4d35-8466-ff3f7ee69c45 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3fffafca-321d-4611-8940-da963b356ca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:17:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:26.332 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[50afad36-c563-435c-818c-4c8738593f45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:26.337 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[168f5408-9f0c-408f-80d0-766ddc01b9f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.360 248514 DEBUG oslo_concurrency.processutils [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:17:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:26.370 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5328cc62-edd2-44c5-8693-00d3f7c45e56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:26.391 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a29d26b8-e2cf-430a-93b4-e0acdfd6bbd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3139e1f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:6e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641497, 'reachable_time': 35036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272219, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:26.412 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[29b2ab64-8b6b-473b-a84f-9a2f665eea07]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641512, 'tstamp': 641512}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272220, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641515, 'tstamp': 641515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272220, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:26.415 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:26.419 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3139e1f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:26.420 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:17:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:26.420 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3139e1f-d0, col_values=(('external_ids', {'iface-id': 'c8850a7f-53d6-4776-8c0e-7fb474fe52de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.419 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:26.421 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.608 248514 DEBUG nova.compute.manager [req-6f34cf3f-a4ba-41b0-81f9-5f8422e3dd5b req-7e057575-6d1d-4bfc-84a3-e5ae07b6729a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.608 248514 DEBUG oslo_concurrency.lockutils [req-6f34cf3f-a4ba-41b0-81f9-5f8422e3dd5b req-7e057575-6d1d-4bfc-84a3-e5ae07b6729a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.609 248514 DEBUG oslo_concurrency.lockutils [req-6f34cf3f-a4ba-41b0-81f9-5f8422e3dd5b req-7e057575-6d1d-4bfc-84a3-e5ae07b6729a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.609 248514 DEBUG oslo_concurrency.lockutils [req-6f34cf3f-a4ba-41b0-81f9-5f8422e3dd5b req-7e057575-6d1d-4bfc-84a3-e5ae07b6729a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.609 248514 DEBUG nova.compute.manager [req-6f34cf3f-a4ba-41b0-81f9-5f8422e3dd5b req-7e057575-6d1d-4bfc-84a3-e5ae07b6729a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Processing event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.610 248514 DEBUG nova.compute.manager [req-6f34cf3f-a4ba-41b0-81f9-5f8422e3dd5b req-7e057575-6d1d-4bfc-84a3-e5ae07b6729a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.610 248514 DEBUG oslo_concurrency.lockutils [req-6f34cf3f-a4ba-41b0-81f9-5f8422e3dd5b req-7e057575-6d1d-4bfc-84a3-e5ae07b6729a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.610 248514 DEBUG oslo_concurrency.lockutils [req-6f34cf3f-a4ba-41b0-81f9-5f8422e3dd5b req-7e057575-6d1d-4bfc-84a3-e5ae07b6729a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.610 248514 DEBUG oslo_concurrency.lockutils [req-6f34cf3f-a4ba-41b0-81f9-5f8422e3dd5b req-7e057575-6d1d-4bfc-84a3-e5ae07b6729a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.611 248514 DEBUG nova.compute.manager [req-6f34cf3f-a4ba-41b0-81f9-5f8422e3dd5b req-7e057575-6d1d-4bfc-84a3-e5ae07b6729a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] No waiting events found dispatching network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.611 248514 WARNING nova.compute.manager [req-6f34cf3f-a4ba-41b0-81f9-5f8422e3dd5b req-7e057575-6d1d-4bfc-84a3-e5ae07b6729a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received unexpected event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c for instance with vm_state building and task_state spawning.#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.612 248514 DEBUG nova.compute.manager [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance event wait completed in 8 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.626 248514 DEBUG nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.632 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613846.6322703, a14d8b88-7aec-468f-a550-881364e4d95e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.633 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.653 248514 INFO nova.virt.libvirt.driver [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance spawned successfully.#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.654 248514 DEBUG nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:17:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 18 KiB/s wr, 45 op/s
Dec 13 03:17:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:17:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2155743700' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.938 248514 DEBUG oslo_concurrency.processutils [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.944 248514 DEBUG nova.compute.provider_tree [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.974 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:17:26 np0005558241 nova_compute[248510]: 2025-12-13 08:17:26.978 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.040 248514 DEBUG nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.041 248514 DEBUG nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.041 248514 DEBUG nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.041 248514 DEBUG nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.042 248514 DEBUG nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.042 248514 DEBUG nova.virt.libvirt.driver [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.048 248514 DEBUG nova.scheduler.client.report [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.052 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.053 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613846.8680418, 3fffafca-321d-4611-8940-da963b356ca1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.053 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3fffafca-321d-4611-8940-da963b356ca1] VM Started (Lifecycle Event)#033[00m
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.209 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.214 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613846.8683178, 3fffafca-321d-4611-8940-da963b356ca1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.215 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3fffafca-321d-4611-8940-da963b356ca1] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.222 248514 DEBUG oslo_concurrency.lockutils [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.373 248514 INFO nova.scheduler.client.report [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Deleted allocations for instance 5e65e6d8-81fc-412d-9d08-d87023661fda#033[00m
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.377 248514 INFO nova.compute.manager [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Took 19.05 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.378 248514 DEBUG nova.compute.manager [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.379 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.385 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.463 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3fffafca-321d-4611-8940-da963b356ca1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.645 248514 INFO nova.compute.manager [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Took 22.22 seconds to build instance.
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:17:27 np0005558241 nova_compute[248510]: 2025-12-13 08:17:27.925 248514 DEBUG oslo_concurrency.lockutils [None req-438a7d7a-3c46-47e3-9082-956f92b50756 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:17:28 np0005558241 nova_compute[248510]: 2025-12-13 08:17:28.059 248514 DEBUG oslo_concurrency.lockutils [None req-e2a58d84-bac0-4ce8-bc4d-37927041acc0 ec32d06bed00468da4407dd9585407a0 487f5e19eb9e49e3bbae1c23af285690 - - default default] Lock "5e65e6d8-81fc-412d-9d08-d87023661fda" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:17:28 np0005558241 nova_compute[248510]: 2025-12-13 08:17:28.723 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:17:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:17:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 941 KiB/s rd, 1.3 KiB/s wr, 64 op/s
Dec 13 03:17:28 np0005558241 nova_compute[248510]: 2025-12-13 08:17:28.845 248514 DEBUG nova.compute.manager [req-bf443841-58cc-476c-95e0-bc1853fe6da1 req-b4bf2f28-2d1c-4a07-be86-0b8f6e4bcc4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Received event network-vif-plugged-c85d366f-1e88-41c4-9425-0e5f62b77e99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:17:28 np0005558241 nova_compute[248510]: 2025-12-13 08:17:28.846 248514 DEBUG oslo_concurrency.lockutils [req-bf443841-58cc-476c-95e0-bc1853fe6da1 req-b4bf2f28-2d1c-4a07-be86-0b8f6e4bcc4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3fffafca-321d-4611-8940-da963b356ca1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:17:28 np0005558241 nova_compute[248510]: 2025-12-13 08:17:28.846 248514 DEBUG oslo_concurrency.lockutils [req-bf443841-58cc-476c-95e0-bc1853fe6da1 req-b4bf2f28-2d1c-4a07-be86-0b8f6e4bcc4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:17:28 np0005558241 nova_compute[248510]: 2025-12-13 08:17:28.847 248514 DEBUG oslo_concurrency.lockutils [req-bf443841-58cc-476c-95e0-bc1853fe6da1 req-b4bf2f28-2d1c-4a07-be86-0b8f6e4bcc4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:17:28 np0005558241 nova_compute[248510]: 2025-12-13 08:17:28.847 248514 DEBUG nova.compute.manager [req-bf443841-58cc-476c-95e0-bc1853fe6da1 req-b4bf2f28-2d1c-4a07-be86-0b8f6e4bcc4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Processing event network-vif-plugged-c85d366f-1e88-41c4-9425-0e5f62b77e99 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 03:17:28 np0005558241 nova_compute[248510]: 2025-12-13 08:17:28.847 248514 DEBUG nova.compute.manager [req-bf443841-58cc-476c-95e0-bc1853fe6da1 req-b4bf2f28-2d1c-4a07-be86-0b8f6e4bcc4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Received event network-vif-plugged-c85d366f-1e88-41c4-9425-0e5f62b77e99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:17:28 np0005558241 nova_compute[248510]: 2025-12-13 08:17:28.847 248514 DEBUG oslo_concurrency.lockutils [req-bf443841-58cc-476c-95e0-bc1853fe6da1 req-b4bf2f28-2d1c-4a07-be86-0b8f6e4bcc4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3fffafca-321d-4611-8940-da963b356ca1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:17:28 np0005558241 nova_compute[248510]: 2025-12-13 08:17:28.847 248514 DEBUG oslo_concurrency.lockutils [req-bf443841-58cc-476c-95e0-bc1853fe6da1 req-b4bf2f28-2d1c-4a07-be86-0b8f6e4bcc4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:17:28 np0005558241 nova_compute[248510]: 2025-12-13 08:17:28.848 248514 DEBUG oslo_concurrency.lockutils [req-bf443841-58cc-476c-95e0-bc1853fe6da1 req-b4bf2f28-2d1c-4a07-be86-0b8f6e4bcc4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:17:28 np0005558241 nova_compute[248510]: 2025-12-13 08:17:28.848 248514 DEBUG nova.compute.manager [req-bf443841-58cc-476c-95e0-bc1853fe6da1 req-b4bf2f28-2d1c-4a07-be86-0b8f6e4bcc4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] No waiting events found dispatching network-vif-plugged-c85d366f-1e88-41c4-9425-0e5f62b77e99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:17:28 np0005558241 nova_compute[248510]: 2025-12-13 08:17:28.848 248514 WARNING nova.compute.manager [req-bf443841-58cc-476c-95e0-bc1853fe6da1 req-b4bf2f28-2d1c-4a07-be86-0b8f6e4bcc4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Received unexpected event network-vif-plugged-c85d366f-1e88-41c4-9425-0e5f62b77e99 for instance with vm_state building and task_state spawning.
Dec 13 03:17:28 np0005558241 nova_compute[248510]: 2025-12-13 08:17:28.849 248514 DEBUG nova.compute.manager [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:17:28 np0005558241 nova_compute[248510]: 2025-12-13 08:17:28.852 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613848.8525786, 3fffafca-321d-4611-8940-da963b356ca1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:17:28 np0005558241 nova_compute[248510]: 2025-12-13 08:17:28.853 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3fffafca-321d-4611-8940-da963b356ca1] VM Resumed (Lifecycle Event)
Dec 13 03:17:28 np0005558241 nova_compute[248510]: 2025-12-13 08:17:28.855 248514 DEBUG nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 03:17:28 np0005558241 nova_compute[248510]: 2025-12-13 08:17:28.860 248514 INFO nova.virt.libvirt.driver [-] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Instance spawned successfully.
Dec 13 03:17:28 np0005558241 nova_compute[248510]: 2025-12-13 08:17:28.862 248514 DEBUG nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 03:17:29 np0005558241 nova_compute[248510]: 2025-12-13 08:17:29.090 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:17:29 np0005558241 nova_compute[248510]: 2025-12-13 08:17:29.097 248514 DEBUG nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:17:29 np0005558241 nova_compute[248510]: 2025-12-13 08:17:29.098 248514 DEBUG nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:17:29 np0005558241 nova_compute[248510]: 2025-12-13 08:17:29.098 248514 DEBUG nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:17:29 np0005558241 nova_compute[248510]: 2025-12-13 08:17:29.099 248514 DEBUG nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:17:29 np0005558241 nova_compute[248510]: 2025-12-13 08:17:29.099 248514 DEBUG nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:17:29 np0005558241 nova_compute[248510]: 2025-12-13 08:17:29.100 248514 DEBUG nova.virt.libvirt.driver [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:17:29 np0005558241 nova_compute[248510]: 2025-12-13 08:17:29.104 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:17:29 np0005558241 nova_compute[248510]: 2025-12-13 08:17:29.458 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3fffafca-321d-4611-8940-da963b356ca1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:17:29 np0005558241 nova_compute[248510]: 2025-12-13 08:17:29.548 248514 INFO nova.compute.manager [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Took 18.75 seconds to spawn the instance on the hypervisor.
Dec 13 03:17:29 np0005558241 nova_compute[248510]: 2025-12-13 08:17:29.549 248514 DEBUG nova.compute.manager [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:17:29 np0005558241 nova_compute[248510]: 2025-12-13 08:17:29.802 248514 INFO nova.compute.manager [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Took 20.66 seconds to build instance.
Dec 13 03:17:29 np0005558241 nova_compute[248510]: 2025-12-13 08:17:29.813 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:17:29 np0005558241 nova_compute[248510]: 2025-12-13 08:17:29.917 248514 DEBUG oslo_concurrency.lockutils [None req-0f1bb3a5-1c23-4bad-be71-a81e6318001a 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:17:30 np0005558241 nova_compute[248510]: 2025-12-13 08:17:30.177 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765613835.175904, 5e65e6d8-81fc-412d-9d08-d87023661fda => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:17:30 np0005558241 nova_compute[248510]: 2025-12-13 08:17:30.178 248514 INFO nova.compute.manager [-] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] VM Stopped (Lifecycle Event)
Dec 13 03:17:30 np0005558241 nova_compute[248510]: 2025-12-13 08:17:30.268 248514 DEBUG nova.compute.manager [None req-af45bd73-a6ad-4798-850f-ed2701d480e3 - - - - - -] [instance: 5e65e6d8-81fc-412d-9d08-d87023661fda] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:17:30 np0005558241 nova_compute[248510]: 2025-12-13 08:17:30.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:17:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 93 op/s
Dec 13 03:17:32 np0005558241 nova_compute[248510]: 2025-12-13 08:17:32.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:17:32 np0005558241 nova_compute[248510]: 2025-12-13 08:17:32.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 03:17:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1544: 321 pgs: 321 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 84 op/s
Dec 13 03:17:32 np0005558241 nova_compute[248510]: 2025-12-13 08:17:32.898 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 03:17:33 np0005558241 podman[272309]: 2025-12-13 08:17:33.565614222 +0000 UTC m=+0.068225191 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec 13 03:17:33 np0005558241 podman[272310]: 2025-12-13 08:17:33.582218891 +0000 UTC m=+0.091733240 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 03:17:33 np0005558241 podman[272308]: 2025-12-13 08:17:33.621470207 +0000 UTC m=+0.131109879 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 03:17:33 np0005558241 nova_compute[248510]: 2025-12-13 08:17:33.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:17:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:17:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:17:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:17:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:17:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:17:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:17:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:17:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:17:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:17:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:17:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:17:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:17:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:17:34 np0005558241 podman[272489]: 2025-12-13 08:17:34.623875337 +0000 UTC m=+0.026820141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:17:34 np0005558241 podman[272489]: 2025-12-13 08:17:34.736227345 +0000 UTC m=+0.139172129 container create db1fd9b39b42eac414ad6b66b91d76f26cb10c5106266f598cc7b688eb0ad26e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 03:17:34 np0005558241 nova_compute[248510]: 2025-12-13 08:17:34.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:17:34 np0005558241 nova_compute[248510]: 2025-12-13 08:17:34.816 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:17:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1545: 321 pgs: 321 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 137 op/s
Dec 13 03:17:34 np0005558241 systemd[1]: Started libpod-conmon-db1fd9b39b42eac414ad6b66b91d76f26cb10c5106266f598cc7b688eb0ad26e.scope.
Dec 13 03:17:34 np0005558241 nova_compute[248510]: 2025-12-13 08:17:34.877 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:17:34 np0005558241 nova_compute[248510]: 2025-12-13 08:17:34.878 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:17:34 np0005558241 nova_compute[248510]: 2025-12-13 08:17:34.879 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:17:34 np0005558241 nova_compute[248510]: 2025-12-13 08:17:34.879 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 03:17:34 np0005558241 nova_compute[248510]: 2025-12-13 08:17:34.879 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:17:34 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:17:35 np0005558241 podman[272489]: 2025-12-13 08:17:35.021010529 +0000 UTC m=+0.423955343 container init db1fd9b39b42eac414ad6b66b91d76f26cb10c5106266f598cc7b688eb0ad26e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Dec 13 03:17:35 np0005558241 podman[272489]: 2025-12-13 08:17:35.030159765 +0000 UTC m=+0.433104549 container start db1fd9b39b42eac414ad6b66b91d76f26cb10c5106266f598cc7b688eb0ad26e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shaw, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Dec 13 03:17:35 np0005558241 confident_shaw[272505]: 167 167
Dec 13 03:17:35 np0005558241 systemd[1]: libpod-db1fd9b39b42eac414ad6b66b91d76f26cb10c5106266f598cc7b688eb0ad26e.scope: Deactivated successfully.
Dec 13 03:17:35 np0005558241 podman[272489]: 2025-12-13 08:17:35.040712144 +0000 UTC m=+0.443656928 container attach db1fd9b39b42eac414ad6b66b91d76f26cb10c5106266f598cc7b688eb0ad26e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shaw, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:17:35 np0005558241 podman[272489]: 2025-12-13 08:17:35.041211677 +0000 UTC m=+0.444156471 container died db1fd9b39b42eac414ad6b66b91d76f26cb10c5106266f598cc7b688eb0ad26e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 03:17:35 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:17:35 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:17:35 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:17:35 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1393f5947562ece98bd10d1004998da1d60145288f1fc246ec16857b1b7e9b85-merged.mount: Deactivated successfully.
Dec 13 03:17:35 np0005558241 podman[272489]: 2025-12-13 08:17:35.084900823 +0000 UTC m=+0.487845597 container remove db1fd9b39b42eac414ad6b66b91d76f26cb10c5106266f598cc7b688eb0ad26e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shaw, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:17:35 np0005558241 systemd[1]: libpod-conmon-db1fd9b39b42eac414ad6b66b91d76f26cb10c5106266f598cc7b688eb0ad26e.scope: Deactivated successfully.
Dec 13 03:17:35 np0005558241 podman[272548]: 2025-12-13 08:17:35.289313868 +0000 UTC m=+0.055038077 container create 6cb0a1d13af3bd54a4dce2cef80657d031e79dbf36fc4f1acb0b688b9a533084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ganguly, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:17:35 np0005558241 systemd[1]: Started libpod-conmon-6cb0a1d13af3bd54a4dce2cef80657d031e79dbf36fc4f1acb0b688b9a533084.scope.
Dec 13 03:17:35 np0005558241 podman[272548]: 2025-12-13 08:17:35.266492086 +0000 UTC m=+0.032216315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:17:35 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:17:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34346893fccf150ac3de5edd6125da2b5d1d504f537791990f3eb679f851901/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:17:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34346893fccf150ac3de5edd6125da2b5d1d504f537791990f3eb679f851901/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:17:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34346893fccf150ac3de5edd6125da2b5d1d504f537791990f3eb679f851901/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:17:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34346893fccf150ac3de5edd6125da2b5d1d504f537791990f3eb679f851901/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:17:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34346893fccf150ac3de5edd6125da2b5d1d504f537791990f3eb679f851901/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:17:35 np0005558241 podman[272548]: 2025-12-13 08:17:35.391564886 +0000 UTC m=+0.157289125 container init 6cb0a1d13af3bd54a4dce2cef80657d031e79dbf36fc4f1acb0b688b9a533084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ganguly, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:17:35 np0005558241 podman[272548]: 2025-12-13 08:17:35.398797664 +0000 UTC m=+0.164521873 container start 6cb0a1d13af3bd54a4dce2cef80657d031e79dbf36fc4f1acb0b688b9a533084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:17:35 np0005558241 podman[272548]: 2025-12-13 08:17:35.403977492 +0000 UTC m=+0.169701701 container attach 6cb0a1d13af3bd54a4dce2cef80657d031e79dbf36fc4f1acb0b688b9a533084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ganguly, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 03:17:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:17:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2165430477' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:17:35 np0005558241 nova_compute[248510]: 2025-12-13 08:17:35.515 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:35 np0005558241 nova_compute[248510]: 2025-12-13 08:17:35.666 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:17:35 np0005558241 nova_compute[248510]: 2025-12-13 08:17:35.667 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:17:35 np0005558241 nova_compute[248510]: 2025-12-13 08:17:35.672 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:17:35 np0005558241 nova_compute[248510]: 2025-12-13 08:17:35.673 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:17:35 np0005558241 nova_compute[248510]: 2025-12-13 08:17:35.874 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:17:35 np0005558241 nova_compute[248510]: 2025-12-13 08:17:35.877 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4211MB free_disk=59.94629057776183GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:17:35 np0005558241 nova_compute[248510]: 2025-12-13 08:17:35.878 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:35 np0005558241 nova_compute[248510]: 2025-12-13 08:17:35.878 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:35 np0005558241 confident_ganguly[272565]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:17:35 np0005558241 confident_ganguly[272565]: --> All data devices are unavailable
Dec 13 03:17:35 np0005558241 systemd[1]: libpod-6cb0a1d13af3bd54a4dce2cef80657d031e79dbf36fc4f1acb0b688b9a533084.scope: Deactivated successfully.
Dec 13 03:17:35 np0005558241 podman[272588]: 2025-12-13 08:17:35.979280453 +0000 UTC m=+0.026094004 container died 6cb0a1d13af3bd54a4dce2cef80657d031e79dbf36fc4f1acb0b688b9a533084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ganguly, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 03:17:36 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b34346893fccf150ac3de5edd6125da2b5d1d504f537791990f3eb679f851901-merged.mount: Deactivated successfully.
Dec 13 03:17:36 np0005558241 podman[272588]: 2025-12-13 08:17:36.019975615 +0000 UTC m=+0.066789146 container remove 6cb0a1d13af3bd54a4dce2cef80657d031e79dbf36fc4f1acb0b688b9a533084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 03:17:36 np0005558241 systemd[1]: libpod-conmon-6cb0a1d13af3bd54a4dce2cef80657d031e79dbf36fc4f1acb0b688b9a533084.scope: Deactivated successfully.
Dec 13 03:17:36 np0005558241 nova_compute[248510]: 2025-12-13 08:17:36.192 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance a14d8b88-7aec-468f-a550-881364e4d95e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:17:36 np0005558241 nova_compute[248510]: 2025-12-13 08:17:36.195 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 3fffafca-321d-4611-8940-da963b356ca1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:17:36 np0005558241 nova_compute[248510]: 2025-12-13 08:17:36.197 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:17:36 np0005558241 nova_compute[248510]: 2025-12-13 08:17:36.198 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:17:36 np0005558241 nova_compute[248510]: 2025-12-13 08:17:36.280 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:17:36 np0005558241 podman[272684]: 2025-12-13 08:17:36.530237243 +0000 UTC m=+0.044757763 container create 6105f2c0582639ca2a6c7a43b957a398fab63122f951b145f688d72b6541a37e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_noyce, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:17:36 np0005558241 systemd[1]: Started libpod-conmon-6105f2c0582639ca2a6c7a43b957a398fab63122f951b145f688d72b6541a37e.scope.
Dec 13 03:17:36 np0005558241 podman[272684]: 2025-12-13 08:17:36.509657107 +0000 UTC m=+0.024177657 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:17:36 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:17:36 np0005558241 podman[272684]: 2025-12-13 08:17:36.639252309 +0000 UTC m=+0.153772849 container init 6105f2c0582639ca2a6c7a43b957a398fab63122f951b145f688d72b6541a37e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 03:17:36 np0005558241 podman[272684]: 2025-12-13 08:17:36.650771672 +0000 UTC m=+0.165292192 container start 6105f2c0582639ca2a6c7a43b957a398fab63122f951b145f688d72b6541a37e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_noyce, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 03:17:36 np0005558241 kind_noyce[272700]: 167 167
Dec 13 03:17:36 np0005558241 podman[272684]: 2025-12-13 08:17:36.655613602 +0000 UTC m=+0.170134142 container attach 6105f2c0582639ca2a6c7a43b957a398fab63122f951b145f688d72b6541a37e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:17:36 np0005558241 systemd[1]: libpod-6105f2c0582639ca2a6c7a43b957a398fab63122f951b145f688d72b6541a37e.scope: Deactivated successfully.
Dec 13 03:17:36 np0005558241 podman[272684]: 2025-12-13 08:17:36.657607931 +0000 UTC m=+0.172128451 container died 6105f2c0582639ca2a6c7a43b957a398fab63122f951b145f688d72b6541a37e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_noyce, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 03:17:36 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e82a4d0d57493f87e4f169d5cc2f03ab5f2e77731a2544504c268b1f6c081b87-merged.mount: Deactivated successfully.
Dec 13 03:17:36 np0005558241 podman[272684]: 2025-12-13 08:17:36.700783384 +0000 UTC m=+0.215303904 container remove 6105f2c0582639ca2a6c7a43b957a398fab63122f951b145f688d72b6541a37e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_noyce, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:17:36 np0005558241 systemd[1]: libpod-conmon-6105f2c0582639ca2a6c7a43b957a398fab63122f951b145f688d72b6541a37e.scope: Deactivated successfully.
Dec 13 03:17:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 137 op/s
Dec 13 03:17:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:17:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/240539490' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:17:36 np0005558241 nova_compute[248510]: 2025-12-13 08:17:36.899 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:36 np0005558241 nova_compute[248510]: 2025-12-13 08:17:36.910 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:17:36 np0005558241 podman[272723]: 2025-12-13 08:17:36.925050538 +0000 UTC m=+0.080750370 container create 291a74eeabeaa1e9c3f6ff0e2e0b00d718ff90922d51ca449296cabb0b9b41ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 03:17:36 np0005558241 nova_compute[248510]: 2025-12-13 08:17:36.954 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:17:36 np0005558241 podman[272723]: 2025-12-13 08:17:36.887849152 +0000 UTC m=+0.043549004 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:17:36 np0005558241 systemd[1]: Started libpod-conmon-291a74eeabeaa1e9c3f6ff0e2e0b00d718ff90922d51ca449296cabb0b9b41ea.scope.
Dec 13 03:17:37 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:17:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51cd68e50df040c2e1dcd1c704129f63162b17ffc41813fa8020e02b0621d41d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:17:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51cd68e50df040c2e1dcd1c704129f63162b17ffc41813fa8020e02b0621d41d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:17:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51cd68e50df040c2e1dcd1c704129f63162b17ffc41813fa8020e02b0621d41d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:17:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51cd68e50df040c2e1dcd1c704129f63162b17ffc41813fa8020e02b0621d41d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:17:37 np0005558241 podman[272723]: 2025-12-13 08:17:37.044859719 +0000 UTC m=+0.200559581 container init 291a74eeabeaa1e9c3f6ff0e2e0b00d718ff90922d51ca449296cabb0b9b41ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hopper, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:17:37 np0005558241 podman[272723]: 2025-12-13 08:17:37.055505151 +0000 UTC m=+0.211204983 container start 291a74eeabeaa1e9c3f6ff0e2e0b00d718ff90922d51ca449296cabb0b9b41ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 03:17:37 np0005558241 podman[272723]: 2025-12-13 08:17:37.062107154 +0000 UTC m=+0.217807016 container attach 291a74eeabeaa1e9c3f6ff0e2e0b00d718ff90922d51ca449296cabb0b9b41ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hopper, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:17:37 np0005558241 nova_compute[248510]: 2025-12-13 08:17:37.109 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:17:37 np0005558241 nova_compute[248510]: 2025-12-13 08:17:37.109 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]: {
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:    "0": [
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:        {
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "devices": [
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "/dev/loop3"
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            ],
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "lv_name": "ceph_lv0",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "lv_size": "21470642176",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "name": "ceph_lv0",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "tags": {
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.cluster_name": "ceph",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.crush_device_class": "",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.encrypted": "0",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.objectstore": "bluestore",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.osd_id": "0",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.type": "block",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.vdo": "0",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.with_tpm": "0"
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            },
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "type": "block",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "vg_name": "ceph_vg0"
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:        }
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:    ],
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:    "1": [
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:        {
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "devices": [
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "/dev/loop4"
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            ],
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "lv_name": "ceph_lv1",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "lv_size": "21470642176",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "name": "ceph_lv1",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "tags": {
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.cluster_name": "ceph",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.crush_device_class": "",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.encrypted": "0",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.objectstore": "bluestore",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.osd_id": "1",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.type": "block",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.vdo": "0",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.with_tpm": "0"
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            },
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "type": "block",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "vg_name": "ceph_vg1"
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:        }
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:    ],
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:    "2": [
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:        {
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "devices": [
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "/dev/loop5"
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            ],
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "lv_name": "ceph_lv2",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "lv_size": "21470642176",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "name": "ceph_lv2",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "tags": {
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.cluster_name": "ceph",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.crush_device_class": "",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.encrypted": "0",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.objectstore": "bluestore",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.osd_id": "2",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.type": "block",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.vdo": "0",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:                "ceph.with_tpm": "0"
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            },
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "type": "block",
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:            "vg_name": "ceph_vg2"
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:        }
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]:    ]
Dec 13 03:17:37 np0005558241 sharp_hopper[272741]: }
Dec 13 03:17:37 np0005558241 systemd[1]: libpod-291a74eeabeaa1e9c3f6ff0e2e0b00d718ff90922d51ca449296cabb0b9b41ea.scope: Deactivated successfully.
Dec 13 03:17:37 np0005558241 podman[272750]: 2025-12-13 08:17:37.483344438 +0000 UTC m=+0.030745288 container died 291a74eeabeaa1e9c3f6ff0e2e0b00d718ff90922d51ca449296cabb0b9b41ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hopper, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:17:37 np0005558241 systemd[1]: var-lib-containers-storage-overlay-51cd68e50df040c2e1dcd1c704129f63162b17ffc41813fa8020e02b0621d41d-merged.mount: Deactivated successfully.
Dec 13 03:17:37 np0005558241 podman[272750]: 2025-12-13 08:17:37.532099539 +0000 UTC m=+0.079500359 container remove 291a74eeabeaa1e9c3f6ff0e2e0b00d718ff90922d51ca449296cabb0b9b41ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 03:17:37 np0005558241 systemd[1]: libpod-conmon-291a74eeabeaa1e9c3f6ff0e2e0b00d718ff90922d51ca449296cabb0b9b41ea.scope: Deactivated successfully.
Dec 13 03:17:38 np0005558241 podman[272828]: 2025-12-13 08:17:38.055197543 +0000 UTC m=+0.045405079 container create 2d23d0e874e4f972e2d107246e76e50f92d7c5be1e0e55f480356407447d1652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_engelbart, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:17:38 np0005558241 systemd[1]: Started libpod-conmon-2d23d0e874e4f972e2d107246e76e50f92d7c5be1e0e55f480356407447d1652.scope.
Dec 13 03:17:38 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:17:38 np0005558241 podman[272828]: 2025-12-13 08:17:38.034929174 +0000 UTC m=+0.025136730 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:17:38 np0005558241 podman[272828]: 2025-12-13 08:17:38.135339887 +0000 UTC m=+0.125547443 container init 2d23d0e874e4f972e2d107246e76e50f92d7c5be1e0e55f480356407447d1652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 03:17:38 np0005558241 podman[272828]: 2025-12-13 08:17:38.143539259 +0000 UTC m=+0.133746795 container start 2d23d0e874e4f972e2d107246e76e50f92d7c5be1e0e55f480356407447d1652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_engelbart, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:17:38 np0005558241 podman[272828]: 2025-12-13 08:17:38.147979659 +0000 UTC m=+0.138187225 container attach 2d23d0e874e4f972e2d107246e76e50f92d7c5be1e0e55f480356407447d1652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_engelbart, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 03:17:38 np0005558241 peaceful_engelbart[272845]: 167 167
Dec 13 03:17:38 np0005558241 systemd[1]: libpod-2d23d0e874e4f972e2d107246e76e50f92d7c5be1e0e55f480356407447d1652.scope: Deactivated successfully.
Dec 13 03:17:38 np0005558241 podman[272828]: 2025-12-13 08:17:38.15411465 +0000 UTC m=+0.144322216 container died 2d23d0e874e4f972e2d107246e76e50f92d7c5be1e0e55f480356407447d1652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_engelbart, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 03:17:38 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7ec0c6b9743bf6fda5b8375539db19198837cf35f2aead187bf5415a6477732e-merged.mount: Deactivated successfully.
Dec 13 03:17:38 np0005558241 podman[272828]: 2025-12-13 08:17:38.197797906 +0000 UTC m=+0.188005442 container remove 2d23d0e874e4f972e2d107246e76e50f92d7c5be1e0e55f480356407447d1652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 03:17:38 np0005558241 systemd[1]: libpod-conmon-2d23d0e874e4f972e2d107246e76e50f92d7c5be1e0e55f480356407447d1652.scope: Deactivated successfully.
Dec 13 03:17:38 np0005558241 podman[272869]: 2025-12-13 08:17:38.406211299 +0000 UTC m=+0.061421804 container create 738592ada365ccb08faa36f169da927a4789dcec6527b71f31ead42b8b28cf1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_keller, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:17:38 np0005558241 systemd[1]: Started libpod-conmon-738592ada365ccb08faa36f169da927a4789dcec6527b71f31ead42b8b28cf1b.scope.
Dec 13 03:17:38 np0005558241 podman[272869]: 2025-12-13 08:17:38.376059967 +0000 UTC m=+0.031270492 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:17:38 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:17:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c23811d6dbee189e7a3956bc2696633bdea8a1ee0f377dbdd744a27d1218821b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:17:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c23811d6dbee189e7a3956bc2696633bdea8a1ee0f377dbdd744a27d1218821b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:17:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c23811d6dbee189e7a3956bc2696633bdea8a1ee0f377dbdd744a27d1218821b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:17:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c23811d6dbee189e7a3956bc2696633bdea8a1ee0f377dbdd744a27d1218821b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:17:38 np0005558241 podman[272869]: 2025-12-13 08:17:38.524941454 +0000 UTC m=+0.180151979 container init 738592ada365ccb08faa36f169da927a4789dcec6527b71f31ead42b8b28cf1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_keller, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 03:17:38 np0005558241 podman[272869]: 2025-12-13 08:17:38.531770142 +0000 UTC m=+0.186980647 container start 738592ada365ccb08faa36f169da927a4789dcec6527b71f31ead42b8b28cf1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_keller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:17:38 np0005558241 podman[272869]: 2025-12-13 08:17:38.536853287 +0000 UTC m=+0.192063792 container attach 738592ada365ccb08faa36f169da927a4789dcec6527b71f31ead42b8b28cf1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_keller, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:17:38 np0005558241 nova_compute[248510]: 2025-12-13 08:17:38.729 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:38 np0005558241 nova_compute[248510]: 2025-12-13 08:17:38.732 248514 DEBUG oslo_concurrency.lockutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:38 np0005558241 nova_compute[248510]: 2025-12-13 08:17:38.733 248514 DEBUG oslo_concurrency.lockutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:38Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:0a:44 10.100.0.14
Dec 13 03:17:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:38Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:0a:44 10.100.0.14
Dec 13 03:17:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:17:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1547: 321 pgs: 321 active+clean; 149 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.5 MiB/s wr, 174 op/s
Dec 13 03:17:39 np0005558241 nova_compute[248510]: 2025-12-13 08:17:39.110 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:17:39 np0005558241 lvm[272964]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:17:39 np0005558241 lvm[272964]: VG ceph_vg0 finished
Dec 13 03:17:39 np0005558241 lvm[272965]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:17:39 np0005558241 lvm[272965]: VG ceph_vg1 finished
Dec 13 03:17:39 np0005558241 lvm[272967]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:17:39 np0005558241 lvm[272967]: VG ceph_vg2 finished
Dec 13 03:17:39 np0005558241 crazy_keller[272886]: {}
Dec 13 03:17:39 np0005558241 systemd[1]: libpod-738592ada365ccb08faa36f169da927a4789dcec6527b71f31ead42b8b28cf1b.scope: Deactivated successfully.
Dec 13 03:17:39 np0005558241 systemd[1]: libpod-738592ada365ccb08faa36f169da927a4789dcec6527b71f31ead42b8b28cf1b.scope: Consumed 1.498s CPU time.
Dec 13 03:17:39 np0005558241 podman[272869]: 2025-12-13 08:17:39.480379077 +0000 UTC m=+1.135589602 container died 738592ada365ccb08faa36f169da927a4789dcec6527b71f31ead42b8b28cf1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_keller, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:17:39 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c23811d6dbee189e7a3956bc2696633bdea8a1ee0f377dbdd744a27d1218821b-merged.mount: Deactivated successfully.
Dec 13 03:17:39 np0005558241 podman[272869]: 2025-12-13 08:17:39.566612321 +0000 UTC m=+1.221822826 container remove 738592ada365ccb08faa36f169da927a4789dcec6527b71f31ead42b8b28cf1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_keller, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 03:17:39 np0005558241 systemd[1]: libpod-conmon-738592ada365ccb08faa36f169da927a4789dcec6527b71f31ead42b8b28cf1b.scope: Deactivated successfully.
Dec 13 03:17:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:17:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:17:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:17:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:17:39 np0005558241 nova_compute[248510]: 2025-12-13 08:17:39.820 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:17:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:17:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:17:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:17:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:17:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:17:40 np0005558241 nova_compute[248510]: 2025-12-13 08:17:40.195 248514 DEBUG nova.compute.manager [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:17:40 np0005558241 nova_compute[248510]: 2025-12-13 08:17:40.241 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:17:40 np0005558241 nova_compute[248510]: 2025-12-13 08:17:40.242 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:17:40 np0005558241 nova_compute[248510]: 2025-12-13 08:17:40.242 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:17:40 np0005558241 nova_compute[248510]: 2025-12-13 08:17:40.242 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:17:40 np0005558241 nova_compute[248510]: 2025-12-13 08:17:40.242 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:17:40 np0005558241 nova_compute[248510]: 2025-12-13 08:17:40.430 248514 DEBUG oslo_concurrency.lockutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:40 np0005558241 nova_compute[248510]: 2025-12-13 08:17:40.431 248514 DEBUG oslo_concurrency.lockutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:40 np0005558241 nova_compute[248510]: 2025-12-13 08:17:40.440 248514 DEBUG nova.virt.hardware [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:17:40 np0005558241 nova_compute[248510]: 2025-12-13 08:17:40.440 248514 INFO nova.compute.claims [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:17:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:17:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:17:40 np0005558241 nova_compute[248510]: 2025-12-13 08:17:40.770 248514 DEBUG oslo_concurrency.processutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:17:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 149 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.5 MiB/s wr, 139 op/s
Dec 13 03:17:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:17:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2607715563' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:17:41 np0005558241 nova_compute[248510]: 2025-12-13 08:17:41.393 248514 DEBUG oslo_concurrency.processutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:41 np0005558241 nova_compute[248510]: 2025-12-13 08:17:41.400 248514 DEBUG nova.compute.provider_tree [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:17:41 np0005558241 nova_compute[248510]: 2025-12-13 08:17:41.435 248514 DEBUG nova.scheduler.client.report [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:17:41 np0005558241 nova_compute[248510]: 2025-12-13 08:17:41.481 248514 DEBUG oslo_concurrency.lockutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:17:41 np0005558241 nova_compute[248510]: 2025-12-13 08:17:41.482 248514 DEBUG nova.compute.manager [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:17:41 np0005558241 nova_compute[248510]: 2025-12-13 08:17:41.894 248514 DEBUG nova.compute.manager [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:17:41 np0005558241 nova_compute[248510]: 2025-12-13 08:17:41.895 248514 DEBUG nova.network.neutron [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:17:41 np0005558241 nova_compute[248510]: 2025-12-13 08:17:41.941 248514 INFO nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:17:42 np0005558241 nova_compute[248510]: 2025-12-13 08:17:42.000 248514 DEBUG nova.compute.manager [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:17:42 np0005558241 nova_compute[248510]: 2025-12-13 08:17:42.217 248514 DEBUG nova.policy [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0d3578a9aa9a4f4facf4009a71564a31', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:17:42 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:42Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:25:e7:e6 10.100.0.11
Dec 13 03:17:42 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:42Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:25:e7:e6 10.100.0.11
Dec 13 03:17:42 np0005558241 nova_compute[248510]: 2025-12-13 08:17:42.435 248514 DEBUG nova.compute.manager [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:17:42 np0005558241 nova_compute[248510]: 2025-12-13 08:17:42.437 248514 DEBUG nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:17:42 np0005558241 nova_compute[248510]: 2025-12-13 08:17:42.437 248514 INFO nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Creating image(s)#033[00m
Dec 13 03:17:42 np0005558241 nova_compute[248510]: 2025-12-13 08:17:42.461 248514 DEBUG nova.storage.rbd_utils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 6b1e2b65-1398-4af8-9e8a-a8b99630eef8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:17:42 np0005558241 nova_compute[248510]: 2025-12-13 08:17:42.491 248514 DEBUG nova.storage.rbd_utils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 6b1e2b65-1398-4af8-9e8a-a8b99630eef8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:17:42 np0005558241 nova_compute[248510]: 2025-12-13 08:17:42.513 248514 DEBUG nova.storage.rbd_utils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 6b1e2b65-1398-4af8-9e8a-a8b99630eef8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:17:42 np0005558241 nova_compute[248510]: 2025-12-13 08:17:42.517 248514 DEBUG oslo_concurrency.processutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:17:42 np0005558241 nova_compute[248510]: 2025-12-13 08:17:42.590 248514 DEBUG oslo_concurrency.processutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:42 np0005558241 nova_compute[248510]: 2025-12-13 08:17:42.591 248514 DEBUG oslo_concurrency.lockutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:42 np0005558241 nova_compute[248510]: 2025-12-13 08:17:42.592 248514 DEBUG oslo_concurrency.lockutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:42 np0005558241 nova_compute[248510]: 2025-12-13 08:17:42.592 248514 DEBUG oslo_concurrency.lockutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:17:42 np0005558241 nova_compute[248510]: 2025-12-13 08:17:42.612 248514 DEBUG nova.storage.rbd_utils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 6b1e2b65-1398-4af8-9e8a-a8b99630eef8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:17:42 np0005558241 nova_compute[248510]: 2025-12-13 08:17:42.615 248514 DEBUG oslo_concurrency.processutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 6b1e2b65-1398-4af8-9e8a-a8b99630eef8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:17:42 np0005558241 nova_compute[248510]: 2025-12-13 08:17:42.777 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:17:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1549: 321 pgs: 321 active+clean; 171 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 122 op/s
Dec 13 03:17:43 np0005558241 nova_compute[248510]: 2025-12-13 08:17:43.051 248514 DEBUG oslo_concurrency.processutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 6b1e2b65-1398-4af8-9e8a-a8b99630eef8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:43 np0005558241 nova_compute[248510]: 2025-12-13 08:17:43.110 248514 DEBUG nova.storage.rbd_utils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] resizing rbd image 6b1e2b65-1398-4af8-9e8a-a8b99630eef8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:17:43 np0005558241 nova_compute[248510]: 2025-12-13 08:17:43.228 248514 DEBUG nova.objects.instance [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'migration_context' on Instance uuid 6b1e2b65-1398-4af8-9e8a-a8b99630eef8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:17:43 np0005558241 nova_compute[248510]: 2025-12-13 08:17:43.339 248514 DEBUG nova.network.neutron [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Successfully created port: 4b46a3fe-06cf-4169-9e52-49c6d076be13 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:17:43 np0005558241 nova_compute[248510]: 2025-12-13 08:17:43.344 248514 DEBUG nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:17:43 np0005558241 nova_compute[248510]: 2025-12-13 08:17:43.344 248514 DEBUG nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Ensure instance console log exists: /var/lib/nova/instances/6b1e2b65-1398-4af8-9e8a-a8b99630eef8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:17:43 np0005558241 nova_compute[248510]: 2025-12-13 08:17:43.344 248514 DEBUG oslo_concurrency.lockutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:43 np0005558241 nova_compute[248510]: 2025-12-13 08:17:43.345 248514 DEBUG oslo_concurrency.lockutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:43 np0005558241 nova_compute[248510]: 2025-12-13 08:17:43.345 248514 DEBUG oslo_concurrency.lockutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:17:43 np0005558241 nova_compute[248510]: 2025-12-13 08:17:43.764 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:17:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 241 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.0 MiB/s wr, 212 op/s
Dec 13 03:17:44 np0005558241 nova_compute[248510]: 2025-12-13 08:17:44.866 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 241 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 667 KiB/s rd, 6.0 MiB/s wr, 152 op/s
Dec 13 03:17:48 np0005558241 nova_compute[248510]: 2025-12-13 08:17:48.767 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:17:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 246 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 667 KiB/s rd, 6.0 MiB/s wr, 154 op/s
Dec 13 03:17:49 np0005558241 nova_compute[248510]: 2025-12-13 08:17:49.883 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:50.458 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:17:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:50.459 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:17:50 np0005558241 nova_compute[248510]: 2025-12-13 08:17:50.459 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 246 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 471 KiB/s rd, 4.6 MiB/s wr, 117 op/s
Dec 13 03:17:51 np0005558241 nova_compute[248510]: 2025-12-13 08:17:51.180 248514 DEBUG nova.network.neutron [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Successfully updated port: 4b46a3fe-06cf-4169-9e52-49c6d076be13 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:17:51 np0005558241 nova_compute[248510]: 2025-12-13 08:17:51.297 248514 DEBUG oslo_concurrency.lockutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "refresh_cache-6b1e2b65-1398-4af8-9e8a-a8b99630eef8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:17:51 np0005558241 nova_compute[248510]: 2025-12-13 08:17:51.297 248514 DEBUG oslo_concurrency.lockutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquired lock "refresh_cache-6b1e2b65-1398-4af8-9e8a-a8b99630eef8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:17:51 np0005558241 nova_compute[248510]: 2025-12-13 08:17:51.298 248514 DEBUG nova.network.neutron [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:17:51 np0005558241 nova_compute[248510]: 2025-12-13 08:17:51.504 248514 DEBUG nova.compute.manager [req-01ef46ec-613f-4920-b735-3113c9215f00 req-28c6aed9-4d30-4b9e-9fd1-14b8752f7daa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received event network-changed-4b46a3fe-06cf-4169-9e52-49c6d076be13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:17:51 np0005558241 nova_compute[248510]: 2025-12-13 08:17:51.505 248514 DEBUG nova.compute.manager [req-01ef46ec-613f-4920-b735-3113c9215f00 req-28c6aed9-4d30-4b9e-9fd1-14b8752f7daa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Refreshing instance network info cache due to event network-changed-4b46a3fe-06cf-4169-9e52-49c6d076be13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:17:51 np0005558241 nova_compute[248510]: 2025-12-13 08:17:51.505 248514 DEBUG oslo_concurrency.lockutils [req-01ef46ec-613f-4920-b735-3113c9215f00 req-28c6aed9-4d30-4b9e-9fd1-14b8752f7daa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-6b1e2b65-1398-4af8-9e8a-a8b99630eef8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:17:51 np0005558241 nova_compute[248510]: 2025-12-13 08:17:51.544 248514 DEBUG nova.network.neutron [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:17:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 246 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 471 KiB/s rd, 4.6 MiB/s wr, 117 op/s
Dec 13 03:17:53 np0005558241 nova_compute[248510]: 2025-12-13 08:17:53.656 248514 DEBUG nova.network.neutron [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Updating instance_info_cache with network_info: [{"id": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "address": "fa:16:3e:69:e5:d6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b46a3fe-06", "ovs_interfaceid": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:17:53 np0005558241 nova_compute[248510]: 2025-12-13 08:17:53.769 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.231 248514 DEBUG oslo_concurrency.lockutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Releasing lock "refresh_cache-6b1e2b65-1398-4af8-9e8a-a8b99630eef8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.232 248514 DEBUG nova.compute.manager [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Instance network_info: |[{"id": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "address": "fa:16:3e:69:e5:d6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b46a3fe-06", "ovs_interfaceid": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.232 248514 DEBUG oslo_concurrency.lockutils [req-01ef46ec-613f-4920-b735-3113c9215f00 req-28c6aed9-4d30-4b9e-9fd1-14b8752f7daa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-6b1e2b65-1398-4af8-9e8a-a8b99630eef8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.232 248514 DEBUG nova.network.neutron [req-01ef46ec-613f-4920-b735-3113c9215f00 req-28c6aed9-4d30-4b9e-9fd1-14b8752f7daa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Refreshing network info cache for port 4b46a3fe-06cf-4169-9e52-49c6d076be13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.235 248514 DEBUG nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Start _get_guest_xml network_info=[{"id": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "address": "fa:16:3e:69:e5:d6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b46a3fe-06", "ovs_interfaceid": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.240 248514 WARNING nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.252 248514 DEBUG nova.virt.libvirt.host [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.253 248514 DEBUG nova.virt.libvirt.host [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.257 248514 DEBUG nova.virt.libvirt.host [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.258 248514 DEBUG nova.virt.libvirt.host [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.258 248514 DEBUG nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.259 248514 DEBUG nova.virt.hardware [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.259 248514 DEBUG nova.virt.hardware [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.259 248514 DEBUG nova.virt.hardware [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.260 248514 DEBUG nova.virt.hardware [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.260 248514 DEBUG nova.virt.hardware [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.260 248514 DEBUG nova.virt.hardware [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.260 248514 DEBUG nova.virt.hardware [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.261 248514 DEBUG nova.virt.hardware [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.261 248514 DEBUG nova.virt.hardware [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.261 248514 DEBUG nova.virt.hardware [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.261 248514 DEBUG nova.virt.hardware [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.264 248514 DEBUG oslo_concurrency.processutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:17:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 246 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 3.2 MiB/s wr, 95 op/s
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.886 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:17:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/624000065' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.949 248514 DEBUG oslo_concurrency.processutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.686s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.977 248514 DEBUG nova.storage.rbd_utils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 6b1e2b65-1398-4af8-9e8a-a8b99630eef8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:17:54 np0005558241 nova_compute[248510]: 2025-12-13 08:17:54.984 248514 DEBUG oslo_concurrency.processutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:17:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:55.399 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:55.399 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:55.400 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:17:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:55.462 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:17:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1987096175' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.610 248514 DEBUG oslo_concurrency.processutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.613 248514 DEBUG nova.virt.libvirt.vif [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1421874999',display_name='tempest-ServersAdminTestJSON-server-1421874999',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1421874999',id=19,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-fdarv985',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTestJSON-10
79129122-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:17:42Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=6b1e2b65-1398-4af8-9e8a-a8b99630eef8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "address": "fa:16:3e:69:e5:d6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b46a3fe-06", "ovs_interfaceid": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.613 248514 DEBUG nova.network.os_vif_util [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "address": "fa:16:3e:69:e5:d6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b46a3fe-06", "ovs_interfaceid": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.614 248514 DEBUG nova.network.os_vif_util [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:e5:d6,bridge_name='br-int',has_traffic_filtering=True,id=4b46a3fe-06cf-4169-9e52-49c6d076be13,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b46a3fe-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.616 248514 DEBUG nova.objects.instance [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'pci_devices' on Instance uuid 6b1e2b65-1398-4af8-9e8a-a8b99630eef8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.642 248514 DEBUG nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:17:55 np0005558241 nova_compute[248510]:  <uuid>6b1e2b65-1398-4af8-9e8a-a8b99630eef8</uuid>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:  <name>instance-00000013</name>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersAdminTestJSON-server-1421874999</nova:name>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:17:54</nova:creationTime>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:17:55 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:        <nova:user uuid="0d3578a9aa9a4f4facf4009a71564a31">tempest-ServersAdminTestJSON-1079129122-project-member</nova:user>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:        <nova:project uuid="1f6b42f9712f4afea6a07d08373b56ca">tempest-ServersAdminTestJSON-1079129122</nova:project>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:        <nova:port uuid="4b46a3fe-06cf-4169-9e52-49c6d076be13">
Dec 13 03:17:55 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <entry name="serial">6b1e2b65-1398-4af8-9e8a-a8b99630eef8</entry>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <entry name="uuid">6b1e2b65-1398-4af8-9e8a-a8b99630eef8</entry>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/6b1e2b65-1398-4af8-9e8a-a8b99630eef8_disk">
Dec 13 03:17:55 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:17:55 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/6b1e2b65-1398-4af8-9e8a-a8b99630eef8_disk.config">
Dec 13 03:17:55 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:17:55 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:69:e5:d6"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <target dev="tap4b46a3fe-06"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/6b1e2b65-1398-4af8-9e8a-a8b99630eef8/console.log" append="off"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:17:55 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:17:55 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:17:55 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:17:55 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.644 248514 DEBUG nova.compute.manager [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Preparing to wait for external event network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.645 248514 DEBUG oslo_concurrency.lockutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.645 248514 DEBUG oslo_concurrency.lockutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.645 248514 DEBUG oslo_concurrency.lockutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.646 248514 DEBUG nova.virt.libvirt.vif [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1421874999',display_name='tempest-ServersAdminTestJSON-server-1421874999',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1421874999',id=19,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-fdarv985',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminT
estJSON-1079129122-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:17:42Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=6b1e2b65-1398-4af8-9e8a-a8b99630eef8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "address": "fa:16:3e:69:e5:d6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b46a3fe-06", "ovs_interfaceid": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.646 248514 DEBUG nova.network.os_vif_util [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "address": "fa:16:3e:69:e5:d6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b46a3fe-06", "ovs_interfaceid": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.647 248514 DEBUG nova.network.os_vif_util [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:e5:d6,bridge_name='br-int',has_traffic_filtering=True,id=4b46a3fe-06cf-4169-9e52-49c6d076be13,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b46a3fe-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.648 248514 DEBUG os_vif [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:e5:d6,bridge_name='br-int',has_traffic_filtering=True,id=4b46a3fe-06cf-4169-9e52-49c6d076be13,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b46a3fe-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.649 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.649 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.650 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.655 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.656 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b46a3fe-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.656 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4b46a3fe-06, col_values=(('external_ids', {'iface-id': '4b46a3fe-06cf-4169-9e52-49c6d076be13', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:e5:d6', 'vm-uuid': '6b1e2b65-1398-4af8-9e8a-a8b99630eef8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:55 np0005558241 NetworkManager[50376]: <info>  [1765613875.6592] manager: (tap4b46a3fe-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.658 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.664 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.666 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.667 248514 INFO os_vif [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:e5:d6,bridge_name='br-int',has_traffic_filtering=True,id=4b46a3fe-06cf-4169-9e52-49c6d076be13,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b46a3fe-06')#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.887 248514 DEBUG nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.888 248514 DEBUG nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.888 248514 DEBUG nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] No VIF found with MAC fa:16:3e:69:e5:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.889 248514 INFO nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Using config drive#033[00m
Dec 13 03:17:55 np0005558241 nova_compute[248510]: 2025-12-13 08:17:55.911 248514 DEBUG nova.storage.rbd_utils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 6b1e2b65-1398-4af8-9e8a-a8b99630eef8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:17:56 np0005558241 nova_compute[248510]: 2025-12-13 08:17:56.331 248514 DEBUG nova.network.neutron [req-01ef46ec-613f-4920-b735-3113c9215f00 req-28c6aed9-4d30-4b9e-9fd1-14b8752f7daa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Updated VIF entry in instance network info cache for port 4b46a3fe-06cf-4169-9e52-49c6d076be13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:17:56 np0005558241 nova_compute[248510]: 2025-12-13 08:17:56.332 248514 DEBUG nova.network.neutron [req-01ef46ec-613f-4920-b735-3113c9215f00 req-28c6aed9-4d30-4b9e-9fd1-14b8752f7daa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Updating instance_info_cache with network_info: [{"id": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "address": "fa:16:3e:69:e5:d6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b46a3fe-06", "ovs_interfaceid": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:17:56 np0005558241 nova_compute[248510]: 2025-12-13 08:17:56.372 248514 DEBUG oslo_concurrency.lockutils [req-01ef46ec-613f-4920-b735-3113c9215f00 req-28c6aed9-4d30-4b9e-9fd1-14b8752f7daa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-6b1e2b65-1398-4af8-9e8a-a8b99630eef8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:17:56 np0005558241 nova_compute[248510]: 2025-12-13 08:17:56.673 248514 INFO nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Creating config drive at /var/lib/nova/instances/6b1e2b65-1398-4af8-9e8a-a8b99630eef8/disk.config#033[00m
Dec 13 03:17:56 np0005558241 nova_compute[248510]: 2025-12-13 08:17:56.679 248514 DEBUG oslo_concurrency.processutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6b1e2b65-1398-4af8-9e8a-a8b99630eef8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxh0geawa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:17:56 np0005558241 nova_compute[248510]: 2025-12-13 08:17:56.817 248514 DEBUG oslo_concurrency.processutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6b1e2b65-1398-4af8-9e8a-a8b99630eef8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxh0geawa" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:56 np0005558241 nova_compute[248510]: 2025-12-13 08:17:56.846 248514 DEBUG nova.storage.rbd_utils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 6b1e2b65-1398-4af8-9e8a-a8b99630eef8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:17:56 np0005558241 nova_compute[248510]: 2025-12-13 08:17:56.850 248514 DEBUG oslo_concurrency.processutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6b1e2b65-1398-4af8-9e8a-a8b99630eef8/disk.config 6b1e2b65-1398-4af8-9e8a-a8b99630eef8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:17:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 246 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s wr, 1 op/s
Dec 13 03:17:57 np0005558241 nova_compute[248510]: 2025-12-13 08:17:57.364 248514 DEBUG oslo_concurrency.processutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6b1e2b65-1398-4af8-9e8a-a8b99630eef8/disk.config 6b1e2b65-1398-4af8-9e8a-a8b99630eef8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:17:57 np0005558241 nova_compute[248510]: 2025-12-13 08:17:57.366 248514 INFO nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Deleting local config drive /var/lib/nova/instances/6b1e2b65-1398-4af8-9e8a-a8b99630eef8/disk.config because it was imported into RBD.#033[00m
Dec 13 03:17:57 np0005558241 kernel: tap4b46a3fe-06: entered promiscuous mode
Dec 13 03:17:57 np0005558241 NetworkManager[50376]: <info>  [1765613877.4157] manager: (tap4b46a3fe-06): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Dec 13 03:17:57 np0005558241 nova_compute[248510]: 2025-12-13 08:17:57.417 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:57 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:57Z|00076|binding|INFO|Claiming lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 for this chassis.
Dec 13 03:17:57 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:57Z|00077|binding|INFO|4b46a3fe-06cf-4169-9e52-49c6d076be13: Claiming fa:16:3e:69:e5:d6 10.100.0.8
Dec 13 03:17:57 np0005558241 nova_compute[248510]: 2025-12-13 08:17:57.435 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:57 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:57Z|00078|binding|INFO|Setting lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 ovn-installed in OVS
Dec 13 03:17:57 np0005558241 nova_compute[248510]: 2025-12-13 08:17:57.436 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:57 np0005558241 systemd-udevd[273331]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:17:57 np0005558241 systemd-machined[210538]: New machine qemu-21-instance-00000013.
Dec 13 03:17:57 np0005558241 NetworkManager[50376]: <info>  [1765613877.4637] device (tap4b46a3fe-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:17:57 np0005558241 NetworkManager[50376]: <info>  [1765613877.4644] device (tap4b46a3fe-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:17:57 np0005558241 systemd[1]: Started Virtual Machine qemu-21-instance-00000013.
Dec 13 03:17:57 np0005558241 ovn_controller[148476]: 2025-12-13T08:17:57Z|00079|binding|INFO|Setting lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 up in Southbound
Dec 13 03:17:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:57.565 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:e5:d6 10.100.0.8'], port_security=['fa:16:3e:69:e5:d6 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6b1e2b65-1398-4af8-9e8a-a8b99630eef8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4b46a3fe-06cf-4169-9e52-49c6d076be13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:17:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:57.567 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4b46a3fe-06cf-4169-9e52-49c6d076be13 in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 bound to our chassis#033[00m
Dec 13 03:17:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:57.568 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5#033[00m
Dec 13 03:17:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:57.588 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3a55cc22-f287-4c73-a060-47e6d667e4b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:57.628 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[874aea02-ed93-4715-8dcc-f9fa737f8777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:57.632 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec03b22-31a2-44e1-97ce-aaf30d7f7c5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:57.665 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba2f7ed-09fb-4406-b1a4-f2d0b6424fd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:57.689 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9a96d0b9-4f2b-46cd-ba2e-1f9ea662de0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3139e1f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:6e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641497, 'reachable_time': 35036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273346, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:57.709 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[683754d9-f4c5-4848-95f8-f27b9b100b11]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641512, 'tstamp': 641512}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273347, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641515, 'tstamp': 641515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273347, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:17:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:57.711 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:57 np0005558241 nova_compute[248510]: 2025-12-13 08:17:57.713 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:57.714 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3139e1f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:57.714 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:17:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:57.715 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3139e1f-d0, col_values=(('external_ids', {'iface-id': 'c8850a7f-53d6-4776-8c0e-7fb474fe52de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:17:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:17:57.716 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:17:58 np0005558241 nova_compute[248510]: 2025-12-13 08:17:58.003 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613878.0027251, 6b1e2b65-1398-4af8-9e8a-a8b99630eef8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:17:58 np0005558241 nova_compute[248510]: 2025-12-13 08:17:58.004 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] VM Started (Lifecycle Event)#033[00m
Dec 13 03:17:58 np0005558241 nova_compute[248510]: 2025-12-13 08:17:58.164 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:17:58 np0005558241 nova_compute[248510]: 2025-12-13 08:17:58.169 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613878.0031016, 6b1e2b65-1398-4af8-9e8a-a8b99630eef8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:17:58 np0005558241 nova_compute[248510]: 2025-12-13 08:17:58.169 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:17:58 np0005558241 nova_compute[248510]: 2025-12-13 08:17:58.430 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:17:58 np0005558241 nova_compute[248510]: 2025-12-13 08:17:58.435 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:17:58 np0005558241 nova_compute[248510]: 2025-12-13 08:17:58.574 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:17:58 np0005558241 nova_compute[248510]: 2025-12-13 08:17:58.772 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:17:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:17:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 246 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 86 KiB/s wr, 9 op/s
Dec 13 03:18:00 np0005558241 nova_compute[248510]: 2025-12-13 08:18:00.660 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 246 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 17 KiB/s wr, 8 op/s
Dec 13 03:18:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1559: 321 pgs: 321 active+clean; 246 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 17 KiB/s wr, 10 op/s
Dec 13 03:18:03 np0005558241 nova_compute[248510]: 2025-12-13 08:18:03.775 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:18:03 np0005558241 podman[273392]: 2025-12-13 08:18:03.985461935 +0000 UTC m=+0.066669433 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 13 03:18:04 np0005558241 podman[273391]: 2025-12-13 08:18:04.013530926 +0000 UTC m=+0.098843565 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:18:04 np0005558241 podman[273390]: 2025-12-13 08:18:04.041807993 +0000 UTC m=+0.126907087 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 13 03:18:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1560: 321 pgs: 321 active+clean; 246 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 18 KiB/s wr, 10 op/s
Dec 13 03:18:05 np0005558241 nova_compute[248510]: 2025-12-13 08:18:05.662 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 246 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 14 KiB/s wr, 10 op/s
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.132 248514 DEBUG nova.compute.manager [req-2d143a78-a4f0-4634-871e-69e9c375809a req-998499a9-c2d7-48ea-a3be-3b95a58b2518 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received event network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.132 248514 DEBUG oslo_concurrency.lockutils [req-2d143a78-a4f0-4634-871e-69e9c375809a req-998499a9-c2d7-48ea-a3be-3b95a58b2518 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.132 248514 DEBUG oslo_concurrency.lockutils [req-2d143a78-a4f0-4634-871e-69e9c375809a req-998499a9-c2d7-48ea-a3be-3b95a58b2518 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.133 248514 DEBUG oslo_concurrency.lockutils [req-2d143a78-a4f0-4634-871e-69e9c375809a req-998499a9-c2d7-48ea-a3be-3b95a58b2518 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.133 248514 DEBUG nova.compute.manager [req-2d143a78-a4f0-4634-871e-69e9c375809a req-998499a9-c2d7-48ea-a3be-3b95a58b2518 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Processing event network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.134 248514 DEBUG nova.compute.manager [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Instance event wait completed in 9 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.138 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613887.1384144, 6b1e2b65-1398-4af8-9e8a-a8b99630eef8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.139 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.141 248514 DEBUG nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.146 248514 INFO nova.virt.libvirt.driver [-] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Instance spawned successfully.#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.146 248514 DEBUG nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.293 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.298 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.302 248514 DEBUG nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.303 248514 DEBUG nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.303 248514 DEBUG nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.304 248514 DEBUG nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.305 248514 DEBUG nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.305 248514 DEBUG nova.virt.libvirt.driver [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.436 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.544 248514 INFO nova.compute.manager [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Took 25.11 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.545 248514 DEBUG nova.compute.manager [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.611 248514 INFO nova.compute.manager [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Took 27.20 seconds to build instance.#033[00m
Dec 13 03:18:07 np0005558241 nova_compute[248510]: 2025-12-13 08:18:07.648 248514 DEBUG oslo_concurrency.lockutils [None req-a16ae966-e164-49d7-9143-ab8e9a963587 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 28.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:08 np0005558241 nova_compute[248510]: 2025-12-13 08:18:08.776 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:18:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1562: 321 pgs: 321 active+clean; 246 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 14 KiB/s wr, 70 op/s
Dec 13 03:18:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:18:09
Dec 13 03:18:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:18:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:18:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'vms']
Dec 13 03:18:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:18:09 np0005558241 nova_compute[248510]: 2025-12-13 08:18:09.243 248514 DEBUG nova.compute.manager [req-80c3deb0-2406-4d93-8be8-4bf2e80861ad req-301bcd94-c876-4116-83ed-269b28bf8944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received event network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:18:09 np0005558241 nova_compute[248510]: 2025-12-13 08:18:09.243 248514 DEBUG oslo_concurrency.lockutils [req-80c3deb0-2406-4d93-8be8-4bf2e80861ad req-301bcd94-c876-4116-83ed-269b28bf8944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:18:09 np0005558241 nova_compute[248510]: 2025-12-13 08:18:09.244 248514 DEBUG oslo_concurrency.lockutils [req-80c3deb0-2406-4d93-8be8-4bf2e80861ad req-301bcd94-c876-4116-83ed-269b28bf8944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:18:09 np0005558241 nova_compute[248510]: 2025-12-13 08:18:09.244 248514 DEBUG oslo_concurrency.lockutils [req-80c3deb0-2406-4d93-8be8-4bf2e80861ad req-301bcd94-c876-4116-83ed-269b28bf8944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:09 np0005558241 nova_compute[248510]: 2025-12-13 08:18:09.244 248514 DEBUG nova.compute.manager [req-80c3deb0-2406-4d93-8be8-4bf2e80861ad req-301bcd94-c876-4116-83ed-269b28bf8944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] No waiting events found dispatching network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:18:09 np0005558241 nova_compute[248510]: 2025-12-13 08:18:09.244 248514 WARNING nova.compute.manager [req-80c3deb0-2406-4d93-8be8-4bf2e80861ad req-301bcd94-c876-4116-83ed-269b28bf8944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received unexpected event network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:18:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:18:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:18:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:18:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:18:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:18:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:18:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:18:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:18:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:18:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:18:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:18:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:18:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:18:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:18:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:18:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:18:10 np0005558241 nova_compute[248510]: 2025-12-13 08:18:10.665 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 246 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1023 B/s wr, 63 op/s
Dec 13 03:18:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 246 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 KiB/s wr, 66 op/s
Dec 13 03:18:13 np0005558241 nova_compute[248510]: 2025-12-13 08:18:13.777 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:18:14 np0005558241 nova_compute[248510]: 2025-12-13 08:18:14.507 248514 DEBUG oslo_concurrency.lockutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "0d64e209-19e7-4ad3-a790-43d04d832838" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:18:14 np0005558241 nova_compute[248510]: 2025-12-13 08:18:14.507 248514 DEBUG oslo_concurrency.lockutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:18:14 np0005558241 nova_compute[248510]: 2025-12-13 08:18:14.744 248514 DEBUG nova.compute.manager [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:18:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1565: 321 pgs: 321 active+clean; 246 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.7 KiB/s wr, 65 op/s
Dec 13 03:18:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:18:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1190033260' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:18:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:18:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1190033260' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:18:14 np0005558241 nova_compute[248510]: 2025-12-13 08:18:14.978 248514 DEBUG oslo_concurrency.lockutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:18:14 np0005558241 nova_compute[248510]: 2025-12-13 08:18:14.978 248514 DEBUG oslo_concurrency.lockutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:18:14 np0005558241 nova_compute[248510]: 2025-12-13 08:18:14.989 248514 DEBUG nova.virt.hardware [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:18:14 np0005558241 nova_compute[248510]: 2025-12-13 08:18:14.990 248514 INFO nova.compute.claims [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:18:15 np0005558241 nova_compute[248510]: 2025-12-13 08:18:15.268 248514 DEBUG oslo_concurrency.processutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:18:15 np0005558241 nova_compute[248510]: 2025-12-13 08:18:15.667 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:18:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3773160274' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:18:15 np0005558241 nova_compute[248510]: 2025-12-13 08:18:15.901 248514 DEBUG oslo_concurrency.processutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:18:15 np0005558241 nova_compute[248510]: 2025-12-13 08:18:15.909 248514 DEBUG nova.compute.provider_tree [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:18:16 np0005558241 nova_compute[248510]: 2025-12-13 08:18:16.143 248514 DEBUG nova.scheduler.client.report [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:18:16 np0005558241 nova_compute[248510]: 2025-12-13 08:18:16.684 248514 DEBUG oslo_concurrency.lockutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:16 np0005558241 nova_compute[248510]: 2025-12-13 08:18:16.685 248514 DEBUG nova.compute.manager [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:18:16 np0005558241 nova_compute[248510]: 2025-12-13 08:18:16.746 248514 DEBUG nova.compute.manager [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:18:16 np0005558241 nova_compute[248510]: 2025-12-13 08:18:16.746 248514 DEBUG nova.network.neutron [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:18:16 np0005558241 nova_compute[248510]: 2025-12-13 08:18:16.775 248514 INFO nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:18:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 246 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.7 KiB/s wr, 64 op/s
Dec 13 03:18:17 np0005558241 nova_compute[248510]: 2025-12-13 08:18:17.007 248514 DEBUG nova.compute.manager [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:18:17 np0005558241 nova_compute[248510]: 2025-12-13 08:18:17.198 248514 DEBUG nova.policy [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0d3578a9aa9a4f4facf4009a71564a31', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:18:17 np0005558241 nova_compute[248510]: 2025-12-13 08:18:17.514 248514 DEBUG nova.compute.manager [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:18:17 np0005558241 nova_compute[248510]: 2025-12-13 08:18:17.515 248514 DEBUG nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:18:17 np0005558241 nova_compute[248510]: 2025-12-13 08:18:17.516 248514 INFO nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Creating image(s)#033[00m
Dec 13 03:18:17 np0005558241 nova_compute[248510]: 2025-12-13 08:18:17.537 248514 DEBUG nova.storage.rbd_utils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 0d64e209-19e7-4ad3-a790-43d04d832838_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:18:17 np0005558241 nova_compute[248510]: 2025-12-13 08:18:17.566 248514 DEBUG nova.storage.rbd_utils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 0d64e209-19e7-4ad3-a790-43d04d832838_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:18:17 np0005558241 nova_compute[248510]: 2025-12-13 08:18:17.592 248514 DEBUG nova.storage.rbd_utils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 0d64e209-19e7-4ad3-a790-43d04d832838_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:18:17 np0005558241 nova_compute[248510]: 2025-12-13 08:18:17.596 248514 DEBUG oslo_concurrency.processutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:18:17 np0005558241 nova_compute[248510]: 2025-12-13 08:18:17.688 248514 DEBUG oslo_concurrency.processutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:18:17 np0005558241 nova_compute[248510]: 2025-12-13 08:18:17.690 248514 DEBUG oslo_concurrency.lockutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:18:17 np0005558241 nova_compute[248510]: 2025-12-13 08:18:17.690 248514 DEBUG oslo_concurrency.lockutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:18:17 np0005558241 nova_compute[248510]: 2025-12-13 08:18:17.691 248514 DEBUG oslo_concurrency.lockutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:17 np0005558241 nova_compute[248510]: 2025-12-13 08:18:17.719 248514 DEBUG nova.storage.rbd_utils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 0d64e209-19e7-4ad3-a790-43d04d832838_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:18:17 np0005558241 nova_compute[248510]: 2025-12-13 08:18:17.726 248514 DEBUG oslo_concurrency.processutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0d64e209-19e7-4ad3-a790-43d04d832838_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:18:18 np0005558241 nova_compute[248510]: 2025-12-13 08:18:18.780 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:18:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1567: 321 pgs: 321 active+clean; 265 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 350 KiB/s wr, 79 op/s
Dec 13 03:18:19 np0005558241 nova_compute[248510]: 2025-12-13 08:18:19.118 248514 DEBUG oslo_concurrency.processutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0d64e209-19e7-4ad3-a790-43d04d832838_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:18:19 np0005558241 nova_compute[248510]: 2025-12-13 08:18:19.194 248514 DEBUG nova.storage.rbd_utils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] resizing rbd image 0d64e209-19e7-4ad3-a790-43d04d832838_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:18:19 np0005558241 nova_compute[248510]: 2025-12-13 08:18:19.291 248514 DEBUG nova.network.neutron [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Successfully created port: 2b62e133-0b9e-4c9d-9219-2a7e55c381ab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:18:20 np0005558241 nova_compute[248510]: 2025-12-13 08:18:20.476 248514 DEBUG nova.objects.instance [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'migration_context' on Instance uuid 0d64e209-19e7-4ad3-a790-43d04d832838 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:18:20 np0005558241 nova_compute[248510]: 2025-12-13 08:18:20.671 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019566272861783513 of space, bias 1.0, pg target 0.5869881858535054 quantized to 32 (current 32)
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665837573005312 of space, bias 1.0, pg target 0.19997512719015936 quantized to 32 (current 32)
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.633028539493772e-07 of space, bias 4.0, pg target 0.0010359634247392527 quantized to 16 (current 32)
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:18:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1568: 321 pgs: 321 active+clean; 265 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 350 KiB/s wr, 18 op/s
Dec 13 03:18:22 np0005558241 nova_compute[248510]: 2025-12-13 08:18:22.062 248514 DEBUG nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:18:22 np0005558241 nova_compute[248510]: 2025-12-13 08:18:22.063 248514 DEBUG nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Ensure instance console log exists: /var/lib/nova/instances/0d64e209-19e7-4ad3-a790-43d04d832838/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:18:22 np0005558241 nova_compute[248510]: 2025-12-13 08:18:22.064 248514 DEBUG oslo_concurrency.lockutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:18:22 np0005558241 nova_compute[248510]: 2025-12-13 08:18:22.064 248514 DEBUG oslo_concurrency.lockutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:18:22 np0005558241 nova_compute[248510]: 2025-12-13 08:18:22.064 248514 DEBUG oslo_concurrency.lockutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1569: 321 pgs: 321 active+clean; 289 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 2.0 MiB/s wr, 26 op/s
Dec 13 03:18:23 np0005558241 nova_compute[248510]: 2025-12-13 08:18:23.807 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:18:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 321 active+clean; 314 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 3.8 MiB/s wr, 52 op/s
Dec 13 03:18:25 np0005558241 nova_compute[248510]: 2025-12-13 08:18:25.675 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:18:25Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:e5:d6 10.100.0.8
Dec 13 03:18:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:18:25Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:e5:d6 10.100.0.8
Dec 13 03:18:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 314 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 3.8 MiB/s wr, 51 op/s
Dec 13 03:18:27 np0005558241 nova_compute[248510]: 2025-12-13 08:18:27.401 248514 DEBUG nova.network.neutron [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Successfully updated port: 2b62e133-0b9e-4c9d-9219-2a7e55c381ab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:18:27 np0005558241 nova_compute[248510]: 2025-12-13 08:18:27.490 248514 DEBUG oslo_concurrency.lockutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "refresh_cache-0d64e209-19e7-4ad3-a790-43d04d832838" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:18:27 np0005558241 nova_compute[248510]: 2025-12-13 08:18:27.491 248514 DEBUG oslo_concurrency.lockutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquired lock "refresh_cache-0d64e209-19e7-4ad3-a790-43d04d832838" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:18:27 np0005558241 nova_compute[248510]: 2025-12-13 08:18:27.491 248514 DEBUG nova.network.neutron [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:18:28 np0005558241 nova_compute[248510]: 2025-12-13 08:18:28.179 248514 DEBUG nova.network.neutron [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:18:28 np0005558241 nova_compute[248510]: 2025-12-13 08:18:28.359 248514 DEBUG nova.compute.manager [req-6e5678f7-0044-4e53-afa7-b741ed950eb5 req-e825d636-8984-4c39-bd9f-6967562d593c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Received event network-changed-2b62e133-0b9e-4c9d-9219-2a7e55c381ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:18:28 np0005558241 nova_compute[248510]: 2025-12-13 08:18:28.359 248514 DEBUG nova.compute.manager [req-6e5678f7-0044-4e53-afa7-b741ed950eb5 req-e825d636-8984-4c39-bd9f-6967562d593c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Refreshing instance network info cache due to event network-changed-2b62e133-0b9e-4c9d-9219-2a7e55c381ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:18:28 np0005558241 nova_compute[248510]: 2025-12-13 08:18:28.360 248514 DEBUG oslo_concurrency.lockutils [req-6e5678f7-0044-4e53-afa7-b741ed950eb5 req-e825d636-8984-4c39-bd9f-6967562d593c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-0d64e209-19e7-4ad3-a790-43d04d832838" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:18:28 np0005558241 nova_compute[248510]: 2025-12-13 08:18:28.726 248514 DEBUG oslo_concurrency.lockutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "e686bddc-956e-4714-868c-9a29dda243d2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:18:28 np0005558241 nova_compute[248510]: 2025-12-13 08:18:28.727 248514 DEBUG oslo_concurrency.lockutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "e686bddc-956e-4714-868c-9a29dda243d2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:18:28 np0005558241 nova_compute[248510]: 2025-12-13 08:18:28.756 248514 DEBUG nova.compute.manager [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:18:28 np0005558241 nova_compute[248510]: 2025-12-13 08:18:28.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:18:28 np0005558241 nova_compute[248510]: 2025-12-13 08:18:28.809 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:28 np0005558241 nova_compute[248510]: 2025-12-13 08:18:28.858 248514 DEBUG oslo_concurrency.lockutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:18:28 np0005558241 nova_compute[248510]: 2025-12-13 08:18:28.859 248514 DEBUG oslo_concurrency.lockutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:18:28 np0005558241 nova_compute[248510]: 2025-12-13 08:18:28.869 248514 DEBUG nova.virt.hardware [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:18:28 np0005558241 nova_compute[248510]: 2025-12-13 08:18:28.869 248514 INFO nova.compute.claims [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:18:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1572: 321 pgs: 321 active+clean; 318 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 3.8 MiB/s wr, 75 op/s
Dec 13 03:18:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.068 248514 DEBUG oslo_concurrency.processutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.492 248514 DEBUG nova.network.neutron [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Updating instance_info_cache with network_info: [{"id": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "address": "fa:16:3e:23:ac:d2", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b62e133-0b", "ovs_interfaceid": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.519 248514 DEBUG oslo_concurrency.lockutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Releasing lock "refresh_cache-0d64e209-19e7-4ad3-a790-43d04d832838" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.519 248514 DEBUG nova.compute.manager [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Instance network_info: |[{"id": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "address": "fa:16:3e:23:ac:d2", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b62e133-0b", "ovs_interfaceid": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.520 248514 DEBUG oslo_concurrency.lockutils [req-6e5678f7-0044-4e53-afa7-b741ed950eb5 req-e825d636-8984-4c39-bd9f-6967562d593c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-0d64e209-19e7-4ad3-a790-43d04d832838" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.520 248514 DEBUG nova.network.neutron [req-6e5678f7-0044-4e53-afa7-b741ed950eb5 req-e825d636-8984-4c39-bd9f-6967562d593c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Refreshing network info cache for port 2b62e133-0b9e-4c9d-9219-2a7e55c381ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.524 248514 DEBUG nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Start _get_guest_xml network_info=[{"id": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "address": "fa:16:3e:23:ac:d2", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b62e133-0b", "ovs_interfaceid": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.529 248514 WARNING nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.535 248514 DEBUG nova.virt.libvirt.host [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.535 248514 DEBUG nova.virt.libvirt.host [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.548 248514 DEBUG nova.virt.libvirt.host [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.549 248514 DEBUG nova.virt.libvirt.host [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.550 248514 DEBUG nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.550 248514 DEBUG nova.virt.hardware [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.551 248514 DEBUG nova.virt.hardware [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.551 248514 DEBUG nova.virt.hardware [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.551 248514 DEBUG nova.virt.hardware [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.552 248514 DEBUG nova.virt.hardware [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.552 248514 DEBUG nova.virt.hardware [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.552 248514 DEBUG nova.virt.hardware [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.553 248514 DEBUG nova.virt.hardware [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.553 248514 DEBUG nova.virt.hardware [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.553 248514 DEBUG nova.virt.hardware [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.553 248514 DEBUG nova.virt.hardware [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:18:29 np0005558241 nova_compute[248510]: 2025-12-13 08:18:29.558 248514 DEBUG oslo_concurrency.processutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:18:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:18:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1393795216' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:18:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:18:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3628286845' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:18:30 np0005558241 nova_compute[248510]: 2025-12-13 08:18:30.565 248514 DEBUG oslo_concurrency.processutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:18:30 np0005558241 nova_compute[248510]: 2025-12-13 08:18:30.589 248514 DEBUG nova.storage.rbd_utils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 0d64e209-19e7-4ad3-a790-43d04d832838_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:18:30 np0005558241 nova_compute[248510]: 2025-12-13 08:18:30.601 248514 DEBUG oslo_concurrency.processutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:18:30 np0005558241 nova_compute[248510]: 2025-12-13 08:18:30.633 248514 DEBUG oslo_concurrency.processutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:18:30 np0005558241 nova_compute[248510]: 2025-12-13 08:18:30.640 248514 DEBUG nova.compute.provider_tree [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:18:30 np0005558241 nova_compute[248510]: 2025-12-13 08:18:30.680 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:30 np0005558241 nova_compute[248510]: 2025-12-13 08:18:30.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:18:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1573: 321 pgs: 321 active+clean; 318 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 241 KiB/s rd, 3.5 MiB/s wr, 60 op/s
Dec 13 03:18:30 np0005558241 nova_compute[248510]: 2025-12-13 08:18:30.894 248514 DEBUG nova.scheduler.client.report [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:18:31 np0005558241 nova_compute[248510]: 2025-12-13 08:18:31.274 248514 DEBUG oslo_concurrency.lockutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:31 np0005558241 nova_compute[248510]: 2025-12-13 08:18:31.275 248514 DEBUG nova.compute.manager [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:18:31 np0005558241 nova_compute[248510]: 2025-12-13 08:18:31.356 248514 DEBUG nova.compute.manager [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:18:31 np0005558241 nova_compute[248510]: 2025-12-13 08:18:31.356 248514 DEBUG nova.network.neutron [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:18:31 np0005558241 nova_compute[248510]: 2025-12-13 08:18:31.379 248514 INFO nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:18:31 np0005558241 nova_compute[248510]: 2025-12-13 08:18:31.402 248514 DEBUG nova.compute.manager [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:18:31 np0005558241 nova_compute[248510]: 2025-12-13 08:18:31.525 248514 DEBUG nova.compute.manager [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:18:31 np0005558241 nova_compute[248510]: 2025-12-13 08:18:31.526 248514 DEBUG nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:18:31 np0005558241 nova_compute[248510]: 2025-12-13 08:18:31.527 248514 INFO nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Creating image(s)#033[00m
Dec 13 03:18:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:18:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3257427535' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.062 248514 DEBUG nova.storage.rbd_utils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] rbd image e686bddc-956e-4714-868c-9a29dda243d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.092 248514 DEBUG nova.storage.rbd_utils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] rbd image e686bddc-956e-4714-868c-9a29dda243d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.116 248514 DEBUG nova.storage.rbd_utils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] rbd image e686bddc-956e-4714-868c-9a29dda243d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.120 248514 DEBUG oslo_concurrency.processutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.152 248514 DEBUG nova.policy [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1e46b9271df84074b6a832899ca5ee57', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd6abc0a5e3b44a75bd19cd7df807b790', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.159 248514 DEBUG oslo_concurrency.processutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.161 248514 DEBUG nova.virt.libvirt.vif [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:18:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-301010878',display_name='tempest-ServersAdminTestJSON-server-301010878',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-301010878',id=20,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-pq0jl515',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTestJSON-10791
29122-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:18:17Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=0d64e209-19e7-4ad3-a790-43d04d832838,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "address": "fa:16:3e:23:ac:d2", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b62e133-0b", "ovs_interfaceid": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.162 248514 DEBUG nova.network.os_vif_util [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "address": "fa:16:3e:23:ac:d2", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b62e133-0b", "ovs_interfaceid": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.163 248514 DEBUG nova.network.os_vif_util [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:ac:d2,bridge_name='br-int',has_traffic_filtering=True,id=2b62e133-0b9e-4c9d-9219-2a7e55c381ab,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b62e133-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.164 248514 DEBUG nova.objects.instance [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'pci_devices' on Instance uuid 0d64e209-19e7-4ad3-a790-43d04d832838 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.188 248514 DEBUG nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:18:32 np0005558241 nova_compute[248510]:  <uuid>0d64e209-19e7-4ad3-a790-43d04d832838</uuid>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:  <name>instance-00000014</name>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersAdminTestJSON-server-301010878</nova:name>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:18:29</nova:creationTime>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:18:32 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:        <nova:user uuid="0d3578a9aa9a4f4facf4009a71564a31">tempest-ServersAdminTestJSON-1079129122-project-member</nova:user>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:        <nova:project uuid="1f6b42f9712f4afea6a07d08373b56ca">tempest-ServersAdminTestJSON-1079129122</nova:project>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:        <nova:port uuid="2b62e133-0b9e-4c9d-9219-2a7e55c381ab">
Dec 13 03:18:32 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <entry name="serial">0d64e209-19e7-4ad3-a790-43d04d832838</entry>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <entry name="uuid">0d64e209-19e7-4ad3-a790-43d04d832838</entry>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/0d64e209-19e7-4ad3-a790-43d04d832838_disk">
Dec 13 03:18:32 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:18:32 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/0d64e209-19e7-4ad3-a790-43d04d832838_disk.config">
Dec 13 03:18:32 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:18:32 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:23:ac:d2"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <target dev="tap2b62e133-0b"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/0d64e209-19e7-4ad3-a790-43d04d832838/console.log" append="off"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:18:32 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:18:32 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:18:32 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:18:32 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.189 248514 DEBUG nova.compute.manager [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Preparing to wait for external event network-vif-plugged-2b62e133-0b9e-4c9d-9219-2a7e55c381ab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.190 248514 DEBUG oslo_concurrency.lockutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.190 248514 DEBUG oslo_concurrency.lockutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.190 248514 DEBUG oslo_concurrency.lockutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.191 248514 DEBUG nova.virt.libvirt.vif [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:18:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-301010878',display_name='tempest-ServersAdminTestJSON-server-301010878',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-301010878',id=20,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-pq0jl515',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTest
JSON-1079129122-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:18:17Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=0d64e209-19e7-4ad3-a790-43d04d832838,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "address": "fa:16:3e:23:ac:d2", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b62e133-0b", "ovs_interfaceid": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.191 248514 DEBUG nova.network.os_vif_util [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "address": "fa:16:3e:23:ac:d2", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b62e133-0b", "ovs_interfaceid": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.192 248514 DEBUG nova.network.os_vif_util [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:ac:d2,bridge_name='br-int',has_traffic_filtering=True,id=2b62e133-0b9e-4c9d-9219-2a7e55c381ab,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b62e133-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.192 248514 DEBUG os_vif [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:ac:d2,bridge_name='br-int',has_traffic_filtering=True,id=2b62e133-0b9e-4c9d-9219-2a7e55c381ab,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b62e133-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.193 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.194 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.194 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.195 248514 DEBUG oslo_concurrency.processutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.196 248514 DEBUG oslo_concurrency.lockutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.196 248514 DEBUG oslo_concurrency.lockutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.197 248514 DEBUG oslo_concurrency.lockutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.220 248514 DEBUG nova.storage.rbd_utils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] rbd image e686bddc-956e-4714-868c-9a29dda243d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.224 248514 DEBUG oslo_concurrency.processutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 e686bddc-956e-4714-868c-9a29dda243d2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.260 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.261 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b62e133-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.262 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2b62e133-0b, col_values=(('external_ids', {'iface-id': '2b62e133-0b9e-4c9d-9219-2a7e55c381ab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:ac:d2', 'vm-uuid': '0d64e209-19e7-4ad3-a790-43d04d832838'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:18:32 np0005558241 NetworkManager[50376]: <info>  [1765613912.2954] manager: (tap2b62e133-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.294 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.301 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.303 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.304 248514 INFO os_vif [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:ac:d2,bridge_name='br-int',has_traffic_filtering=True,id=2b62e133-0b9e-4c9d-9219-2a7e55c381ab,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b62e133-0b')#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.413 248514 DEBUG nova.network.neutron [req-6e5678f7-0044-4e53-afa7-b741ed950eb5 req-e825d636-8984-4c39-bd9f-6967562d593c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Updated VIF entry in instance network info cache for port 2b62e133-0b9e-4c9d-9219-2a7e55c381ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.414 248514 DEBUG nova.network.neutron [req-6e5678f7-0044-4e53-afa7-b741ed950eb5 req-e825d636-8984-4c39-bd9f-6967562d593c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Updating instance_info_cache with network_info: [{"id": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "address": "fa:16:3e:23:ac:d2", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b62e133-0b", "ovs_interfaceid": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.438 248514 DEBUG oslo_concurrency.lockutils [req-6e5678f7-0044-4e53-afa7-b741ed950eb5 req-e825d636-8984-4c39-bd9f-6967562d593c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-0d64e209-19e7-4ad3-a790-43d04d832838" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.495 248514 DEBUG nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.496 248514 DEBUG nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.496 248514 DEBUG nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] No VIF found with MAC fa:16:3e:23:ac:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.497 248514 INFO nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Using config drive#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.525 248514 DEBUG nova.storage.rbd_utils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 0d64e209-19e7-4ad3-a790-43d04d832838_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.808 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec 13 03:18:32 np0005558241 nova_compute[248510]: 2025-12-13 08:18:32.809 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec 13 03:18:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1574: 321 pgs: 321 active+clean; 318 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 241 KiB/s rd, 3.5 MiB/s wr, 61 op/s
Dec 13 03:18:33 np0005558241 nova_compute[248510]: 2025-12-13 08:18:33.329 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-a14d8b88-7aec-468f-a550-881364e4d95e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:18:33 np0005558241 nova_compute[248510]: 2025-12-13 08:18:33.329 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-a14d8b88-7aec-468f-a550-881364e4d95e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:18:33 np0005558241 nova_compute[248510]: 2025-12-13 08:18:33.329 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:18:33 np0005558241 nova_compute[248510]: 2025-12-13 08:18:33.329 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:18:33 np0005558241 nova_compute[248510]: 2025-12-13 08:18:33.439 248514 INFO nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Creating config drive at /var/lib/nova/instances/0d64e209-19e7-4ad3-a790-43d04d832838/disk.config#033[00m
Dec 13 03:18:33 np0005558241 nova_compute[248510]: 2025-12-13 08:18:33.445 248514 DEBUG oslo_concurrency.processutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0d64e209-19e7-4ad3-a790-43d04d832838/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx_ghr9ds execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:18:33 np0005558241 nova_compute[248510]: 2025-12-13 08:18:33.483 248514 DEBUG nova.network.neutron [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Successfully created port: c204c1c8-ba1e-45b9-87aa-dac2170952ec _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:18:33 np0005558241 nova_compute[248510]: 2025-12-13 08:18:33.588 248514 DEBUG oslo_concurrency.processutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0d64e209-19e7-4ad3-a790-43d04d832838/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx_ghr9ds" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:18:33 np0005558241 nova_compute[248510]: 2025-12-13 08:18:33.618 248514 DEBUG nova.storage.rbd_utils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image 0d64e209-19e7-4ad3-a790-43d04d832838_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:18:33 np0005558241 nova_compute[248510]: 2025-12-13 08:18:33.623 248514 DEBUG oslo_concurrency.processutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0d64e209-19e7-4ad3-a790-43d04d832838/disk.config 0d64e209-19e7-4ad3-a790-43d04d832838_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:18:33 np0005558241 nova_compute[248510]: 2025-12-13 08:18:33.811 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:18:34 np0005558241 nova_compute[248510]: 2025-12-13 08:18:34.647 248514 DEBUG nova.network.neutron [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Successfully updated port: c204c1c8-ba1e-45b9-87aa-dac2170952ec _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:18:34 np0005558241 nova_compute[248510]: 2025-12-13 08:18:34.850 248514 DEBUG oslo_concurrency.lockutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "refresh_cache-e686bddc-956e-4714-868c-9a29dda243d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:18:34 np0005558241 nova_compute[248510]: 2025-12-13 08:18:34.850 248514 DEBUG oslo_concurrency.lockutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquired lock "refresh_cache-e686bddc-956e-4714-868c-9a29dda243d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:18:34 np0005558241 nova_compute[248510]: 2025-12-13 08:18:34.850 248514 DEBUG nova.network.neutron [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:18:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1575: 321 pgs: 321 active+clean; 318 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 231 KiB/s rd, 1.9 MiB/s wr, 58 op/s
Dec 13 03:18:34 np0005558241 nova_compute[248510]: 2025-12-13 08:18:34.924 248514 DEBUG oslo_concurrency.processutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 e686bddc-956e-4714-868c-9a29dda243d2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.699s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:18:34 np0005558241 nova_compute[248510]: 2025-12-13 08:18:34.975 248514 DEBUG nova.compute.manager [req-8a893bcd-2f2e-463e-a465-bd306e7b82b8 req-a31b5248-6f24-4cfc-98e1-3d11917f7146 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Received event network-changed-c204c1c8-ba1e-45b9-87aa-dac2170952ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:18:34 np0005558241 nova_compute[248510]: 2025-12-13 08:18:34.976 248514 DEBUG nova.compute.manager [req-8a893bcd-2f2e-463e-a465-bd306e7b82b8 req-a31b5248-6f24-4cfc-98e1-3d11917f7146 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Refreshing instance network info cache due to event network-changed-c204c1c8-ba1e-45b9-87aa-dac2170952ec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:18:34 np0005558241 nova_compute[248510]: 2025-12-13 08:18:34.976 248514 DEBUG oslo_concurrency.lockutils [req-8a893bcd-2f2e-463e-a465-bd306e7b82b8 req-a31b5248-6f24-4cfc-98e1-3d11917f7146 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-e686bddc-956e-4714-868c-9a29dda243d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:18:34 np0005558241 podman[273879]: 2025-12-13 08:18:34.996597422 +0000 UTC m=+0.076202208 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Dec 13 03:18:35 np0005558241 podman[273880]: 2025-12-13 08:18:35.013536119 +0000 UTC m=+0.091503005 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Dec 13 03:18:35 np0005558241 podman[273878]: 2025-12-13 08:18:35.018010009 +0000 UTC m=+0.102249919 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 03:18:35 np0005558241 nova_compute[248510]: 2025-12-13 08:18:35.040 248514 DEBUG nova.storage.rbd_utils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] resizing rbd image e686bddc-956e-4714-868c-9a29dda243d2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:18:35 np0005558241 nova_compute[248510]: 2025-12-13 08:18:35.307 248514 DEBUG nova.network.neutron [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:18:35 np0005558241 nova_compute[248510]: 2025-12-13 08:18:35.798 248514 DEBUG oslo_concurrency.processutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0d64e209-19e7-4ad3-a790-43d04d832838/disk.config 0d64e209-19e7-4ad3-a790-43d04d832838_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:18:35 np0005558241 nova_compute[248510]: 2025-12-13 08:18:35.799 248514 INFO nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Deleting local config drive /var/lib/nova/instances/0d64e209-19e7-4ad3-a790-43d04d832838/disk.config because it was imported into RBD.#033[00m
Dec 13 03:18:35 np0005558241 kernel: tap2b62e133-0b: entered promiscuous mode
Dec 13 03:18:35 np0005558241 NetworkManager[50376]: <info>  [1765613915.8605] manager: (tap2b62e133-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Dec 13 03:18:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:18:35Z|00080|binding|INFO|Claiming lport 2b62e133-0b9e-4c9d-9219-2a7e55c381ab for this chassis.
Dec 13 03:18:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:18:35Z|00081|binding|INFO|2b62e133-0b9e-4c9d-9219-2a7e55c381ab: Claiming fa:16:3e:23:ac:d2 10.100.0.4
Dec 13 03:18:35 np0005558241 nova_compute[248510]: 2025-12-13 08:18:35.865 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:35 np0005558241 nova_compute[248510]: 2025-12-13 08:18:35.871 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Updating instance_info_cache with network_info: [{"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:18:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:35.873 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:ac:d2 10.100.0.4'], port_security=['fa:16:3e:23:ac:d2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0d64e209-19e7-4ad3-a790-43d04d832838', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2b62e133-0b9e-4c9d-9219-2a7e55c381ab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:18:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:35.875 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2b62e133-0b9e-4c9d-9219-2a7e55c381ab in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 bound to our chassis#033[00m
Dec 13 03:18:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:35.877 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5#033[00m
Dec 13 03:18:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:18:35Z|00082|binding|INFO|Setting lport 2b62e133-0b9e-4c9d-9219-2a7e55c381ab ovn-installed in OVS
Dec 13 03:18:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:18:35Z|00083|binding|INFO|Setting lport 2b62e133-0b9e-4c9d-9219-2a7e55c381ab up in Southbound
Dec 13 03:18:35 np0005558241 nova_compute[248510]: 2025-12-13 08:18:35.883 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:35.900 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7e0d39-6f30-4c04-a4dc-cc9561612dfe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:35 np0005558241 systemd-udevd[274006]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:18:35 np0005558241 nova_compute[248510]: 2025-12-13 08:18:35.905 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-a14d8b88-7aec-468f-a550-881364e4d95e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:18:35 np0005558241 nova_compute[248510]: 2025-12-13 08:18:35.906 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:18:35 np0005558241 systemd-machined[210538]: New machine qemu-22-instance-00000014.
Dec 13 03:18:35 np0005558241 NetworkManager[50376]: <info>  [1765613915.9166] device (tap2b62e133-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:18:35 np0005558241 NetworkManager[50376]: <info>  [1765613915.9174] device (tap2b62e133-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:18:35 np0005558241 systemd[1]: Started Virtual Machine qemu-22-instance-00000014.
Dec 13 03:18:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:35.941 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a5671019-c208-4746-9762-bd02e1de56a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:35.946 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[31050535-00a8-43b3-b5f1-02a1f5feb8e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:35 np0005558241 nova_compute[248510]: 2025-12-13 08:18:35.953 248514 DEBUG nova.objects.instance [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lazy-loading 'migration_context' on Instance uuid e686bddc-956e-4714-868c-9a29dda243d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:18:35 np0005558241 nova_compute[248510]: 2025-12-13 08:18:35.975 248514 DEBUG nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:18:35 np0005558241 nova_compute[248510]: 2025-12-13 08:18:35.976 248514 DEBUG nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Ensure instance console log exists: /var/lib/nova/instances/e686bddc-956e-4714-868c-9a29dda243d2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:18:35 np0005558241 nova_compute[248510]: 2025-12-13 08:18:35.977 248514 DEBUG oslo_concurrency.lockutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:18:35 np0005558241 nova_compute[248510]: 2025-12-13 08:18:35.978 248514 DEBUG oslo_concurrency.lockutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:18:35 np0005558241 nova_compute[248510]: 2025-12-13 08:18:35.978 248514 DEBUG oslo_concurrency.lockutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:35.977 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b20ff35e-bfd5-4d45-bfb5-f0bba92b83b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:35.997 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78e57ee5-72b0-4364-b4c1-15e99e015cd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3139e1f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:6e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 784, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 784, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641497, 'reachable_time': 35036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274036, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:36.017 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6fe2a8-1c97-44cd-a958-eba996cd7dbb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641512, 'tstamp': 641512}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274038, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641515, 'tstamp': 641515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274038, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:36.019 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.021 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.022 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:36.023 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3139e1f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:18:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:36.023 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:18:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:36.023 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3139e1f-d0, col_values=(('external_ids', {'iface-id': 'c8850a7f-53d6-4776-8c0e-7fb474fe52de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:18:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:36.024 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.552 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613916.5516868, 0d64e209-19e7-4ad3-a790-43d04d832838 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.552 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] VM Started (Lifecycle Event)#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.585 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.591 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613916.5519292, 0d64e209-19e7-4ad3-a790-43d04d832838 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.591 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.608 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.612 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.651 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.762 248514 DEBUG nova.network.neutron [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Updating instance_info_cache with network_info: [{"id": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "address": "fa:16:3e:67:37:df", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc204c1c8-ba", "ovs_interfaceid": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.804 248514 DEBUG oslo_concurrency.lockutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Releasing lock "refresh_cache-e686bddc-956e-4714-868c-9a29dda243d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.805 248514 DEBUG nova.compute.manager [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Instance network_info: |[{"id": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "address": "fa:16:3e:67:37:df", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc204c1c8-ba", "ovs_interfaceid": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.805 248514 DEBUG oslo_concurrency.lockutils [req-8a893bcd-2f2e-463e-a465-bd306e7b82b8 req-a31b5248-6f24-4cfc-98e1-3d11917f7146 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-e686bddc-956e-4714-868c-9a29dda243d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.805 248514 DEBUG nova.network.neutron [req-8a893bcd-2f2e-463e-a465-bd306e7b82b8 req-a31b5248-6f24-4cfc-98e1-3d11917f7146 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Refreshing network info cache for port c204c1c8-ba1e-45b9-87aa-dac2170952ec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.808 248514 DEBUG nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Start _get_guest_xml network_info=[{"id": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "address": "fa:16:3e:67:37:df", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc204c1c8-ba", "ovs_interfaceid": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.812 248514 WARNING nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.816 248514 DEBUG nova.virt.libvirt.host [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.817 248514 DEBUG nova.virt.libvirt.host [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.821 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.821 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.821 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.821 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.822 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.854 248514 DEBUG nova.virt.libvirt.host [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.856 248514 DEBUG nova.virt.libvirt.host [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.856 248514 DEBUG nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.856 248514 DEBUG nova.virt.hardware [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.857 248514 DEBUG nova.virt.hardware [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.857 248514 DEBUG nova.virt.hardware [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.858 248514 DEBUG nova.virt.hardware [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.858 248514 DEBUG nova.virt.hardware [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.858 248514 DEBUG nova.virt.hardware [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.859 248514 DEBUG nova.virt.hardware [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.859 248514 DEBUG nova.virt.hardware [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.859 248514 DEBUG nova.virt.hardware [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.859 248514 DEBUG nova.virt.hardware [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.860 248514 DEBUG nova.virt.hardware [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:18:36 np0005558241 nova_compute[248510]: 2025-12-13 08:18:36.863 248514 DEBUG oslo_concurrency.processutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:18:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1576: 321 pgs: 321 active+clean; 318 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 140 KiB/s wr, 29 op/s
Dec 13 03:18:37 np0005558241 nova_compute[248510]: 2025-12-13 08:18:37.295 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:18:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4265055477' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:18:37 np0005558241 nova_compute[248510]: 2025-12-13 08:18:37.757 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.935s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:18:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:18:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1982922576' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:18:37 np0005558241 nova_compute[248510]: 2025-12-13 08:18:37.804 248514 DEBUG oslo_concurrency.processutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.941s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:18:37 np0005558241 nova_compute[248510]: 2025-12-13 08:18:37.836 248514 DEBUG nova.storage.rbd_utils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] rbd image e686bddc-956e-4714-868c-9a29dda243d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:18:37 np0005558241 nova_compute[248510]: 2025-12-13 08:18:37.842 248514 DEBUG oslo_concurrency.processutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:18:37 np0005558241 nova_compute[248510]: 2025-12-13 08:18:37.910 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:18:37 np0005558241 nova_compute[248510]: 2025-12-13 08:18:37.910 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:18:37 np0005558241 nova_compute[248510]: 2025-12-13 08:18:37.915 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:18:37 np0005558241 nova_compute[248510]: 2025-12-13 08:18:37.916 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:18:37 np0005558241 nova_compute[248510]: 2025-12-13 08:18:37.919 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:18:37 np0005558241 nova_compute[248510]: 2025-12-13 08:18:37.919 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:18:37 np0005558241 nova_compute[248510]: 2025-12-13 08:18:37.923 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:18:37 np0005558241 nova_compute[248510]: 2025-12-13 08:18:37.923 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.173 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.175 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3903MB free_disk=59.82996305823326GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.175 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.175 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.314 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance a14d8b88-7aec-468f-a550-881364e4d95e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.315 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 3fffafca-321d-4611-8940-da963b356ca1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.315 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 6b1e2b65-1398-4af8-9e8a-a8b99630eef8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.315 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 0d64e209-19e7-4ad3-a790-43d04d832838 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.316 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance e686bddc-956e-4714-868c-9a29dda243d2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.316 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.317 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:18:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:18:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3956141917' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.426 248514 DEBUG oslo_concurrency.processutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.428 248514 DEBUG nova.virt.libvirt.vif [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:18:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1776725301',display_name='tempest-SecurityGroupsTestJSON-server-1776725301',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1776725301',id=21,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6abc0a5e3b44a75bd19cd7df807b790',ramdisk_id='',reservation_id='r-f47be1x0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1306632779',owner_user_name='tempest-SecurityGroupsTestJSON-1306632779-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:18:31Z,user_data=None,user_id='1e46b9271df84074b6a832899ca5ee57',uuid=e686bddc-956e-4714-868c-9a29dda243d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "address": "fa:16:3e:67:37:df", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc204c1c8-ba", "ovs_interfaceid": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.429 248514 DEBUG nova.network.os_vif_util [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Converting VIF {"id": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "address": "fa:16:3e:67:37:df", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc204c1c8-ba", "ovs_interfaceid": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.430 248514 DEBUG nova.network.os_vif_util [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:37:df,bridge_name='br-int',has_traffic_filtering=True,id=c204c1c8-ba1e-45b9-87aa-dac2170952ec,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc204c1c8-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.431 248514 DEBUG nova.objects.instance [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lazy-loading 'pci_devices' on Instance uuid e686bddc-956e-4714-868c-9a29dda243d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.439 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.466 248514 DEBUG nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:18:38 np0005558241 nova_compute[248510]:  <uuid>e686bddc-956e-4714-868c-9a29dda243d2</uuid>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:  <name>instance-00000015</name>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <nova:name>tempest-SecurityGroupsTestJSON-server-1776725301</nova:name>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:18:36</nova:creationTime>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:18:38 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:        <nova:user uuid="1e46b9271df84074b6a832899ca5ee57">tempest-SecurityGroupsTestJSON-1306632779-project-member</nova:user>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:        <nova:project uuid="d6abc0a5e3b44a75bd19cd7df807b790">tempest-SecurityGroupsTestJSON-1306632779</nova:project>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:        <nova:port uuid="c204c1c8-ba1e-45b9-87aa-dac2170952ec">
Dec 13 03:18:38 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <entry name="serial">e686bddc-956e-4714-868c-9a29dda243d2</entry>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <entry name="uuid">e686bddc-956e-4714-868c-9a29dda243d2</entry>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/e686bddc-956e-4714-868c-9a29dda243d2_disk">
Dec 13 03:18:38 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:18:38 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/e686bddc-956e-4714-868c-9a29dda243d2_disk.config">
Dec 13 03:18:38 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:18:38 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:67:37:df"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <target dev="tapc204c1c8-ba"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/e686bddc-956e-4714-868c-9a29dda243d2/console.log" append="off"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:18:38 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:18:38 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:18:38 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:18:38 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.468 248514 DEBUG nova.compute.manager [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Preparing to wait for external event network-vif-plugged-c204c1c8-ba1e-45b9-87aa-dac2170952ec prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.468 248514 DEBUG oslo_concurrency.lockutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "e686bddc-956e-4714-868c-9a29dda243d2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.468 248514 DEBUG oslo_concurrency.lockutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "e686bddc-956e-4714-868c-9a29dda243d2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.469 248514 DEBUG oslo_concurrency.lockutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "e686bddc-956e-4714-868c-9a29dda243d2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.469 248514 DEBUG nova.virt.libvirt.vif [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:18:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1776725301',display_name='tempest-SecurityGroupsTestJSON-server-1776725301',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1776725301',id=21,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6abc0a5e3b44a75bd19cd7df807b790',ramdisk_id='',reservation_id='r-f47be1x0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1306632779',owner_user_name='tempest-SecurityGroupsTestJSON-1306632779-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:18:31Z,user_data=None,user_id='1e46b9271df84074b6a832899ca5ee57',uuid=e686bddc-956e-4714-868c-9a29dda243d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "address": "fa:16:3e:67:37:df", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc204c1c8-ba", "ovs_interfaceid": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.470 248514 DEBUG nova.network.os_vif_util [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Converting VIF {"id": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "address": "fa:16:3e:67:37:df", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc204c1c8-ba", "ovs_interfaceid": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.471 248514 DEBUG nova.network.os_vif_util [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:37:df,bridge_name='br-int',has_traffic_filtering=True,id=c204c1c8-ba1e-45b9-87aa-dac2170952ec,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc204c1c8-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.471 248514 DEBUG os_vif [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:37:df,bridge_name='br-int',has_traffic_filtering=True,id=c204c1c8-ba1e-45b9-87aa-dac2170952ec,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc204c1c8-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.472 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.473 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.473 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.476 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.477 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc204c1c8-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.477 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc204c1c8-ba, col_values=(('external_ids', {'iface-id': 'c204c1c8-ba1e-45b9-87aa-dac2170952ec', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:37:df', 'vm-uuid': 'e686bddc-956e-4714-868c-9a29dda243d2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:18:38 np0005558241 NetworkManager[50376]: <info>  [1765613918.4800] manager: (tapc204c1c8-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.479 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.482 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.490 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.491 248514 INFO os_vif [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:37:df,bridge_name='br-int',has_traffic_filtering=True,id=c204c1c8-ba1e-45b9-87aa-dac2170952ec,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc204c1c8-ba')#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.655 248514 DEBUG nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.655 248514 DEBUG nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.655 248514 DEBUG nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] No VIF found with MAC fa:16:3e:67:37:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.656 248514 INFO nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Using config drive#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.706 248514 DEBUG nova.storage.rbd_utils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] rbd image e686bddc-956e-4714-868c-9a29dda243d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:18:38 np0005558241 nova_compute[248510]: 2025-12-13 08:18:38.815 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1577: 321 pgs: 321 active+clean; 372 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 280 KiB/s rd, 1.9 MiB/s wr, 75 op/s
Dec 13 03:18:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:18:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3508116075' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:18:39 np0005558241 nova_compute[248510]: 2025-12-13 08:18:39.168 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.728s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:18:39 np0005558241 nova_compute[248510]: 2025-12-13 08:18:39.175 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:18:39 np0005558241 nova_compute[248510]: 2025-12-13 08:18:39.195 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:18:39 np0005558241 nova_compute[248510]: 2025-12-13 08:18:39.231 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:18:39 np0005558241 nova_compute[248510]: 2025-12-13 08:18:39.232 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:18:39 np0005558241 nova_compute[248510]: 2025-12-13 08:18:39.379 248514 DEBUG nova.network.neutron [req-8a893bcd-2f2e-463e-a465-bd306e7b82b8 req-a31b5248-6f24-4cfc-98e1-3d11917f7146 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Updated VIF entry in instance network info cache for port c204c1c8-ba1e-45b9-87aa-dac2170952ec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:18:39 np0005558241 nova_compute[248510]: 2025-12-13 08:18:39.379 248514 DEBUG nova.network.neutron [req-8a893bcd-2f2e-463e-a465-bd306e7b82b8 req-a31b5248-6f24-4cfc-98e1-3d11917f7146 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Updating instance_info_cache with network_info: [{"id": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "address": "fa:16:3e:67:37:df", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc204c1c8-ba", "ovs_interfaceid": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:18:39 np0005558241 nova_compute[248510]: 2025-12-13 08:18:39.437 248514 DEBUG oslo_concurrency.lockutils [req-8a893bcd-2f2e-463e-a465-bd306e7b82b8 req-a31b5248-6f24-4cfc-98e1-3d11917f7146 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-e686bddc-956e-4714-868c-9a29dda243d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:18:39 np0005558241 nova_compute[248510]: 2025-12-13 08:18:39.544 248514 INFO nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Creating config drive at /var/lib/nova/instances/e686bddc-956e-4714-868c-9a29dda243d2/disk.config#033[00m
Dec 13 03:18:39 np0005558241 nova_compute[248510]: 2025-12-13 08:18:39.550 248514 DEBUG oslo_concurrency.processutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e686bddc-956e-4714-868c-9a29dda243d2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp78jgqik1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:18:39 np0005558241 nova_compute[248510]: 2025-12-13 08:18:39.689 248514 DEBUG oslo_concurrency.processutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e686bddc-956e-4714-868c-9a29dda243d2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp78jgqik1" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:18:39 np0005558241 nova_compute[248510]: 2025-12-13 08:18:39.716 248514 DEBUG nova.storage.rbd_utils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] rbd image e686bddc-956e-4714-868c-9a29dda243d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:18:39 np0005558241 nova_compute[248510]: 2025-12-13 08:18:39.720 248514 DEBUG oslo_concurrency.processutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e686bddc-956e-4714-868c-9a29dda243d2/disk.config e686bddc-956e-4714-868c-9a29dda243d2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:18:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:18:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:18:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:18:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:18:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:18:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:18:40 np0005558241 nova_compute[248510]: 2025-12-13 08:18:40.234 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:18:40 np0005558241 nova_compute[248510]: 2025-12-13 08:18:40.235 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:18:40 np0005558241 nova_compute[248510]: 2025-12-13 08:18:40.235 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:18:40 np0005558241 nova_compute[248510]: 2025-12-13 08:18:40.235 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:18:40 np0005558241 nova_compute[248510]: 2025-12-13 08:18:40.235 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:18:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:18:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:18:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:18:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:18:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:18:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1578: 321 pgs: 321 active+clean; 372 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 1.9 MiB/s wr, 51 op/s
Dec 13 03:18:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:18:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:18:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:18:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:18:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:18:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:18:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:18:41 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:18:41 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:18:41 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:18:42 np0005558241 podman[274390]: 2025-12-13 08:18:42.056894473 +0000 UTC m=+0.025986731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:18:42 np0005558241 podman[274390]: 2025-12-13 08:18:42.397782459 +0000 UTC m=+0.366874697 container create b6b8d601271eec68a16498ce0e79fa03a8db4135095571c4565c2c9865d769ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 03:18:42 np0005558241 nova_compute[248510]: 2025-12-13 08:18:42.417 248514 DEBUG oslo_concurrency.processutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e686bddc-956e-4714-868c-9a29dda243d2/disk.config e686bddc-956e-4714-868c-9a29dda243d2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.697s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:18:42 np0005558241 nova_compute[248510]: 2025-12-13 08:18:42.419 248514 INFO nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Deleting local config drive /var/lib/nova/instances/e686bddc-956e-4714-868c-9a29dda243d2/disk.config because it was imported into RBD.#033[00m
Dec 13 03:18:42 np0005558241 kernel: tapc204c1c8-ba: entered promiscuous mode
Dec 13 03:18:42 np0005558241 NetworkManager[50376]: <info>  [1765613922.4903] manager: (tapc204c1c8-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Dec 13 03:18:42 np0005558241 nova_compute[248510]: 2025-12-13 08:18:42.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:42 np0005558241 ovn_controller[148476]: 2025-12-13T08:18:42Z|00084|binding|INFO|Claiming lport c204c1c8-ba1e-45b9-87aa-dac2170952ec for this chassis.
Dec 13 03:18:42 np0005558241 ovn_controller[148476]: 2025-12-13T08:18:42Z|00085|binding|INFO|c204c1c8-ba1e-45b9-87aa-dac2170952ec: Claiming fa:16:3e:67:37:df 10.100.0.8
Dec 13 03:18:42 np0005558241 nova_compute[248510]: 2025-12-13 08:18:42.538 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.546 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:37:df 10.100.0.8'], port_security=['fa:16:3e:67:37:df 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e686bddc-956e-4714-868c-9a29dda243d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-675d1cc9-9f63-41fd-81b7-761155b686e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6abc0a5e3b44a75bd19cd7df807b790', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2d00a2f3-cbea-40c0-a88e-6b16688a4860', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ccd165e-0e0e-4b3a-8747-912025213604, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c204c1c8-ba1e-45b9-87aa-dac2170952ec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.547 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c204c1c8-ba1e-45b9-87aa-dac2170952ec in datapath 675d1cc9-9f63-41fd-81b7-761155b686e5 bound to our chassis#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.549 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 675d1cc9-9f63-41fd-81b7-761155b686e5#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.565 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5a54e3c4-9b4b-47c5-80c3-a0116fd0e883]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.566 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap675d1cc9-91 in ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.569 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap675d1cc9-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.569 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4deb48a1-5d58-4803-bf2d-d15d705ad3bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.570 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[88576729-cfe1-4cf0-b780-f7033a767071]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.586 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[8c229cc3-cb0d-4559-9b09-fd53495266e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:42 np0005558241 systemd[1]: Started libpod-conmon-b6b8d601271eec68a16498ce0e79fa03a8db4135095571c4565c2c9865d769ba.scope.
Dec 13 03:18:42 np0005558241 systemd-machined[210538]: New machine qemu-23-instance-00000015.
Dec 13 03:18:42 np0005558241 systemd[1]: Started Virtual Machine qemu-23-instance-00000015.
Dec 13 03:18:42 np0005558241 ovn_controller[148476]: 2025-12-13T08:18:42Z|00086|binding|INFO|Setting lport c204c1c8-ba1e-45b9-87aa-dac2170952ec ovn-installed in OVS
Dec 13 03:18:42 np0005558241 ovn_controller[148476]: 2025-12-13T08:18:42Z|00087|binding|INFO|Setting lport c204c1c8-ba1e-45b9-87aa-dac2170952ec up in Southbound
Dec 13 03:18:42 np0005558241 nova_compute[248510]: 2025-12-13 08:18:42.606 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.612 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[258f20c7-bc43-4797-837d-68a3b1a316ea]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:42 np0005558241 systemd-udevd[274425]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:18:42 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:18:42 np0005558241 NetworkManager[50376]: <info>  [1765613922.6294] device (tapc204c1c8-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:18:42 np0005558241 NetworkManager[50376]: <info>  [1765613922.6305] device (tapc204c1c8-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.654 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2f465267-2e99-480d-afb6-b02c238a3415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:42 np0005558241 systemd-udevd[274429]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.660 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f0489a-6e51-48a1-bb4a-5f1bd47ca7b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:42 np0005558241 NetworkManager[50376]: <info>  [1765613922.6621] manager: (tap675d1cc9-90): new Veth device (/org/freedesktop/NetworkManager/Devices/55)
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.691 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c8afdd-6a4b-4cc8-8ca2-4a25196995e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.695 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ab60948f-52a3-46ef-8f55-582ad036221d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:42 np0005558241 NetworkManager[50376]: <info>  [1765613922.7187] device (tap675d1cc9-90): carrier: link connected
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.725 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[31798b9d-82a9-4660-ad70-8542cb55b33f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.745 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[562721cc-8e86-4dec-8719-74767a2e199f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap675d1cc9-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:51:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649993, 'reachable_time': 19465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274455, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.763 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7c2eba-6dae-4350-88a2-e67ef0eb15d1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe75:512c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 649993, 'tstamp': 649993}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274456, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.786 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[62af7d3e-f50a-4073-bdf0-3ef3963a1157]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap675d1cc9-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:51:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649993, 'reachable_time': 19465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274457, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:42 np0005558241 podman[274390]: 2025-12-13 08:18:42.818482111 +0000 UTC m=+0.787574369 container init b6b8d601271eec68a16498ce0e79fa03a8db4135095571c4565c2c9865d769ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_einstein, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.828 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b7acb4ce-3561-4488-9b3b-2b58feb1a079]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:42 np0005558241 podman[274390]: 2025-12-13 08:18:42.831790309 +0000 UTC m=+0.800882547 container start b6b8d601271eec68a16498ce0e79fa03a8db4135095571c4565c2c9865d769ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:18:42 np0005558241 optimistic_einstein[274421]: 167 167
Dec 13 03:18:42 np0005558241 systemd[1]: libpod-b6b8d601271eec68a16498ce0e79fa03a8db4135095571c4565c2c9865d769ba.scope: Deactivated successfully.
Dec 13 03:18:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1579: 321 pgs: 321 active+clean; 372 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 1.9 MiB/s wr, 52 op/s
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.895 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[41fec47b-31b5-42db-990c-7619a8619d5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.897 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap675d1cc9-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.898 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.898 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap675d1cc9-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:18:42 np0005558241 NetworkManager[50376]: <info>  [1765613922.9015] manager: (tap675d1cc9-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Dec 13 03:18:42 np0005558241 nova_compute[248510]: 2025-12-13 08:18:42.901 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:42 np0005558241 kernel: tap675d1cc9-90: entered promiscuous mode
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.905 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap675d1cc9-90, col_values=(('external_ids', {'iface-id': '0e8c3540-dab0-4c73-b9e7-cb8c0c7c41e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:18:42 np0005558241 nova_compute[248510]: 2025-12-13 08:18:42.906 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:42 np0005558241 ovn_controller[148476]: 2025-12-13T08:18:42Z|00088|binding|INFO|Releasing lport 0e8c3540-dab0-4c73-b9e7-cb8c0c7c41e8 from this chassis (sb_readonly=0)
Dec 13 03:18:42 np0005558241 nova_compute[248510]: 2025-12-13 08:18:42.910 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.912 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/675d1cc9-9f63-41fd-81b7-761155b686e5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/675d1cc9-9f63-41fd-81b7-761155b686e5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.913 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dea84aa7-5514-4013-bf79-dcc217c2ba30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.914 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-675d1cc9-9f63-41fd-81b7-761155b686e5
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/675d1cc9-9f63-41fd-81b7-761155b686e5.pid.haproxy
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 675d1cc9-9f63-41fd-81b7-761155b686e5
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:18:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:42.914 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'env', 'PROCESS_TAG=haproxy-675d1cc9-9f63-41fd-81b7-761155b686e5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/675d1cc9-9f63-41fd-81b7-761155b686e5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:18:42 np0005558241 nova_compute[248510]: 2025-12-13 08:18:42.930 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:43 np0005558241 podman[274390]: 2025-12-13 08:18:43.00071796 +0000 UTC m=+0.969810228 container attach b6b8d601271eec68a16498ce0e79fa03a8db4135095571c4565c2c9865d769ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:18:43 np0005558241 podman[274390]: 2025-12-13 08:18:43.001772666 +0000 UTC m=+0.970864904 container died b6b8d601271eec68a16498ce0e79fa03a8db4135095571c4565c2c9865d769ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_einstein, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.236 248514 DEBUG nova.compute.manager [req-df96188c-3ef7-4d24-ac9e-0ee0464be521 req-58140c09-0952-4890-8d5c-391e38d4b674 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Received event network-vif-plugged-c204c1c8-ba1e-45b9-87aa-dac2170952ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.237 248514 DEBUG oslo_concurrency.lockutils [req-df96188c-3ef7-4d24-ac9e-0ee0464be521 req-58140c09-0952-4890-8d5c-391e38d4b674 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e686bddc-956e-4714-868c-9a29dda243d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.238 248514 DEBUG oslo_concurrency.lockutils [req-df96188c-3ef7-4d24-ac9e-0ee0464be521 req-58140c09-0952-4890-8d5c-391e38d4b674 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e686bddc-956e-4714-868c-9a29dda243d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.238 248514 DEBUG oslo_concurrency.lockutils [req-df96188c-3ef7-4d24-ac9e-0ee0464be521 req-58140c09-0952-4890-8d5c-391e38d4b674 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e686bddc-956e-4714-868c-9a29dda243d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.238 248514 DEBUG nova.compute.manager [req-df96188c-3ef7-4d24-ac9e-0ee0464be521 req-58140c09-0952-4890-8d5c-391e38d4b674 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Processing event network-vif-plugged-c204c1c8-ba1e-45b9-87aa-dac2170952ec _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.331 248514 DEBUG nova.compute.manager [req-c73cc1e0-188a-4e48-8977-d5a20db5019e req-70efb139-fe8f-4413-aa32-ee2754ee8d61 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Received event network-vif-plugged-2b62e133-0b9e-4c9d-9219-2a7e55c381ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.331 248514 DEBUG oslo_concurrency.lockutils [req-c73cc1e0-188a-4e48-8977-d5a20db5019e req-70efb139-fe8f-4413-aa32-ee2754ee8d61 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.331 248514 DEBUG oslo_concurrency.lockutils [req-c73cc1e0-188a-4e48-8977-d5a20db5019e req-70efb139-fe8f-4413-aa32-ee2754ee8d61 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.332 248514 DEBUG oslo_concurrency.lockutils [req-c73cc1e0-188a-4e48-8977-d5a20db5019e req-70efb139-fe8f-4413-aa32-ee2754ee8d61 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.332 248514 DEBUG nova.compute.manager [req-c73cc1e0-188a-4e48-8977-d5a20db5019e req-70efb139-fe8f-4413-aa32-ee2754ee8d61 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Processing event network-vif-plugged-2b62e133-0b9e-4c9d-9219-2a7e55c381ab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.332 248514 DEBUG nova.compute.manager [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.339 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613923.338943, 0d64e209-19e7-4ad3-a790-43d04d832838 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.339 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.341 248514 DEBUG nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.345 248514 INFO nova.virt.libvirt.driver [-] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Instance spawned successfully.#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.345 248514 DEBUG nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.378 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.385 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.389 248514 DEBUG nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.389 248514 DEBUG nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.390 248514 DEBUG nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.390 248514 DEBUG nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.390 248514 DEBUG nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.391 248514 DEBUG nova.virt.libvirt.driver [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.433 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.463 248514 INFO nova.compute.manager [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Took 25.95 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.464 248514 DEBUG nova.compute.manager [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.479 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.557 248514 INFO nova.compute.manager [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Took 28.70 seconds to build instance.#033[00m
Dec 13 03:18:43 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a15f07c417051c5f159cde18ac99c17e46ee71862460aba690bb5f16112f2d4c-merged.mount: Deactivated successfully.
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.888 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:43 np0005558241 nova_compute[248510]: 2025-12-13 08:18:43.926 248514 DEBUG oslo_concurrency.lockutils [None req-5424bb8a-eacf-4019-b5e5-c973ef1a4f7b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 29.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.098 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613924.0977237, e686bddc-956e-4714-868c-9a29dda243d2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.098 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e686bddc-956e-4714-868c-9a29dda243d2] VM Started (Lifecycle Event)#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.101 248514 DEBUG nova.compute.manager [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.105 248514 DEBUG nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.109 248514 INFO nova.virt.libvirt.driver [-] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Instance spawned successfully.#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.109 248514 DEBUG nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.179 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.187 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.192 248514 DEBUG nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.192 248514 DEBUG nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.193 248514 DEBUG nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.193 248514 DEBUG nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.194 248514 DEBUG nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.195 248514 DEBUG nova.virt.libvirt.driver [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.226 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e686bddc-956e-4714-868c-9a29dda243d2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.227 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613924.1004996, e686bddc-956e-4714-868c-9a29dda243d2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.227 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e686bddc-956e-4714-868c-9a29dda243d2] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.263 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.267 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613924.1066506, e686bddc-956e-4714-868c-9a29dda243d2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.267 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e686bddc-956e-4714-868c-9a29dda243d2] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.285 248514 INFO nova.compute.manager [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Took 12.76 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.286 248514 DEBUG nova.compute.manager [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:18:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.324 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.330 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.358 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e686bddc-956e-4714-868c-9a29dda243d2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.371 248514 INFO nova.compute.manager [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Took 15.55 seconds to build instance.#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.399 248514 DEBUG oslo_concurrency.lockutils [None req-825edadf-80ca-44d5-9598-2b0ca731068e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "e686bddc-956e-4714-868c-9a29dda243d2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:44 np0005558241 nova_compute[248510]: 2025-12-13 08:18:44.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:18:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1580: 321 pgs: 321 active+clean; 372 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 1.9 MiB/s wr, 61 op/s
Dec 13 03:18:45 np0005558241 nova_compute[248510]: 2025-12-13 08:18:45.368 248514 DEBUG oslo_concurrency.lockutils [None req-49c8c485-b7c0-4cf5-ae7a-d5b9fd7e1b2b 4142e005895249659c735621795d2415 12c03df1b9ed471c8c40aa39b90a9b97 - - default default] Acquiring lock "refresh_cache-0d64e209-19e7-4ad3-a790-43d04d832838" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:18:45 np0005558241 nova_compute[248510]: 2025-12-13 08:18:45.370 248514 DEBUG oslo_concurrency.lockutils [None req-49c8c485-b7c0-4cf5-ae7a-d5b9fd7e1b2b 4142e005895249659c735621795d2415 12c03df1b9ed471c8c40aa39b90a9b97 - - default default] Acquired lock "refresh_cache-0d64e209-19e7-4ad3-a790-43d04d832838" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:18:45 np0005558241 nova_compute[248510]: 2025-12-13 08:18:45.370 248514 DEBUG nova.network.neutron [None req-49c8c485-b7c0-4cf5-ae7a-d5b9fd7e1b2b 4142e005895249659c735621795d2415 12c03df1b9ed471c8c40aa39b90a9b97 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:18:46 np0005558241 podman[274390]: 2025-12-13 08:18:46.322867137 +0000 UTC m=+4.291959375 container remove b6b8d601271eec68a16498ce0e79fa03a8db4135095571c4565c2c9865d769ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_einstein, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:18:46 np0005558241 nova_compute[248510]: 2025-12-13 08:18:46.364 248514 DEBUG nova.compute.manager [req-c1aad994-fe8c-4443-8218-809b88d32942 req-ebbb1ffa-b168-4a33-82f0-594081b267c4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Received event network-vif-plugged-c204c1c8-ba1e-45b9-87aa-dac2170952ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:18:46 np0005558241 nova_compute[248510]: 2025-12-13 08:18:46.364 248514 DEBUG oslo_concurrency.lockutils [req-c1aad994-fe8c-4443-8218-809b88d32942 req-ebbb1ffa-b168-4a33-82f0-594081b267c4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e686bddc-956e-4714-868c-9a29dda243d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:18:46 np0005558241 nova_compute[248510]: 2025-12-13 08:18:46.366 248514 DEBUG oslo_concurrency.lockutils [req-c1aad994-fe8c-4443-8218-809b88d32942 req-ebbb1ffa-b168-4a33-82f0-594081b267c4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e686bddc-956e-4714-868c-9a29dda243d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:18:46 np0005558241 nova_compute[248510]: 2025-12-13 08:18:46.366 248514 DEBUG oslo_concurrency.lockutils [req-c1aad994-fe8c-4443-8218-809b88d32942 req-ebbb1ffa-b168-4a33-82f0-594081b267c4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e686bddc-956e-4714-868c-9a29dda243d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:46 np0005558241 nova_compute[248510]: 2025-12-13 08:18:46.367 248514 DEBUG nova.compute.manager [req-c1aad994-fe8c-4443-8218-809b88d32942 req-ebbb1ffa-b168-4a33-82f0-594081b267c4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] No waiting events found dispatching network-vif-plugged-c204c1c8-ba1e-45b9-87aa-dac2170952ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:18:46 np0005558241 nova_compute[248510]: 2025-12-13 08:18:46.367 248514 WARNING nova.compute.manager [req-c1aad994-fe8c-4443-8218-809b88d32942 req-ebbb1ffa-b168-4a33-82f0-594081b267c4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Received unexpected event network-vif-plugged-c204c1c8-ba1e-45b9-87aa-dac2170952ec for instance with vm_state active and task_state None.#033[00m
Dec 13 03:18:46 np0005558241 systemd[1]: libpod-conmon-b6b8d601271eec68a16498ce0e79fa03a8db4135095571c4565c2c9865d769ba.scope: Deactivated successfully.
Dec 13 03:18:46 np0005558241 podman[274548]: 2025-12-13 08:18:46.459207946 +0000 UTC m=+0.033341283 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:18:46 np0005558241 podman[274564]: 2025-12-13 08:18:46.76591417 +0000 UTC m=+0.251464875 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:18:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1581: 321 pgs: 321 active+clean; 372 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Dec 13 03:18:46 np0005558241 podman[274548]: 2025-12-13 08:18:46.888288174 +0000 UTC m=+0.462421481 container create 574dd4e4dc0eaea738a4aa4155e54f17c8a5d3c3127835815b4ce93c3dc28872 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 03:18:47 np0005558241 systemd[1]: Started libpod-conmon-574dd4e4dc0eaea738a4aa4155e54f17c8a5d3c3127835815b4ce93c3dc28872.scope.
Dec 13 03:18:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:18:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d42d157bdfe0e317613121c55a4528b7179660e6d506480bb0778cc8794c3a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:18:47 np0005558241 podman[274564]: 2025-12-13 08:18:47.279237104 +0000 UTC m=+0.764787809 container create 65b01cc4c5e82da7b827f886a99459718d38252f1ebeb59a7d314e1f35f88f26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_colden, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:18:47 np0005558241 nova_compute[248510]: 2025-12-13 08:18:47.315 248514 DEBUG nova.compute.manager [req-4b7f9da4-f7d0-4ac3-b788-24180fa4b619 req-67820103-0a6e-451f-832b-ae302435eb12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Received event network-vif-plugged-2b62e133-0b9e-4c9d-9219-2a7e55c381ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:18:47 np0005558241 nova_compute[248510]: 2025-12-13 08:18:47.317 248514 DEBUG oslo_concurrency.lockutils [req-4b7f9da4-f7d0-4ac3-b788-24180fa4b619 req-67820103-0a6e-451f-832b-ae302435eb12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:18:47 np0005558241 nova_compute[248510]: 2025-12-13 08:18:47.318 248514 DEBUG oslo_concurrency.lockutils [req-4b7f9da4-f7d0-4ac3-b788-24180fa4b619 req-67820103-0a6e-451f-832b-ae302435eb12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:18:47 np0005558241 nova_compute[248510]: 2025-12-13 08:18:47.318 248514 DEBUG oslo_concurrency.lockutils [req-4b7f9da4-f7d0-4ac3-b788-24180fa4b619 req-67820103-0a6e-451f-832b-ae302435eb12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:47 np0005558241 nova_compute[248510]: 2025-12-13 08:18:47.318 248514 DEBUG nova.compute.manager [req-4b7f9da4-f7d0-4ac3-b788-24180fa4b619 req-67820103-0a6e-451f-832b-ae302435eb12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] No waiting events found dispatching network-vif-plugged-2b62e133-0b9e-4c9d-9219-2a7e55c381ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:18:47 np0005558241 nova_compute[248510]: 2025-12-13 08:18:47.318 248514 WARNING nova.compute.manager [req-4b7f9da4-f7d0-4ac3-b788-24180fa4b619 req-67820103-0a6e-451f-832b-ae302435eb12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Received unexpected event network-vif-plugged-2b62e133-0b9e-4c9d-9219-2a7e55c381ab for instance with vm_state active and task_state None.#033[00m
Dec 13 03:18:47 np0005558241 podman[274548]: 2025-12-13 08:18:47.496825183 +0000 UTC m=+1.070958520 container init 574dd4e4dc0eaea738a4aa4155e54f17c8a5d3c3127835815b4ce93c3dc28872 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 03:18:47 np0005558241 podman[274548]: 2025-12-13 08:18:47.503710823 +0000 UTC m=+1.077844130 container start 574dd4e4dc0eaea738a4aa4155e54f17c8a5d3c3127835815b4ce93c3dc28872 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:18:47 np0005558241 neutron-haproxy-ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5[274582]: [NOTICE]   (274586) : New worker (274588) forked
Dec 13 03:18:47 np0005558241 neutron-haproxy-ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5[274582]: [NOTICE]   (274586) : Loading success.
Dec 13 03:18:47 np0005558241 systemd[1]: Started libpod-conmon-65b01cc4c5e82da7b827f886a99459718d38252f1ebeb59a7d314e1f35f88f26.scope.
Dec 13 03:18:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:18:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89a13552bf385d5a59cb56e48f270554a4d11ba967df64d32d4d6fa7c5ad62a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:18:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89a13552bf385d5a59cb56e48f270554a4d11ba967df64d32d4d6fa7c5ad62a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:18:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89a13552bf385d5a59cb56e48f270554a4d11ba967df64d32d4d6fa7c5ad62a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:18:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89a13552bf385d5a59cb56e48f270554a4d11ba967df64d32d4d6fa7c5ad62a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:18:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89a13552bf385d5a59cb56e48f270554a4d11ba967df64d32d4d6fa7c5ad62a2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:18:48 np0005558241 podman[274564]: 2025-12-13 08:18:48.000804127 +0000 UTC m=+1.486354842 container init 65b01cc4c5e82da7b827f886a99459718d38252f1ebeb59a7d314e1f35f88f26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_colden, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:18:48 np0005558241 podman[274564]: 2025-12-13 08:18:48.008587199 +0000 UTC m=+1.494137894 container start 65b01cc4c5e82da7b827f886a99459718d38252f1ebeb59a7d314e1f35f88f26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:18:48 np0005558241 podman[274564]: 2025-12-13 08:18:48.256522676 +0000 UTC m=+1.742073411 container attach 65b01cc4c5e82da7b827f886a99459718d38252f1ebeb59a7d314e1f35f88f26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 03:18:48 np0005558241 nova_compute[248510]: 2025-12-13 08:18:48.481 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:48 np0005558241 festive_colden[274600]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:18:48 np0005558241 festive_colden[274600]: --> All data devices are unavailable
Dec 13 03:18:48 np0005558241 systemd[1]: libpod-65b01cc4c5e82da7b827f886a99459718d38252f1ebeb59a7d314e1f35f88f26.scope: Deactivated successfully.
Dec 13 03:18:48 np0005558241 podman[274564]: 2025-12-13 08:18:48.578505076 +0000 UTC m=+2.064055791 container died 65b01cc4c5e82da7b827f886a99459718d38252f1ebeb59a7d314e1f35f88f26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 03:18:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1582: 321 pgs: 321 active+clean; 372 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 184 op/s
Dec 13 03:18:48 np0005558241 nova_compute[248510]: 2025-12-13 08:18:48.890 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:18:49 np0005558241 nova_compute[248510]: 2025-12-13 08:18:49.443 248514 DEBUG nova.network.neutron [None req-49c8c485-b7c0-4cf5-ae7a-d5b9fd7e1b2b 4142e005895249659c735621795d2415 12c03df1b9ed471c8c40aa39b90a9b97 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Updating instance_info_cache with network_info: [{"id": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "address": "fa:16:3e:23:ac:d2", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b62e133-0b", "ovs_interfaceid": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:18:49 np0005558241 systemd[1]: var-lib-containers-storage-overlay-89a13552bf385d5a59cb56e48f270554a4d11ba967df64d32d4d6fa7c5ad62a2-merged.mount: Deactivated successfully.
Dec 13 03:18:49 np0005558241 nova_compute[248510]: 2025-12-13 08:18:49.470 248514 DEBUG oslo_concurrency.lockutils [None req-49c8c485-b7c0-4cf5-ae7a-d5b9fd7e1b2b 4142e005895249659c735621795d2415 12c03df1b9ed471c8c40aa39b90a9b97 - - default default] Releasing lock "refresh_cache-0d64e209-19e7-4ad3-a790-43d04d832838" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:18:49 np0005558241 nova_compute[248510]: 2025-12-13 08:18:49.471 248514 DEBUG nova.compute.manager [None req-49c8c485-b7c0-4cf5-ae7a-d5b9fd7e1b2b 4142e005895249659c735621795d2415 12c03df1b9ed471c8c40aa39b90a9b97 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Dec 13 03:18:49 np0005558241 nova_compute[248510]: 2025-12-13 08:18:49.471 248514 DEBUG nova.compute.manager [None req-49c8c485-b7c0-4cf5-ae7a-d5b9fd7e1b2b 4142e005895249659c735621795d2415 12c03df1b9ed471c8c40aa39b90a9b97 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] network_info to inject: |[{"id": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "address": "fa:16:3e:23:ac:d2", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b62e133-0b", "ovs_interfaceid": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Dec 13 03:18:49 np0005558241 podman[274564]: 2025-12-13 08:18:49.696128894 +0000 UTC m=+3.181679599 container remove 65b01cc4c5e82da7b827f886a99459718d38252f1ebeb59a7d314e1f35f88f26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_colden, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:18:49 np0005558241 systemd[1]: libpod-conmon-65b01cc4c5e82da7b827f886a99459718d38252f1ebeb59a7d314e1f35f88f26.scope: Deactivated successfully.
Dec 13 03:18:50 np0005558241 podman[274692]: 2025-12-13 08:18:50.215852275 +0000 UTC m=+0.031054706 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:18:50 np0005558241 podman[274692]: 2025-12-13 08:18:50.315829217 +0000 UTC m=+0.131031628 container create b85199ddaf788df21747890690fc19f952656f0062b992c8427f7461c52ba874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 03:18:50 np0005558241 systemd[1]: Started libpod-conmon-b85199ddaf788df21747890690fc19f952656f0062b992c8427f7461c52ba874.scope.
Dec 13 03:18:50 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:18:50 np0005558241 nova_compute[248510]: 2025-12-13 08:18:50.426 248514 DEBUG nova.compute.manager [req-c640f060-5b13-4f20-9fd9-ba0209f8cfb9 req-df5fb257-d42e-4e73-8285-0fd553622f53 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Received event network-changed-c204c1c8-ba1e-45b9-87aa-dac2170952ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:18:50 np0005558241 nova_compute[248510]: 2025-12-13 08:18:50.429 248514 DEBUG nova.compute.manager [req-c640f060-5b13-4f20-9fd9-ba0209f8cfb9 req-df5fb257-d42e-4e73-8285-0fd553622f53 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Refreshing instance network info cache due to event network-changed-c204c1c8-ba1e-45b9-87aa-dac2170952ec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:18:50 np0005558241 nova_compute[248510]: 2025-12-13 08:18:50.429 248514 DEBUG oslo_concurrency.lockutils [req-c640f060-5b13-4f20-9fd9-ba0209f8cfb9 req-df5fb257-d42e-4e73-8285-0fd553622f53 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-e686bddc-956e-4714-868c-9a29dda243d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:18:50 np0005558241 nova_compute[248510]: 2025-12-13 08:18:50.429 248514 DEBUG oslo_concurrency.lockutils [req-c640f060-5b13-4f20-9fd9-ba0209f8cfb9 req-df5fb257-d42e-4e73-8285-0fd553622f53 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-e686bddc-956e-4714-868c-9a29dda243d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:18:50 np0005558241 nova_compute[248510]: 2025-12-13 08:18:50.430 248514 DEBUG nova.network.neutron [req-c640f060-5b13-4f20-9fd9-ba0209f8cfb9 req-df5fb257-d42e-4e73-8285-0fd553622f53 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Refreshing network info cache for port c204c1c8-ba1e-45b9-87aa-dac2170952ec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:18:50 np0005558241 podman[274692]: 2025-12-13 08:18:50.437849663 +0000 UTC m=+0.253052104 container init b85199ddaf788df21747890690fc19f952656f0062b992c8427f7461c52ba874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mayer, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:18:50 np0005558241 podman[274692]: 2025-12-13 08:18:50.447686155 +0000 UTC m=+0.262888566 container start b85199ddaf788df21747890690fc19f952656f0062b992c8427f7461c52ba874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:18:50 np0005558241 festive_mayer[274708]: 167 167
Dec 13 03:18:50 np0005558241 systemd[1]: libpod-b85199ddaf788df21747890690fc19f952656f0062b992c8427f7461c52ba874.scope: Deactivated successfully.
Dec 13 03:18:50 np0005558241 conmon[274708]: conmon b85199ddaf788df21747 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b85199ddaf788df21747890690fc19f952656f0062b992c8427f7461c52ba874.scope/container/memory.events
Dec 13 03:18:50 np0005558241 podman[274692]: 2025-12-13 08:18:50.504685389 +0000 UTC m=+0.319887820 container attach b85199ddaf788df21747890690fc19f952656f0062b992c8427f7461c52ba874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mayer, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 03:18:50 np0005558241 podman[274692]: 2025-12-13 08:18:50.505157791 +0000 UTC m=+0.320360222 container died b85199ddaf788df21747890690fc19f952656f0062b992c8427f7461c52ba874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mayer, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:18:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-67b4129d1c4380e3c2a981fb5e2629834bbd1c2a78dfe8741b9d59ee3e9e0007-merged.mount: Deactivated successfully.
Dec 13 03:18:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1583: 321 pgs: 321 active+clean; 372 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 139 op/s
Dec 13 03:18:50 np0005558241 podman[274692]: 2025-12-13 08:18:50.892945882 +0000 UTC m=+0.708148293 container remove b85199ddaf788df21747890690fc19f952656f0062b992c8427f7461c52ba874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mayer, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:18:50 np0005558241 systemd[1]: libpod-conmon-b85199ddaf788df21747890690fc19f952656f0062b992c8427f7461c52ba874.scope: Deactivated successfully.
Dec 13 03:18:51 np0005558241 podman[274731]: 2025-12-13 08:18:51.147645386 +0000 UTC m=+0.083923008 container create 5b44f9b12dd88bddaf53875293ad3a17ec37915ea3e6ee51372fef23c0d408cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 03:18:51 np0005558241 podman[274731]: 2025-12-13 08:18:51.093783989 +0000 UTC m=+0.030061631 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:18:51 np0005558241 systemd[1]: Started libpod-conmon-5b44f9b12dd88bddaf53875293ad3a17ec37915ea3e6ee51372fef23c0d408cf.scope.
Dec 13 03:18:51 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:18:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18fbf1e4f09beacdf22bb049b4700764d71bc2f6b520375ee37719a843fe95c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:18:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18fbf1e4f09beacdf22bb049b4700764d71bc2f6b520375ee37719a843fe95c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:18:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18fbf1e4f09beacdf22bb049b4700764d71bc2f6b520375ee37719a843fe95c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:18:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18fbf1e4f09beacdf22bb049b4700764d71bc2f6b520375ee37719a843fe95c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:18:51 np0005558241 podman[274731]: 2025-12-13 08:18:51.316677189 +0000 UTC m=+0.252954841 container init 5b44f9b12dd88bddaf53875293ad3a17ec37915ea3e6ee51372fef23c0d408cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cannon, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:18:51 np0005558241 podman[274731]: 2025-12-13 08:18:51.326667835 +0000 UTC m=+0.262945457 container start 5b44f9b12dd88bddaf53875293ad3a17ec37915ea3e6ee51372fef23c0d408cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 03:18:51 np0005558241 podman[274731]: 2025-12-13 08:18:51.351348143 +0000 UTC m=+0.287625795 container attach 5b44f9b12dd88bddaf53875293ad3a17ec37915ea3e6ee51372fef23c0d408cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]: {
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:    "0": [
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:        {
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "devices": [
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "/dev/loop3"
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            ],
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "lv_name": "ceph_lv0",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "lv_size": "21470642176",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "name": "ceph_lv0",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "tags": {
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.cluster_name": "ceph",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.crush_device_class": "",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.encrypted": "0",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.objectstore": "bluestore",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.osd_id": "0",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.type": "block",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.vdo": "0",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.with_tpm": "0"
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            },
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "type": "block",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "vg_name": "ceph_vg0"
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:        }
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:    ],
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:    "1": [
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:        {
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "devices": [
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "/dev/loop4"
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            ],
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "lv_name": "ceph_lv1",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "lv_size": "21470642176",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "name": "ceph_lv1",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "tags": {
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.cluster_name": "ceph",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.crush_device_class": "",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.encrypted": "0",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.objectstore": "bluestore",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.osd_id": "1",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.type": "block",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.vdo": "0",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.with_tpm": "0"
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            },
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "type": "block",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "vg_name": "ceph_vg1"
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:        }
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:    ],
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:    "2": [
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:        {
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "devices": [
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "/dev/loop5"
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            ],
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "lv_name": "ceph_lv2",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "lv_size": "21470642176",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "name": "ceph_lv2",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "tags": {
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.cluster_name": "ceph",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.crush_device_class": "",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.encrypted": "0",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.objectstore": "bluestore",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.osd_id": "2",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.type": "block",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.vdo": "0",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:                "ceph.with_tpm": "0"
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            },
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "type": "block",
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:            "vg_name": "ceph_vg2"
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:        }
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]:    ]
Dec 13 03:18:51 np0005558241 gracious_cannon[274747]: }
Dec 13 03:18:51 np0005558241 systemd[1]: libpod-5b44f9b12dd88bddaf53875293ad3a17ec37915ea3e6ee51372fef23c0d408cf.scope: Deactivated successfully.
Dec 13 03:18:51 np0005558241 podman[274731]: 2025-12-13 08:18:51.675101908 +0000 UTC m=+0.611379560 container died 5b44f9b12dd88bddaf53875293ad3a17ec37915ea3e6ee51372fef23c0d408cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:18:52 np0005558241 nova_compute[248510]: 2025-12-13 08:18:52.011 248514 DEBUG nova.network.neutron [req-c640f060-5b13-4f20-9fd9-ba0209f8cfb9 req-df5fb257-d42e-4e73-8285-0fd553622f53 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Updated VIF entry in instance network info cache for port c204c1c8-ba1e-45b9-87aa-dac2170952ec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:18:52 np0005558241 nova_compute[248510]: 2025-12-13 08:18:52.014 248514 DEBUG nova.network.neutron [req-c640f060-5b13-4f20-9fd9-ba0209f8cfb9 req-df5fb257-d42e-4e73-8285-0fd553622f53 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Updating instance_info_cache with network_info: [{"id": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "address": "fa:16:3e:67:37:df", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc204c1c8-ba", "ovs_interfaceid": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:18:52 np0005558241 nova_compute[248510]: 2025-12-13 08:18:52.051 248514 DEBUG oslo_concurrency.lockutils [req-c640f060-5b13-4f20-9fd9-ba0209f8cfb9 req-df5fb257-d42e-4e73-8285-0fd553622f53 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-e686bddc-956e-4714-868c-9a29dda243d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:18:52 np0005558241 systemd[1]: var-lib-containers-storage-overlay-18fbf1e4f09beacdf22bb049b4700764d71bc2f6b520375ee37719a843fe95c2-merged.mount: Deactivated successfully.
Dec 13 03:18:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1584: 321 pgs: 321 active+clean; 372 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 139 op/s
Dec 13 03:18:53 np0005558241 nova_compute[248510]: 2025-12-13 08:18:53.486 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:53 np0005558241 podman[274731]: 2025-12-13 08:18:53.776821984 +0000 UTC m=+2.713099626 container remove 5b44f9b12dd88bddaf53875293ad3a17ec37915ea3e6ee51372fef23c0d408cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:18:53 np0005558241 systemd[1]: libpod-conmon-5b44f9b12dd88bddaf53875293ad3a17ec37915ea3e6ee51372fef23c0d408cf.scope: Deactivated successfully.
Dec 13 03:18:53 np0005558241 nova_compute[248510]: 2025-12-13 08:18:53.926 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:18:54 np0005558241 podman[274828]: 2025-12-13 08:18:54.232834666 +0000 UTC m=+0.023341225 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:18:54 np0005558241 nova_compute[248510]: 2025-12-13 08:18:54.809 248514 DEBUG nova.compute.manager [req-696686ff-d776-4aab-9213-f60cd31880cd req-df657f99-eefa-48b6-a7d1-5e0a69fded62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Received event network-changed-c204c1c8-ba1e-45b9-87aa-dac2170952ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:18:54 np0005558241 nova_compute[248510]: 2025-12-13 08:18:54.809 248514 DEBUG nova.compute.manager [req-696686ff-d776-4aab-9213-f60cd31880cd req-df657f99-eefa-48b6-a7d1-5e0a69fded62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Refreshing instance network info cache due to event network-changed-c204c1c8-ba1e-45b9-87aa-dac2170952ec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:18:54 np0005558241 nova_compute[248510]: 2025-12-13 08:18:54.810 248514 DEBUG oslo_concurrency.lockutils [req-696686ff-d776-4aab-9213-f60cd31880cd req-df657f99-eefa-48b6-a7d1-5e0a69fded62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-e686bddc-956e-4714-868c-9a29dda243d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:18:54 np0005558241 nova_compute[248510]: 2025-12-13 08:18:54.810 248514 DEBUG oslo_concurrency.lockutils [req-696686ff-d776-4aab-9213-f60cd31880cd req-df657f99-eefa-48b6-a7d1-5e0a69fded62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-e686bddc-956e-4714-868c-9a29dda243d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:18:54 np0005558241 nova_compute[248510]: 2025-12-13 08:18:54.810 248514 DEBUG nova.network.neutron [req-696686ff-d776-4aab-9213-f60cd31880cd req-df657f99-eefa-48b6-a7d1-5e0a69fded62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Refreshing network info cache for port c204c1c8-ba1e-45b9-87aa-dac2170952ec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:18:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1585: 321 pgs: 321 active+clean; 372 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 137 op/s
Dec 13 03:18:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:55.266 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:18:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:55.268 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:18:55 np0005558241 nova_compute[248510]: 2025-12-13 08:18:55.270 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:55.400 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:18:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:55.402 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:18:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:55.403 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:18:56 np0005558241 podman[274828]: 2025-12-13 08:18:56.54433838 +0000 UTC m=+2.334844919 container create ff71ca4bddd8c5ff1996f079c23615de46c6474c4a4bd003dca51858c26901f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_greider, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 03:18:56 np0005558241 systemd[1]: Started libpod-conmon-ff71ca4bddd8c5ff1996f079c23615de46c6474c4a4bd003dca51858c26901f5.scope.
Dec 13 03:18:56 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:18:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1586: 321 pgs: 321 active+clean; 372 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 127 op/s
Dec 13 03:18:57 np0005558241 nova_compute[248510]: 2025-12-13 08:18:57.522 248514 DEBUG nova.network.neutron [req-696686ff-d776-4aab-9213-f60cd31880cd req-df657f99-eefa-48b6-a7d1-5e0a69fded62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Updated VIF entry in instance network info cache for port c204c1c8-ba1e-45b9-87aa-dac2170952ec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:18:57 np0005558241 nova_compute[248510]: 2025-12-13 08:18:57.523 248514 DEBUG nova.network.neutron [req-696686ff-d776-4aab-9213-f60cd31880cd req-df657f99-eefa-48b6-a7d1-5e0a69fded62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Updating instance_info_cache with network_info: [{"id": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "address": "fa:16:3e:67:37:df", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc204c1c8-ba", "ovs_interfaceid": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:18:57 np0005558241 nova_compute[248510]: 2025-12-13 08:18:57.565 248514 DEBUG oslo_concurrency.lockutils [req-696686ff-d776-4aab-9213-f60cd31880cd req-df657f99-eefa-48b6-a7d1-5e0a69fded62 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-e686bddc-956e-4714-868c-9a29dda243d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:18:57 np0005558241 podman[274828]: 2025-12-13 08:18:57.810435356 +0000 UTC m=+3.600941905 container init ff71ca4bddd8c5ff1996f079c23615de46c6474c4a4bd003dca51858c26901f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 03:18:57 np0005558241 podman[274828]: 2025-12-13 08:18:57.82112888 +0000 UTC m=+3.611635409 container start ff71ca4bddd8c5ff1996f079c23615de46c6474c4a4bd003dca51858c26901f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 03:18:57 np0005558241 optimistic_greider[274844]: 167 167
Dec 13 03:18:57 np0005558241 systemd[1]: libpod-ff71ca4bddd8c5ff1996f079c23615de46c6474c4a4bd003dca51858c26901f5.scope: Deactivated successfully.
Dec 13 03:18:58 np0005558241 nova_compute[248510]: 2025-12-13 08:18:58.488 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1587: 321 pgs: 321 active+clean; 376 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 610 KiB/s wr, 143 op/s
Dec 13 03:18:58 np0005558241 nova_compute[248510]: 2025-12-13 08:18:58.928 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:18:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:18:59.270 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:18:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:18:59 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 13 03:19:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1588: 321 pgs: 321 active+clean; 376 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 610 KiB/s wr, 16 op/s
Dec 13 03:19:01 np0005558241 podman[274828]: 2025-12-13 08:19:01.487979558 +0000 UTC m=+7.278486197 container attach ff71ca4bddd8c5ff1996f079c23615de46c6474c4a4bd003dca51858c26901f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:19:01 np0005558241 podman[274828]: 2025-12-13 08:19:01.489036875 +0000 UTC m=+7.279543414 container died ff71ca4bddd8c5ff1996f079c23615de46c6474c4a4bd003dca51858c26901f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_greider, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.697037697s, txc = 0x560001136c00, txc bytes = 460046, txc ios = 8, txc cost = 5820046, txc onodes = 1, DB updates = 10, DB bytes = 3136, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.696384907s, txc = 0x5600026f6000, txc bytes = 312590, txc ios = 6, txc cost = 4332590, txc onodes = 1, DB updates = 6, DB bytes = 1985, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.695998669s, txc = 0x5600005a7500, txc bytes = 263438, txc ios = 5, txc cost = 3613438, txc onodes = 1, DB updates = 6, DB bytes = 1694, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.693610668s, txc = 0x560001472000, txc bytes = 5390, txc ios = 1, txc cost = 675390, txc onodes = 1, DB updates = 7, DB bytes = 6211, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.693314075s, txc = 0x560000602300, txc bytes = 66830, txc ios = 2, txc cost = 1406830, txc onodes = 1, DB updates = 6, DB bytes = 1768, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.693065166s, txc = 0x560001f08000, txc bytes = 58638, txc ios = 1, txc cost = 728638, txc onodes = 1, DB updates = 7, DB bytes = 59233, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.692714691s, txc = 0x5600005ccf00, txc bytes = 197902, txc ios = 5, txc cost = 3547902, txc onodes = 1, DB updates = 7, DB bytes = 2324, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.691856384s, txc = 0x5600000b6f00, txc bytes = 34062, txc ios = 1, txc cost = 704062, txc onodes = 1, DB updates = 7, DB bytes = 35181, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.690774918s, txc = 0x5600005a7200, txc bytes = 66830, txc ios = 2, txc cost = 1406830, txc onodes = 1, DB updates = 13, DB bytes = 4010, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.688998699s, txc = 0x5600064f7b00, txc bytes = 9486, txc ios = 1, txc cost = 679486, txc onodes = 1, DB updates = 6, DB bytes = 9727, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.689024448s, txc = 0x56000011e300, txc bytes = 25870, txc ios = 1, txc cost = 695870, txc onodes = 1, DB updates = 7, DB bytes = 26338, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.688879013s, txc = 0x560001165200, txc bytes = 17678, txc ios = 1, txc cost = 687678, txc onodes = 1, DB updates = 6, DB bytes = 17949, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.688838959s, txc = 0x560002be4900, txc bytes = 34062, txc ios = 1, txc cost = 704062, txc onodes = 1, DB updates = 7, DB bytes = 34575, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.688261032s, txc = 0x560004b1f800, txc bytes = 50446, txc ios = 1, txc cost = 720446, txc onodes = 1, DB updates = 6, DB bytes = 50770, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.049779415s, txc = 0x5600064f7200, txc bytes = 66830, txc ios = 2, txc cost = 1406830, txc onodes = 1, DB updates = 6, DB bytes = 1832, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.045408726s, txc = 0x560001166600, txc bytes = 222478, txc ios = 5, txc cost = 3572478, txc onodes = 1, DB updates = 6, DB bytes = 1964, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.044545174s, txc = 0x560000081200, txc bytes = 34062, txc ios = 1, txc cost = 704062, txc onodes = 1, DB updates = 7, DB bytes = 34698, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.044240475s, txc = 0x56000066af00, txc bytes = 66830, txc ios = 2, txc cost = 1406830, txc onodes = 1, DB updates = 6, DB bytes = 1954, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.044041634s, txc = 0x560001f68900, txc bytes = 132366, txc ios = 4, txc cost = 2812366, txc onodes = 1, DB updates = 5, DB bytes = 2090, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.043401241s, txc = 0x560000e12c00, txc bytes = 423182, txc ios = 8, txc cost = 5783182, txc onodes = 1, DB updates = 10, DB bytes = 2954, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.039463997s, txc = 0x5600000cfb00, txc bytes = 558350, txc ios = 11, txc cost = 7928350, txc onodes = 1, DB updates = 7, DB bytes = 2284, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.038090706s, txc = 0x560002be5b00, txc bytes = 140558, txc ios = 4, txc cost = 2820558, txc onodes = 1, DB updates = 11, DB bytes = 3303, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.035337448s, txc = 0x560001150300, txc bytes = 234766, txc ios = 5, txc cost = 3584766, txc onodes = 1, DB updates = 6, DB bytes = 1559, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.032855511s, txc = 0x560001166000, txc bytes = 312590, txc ios = 6, txc cost = 4332590, txc onodes = 1, DB updates = 7, DB bytes = 1948, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.030773640s, txc = 0x560002be5800, txc bytes = 83214, txc ios = 3, txc cost = 2093214, txc onodes = 1, DB updates = 6, DB bytes = 2014, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.027955532s, txc = 0x5600004f7b00, txc bytes = 132366, txc ios = 3, txc cost = 2142366, txc onodes = 1, DB updates = 6, DB bytes = 2164, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-osd[87041]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.026063442s, txc = 0x560002597200, txc bytes = 50446, txc ios = 1, txc cost = 720446, txc onodes = 1, DB updates = 8, DB bytes = 51321, cost max = 110579982 on 2025-12-13T08:18:33.799573+0000, txc max = 199 on 2025-12-13T07:32:16.600359+0000
Dec 13 03:19:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1589: 321 pgs: 321 active+clean; 380 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1.1 MiB/s wr, 22 op/s
Dec 13 03:19:03 np0005558241 nova_compute[248510]: 2025-12-13 08:19:03.491 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:03 np0005558241 nova_compute[248510]: 2025-12-13 08:19:03.987 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:19:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1590: 321 pgs: 321 active+clean; 381 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Dec 13 03:19:05 np0005558241 systemd[1]: var-lib-containers-storage-overlay-84f51d35fa6668ab08142a317a478e633394382450b53d57afc6b510f03ab311-merged.mount: Deactivated successfully.
Dec 13 03:19:05 np0005558241 nova_compute[248510]: 2025-12-13 08:19:05.665 248514 INFO nova.compute.manager [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Rebuilding instance#033[00m
Dec 13 03:19:06 np0005558241 nova_compute[248510]: 2025-12-13 08:19:06.000 248514 DEBUG nova.objects.instance [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'trusted_certs' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:19:06 np0005558241 nova_compute[248510]: 2025-12-13 08:19:06.023 248514 DEBUG nova.compute.manager [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:19:06 np0005558241 nova_compute[248510]: 2025-12-13 08:19:06.102 248514 DEBUG nova.objects.instance [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'pci_requests' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:19:06 np0005558241 nova_compute[248510]: 2025-12-13 08:19:06.235 248514 DEBUG nova.objects.instance [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'pci_devices' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:19:06 np0005558241 nova_compute[248510]: 2025-12-13 08:19:06.259 248514 DEBUG nova.objects.instance [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'resources' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:19:06 np0005558241 nova_compute[248510]: 2025-12-13 08:19:06.272 248514 DEBUG nova.objects.instance [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'migration_context' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:19:06 np0005558241 nova_compute[248510]: 2025-12-13 08:19:06.287 248514 DEBUG nova.objects.instance [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec 13 03:19:06 np0005558241 nova_compute[248510]: 2025-12-13 08:19:06.292 248514 DEBUG nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec 13 03:19:06 np0005558241 podman[274828]: 2025-12-13 08:19:06.472631563 +0000 UTC m=+12.263138112 container remove ff71ca4bddd8c5ff1996f079c23615de46c6474c4a4bd003dca51858c26901f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_greider, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 03:19:06 np0005558241 ceph-mon[76537]: log_channel(cluster) log [WRN] : Health check update: 3 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Dec 13 03:19:06 np0005558241 systemd[1]: libpod-conmon-ff71ca4bddd8c5ff1996f079c23615de46c6474c4a4bd003dca51858c26901f5.scope: Deactivated successfully.
Dec 13 03:19:06 np0005558241 podman[274864]: 2025-12-13 08:19:06.570286719 +0000 UTC m=+1.415167398 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:19:06 np0005558241 podman[274865]: 2025-12-13 08:19:06.577381403 +0000 UTC m=+1.424860366 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:19:06 np0005558241 podman[274863]: 2025-12-13 08:19:06.613151034 +0000 UTC m=+1.465639170 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 03:19:06 np0005558241 podman[274931]: 2025-12-13 08:19:06.697844269 +0000 UTC m=+0.031362043 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:19:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1591: 321 pgs: 321 active+clean; 381 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Dec 13 03:19:07 np0005558241 podman[274931]: 2025-12-13 08:19:07.469340662 +0000 UTC m=+0.802858416 container create ca6f6cc0cc390591003c4e7496842ad5be70f96a6eeac39f803c814aa08255fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:19:07 np0005558241 ceph-mon[76537]: Health check update: 3 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Dec 13 03:19:07 np0005558241 systemd[1]: Started libpod-conmon-ca6f6cc0cc390591003c4e7496842ad5be70f96a6eeac39f803c814aa08255fa.scope.
Dec 13 03:19:07 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:19:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47ac8814ba277844a740ea9907b1b06c58f3f9898aa405c3e63f08ab88d81bbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:19:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47ac8814ba277844a740ea9907b1b06c58f3f9898aa405c3e63f08ab88d81bbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:19:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47ac8814ba277844a740ea9907b1b06c58f3f9898aa405c3e63f08ab88d81bbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:19:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47ac8814ba277844a740ea9907b1b06c58f3f9898aa405c3e63f08ab88d81bbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:19:08 np0005558241 nova_compute[248510]: 2025-12-13 08:19:08.532 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:08 np0005558241 podman[274931]: 2025-12-13 08:19:08.857774201 +0000 UTC m=+2.191291975 container init ca6f6cc0cc390591003c4e7496842ad5be70f96a6eeac39f803c814aa08255fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:19:08 np0005558241 podman[274931]: 2025-12-13 08:19:08.867836459 +0000 UTC m=+2.201354253 container start ca6f6cc0cc390591003c4e7496842ad5be70f96a6eeac39f803c814aa08255fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:19:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1592: 321 pgs: 321 active+clean; 407 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 3.4 MiB/s wr, 45 op/s
Dec 13 03:19:08 np0005558241 nova_compute[248510]: 2025-12-13 08:19:08.989 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:19:09
Dec 13 03:19:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:19:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:19:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['volumes', 'images', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'default.rgw.log', '.rgw.root', 'backups']
Dec 13 03:19:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:19:09 np0005558241 podman[274931]: 2025-12-13 08:19:09.375751299 +0000 UTC m=+2.709269073 container attach ca6f6cc0cc390591003c4e7496842ad5be70f96a6eeac39f803c814aa08255fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_tharp, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:19:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:19:09 np0005558241 lvm[275026]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:19:09 np0005558241 lvm[275025]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:19:09 np0005558241 lvm[275026]: VG ceph_vg0 finished
Dec 13 03:19:09 np0005558241 lvm[275025]: VG ceph_vg1 finished
Dec 13 03:19:09 np0005558241 lvm[275028]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:19:09 np0005558241 lvm[275028]: VG ceph_vg2 finished
Dec 13 03:19:09 np0005558241 lvm[275031]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:19:09 np0005558241 lvm[275031]: VG ceph_vg2 finished
Dec 13 03:19:09 np0005558241 mystifying_tharp[274947]: {}
Dec 13 03:19:09 np0005558241 systemd[1]: libpod-ca6f6cc0cc390591003c4e7496842ad5be70f96a6eeac39f803c814aa08255fa.scope: Deactivated successfully.
Dec 13 03:19:09 np0005558241 podman[274931]: 2025-12-13 08:19:09.835645277 +0000 UTC m=+3.169163031 container died ca6f6cc0cc390591003c4e7496842ad5be70f96a6eeac39f803c814aa08255fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_tharp, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 03:19:09 np0005558241 systemd[1]: libpod-ca6f6cc0cc390591003c4e7496842ad5be70f96a6eeac39f803c814aa08255fa.scope: Consumed 1.575s CPU time.
Dec 13 03:19:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:19:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:19:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:19:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:19:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:19:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:19:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:19:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:19:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:19:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:19:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:19:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:19:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:19:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:19:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:19:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:19:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1593: 321 pgs: 321 active+clean; 407 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.9 MiB/s wr, 29 op/s
Dec 13 03:19:11 np0005558241 systemd[1]: var-lib-containers-storage-overlay-47ac8814ba277844a740ea9907b1b06c58f3f9898aa405c3e63f08ab88d81bbb-merged.mount: Deactivated successfully.
Dec 13 03:19:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1594: 321 pgs: 321 active+clean; 411 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.1 MiB/s wr, 30 op/s
Dec 13 03:19:13 np0005558241 nova_compute[248510]: 2025-12-13 08:19:13.535 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:14 np0005558241 nova_compute[248510]: 2025-12-13 08:19:14.036 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:14 np0005558241 podman[274931]: 2025-12-13 08:19:14.332432096 +0000 UTC m=+7.665949860 container remove ca6f6cc0cc390591003c4e7496842ad5be70f96a6eeac39f803c814aa08255fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_tharp, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:19:14 np0005558241 systemd[1]: libpod-conmon-ca6f6cc0cc390591003c4e7496842ad5be70f96a6eeac39f803c814aa08255fa.scope: Deactivated successfully.
Dec 13 03:19:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:19:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:19:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1595: 321 pgs: 321 active+clean; 414 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 2.9 MiB/s wr, 31 op/s
Dec 13 03:19:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:19:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:19:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:19:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/596080702' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:19:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:19:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/596080702' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:19:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:19:15 np0005558241 nova_compute[248510]: 2025-12-13 08:19:15.983 248514 DEBUG oslo_concurrency.lockutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "2f398188-2df1-4c96-a224-c66efd7383a0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:15 np0005558241 nova_compute[248510]: 2025-12-13 08:19:15.983 248514 DEBUG oslo_concurrency.lockutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:15 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:15Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:ac:d2 10.100.0.4
Dec 13 03:19:15 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:15Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:ac:d2 10.100.0.4
Dec 13 03:19:16 np0005558241 nova_compute[248510]: 2025-12-13 08:19:16.014 248514 DEBUG nova.compute.manager [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:19:16 np0005558241 nova_compute[248510]: 2025-12-13 08:19:16.114 248514 DEBUG oslo_concurrency.lockutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:16 np0005558241 nova_compute[248510]: 2025-12-13 08:19:16.115 248514 DEBUG oslo_concurrency.lockutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:16 np0005558241 nova_compute[248510]: 2025-12-13 08:19:16.127 248514 DEBUG nova.virt.hardware [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:19:16 np0005558241 nova_compute[248510]: 2025-12-13 08:19:16.128 248514 INFO nova.compute.claims [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:19:16 np0005558241 nova_compute[248510]: 2025-12-13 08:19:16.367 248514 INFO nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance shutdown successfully after 10 seconds.#033[00m
Dec 13 03:19:16 np0005558241 nova_compute[248510]: 2025-12-13 08:19:16.396 248514 DEBUG oslo_concurrency.processutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:16 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:19:16 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:19:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1596: 321 pgs: 321 active+clean; 414 MiB data, 497 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 2.8 MiB/s wr, 28 op/s
Dec 13 03:19:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:19:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2090442788' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.451 248514 DEBUG oslo_concurrency.processutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.459 248514 DEBUG nova.compute.provider_tree [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.547 248514 DEBUG nova.scheduler.client.report [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.577 248514 DEBUG oslo_concurrency.lockutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.462s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.578 248514 DEBUG nova.compute.manager [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.631 248514 DEBUG nova.compute.manager [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.631 248514 DEBUG nova.network.neutron [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.659 248514 INFO nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.695 248514 DEBUG nova.compute.manager [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:19:17 np0005558241 kernel: tap7f8a109e-e2 (unregistering): left promiscuous mode
Dec 13 03:19:17 np0005558241 NetworkManager[50376]: <info>  [1765613957.7702] device (tap7f8a109e-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:19:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:17Z|00089|binding|INFO|Releasing lport 7f8a109e-e262-4847-8430-ac7944dace5c from this chassis (sb_readonly=0)
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:17Z|00090|binding|INFO|Setting lport 7f8a109e-e262-4847-8430-ac7944dace5c down in Southbound
Dec 13 03:19:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:17Z|00091|binding|INFO|Removing iface tap7f8a109e-e2 ovn-installed in OVS
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.785 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:17.792 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:0a:44 10.100.0.14'], port_security=['fa:16:3e:e6:0a:44 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a14d8b88-7aec-468f-a550-881364e4d95e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7f8a109e-e262-4847-8430-ac7944dace5c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:19:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:17.793 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7f8a109e-e262-4847-8430-ac7944dace5c in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 unbound from our chassis#033[00m
Dec 13 03:19:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:17.795 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5#033[00m
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.797 248514 DEBUG nova.compute.manager [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.799 248514 DEBUG nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.800 248514 INFO nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Creating image(s)#033[00m
Dec 13 03:19:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:17Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:37:df 10.100.0.8
Dec 13 03:19:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:17Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:37:df 10.100.0.8
Dec 13 03:19:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:17.824 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[24d27bbf-237e-4996-b28c-2d1dc1a05f45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.828 248514 DEBUG nova.storage.rbd_utils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] rbd image 2f398188-2df1-4c96-a224-c66efd7383a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:17 np0005558241 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Deactivated successfully.
Dec 13 03:19:17 np0005558241 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Consumed 17.637s CPU time.
Dec 13 03:19:17 np0005558241 systemd-machined[210538]: Machine qemu-19-instance-00000011 terminated.
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.853 248514 DEBUG nova.storage.rbd_utils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] rbd image 2f398188-2df1-4c96-a224-c66efd7383a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:17.859 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a963f1aa-f54a-444c-80b2-ecdf9fef693f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:17.862 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1a298730-3747-4df8-98b3-086525208102]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.878 248514 DEBUG nova.storage.rbd_utils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] rbd image 2f398188-2df1-4c96-a224-c66efd7383a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.884 248514 DEBUG oslo_concurrency.processutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:17.903 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd0d2ee-5a59-47d7-9aa5-0db49ff20419]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:17.924 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f49a2cf7-96fc-472b-bc54-4e8271d0e41a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3139e1f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:6e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 784, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 784, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641497, 'reachable_time': 37181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275161, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:17.944 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cec5f8bb-b794-4630-9c61-488359b00bb8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641512, 'tstamp': 641512}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275162, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641515, 'tstamp': 641515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275162, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:17.947 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.949 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:17 np0005558241 nova_compute[248510]: 2025-12-13 08:19:17.953 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:17.954 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3139e1f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:17.954 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:19:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:17.955 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3139e1f-d0, col_values=(('external_ids', {'iface-id': 'c8850a7f-53d6-4776-8c0e-7fb474fe52de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:17.955 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.004 248514 INFO nova.virt.libvirt.driver [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance destroyed successfully.#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.010 248514 INFO nova.virt.libvirt.driver [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance destroyed successfully.#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.011 248514 DEBUG nova.virt.libvirt.vif [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:17:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-247332834',display_name='tempest-ServersAdminTestJSON-server-247332834',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-247332834',id=17,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:17:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-88vex2g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTestJSON-1079129122-project-member'},tags=<?>,tas
k_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:19:05Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=a14d8b88-7aec-468f-a550-881364e4d95e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.012 248514 DEBUG nova.network.os_vif_util [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.013 248514 DEBUG nova.network.os_vif_util [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.013 248514 DEBUG os_vif [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.016 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.016 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f8a109e-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.020 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.024 248514 INFO os_vif [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2')#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.045 248514 DEBUG nova.policy [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1e46b9271df84074b6a832899ca5ee57', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd6abc0a5e3b44a75bd19cd7df807b790', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.053 248514 DEBUG oslo_concurrency.processutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.053 248514 DEBUG oslo_concurrency.lockutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.054 248514 DEBUG oslo_concurrency.lockutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.054 248514 DEBUG oslo_concurrency.lockutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.074 248514 DEBUG nova.storage.rbd_utils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] rbd image 2f398188-2df1-4c96-a224-c66efd7383a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.080 248514 DEBUG oslo_concurrency.processutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2f398188-2df1-4c96-a224-c66efd7383a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.460 248514 DEBUG nova.compute.manager [req-3753f677-e9fa-4bcc-8641-58b07763721a req-02cc5110-5d2c-46b8-b3b6-3f93500e82a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received event network-vif-unplugged-7f8a109e-e262-4847-8430-ac7944dace5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.461 248514 DEBUG oslo_concurrency.lockutils [req-3753f677-e9fa-4bcc-8641-58b07763721a req-02cc5110-5d2c-46b8-b3b6-3f93500e82a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.462 248514 DEBUG oslo_concurrency.lockutils [req-3753f677-e9fa-4bcc-8641-58b07763721a req-02cc5110-5d2c-46b8-b3b6-3f93500e82a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.462 248514 DEBUG oslo_concurrency.lockutils [req-3753f677-e9fa-4bcc-8641-58b07763721a req-02cc5110-5d2c-46b8-b3b6-3f93500e82a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.463 248514 DEBUG nova.compute.manager [req-3753f677-e9fa-4bcc-8641-58b07763721a req-02cc5110-5d2c-46b8-b3b6-3f93500e82a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] No waiting events found dispatching network-vif-unplugged-7f8a109e-e262-4847-8430-ac7944dace5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.463 248514 WARNING nova.compute.manager [req-3753f677-e9fa-4bcc-8641-58b07763721a req-02cc5110-5d2c-46b8-b3b6-3f93500e82a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received unexpected event network-vif-unplugged-7f8a109e-e262-4847-8430-ac7944dace5c for instance with vm_state error and task_state rebuilding.#033[00m
Dec 13 03:19:18 np0005558241 nova_compute[248510]: 2025-12-13 08:19:18.857 248514 DEBUG nova.network.neutron [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Successfully created port: 31fc970a-7d64-4540-880f-6d5aee7d129a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:19:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1597: 321 pgs: 321 active+clean; 419 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 654 KiB/s rd, 2.9 MiB/s wr, 86 op/s
Dec 13 03:19:19 np0005558241 nova_compute[248510]: 2025-12-13 08:19:19.077 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:19:19 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Dec 13 03:19:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:19:19.994266) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:19:19 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Dec 13 03:19:19 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613959994357, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1175, "num_deletes": 251, "total_data_size": 1831050, "memory_usage": 1852752, "flush_reason": "Manual Compaction"}
Dec 13 03:19:19 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Dec 13 03:19:20 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613960278846, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1790491, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29458, "largest_seqno": 30632, "table_properties": {"data_size": 1784779, "index_size": 3043, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12280, "raw_average_key_size": 19, "raw_value_size": 1773408, "raw_average_value_size": 2883, "num_data_blocks": 136, "num_entries": 615, "num_filter_entries": 615, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765613839, "oldest_key_time": 1765613839, "file_creation_time": 1765613959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:19:20 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 284668 microseconds, and 5057 cpu microseconds.
Dec 13 03:19:20 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:19:20 np0005558241 nova_compute[248510]: 2025-12-13 08:19:20.399 248514 DEBUG nova.network.neutron [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Successfully updated port: 31fc970a-7d64-4540-880f-6d5aee7d129a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:19:20 np0005558241 nova_compute[248510]: 2025-12-13 08:19:20.436 248514 DEBUG oslo_concurrency.lockutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "refresh_cache-2f398188-2df1-4c96-a224-c66efd7383a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:19:20 np0005558241 nova_compute[248510]: 2025-12-13 08:19:20.436 248514 DEBUG oslo_concurrency.lockutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquired lock "refresh_cache-2f398188-2df1-4c96-a224-c66efd7383a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:19:20 np0005558241 nova_compute[248510]: 2025-12-13 08:19:20.437 248514 DEBUG nova.network.neutron [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:19:20 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:19:20.278934) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1790491 bytes OK
Dec 13 03:19:20 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:19:20.278965) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Dec 13 03:19:20 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:19:20.518703) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Dec 13 03:19:20 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:19:20.518755) EVENT_LOG_v1 {"time_micros": 1765613960518745, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:19:20 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:19:20.518789) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:19:20 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1825622, prev total WAL file size 1827951, number of live WAL files 2.
Dec 13 03:19:20 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:19:20 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:19:20.519954) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Dec 13 03:19:20 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:19:20 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1748KB)], [65(7549KB)]
Dec 13 03:19:20 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613960520226, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 9521506, "oldest_snapshot_seqno": -1}
Dec 13 03:19:20 np0005558241 nova_compute[248510]: 2025-12-13 08:19:20.649 248514 DEBUG nova.compute.manager [req-3fbcc140-5fcd-47bd-aeae-ceb035fc32cc req-f64d4219-6c89-430c-a551-872a39d987aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Received event network-changed-31fc970a-7d64-4540-880f-6d5aee7d129a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:19:20 np0005558241 nova_compute[248510]: 2025-12-13 08:19:20.649 248514 DEBUG nova.compute.manager [req-3fbcc140-5fcd-47bd-aeae-ceb035fc32cc req-f64d4219-6c89-430c-a551-872a39d987aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Refreshing instance network info cache due to event network-changed-31fc970a-7d64-4540-880f-6d5aee7d129a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:19:20 np0005558241 nova_compute[248510]: 2025-12-13 08:19:20.650 248514 DEBUG oslo_concurrency.lockutils [req-3fbcc140-5fcd-47bd-aeae-ceb035fc32cc req-f64d4219-6c89-430c-a551-872a39d987aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2f398188-2df1-4c96-a224-c66efd7383a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:19:20 np0005558241 nova_compute[248510]: 2025-12-13 08:19:20.652 248514 DEBUG nova.compute.manager [req-2bced678-c321-4523-a854-6afeea11898b req-2a229963-fc3b-46e3-a4fd-643cc3b310a8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:19:20 np0005558241 nova_compute[248510]: 2025-12-13 08:19:20.653 248514 DEBUG oslo_concurrency.lockutils [req-2bced678-c321-4523-a854-6afeea11898b req-2a229963-fc3b-46e3-a4fd-643cc3b310a8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:20 np0005558241 nova_compute[248510]: 2025-12-13 08:19:20.653 248514 DEBUG oslo_concurrency.lockutils [req-2bced678-c321-4523-a854-6afeea11898b req-2a229963-fc3b-46e3-a4fd-643cc3b310a8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:20 np0005558241 nova_compute[248510]: 2025-12-13 08:19:20.653 248514 DEBUG oslo_concurrency.lockutils [req-2bced678-c321-4523-a854-6afeea11898b req-2a229963-fc3b-46e3-a4fd-643cc3b310a8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:20 np0005558241 nova_compute[248510]: 2025-12-13 08:19:20.654 248514 DEBUG nova.compute.manager [req-2bced678-c321-4523-a854-6afeea11898b req-2a229963-fc3b-46e3-a4fd-643cc3b310a8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] No waiting events found dispatching network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:19:20 np0005558241 nova_compute[248510]: 2025-12-13 08:19:20.654 248514 WARNING nova.compute.manager [req-2bced678-c321-4523-a854-6afeea11898b req-2a229963-fc3b-46e3-a4fd-643cc3b310a8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received unexpected event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c for instance with vm_state error and task_state rebuilding.#033[00m
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0037712730094232524 of space, bias 1.0, pg target 1.1313819028269758 quantized to 32 (current 32)
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665836486250052 of space, bias 1.0, pg target 0.19930851093887655 quantized to 32 (current 32)
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.623247742148328e-07 of space, bias 4.0, pg target 0.00103134042996094 quantized to 16 (current 32)
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Dec 13 03:19:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1598: 321 pgs: 321 active+clean; 419 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 648 KiB/s rd, 669 KiB/s wr, 66 op/s
Dec 13 03:19:21 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5278 keys, 7721052 bytes, temperature: kUnknown
Dec 13 03:19:21 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613961090733, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7721052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7686731, "index_size": 20003, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13253, "raw_key_size": 133997, "raw_average_key_size": 25, "raw_value_size": 7592493, "raw_average_value_size": 1438, "num_data_blocks": 815, "num_entries": 5278, "num_filter_entries": 5278, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765613960, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:19:21 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:19:21 np0005558241 nova_compute[248510]: 2025-12-13 08:19:21.319 248514 DEBUG nova.network.neutron [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:19:21 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:19:21.091033) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7721052 bytes
Dec 13 03:19:21 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:19:21.366180) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 16.7 rd, 13.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.4 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(9.6) write-amplify(4.3) OK, records in: 5792, records dropped: 514 output_compression: NoCompression
Dec 13 03:19:21 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:19:21.366242) EVENT_LOG_v1 {"time_micros": 1765613961366215, "job": 36, "event": "compaction_finished", "compaction_time_micros": 570598, "compaction_time_cpu_micros": 27813, "output_level": 6, "num_output_files": 1, "total_output_size": 7721052, "num_input_records": 5792, "num_output_records": 5278, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:19:21 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:19:21 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613961367574, "job": 36, "event": "table_file_deletion", "file_number": 67}
Dec 13 03:19:21 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:19:21 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765613961369973, "job": 36, "event": "table_file_deletion", "file_number": 65}
Dec 13 03:19:21 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:19:20.519838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:19:21 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:19:21.370121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:19:21 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:19:21.370128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:19:21 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:19:21.370130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:19:21 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:19:21.370132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:19:21 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:19:21.370136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:19:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1599: 321 pgs: 321 active+clean; 434 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 654 KiB/s rd, 1.1 MiB/s wr, 76 op/s
Dec 13 03:19:23 np0005558241 nova_compute[248510]: 2025-12-13 08:19:23.018 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:23 np0005558241 nova_compute[248510]: 2025-12-13 08:19:23.314 248514 DEBUG nova.network.neutron [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Updating instance_info_cache with network_info: [{"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:19:23 np0005558241 nova_compute[248510]: 2025-12-13 08:19:23.346 248514 DEBUG oslo_concurrency.lockutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Releasing lock "refresh_cache-2f398188-2df1-4c96-a224-c66efd7383a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:19:23 np0005558241 nova_compute[248510]: 2025-12-13 08:19:23.346 248514 DEBUG nova.compute.manager [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Instance network_info: |[{"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:19:23 np0005558241 nova_compute[248510]: 2025-12-13 08:19:23.347 248514 DEBUG oslo_concurrency.lockutils [req-3fbcc140-5fcd-47bd-aeae-ceb035fc32cc req-f64d4219-6c89-430c-a551-872a39d987aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2f398188-2df1-4c96-a224-c66efd7383a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:19:23 np0005558241 nova_compute[248510]: 2025-12-13 08:19:23.347 248514 DEBUG nova.network.neutron [req-3fbcc140-5fcd-47bd-aeae-ceb035fc32cc req-f64d4219-6c89-430c-a551-872a39d987aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Refreshing network info cache for port 31fc970a-7d64-4540-880f-6d5aee7d129a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:19:24 np0005558241 nova_compute[248510]: 2025-12-13 08:19:24.079 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:24 np0005558241 nova_compute[248510]: 2025-12-13 08:19:24.661 248514 DEBUG oslo_concurrency.processutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2f398188-2df1-4c96-a224-c66efd7383a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:24 np0005558241 nova_compute[248510]: 2025-12-13 08:19:24.727 248514 DEBUG nova.storage.rbd_utils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] resizing rbd image 2f398188-2df1-4c96-a224-c66efd7383a0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:19:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1600: 321 pgs: 321 active+clean; 464 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 662 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Dec 13 03:19:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:19:26 np0005558241 nova_compute[248510]: 2025-12-13 08:19:26.035 248514 DEBUG nova.network.neutron [req-3fbcc140-5fcd-47bd-aeae-ceb035fc32cc req-f64d4219-6c89-430c-a551-872a39d987aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Updated VIF entry in instance network info cache for port 31fc970a-7d64-4540-880f-6d5aee7d129a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:19:26 np0005558241 nova_compute[248510]: 2025-12-13 08:19:26.036 248514 DEBUG nova.network.neutron [req-3fbcc140-5fcd-47bd-aeae-ceb035fc32cc req-f64d4219-6c89-430c-a551-872a39d987aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Updating instance_info_cache with network_info: [{"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:19:26 np0005558241 nova_compute[248510]: 2025-12-13 08:19:26.062 248514 DEBUG oslo_concurrency.lockutils [req-3fbcc140-5fcd-47bd-aeae-ceb035fc32cc req-f64d4219-6c89-430c-a551-872a39d987aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2f398188-2df1-4c96-a224-c66efd7383a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:19:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1601: 321 pgs: 321 active+clean; 464 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 566 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Dec 13 03:19:28 np0005558241 nova_compute[248510]: 2025-12-13 08:19:28.020 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1602: 321 pgs: 321 active+clean; 475 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 2.0 MiB/s wr, 110 op/s
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.140 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.248 248514 DEBUG nova.objects.instance [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lazy-loading 'migration_context' on Instance uuid 2f398188-2df1-4c96-a224-c66efd7383a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.269 248514 DEBUG nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.270 248514 DEBUG nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Ensure instance console log exists: /var/lib/nova/instances/2f398188-2df1-4c96-a224-c66efd7383a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.270 248514 DEBUG oslo_concurrency.lockutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.271 248514 DEBUG oslo_concurrency.lockutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.271 248514 DEBUG oslo_concurrency.lockutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.273 248514 DEBUG nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Start _get_guest_xml network_info=[{"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.278 248514 WARNING nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.289 248514 DEBUG nova.virt.libvirt.host [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.290 248514 DEBUG nova.virt.libvirt.host [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.294 248514 DEBUG nova.virt.libvirt.host [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.295 248514 DEBUG nova.virt.libvirt.host [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.295 248514 DEBUG nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.295 248514 DEBUG nova.virt.hardware [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.296 248514 DEBUG nova.virt.hardware [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.296 248514 DEBUG nova.virt.hardware [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.296 248514 DEBUG nova.virt.hardware [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.296 248514 DEBUG nova.virt.hardware [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.296 248514 DEBUG nova.virt.hardware [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.296 248514 DEBUG nova.virt.hardware [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.297 248514 DEBUG nova.virt.hardware [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.297 248514 DEBUG nova.virt.hardware [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.297 248514 DEBUG nova.virt.hardware [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.297 248514 DEBUG nova.virt.hardware [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.300 248514 DEBUG oslo_concurrency.processutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:29 np0005558241 nova_compute[248510]: 2025-12-13 08:19:29.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:19:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1603: 321 pgs: 321 active+clean; 475 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.9 MiB/s wr, 51 op/s
Dec 13 03:19:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:19:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:19:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3464608606' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:19:31 np0005558241 nova_compute[248510]: 2025-12-13 08:19:31.391 248514 DEBUG oslo_concurrency.processutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:31 np0005558241 nova_compute[248510]: 2025-12-13 08:19:31.427 248514 DEBUG nova.storage.rbd_utils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] rbd image 2f398188-2df1-4c96-a224-c66efd7383a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:31 np0005558241 nova_compute[248510]: 2025-12-13 08:19:31.432 248514 DEBUG oslo_concurrency.processutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:19:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1506517085' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.041 248514 DEBUG oslo_concurrency.processutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.043 248514 DEBUG nova.virt.libvirt.vif [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:19:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-60806246',display_name='tempest-SecurityGroupsTestJSON-server-60806246',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-60806246',id=22,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6abc0a5e3b44a75bd19cd7df807b790',ramdisk_id='',reservation_id='r-q4l6b00g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1306632779',owner_user_name='tempest-SecurityGroupsTestJSO
N-1306632779-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:19:17Z,user_data=None,user_id='1e46b9271df84074b6a832899ca5ee57',uuid=2f398188-2df1-4c96-a224-c66efd7383a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.043 248514 DEBUG nova.network.os_vif_util [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Converting VIF {"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.044 248514 DEBUG nova.network.os_vif_util [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:bb:71,bridge_name='br-int',has_traffic_filtering=True,id=31fc970a-7d64-4540-880f-6d5aee7d129a,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fc970a-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.045 248514 DEBUG nova.objects.instance [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f398188-2df1-4c96-a224-c66efd7383a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.071 248514 DEBUG nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:19:32 np0005558241 nova_compute[248510]:  <uuid>2f398188-2df1-4c96-a224-c66efd7383a0</uuid>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:  <name>instance-00000016</name>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <nova:name>tempest-SecurityGroupsTestJSON-server-60806246</nova:name>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:19:29</nova:creationTime>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:19:32 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:        <nova:user uuid="1e46b9271df84074b6a832899ca5ee57">tempest-SecurityGroupsTestJSON-1306632779-project-member</nova:user>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:        <nova:project uuid="d6abc0a5e3b44a75bd19cd7df807b790">tempest-SecurityGroupsTestJSON-1306632779</nova:project>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:        <nova:port uuid="31fc970a-7d64-4540-880f-6d5aee7d129a">
Dec 13 03:19:32 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <entry name="serial">2f398188-2df1-4c96-a224-c66efd7383a0</entry>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <entry name="uuid">2f398188-2df1-4c96-a224-c66efd7383a0</entry>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/2f398188-2df1-4c96-a224-c66efd7383a0_disk">
Dec 13 03:19:32 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:19:32 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/2f398188-2df1-4c96-a224-c66efd7383a0_disk.config">
Dec 13 03:19:32 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:19:32 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:67:bb:71"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <target dev="tap31fc970a-7d"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/2f398188-2df1-4c96-a224-c66efd7383a0/console.log" append="off"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:19:32 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:19:32 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:19:32 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:19:32 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.071 248514 DEBUG nova.compute.manager [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Preparing to wait for external event network-vif-plugged-31fc970a-7d64-4540-880f-6d5aee7d129a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.072 248514 DEBUG oslo_concurrency.lockutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.072 248514 DEBUG oslo_concurrency.lockutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.072 248514 DEBUG oslo_concurrency.lockutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.073 248514 DEBUG nova.virt.libvirt.vif [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:19:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-60806246',display_name='tempest-SecurityGroupsTestJSON-server-60806246',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-60806246',id=22,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6abc0a5e3b44a75bd19cd7df807b790',ramdisk_id='',reservation_id='r-q4l6b00g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1306632779',owner_user_name='tempest-SecurityGro
upsTestJSON-1306632779-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:19:17Z,user_data=None,user_id='1e46b9271df84074b6a832899ca5ee57',uuid=2f398188-2df1-4c96-a224-c66efd7383a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.073 248514 DEBUG nova.network.os_vif_util [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Converting VIF {"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.074 248514 DEBUG nova.network.os_vif_util [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:bb:71,bridge_name='br-int',has_traffic_filtering=True,id=31fc970a-7d64-4540-880f-6d5aee7d129a,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fc970a-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.074 248514 DEBUG os_vif [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:bb:71,bridge_name='br-int',has_traffic_filtering=True,id=31fc970a-7d64-4540-880f-6d5aee7d129a,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fc970a-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.075 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.075 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.076 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.078 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.078 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31fc970a-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.079 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31fc970a-7d, col_values=(('external_ids', {'iface-id': '31fc970a-7d64-4540-880f-6d5aee7d129a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:bb:71', 'vm-uuid': '2f398188-2df1-4c96-a224-c66efd7383a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:32 np0005558241 NetworkManager[50376]: <info>  [1765613972.0815] manager: (tap31fc970a-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.082 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.086 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.087 248514 INFO os_vif [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:bb:71,bridge_name='br-int',has_traffic_filtering=True,id=31fc970a-7d64-4540-880f-6d5aee7d129a,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fc970a-7d')#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:19:32 np0005558241 nova_compute[248510]: 2025-12-13 08:19:32.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:19:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1604: 321 pgs: 321 active+clean; 452 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 1.9 MiB/s wr, 57 op/s
Dec 13 03:19:33 np0005558241 nova_compute[248510]: 2025-12-13 08:19:33.003 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765613958.0022025, a14d8b88-7aec-468f-a550-881364e4d95e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:19:33 np0005558241 nova_compute[248510]: 2025-12-13 08:19:33.003 248514 INFO nova.compute.manager [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:19:33 np0005558241 nova_compute[248510]: 2025-12-13 08:19:33.247 248514 DEBUG nova.compute.manager [None req-5c960f5f-be21-4126-8ef7-0cf9aa635c70 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:19:33 np0005558241 nova_compute[248510]: 2025-12-13 08:19:33.251 248514 DEBUG nova.compute.manager [None req-5c960f5f-be21-4126-8ef7-0cf9aa635c70 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: error, current task_state: rebuilding, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:19:33 np0005558241 nova_compute[248510]: 2025-12-13 08:19:33.303 248514 INFO nova.compute.manager [None req-5c960f5f-be21-4126-8ef7-0cf9aa635c70 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] During sync_power_state the instance has a pending task (rebuilding). Skip.#033[00m
Dec 13 03:19:33 np0005558241 nova_compute[248510]: 2025-12-13 08:19:33.308 248514 DEBUG nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:19:33 np0005558241 nova_compute[248510]: 2025-12-13 08:19:33.308 248514 DEBUG nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:19:33 np0005558241 nova_compute[248510]: 2025-12-13 08:19:33.308 248514 DEBUG nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] No VIF found with MAC fa:16:3e:67:bb:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:19:33 np0005558241 nova_compute[248510]: 2025-12-13 08:19:33.309 248514 INFO nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Using config drive#033[00m
Dec 13 03:19:33 np0005558241 nova_compute[248510]: 2025-12-13 08:19:33.331 248514 DEBUG nova.storage.rbd_utils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] rbd image 2f398188-2df1-4c96-a224-c66efd7383a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:33 np0005558241 nova_compute[248510]: 2025-12-13 08:19:33.363 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-3fffafca-321d-4611-8940-da963b356ca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:19:33 np0005558241 nova_compute[248510]: 2025-12-13 08:19:33.364 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-3fffafca-321d-4611-8940-da963b356ca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:19:33 np0005558241 nova_compute[248510]: 2025-12-13 08:19:33.364 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:19:34 np0005558241 nova_compute[248510]: 2025-12-13 08:19:34.144 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:34 np0005558241 nova_compute[248510]: 2025-12-13 08:19:34.426 248514 INFO nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Creating config drive at /var/lib/nova/instances/2f398188-2df1-4c96-a224-c66efd7383a0/disk.config#033[00m
Dec 13 03:19:34 np0005558241 nova_compute[248510]: 2025-12-13 08:19:34.431 248514 DEBUG oslo_concurrency.processutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2f398188-2df1-4c96-a224-c66efd7383a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6w65xix8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:34 np0005558241 nova_compute[248510]: 2025-12-13 08:19:34.568 248514 DEBUG oslo_concurrency.processutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2f398188-2df1-4c96-a224-c66efd7383a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6w65xix8" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:34 np0005558241 nova_compute[248510]: 2025-12-13 08:19:34.673 248514 DEBUG nova.storage.rbd_utils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] rbd image 2f398188-2df1-4c96-a224-c66efd7383a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:34 np0005558241 nova_compute[248510]: 2025-12-13 08:19:34.678 248514 DEBUG oslo_concurrency.processutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2f398188-2df1-4c96-a224-c66efd7383a0/disk.config 2f398188-2df1-4c96-a224-c66efd7383a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1605: 321 pgs: 321 active+clean; 407 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec 13 03:19:36 np0005558241 nova_compute[248510]: 2025-12-13 08:19:36.724 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Updating instance_info_cache with network_info: [{"id": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "address": "fa:16:3e:25:e7:e6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc85d366f-1e", "ovs_interfaceid": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:19:36 np0005558241 nova_compute[248510]: 2025-12-13 08:19:36.758 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-3fffafca-321d-4611-8940-da963b356ca1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:19:36 np0005558241 nova_compute[248510]: 2025-12-13 08:19:36.759 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:19:36 np0005558241 nova_compute[248510]: 2025-12-13 08:19:36.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:19:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:19:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1606: 321 pgs: 321 active+clean; 407 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 261 KiB/s wr, 39 op/s
Dec 13 03:19:37 np0005558241 podman[275430]: 2025-12-13 08:19:37.00658173 +0000 UTC m=+0.068870327 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:19:37 np0005558241 podman[275429]: 2025-12-13 08:19:37.009338048 +0000 UTC m=+0.073158703 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:19:37 np0005558241 podman[275428]: 2025-12-13 08:19:37.027571388 +0000 UTC m=+0.101235565 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 03:19:37 np0005558241 nova_compute[248510]: 2025-12-13 08:19:37.080 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:38 np0005558241 nova_compute[248510]: 2025-12-13 08:19:38.784 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:19:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1607: 321 pgs: 321 active+clean; 405 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 285 KiB/s wr, 62 op/s
Dec 13 03:19:38 np0005558241 nova_compute[248510]: 2025-12-13 08:19:38.943 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:19:38 np0005558241 nova_compute[248510]: 2025-12-13 08:19:38.944 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:19:38 np0005558241 nova_compute[248510]: 2025-12-13 08:19:38.971 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:38 np0005558241 nova_compute[248510]: 2025-12-13 08:19:38.972 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:38 np0005558241 nova_compute[248510]: 2025-12-13 08:19:38.972 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:38 np0005558241 nova_compute[248510]: 2025-12-13 08:19:38.973 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:19:38 np0005558241 nova_compute[248510]: 2025-12-13 08:19:38.973 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:39 np0005558241 nova_compute[248510]: 2025-12-13 08:19:39.192 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:19:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3083784402' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:19:39 np0005558241 nova_compute[248510]: 2025-12-13 08:19:39.942 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.969s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:19:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:19:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:19:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:19:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:19:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.169 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.170 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.176 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.177 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.182 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.182 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.185 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.186 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.189 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.189 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.192 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.193 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.427 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.428 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3596MB free_disk=59.78535605408251GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.428 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.429 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.558 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance a14d8b88-7aec-468f-a550-881364e4d95e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.558 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 3fffafca-321d-4611-8940-da963b356ca1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.558 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 6b1e2b65-1398-4af8-9e8a-a8b99630eef8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.558 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 0d64e209-19e7-4ad3-a790-43d04d832838 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.559 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance e686bddc-956e-4714-868c-9a29dda243d2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.559 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 2f398188-2df1-4c96-a224-c66efd7383a0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.559 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.559 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:19:40 np0005558241 nova_compute[248510]: 2025-12-13 08:19:40.800 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1608: 321 pgs: 321 active+clean; 405 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 51 KiB/s wr, 37 op/s
Dec 13 03:19:41 np0005558241 nova_compute[248510]: 2025-12-13 08:19:41.033 248514 DEBUG oslo_concurrency.processutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2f398188-2df1-4c96-a224-c66efd7383a0/disk.config 2f398188-2df1-4c96-a224-c66efd7383a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:41 np0005558241 nova_compute[248510]: 2025-12-13 08:19:41.035 248514 INFO nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Deleting local config drive /var/lib/nova/instances/2f398188-2df1-4c96-a224-c66efd7383a0/disk.config because it was imported into RBD.#033[00m
Dec 13 03:19:41 np0005558241 kernel: tap31fc970a-7d: entered promiscuous mode
Dec 13 03:19:41 np0005558241 NetworkManager[50376]: <info>  [1765613981.0987] manager: (tap31fc970a-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Dec 13 03:19:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:41Z|00092|binding|INFO|Claiming lport 31fc970a-7d64-4540-880f-6d5aee7d129a for this chassis.
Dec 13 03:19:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:41Z|00093|binding|INFO|31fc970a-7d64-4540-880f-6d5aee7d129a: Claiming fa:16:3e:67:bb:71 10.100.0.4
Dec 13 03:19:41 np0005558241 nova_compute[248510]: 2025-12-13 08:19:41.100 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:41.108 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:bb:71 10.100.0.4'], port_security=['fa:16:3e:67:bb:71 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2f398188-2df1-4c96-a224-c66efd7383a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-675d1cc9-9f63-41fd-81b7-761155b686e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6abc0a5e3b44a75bd19cd7df807b790', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2d00a2f3-cbea-40c0-a88e-6b16688a4860', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ccd165e-0e0e-4b3a-8747-912025213604, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=31fc970a-7d64-4540-880f-6d5aee7d129a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:19:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:41.110 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 31fc970a-7d64-4540-880f-6d5aee7d129a in datapath 675d1cc9-9f63-41fd-81b7-761155b686e5 bound to our chassis#033[00m
Dec 13 03:19:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:41.112 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 675d1cc9-9f63-41fd-81b7-761155b686e5#033[00m
Dec 13 03:19:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:41Z|00094|binding|INFO|Setting lport 31fc970a-7d64-4540-880f-6d5aee7d129a ovn-installed in OVS
Dec 13 03:19:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:41Z|00095|binding|INFO|Setting lport 31fc970a-7d64-4540-880f-6d5aee7d129a up in Southbound
Dec 13 03:19:41 np0005558241 nova_compute[248510]: 2025-12-13 08:19:41.122 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:41 np0005558241 nova_compute[248510]: 2025-12-13 08:19:41.126 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:41.140 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa0ebb0-1314-46b3-b765-6654a7bd37ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:41 np0005558241 systemd-udevd[275547]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:19:41 np0005558241 systemd-machined[210538]: New machine qemu-24-instance-00000016.
Dec 13 03:19:41 np0005558241 NetworkManager[50376]: <info>  [1765613981.1584] device (tap31fc970a-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:19:41 np0005558241 NetworkManager[50376]: <info>  [1765613981.1591] device (tap31fc970a-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:19:41 np0005558241 systemd[1]: Started Virtual Machine qemu-24-instance-00000016.
Dec 13 03:19:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:41.179 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d84ddd61-aee2-4f2e-bf82-3dfb789f6c89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:41.182 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[edae5312-cc8d-43a0-8859-eee28c2ddb26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:41.214 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e03d84-c4ef-4b5a-b46e-ed6472fce09d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:41.237 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6a3e55-a7a3-4b15-be3d-1a874095e4bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap675d1cc9-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:51:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649993, 'reachable_time': 18646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275560, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:41.259 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2160e6-c433-4e9c-9206-83f26fff1b9b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap675d1cc9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650007, 'tstamp': 650007}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275562, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap675d1cc9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650010, 'tstamp': 650010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275562, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:41.261 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap675d1cc9-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:41 np0005558241 nova_compute[248510]: 2025-12-13 08:19:41.263 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:41.264 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap675d1cc9-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:41.264 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:19:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:41.265 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap675d1cc9-90, col_values=(('external_ids', {'iface-id': '0e8c3540-dab0-4c73-b9e7-cb8c0c7c41e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:41.265 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:19:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:19:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2565109227' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:19:41 np0005558241 nova_compute[248510]: 2025-12-13 08:19:41.420 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:41 np0005558241 nova_compute[248510]: 2025-12-13 08:19:41.427 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:19:41 np0005558241 nova_compute[248510]: 2025-12-13 08:19:41.447 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:19:41 np0005558241 nova_compute[248510]: 2025-12-13 08:19:41.487 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:19:41 np0005558241 nova_compute[248510]: 2025-12-13 08:19:41.487 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:41 np0005558241 nova_compute[248510]: 2025-12-13 08:19:41.784 248514 DEBUG oslo_concurrency.lockutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "ea00b062-2638-45e3-91fb-24240ef8912f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:41 np0005558241 nova_compute[248510]: 2025-12-13 08:19:41.784 248514 DEBUG oslo_concurrency.lockutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:41 np0005558241 nova_compute[248510]: 2025-12-13 08:19:41.811 248514 DEBUG nova.compute.manager [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:19:42 np0005558241 nova_compute[248510]: 2025-12-13 08:19:42.012 248514 DEBUG oslo_concurrency.lockutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:42 np0005558241 nova_compute[248510]: 2025-12-13 08:19:42.013 248514 DEBUG oslo_concurrency.lockutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:42 np0005558241 nova_compute[248510]: 2025-12-13 08:19:42.021 248514 DEBUG nova.virt.hardware [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:19:42 np0005558241 nova_compute[248510]: 2025-12-13 08:19:42.021 248514 INFO nova.compute.claims [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:19:42 np0005558241 nova_compute[248510]: 2025-12-13 08:19:42.086 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:19:42 np0005558241 nova_compute[248510]: 2025-12-13 08:19:42.315 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:19:42 np0005558241 nova_compute[248510]: 2025-12-13 08:19:42.315 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:19:42 np0005558241 nova_compute[248510]: 2025-12-13 08:19:42.316 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:19:42 np0005558241 nova_compute[248510]: 2025-12-13 08:19:42.316 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:19:42 np0005558241 nova_compute[248510]: 2025-12-13 08:19:42.738 248514 DEBUG oslo_concurrency.processutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1609: 321 pgs: 321 active+clean; 405 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 140 KiB/s rd, 71 KiB/s wr, 43 op/s
Dec 13 03:19:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:19:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3956532623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.337 248514 DEBUG oslo_concurrency.processutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.345 248514 DEBUG nova.compute.provider_tree [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.411 248514 DEBUG nova.scheduler.client.report [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.444 248514 DEBUG oslo_concurrency.lockutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.445 248514 DEBUG nova.compute.manager [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.566 248514 DEBUG nova.compute.manager [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.567 248514 DEBUG nova.network.neutron [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.594 248514 INFO nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.614 248514 DEBUG nova.compute.manager [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.724 248514 DEBUG nova.compute.manager [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.725 248514 DEBUG nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.726 248514 INFO nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Creating image(s)#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.807 248514 DEBUG nova.storage.rbd_utils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image ea00b062-2638-45e3-91fb-24240ef8912f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.831 248514 DEBUG nova.storage.rbd_utils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image ea00b062-2638-45e3-91fb-24240ef8912f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.856 248514 DEBUG nova.storage.rbd_utils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image ea00b062-2638-45e3-91fb-24240ef8912f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.862 248514 DEBUG oslo_concurrency.processutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.894 248514 DEBUG nova.policy [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee4333dff699428ca3ae5201855ab430', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea37c93820f34312bc386ea952ecd94f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.897 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613983.7684476, 2f398188-2df1-4c96-a224-c66efd7383a0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.897 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] VM Started (Lifecycle Event)#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.927 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.933 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613983.768575, 2f398188-2df1-4c96-a224-c66efd7383a0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.933 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.938 248514 DEBUG oslo_concurrency.processutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.939 248514 DEBUG oslo_concurrency.lockutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.940 248514 DEBUG oslo_concurrency.lockutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:43 np0005558241 nova_compute[248510]: 2025-12-13 08:19:43.940 248514 DEBUG oslo_concurrency.lockutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.012 248514 DEBUG nova.storage.rbd_utils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image ea00b062-2638-45e3-91fb-24240ef8912f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.017 248514 DEBUG oslo_concurrency.processutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 ea00b062-2638-45e3-91fb-24240ef8912f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.056 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.060 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.089 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.246 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.817 248514 DEBUG nova.compute.manager [req-1794eec6-f3d0-4ad8-8170-5bd6f94f7359 req-1aecbec0-19b1-4c08-9247-105762524bc3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Received event network-vif-plugged-31fc970a-7d64-4540-880f-6d5aee7d129a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.817 248514 DEBUG oslo_concurrency.lockutils [req-1794eec6-f3d0-4ad8-8170-5bd6f94f7359 req-1aecbec0-19b1-4c08-9247-105762524bc3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.818 248514 DEBUG oslo_concurrency.lockutils [req-1794eec6-f3d0-4ad8-8170-5bd6f94f7359 req-1aecbec0-19b1-4c08-9247-105762524bc3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.818 248514 DEBUG oslo_concurrency.lockutils [req-1794eec6-f3d0-4ad8-8170-5bd6f94f7359 req-1aecbec0-19b1-4c08-9247-105762524bc3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.819 248514 DEBUG nova.compute.manager [req-1794eec6-f3d0-4ad8-8170-5bd6f94f7359 req-1aecbec0-19b1-4c08-9247-105762524bc3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Processing event network-vif-plugged-31fc970a-7d64-4540-880f-6d5aee7d129a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.820 248514 DEBUG nova.compute.manager [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.824 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613984.8236926, 2f398188-2df1-4c96-a224-c66efd7383a0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.824 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.826 248514 DEBUG nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.830 248514 INFO nova.virt.libvirt.driver [-] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Instance spawned successfully.#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.831 248514 DEBUG nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.851 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.858 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.861 248514 DEBUG nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.861 248514 DEBUG nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.861 248514 DEBUG nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.862 248514 DEBUG nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.862 248514 DEBUG nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.862 248514 DEBUG nova.virt.libvirt.driver [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.901 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:19:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1610: 321 pgs: 321 active+clean; 405 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 79 KiB/s wr, 48 op/s
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.950 248514 INFO nova.compute.manager [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Took 27.15 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:19:44 np0005558241 nova_compute[248510]: 2025-12-13 08:19:44.950 248514 DEBUG nova.compute.manager [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:19:45 np0005558241 nova_compute[248510]: 2025-12-13 08:19:45.047 248514 INFO nova.compute.manager [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Took 28.97 seconds to build instance.#033[00m
Dec 13 03:19:45 np0005558241 nova_compute[248510]: 2025-12-13 08:19:45.069 248514 DEBUG oslo_concurrency.lockutils [None req-b21470ca-b230-488e-860a-dda66b4885de 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 29.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:45 np0005558241 nova_compute[248510]: 2025-12-13 08:19:45.243 248514 DEBUG nova.network.neutron [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Successfully created port: 41d2792e-cb2f-464c-a974-4be557dde28b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:19:46 np0005558241 nova_compute[248510]: 2025-12-13 08:19:46.711 248514 DEBUG nova.network.neutron [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Successfully updated port: 41d2792e-cb2f-464c-a974-4be557dde28b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:19:46 np0005558241 nova_compute[248510]: 2025-12-13 08:19:46.747 248514 DEBUG oslo_concurrency.lockutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-ea00b062-2638-45e3-91fb-24240ef8912f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:19:46 np0005558241 nova_compute[248510]: 2025-12-13 08:19:46.748 248514 DEBUG oslo_concurrency.lockutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-ea00b062-2638-45e3-91fb-24240ef8912f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:19:46 np0005558241 nova_compute[248510]: 2025-12-13 08:19:46.748 248514 DEBUG nova.network.neutron [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:19:46 np0005558241 nova_compute[248510]: 2025-12-13 08:19:46.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:19:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1611: 321 pgs: 321 active+clean; 405 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 53 KiB/s wr, 39 op/s
Dec 13 03:19:47 np0005558241 nova_compute[248510]: 2025-12-13 08:19:47.054 248514 DEBUG nova.network.neutron [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:19:47 np0005558241 nova_compute[248510]: 2025-12-13 08:19:47.088 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:47 np0005558241 nova_compute[248510]: 2025-12-13 08:19:47.241 248514 INFO nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Deleting instance files /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e_del#033[00m
Dec 13 03:19:47 np0005558241 nova_compute[248510]: 2025-12-13 08:19:47.242 248514 INFO nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Deletion of /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e_del complete#033[00m
Dec 13 03:19:47 np0005558241 nova_compute[248510]: 2025-12-13 08:19:47.466 248514 DEBUG nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:19:47 np0005558241 nova_compute[248510]: 2025-12-13 08:19:47.467 248514 INFO nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Creating image(s)#033[00m
Dec 13 03:19:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:19:47 np0005558241 nova_compute[248510]: 2025-12-13 08:19:47.906 248514 DEBUG nova.storage.rbd_utils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:47 np0005558241 nova_compute[248510]: 2025-12-13 08:19:47.931 248514 DEBUG nova.storage.rbd_utils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:47 np0005558241 nova_compute[248510]: 2025-12-13 08:19:47.957 248514 DEBUG nova.storage.rbd_utils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:47 np0005558241 nova_compute[248510]: 2025-12-13 08:19:47.962 248514 DEBUG oslo_concurrency.processutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.008 248514 DEBUG nova.compute.manager [req-06b6419e-1889-4b17-890e-98c6b477b34c req-bdde1b0d-9a96-4931-b0d8-fd7c56d4fc33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received event network-changed-41d2792e-cb2f-464c-a974-4be557dde28b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.009 248514 DEBUG nova.compute.manager [req-06b6419e-1889-4b17-890e-98c6b477b34c req-bdde1b0d-9a96-4931-b0d8-fd7c56d4fc33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Refreshing instance network info cache due to event network-changed-41d2792e-cb2f-464c-a974-4be557dde28b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.010 248514 DEBUG oslo_concurrency.lockutils [req-06b6419e-1889-4b17-890e-98c6b477b34c req-bdde1b0d-9a96-4931-b0d8-fd7c56d4fc33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ea00b062-2638-45e3-91fb-24240ef8912f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.011 248514 DEBUG nova.compute.manager [req-86d20d10-b6cc-45f5-a27d-6682acc491d6 req-b1ffce98-1516-4d1d-b029-6c93c128db95 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Received event network-vif-plugged-31fc970a-7d64-4540-880f-6d5aee7d129a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.011 248514 DEBUG oslo_concurrency.lockutils [req-86d20d10-b6cc-45f5-a27d-6682acc491d6 req-b1ffce98-1516-4d1d-b029-6c93c128db95 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.012 248514 DEBUG oslo_concurrency.lockutils [req-86d20d10-b6cc-45f5-a27d-6682acc491d6 req-b1ffce98-1516-4d1d-b029-6c93c128db95 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.012 248514 DEBUG oslo_concurrency.lockutils [req-86d20d10-b6cc-45f5-a27d-6682acc491d6 req-b1ffce98-1516-4d1d-b029-6c93c128db95 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.012 248514 DEBUG nova.compute.manager [req-86d20d10-b6cc-45f5-a27d-6682acc491d6 req-b1ffce98-1516-4d1d-b029-6c93c128db95 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] No waiting events found dispatching network-vif-plugged-31fc970a-7d64-4540-880f-6d5aee7d129a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.013 248514 WARNING nova.compute.manager [req-86d20d10-b6cc-45f5-a27d-6682acc491d6 req-b1ffce98-1516-4d1d-b029-6c93c128db95 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Received unexpected event network-vif-plugged-31fc970a-7d64-4540-880f-6d5aee7d129a for instance with vm_state active and task_state None.#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.050 248514 DEBUG oslo_concurrency.processutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.050 248514 DEBUG oslo_concurrency.lockutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.051 248514 DEBUG oslo_concurrency.lockutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.051 248514 DEBUG oslo_concurrency.lockutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.088 248514 DEBUG nova.storage.rbd_utils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.093 248514 DEBUG oslo_concurrency.processutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc a14d8b88-7aec-468f-a550-881364e4d95e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.123 248514 DEBUG oslo_concurrency.processutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 ea00b062-2638-45e3-91fb-24240ef8912f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.192 248514 DEBUG nova.storage.rbd_utils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] resizing rbd image ea00b062-2638-45e3-91fb-24240ef8912f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 13 03:19:48 np0005558241 nova_compute[248510]: 2025-12-13 08:19:48.840 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 13 03:19:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1612: 321 pgs: 321 active+clean; 451 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Dec 13 03:19:49 np0005558241 nova_compute[248510]: 2025-12-13 08:19:49.249 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:49 np0005558241 nova_compute[248510]: 2025-12-13 08:19:49.695 248514 DEBUG nova.network.neutron [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Updating instance_info_cache with network_info: [{"id": "41d2792e-cb2f-464c-a974-4be557dde28b", "address": "fa:16:3e:d2:65:9d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d2792e-cb", "ovs_interfaceid": "41d2792e-cb2f-464c-a974-4be557dde28b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:19:49 np0005558241 nova_compute[248510]: 2025-12-13 08:19:49.730 248514 DEBUG oslo_concurrency.lockutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-ea00b062-2638-45e3-91fb-24240ef8912f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:19:49 np0005558241 nova_compute[248510]: 2025-12-13 08:19:49.731 248514 DEBUG nova.compute.manager [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Instance network_info: |[{"id": "41d2792e-cb2f-464c-a974-4be557dde28b", "address": "fa:16:3e:d2:65:9d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d2792e-cb", "ovs_interfaceid": "41d2792e-cb2f-464c-a974-4be557dde28b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:19:49 np0005558241 nova_compute[248510]: 2025-12-13 08:19:49.732 248514 DEBUG oslo_concurrency.lockutils [req-06b6419e-1889-4b17-890e-98c6b477b34c req-bdde1b0d-9a96-4931-b0d8-fd7c56d4fc33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ea00b062-2638-45e3-91fb-24240ef8912f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:19:49 np0005558241 nova_compute[248510]: 2025-12-13 08:19:49.732 248514 DEBUG nova.network.neutron [req-06b6419e-1889-4b17-890e-98c6b477b34c req-bdde1b0d-9a96-4931-b0d8-fd7c56d4fc33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Refreshing network info cache for port 41d2792e-cb2f-464c-a974-4be557dde28b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:19:49 np0005558241 nova_compute[248510]: 2025-12-13 08:19:49.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:19:49 np0005558241 nova_compute[248510]: 2025-12-13 08:19:49.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 13 03:19:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1613: 321 pgs: 321 active+clean; 451 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 13 03:19:50 np0005558241 nova_compute[248510]: 2025-12-13 08:19:50.959 248514 DEBUG nova.objects.instance [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'migration_context' on Instance uuid ea00b062-2638-45e3-91fb-24240ef8912f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:19:50 np0005558241 nova_compute[248510]: 2025-12-13 08:19:50.975 248514 DEBUG nova.compute.manager [req-6b224b78-6d69-46cc-a07f-5c74085c7a1b req-9fee0ffe-27f7-4967-bd8a-aff0936c8709 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Received event network-changed-31fc970a-7d64-4540-880f-6d5aee7d129a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:19:50 np0005558241 nova_compute[248510]: 2025-12-13 08:19:50.975 248514 DEBUG nova.compute.manager [req-6b224b78-6d69-46cc-a07f-5c74085c7a1b req-9fee0ffe-27f7-4967-bd8a-aff0936c8709 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Refreshing instance network info cache due to event network-changed-31fc970a-7d64-4540-880f-6d5aee7d129a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:19:50 np0005558241 nova_compute[248510]: 2025-12-13 08:19:50.976 248514 DEBUG oslo_concurrency.lockutils [req-6b224b78-6d69-46cc-a07f-5c74085c7a1b req-9fee0ffe-27f7-4967-bd8a-aff0936c8709 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2f398188-2df1-4c96-a224-c66efd7383a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:19:50 np0005558241 nova_compute[248510]: 2025-12-13 08:19:50.976 248514 DEBUG oslo_concurrency.lockutils [req-6b224b78-6d69-46cc-a07f-5c74085c7a1b req-9fee0ffe-27f7-4967-bd8a-aff0936c8709 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2f398188-2df1-4c96-a224-c66efd7383a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:19:50 np0005558241 nova_compute[248510]: 2025-12-13 08:19:50.977 248514 DEBUG nova.network.neutron [req-6b224b78-6d69-46cc-a07f-5c74085c7a1b req-9fee0ffe-27f7-4967-bd8a-aff0936c8709 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Refreshing network info cache for port 31fc970a-7d64-4540-880f-6d5aee7d129a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:19:50 np0005558241 nova_compute[248510]: 2025-12-13 08:19:50.993 248514 DEBUG nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:19:50 np0005558241 nova_compute[248510]: 2025-12-13 08:19:50.994 248514 DEBUG nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Ensure instance console log exists: /var/lib/nova/instances/ea00b062-2638-45e3-91fb-24240ef8912f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:19:50 np0005558241 nova_compute[248510]: 2025-12-13 08:19:50.995 248514 DEBUG oslo_concurrency.lockutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:50 np0005558241 nova_compute[248510]: 2025-12-13 08:19:50.995 248514 DEBUG oslo_concurrency.lockutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:50 np0005558241 nova_compute[248510]: 2025-12-13 08:19:50.995 248514 DEBUG oslo_concurrency.lockutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:50 np0005558241 nova_compute[248510]: 2025-12-13 08:19:50.997 248514 DEBUG nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Start _get_guest_xml network_info=[{"id": "41d2792e-cb2f-464c-a974-4be557dde28b", "address": "fa:16:3e:d2:65:9d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d2792e-cb", "ovs_interfaceid": "41d2792e-cb2f-464c-a974-4be557dde28b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.008 248514 WARNING nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.014 248514 DEBUG nova.virt.libvirt.host [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.015 248514 DEBUG nova.virt.libvirt.host [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.022 248514 DEBUG nova.virt.libvirt.host [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.023 248514 DEBUG nova.virt.libvirt.host [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.023 248514 DEBUG nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.023 248514 DEBUG nova.virt.hardware [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.024 248514 DEBUG nova.virt.hardware [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.024 248514 DEBUG nova.virt.hardware [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.025 248514 DEBUG nova.virt.hardware [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.025 248514 DEBUG nova.virt.hardware [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.025 248514 DEBUG nova.virt.hardware [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.025 248514 DEBUG nova.virt.hardware [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.026 248514 DEBUG nova.virt.hardware [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.026 248514 DEBUG nova.virt.hardware [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.026 248514 DEBUG nova.virt.hardware [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.027 248514 DEBUG nova.virt.hardware [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.030 248514 DEBUG oslo_concurrency.processutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:19:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1268825913' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.815 248514 DEBUG oslo_concurrency.processutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.785s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.836 248514 DEBUG nova.storage.rbd_utils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image ea00b062-2638-45e3-91fb-24240ef8912f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.840 248514 DEBUG oslo_concurrency.processutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.904 248514 DEBUG nova.network.neutron [req-06b6419e-1889-4b17-890e-98c6b477b34c req-bdde1b0d-9a96-4931-b0d8-fd7c56d4fc33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Updated VIF entry in instance network info cache for port 41d2792e-cb2f-464c-a974-4be557dde28b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.905 248514 DEBUG nova.network.neutron [req-06b6419e-1889-4b17-890e-98c6b477b34c req-bdde1b0d-9a96-4931-b0d8-fd7c56d4fc33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Updating instance_info_cache with network_info: [{"id": "41d2792e-cb2f-464c-a974-4be557dde28b", "address": "fa:16:3e:d2:65:9d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d2792e-cb", "ovs_interfaceid": "41d2792e-cb2f-464c-a974-4be557dde28b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:19:51 np0005558241 nova_compute[248510]: 2025-12-13 08:19:51.925 248514 DEBUG oslo_concurrency.lockutils [req-06b6419e-1889-4b17-890e-98c6b477b34c req-bdde1b0d-9a96-4931-b0d8-fd7c56d4fc33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ea00b062-2638-45e3-91fb-24240ef8912f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:19:52 np0005558241 nova_compute[248510]: 2025-12-13 08:19:52.090 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:52 np0005558241 nova_compute[248510]: 2025-12-13 08:19:52.555 248514 DEBUG oslo_concurrency.processutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc a14d8b88-7aec-468f-a550-881364e4d95e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1614: 321 pgs: 321 active+clean; 467 MiB data, 522 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 123 op/s
Dec 13 03:19:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:19:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2403451117' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:19:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.090 248514 DEBUG oslo_concurrency.processutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.092 248514 DEBUG nova.virt.libvirt.vif [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:19:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1677975054',display_name='tempest-AttachInterfacesTestJSON-server-1677975054',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1677975054',id=23,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlwa4a+xPwIF1ZQHB9HOTH7BIkCV659ajtgIcCDEuqwt51MFjzqyMmTtvHIvG4KvdVHQ8rxK33SgXXhym2PtXjLAyZK6aAc1sIvFRHlJNcPXY0ZisQ+T6fVIhZy24etfQ==',key_name='tempest-keypair-1760745004',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-jvuhbj1q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:19:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=ea00b062-2638-45e3-91fb-24240ef8912f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41d2792e-cb2f-464c-a974-4be557dde28b", "address": "fa:16:3e:d2:65:9d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d2792e-cb", "ovs_interfaceid": "41d2792e-cb2f-464c-a974-4be557dde28b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.092 248514 DEBUG nova.network.os_vif_util [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "41d2792e-cb2f-464c-a974-4be557dde28b", "address": "fa:16:3e:d2:65:9d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d2792e-cb", "ovs_interfaceid": "41d2792e-cb2f-464c-a974-4be557dde28b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.093 248514 DEBUG nova.network.os_vif_util [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:65:9d,bridge_name='br-int',has_traffic_filtering=True,id=41d2792e-cb2f-464c-a974-4be557dde28b,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d2792e-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.094 248514 DEBUG nova.objects.instance [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'pci_devices' on Instance uuid ea00b062-2638-45e3-91fb-24240ef8912f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.101 248514 DEBUG nova.storage.rbd_utils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] resizing rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.201 248514 DEBUG oslo_concurrency.lockutils [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "2f398188-2df1-4c96-a224-c66efd7383a0" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.202 248514 DEBUG oslo_concurrency.lockutils [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.203 248514 INFO nova.compute.manager [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Rebooting instance#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.205 248514 DEBUG nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:19:53 np0005558241 nova_compute[248510]:  <uuid>ea00b062-2638-45e3-91fb-24240ef8912f</uuid>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:  <name>instance-00000017</name>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1677975054</nova:name>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:19:51</nova:creationTime>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:19:53 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:        <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:        <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:        <nova:port uuid="41d2792e-cb2f-464c-a974-4be557dde28b">
Dec 13 03:19:53 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <entry name="serial">ea00b062-2638-45e3-91fb-24240ef8912f</entry>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <entry name="uuid">ea00b062-2638-45e3-91fb-24240ef8912f</entry>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/ea00b062-2638-45e3-91fb-24240ef8912f_disk">
Dec 13 03:19:53 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:19:53 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/ea00b062-2638-45e3-91fb-24240ef8912f_disk.config">
Dec 13 03:19:53 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:19:53 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:d2:65:9d"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <target dev="tap41d2792e-cb"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/ea00b062-2638-45e3-91fb-24240ef8912f/console.log" append="off"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:19:53 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:19:53 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:19:53 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:19:53 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.212 248514 DEBUG nova.compute.manager [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Preparing to wait for external event network-vif-plugged-41d2792e-cb2f-464c-a974-4be557dde28b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.213 248514 DEBUG oslo_concurrency.lockutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.213 248514 DEBUG oslo_concurrency.lockutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.213 248514 DEBUG oslo_concurrency.lockutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.214 248514 DEBUG nova.virt.libvirt.vif [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:19:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1677975054',display_name='tempest-AttachInterfacesTestJSON-server-1677975054',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1677975054',id=23,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlwa4a+xPwIF1ZQHB9HOTH7BIkCV659ajtgIcCDEuqwt51MFjzqyMmTtvHIvG4KvdVHQ8rxK33SgXXhym2PtXjLAyZK6aAc1sIvFRHlJNcPXY0ZisQ+T6fVIhZy24etfQ==',key_name='tempest-keypair-1760745004',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-jvuhbj1q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:19:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=ea00b062-2638-45e3-91fb-24240ef8912f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41d2792e-cb2f-464c-a974-4be557dde28b", "address": "fa:16:3e:d2:65:9d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d2792e-cb", "ovs_interfaceid": "41d2792e-cb2f-464c-a974-4be557dde28b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.215 248514 DEBUG nova.network.os_vif_util [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "41d2792e-cb2f-464c-a974-4be557dde28b", "address": "fa:16:3e:d2:65:9d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d2792e-cb", "ovs_interfaceid": "41d2792e-cb2f-464c-a974-4be557dde28b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.216 248514 DEBUG nova.network.os_vif_util [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:65:9d,bridge_name='br-int',has_traffic_filtering=True,id=41d2792e-cb2f-464c-a974-4be557dde28b,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d2792e-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.216 248514 DEBUG os_vif [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:65:9d,bridge_name='br-int',has_traffic_filtering=True,id=41d2792e-cb2f-464c-a974-4be557dde28b,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d2792e-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.217 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.217 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.218 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.221 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.222 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41d2792e-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.222 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41d2792e-cb, col_values=(('external_ids', {'iface-id': '41d2792e-cb2f-464c-a974-4be557dde28b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:65:9d', 'vm-uuid': 'ea00b062-2638-45e3-91fb-24240ef8912f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.224 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:53 np0005558241 NetworkManager[50376]: <info>  [1765613993.2254] manager: (tap41d2792e-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.226 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.233 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.234 248514 INFO os_vif [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:65:9d,bridge_name='br-int',has_traffic_filtering=True,id=41d2792e-cb2f-464c-a974-4be557dde28b,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d2792e-cb')#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.246 248514 DEBUG oslo_concurrency.lockutils [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "refresh_cache-2f398188-2df1-4c96-a224-c66efd7383a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.559 248514 DEBUG nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.560 248514 DEBUG nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.561 248514 DEBUG nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:d2:65:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.562 248514 INFO nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Using config drive#033[00m
Dec 13 03:19:53 np0005558241 nova_compute[248510]: 2025-12-13 08:19:53.800 248514 DEBUG nova.storage.rbd_utils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image ea00b062-2638-45e3-91fb-24240ef8912f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.249 248514 DEBUG nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.250 248514 DEBUG nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Ensure instance console log exists: /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.250 248514 DEBUG oslo_concurrency.lockutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.251 248514 DEBUG oslo_concurrency.lockutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.251 248514 DEBUG oslo_concurrency.lockutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.253 248514 DEBUG nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Start _get_guest_xml network_info=[{"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.254 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.259 248514 WARNING nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.267 248514 DEBUG nova.virt.libvirt.host [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.268 248514 DEBUG nova.virt.libvirt.host [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.276 248514 DEBUG nova.virt.libvirt.host [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.276 248514 DEBUG nova.virt.libvirt.host [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.276 248514 DEBUG nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.277 248514 DEBUG nova.virt.hardware [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.277 248514 DEBUG nova.virt.hardware [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.277 248514 DEBUG nova.virt.hardware [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.277 248514 DEBUG nova.virt.hardware [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.278 248514 DEBUG nova.virt.hardware [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.278 248514 DEBUG nova.virt.hardware [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.278 248514 DEBUG nova.virt.hardware [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.278 248514 DEBUG nova.virt.hardware [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.278 248514 DEBUG nova.virt.hardware [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.278 248514 DEBUG nova.virt.hardware [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.279 248514 DEBUG nova.virt.hardware [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.279 248514 DEBUG nova.objects.instance [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'vcpu_model' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.597 248514 DEBUG oslo_concurrency.processutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.679 248514 DEBUG nova.network.neutron [req-6b224b78-6d69-46cc-a07f-5c74085c7a1b req-9fee0ffe-27f7-4967-bd8a-aff0936c8709 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Updated VIF entry in instance network info cache for port 31fc970a-7d64-4540-880f-6d5aee7d129a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.680 248514 DEBUG nova.network.neutron [req-6b224b78-6d69-46cc-a07f-5c74085c7a1b req-9fee0ffe-27f7-4967-bd8a-aff0936c8709 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Updating instance_info_cache with network_info: [{"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.709 248514 DEBUG oslo_concurrency.lockutils [req-6b224b78-6d69-46cc-a07f-5c74085c7a1b req-9fee0ffe-27f7-4967-bd8a-aff0936c8709 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2f398188-2df1-4c96-a224-c66efd7383a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.710 248514 DEBUG oslo_concurrency.lockutils [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquired lock "refresh_cache-2f398188-2df1-4c96-a224-c66efd7383a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.710 248514 DEBUG nova.network.neutron [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.891 248514 INFO nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Creating config drive at /var/lib/nova/instances/ea00b062-2638-45e3-91fb-24240ef8912f/disk.config#033[00m
Dec 13 03:19:54 np0005558241 nova_compute[248510]: 2025-12-13 08:19:54.897 248514 DEBUG oslo_concurrency.processutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ea00b062-2638-45e3-91fb-24240ef8912f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3efvky35 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1615: 321 pgs: 321 active+clean; 497 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 130 op/s
Dec 13 03:19:55 np0005558241 nova_compute[248510]: 2025-12-13 08:19:55.034 248514 DEBUG oslo_concurrency.processutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ea00b062-2638-45e3-91fb-24240ef8912f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3efvky35" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:55 np0005558241 nova_compute[248510]: 2025-12-13 08:19:55.080 248514 DEBUG nova.storage.rbd_utils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image ea00b062-2638-45e3-91fb-24240ef8912f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:55 np0005558241 nova_compute[248510]: 2025-12-13 08:19:55.086 248514 DEBUG oslo_concurrency.processutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ea00b062-2638-45e3-91fb-24240ef8912f/disk.config ea00b062-2638-45e3-91fb-24240ef8912f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:19:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/663903877' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:19:55 np0005558241 nova_compute[248510]: 2025-12-13 08:19:55.249 248514 DEBUG oslo_concurrency.processutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:55 np0005558241 nova_compute[248510]: 2025-12-13 08:19:55.274 248514 DEBUG nova.storage.rbd_utils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:55 np0005558241 nova_compute[248510]: 2025-12-13 08:19:55.278 248514 DEBUG oslo_concurrency.processutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:55.401 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:55.402 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:55.402 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Dec 13 03:19:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Dec 13 03:19:56 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Dec 13 03:19:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:19:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3147577287' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:19:56 np0005558241 nova_compute[248510]: 2025-12-13 08:19:56.734 248514 DEBUG oslo_concurrency.processutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:56 np0005558241 nova_compute[248510]: 2025-12-13 08:19:56.736 248514 DEBUG nova.virt.libvirt.vif [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:17:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-247332834',display_name='tempest-ServersAdminTestJSON-server-247332834',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-247332834',id=17,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:17:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-88vex2g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTestJSON-1079129122-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:19:47Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=a14d8b88-7aec-468f-a550-881364e4d95e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:19:56 np0005558241 nova_compute[248510]: 2025-12-13 08:19:56.738 248514 DEBUG nova.network.os_vif_util [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:19:56 np0005558241 nova_compute[248510]: 2025-12-13 08:19:56.739 248514 DEBUG nova.network.os_vif_util [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:19:56 np0005558241 nova_compute[248510]: 2025-12-13 08:19:56.742 248514 DEBUG nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:19:56 np0005558241 nova_compute[248510]:  <uuid>a14d8b88-7aec-468f-a550-881364e4d95e</uuid>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:  <name>instance-00000011</name>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersAdminTestJSON-server-247332834</nova:name>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:19:54</nova:creationTime>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:19:56 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:        <nova:user uuid="0d3578a9aa9a4f4facf4009a71564a31">tempest-ServersAdminTestJSON-1079129122-project-member</nova:user>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:        <nova:project uuid="1f6b42f9712f4afea6a07d08373b56ca">tempest-ServersAdminTestJSON-1079129122</nova:project>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="c10f1898-20b3-4bc9-8a36-2ee01b39c9ce"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:        <nova:port uuid="7f8a109e-e262-4847-8430-ac7944dace5c">
Dec 13 03:19:56 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <entry name="serial">a14d8b88-7aec-468f-a550-881364e4d95e</entry>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <entry name="uuid">a14d8b88-7aec-468f-a550-881364e4d95e</entry>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/a14d8b88-7aec-468f-a550-881364e4d95e_disk">
Dec 13 03:19:56 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:19:56 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/a14d8b88-7aec-468f-a550-881364e4d95e_disk.config">
Dec 13 03:19:56 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:19:56 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:e6:0a:44"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <target dev="tap7f8a109e-e2"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/console.log" append="off"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:19:56 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:19:56 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:19:56 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:19:56 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:19:56 np0005558241 nova_compute[248510]: 2025-12-13 08:19:56.744 248514 DEBUG nova.virt.libvirt.vif [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:17:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-247332834',display_name='tempest-ServersAdminTestJSON-server-247332834',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-247332834',id=17,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:17:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-88vex2g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTestJSON-1079129122-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:19:47Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=a14d8b88-7aec-468f-a550-881364e4d95e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:19:56 np0005558241 nova_compute[248510]: 2025-12-13 08:19:56.745 248514 DEBUG nova.network.os_vif_util [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:19:56 np0005558241 nova_compute[248510]: 2025-12-13 08:19:56.745 248514 DEBUG nova.network.os_vif_util [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:19:56 np0005558241 nova_compute[248510]: 2025-12-13 08:19:56.746 248514 DEBUG os_vif [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:19:56 np0005558241 nova_compute[248510]: 2025-12-13 08:19:56.746 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:56 np0005558241 nova_compute[248510]: 2025-12-13 08:19:56.747 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:56 np0005558241 nova_compute[248510]: 2025-12-13 08:19:56.748 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:19:56 np0005558241 nova_compute[248510]: 2025-12-13 08:19:56.751 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:56 np0005558241 nova_compute[248510]: 2025-12-13 08:19:56.751 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f8a109e-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:56 np0005558241 nova_compute[248510]: 2025-12-13 08:19:56.752 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7f8a109e-e2, col_values=(('external_ids', {'iface-id': '7f8a109e-e262-4847-8430-ac7944dace5c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:0a:44', 'vm-uuid': 'a14d8b88-7aec-468f-a550-881364e4d95e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:56 np0005558241 nova_compute[248510]: 2025-12-13 08:19:56.754 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:56 np0005558241 NetworkManager[50376]: <info>  [1765613996.7558] manager: (tap7f8a109e-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Dec 13 03:19:56 np0005558241 nova_compute[248510]: 2025-12-13 08:19:56.757 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:19:56 np0005558241 nova_compute[248510]: 2025-12-13 08:19:56.763 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:56 np0005558241 nova_compute[248510]: 2025-12-13 08:19:56.765 248514 INFO os_vif [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2')#033[00m
Dec 13 03:19:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1617: 321 pgs: 321 active+clean; 497 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 143 op/s
Dec 13 03:19:57 np0005558241 nova_compute[248510]: 2025-12-13 08:19:57.085 248514 DEBUG nova.network.neutron [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Updating instance_info_cache with network_info: [{"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:19:57 np0005558241 nova_compute[248510]: 2025-12-13 08:19:57.156 248514 DEBUG oslo_concurrency.lockutils [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Releasing lock "refresh_cache-2f398188-2df1-4c96-a224-c66efd7383a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:19:57 np0005558241 nova_compute[248510]: 2025-12-13 08:19:57.158 248514 DEBUG nova.compute.manager [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:19:57 np0005558241 nova_compute[248510]: 2025-12-13 08:19:57.482 248514 DEBUG nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:19:57 np0005558241 nova_compute[248510]: 2025-12-13 08:19:57.482 248514 DEBUG nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:19:57 np0005558241 nova_compute[248510]: 2025-12-13 08:19:57.482 248514 DEBUG nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] No VIF found with MAC fa:16:3e:e6:0a:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:19:57 np0005558241 nova_compute[248510]: 2025-12-13 08:19:57.483 248514 INFO nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Using config drive#033[00m
Dec 13 03:19:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:19:58 np0005558241 nova_compute[248510]: 2025-12-13 08:19:58.316 248514 DEBUG nova.storage.rbd_utils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:19:58 np0005558241 nova_compute[248510]: 2025-12-13 08:19:58.323 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:19:58 np0005558241 nova_compute[248510]: 2025-12-13 08:19:58.381 248514 DEBUG nova.objects.instance [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'ec2_ids' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:19:58 np0005558241 nova_compute[248510]: 2025-12-13 08:19:58.813 248514 DEBUG oslo_concurrency.processutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ea00b062-2638-45e3-91fb-24240ef8912f/disk.config ea00b062-2638-45e3-91fb-24240ef8912f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.727s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:19:58 np0005558241 nova_compute[248510]: 2025-12-13 08:19:58.814 248514 INFO nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Deleting local config drive /var/lib/nova/instances/ea00b062-2638-45e3-91fb-24240ef8912f/disk.config because it was imported into RBD.#033[00m
Dec 13 03:19:58 np0005558241 NetworkManager[50376]: <info>  [1765613998.8749] manager: (tap41d2792e-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Dec 13 03:19:58 np0005558241 kernel: tap41d2792e-cb: entered promiscuous mode
Dec 13 03:19:58 np0005558241 nova_compute[248510]: 2025-12-13 08:19:58.879 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:58Z|00096|binding|INFO|Claiming lport 41d2792e-cb2f-464c-a974-4be557dde28b for this chassis.
Dec 13 03:19:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:58Z|00097|binding|INFO|41d2792e-cb2f-464c-a974-4be557dde28b: Claiming fa:16:3e:d2:65:9d 10.100.0.9
Dec 13 03:19:58 np0005558241 systemd-udevd[276183]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:19:58 np0005558241 NetworkManager[50376]: <info>  [1765613998.9238] device (tap41d2792e-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:19:58 np0005558241 NetworkManager[50376]: <info>  [1765613998.9251] device (tap41d2792e-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:19:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1618: 321 pgs: 321 active+clean; 498 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Dec 13 03:19:58 np0005558241 kernel: tap31fc970a-7d (unregistering): left promiscuous mode
Dec 13 03:19:58 np0005558241 NetworkManager[50376]: <info>  [1765613998.9503] device (tap31fc970a-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:19:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:58Z|00098|binding|INFO|Setting lport 41d2792e-cb2f-464c-a974-4be557dde28b ovn-installed in OVS
Dec 13 03:19:58 np0005558241 nova_compute[248510]: 2025-12-13 08:19:58.983 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:58Z|00099|binding|INFO|Releasing lport 31fc970a-7d64-4540-880f-6d5aee7d129a from this chassis (sb_readonly=1)
Dec 13 03:19:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:58Z|00100|binding|INFO|Removing iface tap31fc970a-7d ovn-installed in OVS
Dec 13 03:19:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:58Z|00101|if_status|INFO|Not setting lport 31fc970a-7d64-4540-880f-6d5aee7d129a down as sb is readonly
Dec 13 03:19:58 np0005558241 nova_compute[248510]: 2025-12-13 08:19:58.985 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:58 np0005558241 nova_compute[248510]: 2025-12-13 08:19:58.998 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:59 np0005558241 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Deactivated successfully.
Dec 13 03:19:59 np0005558241 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Consumed 11.398s CPU time.
Dec 13 03:19:59 np0005558241 systemd-machined[210538]: Machine qemu-24-instance-00000016 terminated.
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.060 248514 DEBUG nova.objects.instance [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'keypairs' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:19:59 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:59Z|00102|binding|INFO|Setting lport 31fc970a-7d64-4540-880f-6d5aee7d129a down in Southbound
Dec 13 03:19:59 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:59Z|00103|binding|INFO|Setting lport 41d2792e-cb2f-464c-a974-4be557dde28b up in Southbound
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.061 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:65:9d 10.100.0.9'], port_security=['fa:16:3e:d2:65:9d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ea00b062-2638-45e3-91fb-24240ef8912f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '267af4c5-9a6a-49cd-8515-bd9e68488e32', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=41d2792e-cb2f-464c-a974-4be557dde28b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.063 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 41d2792e-cb2f-464c-a974-4be557dde28b in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 bound to our chassis#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.064 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.067 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:bb:71 10.100.0.4'], port_security=['fa:16:3e:67:bb:71 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2f398188-2df1-4c96-a224-c66efd7383a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-675d1cc9-9f63-41fd-81b7-761155b686e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6abc0a5e3b44a75bd19cd7df807b790', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2d00a2f3-cbea-40c0-a88e-6b16688a4860 2dfe1c43-6323-4420-b707-b23b52a24b81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ccd165e-0e0e-4b3a-8747-912025213604, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=31fc970a-7d64-4540-880f-6d5aee7d129a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:19:59 np0005558241 systemd-machined[210538]: New machine qemu-25-instance-00000017.
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.080 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[70a68f59-aae1-41d5-8bc7-ff05a3610c33]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.082 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1ca92864-31 in ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:19:59 np0005558241 systemd[1]: Started Virtual Machine qemu-25-instance-00000017.
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.089 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1ca92864-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.089 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[74397988-f008-4678-97de-a6682c5d9107]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.091 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[52389646-ae9e-4137-b4ba-4a0947c0ad71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.109 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[d658ca22-5e82-42c0-b283-aa1203573d9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.125 248514 INFO nova.virt.libvirt.driver [-] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Instance destroyed successfully.#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.126 248514 DEBUG nova.objects.instance [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lazy-loading 'resources' on Instance uuid 2f398188-2df1-4c96-a224-c66efd7383a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.142 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[41297913-f005-4390-8b3b-504ee9e03767]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.151 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid a14d8b88-7aec-468f-a550-881364e4d95e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.151 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 3fffafca-321d-4611-8940-da963b356ca1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.151 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 6b1e2b65-1398-4af8-9e8a-a8b99630eef8 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.152 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 0d64e209-19e7-4ad3-a790-43d04d832838 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.152 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid e686bddc-956e-4714-868c-9a29dda243d2 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.152 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 2f398188-2df1-4c96-a224-c66efd7383a0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.153 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid ea00b062-2638-45e3-91fb-24240ef8912f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.153 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.154 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "a14d8b88-7aec-468f-a550-881364e4d95e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.154 248514 INFO nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.154 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "a14d8b88-7aec-468f-a550-881364e4d95e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.154 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "3fffafca-321d-4611-8940-da963b356ca1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.155 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "3fffafca-321d-4611-8940-da963b356ca1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.155 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.155 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.155 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "0d64e209-19e7-4ad3-a790-43d04d832838" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.156 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "0d64e209-19e7-4ad3-a790-43d04d832838" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.156 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "e686bddc-956e-4714-868c-9a29dda243d2" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.156 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "e686bddc-956e-4714-868c-9a29dda243d2" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.157 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "2f398188-2df1-4c96-a224-c66efd7383a0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.157 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "ea00b062-2638-45e3-91fb-24240ef8912f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.174 248514 DEBUG nova.virt.libvirt.vif [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:19:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-60806246',display_name='tempest-SecurityGroupsTestJSON-server-60806246',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-60806246',id=22,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:19:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d6abc0a5e3b44a75bd19cd7df807b790',ramdisk_id='',reservation_id='r-q4l6b00g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1306632779',owner_user_name='tempest-SecurityGroupsTestJSON-1306632779-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:19:57Z,user_data=None,user_id='1e46b9271df84074b6a832899ca5ee57',uuid=2f398188-2df1-4c96-a224-c66efd7383a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.175 248514 DEBUG nova.network.os_vif_util [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Converting VIF {"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.176 248514 DEBUG nova.network.os_vif_util [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:67:bb:71,bridge_name='br-int',has_traffic_filtering=True,id=31fc970a-7d64-4540-880f-6d5aee7d129a,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fc970a-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.176 248514 DEBUG os_vif [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:bb:71,bridge_name='br-int',has_traffic_filtering=True,id=31fc970a-7d64-4540-880f-6d5aee7d129a,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fc970a-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.177 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[51c6bb36-7b58-4022-a9b0-d96388a1ce78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.178 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.178 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31fc970a-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.180 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.183 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.184 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a92d9256-a83c-49f9-926e-95149a488ef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:59 np0005558241 NetworkManager[50376]: <info>  [1765613999.1857] manager: (tap1ca92864-30): new Veth device (/org/freedesktop/NetworkManager/Devices/62)
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.187 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.190 248514 INFO os_vif [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:bb:71,bridge_name='br-int',has_traffic_filtering=True,id=31fc970a-7d64-4540-880f-6d5aee7d129a,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fc970a-7d')#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.200 248514 DEBUG nova.virt.libvirt.driver [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Start _get_guest_xml network_info=[{"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.211 248514 WARNING nova.virt.libvirt.driver [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.218 248514 DEBUG nova.virt.libvirt.host [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.219 248514 DEBUG nova.virt.libvirt.host [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.224 248514 DEBUG nova.virt.libvirt.host [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.225 248514 DEBUG nova.virt.libvirt.host [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.225 248514 DEBUG nova.virt.libvirt.driver [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.226 248514 DEBUG nova.virt.hardware [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.226 248514 DEBUG nova.virt.hardware [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.226 248514 DEBUG nova.virt.hardware [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.227 248514 DEBUG nova.virt.hardware [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.227 248514 DEBUG nova.virt.hardware [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.227 248514 DEBUG nova.virt.hardware [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.227 248514 DEBUG nova.virt.hardware [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.228 248514 DEBUG nova.virt.hardware [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.229 248514 DEBUG nova.virt.hardware [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.229 248514 DEBUG nova.virt.hardware [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.229 248514 DEBUG nova.virt.hardware [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.228 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9c20185f-33f8-48fb-b6c9-80279c31743f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.229 248514 DEBUG nova.objects.instance [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2f398188-2df1-4c96-a224-c66efd7383a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.234 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[127f0653-95ba-46cf-b826-fb4016a0c4ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.252 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:59 np0005558241 NetworkManager[50376]: <info>  [1765613999.2668] device (tap1ca92864-30): carrier: link connected
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.278 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[404a5187-6942-4b6f-b706-5d514500bb5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.302 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc73201-6afd-4023-b649-b4c2320e47cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657648, 'reachable_time': 24417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276241, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.316 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.316 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "0d64e209-19e7-4ad3-a790-43d04d832838" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.317 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "3fffafca-321d-4611-8940-da963b356ca1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.326 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c92ec8ae-b830-4a81-9f58-f853c56668b0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:e2af'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 657648, 'tstamp': 657648}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276242, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.349 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6fdb0ad7-7703-4513-91c1-a097af017d39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657648, 'reachable_time': 24417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276243, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.356 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "e686bddc-956e-4714-868c-9a29dda243d2" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.384 248514 DEBUG oslo_concurrency.processutils [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.384 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1283d6a8-b0ea-402c-8e17-a0ee8ffdb0d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.489 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0a344613-71f8-4689-9698-4f1768defb7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.492 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.492 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.493 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:59 np0005558241 NetworkManager[50376]: <info>  [1765613999.4960] manager: (tap1ca92864-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.495 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:59 np0005558241 kernel: tap1ca92864-30: entered promiscuous mode
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.501 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.502 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.503 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:59 np0005558241 ovn_controller[148476]: 2025-12-13T08:19:59Z|00104|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.518 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.521 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.522 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1ca92864-3b70-4794-9db1-fa08128cef92.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1ca92864-3b70-4794-9db1-fa08128cef92.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.523 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6099d4-12c2-40ae-a66e-3f29e6e88476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.524 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/1ca92864-3b70-4794-9db1-fa08128cef92.pid.haproxy
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:19:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:19:59.526 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'env', 'PROCESS_TAG=haproxy-1ca92864-3b70-4794-9db1-fa08128cef92', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1ca92864-3b70-4794-9db1-fa08128cef92.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.693 248514 INFO nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Creating config drive at /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/disk.config#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.700 248514 DEBUG oslo_concurrency.processutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp68a8534j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.777 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613999.7764578, ea00b062-2638-45e3-91fb-24240ef8912f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.778 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] VM Started (Lifecycle Event)#033[00m
Dec 13 03:19:59 np0005558241 nova_compute[248510]: 2025-12-13 08:19:59.837 248514 DEBUG oslo_concurrency.processutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp68a8534j" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:20:00 np0005558241 podman[276352]: 2025-12-13 08:19:59.919403552 +0000 UTC m=+0.026682708 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:20:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:20:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3321427535' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.078 248514 DEBUG nova.storage.rbd_utils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.082 248514 DEBUG oslo_concurrency.processutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/disk.config a14d8b88-7aec-468f-a550-881364e4d95e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.115 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.117 248514 DEBUG oslo_concurrency.processutils [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.733s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.146 248514 DEBUG oslo_concurrency.processutils [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.182 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765613999.7769623, ea00b062-2638-45e3-91fb-24240ef8912f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.183 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.621 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.624 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.658 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.818 248514 DEBUG nova.compute.manager [req-53b85742-13aa-47b9-82d6-71ce1645a8a8 req-8a16b823-a427-4c70-84d5-0dcb3b070f1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received event network-vif-plugged-41d2792e-cb2f-464c-a974-4be557dde28b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.819 248514 DEBUG oslo_concurrency.lockutils [req-53b85742-13aa-47b9-82d6-71ce1645a8a8 req-8a16b823-a427-4c70-84d5-0dcb3b070f1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.819 248514 DEBUG oslo_concurrency.lockutils [req-53b85742-13aa-47b9-82d6-71ce1645a8a8 req-8a16b823-a427-4c70-84d5-0dcb3b070f1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.819 248514 DEBUG oslo_concurrency.lockutils [req-53b85742-13aa-47b9-82d6-71ce1645a8a8 req-8a16b823-a427-4c70-84d5-0dcb3b070f1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.819 248514 DEBUG nova.compute.manager [req-53b85742-13aa-47b9-82d6-71ce1645a8a8 req-8a16b823-a427-4c70-84d5-0dcb3b070f1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Processing event network-vif-plugged-41d2792e-cb2f-464c-a974-4be557dde28b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.820 248514 DEBUG nova.compute.manager [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.824 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614000.823959, ea00b062-2638-45e3-91fb-24240ef8912f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.824 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.826 248514 DEBUG nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.829 248514 INFO nova.virt.libvirt.driver [-] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Instance spawned successfully.#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.830 248514 DEBUG nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.927 248514 DEBUG nova.compute.manager [req-b6114ba9-ddb0-4041-8d84-619ebb23b790 req-33347060-f570-45dd-95ad-6dcdc6738488 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Received event network-vif-unplugged-31fc970a-7d64-4540-880f-6d5aee7d129a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.927 248514 DEBUG oslo_concurrency.lockutils [req-b6114ba9-ddb0-4041-8d84-619ebb23b790 req-33347060-f570-45dd-95ad-6dcdc6738488 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.928 248514 DEBUG oslo_concurrency.lockutils [req-b6114ba9-ddb0-4041-8d84-619ebb23b790 req-33347060-f570-45dd-95ad-6dcdc6738488 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.928 248514 DEBUG oslo_concurrency.lockutils [req-b6114ba9-ddb0-4041-8d84-619ebb23b790 req-33347060-f570-45dd-95ad-6dcdc6738488 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.928 248514 DEBUG nova.compute.manager [req-b6114ba9-ddb0-4041-8d84-619ebb23b790 req-33347060-f570-45dd-95ad-6dcdc6738488 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] No waiting events found dispatching network-vif-unplugged-31fc970a-7d64-4540-880f-6d5aee7d129a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.928 248514 WARNING nova.compute.manager [req-b6114ba9-ddb0-4041-8d84-619ebb23b790 req-33347060-f570-45dd-95ad-6dcdc6738488 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Received unexpected event network-vif-unplugged-31fc970a-7d64-4540-880f-6d5aee7d129a for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec 13 03:20:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1619: 321 pgs: 321 active+clean; 498 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.963 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.968 248514 DEBUG nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.968 248514 DEBUG nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.969 248514 DEBUG nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.969 248514 DEBUG nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.969 248514 DEBUG nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.970 248514 DEBUG nova.virt.libvirt.driver [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:20:00 np0005558241 nova_compute[248510]: 2025-12-13 08:20:00.975 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:20:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:20:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1098124685' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.116 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.117 248514 DEBUG oslo_concurrency.processutils [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.971s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.118 248514 DEBUG nova.virt.libvirt.vif [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:19:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-60806246',display_name='tempest-SecurityGroupsTestJSON-server-60806246',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-60806246',id=22,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:19:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d6abc0a5e3b44a75bd19cd7df807b790',ramdisk_id='',reservation_id='r-q4l6b00g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1306632779',owner_user_name='tempest-SecurityGroupsTestJSON-1306632779-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:19:57Z,user_data=None,user_id='1e46b9271df84074b6a832899ca5ee57',uuid=2f398188-2df1-4c96-a224-c66efd7383a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.118 248514 DEBUG nova.network.os_vif_util [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Converting VIF {"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.119 248514 DEBUG nova.network.os_vif_util [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:67:bb:71,bridge_name='br-int',has_traffic_filtering=True,id=31fc970a-7d64-4540-880f-6d5aee7d129a,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fc970a-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.120 248514 DEBUG nova.objects.instance [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f398188-2df1-4c96-a224-c66efd7383a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.270 248514 INFO nova.compute.manager [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Took 17.55 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.270 248514 DEBUG nova.compute.manager [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.272 248514 DEBUG nova.virt.libvirt.driver [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:20:01 np0005558241 nova_compute[248510]:  <uuid>2f398188-2df1-4c96-a224-c66efd7383a0</uuid>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:  <name>instance-00000016</name>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <nova:name>tempest-SecurityGroupsTestJSON-server-60806246</nova:name>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:19:59</nova:creationTime>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:20:01 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:        <nova:user uuid="1e46b9271df84074b6a832899ca5ee57">tempest-SecurityGroupsTestJSON-1306632779-project-member</nova:user>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:        <nova:project uuid="d6abc0a5e3b44a75bd19cd7df807b790">tempest-SecurityGroupsTestJSON-1306632779</nova:project>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:        <nova:port uuid="31fc970a-7d64-4540-880f-6d5aee7d129a">
Dec 13 03:20:01 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <entry name="serial">2f398188-2df1-4c96-a224-c66efd7383a0</entry>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <entry name="uuid">2f398188-2df1-4c96-a224-c66efd7383a0</entry>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/2f398188-2df1-4c96-a224-c66efd7383a0_disk">
Dec 13 03:20:01 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:20:01 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/2f398188-2df1-4c96-a224-c66efd7383a0_disk.config">
Dec 13 03:20:01 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:20:01 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:67:bb:71"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <target dev="tap31fc970a-7d"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/2f398188-2df1-4c96-a224-c66efd7383a0/console.log" append="off"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <input type="keyboard" bus="usb"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:20:01 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:20:01 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:20:01 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:20:01 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.273 248514 DEBUG nova.virt.libvirt.driver [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.273 248514 DEBUG nova.virt.libvirt.driver [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.274 248514 DEBUG nova.virt.libvirt.vif [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:19:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-60806246',display_name='tempest-SecurityGroupsTestJSON-server-60806246',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-60806246',id=22,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:19:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='d6abc0a5e3b44a75bd19cd7df807b790',ramdisk_id='',reservation_id='r-q4l6b00g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1306632779',owner_user_name='tempest-SecurityGroupsTestJSON-1306632779-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:19:57Z,user_data=None,user_id='1e46b9271df84074b6a832899ca5ee57',uuid=2f398188-2df1-4c96-a224-c66efd7383a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.275 248514 DEBUG nova.network.os_vif_util [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Converting VIF {"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.275 248514 DEBUG nova.network.os_vif_util [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:67:bb:71,bridge_name='br-int',has_traffic_filtering=True,id=31fc970a-7d64-4540-880f-6d5aee7d129a,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fc970a-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.275 248514 DEBUG os_vif [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:bb:71,bridge_name='br-int',has_traffic_filtering=True,id=31fc970a-7d64-4540-880f-6d5aee7d129a,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fc970a-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.276 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.276 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.277 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.280 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.281 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31fc970a-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.281 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31fc970a-7d, col_values=(('external_ids', {'iface-id': '31fc970a-7d64-4540-880f-6d5aee7d129a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:bb:71', 'vm-uuid': '2f398188-2df1-4c96-a224-c66efd7383a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.283 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:01 np0005558241 NetworkManager[50376]: <info>  [1765614001.2848] manager: (tap31fc970a-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.284 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.294 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.295 248514 INFO os_vif [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:bb:71,bridge_name='br-int',has_traffic_filtering=True,id=31fc970a-7d64-4540-880f-6d5aee7d129a,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fc970a-7d')#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.415 248514 INFO nova.compute.manager [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Took 19.54 seconds to build instance.#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.911 248514 DEBUG oslo_concurrency.lockutils [None req-c18f3ae2-f44d-4d5f-84ca-79c11e3f2cc2 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.911 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "ea00b062-2638-45e3-91fb-24240ef8912f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.911 248514 INFO nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.912 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "ea00b062-2638-45e3-91fb-24240ef8912f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:01 np0005558241 nova_compute[248510]: 2025-12-13 08:20:01.925 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:01.925 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:20:02 np0005558241 kernel: tap31fc970a-7d: entered promiscuous mode
Dec 13 03:20:02 np0005558241 NetworkManager[50376]: <info>  [1765614002.0418] manager: (tap31fc970a-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Dec 13 03:20:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:02Z|00105|binding|INFO|Claiming lport 31fc970a-7d64-4540-880f-6d5aee7d129a for this chassis.
Dec 13 03:20:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:02Z|00106|binding|INFO|31fc970a-7d64-4540-880f-6d5aee7d129a: Claiming fa:16:3e:67:bb:71 10.100.0.4
Dec 13 03:20:02 np0005558241 nova_compute[248510]: 2025-12-13 08:20:02.049 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:02 np0005558241 nova_compute[248510]: 2025-12-13 08:20:02.074 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:02Z|00107|binding|INFO|Setting lport 31fc970a-7d64-4540-880f-6d5aee7d129a ovn-installed in OVS
Dec 13 03:20:02 np0005558241 nova_compute[248510]: 2025-12-13 08:20:02.083 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:02 np0005558241 systemd-machined[210538]: New machine qemu-26-instance-00000016.
Dec 13 03:20:02 np0005558241 systemd[1]: Started Virtual Machine qemu-26-instance-00000016.
Dec 13 03:20:02 np0005558241 systemd-udevd[276453]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:20:02 np0005558241 NetworkManager[50376]: <info>  [1765614002.1375] device (tap31fc970a-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:20:02 np0005558241 NetworkManager[50376]: <info>  [1765614002.1382] device (tap31fc970a-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:20:02 np0005558241 podman[276352]: 2025-12-13 08:20:02.147244046 +0000 UTC m=+2.254523182 container create 9b9d3f8f97d9725aca9c3eb782ceac0858649736ab6c6c40f6971e75725c41b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:02.373 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:bb:71 10.100.0.4'], port_security=['fa:16:3e:67:bb:71 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2f398188-2df1-4c96-a224-c66efd7383a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-675d1cc9-9f63-41fd-81b7-761155b686e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6abc0a5e3b44a75bd19cd7df807b790', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2d00a2f3-cbea-40c0-a88e-6b16688a4860 2dfe1c43-6323-4420-b707-b23b52a24b81', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ccd165e-0e0e-4b3a-8747-912025213604, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=31fc970a-7d64-4540-880f-6d5aee7d129a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:20:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:02Z|00108|binding|INFO|Setting lport 31fc970a-7d64-4540-880f-6d5aee7d129a up in Southbound
Dec 13 03:20:02 np0005558241 systemd[1]: Started libpod-conmon-9b9d3f8f97d9725aca9c3eb782ceac0858649736ab6c6c40f6971e75725c41b3.scope.
Dec 13 03:20:02 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:20:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce3084b50b0e6ec15e1ecf47d7e2ce15ecf3c552074717af103c056d1bf1d71/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:20:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1620: 321 pgs: 321 active+clean; 502 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.7 MiB/s wr, 94 op/s
Dec 13 03:20:02 np0005558241 podman[276352]: 2025-12-13 08:20:02.940977126 +0000 UTC m=+3.048256282 container init 9b9d3f8f97d9725aca9c3eb782ceac0858649736ab6c6c40f6971e75725c41b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:20:02 np0005558241 podman[276352]: 2025-12-13 08:20:02.949482016 +0000 UTC m=+3.056761152 container start 9b9d3f8f97d9725aca9c3eb782ceac0858649736ab6c6c40f6971e75725c41b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:20:02 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[276492]: [NOTICE]   (276515) : New worker (276517) forked
Dec 13 03:20:02 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[276492]: [NOTICE]   (276515) : Loading success.
Dec 13 03:20:02 np0005558241 nova_compute[248510]: 2025-12-13 08:20:02.981 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 2f398188-2df1-4c96-a224-c66efd7383a0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 03:20:02 np0005558241 nova_compute[248510]: 2025-12-13 08:20:02.982 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614002.9808493, 2f398188-2df1-4c96-a224-c66efd7383a0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:20:02 np0005558241 nova_compute[248510]: 2025-12-13 08:20:02.982 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:20:02 np0005558241 nova_compute[248510]: 2025-12-13 08:20:02.986 248514 DEBUG nova.compute.manager [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:20:02 np0005558241 nova_compute[248510]: 2025-12-13 08:20:02.992 248514 INFO nova.virt.libvirt.driver [-] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Instance rebooted successfully.#033[00m
Dec 13 03:20:02 np0005558241 nova_compute[248510]: 2025-12-13 08:20:02.993 248514 DEBUG nova.compute.manager [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:20:03 np0005558241 nova_compute[248510]: 2025-12-13 08:20:03.056 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:20:03 np0005558241 nova_compute[248510]: 2025-12-13 08:20:03.060 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.357 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 31fc970a-7d64-4540-880f-6d5aee7d129a in datapath 675d1cc9-9f63-41fd-81b7-761155b686e5 unbound from our chassis#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.360 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 675d1cc9-9f63-41fd-81b7-761155b686e5#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.383 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[87a068f5-c906-426f-8a26-497d4fbb6aba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:03 np0005558241 nova_compute[248510]: 2025-12-13 08:20:03.387 248514 DEBUG oslo_concurrency.processutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/disk.config a14d8b88-7aec-468f-a550-881364e4d95e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:20:03 np0005558241 nova_compute[248510]: 2025-12-13 08:20:03.387 248514 INFO nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Deleting local config drive /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/disk.config because it was imported into RBD.#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.425 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ec5f6822-8aed-4bff-a53b-83bfc681281c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.429 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5c1c930d-4c8c-4e43-aa51-80159626315b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:03 np0005558241 kernel: tap7f8a109e-e2: entered promiscuous mode
Dec 13 03:20:03 np0005558241 NetworkManager[50376]: <info>  [1765614003.4466] manager: (tap7f8a109e-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Dec 13 03:20:03 np0005558241 systemd-udevd[276455]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:20:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:03Z|00109|binding|INFO|Claiming lport 7f8a109e-e262-4847-8430-ac7944dace5c for this chassis.
Dec 13 03:20:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:03Z|00110|binding|INFO|7f8a109e-e262-4847-8430-ac7944dace5c: Claiming fa:16:3e:e6:0a:44 10.100.0.14
Dec 13 03:20:03 np0005558241 nova_compute[248510]: 2025-12-13 08:20:03.453 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:20:03 np0005558241 NetworkManager[50376]: <info>  [1765614003.4683] device (tap7f8a109e-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:20:03 np0005558241 NetworkManager[50376]: <info>  [1765614003.4694] device (tap7f8a109e-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:20:03 np0005558241 nova_compute[248510]: 2025-12-13 08:20:03.472 248514 DEBUG oslo_concurrency.lockutils [None req-52efa884-e5f0-4752-94cb-d18327f9425e 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 10.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:03 np0005558241 nova_compute[248510]: 2025-12-13 08:20:03.474 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "2f398188-2df1-4c96-a224-c66efd7383a0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 4.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:03 np0005558241 nova_compute[248510]: 2025-12-13 08:20:03.474 248514 INFO nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Dec 13 03:20:03 np0005558241 nova_compute[248510]: 2025-12-13 08:20:03.474 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "2f398188-2df1-4c96-a224-c66efd7383a0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:03 np0005558241 nova_compute[248510]: 2025-12-13 08:20:03.475 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614002.9843125, 2f398188-2df1-4c96-a224-c66efd7383a0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:20:03 np0005558241 nova_compute[248510]: 2025-12-13 08:20:03.475 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] VM Started (Lifecycle Event)#033[00m
Dec 13 03:20:03 np0005558241 nova_compute[248510]: 2025-12-13 08:20:03.478 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:03Z|00111|binding|INFO|Setting lport 7f8a109e-e262-4847-8430-ac7944dace5c ovn-installed in OVS
Dec 13 03:20:03 np0005558241 nova_compute[248510]: 2025-12-13 08:20:03.484 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.487 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0760c415-3b8e-4554-993c-a594539adc18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:03 np0005558241 systemd-machined[210538]: New machine qemu-27-instance-00000011.
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.519 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9795e9-14e5-47c7-a92d-b14a1779cf6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap675d1cc9-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:51:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649993, 'reachable_time': 18646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276545, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:03 np0005558241 systemd[1]: Started Virtual Machine qemu-27-instance-00000011.
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.542 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[524a87a7-c67f-4013-bbe4-8ddb62b01ceb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap675d1cc9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650007, 'tstamp': 650007}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276547, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap675d1cc9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650010, 'tstamp': 650010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276547, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.545 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap675d1cc9-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.548 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap675d1cc9-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.549 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:20:03 np0005558241 nova_compute[248510]: 2025-12-13 08:20:03.547 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.550 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap675d1cc9-90, col_values=(('external_ids', {'iface-id': '0e8c3540-dab0-4c73-b9e7-cb8c0c7c41e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.550 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.552 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.553 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 31fc970a-7d64-4540-880f-6d5aee7d129a in datapath 675d1cc9-9f63-41fd-81b7-761155b686e5 unbound from our chassis#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.555 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 675d1cc9-9f63-41fd-81b7-761155b686e5#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.575 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8751f087-ec8b-4b89-8da6-0f1aaf430db7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.614 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f0fd6172-36cb-4695-842f-bf74809f5867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.621 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0eb298-ad2b-4bc9-b81e-7abb57c53879]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.640 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:0a:44 10.100.0.14'], port_security=['fa:16:3e:e6:0a:44 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a14d8b88-7aec-468f-a550-881364e4d95e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7f8a109e-e262-4847-8430-ac7944dace5c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:20:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:03Z|00112|binding|INFO|Setting lport 7f8a109e-e262-4847-8430-ac7944dace5c up in Southbound
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.662 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0c2a9d-4f5e-447c-9a7a-3389882406ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.689 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3666d4eb-38e3-4ad7-92a6-7d16d06faa43]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap675d1cc9-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:51:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 698, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 698, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649993, 'reachable_time': 18646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276559, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.719 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b124a030-f6d1-4004-a0fa-79db6bb53001]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap675d1cc9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650007, 'tstamp': 650007}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276560, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap675d1cc9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650010, 'tstamp': 650010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276560, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.722 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap675d1cc9-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:03 np0005558241 nova_compute[248510]: 2025-12-13 08:20:03.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.727 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap675d1cc9-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.727 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.727 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap675d1cc9-90, col_values=(('external_ids', {'iface-id': '0e8c3540-dab0-4c73-b9e7-cb8c0c7c41e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.728 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.730 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7f8a109e-e262-4847-8430-ac7944dace5c in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 bound to our chassis#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.733 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.758 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[34625e2b-d096-4f9c-a1dc-5ff80f3ea371]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.799 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[46d50304-4476-4ea1-98a2-4855e51ab05b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.802 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[99b1efeb-894e-458f-b402-befd3b70a2f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.846 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fd240f41-4851-4b16-bde5-d7a91c5a0fae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.868 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e408c58e-611b-4700-835f-bfc51dd749b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3139e1f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:6e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 14, 'rx_bytes': 868, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 14, 'rx_bytes': 868, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641497, 'reachable_time': 37181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276566, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:03 np0005558241 nova_compute[248510]: 2025-12-13 08:20:03.883 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:20:03 np0005558241 nova_compute[248510]: 2025-12-13 08:20:03.887 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.898 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cf02bd28-9008-46be-aa6e-703f1fd928c5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641512, 'tstamp': 641512}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276567, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641515, 'tstamp': 641515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276567, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.903 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:03 np0005558241 nova_compute[248510]: 2025-12-13 08:20:03.905 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:03 np0005558241 nova_compute[248510]: 2025-12-13 08:20:03.906 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.906 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3139e1f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.907 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.907 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3139e1f-d0, col_values=(('external_ids', {'iface-id': 'c8850a7f-53d6-4776-8c0e-7fb474fe52de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:03.907 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.065 248514 DEBUG nova.compute.manager [req-a00d38e2-7805-465c-8393-ddbea485a556 req-9c0fdf96-5b8e-4ab1-84aa-583b474c7ffe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received event network-vif-plugged-41d2792e-cb2f-464c-a974-4be557dde28b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.065 248514 DEBUG oslo_concurrency.lockutils [req-a00d38e2-7805-465c-8393-ddbea485a556 req-9c0fdf96-5b8e-4ab1-84aa-583b474c7ffe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.066 248514 DEBUG oslo_concurrency.lockutils [req-a00d38e2-7805-465c-8393-ddbea485a556 req-9c0fdf96-5b8e-4ab1-84aa-583b474c7ffe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.066 248514 DEBUG oslo_concurrency.lockutils [req-a00d38e2-7805-465c-8393-ddbea485a556 req-9c0fdf96-5b8e-4ab1-84aa-583b474c7ffe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.066 248514 DEBUG nova.compute.manager [req-a00d38e2-7805-465c-8393-ddbea485a556 req-9c0fdf96-5b8e-4ab1-84aa-583b474c7ffe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] No waiting events found dispatching network-vif-plugged-41d2792e-cb2f-464c-a974-4be557dde28b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.066 248514 WARNING nova.compute.manager [req-a00d38e2-7805-465c-8393-ddbea485a556 req-9c0fdf96-5b8e-4ab1-84aa-583b474c7ffe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received unexpected event network-vif-plugged-41d2792e-cb2f-464c-a974-4be557dde28b for instance with vm_state active and task_state None.#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.159 248514 DEBUG nova.compute.manager [req-feb7f6c6-ca18-4084-b0cb-b0ef88f3a59f req-51ac0b00-56f7-4e02-b1f6-53e4ac85dd5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Received event network-vif-plugged-31fc970a-7d64-4540-880f-6d5aee7d129a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.160 248514 DEBUG oslo_concurrency.lockutils [req-feb7f6c6-ca18-4084-b0cb-b0ef88f3a59f req-51ac0b00-56f7-4e02-b1f6-53e4ac85dd5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.160 248514 DEBUG oslo_concurrency.lockutils [req-feb7f6c6-ca18-4084-b0cb-b0ef88f3a59f req-51ac0b00-56f7-4e02-b1f6-53e4ac85dd5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.160 248514 DEBUG oslo_concurrency.lockutils [req-feb7f6c6-ca18-4084-b0cb-b0ef88f3a59f req-51ac0b00-56f7-4e02-b1f6-53e4ac85dd5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.160 248514 DEBUG nova.compute.manager [req-feb7f6c6-ca18-4084-b0cb-b0ef88f3a59f req-51ac0b00-56f7-4e02-b1f6-53e4ac85dd5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] No waiting events found dispatching network-vif-plugged-31fc970a-7d64-4540-880f-6d5aee7d129a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.160 248514 WARNING nova.compute.manager [req-feb7f6c6-ca18-4084-b0cb-b0ef88f3a59f req-51ac0b00-56f7-4e02-b1f6-53e4ac85dd5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Received unexpected event network-vif-plugged-31fc970a-7d64-4540-880f-6d5aee7d129a for instance with vm_state active and task_state None.#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.160 248514 DEBUG nova.compute.manager [req-feb7f6c6-ca18-4084-b0cb-b0ef88f3a59f req-51ac0b00-56f7-4e02-b1f6-53e4ac85dd5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Received event network-vif-plugged-31fc970a-7d64-4540-880f-6d5aee7d129a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.161 248514 DEBUG oslo_concurrency.lockutils [req-feb7f6c6-ca18-4084-b0cb-b0ef88f3a59f req-51ac0b00-56f7-4e02-b1f6-53e4ac85dd5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.161 248514 DEBUG oslo_concurrency.lockutils [req-feb7f6c6-ca18-4084-b0cb-b0ef88f3a59f req-51ac0b00-56f7-4e02-b1f6-53e4ac85dd5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.161 248514 DEBUG oslo_concurrency.lockutils [req-feb7f6c6-ca18-4084-b0cb-b0ef88f3a59f req-51ac0b00-56f7-4e02-b1f6-53e4ac85dd5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.161 248514 DEBUG nova.compute.manager [req-feb7f6c6-ca18-4084-b0cb-b0ef88f3a59f req-51ac0b00-56f7-4e02-b1f6-53e4ac85dd5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] No waiting events found dispatching network-vif-plugged-31fc970a-7d64-4540-880f-6d5aee7d129a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.161 248514 WARNING nova.compute.manager [req-feb7f6c6-ca18-4084-b0cb-b0ef88f3a59f req-51ac0b00-56f7-4e02-b1f6-53e4ac85dd5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Received unexpected event network-vif-plugged-31fc970a-7d64-4540-880f-6d5aee7d129a for instance with vm_state active and task_state None.#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.254 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.336 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614004.335249, a14d8b88-7aec-468f-a550-881364e4d95e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.336 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.341 248514 DEBUG nova.compute.manager [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.341 248514 DEBUG nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.350 248514 INFO nova.virt.libvirt.driver [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance spawned successfully.#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.351 248514 DEBUG nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.638 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.648 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.681 248514 DEBUG nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.683 248514 DEBUG nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.684 248514 DEBUG nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.684 248514 DEBUG nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.686 248514 DEBUG nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.687 248514 DEBUG nova.virt.libvirt.driver [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.782 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.782 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614004.3381357, a14d8b88-7aec-468f-a550-881364e4d95e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:20:04 np0005558241 nova_compute[248510]: 2025-12-13 08:20:04.782 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] VM Started (Lifecycle Event)#033[00m
Dec 13 03:20:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1621: 321 pgs: 321 active+clean; 502 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 434 KiB/s wr, 152 op/s
Dec 13 03:20:05 np0005558241 nova_compute[248510]: 2025-12-13 08:20:05.180 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:20:05 np0005558241 nova_compute[248510]: 2025-12-13 08:20:05.194 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:20:05 np0005558241 nova_compute[248510]: 2025-12-13 08:20:05.257 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec 13 03:20:05 np0005558241 nova_compute[248510]: 2025-12-13 08:20:05.284 248514 DEBUG nova.compute.manager [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:20:05 np0005558241 nova_compute[248510]: 2025-12-13 08:20:05.474 248514 DEBUG oslo_concurrency.lockutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:05 np0005558241 nova_compute[248510]: 2025-12-13 08:20:05.475 248514 DEBUG oslo_concurrency.lockutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:05 np0005558241 nova_compute[248510]: 2025-12-13 08:20:05.475 248514 DEBUG nova.objects.instance [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec 13 03:20:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:05.555 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:05 np0005558241 nova_compute[248510]: 2025-12-13 08:20:05.697 248514 DEBUG oslo_concurrency.lockutils [None req-c1bd99f3-5a54-4277-93a8-59bc8cf9e9fd 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:06 np0005558241 nova_compute[248510]: 2025-12-13 08:20:06.285 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:06 np0005558241 nova_compute[248510]: 2025-12-13 08:20:06.546 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:06 np0005558241 NetworkManager[50376]: <info>  [1765614006.5529] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Dec 13 03:20:06 np0005558241 NetworkManager[50376]: <info>  [1765614006.5536] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Dec 13 03:20:06 np0005558241 nova_compute[248510]: 2025-12-13 08:20:06.622 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:06 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:06Z|00113|binding|INFO|Releasing lport 0e8c3540-dab0-4c73-b9e7-cb8c0c7c41e8 from this chassis (sb_readonly=0)
Dec 13 03:20:06 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:06Z|00114|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 03:20:06 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:06Z|00115|binding|INFO|Releasing lport c8850a7f-53d6-4776-8c0e-7fb474fe52de from this chassis (sb_readonly=0)
Dec 13 03:20:06 np0005558241 nova_compute[248510]: 2025-12-13 08:20:06.640 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1622: 321 pgs: 321 active+clean; 502 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 403 KiB/s wr, 141 op/s
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.662 248514 DEBUG nova.compute.manager [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Received event network-vif-plugged-31fc970a-7d64-4540-880f-6d5aee7d129a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.662 248514 DEBUG oslo_concurrency.lockutils [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.663 248514 DEBUG oslo_concurrency.lockutils [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.663 248514 DEBUG oslo_concurrency.lockutils [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.663 248514 DEBUG nova.compute.manager [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] No waiting events found dispatching network-vif-plugged-31fc970a-7d64-4540-880f-6d5aee7d129a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.663 248514 WARNING nova.compute.manager [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Received unexpected event network-vif-plugged-31fc970a-7d64-4540-880f-6d5aee7d129a for instance with vm_state active and task_state None.#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.663 248514 DEBUG nova.compute.manager [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.663 248514 DEBUG oslo_concurrency.lockutils [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.664 248514 DEBUG oslo_concurrency.lockutils [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.664 248514 DEBUG oslo_concurrency.lockutils [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.664 248514 DEBUG nova.compute.manager [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] No waiting events found dispatching network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.664 248514 WARNING nova.compute.manager [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received unexpected event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c for instance with vm_state active and task_state None.#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.664 248514 DEBUG nova.compute.manager [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.664 248514 DEBUG oslo_concurrency.lockutils [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.665 248514 DEBUG oslo_concurrency.lockutils [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.665 248514 DEBUG oslo_concurrency.lockutils [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.665 248514 DEBUG nova.compute.manager [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] No waiting events found dispatching network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.665 248514 WARNING nova.compute.manager [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received unexpected event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c for instance with vm_state active and task_state None.#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.665 248514 DEBUG nova.compute.manager [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Received event network-changed-31fc970a-7d64-4540-880f-6d5aee7d129a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.665 248514 DEBUG nova.compute.manager [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Refreshing instance network info cache due to event network-changed-31fc970a-7d64-4540-880f-6d5aee7d129a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.666 248514 DEBUG oslo_concurrency.lockutils [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2f398188-2df1-4c96-a224-c66efd7383a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.666 248514 DEBUG oslo_concurrency.lockutils [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2f398188-2df1-4c96-a224-c66efd7383a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:20:07 np0005558241 nova_compute[248510]: 2025-12-13 08:20:07.666 248514 DEBUG nova.network.neutron [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Refreshing network info cache for port 31fc970a-7d64-4540-880f-6d5aee7d129a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:20:08 np0005558241 podman[276612]: 2025-12-13 08:20:08.009876576 +0000 UTC m=+0.090122100 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 13 03:20:08 np0005558241 podman[276613]: 2025-12-13 08:20:08.035161079 +0000 UTC m=+0.111882907 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:20:08 np0005558241 podman[276611]: 2025-12-13 08:20:08.045407552 +0000 UTC m=+0.125576974 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 13 03:20:08 np0005558241 nova_compute[248510]: 2025-12-13 08:20:08.340 248514 DEBUG nova.compute.manager [req-b37cb835-e47b-4300-b041-d1738d82c581 req-a3bcc4fb-2846-4217-92ae-2bdfbcbd55fa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received event network-changed-41d2792e-cb2f-464c-a974-4be557dde28b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:08 np0005558241 nova_compute[248510]: 2025-12-13 08:20:08.341 248514 DEBUG nova.compute.manager [req-b37cb835-e47b-4300-b041-d1738d82c581 req-a3bcc4fb-2846-4217-92ae-2bdfbcbd55fa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Refreshing instance network info cache due to event network-changed-41d2792e-cb2f-464c-a974-4be557dde28b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:20:08 np0005558241 nova_compute[248510]: 2025-12-13 08:20:08.341 248514 DEBUG oslo_concurrency.lockutils [req-b37cb835-e47b-4300-b041-d1738d82c581 req-a3bcc4fb-2846-4217-92ae-2bdfbcbd55fa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ea00b062-2638-45e3-91fb-24240ef8912f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:20:08 np0005558241 nova_compute[248510]: 2025-12-13 08:20:08.342 248514 DEBUG oslo_concurrency.lockutils [req-b37cb835-e47b-4300-b041-d1738d82c581 req-a3bcc4fb-2846-4217-92ae-2bdfbcbd55fa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ea00b062-2638-45e3-91fb-24240ef8912f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:20:08 np0005558241 nova_compute[248510]: 2025-12-13 08:20:08.342 248514 DEBUG nova.network.neutron [req-b37cb835-e47b-4300-b041-d1738d82c581 req-a3bcc4fb-2846-4217-92ae-2bdfbcbd55fa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Refreshing network info cache for port 41d2792e-cb2f-464c-a974-4be557dde28b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:20:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:20:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1623: 321 pgs: 321 active+clean; 502 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 369 KiB/s wr, 247 op/s
Dec 13 03:20:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:20:09
Dec 13 03:20:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:20:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:20:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.rgw.root', 'backups', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'images', 'vms', 'cephfs.cephfs.data']
Dec 13 03:20:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.258 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.277 248514 DEBUG oslo_concurrency.lockutils [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "2f398188-2df1-4c96-a224-c66efd7383a0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.277 248514 DEBUG oslo_concurrency.lockutils [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.278 248514 DEBUG oslo_concurrency.lockutils [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.278 248514 DEBUG oslo_concurrency.lockutils [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.278 248514 DEBUG oslo_concurrency.lockutils [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.280 248514 INFO nova.compute.manager [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Terminating instance#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.281 248514 DEBUG nova.compute.manager [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:20:09 np0005558241 kernel: tap31fc970a-7d (unregistering): left promiscuous mode
Dec 13 03:20:09 np0005558241 NetworkManager[50376]: <info>  [1765614009.4127] device (tap31fc970a-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:20:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:09Z|00116|binding|INFO|Releasing lport 31fc970a-7d64-4540-880f-6d5aee7d129a from this chassis (sb_readonly=0)
Dec 13 03:20:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:09Z|00117|binding|INFO|Setting lport 31fc970a-7d64-4540-880f-6d5aee7d129a down in Southbound
Dec 13 03:20:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:09Z|00118|binding|INFO|Removing iface tap31fc970a-7d ovn-installed in OVS
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.428 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.442 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:bb:71 10.100.0.4'], port_security=['fa:16:3e:67:bb:71 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2f398188-2df1-4c96-a224-c66efd7383a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-675d1cc9-9f63-41fd-81b7-761155b686e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6abc0a5e3b44a75bd19cd7df807b790', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2d00a2f3-cbea-40c0-a88e-6b16688a4860 2dfe1c43-6323-4420-b707-b23b52a24b81 54596ed1-fbc3-4f29-a8db-04837deeb1d5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ccd165e-0e0e-4b3a-8747-912025213604, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=31fc970a-7d64-4540-880f-6d5aee7d129a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.440 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.443 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 31fc970a-7d64-4540-880f-6d5aee7d129a in datapath 675d1cc9-9f63-41fd-81b7-761155b686e5 unbound from our chassis#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.446 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 675d1cc9-9f63-41fd-81b7-761155b686e5#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.468 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bddbf9db-056b-4015-a36a-75ac737b0380]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:09 np0005558241 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000016.scope: Deactivated successfully.
Dec 13 03:20:09 np0005558241 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000016.scope: Consumed 6.943s CPU time.
Dec 13 03:20:09 np0005558241 systemd-machined[210538]: Machine qemu-26-instance-00000016 terminated.
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.500 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0f65fd90-6bd3-4cde-ae14-2b30b2fa283a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.503 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[01888545-3ac3-428f-84d4-43e6ce5b0ce2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.534 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[65c760b2-a997-4166-97b3-f5ec0a8b75ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.557 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[052d2d42-893c-466a-88dd-02c73786be8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap675d1cc9-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:51:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 13, 'rx_bytes': 658, 'tx_bytes': 782, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 13, 'rx_bytes': 658, 'tx_bytes': 782, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649993, 'reachable_time': 18646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276678, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.577 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5fcf398d-3dae-4a73-bb1a-5affe5af99f4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap675d1cc9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650007, 'tstamp': 650007}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276679, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap675d1cc9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650010, 'tstamp': 650010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276679, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.580 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap675d1cc9-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.582 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.587 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.588 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap675d1cc9-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.588 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.588 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap675d1cc9-90, col_values=(('external_ids', {'iface-id': '0e8c3540-dab0-4c73-b9e7-cb8c0c7c41e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.588 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:20:09 np0005558241 kernel: tap31fc970a-7d: entered promiscuous mode
Dec 13 03:20:09 np0005558241 NetworkManager[50376]: <info>  [1765614009.7141] manager: (tap31fc970a-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Dec 13 03:20:09 np0005558241 kernel: tap31fc970a-7d (unregistering): left promiscuous mode
Dec 13 03:20:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:09Z|00119|binding|INFO|Claiming lport 31fc970a-7d64-4540-880f-6d5aee7d129a for this chassis.
Dec 13 03:20:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:09Z|00120|binding|INFO|31fc970a-7d64-4540-880f-6d5aee7d129a: Claiming fa:16:3e:67:bb:71 10.100.0.4
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.716 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.743 248514 INFO nova.virt.libvirt.driver [-] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Instance destroyed successfully.#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.744 248514 DEBUG nova.objects.instance [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lazy-loading 'resources' on Instance uuid 2f398188-2df1-4c96-a224-c66efd7383a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:20:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:09Z|00121|binding|INFO|Setting lport 31fc970a-7d64-4540-880f-6d5aee7d129a ovn-installed in OVS
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.747 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:09Z|00122|binding|INFO|Releasing lport 31fc970a-7d64-4540-880f-6d5aee7d129a from this chassis (sb_readonly=0)
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.866 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:bb:71 10.100.0.4'], port_security=['fa:16:3e:67:bb:71 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2f398188-2df1-4c96-a224-c66efd7383a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-675d1cc9-9f63-41fd-81b7-761155b686e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6abc0a5e3b44a75bd19cd7df807b790', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2d00a2f3-cbea-40c0-a88e-6b16688a4860 2dfe1c43-6323-4420-b707-b23b52a24b81 54596ed1-fbc3-4f29-a8db-04837deeb1d5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ccd165e-0e0e-4b3a-8747-912025213604, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=31fc970a-7d64-4540-880f-6d5aee7d129a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.875 248514 DEBUG nova.virt.libvirt.vif [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:19:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-60806246',display_name='tempest-SecurityGroupsTestJSON-server-60806246',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-60806246',id=22,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:19:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d6abc0a5e3b44a75bd19cd7df807b790',ramdisk_id='',reservation_id='r-q4l6b00g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1306632779',owner_user_name='tempest-SecurityGroupsTestJSON-1306632779-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:20:03Z,user_data=None,user_id='1e46b9271df84074b6a832899ca5ee57',uuid=2f398188-2df1-4c96-a224-c66efd7383a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.876 248514 DEBUG nova.network.os_vif_util [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Converting VIF {"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.877 248514 DEBUG nova.network.os_vif_util [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:67:bb:71,bridge_name='br-int',has_traffic_filtering=True,id=31fc970a-7d64-4540-880f-6d5aee7d129a,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fc970a-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.877 248514 DEBUG os_vif [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:bb:71,bridge_name='br-int',has_traffic_filtering=True,id=31fc970a-7d64-4540-880f-6d5aee7d129a,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fc970a-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.879 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.879 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 31fc970a-7d64-4540-880f-6d5aee7d129a in datapath 675d1cc9-9f63-41fd-81b7-761155b686e5 bound to our chassis#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.880 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31fc970a-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.881 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.881 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 675d1cc9-9f63-41fd-81b7-761155b686e5#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.883 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:09 np0005558241 nova_compute[248510]: 2025-12-13 08:20:09.887 248514 INFO os_vif [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:bb:71,bridge_name='br-int',has_traffic_filtering=True,id=31fc970a-7d64-4540-880f-6d5aee7d129a,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31fc970a-7d')#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.912 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[91d346a1-ab84-4de7-b33b-c4ea1687fa40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.951 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8dfc615e-71dd-4306-9fe7-e9ccd2285d5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.955 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3665f2ae-667a-4463-9aa8-841378fcd395]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.975 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:bb:71 10.100.0.4'], port_security=['fa:16:3e:67:bb:71 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2f398188-2df1-4c96-a224-c66efd7383a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-675d1cc9-9f63-41fd-81b7-761155b686e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6abc0a5e3b44a75bd19cd7df807b790', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2d00a2f3-cbea-40c0-a88e-6b16688a4860 2dfe1c43-6323-4420-b707-b23b52a24b81 54596ed1-fbc3-4f29-a8db-04837deeb1d5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ccd165e-0e0e-4b3a-8747-912025213604, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=31fc970a-7d64-4540-880f-6d5aee7d129a) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:20:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:09.990 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ea04a6-c22f-4a97-89e5-927be7527943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:10 np0005558241 nova_compute[248510]: 2025-12-13 08:20:10.021 248514 DEBUG nova.network.neutron [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Updated VIF entry in instance network info cache for port 31fc970a-7d64-4540-880f-6d5aee7d129a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:20:10 np0005558241 nova_compute[248510]: 2025-12-13 08:20:10.022 248514 DEBUG nova.network.neutron [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Updating instance_info_cache with network_info: [{"id": "31fc970a-7d64-4540-880f-6d5aee7d129a", "address": "fa:16:3e:67:bb:71", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31fc970a-7d", "ovs_interfaceid": "31fc970a-7d64-4540-880f-6d5aee7d129a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.022 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[50099fd2-893c-4962-bdc1-cb0d1a122fc6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap675d1cc9-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:51:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 15, 'rx_bytes': 658, 'tx_bytes': 866, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 15, 'rx_bytes': 658, 'tx_bytes': 866, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649993, 'reachable_time': 18646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276714, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.044 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5314d408-4755-4086-a9d6-ef83d40b0dad]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap675d1cc9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650007, 'tstamp': 650007}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276715, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap675d1cc9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650010, 'tstamp': 650010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276715, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.046 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap675d1cc9-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:10 np0005558241 nova_compute[248510]: 2025-12-13 08:20:10.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.049 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap675d1cc9-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.049 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.050 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap675d1cc9-90, col_values=(('external_ids', {'iface-id': '0e8c3540-dab0-4c73-b9e7-cb8c0c7c41e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.050 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.051 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 31fc970a-7d64-4540-880f-6d5aee7d129a in datapath 675d1cc9-9f63-41fd-81b7-761155b686e5 unbound from our chassis#033[00m
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.053 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 675d1cc9-9f63-41fd-81b7-761155b686e5#033[00m
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.070 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[09163d09-cef4-43c9-b4e5-c30f389e0413]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:20:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:20:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:20:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:20:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:20:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.111 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b499a283-dfc7-404f-b11b-b4ab122eb2ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.115 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4a3c32a7-2bcc-407f-a95f-0adf03f6d72f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.149 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[db4ec1b3-47fb-4e8e-87c8-a5d4eb35924f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.170 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a104bc98-7ffc-460e-94b9-e500caf96203]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap675d1cc9-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:75:51:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 17, 'rx_bytes': 658, 'tx_bytes': 950, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 17, 'rx_bytes': 658, 'tx_bytes': 950, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649993, 'reachable_time': 18646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276721, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.190 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[52bae0be-3f12-4231-ba78-651b2dd55ea7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap675d1cc9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650007, 'tstamp': 650007}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276722, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap675d1cc9-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650010, 'tstamp': 650010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276722, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.192 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap675d1cc9-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:10 np0005558241 nova_compute[248510]: 2025-12-13 08:20:10.194 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:10 np0005558241 nova_compute[248510]: 2025-12-13 08:20:10.196 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.196 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap675d1cc9-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.196 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.197 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap675d1cc9-90, col_values=(('external_ids', {'iface-id': '0e8c3540-dab0-4c73-b9e7-cb8c0c7c41e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:10.197 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:20:10 np0005558241 nova_compute[248510]: 2025-12-13 08:20:10.230 248514 DEBUG oslo_concurrency.lockutils [req-d38a92c8-5282-4b56-b9d1-ba7a63b6570c req-a88349d9-8e1d-4baa-9a52-e427c17e6d37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2f398188-2df1-4c96-a224-c66efd7383a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:20:10 np0005558241 nova_compute[248510]: 2025-12-13 08:20:10.239 248514 DEBUG nova.network.neutron [req-b37cb835-e47b-4300-b041-d1738d82c581 req-a3bcc4fb-2846-4217-92ae-2bdfbcbd55fa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Updated VIF entry in instance network info cache for port 41d2792e-cb2f-464c-a974-4be557dde28b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:20:10 np0005558241 nova_compute[248510]: 2025-12-13 08:20:10.240 248514 DEBUG nova.network.neutron [req-b37cb835-e47b-4300-b041-d1738d82c581 req-a3bcc4fb-2846-4217-92ae-2bdfbcbd55fa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Updating instance_info_cache with network_info: [{"id": "41d2792e-cb2f-464c-a974-4be557dde28b", "address": "fa:16:3e:d2:65:9d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d2792e-cb", "ovs_interfaceid": "41d2792e-cb2f-464c-a974-4be557dde28b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:20:10 np0005558241 nova_compute[248510]: 2025-12-13 08:20:10.318 248514 DEBUG oslo_concurrency.lockutils [req-b37cb835-e47b-4300-b041-d1738d82c581 req-a3bcc4fb-2846-4217-92ae-2bdfbcbd55fa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ea00b062-2638-45e3-91fb-24240ef8912f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:20:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:20:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:20:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:20:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:20:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:20:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:20:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:20:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:20:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:20:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:20:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1624: 321 pgs: 321 active+clean; 502 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 302 KiB/s wr, 220 op/s
Dec 13 03:20:11 np0005558241 nova_compute[248510]: 2025-12-13 08:20:11.141 248514 DEBUG nova.compute.manager [req-7d754035-3d66-46af-b7a4-540a203956d1 req-9ec528a6-32c8-49de-b098-ad9e36af2705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Received event network-vif-unplugged-31fc970a-7d64-4540-880f-6d5aee7d129a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:11 np0005558241 nova_compute[248510]: 2025-12-13 08:20:11.142 248514 DEBUG oslo_concurrency.lockutils [req-7d754035-3d66-46af-b7a4-540a203956d1 req-9ec528a6-32c8-49de-b098-ad9e36af2705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:11 np0005558241 nova_compute[248510]: 2025-12-13 08:20:11.142 248514 DEBUG oslo_concurrency.lockutils [req-7d754035-3d66-46af-b7a4-540a203956d1 req-9ec528a6-32c8-49de-b098-ad9e36af2705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:11 np0005558241 nova_compute[248510]: 2025-12-13 08:20:11.142 248514 DEBUG oslo_concurrency.lockutils [req-7d754035-3d66-46af-b7a4-540a203956d1 req-9ec528a6-32c8-49de-b098-ad9e36af2705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:11 np0005558241 nova_compute[248510]: 2025-12-13 08:20:11.142 248514 DEBUG nova.compute.manager [req-7d754035-3d66-46af-b7a4-540a203956d1 req-9ec528a6-32c8-49de-b098-ad9e36af2705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] No waiting events found dispatching network-vif-unplugged-31fc970a-7d64-4540-880f-6d5aee7d129a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:20:11 np0005558241 nova_compute[248510]: 2025-12-13 08:20:11.143 248514 DEBUG nova.compute.manager [req-7d754035-3d66-46af-b7a4-540a203956d1 req-9ec528a6-32c8-49de-b098-ad9e36af2705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Received event network-vif-unplugged-31fc970a-7d64-4540-880f-6d5aee7d129a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:20:11 np0005558241 nova_compute[248510]: 2025-12-13 08:20:11.145 248514 INFO nova.compute.manager [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Rebuilding instance#033[00m
Dec 13 03:20:11 np0005558241 nova_compute[248510]: 2025-12-13 08:20:11.445 248514 DEBUG nova.objects.instance [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'trusted_certs' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:20:11 np0005558241 nova_compute[248510]: 2025-12-13 08:20:11.593 248514 DEBUG nova.compute.manager [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:20:11 np0005558241 nova_compute[248510]: 2025-12-13 08:20:11.789 248514 DEBUG nova.objects.instance [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'pci_requests' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:20:11 np0005558241 nova_compute[248510]: 2025-12-13 08:20:11.808 248514 DEBUG nova.objects.instance [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'pci_devices' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:20:11 np0005558241 nova_compute[248510]: 2025-12-13 08:20:11.858 248514 DEBUG nova.objects.instance [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'resources' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:20:11 np0005558241 nova_compute[248510]: 2025-12-13 08:20:11.952 248514 DEBUG nova.objects.instance [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'migration_context' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:20:11 np0005558241 nova_compute[248510]: 2025-12-13 08:20:11.992 248514 DEBUG nova.objects.instance [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec 13 03:20:11 np0005558241 nova_compute[248510]: 2025-12-13 08:20:11.997 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec 13 03:20:12 np0005558241 nova_compute[248510]: 2025-12-13 08:20:12.516 248514 INFO nova.virt.libvirt.driver [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Deleting instance files /var/lib/nova/instances/2f398188-2df1-4c96-a224-c66efd7383a0_del#033[00m
Dec 13 03:20:12 np0005558241 nova_compute[248510]: 2025-12-13 08:20:12.517 248514 INFO nova.virt.libvirt.driver [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Deletion of /var/lib/nova/instances/2f398188-2df1-4c96-a224-c66efd7383a0_del complete#033[00m
Dec 13 03:20:12 np0005558241 nova_compute[248510]: 2025-12-13 08:20:12.754 248514 INFO nova.compute.manager [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Took 3.47 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:20:12 np0005558241 nova_compute[248510]: 2025-12-13 08:20:12.755 248514 DEBUG oslo.service.loopingcall [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:20:12 np0005558241 nova_compute[248510]: 2025-12-13 08:20:12.755 248514 DEBUG nova.compute.manager [-] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:20:12 np0005558241 nova_compute[248510]: 2025-12-13 08:20:12.756 248514 DEBUG nova.network.neutron [-] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:20:12 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 13 03:20:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1625: 321 pgs: 321 active+clean; 484 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 448 KiB/s wr, 235 op/s
Dec 13 03:20:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:20:13 np0005558241 nova_compute[248510]: 2025-12-13 08:20:13.628 248514 DEBUG nova.compute.manager [req-f7fa9fe7-d463-4435-8373-f97c264e1c33 req-f315c783-0a44-4baa-88f6-942e13482d69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Received event network-vif-plugged-31fc970a-7d64-4540-880f-6d5aee7d129a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:13 np0005558241 nova_compute[248510]: 2025-12-13 08:20:13.628 248514 DEBUG oslo_concurrency.lockutils [req-f7fa9fe7-d463-4435-8373-f97c264e1c33 req-f315c783-0a44-4baa-88f6-942e13482d69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:13 np0005558241 nova_compute[248510]: 2025-12-13 08:20:13.629 248514 DEBUG oslo_concurrency.lockutils [req-f7fa9fe7-d463-4435-8373-f97c264e1c33 req-f315c783-0a44-4baa-88f6-942e13482d69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:13 np0005558241 nova_compute[248510]: 2025-12-13 08:20:13.629 248514 DEBUG oslo_concurrency.lockutils [req-f7fa9fe7-d463-4435-8373-f97c264e1c33 req-f315c783-0a44-4baa-88f6-942e13482d69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:13 np0005558241 nova_compute[248510]: 2025-12-13 08:20:13.629 248514 DEBUG nova.compute.manager [req-f7fa9fe7-d463-4435-8373-f97c264e1c33 req-f315c783-0a44-4baa-88f6-942e13482d69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] No waiting events found dispatching network-vif-plugged-31fc970a-7d64-4540-880f-6d5aee7d129a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:20:13 np0005558241 nova_compute[248510]: 2025-12-13 08:20:13.629 248514 WARNING nova.compute.manager [req-f7fa9fe7-d463-4435-8373-f97c264e1c33 req-f315c783-0a44-4baa-88f6-942e13482d69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Received unexpected event network-vif-plugged-31fc970a-7d64-4540-880f-6d5aee7d129a for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:20:13 np0005558241 nova_compute[248510]: 2025-12-13 08:20:13.712 248514 DEBUG nova.network.neutron [-] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:20:13 np0005558241 nova_compute[248510]: 2025-12-13 08:20:13.923 248514 INFO nova.compute.manager [-] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Took 1.17 seconds to deallocate network for instance.#033[00m
Dec 13 03:20:14 np0005558241 nova_compute[248510]: 2025-12-13 08:20:14.260 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:14 np0005558241 nova_compute[248510]: 2025-12-13 08:20:14.685 248514 DEBUG oslo_concurrency.lockutils [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:14 np0005558241 nova_compute[248510]: 2025-12-13 08:20:14.686 248514 DEBUG oslo_concurrency.lockutils [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:14 np0005558241 nova_compute[248510]: 2025-12-13 08:20:14.848 248514 DEBUG oslo_concurrency.processutils [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:20:14 np0005558241 nova_compute[248510]: 2025-12-13 08:20:14.889 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1626: 321 pgs: 321 active+clean; 467 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 1.2 MiB/s wr, 231 op/s
Dec 13 03:20:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:20:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1292101871' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:20:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:20:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1292101871' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:20:15 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:15Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d2:65:9d 10.100.0.9
Dec 13 03:20:15 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:15Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d2:65:9d 10.100.0.9
Dec 13 03:20:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:20:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/545530442' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:20:15 np0005558241 nova_compute[248510]: 2025-12-13 08:20:15.537 248514 DEBUG oslo_concurrency.processutils [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:20:15 np0005558241 nova_compute[248510]: 2025-12-13 08:20:15.551 248514 DEBUG nova.compute.provider_tree [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:20:15 np0005558241 nova_compute[248510]: 2025-12-13 08:20:15.952 248514 DEBUG nova.scheduler.client.report [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:20:16 np0005558241 nova_compute[248510]: 2025-12-13 08:20:16.102 248514 DEBUG oslo_concurrency.lockutils [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.416s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:16 np0005558241 nova_compute[248510]: 2025-12-13 08:20:16.371 248514 INFO nova.scheduler.client.report [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Deleted allocations for instance 2f398188-2df1-4c96-a224-c66efd7383a0#033[00m
Dec 13 03:20:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:20:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:20:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:20:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:20:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:20:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:20:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:20:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:20:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:20:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:20:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:20:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:20:16 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:20:16 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:20:16 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:20:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1627: 321 pgs: 321 active+clean; 467 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.2 MiB/s wr, 170 op/s
Dec 13 03:20:16 np0005558241 nova_compute[248510]: 2025-12-13 08:20:16.995 248514 DEBUG nova.compute.manager [req-4bef0fd0-13d9-46f0-afa9-da05e1563a0c req-e011f728-aafd-446b-b54a-96e775e5882c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Received event network-vif-deleted-31fc970a-7d64-4540-880f-6d5aee7d129a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:17 np0005558241 podman[276888]: 2025-12-13 08:20:17.039492964 +0000 UTC m=+0.061165828 container create a8a36d495b2c8314b66f30508c1e7bcb876ae7db07766d34d1b9c4fadc481561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sinoussi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:20:17 np0005558241 systemd[1]: Started libpod-conmon-a8a36d495b2c8314b66f30508c1e7bcb876ae7db07766d34d1b9c4fadc481561.scope.
Dec 13 03:20:17 np0005558241 podman[276888]: 2025-12-13 08:20:17.00806064 +0000 UTC m=+0.029733524 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:20:17 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:20:17 np0005558241 podman[276888]: 2025-12-13 08:20:17.167913747 +0000 UTC m=+0.189586631 container init a8a36d495b2c8314b66f30508c1e7bcb876ae7db07766d34d1b9c4fadc481561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 03:20:17 np0005558241 podman[276888]: 2025-12-13 08:20:17.180267171 +0000 UTC m=+0.201940025 container start a8a36d495b2c8314b66f30508c1e7bcb876ae7db07766d34d1b9c4fadc481561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:20:17 np0005558241 podman[276888]: 2025-12-13 08:20:17.186430093 +0000 UTC m=+0.208102957 container attach a8a36d495b2c8314b66f30508c1e7bcb876ae7db07766d34d1b9c4fadc481561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sinoussi, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 03:20:17 np0005558241 festive_sinoussi[276905]: 167 167
Dec 13 03:20:17 np0005558241 systemd[1]: libpod-a8a36d495b2c8314b66f30508c1e7bcb876ae7db07766d34d1b9c4fadc481561.scope: Deactivated successfully.
Dec 13 03:20:17 np0005558241 conmon[276905]: conmon a8a36d495b2c8314b66f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8a36d495b2c8314b66f30508c1e7bcb876ae7db07766d34d1b9c4fadc481561.scope/container/memory.events
Dec 13 03:20:17 np0005558241 podman[276888]: 2025-12-13 08:20:17.193800615 +0000 UTC m=+0.215473499 container died a8a36d495b2c8314b66f30508c1e7bcb876ae7db07766d34d1b9c4fadc481561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:20:17 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ceb34b875d3155d8631ce5754a450e1c1c3d6c555a002dbacb29e450b82a603e-merged.mount: Deactivated successfully.
Dec 13 03:20:17 np0005558241 nova_compute[248510]: 2025-12-13 08:20:17.253 248514 DEBUG oslo_concurrency.lockutils [None req-34140998-53ef-4bb2-8f23-4d2ef09df32d 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "2f398188-2df1-4c96-a224-c66efd7383a0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:17 np0005558241 podman[276888]: 2025-12-13 08:20:17.264280361 +0000 UTC m=+0.285953225 container remove a8a36d495b2c8314b66f30508c1e7bcb876ae7db07766d34d1b9c4fadc481561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 03:20:17 np0005558241 systemd[1]: libpod-conmon-a8a36d495b2c8314b66f30508c1e7bcb876ae7db07766d34d1b9c4fadc481561.scope: Deactivated successfully.
Dec 13 03:20:17 np0005558241 podman[276929]: 2025-12-13 08:20:17.526609962 +0000 UTC m=+0.064694264 container create 788b23c47a87cea7bbfb3285cf8cd83ffc49a3bb4bede6db29f208d875920968 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 03:20:17 np0005558241 systemd[1]: Started libpod-conmon-788b23c47a87cea7bbfb3285cf8cd83ffc49a3bb4bede6db29f208d875920968.scope.
Dec 13 03:20:17 np0005558241 podman[276929]: 2025-12-13 08:20:17.49729189 +0000 UTC m=+0.035376222 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:20:17 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:20:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f63c987a6c88d6c8ac8b29c8958444c8b024b379d8ab71ab0fb910f1dd5d4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:20:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f63c987a6c88d6c8ac8b29c8958444c8b024b379d8ab71ab0fb910f1dd5d4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:20:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f63c987a6c88d6c8ac8b29c8958444c8b024b379d8ab71ab0fb910f1dd5d4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:20:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f63c987a6c88d6c8ac8b29c8958444c8b024b379d8ab71ab0fb910f1dd5d4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:20:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f63c987a6c88d6c8ac8b29c8958444c8b024b379d8ab71ab0fb910f1dd5d4c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:20:17 np0005558241 podman[276929]: 2025-12-13 08:20:17.649684264 +0000 UTC m=+0.187768596 container init 788b23c47a87cea7bbfb3285cf8cd83ffc49a3bb4bede6db29f208d875920968 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jennings, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 03:20:17 np0005558241 podman[276929]: 2025-12-13 08:20:17.662784316 +0000 UTC m=+0.200868618 container start 788b23c47a87cea7bbfb3285cf8cd83ffc49a3bb4bede6db29f208d875920968 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:20:17 np0005558241 podman[276929]: 2025-12-13 08:20:17.668371114 +0000 UTC m=+0.206455446 container attach 788b23c47a87cea7bbfb3285cf8cd83ffc49a3bb4bede6db29f208d875920968 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:20:18 np0005558241 dreamy_jennings[276946]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:20:18 np0005558241 dreamy_jennings[276946]: --> All data devices are unavailable
Dec 13 03:20:18 np0005558241 systemd[1]: libpod-788b23c47a87cea7bbfb3285cf8cd83ffc49a3bb4bede6db29f208d875920968.scope: Deactivated successfully.
Dec 13 03:20:18 np0005558241 podman[276929]: 2025-12-13 08:20:18.281343121 +0000 UTC m=+0.819427423 container died 788b23c47a87cea7bbfb3285cf8cd83ffc49a3bb4bede6db29f208d875920968 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:20:18 np0005558241 systemd[1]: var-lib-containers-storage-overlay-78f63c987a6c88d6c8ac8b29c8958444c8b024b379d8ab71ab0fb910f1dd5d4c-merged.mount: Deactivated successfully.
Dec 13 03:20:18 np0005558241 podman[276929]: 2025-12-13 08:20:18.36533317 +0000 UTC m=+0.903417482 container remove 788b23c47a87cea7bbfb3285cf8cd83ffc49a3bb4bede6db29f208d875920968 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_jennings, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:20:18 np0005558241 systemd[1]: libpod-conmon-788b23c47a87cea7bbfb3285cf8cd83ffc49a3bb4bede6db29f208d875920968.scope: Deactivated successfully.
Dec 13 03:20:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:20:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1628: 321 pgs: 321 active+clean; 484 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.1 MiB/s wr, 218 op/s
Dec 13 03:20:18 np0005558241 podman[277040]: 2025-12-13 08:20:18.994505927 +0000 UTC m=+0.089798842 container create 4fad02fe65874bfd533cb3d255d085f06a8d31fa5bba1988fd8d0504410a78a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:20:19 np0005558241 podman[277040]: 2025-12-13 08:20:18.939761409 +0000 UTC m=+0.035054344 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:20:19 np0005558241 systemd[1]: Started libpod-conmon-4fad02fe65874bfd533cb3d255d085f06a8d31fa5bba1988fd8d0504410a78a1.scope.
Dec 13 03:20:19 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:20:19 np0005558241 podman[277040]: 2025-12-13 08:20:19.190585637 +0000 UTC m=+0.285878582 container init 4fad02fe65874bfd533cb3d255d085f06a8d31fa5bba1988fd8d0504410a78a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:20:19 np0005558241 podman[277040]: 2025-12-13 08:20:19.203976197 +0000 UTC m=+0.299269122 container start 4fad02fe65874bfd533cb3d255d085f06a8d31fa5bba1988fd8d0504410a78a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_easley, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:20:19 np0005558241 friendly_easley[277056]: 167 167
Dec 13 03:20:19 np0005558241 systemd[1]: libpod-4fad02fe65874bfd533cb3d255d085f06a8d31fa5bba1988fd8d0504410a78a1.scope: Deactivated successfully.
Dec 13 03:20:19 np0005558241 podman[277040]: 2025-12-13 08:20:19.216808383 +0000 UTC m=+0.312101408 container attach 4fad02fe65874bfd533cb3d255d085f06a8d31fa5bba1988fd8d0504410a78a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:20:19 np0005558241 podman[277040]: 2025-12-13 08:20:19.217306405 +0000 UTC m=+0.312599320 container died 4fad02fe65874bfd533cb3d255d085f06a8d31fa5bba1988fd8d0504410a78a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:20:19 np0005558241 nova_compute[248510]: 2025-12-13 08:20:19.262 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:19 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2ee2ec5b129aee5e6d6525c1ac22f60ccf901ad38b16a79c4cf9f59dbdf00d97-merged.mount: Deactivated successfully.
Dec 13 03:20:19 np0005558241 nova_compute[248510]: 2025-12-13 08:20:19.893 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004158767800300997 of space, bias 1.0, pg target 1.247630340090299 quantized to 32 (current 32)
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006668535365314388 of space, bias 1.0, pg target 0.1993892074229002 quantized to 32 (current 32)
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.373449282960717e-07 of space, bias 4.0, pg target 0.0010014645342421018 quantized to 16 (current 32)
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Dec 13 03:20:20 np0005558241 podman[277040]: 2025-12-13 08:20:20.824194734 +0000 UTC m=+1.919487659 container remove 4fad02fe65874bfd533cb3d255d085f06a8d31fa5bba1988fd8d0504410a78a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_easley, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:20:20 np0005558241 systemd[1]: libpod-conmon-4fad02fe65874bfd533cb3d255d085f06a8d31fa5bba1988fd8d0504410a78a1.scope: Deactivated successfully.
Dec 13 03:20:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1629: 321 pgs: 321 active+clean; 484 MiB data, 557 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 97 op/s
Dec 13 03:20:21 np0005558241 podman[277081]: 2025-12-13 08:20:21.04370289 +0000 UTC m=+0.035669739 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:20:21 np0005558241 podman[277081]: 2025-12-13 08:20:21.510445487 +0000 UTC m=+0.502412296 container create 7be5bddf90ade059c648296c2b55076e05df5614d812aeb93f44f09cb37bfb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:20:21 np0005558241 systemd[1]: Started libpod-conmon-7be5bddf90ade059c648296c2b55076e05df5614d812aeb93f44f09cb37bfb77.scope.
Dec 13 03:20:21 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:20:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f0fafcb58b8d8f494d4ba15658f920cf7bfa4d1a2562a4f78e9c2184cf83cb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:20:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f0fafcb58b8d8f494d4ba15658f920cf7bfa4d1a2562a4f78e9c2184cf83cb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:20:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f0fafcb58b8d8f494d4ba15658f920cf7bfa4d1a2562a4f78e9c2184cf83cb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:20:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f0fafcb58b8d8f494d4ba15658f920cf7bfa4d1a2562a4f78e9c2184cf83cb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:20:21 np0005558241 podman[277081]: 2025-12-13 08:20:21.937039914 +0000 UTC m=+0.929006763 container init 7be5bddf90ade059c648296c2b55076e05df5614d812aeb93f44f09cb37bfb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_greider, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:20:21 np0005558241 podman[277081]: 2025-12-13 08:20:21.946752973 +0000 UTC m=+0.938719792 container start 7be5bddf90ade059c648296c2b55076e05df5614d812aeb93f44f09cb37bfb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_greider, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 03:20:21 np0005558241 podman[277081]: 2025-12-13 08:20:21.961354553 +0000 UTC m=+0.953321392 container attach 7be5bddf90ade059c648296c2b55076e05df5614d812aeb93f44f09cb37bfb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_greider, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 03:20:22 np0005558241 nova_compute[248510]: 2025-12-13 08:20:22.076 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 03:20:22 np0005558241 romantic_greider[277098]: {
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:    "0": [
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:        {
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "devices": [
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "/dev/loop3"
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            ],
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "lv_name": "ceph_lv0",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "lv_size": "21470642176",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "name": "ceph_lv0",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "tags": {
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.cluster_name": "ceph",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.crush_device_class": "",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.encrypted": "0",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.objectstore": "bluestore",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.osd_id": "0",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.type": "block",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.vdo": "0",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.with_tpm": "0"
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            },
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "type": "block",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "vg_name": "ceph_vg0"
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:        }
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:    ],
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:    "1": [
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:        {
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "devices": [
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "/dev/loop4"
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            ],
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "lv_name": "ceph_lv1",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "lv_size": "21470642176",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "name": "ceph_lv1",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "tags": {
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.cluster_name": "ceph",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.crush_device_class": "",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.encrypted": "0",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.objectstore": "bluestore",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.osd_id": "1",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.type": "block",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.vdo": "0",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.with_tpm": "0"
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            },
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "type": "block",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "vg_name": "ceph_vg1"
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:        }
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:    ],
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:    "2": [
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:        {
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "devices": [
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "/dev/loop5"
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            ],
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "lv_name": "ceph_lv2",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "lv_size": "21470642176",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "name": "ceph_lv2",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "tags": {
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.cluster_name": "ceph",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.crush_device_class": "",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.encrypted": "0",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.objectstore": "bluestore",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.osd_id": "2",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.type": "block",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.vdo": "0",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:                "ceph.with_tpm": "0"
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            },
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "type": "block",
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:            "vg_name": "ceph_vg2"
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:        }
Dec 13 03:20:22 np0005558241 romantic_greider[277098]:    ]
Dec 13 03:20:22 np0005558241 romantic_greider[277098]: }
Dec 13 03:20:22 np0005558241 systemd[1]: libpod-7be5bddf90ade059c648296c2b55076e05df5614d812aeb93f44f09cb37bfb77.scope: Deactivated successfully.
Dec 13 03:20:22 np0005558241 podman[277081]: 2025-12-13 08:20:22.345215618 +0000 UTC m=+1.337182437 container died 7be5bddf90ade059c648296c2b55076e05df5614d812aeb93f44f09cb37bfb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_greider, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 03:20:22 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9f0fafcb58b8d8f494d4ba15658f920cf7bfa4d1a2562a4f78e9c2184cf83cb2-merged.mount: Deactivated successfully.
Dec 13 03:20:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:22Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:0a:44 10.100.0.14
Dec 13 03:20:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:22Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:0a:44 10.100.0.14
Dec 13 03:20:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1630: 321 pgs: 321 active+clean; 488 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 375 KiB/s rd, 2.7 MiB/s wr, 104 op/s
Dec 13 03:20:23 np0005558241 podman[277081]: 2025-12-13 08:20:23.426397428 +0000 UTC m=+2.418364247 container remove 7be5bddf90ade059c648296c2b55076e05df5614d812aeb93f44f09cb37bfb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_greider, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 03:20:23 np0005558241 systemd[1]: libpod-conmon-7be5bddf90ade059c648296c2b55076e05df5614d812aeb93f44f09cb37bfb77.scope: Deactivated successfully.
Dec 13 03:20:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:20:24 np0005558241 podman[277182]: 2025-12-13 08:20:23.90594499 +0000 UTC m=+0.026057913 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:20:24 np0005558241 nova_compute[248510]: 2025-12-13 08:20:24.265 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:20:24 np0005558241 podman[277182]: 2025-12-13 08:20:24.500585848 +0000 UTC m=+0.620698761 container create e926299e6829dc2188c198d742ef337fa94eb520fc9dd8ba9f386e2f645c45e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:20:24 np0005558241 nova_compute[248510]: 2025-12-13 08:20:24.740 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614009.7389936, 2f398188-2df1-4c96-a224-c66efd7383a0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:20:24 np0005558241 nova_compute[248510]: 2025-12-13 08:20:24.742 248514 INFO nova.compute.manager [-] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] VM Stopped (Lifecycle Event)
Dec 13 03:20:24 np0005558241 nova_compute[248510]: 2025-12-13 08:20:24.895 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:20:24 np0005558241 nova_compute[248510]: 2025-12-13 08:20:24.920 248514 DEBUG nova.compute.manager [None req-018492a6-bd0e-468a-915c-3a5f2f3ed01e - - - - - -] [instance: 2f398188-2df1-4c96-a224-c66efd7383a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:20:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1631: 321 pgs: 321 active+clean; 510 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 615 KiB/s rd, 4.1 MiB/s wr, 128 op/s
Dec 13 03:20:25 np0005558241 systemd[1]: Started libpod-conmon-e926299e6829dc2188c198d742ef337fa94eb520fc9dd8ba9f386e2f645c45e2.scope.
Dec 13 03:20:25 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:20:25 np0005558241 podman[277182]: 2025-12-13 08:20:25.518420357 +0000 UTC m=+1.638533310 container init e926299e6829dc2188c198d742ef337fa94eb520fc9dd8ba9f386e2f645c45e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:20:25 np0005558241 podman[277182]: 2025-12-13 08:20:25.528536647 +0000 UTC m=+1.648649550 container start e926299e6829dc2188c198d742ef337fa94eb520fc9dd8ba9f386e2f645c45e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_tharp, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:20:25 np0005558241 systemd[1]: libpod-e926299e6829dc2188c198d742ef337fa94eb520fc9dd8ba9f386e2f645c45e2.scope: Deactivated successfully.
Dec 13 03:20:25 np0005558241 practical_tharp[277198]: 167 167
Dec 13 03:20:26 np0005558241 podman[277182]: 2025-12-13 08:20:26.244752488 +0000 UTC m=+2.364865391 container attach e926299e6829dc2188c198d742ef337fa94eb520fc9dd8ba9f386e2f645c45e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_tharp, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 03:20:26 np0005558241 podman[277182]: 2025-12-13 08:20:26.245745943 +0000 UTC m=+2.365858876 container died e926299e6829dc2188c198d742ef337fa94eb520fc9dd8ba9f386e2f645c45e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_tharp, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:20:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1632: 321 pgs: 321 active+clean; 510 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 472 KiB/s rd, 3.0 MiB/s wr, 92 op/s
Dec 13 03:20:27 np0005558241 systemd[1]: var-lib-containers-storage-overlay-80a271579f04e009cc46ecb818c869953850ff49629f71e501ec8f6e28b96fae-merged.mount: Deactivated successfully.
Dec 13 03:20:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:20:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1633: 321 pgs: 321 active+clean; 510 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 479 KiB/s rd, 3.1 MiB/s wr, 99 op/s
Dec 13 03:20:29 np0005558241 nova_compute[248510]: 2025-12-13 08:20:29.314 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:20:29 np0005558241 nova_compute[248510]: 2025-12-13 08:20:29.583 248514 DEBUG oslo_concurrency.lockutils [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "interface-ea00b062-2638-45e3-91fb-24240ef8912f-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:20:29 np0005558241 nova_compute[248510]: 2025-12-13 08:20:29.584 248514 DEBUG oslo_concurrency.lockutils [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-ea00b062-2638-45e3-91fb-24240ef8912f-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:20:29 np0005558241 nova_compute[248510]: 2025-12-13 08:20:29.584 248514 DEBUG nova.objects.instance [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'flavor' on Instance uuid ea00b062-2638-45e3-91fb-24240ef8912f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:20:29 np0005558241 nova_compute[248510]: 2025-12-13 08:20:29.898 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:20:30 np0005558241 podman[277182]: 2025-12-13 08:20:30.116679436 +0000 UTC m=+6.236792339 container remove e926299e6829dc2188c198d742ef337fa94eb520fc9dd8ba9f386e2f645c45e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_tharp, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:20:30 np0005558241 systemd[1]: libpod-conmon-e926299e6829dc2188c198d742ef337fa94eb520fc9dd8ba9f386e2f645c45e2.scope: Deactivated successfully.
Dec 13 03:20:30 np0005558241 nova_compute[248510]: 2025-12-13 08:20:30.397 248514 DEBUG nova.objects.instance [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'pci_requests' on Instance uuid ea00b062-2638-45e3-91fb-24240ef8912f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:20:30 np0005558241 podman[277222]: 2025-12-13 08:20:30.338062119 +0000 UTC m=+0.035421133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:20:30 np0005558241 nova_compute[248510]: 2025-12-13 08:20:30.606 248514 DEBUG nova.network.neutron [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:20:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1634: 321 pgs: 321 active+clean; 510 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Dec 13 03:20:30 np0005558241 podman[277222]: 2025-12-13 08:20:30.971603124 +0000 UTC m=+0.668962108 container create dffa0a773462153fb8aac46a5d2ddd8e3cec75da340484ad7c60eac1276dc627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 03:20:31 np0005558241 nova_compute[248510]: 2025-12-13 08:20:31.186 248514 DEBUG nova.policy [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee4333dff699428ca3ae5201855ab430', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea37c93820f34312bc386ea952ecd94f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:20:31 np0005558241 systemd[1]: Started libpod-conmon-dffa0a773462153fb8aac46a5d2ddd8e3cec75da340484ad7c60eac1276dc627.scope.
Dec 13 03:20:31 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:20:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e43d633516d9e84ed5f77d92fbedc4e7a1f1f979ce8ec827c0e269788629b0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:20:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e43d633516d9e84ed5f77d92fbedc4e7a1f1f979ce8ec827c0e269788629b0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:20:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e43d633516d9e84ed5f77d92fbedc4e7a1f1f979ce8ec827c0e269788629b0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:20:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e43d633516d9e84ed5f77d92fbedc4e7a1f1f979ce8ec827c0e269788629b0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:20:31 np0005558241 podman[277222]: 2025-12-13 08:20:31.882038609 +0000 UTC m=+1.579397623 container init dffa0a773462153fb8aac46a5d2ddd8e3cec75da340484ad7c60eac1276dc627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jackson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 03:20:31 np0005558241 podman[277222]: 2025-12-13 08:20:31.889990465 +0000 UTC m=+1.587349449 container start dffa0a773462153fb8aac46a5d2ddd8e3cec75da340484ad7c60eac1276dc627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jackson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 03:20:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Dec 13 03:20:32 np0005558241 podman[277222]: 2025-12-13 08:20:32.48131808 +0000 UTC m=+2.178677064 container attach dffa0a773462153fb8aac46a5d2ddd8e3cec75da340484ad7c60eac1276dc627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jackson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:20:32 np0005558241 nova_compute[248510]: 2025-12-13 08:20:32.605 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:20:32 np0005558241 lvm[277318]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:20:32 np0005558241 lvm[277318]: VG ceph_vg1 finished
Dec 13 03:20:32 np0005558241 lvm[277317]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:20:32 np0005558241 lvm[277317]: VG ceph_vg0 finished
Dec 13 03:20:32 np0005558241 lvm[277320]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:20:32 np0005558241 lvm[277320]: VG ceph_vg2 finished
Dec 13 03:20:32 np0005558241 nova_compute[248510]: 2025-12-13 08:20:32.783 248514 DEBUG nova.network.neutron [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Successfully created port: 2508d856-adda-4895-ac2f-24adfa6c6b29 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:20:32 np0005558241 recursing_jackson[277238]: {}
Dec 13 03:20:32 np0005558241 systemd[1]: libpod-dffa0a773462153fb8aac46a5d2ddd8e3cec75da340484ad7c60eac1276dc627.scope: Deactivated successfully.
Dec 13 03:20:32 np0005558241 podman[277222]: 2025-12-13 08:20:32.877900947 +0000 UTC m=+2.575259931 container died dffa0a773462153fb8aac46a5d2ddd8e3cec75da340484ad7c60eac1276dc627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 03:20:32 np0005558241 systemd[1]: libpod-dffa0a773462153fb8aac46a5d2ddd8e3cec75da340484ad7c60eac1276dc627.scope: Consumed 1.382s CPU time.
Dec 13 03:20:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Dec 13 03:20:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1635: 321 pgs: 321 active+clean; 510 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 276 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 13 03:20:33 np0005558241 nova_compute[248510]: 2025-12-13 08:20:33.135 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec 13 03:20:33 np0005558241 nova_compute[248510]: 2025-12-13 08:20:33.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:20:33 np0005558241 nova_compute[248510]: 2025-12-13 08:20:33.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:20:33 np0005558241 nova_compute[248510]: 2025-12-13 08:20:33.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:20:34 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Dec 13 03:20:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:20:34 np0005558241 nova_compute[248510]: 2025-12-13 08:20:34.317 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3e43d633516d9e84ed5f77d92fbedc4e7a1f1f979ce8ec827c0e269788629b0b-merged.mount: Deactivated successfully.
Dec 13 03:20:34 np0005558241 nova_compute[248510]: 2025-12-13 08:20:34.903 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1637: 321 pgs: 321 active+clean; 517 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 100 KiB/s wr, 26 op/s
Dec 13 03:20:35 np0005558241 podman[277222]: 2025-12-13 08:20:35.550713871 +0000 UTC m=+5.248072865 container remove dffa0a773462153fb8aac46a5d2ddd8e3cec75da340484ad7c60eac1276dc627 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:20:35 np0005558241 systemd[1]: libpod-conmon-dffa0a773462153fb8aac46a5d2ddd8e3cec75da340484ad7c60eac1276dc627.scope: Deactivated successfully.
Dec 13 03:20:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:20:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:20:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:20:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.136 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-6b1e2b65-1398-4af8-9e8a-a8b99630eef8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.137 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-6b1e2b65-1398-4af8-9e8a-a8b99630eef8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.137 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.307 248514 DEBUG oslo_concurrency.lockutils [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "e686bddc-956e-4714-868c-9a29dda243d2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.307 248514 DEBUG oslo_concurrency.lockutils [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "e686bddc-956e-4714-868c-9a29dda243d2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.307 248514 DEBUG oslo_concurrency.lockutils [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "e686bddc-956e-4714-868c-9a29dda243d2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.307 248514 DEBUG oslo_concurrency.lockutils [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "e686bddc-956e-4714-868c-9a29dda243d2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.308 248514 DEBUG oslo_concurrency.lockutils [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "e686bddc-956e-4714-868c-9a29dda243d2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.309 248514 INFO nova.compute.manager [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Terminating instance#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.310 248514 DEBUG nova.compute.manager [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:20:36 np0005558241 kernel: tapc204c1c8-ba (unregistering): left promiscuous mode
Dec 13 03:20:36 np0005558241 NetworkManager[50376]: <info>  [1765614036.5950] device (tapc204c1c8-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:20:36 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:36Z|00123|binding|INFO|Releasing lport c204c1c8-ba1e-45b9-87aa-dac2170952ec from this chassis (sb_readonly=0)
Dec 13 03:20:36 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:36Z|00124|binding|INFO|Setting lport c204c1c8-ba1e-45b9-87aa-dac2170952ec down in Southbound
Dec 13 03:20:36 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:36Z|00125|binding|INFO|Removing iface tapc204c1c8-ba ovn-installed in OVS
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.604 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.624 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.650 248514 DEBUG nova.network.neutron [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Successfully updated port: 2508d856-adda-4895-ac2f-24adfa6c6b29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:20:36 np0005558241 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Deactivated successfully.
Dec 13 03:20:36 np0005558241 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Consumed 20.236s CPU time.
Dec 13 03:20:36 np0005558241 systemd-machined[210538]: Machine qemu-23-instance-00000015 terminated.
Dec 13 03:20:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:36.680 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:37:df 10.100.0.8'], port_security=['fa:16:3e:67:37:df 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e686bddc-956e-4714-868c-9a29dda243d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-675d1cc9-9f63-41fd-81b7-761155b686e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6abc0a5e3b44a75bd19cd7df807b790', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2d00a2f3-cbea-40c0-a88e-6b16688a4860 dc223e9e-0c22-4711-a3a9-8439d1819b54 eb0d7b10-55c9-4c5e-bc27-5e77df245451', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ccd165e-0e0e-4b3a-8747-912025213604, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c204c1c8-ba1e-45b9-87aa-dac2170952ec) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:20:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:36.682 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c204c1c8-ba1e-45b9-87aa-dac2170952ec in datapath 675d1cc9-9f63-41fd-81b7-761155b686e5 unbound from our chassis#033[00m
Dec 13 03:20:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:36.684 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 675d1cc9-9f63-41fd-81b7-761155b686e5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:20:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:36.685 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ebb15cd5-85ef-4a6c-ba7a-7b804050f990]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:36.686 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5 namespace which is not needed anymore#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.700 248514 DEBUG oslo_concurrency.lockutils [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-ea00b062-2638-45e3-91fb-24240ef8912f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.700 248514 DEBUG oslo_concurrency.lockutils [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-ea00b062-2638-45e3-91fb-24240ef8912f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.701 248514 DEBUG nova.network.neutron [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:20:36 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:20:36 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.747 248514 INFO nova.virt.libvirt.driver [-] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Instance destroyed successfully.#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.748 248514 DEBUG nova.objects.instance [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lazy-loading 'resources' on Instance uuid e686bddc-956e-4714-868c-9a29dda243d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.768 248514 DEBUG nova.virt.libvirt.vif [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:18:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1776725301',display_name='tempest-SecurityGroupsTestJSON-server-1776725301',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1776725301',id=21,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:18:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d6abc0a5e3b44a75bd19cd7df807b790',ramdisk_id='',reservation_id='r-f47be1x0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1306632779',owner_user_name='tempest-SecurityGroupsTestJSON-1306632779-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:18:44Z,user_data=None,user_id='1e46b9271df84074b6a832899ca5ee57',uuid=e686bddc-956e-4714-868c-9a29dda243d2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "address": "fa:16:3e:67:37:df", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc204c1c8-ba", "ovs_interfaceid": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.769 248514 DEBUG nova.network.os_vif_util [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Converting VIF {"id": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "address": "fa:16:3e:67:37:df", "network": {"id": "675d1cc9-9f63-41fd-81b7-761155b686e5", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1111524458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6abc0a5e3b44a75bd19cd7df807b790", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc204c1c8-ba", "ovs_interfaceid": "c204c1c8-ba1e-45b9-87aa-dac2170952ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.770 248514 DEBUG nova.network.os_vif_util [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:67:37:df,bridge_name='br-int',has_traffic_filtering=True,id=c204c1c8-ba1e-45b9-87aa-dac2170952ec,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc204c1c8-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.770 248514 DEBUG os_vif [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:37:df,bridge_name='br-int',has_traffic_filtering=True,id=c204c1c8-ba1e-45b9-87aa-dac2170952ec,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc204c1c8-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.772 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.773 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc204c1c8-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.775 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.777 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:36 np0005558241 nova_compute[248510]: 2025-12-13 08:20:36.780 248514 INFO os_vif [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:37:df,bridge_name='br-int',has_traffic_filtering=True,id=c204c1c8-ba1e-45b9-87aa-dac2170952ec,network=Network(675d1cc9-9f63-41fd-81b7-761155b686e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc204c1c8-ba')#033[00m
Dec 13 03:20:36 np0005558241 neutron-haproxy-ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5[274582]: [NOTICE]   (274586) : haproxy version is 2.8.14-c23fe91
Dec 13 03:20:36 np0005558241 neutron-haproxy-ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5[274582]: [NOTICE]   (274586) : path to executable is /usr/sbin/haproxy
Dec 13 03:20:36 np0005558241 neutron-haproxy-ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5[274582]: [WARNING]  (274586) : Exiting Master process...
Dec 13 03:20:36 np0005558241 neutron-haproxy-ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5[274582]: [WARNING]  (274586) : Exiting Master process...
Dec 13 03:20:36 np0005558241 neutron-haproxy-ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5[274582]: [ALERT]    (274586) : Current worker (274588) exited with code 143 (Terminated)
Dec 13 03:20:36 np0005558241 neutron-haproxy-ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5[274582]: [WARNING]  (274586) : All workers exited. Exiting... (0)
Dec 13 03:20:36 np0005558241 systemd[1]: libpod-574dd4e4dc0eaea738a4aa4155e54f17c8a5d3c3127835815b4ce93c3dc28872.scope: Deactivated successfully.
Dec 13 03:20:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1638: 321 pgs: 321 active+clean; 517 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 100 KiB/s wr, 26 op/s
Dec 13 03:20:36 np0005558241 podman[277402]: 2025-12-13 08:20:36.951944934 +0000 UTC m=+0.140001129 container died 574dd4e4dc0eaea738a4aa4155e54f17c8a5d3c3127835815b4ce93c3dc28872 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:20:37 np0005558241 nova_compute[248510]: 2025-12-13 08:20:37.198 248514 WARNING nova.network.neutron [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] 1ca92864-3b70-4794-9db1-fa08128cef92 already exists in list: networks containing: ['1ca92864-3b70-4794-9db1-fa08128cef92']. ignoring it#033[00m
Dec 13 03:20:37 np0005558241 systemd[1]: var-lib-containers-storage-overlay-21d42d157bdfe0e317613121c55a4528b7179660e6d506480bb0778cc8794c3a-merged.mount: Deactivated successfully.
Dec 13 03:20:37 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-574dd4e4dc0eaea738a4aa4155e54f17c8a5d3c3127835815b4ce93c3dc28872-userdata-shm.mount: Deactivated successfully.
Dec 13 03:20:37 np0005558241 podman[277402]: 2025-12-13 08:20:37.729390953 +0000 UTC m=+0.917447138 container cleanup 574dd4e4dc0eaea738a4aa4155e54f17c8a5d3c3127835815b4ce93c3dc28872 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:20:38 np0005558241 podman[277440]: 2025-12-13 08:20:38.617354044 +0000 UTC m=+0.862832313 container remove 574dd4e4dc0eaea738a4aa4155e54f17c8a5d3c3127835815b4ce93c3dc28872 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:20:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:38.627 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[142d3457-e5d4-47fe-9477-e52f88a434a7]: (4, ('Sat Dec 13 08:20:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5 (574dd4e4dc0eaea738a4aa4155e54f17c8a5d3c3127835815b4ce93c3dc28872)\n574dd4e4dc0eaea738a4aa4155e54f17c8a5d3c3127835815b4ce93c3dc28872\nSat Dec 13 08:20:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5 (574dd4e4dc0eaea738a4aa4155e54f17c8a5d3c3127835815b4ce93c3dc28872)\n574dd4e4dc0eaea738a4aa4155e54f17c8a5d3c3127835815b4ce93c3dc28872\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:38.630 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8c78c66c-0753-488a-b141-8b65c524cd05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:38.631 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap675d1cc9-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:38 np0005558241 nova_compute[248510]: 2025-12-13 08:20:38.633 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:38 np0005558241 kernel: tap675d1cc9-90: left promiscuous mode
Dec 13 03:20:38 np0005558241 nova_compute[248510]: 2025-12-13 08:20:38.651 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:38.655 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd4477cf-3af7-4c51-bff8-87e7c33d1336]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:38 np0005558241 systemd[1]: libpod-conmon-574dd4e4dc0eaea738a4aa4155e54f17c8a5d3c3127835815b4ce93c3dc28872.scope: Deactivated successfully.
Dec 13 03:20:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:38.674 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[650ad255-a1e3-4d94-bd08-f08908a16a78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:38.676 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[714980c7-9a12-4b2d-b3b3-b0f56c9feed1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:38.697 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[05b73627-5e45-40ac-bbf0-16e8d131ad54]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649986, 'reachable_time': 43182, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277468, 'error': None, 'target': 'ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:38 np0005558241 systemd[1]: run-netns-ovnmeta\x2d675d1cc9\x2d9f63\x2d41fd\x2d81b7\x2d761155b686e5.mount: Deactivated successfully.
Dec 13 03:20:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:38.704 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-675d1cc9-9f63-41fd-81b7-761155b686e5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:20:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:38.704 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[a095a4b5-84a5-458f-886e-b5917e9922d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:38 np0005558241 podman[277454]: 2025-12-13 08:20:38.760447979 +0000 UTC m=+0.078618067 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:20:38 np0005558241 podman[277456]: 2025-12-13 08:20:38.765334449 +0000 UTC m=+0.078836523 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:20:38 np0005558241 podman[277452]: 2025-12-13 08:20:38.795491712 +0000 UTC m=+0.120620842 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 03:20:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1639: 321 pgs: 321 active+clean; 449 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 71 KiB/s wr, 55 op/s
Dec 13 03:20:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:20:39 np0005558241 nova_compute[248510]: 2025-12-13 08:20:39.320 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:39 np0005558241 nova_compute[248510]: 2025-12-13 08:20:39.363 248514 DEBUG nova.compute.manager [req-56f9d509-df4f-4172-b484-5c22aea3c01a req-580a80cc-9e00-4672-a236-81b203d14a67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received event network-changed-2508d856-adda-4895-ac2f-24adfa6c6b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:39 np0005558241 nova_compute[248510]: 2025-12-13 08:20:39.364 248514 DEBUG nova.compute.manager [req-56f9d509-df4f-4172-b484-5c22aea3c01a req-580a80cc-9e00-4672-a236-81b203d14a67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Refreshing instance network info cache due to event network-changed-2508d856-adda-4895-ac2f-24adfa6c6b29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:20:39 np0005558241 nova_compute[248510]: 2025-12-13 08:20:39.364 248514 DEBUG oslo_concurrency.lockutils [req-56f9d509-df4f-4172-b484-5c22aea3c01a req-580a80cc-9e00-4672-a236-81b203d14a67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ea00b062-2638-45e3-91fb-24240ef8912f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:20:39 np0005558241 nova_compute[248510]: 2025-12-13 08:20:39.603 248514 INFO nova.virt.libvirt.driver [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Deleting instance files /var/lib/nova/instances/e686bddc-956e-4714-868c-9a29dda243d2_del#033[00m
Dec 13 03:20:39 np0005558241 nova_compute[248510]: 2025-12-13 08:20:39.604 248514 INFO nova.virt.libvirt.driver [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Deletion of /var/lib/nova/instances/e686bddc-956e-4714-868c-9a29dda243d2_del complete#033[00m
Dec 13 03:20:39 np0005558241 nova_compute[248510]: 2025-12-13 08:20:39.708 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Updating instance_info_cache with network_info: [{"id": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "address": "fa:16:3e:69:e5:d6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b46a3fe-06", "ovs_interfaceid": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:20:39 np0005558241 nova_compute[248510]: 2025-12-13 08:20:39.911 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-6b1e2b65-1398-4af8-9e8a-a8b99630eef8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:20:39 np0005558241 nova_compute[248510]: 2025-12-13 08:20:39.911 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:20:39 np0005558241 nova_compute[248510]: 2025-12-13 08:20:39.912 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:20:39 np0005558241 nova_compute[248510]: 2025-12-13 08:20:39.912 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:20:39 np0005558241 nova_compute[248510]: 2025-12-13 08:20:39.919 248514 INFO nova.compute.manager [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Took 3.61 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:20:39 np0005558241 nova_compute[248510]: 2025-12-13 08:20:39.920 248514 DEBUG oslo.service.loopingcall [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:20:39 np0005558241 nova_compute[248510]: 2025-12-13 08:20:39.920 248514 DEBUG nova.compute.manager [-] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:20:39 np0005558241 nova_compute[248510]: 2025-12-13 08:20:39.920 248514 DEBUG nova.network.neutron [-] [instance: e686bddc-956e-4714-868c-9a29dda243d2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:20:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:20:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:20:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:20:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:20:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:20:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:20:40 np0005558241 nova_compute[248510]: 2025-12-13 08:20:40.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:20:40 np0005558241 nova_compute[248510]: 2025-12-13 08:20:40.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:20:40 np0005558241 nova_compute[248510]: 2025-12-13 08:20:40.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:20:40 np0005558241 nova_compute[248510]: 2025-12-13 08:20:40.808 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:40 np0005558241 nova_compute[248510]: 2025-12-13 08:20:40.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:40 np0005558241 nova_compute[248510]: 2025-12-13 08:20:40.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:40 np0005558241 nova_compute[248510]: 2025-12-13 08:20:40.809 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:20:40 np0005558241 nova_compute[248510]: 2025-12-13 08:20:40.810 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:20:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1640: 321 pgs: 321 active+clean; 449 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 71 KiB/s wr, 55 op/s
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.304 248514 DEBUG nova.network.neutron [-] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.338 248514 INFO nova.compute.manager [-] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Took 1.42 seconds to deallocate network for instance.
Dec 13 03:20:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:20:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2223066073' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.381 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.395 248514 DEBUG oslo_concurrency.lockutils [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.396 248514 DEBUG oslo_concurrency.lockutils [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.520 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.521 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.528 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.528 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.534 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.535 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.539 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.539 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.541 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.542 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.715 248514 DEBUG nova.network.neutron [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Updating instance_info_cache with network_info: [{"id": "41d2792e-cb2f-464c-a974-4be557dde28b", "address": "fa:16:3e:d2:65:9d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d2792e-cb", "ovs_interfaceid": "41d2792e-cb2f-464c-a974-4be557dde28b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2508d856-adda-4895-ac2f-24adfa6c6b29", "address": "fa:16:3e:96:c0:5c", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2508d856-ad", "ovs_interfaceid": "2508d856-adda-4895-ac2f-24adfa6c6b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.753 248514 DEBUG oslo_concurrency.lockutils [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-ea00b062-2638-45e3-91fb-24240ef8912f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.754 248514 DEBUG oslo_concurrency.lockutils [req-56f9d509-df4f-4172-b484-5c22aea3c01a req-580a80cc-9e00-4672-a236-81b203d14a67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ea00b062-2638-45e3-91fb-24240ef8912f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.754 248514 DEBUG nova.network.neutron [req-56f9d509-df4f-4172-b484-5c22aea3c01a req-580a80cc-9e00-4672-a236-81b203d14a67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Refreshing network info cache for port 2508d856-adda-4895-ac2f-24adfa6c6b29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.759 248514 DEBUG nova.virt.libvirt.vif [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:19:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1677975054',display_name='tempest-AttachInterfacesTestJSON-server-1677975054',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1677975054',id=23,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlwa4a+xPwIF1ZQHB9HOTH7BIkCV659ajtgIcCDEuqwt51MFjzqyMmTtvHIvG4KvdVHQ8rxK33SgXXhym2PtXjLAyZK6aAc1sIvFRHlJNcPXY0ZisQ+T6fVIhZy24etfQ==',key_name='tempest-keypair-1760745004',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:20:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-jvuhbj1q',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:20:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=ea00b062-2638-45e3-91fb-24240ef8912f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2508d856-adda-4895-ac2f-24adfa6c6b29", "address": "fa:16:3e:96:c0:5c", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2508d856-ad", "ovs_interfaceid": "2508d856-adda-4895-ac2f-24adfa6c6b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.759 248514 DEBUG nova.network.os_vif_util [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "2508d856-adda-4895-ac2f-24adfa6c6b29", "address": "fa:16:3e:96:c0:5c", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2508d856-ad", "ovs_interfaceid": "2508d856-adda-4895-ac2f-24adfa6c6b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.760 248514 DEBUG nova.network.os_vif_util [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:c0:5c,bridge_name='br-int',has_traffic_filtering=True,id=2508d856-adda-4895-ac2f-24adfa6c6b29,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2508d856-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.760 248514 DEBUG os_vif [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:c0:5c,bridge_name='br-int',has_traffic_filtering=True,id=2508d856-adda-4895-ac2f-24adfa6c6b29,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2508d856-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.761 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.761 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.762 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.764 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.765 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2508d856-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.765 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2508d856-ad, col_values=(('external_ids', {'iface-id': '2508d856-adda-4895-ac2f-24adfa6c6b29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:c0:5c', 'vm-uuid': 'ea00b062-2638-45e3-91fb-24240ef8912f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.767 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:20:41 np0005558241 NetworkManager[50376]: <info>  [1765614041.7679] manager: (tap2508d856-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.768 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.775 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.776 248514 INFO os_vif [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:c0:5c,bridge_name='br-int',has_traffic_filtering=True,id=2508d856-adda-4895-ac2f-24adfa6c6b29,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2508d856-ad')
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.777 248514 DEBUG nova.virt.libvirt.vif [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:19:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1677975054',display_name='tempest-AttachInterfacesTestJSON-server-1677975054',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1677975054',id=23,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlwa4a+xPwIF1ZQHB9HOTH7BIkCV659ajtgIcCDEuqwt51MFjzqyMmTtvHIvG4KvdVHQ8rxK33SgXXhym2PtXjLAyZK6aAc1sIvFRHlJNcPXY0ZisQ+T6fVIhZy24etfQ==',key_name='tempest-keypair-1760745004',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:20:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-jvuhbj1q',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:20:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=ea00b062-2638-45e3-91fb-24240ef8912f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2508d856-adda-4895-ac2f-24adfa6c6b29", "address": "fa:16:3e:96:c0:5c", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2508d856-ad", "ovs_interfaceid": "2508d856-adda-4895-ac2f-24adfa6c6b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.778 248514 DEBUG nova.network.os_vif_util [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "2508d856-adda-4895-ac2f-24adfa6c6b29", "address": "fa:16:3e:96:c0:5c", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2508d856-ad", "ovs_interfaceid": "2508d856-adda-4895-ac2f-24adfa6c6b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.778 248514 DEBUG nova.network.os_vif_util [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:c0:5c,bridge_name='br-int',has_traffic_filtering=True,id=2508d856-adda-4895-ac2f-24adfa6c6b29,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2508d856-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.782 248514 DEBUG nova.compute.manager [req-47597b03-3052-4e03-81be-dfc23b4484f6 req-6b24dde5-438c-46ca-9eea-d61bc00dc87e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Received event network-vif-unplugged-c204c1c8-ba1e-45b9-87aa-dac2170952ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.783 248514 DEBUG oslo_concurrency.lockutils [req-47597b03-3052-4e03-81be-dfc23b4484f6 req-6b24dde5-438c-46ca-9eea-d61bc00dc87e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e686bddc-956e-4714-868c-9a29dda243d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.783 248514 DEBUG oslo_concurrency.lockutils [req-47597b03-3052-4e03-81be-dfc23b4484f6 req-6b24dde5-438c-46ca-9eea-d61bc00dc87e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e686bddc-956e-4714-868c-9a29dda243d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.783 248514 DEBUG oslo_concurrency.lockutils [req-47597b03-3052-4e03-81be-dfc23b4484f6 req-6b24dde5-438c-46ca-9eea-d61bc00dc87e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e686bddc-956e-4714-868c-9a29dda243d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.783 248514 DEBUG nova.compute.manager [req-47597b03-3052-4e03-81be-dfc23b4484f6 req-6b24dde5-438c-46ca-9eea-d61bc00dc87e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] No waiting events found dispatching network-vif-unplugged-c204c1c8-ba1e-45b9-87aa-dac2170952ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.784 248514 WARNING nova.compute.manager [req-47597b03-3052-4e03-81be-dfc23b4484f6 req-6b24dde5-438c-46ca-9eea-d61bc00dc87e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Received unexpected event network-vif-unplugged-c204c1c8-ba1e-45b9-87aa-dac2170952ec for instance with vm_state deleted and task_state None.
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.784 248514 DEBUG nova.compute.manager [req-47597b03-3052-4e03-81be-dfc23b4484f6 req-6b24dde5-438c-46ca-9eea-d61bc00dc87e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Received event network-vif-plugged-c204c1c8-ba1e-45b9-87aa-dac2170952ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.784 248514 DEBUG oslo_concurrency.lockutils [req-47597b03-3052-4e03-81be-dfc23b4484f6 req-6b24dde5-438c-46ca-9eea-d61bc00dc87e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e686bddc-956e-4714-868c-9a29dda243d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.784 248514 DEBUG oslo_concurrency.lockutils [req-47597b03-3052-4e03-81be-dfc23b4484f6 req-6b24dde5-438c-46ca-9eea-d61bc00dc87e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e686bddc-956e-4714-868c-9a29dda243d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.784 248514 DEBUG oslo_concurrency.lockutils [req-47597b03-3052-4e03-81be-dfc23b4484f6 req-6b24dde5-438c-46ca-9eea-d61bc00dc87e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e686bddc-956e-4714-868c-9a29dda243d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.784 248514 DEBUG nova.compute.manager [req-47597b03-3052-4e03-81be-dfc23b4484f6 req-6b24dde5-438c-46ca-9eea-d61bc00dc87e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] No waiting events found dispatching network-vif-plugged-c204c1c8-ba1e-45b9-87aa-dac2170952ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.784 248514 WARNING nova.compute.manager [req-47597b03-3052-4e03-81be-dfc23b4484f6 req-6b24dde5-438c-46ca-9eea-d61bc00dc87e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Received unexpected event network-vif-plugged-c204c1c8-ba1e-45b9-87aa-dac2170952ec for instance with vm_state deleted and task_state None.
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.785 248514 DEBUG nova.compute.manager [req-47597b03-3052-4e03-81be-dfc23b4484f6 req-6b24dde5-438c-46ca-9eea-d61bc00dc87e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Received event network-vif-deleted-c204c1c8-ba1e-45b9-87aa-dac2170952ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.786 248514 DEBUG nova.virt.libvirt.guest [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] attach device xml: <interface type="ethernet">
Dec 13 03:20:41 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:96:c0:5c"/>
Dec 13 03:20:41 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 03:20:41 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:20:41 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 03:20:41 np0005558241 nova_compute[248510]:  <target dev="tap2508d856-ad"/>
Dec 13 03:20:41 np0005558241 nova_compute[248510]: </interface>
Dec 13 03:20:41 np0005558241 nova_compute[248510]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec 13 03:20:41 np0005558241 kernel: tap2508d856-ad: entered promiscuous mode
Dec 13 03:20:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:41Z|00126|binding|INFO|Claiming lport 2508d856-adda-4895-ac2f-24adfa6c6b29 for this chassis.
Dec 13 03:20:41 np0005558241 NetworkManager[50376]: <info>  [1765614041.8032] manager: (tap2508d856-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Dec 13 03:20:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:41Z|00127|binding|INFO|2508d856-adda-4895-ac2f-24adfa6c6b29: Claiming fa:16:3e:96:c0:5c 10.100.0.14
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.801 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.804 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:41.812 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:c0:5c 10.100.0.14'], port_security=['fa:16:3e:96:c0:5c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ea00b062-2638-45e3-91fb-24240ef8912f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2508d856-adda-4895-ac2f-24adfa6c6b29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:41.814 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2508d856-adda-4895-ac2f-24adfa6c6b29 in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 bound to our chassis#033[00m
Dec 13 03:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:41.816 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92#033[00m
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.819 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.821 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3436MB free_disk=59.757444496266544GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:20:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:41Z|00128|binding|INFO|Setting lport 2508d856-adda-4895-ac2f-24adfa6c6b29 ovn-installed in OVS
Dec 13 03:20:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:41Z|00129|binding|INFO|Setting lport 2508d856-adda-4895-ac2f-24adfa6c6b29 up in Southbound
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.821 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.822 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.831 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:41.838 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7de3265d-02d3-42d0-ba95-1bcb4b129dca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:41 np0005558241 systemd-udevd[277550]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:20:41 np0005558241 NetworkManager[50376]: <info>  [1765614041.8641] device (tap2508d856-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:20:41 np0005558241 NetworkManager[50376]: <info>  [1765614041.8649] device (tap2508d856-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:41.888 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[61245631-be91-4d91-9ee5-640a310ffcda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:41.892 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[58303955-b31c-4735-a836-d8dbdc43eef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:41.934 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[627237ea-64a5-456e-9186-74b436dd910e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.936 248514 DEBUG nova.virt.libvirt.driver [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.936 248514 DEBUG nova.virt.libvirt.driver [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.936 248514 DEBUG nova.virt.libvirt.driver [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:d2:65:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.937 248514 DEBUG nova.virt.libvirt.driver [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:96:c0:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:41.958 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a24f36-1558-4c27-9123-7e984c3fbb7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657648, 'reachable_time': 24417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277557, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:41.976 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ed277c1e-19bd-4e36-8e20-744953add136]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 657666, 'tstamp': 657666}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277558, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 657669, 'tstamp': 657669}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277558, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:41.978 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:41 np0005558241 nova_compute[248510]: 2025-12-13 08:20:41.983 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:41.984 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:41.984 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:41.985 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:41.985 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:20:42 np0005558241 nova_compute[248510]: 2025-12-13 08:20:42.044 248514 DEBUG nova.virt.libvirt.guest [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:20:42 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:20:42 np0005558241 nova_compute[248510]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1677975054</nova:name>
Dec 13 03:20:42 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:20:42</nova:creationTime>
Dec 13 03:20:42 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:20:42 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:20:42 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:20:42 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:20:42 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:20:42 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:20:42 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:20:42 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:20:42 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:20:42 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:20:42 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:20:42 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:20:42 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:20:42 np0005558241 nova_compute[248510]:    <nova:port uuid="41d2792e-cb2f-464c-a974-4be557dde28b">
Dec 13 03:20:42 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:20:42 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:20:42 np0005558241 nova_compute[248510]:    <nova:port uuid="2508d856-adda-4895-ac2f-24adfa6c6b29">
Dec 13 03:20:42 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 03:20:42 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:20:42 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:20:42 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:20:42 np0005558241 nova_compute[248510]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec 13 03:20:42 np0005558241 nova_compute[248510]: 2025-12-13 08:20:42.088 248514 DEBUG oslo_concurrency.processutils [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:20:42 np0005558241 nova_compute[248510]: 2025-12-13 08:20:42.118 248514 DEBUG oslo_concurrency.lockutils [None req-00af8032-a0a8-4fd3-9257-06781caa927e ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-ea00b062-2638-45e3-91fb-24240ef8912f-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 12.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:20:42 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2125565551' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:20:42 np0005558241 nova_compute[248510]: 2025-12-13 08:20:42.685 248514 DEBUG oslo_concurrency.processutils [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:20:42 np0005558241 nova_compute[248510]: 2025-12-13 08:20:42.691 248514 DEBUG nova.compute.provider_tree [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:20:42 np0005558241 nova_compute[248510]: 2025-12-13 08:20:42.875 248514 DEBUG nova.compute.manager [req-1af142b2-a389-4cb9-a81a-b40823cd0b63 req-505644a5-207a-416f-a844-526a072375aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received event network-vif-plugged-2508d856-adda-4895-ac2f-24adfa6c6b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:42 np0005558241 nova_compute[248510]: 2025-12-13 08:20:42.875 248514 DEBUG oslo_concurrency.lockutils [req-1af142b2-a389-4cb9-a81a-b40823cd0b63 req-505644a5-207a-416f-a844-526a072375aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:42 np0005558241 nova_compute[248510]: 2025-12-13 08:20:42.875 248514 DEBUG oslo_concurrency.lockutils [req-1af142b2-a389-4cb9-a81a-b40823cd0b63 req-505644a5-207a-416f-a844-526a072375aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:42 np0005558241 nova_compute[248510]: 2025-12-13 08:20:42.876 248514 DEBUG oslo_concurrency.lockutils [req-1af142b2-a389-4cb9-a81a-b40823cd0b63 req-505644a5-207a-416f-a844-526a072375aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:42 np0005558241 nova_compute[248510]: 2025-12-13 08:20:42.876 248514 DEBUG nova.compute.manager [req-1af142b2-a389-4cb9-a81a-b40823cd0b63 req-505644a5-207a-416f-a844-526a072375aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] No waiting events found dispatching network-vif-plugged-2508d856-adda-4895-ac2f-24adfa6c6b29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:20:42 np0005558241 nova_compute[248510]: 2025-12-13 08:20:42.876 248514 WARNING nova.compute.manager [req-1af142b2-a389-4cb9-a81a-b40823cd0b63 req-505644a5-207a-416f-a844-526a072375aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received unexpected event network-vif-plugged-2508d856-adda-4895-ac2f-24adfa6c6b29 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:20:42 np0005558241 nova_compute[248510]: 2025-12-13 08:20:42.881 248514 DEBUG nova.scheduler.client.report [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:20:42 np0005558241 nova_compute[248510]: 2025-12-13 08:20:42.922 248514 DEBUG oslo_concurrency.lockutils [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:42 np0005558241 nova_compute[248510]: 2025-12-13 08:20:42.926 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 1.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1641: 321 pgs: 321 active+clean; 438 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 76 KiB/s wr, 55 op/s
Dec 13 03:20:42 np0005558241 nova_compute[248510]: 2025-12-13 08:20:42.969 248514 INFO nova.scheduler.client.report [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Deleted allocations for instance e686bddc-956e-4714-868c-9a29dda243d2#033[00m
Dec 13 03:20:43 np0005558241 nova_compute[248510]: 2025-12-13 08:20:43.039 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance a14d8b88-7aec-468f-a550-881364e4d95e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:20:43 np0005558241 nova_compute[248510]: 2025-12-13 08:20:43.040 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 3fffafca-321d-4611-8940-da963b356ca1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:20:43 np0005558241 nova_compute[248510]: 2025-12-13 08:20:43.040 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 6b1e2b65-1398-4af8-9e8a-a8b99630eef8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:20:43 np0005558241 nova_compute[248510]: 2025-12-13 08:20:43.040 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 0d64e209-19e7-4ad3-a790-43d04d832838 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:20:43 np0005558241 nova_compute[248510]: 2025-12-13 08:20:43.040 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance ea00b062-2638-45e3-91fb-24240ef8912f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:20:43 np0005558241 nova_compute[248510]: 2025-12-13 08:20:43.040 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:20:43 np0005558241 nova_compute[248510]: 2025-12-13 08:20:43.040 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:20:43 np0005558241 nova_compute[248510]: 2025-12-13 08:20:43.071 248514 DEBUG oslo_concurrency.lockutils [None req-00db2017-140f-4901-821d-16f62fb5a47f 1e46b9271df84074b6a832899ca5ee57 d6abc0a5e3b44a75bd19cd7df807b790 - - default default] Lock "e686bddc-956e-4714-868c-9a29dda243d2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:43 np0005558241 nova_compute[248510]: 2025-12-13 08:20:43.234 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:20:43 np0005558241 nova_compute[248510]: 2025-12-13 08:20:43.774 248514 DEBUG nova.network.neutron [req-56f9d509-df4f-4172-b484-5c22aea3c01a req-580a80cc-9e00-4672-a236-81b203d14a67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Updated VIF entry in instance network info cache for port 2508d856-adda-4895-ac2f-24adfa6c6b29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:20:43 np0005558241 nova_compute[248510]: 2025-12-13 08:20:43.776 248514 DEBUG nova.network.neutron [req-56f9d509-df4f-4172-b484-5c22aea3c01a req-580a80cc-9e00-4672-a236-81b203d14a67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Updating instance_info_cache with network_info: [{"id": "41d2792e-cb2f-464c-a974-4be557dde28b", "address": "fa:16:3e:d2:65:9d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d2792e-cb", "ovs_interfaceid": "41d2792e-cb2f-464c-a974-4be557dde28b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2508d856-adda-4895-ac2f-24adfa6c6b29", "address": "fa:16:3e:96:c0:5c", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2508d856-ad", "ovs_interfaceid": "2508d856-adda-4895-ac2f-24adfa6c6b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:20:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:20:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/797189108' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:20:43 np0005558241 nova_compute[248510]: 2025-12-13 08:20:43.841 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:20:43 np0005558241 nova_compute[248510]: 2025-12-13 08:20:43.848 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:20:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:20:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Dec 13 03:20:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Dec 13 03:20:44 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.193 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.209 248514 DEBUG oslo_concurrency.lockutils [req-56f9d509-df4f-4172-b484-5c22aea3c01a req-580a80cc-9e00-4672-a236-81b203d14a67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ea00b062-2638-45e3-91fb-24240ef8912f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:20:44 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:44Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:96:c0:5c 10.100.0.14
Dec 13 03:20:44 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:44Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:96:c0:5c 10.100.0.14
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.323 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.630 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.687 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.687 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.701 248514 DEBUG oslo_concurrency.lockutils [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "interface-ea00b062-2638-45e3-91fb-24240ef8912f-2508d856-adda-4895-ac2f-24adfa6c6b29" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.701 248514 DEBUG oslo_concurrency.lockutils [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-ea00b062-2638-45e3-91fb-24240ef8912f-2508d856-adda-4895-ac2f-24adfa6c6b29" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.721 248514 DEBUG nova.objects.instance [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'flavor' on Instance uuid ea00b062-2638-45e3-91fb-24240ef8912f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.751 248514 DEBUG nova.virt.libvirt.vif [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:19:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1677975054',display_name='tempest-AttachInterfacesTestJSON-server-1677975054',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1677975054',id=23,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlwa4a+xPwIF1ZQHB9HOTH7BIkCV659ajtgIcCDEuqwt51MFjzqyMmTtvHIvG4KvdVHQ8rxK33SgXXhym2PtXjLAyZK6aAc1sIvFRHlJNcPXY0ZisQ+T6fVIhZy24etfQ==',key_name='tempest-keypair-1760745004',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:20:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-jvuhbj1q',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:20:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=ea00b062-2638-45e3-91fb-24240ef8912f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2508d856-adda-4895-ac2f-24adfa6c6b29", "address": "fa:16:3e:96:c0:5c", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2508d856-ad", "ovs_interfaceid": "2508d856-adda-4895-ac2f-24adfa6c6b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.751 248514 DEBUG nova.network.os_vif_util [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "2508d856-adda-4895-ac2f-24adfa6c6b29", "address": "fa:16:3e:96:c0:5c", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2508d856-ad", "ovs_interfaceid": "2508d856-adda-4895-ac2f-24adfa6c6b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.752 248514 DEBUG nova.network.os_vif_util [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:c0:5c,bridge_name='br-int',has_traffic_filtering=True,id=2508d856-adda-4895-ac2f-24adfa6c6b29,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2508d856-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.758 248514 DEBUG nova.virt.libvirt.guest [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:96:c0:5c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2508d856-ad"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.761 248514 DEBUG nova.virt.libvirt.guest [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:96:c0:5c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2508d856-ad"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.765 248514 DEBUG nova.virt.libvirt.driver [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Attempting to detach device tap2508d856-ad from instance ea00b062-2638-45e3-91fb-24240ef8912f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.765 248514 DEBUG nova.virt.libvirt.guest [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] detach device xml: <interface type="ethernet">
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:96:c0:5c"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <target dev="tap2508d856-ad"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]: </interface>
Dec 13 03:20:44 np0005558241 nova_compute[248510]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.774 248514 DEBUG nova.virt.libvirt.guest [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:96:c0:5c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2508d856-ad"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.779 248514 DEBUG nova.virt.libvirt.guest [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:96:c0:5c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2508d856-ad"/></interface>not found in domain: <domain type='kvm' id='25'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <name>instance-00000017</name>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <uuid>ea00b062-2638-45e3-91fb-24240ef8912f</uuid>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1677975054</nova:name>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:20:42</nova:creationTime>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:port uuid="41d2792e-cb2f-464c-a974-4be557dde28b">
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:port uuid="2508d856-adda-4895-ac2f-24adfa6c6b29">
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:20:44 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <resource>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </resource>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <entry name='serial'>ea00b062-2638-45e3-91fb-24240ef8912f</entry>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <entry name='uuid'>ea00b062-2638-45e3-91fb-24240ef8912f</entry>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/ea00b062-2638-45e3-91fb-24240ef8912f_disk' index='2'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/ea00b062-2638-45e3-91fb-24240ef8912f_disk.config' index='1'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:d2:65:9d'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target dev='tap41d2792e-cb'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:96:c0:5c'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target dev='tap2508d856-ad'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='net1'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/ea00b062-2638-45e3-91fb-24240ef8912f/console.log' append='off'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      </target>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/0'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/ea00b062-2638-45e3-91fb-24240ef8912f/console.log' append='off'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </console>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5905' autoport='yes' listen='::0'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c359,c499</label>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c359,c499</imagelabel>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:20:44 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:20:44 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.780 248514 INFO nova.virt.libvirt.driver [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully detached device tap2508d856-ad from instance ea00b062-2638-45e3-91fb-24240ef8912f from the persistent domain config.
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.781 248514 DEBUG nova.virt.libvirt.driver [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] (1/8): Attempting to detach device tap2508d856-ad with device alias net1 from instance ea00b062-2638-45e3-91fb-24240ef8912f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.782 248514 DEBUG nova.virt.libvirt.guest [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] detach device xml: <interface type="ethernet">
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:96:c0:5c"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <target dev="tap2508d856-ad"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]: </interface>
Dec 13 03:20:44 np0005558241 nova_compute[248510]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 03:20:44 np0005558241 kernel: tap2508d856-ad (unregistering): left promiscuous mode
Dec 13 03:20:44 np0005558241 NetworkManager[50376]: <info>  [1765614044.8435] device (tap2508d856-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.851 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:20:44 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:44Z|00130|binding|INFO|Releasing lport 2508d856-adda-4895-ac2f-24adfa6c6b29 from this chassis (sb_readonly=0)
Dec 13 03:20:44 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:44Z|00131|binding|INFO|Setting lport 2508d856-adda-4895-ac2f-24adfa6c6b29 down in Southbound
Dec 13 03:20:44 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:44Z|00132|binding|INFO|Removing iface tap2508d856-ad ovn-installed in OVS
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.853 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.855 248514 DEBUG nova.virt.libvirt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Received event <DeviceRemovedEvent: 1765614044.855672, ea00b062-2638-45e3-91fb-24240ef8912f => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.857 248514 DEBUG nova.virt.libvirt.driver [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Start waiting for the detach event from libvirt for device tap2508d856-ad with device alias net1 for instance ea00b062-2638-45e3-91fb-24240ef8912f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.857 248514 DEBUG nova.virt.libvirt.guest [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:96:c0:5c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2508d856-ad"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 03:20:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:44.859 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:c0:5c 10.100.0.14'], port_security=['fa:16:3e:96:c0:5c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ea00b062-2638-45e3-91fb-24240ef8912f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2508d856-adda-4895-ac2f-24adfa6c6b29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.861 248514 DEBUG nova.virt.libvirt.guest [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:96:c0:5c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2508d856-ad"/></interface>not found in domain: <domain type='kvm' id='25'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <name>instance-00000017</name>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <uuid>ea00b062-2638-45e3-91fb-24240ef8912f</uuid>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1677975054</nova:name>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:20:42</nova:creationTime>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:port uuid="41d2792e-cb2f-464c-a974-4be557dde28b">
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:port uuid="2508d856-adda-4895-ac2f-24adfa6c6b29">
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:20:44 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <resource>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </resource>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <entry name='serial'>ea00b062-2638-45e3-91fb-24240ef8912f</entry>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <entry name='uuid'>ea00b062-2638-45e3-91fb-24240ef8912f</entry>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/ea00b062-2638-45e3-91fb-24240ef8912f_disk' index='2'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/ea00b062-2638-45e3-91fb-24240ef8912f_disk.config' index='1'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:44.861 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2508d856-adda-4895-ac2f-24adfa6c6b29 in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:d2:65:9d'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target dev='tap41d2792e-cb'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/ea00b062-2638-45e3-91fb-24240ef8912f/console.log' append='off'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      </target>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/0'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/ea00b062-2638-45e3-91fb-24240ef8912f/console.log' append='off'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </console>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5905' autoport='yes' listen='::0'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c359,c499</label>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c359,c499</imagelabel>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 03:20:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:44.863 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:20:44 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:20:44 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.861 248514 INFO nova.virt.libvirt.driver [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully detached device tap2508d856-ad from instance ea00b062-2638-45e3-91fb-24240ef8912f from the live domain config.#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.862 248514 DEBUG nova.virt.libvirt.vif [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:19:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1677975054',display_name='tempest-AttachInterfacesTestJSON-server-1677975054',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1677975054',id=23,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlwa4a+xPwIF1ZQHB9HOTH7BIkCV659ajtgIcCDEuqwt51MFjzqyMmTtvHIvG4KvdVHQ8rxK33SgXXhym2PtXjLAyZK6aAc1sIvFRHlJNcPXY0ZisQ+T6fVIhZy24etfQ==',key_name='tempest-keypair-1760745004',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:20:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-jvuhbj1q',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:20:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=ea00b062-2638-45e3-91fb-24240ef8912f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2508d856-adda-4895-ac2f-24adfa6c6b29", "address": "fa:16:3e:96:c0:5c", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2508d856-ad", "ovs_interfaceid": "2508d856-adda-4895-ac2f-24adfa6c6b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.862 248514 DEBUG nova.network.os_vif_util [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "2508d856-adda-4895-ac2f-24adfa6c6b29", "address": "fa:16:3e:96:c0:5c", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2508d856-ad", "ovs_interfaceid": "2508d856-adda-4895-ac2f-24adfa6c6b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.863 248514 DEBUG nova.network.os_vif_util [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:c0:5c,bridge_name='br-int',has_traffic_filtering=True,id=2508d856-adda-4895-ac2f-24adfa6c6b29,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2508d856-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.863 248514 DEBUG os_vif [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:c0:5c,bridge_name='br-int',has_traffic_filtering=True,id=2508d856-adda-4895-ac2f-24adfa6c6b29,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2508d856-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.864 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.865 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2508d856-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.871 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.875 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.878 248514 INFO os_vif [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:c0:5c,bridge_name='br-int',has_traffic_filtering=True,id=2508d856-adda-4895-ac2f-24adfa6c6b29,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2508d856-ad')#033[00m
Dec 13 03:20:44 np0005558241 nova_compute[248510]: 2025-12-13 08:20:44.879 248514 DEBUG nova.virt.libvirt.guest [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1677975054</nova:name>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:20:44</nova:creationTime>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    <nova:port uuid="41d2792e-cb2f-464c-a974-4be557dde28b">
Dec 13 03:20:44 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:20:44 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:20:44 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:20:44 np0005558241 nova_compute[248510]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec 13 03:20:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:44.884 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[80cc8d92-ad40-462c-a792-b6339692f4e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:44.917 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[44f70c35-7127-42f7-8376-3a66a1f38047]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:44.920 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[98198739-8691-4155-8a55-7ba6209e5484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1643: 321 pgs: 321 active+clean; 438 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 32 KiB/s wr, 53 op/s
Dec 13 03:20:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:44.954 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f29f3f0e-0905-44ef-a38c-8492518db8b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:44.976 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8121cd0f-32df-4725-bc56-dae39af4e658]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657648, 'reachable_time': 24417, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277614, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:44.999 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[029e592d-f56b-4c4c-a1bd-957e72c2a503]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 657666, 'tstamp': 657666}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277615, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 657669, 'tstamp': 657669}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277615, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:45.002 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:45 np0005558241 nova_compute[248510]: 2025-12-13 08:20:45.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:45 np0005558241 nova_compute[248510]: 2025-12-13 08:20:45.006 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:45.007 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:45.007 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:20:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:45.008 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:45.008 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:20:45 np0005558241 nova_compute[248510]: 2025-12-13 08:20:45.483 248514 DEBUG nova.compute.manager [req-6796d8d2-c29e-40fb-83cc-c58a6840a807 req-cbb9ac12-b49a-4b3e-a5ac-887483af98b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received event network-vif-plugged-2508d856-adda-4895-ac2f-24adfa6c6b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:45 np0005558241 nova_compute[248510]: 2025-12-13 08:20:45.484 248514 DEBUG oslo_concurrency.lockutils [req-6796d8d2-c29e-40fb-83cc-c58a6840a807 req-cbb9ac12-b49a-4b3e-a5ac-887483af98b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:45 np0005558241 nova_compute[248510]: 2025-12-13 08:20:45.484 248514 DEBUG oslo_concurrency.lockutils [req-6796d8d2-c29e-40fb-83cc-c58a6840a807 req-cbb9ac12-b49a-4b3e-a5ac-887483af98b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:45 np0005558241 nova_compute[248510]: 2025-12-13 08:20:45.484 248514 DEBUG oslo_concurrency.lockutils [req-6796d8d2-c29e-40fb-83cc-c58a6840a807 req-cbb9ac12-b49a-4b3e-a5ac-887483af98b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:45 np0005558241 nova_compute[248510]: 2025-12-13 08:20:45.484 248514 DEBUG nova.compute.manager [req-6796d8d2-c29e-40fb-83cc-c58a6840a807 req-cbb9ac12-b49a-4b3e-a5ac-887483af98b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] No waiting events found dispatching network-vif-plugged-2508d856-adda-4895-ac2f-24adfa6c6b29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:20:45 np0005558241 nova_compute[248510]: 2025-12-13 08:20:45.485 248514 WARNING nova.compute.manager [req-6796d8d2-c29e-40fb-83cc-c58a6840a807 req-cbb9ac12-b49a-4b3e-a5ac-887483af98b3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received unexpected event network-vif-plugged-2508d856-adda-4895-ac2f-24adfa6c6b29 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:20:45 np0005558241 nova_compute[248510]: 2025-12-13 08:20:45.681 248514 DEBUG oslo_concurrency.lockutils [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-ea00b062-2638-45e3-91fb-24240ef8912f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:20:45 np0005558241 nova_compute[248510]: 2025-12-13 08:20:45.681 248514 DEBUG oslo_concurrency.lockutils [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-ea00b062-2638-45e3-91fb-24240ef8912f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:20:45 np0005558241 nova_compute[248510]: 2025-12-13 08:20:45.682 248514 DEBUG nova.network.neutron [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.688 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.709 248514 DEBUG nova.compute.manager [req-e7ffbcea-e628-4558-b5dd-36fea06ffd79 req-736f3c18-9d0e-483a-a086-2f9f2e1e74b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received event network-vif-deleted-2508d856-adda-4895-ac2f-24adfa6c6b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.710 248514 INFO nova.compute.manager [req-e7ffbcea-e628-4558-b5dd-36fea06ffd79 req-736f3c18-9d0e-483a-a086-2f9f2e1e74b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Neutron deleted interface 2508d856-adda-4895-ac2f-24adfa6c6b29; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.710 248514 DEBUG nova.network.neutron [req-e7ffbcea-e628-4558-b5dd-36fea06ffd79 req-736f3c18-9d0e-483a-a086-2f9f2e1e74b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Updating instance_info_cache with network_info: [{"id": "41d2792e-cb2f-464c-a974-4be557dde28b", "address": "fa:16:3e:d2:65:9d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d2792e-cb", "ovs_interfaceid": "41d2792e-cb2f-464c-a974-4be557dde28b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.753 248514 DEBUG nova.objects.instance [req-e7ffbcea-e628-4558-b5dd-36fea06ffd79 req-736f3c18-9d0e-483a-a086-2f9f2e1e74b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lazy-loading 'system_metadata' on Instance uuid ea00b062-2638-45e3-91fb-24240ef8912f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.816 248514 DEBUG nova.objects.instance [req-e7ffbcea-e628-4558-b5dd-36fea06ffd79 req-736f3c18-9d0e-483a-a086-2f9f2e1e74b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lazy-loading 'flavor' on Instance uuid ea00b062-2638-45e3-91fb-24240ef8912f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.868 248514 DEBUG nova.virt.libvirt.vif [req-e7ffbcea-e628-4558-b5dd-36fea06ffd79 req-736f3c18-9d0e-483a-a086-2f9f2e1e74b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:19:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1677975054',display_name='tempest-AttachInterfacesTestJSON-server-1677975054',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1677975054',id=23,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlwa4a+xPwIF1ZQHB9HOTH7BIkCV659ajtgIcCDEuqwt51MFjzqyMmTtvHIvG4KvdVHQ8rxK33SgXXhym2PtXjLAyZK6aAc1sIvFRHlJNcPXY0ZisQ+T6fVIhZy24etfQ==',key_name='tempest-keypair-1760745004',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:20:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-jvuhbj1q',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:20:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=ea00b062-2638-45e3-91fb-24240ef8912f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2508d856-adda-4895-ac2f-24adfa6c6b29", "address": "fa:16:3e:96:c0:5c", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2508d856-ad", "ovs_interfaceid": "2508d856-adda-4895-ac2f-24adfa6c6b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.869 248514 DEBUG nova.network.os_vif_util [req-e7ffbcea-e628-4558-b5dd-36fea06ffd79 req-736f3c18-9d0e-483a-a086-2f9f2e1e74b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converting VIF {"id": "2508d856-adda-4895-ac2f-24adfa6c6b29", "address": "fa:16:3e:96:c0:5c", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2508d856-ad", "ovs_interfaceid": "2508d856-adda-4895-ac2f-24adfa6c6b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.870 248514 DEBUG nova.network.os_vif_util [req-e7ffbcea-e628-4558-b5dd-36fea06ffd79 req-736f3c18-9d0e-483a-a086-2f9f2e1e74b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:c0:5c,bridge_name='br-int',has_traffic_filtering=True,id=2508d856-adda-4895-ac2f-24adfa6c6b29,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2508d856-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.873 248514 DEBUG nova.virt.libvirt.guest [req-e7ffbcea-e628-4558-b5dd-36fea06ffd79 req-736f3c18-9d0e-483a-a086-2f9f2e1e74b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:96:c0:5c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2508d856-ad"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.878 248514 DEBUG nova.virt.libvirt.guest [req-e7ffbcea-e628-4558-b5dd-36fea06ffd79 req-736f3c18-9d0e-483a-a086-2f9f2e1e74b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:96:c0:5c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2508d856-ad"/></interface>not found in domain: <domain type='kvm' id='25'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <name>instance-00000017</name>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <uuid>ea00b062-2638-45e3-91fb-24240ef8912f</uuid>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1677975054</nova:name>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:20:44</nova:creationTime>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:port uuid="41d2792e-cb2f-464c-a974-4be557dde28b">
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:20:46 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <resource>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </resource>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <entry name='serial'>ea00b062-2638-45e3-91fb-24240ef8912f</entry>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <entry name='uuid'>ea00b062-2638-45e3-91fb-24240ef8912f</entry>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/ea00b062-2638-45e3-91fb-24240ef8912f_disk' index='2'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/ea00b062-2638-45e3-91fb-24240ef8912f_disk.config' index='1'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:d2:65:9d'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target dev='tap41d2792e-cb'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/ea00b062-2638-45e3-91fb-24240ef8912f/console.log' append='off'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      </target>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/0'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/ea00b062-2638-45e3-91fb-24240ef8912f/console.log' append='off'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </console>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5905' autoport='yes' listen='::0'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c359,c499</label>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c359,c499</imagelabel>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:20:46 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:20:46 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.879 248514 DEBUG nova.virt.libvirt.guest [req-e7ffbcea-e628-4558-b5dd-36fea06ffd79 req-736f3c18-9d0e-483a-a086-2f9f2e1e74b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:96:c0:5c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2508d856-ad"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.883 248514 DEBUG nova.virt.libvirt.guest [req-e7ffbcea-e628-4558-b5dd-36fea06ffd79 req-736f3c18-9d0e-483a-a086-2f9f2e1e74b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:96:c0:5c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap2508d856-ad"/></interface>not found in domain: <domain type='kvm' id='25'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <name>instance-00000017</name>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <uuid>ea00b062-2638-45e3-91fb-24240ef8912f</uuid>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1677975054</nova:name>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:20:44</nova:creationTime>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:port uuid="41d2792e-cb2f-464c-a974-4be557dde28b">
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:20:46 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <resource>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </resource>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <entry name='serial'>ea00b062-2638-45e3-91fb-24240ef8912f</entry>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <entry name='uuid'>ea00b062-2638-45e3-91fb-24240ef8912f</entry>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/ea00b062-2638-45e3-91fb-24240ef8912f_disk' index='2'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/ea00b062-2638-45e3-91fb-24240ef8912f_disk.config' index='1'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:d2:65:9d'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target dev='tap41d2792e-cb'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/ea00b062-2638-45e3-91fb-24240ef8912f/console.log' append='off'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      </target>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/0'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/ea00b062-2638-45e3-91fb-24240ef8912f/console.log' append='off'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </console>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5905' autoport='yes' listen='::0'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c359,c499</label>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c359,c499</imagelabel>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:20:46 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:20:46 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.883 248514 WARNING nova.virt.libvirt.driver [req-e7ffbcea-e628-4558-b5dd-36fea06ffd79 req-736f3c18-9d0e-483a-a086-2f9f2e1e74b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Detaching interface fa:16:3e:96:c0:5c failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap2508d856-ad' not found.#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.884 248514 DEBUG nova.virt.libvirt.vif [req-e7ffbcea-e628-4558-b5dd-36fea06ffd79 req-736f3c18-9d0e-483a-a086-2f9f2e1e74b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:19:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1677975054',display_name='tempest-AttachInterfacesTestJSON-server-1677975054',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1677975054',id=23,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlwa4a+xPwIF1ZQHB9HOTH7BIkCV659ajtgIcCDEuqwt51MFjzqyMmTtvHIvG4KvdVHQ8rxK33SgXXhym2PtXjLAyZK6aAc1sIvFRHlJNcPXY0ZisQ+T6fVIhZy24etfQ==',key_name='tempest-keypair-1760745004',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:20:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-jvuhbj1q',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:20:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=ea00b062-2638-45e3-91fb-24240ef8912f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2508d856-adda-4895-ac2f-24adfa6c6b29", "address": "fa:16:3e:96:c0:5c", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2508d856-ad", "ovs_interfaceid": "2508d856-adda-4895-ac2f-24adfa6c6b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.884 248514 DEBUG nova.network.os_vif_util [req-e7ffbcea-e628-4558-b5dd-36fea06ffd79 req-736f3c18-9d0e-483a-a086-2f9f2e1e74b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converting VIF {"id": "2508d856-adda-4895-ac2f-24adfa6c6b29", "address": "fa:16:3e:96:c0:5c", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2508d856-ad", "ovs_interfaceid": "2508d856-adda-4895-ac2f-24adfa6c6b29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.885 248514 DEBUG nova.network.os_vif_util [req-e7ffbcea-e628-4558-b5dd-36fea06ffd79 req-736f3c18-9d0e-483a-a086-2f9f2e1e74b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:c0:5c,bridge_name='br-int',has_traffic_filtering=True,id=2508d856-adda-4895-ac2f-24adfa6c6b29,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2508d856-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.885 248514 DEBUG os_vif [req-e7ffbcea-e628-4558-b5dd-36fea06ffd79 req-736f3c18-9d0e-483a-a086-2f9f2e1e74b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:c0:5c,bridge_name='br-int',has_traffic_filtering=True,id=2508d856-adda-4895-ac2f-24adfa6c6b29,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2508d856-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.887 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.887 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2508d856-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.887 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.889 248514 INFO os_vif [req-e7ffbcea-e628-4558-b5dd-36fea06ffd79 req-736f3c18-9d0e-483a-a086-2f9f2e1e74b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:c0:5c,bridge_name='br-int',has_traffic_filtering=True,id=2508d856-adda-4895-ac2f-24adfa6c6b29,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2508d856-ad')#033[00m
Dec 13 03:20:46 np0005558241 nova_compute[248510]: 2025-12-13 08:20:46.890 248514 DEBUG nova.virt.libvirt.guest [req-e7ffbcea-e628-4558-b5dd-36fea06ffd79 req-736f3c18-9d0e-483a-a086-2f9f2e1e74b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1677975054</nova:name>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:20:46</nova:creationTime>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    <nova:port uuid="41d2792e-cb2f-464c-a974-4be557dde28b">
Dec 13 03:20:46 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:20:46 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:20:46 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:20:46 np0005558241 nova_compute[248510]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec 13 03:20:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1644: 321 pgs: 321 active+clean; 438 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 32 KiB/s wr, 53 op/s
Dec 13 03:20:47 np0005558241 nova_compute[248510]: 2025-12-13 08:20:47.630 248514 DEBUG nova.compute.manager [req-a42b1c43-1eb9-4695-b6f1-66a7065625e8 req-e5c69883-712e-489b-a616-44b2961d63a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received event network-vif-unplugged-2508d856-adda-4895-ac2f-24adfa6c6b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:47 np0005558241 nova_compute[248510]: 2025-12-13 08:20:47.631 248514 DEBUG oslo_concurrency.lockutils [req-a42b1c43-1eb9-4695-b6f1-66a7065625e8 req-e5c69883-712e-489b-a616-44b2961d63a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:47 np0005558241 nova_compute[248510]: 2025-12-13 08:20:47.631 248514 DEBUG oslo_concurrency.lockutils [req-a42b1c43-1eb9-4695-b6f1-66a7065625e8 req-e5c69883-712e-489b-a616-44b2961d63a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:47 np0005558241 nova_compute[248510]: 2025-12-13 08:20:47.631 248514 DEBUG oslo_concurrency.lockutils [req-a42b1c43-1eb9-4695-b6f1-66a7065625e8 req-e5c69883-712e-489b-a616-44b2961d63a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:47 np0005558241 nova_compute[248510]: 2025-12-13 08:20:47.631 248514 DEBUG nova.compute.manager [req-a42b1c43-1eb9-4695-b6f1-66a7065625e8 req-e5c69883-712e-489b-a616-44b2961d63a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] No waiting events found dispatching network-vif-unplugged-2508d856-adda-4895-ac2f-24adfa6c6b29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:20:47 np0005558241 nova_compute[248510]: 2025-12-13 08:20:47.631 248514 WARNING nova.compute.manager [req-a42b1c43-1eb9-4695-b6f1-66a7065625e8 req-e5c69883-712e-489b-a616-44b2961d63a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received unexpected event network-vif-unplugged-2508d856-adda-4895-ac2f-24adfa6c6b29 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:20:47 np0005558241 nova_compute[248510]: 2025-12-13 08:20:47.633 248514 DEBUG nova.compute.manager [req-a42b1c43-1eb9-4695-b6f1-66a7065625e8 req-e5c69883-712e-489b-a616-44b2961d63a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received event network-vif-plugged-2508d856-adda-4895-ac2f-24adfa6c6b29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:47 np0005558241 nova_compute[248510]: 2025-12-13 08:20:47.633 248514 DEBUG oslo_concurrency.lockutils [req-a42b1c43-1eb9-4695-b6f1-66a7065625e8 req-e5c69883-712e-489b-a616-44b2961d63a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:47 np0005558241 nova_compute[248510]: 2025-12-13 08:20:47.633 248514 DEBUG oslo_concurrency.lockutils [req-a42b1c43-1eb9-4695-b6f1-66a7065625e8 req-e5c69883-712e-489b-a616-44b2961d63a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:47 np0005558241 nova_compute[248510]: 2025-12-13 08:20:47.633 248514 DEBUG oslo_concurrency.lockutils [req-a42b1c43-1eb9-4695-b6f1-66a7065625e8 req-e5c69883-712e-489b-a616-44b2961d63a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:47 np0005558241 nova_compute[248510]: 2025-12-13 08:20:47.633 248514 DEBUG nova.compute.manager [req-a42b1c43-1eb9-4695-b6f1-66a7065625e8 req-e5c69883-712e-489b-a616-44b2961d63a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] No waiting events found dispatching network-vif-plugged-2508d856-adda-4895-ac2f-24adfa6c6b29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:20:47 np0005558241 nova_compute[248510]: 2025-12-13 08:20:47.634 248514 WARNING nova.compute.manager [req-a42b1c43-1eb9-4695-b6f1-66a7065625e8 req-e5c69883-712e-489b-a616-44b2961d63a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received unexpected event network-vif-plugged-2508d856-adda-4895-ac2f-24adfa6c6b29 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:20:47 np0005558241 nova_compute[248510]: 2025-12-13 08:20:47.941 248514 DEBUG oslo_concurrency.lockutils [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "ea00b062-2638-45e3-91fb-24240ef8912f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:47 np0005558241 nova_compute[248510]: 2025-12-13 08:20:47.942 248514 DEBUG oslo_concurrency.lockutils [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:47 np0005558241 nova_compute[248510]: 2025-12-13 08:20:47.942 248514 DEBUG oslo_concurrency.lockutils [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:47 np0005558241 nova_compute[248510]: 2025-12-13 08:20:47.942 248514 DEBUG oslo_concurrency.lockutils [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:47 np0005558241 nova_compute[248510]: 2025-12-13 08:20:47.943 248514 DEBUG oslo_concurrency.lockutils [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:47 np0005558241 nova_compute[248510]: 2025-12-13 08:20:47.944 248514 INFO nova.compute.manager [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Terminating instance#033[00m
Dec 13 03:20:47 np0005558241 nova_compute[248510]: 2025-12-13 08:20:47.945 248514 DEBUG nova.compute.manager [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:20:48 np0005558241 kernel: tap41d2792e-cb (unregistering): left promiscuous mode
Dec 13 03:20:48 np0005558241 NetworkManager[50376]: <info>  [1765614048.2354] device (tap41d2792e-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.244 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:48Z|00133|binding|INFO|Releasing lport 41d2792e-cb2f-464c-a974-4be557dde28b from this chassis (sb_readonly=0)
Dec 13 03:20:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:48Z|00134|binding|INFO|Setting lport 41d2792e-cb2f-464c-a974-4be557dde28b down in Southbound
Dec 13 03:20:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:48Z|00135|binding|INFO|Removing iface tap41d2792e-cb ovn-installed in OVS
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.246 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:48.251 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:65:9d 10.100.0.9'], port_security=['fa:16:3e:d2:65:9d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ea00b062-2638-45e3-91fb-24240ef8912f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '267af4c5-9a6a-49cd-8515-bd9e68488e32', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=41d2792e-cb2f-464c-a974-4be557dde28b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:20:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:48.252 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 41d2792e-cb2f-464c-a974-4be557dde28b in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis#033[00m
Dec 13 03:20:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:48.253 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ca92864-3b70-4794-9db1-fa08128cef92, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:20:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:48.255 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4410e6d7-da09-41d8-9b38-9d459d18f639]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:48.255 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 namespace which is not needed anymore#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.259 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:48 np0005558241 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000017.scope: Deactivated successfully.
Dec 13 03:20:48 np0005558241 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000017.scope: Consumed 14.706s CPU time.
Dec 13 03:20:48 np0005558241 systemd-machined[210538]: Machine qemu-25-instance-00000017 terminated.
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.368 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.374 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.382 248514 INFO nova.virt.libvirt.driver [-] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Instance destroyed successfully.#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.383 248514 DEBUG nova.objects.instance [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'resources' on Instance uuid ea00b062-2638-45e3-91fb-24240ef8912f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.405 248514 DEBUG nova.virt.libvirt.vif [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:19:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1677975054',display_name='tempest-AttachInterfacesTestJSON-server-1677975054',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1677975054',id=23,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHlwa4a+xPwIF1ZQHB9HOTH7BIkCV659ajtgIcCDEuqwt51MFjzqyMmTtvHIvG4KvdVHQ8rxK33SgXXhym2PtXjLAyZK6aAc1sIvFRHlJNcPXY0ZisQ+T6fVIhZy24etfQ==',key_name='tempest-keypair-1760745004',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:20:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-jvuhbj1q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:20:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=ea00b062-2638-45e3-91fb-24240ef8912f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41d2792e-cb2f-464c-a974-4be557dde28b", "address": "fa:16:3e:d2:65:9d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d2792e-cb", "ovs_interfaceid": "41d2792e-cb2f-464c-a974-4be557dde28b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.406 248514 DEBUG nova.network.os_vif_util [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "41d2792e-cb2f-464c-a974-4be557dde28b", "address": "fa:16:3e:d2:65:9d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d2792e-cb", "ovs_interfaceid": "41d2792e-cb2f-464c-a974-4be557dde28b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.406 248514 DEBUG nova.network.os_vif_util [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:65:9d,bridge_name='br-int',has_traffic_filtering=True,id=41d2792e-cb2f-464c-a974-4be557dde28b,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d2792e-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.407 248514 DEBUG os_vif [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:65:9d,bridge_name='br-int',has_traffic_filtering=True,id=41d2792e-cb2f-464c-a974-4be557dde28b,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d2792e-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.408 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.408 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41d2792e-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.410 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.411 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.414 248514 INFO os_vif [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:65:9d,bridge_name='br-int',has_traffic_filtering=True,id=41d2792e-cb2f-464c-a974-4be557dde28b,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41d2792e-cb')#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.439 248514 INFO nova.network.neutron [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Port 2508d856-adda-4895-ac2f-24adfa6c6b29 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.440 248514 DEBUG nova.network.neutron [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Updating instance_info_cache with network_info: [{"id": "41d2792e-cb2f-464c-a974-4be557dde28b", "address": "fa:16:3e:d2:65:9d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41d2792e-cb", "ovs_interfaceid": "41d2792e-cb2f-464c-a974-4be557dde28b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.462 248514 DEBUG oslo_concurrency.lockutils [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-ea00b062-2638-45e3-91fb-24240ef8912f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:20:48 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[276492]: [NOTICE]   (276515) : haproxy version is 2.8.14-c23fe91
Dec 13 03:20:48 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[276492]: [NOTICE]   (276515) : path to executable is /usr/sbin/haproxy
Dec 13 03:20:48 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[276492]: [WARNING]  (276515) : Exiting Master process...
Dec 13 03:20:48 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[276492]: [WARNING]  (276515) : Exiting Master process...
Dec 13 03:20:48 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[276492]: [ALERT]    (276515) : Current worker (276517) exited with code 143 (Terminated)
Dec 13 03:20:48 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[276492]: [WARNING]  (276515) : All workers exited. Exiting... (0)
Dec 13 03:20:48 np0005558241 systemd[1]: libpod-9b9d3f8f97d9725aca9c3eb782ceac0858649736ab6c6c40f6971e75725c41b3.scope: Deactivated successfully.
Dec 13 03:20:48 np0005558241 podman[277640]: 2025-12-13 08:20:48.488790866 +0000 UTC m=+0.128981828 container died 9b9d3f8f97d9725aca9c3eb782ceac0858649736ab6c6c40f6971e75725c41b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.490 248514 DEBUG oslo_concurrency.lockutils [None req-5dec7f73-3d30-488e-b913-0df448b149be ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-ea00b062-2638-45e3-91fb-24240ef8912f-2508d856-adda-4895-ac2f-24adfa6c6b29" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:20:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9b9d3f8f97d9725aca9c3eb782ceac0858649736ab6c6c40f6971e75725c41b3-userdata-shm.mount: Deactivated successfully.
Dec 13 03:20:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4ce3084b50b0e6ec15e1ecf47d7e2ce15ecf3c552074717af103c056d1bf1d71-merged.mount: Deactivated successfully.
Dec 13 03:20:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1645: 321 pgs: 321 active+clean; 438 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 18 KiB/s wr, 19 op/s
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.955 248514 DEBUG nova.compute.manager [req-41885f6e-59b9-46dc-b0f1-bde5d1d63e1b req-f6a5e92b-48c7-4074-9681-b488064682a8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received event network-vif-unplugged-41d2792e-cb2f-464c-a974-4be557dde28b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.956 248514 DEBUG oslo_concurrency.lockutils [req-41885f6e-59b9-46dc-b0f1-bde5d1d63e1b req-f6a5e92b-48c7-4074-9681-b488064682a8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.956 248514 DEBUG oslo_concurrency.lockutils [req-41885f6e-59b9-46dc-b0f1-bde5d1d63e1b req-f6a5e92b-48c7-4074-9681-b488064682a8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.956 248514 DEBUG oslo_concurrency.lockutils [req-41885f6e-59b9-46dc-b0f1-bde5d1d63e1b req-f6a5e92b-48c7-4074-9681-b488064682a8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.957 248514 DEBUG nova.compute.manager [req-41885f6e-59b9-46dc-b0f1-bde5d1d63e1b req-f6a5e92b-48c7-4074-9681-b488064682a8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] No waiting events found dispatching network-vif-unplugged-41d2792e-cb2f-464c-a974-4be557dde28b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:20:48 np0005558241 nova_compute[248510]: 2025-12-13 08:20:48.957 248514 DEBUG nova.compute.manager [req-41885f6e-59b9-46dc-b0f1-bde5d1d63e1b req-f6a5e92b-48c7-4074-9681-b488064682a8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received event network-vif-unplugged-41d2792e-cb2f-464c-a974-4be557dde28b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:20:49 np0005558241 podman[277640]: 2025-12-13 08:20:49.03296245 +0000 UTC m=+0.673153422 container cleanup 9b9d3f8f97d9725aca9c3eb782ceac0858649736ab6c6c40f6971e75725c41b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 03:20:49 np0005558241 systemd[1]: libpod-conmon-9b9d3f8f97d9725aca9c3eb782ceac0858649736ab6c6c40f6971e75725c41b3.scope: Deactivated successfully.
Dec 13 03:20:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:20:49 np0005558241 podman[277698]: 2025-12-13 08:20:49.24978855 +0000 UTC m=+0.193514727 container remove 9b9d3f8f97d9725aca9c3eb782ceac0858649736ab6c6c40f6971e75725c41b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 03:20:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:49.256 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd83c0be-d521-403f-8478-fbb731eb7e5f]: (4, ('Sat Dec 13 08:20:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 (9b9d3f8f97d9725aca9c3eb782ceac0858649736ab6c6c40f6971e75725c41b3)\n9b9d3f8f97d9725aca9c3eb782ceac0858649736ab6c6c40f6971e75725c41b3\nSat Dec 13 08:20:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 (9b9d3f8f97d9725aca9c3eb782ceac0858649736ab6c6c40f6971e75725c41b3)\n9b9d3f8f97d9725aca9c3eb782ceac0858649736ab6c6c40f6971e75725c41b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:49.258 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d228b1a7-252c-42ca-9f85-64290f8458a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:49.260 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:20:49 np0005558241 nova_compute[248510]: 2025-12-13 08:20:49.264 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:49 np0005558241 kernel: tap1ca92864-30: left promiscuous mode
Dec 13 03:20:49 np0005558241 nova_compute[248510]: 2025-12-13 08:20:49.266 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:49.272 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7b13f9ad-266f-4ca6-9aaa-e7b4587bc90b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:20:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 6876 writes, 31K keys, 6876 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.01 MB/s#012Cumulative WAL: 6876 writes, 6876 syncs, 1.00 writes per sync, written: 0.04 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1449 writes, 6530 keys, 1449 commit groups, 1.0 writes per commit group, ingest: 9.45 MB, 0.02 MB/s#012Interval WAL: 1449 writes, 1449 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.4      3.03              0.13        18    0.169       0      0       0.0       0.0#012  L6      1/0    7.36 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.5     55.2     44.9      2.93              0.40        17    0.173     84K   9816       0.0       0.0#012 Sum      1/0    7.36 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.5     27.1     28.4      5.97              0.53        35    0.171     84K   9816       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.2     35.4     34.9      1.12              0.12         8    0.140     23K   2546       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     55.2     44.9      2.93              0.40        17    0.173     84K   9816       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      3.03              0.13        17    0.178       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.037, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.17 GB write, 0.06 MB/s write, 0.16 GB read, 0.05 MB/s read, 6.0 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 1.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 304.00 MB usage: 18.97 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000411 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1177,18.33 MB,6.02906%) FilterBlock(36,238.92 KB,0.0767507%) IndexBlock(36,422.03 KB,0.135572%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec 13 03:20:49 np0005558241 nova_compute[248510]: 2025-12-13 08:20:49.279 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:49.287 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ffa067f-dec2-4fb8-9c63-cf59c6788440]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:49.288 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4ab198e8-39fe-42cc-8bf3-8545afa39e2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:49.308 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7f82d012-4cc4-44bc-b001-69255be3d9e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657638, 'reachable_time': 41182, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277713, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:49 np0005558241 systemd[1]: run-netns-ovnmeta\x2d1ca92864\x2d3b70\x2d4794\x2d9db1\x2dfa08128cef92.mount: Deactivated successfully.
Dec 13 03:20:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:49.311 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:20:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:49.312 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[87a146da-7bfa-433e-a159-de361f9258be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:20:49 np0005558241 nova_compute[248510]: 2025-12-13 08:20:49.325 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1646: 321 pgs: 321 active+clean; 438 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 18 KiB/s wr, 19 op/s
Dec 13 03:20:51 np0005558241 nova_compute[248510]: 2025-12-13 08:20:51.746 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614036.7444968, e686bddc-956e-4714-868c-9a29dda243d2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:20:51 np0005558241 nova_compute[248510]: 2025-12-13 08:20:51.747 248514 INFO nova.compute.manager [-] [instance: e686bddc-956e-4714-868c-9a29dda243d2] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:20:52 np0005558241 nova_compute[248510]: 2025-12-13 08:20:52.190 248514 INFO nova.virt.libvirt.driver [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Deleting instance files /var/lib/nova/instances/ea00b062-2638-45e3-91fb-24240ef8912f_del#033[00m
Dec 13 03:20:52 np0005558241 nova_compute[248510]: 2025-12-13 08:20:52.191 248514 INFO nova.virt.libvirt.driver [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Deletion of /var/lib/nova/instances/ea00b062-2638-45e3-91fb-24240ef8912f_del complete#033[00m
Dec 13 03:20:52 np0005558241 nova_compute[248510]: 2025-12-13 08:20:52.447 248514 DEBUG nova.compute.manager [None req-6e4af5ce-ef40-4617-b4b9-a249bda8e3e4 - - - - - -] [instance: e686bddc-956e-4714-868c-9a29dda243d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:20:52 np0005558241 nova_compute[248510]: 2025-12-13 08:20:52.610 248514 INFO nova.compute.manager [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Took 4.66 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:20:52 np0005558241 nova_compute[248510]: 2025-12-13 08:20:52.611 248514 DEBUG oslo.service.loopingcall [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:20:52 np0005558241 nova_compute[248510]: 2025-12-13 08:20:52.611 248514 DEBUG nova.compute.manager [-] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:20:52 np0005558241 nova_compute[248510]: 2025-12-13 08:20:52.611 248514 DEBUG nova.network.neutron [-] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:20:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1647: 321 pgs: 321 active+clean; 405 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 30 op/s
Dec 13 03:20:53 np0005558241 nova_compute[248510]: 2025-12-13 08:20:53.411 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:20:54 np0005558241 nova_compute[248510]: 2025-12-13 08:20:54.259 248514 DEBUG nova.compute.manager [req-2fc2d235-0367-4b8f-b43e-c94c63314835 req-c7454b72-e116-4068-b95c-39c5f0fcc9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received event network-vif-plugged-41d2792e-cb2f-464c-a974-4be557dde28b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:54 np0005558241 nova_compute[248510]: 2025-12-13 08:20:54.259 248514 DEBUG oslo_concurrency.lockutils [req-2fc2d235-0367-4b8f-b43e-c94c63314835 req-c7454b72-e116-4068-b95c-39c5f0fcc9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:54 np0005558241 nova_compute[248510]: 2025-12-13 08:20:54.259 248514 DEBUG oslo_concurrency.lockutils [req-2fc2d235-0367-4b8f-b43e-c94c63314835 req-c7454b72-e116-4068-b95c-39c5f0fcc9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:54 np0005558241 nova_compute[248510]: 2025-12-13 08:20:54.260 248514 DEBUG oslo_concurrency.lockutils [req-2fc2d235-0367-4b8f-b43e-c94c63314835 req-c7454b72-e116-4068-b95c-39c5f0fcc9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:54 np0005558241 nova_compute[248510]: 2025-12-13 08:20:54.260 248514 DEBUG nova.compute.manager [req-2fc2d235-0367-4b8f-b43e-c94c63314835 req-c7454b72-e116-4068-b95c-39c5f0fcc9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] No waiting events found dispatching network-vif-plugged-41d2792e-cb2f-464c-a974-4be557dde28b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:20:54 np0005558241 nova_compute[248510]: 2025-12-13 08:20:54.260 248514 WARNING nova.compute.manager [req-2fc2d235-0367-4b8f-b43e-c94c63314835 req-c7454b72-e116-4068-b95c-39c5f0fcc9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received unexpected event network-vif-plugged-41d2792e-cb2f-464c-a974-4be557dde28b for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:20:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:54Z|00136|binding|INFO|Releasing lport c8850a7f-53d6-4776-8c0e-7fb474fe52de from this chassis (sb_readonly=0)
Dec 13 03:20:54 np0005558241 nova_compute[248510]: 2025-12-13 08:20:54.328 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:54 np0005558241 nova_compute[248510]: 2025-12-13 08:20:54.334 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:20:54Z|00137|binding|INFO|Releasing lport c8850a7f-53d6-4776-8c0e-7fb474fe52de from this chassis (sb_readonly=0)
Dec 13 03:20:54 np0005558241 nova_compute[248510]: 2025-12-13 08:20:54.517 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:54 np0005558241 nova_compute[248510]: 2025-12-13 08:20:54.833 248514 DEBUG nova.network.neutron [-] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:20:54 np0005558241 nova_compute[248510]: 2025-12-13 08:20:54.849 248514 INFO nova.compute.manager [-] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Took 2.24 seconds to deallocate network for instance.#033[00m
Dec 13 03:20:54 np0005558241 nova_compute[248510]: 2025-12-13 08:20:54.924 248514 DEBUG oslo_concurrency.lockutils [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:54 np0005558241 nova_compute[248510]: 2025-12-13 08:20:54.924 248514 DEBUG oslo_concurrency.lockutils [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1648: 321 pgs: 321 active+clean; 358 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 8.7 KiB/s wr, 32 op/s
Dec 13 03:20:55 np0005558241 nova_compute[248510]: 2025-12-13 08:20:55.010 248514 DEBUG nova.compute.manager [req-9eb712f8-9a52-4a0b-ac6d-718db001c4ee req-eea2fe27-78fd-43ee-bd6a-a194670d6ec5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Received event network-vif-deleted-41d2792e-cb2f-464c-a974-4be557dde28b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:20:55 np0005558241 nova_compute[248510]: 2025-12-13 08:20:55.087 248514 DEBUG oslo_concurrency.processutils [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:20:55 np0005558241 nova_compute[248510]: 2025-12-13 08:20:55.245 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec 13 03:20:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:55.402 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:20:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:55.403 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:20:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:20:55.403 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:20:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1336253598' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:20:56 np0005558241 nova_compute[248510]: 2025-12-13 08:20:56.097 248514 DEBUG oslo_concurrency.processutils [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:20:56 np0005558241 nova_compute[248510]: 2025-12-13 08:20:56.104 248514 DEBUG nova.compute.provider_tree [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:20:56 np0005558241 nova_compute[248510]: 2025-12-13 08:20:56.166 248514 DEBUG nova.scheduler.client.report [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:20:56 np0005558241 nova_compute[248510]: 2025-12-13 08:20:56.203 248514 DEBUG oslo_concurrency.lockutils [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:56 np0005558241 nova_compute[248510]: 2025-12-13 08:20:56.314 248514 INFO nova.scheduler.client.report [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Deleted allocations for instance ea00b062-2638-45e3-91fb-24240ef8912f#033[00m
Dec 13 03:20:56 np0005558241 nova_compute[248510]: 2025-12-13 08:20:56.565 248514 DEBUG oslo_concurrency.lockutils [None req-006dfac7-9f73-4e95-8c79-978ca4171103 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "ea00b062-2638-45e3-91fb-24240ef8912f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:20:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1649: 321 pgs: 321 active+clean; 358 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 7.9 KiB/s wr, 29 op/s
Dec 13 03:20:58 np0005558241 nova_compute[248510]: 2025-12-13 08:20:58.415 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1650: 321 pgs: 321 active+clean; 358 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 11 KiB/s wr, 29 op/s
Dec 13 03:20:59 np0005558241 nova_compute[248510]: 2025-12-13 08:20:59.332 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:20:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:21:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1651: 321 pgs: 321 active+clean; 358 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 3.9 KiB/s wr, 27 op/s
Dec 13 03:21:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:02.491 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:21:02 np0005558241 nova_compute[248510]: 2025-12-13 08:21:02.492 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:02.493 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:21:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Dec 13 03:21:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Dec 13 03:21:02 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Dec 13 03:21:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1653: 321 pgs: 321 active+clean; 358 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 3.6 KiB/s wr, 20 op/s
Dec 13 03:21:03 np0005558241 nova_compute[248510]: 2025-12-13 08:21:03.381 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614048.3799386, ea00b062-2638-45e3-91fb-24240ef8912f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:21:03 np0005558241 nova_compute[248510]: 2025-12-13 08:21:03.381 248514 INFO nova.compute.manager [-] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:21:03 np0005558241 nova_compute[248510]: 2025-12-13 08:21:03.406 248514 DEBUG nova.compute.manager [None req-2b5c5bdb-22b2-4dd7-84cb-edc6b044a8d7 - - - - - -] [instance: ea00b062-2638-45e3-91fb-24240ef8912f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:21:03 np0005558241 nova_compute[248510]: 2025-12-13 08:21:03.419 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:04 np0005558241 nova_compute[248510]: 2025-12-13 08:21:04.333 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:21:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1654: 321 pgs: 321 active+clean; 358 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 4.6 KiB/s wr, 11 op/s
Dec 13 03:21:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:05.495 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:06 np0005558241 nova_compute[248510]: 2025-12-13 08:21:06.299 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance in state 1 after 54 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec 13 03:21:06 np0005558241 nova_compute[248510]: 2025-12-13 08:21:06.349 248514 DEBUG oslo_concurrency.lockutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:06 np0005558241 nova_compute[248510]: 2025-12-13 08:21:06.350 248514 DEBUG oslo_concurrency.lockutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:06 np0005558241 nova_compute[248510]: 2025-12-13 08:21:06.367 248514 DEBUG nova.compute.manager [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:21:06 np0005558241 nova_compute[248510]: 2025-12-13 08:21:06.487 248514 DEBUG oslo_concurrency.lockutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:06 np0005558241 nova_compute[248510]: 2025-12-13 08:21:06.488 248514 DEBUG oslo_concurrency.lockutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:06 np0005558241 nova_compute[248510]: 2025-12-13 08:21:06.497 248514 DEBUG nova.virt.hardware [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:21:06 np0005558241 nova_compute[248510]: 2025-12-13 08:21:06.498 248514 INFO nova.compute.claims [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:21:06 np0005558241 nova_compute[248510]: 2025-12-13 08:21:06.930 248514 DEBUG oslo_concurrency.processutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1655: 321 pgs: 321 active+clean; 358 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 4.6 KiB/s wr, 11 op/s
Dec 13 03:21:07 np0005558241 kernel: tap7f8a109e-e2 (unregistering): left promiscuous mode
Dec 13 03:21:07 np0005558241 NetworkManager[50376]: <info>  [1765614067.1170] device (tap7f8a109e-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:21:07 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:07Z|00138|binding|INFO|Releasing lport 7f8a109e-e262-4847-8430-ac7944dace5c from this chassis (sb_readonly=0)
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.127 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:07 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:07Z|00139|binding|INFO|Setting lport 7f8a109e-e262-4847-8430-ac7944dace5c down in Southbound
Dec 13 03:21:07 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:07Z|00140|binding|INFO|Removing iface tap7f8a109e-e2 ovn-installed in OVS
Dec 13 03:21:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:07.142 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:0a:44 10.100.0.14'], port_security=['fa:16:3e:e6:0a:44 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a14d8b88-7aec-468f-a550-881364e4d95e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7f8a109e-e262-4847-8430-ac7944dace5c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:21:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:07.143 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7f8a109e-e262-4847-8430-ac7944dace5c in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 unbound from our chassis#033[00m
Dec 13 03:21:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:07.145 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.146 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:07.164 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[40f9078a-eee3-45e6-a395-f77a4cf49850]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:07.201 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[688c8913-f898-46bd-9c2d-abcfa4d003d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:07.204 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a01fb298-e220-45e2-b917-0177aa53e138]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:07 np0005558241 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000011.scope: Deactivated successfully.
Dec 13 03:21:07 np0005558241 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000011.scope: Consumed 18.904s CPU time.
Dec 13 03:21:07 np0005558241 systemd-machined[210538]: Machine qemu-27-instance-00000011 terminated.
Dec 13 03:21:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:07.235 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[945ee5ec-6ba4-42b6-ba6c-5dabeefc2d42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:07.255 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ccd336-0585-47be-973a-d4d1b50d9010]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3139e1f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:6e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 16, 'rx_bytes': 994, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 16, 'rx_bytes': 994, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641497, 'reachable_time': 37181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277769, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:07.274 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[738e02c1-e589-4b06-ae38-1e360796e0f1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641512, 'tstamp': 641512}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277770, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641515, 'tstamp': 641515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277770, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:07.276 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.278 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.283 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:07.284 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3139e1f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:07.285 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:21:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:07.285 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3139e1f-d0, col_values=(('external_ids', {'iface-id': 'c8850a7f-53d6-4776-8c0e-7fb474fe52de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:07.286 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.356 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.363 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.372 248514 INFO nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance shutdown successfully after 55 seconds.#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.378 248514 INFO nova.virt.libvirt.driver [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance destroyed successfully.#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.398 248514 INFO nova.virt.libvirt.driver [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance destroyed successfully.#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.400 248514 DEBUG nova.virt.libvirt.vif [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:17:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-247332834',display_name='tempest-ServersAdminTestJSON-server-247332834',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-247332834',id=17,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:20:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-88vex2g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTestJSON-1079129122-project-mem
ber'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:20:09Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=a14d8b88-7aec-468f-a550-881364e4d95e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.400 248514 DEBUG nova.network.os_vif_util [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.401 248514 DEBUG nova.network.os_vif_util [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.402 248514 DEBUG os_vif [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.404 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.404 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f8a109e-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.407 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.410 248514 INFO os_vif [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2')#033[00m
Dec 13 03:21:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:21:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3046833984' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.522 248514 DEBUG oslo_concurrency.processutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.528 248514 DEBUG nova.compute.provider_tree [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.560 248514 DEBUG nova.scheduler.client.report [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.603 248514 DEBUG oslo_concurrency.lockutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.604 248514 DEBUG nova.compute.manager [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.669 248514 DEBUG nova.compute.manager [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.670 248514 DEBUG nova.network.neutron [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.704 248514 INFO nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.732 248514 DEBUG nova.compute.manager [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.756 248514 DEBUG nova.compute.manager [req-ce029894-6dac-4ab9-a321-7c5ba4b8d3d8 req-9ad76d1d-9383-46dd-87a1-89de5333abc9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received event network-vif-unplugged-7f8a109e-e262-4847-8430-ac7944dace5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.756 248514 DEBUG oslo_concurrency.lockutils [req-ce029894-6dac-4ab9-a321-7c5ba4b8d3d8 req-9ad76d1d-9383-46dd-87a1-89de5333abc9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.757 248514 DEBUG oslo_concurrency.lockutils [req-ce029894-6dac-4ab9-a321-7c5ba4b8d3d8 req-9ad76d1d-9383-46dd-87a1-89de5333abc9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.757 248514 DEBUG oslo_concurrency.lockutils [req-ce029894-6dac-4ab9-a321-7c5ba4b8d3d8 req-9ad76d1d-9383-46dd-87a1-89de5333abc9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.757 248514 DEBUG nova.compute.manager [req-ce029894-6dac-4ab9-a321-7c5ba4b8d3d8 req-9ad76d1d-9383-46dd-87a1-89de5333abc9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] No waiting events found dispatching network-vif-unplugged-7f8a109e-e262-4847-8430-ac7944dace5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.757 248514 WARNING nova.compute.manager [req-ce029894-6dac-4ab9-a321-7c5ba4b8d3d8 req-9ad76d1d-9383-46dd-87a1-89de5333abc9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received unexpected event network-vif-unplugged-7f8a109e-e262-4847-8430-ac7944dace5c for instance with vm_state active and task_state rebuilding.#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.853 248514 DEBUG nova.compute.manager [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.855 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.855 248514 INFO nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Creating image(s)#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.876 248514 DEBUG nova.storage.rbd_utils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.899 248514 DEBUG nova.storage.rbd_utils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.918 248514 DEBUG nova.storage.rbd_utils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:07 np0005558241 nova_compute[248510]: 2025-12-13 08:21:07.920 248514 DEBUG oslo_concurrency.processutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:08 np0005558241 nova_compute[248510]: 2025-12-13 08:21:08.031 248514 DEBUG nova.policy [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee4333dff699428ca3ae5201855ab430', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea37c93820f34312bc386ea952ecd94f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:21:08 np0005558241 nova_compute[248510]: 2025-12-13 08:21:08.035 248514 DEBUG oslo_concurrency.processutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:08 np0005558241 nova_compute[248510]: 2025-12-13 08:21:08.035 248514 DEBUG oslo_concurrency.lockutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:08 np0005558241 nova_compute[248510]: 2025-12-13 08:21:08.035 248514 DEBUG oslo_concurrency.lockutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:08 np0005558241 nova_compute[248510]: 2025-12-13 08:21:08.036 248514 DEBUG oslo_concurrency.lockutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:08 np0005558241 nova_compute[248510]: 2025-12-13 08:21:08.055 248514 DEBUG nova.storage.rbd_utils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:08 np0005558241 nova_compute[248510]: 2025-12-13 08:21:08.058 248514 DEBUG oslo_concurrency.processutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1656: 321 pgs: 321 active+clean; 358 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 20 KiB/s wr, 29 op/s
Dec 13 03:21:08 np0005558241 podman[277895]: 2025-12-13 08:21:08.975718175 +0000 UTC m=+0.062136621 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:21:08 np0005558241 podman[277894]: 2025-12-13 08:21:08.986272905 +0000 UTC m=+0.073584123 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 13 03:21:09 np0005558241 podman[277893]: 2025-12-13 08:21:09.021470382 +0000 UTC m=+0.110122993 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 03:21:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:21:09
Dec 13 03:21:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:21:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:21:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'vms', 'volumes', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'images']
Dec 13 03:21:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:21:09 np0005558241 nova_compute[248510]: 2025-12-13 08:21:09.335 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:09 np0005558241 nova_compute[248510]: 2025-12-13 08:21:09.529 248514 DEBUG oslo_concurrency.processutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:21:09 np0005558241 nova_compute[248510]: 2025-12-13 08:21:09.601 248514 DEBUG nova.storage.rbd_utils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] resizing rbd image 2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:21:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Dec 13 03:21:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Dec 13 03:21:09 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Dec 13 03:21:09 np0005558241 nova_compute[248510]: 2025-12-13 08:21:09.875 248514 DEBUG nova.objects.instance [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'migration_context' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:21:09 np0005558241 nova_compute[248510]: 2025-12-13 08:21:09.891 248514 DEBUG nova.compute.manager [req-036d24d9-33e6-4c7a-86d1-5ed5aefe9ef1 req-4b5a9c17-48d1-4d59-8c8c-8f7ffb762c0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:09 np0005558241 nova_compute[248510]: 2025-12-13 08:21:09.892 248514 DEBUG oslo_concurrency.lockutils [req-036d24d9-33e6-4c7a-86d1-5ed5aefe9ef1 req-4b5a9c17-48d1-4d59-8c8c-8f7ffb762c0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:09 np0005558241 nova_compute[248510]: 2025-12-13 08:21:09.892 248514 DEBUG oslo_concurrency.lockutils [req-036d24d9-33e6-4c7a-86d1-5ed5aefe9ef1 req-4b5a9c17-48d1-4d59-8c8c-8f7ffb762c0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:09 np0005558241 nova_compute[248510]: 2025-12-13 08:21:09.892 248514 DEBUG oslo_concurrency.lockutils [req-036d24d9-33e6-4c7a-86d1-5ed5aefe9ef1 req-4b5a9c17-48d1-4d59-8c8c-8f7ffb762c0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:09 np0005558241 nova_compute[248510]: 2025-12-13 08:21:09.893 248514 DEBUG nova.compute.manager [req-036d24d9-33e6-4c7a-86d1-5ed5aefe9ef1 req-4b5a9c17-48d1-4d59-8c8c-8f7ffb762c0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] No waiting events found dispatching network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:21:09 np0005558241 nova_compute[248510]: 2025-12-13 08:21:09.893 248514 WARNING nova.compute.manager [req-036d24d9-33e6-4c7a-86d1-5ed5aefe9ef1 req-4b5a9c17-48d1-4d59-8c8c-8f7ffb762c0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received unexpected event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c for instance with vm_state active and task_state rebuilding.#033[00m
Dec 13 03:21:09 np0005558241 nova_compute[248510]: 2025-12-13 08:21:09.896 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:21:09 np0005558241 nova_compute[248510]: 2025-12-13 08:21:09.897 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Ensure instance console log exists: /var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:21:09 np0005558241 nova_compute[248510]: 2025-12-13 08:21:09.897 248514 DEBUG oslo_concurrency.lockutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:09 np0005558241 nova_compute[248510]: 2025-12-13 08:21:09.898 248514 DEBUG oslo_concurrency.lockutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:09 np0005558241 nova_compute[248510]: 2025-12-13 08:21:09.898 248514 DEBUG oslo_concurrency.lockutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.022 248514 INFO nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Deleting instance files /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e_del#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.022 248514 INFO nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Deletion of /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e_del complete#033[00m
Dec 13 03:21:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:21:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:21:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:21:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:21:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:21:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.219 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.220 248514 INFO nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Creating image(s)#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.246 248514 DEBUG nova.storage.rbd_utils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.275 248514 DEBUG nova.storage.rbd_utils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.301 248514 DEBUG nova.storage.rbd_utils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.306 248514 DEBUG oslo_concurrency.processutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.339 248514 DEBUG nova.network.neutron [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Successfully created port: 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.390 248514 DEBUG oslo_concurrency.processutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.391 248514 DEBUG oslo_concurrency.lockutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.392 248514 DEBUG oslo_concurrency.lockutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.392 248514 DEBUG oslo_concurrency.lockutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.426 248514 DEBUG nova.storage.rbd_utils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.432 248514 DEBUG oslo_concurrency.processutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 a14d8b88-7aec-468f-a550-881364e4d95e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:21:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:21:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:21:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:21:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:21:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:21:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:21:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:21:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:21:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.745 248514 DEBUG oslo_concurrency.processutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 a14d8b88-7aec-468f-a550-881364e4d95e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.813 248514 DEBUG nova.storage.rbd_utils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] resizing rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.893 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.894 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Ensure instance console log exists: /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.894 248514 DEBUG oslo_concurrency.lockutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.894 248514 DEBUG oslo_concurrency.lockutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.895 248514 DEBUG oslo_concurrency.lockutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.897 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Start _get_guest_xml network_info=[{"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.900 248514 WARNING nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.904 248514 DEBUG nova.virt.libvirt.host [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.904 248514 DEBUG nova.virt.libvirt.host [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.907 248514 DEBUG nova.virt.libvirt.host [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.907 248514 DEBUG nova.virt.libvirt.host [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.907 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.907 248514 DEBUG nova.virt.hardware [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.908 248514 DEBUG nova.virt.hardware [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.908 248514 DEBUG nova.virt.hardware [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.908 248514 DEBUG nova.virt.hardware [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.908 248514 DEBUG nova.virt.hardware [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.908 248514 DEBUG nova.virt.hardware [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.909 248514 DEBUG nova.virt.hardware [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.909 248514 DEBUG nova.virt.hardware [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.909 248514 DEBUG nova.virt.hardware [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.909 248514 DEBUG nova.virt.hardware [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.909 248514 DEBUG nova.virt.hardware [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.909 248514 DEBUG nova.objects.instance [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'vcpu_model' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:21:10 np0005558241 nova_compute[248510]: 2025-12-13 08:21:10.952 248514 DEBUG oslo_concurrency.processutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1658: 321 pgs: 321 active+clean; 358 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 24 KiB/s wr, 28 op/s
Dec 13 03:21:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:21:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2339773804' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:21:11 np0005558241 nova_compute[248510]: 2025-12-13 08:21:11.505 248514 DEBUG oslo_concurrency.processutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:11 np0005558241 nova_compute[248510]: 2025-12-13 08:21:11.525 248514 DEBUG nova.storage.rbd_utils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:11 np0005558241 nova_compute[248510]: 2025-12-13 08:21:11.529 248514 DEBUG oslo_concurrency.processutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:21:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4189394162' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.080 248514 DEBUG oslo_concurrency.processutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.082 248514 DEBUG nova.virt.libvirt.vif [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:17:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-247332834',display_name='tempest-ServersAdminTestJSON-server-247332834',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-247332834',id=17,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:20:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-88vex2g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTestJSON-1079129122-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:21:10Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=a14d8b88-7aec-468f-a550-881364e4d95e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.082 248514 DEBUG nova.network.os_vif_util [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.084 248514 DEBUG nova.network.os_vif_util [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.086 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:21:12 np0005558241 nova_compute[248510]:  <uuid>a14d8b88-7aec-468f-a550-881364e4d95e</uuid>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:  <name>instance-00000011</name>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersAdminTestJSON-server-247332834</nova:name>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:21:10</nova:creationTime>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:21:12 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:        <nova:user uuid="0d3578a9aa9a4f4facf4009a71564a31">tempest-ServersAdminTestJSON-1079129122-project-member</nova:user>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:        <nova:project uuid="1f6b42f9712f4afea6a07d08373b56ca">tempest-ServersAdminTestJSON-1079129122</nova:project>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:        <nova:port uuid="7f8a109e-e262-4847-8430-ac7944dace5c">
Dec 13 03:21:12 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <entry name="serial">a14d8b88-7aec-468f-a550-881364e4d95e</entry>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <entry name="uuid">a14d8b88-7aec-468f-a550-881364e4d95e</entry>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/a14d8b88-7aec-468f-a550-881364e4d95e_disk">
Dec 13 03:21:12 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:21:12 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/a14d8b88-7aec-468f-a550-881364e4d95e_disk.config">
Dec 13 03:21:12 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:21:12 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:e6:0a:44"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <target dev="tap7f8a109e-e2"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/console.log" append="off"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:21:12 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:21:12 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:21:12 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:21:12 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.087 248514 DEBUG nova.virt.libvirt.vif [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:17:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-247332834',display_name='tempest-ServersAdminTestJSON-server-247332834',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-247332834',id=17,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:20:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-88vex2g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTestJSON-1079129122-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:21:10Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=a14d8b88-7aec-468f-a550-881364e4d95e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.087 248514 DEBUG nova.network.os_vif_util [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.088 248514 DEBUG nova.network.os_vif_util [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.088 248514 DEBUG os_vif [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.089 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.089 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.090 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.093 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.093 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f8a109e-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.094 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7f8a109e-e2, col_values=(('external_ids', {'iface-id': '7f8a109e-e262-4847-8430-ac7944dace5c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:0a:44', 'vm-uuid': 'a14d8b88-7aec-468f-a550-881364e4d95e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.095 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:12 np0005558241 NetworkManager[50376]: <info>  [1765614072.0965] manager: (tap7f8a109e-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.100 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.103 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.104 248514 INFO os_vif [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2')#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.189 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.190 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.190 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] No VIF found with MAC fa:16:3e:e6:0a:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.191 248514 INFO nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Using config drive#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.212 248514 DEBUG nova.storage.rbd_utils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.282 248514 DEBUG nova.objects.instance [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'ec2_ids' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.335 248514 DEBUG nova.objects.instance [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'keypairs' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.412 248514 DEBUG nova.network.neutron [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Successfully updated port: 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.436 248514 DEBUG oslo_concurrency.lockutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.436 248514 DEBUG oslo_concurrency.lockutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.436 248514 DEBUG nova.network.neutron [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.577 248514 DEBUG nova.compute.manager [req-8cfe21be-8597-4224-9020-e0189461e6e0 req-31d2f6de-4684-420d-9949-fc24649de1bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-changed-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.578 248514 DEBUG nova.compute.manager [req-8cfe21be-8597-4224-9020-e0189461e6e0 req-31d2f6de-4684-420d-9949-fc24649de1bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Refreshing instance network info cache due to event network-changed-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.578 248514 DEBUG oslo_concurrency.lockutils [req-8cfe21be-8597-4224-9020-e0189461e6e0 req-31d2f6de-4684-420d-9949-fc24649de1bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.692 248514 DEBUG nova.network.neutron [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.800 248514 INFO nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Creating config drive at /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/disk.config#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.805 248514 DEBUG oslo_concurrency.processutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphgbk6zlp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.952 248514 DEBUG oslo_concurrency.processutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphgbk6zlp" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1659: 321 pgs: 321 active+clean; 374 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.984 248514 DEBUG nova.storage.rbd_utils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] rbd image a14d8b88-7aec-468f-a550-881364e4d95e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:12 np0005558241 nova_compute[248510]: 2025-12-13 08:21:12.989 248514 DEBUG oslo_concurrency.processutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/disk.config a14d8b88-7aec-468f-a550-881364e4d95e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Dec 13 03:21:13 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.619 248514 DEBUG oslo_concurrency.processutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/disk.config a14d8b88-7aec-468f-a550-881364e4d95e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.620 248514 INFO nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Deleting local config drive /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e/disk.config because it was imported into RBD.#033[00m
Dec 13 03:21:13 np0005558241 kernel: tap7f8a109e-e2: entered promiscuous mode
Dec 13 03:21:13 np0005558241 NetworkManager[50376]: <info>  [1765614073.6836] manager: (tap7f8a109e-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Dec 13 03:21:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:13Z|00141|binding|INFO|Claiming lport 7f8a109e-e262-4847-8430-ac7944dace5c for this chassis.
Dec 13 03:21:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:13Z|00142|binding|INFO|7f8a109e-e262-4847-8430-ac7944dace5c: Claiming fa:16:3e:e6:0a:44 10.100.0.14
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.683 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:13.694 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:0a:44 10.100.0.14'], port_security=['fa:16:3e:e6:0a:44 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a14d8b88-7aec-468f-a550-881364e4d95e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7f8a109e-e262-4847-8430-ac7944dace5c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:21:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:13.695 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7f8a109e-e262-4847-8430-ac7944dace5c in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 bound to our chassis#033[00m
Dec 13 03:21:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:13.697 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5#033[00m
Dec 13 03:21:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:13Z|00143|binding|INFO|Setting lport 7f8a109e-e262-4847-8430-ac7944dace5c ovn-installed in OVS
Dec 13 03:21:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:13Z|00144|binding|INFO|Setting lport 7f8a109e-e262-4847-8430-ac7944dace5c up in Southbound
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.702 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.704 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:13 np0005558241 systemd-udevd[278329]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:21:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:13.723 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[16292fd1-0d7b-42be-ac4c-da6eb9847427]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:13 np0005558241 systemd-machined[210538]: New machine qemu-28-instance-00000011.
Dec 13 03:21:13 np0005558241 NetworkManager[50376]: <info>  [1765614073.7364] device (tap7f8a109e-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:21:13 np0005558241 NetworkManager[50376]: <info>  [1765614073.7372] device (tap7f8a109e-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:21:13 np0005558241 systemd[1]: Started Virtual Machine qemu-28-instance-00000011.
Dec 13 03:21:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:13.763 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe90068-abdb-4b56-b518-8bd9dcfcb9b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:13.767 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[eed1ef8a-2ac8-471b-9b7e-ae4f77cca71e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:13.804 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6a93fc0d-2803-4d4b-8633-130be5ec6932]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:13.824 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd8588b6-8392-438d-a6b1-61594a9f678b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3139e1f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:6e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 18, 'rx_bytes': 994, 'tx_bytes': 948, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 18, 'rx_bytes': 994, 'tx_bytes': 948, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641497, 'reachable_time': 37181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278343, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:13.843 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d3d864-d0fd-4c4d-bcd8-ab39d1852921]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641512, 'tstamp': 641512}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278344, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641515, 'tstamp': 641515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278344, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:13.845 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.847 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:13.848 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3139e1f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:13.849 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:21:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:13.849 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3139e1f-d0, col_values=(('external_ids', {'iface-id': 'c8850a7f-53d6-4776-8c0e-7fb474fe52de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:13.849 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.934 248514 DEBUG nova.network.neutron [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.961 248514 DEBUG oslo_concurrency.lockutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.961 248514 DEBUG nova.compute.manager [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Instance network_info: |[{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.962 248514 DEBUG oslo_concurrency.lockutils [req-8cfe21be-8597-4224-9020-e0189461e6e0 req-31d2f6de-4684-420d-9949-fc24649de1bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.962 248514 DEBUG nova.network.neutron [req-8cfe21be-8597-4224-9020-e0189461e6e0 req-31d2f6de-4684-420d-9949-fc24649de1bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Refreshing network info cache for port 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.965 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Start _get_guest_xml network_info=[{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.970 248514 WARNING nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.976 248514 DEBUG nova.virt.libvirt.host [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.977 248514 DEBUG nova.virt.libvirt.host [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.982 248514 DEBUG nova.virt.libvirt.host [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.982 248514 DEBUG nova.virt.libvirt.host [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.982 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.983 248514 DEBUG nova.virt.hardware [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.983 248514 DEBUG nova.virt.hardware [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.983 248514 DEBUG nova.virt.hardware [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.983 248514 DEBUG nova.virt.hardware [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.984 248514 DEBUG nova.virt.hardware [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.984 248514 DEBUG nova.virt.hardware [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.984 248514 DEBUG nova.virt.hardware [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.984 248514 DEBUG nova.virt.hardware [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.985 248514 DEBUG nova.virt.hardware [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.985 248514 DEBUG nova.virt.hardware [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.985 248514 DEBUG nova.virt.hardware [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:21:13 np0005558241 nova_compute[248510]: 2025-12-13 08:21:13.988 248514 DEBUG oslo_concurrency.processutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:14 np0005558241 nova_compute[248510]: 2025-12-13 08:21:14.337 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:21:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:21:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1177335100' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:21:14 np0005558241 nova_compute[248510]: 2025-12-13 08:21:14.712 248514 DEBUG oslo_concurrency.processutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.724s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:14 np0005558241 nova_compute[248510]: 2025-12-13 08:21:14.741 248514 DEBUG nova.storage.rbd_utils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:14 np0005558241 nova_compute[248510]: 2025-12-13 08:21:14.745 248514 DEBUG oslo_concurrency.processutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1661: 321 pgs: 321 active+clean; 372 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 115 KiB/s rd, 5.3 MiB/s wr, 173 op/s
Dec 13 03:21:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:21:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/457289863' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:21:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:21:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/457289863' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.228 248514 DEBUG nova.compute.manager [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.231 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.233 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for a14d8b88-7aec-468f-a550-881364e4d95e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.233 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614075.2229166, a14d8b88-7aec-468f-a550-881364e4d95e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.233 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] VM Resumed (Lifecycle Event)
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.264 248514 INFO nova.virt.libvirt.driver [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance spawned successfully.
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.265 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.284 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.292 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.293 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.293 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.293 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.294 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.294 248514 DEBUG nova.virt.libvirt.driver [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.297 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.342 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.343 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614075.2237046, a14d8b88-7aec-468f-a550-881364e4d95e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.344 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] VM Started (Lifecycle Event)
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.360 248514 DEBUG nova.compute.manager [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:21:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:21:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3213241258' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.371 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.375 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.392 248514 DEBUG oslo_concurrency.processutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.394 248514 DEBUG nova.virt.libvirt.vif [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:21:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.394 248514 DEBUG nova.network.os_vif_util [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.395 248514 DEBUG nova.network.os_vif_util [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:c0:80,bridge_name='br-int',has_traffic_filtering=True,id=10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10aa2df4-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.396 248514 DEBUG nova.objects.instance [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'pci_devices' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.460 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.469 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:21:15 np0005558241 nova_compute[248510]:  <uuid>2e309dc2-3cab-4ecf-8be7-eab85790a0da</uuid>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:  <name>instance-00000018</name>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1637815753</nova:name>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:21:13</nova:creationTime>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:21:15 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:        <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:        <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:        <nova:port uuid="10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4">
Dec 13 03:21:15 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <entry name="serial">2e309dc2-3cab-4ecf-8be7-eab85790a0da</entry>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <entry name="uuid">2e309dc2-3cab-4ecf-8be7-eab85790a0da</entry>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk">
Dec 13 03:21:15 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:21:15 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk.config">
Dec 13 03:21:15 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:21:15 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:54:c0:80"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <target dev="tap10aa2df4-a7"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/console.log" append="off"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:21:15 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:21:15 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:21:15 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:21:15 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.469 248514 DEBUG nova.compute.manager [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Preparing to wait for external event network-vif-plugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.470 248514 DEBUG oslo_concurrency.lockutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.470 248514 DEBUG oslo_concurrency.lockutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.470 248514 DEBUG oslo_concurrency.lockutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.472 248514 DEBUG nova.virt.libvirt.vif [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:21:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.472 248514 DEBUG nova.network.os_vif_util [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.473 248514 DEBUG nova.network.os_vif_util [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:c0:80,bridge_name='br-int',has_traffic_filtering=True,id=10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10aa2df4-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.474 248514 DEBUG os_vif [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:c0:80,bridge_name='br-int',has_traffic_filtering=True,id=10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10aa2df4-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.474 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.475 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.475 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.478 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.479 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10aa2df4-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.480 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap10aa2df4-a7, col_values=(('external_ids', {'iface-id': '10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:c0:80', 'vm-uuid': '2e309dc2-3cab-4ecf-8be7-eab85790a0da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:15 np0005558241 NetworkManager[50376]: <info>  [1765614075.4827] manager: (tap10aa2df4-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.485 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.493 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.495 248514 INFO os_vif [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:c0:80,bridge_name='br-int',has_traffic_filtering=True,id=10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10aa2df4-a7')#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.498 248514 DEBUG oslo_concurrency.lockutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.499 248514 DEBUG oslo_concurrency.lockutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.499 248514 DEBUG nova.objects.instance [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.592 248514 DEBUG oslo_concurrency.lockutils [None req-c4053f14-0e88-4ea4-af68-e5919d2bb708 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.653 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.654 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.654 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:54:c0:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.654 248514 INFO nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Using config drive#033[00m
Dec 13 03:21:15 np0005558241 nova_compute[248510]: 2025-12-13 08:21:15.681 248514 DEBUG nova.storage.rbd_utils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.048 248514 DEBUG nova.compute.manager [req-1544311e-e937-45a9-8d7d-33f32c558ba3 req-a8bbba27-44da-4c9b-9d03-b841a93a2a57 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.049 248514 DEBUG oslo_concurrency.lockutils [req-1544311e-e937-45a9-8d7d-33f32c558ba3 req-a8bbba27-44da-4c9b-9d03-b841a93a2a57 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.049 248514 DEBUG oslo_concurrency.lockutils [req-1544311e-e937-45a9-8d7d-33f32c558ba3 req-a8bbba27-44da-4c9b-9d03-b841a93a2a57 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.050 248514 DEBUG oslo_concurrency.lockutils [req-1544311e-e937-45a9-8d7d-33f32c558ba3 req-a8bbba27-44da-4c9b-9d03-b841a93a2a57 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.050 248514 DEBUG nova.compute.manager [req-1544311e-e937-45a9-8d7d-33f32c558ba3 req-a8bbba27-44da-4c9b-9d03-b841a93a2a57 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] No waiting events found dispatching network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.050 248514 WARNING nova.compute.manager [req-1544311e-e937-45a9-8d7d-33f32c558ba3 req-a8bbba27-44da-4c9b-9d03-b841a93a2a57 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received unexpected event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c for instance with vm_state active and task_state None.#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.247 248514 INFO nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Creating config drive at /var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/disk.config#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.253 248514 DEBUG oslo_concurrency.processutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt1bjqlut execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.293 248514 DEBUG nova.network.neutron [req-8cfe21be-8597-4224-9020-e0189461e6e0 req-31d2f6de-4684-420d-9949-fc24649de1bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updated VIF entry in instance network info cache for port 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.295 248514 DEBUG nova.network.neutron [req-8cfe21be-8597-4224-9020-e0189461e6e0 req-31d2f6de-4684-420d-9949-fc24649de1bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.313 248514 DEBUG oslo_concurrency.lockutils [req-8cfe21be-8597-4224-9020-e0189461e6e0 req-31d2f6de-4684-420d-9949-fc24649de1bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.398 248514 DEBUG oslo_concurrency.processutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt1bjqlut" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.434 248514 DEBUG nova.storage.rbd_utils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.440 248514 DEBUG oslo_concurrency.processutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/disk.config 2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.483 248514 DEBUG oslo_concurrency.lockutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "07413df5-0bb8-42c2-95ff-13458d598139" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.483 248514 DEBUG oslo_concurrency.lockutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "07413df5-0bb8-42c2-95ff-13458d598139" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.503 248514 DEBUG nova.compute.manager [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.584 248514 DEBUG oslo_concurrency.lockutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.585 248514 DEBUG oslo_concurrency.lockutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.591 248514 DEBUG nova.virt.hardware [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.591 248514 INFO nova.compute.claims [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.723 248514 DEBUG oslo_concurrency.processutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/disk.config 2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.723 248514 INFO nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Deleting local config drive /var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/disk.config because it was imported into RBD.#033[00m
Dec 13 03:21:16 np0005558241 NetworkManager[50376]: <info>  [1765614076.7796] manager: (tap10aa2df4-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Dec 13 03:21:16 np0005558241 kernel: tap10aa2df4-a7: entered promiscuous mode
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.784 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:16Z|00145|binding|INFO|Claiming lport 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 for this chassis.
Dec 13 03:21:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:16Z|00146|binding|INFO|10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4: Claiming fa:16:3e:54:c0:80 10.100.0.7
Dec 13 03:21:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:16.813 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:c0:80 10.100.0.7'], port_security=['fa:16:3e:54:c0:80 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2e309dc2-3cab-4ecf-8be7-eab85790a0da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '71562f64-f92d-4728-bc7f-33bdc44249e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:21:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:16.814 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 bound to our chassis#033[00m
Dec 13 03:21:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:16.816 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92#033[00m
Dec 13 03:21:16 np0005558241 systemd-udevd[278523]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:21:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:16.828 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[17fc604b-6f5c-4297-8612-b7886517099f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:16.830 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1ca92864-31 in ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:21:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:16.831 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1ca92864-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:21:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:16.831 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cf2c10dd-3f2f-45c0-b5d0-45f1b0a1ecde]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.831 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:16.832 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d888f5e0-3bf9-417c-8cb9-1536e57a0a5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:16 np0005558241 systemd-machined[210538]: New machine qemu-29-instance-00000018.
Dec 13 03:21:16 np0005558241 NetworkManager[50376]: <info>  [1765614076.8408] device (tap10aa2df4-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:21:16 np0005558241 NetworkManager[50376]: <info>  [1765614076.8415] device (tap10aa2df4-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:21:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:16.848 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b87d1d-89c0-4a16-b1c0-149adc1d0488]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:16Z|00147|binding|INFO|Setting lport 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 ovn-installed in OVS
Dec 13 03:21:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:16Z|00148|binding|INFO|Setting lport 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 up in Southbound
Dec 13 03:21:16 np0005558241 nova_compute[248510]: 2025-12-13 08:21:16.909 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:16 np0005558241 systemd[1]: Started Virtual Machine qemu-29-instance-00000018.
Dec 13 03:21:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:16.921 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5fc29b-a507-487a-a931-8fa9d5e3a32d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1662: 321 pgs: 321 active+clean; 372 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 5.3 MiB/s wr, 150 op/s
Dec 13 03:21:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:16.970 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[adc1a1c5-69ae-42bc-a1a9-2946944c958e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:16 np0005558241 systemd-udevd[278527]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:21:16 np0005558241 NetworkManager[50376]: <info>  [1765614076.9779] manager: (tap1ca92864-30): new Veth device (/org/freedesktop/NetworkManager/Devices/76)
Dec 13 03:21:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:16.977 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3958b3e2-3563-4ef6-a328-985b240b4727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:17.020 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[74c1ee98-7e29-449a-b95c-ce0a3b489b9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:17.024 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0c655a-feb8-4a80-9c25-290058341a33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:17 np0005558241 NetworkManager[50376]: <info>  [1765614077.0559] device (tap1ca92864-30): carrier: link connected
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:17.061 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8a11e2fb-bdfd-404e-9be8-a9ff5be7bce5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:17.082 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2da814b1-710e-471b-9cd8-bf1d99423f0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665427, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278576, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:17.102 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9d54a47e-8b4d-4fca-a65f-75a02b601920]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:e2af'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665427, 'tstamp': 665427}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278577, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:17.122 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e4bff21f-9f21-408c-b506-16797a90ad6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665427, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278578, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:17.161 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d3e30040-715b-4e58-9c97-4e116140fce5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:17.236 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e94a770f-0448-41c7-ba24-13f86d02f097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:17.237 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:17.237 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:17.238 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:17 np0005558241 nova_compute[248510]: 2025-12-13 08:21:17.239 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:17 np0005558241 kernel: tap1ca92864-30: entered promiscuous mode
Dec 13 03:21:17 np0005558241 NetworkManager[50376]: <info>  [1765614077.2407] manager: (tap1ca92864-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:17.242 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:17 np0005558241 nova_compute[248510]: 2025-12-13 08:21:17.243 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:17Z|00149|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 03:21:17 np0005558241 nova_compute[248510]: 2025-12-13 08:21:17.263 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:17.264 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1ca92864-3b70-4794-9db1-fa08128cef92.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1ca92864-3b70-4794-9db1-fa08128cef92.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:17.265 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed4abfa-2b02-4504-bd89-1c81cde05ce4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:17.265 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/1ca92864-3b70-4794-9db1-fa08128cef92.pid.haproxy
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:21:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:17.266 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'env', 'PROCESS_TAG=haproxy-1ca92864-3b70-4794-9db1-fa08128cef92', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1ca92864-3b70-4794-9db1-fa08128cef92.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:21:17 np0005558241 nova_compute[248510]: 2025-12-13 08:21:17.367 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614077.3665993, 2e309dc2-3cab-4ecf-8be7-eab85790a0da => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:21:17 np0005558241 nova_compute[248510]: 2025-12-13 08:21:17.368 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] VM Started (Lifecycle Event)#033[00m
Dec 13 03:21:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:21:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2373161858' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:21:17 np0005558241 nova_compute[248510]: 2025-12-13 08:21:17.466 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:17 np0005558241 nova_compute[248510]: 2025-12-13 08:21:17.473 248514 DEBUG nova.compute.provider_tree [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:21:17 np0005558241 nova_compute[248510]: 2025-12-13 08:21:17.484 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:21:17 np0005558241 nova_compute[248510]: 2025-12-13 08:21:17.490 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614077.368307, 2e309dc2-3cab-4ecf-8be7-eab85790a0da => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:21:17 np0005558241 nova_compute[248510]: 2025-12-13 08:21:17.490 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:21:17 np0005558241 nova_compute[248510]: 2025-12-13 08:21:17.501 248514 DEBUG nova.scheduler.client.report [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:21:17 np0005558241 nova_compute[248510]: 2025-12-13 08:21:17.514 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:21:17 np0005558241 nova_compute[248510]: 2025-12-13 08:21:17.518 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:21:17 np0005558241 podman[278654]: 2025-12-13 08:21:17.667998264 +0000 UTC m=+0.051271024 container create c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:21:17 np0005558241 systemd[1]: Started libpod-conmon-c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7.scope.
Dec 13 03:21:17 np0005558241 podman[278654]: 2025-12-13 08:21:17.639274767 +0000 UTC m=+0.022547557 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:21:17 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:21:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d74ef57e5fc97f32c1e7bb2787cfa72693667d60ba34de8c1be9c45989ad24ab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:21:17 np0005558241 podman[278654]: 2025-12-13 08:21:17.763553128 +0000 UTC m=+0.146825908 container init c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 03:21:17 np0005558241 podman[278654]: 2025-12-13 08:21:17.77175759 +0000 UTC m=+0.155030350 container start c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 13 03:21:17 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[278670]: [NOTICE]   (278674) : New worker (278676) forked
Dec 13 03:21:17 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[278670]: [NOTICE]   (278674) : Loading success.
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.471 248514 DEBUG oslo_concurrency.lockutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.472 248514 DEBUG nova.compute.manager [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.476 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.537 248514 DEBUG nova.compute.manager [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.537 248514 DEBUG nova.network.neutron [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.577 248514 INFO nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.608 248514 DEBUG nova.compute.manager [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.704 248514 DEBUG nova.compute.manager [req-300ff556-bb09-464f-90ab-572c4c3cca16 req-ebb66f76-0831-4a1a-9d4e-91b6392d7f89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.705 248514 DEBUG oslo_concurrency.lockutils [req-300ff556-bb09-464f-90ab-572c4c3cca16 req-ebb66f76-0831-4a1a-9d4e-91b6392d7f89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.705 248514 DEBUG oslo_concurrency.lockutils [req-300ff556-bb09-464f-90ab-572c4c3cca16 req-ebb66f76-0831-4a1a-9d4e-91b6392d7f89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.705 248514 DEBUG oslo_concurrency.lockutils [req-300ff556-bb09-464f-90ab-572c4c3cca16 req-ebb66f76-0831-4a1a-9d4e-91b6392d7f89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.705 248514 DEBUG nova.compute.manager [req-300ff556-bb09-464f-90ab-572c4c3cca16 req-ebb66f76-0831-4a1a-9d4e-91b6392d7f89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] No waiting events found dispatching network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.705 248514 WARNING nova.compute.manager [req-300ff556-bb09-464f-90ab-572c4c3cca16 req-ebb66f76-0831-4a1a-9d4e-91b6392d7f89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received unexpected event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c for instance with vm_state active and task_state None.#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.726 248514 DEBUG nova.compute.manager [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.730 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.730 248514 INFO nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Creating image(s)#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.750 248514 DEBUG nova.storage.rbd_utils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 07413df5-0bb8-42c2-95ff-13458d598139_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.779 248514 DEBUG nova.storage.rbd_utils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 07413df5-0bb8-42c2-95ff-13458d598139_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.807 248514 DEBUG nova.storage.rbd_utils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 07413df5-0bb8-42c2-95ff-13458d598139_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.811 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.895 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.896 248514 DEBUG oslo_concurrency.lockutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.897 248514 DEBUG oslo_concurrency.lockutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.897 248514 DEBUG oslo_concurrency.lockutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.921 248514 DEBUG nova.storage.rbd_utils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 07413df5-0bb8-42c2-95ff-13458d598139_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:18 np0005558241 nova_compute[248510]: 2025-12-13 08:21:18.924 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 07413df5-0bb8-42c2-95ff-13458d598139_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1663: 321 pgs: 321 active+clean; 372 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.7 MiB/s wr, 235 op/s
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.101 248514 DEBUG nova.network.neutron [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.102 248514 DEBUG nova.compute.manager [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.163 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "9b6188af-75f0-4213-89c2-bd3eb72960b7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.164 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "9b6188af-75f0-4213-89c2-bd3eb72960b7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.255 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 07413df5-0bb8-42c2-95ff-13458d598139_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.326 248514 DEBUG nova.storage.rbd_utils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] resizing rbd image 07413df5-0bb8-42c2-95ff-13458d598139_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.357 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.415 248514 DEBUG nova.objects.instance [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lazy-loading 'migration_context' on Instance uuid 07413df5-0bb8-42c2-95ff-13458d598139 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.450 248514 DEBUG nova.compute.manager [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.525 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.525 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Ensure instance console log exists: /var/lib/nova/instances/07413df5-0bb8-42c2-95ff-13458d598139/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.526 248514 DEBUG oslo_concurrency.lockutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.526 248514 DEBUG oslo_concurrency.lockutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.526 248514 DEBUG oslo_concurrency.lockutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.528 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.533 248514 WARNING nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.538 248514 DEBUG nova.virt.libvirt.host [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.539 248514 DEBUG nova.virt.libvirt.host [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.543 248514 DEBUG nova.virt.libvirt.host [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.544 248514 DEBUG nova.virt.libvirt.host [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.544 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.544 248514 DEBUG nova.virt.hardware [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.545 248514 DEBUG nova.virt.hardware [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.545 248514 DEBUG nova.virt.hardware [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.545 248514 DEBUG nova.virt.hardware [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.545 248514 DEBUG nova.virt.hardware [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.545 248514 DEBUG nova.virt.hardware [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.546 248514 DEBUG nova.virt.hardware [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.546 248514 DEBUG nova.virt.hardware [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.546 248514 DEBUG nova.virt.hardware [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.546 248514 DEBUG nova.virt.hardware [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.546 248514 DEBUG nova.virt.hardware [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.549 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.654 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.655 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.663 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.664 248514 INFO nova.compute.claims [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:21:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:21:19 np0005558241 nova_compute[248510]: 2025-12-13 08:21:19.907 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:21:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3359856825' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:21:20 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 03:21:20 np0005558241 nova_compute[248510]: 2025-12-13 08:21:20.150 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:20 np0005558241 nova_compute[248510]: 2025-12-13 08:21:20.178 248514 DEBUG nova.storage.rbd_utils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 07413df5-0bb8-42c2-95ff-13458d598139_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:20 np0005558241 nova_compute[248510]: 2025-12-13 08:21:20.182 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:21:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3037640602' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:21:20 np0005558241 nova_compute[248510]: 2025-12-13 08:21:20.483 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:20 np0005558241 nova_compute[248510]: 2025-12-13 08:21:20.501 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:20 np0005558241 nova_compute[248510]: 2025-12-13 08:21:20.508 248514 DEBUG nova.compute.provider_tree [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:21:20 np0005558241 nova_compute[248510]: 2025-12-13 08:21:20.531 248514 DEBUG nova.scheduler.client.report [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:21:20 np0005558241 nova_compute[248510]: 2025-12-13 08:21:20.565 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:20 np0005558241 nova_compute[248510]: 2025-12-13 08:21:20.566 248514 DEBUG nova.compute.manager [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:21:20 np0005558241 nova_compute[248510]: 2025-12-13 08:21:20.683 248514 DEBUG nova.compute.manager [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:21:20 np0005558241 nova_compute[248510]: 2025-12-13 08:21:20.684 248514 DEBUG nova.network.neutron [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:21:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:21:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1065062239' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:21:20 np0005558241 nova_compute[248510]: 2025-12-13 08:21:20.744 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:20 np0005558241 nova_compute[248510]: 2025-12-13 08:21:20.746 248514 DEBUG nova.objects.instance [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lazy-loading 'pci_devices' on Instance uuid 07413df5-0bb8-42c2-95ff-13458d598139 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:21:20 np0005558241 nova_compute[248510]: 2025-12-13 08:21:20.796 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:21:20 np0005558241 nova_compute[248510]:  <uuid>07413df5-0bb8-42c2-95ff-13458d598139</uuid>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:  <name>instance-00000019</name>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <nova:name>tempest-ListImageFiltersTestJSON-server-1462636274</nova:name>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:21:19</nova:creationTime>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:21:20 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:        <nova:user uuid="bb27aa40b8134948b82eee1cf755ccc1">tempest-ListImageFiltersTestJSON-1727108935-project-member</nova:user>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:        <nova:project uuid="14e8ffb710fe4f92a0f68ad58c260f0f">tempest-ListImageFiltersTestJSON-1727108935</nova:project>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <entry name="serial">07413df5-0bb8-42c2-95ff-13458d598139</entry>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <entry name="uuid">07413df5-0bb8-42c2-95ff-13458d598139</entry>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/07413df5-0bb8-42c2-95ff-13458d598139_disk">
Dec 13 03:21:20 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:21:20 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/07413df5-0bb8-42c2-95ff-13458d598139_disk.config">
Dec 13 03:21:20 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:21:20 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/07413df5-0bb8-42c2-95ff-13458d598139/console.log" append="off"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:21:20 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:21:20 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:21:20 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:21:20 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029789743506055313 of space, bias 1.0, pg target 0.8936923051816594 quantized to 32 (current 32)
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000667350369986437 of space, bias 1.0, pg target 0.2002051109959311 quantized to 32 (current 32)
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.966071310985396e-07 of space, bias 4.0, pg target 0.0009559285573182475 quantized to 16 (current 32)
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:21:20 np0005558241 nova_compute[248510]: 2025-12-13 08:21:20.805 248514 INFO nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:21:20 np0005558241 nova_compute[248510]: 2025-12-13 08:21:20.921 248514 DEBUG nova.compute.manager [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:21:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1664: 321 pgs: 321 active+clean; 372 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 215 op/s
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.003 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.004 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.005 248514 INFO nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Using config drive#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.030 248514 DEBUG nova.storage.rbd_utils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 07413df5-0bb8-42c2-95ff-13458d598139_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.093 248514 DEBUG nova.compute.manager [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.095 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.095 248514 INFO nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Creating image(s)#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.137 248514 DEBUG nova.storage.rbd_utils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.163 248514 DEBUG nova.storage.rbd_utils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.195 248514 DEBUG nova.storage.rbd_utils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.199 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.246 248514 INFO nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Creating config drive at /var/lib/nova/instances/07413df5-0bb8-42c2-95ff-13458d598139/disk.config#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.252 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/07413df5-0bb8-42c2-95ff-13458d598139/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwq8485sl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.293 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.294 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.295 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.295 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.321 248514 DEBUG nova.storage.rbd_utils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.326 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.401 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/07413df5-0bb8-42c2-95ff-13458d598139/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwq8485sl" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.429 248514 DEBUG nova.storage.rbd_utils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 07413df5-0bb8-42c2-95ff-13458d598139_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.433 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/07413df5-0bb8-42c2-95ff-13458d598139/disk.config 07413df5-0bb8-42c2-95ff-13458d598139_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.477 248514 DEBUG nova.network.neutron [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.478 248514 DEBUG nova.compute.manager [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.568 248514 DEBUG oslo_concurrency.lockutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "0d64e209-19e7-4ad3-a790-43d04d832838" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.569 248514 DEBUG oslo_concurrency.lockutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.569 248514 DEBUG oslo_concurrency.lockutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.569 248514 DEBUG oslo_concurrency.lockutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.569 248514 DEBUG oslo_concurrency.lockutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.571 248514 INFO nova.compute.manager [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Terminating instance#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.572 248514 DEBUG nova.compute.manager [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:21:21 np0005558241 kernel: tap2b62e133-0b (unregistering): left promiscuous mode
Dec 13 03:21:21 np0005558241 NetworkManager[50376]: <info>  [1765614081.9614] device (tap2b62e133-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:21:21 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:21Z|00150|binding|INFO|Releasing lport 2b62e133-0b9e-4c9d-9219-2a7e55c381ab from this chassis (sb_readonly=0)
Dec 13 03:21:21 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:21Z|00151|binding|INFO|Setting lport 2b62e133-0b9e-4c9d-9219-2a7e55c381ab down in Southbound
Dec 13 03:21:21 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:21Z|00152|binding|INFO|Removing iface tap2b62e133-0b ovn-installed in OVS
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.971 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.974 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:21.982 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:ac:d2 10.100.0.4'], port_security=['fa:16:3e:23:ac:d2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0d64e209-19e7-4ad3-a790-43d04d832838', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2b62e133-0b9e-4c9d-9219-2a7e55c381ab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:21:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:21.985 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2b62e133-0b9e-4c9d-9219-2a7e55c381ab in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 unbound from our chassis#033[00m
Dec 13 03:21:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:21.987 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5#033[00m
Dec 13 03:21:21 np0005558241 nova_compute[248510]: 2025-12-13 08:21:21.989 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.010 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[53638d0e-777e-4295-904f-a26c4f28f88f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:22 np0005558241 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Deactivated successfully.
Dec 13 03:21:22 np0005558241 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Consumed 20.806s CPU time.
Dec 13 03:21:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.039 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1285804c-5f24-4e55-9620-16a4a79bc204]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:22 np0005558241 systemd-machined[210538]: Machine qemu-22-instance-00000014 terminated.
Dec 13 03:21:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.047 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7cbd7bc2-10e2-4d6e-80b2-f1afd2daf6dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.076 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f7f0a4-605b-47af-8226-14ff8e7d2fe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.101 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[abd8b276-da30-447d-97b9-60a84eeed6d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3139e1f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:6e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 20, 'rx_bytes': 994, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 20, 'rx_bytes': 994, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641497, 'reachable_time': 37181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279101, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.131 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bc1767c7-2f37-4806-bd63-a2252501eb94]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641512, 'tstamp': 641512}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279102, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641515, 'tstamp': 641515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279102, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.133 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.135 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.142 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3139e1f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.141 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.142 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:21:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.142 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3139e1f-d0, col_values=(('external_ids', {'iface-id': 'c8850a7f-53d6-4776-8c0e-7fb474fe52de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:22.142 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.215 248514 INFO nova.virt.libvirt.driver [-] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Instance destroyed successfully.#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.215 248514 DEBUG nova.objects.instance [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'resources' on Instance uuid 0d64e209-19e7-4ad3-a790-43d04d832838 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.239 248514 DEBUG nova.virt.libvirt.vif [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:18:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-301010878',display_name='tempest-ServersAdminTestJSON-server-301010878',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-301010878',id=20,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:18:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-pq0jl515',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTestJSON-1079129122-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:18:43Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=0d64e209-19e7-4ad3-a790-43d04d832838,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "address": "fa:16:3e:23:ac:d2", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b62e133-0b", "ovs_interfaceid": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.239 248514 DEBUG nova.network.os_vif_util [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "address": "fa:16:3e:23:ac:d2", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b62e133-0b", "ovs_interfaceid": "2b62e133-0b9e-4c9d-9219-2a7e55c381ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.240 248514 DEBUG nova.network.os_vif_util [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:23:ac:d2,bridge_name='br-int',has_traffic_filtering=True,id=2b62e133-0b9e-4c9d-9219-2a7e55c381ab,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b62e133-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.241 248514 DEBUG os_vif [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:ac:d2,bridge_name='br-int',has_traffic_filtering=True,id=2b62e133-0b9e-4c9d-9219-2a7e55c381ab,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b62e133-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.244 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.244 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b62e133-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.247 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.251 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.254 248514 INFO os_vif [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:ac:d2,bridge_name='br-int',has_traffic_filtering=True,id=2b62e133-0b9e-4c9d-9219-2a7e55c381ab,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b62e133-0b')#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.279 248514 DEBUG nova.compute.manager [req-74875409-2fd3-41e6-8fb5-1e321d0b3aa4 req-fb57fdba-baa3-47e0-97de-c17773874d2b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Received event network-vif-unplugged-2b62e133-0b9e-4c9d-9219-2a7e55c381ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.280 248514 DEBUG oslo_concurrency.lockutils [req-74875409-2fd3-41e6-8fb5-1e321d0b3aa4 req-fb57fdba-baa3-47e0-97de-c17773874d2b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.280 248514 DEBUG oslo_concurrency.lockutils [req-74875409-2fd3-41e6-8fb5-1e321d0b3aa4 req-fb57fdba-baa3-47e0-97de-c17773874d2b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.281 248514 DEBUG oslo_concurrency.lockutils [req-74875409-2fd3-41e6-8fb5-1e321d0b3aa4 req-fb57fdba-baa3-47e0-97de-c17773874d2b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.281 248514 DEBUG nova.compute.manager [req-74875409-2fd3-41e6-8fb5-1e321d0b3aa4 req-fb57fdba-baa3-47e0-97de-c17773874d2b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] No waiting events found dispatching network-vif-unplugged-2b62e133-0b9e-4c9d-9219-2a7e55c381ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.281 248514 DEBUG nova.compute.manager [req-74875409-2fd3-41e6-8fb5-1e321d0b3aa4 req-fb57fdba-baa3-47e0-97de-c17773874d2b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Received event network-vif-unplugged-2b62e133-0b9e-4c9d-9219-2a7e55c381ab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.312 248514 DEBUG nova.compute.manager [req-87217f62-c967-47ad-89a6-d9ed1d540777 req-13a71e33-1386-4ecd-9728-0b5e403535fa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.312 248514 DEBUG oslo_concurrency.lockutils [req-87217f62-c967-47ad-89a6-d9ed1d540777 req-13a71e33-1386-4ecd-9728-0b5e403535fa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.313 248514 DEBUG oslo_concurrency.lockutils [req-87217f62-c967-47ad-89a6-d9ed1d540777 req-13a71e33-1386-4ecd-9728-0b5e403535fa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.313 248514 DEBUG oslo_concurrency.lockutils [req-87217f62-c967-47ad-89a6-d9ed1d540777 req-13a71e33-1386-4ecd-9728-0b5e403535fa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.313 248514 DEBUG nova.compute.manager [req-87217f62-c967-47ad-89a6-d9ed1d540777 req-13a71e33-1386-4ecd-9728-0b5e403535fa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Processing event network-vif-plugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.314 248514 DEBUG nova.compute.manager [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.319 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.323 248514 INFO nova.virt.libvirt.driver [-] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Instance spawned successfully.#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.323 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.340 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614082.3283288, 2e309dc2-3cab-4ecf-8be7-eab85790a0da => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.341 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.356 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.357 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.358 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.358 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.359 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.359 248514 DEBUG nova.virt.libvirt.driver [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.366 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.370 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.399 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.431 248514 INFO nova.compute.manager [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Took 14.58 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.432 248514 DEBUG nova.compute.manager [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.599 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.668 248514 INFO nova.compute.manager [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Took 16.23 seconds to build instance.#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.673 248514 DEBUG nova.storage.rbd_utils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] resizing rbd image 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.763 248514 DEBUG oslo_concurrency.processutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/07413df5-0bb8-42c2-95ff-13458d598139/disk.config 07413df5-0bb8-42c2-95ff-13458d598139_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.763 248514 INFO nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Deleting local config drive /var/lib/nova/instances/07413df5-0bb8-42c2-95ff-13458d598139/disk.config because it was imported into RBD.#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.842 248514 DEBUG oslo_concurrency.lockutils [None req-a9583223-efa3-41a0-a531-aed1bb1f1c84 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:22 np0005558241 systemd-machined[210538]: New machine qemu-30-instance-00000019.
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.854 248514 DEBUG nova.objects.instance [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lazy-loading 'migration_context' on Instance uuid 9b6188af-75f0-4213-89c2-bd3eb72960b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:21:22 np0005558241 systemd[1]: Started Virtual Machine qemu-30-instance-00000019.
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.874 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.875 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Ensure instance console log exists: /var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.876 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.876 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.876 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.878 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.896 248514 WARNING nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.907 248514 DEBUG nova.virt.libvirt.host [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.909 248514 DEBUG nova.virt.libvirt.host [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.922 248514 DEBUG nova.virt.libvirt.host [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.923 248514 DEBUG nova.virt.libvirt.host [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.924 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.924 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.924 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.924 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.925 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.925 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.925 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.925 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.926 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.926 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.926 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.926 248514 DEBUG nova.virt.hardware [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:21:22 np0005558241 nova_compute[248510]: 2025-12-13 08:21:22.929 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1665: 321 pgs: 321 active+clean; 399 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.8 MiB/s wr, 228 op/s
Dec 13 03:21:23 np0005558241 nova_compute[248510]: 2025-12-13 08:21:23.108 248514 INFO nova.virt.libvirt.driver [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Deleting instance files /var/lib/nova/instances/0d64e209-19e7-4ad3-a790-43d04d832838_del#033[00m
Dec 13 03:21:23 np0005558241 nova_compute[248510]: 2025-12-13 08:21:23.110 248514 INFO nova.virt.libvirt.driver [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Deletion of /var/lib/nova/instances/0d64e209-19e7-4ad3-a790-43d04d832838_del complete#033[00m
Dec 13 03:21:23 np0005558241 nova_compute[248510]: 2025-12-13 08:21:23.172 248514 INFO nova.compute.manager [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Took 1.60 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:21:23 np0005558241 nova_compute[248510]: 2025-12-13 08:21:23.173 248514 DEBUG oslo.service.loopingcall [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:21:23 np0005558241 nova_compute[248510]: 2025-12-13 08:21:23.173 248514 DEBUG nova.compute.manager [-] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:21:23 np0005558241 nova_compute[248510]: 2025-12-13 08:21:23.173 248514 DEBUG nova.network.neutron [-] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:21:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:21:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1929085001' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:21:23 np0005558241 nova_compute[248510]: 2025-12-13 08:21:23.915 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.986s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:23 np0005558241 nova_compute[248510]: 2025-12-13 08:21:23.945 248514 DEBUG nova.storage.rbd_utils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:23 np0005558241 nova_compute[248510]: 2025-12-13 08:21:23.953 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.064 248514 DEBUG nova.network.neutron [-] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.108 248514 INFO nova.compute.manager [-] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Took 0.93 seconds to deallocate network for instance.#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.265 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614084.2595425, 07413df5-0bb8-42c2-95ff-13458d598139 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.267 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.271 248514 DEBUG nova.compute.manager [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.278 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.287 248514 INFO nova.virt.libvirt.driver [-] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Instance spawned successfully.#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.289 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.341 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.414 248514 DEBUG oslo_concurrency.lockutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.416 248514 DEBUG oslo_concurrency.lockutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:21:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2275496492' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.541 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.544 248514 DEBUG nova.objects.instance [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b6188af-75f0-4213-89c2-bd3eb72960b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.580 248514 DEBUG oslo_concurrency.processutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.618 248514 DEBUG nova.compute.manager [req-0263d3a9-7f1c-4830-9111-d034042c24dd req-7bea2224-8461-4ac5-b3d6-425b7d054c55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.619 248514 DEBUG oslo_concurrency.lockutils [req-0263d3a9-7f1c-4830-9111-d034042c24dd req-7bea2224-8461-4ac5-b3d6-425b7d054c55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.620 248514 DEBUG oslo_concurrency.lockutils [req-0263d3a9-7f1c-4830-9111-d034042c24dd req-7bea2224-8461-4ac5-b3d6-425b7d054c55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.620 248514 DEBUG oslo_concurrency.lockutils [req-0263d3a9-7f1c-4830-9111-d034042c24dd req-7bea2224-8461-4ac5-b3d6-425b7d054c55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.620 248514 DEBUG nova.compute.manager [req-0263d3a9-7f1c-4830-9111-d034042c24dd req-7bea2224-8461-4ac5-b3d6-425b7d054c55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.620 248514 WARNING nova.compute.manager [req-0263d3a9-7f1c-4830-9111-d034042c24dd req-7bea2224-8461-4ac5-b3d6-425b7d054c55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.662 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.665 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:21:24 np0005558241 nova_compute[248510]:  <uuid>9b6188af-75f0-4213-89c2-bd3eb72960b7</uuid>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:  <name>instance-0000001a</name>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <nova:name>tempest-ListImageFiltersTestJSON-server-1010980373</nova:name>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:21:22</nova:creationTime>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:21:24 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:        <nova:user uuid="bb27aa40b8134948b82eee1cf755ccc1">tempest-ListImageFiltersTestJSON-1727108935-project-member</nova:user>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:        <nova:project uuid="14e8ffb710fe4f92a0f68ad58c260f0f">tempest-ListImageFiltersTestJSON-1727108935</nova:project>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <entry name="serial">9b6188af-75f0-4213-89c2-bd3eb72960b7</entry>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <entry name="uuid">9b6188af-75f0-4213-89c2-bd3eb72960b7</entry>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/9b6188af-75f0-4213-89c2-bd3eb72960b7_disk">
Dec 13 03:21:24 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:21:24 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/9b6188af-75f0-4213-89c2-bd3eb72960b7_disk.config">
Dec 13 03:21:24 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:21:24 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7/console.log" append="off"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:21:24 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:21:24 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:21:24 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:21:24 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:21:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.699 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.699 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.700 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.700 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.700 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.701 248514 DEBUG nova.virt.libvirt.driver [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.710 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.957 248514 DEBUG nova.compute.manager [req-1b5635e7-b41a-4c76-b240-741263b3954f req-35faa646-bae9-40ec-8222-76f3785c1eda 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Received event network-vif-plugged-2b62e133-0b9e-4c9d-9219-2a7e55c381ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.957 248514 DEBUG oslo_concurrency.lockutils [req-1b5635e7-b41a-4c76-b240-741263b3954f req-35faa646-bae9-40ec-8222-76f3785c1eda 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.957 248514 DEBUG oslo_concurrency.lockutils [req-1b5635e7-b41a-4c76-b240-741263b3954f req-35faa646-bae9-40ec-8222-76f3785c1eda 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.958 248514 DEBUG oslo_concurrency.lockutils [req-1b5635e7-b41a-4c76-b240-741263b3954f req-35faa646-bae9-40ec-8222-76f3785c1eda 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.958 248514 DEBUG nova.compute.manager [req-1b5635e7-b41a-4c76-b240-741263b3954f req-35faa646-bae9-40ec-8222-76f3785c1eda 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] No waiting events found dispatching network-vif-plugged-2b62e133-0b9e-4c9d-9219-2a7e55c381ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.958 248514 WARNING nova.compute.manager [req-1b5635e7-b41a-4c76-b240-741263b3954f req-35faa646-bae9-40ec-8222-76f3785c1eda 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Received unexpected event network-vif-plugged-2b62e133-0b9e-4c9d-9219-2a7e55c381ab for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:21:24 np0005558241 nova_compute[248510]: 2025-12-13 08:21:24.958 248514 DEBUG nova.compute.manager [req-1b5635e7-b41a-4c76-b240-741263b3954f req-35faa646-bae9-40ec-8222-76f3785c1eda 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Received event network-vif-deleted-2b62e133-0b9e-4c9d-9219-2a7e55c381ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1666: 321 pgs: 321 active+clean; 420 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 6.2 MiB/s wr, 276 op/s
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.069 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.069 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614084.2609031, 07413df5-0bb8-42c2-95ff-13458d598139 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.070 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] VM Started (Lifecycle Event)#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.095 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.100 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.102 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.104 248514 INFO nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Using config drive#033[00m
Dec 13 03:21:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:21:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2144514618' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.274 248514 DEBUG nova.storage.rbd_utils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.284 248514 INFO nova.compute.manager [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Took 6.56 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.284 248514 DEBUG nova.compute.manager [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.285 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.304 248514 DEBUG oslo_concurrency.processutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.724s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.308 248514 DEBUG nova.compute.provider_tree [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.336 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.337 248514 DEBUG nova.scheduler.client.report [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.355 248514 INFO nova.compute.manager [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Took 8.80 seconds to build instance.#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.361 248514 DEBUG oslo_concurrency.lockutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.389 248514 INFO nova.scheduler.client.report [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Deleted allocations for instance 0d64e209-19e7-4ad3-a790-43d04d832838#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.393 248514 DEBUG oslo_concurrency.lockutils [None req-ea877d2c-5156-4658-a622-2a629700314b bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "07413df5-0bb8-42c2-95ff-13458d598139" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.468 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:25 np0005558241 NetworkManager[50376]: <info>  [1765614085.4694] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Dec 13 03:21:25 np0005558241 NetworkManager[50376]: <info>  [1765614085.4707] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.479 248514 DEBUG oslo_concurrency.lockutils [None req-d3ea9e01-ebad-4955-8092-eaf88a23da5b 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "0d64e209-19e7-4ad3-a790-43d04d832838" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.541 248514 INFO nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Creating config drive at /var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7/disk.config#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.546 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphs519wrp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:25Z|00153|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 03:21:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:25Z|00154|binding|INFO|Releasing lport c8850a7f-53d6-4776-8c0e-7fb474fe52de from this chassis (sb_readonly=0)
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.571 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.579 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.695 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphs519wrp" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.728 248514 DEBUG nova.storage.rbd_utils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] rbd image 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:25 np0005558241 nova_compute[248510]: 2025-12-13 08:21:25.736 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7/disk.config 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:26 np0005558241 nova_compute[248510]: 2025-12-13 08:21:26.345 248514 DEBUG oslo_concurrency.processutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7/disk.config 9b6188af-75f0-4213-89c2-bd3eb72960b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:26 np0005558241 nova_compute[248510]: 2025-12-13 08:21:26.347 248514 INFO nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Deleting local config drive /var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7/disk.config because it was imported into RBD.#033[00m
Dec 13 03:21:26 np0005558241 systemd-machined[210538]: New machine qemu-31-instance-0000001a.
Dec 13 03:21:26 np0005558241 systemd[1]: Started Virtual Machine qemu-31-instance-0000001a.
Dec 13 03:21:26 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 13 03:21:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1667: 321 pgs: 321 active+clean; 420 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 189 op/s
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.048 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614087.0484254, 9b6188af-75f0-4213-89c2-bd3eb72960b7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.050 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.055 248514 DEBUG nova.compute.manager [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.057 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.062 248514 INFO nova.virt.libvirt.driver [-] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Instance spawned successfully.#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.063 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.089 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.094 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.106 248514 DEBUG oslo_concurrency.lockutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.107 248514 DEBUG oslo_concurrency.lockutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.107 248514 DEBUG oslo_concurrency.lockutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.107 248514 DEBUG oslo_concurrency.lockutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.108 248514 DEBUG oslo_concurrency.lockutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.110 248514 INFO nova.compute.manager [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Terminating instance#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.111 248514 DEBUG nova.compute.manager [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.114 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.115 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.116 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.117 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.118 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.119 248514 DEBUG nova.virt.libvirt.driver [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.127 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.128 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614087.0485342, 9b6188af-75f0-4213-89c2-bd3eb72960b7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.129 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] VM Started (Lifecycle Event)#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.183 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.188 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:21:27 np0005558241 kernel: tap4b46a3fe-06 (unregistering): left promiscuous mode
Dec 13 03:21:27 np0005558241 NetworkManager[50376]: <info>  [1765614087.2156] device (tap4b46a3fe-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.229 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:27Z|00155|binding|INFO|Releasing lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 from this chassis (sb_readonly=0)
Dec 13 03:21:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:27Z|00156|binding|INFO|Setting lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 down in Southbound
Dec 13 03:21:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:27Z|00157|binding|INFO|Removing iface tap4b46a3fe-06 ovn-installed in OVS
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.237 248514 DEBUG nova.compute.manager [req-da5d03ae-39c4-40e9-b457-7ab0b9ed0c52 req-f5d1f139-a8a5-4a19-94a0-b459f9d1b39e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-changed-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.238 248514 DEBUG nova.compute.manager [req-da5d03ae-39c4-40e9-b457-7ab0b9ed0c52 req-f5d1f139-a8a5-4a19-94a0-b459f9d1b39e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Refreshing instance network info cache due to event network-changed-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.238 248514 DEBUG oslo_concurrency.lockutils [req-da5d03ae-39c4-40e9-b457-7ab0b9ed0c52 req-f5d1f139-a8a5-4a19-94a0-b459f9d1b39e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.238 248514 DEBUG oslo_concurrency.lockutils [req-da5d03ae-39c4-40e9-b457-7ab0b9ed0c52 req-f5d1f139-a8a5-4a19-94a0-b459f9d1b39e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.238 248514 DEBUG nova.network.neutron [req-da5d03ae-39c4-40e9-b457-7ab0b9ed0c52 req-f5d1f139-a8a5-4a19-94a0-b459f9d1b39e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Refreshing network info cache for port 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.242 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:e5:d6 10.100.0.8'], port_security=['fa:16:3e:69:e5:d6 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6b1e2b65-1398-4af8-9e8a-a8b99630eef8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4b46a3fe-06cf-4169-9e52-49c6d076be13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.243 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4b46a3fe-06cf-4169-9e52-49c6d076be13 in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 unbound from our chassis#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.245 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.249 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.263 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[564cb8cd-8492-41d5-8956-87d4397c992a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.270 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.285 248514 INFO nova.compute.manager [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Took 6.19 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:21:27 np0005558241 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000013.scope: Deactivated successfully.
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.286 248514 DEBUG nova.compute.manager [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:21:27 np0005558241 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000013.scope: Consumed 21.317s CPU time.
Dec 13 03:21:27 np0005558241 systemd-machined[210538]: Machine qemu-21-instance-00000013 terminated.
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.303 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8366698e-134a-473a-af58-6857d820beed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.306 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[af00807b-b0d3-491e-a0fa-f898c152c243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:27 np0005558241 kernel: tap4b46a3fe-06: entered promiscuous mode
Dec 13 03:21:27 np0005558241 NetworkManager[50376]: <info>  [1765614087.3496] manager: (tap4b46a3fe-06): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Dec 13 03:21:27 np0005558241 systemd-udevd[279462]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.349 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ac771df5-9770-43c9-8ea6-fee212935a13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:27 np0005558241 kernel: tap4b46a3fe-06 (unregistering): left promiscuous mode
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.369 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:27Z|00158|binding|INFO|Claiming lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 for this chassis.
Dec 13 03:21:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:27Z|00159|binding|INFO|4b46a3fe-06cf-4169-9e52-49c6d076be13: Claiming fa:16:3e:69:e5:d6 10.100.0.8
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.382 248514 INFO nova.virt.libvirt.driver [-] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Instance destroyed successfully.#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.383 248514 DEBUG nova.objects.instance [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'resources' on Instance uuid 6b1e2b65-1398-4af8-9e8a-a8b99630eef8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.385 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:e5:d6 10.100.0.8'], port_security=['fa:16:3e:69:e5:d6 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6b1e2b65-1398-4af8-9e8a-a8b99630eef8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4b46a3fe-06cf-4169-9e52-49c6d076be13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.388 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bc5699bc-7c12-4dfb-ad23-94f9a8a07418]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3139e1f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:6e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 22, 'rx_bytes': 994, 'tx_bytes': 1116, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 22, 'rx_bytes': 994, 'tx_bytes': 1116, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641497, 'reachable_time': 37181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279479, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:27Z|00160|binding|INFO|Setting lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 ovn-installed in OVS
Dec 13 03:21:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:27Z|00161|binding|INFO|Setting lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 up in Southbound
Dec 13 03:21:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:27Z|00162|binding|INFO|Releasing lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 from this chassis (sb_readonly=1)
Dec 13 03:21:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:27Z|00163|if_status|INFO|Dropped 8 log messages in last 88 seconds (most recently, 77 seconds ago) due to excessive rate
Dec 13 03:21:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:27Z|00164|if_status|INFO|Not setting lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 down as sb is readonly
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.396 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:27Z|00165|binding|INFO|Removing iface tap4b46a3fe-06 ovn-installed in OVS
Dec 13 03:21:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:27Z|00166|binding|INFO|Releasing lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 from this chassis (sb_readonly=0)
Dec 13 03:21:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:27Z|00167|binding|INFO|Setting lport 4b46a3fe-06cf-4169-9e52-49c6d076be13 down in Southbound
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.412 248514 DEBUG nova.virt.libvirt.vif [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:17:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1421874999',display_name='tempest-ServersAdminTestJSON-server-1421874999',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1421874999',id=19,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:18:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-fdarv985',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTestJSON-1079129122-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:18:07Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=6b1e2b65-1398-4af8-9e8a-a8b99630eef8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "address": "fa:16:3e:69:e5:d6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b46a3fe-06", "ovs_interfaceid": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.413 248514 DEBUG nova.network.os_vif_util [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "address": "fa:16:3e:69:e5:d6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b46a3fe-06", "ovs_interfaceid": "4b46a3fe-06cf-4169-9e52-49c6d076be13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.414 248514 DEBUG nova.network.os_vif_util [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:e5:d6,bridge_name='br-int',has_traffic_filtering=True,id=4b46a3fe-06cf-4169-9e52-49c6d076be13,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b46a3fe-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.415 248514 DEBUG os_vif [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:e5:d6,bridge_name='br-int',has_traffic_filtering=True,id=4b46a3fe-06cf-4169-9e52-49c6d076be13,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b46a3fe-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.415 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:e5:d6 10.100.0.8'], port_security=['fa:16:3e:69:e5:d6 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6b1e2b65-1398-4af8-9e8a-a8b99630eef8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4b46a3fe-06cf-4169-9e52-49c6d076be13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.417 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.418 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b46a3fe-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.420 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[51fd8a46-b2f3-4aa3-bc78-ec358f8cc83a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641512, 'tstamp': 641512}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279483, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641515, 'tstamp': 641515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279483, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.422 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.423 248514 INFO nova.compute.manager [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Took 7.79 seconds to build instance.#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.425 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.426 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3139e1f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.426 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.426 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3139e1f-d0, col_values=(('external_ids', {'iface-id': 'c8850a7f-53d6-4776-8c0e-7fb474fe52de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.427 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.428 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4b46a3fe-06cf-4169-9e52-49c6d076be13 in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 unbound from our chassis#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.430 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.430 248514 INFO os_vif [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:e5:d6,bridge_name='br-int',has_traffic_filtering=True,id=4b46a3fe-06cf-4169-9e52-49c6d076be13,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b46a3fe-06')#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.446 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8e31fddf-2099-402c-9585-6e9e933d618b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.457 248514 DEBUG oslo_concurrency.lockutils [None req-d8cb8eed-d04d-41a2-a6bc-8c87da5f47d2 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "9b6188af-75f0-4213-89c2-bd3eb72960b7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.484 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[30821045-ef49-46db-a55c-17f9fb506c3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.488 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1f56b1fd-b326-4716-a907-b5ab23fc41e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.520 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4b6318-2416-4ae0-9be8-61f47bca7934]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.541 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa1c054-945b-437a-9152-18cec614c21a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3139e1f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:6e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 24, 'rx_bytes': 994, 'tx_bytes': 1200, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 24, 'rx_bytes': 994, 'tx_bytes': 1200, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641497, 'reachable_time': 37181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279507, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.567 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11948aac-8aa3-4586-9db1-29be516278de]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641512, 'tstamp': 641512}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279508, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641515, 'tstamp': 641515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279508, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.569 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.572 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.572 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3139e1f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.572 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.573 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3139e1f-d0, col_values=(('external_ids', {'iface-id': 'c8850a7f-53d6-4776-8c0e-7fb474fe52de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.573 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.574 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4b46a3fe-06cf-4169-9e52-49c6d076be13 in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 unbound from our chassis#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.575 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.598 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5437aea7-fd10-430b-a4e7-0ce9bb2f4738]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.637 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b0073c1c-db41-481b-aaa5-0b80e6edb0cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.644 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[28e8e5ae-5209-4377-b2c9-7ff4b902d68e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.682 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b500be-945e-4419-ba2c-715faeae340a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.715 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a62a0760-de53-48d2-924d-789b4a72d14b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3139e1f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:6e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 26, 'rx_bytes': 994, 'tx_bytes': 1284, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 26, 'rx_bytes': 994, 'tx_bytes': 1284, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641497, 'reachable_time': 37181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279515, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.738 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[42be9cfe-5ba1-407d-bccc-29101b07d4c6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641512, 'tstamp': 641512}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279516, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641515, 'tstamp': 641515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279516, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.743 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.745 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.746 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3139e1f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.746 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.747 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3139e1f-d0, col_values=(('external_ids', {'iface-id': 'c8850a7f-53d6-4776-8c0e-7fb474fe52de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:27.747 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.763 248514 INFO nova.virt.libvirt.driver [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Deleting instance files /var/lib/nova/instances/6b1e2b65-1398-4af8-9e8a-a8b99630eef8_del#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.764 248514 INFO nova.virt.libvirt.driver [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Deletion of /var/lib/nova/instances/6b1e2b65-1398-4af8-9e8a-a8b99630eef8_del complete#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.822 248514 INFO nova.compute.manager [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.824 248514 DEBUG oslo.service.loopingcall [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.824 248514 DEBUG nova.compute.manager [-] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:21:27 np0005558241 nova_compute[248510]: 2025-12-13 08:21:27.825 248514 DEBUG nova.network.neutron [-] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:21:28 np0005558241 nova_compute[248510]: 2025-12-13 08:21:28.592 248514 DEBUG nova.network.neutron [-] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:21:28 np0005558241 nova_compute[248510]: 2025-12-13 08:21:28.614 248514 INFO nova.compute.manager [-] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Took 0.79 seconds to deallocate network for instance.#033[00m
Dec 13 03:21:28 np0005558241 nova_compute[248510]: 2025-12-13 08:21:28.666 248514 DEBUG oslo_concurrency.lockutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:28 np0005558241 nova_compute[248510]: 2025-12-13 08:21:28.667 248514 DEBUG oslo_concurrency.lockutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:28 np0005558241 nova_compute[248510]: 2025-12-13 08:21:28.848 248514 DEBUG nova.compute.manager [req-ecfa77a5-f48b-4602-b255-a3d3cb5c478a req-c08a5e02-c37b-4dc9-8dcb-b235da6ab1a2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received event network-vif-deleted-4b46a3fe-06cf-4169-9e52-49c6d076be13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:28 np0005558241 nova_compute[248510]: 2025-12-13 08:21:28.870 248514 DEBUG oslo_concurrency.processutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:28Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:0a:44 10.100.0.14
Dec 13 03:21:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:28Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:0a:44 10.100.0.14
Dec 13 03:21:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1668: 321 pgs: 321 active+clean; 336 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 7.9 MiB/s rd, 5.7 MiB/s wr, 452 op/s
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.273 248514 DEBUG nova.network.neutron [req-da5d03ae-39c4-40e9-b457-7ab0b9ed0c52 req-f5d1f139-a8a5-4a19-94a0-b459f9d1b39e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updated VIF entry in instance network info cache for port 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.274 248514 DEBUG nova.network.neutron [req-da5d03ae-39c4-40e9-b457-7ab0b9ed0c52 req-f5d1f139-a8a5-4a19-94a0-b459f9d1b39e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.299 248514 DEBUG oslo_concurrency.lockutils [req-da5d03ae-39c4-40e9-b457-7ab0b9ed0c52 req-f5d1f139-a8a5-4a19-94a0-b459f9d1b39e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.347 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.930 248514 DEBUG nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received event network-vif-unplugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.930 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.930 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.931 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.931 248514 DEBUG nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] No waiting events found dispatching network-vif-unplugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.931 248514 WARNING nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received unexpected event network-vif-unplugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.931 248514 DEBUG nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received event network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.931 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.931 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.932 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.932 248514 DEBUG nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] No waiting events found dispatching network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.932 248514 WARNING nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received unexpected event network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.932 248514 DEBUG nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received event network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.932 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.932 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.932 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.933 248514 DEBUG nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] No waiting events found dispatching network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.933 248514 WARNING nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received unexpected event network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.933 248514 DEBUG nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received event network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.933 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:21:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1294930675' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.934 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.934 248514 DEBUG oslo_concurrency.lockutils [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.934 248514 DEBUG nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] No waiting events found dispatching network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.934 248514 WARNING nova.compute.manager [req-2757d0c7-e18b-4a16-bdec-3f33bd259a06 req-57197d8b-e360-4ea7-b6b8-ae74a7425a65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Received unexpected event network-vif-plugged-4b46a3fe-06cf-4169-9e52-49c6d076be13 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.960 248514 DEBUG oslo_concurrency.processutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.969 248514 DEBUG nova.compute.provider_tree [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.987 248514 DEBUG nova.scheduler.client.report [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:21:29 np0005558241 nova_compute[248510]: 2025-12-13 08:21:29.991 248514 DEBUG nova.compute.manager [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:21:30 np0005558241 nova_compute[248510]: 2025-12-13 08:21:30.025 248514 DEBUG oslo_concurrency.lockutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.358s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:30 np0005558241 nova_compute[248510]: 2025-12-13 08:21:30.163 248514 INFO nova.compute.manager [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] instance snapshotting#033[00m
Dec 13 03:21:30 np0005558241 nova_compute[248510]: 2025-12-13 08:21:30.399 248514 INFO nova.scheduler.client.report [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Deleted allocations for instance 6b1e2b65-1398-4af8-9e8a-a8b99630eef8#033[00m
Dec 13 03:21:30 np0005558241 nova_compute[248510]: 2025-12-13 08:21:30.489 248514 DEBUG oslo_concurrency.lockutils [None req-e800ee29-0f33-481f-a2d2-dc99f250b6fe 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "6b1e2b65-1398-4af8-9e8a-a8b99630eef8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:30 np0005558241 nova_compute[248510]: 2025-12-13 08:21:30.610 248514 INFO nova.virt.libvirt.driver [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Beginning live snapshot process#033[00m
Dec 13 03:21:30 np0005558241 nova_compute[248510]: 2025-12-13 08:21:30.864 248514 DEBUG nova.virt.libvirt.imagebackend [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec 13 03:21:30 np0005558241 nova_compute[248510]: 2025-12-13 08:21:30.879 248514 DEBUG oslo_concurrency.lockutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "3fffafca-321d-4611-8940-da963b356ca1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:30 np0005558241 nova_compute[248510]: 2025-12-13 08:21:30.882 248514 DEBUG oslo_concurrency.lockutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:30 np0005558241 nova_compute[248510]: 2025-12-13 08:21:30.883 248514 DEBUG oslo_concurrency.lockutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "3fffafca-321d-4611-8940-da963b356ca1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:30 np0005558241 nova_compute[248510]: 2025-12-13 08:21:30.883 248514 DEBUG oslo_concurrency.lockutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:30 np0005558241 nova_compute[248510]: 2025-12-13 08:21:30.884 248514 DEBUG oslo_concurrency.lockutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:30 np0005558241 nova_compute[248510]: 2025-12-13 08:21:30.885 248514 INFO nova.compute.manager [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Terminating instance#033[00m
Dec 13 03:21:30 np0005558241 nova_compute[248510]: 2025-12-13 08:21:30.887 248514 DEBUG nova.compute.manager [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:21:30 np0005558241 kernel: tapc85d366f-1e (unregistering): left promiscuous mode
Dec 13 03:21:30 np0005558241 NetworkManager[50376]: <info>  [1765614090.9402] device (tapc85d366f-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:21:30 np0005558241 nova_compute[248510]: 2025-12-13 08:21:30.951 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:30 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:30Z|00168|binding|INFO|Releasing lport c85d366f-1e88-41c4-9425-0e5f62b77e99 from this chassis (sb_readonly=0)
Dec 13 03:21:30 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:30Z|00169|binding|INFO|Setting lport c85d366f-1e88-41c4-9425-0e5f62b77e99 down in Southbound
Dec 13 03:21:30 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:30Z|00170|binding|INFO|Removing iface tapc85d366f-1e ovn-installed in OVS
Dec 13 03:21:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:30.973 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:e7:e6 10.100.0.11'], port_security=['fa:16:3e:25:e7:e6 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '3fffafca-321d-4611-8940-da963b356ca1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c85d366f-1e88-41c4-9425-0e5f62b77e99) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:21:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:30.974 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c85d366f-1e88-41c4-9425-0e5f62b77e99 in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 unbound from our chassis#033[00m
Dec 13 03:21:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1669: 321 pgs: 321 active+clean; 336 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.7 MiB/s wr, 372 op/s
Dec 13 03:21:30 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:30Z|00171|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 03:21:30 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:30Z|00172|binding|INFO|Releasing lport c8850a7f-53d6-4776-8c0e-7fb474fe52de from this chassis (sb_readonly=0)
Dec 13 03:21:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:30.976 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5#033[00m
Dec 13 03:21:30 np0005558241 nova_compute[248510]: 2025-12-13 08:21:30.990 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:30.999 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[933d588b-cfda-4173-b128-015ff7d5225b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:31 np0005558241 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Deactivated successfully.
Dec 13 03:21:31 np0005558241 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Consumed 22.114s CPU time.
Dec 13 03:21:31 np0005558241 systemd-machined[210538]: Machine qemu-20-instance-00000012 terminated.
Dec 13 03:21:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.053 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f1bbd440-6601-48c0-b3b9-ecaa0a5cc62d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.058 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[72f5d070-8167-4647-9ae1-404cecb55c74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.092 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[34ad6154-ccfb-426c-a551-caf694dccd4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.126 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1bed1e77-4fd2-4abc-b2bb-e11323557a7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape3139e1f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:6e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 28, 'rx_bytes': 994, 'tx_bytes': 1368, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 28, 'rx_bytes': 994, 'tx_bytes': 1368, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641497, 'reachable_time': 37181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279586, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.136 248514 INFO nova.virt.libvirt.driver [-] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Instance destroyed successfully.#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.137 248514 DEBUG nova.objects.instance [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'resources' on Instance uuid 3fffafca-321d-4611-8940-da963b356ca1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:21:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.151 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5f5d2548-26e6-4944-8a53-965013944a02]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641512, 'tstamp': 641512}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279596, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape3139e1f-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641515, 'tstamp': 641515}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279596, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.153 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.155 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.158 248514 DEBUG nova.virt.libvirt.vif [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:17:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-364828820',display_name='tempest-ServersAdminTestJSON-server-364828820',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-364828820',id=18,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:17:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-t7ff1bcv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTestJSON-1079129122-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:17:29Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=3fffafca-321d-4611-8940-da963b356ca1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "address": "fa:16:3e:25:e7:e6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc85d366f-1e", "ovs_interfaceid": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.158 248514 DEBUG nova.network.os_vif_util [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "address": "fa:16:3e:25:e7:e6", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc85d366f-1e", "ovs_interfaceid": "c85d366f-1e88-41c4-9425-0e5f62b77e99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.160 248514 DEBUG nova.network.os_vif_util [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:25:e7:e6,bridge_name='br-int',has_traffic_filtering=True,id=c85d366f-1e88-41c4-9425-0e5f62b77e99,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc85d366f-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.160 248514 DEBUG os_vif [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:e7:e6,bridge_name='br-int',has_traffic_filtering=True,id=c85d366f-1e88-41c4-9425-0e5f62b77e99,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc85d366f-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.162 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.162 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc85d366f-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.163 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.166 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3139e1f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.166 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:21:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.167 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape3139e1f-d0, col_values=(('external_ids', {'iface-id': 'c8850a7f-53d6-4776-8c0e-7fb474fe52de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:31.167 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.169 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.175 248514 DEBUG nova.storage.rbd_utils [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] creating snapshot(1829f1286d3742fb834c1b1b06a524cc) on rbd image(07413df5-0bb8-42c2-95ff-13458d598139_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.209 248514 INFO os_vif [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:25:e7:e6,bridge_name='br-int',has_traffic_filtering=True,id=c85d366f-1e88-41c4-9425-0e5f62b77e99,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc85d366f-1e')#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.489 248514 INFO nova.virt.libvirt.driver [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Deleting instance files /var/lib/nova/instances/3fffafca-321d-4611-8940-da963b356ca1_del#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.490 248514 INFO nova.virt.libvirt.driver [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Deletion of /var/lib/nova/instances/3fffafca-321d-4611-8940-da963b356ca1_del complete#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.542 248514 INFO nova.compute.manager [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.542 248514 DEBUG oslo.service.loopingcall [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.542 248514 DEBUG nova.compute.manager [-] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.543 248514 DEBUG nova.network.neutron [-] [instance: 3fffafca-321d-4611-8940-da963b356ca1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:21:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Dec 13 03:21:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Dec 13 03:21:31 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.615 248514 DEBUG nova.storage.rbd_utils [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] cloning vms/07413df5-0bb8-42c2-95ff-13458d598139_disk@1829f1286d3742fb834c1b1b06a524cc to images/d29e6f1b-81ef-4730-8c93-4aec7533b72b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.735 248514 DEBUG nova.storage.rbd_utils [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] flattening images/d29e6f1b-81ef-4730-8c93-4aec7533b72b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec 13 03:21:31 np0005558241 nova_compute[248510]: 2025-12-13 08:21:31.821 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:21:32 np0005558241 nova_compute[248510]: 2025-12-13 08:21:32.037 248514 DEBUG nova.compute.manager [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Received event network-vif-unplugged-c85d366f-1e88-41c4-9425-0e5f62b77e99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:32 np0005558241 nova_compute[248510]: 2025-12-13 08:21:32.037 248514 DEBUG oslo_concurrency.lockutils [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3fffafca-321d-4611-8940-da963b356ca1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:32 np0005558241 nova_compute[248510]: 2025-12-13 08:21:32.038 248514 DEBUG oslo_concurrency.lockutils [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:32 np0005558241 nova_compute[248510]: 2025-12-13 08:21:32.038 248514 DEBUG oslo_concurrency.lockutils [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:32 np0005558241 nova_compute[248510]: 2025-12-13 08:21:32.038 248514 DEBUG nova.compute.manager [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] No waiting events found dispatching network-vif-unplugged-c85d366f-1e88-41c4-9425-0e5f62b77e99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:21:32 np0005558241 nova_compute[248510]: 2025-12-13 08:21:32.038 248514 DEBUG nova.compute.manager [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Received event network-vif-unplugged-c85d366f-1e88-41c4-9425-0e5f62b77e99 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:21:32 np0005558241 nova_compute[248510]: 2025-12-13 08:21:32.038 248514 DEBUG nova.compute.manager [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Received event network-vif-plugged-c85d366f-1e88-41c4-9425-0e5f62b77e99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:32 np0005558241 nova_compute[248510]: 2025-12-13 08:21:32.038 248514 DEBUG oslo_concurrency.lockutils [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3fffafca-321d-4611-8940-da963b356ca1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:32 np0005558241 nova_compute[248510]: 2025-12-13 08:21:32.039 248514 DEBUG oslo_concurrency.lockutils [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:32 np0005558241 nova_compute[248510]: 2025-12-13 08:21:32.039 248514 DEBUG oslo_concurrency.lockutils [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:32 np0005558241 nova_compute[248510]: 2025-12-13 08:21:32.039 248514 DEBUG nova.compute.manager [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] No waiting events found dispatching network-vif-plugged-c85d366f-1e88-41c4-9425-0e5f62b77e99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:21:32 np0005558241 nova_compute[248510]: 2025-12-13 08:21:32.039 248514 WARNING nova.compute.manager [req-cdaafa33-858c-4bdf-8c43-69461a458a20 req-ecd487cc-8b93-4444-b051-104adcf9cbdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Received unexpected event network-vif-plugged-c85d366f-1e88-41c4-9425-0e5f62b77e99 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:21:32 np0005558241 nova_compute[248510]: 2025-12-13 08:21:32.295 248514 DEBUG nova.storage.rbd_utils [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] removing snapshot(1829f1286d3742fb834c1b1b06a524cc) on rbd image(07413df5-0bb8-42c2-95ff-13458d598139_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec 13 03:21:32 np0005558241 nova_compute[248510]: 2025-12-13 08:21:32.466 248514 DEBUG nova.network.neutron [-] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:21:32 np0005558241 nova_compute[248510]: 2025-12-13 08:21:32.488 248514 INFO nova.compute.manager [-] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Took 0.95 seconds to deallocate network for instance.#033[00m
Dec 13 03:21:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Dec 13 03:21:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Dec 13 03:21:32 np0005558241 nova_compute[248510]: 2025-12-13 08:21:32.575 248514 DEBUG oslo_concurrency.lockutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:32 np0005558241 nova_compute[248510]: 2025-12-13 08:21:32.576 248514 DEBUG oslo_concurrency.lockutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:32 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Dec 13 03:21:32 np0005558241 nova_compute[248510]: 2025-12-13 08:21:32.603 248514 DEBUG nova.storage.rbd_utils [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] creating snapshot(snap) on rbd image(d29e6f1b-81ef-4730-8c93-4aec7533b72b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:21:32 np0005558241 nova_compute[248510]: 2025-12-13 08:21:32.728 248514 DEBUG oslo_concurrency.processutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1672: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 345 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 8.6 MiB/s rd, 4.4 MiB/s wr, 433 op/s
Dec 13 03:21:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:21:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/676856328' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:21:33 np0005558241 nova_compute[248510]: 2025-12-13 08:21:33.317 248514 DEBUG oslo_concurrency.processutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:33 np0005558241 nova_compute[248510]: 2025-12-13 08:21:33.323 248514 DEBUG nova.compute.provider_tree [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:21:33 np0005558241 nova_compute[248510]: 2025-12-13 08:21:33.344 248514 DEBUG nova.scheduler.client.report [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:21:33 np0005558241 nova_compute[248510]: 2025-12-13 08:21:33.372 248514 DEBUG oslo_concurrency.lockutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:33 np0005558241 nova_compute[248510]: 2025-12-13 08:21:33.422 248514 INFO nova.scheduler.client.report [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Deleted allocations for instance 3fffafca-321d-4611-8940-da963b356ca1#033[00m
Dec 13 03:21:33 np0005558241 nova_compute[248510]: 2025-12-13 08:21:33.511 248514 DEBUG oslo_concurrency.lockutils [None req-83bde9ea-575d-40c8-be9d-d17711446c58 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "3fffafca-321d-4611-8940-da963b356ca1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Dec 13 03:21:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Dec 13 03:21:33 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Dec 13 03:21:33 np0005558241 nova_compute[248510]: 2025-12-13 08:21:33.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:21:33 np0005558241 nova_compute[248510]: 2025-12-13 08:21:33.774 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:21:33 np0005558241 nova_compute[248510]: 2025-12-13 08:21:33.775 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.134 248514 DEBUG nova.compute.manager [req-5787667f-7983-409a-8bd2-71ca44dbb3a7 req-5a3c64c2-7b88-40fe-9009-13aa5437b77b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Received event network-vif-deleted-c85d366f-1e88-41c4-9425-0e5f62b77e99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.287 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-a14d8b88-7aec-468f-a550-881364e4d95e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.288 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-a14d8b88-7aec-468f-a550-881364e4d95e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.288 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.289 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.524 248514 DEBUG oslo_concurrency.lockutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.525 248514 DEBUG oslo_concurrency.lockutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.526 248514 DEBUG oslo_concurrency.lockutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.526 248514 DEBUG oslo_concurrency.lockutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.527 248514 DEBUG oslo_concurrency.lockutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.528 248514 INFO nova.compute.manager [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Terminating instance#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.529 248514 DEBUG nova.compute.manager [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:21:34 np0005558241 kernel: tap7f8a109e-e2 (unregistering): left promiscuous mode
Dec 13 03:21:34 np0005558241 NetworkManager[50376]: <info>  [1765614094.6325] device (tap7f8a109e-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:21:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:34Z|00173|binding|INFO|Releasing lport 7f8a109e-e262-4847-8430-ac7944dace5c from this chassis (sb_readonly=0)
Dec 13 03:21:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:34Z|00174|binding|INFO|Setting lport 7f8a109e-e262-4847-8430-ac7944dace5c down in Southbound
Dec 13 03:21:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:34Z|00175|binding|INFO|Removing iface tap7f8a109e-e2 ovn-installed in OVS
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.644 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:34.651 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:0a:44 10.100.0.14'], port_security=['fa:16:3e:e6:0a:44 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a14d8b88-7aec-468f-a550-881364e4d95e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f6b42f9712f4afea6a07d08373b56ca', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e361a8de-1da5-4b02-a4b6-d6dd820154c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2cbad0e-0972-4c41-8638-8186dfccdb8b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7f8a109e-e262-4847-8430-ac7944dace5c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:21:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:34.653 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7f8a109e-e262-4847-8430-ac7944dace5c in datapath e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 unbound from our chassis#033[00m
Dec 13 03:21:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:34.655 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:21:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:34.656 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[332c3a52-846d-4ace-950d-1327fdd641e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:34.657 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 namespace which is not needed anymore#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.665 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:21:34 np0005558241 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000011.scope: Deactivated successfully.
Dec 13 03:21:34 np0005558241 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000011.scope: Consumed 13.474s CPU time.
Dec 13 03:21:34 np0005558241 systemd-machined[210538]: Machine qemu-28-instance-00000011 terminated.
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.788 248514 INFO nova.virt.libvirt.driver [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Instance destroyed successfully.#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.788 248514 DEBUG nova.objects.instance [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lazy-loading 'resources' on Instance uuid a14d8b88-7aec-468f-a550-881364e4d95e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.805 248514 DEBUG nova.virt.libvirt.vif [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:17:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-247332834',display_name='tempest-ServersAdminTestJSON-server-247332834',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-247332834',id=17,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f6b42f9712f4afea6a07d08373b56ca',ramdisk_id='',reservation_id='r-88vex2g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vi
rtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1079129122',owner_user_name='tempest-ServersAdminTestJSON-1079129122-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:20Z,user_data=None,user_id='0d3578a9aa9a4f4facf4009a71564a31',uuid=a14d8b88-7aec-468f-a550-881364e4d95e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.805 248514 DEBUG nova.network.os_vif_util [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converting VIF {"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.806 248514 DEBUG nova.network.os_vif_util [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.807 248514 DEBUG os_vif [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.810 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.810 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f8a109e-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.838 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.840 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:21:34 np0005558241 nova_compute[248510]: 2025-12-13 08:21:34.843 248514 INFO os_vif [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0a:44,bridge_name='br-int',has_traffic_filtering=True,id=7f8a109e-e262-4847-8430-ac7944dace5c,network=Network(e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f8a109e-e2')#033[00m
Dec 13 03:21:34 np0005558241 neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5[272053]: [NOTICE]   (272057) : haproxy version is 2.8.14-c23fe91
Dec 13 03:21:34 np0005558241 neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5[272053]: [NOTICE]   (272057) : path to executable is /usr/sbin/haproxy
Dec 13 03:21:34 np0005558241 neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5[272053]: [WARNING]  (272057) : Exiting Master process...
Dec 13 03:21:34 np0005558241 neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5[272053]: [WARNING]  (272057) : Exiting Master process...
Dec 13 03:21:34 np0005558241 neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5[272053]: [ALERT]    (272057) : Current worker (272059) exited with code 143 (Terminated)
Dec 13 03:21:34 np0005558241 neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5[272053]: [WARNING]  (272057) : All workers exited. Exiting... (0)
Dec 13 03:21:34 np0005558241 systemd[1]: libpod-563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864.scope: Deactivated successfully.
Dec 13 03:21:34 np0005558241 podman[279781]: 2025-12-13 08:21:34.863937944 +0000 UTC m=+0.063990577 container died 563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 03:21:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-937067be80a0f92a68338f93cc6d8182d7c2bbc7c8ec6a16edddf5b92b0e5e64-merged.mount: Deactivated successfully.
Dec 13 03:21:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864-userdata-shm.mount: Deactivated successfully.
Dec 13 03:21:34 np0005558241 podman[279781]: 2025-12-13 08:21:34.959345704 +0000 UTC m=+0.159398337 container cleanup 563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:21:34 np0005558241 systemd[1]: libpod-conmon-563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864.scope: Deactivated successfully.
Dec 13 03:21:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1674: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 306 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Dec 13 03:21:35 np0005558241 podman[279827]: 2025-12-13 08:21:35.361374216 +0000 UTC m=+0.374140216 container remove 563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:21:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:35.368 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[45f44cbe-2f28-4557-a1e5-6b9919cac6ab]: (4, ('Sat Dec 13 08:21:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 (563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864)\n563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864\nSat Dec 13 08:21:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 (563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864)\n563c65225d896821f8039a6151c863ae3a1ac5e184ce4b80a4ddfd253032b864\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:35.370 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[81d44609-ae93-468c-829a-42bacd138459]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:35.371 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3139e1f-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:21:35 np0005558241 nova_compute[248510]: 2025-12-13 08:21:35.373 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:35 np0005558241 kernel: tape3139e1f-d0: left promiscuous mode
Dec 13 03:21:35 np0005558241 nova_compute[248510]: 2025-12-13 08:21:35.399 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:35.406 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac734f0-7dac-4eb7-9646-730f7ce62bdb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:35.422 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c58d0d0b-8e61-4875-9db7-e32dbcbde432]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:35.424 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8b70149c-607f-4140-8362-cc226511e6c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:35.442 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c855eaea-9a1b-43e3-85c5-17c9f1ee7772]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641488, 'reachable_time': 32059, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279842, 'error': None, 'target': 'ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:35.444 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:21:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:35.445 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e73d5388-5596-4c57-9c37-d2c46d011847]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:21:35 np0005558241 systemd[1]: run-netns-ovnmeta\x2de3139e1f\x2dd005\x2d4b75\x2d9ad6\x2d07f3fbaa8fb5.mount: Deactivated successfully.
Dec 13 03:21:35 np0005558241 nova_compute[248510]: 2025-12-13 08:21:35.837 248514 INFO nova.virt.libvirt.driver [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Snapshot image upload complete#033[00m
Dec 13 03:21:35 np0005558241 nova_compute[248510]: 2025-12-13 08:21:35.837 248514 INFO nova.compute.manager [None req-0c4cb48f-9fc9-41cf-aa7e-f8284b324df3 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Took 5.67 seconds to snapshot the instance on the hypervisor.#033[00m
Dec 13 03:21:36 np0005558241 nova_compute[248510]: 2025-12-13 08:21:36.145 248514 INFO nova.virt.libvirt.driver [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Deleting instance files /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e_del#033[00m
Dec 13 03:21:36 np0005558241 nova_compute[248510]: 2025-12-13 08:21:36.147 248514 INFO nova.virt.libvirt.driver [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Deletion of /var/lib/nova/instances/a14d8b88-7aec-468f-a550-881364e4d95e_del complete#033[00m
Dec 13 03:21:36 np0005558241 nova_compute[248510]: 2025-12-13 08:21:36.195 248514 INFO nova.compute.manager [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Took 1.67 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:21:36 np0005558241 nova_compute[248510]: 2025-12-13 08:21:36.196 248514 DEBUG oslo.service.loopingcall [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:21:36 np0005558241 nova_compute[248510]: 2025-12-13 08:21:36.196 248514 DEBUG nova.compute.manager [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:21:36 np0005558241 nova_compute[248510]: 2025-12-13 08:21:36.196 248514 DEBUG nova.network.neutron [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:21:36 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:36Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:c0:80 10.100.0.7
Dec 13 03:21:36 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:36Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:c0:80 10.100.0.7
Dec 13 03:21:36 np0005558241 nova_compute[248510]: 2025-12-13 08:21:36.498 248514 DEBUG nova.compute.manager [req-47826ce0-3b3d-41f0-a6f1-ed5782c6a8fe req-e7e96be3-0368-4877-a323-7109b3ad2967 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received event network-vif-unplugged-7f8a109e-e262-4847-8430-ac7944dace5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:36 np0005558241 nova_compute[248510]: 2025-12-13 08:21:36.498 248514 DEBUG oslo_concurrency.lockutils [req-47826ce0-3b3d-41f0-a6f1-ed5782c6a8fe req-e7e96be3-0368-4877-a323-7109b3ad2967 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:36 np0005558241 nova_compute[248510]: 2025-12-13 08:21:36.499 248514 DEBUG oslo_concurrency.lockutils [req-47826ce0-3b3d-41f0-a6f1-ed5782c6a8fe req-e7e96be3-0368-4877-a323-7109b3ad2967 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:36 np0005558241 nova_compute[248510]: 2025-12-13 08:21:36.499 248514 DEBUG oslo_concurrency.lockutils [req-47826ce0-3b3d-41f0-a6f1-ed5782c6a8fe req-e7e96be3-0368-4877-a323-7109b3ad2967 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:36 np0005558241 nova_compute[248510]: 2025-12-13 08:21:36.499 248514 DEBUG nova.compute.manager [req-47826ce0-3b3d-41f0-a6f1-ed5782c6a8fe req-e7e96be3-0368-4877-a323-7109b3ad2967 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] No waiting events found dispatching network-vif-unplugged-7f8a109e-e262-4847-8430-ac7944dace5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:21:36 np0005558241 nova_compute[248510]: 2025-12-13 08:21:36.499 248514 DEBUG nova.compute.manager [req-47826ce0-3b3d-41f0-a6f1-ed5782c6a8fe req-e7e96be3-0368-4877-a323-7109b3ad2967 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received event network-vif-unplugged-7f8a109e-e262-4847-8430-ac7944dace5c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:21:36 np0005558241 nova_compute[248510]: 2025-12-13 08:21:36.499 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:21:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:21:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:21:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:21:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:21:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:21:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:21:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:21:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:21:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:21:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:21:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:21:36 np0005558241 nova_compute[248510]: 2025-12-13 08:21:36.876 248514 DEBUG nova.network.neutron [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:21:36 np0005558241 nova_compute[248510]: 2025-12-13 08:21:36.902 248514 INFO nova.compute.manager [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Took 0.71 seconds to deallocate network for instance.#033[00m
Dec 13 03:21:36 np0005558241 nova_compute[248510]: 2025-12-13 08:21:36.964 248514 DEBUG oslo_concurrency.lockutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:36 np0005558241 nova_compute[248510]: 2025-12-13 08:21:36.965 248514 DEBUG oslo_concurrency.lockutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1675: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 306 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Dec 13 03:21:36 np0005558241 nova_compute[248510]: 2025-12-13 08:21:36.979 248514 DEBUG nova.compute.manager [req-aded1f0b-a200-48e3-96fc-fde7e7bf2425 req-a962aacb-cc55-4d15-9ecc-d7c3f8f27a5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received event network-vif-deleted-7f8a109e-e262-4847-8430-ac7944dace5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:36 np0005558241 nova_compute[248510]: 2025-12-13 08:21:36.996 248514 DEBUG nova.scheduler.client.report [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 13 03:21:37 np0005558241 nova_compute[248510]: 2025-12-13 08:21:37.017 248514 DEBUG nova.scheduler.client.report [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 13 03:21:37 np0005558241 nova_compute[248510]: 2025-12-13 08:21:37.018 248514 DEBUG nova.compute.provider_tree [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 03:21:37 np0005558241 nova_compute[248510]: 2025-12-13 08:21:37.048 248514 DEBUG nova.scheduler.client.report [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 13 03:21:37 np0005558241 nova_compute[248510]: 2025-12-13 08:21:37.075 248514 DEBUG nova.scheduler.client.report [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 13 03:21:37 np0005558241 nova_compute[248510]: 2025-12-13 08:21:37.179 248514 DEBUG oslo_concurrency.processutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:37 np0005558241 nova_compute[248510]: 2025-12-13 08:21:37.212 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614082.2091837, 0d64e209-19e7-4ad3-a790-43d04d832838 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:21:37 np0005558241 nova_compute[248510]: 2025-12-13 08:21:37.214 248514 INFO nova.compute.manager [-] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:21:37 np0005558241 nova_compute[248510]: 2025-12-13 08:21:37.257 248514 DEBUG nova.compute.manager [None req-d7602709-8c1c-4d1d-8b6d-1112f487ee50 - - - - - -] [instance: 0d64e209-19e7-4ad3-a790-43d04d832838] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:21:37 np0005558241 nova_compute[248510]: 2025-12-13 08:21:37.324 248514 DEBUG nova.compute.manager [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:21:37 np0005558241 podman[279986]: 2025-12-13 08:21:37.379374251 +0000 UTC m=+0.095465803 container create 0a0972371b72ffb3bbe65d0f54290f93cb13a2ecf071c6c51fa036767b4db3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:21:37 np0005558241 nova_compute[248510]: 2025-12-13 08:21:37.388 248514 INFO nova.compute.manager [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] instance snapshotting#033[00m
Dec 13 03:21:37 np0005558241 podman[279986]: 2025-12-13 08:21:37.314321079 +0000 UTC m=+0.030412631 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:21:37 np0005558241 systemd[1]: Started libpod-conmon-0a0972371b72ffb3bbe65d0f54290f93cb13a2ecf071c6c51fa036767b4db3d1.scope.
Dec 13 03:21:37 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:21:37 np0005558241 podman[279986]: 2025-12-13 08:21:37.527633363 +0000 UTC m=+0.243724935 container init 0a0972371b72ffb3bbe65d0f54290f93cb13a2ecf071c6c51fa036767b4db3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:21:37 np0005558241 podman[279986]: 2025-12-13 08:21:37.539567737 +0000 UTC m=+0.255659279 container start 0a0972371b72ffb3bbe65d0f54290f93cb13a2ecf071c6c51fa036767b4db3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:21:37 np0005558241 fervent_jackson[280021]: 167 167
Dec 13 03:21:37 np0005558241 systemd[1]: libpod-0a0972371b72ffb3bbe65d0f54290f93cb13a2ecf071c6c51fa036767b4db3d1.scope: Deactivated successfully.
Dec 13 03:21:37 np0005558241 nova_compute[248510]: 2025-12-13 08:21:37.619 248514 INFO nova.virt.libvirt.driver [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Beginning live snapshot process#033[00m
Dec 13 03:21:37 np0005558241 nova_compute[248510]: 2025-12-13 08:21:37.648 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Updating instance_info_cache with network_info: [{"id": "7f8a109e-e262-4847-8430-ac7944dace5c", "address": "fa:16:3e:e6:0a:44", "network": {"id": "e3139e1f-d005-4b75-9ad6-07f3fbaa8fb5", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-481911663-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f6b42f9712f4afea6a07d08373b56ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f8a109e-e2", "ovs_interfaceid": "7f8a109e-e262-4847-8430-ac7944dace5c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:21:37 np0005558241 nova_compute[248510]: 2025-12-13 08:21:37.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-a14d8b88-7aec-468f-a550-881364e4d95e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:21:37 np0005558241 nova_compute[248510]: 2025-12-13 08:21:37.799 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:21:37 np0005558241 podman[279986]: 2025-12-13 08:21:37.831535418 +0000 UTC m=+0.547626990 container attach 0a0972371b72ffb3bbe65d0f54290f93cb13a2ecf071c6c51fa036767b4db3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Dec 13 03:21:37 np0005558241 podman[279986]: 2025-12-13 08:21:37.832452851 +0000 UTC m=+0.548544413 container died 0a0972371b72ffb3bbe65d0f54290f93cb13a2ecf071c6c51fa036767b4db3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:21:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:21:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3615211835' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:21:37 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:21:37 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:21:37 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:21:38 np0005558241 systemd[1]: var-lib-containers-storage-overlay-cf1b48197f33311cf523717315ddad01124d303a6ec05a44809cc41f24e07cbd-merged.mount: Deactivated successfully.
Dec 13 03:21:38 np0005558241 nova_compute[248510]: 2025-12-13 08:21:38.310 248514 DEBUG oslo_concurrency.processutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:38 np0005558241 nova_compute[248510]: 2025-12-13 08:21:38.325 248514 DEBUG nova.virt.libvirt.imagebackend [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec 13 03:21:38 np0005558241 nova_compute[248510]: 2025-12-13 08:21:38.331 248514 DEBUG nova.compute.provider_tree [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:21:38 np0005558241 nova_compute[248510]: 2025-12-13 08:21:38.372 248514 DEBUG nova.scheduler.client.report [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:21:38 np0005558241 nova_compute[248510]: 2025-12-13 08:21:38.402 248514 DEBUG oslo_concurrency.lockutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:38 np0005558241 nova_compute[248510]: 2025-12-13 08:21:38.453 248514 INFO nova.scheduler.client.report [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Deleted allocations for instance a14d8b88-7aec-468f-a550-881364e4d95e#033[00m
Dec 13 03:21:38 np0005558241 nova_compute[248510]: 2025-12-13 08:21:38.551 248514 DEBUG oslo_concurrency.lockutils [None req-417baa5a-3f5f-4a5f-8fb9-63082a8fe62e 0d3578a9aa9a4f4facf4009a71564a31 1f6b42f9712f4afea6a07d08373b56ca - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:38 np0005558241 nova_compute[248510]: 2025-12-13 08:21:38.599 248514 DEBUG nova.storage.rbd_utils [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] creating snapshot(1a2ff40d88ae4e96ad7be6f96c000c7c) on rbd image(9b6188af-75f0-4213-89c2-bd3eb72960b7_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:21:38 np0005558241 podman[279986]: 2025-12-13 08:21:38.65942888 +0000 UTC m=+1.375520442 container remove 0a0972371b72ffb3bbe65d0f54290f93cb13a2ecf071c6c51fa036767b4db3d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_jackson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 13 03:21:38 np0005558241 systemd[1]: libpod-conmon-0a0972371b72ffb3bbe65d0f54290f93cb13a2ecf071c6c51fa036767b4db3d1.scope: Deactivated successfully.
Dec 13 03:21:38 np0005558241 nova_compute[248510]: 2025-12-13 08:21:38.724 248514 DEBUG nova.compute.manager [req-6ff438ce-af54-4ad8-acfd-2a49f7c4810a req-09247596-ca63-43b3-a2b4-c255ab26a624 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:38 np0005558241 nova_compute[248510]: 2025-12-13 08:21:38.725 248514 DEBUG oslo_concurrency.lockutils [req-6ff438ce-af54-4ad8-acfd-2a49f7c4810a req-09247596-ca63-43b3-a2b4-c255ab26a624 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:38 np0005558241 nova_compute[248510]: 2025-12-13 08:21:38.725 248514 DEBUG oslo_concurrency.lockutils [req-6ff438ce-af54-4ad8-acfd-2a49f7c4810a req-09247596-ca63-43b3-a2b4-c255ab26a624 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:38 np0005558241 nova_compute[248510]: 2025-12-13 08:21:38.726 248514 DEBUG oslo_concurrency.lockutils [req-6ff438ce-af54-4ad8-acfd-2a49f7c4810a req-09247596-ca63-43b3-a2b4-c255ab26a624 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a14d8b88-7aec-468f-a550-881364e4d95e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:38 np0005558241 nova_compute[248510]: 2025-12-13 08:21:38.726 248514 DEBUG nova.compute.manager [req-6ff438ce-af54-4ad8-acfd-2a49f7c4810a req-09247596-ca63-43b3-a2b4-c255ab26a624 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] No waiting events found dispatching network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:21:38 np0005558241 nova_compute[248510]: 2025-12-13 08:21:38.726 248514 WARNING nova.compute.manager [req-6ff438ce-af54-4ad8-acfd-2a49f7c4810a req-09247596-ca63-43b3-a2b4-c255ab26a624 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Received unexpected event network-vif-plugged-7f8a109e-e262-4847-8430-ac7944dace5c for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:21:38 np0005558241 nova_compute[248510]: 2025-12-13 08:21:38.791 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:21:38 np0005558241 podman[280100]: 2025-12-13 08:21:38.829042328 +0000 UTC m=+0.027172270 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:21:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1676: 321 pgs: 321 active+clean; 293 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 9.9 MiB/s wr, 425 op/s
Dec 13 03:21:39 np0005558241 podman[280100]: 2025-12-13 08:21:39.138667914 +0000 UTC m=+0.336797836 container create 8010a0638c011bc1d300171ebfa1a5539dd300774c6c720dd4296bf2d4892f19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 03:21:39 np0005558241 nova_compute[248510]: 2025-12-13 08:21:39.383 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:39 np0005558241 systemd[1]: Started libpod-conmon-8010a0638c011bc1d300171ebfa1a5539dd300774c6c720dd4296bf2d4892f19.scope.
Dec 13 03:21:39 np0005558241 podman[280115]: 2025-12-13 08:21:39.442763855 +0000 UTC m=+0.261925243 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 03:21:39 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:21:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69fe06e57cca821899762daaee06dae2af9a602bb536d4847a6382805736894/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:21:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69fe06e57cca821899762daaee06dae2af9a602bb536d4847a6382805736894/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:21:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69fe06e57cca821899762daaee06dae2af9a602bb536d4847a6382805736894/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:21:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69fe06e57cca821899762daaee06dae2af9a602bb536d4847a6382805736894/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:21:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69fe06e57cca821899762daaee06dae2af9a602bb536d4847a6382805736894/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:21:39 np0005558241 podman[280116]: 2025-12-13 08:21:39.487948168 +0000 UTC m=+0.303387714 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:21:39 np0005558241 podman[280100]: 2025-12-13 08:21:39.488898671 +0000 UTC m=+0.687028613 container init 8010a0638c011bc1d300171ebfa1a5539dd300774c6c720dd4296bf2d4892f19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:21:39 np0005558241 podman[280114]: 2025-12-13 08:21:39.489134847 +0000 UTC m=+0.308435359 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 03:21:39 np0005558241 podman[280100]: 2025-12-13 08:21:39.498128108 +0000 UTC m=+0.696258030 container start 8010a0638c011bc1d300171ebfa1a5539dd300774c6c720dd4296bf2d4892f19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:21:39 np0005558241 podman[280100]: 2025-12-13 08:21:39.518297175 +0000 UTC m=+0.716427117 container attach 8010a0638c011bc1d300171ebfa1a5539dd300774c6c720dd4296bf2d4892f19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:21:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:21:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Dec 13 03:21:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Dec 13 03:21:39 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Dec 13 03:21:39 np0005558241 nova_compute[248510]: 2025-12-13 08:21:39.838 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:40 np0005558241 musing_davinci[280165]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:21:40 np0005558241 musing_davinci[280165]: --> All data devices are unavailable
Dec 13 03:21:40 np0005558241 systemd[1]: libpod-8010a0638c011bc1d300171ebfa1a5539dd300774c6c720dd4296bf2d4892f19.scope: Deactivated successfully.
Dec 13 03:21:40 np0005558241 podman[280100]: 2025-12-13 08:21:40.047789907 +0000 UTC m=+1.245919829 container died 8010a0638c011bc1d300171ebfa1a5539dd300774c6c720dd4296bf2d4892f19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 03:21:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:21:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:21:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:21:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:21:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:21:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:21:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1678: 321 pgs: 321 active+clean; 293 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 7.9 MiB/s wr, 353 op/s
Dec 13 03:21:41 np0005558241 nova_compute[248510]: 2025-12-13 08:21:41.714 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:41 np0005558241 nova_compute[248510]: 2025-12-13 08:21:41.720 248514 DEBUG nova.storage.rbd_utils [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] cloning vms/9b6188af-75f0-4213-89c2-bd3eb72960b7_disk@1a2ff40d88ae4e96ad7be6f96c000c7c to images/1b5bf03c-4e74-4ede-b890-c9dcb4210f68 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:21:42 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f69fe06e57cca821899762daaee06dae2af9a602bb536d4847a6382805736894-merged.mount: Deactivated successfully.
Dec 13 03:21:42 np0005558241 nova_compute[248510]: 2025-12-13 08:21:42.229 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:21:42 np0005558241 nova_compute[248510]: 2025-12-13 08:21:42.230 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:21:42 np0005558241 nova_compute[248510]: 2025-12-13 08:21:42.230 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:21:42 np0005558241 nova_compute[248510]: 2025-12-13 08:21:42.230 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:21:42 np0005558241 nova_compute[248510]: 2025-12-13 08:21:42.381 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614087.3801205, 6b1e2b65-1398-4af8-9e8a-a8b99630eef8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:21:42 np0005558241 nova_compute[248510]: 2025-12-13 08:21:42.381 248514 INFO nova.compute.manager [-] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:21:42 np0005558241 nova_compute[248510]: 2025-12-13 08:21:42.414 248514 DEBUG nova.compute.manager [None req-44a69ce5-d245-4e8c-be5c-abcd7d532ef1 - - - - - -] [instance: 6b1e2b65-1398-4af8-9e8a-a8b99630eef8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:21:42 np0005558241 podman[280100]: 2025-12-13 08:21:42.618129517 +0000 UTC m=+3.816259459 container remove 8010a0638c011bc1d300171ebfa1a5539dd300774c6c720dd4296bf2d4892f19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 03:21:42 np0005558241 systemd[1]: libpod-conmon-8010a0638c011bc1d300171ebfa1a5539dd300774c6c720dd4296bf2d4892f19.scope: Deactivated successfully.
Dec 13 03:21:42 np0005558241 nova_compute[248510]: 2025-12-13 08:21:42.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:21:42 np0005558241 nova_compute[248510]: 2025-12-13 08:21:42.800 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:42 np0005558241 nova_compute[248510]: 2025-12-13 08:21:42.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:42 np0005558241 nova_compute[248510]: 2025-12-13 08:21:42.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:42 np0005558241 nova_compute[248510]: 2025-12-13 08:21:42.801 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:21:42 np0005558241 nova_compute[248510]: 2025-12-13 08:21:42.801 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1679: 321 pgs: 321 active+clean; 293 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.9 MiB/s wr, 342 op/s
Dec 13 03:21:43 np0005558241 podman[280327]: 2025-12-13 08:21:43.125379941 +0000 UTC m=+0.029722573 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:21:43 np0005558241 podman[280327]: 2025-12-13 08:21:43.265322858 +0000 UTC m=+0.169665450 container create bf9108d3a12b30511d4f157bbbfb8c20cde4a73422d47e0203fad25fbb17fa3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Dec 13 03:21:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:21:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/406369122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:21:43 np0005558241 nova_compute[248510]: 2025-12-13 08:21:43.364 248514 DEBUG nova.storage.rbd_utils [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] flattening images/1b5bf03c-4e74-4ede-b890-c9dcb4210f68 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec 13 03:21:43 np0005558241 systemd[1]: Started libpod-conmon-bf9108d3a12b30511d4f157bbbfb8c20cde4a73422d47e0203fad25fbb17fa3f.scope.
Dec 13 03:21:43 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:21:43 np0005558241 nova_compute[248510]: 2025-12-13 08:21:43.541 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.740s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:43 np0005558241 podman[280327]: 2025-12-13 08:21:43.54268387 +0000 UTC m=+0.447026482 container init bf9108d3a12b30511d4f157bbbfb8c20cde4a73422d47e0203fad25fbb17fa3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_merkle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:21:43 np0005558241 podman[280327]: 2025-12-13 08:21:43.554195593 +0000 UTC m=+0.458538185 container start bf9108d3a12b30511d4f157bbbfb8c20cde4a73422d47e0203fad25fbb17fa3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Dec 13 03:21:43 np0005558241 vigorous_merkle[280364]: 167 167
Dec 13 03:21:43 np0005558241 systemd[1]: libpod-bf9108d3a12b30511d4f157bbbfb8c20cde4a73422d47e0203fad25fbb17fa3f.scope: Deactivated successfully.
Dec 13 03:21:43 np0005558241 podman[280327]: 2025-12-13 08:21:43.732867244 +0000 UTC m=+0.637209926 container attach bf9108d3a12b30511d4f157bbbfb8c20cde4a73422d47e0203fad25fbb17fa3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_merkle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 03:21:43 np0005558241 podman[280327]: 2025-12-13 08:21:43.734124875 +0000 UTC m=+0.638467467 container died bf9108d3a12b30511d4f157bbbfb8c20cde4a73422d47e0203fad25fbb17fa3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_merkle, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 03:21:43 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:43Z|00176|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 03:21:43 np0005558241 nova_compute[248510]: 2025-12-13 08:21:43.816 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:21:43 np0005558241 nova_compute[248510]: 2025-12-13 08:21:43.817 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:21:43 np0005558241 nova_compute[248510]: 2025-12-13 08:21:43.825 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:21:43 np0005558241 nova_compute[248510]: 2025-12-13 08:21:43.825 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:21:43 np0005558241 nova_compute[248510]: 2025-12-13 08:21:43.832 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:21:43 np0005558241 nova_compute[248510]: 2025-12-13 08:21:43.833 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:21:43 np0005558241 nova_compute[248510]: 2025-12-13 08:21:43.838 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:44 np0005558241 nova_compute[248510]: 2025-12-13 08:21:44.034 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:21:44 np0005558241 nova_compute[248510]: 2025-12-13 08:21:44.037 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3723MB free_disk=59.8763771802187GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:21:44 np0005558241 nova_compute[248510]: 2025-12-13 08:21:44.037 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:44 np0005558241 nova_compute[248510]: 2025-12-13 08:21:44.037 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:44 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a6d86184a97ad6e115bce82d4d6ff9e2dad79904073f715d4a4c1883a895a56b-merged.mount: Deactivated successfully.
Dec 13 03:21:44 np0005558241 nova_compute[248510]: 2025-12-13 08:21:44.295 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 2e309dc2-3cab-4ecf-8be7-eab85790a0da actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:21:44 np0005558241 nova_compute[248510]: 2025-12-13 08:21:44.296 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 07413df5-0bb8-42c2-95ff-13458d598139 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:21:44 np0005558241 nova_compute[248510]: 2025-12-13 08:21:44.296 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9b6188af-75f0-4213-89c2-bd3eb72960b7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:21:44 np0005558241 nova_compute[248510]: 2025-12-13 08:21:44.296 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:21:44 np0005558241 nova_compute[248510]: 2025-12-13 08:21:44.296 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:21:44 np0005558241 nova_compute[248510]: 2025-12-13 08:21:44.376 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:44 np0005558241 nova_compute[248510]: 2025-12-13 08:21:44.411 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:21:44 np0005558241 nova_compute[248510]: 2025-12-13 08:21:44.841 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1680: 321 pgs: 321 active+clean; 317 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.2 MiB/s wr, 261 op/s
Dec 13 03:21:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:21:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3231662131' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:21:45 np0005558241 nova_compute[248510]: 2025-12-13 08:21:45.069 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.693s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:45 np0005558241 nova_compute[248510]: 2025-12-13 08:21:45.077 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:21:45 np0005558241 nova_compute[248510]: 2025-12-13 08:21:45.094 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:21:45 np0005558241 podman[280327]: 2025-12-13 08:21:45.098366036 +0000 UTC m=+2.002708648 container remove bf9108d3a12b30511d4f157bbbfb8c20cde4a73422d47e0203fad25fbb17fa3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:21:45 np0005558241 nova_compute[248510]: 2025-12-13 08:21:45.121 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:21:45 np0005558241 nova_compute[248510]: 2025-12-13 08:21:45.122 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:45 np0005558241 systemd[1]: libpod-conmon-bf9108d3a12b30511d4f157bbbfb8c20cde4a73422d47e0203fad25fbb17fa3f.scope: Deactivated successfully.
Dec 13 03:21:45 np0005558241 podman[280411]: 2025-12-13 08:21:45.283143357 +0000 UTC m=+0.034866620 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:21:45 np0005558241 podman[280411]: 2025-12-13 08:21:45.418778428 +0000 UTC m=+0.170501661 container create 9a6354d9de39049752e9137cae7395347d841678456be4d4f05a2892f411a6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec 13 03:21:45 np0005558241 systemd[1]: Started libpod-conmon-9a6354d9de39049752e9137cae7395347d841678456be4d4f05a2892f411a6ab.scope.
Dec 13 03:21:45 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:21:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab07785f0dc02080facdf63b43305c60dbf69c1c75360ac920fe10bfbb68d1e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:21:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab07785f0dc02080facdf63b43305c60dbf69c1c75360ac920fe10bfbb68d1e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:21:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab07785f0dc02080facdf63b43305c60dbf69c1c75360ac920fe10bfbb68d1e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:21:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab07785f0dc02080facdf63b43305c60dbf69c1c75360ac920fe10bfbb68d1e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:21:45 np0005558241 nova_compute[248510]: 2025-12-13 08:21:45.586 248514 DEBUG nova.storage.rbd_utils [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] removing snapshot(1a2ff40d88ae4e96ad7be6f96c000c7c) on rbd image(9b6188af-75f0-4213-89c2-bd3eb72960b7_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 03:21:45 np0005558241 podman[280411]: 2025-12-13 08:21:45.594988808 +0000 UTC m=+0.346712071 container init 9a6354d9de39049752e9137cae7395347d841678456be4d4f05a2892f411a6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 03:21:45 np0005558241 podman[280411]: 2025-12-13 08:21:45.604621596 +0000 UTC m=+0.356344839 container start 9a6354d9de39049752e9137cae7395347d841678456be4d4f05a2892f411a6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:21:45 np0005558241 podman[280411]: 2025-12-13 08:21:45.616398716 +0000 UTC m=+0.368121959 container attach 9a6354d9de39049752e9137cae7395347d841678456be4d4f05a2892f411a6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_yonath, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]: {
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:    "0": [
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:        {
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "devices": [
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "/dev/loop3"
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            ],
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "lv_name": "ceph_lv0",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "lv_size": "21470642176",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "name": "ceph_lv0",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "tags": {
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.cluster_name": "ceph",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.crush_device_class": "",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.encrypted": "0",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.objectstore": "bluestore",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.osd_id": "0",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.type": "block",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.vdo": "0",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.with_tpm": "0"
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            },
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "type": "block",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "vg_name": "ceph_vg0"
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:        }
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:    ],
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:    "1": [
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:        {
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "devices": [
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "/dev/loop4"
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            ],
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "lv_name": "ceph_lv1",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "lv_size": "21470642176",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "name": "ceph_lv1",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "tags": {
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.cluster_name": "ceph",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.crush_device_class": "",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.encrypted": "0",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.objectstore": "bluestore",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.osd_id": "1",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.type": "block",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.vdo": "0",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.with_tpm": "0"
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            },
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "type": "block",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "vg_name": "ceph_vg1"
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:        }
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:    ],
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:    "2": [
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:        {
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "devices": [
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "/dev/loop5"
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            ],
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "lv_name": "ceph_lv2",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "lv_size": "21470642176",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "name": "ceph_lv2",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "tags": {
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.cluster_name": "ceph",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.crush_device_class": "",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.encrypted": "0",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.objectstore": "bluestore",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.osd_id": "2",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.type": "block",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.vdo": "0",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:                "ceph.with_tpm": "0"
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            },
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "type": "block",
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:            "vg_name": "ceph_vg2"
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:        }
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]:    ]
Dec 13 03:21:45 np0005558241 lucid_yonath[280435]: }
Dec 13 03:21:45 np0005558241 systemd[1]: libpod-9a6354d9de39049752e9137cae7395347d841678456be4d4f05a2892f411a6ab.scope: Deactivated successfully.
Dec 13 03:21:45 np0005558241 podman[280411]: 2025-12-13 08:21:45.934828089 +0000 UTC m=+0.686551332 container died 9a6354d9de39049752e9137cae7395347d841678456be4d4f05a2892f411a6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_yonath, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:21:45 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ab07785f0dc02080facdf63b43305c60dbf69c1c75360ac920fe10bfbb68d1e5-merged.mount: Deactivated successfully.
Dec 13 03:21:45 np0005558241 podman[280411]: 2025-12-13 08:21:45.976150297 +0000 UTC m=+0.727873540 container remove 9a6354d9de39049752e9137cae7395347d841678456be4d4f05a2892f411a6ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_yonath, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 03:21:45 np0005558241 systemd[1]: libpod-conmon-9a6354d9de39049752e9137cae7395347d841678456be4d4f05a2892f411a6ab.scope: Deactivated successfully.
Dec 13 03:21:46 np0005558241 nova_compute[248510]: 2025-12-13 08:21:46.116 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:21:46 np0005558241 nova_compute[248510]: 2025-12-13 08:21:46.130 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614091.1272292, 3fffafca-321d-4611-8940-da963b356ca1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:21:46 np0005558241 nova_compute[248510]: 2025-12-13 08:21:46.130 248514 INFO nova.compute.manager [-] [instance: 3fffafca-321d-4611-8940-da963b356ca1] VM Stopped (Lifecycle Event)
Dec 13 03:21:46 np0005558241 nova_compute[248510]: 2025-12-13 08:21:46.143 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:21:46 np0005558241 nova_compute[248510]: 2025-12-13 08:21:46.163 248514 DEBUG nova.compute.manager [None req-3550348f-188e-4ceb-9477-b6ab80ea8317 - - - - - -] [instance: 3fffafca-321d-4611-8940-da963b356ca1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:21:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Dec 13 03:21:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Dec 13 03:21:46 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Dec 13 03:21:46 np0005558241 nova_compute[248510]: 2025-12-13 08:21:46.436 248514 DEBUG nova.storage.rbd_utils [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] creating snapshot(snap) on rbd image(1b5bf03c-4e74-4ede-b890-c9dcb4210f68) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 03:21:46 np0005558241 podman[280529]: 2025-12-13 08:21:46.490897476 +0000 UTC m=+0.042913708 container create ac989314488c0e56bb15691353f057ba869fab195f721fd186fd978dd372ceb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:21:46 np0005558241 systemd[1]: Started libpod-conmon-ac989314488c0e56bb15691353f057ba869fab195f721fd186fd978dd372ceb3.scope.
Dec 13 03:21:46 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:21:46 np0005558241 podman[280529]: 2025-12-13 08:21:46.470716508 +0000 UTC m=+0.022732770 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:21:46 np0005558241 podman[280529]: 2025-12-13 08:21:46.627170202 +0000 UTC m=+0.179186464 container init ac989314488c0e56bb15691353f057ba869fab195f721fd186fd978dd372ceb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_fermat, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:21:46 np0005558241 podman[280529]: 2025-12-13 08:21:46.63683568 +0000 UTC m=+0.188851922 container start ac989314488c0e56bb15691353f057ba869fab195f721fd186fd978dd372ceb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:21:46 np0005558241 jovial_fermat[280560]: 167 167
Dec 13 03:21:46 np0005558241 systemd[1]: libpod-ac989314488c0e56bb15691353f057ba869fab195f721fd186fd978dd372ceb3.scope: Deactivated successfully.
Dec 13 03:21:46 np0005558241 podman[280529]: 2025-12-13 08:21:46.641902925 +0000 UTC m=+0.193919257 container attach ac989314488c0e56bb15691353f057ba869fab195f721fd186fd978dd372ceb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_fermat, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 03:21:46 np0005558241 podman[280529]: 2025-12-13 08:21:46.645738819 +0000 UTC m=+0.197755051 container died ac989314488c0e56bb15691353f057ba869fab195f721fd186fd978dd372ceb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 03:21:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9a027d176d76e75f24cd42c1fd0e3c6d6cac7c3ea8530a1f1948bdb32eb3d63b-merged.mount: Deactivated successfully.
Dec 13 03:21:46 np0005558241 podman[280529]: 2025-12-13 08:21:46.698237762 +0000 UTC m=+0.250253994 container remove ac989314488c0e56bb15691353f057ba869fab195f721fd186fd978dd372ceb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 03:21:46 np0005558241 systemd[1]: libpod-conmon-ac989314488c0e56bb15691353f057ba869fab195f721fd186fd978dd372ceb3.scope: Deactivated successfully.
Dec 13 03:21:46 np0005558241 podman[280583]: 2025-12-13 08:21:46.941581936 +0000 UTC m=+0.095153295 container create cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_ride, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:21:46 np0005558241 podman[280583]: 2025-12-13 08:21:46.875584511 +0000 UTC m=+0.029155960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:21:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1682: 321 pgs: 321 active+clean; 317 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 827 KiB/s rd, 2.6 MiB/s wr, 67 op/s
Dec 13 03:21:46 np0005558241 systemd[1]: Started libpod-conmon-cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128.scope.
Dec 13 03:21:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:21:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e26afcea6b15848d23cdc5167579d6b583640a8fec0913288ffef0434393f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:21:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e26afcea6b15848d23cdc5167579d6b583640a8fec0913288ffef0434393f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:21:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e26afcea6b15848d23cdc5167579d6b583640a8fec0913288ffef0434393f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:21:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33e26afcea6b15848d23cdc5167579d6b583640a8fec0913288ffef0434393f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:21:47 np0005558241 podman[280583]: 2025-12-13 08:21:47.0412092 +0000 UTC m=+0.194780589 container init cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 03:21:47 np0005558241 podman[280583]: 2025-12-13 08:21:47.050568201 +0000 UTC m=+0.204139560 container start cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_ride, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:21:47 np0005558241 podman[280583]: 2025-12-13 08:21:47.054489667 +0000 UTC m=+0.208061016 container attach cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_ride, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:21:47 np0005558241 ovn_controller[148476]: 2025-12-13T08:21:47Z|00177|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 03:21:47 np0005558241 nova_compute[248510]: 2025-12-13 08:21:47.389 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:21:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Dec 13 03:21:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Dec 13 03:21:47 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Dec 13 03:21:47 np0005558241 lvm[280678]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:21:47 np0005558241 lvm[280677]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:21:47 np0005558241 lvm[280677]: VG ceph_vg0 finished
Dec 13 03:21:47 np0005558241 lvm[280678]: VG ceph_vg1 finished
Dec 13 03:21:47 np0005558241 lvm[280680]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:21:47 np0005558241 lvm[280680]: VG ceph_vg2 finished
Dec 13 03:21:47 np0005558241 friendly_ride[280599]: {}
Dec 13 03:21:47 np0005558241 systemd[1]: libpod-cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128.scope: Deactivated successfully.
Dec 13 03:21:47 np0005558241 systemd[1]: libpod-cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128.scope: Consumed 1.420s CPU time.
Dec 13 03:21:47 np0005558241 podman[280583]: 2025-12-13 08:21:47.956727699 +0000 UTC m=+1.110299078 container died cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_ride, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 03:21:47 np0005558241 systemd[1]: var-lib-containers-storage-overlay-33e26afcea6b15848d23cdc5167579d6b583640a8fec0913288ffef0434393f4-merged.mount: Deactivated successfully.
Dec 13 03:21:48 np0005558241 podman[280583]: 2025-12-13 08:21:48.022425008 +0000 UTC m=+1.175996367 container remove cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_ride, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:21:48 np0005558241 systemd[1]: libpod-conmon-cde8aa198b36f9506ca9b3102ca8754c17c4004519f36ae1abea057198a12128.scope: Deactivated successfully.
Dec 13 03:21:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:21:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:21:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:21:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:21:48 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:21:48 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:21:48 np0005558241 nova_compute[248510]: 2025-12-13 08:21:48.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:21:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1684: 321 pgs: 321 active+clean; 376 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.5 MiB/s wr, 200 op/s
Dec 13 03:21:49 np0005558241 nova_compute[248510]: 2025-12-13 08:21:49.257 248514 INFO nova.virt.libvirt.driver [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Snapshot image upload complete#033[00m
Dec 13 03:21:49 np0005558241 nova_compute[248510]: 2025-12-13 08:21:49.257 248514 INFO nova.compute.manager [None req-6d483640-9faa-447b-ac7b-1668400f4145 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Took 11.87 seconds to snapshot the instance on the hypervisor.#033[00m
Dec 13 03:21:49 np0005558241 nova_compute[248510]: 2025-12-13 08:21:49.386 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:21:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Dec 13 03:21:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Dec 13 03:21:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Dec 13 03:21:49 np0005558241 nova_compute[248510]: 2025-12-13 08:21:49.776 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614094.7748296, a14d8b88-7aec-468f-a550-881364e4d95e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:21:49 np0005558241 nova_compute[248510]: 2025-12-13 08:21:49.777 248514 INFO nova.compute.manager [-] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:21:49 np0005558241 nova_compute[248510]: 2025-12-13 08:21:49.844 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1686: 321 pgs: 321 active+clean; 376 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.2 MiB/s wr, 177 op/s
Dec 13 03:21:52 np0005558241 nova_compute[248510]: 2025-12-13 08:21:52.562 248514 DEBUG nova.compute.manager [None req-961f4ab9-ab6f-4cca-aeb4-147677bc1314 - - - - - -] [instance: a14d8b88-7aec-468f-a550-881364e4d95e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:21:52 np0005558241 nova_compute[248510]: 2025-12-13 08:21:52.939 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:52 np0005558241 nova_compute[248510]: 2025-12-13 08:21:52.940 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:52 np0005558241 nova_compute[248510]: 2025-12-13 08:21:52.959 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:21:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1687: 321 pgs: 321 active+clean; 376 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.8 MiB/s wr, 177 op/s
Dec 13 03:21:53 np0005558241 nova_compute[248510]: 2025-12-13 08:21:53.044 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:53 np0005558241 nova_compute[248510]: 2025-12-13 08:21:53.045 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:53 np0005558241 nova_compute[248510]: 2025-12-13 08:21:53.052 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:21:53 np0005558241 nova_compute[248510]: 2025-12-13 08:21:53.052 248514 INFO nova.compute.claims [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:21:53 np0005558241 nova_compute[248510]: 2025-12-13 08:21:53.208 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:21:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3755280805' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:21:53 np0005558241 nova_compute[248510]: 2025-12-13 08:21:53.774 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:53 np0005558241 nova_compute[248510]: 2025-12-13 08:21:53.783 248514 DEBUG nova.compute.provider_tree [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:21:53 np0005558241 nova_compute[248510]: 2025-12-13 08:21:53.928 248514 DEBUG nova.scheduler.client.report [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:21:53 np0005558241 nova_compute[248510]: 2025-12-13 08:21:53.965 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:53 np0005558241 nova_compute[248510]: 2025-12-13 08:21:53.966 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.022 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.022 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.054 248514 INFO nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.097 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.198 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.199 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.200 248514 INFO nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Creating image(s)#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.222 248514 DEBUG nova.storage.rbd_utils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image f983ed7f-13a4-496d-b8e9-60768d90efe6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.248 248514 DEBUG nova.storage.rbd_utils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image f983ed7f-13a4-496d-b8e9-60768d90efe6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.273 248514 DEBUG nova.storage.rbd_utils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image f983ed7f-13a4-496d-b8e9-60768d90efe6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.278 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.352 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.354 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.354 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.354 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.375 248514 DEBUG nova.storage.rbd_utils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image f983ed7f-13a4-496d-b8e9-60768d90efe6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.379 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 f983ed7f-13a4-496d-b8e9-60768d90efe6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.404 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.409 248514 DEBUG nova.policy [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e8aa7edd2151436caa0fd25f361298fd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2495263e4f944deda2647b578d06bb21', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.478 248514 DEBUG nova.compute.manager [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.544 248514 INFO nova.compute.manager [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] instance snapshotting#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.673 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 f983ed7f-13a4-496d-b8e9-60768d90efe6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:21:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.746 248514 DEBUG nova.storage.rbd_utils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] resizing rbd image f983ed7f-13a4-496d-b8e9-60768d90efe6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.839 248514 DEBUG nova.objects.instance [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lazy-loading 'migration_context' on Instance uuid f983ed7f-13a4-496d-b8e9-60768d90efe6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.843 248514 INFO nova.virt.libvirt.driver [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Beginning live snapshot process#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.846 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.871 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.872 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Ensure instance console log exists: /var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.873 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.873 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.873 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1688: 321 pgs: 321 active+clean; 376 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Dec 13 03:21:54 np0005558241 nova_compute[248510]: 2025-12-13 08:21:54.988 248514 DEBUG nova.virt.libvirt.imagebackend [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec 13 03:21:55 np0005558241 nova_compute[248510]: 2025-12-13 08:21:55.282 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Successfully created port: b07d4534-1cb5-41ec-b0c4-3e820159fe8e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:21:55 np0005558241 nova_compute[248510]: 2025-12-13 08:21:55.329 248514 DEBUG nova.storage.rbd_utils [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] creating snapshot(10d5fc17bda04d4a89230048af381972) on rbd image(07413df5-0bb8-42c2-95ff-13458d598139_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:21:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:55.403 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:55.404 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:21:55.404 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:21:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Dec 13 03:21:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Dec 13 03:21:55 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Dec 13 03:21:56 np0005558241 nova_compute[248510]: 2025-12-13 08:21:56.071 248514 DEBUG nova.storage.rbd_utils [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] cloning vms/07413df5-0bb8-42c2-95ff-13458d598139_disk@10d5fc17bda04d4a89230048af381972 to images/54bbec98-39a0-4d8e-b157-e4a05b0adf64 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:21:56 np0005558241 nova_compute[248510]: 2025-12-13 08:21:56.180 248514 DEBUG nova.storage.rbd_utils [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] flattening images/54bbec98-39a0-4d8e-b157-e4a05b0adf64 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec 13 03:21:56 np0005558241 nova_compute[248510]: 2025-12-13 08:21:56.273 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Successfully created port: 33293def-d398-4fee-865f-a61997489b67 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:21:56 np0005558241 nova_compute[248510]: 2025-12-13 08:21:56.678 248514 DEBUG nova.storage.rbd_utils [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] removing snapshot(10d5fc17bda04d4a89230048af381972) on rbd image(07413df5-0bb8-42c2-95ff-13458d598139_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec 13 03:21:56 np0005558241 nova_compute[248510]: 2025-12-13 08:21:56.860 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Successfully created port: bd554014-5cc7-4f34-b4a0-03ae7cc1f530 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:21:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1690: 321 pgs: 321 active+clean; 376 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 17 KiB/s wr, 12 op/s
Dec 13 03:21:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Dec 13 03:21:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Dec 13 03:21:57 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Dec 13 03:21:57 np0005558241 nova_compute[248510]: 2025-12-13 08:21:57.060 248514 DEBUG nova.storage.rbd_utils [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] creating snapshot(snap) on rbd image(54bbec98-39a0-4d8e-b157-e4a05b0adf64) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:21:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Dec 13 03:21:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Dec 13 03:21:58 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Dec 13 03:21:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1693: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 11 MiB/s wr, 208 op/s
Dec 13 03:21:59 np0005558241 nova_compute[248510]: 2025-12-13 08:21:59.225 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Successfully updated port: b07d4534-1cb5-41ec-b0c4-3e820159fe8e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:21:59 np0005558241 nova_compute[248510]: 2025-12-13 08:21:59.385 248514 DEBUG nova.compute.manager [req-a28ee89c-9a8f-4649-9970-1a9fddf53bee req-7a237bb2-374a-48bc-8da1-060fbc41dd4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-changed-b07d4534-1cb5-41ec-b0c4-3e820159fe8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:21:59 np0005558241 nova_compute[248510]: 2025-12-13 08:21:59.386 248514 DEBUG nova.compute.manager [req-a28ee89c-9a8f-4649-9970-1a9fddf53bee req-7a237bb2-374a-48bc-8da1-060fbc41dd4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Refreshing instance network info cache due to event network-changed-b07d4534-1cb5-41ec-b0c4-3e820159fe8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:21:59 np0005558241 nova_compute[248510]: 2025-12-13 08:21:59.387 248514 DEBUG oslo_concurrency.lockutils [req-a28ee89c-9a8f-4649-9970-1a9fddf53bee req-7a237bb2-374a-48bc-8da1-060fbc41dd4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:21:59 np0005558241 nova_compute[248510]: 2025-12-13 08:21:59.387 248514 DEBUG oslo_concurrency.lockutils [req-a28ee89c-9a8f-4649-9970-1a9fddf53bee req-7a237bb2-374a-48bc-8da1-060fbc41dd4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:21:59 np0005558241 nova_compute[248510]: 2025-12-13 08:21:59.387 248514 DEBUG nova.network.neutron [req-a28ee89c-9a8f-4649-9970-1a9fddf53bee req-7a237bb2-374a-48bc-8da1-060fbc41dd4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Refreshing network info cache for port b07d4534-1cb5-41ec-b0c4-3e820159fe8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:21:59 np0005558241 nova_compute[248510]: 2025-12-13 08:21:59.392 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:21:59 np0005558241 nova_compute[248510]: 2025-12-13 08:21:59.631 248514 DEBUG oslo_concurrency.lockutils [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:21:59 np0005558241 nova_compute[248510]: 2025-12-13 08:21:59.631 248514 DEBUG oslo_concurrency.lockutils [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:21:59 np0005558241 nova_compute[248510]: 2025-12-13 08:21:59.632 248514 DEBUG nova.objects.instance [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'flavor' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:21:59 np0005558241 nova_compute[248510]: 2025-12-13 08:21:59.643 248514 DEBUG nova.network.neutron [req-a28ee89c-9a8f-4649-9970-1a9fddf53bee req-7a237bb2-374a-48bc-8da1-060fbc41dd4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:21:59 np0005558241 nova_compute[248510]: 2025-12-13 08:21:59.665 248514 DEBUG nova.objects.instance [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'pci_requests' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:21:59 np0005558241 nova_compute[248510]: 2025-12-13 08:21:59.679 248514 DEBUG nova.network.neutron [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:21:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:21:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Dec 13 03:21:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Dec 13 03:21:59 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Dec 13 03:21:59 np0005558241 nova_compute[248510]: 2025-12-13 08:21:59.834 248514 INFO nova.virt.libvirt.driver [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Snapshot image upload complete#033[00m
Dec 13 03:21:59 np0005558241 nova_compute[248510]: 2025-12-13 08:21:59.835 248514 INFO nova.compute.manager [None req-7fee64c5-c9c4-4ba9-b8a0-25a4c87d3c99 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Took 5.29 seconds to snapshot the instance on the hypervisor.#033[00m
Dec 13 03:21:59 np0005558241 nova_compute[248510]: 2025-12-13 08:21:59.848 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:00 np0005558241 nova_compute[248510]: 2025-12-13 08:22:00.288 248514 DEBUG nova.policy [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee4333dff699428ca3ae5201855ab430', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea37c93820f34312bc386ea952ecd94f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:22:00 np0005558241 nova_compute[248510]: 2025-12-13 08:22:00.323 248514 DEBUG nova.network.neutron [req-a28ee89c-9a8f-4649-9970-1a9fddf53bee req-7a237bb2-374a-48bc-8da1-060fbc41dd4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:00 np0005558241 nova_compute[248510]: 2025-12-13 08:22:00.343 248514 DEBUG oslo_concurrency.lockutils [req-a28ee89c-9a8f-4649-9970-1a9fddf53bee req-7a237bb2-374a-48bc-8da1-060fbc41dd4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:00 np0005558241 nova_compute[248510]: 2025-12-13 08:22:00.645 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Successfully updated port: 33293def-d398-4fee-865f-a61997489b67 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:22:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1695: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 13 MiB/s wr, 245 op/s
Dec 13 03:22:01 np0005558241 nova_compute[248510]: 2025-12-13 08:22:01.104 248514 DEBUG nova.network.neutron [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Successfully created port: 9516b135-3bb5-4da4-942f-d044cad93bd4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:22:02 np0005558241 nova_compute[248510]: 2025-12-13 08:22:02.088 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Successfully updated port: bd554014-5cc7-4f34-b4a0-03ae7cc1f530 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:22:02 np0005558241 nova_compute[248510]: 2025-12-13 08:22:02.112 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:22:02 np0005558241 nova_compute[248510]: 2025-12-13 08:22:02.112 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquired lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:02 np0005558241 nova_compute[248510]: 2025-12-13 08:22:02.112 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:22:02 np0005558241 nova_compute[248510]: 2025-12-13 08:22:02.421 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:22:02 np0005558241 nova_compute[248510]: 2025-12-13 08:22:02.791 248514 DEBUG nova.compute.manager [req-9a333ba3-56b5-481f-b547-f35ae73143f2 req-904fb73c-f204-4dcd-8aff-73e5d0e23695 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-changed-33293def-d398-4fee-865f-a61997489b67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:02 np0005558241 nova_compute[248510]: 2025-12-13 08:22:02.791 248514 DEBUG nova.compute.manager [req-9a333ba3-56b5-481f-b547-f35ae73143f2 req-904fb73c-f204-4dcd-8aff-73e5d0e23695 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Refreshing instance network info cache due to event network-changed-33293def-d398-4fee-865f-a61997489b67. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:22:02 np0005558241 nova_compute[248510]: 2025-12-13 08:22:02.792 248514 DEBUG oslo_concurrency.lockutils [req-9a333ba3-56b5-481f-b547-f35ae73143f2 req-904fb73c-f204-4dcd-8aff-73e5d0e23695 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:22:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1696: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 11 MiB/s wr, 210 op/s
Dec 13 03:22:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:03.292 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:22:03 np0005558241 nova_compute[248510]: 2025-12-13 08:22:03.293 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:03.293 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:22:03 np0005558241 nova_compute[248510]: 2025-12-13 08:22:03.384 248514 DEBUG nova.network.neutron [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Successfully updated port: 9516b135-3bb5-4da4-942f-d044cad93bd4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:22:03 np0005558241 nova_compute[248510]: 2025-12-13 08:22:03.403 248514 DEBUG oslo_concurrency.lockutils [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:22:03 np0005558241 nova_compute[248510]: 2025-12-13 08:22:03.404 248514 DEBUG oslo_concurrency.lockutils [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:03 np0005558241 nova_compute[248510]: 2025-12-13 08:22:03.404 248514 DEBUG nova.network.neutron [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:22:03 np0005558241 nova_compute[248510]: 2025-12-13 08:22:03.688 248514 WARNING nova.network.neutron [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] 1ca92864-3b70-4794-9db1-fa08128cef92 already exists in list: networks containing: ['1ca92864-3b70-4794-9db1-fa08128cef92']. ignoring it#033[00m
Dec 13 03:22:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:22:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 13K writes, 56K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 13K writes, 3836 syncs, 3.55 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6771 writes, 29K keys, 6771 commit groups, 1.0 writes per commit group, ingest: 35.51 MB, 0.06 MB/s#012Interval WAL: 6772 writes, 2436 syncs, 2.78 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 03:22:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:04.296 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:04 np0005558241 nova_compute[248510]: 2025-12-13 08:22:04.393 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:22:04 np0005558241 nova_compute[248510]: 2025-12-13 08:22:04.850 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1697: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.5 MiB/s wr, 171 op/s
Dec 13 03:22:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1698: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Dec 13 03:22:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1699: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 4.9 KiB/s wr, 14 op/s
Dec 13 03:22:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:22:09
Dec 13 03:22:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:22:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:22:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'vms', '.mgr', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'default.rgw.control', 'default.rgw.meta']
Dec 13 03:22:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.254 248514 DEBUG nova.compute.manager [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-changed-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.254 248514 DEBUG nova.compute.manager [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Refreshing instance network info cache due to event network-changed-bd554014-5cc7-4f34-b4a0-03ae7cc1f530. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.255 248514 DEBUG oslo_concurrency.lockutils [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.439 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.619 248514 DEBUG nova.network.neutron [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.713 248514 DEBUG oslo_concurrency.lockutils [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.716 248514 DEBUG nova.virt.libvirt.vif [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.717 248514 DEBUG nova.network.os_vif_util [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.717 248514 DEBUG nova.network.os_vif_util [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.718 248514 DEBUG os_vif [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.718 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.719 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.719 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.723 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.723 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9516b135-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.724 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9516b135-3b, col_values=(('external_ids', {'iface-id': '9516b135-3bb5-4da4-942f-d044cad93bd4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:b5:6d', 'vm-uuid': '2e309dc2-3cab-4ecf-8be7-eab85790a0da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:09 np0005558241 NetworkManager[50376]: <info>  [1765614129.7275] manager: (tap9516b135-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.728 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.736 248514 INFO os_vif [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b')#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.737 248514 DEBUG nova.virt.libvirt.vif [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.737 248514 DEBUG nova.network.os_vif_util [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.738 248514 DEBUG nova.network.os_vif_util [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.740 248514 DEBUG nova.virt.libvirt.guest [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] attach device xml: <interface type="ethernet">
Dec 13 03:22:09 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:96:b5:6d"/>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:  <target dev="tap9516b135-3b"/>
Dec 13 03:22:09 np0005558241 nova_compute[248510]: </interface>
Dec 13 03:22:09 np0005558241 nova_compute[248510]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
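The attach-device XML logged just above can be reproduced field-for-field with the standard library. A sketch, with all values (MAC, tap name, MTU 1442, virtio model, vhost driver with rx_queue_size 512) taken from the log record itself rather than from a live deployment:

```python
import xml.etree.ElementTree as ET

def build_interface_xml(mac: str, dev: str, mtu: int,
                        model: str = "virtio",
                        driver: str = "vhost",
                        rx_queue_size: int = 512) -> str:
    """Assemble the <interface type="ethernet"> element that nova's
    libvirt driver logs before calling attach_device."""
    iface = ET.Element("interface", type="ethernet")
    ET.SubElement(iface, "mac", address=mac)
    ET.SubElement(iface, "model", type=model)
    ET.SubElement(iface, "driver", name=driver,
                  rx_queue_size=str(rx_queue_size))
    ET.SubElement(iface, "mtu", size=str(mtu))
    ET.SubElement(iface, "target", dev=dev)
    return ET.tostring(iface, encoding="unicode")

xml = build_interface_xml("fa:16:3e:96:b5:6d", "tap9516b135-3b", 1442)
print(xml)
```

Note the element is `type="ethernet"`, not `type="bridge"`: with OVN the port was already added to br-int by the ovsdbapp transaction above, so libvirt only needs to create the plain tap device.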
Dec 13 03:22:09 np0005558241 kernel: tap9516b135-3b: entered promiscuous mode
Dec 13 03:22:09 np0005558241 NetworkManager[50376]: <info>  [1765614129.7544] manager: (tap9516b135-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/82)
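The device name `tap9516b135-3b` seen throughout these records follows the usual Neutron/os-vif convention: `tap` plus the first 11 characters of the port UUID, which keeps the full name (3 + 11 = 14 characters) inside the 15-character Linux interface-name limit. A trivial sketch of that derivation:

```python
def tap_device_name(port_id: str, prefix: str = "tap") -> str:
    """Derive the host-side tap name from a Neutron port UUID by
    truncating the UUID to 11 characters after the prefix."""
    return prefix + port_id[:11]

# Port UUID from the log records above.
print(tap_device_name("9516b135-3bb5-4da4-942f-d044cad93bd4"))
```

This is why the port UUID (`9516b135-3bb5-...`) and the tap name (`tap9516b135-3b`) can be correlated directly when reading logs like these.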
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.753 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:09Z|00178|binding|INFO|Claiming lport 9516b135-3bb5-4da4-942f-d044cad93bd4 for this chassis.
Dec 13 03:22:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:09Z|00179|binding|INFO|9516b135-3bb5-4da4-942f-d044cad93bd4: Claiming fa:16:3e:96:b5:6d 10.100.0.3
Dec 13 03:22:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.764 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:b5:6d 10.100.0.3'], port_security=['fa:16:3e:96:b5:6d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2e309dc2-3cab-4ecf-8be7-eab85790a0da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=9516b135-3bb5-4da4-942f-d044cad93bd4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:22:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.766 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 9516b135-3bb5-4da4-942f-d044cad93bd4 in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 bound to our chassis#033[00m
Dec 13 03:22:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.769 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92#033[00m
Dec 13 03:22:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:09Z|00180|binding|INFO|Setting lport 9516b135-3bb5-4da4-942f-d044cad93bd4 ovn-installed in OVS
Dec 13 03:22:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:09Z|00181|binding|INFO|Setting lport 9516b135-3bb5-4da4-942f-d044cad93bd4 up in Southbound
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.774 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.787 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dc7115ba-1899-4fd3-b6ef-db755008f457]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.814 248514 DEBUG nova.network.neutron [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Updating instance_info_cache with network_info: [{"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:09 np0005558241 systemd-udevd[281088]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:22:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.829 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8d19c971-ec9e-4c20-9add-65c7bc9bc6e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.833 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[91b471ca-0709-46ce-bbac-d1fd62ca2810]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:09 np0005558241 NetworkManager[50376]: <info>  [1765614129.8474] device (tap9516b135-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:22:09 np0005558241 NetworkManager[50376]: <info>  [1765614129.8479] device (tap9516b135-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.851 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Releasing lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.852 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Instance network_info: |[{"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.853 248514 DEBUG oslo_concurrency.lockutils [req-9a333ba3-56b5-481f-b547-f35ae73143f2 req-904fb73c-f204-4dcd-8aff-73e5d0e23695 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.853 248514 DEBUG nova.network.neutron [req-9a333ba3-56b5-481f-b547-f35ae73143f2 req-904fb73c-f204-4dcd-8aff-73e5d0e23695 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Refreshing network info cache for port 33293def-d398-4fee-865f-a61997489b67 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.858 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Start _get_guest_xml network_info=[{"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:22:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.865 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3e81594d-2f87-462d-8176-d7889dc54bb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.870 248514 WARNING nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:22:09 np0005558241 podman[281061]: 2025-12-13 08:22:09.880083621 +0000 UTC m=+0.083829126 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.888 248514 DEBUG nova.virt.libvirt.host [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.889 248514 DEBUG nova.virt.libvirt.host [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.893 248514 DEBUG nova.virt.libvirt.host [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.894 248514 DEBUG nova.virt.libvirt.host [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.895 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:22:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.894 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dc2856ae-375b-4a4b-a01e-d6765cb0d46a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665427, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281121, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.896 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.897 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.897 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.897 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.898 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.898 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.899 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.899 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.900 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.900 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.900 248514 DEBUG nova.virt.hardware [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.904 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:09 np0005558241 podman[281060]: 2025-12-13 08:22:09.906706526 +0000 UTC m=+0.111076567 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 03:22:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.914 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a575fc18-1e21-4601-a3e2-f8fe10c7366b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665441, 'tstamp': 665441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281128, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665444, 'tstamp': 665444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281128, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:09 np0005558241 podman[281057]: 2025-12-13 08:22:09.915439381 +0000 UTC m=+0.118814687 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:22:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.916 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.919 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.920 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.920 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:09.920 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.936 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.968 248514 DEBUG nova.virt.libvirt.driver [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.968 248514 DEBUG nova.virt.libvirt.driver [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.969 248514 DEBUG nova.virt.libvirt.driver [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:54:c0:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.969 248514 DEBUG nova.virt.libvirt.driver [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:96:b5:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:22:09 np0005558241 nova_compute[248510]: 2025-12-13 08:22:09.993 248514 DEBUG nova.virt.libvirt.guest [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:22:09 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1637815753</nova:name>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:22:09</nova:creationTime>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:22:09 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:    <nova:port uuid="10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4">
Dec 13 03:22:09 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:    <nova:port uuid="9516b135-3bb5-4da4-942f-d044cad93bd4">
Dec 13 03:22:09 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:09 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:22:09 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:22:09 np0005558241 nova_compute[248510]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec 13 03:22:10 np0005558241 nova_compute[248510]: 2025-12-13 08:22:10.022 248514 DEBUG oslo_concurrency.lockutils [None req-6c161a27-2521-4b65-8f50-9ffe7ea3dee9 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:22:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:22:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:22:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:22:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:22:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:22:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:22:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1984943527' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:22:10 np0005558241 nova_compute[248510]: 2025-12-13 08:22:10.515 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:10 np0005558241 nova_compute[248510]: 2025-12-13 08:22:10.537 248514 DEBUG nova.storage.rbd_utils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image f983ed7f-13a4-496d-b8e9-60768d90efe6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:10 np0005558241 nova_compute[248510]: 2025-12-13 08:22:10.541 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:22:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:22:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:22:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:22:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:22:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:22:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:22:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:22:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:22:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:22:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1700: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 4.3 KiB/s wr, 12 op/s
Dec 13 03:22:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:22:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3001.8 total, 600.0 interval#012Cumulative writes: 14K writes, 56K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 14K writes, 4359 syncs, 3.38 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6706 writes, 25K keys, 6706 commit groups, 1.0 writes per commit group, ingest: 26.22 MB, 0.04 MB/s#012Interval WAL: 6706 writes, 2621 syncs, 2.56 writes per sync, written: 0.03 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 03:22:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:22:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3426856428' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.112 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.114 248514 DEBUG nova.virt.libvirt.vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:21:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1051085888',display_name='tempest-ServersTestMultiNic-server-1051085888',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1051085888',id=27,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-z5mr9ldl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413
593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:21:54Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=f983ed7f-13a4-496d-b8e9-60768d90efe6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.114 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.115 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:d2:76,bridge_name='br-int',has_traffic_filtering=True,id=b07d4534-1cb5-41ec-b0c4-3e820159fe8e,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d4534-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.116 248514 DEBUG nova.virt.libvirt.vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:21:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1051085888',display_name='tempest-ServersTestMultiNic-server-1051085888',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1051085888',id=27,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-z5mr9ldl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413
593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:21:54Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=f983ed7f-13a4-496d-b8e9-60768d90efe6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.116 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.117 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:8c:81,bridge_name='br-int',has_traffic_filtering=True,id=33293def-d398-4fee-865f-a61997489b67,network=Network(527d2c60-2d6f-4195-aeaa-9dd99258fb5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33293def-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.117 248514 DEBUG nova.virt.libvirt.vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:21:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1051085888',display_name='tempest-ServersTestMultiNic-server-1051085888',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1051085888',id=27,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-z5mr9ldl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413
593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:21:54Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=f983ed7f-13a4-496d-b8e9-60768d90efe6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.117 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.118 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:27:90,bridge_name='br-int',has_traffic_filtering=True,id=bd554014-5cc7-4f34-b4a0-03ae7cc1f530,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd554014-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.121 248514 DEBUG nova.objects.instance [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lazy-loading 'pci_devices' on Instance uuid f983ed7f-13a4-496d-b8e9-60768d90efe6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.150 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:  <uuid>f983ed7f-13a4-496d-b8e9-60768d90efe6</uuid>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:  <name>instance-0000001b</name>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersTestMultiNic-server-1051085888</nova:name>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:22:09</nova:creationTime>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:        <nova:user uuid="e8aa7edd2151436caa0fd25f361298fd">tempest-ServersTestMultiNic-1741413593-project-member</nova:user>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:        <nova:project uuid="2495263e4f944deda2647b578d06bb21">tempest-ServersTestMultiNic-1741413593</nova:project>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:        <nova:port uuid="b07d4534-1cb5-41ec-b0c4-3e820159fe8e">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.33" ipVersion="4"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:        <nova:port uuid="33293def-d398-4fee-865f-a61997489b67">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.1.74" ipVersion="4"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:        <nova:port uuid="bd554014-5cc7-4f34-b4a0-03ae7cc1f530">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.211" ipVersion="4"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <entry name="serial">f983ed7f-13a4-496d-b8e9-60768d90efe6</entry>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <entry name="uuid">f983ed7f-13a4-496d-b8e9-60768d90efe6</entry>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/f983ed7f-13a4-496d-b8e9-60768d90efe6_disk">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/f983ed7f-13a4-496d-b8e9-60768d90efe6_disk.config">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:27:d2:76"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <target dev="tapb07d4534-1c"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:ce:8c:81"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <target dev="tap33293def-d3"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:ad:27:90"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <target dev="tapbd554014-5c"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6/console.log" append="off"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:22:11 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:22:11 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:22:11 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:22:11 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.151 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Preparing to wait for external event network-vif-plugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.151 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.151 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.151 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.151 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Preparing to wait for external event network-vif-plugged-33293def-d398-4fee-865f-a61997489b67 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.152 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.152 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.152 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.152 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Preparing to wait for external event network-vif-plugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.152 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.152 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.153 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.153 248514 DEBUG nova.virt.libvirt.vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:21:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1051085888',display_name='tempest-ServersTestMultiNic-server-1051085888',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1051085888',id=27,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-z5mr9ldl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:21:54Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=f983ed7f-13a4-496d-b8e9-60768d90efe6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.154 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.154 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:d2:76,bridge_name='br-int',has_traffic_filtering=True,id=b07d4534-1cb5-41ec-b0c4-3e820159fe8e,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d4534-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.155 248514 DEBUG os_vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:d2:76,bridge_name='br-int',has_traffic_filtering=True,id=b07d4534-1cb5-41ec-b0c4-3e820159fe8e,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d4534-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.155 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.156 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.156 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.159 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.160 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb07d4534-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.160 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb07d4534-1c, col_values=(('external_ids', {'iface-id': 'b07d4534-1cb5-41ec-b0c4-3e820159fe8e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:d2:76', 'vm-uuid': 'f983ed7f-13a4-496d-b8e9-60768d90efe6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:11 np0005558241 NetworkManager[50376]: <info>  [1765614131.1631] manager: (tapb07d4534-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.162 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.166 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.169 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.170 248514 INFO os_vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:d2:76,bridge_name='br-int',has_traffic_filtering=True,id=b07d4534-1cb5-41ec-b0c4-3e820159fe8e,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d4534-1c')#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.171 248514 DEBUG nova.virt.libvirt.vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:21:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1051085888',display_name='tempest-ServersTestMultiNic-server-1051085888',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1051085888',id=27,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-z5mr9ldl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:21:54Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=f983ed7f-13a4-496d-b8e9-60768d90efe6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.172 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.172 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:8c:81,bridge_name='br-int',has_traffic_filtering=True,id=33293def-d398-4fee-865f-a61997489b67,network=Network(527d2c60-2d6f-4195-aeaa-9dd99258fb5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33293def-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.173 248514 DEBUG os_vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:8c:81,bridge_name='br-int',has_traffic_filtering=True,id=33293def-d398-4fee-865f-a61997489b67,network=Network(527d2c60-2d6f-4195-aeaa-9dd99258fb5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33293def-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.173 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.174 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.174 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.177 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.178 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33293def-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.178 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap33293def-d3, col_values=(('external_ids', {'iface-id': '33293def-d398-4fee-865f-a61997489b67', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ce:8c:81', 'vm-uuid': 'f983ed7f-13a4-496d-b8e9-60768d90efe6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.179 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:11 np0005558241 NetworkManager[50376]: <info>  [1765614131.1807] manager: (tap33293def-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.182 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.186 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.189 248514 INFO os_vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:8c:81,bridge_name='br-int',has_traffic_filtering=True,id=33293def-d398-4fee-865f-a61997489b67,network=Network(527d2c60-2d6f-4195-aeaa-9dd99258fb5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33293def-d3')#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.190 248514 DEBUG nova.virt.libvirt.vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:21:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1051085888',display_name='tempest-ServersTestMultiNic-server-1051085888',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1051085888',id=27,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-z5mr9ldl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:21:54Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=f983ed7f-13a4-496d-b8e9-60768d90efe6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.190 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.191 248514 DEBUG nova.network.os_vif_util [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:27:90,bridge_name='br-int',has_traffic_filtering=True,id=bd554014-5cc7-4f34-b4a0-03ae7cc1f530,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd554014-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.192 248514 DEBUG os_vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:27:90,bridge_name='br-int',has_traffic_filtering=True,id=bd554014-5cc7-4f34-b4a0-03ae7cc1f530,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd554014-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.192 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.193 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.193 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.196 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.197 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd554014-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.197 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbd554014-5c, col_values=(('external_ids', {'iface-id': 'bd554014-5cc7-4f34-b4a0-03ae7cc1f530', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:27:90', 'vm-uuid': 'f983ed7f-13a4-496d-b8e9-60768d90efe6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.198 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:11 np0005558241 NetworkManager[50376]: <info>  [1765614131.1996] manager: (tapbd554014-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.201 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.208 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.210 248514 INFO os_vif [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:27:90,bridge_name='br-int',has_traffic_filtering=True,id=bd554014-5cc7-4f34-b4a0-03ae7cc1f530,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd554014-5c')#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.375 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.375 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.375 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] No VIF found with MAC fa:16:3e:27:d2:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.376 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] No VIF found with MAC fa:16:3e:ce:8c:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.376 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] No VIF found with MAC fa:16:3e:ad:27:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.377 248514 INFO nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Using config drive#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.488 248514 DEBUG nova.storage.rbd_utils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image f983ed7f-13a4-496d-b8e9-60768d90efe6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.537 248514 DEBUG oslo_concurrency.lockutils [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.538 248514 DEBUG oslo_concurrency.lockutils [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.538 248514 DEBUG nova.objects.instance [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'flavor' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.723 248514 DEBUG nova.network.neutron [req-9a333ba3-56b5-481f-b547-f35ae73143f2 req-904fb73c-f204-4dcd-8aff-73e5d0e23695 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Updated VIF entry in instance network info cache for port 33293def-d398-4fee-865f-a61997489b67. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.723 248514 DEBUG nova.network.neutron [req-9a333ba3-56b5-481f-b547-f35ae73143f2 req-904fb73c-f204-4dcd-8aff-73e5d0e23695 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Updating instance_info_cache with network_info: [{"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.823 248514 DEBUG oslo_concurrency.lockutils [req-9a333ba3-56b5-481f-b547-f35ae73143f2 req-904fb73c-f204-4dcd-8aff-73e5d0e23695 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.824 248514 DEBUG oslo_concurrency.lockutils [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.825 248514 DEBUG nova.network.neutron [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Refreshing network info cache for port bd554014-5cc7-4f34-b4a0-03ae7cc1f530 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.885 248514 INFO nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Creating config drive at /var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6/disk.config#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.891 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsh53b5em execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:11 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:11Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:96:b5:6d 10.100.0.3
Dec 13 03:22:11 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:11Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:96:b5:6d 10.100.0.3
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.972 248514 DEBUG nova.objects.instance [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'pci_requests' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:22:11 np0005558241 nova_compute[248510]: 2025-12-13 08:22:11.990 248514 DEBUG nova.network.neutron [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.024 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsh53b5em" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.054 248514 DEBUG nova.storage.rbd_utils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image f983ed7f-13a4-496d-b8e9-60768d90efe6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.060 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6/disk.config f983ed7f-13a4-496d-b8e9-60768d90efe6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.289 248514 DEBUG nova.policy [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee4333dff699428ca3ae5201855ab430', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea37c93820f34312bc386ea952ecd94f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.871 248514 DEBUG oslo_concurrency.processutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6/disk.config f983ed7f-13a4-496d-b8e9-60768d90efe6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.811s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.872 248514 INFO nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Deleting local config drive /var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6/disk.config because it was imported into RBD.#033[00m
Dec 13 03:22:12 np0005558241 NetworkManager[50376]: <info>  [1765614132.9287] manager: (tapb07d4534-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/86)
Dec 13 03:22:12 np0005558241 kernel: tapb07d4534-1c: entered promiscuous mode
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.940 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:12 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:12Z|00182|binding|INFO|Claiming lport b07d4534-1cb5-41ec-b0c4-3e820159fe8e for this chassis.
Dec 13 03:22:12 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:12Z|00183|binding|INFO|b07d4534-1cb5-41ec-b0c4-3e820159fe8e: Claiming fa:16:3e:27:d2:76 10.100.0.33
Dec 13 03:22:12 np0005558241 NetworkManager[50376]: <info>  [1765614132.9495] manager: (tap33293def-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Dec 13 03:22:12 np0005558241 kernel: tap33293def-d3: entered promiscuous mode
Dec 13 03:22:12 np0005558241 systemd-udevd[281275]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:22:12 np0005558241 systemd-udevd[281274]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:22:12 np0005558241 NetworkManager[50376]: <info>  [1765614132.9659] manager: (tapbd554014-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/88)
Dec 13 03:22:12 np0005558241 systemd-udevd[281277]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:22:12 np0005558241 NetworkManager[50376]: <info>  [1765614132.9792] device (tapb07d4534-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:22:12 np0005558241 NetworkManager[50376]: <info>  [1765614132.9798] device (tapb07d4534-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:22:12 np0005558241 NetworkManager[50376]: <info>  [1765614132.9814] device (tap33293def-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:22:12 np0005558241 NetworkManager[50376]: <info>  [1765614132.9819] device (tap33293def-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:22:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1701: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 5.5 KiB/s wr, 12 op/s
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.994 248514 DEBUG nova.compute.manager [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-9516b135-3bb5-4da4-942f-d044cad93bd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.994 248514 DEBUG oslo_concurrency.lockutils [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.994 248514 DEBUG oslo_concurrency.lockutils [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.994 248514 DEBUG oslo_concurrency.lockutils [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.995 248514 DEBUG nova.compute.manager [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-9516b135-3bb5-4da4-942f-d044cad93bd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.995 248514 WARNING nova.compute.manager [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-9516b135-3bb5-4da4-942f-d044cad93bd4 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.995 248514 DEBUG nova.compute.manager [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-9516b135-3bb5-4da4-942f-d044cad93bd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.995 248514 DEBUG oslo_concurrency.lockutils [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.995 248514 DEBUG oslo_concurrency.lockutils [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.995 248514 DEBUG oslo_concurrency.lockutils [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.996 248514 DEBUG nova.compute.manager [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-9516b135-3bb5-4da4-942f-d044cad93bd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.996 248514 WARNING nova.compute.manager [req-7f379ad7-2849-4980-b6ce-e5ecd9d777a0 req-f4995bf1-7e90-4485-ba2d-382a3378f366 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-9516b135-3bb5-4da4-942f-d044cad93bd4 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:22:12 np0005558241 nova_compute[248510]: 2025-12-13 08:22:12.996 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:12 np0005558241 kernel: tapbd554014-5c: entered promiscuous mode
Dec 13 03:22:13 np0005558241 NetworkManager[50376]: <info>  [1765614132.9987] device (tapbd554014-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:22:13 np0005558241 NetworkManager[50376]: <info>  [1765614132.9997] device (tapbd554014-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:22:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:12Z|00184|if_status|INFO|Not updating pb chassis for 33293def-d398-4fee-865f-a61997489b67 now as sb is readonly
Dec 13 03:22:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:12Z|00185|binding|INFO|Claiming lport 33293def-d398-4fee-865f-a61997489b67 for this chassis.
Dec 13 03:22:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:12Z|00186|binding|INFO|33293def-d398-4fee-865f-a61997489b67: Claiming fa:16:3e:ce:8c:81 10.100.1.74
Dec 13 03:22:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:12Z|00187|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.000 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:12.996 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:d2:76 10.100.0.33'], port_security=['fa:16:3e:27:d2:76 10.100.0.33'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.33/24', 'neutron:device_id': 'f983ed7f-13a4-496d-b8e9-60768d90efe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-647203e6-db87-411a-8603-ed4b91cb4212', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0f2b755-adc8-4d52-9a0b-2240b0923f42, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b07d4534-1cb5-41ec-b0c4-3e820159fe8e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.004 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b07d4534-1cb5-41ec-b0c4-3e820159fe8e in datapath 647203e6-db87-411a-8603-ed4b91cb4212 bound to our chassis#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.006 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 647203e6-db87-411a-8603-ed4b91cb4212#033[00m
Dec 13 03:22:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:13Z|00188|binding|INFO|Setting lport b07d4534-1cb5-41ec-b0c4-3e820159fe8e ovn-installed in OVS
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.006 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:13Z|00189|binding|INFO|Claiming lport bd554014-5cc7-4f34-b4a0-03ae7cc1f530 for this chassis.
Dec 13 03:22:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:13Z|00190|binding|INFO|bd554014-5cc7-4f34-b4a0-03ae7cc1f530: Claiming fa:16:3e:ad:27:90 10.100.0.211
Dec 13 03:22:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:13Z|00191|binding|INFO|Setting lport b07d4534-1cb5-41ec-b0c4-3e820159fe8e up in Southbound
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.013 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:8c:81 10.100.1.74'], port_security=['fa:16:3e:ce:8c:81 10.100.1.74'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.74/24', 'neutron:device_id': 'f983ed7f-13a4-496d-b8e9-60768d90efe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527d2c60-2d6f-4195-aeaa-9dd99258fb5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c824dbae-6ef3-43b5-8ec9-f4bc95c906d6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=33293def-d398-4fee-865f-a61997489b67) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.022 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:27:90 10.100.0.211'], port_security=['fa:16:3e:ad:27:90 10.100.0.211'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.211/24', 'neutron:device_id': 'f983ed7f-13a4-496d-b8e9-60768d90efe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-647203e6-db87-411a-8603-ed4b91cb4212', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0f2b755-adc8-4d52-9a0b-2240b0923f42, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=bd554014-5cc7-4f34-b4a0-03ae7cc1f530) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:22:13 np0005558241 systemd-machined[210538]: New machine qemu-32-instance-0000001b.
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.047 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac4966e-2b90-470b-aec8-377bbdd105b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.049 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap647203e6-d1 in ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:22:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:13Z|00192|binding|INFO|Setting lport 33293def-d398-4fee-865f-a61997489b67 ovn-installed in OVS
Dec 13 03:22:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:13Z|00193|binding|INFO|Setting lport 33293def-d398-4fee-865f-a61997489b67 up in Southbound
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.052 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.052 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap647203e6-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.052 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[73b0297a-d781-4e20-93d3-777bffd862c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.053 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9ade395b-8d9e-463a-b453-450da8b52606]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:13 np0005558241 systemd[1]: Started Virtual Machine qemu-32-instance-0000001b.
Dec 13 03:22:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:13Z|00194|binding|INFO|Setting lport bd554014-5cc7-4f34-b4a0-03ae7cc1f530 ovn-installed in OVS
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.069 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.069 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[78ffe802-2d1f-4fed-89ad-6e9bca1a2af2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.097 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1a15b7dc-ba52-47f2-9ac1-3949eeb60b1d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:13Z|00195|binding|INFO|Setting lport bd554014-5cc7-4f34-b4a0-03ae7cc1f530 up in Southbound
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.133 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[df399358-a443-4de3-bf8a-8ae80faeccbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.138 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7d7a2e4d-e38c-4631-9e45-4819316c7c41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:13 np0005558241 NetworkManager[50376]: <info>  [1765614133.1401] manager: (tap647203e6-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/89)
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.176 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f0afaf44-2993-4156-9db8-61ff320e9c62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.180 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[17aa60d3-2414-4a46-8d56-f496a1b0c8ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:13 np0005558241 NetworkManager[50376]: <info>  [1765614133.2099] device (tap647203e6-d0): carrier: link connected
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.220 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[22d3e147-49d3-4898-aa7e-0232930b60ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.243 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[35e91cee-5d28-4fb8-81c9-bc77d92df9d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap647203e6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:b0:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671042, 'reachable_time': 17366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281314, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.263 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a94affea-bb0e-485d-9da0-f6867aaf8be7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee1:b0ec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671042, 'tstamp': 671042}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281315, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.285 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f1667fc1-030d-4bfc-b0a8-26bbb55b89ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap647203e6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:b0:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671042, 'reachable_time': 17366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281316, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.292 248514 DEBUG nova.network.neutron [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Successfully created port: cea82a7d-e92d-4ac6-ba47-854ec9905fd2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.310 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Acquiring lock "1c17e7b7-7062-48d2-a30f-b387929244d9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.310 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "1c17e7b7-7062-48d2-a30f-b387929244d9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.333 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[721857fe-eb87-4abc-994c-0e79fb7d332e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.341 248514 DEBUG nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.413 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2e570b-f63b-43d9-880e-0c0cd97aff5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.415 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap647203e6-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.415 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.415 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap647203e6-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.417 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:13 np0005558241 NetworkManager[50376]: <info>  [1765614133.4183] manager: (tap647203e6-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Dec 13 03:22:13 np0005558241 kernel: tap647203e6-d0: entered promiscuous mode
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.420 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.421 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap647203e6-d0, col_values=(('external_ids', {'iface-id': '2a2d6eba-8a85-4872-98d3-6dab02d46408'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.422 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:13Z|00196|binding|INFO|Releasing lport 2a2d6eba-8a85-4872-98d3-6dab02d46408 from this chassis (sb_readonly=0)
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.438 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.440 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/647203e6-db87-411a-8603-ed4b91cb4212.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/647203e6-db87-411a-8603-ed4b91cb4212.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.441 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[125d13c6-841c-4fb3-8e5e-072ad3329c26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.443 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-647203e6-db87-411a-8603-ed4b91cb4212
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/647203e6-db87-411a-8603-ed4b91cb4212.pid.haproxy
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 647203e6-db87-411a-8603-ed4b91cb4212
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:22:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:13.444 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'env', 'PROCESS_TAG=haproxy-647203e6-db87-411a-8603-ed4b91cb4212', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/647203e6-db87-411a-8603-ed4b91cb4212.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.567 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.568 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.578 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.578 248514 INFO nova.compute.claims [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.584 248514 DEBUG nova.network.neutron [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Updated VIF entry in instance network info cache for port bd554014-5cc7-4f34-b4a0-03ae7cc1f530. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.584 248514 DEBUG nova.network.neutron [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Updating instance_info_cache with network_info: [{"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.621 248514 DEBUG oslo_concurrency.lockutils [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-f983ed7f-13a4-496d-b8e9-60768d90efe6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.622 248514 DEBUG nova.compute.manager [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-changed-9516b135-3bb5-4da4-942f-d044cad93bd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.622 248514 DEBUG nova.compute.manager [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Refreshing instance network info cache due to event network-changed-9516b135-3bb5-4da4-942f-d044cad93bd4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.623 248514 DEBUG oslo_concurrency.lockutils [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.623 248514 DEBUG oslo_concurrency.lockutils [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.623 248514 DEBUG nova.network.neutron [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Refreshing network info cache for port 9516b135-3bb5-4da4-942f-d044cad93bd4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.661 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614133.6613116, f983ed7f-13a4-496d-b8e9-60768d90efe6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.662 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] VM Started (Lifecycle Event)#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.825 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.835 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614133.6614385, f983ed7f-13a4-496d-b8e9-60768d90efe6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.835 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.868 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.872 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:22:13 np0005558241 nova_compute[248510]: 2025-12-13 08:22:13.907 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:22:13 np0005558241 podman[281392]: 2025-12-13 08:22:13.827212521 +0000 UTC m=+0.031465846 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:22:14 np0005558241 nova_compute[248510]: 2025-12-13 08:22:14.008 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:14 np0005558241 podman[281392]: 2025-12-13 08:22:14.068771581 +0000 UTC m=+0.273024886 container create 0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 03:22:14 np0005558241 systemd[1]: Started libpod-conmon-0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56.scope.
Dec 13 03:22:14 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:22:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265fb3e3eb8d96b0e0a6d35b9b283058e328a9688b28ba3696775b5ebfa0aa7f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:22:14 np0005558241 nova_compute[248510]: 2025-12-13 08:22:14.173 248514 DEBUG nova.network.neutron [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Successfully updated port: cea82a7d-e92d-4ac6-ba47-854ec9905fd2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:22:14 np0005558241 nova_compute[248510]: 2025-12-13 08:22:14.200 248514 DEBUG oslo_concurrency.lockutils [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:22:14 np0005558241 podman[281392]: 2025-12-13 08:22:14.211194569 +0000 UTC m=+0.415447904 container init 0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 03:22:14 np0005558241 podman[281392]: 2025-12-13 08:22:14.217743 +0000 UTC m=+0.421996305 container start 0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 03:22:14 np0005558241 neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212[281409]: [NOTICE]   (281432) : New worker (281434) forked
Dec 13 03:22:14 np0005558241 neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212[281409]: [NOTICE]   (281432) : Loading success.
Dec 13 03:22:14 np0005558241 nova_compute[248510]: 2025-12-13 08:22:14.442 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:22:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4092307132' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:22:14 np0005558241 nova_compute[248510]: 2025-12-13 08:22:14.641 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:14 np0005558241 nova_compute[248510]: 2025-12-13 08:22:14.649 248514 DEBUG nova.compute.provider_tree [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.656 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 33293def-d398-4fee-865f-a61997489b67 in datapath 527d2c60-2d6f-4195-aeaa-9dd99258fb5b unbound from our chassis#033[00m
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.658 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 527d2c60-2d6f-4195-aeaa-9dd99258fb5b#033[00m
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.672 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5f03dc76-d565-42e3-a1f0-2c1b7ae1f301]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.673 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap527d2c60-21 in ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.676 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap527d2c60-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.676 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d879c19e-5c52-4f2e-af28-48ef17de840e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.677 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff1ff4b-8536-46fc-a2bf-e1a5107f3e5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.691 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[4b307ab5-bd39-485d-b404-5741d8db5eaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.717 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a76313fb-a145-4d72-aa72-ba53ffdb9e36]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:14 np0005558241 nova_compute[248510]: 2025-12-13 08:22:14.724 248514 DEBUG nova.compute.manager [req-6518c3f6-3ad2-420f-9590-77bca41b51c1 req-473fab8a-a322-45e4-aa8f-a172af210f5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-changed-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:14 np0005558241 nova_compute[248510]: 2025-12-13 08:22:14.725 248514 DEBUG nova.compute.manager [req-6518c3f6-3ad2-420f-9590-77bca41b51c1 req-473fab8a-a322-45e4-aa8f-a172af210f5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Refreshing instance network info cache due to event network-changed-cea82a7d-e92d-4ac6-ba47-854ec9905fd2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:22:14 np0005558241 nova_compute[248510]: 2025-12-13 08:22:14.725 248514 DEBUG oslo_concurrency.lockutils [req-6518c3f6-3ad2-420f-9590-77bca41b51c1 req-473fab8a-a322-45e4-aa8f-a172af210f5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:22:14 np0005558241 nova_compute[248510]: 2025-12-13 08:22:14.741 248514 DEBUG nova.scheduler.client.report [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.751 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a82b2524-19a9-4279-a4ec-47338976a087]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.760 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5ffd8939-d476-4c3d-9701-a647d568feaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:14 np0005558241 NetworkManager[50376]: <info>  [1765614134.7606] manager: (tap527d2c60-20): new Veth device (/org/freedesktop/NetworkManager/Devices/91)
Dec 13 03:22:14 np0005558241 systemd-udevd[281302]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:22:14 np0005558241 nova_compute[248510]: 2025-12-13 08:22:14.765 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:14 np0005558241 nova_compute[248510]: 2025-12-13 08:22:14.766 248514 DEBUG nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.789 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[39d46a06-8dce-4d3c-932c-616f516f61cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.793 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[83d5bd0d-e760-4733-9e3a-809393b36396]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:14 np0005558241 NetworkManager[50376]: <info>  [1765614134.8181] device (tap527d2c60-20): carrier: link connected
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.823 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[21acf589-2ec0-4831-a416-3a0f1da5d162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:14 np0005558241 nova_compute[248510]: 2025-12-13 08:22:14.829 248514 DEBUG nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:22:14 np0005558241 nova_compute[248510]: 2025-12-13 08:22:14.830 248514 DEBUG nova.network.neutron [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.844 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[997c9337-ca6f-47fc-9963-8aa0795925b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527d2c60-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:a6:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671203, 'reachable_time': 30096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281455, 'error': None, 'target': 'ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.861 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4d51634d-f964-44f1-beaa-f358669fe26a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:a661'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671203, 'tstamp': 671203}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281456, 'error': None, 'target': 'ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:14 np0005558241 nova_compute[248510]: 2025-12-13 08:22:14.863 248514 INFO nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:22:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.877 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e49b66-a61e-4375-83ef-170750b4e150]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527d2c60-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:a6:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671203, 'reachable_time': 30096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281457, 'error': None, 'target': 'ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:14 np0005558241 nova_compute[248510]: 2025-12-13 08:22:14.895 248514 DEBUG nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.913 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8b5e0f72-7719-4788-92f0-e50c60831379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:22:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2561537915' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:22:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:22:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2561537915' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.992 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[10d1ec3d-a337-4f34-9d92-f349585fe12f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1702: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 18 KiB/s wr, 18 op/s
Dec 13 03:22:14 np0005558241 nova_compute[248510]: 2025-12-13 08:22:14.994 248514 DEBUG nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.995 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527d2c60-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:14 np0005558241 nova_compute[248510]: 2025-12-13 08:22:14.996 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.996 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:14 np0005558241 nova_compute[248510]: 2025-12-13 08:22:14.996 248514 INFO nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Creating image(s)#033[00m
Dec 13 03:22:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:14.997 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap527d2c60-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:15 np0005558241 NetworkManager[50376]: <info>  [1765614134.9999] manager: (tap527d2c60-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Dec 13 03:22:15 np0005558241 kernel: tap527d2c60-20: entered promiscuous mode
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:15.003 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap527d2c60-20, col_values=(('external_ids', {'iface-id': 'a4843e01-b61c-4c89-820d-c2bc9f310806'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:15 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:15Z|00197|binding|INFO|Releasing lport a4843e01-b61c-4c89-820d-c2bc9f310806 from this chassis (sb_readonly=0)
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:15.023 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/527d2c60-2d6f-4195-aeaa-9dd99258fb5b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/527d2c60-2d6f-4195-aeaa-9dd99258fb5b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:15.024 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a1833d-5668-4206-a0f3-6b452b052cfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.024 248514 DEBUG nova.storage.rbd_utils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] rbd image 1c17e7b7-7062-48d2-a30f-b387929244d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:15.025 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-527d2c60-2d6f-4195-aeaa-9dd99258fb5b
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/527d2c60-2d6f-4195-aeaa-9dd99258fb5b.pid.haproxy
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 527d2c60-2d6f-4195-aeaa-9dd99258fb5b
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:22:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:15.025 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b', 'env', 'PROCESS_TAG=haproxy-527d2c60-2d6f-4195-aeaa-9dd99258fb5b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/527d2c60-2d6f-4195-aeaa-9dd99258fb5b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.052 248514 DEBUG nova.storage.rbd_utils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] rbd image 1c17e7b7-7062-48d2-a30f-b387929244d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.076 248514 DEBUG nova.storage.rbd_utils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] rbd image 1c17e7b7-7062-48d2-a30f-b387929244d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.080 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.109 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.150 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.151 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.152 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.152 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.177 248514 DEBUG nova.storage.rbd_utils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] rbd image 1c17e7b7-7062-48d2-a30f-b387929244d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.182 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1c17e7b7-7062-48d2-a30f-b387929244d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.273 248514 DEBUG nova.network.neutron [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.274 248514 DEBUG nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.488 248514 DEBUG nova.network.neutron [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updated VIF entry in instance network info cache for port 9516b135-3bb5-4da4-942f-d044cad93bd4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:22:15 np0005558241 podman[281581]: 2025-12-13 08:22:15.39633135 +0000 UTC m=+0.026019342 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.489 248514 DEBUG nova.network.neutron [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.515 248514 DEBUG oslo_concurrency.lockutils [req-dd5a11dc-f362-4b19-b5d4-297685d69117 req-9a957eac-cf83-4bdc-a9ac-9e29783dbb3b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.516 248514 DEBUG oslo_concurrency.lockutils [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.517 248514 DEBUG nova.network.neutron [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.686 248514 WARNING nova.network.neutron [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] 1ca92864-3b70-4794-9db1-fa08128cef92 already exists in list: networks containing: ['1ca92864-3b70-4794-9db1-fa08128cef92']. ignoring it#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.687 248514 WARNING nova.network.neutron [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] 1ca92864-3b70-4794-9db1-fa08128cef92 already exists in list: networks containing: ['1ca92864-3b70-4794-9db1-fa08128cef92']. ignoring it#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.756 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-plugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.756 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.756 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.757 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.757 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Processing event network-vif-plugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.757 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-plugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.758 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.759 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.759 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.759 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] No event matching network-vif-plugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e in dict_keys([('network-vif-plugged', '33293def-d398-4fee-865f-a61997489b67'), ('network-vif-plugged', 'bd554014-5cc7-4f34-b4a0-03ae7cc1f530')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.760 248514 WARNING nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received unexpected event network-vif-plugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e for instance with vm_state building and task_state spawning.#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.760 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-plugged-33293def-d398-4fee-865f-a61997489b67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.760 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.760 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.760 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.760 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Processing event network-vif-plugged-33293def-d398-4fee-865f-a61997489b67 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.761 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-plugged-33293def-d398-4fee-865f-a61997489b67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.761 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.761 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.761 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.761 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] No event matching network-vif-plugged-33293def-d398-4fee-865f-a61997489b67 in dict_keys([('network-vif-plugged', 'bd554014-5cc7-4f34-b4a0-03ae7cc1f530')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.761 248514 WARNING nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received unexpected event network-vif-plugged-33293def-d398-4fee-865f-a61997489b67 for instance with vm_state building and task_state spawning.#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.762 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-plugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.762 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.762 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.762 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.762 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Processing event network-vif-plugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.762 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-plugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.763 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.763 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.763 248514 DEBUG oslo_concurrency.lockutils [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.763 248514 DEBUG nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] No waiting events found dispatching network-vif-plugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.763 248514 WARNING nova.compute.manager [req-0eac0186-8f67-4b7d-8ce9-43dc8f65d076 req-2e8d3109-7f58-4a32-855c-4b716028ebcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received unexpected event network-vif-plugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 for instance with vm_state building and task_state spawning.#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.764 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.769 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614135.7689528, f983ed7f-13a4-496d-b8e9-60768d90efe6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.770 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.773 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.780 248514 INFO nova.virt.libvirt.driver [-] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Instance spawned successfully.#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.781 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.791 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:22:15 np0005558241 podman[281581]: 2025-12-13 08:22:15.802617177 +0000 UTC m=+0.432305129 container create 854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.817 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.821 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.821 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.822 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.823 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.823 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.823 248514 DEBUG nova.virt.libvirt.driver [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:22:15 np0005558241 nova_compute[248510]: 2025-12-13 08:22:15.858 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:22:15 np0005558241 systemd[1]: Started libpod-conmon-854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a.scope.
Dec 13 03:22:16 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:22:16 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c55d01b3ee3deca83f683b6cfde9ad826f72c632d13fcddd102d1648a05c1801/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:22:16 np0005558241 nova_compute[248510]: 2025-12-13 08:22:16.088 248514 INFO nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Took 21.89 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:22:16 np0005558241 nova_compute[248510]: 2025-12-13 08:22:16.089 248514 DEBUG nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:22:16 np0005558241 nova_compute[248510]: 2025-12-13 08:22:16.162 248514 INFO nova.compute.manager [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Took 23.15 seconds to build instance.#033[00m
Dec 13 03:22:16 np0005558241 nova_compute[248510]: 2025-12-13 08:22:16.187 248514 DEBUG oslo_concurrency.lockutils [None req-d5e10ea5-2500-44c5-bdc9-cf406380d273 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:16 np0005558241 nova_compute[248510]: 2025-12-13 08:22:16.200 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:16 np0005558241 podman[281581]: 2025-12-13 08:22:16.429784204 +0000 UTC m=+1.059472186 container init 854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 03:22:16 np0005558241 podman[281581]: 2025-12-13 08:22:16.43611153 +0000 UTC m=+1.065799482 container start 854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:22:16 np0005558241 neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b[281599]: [NOTICE]   (281603) : New worker (281605) forked
Dec 13 03:22:16 np0005558241 neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b[281599]: [NOTICE]   (281603) : Loading success.
Dec 13 03:22:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.672 158419 INFO neutron.agent.ovn.metadata.agent [-] Port bd554014-5cc7-4f34-b4a0-03ae7cc1f530 in datapath 647203e6-db87-411a-8603-ed4b91cb4212 unbound from our chassis#033[00m
Dec 13 03:22:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.675 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 647203e6-db87-411a-8603-ed4b91cb4212#033[00m
Dec 13 03:22:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.694 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3709c8a0-252a-4cd7-93ef-54c8d3c80d4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.729 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[374e1634-82d5-4dce-845e-c086f7cf23dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.733 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[155f87d4-b060-4242-bc04-2f89393fd895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.770 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3b66890f-3155-49c3-a3e1-69ad1bd5805c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.793 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fe013365-18a6-4a48-bf4b-beefec972617]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap647203e6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:b0:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671042, 'reachable_time': 17366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281619, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.816 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[df096661-eb22-4741-bbba-dd87f1b87ceb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap647203e6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671058, 'tstamp': 671058}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281620, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap647203e6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671062, 'tstamp': 671062}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281620, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.818 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap647203e6-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.821 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap647203e6-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.822 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:16 np0005558241 nova_compute[248510]: 2025-12-13 08:22:16.822 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.822 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap647203e6-d0, col_values=(('external_ids', {'iface-id': '2a2d6eba-8a85-4872-98d3-6dab02d46408'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:16.823 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:16 np0005558241 nova_compute[248510]: 2025-12-13 08:22:16.827 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1c17e7b7-7062-48d2-a30f-b387929244d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:16 np0005558241 nova_compute[248510]: 2025-12-13 08:22:16.902 248514 DEBUG nova.storage.rbd_utils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] resizing rbd image 1c17e7b7-7062-48d2-a30f-b387929244d9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:22:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1703: 321 pgs: 321 active+clean; 502 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 18 KiB/s wr, 9 op/s
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.440 248514 DEBUG nova.objects.instance [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lazy-loading 'migration_context' on Instance uuid 1c17e7b7-7062-48d2-a30f-b387929244d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.456 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.456 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Ensure instance console log exists: /var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.457 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.457 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.457 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.458 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.466 248514 WARNING nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.473 248514 DEBUG nova.virt.libvirt.host [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.474 248514 DEBUG nova.virt.libvirt.host [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.478 248514 DEBUG nova.virt.libvirt.host [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.478 248514 DEBUG nova.virt.libvirt.host [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.479 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.479 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.479 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.479 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.479 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.480 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.480 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.480 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.480 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.480 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.480 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.481 248514 DEBUG nova.virt.hardware [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.483 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.985 248514 DEBUG oslo_concurrency.lockutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.986 248514 DEBUG oslo_concurrency.lockutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.986 248514 DEBUG oslo_concurrency.lockutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.986 248514 DEBUG oslo_concurrency.lockutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.986 248514 DEBUG oslo_concurrency.lockutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.988 248514 INFO nova.compute.manager [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Terminating instance#033[00m
Dec 13 03:22:17 np0005558241 nova_compute[248510]: 2025-12-13 08:22:17.989 248514 DEBUG nova.compute.manager [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:22:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:22:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/108519056' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.055 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.088 248514 DEBUG nova.storage.rbd_utils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] rbd image 1c17e7b7-7062-48d2-a30f-b387929244d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.096 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:18 np0005558241 kernel: tapb07d4534-1c (unregistering): left promiscuous mode
Dec 13 03:22:18 np0005558241 NetworkManager[50376]: <info>  [1765614138.1832] device (tapb07d4534-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:22:18 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:18Z|00198|binding|INFO|Releasing lport b07d4534-1cb5-41ec-b0c4-3e820159fe8e from this chassis (sb_readonly=0)
Dec 13 03:22:18 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:18Z|00199|binding|INFO|Setting lport b07d4534-1cb5-41ec-b0c4-3e820159fe8e down in Southbound
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.194 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:18Z|00200|binding|INFO|Removing iface tapb07d4534-1c ovn-installed in OVS
Dec 13 03:22:18 np0005558241 kernel: tap33293def-d3 (unregistering): left promiscuous mode
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.205 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:d2:76 10.100.0.33'], port_security=['fa:16:3e:27:d2:76 10.100.0.33'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.33/24', 'neutron:device_id': 'f983ed7f-13a4-496d-b8e9-60768d90efe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-647203e6-db87-411a-8603-ed4b91cb4212', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0f2b755-adc8-4d52-9a0b-2240b0923f42, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b07d4534-1cb5-41ec-b0c4-3e820159fe8e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.206 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b07d4534-1cb5-41ec-b0c4-3e820159fe8e in datapath 647203e6-db87-411a-8603-ed4b91cb4212 unbound from our chassis#033[00m
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.208 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 647203e6-db87-411a-8603-ed4b91cb4212#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.210 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 NetworkManager[50376]: <info>  [1765614138.2124] device (tap33293def-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.227 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 kernel: tapbd554014-5c (unregistering): left promiscuous mode
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.230 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7643c6b8-f39a-4f29-a512-152793779460]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:18 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:18Z|00201|binding|INFO|Releasing lport 33293def-d398-4fee-865f-a61997489b67 from this chassis (sb_readonly=0)
Dec 13 03:22:18 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:18Z|00202|binding|INFO|Setting lport 33293def-d398-4fee-865f-a61997489b67 down in Southbound
Dec 13 03:22:18 np0005558241 NetworkManager[50376]: <info>  [1765614138.2360] device (tapbd554014-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:22:18 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:18Z|00203|binding|INFO|Removing iface tap33293def-d3 ovn-installed in OVS
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.237 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.241 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.246 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:8c:81 10.100.1.74'], port_security=['fa:16:3e:ce:8c:81 10.100.1.74'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.74/24', 'neutron:device_id': 'f983ed7f-13a4-496d-b8e9-60768d90efe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527d2c60-2d6f-4195-aeaa-9dd99258fb5b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c824dbae-6ef3-43b5-8ec9-f4bc95c906d6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=33293def-d398-4fee-865f-a61997489b67) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:22:18 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:18Z|00204|binding|INFO|Releasing lport bd554014-5cc7-4f34-b4a0-03ae7cc1f530 from this chassis (sb_readonly=0)
Dec 13 03:22:18 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:18Z|00205|binding|INFO|Setting lport bd554014-5cc7-4f34-b4a0-03ae7cc1f530 down in Southbound
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.268 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:18Z|00206|binding|INFO|Removing iface tapbd554014-5c ovn-installed in OVS
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.270 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.273 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:27:90 10.100.0.211'], port_security=['fa:16:3e:ad:27:90 10.100.0.211'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.211/24', 'neutron:device_id': 'f983ed7f-13a4-496d-b8e9-60768d90efe6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-647203e6-db87-411a-8603-ed4b91cb4212', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0f2b755-adc8-4d52-9a0b-2240b0923f42, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=bd554014-5cc7-4f34-b4a0-03ae7cc1f530) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.275 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[60dab543-f185-4a28-96ae-5b2ddec08b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.279 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4124fe-d66f-43b8-b7a3-46c4688cd974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.284 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Dec 13 03:22:18 np0005558241 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001b.scope: Consumed 2.718s CPU time.
Dec 13 03:22:18 np0005558241 systemd-machined[210538]: Machine qemu-32-instance-0000001b terminated.
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.312 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec7c53b-19d2-43e9-a670-d7530ef0154e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.318 248514 DEBUG nova.network.neutron [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.333 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7a075ac6-86aa-4bb0-919b-4cadef8d234f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap647203e6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:b0:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671042, 'reachable_time': 17366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281765, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.352 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[271da28a-969e-4490-9152-0c4071256e7b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap647203e6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671058, 'tstamp': 671058}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281766, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap647203e6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671062, 'tstamp': 671062}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281766, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.354 248514 DEBUG oslo_concurrency.lockutils [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.355 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap647203e6-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.355 248514 DEBUG oslo_concurrency.lockutils [req-6518c3f6-3ad2-420f-9590-77bca41b51c1 req-473fab8a-a322-45e4-aa8f-a172af210f5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.356 248514 DEBUG nova.network.neutron [req-6518c3f6-3ad2-420f-9590-77bca41b51c1 req-473fab8a-a322-45e4-aa8f-a172af210f5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Refreshing network info cache for port cea82a7d-e92d-4ac6-ba47-854ec9905fd2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.359 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.362 248514 DEBUG nova.virt.libvirt.vif [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.362 248514 DEBUG nova.network.os_vif_util [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.363 248514 DEBUG nova.network.os_vif_util [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:29:df,bridge_name='br-int',has_traffic_filtering=True,id=cea82a7d-e92d-4ac6-ba47-854ec9905fd2,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcea82a7d-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.364 248514 DEBUG os_vif [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:29:df,bridge_name='br-int',has_traffic_filtering=True,id=cea82a7d-e92d-4ac6-ba47-854ec9905fd2,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcea82a7d-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.365 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.365 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.366 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.367 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.368 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.368 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap647203e6-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.369 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcea82a7d-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.369 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.369 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcea82a7d-e9, col_values=(('external_ids', {'iface-id': 'cea82a7d-e92d-4ac6-ba47-854ec9905fd2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:29:df', 'vm-uuid': '2e309dc2-3cab-4ecf-8be7-eab85790a0da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.369 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap647203e6-d0, col_values=(('external_ids', {'iface-id': '2a2d6eba-8a85-4872-98d3-6dab02d46408'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.370 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.371 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.371 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 33293def-d398-4fee-865f-a61997489b67 in datapath 527d2c60-2d6f-4195-aeaa-9dd99258fb5b unbound from our chassis#033[00m
Dec 13 03:22:18 np0005558241 NetworkManager[50376]: <info>  [1765614138.3721] manager: (tapcea82a7d-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.373 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 527d2c60-2d6f-4195-aeaa-9dd99258fb5b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.374 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.374 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a0f977cb-e460-42f9-bbf0-49e045a48a18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.375 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b namespace which is not needed anymore#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.381 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.383 248514 INFO os_vif [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:29:df,bridge_name='br-int',has_traffic_filtering=True,id=cea82a7d-e92d-4ac6-ba47-854ec9905fd2,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcea82a7d-e9')#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.384 248514 DEBUG nova.virt.libvirt.vif [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.384 248514 DEBUG nova.network.os_vif_util [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.385 248514 DEBUG nova.network.os_vif_util [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:29:df,bridge_name='br-int',has_traffic_filtering=True,id=cea82a7d-e92d-4ac6-ba47-854ec9905fd2,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcea82a7d-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.388 248514 DEBUG nova.virt.libvirt.guest [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] attach device xml: <interface type="ethernet">
Dec 13 03:22:18 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:3a:29:df"/>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:  <target dev="tapcea82a7d-e9"/>
Dec 13 03:22:18 np0005558241 nova_compute[248510]: </interface>
Dec 13 03:22:18 np0005558241 nova_compute[248510]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec 13 03:22:18 np0005558241 systemd-udevd[281737]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:22:18 np0005558241 NetworkManager[50376]: <info>  [1765614138.4053] manager: (tapcea82a7d-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Dec 13 03:22:18 np0005558241 kernel: tapcea82a7d-e9: entered promiscuous mode
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.419 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:18Z|00207|binding|INFO|Claiming lport cea82a7d-e92d-4ac6-ba47-854ec9905fd2 for this chassis.
Dec 13 03:22:18 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:18Z|00208|binding|INFO|cea82a7d-e92d-4ac6-ba47-854ec9905fd2: Claiming fa:16:3e:3a:29:df 10.100.0.4
Dec 13 03:22:18 np0005558241 NetworkManager[50376]: <info>  [1765614138.4218] device (tapcea82a7d-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:22:18 np0005558241 NetworkManager[50376]: <info>  [1765614138.4224] device (tapcea82a7d-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:22:18 np0005558241 NetworkManager[50376]: <info>  [1765614138.4267] manager: (tap33293def-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/95)
Dec 13 03:22:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:18.430 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:29:df 10.100.0.4'], port_security=['fa:16:3e:3a:29:df 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2e309dc2-3cab-4ecf-8be7-eab85790a0da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=cea82a7d-e92d-4ac6-ba47-854ec9905fd2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:22:18 np0005558241 NetworkManager[50376]: <info>  [1765614138.4431] manager: (tapbd554014-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Dec 13 03:22:18 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:18Z|00209|binding|INFO|Setting lport cea82a7d-e92d-4ac6-ba47-854ec9905fd2 ovn-installed in OVS
Dec 13 03:22:18 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:18Z|00210|binding|INFO|Setting lport cea82a7d-e92d-4ac6-ba47-854ec9905fd2 up in Southbound
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.452 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.462 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.463 248514 INFO nova.virt.libvirt.driver [-] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Instance destroyed successfully.#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.463 248514 DEBUG nova.objects.instance [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lazy-loading 'resources' on Instance uuid f983ed7f-13a4-496d-b8e9-60768d90efe6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.479 248514 DEBUG nova.virt.libvirt.vif [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1051085888',display_name='tempest-ServersTestMultiNic-server-1051085888',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1051085888',id=27,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:22:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-z5mr9ldl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:22:16Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=f983ed7f-13a4-496d-b8e9-60768d90efe6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.480 248514 DEBUG nova.network.os_vif_util [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "address": "fa:16:3e:27:d2:76", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.33", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07d4534-1c", "ovs_interfaceid": "b07d4534-1cb5-41ec-b0c4-3e820159fe8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.480 248514 DEBUG nova.network.os_vif_util [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:d2:76,bridge_name='br-int',has_traffic_filtering=True,id=b07d4534-1cb5-41ec-b0c4-3e820159fe8e,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d4534-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.481 248514 DEBUG os_vif [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:d2:76,bridge_name='br-int',has_traffic_filtering=True,id=b07d4534-1cb5-41ec-b0c4-3e820159fe8e,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d4534-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.482 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.483 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb07d4534-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.486 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.492 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.502 248514 INFO os_vif [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:d2:76,bridge_name='br-int',has_traffic_filtering=True,id=b07d4534-1cb5-41ec-b0c4-3e820159fe8e,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07d4534-1c')#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.504 248514 DEBUG nova.virt.libvirt.vif [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1051085888',display_name='tempest-ServersTestMultiNic-server-1051085888',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1051085888',id=27,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:22:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-z5mr9ldl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:22:16Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=f983ed7f-13a4-496d-b8e9-60768d90efe6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.504 248514 DEBUG nova.network.os_vif_util [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "33293def-d398-4fee-865f-a61997489b67", "address": "fa:16:3e:ce:8c:81", "network": {"id": "527d2c60-2d6f-4195-aeaa-9dd99258fb5b", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1320195165", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.74", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33293def-d3", "ovs_interfaceid": "33293def-d398-4fee-865f-a61997489b67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.505 248514 DEBUG nova.network.os_vif_util [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:8c:81,bridge_name='br-int',has_traffic_filtering=True,id=33293def-d398-4fee-865f-a61997489b67,network=Network(527d2c60-2d6f-4195-aeaa-9dd99258fb5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33293def-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.505 248514 DEBUG os_vif [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:8c:81,bridge_name='br-int',has_traffic_filtering=True,id=33293def-d398-4fee-865f-a61997489b67,network=Network(527d2c60-2d6f-4195-aeaa-9dd99258fb5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33293def-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.507 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.507 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33293def-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.509 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.511 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.513 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.516 248514 INFO os_vif [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:8c:81,bridge_name='br-int',has_traffic_filtering=True,id=33293def-d398-4fee-865f-a61997489b67,network=Network(527d2c60-2d6f-4195-aeaa-9dd99258fb5b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33293def-d3')#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.517 248514 DEBUG nova.virt.libvirt.vif [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1051085888',display_name='tempest-ServersTestMultiNic-server-1051085888',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1051085888',id=27,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:22:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-z5mr9ldl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:22:16Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=f983ed7f-13a4-496d-b8e9-60768d90efe6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.518 248514 DEBUG nova.network.os_vif_util [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "address": "fa:16:3e:ad:27:90", "network": {"id": "647203e6-db87-411a-8603-ed4b91cb4212", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1505021598", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.211", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd554014-5c", "ovs_interfaceid": "bd554014-5cc7-4f34-b4a0-03ae7cc1f530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.520 248514 DEBUG nova.network.os_vif_util [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:27:90,bridge_name='br-int',has_traffic_filtering=True,id=bd554014-5cc7-4f34-b4a0-03ae7cc1f530,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd554014-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.520 248514 DEBUG os_vif [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:27:90,bridge_name='br-int',has_traffic_filtering=True,id=bd554014-5cc7-4f34-b4a0-03ae7cc1f530,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd554014-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.526 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.527 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd554014-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.532 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.536 248514 INFO os_vif [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:27:90,bridge_name='br-int',has_traffic_filtering=True,id=bd554014-5cc7-4f34-b4a0-03ae7cc1f530,network=Network(647203e6-db87-411a-8603-ed4b91cb4212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd554014-5c')#033[00m
Dec 13 03:22:18 np0005558241 neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b[281599]: [NOTICE]   (281603) : haproxy version is 2.8.14-c23fe91
Dec 13 03:22:18 np0005558241 neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b[281599]: [NOTICE]   (281603) : path to executable is /usr/sbin/haproxy
Dec 13 03:22:18 np0005558241 neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b[281599]: [WARNING]  (281603) : Exiting Master process...
Dec 13 03:22:18 np0005558241 neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b[281599]: [ALERT]    (281603) : Current worker (281605) exited with code 143 (Terminated)
Dec 13 03:22:18 np0005558241 neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b[281599]: [WARNING]  (281603) : All workers exited. Exiting... (0)
Dec 13 03:22:18 np0005558241 systemd[1]: libpod-854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a.scope: Deactivated successfully.
Dec 13 03:22:18 np0005558241 podman[281824]: 2025-12-13 08:22:18.571259521 +0000 UTC m=+0.070627761 container died 854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.613 248514 DEBUG nova.virt.libvirt.driver [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.613 248514 DEBUG nova.virt.libvirt.driver [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.613 248514 DEBUG nova.virt.libvirt.driver [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:54:c0:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.613 248514 DEBUG nova.virt.libvirt.driver [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:96:b5:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.614 248514 DEBUG nova.virt.libvirt.driver [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:3a:29:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.647 248514 DEBUG nova.virt.libvirt.guest [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:22:18 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1637815753</nova:name>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:22:18</nova:creationTime>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:22:18 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:    <nova:port uuid="10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4">
Dec 13 03:22:18 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:    <nova:port uuid="9516b135-3bb5-4da4-942f-d044cad93bd4">
Dec 13 03:22:18 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:    <nova:port uuid="cea82a7d-e92d-4ac6-ba47-854ec9905fd2">
Dec 13 03:22:18 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:18 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:22:18 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:22:18 np0005558241 nova_compute[248510]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.676 248514 DEBUG oslo_concurrency.lockutils [None req-7de79271-58d4-4740-8a49-5351909bb3d3 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:18 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a-userdata-shm.mount: Deactivated successfully.
Dec 13 03:22:18 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c55d01b3ee3deca83f683b6cfde9ad826f72c632d13fcddd102d1648a05c1801-merged.mount: Deactivated successfully.
Dec 13 03:22:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:22:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/890616477' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:22:18 np0005558241 podman[281824]: 2025-12-13 08:22:18.948532944 +0000 UTC m=+0.447901184 container cleanup 854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:22:18 np0005558241 systemd[1]: libpod-conmon-854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a.scope: Deactivated successfully.
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.972 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.876s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:18 np0005558241 nova_compute[248510]: 2025-12-13 08:22:18.974 248514 DEBUG nova.objects.instance [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c17e7b7-7062-48d2-a30f-b387929244d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:22:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1704: 321 pgs: 321 active+clean; 548 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Dec 13 03:22:19 np0005558241 nova_compute[248510]: 2025-12-13 08:22:19.033 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:22:19 np0005558241 nova_compute[248510]:  <uuid>1c17e7b7-7062-48d2-a30f-b387929244d9</uuid>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:  <name>instance-0000001c</name>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <nova:name>tempest-TenantUsagesTestJSON-server-1085101637</nova:name>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:22:17</nova:creationTime>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:22:19 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:        <nova:user uuid="e022e18c9c6c4da890d5bdf86cffc2a6">tempest-TenantUsagesTestJSON-1897461749-project-member</nova:user>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:        <nova:project uuid="f9c57acfa75d48db92e25886eccd2ee1">tempest-TenantUsagesTestJSON-1897461749</nova:project>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <entry name="serial">1c17e7b7-7062-48d2-a30f-b387929244d9</entry>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <entry name="uuid">1c17e7b7-7062-48d2-a30f-b387929244d9</entry>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/1c17e7b7-7062-48d2-a30f-b387929244d9_disk">
Dec 13 03:22:19 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:22:19 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/1c17e7b7-7062-48d2-a30f-b387929244d9_disk.config">
Dec 13 03:22:19 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:22:19 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9/console.log" append="off"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:22:19 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:22:19 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:22:19 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:22:19 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:22:19 np0005558241 nova_compute[248510]: 2025-12-13 08:22:19.282 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:22:19 np0005558241 nova_compute[248510]: 2025-12-13 08:22:19.282 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:22:19 np0005558241 nova_compute[248510]: 2025-12-13 08:22:19.283 248514 INFO nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Using config drive#033[00m
Dec 13 03:22:19 np0005558241 nova_compute[248510]: 2025-12-13 08:22:19.304 248514 DEBUG nova.storage.rbd_utils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] rbd image 1c17e7b7-7062-48d2-a30f-b387929244d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:19 np0005558241 podman[281887]: 2025-12-13 08:22:19.422912278 +0000 UTC m=+0.452577768 container remove 854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:22:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.430 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[32ef09a6-5d8d-44e0-94b3-ac14f92a076f]: (4, ('Sat Dec 13 08:22:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b (854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a)\n854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a\nSat Dec 13 08:22:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b (854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a)\n854736ca3fe099feba3f6646c4f32f1c949771ad48ea7769ece95d32730b964a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.432 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[04109abb-5fbc-4146-b474-6eb0a47380a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.433 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527d2c60-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:19 np0005558241 kernel: tap527d2c60-20: left promiscuous mode
Dec 13 03:22:19 np0005558241 nova_compute[248510]: 2025-12-13 08:22:19.436 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:19 np0005558241 nova_compute[248510]: 2025-12-13 08:22:19.453 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.457 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ce780c2-8b0f-4d11-a3e4-e23309d56c3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.474 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ea017096-1880-4721-91d7-9c474455f648]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.476 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5752487c-e1ba-4a78-bd60-8868d4e8154d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.491 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc5f633-9ebc-450e-9741-090e5c4e0bf9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671195, 'reachable_time': 35319, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281921, 'error': None, 'target': 'ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.494 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-527d2c60-2d6f-4195-aeaa-9dd99258fb5b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:22:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.494 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[8bfce3cc-7b40-4643-86f8-0cbccec1bd92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.495 158419 INFO neutron.agent.ovn.metadata.agent [-] Port bd554014-5cc7-4f34-b4a0-03ae7cc1f530 in datapath 647203e6-db87-411a-8603-ed4b91cb4212 unbound from our chassis#033[00m
Dec 13 03:22:19 np0005558241 systemd[1]: run-netns-ovnmeta\x2d527d2c60\x2d2d6f\x2d4195\x2daeaa\x2d9dd99258fb5b.mount: Deactivated successfully.
Dec 13 03:22:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.497 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 647203e6-db87-411a-8603-ed4b91cb4212, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:22:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.497 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[34f19bd8-5e88-4d29-a20b-d93ac89df789]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:19.498 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212 namespace which is not needed anymore#033[00m
Dec 13 03:22:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Dec 13 03:22:19 np0005558241 nova_compute[248510]: 2025-12-13 08:22:19.670 248514 INFO nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Creating config drive at /var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9/disk.config#033[00m
Dec 13 03:22:19 np0005558241 nova_compute[248510]: 2025-12-13 08:22:19.676 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjfn0rfvg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:22:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.2 total, 600.0 interval#012Cumulative writes: 11K writes, 49K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 3291 syncs, 3.64 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5030 writes, 21K keys, 5030 commit groups, 1.0 writes per commit group, ingest: 23.45 MB, 0.04 MB/s#012Interval WAL: 5030 writes, 1906 syncs, 2.64 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 03:22:19 np0005558241 neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212[281409]: [NOTICE]   (281432) : haproxy version is 2.8.14-c23fe91
Dec 13 03:22:19 np0005558241 neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212[281409]: [NOTICE]   (281432) : path to executable is /usr/sbin/haproxy
Dec 13 03:22:19 np0005558241 neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212[281409]: [WARNING]  (281432) : Exiting Master process...
Dec 13 03:22:19 np0005558241 neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212[281409]: [WARNING]  (281432) : Exiting Master process...
Dec 13 03:22:19 np0005558241 neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212[281409]: [ALERT]    (281432) : Current worker (281434) exited with code 143 (Terminated)
Dec 13 03:22:19 np0005558241 neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212[281409]: [WARNING]  (281432) : All workers exited. Exiting... (0)
Dec 13 03:22:19 np0005558241 systemd[1]: libpod-0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56.scope: Deactivated successfully.
Dec 13 03:22:19 np0005558241 podman[281939]: 2025-12-13 08:22:19.715608877 +0000 UTC m=+0.126558768 container died 0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 03:22:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Dec 13 03:22:19 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Dec 13 03:22:19 np0005558241 nova_compute[248510]: 2025-12-13 08:22:19.814 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjfn0rfvg" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:22:19 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56-userdata-shm.mount: Deactivated successfully.
Dec 13 03:22:19 np0005558241 systemd[1]: var-lib-containers-storage-overlay-265fb3e3eb8d96b0e0a6d35b9b283058e328a9688b28ba3696775b5ebfa0aa7f-merged.mount: Deactivated successfully.
Dec 13 03:22:20 np0005558241 nova_compute[248510]: 2025-12-13 08:22:20.002 248514 DEBUG nova.storage.rbd_utils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] rbd image 1c17e7b7-7062-48d2-a30f-b387929244d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:20 np0005558241 podman[281939]: 2025-12-13 08:22:20.003158899 +0000 UTC m=+0.414108760 container cleanup 0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 03:22:20 np0005558241 nova_compute[248510]: 2025-12-13 08:22:20.006 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9/disk.config 1c17e7b7-7062-48d2-a30f-b387929244d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:20 np0005558241 systemd[1]: libpod-conmon-0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56.scope: Deactivated successfully.
Dec 13 03:22:20 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:20Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3a:29:df 10.100.0.4
Dec 13 03:22:20 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:20Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:29:df 10.100.0.4
Dec 13 03:22:20 np0005558241 nova_compute[248510]: 2025-12-13 08:22:20.242 248514 DEBUG nova.compute.manager [req-eea39376-436c-421e-b12d-8a6c57cff8db req-1f7bd137-7870-4cb9-8bc8-a164c84f44e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-unplugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:20 np0005558241 nova_compute[248510]: 2025-12-13 08:22:20.243 248514 DEBUG oslo_concurrency.lockutils [req-eea39376-436c-421e-b12d-8a6c57cff8db req-1f7bd137-7870-4cb9-8bc8-a164c84f44e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:20 np0005558241 nova_compute[248510]: 2025-12-13 08:22:20.243 248514 DEBUG oslo_concurrency.lockutils [req-eea39376-436c-421e-b12d-8a6c57cff8db req-1f7bd137-7870-4cb9-8bc8-a164c84f44e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:20 np0005558241 nova_compute[248510]: 2025-12-13 08:22:20.243 248514 DEBUG oslo_concurrency.lockutils [req-eea39376-436c-421e-b12d-8a6c57cff8db req-1f7bd137-7870-4cb9-8bc8-a164c84f44e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:20 np0005558241 nova_compute[248510]: 2025-12-13 08:22:20.243 248514 DEBUG nova.compute.manager [req-eea39376-436c-421e-b12d-8a6c57cff8db req-1f7bd137-7870-4cb9-8bc8-a164c84f44e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] No waiting events found dispatching network-vif-unplugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:22:20 np0005558241 nova_compute[248510]: 2025-12-13 08:22:20.243 248514 DEBUG nova.compute.manager [req-eea39376-436c-421e-b12d-8a6c57cff8db req-1f7bd137-7870-4cb9-8bc8-a164c84f44e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-unplugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:22:20 np0005558241 nova_compute[248510]: 2025-12-13 08:22:20.270 248514 DEBUG nova.network.neutron [req-6518c3f6-3ad2-420f-9590-77bca41b51c1 req-473fab8a-a322-45e4-aa8f-a172af210f5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updated VIF entry in instance network info cache for port cea82a7d-e92d-4ac6-ba47-854ec9905fd2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:22:20 np0005558241 nova_compute[248510]: 2025-12-13 08:22:20.270 248514 DEBUG nova.network.neutron [req-6518c3f6-3ad2-420f-9590-77bca41b51c1 req-473fab8a-a322-45e4-aa8f-a172af210f5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:20 np0005558241 nova_compute[248510]: 2025-12-13 08:22:20.300 248514 DEBUG oslo_concurrency.lockutils [req-6518c3f6-3ad2-420f-9590-77bca41b51c1 req-473fab8a-a322-45e4-aa8f-a172af210f5f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029750218993480873 of space, bias 1.0, pg target 0.8925065698044262 quantized to 32 (current 32)
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0021996237752399867 of space, bias 1.0, pg target 0.659887132571996 quantized to 32 (current 32)
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.001002730076268e-07 of space, bias 4.0, pg target 0.0009601203276091521 quantized to 16 (current 32)
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:22:20 np0005558241 podman[281991]: 2025-12-13 08:22:20.860044484 +0000 UTC m=+0.834882424 container remove 0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 03:22:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.871 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[99fdbcd2-8918-4e4e-92a6-7e36f877b4dc]: (4, ('Sat Dec 13 08:22:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212 (0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56)\n0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56\nSat Dec 13 08:22:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212 (0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56)\n0bce2e1b3e8ec00fce3d22e066d210f483ac97c6dfb03152446bedcc88289e56\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.874 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[503e9c9b-69e4-4ab8-adc7-f76cebf9cba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.877 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap647203e6-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:20 np0005558241 nova_compute[248510]: 2025-12-13 08:22:20.881 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:20 np0005558241 kernel: tap647203e6-d0: left promiscuous mode
Dec 13 03:22:20 np0005558241 nova_compute[248510]: 2025-12-13 08:22:20.899 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.902 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d25c96c8-3276-4de5-a8d1-cd931dc5af98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.919 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4494864e-0092-4b53-bab7-79e39767733f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.921 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[87ad7547-7437-425a-951d-72c85dba274c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:20 np0005558241 nova_compute[248510]: 2025-12-13 08:22:20.936 248514 DEBUG oslo_concurrency.processutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9/disk.config 1c17e7b7-7062-48d2-a30f-b387929244d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.929s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:20 np0005558241 nova_compute[248510]: 2025-12-13 08:22:20.937 248514 INFO nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Deleting local config drive /var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9/disk.config because it was imported into RBD.#033[00m
Dec 13 03:22:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.940 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fe096d85-41db-4fee-aa9d-14bcbb7d9823]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671034, 'reachable_time': 21712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282027, 'error': None, 'target': 'ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.943 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-647203e6-db87-411a-8603-ed4b91cb4212 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:22:20 np0005558241 systemd[1]: run-netns-ovnmeta\x2d647203e6\x2ddb87\x2d411a\x2d8603\x2ded4b91cb4212.mount: Deactivated successfully.
Dec 13 03:22:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.943 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b331b650-ceb7-43d3-82d4-003dc266f284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.946 158419 INFO neutron.agent.ovn.metadata.agent [-] Port cea82a7d-e92d-4ac6-ba47-854ec9905fd2 in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis#033[00m
Dec 13 03:22:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.947 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92#033[00m
Dec 13 03:22:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:20.969 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[af9f7781-5dae-4733-a268-d2b2e26ff90c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1706: 321 pgs: 321 active+clean; 548 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 132 op/s
Dec 13 03:22:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.006 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[95b5758a-2e5d-432a-90ea-4449f61a4152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.011 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6fa750-070c-4437-ad1f-687eb94f5a54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:21 np0005558241 systemd-machined[210538]: New machine qemu-33-instance-0000001c.
Dec 13 03:22:21 np0005558241 systemd[1]: Started Virtual Machine qemu-33-instance-0000001c.
Dec 13 03:22:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.042 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[33102e56-63a3-43f2-ba18-8c2fa8fc2803]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.067 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f27f64c3-371b-4a1e-92ee-f6f2665ab444]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665427, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282043, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:21 np0005558241 nova_compute[248510]: 2025-12-13 08:22:21.087 248514 INFO nova.virt.libvirt.driver [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Deleting instance files /var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6_del#033[00m
Dec 13 03:22:21 np0005558241 nova_compute[248510]: 2025-12-13 08:22:21.088 248514 INFO nova.virt.libvirt.driver [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Deletion of /var/lib/nova/instances/f983ed7f-13a4-496d-b8e9-60768d90efe6_del complete#033[00m
Dec 13 03:22:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.093 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b29fee50-25cb-4ae2-9cb5-f5a11c1c1795]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665441, 'tstamp': 665441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282045, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665444, 'tstamp': 665444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282045, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.095 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:21 np0005558241 nova_compute[248510]: 2025-12-13 08:22:21.097 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:21 np0005558241 nova_compute[248510]: 2025-12-13 08:22:21.098 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.099 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.099 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.099 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:21.099 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:21 np0005558241 nova_compute[248510]: 2025-12-13 08:22:21.177 248514 INFO nova.compute.manager [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Took 3.19 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:22:21 np0005558241 nova_compute[248510]: 2025-12-13 08:22:21.178 248514 DEBUG oslo.service.loopingcall [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:22:21 np0005558241 nova_compute[248510]: 2025-12-13 08:22:21.178 248514 DEBUG nova.compute.manager [-] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:22:21 np0005558241 nova_compute[248510]: 2025-12-13 08:22:21.178 248514 DEBUG nova.network.neutron [-] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.349 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614142.349334, 1c17e7b7-7062-48d2-a30f-b387929244d9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.350 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.353 248514 DEBUG nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.354 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.358 248514 INFO nova.virt.libvirt.driver [-] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Instance spawned successfully.#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.359 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.398 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.406 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.410 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.410 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.411 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.411 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.412 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.413 248514 DEBUG nova.virt.libvirt.driver [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.450 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.451 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614142.3525267, 1c17e7b7-7062-48d2-a30f-b387929244d9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.451 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] VM Started (Lifecycle Event)#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.483 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.488 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.496 248514 INFO nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Took 7.50 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.496 248514 DEBUG nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.524 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.563 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-plugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.564 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.564 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.564 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.564 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] No waiting events found dispatching network-vif-plugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.565 248514 WARNING nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received unexpected event network-vif-plugged-b07d4534-1cb5-41ec-b0c4-3e820159fe8e for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.565 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-unplugged-33293def-d398-4fee-865f-a61997489b67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.565 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.566 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.566 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.566 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] No waiting events found dispatching network-vif-unplugged-33293def-d398-4fee-865f-a61997489b67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.566 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-unplugged-33293def-d398-4fee-865f-a61997489b67 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.567 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-plugged-33293def-d398-4fee-865f-a61997489b67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.567 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.567 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.567 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.568 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] No waiting events found dispatching network-vif-plugged-33293def-d398-4fee-865f-a61997489b67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.568 248514 WARNING nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received unexpected event network-vif-plugged-33293def-d398-4fee-865f-a61997489b67 for instance with vm_state active and task_state deleting.
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.568 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-unplugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.568 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.569 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.569 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.569 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] No waiting events found dispatching network-vif-unplugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.569 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-unplugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.569 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-plugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.570 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.570 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.570 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.570 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] No waiting events found dispatching network-vif-plugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.571 248514 WARNING nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received unexpected event network-vif-plugged-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 for instance with vm_state active and task_state deleting.
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.571 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.571 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.571 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.571 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.572 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.572 248514 WARNING nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 for instance with vm_state active and task_state None.
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.572 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.572 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.573 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.573 248514 DEBUG oslo_concurrency.lockutils [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.573 248514 DEBUG nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.573 248514 WARNING nova.compute.manager [req-9d4418df-b563-49c6-9fc4-0780588d5027 req-70cd06ce-b193-4d7e-b0a1-4178652ba236 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 for instance with vm_state active and task_state None.
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.576 248514 INFO nova.compute.manager [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Took 9.05 seconds to build instance.
Dec 13 03:22:22 np0005558241 nova_compute[248510]: 2025-12-13 08:22:22.602 248514 DEBUG oslo_concurrency.lockutils [None req-bd5ea646-afb4-4ad8-a0a1-6626e130692f e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "1c17e7b7-7062-48d2-a30f-b387929244d9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:22:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1707: 321 pgs: 321 active+clean; 492 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 155 op/s
Dec 13 03:22:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Dec 13 03:22:23 np0005558241 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 03:22:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Dec 13 03:22:23 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Dec 13 03:22:23 np0005558241 nova_compute[248510]: 2025-12-13 08:22:23.529 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:22:24 np0005558241 nova_compute[248510]: 2025-12-13 08:22:24.272 248514 DEBUG oslo_concurrency.lockutils [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-3861ef01-74c8-4321-b36e-79090caaf6dc" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:22:24 np0005558241 nova_compute[248510]: 2025-12-13 08:22:24.273 248514 DEBUG oslo_concurrency.lockutils [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-3861ef01-74c8-4321-b36e-79090caaf6dc" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:22:24 np0005558241 nova_compute[248510]: 2025-12-13 08:22:24.273 248514 DEBUG nova.objects.instance [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'flavor' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:22:24 np0005558241 nova_compute[248510]: 2025-12-13 08:22:24.494 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:22:24 np0005558241 nova_compute[248510]: 2025-12-13 08:22:24.833 248514 DEBUG nova.network.neutron [-] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:22:24 np0005558241 nova_compute[248510]: 2025-12-13 08:22:24.856 248514 INFO nova.compute.manager [-] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Took 3.68 seconds to deallocate network for instance.
Dec 13 03:22:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:22:24 np0005558241 nova_compute[248510]: 2025-12-13 08:22:24.904 248514 DEBUG oslo_concurrency.lockutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:22:24 np0005558241 nova_compute[248510]: 2025-12-13 08:22:24.905 248514 DEBUG oslo_concurrency.lockutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:22:24 np0005558241 nova_compute[248510]: 2025-12-13 08:22:24.918 248514 DEBUG nova.objects.instance [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'pci_requests' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:22:24 np0005558241 nova_compute[248510]: 2025-12-13 08:22:24.934 248514 DEBUG nova.network.neutron [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:22:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1709: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 423 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 2.7 MiB/s wr, 310 op/s
Dec 13 03:22:25 np0005558241 nova_compute[248510]: 2025-12-13 08:22:25.035 248514 DEBUG oslo_concurrency.processutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:22:25 np0005558241 nova_compute[248510]: 2025-12-13 08:22:25.314 248514 DEBUG nova.policy [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee4333dff699428ca3ae5201855ab430', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea37c93820f34312bc386ea952ecd94f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:22:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:22:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/321899503' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:22:25 np0005558241 nova_compute[248510]: 2025-12-13 08:22:25.875 248514 DEBUG oslo_concurrency.processutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.840s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:22:25 np0005558241 nova_compute[248510]: 2025-12-13 08:22:25.884 248514 DEBUG nova.compute.provider_tree [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:22:25 np0005558241 nova_compute[248510]: 2025-12-13 08:22:25.903 248514 DEBUG nova.scheduler.client.report [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:22:25 np0005558241 nova_compute[248510]: 2025-12-13 08:22:25.938 248514 DEBUG oslo_concurrency.lockutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:22:25 np0005558241 nova_compute[248510]: 2025-12-13 08:22:25.954 248514 DEBUG nova.network.neutron [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Successfully updated port: 3861ef01-74c8-4321-b36e-79090caaf6dc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 03:22:25 np0005558241 nova_compute[248510]: 2025-12-13 08:22:25.981 248514 DEBUG oslo_concurrency.lockutils [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:22:25 np0005558241 nova_compute[248510]: 2025-12-13 08:22:25.982 248514 DEBUG oslo_concurrency.lockutils [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:22:25 np0005558241 nova_compute[248510]: 2025-12-13 08:22:25.982 248514 DEBUG nova.network.neutron [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:22:25 np0005558241 nova_compute[248510]: 2025-12-13 08:22:25.984 248514 INFO nova.scheduler.client.report [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Deleted allocations for instance f983ed7f-13a4-496d-b8e9-60768d90efe6
Dec 13 03:22:26 np0005558241 nova_compute[248510]: 2025-12-13 08:22:26.064 248514 DEBUG oslo_concurrency.lockutils [None req-64c992bd-d828-40ff-9b7c-9811ba506c0f e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "f983ed7f-13a4-496d-b8e9-60768d90efe6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:22:26 np0005558241 nova_compute[248510]: 2025-12-13 08:22:26.192 248514 WARNING nova.network.neutron [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] 1ca92864-3b70-4794-9db1-fa08128cef92 already exists in list: networks containing: ['1ca92864-3b70-4794-9db1-fa08128cef92']. ignoring it
Dec 13 03:22:26 np0005558241 nova_compute[248510]: 2025-12-13 08:22:26.193 248514 WARNING nova.network.neutron [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] 1ca92864-3b70-4794-9db1-fa08128cef92 already exists in list: networks containing: ['1ca92864-3b70-4794-9db1-fa08128cef92']. ignoring it
Dec 13 03:22:26 np0005558241 nova_compute[248510]: 2025-12-13 08:22:26.193 248514 WARNING nova.network.neutron [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] 1ca92864-3b70-4794-9db1-fa08128cef92 already exists in list: networks containing: ['1ca92864-3b70-4794-9db1-fa08128cef92']. ignoring it
Dec 13 03:22:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1710: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 423 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 22 KiB/s wr, 157 op/s
Dec 13 03:22:27 np0005558241 nova_compute[248510]: 2025-12-13 08:22:27.319 248514 DEBUG nova.compute.manager [req-7ba32e55-f01d-4520-9484-bc109c1281a6 req-ebc34e57-6bb1-44d3-8d0f-0a2098cf7898 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-deleted-bd554014-5cc7-4f34-b4a0-03ae7cc1f530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:22:27 np0005558241 nova_compute[248510]: 2025-12-13 08:22:27.319 248514 DEBUG nova.compute.manager [req-7ba32e55-f01d-4520-9484-bc109c1281a6 req-ebc34e57-6bb1-44d3-8d0f-0a2098cf7898 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-deleted-b07d4534-1cb5-41ec-b0c4-3e820159fe8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:22:27 np0005558241 nova_compute[248510]: 2025-12-13 08:22:27.422 248514 DEBUG nova.compute.manager [req-8265b512-55d0-4616-9015-498434566fd7 req-ac87b3bb-913e-489a-b88c-fd0accb7eef4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-changed-3861ef01-74c8-4321-b36e-79090caaf6dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:22:27 np0005558241 nova_compute[248510]: 2025-12-13 08:22:27.422 248514 DEBUG nova.compute.manager [req-8265b512-55d0-4616-9015-498434566fd7 req-ac87b3bb-913e-489a-b88c-fd0accb7eef4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Refreshing instance network info cache due to event network-changed-3861ef01-74c8-4321-b36e-79090caaf6dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 03:22:27 np0005558241 nova_compute[248510]: 2025-12-13 08:22:27.422 248514 DEBUG oslo_concurrency.lockutils [req-8265b512-55d0-4616-9015-498434566fd7 req-ac87b3bb-913e-489a-b88c-fd0accb7eef4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:22:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Dec 13 03:22:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Dec 13 03:22:28 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Dec 13 03:22:28 np0005558241 nova_compute[248510]: 2025-12-13 08:22:28.532 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1712: 321 pgs: 321 active+clean; 372 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 29 KiB/s wr, 224 op/s
Dec 13 03:22:29 np0005558241 nova_compute[248510]: 2025-12-13 08:22:29.497 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:22:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Dec 13 03:22:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:29Z|00211|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 03:22:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Dec 13 03:22:30 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Dec 13 03:22:30 np0005558241 nova_compute[248510]: 2025-12-13 08:22:30.050 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:30 np0005558241 nova_compute[248510]: 2025-12-13 08:22:30.906 248514 DEBUG nova.compute.manager [req-3377f1a2-c6fb-43d5-af2b-dfc72f571f4b req-2bc06b3e-f419-4eb6-91db-a32283eff8d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Received event network-vif-deleted-33293def-d398-4fee-865f-a61997489b67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1714: 321 pgs: 321 active+clean; 372 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 10 KiB/s wr, 207 op/s
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.079 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "9b6188af-75f0-4213-89c2-bd3eb72960b7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.080 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "9b6188af-75f0-4213-89c2-bd3eb72960b7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.080 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "9b6188af-75f0-4213-89c2-bd3eb72960b7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.081 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "9b6188af-75f0-4213-89c2-bd3eb72960b7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.081 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "9b6188af-75f0-4213-89c2-bd3eb72960b7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.082 248514 INFO nova.compute.manager [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Terminating instance#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.083 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "refresh_cache-9b6188af-75f0-4213-89c2-bd3eb72960b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.083 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquired lock "refresh_cache-9b6188af-75f0-4213-89c2-bd3eb72960b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.083 248514 DEBUG nova.network.neutron [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.305 248514 DEBUG nova.network.neutron [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.673 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Acquiring lock "1c17e7b7-7062-48d2-a30f-b387929244d9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.674 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "1c17e7b7-7062-48d2-a30f-b387929244d9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.674 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Acquiring lock "1c17e7b7-7062-48d2-a30f-b387929244d9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.675 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "1c17e7b7-7062-48d2-a30f-b387929244d9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.675 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "1c17e7b7-7062-48d2-a30f-b387929244d9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.676 248514 INFO nova.compute.manager [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Terminating instance#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.677 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Acquiring lock "refresh_cache-1c17e7b7-7062-48d2-a30f-b387929244d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.677 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Acquired lock "refresh_cache-1c17e7b7-7062-48d2-a30f-b387929244d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.678 248514 DEBUG nova.network.neutron [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.816 248514 DEBUG nova.network.neutron [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.835 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Releasing lock "refresh_cache-9b6188af-75f0-4213-89c2-bd3eb72960b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.836 248514 DEBUG nova.compute.manager [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:22:31 np0005558241 nova_compute[248510]: 2025-12-13 08:22:31.859 248514 DEBUG nova.network.neutron [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:22:32 np0005558241 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Dec 13 03:22:32 np0005558241 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001a.scope: Consumed 14.604s CPU time.
Dec 13 03:22:32 np0005558241 systemd-machined[210538]: Machine qemu-31-instance-0000001a terminated.
Dec 13 03:22:32 np0005558241 nova_compute[248510]: 2025-12-13 08:22:32.064 248514 INFO nova.virt.libvirt.driver [-] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Instance destroyed successfully.#033[00m
Dec 13 03:22:32 np0005558241 nova_compute[248510]: 2025-12-13 08:22:32.064 248514 DEBUG nova.objects.instance [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lazy-loading 'resources' on Instance uuid 9b6188af-75f0-4213-89c2-bd3eb72960b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:22:32 np0005558241 nova_compute[248510]: 2025-12-13 08:22:32.322 248514 DEBUG nova.network.neutron [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:32 np0005558241 nova_compute[248510]: 2025-12-13 08:22:32.334 248514 INFO nova.virt.libvirt.driver [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Deleting instance files /var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7_del#033[00m
Dec 13 03:22:32 np0005558241 nova_compute[248510]: 2025-12-13 08:22:32.335 248514 INFO nova.virt.libvirt.driver [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Deletion of /var/lib/nova/instances/9b6188af-75f0-4213-89c2-bd3eb72960b7_del complete#033[00m
Dec 13 03:22:32 np0005558241 nova_compute[248510]: 2025-12-13 08:22:32.347 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Releasing lock "refresh_cache-1c17e7b7-7062-48d2-a30f-b387929244d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:32 np0005558241 nova_compute[248510]: 2025-12-13 08:22:32.347 248514 DEBUG nova.compute.manager [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:22:32 np0005558241 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Dec 13 03:22:32 np0005558241 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001c.scope: Consumed 10.831s CPU time.
Dec 13 03:22:32 np0005558241 systemd-machined[210538]: Machine qemu-33-instance-0000001c terminated.
Dec 13 03:22:32 np0005558241 nova_compute[248510]: 2025-12-13 08:22:32.411 248514 INFO nova.compute.manager [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:22:32 np0005558241 nova_compute[248510]: 2025-12-13 08:22:32.411 248514 DEBUG oslo.service.loopingcall [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:22:32 np0005558241 nova_compute[248510]: 2025-12-13 08:22:32.411 248514 DEBUG nova.compute.manager [-] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:22:32 np0005558241 nova_compute[248510]: 2025-12-13 08:22:32.412 248514 DEBUG nova.network.neutron [-] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:22:32 np0005558241 nova_compute[248510]: 2025-12-13 08:22:32.572 248514 INFO nova.virt.libvirt.driver [-] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Instance destroyed successfully.#033[00m
Dec 13 03:22:32 np0005558241 nova_compute[248510]: 2025-12-13 08:22:32.572 248514 DEBUG nova.objects.instance [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lazy-loading 'resources' on Instance uuid 1c17e7b7-7062-48d2-a30f-b387929244d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:22:32 np0005558241 nova_compute[248510]: 2025-12-13 08:22:32.840 248514 INFO nova.virt.libvirt.driver [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Deleting instance files /var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9_del#033[00m
Dec 13 03:22:32 np0005558241 nova_compute[248510]: 2025-12-13 08:22:32.841 248514 INFO nova.virt.libvirt.driver [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Deletion of /var/lib/nova/instances/1c17e7b7-7062-48d2-a30f-b387929244d9_del complete#033[00m
Dec 13 03:22:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1715: 321 pgs: 321 active+clean; 324 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 829 KiB/s rd, 15 KiB/s wr, 109 op/s
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.294 248514 DEBUG nova.network.neutron [-] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.302 248514 INFO nova.compute.manager [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Took 0.95 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.302 248514 DEBUG oslo.service.loopingcall [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.303 248514 DEBUG nova.compute.manager [-] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.303 248514 DEBUG nova.network.neutron [-] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.314 248514 DEBUG nova.network.neutron [-] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.335 248514 INFO nova.compute.manager [-] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Took 0.92 seconds to deallocate network for instance.#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.340 248514 DEBUG nova.network.neutron [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.382 248514 DEBUG oslo_concurrency.lockutils [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.384 248514 DEBUG oslo_concurrency.lockutils [req-8265b512-55d0-4616-9015-498434566fd7 req-ac87b3bb-913e-489a-b88c-fd0accb7eef4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.385 248514 DEBUG nova.network.neutron [req-8265b512-55d0-4616-9015-498434566fd7 req-ac87b3bb-913e-489a-b88c-fd0accb7eef4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Refreshing network info cache for port 3861ef01-74c8-4321-b36e-79090caaf6dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.389 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.389 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.391 248514 DEBUG nova.virt.libvirt.vif [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.391 248514 DEBUG nova.network.os_vif_util [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.392 248514 DEBUG nova.network.os_vif_util [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:de:08,bridge_name='br-int',has_traffic_filtering=True,id=3861ef01-74c8-4321-b36e-79090caaf6dc,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3861ef01-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.392 248514 DEBUG os_vif [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:de:08,bridge_name='br-int',has_traffic_filtering=True,id=3861ef01-74c8-4321-b36e-79090caaf6dc,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3861ef01-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.393 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.393 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.394 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.398 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.398 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3861ef01-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.399 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3861ef01-74, col_values=(('external_ids', {'iface-id': '3861ef01-74c8-4321-b36e-79090caaf6dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:de:08', 'vm-uuid': '2e309dc2-3cab-4ecf-8be7-eab85790a0da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:33 np0005558241 NetworkManager[50376]: <info>  [1765614153.4016] manager: (tap3861ef01-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.400 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.411 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.414 248514 INFO os_vif [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:de:08,bridge_name='br-int',has_traffic_filtering=True,id=3861ef01-74c8-4321-b36e-79090caaf6dc,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3861ef01-74')#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.415 248514 DEBUG nova.virt.libvirt.vif [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.415 248514 DEBUG nova.network.os_vif_util [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.416 248514 DEBUG nova.network.os_vif_util [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:de:08,bridge_name='br-int',has_traffic_filtering=True,id=3861ef01-74c8-4321-b36e-79090caaf6dc,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3861ef01-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.418 248514 DEBUG nova.virt.libvirt.guest [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] attach device xml: <interface type="ethernet">
Dec 13 03:22:33 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:82:de:08"/>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:  <target dev="tap3861ef01-74"/>
Dec 13 03:22:33 np0005558241 nova_compute[248510]: </interface>
Dec 13 03:22:33 np0005558241 nova_compute[248510]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec 13 03:22:33 np0005558241 kernel: tap3861ef01-74: entered promiscuous mode
Dec 13 03:22:33 np0005558241 systemd-udevd[282115]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:22:33 np0005558241 NetworkManager[50376]: <info>  [1765614153.4334] manager: (tap3861ef01-74): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Dec 13 03:22:33 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:33Z|00212|binding|INFO|Claiming lport 3861ef01-74c8-4321-b36e-79090caaf6dc for this chassis.
Dec 13 03:22:33 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:33Z|00213|binding|INFO|3861ef01-74c8-4321-b36e-79090caaf6dc: Claiming fa:16:3e:82:de:08 10.100.0.10
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.434 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.444 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:de:08 10.100.0.10'], port_security=['fa:16:3e:82:de:08 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-2007287726', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2e309dc2-3cab-4ecf-8be7-eab85790a0da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-2007287726', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3861ef01-74c8-4321-b36e-79090caaf6dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:22:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.445 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3861ef01-74c8-4321-b36e-79090caaf6dc in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 bound to our chassis#033[00m
Dec 13 03:22:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.447 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92#033[00m
Dec 13 03:22:33 np0005558241 NetworkManager[50376]: <info>  [1765614153.4502] device (tap3861ef01-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:22:33 np0005558241 NetworkManager[50376]: <info>  [1765614153.4513] device (tap3861ef01-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:22:33 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:33Z|00214|binding|INFO|Setting lport 3861ef01-74c8-4321-b36e-79090caaf6dc ovn-installed in OVS
Dec 13 03:22:33 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:33Z|00215|binding|INFO|Setting lport 3861ef01-74c8-4321-b36e-79090caaf6dc up in Southbound
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.457 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.460 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614138.460027, f983ed7f-13a4-496d-b8e9-60768d90efe6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.461 248514 INFO nova.compute.manager [-] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:22:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.467 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[64036a02-ce55-4d75-8480-18d07309e263]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.489 248514 DEBUG nova.compute.manager [None req-9a3ca6fd-57e4-45cf-908a-2f8cbecefcc9 - - - - - -] [instance: f983ed7f-13a4-496d-b8e9-60768d90efe6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:22:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.506 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[029f7f9c-7ecb-43fd-a2c9-ea79cf48678a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.510 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[01d98796-7671-4835-ab77-f0083b560601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.518 248514 DEBUG nova.virt.libvirt.driver [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.519 248514 DEBUG nova.virt.libvirt.driver [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.519 248514 DEBUG nova.virt.libvirt.driver [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:54:c0:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.519 248514 DEBUG nova.virt.libvirt.driver [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:96:b5:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.520 248514 DEBUG nova.virt.libvirt.driver [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:3a:29:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.520 248514 DEBUG nova.virt.libvirt.driver [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:82:de:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.539 248514 DEBUG oslo_concurrency.processutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.545 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8031ff-0586-4e10-9190-84ea7e398e6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.568 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dc044f6d-fceb-4868-a863-2f0dac2de0bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665427, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282172, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.573 248514 DEBUG nova.virt.libvirt.guest [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:22:33 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1637815753</nova:name>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:22:33</nova:creationTime>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:22:33 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:    <nova:port uuid="10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4">
Dec 13 03:22:33 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:    <nova:port uuid="9516b135-3bb5-4da4-942f-d044cad93bd4">
Dec 13 03:22:33 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:    <nova:port uuid="cea82a7d-e92d-4ac6-ba47-854ec9905fd2">
Dec 13 03:22:33 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:    <nova:port uuid="3861ef01-74c8-4321-b36e-79090caaf6dc">
Dec 13 03:22:33 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:33 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:22:33 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:22:33 np0005558241 nova_compute[248510]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec 13 03:22:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.592 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[902eaf18-6d8a-4de4-8d75-a4e3d3514d6b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665441, 'tstamp': 665441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282173, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665444, 'tstamp': 665444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282173, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.594 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.596 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.597 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.598 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.598 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:33.598 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.610 248514 DEBUG oslo_concurrency.lockutils [None req-ac2e9dfb-384a-4e5b-ac44-5ec0e8d5c5e0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-3861ef01-74c8-4321-b36e-79090caaf6dc" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:33 np0005558241 nova_compute[248510]: 2025-12-13 08:22:33.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:22:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:22:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2356560262' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.097 248514 DEBUG oslo_concurrency.processutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.104 248514 DEBUG nova.compute.provider_tree [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.136 248514 DEBUG nova.scheduler.client.report [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.167 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.209 248514 INFO nova.scheduler.client.report [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Deleted allocations for instance 9b6188af-75f0-4213-89c2-bd3eb72960b7#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.226 248514 DEBUG nova.network.neutron [-] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.246 248514 DEBUG nova.network.neutron [-] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.273 248514 INFO nova.compute.manager [-] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Took 0.97 seconds to deallocate network for instance.#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.282 248514 DEBUG oslo_concurrency.lockutils [None req-554c6ba1-11e9-4869-b34a-aec34453c804 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "9b6188af-75f0-4213-89c2-bd3eb72960b7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.339 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.340 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.427 248514 DEBUG oslo_concurrency.processutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.499 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.575 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "07413df5-0bb8-42c2-95ff-13458d598139" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.575 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "07413df5-0bb8-42c2-95ff-13458d598139" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.576 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "07413df5-0bb8-42c2-95ff-13458d598139-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.576 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "07413df5-0bb8-42c2-95ff-13458d598139-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.577 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "07413df5-0bb8-42c2-95ff-13458d598139-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.579 248514 INFO nova.compute.manager [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Terminating instance#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.580 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "refresh_cache-07413df5-0bb8-42c2-95ff-13458d598139" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.580 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquired lock "refresh_cache-07413df5-0bb8-42c2-95ff-13458d598139" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.580 248514 DEBUG nova.network.neutron [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.663 248514 DEBUG nova.compute.manager [req-b65fef49-129b-4b68-8fb2-81160e02db8b req-622ad759-d378-4d74-b046-7210e8d5af37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-3861ef01-74c8-4321-b36e-79090caaf6dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.663 248514 DEBUG oslo_concurrency.lockutils [req-b65fef49-129b-4b68-8fb2-81160e02db8b req-622ad759-d378-4d74-b046-7210e8d5af37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.663 248514 DEBUG oslo_concurrency.lockutils [req-b65fef49-129b-4b68-8fb2-81160e02db8b req-622ad759-d378-4d74-b046-7210e8d5af37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.664 248514 DEBUG oslo_concurrency.lockutils [req-b65fef49-129b-4b68-8fb2-81160e02db8b req-622ad759-d378-4d74-b046-7210e8d5af37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.664 248514 DEBUG nova.compute.manager [req-b65fef49-129b-4b68-8fb2-81160e02db8b req-622ad759-d378-4d74-b046-7210e8d5af37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-3861ef01-74c8-4321-b36e-79090caaf6dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.664 248514 WARNING nova.compute.manager [req-b65fef49-129b-4b68-8fb2-81160e02db8b req-622ad759-d378-4d74-b046-7210e8d5af37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-3861ef01-74c8-4321-b36e-79090caaf6dc for instance with vm_state active and task_state None.#033[00m
Dec 13 03:22:34 np0005558241 nova_compute[248510]: 2025-12-13 08:22:34.752 248514 DEBUG nova.network.neutron [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:22:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:22:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Dec 13 03:22:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Dec 13 03:22:34 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Dec 13 03:22:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:22:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1939969010' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:22:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1717: 321 pgs: 321 active+clean; 200 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 13 KiB/s wr, 141 op/s
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.020 248514 DEBUG oslo_concurrency.processutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.027 248514 DEBUG nova.compute.provider_tree [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.046 248514 DEBUG nova.scheduler.client.report [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:22:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:35Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:82:de:08 10.100.0.10
Dec 13 03:22:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:35Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:82:de:08 10.100.0.10
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.077 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.101 248514 INFO nova.scheduler.client.report [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Deleted allocations for instance 1c17e7b7-7062-48d2-a30f-b387929244d9#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.169 248514 DEBUG oslo_concurrency.lockutils [None req-16933830-4f1d-4584-a795-d6010ecfcf09 e022e18c9c6c4da890d5bdf86cffc2a6 f9c57acfa75d48db92e25886eccd2ee1 - - default default] Lock "1c17e7b7-7062-48d2-a30f-b387929244d9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.279 248514 DEBUG nova.network.neutron [req-8265b512-55d0-4616-9015-498434566fd7 req-ac87b3bb-913e-489a-b88c-fd0accb7eef4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updated VIF entry in instance network info cache for port 3861ef01-74c8-4321-b36e-79090caaf6dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.280 248514 DEBUG nova.network.neutron [req-8265b512-55d0-4616-9015-498434566fd7 req-ac87b3bb-913e-489a-b88c-fd0accb7eef4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.300 248514 DEBUG oslo_concurrency.lockutils [req-8265b512-55d0-4616-9015-498434566fd7 req-ac87b3bb-913e-489a-b88c-fd0accb7eef4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.332 248514 DEBUG nova.network.neutron [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.360 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Releasing lock "refresh_cache-07413df5-0bb8-42c2-95ff-13458d598139" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.361 248514 DEBUG nova.compute.manager [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.373 248514 DEBUG oslo_concurrency.lockutils [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-9516b135-3bb5-4da4-942f-d044cad93bd4" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.374 248514 DEBUG oslo_concurrency.lockutils [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-9516b135-3bb5-4da4-942f-d044cad93bd4" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.400 248514 DEBUG nova.objects.instance [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'flavor' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.423 248514 DEBUG nova.virt.libvirt.vif [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.424 248514 DEBUG nova.network.os_vif_util [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.426 248514 DEBUG nova.network.os_vif_util [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.432 248514 DEBUG nova.virt.libvirt.guest [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.435 248514 DEBUG nova.virt.libvirt.guest [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.440 248514 DEBUG nova.virt.libvirt.driver [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Attempting to detach device tap9516b135-3b from instance 2e309dc2-3cab-4ecf-8be7-eab85790a0da from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.441 248514 DEBUG nova.virt.libvirt.guest [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] detach device xml: <interface type="ethernet">
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:96:b5:6d"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <target dev="tap9516b135-3b"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]: </interface>
Dec 13 03:22:35 np0005558241 nova_compute[248510]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.448 248514 DEBUG nova.virt.libvirt.guest [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:22:35 np0005558241 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000019.scope: Deactivated successfully.
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.453 248514 DEBUG nova.virt.libvirt.guest [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface>not found in domain: <domain type='kvm' id='29'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <name>instance-00000018</name>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <uuid>2e309dc2-3cab-4ecf-8be7-eab85790a0da</uuid>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1637815753</nova:name>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:22:33</nova:creationTime>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:22:35 np0005558241 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000019.scope: Consumed 14.801s CPU time.
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:port uuid="10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4">
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:port uuid="9516b135-3bb5-4da4-942f-d044cad93bd4">
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:port uuid="cea82a7d-e92d-4ac6-ba47-854ec9905fd2">
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:port uuid="3861ef01-74c8-4321-b36e-79090caaf6dc">
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:22:35 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <resource>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </resource>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <entry name='serial'>2e309dc2-3cab-4ecf-8be7-eab85790a0da</entry>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <entry name='uuid'>2e309dc2-3cab-4ecf-8be7-eab85790a0da</entry>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk' index='2'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk.config' index='1'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:54:c0:80'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target dev='tap10aa2df4-a7'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:96:b5:6d'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target dev='tap9516b135-3b'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='net1'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:3a:29:df'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target dev='tapcea82a7d-e9'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='net2'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:82:de:08'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target dev='tap3861ef01-74'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='net3'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <source path='/dev/pts/4'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/console.log' append='off'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      </target>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/4'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <source path='/dev/pts/4'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/console.log' append='off'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </console>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c312,c966</label>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c312,c966</imagelabel>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:22:35 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:22:35 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.454 248514 INFO nova.virt.libvirt.driver [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully detached device tap9516b135-3b from instance 2e309dc2-3cab-4ecf-8be7-eab85790a0da from the persistent domain config.#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.454 248514 DEBUG nova.virt.libvirt.driver [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] (1/8): Attempting to detach device tap9516b135-3b with device alias net1 from instance 2e309dc2-3cab-4ecf-8be7-eab85790a0da from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.456 248514 DEBUG nova.virt.libvirt.guest [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] detach device xml: <interface type="ethernet">
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:96:b5:6d"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <target dev="tap9516b135-3b"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]: </interface>
Dec 13 03:22:35 np0005558241 nova_compute[248510]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec 13 03:22:35 np0005558241 systemd-machined[210538]: Machine qemu-30-instance-00000019 terminated.
Dec 13 03:22:35 np0005558241 kernel: tap9516b135-3b (unregistering): left promiscuous mode
Dec 13 03:22:35 np0005558241 NetworkManager[50376]: <info>  [1765614155.5676] device (tap9516b135-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:22:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:35Z|00216|binding|INFO|Releasing lport 9516b135-3bb5-4da4-942f-d044cad93bd4 from this chassis (sb_readonly=0)
Dec 13 03:22:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:35Z|00217|binding|INFO|Setting lport 9516b135-3bb5-4da4-942f-d044cad93bd4 down in Southbound
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.576 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:35Z|00218|binding|INFO|Removing iface tap9516b135-3b ovn-installed in OVS
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.580 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.585 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:b5:6d 10.100.0.3'], port_security=['fa:16:3e:96:b5:6d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2e309dc2-3cab-4ecf-8be7-eab85790a0da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=9516b135-3bb5-4da4-942f-d044cad93bd4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.586 248514 DEBUG nova.virt.libvirt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Received event <DeviceRemovedEvent: 1765614155.585611, 2e309dc2-3cab-4ecf-8be7-eab85790a0da => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec 13 03:22:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.586 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 9516b135-3bb5-4da4-942f-d044cad93bd4 in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis#033[00m
Dec 13 03:22:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.588 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.588 248514 DEBUG nova.virt.libvirt.driver [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Start waiting for the detach event from libvirt for device tap9516b135-3b with device alias net1 for instance 2e309dc2-3cab-4ecf-8be7-eab85790a0da _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.588 248514 DEBUG nova.virt.libvirt.guest [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.593 248514 DEBUG nova.virt.libvirt.guest [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface>not found in domain: <domain type='kvm' id='29'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <name>instance-00000018</name>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <uuid>2e309dc2-3cab-4ecf-8be7-eab85790a0da</uuid>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1637815753</nova:name>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:22:33</nova:creationTime>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:port uuid="10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4">
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:port uuid="9516b135-3bb5-4da4-942f-d044cad93bd4">
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:port uuid="cea82a7d-e92d-4ac6-ba47-854ec9905fd2">
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:port uuid="3861ef01-74c8-4321-b36e-79090caaf6dc">
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:22:35 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <resource>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </resource>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <entry name='serial'>2e309dc2-3cab-4ecf-8be7-eab85790a0da</entry>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <entry name='uuid'>2e309dc2-3cab-4ecf-8be7-eab85790a0da</entry>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk' index='2'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk.config' index='1'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:54:c0:80'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target dev='tap10aa2df4-a7'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:3a:29:df'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target dev='tapcea82a7d-e9'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='net2'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:82:de:08'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target dev='tap3861ef01-74'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='net3'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <source path='/dev/pts/4'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/console.log' append='off'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      </target>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/4'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <source path='/dev/pts/4'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/console.log' append='off'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </console>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c312,c966</label>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c312,c966</imagelabel>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:22:35 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:22:35 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.594 248514 INFO nova.virt.libvirt.driver [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully detached device tap9516b135-3b from instance 2e309dc2-3cab-4ecf-8be7-eab85790a0da from the live domain config.#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.595 248514 DEBUG nova.virt.libvirt.vif [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.595 248514 DEBUG nova.network.os_vif_util [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.596 248514 DEBUG nova.network.os_vif_util [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.597 248514 DEBUG os_vif [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.599 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.600 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9516b135-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.601 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.602 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.603 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.605 248514 INFO os_vif [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b')#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.607 248514 DEBUG nova.virt.libvirt.guest [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1637815753</nova:name>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:22:35</nova:creationTime>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:port uuid="10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4">
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:port uuid="cea82a7d-e92d-4ac6-ba47-854ec9905fd2">
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    <nova:port uuid="3861ef01-74c8-4321-b36e-79090caaf6dc">
Dec 13 03:22:35 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:35 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:22:35 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:22:35 np0005558241 nova_compute[248510]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.608 248514 INFO nova.virt.libvirt.driver [-] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Instance destroyed successfully.#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.608 248514 DEBUG nova.objects.instance [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lazy-loading 'resources' on Instance uuid 07413df5-0bb8-42c2-95ff-13458d598139 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:22:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.608 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7d1c5ab4-4533-483d-8f56-d1189ba6dd30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.647 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[701b7303-204f-46c6-9185-73785c05ff89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.651 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b0eccf63-bbcb-40db-9ebe-326d6c15df13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.690 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a482edcf-2e75-48fd-a53c-fb59ae512a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.711 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0c8aec-8d00-47af-946a-d4526e28ce5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665427, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282248, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.733 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d829feca-b34a-40e5-9c90-7218b470aa48]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665441, 'tstamp': 665441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282249, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665444, 'tstamp': 665444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282249, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.736 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.738 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.739 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.741 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.741 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.742 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:35.742 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.883 248514 DEBUG nova.compute.manager [req-181eb090-e259-4925-9d2d-e1927a9ecb5d req-2715ca68-9f1c-48b8-865f-c6289282d593 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-unplugged-9516b135-3bb5-4da4-942f-d044cad93bd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.884 248514 DEBUG oslo_concurrency.lockutils [req-181eb090-e259-4925-9d2d-e1927a9ecb5d req-2715ca68-9f1c-48b8-865f-c6289282d593 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.884 248514 DEBUG oslo_concurrency.lockutils [req-181eb090-e259-4925-9d2d-e1927a9ecb5d req-2715ca68-9f1c-48b8-865f-c6289282d593 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.884 248514 DEBUG oslo_concurrency.lockutils [req-181eb090-e259-4925-9d2d-e1927a9ecb5d req-2715ca68-9f1c-48b8-865f-c6289282d593 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.885 248514 DEBUG nova.compute.manager [req-181eb090-e259-4925-9d2d-e1927a9ecb5d req-2715ca68-9f1c-48b8-865f-c6289282d593 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-unplugged-9516b135-3bb5-4da4-942f-d044cad93bd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.885 248514 WARNING nova.compute.manager [req-181eb090-e259-4925-9d2d-e1927a9ecb5d req-2715ca68-9f1c-48b8-865f-c6289282d593 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-unplugged-9516b135-3bb5-4da4-942f-d044cad93bd4 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.944 248514 INFO nova.virt.libvirt.driver [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Deleting instance files /var/lib/nova/instances/07413df5-0bb8-42c2-95ff-13458d598139_del#033[00m
Dec 13 03:22:35 np0005558241 nova_compute[248510]: 2025-12-13 08:22:35.945 248514 INFO nova.virt.libvirt.driver [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Deletion of /var/lib/nova/instances/07413df5-0bb8-42c2-95ff-13458d598139_del complete#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.014 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.014 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.015 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.020 248514 INFO nova.compute.manager [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.020 248514 DEBUG oslo.service.loopingcall [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.020 248514 DEBUG nova.compute.manager [-] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.021 248514 DEBUG nova.network.neutron [-] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.134 248514 DEBUG nova.network.neutron [-] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.150 248514 DEBUG nova.network.neutron [-] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.168 248514 INFO nova.compute.manager [-] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Took 0.15 seconds to deallocate network for instance.#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.219 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.220 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.297 248514 DEBUG oslo_concurrency.processutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.609 248514 DEBUG oslo_concurrency.lockutils [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.771 248514 DEBUG nova.compute.manager [req-2abf7e1f-2ec4-484b-943b-0236ce677969 req-2872deae-a0fe-4e83-8c4c-7936f5fa99d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-3861ef01-74c8-4321-b36e-79090caaf6dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.772 248514 DEBUG oslo_concurrency.lockutils [req-2abf7e1f-2ec4-484b-943b-0236ce677969 req-2872deae-a0fe-4e83-8c4c-7936f5fa99d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.772 248514 DEBUG oslo_concurrency.lockutils [req-2abf7e1f-2ec4-484b-943b-0236ce677969 req-2872deae-a0fe-4e83-8c4c-7936f5fa99d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.772 248514 DEBUG oslo_concurrency.lockutils [req-2abf7e1f-2ec4-484b-943b-0236ce677969 req-2872deae-a0fe-4e83-8c4c-7936f5fa99d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.772 248514 DEBUG nova.compute.manager [req-2abf7e1f-2ec4-484b-943b-0236ce677969 req-2872deae-a0fe-4e83-8c4c-7936f5fa99d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-3861ef01-74c8-4321-b36e-79090caaf6dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.772 248514 WARNING nova.compute.manager [req-2abf7e1f-2ec4-484b-943b-0236ce677969 req-2872deae-a0fe-4e83-8c4c-7936f5fa99d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-3861ef01-74c8-4321-b36e-79090caaf6dc for instance with vm_state active and task_state None.#033[00m
Dec 13 03:22:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:22:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3226999218' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.882 248514 DEBUG oslo_concurrency.processutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.889 248514 DEBUG nova.compute.provider_tree [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.906 248514 DEBUG nova.scheduler.client.report [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.937 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:36 np0005558241 nova_compute[248510]: 2025-12-13 08:22:36.981 248514 INFO nova.scheduler.client.report [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Deleted allocations for instance 07413df5-0bb8-42c2-95ff-13458d598139#033[00m
Dec 13 03:22:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1718: 321 pgs: 321 active+clean; 200 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 9.3 KiB/s wr, 93 op/s
Dec 13 03:22:37 np0005558241 nova_compute[248510]: 2025-12-13 08:22:37.073 248514 DEBUG oslo_concurrency.lockutils [None req-7002253c-d5e0-4c64-abcc-e655bed83272 bb27aa40b8134948b82eee1cf755ccc1 14e8ffb710fe4f92a0f68ad58c260f0f - - default default] Lock "07413df5-0bb8-42c2-95ff-13458d598139" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Dec 13 03:22:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Dec 13 03:22:38 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.216 158419 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 930ebf1c-b554-4b96-90af-54fc159022b7 with type ""#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.218 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:de:08 10.100.0.10'], port_security=['fa:16:3e:82:de:08 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-2007287726', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2e309dc2-3cab-4ecf-8be7-eab85790a0da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-2007287726', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3861ef01-74c8-4321-b36e-79090caaf6dc) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.219 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3861ef01-74c8-4321-b36e-79090caaf6dc in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.220 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92#033[00m
Dec 13 03:22:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:38Z|00219|binding|INFO|Removing iface tap3861ef01-74 ovn-installed in OVS
Dec 13 03:22:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:38Z|00220|binding|INFO|Removing lport 3861ef01-74c8-4321-b36e-79090caaf6dc ovn-installed in OVS
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.224 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.236 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1df9cfbc-55c7-4844-86f2-f05170844610]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.240 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.270 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8bac078f-1d45-4cdd-9a1c-9e241ca4ed6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.274 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a696914f-9f99-4034-8c58-62f9b2a210d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.316 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ddc2a5-b892-4302-8478-8ca06cc9194c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.342 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5909248c-aaec-4926-b9b1-7ba914f800ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665427, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282278, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.363 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11f05204-06c3-4df9-a860-873c9a5834f6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665441, 'tstamp': 665441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282279, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665444, 'tstamp': 665444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282279, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.366 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.368 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.371 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.372 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.372 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.372 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.611 248514 DEBUG nova.compute.manager [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-9516b135-3bb5-4da4-942f-d044cad93bd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.611 248514 DEBUG oslo_concurrency.lockutils [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.612 248514 DEBUG oslo_concurrency.lockutils [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.612 248514 DEBUG oslo_concurrency.lockutils [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.612 248514 DEBUG nova.compute.manager [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-9516b135-3bb5-4da4-942f-d044cad93bd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.613 248514 WARNING nova.compute.manager [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-9516b135-3bb5-4da4-942f-d044cad93bd4 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.613 248514 DEBUG nova.compute.manager [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-deleted-9516b135-3bb5-4da4-942f-d044cad93bd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.614 248514 INFO nova.compute.manager [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Neutron deleted interface 9516b135-3bb5-4da4-942f-d044cad93bd4; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.614 248514 DEBUG nova.network.neutron [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.646 248514 DEBUG nova.objects.instance [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lazy-loading 'system_metadata' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.679 248514 DEBUG nova.objects.instance [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lazy-loading 'flavor' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.703 248514 DEBUG nova.virt.libvirt.vif [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.703 248514 DEBUG nova.network.os_vif_util [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converting VIF {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.704 248514 DEBUG nova.network.os_vif_util [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.708 248514 DEBUG nova.virt.libvirt.guest [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.712 248514 DEBUG nova.virt.libvirt.guest [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface>not found in domain: <domain type='kvm' id='29'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <name>instance-00000018</name>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <uuid>2e309dc2-3cab-4ecf-8be7-eab85790a0da</uuid>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1637815753</nova:name>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:22:35</nova:creationTime>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:port uuid="10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4">
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:port uuid="cea82a7d-e92d-4ac6-ba47-854ec9905fd2">
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:port uuid="3861ef01-74c8-4321-b36e-79090caaf6dc">
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:22:38 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <resource>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </resource>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <entry name='serial'>2e309dc2-3cab-4ecf-8be7-eab85790a0da</entry>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <entry name='uuid'>2e309dc2-3cab-4ecf-8be7-eab85790a0da</entry>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk' index='2'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk.config' index='1'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:54:c0:80'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target dev='tap10aa2df4-a7'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:3a:29:df'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target dev='tapcea82a7d-e9'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='net2'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:82:de:08'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target dev='tap3861ef01-74'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='net3'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <source path='/dev/pts/4'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/console.log' append='off'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      </target>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/4'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <source path='/dev/pts/4'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/console.log' append='off'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </console>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c312,c966</label>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c312,c966</imagelabel>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:22:38 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:22:38 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.713 248514 DEBUG nova.virt.libvirt.guest [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.716 248514 DEBUG nova.virt.libvirt.guest [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:96:b5:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9516b135-3b"/></interface> not found in domain: <domain type='kvm' id='29'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <name>instance-00000018</name>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <uuid>2e309dc2-3cab-4ecf-8be7-eab85790a0da</uuid>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1637815753</nova:name>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:22:35</nova:creationTime>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:port uuid="10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4">
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:port uuid="cea82a7d-e92d-4ac6-ba47-854ec9905fd2">
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:port uuid="3861ef01-74c8-4321-b36e-79090caaf6dc">
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:22:38 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <resource>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </resource>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <entry name='serial'>2e309dc2-3cab-4ecf-8be7-eab85790a0da</entry>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <entry name='uuid'>2e309dc2-3cab-4ecf-8be7-eab85790a0da</entry>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk' index='2'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/2e309dc2-3cab-4ecf-8be7-eab85790a0da_disk.config' index='1'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:54:c0:80'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target dev='tap10aa2df4-a7'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:3a:29:df'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target dev='tapcea82a7d-e9'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='net2'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:82:de:08'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target dev='tap3861ef01-74'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='net3'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <source path='/dev/pts/4'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/console.log' append='off'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      </target>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/4'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <source path='/dev/pts/4'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da/console.log' append='off'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </console>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5904' autoport='yes' listen='::0'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c312,c966</label>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c312,c966</imagelabel>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:22:38 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:22:38 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.717 248514 WARNING nova.virt.libvirt.driver [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Detaching interface fa:16:3e:96:b5:6d failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap9516b135-3b' not found.#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.718 248514 DEBUG nova.virt.libvirt.vif [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.718 248514 DEBUG nova.network.os_vif_util [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converting VIF {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.718 248514 DEBUG nova.network.os_vif_util [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.719 248514 DEBUG os_vif [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.720 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9516b135-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.721 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.724 248514 INFO os_vif [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:b5:6d,bridge_name='br-int',has_traffic_filtering=True,id=9516b135-3bb5-4da4-942f-d044cad93bd4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9516b135-3b')#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.724 248514 DEBUG nova.virt.libvirt.guest [req-fccf0b5c-0897-4250-a019-03baeb43eed7 req-04052774-50df-4b9b-94c8-fe5c24f72a70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1637815753</nova:name>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:22:38</nova:creationTime>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:port uuid="10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4">
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:port uuid="cea82a7d-e92d-4ac6-ba47-854ec9905fd2">
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    <nova:port uuid="3861ef01-74c8-4321-b36e-79090caaf6dc">
Dec 13 03:22:38 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:22:38 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:22:38 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:22:38 np0005558241 nova_compute[248510]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.844 248514 DEBUG oslo_concurrency.lockutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.844 248514 DEBUG oslo_concurrency.lockutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.845 248514 DEBUG oslo_concurrency.lockutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.845 248514 DEBUG oslo_concurrency.lockutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.846 248514 DEBUG oslo_concurrency.lockutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.847 248514 INFO nova.compute.manager [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Terminating instance#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.848 248514 DEBUG nova.compute.manager [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:22:38 np0005558241 kernel: tap10aa2df4-a7 (unregistering): left promiscuous mode
Dec 13 03:22:38 np0005558241 NetworkManager[50376]: <info>  [1765614158.9035] device (tap10aa2df4-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:22:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:38Z|00221|binding|INFO|Releasing lport 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 from this chassis (sb_readonly=0)
Dec 13 03:22:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:38Z|00222|binding|INFO|Setting lport 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 down in Southbound
Dec 13 03:22:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:38Z|00223|binding|INFO|Removing iface tap10aa2df4-a7 ovn-installed in OVS
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.913 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.915 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.927 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:c0:80 10.100.0.7'], port_security=['fa:16:3e:54:c0:80 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2e309dc2-3cab-4ecf-8be7-eab85790a0da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '71562f64-f92d-4728-bc7f-33bdc44249e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.929 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.931 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.931 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:38 np0005558241 kernel: tapcea82a7d-e9 (unregistering): left promiscuous mode
Dec 13 03:22:38 np0005558241 NetworkManager[50376]: <info>  [1765614158.9441] device (tapcea82a7d-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.948 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb2fce4-4900-4746-895f-3ab029e1eca6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:38Z|00224|binding|INFO|Releasing lport cea82a7d-e92d-4ac6-ba47-854ec9905fd2 from this chassis (sb_readonly=0)
Dec 13 03:22:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:38Z|00225|binding|INFO|Setting lport cea82a7d-e92d-4ac6-ba47-854ec9905fd2 down in Southbound
Dec 13 03:22:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:38Z|00226|binding|INFO|Removing iface tapcea82a7d-e9 ovn-installed in OVS
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.950 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.952 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.958 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:29:df 10.100.0.4'], port_security=['fa:16:3e:3a:29:df 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2e309dc2-3cab-4ecf-8be7-eab85790a0da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=cea82a7d-e92d-4ac6-ba47-854ec9905fd2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:22:38 np0005558241 kernel: tap3861ef01-74 (unregistering): left promiscuous mode
Dec 13 03:22:38 np0005558241 NetworkManager[50376]: <info>  [1765614158.9715] device (tap3861ef01-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:22:38 np0005558241 nova_compute[248510]: 2025-12-13 08:22:38.983 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.991 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0fae3da0-0648-4d30-a146-e33aa1cf9129]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:38.994 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[74aaa533-c558-45ff-9360-3e2f7764ef41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:39 np0005558241 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000018.scope: Deactivated successfully.
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.029 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b865823c-13b0-4111-a0aa-89c3f85a6688]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:39 np0005558241 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000018.scope: Consumed 16.862s CPU time.
Dec 13 03:22:39 np0005558241 systemd-machined[210538]: Machine qemu-29-instance-00000018 terminated.
Dec 13 03:22:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1720: 321 pgs: 321 active+clean; 121 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 25 KiB/s wr, 164 op/s
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.051 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[72a41ca1-6af6-4296-bec6-583003baf7b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665427, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282303, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:39 np0005558241 NetworkManager[50376]: <info>  [1765614159.0704] manager: (tap10aa2df4-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/99)
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.070 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c234e823-5d9c-4de6-9917-1f8d686b64cf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665441, 'tstamp': 665441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282304, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665444, 'tstamp': 665444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282304, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.072 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.074 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:39 np0005558241 NetworkManager[50376]: <info>  [1765614159.0821] manager: (tapcea82a7d-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/100)
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.092 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.093 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.093 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.093 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.093 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.094 158419 INFO neutron.agent.ovn.metadata.agent [-] Port cea82a7d-e92d-4ac6-ba47-854ec9905fd2 in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis#033[00m
Dec 13 03:22:39 np0005558241 NetworkManager[50376]: <info>  [1765614159.0951] manager: (tap3861ef01-74): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.095 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ca92864-3b70-4794-9db1-fa08128cef92, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.096 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e2fce347-4ff0-42a6-a9c5-95dd8e3ef20b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.097 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 namespace which is not needed anymore#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.108 248514 INFO nova.virt.libvirt.driver [-] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Instance destroyed successfully.#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.109 248514 DEBUG nova.objects.instance [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'resources' on Instance uuid 2e309dc2-3cab-4ecf-8be7-eab85790a0da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.127 248514 DEBUG nova.virt.libvirt.vif [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.127 248514 DEBUG nova.network.os_vif_util [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.128 248514 DEBUG nova.network.os_vif_util [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:c0:80,bridge_name='br-int',has_traffic_filtering=True,id=10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10aa2df4-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.128 248514 DEBUG os_vif [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:c0:80,bridge_name='br-int',has_traffic_filtering=True,id=10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10aa2df4-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.130 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.130 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10aa2df4-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.131 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.133 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.138 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.141 248514 INFO os_vif [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:c0:80,bridge_name='br-int',has_traffic_filtering=True,id=10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10aa2df4-a7')#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.142 248514 DEBUG nova.virt.libvirt.vif [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.142 248514 DEBUG nova.network.os_vif_util [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.143 248514 DEBUG nova.network.os_vif_util [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:29:df,bridge_name='br-int',has_traffic_filtering=True,id=cea82a7d-e92d-4ac6-ba47-854ec9905fd2,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcea82a7d-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.143 248514 DEBUG os_vif [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:29:df,bridge_name='br-int',has_traffic_filtering=True,id=cea82a7d-e92d-4ac6-ba47-854ec9905fd2,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcea82a7d-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.144 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.144 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcea82a7d-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.145 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.148 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.149 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.151 248514 INFO os_vif [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:29:df,bridge_name='br-int',has_traffic_filtering=True,id=cea82a7d-e92d-4ac6-ba47-854ec9905fd2,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcea82a7d-e9')#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.152 248514 DEBUG nova.virt.libvirt.vif [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1637815753',display_name='tempest-AttachInterfacesTestJSON-server-1637815753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1637815753',id=24,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPR6/x1jo3JzoUP3+G381xbzf0akciz5IkfyMZiP7+j3Hw6reaXQYBazbgOeN0Edarx6juQS4yahATsHWxSygIxCuAlD3PA4mI3Efvx7QzwF4vcFAuc26y4DubPpv0UBLg==',key_name='tempest-keypair-1488366707',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:21:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-zcojs85u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:21:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=2e309dc2-3cab-4ecf-8be7-eab85790a0da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.153 248514 DEBUG nova.network.os_vif_util [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.153 248514 DEBUG nova.network.os_vif_util [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:de:08,bridge_name='br-int',has_traffic_filtering=True,id=3861ef01-74c8-4321-b36e-79090caaf6dc,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3861ef01-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.154 248514 DEBUG os_vif [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:de:08,bridge_name='br-int',has_traffic_filtering=True,id=3861ef01-74c8-4321-b36e-79090caaf6dc,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3861ef01-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.156 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.156 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3861ef01-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.158 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.159 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.162 248514 INFO os_vif [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:de:08,bridge_name='br-int',has_traffic_filtering=True,id=3861ef01-74c8-4321-b36e-79090caaf6dc,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3861ef01-74')#033[00m
Dec 13 03:22:39 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[278670]: [NOTICE]   (278674) : haproxy version is 2.8.14-c23fe91
Dec 13 03:22:39 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[278670]: [NOTICE]   (278674) : path to executable is /usr/sbin/haproxy
Dec 13 03:22:39 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[278670]: [WARNING]  (278674) : Exiting Master process...
Dec 13 03:22:39 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[278670]: [ALERT]    (278674) : Current worker (278676) exited with code 143 (Terminated)
Dec 13 03:22:39 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[278670]: [WARNING]  (278674) : All workers exited. Exiting... (0)
Dec 13 03:22:39 np0005558241 systemd[1]: libpod-c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7.scope: Deactivated successfully.
Dec 13 03:22:39 np0005558241 podman[282374]: 2025-12-13 08:22:39.264407137 +0000 UTC m=+0.051695864 container died c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 03:22:39 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7-userdata-shm.mount: Deactivated successfully.
Dec 13 03:22:39 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d74ef57e5fc97f32c1e7bb2787cfa72693667d60ba34de8c1be9c45989ad24ab-merged.mount: Deactivated successfully.
Dec 13 03:22:39 np0005558241 podman[282374]: 2025-12-13 08:22:39.327499221 +0000 UTC m=+0.114787948 container cleanup c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:22:39 np0005558241 systemd[1]: libpod-conmon-c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7.scope: Deactivated successfully.
Dec 13 03:22:39 np0005558241 podman[282406]: 2025-12-13 08:22:39.402242552 +0000 UTC m=+0.047452230 container remove c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.410 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8274f557-5483-4499-be48-4bbcab19708b]: (4, ('Sat Dec 13 08:22:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 (c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7)\nc146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7\nSat Dec 13 08:22:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 (c146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7)\nc146da766deb51ef7d057ad8e04cef94e5448b15b9780370d7b6c6e5b1cf9ab7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.414 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[98729fe1-dd1b-4530-b739-031855ee8309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.415 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.417 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:39 np0005558241 kernel: tap1ca92864-30: left promiscuous mode
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.435 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.440 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b880eced-405b-4af3-b7d5-f975beb60934]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.459 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[49efc520-58b3-4a7f-beba-a101dfb981e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.460 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[79580822-bc9c-4083-b0bf-22f36138267b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.481 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[614a8b0a-ee39-4f82-a80e-be66be9ef18a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665417, 'reachable_time': 43079, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282422, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.484 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:22:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:39.484 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[75d49722-2f07-4169-8fc9-9bf1e3bdbdaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:39 np0005558241 systemd[1]: run-netns-ovnmeta\x2d1ca92864\x2d3b70\x2d4794\x2d9db1\x2dfa08128cef92.mount: Deactivated successfully.
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.500 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.507 248514 INFO nova.virt.libvirt.driver [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Deleting instance files /var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da_del#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.508 248514 INFO nova.virt.libvirt.driver [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Deletion of /var/lib/nova/instances/2e309dc2-3cab-4ecf-8be7-eab85790a0da_del complete#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.581 248514 INFO nova.compute.manager [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.581 248514 DEBUG oslo.service.loopingcall [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.582 248514 DEBUG nova.compute.manager [-] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:22:39 np0005558241 nova_compute[248510]: 2025-12-13 08:22:39.582 248514 DEBUG nova.network.neutron [-] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:22:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:22:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:22:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:22:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:22:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:22:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:22:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:22:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Dec 13 03:22:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Dec 13 03:22:40 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Dec 13 03:22:40 np0005558241 nova_compute[248510]: 2025-12-13 08:22:40.280 248514 DEBUG neutronclient.v2_0.client [-] Error message: {"NeutronError": {"type": "PortNotFound", "message": "Port 3861ef01-74c8-4321-b36e-79090caaf6dc could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Dec 13 03:22:40 np0005558241 nova_compute[248510]: 2025-12-13 08:22:40.280 248514 DEBUG nova.network.neutron [-] Unable to show port 3861ef01-74c8-4321-b36e-79090caaf6dc as it no longer exists. _unbind_ports /usr/lib/python3.9/site-packages/nova/network/neutron.py:666#033[00m
Dec 13 03:22:40 np0005558241 nova_compute[248510]: 2025-12-13 08:22:40.819 248514 DEBUG nova.compute.manager [req-d53d3697-49b8-498b-b1d3-8d96dfc040fd req-2e56f19b-3b9a-4c2a-931b-5990e439a45d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-unplugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:40 np0005558241 nova_compute[248510]: 2025-12-13 08:22:40.820 248514 DEBUG oslo_concurrency.lockutils [req-d53d3697-49b8-498b-b1d3-8d96dfc040fd req-2e56f19b-3b9a-4c2a-931b-5990e439a45d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:40 np0005558241 nova_compute[248510]: 2025-12-13 08:22:40.820 248514 DEBUG oslo_concurrency.lockutils [req-d53d3697-49b8-498b-b1d3-8d96dfc040fd req-2e56f19b-3b9a-4c2a-931b-5990e439a45d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:40 np0005558241 nova_compute[248510]: 2025-12-13 08:22:40.820 248514 DEBUG oslo_concurrency.lockutils [req-d53d3697-49b8-498b-b1d3-8d96dfc040fd req-2e56f19b-3b9a-4c2a-931b-5990e439a45d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:40 np0005558241 nova_compute[248510]: 2025-12-13 08:22:40.820 248514 DEBUG nova.compute.manager [req-d53d3697-49b8-498b-b1d3-8d96dfc040fd req-2e56f19b-3b9a-4c2a-931b-5990e439a45d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-unplugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:22:40 np0005558241 nova_compute[248510]: 2025-12-13 08:22:40.820 248514 DEBUG nova.compute.manager [req-d53d3697-49b8-498b-b1d3-8d96dfc040fd req-2e56f19b-3b9a-4c2a-931b-5990e439a45d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-unplugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:22:40 np0005558241 podman[282425]: 2025-12-13 08:22:40.997790662 +0000 UTC m=+0.080574525 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 03:22:41 np0005558241 podman[282424]: 2025-12-13 08:22:41.001533794 +0000 UTC m=+0.087812324 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd)
Dec 13 03:22:41 np0005558241 podman[282423]: 2025-12-13 08:22:41.033186664 +0000 UTC m=+0.122766595 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 13 03:22:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1722: 321 pgs: 321 active+clean; 121 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 21 KiB/s wr, 93 op/s
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.257 248514 DEBUG nova.compute.manager [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-deleted-3861ef01-74c8-4321-b36e-79090caaf6dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.258 248514 INFO nova.compute.manager [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Neutron deleted interface 3861ef01-74c8-4321-b36e-79090caaf6dc; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.258 248514 DEBUG nova.network.neutron [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.293 248514 DEBUG nova.compute.manager [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Detach interface failed, port_id=3861ef01-74c8-4321-b36e-79090caaf6dc, reason: Instance 2e309dc2-3cab-4ecf-8be7-eab85790a0da could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.294 248514 DEBUG nova.compute.manager [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-unplugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.294 248514 DEBUG oslo_concurrency.lockutils [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.294 248514 DEBUG oslo_concurrency.lockutils [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.295 248514 DEBUG oslo_concurrency.lockutils [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.295 248514 DEBUG nova.compute.manager [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-unplugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.295 248514 DEBUG nova.compute.manager [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-unplugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.295 248514 DEBUG nova.compute.manager [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.295 248514 DEBUG oslo_concurrency.lockutils [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.296 248514 DEBUG oslo_concurrency.lockutils [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.296 248514 DEBUG oslo_concurrency.lockutils [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.296 248514 DEBUG nova.compute.manager [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.296 248514 WARNING nova.compute.manager [req-a9b63d96-88cd-4131-a08c-ed7982cf1236 req-b59f7b09-15bb-442d-b6ae-11ac9dfc2e0d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.542 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.543 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.562 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.639 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.640 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.650 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.650 248514 INFO nova.compute.claims [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:22:41 np0005558241 nova_compute[248510]: 2025-12-13 08:22:41.783 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.142 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [{"id": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "address": "fa:16:3e:54:c0:80", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10aa2df4-a7", "ovs_interfaceid": "10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9516b135-3bb5-4da4-942f-d044cad93bd4", "address": "fa:16:3e:96:b5:6d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9516b135-3b", "ovs_interfaceid": "9516b135-3bb5-4da4-942f-d044cad93bd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "address": "fa:16:3e:3a:29:df", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcea82a7d-e9", "ovs_interfaceid": "cea82a7d-e92d-4ac6-ba47-854ec9905fd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3861ef01-74c8-4321-b36e-79090caaf6dc", "address": "fa:16:3e:82:de:08", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3861ef01-74", "ovs_interfaceid": "3861ef01-74c8-4321-b36e-79090caaf6dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.170 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.170 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.170 248514 DEBUG oslo_concurrency.lockutils [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.171 248514 DEBUG nova.network.neutron [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.172 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.183028) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614162183256, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2187, "num_deletes": 255, "total_data_size": 3594986, "memory_usage": 3661608, "flush_reason": "Manual Compaction"}
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614162210747, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3518133, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30633, "largest_seqno": 32819, "table_properties": {"data_size": 3507892, "index_size": 6607, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20816, "raw_average_key_size": 20, "raw_value_size": 3487649, "raw_average_value_size": 3466, "num_data_blocks": 289, "num_entries": 1006, "num_filter_entries": 1006, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765613960, "oldest_key_time": 1765613960, "file_creation_time": 1765614162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 27769 microseconds, and 8940 cpu microseconds.
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.210801) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3518133 bytes OK
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.210826) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.215205) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.215240) EVENT_LOG_v1 {"time_micros": 1765614162215231, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.215264) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3585746, prev total WAL file size 3585746, number of live WAL files 2.
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.216462) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3435KB)], [68(7540KB)]
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614162216638, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 11239185, "oldest_snapshot_seqno": -1}
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.276 248514 DEBUG nova.network.neutron [-] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5760 keys, 9528601 bytes, temperature: kUnknown
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614162298788, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9528601, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9488955, "index_size": 24176, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 144996, "raw_average_key_size": 25, "raw_value_size": 9384187, "raw_average_value_size": 1629, "num_data_blocks": 988, "num_entries": 5760, "num_filter_entries": 5760, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765614162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.299036) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9528601 bytes
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.300812) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.7 rd, 115.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 7.4 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 6284, records dropped: 524 output_compression: NoCompression
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.300828) EVENT_LOG_v1 {"time_micros": 1765614162300820, "job": 38, "event": "compaction_finished", "compaction_time_micros": 82210, "compaction_time_cpu_micros": 24658, "output_level": 6, "num_output_files": 1, "total_output_size": 9528601, "num_input_records": 6284, "num_output_records": 5760, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614162301398, "job": 38, "event": "table_file_deletion", "file_number": 70}
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614162302451, "job": 38, "event": "table_file_deletion", "file_number": 68}
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.216294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.302513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.302517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.302519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.302520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:22:42.302523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.311 248514 INFO nova.compute.manager [-] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Took 2.73 seconds to deallocate network for instance.#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.374 248514 DEBUG oslo_concurrency.lockutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:22:42 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/80217683' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.430 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.437 248514 DEBUG nova.compute.provider_tree [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.459 248514 DEBUG nova.scheduler.client.report [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.487 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.488 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.491 248514 DEBUG oslo_concurrency.lockutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.549 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.549 248514 DEBUG nova.network.neutron [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.573 248514 INFO nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.608 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.612 248514 DEBUG oslo_concurrency.processutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.709 248514 INFO nova.network.neutron [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Port 10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.710 248514 INFO nova.network.neutron [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Port 9516b135-3bb5-4da4-942f-d044cad93bd4 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.710 248514 INFO nova.network.neutron [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Port cea82a7d-e92d-4ac6-ba47-854ec9905fd2 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.710 248514 INFO nova.network.neutron [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Port 3861ef01-74c8-4321-b36e-79090caaf6dc from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.710 248514 DEBUG nova.network.neutron [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.728 248514 DEBUG oslo_concurrency.lockutils [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-2e309dc2-3cab-4ecf-8be7-eab85790a0da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.752 248514 DEBUG oslo_concurrency.lockutils [None req-de62c16e-54df-474a-8b67-10717a608317 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-2e309dc2-3cab-4ecf-8be7-eab85790a0da-9516b135-3bb5-4da4-942f-d044cad93bd4" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 7.378s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.756 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.758 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.758 248514 INFO nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Creating image(s)#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.779 248514 DEBUG nova.storage.rbd_utils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.796 248514 DEBUG nova.storage.rbd_utils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.818 248514 DEBUG nova.storage.rbd_utils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.822 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.862 248514 DEBUG nova.policy [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e8aa7edd2151436caa0fd25f361298fd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2495263e4f944deda2647b578d06bb21', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.867 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.909 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.909 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.910 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.910 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.931 248514 DEBUG nova.storage.rbd_utils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:42 np0005558241 nova_compute[248510]: 2025-12-13 08:22:42.935 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1724: 321 pgs: 321 active+clean; 96 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 28 KiB/s wr, 189 op/s
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.107 248514 DEBUG nova.compute.manager [req-64a53367-4a04-47a4-a976-22df26f64560 req-b53a059d-0330-4069-9f0f-d3649a0c1692 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-plugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.108 248514 DEBUG oslo_concurrency.lockutils [req-64a53367-4a04-47a4-a976-22df26f64560 req-b53a059d-0330-4069-9f0f-d3649a0c1692 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.109 248514 DEBUG oslo_concurrency.lockutils [req-64a53367-4a04-47a4-a976-22df26f64560 req-b53a059d-0330-4069-9f0f-d3649a0c1692 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.109 248514 DEBUG oslo_concurrency.lockutils [req-64a53367-4a04-47a4-a976-22df26f64560 req-b53a059d-0330-4069-9f0f-d3649a0c1692 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.109 248514 DEBUG nova.compute.manager [req-64a53367-4a04-47a4-a976-22df26f64560 req-b53a059d-0330-4069-9f0f-d3649a0c1692 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] No waiting events found dispatching network-vif-plugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.109 248514 WARNING nova.compute.manager [req-64a53367-4a04-47a4-a976-22df26f64560 req-b53a059d-0330-4069-9f0f-d3649a0c1692 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received unexpected event network-vif-plugged-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:22:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:22:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3580774684' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.220 248514 DEBUG oslo_concurrency.processutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.227 248514 DEBUG nova.compute.provider_tree [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.244 248514 DEBUG nova.scheduler.client.report [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.258 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.288 248514 DEBUG oslo_concurrency.lockutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.322 248514 INFO nova.scheduler.client.report [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Deleted allocations for instance 2e309dc2-3cab-4ecf-8be7-eab85790a0da#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.330 248514 DEBUG nova.storage.rbd_utils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] resizing rbd image 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.414 248514 DEBUG nova.objects.instance [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lazy-loading 'migration_context' on Instance uuid 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.422 248514 DEBUG oslo_concurrency.lockutils [None req-dc33c35f-65fc-40e4-9bbf-987d0ffcb750 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "2e309dc2-3cab-4ecf-8be7-eab85790a0da" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.435 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.435 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Ensure instance console log exists: /var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.436 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.436 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.437 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.584 248514 DEBUG nova.network.neutron [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Successfully created port: 4dd4aba8-8ce4-4b2e-92b4-879959570e8d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.664 248514 DEBUG nova.compute.manager [req-8c980d3a-54f6-4202-87a2-d206b955f545 req-01f08685-5a14-454d-88d7-3b9c81bf9d50 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-deleted-cea82a7d-e92d-4ac6-ba47-854ec9905fd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.665 248514 DEBUG nova.compute.manager [req-8c980d3a-54f6-4202-87a2-d206b955f545 req-01f08685-5a14-454d-88d7-3b9c81bf9d50 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Received event network-vif-deleted-10aa2df4-a74d-46a9-8cd7-efc98d0e1ec4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:22:43 np0005558241 nova_compute[248510]: 2025-12-13 08:22:43.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:22:44 np0005558241 nova_compute[248510]: 2025-12-13 08:22:44.160 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:44 np0005558241 nova_compute[248510]: 2025-12-13 08:22:44.368 248514 DEBUG nova.network.neutron [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Successfully created port: ee08056b-cf18-46d7-9fea-542c2ec040ab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:22:44 np0005558241 nova_compute[248510]: 2025-12-13 08:22:44.501 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:44 np0005558241 nova_compute[248510]: 2025-12-13 08:22:44.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:22:44 np0005558241 nova_compute[248510]: 2025-12-13 08:22:44.844 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:44 np0005558241 nova_compute[248510]: 2025-12-13 08:22:44.845 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:44 np0005558241 nova_compute[248510]: 2025-12-13 08:22:44.845 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:44 np0005558241 nova_compute[248510]: 2025-12-13 08:22:44.845 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:22:44 np0005558241 nova_compute[248510]: 2025-12-13 08:22:44.845 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:22:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Dec 13 03:22:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Dec 13 03:22:45 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Dec 13 03:22:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1726: 321 pgs: 321 active+clean; 77 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 3.0 MiB/s wr, 179 op/s
Dec 13 03:22:45 np0005558241 nova_compute[248510]: 2025-12-13 08:22:45.171 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:45 np0005558241 nova_compute[248510]: 2025-12-13 08:22:45.362 248514 DEBUG nova.network.neutron [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Successfully updated port: 4dd4aba8-8ce4-4b2e-92b4-879959570e8d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:22:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:22:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2605465771' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:22:45 np0005558241 nova_compute[248510]: 2025-12-13 08:22:45.449 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:45 np0005558241 nova_compute[248510]: 2025-12-13 08:22:45.626 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:22:45 np0005558241 nova_compute[248510]: 2025-12-13 08:22:45.628 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4367MB free_disk=59.959420564584434GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:22:45 np0005558241 nova_compute[248510]: 2025-12-13 08:22:45.628 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:45 np0005558241 nova_compute[248510]: 2025-12-13 08:22:45.628 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:45 np0005558241 nova_compute[248510]: 2025-12-13 08:22:45.703 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:22:45 np0005558241 nova_compute[248510]: 2025-12-13 08:22:45.703 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:22:45 np0005558241 nova_compute[248510]: 2025-12-13 08:22:45.703 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:22:45 np0005558241 nova_compute[248510]: 2025-12-13 08:22:45.750 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:45 np0005558241 nova_compute[248510]: 2025-12-13 08:22:45.836 248514 DEBUG nova.compute.manager [req-68ee0c91-898b-4387-9f91-b517789b6834 req-7c27ed42-38a8-49d5-83d6-8a21fe9526d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-changed-4dd4aba8-8ce4-4b2e-92b4-879959570e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:45 np0005558241 nova_compute[248510]: 2025-12-13 08:22:45.837 248514 DEBUG nova.compute.manager [req-68ee0c91-898b-4387-9f91-b517789b6834 req-7c27ed42-38a8-49d5-83d6-8a21fe9526d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Refreshing instance network info cache due to event network-changed-4dd4aba8-8ce4-4b2e-92b4-879959570e8d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:22:45 np0005558241 nova_compute[248510]: 2025-12-13 08:22:45.838 248514 DEBUG oslo_concurrency.lockutils [req-68ee0c91-898b-4387-9f91-b517789b6834 req-7c27ed42-38a8-49d5-83d6-8a21fe9526d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:22:45 np0005558241 nova_compute[248510]: 2025-12-13 08:22:45.838 248514 DEBUG oslo_concurrency.lockutils [req-68ee0c91-898b-4387-9f91-b517789b6834 req-7c27ed42-38a8-49d5-83d6-8a21fe9526d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:45 np0005558241 nova_compute[248510]: 2025-12-13 08:22:45.839 248514 DEBUG nova.network.neutron [req-68ee0c91-898b-4387-9f91-b517789b6834 req-7c27ed42-38a8-49d5-83d6-8a21fe9526d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Refreshing network info cache for port 4dd4aba8-8ce4-4b2e-92b4-879959570e8d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:22:46 np0005558241 nova_compute[248510]: 2025-12-13 08:22:46.227 248514 DEBUG nova.network.neutron [req-68ee0c91-898b-4387-9f91-b517789b6834 req-7c27ed42-38a8-49d5-83d6-8a21fe9526d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:22:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:22:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/65429836' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:22:46 np0005558241 nova_compute[248510]: 2025-12-13 08:22:46.353 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:46 np0005558241 nova_compute[248510]: 2025-12-13 08:22:46.361 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:22:46 np0005558241 nova_compute[248510]: 2025-12-13 08:22:46.383 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:22:46 np0005558241 nova_compute[248510]: 2025-12-13 08:22:46.419 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:22:46 np0005558241 nova_compute[248510]: 2025-12-13 08:22:46.419 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1727: 321 pgs: 321 active+clean; 77 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 2.6 MiB/s wr, 155 op/s
Dec 13 03:22:47 np0005558241 nova_compute[248510]: 2025-12-13 08:22:47.062 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614152.0614407, 9b6188af-75f0-4213-89c2-bd3eb72960b7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:22:47 np0005558241 nova_compute[248510]: 2025-12-13 08:22:47.063 248514 INFO nova.compute.manager [-] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:22:47 np0005558241 nova_compute[248510]: 2025-12-13 08:22:47.096 248514 DEBUG nova.compute.manager [None req-c5efbff1-d9bd-4341-a027-23553c6d38c5 - - - - - -] [instance: 9b6188af-75f0-4213-89c2-bd3eb72960b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:22:47 np0005558241 nova_compute[248510]: 2025-12-13 08:22:47.294 248514 DEBUG nova.network.neutron [req-68ee0c91-898b-4387-9f91-b517789b6834 req-7c27ed42-38a8-49d5-83d6-8a21fe9526d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:47 np0005558241 nova_compute[248510]: 2025-12-13 08:22:47.354 248514 DEBUG oslo_concurrency.lockutils [req-68ee0c91-898b-4387-9f91-b517789b6834 req-7c27ed42-38a8-49d5-83d6-8a21fe9526d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:47 np0005558241 nova_compute[248510]: 2025-12-13 08:22:47.415 248514 DEBUG nova.network.neutron [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Successfully updated port: ee08056b-cf18-46d7-9fea-542c2ec040ab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:22:47 np0005558241 nova_compute[248510]: 2025-12-13 08:22:47.419 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:22:47 np0005558241 nova_compute[248510]: 2025-12-13 08:22:47.435 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "refresh_cache-7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:22:47 np0005558241 nova_compute[248510]: 2025-12-13 08:22:47.436 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquired lock "refresh_cache-7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:47 np0005558241 nova_compute[248510]: 2025-12-13 08:22:47.436 248514 DEBUG nova.network.neutron [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:22:47 np0005558241 nova_compute[248510]: 2025-12-13 08:22:47.570 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614152.5687995, 1c17e7b7-7062-48d2-a30f-b387929244d9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:22:47 np0005558241 nova_compute[248510]: 2025-12-13 08:22:47.570 248514 INFO nova.compute.manager [-] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:22:47 np0005558241 nova_compute[248510]: 2025-12-13 08:22:47.591 248514 DEBUG nova.compute.manager [None req-06ad8612-b386-4c35-bfdc-1f80e2249245 - - - - - -] [instance: 1c17e7b7-7062-48d2-a30f-b387929244d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:22:47 np0005558241 nova_compute[248510]: 2025-12-13 08:22:47.903 248514 DEBUG nova.network.neutron [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:22:47 np0005558241 nova_compute[248510]: 2025-12-13 08:22:47.998 248514 DEBUG nova.compute.manager [req-9e4eaca6-6586-45ce-b985-f820743caf81 req-854d0d6f-7822-4415-889b-31163f45c540 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-changed-ee08056b-cf18-46d7-9fea-542c2ec040ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:47 np0005558241 nova_compute[248510]: 2025-12-13 08:22:47.998 248514 DEBUG nova.compute.manager [req-9e4eaca6-6586-45ce-b985-f820743caf81 req-854d0d6f-7822-4415-889b-31163f45c540 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Refreshing instance network info cache due to event network-changed-ee08056b-cf18-46d7-9fea-542c2ec040ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:22:47 np0005558241 nova_compute[248510]: 2025-12-13 08:22:47.998 248514 DEBUG oslo_concurrency.lockutils [req-9e4eaca6-6586-45ce-b985-f820743caf81 req-854d0d6f-7822-4415-889b-31163f45c540 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:22:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:22:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:22:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:22:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:22:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:22:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:22:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:22:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:22:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:22:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:22:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:22:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:22:49 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:22:49 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:22:49 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:22:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1728: 321 pgs: 321 active+clean; 88 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 2.7 MiB/s wr, 151 op/s
Dec 13 03:22:49 np0005558241 nova_compute[248510]: 2025-12-13 08:22:49.205 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:49 np0005558241 podman[282880]: 2025-12-13 08:22:49.398399705 +0000 UTC m=+0.055650371 container create 92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:22:49 np0005558241 systemd[1]: Started libpod-conmon-92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738.scope.
Dec 13 03:22:49 np0005558241 podman[282880]: 2025-12-13 08:22:49.367671839 +0000 UTC m=+0.024922525 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:22:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:22:49 np0005558241 podman[282880]: 2025-12-13 08:22:49.494638786 +0000 UTC m=+0.151889512 container init 92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:22:49 np0005558241 podman[282880]: 2025-12-13 08:22:49.504474418 +0000 UTC m=+0.161725114 container start 92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kirch, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 03:22:49 np0005558241 nova_compute[248510]: 2025-12-13 08:22:49.504 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:49 np0005558241 podman[282880]: 2025-12-13 08:22:49.509560243 +0000 UTC m=+0.166810949 container attach 92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:22:49 np0005558241 systemd[1]: libpod-92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738.scope: Deactivated successfully.
Dec 13 03:22:49 np0005558241 flamboyant_kirch[282896]: 167 167
Dec 13 03:22:49 np0005558241 podman[282880]: 2025-12-13 08:22:49.51430195 +0000 UTC m=+0.171552636 container died 92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kirch, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:22:49 np0005558241 conmon[282896]: conmon 92729bc230c83be32b6a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738.scope/container/memory.events
Dec 13 03:22:49 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6ee4c0a75fdd675b94646a8807186fb7de064d80e12e5b8a1c606ce84d9f9cc9-merged.mount: Deactivated successfully.
Dec 13 03:22:49 np0005558241 podman[282880]: 2025-12-13 08:22:49.563850571 +0000 UTC m=+0.221101237 container remove 92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kirch, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:22:49 np0005558241 systemd[1]: libpod-conmon-92729bc230c83be32b6a2f0555d1c78612a2dd5a6904127bea9201a0901df738.scope: Deactivated successfully.
Dec 13 03:22:49 np0005558241 podman[282921]: 2025-12-13 08:22:49.746546791 +0000 UTC m=+0.049995653 container create f5a3355c2c66f76e960a0bc7b9a97d69d548ea60ddd98bafca109777d5c5624c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:22:49 np0005558241 nova_compute[248510]: 2025-12-13 08:22:49.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:22:49 np0005558241 systemd[1]: Started libpod-conmon-f5a3355c2c66f76e960a0bc7b9a97d69d548ea60ddd98bafca109777d5c5624c.scope.
Dec 13 03:22:49 np0005558241 podman[282921]: 2025-12-13 08:22:49.725780799 +0000 UTC m=+0.029229691 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:22:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:22:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e519fc0fc1f17c30c8a8c735ee4c09faa7773bc35526d3c86c9cfdf8795c07f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:22:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e519fc0fc1f17c30c8a8c735ee4c09faa7773bc35526d3c86c9cfdf8795c07f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:22:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e519fc0fc1f17c30c8a8c735ee4c09faa7773bc35526d3c86c9cfdf8795c07f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:22:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e519fc0fc1f17c30c8a8c735ee4c09faa7773bc35526d3c86c9cfdf8795c07f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:22:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e519fc0fc1f17c30c8a8c735ee4c09faa7773bc35526d3c86c9cfdf8795c07f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:22:49 np0005558241 podman[282921]: 2025-12-13 08:22:49.851207658 +0000 UTC m=+0.154656540 container init f5a3355c2c66f76e960a0bc7b9a97d69d548ea60ddd98bafca109777d5c5624c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 03:22:49 np0005558241 podman[282921]: 2025-12-13 08:22:49.859887082 +0000 UTC m=+0.163335954 container start f5a3355c2c66f76e960a0bc7b9a97d69d548ea60ddd98bafca109777d5c5624c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mahavira, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 03:22:49 np0005558241 podman[282921]: 2025-12-13 08:22:49.863950732 +0000 UTC m=+0.167399754 container attach f5a3355c2c66f76e960a0bc7b9a97d69d548ea60ddd98bafca109777d5c5624c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mahavira, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 03:22:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:22:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Dec 13 03:22:49 np0005558241 nova_compute[248510]: 2025-12-13 08:22:49.998 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Dec 13 03:22:50 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Dec 13 03:22:50 np0005558241 ecstatic_mahavira[282939]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:22:50 np0005558241 ecstatic_mahavira[282939]: --> All data devices are unavailable
Dec 13 03:22:50 np0005558241 systemd[1]: libpod-f5a3355c2c66f76e960a0bc7b9a97d69d548ea60ddd98bafca109777d5c5624c.scope: Deactivated successfully.
Dec 13 03:22:50 np0005558241 podman[282921]: 2025-12-13 08:22:50.440717819 +0000 UTC m=+0.744166681 container died f5a3355c2c66f76e960a0bc7b9a97d69d548ea60ddd98bafca109777d5c5624c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 03:22:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e519fc0fc1f17c30c8a8c735ee4c09faa7773bc35526d3c86c9cfdf8795c07f4-merged.mount: Deactivated successfully.
Dec 13 03:22:50 np0005558241 podman[282921]: 2025-12-13 08:22:50.502171683 +0000 UTC m=+0.805620545 container remove f5a3355c2c66f76e960a0bc7b9a97d69d548ea60ddd98bafca109777d5c5624c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mahavira, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:22:50 np0005558241 systemd[1]: libpod-conmon-f5a3355c2c66f76e960a0bc7b9a97d69d548ea60ddd98bafca109777d5c5624c.scope: Deactivated successfully.
Dec 13 03:22:50 np0005558241 nova_compute[248510]: 2025-12-13 08:22:50.594 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614155.5921252, 07413df5-0bb8-42c2-95ff-13458d598139 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:22:50 np0005558241 nova_compute[248510]: 2025-12-13 08:22:50.595 248514 INFO nova.compute.manager [-] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:22:50 np0005558241 nova_compute[248510]: 2025-12-13 08:22:50.622 248514 DEBUG nova.compute.manager [None req-70d8d6a6-aa7e-4ac6-b234-12704c71be6a - - - - - -] [instance: 07413df5-0bb8-42c2-95ff-13458d598139] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:22:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1730: 321 pgs: 321 active+clean; 88 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.7 MiB/s wr, 80 op/s
Dec 13 03:22:51 np0005558241 podman[283032]: 2025-12-13 08:22:51.055714477 +0000 UTC m=+0.045887981 container create b8075d546fbff87025a08513dcec1a09573a73d1f0a49cc8b16f7c122934e6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:22:51 np0005558241 systemd[1]: Started libpod-conmon-b8075d546fbff87025a08513dcec1a09573a73d1f0a49cc8b16f7c122934e6e0.scope.
Dec 13 03:22:51 np0005558241 podman[283032]: 2025-12-13 08:22:51.036576525 +0000 UTC m=+0.026750049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:22:51 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:22:51 np0005558241 podman[283032]: 2025-12-13 08:22:51.156435108 +0000 UTC m=+0.146608702 container init b8075d546fbff87025a08513dcec1a09573a73d1f0a49cc8b16f7c122934e6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_stonebraker, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:22:51 np0005558241 podman[283032]: 2025-12-13 08:22:51.165273875 +0000 UTC m=+0.155447389 container start b8075d546fbff87025a08513dcec1a09573a73d1f0a49cc8b16f7c122934e6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_stonebraker, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 03:22:51 np0005558241 podman[283032]: 2025-12-13 08:22:51.169203072 +0000 UTC m=+0.159376746 container attach b8075d546fbff87025a08513dcec1a09573a73d1f0a49cc8b16f7c122934e6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:22:51 np0005558241 nervous_stonebraker[283048]: 167 167
Dec 13 03:22:51 np0005558241 systemd[1]: libpod-b8075d546fbff87025a08513dcec1a09573a73d1f0a49cc8b16f7c122934e6e0.scope: Deactivated successfully.
Dec 13 03:22:51 np0005558241 podman[283032]: 2025-12-13 08:22:51.172672298 +0000 UTC m=+0.162845802 container died b8075d546fbff87025a08513dcec1a09573a73d1f0a49cc8b16f7c122934e6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_stonebraker, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Dec 13 03:22:51 np0005558241 systemd[1]: var-lib-containers-storage-overlay-76b7b6e53f7535157188657df7670bf6c60bda50e49ca8149c449154f812f04a-merged.mount: Deactivated successfully.
Dec 13 03:22:51 np0005558241 podman[283032]: 2025-12-13 08:22:51.217927452 +0000 UTC m=+0.208100956 container remove b8075d546fbff87025a08513dcec1a09573a73d1f0a49cc8b16f7c122934e6e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_stonebraker, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:22:51 np0005558241 systemd[1]: libpod-conmon-b8075d546fbff87025a08513dcec1a09573a73d1f0a49cc8b16f7c122934e6e0.scope: Deactivated successfully.
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.334 248514 DEBUG nova.network.neutron [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Updating instance_info_cache with network_info: [{"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.357 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Releasing lock "refresh_cache-7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.357 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Instance network_info: |[{"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.357 248514 DEBUG oslo_concurrency.lockutils [req-9e4eaca6-6586-45ce-b985-f820743caf81 req-854d0d6f-7822-4415-889b-31163f45c540 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.358 248514 DEBUG nova.network.neutron [req-9e4eaca6-6586-45ce-b985-f820743caf81 req-854d0d6f-7822-4415-889b-31163f45c540 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Refreshing network info cache for port ee08056b-cf18-46d7-9fea-542c2ec040ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.361 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Start _get_guest_xml network_info=[{"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.368 248514 WARNING nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.373 248514 DEBUG nova.virt.libvirt.host [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.374 248514 DEBUG nova.virt.libvirt.host [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.378 248514 DEBUG nova.virt.libvirt.host [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.378 248514 DEBUG nova.virt.libvirt.host [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.379 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.379 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.379 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.379 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.379 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.380 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.380 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.380 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.380 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.380 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.380 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.381 248514 DEBUG nova.virt.hardware [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.384 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:22:51 np0005558241 podman[283072]: 2025-12-13 08:22:51.430882598 +0000 UTC m=+0.044421336 container create 0b6c3115e8e9ffc813b11a6200ab1597a4953596b3bd50120ff9623f3e594383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_darwin, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:22:51 np0005558241 systemd[1]: Started libpod-conmon-0b6c3115e8e9ffc813b11a6200ab1597a4953596b3bd50120ff9623f3e594383.scope.
Dec 13 03:22:51 np0005558241 podman[283072]: 2025-12-13 08:22:51.411315116 +0000 UTC m=+0.024853894 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:22:51 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:22:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0e980e54cfa8571cc9d0a34c25617f246d9407ce08c809530a4134380b8ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:22:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0e980e54cfa8571cc9d0a34c25617f246d9407ce08c809530a4134380b8ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:22:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0e980e54cfa8571cc9d0a34c25617f246d9407ce08c809530a4134380b8ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:22:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa0e980e54cfa8571cc9d0a34c25617f246d9407ce08c809530a4134380b8ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:22:51 np0005558241 podman[283072]: 2025-12-13 08:22:51.537239837 +0000 UTC m=+0.150778595 container init 0b6c3115e8e9ffc813b11a6200ab1597a4953596b3bd50120ff9623f3e594383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Dec 13 03:22:51 np0005558241 podman[283072]: 2025-12-13 08:22:51.546896545 +0000 UTC m=+0.160435283 container start 0b6c3115e8e9ffc813b11a6200ab1597a4953596b3bd50120ff9623f3e594383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:22:51 np0005558241 podman[283072]: 2025-12-13 08:22:51.552448672 +0000 UTC m=+0.165987530 container attach 0b6c3115e8e9ffc813b11a6200ab1597a4953596b3bd50120ff9623f3e594383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_darwin, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:22:51 np0005558241 confident_darwin[283089]: {
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:    "0": [
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:        {
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "devices": [
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "/dev/loop3"
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            ],
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "lv_name": "ceph_lv0",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "lv_size": "21470642176",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "name": "ceph_lv0",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "tags": {
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.cluster_name": "ceph",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.crush_device_class": "",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.encrypted": "0",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.objectstore": "bluestore",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.osd_id": "0",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.type": "block",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.vdo": "0",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.with_tpm": "0"
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            },
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "type": "block",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "vg_name": "ceph_vg0"
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:        }
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:    ],
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:    "1": [
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:        {
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "devices": [
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "/dev/loop4"
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            ],
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "lv_name": "ceph_lv1",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "lv_size": "21470642176",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "name": "ceph_lv1",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "tags": {
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.cluster_name": "ceph",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.crush_device_class": "",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.encrypted": "0",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.objectstore": "bluestore",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.osd_id": "1",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.type": "block",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.vdo": "0",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.with_tpm": "0"
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            },
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "type": "block",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "vg_name": "ceph_vg1"
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:        }
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:    ],
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:    "2": [
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:        {
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "devices": [
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "/dev/loop5"
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            ],
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "lv_name": "ceph_lv2",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "lv_size": "21470642176",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "name": "ceph_lv2",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "tags": {
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.cluster_name": "ceph",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.crush_device_class": "",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.encrypted": "0",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.objectstore": "bluestore",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.osd_id": "2",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.type": "block",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.vdo": "0",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:                "ceph.with_tpm": "0"
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            },
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "type": "block",
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:            "vg_name": "ceph_vg2"
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:        }
Dec 13 03:22:51 np0005558241 confident_darwin[283089]:    ]
Dec 13 03:22:51 np0005558241 confident_darwin[283089]: }
Dec 13 03:22:51 np0005558241 systemd[1]: libpod-0b6c3115e8e9ffc813b11a6200ab1597a4953596b3bd50120ff9623f3e594383.scope: Deactivated successfully.
Dec 13 03:22:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:22:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3437399676' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:22:51 np0005558241 podman[283120]: 2025-12-13 08:22:51.928004732 +0000 UTC m=+0.033017394 container died 0b6c3115e8e9ffc813b11a6200ab1597a4953596b3bd50120ff9623f3e594383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.945 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:22:51 np0005558241 systemd[1]: var-lib-containers-storage-overlay-cfa0e980e54cfa8571cc9d0a34c25617f246d9407ce08c809530a4134380b8ca-merged.mount: Deactivated successfully.
Dec 13 03:22:51 np0005558241 podman[283120]: 2025-12-13 08:22:51.972328844 +0000 UTC m=+0.077341496 container remove 0b6c3115e8e9ffc813b11a6200ab1597a4953596b3bd50120ff9623f3e594383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.973 248514 DEBUG nova.storage.rbd_utils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:22:51 np0005558241 nova_compute[248510]: 2025-12-13 08:22:51.978 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:22:51 np0005558241 systemd[1]: libpod-conmon-0b6c3115e8e9ffc813b11a6200ab1597a4953596b3bd50120ff9623f3e594383.scope: Deactivated successfully.
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.073 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.073 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.090 248514 DEBUG nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.180 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.180 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.189 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.189 248514 INFO nova.compute.claims [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.325 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:52 np0005558241 podman[283237]: 2025-12-13 08:22:52.498228117 +0000 UTC m=+0.051732996 container create dddf6faba4f6f4671f29778c03b1d31c482e749cd4c67fca4735d7cf5c818b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 03:22:52 np0005558241 systemd[1]: Started libpod-conmon-dddf6faba4f6f4671f29778c03b1d31c482e749cd4c67fca4735d7cf5c818b35.scope.
Dec 13 03:22:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:22:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/843865063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:22:52 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:22:52 np0005558241 podman[283237]: 2025-12-13 08:22:52.476911331 +0000 UTC m=+0.030416240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.571 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.574 248514 DEBUG nova.virt.libvirt.vif [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:22:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1320257136',display_name='tempest-ServersTestMultiNic-server-1320257136',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1320257136',id=29,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-dy8jmgm1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:22:42Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=7299d5b2-6ea4-47fc-b16b-fcb6dd741e38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.574 248514 DEBUG nova.network.os_vif_util [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.575 248514 DEBUG nova.network.os_vif_util [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:be:c0,bridge_name='br-int',has_traffic_filtering=True,id=4dd4aba8-8ce4-4b2e-92b4-879959570e8d,network=Network(9222573e-9ff1-438c-aa91-fa531c4ff949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dd4aba8-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.576 248514 DEBUG nova.virt.libvirt.vif [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:22:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1320257136',display_name='tempest-ServersTestMultiNic-server-1320257136',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1320257136',id=29,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-dy8jmgm1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:22:42Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=7299d5b2-6ea4-47fc-b16b-fcb6dd741e38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.577 248514 DEBUG nova.network.os_vif_util [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.578 248514 DEBUG nova.network.os_vif_util [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:98:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee08056b-cf18-46d7-9fea-542c2ec040ab,network=Network(77039756-6444-4fde-8e43-817fb80a54e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee08056b-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.579 248514 DEBUG nova.objects.instance [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:22:52 np0005558241 podman[283237]: 2025-12-13 08:22:52.587109126 +0000 UTC m=+0.140614035 container init dddf6faba4f6f4671f29778c03b1d31c482e749cd4c67fca4735d7cf5c818b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:22:52 np0005558241 podman[283237]: 2025-12-13 08:22:52.594377535 +0000 UTC m=+0.147882424 container start dddf6faba4f6f4671f29778c03b1d31c482e749cd4c67fca4735d7cf5c818b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.596 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:22:52 np0005558241 nova_compute[248510]:  <uuid>7299d5b2-6ea4-47fc-b16b-fcb6dd741e38</uuid>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:  <name>instance-0000001d</name>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersTestMultiNic-server-1320257136</nova:name>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:22:51</nova:creationTime>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:22:52 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:        <nova:user uuid="e8aa7edd2151436caa0fd25f361298fd">tempest-ServersTestMultiNic-1741413593-project-member</nova:user>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:        <nova:project uuid="2495263e4f944deda2647b578d06bb21">tempest-ServersTestMultiNic-1741413593</nova:project>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:        <nova:port uuid="4dd4aba8-8ce4-4b2e-92b4-879959570e8d">
Dec 13 03:22:52 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.93" ipVersion="4"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:        <nova:port uuid="ee08056b-cf18-46d7-9fea-542c2ec040ab">
Dec 13 03:22:52 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.1.173" ipVersion="4"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <entry name="serial">7299d5b2-6ea4-47fc-b16b-fcb6dd741e38</entry>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <entry name="uuid">7299d5b2-6ea4-47fc-b16b-fcb6dd741e38</entry>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk">
Dec 13 03:22:52 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:22:52 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk.config">
Dec 13 03:22:52 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:22:52 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:57:be:c0"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <target dev="tap4dd4aba8-8c"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:ca:98:f1"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <target dev="tapee08056b-cf"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38/console.log" append="off"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:22:52 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:22:52 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:22:52 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:22:52 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.598 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Preparing to wait for external event network-vif-plugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:22:52 np0005558241 podman[283237]: 2025-12-13 08:22:52.598154868 +0000 UTC m=+0.151659757 container attach dddf6faba4f6f4671f29778c03b1d31c482e749cd4c67fca4735d7cf5c818b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.598 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.598 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.598 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.599 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Preparing to wait for external event network-vif-plugged-ee08056b-cf18-46d7-9fea-542c2ec040ab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.599 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.599 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.599 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.600 248514 DEBUG nova.virt.libvirt.vif [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:22:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1320257136',display_name='tempest-ServersTestMultiNic-server-1320257136',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1320257136',id=29,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-dy8jmgm1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiN
ic-1741413593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:22:42Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=7299d5b2-6ea4-47fc-b16b-fcb6dd741e38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.600 248514 DEBUG nova.network.os_vif_util [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:52 np0005558241 zen_jang[283272]: 167 167
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.601 248514 DEBUG nova.network.os_vif_util [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:be:c0,bridge_name='br-int',has_traffic_filtering=True,id=4dd4aba8-8ce4-4b2e-92b4-879959570e8d,network=Network(9222573e-9ff1-438c-aa91-fa531c4ff949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dd4aba8-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.601 248514 DEBUG os_vif [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:be:c0,bridge_name='br-int',has_traffic_filtering=True,id=4dd4aba8-8ce4-4b2e-92b4-879959570e8d,network=Network(9222573e-9ff1-438c-aa91-fa531c4ff949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dd4aba8-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:22:52 np0005558241 systemd[1]: libpod-dddf6faba4f6f4671f29778c03b1d31c482e749cd4c67fca4735d7cf5c818b35.scope: Deactivated successfully.
Dec 13 03:22:52 np0005558241 podman[283237]: 2025-12-13 08:22:52.602996937 +0000 UTC m=+0.156501826 container died dddf6faba4f6f4671f29778c03b1d31c482e749cd4c67fca4735d7cf5c818b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jang, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.603 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.604 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.604 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.608 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.608 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4dd4aba8-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.609 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4dd4aba8-8c, col_values=(('external_ids', {'iface-id': '4dd4aba8-8ce4-4b2e-92b4-879959570e8d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:be:c0', 'vm-uuid': '7299d5b2-6ea4-47fc-b16b-fcb6dd741e38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.631 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:52 np0005558241 NetworkManager[50376]: <info>  [1765614172.6336] manager: (tap4dd4aba8-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.634 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.637 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.639 248514 INFO os_vif [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:be:c0,bridge_name='br-int',has_traffic_filtering=True,id=4dd4aba8-8ce4-4b2e-92b4-879959570e8d,network=Network(9222573e-9ff1-438c-aa91-fa531c4ff949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dd4aba8-8c')#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.640 248514 DEBUG nova.virt.libvirt.vif [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:22:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1320257136',display_name='tempest-ServersTestMultiNic-server-1320257136',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1320257136',id=29,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-dy8jmgm1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiN
ic-1741413593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:22:42Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=7299d5b2-6ea4-47fc-b16b-fcb6dd741e38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.640 248514 DEBUG nova.network.os_vif_util [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.641 248514 DEBUG nova.network.os_vif_util [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:98:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee08056b-cf18-46d7-9fea-542c2ec040ab,network=Network(77039756-6444-4fde-8e43-817fb80a54e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee08056b-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.642 248514 DEBUG os_vif [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:98:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee08056b-cf18-46d7-9fea-542c2ec040ab,network=Network(77039756-6444-4fde-8e43-817fb80a54e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee08056b-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:22:52 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1f358277d9612cb73ba2ef2899ea42aba3a80a5ffb9d21dd7a6538eaa8d9c697-merged.mount: Deactivated successfully.
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.642 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.643 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.644 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.647 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.647 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee08056b-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.647 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee08056b-cf, col_values=(('external_ids', {'iface-id': 'ee08056b-cf18-46d7-9fea-542c2ec040ab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ca:98:f1', 'vm-uuid': '7299d5b2-6ea4-47fc-b16b-fcb6dd741e38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:52 np0005558241 NetworkManager[50376]: <info>  [1765614172.6498] manager: (tapee08056b-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.652 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:22:52 np0005558241 podman[283237]: 2025-12-13 08:22:52.654929046 +0000 UTC m=+0.208433935 container remove dddf6faba4f6f4671f29778c03b1d31c482e749cd4c67fca4735d7cf5c818b35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.656 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.657 248514 INFO os_vif [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:98:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee08056b-cf18-46d7-9fea-542c2ec040ab,network=Network(77039756-6444-4fde-8e43-817fb80a54e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee08056b-cf')#033[00m
Dec 13 03:22:52 np0005558241 systemd[1]: libpod-conmon-dddf6faba4f6f4671f29778c03b1d31c482e749cd4c67fca4735d7cf5c818b35.scope: Deactivated successfully.
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.709 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.710 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.710 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] No VIF found with MAC fa:16:3e:57:be:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.710 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] No VIF found with MAC fa:16:3e:ca:98:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.711 248514 INFO nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Using config drive#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.757 248514 DEBUG nova.storage.rbd_utils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.774 248514 DEBUG nova.network.neutron [req-9e4eaca6-6586-45ce-b985-f820743caf81 req-854d0d6f-7822-4415-889b-31163f45c540 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Updated VIF entry in instance network info cache for port ee08056b-cf18-46d7-9fea-542c2ec040ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.775 248514 DEBUG nova.network.neutron [req-9e4eaca6-6586-45ce-b985-f820743caf81 req-854d0d6f-7822-4415-889b-31163f45c540 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Updating instance_info_cache with network_info: [{"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.825 248514 DEBUG oslo_concurrency.lockutils [req-9e4eaca6-6586-45ce-b985-f820743caf81 req-854d0d6f-7822-4415-889b-31163f45c540 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:52 np0005558241 podman[283320]: 2025-12-13 08:22:52.856215154 +0000 UTC m=+0.059779613 container create 4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 03:22:52 np0005558241 systemd[1]: Started libpod-conmon-4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f.scope.
Dec 13 03:22:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:22:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/19294390' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:22:52 np0005558241 podman[283320]: 2025-12-13 08:22:52.830949602 +0000 UTC m=+0.034514091 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.939 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:52 np0005558241 nova_compute[248510]: 2025-12-13 08:22:52.945 248514 DEBUG nova.compute.provider_tree [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:22:52 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:22:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada536aef97689b3183eff2bad3fa2e226de4bc56ce82586a56fc2ca3cbef986/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:22:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada536aef97689b3183eff2bad3fa2e226de4bc56ce82586a56fc2ca3cbef986/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:22:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada536aef97689b3183eff2bad3fa2e226de4bc56ce82586a56fc2ca3cbef986/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:22:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada536aef97689b3183eff2bad3fa2e226de4bc56ce82586a56fc2ca3cbef986/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:22:52 np0005558241 podman[283320]: 2025-12-13 08:22:52.966603213 +0000 UTC m=+0.170167672 container init 4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bhaskara, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 03:22:52 np0005558241 podman[283320]: 2025-12-13 08:22:52.974880297 +0000 UTC m=+0.178444736 container start 4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:22:52 np0005558241 podman[283320]: 2025-12-13 08:22:52.978844395 +0000 UTC m=+0.182408834 container attach 4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bhaskara, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:22:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1731: 321 pgs: 321 active+clean; 88 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 436 KiB/s wr, 17 op/s
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.108 248514 DEBUG nova.scheduler.client.report [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.135 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.136 248514 DEBUG nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.182 248514 DEBUG nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.183 248514 DEBUG nova.network.neutron [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.207 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.208 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.210 248514 INFO nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.241 248514 DEBUG nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.245 248514 DEBUG nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.251 248514 INFO nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Creating config drive at /var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38/disk.config#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.258 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdsb0e8ew execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.350 248514 DEBUG nova.policy [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee4333dff699428ca3ae5201855ab430', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea37c93820f34312bc386ea952ecd94f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.388 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.389 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.398 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.399 248514 INFO nova.compute.claims [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.403 248514 DEBUG nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.405 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.405 248514 INFO nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Creating image(s)#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.433 248514 DEBUG nova.storage.rbd_utils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.478 248514 DEBUG nova.storage.rbd_utils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.516 248514 DEBUG nova.storage.rbd_utils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.525 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.556 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdsb0e8ew" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.599 248514 DEBUG nova.storage.rbd_utils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] rbd image 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.605 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38/disk.config 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.638 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.639 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.640 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.640 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.660 248514 DEBUG nova.storage.rbd_utils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.664 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:53 np0005558241 lvm[283534]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:22:53 np0005558241 lvm[283530]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:22:53 np0005558241 lvm[283530]: VG ceph_vg0 finished
Dec 13 03:22:53 np0005558241 lvm[283534]: VG ceph_vg2 finished
Dec 13 03:22:53 np0005558241 lvm[283533]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:22:53 np0005558241 lvm[283533]: VG ceph_vg1 finished
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.786 248514 DEBUG oslo_concurrency.processutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38/disk.config 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.791 248514 INFO nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Deleting local config drive /var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38/disk.config because it was imported into RBD.#033[00m
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.796 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:53 np0005558241 admiring_bhaskara[283337]: {}
Dec 13 03:22:53 np0005558241 systemd[1]: libpod-4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f.scope: Deactivated successfully.
Dec 13 03:22:53 np0005558241 systemd[1]: libpod-4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f.scope: Consumed 1.504s CPU time.
Dec 13 03:22:53 np0005558241 podman[283320]: 2025-12-13 08:22:53.850356981 +0000 UTC m=+1.053921420 container died 4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Dec 13 03:22:53 np0005558241 kernel: tap4dd4aba8-8c: entered promiscuous mode
Dec 13 03:22:53 np0005558241 systemd-udevd[283529]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:22:53 np0005558241 NetworkManager[50376]: <info>  [1765614173.8946] manager: (tap4dd4aba8-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Dec 13 03:22:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:53Z|00227|binding|INFO|Claiming lport 4dd4aba8-8ce4-4b2e-92b4-879959570e8d for this chassis.
Dec 13 03:22:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:53Z|00228|binding|INFO|4dd4aba8-8ce4-4b2e-92b4-879959570e8d: Claiming fa:16:3e:57:be:c0 10.100.0.93
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.897 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:53 np0005558241 NetworkManager[50376]: <info>  [1765614173.9097] device (tap4dd4aba8-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:22:53 np0005558241 NetworkManager[50376]: <info>  [1765614173.9108] device (tap4dd4aba8-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:22:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.912 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:be:c0 10.100.0.93'], port_security=['fa:16:3e:57:be:c0 10.100.0.93'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.93/24', 'neutron:device_id': '7299d5b2-6ea4-47fc-b16b-fcb6dd741e38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9222573e-9ff1-438c-aa91-fa531c4ff949', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7b24070-0889-4199-b066-7f798c438fcf, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4dd4aba8-8ce4-4b2e-92b4-879959570e8d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:22:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.913 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4dd4aba8-8ce4-4b2e-92b4-879959570e8d in datapath 9222573e-9ff1-438c-aa91-fa531c4ff949 bound to our chassis#033[00m
Dec 13 03:22:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.916 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9222573e-9ff1-438c-aa91-fa531c4ff949#033[00m
Dec 13 03:22:53 np0005558241 NetworkManager[50376]: <info>  [1765614173.9207] manager: (tapee08056b-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Dec 13 03:22:53 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ada536aef97689b3183eff2bad3fa2e226de4bc56ce82586a56fc2ca3cbef986-merged.mount: Deactivated successfully.
Dec 13 03:22:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.940 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e547e5c8-2cc6-4ffa-bacc-711337218eb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.943 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9222573e-91 in ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:22:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.948 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9222573e-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:22:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.949 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0a53fb-50ad-4c1e-89d8-1f73ca91f804]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.950 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[41b885a0-26b5-4d51-8f53-ca29d583fd86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:53 np0005558241 kernel: tapee08056b-cf: entered promiscuous mode
Dec 13 03:22:53 np0005558241 NetworkManager[50376]: <info>  [1765614173.9522] device (tapee08056b-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:22:53 np0005558241 NetworkManager[50376]: <info>  [1765614173.9540] device (tapee08056b-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:22:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:53Z|00229|binding|INFO|Claiming lport ee08056b-cf18-46d7-9fea-542c2ec040ab for this chassis.
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.954 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:53Z|00230|binding|INFO|ee08056b-cf18-46d7-9fea-542c2ec040ab: Claiming fa:16:3e:ca:98:f1 10.100.1.173
Dec 13 03:22:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:53Z|00231|binding|INFO|Setting lport 4dd4aba8-8ce4-4b2e-92b4-879959570e8d ovn-installed in OVS
Dec 13 03:22:53 np0005558241 nova_compute[248510]: 2025-12-13 08:22:53.961 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:53 np0005558241 systemd-machined[210538]: New machine qemu-34-instance-0000001d.
Dec 13 03:22:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.965 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[c4209abe-b259-4da0-b68c-648b0fcff36a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:53Z|00232|binding|INFO|Setting lport 4dd4aba8-8ce4-4b2e-92b4-879959570e8d up in Southbound
Dec 13 03:22:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.968 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:98:f1 10.100.1.173'], port_security=['fa:16:3e:ca:98:f1 10.100.1.173'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.173/24', 'neutron:device_id': '7299d5b2-6ea4-47fc-b16b-fcb6dd741e38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77039756-6444-4fde-8e43-817fb80a54e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ef8942f-4ab0-4f06-afc6-d773195832b4, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ee08056b-cf18-46d7-9fea-542c2ec040ab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:22:53 np0005558241 systemd[1]: Started Virtual Machine qemu-34-instance-0000001d.
Dec 13 03:22:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:53.993 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[15ae8e7b-a759-474c-bb4f-f6c9ce547f37]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:54 np0005558241 podman[283320]: 2025-12-13 08:22:53.999482634 +0000 UTC m=+1.203047073 container remove 4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bhaskara, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 03:22:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:54Z|00233|binding|INFO|Setting lport ee08056b-cf18-46d7-9fea-542c2ec040ab ovn-installed in OVS
Dec 13 03:22:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:54Z|00234|binding|INFO|Setting lport ee08056b-cf18-46d7-9fea-542c2ec040ab up in Southbound
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.012 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.028 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3a6fa469-df9b-42b8-a829-c7c1046879fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:54 np0005558241 systemd[1]: libpod-conmon-4082daab84d00785ebd588b0095764bca7f724d1e4d8f90fa3b347b6d2b4507f.scope: Deactivated successfully.
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.037 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:54 np0005558241 NetworkManager[50376]: <info>  [1765614174.0446] manager: (tap9222573e-90): new Veth device (/org/freedesktop/NetworkManager/Devices/106)
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.043 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3465b458-79e5-4036-9e3e-3bdc24df143c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:22:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:22:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:22:54 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.087 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1306a2fc-5382-4365-96c3-a4bd6428b260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.092 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d52ab6f5-9c4e-45ac-90fd-eb310f3bb586]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:22:54 np0005558241 NetworkManager[50376]: <info>  [1765614174.1190] device (tap9222573e-90): carrier: link connected
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.124 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad2d661-09c2-476f-ae09-627877e95e75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.126 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614159.105946, 2e309dc2-3cab-4ecf-8be7-eab85790a0da => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.127 248514 INFO nova.compute.manager [-] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.136 248514 DEBUG nova.storage.rbd_utils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] resizing rbd image 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.143 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[77b4a375-a3e4-4065-a1e0-997ccc94a5c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9222573e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:40:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675133, 'reachable_time': 22562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283685, 'error': None, 'target': 'ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.164 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d123c88-64f5-4103-86a6-bcdad5645022]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feab:4065'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675133, 'tstamp': 675133}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283703, 'error': None, 'target': 'ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.181 248514 DEBUG nova.compute.manager [None req-206b2a67-5f37-4f61-89e1-e08e15808743 - - - - - -] [instance: 2e309dc2-3cab-4ecf-8be7-eab85790a0da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.187 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f47bdf96-8b02-42db-9e1e-2e505f38093f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9222573e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:40:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675133, 'reachable_time': 22562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283717, 'error': None, 'target': 'ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.216 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a334df17-d416-4dbc-8d85-1a4dcfd63420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.238 248514 DEBUG nova.compute.manager [req-b49e2b37-021e-46d9-9914-acc1eee6c2ea req-0b0192f0-2af6-41aa-94e3-2d5d84befe66 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-plugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.238 248514 DEBUG oslo_concurrency.lockutils [req-b49e2b37-021e-46d9-9914-acc1eee6c2ea req-0b0192f0-2af6-41aa-94e3-2d5d84befe66 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.239 248514 DEBUG oslo_concurrency.lockutils [req-b49e2b37-021e-46d9-9914-acc1eee6c2ea req-0b0192f0-2af6-41aa-94e3-2d5d84befe66 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.239 248514 DEBUG oslo_concurrency.lockutils [req-b49e2b37-021e-46d9-9914-acc1eee6c2ea req-0b0192f0-2af6-41aa-94e3-2d5d84befe66 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.240 248514 DEBUG nova.compute.manager [req-b49e2b37-021e-46d9-9914-acc1eee6c2ea req-0b0192f0-2af6-41aa-94e3-2d5d84befe66 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Processing event network-vif-plugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.246 248514 DEBUG nova.objects.instance [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'migration_context' on Instance uuid 3b43a9c7-85e7-4558-bd2f-e4712882021e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.275 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.275 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Ensure instance console log exists: /var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.276 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.276 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.276 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.286 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c38cb2f6-b4be-4c75-96c3-1ff5ccf5ee1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.287 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9222573e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.288 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.288 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9222573e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:54 np0005558241 kernel: tap9222573e-90: entered promiscuous mode
Dec 13 03:22:54 np0005558241 NetworkManager[50376]: <info>  [1765614174.2910] manager: (tap9222573e-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.291 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.293 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9222573e-90, col_values=(('external_ids', {'iface-id': '45a4178e-17fc-4302-986d-b435f576d8a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:54Z|00235|binding|INFO|Releasing lport 45a4178e-17fc-4302-986d-b435f576d8a7 from this chassis (sb_readonly=0)
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.314 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9222573e-9ff1-438c-aa91-fa531c4ff949.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9222573e-9ff1-438c-aa91-fa531c4ff949.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.314 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.315 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a7a9b3-f2de-4ec6-bb4d-fe53391adb90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.316 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-9222573e-9ff1-438c-aa91-fa531c4ff949
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/9222573e-9ff1-438c-aa91-fa531c4ff949.pid.haproxy
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 9222573e-9ff1-438c-aa91-fa531c4ff949
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.317 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949', 'env', 'PROCESS_TAG=haproxy-9222573e-9ff1-438c-aa91-fa531c4ff949', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9222573e-9ff1-438c-aa91-fa531c4ff949.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.376 248514 DEBUG nova.compute.manager [req-a3aa708a-237f-4362-9d79-a0cd19d299da req-f1f3ac8a-0aa7-4ef7-b387-d5382af43c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-plugged-ee08056b-cf18-46d7-9fea-542c2ec040ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.377 248514 DEBUG oslo_concurrency.lockutils [req-a3aa708a-237f-4362-9d79-a0cd19d299da req-f1f3ac8a-0aa7-4ef7-b387-d5382af43c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.378 248514 DEBUG oslo_concurrency.lockutils [req-a3aa708a-237f-4362-9d79-a0cd19d299da req-f1f3ac8a-0aa7-4ef7-b387-d5382af43c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.378 248514 DEBUG oslo_concurrency.lockutils [req-a3aa708a-237f-4362-9d79-a0cd19d299da req-f1f3ac8a-0aa7-4ef7-b387-d5382af43c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.378 248514 DEBUG nova.compute.manager [req-a3aa708a-237f-4362-9d79-a0cd19d299da req-f1f3ac8a-0aa7-4ef7-b387-d5382af43c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Processing event network-vif-plugged-ee08056b-cf18-46d7-9fea-542c2ec040ab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.381 248514 DEBUG nova.network.neutron [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Successfully created port: bc4158d8-4963-4009-a434-0a0106941c9d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:22:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:22:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1194278152' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.454 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.460 248514 DEBUG nova.compute.provider_tree [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.477 248514 DEBUG nova.scheduler.client.report [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.507 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.530 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.532 248514 DEBUG nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.590 248514 DEBUG nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.590 248514 DEBUG nova.network.neutron [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.625 248514 INFO nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.650 248514 DEBUG nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:22:54 np0005558241 podman[283766]: 2025-12-13 08:22:54.718537665 +0000 UTC m=+0.059078556 container create 73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.746 248514 DEBUG nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.747 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.748 248514 INFO nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Creating image(s)#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.771 248514 DEBUG nova.storage.rbd_utils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:54 np0005558241 systemd[1]: Started libpod-conmon-73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99.scope.
Dec 13 03:22:54 np0005558241 podman[283766]: 2025-12-13 08:22:54.686882305 +0000 UTC m=+0.027423206 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.792 248514 DEBUG nova.storage.rbd_utils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:22:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea3c1eb258931fd7cf42baba3d7c54850af94ec03301e14b33f172aaaa8b9257/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:22:54 np0005558241 podman[283766]: 2025-12-13 08:22:54.821979543 +0000 UTC m=+0.162520434 container init 73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.824 248514 DEBUG nova.storage.rbd_utils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.827 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:54 np0005558241 podman[283766]: 2025-12-13 08:22:54.828595876 +0000 UTC m=+0.169136747 container start 73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Dec 13 03:22:54 np0005558241 neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949[283799]: [NOTICE]   (283839) : New worker (283842) forked
Dec 13 03:22:54 np0005558241 neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949[283799]: [NOTICE]   (283839) : Loading success.
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.859 248514 DEBUG nova.policy [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '79d4b34b8bd3452cb5b8c0954166f397', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '06fbab937d6444558229b2351632e711', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.905 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.906 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.907 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.908 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.921 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ee08056b-cf18-46d7-9fea-542c2ec040ab in datapath 77039756-6444-4fde-8e43-817fb80a54e0 unbound from our chassis#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.923 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77039756-6444-4fde-8e43-817fb80a54e0#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.938 248514 DEBUG nova.storage.rbd_utils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.939 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[303f9ece-b4da-4443-b5a5-010bef4e1864]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.940 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77039756-61 in ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.943 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77039756-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.943 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cd8bd19c-fa65-46bc-a0c8-dd79a8b67d64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:54 np0005558241 nova_compute[248510]: 2025-12-13 08:22:54.943 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.944 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3608d5a1-2555-4554-933c-e249e0f43812]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.957 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b8772324-eb47-4ff9-b111-441c8a539b15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:54.977 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c6524c45-5962-4472-a494-8f9d8919c184]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.024 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1fee4fea-e86c-49e6-b61e-e8c297a18312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:55 np0005558241 NetworkManager[50376]: <info>  [1765614175.0322] manager: (tap77039756-60): new Veth device (/org/freedesktop/NetworkManager/Devices/108)
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.033 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9a322a40-016f-4d22-b8dc-2d6a544ff21d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1732: 321 pgs: 321 active+clean; 90 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 475 KiB/s wr, 18 op/s
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.065 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.067 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614175.0670154, 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.067 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] VM Started (Lifecycle Event)#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.073 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.080 248514 INFO nova.virt.libvirt.driver [-] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Instance spawned successfully.#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.080 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.080 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6e11c3e5-1d97-406b-ac3d-b57056490cbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.085 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[83614b17-b71f-4e8e-9d76-3d57e43f317d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:55 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:22:55 np0005558241 NetworkManager[50376]: <info>  [1765614175.1153] device (tap77039756-60): carrier: link connected
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.135 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b9433a08-ed78-4d4b-b3cd-8550138e44a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.163 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1f5540a6-949e-4eec-af92-5a256b5bee77]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77039756-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:96:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675233, 'reachable_time': 31379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283943, 'error': None, 'target': 'ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.182 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5de2cef9-5698-4af9-88bc-59e4ecbb8ae1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:964f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675233, 'tstamp': 675233}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283944, 'error': None, 'target': 'ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.202 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[09bd067d-43fe-4164-9170-341fb6a4cf92]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77039756-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:96:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675233, 'reachable_time': 31379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283945, 'error': None, 'target': 'ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.244 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4d943d-0756-442c-af5d-a634694721bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.287 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.316 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cff934e5-1c2e-410d-8f83-30d7dba7fa8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.318 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77039756-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.318 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.319 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77039756-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:55 np0005558241 NetworkManager[50376]: <info>  [1765614175.3214] manager: (tap77039756-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Dec 13 03:22:55 np0005558241 kernel: tap77039756-60: entered promiscuous mode
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.323 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77039756-60, col_values=(('external_ids', {'iface-id': '1b2300c4-8e8d-459a-85ca-05cdd90f9306'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:55Z|00236|binding|INFO|Releasing lport 1b2300c4-8e8d-459a-85ca-05cdd90f9306 from this chassis (sb_readonly=0)
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.325 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77039756-6444-4fde-8e43-817fb80a54e0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77039756-6444-4fde-8e43-817fb80a54e0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.332 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0775640e-a921-48a6-a27d-d4617a8fbfc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.333 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-77039756-6444-4fde-8e43-817fb80a54e0
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/77039756-6444-4fde-8e43-817fb80a54e0.pid.haproxy
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 77039756-6444-4fde-8e43-817fb80a54e0
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.334 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0', 'env', 'PROCESS_TAG=haproxy-77039756-6444-4fde-8e43-817fb80a54e0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77039756-6444-4fde-8e43-817fb80a54e0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.352 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.363 248514 DEBUG nova.storage.rbd_utils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] resizing rbd image dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.401 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.404 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.404 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:55.405 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.409 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.410 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.410 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.411 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.411 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.411 248514 DEBUG nova.virt.libvirt.driver [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.418 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.455 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.455 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614175.0730948, 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.455 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.460 248514 DEBUG nova.objects.instance [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lazy-loading 'migration_context' on Instance uuid dc64fea4-e9a8-47e7-8a3a-d01897fc81de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.498 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.501 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.501 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Ensure instance console log exists: /var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.501 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.502 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.502 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.504 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614175.0736177, 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.504 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.533 248514 INFO nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Took 12.78 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.534 248514 DEBUG nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.534 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.542 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.587 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.623 248514 INFO nova.compute.manager [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Took 14.00 seconds to build instance.#033[00m
Dec 13 03:22:55 np0005558241 nova_compute[248510]: 2025-12-13 08:22:55.646 248514 DEBUG oslo_concurrency.lockutils [None req-40728d64-16df-4408-b244-c3c596ec6ab2 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:55 np0005558241 podman[284050]: 2025-12-13 08:22:55.734430137 +0000 UTC m=+0.057527697 container create d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:22:55 np0005558241 systemd[1]: Started libpod-conmon-d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7.scope.
Dec 13 03:22:55 np0005558241 podman[284050]: 2025-12-13 08:22:55.702909561 +0000 UTC m=+0.026007141 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:22:55 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:22:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d34cd0ef81b6a8c0e2cc74104c29ceeace88d06c0627064001b1c7c88fec1e1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:22:55 np0005558241 podman[284050]: 2025-12-13 08:22:55.824608208 +0000 UTC m=+0.147705798 container init d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 03:22:55 np0005558241 podman[284050]: 2025-12-13 08:22:55.831026146 +0000 UTC m=+0.154123706 container start d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 03:22:55 np0005558241 neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0[284065]: [NOTICE]   (284069) : New worker (284071) forked
Dec 13 03:22:55 np0005558241 neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0[284065]: [NOTICE]   (284069) : Loading success.
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.019 248514 DEBUG nova.network.neutron [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Successfully updated port: bc4158d8-4963-4009-a434-0a0106941c9d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.126 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.126 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.127 248514 DEBUG nova.network.neutron [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.293 248514 DEBUG nova.compute.manager [req-2f0b3003-36a0-41fb-a23b-6e31fef1de11 req-ee25b5b2-f15a-45d3-9ed0-c9e1c03e1dbc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-plugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.294 248514 DEBUG oslo_concurrency.lockutils [req-2f0b3003-36a0-41fb-a23b-6e31fef1de11 req-ee25b5b2-f15a-45d3-9ed0-c9e1c03e1dbc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.294 248514 DEBUG oslo_concurrency.lockutils [req-2f0b3003-36a0-41fb-a23b-6e31fef1de11 req-ee25b5b2-f15a-45d3-9ed0-c9e1c03e1dbc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.294 248514 DEBUG oslo_concurrency.lockutils [req-2f0b3003-36a0-41fb-a23b-6e31fef1de11 req-ee25b5b2-f15a-45d3-9ed0-c9e1c03e1dbc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.295 248514 DEBUG nova.compute.manager [req-2f0b3003-36a0-41fb-a23b-6e31fef1de11 req-ee25b5b2-f15a-45d3-9ed0-c9e1c03e1dbc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] No waiting events found dispatching network-vif-plugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.295 248514 WARNING nova.compute.manager [req-2f0b3003-36a0-41fb-a23b-6e31fef1de11 req-ee25b5b2-f15a-45d3-9ed0-c9e1c03e1dbc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received unexpected event network-vif-plugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d for instance with vm_state active and task_state None.#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.328 248514 DEBUG nova.network.neutron [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Successfully created port: 627622b8-ef54-4181-bd8d-e8e82650b143 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.333 248514 DEBUG nova.network.neutron [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.552 248514 DEBUG nova.compute.manager [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-plugged-ee08056b-cf18-46d7-9fea-542c2ec040ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.552 248514 DEBUG oslo_concurrency.lockutils [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.553 248514 DEBUG oslo_concurrency.lockutils [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.553 248514 DEBUG oslo_concurrency.lockutils [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.553 248514 DEBUG nova.compute.manager [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] No waiting events found dispatching network-vif-plugged-ee08056b-cf18-46d7-9fea-542c2ec040ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.554 248514 WARNING nova.compute.manager [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received unexpected event network-vif-plugged-ee08056b-cf18-46d7-9fea-542c2ec040ab for instance with vm_state active and task_state None.#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.554 248514 DEBUG nova.compute.manager [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.554 248514 DEBUG nova.compute.manager [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing instance network info cache due to event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:22:56 np0005558241 nova_compute[248510]: 2025-12-13 08:22:56.555 248514 DEBUG oslo_concurrency.lockutils [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:22:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1733: 321 pgs: 321 active+clean; 90 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 475 KiB/s wr, 18 op/s
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.649 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.899 248514 DEBUG nova.network.neutron [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.950 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.951 248514 DEBUG nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Instance network_info: |[{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.953 248514 DEBUG nova.network.neutron [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Successfully updated port: 627622b8-ef54-4181-bd8d-e8e82650b143 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.954 248514 DEBUG oslo_concurrency.lockutils [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.955 248514 DEBUG nova.network.neutron [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.958 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Start _get_guest_xml network_info=[{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.964 248514 WARNING nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.972 248514 DEBUG nova.virt.libvirt.host [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.973 248514 DEBUG nova.virt.libvirt.host [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.976 248514 DEBUG nova.virt.libvirt.host [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.977 248514 DEBUG nova.virt.libvirt.host [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.977 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.977 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.978 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.978 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.978 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.979 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.979 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.979 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.979 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.980 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.980 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.980 248514 DEBUG nova.virt.hardware [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:22:57 np0005558241 nova_compute[248510]: 2025-12-13 08:22:57.983 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.022 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.023 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquired lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.023 248514 DEBUG nova.network.neutron [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.332 248514 DEBUG nova.network.neutron [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:22:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:22:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2012624721' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.590 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.616 248514 DEBUG nova.storage.rbd_utils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.621 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.794 248514 DEBUG nova.compute.manager [req-41b4cde3-76f7-4446-a080-c88449f8a2a0 req-334eab2c-7448-4e00-a734-8ab47354faaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.795 248514 DEBUG nova.compute.manager [req-41b4cde3-76f7-4446-a080-c88449f8a2a0 req-334eab2c-7448-4e00-a734-8ab47354faaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing instance network info cache due to event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.795 248514 DEBUG oslo_concurrency.lockutils [req-41b4cde3-76f7-4446-a080-c88449f8a2a0 req-334eab2c-7448-4e00-a734-8ab47354faaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.830 248514 DEBUG oslo_concurrency.lockutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.830 248514 DEBUG oslo_concurrency.lockutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.831 248514 DEBUG oslo_concurrency.lockutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.831 248514 DEBUG oslo_concurrency.lockutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.831 248514 DEBUG oslo_concurrency.lockutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.833 248514 INFO nova.compute.manager [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Terminating instance#033[00m
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.834 248514 DEBUG nova.compute.manager [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:22:58 np0005558241 kernel: tap4dd4aba8-8c (unregistering): left promiscuous mode
Dec 13 03:22:58 np0005558241 NetworkManager[50376]: <info>  [1765614178.8727] device (tap4dd4aba8-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:22:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:58Z|00237|binding|INFO|Releasing lport 4dd4aba8-8ce4-4b2e-92b4-879959570e8d from this chassis (sb_readonly=0)
Dec 13 03:22:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:58Z|00238|binding|INFO|Setting lport 4dd4aba8-8ce4-4b2e-92b4-879959570e8d down in Southbound
Dec 13 03:22:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:58Z|00239|binding|INFO|Removing iface tap4dd4aba8-8c ovn-installed in OVS
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.887 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.891 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:58.896 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:be:c0 10.100.0.93'], port_security=['fa:16:3e:57:be:c0 10.100.0.93'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.93/24', 'neutron:device_id': '7299d5b2-6ea4-47fc-b16b-fcb6dd741e38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9222573e-9ff1-438c-aa91-fa531c4ff949', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7b24070-0889-4199-b066-7f798c438fcf, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4dd4aba8-8ce4-4b2e-92b4-879959570e8d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:22:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:58.897 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4dd4aba8-8ce4-4b2e-92b4-879959570e8d in datapath 9222573e-9ff1-438c-aa91-fa531c4ff949 unbound from our chassis#033[00m
Dec 13 03:22:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:58.898 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9222573e-9ff1-438c-aa91-fa531c4ff949, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:22:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:58.900 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a933d7cf-73b1-4662-a84d-7c093053a531]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:58.900 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949 namespace which is not needed anymore#033[00m
Dec 13 03:22:58 np0005558241 kernel: tapee08056b-cf (unregistering): left promiscuous mode
Dec 13 03:22:58 np0005558241 NetworkManager[50376]: <info>  [1765614178.9096] device (tapee08056b-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.915 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:58Z|00240|binding|INFO|Releasing lport ee08056b-cf18-46d7-9fea-542c2ec040ab from this chassis (sb_readonly=0)
Dec 13 03:22:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:58Z|00241|binding|INFO|Setting lport ee08056b-cf18-46d7-9fea-542c2ec040ab down in Southbound
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:22:58Z|00242|binding|INFO|Removing iface tapee08056b-cf ovn-installed in OVS
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.920 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:58.926 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:98:f1 10.100.1.173'], port_security=['fa:16:3e:ca:98:f1 10.100.1.173'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.173/24', 'neutron:device_id': '7299d5b2-6ea4-47fc-b16b-fcb6dd741e38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77039756-6444-4fde-8e43-817fb80a54e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2495263e4f944deda2647b578d06bb21', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1bb53efe-37ab-41ee-843d-8496ad1e2807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ef8942f-4ab0-4f06-afc6-d773195832b4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ee08056b-cf18-46d7-9fea-542c2ec040ab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:22:58 np0005558241 nova_compute[248510]: 2025-12-13 08:22:58.944 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:58 np0005558241 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Dec 13 03:22:58 np0005558241 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001d.scope: Consumed 4.842s CPU time.
Dec 13 03:22:58 np0005558241 systemd-machined[210538]: Machine qemu-34-instance-0000001d terminated.
Dec 13 03:22:59 np0005558241 neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949[283799]: [NOTICE]   (283839) : haproxy version is 2.8.14-c23fe91
Dec 13 03:22:59 np0005558241 neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949[283799]: [NOTICE]   (283839) : path to executable is /usr/sbin/haproxy
Dec 13 03:22:59 np0005558241 neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949[283799]: [WARNING]  (283839) : Exiting Master process...
Dec 13 03:22:59 np0005558241 neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949[283799]: [ALERT]    (283839) : Current worker (283842) exited with code 143 (Terminated)
Dec 13 03:22:59 np0005558241 neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949[283799]: [WARNING]  (283839) : All workers exited. Exiting... (0)
Dec 13 03:22:59 np0005558241 systemd[1]: libpod-73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99.scope: Deactivated successfully.
Dec 13 03:22:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1734: 321 pgs: 321 active+clean; 180 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 153 op/s
Dec 13 03:22:59 np0005558241 podman[284169]: 2025-12-13 08:22:59.055639632 +0000 UTC m=+0.047581883 container died 73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.061 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:59 np0005558241 NetworkManager[50376]: <info>  [1765614179.0689] manager: (tapee08056b-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/110)
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.086 248514 INFO nova.virt.libvirt.driver [-] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Instance destroyed successfully.#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.087 248514 DEBUG nova.objects.instance [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lazy-loading 'resources' on Instance uuid 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:22:59 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99-userdata-shm.mount: Deactivated successfully.
Dec 13 03:22:59 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ea3c1eb258931fd7cf42baba3d7c54850af94ec03301e14b33f172aaaa8b9257-merged.mount: Deactivated successfully.
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.105 248514 DEBUG nova.virt.libvirt.vif [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:22:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1320257136',display_name='tempest-ServersTestMultiNic-server-1320257136',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1320257136',id=29,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:22:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-dy8jmgm1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:22:55Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=7299d5b2-6ea4-47fc-b16b-fcb6dd741e38,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.106 248514 DEBUG nova.network.os_vif_util [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "address": "fa:16:3e:57:be:c0", "network": {"id": "9222573e-9ff1-438c-aa91-fa531c4ff949", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1079847857", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.93", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dd4aba8-8c", "ovs_interfaceid": "4dd4aba8-8ce4-4b2e-92b4-879959570e8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.107 248514 DEBUG nova.network.os_vif_util [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:be:c0,bridge_name='br-int',has_traffic_filtering=True,id=4dd4aba8-8ce4-4b2e-92b4-879959570e8d,network=Network(9222573e-9ff1-438c-aa91-fa531c4ff949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dd4aba8-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.108 248514 DEBUG os_vif [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:be:c0,bridge_name='br-int',has_traffic_filtering=True,id=4dd4aba8-8ce4-4b2e-92b4-879959570e8d,network=Network(9222573e-9ff1-438c-aa91-fa531c4ff949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dd4aba8-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.111 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.111 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4dd4aba8-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.115 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.119 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.125 248514 INFO os_vif [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:be:c0,bridge_name='br-int',has_traffic_filtering=True,id=4dd4aba8-8ce4-4b2e-92b4-879959570e8d,network=Network(9222573e-9ff1-438c-aa91-fa531c4ff949),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dd4aba8-8c')#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.126 248514 DEBUG nova.virt.libvirt.vif [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:22:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1320257136',display_name='tempest-ServersTestMultiNic-server-1320257136',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1320257136',id=29,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:22:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2495263e4f944deda2647b578d06bb21',ramdisk_id='',reservation_id='r-dy8jmgm1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1741413593',owner_user_name='tempest-ServersTestMultiNic-1741413593-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:22:55Z,user_data=None,user_id='e8aa7edd2151436caa0fd25f361298fd',uuid=7299d5b2-6ea4-47fc-b16b-fcb6dd741e38,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.127 248514 DEBUG nova.network.os_vif_util [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converting VIF {"id": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "address": "fa:16:3e:ca:98:f1", "network": {"id": "77039756-6444-4fde-8e43-817fb80a54e0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2090328552", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2495263e4f944deda2647b578d06bb21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee08056b-cf", "ovs_interfaceid": "ee08056b-cf18-46d7-9fea-542c2ec040ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.128 248514 DEBUG nova.network.os_vif_util [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:98:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee08056b-cf18-46d7-9fea-542c2ec040ab,network=Network(77039756-6444-4fde-8e43-817fb80a54e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee08056b-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.128 248514 DEBUG os_vif [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:98:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee08056b-cf18-46d7-9fea-542c2ec040ab,network=Network(77039756-6444-4fde-8e43-817fb80a54e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee08056b-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:22:59 np0005558241 podman[284169]: 2025-12-13 08:22:59.128997859 +0000 UTC m=+0.120940110 container cleanup 73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.129 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.130 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee08056b-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.133 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.136 248514 INFO os_vif [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:98:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee08056b-cf18-46d7-9fea-542c2ec040ab,network=Network(77039756-6444-4fde-8e43-817fb80a54e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee08056b-cf')#033[00m
Dec 13 03:22:59 np0005558241 systemd[1]: libpod-conmon-73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99.scope: Deactivated successfully.
Dec 13 03:22:59 np0005558241 podman[284217]: 2025-12-13 08:22:59.212902525 +0000 UTC m=+0.056474762 container remove 73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 03:22:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:22:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1434445151' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.221 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef782ef4-215d-4a08-9775-ae189bd01e73]: (4, ('Sat Dec 13 08:22:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949 (73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99)\n73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99\nSat Dec 13 08:22:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949 (73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99)\n73c5d9783bf53abf5d1e5bd4edda2a8d53aa761c2d5cd233e5eec5a3d2645d99\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.223 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf76aa8-ad71-44c1-b084-6bd31fef38e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.226 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9222573e-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.273 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:59 np0005558241 kernel: tap9222573e-90: left promiscuous mode
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.289 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.291 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.292 248514 DEBUG nova.virt.libvirt.vif [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:22:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2036693049',display_name='tempest-tempest.common.compute-instance-2036693049',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2036693049',id=30,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-nhq68swv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=3b43a9c7-85e7-4558-bd2f-e4712882021e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.293 248514 DEBUG nova.network.os_vif_util [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.292 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[df1592ed-9fbe-426e-b315-c267641cd28b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.293 248514 DEBUG nova.network.os_vif_util [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:48:0d,bridge_name='br-int',has_traffic_filtering=True,id=bc4158d8-4963-4009-a434-0a0106941c9d,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4158d8-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.295 248514 DEBUG nova.objects.instance [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'pci_devices' on Instance uuid 3b43a9c7-85e7-4558-bd2f-e4712882021e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.308 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe808b6-6aa7-4a5d-b315-ffe6259fd7e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.310 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b2be95d6-d215-482d-99f0-89be56774d39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.312 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:22:59 np0005558241 nova_compute[248510]:  <uuid>3b43a9c7-85e7-4558-bd2f-e4712882021e</uuid>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:  <name>instance-0000001e</name>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <nova:name>tempest-tempest.common.compute-instance-2036693049</nova:name>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:22:57</nova:creationTime>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:22:59 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:        <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:        <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:        <nova:port uuid="bc4158d8-4963-4009-a434-0a0106941c9d">
Dec 13 03:22:59 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <entry name="serial">3b43a9c7-85e7-4558-bd2f-e4712882021e</entry>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <entry name="uuid">3b43a9c7-85e7-4558-bd2f-e4712882021e</entry>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/3b43a9c7-85e7-4558-bd2f-e4712882021e_disk">
Dec 13 03:22:59 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:22:59 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/3b43a9c7-85e7-4558-bd2f-e4712882021e_disk.config">
Dec 13 03:22:59 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:22:59 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:37:48:0d"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <target dev="tapbc4158d8-49"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/console.log" append="off"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:22:59 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:22:59 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:22:59 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:22:59 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.313 248514 DEBUG nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Preparing to wait for external event network-vif-plugged-bc4158d8-4963-4009-a434-0a0106941c9d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.313 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.313 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.314 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.314 248514 DEBUG nova.virt.libvirt.vif [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:22:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2036693049',display_name='tempest-tempest.common.compute-instance-2036693049',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2036693049',id=30,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-nhq68swv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=3b43a9c7-85e7-4558-bd2f-e4712882021e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.315 248514 DEBUG nova.network.os_vif_util [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.316 248514 DEBUG nova.network.os_vif_util [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:48:0d,bridge_name='br-int',has_traffic_filtering=True,id=bc4158d8-4963-4009-a434-0a0106941c9d,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4158d8-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.316 248514 DEBUG os_vif [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:48:0d,bridge_name='br-int',has_traffic_filtering=True,id=bc4158d8-4963-4009-a434-0a0106941c9d,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4158d8-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.317 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.317 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.318 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.321 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.322 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc4158d8-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.322 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbc4158d8-49, col_values=(('external_ids', {'iface-id': 'bc4158d8-4963-4009-a434-0a0106941c9d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:48:0d', 'vm-uuid': '3b43a9c7-85e7-4558-bd2f-e4712882021e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.324 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:59 np0005558241 NetworkManager[50376]: <info>  [1765614179.3253] manager: (tapbc4158d8-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.326 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.331 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.332 248514 INFO os_vif [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:48:0d,bridge_name='br-int',has_traffic_filtering=True,id=bc4158d8-4963-4009-a434-0a0106941c9d,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4158d8-49')#033[00m
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.332 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc42fa5-16d9-461e-907f-13e2eb044a44]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675123, 'reachable_time': 15938, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284250, 'error': None, 'target': 'ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:59 np0005558241 systemd[1]: run-netns-ovnmeta\x2d9222573e\x2d9ff1\x2d438c\x2daa91\x2dfa531c4ff949.mount: Deactivated successfully.
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.336 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9222573e-9ff1-438c-aa91-fa531c4ff949 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.336 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[4684caba-5002-43c1-8cdc-453b13abdaec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.338 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ee08056b-cf18-46d7-9fea-542c2ec040ab in datapath 77039756-6444-4fde-8e43-817fb80a54e0 unbound from our chassis#033[00m
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.339 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77039756-6444-4fde-8e43-817fb80a54e0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.341 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[897dfcba-7182-444e-9b9a-6669516a5123]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.341 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0 namespace which is not needed anymore#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.421 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.421 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.422 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:37:48:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.422 248514 INFO nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Using config drive#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.447 248514 DEBUG nova.storage.rbd_utils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.487 248514 INFO nova.virt.libvirt.driver [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Deleting instance files /var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_del#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.488 248514 INFO nova.virt.libvirt.driver [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Deletion of /var/lib/nova/instances/7299d5b2-6ea4-47fc-b16b-fcb6dd741e38_del complete#033[00m
Dec 13 03:22:59 np0005558241 neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0[284065]: [NOTICE]   (284069) : haproxy version is 2.8.14-c23fe91
Dec 13 03:22:59 np0005558241 neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0[284065]: [NOTICE]   (284069) : path to executable is /usr/sbin/haproxy
Dec 13 03:22:59 np0005558241 neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0[284065]: [WARNING]  (284069) : Exiting Master process...
Dec 13 03:22:59 np0005558241 neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0[284065]: [WARNING]  (284069) : Exiting Master process...
Dec 13 03:22:59 np0005558241 neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0[284065]: [ALERT]    (284069) : Current worker (284071) exited with code 143 (Terminated)
Dec 13 03:22:59 np0005558241 neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0[284065]: [WARNING]  (284069) : All workers exited. Exiting... (0)
Dec 13 03:22:59 np0005558241 systemd[1]: libpod-d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7.scope: Deactivated successfully.
Dec 13 03:22:59 np0005558241 podman[284270]: 2025-12-13 08:22:59.502584 +0000 UTC m=+0.053813027 container died d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.509 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:59 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7-userdata-shm.mount: Deactivated successfully.
Dec 13 03:22:59 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4d34cd0ef81b6a8c0e2cc74104c29ceeace88d06c0627064001b1c7c88fec1e1-merged.mount: Deactivated successfully.
Dec 13 03:22:59 np0005558241 podman[284270]: 2025-12-13 08:22:59.549146596 +0000 UTC m=+0.100375623 container cleanup d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:22:59 np0005558241 systemd[1]: libpod-conmon-d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7.scope: Deactivated successfully.
Dec 13 03:22:59 np0005558241 podman[284317]: 2025-12-13 08:22:59.615238974 +0000 UTC m=+0.044616660 container remove d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.621 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9b339ec4-624d-4e82-b9f5-dee20628ce6f]: (4, ('Sat Dec 13 08:22:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0 (d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7)\nd408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7\nSat Dec 13 08:22:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0 (d408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7)\nd408e1b56c4a0630a74baebf68a814756e4dfb22c57cff8eca1743f5d2db58e7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.623 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fdbb82ec-a85a-4898-b3ca-7a60ad8527fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.624 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77039756-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:22:59 np0005558241 kernel: tap77039756-60: left promiscuous mode
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.630 248514 DEBUG nova.compute.manager [req-aab2c34c-de6e-4db9-b861-aaa04216d003 req-cd254797-04fe-4089-a168-26e5655ff96f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-unplugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.630 248514 DEBUG oslo_concurrency.lockutils [req-aab2c34c-de6e-4db9-b861-aaa04216d003 req-cd254797-04fe-4089-a168-26e5655ff96f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.631 248514 DEBUG oslo_concurrency.lockutils [req-aab2c34c-de6e-4db9-b861-aaa04216d003 req-cd254797-04fe-4089-a168-26e5655ff96f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.631 248514 DEBUG oslo_concurrency.lockutils [req-aab2c34c-de6e-4db9-b861-aaa04216d003 req-cd254797-04fe-4089-a168-26e5655ff96f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.631 248514 DEBUG nova.compute.manager [req-aab2c34c-de6e-4db9-b861-aaa04216d003 req-cd254797-04fe-4089-a168-26e5655ff96f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] No waiting events found dispatching network-vif-unplugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.631 248514 DEBUG nova.compute.manager [req-aab2c34c-de6e-4db9-b861-aaa04216d003 req-cd254797-04fe-4089-a168-26e5655ff96f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-unplugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.631 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.645 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.649 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eec6cfb8-7313-459e-8da3-9aeec1af3e35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.662 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[76445a6a-bda5-410b-aaa4-2be59d9d0309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.664 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ced348be-1f03-4109-9846-cd50c7b60818]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.686 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5c897ece-dfc6-4be2-bd1d-a3f58b72ee5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675223, 'reachable_time': 19379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284335, 'error': None, 'target': 'ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.690 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77039756-6444-4fde-8e43-817fb80a54e0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:22:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:22:59.690 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[bec26c5a-d6ee-44b1-8fc8-b58484fad400]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.694 248514 INFO nova.compute.manager [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.694 248514 DEBUG oslo.service.loopingcall [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.695 248514 DEBUG nova.compute.manager [-] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.695 248514 DEBUG nova.network.neutron [-] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.878 248514 DEBUG nova.network.neutron [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updating instance_info_cache with network_info: [{"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.900 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Releasing lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.901 248514 DEBUG nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Instance network_info: |[{"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.901 248514 DEBUG oslo_concurrency.lockutils [req-41b4cde3-76f7-4446-a080-c88449f8a2a0 req-334eab2c-7448-4e00-a734-8ab47354faaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.901 248514 DEBUG nova.network.neutron [req-41b4cde3-76f7-4446-a080-c88449f8a2a0 req-334eab2c-7448-4e00-a734-8ab47354faaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.904 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Start _get_guest_xml network_info=[{"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.912 248514 WARNING nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.919 248514 DEBUG nova.virt.libvirt.host [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.919 248514 DEBUG nova.virt.libvirt.host [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.927 248514 DEBUG nova.virt.libvirt.host [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.927 248514 DEBUG nova.virt.libvirt.host [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.928 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.928 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.928 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.929 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.929 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.929 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.929 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.929 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.930 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.930 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.930 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.930 248514 DEBUG nova.virt.hardware [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:22:59 np0005558241 nova_compute[248510]: 2025-12-13 08:22:59.933 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:23:00 np0005558241 systemd[1]: run-netns-ovnmeta\x2d77039756\x2d6444\x2d4fde\x2d8e43\x2d817fb80a54e0.mount: Deactivated successfully.
Dec 13 03:23:00 np0005558241 nova_compute[248510]: 2025-12-13 08:23:00.150 248514 INFO nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Creating config drive at /var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/disk.config#033[00m
Dec 13 03:23:00 np0005558241 nova_compute[248510]: 2025-12-13 08:23:00.157 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwyuyeklz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:00 np0005558241 nova_compute[248510]: 2025-12-13 08:23:00.217 248514 DEBUG nova.network.neutron [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updated VIF entry in instance network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:23:00 np0005558241 nova_compute[248510]: 2025-12-13 08:23:00.219 248514 DEBUG nova.network.neutron [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:23:00 np0005558241 nova_compute[248510]: 2025-12-13 08:23:00.242 248514 DEBUG oslo_concurrency.lockutils [req-190585c4-970d-433e-8a10-0c4eaa57c6b5 req-1ea98b0c-0e0e-43d4-9bb5-2e80606938c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:23:00 np0005558241 nova_compute[248510]: 2025-12-13 08:23:00.301 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwyuyeklz" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:00 np0005558241 nova_compute[248510]: 2025-12-13 08:23:00.327 248514 DEBUG nova.storage.rbd_utils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:00 np0005558241 nova_compute[248510]: 2025-12-13 08:23:00.331 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/disk.config 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:00 np0005558241 nova_compute[248510]: 2025-12-13 08:23:00.483 248514 DEBUG oslo_concurrency.processutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/disk.config 3b43a9c7-85e7-4558-bd2f-e4712882021e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:00 np0005558241 nova_compute[248510]: 2025-12-13 08:23:00.484 248514 INFO nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Deleting local config drive /var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/disk.config because it was imported into RBD.#033[00m
Dec 13 03:23:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:23:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1657982056' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:23:00 np0005558241 nova_compute[248510]: 2025-12-13 08:23:00.520 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:00 np0005558241 kernel: tapbc4158d8-49: entered promiscuous mode
Dec 13 03:23:00 np0005558241 nova_compute[248510]: 2025-12-13 08:23:00.547 248514 DEBUG nova.storage.rbd_utils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:00 np0005558241 NetworkManager[50376]: <info>  [1765614180.5502] manager: (tapbc4158d8-49): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Dec 13 03:23:00 np0005558241 systemd-udevd[284143]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:23:00 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:00Z|00243|binding|INFO|Claiming lport bc4158d8-4963-4009-a434-0a0106941c9d for this chassis.
Dec 13 03:23:00 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:00Z|00244|binding|INFO|bc4158d8-4963-4009-a434-0a0106941c9d: Claiming fa:16:3e:37:48:0d 10.100.0.6
Dec 13 03:23:00 np0005558241 nova_compute[248510]: 2025-12-13 08:23:00.558 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.558 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:48:0d 10.100.0.6'], port_security=['fa:16:3e:37:48:0d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3b43a9c7-85e7-4558-bd2f-e4712882021e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd758160-971f-4f0e-a4ca-a13304d3c491', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=bc4158d8-4963-4009-a434-0a0106941c9d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.559 158419 INFO neutron.agent.ovn.metadata.agent [-] Port bc4158d8-4963-4009-a434-0a0106941c9d in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 bound to our chassis#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.561 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92#033[00m
Dec 13 03:23:00 np0005558241 NetworkManager[50376]: <info>  [1765614180.5643] device (tapbc4158d8-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:23:00 np0005558241 NetworkManager[50376]: <info>  [1765614180.5649] device (tapbc4158d8-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:23:00 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:00Z|00245|binding|INFO|Setting lport bc4158d8-4963-4009-a434-0a0106941c9d ovn-installed in OVS
Dec 13 03:23:00 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:00Z|00246|binding|INFO|Setting lport bc4158d8-4963-4009-a434-0a0106941c9d up in Southbound
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.578 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[39e290f9-ee72-4739-b13b-ccfc74f461d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.579 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1ca92864-31 in ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.581 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1ca92864-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.581 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4ef5a0-57bb-402f-aa65-6cc7c36f737e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.582 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd0dbf5-e0d5-4ed6-8de0-48c3990f6128]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:00 np0005558241 systemd-machined[210538]: New machine qemu-35-instance-0000001e.
Dec 13 03:23:00 np0005558241 nova_compute[248510]: 2025-12-13 08:23:00.594 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.596 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[42e3d90c-b223-40ac-b2db-152402c7b9a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:00 np0005558241 systemd[1]: Started Virtual Machine qemu-35-instance-0000001e.
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.612 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5d136a47-bd2f-41e1-a4d8-504c01813d26]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.673 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[db316b40-6884-4524-94ff-9c3bfde056c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.679 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[85d31762-c319-4e5d-9ec7-14489c8787ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:00 np0005558241 NetworkManager[50376]: <info>  [1765614180.6799] manager: (tap1ca92864-30): new Veth device (/org/freedesktop/NetworkManager/Devices/113)
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.729 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6f0fc4d8-1e66-49c2-94b8-ca709d17ba3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.733 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[26d6a6a2-e09d-4a2f-888c-5cae840d3ec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:00 np0005558241 NetworkManager[50376]: <info>  [1765614180.7627] device (tap1ca92864-30): carrier: link connected
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.772 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6836d4e4-0aa8-427f-92cc-875c7da376ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.795 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[453731f2-49cc-4f29-b22e-302a2af730f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675797, 'reachable_time': 19285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284479, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.815 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac13849-be19-494b-8ce4-2490f743b118]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:e2af'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675797, 'tstamp': 675797}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284480, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.839 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e77927ca-97e4-4af7-81b9-78028e373e82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675797, 'reachable_time': 19285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284481, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.875 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4972be9f-ec83-4704-b4ac-94fc8da899b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.932 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e06c2d07-b1d5-4a5e-8ce9-1e75d7042a05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.933 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.934 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.934 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:00 np0005558241 nova_compute[248510]: 2025-12-13 08:23:00.936 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:00 np0005558241 kernel: tap1ca92864-30: entered promiscuous mode
Dec 13 03:23:00 np0005558241 NetworkManager[50376]: <info>  [1765614180.9371] manager: (tap1ca92864-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.940 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:00 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:00Z|00247|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 03:23:00 np0005558241 nova_compute[248510]: 2025-12-13 08:23:00.960 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.961 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1ca92864-3b70-4794-9db1-fa08128cef92.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1ca92864-3b70-4794-9db1-fa08128cef92.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.962 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f68dfcb4-2050-4cdf-909c-b85a781e0096]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.963 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/1ca92864-3b70-4794-9db1-fa08128cef92.pid.haproxy
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 1ca92864-3b70-4794-9db1-fa08128cef92
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:23:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:00.965 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'env', 'PROCESS_TAG=haproxy-1ca92864-3b70-4794-9db1-fa08128cef92', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1ca92864-3b70-4794-9db1-fa08128cef92.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:23:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1735: 321 pgs: 321 active+clean; 180 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 138 op/s
Dec 13 03:23:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:23:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1846145538' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.141 248514 DEBUG nova.compute.manager [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-unplugged-ee08056b-cf18-46d7-9fea-542c2ec040ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.142 248514 DEBUG oslo_concurrency.lockutils [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.142 248514 DEBUG oslo_concurrency.lockutils [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.143 248514 DEBUG oslo_concurrency.lockutils [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.143 248514 DEBUG nova.compute.manager [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] No waiting events found dispatching network-vif-unplugged-ee08056b-cf18-46d7-9fea-542c2ec040ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.143 248514 DEBUG nova.compute.manager [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-unplugged-ee08056b-cf18-46d7-9fea-542c2ec040ab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.144 248514 DEBUG nova.compute.manager [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-plugged-ee08056b-cf18-46d7-9fea-542c2ec040ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.144 248514 DEBUG oslo_concurrency.lockutils [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.144 248514 DEBUG oslo_concurrency.lockutils [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.145 248514 DEBUG oslo_concurrency.lockutils [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.145 248514 DEBUG nova.compute.manager [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] No waiting events found dispatching network-vif-plugged-ee08056b-cf18-46d7-9fea-542c2ec040ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.145 248514 WARNING nova.compute.manager [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received unexpected event network-vif-plugged-ee08056b-cf18-46d7-9fea-542c2ec040ab for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.145 248514 DEBUG nova.compute.manager [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-plugged-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.146 248514 DEBUG oslo_concurrency.lockutils [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.146 248514 DEBUG oslo_concurrency.lockutils [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.146 248514 DEBUG oslo_concurrency.lockutils [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.147 248514 DEBUG nova.compute.manager [req-bd90024b-bceb-49c1-abde-3bcfc739a596 req-2d9764bd-24ab-4707-a4e1-69e67389f0f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Processing event network-vif-plugged-bc4158d8-4963-4009-a434-0a0106941c9d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.160 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.161 248514 DEBUG nova.virt.libvirt.vif [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:22:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-153788010',display_name='tempest-FloatingIPsAssociationTestJSON-server-153788010',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-153788010',id=31,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06fbab937d6444558229b2351632e711',ramdisk_id='',reservation_id='r-m786w5ky',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-609563086',owner_user_n
ame='tempest-FloatingIPsAssociationTestJSON-609563086-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:22:54Z,user_data=None,user_id='79d4b34b8bd3452cb5b8c0954166f397',uuid=dc64fea4-e9a8-47e7-8a3a-d01897fc81de,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.162 248514 DEBUG nova.network.os_vif_util [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converting VIF {"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.163 248514 DEBUG nova.network.os_vif_util [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:b2:83,bridge_name='br-int',has_traffic_filtering=True,id=627622b8-ef54-4181-bd8d-e8e82650b143,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap627622b8-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.164 248514 DEBUG nova.objects.instance [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lazy-loading 'pci_devices' on Instance uuid dc64fea4-e9a8-47e7-8a3a-d01897fc81de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.194 248514 DEBUG nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.195 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614181.1951368, 3b43a9c7-85e7-4558-bd2f-e4712882021e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.196 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] VM Started (Lifecycle Event)#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.202 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.206 248514 INFO nova.virt.libvirt.driver [-] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Instance spawned successfully.#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.207 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.263 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:23:01 np0005558241 nova_compute[248510]:  <uuid>dc64fea4-e9a8-47e7-8a3a-d01897fc81de</uuid>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:  <name>instance-0000001f</name>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-153788010</nova:name>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:22:59</nova:creationTime>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:23:01 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:        <nova:user uuid="79d4b34b8bd3452cb5b8c0954166f397">tempest-FloatingIPsAssociationTestJSON-609563086-project-member</nova:user>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:        <nova:project uuid="06fbab937d6444558229b2351632e711">tempest-FloatingIPsAssociationTestJSON-609563086</nova:project>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:        <nova:port uuid="627622b8-ef54-4181-bd8d-e8e82650b143">
Dec 13 03:23:01 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <entry name="serial">dc64fea4-e9a8-47e7-8a3a-d01897fc81de</entry>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <entry name="uuid">dc64fea4-e9a8-47e7-8a3a-d01897fc81de</entry>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk">
Dec 13 03:23:01 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:23:01 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk.config">
Dec 13 03:23:01 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:23:01 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:4a:b2:83"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <target dev="tap627622b8-ef"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de/console.log" append="off"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:23:01 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:23:01 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:23:01 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:23:01 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.264 248514 DEBUG nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Preparing to wait for external event network-vif-plugged-627622b8-ef54-4181-bd8d-e8e82650b143 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.264 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.264 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.264 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.265 248514 DEBUG nova.virt.libvirt.vif [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:22:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-153788010',display_name='tempest-FloatingIPsAssociationTestJSON-server-153788010',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-153788010',id=31,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06fbab937d6444558229b2351632e711',ramdisk_id='',reservation_id='r-m786w5ky',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-609563086',ow
ner_user_name='tempest-FloatingIPsAssociationTestJSON-609563086-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:22:54Z,user_data=None,user_id='79d4b34b8bd3452cb5b8c0954166f397',uuid=dc64fea4-e9a8-47e7-8a3a-d01897fc81de,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.265 248514 DEBUG nova.network.os_vif_util [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converting VIF {"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.266 248514 DEBUG nova.network.os_vif_util [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:b2:83,bridge_name='br-int',has_traffic_filtering=True,id=627622b8-ef54-4181-bd8d-e8e82650b143,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap627622b8-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.266 248514 DEBUG os_vif [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:b2:83,bridge_name='br-int',has_traffic_filtering=True,id=627622b8-ef54-4181-bd8d-e8e82650b143,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap627622b8-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.268 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.268 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.269 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.269 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.273 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.274 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap627622b8-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.274 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap627622b8-ef, col_values=(('external_ids', {'iface-id': '627622b8-ef54-4181-bd8d-e8e82650b143', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:b2:83', 'vm-uuid': 'dc64fea4-e9a8-47e7-8a3a-d01897fc81de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:01 np0005558241 NetworkManager[50376]: <info>  [1765614181.2771] manager: (tap627622b8-ef): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.280 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.282 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.284 248514 DEBUG nova.network.neutron [-] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.285 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.287 248514 INFO os_vif [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:b2:83,bridge_name='br-int',has_traffic_filtering=True,id=627622b8-ef54-4181-bd8d-e8e82650b143,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap627622b8-ef')#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.289 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.289 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.289 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.290 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.290 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.290 248514 DEBUG nova.virt.libvirt.driver [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.327 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.327 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614181.195357, 3b43a9c7-85e7-4558-bd2f-e4712882021e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.327 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.335 248514 INFO nova.compute.manager [-] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Took 1.64 seconds to deallocate network for instance.#033[00m
Dec 13 03:23:01 np0005558241 podman[284555]: 2025-12-13 08:23:01.356047522 +0000 UTC m=+0.047312856 container create e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.369 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.375 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614181.2038863, 3b43a9c7-85e7-4558-bd2f-e4712882021e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.375 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.378 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.378 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.379 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] No VIF found with MAC fa:16:3e:4a:b2:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.379 248514 INFO nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Using config drive#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.406 248514 DEBUG nova.storage.rbd_utils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:01 np0005558241 systemd[1]: Started libpod-conmon-e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b.scope.
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.415 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.417 248514 INFO nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Took 8.01 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.417 248514 DEBUG nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.417 248514 DEBUG oslo_concurrency.lockutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.418 248514 DEBUG oslo_concurrency.lockutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:01 np0005558241 podman[284555]: 2025-12-13 08:23:01.331466737 +0000 UTC m=+0.022732081 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.431 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:23:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:23:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b4067aa9b3f2b0ef0dd9859d3a57acc4d914a6c704c9a2c14d2ad026817e68f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.455 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:23:01 np0005558241 podman[284555]: 2025-12-13 08:23:01.460147836 +0000 UTC m=+0.151413190 container init e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 03:23:01 np0005558241 podman[284555]: 2025-12-13 08:23:01.468484252 +0000 UTC m=+0.159749576 container start e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 13 03:23:01 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[284589]: [NOTICE]   (284593) : New worker (284595) forked
Dec 13 03:23:01 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[284589]: [NOTICE]   (284593) : Loading success.
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.535 248514 DEBUG oslo_concurrency.processutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.653 248514 INFO nova.compute.manager [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Took 9.50 seconds to build instance.#033[00m
Dec 13 03:23:01 np0005558241 nova_compute[248510]: 2025-12-13 08:23:01.675 248514 DEBUG oslo_concurrency.lockutils [None req-d6d3d380-ab87-4b0f-990d-fd34efa0b4f6 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.009 248514 DEBUG nova.compute.manager [req-9010f004-666f-454a-bab5-016abadb09b4 req-27c913b9-b175-4af6-82d9-6dddfad02ebb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-plugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.009 248514 DEBUG oslo_concurrency.lockutils [req-9010f004-666f-454a-bab5-016abadb09b4 req-27c913b9-b175-4af6-82d9-6dddfad02ebb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.010 248514 DEBUG oslo_concurrency.lockutils [req-9010f004-666f-454a-bab5-016abadb09b4 req-27c913b9-b175-4af6-82d9-6dddfad02ebb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.010 248514 DEBUG oslo_concurrency.lockutils [req-9010f004-666f-454a-bab5-016abadb09b4 req-27c913b9-b175-4af6-82d9-6dddfad02ebb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.010 248514 DEBUG nova.compute.manager [req-9010f004-666f-454a-bab5-016abadb09b4 req-27c913b9-b175-4af6-82d9-6dddfad02ebb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] No waiting events found dispatching network-vif-plugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.010 248514 WARNING nova.compute.manager [req-9010f004-666f-454a-bab5-016abadb09b4 req-27c913b9-b175-4af6-82d9-6dddfad02ebb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received unexpected event network-vif-plugged-4dd4aba8-8ce4-4b2e-92b4-879959570e8d for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.010 248514 DEBUG nova.compute.manager [req-9010f004-666f-454a-bab5-016abadb09b4 req-27c913b9-b175-4af6-82d9-6dddfad02ebb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-deleted-ee08056b-cf18-46d7-9fea-542c2ec040ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.011 248514 DEBUG nova.compute.manager [req-9010f004-666f-454a-bab5-016abadb09b4 req-27c913b9-b175-4af6-82d9-6dddfad02ebb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Received event network-vif-deleted-4dd4aba8-8ce4-4b2e-92b4-879959570e8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:23:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2364112568' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.128 248514 DEBUG oslo_concurrency.processutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.137 248514 DEBUG nova.compute.provider_tree [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.154 248514 INFO nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Creating config drive at /var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de/disk.config#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.159 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4xc8mx3r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.195 248514 DEBUG nova.network.neutron [req-41b4cde3-76f7-4446-a080-c88449f8a2a0 req-334eab2c-7448-4e00-a734-8ab47354faaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updated VIF entry in instance network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.197 248514 DEBUG nova.network.neutron [req-41b4cde3-76f7-4446-a080-c88449f8a2a0 req-334eab2c-7448-4e00-a734-8ab47354faaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updating instance_info_cache with network_info: [{"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.199 248514 DEBUG nova.scheduler.client.report [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.223 248514 DEBUG oslo_concurrency.lockutils [req-41b4cde3-76f7-4446-a080-c88449f8a2a0 req-334eab2c-7448-4e00-a734-8ab47354faaf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.226 248514 DEBUG oslo_concurrency.lockutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.260 248514 INFO nova.scheduler.client.report [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Deleted allocations for instance 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.300 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4xc8mx3r" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.326 248514 DEBUG nova.storage.rbd_utils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.331 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de/disk.config dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.373 248514 DEBUG oslo_concurrency.lockutils [None req-46c72ff0-8a13-4982-96da-77387b567985 e8aa7edd2151436caa0fd25f361298fd 2495263e4f944deda2647b578d06bb21 - - default default] Lock "7299d5b2-6ea4-47fc-b16b-fcb6dd741e38" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.506 248514 DEBUG oslo_concurrency.processutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de/disk.config dc64fea4-e9a8-47e7-8a3a-d01897fc81de_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.508 248514 INFO nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Deleting local config drive /var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de/disk.config because it was imported into RBD.#033[00m
Dec 13 03:23:02 np0005558241 NetworkManager[50376]: <info>  [1765614182.5810] manager: (tap627622b8-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/116)
Dec 13 03:23:02 np0005558241 kernel: tap627622b8-ef: entered promiscuous mode
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.587 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:02Z|00248|binding|INFO|Claiming lport 627622b8-ef54-4181-bd8d-e8e82650b143 for this chassis.
Dec 13 03:23:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:02Z|00249|binding|INFO|627622b8-ef54-4181-bd8d-e8e82650b143: Claiming fa:16:3e:4a:b2:83 10.100.0.9
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.606 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:b2:83 10.100.0.9'], port_security=['fa:16:3e:4a:b2:83 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dc64fea4-e9a8-47e7-8a3a-d01897fc81de', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06fbab937d6444558229b2351632e711', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fa325bd2-c57a-49fb-8dd9-f45405c95b4e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f47224ac-d05f-46db-ac07-cb476b38b044, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=627622b8-ef54-4181-bd8d-e8e82650b143) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.614 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 627622b8-ef54-4181-bd8d-e8e82650b143 in datapath 62193ff6-aaa1-401a-b1e0-512e67752a9e bound to our chassis#033[00m
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.616 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.619 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62193ff6-aaa1-401a-b1e0-512e67752a9e#033[00m
Dec 13 03:23:02 np0005558241 systemd-machined[210538]: New machine qemu-36-instance-0000001f.
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.623 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:02Z|00250|binding|INFO|Setting lport 627622b8-ef54-4181-bd8d-e8e82650b143 ovn-installed in OVS
Dec 13 03:23:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:02Z|00251|binding|INFO|Setting lport 627622b8-ef54-4181-bd8d-e8e82650b143 up in Southbound
Dec 13 03:23:02 np0005558241 systemd[1]: Started Virtual Machine qemu-36-instance-0000001f.
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.635 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8b221c77-5686-4d3a-a5f6-7a32694d0920]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.637 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62193ff6-a1 in ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.639 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62193ff6-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.639 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eee16be2-4cab-4384-87e0-87b6ee9c35d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.640 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[218f68e0-01bd-4e31-a359-0761175874e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:02 np0005558241 systemd-udevd[284680]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.657 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[1576e0f9-2d45-479b-a8c1-1c034388c88a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:02 np0005558241 NetworkManager[50376]: <info>  [1765614182.6711] device (tap627622b8-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:23:02 np0005558241 NetworkManager[50376]: <info>  [1765614182.6728] device (tap627622b8-ef): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.676 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a1e061-b7b6-4d25-b010-eaed35a1cadc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.715 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[cc33832e-08b0-4f38-ad19-bcd87d5f4d50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:02 np0005558241 systemd-udevd[284684]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:23:02 np0005558241 NetworkManager[50376]: <info>  [1765614182.7293] manager: (tap62193ff6-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/117)
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.728 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[47b61247-621f-4709-a239-ceff47eeb62e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.764 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d0dae41e-3f07-4e22-8cf5-1d28d4b24288]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.769 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3ada246d-6432-41bb-953f-85f9992f695a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:02 np0005558241 NetworkManager[50376]: <info>  [1765614182.7999] device (tap62193ff6-a0): carrier: link connected
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.807 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f95928ae-3698-4133-a443-4767430dc5a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.829 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[55797a0f-cc41-485b-98e5-025f497064cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62193ff6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:33:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676001, 'reachable_time': 16033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284712, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.851 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e2bf2840-67c7-4c28-855b-5908fd1ea686]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1e:33dd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676001, 'tstamp': 676001}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284713, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.872 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5186b55e-22d3-4381-acd6-aeac5b3fb5c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62193ff6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:33:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676001, 'reachable_time': 16033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284714, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.916 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3629f4-1dc0-4cdc-84dc-f1f58b02260f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.988 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a0905b9a-4cec-4f25-bc8a-74008c296405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.990 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62193ff6-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.991 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:23:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:02.991 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62193ff6-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:02 np0005558241 kernel: tap62193ff6-a0: entered promiscuous mode
Dec 13 03:23:02 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.994 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:02 np0005558241 NetworkManager[50376]: <info>  [1765614182.9977] manager: (tap62193ff6-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Dec 13 03:23:03 np0005558241 nova_compute[248510]: 2025-12-13 08:23:02.999 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:03.005 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62193ff6-a0, col_values=(('external_ids', {'iface-id': '67d122d2-811d-4aa8-bdde-aafc5e939b2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:03 np0005558241 nova_compute[248510]: 2025-12-13 08:23:03.007 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:03Z|00252|binding|INFO|Releasing lport 67d122d2-811d-4aa8-bdde-aafc5e939b2b from this chassis (sb_readonly=0)
Dec 13 03:23:03 np0005558241 nova_compute[248510]: 2025-12-13 08:23:03.042 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:03.043 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62193ff6-aaa1-401a-b1e0-512e67752a9e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62193ff6-aaa1-401a-b1e0-512e67752a9e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:03.044 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[018906c3-8f88-40f3-a7d2-f6b900f9b778]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:03.045 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-62193ff6-aaa1-401a-b1e0-512e67752a9e
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/62193ff6-aaa1-401a-b1e0-512e67752a9e.pid.haproxy
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 62193ff6-aaa1-401a-b1e0-512e67752a9e
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:23:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:03.046 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'env', 'PROCESS_TAG=haproxy-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62193ff6-aaa1-401a-b1e0-512e67752a9e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:23:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1736: 321 pgs: 321 active+clean; 170 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Dec 13 03:23:03 np0005558241 nova_compute[248510]: 2025-12-13 08:23:03.070 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614183.0700932, dc64fea4-e9a8-47e7-8a3a-d01897fc81de => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:23:03 np0005558241 nova_compute[248510]: 2025-12-13 08:23:03.071 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] VM Started (Lifecycle Event)#033[00m
Dec 13 03:23:03 np0005558241 nova_compute[248510]: 2025-12-13 08:23:03.096 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:23:03 np0005558241 nova_compute[248510]: 2025-12-13 08:23:03.100 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614183.0714447, dc64fea4-e9a8-47e7-8a3a-d01897fc81de => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:23:03 np0005558241 nova_compute[248510]: 2025-12-13 08:23:03.101 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:23:03 np0005558241 nova_compute[248510]: 2025-12-13 08:23:03.124 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:23:03 np0005558241 nova_compute[248510]: 2025-12-13 08:23:03.137 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:23:03 np0005558241 nova_compute[248510]: 2025-12-13 08:23:03.169 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:23:03 np0005558241 nova_compute[248510]: 2025-12-13 08:23:03.249 248514 DEBUG nova.compute.manager [req-0ac51951-6f21-4af6-93f1-3187b1db44a5 req-1aac2784-e529-4ab9-8098-4ca00a6ffbcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-plugged-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:03 np0005558241 nova_compute[248510]: 2025-12-13 08:23:03.250 248514 DEBUG oslo_concurrency.lockutils [req-0ac51951-6f21-4af6-93f1-3187b1db44a5 req-1aac2784-e529-4ab9-8098-4ca00a6ffbcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:03 np0005558241 nova_compute[248510]: 2025-12-13 08:23:03.250 248514 DEBUG oslo_concurrency.lockutils [req-0ac51951-6f21-4af6-93f1-3187b1db44a5 req-1aac2784-e529-4ab9-8098-4ca00a6ffbcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:03 np0005558241 nova_compute[248510]: 2025-12-13 08:23:03.250 248514 DEBUG oslo_concurrency.lockutils [req-0ac51951-6f21-4af6-93f1-3187b1db44a5 req-1aac2784-e529-4ab9-8098-4ca00a6ffbcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:03 np0005558241 nova_compute[248510]: 2025-12-13 08:23:03.250 248514 DEBUG nova.compute.manager [req-0ac51951-6f21-4af6-93f1-3187b1db44a5 req-1aac2784-e529-4ab9-8098-4ca00a6ffbcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] No waiting events found dispatching network-vif-plugged-bc4158d8-4963-4009-a434-0a0106941c9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:23:03 np0005558241 nova_compute[248510]: 2025-12-13 08:23:03.251 248514 WARNING nova.compute.manager [req-0ac51951-6f21-4af6-93f1-3187b1db44a5 req-1aac2784-e529-4ab9-8098-4ca00a6ffbcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received unexpected event network-vif-plugged-bc4158d8-4963-4009-a434-0a0106941c9d for instance with vm_state active and task_state None.#033[00m
Dec 13 03:23:03 np0005558241 podman[284788]: 2025-12-13 08:23:03.466022522 +0000 UTC m=+0.057681942 container create 70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:23:03 np0005558241 systemd[1]: Started libpod-conmon-70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08.scope.
Dec 13 03:23:03 np0005558241 podman[284788]: 2025-12-13 08:23:03.433146582 +0000 UTC m=+0.024806002 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:23:03 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:23:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58e74a911db8950501b6e1ba22b29e7f74e9f24881f0a836706699e6a74d9c3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:23:03 np0005558241 podman[284788]: 2025-12-13 08:23:03.574835522 +0000 UTC m=+0.166494962 container init 70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 03:23:03 np0005558241 podman[284788]: 2025-12-13 08:23:03.580950493 +0000 UTC m=+0.172609913 container start 70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 13 03:23:03 np0005558241 neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e[284803]: [NOTICE]   (284807) : New worker (284809) forked
Dec 13 03:23:03 np0005558241 neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e[284803]: [NOTICE]   (284807) : Loading success.
Dec 13 03:23:04 np0005558241 nova_compute[248510]: 2025-12-13 08:23:04.547 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:23:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1737: 321 pgs: 321 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 235 op/s
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.443 248514 DEBUG nova.compute.manager [req-7b5f5d30-bb9e-47b7-aa0e-5d0520368238 req-51727e1e-d3cb-4c9a-a6e5-eb7257565502 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-vif-plugged-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.443 248514 DEBUG oslo_concurrency.lockutils [req-7b5f5d30-bb9e-47b7-aa0e-5d0520368238 req-51727e1e-d3cb-4c9a-a6e5-eb7257565502 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.444 248514 DEBUG oslo_concurrency.lockutils [req-7b5f5d30-bb9e-47b7-aa0e-5d0520368238 req-51727e1e-d3cb-4c9a-a6e5-eb7257565502 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.444 248514 DEBUG oslo_concurrency.lockutils [req-7b5f5d30-bb9e-47b7-aa0e-5d0520368238 req-51727e1e-d3cb-4c9a-a6e5-eb7257565502 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.444 248514 DEBUG nova.compute.manager [req-7b5f5d30-bb9e-47b7-aa0e-5d0520368238 req-51727e1e-d3cb-4c9a-a6e5-eb7257565502 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Processing event network-vif-plugged-627622b8-ef54-4181-bd8d-e8e82650b143 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.445 248514 DEBUG nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.454 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614185.4534385, dc64fea4-e9a8-47e7-8a3a-d01897fc81de => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.454 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.457 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.460 248514 INFO nova.virt.libvirt.driver [-] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Instance spawned successfully.#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.461 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.489 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.497 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.503 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.504 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.504 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.505 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.505 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.506 248514 DEBUG nova.virt.libvirt.driver [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.537 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.577 248514 INFO nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Took 10.83 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.578 248514 DEBUG nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.642 248514 INFO nova.compute.manager [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Took 12.28 seconds to build instance.#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.668 248514 DEBUG oslo_concurrency.lockutils [None req-31093be9-e199-495c-a6d4-ddec456cd8ab 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:05 np0005558241 nova_compute[248510]: 2025-12-13 08:23:05.857 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:05.858 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:23:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:05.859 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:23:06 np0005558241 nova_compute[248510]: 2025-12-13 08:23:06.288 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:06 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:06Z|00253|binding|INFO|Releasing lport 67d122d2-811d-4aa8-bdde-aafc5e939b2b from this chassis (sb_readonly=0)
Dec 13 03:23:06 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:06Z|00254|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 03:23:06 np0005558241 nova_compute[248510]: 2025-12-13 08:23:06.497 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:06 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:06Z|00255|binding|INFO|Releasing lport 67d122d2-811d-4aa8-bdde-aafc5e939b2b from this chassis (sb_readonly=0)
Dec 13 03:23:06 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:06Z|00256|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 03:23:06 np0005558241 nova_compute[248510]: 2025-12-13 08:23:06.659 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:06 np0005558241 nova_compute[248510]: 2025-12-13 08:23:06.718 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:06 np0005558241 NetworkManager[50376]: <info>  [1765614186.7191] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Dec 13 03:23:06 np0005558241 NetworkManager[50376]: <info>  [1765614186.7197] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Dec 13 03:23:06 np0005558241 nova_compute[248510]: 2025-12-13 08:23:06.820 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:06 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:06Z|00257|binding|INFO|Releasing lport 67d122d2-811d-4aa8-bdde-aafc5e939b2b from this chassis (sb_readonly=0)
Dec 13 03:23:06 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:06Z|00258|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 03:23:06 np0005558241 nova_compute[248510]: 2025-12-13 08:23:06.830 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1738: 321 pgs: 321 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.5 MiB/s wr, 231 op/s
Dec 13 03:23:08 np0005558241 nova_compute[248510]: 2025-12-13 08:23:08.523 248514 DEBUG nova.compute.manager [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-vif-plugged-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:08 np0005558241 nova_compute[248510]: 2025-12-13 08:23:08.524 248514 DEBUG oslo_concurrency.lockutils [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:08 np0005558241 nova_compute[248510]: 2025-12-13 08:23:08.524 248514 DEBUG oslo_concurrency.lockutils [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:08 np0005558241 nova_compute[248510]: 2025-12-13 08:23:08.524 248514 DEBUG oslo_concurrency.lockutils [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:08 np0005558241 nova_compute[248510]: 2025-12-13 08:23:08.524 248514 DEBUG nova.compute.manager [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] No waiting events found dispatching network-vif-plugged-627622b8-ef54-4181-bd8d-e8e82650b143 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:23:08 np0005558241 nova_compute[248510]: 2025-12-13 08:23:08.524 248514 WARNING nova.compute.manager [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received unexpected event network-vif-plugged-627622b8-ef54-4181-bd8d-e8e82650b143 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:23:08 np0005558241 nova_compute[248510]: 2025-12-13 08:23:08.525 248514 DEBUG nova.compute.manager [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:08 np0005558241 nova_compute[248510]: 2025-12-13 08:23:08.525 248514 DEBUG nova.compute.manager [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing instance network info cache due to event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:23:08 np0005558241 nova_compute[248510]: 2025-12-13 08:23:08.525 248514 DEBUG oslo_concurrency.lockutils [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:23:08 np0005558241 nova_compute[248510]: 2025-12-13 08:23:08.525 248514 DEBUG oslo_concurrency.lockutils [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:23:08 np0005558241 nova_compute[248510]: 2025-12-13 08:23:08.525 248514 DEBUG nova.network.neutron [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:23:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1739: 321 pgs: 321 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.5 MiB/s wr, 297 op/s
Dec 13 03:23:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:23:09
Dec 13 03:23:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:23:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:23:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'volumes', '.mgr', '.rgw.root', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'images']
Dec 13 03:23:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:23:09 np0005558241 nova_compute[248510]: 2025-12-13 08:23:09.549 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:09.862 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:23:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:23:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:23:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:23:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:23:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:23:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:23:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:23:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:23:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:23:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:23:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:23:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:23:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:23:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:23:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:23:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:23:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1740: 321 pgs: 321 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 173 op/s
Dec 13 03:23:11 np0005558241 nova_compute[248510]: 2025-12-13 08:23:11.292 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:11 np0005558241 nova_compute[248510]: 2025-12-13 08:23:11.314 248514 DEBUG nova.network.neutron [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updated VIF entry in instance network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:23:11 np0005558241 nova_compute[248510]: 2025-12-13 08:23:11.315 248514 DEBUG nova.network.neutron [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:23:11 np0005558241 nova_compute[248510]: 2025-12-13 08:23:11.352 248514 DEBUG oslo_concurrency.lockutils [req-e3364067-ac20-4f33-a199-f1d3bc5d170f req-c9071b61-3d94-4499-b3cd-6b629e2f2458 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:23:11 np0005558241 podman[284821]: 2025-12-13 08:23:11.981960585 +0000 UTC m=+0.061472548 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 03:23:11 np0005558241 podman[284820]: 2025-12-13 08:23:11.990413283 +0000 UTC m=+0.073600397 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:23:12 np0005558241 podman[284819]: 2025-12-13 08:23:12.023043358 +0000 UTC m=+0.107571105 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:23:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1741: 321 pgs: 321 active+clean; 137 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 425 KiB/s wr, 186 op/s
Dec 13 03:23:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:13Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:37:48:0d 10.100.0.6
Dec 13 03:23:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:13Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:37:48:0d 10.100.0.6
Dec 13 03:23:14 np0005558241 nova_compute[248510]: 2025-12-13 08:23:14.084 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614179.0825927, 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:23:14 np0005558241 nova_compute[248510]: 2025-12-13 08:23:14.085 248514 INFO nova.compute.manager [-] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:23:14 np0005558241 nova_compute[248510]: 2025-12-13 08:23:14.553 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:23:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/887234214' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:23:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:23:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/887234214' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:23:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:23:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1742: 321 pgs: 321 active+clean; 156 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.0 MiB/s wr, 174 op/s
Dec 13 03:23:16 np0005558241 nova_compute[248510]: 2025-12-13 08:23:16.115 248514 DEBUG nova.compute.manager [None req-13018656-a484-4227-ad64-2b2b7957f4a0 - - - - - -] [instance: 7299d5b2-6ea4-47fc-b16b-fcb6dd741e38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:23:16 np0005558241 nova_compute[248510]: 2025-12-13 08:23:16.296 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:16 np0005558241 nova_compute[248510]: 2025-12-13 08:23:16.713 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:16 np0005558241 nova_compute[248510]: 2025-12-13 08:23:16.714 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:16 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec 13 03:23:16 np0005558241 nova_compute[248510]: 2025-12-13 08:23:16.732 248514 DEBUG nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:23:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1743: 321 pgs: 321 active+clean; 156 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 107 op/s
Dec 13 03:23:17 np0005558241 nova_compute[248510]: 2025-12-13 08:23:17.099 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:17 np0005558241 nova_compute[248510]: 2025-12-13 08:23:17.100 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:17 np0005558241 nova_compute[248510]: 2025-12-13 08:23:17.109 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:23:17 np0005558241 nova_compute[248510]: 2025-12-13 08:23:17.109 248514 INFO nova.compute.claims [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:23:17 np0005558241 nova_compute[248510]: 2025-12-13 08:23:17.346 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:17Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:b2:83 10.100.0.9
Dec 13 03:23:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:17Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:b2:83 10.100.0.9
Dec 13 03:23:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:23:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3151020671' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:23:17 np0005558241 nova_compute[248510]: 2025-12-13 08:23:17.962 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:17 np0005558241 nova_compute[248510]: 2025-12-13 08:23:17.969 248514 DEBUG nova.compute.provider_tree [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:23:17 np0005558241 nova_compute[248510]: 2025-12-13 08:23:17.988 248514 DEBUG nova.scheduler.client.report [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.018 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.019 248514 DEBUG nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.100 248514 DEBUG nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.101 248514 DEBUG nova.network.neutron [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.125 248514 INFO nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.152 248514 DEBUG nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.303 248514 DEBUG nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.306 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.306 248514 INFO nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Creating image(s)#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.329 248514 DEBUG nova.storage.rbd_utils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image d503913e-a05e-47d4-9366-db4426b9aac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.354 248514 DEBUG nova.storage.rbd_utils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image d503913e-a05e-47d4-9366-db4426b9aac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.375 248514 DEBUG nova.storage.rbd_utils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image d503913e-a05e-47d4-9366-db4426b9aac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.378 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.430 248514 DEBUG nova.policy [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee4333dff699428ca3ae5201855ab430', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea37c93820f34312bc386ea952ecd94f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.448 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.449 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.449 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.450 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.477 248514 DEBUG nova.storage.rbd_utils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image d503913e-a05e-47d4-9366-db4426b9aac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.482 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 d503913e-a05e-47d4-9366-db4426b9aac1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.786 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 d503913e-a05e-47d4-9366-db4426b9aac1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.856 248514 DEBUG nova.storage.rbd_utils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] resizing rbd image d503913e-a05e-47d4-9366-db4426b9aac1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:23:18 np0005558241 nova_compute[248510]: 2025-12-13 08:23:18.947 248514 DEBUG nova.objects.instance [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'migration_context' on Instance uuid d503913e-a05e-47d4-9366-db4426b9aac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:23:19 np0005558241 nova_compute[248510]: 2025-12-13 08:23:19.048 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:23:19 np0005558241 nova_compute[248510]: 2025-12-13 08:23:19.049 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Ensure instance console log exists: /var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:23:19 np0005558241 nova_compute[248510]: 2025-12-13 08:23:19.050 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:19 np0005558241 nova_compute[248510]: 2025-12-13 08:23:19.050 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:19 np0005558241 nova_compute[248510]: 2025-12-13 08:23:19.051 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1744: 321 pgs: 321 active+clean; 229 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.4 MiB/s wr, 192 op/s
Dec 13 03:23:19 np0005558241 nova_compute[248510]: 2025-12-13 08:23:19.588 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:23:20 np0005558241 nova_compute[248510]: 2025-12-13 08:23:20.048 248514 DEBUG nova.network.neutron [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Successfully created port: e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:23:20 np0005558241 nova_compute[248510]: 2025-12-13 08:23:20.229 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:20 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:20Z|00259|binding|INFO|Releasing lport 67d122d2-811d-4aa8-bdde-aafc5e939b2b from this chassis (sb_readonly=0)
Dec 13 03:23:20 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:20Z|00260|binding|INFO|Releasing lport dda81cda-adb6-4137-8e02-abd94f4378f2 from this chassis (sb_readonly=0)
Dec 13 03:23:20 np0005558241 nova_compute[248510]: 2025-12-13 08:23:20.372 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017437611022420622 of space, bias 1.0, pg target 0.5231283306726187 quantized to 32 (current 32)
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006668854095107248 of space, bias 1.0, pg target 0.20006562285321744 quantized to 32 (current 32)
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.289488535883092e-07 of space, bias 4.0, pg target 0.000874738624305971 quantized to 16 (current 32)
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:23:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:23:21 np0005558241 nova_compute[248510]: 2025-12-13 08:23:21.019 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "4403714d-3521-4409-9c3b-59d655fc999d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:21 np0005558241 nova_compute[248510]: 2025-12-13 08:23:21.020 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:21 np0005558241 nova_compute[248510]: 2025-12-13 08:23:21.043 248514 DEBUG nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:23:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1745: 321 pgs: 321 active+clean; 229 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 575 KiB/s rd, 5.4 MiB/s wr, 126 op/s
Dec 13 03:23:21 np0005558241 nova_compute[248510]: 2025-12-13 08:23:21.106 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:21 np0005558241 nova_compute[248510]: 2025-12-13 08:23:21.106 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:21 np0005558241 nova_compute[248510]: 2025-12-13 08:23:21.115 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:23:21 np0005558241 nova_compute[248510]: 2025-12-13 08:23:21.116 248514 INFO nova.compute.claims [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:23:21 np0005558241 nova_compute[248510]: 2025-12-13 08:23:21.282 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:21 np0005558241 nova_compute[248510]: 2025-12-13 08:23:21.310 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:23:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3337626705' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:23:21 np0005558241 nova_compute[248510]: 2025-12-13 08:23:21.933 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.188 248514 DEBUG nova.compute.provider_tree [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.205 248514 DEBUG nova.scheduler.client.report [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.226 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.227 248514 DEBUG nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.277 248514 DEBUG nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.277 248514 DEBUG nova.network.neutron [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.302 248514 INFO nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.332 248514 DEBUG nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.589 248514 DEBUG nova.network.neutron [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Successfully updated port: e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.678 248514 DEBUG nova.policy [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '79d4b34b8bd3452cb5b8c0954166f397', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '06fbab937d6444558229b2351632e711', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.700 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.700 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.701 248514 DEBUG nova.network.neutron [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.709 248514 DEBUG nova.compute.manager [req-db843c8f-b130-4004-a8eb-9012e9b67593 req-2cb1f60d-9d1c-4e17-9e44-4dfcde961cc5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-changed-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.710 248514 DEBUG nova.compute.manager [req-db843c8f-b130-4004-a8eb-9012e9b67593 req-2cb1f60d-9d1c-4e17-9e44-4dfcde961cc5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing instance network info cache due to event network-changed-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.710 248514 DEBUG oslo_concurrency.lockutils [req-db843c8f-b130-4004-a8eb-9012e9b67593 req-2cb1f60d-9d1c-4e17-9e44-4dfcde961cc5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.742 248514 DEBUG nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.744 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.744 248514 INFO nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Creating image(s)#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.766 248514 DEBUG nova.storage.rbd_utils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image 4403714d-3521-4409-9c3b-59d655fc999d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.791 248514 DEBUG nova.storage.rbd_utils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image 4403714d-3521-4409-9c3b-59d655fc999d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.813 248514 DEBUG nova.storage.rbd_utils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image 4403714d-3521-4409-9c3b-59d655fc999d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.817 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.896 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.898 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.898 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.899 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.921 248514 DEBUG nova.storage.rbd_utils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image 4403714d-3521-4409-9c3b-59d655fc999d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.925 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 4403714d-3521-4409-9c3b-59d655fc999d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:22 np0005558241 nova_compute[248510]: 2025-12-13 08:23:22.958 248514 DEBUG nova.network.neutron [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:23:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1746: 321 pgs: 321 active+clean; 246 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 604 KiB/s rd, 6.0 MiB/s wr, 141 op/s
Dec 13 03:23:23 np0005558241 nova_compute[248510]: 2025-12-13 08:23:23.231 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 4403714d-3521-4409-9c3b-59d655fc999d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:23 np0005558241 nova_compute[248510]: 2025-12-13 08:23:23.299 248514 DEBUG nova.storage.rbd_utils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] resizing rbd image 4403714d-3521-4409-9c3b-59d655fc999d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:23:23 np0005558241 nova_compute[248510]: 2025-12-13 08:23:23.386 248514 DEBUG nova.objects.instance [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lazy-loading 'migration_context' on Instance uuid 4403714d-3521-4409-9c3b-59d655fc999d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:23:23 np0005558241 nova_compute[248510]: 2025-12-13 08:23:23.390 248514 DEBUG nova.network.neutron [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Successfully created port: d462a8a0-34ee-4682-ac5c-f7632b5ad39c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:23:23 np0005558241 nova_compute[248510]: 2025-12-13 08:23:23.407 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:23:23 np0005558241 nova_compute[248510]: 2025-12-13 08:23:23.408 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Ensure instance console log exists: /var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:23:23 np0005558241 nova_compute[248510]: 2025-12-13 08:23:23.408 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:23 np0005558241 nova_compute[248510]: 2025-12-13 08:23:23.408 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:23 np0005558241 nova_compute[248510]: 2025-12-13 08:23:23.408 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:24 np0005558241 nova_compute[248510]: 2025-12-13 08:23:24.592 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:24 np0005558241 nova_compute[248510]: 2025-12-13 08:23:24.946 248514 DEBUG nova.network.neutron [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updating instance_info_cache with network_info: [{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:23:24 np0005558241 nova_compute[248510]: 2025-12-13 08:23:24.982 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:23:24 np0005558241 nova_compute[248510]: 2025-12-13 08:23:24.983 248514 DEBUG nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Instance network_info: |[{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:23:24 np0005558241 nova_compute[248510]: 2025-12-13 08:23:24.984 248514 DEBUG oslo_concurrency.lockutils [req-db843c8f-b130-4004-a8eb-9012e9b67593 req-2cb1f60d-9d1c-4e17-9e44-4dfcde961cc5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:23:24 np0005558241 nova_compute[248510]: 2025-12-13 08:23:24.984 248514 DEBUG nova.network.neutron [req-db843c8f-b130-4004-a8eb-9012e9b67593 req-2cb1f60d-9d1c-4e17-9e44-4dfcde961cc5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing network info cache for port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:23:24 np0005558241 nova_compute[248510]: 2025-12-13 08:23:24.987 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Start _get_guest_xml network_info=[{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:23:24 np0005558241 nova_compute[248510]: 2025-12-13 08:23:24.992 248514 WARNING nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:23:24 np0005558241 nova_compute[248510]: 2025-12-13 08:23:24.998 248514 DEBUG nova.virt.libvirt.host [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.000 248514 DEBUG nova.virt.libvirt.host [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:23:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.010 248514 DEBUG nova.virt.libvirt.host [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.012 248514 DEBUG nova.virt.libvirt.host [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.013 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.013 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.014 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.014 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.015 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.015 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.015 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.016 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.016 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.017 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.017 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.018 248514 DEBUG nova.virt.hardware [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.025 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1747: 321 pgs: 321 active+clean; 289 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 577 KiB/s rd, 7.4 MiB/s wr, 164 op/s
Dec 13 03:23:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:23:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1657012232' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.610 248514 DEBUG nova.network.neutron [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Successfully updated port: d462a8a0-34ee-4682-ac5c-f7632b5ad39c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.613 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.644 248514 DEBUG nova.storage.rbd_utils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image d503913e-a05e-47d4-9366-db4426b9aac1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.649 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.693 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.694 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquired lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.694 248514 DEBUG nova.network.neutron [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.786 248514 DEBUG nova.compute.manager [req-39a9ac49-2727-4f88-a298-ac658463767f req-d31e59af-4793-4034-b840-c6aba2de2d96 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received event network-changed-d462a8a0-34ee-4682-ac5c-f7632b5ad39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.786 248514 DEBUG nova.compute.manager [req-39a9ac49-2727-4f88-a298-ac658463767f req-d31e59af-4793-4034-b840-c6aba2de2d96 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Refreshing instance network info cache due to event network-changed-d462a8a0-34ee-4682-ac5c-f7632b5ad39c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:23:25 np0005558241 nova_compute[248510]: 2025-12-13 08:23:25.787 248514 DEBUG oslo_concurrency.lockutils [req-39a9ac49-2727-4f88-a298-ac658463767f req-d31e59af-4793-4034-b840-c6aba2de2d96 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.011 248514 DEBUG nova.network.neutron [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:23:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:23:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1091847818' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.270 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.272 248514 DEBUG nova.virt.libvirt.vif [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:23:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-152393601',display_name='tempest-tempest.common.compute-instance-152393601',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-152393601',id=32,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-5m65b9nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:23:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=d503913e-a05e-47d4-9366-db4426b9aac1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.272 248514 DEBUG nova.network.os_vif_util [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.274 248514 DEBUG nova.network.os_vif_util [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:27:85,bridge_name='br-int',has_traffic_filtering=True,id=e1eabc5e-9ed7-4b2e-ba64-11149b2f4043,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1eabc5e-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.276 248514 DEBUG nova.objects.instance [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'pci_devices' on Instance uuid d503913e-a05e-47d4-9366-db4426b9aac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.295 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:23:26 np0005558241 nova_compute[248510]:  <uuid>d503913e-a05e-47d4-9366-db4426b9aac1</uuid>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:  <name>instance-00000020</name>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <nova:name>tempest-tempest.common.compute-instance-152393601</nova:name>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:23:24</nova:creationTime>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:23:26 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:        <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:        <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:        <nova:port uuid="e1eabc5e-9ed7-4b2e-ba64-11149b2f4043">
Dec 13 03:23:26 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <entry name="serial">d503913e-a05e-47d4-9366-db4426b9aac1</entry>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <entry name="uuid">d503913e-a05e-47d4-9366-db4426b9aac1</entry>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/d503913e-a05e-47d4-9366-db4426b9aac1_disk">
Dec 13 03:23:26 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:23:26 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/d503913e-a05e-47d4-9366-db4426b9aac1_disk.config">
Dec 13 03:23:26 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:23:26 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:b3:27:85"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <target dev="tape1eabc5e-9e"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/console.log" append="off"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:23:26 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:23:26 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:23:26 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:23:26 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.296 248514 DEBUG nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Preparing to wait for external event network-vif-plugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.297 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.297 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.297 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.298 248514 DEBUG nova.virt.libvirt.vif [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:23:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-152393601',display_name='tempest-tempest.common.compute-instance-152393601',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-152393601',id=32,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-5m65b9nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:23:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=d503913e-a05e-47d4-9366-db4426b9aac1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.299 248514 DEBUG nova.network.os_vif_util [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.299 248514 DEBUG nova.network.os_vif_util [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:27:85,bridge_name='br-int',has_traffic_filtering=True,id=e1eabc5e-9ed7-4b2e-ba64-11149b2f4043,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1eabc5e-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.300 248514 DEBUG os_vif [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:27:85,bridge_name='br-int',has_traffic_filtering=True,id=e1eabc5e-9ed7-4b2e-ba64-11149b2f4043,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1eabc5e-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.301 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.301 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.302 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.306 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.307 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape1eabc5e-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.307 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape1eabc5e-9e, col_values=(('external_ids', {'iface-id': 'e1eabc5e-9ed7-4b2e-ba64-11149b2f4043', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:27:85', 'vm-uuid': 'd503913e-a05e-47d4-9366-db4426b9aac1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:26 np0005558241 NetworkManager[50376]: <info>  [1765614206.3103] manager: (tape1eabc5e-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.312 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.318 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.319 248514 INFO os_vif [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:27:85,bridge_name='br-int',has_traffic_filtering=True,id=e1eabc5e-9ed7-4b2e-ba64-11149b2f4043,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1eabc5e-9e')#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.347 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.411 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.411 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.411 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:b3:27:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.412 248514 INFO nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Using config drive#033[00m
Dec 13 03:23:26 np0005558241 nova_compute[248510]: 2025-12-13 08:23:26.431 248514 DEBUG nova.storage.rbd_utils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image d503913e-a05e-47d4-9366-db4426b9aac1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.057 248514 INFO nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Creating config drive at /var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/disk.config#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.065 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_y2gug60 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1748: 321 pgs: 321 active+clean; 289 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 465 KiB/s rd, 5.7 MiB/s wr, 136 op/s
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.178 248514 DEBUG nova.network.neutron [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Updating instance_info_cache with network_info: [{"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.210 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_y2gug60" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.233 248514 DEBUG nova.storage.rbd_utils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] rbd image d503913e-a05e-47d4-9366-db4426b9aac1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.237 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/disk.config d503913e-a05e-47d4-9366-db4426b9aac1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.269 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Releasing lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.270 248514 DEBUG nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Instance network_info: |[{"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.271 248514 DEBUG oslo_concurrency.lockutils [req-39a9ac49-2727-4f88-a298-ac658463767f req-d31e59af-4793-4034-b840-c6aba2de2d96 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.272 248514 DEBUG nova.network.neutron [req-39a9ac49-2727-4f88-a298-ac658463767f req-d31e59af-4793-4034-b840-c6aba2de2d96 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Refreshing network info cache for port d462a8a0-34ee-4682-ac5c-f7632b5ad39c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.275 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Start _get_guest_xml network_info=[{"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.280 248514 WARNING nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.286 248514 DEBUG nova.virt.libvirt.host [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.286 248514 DEBUG nova.virt.libvirt.host [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.289 248514 DEBUG nova.virt.libvirt.host [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.290 248514 DEBUG nova.virt.libvirt.host [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.290 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.290 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.291 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.291 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.291 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.292 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.292 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.292 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.292 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.293 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.293 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.293 248514 DEBUG nova.virt.hardware [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.297 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.333 248514 DEBUG nova.network.neutron [req-db843c8f-b130-4004-a8eb-9012e9b67593 req-2cb1f60d-9d1c-4e17-9e44-4dfcde961cc5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updated VIF entry in instance network info cache for port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.334 248514 DEBUG nova.network.neutron [req-db843c8f-b130-4004-a8eb-9012e9b67593 req-2cb1f60d-9d1c-4e17-9e44-4dfcde961cc5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updating instance_info_cache with network_info: [{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.386 248514 DEBUG oslo_concurrency.processutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/disk.config d503913e-a05e-47d4-9366-db4426b9aac1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.387 248514 INFO nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Deleting local config drive /var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/disk.config because it was imported into RBD.#033[00m
Dec 13 03:23:27 np0005558241 NetworkManager[50376]: <info>  [1765614207.4531] manager: (tape1eabc5e-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/122)
Dec 13 03:23:27 np0005558241 kernel: tape1eabc5e-9e: entered promiscuous mode
Dec 13 03:23:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:27Z|00261|binding|INFO|Claiming lport e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 for this chassis.
Dec 13 03:23:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:27Z|00262|binding|INFO|e1eabc5e-9ed7-4b2e-ba64-11149b2f4043: Claiming fa:16:3e:b3:27:85 10.100.0.3
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.459 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:27Z|00263|binding|INFO|Setting lport e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 ovn-installed in OVS
Dec 13 03:23:27 np0005558241 systemd-udevd[285421]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:27 np0005558241 systemd-machined[210538]: New machine qemu-37-instance-00000020.
Dec 13 03:23:27 np0005558241 NetworkManager[50376]: <info>  [1765614207.4984] device (tape1eabc5e-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:23:27 np0005558241 NetworkManager[50376]: <info>  [1765614207.4992] device (tape1eabc5e-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:23:27 np0005558241 systemd[1]: Started Virtual Machine qemu-37-instance-00000020.
Dec 13 03:23:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.548 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:27:85 10.100.0.3'], port_security=['fa:16:3e:b3:27:85 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd503913e-a05e-47d4-9366-db4426b9aac1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd758160-971f-4f0e-a4ca-a13304d3c491', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=e1eabc5e-9ed7-4b2e-ba64-11149b2f4043) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:23:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:27Z|00264|binding|INFO|Setting lport e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 up in Southbound
Dec 13 03:23:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.550 158419 INFO neutron.agent.ovn.metadata.agent [-] Port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 bound to our chassis#033[00m
Dec 13 03:23:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.552 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.562 248514 DEBUG oslo_concurrency.lockutils [req-db843c8f-b130-4004-a8eb-9012e9b67593 req-2cb1f60d-9d1c-4e17-9e44-4dfcde961cc5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:23:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.573 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb129a8-dcfa-4790-8345-e005d8718800]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.614 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5f8f0697-25cf-48ff-982c-46eafbbab80c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.618 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[45363661-f37c-4083-bb19-467e5d6c9229]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.654 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[79772222-8646-404f-9c0b-0371c1e9993f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.673 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[437a25d2-ed55-4050-9f72-a6ed11a028d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675797, 'reachable_time': 19285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285436, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.694 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c65ea9af-9d32-4191-ba16-b04740c2fd96]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675811, 'tstamp': 675811}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285437, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675814, 'tstamp': 675814}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285437, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.697 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.699 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.700 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.700 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.700 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:23:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.701 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:27.701 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:23:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:23:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1932318373' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.902 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.928 248514 DEBUG nova.storage.rbd_utils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image 4403714d-3521-4409-9c3b-59d655fc999d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:27 np0005558241 nova_compute[248510]: 2025-12-13 08:23:27.933 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.277 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614208.2769887, d503913e-a05e-47d4-9366-db4426b9aac1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.279 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] VM Started (Lifecycle Event)#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.315 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.321 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614208.278722, d503913e-a05e-47d4-9366-db4426b9aac1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.322 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.363 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.368 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.395 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:23:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:23:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1666929185' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.488 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.490 248514 DEBUG nova.virt.libvirt.vif [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:23:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1547732604',display_name='tempest-FloatingIPsAssociationTestJSON-server-1547732604',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1547732604',id=33,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06fbab937d6444558229b2351632e711',ramdisk_id='',reservation_id='r-ath0htte',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-609563086',owner_use
r_name='tempest-FloatingIPsAssociationTestJSON-609563086-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:23:22Z,user_data=None,user_id='79d4b34b8bd3452cb5b8c0954166f397',uuid=4403714d-3521-4409-9c3b-59d655fc999d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.491 248514 DEBUG nova.network.os_vif_util [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converting VIF {"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.491 248514 DEBUG nova.network.os_vif_util [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:23:d9,bridge_name='br-int',has_traffic_filtering=True,id=d462a8a0-34ee-4682-ac5c-f7632b5ad39c,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd462a8a0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.493 248514 DEBUG nova.objects.instance [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4403714d-3521-4409-9c3b-59d655fc999d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.501 248514 DEBUG nova.compute.manager [req-bc81d783-28eb-4284-940b-3d2524c406ff req-17e03671-b76d-4a97-8975-26b377d5f3e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-plugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.502 248514 DEBUG oslo_concurrency.lockutils [req-bc81d783-28eb-4284-940b-3d2524c406ff req-17e03671-b76d-4a97-8975-26b377d5f3e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.502 248514 DEBUG oslo_concurrency.lockutils [req-bc81d783-28eb-4284-940b-3d2524c406ff req-17e03671-b76d-4a97-8975-26b377d5f3e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.503 248514 DEBUG oslo_concurrency.lockutils [req-bc81d783-28eb-4284-940b-3d2524c406ff req-17e03671-b76d-4a97-8975-26b377d5f3e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.503 248514 DEBUG nova.compute.manager [req-bc81d783-28eb-4284-940b-3d2524c406ff req-17e03671-b76d-4a97-8975-26b377d5f3e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Processing event network-vif-plugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.504 248514 DEBUG nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.510 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614208.5084395, d503913e-a05e-47d4-9366-db4426b9aac1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.510 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.513 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:23:28 np0005558241 nova_compute[248510]:  <uuid>4403714d-3521-4409-9c3b-59d655fc999d</uuid>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:  <name>instance-00000021</name>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1547732604</nova:name>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:23:27</nova:creationTime>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:23:28 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:        <nova:user uuid="79d4b34b8bd3452cb5b8c0954166f397">tempest-FloatingIPsAssociationTestJSON-609563086-project-member</nova:user>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:        <nova:project uuid="06fbab937d6444558229b2351632e711">tempest-FloatingIPsAssociationTestJSON-609563086</nova:project>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:        <nova:port uuid="d462a8a0-34ee-4682-ac5c-f7632b5ad39c">
Dec 13 03:23:28 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <entry name="serial">4403714d-3521-4409-9c3b-59d655fc999d</entry>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <entry name="uuid">4403714d-3521-4409-9c3b-59d655fc999d</entry>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/4403714d-3521-4409-9c3b-59d655fc999d_disk">
Dec 13 03:23:28 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:23:28 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/4403714d-3521-4409-9c3b-59d655fc999d_disk.config">
Dec 13 03:23:28 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:23:28 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:9f:23:d9"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <target dev="tapd462a8a0-34"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d/console.log" append="off"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:23:28 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:23:28 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:23:28 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:23:28 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.514 248514 DEBUG nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Preparing to wait for external event network-vif-plugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.514 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "4403714d-3521-4409-9c3b-59d655fc999d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.515 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.515 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.516 248514 DEBUG nova.virt.libvirt.vif [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:23:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1547732604',display_name='tempest-FloatingIPsAssociationTestJSON-server-1547732604',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1547732604',id=33,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06fbab937d6444558229b2351632e711',ramdisk_id='',reservation_id='r-ath0htte',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-609563086'
,owner_user_name='tempest-FloatingIPsAssociationTestJSON-609563086-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:23:22Z,user_data=None,user_id='79d4b34b8bd3452cb5b8c0954166f397',uuid=4403714d-3521-4409-9c3b-59d655fc999d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.516 248514 DEBUG nova.network.os_vif_util [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converting VIF {"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.517 248514 DEBUG nova.network.os_vif_util [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:23:d9,bridge_name='br-int',has_traffic_filtering=True,id=d462a8a0-34ee-4682-ac5c-f7632b5ad39c,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd462a8a0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.517 248514 DEBUG os_vif [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:23:d9,bridge_name='br-int',has_traffic_filtering=True,id=d462a8a0-34ee-4682-ac5c-f7632b5ad39c,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd462a8a0-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.518 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.519 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.519 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.520 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.523 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.524 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd462a8a0-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.524 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd462a8a0-34, col_values=(('external_ids', {'iface-id': 'd462a8a0-34ee-4682-ac5c-f7632b5ad39c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:23:d9', 'vm-uuid': '4403714d-3521-4409-9c3b-59d655fc999d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.526 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:28 np0005558241 NetworkManager[50376]: <info>  [1765614208.5279] manager: (tapd462a8a0-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.530 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.531 248514 INFO nova.virt.libvirt.driver [-] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Instance spawned successfully.#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.531 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.538 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.539 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.540 248514 INFO os_vif [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:23:d9,bridge_name='br-int',has_traffic_filtering=True,id=d462a8a0-34ee-4682-ac5c-f7632b5ad39c,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd462a8a0-34')#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.547 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.557 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.557 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.558 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.558 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.559 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.559 248514 DEBUG nova.virt.libvirt.driver [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.565 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.623 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.624 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.624 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] No VIF found with MAC fa:16:3e:9f:23:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.625 248514 INFO nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Using config drive#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.650 248514 DEBUG nova.storage.rbd_utils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image 4403714d-3521-4409-9c3b-59d655fc999d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.659 248514 INFO nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Took 10.35 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.659 248514 DEBUG nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.858 248514 INFO nova.compute.manager [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Took 12.07 seconds to build instance.#033[00m
Dec 13 03:23:28 np0005558241 nova_compute[248510]: 2025-12-13 08:23:28.884 248514 DEBUG oslo_concurrency.lockutils [None req-397b0adb-0e85-4e31-96ce-63ffdc780a75 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1749: 321 pgs: 321 active+clean; 293 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 472 KiB/s rd, 5.8 MiB/s wr, 146 op/s
Dec 13 03:23:29 np0005558241 nova_compute[248510]: 2025-12-13 08:23:29.419 248514 INFO nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Creating config drive at /var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d/disk.config#033[00m
Dec 13 03:23:29 np0005558241 nova_compute[248510]: 2025-12-13 08:23:29.425 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8723ao83 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:29 np0005558241 nova_compute[248510]: 2025-12-13 08:23:29.563 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8723ao83" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:29 np0005558241 nova_compute[248510]: 2025-12-13 08:23:29.588 248514 DEBUG nova.storage.rbd_utils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] rbd image 4403714d-3521-4409-9c3b-59d655fc999d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:29 np0005558241 nova_compute[248510]: 2025-12-13 08:23:29.594 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d/disk.config 4403714d-3521-4409-9c3b-59d655fc999d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:29 np0005558241 nova_compute[248510]: 2025-12-13 08:23:29.627 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:29 np0005558241 nova_compute[248510]: 2025-12-13 08:23:29.633 248514 DEBUG nova.network.neutron [req-39a9ac49-2727-4f88-a298-ac658463767f req-d31e59af-4793-4034-b840-c6aba2de2d96 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Updated VIF entry in instance network info cache for port d462a8a0-34ee-4682-ac5c-f7632b5ad39c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:23:29 np0005558241 nova_compute[248510]: 2025-12-13 08:23:29.633 248514 DEBUG nova.network.neutron [req-39a9ac49-2727-4f88-a298-ac658463767f req-d31e59af-4793-4034-b840-c6aba2de2d96 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Updating instance_info_cache with network_info: [{"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:23:29 np0005558241 nova_compute[248510]: 2025-12-13 08:23:29.657 248514 DEBUG oslo_concurrency.lockutils [req-39a9ac49-2727-4f88-a298-ac658463767f req-d31e59af-4793-4034-b840-c6aba2de2d96 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:23:29 np0005558241 nova_compute[248510]: 2025-12-13 08:23:29.736 248514 DEBUG oslo_concurrency.processutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d/disk.config 4403714d-3521-4409-9c3b-59d655fc999d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:29 np0005558241 nova_compute[248510]: 2025-12-13 08:23:29.737 248514 INFO nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Deleting local config drive /var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d/disk.config because it was imported into RBD.#033[00m
Dec 13 03:23:29 np0005558241 kernel: tapd462a8a0-34: entered promiscuous mode
Dec 13 03:23:29 np0005558241 NetworkManager[50376]: <info>  [1765614209.7843] manager: (tapd462a8a0-34): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Dec 13 03:23:29 np0005558241 nova_compute[248510]: 2025-12-13 08:23:29.785 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:29Z|00265|binding|INFO|Claiming lport d462a8a0-34ee-4682-ac5c-f7632b5ad39c for this chassis.
Dec 13 03:23:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:29Z|00266|binding|INFO|d462a8a0-34ee-4682-ac5c-f7632b5ad39c: Claiming fa:16:3e:9f:23:d9 10.100.0.4
Dec 13 03:23:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.796 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:23:d9 10.100.0.4'], port_security=['fa:16:3e:9f:23:d9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4403714d-3521-4409-9c3b-59d655fc999d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06fbab937d6444558229b2351632e711', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fa325bd2-c57a-49fb-8dd9-f45405c95b4e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f47224ac-d05f-46db-ac07-cb476b38b044, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d462a8a0-34ee-4682-ac5c-f7632b5ad39c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:23:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.797 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d462a8a0-34ee-4682-ac5c-f7632b5ad39c in datapath 62193ff6-aaa1-401a-b1e0-512e67752a9e bound to our chassis#033[00m
Dec 13 03:23:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.800 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62193ff6-aaa1-401a-b1e0-512e67752a9e#033[00m
Dec 13 03:23:29 np0005558241 NetworkManager[50376]: <info>  [1765614209.8076] device (tapd462a8a0-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:23:29 np0005558241 NetworkManager[50376]: <info>  [1765614209.8085] device (tapd462a8a0-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:23:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:29Z|00267|binding|INFO|Setting lport d462a8a0-34ee-4682-ac5c-f7632b5ad39c ovn-installed in OVS
Dec 13 03:23:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:29Z|00268|binding|INFO|Setting lport d462a8a0-34ee-4682-ac5c-f7632b5ad39c up in Southbound
Dec 13 03:23:29 np0005558241 nova_compute[248510]: 2025-12-13 08:23:29.816 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:29 np0005558241 nova_compute[248510]: 2025-12-13 08:23:29.822 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.823 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6c71dd8e-9c8a-49b2-b5de-60f24ab2a5f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:29 np0005558241 systemd-machined[210538]: New machine qemu-38-instance-00000021.
Dec 13 03:23:29 np0005558241 systemd[1]: Started Virtual Machine qemu-38-instance-00000021.
Dec 13 03:23:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.865 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[363baef5-e07d-49d3-b5e6-57004be0731b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.869 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ec075711-373b-4862-93a1-ba9f4b86b7c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.904 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7a56e34c-f060-4f14-9bdd-7785c697e499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.945 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f97d5196-b21c-4648-8634-1166a1852fda]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62193ff6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:33:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676001, 'reachable_time': 16033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285607, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.968 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[70783894-0fa0-4b06-ab7e-15d15f67b1da]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62193ff6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676016, 'tstamp': 676016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285609, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap62193ff6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676019, 'tstamp': 676019}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285609, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.970 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62193ff6-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:29 np0005558241 nova_compute[248510]: 2025-12-13 08:23:29.972 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.973 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62193ff6-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.974 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:23:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.974 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62193ff6-a0, col_values=(('external_ids', {'iface-id': '67d122d2-811d-4aa8-bdde-aafc5e939b2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:29.975 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:23:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:23:30 np0005558241 nova_compute[248510]: 2025-12-13 08:23:30.443 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614210.4427738, 4403714d-3521-4409-9c3b-59d655fc999d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:23:30 np0005558241 nova_compute[248510]: 2025-12-13 08:23:30.443 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] VM Started (Lifecycle Event)#033[00m
Dec 13 03:23:30 np0005558241 nova_compute[248510]: 2025-12-13 08:23:30.475 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:23:30 np0005558241 nova_compute[248510]: 2025-12-13 08:23:30.481 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614210.4440446, 4403714d-3521-4409-9c3b-59d655fc999d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:23:30 np0005558241 nova_compute[248510]: 2025-12-13 08:23:30.481 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:23:30 np0005558241 nova_compute[248510]: 2025-12-13 08:23:30.875 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:23:30 np0005558241 nova_compute[248510]: 2025-12-13 08:23:30.880 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:23:30 np0005558241 nova_compute[248510]: 2025-12-13 08:23:30.905 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:23:31 np0005558241 nova_compute[248510]: 2025-12-13 08:23:31.032 248514 DEBUG nova.compute.manager [req-9c389711-e0ab-4dff-a8b3-45d3e5298d5f req-d96d0e14-5a2a-4dc1-b4d9-5ba7293a2673 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-plugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:31 np0005558241 nova_compute[248510]: 2025-12-13 08:23:31.032 248514 DEBUG oslo_concurrency.lockutils [req-9c389711-e0ab-4dff-a8b3-45d3e5298d5f req-d96d0e14-5a2a-4dc1-b4d9-5ba7293a2673 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:31 np0005558241 nova_compute[248510]: 2025-12-13 08:23:31.033 248514 DEBUG oslo_concurrency.lockutils [req-9c389711-e0ab-4dff-a8b3-45d3e5298d5f req-d96d0e14-5a2a-4dc1-b4d9-5ba7293a2673 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:31 np0005558241 nova_compute[248510]: 2025-12-13 08:23:31.033 248514 DEBUG oslo_concurrency.lockutils [req-9c389711-e0ab-4dff-a8b3-45d3e5298d5f req-d96d0e14-5a2a-4dc1-b4d9-5ba7293a2673 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:31 np0005558241 nova_compute[248510]: 2025-12-13 08:23:31.033 248514 DEBUG nova.compute.manager [req-9c389711-e0ab-4dff-a8b3-45d3e5298d5f req-d96d0e14-5a2a-4dc1-b4d9-5ba7293a2673 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] No waiting events found dispatching network-vif-plugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:23:31 np0005558241 nova_compute[248510]: 2025-12-13 08:23:31.033 248514 WARNING nova.compute.manager [req-9c389711-e0ab-4dff-a8b3-45d3e5298d5f req-d96d0e14-5a2a-4dc1-b4d9-5ba7293a2673 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received unexpected event network-vif-plugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:23:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1750: 321 pgs: 321 active+clean; 293 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 2.4 MiB/s wr, 61 op/s
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.021 248514 DEBUG nova.compute.manager [req-befbd2c6-bb2d-4edc-b20b-71d4f37d0e8f req-45c69d29-a00c-4594-a52f-2b420818f92a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received event network-vif-plugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.021 248514 DEBUG oslo_concurrency.lockutils [req-befbd2c6-bb2d-4edc-b20b-71d4f37d0e8f req-45c69d29-a00c-4594-a52f-2b420818f92a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4403714d-3521-4409-9c3b-59d655fc999d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.022 248514 DEBUG oslo_concurrency.lockutils [req-befbd2c6-bb2d-4edc-b20b-71d4f37d0e8f req-45c69d29-a00c-4594-a52f-2b420818f92a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.022 248514 DEBUG oslo_concurrency.lockutils [req-befbd2c6-bb2d-4edc-b20b-71d4f37d0e8f req-45c69d29-a00c-4594-a52f-2b420818f92a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.022 248514 DEBUG nova.compute.manager [req-befbd2c6-bb2d-4edc-b20b-71d4f37d0e8f req-45c69d29-a00c-4594-a52f-2b420818f92a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Processing event network-vif-plugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.023 248514 DEBUG nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.027 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614213.0268362, 4403714d-3521-4409-9c3b-59d655fc999d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.027 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.033 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.038 248514 INFO nova.virt.libvirt.driver [-] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Instance spawned successfully.#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.038 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.054 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.061 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.065 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.065 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.066 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.066 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.066 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.067 248514 DEBUG nova.virt.libvirt.driver [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:23:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1751: 321 pgs: 321 active+clean; 293 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 716 KiB/s rd, 2.4 MiB/s wr, 83 op/s
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.090 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.127 248514 INFO nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Took 10.38 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.128 248514 DEBUG nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.203 248514 INFO nova.compute.manager [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Took 12.12 seconds to build instance.#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.225 248514 DEBUG oslo_concurrency.lockutils [None req-aa38e2f1-2060-4842-a506-04dca2efd3c6 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:33 np0005558241 nova_compute[248510]: 2025-12-13 08:23:33.980 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:34 np0005558241 nova_compute[248510]: 2025-12-13 08:23:34.550 248514 DEBUG nova.compute.manager [req-31499c2f-6dee-47f3-be52-04dac05e3f11 req-5bd4bc8e-c17f-434c-8d38-05ca2e893450 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:34 np0005558241 nova_compute[248510]: 2025-12-13 08:23:34.551 248514 DEBUG nova.compute.manager [req-31499c2f-6dee-47f3-be52-04dac05e3f11 req-5bd4bc8e-c17f-434c-8d38-05ca2e893450 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing instance network info cache due to event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:23:34 np0005558241 nova_compute[248510]: 2025-12-13 08:23:34.551 248514 DEBUG oslo_concurrency.lockutils [req-31499c2f-6dee-47f3-be52-04dac05e3f11 req-5bd4bc8e-c17f-434c-8d38-05ca2e893450 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:23:34 np0005558241 nova_compute[248510]: 2025-12-13 08:23:34.551 248514 DEBUG oslo_concurrency.lockutils [req-31499c2f-6dee-47f3-be52-04dac05e3f11 req-5bd4bc8e-c17f-434c-8d38-05ca2e893450 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:23:34 np0005558241 nova_compute[248510]: 2025-12-13 08:23:34.552 248514 DEBUG nova.network.neutron [req-31499c2f-6dee-47f3-be52-04dac05e3f11 req-5bd4bc8e-c17f-434c-8d38-05ca2e893450 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:23:34 np0005558241 nova_compute[248510]: 2025-12-13 08:23:34.596 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:34 np0005558241 nova_compute[248510]: 2025-12-13 08:23:34.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:23:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:23:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1752: 321 pgs: 321 active+clean; 293 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Dec 13 03:23:35 np0005558241 nova_compute[248510]: 2025-12-13 08:23:35.618 248514 DEBUG nova.compute.manager [req-29ef8ab7-4462-4536-b0f4-81b224b76ef9 req-33ff0052-1b77-4688-8e34-c69250ef8b31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received event network-vif-plugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:35 np0005558241 nova_compute[248510]: 2025-12-13 08:23:35.618 248514 DEBUG oslo_concurrency.lockutils [req-29ef8ab7-4462-4536-b0f4-81b224b76ef9 req-33ff0052-1b77-4688-8e34-c69250ef8b31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4403714d-3521-4409-9c3b-59d655fc999d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:35 np0005558241 nova_compute[248510]: 2025-12-13 08:23:35.618 248514 DEBUG oslo_concurrency.lockutils [req-29ef8ab7-4462-4536-b0f4-81b224b76ef9 req-33ff0052-1b77-4688-8e34-c69250ef8b31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:35 np0005558241 nova_compute[248510]: 2025-12-13 08:23:35.619 248514 DEBUG oslo_concurrency.lockutils [req-29ef8ab7-4462-4536-b0f4-81b224b76ef9 req-33ff0052-1b77-4688-8e34-c69250ef8b31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:35 np0005558241 nova_compute[248510]: 2025-12-13 08:23:35.619 248514 DEBUG nova.compute.manager [req-29ef8ab7-4462-4536-b0f4-81b224b76ef9 req-33ff0052-1b77-4688-8e34-c69250ef8b31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] No waiting events found dispatching network-vif-plugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:23:35 np0005558241 nova_compute[248510]: 2025-12-13 08:23:35.619 248514 WARNING nova.compute.manager [req-29ef8ab7-4462-4536-b0f4-81b224b76ef9 req-33ff0052-1b77-4688-8e34-c69250ef8b31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received unexpected event network-vif-plugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c for instance with vm_state active and task_state None.#033[00m
Dec 13 03:23:36 np0005558241 nova_compute[248510]: 2025-12-13 08:23:36.503 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:36 np0005558241 nova_compute[248510]: 2025-12-13 08:23:36.504 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:36 np0005558241 nova_compute[248510]: 2025-12-13 08:23:36.521 248514 DEBUG nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:23:36 np0005558241 nova_compute[248510]: 2025-12-13 08:23:36.619 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:36 np0005558241 nova_compute[248510]: 2025-12-13 08:23:36.621 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:36 np0005558241 nova_compute[248510]: 2025-12-13 08:23:36.627 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:23:36 np0005558241 nova_compute[248510]: 2025-12-13 08:23:36.627 248514 INFO nova.compute.claims [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:23:36 np0005558241 nova_compute[248510]: 2025-12-13 08:23:36.657 248514 DEBUG nova.compute.manager [req-1f8fe6aa-f4de-4dd6-94a4-793ebb14239f req-a25a29a4-71b7-4bc3-a5aa-8291531b1f34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-changed-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:36 np0005558241 nova_compute[248510]: 2025-12-13 08:23:36.657 248514 DEBUG nova.compute.manager [req-1f8fe6aa-f4de-4dd6-94a4-793ebb14239f req-a25a29a4-71b7-4bc3-a5aa-8291531b1f34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing instance network info cache due to event network-changed-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:23:36 np0005558241 nova_compute[248510]: 2025-12-13 08:23:36.658 248514 DEBUG oslo_concurrency.lockutils [req-1f8fe6aa-f4de-4dd6-94a4-793ebb14239f req-a25a29a4-71b7-4bc3-a5aa-8291531b1f34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:23:36 np0005558241 nova_compute[248510]: 2025-12-13 08:23:36.658 248514 DEBUG oslo_concurrency.lockutils [req-1f8fe6aa-f4de-4dd6-94a4-793ebb14239f req-a25a29a4-71b7-4bc3-a5aa-8291531b1f34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:23:36 np0005558241 nova_compute[248510]: 2025-12-13 08:23:36.658 248514 DEBUG nova.network.neutron [req-1f8fe6aa-f4de-4dd6-94a4-793ebb14239f req-a25a29a4-71b7-4bc3-a5aa-8291531b1f34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing network info cache for port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:23:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1753: 321 pgs: 321 active+clean; 293 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 99 KiB/s wr, 95 op/s
Dec 13 03:23:37 np0005558241 nova_compute[248510]: 2025-12-13 08:23:37.227 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:37 np0005558241 nova_compute[248510]: 2025-12-13 08:23:37.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:23:37 np0005558241 nova_compute[248510]: 2025-12-13 08:23:37.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:23:37 np0005558241 nova_compute[248510]: 2025-12-13 08:23:37.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:23:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:23:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1088463483' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:23:37 np0005558241 nova_compute[248510]: 2025-12-13 08:23:37.813 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:37 np0005558241 nova_compute[248510]: 2025-12-13 08:23:37.821 248514 DEBUG nova.compute.provider_tree [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:23:37 np0005558241 nova_compute[248510]: 2025-12-13 08:23:37.846 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:23:37 np0005558241 nova_compute[248510]: 2025-12-13 08:23:37.848 248514 DEBUG nova.scheduler.client.report [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:23:37 np0005558241 nova_compute[248510]: 2025-12-13 08:23:37.884 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.263s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:37 np0005558241 nova_compute[248510]: 2025-12-13 08:23:37.885 248514 DEBUG nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:23:37 np0005558241 nova_compute[248510]: 2025-12-13 08:23:37.965 248514 DEBUG nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:23:37 np0005558241 nova_compute[248510]: 2025-12-13 08:23:37.966 248514 DEBUG nova.network.neutron [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:23:37 np0005558241 nova_compute[248510]: 2025-12-13 08:23:37.990 248514 INFO nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.015 248514 DEBUG nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.125 248514 DEBUG nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.127 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.128 248514 INFO nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Creating image(s)#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.153 248514 DEBUG nova.storage.rbd_utils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] rbd image ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.182 248514 DEBUG nova.storage.rbd_utils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] rbd image ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.206 248514 DEBUG nova.storage.rbd_utils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] rbd image ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.211 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.292 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.293 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.294 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.294 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.319 248514 DEBUG nova.storage.rbd_utils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] rbd image ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.324 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.356 248514 DEBUG nova.network.neutron [req-31499c2f-6dee-47f3-be52-04dac05e3f11 req-5bd4bc8e-c17f-434c-8d38-05ca2e893450 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updated VIF entry in instance network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.358 248514 DEBUG nova.network.neutron [req-31499c2f-6dee-47f3-be52-04dac05e3f11 req-5bd4bc8e-c17f-434c-8d38-05ca2e893450 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.362 248514 DEBUG nova.policy [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5faa7317a5cd4b748a984970f79ef52b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2be5ed2a3b1a405bb6891ecdc5cba68c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.405 248514 DEBUG oslo_concurrency.lockutils [req-31499c2f-6dee-47f3-be52-04dac05e3f11 req-5bd4bc8e-c17f-434c-8d38-05ca2e893450 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.680 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.759 248514 DEBUG nova.storage.rbd_utils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] resizing rbd image ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.942 248514 DEBUG nova.network.neutron [req-1f8fe6aa-f4de-4dd6-94a4-793ebb14239f req-a25a29a4-71b7-4bc3-a5aa-8291531b1f34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updated VIF entry in instance network info cache for port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.944 248514 DEBUG nova.network.neutron [req-1f8fe6aa-f4de-4dd6-94a4-793ebb14239f req-a25a29a4-71b7-4bc3-a5aa-8291531b1f34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updating instance_info_cache with network_info: [{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:23:38 np0005558241 nova_compute[248510]: 2025-12-13 08:23:38.970 248514 DEBUG oslo_concurrency.lockutils [req-1f8fe6aa-f4de-4dd6-94a4-793ebb14239f req-a25a29a4-71b7-4bc3-a5aa-8291531b1f34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:23:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 319 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.0 MiB/s wr, 167 op/s
Dec 13 03:23:39 np0005558241 nova_compute[248510]: 2025-12-13 08:23:39.358 248514 DEBUG nova.objects.instance [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lazy-loading 'migration_context' on Instance uuid ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:23:39 np0005558241 nova_compute[248510]: 2025-12-13 08:23:39.376 248514 DEBUG nova.compute.manager [req-22dfff60-f9f6-4183-b3b2-84031cb9f45f req-3c71f51e-d560-4fd2-99c0-bc0fd01e7ef5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:39 np0005558241 nova_compute[248510]: 2025-12-13 08:23:39.376 248514 DEBUG nova.compute.manager [req-22dfff60-f9f6-4183-b3b2-84031cb9f45f req-3c71f51e-d560-4fd2-99c0-bc0fd01e7ef5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing instance network info cache due to event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:23:39 np0005558241 nova_compute[248510]: 2025-12-13 08:23:39.377 248514 DEBUG oslo_concurrency.lockutils [req-22dfff60-f9f6-4183-b3b2-84031cb9f45f req-3c71f51e-d560-4fd2-99c0-bc0fd01e7ef5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:23:39 np0005558241 nova_compute[248510]: 2025-12-13 08:23:39.377 248514 DEBUG oslo_concurrency.lockutils [req-22dfff60-f9f6-4183-b3b2-84031cb9f45f req-3c71f51e-d560-4fd2-99c0-bc0fd01e7ef5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:23:39 np0005558241 nova_compute[248510]: 2025-12-13 08:23:39.377 248514 DEBUG nova.network.neutron [req-22dfff60-f9f6-4183-b3b2-84031cb9f45f req-3c71f51e-d560-4fd2-99c0-bc0fd01e7ef5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:23:39 np0005558241 nova_compute[248510]: 2025-12-13 08:23:39.380 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:23:39 np0005558241 nova_compute[248510]: 2025-12-13 08:23:39.381 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Ensure instance console log exists: /var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:23:39 np0005558241 nova_compute[248510]: 2025-12-13 08:23:39.381 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:39 np0005558241 nova_compute[248510]: 2025-12-13 08:23:39.382 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:39 np0005558241 nova_compute[248510]: 2025-12-13 08:23:39.382 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:39 np0005558241 nova_compute[248510]: 2025-12-13 08:23:39.598 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:23:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:23:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:23:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:23:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:23:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:23:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:23:40 np0005558241 nova_compute[248510]: 2025-12-13 08:23:40.277 248514 DEBUG nova.compute.manager [req-12da60da-7496-47c6-b24c-9e83546c2e6a req-ef387d21-8ba7-43b9-a219-29810fbe93c2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-changed-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:40 np0005558241 nova_compute[248510]: 2025-12-13 08:23:40.280 248514 DEBUG nova.compute.manager [req-12da60da-7496-47c6-b24c-9e83546c2e6a req-ef387d21-8ba7-43b9-a219-29810fbe93c2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing instance network info cache due to event network-changed-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:23:40 np0005558241 nova_compute[248510]: 2025-12-13 08:23:40.280 248514 DEBUG oslo_concurrency.lockutils [req-12da60da-7496-47c6-b24c-9e83546c2e6a req-ef387d21-8ba7-43b9-a219-29810fbe93c2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:23:40 np0005558241 nova_compute[248510]: 2025-12-13 08:23:40.281 248514 DEBUG oslo_concurrency.lockutils [req-12da60da-7496-47c6-b24c-9e83546c2e6a req-ef387d21-8ba7-43b9-a219-29810fbe93c2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:23:40 np0005558241 nova_compute[248510]: 2025-12-13 08:23:40.281 248514 DEBUG nova.network.neutron [req-12da60da-7496-47c6-b24c-9e83546c2e6a req-ef387d21-8ba7-43b9-a219-29810fbe93c2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing network info cache for port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:23:40 np0005558241 nova_compute[248510]: 2025-12-13 08:23:40.759 248514 DEBUG oslo_concurrency.lockutils [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "interface-3b43a9c7-85e7-4558-bd2f-e4712882021e-815f5388-ae4c-4748-ae1e-a35179c687ad" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:40 np0005558241 nova_compute[248510]: 2025-12-13 08:23:40.760 248514 DEBUG oslo_concurrency.lockutils [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-3b43a9c7-85e7-4558-bd2f-e4712882021e-815f5388-ae4c-4748-ae1e-a35179c687ad" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:40 np0005558241 nova_compute[248510]: 2025-12-13 08:23:40.761 248514 DEBUG nova.objects.instance [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'flavor' on Instance uuid 3b43a9c7-85e7-4558-bd2f-e4712882021e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:23:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1755: 321 pgs: 321 active+clean; 319 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 957 KiB/s wr, 157 op/s
Dec 13 03:23:42 np0005558241 nova_compute[248510]: 2025-12-13 08:23:42.277 248514 DEBUG nova.network.neutron [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Successfully created port: 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:23:42 np0005558241 nova_compute[248510]: 2025-12-13 08:23:42.396 248514 DEBUG nova.objects.instance [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'pci_requests' on Instance uuid 3b43a9c7-85e7-4558-bd2f-e4712882021e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:23:42 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:42Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:27:85 10.100.0.3
Dec 13 03:23:42 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:42Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:27:85 10.100.0.3
Dec 13 03:23:42 np0005558241 nova_compute[248510]: 2025-12-13 08:23:42.593 248514 DEBUG nova.network.neutron [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:23:42 np0005558241 nova_compute[248510]: 2025-12-13 08:23:42.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:23:42 np0005558241 nova_compute[248510]: 2025-12-13 08:23:42.916 248514 DEBUG nova.policy [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee4333dff699428ca3ae5201855ab430', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea37c93820f34312bc386ea952ecd94f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:23:42 np0005558241 podman[285842]: 2025-12-13 08:23:42.976636002 +0000 UTC m=+0.059613922 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 03:23:42 np0005558241 podman[285841]: 2025-12-13 08:23:42.984574248 +0000 UTC m=+0.073145006 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 13 03:23:43 np0005558241 podman[285840]: 2025-12-13 08:23:43.010968889 +0000 UTC m=+0.101071895 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:23:43 np0005558241 nova_compute[248510]: 2025-12-13 08:23:43.010 248514 DEBUG nova.network.neutron [req-12da60da-7496-47c6-b24c-9e83546c2e6a req-ef387d21-8ba7-43b9-a219-29810fbe93c2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updated VIF entry in instance network info cache for port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:23:43 np0005558241 nova_compute[248510]: 2025-12-13 08:23:43.010 248514 DEBUG nova.network.neutron [req-12da60da-7496-47c6-b24c-9e83546c2e6a req-ef387d21-8ba7-43b9-a219-29810fbe93c2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updating instance_info_cache with network_info: [{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:23:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 346 MiB data, 534 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 178 op/s
Dec 13 03:23:43 np0005558241 nova_compute[248510]: 2025-12-13 08:23:43.098 248514 DEBUG oslo_concurrency.lockutils [req-12da60da-7496-47c6-b24c-9e83546c2e6a req-ef387d21-8ba7-43b9-a219-29810fbe93c2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:23:43 np0005558241 nova_compute[248510]: 2025-12-13 08:23:43.535 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:43 np0005558241 nova_compute[248510]: 2025-12-13 08:23:43.722 248514 DEBUG nova.network.neutron [req-22dfff60-f9f6-4183-b3b2-84031cb9f45f req-3c71f51e-d560-4fd2-99c0-bc0fd01e7ef5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updated VIF entry in instance network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:23:43 np0005558241 nova_compute[248510]: 2025-12-13 08:23:43.723 248514 DEBUG nova.network.neutron [req-22dfff60-f9f6-4183-b3b2-84031cb9f45f req-3c71f51e-d560-4fd2-99c0-bc0fd01e7ef5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updating instance_info_cache with network_info: [{"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:23:43 np0005558241 nova_compute[248510]: 2025-12-13 08:23:43.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:23:43 np0005558241 nova_compute[248510]: 2025-12-13 08:23:43.876 248514 DEBUG oslo_concurrency.lockutils [req-22dfff60-f9f6-4183-b3b2-84031cb9f45f req-3c71f51e-d560-4fd2-99c0-bc0fd01e7ef5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:23:44 np0005558241 nova_compute[248510]: 2025-12-13 08:23:44.512 248514 DEBUG nova.compute.manager [req-f8a3e58d-6ed6-499d-a874-93903825d1e3 req-fbb23112-45f8-4ddf-97ad-7e1b383f1408 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:44 np0005558241 nova_compute[248510]: 2025-12-13 08:23:44.513 248514 DEBUG nova.compute.manager [req-f8a3e58d-6ed6-499d-a874-93903825d1e3 req-fbb23112-45f8-4ddf-97ad-7e1b383f1408 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing instance network info cache due to event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:23:44 np0005558241 nova_compute[248510]: 2025-12-13 08:23:44.513 248514 DEBUG oslo_concurrency.lockutils [req-f8a3e58d-6ed6-499d-a874-93903825d1e3 req-fbb23112-45f8-4ddf-97ad-7e1b383f1408 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:23:44 np0005558241 nova_compute[248510]: 2025-12-13 08:23:44.514 248514 DEBUG oslo_concurrency.lockutils [req-f8a3e58d-6ed6-499d-a874-93903825d1e3 req-fbb23112-45f8-4ddf-97ad-7e1b383f1408 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:23:44 np0005558241 nova_compute[248510]: 2025-12-13 08:23:44.514 248514 DEBUG nova.network.neutron [req-f8a3e58d-6ed6-499d-a874-93903825d1e3 req-fbb23112-45f8-4ddf-97ad-7e1b383f1408 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:23:44 np0005558241 nova_compute[248510]: 2025-12-13 08:23:44.601 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:44 np0005558241 nova_compute[248510]: 2025-12-13 08:23:44.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:23:44 np0005558241 nova_compute[248510]: 2025-12-13 08:23:44.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:23:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:23:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1757: 321 pgs: 321 active+clean; 372 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 204 op/s
Dec 13 03:23:45 np0005558241 nova_compute[248510]: 2025-12-13 08:23:45.505 248514 DEBUG nova.network.neutron [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Successfully updated port: 815f5388-ae4c-4748-ae1e-a35179c687ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:23:45 np0005558241 nova_compute[248510]: 2025-12-13 08:23:45.638 248514 DEBUG oslo_concurrency.lockutils [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:23:45 np0005558241 nova_compute[248510]: 2025-12-13 08:23:45.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:23:45 np0005558241 nova_compute[248510]: 2025-12-13 08:23:45.774 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:23:46 np0005558241 nova_compute[248510]: 2025-12-13 08:23:46.140 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:46 np0005558241 nova_compute[248510]: 2025-12-13 08:23:46.141 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:46 np0005558241 nova_compute[248510]: 2025-12-13 08:23:46.141 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:46 np0005558241 nova_compute[248510]: 2025-12-13 08:23:46.141 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:23:46 np0005558241 nova_compute[248510]: 2025-12-13 08:23:46.141 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:23:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/650964901' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:23:46 np0005558241 nova_compute[248510]: 2025-12-13 08:23:46.719 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:46 np0005558241 nova_compute[248510]: 2025-12-13 08:23:46.903 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:23:46 np0005558241 nova_compute[248510]: 2025-12-13 08:23:46.903 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:23:46 np0005558241 nova_compute[248510]: 2025-12-13 08:23:46.907 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:23:46 np0005558241 nova_compute[248510]: 2025-12-13 08:23:46.907 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:23:46 np0005558241 nova_compute[248510]: 2025-12-13 08:23:46.911 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:23:46 np0005558241 nova_compute[248510]: 2025-12-13 08:23:46.911 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:23:46 np0005558241 nova_compute[248510]: 2025-12-13 08:23:46.914 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:23:46 np0005558241 nova_compute[248510]: 2025-12-13 08:23:46.914 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:23:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1758: 321 pgs: 321 active+clean; 372 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 141 op/s
Dec 13 03:23:47 np0005558241 nova_compute[248510]: 2025-12-13 08:23:47.113 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:23:47 np0005558241 nova_compute[248510]: 2025-12-13 08:23:47.114 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3560MB free_disk=59.8443395672366GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:23:47 np0005558241 nova_compute[248510]: 2025-12-13 08:23:47.115 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:47 np0005558241 nova_compute[248510]: 2025-12-13 08:23:47.115 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:47 np0005558241 nova_compute[248510]: 2025-12-13 08:23:47.305 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 3b43a9c7-85e7-4558-bd2f-e4712882021e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:23:47 np0005558241 nova_compute[248510]: 2025-12-13 08:23:47.306 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance dc64fea4-e9a8-47e7-8a3a-d01897fc81de actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:23:47 np0005558241 nova_compute[248510]: 2025-12-13 08:23:47.307 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance d503913e-a05e-47d4-9366-db4426b9aac1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:23:47 np0005558241 nova_compute[248510]: 2025-12-13 08:23:47.307 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 4403714d-3521-4409-9c3b-59d655fc999d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:23:47 np0005558241 nova_compute[248510]: 2025-12-13 08:23:47.308 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:23:47 np0005558241 nova_compute[248510]: 2025-12-13 08:23:47.308 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:23:47 np0005558241 nova_compute[248510]: 2025-12-13 08:23:47.309 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:23:47 np0005558241 nova_compute[248510]: 2025-12-13 08:23:47.432 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:47 np0005558241 nova_compute[248510]: 2025-12-13 08:23:47.661 248514 DEBUG nova.network.neutron [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Successfully updated port: 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:23:47 np0005558241 nova_compute[248510]: 2025-12-13 08:23:47.800 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:23:47 np0005558241 nova_compute[248510]: 2025-12-13 08:23:47.800 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquired lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:23:47 np0005558241 nova_compute[248510]: 2025-12-13 08:23:47.800 248514 DEBUG nova.network.neutron [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:23:48 np0005558241 nova_compute[248510]: 2025-12-13 08:23:48.072 248514 DEBUG nova.network.neutron [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:23:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:23:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/974391493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:23:48 np0005558241 nova_compute[248510]: 2025-12-13 08:23:48.458 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:48 np0005558241 nova_compute[248510]: 2025-12-13 08:23:48.465 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:23:48 np0005558241 nova_compute[248510]: 2025-12-13 08:23:48.539 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:48 np0005558241 nova_compute[248510]: 2025-12-13 08:23:48.587 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:23:48 np0005558241 nova_compute[248510]: 2025-12-13 08:23:48.694 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:23:48 np0005558241 nova_compute[248510]: 2025-12-13 08:23:48.695 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:48 np0005558241 nova_compute[248510]: 2025-12-13 08:23:48.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1759: 321 pgs: 321 active+clean; 390 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.6 MiB/s wr, 163 op/s
Dec 13 03:23:49 np0005558241 nova_compute[248510]: 2025-12-13 08:23:49.604 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:49 np0005558241 nova_compute[248510]: 2025-12-13 08:23:49.786 248514 DEBUG nova.network.neutron [req-f8a3e58d-6ed6-499d-a874-93903825d1e3 req-fbb23112-45f8-4ddf-97ad-7e1b383f1408 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updated VIF entry in instance network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:23:49 np0005558241 nova_compute[248510]: 2025-12-13 08:23:49.787 248514 DEBUG nova.network.neutron [req-f8a3e58d-6ed6-499d-a874-93903825d1e3 req-fbb23112-45f8-4ddf-97ad-7e1b383f1408 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:23:49 np0005558241 nova_compute[248510]: 2025-12-13 08:23:49.821 248514 DEBUG oslo_concurrency.lockutils [req-f8a3e58d-6ed6-499d-a874-93903825d1e3 req-fbb23112-45f8-4ddf-97ad-7e1b383f1408 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:23:49 np0005558241 nova_compute[248510]: 2025-12-13 08:23:49.822 248514 DEBUG oslo_concurrency.lockutils [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:23:49 np0005558241 nova_compute[248510]: 2025-12-13 08:23:49.822 248514 DEBUG nova.network.neutron [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:23:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.132 248514 WARNING nova.network.neutron [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] 1ca92864-3b70-4794-9db1-fa08128cef92 already exists in list: networks containing: ['1ca92864-3b70-4794-9db1-fa08128cef92']. ignoring it#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.690 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:23:50 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:50Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9f:23:d9 10.100.0.4
Dec 13 03:23:50 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:50Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9f:23:d9 10.100.0.4
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.816 248514 DEBUG nova.network.neutron [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updating instance_info_cache with network_info: [{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.859 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Releasing lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.859 248514 DEBUG nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Instance network_info: |[{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.862 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Start _get_guest_xml network_info=[{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.867 248514 WARNING nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.874 248514 DEBUG nova.virt.libvirt.host [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.875 248514 DEBUG nova.virt.libvirt.host [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.879 248514 DEBUG nova.virt.libvirt.host [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.879 248514 DEBUG nova.virt.libvirt.host [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.880 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.880 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.880 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.880 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.881 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.881 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.881 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.882 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.882 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.882 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.882 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.882 248514 DEBUG nova.virt.hardware [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:23:50 np0005558241 nova_compute[248510]: 2025-12-13 08:23:50.886 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1760: 321 pgs: 321 active+clean; 390 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 4.7 MiB/s wr, 91 op/s
Dec 13 03:23:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:23:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4019114998' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:23:51 np0005558241 nova_compute[248510]: 2025-12-13 08:23:51.464 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:51 np0005558241 nova_compute[248510]: 2025-12-13 08:23:51.487 248514 DEBUG nova.storage.rbd_utils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] rbd image ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:51 np0005558241 nova_compute[248510]: 2025-12-13 08:23:51.491 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:23:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/514353425' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.063 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.065 248514 DEBUG nova.virt.libvirt.vif [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:23:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1901007496',display_name='tempest-AttachInterfacesUnderV243Test-server-1901007496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1901007496',id=34,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAtqDaZq3IK7Bvm/s6fqCH+TSHLKWsERX0aPeV408BGJSMsRQoO1UjptArZn77j735/fg+c2goyKkkvVN7UQeehgaqDzhHMveiUhv8vzTex1upUSSOpKWfKRhsOR5NuVjA==',key_name='tempest-keypair-862524999',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2be5ed2a3b1a405bb6891ecdc5cba68c',ramdisk_id='',reservation_id='r-l244eynx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1008670327',owner_user_name='tempest-AttachInterfacesUnderV243Test-1008670327-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:23:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5faa7317a5cd4b748a984970f79ef52b',uuid=ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.066 248514 DEBUG nova.network.os_vif_util [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Converting VIF {"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.067 248514 DEBUG nova.network.os_vif_util [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:58:17,bridge_name='br-int',has_traffic_filtering=True,id=5d8e1c45-4a7d-4fab-9d17-2bb5837e6134,network=Network(e08cb57c-0bd2-4c88-a4f8-e9d9be925301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d8e1c45-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.068 248514 DEBUG nova.objects.instance [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lazy-loading 'pci_devices' on Instance uuid ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.084 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:23:52 np0005558241 nova_compute[248510]:  <uuid>ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5</uuid>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:  <name>instance-00000022</name>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-1901007496</nova:name>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:23:50</nova:creationTime>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:23:52 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:        <nova:user uuid="5faa7317a5cd4b748a984970f79ef52b">tempest-AttachInterfacesUnderV243Test-1008670327-project-member</nova:user>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:        <nova:project uuid="2be5ed2a3b1a405bb6891ecdc5cba68c">tempest-AttachInterfacesUnderV243Test-1008670327</nova:project>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:        <nova:port uuid="5d8e1c45-4a7d-4fab-9d17-2bb5837e6134">
Dec 13 03:23:52 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <entry name="serial">ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5</entry>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <entry name="uuid">ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5</entry>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk">
Dec 13 03:23:52 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:23:52 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk.config">
Dec 13 03:23:52 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:23:52 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:83:58:17"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <target dev="tap5d8e1c45-4a"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5/console.log" append="off"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:23:52 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:23:52 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:23:52 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:23:52 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.084 248514 DEBUG nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Preparing to wait for external event network-vif-plugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.085 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.085 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.085 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.086 248514 DEBUG nova.virt.libvirt.vif [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:23:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1901007496',display_name='tempest-AttachInterfacesUnderV243Test-server-1901007496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1901007496',id=34,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAtqDaZq3IK7Bvm/s6fqCH+TSHLKWsERX0aPeV408BGJSMsRQoO1UjptArZn77j735/fg+c2goyKkkvVN7UQeehgaqDzhHMveiUhv8vzTex1upUSSOpKWfKRhsOR5NuVjA==',key_name='tempest-keypair-862524999',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2be5ed2a3b1a405bb6891ecdc5cba68c',ramdisk_id='',reservation_id='r-l244eynx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1008670327',owner_user_name='tempest-AttachInterfacesUnderV243Test-1008670327-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:23:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5faa7317a5cd4b748a984970f79ef52b',uuid=ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.086 248514 DEBUG nova.network.os_vif_util [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Converting VIF {"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.086 248514 DEBUG nova.network.os_vif_util [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:58:17,bridge_name='br-int',has_traffic_filtering=True,id=5d8e1c45-4a7d-4fab-9d17-2bb5837e6134,network=Network(e08cb57c-0bd2-4c88-a4f8-e9d9be925301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d8e1c45-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.087 248514 DEBUG os_vif [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:58:17,bridge_name='br-int',has_traffic_filtering=True,id=5d8e1c45-4a7d-4fab-9d17-2bb5837e6134,network=Network(e08cb57c-0bd2-4c88-a4f8-e9d9be925301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d8e1c45-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.087 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.088 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.088 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.092 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.092 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d8e1c45-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.092 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5d8e1c45-4a, col_values=(('external_ids', {'iface-id': '5d8e1c45-4a7d-4fab-9d17-2bb5837e6134', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:83:58:17', 'vm-uuid': 'ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:52 np0005558241 NetworkManager[50376]: <info>  [1765614232.0950] manager: (tap5d8e1c45-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.096 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.100 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.103 248514 INFO os_vif [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:58:17,bridge_name='br-int',has_traffic_filtering=True,id=5d8e1c45-4a7d-4fab-9d17-2bb5837e6134,network=Network(e08cb57c-0bd2-4c88-a4f8-e9d9be925301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d8e1c45-4a')#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.159 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.159 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.159 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] No VIF found with MAC fa:16:3e:83:58:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.160 248514 INFO nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Using config drive#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.182 248514 DEBUG nova.storage.rbd_utils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] rbd image ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.769 248514 INFO nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Creating config drive at /var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5/disk.config#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.775 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk53yhg8s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.917 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk53yhg8s" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.943 248514 DEBUG nova.storage.rbd_utils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] rbd image ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:23:52 np0005558241 nova_compute[248510]: 2025-12-13 08:23:52.947 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5/disk.config ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1761: 321 pgs: 321 active+clean; 400 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 413 KiB/s rd, 5.0 MiB/s wr, 100 op/s
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.093 248514 DEBUG oslo_concurrency.processutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5/disk.config ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.095 248514 INFO nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Deleting local config drive /var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5/disk.config because it was imported into RBD.#033[00m
Dec 13 03:23:53 np0005558241 kernel: tap5d8e1c45-4a: entered promiscuous mode
Dec 13 03:23:53 np0005558241 NetworkManager[50376]: <info>  [1765614233.1541] manager: (tap5d8e1c45-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/126)
Dec 13 03:23:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:53Z|00269|binding|INFO|Claiming lport 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 for this chassis.
Dec 13 03:23:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:53Z|00270|binding|INFO|5d8e1c45-4a7d-4fab-9d17-2bb5837e6134: Claiming fa:16:3e:83:58:17 10.100.0.7
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.156 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.166 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:58:17 10.100.0.7'], port_security=['fa:16:3e:83:58:17 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e08cb57c-0bd2-4c88-a4f8-e9d9be925301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2be5ed2a3b1a405bb6891ecdc5cba68c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '16148d83-2b30-49dd-9926-d0fb6490d2c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=615ba7d0-57bb-42d2-948a-6426e9af82d9, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=5d8e1c45-4a7d-4fab-9d17-2bb5837e6134) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.168 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 in datapath e08cb57c-0bd2-4c88-a4f8-e9d9be925301 bound to our chassis#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.170 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e08cb57c-0bd2-4c88-a4f8-e9d9be925301#033[00m
Dec 13 03:23:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:53Z|00271|binding|INFO|Setting lport 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 ovn-installed in OVS
Dec 13 03:23:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:53Z|00272|binding|INFO|Setting lport 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 up in Southbound
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.175 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.179 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.186 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a6fe8d67-f7c5-4c3c-a171-31838f3af3a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.187 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape08cb57c-01 in ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.189 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape08cb57c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.189 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[211d9576-878b-4df3-8c9c-f2ee3a79fa19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.190 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[54a6236a-fe6c-4b17-8c3b-b89cf7c90cd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:53 np0005558241 systemd-machined[210538]: New machine qemu-39-instance-00000022.
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.203 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[ca8f6ef6-8c75-445c-b51b-ffbc7a8aa387]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:53 np0005558241 systemd[1]: Started Virtual Machine qemu-39-instance-00000022.
Dec 13 03:23:53 np0005558241 systemd-udevd[286081]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.217 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a89e25-4d10-485b-a534-ece57b94ba8a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:53 np0005558241 NetworkManager[50376]: <info>  [1765614233.2282] device (tap5d8e1c45-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:23:53 np0005558241 NetworkManager[50376]: <info>  [1765614233.2291] device (tap5d8e1c45-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.251 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[41de7e64-b6bc-4c62-8b20-334ebaea6200]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:53 np0005558241 systemd-udevd[286084]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:23:53 np0005558241 NetworkManager[50376]: <info>  [1765614233.2587] manager: (tape08cb57c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/127)
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.258 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fc070998-3002-4db3-921f-c7ca0bf76237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.295 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[41ae5c37-40f2-4545-8ba3-993b0662e2ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.298 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[440ac26b-877d-4811-a65c-828f0238631e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:53 np0005558241 NetworkManager[50376]: <info>  [1765614233.3239] device (tape08cb57c-00): carrier: link connected
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.332 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[68293124-0e2b-49b2-bcd1-73fc931517f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.357 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e38a28d8-4c37-4971-b2ee-215ac18e5980]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape08cb57c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:b8:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681053, 'reachable_time': 20902, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286111, 'error': None, 'target': 'ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.378 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2f28d6-fc9d-4797-88b5-0594c0113dbd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe06:b819'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 681053, 'tstamp': 681053}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286112, 'error': None, 'target': 'ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.400 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[50cb1729-22d8-4d84-b65a-b35a17312ba4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape08cb57c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:b8:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681053, 'reachable_time': 20902, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286113, 'error': None, 'target': 'ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.435 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[234ccc18-e6d1-455c-8ad0-fb213b181e5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.512 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78f38083-819e-4433-b22e-5125e8c2fc11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.514 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape08cb57c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.514 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.514 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape08cb57c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.516 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:53 np0005558241 NetworkManager[50376]: <info>  [1765614233.5174] manager: (tape08cb57c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Dec 13 03:23:53 np0005558241 kernel: tape08cb57c-00: entered promiscuous mode
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.519 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.523 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape08cb57c-00, col_values=(('external_ids', {'iface-id': '54f0e9c1-d2c9-4d7a-b554-d7af88f55e22'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.524 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:53Z|00273|binding|INFO|Releasing lport 54f0e9c1-d2c9-4d7a-b554-d7af88f55e22 from this chassis (sb_readonly=0)
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.540 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.542 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e08cb57c-0bd2-4c88-a4f8-e9d9be925301.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e08cb57c-0bd2-4c88-a4f8-e9d9be925301.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.543 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[143c0da7-9fa9-4f17-a5d4-22f5b52dbcb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.544 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-e08cb57c-0bd2-4c88-a4f8-e9d9be925301
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/e08cb57c-0bd2-4c88-a4f8-e9d9be925301.pid.haproxy
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID e08cb57c-0bd2-4c88-a4f8-e9d9be925301
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.544 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301', 'env', 'PROCESS_TAG=haproxy-e08cb57c-0bd2-4c88-a4f8-e9d9be925301', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e08cb57c-0bd2-4c88-a4f8-e9d9be925301.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.625 248514 DEBUG nova.network.neutron [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.648 248514 DEBUG oslo_concurrency.lockutils [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.652 248514 DEBUG nova.virt.libvirt.vif [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:22:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2036693049',display_name='tempest-tempest.common.compute-instance-2036693049',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2036693049',id=30,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-nhq68swv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=3b43a9c7-85e7-4558-bd2f-e4712882021e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.652 248514 DEBUG nova.network.os_vif_util [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.653 248514 DEBUG nova.network.os_vif_util [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.653 248514 DEBUG os_vif [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.654 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.654 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.655 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.658 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.658 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap815f5388-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.659 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap815f5388-ae, col_values=(('external_ids', {'iface-id': '815f5388-ae4c-4748-ae1e-a35179c687ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:45:d8', 'vm-uuid': '3b43a9c7-85e7-4558-bd2f-e4712882021e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.660 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:53 np0005558241 NetworkManager[50376]: <info>  [1765614233.6614] manager: (tap815f5388-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.661 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.668 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.670 248514 INFO os_vif [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae')#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.672 248514 DEBUG nova.virt.libvirt.vif [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:22:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2036693049',display_name='tempest-tempest.common.compute-instance-2036693049',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2036693049',id=30,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-nhq68swv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=3b43a9c7-85e7-4558-bd2f-e4712882021e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.672 248514 DEBUG nova.network.os_vif_util [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.673 248514 DEBUG nova.network.os_vif_util [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.676 248514 DEBUG nova.virt.libvirt.guest [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] attach device xml: <interface type="ethernet">
Dec 13 03:23:53 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:a8:45:d8"/>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:  <target dev="tap815f5388-ae"/>
Dec 13 03:23:53 np0005558241 nova_compute[248510]: </interface>
Dec 13 03:23:53 np0005558241 nova_compute[248510]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec 13 03:23:53 np0005558241 NetworkManager[50376]: <info>  [1765614233.6864] manager: (tap815f5388-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Dec 13 03:23:53 np0005558241 kernel: tap815f5388-ae: entered promiscuous mode
Dec 13 03:23:53 np0005558241 systemd-udevd[286094]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:23:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:53Z|00274|binding|INFO|Claiming lport 815f5388-ae4c-4748-ae1e-a35179c687ad for this chassis.
Dec 13 03:23:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:53Z|00275|binding|INFO|815f5388-ae4c-4748-ae1e-a35179c687ad: Claiming fa:16:3e:a8:45:d8 10.100.0.9
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.689 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:53.699 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:45:d8 10.100.0.9'], port_security=['fa:16:3e:a8:45:d8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1134538882', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '3b43a9c7-85e7-4558-bd2f-e4712882021e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1134538882', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=815f5388-ae4c-4748-ae1e-a35179c687ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:23:53 np0005558241 NetworkManager[50376]: <info>  [1765614233.7019] device (tap815f5388-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:23:53 np0005558241 NetworkManager[50376]: <info>  [1765614233.7033] device (tap815f5388-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.709 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:53Z|00276|binding|INFO|Setting lport 815f5388-ae4c-4748-ae1e-a35179c687ad ovn-installed in OVS
Dec 13 03:23:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:53Z|00277|binding|INFO|Setting lport 815f5388-ae4c-4748-ae1e-a35179c687ad up in Southbound
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.712 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.787 248514 DEBUG nova.virt.libvirt.driver [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.787 248514 DEBUG nova.virt.libvirt.driver [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.787 248514 DEBUG nova.virt.libvirt.driver [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:37:48:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.788 248514 DEBUG nova.virt.libvirt.driver [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:a8:45:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.818 248514 DEBUG nova.virt.libvirt.guest [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:23:53 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:  <nova:name>tempest-tempest.common.compute-instance-2036693049</nova:name>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:23:53</nova:creationTime>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:23:53 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:    <nova:port uuid="bc4158d8-4963-4009-a434-0a0106941c9d">
Dec 13 03:23:53 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:    <nova:port uuid="815f5388-ae4c-4748-ae1e-a35179c687ad">
Dec 13 03:23:53 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:23:53 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:23:53 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:23:53 np0005558241 nova_compute[248510]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.841 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614233.8403728, ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.841 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] VM Started (Lifecycle Event)#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.845 248514 DEBUG oslo_concurrency.lockutils [None req-8a7de1e7-8468-4a41-a249-57b0f1c2709a ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-3b43a9c7-85e7-4558-bd2f-e4712882021e-815f5388-ae4c-4748-ae1e-a35179c687ad" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 13.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.872 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.878 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614233.8405333, ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:23:53 np0005558241 nova_compute[248510]: 2025-12-13 08:23:53.878 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:23:53 np0005558241 podman[286192]: 2025-12-13 08:23:53.98387562 +0000 UTC m=+0.061516399 container create 502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Dec 13 03:23:54 np0005558241 systemd[1]: Started libpod-conmon-502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef.scope.
Dec 13 03:23:54 np0005558241 podman[286192]: 2025-12-13 08:23:53.951270855 +0000 UTC m=+0.028911654 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:23:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:23:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c545acf84645020cc8882e401df567b891cd88033c1b634e5d68fb95026bcdfb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:23:54 np0005558241 nova_compute[248510]: 2025-12-13 08:23:54.111 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:23:54 np0005558241 nova_compute[248510]: 2025-12-13 08:23:54.117 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:23:54 np0005558241 nova_compute[248510]: 2025-12-13 08:23:54.143 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:23:54 np0005558241 podman[286192]: 2025-12-13 08:23:54.191728658 +0000 UTC m=+0.269369457 container init 502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 03:23:54 np0005558241 podman[286192]: 2025-12-13 08:23:54.199161201 +0000 UTC m=+0.276801980 container start 502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 03:23:54 np0005558241 neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301[286207]: [NOTICE]   (286217) : New worker (286236) forked
Dec 13 03:23:54 np0005558241 neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301[286207]: [NOTICE]   (286217) : Loading success.
Dec 13 03:23:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.289 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 815f5388-ae4c-4748-ae1e-a35179c687ad in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis#033[00m
Dec 13 03:23:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.291 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92#033[00m
Dec 13 03:23:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.311 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[578507cb-408f-457e-9a8f-bed64d6db8f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.345 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7ca8b0d8-9962-43fd-8ee8-7ede6389655a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.349 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[76776379-22b4-4728-b5db-1b088bbfa916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.386 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3cedd981-cf92-49fb-bae4-a7a3b1e57f99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.405 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7bfca803-b0f5-4fca-b05b-1f6aff56632e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675797, 'reachable_time': 19285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286277, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.423 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[03570c02-5af0-441c-9338-ea2e8f92ef7f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675811, 'tstamp': 675811}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286278, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675814, 'tstamp': 675814}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286278, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.425 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:54 np0005558241 nova_compute[248510]: 2025-12-13 08:23:54.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.428 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.429 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:23:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.429 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:54.429 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:23:54 np0005558241 nova_compute[248510]: 2025-12-13 08:23:54.606 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:54Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:45:d8 10.100.0.9
Dec 13 03:23:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:54Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:45:d8 10.100.0.9
Dec 13 03:23:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:23:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:23:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:23:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:23:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:23:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:23:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:23:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:23:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:23:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:23:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:23:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:23:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:23:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1762: 321 pgs: 321 active+clean; 405 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 568 KiB/s rd, 4.0 MiB/s wr, 124 op/s
Dec 13 03:23:55 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:23:55 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:23:55 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:23:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:55.405 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:55.406 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:55.408 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:55 np0005558241 podman[286372]: 2025-12-13 08:23:55.481905172 +0000 UTC m=+0.050674962 container create 1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_sanderson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 03:23:55 np0005558241 podman[286372]: 2025-12-13 08:23:55.461854177 +0000 UTC m=+0.030623967 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:23:55 np0005558241 systemd[1]: Started libpod-conmon-1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e.scope.
Dec 13 03:23:55 np0005558241 nova_compute[248510]: 2025-12-13 08:23:55.650 248514 DEBUG nova.compute.manager [req-7a3d5bde-d62e-4943-9909-0551f7aa67df req-80417204-439c-48d1-bd1b-ca052656109f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-changed-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:55 np0005558241 nova_compute[248510]: 2025-12-13 08:23:55.650 248514 DEBUG nova.compute.manager [req-7a3d5bde-d62e-4943-9909-0551f7aa67df req-80417204-439c-48d1-bd1b-ca052656109f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing instance network info cache due to event network-changed-815f5388-ae4c-4748-ae1e-a35179c687ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:23:55 np0005558241 nova_compute[248510]: 2025-12-13 08:23:55.651 248514 DEBUG oslo_concurrency.lockutils [req-7a3d5bde-d62e-4943-9909-0551f7aa67df req-80417204-439c-48d1-bd1b-ca052656109f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:23:55 np0005558241 nova_compute[248510]: 2025-12-13 08:23:55.651 248514 DEBUG oslo_concurrency.lockutils [req-7a3d5bde-d62e-4943-9909-0551f7aa67df req-80417204-439c-48d1-bd1b-ca052656109f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:23:55 np0005558241 nova_compute[248510]: 2025-12-13 08:23:55.651 248514 DEBUG nova.network.neutron [req-7a3d5bde-d62e-4943-9909-0551f7aa67df req-80417204-439c-48d1-bd1b-ca052656109f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing network info cache for port 815f5388-ae4c-4748-ae1e-a35179c687ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:23:55 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:23:55 np0005558241 nova_compute[248510]: 2025-12-13 08:23:55.801 248514 DEBUG nova.compute.manager [req-9b22a754-10bd-469d-9818-05da2a773463 req-14b3a342-0a51-41e5-8f56-aa44d880a286 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-changed-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:55 np0005558241 nova_compute[248510]: 2025-12-13 08:23:55.801 248514 DEBUG nova.compute.manager [req-9b22a754-10bd-469d-9818-05da2a773463 req-14b3a342-0a51-41e5-8f56-aa44d880a286 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Refreshing instance network info cache due to event network-changed-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:23:55 np0005558241 nova_compute[248510]: 2025-12-13 08:23:55.802 248514 DEBUG oslo_concurrency.lockutils [req-9b22a754-10bd-469d-9818-05da2a773463 req-14b3a342-0a51-41e5-8f56-aa44d880a286 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:23:55 np0005558241 nova_compute[248510]: 2025-12-13 08:23:55.802 248514 DEBUG oslo_concurrency.lockutils [req-9b22a754-10bd-469d-9818-05da2a773463 req-14b3a342-0a51-41e5-8f56-aa44d880a286 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:23:55 np0005558241 nova_compute[248510]: 2025-12-13 08:23:55.802 248514 DEBUG nova.network.neutron [req-9b22a754-10bd-469d-9818-05da2a773463 req-14b3a342-0a51-41e5-8f56-aa44d880a286 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Refreshing network info cache for port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:23:55 np0005558241 podman[286372]: 2025-12-13 08:23:55.859785065 +0000 UTC m=+0.428554875 container init 1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:23:55 np0005558241 podman[286372]: 2025-12-13 08:23:55.868824808 +0000 UTC m=+0.437594588 container start 1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:23:55 np0005558241 charming_sanderson[286388]: 167 167
Dec 13 03:23:55 np0005558241 systemd[1]: libpod-1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e.scope: Deactivated successfully.
Dec 13 03:23:55 np0005558241 conmon[286388]: conmon 1cf9167ad3136774cd43 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e.scope/container/memory.events
Dec 13 03:23:56 np0005558241 podman[286372]: 2025-12-13 08:23:56.021789793 +0000 UTC m=+0.590559603 container attach 1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec 13 03:23:56 np0005558241 podman[286372]: 2025-12-13 08:23:56.023265939 +0000 UTC m=+0.592035749 container died 1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:23:56 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c3d30f0a0d53418b03052fcd578ff9a7c909308de84dcea6419deb60e029f98b-merged.mount: Deactivated successfully.
Dec 13 03:23:56 np0005558241 podman[286372]: 2025-12-13 08:23:56.074816381 +0000 UTC m=+0.643586171 container remove 1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_sanderson, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:23:56 np0005558241 systemd[1]: libpod-conmon-1cf9167ad3136774cd433676572e28d9952c645e4da32291d645fb8a4ec5dc6e.scope: Deactivated successfully.
Dec 13 03:23:56 np0005558241 podman[286414]: 2025-12-13 08:23:56.326405999 +0000 UTC m=+0.048589450 container create af3d43a2fc50e85b96108d6b28e2488e44d45cad1bca5d705632923a54405717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mirzakhani, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 03:23:56 np0005558241 systemd[1]: Started libpod-conmon-af3d43a2fc50e85b96108d6b28e2488e44d45cad1bca5d705632923a54405717.scope.
Dec 13 03:23:56 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:23:56 np0005558241 podman[286414]: 2025-12-13 08:23:56.30659397 +0000 UTC m=+0.028777441 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:23:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895ef3b98cef28d692b7ef4a8ecf0d6054b4378400037e22e98978eb84ef9781/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:23:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895ef3b98cef28d692b7ef4a8ecf0d6054b4378400037e22e98978eb84ef9781/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:23:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895ef3b98cef28d692b7ef4a8ecf0d6054b4378400037e22e98978eb84ef9781/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:23:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895ef3b98cef28d692b7ef4a8ecf0d6054b4378400037e22e98978eb84ef9781/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:23:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895ef3b98cef28d692b7ef4a8ecf0d6054b4378400037e22e98978eb84ef9781/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:23:56 np0005558241 podman[286414]: 2025-12-13 08:23:56.424755435 +0000 UTC m=+0.146938906 container init af3d43a2fc50e85b96108d6b28e2488e44d45cad1bca5d705632923a54405717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 03:23:56 np0005558241 podman[286414]: 2025-12-13 08:23:56.437954761 +0000 UTC m=+0.160138212 container start af3d43a2fc50e85b96108d6b28e2488e44d45cad1bca5d705632923a54405717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 03:23:56 np0005558241 podman[286414]: 2025-12-13 08:23:56.442341749 +0000 UTC m=+0.164525380 container attach af3d43a2fc50e85b96108d6b28e2488e44d45cad1bca5d705632923a54405717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:23:56 np0005558241 frosty_mirzakhani[286431]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:23:56 np0005558241 frosty_mirzakhani[286431]: --> All data devices are unavailable
Dec 13 03:23:56 np0005558241 systemd[1]: libpod-af3d43a2fc50e85b96108d6b28e2488e44d45cad1bca5d705632923a54405717.scope: Deactivated successfully.
Dec 13 03:23:56 np0005558241 podman[286414]: 2025-12-13 08:23:56.979533593 +0000 UTC m=+0.701717054 container died af3d43a2fc50e85b96108d6b28e2488e44d45cad1bca5d705632923a54405717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 03:23:57 np0005558241 systemd[1]: var-lib-containers-storage-overlay-895ef3b98cef28d692b7ef4a8ecf0d6054b4378400037e22e98978eb84ef9781-merged.mount: Deactivated successfully.
Dec 13 03:23:57 np0005558241 podman[286414]: 2025-12-13 08:23:57.035816072 +0000 UTC m=+0.757999523 container remove af3d43a2fc50e85b96108d6b28e2488e44d45cad1bca5d705632923a54405717 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:23:57 np0005558241 systemd[1]: libpod-conmon-af3d43a2fc50e85b96108d6b28e2488e44d45cad1bca5d705632923a54405717.scope: Deactivated successfully.
Dec 13 03:23:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1763: 321 pgs: 321 active+clean; 405 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Dec 13 03:23:57 np0005558241 podman[286525]: 2025-12-13 08:23:57.50791205 +0000 UTC m=+0.047353459 container create ede6479013f0e4c1d9fadd3fac3c086826c4e899b98b650f13d58416f6b354e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mclean, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 03:23:57 np0005558241 systemd[1]: Started libpod-conmon-ede6479013f0e4c1d9fadd3fac3c086826c4e899b98b650f13d58416f6b354e3.scope.
Dec 13 03:23:57 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:23:57 np0005558241 podman[286525]: 2025-12-13 08:23:57.48888407 +0000 UTC m=+0.028325499 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:23:57 np0005558241 podman[286525]: 2025-12-13 08:23:57.598253809 +0000 UTC m=+0.137695238 container init ede6479013f0e4c1d9fadd3fac3c086826c4e899b98b650f13d58416f6b354e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:23:57 np0005558241 podman[286525]: 2025-12-13 08:23:57.607431775 +0000 UTC m=+0.146873214 container start ede6479013f0e4c1d9fadd3fac3c086826c4e899b98b650f13d58416f6b354e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 03:23:57 np0005558241 podman[286525]: 2025-12-13 08:23:57.611941477 +0000 UTC m=+0.151383006 container attach ede6479013f0e4c1d9fadd3fac3c086826c4e899b98b650f13d58416f6b354e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mclean, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:23:57 np0005558241 brave_mclean[286542]: 167 167
Dec 13 03:23:57 np0005558241 systemd[1]: libpod-ede6479013f0e4c1d9fadd3fac3c086826c4e899b98b650f13d58416f6b354e3.scope: Deactivated successfully.
Dec 13 03:23:57 np0005558241 podman[286525]: 2025-12-13 08:23:57.614299795 +0000 UTC m=+0.153741204 container died ede6479013f0e4c1d9fadd3fac3c086826c4e899b98b650f13d58416f6b354e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mclean, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:23:57 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e0458e1d31ec1397e896534b1d9b8699d5aefc5606fb58db25b19a86f08b9765-merged.mount: Deactivated successfully.
Dec 13 03:23:57 np0005558241 podman[286525]: 2025-12-13 08:23:57.658458345 +0000 UTC m=+0.197899754 container remove ede6479013f0e4c1d9fadd3fac3c086826c4e899b98b650f13d58416f6b354e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:23:57 np0005558241 systemd[1]: libpod-conmon-ede6479013f0e4c1d9fadd3fac3c086826c4e899b98b650f13d58416f6b354e3.scope: Deactivated successfully.
Dec 13 03:23:57 np0005558241 podman[286564]: 2025-12-13 08:23:57.876879214 +0000 UTC m=+0.044737885 container create 32897a936b6d2e84dfb04096911b64acc91f35b272cf33a44cb885a62a172368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:23:57 np0005558241 systemd[1]: Started libpod-conmon-32897a936b6d2e84dfb04096911b64acc91f35b272cf33a44cb885a62a172368.scope.
Dec 13 03:23:57 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:23:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b923ce63de443480ad6a6428ddcecfac91a9cb2a1135d6d6c5c03f911e3848/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:23:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b923ce63de443480ad6a6428ddcecfac91a9cb2a1135d6d6c5c03f911e3848/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:23:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b923ce63de443480ad6a6428ddcecfac91a9cb2a1135d6d6c5c03f911e3848/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:23:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9b923ce63de443480ad6a6428ddcecfac91a9cb2a1135d6d6c5c03f911e3848/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:23:57 np0005558241 podman[286564]: 2025-12-13 08:23:57.858705195 +0000 UTC m=+0.026563886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:23:57 np0005558241 podman[286564]: 2025-12-13 08:23:57.961938983 +0000 UTC m=+0.129797654 container init 32897a936b6d2e84dfb04096911b64acc91f35b272cf33a44cb885a62a172368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elbakyan, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:23:57 np0005558241 podman[286564]: 2025-12-13 08:23:57.967865089 +0000 UTC m=+0.135723760 container start 32897a936b6d2e84dfb04096911b64acc91f35b272cf33a44cb885a62a172368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elbakyan, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:23:57 np0005558241 podman[286564]: 2025-12-13 08:23:57.97157856 +0000 UTC m=+0.139437231 container attach 32897a936b6d2e84dfb04096911b64acc91f35b272cf33a44cb885a62a172368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elbakyan, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.093 248514 DEBUG nova.network.neutron [req-9b22a754-10bd-469d-9818-05da2a773463 req-14b3a342-0a51-41e5-8f56-aa44d880a286 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updated VIF entry in instance network info cache for port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.095 248514 DEBUG nova.network.neutron [req-9b22a754-10bd-469d-9818-05da2a773463 req-14b3a342-0a51-41e5-8f56-aa44d880a286 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updating instance_info_cache with network_info: [{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.134 248514 DEBUG nova.network.neutron [req-7a3d5bde-d62e-4943-9909-0551f7aa67df req-80417204-439c-48d1-bd1b-ca052656109f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updated VIF entry in instance network info cache for port 815f5388-ae4c-4748-ae1e-a35179c687ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.135 248514 DEBUG nova.network.neutron [req-7a3d5bde-d62e-4943-9909-0551f7aa67df req-80417204-439c-48d1-bd1b-ca052656109f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.148 248514 DEBUG oslo_concurrency.lockutils [req-9b22a754-10bd-469d-9818-05da2a773463 req-14b3a342-0a51-41e5-8f56-aa44d880a286 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.156 248514 DEBUG oslo_concurrency.lockutils [req-7a3d5bde-d62e-4943-9909-0551f7aa67df req-80417204-439c-48d1-bd1b-ca052656109f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]: {
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:    "0": [
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:        {
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "devices": [
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "/dev/loop3"
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            ],
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "lv_name": "ceph_lv0",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "lv_size": "21470642176",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "name": "ceph_lv0",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "tags": {
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.cluster_name": "ceph",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.crush_device_class": "",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.encrypted": "0",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.objectstore": "bluestore",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.osd_id": "0",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.type": "block",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.vdo": "0",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.with_tpm": "0"
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            },
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "type": "block",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "vg_name": "ceph_vg0"
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:        }
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:    ],
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:    "1": [
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:        {
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "devices": [
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "/dev/loop4"
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            ],
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "lv_name": "ceph_lv1",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "lv_size": "21470642176",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "name": "ceph_lv1",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "tags": {
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.cluster_name": "ceph",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.crush_device_class": "",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.encrypted": "0",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.objectstore": "bluestore",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.osd_id": "1",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.type": "block",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.vdo": "0",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.with_tpm": "0"
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            },
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "type": "block",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "vg_name": "ceph_vg1"
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:        }
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:    ],
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:    "2": [
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:        {
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "devices": [
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "/dev/loop5"
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            ],
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "lv_name": "ceph_lv2",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "lv_size": "21470642176",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "name": "ceph_lv2",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "tags": {
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.cluster_name": "ceph",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.crush_device_class": "",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.encrypted": "0",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.objectstore": "bluestore",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.osd_id": "2",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.type": "block",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.vdo": "0",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:                "ceph.with_tpm": "0"
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            },
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "type": "block",
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:            "vg_name": "ceph_vg2"
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:        }
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]:    ]
Dec 13 03:23:58 np0005558241 cool_elbakyan[286580]: }
Dec 13 03:23:58 np0005558241 systemd[1]: libpod-32897a936b6d2e84dfb04096911b64acc91f35b272cf33a44cb885a62a172368.scope: Deactivated successfully.
Dec 13 03:23:58 np0005558241 podman[286564]: 2025-12-13 08:23:58.310954364 +0000 UTC m=+0.478813045 container died 32897a936b6d2e84dfb04096911b64acc91f35b272cf33a44cb885a62a172368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:23:58 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a9b923ce63de443480ad6a6428ddcecfac91a9cb2a1135d6d6c5c03f911e3848-merged.mount: Deactivated successfully.
Dec 13 03:23:58 np0005558241 podman[286564]: 2025-12-13 08:23:58.366990517 +0000 UTC m=+0.534849188 container remove 32897a936b6d2e84dfb04096911b64acc91f35b272cf33a44cb885a62a172368 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_elbakyan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 13 03:23:58 np0005558241 systemd[1]: libpod-conmon-32897a936b6d2e84dfb04096911b64acc91f35b272cf33a44cb885a62a172368.scope: Deactivated successfully.
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.378 248514 DEBUG oslo_concurrency.lockutils [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "interface-3b43a9c7-85e7-4558-bd2f-e4712882021e-815f5388-ae4c-4748-ae1e-a35179c687ad" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.379 248514 DEBUG oslo_concurrency.lockutils [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-3b43a9c7-85e7-4558-bd2f-e4712882021e-815f5388-ae4c-4748-ae1e-a35179c687ad" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.415 248514 DEBUG nova.objects.instance [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'flavor' on Instance uuid 3b43a9c7-85e7-4558-bd2f-e4712882021e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.444 248514 DEBUG nova.virt.libvirt.vif [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:22:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2036693049',display_name='tempest-tempest.common.compute-instance-2036693049',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2036693049',id=30,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-nhq68swv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=3b43a9c7-85e7-4558-bd2f-e4712882021e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.444 248514 DEBUG nova.network.os_vif_util [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.445 248514 DEBUG nova.network.os_vif_util [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.450 248514 DEBUG nova.virt.libvirt.guest [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.452 248514 DEBUG nova.virt.libvirt.guest [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.455 248514 DEBUG nova.virt.libvirt.driver [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Attempting to detach device tap815f5388-ae from instance 3b43a9c7-85e7-4558-bd2f-e4712882021e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.455 248514 DEBUG nova.virt.libvirt.guest [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] detach device xml: <interface type="ethernet">
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:a8:45:d8"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <target dev="tap815f5388-ae"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]: </interface>
Dec 13 03:23:58 np0005558241 nova_compute[248510]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.457 248514 DEBUG nova.compute.manager [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.458 248514 DEBUG oslo_concurrency.lockutils [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.458 248514 DEBUG oslo_concurrency.lockutils [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.458 248514 DEBUG oslo_concurrency.lockutils [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.458 248514 DEBUG nova.compute.manager [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] No waiting events found dispatching network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.458 248514 WARNING nova.compute.manager [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received unexpected event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad for instance with vm_state active and task_state None.#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.458 248514 DEBUG nova.compute.manager [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.459 248514 DEBUG oslo_concurrency.lockutils [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.459 248514 DEBUG oslo_concurrency.lockutils [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.459 248514 DEBUG oslo_concurrency.lockutils [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.459 248514 DEBUG nova.compute.manager [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] No waiting events found dispatching network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.459 248514 WARNING nova.compute.manager [req-23559606-727f-4129-86fa-52c6f432316e req-f14e0b07-1b9e-465d-adbf-69c283d26305 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received unexpected event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad for instance with vm_state active and task_state None.#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.463 248514 DEBUG nova.virt.libvirt.guest [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.466 248514 DEBUG nova.virt.libvirt.guest [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface>not found in domain: <domain type='kvm' id='35'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <name>instance-0000001e</name>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <uuid>3b43a9c7-85e7-4558-bd2f-e4712882021e</uuid>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:name>tempest-tempest.common.compute-instance-2036693049</nova:name>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:23:53</nova:creationTime>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:port uuid="bc4158d8-4963-4009-a434-0a0106941c9d">
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:port uuid="815f5388-ae4c-4748-ae1e-a35179c687ad">
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:23:58 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <resource>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </resource>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <entry name='serial'>3b43a9c7-85e7-4558-bd2f-e4712882021e</entry>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <entry name='uuid'>3b43a9c7-85e7-4558-bd2f-e4712882021e</entry>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/3b43a9c7-85e7-4558-bd2f-e4712882021e_disk' index='2'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/3b43a9c7-85e7-4558-bd2f-e4712882021e_disk.config' index='1'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:37:48:0d'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target dev='tapbc4158d8-49'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:a8:45:d8'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target dev='tap815f5388-ae'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='net1'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/console.log' append='off'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      </target>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/0'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/console.log' append='off'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </console>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c22,c900</label>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c22,c900</imagelabel>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:23:58 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:23:58 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.467 248514 INFO nova.virt.libvirt.driver [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully detached device tap815f5388-ae from instance 3b43a9c7-85e7-4558-bd2f-e4712882021e from the persistent domain config.#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.467 248514 DEBUG nova.virt.libvirt.driver [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] (1/8): Attempting to detach device tap815f5388-ae with device alias net1 from instance 3b43a9c7-85e7-4558-bd2f-e4712882021e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.467 248514 DEBUG nova.virt.libvirt.guest [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] detach device xml: <interface type="ethernet">
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:a8:45:d8"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <target dev="tap815f5388-ae"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]: </interface>
Dec 13 03:23:58 np0005558241 nova_compute[248510]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.537 248514 DEBUG nova.compute.manager [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.538 248514 DEBUG nova.compute.manager [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing instance network info cache due to event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.538 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.538 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.539 248514 DEBUG nova.network.neutron [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:23:58 np0005558241 kernel: tap815f5388-ae (unregistering): left promiscuous mode
Dec 13 03:23:58 np0005558241 NetworkManager[50376]: <info>  [1765614238.5851] device (tap815f5388-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.593 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:58Z|00278|binding|INFO|Releasing lport 815f5388-ae4c-4748-ae1e-a35179c687ad from this chassis (sb_readonly=0)
Dec 13 03:23:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:58Z|00279|binding|INFO|Setting lport 815f5388-ae4c-4748-ae1e-a35179c687ad down in Southbound
Dec 13 03:23:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:23:58Z|00280|binding|INFO|Removing iface tap815f5388-ae ovn-installed in OVS
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.597 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.601 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:45:d8 10.100.0.9'], port_security=['fa:16:3e:a8:45:d8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1134538882', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '3b43a9c7-85e7-4558-bd2f-e4712882021e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1134538882', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=815f5388-ae4c-4748-ae1e-a35179c687ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:23:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.603 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 815f5388-ae4c-4748-ae1e-a35179c687ad in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis#033[00m
Dec 13 03:23:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.604 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.611 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.616 248514 DEBUG nova.virt.libvirt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Received event <DeviceRemovedEvent: 1765614238.615932, 3b43a9c7-85e7-4558-bd2f-e4712882021e => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.618 248514 DEBUG nova.virt.libvirt.driver [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Start waiting for the detach event from libvirt for device tap815f5388-ae with device alias net1 for instance 3b43a9c7-85e7-4558-bd2f-e4712882021e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.619 248514 DEBUG nova.virt.libvirt.guest [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.623 248514 DEBUG nova.virt.libvirt.guest [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface>not found in domain: <domain type='kvm' id='35'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <name>instance-0000001e</name>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <uuid>3b43a9c7-85e7-4558-bd2f-e4712882021e</uuid>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:name>tempest-tempest.common.compute-instance-2036693049</nova:name>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:23:53</nova:creationTime>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:port uuid="bc4158d8-4963-4009-a434-0a0106941c9d">
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:port uuid="815f5388-ae4c-4748-ae1e-a35179c687ad">
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:23:58 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <resource>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </resource>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <entry name='serial'>3b43a9c7-85e7-4558-bd2f-e4712882021e</entry>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <entry name='uuid'>3b43a9c7-85e7-4558-bd2f-e4712882021e</entry>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/3b43a9c7-85e7-4558-bd2f-e4712882021e_disk' index='2'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/3b43a9c7-85e7-4558-bd2f-e4712882021e_disk.config' index='1'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:37:48:0d'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target dev='tapbc4158d8-49'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/console.log' append='off'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      </target>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/0'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e/console.log' append='off'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </console>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c22,c900</label>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c22,c900</imagelabel>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:23:58 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:23:58 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.623 248514 INFO nova.virt.libvirt.driver [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully detached device tap815f5388-ae from instance 3b43a9c7-85e7-4558-bd2f-e4712882021e from the live domain config.
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.624 248514 DEBUG nova.virt.libvirt.vif [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:22:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2036693049',display_name='tempest-tempest.common.compute-instance-2036693049',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2036693049',id=30,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-nhq68swv',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=3b43a9c7-85e7-4558-bd2f-e4712882021e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.624 248514 DEBUG nova.network.os_vif_util [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.625 248514 DEBUG nova.network.os_vif_util [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.625 248514 DEBUG os_vif [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.627 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.627 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap815f5388-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.629 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.631 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c8978279-c36f-4e44-a925-c71c7ab51142]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.632 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.635 248514 INFO os_vif [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae')#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.636 248514 DEBUG nova.virt.libvirt.guest [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:name>tempest-tempest.common.compute-instance-2036693049</nova:name>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:23:58</nova:creationTime>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    <nova:port uuid="bc4158d8-4963-4009-a434-0a0106941c9d">
Dec 13 03:23:58 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:23:58 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:23:58 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:23:58 np0005558241 nova_compute[248510]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec 13 03:23:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.670 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e6950c41-904e-4268-b1c5-330fbf34ba17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.674 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[70a05c2a-642a-4610-99aa-f03a32f43758]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.710 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ec32b7f6-64b2-45bd-800a-c0b749233e43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.728 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[050eba57-0e7d-40e9-b55b-9dd969a45e52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675797, 'reachable_time': 19285, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286664, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.744 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8ff381-1c07-4bb0-8eae-7ad302a3f8df]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675811, 'tstamp': 675811}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286665, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675814, 'tstamp': 675814}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286665, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:23:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.746 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.748 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:58 np0005558241 nova_compute[248510]: 2025-12-13 08:23:58.749 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.751 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.751 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:23:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.752 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:23:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:23:58.752 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:23:58 np0005558241 podman[286678]: 2025-12-13 08:23:58.892495513 +0000 UTC m=+0.046453357 container create f966500dfc0aed48868140b781ae4acdb22e3d38d15f9db16e96b64d757dd584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bardeen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:23:58 np0005558241 systemd[1]: Started libpod-conmon-f966500dfc0aed48868140b781ae4acdb22e3d38d15f9db16e96b64d757dd584.scope.
Dec 13 03:23:58 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:23:58 np0005558241 podman[286678]: 2025-12-13 08:23:58.871964446 +0000 UTC m=+0.025922300 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:23:58 np0005558241 podman[286678]: 2025-12-13 08:23:58.977207153 +0000 UTC m=+0.131164997 container init f966500dfc0aed48868140b781ae4acdb22e3d38d15f9db16e96b64d757dd584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bardeen, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:23:58 np0005558241 podman[286678]: 2025-12-13 08:23:58.986099392 +0000 UTC m=+0.140057216 container start f966500dfc0aed48868140b781ae4acdb22e3d38d15f9db16e96b64d757dd584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 03:23:58 np0005558241 podman[286678]: 2025-12-13 08:23:58.990232894 +0000 UTC m=+0.144190738 container attach f966500dfc0aed48868140b781ae4acdb22e3d38d15f9db16e96b64d757dd584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:23:58 np0005558241 vigorous_bardeen[286694]: 167 167
Dec 13 03:23:58 np0005558241 systemd[1]: libpod-f966500dfc0aed48868140b781ae4acdb22e3d38d15f9db16e96b64d757dd584.scope: Deactivated successfully.
Dec 13 03:23:58 np0005558241 podman[286678]: 2025-12-13 08:23:58.993110585 +0000 UTC m=+0.147068409 container died f966500dfc0aed48868140b781ae4acdb22e3d38d15f9db16e96b64d757dd584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:23:59 np0005558241 systemd[1]: var-lib-containers-storage-overlay-be55dcd73a736c02796cab9f53e50bdfdd46c2db4cf55b5d3d35a52093905b8e-merged.mount: Deactivated successfully.
Dec 13 03:23:59 np0005558241 podman[286678]: 2025-12-13 08:23:59.047951879 +0000 UTC m=+0.201909703 container remove f966500dfc0aed48868140b781ae4acdb22e3d38d15f9db16e96b64d757dd584 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 03:23:59 np0005558241 systemd[1]: libpod-conmon-f966500dfc0aed48868140b781ae4acdb22e3d38d15f9db16e96b64d757dd584.scope: Deactivated successfully.
Dec 13 03:23:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1764: 321 pgs: 321 active+clean; 405 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 2.2 MiB/s wr, 81 op/s
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.088 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.089 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.116 248514 DEBUG nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.212 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.213 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.221 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.222 248514 INFO nova.compute.claims [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:23:59 np0005558241 podman[286718]: 2025-12-13 08:23:59.364408517 +0000 UTC m=+0.117885760 container create 90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_visvesvaraya, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:23:59 np0005558241 podman[286718]: 2025-12-13 08:23:59.272866718 +0000 UTC m=+0.026343971 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:23:59 np0005558241 systemd[1]: Started libpod-conmon-90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce.scope.
Dec 13 03:23:59 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:23:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa5cf08cd9a68e9caacabf7a3ce09695e916846ea2cd012d72e45a28cbd6015/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:23:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa5cf08cd9a68e9caacabf7a3ce09695e916846ea2cd012d72e45a28cbd6015/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:23:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa5cf08cd9a68e9caacabf7a3ce09695e916846ea2cd012d72e45a28cbd6015/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:23:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa5cf08cd9a68e9caacabf7a3ce09695e916846ea2cd012d72e45a28cbd6015/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:23:59 np0005558241 podman[286718]: 2025-12-13 08:23:59.495288086 +0000 UTC m=+0.248765339 container init 90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_visvesvaraya, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.496 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:23:59 np0005558241 podman[286718]: 2025-12-13 08:23:59.505712053 +0000 UTC m=+0.259189286 container start 90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.610 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:23:59 np0005558241 podman[286718]: 2025-12-13 08:23:59.811367135 +0000 UTC m=+0.564844448 container attach 90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.842 248514 DEBUG nova.network.neutron [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updated VIF entry in instance network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.843 248514 DEBUG nova.network.neutron [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updating instance_info_cache with network_info: [{"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.866 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.867 248514 DEBUG nova.compute.manager [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received event network-changed-d462a8a0-34ee-4682-ac5c-f7632b5ad39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.868 248514 DEBUG nova.compute.manager [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Refreshing instance network info cache due to event network-changed-d462a8a0-34ee-4682-ac5c-f7632b5ad39c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.868 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.868 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.869 248514 DEBUG nova.network.neutron [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Refreshing network info cache for port d462a8a0-34ee-4682-ac5c-f7632b5ad39c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.948 248514 DEBUG oslo_concurrency.lockutils [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.948 248514 DEBUG oslo_concurrency.lockutils [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:23:59 np0005558241 nova_compute[248510]: 2025-12-13 08:23:59.949 248514 DEBUG nova.network.neutron [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:24:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:24:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:24:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3611451730' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.095 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.104 248514 DEBUG nova.compute.provider_tree [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.232 248514 DEBUG nova.scheduler.client.report [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:24:00 np0005558241 lvm[286831]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:24:00 np0005558241 lvm[286831]: VG ceph_vg0 finished
Dec 13 03:24:00 np0005558241 lvm[286833]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:24:00 np0005558241 lvm[286833]: VG ceph_vg1 finished
Dec 13 03:24:00 np0005558241 lvm[286835]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:24:00 np0005558241 lvm[286835]: VG ceph_vg2 finished
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.294 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.295 248514 DEBUG nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.353 248514 DEBUG nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.353 248514 DEBUG nova.network.neutron [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:24:00 np0005558241 nostalgic_visvesvaraya[286734]: {}
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.391 248514 INFO nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:24:00 np0005558241 systemd[1]: libpod-90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce.scope: Deactivated successfully.
Dec 13 03:24:00 np0005558241 systemd[1]: libpod-90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce.scope: Consumed 1.504s CPU time.
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.418 248514 DEBUG nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:24:00 np0005558241 podman[286838]: 2025-12-13 08:24:00.456231855 +0000 UTC m=+0.024580987 container died 90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.519 248514 DEBUG nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.521 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.522 248514 INFO nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Creating image(s)#033[00m
Dec 13 03:24:00 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8fa5cf08cd9a68e9caacabf7a3ce09695e916846ea2cd012d72e45a28cbd6015-merged.mount: Deactivated successfully.
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.552 248514 DEBUG nova.storage.rbd_utils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] rbd image b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.580 248514 DEBUG nova.storage.rbd_utils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] rbd image b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.605 248514 DEBUG nova.storage.rbd_utils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] rbd image b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.610 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:24:00 np0005558241 podman[286838]: 2025-12-13 08:24:00.624384804 +0000 UTC m=+0.192733906 container remove 90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_visvesvaraya, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:24:00 np0005558241 systemd[1]: libpod-conmon-90aa8ce757940f1b104ddc2f71caa6a9c372b530b7c588b57036e006f7241dce.scope: Deactivated successfully.
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.682 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.683 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.684 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.684 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.721 248514 DEBUG nova.storage.rbd_utils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] rbd image b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:24:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:24:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.727 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:24:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:24:00 np0005558241 nova_compute[248510]: 2025-12-13 08:24:00.802 248514 DEBUG nova.policy [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6827fc2174b74c2a92803d852e87c70a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8f78f312dfcc4df6ba40b7c8a4e1aa97', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:24:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1765: 321 pgs: 321 active+clean; 405 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 507 KiB/s wr, 59 op/s
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.124 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.179 248514 DEBUG nova.storage.rbd_utils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] resizing rbd image b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.261 248514 DEBUG nova.objects.instance [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lazy-loading 'migration_context' on Instance uuid b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.279 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.280 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Ensure instance console log exists: /var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.281 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.281 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.281 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.428 248514 DEBUG nova.network.neutron [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Updated VIF entry in instance network info cache for port d462a8a0-34ee-4682-ac5c-f7632b5ad39c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.429 248514 DEBUG nova.network.neutron [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Updating instance_info_cache with network_info: [{"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.449 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.450 248514 DEBUG nova.compute.manager [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-vif-plugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.450 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.450 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.450 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.450 248514 DEBUG nova.compute.manager [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Processing event network-vif-plugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.451 248514 DEBUG nova.compute.manager [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-vif-plugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.451 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.451 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.451 248514 DEBUG oslo_concurrency.lockutils [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.451 248514 DEBUG nova.compute.manager [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] No waiting events found dispatching network-vif-plugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.452 248514 WARNING nova.compute.manager [req-55d246eb-aa7a-4768-ab32-80b5ac475b78 req-c660709d-ffac-4166-8dc1-265d9db65970 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received unexpected event network-vif-plugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 for instance with vm_state building and task_state spawning.#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.452 248514 DEBUG nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.458 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614241.4579537, ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.458 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.460 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.463 248514 INFO nova.virt.libvirt.driver [-] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Instance spawned successfully.#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.464 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.479 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.486 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.491 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.491 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.492 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.492 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.492 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.493 248514 DEBUG nova.virt.libvirt.driver [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.521 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.554 248514 INFO nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Took 23.43 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.555 248514 DEBUG nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.633 248514 INFO nova.compute.manager [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Took 25.04 seconds to build instance.#033[00m
Dec 13 03:24:01 np0005558241 nova_compute[248510]: 2025-12-13 08:24:01.652 248514 DEBUG oslo_concurrency.lockutils [None req-50db9f58-be73-4627-9697-b539d6c07553 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:01 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:24:01 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.052 248514 DEBUG nova.network.neutron [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Successfully created port: b494f789-c137-45c5-9750-2bf0b43681ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.058 248514 DEBUG nova.compute.manager [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-unplugged-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.059 248514 DEBUG oslo_concurrency.lockutils [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.059 248514 DEBUG oslo_concurrency.lockutils [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.059 248514 DEBUG oslo_concurrency.lockutils [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.060 248514 DEBUG nova.compute.manager [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] No waiting events found dispatching network-vif-unplugged-815f5388-ae4c-4748-ae1e-a35179c687ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.060 248514 WARNING nova.compute.manager [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received unexpected event network-vif-unplugged-815f5388-ae4c-4748-ae1e-a35179c687ad for instance with vm_state active and task_state None.#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.060 248514 DEBUG nova.compute.manager [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.061 248514 DEBUG oslo_concurrency.lockutils [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.061 248514 DEBUG oslo_concurrency.lockutils [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.061 248514 DEBUG oslo_concurrency.lockutils [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.061 248514 DEBUG nova.compute.manager [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] No waiting events found dispatching network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.062 248514 WARNING nova.compute.manager [req-bfaf3cd0-9e9b-4d9d-9e14-de24949c0c17 req-728611cc-dbfc-40d9-9467-39bb314125c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received unexpected event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad for instance with vm_state active and task_state None.#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.135 248514 DEBUG nova.compute.manager [req-7f7d1403-4d3d-4b0b-99e4-a51443447d3f req-7cb1afe5-29a6-47ab-bbaa-09ae07eadccc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received event network-changed-d462a8a0-34ee-4682-ac5c-f7632b5ad39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.135 248514 DEBUG nova.compute.manager [req-7f7d1403-4d3d-4b0b-99e4-a51443447d3f req-7cb1afe5-29a6-47ab-bbaa-09ae07eadccc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Refreshing instance network info cache due to event network-changed-d462a8a0-34ee-4682-ac5c-f7632b5ad39c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.136 248514 DEBUG oslo_concurrency.lockutils [req-7f7d1403-4d3d-4b0b-99e4-a51443447d3f req-7cb1afe5-29a6-47ab-bbaa-09ae07eadccc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.136 248514 DEBUG oslo_concurrency.lockutils [req-7f7d1403-4d3d-4b0b-99e4-a51443447d3f req-7cb1afe5-29a6-47ab-bbaa-09ae07eadccc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.136 248514 DEBUG nova.network.neutron [req-7f7d1403-4d3d-4b0b-99e4-a51443447d3f req-7cb1afe5-29a6-47ab-bbaa-09ae07eadccc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Refreshing network info cache for port d462a8a0-34ee-4682-ac5c-f7632b5ad39c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.167 248514 INFO nova.network.neutron [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Port 815f5388-ae4c-4748-ae1e-a35179c687ad from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.169 248514 DEBUG nova.network.neutron [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.205 248514 DEBUG oslo_concurrency.lockutils [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:24:02 np0005558241 nova_compute[248510]: 2025-12-13 08:24:02.225 248514 DEBUG oslo_concurrency.lockutils [None req-2dd4e247-2127-4b59-ae90-7e9663e02fb0 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-3b43a9c7-85e7-4558-bd2f-e4712882021e-815f5388-ae4c-4748-ae1e-a35179c687ad" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1766: 321 pgs: 321 active+clean; 415 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 879 KiB/s wr, 113 op/s
Dec 13 03:24:03 np0005558241 nova_compute[248510]: 2025-12-13 08:24:03.632 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.281 248514 DEBUG oslo_concurrency.lockutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "4403714d-3521-4409-9c3b-59d655fc999d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.282 248514 DEBUG oslo_concurrency.lockutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.282 248514 DEBUG oslo_concurrency.lockutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "4403714d-3521-4409-9c3b-59d655fc999d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.282 248514 DEBUG oslo_concurrency.lockutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.282 248514 DEBUG oslo_concurrency.lockutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.283 248514 INFO nova.compute.manager [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Terminating instance#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.284 248514 DEBUG nova.compute.manager [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:24:04 np0005558241 kernel: tapd462a8a0-34 (unregistering): left promiscuous mode
Dec 13 03:24:04 np0005558241 NetworkManager[50376]: <info>  [1765614244.5142] device (tapd462a8a0-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.527 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:04Z|00281|binding|INFO|Releasing lport d462a8a0-34ee-4682-ac5c-f7632b5ad39c from this chassis (sb_readonly=0)
Dec 13 03:24:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:04Z|00282|binding|INFO|Setting lport d462a8a0-34ee-4682-ac5c-f7632b5ad39c down in Southbound
Dec 13 03:24:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:04Z|00283|binding|INFO|Removing iface tapd462a8a0-34 ovn-installed in OVS
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.529 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.550 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:23:d9 10.100.0.4'], port_security=['fa:16:3e:9f:23:d9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4403714d-3521-4409-9c3b-59d655fc999d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06fbab937d6444558229b2351632e711', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fa325bd2-c57a-49fb-8dd9-f45405c95b4e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f47224ac-d05f-46db-ac07-cb476b38b044, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d462a8a0-34ee-4682-ac5c-f7632b5ad39c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.553 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.554 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d462a8a0-34ee-4682-ac5c-f7632b5ad39c in datapath 62193ff6-aaa1-401a-b1e0-512e67752a9e unbound from our chassis#033[00m
Dec 13 03:24:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.556 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62193ff6-aaa1-401a-b1e0-512e67752a9e#033[00m
Dec 13 03:24:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.576 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff39a6b-a793-491f-888d-0850a1835b9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:04 np0005558241 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000021.scope: Deactivated successfully.
Dec 13 03:24:04 np0005558241 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000021.scope: Consumed 13.435s CPU time.
Dec 13 03:24:04 np0005558241 systemd-machined[210538]: Machine qemu-38-instance-00000021 terminated.
Dec 13 03:24:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.608 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e1edcc71-ef76-4dc8-a3ca-2cfbef8ebb3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.611 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.613 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[17edef4a-a1f1-4cb2-9236-503f045fc41d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.655 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0ec32e-c9f4-4082-8bd5-4a877ebbdfaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.676 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7ad46f-5730-4ae6-a3ac-7c70d79beb20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62193ff6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:33:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676001, 'reachable_time': 16033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287057, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.694 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b201e56b-e91f-4c75-b621-a41cd6cf48d5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62193ff6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676016, 'tstamp': 676016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287058, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap62193ff6-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676019, 'tstamp': 676019}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287058, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.696 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62193ff6-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.698 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.704 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62193ff6-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.704 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.704 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.705 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62193ff6-a0, col_values=(('external_ids', {'iface-id': '67d122d2-811d-4aa8-bdde-aafc5e939b2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:04.705 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.727 248514 INFO nova.virt.libvirt.driver [-] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Instance destroyed successfully.#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.728 248514 DEBUG nova.objects.instance [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lazy-loading 'resources' on Instance uuid 4403714d-3521-4409-9c3b-59d655fc999d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.741 248514 DEBUG nova.virt.libvirt.vif [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1547732604',display_name='tempest-FloatingIPsAssociationTestJSON-server-1547732604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1547732604',id=33,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='06fbab937d6444558229b2351632e711',ramdisk_id='',reservation_id='r-ath0htte',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-609563086',owner_user_name='tempest-FloatingIPsAssociationTestJSON-609563086-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:33Z,user_data=None,user_id='79d4b34b8bd3452cb5b8c0954166f397',uuid=4403714d-3521-4409-9c3b-59d655fc999d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.742 248514 DEBUG nova.network.os_vif_util [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converting VIF {"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.743 248514 DEBUG nova.network.os_vif_util [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9f:23:d9,bridge_name='br-int',has_traffic_filtering=True,id=d462a8a0-34ee-4682-ac5c-f7632b5ad39c,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd462a8a0-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.743 248514 DEBUG os_vif [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:23:d9,bridge_name='br-int',has_traffic_filtering=True,id=d462a8a0-34ee-4682-ac5c-f7632b5ad39c,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd462a8a0-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.744 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.745 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd462a8a0-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.746 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.749 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:04 np0005558241 nova_compute[248510]: 2025-12-13 08:24:04.752 248514 INFO os_vif [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:23:d9,bridge_name='br-int',has_traffic_filtering=True,id=d462a8a0-34ee-4682-ac5c-f7632b5ad39c,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd462a8a0-34')#033[00m
Dec 13 03:24:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:24:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1767: 321 pgs: 321 active+clean; 451 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 196 op/s
Dec 13 03:24:05 np0005558241 nova_compute[248510]: 2025-12-13 08:24:05.577 248514 DEBUG nova.network.neutron [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Successfully updated port: b494f789-c137-45c5-9750-2bf0b43681ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:24:05 np0005558241 nova_compute[248510]: 2025-12-13 08:24:05.637 248514 DEBUG nova.network.neutron [req-7f7d1403-4d3d-4b0b-99e4-a51443447d3f req-7cb1afe5-29a6-47ab-bbaa-09ae07eadccc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Updated VIF entry in instance network info cache for port d462a8a0-34ee-4682-ac5c-f7632b5ad39c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:24:05 np0005558241 nova_compute[248510]: 2025-12-13 08:24:05.637 248514 DEBUG nova.network.neutron [req-7f7d1403-4d3d-4b0b-99e4-a51443447d3f req-7cb1afe5-29a6-47ab-bbaa-09ae07eadccc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Updating instance_info_cache with network_info: [{"id": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "address": "fa:16:3e:9f:23:d9", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd462a8a0-34", "ovs_interfaceid": "d462a8a0-34ee-4682-ac5c-f7632b5ad39c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:05 np0005558241 nova_compute[248510]: 2025-12-13 08:24:05.804 248514 DEBUG oslo_concurrency.lockutils [req-7f7d1403-4d3d-4b0b-99e4-a51443447d3f req-7cb1afe5-29a6-47ab-bbaa-09ae07eadccc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-4403714d-3521-4409-9c3b-59d655fc999d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:24:05 np0005558241 nova_compute[248510]: 2025-12-13 08:24:05.809 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "refresh_cache-b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:24:05 np0005558241 nova_compute[248510]: 2025-12-13 08:24:05.809 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquired lock "refresh_cache-b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:24:05 np0005558241 nova_compute[248510]: 2025-12-13 08:24:05.809 248514 DEBUG nova.network.neutron [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:24:05 np0005558241 nova_compute[248510]: 2025-12-13 08:24:05.814 248514 DEBUG nova.compute.manager [req-2aad3c15-40f1-4d6d-8b61-c88be46c9919 req-2cb9a481-10d8-4848-be0b-b2dd309fee19 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-changed-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:05 np0005558241 nova_compute[248510]: 2025-12-13 08:24:05.815 248514 DEBUG nova.compute.manager [req-2aad3c15-40f1-4d6d-8b61-c88be46c9919 req-2cb9a481-10d8-4848-be0b-b2dd309fee19 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Refreshing instance network info cache due to event network-changed-b494f789-c137-45c5-9750-2bf0b43681ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:24:05 np0005558241 nova_compute[248510]: 2025-12-13 08:24:05.815 248514 DEBUG oslo_concurrency.lockutils [req-2aad3c15-40f1-4d6d-8b61-c88be46c9919 req-2cb9a481-10d8-4848-be0b-b2dd309fee19 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:24:06 np0005558241 nova_compute[248510]: 2025-12-13 08:24:06.089 248514 INFO nova.virt.libvirt.driver [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Deleting instance files /var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d_del#033[00m
Dec 13 03:24:06 np0005558241 nova_compute[248510]: 2025-12-13 08:24:06.089 248514 INFO nova.virt.libvirt.driver [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Deletion of /var/lib/nova/instances/4403714d-3521-4409-9c3b-59d655fc999d_del complete#033[00m
Dec 13 03:24:06 np0005558241 nova_compute[248510]: 2025-12-13 08:24:06.181 248514 INFO nova.compute.manager [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Took 1.90 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:24:06 np0005558241 nova_compute[248510]: 2025-12-13 08:24:06.182 248514 DEBUG oslo.service.loopingcall [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:24:06 np0005558241 nova_compute[248510]: 2025-12-13 08:24:06.182 248514 DEBUG nova.compute.manager [-] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:24:06 np0005558241 nova_compute[248510]: 2025-12-13 08:24:06.182 248514 DEBUG nova.network.neutron [-] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:24:06 np0005558241 nova_compute[248510]: 2025-12-13 08:24:06.603 248514 DEBUG nova.network.neutron [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:24:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1768: 321 pgs: 321 active+clean; 451 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 151 op/s
Dec 13 03:24:07 np0005558241 nova_compute[248510]: 2025-12-13 08:24:07.491 248514 DEBUG nova.compute.manager [req-256ed09f-9b3d-4da9-b003-17cf4b8fbf96 req-ca007aba-cf0a-46fb-8d99-420fad5fe295 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:07 np0005558241 nova_compute[248510]: 2025-12-13 08:24:07.492 248514 DEBUG nova.compute.manager [req-256ed09f-9b3d-4da9-b003-17cf4b8fbf96 req-ca007aba-cf0a-46fb-8d99-420fad5fe295 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing instance network info cache due to event network-changed-bc4158d8-4963-4009-a434-0a0106941c9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:24:07 np0005558241 nova_compute[248510]: 2025-12-13 08:24:07.492 248514 DEBUG oslo_concurrency.lockutils [req-256ed09f-9b3d-4da9-b003-17cf4b8fbf96 req-ca007aba-cf0a-46fb-8d99-420fad5fe295 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:24:07 np0005558241 nova_compute[248510]: 2025-12-13 08:24:07.492 248514 DEBUG oslo_concurrency.lockutils [req-256ed09f-9b3d-4da9-b003-17cf4b8fbf96 req-ca007aba-cf0a-46fb-8d99-420fad5fe295 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:24:07 np0005558241 nova_compute[248510]: 2025-12-13 08:24:07.493 248514 DEBUG nova.network.neutron [req-256ed09f-9b3d-4da9-b003-17cf4b8fbf96 req-ca007aba-cf0a-46fb-8d99-420fad5fe295 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Refreshing network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:24:07 np0005558241 nova_compute[248510]: 2025-12-13 08:24:07.750 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:07.750 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:24:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:07.752 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:24:07 np0005558241 nova_compute[248510]: 2025-12-13 08:24:07.905 248514 DEBUG nova.network.neutron [-] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:07 np0005558241 nova_compute[248510]: 2025-12-13 08:24:07.928 248514 INFO nova.compute.manager [-] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Took 1.75 seconds to deallocate network for instance.#033[00m
Dec 13 03:24:07 np0005558241 nova_compute[248510]: 2025-12-13 08:24:07.972 248514 DEBUG oslo_concurrency.lockutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:07 np0005558241 nova_compute[248510]: 2025-12-13 08:24:07.973 248514 DEBUG oslo_concurrency.lockutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.232 248514 DEBUG oslo_concurrency.processutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.416 248514 DEBUG nova.network.neutron [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Updating instance_info_cache with network_info: [{"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.447 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Releasing lock "refresh_cache-b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.449 248514 DEBUG nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Instance network_info: |[{"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.450 248514 DEBUG oslo_concurrency.lockutils [req-2aad3c15-40f1-4d6d-8b61-c88be46c9919 req-2cb9a481-10d8-4848-be0b-b2dd309fee19 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.451 248514 DEBUG nova.network.neutron [req-2aad3c15-40f1-4d6d-8b61-c88be46c9919 req-2cb9a481-10d8-4848-be0b-b2dd309fee19 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Refreshing network info cache for port b494f789-c137-45c5-9750-2bf0b43681ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.456 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Start _get_guest_xml network_info=[{"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.465 248514 WARNING nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.474 248514 DEBUG nova.virt.libvirt.host [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.475 248514 DEBUG nova.virt.libvirt.host [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.479 248514 DEBUG nova.virt.libvirt.host [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.480 248514 DEBUG nova.virt.libvirt.host [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.480 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.481 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.481 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.482 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.482 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.483 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.483 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.483 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.484 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.484 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.484 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.485 248514 DEBUG nova.virt.hardware [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.489 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.554 248514 DEBUG oslo_concurrency.lockutils [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "interface-d503913e-a05e-47d4-9366-db4426b9aac1-815f5388-ae4c-4748-ae1e-a35179c687ad" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.555 248514 DEBUG oslo_concurrency.lockutils [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-d503913e-a05e-47d4-9366-db4426b9aac1-815f5388-ae4c-4748-ae1e-a35179c687ad" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.556 248514 DEBUG nova.objects.instance [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'flavor' on Instance uuid d503913e-a05e-47d4-9366-db4426b9aac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:24:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:24:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3465593196' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.801 248514 DEBUG oslo_concurrency.processutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.808 248514 DEBUG nova.compute.provider_tree [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.837 248514 DEBUG nova.scheduler.client.report [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.865 248514 DEBUG oslo_concurrency.lockutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.892 248514 INFO nova.scheduler.client.report [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Deleted allocations for instance 4403714d-3521-4409-9c3b-59d655fc999d#033[00m
Dec 13 03:24:08 np0005558241 nova_compute[248510]: 2025-12-13 08:24:08.949 248514 DEBUG oslo_concurrency.lockutils [None req-cd8fd0a6-228c-4612-88c3-80e7b43a6f88 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:24:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/744464859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.078 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:24:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1769: 321 pgs: 321 active+clean; 372 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 179 op/s
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.106 248514 DEBUG nova.storage.rbd_utils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] rbd image b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.111 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:24:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:24:09
Dec 13 03:24:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:24:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:24:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['backups', '.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'vms', 'default.rgw.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.meta']
Dec 13 03:24:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.419 248514 DEBUG nova.network.neutron [req-256ed09f-9b3d-4da9-b003-17cf4b8fbf96 req-ca007aba-cf0a-46fb-8d99-420fad5fe295 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updated VIF entry in instance network info cache for port bc4158d8-4963-4009-a434-0a0106941c9d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.420 248514 DEBUG nova.network.neutron [req-256ed09f-9b3d-4da9-b003-17cf4b8fbf96 req-ca007aba-cf0a-46fb-8d99-420fad5fe295 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [{"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.448 248514 DEBUG oslo_concurrency.lockutils [req-256ed09f-9b3d-4da9-b003-17cf4b8fbf96 req-ca007aba-cf0a-46fb-8d99-420fad5fe295 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3b43a9c7-85e7-4558-bd2f-e4712882021e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.498 248514 DEBUG nova.objects.instance [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'pci_requests' on Instance uuid d503913e-a05e-47d4-9366-db4426b9aac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.561 248514 DEBUG nova.network.neutron [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.613 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:24:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3603289413' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.686 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.688 248514 DEBUG nova.virt.libvirt.vif [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:23:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-370636454',display_name='tempest-InstanceActionsTestJSON-server-370636454',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-370636454',id=35,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f78f312dfcc4df6ba40b7c8a4e1aa97',ramdisk_id='',reservation_id='r-b8os57ms',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-1859862292',owner_user_name='tempest-InstanceActionsTestJSON-1859862292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:24:00Z,user_data=None,user_id='6827fc2174b74c2a92803d852e87c70a',uuid=b3086cd3-fbaf-4f8e-bca2-162a0582d3a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.688 248514 DEBUG nova.network.os_vif_util [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converting VIF {"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.689 248514 DEBUG nova.network.os_vif_util [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.690 248514 DEBUG nova.objects.instance [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.727 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:24:09 np0005558241 nova_compute[248510]:  <uuid>b3086cd3-fbaf-4f8e-bca2-162a0582d3a4</uuid>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:  <name>instance-00000023</name>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <nova:name>tempest-InstanceActionsTestJSON-server-370636454</nova:name>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:24:08</nova:creationTime>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:24:09 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:        <nova:user uuid="6827fc2174b74c2a92803d852e87c70a">tempest-InstanceActionsTestJSON-1859862292-project-member</nova:user>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:        <nova:project uuid="8f78f312dfcc4df6ba40b7c8a4e1aa97">tempest-InstanceActionsTestJSON-1859862292</nova:project>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:        <nova:port uuid="b494f789-c137-45c5-9750-2bf0b43681ad">
Dec 13 03:24:09 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <entry name="serial">b3086cd3-fbaf-4f8e-bca2-162a0582d3a4</entry>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <entry name="uuid">b3086cd3-fbaf-4f8e-bca2-162a0582d3a4</entry>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk">
Dec 13 03:24:09 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:24:09 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk.config">
Dec 13 03:24:09 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:24:09 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:26:55:3f"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <target dev="tapb494f789-c1"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4/console.log" append="off"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:24:09 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:24:09 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:24:09 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:24:09 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.735 248514 DEBUG nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Preparing to wait for external event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.735 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.736 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.736 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.737 248514 DEBUG nova.virt.libvirt.vif [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:23:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-370636454',display_name='tempest-InstanceActionsTestJSON-server-370636454',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-370636454',id=35,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8f78f312dfcc4df6ba40b7c8a4e1aa97',ramdisk_id='',reservation_id='r-b8os57ms',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-1859862292',owner_user_name='tempest-InstanceActionsTestJSON-1859862292-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:24:00Z,user_data=None,user_id='6827fc2174b74c2a92803d852e87c70a',uuid=b3086cd3-fbaf-4f8e-bca2-162a0582d3a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.737 248514 DEBUG nova.network.os_vif_util [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converting VIF {"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.737 248514 DEBUG nova.network.os_vif_util [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.738 248514 DEBUG os_vif [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.738 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.739 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.739 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.742 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.743 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb494f789-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.743 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb494f789-c1, col_values=(('external_ids', {'iface-id': 'b494f789-c137-45c5-9750-2bf0b43681ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:55:3f', 'vm-uuid': 'b3086cd3-fbaf-4f8e-bca2-162a0582d3a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:24:09 np0005558241 NetworkManager[50376]: <info>  [1765614249.7462] manager: (tapb494f789-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.747 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.751 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:24:09 np0005558241 nova_compute[248510]: 2025-12-13 08:24:09.752 248514 INFO os_vif [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1')
Dec 13 03:24:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:24:10 np0005558241 nova_compute[248510]: 2025-12-13 08:24:10.053 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 03:24:10 np0005558241 nova_compute[248510]: 2025-12-13 08:24:10.055 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 03:24:10 np0005558241 nova_compute[248510]: 2025-12-13 08:24:10.055 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] No VIF found with MAC fa:16:3e:26:55:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 03:24:10 np0005558241 nova_compute[248510]: 2025-12-13 08:24:10.055 248514 INFO nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Using config drive
Dec 13 03:24:10 np0005558241 nova_compute[248510]: 2025-12-13 08:24:10.082 248514 DEBUG nova.storage.rbd_utils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] rbd image b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:24:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:24:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:24:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:24:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:24:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:24:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:24:10 np0005558241 nova_compute[248510]: 2025-12-13 08:24:10.383 248514 INFO nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Creating config drive at /var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4/disk.config
Dec 13 03:24:10 np0005558241 nova_compute[248510]: 2025-12-13 08:24:10.389 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8dnsgvwa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:24:10 np0005558241 nova_compute[248510]: 2025-12-13 08:24:10.429 248514 DEBUG nova.network.neutron [req-2aad3c15-40f1-4d6d-8b61-c88be46c9919 req-2cb9a481-10d8-4848-be0b-b2dd309fee19 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Updated VIF entry in instance network info cache for port b494f789-c137-45c5-9750-2bf0b43681ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 03:24:10 np0005558241 nova_compute[248510]: 2025-12-13 08:24:10.430 248514 DEBUG nova.network.neutron [req-2aad3c15-40f1-4d6d-8b61-c88be46c9919 req-2cb9a481-10d8-4848-be0b-b2dd309fee19 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Updating instance_info_cache with network_info: [{"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:24:10 np0005558241 nova_compute[248510]: 2025-12-13 08:24:10.434 248514 DEBUG nova.policy [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee4333dff699428ca3ae5201855ab430', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ea37c93820f34312bc386ea952ecd94f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:24:10 np0005558241 nova_compute[248510]: 2025-12-13 08:24:10.532 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8dnsgvwa" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:24:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:24:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:24:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:24:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:24:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:24:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:24:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:24:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:24:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:24:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:24:10 np0005558241 nova_compute[248510]: 2025-12-13 08:24:10.571 248514 DEBUG nova.storage.rbd_utils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] rbd image b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:24:10 np0005558241 nova_compute[248510]: 2025-12-13 08:24:10.576 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4/disk.config b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:24:10 np0005558241 nova_compute[248510]: 2025-12-13 08:24:10.697 248514 DEBUG oslo_concurrency.processutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4/disk.config b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:24:10 np0005558241 nova_compute[248510]: 2025-12-13 08:24:10.698 248514 INFO nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Deleting local config drive /var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4/disk.config because it was imported into RBD.
Dec 13 03:24:10 np0005558241 kernel: tapb494f789-c1: entered promiscuous mode
Dec 13 03:24:10 np0005558241 NetworkManager[50376]: <info>  [1765614250.7529] manager: (tapb494f789-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/132)
Dec 13 03:24:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:10Z|00284|binding|INFO|Claiming lport b494f789-c137-45c5-9750-2bf0b43681ad for this chassis.
Dec 13 03:24:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:10Z|00285|binding|INFO|b494f789-c137-45c5-9750-2bf0b43681ad: Claiming fa:16:3e:26:55:3f 10.100.0.12
Dec 13 03:24:10 np0005558241 nova_compute[248510]: 2025-12-13 08:24:10.759 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:24:10 np0005558241 nova_compute[248510]: 2025-12-13 08:24:10.773 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:24:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:10Z|00286|binding|INFO|Setting lport b494f789-c137-45c5-9750-2bf0b43681ad ovn-installed in OVS
Dec 13 03:24:10 np0005558241 nova_compute[248510]: 2025-12-13 08:24:10.776 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:24:10 np0005558241 systemd-machined[210538]: New machine qemu-40-instance-00000023.
Dec 13 03:24:10 np0005558241 systemd-udevd[287248]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:24:10 np0005558241 systemd[1]: Started Virtual Machine qemu-40-instance-00000023.
Dec 13 03:24:10 np0005558241 NetworkManager[50376]: <info>  [1765614250.8130] device (tapb494f789-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:24:10 np0005558241 NetworkManager[50376]: <info>  [1765614250.8137] device (tapb494f789-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:24:10 np0005558241 nova_compute[248510]: 2025-12-13 08:24:10.834 248514 DEBUG oslo_concurrency.lockutils [req-2aad3c15-40f1-4d6d-8b61-c88be46c9919 req-2cb9a481-10d8-4848-be0b-b2dd309fee19 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:24:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.849 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:55:3f 10.100.0.12'], port_security=['fa:16:3e:26:55:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b3086cd3-fbaf-4f8e-bca2-162a0582d3a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f78f312dfcc4df6ba40b7c8a4e1aa97', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9aebbe0-bcf2-4e40-aa62-10b8cea7c801', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2c3ff6a3-0731-41dd-8d72-42e18a11ea75, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b494f789-c137-45c5-9750-2bf0b43681ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 03:24:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:10Z|00287|binding|INFO|Setting lport b494f789-c137-45c5-9750-2bf0b43681ad up in Southbound
Dec 13 03:24:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.851 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b494f789-c137-45c5-9750-2bf0b43681ad in datapath 0740d1ee-47e1-4bdf-bdc4-2dafff999f03 bound to our chassis
Dec 13 03:24:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.852 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0740d1ee-47e1-4bdf-bdc4-2dafff999f03
Dec 13 03:24:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.868 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ecf925a6-7f61-4333-8632-f570732c7272]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:24:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.869 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0740d1ee-41 in ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 03:24:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.871 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0740d1ee-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 03:24:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.871 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6f512038-daf6-4a78-8754-a1f1cfb2e7fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:24:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.872 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[df5298df-9a3d-4318-8402-6ce29d226f52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:24:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.890 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[2e07d5f5-4ee4-4fec-9ebe-126125a96477]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:24:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.905 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eb954c1e-0405-4080-ae6b-b064e8cdd4e1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:24:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.936 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ef74279f-6020-4c44-8fac-451178b08769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:24:10 np0005558241 NetworkManager[50376]: <info>  [1765614250.9458] manager: (tap0740d1ee-40): new Veth device (/org/freedesktop/NetworkManager/Devices/133)
Dec 13 03:24:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.944 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9d0ef7-5e3d-48f0-ad98-732760e55c8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.987 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e62ca239-913d-4e0a-af80-52fa2c489d39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:10.995 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[584b5b17-80ae-413d-b9f7-da4f258467b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:11 np0005558241 NetworkManager[50376]: <info>  [1765614251.0328] device (tap0740d1ee-40): carrier: link connected
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.040 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe81026-989d-414d-9494-cd97d3681264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.064 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[671579bb-1f3c-444e-9cbe-ef44a4d081a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0740d1ee-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:40:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682824, 'reachable_time': 37010, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287281, 'error': None, 'target': 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.082 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2c288e77-14d1-47dc-820f-5a0bc9412a5c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0a:4019'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 682824, 'tstamp': 682824}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287282, 'error': None, 'target': 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1770: 321 pgs: 321 active+clean; 372 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.102 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cadb37bc-76a2-43e5-8856-f71dfd0deb5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0740d1ee-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:40:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682824, 'reachable_time': 37010, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287283, 'error': None, 'target': 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.148 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[18f9063a-364e-4d83-922f-b9e658e26b7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.218 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[278d3ea9-141f-4c50-a2fc-2229e02f1b25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.219 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0740d1ee-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.219 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.220 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0740d1ee-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:11 np0005558241 NetworkManager[50376]: <info>  [1765614251.2225] manager: (tap0740d1ee-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Dec 13 03:24:11 np0005558241 kernel: tap0740d1ee-40: entered promiscuous mode
Dec 13 03:24:11 np0005558241 nova_compute[248510]: 2025-12-13 08:24:11.222 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.225 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0740d1ee-40, col_values=(('external_ids', {'iface-id': '57853c24-d10c-4ddd-b435-f78af259fd27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:11 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:11Z|00288|binding|INFO|Releasing lport 57853c24-d10c-4ddd-b435-f78af259fd27 from this chassis (sb_readonly=0)
Dec 13 03:24:11 np0005558241 nova_compute[248510]: 2025-12-13 08:24:11.245 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.249 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0740d1ee-47e1-4bdf-bdc4-2dafff999f03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0740d1ee-47e1-4bdf-bdc4-2dafff999f03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.250 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[51f87d70-174f-4ccd-8121-6101de71d1eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.251 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-0740d1ee-47e1-4bdf-bdc4-2dafff999f03
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/0740d1ee-47e1-4bdf-bdc4-2dafff999f03.pid.haproxy
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 0740d1ee-47e1-4bdf-bdc4-2dafff999f03
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:24:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:11.252 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'env', 'PROCESS_TAG=haproxy-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0740d1ee-47e1-4bdf-bdc4-2dafff999f03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:24:11 np0005558241 nova_compute[248510]: 2025-12-13 08:24:11.425 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614251.4251044, b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:24:11 np0005558241 nova_compute[248510]: 2025-12-13 08:24:11.426 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] VM Started (Lifecycle Event)#033[00m
Dec 13 03:24:11 np0005558241 nova_compute[248510]: 2025-12-13 08:24:11.479 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:24:11 np0005558241 nova_compute[248510]: 2025-12-13 08:24:11.484 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614251.4266348, b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:24:11 np0005558241 nova_compute[248510]: 2025-12-13 08:24:11.485 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:24:11 np0005558241 nova_compute[248510]: 2025-12-13 08:24:11.514 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:24:11 np0005558241 nova_compute[248510]: 2025-12-13 08:24:11.519 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:24:11 np0005558241 nova_compute[248510]: 2025-12-13 08:24:11.611 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:24:11 np0005558241 podman[287357]: 2025-12-13 08:24:11.643765681 +0000 UTC m=+0.048077957 container create 3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:24:11 np0005558241 systemd[1]: Started libpod-conmon-3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9.scope.
Dec 13 03:24:11 np0005558241 podman[287357]: 2025-12-13 08:24:11.616838587 +0000 UTC m=+0.021150883 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:24:11 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:24:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4393fad8ce5a73df0ceef6cfed59401542ad7871cb68553053253ed76ee8e4e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:24:11 np0005558241 podman[287357]: 2025-12-13 08:24:11.737427162 +0000 UTC m=+0.141739458 container init 3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 03:24:11 np0005558241 podman[287357]: 2025-12-13 08:24:11.743738788 +0000 UTC m=+0.148051064 container start 3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:24:11 np0005558241 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287372]: [NOTICE]   (287376) : New worker (287378) forked
Dec 13 03:24:11 np0005558241 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287372]: [NOTICE]   (287376) : Loading success.
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.015 248514 DEBUG nova.compute.manager [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received event network-vif-unplugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.016 248514 DEBUG oslo_concurrency.lockutils [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4403714d-3521-4409-9c3b-59d655fc999d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.016 248514 DEBUG oslo_concurrency.lockutils [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.016 248514 DEBUG oslo_concurrency.lockutils [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.016 248514 DEBUG nova.compute.manager [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] No waiting events found dispatching network-vif-unplugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.018 248514 WARNING nova.compute.manager [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received unexpected event network-vif-unplugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.018 248514 DEBUG nova.compute.manager [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received event network-vif-plugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.018 248514 DEBUG oslo_concurrency.lockutils [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4403714d-3521-4409-9c3b-59d655fc999d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.018 248514 DEBUG oslo_concurrency.lockutils [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.018 248514 DEBUG oslo_concurrency.lockutils [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4403714d-3521-4409-9c3b-59d655fc999d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.019 248514 DEBUG nova.compute.manager [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] No waiting events found dispatching network-vif-plugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.019 248514 WARNING nova.compute.manager [req-06e9ac38-3810-4911-89a1-69c7d156b823 req-16bef5b6-8578-4668-a99c-b6b3a2f10016 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received unexpected event network-vif-plugged-d462a8a0-34ee-4682-ac5c-f7632b5ad39c for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.487 248514 DEBUG nova.network.neutron [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Successfully updated port: 815f5388-ae4c-4748-ae1e-a35179c687ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.505 248514 DEBUG oslo_concurrency.lockutils [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.505 248514 DEBUG oslo_concurrency.lockutils [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.506 248514 DEBUG nova.network.neutron [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.671 248514 WARNING nova.network.neutron [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] 1ca92864-3b70-4794-9db1-fa08128cef92 already exists in list: networks containing: ['1ca92864-3b70-4794-9db1-fa08128cef92']. ignoring it#033[00m
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.736 248514 DEBUG nova.compute.manager [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-changed-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.737 248514 DEBUG nova.compute.manager [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing instance network info cache due to event network-changed-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:24:12 np0005558241 nova_compute[248510]: 2025-12-13 08:24:12.737 248514 DEBUG oslo_concurrency.lockutils [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:24:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:12.755 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1771: 321 pgs: 321 active+clean; 376 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 185 op/s
Dec 13 03:24:13 np0005558241 podman[287388]: 2025-12-13 08:24:13.988140786 +0000 UTC m=+0.069708791 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:24:14 np0005558241 podman[287389]: 2025-12-13 08:24:14.003549646 +0000 UTC m=+0.083797259 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:24:14 np0005558241 podman[287387]: 2025-12-13 08:24:14.015604933 +0000 UTC m=+0.097074286 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 03:24:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:14Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:83:58:17 10.100.0.7
Dec 13 03:24:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:14Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:83:58:17 10.100.0.7
Dec 13 03:24:14 np0005558241 nova_compute[248510]: 2025-12-13 08:24:14.615 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:14 np0005558241 nova_compute[248510]: 2025-12-13 08:24:14.745 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:24:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:24:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1842606236' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:24:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:24:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1842606236' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:24:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1772: 321 pgs: 321 active+clean; 390 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.2 MiB/s wr, 157 op/s
Dec 13 03:24:15 np0005558241 nova_compute[248510]: 2025-12-13 08:24:15.618 248514 DEBUG nova.network.neutron [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updating instance_info_cache with network_info: [{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.056 248514 DEBUG oslo_concurrency.lockutils [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.057 248514 DEBUG oslo_concurrency.lockutils [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.057 248514 DEBUG nova.network.neutron [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing network info cache for port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.062 248514 DEBUG nova.virt.libvirt.vif [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-152393601',display_name='tempest-tempest.common.compute-instance-152393601',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-152393601',id=32,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-5m65b9nl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=d503913e-a05e-47d4-9366-db4426b9aac1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.062 248514 DEBUG nova.network.os_vif_util [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.063 248514 DEBUG nova.network.os_vif_util [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.064 248514 DEBUG os_vif [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.064 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.065 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.065 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.068 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.069 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap815f5388-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.069 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap815f5388-ae, col_values=(('external_ids', {'iface-id': '815f5388-ae4c-4748-ae1e-a35179c687ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:45:d8', 'vm-uuid': 'd503913e-a05e-47d4-9366-db4426b9aac1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.070 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:16 np0005558241 NetworkManager[50376]: <info>  [1765614256.0719] manager: (tap815f5388-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.073 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.079 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.080 248514 INFO os_vif [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae')#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.081 248514 DEBUG nova.virt.libvirt.vif [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-152393601',display_name='tempest-tempest.common.compute-instance-152393601',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-152393601',id=32,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-5m65b9nl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=d503913e-a05e-47d4-9366-db4426b9aac1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.081 248514 DEBUG nova.network.os_vif_util [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.082 248514 DEBUG nova.network.os_vif_util [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.086 248514 DEBUG nova.virt.libvirt.guest [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] attach device xml: <interface type="ethernet">
Dec 13 03:24:16 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:a8:45:d8"/>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:  <target dev="tap815f5388-ae"/>
Dec 13 03:24:16 np0005558241 nova_compute[248510]: </interface>
Dec 13 03:24:16 np0005558241 nova_compute[248510]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
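The `attach device xml` entry above shows the `<interface type="ethernet">` document Nova hands to libvirt when hot-plugging the port. As a minimal sketch (stdlib only; the MAC, tap name, and MTU are taken from the log, and the helper name is my own, not a Nova function), an equivalent document can be built with `xml.etree.ElementTree`:

```python
import xml.etree.ElementTree as ET

def build_interface_xml(mac, dev, mtu, rx_queue_size=512):
    """Build a libvirt <interface type="ethernet"> element shaped like
    the one nova_compute logs above. All values are illustrative; Nova
    itself derives them from the os-vif VIF object, not hard-coding."""
    iface = ET.Element("interface", type="ethernet")
    ET.SubElement(iface, "mac", address=mac)
    ET.SubElement(iface, "model", type="virtio")
    ET.SubElement(iface, "driver", name="vhost", rx_queue_size=str(rx_queue_size))
    ET.SubElement(iface, "mtu", size=str(mtu))
    ET.SubElement(iface, "target", dev=dev)
    return ET.tostring(iface, encoding="unicode")

xml = build_interface_xml("fa:16:3e:a8:45:d8", "tap815f5388-ae", 1442)
```

In a real deployment this string would be passed to the guest's `attachDevice` call on the libvirt connection, which is what `nova.virt.libvirt.guest.attach_device` wraps.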
Dec 13 03:24:16 np0005558241 kernel: tap815f5388-ae: entered promiscuous mode
Dec 13 03:24:16 np0005558241 NetworkManager[50376]: <info>  [1765614256.1078] manager: (tap815f5388-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/136)
Dec 13 03:24:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:16Z|00289|binding|INFO|Claiming lport 815f5388-ae4c-4748-ae1e-a35179c687ad for this chassis.
Dec 13 03:24:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:16Z|00290|binding|INFO|815f5388-ae4c-4748-ae1e-a35179c687ad: Claiming fa:16:3e:a8:45:d8 10.100.0.9
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.110 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:16 np0005558241 systemd-udevd[287453]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:24:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:16Z|00291|binding|INFO|Setting lport 815f5388-ae4c-4748-ae1e-a35179c687ad ovn-installed in OVS
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.148 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:16 np0005558241 NetworkManager[50376]: <info>  [1765614256.1651] device (tap815f5388-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:24:16 np0005558241 NetworkManager[50376]: <info>  [1765614256.1660] device (tap815f5388-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:24:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:16Z|00292|binding|INFO|Setting lport 815f5388-ae4c-4748-ae1e-a35179c687ad up in Southbound
Dec 13 03:24:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.206 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:45:d8 10.100.0.9'], port_security=['fa:16:3e:a8:45:d8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1134538882', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd503913e-a05e-47d4-9366-db4426b9aac1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1134538882', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=815f5388-ae4c-4748-ae1e-a35179c687ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:24:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.208 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 815f5388-ae4c-4748-ae1e-a35179c687ad in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 bound to our chassis#033[00m
Dec 13 03:24:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.211 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92#033[00m
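In the Port_Binding row the agent matched above, the `mac` column packs the MAC and its IPs into one space-separated string (`'fa:16:3e:a8:45:d8 10.100.0.9'`). A tiny hypothetical helper (not part of ovsdbapp or neutron) splitting that format, assuming plain MAC-plus-addresses entries as seen here (OVN can also carry tokens such as `unknown` in this column, which this sketch does not handle):

```python
def split_binding_mac(entry):
    """Split an OVN Port_Binding 'mac' entry into (mac, [ips]).

    Assumes the 'MAC [IP ...]' shape shown in the log's
    Port_Binding row; special tokens like 'unknown' are ignored here.
    """
    parts = entry.split()
    return parts[0], parts[1:]

mac, ips = split_binding_mac("fa:16:3e:a8:45:d8 10.100.0.9")
```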
Dec 13 03:24:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.231 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11a577ca-5f95-4e1f-bebb-3ce3e906d039]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.252 248514 DEBUG nova.virt.libvirt.driver [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.252 248514 DEBUG nova.virt.libvirt.driver [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.252 248514 DEBUG nova.virt.libvirt.driver [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:b3:27:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.252 248514 DEBUG nova.virt.libvirt.driver [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] No VIF found with MAC fa:16:3e:a8:45:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:24:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.281 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[243c790f-5734-4d8c-8014-775101b72516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.286 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8e09a60b-2cde-4d23-8b5b-025a043f4100]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.309 248514 DEBUG nova.virt.libvirt.guest [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:24:16 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:  <nova:name>tempest-tempest.common.compute-instance-152393601</nova:name>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:24:16</nova:creationTime>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:24:16 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:    <nova:port uuid="e1eabc5e-9ed7-4b2e-ba64-11149b2f4043">
Dec 13 03:24:16 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:    <nova:port uuid="815f5388-ae4c-4748-ae1e-a35179c687ad">
Dec 13 03:24:16 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:24:16 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:24:16 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:24:16 np0005558241 nova_compute[248510]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
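The `set metadata xml` block above is namespaced under `http://openstack.org/xmlns/libvirt/nova/1.1`, so reading it back requires a namespace map. A minimal sketch with stdlib `xml.etree.ElementTree`, using a trimmed copy of the logged document (only the flavor and one port are reproduced here):

```python
import xml.etree.ElementTree as ET

# Namespace from the logged <nova:instance> root element.
NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

doc = """<nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
  <nova:flavor name="m1.nano">
    <nova:memory>128</nova:memory>
    <nova:vcpus>1</nova:vcpus>
  </nova:flavor>
  <nova:ports>
    <nova:port uuid="815f5388-ae4c-4748-ae1e-a35179c687ad">
      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
    </nova:port>
  </nova:ports>
</nova:instance>"""

root = ET.fromstring(doc)
flavor_name = root.find("nova:flavor", NS).get("name")
memory_mb = int(root.findtext("nova:flavor/nova:memory", namespaces=NS))
port_ips = [ip.get("address") for ip in root.findall(".//nova:ip", NS)]
```

The same pattern applies to the full document, e.g. to pull the owner project or the root-image UUID out of a guest's metadata with `virsh metadata`.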
Dec 13 03:24:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.325 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[16faf166-28f7-4f23-aeb9-e7b910fab72d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.330 248514 DEBUG oslo_concurrency.lockutils [None req-b445678b-1333-4f68-bd27-4ab273455ffd ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-d503913e-a05e-47d4-9366-db4426b9aac1-815f5388-ae4c-4748-ae1e-a35179c687ad" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.345 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d726848d-d2c0-485a-a7d4-fdf7a59594c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675797, 'reachable_time': 18986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287461, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.367 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[09b8957b-657c-406d-ab32-01b67248d4fe]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675811, 'tstamp': 675811}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287462, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675814, 'tstamp': 675814}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287462, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
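The two privsep replies above are pyroute2-style netlink messages whose `attrs` field is a list of `['NAME', value]` pairs (pyroute2 message objects expose a `get_attr` method for this; the function below is a stdlib re-implementation for illustration only). Extracting, say, the metadata IP from the RTM_NEWADDR dump reduces to a lookup by attribute name:

```python
def get_attr(attrs, name, default=None):
    """Return the first value for `name` in a pyroute2-style
    [['NAME', value], ...] attrs list, like the RTM_NEWLINK /
    RTM_NEWADDR payloads the privsep daemon returned above."""
    for key, value in attrs:
        if key == name:
            return value
    return default

# Trimmed from the logged RTM_NEWADDR message for tap1ca92864-31.
addr_attrs = [["IFA_ADDRESS", "169.254.169.254"],
              ["IFA_LABEL", "tap1ca92864-31"],
              ["IFA_FLAGS", 128]]
metadata_ip = get_attr(addr_attrs, "IFA_ADDRESS")
```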
Dec 13 03:24:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.370 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.371 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:16 np0005558241 nova_compute[248510]: 2025-12-13 08:24:16.373 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.373 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.374 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:24:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.374 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:16.375 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.021 248514 DEBUG nova.compute.manager [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Received event network-vif-deleted-d462a8a0-34ee-4682-ac5c-f7632b5ad39c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.021 248514 DEBUG nova.compute.manager [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.022 248514 DEBUG nova.compute.manager [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing instance network info cache due to event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.022 248514 DEBUG oslo_concurrency.lockutils [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.022 248514 DEBUG oslo_concurrency.lockutils [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.022 248514 DEBUG nova.network.neutron [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:24:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1773: 321 pgs: 321 active+clean; 390 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 171 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.221 248514 DEBUG nova.compute.manager [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.222 248514 DEBUG oslo_concurrency.lockutils [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.222 248514 DEBUG oslo_concurrency.lockutils [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.223 248514 DEBUG oslo_concurrency.lockutils [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.224 248514 DEBUG nova.compute.manager [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Processing event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.224 248514 DEBUG nova.compute.manager [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.224 248514 DEBUG oslo_concurrency.lockutils [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.225 248514 DEBUG oslo_concurrency.lockutils [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.225 248514 DEBUG oslo_concurrency.lockutils [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.226 248514 DEBUG nova.compute.manager [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] No waiting events found dispatching network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.226 248514 WARNING nova.compute.manager [req-04df1891-2a64-4f46-b7fb-dd74745bc972 req-8d9b205e-1a4a-404c-a3c0-d8737779d7a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received unexpected event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad for instance with vm_state building and task_state spawning.#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.227 248514 DEBUG nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.234 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614257.2332642, b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.235 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.238 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.247 248514 INFO nova.virt.libvirt.driver [-] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Instance spawned successfully.#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.248 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.266 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.273 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.277 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.278 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.278 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.278 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.278 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.279 248514 DEBUG nova.virt.libvirt.driver [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.337 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.415 248514 INFO nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Took 16.90 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.416 248514 DEBUG nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.514 248514 INFO nova.compute.manager [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Took 18.34 seconds to build instance.#033[00m
Dec 13 03:24:17 np0005558241 nova_compute[248510]: 2025-12-13 08:24:17.598 248514 DEBUG oslo_concurrency.lockutils [None req-f058bfe4-5f39-46cb-9e93-ee0a610f8ba4 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:18 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:18Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:45:d8 10.100.0.9
Dec 13 03:24:18 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:18Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:45:d8 10.100.0.9
Dec 13 03:24:18 np0005558241 nova_compute[248510]: 2025-12-13 08:24:18.475 248514 DEBUG nova.network.neutron [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updated VIF entry in instance network info cache for port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:24:18 np0005558241 nova_compute[248510]: 2025-12-13 08:24:18.476 248514 DEBUG nova.network.neutron [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updating instance_info_cache with network_info: [{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:18 np0005558241 nova_compute[248510]: 2025-12-13 08:24:18.492 248514 DEBUG oslo_concurrency.lockutils [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:24:18 np0005558241 nova_compute[248510]: 2025-12-13 08:24:18.493 248514 DEBUG nova.compute.manager [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-changed-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:18 np0005558241 nova_compute[248510]: 2025-12-13 08:24:18.493 248514 DEBUG nova.compute.manager [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Refreshing instance network info cache due to event network-changed-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:24:18 np0005558241 nova_compute[248510]: 2025-12-13 08:24:18.493 248514 DEBUG oslo_concurrency.lockutils [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:24:18 np0005558241 nova_compute[248510]: 2025-12-13 08:24:18.493 248514 DEBUG oslo_concurrency.lockutils [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:24:18 np0005558241 nova_compute[248510]: 2025-12-13 08:24:18.494 248514 DEBUG nova.network.neutron [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Refreshing network info cache for port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.057 248514 DEBUG oslo_concurrency.lockutils [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "interface-d503913e-a05e-47d4-9366-db4426b9aac1-815f5388-ae4c-4748-ae1e-a35179c687ad" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.058 248514 DEBUG oslo_concurrency.lockutils [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-d503913e-a05e-47d4-9366-db4426b9aac1-815f5388-ae4c-4748-ae1e-a35179c687ad" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.089 248514 DEBUG nova.objects.instance [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'flavor' on Instance uuid d503913e-a05e-47d4-9366-db4426b9aac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:24:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1774: 321 pgs: 321 active+clean; 405 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 156 op/s
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.117 248514 DEBUG nova.virt.libvirt.vif [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-152393601',display_name='tempest-tempest.common.compute-instance-152393601',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-152393601',id=32,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-5m65b9nl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=d503913e-a05e-47d4-9366-db4426b9aac1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.117 248514 DEBUG nova.network.os_vif_util [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.118 248514 DEBUG nova.network.os_vif_util [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.121 248514 DEBUG nova.virt.libvirt.guest [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.123 248514 DEBUG nova.virt.libvirt.guest [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.125 248514 DEBUG nova.virt.libvirt.driver [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Attempting to detach device tap815f5388-ae from instance d503913e-a05e-47d4-9366-db4426b9aac1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.126 248514 DEBUG nova.virt.libvirt.guest [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] detach device xml: <interface type="ethernet">
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:a8:45:d8"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <target dev="tap815f5388-ae"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]: </interface>
Dec 13 03:24:19 np0005558241 nova_compute[248510]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.131 248514 DEBUG nova.virt.libvirt.guest [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.135 248514 DEBUG nova.compute.manager [req-f1db74ed-4761-41aa-8287-e3f4d2374109 req-b967e346-c4e2-4309-a147-ec551e33640f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.135 248514 DEBUG nova.compute.manager [req-f1db74ed-4761-41aa-8287-e3f4d2374109 req-b967e346-c4e2-4309-a147-ec551e33640f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing instance network info cache due to event network-changed-627622b8-ef54-4181-bd8d-e8e82650b143. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.136 248514 DEBUG oslo_concurrency.lockutils [req-f1db74ed-4761-41aa-8287-e3f4d2374109 req-b967e346-c4e2-4309-a147-ec551e33640f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.138 248514 DEBUG nova.virt.libvirt.guest [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface>not found in domain: <domain type='kvm' id='37'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <name>instance-00000020</name>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <uuid>d503913e-a05e-47d4-9366-db4426b9aac1</uuid>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:name>tempest-tempest.common.compute-instance-152393601</nova:name>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:24:16</nova:creationTime>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:port uuid="e1eabc5e-9ed7-4b2e-ba64-11149b2f4043">
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:port uuid="815f5388-ae4c-4748-ae1e-a35179c687ad">
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:24:19 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <resource>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </resource>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <entry name='serial'>d503913e-a05e-47d4-9366-db4426b9aac1</entry>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <entry name='uuid'>d503913e-a05e-47d4-9366-db4426b9aac1</entry>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/d503913e-a05e-47d4-9366-db4426b9aac1_disk' index='2'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/d503913e-a05e-47d4-9366-db4426b9aac1_disk.config' index='1'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:b3:27:85'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target dev='tape1eabc5e-9e'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:a8:45:d8'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target dev='tap815f5388-ae'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='net1'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <source path='/dev/pts/2'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/console.log' append='off'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      </target>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/2'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <source path='/dev/pts/2'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/console.log' append='off'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </console>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c374,c657</label>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c374,c657</imagelabel>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:24:19 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:24:19 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.139 248514 INFO nova.virt.libvirt.driver [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully detached device tap815f5388-ae from instance d503913e-a05e-47d4-9366-db4426b9aac1 from the persistent domain config.#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.139 248514 DEBUG nova.virt.libvirt.driver [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] (1/8): Attempting to detach device tap815f5388-ae with device alias net1 from instance d503913e-a05e-47d4-9366-db4426b9aac1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.139 248514 DEBUG nova.virt.libvirt.guest [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] detach device xml: <interface type="ethernet">
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:a8:45:d8"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <target dev="tap815f5388-ae"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]: </interface>
Dec 13 03:24:19 np0005558241 nova_compute[248510]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec 13 03:24:19 np0005558241 kernel: tap815f5388-ae (unregistering): left promiscuous mode
Dec 13 03:24:19 np0005558241 NetworkManager[50376]: <info>  [1765614259.2733] device (tap815f5388-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.278 248514 DEBUG nova.network.neutron [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updated VIF entry in instance network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.278 248514 DEBUG nova.network.neutron [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updating instance_info_cache with network_info: [{"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:19 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:19Z|00293|binding|INFO|Releasing lport 815f5388-ae4c-4748-ae1e-a35179c687ad from this chassis (sb_readonly=0)
Dec 13 03:24:19 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:19Z|00294|binding|INFO|Setting lport 815f5388-ae4c-4748-ae1e-a35179c687ad down in Southbound
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.282 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:19 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:19Z|00295|binding|INFO|Removing iface tap815f5388-ae ovn-installed in OVS
Dec 13 03:24:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.289 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:45:d8 10.100.0.9'], port_security=['fa:16:3e:a8:45:d8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1134538882', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd503913e-a05e-47d4-9366-db4426b9aac1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1134538882', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '9', 'neutron:security_group_ids': '012a6c0c-52d3-4f90-8790-6042a7d534f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=815f5388-ae4c-4748-ae1e-a35179c687ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:24:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.290 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 815f5388-ae4c-4748-ae1e-a35179c687ad in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis#033[00m
Dec 13 03:24:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.292 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.299 248514 DEBUG nova.virt.libvirt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Received event <DeviceRemovedEvent: 1765614259.2940338, d503913e-a05e-47d4-9366-db4426b9aac1 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.300 248514 DEBUG nova.virt.libvirt.driver [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Start waiting for the detach event from libvirt for device tap815f5388-ae with device alias net1 for instance d503913e-a05e-47d4-9366-db4426b9aac1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.300 248514 DEBUG nova.virt.libvirt.guest [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.301 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.308 248514 DEBUG nova.virt.libvirt.guest [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a8:45:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap815f5388-ae"/></interface>not found in domain: <domain type='kvm' id='37'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <name>instance-00000020</name>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <uuid>d503913e-a05e-47d4-9366-db4426b9aac1</uuid>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:name>tempest-tempest.common.compute-instance-152393601</nova:name>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:24:16</nova:creationTime>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:port uuid="e1eabc5e-9ed7-4b2e-ba64-11149b2f4043">
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:port uuid="815f5388-ae4c-4748-ae1e-a35179c687ad">
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:24:19 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <resource>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </resource>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <entry name='serial'>d503913e-a05e-47d4-9366-db4426b9aac1</entry>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <entry name='uuid'>d503913e-a05e-47d4-9366-db4426b9aac1</entry>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/d503913e-a05e-47d4-9366-db4426b9aac1_disk' index='2'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/d503913e-a05e-47d4-9366-db4426b9aac1_disk.config' index='1'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:b3:27:85'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target dev='tape1eabc5e-9e'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <source path='/dev/pts/2'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/console.log' append='off'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      </target>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/2'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <source path='/dev/pts/2'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1/console.log' append='off'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </console>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c374,c657</label>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c374,c657</imagelabel>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:24:19 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:24:19 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.309 248514 INFO nova.virt.libvirt.driver [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully detached device tap815f5388-ae from instance d503913e-a05e-47d4-9366-db4426b9aac1 from the live domain config.#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.310 248514 DEBUG nova.virt.libvirt.vif [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-152393601',display_name='tempest-tempest.common.compute-instance-152393601',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-152393601',id=32,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-5m65b9nl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=d503913e-a05e-47d4-9366-db4426b9aac1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.310 248514 DEBUG nova.network.os_vif_util [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.311 248514 DEBUG nova.network.os_vif_util [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.311 248514 DEBUG os_vif [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.313 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.313 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap815f5388-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.315 248514 DEBUG oslo_concurrency.lockutils [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.315 248514 DEBUG nova.compute.manager [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-changed-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.315 248514 DEBUG nova.compute.manager [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing instance network info cache due to event network-changed-815f5388-ae4c-4748-ae1e-a35179c687ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.316 248514 DEBUG oslo_concurrency.lockutils [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.316 248514 DEBUG oslo_concurrency.lockutils [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.316 248514 DEBUG nova.network.neutron [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Refreshing network info cache for port 815f5388-ae4c-4748-ae1e-a35179c687ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.318 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.319 248514 DEBUG oslo_concurrency.lockutils [req-f1db74ed-4761-41aa-8287-e3f4d2374109 req-b967e346-c4e2-4309-a147-ec551e33640f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.319 248514 DEBUG nova.network.neutron [req-f1db74ed-4761-41aa-8287-e3f4d2374109 req-b967e346-c4e2-4309-a147-ec551e33640f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Refreshing network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.321 248514 DEBUG nova.compute.manager [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.322 248514 DEBUG oslo_concurrency.lockutils [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.322 248514 DEBUG oslo_concurrency.lockutils [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.322 248514 DEBUG oslo_concurrency.lockutils [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.322 248514 DEBUG nova.compute.manager [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] No waiting events found dispatching network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.322 248514 WARNING nova.compute.manager [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received unexpected event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad for instance with vm_state active and task_state None.#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.322 248514 DEBUG nova.compute.manager [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.322 248514 DEBUG oslo_concurrency.lockutils [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.323 248514 DEBUG oslo_concurrency.lockutils [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.323 248514 DEBUG oslo_concurrency.lockutils [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.323 248514 DEBUG nova.compute.manager [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] No waiting events found dispatching network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.323 248514 WARNING nova.compute.manager [req-674e47c4-4019-4162-97c7-7d40c750c23f req-e3abc937-43f1-4751-9c43-cb43aad9ff2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received unexpected event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad for instance with vm_state active and task_state None.#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.324 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.317 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6067a788-bdce-40fb-b115-302926969b8e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.326 248514 INFO os_vif [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae')#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.327 248514 DEBUG nova.virt.libvirt.guest [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:name>tempest-tempest.common.compute-instance-152393601</nova:name>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:24:19</nova:creationTime>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:user uuid="ee4333dff699428ca3ae5201855ab430">tempest-AttachInterfacesTestJSON-1051352616-project-member</nova:user>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:project uuid="ea37c93820f34312bc386ea952ecd94f">tempest-AttachInterfacesTestJSON-1051352616</nova:project>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    <nova:port uuid="e1eabc5e-9ed7-4b2e-ba64-11149b2f4043">
Dec 13 03:24:19 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:24:19 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:24:19 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:24:19 np0005558241 nova_compute[248510]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec 13 03:24:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.370 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c30207aa-4ab3-4174-a08e-11eb6f8e9b3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.375 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ccae78c6-adb1-49e0-aa7a-f012de4c7745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.410 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb8d5de-7ab5-4480-915c-2fbe7298d8b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.429 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ecef10ab-8044-4579-ac88-5c5630d00709]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675797, 'reachable_time': 18986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287474, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.448 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b1d13eab-fc42-4e60-a5d9-e32dff4c535f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675811, 'tstamp': 675811}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287475, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675814, 'tstamp': 675814}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287475, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.451 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.453 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.455 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.455 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:24:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.456 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:19.456 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.618 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.726 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614244.724721, 4403714d-3521-4409-9c3b-59d655fc999d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.727 248514 INFO nova.compute.manager [-] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:24:19 np0005558241 nova_compute[248510]: 2025-12-13 08:24:19.749 248514 DEBUG nova.compute.manager [None req-be68987f-44fc-4709-aea9-2322cd833bbc - - - - - -] [instance: 4403714d-3521-4409-9c3b-59d655fc999d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:24:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003389182341954993 of space, bias 1.0, pg target 1.016754702586498 quantized to 32 (current 32)
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006670482364989137 of space, bias 1.0, pg target 0.1994474227131752 quantized to 32 (current 32)
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.230337999555883e-07 of space, bias 4.0, pg target 0.0008647484247468835 quantized to 16 (current 32)
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:24:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Dec 13 03:24:20 np0005558241 nova_compute[248510]: 2025-12-13 08:24:20.912 248514 DEBUG nova.network.neutron [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updated VIF entry in instance network info cache for port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:24:20 np0005558241 nova_compute[248510]: 2025-12-13 08:24:20.912 248514 DEBUG nova.network.neutron [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updating instance_info_cache with network_info: [{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:20 np0005558241 nova_compute[248510]: 2025-12-13 08:24:20.959 248514 DEBUG oslo_concurrency.lockutils [req-aed3b438-84f6-4f24-a444-4104cf0c20cf req-4ab9a0fc-a807-4161-9954-481a4e8b798a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.060 248514 DEBUG oslo_concurrency.lockutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.061 248514 DEBUG oslo_concurrency.lockutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.061 248514 INFO nova.compute.manager [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Rebooting instance#033[00m
Dec 13 03:24:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1775: 321 pgs: 321 active+clean; 405 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 128 op/s
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.196 248514 DEBUG oslo_concurrency.lockutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "refresh_cache-b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.197 248514 DEBUG oslo_concurrency.lockutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquired lock "refresh_cache-b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.197 248514 DEBUG nova.network.neutron [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.480 248514 DEBUG nova.network.neutron [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updated VIF entry in instance network info cache for port 815f5388-ae4c-4748-ae1e-a35179c687ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.481 248514 DEBUG nova.network.neutron [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updating instance_info_cache with network_info: [{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.500 248514 DEBUG oslo_concurrency.lockutils [req-7432bf95-b697-4145-938f-bfec1b91c281 req-fe1f7beb-3cb0-4f53-9d13-61cb9992ff56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.645 248514 DEBUG nova.network.neutron [req-f1db74ed-4761-41aa-8287-e3f4d2374109 req-b967e346-c4e2-4309-a147-ec551e33640f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updated VIF entry in instance network info cache for port 627622b8-ef54-4181-bd8d-e8e82650b143. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.645 248514 DEBUG nova.network.neutron [req-f1db74ed-4761-41aa-8287-e3f4d2374109 req-b967e346-c4e2-4309-a147-ec551e33640f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updating instance_info_cache with network_info: [{"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.665 248514 DEBUG oslo_concurrency.lockutils [req-f1db74ed-4761-41aa-8287-e3f4d2374109 req-b967e346-c4e2-4309-a147-ec551e33640f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-dc64fea4-e9a8-47e7-8a3a-d01897fc81de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.826 248514 DEBUG oslo_concurrency.lockutils [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.826 248514 DEBUG oslo_concurrency.lockutils [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquired lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.827 248514 DEBUG nova.network.neutron [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.914 248514 DEBUG nova.compute.manager [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-unplugged-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.914 248514 DEBUG oslo_concurrency.lockutils [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.915 248514 DEBUG oslo_concurrency.lockutils [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.915 248514 DEBUG oslo_concurrency.lockutils [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.915 248514 DEBUG nova.compute.manager [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] No waiting events found dispatching network-vif-unplugged-815f5388-ae4c-4748-ae1e-a35179c687ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.916 248514 WARNING nova.compute.manager [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received unexpected event network-vif-unplugged-815f5388-ae4c-4748-ae1e-a35179c687ad for instance with vm_state active and task_state None.#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.916 248514 DEBUG nova.compute.manager [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.916 248514 DEBUG oslo_concurrency.lockutils [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.916 248514 DEBUG oslo_concurrency.lockutils [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.916 248514 DEBUG oslo_concurrency.lockutils [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.917 248514 DEBUG nova.compute.manager [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] No waiting events found dispatching network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:21 np0005558241 nova_compute[248510]: 2025-12-13 08:24:21.917 248514 WARNING nova.compute.manager [req-2955f14b-8475-43b3-aed3-464af4287a90 req-19cd8479-a10d-4a38-b3f0-f32c51941d60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received unexpected event network-vif-plugged-815f5388-ae4c-4748-ae1e-a35179c687ad for instance with vm_state active and task_state None.#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.297 248514 DEBUG oslo_concurrency.lockutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.297 248514 DEBUG oslo_concurrency.lockutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.298 248514 DEBUG oslo_concurrency.lockutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.298 248514 DEBUG oslo_concurrency.lockutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.298 248514 DEBUG oslo_concurrency.lockutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.299 248514 INFO nova.compute.manager [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Terminating instance#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.300 248514 DEBUG nova.compute.manager [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:24:22 np0005558241 kernel: tape1eabc5e-9e (unregistering): left promiscuous mode
Dec 13 03:24:22 np0005558241 NetworkManager[50376]: <info>  [1765614262.3432] device (tape1eabc5e-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:24:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:22Z|00296|binding|INFO|Releasing lport e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 from this chassis (sb_readonly=0)
Dec 13 03:24:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:22Z|00297|binding|INFO|Setting lport e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 down in Southbound
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.354 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:22Z|00298|binding|INFO|Removing iface tape1eabc5e-9e ovn-installed in OVS
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.357 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.367 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:27:85 10.100.0.3'], port_security=['fa:16:3e:b3:27:85 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd503913e-a05e-47d4-9366-db4426b9aac1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd758160-971f-4f0e-a4ca-a13304d3c491', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=e1eabc5e-9ed7-4b2e-ba64-11149b2f4043) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:24:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.368 158419 INFO neutron.agent.ovn.metadata.agent [-] Port e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis#033[00m
Dec 13 03:24:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.375 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ca92864-3b70-4794-9db1-fa08128cef92#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.379 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.399 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[754d463a-5f3d-4458-9c24-f389e5e4371b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:22 np0005558241 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Deactivated successfully.
Dec 13 03:24:22 np0005558241 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000020.scope: Consumed 14.674s CPU time.
Dec 13 03:24:22 np0005558241 systemd-machined[210538]: Machine qemu-37-instance-00000020 terminated.
Dec 13 03:24:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.438 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ae46693a-fabe-4fd0-b75c-5c6e2d4070bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.443 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[22c4103f-6729-4240-9582-e498e3c73766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.485 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3f1acd-4552-44ff-a60c-0919cfdf22c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.506 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[868cf228-a58a-4fc4-b5b1-c363d3ef5cee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ca92864-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:e2:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675797, 'reachable_time': 18986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287487, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.527 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4ebeef-be2d-42db-8c21-0b89d71f8b76]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675811, 'tstamp': 675811}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287489, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1ca92864-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675814, 'tstamp': 675814}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287489, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.530 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.536 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.540 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.540 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ca92864-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.541 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:24:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.541 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ca92864-30, col_values=(('external_ids', {'iface-id': 'dda81cda-adb6-4137-8e02-abd94f4378f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:22.542 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.546 248514 INFO nova.virt.libvirt.driver [-] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Instance destroyed successfully.#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.547 248514 DEBUG nova.objects.instance [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'resources' on Instance uuid d503913e-a05e-47d4-9366-db4426b9aac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.566 248514 DEBUG nova.virt.libvirt.vif [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-152393601',display_name='tempest-tempest.common.compute-instance-152393601',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-152393601',id=32,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-5m65b9nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=d503913e-a05e-47d4-9366-db4426b9aac1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.571 248514 DEBUG nova.network.os_vif_util [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.572 248514 DEBUG nova.network.os_vif_util [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:27:85,bridge_name='br-int',has_traffic_filtering=True,id=e1eabc5e-9ed7-4b2e-ba64-11149b2f4043,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1eabc5e-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.572 248514 DEBUG os_vif [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:27:85,bridge_name='br-int',has_traffic_filtering=True,id=e1eabc5e-9ed7-4b2e-ba64-11149b2f4043,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1eabc5e-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.574 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.575 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1eabc5e-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.577 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.579 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.581 248514 INFO os_vif [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:27:85,bridge_name='br-int',has_traffic_filtering=True,id=e1eabc5e-9ed7-4b2e-ba64-11149b2f4043,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1eabc5e-9e')#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.582 248514 DEBUG nova.virt.libvirt.vif [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-152393601',display_name='tempest-tempest.common.compute-instance-152393601',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-152393601',id=32,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-5m65b9nl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=d503913e-a05e-47d4-9366-db4426b9aac1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.583 248514 DEBUG nova.network.os_vif_util [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "815f5388-ae4c-4748-ae1e-a35179c687ad", "address": "fa:16:3e:a8:45:d8", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap815f5388-ae", "ovs_interfaceid": "815f5388-ae4c-4748-ae1e-a35179c687ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.584 248514 DEBUG nova.network.os_vif_util [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.584 248514 DEBUG os_vif [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.586 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.586 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap815f5388-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.587 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.588 248514 INFO os_vif [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:45:d8,bridge_name='br-int',has_traffic_filtering=True,id=815f5388-ae4c-4748-ae1e-a35179c687ad,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap815f5388-ae')#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.859 248514 INFO nova.virt.libvirt.driver [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Deleting instance files /var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1_del#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.860 248514 INFO nova.virt.libvirt.driver [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Deletion of /var/lib/nova/instances/d503913e-a05e-47d4-9366-db4426b9aac1_del complete#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.967 248514 INFO nova.compute.manager [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.968 248514 DEBUG oslo.service.loopingcall [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.969 248514 DEBUG nova.compute.manager [-] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.969 248514 DEBUG nova.network.neutron [-] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:24:22 np0005558241 nova_compute[248510]: 2025-12-13 08:24:22.978 248514 DEBUG nova.network.neutron [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Updating instance_info_cache with network_info: [{"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.002 248514 DEBUG oslo_concurrency.lockutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Releasing lock "refresh_cache-b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.004 248514 DEBUG nova.compute.manager [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:24:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1776: 321 pgs: 321 active+clean; 383 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 147 op/s
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.271 248514 DEBUG oslo_concurrency.lockutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.272 248514 DEBUG oslo_concurrency.lockutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.272 248514 DEBUG oslo_concurrency.lockutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.273 248514 DEBUG oslo_concurrency.lockutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.273 248514 DEBUG oslo_concurrency.lockutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.275 248514 INFO nova.compute.manager [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Terminating instance#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.276 248514 DEBUG nova.compute.manager [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:24:23 np0005558241 kernel: tapb494f789-c1 (unregistering): left promiscuous mode
Dec 13 03:24:23 np0005558241 NetworkManager[50376]: <info>  [1765614263.5958] device (tapb494f789-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:24:23 np0005558241 kernel: tap627622b8-ef (unregistering): left promiscuous mode
Dec 13 03:24:23 np0005558241 NetworkManager[50376]: <info>  [1765614263.6027] device (tap627622b8-ef): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:24:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:23Z|00299|binding|INFO|Releasing lport b494f789-c137-45c5-9750-2bf0b43681ad from this chassis (sb_readonly=0)
Dec 13 03:24:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:23Z|00300|binding|INFO|Setting lport b494f789-c137-45c5-9750-2bf0b43681ad down in Southbound
Dec 13 03:24:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:23Z|00301|binding|INFO|Removing iface tapb494f789-c1 ovn-installed in OVS
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.603 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.606 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.612 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:55:3f 10.100.0.12'], port_security=['fa:16:3e:26:55:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b3086cd3-fbaf-4f8e-bca2-162a0582d3a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f78f312dfcc4df6ba40b7c8a4e1aa97', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9aebbe0-bcf2-4e40-aa62-10b8cea7c801', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2c3ff6a3-0731-41dd-8d72-42e18a11ea75, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b494f789-c137-45c5-9750-2bf0b43681ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:24:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.613 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b494f789-c137-45c5-9750-2bf0b43681ad in datapath 0740d1ee-47e1-4bdf-bdc4-2dafff999f03 unbound from our chassis#033[00m
Dec 13 03:24:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.615 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0740d1ee-47e1-4bdf-bdc4-2dafff999f03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:24:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.617 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[35c8bae5-3a8a-4490-890d-0025c8002e17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.618 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 namespace which is not needed anymore#033[00m
Dec 13 03:24:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:23Z|00302|binding|INFO|Releasing lport 627622b8-ef54-4181-bd8d-e8e82650b143 from this chassis (sb_readonly=0)
Dec 13 03:24:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:23Z|00303|binding|INFO|Setting lport 627622b8-ef54-4181-bd8d-e8e82650b143 down in Southbound
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.630 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:23Z|00304|binding|INFO|Removing iface tap627622b8-ef ovn-installed in OVS
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.637 248514 INFO nova.network.neutron [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Port 815f5388-ae4c-4748-ae1e-a35179c687ad from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec 13 03:24:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.638 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:b2:83 10.100.0.9'], port_security=['fa:16:3e:4a:b2:83 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dc64fea4-e9a8-47e7-8a3a-d01897fc81de', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06fbab937d6444558229b2351632e711', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fa325bd2-c57a-49fb-8dd9-f45405c95b4e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f47224ac-d05f-46db-ac07-cb476b38b044, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=627622b8-ef54-4181-bd8d-e8e82650b143) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.638 248514 DEBUG nova.network.neutron [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updating instance_info_cache with network_info: [{"id": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "address": "fa:16:3e:b3:27:85", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1eabc5e-9e", "ovs_interfaceid": "e1eabc5e-9ed7-4b2e-ba64-11149b2f4043", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.645 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:23 np0005558241 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000023.scope: Deactivated successfully.
Dec 13 03:24:23 np0005558241 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000023.scope: Consumed 6.689s CPU time.
Dec 13 03:24:23 np0005558241 systemd-machined[210538]: Machine qemu-40-instance-00000023 terminated.
Dec 13 03:24:23 np0005558241 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Dec 13 03:24:23 np0005558241 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000001f.scope: Consumed 15.210s CPU time.
Dec 13 03:24:23 np0005558241 systemd-machined[210538]: Machine qemu-36-instance-0000001f terminated.
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.675 248514 DEBUG oslo_concurrency.lockutils [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Releasing lock "refresh_cache-d503913e-a05e-47d4-9366-db4426b9aac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.704 248514 DEBUG oslo_concurrency.lockutils [None req-c4ba1dad-eb55-4790-99b8-fd00eb87a305 ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "interface-d503913e-a05e-47d4-9366-db4426b9aac1-815f5388-ae4c-4748-ae1e-a35179c687ad" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:23 np0005558241 NetworkManager[50376]: <info>  [1765614263.7215] manager: (tapb494f789-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/137)
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.725 248514 INFO nova.virt.libvirt.driver [-] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Instance destroyed successfully.#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.729 248514 DEBUG nova.objects.instance [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lazy-loading 'resources' on Instance uuid dc64fea4-e9a8-47e7-8a3a-d01897fc81de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.736 248514 INFO nova.virt.libvirt.driver [-] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Instance destroyed successfully.#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.736 248514 DEBUG nova.objects.instance [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lazy-loading 'resources' on Instance uuid b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.746 248514 DEBUG nova.virt.libvirt.vif [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:22:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-153788010',display_name='tempest-FloatingIPsAssociationTestJSON-server-153788010',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-153788010',id=31,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='06fbab937d6444558229b2351632e711',ramdisk_id='',reservation_id='r-m786w5ky',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_v
if_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-609563086',owner_user_name='tempest-FloatingIPsAssociationTestJSON-609563086-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:05Z,user_data=None,user_id='79d4b34b8bd3452cb5b8c0954166f397',uuid=dc64fea4-e9a8-47e7-8a3a-d01897fc81de,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.747 248514 DEBUG nova.network.os_vif_util [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converting VIF {"id": "627622b8-ef54-4181-bd8d-e8e82650b143", "address": "fa:16:3e:4a:b2:83", "network": {"id": "62193ff6-aaa1-401a-b1e0-512e67752a9e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-982656854-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06fbab937d6444558229b2351632e711", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap627622b8-ef", "ovs_interfaceid": "627622b8-ef54-4181-bd8d-e8e82650b143", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.747 248514 DEBUG nova.network.os_vif_util [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:b2:83,bridge_name='br-int',has_traffic_filtering=True,id=627622b8-ef54-4181-bd8d-e8e82650b143,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap627622b8-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.748 248514 DEBUG os_vif [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:b2:83,bridge_name='br-int',has_traffic_filtering=True,id=627622b8-ef54-4181-bd8d-e8e82650b143,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap627622b8-ef') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.749 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.750 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap627622b8-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.752 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.754 248514 DEBUG nova.virt.libvirt.vif [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-370636454',display_name='tempest-InstanceActionsTestJSON-server-370636454',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-370636454',id=35,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:24:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8f78f312dfcc4df6ba40b7c8a4e1aa97',ramdisk_id='',reservation_id='r-b8os57ms',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1859862292',owner_user_name='tempest-InstanceActionsTestJSON-1859862292-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:24:23Z,user_data=None,user_id='6827fc2174b74c2a92803d852e87c70a',uuid=b3086cd3-fbaf-4f8e-bca2-162a0582d3a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.754 248514 DEBUG nova.network.os_vif_util [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converting VIF {"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.755 248514 DEBUG nova.network.os_vif_util [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.755 248514 DEBUG os_vif [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.756 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.757 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.758 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb494f789-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.759 248514 INFO os_vif [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:b2:83,bridge_name='br-int',has_traffic_filtering=True,id=627622b8-ef54-4181-bd8d-e8e82650b143,network=Network(62193ff6-aaa1-401a-b1e0-512e67752a9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap627622b8-ef')#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.774 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:23 np0005558241 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287372]: [NOTICE]   (287376) : haproxy version is 2.8.14-c23fe91
Dec 13 03:24:23 np0005558241 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287372]: [NOTICE]   (287376) : path to executable is /usr/sbin/haproxy
Dec 13 03:24:23 np0005558241 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287372]: [WARNING]  (287376) : Exiting Master process...
Dec 13 03:24:23 np0005558241 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287372]: [WARNING]  (287376) : Exiting Master process...
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.777 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:24:23 np0005558241 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287372]: [ALERT]    (287376) : Current worker (287378) exited with code 143 (Terminated)
Dec 13 03:24:23 np0005558241 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287372]: [WARNING]  (287376) : All workers exited. Exiting... (0)
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.779 248514 INFO os_vif [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1')#033[00m
Dec 13 03:24:23 np0005558241 systemd[1]: libpod-3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9.scope: Deactivated successfully.
Dec 13 03:24:23 np0005558241 podman[287557]: 2025-12-13 08:24:23.786427824 +0000 UTC m=+0.053397529 container died 3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.790 248514 DEBUG nova.virt.libvirt.driver [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Start _get_guest_xml network_info=[{"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.798 248514 WARNING nova.virt.libvirt.driver [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.806 248514 DEBUG nova.virt.libvirt.host [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.808 248514 DEBUG nova.virt.libvirt.host [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.811 248514 DEBUG nova.virt.libvirt.host [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.811 248514 DEBUG nova.virt.libvirt.host [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.812 248514 DEBUG nova.virt.libvirt.driver [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.812 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.813 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.813 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.813 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.813 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.813 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.813 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.814 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.814 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.814 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.814 248514 DEBUG nova.virt.hardware [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.814 248514 DEBUG nova.objects.instance [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:24:23 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9-userdata-shm.mount: Deactivated successfully.
Dec 13 03:24:23 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f4393fad8ce5a73df0ceef6cfed59401542ad7871cb68553053253ed76ee8e4e-merged.mount: Deactivated successfully.
Dec 13 03:24:23 np0005558241 podman[287557]: 2025-12-13 08:24:23.837096014 +0000 UTC m=+0.104065709 container cleanup 3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.841 248514 DEBUG oslo_concurrency.processutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:24:23 np0005558241 systemd[1]: libpod-conmon-3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9.scope: Deactivated successfully.
Dec 13 03:24:23 np0005558241 podman[287614]: 2025-12-13 08:24:23.94798045 +0000 UTC m=+0.085866340 container remove 3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:24:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.956 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8c1f9e19-1f8e-4821-84a2-d0d7fd8ef4c9]: (4, ('Sat Dec 13 08:24:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 (3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9)\n3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9\nSat Dec 13 08:24:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 (3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9)\n3237793f73c077c5d2649d978df0df71d80d86614504982b3e8c73b1e00626d9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.959 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[945dc335-6f26-44b6-a2ea-316d6f6c379e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.960 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0740d1ee-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:23 np0005558241 kernel: tap0740d1ee-40: left promiscuous mode
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.962 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:23 np0005558241 nova_compute[248510]: 2025-12-13 08:24:23.978 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.982 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[17942e2f-5587-4a13-a3aa-ed896b765434]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.996 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3614aa6f-35ac-4d1b-a92d-9733f8f22899]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:23.998 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[85bcf9c9-03f6-4dad-9d88-fb02281d7a7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.020 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c4b280c6-b28e-4a63-944a-9c85c0601397]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 682814, 'reachable_time': 41738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287646, 'error': None, 'target': 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.023 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:24:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.023 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d40fd7-e361-489c-bcfc-689b105dd531]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.023 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 627622b8-ef54-4181-bd8d-e8e82650b143 in datapath 62193ff6-aaa1-401a-b1e0-512e67752a9e unbound from our chassis#033[00m
Dec 13 03:24:24 np0005558241 systemd[1]: run-netns-ovnmeta\x2d0740d1ee\x2d47e1\x2d4bdf\x2dbdc4\x2d2dafff999f03.mount: Deactivated successfully.
Dec 13 03:24:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.025 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62193ff6-aaa1-401a-b1e0-512e67752a9e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:24:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.025 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[34caeb64-a20d-438c-8f5c-d672a5b3d466]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.026 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e namespace which is not needed anymore#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.177 248514 INFO nova.virt.libvirt.driver [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Deleting instance files /var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de_del#033[00m
Dec 13 03:24:24 np0005558241 neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e[284803]: [NOTICE]   (284807) : haproxy version is 2.8.14-c23fe91
Dec 13 03:24:24 np0005558241 neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e[284803]: [NOTICE]   (284807) : path to executable is /usr/sbin/haproxy
Dec 13 03:24:24 np0005558241 neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e[284803]: [WARNING]  (284807) : Exiting Master process...
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.178 248514 INFO nova.virt.libvirt.driver [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Deletion of /var/lib/nova/instances/dc64fea4-e9a8-47e7-8a3a-d01897fc81de_del complete#033[00m
Dec 13 03:24:24 np0005558241 neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e[284803]: [WARNING]  (284807) : Exiting Master process...
Dec 13 03:24:24 np0005558241 neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e[284803]: [ALERT]    (284807) : Current worker (284809) exited with code 143 (Terminated)
Dec 13 03:24:24 np0005558241 neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e[284803]: [WARNING]  (284807) : All workers exited. Exiting... (0)
Dec 13 03:24:24 np0005558241 systemd[1]: libpod-70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08.scope: Deactivated successfully.
Dec 13 03:24:24 np0005558241 podman[287667]: 2025-12-13 08:24:24.190865063 +0000 UTC m=+0.051904142 container died 70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 03:24:24 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08-userdata-shm.mount: Deactivated successfully.
Dec 13 03:24:24 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f58e74a911db8950501b6e1ba22b29e7f74e9f24881f0a836706699e6a74d9c3-merged.mount: Deactivated successfully.
Dec 13 03:24:24 np0005558241 podman[287667]: 2025-12-13 08:24:24.227671881 +0000 UTC m=+0.088710950 container cleanup 70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:24:24 np0005558241 systemd[1]: libpod-conmon-70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08.scope: Deactivated successfully.
Dec 13 03:24:24 np0005558241 podman[287698]: 2025-12-13 08:24:24.291490835 +0000 UTC m=+0.042294204 container remove 70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 03:24:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.298 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4c817ddd-bf43-447f-bf95-01766bfc8416]: (4, ('Sat Dec 13 08:24:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e (70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08)\n70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08\nSat Dec 13 08:24:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e (70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08)\n70e5e119b8b510783357d647ef1a7c0679860c63bfa75976d481301cdd12dc08\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.300 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c09a7a-270e-4ed6-a6a9-3f9ef758a7e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.301 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62193ff6-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.303 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:24 np0005558241 kernel: tap62193ff6-a0: left promiscuous mode
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.320 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.322 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2f880138-bc72-4b35-afc8-be60c2e17c7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.326 248514 DEBUG nova.compute.manager [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-unplugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.326 248514 DEBUG oslo_concurrency.lockutils [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.327 248514 DEBUG oslo_concurrency.lockutils [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.327 248514 DEBUG oslo_concurrency.lockutils [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.327 248514 DEBUG nova.compute.manager [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] No waiting events found dispatching network-vif-unplugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.327 248514 DEBUG nova.compute.manager [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-unplugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.328 248514 DEBUG nova.compute.manager [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-plugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.328 248514 DEBUG oslo_concurrency.lockutils [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.328 248514 DEBUG oslo_concurrency.lockutils [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.328 248514 DEBUG oslo_concurrency.lockutils [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.328 248514 DEBUG nova.compute.manager [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] No waiting events found dispatching network-vif-plugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.328 248514 WARNING nova.compute.manager [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received unexpected event network-vif-plugged-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.329 248514 DEBUG nova.compute.manager [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-unplugged-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.329 248514 DEBUG oslo_concurrency.lockutils [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.329 248514 DEBUG oslo_concurrency.lockutils [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.329 248514 DEBUG oslo_concurrency.lockutils [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.329 248514 DEBUG nova.compute.manager [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] No waiting events found dispatching network-vif-unplugged-b494f789-c137-45c5-9750-2bf0b43681ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.329 248514 WARNING nova.compute.manager [req-b456b8a0-7843-4d9f-b543-27110e8115e7 req-5aec6083-9feb-4c54-9aee-462a562c1203 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received unexpected event network-vif-unplugged-b494f789-c137-45c5-9750-2bf0b43681ad for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec 13 03:24:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.344 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e90fc71b-89ee-447a-9f0b-a3f8d7eb3dbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.346 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[518edbc1-bc50-4fb5-8df7-39757b3580e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.367 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[33157f67-3657-4cf3-9b59-e1f6f2fafdc3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675992, 'reachable_time': 27996, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287713, 'error': None, 'target': 'ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.369 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62193ff6-aaa1-401a-b1e0-512e67752a9e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:24:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:24.369 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2b82ac-fb40-4925-b540-daf801678ca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:24:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3806289067' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.469 248514 DEBUG oslo_concurrency.processutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.502 248514 DEBUG oslo_concurrency.processutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.621 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.746 248514 INFO nova.compute.manager [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Took 1.47 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.747 248514 DEBUG oslo.service.loopingcall [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.748 248514 DEBUG nova.compute.manager [-] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:24:24 np0005558241 nova_compute[248510]: 2025-12-13 08:24:24.749 248514 DEBUG nova.network.neutron [-] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:24:24 np0005558241 systemd[1]: run-netns-ovnmeta\x2d62193ff6\x2daaa1\x2d401a\x2db1e0\x2d512e67752a9e.mount: Deactivated successfully.
Dec 13 03:24:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:24:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:24:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/176819737' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.096 248514 DEBUG oslo_concurrency.processutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:24:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1777: 321 pgs: 321 active+clean; 326 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 157 op/s
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.098 248514 DEBUG nova.virt.libvirt.vif [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-370636454',display_name='tempest-InstanceActionsTestJSON-server-370636454',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-370636454',id=35,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:24:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8f78f312dfcc4df6ba40b7c8a4e1aa97',ramdisk_id='',reservation_id='r-b8os57ms',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1859862292',owner_user_name='tempest-InstanceActionsTestJSON-1859862292-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:24:23Z,user_data=None,user_id='6827fc2174b74c2a92803d852e87c70a',uuid=b3086cd3-fbaf-4f8e-bca2-162a0582d3a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.098 248514 DEBUG nova.network.os_vif_util [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converting VIF {"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.099 248514 DEBUG nova.network.os_vif_util [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.100 248514 DEBUG nova.objects.instance [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.121 248514 DEBUG nova.virt.libvirt.driver [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:24:25 np0005558241 nova_compute[248510]:  <uuid>b3086cd3-fbaf-4f8e-bca2-162a0582d3a4</uuid>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:  <name>instance-00000023</name>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <nova:name>tempest-InstanceActionsTestJSON-server-370636454</nova:name>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:24:23</nova:creationTime>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:24:25 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:        <nova:user uuid="6827fc2174b74c2a92803d852e87c70a">tempest-InstanceActionsTestJSON-1859862292-project-member</nova:user>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:        <nova:project uuid="8f78f312dfcc4df6ba40b7c8a4e1aa97">tempest-InstanceActionsTestJSON-1859862292</nova:project>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:        <nova:port uuid="b494f789-c137-45c5-9750-2bf0b43681ad">
Dec 13 03:24:25 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <entry name="serial">b3086cd3-fbaf-4f8e-bca2-162a0582d3a4</entry>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <entry name="uuid">b3086cd3-fbaf-4f8e-bca2-162a0582d3a4</entry>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk">
Dec 13 03:24:25 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:24:25 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_disk.config">
Dec 13 03:24:25 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:24:25 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:26:55:3f"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <target dev="tapb494f789-c1"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4/console.log" append="off"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <input type="keyboard" bus="usb"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:24:25 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:24:25 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:24:25 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:24:25 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.122 248514 DEBUG nova.virt.libvirt.driver [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.122 248514 DEBUG nova.virt.libvirt.driver [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.122 248514 DEBUG nova.virt.libvirt.vif [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-370636454',display_name='tempest-InstanceActionsTestJSON-server-370636454',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-370636454',id=35,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:24:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='8f78f312dfcc4df6ba40b7c8a4e1aa97',ramdisk_id='',reservation_id='r-b8os57ms',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1859862292',owner_user_name='tempest-InstanceActionsTestJSON-1859862292-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:24:23Z,user_data=None,user_id='6827fc2174b74c2a92803d852e87c70a',uuid=b3086cd3-fbaf-4f8e-bca2-162a0582d3a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.123 248514 DEBUG nova.network.os_vif_util [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converting VIF {"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.123 248514 DEBUG nova.network.os_vif_util [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.124 248514 DEBUG os_vif [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.124 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.124 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.125 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.128 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.128 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb494f789-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.129 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb494f789-c1, col_values=(('external_ids', {'iface-id': 'b494f789-c137-45c5-9750-2bf0b43681ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:55:3f', 'vm-uuid': 'b3086cd3-fbaf-4f8e-bca2-162a0582d3a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.130 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:25 np0005558241 NetworkManager[50376]: <info>  [1765614265.1317] manager: (tapb494f789-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.133 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.138 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.140 248514 INFO os_vif [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1')#033[00m
Dec 13 03:24:25 np0005558241 kernel: tapb494f789-c1: entered promiscuous mode
Dec 13 03:24:25 np0005558241 NetworkManager[50376]: <info>  [1765614265.2196] manager: (tapb494f789-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Dec 13 03:24:25 np0005558241 systemd-udevd[287563]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:24:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:25Z|00305|binding|INFO|Claiming lport b494f789-c137-45c5-9750-2bf0b43681ad for this chassis.
Dec 13 03:24:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:25Z|00306|binding|INFO|b494f789-c137-45c5-9750-2bf0b43681ad: Claiming fa:16:3e:26:55:3f 10.100.0.12
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.220 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:25 np0005558241 NetworkManager[50376]: <info>  [1765614265.2348] device (tapb494f789-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:24:25 np0005558241 NetworkManager[50376]: <info>  [1765614265.2355] device (tapb494f789-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:24:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:25Z|00307|binding|INFO|Setting lport b494f789-c137-45c5-9750-2bf0b43681ad ovn-installed in OVS
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.240 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.242 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:25Z|00308|binding|INFO|Setting lport b494f789-c137-45c5-9750-2bf0b43681ad up in Southbound
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.244 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:55:3f 10.100.0.12'], port_security=['fa:16:3e:26:55:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b3086cd3-fbaf-4f8e-bca2-162a0582d3a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f78f312dfcc4df6ba40b7c8a4e1aa97', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a9aebbe0-bcf2-4e40-aa62-10b8cea7c801', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2c3ff6a3-0731-41dd-8d72-42e18a11ea75, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b494f789-c137-45c5-9750-2bf0b43681ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.245 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b494f789-c137-45c5-9750-2bf0b43681ad in datapath 0740d1ee-47e1-4bdf-bdc4-2dafff999f03 bound to our chassis#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.247 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0740d1ee-47e1-4bdf-bdc4-2dafff999f03#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.261 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6cc9a630-fa8f-4085-850f-e1adb846ede0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.262 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0740d1ee-41 in ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.264 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0740d1ee-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.264 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[290f0751-6b94-4338-a0ed-3d53dba42d52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:25 np0005558241 systemd-machined[210538]: New machine qemu-41-instance-00000023.
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.265 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6e1cd87c-6166-48a9-a2af-bd9f334e11e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.277 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[c3971b62-b3a7-44b9-8719-45e589e81470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:25 np0005558241 systemd[1]: Started Virtual Machine qemu-41-instance-00000023.
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.304 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6badbc24-3102-4360-b6c2-b1ddd9d831c6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.339 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[305efb95-62b9-4f9a-8985-57d5d41c3db9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:25 np0005558241 NetworkManager[50376]: <info>  [1765614265.3462] manager: (tap0740d1ee-40): new Veth device (/org/freedesktop/NetworkManager/Devices/140)
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.346 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[14b4f90a-620b-4308-87ca-6b1cc30acc1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:25 np0005558241 systemd-udevd[287784]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.378 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[19ac91df-8a33-46b2-81c2-bbf59d52a6ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.382 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[be62647d-188e-4d35-9d8c-a28b89bf6e91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:25 np0005558241 NetworkManager[50376]: <info>  [1765614265.4129] device (tap0740d1ee-40): carrier: link connected
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.417 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7c89cfac-ae01-4e08-9f80-6531e6cacae4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.437 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4faae7-33ce-4089-873b-3e8ff5aca6e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0740d1ee-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:40:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 684262, 'reachable_time': 32953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287803, 'error': None, 'target': 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.455 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[55f8884f-22ff-4fa9-85ef-d72d633604c2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0a:4019'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 684262, 'tstamp': 684262}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287804, 'error': None, 'target': 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.478 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cc5fdc3e-a336-4ab5-bbd4-cff41115e38f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0740d1ee-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:40:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 684262, 'reachable_time': 32953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287805, 'error': None, 'target': 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.509 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[91ccd67d-d9e5-4c21-83dd-f381bee17da5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.572 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3181ec-8f2b-4e4d-bf23-31b0595e02fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.573 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0740d1ee-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.574 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.574 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0740d1ee-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:25 np0005558241 NetworkManager[50376]: <info>  [1765614265.5768] manager: (tap0740d1ee-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.576 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:25 np0005558241 kernel: tap0740d1ee-40: entered promiscuous mode
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.579 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.586 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0740d1ee-40, col_values=(('external_ids', {'iface-id': '57853c24-d10c-4ddd-b435-f78af259fd27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.588 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:25Z|00309|binding|INFO|Releasing lport 57853c24-d10c-4ddd-b435-f78af259fd27 from this chassis (sb_readonly=0)
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.589 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.594 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0740d1ee-47e1-4bdf-bdc4-2dafff999f03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0740d1ee-47e1-4bdf-bdc4-2dafff999f03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.603 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2a7a77-963d-4993-be85-9cde1ca1e740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:25 np0005558241 nova_compute[248510]: 2025-12-13 08:24:25.604 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.604 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-0740d1ee-47e1-4bdf-bdc4-2dafff999f03
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/0740d1ee-47e1-4bdf-bdc4-2dafff999f03.pid.haproxy
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 0740d1ee-47e1-4bdf-bdc4-2dafff999f03
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:25.605 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'env', 'PROCESS_TAG=haproxy-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0740d1ee-47e1-4bdf-bdc4-2dafff999f03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.034 248514 DEBUG nova.network.neutron [-] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.056 248514 INFO nova.compute.manager [-] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Took 3.09 seconds to deallocate network for instance.#033[00m
Dec 13 03:24:26 np0005558241 podman[287838]: 2025-12-13 08:24:26.068981002 +0000 UTC m=+0.065655131 container create 49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 13 03:24:26 np0005558241 systemd[1]: Started libpod-conmon-49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01.scope.
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.106 248514 DEBUG oslo_concurrency.lockutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.108 248514 DEBUG oslo_concurrency.lockutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:26 np0005558241 podman[287838]: 2025-12-13 08:24:26.031556038 +0000 UTC m=+0.028230187 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:24:26 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:24:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87d33e02833de22fe4b162634b9e49aaa76311ba51063cfa9b4f10999c837011/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:24:26 np0005558241 podman[287838]: 2025-12-13 08:24:26.170293742 +0000 UTC m=+0.166967861 container init 49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 03:24:26 np0005558241 podman[287838]: 2025-12-13 08:24:26.175580152 +0000 UTC m=+0.172254281 container start 49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.178 248514 DEBUG nova.network.neutron [-] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:26 np0005558241 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287853]: [NOTICE]   (287857) : New worker (287859) forked
Dec 13 03:24:26 np0005558241 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287853]: [NOTICE]   (287857) : Loading success.
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.213 248514 INFO nova.compute.manager [-] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Took 1.46 seconds to deallocate network for instance.#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.275 248514 DEBUG oslo_concurrency.lockutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.279 248514 DEBUG oslo_concurrency.processutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.312 248514 DEBUG nova.compute.manager [req-89e79db9-cd12-4da3-915c-090407c17d1e req-4309feb6-2eb2-44b0-8bb6-d9a01adc3bc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-vif-deleted-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.456 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.456 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.457 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.457 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.457 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] No waiting events found dispatching network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.457 248514 WARNING nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received unexpected event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.457 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-vif-unplugged-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.458 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.458 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.458 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.458 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] No waiting events found dispatching network-vif-unplugged-627622b8-ef54-4181-bd8d-e8e82650b143 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.458 248514 WARNING nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received unexpected event network-vif-unplugged-627622b8-ef54-4181-bd8d-e8e82650b143 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.458 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received event network-vif-plugged-627622b8-ef54-4181-bd8d-e8e82650b143 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.459 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.459 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.459 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.459 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] No waiting events found dispatching network-vif-plugged-627622b8-ef54-4181-bd8d-e8e82650b143 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.459 248514 WARNING nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Received unexpected event network-vif-plugged-627622b8-ef54-4181-bd8d-e8e82650b143 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.459 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.459 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.460 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.460 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.460 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] No waiting events found dispatching network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.460 248514 WARNING nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received unexpected event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.460 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.460 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.461 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.461 248514 DEBUG oslo_concurrency.lockutils [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.461 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] No waiting events found dispatching network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.461 248514 WARNING nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received unexpected event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.461 248514 DEBUG nova.compute.manager [req-15b65d02-c803-422e-b6ce-179e6d04e9f8 req-fcbd6528-15c2-4e9a-9039-25d5ff4756b2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Received event network-vif-deleted-e1eabc5e-9ed7-4b2e-ba64-11149b2f4043 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.573 248514 DEBUG nova.compute.manager [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.574 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.574 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614266.5731237, b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.575 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.580 248514 INFO nova.virt.libvirt.driver [-] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Instance rebooted successfully.#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.580 248514 DEBUG nova.compute.manager [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.738 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.741 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.745 248514 DEBUG oslo_concurrency.lockutils [None req-19e21e2b-52e5-47cb-9fec-2d3df6552efe 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.771 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614266.5742145, b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.772 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] VM Started (Lifecycle Event)#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.791 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.796 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:24:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:24:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4110202748' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.881 248514 DEBUG oslo_concurrency.processutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.888 248514 DEBUG nova.compute.provider_tree [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.908 248514 DEBUG nova.scheduler.client.report [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.937 248514 DEBUG oslo_concurrency.lockutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.940 248514 DEBUG oslo_concurrency.lockutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:26 np0005558241 nova_compute[248510]: 2025-12-13 08:24:26.968 248514 INFO nova.scheduler.client.report [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Deleted allocations for instance d503913e-a05e-47d4-9366-db4426b9aac1#033[00m
Dec 13 03:24:27 np0005558241 nova_compute[248510]: 2025-12-13 08:24:27.034 248514 DEBUG oslo_concurrency.lockutils [None req-93cffb94-0e21-4ce1-af9d-d495392f05cb ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "d503913e-a05e-47d4-9366-db4426b9aac1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:27 np0005558241 nova_compute[248510]: 2025-12-13 08:24:27.048 248514 DEBUG oslo_concurrency.processutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:24:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1778: 321 pgs: 321 active+clean; 326 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 437 KiB/s wr, 131 op/s
Dec 13 03:24:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:24:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/307190209' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:24:27 np0005558241 nova_compute[248510]: 2025-12-13 08:24:27.661 248514 DEBUG oslo_concurrency.processutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:24:27 np0005558241 nova_compute[248510]: 2025-12-13 08:24:27.667 248514 DEBUG nova.compute.provider_tree [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.037 248514 DEBUG nova.scheduler.client.report [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.082 248514 DEBUG oslo_concurrency.lockutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.088 248514 DEBUG oslo_concurrency.lockutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.088 248514 DEBUG oslo_concurrency.lockutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.089 248514 DEBUG oslo_concurrency.lockutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.089 248514 DEBUG oslo_concurrency.lockutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.089 248514 DEBUG oslo_concurrency.lockutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.091 248514 INFO nova.compute.manager [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Terminating instance#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.092 248514 DEBUG nova.compute.manager [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.093 248514 DEBUG oslo_concurrency.lockutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.093 248514 DEBUG oslo_concurrency.lockutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.094 248514 DEBUG oslo_concurrency.lockutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.094 248514 DEBUG oslo_concurrency.lockutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.094 248514 DEBUG oslo_concurrency.lockutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.096 248514 INFO nova.compute.manager [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Terminating instance#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.097 248514 DEBUG nova.compute.manager [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.124 248514 INFO nova.scheduler.client.report [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Deleted allocations for instance dc64fea4-e9a8-47e7-8a3a-d01897fc81de#033[00m
Dec 13 03:24:28 np0005558241 kernel: tapb494f789-c1 (unregistering): left promiscuous mode
Dec 13 03:24:28 np0005558241 NetworkManager[50376]: <info>  [1765614268.1332] device (tapb494f789-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:24:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:28Z|00310|binding|INFO|Releasing lport b494f789-c137-45c5-9750-2bf0b43681ad from this chassis (sb_readonly=0)
Dec 13 03:24:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:28Z|00311|binding|INFO|Setting lport b494f789-c137-45c5-9750-2bf0b43681ad down in Southbound
Dec 13 03:24:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:28Z|00312|binding|INFO|Removing iface tapb494f789-c1 ovn-installed in OVS
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.143 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:28 np0005558241 kernel: tapbc4158d8-49 (unregistering): left promiscuous mode
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.151 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:55:3f 10.100.0.12'], port_security=['fa:16:3e:26:55:3f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b3086cd3-fbaf-4f8e-bca2-162a0582d3a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8f78f312dfcc4df6ba40b7c8a4e1aa97', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a9aebbe0-bcf2-4e40-aa62-10b8cea7c801', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2c3ff6a3-0731-41dd-8d72-42e18a11ea75, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b494f789-c137-45c5-9750-2bf0b43681ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.152 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b494f789-c137-45c5-9750-2bf0b43681ad in datapath 0740d1ee-47e1-4bdf-bdc4-2dafff999f03 unbound from our chassis#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.154 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0740d1ee-47e1-4bdf-bdc4-2dafff999f03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:24:28 np0005558241 NetworkManager[50376]: <info>  [1765614268.1556] device (tapbc4158d8-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.158 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7472297c-b04d-4db7-8ebf-9c200166b59b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.158 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 namespace which is not needed anymore#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.159 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.166 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:28Z|00313|binding|INFO|Releasing lport bc4158d8-4963-4009-a434-0a0106941c9d from this chassis (sb_readonly=0)
Dec 13 03:24:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:28Z|00314|binding|INFO|Setting lport bc4158d8-4963-4009-a434-0a0106941c9d down in Southbound
Dec 13 03:24:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:28Z|00315|binding|INFO|Removing iface tapbc4158d8-49 ovn-installed in OVS
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.175 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:48:0d 10.100.0.6'], port_security=['fa:16:3e:37:48:0d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3b43a9c7-85e7-4558-bd2f-e4712882021e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ca92864-3b70-4794-9db1-fa08128cef92', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ea37c93820f34312bc386ea952ecd94f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd758160-971f-4f0e-a4ca-a13304d3c491', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=de6b8cad-43c6-46a9-8904-1d91035da1ec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=bc4158d8-4963-4009-a434-0a0106941c9d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.180 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:28 np0005558241 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000023.scope: Deactivated successfully.
Dec 13 03:24:28 np0005558241 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000023.scope: Consumed 2.894s CPU time.
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.196 248514 DEBUG oslo_concurrency.lockutils [None req-fc8b7121-4a2b-428d-b6db-0fb8122feff9 79d4b34b8bd3452cb5b8c0954166f397 06fbab937d6444558229b2351632e711 - - default default] Lock "dc64fea4-e9a8-47e7-8a3a-d01897fc81de" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:28 np0005558241 systemd-machined[210538]: Machine qemu-41-instance-00000023 terminated.
Dec 13 03:24:28 np0005558241 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Dec 13 03:24:28 np0005558241 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001e.scope: Consumed 16.494s CPU time.
Dec 13 03:24:28 np0005558241 systemd-machined[210538]: Machine qemu-35-instance-0000001e terminated.
Dec 13 03:24:28 np0005558241 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287853]: [NOTICE]   (287857) : haproxy version is 2.8.14-c23fe91
Dec 13 03:24:28 np0005558241 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287853]: [NOTICE]   (287857) : path to executable is /usr/sbin/haproxy
Dec 13 03:24:28 np0005558241 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287853]: [WARNING]  (287857) : Exiting Master process...
Dec 13 03:24:28 np0005558241 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287853]: [WARNING]  (287857) : Exiting Master process...
Dec 13 03:24:28 np0005558241 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287853]: [ALERT]    (287857) : Current worker (287859) exited with code 143 (Terminated)
Dec 13 03:24:28 np0005558241 neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03[287853]: [WARNING]  (287857) : All workers exited. Exiting... (0)
Dec 13 03:24:28 np0005558241 systemd[1]: libpod-49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01.scope: Deactivated successfully.
Dec 13 03:24:28 np0005558241 conmon[287853]: conmon 49118c3b6b0e483ae4f8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01.scope/container/memory.events
Dec 13 03:24:28 np0005558241 podman[287977]: 2025-12-13 08:24:28.304939371 +0000 UTC m=+0.050559438 container died 49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 03:24:28 np0005558241 kernel: tapbc4158d8-49: entered promiscuous mode
Dec 13 03:24:28 np0005558241 NetworkManager[50376]: <info>  [1765614268.3239] manager: (tapbc4158d8-49): new Tun device (/org/freedesktop/NetworkManager/Devices/142)
Dec 13 03:24:28 np0005558241 systemd-udevd[287791]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:24:28 np0005558241 kernel: tapbc4158d8-49 (unregistering): left promiscuous mode
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.336 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:28 np0005558241 NetworkManager[50376]: <info>  [1765614268.3375] manager: (tapb494f789-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Dec 13 03:24:28 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01-userdata-shm.mount: Deactivated successfully.
Dec 13 03:24:28 np0005558241 systemd[1]: var-lib-containers-storage-overlay-87d33e02833de22fe4b162634b9e49aaa76311ba51063cfa9b4f10999c837011-merged.mount: Deactivated successfully.
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.356 248514 INFO nova.virt.libvirt.driver [-] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Instance destroyed successfully.#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.356 248514 DEBUG nova.objects.instance [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lazy-loading 'resources' on Instance uuid 3b43a9c7-85e7-4558-bd2f-e4712882021e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:24:28 np0005558241 podman[287977]: 2025-12-13 08:24:28.357769785 +0000 UTC m=+0.103389832 container cleanup 49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.359 248514 INFO nova.virt.libvirt.driver [-] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Instance destroyed successfully.#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.359 248514 DEBUG nova.objects.instance [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lazy-loading 'resources' on Instance uuid b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:24:28 np0005558241 systemd[1]: libpod-conmon-49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01.scope: Deactivated successfully.
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.404 248514 DEBUG nova.virt.libvirt.vif [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-370636454',display_name='tempest-InstanceActionsTestJSON-server-370636454',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-370636454',id=35,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:24:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8f78f312dfcc4df6ba40b7c8a4e1aa97',ramdisk_id='',reservation_id='r-b8os57ms',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1859862292',owner_user_name='tempest-InstanceActionsTestJSON-1859862292-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:24:26Z,user_data=None,user_id='6827fc2174b74c2a92803d852e87c70a',uuid=b3086cd3-fbaf-4f8e-bca2-162a0582d3a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.405 248514 DEBUG nova.network.os_vif_util [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converting VIF {"id": "b494f789-c137-45c5-9750-2bf0b43681ad", "address": "fa:16:3e:26:55:3f", "network": {"id": "0740d1ee-47e1-4bdf-bdc4-2dafff999f03", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1795648852-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8f78f312dfcc4df6ba40b7c8a4e1aa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb494f789-c1", "ovs_interfaceid": "b494f789-c137-45c5-9750-2bf0b43681ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.405 248514 DEBUG nova.network.os_vif_util [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.405 248514 DEBUG os_vif [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.407 248514 DEBUG nova.virt.libvirt.vif [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:22:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2036693049',display_name='tempest-tempest.common.compute-instance-2036693049',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2036693049',id=30,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX50PKn+shUk8DzZNz/APZE89R2du7+zgXVnkICdrmMgQ/9rCbsHgGIxWlu+FpMHKoG7rEUWYuXpB+QKt86vKmD28Y4uYfVNU6aVSac2Mjbr5CIZFwhSBpGsIwCztMB5g==',key_name='tempest-keypair-1460029378',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:23:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ea37c93820f34312bc386ea952ecd94f',ramdisk_id='',reservation_id='r-nhq68swv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1051352616',owner_user_name='tempest-AttachInterfacesTestJSON-1051352616-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:23:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee4333dff699428ca3ae5201855ab430',uuid=3b43a9c7-85e7-4558-bd2f-e4712882021e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.407 248514 DEBUG nova.network.os_vif_util [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converting VIF {"id": "bc4158d8-4963-4009-a434-0a0106941c9d", "address": "fa:16:3e:37:48:0d", "network": {"id": "1ca92864-3b70-4794-9db1-fa08128cef92", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1986850472-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ea37c93820f34312bc386ea952ecd94f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc4158d8-49", "ovs_interfaceid": "bc4158d8-4963-4009-a434-0a0106941c9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.408 248514 DEBUG nova.network.os_vif_util [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:37:48:0d,bridge_name='br-int',has_traffic_filtering=True,id=bc4158d8-4963-4009-a434-0a0106941c9d,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4158d8-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.408 248514 DEBUG os_vif [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:48:0d,bridge_name='br-int',has_traffic_filtering=True,id=bc4158d8-4963-4009-a434-0a0106941c9d,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4158d8-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.409 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.409 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb494f789-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.411 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.413 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.415 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.416 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.416 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc4158d8-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.418 248514 INFO os_vif [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:55:3f,bridge_name='br-int',has_traffic_filtering=True,id=b494f789-c137-45c5-9750-2bf0b43681ad,network=Network(0740d1ee-47e1-4bdf-bdc4-2dafff999f03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb494f789-c1')#033[00m
Dec 13 03:24:28 np0005558241 podman[288019]: 2025-12-13 08:24:28.429285619 +0000 UTC m=+0.044701484 container remove 49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.435 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.436 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[34258883-0646-4d48-a27c-753451abee99]: (4, ('Sat Dec 13 08:24:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 (49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01)\n49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01\nSat Dec 13 08:24:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 (49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01)\n49118c3b6b0e483ae4f874df488b37c070a5ff34ccf465b43b439c58e7089a01\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.438 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eef3711f-13df-4524-b38c-1a0c913a52a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.438 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.439 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0740d1ee-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.440 248514 INFO os_vif [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:48:0d,bridge_name='br-int',has_traffic_filtering=True,id=bc4158d8-4963-4009-a434-0a0106941c9d,network=Network(1ca92864-3b70-4794-9db1-fa08128cef92),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc4158d8-49')#033[00m
Dec 13 03:24:28 np0005558241 kernel: tap0740d1ee-40: left promiscuous mode
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.459 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.465 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b9cad9cc-eeb8-414c-a1ba-502c5659bacb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.477 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d9de6e19-467f-460a-8238-49fc93db755c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.479 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[305fbe83-b991-4f1a-ab40-c597e5131744]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.497 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b503935f-3d7f-4eed-b19a-60ba7033b349]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 684254, 'reachable_time': 36671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288072, 'error': None, 'target': 'ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:28 np0005558241 systemd[1]: run-netns-ovnmeta\x2d0740d1ee\x2d47e1\x2d4bdf\x2dbdc4\x2d2dafff999f03.mount: Deactivated successfully.
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.501 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0740d1ee-47e1-4bdf-bdc4-2dafff999f03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.501 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[ac5e2aa0-298a-40d7-8167-7503c9d4ce32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.503 158419 INFO neutron.agent.ovn.metadata.agent [-] Port bc4158d8-4963-4009-a434-0a0106941c9d in datapath 1ca92864-3b70-4794-9db1-fa08128cef92 unbound from our chassis#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.505 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ca92864-3b70-4794-9db1-fa08128cef92, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.506 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[af6f24d7-034e-4dda-b75c-7b632e5bad05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.507 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 namespace which is not needed anymore#033[00m
Dec 13 03:24:28 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[284589]: [NOTICE]   (284593) : haproxy version is 2.8.14-c23fe91
Dec 13 03:24:28 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[284589]: [NOTICE]   (284593) : path to executable is /usr/sbin/haproxy
Dec 13 03:24:28 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[284589]: [WARNING]  (284593) : Exiting Master process...
Dec 13 03:24:28 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[284589]: [ALERT]    (284593) : Current worker (284595) exited with code 143 (Terminated)
Dec 13 03:24:28 np0005558241 neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92[284589]: [WARNING]  (284593) : All workers exited. Exiting... (0)
Dec 13 03:24:28 np0005558241 systemd[1]: libpod-e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b.scope: Deactivated successfully.
Dec 13 03:24:28 np0005558241 podman[288089]: 2025-12-13 08:24:28.66924186 +0000 UTC m=+0.058127135 container died e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 03:24:28 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b-userdata-shm.mount: Deactivated successfully.
Dec 13 03:24:28 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6b4067aa9b3f2b0ef0dd9859d3a57acc4d914a6c704c9a2c14d2ad026817e68f-merged.mount: Deactivated successfully.
Dec 13 03:24:28 np0005558241 podman[288089]: 2025-12-13 08:24:28.710898738 +0000 UTC m=+0.099784013 container cleanup e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.733 248514 INFO nova.virt.libvirt.driver [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Deleting instance files /var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_del#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.734 248514 INFO nova.virt.libvirt.driver [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Deletion of /var/lib/nova/instances/b3086cd3-fbaf-4f8e-bca2-162a0582d3a4_del complete#033[00m
Dec 13 03:24:28 np0005558241 systemd[1]: libpod-conmon-e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b.scope: Deactivated successfully.
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.748 248514 INFO nova.virt.libvirt.driver [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Deleting instance files /var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e_del#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.750 248514 INFO nova.virt.libvirt.driver [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Deletion of /var/lib/nova/instances/3b43a9c7-85e7-4558-bd2f-e4712882021e_del complete#033[00m
Dec 13 03:24:28 np0005558241 podman[288119]: 2025-12-13 08:24:28.777262275 +0000 UTC m=+0.041354761 container remove e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.784 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[31577268-1040-4406-8704-e43c3565d478]: (4, ('Sat Dec 13 08:24:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 (e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b)\ne5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b\nSat Dec 13 08:24:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 (e5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b)\ne5301f5fa00299e4e9e468fb461a2f8e630da14f65fb807252f6d418a212361b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.787 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3c955946-729a-45b8-b8b4-03aab6a3433a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.788 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ca92864-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.791 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:28 np0005558241 kernel: tap1ca92864-30: left promiscuous mode
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.805 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.809 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5e954ec9-9fbd-4820-8f7c-399b4c518511]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.833 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8d419911-ab76-498c-952c-fc13b17677da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.835 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8942a49b-592f-4ab1-bd48-e5584f5b7c7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.843 248514 INFO nova.compute.manager [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.843 248514 DEBUG oslo.service.loopingcall [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.843 248514 DEBUG nova.compute.manager [-] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.844 248514 DEBUG nova.network.neutron [-] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.851 248514 DEBUG nova.compute.manager [req-890a5f9e-be97-4574-871e-1d38c6d1c929 req-9245e84f-373f-4a93-8ddb-1c09adba64f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-unplugged-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.851 248514 DEBUG oslo_concurrency.lockutils [req-890a5f9e-be97-4574-871e-1d38c6d1c929 req-9245e84f-373f-4a93-8ddb-1c09adba64f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.851 248514 DEBUG oslo_concurrency.lockutils [req-890a5f9e-be97-4574-871e-1d38c6d1c929 req-9245e84f-373f-4a93-8ddb-1c09adba64f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.852 248514 DEBUG oslo_concurrency.lockutils [req-890a5f9e-be97-4574-871e-1d38c6d1c929 req-9245e84f-373f-4a93-8ddb-1c09adba64f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.852 248514 DEBUG nova.compute.manager [req-890a5f9e-be97-4574-871e-1d38c6d1c929 req-9245e84f-373f-4a93-8ddb-1c09adba64f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] No waiting events found dispatching network-vif-unplugged-bc4158d8-4963-4009-a434-0a0106941c9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.852 248514 DEBUG nova.compute.manager [req-890a5f9e-be97-4574-871e-1d38c6d1c929 req-9245e84f-373f-4a93-8ddb-1c09adba64f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-unplugged-bc4158d8-4963-4009-a434-0a0106941c9d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.855 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[86047300-5490-4e64-b1fd-9b761187f19e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675787, 'reachable_time': 19581, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288134, 'error': None, 'target': 'ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.858 248514 INFO nova.compute.manager [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.858 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1ca92864-3b70-4794-9db1-fa08128cef92 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:24:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:28.858 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b54a160f-3235-4e04-bb7b-05ec7ed00208]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.858 248514 DEBUG oslo.service.loopingcall [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.858 248514 DEBUG nova.compute.manager [-] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.859 248514 DEBUG nova.network.neutron [-] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.942 248514 DEBUG nova.compute.manager [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-unplugged-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.943 248514 DEBUG oslo_concurrency.lockutils [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.943 248514 DEBUG oslo_concurrency.lockutils [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.943 248514 DEBUG oslo_concurrency.lockutils [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.944 248514 DEBUG nova.compute.manager [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] No waiting events found dispatching network-vif-unplugged-b494f789-c137-45c5-9750-2bf0b43681ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.944 248514 DEBUG nova.compute.manager [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-unplugged-b494f789-c137-45c5-9750-2bf0b43681ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.944 248514 DEBUG nova.compute.manager [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.944 248514 DEBUG oslo_concurrency.lockutils [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.945 248514 DEBUG oslo_concurrency.lockutils [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.945 248514 DEBUG oslo_concurrency.lockutils [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.946 248514 DEBUG nova.compute.manager [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] No waiting events found dispatching network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:28 np0005558241 nova_compute[248510]: 2025-12-13 08:24:28.946 248514 WARNING nova.compute.manager [req-a84211c6-9376-4b29-8f8a-55a13b398c11 req-b9a74d83-1748-41d2-87f1-f079301a081d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received unexpected event network-vif-plugged-b494f789-c137-45c5-9750-2bf0b43681ad for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:24:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1779: 321 pgs: 321 active+clean; 150 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 440 KiB/s wr, 268 op/s
Dec 13 03:24:29 np0005558241 systemd[1]: run-netns-ovnmeta\x2d1ca92864\x2d3b70\x2d4794\x2d9db1\x2dfa08128cef92.mount: Deactivated successfully.
Dec 13 03:24:29 np0005558241 nova_compute[248510]: 2025-12-13 08:24:29.624 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.043687) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614270043886, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1220, "num_deletes": 259, "total_data_size": 1804466, "memory_usage": 1832064, "flush_reason": "Manual Compaction"}
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614270058963, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1761195, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32820, "largest_seqno": 34039, "table_properties": {"data_size": 1755397, "index_size": 3065, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12854, "raw_average_key_size": 19, "raw_value_size": 1743488, "raw_average_value_size": 2690, "num_data_blocks": 137, "num_entries": 648, "num_filter_entries": 648, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765614163, "oldest_key_time": 1765614163, "file_creation_time": 1765614270, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 15316 microseconds, and 7295 cpu microseconds.
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.059025) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1761195 bytes OK
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.059060) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.060392) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.060409) EVENT_LOG_v1 {"time_micros": 1765614270060404, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.060438) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1798811, prev total WAL file size 1798811, number of live WAL files 2.
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.061480) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323539' seq:0, type:0; will stop at (end)
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1719KB)], [71(9305KB)]
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614270061597, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 11289796, "oldest_snapshot_seqno": -1}
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5873 keys, 11185935 bytes, temperature: kUnknown
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614270169815, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 11185935, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11143298, "index_size": 26847, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14725, "raw_key_size": 148389, "raw_average_key_size": 25, "raw_value_size": 11034340, "raw_average_value_size": 1878, "num_data_blocks": 1100, "num_entries": 5873, "num_filter_entries": 5873, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765614270, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.170325) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 11185935 bytes
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.171940) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.2 rd, 103.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 9.1 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(12.8) write-amplify(6.4) OK, records in: 6408, records dropped: 535 output_compression: NoCompression
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.171962) EVENT_LOG_v1 {"time_micros": 1765614270171953, "job": 40, "event": "compaction_finished", "compaction_time_micros": 108399, "compaction_time_cpu_micros": 27554, "output_level": 6, "num_output_files": 1, "total_output_size": 11185935, "num_input_records": 6408, "num_output_records": 5873, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614270172449, "job": 40, "event": "table_file_deletion", "file_number": 73}
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614270174190, "job": 40, "event": "table_file_deletion", "file_number": 71}
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.061344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.174297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.174306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.174308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.174309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:24:30 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:24:30.174311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:24:30 np0005558241 nova_compute[248510]: 2025-12-13 08:24:30.823 248514 DEBUG nova.network.neutron [-] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:30 np0005558241 nova_compute[248510]: 2025-12-13 08:24:30.839 248514 INFO nova.compute.manager [-] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Took 1.98 seconds to deallocate network for instance.#033[00m
Dec 13 03:24:30 np0005558241 nova_compute[248510]: 2025-12-13 08:24:30.996 248514 DEBUG oslo_concurrency.lockutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:30 np0005558241 nova_compute[248510]: 2025-12-13 08:24:30.996 248514 DEBUG oslo_concurrency.lockutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.000 248514 DEBUG nova.compute.manager [req-5690185b-7dd9-4bfb-9728-b383a2070ada req-1dabc641-c9f6-4882-8dd3-501858dce121 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-plugged-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.001 248514 DEBUG oslo_concurrency.lockutils [req-5690185b-7dd9-4bfb-9728-b383a2070ada req-1dabc641-c9f6-4882-8dd3-501858dce121 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.001 248514 DEBUG oslo_concurrency.lockutils [req-5690185b-7dd9-4bfb-9728-b383a2070ada req-1dabc641-c9f6-4882-8dd3-501858dce121 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.002 248514 DEBUG oslo_concurrency.lockutils [req-5690185b-7dd9-4bfb-9728-b383a2070ada req-1dabc641-c9f6-4882-8dd3-501858dce121 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.002 248514 DEBUG nova.compute.manager [req-5690185b-7dd9-4bfb-9728-b383a2070ada req-1dabc641-c9f6-4882-8dd3-501858dce121 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] No waiting events found dispatching network-vif-plugged-bc4158d8-4963-4009-a434-0a0106941c9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.002 248514 WARNING nova.compute.manager [req-5690185b-7dd9-4bfb-9728-b383a2070ada req-1dabc641-c9f6-4882-8dd3-501858dce121 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received unexpected event network-vif-plugged-bc4158d8-4963-4009-a434-0a0106941c9d for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.031 248514 DEBUG nova.objects.instance [None req-49e0f32a-b32e-4b6a-baa9-cb14e294b9f0 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lazy-loading 'flavor' on Instance uuid ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.076 248514 DEBUG oslo_concurrency.lockutils [None req-49e0f32a-b32e-4b6a-baa9-cb14e294b9f0 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.076 248514 DEBUG oslo_concurrency.lockutils [None req-49e0f32a-b32e-4b6a-baa9-cb14e294b9f0 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquired lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:24:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1780: 321 pgs: 321 active+clean; 150 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 177 op/s
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.105 248514 DEBUG nova.compute.manager [req-0a54e2d4-f715-4d40-b60c-775936225170 req-f8680226-b608-426d-981c-d7a2eb2662ee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Received event network-vif-deleted-b494f789-c137-45c5-9750-2bf0b43681ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.114 248514 DEBUG nova.network.neutron [-] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.121 248514 DEBUG oslo_concurrency.processutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.195 248514 INFO nova.compute.manager [-] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Took 2.35 seconds to deallocate network for instance.#033[00m
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.307 248514 DEBUG oslo_concurrency.lockutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:24:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1653095254' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.674 248514 DEBUG oslo_concurrency.processutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.681 248514 DEBUG nova.compute.provider_tree [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.712 248514 DEBUG nova.scheduler.client.report [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.782 248514 DEBUG oslo_concurrency.lockutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.785 248514 DEBUG oslo_concurrency.lockutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.835 248514 INFO nova.scheduler.client.report [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Deleted allocations for instance b3086cd3-fbaf-4f8e-bca2-162a0582d3a4#033[00m
Dec 13 03:24:31 np0005558241 nova_compute[248510]: 2025-12-13 08:24:31.900 248514 DEBUG oslo_concurrency.processutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:24:32 np0005558241 nova_compute[248510]: 2025-12-13 08:24:32.024 248514 DEBUG oslo_concurrency.lockutils [None req-3f90de0c-a7af-4b5b-aece-f626f031055f 6827fc2174b74c2a92803d852e87c70a 8f78f312dfcc4df6ba40b7c8a4e1aa97 - - default default] Lock "b3086cd3-fbaf-4f8e-bca2-162a0582d3a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:24:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2648067625' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:24:32 np0005558241 nova_compute[248510]: 2025-12-13 08:24:32.468 248514 DEBUG oslo_concurrency.processutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:24:32 np0005558241 nova_compute[248510]: 2025-12-13 08:24:32.476 248514 DEBUG nova.compute.provider_tree [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:24:32 np0005558241 nova_compute[248510]: 2025-12-13 08:24:32.527 248514 DEBUG nova.scheduler.client.report [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:24:32 np0005558241 nova_compute[248510]: 2025-12-13 08:24:32.697 248514 DEBUG oslo_concurrency.lockutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:32 np0005558241 nova_compute[248510]: 2025-12-13 08:24:32.751 248514 INFO nova.scheduler.client.report [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Deleted allocations for instance 3b43a9c7-85e7-4558-bd2f-e4712882021e#033[00m
Dec 13 03:24:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1781: 321 pgs: 321 active+clean; 121 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 28 KiB/s wr, 193 op/s
Dec 13 03:24:33 np0005558241 nova_compute[248510]: 2025-12-13 08:24:33.180 248514 DEBUG oslo_concurrency.lockutils [None req-76493eb2-8340-490f-8934-46e7e5f49b9b ee4333dff699428ca3ae5201855ab430 ea37c93820f34312bc386ea952ecd94f - - default default] Lock "3b43a9c7-85e7-4558-bd2f-e4712882021e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:33 np0005558241 nova_compute[248510]: 2025-12-13 08:24:33.209 248514 DEBUG nova.network.neutron [None req-49e0f32a-b32e-4b6a-baa9-cb14e294b9f0 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:24:33 np0005558241 nova_compute[248510]: 2025-12-13 08:24:33.344 248514 DEBUG nova.compute.manager [req-077b5d46-edde-4534-9528-d243472b1ea9 req-9361c0a8-3e44-40db-812c-314932911455 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Received event network-vif-deleted-bc4158d8-4963-4009-a434-0a0106941c9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:33 np0005558241 nova_compute[248510]: 2025-12-13 08:24:33.419 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:33 np0005558241 nova_compute[248510]: 2025-12-13 08:24:33.428 248514 DEBUG nova.compute.manager [req-4f243dac-36e5-447e-a06f-64c09cec7cdd req-f7fb7687-6f42-4c77-b2aa-a7f3826b1667 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-changed-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:33 np0005558241 nova_compute[248510]: 2025-12-13 08:24:33.429 248514 DEBUG nova.compute.manager [req-4f243dac-36e5-447e-a06f-64c09cec7cdd req-f7fb7687-6f42-4c77-b2aa-a7f3826b1667 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Refreshing instance network info cache due to event network-changed-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:24:33 np0005558241 nova_compute[248510]: 2025-12-13 08:24:33.429 248514 DEBUG oslo_concurrency.lockutils [req-4f243dac-36e5-447e-a06f-64c09cec7cdd req-f7fb7687-6f42-4c77-b2aa-a7f3826b1667 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:24:34 np0005558241 nova_compute[248510]: 2025-12-13 08:24:34.626 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:34 np0005558241 nova_compute[248510]: 2025-12-13 08:24:34.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:24:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:24:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1782: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 174 op/s
Dec 13 03:24:35 np0005558241 nova_compute[248510]: 2025-12-13 08:24:35.480 248514 DEBUG nova.network.neutron [None req-49e0f32a-b32e-4b6a-baa9-cb14e294b9f0 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updating instance_info_cache with network_info: [{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:35 np0005558241 nova_compute[248510]: 2025-12-13 08:24:35.728 248514 DEBUG oslo_concurrency.lockutils [None req-49e0f32a-b32e-4b6a-baa9-cb14e294b9f0 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Releasing lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:24:35 np0005558241 nova_compute[248510]: 2025-12-13 08:24:35.729 248514 DEBUG nova.compute.manager [None req-49e0f32a-b32e-4b6a-baa9-cb14e294b9f0 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Dec 13 03:24:35 np0005558241 nova_compute[248510]: 2025-12-13 08:24:35.729 248514 DEBUG nova.compute.manager [None req-49e0f32a-b32e-4b6a-baa9-cb14e294b9f0 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] network_info to inject: |[{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Dec 13 03:24:35 np0005558241 nova_compute[248510]: 2025-12-13 08:24:35.732 248514 DEBUG oslo_concurrency.lockutils [req-4f243dac-36e5-447e-a06f-64c09cec7cdd req-f7fb7687-6f42-4c77-b2aa-a7f3826b1667 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:24:35 np0005558241 nova_compute[248510]: 2025-12-13 08:24:35.732 248514 DEBUG nova.network.neutron [req-4f243dac-36e5-447e-a06f-64c09cec7cdd req-f7fb7687-6f42-4c77-b2aa-a7f3826b1667 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Refreshing network info cache for port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:24:36 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:36Z|00316|binding|INFO|Releasing lport 54f0e9c1-d2c9-4d7a-b554-d7af88f55e22 from this chassis (sb_readonly=0)
Dec 13 03:24:36 np0005558241 nova_compute[248510]: 2025-12-13 08:24:36.570 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:36 np0005558241 nova_compute[248510]: 2025-12-13 08:24:36.730 248514 DEBUG nova.objects.instance [None req-e3dc4e30-670c-4715-b1e2-3648861831ca 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lazy-loading 'flavor' on Instance uuid ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:24:36 np0005558241 nova_compute[248510]: 2025-12-13 08:24:36.757 248514 DEBUG oslo_concurrency.lockutils [None req-e3dc4e30-670c-4715-b1e2-3648861831ca 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:24:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1783: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.5 KiB/s wr, 153 op/s
Dec 13 03:24:37 np0005558241 nova_compute[248510]: 2025-12-13 08:24:37.541 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614262.5397716, d503913e-a05e-47d4-9366-db4426b9aac1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:24:37 np0005558241 nova_compute[248510]: 2025-12-13 08:24:37.542 248514 INFO nova.compute.manager [-] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:24:37 np0005558241 nova_compute[248510]: 2025-12-13 08:24:37.662 248514 DEBUG nova.compute.manager [None req-90b003fe-38c0-41ce-acee-a3e89a666ad9 - - - - - -] [instance: d503913e-a05e-47d4-9366-db4426b9aac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:24:38 np0005558241 nova_compute[248510]: 2025-12-13 08:24:38.422 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:38 np0005558241 nova_compute[248510]: 2025-12-13 08:24:38.722 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614263.7212303, dc64fea4-e9a8-47e7-8a3a-d01897fc81de => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:24:38 np0005558241 nova_compute[248510]: 2025-12-13 08:24:38.722 248514 INFO nova.compute.manager [-] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:24:38 np0005558241 nova_compute[248510]: 2025-12-13 08:24:38.760 248514 DEBUG nova.compute.manager [None req-7e0db656-f419-441d-97f9-d1c11f6737c5 - - - - - -] [instance: dc64fea4-e9a8-47e7-8a3a-d01897fc81de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:24:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1784: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.5 KiB/s wr, 153 op/s
Dec 13 03:24:39 np0005558241 nova_compute[248510]: 2025-12-13 08:24:39.337 248514 DEBUG nova.network.neutron [req-4f243dac-36e5-447e-a06f-64c09cec7cdd req-f7fb7687-6f42-4c77-b2aa-a7f3826b1667 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updated VIF entry in instance network info cache for port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:24:39 np0005558241 nova_compute[248510]: 2025-12-13 08:24:39.337 248514 DEBUG nova.network.neutron [req-4f243dac-36e5-447e-a06f-64c09cec7cdd req-f7fb7687-6f42-4c77-b2aa-a7f3826b1667 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updating instance_info_cache with network_info: [{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:39 np0005558241 nova_compute[248510]: 2025-12-13 08:24:39.366 248514 DEBUG oslo_concurrency.lockutils [req-4f243dac-36e5-447e-a06f-64c09cec7cdd req-f7fb7687-6f42-4c77-b2aa-a7f3826b1667 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:24:39 np0005558241 nova_compute[248510]: 2025-12-13 08:24:39.369 248514 DEBUG oslo_concurrency.lockutils [None req-e3dc4e30-670c-4715-b1e2-3648861831ca 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquired lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:24:39 np0005558241 nova_compute[248510]: 2025-12-13 08:24:39.665 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:39 np0005558241 nova_compute[248510]: 2025-12-13 08:24:39.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:24:39 np0005558241 nova_compute[248510]: 2025-12-13 08:24:39.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:24:39 np0005558241 nova_compute[248510]: 2025-12-13 08:24:39.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:24:39 np0005558241 nova_compute[248510]: 2025-12-13 08:24:39.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:24:39 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:39Z|00317|binding|INFO|Releasing lport 54f0e9c1-d2c9-4d7a-b554-d7af88f55e22 from this chassis (sb_readonly=0)
Dec 13 03:24:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:24:40 np0005558241 nova_compute[248510]: 2025-12-13 08:24:40.047 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:24:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:24:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:24:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:24:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:24:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:24:40 np0005558241 nova_compute[248510]: 2025-12-13 08:24:40.294 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:24:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1785: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 3.0 KiB/s wr, 15 op/s
Dec 13 03:24:41 np0005558241 nova_compute[248510]: 2025-12-13 08:24:41.645 248514 DEBUG nova.network.neutron [None req-e3dc4e30-670c-4715-b1e2-3648861831ca 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:24:41 np0005558241 nova_compute[248510]: 2025-12-13 08:24:41.947 248514 DEBUG nova.compute.manager [req-080f21c2-ac6f-49c5-bec5-579e1b0196d1 req-ee04e2d9-1c7d-41b7-ab45-2e712847a6a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-changed-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:41 np0005558241 nova_compute[248510]: 2025-12-13 08:24:41.947 248514 DEBUG nova.compute.manager [req-080f21c2-ac6f-49c5-bec5-579e1b0196d1 req-ee04e2d9-1c7d-41b7-ab45-2e712847a6a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Refreshing instance network info cache due to event network-changed-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:24:41 np0005558241 nova_compute[248510]: 2025-12-13 08:24:41.948 248514 DEBUG oslo_concurrency.lockutils [req-080f21c2-ac6f-49c5-bec5-579e1b0196d1 req-ee04e2d9-1c7d-41b7-ab45-2e712847a6a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:24:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1786: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 3.0 KiB/s wr, 15 op/s
Dec 13 03:24:43 np0005558241 nova_compute[248510]: 2025-12-13 08:24:43.350 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614268.348804, 3b43a9c7-85e7-4558-bd2f-e4712882021e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:24:43 np0005558241 nova_compute[248510]: 2025-12-13 08:24:43.351 248514 INFO nova.compute.manager [-] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:24:43 np0005558241 nova_compute[248510]: 2025-12-13 08:24:43.356 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614268.3554, b3086cd3-fbaf-4f8e-bca2-162a0582d3a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:24:43 np0005558241 nova_compute[248510]: 2025-12-13 08:24:43.356 248514 INFO nova.compute.manager [-] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:24:43 np0005558241 nova_compute[248510]: 2025-12-13 08:24:43.424 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:43 np0005558241 nova_compute[248510]: 2025-12-13 08:24:43.433 248514 DEBUG nova.compute.manager [None req-591bbb0f-b706-43e3-8e1a-dcc3cdec21b7 - - - - - -] [instance: 3b43a9c7-85e7-4558-bd2f-e4712882021e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:24:43 np0005558241 nova_compute[248510]: 2025-12-13 08:24:43.436 248514 DEBUG nova.compute.manager [None req-36d827d1-7b3a-4920-adb2-6ae593d450d6 - - - - - -] [instance: b3086cd3-fbaf-4f8e-bca2-162a0582d3a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:24:43 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:43Z|00318|binding|INFO|Releasing lport 54f0e9c1-d2c9-4d7a-b554-d7af88f55e22 from this chassis (sb_readonly=0)
Dec 13 03:24:43 np0005558241 nova_compute[248510]: 2025-12-13 08:24:43.794 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:44 np0005558241 nova_compute[248510]: 2025-12-13 08:24:44.669 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:44 np0005558241 nova_compute[248510]: 2025-12-13 08:24:44.879 248514 DEBUG nova.network.neutron [None req-e3dc4e30-670c-4715-b1e2-3648861831ca 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updating instance_info_cache with network_info: [{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:44 np0005558241 podman[288181]: 2025-12-13 08:24:44.972139299 +0000 UTC m=+0.052725982 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Dec 13 03:24:44 np0005558241 podman[288180]: 2025-12-13 08:24:44.981444018 +0000 UTC m=+0.066735707 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 13 03:24:45 np0005558241 podman[288179]: 2025-12-13 08:24:45.010128746 +0000 UTC m=+0.097782844 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:24:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:24:45 np0005558241 nova_compute[248510]: 2025-12-13 08:24:45.082 248514 DEBUG oslo_concurrency.lockutils [None req-e3dc4e30-670c-4715-b1e2-3648861831ca 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Releasing lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:24:45 np0005558241 nova_compute[248510]: 2025-12-13 08:24:45.083 248514 DEBUG nova.compute.manager [None req-e3dc4e30-670c-4715-b1e2-3648861831ca 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Dec 13 03:24:45 np0005558241 nova_compute[248510]: 2025-12-13 08:24:45.083 248514 DEBUG nova.compute.manager [None req-e3dc4e30-670c-4715-b1e2-3648861831ca 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] network_info to inject: |[{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Dec 13 03:24:45 np0005558241 nova_compute[248510]: 2025-12-13 08:24:45.086 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:24:45 np0005558241 nova_compute[248510]: 2025-12-13 08:24:45.087 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:24:45 np0005558241 nova_compute[248510]: 2025-12-13 08:24:45.087 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:24:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1787: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.308 248514 DEBUG oslo_concurrency.lockutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.309 248514 DEBUG oslo_concurrency.lockutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.309 248514 DEBUG oslo_concurrency.lockutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.309 248514 DEBUG oslo_concurrency.lockutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.309 248514 DEBUG oslo_concurrency.lockutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.310 248514 INFO nova.compute.manager [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Terminating instance#033[00m
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.311 248514 DEBUG nova.compute.manager [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:24:46 np0005558241 kernel: tap5d8e1c45-4a (unregistering): left promiscuous mode
Dec 13 03:24:46 np0005558241 NetworkManager[50376]: <info>  [1765614286.3672] device (tap5d8e1c45-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:24:46 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:46Z|00319|binding|INFO|Releasing lport 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 from this chassis (sb_readonly=0)
Dec 13 03:24:46 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:46Z|00320|binding|INFO|Setting lport 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 down in Southbound
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.374 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:46 np0005558241 ovn_controller[148476]: 2025-12-13T08:24:46Z|00321|binding|INFO|Removing iface tap5d8e1c45-4a ovn-installed in OVS
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.378 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.394 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.420 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:58:17 10.100.0.7'], port_security=['fa:16:3e:83:58:17 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e08cb57c-0bd2-4c88-a4f8-e9d9be925301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2be5ed2a3b1a405bb6891ecdc5cba68c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '16148d83-2b30-49dd-9926-d0fb6490d2c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=615ba7d0-57bb-42d2-948a-6426e9af82d9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=5d8e1c45-4a7d-4fab-9d17-2bb5837e6134) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:24:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.422 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 in datapath e08cb57c-0bd2-4c88-a4f8-e9d9be925301 unbound from our chassis#033[00m
Dec 13 03:24:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.425 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e08cb57c-0bd2-4c88-a4f8-e9d9be925301, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:24:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.426 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[53b865f3-f7bf-4ae1-aa02-47c7cb79d4ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.427 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301 namespace which is not needed anymore#033[00m
Dec 13 03:24:46 np0005558241 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000022.scope: Deactivated successfully.
Dec 13 03:24:46 np0005558241 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000022.scope: Consumed 14.306s CPU time.
Dec 13 03:24:46 np0005558241 systemd-machined[210538]: Machine qemu-39-instance-00000022 terminated.
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.548 248514 INFO nova.virt.libvirt.driver [-] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Instance destroyed successfully.#033[00m
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.549 248514 DEBUG nova.objects.instance [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lazy-loading 'resources' on Instance uuid ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:24:46 np0005558241 neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301[286207]: [NOTICE]   (286217) : haproxy version is 2.8.14-c23fe91
Dec 13 03:24:46 np0005558241 neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301[286207]: [NOTICE]   (286217) : path to executable is /usr/sbin/haproxy
Dec 13 03:24:46 np0005558241 neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301[286207]: [WARNING]  (286217) : Exiting Master process...
Dec 13 03:24:46 np0005558241 neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301[286207]: [ALERT]    (286217) : Current worker (286236) exited with code 143 (Terminated)
Dec 13 03:24:46 np0005558241 neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301[286207]: [WARNING]  (286217) : All workers exited. Exiting... (0)
Dec 13 03:24:46 np0005558241 systemd[1]: libpod-502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef.scope: Deactivated successfully.
Dec 13 03:24:46 np0005558241 podman[288267]: 2025-12-13 08:24:46.595607246 +0000 UTC m=+0.059727495 container died 502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:24:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef-userdata-shm.mount: Deactivated successfully.
Dec 13 03:24:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c545acf84645020cc8882e401df567b891cd88033c1b634e5d68fb95026bcdfb-merged.mount: Deactivated successfully.
Dec 13 03:24:46 np0005558241 podman[288267]: 2025-12-13 08:24:46.641534299 +0000 UTC m=+0.105654528 container cleanup 502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 03:24:46 np0005558241 systemd[1]: libpod-conmon-502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef.scope: Deactivated successfully.
Dec 13 03:24:46 np0005558241 podman[288306]: 2025-12-13 08:24:46.722143047 +0000 UTC m=+0.055095010 container remove 502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 03:24:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.728 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c25c7faf-c8fe-4b02-b5ea-db6a5fff5418]: (4, ('Sat Dec 13 08:24:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301 (502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef)\n502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef\nSat Dec 13 08:24:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301 (502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef)\n502ba0f3e4e06223d4015f679aeeca00923dba8f2965021acb53b318d8cc83ef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.730 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1e40a0b8-cad8-4983-af68-d84c3f7d320a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.731 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape08cb57c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.733 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:46 np0005558241 kernel: tape08cb57c-00: left promiscuous mode
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.751 248514 DEBUG nova.virt.libvirt.vif [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:23:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1901007496',display_name='tempest-AttachInterfacesUnderV243Test-server-1901007496',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1901007496',id=34,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAtqDaZq3IK7Bvm/s6fqCH+TSHLKWsERX0aPeV408BGJSMsRQoO1UjptArZn77j735/fg+c2goyKkkvVN7UQeehgaqDzhHMveiUhv8vzTex1upUSSOpKWfKRhsOR5NuVjA==',key_name='tempest-keypair-862524999',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:24:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2be5ed2a3b1a405bb6891ecdc5cba68c',ramdisk_id='',reservation_id='r-l244eynx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1008670327',owner_user_name='tempest-AttachInterfacesUnderV243Test-1008670327-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:24:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5faa7317a5cd4b748a984970f79ef52b',uuid=ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.752 248514 DEBUG nova.network.os_vif_util [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Converting VIF {"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.753 248514 DEBUG nova.network.os_vif_util [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:83:58:17,bridge_name='br-int',has_traffic_filtering=True,id=5d8e1c45-4a7d-4fab-9d17-2bb5837e6134,network=Network(e08cb57c-0bd2-4c88-a4f8-e9d9be925301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d8e1c45-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.753 248514 DEBUG os_vif [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:58:17,bridge_name='br-int',has_traffic_filtering=True,id=5d8e1c45-4a7d-4fab-9d17-2bb5837e6134,network=Network(e08cb57c-0bd2-4c88-a4f8-e9d9be925301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d8e1c45-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.756 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.757 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d8e1c45-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:24:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.756 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef03529b-126c-4ae4-8920-c14e9f3ef375]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.758 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.759 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:46 np0005558241 nova_compute[248510]: 2025-12-13 08:24:46.762 248514 INFO os_vif [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:58:17,bridge_name='br-int',has_traffic_filtering=True,id=5d8e1c45-4a7d-4fab-9d17-2bb5837e6134,network=Network(e08cb57c-0bd2-4c88-a4f8-e9d9be925301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d8e1c45-4a')#033[00m
Dec 13 03:24:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.776 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[059664b4-edd9-49c5-981f-940b2055884d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.778 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2dbb16e1-3117-4cfb-8325-c30eb34befb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.796 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9b51e1-bed5-43a8-aead-3b657f18c88a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681046, 'reachable_time': 23671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288338, 'error': None, 'target': 'ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:46 np0005558241 systemd[1]: run-netns-ovnmeta\x2de08cb57c\x2d0bd2\x2d4c88\x2da4f8\x2de9d9be925301.mount: Deactivated successfully.
Dec 13 03:24:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.800 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e08cb57c-0bd2-4c88-a4f8-e9d9be925301 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:24:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:46.800 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[8872fc67-e88b-451f-91f5-36b2deb76094]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:24:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1788: 321 pgs: 321 active+clean; 121 MiB data, 426 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:24:47 np0005558241 nova_compute[248510]: 2025-12-13 08:24:47.209 248514 DEBUG nova.compute.manager [req-ecc15ad2-fc58-4fce-a92e-6bf21d5963b1 req-32cb73f3-f3eb-486c-acc7-4f22a08ed00c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-vif-unplugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:47 np0005558241 nova_compute[248510]: 2025-12-13 08:24:47.210 248514 DEBUG oslo_concurrency.lockutils [req-ecc15ad2-fc58-4fce-a92e-6bf21d5963b1 req-32cb73f3-f3eb-486c-acc7-4f22a08ed00c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:47 np0005558241 nova_compute[248510]: 2025-12-13 08:24:47.210 248514 DEBUG oslo_concurrency.lockutils [req-ecc15ad2-fc58-4fce-a92e-6bf21d5963b1 req-32cb73f3-f3eb-486c-acc7-4f22a08ed00c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:47 np0005558241 nova_compute[248510]: 2025-12-13 08:24:47.211 248514 DEBUG oslo_concurrency.lockutils [req-ecc15ad2-fc58-4fce-a92e-6bf21d5963b1 req-32cb73f3-f3eb-486c-acc7-4f22a08ed00c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:47 np0005558241 nova_compute[248510]: 2025-12-13 08:24:47.211 248514 DEBUG nova.compute.manager [req-ecc15ad2-fc58-4fce-a92e-6bf21d5963b1 req-32cb73f3-f3eb-486c-acc7-4f22a08ed00c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] No waiting events found dispatching network-vif-unplugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:47 np0005558241 nova_compute[248510]: 2025-12-13 08:24:47.211 248514 DEBUG nova.compute.manager [req-ecc15ad2-fc58-4fce-a92e-6bf21d5963b1 req-32cb73f3-f3eb-486c-acc7-4f22a08ed00c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-vif-unplugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:24:47 np0005558241 nova_compute[248510]: 2025-12-13 08:24:47.311 248514 INFO nova.virt.libvirt.driver [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Deleting instance files /var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_del#033[00m
Dec 13 03:24:47 np0005558241 nova_compute[248510]: 2025-12-13 08:24:47.313 248514 INFO nova.virt.libvirt.driver [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Deletion of /var/lib/nova/instances/ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5_del complete#033[00m
Dec 13 03:24:47 np0005558241 nova_compute[248510]: 2025-12-13 08:24:47.457 248514 INFO nova.compute.manager [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Took 1.15 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:24:47 np0005558241 nova_compute[248510]: 2025-12-13 08:24:47.458 248514 DEBUG oslo.service.loopingcall [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:24:47 np0005558241 nova_compute[248510]: 2025-12-13 08:24:47.459 248514 DEBUG nova.compute.manager [-] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:24:47 np0005558241 nova_compute[248510]: 2025-12-13 08:24:47.459 248514 DEBUG nova.network.neutron [-] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:24:48 np0005558241 nova_compute[248510]: 2025-12-13 08:24:48.237 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updating instance_info_cache with network_info: [{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:48 np0005558241 nova_compute[248510]: 2025-12-13 08:24:48.268 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:24:48 np0005558241 nova_compute[248510]: 2025-12-13 08:24:48.269 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:24:48 np0005558241 nova_compute[248510]: 2025-12-13 08:24:48.269 248514 DEBUG oslo_concurrency.lockutils [req-080f21c2-ac6f-49c5-bec5-579e1b0196d1 req-ee04e2d9-1c7d-41b7-ab45-2e712847a6a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:24:48 np0005558241 nova_compute[248510]: 2025-12-13 08:24:48.269 248514 DEBUG nova.network.neutron [req-080f21c2-ac6f-49c5-bec5-579e1b0196d1 req-ee04e2d9-1c7d-41b7-ab45-2e712847a6a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Refreshing network info cache for port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:24:48 np0005558241 nova_compute[248510]: 2025-12-13 08:24:48.270 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:24:48 np0005558241 nova_compute[248510]: 2025-12-13 08:24:48.271 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:24:48 np0005558241 nova_compute[248510]: 2025-12-13 08:24:48.271 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:24:48 np0005558241 nova_compute[248510]: 2025-12-13 08:24:48.271 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:24:48 np0005558241 nova_compute[248510]: 2025-12-13 08:24:48.272 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:24:48 np0005558241 nova_compute[248510]: 2025-12-13 08:24:48.272 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:24:48 np0005558241 nova_compute[248510]: 2025-12-13 08:24:48.337 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:48 np0005558241 nova_compute[248510]: 2025-12-13 08:24:48.337 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:48 np0005558241 nova_compute[248510]: 2025-12-13 08:24:48.337 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:48 np0005558241 nova_compute[248510]: 2025-12-13 08:24:48.338 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:24:48 np0005558241 nova_compute[248510]: 2025-12-13 08:24:48.338 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:24:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:24:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3307034094' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:24:48 np0005558241 nova_compute[248510]: 2025-12-13 08:24:48.900 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.085 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.086 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4396MB free_disk=59.94239760842174GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.086 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.087 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1789: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.225 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.226 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.226 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.264 248514 DEBUG nova.network.neutron [-] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.291 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.391 248514 INFO nova.compute.manager [-] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Took 1.93 seconds to deallocate network for instance.#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.439 248514 DEBUG nova.compute.manager [req-fc208eac-ecd3-481a-9045-caf00b4c7fdb req-a13d27c1-da21-492f-a386-946c8b352bd5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-vif-plugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.439 248514 DEBUG oslo_concurrency.lockutils [req-fc208eac-ecd3-481a-9045-caf00b4c7fdb req-a13d27c1-da21-492f-a386-946c8b352bd5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.440 248514 DEBUG oslo_concurrency.lockutils [req-fc208eac-ecd3-481a-9045-caf00b4c7fdb req-a13d27c1-da21-492f-a386-946c8b352bd5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.440 248514 DEBUG oslo_concurrency.lockutils [req-fc208eac-ecd3-481a-9045-caf00b4c7fdb req-a13d27c1-da21-492f-a386-946c8b352bd5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.440 248514 DEBUG nova.compute.manager [req-fc208eac-ecd3-481a-9045-caf00b4c7fdb req-a13d27c1-da21-492f-a386-946c8b352bd5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] No waiting events found dispatching network-vif-plugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.440 248514 WARNING nova.compute.manager [req-fc208eac-ecd3-481a-9045-caf00b4c7fdb req-a13d27c1-da21-492f-a386-946c8b352bd5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received unexpected event network-vif-plugged-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.440 248514 DEBUG nova.compute.manager [req-fc208eac-ecd3-481a-9045-caf00b4c7fdb req-a13d27c1-da21-492f-a386-946c8b352bd5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Received event network-vif-deleted-5d8e1c45-4a7d-4fab-9d17-2bb5837e6134 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.468 248514 DEBUG oslo_concurrency.lockutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.671 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:24:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1994078775' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.882 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.888 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:24:49 np0005558241 nova_compute[248510]: 2025-12-13 08:24:49.951 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:24:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:24:50 np0005558241 nova_compute[248510]: 2025-12-13 08:24:50.328 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:24:50 np0005558241 nova_compute[248510]: 2025-12-13 08:24:50.329 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:50 np0005558241 nova_compute[248510]: 2025-12-13 08:24:50.330 248514 DEBUG oslo_concurrency.lockutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:50 np0005558241 nova_compute[248510]: 2025-12-13 08:24:50.330 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:24:50 np0005558241 nova_compute[248510]: 2025-12-13 08:24:50.330 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 13 03:24:50 np0005558241 nova_compute[248510]: 2025-12-13 08:24:50.378 248514 DEBUG oslo_concurrency.processutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:24:50 np0005558241 nova_compute[248510]: 2025-12-13 08:24:50.525 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:24:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:24:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/861745803' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:24:50 np0005558241 nova_compute[248510]: 2025-12-13 08:24:50.997 248514 DEBUG oslo_concurrency.processutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:24:51 np0005558241 nova_compute[248510]: 2025-12-13 08:24:51.004 248514 DEBUG nova.compute.provider_tree [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:24:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1790: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Dec 13 03:24:51 np0005558241 nova_compute[248510]: 2025-12-13 08:24:51.191 248514 DEBUG nova.scheduler.client.report [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:24:51 np0005558241 nova_compute[248510]: 2025-12-13 08:24:51.254 248514 DEBUG oslo_concurrency.lockutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:51 np0005558241 nova_compute[248510]: 2025-12-13 08:24:51.376 248514 INFO nova.scheduler.client.report [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Deleted allocations for instance ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5#033[00m
Dec 13 03:24:51 np0005558241 nova_compute[248510]: 2025-12-13 08:24:51.380 248514 DEBUG nova.network.neutron [req-080f21c2-ac6f-49c5-bec5-579e1b0196d1 req-ee04e2d9-1c7d-41b7-ab45-2e712847a6a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updated VIF entry in instance network info cache for port 5d8e1c45-4a7d-4fab-9d17-2bb5837e6134. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:24:51 np0005558241 nova_compute[248510]: 2025-12-13 08:24:51.380 248514 DEBUG nova.network.neutron [req-080f21c2-ac6f-49c5-bec5-579e1b0196d1 req-ee04e2d9-1c7d-41b7-ab45-2e712847a6a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Updating instance_info_cache with network_info: [{"id": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "address": "fa:16:3e:83:58:17", "network": {"id": "e08cb57c-0bd2-4c88-a4f8-e9d9be925301", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-183137955-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2be5ed2a3b1a405bb6891ecdc5cba68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d8e1c45-4a", "ovs_interfaceid": "5d8e1c45-4a7d-4fab-9d17-2bb5837e6134", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:24:51 np0005558241 nova_compute[248510]: 2025-12-13 08:24:51.665 248514 DEBUG oslo_concurrency.lockutils [req-080f21c2-ac6f-49c5-bec5-579e1b0196d1 req-ee04e2d9-1c7d-41b7-ab45-2e712847a6a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:24:51 np0005558241 nova_compute[248510]: 2025-12-13 08:24:51.740 248514 DEBUG oslo_concurrency.lockutils [None req-82728d49-043b-480e-8ae6-379d17f1322a 5faa7317a5cd4b748a984970f79ef52b 2be5ed2a3b1a405bb6891ecdc5cba68c - - default default] Lock "ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:51 np0005558241 nova_compute[248510]: 2025-12-13 08:24:51.758 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:52 np0005558241 nova_compute[248510]: 2025-12-13 08:24:52.787 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1791: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Dec 13 03:24:54 np0005558241 nova_compute[248510]: 2025-12-13 08:24:54.673 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:55 np0005558241 nova_compute[248510]: 2025-12-13 08:24:55.008 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:24:55 np0005558241 nova_compute[248510]: 2025-12-13 08:24:55.009 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:24:55 np0005558241 nova_compute[248510]: 2025-12-13 08:24:55.009 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 13 03:24:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:24:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1792: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Dec 13 03:24:55 np0005558241 nova_compute[248510]: 2025-12-13 08:24:55.388 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 13 03:24:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:55.406 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:24:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:55.407 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:24:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:24:55.407 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:24:56 np0005558241 nova_compute[248510]: 2025-12-13 08:24:56.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1793: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Dec 13 03:24:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1794: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Dec 13 03:24:59 np0005558241 nova_compute[248510]: 2025-12-13 08:24:59.676 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:24:59 np0005558241 nova_compute[248510]: 2025-12-13 08:24:59.891 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:25:00 np0005558241 nova_compute[248510]: 2025-12-13 08:25:00.082 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:25:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:25:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:25:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:25:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:25:01 np0005558241 nova_compute[248510]: 2025-12-13 08:25:01.546 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614286.5454469, ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:25:01 np0005558241 nova_compute[248510]: 2025-12-13 08:25:01.548 248514 INFO nova.compute.manager [-] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:25:01 np0005558241 nova_compute[248510]: 2025-12-13 08:25:01.742 248514 DEBUG nova.compute.manager [None req-b50378d2-355a-4c00-8e3e-37a5bb860d86 - - - - - -] [instance: ab4f5ee6-de12-495e-ae3e-0e4f6b4154e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:25:01 np0005558241 nova_compute[248510]: 2025-12-13 08:25:01.763 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:25:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:25:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:25:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:25:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:25:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:25:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:25:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:25:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:25:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:25:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:25:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:25:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:25:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:25:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:25:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:25:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:25:02 np0005558241 podman[288626]: 2025-12-13 08:25:02.603047785 +0000 UTC m=+0.043331040 container create 56434f8bc4016da7e405c3a90c3de34cd788fe1c002f38b6bf378cebf12ea45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mendeleev, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:25:02 np0005558241 systemd[1]: Started libpod-conmon-56434f8bc4016da7e405c3a90c3de34cd788fe1c002f38b6bf378cebf12ea45a.scope.
Dec 13 03:25:02 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:25:02 np0005558241 podman[288626]: 2025-12-13 08:25:02.585733378 +0000 UTC m=+0.026016663 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:25:02 np0005558241 podman[288626]: 2025-12-13 08:25:02.701289159 +0000 UTC m=+0.141572444 container init 56434f8bc4016da7e405c3a90c3de34cd788fe1c002f38b6bf378cebf12ea45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:25:02 np0005558241 podman[288626]: 2025-12-13 08:25:02.712015014 +0000 UTC m=+0.152298269 container start 56434f8bc4016da7e405c3a90c3de34cd788fe1c002f38b6bf378cebf12ea45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:25:02 np0005558241 podman[288626]: 2025-12-13 08:25:02.716054313 +0000 UTC m=+0.156337588 container attach 56434f8bc4016da7e405c3a90c3de34cd788fe1c002f38b6bf378cebf12ea45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mendeleev, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 03:25:02 np0005558241 flamboyant_mendeleev[288642]: 167 167
Dec 13 03:25:02 np0005558241 systemd[1]: libpod-56434f8bc4016da7e405c3a90c3de34cd788fe1c002f38b6bf378cebf12ea45a.scope: Deactivated successfully.
Dec 13 03:25:02 np0005558241 podman[288626]: 2025-12-13 08:25:02.722721988 +0000 UTC m=+0.163005243 container died 56434f8bc4016da7e405c3a90c3de34cd788fe1c002f38b6bf378cebf12ea45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mendeleev, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 03:25:02 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c6905200a4619d4e756e2479197ee978a9e57a950c1a6af50faa91271e4e290d-merged.mount: Deactivated successfully.
Dec 13 03:25:02 np0005558241 podman[288626]: 2025-12-13 08:25:02.760747876 +0000 UTC m=+0.201031131 container remove 56434f8bc4016da7e405c3a90c3de34cd788fe1c002f38b6bf378cebf12ea45a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mendeleev, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 03:25:02 np0005558241 systemd[1]: libpod-conmon-56434f8bc4016da7e405c3a90c3de34cd788fe1c002f38b6bf378cebf12ea45a.scope: Deactivated successfully.
Dec 13 03:25:02 np0005558241 podman[288666]: 2025-12-13 08:25:02.958443054 +0000 UTC m=+0.059918779 container create 00fdc40935d462993bb5087346937e70448a4b5700763f8a309cb574465ce4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:25:03 np0005558241 systemd[1]: Started libpod-conmon-00fdc40935d462993bb5087346937e70448a4b5700763f8a309cb574465ce4e5.scope.
Dec 13 03:25:03 np0005558241 podman[288666]: 2025-12-13 08:25:02.928386102 +0000 UTC m=+0.029861857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:25:03 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:25:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caf54cf20a6985ff9a77b5c88ee43cc827baa0c95ea56439f9b6b2dba1262f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:25:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caf54cf20a6985ff9a77b5c88ee43cc827baa0c95ea56439f9b6b2dba1262f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:25:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caf54cf20a6985ff9a77b5c88ee43cc827baa0c95ea56439f9b6b2dba1262f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:25:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caf54cf20a6985ff9a77b5c88ee43cc827baa0c95ea56439f9b6b2dba1262f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:25:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4caf54cf20a6985ff9a77b5c88ee43cc827baa0c95ea56439f9b6b2dba1262f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:25:03 np0005558241 podman[288666]: 2025-12-13 08:25:03.051824658 +0000 UTC m=+0.153300403 container init 00fdc40935d462993bb5087346937e70448a4b5700763f8a309cb574465ce4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 03:25:03 np0005558241 podman[288666]: 2025-12-13 08:25:03.058711888 +0000 UTC m=+0.160187613 container start 00fdc40935d462993bb5087346937e70448a4b5700763f8a309cb574465ce4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_babbage, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:25:03 np0005558241 podman[288666]: 2025-12-13 08:25:03.062324747 +0000 UTC m=+0.163800492 container attach 00fdc40935d462993bb5087346937e70448a4b5700763f8a309cb574465ce4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_babbage, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:25:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1796: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:25:03 np0005558241 crazy_babbage[288682]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:25:03 np0005558241 crazy_babbage[288682]: --> All data devices are unavailable
Dec 13 03:25:03 np0005558241 systemd[1]: libpod-00fdc40935d462993bb5087346937e70448a4b5700763f8a309cb574465ce4e5.scope: Deactivated successfully.
Dec 13 03:25:03 np0005558241 podman[288666]: 2025-12-13 08:25:03.561586846 +0000 UTC m=+0.663062571 container died 00fdc40935d462993bb5087346937e70448a4b5700763f8a309cb574465ce4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 03:25:03 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4caf54cf20a6985ff9a77b5c88ee43cc827baa0c95ea56439f9b6b2dba1262f4-merged.mount: Deactivated successfully.
Dec 13 03:25:03 np0005558241 podman[288666]: 2025-12-13 08:25:03.619640818 +0000 UTC m=+0.721116543 container remove 00fdc40935d462993bb5087346937e70448a4b5700763f8a309cb574465ce4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_babbage, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 03:25:03 np0005558241 systemd[1]: libpod-conmon-00fdc40935d462993bb5087346937e70448a4b5700763f8a309cb574465ce4e5.scope: Deactivated successfully.
Dec 13 03:25:03 np0005558241 nova_compute[248510]: 2025-12-13 08:25:03.880 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:04 np0005558241 podman[288778]: 2025-12-13 08:25:04.122739732 +0000 UTC m=+0.041330801 container create 7aa5f791f4853e0ed0dcce04607855cccaff579d41b625256486750df576ce75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_herschel, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Dec 13 03:25:04 np0005558241 systemd[1]: Started libpod-conmon-7aa5f791f4853e0ed0dcce04607855cccaff579d41b625256486750df576ce75.scope.
Dec 13 03:25:04 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:25:04 np0005558241 podman[288778]: 2025-12-13 08:25:04.104935612 +0000 UTC m=+0.023526701 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:25:04 np0005558241 podman[288778]: 2025-12-13 08:25:04.205551945 +0000 UTC m=+0.124143034 container init 7aa5f791f4853e0ed0dcce04607855cccaff579d41b625256486750df576ce75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:25:04 np0005558241 podman[288778]: 2025-12-13 08:25:04.21304421 +0000 UTC m=+0.131635279 container start 7aa5f791f4853e0ed0dcce04607855cccaff579d41b625256486750df576ce75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 03:25:04 np0005558241 podman[288778]: 2025-12-13 08:25:04.217203172 +0000 UTC m=+0.135794241 container attach 7aa5f791f4853e0ed0dcce04607855cccaff579d41b625256486750df576ce75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_herschel, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 03:25:04 np0005558241 mystifying_herschel[288794]: 167 167
Dec 13 03:25:04 np0005558241 systemd[1]: libpod-7aa5f791f4853e0ed0dcce04607855cccaff579d41b625256486750df576ce75.scope: Deactivated successfully.
Dec 13 03:25:04 np0005558241 podman[288778]: 2025-12-13 08:25:04.219163501 +0000 UTC m=+0.137754570 container died 7aa5f791f4853e0ed0dcce04607855cccaff579d41b625256486750df576ce75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Dec 13 03:25:04 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0c53e0bbc3582b6d948973d4f90a43d0da731473d94ba746485414f4ceb245e1-merged.mount: Deactivated successfully.
Dec 13 03:25:04 np0005558241 podman[288778]: 2025-12-13 08:25:04.263912635 +0000 UTC m=+0.182503704 container remove 7aa5f791f4853e0ed0dcce04607855cccaff579d41b625256486750df576ce75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:25:04 np0005558241 systemd[1]: libpod-conmon-7aa5f791f4853e0ed0dcce04607855cccaff579d41b625256486750df576ce75.scope: Deactivated successfully.
Dec 13 03:25:04 np0005558241 podman[288818]: 2025-12-13 08:25:04.468556204 +0000 UTC m=+0.058634018 container create 25d17dd16d4afdea1ce98854117485945dfd53641c92be446d8b9007136dc9d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_noyce, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:25:04 np0005558241 systemd[1]: Started libpod-conmon-25d17dd16d4afdea1ce98854117485945dfd53641c92be446d8b9007136dc9d9.scope.
Dec 13 03:25:04 np0005558241 podman[288818]: 2025-12-13 08:25:04.439947068 +0000 UTC m=+0.030024902 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:25:04 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:25:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0dfcc7b78fa6621e3fd668c235e6f786845bf2f9794a434ac073a3cad5e8541/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:25:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0dfcc7b78fa6621e3fd668c235e6f786845bf2f9794a434ac073a3cad5e8541/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:25:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0dfcc7b78fa6621e3fd668c235e6f786845bf2f9794a434ac073a3cad5e8541/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:25:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0dfcc7b78fa6621e3fd668c235e6f786845bf2f9794a434ac073a3cad5e8541/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:25:04 np0005558241 podman[288818]: 2025-12-13 08:25:04.560019911 +0000 UTC m=+0.150097745 container init 25d17dd16d4afdea1ce98854117485945dfd53641c92be446d8b9007136dc9d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:25:04 np0005558241 podman[288818]: 2025-12-13 08:25:04.566107781 +0000 UTC m=+0.156185595 container start 25d17dd16d4afdea1ce98854117485945dfd53641c92be446d8b9007136dc9d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 03:25:04 np0005558241 podman[288818]: 2025-12-13 08:25:04.570187982 +0000 UTC m=+0.160265816 container attach 25d17dd16d4afdea1ce98854117485945dfd53641c92be446d8b9007136dc9d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 03:25:04 np0005558241 nova_compute[248510]: 2025-12-13 08:25:04.678 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:04 np0005558241 loving_noyce[288834]: {
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:    "0": [
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:        {
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "devices": [
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "/dev/loop3"
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            ],
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "lv_name": "ceph_lv0",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "lv_size": "21470642176",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "name": "ceph_lv0",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "tags": {
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.cluster_name": "ceph",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.crush_device_class": "",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.encrypted": "0",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.objectstore": "bluestore",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.osd_id": "0",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.type": "block",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.vdo": "0",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.with_tpm": "0"
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            },
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "type": "block",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "vg_name": "ceph_vg0"
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:        }
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:    ],
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:    "1": [
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:        {
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "devices": [
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "/dev/loop4"
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            ],
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "lv_name": "ceph_lv1",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "lv_size": "21470642176",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "name": "ceph_lv1",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "tags": {
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.cluster_name": "ceph",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.crush_device_class": "",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.encrypted": "0",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.objectstore": "bluestore",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.osd_id": "1",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.type": "block",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.vdo": "0",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.with_tpm": "0"
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            },
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "type": "block",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "vg_name": "ceph_vg1"
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:        }
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:    ],
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:    "2": [
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:        {
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "devices": [
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "/dev/loop5"
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            ],
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "lv_name": "ceph_lv2",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "lv_size": "21470642176",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "name": "ceph_lv2",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "tags": {
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.cluster_name": "ceph",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.crush_device_class": "",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.encrypted": "0",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.objectstore": "bluestore",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.osd_id": "2",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.type": "block",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.vdo": "0",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:                "ceph.with_tpm": "0"
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            },
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "type": "block",
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:            "vg_name": "ceph_vg2"
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:        }
Dec 13 03:25:04 np0005558241 loving_noyce[288834]:    ]
Dec 13 03:25:04 np0005558241 loving_noyce[288834]: }
Dec 13 03:25:04 np0005558241 systemd[1]: libpod-25d17dd16d4afdea1ce98854117485945dfd53641c92be446d8b9007136dc9d9.scope: Deactivated successfully.
Dec 13 03:25:04 np0005558241 podman[288843]: 2025-12-13 08:25:04.964659244 +0000 UTC m=+0.029391916 container died 25d17dd16d4afdea1ce98854117485945dfd53641c92be446d8b9007136dc9d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_noyce, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:25:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:25:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1797: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:25:05 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f0dfcc7b78fa6621e3fd668c235e6f786845bf2f9794a434ac073a3cad5e8541-merged.mount: Deactivated successfully.
Dec 13 03:25:05 np0005558241 podman[288843]: 2025-12-13 08:25:05.279742958 +0000 UTC m=+0.344475570 container remove 25d17dd16d4afdea1ce98854117485945dfd53641c92be446d8b9007136dc9d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:25:05 np0005558241 systemd[1]: libpod-conmon-25d17dd16d4afdea1ce98854117485945dfd53641c92be446d8b9007136dc9d9.scope: Deactivated successfully.
Dec 13 03:25:05 np0005558241 podman[288920]: 2025-12-13 08:25:05.771534872 +0000 UTC m=+0.025659824 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:25:06 np0005558241 podman[288920]: 2025-12-13 08:25:06.381327658 +0000 UTC m=+0.635452610 container create aa7dbad95880f25093f8749fe794c25d6b405aeda33f58ed59c73d125ee8fed9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 03:25:06 np0005558241 systemd[1]: Started libpod-conmon-aa7dbad95880f25093f8749fe794c25d6b405aeda33f58ed59c73d125ee8fed9.scope.
Dec 13 03:25:06 np0005558241 nova_compute[248510]: 2025-12-13 08:25:06.766 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:06 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:25:07 np0005558241 nova_compute[248510]: 2025-12-13 08:25:07.067 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "2c76f149-c467-4e59-afee-77940e515f8c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:07 np0005558241 nova_compute[248510]: 2025-12-13 08:25:07.067 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:07 np0005558241 podman[288920]: 2025-12-13 08:25:07.089192614 +0000 UTC m=+1.343317526 container init aa7dbad95880f25093f8749fe794c25d6b405aeda33f58ed59c73d125ee8fed9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_albattani, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:25:07 np0005558241 podman[288920]: 2025-12-13 08:25:07.099820066 +0000 UTC m=+1.353944988 container start aa7dbad95880f25093f8749fe794c25d6b405aeda33f58ed59c73d125ee8fed9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_albattani, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 03:25:07 np0005558241 heuristic_albattani[288937]: 167 167
Dec 13 03:25:07 np0005558241 systemd[1]: libpod-aa7dbad95880f25093f8749fe794c25d6b405aeda33f58ed59c73d125ee8fed9.scope: Deactivated successfully.
Dec 13 03:25:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1798: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:25:07 np0005558241 podman[288920]: 2025-12-13 08:25:07.140617223 +0000 UTC m=+1.394742175 container attach aa7dbad95880f25093f8749fe794c25d6b405aeda33f58ed59c73d125ee8fed9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:25:07 np0005558241 podman[288920]: 2025-12-13 08:25:07.141776431 +0000 UTC m=+1.395901353 container died aa7dbad95880f25093f8749fe794c25d6b405aeda33f58ed59c73d125ee8fed9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:25:07 np0005558241 nova_compute[248510]: 2025-12-13 08:25:07.311 248514 DEBUG nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:25:07 np0005558241 nova_compute[248510]: 2025-12-13 08:25:07.530 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:07 np0005558241 nova_compute[248510]: 2025-12-13 08:25:07.531 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:07 np0005558241 nova_compute[248510]: 2025-12-13 08:25:07.539 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:25:07 np0005558241 nova_compute[248510]: 2025-12-13 08:25:07.539 248514 INFO nova.compute.claims [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:25:07 np0005558241 nova_compute[248510]: 2025-12-13 08:25:07.562 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:07 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1070d96e041fb6b74a7562d91750286d093a3beeb0f66ec604ccc81252720243-merged.mount: Deactivated successfully.
Dec 13 03:25:07 np0005558241 podman[288920]: 2025-12-13 08:25:07.678503185 +0000 UTC m=+1.932628107 container remove aa7dbad95880f25093f8749fe794c25d6b405aeda33f58ed59c73d125ee8fed9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_albattani, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:25:07 np0005558241 systemd[1]: libpod-conmon-aa7dbad95880f25093f8749fe794c25d6b405aeda33f58ed59c73d125ee8fed9.scope: Deactivated successfully.
Dec 13 03:25:07 np0005558241 nova_compute[248510]: 2025-12-13 08:25:07.715 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:07 np0005558241 nova_compute[248510]: 2025-12-13 08:25:07.747 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:07 np0005558241 podman[288963]: 2025-12-13 08:25:07.866419281 +0000 UTC m=+0.063361484 container create 32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_driscoll, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:25:07 np0005558241 systemd[1]: Started libpod-conmon-32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437.scope.
Dec 13 03:25:07 np0005558241 podman[288963]: 2025-12-13 08:25:07.825739317 +0000 UTC m=+0.022681530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:25:07 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:25:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ebe7d351de8403d9a81f8ceae9b19f077632b37cd324623fa0e2bfaf4a7251e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:25:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ebe7d351de8403d9a81f8ceae9b19f077632b37cd324623fa0e2bfaf4a7251e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:25:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ebe7d351de8403d9a81f8ceae9b19f077632b37cd324623fa0e2bfaf4a7251e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:25:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ebe7d351de8403d9a81f8ceae9b19f077632b37cd324623fa0e2bfaf4a7251e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:25:07 np0005558241 podman[288963]: 2025-12-13 08:25:07.991107778 +0000 UTC m=+0.188050001 container init 32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_driscoll, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 03:25:08 np0005558241 podman[288963]: 2025-12-13 08:25:08.000176171 +0000 UTC m=+0.197118374 container start 32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_driscoll, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 03:25:08 np0005558241 podman[288963]: 2025-12-13 08:25:08.054211015 +0000 UTC m=+0.251153228 container attach 32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_driscoll, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 03:25:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:25:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/288930945' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:25:08 np0005558241 nova_compute[248510]: 2025-12-13 08:25:08.310 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:08 np0005558241 nova_compute[248510]: 2025-12-13 08:25:08.319 248514 DEBUG nova.compute.provider_tree [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:25:08 np0005558241 nova_compute[248510]: 2025-12-13 08:25:08.409 248514 DEBUG nova.scheduler.client.report [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:25:08 np0005558241 nova_compute[248510]: 2025-12-13 08:25:08.452 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:25:08 np0005558241 nova_compute[248510]: 2025-12-13 08:25:08.453 248514 DEBUG nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:25:08 np0005558241 nova_compute[248510]: 2025-12-13 08:25:08.537 248514 DEBUG nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:25:08 np0005558241 nova_compute[248510]: 2025-12-13 08:25:08.538 248514 DEBUG nova.network.neutron [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:25:08 np0005558241 nova_compute[248510]: 2025-12-13 08:25:08.561 248514 INFO nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:25:08 np0005558241 nova_compute[248510]: 2025-12-13 08:25:08.582 248514 DEBUG nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:25:08 np0005558241 lvm[289078]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:25:08 np0005558241 lvm[289078]: VG ceph_vg0 finished
Dec 13 03:25:08 np0005558241 lvm[289079]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:25:08 np0005558241 lvm[289079]: VG ceph_vg1 finished
Dec 13 03:25:08 np0005558241 lvm[289081]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:25:08 np0005558241 lvm[289081]: VG ceph_vg2 finished
Dec 13 03:25:08 np0005558241 focused_driscoll[288998]: {}
Dec 13 03:25:08 np0005558241 podman[288963]: 2025-12-13 08:25:08.885891464 +0000 UTC m=+1.082833677 container died 32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_driscoll, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:25:08 np0005558241 systemd[1]: libpod-32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437.scope: Deactivated successfully.
Dec 13 03:25:08 np0005558241 systemd[1]: libpod-32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437.scope: Consumed 1.467s CPU time.
Dec 13 03:25:08 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0ebe7d351de8403d9a81f8ceae9b19f077632b37cd324623fa0e2bfaf4a7251e-merged.mount: Deactivated successfully.
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.025 248514 DEBUG nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.026 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.027 248514 INFO nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Creating image(s)#033[00m
Dec 13 03:25:09 np0005558241 podman[288963]: 2025-12-13 08:25:09.081157731 +0000 UTC m=+1.278099964 container remove 32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_driscoll, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.089 248514 DEBUG nova.storage.rbd_utils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 2c76f149-c467-4e59-afee-77940e515f8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:25:09 np0005558241 systemd[1]: libpod-conmon-32fbc2c081c5e5190f579c3d1235326f177ff20f5879f47d83f296cadd8b7437.scope: Deactivated successfully.
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.114 248514 DEBUG nova.storage.rbd_utils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 2c76f149-c467-4e59-afee-77940e515f8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:25:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1799: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:25:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.140 248514 DEBUG nova.storage.rbd_utils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 2c76f149-c467-4e59-afee-77940e515f8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.144 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:25:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.175 248514 DEBUG nova.policy [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b988c7ac9354c59aac9a9f41f83c20f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '52e1055963294dbdb16cd95b466cd4d9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:25:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:25:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:25:09
Dec 13 03:25:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:25:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:25:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'volumes', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', '.rgw.root', '.mgr', 'default.rgw.control', 'default.rgw.log']
Dec 13 03:25:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.214 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.216 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.217 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.217 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.240 248514 DEBUG nova.storage.rbd_utils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 2c76f149-c467-4e59-afee-77940e515f8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.245 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2c76f149-c467-4e59-afee-77940e515f8c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:09.281 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:25:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:09.282 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.283 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.558 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2c76f149-c467-4e59-afee-77940e515f8c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.633 248514 DEBUG nova.storage.rbd_utils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] resizing rbd image 2c76f149-c467-4e59-afee-77940e515f8c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.758 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.765 248514 DEBUG nova.objects.instance [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c76f149-c467-4e59-afee-77940e515f8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.888 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.889 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Ensure instance console log exists: /var/lib/nova/instances/2c76f149-c467-4e59-afee-77940e515f8c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.889 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.890 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:09 np0005558241 nova_compute[248510]: 2025-12-13 08:25:09.890 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:25:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:25:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:25:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:25:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:25:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:25:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:25:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:25:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:25:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:25:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:25:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:25:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:25:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:25:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:25:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:25:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:25:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:25:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:25:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:25:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1800: 321 pgs: 321 active+clean; 41 MiB data, 378 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:25:11 np0005558241 nova_compute[248510]: 2025-12-13 08:25:11.768 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:12 np0005558241 nova_compute[248510]: 2025-12-13 08:25:12.728 248514 DEBUG nova.network.neutron [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Successfully created port: a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:25:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1801: 321 pgs: 321 active+clean; 67 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 1.2 MiB/s wr, 2 op/s
Dec 13 03:25:14 np0005558241 nova_compute[248510]: 2025-12-13 08:25:14.732 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:25:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4254337099' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:25:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:25:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4254337099' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:25:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:25:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 88 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:25:15 np0005558241 podman[289289]: 2025-12-13 08:25:15.972886926 +0000 UTC m=+0.061394236 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec 13 03:25:15 np0005558241 podman[289288]: 2025-12-13 08:25:15.997121334 +0000 UTC m=+0.085616424 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 13 03:25:15 np0005558241 podman[289290]: 2025-12-13 08:25:15.997469482 +0000 UTC m=+0.085773107 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 03:25:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:16.284 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:16 np0005558241 nova_compute[248510]: 2025-12-13 08:25:16.643 248514 DEBUG nova.network.neutron [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Successfully updated port: a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:25:16 np0005558241 nova_compute[248510]: 2025-12-13 08:25:16.729 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "refresh_cache-2c76f149-c467-4e59-afee-77940e515f8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:25:16 np0005558241 nova_compute[248510]: 2025-12-13 08:25:16.730 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquired lock "refresh_cache-2c76f149-c467-4e59-afee-77940e515f8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:25:16 np0005558241 nova_compute[248510]: 2025-12-13 08:25:16.730 248514 DEBUG nova.network.neutron [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:25:16 np0005558241 nova_compute[248510]: 2025-12-13 08:25:16.769 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:16 np0005558241 nova_compute[248510]: 2025-12-13 08:25:16.966 248514 DEBUG nova.network.neutron [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:25:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1803: 321 pgs: 321 active+clean; 88 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:25:17 np0005558241 nova_compute[248510]: 2025-12-13 08:25:17.687 248514 DEBUG nova.compute.manager [req-da1701be-105d-413e-b86c-f77d45dd3e02 req-729d3877-09c9-4f96-9a5f-4c60112da9de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Received event network-changed-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:25:17 np0005558241 nova_compute[248510]: 2025-12-13 08:25:17.688 248514 DEBUG nova.compute.manager [req-da1701be-105d-413e-b86c-f77d45dd3e02 req-729d3877-09c9-4f96-9a5f-4c60112da9de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Refreshing instance network info cache due to event network-changed-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:25:17 np0005558241 nova_compute[248510]: 2025-12-13 08:25:17.688 248514 DEBUG oslo_concurrency.lockutils [req-da1701be-105d-413e-b86c-f77d45dd3e02 req-729d3877-09c9-4f96-9a5f-4c60112da9de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2c76f149-c467-4e59-afee-77940e515f8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.359 248514 DEBUG nova.network.neutron [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Updating instance_info_cache with network_info: [{"id": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "address": "fa:16:3e:e8:0b:3c", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5b00a7b-ee", "ovs_interfaceid": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.391 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Releasing lock "refresh_cache-2c76f149-c467-4e59-afee-77940e515f8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.392 248514 DEBUG nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Instance network_info: |[{"id": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "address": "fa:16:3e:e8:0b:3c", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5b00a7b-ee", "ovs_interfaceid": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.392 248514 DEBUG oslo_concurrency.lockutils [req-da1701be-105d-413e-b86c-f77d45dd3e02 req-729d3877-09c9-4f96-9a5f-4c60112da9de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2c76f149-c467-4e59-afee-77940e515f8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.393 248514 DEBUG nova.network.neutron [req-da1701be-105d-413e-b86c-f77d45dd3e02 req-729d3877-09c9-4f96-9a5f-4c60112da9de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Refreshing network info cache for port a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.396 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Start _get_guest_xml network_info=[{"id": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "address": "fa:16:3e:e8:0b:3c", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5b00a7b-ee", "ovs_interfaceid": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.403 248514 WARNING nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.412 248514 DEBUG nova.virt.libvirt.host [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.413 248514 DEBUG nova.virt.libvirt.host [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.417 248514 DEBUG nova.virt.libvirt.host [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.418 248514 DEBUG nova.virt.libvirt.host [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.418 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.419 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.419 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.419 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.420 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.420 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.420 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.420 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.421 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.421 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.421 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.422 248514 DEBUG nova.virt.hardware [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.425 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.503 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquiring lock "84309355-d1f4-4a59-9f19-b212232e2428" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.504 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.532 248514 DEBUG nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.633 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.634 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.640 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.640 248514 INFO nova.compute.claims [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:25:18 np0005558241 nova_compute[248510]: 2025-12-13 08:25:18.794 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:25:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1341543332' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.028 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.053 248514 DEBUG nova.storage.rbd_utils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 2c76f149-c467-4e59-afee-77940e515f8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.058 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 88 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:25:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:25:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3886910405' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.351 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.359 248514 DEBUG nova.compute.provider_tree [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.389 248514 DEBUG nova.scheduler.client.report [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.419 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.420 248514 DEBUG nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.467 248514 DEBUG nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.467 248514 DEBUG nova.network.neutron [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.490 248514 INFO nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.521 248514 DEBUG nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:25:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:25:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2823282569' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.621 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.622 248514 DEBUG nova.virt.libvirt.vif [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:25:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1811238498',display_name='tempest-ImagesTestJSON-server-1811238498',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1811238498',id=36,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-2ad42grg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:25:08Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=2c76f149-c467-4e59-afee-77940e515f8c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "address": "fa:16:3e:e8:0b:3c", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5b00a7b-ee", "ovs_interfaceid": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.622 248514 DEBUG nova.network.os_vif_util [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "address": "fa:16:3e:e8:0b:3c", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5b00a7b-ee", "ovs_interfaceid": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.623 248514 DEBUG nova.network.os_vif_util [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:0b:3c,bridge_name='br-int',has_traffic_filtering=True,id=a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5b00a7b-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.624 248514 DEBUG nova.objects.instance [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c76f149-c467-4e59-afee-77940e515f8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.642 248514 DEBUG nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.644 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.644 248514 INFO nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Creating image(s)#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.669 248514 DEBUG nova.storage.rbd_utils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] rbd image 84309355-d1f4-4a59-9f19-b212232e2428_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.698 248514 DEBUG nova.storage.rbd_utils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] rbd image 84309355-d1f4-4a59-9f19-b212232e2428_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.721 248514 DEBUG nova.storage.rbd_utils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] rbd image 84309355-d1f4-4a59-9f19-b212232e2428_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.724 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.751 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:25:19 np0005558241 nova_compute[248510]:  <uuid>2c76f149-c467-4e59-afee-77940e515f8c</uuid>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:  <name>instance-00000024</name>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <nova:name>tempest-ImagesTestJSON-server-1811238498</nova:name>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:25:18</nova:creationTime>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:25:19 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:        <nova:user uuid="3b988c7ac9354c59aac9a9f41f83c20f">tempest-ImagesTestJSON-1234382421-project-member</nova:user>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:        <nova:project uuid="52e1055963294dbdb16cd95b466cd4d9">tempest-ImagesTestJSON-1234382421</nova:project>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:        <nova:port uuid="a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1">
Dec 13 03:25:19 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <entry name="serial">2c76f149-c467-4e59-afee-77940e515f8c</entry>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <entry name="uuid">2c76f149-c467-4e59-afee-77940e515f8c</entry>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/2c76f149-c467-4e59-afee-77940e515f8c_disk">
Dec 13 03:25:19 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:25:19 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/2c76f149-c467-4e59-afee-77940e515f8c_disk.config">
Dec 13 03:25:19 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:25:19 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:e8:0b:3c"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <target dev="tapa5b00a7b-ee"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/2c76f149-c467-4e59-afee-77940e515f8c/console.log" append="off"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:25:19 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:25:19 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:25:19 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:25:19 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.753 248514 DEBUG nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Preparing to wait for external event network-vif-plugged-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.754 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "2c76f149-c467-4e59-afee-77940e515f8c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.754 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.755 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.756 248514 DEBUG nova.virt.libvirt.vif [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:25:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1811238498',display_name='tempest-ImagesTestJSON-server-1811238498',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1811238498',id=36,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-2ad42grg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-mem
ber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:25:08Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=2c76f149-c467-4e59-afee-77940e515f8c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "address": "fa:16:3e:e8:0b:3c", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5b00a7b-ee", "ovs_interfaceid": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.756 248514 DEBUG nova.network.os_vif_util [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "address": "fa:16:3e:e8:0b:3c", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5b00a7b-ee", "ovs_interfaceid": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.757 248514 DEBUG nova.network.os_vif_util [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:0b:3c,bridge_name='br-int',has_traffic_filtering=True,id=a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5b00a7b-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.758 248514 DEBUG os_vif [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:0b:3c,bridge_name='br-int',has_traffic_filtering=True,id=a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5b00a7b-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.758 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.760 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.761 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.764 248514 DEBUG nova.policy [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ca2c7fce813a4f919271d56491477e18', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '59a0c2f158ce417f80e49cc5eb2db59d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.771 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.777 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.778 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5b00a7b-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.778 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa5b00a7b-ee, col_values=(('external_ids', {'iface-id': 'a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:0b:3c', 'vm-uuid': '2c76f149-c467-4e59-afee-77940e515f8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.780 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:19 np0005558241 NetworkManager[50376]: <info>  [1765614319.7813] manager: (tapa5b00a7b-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.781 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.789 248514 INFO os_vif [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:0b:3c,bridge_name='br-int',has_traffic_filtering=True,id=a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5b00a7b-ee')#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.796 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.797 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.797 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.798 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.822 248514 DEBUG nova.storage.rbd_utils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] rbd image 84309355-d1f4-4a59-9f19-b212232e2428_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.826 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 84309355-d1f4-4a59-9f19-b212232e2428_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.901 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.902 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.902 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No VIF found with MAC fa:16:3e:e8:0b:3c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.902 248514 INFO nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Using config drive#033[00m
Dec 13 03:25:19 np0005558241 nova_compute[248510]: 2025-12-13 08:25:19.929 248514 DEBUG nova.storage.rbd_utils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 2c76f149-c467-4e59-afee-77940e515f8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:25:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.124 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 84309355-d1f4-4a59-9f19-b212232e2428_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.193 248514 DEBUG nova.storage.rbd_utils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] resizing rbd image 84309355-d1f4-4a59-9f19-b212232e2428_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.272 248514 DEBUG nova.objects.instance [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lazy-loading 'migration_context' on Instance uuid 84309355-d1f4-4a59-9f19-b212232e2428 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.288 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.288 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Ensure instance console log exists: /var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.289 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.289 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.289 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.477 248514 INFO nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Creating config drive at /var/lib/nova/instances/2c76f149-c467-4e59-afee-77940e515f8c/disk.config#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.483 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c76f149-c467-4e59-afee-77940e515f8c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzx5ua25_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.621 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c76f149-c467-4e59-afee-77940e515f8c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzx5ua25_" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.646 248514 DEBUG nova.storage.rbd_utils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 2c76f149-c467-4e59-afee-77940e515f8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.651 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c76f149-c467-4e59-afee-77940e515f8c/disk.config 2c76f149-c467-4e59-afee-77940e515f8c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.680 248514 DEBUG nova.network.neutron [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Successfully created port: 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.792 248514 DEBUG oslo_concurrency.processutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c76f149-c467-4e59-afee-77940e515f8c/disk.config 2c76f149-c467-4e59-afee-77940e515f8c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.793 248514 INFO nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Deleting local config drive /var/lib/nova/instances/2c76f149-c467-4e59-afee-77940e515f8c/disk.config because it was imported into RBD.#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.811 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Acquiring lock "29375ff9-300a-43de-a53d-942e7afbb439" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.812 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.840 248514 DEBUG nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:25:20 np0005558241 kernel: tapa5b00a7b-ee: entered promiscuous mode
Dec 13 03:25:20 np0005558241 NetworkManager[50376]: <info>  [1765614320.8537] manager: (tapa5b00a7b-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/145)
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:25:20 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:20Z|00322|binding|INFO|Claiming lport a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 for this chassis.
Dec 13 03:25:20 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:20Z|00323|binding|INFO|a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1: Claiming fa:16:3e:e8:0b:3c 10.100.0.13
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.854 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.857 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003510625938221278 of space, bias 1.0, pg target 0.10531877814663833 quantized to 32 (current 32)
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006669903900688993 of space, bias 1.0, pg target 0.2000971170206698 quantized to 32 (current 32)
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.168237698949889e-07 of space, bias 4.0, pg target 0.0008601885238739866 quantized to 16 (current 32)
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:25:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:25:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.867 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:0b:3c 10.100.0.13'], port_security=['fa:16:3e:e8:0b:3c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2c76f149-c467-4e59-afee-77940e515f8c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52e1055963294dbdb16cd95b466cd4d9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '269e7310-e268-4e11-a2e8-e4c22118637e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99e34124-e040-4994-aa81-ced5b49550b0, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:25:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.868 158419 INFO neutron.agent.ovn.metadata.agent [-] Port a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 in datapath 87bd91d0-eead-49b6-8f92-f8d0dba555dc bound to our chassis#033[00m
Dec 13 03:25:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.869 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87bd91d0-eead-49b6-8f92-f8d0dba555dc#033[00m
Dec 13 03:25:20 np0005558241 systemd-udevd[289676]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:25:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.884 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[478ad446-a8cb-4471-87b1-7093b7684388]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.886 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap87bd91d0-e1 in ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:25:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.888 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap87bd91d0-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:25:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.888 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[499c498c-cb77-4494-b83d-a87b34f4234c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.889 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f941c4-3a47-470f-a407-f45c1cf41d4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:20 np0005558241 systemd-machined[210538]: New machine qemu-42-instance-00000024.
Dec 13 03:25:20 np0005558241 NetworkManager[50376]: <info>  [1765614320.9015] device (tapa5b00a7b-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:25:20 np0005558241 NetworkManager[50376]: <info>  [1765614320.9023] device (tapa5b00a7b-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:25:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.903 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7e20b3-749c-4345-9c0f-ebed75a3176c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.921 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.924 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a34b1eb6-082d-41e4-a719-7ecaa01f64d1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:20 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:20Z|00324|binding|INFO|Setting lport a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 ovn-installed in OVS
Dec 13 03:25:20 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:20Z|00325|binding|INFO|Setting lport a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 up in Southbound
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.927 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:20 np0005558241 systemd[1]: Started Virtual Machine qemu-42-instance-00000024.
Dec 13 03:25:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.958 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1e4b9b-fae8-4e85-b017-9d472aa58dcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.960 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.960 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.963 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[65764f27-88cd-42c4-a944-4fb69f102ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:20 np0005558241 NetworkManager[50376]: <info>  [1765614320.9647] manager: (tap87bd91d0-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/146)
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.974 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:25:20 np0005558241 nova_compute[248510]: 2025-12-13 08:25:20.974 248514 INFO nova.compute.claims [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:25:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:20.996 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3b3603b6-1ae6-4dfd-9f73-58ef4bec3db6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.001 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc6d89e-4ee3-475e-8f7b-4a6cd116587f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:21 np0005558241 NetworkManager[50376]: <info>  [1765614321.0296] device (tap87bd91d0-e0): carrier: link connected
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.034 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7be3fde5-c86e-433c-b56f-7da4451019f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.058 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4c986e23-4d71-449c-ba2c-b55963a95a80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87bd91d0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:ec:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689824, 'reachable_time': 42343, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289709, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.076 248514 DEBUG nova.network.neutron [req-da1701be-105d-413e-b86c-f77d45dd3e02 req-729d3877-09c9-4f96-9a5f-4c60112da9de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Updated VIF entry in instance network info cache for port a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.078 248514 DEBUG nova.network.neutron [req-da1701be-105d-413e-b86c-f77d45dd3e02 req-729d3877-09c9-4f96-9a5f-4c60112da9de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Updating instance_info_cache with network_info: [{"id": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "address": "fa:16:3e:e8:0b:3c", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5b00a7b-ee", "ovs_interfaceid": "a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.077 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[85cebd47-e297-4843-b1a9-3382385eb1f7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:ec7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689824, 'tstamp': 689824}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289710, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.096 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f6380ea-8742-4982-9569-68840c4f2c28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87bd91d0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:ec:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689824, 'reachable_time': 42343, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289711, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.114 248514 DEBUG oslo_concurrency.lockutils [req-da1701be-105d-413e-b86c-f77d45dd3e02 req-729d3877-09c9-4f96-9a5f-4c60112da9de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2c76f149-c467-4e59-afee-77940e515f8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:25:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1805: 321 pgs: 321 active+clean; 88 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.139 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d737cf2-25ab-4a57-9ddc-0a3194d6fa10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.209 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5884599b-2f97-483a-9daa-f70efe33feb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.211 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.211 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87bd91d0-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.212 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.212 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87bd91d0-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:21 np0005558241 NetworkManager[50376]: <info>  [1765614321.2153] manager: (tap87bd91d0-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Dec 13 03:25:21 np0005558241 kernel: tap87bd91d0-e0: entered promiscuous mode
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.218 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87bd91d0-e0, col_values=(('external_ids', {'iface-id': '3e59c36d-12c7-4d92-aa2f-8b6a795f3f98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:21 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:21Z|00326|binding|INFO|Releasing lport 3e59c36d-12c7-4d92-aa2f-8b6a795f3f98 from this chassis (sb_readonly=0)
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.239 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.239 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.240 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[69bc8d90-39ad-4ef6-ac11-b2bf9d8fe658]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.241 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:25:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:21.243 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'env', 'PROCESS_TAG=haproxy-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/87bd91d0-eead-49b6-8f92-f8d0dba555dc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.459 248514 DEBUG nova.compute.manager [req-bda4f866-cc03-4112-866a-c70a296b04a9 req-bd99bb70-fc80-48a4-9bbb-64292fb12d8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Received event network-vif-plugged-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.460 248514 DEBUG oslo_concurrency.lockutils [req-bda4f866-cc03-4112-866a-c70a296b04a9 req-bd99bb70-fc80-48a4-9bbb-64292fb12d8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2c76f149-c467-4e59-afee-77940e515f8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.461 248514 DEBUG oslo_concurrency.lockutils [req-bda4f866-cc03-4112-866a-c70a296b04a9 req-bd99bb70-fc80-48a4-9bbb-64292fb12d8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.461 248514 DEBUG oslo_concurrency.lockutils [req-bda4f866-cc03-4112-866a-c70a296b04a9 req-bd99bb70-fc80-48a4-9bbb-64292fb12d8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.461 248514 DEBUG nova.compute.manager [req-bda4f866-cc03-4112-866a-c70a296b04a9 req-bd99bb70-fc80-48a4-9bbb-64292fb12d8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Processing event network-vif-plugged-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:25:21 np0005558241 podman[289779]: 2025-12-13 08:25:21.657747462 +0000 UTC m=+0.053520852 container create 4e109367b11b4b127479d0ba03860f65c315be0b9ec92f75547dfeb8963349a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 03:25:21 np0005558241 systemd[1]: Started libpod-conmon-4e109367b11b4b127479d0ba03860f65c315be0b9ec92f75547dfeb8963349a4.scope.
Dec 13 03:25:21 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:25:21 np0005558241 podman[289779]: 2025-12-13 08:25:21.627594118 +0000 UTC m=+0.023367518 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:25:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27ffc37c60de5edd5330ae1bfe43f83df8db2401855afba2c6dc6e244c01b6f7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.731 248514 DEBUG nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.732 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614321.731839, 2c76f149-c467-4e59-afee-77940e515f8c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.733 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] VM Started (Lifecycle Event)#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.741 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.745 248514 INFO nova.virt.libvirt.driver [-] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Instance spawned successfully.#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.746 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:25:21 np0005558241 podman[289779]: 2025-12-13 08:25:21.749989548 +0000 UTC m=+0.145762928 container init 4e109367b11b4b127479d0ba03860f65c315be0b9ec92f75547dfeb8963349a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 03:25:21 np0005558241 podman[289779]: 2025-12-13 08:25:21.755994976 +0000 UTC m=+0.151768346 container start 4e109367b11b4b127479d0ba03860f65c315be0b9ec92f75547dfeb8963349a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.768 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:25:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:25:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2126761090' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.779 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.784 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.784 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.785 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.785 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.785 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.786 248514 DEBUG nova.virt.libvirt.driver [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:25:21 np0005558241 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[289819]: [NOTICE]   (289823) : New worker (289827) forked
Dec 13 03:25:21 np0005558241 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[289819]: [NOTICE]   (289823) : Loading success.
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.813 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.821 248514 DEBUG nova.compute.provider_tree [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.826 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.826 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614321.7319286, 2c76f149-c467-4e59-afee-77940e515f8c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.826 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.848 248514 DEBUG nova.scheduler.client.report [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.853 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.856 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614321.7413564, 2c76f149-c467-4e59-afee-77940e515f8c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.856 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] VM Resumed (Lifecycle Event)
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.861 248514 INFO nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Took 12.84 seconds to spawn the instance on the hypervisor.
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.862 248514 DEBUG nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.889 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.890 248514 DEBUG nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.896 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.900 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.960 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.964 248514 INFO nova.compute.manager [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Took 14.48 seconds to build instance.
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.972 248514 DEBUG nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.973 248514 DEBUG nova.network.neutron [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:25:21 np0005558241 nova_compute[248510]: 2025-12-13 08:25:21.992 248514 DEBUG oslo_concurrency.lockutils [None req-abe3e892-f61d-4a2b-b7d7-b00fc90dff42 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.002 248514 INFO nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.020 248514 DEBUG nova.network.neutron [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Successfully updated port: 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.024 248514 DEBUG nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.036 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquiring lock "refresh_cache-84309355-d1f4-4a59-9f19-b212232e2428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.037 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquired lock "refresh_cache-84309355-d1f4-4a59-9f19-b212232e2428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.037 248514 DEBUG nova.network.neutron [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.283 248514 DEBUG nova.policy [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f6fadd0581d041428cc88161ae6e6e02', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e370bdecda394d32b21d4eee440a61fa', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.306 248514 DEBUG nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.307 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.308 248514 INFO nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Creating image(s)
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.337 248514 DEBUG nova.storage.rbd_utils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] rbd image 29375ff9-300a-43de-a53d-942e7afbb439_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.367 248514 DEBUG nova.storage.rbd_utils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] rbd image 29375ff9-300a-43de-a53d-942e7afbb439_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.392 248514 DEBUG nova.storage.rbd_utils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] rbd image 29375ff9-300a-43de-a53d-942e7afbb439_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.399 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.431 248514 DEBUG nova.network.neutron [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.479 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.480 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.481 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.482 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.505 248514 DEBUG nova.storage.rbd_utils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] rbd image 29375ff9-300a-43de-a53d-942e7afbb439_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.509 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 29375ff9-300a-43de-a53d-942e7afbb439_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.907 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 29375ff9-300a-43de-a53d-942e7afbb439_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:25:22 np0005558241 nova_compute[248510]: 2025-12-13 08:25:22.985 248514 DEBUG nova.storage.rbd_utils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] resizing rbd image 29375ff9-300a-43de-a53d-942e7afbb439_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.082 248514 DEBUG nova.objects.instance [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lazy-loading 'migration_context' on Instance uuid 29375ff9-300a-43de-a53d-942e7afbb439 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.109 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.110 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Ensure instance console log exists: /var/lib/nova/instances/29375ff9-300a-43de-a53d-942e7afbb439/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.110 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.110 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.111 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:25:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1806: 321 pgs: 321 active+clean; 99 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 797 KiB/s rd, 2.3 MiB/s wr, 54 op/s
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.386 248514 DEBUG nova.network.neutron [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Updating instance_info_cache with network_info: [{"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.423 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Releasing lock "refresh_cache-84309355-d1f4-4a59-9f19-b212232e2428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.424 248514 DEBUG nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Instance network_info: |[{"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.427 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Start _get_guest_xml network_info=[{"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.432 248514 WARNING nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.438 248514 DEBUG nova.virt.libvirt.host [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.439 248514 DEBUG nova.virt.libvirt.host [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.442 248514 DEBUG nova.virt.libvirt.host [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.443 248514 DEBUG nova.virt.libvirt.host [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.444 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.444 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.445 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.445 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.446 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.446 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.446 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.447 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.447 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.448 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.448 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.448 248514 DEBUG nova.virt.hardware [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.452 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.934 248514 DEBUG nova.compute.manager [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Received event network-vif-plugged-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.935 248514 DEBUG oslo_concurrency.lockutils [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2c76f149-c467-4e59-afee-77940e515f8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.936 248514 DEBUG oslo_concurrency.lockutils [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.936 248514 DEBUG oslo_concurrency.lockutils [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.936 248514 DEBUG nova.compute.manager [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] No waiting events found dispatching network-vif-plugged-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.937 248514 WARNING nova.compute.manager [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Received unexpected event network-vif-plugged-a5b00a7b-ee17-47f0-a57f-0b1b30d27eb1 for instance with vm_state active and task_state None.
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.937 248514 DEBUG nova.compute.manager [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Received event network-changed-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.937 248514 DEBUG nova.compute.manager [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Refreshing instance network info cache due to event network-changed-2ad09377-22a9-4666-8a3b-f24c9cba7ac9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.938 248514 DEBUG oslo_concurrency.lockutils [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-84309355-d1f4-4a59-9f19-b212232e2428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.938 248514 DEBUG oslo_concurrency.lockutils [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-84309355-d1f4-4a59-9f19-b212232e2428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:25:23 np0005558241 nova_compute[248510]: 2025-12-13 08:25:23.938 248514 DEBUG nova.network.neutron [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Refreshing network info cache for port 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:25:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:25:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1594273136' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.087 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.111 248514 DEBUG nova.storage.rbd_utils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] rbd image 84309355-d1f4-4a59-9f19-b212232e2428_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.116 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:25:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2589049922' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.652 248514 INFO nova.compute.manager [None req-43aaf044-9b00-476d-8390-fcd36e56b111 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Pausing#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.654 248514 DEBUG nova.objects.instance [None req-43aaf044-9b00-476d-8390-fcd36e56b111 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'flavor' on Instance uuid 2c76f149-c467-4e59-afee-77940e515f8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.656 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.657 248514 DEBUG nova.virt.libvirt.vif [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1068678446',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1068678446',id=37,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='59a0c2f158ce417f80e49cc5eb2db59d',ramdisk_id='',reservation_id='r-xk1seud9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-328435411',owner_user_name='tempest-InstanceActionsV221TestJSON-328435411-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:25:19Z,user_data=None,user_id='ca2c7fce813a4f919271d56491477e18',uuid=84309355-d1f4-4a59-9f19-b212232e2428,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.658 248514 DEBUG nova.network.os_vif_util [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Converting VIF {"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.659 248514 DEBUG nova.network.os_vif_util [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:17:ba,bridge_name='br-int',has_traffic_filtering=True,id=2ad09377-22a9-4666-8a3b-f24c9cba7ac9,network=Network(9f51a29b-e726-472d-a3c5-2b60fcdbe425),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad09377-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.660 248514 DEBUG nova.objects.instance [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lazy-loading 'pci_devices' on Instance uuid 84309355-d1f4-4a59-9f19-b212232e2428 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.693 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:25:24 np0005558241 nova_compute[248510]:  <uuid>84309355-d1f4-4a59-9f19-b212232e2428</uuid>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:  <name>instance-00000025</name>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <nova:name>tempest-InstanceActionsV221TestJSON-server-1068678446</nova:name>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:25:23</nova:creationTime>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:25:24 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:        <nova:user uuid="ca2c7fce813a4f919271d56491477e18">tempest-InstanceActionsV221TestJSON-328435411-project-member</nova:user>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:        <nova:project uuid="59a0c2f158ce417f80e49cc5eb2db59d">tempest-InstanceActionsV221TestJSON-328435411</nova:project>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:        <nova:port uuid="2ad09377-22a9-4666-8a3b-f24c9cba7ac9">
Dec 13 03:25:24 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <entry name="serial">84309355-d1f4-4a59-9f19-b212232e2428</entry>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <entry name="uuid">84309355-d1f4-4a59-9f19-b212232e2428</entry>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/84309355-d1f4-4a59-9f19-b212232e2428_disk">
Dec 13 03:25:24 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:25:24 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/84309355-d1f4-4a59-9f19-b212232e2428_disk.config">
Dec 13 03:25:24 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:25:24 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:38:17:ba"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <target dev="tap2ad09377-22"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428/console.log" append="off"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:25:24 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:25:24 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:25:24 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:25:24 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.698 248514 DEBUG nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Preparing to wait for external event network-vif-plugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.699 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquiring lock "84309355-d1f4-4a59-9f19-b212232e2428-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.699 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.699 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.700 248514 DEBUG nova.virt.libvirt.vif [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1068678446',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1068678446',id=37,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='59a0c2f158ce417f80e49cc5eb2db59d',ramdisk_id='',reservation_id='r-xk1seud9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-328435411',owner_user_name='tempest-InstanceActionsV221TestJSON-328435411-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:25:19Z,user_data=None,user_id='ca2c7fce813a4f919271d56491477e18',uuid=84309355-d1f4-4a59-9f19-b212232e2428,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.701 248514 DEBUG nova.network.os_vif_util [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Converting VIF {"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.702 248514 DEBUG nova.network.os_vif_util [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:17:ba,bridge_name='br-int',has_traffic_filtering=True,id=2ad09377-22a9-4666-8a3b-f24c9cba7ac9,network=Network(9f51a29b-e726-472d-a3c5-2b60fcdbe425),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad09377-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.703 248514 DEBUG os_vif [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:17:ba,bridge_name='br-int',has_traffic_filtering=True,id=2ad09377-22a9-4666-8a3b-f24c9cba7ac9,network=Network(9f51a29b-e726-472d-a3c5-2b60fcdbe425),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad09377-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.704 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.705 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.706 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.711 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.713 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ad09377-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.714 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2ad09377-22, col_values=(('external_ids', {'iface-id': '2ad09377-22a9-4666-8a3b-f24c9cba7ac9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:17:ba', 'vm-uuid': '84309355-d1f4-4a59-9f19-b212232e2428'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.716 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614324.7155876, 2c76f149-c467-4e59-afee-77940e515f8c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.717 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:25:24 np0005558241 NetworkManager[50376]: <info>  [1765614324.7172] manager: (tap2ad09377-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.719 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.723 248514 DEBUG nova.compute.manager [None req-43aaf044-9b00-476d-8390-fcd36e56b111 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.724 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.724 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.725 248514 INFO os_vif [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:17:ba,bridge_name='br-int',has_traffic_filtering=True,id=2ad09377-22a9-4666-8a3b-f24c9cba7ac9,network=Network(9f51a29b-e726-472d-a3c5-2b60fcdbe425),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad09377-22')#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.748 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.752 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.807 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.827 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.828 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.828 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] No VIF found with MAC fa:16:3e:38:17:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.829 248514 INFO nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Using config drive#033[00m
Dec 13 03:25:24 np0005558241 nova_compute[248510]: 2025-12-13 08:25:24.856 248514 DEBUG nova.storage.rbd_utils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] rbd image 84309355-d1f4-4a59-9f19-b212232e2428_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:25:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:25:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1807: 321 pgs: 321 active+clean; 171 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 135 op/s
Dec 13 03:25:25 np0005558241 nova_compute[248510]: 2025-12-13 08:25:25.276 248514 DEBUG nova.network.neutron [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Successfully created port: 66179e56-6ff7-4353-872a-ee206fe0b050 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:25:25 np0005558241 nova_compute[248510]: 2025-12-13 08:25:25.820 248514 DEBUG nova.network.neutron [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Updated VIF entry in instance network info cache for port 2ad09377-22a9-4666-8a3b-f24c9cba7ac9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:25:25 np0005558241 nova_compute[248510]: 2025-12-13 08:25:25.820 248514 DEBUG nova.network.neutron [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Updating instance_info_cache with network_info: [{"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:25:25 np0005558241 nova_compute[248510]: 2025-12-13 08:25:25.842 248514 DEBUG oslo_concurrency.lockutils [req-02933a78-7ff9-42e5-be0e-e367894f3974 req-24e6144e-814c-42c6-8575-35983d6c382e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-84309355-d1f4-4a59-9f19-b212232e2428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:25:25 np0005558241 nova_compute[248510]: 2025-12-13 08:25:25.850 248514 INFO nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Creating config drive at /var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428/disk.config#033[00m
Dec 13 03:25:25 np0005558241 nova_compute[248510]: 2025-12-13 08:25:25.855 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkufyj9qh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:25 np0005558241 nova_compute[248510]: 2025-12-13 08:25:25.991 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkufyj9qh" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.019 248514 DEBUG nova.storage.rbd_utils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] rbd image 84309355-d1f4-4a59-9f19-b212232e2428_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.024 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428/disk.config 84309355-d1f4-4a59-9f19-b212232e2428_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.181 248514 DEBUG oslo_concurrency.processutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428/disk.config 84309355-d1f4-4a59-9f19-b212232e2428_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.182 248514 INFO nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Deleting local config drive /var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428/disk.config because it was imported into RBD.#033[00m
Dec 13 03:25:26 np0005558241 kernel: tap2ad09377-22: entered promiscuous mode
Dec 13 03:25:26 np0005558241 NetworkManager[50376]: <info>  [1765614326.2374] manager: (tap2ad09377-22): new Tun device (/org/freedesktop/NetworkManager/Devices/149)
Dec 13 03:25:26 np0005558241 systemd-udevd[290135]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.273 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:26Z|00327|binding|INFO|Claiming lport 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 for this chassis.
Dec 13 03:25:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:26Z|00328|binding|INFO|2ad09377-22a9-4666-8a3b-f24c9cba7ac9: Claiming fa:16:3e:38:17:ba 10.100.0.4
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.279 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:26 np0005558241 NetworkManager[50376]: <info>  [1765614326.2908] device (tap2ad09377-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:25:26 np0005558241 NetworkManager[50376]: <info>  [1765614326.2917] device (tap2ad09377-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.289 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:17:ba 10.100.0.4'], port_security=['fa:16:3e:38:17:ba 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '84309355-d1f4-4a59-9f19-b212232e2428', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f51a29b-e726-472d-a3c5-2b60fcdbe425', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '59a0c2f158ce417f80e49cc5eb2db59d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f0954fb5-841d-4580-afb8-2b944d8752df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bbe5302-d9e0-4808-ac4c-176cf71b9a80, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2ad09377-22a9-4666-8a3b-f24c9cba7ac9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.290 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 in datapath 9f51a29b-e726-472d-a3c5-2b60fcdbe425 bound to our chassis#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.292 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f51a29b-e726-472d-a3c5-2b60fcdbe425#033[00m
Dec 13 03:25:26 np0005558241 systemd-machined[210538]: New machine qemu-43-instance-00000025.
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.307 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[79729fef-4831-4145-a672-0f94c973322d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.308 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f51a29b-e1 in ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.310 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f51a29b-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.310 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[da3f9b3b-cc05-4a22-b28c-95706c6fe6e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.311 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[68e86e1d-074c-4d33-94d7-8f3bf190ce0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.324 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[fe5a4ade-f907-4c44-ab2e-086f05ad274f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:26 np0005558241 systemd[1]: Started Virtual Machine qemu-43-instance-00000025.
Dec 13 03:25:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:26Z|00329|binding|INFO|Setting lport 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 ovn-installed in OVS
Dec 13 03:25:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:26Z|00330|binding|INFO|Setting lport 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 up in Southbound
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.340 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.340 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ce539abb-c18c-4dc5-a390-e802f9e48278]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.374 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[68384225-a606-4306-954b-9a209b1cd536]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:26 np0005558241 systemd-udevd[290139]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:25:26 np0005558241 NetworkManager[50376]: <info>  [1765614326.3837] manager: (tap9f51a29b-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/150)
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.382 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a07518d6-3487-494c-bb0f-ad8d76d07e5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.424 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[aa09c4c0-0571-4d2b-ae54-1896de7e9938]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.429 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8f731663-c76c-4489-b58e-b1f8302e07df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:26 np0005558241 NetworkManager[50376]: <info>  [1765614326.4585] device (tap9f51a29b-e0): carrier: link connected
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.465 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[874b093b-1b4a-445d-98f9-c4777d61b649]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.486 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2a3396fb-f7dd-46ac-bb1d-730d54af1443]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f51a29b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:6d:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690367, 'reachable_time': 20778, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290171, 'error': None, 'target': 'ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.529 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e684cafb-fa55-43e1-a668-3b0dbb1ebe83]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5c:6d00'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 690367, 'tstamp': 690367}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290172, 'error': None, 'target': 'ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.549 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[67b3c2df-abcb-4a73-b3d3-4099b00f72e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f51a29b-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5c:6d:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690367, 'reachable_time': 20778, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290173, 'error': None, 'target': 'ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.593 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[efeaf166-7639-43c2-bc8e-cffcd4bc42a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.626 248514 DEBUG nova.compute.manager [req-35866012-d063-4564-8e6f-9e422c125cee req-a0e984a5-268f-463b-9509-b3c1ac3e2e6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Received event network-vif-plugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.626 248514 DEBUG oslo_concurrency.lockutils [req-35866012-d063-4564-8e6f-9e422c125cee req-a0e984a5-268f-463b-9509-b3c1ac3e2e6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "84309355-d1f4-4a59-9f19-b212232e2428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.628 248514 DEBUG oslo_concurrency.lockutils [req-35866012-d063-4564-8e6f-9e422c125cee req-a0e984a5-268f-463b-9509-b3c1ac3e2e6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.628 248514 DEBUG oslo_concurrency.lockutils [req-35866012-d063-4564-8e6f-9e422c125cee req-a0e984a5-268f-463b-9509-b3c1ac3e2e6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.628 248514 DEBUG nova.compute.manager [req-35866012-d063-4564-8e6f-9e422c125cee req-a0e984a5-268f-463b-9509-b3c1ac3e2e6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Processing event network-vif-plugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.673 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0cb55e-ca73-4981-ae41-044d74af9efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.675 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f51a29b-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.675 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.676 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f51a29b-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.678 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:26 np0005558241 NetworkManager[50376]: <info>  [1765614326.6785] manager: (tap9f51a29b-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Dec 13 03:25:26 np0005558241 kernel: tap9f51a29b-e0: entered promiscuous mode
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.680 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.684 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f51a29b-e0, col_values=(('external_ids', {'iface-id': 'eb9a13a8-7cca-49d7-94a2-673238e93c02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.686 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:26Z|00331|binding|INFO|Releasing lport eb9a13a8-7cca-49d7-94a2-673238e93c02 from this chassis (sb_readonly=0)
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.690 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f51a29b-e726-472d-a3c5-2b60fcdbe425.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f51a29b-e726-472d-a3c5-2b60fcdbe425.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.691 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3a49df17-06c1-4769-a4cb-2aa8cb941bd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.692 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-9f51a29b-e726-472d-a3c5-2b60fcdbe425
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/9f51a29b-e726-472d-a3c5-2b60fcdbe425.pid.haproxy
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 9f51a29b-e726-472d-a3c5-2b60fcdbe425
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:25:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:26.692 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425', 'env', 'PROCESS_TAG=haproxy-9f51a29b-e726-472d-a3c5-2b60fcdbe425', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f51a29b-e726-472d-a3c5-2b60fcdbe425.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.701 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.926 248514 DEBUG nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.927 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614326.925932, 84309355-d1f4-4a59-9f19-b212232e2428 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.927 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] VM Started (Lifecycle Event)#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.932 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.937 248514 INFO nova.virt.libvirt.driver [-] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Instance spawned successfully.#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.937 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.950 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.956 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.984 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.985 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614326.926225, 84309355-d1f4-4a59-9f19-b212232e2428 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.985 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.990 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.991 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.992 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.992 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.993 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:25:26 np0005558241 nova_compute[248510]: 2025-12-13 08:25:26.993 248514 DEBUG nova.virt.libvirt.driver [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:25:27 np0005558241 nova_compute[248510]: 2025-12-13 08:25:27.023 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:25:27 np0005558241 nova_compute[248510]: 2025-12-13 08:25:27.028 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614326.9322202, 84309355-d1f4-4a59-9f19-b212232e2428 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:25:27 np0005558241 nova_compute[248510]: 2025-12-13 08:25:27.028 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:25:27 np0005558241 nova_compute[248510]: 2025-12-13 08:25:27.060 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:25:27 np0005558241 nova_compute[248510]: 2025-12-13 08:25:27.064 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:25:27 np0005558241 nova_compute[248510]: 2025-12-13 08:25:27.076 248514 INFO nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Took 7.43 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:25:27 np0005558241 nova_compute[248510]: 2025-12-13 08:25:27.076 248514 DEBUG nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:25:27 np0005558241 nova_compute[248510]: 2025-12-13 08:25:27.104 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:25:27 np0005558241 podman[290245]: 2025-12-13 08:25:27.105854635 +0000 UTC m=+0.058337700 container create 95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 03:25:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1808: 321 pgs: 321 active+clean; 171 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.1 MiB/s wr, 111 op/s
Dec 13 03:25:27 np0005558241 systemd[1]: Started libpod-conmon-95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b.scope.
Dec 13 03:25:27 np0005558241 nova_compute[248510]: 2025-12-13 08:25:27.163 248514 INFO nova.compute.manager [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Took 8.57 seconds to build instance.#033[00m
Dec 13 03:25:27 np0005558241 podman[290245]: 2025-12-13 08:25:27.074982064 +0000 UTC m=+0.027465129 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:25:27 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:25:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b051f3cea02d979437a5f7518015f0b37be7af8acf55c365f6681705a374d9be/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:25:27 np0005558241 podman[290245]: 2025-12-13 08:25:27.20452821 +0000 UTC m=+0.157011285 container init 95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 13 03:25:27 np0005558241 nova_compute[248510]: 2025-12-13 08:25:27.206 248514 DEBUG oslo_concurrency.lockutils [None req-57be1638-8cf8-4ce3-aaf2-e4625563602b ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:25:27 np0005558241 podman[290245]: 2025-12-13 08:25:27.213214904 +0000 UTC m=+0.165697959 container start 95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 03:25:27 np0005558241 neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425[290261]: [NOTICE]   (290265) : New worker (290267) forked
Dec 13 03:25:27 np0005558241 neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425[290261]: [NOTICE]   (290265) : Loading success.
Dec 13 03:25:28 np0005558241 nova_compute[248510]: 2025-12-13 08:25:28.361 248514 DEBUG nova.network.neutron [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Successfully updated port: 66179e56-6ff7-4353-872a-ee206fe0b050 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:25:28 np0005558241 nova_compute[248510]: 2025-12-13 08:25:28.378 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Acquiring lock "refresh_cache-29375ff9-300a-43de-a53d-942e7afbb439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:25:28 np0005558241 nova_compute[248510]: 2025-12-13 08:25:28.378 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Acquired lock "refresh_cache-29375ff9-300a-43de-a53d-942e7afbb439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:25:28 np0005558241 nova_compute[248510]: 2025-12-13 08:25:28.378 248514 DEBUG nova.network.neutron [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:25:28 np0005558241 nova_compute[248510]: 2025-12-13 08:25:28.673 248514 DEBUG nova.network.neutron [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:25:28 np0005558241 nova_compute[248510]: 2025-12-13 08:25:28.711 248514 DEBUG nova.compute.manager [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:25:28 np0005558241 nova_compute[248510]: 2025-12-13 08:25:28.750 248514 DEBUG nova.compute.manager [req-a1ebe4a6-1cbb-4c46-b034-19f1b8e5b5d2 req-7879f7e8-4d76-471b-b505-5c867429a4ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Received event network-changed-66179e56-6ff7-4353-872a-ee206fe0b050 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:25:28 np0005558241 nova_compute[248510]: 2025-12-13 08:25:28.751 248514 DEBUG nova.compute.manager [req-a1ebe4a6-1cbb-4c46-b034-19f1b8e5b5d2 req-7879f7e8-4d76-471b-b505-5c867429a4ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Refreshing instance network info cache due to event network-changed-66179e56-6ff7-4353-872a-ee206fe0b050. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:25:28 np0005558241 nova_compute[248510]: 2025-12-13 08:25:28.751 248514 DEBUG oslo_concurrency.lockutils [req-a1ebe4a6-1cbb-4c46-b034-19f1b8e5b5d2 req-7879f7e8-4d76-471b-b505-5c867429a4ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-29375ff9-300a-43de-a53d-942e7afbb439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:25:28 np0005558241 nova_compute[248510]: 2025-12-13 08:25:28.792 248514 INFO nova.compute.manager [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] instance snapshotting#033[00m
Dec 13 03:25:28 np0005558241 nova_compute[248510]: 2025-12-13 08:25:28.793 248514 WARNING nova.compute.manager [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] trying to snapshot a non-running instance: (state: 3 expected: 1)#033[00m
Dec 13 03:25:28 np0005558241 nova_compute[248510]: 2025-12-13 08:25:28.914 248514 DEBUG nova.compute.manager [req-131e7279-1a0a-4046-97b7-546e080ad1c8 req-7e626969-d7a1-4a2c-b0b0-5e93f4ab9f78 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Received event network-vif-plugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:25:28 np0005558241 nova_compute[248510]: 2025-12-13 08:25:28.914 248514 DEBUG oslo_concurrency.lockutils [req-131e7279-1a0a-4046-97b7-546e080ad1c8 req-7e626969-d7a1-4a2c-b0b0-5e93f4ab9f78 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "84309355-d1f4-4a59-9f19-b212232e2428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:28 np0005558241 nova_compute[248510]: 2025-12-13 08:25:28.915 248514 DEBUG oslo_concurrency.lockutils [req-131e7279-1a0a-4046-97b7-546e080ad1c8 req-7e626969-d7a1-4a2c-b0b0-5e93f4ab9f78 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:28 np0005558241 nova_compute[248510]: 2025-12-13 08:25:28.915 248514 DEBUG oslo_concurrency.lockutils [req-131e7279-1a0a-4046-97b7-546e080ad1c8 req-7e626969-d7a1-4a2c-b0b0-5e93f4ab9f78 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:25:28 np0005558241 nova_compute[248510]: 2025-12-13 08:25:28.915 248514 DEBUG nova.compute.manager [req-131e7279-1a0a-4046-97b7-546e080ad1c8 req-7e626969-d7a1-4a2c-b0b0-5e93f4ab9f78 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] No waiting events found dispatching network-vif-plugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:25:28 np0005558241 nova_compute[248510]: 2025-12-13 08:25:28.915 248514 WARNING nova.compute.manager [req-131e7279-1a0a-4046-97b7-546e080ad1c8 req-7e626969-d7a1-4a2c-b0b0-5e93f4ab9f78 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Received unexpected event network-vif-plugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.113 248514 DEBUG oslo_concurrency.lockutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquiring lock "84309355-d1f4-4a59-9f19-b212232e2428" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.115 248514 DEBUG oslo_concurrency.lockutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.115 248514 DEBUG oslo_concurrency.lockutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquiring lock "84309355-d1f4-4a59-9f19-b212232e2428-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.116 248514 DEBUG oslo_concurrency.lockutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.116 248514 DEBUG oslo_concurrency.lockutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.117 248514 INFO nova.compute.manager [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Terminating instance#033[00m
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.118 248514 DEBUG nova.compute.manager [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:25:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1809: 321 pgs: 321 active+clean; 181 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 195 op/s
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.194 248514 INFO nova.virt.libvirt.driver [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Beginning live snapshot process#033[00m
Dec 13 03:25:29 np0005558241 kernel: tap2ad09377-22 (unregistering): left promiscuous mode
Dec 13 03:25:29 np0005558241 NetworkManager[50376]: <info>  [1765614329.2927] device (tap2ad09377-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:25:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:29Z|00332|binding|INFO|Releasing lport 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 from this chassis (sb_readonly=0)
Dec 13 03:25:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:29Z|00333|binding|INFO|Setting lport 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 down in Southbound
Dec 13 03:25:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:29Z|00334|binding|INFO|Removing iface tap2ad09377-22 ovn-installed in OVS
Dec 13 03:25:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:29.316 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:17:ba 10.100.0.4'], port_security=['fa:16:3e:38:17:ba 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '84309355-d1f4-4a59-9f19-b212232e2428', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f51a29b-e726-472d-a3c5-2b60fcdbe425', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '59a0c2f158ce417f80e49cc5eb2db59d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f0954fb5-841d-4580-afb8-2b944d8752df', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bbe5302-d9e0-4808-ac4c-176cf71b9a80, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2ad09377-22a9-4666-8a3b-f24c9cba7ac9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:25:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:29.318 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2ad09377-22a9-4666-8a3b-f24c9cba7ac9 in datapath 9f51a29b-e726-472d-a3c5-2b60fcdbe425 unbound from our chassis#033[00m
Dec 13 03:25:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:29.320 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f51a29b-e726-472d-a3c5-2b60fcdbe425, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:25:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:29.321 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3f3fb91a-d10a-4229-b66e-6e3413e549aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:29.322 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425 namespace which is not needed anymore#033[00m
Dec 13 03:25:29 np0005558241 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000025.scope: Deactivated successfully.
Dec 13 03:25:29 np0005558241 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000025.scope: Consumed 2.854s CPU time.
Dec 13 03:25:29 np0005558241 systemd-machined[210538]: Machine qemu-43-instance-00000025 terminated.
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.371 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.380 248514 DEBUG nova.virt.libvirt.imagebackend [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.569 248514 INFO nova.virt.libvirt.driver [-] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Instance destroyed successfully.#033[00m
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.570 248514 DEBUG nova.objects.instance [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lazy-loading 'resources' on Instance uuid 84309355-d1f4-4a59-9f19-b212232e2428 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:25:29 np0005558241 neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425[290261]: [NOTICE]   (290265) : haproxy version is 2.8.14-c23fe91
Dec 13 03:25:29 np0005558241 neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425[290261]: [NOTICE]   (290265) : path to executable is /usr/sbin/haproxy
Dec 13 03:25:29 np0005558241 neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425[290261]: [WARNING]  (290265) : Exiting Master process...
Dec 13 03:25:29 np0005558241 neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425[290261]: [ALERT]    (290265) : Current worker (290267) exited with code 143 (Terminated)
Dec 13 03:25:29 np0005558241 neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425[290261]: [WARNING]  (290265) : All workers exited. Exiting... (0)
Dec 13 03:25:29 np0005558241 systemd[1]: libpod-95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b.scope: Deactivated successfully.
Dec 13 03:25:29 np0005558241 podman[290332]: 2025-12-13 08:25:29.652851648 +0000 UTC m=+0.227508033 container died 95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.761 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.774 248514 DEBUG nova.virt.libvirt.vif [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:25:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1068678446',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1068678446',id=37,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:25:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='59a0c2f158ce417f80e49cc5eb2db59d',ramdisk_id='',reservation_id='r-xk1seud9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsV221TestJSON-328435411',owner_user_name='tempest-InstanceActionsV221TestJSON-328435411-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:25:27Z,user_data=None,user_id='ca2c7fce813a4f919271d56491477e18',uuid=84309355-d1f4-4a59-9f19-b212232e2428,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.775 248514 DEBUG nova.network.os_vif_util [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Converting VIF {"id": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "address": "fa:16:3e:38:17:ba", "network": {"id": "9f51a29b-e726-472d-a3c5-2b60fcdbe425", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1982994624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "59a0c2f158ce417f80e49cc5eb2db59d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ad09377-22", "ovs_interfaceid": "2ad09377-22a9-4666-8a3b-f24c9cba7ac9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.776 248514 DEBUG nova.network.os_vif_util [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:17:ba,bridge_name='br-int',has_traffic_filtering=True,id=2ad09377-22a9-4666-8a3b-f24c9cba7ac9,network=Network(9f51a29b-e726-472d-a3c5-2b60fcdbe425),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad09377-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.777 248514 DEBUG os_vif [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:17:ba,bridge_name='br-int',has_traffic_filtering=True,id=2ad09377-22a9-4666-8a3b-f24c9cba7ac9,network=Network(9f51a29b-e726-472d-a3c5-2b60fcdbe425),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad09377-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.780 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.782 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ad09377-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.785 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.786 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:25:29 np0005558241 nova_compute[248510]: 2025-12-13 08:25:29.788 248514 INFO os_vif [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:17:ba,bridge_name='br-int',has_traffic_filtering=True,id=2ad09377-22a9-4666-8a3b-f24c9cba7ac9,network=Network(9f51a29b-e726-472d-a3c5-2b60fcdbe425),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ad09377-22')#033[00m
Dec 13 03:25:30 np0005558241 nova_compute[248510]: 2025-12-13 08:25:30.033 248514 DEBUG nova.storage.rbd_utils [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] creating snapshot(5e9462edf74042da85c0ddaea983ddb2) on rbd image(2c76f149-c467-4e59-afee-77940e515f8c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:25:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:25:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b-userdata-shm.mount: Deactivated successfully.
Dec 13 03:25:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b051f3cea02d979437a5f7518015f0b37be7af8acf55c365f6681705a374d9be-merged.mount: Deactivated successfully.
Dec 13 03:25:30 np0005558241 podman[290332]: 2025-12-13 08:25:30.494786212 +0000 UTC m=+1.069442577 container cleanup 95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:25:30 np0005558241 systemd[1]: libpod-conmon-95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b.scope: Deactivated successfully.
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.047 248514 DEBUG nova.compute.manager [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Received event network-vif-unplugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.048 248514 DEBUG oslo_concurrency.lockutils [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "84309355-d1f4-4a59-9f19-b212232e2428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.049 248514 DEBUG oslo_concurrency.lockutils [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.049 248514 DEBUG oslo_concurrency.lockutils [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.049 248514 DEBUG nova.compute.manager [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] No waiting events found dispatching network-vif-unplugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.049 248514 DEBUG nova.compute.manager [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Received event network-vif-unplugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.050 248514 DEBUG nova.compute.manager [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Received event network-vif-plugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.050 248514 DEBUG oslo_concurrency.lockutils [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "84309355-d1f4-4a59-9f19-b212232e2428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.050 248514 DEBUG oslo_concurrency.lockutils [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.050 248514 DEBUG oslo_concurrency.lockutils [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.050 248514 DEBUG nova.compute.manager [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] No waiting events found dispatching network-vif-plugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.051 248514 WARNING nova.compute.manager [req-1365c65d-f38e-46e3-a3d9-90e3bbcf4c74 req-0e829457-2202-43e4-9b42-ddb12505bc97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Received unexpected event network-vif-plugged-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:25:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1810: 321 pgs: 321 active+clean; 181 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 195 op/s
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.204 248514 DEBUG nova.network.neutron [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Updating instance_info_cache with network_info: [{"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.230 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Releasing lock "refresh_cache-29375ff9-300a-43de-a53d-942e7afbb439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.231 248514 DEBUG nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Instance network_info: |[{"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.231 248514 DEBUG oslo_concurrency.lockutils [req-a1ebe4a6-1cbb-4c46-b034-19f1b8e5b5d2 req-7879f7e8-4d76-471b-b505-5c867429a4ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-29375ff9-300a-43de-a53d-942e7afbb439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.232 248514 DEBUG nova.network.neutron [req-a1ebe4a6-1cbb-4c46-b034-19f1b8e5b5d2 req-7879f7e8-4d76-471b-b505-5c867429a4ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Refreshing network info cache for port 66179e56-6ff7-4353-872a-ee206fe0b050 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.235 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Start _get_guest_xml network_info=[{"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.240 248514 WARNING nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.246 248514 DEBUG nova.virt.libvirt.host [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.247 248514 DEBUG nova.virt.libvirt.host [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.253 248514 DEBUG nova.virt.libvirt.host [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.254 248514 DEBUG nova.virt.libvirt.host [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.254 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.254 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.255 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.255 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.255 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.255 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.255 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.256 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.256 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.256 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.256 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.257 248514 DEBUG nova.virt.hardware [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.260 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:31 np0005558241 podman[290409]: 2025-12-13 08:25:31.292038093 +0000 UTC m=+0.769451746 container remove 95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 03:25:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:31.301 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f961c2-35dc-4c67-b66c-19bd9a9e0c15]: (4, ('Sat Dec 13 08:25:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425 (95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b)\n95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b\nSat Dec 13 08:25:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425 (95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b)\n95421b3f3ee6693a488576beb5f54f5d2894f97797130fe811ae986b72230c8b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:31.304 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1a802e16-1511-488f-913b-4057ab6d4fe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:31.306 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f51a29b-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:31 np0005558241 kernel: tap9f51a29b-e0: left promiscuous mode
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.310 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.324 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:31.328 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[21c60805-5157-46b5-9982-382c4af15304]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:31.348 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[83f6c95f-d2ac-4d66-adbb-b38da087b1fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:31.350 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8f2f02d8-28d6-4414-a7bf-41c1472ac551]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:31.370 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5d313267-b7af-49bb-8068-34297ac96138]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690358, 'reachable_time': 19203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290428, 'error': None, 'target': 'ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:31.374 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f51a29b-e726-472d-a3c5-2b60fcdbe425 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:25:31 np0005558241 systemd[1]: run-netns-ovnmeta\x2d9f51a29b\x2de726\x2d472d\x2da3c5\x2d2b60fcdbe425.mount: Deactivated successfully.
Dec 13 03:25:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:31.374 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[7ae1c4ac-f9f1-4afe-8775-e4625451081c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Dec 13 03:25:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Dec 13 03:25:31 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Dec 13 03:25:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:25:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3232377830' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.836 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.872 248514 DEBUG nova.storage.rbd_utils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] rbd image 29375ff9-300a-43de-a53d-942e7afbb439_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.880 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:31 np0005558241 nova_compute[248510]: 2025-12-13 08:25:31.964 248514 DEBUG nova.storage.rbd_utils [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] cloning vms/2c76f149-c467-4e59-afee-77940e515f8c_disk@5e9462edf74042da85c0ddaea983ddb2 to images/385a36c8-745e-468c-aebe-757bcf15df75 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.040 248514 INFO nova.virt.libvirt.driver [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Deleting instance files /var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428_del#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.042 248514 INFO nova.virt.libvirt.driver [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Deletion of /var/lib/nova/instances/84309355-d1f4-4a59-9f19-b212232e2428_del complete#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.082 248514 DEBUG nova.storage.rbd_utils [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] flattening images/385a36c8-745e-468c-aebe-757bcf15df75 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.308 248514 INFO nova.compute.manager [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Took 3.19 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.309 248514 DEBUG oslo.service.loopingcall [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.309 248514 DEBUG nova.compute.manager [-] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.310 248514 DEBUG nova.network.neutron [-] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:25:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:25:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2283608750' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.499 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.501 248514 DEBUG nova.virt.libvirt.vif [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:25:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1257374937',display_name='tempest-ServersTestJSON-server-1257374937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1257374937',id=38,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQM1EROcyTGYUNd3w+nL2c9tIiIpr7CpaZ/uVd5bqgtT8dSelOLMXPhJ/HVb4yRy7qvGCJbUXgeaaHZuyNIHWqDvsU3xORtagVm9kIjwgQBLgTK/GBqyOzmv5WpZhagyQ==',key_name='tempest-keypair-1165619389',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e370bdecda394d32b21d4eee440a61fa',ramdisk_id='',reservation_id='r-tuwuy5hl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1483541269',owner_user_name='tempest-ServersTestJSON-1483541269-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:25:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f6fadd0581d041428cc88161ae6e6e02',uuid=29375ff9-300a-43de-a53d-942e7afbb439,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.502 248514 DEBUG nova.network.os_vif_util [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Converting VIF {"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.503 248514 DEBUG nova.network.os_vif_util [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ee:5f,bridge_name='br-int',has_traffic_filtering=True,id=66179e56-6ff7-4353-872a-ee206fe0b050,network=Network(f9453e13-be77-4aff-899d-cbb572239200),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66179e56-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.504 248514 DEBUG nova.objects.instance [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 29375ff9-300a-43de-a53d-942e7afbb439 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.661 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:25:32 np0005558241 nova_compute[248510]:  <uuid>29375ff9-300a-43de-a53d-942e7afbb439</uuid>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:  <name>instance-00000026</name>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersTestJSON-server-1257374937</nova:name>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:25:31</nova:creationTime>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:25:32 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:        <nova:user uuid="f6fadd0581d041428cc88161ae6e6e02">tempest-ServersTestJSON-1483541269-project-member</nova:user>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:        <nova:project uuid="e370bdecda394d32b21d4eee440a61fa">tempest-ServersTestJSON-1483541269</nova:project>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:        <nova:port uuid="66179e56-6ff7-4353-872a-ee206fe0b050">
Dec 13 03:25:32 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <entry name="serial">29375ff9-300a-43de-a53d-942e7afbb439</entry>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <entry name="uuid">29375ff9-300a-43de-a53d-942e7afbb439</entry>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/29375ff9-300a-43de-a53d-942e7afbb439_disk">
Dec 13 03:25:32 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:25:32 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/29375ff9-300a-43de-a53d-942e7afbb439_disk.config">
Dec 13 03:25:32 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:25:32 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:c2:ee:5f"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <target dev="tap66179e56-6f"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/29375ff9-300a-43de-a53d-942e7afbb439/console.log" append="off"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:25:32 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:25:32 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:25:32 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:25:32 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.664 248514 DEBUG nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Preparing to wait for external event network-vif-plugged-66179e56-6ff7-4353-872a-ee206fe0b050 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.665 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Acquiring lock "29375ff9-300a-43de-a53d-942e7afbb439-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.665 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.665 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.666 248514 DEBUG nova.virt.libvirt.vif [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:25:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1257374937',display_name='tempest-ServersTestJSON-server-1257374937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1257374937',id=38,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQM1EROcyTGYUNd3w+nL2c9tIiIpr7CpaZ/uVd5bqgtT8dSelOLMXPhJ/HVb4yRy7qvGCJbUXgeaaHZuyNIHWqDvsU3xORtagVm9kIjwgQBLgTK/GBqyOzmv5WpZhagyQ==',key_name='tempest-keypair-1165619389',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e370bdecda394d32b21d4eee440a61fa',ramdisk_id='',reservation_id='r-tuwuy5hl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1483541269',owner_user_name='tempest-ServersTestJSON-1483541269-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:25:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f6fadd0581d041428cc88161ae6e6e02',uuid=29375ff9-300a-43de-a53d-942e7afbb439,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.667 248514 DEBUG nova.network.os_vif_util [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Converting VIF {"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.667 248514 DEBUG nova.network.os_vif_util [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ee:5f,bridge_name='br-int',has_traffic_filtering=True,id=66179e56-6ff7-4353-872a-ee206fe0b050,network=Network(f9453e13-be77-4aff-899d-cbb572239200),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66179e56-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.668 248514 DEBUG os_vif [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ee:5f,bridge_name='br-int',has_traffic_filtering=True,id=66179e56-6ff7-4353-872a-ee206fe0b050,network=Network(f9453e13-be77-4aff-899d-cbb572239200),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66179e56-6f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.668 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.669 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.669 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.674 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.676 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66179e56-6f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.676 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap66179e56-6f, col_values=(('external_ids', {'iface-id': '66179e56-6ff7-4353-872a-ee206fe0b050', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:ee:5f', 'vm-uuid': '29375ff9-300a-43de-a53d-942e7afbb439'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.679 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:32 np0005558241 NetworkManager[50376]: <info>  [1765614332.6807] manager: (tap66179e56-6f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.682 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.686 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:32 np0005558241 nova_compute[248510]: 2025-12-13 08:25:32.688 248514 INFO os_vif [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:ee:5f,bridge_name='br-int',has_traffic_filtering=True,id=66179e56-6ff7-4353-872a-ee206fe0b050,network=Network(f9453e13-be77-4aff-899d-cbb572239200),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66179e56-6f')#033[00m
Dec 13 03:25:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1812: 321 pgs: 321 active+clean; 181 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.7 MiB/s wr, 214 op/s
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.163 248514 DEBUG nova.network.neutron [req-a1ebe4a6-1cbb-4c46-b034-19f1b8e5b5d2 req-7879f7e8-4d76-471b-b505-5c867429a4ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Updated VIF entry in instance network info cache for port 66179e56-6ff7-4353-872a-ee206fe0b050. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.163 248514 DEBUG nova.network.neutron [req-a1ebe4a6-1cbb-4c46-b034-19f1b8e5b5d2 req-7879f7e8-4d76-471b-b505-5c867429a4ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Updating instance_info_cache with network_info: [{"id": "66179e56-6ff7-4353-872a-ee206fe0b050", "address": "fa:16:3e:c2:ee:5f", "network": {"id": "f9453e13-be77-4aff-899d-cbb572239200", "bridge": "br-int", "label": "tempest-ServersTestJSON-426351732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e370bdecda394d32b21d4eee440a61fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66179e56-6f", "ovs_interfaceid": "66179e56-6ff7-4353-872a-ee206fe0b050", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.290 248514 DEBUG oslo_concurrency.lockutils [req-a1ebe4a6-1cbb-4c46-b034-19f1b8e5b5d2 req-7879f7e8-4d76-471b-b505-5c867429a4ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-29375ff9-300a-43de-a53d-942e7afbb439" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.296 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.297 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.297 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] No VIF found with MAC fa:16:3e:c2:ee:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.298 248514 INFO nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Using config drive#033[00m
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.322 248514 DEBUG nova.storage.rbd_utils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] rbd image 29375ff9-300a-43de-a53d-942e7afbb439_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.372 248514 DEBUG nova.network.neutron [-] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.392 248514 INFO nova.compute.manager [-] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Took 1.08 seconds to deallocate network for instance.#033[00m
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.444 248514 DEBUG oslo_concurrency.lockutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.444 248514 DEBUG oslo_concurrency.lockutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.595 248514 DEBUG oslo_concurrency.processutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.647 248514 INFO nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Creating config drive at /var/lib/nova/instances/29375ff9-300a-43de-a53d-942e7afbb439/disk.config#033[00m
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.653 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29375ff9-300a-43de-a53d-942e7afbb439/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoba82dx4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.759 248514 DEBUG nova.storage.rbd_utils [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] removing snapshot(5e9462edf74042da85c0ddaea983ddb2) on rbd image(2c76f149-c467-4e59-afee-77940e515f8c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.795 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29375ff9-300a-43de-a53d-942e7afbb439/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoba82dx4" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.820 248514 DEBUG nova.storage.rbd_utils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] rbd image 29375ff9-300a-43de-a53d-942e7afbb439_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.824 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29375ff9-300a-43de-a53d-942e7afbb439/disk.config 29375ff9-300a-43de-a53d-942e7afbb439_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Dec 13 03:25:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Dec 13 03:25:33 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Dec 13 03:25:33 np0005558241 nova_compute[248510]: 2025-12-13 08:25:33.881 248514 DEBUG nova.storage.rbd_utils [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] creating snapshot(snap) on rbd image(385a36c8-745e-468c-aebe-757bcf15df75) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:25:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:25:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3004742559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:25:34 np0005558241 nova_compute[248510]: 2025-12-13 08:25:34.178 248514 DEBUG oslo_concurrency.processutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:34 np0005558241 nova_compute[248510]: 2025-12-13 08:25:34.187 248514 DEBUG nova.compute.provider_tree [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:25:34 np0005558241 nova_compute[248510]: 2025-12-13 08:25:34.229 248514 DEBUG nova.scheduler.client.report [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:25:34 np0005558241 nova_compute[248510]: 2025-12-13 08:25:34.257 248514 DEBUG oslo_concurrency.lockutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:25:34 np0005558241 nova_compute[248510]: 2025-12-13 08:25:34.296 248514 INFO nova.scheduler.client.report [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Deleted allocations for instance 84309355-d1f4-4a59-9f19-b212232e2428#033[00m
Dec 13 03:25:34 np0005558241 nova_compute[248510]: 2025-12-13 08:25:34.382 248514 DEBUG oslo_concurrency.lockutils [None req-98d9b332-06f7-4ca4-b901-44ed7a1f84e7 ca2c7fce813a4f919271d56491477e18 59a0c2f158ce417f80e49cc5eb2db59d - - default default] Lock "84309355-d1f4-4a59-9f19-b212232e2428" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.267s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:25:34 np0005558241 nova_compute[248510]: 2025-12-13 08:25:34.417 248514 DEBUG nova.compute.manager [req-b1f8b38c-7cda-4b0b-8173-507027b3515d req-040e502f-f43e-48d4-82c0-0802102b1029 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 84309355-d1f4-4a59-9f19-b212232e2428] Received event network-vif-deleted-2ad09377-22a9-4666-8a3b-f24c9cba7ac9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:25:34 np0005558241 nova_compute[248510]: 2025-12-13 08:25:34.764 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Dec 13 03:25:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Dec 13 03:25:34 np0005558241 nova_compute[248510]: 2025-12-13 08:25:34.882 248514 DEBUG oslo_concurrency.processutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29375ff9-300a-43de-a53d-942e7afbb439/disk.config 29375ff9-300a-43de-a53d-942e7afbb439_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:34 np0005558241 nova_compute[248510]: 2025-12-13 08:25:34.882 248514 INFO nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Deleting local config drive /var/lib/nova/instances/29375ff9-300a-43de-a53d-942e7afbb439/disk.config because it was imported into RBD.#033[00m
Dec 13 03:25:34 np0005558241 kernel: tap66179e56-6f: entered promiscuous mode
Dec 13 03:25:34 np0005558241 NetworkManager[50376]: <info>  [1765614334.9472] manager: (tap66179e56-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/153)
Dec 13 03:25:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:34Z|00335|binding|INFO|Claiming lport 66179e56-6ff7-4353-872a-ee206fe0b050 for this chassis.
Dec 13 03:25:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:34Z|00336|binding|INFO|66179e56-6ff7-4353-872a-ee206fe0b050: Claiming fa:16:3e:c2:ee:5f 10.100.0.13
Dec 13 03:25:34 np0005558241 nova_compute[248510]: 2025-12-13 08:25:34.948 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:34 np0005558241 nova_compute[248510]: 2025-12-13 08:25:34.952 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:34.960 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:ee:5f 10.100.0.13'], port_security=['fa:16:3e:c2:ee:5f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '29375ff9-300a-43de-a53d-942e7afbb439', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f9453e13-be77-4aff-899d-cbb572239200', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e370bdecda394d32b21d4eee440a61fa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '06ce8d20-248e-4763-9b89-5c8df9c9f100', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=490cf977-f165-4aa4-8179-937f6e939091, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=66179e56-6ff7-4353-872a-ee206fe0b050) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:25:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:34.962 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 66179e56-6ff7-4353-872a-ee206fe0b050 in datapath f9453e13-be77-4aff-899d-cbb572239200 bound to our chassis#033[00m
Dec 13 03:25:34 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Dec 13 03:25:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:34.963 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f9453e13-be77-4aff-899d-cbb572239200#033[00m
Dec 13 03:25:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:34.978 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2cefc7ea-fa33-438b-8bc7-bbb0191f57c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:34.979 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf9453e13-b1 in ovnmeta-f9453e13-be77-4aff-899d-cbb572239200 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:25:34 np0005558241 systemd-udevd[290676]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:25:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:34.982 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf9453e13-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:25:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:34.982 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0d7fa7-485d-47ee-829e-bb5efc26811c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:34.983 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec714b9-bdef-4bb0-a39f-d2f3dc695b5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:34 np0005558241 NetworkManager[50376]: <info>  [1765614334.9941] device (tap66179e56-6f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:25:34 np0005558241 NetworkManager[50376]: <info>  [1765614334.9949] device (tap66179e56-6f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:25:35 np0005558241 systemd-machined[210538]: New machine qemu-44-instance-00000026.
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:34.998 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[5414ec3b-dcfc-4c59-9b39-ae2b93683fb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:35 np0005558241 systemd[1]: Started Virtual Machine qemu-44-instance-00000026.
Dec 13 03:25:35 np0005558241 nova_compute[248510]: 2025-12-13 08:25:35.017 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:35Z|00337|binding|INFO|Setting lport 66179e56-6ff7-4353-872a-ee206fe0b050 ovn-installed in OVS
Dec 13 03:25:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:35Z|00338|binding|INFO|Setting lport 66179e56-6ff7-4353-872a-ee206fe0b050 up in Southbound
Dec 13 03:25:35 np0005558241 nova_compute[248510]: 2025-12-13 08:25:35.022 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.028 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0884279e-0b33-4816-9a6d-f24e6ac8bc24]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.065 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3885fa91-2d97-4f35-88f9-39f9962a990e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.071 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[986f1eec-b1b3-4068-996c-c5459468b360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:35 np0005558241 NetworkManager[50376]: <info>  [1765614335.0723] manager: (tapf9453e13-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/154)
Dec 13 03:25:35 np0005558241 systemd-udevd[290680]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.108 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0b7c6763-9dbb-4955-8b81-121af71e5425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.112 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5b1e5c58-512a-422c-8d3d-89462b4f2c0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1815: 321 pgs: 321 active+clean; 176 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 157 op/s
Dec 13 03:25:35 np0005558241 NetworkManager[50376]: <info>  [1765614335.1409] device (tapf9453e13-b0): carrier: link connected
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.147 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce64d7b-d2e7-4ccc-93f1-db81e67435f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.168 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d2d15d-195f-4bae-be8d-8111c7f04bea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf9453e13-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:a3:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691235, 'reachable_time': 42879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290709, 'error': None, 'target': 'ovnmeta-f9453e13-be77-4aff-899d-cbb572239200', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.190 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d59c396-199d-4f1f-ad5a-3abd55322047]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:a3cd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691235, 'tstamp': 691235}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290710, 'error': None, 'target': 'ovnmeta-f9453e13-be77-4aff-899d-cbb572239200', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.213 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b72931bf-faa9-43f6-b66e-f44a0be0a4ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf9453e13-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:a3:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691235, 'reachable_time': 42879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290711, 'error': None, 'target': 'ovnmeta-f9453e13-be77-4aff-899d-cbb572239200', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.250 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0def6b13-9e46-463b-9f6f-b7369ca202db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.323 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fa2bf6fb-71e1-4f9e-974f-47610e883b8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.325 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf9453e13-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.326 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.326 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf9453e13-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:35 np0005558241 NetworkManager[50376]: <info>  [1765614335.3295] manager: (tapf9453e13-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Dec 13 03:25:35 np0005558241 nova_compute[248510]: 2025-12-13 08:25:35.329 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:35 np0005558241 kernel: tapf9453e13-b0: entered promiscuous mode
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.332 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf9453e13-b0, col_values=(('external_ids', {'iface-id': 'b4b5208f-f540-413f-b711-c27b19daefc1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:25:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:35Z|00339|binding|INFO|Releasing lport b4b5208f-f540-413f-b711-c27b19daefc1 from this chassis (sb_readonly=0)
Dec 13 03:25:35 np0005558241 nova_compute[248510]: 2025-12-13 08:25:35.334 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.338 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f9453e13-be77-4aff-899d-cbb572239200.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f9453e13-be77-4aff-899d-cbb572239200.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.339 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea01f35-01b1-4c1f-a792-da00f692e478]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.340 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-f9453e13-be77-4aff-899d-cbb572239200
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/f9453e13-be77-4aff-899d-cbb572239200.pid.haproxy
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID f9453e13-be77-4aff-899d-cbb572239200
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 03:25:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:25:35.341 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f9453e13-be77-4aff-899d-cbb572239200', 'env', 'PROCESS_TAG=haproxy-f9453e13-be77-4aff-899d-cbb572239200', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f9453e13-be77-4aff-899d-cbb572239200.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 03:25:35 np0005558241 nova_compute[248510]: 2025-12-13 08:25:35.348 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:25:35 np0005558241 nova_compute[248510]: 2025-12-13 08:25:35.539 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614335.5391817, 29375ff9-300a-43de-a53d-942e7afbb439 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:25:35 np0005558241 nova_compute[248510]: 2025-12-13 08:25:35.540 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] VM Started (Lifecycle Event)
Dec 13 03:25:35 np0005558241 nova_compute[248510]: 2025-12-13 08:25:35.570 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:25:35 np0005558241 nova_compute[248510]: 2025-12-13 08:25:35.574 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614335.5414612, 29375ff9-300a-43de-a53d-942e7afbb439 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:25:35 np0005558241 nova_compute[248510]: 2025-12-13 08:25:35.575 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] VM Paused (Lifecycle Event)
Dec 13 03:25:35 np0005558241 nova_compute[248510]: 2025-12-13 08:25:35.603 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:25:35 np0005558241 nova_compute[248510]: 2025-12-13 08:25:35.609 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:25:35 np0005558241 nova_compute[248510]: 2025-12-13 08:25:35.633 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:25:35 np0005558241 podman[290785]: 2025-12-13 08:25:35.734877004 +0000 UTC m=+0.028903274 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:25:36 np0005558241 podman[290785]: 2025-12-13 08:25:36.005351408 +0000 UTC m=+0.299377648 container create 1412ae6b3a040e7a74bf434e40c81baccb63ff88fc6b9ba86ce22e1623dc1d04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:25:36 np0005558241 systemd[1]: Started libpod-conmon-1412ae6b3a040e7a74bf434e40c81baccb63ff88fc6b9ba86ce22e1623dc1d04.scope.
Dec 13 03:25:36 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:25:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d31eec13ac5f88b15ff754f3542405a83e8a8c72b6ef49186db1b491f558445/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:25:36 np0005558241 podman[290785]: 2025-12-13 08:25:36.10190212 +0000 UTC m=+0.395928380 container init 1412ae6b3a040e7a74bf434e40c81baccb63ff88fc6b9ba86ce22e1623dc1d04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 03:25:36 np0005558241 podman[290785]: 2025-12-13 08:25:36.108761339 +0000 UTC m=+0.402787579 container start 1412ae6b3a040e7a74bf434e40c81baccb63ff88fc6b9ba86ce22e1623dc1d04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Dec 13 03:25:36 np0005558241 neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200[290800]: [NOTICE]   (290804) : New worker (290806) forked
Dec 13 03:25:36 np0005558241 neutron-haproxy-ovnmeta-f9453e13-be77-4aff-899d-cbb572239200[290800]: [NOTICE]   (290804) : Loading success.
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.437 248514 DEBUG nova.compute.manager [req-dc83a1d9-a998-4051-8d76-612d4adcaf83 req-ada1e826-14ae-4c84-b1d5-40ec945964ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Received event network-vif-plugged-66179e56-6ff7-4353-872a-ee206fe0b050 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.437 248514 DEBUG oslo_concurrency.lockutils [req-dc83a1d9-a998-4051-8d76-612d4adcaf83 req-ada1e826-14ae-4c84-b1d5-40ec945964ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "29375ff9-300a-43de-a53d-942e7afbb439-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.438 248514 DEBUG oslo_concurrency.lockutils [req-dc83a1d9-a998-4051-8d76-612d4adcaf83 req-ada1e826-14ae-4c84-b1d5-40ec945964ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.438 248514 DEBUG oslo_concurrency.lockutils [req-dc83a1d9-a998-4051-8d76-612d4adcaf83 req-ada1e826-14ae-4c84-b1d5-40ec945964ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.438 248514 DEBUG nova.compute.manager [req-dc83a1d9-a998-4051-8d76-612d4adcaf83 req-ada1e826-14ae-4c84-b1d5-40ec945964ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Processing event network-vif-plugged-66179e56-6ff7-4353-872a-ee206fe0b050 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.438 248514 DEBUG nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.444 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614336.443807, 29375ff9-300a-43de-a53d-942e7afbb439 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.444 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] VM Resumed (Lifecycle Event)
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.447 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.451 248514 INFO nova.virt.libvirt.driver [-] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Instance spawned successfully.
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.451 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.471 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.478 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.483 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.484 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.484 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.485 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.485 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.486 248514 DEBUG nova.virt.libvirt.driver [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.512 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.575 248514 INFO nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Took 14.27 seconds to spawn the instance on the hypervisor.
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.576 248514 DEBUG nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.648 248514 INFO nova.compute.manager [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Took 15.74 seconds to build instance.
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.671 248514 DEBUG oslo_concurrency.lockutils [None req-1c46fa85-95ea-423a-bda7-120734a4c667 f6fadd0581d041428cc88161ae6e6e02 e370bdecda394d32b21d4eee440a61fa - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.812 248514 INFO nova.virt.libvirt.driver [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Snapshot image upload complete
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.812 248514 INFO nova.compute.manager [None req-7a4c202f-57fc-40e5-8c24-0e81c15c6f6c 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 2c76f149-c467-4e59-afee-77940e515f8c] Took 8.02 seconds to snapshot the instance on the hypervisor.
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.895 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "50753e78-0239-4dc0-aeb1-4ee82b493132" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.895 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "50753e78-0239-4dc0-aeb1-4ee82b493132" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.914 248514 DEBUG nova.compute.manager [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:25:36 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.999 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:25:37 np0005558241 nova_compute[248510]: 2025-12-13 08:25:36.999 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:25:37 np0005558241 nova_compute[248510]: 2025-12-13 08:25:37.008 248514 DEBUG nova.virt.hardware [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:25:37 np0005558241 nova_compute[248510]: 2025-12-13 08:25:37.008 248514 INFO nova.compute.claims [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:25:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 321 active+clean; 176 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 157 op/s
Dec 13 03:25:37 np0005558241 nova_compute[248510]: 2025-12-13 08:25:37.151 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:25:37 np0005558241 nova_compute[248510]: 2025-12-13 08:25:37.344 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:25:37 np0005558241 nova_compute[248510]: 2025-12-13 08:25:37.679 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:25:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:25:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2198550678' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:25:37 np0005558241 nova_compute[248510]: 2025-12-13 08:25:37.901 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:25:37 np0005558241 nova_compute[248510]: 2025-12-13 08:25:37.907 248514 DEBUG nova.compute.provider_tree [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:25:37 np0005558241 nova_compute[248510]: 2025-12-13 08:25:37.978 248514 DEBUG nova.scheduler.client.report [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:25:38 np0005558241 nova_compute[248510]: 2025-12-13 08:25:38.020 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:25:38 np0005558241 nova_compute[248510]: 2025-12-13 08:25:38.021 248514 DEBUG nova.compute.manager [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:25:38 np0005558241 nova_compute[248510]: 2025-12-13 08:25:38.447 248514 DEBUG nova.compute.manager [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:25:38 np0005558241 nova_compute[248510]: 2025-12-13 08:25:38.448 248514 DEBUG nova.network.neutron [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:25:38 np0005558241 nova_compute[248510]: 2025-12-13 08:25:38.532 248514 INFO nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:25:38 np0005558241 nova_compute[248510]: 2025-12-13 08:25:38.652 248514 DEBUG nova.policy [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b6f7967ce16c45d394188c1302b02907', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:25:38 np0005558241 nova_compute[248510]: 2025-12-13 08:25:38.696 248514 DEBUG nova.compute.manager [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:25:38 np0005558241 nova_compute[248510]: 2025-12-13 08:25:38.753 248514 DEBUG nova.compute.manager [req-f6402574-004a-4eb2-a552-7ca697adb40c req-3ecbde43-5b72-4985-aebb-11891b53eedb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Received event network-vif-plugged-66179e56-6ff7-4353-872a-ee206fe0b050 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:25:38 np0005558241 nova_compute[248510]: 2025-12-13 08:25:38.754 248514 DEBUG oslo_concurrency.lockutils [req-f6402574-004a-4eb2-a552-7ca697adb40c req-3ecbde43-5b72-4985-aebb-11891b53eedb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "29375ff9-300a-43de-a53d-942e7afbb439-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:25:38 np0005558241 nova_compute[248510]: 2025-12-13 08:25:38.755 248514 DEBUG oslo_concurrency.lockutils [req-f6402574-004a-4eb2-a552-7ca697adb40c req-3ecbde43-5b72-4985-aebb-11891b53eedb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:25:38 np0005558241 nova_compute[248510]: 2025-12-13 08:25:38.755 248514 DEBUG oslo_concurrency.lockutils [req-f6402574-004a-4eb2-a552-7ca697adb40c req-3ecbde43-5b72-4985-aebb-11891b53eedb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "29375ff9-300a-43de-a53d-942e7afbb439-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:25:38 np0005558241 nova_compute[248510]: 2025-12-13 08:25:38.756 248514 DEBUG nova.compute.manager [req-f6402574-004a-4eb2-a552-7ca697adb40c req-3ecbde43-5b72-4985-aebb-11891b53eedb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] No waiting events found dispatching network-vif-plugged-66179e56-6ff7-4353-872a-ee206fe0b050 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:25:38 np0005558241 nova_compute[248510]: 2025-12-13 08:25:38.756 248514 WARNING nova.compute.manager [req-f6402574-004a-4eb2-a552-7ca697adb40c req-3ecbde43-5b72-4985-aebb-11891b53eedb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 29375ff9-300a-43de-a53d-942e7afbb439] Received unexpected event network-vif-plugged-66179e56-6ff7-4353-872a-ee206fe0b050 for instance with vm_state active and task_state None.
Dec 13 03:25:38 np0005558241 nova_compute[248510]: 2025-12-13 08:25:38.946 248514 DEBUG nova.compute.manager [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:25:38 np0005558241 nova_compute[248510]: 2025-12-13 08:25:38.948 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:25:38 np0005558241 nova_compute[248510]: 2025-12-13 08:25:38.949 248514 INFO nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Creating image(s)
Dec 13 03:25:38 np0005558241 nova_compute[248510]: 2025-12-13 08:25:38.970 248514 DEBUG nova.storage.rbd_utils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 50753e78-0239-4dc0-aeb1-4ee82b493132_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:25:38 np0005558241 nova_compute[248510]: 2025-12-13 08:25:38.996 248514 DEBUG nova.storage.rbd_utils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 50753e78-0239-4dc0-aeb1-4ee82b493132_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:25:39 np0005558241 nova_compute[248510]: 2025-12-13 08:25:39.026 248514 DEBUG nova.storage.rbd_utils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 50753e78-0239-4dc0-aeb1-4ee82b493132_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:25:39 np0005558241 nova_compute[248510]: 2025-12-13 08:25:39.032 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:25:39 np0005558241 nova_compute[248510]: 2025-12-13 08:25:39.105 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:25:39 np0005558241 nova_compute[248510]: 2025-12-13 08:25:39.106 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:25:39 np0005558241 nova_compute[248510]: 2025-12-13 08:25:39.107 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:25:39 np0005558241 nova_compute[248510]: 2025-12-13 08:25:39.108 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:25:39 np0005558241 nova_compute[248510]: 2025-12-13 08:25:39.130 248514 DEBUG nova.storage.rbd_utils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 50753e78-0239-4dc0-aeb1-4ee82b493132_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:25:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1817: 321 pgs: 321 active+clean; 181 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 6.4 MiB/s rd, 2.9 MiB/s wr, 279 op/s
Dec 13 03:25:39 np0005558241 nova_compute[248510]: 2025-12-13 08:25:39.136 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 50753e78-0239-4dc0-aeb1-4ee82b493132_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:25:39 np0005558241 nova_compute[248510]: 2025-12-13 08:25:39.406 248514 DEBUG nova.network.neutron [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Successfully created port: 18ef0b77-20ed-4ee2-ac70-d33b3d31ea43 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:25:39 np0005558241 nova_compute[248510]: 2025-12-13 08:25:39.742 248514 DEBUG oslo_concurrency.processutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 50753e78-0239-4dc0-aeb1-4ee82b493132_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:25:39 np0005558241 nova_compute[248510]: 2025-12-13 08:25:39.773 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:39 np0005558241 nova_compute[248510]: 2025-12-13 08:25:39.804 248514 DEBUG nova.storage.rbd_utils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] resizing rbd image 50753e78-0239-4dc0-aeb1-4ee82b493132_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:25:39 np0005558241 nova_compute[248510]: 2025-12-13 08:25:39.913 248514 DEBUG nova.objects.instance [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'migration_context' on Instance uuid 50753e78-0239-4dc0-aeb1-4ee82b493132 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:25:39 np0005558241 nova_compute[248510]: 2025-12-13 08:25:39.982 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:25:39 np0005558241 nova_compute[248510]: 2025-12-13 08:25:39.983 248514 DEBUG nova.virt.libvirt.driver [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 50753e78-0239-4dc0-aeb1-4ee82b493132] Ensure instance console log exists: /var/lib/nova/instances/50753e78-0239-4dc0-aeb1-4ee82b493132/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:25:39 np0005558241 nova_compute[248510]: 2025-12-13 08:25:39.983 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:39 np0005558241 nova_compute[248510]: 2025-12-13 08:25:39.984 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:39 np0005558241 nova_compute[248510]: 2025-12-13 08:25:39.984 248514 DEBUG oslo_concurrency.lockutils [None req-85b09721-83f9-40f7-9007-571bc74b750e b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:25:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:25:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Dec 13 03:25:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Dec 13 03:25:40 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Dec 13 03:25:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:25:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:25:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:25:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:25:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:25:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:25:40 np0005558241 NetworkManager[50376]: <info>  [1765614340.4286] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Dec 13 03:25:40 np0005558241 NetworkManager[50376]: <info>  [1765614340.4292] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Dec 13 03:25:40 np0005558241 nova_compute[248510]: 2025-12-13 08:25:40.428 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:40 np0005558241 nova_compute[248510]: 2025-12-13 08:25:40.512 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:40 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:40Z|00340|binding|INFO|Releasing lport b4b5208f-f540-413f-b711-c27b19daefc1 from this chassis (sb_readonly=0)
Dec 13 03:25:40 np0005558241 ovn_controller[148476]: 2025-12-13T08:25:40Z|00341|binding|INFO|Releasing lport 3e59c36d-12c7-4d92-aa2f-8b6a795f3f98 from this chassis (sb_readonly=0)
Dec 13 03:25:40 np0005558241 nova_compute[248510]: 2025-12-13 08:25:40.525 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:40 np0005558241 nova_compute[248510]: 2025-12-13 08:25:40.549 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:25:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1819: 321 pgs: 321 active+clean; 181 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.6 MiB/s wr, 180 op/s
Dec 13 03:25:41 np0005558241 nova_compute[248510]: 2025-12-13 08:25:41.149 248514 DEBUG oslo_concurrency.lockutils [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "2c76f149-c467-4e59-afee-77940e515f8c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:41 np0005558241 nova_compute[248510]: 2025-12-13 08:25:41.150 248514 DEBUG oslo_concurrency.lockutils [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:41 np0005558241 nova_compute[248510]: 2025-12-13 08:25:41.150 248514 DEBUG oslo_concurrency.lockutils [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "2c76f149-c467-4e59-afee-77940e515f8c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:25:41 np0005558241 nova_compute[248510]: 2025-12-13 08:25:41.151 248514 DEBUG oslo_concurrency.lockutils [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:25:41 np0005558241 nova_compute[248510]: 2025-12-13 08:25:41.151 248514 DEBUG oslo_concurrency.lockutils [None req-19f64d9a-5871-4ae8-b5d1-4d880c28c6d4 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "2c76f149-c467-4e59-afee-77940e515f8c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:27:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:27:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1658466818' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.842 248514 DEBUG nova.compute.manager [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received event network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.843 248514 DEBUG oslo_concurrency.lockutils [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.843 248514 DEBUG oslo_concurrency.lockutils [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.843 248514 DEBUG oslo_concurrency.lockutils [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.844 248514 DEBUG nova.compute.manager [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] No waiting events found dispatching network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.844 248514 WARNING nova.compute.manager [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received unexpected event network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.844 248514 DEBUG nova.compute.manager [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received event network-vif-unplugged-57241dd9-27dd-49bc-befb-1ef45674d6be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.845 248514 DEBUG oslo_concurrency.lockutils [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.845 248514 DEBUG oslo_concurrency.lockutils [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.845 248514 DEBUG oslo_concurrency.lockutils [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.845 248514 DEBUG nova.compute.manager [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] No waiting events found dispatching network-vif-unplugged-57241dd9-27dd-49bc-befb-1ef45674d6be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.846 248514 DEBUG nova.compute.manager [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received event network-vif-unplugged-57241dd9-27dd-49bc-befb-1ef45674d6be for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.846 248514 DEBUG nova.compute.manager [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received event network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.846 248514 DEBUG oslo_concurrency.lockutils [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.847 248514 DEBUG oslo_concurrency.lockutils [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.849 248514 DEBUG oslo_concurrency.lockutils [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.850 248514 DEBUG nova.compute.manager [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] No waiting events found dispatching network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.850 248514 WARNING nova.compute.manager [req-8a6cc3ef-0d61-45ed-8659-f44acea4b29d req-1f5e623d-27be-4e90-917e-3a6819c30f7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received unexpected event network-vif-plugged-57241dd9-27dd-49bc-befb-1ef45674d6be for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.851 248514 DEBUG oslo_concurrency.processutils [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.859 248514 DEBUG nova.compute.provider_tree [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.885 248514 DEBUG nova.scheduler.client.report [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:27:41 np0005558241 nova_compute[248510]: 2025-12-13 08:27:41.926 248514 DEBUG oslo_concurrency.lockutils [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:27:42 np0005558241 nova_compute[248510]: 2025-12-13 08:27:42.006 248514 DEBUG oslo_concurrency.lockutils [None req-1b24ad35-953b-455b-b854-e8bdfb41b319 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "2e52d555-08dd-49fb-a73a-eded391e154c" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 32.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:27:42 np0005558241 nova_compute[248510]: 2025-12-13 08:27:42.092 248514 DEBUG nova.network.neutron [-] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:27:42 np0005558241 rsyslogd[1002]: imjournal: 6433 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec 13 03:27:42 np0005558241 nova_compute[248510]: 2025-12-13 08:27:42.133 248514 INFO nova.compute.manager [-] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Took 1.65 seconds to deallocate network for instance.#033[00m
Dec 13 03:27:42 np0005558241 nova_compute[248510]: 2025-12-13 08:27:42.208 248514 DEBUG oslo_concurrency.lockutils [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:27:42 np0005558241 nova_compute[248510]: 2025-12-13 08:27:42.208 248514 DEBUG oslo_concurrency.lockutils [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:27:42 np0005558241 nova_compute[248510]: 2025-12-13 08:27:42.360 248514 DEBUG oslo_concurrency.processutils [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:27:42 np0005558241 nova_compute[248510]: 2025-12-13 08:27:42.606 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:27:42 np0005558241 nova_compute[248510]: 2025-12-13 08:27:42.606 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:27:42 np0005558241 nova_compute[248510]: 2025-12-13 08:27:42.629 248514 DEBUG nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:27:42 np0005558241 nova_compute[248510]: 2025-12-13 08:27:42.725 248514 DEBUG nova.network.neutron [req-4c93ef52-9460-4440-a364-c82d5bd04072 req-7ff15e75-db56-4a8b-9665-f08f9985a045 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Updated VIF entry in instance network info cache for port b1a08ea3-3044-4caa-a944-744bd324adc9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:27:42 np0005558241 nova_compute[248510]: 2025-12-13 08:27:42.726 248514 DEBUG nova.network.neutron [req-4c93ef52-9460-4440-a364-c82d5bd04072 req-7ff15e75-db56-4a8b-9665-f08f9985a045 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e52d555-08dd-49fb-a73a-eded391e154c] Updating instance_info_cache with network_info: [{"id": "b1a08ea3-3044-4caa-a944-744bd324adc9", "address": "fa:16:3e:e9:8e:7b", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": null, "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapb1a08ea3-30", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:27:42 np0005558241 nova_compute[248510]: 2025-12-13 08:27:42.731 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:27:42 np0005558241 nova_compute[248510]: 2025-12-13 08:27:42.754 248514 DEBUG oslo_concurrency.lockutils [req-4c93ef52-9460-4440-a364-c82d5bd04072 req-7ff15e75-db56-4a8b-9665-f08f9985a045 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2e52d555-08dd-49fb-a73a-eded391e154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:27:42 np0005558241 ovn_controller[148476]: 2025-12-13T08:27:42Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:75:73:cb 10.100.0.8
Dec 13 03:27:42 np0005558241 ovn_controller[148476]: 2025-12-13T08:27:42Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:75:73:cb 10.100.0.8
Dec 13 03:27:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:27:42 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3237591976' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:27:42 np0005558241 nova_compute[248510]: 2025-12-13 08:27:42.954 248514 DEBUG oslo_concurrency.processutils [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:27:42 np0005558241 nova_compute[248510]: 2025-12-13 08:27:42.961 248514 DEBUG nova.compute.provider_tree [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:27:42 np0005558241 nova_compute[248510]: 2025-12-13 08:27:42.983 248514 DEBUG nova.scheduler.client.report [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.012 248514 DEBUG oslo_concurrency.lockutils [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.017 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.029 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.029 248514 INFO nova.compute.claims [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.057 248514 INFO nova.scheduler.client.report [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Deleted allocations for instance 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.140 248514 DEBUG oslo_concurrency.lockutils [None req-64c957e3-289d-4617-9a50-ab8195406cfe a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "05e06a6b-e157-4cd9-88c0-889fa4cfd9fa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.270 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:27:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1895: 321 pgs: 321 active+clean; 439 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 3.3 MiB/s wr, 320 op/s
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 03:27:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:27:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4212434085' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.805 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.805 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.825 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.832 248514 DEBUG nova.compute.provider_tree [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.856 248514 DEBUG nova.scheduler.client.report [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.884 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.884 248514 DEBUG nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.962 248514 DEBUG nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.962 248514 DEBUG nova.network.neutron [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:27:43 np0005558241 nova_compute[248510]: 2025-12-13 08:27:43.988 248514 INFO nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.008 248514 DEBUG nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.081 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-98240df6-1cba-40e1-833c-24611270ed83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.082 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-98240df6-1cba-40e1-833c-24611270ed83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.082 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.084 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.554 248514 DEBUG nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.555 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.556 248514 INFO nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Creating image(s)
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.581 248514 DEBUG nova.storage.rbd_utils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.607 248514 DEBUG nova.storage.rbd_utils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.631 248514 DEBUG nova.storage.rbd_utils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.635 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.678 248514 DEBUG nova.compute.manager [req-3622120c-43cd-44e5-bf98-824565320837 req-246a116d-e482-4903-a225-8d4c397a7e6a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Received event network-vif-deleted-57241dd9-27dd-49bc-befb-1ef45674d6be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.722 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.723 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.723 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.723 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.746 248514 DEBUG nova.storage.rbd_utils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.749 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:27:44 np0005558241 nova_compute[248510]: 2025-12-13 08:27:44.934 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.060 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:27:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.141 248514 DEBUG nova.policy [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a5bc32e49dbd4372a006913090b9ef0f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.149 248514 DEBUG nova.storage.rbd_utils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] resizing rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.239 248514 DEBUG nova.objects.instance [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'migration_context' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.266 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.266 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Ensure instance console log exists: /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.267 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.267 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.267 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:27:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 372 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.3 MiB/s wr, 354 op/s
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.501 248514 DEBUG nova.compute.manager [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Received event network-vif-plugged-eca7f353-3478-46ea-a63f-617a11a8f7ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.501 248514 DEBUG oslo_concurrency.lockutils [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.501 248514 DEBUG oslo_concurrency.lockutils [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.503 248514 DEBUG oslo_concurrency.lockutils [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.503 248514 DEBUG nova.compute.manager [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Processing event network-vif-plugged-eca7f353-3478-46ea-a63f-617a11a8f7ff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.503 248514 DEBUG nova.compute.manager [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Received event network-vif-plugged-eca7f353-3478-46ea-a63f-617a11a8f7ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.504 248514 DEBUG oslo_concurrency.lockutils [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.504 248514 DEBUG oslo_concurrency.lockutils [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.504 248514 DEBUG oslo_concurrency.lockutils [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.504 248514 DEBUG nova.compute.manager [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] No waiting events found dispatching network-vif-plugged-eca7f353-3478-46ea-a63f-617a11a8f7ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.504 248514 WARNING nova.compute.manager [req-778e08b2-327c-4793-8955-c95821064569 req-34736e21-5cdf-476c-a299-83f573713f21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Received unexpected event network-vif-plugged-eca7f353-3478-46ea-a63f-617a11a8f7ff for instance with vm_state building and task_state spawning.
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.505 248514 DEBUG nova.compute.manager [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.520 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614465.5168235, df25cd40-72b5-4e0f-90ec-8677c699d1d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.522 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] VM Resumed (Lifecycle Event)
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.524 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.531 248514 INFO nova.virt.libvirt.driver [-] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Instance spawned successfully.
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.532 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.557 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.560 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.570 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.571 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.572 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.572 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.573 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.573 248514 DEBUG nova.virt.libvirt.driver [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.597 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.766 248514 INFO nova.compute.manager [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Took 21.53 seconds to spawn the instance on the hypervisor.
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.767 248514 DEBUG nova.compute.manager [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:27:45 np0005558241 nova_compute[248510]: 2025-12-13 08:27:45.845 248514 INFO nova.compute.manager [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Took 28.30 seconds to build instance.
Dec 13 03:27:46 np0005558241 nova_compute[248510]: 2025-12-13 08:27:46.399 248514 DEBUG oslo_concurrency.lockutils [None req-a6994f72-d67b-4cd2-bab5-11e959288890 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 30.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:27:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Dec 13 03:27:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Dec 13 03:27:46 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Dec 13 03:27:47 np0005558241 nova_compute[248510]: 2025-12-13 08:27:47.155 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Updating instance_info_cache with network_info: [{"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:27:47 np0005558241 nova_compute[248510]: 2025-12-13 08:27:47.185 248514 DEBUG nova.network.neutron [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Successfully created port: d8c7cad7-f601-4205-8838-a583b6e04b0f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 03:27:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1898: 321 pgs: 321 active+clean; 372 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.1 MiB/s wr, 304 op/s
Dec 13 03:27:47 np0005558241 nova_compute[248510]: 2025-12-13 08:27:47.567 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-98240df6-1cba-40e1-833c-24611270ed83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:27:47 np0005558241 nova_compute[248510]: 2025-12-13 08:27:47.567 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 03:27:47 np0005558241 nova_compute[248510]: 2025-12-13 08:27:47.568 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:27:47 np0005558241 ovn_controller[148476]: 2025-12-13T08:27:47Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:bc:28 10.100.0.4
Dec 13 03:27:47 np0005558241 ovn_controller[148476]: 2025-12-13T08:27:47Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:bc:28 10.100.0.4
Dec 13 03:27:47 np0005558241 nova_compute[248510]: 2025-12-13 08:27:47.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:27:47 np0005558241 nova_compute[248510]: 2025-12-13 08:27:47.836 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:27:47 np0005558241 nova_compute[248510]: 2025-12-13 08:27:47.836 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:27:47 np0005558241 nova_compute[248510]: 2025-12-13 08:27:47.837 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:27:47 np0005558241 nova_compute[248510]: 2025-12-13 08:27:47.837 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 03:27:47 np0005558241 nova_compute[248510]: 2025-12-13 08:27:47.837 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:27:48 np0005558241 podman[299941]: 2025-12-13 08:27:48.015078252 +0000 UTC m=+0.092584016 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3)
Dec 13 03:27:48 np0005558241 podman[299942]: 2025-12-13 08:27:48.030530133 +0000 UTC m=+0.101349682 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 03:27:48 np0005558241 podman[299940]: 2025-12-13 08:27:48.043012231 +0000 UTC m=+0.121704204 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 03:27:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:27:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/537386707' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:27:48 np0005558241 nova_compute[248510]: 2025-12-13 08:27:48.453 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:27:48 np0005558241 nova_compute[248510]: 2025-12-13 08:27:48.956 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:27:48 np0005558241 nova_compute[248510]: 2025-12-13 08:27:48.957 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:27:48 np0005558241 nova_compute[248510]: 2025-12-13 08:27:48.963 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:27:48 np0005558241 nova_compute[248510]: 2025-12-13 08:27:48.963 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:27:48 np0005558241 nova_compute[248510]: 2025-12-13 08:27:48.967 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:27:48 np0005558241 nova_compute[248510]: 2025-12-13 08:27:48.968 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:27:48 np0005558241 nova_compute[248510]: 2025-12-13 08:27:48.974 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:27:48 np0005558241 nova_compute[248510]: 2025-12-13 08:27:48.975 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.177 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.179 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3444MB free_disk=59.85529749840498GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.179 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.179 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.342 248514 DEBUG nova.network.neutron [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Successfully updated port: d8c7cad7-f601-4205-8838-a583b6e04b0f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.387 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 98240df6-1cba-40e1-833c-24611270ed83 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.388 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 41602b99-e7f2-450c-885e-51d07a1236d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.388 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance e6e0fdaf-f934-4e56-8e59-4c4475bacd26 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.388 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance df25cd40-72b5-4e0f-90ec-8677c699d1d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.388 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.389 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.389 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1216MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.424 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "refresh_cache-0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.424 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquired lock "refresh_cache-0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.425 248514 DEBUG nova.network.neutron [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:27:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 369 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 7.2 MiB/s wr, 350 op/s
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.519 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.658 248514 DEBUG nova.network.neutron [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.968 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.973 248514 DEBUG nova.compute.manager [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.991 248514 DEBUG nova.compute.manager [req-67d33348-1eee-47d8-bb42-d5e50f4a4a25 req-79ac18ef-6914-4c37-9f4a-a372c73341e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-changed-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.991 248514 DEBUG nova.compute.manager [req-67d33348-1eee-47d8-bb42-d5e50f4a4a25 req-79ac18ef-6914-4c37-9f4a-a372c73341e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Refreshing instance network info cache due to event network-changed-d8c7cad7-f601-4205-8838-a583b6e04b0f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 03:27:49 np0005558241 nova_compute[248510]: 2025-12-13 08:27:49.991 248514 DEBUG oslo_concurrency.lockutils [req-67d33348-1eee-47d8-bb42-d5e50f4a4a25 req-79ac18ef-6914-4c37-9f4a-a372c73341e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.030 248514 INFO nova.compute.manager [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] instance snapshotting
Dec 13 03:27:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:27:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2606732519' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:27:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:27:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Dec 13 03:27:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Dec 13 03:27:50 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.140 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.146 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.169 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.201 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.202 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.310 248514 INFO nova.virt.libvirt.driver [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Beginning live snapshot process#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.491 248514 DEBUG nova.virt.libvirt.imagebackend [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.540 248514 DEBUG nova.network.neutron [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Updating instance_info_cache with network_info: [{"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.570 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Releasing lock "refresh_cache-0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.571 248514 DEBUG nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance network_info: |[{"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.571 248514 DEBUG oslo_concurrency.lockutils [req-67d33348-1eee-47d8-bb42-d5e50f4a4a25 req-79ac18ef-6914-4c37-9f4a-a372c73341e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.572 248514 DEBUG nova.network.neutron [req-67d33348-1eee-47d8-bb42-d5e50f4a4a25 req-79ac18ef-6914-4c37-9f4a-a372c73341e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Refreshing network info cache for port d8c7cad7-f601-4205-8838-a583b6e04b0f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.575 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Start _get_guest_xml network_info=[{"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.579 248514 WARNING nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.584 248514 DEBUG nova.virt.libvirt.host [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.585 248514 DEBUG nova.virt.libvirt.host [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.588 248514 DEBUG nova.virt.libvirt.host [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.589 248514 DEBUG nova.virt.libvirt.host [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.589 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.589 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.590 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.590 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.590 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.591 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.591 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.591 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.591 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.592 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.592 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.592 248514 DEBUG nova.virt.hardware [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.595 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:27:50 np0005558241 nova_compute[248510]: 2025-12-13 08:27:50.737 248514 DEBUG nova.storage.rbd_utils [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] creating snapshot(a2cef009714948e589373b905ede49bc) on rbd image(df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:27:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Dec 13 03:27:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Dec 13 03:27:51 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Dec 13 03:27:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:27:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/731523404' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.173 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.205 248514 DEBUG nova.storage.rbd_utils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.211 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.251 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.252 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.253 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.253 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.270 248514 DEBUG nova.storage.rbd_utils [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] cloning vms/df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk@a2cef009714948e589373b905ede49bc to images/e4e9ee37-4059-46f1-9bf8-83a72d0403e7 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.379 248514 DEBUG nova.storage.rbd_utils [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] flattening images/e4e9ee37-4059-46f1-9bf8-83a72d0403e7 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec 13 03:27:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 369 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 7.8 MiB/s wr, 331 op/s
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.675 248514 DEBUG nova.storage.rbd_utils [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] removing snapshot(a2cef009714948e589373b905ede49bc) on rbd image(df25cd40-72b5-4e0f-90ec-8677c699d1d3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec 13 03:27:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:27:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1980582512' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.827 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.829 248514 DEBUG nova.virt.libvirt.vif [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1729116913',display_name='tempest-ServerDiskConfigTestJSON-server-1729116913',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1729116913',id=51,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-3hps9qrz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDi
skConfigTestJSON-167971983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:27:44Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=0ccf9d68-ffc2-4f17-a2e7-effa18fc607c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.829 248514 DEBUG nova.network.os_vif_util [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.831 248514 DEBUG nova.network.os_vif_util [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.832 248514 DEBUG nova.objects.instance [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.890 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:27:51 np0005558241 nova_compute[248510]:  <uuid>0ccf9d68-ffc2-4f17-a2e7-effa18fc607c</uuid>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:  <name>instance-00000033</name>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1729116913</nova:name>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:27:50</nova:creationTime>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:27:51 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:        <nova:user uuid="a5bc32e49dbd4372a006913090b9ef0f">tempest-ServerDiskConfigTestJSON-167971983-project-member</nova:user>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:        <nova:project uuid="9aea752cb9b648a7aa9b3f634ced797e">tempest-ServerDiskConfigTestJSON-167971983</nova:project>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:        <nova:port uuid="d8c7cad7-f601-4205-8838-a583b6e04b0f">
Dec 13 03:27:51 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <entry name="serial">0ccf9d68-ffc2-4f17-a2e7-effa18fc607c</entry>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <entry name="uuid">0ccf9d68-ffc2-4f17-a2e7-effa18fc607c</entry>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk">
Dec 13 03:27:51 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:27:51 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config">
Dec 13 03:27:51 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:27:51 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:45:57:38"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <target dev="tapd8c7cad7-f6"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/console.log" append="off"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:27:51 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:27:51 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:27:51 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:27:51 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.891 248514 DEBUG nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Preparing to wait for external event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.895 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.896 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.896 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.897 248514 DEBUG nova.virt.libvirt.vif [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1729116913',display_name='tempest-ServerDiskConfigTestJSON-server-1729116913',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1729116913',id=51,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-3hps9qrz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskConfigTestJSON-167971983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:27:44Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=0ccf9d68-ffc2-4f17-a2e7-effa18fc607c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.897 248514 DEBUG nova.network.os_vif_util [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.898 248514 DEBUG nova.network.os_vif_util [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.899 248514 DEBUG os_vif [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.900 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.900 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.901 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.905 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.906 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd8c7cad7-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.906 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd8c7cad7-f6, col_values=(('external_ids', {'iface-id': 'd8c7cad7-f601-4205-8838-a583b6e04b0f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:57:38', 'vm-uuid': '0ccf9d68-ffc2-4f17-a2e7-effa18fc607c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.908 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:27:51 np0005558241 NetworkManager[50376]: <info>  [1765614471.9094] manager: (tapd8c7cad7-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.911 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.919 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.921 248514 INFO os_vif [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6')#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.994 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.996 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.996 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No VIF found with MAC fa:16:3e:45:57:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:27:51 np0005558241 nova_compute[248510]: 2025-12-13 08:27:51.996 248514 INFO nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Using config drive#033[00m
Dec 13 03:27:52 np0005558241 nova_compute[248510]: 2025-12-13 08:27:52.018 248514 DEBUG nova.storage.rbd_utils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:27:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Dec 13 03:27:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Dec 13 03:27:52 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Dec 13 03:27:52 np0005558241 nova_compute[248510]: 2025-12-13 08:27:52.179 248514 DEBUG nova.storage.rbd_utils [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] creating snapshot(snap) on rbd image(e4e9ee37-4059-46f1-9bf8-83a72d0403e7) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:27:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Dec 13 03:27:53 np0005558241 nova_compute[248510]: 2025-12-13 08:27:53.161 248514 INFO nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Creating config drive at /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config#033[00m
Dec 13 03:27:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Dec 13 03:27:53 np0005558241 nova_compute[248510]: 2025-12-13 08:27:53.171 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo8_6a34e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:27:53 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Dec 13 03:27:53 np0005558241 nova_compute[248510]: 2025-12-13 08:27:53.264 248514 DEBUG nova.network.neutron [req-67d33348-1eee-47d8-bb42-d5e50f4a4a25 req-79ac18ef-6914-4c37-9f4a-a372c73341e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Updated VIF entry in instance network info cache for port d8c7cad7-f601-4205-8838-a583b6e04b0f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:27:53 np0005558241 nova_compute[248510]: 2025-12-13 08:27:53.265 248514 DEBUG nova.network.neutron [req-67d33348-1eee-47d8-bb42-d5e50f4a4a25 req-79ac18ef-6914-4c37-9f4a-a372c73341e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Updating instance_info_cache with network_info: [{"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:27:53 np0005558241 nova_compute[248510]: 2025-12-13 08:27:53.293 248514 DEBUG oslo_concurrency.lockutils [req-67d33348-1eee-47d8-bb42-d5e50f4a4a25 req-79ac18ef-6914-4c37-9f4a-a372c73341e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:27:53 np0005558241 nova_compute[248510]: 2025-12-13 08:27:53.323 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo8_6a34e" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:27:53 np0005558241 nova_compute[248510]: 2025-12-13 08:27:53.356 248514 DEBUG nova.storage.rbd_utils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:27:53 np0005558241 nova_compute[248510]: 2025-12-13 08:27:53.362 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:27:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 402 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.7 MiB/s wr, 145 op/s
Dec 13 03:27:53 np0005558241 nova_compute[248510]: 2025-12-13 08:27:53.530 248514 DEBUG oslo_concurrency.processutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:27:53 np0005558241 nova_compute[248510]: 2025-12-13 08:27:53.531 248514 INFO nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Deleting local config drive /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config because it was imported into RBD.#033[00m
Dec 13 03:27:53 np0005558241 kernel: tapd8c7cad7-f6: entered promiscuous mode
Dec 13 03:27:53 np0005558241 NetworkManager[50376]: <info>  [1765614473.5821] manager: (tapd8c7cad7-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/204)
Dec 13 03:27:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:27:53Z|00439|binding|INFO|Claiming lport d8c7cad7-f601-4205-8838-a583b6e04b0f for this chassis.
Dec 13 03:27:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:27:53Z|00440|binding|INFO|d8c7cad7-f601-4205-8838-a583b6e04b0f: Claiming fa:16:3e:45:57:38 10.100.0.8
Dec 13 03:27:53 np0005558241 nova_compute[248510]: 2025-12-13 08:27:53.591 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.593 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:57:38 10.100.0.8'], port_security=['fa:16:3e:45:57:38 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0ccf9d68-ffc2-4f17-a2e7-effa18fc607c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c63049d-63e9-47af-99e2-ce1403a42891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '568e13d6-6674-49c1-866e-8e025ebc1046', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e00a5e3-7050-4e40-9e5a-7a1e6eee34cc, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d8c7cad7-f601-4205-8838-a583b6e04b0f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.594 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d8c7cad7-f601-4205-8838-a583b6e04b0f in datapath 6c63049d-63e9-47af-99e2-ce1403a42891 bound to our chassis#033[00m
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.596 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c63049d-63e9-47af-99e2-ce1403a42891#033[00m
Dec 13 03:27:53 np0005558241 nova_compute[248510]: 2025-12-13 08:27:53.604 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "678d2db2-0536-4744-b65c-f0a5852f35e0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:27:53 np0005558241 nova_compute[248510]: 2025-12-13 08:27:53.605 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:27:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:27:53Z|00441|binding|INFO|Setting lport d8c7cad7-f601-4205-8838-a583b6e04b0f ovn-installed in OVS
Dec 13 03:27:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:27:53Z|00442|binding|INFO|Setting lport d8c7cad7-f601-4205-8838-a583b6e04b0f up in Southbound
Dec 13 03:27:53 np0005558241 nova_compute[248510]: 2025-12-13 08:27:53.612 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.613 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[94fd69fc-7662-4023-8768-03cfc294a657]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.614 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c63049d-61 in ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.616 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c63049d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.616 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d57901c-29aa-41fe-9488-86c3d732c4ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:27:53 np0005558241 nova_compute[248510]: 2025-12-13 08:27:53.616 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.617 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bdbc2735-bc23-4827-9fd6-df715a556843]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:27:53 np0005558241 systemd-udevd[300320]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.631 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f41552-b101-43a5-a62c-cc1bf88bdd8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:27:53 np0005558241 systemd-machined[210538]: New machine qemu-57-instance-00000033.
Dec 13 03:27:53 np0005558241 NetworkManager[50376]: <info>  [1765614473.6467] device (tapd8c7cad7-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:27:53 np0005558241 nova_compute[248510]: 2025-12-13 08:27:53.645 248514 DEBUG nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:27:53 np0005558241 NetworkManager[50376]: <info>  [1765614473.6478] device (tapd8c7cad7-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:27:53 np0005558241 systemd[1]: Started Virtual Machine qemu-57-instance-00000033.
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.649 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6642c099-2bf9-42a4-b59a-e60095f531c6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.687 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc26f04-c507-4391-b453-07a9bd2fa919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:27:53 np0005558241 systemd-udevd[300323]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:27:53 np0005558241 NetworkManager[50376]: <info>  [1765614473.6964] manager: (tap6c63049d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/205)
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.697 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5dd30a55-1f7d-45d3-bedd-926a47cc7784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.739 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa4c981-48a4-4cba-be09-550e39cfd7e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.743 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7295da1a-7727-4737-8ec0-7c25b0334d96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:27:53 np0005558241 NetworkManager[50376]: <info>  [1765614473.7757] device (tap6c63049d-60): carrier: link connected
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.787 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[06b4d152-0631-4471-86e6-9f4e9ae2cf82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.808 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[24786ef4-ded4-48a5-90a6-c3546697a5a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c63049d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705099, 'reachable_time': 15368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300351, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.827 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9abc9395-0035-465e-b634-63d03554c922]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:c2f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 705099, 'tstamp': 705099}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300352, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.848 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d036c907-fb8d-443d-ac82-a1bb98f5646b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c63049d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705099, 'reachable_time': 15368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300353, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.885 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c84c08f4-248b-4578-8488-b985db772c2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.992 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dfed81eb-6c9b-4bd4-93e0-793a055d4ca3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.994 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c63049d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.994 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 03:27:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:53.995 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c63049d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:27:53 np0005558241 nova_compute[248510]: 2025-12-13 08:27:53.997 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:27:53 np0005558241 kernel: tap6c63049d-60: entered promiscuous mode
Dec 13 03:27:53 np0005558241 NetworkManager[50376]: <info>  [1765614473.9996] manager: (tap6c63049d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:54.003 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c63049d-60, col_values=(('external_ids', {'iface-id': 'b410790c-12b7-4a29-87e5-13a29af9c319'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:27:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:27:54Z|00443|binding|INFO|Releasing lport b410790c-12b7-4a29-87e5-13a29af9c319 from this chassis (sb_readonly=0)
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.005 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:54.007 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:54.008 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cb855859-6d18-42d3-9405-0e1de83c8843]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:54.009 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 03:27:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:54.010 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'env', 'PROCESS_TAG=haproxy-6c63049d-63e9-47af-99e2-ce1403a42891', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c63049d-63e9-47af-99e2-ce1403a42891.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.022 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.191 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614474.1906586, 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.191 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] VM Started (Lifecycle Event)
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.252 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614459.250947, 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.253 248514 INFO nova.compute.manager [-] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] VM Stopped (Lifecycle Event)
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.322 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.329 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614474.1931643, 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.330 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] VM Paused (Lifecycle Event)
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.338 248514 DEBUG nova.compute.manager [None req-e612e9e0-5e5b-4ebf-ac6e-293bb5ff1b0c - - - - - -] [instance: 05e06a6b-e157-4cd9-88c0-889fa4cfd9fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.426 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.430 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.436 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.437 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.452 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.452 248514 INFO nova.compute.claims [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:27:54 np0005558241 podman[300425]: 2025-12-13 08:27:54.457700404 +0000 UTC m=+0.070152282 container create 7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.475 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:27:54 np0005558241 systemd[1]: Started libpod-conmon-7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759.scope.
Dec 13 03:27:54 np0005558241 podman[300425]: 2025-12-13 08:27:54.423733905 +0000 UTC m=+0.036185803 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:27:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:27:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a1045c2667f2928359016f34c22c18152582122077ea101da9c4a143c306b5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:27:54 np0005558241 podman[300425]: 2025-12-13 08:27:54.573201733 +0000 UTC m=+0.185653621 container init 7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 03:27:54 np0005558241 podman[300425]: 2025-12-13 08:27:54.580701238 +0000 UTC m=+0.193153106 container start 7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 03:27:54 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[300440]: [NOTICE]   (300444) : New worker (300446) forked
Dec 13 03:27:54 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[300440]: [NOTICE]   (300444) : Loading success.
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.743 248514 DEBUG oslo_concurrency.lockutils [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.744 248514 DEBUG oslo_concurrency.lockutils [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.744 248514 DEBUG nova.compute.manager [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.752 248514 DEBUG nova.compute.manager [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.753 248514 DEBUG nova.objects.instance [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'flavor' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.814 248514 DEBUG nova.virt.libvirt.driver [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 03:27:54 np0005558241 nova_compute[248510]: 2025-12-13 08:27:54.972 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:27:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:27:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:55.409 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:27:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:55.411 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:27:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:55.411 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:27:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 418 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.1 MiB/s wr, 132 op/s
Dec 13 03:27:55 np0005558241 nova_compute[248510]: 2025-12-13 08:27:55.647 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:27:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:27:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1999904515' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.282 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.288 248514 DEBUG nova.compute.provider_tree [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.540 248514 DEBUG nova.compute.manager [req-61b934ce-3bda-4893-9325-df9a558a73f5 req-4eb31e31-222f-47de-9f95-3712cc3ec4e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.541 248514 DEBUG oslo_concurrency.lockutils [req-61b934ce-3bda-4893-9325-df9a558a73f5 req-4eb31e31-222f-47de-9f95-3712cc3ec4e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.541 248514 DEBUG oslo_concurrency.lockutils [req-61b934ce-3bda-4893-9325-df9a558a73f5 req-4eb31e31-222f-47de-9f95-3712cc3ec4e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.541 248514 DEBUG oslo_concurrency.lockutils [req-61b934ce-3bda-4893-9325-df9a558a73f5 req-4eb31e31-222f-47de-9f95-3712cc3ec4e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.542 248514 DEBUG nova.compute.manager [req-61b934ce-3bda-4893-9325-df9a558a73f5 req-4eb31e31-222f-47de-9f95-3712cc3ec4e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Processing event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.544 248514 DEBUG nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.548 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614476.5483558, 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.549 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.553 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.556 248514 INFO nova.virt.libvirt.driver [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance spawned successfully.#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.557 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.768 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.791 248514 INFO nova.virt.libvirt.driver [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Snapshot image upload complete#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.792 248514 INFO nova.compute.manager [None req-e587ff61-b067-425c-912a-f8d1f62d67fc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Took 6.76 seconds to snapshot the instance on the hypervisor.#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.839 248514 DEBUG nova.scheduler.client.report [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.911 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.914 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.936 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.938 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.944 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.944 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.945 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.945 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.945 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.947 248514 DEBUG nova.virt.libvirt.driver [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.953 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:27:56 np0005558241 nova_compute[248510]: 2025-12-13 08:27:56.953 248514 DEBUG nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.055 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:27:57 np0005558241 kernel: tapbdc94f2e-b1 (unregistering): left promiscuous mode
Dec 13 03:27:57 np0005558241 NetworkManager[50376]: <info>  [1765614477.0964] device (tapbdc94f2e-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.113 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:27:57 np0005558241 ovn_controller[148476]: 2025-12-13T08:27:57Z|00444|binding|INFO|Releasing lport bdc94f2e-b14e-4e39-bea0-978ff56ff722 from this chassis (sb_readonly=0)
Dec 13 03:27:57 np0005558241 ovn_controller[148476]: 2025-12-13T08:27:57Z|00445|binding|INFO|Setting lport bdc94f2e-b14e-4e39-bea0-978ff56ff722 down in Southbound
Dec 13 03:27:57 np0005558241 ovn_controller[148476]: 2025-12-13T08:27:57Z|00446|binding|INFO|Removing iface tapbdc94f2e-b1 ovn-installed in OVS
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.120 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:27:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.129 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:ce:b2 10.100.0.10'], port_security=['fa:16:3e:d6:ce:b2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '98240df6-1cba-40e1-833c-24611270ed83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7576f079-0439-46aa-98af-04f80cd254ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3490ad817e664ff6b12c4ea88192b667', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be530db5-69a3-4a66-a983-785ca72e353e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97e2093c-87d8-4348-abd8-68baa3943443, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=bdc94f2e-b14e-4e39-bea0-978ff56ff722) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:27:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.130 158419 INFO neutron.agent.ovn.metadata.agent [-] Port bdc94f2e-b14e-4e39-bea0-978ff56ff722 in datapath 7576f079-0439-46aa-98af-04f80cd254ca unbound from our chassis#033[00m
Dec 13 03:27:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.132 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7576f079-0439-46aa-98af-04f80cd254ca#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.142 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.147 248514 DEBUG nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:27:57 np0005558241 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.148 248514 DEBUG nova.network.neutron [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:27:57 np0005558241 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002f.scope: Consumed 13.739s CPU time.
Dec 13 03:27:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.152 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b6ced0-56ec-43ad-a450-e886a338b6df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:27:57 np0005558241 systemd-machined[210538]: Machine qemu-52-instance-0000002f terminated.
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.155 248514 INFO nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Took 12.60 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.155 248514 DEBUG nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:27:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.198 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e6625854-b57b-4aba-894f-ff3b4e08ee0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:27:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.202 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7e8c47-31df-43d5-b63e-4997b0eb440b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:27:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.252 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[36ec10aa-0b9a-4290-ab6a-7aed8815f6c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:27:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.271 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[45d76601-d2f8-4f7d-84cb-6344876f578e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7576f079-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:f1:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701789, 'reachable_time': 37822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300488, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:27:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.288 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3633d8-1e1b-4bf0-8ba9-88c362813be0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701804, 'tstamp': 701804}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300489, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701808, 'tstamp': 701808}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300489, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:27:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.290 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7576f079-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.293 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.298 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:27:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.298 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7576f079-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:27:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.298 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:27:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.298 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7576f079-00, col_values=(('external_ids', {'iface-id': '5d3dc22b-9ad2-46cb-b9cb-35c58ffd1fa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:27:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:27:57.299 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.324 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:27:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1907: 321 pgs: 321 active+clean; 418 MiB data, 649 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 111 op/s
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.515 248514 INFO nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.610 248514 DEBUG nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.681 248514 INFO nova.compute.manager [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Took 14.97 seconds to build instance.#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.730 248514 DEBUG oslo_concurrency.lockutils [None req-a1b8e51f-076f-4981-bc29-24874b0fc96f a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.770 248514 DEBUG nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.772 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.772 248514 INFO nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Creating image(s)#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.796 248514 DEBUG nova.storage.rbd_utils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 678d2db2-0536-4744-b65c-f0a5852f35e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.865 248514 DEBUG nova.storage.rbd_utils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 678d2db2-0536-4744-b65c-f0a5852f35e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.897 248514 DEBUG nova.storage.rbd_utils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 678d2db2-0536-4744-b65c-f0a5852f35e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.901 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.941 248514 DEBUG nova.policy [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b6f7967ce16c45d394188c1302b02907', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.944 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.950 248514 INFO nova.virt.libvirt.driver [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Instance shutdown successfully after 3 seconds.#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.959 248514 INFO nova.virt.libvirt.driver [-] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Instance destroyed successfully.#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.960 248514 DEBUG nova.objects.instance [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'numa_topology' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.997 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.997 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.998 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:27:57 np0005558241 nova_compute[248510]: 2025-12-13 08:27:57.999 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:27:58 np0005558241 nova_compute[248510]: 2025-12-13 08:27:58.021 248514 DEBUG nova.storage.rbd_utils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 678d2db2-0536-4744-b65c-f0a5852f35e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:27:58 np0005558241 nova_compute[248510]: 2025-12-13 08:27:58.025 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 678d2db2-0536-4744-b65c-f0a5852f35e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:27:58 np0005558241 nova_compute[248510]: 2025-12-13 08:27:58.253 248514 DEBUG nova.compute.manager [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:27:58 np0005558241 nova_compute[248510]: 2025-12-13 08:27:58.296 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 678d2db2-0536-4744-b65c-f0a5852f35e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:27:58 np0005558241 nova_compute[248510]: 2025-12-13 08:27:58.361 248514 DEBUG nova.storage.rbd_utils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] resizing rbd image 678d2db2-0536-4744-b65c-f0a5852f35e0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:27:58 np0005558241 nova_compute[248510]: 2025-12-13 08:27:58.506 248514 DEBUG nova.objects.instance [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'migration_context' on Instance uuid 678d2db2-0536-4744-b65c-f0a5852f35e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:27:58 np0005558241 nova_compute[248510]: 2025-12-13 08:27:58.899 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:27:58 np0005558241 nova_compute[248510]: 2025-12-13 08:27:58.900 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Ensure instance console log exists: /var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:27:58 np0005558241 nova_compute[248510]: 2025-12-13 08:27:58.900 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:27:58 np0005558241 nova_compute[248510]: 2025-12-13 08:27:58.901 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:27:58 np0005558241 nova_compute[248510]: 2025-12-13 08:27:58.901 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:27:58 np0005558241 nova_compute[248510]: 2025-12-13 08:27:58.935 248514 DEBUG oslo_concurrency.lockutils [None req-ffd6315a-6c3f-428b-b33d-fddc4222fea3 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 4.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:27:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 448 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 5.4 MiB/s wr, 287 op/s
Dec 13 03:27:59 np0005558241 nova_compute[248510]: 2025-12-13 08:27:59.635 248514 DEBUG nova.compute.manager [req-afb43441-df74-428d-a6fa-4c5d6f60c85e req-5a7b18c6-fce1-485d-bb08-846a0c415385 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received event network-vif-unplugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:27:59 np0005558241 nova_compute[248510]: 2025-12-13 08:27:59.635 248514 DEBUG oslo_concurrency.lockutils [req-afb43441-df74-428d-a6fa-4c5d6f60c85e req-5a7b18c6-fce1-485d-bb08-846a0c415385 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:27:59 np0005558241 nova_compute[248510]: 2025-12-13 08:27:59.635 248514 DEBUG oslo_concurrency.lockutils [req-afb43441-df74-428d-a6fa-4c5d6f60c85e req-5a7b18c6-fce1-485d-bb08-846a0c415385 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:27:59 np0005558241 nova_compute[248510]: 2025-12-13 08:27:59.636 248514 DEBUG oslo_concurrency.lockutils [req-afb43441-df74-428d-a6fa-4c5d6f60c85e req-5a7b18c6-fce1-485d-bb08-846a0c415385 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:27:59 np0005558241 nova_compute[248510]: 2025-12-13 08:27:59.636 248514 DEBUG nova.compute.manager [req-afb43441-df74-428d-a6fa-4c5d6f60c85e req-5a7b18c6-fce1-485d-bb08-846a0c415385 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] No waiting events found dispatching network-vif-unplugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:27:59 np0005558241 nova_compute[248510]: 2025-12-13 08:27:59.636 248514 WARNING nova.compute.manager [req-afb43441-df74-428d-a6fa-4c5d6f60c85e req-5a7b18c6-fce1-485d-bb08-846a0c415385 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received unexpected event network-vif-unplugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 for instance with vm_state stopped and task_state None.#033[00m
Dec 13 03:27:59 np0005558241 nova_compute[248510]: 2025-12-13 08:27:59.975 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:27:59 np0005558241 nova_compute[248510]: 2025-12-13 08:27:59.995 248514 DEBUG nova.compute.manager [req-675d4482-7869-4cfb-8db1-d702200e487d req-8f51ea0a-4506-4160-b235-f62b9412ec4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:27:59 np0005558241 nova_compute[248510]: 2025-12-13 08:27:59.996 248514 DEBUG oslo_concurrency.lockutils [req-675d4482-7869-4cfb-8db1-d702200e487d req-8f51ea0a-4506-4160-b235-f62b9412ec4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:27:59 np0005558241 nova_compute[248510]: 2025-12-13 08:27:59.996 248514 DEBUG oslo_concurrency.lockutils [req-675d4482-7869-4cfb-8db1-d702200e487d req-8f51ea0a-4506-4160-b235-f62b9412ec4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:27:59 np0005558241 nova_compute[248510]: 2025-12-13 08:27:59.996 248514 DEBUG oslo_concurrency.lockutils [req-675d4482-7869-4cfb-8db1-d702200e487d req-8f51ea0a-4506-4160-b235-f62b9412ec4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:27:59 np0005558241 nova_compute[248510]: 2025-12-13 08:27:59.996 248514 DEBUG nova.compute.manager [req-675d4482-7869-4cfb-8db1-d702200e487d req-8f51ea0a-4506-4160-b235-f62b9412ec4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] No waiting events found dispatching network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:27:59 np0005558241 nova_compute[248510]: 2025-12-13 08:27:59.997 248514 WARNING nova.compute.manager [req-675d4482-7869-4cfb-8db1-d702200e487d req-8f51ea0a-4506-4160-b235-f62b9412ec4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received unexpected event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f for instance with vm_state active and task_state None.#033[00m
Dec 13 03:28:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:28:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Dec 13 03:28:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Dec 13 03:28:00 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Dec 13 03:28:00 np0005558241 nova_compute[248510]: 2025-12-13 08:28:00.813 248514 DEBUG nova.network.neutron [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Successfully created port: 3a8efc9e-7582-4b17-ab0e-b248e09932b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:28:00 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:00Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1c:88:26 10.100.0.12
Dec 13 03:28:00 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:00Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1c:88:26 10.100.0.12
Dec 13 03:28:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 448 MiB data, 664 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.4 MiB/s wr, 208 op/s
Dec 13 03:28:01 np0005558241 nova_compute[248510]: 2025-12-13 08:28:01.915 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:03 np0005558241 nova_compute[248510]: 2025-12-13 08:28:03.274 248514 DEBUG nova.compute.manager [req-a225be72-2e73-41af-9e0e-bc19e38bfd14 req-9d424ea1-26b5-40ab-a375-9db3be78e529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:03 np0005558241 nova_compute[248510]: 2025-12-13 08:28:03.274 248514 DEBUG oslo_concurrency.lockutils [req-a225be72-2e73-41af-9e0e-bc19e38bfd14 req-9d424ea1-26b5-40ab-a375-9db3be78e529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:03 np0005558241 nova_compute[248510]: 2025-12-13 08:28:03.275 248514 DEBUG oslo_concurrency.lockutils [req-a225be72-2e73-41af-9e0e-bc19e38bfd14 req-9d424ea1-26b5-40ab-a375-9db3be78e529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:03 np0005558241 nova_compute[248510]: 2025-12-13 08:28:03.275 248514 DEBUG oslo_concurrency.lockutils [req-a225be72-2e73-41af-9e0e-bc19e38bfd14 req-9d424ea1-26b5-40ab-a375-9db3be78e529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:03 np0005558241 nova_compute[248510]: 2025-12-13 08:28:03.275 248514 DEBUG nova.compute.manager [req-a225be72-2e73-41af-9e0e-bc19e38bfd14 req-9d424ea1-26b5-40ab-a375-9db3be78e529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] No waiting events found dispatching network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:03 np0005558241 nova_compute[248510]: 2025-12-13 08:28:03.275 248514 WARNING nova.compute.manager [req-a225be72-2e73-41af-9e0e-bc19e38bfd14 req-9d424ea1-26b5-40ab-a375-9db3be78e529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received unexpected event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 for instance with vm_state stopped and task_state None.#033[00m
Dec 13 03:28:03 np0005558241 nova_compute[248510]: 2025-12-13 08:28:03.277 248514 DEBUG nova.network.neutron [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Successfully updated port: 3a8efc9e-7582-4b17-ab0e-b248e09932b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:28:03 np0005558241 nova_compute[248510]: 2025-12-13 08:28:03.389 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "refresh_cache-678d2db2-0536-4744-b65c-f0a5852f35e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:28:03 np0005558241 nova_compute[248510]: 2025-12-13 08:28:03.390 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquired lock "refresh_cache-678d2db2-0536-4744-b65c-f0a5852f35e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:28:03 np0005558241 nova_compute[248510]: 2025-12-13 08:28:03.390 248514 DEBUG nova.network.neutron [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:28:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 488 MiB data, 689 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.3 MiB/s wr, 207 op/s
Dec 13 03:28:03 np0005558241 nova_compute[248510]: 2025-12-13 08:28:03.682 248514 DEBUG nova.network.neutron [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:28:03 np0005558241 nova_compute[248510]: 2025-12-13 08:28:03.740 248514 DEBUG nova.objects.instance [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'flavor' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:03 np0005558241 nova_compute[248510]: 2025-12-13 08:28:03.784 248514 DEBUG oslo_concurrency.lockutils [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "refresh_cache-98240df6-1cba-40e1-833c-24611270ed83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:28:03 np0005558241 nova_compute[248510]: 2025-12-13 08:28:03.785 248514 DEBUG oslo_concurrency.lockutils [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquired lock "refresh_cache-98240df6-1cba-40e1-833c-24611270ed83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:28:03 np0005558241 nova_compute[248510]: 2025-12-13 08:28:03.785 248514 DEBUG nova.network.neutron [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:28:03 np0005558241 nova_compute[248510]: 2025-12-13 08:28:03.785 248514 DEBUG nova.objects.instance [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'info_cache' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:03 np0005558241 nova_compute[248510]: 2025-12-13 08:28:03.791 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:03 np0005558241 nova_compute[248510]: 2025-12-13 08:28:03.792 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:04 np0005558241 nova_compute[248510]: 2025-12-13 08:28:04.071 248514 DEBUG nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:28:04 np0005558241 nova_compute[248510]: 2025-12-13 08:28:04.242 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:04 np0005558241 nova_compute[248510]: 2025-12-13 08:28:04.243 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:04 np0005558241 nova_compute[248510]: 2025-12-13 08:28:04.251 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:28:04 np0005558241 nova_compute[248510]: 2025-12-13 08:28:04.251 248514 INFO nova.compute.claims [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:28:04 np0005558241 nova_compute[248510]: 2025-12-13 08:28:04.916 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:04 np0005558241 nova_compute[248510]: 2025-12-13 08:28:04.983 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.089 248514 DEBUG nova.network.neutron [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Updating instance_info_cache with network_info: [{"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.111 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Releasing lock "refresh_cache-678d2db2-0536-4744-b65c-f0a5852f35e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.112 248514 DEBUG nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Instance network_info: |[{"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.114 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Start _get_guest_xml network_info=[{"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.119 248514 WARNING nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.123 248514 DEBUG nova.virt.libvirt.host [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.124 248514 DEBUG nova.virt.libvirt.host [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.126 248514 DEBUG nova.virt.libvirt.host [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.126 248514 DEBUG nova.virt.libvirt.host [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.127 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.127 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.128 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.128 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.128 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.129 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.129 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.129 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.130 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.130 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.130 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.130 248514 DEBUG nova.virt.hardware [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.135 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.360 248514 DEBUG nova.compute.manager [req-f28d2be9-23a5-4f56-9818-a9ca08e6750b req-c1db17bd-fdf4-47eb-ae6a-d777eb4ce640 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Received event network-changed-3a8efc9e-7582-4b17-ab0e-b248e09932b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.361 248514 DEBUG nova.compute.manager [req-f28d2be9-23a5-4f56-9818-a9ca08e6750b req-c1db17bd-fdf4-47eb-ae6a-d777eb4ce640 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Refreshing instance network info cache due to event network-changed-3a8efc9e-7582-4b17-ab0e-b248e09932b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.361 248514 DEBUG oslo_concurrency.lockutils [req-f28d2be9-23a5-4f56-9818-a9ca08e6750b req-c1db17bd-fdf4-47eb-ae6a-d777eb4ce640 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-678d2db2-0536-4744-b65c-f0a5852f35e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.362 248514 DEBUG oslo_concurrency.lockutils [req-f28d2be9-23a5-4f56-9818-a9ca08e6750b req-c1db17bd-fdf4-47eb-ae6a-d777eb4ce640 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-678d2db2-0536-4744-b65c-f0a5852f35e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.362 248514 DEBUG nova.network.neutron [req-f28d2be9-23a5-4f56-9818-a9ca08e6750b req-c1db17bd-fdf4-47eb-ae6a-d777eb4ce640 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Refreshing network info cache for port 3a8efc9e-7582-4b17-ab0e-b248e09932b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:28:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 497 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 216 op/s
Dec 13 03:28:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:28:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3802705485' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.554 248514 DEBUG nova.network.neutron [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Updating instance_info_cache with network_info: [{"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.560 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.567 248514 DEBUG nova.compute.provider_tree [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.582 248514 DEBUG oslo_concurrency.lockutils [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Releasing lock "refresh_cache-98240df6-1cba-40e1-833c-24611270ed83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.599 248514 DEBUG nova.scheduler.client.report [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.617 248514 INFO nova.virt.libvirt.driver [-] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Instance destroyed successfully.#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.618 248514 DEBUG nova.objects.instance [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'numa_topology' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:28:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1834718991' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.734 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.756 248514 DEBUG nova.storage.rbd_utils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 678d2db2-0536-4744-b65c-f0a5852f35e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.761 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.850 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.852 248514 DEBUG nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.856 248514 DEBUG nova.objects.instance [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'resources' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.892 248514 DEBUG nova.virt.libvirt.vif [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-612802196',display_name='tempest-ListServerFiltersTestJSON-instance-612802196',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-612802196',id=47,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3490ad817e664ff6b12c4ea88192b667',ramdisk_id='',reservation_id='r-wpyl99bg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1229542462',owner_user_name='tempest-ListServerFiltersTestJSON-1229542462-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:27:58Z,user_data=None,user_id='65a6b617130a42ac9c3d9b4abf6a1cfb',uuid=98240df6-1cba-40e1-833c-24611270ed83,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.894 248514 DEBUG nova.network.os_vif_util [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converting VIF {"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.895 248514 DEBUG nova.network.os_vif_util [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.896 248514 DEBUG os_vif [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.899 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.900 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbdc94f2e-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.902 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.904 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.907 248514 INFO os_vif [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1')#033[00m
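The unplug sequence above (Converting VIF → `DelPortCommand` → "Successfully unplugged") refers to the port by its `devname`, `tapbdc94f2e-b1`, which Nova/Neutron derive from the Neutron port UUID. A minimal sketch of that convention, assuming the standard scheme of a `tap` prefix plus a truncated UUID so the name fits the kernel's interface-name limit:

```python
# Sketch: how the devname "tapbdc94f2e-b1" in the log relates to the
# Neutron port UUID "bdc94f2e-b14e-...". Linux interface names are
# capped (IFNAMSIZ), so Neutron keeps tap names to 14 characters:
# the 3-char "tap" prefix plus the first 11 characters of the UUID.
NIC_NAME_LEN = 14  # Neutron's device-name length limit

def tap_name(port_id: str, prefix: str = "tap") -> str:
    # Concatenate prefix + UUID, then truncate to the 14-char limit.
    return (prefix + port_id)[:NIC_NAME_LEN]

# Port UUID taken from the VIF logged above.
assert tap_name("bdc94f2e-b14e-4e39-bea0-978ff56ff722") == "tapbdc94f2e-b1"
```

The logged `DelPortCommand(..., if_exists=True)` is the ovsdbapp equivalent of `ovs-vsctl --if-exists del-port br-int tapbdc94f2e-b1`: idempotent, so retrying an unplug after a partial failure is safe.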
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.916 248514 DEBUG nova.virt.libvirt.driver [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Start _get_guest_xml network_info=[{"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.920 248514 DEBUG nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.921 248514 DEBUG nova.network.neutron [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.928 248514 WARNING nova.virt.libvirt.driver [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.935 248514 DEBUG nova.virt.libvirt.host [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.936 248514 DEBUG nova.virt.libvirt.host [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.940 248514 DEBUG nova.virt.libvirt.host [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.940 248514 DEBUG nova.virt.libvirt.host [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.941 248514 DEBUG nova.virt.libvirt.driver [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.941 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.942 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.942 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.942 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.943 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.943 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.943 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.944 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.944 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.944 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.945 248514 DEBUG nova.virt.hardware [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.945 248514 DEBUG nova.objects.instance [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.956 248514 INFO nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:28:05 np0005558241 nova_compute[248510]: 2025-12-13 08:28:05.977 248514 DEBUG oslo_concurrency.processutils [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.013 248514 DEBUG nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.135 248514 DEBUG nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.137 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.138 248514 INFO nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Creating image(s)#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.160 248514 DEBUG nova.storage.rbd_utils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.188 248514 DEBUG nova.storage.rbd_utils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.213 248514 DEBUG nova.storage.rbd_utils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.217 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "4ca0214a272891cae00922c0f452dbc91c01667e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.218 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "4ca0214a272891cae00922c0f452dbc91c01667e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
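The lock name `4ca0214a272891cae00922c0f452dbc91c01667e` acquired above is a 40-character hex string, consistent with Nova's image-cache scheme of naming cache entries (and the fetch lock) after a SHA-1 of the Glance image UUID, so concurrent builds from the same image serialise on one download. A sketch of that scheme; whether this particular lock value is the SHA-1 of the image UUID logged nearby (`0ed20320-9c25-4108-ad76-64b3cb3500ce`) is an assumption, not verified here:

```python
import hashlib

# Sketch (assumed scheme): the image-cache file/lock name is the SHA-1
# hex digest of the image UUID, giving a fixed-length, filesystem-safe
# key that is identical for every instance booted from that image.
def cache_fname(image_id: str) -> str:
    return hashlib.sha1(image_id.encode("utf-8")).hexdigest()

name = cache_fname("0ed20320-9c25-4108-ad76-64b3cb3500ce")
# SHA-1 digests are always 40 lowercase hex characters, like the
# lock name in the log.
assert len(name) == 40 and set(name) <= set("0123456789abcdef")
```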
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.226 248514 DEBUG nova.policy [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b988c7ac9354c59aac9a9f41f83c20f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '52e1055963294dbdb16cd95b466cd4d9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:28:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:28:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/395874849' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.376 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
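The `ceph mon dump --format=json` call that just returned (0.615s) is how Nova's RBD backend learns the monitor addresses that end up as the `<host name="192.168.122.100" port="6789"/>` elements in the libvirt disk XML further down. A sketch of parsing that output; the JSON below is a trimmed, illustrative shape, not the full dump:

```python
import json

# Illustrative (heavily trimmed) shape of `ceph mon dump --format=json`.
# The real dump carries many more fields; only "mons[].addr" matters here.
MON_DUMP = json.loads("""
{"epoch": 1,
 "mons": [{"rank": 0, "name": "compute-0",
           "public_addr": "192.168.122.100:6789/0",
           "addr": "192.168.122.100:6789/0"}]}
""")

def monitor_endpoints(dump):
    # Ceph appends a "/<nonce>" to each address; strip it to get the
    # plain host:port pairs usable in libvirt <host> elements.
    return [m["addr"].rsplit("/", 1)[0] for m in dump["mons"]]

assert monitor_endpoints(MON_DUMP) == ["192.168.122.100:6789"]
```

In a multi-monitor cluster this yields one endpoint per mon, and the generated domain XML would carry one `<host>` element for each.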
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.377 248514 DEBUG nova.virt.libvirt.vif [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1859844308',display_name='tempest-DeleteServersTestJSON-server-1859844308',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1859844308',id=52,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-vi6y6qcl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServersTestJSO
N-991966373-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:27:57Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=678d2db2-0536-4744-b65c-f0a5852f35e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.378 248514 DEBUG nova.network.os_vif_util [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.379 248514 DEBUG nova.network.os_vif_util [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:30:15,bridge_name='br-int',has_traffic_filtering=True,id=3a8efc9e-7582-4b17-ab0e-b248e09932b3,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a8efc9e-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.380 248514 DEBUG nova.objects.instance [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 678d2db2-0536-4744-b65c-f0a5852f35e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.400 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:28:06 np0005558241 nova_compute[248510]:  <uuid>678d2db2-0536-4744-b65c-f0a5852f35e0</uuid>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:  <name>instance-00000034</name>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <nova:name>tempest-DeleteServersTestJSON-server-1859844308</nova:name>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:28:05</nova:creationTime>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:28:06 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:        <nova:user uuid="b6f7967ce16c45d394188c1302b02907">tempest-DeleteServersTestJSON-991966373-project-member</nova:user>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:        <nova:project uuid="6f6beadbb0244529b8dfc1abff8e8e10">tempest-DeleteServersTestJSON-991966373</nova:project>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:        <nova:port uuid="3a8efc9e-7582-4b17-ab0e-b248e09932b3">
Dec 13 03:28:06 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <entry name="serial">678d2db2-0536-4744-b65c-f0a5852f35e0</entry>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <entry name="uuid">678d2db2-0536-4744-b65c-f0a5852f35e0</entry>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/678d2db2-0536-4744-b65c-f0a5852f35e0_disk">
Dec 13 03:28:06 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:28:06 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/678d2db2-0536-4744-b65c-f0a5852f35e0_disk.config">
Dec 13 03:28:06 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:28:06 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:cc:30:15"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <target dev="tap3a8efc9e-75"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0/console.log" append="off"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:28:06 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:28:06 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:28:06 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:28:06 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.401 248514 DEBUG nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Preparing to wait for external event network-vif-plugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.401 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.402 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.402 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.403 248514 DEBUG nova.virt.libvirt.vif [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1859844308',display_name='tempest-DeleteServersTestJSON-server-1859844308',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1859844308',id=52,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-vi6y6qcl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServ
ersTestJSON-991966373-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:27:57Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=678d2db2-0536-4744-b65c-f0a5852f35e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.403 248514 DEBUG nova.network.os_vif_util [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.404 248514 DEBUG nova.network.os_vif_util [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:30:15,bridge_name='br-int',has_traffic_filtering=True,id=3a8efc9e-7582-4b17-ab0e-b248e09932b3,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a8efc9e-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.404 248514 DEBUG os_vif [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:30:15,bridge_name='br-int',has_traffic_filtering=True,id=3a8efc9e-7582-4b17-ab0e-b248e09932b3,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a8efc9e-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.404 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.405 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.405 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.408 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.408 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a8efc9e-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.409 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3a8efc9e-75, col_values=(('external_ids', {'iface-id': '3a8efc9e-7582-4b17-ab0e-b248e09932b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:30:15', 'vm-uuid': '678d2db2-0536-4744-b65c-f0a5852f35e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.410 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:06 np0005558241 NetworkManager[50376]: <info>  [1765614486.4116] manager: (tap3a8efc9e-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/207)
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.413 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.418 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.419 248514 INFO os_vif [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:30:15,bridge_name='br-int',has_traffic_filtering=True,id=3a8efc9e-7582-4b17-ab0e-b248e09932b3,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a8efc9e-75')#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.507 248514 DEBUG nova.virt.libvirt.imagebackend [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image locations are: [{'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/e4e9ee37-4059-46f1-9bf8-83a72d0403e7/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/e4e9ee37-4059-46f1-9bf8-83a72d0403e7/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec 13 03:28:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:28:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3357257966' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.564 248514 DEBUG nova.virt.libvirt.imagebackend [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Selected location: {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/e4e9ee37-4059-46f1-9bf8-83a72d0403e7/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.565 248514 DEBUG nova.storage.rbd_utils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] cloning images/e4e9ee37-4059-46f1-9bf8-83a72d0403e7@snap to None/5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.593 248514 DEBUG oslo_concurrency.processutils [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.642 248514 DEBUG oslo_concurrency.processutils [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.679 248514 INFO nova.compute.manager [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Rebuilding instance#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.691 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "4ca0214a272891cae00922c0f452dbc91c01667e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.473s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.728 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.729 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.729 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No VIF found with MAC fa:16:3e:cc:30:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.730 248514 INFO nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Using config drive#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.749 248514 DEBUG nova.storage.rbd_utils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 678d2db2-0536-4744-b65c-f0a5852f35e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.846 248514 DEBUG nova.objects.instance [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'migration_context' on Instance uuid 5d8c2900-0048-4631-bbc6-0122bce8f4f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.877 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.878 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Ensure instance console log exists: /var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.878 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.879 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:06 np0005558241 nova_compute[248510]: 2025-12-13 08:28:06.879 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:28:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1644608389' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:28:07 np0005558241 nova_compute[248510]: 2025-12-13 08:28:07.227 248514 DEBUG oslo_concurrency.processutils [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:07 np0005558241 nova_compute[248510]: 2025-12-13 08:28:07.229 248514 DEBUG nova.virt.libvirt.vif [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-612802196',display_name='tempest-ListServerFiltersTestJSON-instance-612802196',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-612802196',id=47,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3490ad817e664ff6b12c4ea88192b667',ramdisk_id='',reservation_id='r-wpyl99bg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1229542462',owner_user_name='tempest-ListServerFiltersTestJSON-1229542462-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:27:58Z,user_data=None,user_id='65a6b617130a42ac9c3d9b4abf6a1cfb',uuid=98240df6-1cba-40e1-833c-24611270ed83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:28:07 np0005558241 nova_compute[248510]: 2025-12-13 08:28:07.229 248514 DEBUG nova.network.os_vif_util [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converting VIF {"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:07 np0005558241 nova_compute[248510]: 2025-12-13 08:28:07.230 248514 DEBUG nova.network.os_vif_util [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:07 np0005558241 nova_compute[248510]: 2025-12-13 08:28:07.231 248514 DEBUG nova.objects.instance [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1913: 321 pgs: 321 active+clean; 497 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 216 op/s
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.085 248514 DEBUG nova.virt.libvirt.driver [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:28:08 np0005558241 nova_compute[248510]:  <uuid>98240df6-1cba-40e1-833c-24611270ed83</uuid>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:  <name>instance-0000002f</name>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-612802196</nova:name>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:28:05</nova:creationTime>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:28:08 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:        <nova:user uuid="65a6b617130a42ac9c3d9b4abf6a1cfb">tempest-ListServerFiltersTestJSON-1229542462-project-member</nova:user>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:        <nova:project uuid="3490ad817e664ff6b12c4ea88192b667">tempest-ListServerFiltersTestJSON-1229542462</nova:project>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:        <nova:port uuid="bdc94f2e-b14e-4e39-bea0-978ff56ff722">
Dec 13 03:28:08 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <entry name="serial">98240df6-1cba-40e1-833c-24611270ed83</entry>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <entry name="uuid">98240df6-1cba-40e1-833c-24611270ed83</entry>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/98240df6-1cba-40e1-833c-24611270ed83_disk">
Dec 13 03:28:08 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:28:08 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/98240df6-1cba-40e1-833c-24611270ed83_disk.config">
Dec 13 03:28:08 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:28:08 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:d6:ce:b2"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <target dev="tapbdc94f2e-b1"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/98240df6-1cba-40e1-833c-24611270ed83/console.log" append="off"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <input type="keyboard" bus="usb"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:28:08 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:28:08 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:28:08 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:28:08 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.086 248514 DEBUG nova.virt.libvirt.driver [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.087 248514 DEBUG nova.virt.libvirt.driver [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.088 248514 DEBUG nova.virt.libvirt.vif [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-612802196',display_name='tempest-ListServerFiltersTestJSON-instance-612802196',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-612802196',id=47,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='3490ad817e664ff6b12c4ea88192b667',ramdisk_id='',reservation_id='r-wpyl99bg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vir
tio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1229542462',owner_user_name='tempest-ListServerFiltersTestJSON-1229542462-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:27:58Z,user_data=None,user_id='65a6b617130a42ac9c3d9b4abf6a1cfb',uuid=98240df6-1cba-40e1-833c-24611270ed83,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.088 248514 DEBUG nova.network.os_vif_util [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converting VIF {"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.089 248514 DEBUG nova.network.os_vif_util [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.091 248514 DEBUG os_vif [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.093 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.093 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.093 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.096 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.096 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbdc94f2e-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.097 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbdc94f2e-b1, col_values=(('external_ids', {'iface-id': 'bdc94f2e-b14e-4e39-bea0-978ff56ff722', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:ce:b2', 'vm-uuid': '98240df6-1cba-40e1-833c-24611270ed83'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.098 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:08 np0005558241 NetworkManager[50376]: <info>  [1765614488.1000] manager: (tapbdc94f2e-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/208)
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.101 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.113 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.115 248514 INFO os_vif [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1')#033[00m
Dec 13 03:28:08 np0005558241 kernel: tapbdc94f2e-b1: entered promiscuous mode
Dec 13 03:28:08 np0005558241 NetworkManager[50376]: <info>  [1765614488.1917] manager: (tapbdc94f2e-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/209)
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.197 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:08 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:08Z|00447|binding|INFO|Claiming lport bdc94f2e-b14e-4e39-bea0-978ff56ff722 for this chassis.
Dec 13 03:28:08 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:08Z|00448|binding|INFO|bdc94f2e-b14e-4e39-bea0-978ff56ff722: Claiming fa:16:3e:d6:ce:b2 10.100.0.10
Dec 13 03:28:08 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:08Z|00449|binding|INFO|Setting lport bdc94f2e-b14e-4e39-bea0-978ff56ff722 ovn-installed in OVS
Dec 13 03:28:08 np0005558241 systemd-udevd[301033]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:28:08 np0005558241 NetworkManager[50376]: <info>  [1765614488.2488] device (tapbdc94f2e-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:28:08 np0005558241 NetworkManager[50376]: <info>  [1765614488.2498] device (tapbdc94f2e-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.326 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:08 np0005558241 systemd-machined[210538]: New machine qemu-58-instance-0000002f.
Dec 13 03:28:08 np0005558241 systemd[1]: Started Virtual Machine qemu-58-instance-0000002f.
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.463 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:08 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:08Z|00450|binding|INFO|Setting lport bdc94f2e-b14e-4e39-bea0-978ff56ff722 up in Southbound
Dec 13 03:28:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.507 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:ce:b2 10.100.0.10'], port_security=['fa:16:3e:d6:ce:b2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '98240df6-1cba-40e1-833c-24611270ed83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7576f079-0439-46aa-98af-04f80cd254ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3490ad817e664ff6b12c4ea88192b667', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'be530db5-69a3-4a66-a983-785ca72e353e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97e2093c-87d8-4348-abd8-68baa3943443, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=bdc94f2e-b14e-4e39-bea0-978ff56ff722) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:28:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.509 158419 INFO neutron.agent.ovn.metadata.agent [-] Port bdc94f2e-b14e-4e39-bea0-978ff56ff722 in datapath 7576f079-0439-46aa-98af-04f80cd254ca bound to our chassis#033[00m
Dec 13 03:28:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.511 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7576f079-0439-46aa-98af-04f80cd254ca#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.525 248514 DEBUG nova.compute.manager [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.531 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0dff9437-2922-4610-92b3-392d77a00554]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.568 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a58b8b24-ae32-4f8c-b56e-1924aa3daa92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.575 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[52b7fd3c-e2a1-4d7d-9423-2b1f8ff69b73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.606 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9f8f089d-3ea5-4f57-87ad-2e433dcb4723]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.636 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'pci_requests' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.637 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78f3cc5c-da5a-481e-aa84-2f68f454f41d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7576f079-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:f1:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701789, 'reachable_time': 37822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301050, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.656 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.662 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b901c1a5-e56e-4dfc-abc7-65d600490af6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701804, 'tstamp': 701804}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301058, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701808, 'tstamp': 701808}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301058, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.664 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7576f079-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.666 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.672 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7576f079-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.673 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.673 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'resources' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.674 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.677 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7576f079-00, col_values=(('external_ids', {'iface-id': '5d3dc22b-9ad2-46cb-b9cb-35c58ffd1fa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:08.678 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.690 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'migration_context' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.708 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.713 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.781 248514 DEBUG nova.network.neutron [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Successfully created port: d41fdf9b-1d4a-475f-b516-69fa17b19cfb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.807 248514 INFO nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Creating config drive at /var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0/disk.config#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.819 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp7ans4mg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.864 248514 DEBUG nova.compute.manager [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.867 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 98240df6-1cba-40e1-833c-24611270ed83 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.867 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614488.838627, 98240df6-1cba-40e1-833c-24611270ed83 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.867 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.874 248514 INFO nova.virt.libvirt.driver [-] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Instance rebooted successfully.#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.875 248514 DEBUG nova.compute.manager [None req-28f67fbe-884d-4ea0-8a99-ed5dda8cc41a 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.903 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.908 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.935 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.936 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614488.8391178, 98240df6-1cba-40e1-833c-24611270ed83 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.936 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] VM Started (Lifecycle Event)#033[00m
Dec 13 03:28:08 np0005558241 nova_compute[248510]: 2025-12-13 08:28:08.973 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp7ans4mg" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:09 np0005558241 nova_compute[248510]: 2025-12-13 08:28:09.004 248514 DEBUG nova.storage.rbd_utils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 678d2db2-0536-4744-b65c-f0a5852f35e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:09 np0005558241 nova_compute[248510]: 2025-12-13 08:28:09.009 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0/disk.config 678d2db2-0536-4744-b65c-f0a5852f35e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:09 np0005558241 nova_compute[248510]: 2025-12-13 08:28:09.052 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:09 np0005558241 nova_compute[248510]: 2025-12-13 08:28:09.058 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:28:09 np0005558241 nova_compute[248510]: 2025-12-13 08:28:09.162 248514 DEBUG oslo_concurrency.processutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0/disk.config 678d2db2-0536-4744-b65c-f0a5852f35e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:09 np0005558241 nova_compute[248510]: 2025-12-13 08:28:09.163 248514 INFO nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Deleting local config drive /var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0/disk.config because it was imported into RBD.#033[00m
Dec 13 03:28:09 np0005558241 kernel: tap3a8efc9e-75: entered promiscuous mode
Dec 13 03:28:09 np0005558241 NetworkManager[50376]: <info>  [1765614489.2128] manager: (tap3a8efc9e-75): new Tun device (/org/freedesktop/NetworkManager/Devices/210)
Dec 13 03:28:09 np0005558241 systemd-udevd[301035]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:28:09 np0005558241 nova_compute[248510]: 2025-12-13 08:28:09.215 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:09Z|00451|binding|INFO|Claiming lport 3a8efc9e-7582-4b17-ab0e-b248e09932b3 for this chassis.
Dec 13 03:28:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:09Z|00452|binding|INFO|3a8efc9e-7582-4b17-ab0e-b248e09932b3: Claiming fa:16:3e:cc:30:15 10.100.0.4
Dec 13 03:28:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:28:09
Dec 13 03:28:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:28:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:28:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'vms', '.mgr', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'images', 'default.rgw.control']
Dec 13 03:28:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:28:09 np0005558241 NetworkManager[50376]: <info>  [1765614489.2294] device (tap3a8efc9e-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:28:09 np0005558241 NetworkManager[50376]: <info>  [1765614489.2306] device (tap3a8efc9e-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:28:09 np0005558241 systemd-machined[210538]: New machine qemu-59-instance-00000034.
Dec 13 03:28:09 np0005558241 systemd[1]: Started Virtual Machine qemu-59-instance-00000034.
Dec 13 03:28:09 np0005558241 nova_compute[248510]: 2025-12-13 08:28:09.314 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:09Z|00453|binding|INFO|Setting lport 3a8efc9e-7582-4b17-ab0e-b248e09932b3 ovn-installed in OVS
Dec 13 03:28:09 np0005558241 nova_compute[248510]: 2025-12-13 08:28:09.317 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 508 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 552 KiB/s rd, 4.0 MiB/s wr, 129 op/s
Dec 13 03:28:09 np0005558241 nova_compute[248510]: 2025-12-13 08:28:09.559 248514 DEBUG nova.network.neutron [req-f28d2be9-23a5-4f56-9818-a9ca08e6750b req-c1db17bd-fdf4-47eb-ae6a-d777eb4ce640 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Updated VIF entry in instance network info cache for port 3a8efc9e-7582-4b17-ab0e-b248e09932b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:28:09 np0005558241 nova_compute[248510]: 2025-12-13 08:28:09.560 248514 DEBUG nova.network.neutron [req-f28d2be9-23a5-4f56-9818-a9ca08e6750b req-c1db17bd-fdf4-47eb-ae6a-d777eb4ce640 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Updating instance_info_cache with network_info: [{"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:28:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:09Z|00454|binding|INFO|Setting lport 3a8efc9e-7582-4b17-ab0e-b248e09932b3 up in Southbound
Dec 13 03:28:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.771 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:30:15 10.100.0.4'], port_security=['fa:16:3e:cc:30:15 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '678d2db2-0536-4744-b65c-f0a5852f35e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3a8efc9e-7582-4b17-ab0e-b248e09932b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:28:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.773 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3a8efc9e-7582-4b17-ab0e-b248e09932b3 in datapath 85372fca-ab50-48b6-8c21-507f630c205a bound to our chassis#033[00m
Dec 13 03:28:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.776 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85372fca-ab50-48b6-8c21-507f630c205a#033[00m
Dec 13 03:28:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.792 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e5680925-6500-46be-9d99-18b1b0a7deb9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.794 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85372fca-a1 in ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:28:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.796 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85372fca-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:28:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.796 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ee7251f9-93bf-42b2-94e7-cb8e35540e90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.797 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[276732c3-0c06-4924-b9c0-1bd35c54d57a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.821 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[4f4120ed-3e73-4f72-8650-a6b8d06f5826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.840 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5b84d69d-28d0-4d1e-9492-17a72d5e2569]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:09 np0005558241 nova_compute[248510]: 2025-12-13 08:28:09.848 248514 DEBUG oslo_concurrency.lockutils [req-f28d2be9-23a5-4f56-9818-a9ca08e6750b req-c1db17bd-fdf4-47eb-ae6a-d777eb4ce640 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-678d2db2-0536-4744-b65c-f0a5852f35e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:28:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.873 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[614eeaf9-d45c-4480-924c-f045bfa99521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:09 np0005558241 NetworkManager[50376]: <info>  [1765614489.8823] manager: (tap85372fca-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/211)
Dec 13 03:28:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.884 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[50178412-fbc4-4961-881c-66ad8407ac7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.924 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f8121d22-2e78-4c92-9985-7e9f3e2277b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.927 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6b8a5b2c-ccb2-4566-8a3c-5afe93934c0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:09 np0005558241 NetworkManager[50376]: <info>  [1765614489.9585] device (tap85372fca-a0): carrier: link connected
Dec 13 03:28:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.968 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a4357418-6515-42aa-ad86-84adb7fb137a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:09 np0005558241 nova_compute[248510]: 2025-12-13 08:28:09.977 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:09.989 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a5050b-e1c9-46c5-956a-21b1c998d293]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85372fca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:30:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 135], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706717, 'reachable_time': 18702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301181, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.006 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9819844e-1d3c-4a5a-991e-23519f87e4bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feab:30d0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 706717, 'tstamp': 706717}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301182, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.029 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5441a12f-7155-4a62-998e-7824fee2bbdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85372fca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:30:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 135], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706717, 'reachable_time': 18702, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301183, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.040 248514 DEBUG nova.compute.manager [req-276e8d9e-e043-4e16-8a00-fb0455d73e85 req-6e5b1e01-5a36-4c6e-a799-c85924fc9d07 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.041 248514 DEBUG oslo_concurrency.lockutils [req-276e8d9e-e043-4e16-8a00-fb0455d73e85 req-6e5b1e01-5a36-4c6e-a799-c85924fc9d07 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.041 248514 DEBUG oslo_concurrency.lockutils [req-276e8d9e-e043-4e16-8a00-fb0455d73e85 req-6e5b1e01-5a36-4c6e-a799-c85924fc9d07 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.042 248514 DEBUG oslo_concurrency.lockutils [req-276e8d9e-e043-4e16-8a00-fb0455d73e85 req-6e5b1e01-5a36-4c6e-a799-c85924fc9d07 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.042 248514 DEBUG nova.compute.manager [req-276e8d9e-e043-4e16-8a00-fb0455d73e85 req-6e5b1e01-5a36-4c6e-a799-c85924fc9d07 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] No waiting events found dispatching network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.042 248514 WARNING nova.compute.manager [req-276e8d9e-e043-4e16-8a00-fb0455d73e85 req-6e5b1e01-5a36-4c6e-a799-c85924fc9d07 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received unexpected event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.077 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba57c5c-3665-4eab-a34f-e3b1600415fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:28:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:28:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:28:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:28:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:28:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:28:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.146 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[71b85e7e-652b-4262-acce-742a85094526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.148 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85372fca-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.148 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.149 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85372fca-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:10 np0005558241 kernel: tap85372fca-a0: entered promiscuous mode
Dec 13 03:28:10 np0005558241 NetworkManager[50376]: <info>  [1765614490.1517] manager: (tap85372fca-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.153 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.154 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85372fca-a0, col_values=(('external_ids', {'iface-id': '2c0f4981-0ad0-478e-b1ad-551d231022ad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:10Z|00455|binding|INFO|Releasing lport 2c0f4981-0ad0-478e-b1ad-551d231022ad from this chassis (sb_readonly=0)
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.172 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.174 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.175 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf852c1-3665-4188-90f2-1eb69cb2e8bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.176 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-85372fca-ab50-48b6-8c21-507f630c205a
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 85372fca-ab50-48b6-8c21-507f630c205a
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:28:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:10.176 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'env', 'PROCESS_TAG=haproxy-85372fca-ab50-48b6-8c21-507f630c205a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85372fca-ab50-48b6-8c21-507f630c205a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.256 248514 DEBUG nova.compute.manager [req-00dba9c2-6750-4d07-9a8c-bc93fcc97a1e req-7cf32f95-f441-4109-a946-9221880ec1d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Received event network-vif-plugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.256 248514 DEBUG oslo_concurrency.lockutils [req-00dba9c2-6750-4d07-9a8c-bc93fcc97a1e req-7cf32f95-f441-4109-a946-9221880ec1d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.257 248514 DEBUG oslo_concurrency.lockutils [req-00dba9c2-6750-4d07-9a8c-bc93fcc97a1e req-7cf32f95-f441-4109-a946-9221880ec1d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.257 248514 DEBUG oslo_concurrency.lockutils [req-00dba9c2-6750-4d07-9a8c-bc93fcc97a1e req-7cf32f95-f441-4109-a946-9221880ec1d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.257 248514 DEBUG nova.compute.manager [req-00dba9c2-6750-4d07-9a8c-bc93fcc97a1e req-7cf32f95-f441-4109-a946-9221880ec1d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Processing event network-vif-plugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.320 248514 DEBUG nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.321 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614490.3193424, 678d2db2-0536-4744-b65c-f0a5852f35e0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.322 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] VM Started (Lifecycle Event)#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.325 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.330 248514 INFO nova.virt.libvirt.driver [-] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Instance spawned successfully.#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.330 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.356 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.361 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.365 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.366 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.366 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.367 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.367 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.368 248514 DEBUG nova.virt.libvirt.driver [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.398 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.399 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614490.3196661, 678d2db2-0536-4744-b65c-f0a5852f35e0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.399 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.430 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.435 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614490.3256648, 678d2db2-0536-4744-b65c-f0a5852f35e0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.436 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:28:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:28:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:28:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:28:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:28:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:28:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:28:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:28:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:28:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:28:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:28:10 np0005558241 podman[301256]: 2025-12-13 08:28:10.638418379 +0000 UTC m=+0.068219414 container create b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 03:28:10 np0005558241 systemd[1]: Started libpod-conmon-b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a.scope.
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.697 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:10 np0005558241 podman[301256]: 2025-12-13 08:28:10.607342533 +0000 UTC m=+0.037143598 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.703 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.716 248514 INFO nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Took 12.95 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.717 248514 DEBUG nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:10 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:28:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f9f9ca3b39265b46e15e383b3b89331e9eff00a18a8f9d53df9e42b58de486/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:28:10 np0005558241 podman[301256]: 2025-12-13 08:28:10.744965487 +0000 UTC m=+0.174766552 container init b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 03:28:10 np0005558241 podman[301256]: 2025-12-13 08:28:10.751635622 +0000 UTC m=+0.181436657 container start b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.764 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:28:10 np0005558241 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[301270]: [NOTICE]   (301274) : New worker (301276) forked
Dec 13 03:28:10 np0005558241 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[301270]: [NOTICE]   (301274) : Loading success.
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.797 248514 DEBUG nova.network.neutron [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Successfully updated port: d41fdf9b-1d4a-475f-b516-69fa17b19cfb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.816 248514 INFO nova.compute.manager [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Took 16.41 seconds to build instance.#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.819 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "refresh_cache-5d8c2900-0048-4631-bbc6-0122bce8f4f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.819 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquired lock "refresh_cache-5d8c2900-0048-4631-bbc6-0122bce8f4f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.819 248514 DEBUG nova.network.neutron [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:28:10 np0005558241 nova_compute[248510]: 2025-12-13 08:28:10.844 248514 DEBUG oslo_concurrency.lockutils [None req-5bb36aa6-24db-4ebb-bbfb-d48fcd07e627 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:11 np0005558241 nova_compute[248510]: 2025-12-13 08:28:11.063 248514 DEBUG nova.network.neutron [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:28:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 508 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 488 KiB/s rd, 3.6 MiB/s wr, 115 op/s
Dec 13 03:28:12 np0005558241 kernel: tapd8c7cad7-f6 (unregistering): left promiscuous mode
Dec 13 03:28:12 np0005558241 NetworkManager[50376]: <info>  [1765614492.1577] device (tapd8c7cad7-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.172 248514 DEBUG oslo_concurrency.lockutils [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "678d2db2-0536-4744-b65c-f0a5852f35e0" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.175 248514 DEBUG oslo_concurrency.lockutils [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.182 248514 DEBUG nova.compute.manager [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.195 248514 DEBUG nova.compute.manager [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.198 248514 DEBUG nova.objects.instance [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'flavor' on Instance uuid 678d2db2-0536-4744-b65c-f0a5852f35e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:12 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:12Z|00456|binding|INFO|Releasing lport d8c7cad7-f601-4205-8838-a583b6e04b0f from this chassis (sb_readonly=0)
Dec 13 03:28:12 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:12Z|00457|binding|INFO|Setting lport d8c7cad7-f601-4205-8838-a583b6e04b0f down in Southbound
Dec 13 03:28:12 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:12Z|00458|binding|INFO|Removing iface tapd8c7cad7-f6 ovn-installed in OVS
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.202 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.205 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:57:38 10.100.0.8'], port_security=['fa:16:3e:45:57:38 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0ccf9d68-ffc2-4f17-a2e7-effa18fc607c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c63049d-63e9-47af-99e2-ce1403a42891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '568e13d6-6674-49c1-866e-8e025ebc1046', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e00a5e3-7050-4e40-9e5a-7a1e6eee34cc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d8c7cad7-f601-4205-8838-a583b6e04b0f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:28:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.207 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d8c7cad7-f601-4205-8838-a583b6e04b0f in datapath 6c63049d-63e9-47af-99e2-ce1403a42891 unbound from our chassis#033[00m
Dec 13 03:28:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.210 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c63049d-63e9-47af-99e2-ce1403a42891, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:28:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.212 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c15c18-33f3-48e6-ae3a-39f4230e0ffe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.213 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 namespace which is not needed anymore#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.221 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:12 np0005558241 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000033.scope: Deactivated successfully.
Dec 13 03:28:12 np0005558241 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000033.scope: Consumed 13.112s CPU time.
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.248 248514 DEBUG nova.virt.libvirt.driver [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec 13 03:28:12 np0005558241 systemd-machined[210538]: Machine qemu-57-instance-00000033 terminated.
Dec 13 03:28:12 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[300440]: [NOTICE]   (300444) : haproxy version is 2.8.14-c23fe91
Dec 13 03:28:12 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[300440]: [NOTICE]   (300444) : path to executable is /usr/sbin/haproxy
Dec 13 03:28:12 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[300440]: [WARNING]  (300444) : Exiting Master process...
Dec 13 03:28:12 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[300440]: [WARNING]  (300444) : Exiting Master process...
Dec 13 03:28:12 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[300440]: [ALERT]    (300444) : Current worker (300446) exited with code 143 (Terminated)
Dec 13 03:28:12 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[300440]: [WARNING]  (300444) : All workers exited. Exiting... (0)
Dec 13 03:28:12 np0005558241 systemd[1]: libpod-7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759.scope: Deactivated successfully.
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.394 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:12 np0005558241 podman[301308]: 2025-12-13 08:28:12.399928102 +0000 UTC m=+0.068498212 container died 7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.404 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759-userdata-shm.mount: Deactivated successfully.
Dec 13 03:28:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay-74a1045c2667f2928359016f34c22c18152582122077ea101da9c4a143c306b5-merged.mount: Deactivated successfully.
Dec 13 03:28:12 np0005558241 podman[301308]: 2025-12-13 08:28:12.462141387 +0000 UTC m=+0.130711487 container cleanup 7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 03:28:12 np0005558241 systemd[1]: libpod-conmon-7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759.scope: Deactivated successfully.
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.499 248514 DEBUG nova.network.neutron [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Updating instance_info_cache with network_info: [{"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:28:12 np0005558241 podman[301343]: 2025-12-13 08:28:12.544409666 +0000 UTC m=+0.057225543 container remove 7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:28:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.551 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c07f4656-19c5-4cdb-9e29-97c16234af5b]: (4, ('Sat Dec 13 08:28:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 (7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759)\n7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759\nSat Dec 13 08:28:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 (7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759)\n7b0498f7003e6a9654811bc985f26056820a609dfb372be3d787401d9c4a6759\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.553 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[47d508ba-891d-4e99-a9b2-7406130e1dbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.554 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c63049d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:12 np0005558241 kernel: tap6c63049d-60: left promiscuous mode
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.561 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.582 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.585 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.588 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[95541b3e-7cc0-48e6-a3c1-4177d9bb4ca2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.603 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[758f70b5-f3a1-4217-99ee-5c7e2bfa0d03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.604 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fff0d840-4b7a-4282-bbef-cd9ed9fc21ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.625 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[41c2d835-fac9-4c3d-bc32-631f7239bb41]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705089, 'reachable_time': 42510, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301362, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:12 np0005558241 systemd[1]: run-netns-ovnmeta\x2d6c63049d\x2d63e9\x2d47af\x2d99e2\x2dce1403a42891.mount: Deactivated successfully.
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.629 248514 DEBUG nova.compute.manager [req-b491d2f3-e389-42c5-8a77-a45d1c702eab req-ef0d79a5-7b41-4956-b009-0f9dc0df9a8f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.629 248514 DEBUG oslo_concurrency.lockutils [req-b491d2f3-e389-42c5-8a77-a45d1c702eab req-ef0d79a5-7b41-4956-b009-0f9dc0df9a8f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.629 248514 DEBUG oslo_concurrency.lockutils [req-b491d2f3-e389-42c5-8a77-a45d1c702eab req-ef0d79a5-7b41-4956-b009-0f9dc0df9a8f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.629 248514 DEBUG oslo_concurrency.lockutils [req-b491d2f3-e389-42c5-8a77-a45d1c702eab req-ef0d79a5-7b41-4956-b009-0f9dc0df9a8f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.630 248514 DEBUG nova.compute.manager [req-b491d2f3-e389-42c5-8a77-a45d1c702eab req-ef0d79a5-7b41-4956-b009-0f9dc0df9a8f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] No waiting events found dispatching network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.630 248514 WARNING nova.compute.manager [req-b491d2f3-e389-42c5-8a77-a45d1c702eab req-ef0d79a5-7b41-4956-b009-0f9dc0df9a8f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received unexpected event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:28:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.630 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:28:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:12.630 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[11f24a4f-7f37-48ab-8c34-b3958a7c24ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.645 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Releasing lock "refresh_cache-5d8c2900-0048-4631-bbc6-0122bce8f4f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.646 248514 DEBUG nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Instance network_info: |[{"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.650 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Start _get_guest_xml network_info=[{"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:27:49Z,direct_url=<?>,disk_format='raw',id=e4e9ee37-4059-46f1-9bf8-83a72d0403e7,min_disk=1,min_ram=0,name='tempest-test-snap-671859850',owner='52e1055963294dbdb16cd95b466cd4d9',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:27:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': 'e4e9ee37-4059-46f1-9bf8-83a72d0403e7'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.658 248514 WARNING nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.670 248514 DEBUG nova.virt.libvirt.host [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.672 248514 DEBUG nova.virt.libvirt.host [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.677 248514 DEBUG nova.virt.libvirt.host [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.680 248514 DEBUG nova.virt.libvirt.host [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.681 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.681 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:27:49Z,direct_url=<?>,disk_format='raw',id=e4e9ee37-4059-46f1-9bf8-83a72d0403e7,min_disk=1,min_ram=0,name='tempest-test-snap-671859850',owner='52e1055963294dbdb16cd95b466cd4d9',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:27:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.682 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.682 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.682 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.682 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.682 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.683 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.684 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.686 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.686 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.687 248514 DEBUG nova.virt.hardware [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.693 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.752 248514 INFO nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance shutdown successfully after 4 seconds.#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.764 248514 INFO nova.virt.libvirt.driver [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance destroyed successfully.#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.770 248514 INFO nova.virt.libvirt.driver [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance destroyed successfully.#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.771 248514 DEBUG nova.virt.libvirt.vif [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1729116913',display_name='tempest-ServerDiskConfigTestJSON-server-1729116913',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1729116913',id=51,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-3hps9qrz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskConfigTestJSON-167971983-project-
member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:06Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=0ccf9d68-ffc2-4f17-a2e7-effa18fc607c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.772 248514 DEBUG nova.network.os_vif_util [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.773 248514 DEBUG nova.network.os_vif_util [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.774 248514 DEBUG os_vif [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.776 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.776 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8c7cad7-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.819 248514 DEBUG nova.compute.manager [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Received event network-vif-plugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.820 248514 DEBUG oslo_concurrency.lockutils [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.820 248514 DEBUG oslo_concurrency.lockutils [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.821 248514 DEBUG oslo_concurrency.lockutils [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.821 248514 DEBUG nova.compute.manager [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] No waiting events found dispatching network-vif-plugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.821 248514 WARNING nova.compute.manager [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Received unexpected event network-vif-plugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 for instance with vm_state active and task_state powering-off.#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.821 248514 DEBUG nova.compute.manager [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Received event network-changed-d41fdf9b-1d4a-475f-b516-69fa17b19cfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.822 248514 DEBUG nova.compute.manager [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Refreshing instance network info cache due to event network-changed-d41fdf9b-1d4a-475f-b516-69fa17b19cfb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.822 248514 DEBUG oslo_concurrency.lockutils [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-5d8c2900-0048-4631-bbc6-0122bce8f4f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.822 248514 DEBUG oslo_concurrency.lockutils [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-5d8c2900-0048-4631-bbc6-0122bce8f4f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.822 248514 DEBUG nova.network.neutron [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Refreshing network info cache for port d41fdf9b-1d4a-475f-b516-69fa17b19cfb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.825 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.828 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.833 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:12 np0005558241 nova_compute[248510]: 2025-12-13 08:28:12.836 248514 INFO os_vif [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6')#033[00m
Dec 13 03:28:13 np0005558241 nova_compute[248510]: 2025-12-13 08:28:13.127 248514 INFO nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Deleting instance files /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_del#033[00m
Dec 13 03:28:13 np0005558241 nova_compute[248510]: 2025-12-13 08:28:13.128 248514 INFO nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Deletion of /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_del complete#033[00m
Dec 13 03:28:13 np0005558241 nova_compute[248510]: 2025-12-13 08:28:13.312 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:28:13 np0005558241 nova_compute[248510]: 2025-12-13 08:28:13.312 248514 INFO nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Creating image(s)#033[00m
Dec 13 03:28:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:28:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3218796268' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:28:13 np0005558241 nova_compute[248510]: 2025-12-13 08:28:13.340 248514 DEBUG nova.storage.rbd_utils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:13 np0005558241 nova_compute[248510]: 2025-12-13 08:28:13.375 248514 DEBUG nova.storage.rbd_utils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:13 np0005558241 nova_compute[248510]: 2025-12-13 08:28:13.403 248514 DEBUG nova.storage.rbd_utils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:13 np0005558241 nova_compute[248510]: 2025-12-13 08:28:13.410 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 526 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.2 MiB/s wr, 207 op/s
Dec 13 03:28:13 np0005558241 nova_compute[248510]: 2025-12-13 08:28:13.450 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.757s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:13 np0005558241 nova_compute[248510]: 2025-12-13 08:28:13.488 248514 DEBUG nova.storage.rbd_utils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:13 np0005558241 nova_compute[248510]: 2025-12-13 08:28:13.492 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:13 np0005558241 nova_compute[248510]: 2025-12-13 08:28:13.539 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:13 np0005558241 nova_compute[248510]: 2025-12-13 08:28:13.540 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:13 np0005558241 nova_compute[248510]: 2025-12-13 08:28:13.541 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:13 np0005558241 nova_compute[248510]: 2025-12-13 08:28:13.541 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:13 np0005558241 nova_compute[248510]: 2025-12-13 08:28:13.566 248514 DEBUG nova.storage.rbd_utils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:13 np0005558241 nova_compute[248510]: 2025-12-13 08:28:13.571 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:13 np0005558241 nova_compute[248510]: 2025-12-13 08:28:13.918 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:13 np0005558241 nova_compute[248510]: 2025-12-13 08:28:13.996 248514 DEBUG nova.storage.rbd_utils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] resizing rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:28:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:28:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2297712692' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.103 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.104 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Ensure instance console log exists: /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.105 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.105 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.105 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.108 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Start _get_guest_xml network_info=[{"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.109 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.110 248514 DEBUG nova.virt.libvirt.vif [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:28:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-622802661',display_name='tempest-ImagesTestJSON-server-622802661',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-622802661',id=53,image_ref='e4e9ee37-4059-46f1-9bf8-83a72d0403e7',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-0ufcm8q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='df25cd40-72b5-4e0f-90ec-8677c699d1d3',image_min_disk='1',image_min_ram='0',image_owner_id='52e1055963294dbdb16cd95b466cd4d9',image_owner_project_name='tempest-ImagesTestJSON-1234382421',image_owner_user_name='tempest-ImagesTestJSON-1234382421-project-member',image_user_id='3b988c7ac9354c59aac9a9f41f83c20f',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:06Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=5d8c2900-0048-4631-bbc6-0122bce8f4f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.111 248514 DEBUG nova.network.os_vif_util [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.112 248514 DEBUG nova.network.os_vif_util [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=d41fdf9b-1d4a-475f-b516-69fa17b19cfb,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41fdf9b-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.113 248514 DEBUG nova.objects.instance [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d8c2900-0048-4631-bbc6-0122bce8f4f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.119 248514 WARNING nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.125 248514 DEBUG nova.virt.libvirt.host [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.131 248514 DEBUG nova.virt.libvirt.host [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.136 248514 DEBUG nova.virt.libvirt.host [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.137 248514 DEBUG nova.virt.libvirt.host [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.138 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.138 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.138 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.138 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.139 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.139 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.139 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.139 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.139 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.139 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.140 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.140 248514 DEBUG nova.virt.hardware [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.140 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.144 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:28:14 np0005558241 nova_compute[248510]:  <uuid>5d8c2900-0048-4631-bbc6-0122bce8f4f3</uuid>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:  <name>instance-00000035</name>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <nova:name>tempest-ImagesTestJSON-server-622802661</nova:name>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:28:12</nova:creationTime>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:28:14 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:        <nova:user uuid="3b988c7ac9354c59aac9a9f41f83c20f">tempest-ImagesTestJSON-1234382421-project-member</nova:user>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:        <nova:project uuid="52e1055963294dbdb16cd95b466cd4d9">tempest-ImagesTestJSON-1234382421</nova:project>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="e4e9ee37-4059-46f1-9bf8-83a72d0403e7"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:        <nova:port uuid="d41fdf9b-1d4a-475f-b516-69fa17b19cfb">
Dec 13 03:28:14 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <entry name="serial">5d8c2900-0048-4631-bbc6-0122bce8f4f3</entry>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <entry name="uuid">5d8c2900-0048-4631-bbc6-0122bce8f4f3</entry>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk">
Dec 13 03:28:14 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:28:14 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk.config">
Dec 13 03:28:14 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:28:14 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:8e:5e:3e"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <target dev="tapd41fdf9b-1d"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3/console.log" append="off"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <input type="keyboard" bus="usb"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:28:14 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:28:14 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:28:14 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:28:14 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.145 248514 DEBUG nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Preparing to wait for external event network-vif-plugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.145 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.146 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.146 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.147 248514 DEBUG nova.virt.libvirt.vif [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:28:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-622802661',display_name='tempest-ImagesTestJSON-server-622802661',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-622802661',id=53,image_ref='e4e9ee37-4059-46f1-9bf8-83a72d0403e7',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-0ufcm8q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='df25cd40-72b5-4e0f-90ec-8677c699d1d3',image_min_disk='1',image_min_ram='0',image_owner_id='52e1055963294dbdb16cd95b466cd4d9',image_owner_project_name='tempest-ImagesTestJSON-1234382421',image_owner_user_name='tempest-ImagesTestJSON-1234382421-project-member',image_user_id='3b988c7ac9354c59aac9a9f41f83c20f',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:06Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=5d8c2900-0048-4631-bbc6-0122bce8f4f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.147 248514 DEBUG nova.network.os_vif_util [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.148 248514 DEBUG nova.network.os_vif_util [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=d41fdf9b-1d4a-475f-b516-69fa17b19cfb,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41fdf9b-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.148 248514 DEBUG os_vif [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=d41fdf9b-1d4a-475f-b516-69fa17b19cfb,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41fdf9b-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.149 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.150 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.150 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.153 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.153 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd41fdf9b-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.154 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd41fdf9b-1d, col_values=(('external_ids', {'iface-id': 'd41fdf9b-1d4a-475f-b516-69fa17b19cfb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:5e:3e', 'vm-uuid': '5d8c2900-0048-4631-bbc6-0122bce8f4f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.155 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:14 np0005558241 NetworkManager[50376]: <info>  [1765614494.1564] manager: (tapd41fdf9b-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.158 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.160 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.160 248514 INFO os_vif [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=d41fdf9b-1d4a-475f-b516-69fa17b19cfb,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41fdf9b-1d')#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.181 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.255 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.256 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.256 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No VIF found with MAC fa:16:3e:8e:5e:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.256 248514 INFO nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Using config drive#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.279 248514 DEBUG nova.storage.rbd_utils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.481 248514 DEBUG nova.network.neutron [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Updated VIF entry in instance network info cache for port d41fdf9b-1d4a-475f-b516-69fa17b19cfb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.482 248514 DEBUG nova.network.neutron [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Updating instance_info_cache with network_info: [{"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.510 248514 DEBUG oslo_concurrency.lockutils [req-039b4235-2953-4990-94a9-35b53a5e59aa req-4c329f19-6390-4d71-b438-97d115d130dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-5d8c2900-0048-4631-bbc6-0122bce8f4f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.705 248514 INFO nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Creating config drive at /var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3/disk.config#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.710 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwuo5wh6x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:28:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1167132967' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.758 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.785 248514 DEBUG nova.storage.rbd_utils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.790 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.855 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwuo5wh6x" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.891 248514 DEBUG nova.storage.rbd_utils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.895 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3/disk.config 5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:14 np0005558241 nova_compute[248510]: 2025-12-13 08:28:14.980 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:28:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1762142997' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:28:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:28:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1762142997' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.035 248514 DEBUG oslo_concurrency.processutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3/disk.config 5d8c2900-0048-4631-bbc6-0122bce8f4f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.038 248514 INFO nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Deleting local config drive /var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3/disk.config because it was imported into RBD.#033[00m
Dec 13 03:28:15 np0005558241 kernel: tapd41fdf9b-1d: entered promiscuous mode
Dec 13 03:28:15 np0005558241 NetworkManager[50376]: <info>  [1765614495.1053] manager: (tapd41fdf9b-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/214)
Dec 13 03:28:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.208 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:15 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:15Z|00459|binding|INFO|Claiming lport d41fdf9b-1d4a-475f-b516-69fa17b19cfb for this chassis.
Dec 13 03:28:15 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:15Z|00460|binding|INFO|d41fdf9b-1d4a-475f-b516-69fa17b19cfb: Claiming fa:16:3e:8e:5e:3e 10.100.0.3
Dec 13 03:28:15 np0005558241 systemd-udevd[301289]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:28:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.229 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:5e:3e 10.100.0.3'], port_security=['fa:16:3e:8e:5e:3e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5d8c2900-0048-4631-bbc6-0122bce8f4f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52e1055963294dbdb16cd95b466cd4d9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '269e7310-e268-4e11-a2e8-e4c22118637e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99e34124-e040-4994-aa81-ced5b49550b0, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d41fdf9b-1d4a-475f-b516-69fa17b19cfb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:28:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.230 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d41fdf9b-1d4a-475f-b516-69fa17b19cfb in datapath 87bd91d0-eead-49b6-8f92-f8d0dba555dc bound to our chassis#033[00m
Dec 13 03:28:15 np0005558241 NetworkManager[50376]: <info>  [1765614495.2361] device (tapd41fdf9b-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:28:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.234 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87bd91d0-eead-49b6-8f92-f8d0dba555dc#033[00m
Dec 13 03:28:15 np0005558241 NetworkManager[50376]: <info>  [1765614495.2372] device (tapd41fdf9b-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:28:15 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:15Z|00461|binding|INFO|Setting lport d41fdf9b-1d4a-475f-b516-69fa17b19cfb ovn-installed in OVS
Dec 13 03:28:15 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:15Z|00462|binding|INFO|Setting lport d41fdf9b-1d4a-475f-b516-69fa17b19cfb up in Southbound
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.245 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.261 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4216de39-4e68-45cf-85e6-41e199f4f69a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:15 np0005558241 systemd-machined[210538]: New machine qemu-60-instance-00000035.
Dec 13 03:28:15 np0005558241 systemd[1]: Started Virtual Machine qemu-60-instance-00000035.
Dec 13 03:28:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.295 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[56f14f97-8e40-4fd7-b8ad-93113b72592e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.308 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5c1e0a50-af89-4424-a564-882eb8a0ac64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.354 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d3080060-0e30-41d4-aeff-f4551c72fd83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.376 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5609c6d2-0023-4321-bfe2-f56e2f0eda1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87bd91d0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:ec:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703784, 'reachable_time': 44592, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301758, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:28:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3807542780' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:28:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.393 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7e7a3848-c256-4e62-be0c-2e557bf056ad]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap87bd91d0-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703801, 'tstamp': 703801}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301761, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap87bd91d0-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703805, 'tstamp': 703805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301761, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.395 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87bd91d0-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.397 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.402 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87bd91d0-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.402 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.403 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87bd91d0-e0, col_values=(('external_ids', {'iface-id': '3e59c36d-12c7-4d92-aa2f-8b6a795f3f98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.402 248514 DEBUG nova.compute.manager [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-unplugged-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:15.403 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.403 248514 DEBUG oslo_concurrency.lockutils [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.404 248514 DEBUG oslo_concurrency.lockutils [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.404 248514 DEBUG oslo_concurrency.lockutils [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.404 248514 DEBUG nova.compute.manager [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] No waiting events found dispatching network-vif-unplugged-d8c7cad7-f601-4205-8838-a583b6e04b0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.404 248514 WARNING nova.compute.manager [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received unexpected event network-vif-unplugged-d8c7cad7-f601-4205-8838-a583b6e04b0f for instance with vm_state active and task_state rebuild_spawning.#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.405 248514 DEBUG nova.compute.manager [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.406 248514 DEBUG oslo_concurrency.lockutils [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.406 248514 DEBUG oslo_concurrency.lockutils [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.406 248514 DEBUG oslo_concurrency.lockutils [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.406 248514 DEBUG nova.compute.manager [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] No waiting events found dispatching network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.407 248514 WARNING nova.compute.manager [req-eea204ed-f1c1-436c-8fdc-ce86080a31bb req-d45a19cc-5ebb-48bc-b222-ab53bbc7385b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received unexpected event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f for instance with vm_state active and task_state rebuild_spawning.#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.407 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.415 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.417 248514 DEBUG nova.virt.libvirt.vif [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1729116913',display_name='tempest-ServerDiskConfigTestJSON-server-1729116913',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1729116913',id=51,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-3hps9qrz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskConfigTestJSON-167971983-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:13Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=0ccf9d68-ffc2-4f17-a2e7-effa18fc607c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.417 248514 DEBUG nova.network.os_vif_util [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.418 248514 DEBUG nova.network.os_vif_util [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.421 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:28:15 np0005558241 nova_compute[248510]:  <uuid>0ccf9d68-ffc2-4f17-a2e7-effa18fc607c</uuid>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:  <name>instance-00000033</name>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1729116913</nova:name>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:28:14</nova:creationTime>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:28:15 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:        <nova:user uuid="a5bc32e49dbd4372a006913090b9ef0f">tempest-ServerDiskConfigTestJSON-167971983-project-member</nova:user>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:        <nova:project uuid="9aea752cb9b648a7aa9b3f634ced797e">tempest-ServerDiskConfigTestJSON-167971983</nova:project>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="c10f1898-20b3-4bc9-8a36-2ee01b39c9ce"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:        <nova:port uuid="d8c7cad7-f601-4205-8838-a583b6e04b0f">
Dec 13 03:28:15 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <entry name="serial">0ccf9d68-ffc2-4f17-a2e7-effa18fc607c</entry>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <entry name="uuid">0ccf9d68-ffc2-4f17-a2e7-effa18fc607c</entry>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk">
Dec 13 03:28:15 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:28:15 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config">
Dec 13 03:28:15 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:28:15 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:45:57:38"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <target dev="tapd8c7cad7-f6"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/console.log" append="off"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:28:15 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:28:15 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:28:15 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:28:15 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.422 248514 DEBUG nova.compute.manager [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Preparing to wait for external event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.422 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.423 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.423 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.423 248514 DEBUG nova.virt.libvirt.vif [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1729116913',display_name='tempest-ServerDiskConfigTestJSON-server-1729116913',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1729116913',id=51,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-3hps9qrz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskConfigTestJSON-167971983-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:13Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=0ccf9d68-ffc2-4f17-a2e7-effa18fc607c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.424 248514 DEBUG nova.network.os_vif_util [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.424 248514 DEBUG nova.network.os_vif_util [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.425 248514 DEBUG os_vif [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.425 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.426 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.426 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.429 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.429 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd8c7cad7-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.430 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd8c7cad7-f6, col_values=(('external_ids', {'iface-id': 'd8c7cad7-f601-4205-8838-a583b6e04b0f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:57:38', 'vm-uuid': '0ccf9d68-ffc2-4f17-a2e7-effa18fc607c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.432 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:15 np0005558241 NetworkManager[50376]: <info>  [1765614495.4342] manager: (tapd8c7cad7-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/215)
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.435 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.441 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.442 248514 INFO os_vif [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6')#033[00m
Dec 13 03:28:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 519 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.9 MiB/s wr, 285 op/s
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.501 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.501 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.502 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No VIF found with MAC fa:16:3e:45:57:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.502 248514 INFO nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Using config drive#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.526 248514 DEBUG nova.storage.rbd_utils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.559 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.614 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'keypairs' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.925 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614495.9248798, 5d8c2900-0048-4631-bbc6-0122bce8f4f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.926 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] VM Started (Lifecycle Event)#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.970 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.976 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614495.9250119, 5d8c2900-0048-4631-bbc6-0122bce8f4f3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:15 np0005558241 nova_compute[248510]: 2025-12-13 08:28:15.976 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:28:16 np0005558241 nova_compute[248510]: 2025-12-13 08:28:16.321 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:16 np0005558241 nova_compute[248510]: 2025-12-13 08:28:16.328 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:28:16 np0005558241 nova_compute[248510]: 2025-12-13 08:28:16.364 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:28:16 np0005558241 nova_compute[248510]: 2025-12-13 08:28:16.817 248514 INFO nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Creating config drive at /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config#033[00m
Dec 13 03:28:16 np0005558241 nova_compute[248510]: 2025-12-13 08:28:16.823 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoh8a2dv8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:16 np0005558241 nova_compute[248510]: 2025-12-13 08:28:16.979 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoh8a2dv8" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:17 np0005558241 nova_compute[248510]: 2025-12-13 08:28:17.005 248514 DEBUG nova.storage.rbd_utils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:17 np0005558241 nova_compute[248510]: 2025-12-13 08:28:17.010 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:17 np0005558241 nova_compute[248510]: 2025-12-13 08:28:17.152 248514 DEBUG oslo_concurrency.processutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:17 np0005558241 nova_compute[248510]: 2025-12-13 08:28:17.154 248514 INFO nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Deleting local config drive /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c/disk.config because it was imported into RBD.#033[00m
Dec 13 03:28:17 np0005558241 kernel: tapd8c7cad7-f6: entered promiscuous mode
Dec 13 03:28:17 np0005558241 NetworkManager[50376]: <info>  [1765614497.2173] manager: (tapd8c7cad7-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/216)
Dec 13 03:28:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:17Z|00463|binding|INFO|Claiming lport d8c7cad7-f601-4205-8838-a583b6e04b0f for this chassis.
Dec 13 03:28:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:17Z|00464|binding|INFO|d8c7cad7-f601-4205-8838-a583b6e04b0f: Claiming fa:16:3e:45:57:38 10.100.0.8
Dec 13 03:28:17 np0005558241 nova_compute[248510]: 2025-12-13 08:28:17.225 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:17 np0005558241 systemd-udevd[301749]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:28:17 np0005558241 NetworkManager[50376]: <info>  [1765614497.2448] device (tapd8c7cad7-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:28:17 np0005558241 NetworkManager[50376]: <info>  [1765614497.2456] device (tapd8c7cad7-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:28:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:17Z|00465|binding|INFO|Setting lport d8c7cad7-f601-4205-8838-a583b6e04b0f ovn-installed in OVS
Dec 13 03:28:17 np0005558241 nova_compute[248510]: 2025-12-13 08:28:17.252 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:17 np0005558241 systemd-machined[210538]: New machine qemu-61-instance-00000033.
Dec 13 03:28:17 np0005558241 systemd[1]: Started Virtual Machine qemu-61-instance-00000033.
Dec 13 03:28:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 519 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 267 op/s
Dec 13 03:28:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:17Z|00466|binding|INFO|Setting lport d8c7cad7-f601-4205-8838-a583b6e04b0f up in Southbound
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.540 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:57:38 10.100.0.8'], port_security=['fa:16:3e:45:57:38 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0ccf9d68-ffc2-4f17-a2e7-effa18fc607c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c63049d-63e9-47af-99e2-ce1403a42891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '568e13d6-6674-49c1-866e-8e025ebc1046', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e00a5e3-7050-4e40-9e5a-7a1e6eee34cc, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d8c7cad7-f601-4205-8838-a583b6e04b0f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.541 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d8c7cad7-f601-4205-8838-a583b6e04b0f in datapath 6c63049d-63e9-47af-99e2-ce1403a42891 bound to our chassis#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.543 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c63049d-63e9-47af-99e2-ce1403a42891#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.563 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c6649ad5-4992-4d14-b514-ddb0f0362009]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.565 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c63049d-61 in ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.567 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c63049d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.567 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee88934-4d72-4561-89b2-28d2b5e9d835]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.570 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bf295939-aa2b-48c3-a5cf-374828815ef0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.587 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[21bc429b-5add-413a-be8e-e956c69e9754]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.613 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5b9c84b5-1c37-4ce2-a85d-7ee77efd1bda]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.645 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a4881c8d-313d-45ef-a442-7bc88f518502]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:17 np0005558241 NetworkManager[50376]: <info>  [1765614497.6520] manager: (tap6c63049d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/217)
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.653 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d0fa864-09da-4557-81c1-e0df3788a217]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.689 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0e55d867-ada1-4f31-a429-d6c543a5717a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.692 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0b842e59-d1b4-46d3-b0f7-c7d4db2bc992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:17 np0005558241 NetworkManager[50376]: <info>  [1765614497.7227] device (tap6c63049d-60): carrier: link connected
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.734 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[91b28eef-0a8a-4794-8c6f-d65669bc1fff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.755 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb12f1a-c88d-47a9-92d1-69c25dd6b310]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c63049d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707493, 'reachable_time': 20027, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301911, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.774 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3a58c212-529d-4527-97c1-07c2e9161efb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:c2f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707493, 'tstamp': 707493}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301912, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.797 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7bff5f14-b415-4ba7-bb36-0bb5a01270ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c63049d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707493, 'reachable_time': 20027, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301913, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.836 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[379caf59-29be-4ced-bc21-111dff56a813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.912 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0b629920-09c4-4401-95ff-e212721889cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.913 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c63049d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.914 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.914 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c63049d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:17 np0005558241 nova_compute[248510]: 2025-12-13 08:28:17.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:17 np0005558241 NetworkManager[50376]: <info>  [1765614497.9168] manager: (tap6c63049d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Dec 13 03:28:17 np0005558241 kernel: tap6c63049d-60: entered promiscuous mode
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.921 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c63049d-60, col_values=(('external_ids', {'iface-id': 'b410790c-12b7-4a29-87e5-13a29af9c319'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:17 np0005558241 nova_compute[248510]: 2025-12-13 08:28:17.922 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:17Z|00467|binding|INFO|Releasing lport b410790c-12b7-4a29-87e5-13a29af9c319 from this chassis (sb_readonly=0)
Dec 13 03:28:17 np0005558241 nova_compute[248510]: 2025-12-13 08:28:17.941 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.943 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.944 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8366a85f-5c79-449f-b8b1-38d8bc345ed1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.945 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:28:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:17.945 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'env', 'PROCESS_TAG=haproxy-6c63049d-63e9-47af-99e2-ce1403a42891', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c63049d-63e9-47af-99e2-ce1403a42891.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:28:18 np0005558241 podman[301943]: 2025-12-13 08:28:18.395150924 +0000 UTC m=+0.066875171 container create abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 13 03:28:18 np0005558241 podman[301943]: 2025-12-13 08:28:18.360306925 +0000 UTC m=+0.032031202 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:28:18 np0005558241 systemd[1]: Started libpod-conmon-abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8.scope.
Dec 13 03:28:18 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:28:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0850c2732f6cad671f2980e01a6b8b5803f4a1da7e4ec39d2b25d9c9189acf0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:28:18 np0005558241 podman[301943]: 2025-12-13 08:28:18.524611609 +0000 UTC m=+0.196335876 container init abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:28:18 np0005558241 podman[301943]: 2025-12-13 08:28:18.536571174 +0000 UTC m=+0.208295421 container start abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:28:18 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[302023]: [NOTICE]   (302052) : New worker (302061) forked
Dec 13 03:28:18 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[302023]: [NOTICE]   (302052) : Loading success.
Dec 13 03:28:18 np0005558241 podman[301995]: 2025-12-13 08:28:18.576989111 +0000 UTC m=+0.136561460 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 03:28:18 np0005558241 nova_compute[248510]: 2025-12-13 08:28:18.581 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 03:28:18 np0005558241 nova_compute[248510]: 2025-12-13 08:28:18.582 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614498.580779, 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:18 np0005558241 nova_compute[248510]: 2025-12-13 08:28:18.582 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] VM Started (Lifecycle Event)#033[00m
Dec 13 03:28:18 np0005558241 podman[301996]: 2025-12-13 08:28:18.586112876 +0000 UTC m=+0.139498823 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 13 03:28:18 np0005558241 podman[301992]: 2025-12-13 08:28:18.589730265 +0000 UTC m=+0.147728716 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:28:18 np0005558241 nova_compute[248510]: 2025-12-13 08:28:18.681 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:18 np0005558241 nova_compute[248510]: 2025-12-13 08:28:18.686 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614498.5809326, 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:18 np0005558241 nova_compute[248510]: 2025-12-13 08:28:18.686 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:28:18 np0005558241 nova_compute[248510]: 2025-12-13 08:28:18.711 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:18 np0005558241 nova_compute[248510]: 2025-12-13 08:28:18.714 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:28:18 np0005558241 nova_compute[248510]: 2025-12-13 08:28:18.745 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec 13 03:28:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 498 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.0 MiB/s wr, 316 op/s
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.030 248514 DEBUG nova.compute.manager [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Received event network-vif-plugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.031 248514 DEBUG oslo_concurrency.lockutils [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.032 248514 DEBUG oslo_concurrency.lockutils [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.032 248514 DEBUG oslo_concurrency.lockutils [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.032 248514 DEBUG nova.compute.manager [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Processing event network-vif-plugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.033 248514 DEBUG nova.compute.manager [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Received event network-vif-plugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.033 248514 DEBUG oslo_concurrency.lockutils [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.033 248514 DEBUG oslo_concurrency.lockutils [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.033 248514 DEBUG oslo_concurrency.lockutils [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.034 248514 DEBUG nova.compute.manager [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] No waiting events found dispatching network-vif-plugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.034 248514 WARNING nova.compute.manager [req-006db116-602e-407d-99c6-125659e769af req-4be652cc-bfe8-4b45-be00-e30e3540080c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Received unexpected event network-vif-plugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb for instance with vm_state building and task_state spawning.#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.035 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.037 248514 DEBUG nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.051 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614500.0513272, 5d8c2900-0048-4631-bbc6-0122bce8f4f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.052 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.054 248514 DEBUG nova.virt.libvirt.driver [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.058 248514 INFO nova.virt.libvirt.driver [-] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Instance spawned successfully.#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.058 248514 INFO nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Took 13.92 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.059 248514 DEBUG nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.077 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.082 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.117 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:28:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.312 248514 INFO nova.compute.manager [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Took 16.10 seconds to build instance.#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.433 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:20 np0005558241 nova_compute[248510]: 2025-12-13 08:28:20.468 248514 DEBUG oslo_concurrency.lockutils [None req-0f75d756-048f-43a1-9da3-e072e086f352 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00374151092803035 of space, bias 1.0, pg target 1.122453278409105 quantized to 32 (current 32)
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010130296284747075 of space, bias 1.0, pg target 0.30289585891393755 quantized to 32 (current 32)
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.067014208962118e-07 of space, bias 4.0, pg target 0.0008452148993918693 quantized to 16 (current 32)
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:28:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Dec 13 03:28:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1920: 321 pgs: 321 active+clean; 498 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.8 MiB/s wr, 255 op/s
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.324 248514 DEBUG oslo_concurrency.lockutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.324 248514 DEBUG oslo_concurrency.lockutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.325 248514 DEBUG oslo_concurrency.lockutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.325 248514 DEBUG oslo_concurrency.lockutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.325 248514 DEBUG oslo_concurrency.lockutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.327 248514 INFO nova.compute.manager [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Terminating instance#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.328 248514 DEBUG nova.compute.manager [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.336 248514 DEBUG nova.virt.libvirt.driver [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec 13 03:28:22 np0005558241 kernel: tapb03c2424-77 (unregistering): left promiscuous mode
Dec 13 03:28:22 np0005558241 NetworkManager[50376]: <info>  [1765614502.3746] device (tapb03c2424-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:28:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:22Z|00468|binding|INFO|Releasing lport b03c2424-77e3-49e1-b55f-f317911025b6 from this chassis (sb_readonly=0)
Dec 13 03:28:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:22Z|00469|binding|INFO|Setting lport b03c2424-77e3-49e1-b55f-f317911025b6 down in Southbound
Dec 13 03:28:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:22Z|00470|binding|INFO|Removing iface tapb03c2424-77 ovn-installed in OVS
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.396 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.415 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.434 248514 DEBUG nova.compute.manager [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.433 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:bc:28 10.100.0.4'], port_security=['fa:16:3e:33:bc:28 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e6e0fdaf-f934-4e56-8e59-4c4475bacd26', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7576f079-0439-46aa-98af-04f80cd254ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3490ad817e664ff6b12c4ea88192b667', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be530db5-69a3-4a66-a983-785ca72e353e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97e2093c-87d8-4348-abd8-68baa3943443, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b03c2424-77e3-49e1-b55f-f317911025b6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.435 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b03c2424-77e3-49e1-b55f-f317911025b6 in datapath 7576f079-0439-46aa-98af-04f80cd254ca unbound from our chassis#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.435 248514 DEBUG oslo_concurrency.lockutils [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.435 248514 DEBUG oslo_concurrency.lockutils [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.436 248514 DEBUG oslo_concurrency.lockutils [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.436 248514 DEBUG nova.compute.manager [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Processing event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.436 248514 DEBUG nova.compute.manager [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.437 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7576f079-0439-46aa-98af-04f80cd254ca#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.436 248514 DEBUG oslo_concurrency.lockutils [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.437 248514 DEBUG oslo_concurrency.lockutils [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.437 248514 DEBUG oslo_concurrency.lockutils [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.437 248514 DEBUG nova.compute.manager [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] No waiting events found dispatching network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.438 248514 WARNING nova.compute.manager [req-0c7d3741-ff9a-4bbb-b3d7-6b5064113af0 req-aa8752e9-635c-4b44-97d3-0827ce443c34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received unexpected event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f for instance with vm_state active and task_state rebuild_spawning.#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.439 248514 DEBUG nova.compute.manager [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:28:22 np0005558241 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000031.scope: Deactivated successfully.
Dec 13 03:28:22 np0005558241 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000031.scope: Consumed 14.196s CPU time.
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.450 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614502.4432456, 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.451 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.452 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:28:22 np0005558241 systemd-machined[210538]: Machine qemu-54-instance-00000031 terminated.
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.456 248514 INFO nova.virt.libvirt.driver [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance spawned successfully.#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.456 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.464 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[51bf02ff-8c2a-4d88-855d-721da7cb4165]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.509 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3e876e55-dc46-4684-a8da-a6e929725c2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.514 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e0306571-06ba-4333-b34f-a76d0f171337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.546 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.559 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.559 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:22 np0005558241 kernel: tapb03c2424-77: entered promiscuous mode
Dec 13 03:28:22 np0005558241 systemd-udevd[302077]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:28:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:22Z|00471|binding|INFO|Claiming lport b03c2424-77e3-49e1-b55f-f317911025b6 for this chassis.
Dec 13 03:28:22 np0005558241 NetworkManager[50376]: <info>  [1765614502.5647] manager: (tapb03c2424-77): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Dec 13 03:28:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:22Z|00472|binding|INFO|b03c2424-77e3-49e1-b55f-f317911025b6: Claiming fa:16:3e:33:bc:28 10.100.0.4
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.560 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.568 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.569 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[505fa89a-5c2f-497c-8182-0b16c508c5fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.569 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.569 248514 DEBUG nova.virt.libvirt.driver [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.573 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:22 np0005558241 kernel: tapb03c2424-77 (unregistering): left promiscuous mode
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.582 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.585 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:bc:28 10.100.0.4'], port_security=['fa:16:3e:33:bc:28 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e6e0fdaf-f934-4e56-8e59-4c4475bacd26', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7576f079-0439-46aa-98af-04f80cd254ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3490ad817e664ff6b12c4ea88192b667', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be530db5-69a3-4a66-a983-785ca72e353e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97e2093c-87d8-4348-abd8-68baa3943443, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b03c2424-77e3-49e1-b55f-f317911025b6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:28:22 np0005558241 virtnodedevd[248873]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 13 03:28:22 np0005558241 virtnodedevd[248873]: hostname: compute-0
Dec 13 03:28:22 np0005558241 virtnodedevd[248873]: ethtool ioctl error on tapb03c2424-77: No such device
Dec 13 03:28:22 np0005558241 virtnodedevd[248873]: ethtool ioctl error on tapb03c2424-77: No such device
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.603 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:22Z|00473|binding|INFO|Setting lport b03c2424-77e3-49e1-b55f-f317911025b6 ovn-installed in OVS
Dec 13 03:28:22 np0005558241 virtnodedevd[248873]: ethtool ioctl error on tapb03c2424-77: No such device
Dec 13 03:28:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:22Z|00474|binding|INFO|Setting lport b03c2424-77e3-49e1-b55f-f317911025b6 up in Southbound
Dec 13 03:28:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:22Z|00475|binding|INFO|Releasing lport b03c2424-77e3-49e1-b55f-f317911025b6 from this chassis (sb_readonly=1)
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.606 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:22Z|00476|if_status|INFO|Dropped 1 log messages in last 99 seconds (most recently, 99 seconds ago) due to excessive rate
Dec 13 03:28:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:22Z|00477|if_status|INFO|Not setting lport b03c2424-77e3-49e1-b55f-f317911025b6 down as sb is readonly
Dec 13 03:28:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:22Z|00478|binding|INFO|Removing iface tapb03c2424-77 ovn-installed in OVS
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.611 248514 INFO nova.virt.libvirt.driver [-] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Instance destroyed successfully.#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.611 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.612 248514 DEBUG nova.objects.instance [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'resources' on Instance uuid e6e0fdaf-f934-4e56-8e59-4c4475bacd26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.612 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6f69d39c-37c4-4889-8a1c-ed9068292f4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7576f079-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:f1:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701789, 'reachable_time': 37822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302090, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:22 np0005558241 virtnodedevd[248873]: ethtool ioctl error on tapb03c2424-77: No such device
Dec 13 03:28:22 np0005558241 virtnodedevd[248873]: ethtool ioctl error on tapb03c2424-77: No such device
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.632 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:22 np0005558241 virtnodedevd[248873]: ethtool ioctl error on tapb03c2424-77: No such device
Dec 13 03:28:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:22Z|00479|binding|INFO|Releasing lport b03c2424-77e3-49e1-b55f-f317911025b6 from this chassis (sb_readonly=0)
Dec 13 03:28:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:22Z|00480|binding|INFO|Setting lport b03c2424-77e3-49e1-b55f-f317911025b6 down in Southbound
Dec 13 03:28:22 np0005558241 virtnodedevd[248873]: ethtool ioctl error on tapb03c2424-77: No such device
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.640 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[da916449-8398-4aa2-bd6d-639d5e70196f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701804, 'tstamp': 701804}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302101, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701808, 'tstamp': 701808}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302101, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.645 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7576f079-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.647 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:22 np0005558241 virtnodedevd[248873]: ethtool ioctl error on tapb03c2424-77: No such device
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.650 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:bc:28 10.100.0.4'], port_security=['fa:16:3e:33:bc:28 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e6e0fdaf-f934-4e56-8e59-4c4475bacd26', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7576f079-0439-46aa-98af-04f80cd254ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3490ad817e664ff6b12c4ea88192b667', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be530db5-69a3-4a66-a983-785ca72e353e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97e2093c-87d8-4348-abd8-68baa3943443, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b03c2424-77e3-49e1-b55f-f317911025b6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.652 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.653 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7576f079-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.653 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.653 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7576f079-00, col_values=(('external_ids', {'iface-id': '5d3dc22b-9ad2-46cb-b9cb-35c58ffd1fa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.654 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.655 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b03c2424-77e3-49e1-b55f-f317911025b6 in datapath 7576f079-0439-46aa-98af-04f80cd254ca unbound from our chassis#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.657 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7576f079-0439-46aa-98af-04f80cd254ca#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.678 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2591a32e-47f1-4a51-9730-2a8c63d4542e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.685 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.692 248514 DEBUG nova.virt.libvirt.vif [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:27:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1305127951',display_name='tempest-ListServerFiltersTestJSON-instance-1305127951',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1305127951',id=49,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3490ad817e664ff6b12c4ea88192b667',ramdisk_id='',reservation_id='r-xlshtj64',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mod
el='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1229542462',owner_user_name='tempest-ListServerFiltersTestJSON-1229542462-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:27:35Z,user_data=None,user_id='65a6b617130a42ac9c3d9b4abf6a1cfb',uuid=e6e0fdaf-f934-4e56-8e59-4c4475bacd26,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b03c2424-77e3-49e1-b55f-f317911025b6", "address": "fa:16:3e:33:bc:28", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03c2424-77", "ovs_interfaceid": "b03c2424-77e3-49e1-b55f-f317911025b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.692 248514 DEBUG nova.network.os_vif_util [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converting VIF {"id": "b03c2424-77e3-49e1-b55f-f317911025b6", "address": "fa:16:3e:33:bc:28", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb03c2424-77", "ovs_interfaceid": "b03c2424-77e3-49e1-b55f-f317911025b6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.693 248514 DEBUG nova.network.os_vif_util [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:bc:28,bridge_name='br-int',has_traffic_filtering=True,id=b03c2424-77e3-49e1-b55f-f317911025b6,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb03c2424-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.694 248514 DEBUG os_vif [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:bc:28,bridge_name='br-int',has_traffic_filtering=True,id=b03c2424-77e3-49e1-b55f-f317911025b6,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb03c2424-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.696 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.696 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb03c2424-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.698 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.699 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.702 248514 INFO os_vif [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:bc:28,bridge_name='br-int',has_traffic_filtering=True,id=b03c2424-77e3-49e1-b55f-f317911025b6,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb03c2424-77')#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.730 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[78623917-6ed6-47c3-982b-2e31e546774b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.735 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c8373fa5-ed67-4d15-986c-c9644faaca43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.779 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[18651dbc-0b0b-420b-8e05-c317c453bf54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.811 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d37de6-7e9f-4d8b-a442-34bb9d27305d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7576f079-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:f1:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701789, 'reachable_time': 37822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302136, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.838 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e401b0c0-0b48-46a5-9fbc-dfbe40386784]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701804, 'tstamp': 701804}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302137, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701808, 'tstamp': 701808}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302137, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.840 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7576f079-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.847 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7576f079-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.848 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.849 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7576f079-00, col_values=(('external_ids', {'iface-id': '5d3dc22b-9ad2-46cb-b9cb-35c58ffd1fa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.850 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.853 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b03c2424-77e3-49e1-b55f-f317911025b6 in datapath 7576f079-0439-46aa-98af-04f80cd254ca unbound from our chassis#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.854 248514 DEBUG oslo_concurrency.lockutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.854 248514 DEBUG oslo_concurrency.lockutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.854 248514 DEBUG oslo_concurrency.lockutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.855 248514 DEBUG oslo_concurrency.lockutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.855 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7576f079-0439-46aa-98af-04f80cd254ca#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.855 248514 DEBUG oslo_concurrency.lockutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.856 248514 INFO nova.compute.manager [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Terminating instance#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.858 248514 DEBUG nova.compute.manager [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.860 248514 DEBUG nova.compute.manager [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.861 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.886 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fec3d027-2fba-453f-9f8b-b32736616013]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:22 np0005558241 kernel: tapd41fdf9b-1d (unregistering): left promiscuous mode
Dec 13 03:28:22 np0005558241 NetworkManager[50376]: <info>  [1765614502.9169] device (tapd41fdf9b-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.922 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:22Z|00481|binding|INFO|Releasing lport d41fdf9b-1d4a-475f-b516-69fa17b19cfb from this chassis (sb_readonly=0)
Dec 13 03:28:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:22Z|00482|binding|INFO|Setting lport d41fdf9b-1d4a-475f-b516-69fa17b19cfb down in Southbound
Dec 13 03:28:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:22Z|00483|binding|INFO|Removing iface tapd41fdf9b-1d ovn-installed in OVS
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.929 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.944 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.947 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad3e4fc-f2f3-499d-a88d-182844e83aea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.951 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd88e98-28c3-4dc3-8f11-70ad20625c36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.966 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:5e:3e 10.100.0.3'], port_security=['fa:16:3e:8e:5e:3e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5d8c2900-0048-4631-bbc6-0122bce8f4f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52e1055963294dbdb16cd95b466cd4d9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '269e7310-e268-4e11-a2e8-e4c22118637e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99e34124-e040-4994-aa81-ced5b49550b0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d41fdf9b-1d4a-475f-b516-69fa17b19cfb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.977 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.978 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:22 np0005558241 nova_compute[248510]: 2025-12-13 08:28:22.978 248514 DEBUG nova.objects.instance [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec 13 03:28:22 np0005558241 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000035.scope: Deactivated successfully.
Dec 13 03:28:22 np0005558241 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000035.scope: Consumed 3.660s CPU time.
Dec 13 03:28:22 np0005558241 systemd-machined[210538]: Machine qemu-60-instance-00000035 terminated.
Dec 13 03:28:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:22.993 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[afd28837-9356-431f-99df-b065c4788e3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.020 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[79593e59-525a-4433-a70a-09c4a39ed366]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7576f079-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:f1:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701789, 'reachable_time': 37822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302147, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.045 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ff40368e-1d37-4476-8398-54410a4a00a8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701804, 'tstamp': 701804}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302148, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701808, 'tstamp': 701808}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302148, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.048 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7576f079-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.051 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.057 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.058 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7576f079-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.058 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.058 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7576f079-00, col_values=(('external_ids', {'iface-id': '5d3dc22b-9ad2-46cb-b9cb-35c58ffd1fa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.059 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.060 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d41fdf9b-1d4a-475f-b516-69fa17b19cfb in datapath 87bd91d0-eead-49b6-8f92-f8d0dba555dc unbound from our chassis#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.061 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87bd91d0-eead-49b6-8f92-f8d0dba555dc#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.090 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ab72f7a3-426b-4c1e-85a5-73ee9e85662b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.102 248514 INFO nova.virt.libvirt.driver [-] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Instance destroyed successfully.#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.102 248514 DEBUG nova.objects.instance [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'resources' on Instance uuid 5d8c2900-0048-4631-bbc6-0122bce8f4f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.137 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9b330e9e-233e-4fdf-acdf-2445277042ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.139 248514 INFO nova.virt.libvirt.driver [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Deleting instance files /var/lib/nova/instances/e6e0fdaf-f934-4e56-8e59-4c4475bacd26_del#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.141 248514 INFO nova.virt.libvirt.driver [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Deletion of /var/lib/nova/instances/e6e0fdaf-f934-4e56-8e59-4c4475bacd26_del complete#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.141 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb63a97-a2df-4949-8562-be39a9d8c83a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.182 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f04bcd56-478a-4f97-b56c-3b6cea642b4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.204 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[43e03665-8047-451e-be9c-0c514df28187]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87bd91d0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:ec:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703784, 'reachable_time': 44592, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302166, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.210 248514 DEBUG oslo_concurrency.lockutils [None req-497bae2a-1671-4eec-a26b-aaaab7e3c2d9 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.227 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6056094b-c553-4fe9-97f5-1f1c195c90dd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap87bd91d0-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703801, 'tstamp': 703801}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302167, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap87bd91d0-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703805, 'tstamp': 703805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302167, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.229 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87bd91d0-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.231 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.236 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.237 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87bd91d0-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.237 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.238 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87bd91d0-e0, col_values=(('external_ids', {'iface-id': '3e59c36d-12c7-4d92-aa2f-8b6a795f3f98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:23.238 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 457 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 2.8 MiB/s wr, 305 op/s
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.651 248514 DEBUG nova.virt.libvirt.vif [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:28:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-622802661',display_name='tempest-ImagesTestJSON-server-622802661',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-622802661',id=53,image_ref='e4e9ee37-4059-46f1-9bf8-83a72d0403e7',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:28:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-0ufcm8q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='df25cd40-72b5-4e0f-90ec-8677c699d1d3',image_min_disk='1',image_min_ram='0',image_owner_id='52e1055963294dbdb16cd95b466cd4d9',image_owner_project_name='tempest-ImagesTestJSON-1234382421',image_owner_user_name='tempest-ImagesTestJSON-1234382421-project-member',image_user_id='3b988c7ac9354c59aac9a9f41f83c20f',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:28:20Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=5d8c2900-0048-4631-bbc6-0122bce8f4f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.652 248514 DEBUG nova.network.os_vif_util [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "address": "fa:16:3e:8e:5e:3e", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41fdf9b-1d", "ovs_interfaceid": "d41fdf9b-1d4a-475f-b516-69fa17b19cfb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.653 248514 DEBUG nova.network.os_vif_util [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=d41fdf9b-1d4a-475f-b516-69fa17b19cfb,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41fdf9b-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.653 248514 DEBUG os_vif [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=d41fdf9b-1d4a-475f-b516-69fa17b19cfb,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41fdf9b-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.656 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.657 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd41fdf9b-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.658 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.660 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.670 248514 INFO os_vif [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=d41fdf9b-1d4a-475f-b516-69fa17b19cfb,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41fdf9b-1d')#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.853 248514 INFO nova.compute.manager [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Took 1.53 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.854 248514 DEBUG oslo.service.loopingcall [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.855 248514 DEBUG nova.compute.manager [-] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:28:23 np0005558241 nova_compute[248510]: 2025-12-13 08:28:23.855 248514 DEBUG nova.network.neutron [-] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:28:24 np0005558241 nova_compute[248510]: 2025-12-13 08:28:24.164 248514 INFO nova.virt.libvirt.driver [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Deleting instance files /var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3_del#033[00m
Dec 13 03:28:24 np0005558241 nova_compute[248510]: 2025-12-13 08:28:24.165 248514 INFO nova.virt.libvirt.driver [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Deletion of /var/lib/nova/instances/5d8c2900-0048-4631-bbc6-0122bce8f4f3_del complete#033[00m
Dec 13 03:28:24 np0005558241 nova_compute[248510]: 2025-12-13 08:28:24.675 248514 INFO nova.compute.manager [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Took 1.82 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:28:24 np0005558241 nova_compute[248510]: 2025-12-13 08:28:24.675 248514 DEBUG oslo.service.loopingcall [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:28:24 np0005558241 nova_compute[248510]: 2025-12-13 08:28:24.676 248514 DEBUG nova.compute.manager [-] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:28:24 np0005558241 nova_compute[248510]: 2025-12-13 08:28:24.676 248514 DEBUG nova.network.neutron [-] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:28:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:24Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:ce:b2 10.100.0.10
Dec 13 03:28:24 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Dec 13 03:28:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:24.925 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:28:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:24.927 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:28:24 np0005558241 nova_compute[248510]: 2025-12-13 08:28:24.927 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:25 np0005558241 nova_compute[248510]: 2025-12-13 08:28:25.030 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:28:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 429 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.6 MiB/s wr, 319 op/s
Dec 13 03:28:25 np0005558241 nova_compute[248510]: 2025-12-13 08:28:25.678 248514 DEBUG nova.compute.manager [req-2ecfffda-58ef-47f8-8cef-6621b0580d24 req-4d6d54ff-8372-4816-b39f-92c5a69f7f2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Received event network-vif-unplugged-b03c2424-77e3-49e1-b55f-f317911025b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:25 np0005558241 nova_compute[248510]: 2025-12-13 08:28:25.679 248514 DEBUG oslo_concurrency.lockutils [req-2ecfffda-58ef-47f8-8cef-6621b0580d24 req-4d6d54ff-8372-4816-b39f-92c5a69f7f2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:25 np0005558241 nova_compute[248510]: 2025-12-13 08:28:25.679 248514 DEBUG oslo_concurrency.lockutils [req-2ecfffda-58ef-47f8-8cef-6621b0580d24 req-4d6d54ff-8372-4816-b39f-92c5a69f7f2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:25 np0005558241 nova_compute[248510]: 2025-12-13 08:28:25.680 248514 DEBUG oslo_concurrency.lockutils [req-2ecfffda-58ef-47f8-8cef-6621b0580d24 req-4d6d54ff-8372-4816-b39f-92c5a69f7f2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:25 np0005558241 nova_compute[248510]: 2025-12-13 08:28:25.680 248514 DEBUG nova.compute.manager [req-2ecfffda-58ef-47f8-8cef-6621b0580d24 req-4d6d54ff-8372-4816-b39f-92c5a69f7f2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] No waiting events found dispatching network-vif-unplugged-b03c2424-77e3-49e1-b55f-f317911025b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:25 np0005558241 nova_compute[248510]: 2025-12-13 08:28:25.680 248514 DEBUG nova.compute.manager [req-2ecfffda-58ef-47f8-8cef-6621b0580d24 req-4d6d54ff-8372-4816-b39f-92c5a69f7f2d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Received event network-vif-unplugged-b03c2424-77e3-49e1-b55f-f317911025b6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:28:25 np0005558241 nova_compute[248510]: 2025-12-13 08:28:25.794 248514 DEBUG nova.compute.manager [req-c6ad9c15-e577-4b71-b6a3-d57c13903eda req-a9719af8-e8e0-47b2-8fac-43de829cafbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Received event network-vif-unplugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:25 np0005558241 nova_compute[248510]: 2025-12-13 08:28:25.795 248514 DEBUG oslo_concurrency.lockutils [req-c6ad9c15-e577-4b71-b6a3-d57c13903eda req-a9719af8-e8e0-47b2-8fac-43de829cafbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:25 np0005558241 nova_compute[248510]: 2025-12-13 08:28:25.795 248514 DEBUG oslo_concurrency.lockutils [req-c6ad9c15-e577-4b71-b6a3-d57c13903eda req-a9719af8-e8e0-47b2-8fac-43de829cafbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:25 np0005558241 nova_compute[248510]: 2025-12-13 08:28:25.795 248514 DEBUG oslo_concurrency.lockutils [req-c6ad9c15-e577-4b71-b6a3-d57c13903eda req-a9719af8-e8e0-47b2-8fac-43de829cafbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:25 np0005558241 nova_compute[248510]: 2025-12-13 08:28:25.795 248514 DEBUG nova.compute.manager [req-c6ad9c15-e577-4b71-b6a3-d57c13903eda req-a9719af8-e8e0-47b2-8fac-43de829cafbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] No waiting events found dispatching network-vif-unplugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:25 np0005558241 nova_compute[248510]: 2025-12-13 08:28:25.795 248514 DEBUG nova.compute.manager [req-c6ad9c15-e577-4b71-b6a3-d57c13903eda req-a9719af8-e8e0-47b2-8fac-43de829cafbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Received event network-vif-unplugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:28:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:28:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:28:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:28:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:28:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:28:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:28:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:28:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:28:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:28:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:28:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:28:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:28:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:26Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cc:30:15 10.100.0.4
Dec 13 03:28:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:26Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cc:30:15 10.100.0.4
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.379 248514 DEBUG nova.network.neutron [-] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.403 248514 DEBUG nova.network.neutron [-] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:28:26 np0005558241 podman[302331]: 2025-12-13 08:28:26.429554382 +0000 UTC m=+0.062959245 container create e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.442 248514 INFO nova.compute.manager [-] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Took 2.59 seconds to deallocate network for instance.#033[00m
Dec 13 03:28:26 np0005558241 systemd[1]: Started libpod-conmon-e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9.scope.
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.487 248514 INFO nova.compute.manager [-] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Took 1.81 seconds to deallocate network for instance.#033[00m
Dec 13 03:28:26 np0005558241 podman[302331]: 2025-12-13 08:28:26.406925463 +0000 UTC m=+0.040330346 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:28:26 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:28:26 np0005558241 podman[302331]: 2025-12-13 08:28:26.538240163 +0000 UTC m=+0.171645056 container init e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:28:26 np0005558241 podman[302331]: 2025-12-13 08:28:26.549137502 +0000 UTC m=+0.182542365 container start e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 03:28:26 np0005558241 podman[302331]: 2025-12-13 08:28:26.552759612 +0000 UTC m=+0.186164505 container attach e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:28:26 np0005558241 systemd[1]: libpod-e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9.scope: Deactivated successfully.
Dec 13 03:28:26 np0005558241 modest_greider[302344]: 167 167
Dec 13 03:28:26 np0005558241 conmon[302344]: conmon e996c2515384e7abe4da <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9.scope/container/memory.events
Dec 13 03:28:26 np0005558241 podman[302331]: 2025-12-13 08:28:26.560902162 +0000 UTC m=+0.194307025 container died e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:28:26 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c1a79a754c8820a4f0dad31d9dbd878961df491583a1f6131d2d4121a13ce6e8-merged.mount: Deactivated successfully.
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.603 248514 DEBUG oslo_concurrency.lockutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.604 248514 DEBUG oslo_concurrency.lockutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:26 np0005558241 podman[302331]: 2025-12-13 08:28:26.613298555 +0000 UTC m=+0.246703418 container remove e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_greider, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:28:26 np0005558241 systemd[1]: libpod-conmon-e996c2515384e7abe4da404e1f1c4768811a28e490971aa153287354f51171c9.scope: Deactivated successfully.
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.718 248514 DEBUG oslo_concurrency.lockutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.718 248514 DEBUG oslo_concurrency.lockutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.719 248514 DEBUG oslo_concurrency.lockutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.719 248514 DEBUG oslo_concurrency.lockutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.720 248514 DEBUG oslo_concurrency.lockutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.721 248514 INFO nova.compute.manager [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Terminating instance#033[00m
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.722 248514 DEBUG nova.compute.manager [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.724 248514 DEBUG oslo_concurrency.lockutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:26 np0005558241 kernel: tapd8c7cad7-f6 (unregistering): left promiscuous mode
Dec 13 03:28:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:28:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:28:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:28:26 np0005558241 NetworkManager[50376]: <info>  [1765614506.7694] device (tapd8c7cad7-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.779 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:26Z|00484|binding|INFO|Releasing lport d8c7cad7-f601-4205-8838-a583b6e04b0f from this chassis (sb_readonly=0)
Dec 13 03:28:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:26Z|00485|binding|INFO|Setting lport d8c7cad7-f601-4205-8838-a583b6e04b0f down in Southbound
Dec 13 03:28:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:26Z|00486|binding|INFO|Removing iface tapd8c7cad7-f6 ovn-installed in OVS
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.782 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:26.794 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:57:38 10.100.0.8'], port_security=['fa:16:3e:45:57:38 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0ccf9d68-ffc2-4f17-a2e7-effa18fc607c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c63049d-63e9-47af-99e2-ce1403a42891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '568e13d6-6674-49c1-866e-8e025ebc1046', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e00a5e3-7050-4e40-9e5a-7a1e6eee34cc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d8c7cad7-f601-4205-8838-a583b6e04b0f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:28:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:26.796 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d8c7cad7-f601-4205-8838-a583b6e04b0f in datapath 6c63049d-63e9-47af-99e2-ce1403a42891 unbound from our chassis#033[00m
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.799 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:26.800 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c63049d-63e9-47af-99e2-ce1403a42891, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:28:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:26.802 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[37122332-ac58-48c1-af87-0a34e0b24a96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:26.803 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 namespace which is not needed anymore#033[00m
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.811 248514 DEBUG oslo_concurrency.processutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:26 np0005558241 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000033.scope: Deactivated successfully.
Dec 13 03:28:26 np0005558241 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000033.scope: Consumed 5.376s CPU time.
Dec 13 03:28:26 np0005558241 systemd-machined[210538]: Machine qemu-61-instance-00000033 terminated.
Dec 13 03:28:26 np0005558241 podman[302373]: 2025-12-13 08:28:26.866416301 +0000 UTC m=+0.057413618 container create 04ccfce77f0acc69f30fc0a5eaed7d249214d0570ff8af010d30decf87a714a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 03:28:26 np0005558241 systemd[1]: Started libpod-conmon-04ccfce77f0acc69f30fc0a5eaed7d249214d0570ff8af010d30decf87a714a8.scope.
Dec 13 03:28:26 np0005558241 podman[302373]: 2025-12-13 08:28:26.842288125 +0000 UTC m=+0.033285442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:28:26 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:28:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071ce435d49ba2eb57f3f71f225c9875901a24a1806c029d5ee4fc46f31352d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:28:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071ce435d49ba2eb57f3f71f225c9875901a24a1806c029d5ee4fc46f31352d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:28:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071ce435d49ba2eb57f3f71f225c9875901a24a1806c029d5ee4fc46f31352d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:28:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071ce435d49ba2eb57f3f71f225c9875901a24a1806c029d5ee4fc46f31352d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:28:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071ce435d49ba2eb57f3f71f225c9875901a24a1806c029d5ee4fc46f31352d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:28:26 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[302023]: [NOTICE]   (302052) : haproxy version is 2.8.14-c23fe91
Dec 13 03:28:26 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[302023]: [NOTICE]   (302052) : path to executable is /usr/sbin/haproxy
Dec 13 03:28:26 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[302023]: [WARNING]  (302052) : Exiting Master process...
Dec 13 03:28:26 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[302023]: [ALERT]    (302052) : Current worker (302061) exited with code 143 (Terminated)
Dec 13 03:28:26 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[302023]: [WARNING]  (302052) : All workers exited. Exiting... (0)
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.974 248514 INFO nova.virt.libvirt.driver [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Instance destroyed successfully.#033[00m
Dec 13 03:28:26 np0005558241 systemd[1]: libpod-abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8.scope: Deactivated successfully.
Dec 13 03:28:26 np0005558241 nova_compute[248510]: 2025-12-13 08:28:26.975 248514 DEBUG nova.objects.instance [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'resources' on Instance uuid 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:26 np0005558241 conmon[302023]: conmon abd7204c5e223c08d4aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8.scope/container/memory.events
Dec 13 03:28:26 np0005558241 podman[302408]: 2025-12-13 08:28:26.978675721 +0000 UTC m=+0.058980177 container died abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 03:28:27 np0005558241 podman[302373]: 2025-12-13 08:28:27.00419795 +0000 UTC m=+0.195195267 container init 04ccfce77f0acc69f30fc0a5eaed7d249214d0570ff8af010d30decf87a714a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_noether, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 03:28:27 np0005558241 podman[302373]: 2025-12-13 08:28:27.014243308 +0000 UTC m=+0.205240605 container start 04ccfce77f0acc69f30fc0a5eaed7d249214d0570ff8af010d30decf87a714a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_noether, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:28:27 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8-userdata-shm.mount: Deactivated successfully.
Dec 13 03:28:27 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b0850c2732f6cad671f2980e01a6b8b5803f4a1da7e4ec39d2b25d9c9189acf0-merged.mount: Deactivated successfully.
Dec 13 03:28:27 np0005558241 podman[302373]: 2025-12-13 08:28:27.023036815 +0000 UTC m=+0.214034142 container attach 04ccfce77f0acc69f30fc0a5eaed7d249214d0570ff8af010d30decf87a714a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 03:28:27 np0005558241 podman[302408]: 2025-12-13 08:28:27.033426671 +0000 UTC m=+0.113731127 container cleanup abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:28:27 np0005558241 systemd[1]: libpod-conmon-abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8.scope: Deactivated successfully.
Dec 13 03:28:27 np0005558241 podman[302475]: 2025-12-13 08:28:27.106471124 +0000 UTC m=+0.043709660 container remove abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 03:28:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:27.112 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[573846b4-8e1e-4d3f-8a22-41bec5b3bd0c]: (4, ('Sat Dec 13 08:28:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 (abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8)\nabd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8\nSat Dec 13 08:28:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 (abd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8)\nabd7204c5e223c08d4aaf13a6e125a0b28761035fe9f5ba6c262c2476c2942b8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:27.115 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2b7c2199-1e68-4765-bd61-7346c174da87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:27.116 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c63049d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.118 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:27 np0005558241 kernel: tap6c63049d-60: left promiscuous mode
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.140 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:27.149 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a365100b-d94f-4242-811c-a071c99c937e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.151 248514 DEBUG nova.virt.libvirt.vif [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:27:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1729116913',display_name='tempest-ServerDiskConfigTestJSON-server-1729116913',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1729116913',id=51,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:28:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-3hps9qrz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskConfigTestJSON-167971983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:28:23Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=0ccf9d68-ffc2-4f17-a2e7-effa18fc607c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.152 248514 DEBUG nova.network.os_vif_util [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "address": "fa:16:3e:45:57:38", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8c7cad7-f6", "ovs_interfaceid": "d8c7cad7-f601-4205-8838-a583b6e04b0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.153 248514 DEBUG nova.network.os_vif_util [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.153 248514 DEBUG os_vif [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.158 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.158 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8c7cad7-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.160 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.162 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.163 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.165 248514 INFO os_vif [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:57:38,bridge_name='br-int',has_traffic_filtering=True,id=d8c7cad7-f601-4205-8838-a583b6e04b0f,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8c7cad7-f6')#033[00m
Dec 13 03:28:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:27.165 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f56471b1-4037-4eac-a831-c92a96b7779f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:27.167 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef300a0f-2547-474e-aadb-b62a3b51af30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:27.190 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[631e98d5-179f-426e-9248-3de5444ed033]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707485, 'reachable_time': 36389, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302494, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:27.193 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:28:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:27.193 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[66e79c4b-7952-46dd-b7c8-5d9231bc118e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:28:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1065828780' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:28:27 np0005558241 systemd[1]: run-netns-ovnmeta\x2d6c63049d\x2d63e9\x2d47af\x2d99e2\x2dce1403a42891.mount: Deactivated successfully.
Dec 13 03:28:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 429 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 881 KiB/s wr, 212 op/s
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.463 248514 DEBUG oslo_concurrency.processutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.471 248514 DEBUG nova.compute.provider_tree [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.487 248514 INFO nova.virt.libvirt.driver [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Deleting instance files /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_del#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.487 248514 INFO nova.virt.libvirt.driver [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Deletion of /var/lib/nova/instances/0ccf9d68-ffc2-4f17-a2e7-effa18fc607c_del complete#033[00m
Dec 13 03:28:27 np0005558241 xenodochial_noether[302411]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:28:27 np0005558241 xenodochial_noether[302411]: --> All data devices are unavailable
Dec 13 03:28:27 np0005558241 systemd[1]: libpod-04ccfce77f0acc69f30fc0a5eaed7d249214d0570ff8af010d30decf87a714a8.scope: Deactivated successfully.
Dec 13 03:28:27 np0005558241 podman[302373]: 2025-12-13 08:28:27.555896383 +0000 UTC m=+0.746893690 container died 04ccfce77f0acc69f30fc0a5eaed7d249214d0570ff8af010d30decf87a714a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.559 248514 DEBUG nova.scheduler.client.report [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:28:27 np0005558241 systemd[1]: var-lib-containers-storage-overlay-071ce435d49ba2eb57f3f71f225c9875901a24a1806c029d5ee4fc46f31352d1-merged.mount: Deactivated successfully.
Dec 13 03:28:27 np0005558241 podman[302373]: 2025-12-13 08:28:27.609415503 +0000 UTC m=+0.800412790 container remove 04ccfce77f0acc69f30fc0a5eaed7d249214d0570ff8af010d30decf87a714a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.616 248514 DEBUG oslo_concurrency.lockutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:27 np0005558241 systemd[1]: libpod-conmon-04ccfce77f0acc69f30fc0a5eaed7d249214d0570ff8af010d30decf87a714a8.scope: Deactivated successfully.
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.620 248514 DEBUG oslo_concurrency.lockutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.654 248514 INFO nova.compute.manager [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.655 248514 DEBUG oslo.service.loopingcall [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.655 248514 DEBUG nova.compute.manager [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.656 248514 DEBUG nova.network.neutron [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.660 248514 INFO nova.scheduler.client.report [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Deleted allocations for instance e6e0fdaf-f934-4e56-8e59-4c4475bacd26#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.812 248514 DEBUG oslo_concurrency.processutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.938 248514 DEBUG nova.compute.manager [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Received event network-vif-plugged-b03c2424-77e3-49e1-b55f-f317911025b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.939 248514 DEBUG oslo_concurrency.lockutils [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.939 248514 DEBUG oslo_concurrency.lockutils [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.939 248514 DEBUG oslo_concurrency.lockutils [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.940 248514 DEBUG nova.compute.manager [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] No waiting events found dispatching network-vif-plugged-b03c2424-77e3-49e1-b55f-f317911025b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.940 248514 WARNING nova.compute.manager [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Received unexpected event network-vif-plugged-b03c2424-77e3-49e1-b55f-f317911025b6 for instance with vm_state deleted and task_state None.
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.940 248514 DEBUG nova.compute.manager [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Received event network-vif-plugged-b03c2424-77e3-49e1-b55f-f317911025b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.940 248514 DEBUG oslo_concurrency.lockutils [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.940 248514 DEBUG oslo_concurrency.lockutils [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.940 248514 DEBUG oslo_concurrency.lockutils [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.941 248514 DEBUG nova.compute.manager [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] No waiting events found dispatching network-vif-plugged-b03c2424-77e3-49e1-b55f-f317911025b6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.941 248514 WARNING nova.compute.manager [req-c37cfea7-35cb-431d-8fda-d6a9ababd9e1 req-3477bf30-85ac-4cdf-8386-3ad902186311 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Received unexpected event network-vif-plugged-b03c2424-77e3-49e1-b55f-f317911025b6 for instance with vm_state deleted and task_state None.
Dec 13 03:28:27 np0005558241 nova_compute[248510]: 2025-12-13 08:28:27.944 248514 DEBUG oslo_concurrency.lockutils [None req-d03b3aff-1d8f-485e-bc11-dc7d575ac1bd 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "e6e0fdaf-f934-4e56-8e59-4c4475bacd26" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:28:28 np0005558241 podman[302625]: 2025-12-13 08:28:28.118381241 +0000 UTC m=+0.045979145 container create 56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jang, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 03:28:28 np0005558241 systemd[1]: Started libpod-conmon-56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1.scope.
Dec 13 03:28:28 np0005558241 podman[302625]: 2025-12-13 08:28:28.096537642 +0000 UTC m=+0.024135576 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:28:28 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:28:28 np0005558241 podman[302625]: 2025-12-13 08:28:28.231605445 +0000 UTC m=+0.159203379 container init 56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jang, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:28:28 np0005558241 podman[302625]: 2025-12-13 08:28:28.240927965 +0000 UTC m=+0.168525869 container start 56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:28:28 np0005558241 podman[302625]: 2025-12-13 08:28:28.244889543 +0000 UTC m=+0.172487447 container attach 56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jang, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 03:28:28 np0005558241 elated_jang[302642]: 167 167
Dec 13 03:28:28 np0005558241 systemd[1]: libpod-56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1.scope: Deactivated successfully.
Dec 13 03:28:28 np0005558241 conmon[302642]: conmon 56b2f8ae35f87a628bde <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1.scope/container/memory.events
Dec 13 03:28:28 np0005558241 podman[302625]: 2025-12-13 08:28:28.249154448 +0000 UTC m=+0.176752392 container died 56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:28:28 np0005558241 systemd[1]: var-lib-containers-storage-overlay-dacd29a090b89faa77af99c823eb047e0caff2e5207262e9552cd1738f94b527-merged.mount: Deactivated successfully.
Dec 13 03:28:28 np0005558241 podman[302625]: 2025-12-13 08:28:28.299181162 +0000 UTC m=+0.226779076 container remove 56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 03:28:28 np0005558241 systemd[1]: libpod-conmon-56b2f8ae35f87a628bde74e18f3e5ae846848ab253611dacc07aeefd94e929d1.scope: Deactivated successfully.
Dec 13 03:28:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:28:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1169479124' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.396 248514 DEBUG oslo_concurrency.processutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.404 248514 DEBUG nova.compute.provider_tree [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:28:28 np0005558241 podman[302670]: 2025-12-13 08:28:28.511284636 +0000 UTC m=+0.052926247 container create 437e65da8620017cbb14714076a0b8f94694ad7fce620d417974c0b0a997e8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Dec 13 03:28:28 np0005558241 systemd[1]: Started libpod-conmon-437e65da8620017cbb14714076a0b8f94694ad7fce620d417974c0b0a997e8de.scope.
Dec 13 03:28:28 np0005558241 podman[302670]: 2025-12-13 08:28:28.486958626 +0000 UTC m=+0.028600287 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:28:28 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:28:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf4469d07dd13652455cde2bcfc9a7647d8cee98d1d03be78a395382abae12a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:28:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf4469d07dd13652455cde2bcfc9a7647d8cee98d1d03be78a395382abae12a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:28:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf4469d07dd13652455cde2bcfc9a7647d8cee98d1d03be78a395382abae12a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:28:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf4469d07dd13652455cde2bcfc9a7647d8cee98d1d03be78a395382abae12a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:28:28 np0005558241 podman[302670]: 2025-12-13 08:28:28.612450081 +0000 UTC m=+0.154091712 container init 437e65da8620017cbb14714076a0b8f94694ad7fce620d417974c0b0a997e8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_austin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:28:28 np0005558241 podman[302670]: 2025-12-13 08:28:28.629949463 +0000 UTC m=+0.171591074 container start 437e65da8620017cbb14714076a0b8f94694ad7fce620d417974c0b0a997e8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_austin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 03:28:28 np0005558241 podman[302670]: 2025-12-13 08:28:28.636142296 +0000 UTC m=+0.177783927 container attach 437e65da8620017cbb14714076a0b8f94694ad7fce620d417974c0b0a997e8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.681 248514 DEBUG nova.scheduler.client.report [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.848 248514 DEBUG oslo_concurrency.lockutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.882 248514 DEBUG nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Received event network-vif-plugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.882 248514 DEBUG oslo_concurrency.lockutils [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.883 248514 DEBUG oslo_concurrency.lockutils [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.883 248514 DEBUG oslo_concurrency.lockutils [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.883 248514 DEBUG nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] No waiting events found dispatching network-vif-plugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.883 248514 WARNING nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Received unexpected event network-vif-plugged-d41fdf9b-1d4a-475f-b516-69fa17b19cfb for instance with vm_state deleted and task_state None.
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.883 248514 DEBUG nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Received event network-vif-deleted-b03c2424-77e3-49e1-b55f-f317911025b6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.884 248514 DEBUG nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Received event network-vif-deleted-d41fdf9b-1d4a-475f-b516-69fa17b19cfb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.884 248514 DEBUG nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-unplugged-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.884 248514 DEBUG oslo_concurrency.lockutils [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.884 248514 DEBUG oslo_concurrency.lockutils [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.884 248514 DEBUG oslo_concurrency.lockutils [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.885 248514 DEBUG nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] No waiting events found dispatching network-vif-unplugged-d8c7cad7-f601-4205-8838-a583b6e04b0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.885 248514 DEBUG nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-unplugged-d8c7cad7-f601-4205-8838-a583b6e04b0f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.885 248514 DEBUG nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.885 248514 DEBUG oslo_concurrency.lockutils [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.885 248514 DEBUG oslo_concurrency.lockutils [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.886 248514 DEBUG oslo_concurrency.lockutils [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.886 248514 DEBUG nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] No waiting events found dispatching network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.886 248514 WARNING nova.compute.manager [req-abb08cad-e8c6-431f-8501-1c7b4956a104 req-5a618773-fff6-4391-8f44-6655b2917603 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received unexpected event network-vif-plugged-d8c7cad7-f601-4205-8838-a583b6e04b0f for instance with vm_state active and task_state deleting.
Dec 13 03:28:28 np0005558241 agitated_austin[302687]: {
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:    "0": [
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:        {
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "devices": [
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "/dev/loop3"
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            ],
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "lv_name": "ceph_lv0",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "lv_size": "21470642176",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "name": "ceph_lv0",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "tags": {
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.cluster_name": "ceph",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.crush_device_class": "",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.encrypted": "0",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.objectstore": "bluestore",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.osd_id": "0",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.type": "block",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.vdo": "0",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.with_tpm": "0"
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            },
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "type": "block",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "vg_name": "ceph_vg0"
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:        }
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:    ],
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:    "1": [
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:        {
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "devices": [
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "/dev/loop4"
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            ],
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "lv_name": "ceph_lv1",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "lv_size": "21470642176",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "name": "ceph_lv1",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "tags": {
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.cluster_name": "ceph",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.crush_device_class": "",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.encrypted": "0",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.objectstore": "bluestore",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.osd_id": "1",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.type": "block",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.vdo": "0",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.with_tpm": "0"
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            },
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "type": "block",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "vg_name": "ceph_vg1"
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:        }
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:    ],
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:    "2": [
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:        {
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "devices": [
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "/dev/loop5"
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            ],
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "lv_name": "ceph_lv2",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "lv_size": "21470642176",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "name": "ceph_lv2",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "tags": {
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.cluster_name": "ceph",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.crush_device_class": "",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.encrypted": "0",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.objectstore": "bluestore",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.osd_id": "2",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.type": "block",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.vdo": "0",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:                "ceph.with_tpm": "0"
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            },
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "type": "block",
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:            "vg_name": "ceph_vg2"
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:        }
Dec 13 03:28:28 np0005558241 agitated_austin[302687]:    ]
Dec 13 03:28:28 np0005558241 agitated_austin[302687]: }
Dec 13 03:28:28 np0005558241 nova_compute[248510]: 2025-12-13 08:28:28.972 248514 INFO nova.scheduler.client.report [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Deleted allocations for instance 5d8c2900-0048-4631-bbc6-0122bce8f4f3#033[00m
Dec 13 03:28:28 np0005558241 systemd[1]: libpod-437e65da8620017cbb14714076a0b8f94694ad7fce620d417974c0b0a997e8de.scope: Deactivated successfully.
Dec 13 03:28:28 np0005558241 podman[302670]: 2025-12-13 08:28:28.996243231 +0000 UTC m=+0.537884862 container died 437e65da8620017cbb14714076a0b8f94694ad7fce620d417974c0b0a997e8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_austin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:28:29 np0005558241 systemd[1]: var-lib-containers-storage-overlay-adf4469d07dd13652455cde2bcfc9a7647d8cee98d1d03be78a395382abae12a-merged.mount: Deactivated successfully.
Dec 13 03:28:29 np0005558241 podman[302670]: 2025-12-13 08:28:29.042908602 +0000 UTC m=+0.584550213 container remove 437e65da8620017cbb14714076a0b8f94694ad7fce620d417974c0b0a997e8de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_austin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:28:29 np0005558241 systemd[1]: libpod-conmon-437e65da8620017cbb14714076a0b8f94694ad7fce620d417974c0b0a997e8de.scope: Deactivated successfully.
Dec 13 03:28:29 np0005558241 nova_compute[248510]: 2025-12-13 08:28:29.090 248514 DEBUG oslo_concurrency.lockutils [None req-4eef2d53-1b1c-4756-8154-0c00c2c6daf3 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "5d8c2900-0048-4631-bbc6-0122bce8f4f3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 405 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.3 MiB/s wr, 360 op/s
Dec 13 03:28:29 np0005558241 nova_compute[248510]: 2025-12-13 08:28:29.532 248514 DEBUG nova.network.neutron [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:28:29 np0005558241 podman[302770]: 2025-12-13 08:28:29.571882784 +0000 UTC m=+0.050770484 container create 0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_gould, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:28:29 np0005558241 systemd[1]: Started libpod-conmon-0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28.scope.
Dec 13 03:28:29 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:28:29 np0005558241 podman[302770]: 2025-12-13 08:28:29.547658976 +0000 UTC m=+0.026546706 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:28:29 np0005558241 podman[302770]: 2025-12-13 08:28:29.648684969 +0000 UTC m=+0.127572699 container init 0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_gould, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 03:28:29 np0005558241 podman[302770]: 2025-12-13 08:28:29.656686156 +0000 UTC m=+0.135573846 container start 0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_gould, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:28:29 np0005558241 podman[302770]: 2025-12-13 08:28:29.660235354 +0000 UTC m=+0.139123064 container attach 0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 03:28:29 np0005558241 optimistic_gould[302786]: 167 167
Dec 13 03:28:29 np0005558241 systemd[1]: libpod-0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28.scope: Deactivated successfully.
Dec 13 03:28:29 np0005558241 conmon[302786]: conmon 0dc3e1d02c2cd0470b6b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28.scope/container/memory.events
Dec 13 03:28:29 np0005558241 podman[302770]: 2025-12-13 08:28:29.663571396 +0000 UTC m=+0.142459096 container died 0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 03:28:29 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ad9234ae3f546a1401b4d79f467b2497fda8e16b33a6e15b3ae10a0255073172-merged.mount: Deactivated successfully.
Dec 13 03:28:29 np0005558241 podman[302770]: 2025-12-13 08:28:29.709633783 +0000 UTC m=+0.188521463 container remove 0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_gould, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 03:28:29 np0005558241 systemd[1]: libpod-conmon-0dc3e1d02c2cd0470b6be339c47d164da4a89849657c769a202a48745195cd28.scope: Deactivated successfully.
Dec 13 03:28:29 np0005558241 podman[302808]: 2025-12-13 08:28:29.906439719 +0000 UTC m=+0.041561017 container create 67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:28:29 np0005558241 systemd[1]: Started libpod-conmon-67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071.scope.
Dec 13 03:28:29 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:28:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e02b5c93fa7e0a62370599c6a0ea197fb0c8d6089636f80c18779b403bf9ce8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:28:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e02b5c93fa7e0a62370599c6a0ea197fb0c8d6089636f80c18779b403bf9ce8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:28:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e02b5c93fa7e0a62370599c6a0ea197fb0c8d6089636f80c18779b403bf9ce8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:28:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e02b5c93fa7e0a62370599c6a0ea197fb0c8d6089636f80c18779b403bf9ce8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:28:29 np0005558241 podman[302808]: 2025-12-13 08:28:29.888424414 +0000 UTC m=+0.023545732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:28:29 np0005558241 podman[302808]: 2025-12-13 08:28:29.991428056 +0000 UTC m=+0.126549354 container init 67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_williams, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:28:29 np0005558241 podman[302808]: 2025-12-13 08:28:29.999341681 +0000 UTC m=+0.134462989 container start 67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 03:28:30 np0005558241 podman[302808]: 2025-12-13 08:28:30.002195381 +0000 UTC m=+0.137316679 container attach 67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.027 248514 INFO nova.compute.manager [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Took 2.37 seconds to deallocate network for instance.#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.033 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.435 248514 DEBUG oslo_concurrency.lockutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.436 248514 DEBUG oslo_concurrency.lockutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.462 248514 DEBUG oslo_concurrency.lockutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "41602b99-e7f2-450c-885e-51d07a1236d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.463 248514 DEBUG oslo_concurrency.lockutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.463 248514 DEBUG oslo_concurrency.lockutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.463 248514 DEBUG oslo_concurrency.lockutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.463 248514 DEBUG oslo_concurrency.lockutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.465 248514 INFO nova.compute.manager [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Terminating instance#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.466 248514 DEBUG nova.compute.manager [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:28:30 np0005558241 kernel: tapc76cbcb4-3f (unregistering): left promiscuous mode
Dec 13 03:28:30 np0005558241 NetworkManager[50376]: <info>  [1765614510.5094] device (tapc76cbcb4-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:28:30 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:30Z|00487|binding|INFO|Releasing lport c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 from this chassis (sb_readonly=0)
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.521 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:30 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:30Z|00488|binding|INFO|Setting lport c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 down in Southbound
Dec 13 03:28:30 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:30Z|00489|binding|INFO|Removing iface tapc76cbcb4-3f ovn-installed in OVS
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.527 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.546 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:30 np0005558241 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000030.scope: Deactivated successfully.
Dec 13 03:28:30 np0005558241 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000030.scope: Consumed 14.464s CPU time.
Dec 13 03:28:30 np0005558241 systemd-machined[210538]: Machine qemu-53-instance-00000030 terminated.
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.596 248514 DEBUG oslo_concurrency.processutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.644 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:73:cb 10.100.0.8'], port_security=['fa:16:3e:75:73:cb 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '41602b99-e7f2-450c-885e-51d07a1236d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7576f079-0439-46aa-98af-04f80cd254ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3490ad817e664ff6b12c4ea88192b667', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be530db5-69a3-4a66-a983-785ca72e353e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97e2093c-87d8-4348-abd8-68baa3943443, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c76cbcb4-3f53-4cff-a0c1-7d8be5000c32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:28:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.646 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 in datapath 7576f079-0439-46aa-98af-04f80cd254ca unbound from our chassis#033[00m
Dec 13 03:28:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.647 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7576f079-0439-46aa-98af-04f80cd254ca#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.667 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "7139a479-b2fe-4d64-8061-97fceda2e392" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.667 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.668 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8571840a-9b2f-4c57-a9f4-b6df8c10730e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.711 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4db9b708-d849-40a9-9550-025e3311079c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.716 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b06c91-3a68-4cee-bf90-c50dcd0ea3f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.717 248514 INFO nova.virt.libvirt.driver [-] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Instance destroyed successfully.#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.718 248514 DEBUG nova.objects.instance [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'resources' on Instance uuid 41602b99-e7f2-450c-885e-51d07a1236d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.754 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d90729fb-a851-4916-a3dc-ea9c5a36aecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.774 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2b061213-afee-442e-9b80-20b447f5524a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7576f079-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:f1:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 19, 'rx_bytes': 826, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 19, 'rx_bytes': 826, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701789, 'reachable_time': 37822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302920, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.778 248514 DEBUG nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:28:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Dec 13 03:28:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.794 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0533e9f7-5d91-40c6-9d84-f26085df7971]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701804, 'tstamp': 701804}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302941, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7576f079-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701808, 'tstamp': 701808}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302941, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.797 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7576f079-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.799 248514 DEBUG nova.virt.libvirt.vif [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:27:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-254704430',display_name='tempest-ListServerFiltersTestJSON-instance-254704430',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-254704430',id=48,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3490ad817e664ff6b12c4ea88192b667',ramdisk_id='',reservation_id='r-5xaoa0nb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1229542462',owner_user_name='tempest-ListServerFiltersTestJSON-1229542462-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:27:30Z,user_data=None,user_id='65a6b617130a42ac9c3d9b4abf6a1cfb',uuid=41602b99-e7f2-450c-885e-51d07a1236d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "address": "fa:16:3e:75:73:cb", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76cbcb4-3f", "ovs_interfaceid": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.800 248514 DEBUG nova.network.os_vif_util [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converting VIF {"id": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "address": "fa:16:3e:75:73:cb", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc76cbcb4-3f", "ovs_interfaceid": "c76cbcb4-3f53-4cff-a0c1-7d8be5000c32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.801 248514 DEBUG nova.network.os_vif_util [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=c76cbcb4-3f53-4cff-a0c1-7d8be5000c32,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc76cbcb4-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.802 248514 DEBUG os_vif [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=c76cbcb4-3f53-4cff-a0c1-7d8be5000c32,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc76cbcb4-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:28:30 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.806 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.807 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc76cbcb4-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.808 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.808 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7576f079-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.809 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.809 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.810 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7576f079-00, col_values=(('external_ids', {'iface-id': '5d3dc22b-9ad2-46cb-b9cb-35c58ffd1fa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:30.811 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:30 np0005558241 nova_compute[248510]: 2025-12-13 08:28:30.812 248514 INFO os_vif [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:73:cb,bridge_name='br-int',has_traffic_filtering=True,id=c76cbcb4-3f53-4cff-a0c1-7d8be5000c32,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc76cbcb4-3f')#033[00m
Dec 13 03:28:30 np0005558241 lvm[302944]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:28:30 np0005558241 lvm[302944]: VG ceph_vg0 finished
Dec 13 03:28:30 np0005558241 lvm[302947]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:28:30 np0005558241 lvm[302947]: VG ceph_vg1 finished
Dec 13 03:28:30 np0005558241 lvm[302963]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:28:30 np0005558241 lvm[302963]: VG ceph_vg2 finished
Dec 13 03:28:30 np0005558241 youthful_williams[302824]: {}
Dec 13 03:28:30 np0005558241 systemd[1]: libpod-67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071.scope: Deactivated successfully.
Dec 13 03:28:30 np0005558241 systemd[1]: libpod-67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071.scope: Consumed 1.486s CPU time.
Dec 13 03:28:30 np0005558241 podman[302808]: 2025-12-13 08:28:30.957604905 +0000 UTC m=+1.092726203 container died 67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_williams, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:28:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1e02b5c93fa7e0a62370599c6a0ea197fb0c8d6089636f80c18779b403bf9ce8-merged.mount: Deactivated successfully.
Dec 13 03:28:31 np0005558241 podman[302808]: 2025-12-13 08:28:31.015978035 +0000 UTC m=+1.151099323 container remove 67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 03:28:31 np0005558241 nova_compute[248510]: 2025-12-13 08:28:31.027 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:31 np0005558241 systemd[1]: libpod-conmon-67eaa1e36b293384b94198f0eabe87232da023bab880db7ddb9e44403a43f071.scope: Deactivated successfully.
Dec 13 03:28:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:28:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:28:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2420189846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:28:31 np0005558241 nova_compute[248510]: 2025-12-13 08:28:31.242 248514 DEBUG oslo_concurrency.processutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:31 np0005558241 nova_compute[248510]: 2025-12-13 08:28:31.252 248514 DEBUG nova.compute.provider_tree [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:28:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:28:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:28:31 np0005558241 nova_compute[248510]: 2025-12-13 08:28:31.343 248514 DEBUG nova.scheduler.client.report [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:28:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 405 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 2.6 MiB/s wr, 373 op/s
Dec 13 03:28:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:28:31 np0005558241 nova_compute[248510]: 2025-12-13 08:28:31.503 248514 DEBUG oslo_concurrency.lockutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:31 np0005558241 nova_compute[248510]: 2025-12-13 08:28:31.506 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:31 np0005558241 nova_compute[248510]: 2025-12-13 08:28:31.515 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:28:31 np0005558241 nova_compute[248510]: 2025-12-13 08:28:31.516 248514 INFO nova.compute.claims [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:28:31 np0005558241 nova_compute[248510]: 2025-12-13 08:28:31.576 248514 INFO nova.scheduler.client.report [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Deleted allocations for instance 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c#033[00m
Dec 13 03:28:31 np0005558241 nova_compute[248510]: 2025-12-13 08:28:31.679 248514 DEBUG oslo_concurrency.lockutils [None req-37da719d-cbb1-4c85-b74b-3a2f72a23bf7 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "0ccf9d68-ffc2-4f17-a2e7-effa18fc607c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:31 np0005558241 nova_compute[248510]: 2025-12-13 08:28:31.845 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:31 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:28:31 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:28:31 np0005558241 nova_compute[248510]: 2025-12-13 08:28:31.902 248514 INFO nova.virt.libvirt.driver [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Deleting instance files /var/lib/nova/instances/41602b99-e7f2-450c-885e-51d07a1236d3_del#033[00m
Dec 13 03:28:31 np0005558241 nova_compute[248510]: 2025-12-13 08:28:31.904 248514 INFO nova.virt.libvirt.driver [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Deletion of /var/lib/nova/instances/41602b99-e7f2-450c-885e-51d07a1236d3_del complete#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.061 248514 DEBUG nova.compute.manager [req-2ade022e-1b0b-45fa-a5fe-4a656f53f3f5 req-852aa4bf-eb0d-461a-9061-58e4e459f7cb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Received event network-vif-deleted-d8c7cad7-f601-4205-8838-a583b6e04b0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.066 248514 INFO nova.compute.manager [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Took 1.60 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.067 248514 DEBUG oslo.service.loopingcall [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.067 248514 DEBUG nova.compute.manager [-] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.067 248514 DEBUG nova.network.neutron [-] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:28:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:28:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2605926860' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.449 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.455 248514 DEBUG nova.compute.provider_tree [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.493 248514 DEBUG nova.scheduler.client.report [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.531 248514 DEBUG oslo_concurrency.lockutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.531 248514 DEBUG oslo_concurrency.lockutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.532 248514 DEBUG oslo_concurrency.lockutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.532 248514 DEBUG oslo_concurrency.lockutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.533 248514 DEBUG oslo_concurrency.lockutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.534 248514 INFO nova.compute.manager [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Terminating instance#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.536 248514 DEBUG nova.compute.manager [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:28:32 np0005558241 kernel: tapeca7f353-34 (unregistering): left promiscuous mode
Dec 13 03:28:32 np0005558241 NetworkManager[50376]: <info>  [1765614512.5871] device (tapeca7f353-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.596 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:32 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:32Z|00490|binding|INFO|Releasing lport eca7f353-3478-46ea-a63f-617a11a8f7ff from this chassis (sb_readonly=0)
Dec 13 03:28:32 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:32Z|00491|binding|INFO|Setting lport eca7f353-3478-46ea-a63f-617a11a8f7ff down in Southbound
Dec 13 03:28:32 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:32Z|00492|binding|INFO|Removing iface tapeca7f353-34 ovn-installed in OVS
Dec 13 03:28:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.612 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:88:26 10.100.0.12'], port_security=['fa:16:3e:1c:88:26 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'df25cd40-72b5-4e0f-90ec-8677c699d1d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52e1055963294dbdb16cd95b466cd4d9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '269e7310-e268-4e11-a2e8-e4c22118637e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99e34124-e040-4994-aa81-ced5b49550b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=eca7f353-3478-46ea-a63f-617a11a8f7ff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:28:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.616 158419 INFO neutron.agent.ovn.metadata.agent [-] Port eca7f353-3478-46ea-a63f-617a11a8f7ff in datapath 87bd91d0-eead-49b6-8f92-f8d0dba555dc unbound from our chassis#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.617 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.618 248514 DEBUG nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:28:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.618 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 87bd91d0-eead-49b6-8f92-f8d0dba555dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:28:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.619 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6b7c52b7-cf10-4047-825e-bfe75cfbeede]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.620 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc namespace which is not needed anymore#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.621 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:32 np0005558241 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000032.scope: Deactivated successfully.
Dec 13 03:28:32 np0005558241 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000032.scope: Consumed 15.522s CPU time.
Dec 13 03:28:32 np0005558241 systemd-machined[210538]: Machine qemu-56-instance-00000032 terminated.
Dec 13 03:28:32 np0005558241 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[299693]: [NOTICE]   (299716) : haproxy version is 2.8.14-c23fe91
Dec 13 03:28:32 np0005558241 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[299693]: [NOTICE]   (299716) : path to executable is /usr/sbin/haproxy
Dec 13 03:28:32 np0005558241 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[299693]: [WARNING]  (299716) : Exiting Master process...
Dec 13 03:28:32 np0005558241 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[299693]: [WARNING]  (299716) : Exiting Master process...
Dec 13 03:28:32 np0005558241 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[299693]: [ALERT]    (299716) : Current worker (299718) exited with code 143 (Terminated)
Dec 13 03:28:32 np0005558241 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[299693]: [WARNING]  (299716) : All workers exited. Exiting... (0)
Dec 13 03:28:32 np0005558241 systemd[1]: libpod-e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d.scope: Deactivated successfully.
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.783 248514 INFO nova.virt.libvirt.driver [-] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Instance destroyed successfully.#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.784 248514 DEBUG nova.objects.instance [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'resources' on Instance uuid df25cd40-72b5-4e0f-90ec-8677c699d1d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:32 np0005558241 podman[303054]: 2025-12-13 08:28:32.786359706 +0000 UTC m=+0.054540577 container died e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.811 248514 DEBUG nova.virt.libvirt.vif [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:27:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-533202396',display_name='tempest-ImagesTestJSON-server-533202396',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-533202396',id=50,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-e880c0lz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:27:56Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=df25cd40-72b5-4e0f-90ec-8677c699d1d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "address": "fa:16:3e:1c:88:26", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeca7f353-34", "ovs_interfaceid": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.812 248514 DEBUG nova.network.os_vif_util [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "address": "fa:16:3e:1c:88:26", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeca7f353-34", "ovs_interfaceid": "eca7f353-3478-46ea-a63f-617a11a8f7ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.813 248514 DEBUG nova.network.os_vif_util [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:88:26,bridge_name='br-int',has_traffic_filtering=True,id=eca7f353-3478-46ea-a63f-617a11a8f7ff,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeca7f353-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.813 248514 DEBUG os_vif [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:88:26,bridge_name='br-int',has_traffic_filtering=True,id=eca7f353-3478-46ea-a63f-617a11a8f7ff,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeca7f353-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.816 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.816 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeca7f353-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.818 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.822 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.822 248514 DEBUG nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.823 248514 DEBUG nova.network.neutron [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:28:32 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d-userdata-shm.mount: Deactivated successfully.
Dec 13 03:28:32 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3c8666a1b8044e354b810eaa4281d1b07b7db2628215dd5a45bddcfe0eef16ae-merged.mount: Deactivated successfully.
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.837 248514 INFO os_vif [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:88:26,bridge_name='br-int',has_traffic_filtering=True,id=eca7f353-3478-46ea-a63f-617a11a8f7ff,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeca7f353-34')#033[00m
Dec 13 03:28:32 np0005558241 podman[303054]: 2025-12-13 08:28:32.838017261 +0000 UTC m=+0.106198132 container cleanup e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:28:32 np0005558241 systemd[1]: libpod-conmon-e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d.scope: Deactivated successfully.
Dec 13 03:28:32 np0005558241 podman[303100]: 2025-12-13 08:28:32.916662591 +0000 UTC m=+0.050884856 container remove e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Dec 13 03:28:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.923 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[faef0eaa-52ee-4cff-8e17-7f366c9a071b]: (4, ('Sat Dec 13 08:28:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc (e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d)\ne4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d\nSat Dec 13 08:28:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc (e4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d)\ne4814a7b8746f585591268e80562b186c847746a26bec3d590d7b6120649ef2d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.925 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[895c80c0-98cc-4ef0-97c5-aa64a604affb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.926 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87bd91d0-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.929 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:32 np0005558241 kernel: tap87bd91d0-e0: left promiscuous mode
Dec 13 03:28:32 np0005558241 nova_compute[248510]: 2025-12-13 08:28:32.947 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.951 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7a28762e-e378-4112-be0d-51e61e184f54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.969 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[892ee32a-512f-4614-8aa4-c4dd9d9c4383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.971 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[12c3f298-c56d-46f0-b783-bcad1245e239]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.991 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a5cf67da-b47a-4a08-bcaa-5008f61560a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703776, 'reachable_time': 16673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303125, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:32 np0005558241 systemd[1]: run-netns-ovnmeta\x2d87bd91d0\x2deead\x2d49b6\x2d8f92\x2df8d0dba555dc.mount: Deactivated successfully.
Dec 13 03:28:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.995 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:28:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:32.996 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[268c258b-5213-408a-9201-d1fd89b42445]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:33 np0005558241 nova_compute[248510]: 2025-12-13 08:28:33.006 248514 INFO nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:28:33 np0005558241 nova_compute[248510]: 2025-12-13 08:28:33.030 248514 DEBUG nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:28:33 np0005558241 nova_compute[248510]: 2025-12-13 08:28:33.122 248514 INFO nova.virt.libvirt.driver [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Deleting instance files /var/lib/nova/instances/df25cd40-72b5-4e0f-90ec-8677c699d1d3_del#033[00m
Dec 13 03:28:33 np0005558241 nova_compute[248510]: 2025-12-13 08:28:33.123 248514 INFO nova.virt.libvirt.driver [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Deletion of /var/lib/nova/instances/df25cd40-72b5-4e0f-90ec-8677c699d1d3_del complete#033[00m
Dec 13 03:28:33 np0005558241 nova_compute[248510]: 2025-12-13 08:28:33.330 248514 DEBUG nova.policy [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a5bc32e49dbd4372a006913090b9ef0f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:28:33 np0005558241 nova_compute[248510]: 2025-12-13 08:28:33.434 248514 DEBUG nova.virt.libvirt.driver [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec 13 03:28:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1927: 321 pgs: 321 active+clean; 341 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.6 MiB/s wr, 357 op/s
Dec 13 03:28:33 np0005558241 nova_compute[248510]: 2025-12-13 08:28:33.731 248514 INFO nova.compute.manager [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Took 1.19 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:28:33 np0005558241 nova_compute[248510]: 2025-12-13 08:28:33.731 248514 DEBUG oslo.service.loopingcall [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:28:33 np0005558241 nova_compute[248510]: 2025-12-13 08:28:33.732 248514 DEBUG nova.compute.manager [-] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:28:33 np0005558241 nova_compute[248510]: 2025-12-13 08:28:33.732 248514 DEBUG nova.network.neutron [-] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.218 248514 DEBUG nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.220 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.220 248514 INFO nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Creating image(s)#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.241 248514 DEBUG nova.storage.rbd_utils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 7139a479-b2fe-4d64-8061-97fceda2e392_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.267 248514 DEBUG nova.storage.rbd_utils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 7139a479-b2fe-4d64-8061-97fceda2e392_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.292 248514 DEBUG nova.storage.rbd_utils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 7139a479-b2fe-4d64-8061-97fceda2e392_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.296 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.376 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.378 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.379 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.379 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.400 248514 DEBUG nova.storage.rbd_utils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 7139a479-b2fe-4d64-8061-97fceda2e392_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.404 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7139a479-b2fe-4d64-8061-97fceda2e392_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.624 248514 DEBUG nova.network.neutron [-] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.652 248514 INFO nova.compute.manager [-] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Took 2.59 seconds to deallocate network for instance.#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.713 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7139a479-b2fe-4d64-8061-97fceda2e392_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.784 248514 DEBUG nova.storage.rbd_utils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] resizing rbd image 7139a479-b2fe-4d64-8061-97fceda2e392_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.823 248514 DEBUG oslo_concurrency.lockutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.823 248514 DEBUG oslo_concurrency.lockutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.825 248514 DEBUG nova.compute.manager [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Received event network-vif-unplugged-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.825 248514 DEBUG oslo_concurrency.lockutils [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.826 248514 DEBUG oslo_concurrency.lockutils [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.826 248514 DEBUG oslo_concurrency.lockutils [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.826 248514 DEBUG nova.compute.manager [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] No waiting events found dispatching network-vif-unplugged-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.826 248514 WARNING nova.compute.manager [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Received unexpected event network-vif-unplugged-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.826 248514 DEBUG nova.compute.manager [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Received event network-vif-plugged-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.826 248514 DEBUG oslo_concurrency.lockutils [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.827 248514 DEBUG oslo_concurrency.lockutils [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.827 248514 DEBUG oslo_concurrency.lockutils [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.827 248514 DEBUG nova.compute.manager [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] No waiting events found dispatching network-vif-plugged-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.827 248514 WARNING nova.compute.manager [req-798f970a-0de3-4e0c-ad38-06557e32c6af req-4533d155-3c29-4b97-afb3-91c7610c15b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Received unexpected event network-vif-plugged-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.906 248514 DEBUG nova.objects.instance [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'migration_context' on Instance uuid 7139a479-b2fe-4d64-8061-97fceda2e392 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:34.929 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:34 np0005558241 nova_compute[248510]: 2025-12-13 08:28:34.996 248514 DEBUG oslo_concurrency.processutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:35 np0005558241 nova_compute[248510]: 2025-12-13 08:28:35.036 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:35 np0005558241 nova_compute[248510]: 2025-12-13 08:28:35.045 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:28:35 np0005558241 nova_compute[248510]: 2025-12-13 08:28:35.046 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Ensure instance console log exists: /var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:28:35 np0005558241 nova_compute[248510]: 2025-12-13 08:28:35.046 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:35 np0005558241 nova_compute[248510]: 2025-12-13 08:28:35.046 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:35 np0005558241 nova_compute[248510]: 2025-12-13 08:28:35.047 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:28:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 229 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.7 MiB/s wr, 269 op/s
Dec 13 03:28:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:28:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/894203805' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:28:35 np0005558241 nova_compute[248510]: 2025-12-13 08:28:35.594 248514 DEBUG oslo_concurrency.processutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:35 np0005558241 nova_compute[248510]: 2025-12-13 08:28:35.601 248514 DEBUG nova.compute.provider_tree [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:28:35 np0005558241 kernel: tap3a8efc9e-75 (unregistering): left promiscuous mode
Dec 13 03:28:35 np0005558241 nova_compute[248510]: 2025-12-13 08:28:35.666 248514 DEBUG nova.scheduler.client.report [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:28:35 np0005558241 NetworkManager[50376]: <info>  [1765614515.6685] device (tap3a8efc9e-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:28:35 np0005558241 nova_compute[248510]: 2025-12-13 08:28:35.679 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:35Z|00493|binding|INFO|Releasing lport 3a8efc9e-7582-4b17-ab0e-b248e09932b3 from this chassis (sb_readonly=0)
Dec 13 03:28:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:35Z|00494|binding|INFO|Setting lport 3a8efc9e-7582-4b17-ab0e-b248e09932b3 down in Southbound
Dec 13 03:28:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:35Z|00495|binding|INFO|Removing iface tap3a8efc9e-75 ovn-installed in OVS
Dec 13 03:28:35 np0005558241 nova_compute[248510]: 2025-12-13 08:28:35.681 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:35.689 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:30:15 10.100.0.4'], port_security=['fa:16:3e:cc:30:15 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '678d2db2-0536-4744-b65c-f0a5852f35e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3a8efc9e-7582-4b17-ab0e-b248e09932b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:28:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:35.691 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3a8efc9e-7582-4b17-ab0e-b248e09932b3 in datapath 85372fca-ab50-48b6-8c21-507f630c205a unbound from our chassis#033[00m
Dec 13 03:28:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:35.693 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85372fca-ab50-48b6-8c21-507f630c205a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:28:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:35.694 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a7bc1cb8-78e0-4dc5-96de-a36182c24164]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:35.695 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a namespace which is not needed anymore#033[00m
Dec 13 03:28:35 np0005558241 nova_compute[248510]: 2025-12-13 08:28:35.696 248514 DEBUG oslo_concurrency.lockutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:35 np0005558241 nova_compute[248510]: 2025-12-13 08:28:35.756 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:35 np0005558241 nova_compute[248510]: 2025-12-13 08:28:35.764 248514 INFO nova.scheduler.client.report [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Deleted allocations for instance 41602b99-e7f2-450c-885e-51d07a1236d3#033[00m
Dec 13 03:28:35 np0005558241 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000034.scope: Deactivated successfully.
Dec 13 03:28:35 np0005558241 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000034.scope: Consumed 15.679s CPU time.
Dec 13 03:28:35 np0005558241 systemd-machined[210538]: Machine qemu-59-instance-00000034 terminated.
Dec 13 03:28:35 np0005558241 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[301270]: [NOTICE]   (301274) : haproxy version is 2.8.14-c23fe91
Dec 13 03:28:35 np0005558241 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[301270]: [NOTICE]   (301274) : path to executable is /usr/sbin/haproxy
Dec 13 03:28:35 np0005558241 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[301270]: [WARNING]  (301274) : Exiting Master process...
Dec 13 03:28:35 np0005558241 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[301270]: [WARNING]  (301274) : Exiting Master process...
Dec 13 03:28:35 np0005558241 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[301270]: [ALERT]    (301274) : Current worker (301276) exited with code 143 (Terminated)
Dec 13 03:28:35 np0005558241 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[301270]: [WARNING]  (301274) : All workers exited. Exiting... (0)
Dec 13 03:28:35 np0005558241 systemd[1]: libpod-b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a.scope: Deactivated successfully.
Dec 13 03:28:35 np0005558241 podman[303338]: 2025-12-13 08:28:35.846261205 +0000 UTC m=+0.047950765 container died b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 03:28:35 np0005558241 systemd[1]: var-lib-containers-storage-overlay-01f9f9ca3b39265b46e15e383b3b89331e9eff00a18a8f9d53df9e42b58de486-merged.mount: Deactivated successfully.
Dec 13 03:28:35 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a-userdata-shm.mount: Deactivated successfully.
Dec 13 03:28:35 np0005558241 podman[303338]: 2025-12-13 08:28:35.885863792 +0000 UTC m=+0.087553352 container cleanup b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:28:35 np0005558241 kernel: tap3a8efc9e-75: entered promiscuous mode
Dec 13 03:28:35 np0005558241 NetworkManager[50376]: <info>  [1765614515.8944] manager: (tap3a8efc9e-75): new Tun device (/org/freedesktop/NetworkManager/Devices/220)
Dec 13 03:28:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:35Z|00496|binding|INFO|Claiming lport 3a8efc9e-7582-4b17-ab0e-b248e09932b3 for this chassis.
Dec 13 03:28:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:35Z|00497|binding|INFO|3a8efc9e-7582-4b17-ab0e-b248e09932b3: Claiming fa:16:3e:cc:30:15 10.100.0.4
Dec 13 03:28:35 np0005558241 kernel: tap3a8efc9e-75 (unregistering): left promiscuous mode
Dec 13 03:28:35 np0005558241 nova_compute[248510]: 2025-12-13 08:28:35.896 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:35 np0005558241 systemd[1]: libpod-conmon-b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a.scope: Deactivated successfully.
Dec 13 03:28:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:35Z|00498|binding|INFO|Setting lport 3a8efc9e-7582-4b17-ab0e-b248e09932b3 ovn-installed in OVS
Dec 13 03:28:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:35Z|00499|if_status|INFO|Dropped 4 log messages in last 14 seconds (most recently, 14 seconds ago) due to excessive rate
Dec 13 03:28:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:35Z|00500|if_status|INFO|Not setting lport 3a8efc9e-7582-4b17-ab0e-b248e09932b3 down as sb is readonly
Dec 13 03:28:35 np0005558241 nova_compute[248510]: 2025-12-13 08:28:35.921 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:35 np0005558241 podman[303369]: 2025-12-13 08:28:35.960004511 +0000 UTC m=+0.047073522 container remove b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:28:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:35.967 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[341ae1d6-e7ef-4d7f-83f3-292da2403fbe]: (4, ('Sat Dec 13 08:28:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a (b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a)\nb5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a\nSat Dec 13 08:28:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a (b5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a)\nb5eb492a051dc368415a7c876f86354fcdb1133e82af4c916e010016aa76093a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:35.969 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb4b495-bb87-4c18-843c-978f761d6fcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:35.970 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85372fca-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:35 np0005558241 nova_compute[248510]: 2025-12-13 08:28:35.972 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:35 np0005558241 kernel: tap85372fca-a0: left promiscuous mode
Dec 13 03:28:35 np0005558241 nova_compute[248510]: 2025-12-13 08:28:35.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:35.999 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[67726c50-1656-4b58-814f-84c64beb5f25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.018 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d84124a-0732-42be-ada6-f45fbb1ae24e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:36 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:36Z|00501|binding|INFO|Releasing lport 3a8efc9e-7582-4b17-ab0e-b248e09932b3 from this chassis (sb_readonly=0)
Dec 13 03:28:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.022 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:30:15 10.100.0.4'], port_security=['fa:16:3e:cc:30:15 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '678d2db2-0536-4744-b65c-f0a5852f35e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3a8efc9e-7582-4b17-ab0e-b248e09932b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:28:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.020 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[36ebfc99-f9ee-43d3-a013-c186e3b345a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.027 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:30:15 10.100.0.4'], port_security=['fa:16:3e:cc:30:15 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '678d2db2-0536-4744-b65c-f0a5852f35e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3a8efc9e-7582-4b17-ab0e-b248e09932b3) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:28:36 np0005558241 nova_compute[248510]: 2025-12-13 08:28:36.036 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.044 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[933f7393-67de-493e-8f2b-e4934d56b8e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 706708, 'reachable_time': 41951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303394, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.048 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:28:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.048 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[ce1c6c8d-d021-4df2-815a-454b6c712309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.049 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3a8efc9e-7582-4b17-ab0e-b248e09932b3 in datapath 85372fca-ab50-48b6-8c21-507f630c205a unbound from our chassis#033[00m
Dec 13 03:28:36 np0005558241 systemd[1]: run-netns-ovnmeta\x2d85372fca\x2dab50\x2d48b6\x2d8c21\x2d507f630c205a.mount: Deactivated successfully.
Dec 13 03:28:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.050 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85372fca-ab50-48b6-8c21-507f630c205a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:28:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.051 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f0400c-4c85-45a7-b3b9-acc17d7b184e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.052 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3a8efc9e-7582-4b17-ab0e-b248e09932b3 in datapath 85372fca-ab50-48b6-8c21-507f630c205a unbound from our chassis#033[00m
Dec 13 03:28:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.053 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85372fca-ab50-48b6-8c21-507f630c205a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:28:36 np0005558241 nova_compute[248510]: 2025-12-13 08:28:36.053 248514 DEBUG nova.network.neutron [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Successfully created port: 834fc672-a8af-4884-964c-481d0d8d318e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:28:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:36.054 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ac6b6cc9-abaf-4728-859a-df99dad728d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:36 np0005558241 nova_compute[248510]: 2025-12-13 08:28:36.077 248514 DEBUG nova.network.neutron [-] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:28:36 np0005558241 nova_compute[248510]: 2025-12-13 08:28:36.085 248514 DEBUG oslo_concurrency.lockutils [None req-7d1ecb3f-f6d0-489a-972d-1e4909d94bed 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "41602b99-e7f2-450c-885e-51d07a1236d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:36 np0005558241 nova_compute[248510]: 2025-12-13 08:28:36.457 248514 INFO nova.virt.libvirt.driver [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Instance shutdown successfully after 24 seconds.#033[00m
Dec 13 03:28:36 np0005558241 nova_compute[248510]: 2025-12-13 08:28:36.466 248514 INFO nova.virt.libvirt.driver [-] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Instance destroyed successfully.#033[00m
Dec 13 03:28:36 np0005558241 nova_compute[248510]: 2025-12-13 08:28:36.467 248514 DEBUG nova.objects.instance [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'numa_topology' on Instance uuid 678d2db2-0536-4744-b65c-f0a5852f35e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:37 np0005558241 nova_compute[248510]: 2025-12-13 08:28:37.209 248514 DEBUG nova.compute.manager [req-1d66d1e7-5903-40e1-8eb4-16cdf0708a2f req-83e7f35e-2035-4deb-8460-9317a10a0d4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Received event network-vif-unplugged-eca7f353-3478-46ea-a63f-617a11a8f7ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:37 np0005558241 nova_compute[248510]: 2025-12-13 08:28:37.209 248514 DEBUG oslo_concurrency.lockutils [req-1d66d1e7-5903-40e1-8eb4-16cdf0708a2f req-83e7f35e-2035-4deb-8460-9317a10a0d4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:37 np0005558241 nova_compute[248510]: 2025-12-13 08:28:37.209 248514 DEBUG oslo_concurrency.lockutils [req-1d66d1e7-5903-40e1-8eb4-16cdf0708a2f req-83e7f35e-2035-4deb-8460-9317a10a0d4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:37 np0005558241 nova_compute[248510]: 2025-12-13 08:28:37.210 248514 DEBUG oslo_concurrency.lockutils [req-1d66d1e7-5903-40e1-8eb4-16cdf0708a2f req-83e7f35e-2035-4deb-8460-9317a10a0d4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:37 np0005558241 nova_compute[248510]: 2025-12-13 08:28:37.210 248514 DEBUG nova.compute.manager [req-1d66d1e7-5903-40e1-8eb4-16cdf0708a2f req-83e7f35e-2035-4deb-8460-9317a10a0d4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] No waiting events found dispatching network-vif-unplugged-eca7f353-3478-46ea-a63f-617a11a8f7ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:37 np0005558241 nova_compute[248510]: 2025-12-13 08:28:37.210 248514 DEBUG nova.compute.manager [req-1d66d1e7-5903-40e1-8eb4-16cdf0708a2f req-83e7f35e-2035-4deb-8460-9317a10a0d4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Received event network-vif-unplugged-eca7f353-3478-46ea-a63f-617a11a8f7ff for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:28:37 np0005558241 nova_compute[248510]: 2025-12-13 08:28:37.212 248514 INFO nova.compute.manager [-] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Took 3.48 seconds to deallocate network for instance.#033[00m
Dec 13 03:28:37 np0005558241 nova_compute[248510]: 2025-12-13 08:28:37.234 248514 DEBUG nova.compute.manager [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1929: 321 pgs: 321 active+clean; 229 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.7 MiB/s wr, 269 op/s
Dec 13 03:28:37 np0005558241 nova_compute[248510]: 2025-12-13 08:28:37.686 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614502.606397, e6e0fdaf-f934-4e56-8e59-4c4475bacd26 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:37 np0005558241 nova_compute[248510]: 2025-12-13 08:28:37.686 248514 INFO nova.compute.manager [-] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:28:37 np0005558241 nova_compute[248510]: 2025-12-13 08:28:37.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:28:37 np0005558241 nova_compute[248510]: 2025-12-13 08:28:37.819 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:38 np0005558241 nova_compute[248510]: 2025-12-13 08:28:38.100 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614503.099627, 5d8c2900-0048-4631-bbc6-0122bce8f4f3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:38 np0005558241 nova_compute[248510]: 2025-12-13 08:28:38.101 248514 INFO nova.compute.manager [-] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:28:38 np0005558241 nova_compute[248510]: 2025-12-13 08:28:38.332 248514 DEBUG nova.compute.manager [None req-7f3dd03b-9e85-45f0-a271-c95b356dc98a - - - - - -] [instance: 5d8c2900-0048-4631-bbc6-0122bce8f4f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:38 np0005558241 nova_compute[248510]: 2025-12-13 08:28:38.545 248514 DEBUG nova.compute.manager [None req-dcd39408-0442-4a8f-9748-e44028922a19 - - - - - -] [instance: e6e0fdaf-f934-4e56-8e59-4c4475bacd26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:38 np0005558241 nova_compute[248510]: 2025-12-13 08:28:38.642 248514 DEBUG nova.compute.manager [req-3a2124d5-e8bf-447a-8c3e-6b0d52f72deb req-8a4c2df3-b7a0-4012-9e83-dfd7d26c64e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Received event network-vif-deleted-c76cbcb4-3f53-4cff-a0c1-7d8be5000c32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:38 np0005558241 nova_compute[248510]: 2025-12-13 08:28:38.643 248514 DEBUG nova.compute.manager [req-3a2124d5-e8bf-447a-8c3e-6b0d52f72deb req-8a4c2df3-b7a0-4012-9e83-dfd7d26c64e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Received event network-vif-deleted-eca7f353-3478-46ea-a63f-617a11a8f7ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:38 np0005558241 nova_compute[248510]: 2025-12-13 08:28:38.644 248514 DEBUG nova.compute.manager [req-3a2124d5-e8bf-447a-8c3e-6b0d52f72deb req-8a4c2df3-b7a0-4012-9e83-dfd7d26c64e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Received event network-vif-unplugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:38 np0005558241 nova_compute[248510]: 2025-12-13 08:28:38.644 248514 DEBUG oslo_concurrency.lockutils [req-3a2124d5-e8bf-447a-8c3e-6b0d52f72deb req-8a4c2df3-b7a0-4012-9e83-dfd7d26c64e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:38 np0005558241 nova_compute[248510]: 2025-12-13 08:28:38.644 248514 DEBUG oslo_concurrency.lockutils [req-3a2124d5-e8bf-447a-8c3e-6b0d52f72deb req-8a4c2df3-b7a0-4012-9e83-dfd7d26c64e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:38 np0005558241 nova_compute[248510]: 2025-12-13 08:28:38.645 248514 DEBUG oslo_concurrency.lockutils [req-3a2124d5-e8bf-447a-8c3e-6b0d52f72deb req-8a4c2df3-b7a0-4012-9e83-dfd7d26c64e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:38 np0005558241 nova_compute[248510]: 2025-12-13 08:28:38.645 248514 DEBUG nova.compute.manager [req-3a2124d5-e8bf-447a-8c3e-6b0d52f72deb req-8a4c2df3-b7a0-4012-9e83-dfd7d26c64e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] No waiting events found dispatching network-vif-unplugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:38 np0005558241 nova_compute[248510]: 2025-12-13 08:28:38.645 248514 WARNING nova.compute.manager [req-3a2124d5-e8bf-447a-8c3e-6b0d52f72deb req-8a4c2df3-b7a0-4012-9e83-dfd7d26c64e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Received unexpected event network-vif-unplugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 for instance with vm_state active and task_state powering-off.#033[00m
Dec 13 03:28:38 np0005558241 nova_compute[248510]: 2025-12-13 08:28:38.732 248514 DEBUG oslo_concurrency.lockutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:38 np0005558241 nova_compute[248510]: 2025-12-13 08:28:38.733 248514 DEBUG oslo_concurrency.lockutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.052 248514 DEBUG oslo_concurrency.lockutils [None req-7334b6af-6e3e-4b5d-a4d2-dce83d9b853c b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 26.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.079 248514 DEBUG oslo_concurrency.processutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1930: 321 pgs: 321 active+clean; 248 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 2.2 MiB/s wr, 131 op/s
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.468 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.468 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.489 248514 DEBUG nova.network.neutron [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Successfully updated port: 834fc672-a8af-4884-964c-481d0d8d318e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.584 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "refresh_cache-7139a479-b2fe-4d64-8061-97fceda2e392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.584 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquired lock "refresh_cache-7139a479-b2fe-4d64-8061-97fceda2e392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.585 248514 DEBUG nova.network.neutron [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:28:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:28:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1388726031' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.663 248514 DEBUG oslo_concurrency.processutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.674 248514 DEBUG nova.compute.provider_tree [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.677 248514 DEBUG nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.816 248514 DEBUG nova.scheduler.client.report [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.900 248514 DEBUG nova.compute.manager [req-c3785b55-201e-46b8-aaa2-b96b72d78d58 req-c00f1dfe-2ddf-446d-8e2f-773a301c40bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Received event network-vif-plugged-eca7f353-3478-46ea-a63f-617a11a8f7ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.901 248514 DEBUG oslo_concurrency.lockutils [req-c3785b55-201e-46b8-aaa2-b96b72d78d58 req-c00f1dfe-2ddf-446d-8e2f-773a301c40bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.901 248514 DEBUG oslo_concurrency.lockutils [req-c3785b55-201e-46b8-aaa2-b96b72d78d58 req-c00f1dfe-2ddf-446d-8e2f-773a301c40bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.901 248514 DEBUG oslo_concurrency.lockutils [req-c3785b55-201e-46b8-aaa2-b96b72d78d58 req-c00f1dfe-2ddf-446d-8e2f-773a301c40bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.901 248514 DEBUG nova.compute.manager [req-c3785b55-201e-46b8-aaa2-b96b72d78d58 req-c00f1dfe-2ddf-446d-8e2f-773a301c40bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] No waiting events found dispatching network-vif-plugged-eca7f353-3478-46ea-a63f-617a11a8f7ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.902 248514 WARNING nova.compute.manager [req-c3785b55-201e-46b8-aaa2-b96b72d78d58 req-c00f1dfe-2ddf-446d-8e2f-773a301c40bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Received unexpected event network-vif-plugged-eca7f353-3478-46ea-a63f-617a11a8f7ff for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.935 248514 DEBUG oslo_concurrency.lockutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.949 248514 DEBUG nova.network.neutron [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.956 248514 DEBUG oslo_concurrency.lockutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.956 248514 DEBUG oslo_concurrency.lockutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.956 248514 DEBUG oslo_concurrency.lockutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.957 248514 DEBUG oslo_concurrency.lockutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.957 248514 DEBUG oslo_concurrency.lockutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.958 248514 INFO nova.compute.manager [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Terminating instance#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.958 248514 DEBUG nova.compute.manager [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:28:39 np0005558241 nova_compute[248510]: 2025-12-13 08:28:39.985 248514 INFO nova.scheduler.client.report [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Deleted allocations for instance df25cd40-72b5-4e0f-90ec-8677c699d1d3#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.003 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.004 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.011 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.011 248514 INFO nova.compute.claims [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:28:40 np0005558241 kernel: tapbdc94f2e-b1 (unregistering): left promiscuous mode
Dec 13 03:28:40 np0005558241 NetworkManager[50376]: <info>  [1765614520.0227] device (tapbdc94f2e-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.023 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:40 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:40Z|00502|binding|INFO|Releasing lport bdc94f2e-b14e-4e39-bea0-978ff56ff722 from this chassis (sb_readonly=0)
Dec 13 03:28:40 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:40Z|00503|binding|INFO|Setting lport bdc94f2e-b14e-4e39-bea0-978ff56ff722 down in Southbound
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.033 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:40 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:40Z|00504|binding|INFO|Removing iface tapbdc94f2e-b1 ovn-installed in OVS
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.036 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.043 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:ce:b2 10.100.0.10'], port_security=['fa:16:3e:d6:ce:b2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '98240df6-1cba-40e1-833c-24611270ed83', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7576f079-0439-46aa-98af-04f80cd254ca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3490ad817e664ff6b12c4ea88192b667', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'be530db5-69a3-4a66-a983-785ca72e353e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97e2093c-87d8-4348-abd8-68baa3943443, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=bdc94f2e-b14e-4e39-bea0-978ff56ff722) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:28:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.046 158419 INFO neutron.agent.ovn.metadata.agent [-] Port bdc94f2e-b14e-4e39-bea0-978ff56ff722 in datapath 7576f079-0439-46aa-98af-04f80cd254ca unbound from our chassis#033[00m
Dec 13 03:28:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.049 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7576f079-0439-46aa-98af-04f80cd254ca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:28:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.050 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[45247abc-d667-4697-bfcf-7d03cbbc0f99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.050 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca namespace which is not needed anymore#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.055 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:28:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:28:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:28:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:28:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:28:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:28:40 np0005558241 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Dec 13 03:28:40 np0005558241 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000002f.scope: Consumed 16.415s CPU time.
Dec 13 03:28:40 np0005558241 systemd-machined[210538]: Machine qemu-58-instance-0000002f terminated.
Dec 13 03:28:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:28:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Dec 13 03:28:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Dec 13 03:28:40 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.162 248514 DEBUG oslo_concurrency.lockutils [None req-fa66aee8-84a1-4d17-96ba-83a07ea0036f 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "df25cd40-72b5-4e0f-90ec-8677c699d1d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:40 np0005558241 neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca[297611]: [NOTICE]   (297615) : haproxy version is 2.8.14-c23fe91
Dec 13 03:28:40 np0005558241 neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca[297611]: [NOTICE]   (297615) : path to executable is /usr/sbin/haproxy
Dec 13 03:28:40 np0005558241 neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca[297611]: [WARNING]  (297615) : Exiting Master process...
Dec 13 03:28:40 np0005558241 neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca[297611]: [ALERT]    (297615) : Current worker (297618) exited with code 143 (Terminated)
Dec 13 03:28:40 np0005558241 neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca[297611]: [WARNING]  (297615) : All workers exited. Exiting... (0)
Dec 13 03:28:40 np0005558241 systemd[1]: libpod-8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77.scope: Deactivated successfully.
Dec 13 03:28:40 np0005558241 podman[303440]: 2025-12-13 08:28:40.196848309 +0000 UTC m=+0.056005743 container died 8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.220 248514 INFO nova.virt.libvirt.driver [-] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Instance destroyed successfully.#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.221 248514 DEBUG nova.objects.instance [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lazy-loading 'resources' on Instance uuid 98240df6-1cba-40e1-833c-24611270ed83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:40 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77-userdata-shm.mount: Deactivated successfully.
Dec 13 03:28:40 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c4cf732e89cfd948592bd93195ade27ca38a2f5cf5bd9d4f7b2379dcc373df76-merged.mount: Deactivated successfully.
Dec 13 03:28:40 np0005558241 podman[303440]: 2025-12-13 08:28:40.246946445 +0000 UTC m=+0.106103879 container cleanup 8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.258 248514 DEBUG nova.virt.libvirt.vif [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-612802196',display_name='tempest-ListServerFiltersTestJSON-instance-612802196',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-612802196',id=47,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:27:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3490ad817e664ff6b12c4ea88192b667',ramdisk_id='',reservation_id='r-wpyl99bg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1229542462',owner_user_name='tempest-ListServerFiltersTestJSON-1229542462-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:28:08Z,user_data=None,user_id='65a6b617130a42ac9c3d9b4abf6a1cfb',uuid=98240df6-1cba-40e1-833c-24611270ed83,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.259 248514 DEBUG nova.network.os_vif_util [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converting VIF {"id": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "address": "fa:16:3e:d6:ce:b2", "network": {"id": "7576f079-0439-46aa-98af-04f80cd254ca", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-193785753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3490ad817e664ff6b12c4ea88192b667", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdc94f2e-b1", "ovs_interfaceid": "bdc94f2e-b14e-4e39-bea0-978ff56ff722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.260 248514 DEBUG nova.network.os_vif_util [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.260 248514 DEBUG os_vif [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.262 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.262 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbdc94f2e-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.267 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:28:40 np0005558241 systemd[1]: libpod-conmon-8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77.scope: Deactivated successfully.
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.268 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.271 248514 INFO os_vif [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:ce:b2,bridge_name='br-int',has_traffic_filtering=True,id=bdc94f2e-b14e-4e39-bea0-978ff56ff722,network=Network(7576f079-0439-46aa-98af-04f80cd254ca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdc94f2e-b1')#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.316 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:40 np0005558241 podman[303478]: 2025-12-13 08:28:40.324311374 +0000 UTC m=+0.051834770 container remove 8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 03:28:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.330 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f552e5-22c3-4e80-97c6-a53446bba438]: (4, ('Sat Dec 13 08:28:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca (8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77)\n8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77\nSat Dec 13 08:28:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca (8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77)\n8d5c3258fd67975065583dc6bd45185a76c677ddebbf87a350fd1f93cbd41b77\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.333 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9c253c1c-6d2f-44e7-bb93-bdd4d6babaff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.334 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7576f079-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:40 np0005558241 kernel: tap7576f079-00: left promiscuous mode
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.351 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.366 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.372 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[162d7d95-5f55-47b5-ad88-d066ffba25fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.390 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e2e853-b786-4cab-aaa7-cc0a3a47c48c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.392 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b58573af-eb94-4d79-9da1-6b5fe727234b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.413 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6c59eebd-665f-43ea-ab4c-49cbeddc4de3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701777, 'reachable_time': 33628, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303516, 'error': None, 'target': 'ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.416 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7576f079-0439-46aa-98af-04f80cd254ca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:28:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:40.416 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[25235a98-2b33-4838-a659-b7c3c8bf5c9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:40 np0005558241 systemd[1]: run-netns-ovnmeta\x2d7576f079\x2d0439\x2d46aa\x2d98af\x2d04f80cd254ca.mount: Deactivated successfully.
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.681 248514 INFO nova.virt.libvirt.driver [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Deleting instance files /var/lib/nova/instances/98240df6-1cba-40e1-833c-24611270ed83_del#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.682 248514 INFO nova.virt.libvirt.driver [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Deletion of /var/lib/nova/instances/98240df6-1cba-40e1-833c-24611270ed83_del complete#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.852 248514 INFO nova.compute.manager [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.853 248514 DEBUG oslo.service.loopingcall [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.853 248514 DEBUG nova.compute.manager [-] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.854 248514 DEBUG nova.network.neutron [-] [instance: 98240df6-1cba-40e1-833c-24611270ed83] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:28:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:28:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3943427554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.947 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:40 np0005558241 nova_compute[248510]: 2025-12-13 08:28:40.953 248514 DEBUG nova.compute.provider_tree [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:28:41 np0005558241 nova_compute[248510]: 2025-12-13 08:28:41.045 248514 DEBUG nova.scheduler.client.report [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:28:41 np0005558241 nova_compute[248510]: 2025-12-13 08:28:41.261 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:41 np0005558241 nova_compute[248510]: 2025-12-13 08:28:41.262 248514 DEBUG nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:28:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1932: 321 pgs: 321 active+clean; 248 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 2.2 MiB/s wr, 131 op/s
Dec 13 03:28:41 np0005558241 nova_compute[248510]: 2025-12-13 08:28:41.501 248514 DEBUG nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:28:41 np0005558241 nova_compute[248510]: 2025-12-13 08:28:41.502 248514 DEBUG nova.network.neutron [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:28:41 np0005558241 nova_compute[248510]: 2025-12-13 08:28:41.703 248514 INFO nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:28:41 np0005558241 nova_compute[248510]: 2025-12-13 08:28:41.890 248514 DEBUG nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:28:41 np0005558241 nova_compute[248510]: 2025-12-13 08:28:41.969 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614506.968725, 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:41 np0005558241 nova_compute[248510]: 2025-12-13 08:28:41.970 248514 INFO nova.compute.manager [-] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.014 248514 DEBUG nova.policy [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b988c7ac9354c59aac9a9f41f83c20f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '52e1055963294dbdb16cd95b466cd4d9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.384 248514 DEBUG nova.compute.manager [req-c2868f5c-c42e-4607-80e0-b11c49ffd24c req-94dbea4a-a2c5-41b8-b1a7-ec9636f6b0af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Received event network-vif-plugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.385 248514 DEBUG oslo_concurrency.lockutils [req-c2868f5c-c42e-4607-80e0-b11c49ffd24c req-94dbea4a-a2c5-41b8-b1a7-ec9636f6b0af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.385 248514 DEBUG oslo_concurrency.lockutils [req-c2868f5c-c42e-4607-80e0-b11c49ffd24c req-94dbea4a-a2c5-41b8-b1a7-ec9636f6b0af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.385 248514 DEBUG oslo_concurrency.lockutils [req-c2868f5c-c42e-4607-80e0-b11c49ffd24c req-94dbea4a-a2c5-41b8-b1a7-ec9636f6b0af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.385 248514 DEBUG nova.compute.manager [req-c2868f5c-c42e-4607-80e0-b11c49ffd24c req-94dbea4a-a2c5-41b8-b1a7-ec9636f6b0af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] No waiting events found dispatching network-vif-plugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.386 248514 WARNING nova.compute.manager [req-c2868f5c-c42e-4607-80e0-b11c49ffd24c req-94dbea4a-a2c5-41b8-b1a7-ec9636f6b0af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Received unexpected event network-vif-plugged-3a8efc9e-7582-4b17-ab0e-b248e09932b3 for instance with vm_state stopped and task_state None.#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.463 248514 DEBUG nova.compute.manager [None req-9721eb7b-73b0-436d-9f8a-153a03101012 - - - - - -] [instance: 0ccf9d68-ffc2-4f17-a2e7-effa18fc607c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.567 248514 DEBUG nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.568 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.569 248514 INFO nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Creating image(s)#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.593 248514 DEBUG nova.storage.rbd_utils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.619 248514 DEBUG nova.storage.rbd_utils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.642 248514 DEBUG nova.storage.rbd_utils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.647 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.716 248514 DEBUG oslo_concurrency.lockutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "678d2db2-0536-4744-b65c-f0a5852f35e0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.716 248514 DEBUG oslo_concurrency.lockutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.717 248514 DEBUG oslo_concurrency.lockutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.717 248514 DEBUG oslo_concurrency.lockutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.717 248514 DEBUG oslo_concurrency.lockutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.719 248514 INFO nova.compute.manager [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Terminating instance#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.719 248514 DEBUG nova.compute.manager [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.727 248514 INFO nova.virt.libvirt.driver [-] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Instance destroyed successfully.#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.729 248514 DEBUG nova.objects.instance [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'resources' on Instance uuid 678d2db2-0536-4744-b65c-f0a5852f35e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.732 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.732 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.733 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.733 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.754 248514 DEBUG nova.storage.rbd_utils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.758 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.799 248514 DEBUG nova.network.neutron [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Updating instance_info_cache with network_info: [{"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.804 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.805 248514 DEBUG nova.virt.libvirt.vif [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:27:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1859844308',display_name='tempest-DeleteServersTestJSON-server-1859844308',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1859844308',id=52,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:28:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-vi6y6qcl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServersTestJSON-991966373-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:28:38Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=678d2db2-0536-4744-b65c-f0a5852f35e0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.805 248514 DEBUG nova.network.os_vif_util [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "address": "fa:16:3e:cc:30:15", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a8efc9e-75", "ovs_interfaceid": "3a8efc9e-7582-4b17-ab0e-b248e09932b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.806 248514 DEBUG nova.network.os_vif_util [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:30:15,bridge_name='br-int',has_traffic_filtering=True,id=3a8efc9e-7582-4b17-ab0e-b248e09932b3,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a8efc9e-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.807 248514 DEBUG os_vif [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:30:15,bridge_name='br-int',has_traffic_filtering=True,id=3a8efc9e-7582-4b17-ab0e-b248e09932b3,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a8efc9e-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.809 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.809 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a8efc9e-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.811 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.813 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.815 248514 INFO os_vif [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:30:15,bridge_name='br-int',has_traffic_filtering=True,id=3a8efc9e-7582-4b17-ab0e-b248e09932b3,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a8efc9e-75')#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.832 248514 DEBUG nova.network.neutron [-] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.834 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Releasing lock "refresh_cache-7139a479-b2fe-4d64-8061-97fceda2e392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.835 248514 DEBUG nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Instance network_info: |[{"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.837 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Start _get_guest_xml network_info=[{"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.842 248514 WARNING nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.853 248514 DEBUG nova.virt.libvirt.host [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.854 248514 DEBUG nova.virt.libvirt.host [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.859 248514 DEBUG nova.virt.libvirt.host [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.859 248514 DEBUG nova.virt.libvirt.host [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.860 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.860 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.861 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.861 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.862 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.862 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.862 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.862 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.863 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.863 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.863 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.864 248514 DEBUG nova.virt.hardware [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.871 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.930 248514 INFO nova.compute.manager [-] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Took 2.08 seconds to deallocate network for instance.#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.936 248514 DEBUG nova.compute.manager [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Received event network-changed-834fc672-a8af-4884-964c-481d0d8d318e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.937 248514 DEBUG nova.compute.manager [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Refreshing instance network info cache due to event network-changed-834fc672-a8af-4884-964c-481d0d8d318e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.938 248514 DEBUG oslo_concurrency.lockutils [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7139a479-b2fe-4d64-8061-97fceda2e392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.938 248514 DEBUG oslo_concurrency.lockutils [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7139a479-b2fe-4d64-8061-97fceda2e392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:28:42 np0005558241 nova_compute[248510]: 2025-12-13 08:28:42.939 248514 DEBUG nova.network.neutron [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Refreshing network info cache for port 834fc672-a8af-4884-964c-481d0d8d318e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.013 248514 DEBUG oslo_concurrency.lockutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.014 248514 DEBUG oslo_concurrency.lockutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.112 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.195 248514 DEBUG nova.storage.rbd_utils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] resizing rbd image 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.229 248514 DEBUG oslo_concurrency.processutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.324 248514 DEBUG nova.objects.instance [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.334 248514 INFO nova.virt.libvirt.driver [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Deleting instance files /var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0_del#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.335 248514 INFO nova.virt.libvirt.driver [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Deletion of /var/lib/nova/instances/678d2db2-0536-4744-b65c-f0a5852f35e0_del complete#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.355 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.355 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Ensure instance console log exists: /var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.356 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.356 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.356 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.402 248514 INFO nova.compute.manager [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.403 248514 DEBUG oslo.service.loopingcall [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.403 248514 DEBUG nova.compute.manager [-] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.403 248514 DEBUG nova.network.neutron [-] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:28:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 222 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 2.3 MiB/s wr, 117 op/s
Dec 13 03:28:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:28:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3318679882' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.520 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.543 248514 DEBUG nova.storage.rbd_utils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 7139a479-b2fe-4d64-8061-97fceda2e392_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.547 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:28:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2700987716' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.875 248514 DEBUG oslo_concurrency.processutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.883 248514 DEBUG nova.compute.provider_tree [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.907 248514 DEBUG nova.scheduler.client.report [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.945 248514 DEBUG oslo_concurrency.lockutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:43 np0005558241 nova_compute[248510]: 2025-12-13 08:28:43.982 248514 INFO nova.scheduler.client.report [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Deleted allocations for instance 98240df6-1cba-40e1-833c-24611270ed83#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.080 248514 DEBUG oslo_concurrency.lockutils [None req-ca88c24d-8aa0-41a8-a6ca-caadfc8cfb48 65a6b617130a42ac9c3d9b4abf6a1cfb 3490ad817e664ff6b12c4ea88192b667 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:28:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3325959890' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.184 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.186 248514 DEBUG nova.virt.libvirt.vif [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:28:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-771461744',display_name='tempest-ServerDiskConfigTestJSON-server-771461744',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-771461744',id=54,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-lisf0ibr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskC
onfigTestJSON-167971983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:33Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=7139a479-b2fe-4d64-8061-97fceda2e392,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.187 248514 DEBUG nova.network.os_vif_util [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.188 248514 DEBUG nova.network.os_vif_util [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=834fc672-a8af-4884-964c-481d0d8d318e,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834fc672-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.189 248514 DEBUG nova.objects.instance [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7139a479-b2fe-4d64-8061-97fceda2e392 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.203 248514 DEBUG nova.network.neutron [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Successfully created port: 3aadc575-dc9f-4823-82c7-112e9b9832fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.213 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:28:44 np0005558241 nova_compute[248510]:  <uuid>7139a479-b2fe-4d64-8061-97fceda2e392</uuid>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:  <name>instance-00000036</name>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-771461744</nova:name>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:28:42</nova:creationTime>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:28:44 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:        <nova:user uuid="a5bc32e49dbd4372a006913090b9ef0f">tempest-ServerDiskConfigTestJSON-167971983-project-member</nova:user>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:        <nova:project uuid="9aea752cb9b648a7aa9b3f634ced797e">tempest-ServerDiskConfigTestJSON-167971983</nova:project>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:        <nova:port uuid="834fc672-a8af-4884-964c-481d0d8d318e">
Dec 13 03:28:44 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <entry name="serial">7139a479-b2fe-4d64-8061-97fceda2e392</entry>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <entry name="uuid">7139a479-b2fe-4d64-8061-97fceda2e392</entry>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/7139a479-b2fe-4d64-8061-97fceda2e392_disk">
Dec 13 03:28:44 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:28:44 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/7139a479-b2fe-4d64-8061-97fceda2e392_disk.config">
Dec 13 03:28:44 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:28:44 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:b6:94:dc"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <target dev="tap834fc672-a8"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392/console.log" append="off"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:28:44 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:28:44 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:28:44 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:28:44 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.215 248514 DEBUG nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Preparing to wait for external event network-vif-plugged-834fc672-a8af-4884-964c-481d0d8d318e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.216 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.216 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.217 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.218 248514 DEBUG nova.virt.libvirt.vif [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:28:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-771461744',display_name='tempest-ServerDiskConfigTestJSON-server-771461744',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-771461744',id=54,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-lisf0ibr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-S
erverDiskConfigTestJSON-167971983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:33Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=7139a479-b2fe-4d64-8061-97fceda2e392,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.218 248514 DEBUG nova.network.os_vif_util [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.219 248514 DEBUG nova.network.os_vif_util [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=834fc672-a8af-4884-964c-481d0d8d318e,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834fc672-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.219 248514 DEBUG os_vif [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=834fc672-a8af-4884-964c-481d0d8d318e,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834fc672-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.220 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.221 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.221 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.226 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.226 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap834fc672-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.227 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap834fc672-a8, col_values=(('external_ids', {'iface-id': '834fc672-a8af-4884-964c-481d0d8d318e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b6:94:dc', 'vm-uuid': '7139a479-b2fe-4d64-8061-97fceda2e392'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:44 np0005558241 NetworkManager[50376]: <info>  [1765614524.2303] manager: (tap834fc672-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.231 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.235 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.236 248514 INFO os_vif [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=834fc672-a8af-4884-964c-481d0d8d318e,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834fc672-a8')#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.311 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.312 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.312 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] No VIF found with MAC fa:16:3e:b6:94:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.313 248514 INFO nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Using config drive#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.335 248514 DEBUG nova.storage.rbd_utils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 7139a479-b2fe-4d64-8061-97fceda2e392_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.689 248514 DEBUG nova.compute.manager [req-e1011515-e5c8-4f3b-9ec9-e5919633a83b req-1cba1da7-1394-46ad-a76d-de0c8a012b48 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received event network-vif-deleted-bdc94f2e-b14e-4e39-bea0-978ff56ff722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.817 248514 DEBUG nova.network.neutron [-] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.854 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.856 248514 INFO nova.compute.manager [-] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Took 1.45 seconds to deallocate network for instance.#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.913 248514 DEBUG oslo_concurrency.lockutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:44 np0005558241 nova_compute[248510]: 2025-12-13 08:28:44.914 248514 DEBUG oslo_concurrency.lockutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.010 248514 INFO nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Creating config drive at /var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392/disk.config#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.016 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeee0okbi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.061 248514 DEBUG oslo_concurrency.processutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.101 248514 DEBUG nova.compute.manager [req-12fcd077-99bd-45d8-aedf-d1fa5c51f3ce req-368ad30b-dbb1-4f00-b2d7-cbcbdd510509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.101 248514 DEBUG oslo_concurrency.lockutils [req-12fcd077-99bd-45d8-aedf-d1fa5c51f3ce req-368ad30b-dbb1-4f00-b2d7-cbcbdd510509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.102 248514 DEBUG oslo_concurrency.lockutils [req-12fcd077-99bd-45d8-aedf-d1fa5c51f3ce req-368ad30b-dbb1-4f00-b2d7-cbcbdd510509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.102 248514 DEBUG oslo_concurrency.lockutils [req-12fcd077-99bd-45d8-aedf-d1fa5c51f3ce req-368ad30b-dbb1-4f00-b2d7-cbcbdd510509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.102 248514 DEBUG nova.compute.manager [req-12fcd077-99bd-45d8-aedf-d1fa5c51f3ce req-368ad30b-dbb1-4f00-b2d7-cbcbdd510509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] No waiting events found dispatching network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.102 248514 WARNING nova.compute.manager [req-12fcd077-99bd-45d8-aedf-d1fa5c51f3ce req-368ad30b-dbb1-4f00-b2d7-cbcbdd510509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received unexpected event network-vif-plugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.103 248514 DEBUG nova.compute.manager [req-12fcd077-99bd-45d8-aedf-d1fa5c51f3ce req-368ad30b-dbb1-4f00-b2d7-cbcbdd510509 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Received event network-vif-deleted-3a8efc9e-7582-4b17-ab0e-b248e09932b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.104 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.161 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeee0okbi" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.187 248514 DEBUG nova.storage.rbd_utils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] rbd image 7139a479-b2fe-4d64-8061-97fceda2e392_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.193 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392/disk.config 7139a479-b2fe-4d64-8061-97fceda2e392_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 157 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Dec 13 03:28:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:28:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/236063891' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.691 248514 DEBUG oslo_concurrency.processutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.694 248514 DEBUG oslo_concurrency.processutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392/disk.config 7139a479-b2fe-4d64-8061-97fceda2e392_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.695 248514 INFO nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Deleting local config drive /var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392/disk.config because it was imported into RBD.#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.698 248514 DEBUG nova.compute.provider_tree [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.707 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614510.7056813, 41602b99-e7f2-450c-885e-51d07a1236d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.707 248514 INFO nova.compute.manager [-] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.725 248514 DEBUG nova.scheduler.client.report [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.731 248514 DEBUG nova.compute.manager [None req-199dca5c-b7be-4aeb-bf78-7b5caacdee3d - - - - - -] [instance: 41602b99-e7f2-450c-885e-51d07a1236d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.748 248514 DEBUG oslo_concurrency.lockutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:45 np0005558241 kernel: tap834fc672-a8: entered promiscuous mode
Dec 13 03:28:45 np0005558241 NetworkManager[50376]: <info>  [1765614525.7642] manager: (tap834fc672-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/222)
Dec 13 03:28:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:45Z|00505|binding|INFO|Claiming lport 834fc672-a8af-4884-964c-481d0d8d318e for this chassis.
Dec 13 03:28:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:45Z|00506|binding|INFO|834fc672-a8af-4884-964c-481d0d8d318e: Claiming fa:16:3e:b6:94:dc 10.100.0.10
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.764 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.778 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:94:dc 10.100.0.10'], port_security=['fa:16:3e:b6:94:dc 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7139a479-b2fe-4d64-8061-97fceda2e392', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c63049d-63e9-47af-99e2-ce1403a42891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '568e13d6-6674-49c1-866e-8e025ebc1046', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e00a5e3-7050-4e40-9e5a-7a1e6eee34cc, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=834fc672-a8af-4884-964c-481d0d8d318e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:28:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.779 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 834fc672-a8af-4884-964c-481d0d8d318e in datapath 6c63049d-63e9-47af-99e2-ce1403a42891 bound to our chassis#033[00m
Dec 13 03:28:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.781 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c63049d-63e9-47af-99e2-ce1403a42891#033[00m
Dec 13 03:28:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:45Z|00507|binding|INFO|Setting lport 834fc672-a8af-4884-964c-481d0d8d318e up in Southbound
Dec 13 03:28:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:45Z|00508|binding|INFO|Setting lport 834fc672-a8af-4884-964c-481d0d8d318e ovn-installed in OVS
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.784 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.787 248514 INFO nova.scheduler.client.report [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Deleted allocations for instance 678d2db2-0536-4744-b65c-f0a5852f35e0#033[00m
Dec 13 03:28:45 np0005558241 systemd-udevd[303904]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:28:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.799 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f3db8eae-1ade-44a2-b962-15501e6f8f67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.800 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c63049d-61 in ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:28:45 np0005558241 systemd-machined[210538]: New machine qemu-62-instance-00000036.
Dec 13 03:28:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.804 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c63049d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:28:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.804 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c2de6cd9-c235-409c-93f6-43064061597b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.805 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e87ffb71-4e86-4c30-90a8-3ba0a4e808c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:45 np0005558241 NetworkManager[50376]: <info>  [1765614525.8154] device (tap834fc672-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:28:45 np0005558241 systemd[1]: Started Virtual Machine qemu-62-instance-00000036.
Dec 13 03:28:45 np0005558241 NetworkManager[50376]: <info>  [1765614525.8164] device (tap834fc672-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:28:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.819 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e232d5e4-3762-4e88-a52e-635b5755a3f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.843 248514 DEBUG nova.network.neutron [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Updated VIF entry in instance network info cache for port 834fc672-a8af-4884-964c-481d0d8d318e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.844 248514 DEBUG nova.network.neutron [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Updating instance_info_cache with network_info: [{"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:28:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.850 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[93151451-c564-48ae-80b8-523b47907040]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.891 248514 DEBUG oslo_concurrency.lockutils [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7139a479-b2fe-4d64-8061-97fceda2e392" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.892 248514 DEBUG nova.compute.manager [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received event network-vif-unplugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.892 248514 DEBUG oslo_concurrency.lockutils [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "98240df6-1cba-40e1-833c-24611270ed83-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.893 248514 DEBUG oslo_concurrency.lockutils [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.893 248514 DEBUG oslo_concurrency.lockutils [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "98240df6-1cba-40e1-833c-24611270ed83-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.893 248514 DEBUG nova.compute.manager [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] No waiting events found dispatching network-vif-unplugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.893 248514 DEBUG nova.compute.manager [req-67cb5588-8ed1-4cf8-8bb4-62dfeaa5346d req-c3d0910c-8a72-4f39-b1a8-f6aeb237ce09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Received event network-vif-unplugged-bdc94f2e-b14e-4e39-bea0-978ff56ff722 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:28:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.903 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc4fda9-cb19-4038-b47a-9aa74172c2ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:45 np0005558241 systemd-udevd[303908]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:28:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.912 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2da4c41b-d7dc-4c31-bc65-5c2bea9f45e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:45 np0005558241 NetworkManager[50376]: <info>  [1765614525.9137] manager: (tap6c63049d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/223)
Dec 13 03:28:45 np0005558241 nova_compute[248510]: 2025-12-13 08:28:45.941 248514 DEBUG oslo_concurrency.lockutils [None req-70fee1e9-54f0-46bd-96f3-a5a197b163b5 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "678d2db2-0536-4744-b65c-f0a5852f35e0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.951 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[945fd282-d1a8-470e-9276-0c3e85aee9d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.956 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ec7a2bd5-a92c-4da7-a5df-7dca03234609]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:45 np0005558241 NetworkManager[50376]: <info>  [1765614525.9809] device (tap6c63049d-60): carrier: link connected
Dec 13 03:28:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:45.987 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2227333f-4ddf-4b95-b824-59acd872080a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.009 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d646c27a-7875-4b33-90c0-85172fe95d2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c63049d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710319, 'reachable_time': 19836, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303937, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.031 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7322d221-b06e-4e26-a2eb-63ac69ad9ea9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:c2f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 710319, 'tstamp': 710319}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303938, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.054 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[75913810-8ee8-4484-8bfc-40fc1c3caa48]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c63049d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:c2:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710319, 'reachable_time': 19836, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303939, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.096 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db88b816-2be5-4c2b-9e67-5cfc22978a7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:46 np0005558241 nova_compute[248510]: 2025-12-13 08:28:46.169 248514 DEBUG nova.network.neutron [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Successfully updated port: 3aadc575-dc9f-4823-82c7-112e9b9832fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.171 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[35605293-1b03-49c0-9a8f-761c0c8577c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.174 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c63049d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.174 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.175 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c63049d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:46 np0005558241 nova_compute[248510]: 2025-12-13 08:28:46.177 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:46 np0005558241 kernel: tap6c63049d-60: entered promiscuous mode
Dec 13 03:28:46 np0005558241 NetworkManager[50376]: <info>  [1765614526.1798] manager: (tap6c63049d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.192 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c63049d-60, col_values=(('external_ids', {'iface-id': 'b410790c-12b7-4a29-87e5-13a29af9c319'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:46 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:46Z|00509|binding|INFO|Releasing lport b410790c-12b7-4a29-87e5-13a29af9c319 from this chassis (sb_readonly=0)
Dec 13 03:28:46 np0005558241 nova_compute[248510]: 2025-12-13 08:28:46.195 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:46 np0005558241 nova_compute[248510]: 2025-12-13 08:28:46.198 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "refresh_cache-9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:28:46 np0005558241 nova_compute[248510]: 2025-12-13 08:28:46.199 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquired lock "refresh_cache-9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:28:46 np0005558241 nova_compute[248510]: 2025-12-13 08:28:46.199 248514 DEBUG nova.network.neutron [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.390 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:28:46 np0005558241 nova_compute[248510]: 2025-12-13 08:28:46.392 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.391 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5ebf3e-1d67-48c2-ad6c-6aaaf0ddc828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.393 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/6c63049d-63e9-47af-99e2-ce1403a42891.pid.haproxy
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 6c63049d-63e9-47af-99e2-ce1403a42891
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:28:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:46.394 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'env', 'PROCESS_TAG=haproxy-6c63049d-63e9-47af-99e2-ce1403a42891', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c63049d-63e9-47af-99e2-ce1403a42891.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:28:46 np0005558241 nova_compute[248510]: 2025-12-13 08:28:46.407 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:46 np0005558241 nova_compute[248510]: 2025-12-13 08:28:46.534 248514 DEBUG nova.network.neutron [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:28:46 np0005558241 nova_compute[248510]: 2025-12-13 08:28:46.592 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614526.5922618, 7139a479-b2fe-4d64-8061-97fceda2e392 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:46 np0005558241 nova_compute[248510]: 2025-12-13 08:28:46.593 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] VM Started (Lifecycle Event)#033[00m
Dec 13 03:28:46 np0005558241 nova_compute[248510]: 2025-12-13 08:28:46.623 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:46 np0005558241 nova_compute[248510]: 2025-12-13 08:28:46.628 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614526.592531, 7139a479-b2fe-4d64-8061-97fceda2e392 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:46 np0005558241 nova_compute[248510]: 2025-12-13 08:28:46.629 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:28:46 np0005558241 nova_compute[248510]: 2025-12-13 08:28:46.655 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:46 np0005558241 nova_compute[248510]: 2025-12-13 08:28:46.660 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:28:46 np0005558241 nova_compute[248510]: 2025-12-13 08:28:46.692 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:28:46 np0005558241 nova_compute[248510]: 2025-12-13 08:28:46.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:28:46 np0005558241 podman[304013]: 2025-12-13 08:28:46.791225456 +0000 UTC m=+0.053383289 container create 6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Dec 13 03:28:46 np0005558241 systemd[1]: Started libpod-conmon-6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5.scope.
Dec 13 03:28:46 np0005558241 podman[304013]: 2025-12-13 08:28:46.763270316 +0000 UTC m=+0.025428139 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:28:46 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:28:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f27d429928f4f51cc8b3e9fb422bd397a12e0d5e7d42eff0f2f2f804535b5b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:28:46 np0005558241 podman[304013]: 2025-12-13 08:28:46.884658391 +0000 UTC m=+0.146816234 container init 6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Dec 13 03:28:46 np0005558241 podman[304013]: 2025-12-13 08:28:46.890414583 +0000 UTC m=+0.152572386 container start 6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 03:28:46 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[304028]: [NOTICE]   (304032) : New worker (304034) forked
Dec 13 03:28:46 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[304028]: [NOTICE]   (304032) : Loading success.
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.173 248514 DEBUG nova.compute.manager [req-77c922da-c16c-4a48-a350-35f6d1aa939a req-263492f4-0150-4c3a-aec3-4062facc7aa9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Received event network-changed-3aadc575-dc9f-4823-82c7-112e9b9832fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.173 248514 DEBUG nova.compute.manager [req-77c922da-c16c-4a48-a350-35f6d1aa939a req-263492f4-0150-4c3a-aec3-4062facc7aa9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Refreshing instance network info cache due to event network-changed-3aadc575-dc9f-4823-82c7-112e9b9832fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.174 248514 DEBUG oslo_concurrency.lockutils [req-77c922da-c16c-4a48-a350-35f6d1aa939a req-263492f4-0150-4c3a-aec3-4062facc7aa9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.449 248514 DEBUG nova.compute.manager [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Received event network-vif-plugged-834fc672-a8af-4884-964c-481d0d8d318e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.450 248514 DEBUG oslo_concurrency.lockutils [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.450 248514 DEBUG oslo_concurrency.lockutils [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.450 248514 DEBUG oslo_concurrency.lockutils [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.450 248514 DEBUG nova.compute.manager [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Processing event network-vif-plugged-834fc672-a8af-4884-964c-481d0d8d318e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.450 248514 DEBUG nova.compute.manager [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Received event network-vif-plugged-834fc672-a8af-4884-964c-481d0d8d318e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.450 248514 DEBUG oslo_concurrency.lockutils [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.451 248514 DEBUG oslo_concurrency.lockutils [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.451 248514 DEBUG oslo_concurrency.lockutils [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.451 248514 DEBUG nova.compute.manager [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] No waiting events found dispatching network-vif-plugged-834fc672-a8af-4884-964c-481d0d8d318e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.451 248514 WARNING nova.compute.manager [req-f961dad8-95da-4c54-a3bd-914d777cb66c req-73d06cbb-5d73-4397-95e3-0fe8d8a4bfe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Received unexpected event network-vif-plugged-834fc672-a8af-4884-964c-481d0d8d318e for instance with vm_state building and task_state spawning.#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.452 248514 DEBUG nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.460 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:28:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 157 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.462 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614527.4589188, 7139a479-b2fe-4d64-8061-97fceda2e392 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.462 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.468 248514 INFO nova.virt.libvirt.driver [-] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Instance spawned successfully.#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.469 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.516 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.522 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.526 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.526 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.527 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.527 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.528 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.528 248514 DEBUG nova.virt.libvirt.driver [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.598 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.708 248514 INFO nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Took 13.49 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.709 248514 DEBUG nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.780 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614512.7797358, df25cd40-72b5-4e0f-90ec-8677c699d1d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.781 248514 INFO nova.compute.manager [-] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.828 248514 DEBUG nova.compute.manager [None req-ffe8b032-1df9-4024-b4bb-4c0685a80df1 - - - - - -] [instance: df25cd40-72b5-4e0f-90ec-8677c699d1d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.836 248514 INFO nova.compute.manager [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Took 16.84 seconds to build instance.#033[00m
Dec 13 03:28:47 np0005558241 nova_compute[248510]: 2025-12-13 08:28:47.867 248514 DEBUG oslo_concurrency.lockutils [None req-e1eb20de-f2bd-49d7-a96c-ed0074b78d41 a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:48 np0005558241 podman[304045]: 2025-12-13 08:28:48.973003288 +0000 UTC m=+0.053963502 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 13 03:28:48 np0005558241 podman[304044]: 2025-12-13 08:28:48.982771229 +0000 UTC m=+0.068600973 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:28:49 np0005558241 podman[304043]: 2025-12-13 08:28:49.00915988 +0000 UTC m=+0.096894631 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:28:49 np0005558241 nova_compute[248510]: 2025-12-13 08:28:49.229 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1936: 321 pgs: 321 active+clean; 134 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 178 op/s
Dec 13 03:28:49 np0005558241 nova_compute[248510]: 2025-12-13 08:28:49.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:28:49 np0005558241 nova_compute[248510]: 2025-12-13 08:28:49.852 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:49 np0005558241 nova_compute[248510]: 2025-12-13 08:28:49.853 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:49 np0005558241 nova_compute[248510]: 2025-12-13 08:28:49.854 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:49 np0005558241 nova_compute[248510]: 2025-12-13 08:28:49.854 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:28:49 np0005558241 nova_compute[248510]: 2025-12-13 08:28:49.855 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.059 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:28:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:28:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2795236163' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.457 248514 DEBUG nova.network.neutron [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Updating instance_info_cache with network_info: [{"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.486 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.506 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Releasing lock "refresh_cache-9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.507 248514 DEBUG nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Instance network_info: |[{"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.508 248514 DEBUG oslo_concurrency.lockutils [req-77c922da-c16c-4a48-a350-35f6d1aa939a req-263492f4-0150-4c3a-aec3-4062facc7aa9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.508 248514 DEBUG nova.network.neutron [req-77c922da-c16c-4a48-a350-35f6d1aa939a req-263492f4-0150-4c3a-aec3-4062facc7aa9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Refreshing network info cache for port 3aadc575-dc9f-4823-82c7-112e9b9832fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.511 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Start _get_guest_xml network_info=[{"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.517 248514 WARNING nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.521 248514 DEBUG nova.virt.libvirt.host [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.523 248514 DEBUG nova.virt.libvirt.host [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.528 248514 DEBUG nova.virt.libvirt.host [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.529 248514 DEBUG nova.virt.libvirt.host [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.530 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.530 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.531 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.531 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.532 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.532 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.532 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.533 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.533 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.533 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.534 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.534 248514 DEBUG nova.virt.hardware [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.539 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.655 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.655 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.835 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.837 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3966MB free_disk=59.94623995665461GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.837 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.838 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.910 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614515.909534, 678d2db2-0536-4744-b65c-f0a5852f35e0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.911 248514 INFO nova.compute.manager [-] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:28:50 np0005558241 nova_compute[248510]: 2025-12-13 08:28:50.970 248514 DEBUG nova.compute.manager [None req-476bc214-621f-407c-9178-97ec3640072c - - - - - -] [instance: 678d2db2-0536-4744-b65c-f0a5852f35e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.052 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 7139a479-b2fe-4d64-8061-97fceda2e392 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.053 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.053 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.054 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.133 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:28:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1555582104' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.220 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.242 248514 DEBUG nova.storage.rbd_utils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.247 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 134 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 157 op/s
Dec 13 03:28:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:28:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2735186484' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.746 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.754 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.782 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:28:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:28:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/521739913' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.847 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.848 248514 DEBUG nova.virt.libvirt.vif [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1587589866',display_name='tempest-ImagesTestJSON-server-1587589866',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1587589866',id=55,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-uay75zc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:41Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=9b45ce75-4bd3-4cc0-a772-2474ffc2cd52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.849 248514 DEBUG nova.network.os_vif_util [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.850 248514 DEBUG nova.network.os_vif_util [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f5:51,bridge_name='br-int',has_traffic_filtering=True,id=3aadc575-dc9f-4823-82c7-112e9b9832fe,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aadc575-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.851 248514 DEBUG nova.objects.instance [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.854 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.855 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.879 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:28:51 np0005558241 nova_compute[248510]:  <uuid>9b45ce75-4bd3-4cc0-a772-2474ffc2cd52</uuid>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:  <name>instance-00000037</name>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <nova:name>tempest-ImagesTestJSON-server-1587589866</nova:name>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:28:50</nova:creationTime>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:28:51 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:        <nova:user uuid="3b988c7ac9354c59aac9a9f41f83c20f">tempest-ImagesTestJSON-1234382421-project-member</nova:user>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:        <nova:project uuid="52e1055963294dbdb16cd95b466cd4d9">tempest-ImagesTestJSON-1234382421</nova:project>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:        <nova:port uuid="3aadc575-dc9f-4823-82c7-112e9b9832fe">
Dec 13 03:28:51 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <entry name="serial">9b45ce75-4bd3-4cc0-a772-2474ffc2cd52</entry>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <entry name="uuid">9b45ce75-4bd3-4cc0-a772-2474ffc2cd52</entry>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk">
Dec 13 03:28:51 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:28:51 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk.config">
Dec 13 03:28:51 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:28:51 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:dd:f5:51"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <target dev="tap3aadc575-dc"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52/console.log" append="off"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:28:51 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:28:51 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:28:51 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:28:51 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.885 248514 DEBUG nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Preparing to wait for external event network-vif-plugged-3aadc575-dc9f-4823-82c7-112e9b9832fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.886 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.886 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.886 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.887 248514 DEBUG nova.virt.libvirt.vif [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1587589866',display_name='tempest-ImagesTestJSON-server-1587589866',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1587589866',id=55,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-uay75zc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-mem
ber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:41Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=9b45ce75-4bd3-4cc0-a772-2474ffc2cd52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.887 248514 DEBUG nova.network.os_vif_util [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.888 248514 DEBUG nova.network.os_vif_util [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f5:51,bridge_name='br-int',has_traffic_filtering=True,id=3aadc575-dc9f-4823-82c7-112e9b9832fe,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aadc575-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.889 248514 DEBUG os_vif [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f5:51,bridge_name='br-int',has_traffic_filtering=True,id=3aadc575-dc9f-4823-82c7-112e9b9832fe,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aadc575-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.889 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.890 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.890 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.894 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.894 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3aadc575-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.894 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3aadc575-dc, col_values=(('external_ids', {'iface-id': '3aadc575-dc9f-4823-82c7-112e9b9832fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:f5:51', 'vm-uuid': '9b45ce75-4bd3-4cc0-a772-2474ffc2cd52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.896 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:51 np0005558241 NetworkManager[50376]: <info>  [1765614531.8973] manager: (tap3aadc575-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.899 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.903 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:51 np0005558241 nova_compute[248510]: 2025-12-13 08:28:51.905 248514 INFO os_vif [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f5:51,bridge_name='br-int',has_traffic_filtering=True,id=3aadc575-dc9f-4823-82c7-112e9b9832fe,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aadc575-dc')#033[00m
Dec 13 03:28:52 np0005558241 nova_compute[248510]: 2025-12-13 08:28:52.234 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:28:52 np0005558241 nova_compute[248510]: 2025-12-13 08:28:52.235 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:28:52 np0005558241 nova_compute[248510]: 2025-12-13 08:28:52.235 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No VIF found with MAC fa:16:3e:dd:f5:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:28:52 np0005558241 nova_compute[248510]: 2025-12-13 08:28:52.236 248514 INFO nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Using config drive#033[00m
Dec 13 03:28:52 np0005558241 nova_compute[248510]: 2025-12-13 08:28:52.258 248514 DEBUG nova.storage.rbd_utils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:52 np0005558241 nova_compute[248510]: 2025-12-13 08:28:52.854 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:28:52 np0005558241 nova_compute[248510]: 2025-12-13 08:28:52.855 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:28:52 np0005558241 nova_compute[248510]: 2025-12-13 08:28:52.856 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:28:52 np0005558241 nova_compute[248510]: 2025-12-13 08:28:52.856 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.043 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "917c2aaf-7701-4198-802a-0bfc5753885a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.043 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.076 248514 INFO nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Creating config drive at /var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52/disk.config#033[00m
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.082 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcv6e4ei_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.132 248514 DEBUG nova.network.neutron [req-77c922da-c16c-4a48-a350-35f6d1aa939a req-263492f4-0150-4c3a-aec3-4062facc7aa9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Updated VIF entry in instance network info cache for port 3aadc575-dc9f-4823-82c7-112e9b9832fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.134 248514 DEBUG nova.network.neutron [req-77c922da-c16c-4a48-a350-35f6d1aa939a req-263492f4-0150-4c3a-aec3-4062facc7aa9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Updating instance_info_cache with network_info: [{"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.137 248514 DEBUG nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.180 248514 DEBUG oslo_concurrency.lockutils [req-77c922da-c16c-4a48-a350-35f6d1aa939a req-263492f4-0150-4c3a-aec3-4062facc7aa9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.238 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcv6e4ei_" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.263 248514 DEBUG nova.storage.rbd_utils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] rbd image 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.271 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52/disk.config 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.342 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.343 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.353 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.354 248514 INFO nova.compute.claims [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:28:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1938: 321 pgs: 321 active+clean; 134 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.587 248514 DEBUG oslo_concurrency.processutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52/disk.config 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.588 248514 INFO nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Deleting local config drive /var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52/disk.config because it was imported into RBD.#033[00m
Dec 13 03:28:53 np0005558241 NetworkManager[50376]: <info>  [1765614533.6433] manager: (tap3aadc575-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Dec 13 03:28:53 np0005558241 kernel: tap3aadc575-dc: entered promiscuous mode
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.648 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:53Z|00510|binding|INFO|Claiming lport 3aadc575-dc9f-4823-82c7-112e9b9832fe for this chassis.
Dec 13 03:28:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:53Z|00511|binding|INFO|3aadc575-dc9f-4823-82c7-112e9b9832fe: Claiming fa:16:3e:dd:f5:51 10.100.0.11
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.656 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:53 np0005558241 systemd-udevd[304280]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:28:53 np0005558241 NetworkManager[50376]: <info>  [1765614533.7092] device (tap3aadc575-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:28:53 np0005558241 NetworkManager[50376]: <info>  [1765614533.7104] device (tap3aadc575-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.758 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:53Z|00512|binding|INFO|Setting lport 3aadc575-dc9f-4823-82c7-112e9b9832fe ovn-installed in OVS
Dec 13 03:28:53 np0005558241 nova_compute[248510]: 2025-12-13 08:28:53.762 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:53 np0005558241 systemd-machined[210538]: New machine qemu-63-instance-00000037.
Dec 13 03:28:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.933 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:f5:51 10.100.0.11'], port_security=['fa:16:3e:dd:f5:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9b45ce75-4bd3-4cc0-a772-2474ffc2cd52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52e1055963294dbdb16cd95b466cd4d9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '269e7310-e268-4e11-a2e8-e4c22118637e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99e34124-e040-4994-aa81-ced5b49550b0, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3aadc575-dc9f-4823-82c7-112e9b9832fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:28:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.935 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3aadc575-dc9f-4823-82c7-112e9b9832fe in datapath 87bd91d0-eead-49b6-8f92-f8d0dba555dc bound to our chassis#033[00m
Dec 13 03:28:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.937 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87bd91d0-eead-49b6-8f92-f8d0dba555dc#033[00m
Dec 13 03:28:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:53Z|00513|binding|INFO|Setting lport 3aadc575-dc9f-4823-82c7-112e9b9832fe up in Southbound
Dec 13 03:28:53 np0005558241 systemd[1]: Started Virtual Machine qemu-63-instance-00000037.
Dec 13 03:28:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.953 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[317629c0-bf64-4d47-aaf1-f2ef69deaed0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.954 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap87bd91d0-e1 in ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:28:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.960 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap87bd91d0-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:28:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.960 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[418409c3-35aa-471a-920c-d4d192c416b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.961 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[17eef8b4-a7b9-4118-82fa-2ab99926b46e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.974 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[6a02623c-023d-4b11-bc1f-7fd93b8ce468]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:53.991 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dc69cb0a-3249-4d6f-831f-a6d9f71e752f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.040 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ec2d25-7761-4cb8-8183-d45ccf0b1cda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:54 np0005558241 NetworkManager[50376]: <info>  [1765614534.0473] manager: (tap87bd91d0-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/227)
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.046 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff38bb8-7f3b-48ee-8f57-0bfad566f957]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.094 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6f0570a7-b3a3-4c99-ae6c-24919c8dff0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.097 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2f239d80-51ad-4648-98ce-77b6fb16cf66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:54 np0005558241 NetworkManager[50376]: <info>  [1765614534.1249] device (tap87bd91d0-e0): carrier: link connected
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.135 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[aea5b0dc-ded9-443d-973d-7a6a09f186eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.158 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fab31c47-1601-44d6-ba95-39a63867fb9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87bd91d0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:ec:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711134, 'reachable_time': 42720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304316, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.181 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[476383cf-40f2-4c10-ad4a-dbd2a057a4d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:ec7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711134, 'tstamp': 711134}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304317, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.207 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[56e7c471-8373-465d-b7da-203cc53b60e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87bd91d0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:ec:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 150], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711134, 'reachable_time': 42720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304318, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.247 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[53da2727-94fb-484c-8aa1-8520c2dbed43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.326 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2d015f13-3d2c-4d11-bfe1-ececdcbcef67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.328 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87bd91d0-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.328 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.329 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87bd91d0-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:54 np0005558241 nova_compute[248510]: 2025-12-13 08:28:54.363 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:54 np0005558241 kernel: tap87bd91d0-e0: entered promiscuous mode
Dec 13 03:28:54 np0005558241 NetworkManager[50376]: <info>  [1765614534.3654] manager: (tap87bd91d0-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Dec 13 03:28:54 np0005558241 nova_compute[248510]: 2025-12-13 08:28:54.366 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.367 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87bd91d0-e0, col_values=(('external_ids', {'iface-id': '3e59c36d-12c7-4d92-aa2f-8b6a795f3f98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:28:54 np0005558241 nova_compute[248510]: 2025-12-13 08:28:54.368 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:28:54Z|00514|binding|INFO|Releasing lport 3e59c36d-12c7-4d92-aa2f-8b6a795f3f98 from this chassis (sb_readonly=0)
Dec 13 03:28:54 np0005558241 nova_compute[248510]: 2025-12-13 08:28:54.385 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.387 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.388 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[de6e625b-caa3-4f1b-bbd5-f42eb2d7b0cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.389 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/87bd91d0-eead-49b6-8f92-f8d0dba555dc.pid.haproxy
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 87bd91d0-eead-49b6-8f92-f8d0dba555dc
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:28:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:54.389 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'env', 'PROCESS_TAG=haproxy-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/87bd91d0-eead-49b6-8f92-f8d0dba555dc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:28:54 np0005558241 nova_compute[248510]: 2025-12-13 08:28:54.502 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:54 np0005558241 nova_compute[248510]: 2025-12-13 08:28:54.762 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614534.7623432, 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:54 np0005558241 nova_compute[248510]: 2025-12-13 08:28:54.763 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] VM Started (Lifecycle Event)#033[00m
Dec 13 03:28:54 np0005558241 podman[304411]: 2025-12-13 08:28:54.850949148 +0000 UTC m=+0.083140393 container create 2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 03:28:54 np0005558241 podman[304411]: 2025-12-13 08:28:54.790505116 +0000 UTC m=+0.022696391 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:28:54 np0005558241 systemd[1]: Started libpod-conmon-2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01.scope.
Dec 13 03:28:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:28:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d06258ddb97e0e16af3f5236f13273e2b5d49c570ee6de133888013ff7c7f8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:28:54 np0005558241 podman[304411]: 2025-12-13 08:28:54.954748419 +0000 UTC m=+0.186939704 container init 2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 03:28:54 np0005558241 podman[304411]: 2025-12-13 08:28:54.961875725 +0000 UTC m=+0.194066980 container start 2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 03:28:54 np0005558241 nova_compute[248510]: 2025-12-13 08:28:54.983 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:54 np0005558241 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[304427]: [NOTICE]   (304431) : New worker (304433) forked
Dec 13 03:28:54 np0005558241 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[304427]: [NOTICE]   (304431) : Loading success.
Dec 13 03:28:54 np0005558241 nova_compute[248510]: 2025-12-13 08:28:54.991 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614534.7624454, 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:54 np0005558241 nova_compute[248510]: 2025-12-13 08:28:54.991 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.061 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.091 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:28:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/231925796' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.096 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.122 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.128 248514 DEBUG nova.compute.provider_tree [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.147 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:28:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.199 248514 DEBUG nova.scheduler.client.report [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.215 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614520.214119, 98240df6-1cba-40e1-833c-24611270ed83 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.215 248514 INFO nova.compute.manager [-] [instance: 98240df6-1cba-40e1-833c-24611270ed83] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.302 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.303 248514 DEBUG nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.310 248514 DEBUG nova.compute.manager [None req-3fd42ab4-3f83-442e-8914-787ca79cd5da - - - - - -] [instance: 98240df6-1cba-40e1-833c-24611270ed83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.349 248514 DEBUG nova.compute.manager [req-c9b8fd99-8a0e-4adb-8951-985ad2855185 req-41b94a41-27ab-4b12-b97d-4bbf72df139e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Received event network-vif-plugged-3aadc575-dc9f-4823-82c7-112e9b9832fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.352 248514 DEBUG oslo_concurrency.lockutils [req-c9b8fd99-8a0e-4adb-8951-985ad2855185 req-41b94a41-27ab-4b12-b97d-4bbf72df139e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.352 248514 DEBUG oslo_concurrency.lockutils [req-c9b8fd99-8a0e-4adb-8951-985ad2855185 req-41b94a41-27ab-4b12-b97d-4bbf72df139e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.353 248514 DEBUG oslo_concurrency.lockutils [req-c9b8fd99-8a0e-4adb-8951-985ad2855185 req-41b94a41-27ab-4b12-b97d-4bbf72df139e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.353 248514 DEBUG nova.compute.manager [req-c9b8fd99-8a0e-4adb-8951-985ad2855185 req-41b94a41-27ab-4b12-b97d-4bbf72df139e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Processing event network-vif-plugged-3aadc575-dc9f-4823-82c7-112e9b9832fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.354 248514 DEBUG nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.368 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614535.3664398, 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.379 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.382 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.389 248514 INFO nova.virt.libvirt.driver [-] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Instance spawned successfully.#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.389 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:28:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:55.410 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:55.411 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:28:55.412 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.427 248514 DEBUG nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.427 248514 DEBUG nova.network.neutron [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.459 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.464 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 134 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 131 op/s
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.465 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.466 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.466 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.467 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.467 248514 DEBUG nova.virt.libvirt.driver [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.472 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.523 248514 INFO nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.537 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.618 248514 DEBUG nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.635 248514 INFO nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Took 13.07 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.636 248514 DEBUG nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:55 np0005558241 nova_compute[248510]: 2025-12-13 08:28:55.735 248514 DEBUG nova.policy [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b6f7967ce16c45d394188c1302b02907', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:28:56 np0005558241 nova_compute[248510]: 2025-12-13 08:28:56.005 248514 INFO nova.compute.manager [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Took 16.05 seconds to build instance.#033[00m
Dec 13 03:28:56 np0005558241 nova_compute[248510]: 2025-12-13 08:28:56.036 248514 DEBUG nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:28:56 np0005558241 nova_compute[248510]: 2025-12-13 08:28:56.038 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:28:56 np0005558241 nova_compute[248510]: 2025-12-13 08:28:56.039 248514 INFO nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Creating image(s)#033[00m
Dec 13 03:28:56 np0005558241 nova_compute[248510]: 2025-12-13 08:28:56.064 248514 DEBUG nova.storage.rbd_utils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 917c2aaf-7701-4198-802a-0bfc5753885a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:56 np0005558241 nova_compute[248510]: 2025-12-13 08:28:56.092 248514 DEBUG nova.storage.rbd_utils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 917c2aaf-7701-4198-802a-0bfc5753885a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:56 np0005558241 nova_compute[248510]: 2025-12-13 08:28:56.121 248514 DEBUG nova.storage.rbd_utils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 917c2aaf-7701-4198-802a-0bfc5753885a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:56 np0005558241 nova_compute[248510]: 2025-12-13 08:28:56.125 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:56 np0005558241 nova_compute[248510]: 2025-12-13 08:28:56.169 248514 DEBUG oslo_concurrency.lockutils [None req-1198b06e-477e-4098-9b2a-ddf37f33c783 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:56 np0005558241 nova_compute[248510]: 2025-12-13 08:28:56.217 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:28:56 np0005558241 nova_compute[248510]: 2025-12-13 08:28:56.218 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:56 np0005558241 nova_compute[248510]: 2025-12-13 08:28:56.219 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:56 np0005558241 nova_compute[248510]: 2025-12-13 08:28:56.219 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:56 np0005558241 nova_compute[248510]: 2025-12-13 08:28:56.251 248514 DEBUG nova.storage.rbd_utils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 917c2aaf-7701-4198-802a-0bfc5753885a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:28:56 np0005558241 nova_compute[248510]: 2025-12-13 08:28:56.256 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 917c2aaf-7701-4198-802a-0bfc5753885a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:28:56 np0005558241 nova_compute[248510]: 2025-12-13 08:28:56.589 248514 DEBUG nova.network.neutron [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Successfully created port: a795a3d9-1388-4e72-8bfd-271816a45466 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:28:56 np0005558241 nova_compute[248510]: 2025-12-13 08:28:56.897 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:28:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1940: 321 pgs: 321 active+clean; 134 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 585 KiB/s wr, 99 op/s
Dec 13 03:28:57 np0005558241 nova_compute[248510]: 2025-12-13 08:28:57.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:28:58 np0005558241 nova_compute[248510]: 2025-12-13 08:28:58.542 248514 DEBUG nova.compute.manager [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:28:58 np0005558241 nova_compute[248510]: 2025-12-13 08:28:58.560 248514 DEBUG nova.compute.manager [req-0c2637eb-6382-408e-87f5-dce54265093d req-a49420e7-7e05-4aa3-9716-46745e5cd417 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Received event network-vif-plugged-3aadc575-dc9f-4823-82c7-112e9b9832fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:58 np0005558241 nova_compute[248510]: 2025-12-13 08:28:58.561 248514 DEBUG oslo_concurrency.lockutils [req-0c2637eb-6382-408e-87f5-dce54265093d req-a49420e7-7e05-4aa3-9716-46745e5cd417 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:58 np0005558241 nova_compute[248510]: 2025-12-13 08:28:58.561 248514 DEBUG oslo_concurrency.lockutils [req-0c2637eb-6382-408e-87f5-dce54265093d req-a49420e7-7e05-4aa3-9716-46745e5cd417 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:58 np0005558241 nova_compute[248510]: 2025-12-13 08:28:58.562 248514 DEBUG oslo_concurrency.lockutils [req-0c2637eb-6382-408e-87f5-dce54265093d req-a49420e7-7e05-4aa3-9716-46745e5cd417 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:58 np0005558241 nova_compute[248510]: 2025-12-13 08:28:58.562 248514 DEBUG nova.compute.manager [req-0c2637eb-6382-408e-87f5-dce54265093d req-a49420e7-7e05-4aa3-9716-46745e5cd417 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] No waiting events found dispatching network-vif-plugged-3aadc575-dc9f-4823-82c7-112e9b9832fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:28:58 np0005558241 nova_compute[248510]: 2025-12-13 08:28:58.563 248514 WARNING nova.compute.manager [req-0c2637eb-6382-408e-87f5-dce54265093d req-a49420e7-7e05-4aa3-9716-46745e5cd417 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Received unexpected event network-vif-plugged-3aadc575-dc9f-4823-82c7-112e9b9832fe for instance with vm_state active and task_state image_snapshot.#033[00m
Dec 13 03:28:58 np0005558241 nova_compute[248510]: 2025-12-13 08:28:58.564 248514 DEBUG nova.network.neutron [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Successfully updated port: a795a3d9-1388-4e72-8bfd-271816a45466 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:28:58 np0005558241 nova_compute[248510]: 2025-12-13 08:28:58.658 248514 INFO nova.compute.manager [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] instance snapshotting#033[00m
Dec 13 03:28:58 np0005558241 nova_compute[248510]: 2025-12-13 08:28:58.682 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "refresh_cache-917c2aaf-7701-4198-802a-0bfc5753885a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:28:58 np0005558241 nova_compute[248510]: 2025-12-13 08:28:58.683 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquired lock "refresh_cache-917c2aaf-7701-4198-802a-0bfc5753885a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:28:58 np0005558241 nova_compute[248510]: 2025-12-13 08:28:58.683 248514 DEBUG nova.network.neutron [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:28:58 np0005558241 nova_compute[248510]: 2025-12-13 08:28:58.974 248514 DEBUG oslo_concurrency.lockutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "7139a479-b2fe-4d64-8061-97fceda2e392" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:58 np0005558241 nova_compute[248510]: 2025-12-13 08:28:58.974 248514 DEBUG oslo_concurrency.lockutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:58 np0005558241 nova_compute[248510]: 2025-12-13 08:28:58.975 248514 DEBUG oslo_concurrency.lockutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:28:58 np0005558241 nova_compute[248510]: 2025-12-13 08:28:58.975 248514 DEBUG oslo_concurrency.lockutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:28:58 np0005558241 nova_compute[248510]: 2025-12-13 08:28:58.975 248514 DEBUG oslo_concurrency.lockutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:28:58 np0005558241 nova_compute[248510]: 2025-12-13 08:28:58.976 248514 INFO nova.compute.manager [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Terminating instance#033[00m
Dec 13 03:28:58 np0005558241 nova_compute[248510]: 2025-12-13 08:28:58.977 248514 DEBUG nova.compute.manager [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:28:59 np0005558241 nova_compute[248510]: 2025-12-13 08:28:59.009 248514 INFO nova.virt.libvirt.driver [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Beginning live snapshot process#033[00m
Dec 13 03:28:59 np0005558241 nova_compute[248510]: 2025-12-13 08:28:59.044 248514 DEBUG nova.network.neutron [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:28:59 np0005558241 nova_compute[248510]: 2025-12-13 08:28:59.051 248514 DEBUG nova.compute.manager [req-3a1043fd-d98b-4f3c-9c77-d9bc27ce515b req-2599b06d-5817-4e9e-be32-f4c1b9439dfb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Received event network-changed-a795a3d9-1388-4e72-8bfd-271816a45466 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:28:59 np0005558241 nova_compute[248510]: 2025-12-13 08:28:59.051 248514 DEBUG nova.compute.manager [req-3a1043fd-d98b-4f3c-9c77-d9bc27ce515b req-2599b06d-5817-4e9e-be32-f4c1b9439dfb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Refreshing instance network info cache due to event network-changed-a795a3d9-1388-4e72-8bfd-271816a45466. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:28:59 np0005558241 nova_compute[248510]: 2025-12-13 08:28:59.051 248514 DEBUG oslo_concurrency.lockutils [req-3a1043fd-d98b-4f3c-9c77-d9bc27ce515b req-2599b06d-5817-4e9e-be32-f4c1b9439dfb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-917c2aaf-7701-4198-802a-0bfc5753885a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:28:59 np0005558241 nova_compute[248510]: 2025-12-13 08:28:59.326 248514 DEBUG nova.virt.libvirt.imagebackend [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec 13 03:28:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1941: 321 pgs: 321 active+clean; 170 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 189 op/s
Dec 13 03:28:59 np0005558241 nova_compute[248510]: 2025-12-13 08:28:59.570 248514 DEBUG nova.storage.rbd_utils [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] creating snapshot(36c458ad47e44a7282d63c2a8d0dce5d) on rbd image(9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:28:59 np0005558241 nova_compute[248510]: 2025-12-13 08:28:59.976 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 917c2aaf-7701-4198-802a-0bfc5753885a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.721s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.064 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:00 np0005558241 kernel: tap834fc672-a8 (unregistering): left promiscuous mode
Dec 13 03:29:00 np0005558241 NetworkManager[50376]: <info>  [1765614540.1174] device (tap834fc672-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:29:00 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:00Z|00515|binding|INFO|Releasing lport 834fc672-a8af-4884-964c-481d0d8d318e from this chassis (sb_readonly=0)
Dec 13 03:29:00 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:00Z|00516|binding|INFO|Setting lport 834fc672-a8af-4884-964c-481d0d8d318e down in Southbound
Dec 13 03:29:00 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:00Z|00517|binding|INFO|Removing iface tap834fc672-a8 ovn-installed in OVS
Dec 13 03:29:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.158 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.171 248514 DEBUG nova.storage.rbd_utils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] resizing rbd image 917c2aaf-7701-4198-802a-0bfc5753885a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:29:00 np0005558241 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000036.scope: Deactivated successfully.
Dec 13 03:29:00 np0005558241 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000036.scope: Consumed 11.795s CPU time.
Dec 13 03:29:00 np0005558241 systemd-machined[210538]: Machine qemu-62-instance-00000036 terminated.
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.273 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.278 248514 INFO nova.virt.libvirt.driver [-] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Instance destroyed successfully.#033[00m
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.278 248514 DEBUG nova.objects.instance [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lazy-loading 'resources' on Instance uuid 7139a479-b2fe-4d64-8061-97fceda2e392 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:00.435 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:94:dc 10.100.0.10'], port_security=['fa:16:3e:b6:94:dc 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7139a479-b2fe-4d64-8061-97fceda2e392', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c63049d-63e9-47af-99e2-ce1403a42891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9aea752cb9b648a7aa9b3f634ced797e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '568e13d6-6674-49c1-866e-8e025ebc1046', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e00a5e3-7050-4e40-9e5a-7a1e6eee34cc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=834fc672-a8af-4884-964c-481d0d8d318e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:29:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:00.436 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 834fc672-a8af-4884-964c-481d0d8d318e in datapath 6c63049d-63e9-47af-99e2-ce1403a42891 unbound from our chassis#033[00m
Dec 13 03:29:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:00.438 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c63049d-63e9-47af-99e2-ce1403a42891, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:29:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:00.439 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2f1547-bf5c-4c47-bdf1-dbf6a8894416]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:00.439 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 namespace which is not needed anymore#033[00m
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.443 248514 DEBUG nova.virt.libvirt.vif [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:28:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-771461744',display_name='tempest-ServerDiskConfigTestJSON-server-771461744',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-771461744',id=54,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:28:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9aea752cb9b648a7aa9b3f634ced797e',ramdisk_id='',reservation_id='r-lisf0ibr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-167971983',owner_user_name='tempest-ServerDiskConfigTestJSON-167971983-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:28:56Z,user_data=None,user_id='a5bc32e49dbd4372a006913090b9ef0f',uuid=7139a479-b2fe-4d64-8061-97fceda2e392,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.444 248514 DEBUG nova.network.os_vif_util [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converting VIF {"id": "834fc672-a8af-4884-964c-481d0d8d318e", "address": "fa:16:3e:b6:94:dc", "network": {"id": "6c63049d-63e9-47af-99e2-ce1403a42891", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-680004125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9aea752cb9b648a7aa9b3f634ced797e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap834fc672-a8", "ovs_interfaceid": "834fc672-a8af-4884-964c-481d0d8d318e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.445 248514 DEBUG nova.network.os_vif_util [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=834fc672-a8af-4884-964c-481d0d8d318e,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834fc672-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.445 248514 DEBUG os_vif [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=834fc672-a8af-4884-964c-481d0d8d318e,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834fc672-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.448 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.448 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap834fc672-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.452 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.454 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.457 248514 INFO os_vif [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:94:dc,bridge_name='br-int',has_traffic_filtering=True,id=834fc672-a8af-4884-964c-481d0d8d318e,network=Network(6c63049d-63e9-47af-99e2-ce1403a42891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap834fc672-a8')#033[00m
Dec 13 03:29:00 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[304028]: [NOTICE]   (304032) : haproxy version is 2.8.14-c23fe91
Dec 13 03:29:00 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[304028]: [NOTICE]   (304032) : path to executable is /usr/sbin/haproxy
Dec 13 03:29:00 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[304028]: [WARNING]  (304032) : Exiting Master process...
Dec 13 03:29:00 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[304028]: [ALERT]    (304032) : Current worker (304034) exited with code 143 (Terminated)
Dec 13 03:29:00 np0005558241 neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891[304028]: [WARNING]  (304032) : All workers exited. Exiting... (0)
Dec 13 03:29:00 np0005558241 systemd[1]: libpod-6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5.scope: Deactivated successfully.
Dec 13 03:29:00 np0005558241 podman[304694]: 2025-12-13 08:29:00.698263892 +0000 UTC m=+0.154751299 container died 6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.781 248514 DEBUG nova.objects.instance [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'migration_context' on Instance uuid 917c2aaf-7701-4198-802a-0bfc5753885a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.802 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.804 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Ensure instance console log exists: /var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.804 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.805 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:00 np0005558241 nova_compute[248510]: 2025-12-13 08:29:00.805 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:00 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5-userdata-shm.mount: Deactivated successfully.
Dec 13 03:29:00 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f5f27d429928f4f51cc8b3e9fb422bd397a12e0d5e7d42eff0f2f2f804535b5b-merged.mount: Deactivated successfully.
Dec 13 03:29:00 np0005558241 podman[304694]: 2025-12-13 08:29:00.973203145 +0000 UTC m=+0.429690552 container cleanup 6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:29:00 np0005558241 systemd[1]: libpod-conmon-6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5.scope: Deactivated successfully.
Dec 13 03:29:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Dec 13 03:29:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Dec 13 03:29:01 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Dec 13 03:29:01 np0005558241 podman[304742]: 2025-12-13 08:29:01.456632913 +0000 UTC m=+0.454702880 container remove 6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.459 248514 DEBUG nova.network.neutron [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Updating instance_info_cache with network_info: [{"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:29:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 170 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Dec 13 03:29:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:01.471 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d86b40-a636-4c78-85ff-f5c455b9cadf]: (4, ('Sat Dec 13 08:29:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 (6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5)\n6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5\nSat Dec 13 08:29:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 (6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5)\n6d9e5477107b54c2473c8a2e94cba675a1784c67629ecf745642d9ac2585d4c5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:01.473 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[feb86db5-9b74-474c-b958-ea5caed61cab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:01.474 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c63049d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:01 np0005558241 kernel: tap6c63049d-60: left promiscuous mode
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.486 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.489 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Releasing lock "refresh_cache-917c2aaf-7701-4198-802a-0bfc5753885a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.490 248514 DEBUG nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Instance network_info: |[{"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.490 248514 DEBUG oslo_concurrency.lockutils [req-3a1043fd-d98b-4f3c-9c77-d9bc27ce515b req-2599b06d-5817-4e9e-be32-f4c1b9439dfb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-917c2aaf-7701-4198-802a-0bfc5753885a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.490 248514 DEBUG nova.network.neutron [req-3a1043fd-d98b-4f3c-9c77-d9bc27ce515b req-2599b06d-5817-4e9e-be32-f4c1b9439dfb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Refreshing network info cache for port a795a3d9-1388-4e72-8bfd-271816a45466 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.493 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Start _get_guest_xml network_info=[{"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.516 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:01.520 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[106fbbbb-54b3-47b4-846f-6abaf6c4dddd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.526 248514 WARNING nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.534 248514 DEBUG nova.virt.libvirt.host [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.535 248514 DEBUG nova.virt.libvirt.host [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:29:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:01.535 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1c289457-8866-4c81-9f4f-0b90ab0f2179]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:01.537 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ef512f-b8a3-40ec-a3f0-508eac89dbca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.543 248514 DEBUG nova.virt.libvirt.host [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.544 248514 DEBUG nova.virt.libvirt.host [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.545 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.545 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.546 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.546 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.547 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.547 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.547 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.548 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.548 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.548 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.549 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.549 248514 DEBUG nova.virt.hardware [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:29:01 np0005558241 nova_compute[248510]: 2025-12-13 08:29:01.553 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:01.558 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a502e4-413f-4b05-809e-836d92402a2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710311, 'reachable_time': 25771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304758, 'error': None, 'target': 'ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:01 np0005558241 systemd[1]: run-netns-ovnmeta\x2d6c63049d\x2d63e9\x2d47af\x2d99e2\x2dce1403a42891.mount: Deactivated successfully.
Dec 13 03:29:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:01.562 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c63049d-63e9-47af-99e2-ce1403a42891 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:29:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:01.562 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[9908cf5b-6990-4d16-aad7-d30c29500a39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:29:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1830690144' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.135 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.256 248514 DEBUG nova.storage.rbd_utils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 917c2aaf-7701-4198-802a-0bfc5753885a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.262 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.419 248514 DEBUG nova.compute.manager [req-6e65f6ea-c35a-4747-ae34-145891bcce32 req-ecbb328e-2d15-4b1f-b10a-71dc8fd62219 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Received event network-vif-unplugged-834fc672-a8af-4884-964c-481d0d8d318e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.420 248514 DEBUG oslo_concurrency.lockutils [req-6e65f6ea-c35a-4747-ae34-145891bcce32 req-ecbb328e-2d15-4b1f-b10a-71dc8fd62219 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.420 248514 DEBUG oslo_concurrency.lockutils [req-6e65f6ea-c35a-4747-ae34-145891bcce32 req-ecbb328e-2d15-4b1f-b10a-71dc8fd62219 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.420 248514 DEBUG oslo_concurrency.lockutils [req-6e65f6ea-c35a-4747-ae34-145891bcce32 req-ecbb328e-2d15-4b1f-b10a-71dc8fd62219 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.421 248514 DEBUG nova.compute.manager [req-6e65f6ea-c35a-4747-ae34-145891bcce32 req-ecbb328e-2d15-4b1f-b10a-71dc8fd62219 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] No waiting events found dispatching network-vif-unplugged-834fc672-a8af-4884-964c-481d0d8d318e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.421 248514 DEBUG nova.compute.manager [req-6e65f6ea-c35a-4747-ae34-145891bcce32 req-ecbb328e-2d15-4b1f-b10a-71dc8fd62219 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Received event network-vif-unplugged-834fc672-a8af-4884-964c-481d0d8d318e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.580 248514 DEBUG nova.storage.rbd_utils [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] cloning vms/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk@36c458ad47e44a7282d63c2a8d0dce5d to images/9644b262-cf46-483d-8fc3-333210320729 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:29:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:29:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/372351703' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.836 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.837 248514 DEBUG nova.virt.libvirt.vif [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:28:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1976093525',display_name='tempest-DeleteServersTestJSON-server-1976093525',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1976093525',id=56,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-r0pczy0s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServersTestJSO
N-991966373-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:55Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=917c2aaf-7701-4198-802a-0bfc5753885a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.837 248514 DEBUG nova.network.os_vif_util [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.838 248514 DEBUG nova.network.os_vif_util [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:54:48,bridge_name='br-int',has_traffic_filtering=True,id=a795a3d9-1388-4e72-8bfd-271816a45466,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa795a3d9-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.840 248514 DEBUG nova.objects.instance [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 917c2aaf-7701-4198-802a-0bfc5753885a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.862 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:29:02 np0005558241 nova_compute[248510]:  <uuid>917c2aaf-7701-4198-802a-0bfc5753885a</uuid>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:  <name>instance-00000038</name>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <nova:name>tempest-DeleteServersTestJSON-server-1976093525</nova:name>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:29:01</nova:creationTime>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:29:02 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:        <nova:user uuid="b6f7967ce16c45d394188c1302b02907">tempest-DeleteServersTestJSON-991966373-project-member</nova:user>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:        <nova:project uuid="6f6beadbb0244529b8dfc1abff8e8e10">tempest-DeleteServersTestJSON-991966373</nova:project>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:        <nova:port uuid="a795a3d9-1388-4e72-8bfd-271816a45466">
Dec 13 03:29:02 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <entry name="serial">917c2aaf-7701-4198-802a-0bfc5753885a</entry>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <entry name="uuid">917c2aaf-7701-4198-802a-0bfc5753885a</entry>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/917c2aaf-7701-4198-802a-0bfc5753885a_disk">
Dec 13 03:29:02 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:29:02 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/917c2aaf-7701-4198-802a-0bfc5753885a_disk.config">
Dec 13 03:29:02 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:29:02 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:a8:54:48"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <target dev="tapa795a3d9-13"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a/console.log" append="off"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:29:02 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:29:02 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:29:02 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:29:02 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.864 248514 DEBUG nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Preparing to wait for external event network-vif-plugged-a795a3d9-1388-4e72-8bfd-271816a45466 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.864 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.864 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.865 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.865 248514 DEBUG nova.virt.libvirt.vif [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:28:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1976093525',display_name='tempest-DeleteServersTestJSON-server-1976093525',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1976093525',id=56,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-r0pczy0s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServ
ersTestJSON-991966373-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:28:55Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=917c2aaf-7701-4198-802a-0bfc5753885a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.866 248514 DEBUG nova.network.os_vif_util [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.866 248514 DEBUG nova.network.os_vif_util [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:54:48,bridge_name='br-int',has_traffic_filtering=True,id=a795a3d9-1388-4e72-8bfd-271816a45466,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa795a3d9-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.867 248514 DEBUG os_vif [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:54:48,bridge_name='br-int',has_traffic_filtering=True,id=a795a3d9-1388-4e72-8bfd-271816a45466,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa795a3d9-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.867 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.868 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.868 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.871 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.872 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa795a3d9-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.872 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa795a3d9-13, col_values=(('external_ids', {'iface-id': 'a795a3d9-1388-4e72-8bfd-271816a45466', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:54:48', 'vm-uuid': '917c2aaf-7701-4198-802a-0bfc5753885a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.874 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:02 np0005558241 NetworkManager[50376]: <info>  [1765614542.8753] manager: (tapa795a3d9-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.876 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.880 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.882 248514 INFO os_vif [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:54:48,bridge_name='br-int',has_traffic_filtering=True,id=a795a3d9-1388-4e72-8bfd-271816a45466,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa795a3d9-13')#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.965 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.966 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.966 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] No VIF found with MAC fa:16:3e:a8:54:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.966 248514 INFO nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Using config drive#033[00m
Dec 13 03:29:02 np0005558241 nova_compute[248510]: 2025-12-13 08:29:02.987 248514 DEBUG nova.storage.rbd_utils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 917c2aaf-7701-4198-802a-0bfc5753885a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:03 np0005558241 nova_compute[248510]: 2025-12-13 08:29:03.240 248514 DEBUG nova.network.neutron [req-3a1043fd-d98b-4f3c-9c77-d9bc27ce515b req-2599b06d-5817-4e9e-be32-f4c1b9439dfb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Updated VIF entry in instance network info cache for port a795a3d9-1388-4e72-8bfd-271816a45466. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:29:03 np0005558241 nova_compute[248510]: 2025-12-13 08:29:03.240 248514 DEBUG nova.network.neutron [req-3a1043fd-d98b-4f3c-9c77-d9bc27ce515b req-2599b06d-5817-4e9e-be32-f4c1b9439dfb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Updating instance_info_cache with network_info: [{"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:29:03 np0005558241 nova_compute[248510]: 2025-12-13 08:29:03.261 248514 DEBUG oslo_concurrency.lockutils [req-3a1043fd-d98b-4f3c-9c77-d9bc27ce515b req-2599b06d-5817-4e9e-be32-f4c1b9439dfb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-917c2aaf-7701-4198-802a-0bfc5753885a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:29:03 np0005558241 nova_compute[248510]: 2025-12-13 08:29:03.349 248514 DEBUG nova.storage.rbd_utils [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] flattening images/9644b262-cf46-483d-8fc3-333210320729 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec 13 03:29:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1944: 321 pgs: 321 active+clean; 174 MiB data, 543 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Dec 13 03:29:03 np0005558241 nova_compute[248510]: 2025-12-13 08:29:03.727 248514 INFO nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Creating config drive at /var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a/disk.config#033[00m
Dec 13 03:29:03 np0005558241 nova_compute[248510]: 2025-12-13 08:29:03.733 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvzapdv_1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:03 np0005558241 nova_compute[248510]: 2025-12-13 08:29:03.884 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvzapdv_1" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.001 248514 DEBUG nova.storage.rbd_utils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] rbd image 917c2aaf-7701-4198-802a-0bfc5753885a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.010 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a/disk.config 917c2aaf-7701-4198-802a-0bfc5753885a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.626 248514 DEBUG nova.storage.rbd_utils [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] removing snapshot(36c458ad47e44a7282d63c2a8d0dce5d) on rbd image(9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.692 248514 INFO nova.virt.libvirt.driver [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Deleting instance files /var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392_del#033[00m
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.692 248514 INFO nova.virt.libvirt.driver [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Deletion of /var/lib/nova/instances/7139a479-b2fe-4d64-8061-97fceda2e392_del complete#033[00m
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.711 248514 DEBUG oslo_concurrency.processutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a/disk.config 917c2aaf-7701-4198-802a-0bfc5753885a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.701s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.711 248514 INFO nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Deleting local config drive /var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a/disk.config because it was imported into RBD.#033[00m
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.760 248514 INFO nova.compute.manager [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Took 5.78 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.761 248514 DEBUG oslo.service.loopingcall [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.762 248514 DEBUG nova.compute.manager [-] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.762 248514 DEBUG nova.network.neutron [-] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:29:04 np0005558241 kernel: tapa795a3d9-13: entered promiscuous mode
Dec 13 03:29:04 np0005558241 NetworkManager[50376]: <info>  [1765614544.7805] manager: (tapa795a3d9-13): new Tun device (/org/freedesktop/NetworkManager/Devices/230)
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.782 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:04Z|00518|binding|INFO|Claiming lport a795a3d9-1388-4e72-8bfd-271816a45466 for this chassis.
Dec 13 03:29:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:04Z|00519|binding|INFO|a795a3d9-1388-4e72-8bfd-271816a45466: Claiming fa:16:3e:a8:54:48 10.100.0.12
Dec 13 03:29:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.795 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:54:48 10.100.0.12'], port_security=['fa:16:3e:a8:54:48 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '917c2aaf-7701-4198-802a-0bfc5753885a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=a795a3d9-1388-4e72-8bfd-271816a45466) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:29:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.796 158419 INFO neutron.agent.ovn.metadata.agent [-] Port a795a3d9-1388-4e72-8bfd-271816a45466 in datapath 85372fca-ab50-48b6-8c21-507f630c205a bound to our chassis#033[00m
Dec 13 03:29:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.798 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85372fca-ab50-48b6-8c21-507f630c205a#033[00m
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.805 248514 DEBUG nova.compute.manager [req-4aa6793f-6a1c-4b5a-8906-85bbf86e77dd req-a4fdb38a-c906-4f3f-abbb-46b291cb1127 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Received event network-vif-plugged-834fc672-a8af-4884-964c-481d0d8d318e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.806 248514 DEBUG oslo_concurrency.lockutils [req-4aa6793f-6a1c-4b5a-8906-85bbf86e77dd req-a4fdb38a-c906-4f3f-abbb-46b291cb1127 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.806 248514 DEBUG oslo_concurrency.lockutils [req-4aa6793f-6a1c-4b5a-8906-85bbf86e77dd req-a4fdb38a-c906-4f3f-abbb-46b291cb1127 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.806 248514 DEBUG oslo_concurrency.lockutils [req-4aa6793f-6a1c-4b5a-8906-85bbf86e77dd req-a4fdb38a-c906-4f3f-abbb-46b291cb1127 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.807 248514 DEBUG nova.compute.manager [req-4aa6793f-6a1c-4b5a-8906-85bbf86e77dd req-a4fdb38a-c906-4f3f-abbb-46b291cb1127 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] No waiting events found dispatching network-vif-plugged-834fc672-a8af-4884-964c-481d0d8d318e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.807 248514 WARNING nova.compute.manager [req-4aa6793f-6a1c-4b5a-8906-85bbf86e77dd req-a4fdb38a-c906-4f3f-abbb-46b291cb1127 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Received unexpected event network-vif-plugged-834fc672-a8af-4884-964c-481d0d8d318e for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:29:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.811 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b13e68-239f-4dca-a25f-8e6239177f5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.813 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85372fca-a1 in ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:29:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.816 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85372fca-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:29:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.817 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4774e6d5-5345-46ad-89f8-4fcf9326b45b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.817 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c08108f4-039c-4e70-bb94-c3e6baecda6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:04 np0005558241 systemd-udevd[304969]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:29:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.835 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[27c3fd26-13f7-4cf9-8c81-1bfffd1c0772]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:04 np0005558241 NetworkManager[50376]: <info>  [1765614544.8501] device (tapa795a3d9-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:29:04 np0005558241 NetworkManager[50376]: <info>  [1765614544.8512] device (tapa795a3d9-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:29:04 np0005558241 systemd-machined[210538]: New machine qemu-64-instance-00000038.
Dec 13 03:29:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.866 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[77e27877-6b1e-4816-b2ca-73e5fe0e0000]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:04 np0005558241 systemd[1]: Started Virtual Machine qemu-64-instance-00000038.
Dec 13 03:29:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:04Z|00520|binding|INFO|Setting lport a795a3d9-1388-4e72-8bfd-271816a45466 ovn-installed in OVS
Dec 13 03:29:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:04Z|00521|binding|INFO|Setting lport a795a3d9-1388-4e72-8bfd-271816a45466 up in Southbound
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.882 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:04 np0005558241 nova_compute[248510]: 2025-12-13 08:29:04.887 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.906 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[89d6fba8-1df7-4c3f-80c4-5dd8778f921e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:04 np0005558241 NetworkManager[50376]: <info>  [1765614544.9129] manager: (tap85372fca-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/231)
Dec 13 03:29:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.912 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a398ba-f04f-409a-b935-c29b91a71521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.958 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0605a238-73a7-4cc0-9b5b-9baa00950033]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.962 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c7479ee4-df55-48b7-b820-231674c8bf93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:04 np0005558241 NetworkManager[50376]: <info>  [1765614544.9916] device (tap85372fca-a0): carrier: link connected
Dec 13 03:29:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:04.997 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[cd14b780-51e0-4b44-8b6e-7993cefc65a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.018 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[09e37613-0947-4ab5-aa0c-01063b6e30c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85372fca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:30:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712220, 'reachable_time': 30678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305001, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.033 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fb390305-f431-4536-b7f3-d644f3395718]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feab:30d0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712220, 'tstamp': 712220}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305002, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.058 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2523e835-deed-4111-882c-e48fb736753b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85372fca-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:30:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712220, 'reachable_time': 30678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305003, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Dec 13 03:29:05 np0005558241 nova_compute[248510]: 2025-12-13 08:29:05.065 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.098 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[75c7a013-f14e-42ff-867f-b223744a3855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Dec 13 03:29:05 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Dec 13 03:29:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.183 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[89278b6b-b773-4b18-855a-e750dfd9dbcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.185 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85372fca-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.185 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.186 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85372fca-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:05 np0005558241 kernel: tap85372fca-a0: entered promiscuous mode
Dec 13 03:29:05 np0005558241 nova_compute[248510]: 2025-12-13 08:29:05.189 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:05 np0005558241 NetworkManager[50376]: <info>  [1765614545.1908] manager: (tap85372fca-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.193 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85372fca-a0, col_values=(('external_ids', {'iface-id': '2c0f4981-0ad0-478e-b1ad-551d231022ad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:05 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:05Z|00522|binding|INFO|Releasing lport 2c0f4981-0ad0-478e-b1ad-551d231022ad from this chassis (sb_readonly=0)
Dec 13 03:29:05 np0005558241 nova_compute[248510]: 2025-12-13 08:29:05.194 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.197 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.199 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e600ccdf-04c4-4ac5-93f3-fbdc963fb3e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.202 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-85372fca-ab50-48b6-8c21-507f630c205a
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/85372fca-ab50-48b6-8c21-507f630c205a.pid.haproxy
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 85372fca-ab50-48b6-8c21-507f630c205a
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:29:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:05.204 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'env', 'PROCESS_TAG=haproxy-85372fca-ab50-48b6-8c21-507f630c205a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85372fca-ab50-48b6-8c21-507f630c205a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:29:05 np0005558241 nova_compute[248510]: 2025-12-13 08:29:05.212 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 138 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.2 MiB/s wr, 229 op/s
Dec 13 03:29:05 np0005558241 nova_compute[248510]: 2025-12-13 08:29:05.621 248514 DEBUG nova.storage.rbd_utils [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] creating snapshot(snap) on rbd image(9644b262-cf46-483d-8fc3-333210320729) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:29:05 np0005558241 podman[305053]: 2025-12-13 08:29:05.583413495 +0000 UTC m=+0.027201452 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:29:05 np0005558241 nova_compute[248510]: 2025-12-13 08:29:05.904 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614545.7925143, 917c2aaf-7701-4198-802a-0bfc5753885a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:05 np0005558241 nova_compute[248510]: 2025-12-13 08:29:05.904 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] VM Started (Lifecycle Event)#033[00m
Dec 13 03:29:05 np0005558241 nova_compute[248510]: 2025-12-13 08:29:05.928 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:05 np0005558241 nova_compute[248510]: 2025-12-13 08:29:05.933 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614545.7926009, 917c2aaf-7701-4198-802a-0bfc5753885a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:05 np0005558241 nova_compute[248510]: 2025-12-13 08:29:05.934 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:29:05 np0005558241 nova_compute[248510]: 2025-12-13 08:29:05.961 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:05 np0005558241 nova_compute[248510]: 2025-12-13 08:29:05.966 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:29:05 np0005558241 podman[305053]: 2025-12-13 08:29:05.992495389 +0000 UTC m=+0.436283316 container create c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.024 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:29:06 np0005558241 systemd[1]: Started libpod-conmon-c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec.scope.
Dec 13 03:29:06 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:29:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f712d337e457c36f21aba716ac7b7563dada02a2e75c25932692cea653917424/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:29:06 np0005558241 podman[305053]: 2025-12-13 08:29:06.179697858 +0000 UTC m=+0.623485805 container init c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:29:06 np0005558241 podman[305053]: 2025-12-13 08:29:06.187004738 +0000 UTC m=+0.630792655 container start c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:29:06 np0005558241 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[305110]: [NOTICE]   (305114) : New worker (305116) forked
Dec 13 03:29:06 np0005558241 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[305110]: [NOTICE]   (305114) : Loading success.
Dec 13 03:29:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Dec 13 03:29:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Dec 13 03:29:06 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 9644b262-cf46-483d-8fc3-333210320729 could not be found.
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     image = self._client.call(
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 9644b262-cf46-483d-8fc3-333210320729
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver 
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver 
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     image = self._client.call(
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 9644b262-cf46-483d-8fc3-333210320729 could not be found.
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.676 248514 ERROR nova.virt.libvirt.driver #033[00m
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.808 248514 DEBUG nova.storage.rbd_utils [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] removing snapshot(snap) on rbd image(9644b262-cf46-483d-8fc3-333210320729) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.959 248514 DEBUG nova.network.neutron [-] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:29:06 np0005558241 nova_compute[248510]: 2025-12-13 08:29:06.997 248514 INFO nova.compute.manager [-] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Took 2.23 seconds to deallocate network for instance.#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.052 248514 DEBUG oslo_concurrency.lockutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.053 248514 DEBUG oslo_concurrency.lockutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.189 248514 DEBUG oslo_concurrency.processutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.416 248514 DEBUG nova.compute.manager [req-d14016f8-bf22-44e2-bf34-9be11021870e req-a2879dfa-a1eb-4a27-aedd-d4284f8efcd3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Received event network-vif-plugged-a795a3d9-1388-4e72-8bfd-271816a45466 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.417 248514 DEBUG oslo_concurrency.lockutils [req-d14016f8-bf22-44e2-bf34-9be11021870e req-a2879dfa-a1eb-4a27-aedd-d4284f8efcd3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.418 248514 DEBUG oslo_concurrency.lockutils [req-d14016f8-bf22-44e2-bf34-9be11021870e req-a2879dfa-a1eb-4a27-aedd-d4284f8efcd3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.418 248514 DEBUG oslo_concurrency.lockutils [req-d14016f8-bf22-44e2-bf34-9be11021870e req-a2879dfa-a1eb-4a27-aedd-d4284f8efcd3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.418 248514 DEBUG nova.compute.manager [req-d14016f8-bf22-44e2-bf34-9be11021870e req-a2879dfa-a1eb-4a27-aedd-d4284f8efcd3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Processing event network-vif-plugged-a795a3d9-1388-4e72-8bfd-271816a45466 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.419 248514 DEBUG nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.423 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614547.423581, 917c2aaf-7701-4198-802a-0bfc5753885a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.424 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.427 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.433 248514 INFO nova.virt.libvirt.driver [-] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Instance spawned successfully.#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.434 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.453 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.462 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.463 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.464 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.465 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.466 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.466 248514 DEBUG nova.virt.libvirt.driver [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 138 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 120 op/s
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.472 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:29:07 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.517 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.550 248514 INFO nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Took 11.51 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.551 248514 DEBUG nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.634 248514 INFO nova.compute.manager [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Took 14.33 seconds to build instance.#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.652 248514 DEBUG oslo_concurrency.lockutils [None req-230e17fe-6b6f-4529-ac96-2e12b2044a8d b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:29:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3120351394' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.874 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.879 248514 DEBUG oslo_concurrency.processutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.690s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.887 248514 DEBUG nova.compute.provider_tree [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.906 248514 DEBUG nova.scheduler.client.report [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.928 248514 DEBUG oslo_concurrency.lockutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:07 np0005558241 nova_compute[248510]: 2025-12-13 08:29:07.958 248514 INFO nova.scheduler.client.report [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Deleted allocations for instance 7139a479-b2fe-4d64-8061-97fceda2e392#033[00m
Dec 13 03:29:08 np0005558241 nova_compute[248510]: 2025-12-13 08:29:08.045 248514 DEBUG oslo_concurrency.lockutils [None req-8e658834-3593-4f9a-82dc-cbeb9018d6df a5bc32e49dbd4372a006913090b9ef0f 9aea752cb9b648a7aa9b3f634ced797e - - default default] Lock "7139a479-b2fe-4d64-8061-97fceda2e392" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:08 np0005558241 nova_compute[248510]: 2025-12-13 08:29:08.553 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:08 np0005558241 nova_compute[248510]: 2025-12-13 08:29:08.553 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:08 np0005558241 nova_compute[248510]: 2025-12-13 08:29:08.582 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:29:08 np0005558241 nova_compute[248510]: 2025-12-13 08:29:08.595 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:08 np0005558241 nova_compute[248510]: 2025-12-13 08:29:08.595 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:08 np0005558241 nova_compute[248510]: 2025-12-13 08:29:08.624 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:29:08 np0005558241 nova_compute[248510]: 2025-12-13 08:29:08.656 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "1c940191-84c7-423e-901a-233b14c2acec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:08 np0005558241 nova_compute[248510]: 2025-12-13 08:29:08.657 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:08 np0005558241 nova_compute[248510]: 2025-12-13 08:29:08.680 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:29:08 np0005558241 nova_compute[248510]: 2025-12-13 08:29:08.683 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:08 np0005558241 nova_compute[248510]: 2025-12-13 08:29:08.683 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:08 np0005558241 nova_compute[248510]: 2025-12-13 08:29:08.690 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:29:08 np0005558241 nova_compute[248510]: 2025-12-13 08:29:08.690 248514 INFO nova.compute.claims [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:29:08 np0005558241 nova_compute[248510]: 2025-12-13 08:29:08.711 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:08 np0005558241 nova_compute[248510]: 2025-12-13 08:29:08.779 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:08 np0005558241 nova_compute[248510]: 2025-12-13 08:29:08.911 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:29:09
Dec 13 03:29:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:29:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:29:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'images', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms']
Dec 13 03:29:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:29:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1950: 321 pgs: 321 active+clean; 202 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 7.6 MiB/s wr, 368 op/s
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1981163263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:29:09 np0005558241 nova_compute[248510]: 2025-12-13 08:29:09.497 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:09 np0005558241 nova_compute[248510]: 2025-12-13 08:29:09.504 248514 DEBUG nova.compute.provider_tree [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:29:09 np0005558241 nova_compute[248510]: 2025-12-13 08:29:09.581 248514 DEBUG nova.scheduler.client.report [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:29:09 np0005558241 nova_compute[248510]: 2025-12-13 08:29:09.618 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:09 np0005558241 nova_compute[248510]: 2025-12-13 08:29:09.619 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:29:09 np0005558241 nova_compute[248510]: 2025-12-13 08:29:09.622 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:09 np0005558241 nova_compute[248510]: 2025-12-13 08:29:09.628 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:29:09 np0005558241 nova_compute[248510]: 2025-12-13 08:29:09.628 248514 INFO nova.compute.claims [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:29:09 np0005558241 nova_compute[248510]: 2025-12-13 08:29:09.797 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:29:09 np0005558241 nova_compute[248510]: 2025-12-13 08:29:09.797 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:29:09 np0005558241 nova_compute[248510]: 2025-12-13 08:29:09.835 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:29:09 np0005558241 nova_compute[248510]: 2025-12-13 08:29:09.840 248514 WARNING nova.compute.manager [None req-56edf306-3552-43ba-b7c8-394e584ecdfc 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Image not found during snapshot: nova.exception.ImageNotFound: Image 9644b262-cf46-483d-8fc3-333210320729 could not be found.#033[00m
Dec 13 03:29:09 np0005558241 nova_compute[248510]: 2025-12-13 08:29:09.872 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:09.899136) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614549899192, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2225, "num_deletes": 259, "total_data_size": 3448339, "memory_usage": 3507744, "flush_reason": "Manual Compaction"}
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614549962906, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 3350819, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35442, "largest_seqno": 37666, "table_properties": {"data_size": 3340685, "index_size": 6436, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21770, "raw_average_key_size": 20, "raw_value_size": 3320063, "raw_average_value_size": 3186, "num_data_blocks": 281, "num_entries": 1042, "num_filter_entries": 1042, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765614368, "oldest_key_time": 1765614368, "file_creation_time": 1765614549, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 63858 microseconds, and 7962 cpu microseconds.
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:09.962985) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 3350819 bytes OK
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:09.963019) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:09.986973) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:09.987022) EVENT_LOG_v1 {"time_micros": 1765614549987012, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:09.987056) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3438841, prev total WAL file size 3438841, number of live WAL files 2.
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:09.988286) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(3272KB)], [80(7400KB)]
Dec 13 03:29:09 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614549988419, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 10928777, "oldest_snapshot_seqno": -1}
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6225 keys, 9110588 bytes, temperature: kUnknown
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614550083190, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 9110588, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9069164, "index_size": 24738, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 157766, "raw_average_key_size": 25, "raw_value_size": 8957597, "raw_average_value_size": 1438, "num_data_blocks": 1001, "num_entries": 6225, "num_filter_entries": 6225, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765614549, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.085 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.088 248514 DEBUG nova.policy [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b4cd12d84d34c95ac78f304b6e7546d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f0a98b431c940d98cf7e91fd7bdea03', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.094 248514 DEBUG nova.compute.manager [req-11ebb268-f874-4eea-aec0-e31e3b5bd31a req-2e077443-9161-4afe-8156-5f9372ee9d32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Received event network-vif-deleted-834fc672-a8af-4884-964c-481d0d8d318e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.095 248514 DEBUG nova.compute.manager [req-11ebb268-f874-4eea-aec0-e31e3b5bd31a req-2e077443-9161-4afe-8156-5f9372ee9d32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Received event network-vif-plugged-a795a3d9-1388-4e72-8bfd-271816a45466 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.096 248514 DEBUG oslo_concurrency.lockutils [req-11ebb268-f874-4eea-aec0-e31e3b5bd31a req-2e077443-9161-4afe-8156-5f9372ee9d32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.096 248514 DEBUG oslo_concurrency.lockutils [req-11ebb268-f874-4eea-aec0-e31e3b5bd31a req-2e077443-9161-4afe-8156-5f9372ee9d32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.096 248514 DEBUG oslo_concurrency.lockutils [req-11ebb268-f874-4eea-aec0-e31e3b5bd31a req-2e077443-9161-4afe-8156-5f9372ee9d32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.097 248514 DEBUG nova.compute.manager [req-11ebb268-f874-4eea-aec0-e31e3b5bd31a req-2e077443-9161-4afe-8156-5f9372ee9d32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] No waiting events found dispatching network-vif-plugged-a795a3d9-1388-4e72-8bfd-271816a45466 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.097 248514 WARNING nova.compute.manager [req-11ebb268-f874-4eea-aec0-e31e3b5bd31a req-2e077443-9161-4afe-8156-5f9372ee9d32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Received unexpected event network-vif-plugged-a795a3d9-1388-4e72-8bfd-271816a45466 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:10.083496) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 9110588 bytes
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:10.103997) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.2 rd, 96.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.2 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 6753, records dropped: 528 output_compression: NoCompression
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:10.104045) EVENT_LOG_v1 {"time_micros": 1765614550104028, "job": 46, "event": "compaction_finished", "compaction_time_micros": 94860, "compaction_time_cpu_micros": 24018, "output_level": 6, "num_output_files": 1, "total_output_size": 9110588, "num_input_records": 6753, "num_output_records": 6225, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614550104834, "job": 46, "event": "table_file_deletion", "file_number": 82}
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614550106033, "job": 46, "event": "table_file_deletion", "file_number": 80}
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:09.988156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:10.106102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:10.106107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:10.106109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:10.106111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:29:10.106113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:29:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:29:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:29:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:29:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:29:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:29:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.176 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.178 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.178 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Creating image(s)#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.247 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.274 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:10 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.375 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:10Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:f5:51 10.100.0.11
Dec 13 03:29:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:10Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:f5:51 10.100.0.11
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.384 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.427 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.474 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.475 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.475 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.476 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.501 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.507 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.561 248514 DEBUG nova.objects.instance [None req-e4edd0ab-9994-4934-b0b1-653c32d39a66 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 917c2aaf-7701-4198-802a-0bfc5753885a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.590 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614550.5899262, 917c2aaf-7701-4198-802a-0bfc5753885a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.590 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:29:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:29:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:29:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:29:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:29:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:29:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:29:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:29:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:29:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:29:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.615 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.627 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:29:10 np0005558241 nova_compute[248510]: 2025-12-13 08:29:10.651 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.314 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Successfully created port: c0518450-5f7c-4fa0-bf72-59ebcb5be073 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:29:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 202 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 6.9 MiB/s wr, 287 op/s
Dec 13 03:29:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:29:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1657075699' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.613 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.622 248514 DEBUG nova.compute.provider_tree [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.644 248514 DEBUG nova.scheduler.client.report [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.688 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.689 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.693 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 2.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.700 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.701 248514 INFO nova.compute.claims [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.798 248514 DEBUG oslo_concurrency.lockutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.799 248514 DEBUG oslo_concurrency.lockutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.799 248514 DEBUG oslo_concurrency.lockutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.799 248514 DEBUG oslo_concurrency.lockutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.800 248514 DEBUG oslo_concurrency.lockutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.801 248514 INFO nova.compute.manager [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Terminating instance#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.802 248514 DEBUG nova.compute.manager [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.821 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.822 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.847 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.873 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.984 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.986 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:29:11 np0005558241 nova_compute[248510]: 2025-12-13 08:29:11.986 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Creating image(s)#033[00m
Dec 13 03:29:12 np0005558241 kernel: tapa795a3d9-13 (unregistering): left promiscuous mode
Dec 13 03:29:12 np0005558241 NetworkManager[50376]: <info>  [1765614552.4312] device (tapa795a3d9-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:29:12 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:12Z|00523|binding|INFO|Releasing lport a795a3d9-1388-4e72-8bfd-271816a45466 from this chassis (sb_readonly=0)
Dec 13 03:29:12 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:12Z|00524|binding|INFO|Setting lport a795a3d9-1388-4e72-8bfd-271816a45466 down in Southbound
Dec 13 03:29:12 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:12Z|00525|binding|INFO|Removing iface tapa795a3d9-13 ovn-installed in OVS
Dec 13 03:29:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:12.446 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:54:48 10.100.0.12'], port_security=['fa:16:3e:a8:54:48 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '917c2aaf-7701-4198-802a-0bfc5753885a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85372fca-ab50-48b6-8c21-507f630c205a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6f6beadbb0244529b8dfc1abff8e8e10', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0f5a41f-13ce-48ad-8695-24ce6a7978e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c6768d3-c890-4dda-a7eb-31dc35c760aa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=a795a3d9-1388-4e72-8bfd-271816a45466) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:29:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:12.448 158419 INFO neutron.agent.ovn.metadata.agent [-] Port a795a3d9-1388-4e72-8bfd-271816a45466 in datapath 85372fca-ab50-48b6-8c21-507f630c205a unbound from our chassis#033[00m
Dec 13 03:29:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:12.450 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85372fca-ab50-48b6-8c21-507f630c205a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:29:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:12.452 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[07867776-e3f0-41aa-89c3-ccb7f7ee635a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:12.452 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a namespace which is not needed anymore#033[00m
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.476 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:12 np0005558241 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000038.scope: Deactivated successfully.
Dec 13 03:29:12 np0005558241 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000038.scope: Consumed 3.717s CPU time.
Dec 13 03:29:12 np0005558241 systemd-machined[210538]: Machine qemu-64-instance-00000038 terminated.
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.511 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.537 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.543 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.596 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.648 248514 DEBUG nova.policy [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b4cd12d84d34c95ac78f304b6e7546d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f0a98b431c940d98cf7e91fd7bdea03', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.652 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.658 248514 DEBUG nova.compute.manager [None req-e4edd0ab-9994-4934-b0b1-653c32d39a66 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.659 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.659 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.660 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.660 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.685 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.692 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:12 np0005558241 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[305110]: [NOTICE]   (305114) : haproxy version is 2.8.14-c23fe91
Dec 13 03:29:12 np0005558241 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[305110]: [NOTICE]   (305114) : path to executable is /usr/sbin/haproxy
Dec 13 03:29:12 np0005558241 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[305110]: [ALERT]    (305114) : Current worker (305116) exited with code 143 (Terminated)
Dec 13 03:29:12 np0005558241 neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a[305110]: [WARNING]  (305114) : All workers exited. Exiting... (0)
Dec 13 03:29:12 np0005558241 systemd[1]: libpod-c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec.scope: Deactivated successfully.
Dec 13 03:29:12 np0005558241 podman[305406]: 2025-12-13 08:29:12.796315573 +0000 UTC m=+0.235394999 container died c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.861 248514 DEBUG nova.compute.manager [req-fc142b7e-779d-4bb1-ad47-5c21c26ac543 req-77f6fb15-9628-4ae2-818c-641ef66b24c8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Received event network-vif-unplugged-a795a3d9-1388-4e72-8bfd-271816a45466 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.863 248514 DEBUG oslo_concurrency.lockutils [req-fc142b7e-779d-4bb1-ad47-5c21c26ac543 req-77f6fb15-9628-4ae2-818c-641ef66b24c8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.863 248514 DEBUG oslo_concurrency.lockutils [req-fc142b7e-779d-4bb1-ad47-5c21c26ac543 req-77f6fb15-9628-4ae2-818c-641ef66b24c8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.863 248514 DEBUG oslo_concurrency.lockutils [req-fc142b7e-779d-4bb1-ad47-5c21c26ac543 req-77f6fb15-9628-4ae2-818c-641ef66b24c8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.863 248514 DEBUG nova.compute.manager [req-fc142b7e-779d-4bb1-ad47-5c21c26ac543 req-77f6fb15-9628-4ae2-818c-641ef66b24c8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] No waiting events found dispatching network-vif-unplugged-a795a3d9-1388-4e72-8bfd-271816a45466 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.864 248514 WARNING nova.compute.manager [req-fc142b7e-779d-4bb1-ad47-5c21c26ac543 req-77f6fb15-9628-4ae2-818c-641ef66b24c8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Received unexpected event network-vif-unplugged-a795a3d9-1388-4e72-8bfd-271816a45466 for instance with vm_state suspended and task_state None.#033[00m
Dec 13 03:29:12 np0005558241 nova_compute[248510]: 2025-12-13 08:29:12.876 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:29:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315320300' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.200 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.207 248514 DEBUG nova.compute.provider_tree [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.227 248514 DEBUG nova.scheduler.client.report [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.256 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.257 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.278 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Successfully created port: dd059dc2-8c1a-49c4-b820-7cf31293c210 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.291 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Successfully updated port: c0518450-5f7c-4fa0-bf72-59ebcb5be073 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.316 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.317 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.319 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "refresh_cache-b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.319 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquired lock "refresh_cache-b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.319 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.337 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:29:13 np0005558241 kernel: tap3aadc575-dc (unregistering): left promiscuous mode
Dec 13 03:29:13 np0005558241 NetworkManager[50376]: <info>  [1765614553.3757] device (tap3aadc575-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.375 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:29:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:13Z|00526|binding|INFO|Releasing lport 3aadc575-dc9f-4823-82c7-112e9b9832fe from this chassis (sb_readonly=0)
Dec 13 03:29:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:13Z|00527|binding|INFO|Setting lport 3aadc575-dc9f-4823-82c7-112e9b9832fe down in Southbound
Dec 13 03:29:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:13Z|00528|binding|INFO|Removing iface tap3aadc575-dc ovn-installed in OVS
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.383 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:13.395 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:f5:51 10.100.0.11'], port_security=['fa:16:3e:dd:f5:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9b45ce75-4bd3-4cc0-a772-2474ffc2cd52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52e1055963294dbdb16cd95b466cd4d9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '269e7310-e268-4e11-a2e8-e4c22118637e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99e34124-e040-4994-aa81-ced5b49550b0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3aadc575-dc9f-4823-82c7-112e9b9832fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.461 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 195 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 5.9 MiB/s wr, 309 op/s
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.504 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.507 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.508 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Creating image(s)#033[00m
Dec 13 03:29:13 np0005558241 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000037.scope: Deactivated successfully.
Dec 13 03:29:13 np0005558241 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000037.scope: Consumed 12.744s CPU time.
Dec 13 03:29:13 np0005558241 systemd-machined[210538]: Machine qemu-63-instance-00000037 terminated.
Dec 13 03:29:13 np0005558241 NetworkManager[50376]: <info>  [1765614553.6280] manager: (tap3aadc575-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/233)
Dec 13 03:29:13 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec-userdata-shm.mount: Deactivated successfully.
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.772 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image 1c940191-84c7-423e-901a-233b14c2acec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:13 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f712d337e457c36f21aba716ac7b7563dada02a2e75c25932692cea653917424-merged.mount: Deactivated successfully.
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.802 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image 1c940191-84c7-423e-901a-233b14c2acec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.825 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image 1c940191-84c7-423e-901a-233b14c2acec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.829 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.873 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.877 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.884 248514 DEBUG nova.policy [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b4cd12d84d34c95ac78f304b6e7546d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f0a98b431c940d98cf7e91fd7bdea03', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.891 248514 INFO nova.virt.libvirt.driver [-] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Instance destroyed successfully.#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.891 248514 DEBUG nova.objects.instance [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lazy-loading 'resources' on Instance uuid 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.923 248514 DEBUG nova.virt.libvirt.vif [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1587589866',display_name='tempest-ImagesTestJSON-server-1587589866',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1587589866',id=55,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:28:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='52e1055963294dbdb16cd95b466cd4d9',ramdisk_id='',reservation_id='r-uay75zc1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1234382421',owner_user_name='tempest-ImagesTestJSON-1234382421-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:29:09Z,user_data=None,user_id='3b988c7ac9354c59aac9a9f41f83c20f',uuid=9b45ce75-4bd3-4cc0-a772-2474ffc2cd52,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.923 248514 DEBUG nova.network.os_vif_util [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converting VIF {"id": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "address": "fa:16:3e:dd:f5:51", "network": {"id": "87bd91d0-eead-49b6-8f92-f8d0dba555dc", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1542442766-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52e1055963294dbdb16cd95b466cd4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aadc575-dc", "ovs_interfaceid": "3aadc575-dc9f-4823-82c7-112e9b9832fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.924 248514 DEBUG nova.network.os_vif_util [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f5:51,bridge_name='br-int',has_traffic_filtering=True,id=3aadc575-dc9f-4823-82c7-112e9b9832fe,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aadc575-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.924 248514 DEBUG os_vif [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f5:51,bridge_name='br-int',has_traffic_filtering=True,id=3aadc575-dc9f-4823-82c7-112e9b9832fe,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aadc575-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.926 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.926 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3aadc575-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.928 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.930 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.931 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.932 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.932 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:13 np0005558241 nova_compute[248510]: 2025-12-13 08:29:13.933 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:14Z|00529|binding|INFO|Releasing lport 2c0f4981-0ad0-478e-b1ad-551d231022ad from this chassis (sb_readonly=0)
Dec 13 03:29:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:14Z|00530|binding|INFO|Releasing lport 3e59c36d-12c7-4d92-aa2f-8b6a795f3f98 from this chassis (sb_readonly=0)
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.661 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image 1c940191-84c7-423e-901a-233b14c2acec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:14 np0005558241 podman[305406]: 2025-12-13 08:29:14.665706418 +0000 UTC m=+2.104785844 container cleanup c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.667 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1c940191-84c7-423e-901a-233b14c2acec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:14 np0005558241 systemd[1]: libpod-conmon-c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec.scope: Deactivated successfully.
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.717 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Successfully updated port: dd059dc2-8c1a-49c4-b820-7cf31293c210 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.721 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.754 248514 DEBUG nova.compute.manager [req-df96fedf-c2f3-4624-ab42-4cf45f369521 req-4e4cd242-d004-4f94-9600-5c3f39e4d959 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Received event network-changed-dd059dc2-8c1a-49c4-b820-7cf31293c210 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.755 248514 DEBUG nova.compute.manager [req-df96fedf-c2f3-4624-ab42-4cf45f369521 req-4e4cd242-d004-4f94-9600-5c3f39e4d959 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Refreshing instance network info cache due to event network-changed-dd059dc2-8c1a-49c4-b820-7cf31293c210. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.756 248514 DEBUG oslo_concurrency.lockutils [req-df96fedf-c2f3-4624-ab42-4cf45f369521 req-4e4cd242-d004-4f94-9600-5c3f39e4d959 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-b9e0d5ab-483f-49a1-901a-c36f31ab710f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.756 248514 DEBUG oslo_concurrency.lockutils [req-df96fedf-c2f3-4624-ab42-4cf45f369521 req-4e4cd242-d004-4f94-9600-5c3f39e4d959 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-b9e0d5ab-483f-49a1-901a-c36f31ab710f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.757 248514 DEBUG nova.network.neutron [req-df96fedf-c2f3-4624-ab42-4cf45f369521 req-4e4cd242-d004-4f94-9600-5c3f39e4d959 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Refreshing network info cache for port dd059dc2-8c1a-49c4-b820-7cf31293c210 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.761 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "refresh_cache-b9e0d5ab-483f-49a1-901a-c36f31ab710f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.765 248514 INFO os_vif [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:f5:51,bridge_name='br-int',has_traffic_filtering=True,id=3aadc575-dc9f-4823-82c7-112e9b9832fe,network=Network(87bd91d0-eead-49b6-8f92-f8d0dba555dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aadc575-dc')#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.951 248514 DEBUG oslo_concurrency.lockutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "917c2aaf-7701-4198-802a-0bfc5753885a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.952 248514 DEBUG oslo_concurrency.lockutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.953 248514 DEBUG oslo_concurrency.lockutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.953 248514 DEBUG oslo_concurrency.lockutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.954 248514 DEBUG oslo_concurrency.lockutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.955 248514 INFO nova.compute.manager [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Terminating instance#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.957 248514 DEBUG nova.compute.manager [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.970 248514 INFO nova.virt.libvirt.driver [-] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Instance destroyed successfully.#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.971 248514 DEBUG nova.objects.instance [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lazy-loading 'resources' on Instance uuid 917c2aaf-7701-4198-802a-0bfc5753885a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.987 248514 DEBUG nova.virt.libvirt.vif [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:28:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1976093525',display_name='tempest-DeleteServersTestJSON-server-1976093525',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1976093525',id=56,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6f6beadbb0244529b8dfc1abff8e8e10',ramdisk_id='',reservation_id='r-r0pczy0s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-991966373',owner_user_name='tempest-DeleteServersTestJSON-991966373-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:29:12Z,user_data=None,user_id='b6f7967ce16c45d394188c1302b02907',uuid=917c2aaf-7701-4198-802a-0bfc5753885a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.988 248514 DEBUG nova.network.os_vif_util [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converting VIF {"id": "a795a3d9-1388-4e72-8bfd-271816a45466", "address": "fa:16:3e:a8:54:48", "network": {"id": "85372fca-ab50-48b6-8c21-507f630c205a", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1036444300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6f6beadbb0244529b8dfc1abff8e8e10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa795a3d9-13", "ovs_interfaceid": "a795a3d9-1388-4e72-8bfd-271816a45466", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.989 248514 DEBUG nova.network.os_vif_util [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:54:48,bridge_name='br-int',has_traffic_filtering=True,id=a795a3d9-1388-4e72-8bfd-271816a45466,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa795a3d9-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.989 248514 DEBUG os_vif [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:54:48,bridge_name='br-int',has_traffic_filtering=True,id=a795a3d9-1388-4e72-8bfd-271816a45466,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa795a3d9-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.991 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.991 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa795a3d9-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.993 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:14 np0005558241 nova_compute[248510]: 2025-12-13 08:29:14.998 248514 INFO os_vif [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:54:48,bridge_name='br-int',has_traffic_filtering=True,id=a795a3d9-1388-4e72-8bfd-271816a45466,network=Network(85372fca-ab50-48b6-8c21-507f630c205a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa795a3d9-13')#033[00m
Dec 13 03:29:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:29:15 np0005558241 nova_compute[248510]: 2025-12-13 08:29:15.380 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:15 np0005558241 nova_compute[248510]: 2025-12-13 08:29:15.382 248514 DEBUG nova.network.neutron [req-df96fedf-c2f3-4624-ab42-4cf45f369521 req-4e4cd242-d004-4f94-9600-5c3f39e4d959 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:29:15 np0005558241 nova_compute[248510]: 2025-12-13 08:29:15.385 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Successfully created port: 41420fc6-e900-4745-a3c1-4f2541c9e1f5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:29:15 np0005558241 nova_compute[248510]: 2025-12-13 08:29:15.387 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614540.2147691, 7139a479-b2fe-4d64-8061-97fceda2e392 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:15 np0005558241 nova_compute[248510]: 2025-12-13 08:29:15.388 248514 INFO nova.compute.manager [-] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:29:15 np0005558241 nova_compute[248510]: 2025-12-13 08:29:15.391 248514 DEBUG nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Received event network-vif-plugged-a795a3d9-1388-4e72-8bfd-271816a45466 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:15 np0005558241 nova_compute[248510]: 2025-12-13 08:29:15.391 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:15 np0005558241 nova_compute[248510]: 2025-12-13 08:29:15.392 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:15 np0005558241 nova_compute[248510]: 2025-12-13 08:29:15.392 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:15 np0005558241 nova_compute[248510]: 2025-12-13 08:29:15.392 248514 DEBUG nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] No waiting events found dispatching network-vif-plugged-a795a3d9-1388-4e72-8bfd-271816a45466 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:29:15 np0005558241 nova_compute[248510]: 2025-12-13 08:29:15.392 248514 WARNING nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Received unexpected event network-vif-plugged-a795a3d9-1388-4e72-8bfd-271816a45466 for instance with vm_state suspended and task_state deleting.#033[00m
Dec 13 03:29:15 np0005558241 nova_compute[248510]: 2025-12-13 08:29:15.393 248514 DEBUG nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Received event network-changed-c0518450-5f7c-4fa0-bf72-59ebcb5be073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:15 np0005558241 nova_compute[248510]: 2025-12-13 08:29:15.393 248514 DEBUG nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Refreshing instance network info cache due to event network-changed-c0518450-5f7c-4fa0-bf72-59ebcb5be073. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:29:15 np0005558241 nova_compute[248510]: 2025-12-13 08:29:15.393 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:29:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:29:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2938618048' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:29:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:29:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2938618048' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:29:15 np0005558241 nova_compute[248510]: 2025-12-13 08:29:15.417 248514 DEBUG nova.compute.manager [None req-2b2d81da-9b5e-4b48-9c04-0ac1a6b52694 - - - - - -] [instance: 7139a479-b2fe-4d64-8061-97fceda2e392] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 169 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 5.5 MiB/s wr, 294 op/s
Dec 13 03:29:15 np0005558241 podman[305602]: 2025-12-13 08:29:15.559537911 +0000 UTC m=+0.861410904 container remove c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:29:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.569 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4814e6bd-44f0-4949-a3cf-816570c8468b]: (4, ('Sat Dec 13 08:29:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a (c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec)\nc90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec\nSat Dec 13 08:29:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a (c90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec)\nc90628ec1be584b80b9d88e4034c46a59f3fff509ac2439f1098a1f67e06eaec\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.572 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[45ad0200-839d-43e2-a5fc-22a93d186b4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.573 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85372fca-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:15 np0005558241 kernel: tap85372fca-a0: left promiscuous mode
Dec 13 03:29:15 np0005558241 nova_compute[248510]: 2025-12-13 08:29:15.576 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:15 np0005558241 nova_compute[248510]: 2025-12-13 08:29:15.697 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.700 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[42c8f875-4417-490d-bcb9-f5890a56833e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.722 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8fac2e5f-c749-4bfc-b7bf-a5e7413406dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.724 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[55413d1a-df59-4fed-ba44-71dc05c9bb30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.747 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[850c5860-97b5-4704-b269-7731e6782412]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712211, 'reachable_time': 30884, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305673, 'error': None, 'target': 'ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:15 np0005558241 systemd[1]: run-netns-ovnmeta\x2d85372fca\x2dab50\x2d48b6\x2d8c21\x2d507f630c205a.mount: Deactivated successfully.
Dec 13 03:29:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.752 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85372fca-ab50-48b6-8c21-507f630c205a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:29:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.752 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[aaca91f0-990e-4979-9334-e1b9d2aef6af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.753 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3aadc575-dc9f-4823-82c7-112e9b9832fe in datapath 87bd91d0-eead-49b6-8f92-f8d0dba555dc unbound from our chassis#033[00m
Dec 13 03:29:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.754 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 87bd91d0-eead-49b6-8f92-f8d0dba555dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:29:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.755 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ea825946-8ad5-499d-b9bf-22b3b2808612]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:15.755 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc namespace which is not needed anymore#033[00m
Dec 13 03:29:15 np0005558241 nova_compute[248510]: 2025-12-13 08:29:15.821 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:16 np0005558241 nova_compute[248510]: 2025-12-13 08:29:16.050 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] resizing rbd image b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:29:16 np0005558241 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[304427]: [NOTICE]   (304431) : haproxy version is 2.8.14-c23fe91
Dec 13 03:29:16 np0005558241 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[304427]: [NOTICE]   (304431) : path to executable is /usr/sbin/haproxy
Dec 13 03:29:16 np0005558241 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[304427]: [WARNING]  (304431) : Exiting Master process...
Dec 13 03:29:16 np0005558241 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[304427]: [ALERT]    (304431) : Current worker (304433) exited with code 143 (Terminated)
Dec 13 03:29:16 np0005558241 neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc[304427]: [WARNING]  (304431) : All workers exited. Exiting... (0)
Dec 13 03:29:16 np0005558241 systemd[1]: libpod-2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01.scope: Deactivated successfully.
Dec 13 03:29:16 np0005558241 podman[305713]: 2025-12-13 08:29:16.106294492 +0000 UTC m=+0.245785976 container died 2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 03:29:16 np0005558241 nova_compute[248510]: 2025-12-13 08:29:16.364 248514 DEBUG nova.network.neutron [req-df96fedf-c2f3-4624-ab42-4cf45f369521 req-4e4cd242-d004-4f94-9600-5c3f39e4d959 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:29:16 np0005558241 nova_compute[248510]: 2025-12-13 08:29:16.434 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Updating instance_info_cache with network_info: [{"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:29:16 np0005558241 nova_compute[248510]: 2025-12-13 08:29:16.480 248514 DEBUG oslo_concurrency.lockutils [req-df96fedf-c2f3-4624-ab42-4cf45f369521 req-4e4cd242-d004-4f94-9600-5c3f39e4d959 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-b9e0d5ab-483f-49a1-901a-c36f31ab710f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:29:16 np0005558241 nova_compute[248510]: 2025-12-13 08:29:16.483 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquired lock "refresh_cache-b9e0d5ab-483f-49a1-901a-c36f31ab710f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:29:16 np0005558241 nova_compute[248510]: 2025-12-13 08:29:16.483 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:29:16 np0005558241 nova_compute[248510]: 2025-12-13 08:29:16.488 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Releasing lock "refresh_cache-b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:29:16 np0005558241 nova_compute[248510]: 2025-12-13 08:29:16.488 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Instance network_info: |[{"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:29:16 np0005558241 nova_compute[248510]: 2025-12-13 08:29:16.490 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:29:16 np0005558241 nova_compute[248510]: 2025-12-13 08:29:16.491 248514 DEBUG nova.network.neutron [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Refreshing network info cache for port c0518450-5f7c-4fa0-bf72-59ebcb5be073 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:29:16 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01-userdata-shm.mount: Deactivated successfully.
Dec 13 03:29:16 np0005558241 systemd[1]: var-lib-containers-storage-overlay-80d06258ddb97e0e16af3f5236f13273e2b5d49c570ee6de133888013ff7c7f8-merged.mount: Deactivated successfully.
Dec 13 03:29:16 np0005558241 podman[305713]: 2025-12-13 08:29:16.835821151 +0000 UTC m=+0.975312635 container cleanup 2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 03:29:16 np0005558241 systemd[1]: libpod-conmon-2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01.scope: Deactivated successfully.
Dec 13 03:29:16 np0005558241 nova_compute[248510]: 2025-12-13 08:29:16.851 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:29:16 np0005558241 nova_compute[248510]: 2025-12-13 08:29:16.863 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:16 np0005558241 nova_compute[248510]: 2025-12-13 08:29:16.941 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] resizing rbd image b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.086 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1c940191-84c7-423e-901a-233b14c2acec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.114 248514 DEBUG nova.objects.instance [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lazy-loading 'migration_context' on Instance uuid b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.142 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.143 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Ensure instance console log exists: /var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.143 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.144 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.144 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.147 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Start _get_guest_xml network_info=[{"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.153 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] resizing rbd image 1c940191-84c7-423e-901a-233b14c2acec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:29:17 np0005558241 podman[305783]: 2025-12-13 08:29:17.157220322 +0000 UTC m=+0.290834647 container remove 2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 03:29:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:17.166 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[590268c6-adc4-4874-8e47-0c5fc8b18f94]: (4, ('Sat Dec 13 08:29:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc (2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01)\n2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01\nSat Dec 13 08:29:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc (2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01)\n2f6b750dba1c3365253c299628182c3defe511160f15984b9ab8d3a2e4d41e01\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:17.169 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ed684bb9-2926-4c89-9eaf-ed12f358cca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:17.170 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87bd91d0-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:17 np0005558241 kernel: tap87bd91d0-e0: left promiscuous mode
Dec 13 03:29:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:17.192 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6f40dbb4-5bb4-475b-b541-7c619785edb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:17.216 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[84a7e03a-6909-4c6d-9989-96b635f39284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:17.218 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6e16b1d0-d4a9-4185-950f-01befed22711]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.227 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.231 248514 WARNING nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.238 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:29:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:17.237 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e9dfba-2dad-4142-8f01-14fd96188e51]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711124, 'reachable_time': 33967, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305925, 'error': None, 'target': 'ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.238 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:29:17 np0005558241 systemd[1]: run-netns-ovnmeta\x2d87bd91d0\x2deead\x2d49b6\x2d8f92\x2df8d0dba555dc.mount: Deactivated successfully.
Dec 13 03:29:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:17.240 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-87bd91d0-eead-49b6-8f92-f8d0dba555dc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:29:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:17.241 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[d95a3bff-f947-46ff-85fb-89434a7d3db0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.242 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.242 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.243 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.243 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.244 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.244 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.244 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.244 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.244 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.245 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.245 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.245 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.245 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.246 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.249 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.320 248514 DEBUG nova.objects.instance [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lazy-loading 'migration_context' on Instance uuid b9e0d5ab-483f-49a1-901a-c36f31ab710f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.373 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.374 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Ensure instance console log exists: /var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.375 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.376 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.376 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 169 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.4 MiB/s wr, 236 op/s
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.535 248514 DEBUG nova.objects.instance [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lazy-loading 'migration_context' on Instance uuid 1c940191-84c7-423e-901a-233b14c2acec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.556 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.556 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Ensure instance console log exists: /var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.557 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.557 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.557 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:29:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/322013128' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.829 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.861 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.866 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.906 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Successfully updated port: 41420fc6-e900-4745-a3c1-4f2541c9e1f5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.939 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "refresh_cache-1c940191-84c7-423e-901a-233b14c2acec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.939 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquired lock "refresh_cache-1c940191-84c7-423e-901a-233b14c2acec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.939 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.984 248514 DEBUG nova.compute.manager [req-7e4673d5-7b03-4144-939d-c8a27b41483d req-f62dc1c7-ad71-48ec-9b91-f749c5ffd768 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Received event network-changed-41420fc6-e900-4745-a3c1-4f2541c9e1f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.984 248514 DEBUG nova.compute.manager [req-7e4673d5-7b03-4144-939d-c8a27b41483d req-f62dc1c7-ad71-48ec-9b91-f749c5ffd768 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Refreshing instance network info cache due to event network-changed-41420fc6-e900-4745-a3c1-4f2541c9e1f5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:29:17 np0005558241 nova_compute[248510]: 2025-12-13 08:29:17.984 248514 DEBUG oslo_concurrency.lockutils [req-7e4673d5-7b03-4144-939d-c8a27b41483d req-f62dc1c7-ad71-48ec-9b91-f749c5ffd768 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1c940191-84c7-423e-901a-233b14c2acec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.140 248514 INFO nova.virt.libvirt.driver [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Deleting instance files /var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a_del#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.141 248514 INFO nova.virt.libvirt.driver [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Deletion of /var/lib/nova/instances/917c2aaf-7701-4198-802a-0bfc5753885a_del complete#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.228 248514 INFO nova.virt.libvirt.driver [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Deleting instance files /var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_del#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.229 248514 INFO nova.virt.libvirt.driver [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Deletion of /var/lib/nova/instances/9b45ce75-4bd3-4cc0-a772-2474ffc2cd52_del complete#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.235 248514 INFO nova.compute.manager [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Took 3.28 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.236 248514 DEBUG oslo.service.loopingcall [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.237 248514 DEBUG nova.compute.manager [-] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.237 248514 DEBUG nova.network.neutron [-] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.304 248514 INFO nova.compute.manager [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Took 6.50 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.305 248514 DEBUG oslo.service.loopingcall [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.305 248514 DEBUG nova.compute.manager [-] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.306 248514 DEBUG nova.network.neutron [-] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:29:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:29:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876366977' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.474 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.476 248514 DEBUG nova.virt.libvirt.vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-582157972',display_name='tempest-ListServersNegativeTestJSON-server-582157972-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-582157972-1',id=57,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f0a98b431c940d98cf7e91fd7bdea03',ramdisk_id='',reservation_id='r-1h3e6d75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-67858047',owner_user_name='temp
est-ListServersNegativeTestJSON-67858047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:09Z,user_data=None,user_id='3b4cd12d84d34c95ac78f304b6e7546d',uuid=b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.477 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converting VIF {"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.477 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:c1:45,bridge_name='br-int',has_traffic_filtering=True,id=c0518450-5f7c-4fa0-bf72-59ebcb5be073,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0518450-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.479 248514 DEBUG nova.objects.instance [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lazy-loading 'pci_devices' on Instance uuid b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.495 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.503 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:29:18 np0005558241 nova_compute[248510]:  <uuid>b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d</uuid>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:  <name>instance-00000039</name>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <nova:name>tempest-ListServersNegativeTestJSON-server-582157972-1</nova:name>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:29:17</nova:creationTime>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:29:18 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:        <nova:user uuid="3b4cd12d84d34c95ac78f304b6e7546d">tempest-ListServersNegativeTestJSON-67858047-project-member</nova:user>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:        <nova:project uuid="1f0a98b431c940d98cf7e91fd7bdea03">tempest-ListServersNegativeTestJSON-67858047</nova:project>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:        <nova:port uuid="c0518450-5f7c-4fa0-bf72-59ebcb5be073">
Dec 13 03:29:18 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <entry name="serial">b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d</entry>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <entry name="uuid">b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d</entry>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk">
Dec 13 03:29:18 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:29:18 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk.config">
Dec 13 03:29:18 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:29:18 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:ad:c1:45"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <target dev="tapc0518450-5f"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d/console.log" append="off"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:29:18 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:29:18 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:29:18 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:29:18 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.504 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Preparing to wait for external event network-vif-plugged-c0518450-5f7c-4fa0-bf72-59ebcb5be073 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.504 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.504 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.504 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.505 248514 DEBUG nova.virt.libvirt.vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-582157972',display_name='tempest-ListServersNegativeTestJSON-server-582157972-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-582157972-1',id=57,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f0a98b431c940d98cf7e91fd7bdea03',ramdisk_id='',reservation_id='r-1h3e6d75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-67858047',owner_user_
name='tempest-ListServersNegativeTestJSON-67858047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:09Z,user_data=None,user_id='3b4cd12d84d34c95ac78f304b6e7546d',uuid=b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.505 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converting VIF {"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.506 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:c1:45,bridge_name='br-int',has_traffic_filtering=True,id=c0518450-5f7c-4fa0-bf72-59ebcb5be073,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0518450-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.507 248514 DEBUG os_vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:c1:45,bridge_name='br-int',has_traffic_filtering=True,id=c0518450-5f7c-4fa0-bf72-59ebcb5be073,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0518450-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.507 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.508 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.508 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.511 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.511 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc0518450-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.511 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc0518450-5f, col_values=(('external_ids', {'iface-id': 'c0518450-5f7c-4fa0-bf72-59ebcb5be073', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:c1:45', 'vm-uuid': 'b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.513 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:18 np0005558241 NetworkManager[50376]: <info>  [1765614558.5142] manager: (tapc0518450-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.515 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.519 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.519 248514 INFO os_vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:c1:45,bridge_name='br-int',has_traffic_filtering=True,id=c0518450-5f7c-4fa0-bf72-59ebcb5be073,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0518450-5f')#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.609 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.609 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.610 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] No VIF found with MAC fa:16:3e:ad:c1:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.610 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Using config drive#033[00m
Dec 13 03:29:18 np0005558241 nova_compute[248510]: 2025-12-13 08:29:18.635 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 164 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 810 KiB/s rd, 4.1 MiB/s wr, 180 op/s
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.678 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Updating instance_info_cache with network_info: [{"id": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "address": "fa:16:3e:5d:20:15", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd059dc2-8c", "ovs_interfaceid": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.755 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Releasing lock "refresh_cache-b9e0d5ab-483f-49a1-901a-c36f31ab710f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.756 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Instance network_info: |[{"id": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "address": "fa:16:3e:5d:20:15", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd059dc2-8c", "ovs_interfaceid": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.758 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Start _get_guest_xml network_info=[{"id": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "address": "fa:16:3e:5d:20:15", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd059dc2-8c", "ovs_interfaceid": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.764 248514 WARNING nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.769 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.770 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.774 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.775 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.775 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.775 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.776 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.776 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.776 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.776 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.777 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.777 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.777 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.778 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.778 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.778 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.781 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.941 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Creating config drive at /var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d/disk.config#033[00m
Dec 13 03:29:19 np0005558241 nova_compute[248510]: 2025-12-13 08:29:19.946 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2m5f05lx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:19 np0005558241 podman[306051]: 2025-12-13 08:29:19.990208951 +0000 UTC m=+0.072336245 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 03:29:20 np0005558241 podman[306049]: 2025-12-13 08:29:20.000897325 +0000 UTC m=+0.094393900 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:29:20 np0005558241 podman[306048]: 2025-12-13 08:29:20.03919909 +0000 UTC m=+0.132805388 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.061 248514 DEBUG nova.network.neutron [-] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.090 248514 INFO nova.compute.manager [-] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Took 1.85 seconds to deallocate network for instance.#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.096 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2m5f05lx" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.216 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.222 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d/disk.config b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.285 248514 DEBUG oslo_concurrency.lockutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.287 248514 DEBUG oslo_concurrency.lockutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:29:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:29:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4165741887' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.402 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d/disk.config b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.403 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Deleting local config drive /var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d/disk.config because it was imported into RBD.#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.415 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.442 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.447 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:20 np0005558241 kernel: tapc0518450-5f: entered promiscuous mode
Dec 13 03:29:20 np0005558241 NetworkManager[50376]: <info>  [1765614560.4628] manager: (tapc0518450-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/235)
Dec 13 03:29:20 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:20Z|00531|binding|INFO|Claiming lport c0518450-5f7c-4fa0-bf72-59ebcb5be073 for this chassis.
Dec 13 03:29:20 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:20Z|00532|binding|INFO|c0518450-5f7c-4fa0-bf72-59ebcb5be073: Claiming fa:16:3e:ad:c1:45 10.100.0.9
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.491 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:20 np0005558241 systemd-udevd[306199]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:29:20 np0005558241 systemd-machined[210538]: New machine qemu-65-instance-00000039.
Dec 13 03:29:20 np0005558241 NetworkManager[50376]: <info>  [1765614560.5110] device (tapc0518450-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:29:20 np0005558241 NetworkManager[50376]: <info>  [1765614560.5123] device (tapc0518450-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:29:20 np0005558241 systemd[1]: Started Virtual Machine qemu-65-instance-00000039.
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.515 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:c1:45 10.100.0.9'], port_security=['fa:16:3e:ad:c1:45 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f0a98b431c940d98cf7e91fd7bdea03', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63655ec3-e19a-48cf-bc09-d1f4e53966e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3562b8c-e174-48df-b01e-6261323c4619, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c0518450-5f7c-4fa0-bf72-59ebcb5be073) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.517 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c0518450-5f7c-4fa0-bf72-59ebcb5be073 in datapath f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 bound to our chassis#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.519 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6669b7a-7d21-4e4e-96cd-84193f6fdcf3#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.527 248514 DEBUG nova.network.neutron [-] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.534 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[152fef9f-ede0-437e-9162-96568cdf47fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.535 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf6669b7a-71 in ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.538 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf6669b7a-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.538 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b196c0-a75b-406f-9064-1a9a541b5ff4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.539 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d9cc7f-61ef-4b93-b934-9e57aba8a9a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.549 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[16ffbcea-692c-425f-8a2c-502f95d30054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.559 248514 INFO nova.compute.manager [-] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Took 2.25 seconds to deallocate network for instance.#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.564 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:20 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:20Z|00533|binding|INFO|Setting lport c0518450-5f7c-4fa0-bf72-59ebcb5be073 ovn-installed in OVS
Dec 13 03:29:20 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:20Z|00534|binding|INFO|Setting lport c0518450-5f7c-4fa0-bf72-59ebcb5be073 up in Southbound
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.573 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.575 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b0cefd-cfbc-4b9a-b38b-38b9fcdbce00]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.610 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[44e87ca4-0b6d-4830-8000-b6ef3ac07153]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.618 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bd5edcb9-47da-4656-9b52-39aa92068ca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:20 np0005558241 NetworkManager[50376]: <info>  [1765614560.6204] manager: (tapf6669b7a-70): new Veth device (/org/freedesktop/NetworkManager/Devices/236)
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.622 248514 DEBUG oslo_concurrency.lockutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.633 248514 DEBUG oslo_concurrency.processutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.656 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[84351ad1-764b-4ff0-9d0c-6850fab7fb1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.660 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[dd676a48-cf32-4345-acef-ed94080ae25e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.681 248514 DEBUG nova.compute.manager [req-9daa44bc-ed9a-40a5-840d-7cc7da990429 req-a1e5aed9-6880-4235-9a7b-26fcde648166 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Received event network-vif-deleted-a795a3d9-1388-4e72-8bfd-271816a45466 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:20 np0005558241 NetworkManager[50376]: <info>  [1765614560.6855] device (tapf6669b7a-70): carrier: link connected
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.692 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c909db25-9ca7-45a9-8380-37f9661e7e22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.716 248514 DEBUG nova.network.neutron [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Updated VIF entry in instance network info cache for port c0518450-5f7c-4fa0-bf72-59ebcb5be073. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.716 248514 DEBUG nova.network.neutron [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Updating instance_info_cache with network_info: [{"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.717 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a3627f39-d13c-4cd5-aaa3-587d48dba923]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6669b7a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:9e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713790, 'reachable_time': 32755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306254, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.735 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11603133-0e27-4830-98a5-dc8a0efff646]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8a:9e66'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713790, 'tstamp': 713790}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306255, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.738 248514 DEBUG nova.compute.manager [req-82885f15-0c4a-4ed6-b98b-a7cf64980288 req-e07bf532-bdc3-43da-b7a6-223981ba9839 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Received event network-vif-deleted-3aadc575-dc9f-4823-82c7-112e9b9832fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.748 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.748 248514 DEBUG nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Received event network-vif-unplugged-3aadc575-dc9f-4823-82c7-112e9b9832fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.748 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.748 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.749 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.749 248514 DEBUG nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] No waiting events found dispatching network-vif-unplugged-3aadc575-dc9f-4823-82c7-112e9b9832fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.749 248514 DEBUG nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Received event network-vif-unplugged-3aadc575-dc9f-4823-82c7-112e9b9832fe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.749 248514 DEBUG nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Received event network-vif-plugged-3aadc575-dc9f-4823-82c7-112e9b9832fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.749 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.750 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.750 248514 DEBUG oslo_concurrency.lockutils [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.750 248514 DEBUG nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] No waiting events found dispatching network-vif-plugged-3aadc575-dc9f-4823-82c7-112e9b9832fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.750 248514 WARNING nova.compute.manager [req-2509781a-c787-45f1-a203-915eae1242a9 req-7d1c9152-c543-4b52-b15c-e5ba7e50a6aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Received unexpected event network-vif-plugged-3aadc575-dc9f-4823-82c7-112e9b9832fe for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.757 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eb261bc6-82f0-48db-93b0-3fa279955eaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6669b7a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:9e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713790, 'reachable_time': 32755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306256, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.787 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[024004e3-b776-4169-8710-1b0353008cd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.851 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab29a5e-891f-479b-b98b-35fbbc406680]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.852 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6669b7a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.853 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.853 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6669b7a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.855 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:20 np0005558241 NetworkManager[50376]: <info>  [1765614560.8568] manager: (tapf6669b7a-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Dec 13 03:29:20 np0005558241 kernel: tapf6669b7a-70: entered promiscuous mode
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.859 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.862 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6669b7a-70, col_values=(('external_ids', {'iface-id': 'fc5b13bf-8bf9-4192-a467-a89c8b6706fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.863 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:20 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:20Z|00535|binding|INFO|Releasing lport fc5b13bf-8bf9-4192-a467-a89c8b6706fb from this chassis (sb_readonly=0)
Dec 13 03:29:20 np0005558241 nova_compute[248510]: 2025-12-13 08:29:20.886 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.887 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f6669b7a-7d21-4e4e-96cd-84193f6fdcf3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f6669b7a-7d21-4e4e-96cd-84193f6fdcf3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.888 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bfabae5b-c00f-4fab-a7c2-173c3c921da6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.889 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/f6669b7a-7d21-4e4e-96cd-84193f6fdcf3.pid.haproxy
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID f6669b7a-7d21-4e4e-96cd-84193f6fdcf3
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:29:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:20.890 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'env', 'PROCESS_TAG=haproxy-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f6669b7a-7d21-4e4e-96cd-84193f6fdcf3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012070986413704181 of space, bias 1.0, pg target 0.3621295924111254 quantized to 32 (current 32)
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006670072192503635 of space, bias 1.0, pg target 0.20010216577510903 quantized to 32 (current 32)
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.011434439919754e-07 of space, bias 4.0, pg target 0.0008413721327903704 quantized to 16 (current 32)
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:29:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:29:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:29:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/590400138' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.053 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.056 248514 DEBUG nova.virt.libvirt.vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-582157972',display_name='tempest-ListServersNegativeTestJSON-server-582157972-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-582157972-2',id=58,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f0a98b431c940d98cf7e91fd7bdea03',ramdisk_id='',reservation_id='r-1h3e6d75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-67858047',owner_user_name='temp
est-ListServersNegativeTestJSON-67858047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:11Z,user_data=None,user_id='3b4cd12d84d34c95ac78f304b6e7546d',uuid=b9e0d5ab-483f-49a1-901a-c36f31ab710f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "address": "fa:16:3e:5d:20:15", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd059dc2-8c", "ovs_interfaceid": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.056 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converting VIF {"id": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "address": "fa:16:3e:5d:20:15", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd059dc2-8c", "ovs_interfaceid": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.057 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:20:15,bridge_name='br-int',has_traffic_filtering=True,id=dd059dc2-8c1a-49c4-b820-7cf31293c210,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd059dc2-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.058 248514 DEBUG nova.objects.instance [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lazy-loading 'pci_devices' on Instance uuid b9e0d5ab-483f-49a1-901a-c36f31ab710f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.087 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:29:21 np0005558241 nova_compute[248510]:  <uuid>b9e0d5ab-483f-49a1-901a-c36f31ab710f</uuid>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:  <name>instance-0000003a</name>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <nova:name>tempest-ListServersNegativeTestJSON-server-582157972-2</nova:name>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:29:19</nova:creationTime>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:29:21 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:        <nova:user uuid="3b4cd12d84d34c95ac78f304b6e7546d">tempest-ListServersNegativeTestJSON-67858047-project-member</nova:user>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:        <nova:project uuid="1f0a98b431c940d98cf7e91fd7bdea03">tempest-ListServersNegativeTestJSON-67858047</nova:project>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:        <nova:port uuid="dd059dc2-8c1a-49c4-b820-7cf31293c210">
Dec 13 03:29:21 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <entry name="serial">b9e0d5ab-483f-49a1-901a-c36f31ab710f</entry>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <entry name="uuid">b9e0d5ab-483f-49a1-901a-c36f31ab710f</entry>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk">
Dec 13 03:29:21 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:29:21 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk.config">
Dec 13 03:29:21 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:29:21 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:5d:20:15"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <target dev="tapdd059dc2-8c"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f/console.log" append="off"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:29:21 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:29:21 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:29:21 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:29:21 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.088 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Preparing to wait for external event network-vif-plugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.089 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.089 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.090 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.091 248514 DEBUG nova.virt.libvirt.vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-582157972',display_name='tempest-ListServersNegativeTestJSON-server-582157972-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-582157972-2',id=58,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f0a98b431c940d98cf7e91fd7bdea03',ramdisk_id='',reservation_id='r-1h3e6d75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-67858047',owner_user_
name='tempest-ListServersNegativeTestJSON-67858047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:11Z,user_data=None,user_id='3b4cd12d84d34c95ac78f304b6e7546d',uuid=b9e0d5ab-483f-49a1-901a-c36f31ab710f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "address": "fa:16:3e:5d:20:15", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd059dc2-8c", "ovs_interfaceid": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.091 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converting VIF {"id": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "address": "fa:16:3e:5d:20:15", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd059dc2-8c", "ovs_interfaceid": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.092 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:20:15,bridge_name='br-int',has_traffic_filtering=True,id=dd059dc2-8c1a-49c4-b820-7cf31293c210,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd059dc2-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.092 248514 DEBUG os_vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:20:15,bridge_name='br-int',has_traffic_filtering=True,id=dd059dc2-8c1a-49c4-b820-7cf31293c210,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd059dc2-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.093 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.094 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.094 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.098 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.098 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd059dc2-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.099 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdd059dc2-8c, col_values=(('external_ids', {'iface-id': 'dd059dc2-8c1a-49c4-b820-7cf31293c210', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:20:15', 'vm-uuid': 'b9e0d5ab-483f-49a1-901a-c36f31ab710f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:21 np0005558241 NetworkManager[50376]: <info>  [1765614561.1024] manager: (tapdd059dc2-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.104 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.107 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.109 248514 INFO os_vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:20:15,bridge_name='br-int',has_traffic_filtering=True,id=dd059dc2-8c1a-49c4-b820-7cf31293c210,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd059dc2-8c')#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.169 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.170 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.170 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] No VIF found with MAC fa:16:3e:5d:20:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.171 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Using config drive#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.190 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:29:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2144417386' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.234 248514 DEBUG oslo_concurrency.processutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.239 248514 DEBUG nova.compute.provider_tree [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.255 248514 DEBUG nova.scheduler.client.report [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:29:21 np0005558241 podman[306326]: 2025-12-13 08:29:21.265241181 +0000 UTC m=+0.045269438 container create 401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.287 248514 DEBUG oslo_concurrency.lockutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.289 248514 DEBUG oslo_concurrency.lockutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:21 np0005558241 systemd[1]: Started libpod-conmon-401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d.scope.
Dec 13 03:29:21 np0005558241 podman[306326]: 2025-12-13 08:29:21.240904641 +0000 UTC m=+0.020932918 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.344 248514 INFO nova.scheduler.client.report [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Deleted allocations for instance 917c2aaf-7701-4198-802a-0bfc5753885a#033[00m
Dec 13 03:29:21 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:29:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7554b3fff23359996616576e29b124b752a5d83d38de1b07b4ecde93acadd26c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:29:21 np0005558241 podman[306326]: 2025-12-13 08:29:21.372042707 +0000 UTC m=+0.152070984 container init 401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 03:29:21 np0005558241 podman[306326]: 2025-12-13 08:29:21.379996743 +0000 UTC m=+0.160025000 container start 401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 03:29:21 np0005558241 neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3[306343]: [NOTICE]   (306348) : New worker (306350) forked
Dec 13 03:29:21 np0005558241 neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3[306343]: [NOTICE]   (306348) : Loading success.
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.422 248514 DEBUG oslo_concurrency.processutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.468 248514 DEBUG nova.network.neutron [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Updating instance_info_cache with network_info: [{"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.477 248514 DEBUG oslo_concurrency.lockutils [None req-29a13ddb-1c51-4939-8aa9-e34df1280530 b6f7967ce16c45d394188c1302b02907 6f6beadbb0244529b8dfc1abff8e8e10 - - default default] Lock "917c2aaf-7701-4198-802a-0bfc5753885a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1957: 321 pgs: 321 active+clean; 180 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 736 KiB/s rd, 5.9 MiB/s wr, 186 op/s
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.513 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Releasing lock "refresh_cache-1c940191-84c7-423e-901a-233b14c2acec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.513 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Instance network_info: |[{"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.514 248514 DEBUG oslo_concurrency.lockutils [req-7e4673d5-7b03-4144-939d-c8a27b41483d req-f62dc1c7-ad71-48ec-9b91-f749c5ffd768 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1c940191-84c7-423e-901a-233b14c2acec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.514 248514 DEBUG nova.network.neutron [req-7e4673d5-7b03-4144-939d-c8a27b41483d req-f62dc1c7-ad71-48ec-9b91-f749c5ffd768 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Refreshing network info cache for port 41420fc6-e900-4745-a3c1-4f2541c9e1f5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.518 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Start _get_guest_xml network_info=[{"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.522 248514 WARNING nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.532 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.532 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.536 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.537 248514 DEBUG nova.virt.libvirt.host [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.537 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.537 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.538 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.538 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.539 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.539 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.539 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.539 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.540 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.540 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.540 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.540 248514 DEBUG nova.virt.hardware [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.544 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.864 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Creating config drive at /var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f/disk.config#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.872 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbvux0b8u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.942 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614561.941771, b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.943 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] VM Started (Lifecycle Event)#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.973 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.979 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614561.9428473, b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:21 np0005558241 nova_compute[248510]: 2025-12-13 08:29:21.979 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:29:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:29:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2947150714' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.008 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.009 248514 DEBUG oslo_concurrency.processutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.015 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.023 248514 DEBUG nova.compute.provider_tree [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.026 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbvux0b8u" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.052 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.056 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f/disk.config b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:29:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3773651429' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.100 248514 DEBUG nova.scheduler.client.report [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.105 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.122 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.149 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image 1c940191-84c7-423e-901a-233b14c2acec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.155 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.193 248514 DEBUG oslo_concurrency.lockutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.209 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f/disk.config b9e0d5ab-483f-49a1-901a-c36f31ab710f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.209 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Deleting local config drive /var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f/disk.config because it was imported into RBD.#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.252 248514 INFO nova.scheduler.client.report [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Deleted allocations for instance 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52#033[00m
Dec 13 03:29:22 np0005558241 NetworkManager[50376]: <info>  [1765614562.2715] manager: (tapdd059dc2-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/239)
Dec 13 03:29:22 np0005558241 kernel: tapdd059dc2-8c: entered promiscuous mode
Dec 13 03:29:22 np0005558241 NetworkManager[50376]: <info>  [1765614562.2831] device (tapdd059dc2-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:29:22 np0005558241 NetworkManager[50376]: <info>  [1765614562.2842] device (tapdd059dc2-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.337 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:22Z|00536|binding|INFO|Claiming lport dd059dc2-8c1a-49c4-b820-7cf31293c210 for this chassis.
Dec 13 03:29:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:22Z|00537|binding|INFO|dd059dc2-8c1a-49c4-b820-7cf31293c210: Claiming fa:16:3e:5d:20:15 10.100.0.11
Dec 13 03:29:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.345 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:20:15 10.100.0.11'], port_security=['fa:16:3e:5d:20:15 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b9e0d5ab-483f-49a1-901a-c36f31ab710f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f0a98b431c940d98cf7e91fd7bdea03', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63655ec3-e19a-48cf-bc09-d1f4e53966e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3562b8c-e174-48df-b01e-6261323c4619, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=dd059dc2-8c1a-49c4-b820-7cf31293c210) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:29:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.347 158419 INFO neutron.agent.ovn.metadata.agent [-] Port dd059dc2-8c1a-49c4-b820-7cf31293c210 in datapath f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 bound to our chassis#033[00m
Dec 13 03:29:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.348 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6669b7a-7d21-4e4e-96cd-84193f6fdcf3#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.358 248514 DEBUG oslo_concurrency.lockutils [None req-5ac49bfe-0fe4-4d99-a8f7-d8497a587781 3b988c7ac9354c59aac9a9f41f83c20f 52e1055963294dbdb16cd95b466cd4d9 - - default default] Lock "9b45ce75-4bd3-4cc0-a772-2474ffc2cd52" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:22Z|00538|binding|INFO|Setting lport dd059dc2-8c1a-49c4-b820-7cf31293c210 ovn-installed in OVS
Dec 13 03:29:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:22Z|00539|binding|INFO|Setting lport dd059dc2-8c1a-49c4-b820-7cf31293c210 up in Southbound
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.362 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.369 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8c96d15d-b922-4110-89d0-bf5f640b99cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:22 np0005558241 systemd-machined[210538]: New machine qemu-66-instance-0000003a.
Dec 13 03:29:22 np0005558241 systemd[1]: Started Virtual Machine qemu-66-instance-0000003a.
Dec 13 03:29:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.412 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[afc6d8b6-5d49-40e4-baa1-6460cd98736f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.418 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[649ebba5-7d44-44e6-a3ed-3b01f3c491e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.466 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[96e55fa7-e0ab-4571-97c6-e74918cc23dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.494 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a581d960-9833-4112-8da9-48b3a7daf068]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6669b7a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:9e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713790, 'reachable_time': 32755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306550, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.518 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[94d33e0d-d64b-433f-8ea9-849bc2ed4654]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6669b7a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713803, 'tstamp': 713803}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306551, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6669b7a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713806, 'tstamp': 713806}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306551, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.521 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6669b7a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.524 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6669b7a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.525 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:29:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.525 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6669b7a-70, col_values=(('external_ids', {'iface-id': 'fc5b13bf-8bf9-4192-a467-a89c8b6706fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.525 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:22.526 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:29:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:29:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/605435020' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.780 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.782 248514 DEBUG nova.virt.libvirt.vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-582157972',display_name='tempest-ListServersNegativeTestJSON-server-582157972-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-582157972-3',id=59,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f0a98b431c940d98cf7e91fd7bdea03',ramdisk_id='',reservation_id='r-1h3e6d75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-67858047',owner_user_name='tempest-ListServersNegativeTestJSON-67858047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:13Z,user_data=None,user_id='3b4cd12d84d34c95ac78f304b6e7546d',uuid=1c940191-84c7-423e-901a-233b14c2acec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.783 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converting VIF {"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.784 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:08:03,bridge_name='br-int',has_traffic_filtering=True,id=41420fc6-e900-4745-a3c1-4f2541c9e1f5,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41420fc6-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.785 248514 DEBUG nova.objects.instance [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c940191-84c7-423e-901a-233b14c2acec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.809 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:29:22 np0005558241 nova_compute[248510]:  <uuid>1c940191-84c7-423e-901a-233b14c2acec</uuid>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:  <name>instance-0000003b</name>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <nova:name>tempest-ListServersNegativeTestJSON-server-582157972-3</nova:name>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:29:21</nova:creationTime>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:29:22 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:        <nova:user uuid="3b4cd12d84d34c95ac78f304b6e7546d">tempest-ListServersNegativeTestJSON-67858047-project-member</nova:user>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:        <nova:project uuid="1f0a98b431c940d98cf7e91fd7bdea03">tempest-ListServersNegativeTestJSON-67858047</nova:project>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:        <nova:port uuid="41420fc6-e900-4745-a3c1-4f2541c9e1f5">
Dec 13 03:29:22 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <entry name="serial">1c940191-84c7-423e-901a-233b14c2acec</entry>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <entry name="uuid">1c940191-84c7-423e-901a-233b14c2acec</entry>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/1c940191-84c7-423e-901a-233b14c2acec_disk">
Dec 13 03:29:22 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:29:22 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/1c940191-84c7-423e-901a-233b14c2acec_disk.config">
Dec 13 03:29:22 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:29:22 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:de:08:03"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <target dev="tap41420fc6-e9"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec/console.log" append="off"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:29:22 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:29:22 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:29:22 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:29:22 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.811 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Preparing to wait for external event network-vif-plugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.811 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "1c940191-84c7-423e-901a-233b14c2acec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.811 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.811 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.812 248514 DEBUG nova.virt.libvirt.vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-582157972',display_name='tempest-ListServersNegativeTestJSON-server-582157972-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-582157972-3',id=59,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f0a98b431c940d98cf7e91fd7bdea03',ramdisk_id='',reservation_id='r-1h3e6d75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-67858047',owner_user_name='tempest-ListServersNegativeTestJSON-67858047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:13Z,user_data=None,user_id='3b4cd12d84d34c95ac78f304b6e7546d',uuid=1c940191-84c7-423e-901a-233b14c2acec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.813 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converting VIF {"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.813 248514 DEBUG nova.network.os_vif_util [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:08:03,bridge_name='br-int',has_traffic_filtering=True,id=41420fc6-e900-4745-a3c1-4f2541c9e1f5,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41420fc6-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.813 248514 DEBUG os_vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:08:03,bridge_name='br-int',has_traffic_filtering=True,id=41420fc6-e900-4745-a3c1-4f2541c9e1f5,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41420fc6-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.814 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.814 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.815 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.818 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.818 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41420fc6-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.818 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41420fc6-e9, col_values=(('external_ids', {'iface-id': '41420fc6-e900-4745-a3c1-4f2541c9e1f5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:08:03', 'vm-uuid': '1c940191-84c7-423e-901a-233b14c2acec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.820 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:22 np0005558241 NetworkManager[50376]: <info>  [1765614562.8210] manager: (tap41420fc6-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.822 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.825 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.826 248514 INFO os_vif [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:08:03,bridge_name='br-int',has_traffic_filtering=True,id=41420fc6-e900-4745-a3c1-4f2541c9e1f5,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41420fc6-e9')#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.828 248514 DEBUG nova.compute.manager [req-63d967ba-bcb5-4059-8b29-7a0cd688ff37 req-16c23b2e-da80-4692-b4a5-85bd8102814a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Received event network-vif-plugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.829 248514 DEBUG oslo_concurrency.lockutils [req-63d967ba-bcb5-4059-8b29-7a0cd688ff37 req-16c23b2e-da80-4692-b4a5-85bd8102814a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.829 248514 DEBUG oslo_concurrency.lockutils [req-63d967ba-bcb5-4059-8b29-7a0cd688ff37 req-16c23b2e-da80-4692-b4a5-85bd8102814a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.829 248514 DEBUG oslo_concurrency.lockutils [req-63d967ba-bcb5-4059-8b29-7a0cd688ff37 req-16c23b2e-da80-4692-b4a5-85bd8102814a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.830 248514 DEBUG nova.compute.manager [req-63d967ba-bcb5-4059-8b29-7a0cd688ff37 req-16c23b2e-da80-4692-b4a5-85bd8102814a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Processing event network-vif-plugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.886 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.887 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.887 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] No VIF found with MAC fa:16:3e:de:08:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.888 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Using config drive#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.912 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image 1c940191-84c7-423e-901a-233b14c2acec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.952 248514 DEBUG nova.compute.manager [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Received event network-vif-plugged-c0518450-5f7c-4fa0-bf72-59ebcb5be073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.953 248514 DEBUG oslo_concurrency.lockutils [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.953 248514 DEBUG oslo_concurrency.lockutils [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.954 248514 DEBUG oslo_concurrency.lockutils [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.954 248514 DEBUG nova.compute.manager [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Processing event network-vif-plugged-c0518450-5f7c-4fa0-bf72-59ebcb5be073 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.954 248514 DEBUG nova.compute.manager [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Received event network-vif-plugged-c0518450-5f7c-4fa0-bf72-59ebcb5be073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.954 248514 DEBUG oslo_concurrency.lockutils [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.954 248514 DEBUG oslo_concurrency.lockutils [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.955 248514 DEBUG oslo_concurrency.lockutils [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.955 248514 DEBUG nova.compute.manager [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] No waiting events found dispatching network-vif-plugged-c0518450-5f7c-4fa0-bf72-59ebcb5be073 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.955 248514 WARNING nova.compute.manager [req-430ee58b-66a5-4bba-9379-d4a85f4bc06c req-da091272-bf0f-48ab-a32c-ea8b1d55e280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Received unexpected event network-vif-plugged-c0518450-5f7c-4fa0-bf72-59ebcb5be073 for instance with vm_state building and task_state spawning.#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.956 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.960 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614562.9598994, b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.960 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.962 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.968 248514 INFO nova.virt.libvirt.driver [-] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Instance spawned successfully.#033[00m
Dec 13 03:29:22 np0005558241 nova_compute[248510]: 2025-12-13 08:29:22.969 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.003 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.010 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.033 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.033 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.034 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.034 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.035 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.035 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.073 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.111 248514 INFO nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Took 12.93 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.111 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.200 248514 INFO nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Took 14.53 seconds to build instance.#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.223 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.321 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Creating config drive at /var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec/disk.config#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.328 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7kxczao9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.458 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614563.4581342, b9e0d5ab-483f-49a1-901a-c36f31ab710f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.459 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] VM Started (Lifecycle Event)#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.462 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.467 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.471 248514 INFO nova.virt.libvirt.driver [-] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Instance spawned successfully.#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.472 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:29:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 180 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 696 KiB/s rd, 5.5 MiB/s wr, 187 op/s
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.481 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7kxczao9" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.512 248514 DEBUG nova.storage.rbd_utils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] rbd image 1c940191-84c7-423e-901a-233b14c2acec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.517 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec/disk.config 1c940191-84c7-423e-901a-233b14c2acec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.559 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.567 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.573 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.574 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.575 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.575 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.576 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.577 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.591 248514 DEBUG nova.network.neutron [req-7e4673d5-7b03-4144-939d-c8a27b41483d req-f62dc1c7-ad71-48ec-9b91-f749c5ffd768 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Updated VIF entry in instance network info cache for port 41420fc6-e900-4745-a3c1-4f2541c9e1f5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.591 248514 DEBUG nova.network.neutron [req-7e4673d5-7b03-4144-939d-c8a27b41483d req-f62dc1c7-ad71-48ec-9b91-f749c5ffd768 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Updating instance_info_cache with network_info: [{"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.612 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.613 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614563.4582496, b9e0d5ab-483f-49a1-901a-c36f31ab710f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.613 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.640 248514 DEBUG oslo_concurrency.lockutils [req-7e4673d5-7b03-4144-939d-c8a27b41483d req-f62dc1c7-ad71-48ec-9b91-f749c5ffd768 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1c940191-84c7-423e-901a-233b14c2acec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.661 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.666 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614563.465802, b9e0d5ab-483f-49a1-901a-c36f31ab710f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.666 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.711 248514 INFO nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Took 11.73 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.711 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.712 248514 DEBUG oslo_concurrency.processutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec/disk.config 1c940191-84c7-423e-901a-233b14c2acec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.713 248514 INFO nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Deleting local config drive /var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec/disk.config because it was imported into RBD.#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.725 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.728 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:29:23 np0005558241 kernel: tap41420fc6-e9: entered promiscuous mode
Dec 13 03:29:23 np0005558241 NetworkManager[50376]: <info>  [1765614563.7787] manager: (tap41420fc6-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/241)
Dec 13 03:29:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:23Z|00540|binding|INFO|Claiming lport 41420fc6-e900-4745-a3c1-4f2541c9e1f5 for this chassis.
Dec 13 03:29:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:23Z|00541|binding|INFO|41420fc6-e900-4745-a3c1-4f2541c9e1f5: Claiming fa:16:3e:de:08:03 10.100.0.4
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.790 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.794 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:08:03 10.100.0.4'], port_security=['fa:16:3e:de:08:03 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1c940191-84c7-423e-901a-233b14c2acec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f0a98b431c940d98cf7e91fd7bdea03', 'neutron:revision_number': '2', 'neutron:security_group_ids': '63655ec3-e19a-48cf-bc09-d1f4e53966e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3562b8c-e174-48df-b01e-6261323c4619, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=41420fc6-e900-4745-a3c1-4f2541c9e1f5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:29:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.796 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 41420fc6-e900-4745-a3c1-4f2541c9e1f5 in datapath f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 bound to our chassis#033[00m
Dec 13 03:29:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.798 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6669b7a-7d21-4e4e-96cd-84193f6fdcf3#033[00m
Dec 13 03:29:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:23Z|00542|binding|INFO|Setting lport 41420fc6-e900-4745-a3c1-4f2541c9e1f5 ovn-installed in OVS
Dec 13 03:29:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:23Z|00543|binding|INFO|Setting lport 41420fc6-e900-4745-a3c1-4f2541c9e1f5 up in Southbound
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.800 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.803 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.807 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:29:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.822 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c6022150-2bfd-4bbc-84b1-b0bd0c3aadec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:23 np0005558241 systemd-udevd[306671]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:29:23 np0005558241 systemd-machined[210538]: New machine qemu-67-instance-0000003b.
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.838 248514 INFO nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Took 15.15 seconds to build instance.#033[00m
Dec 13 03:29:23 np0005558241 NetworkManager[50376]: <info>  [1765614563.8422] device (tap41420fc6-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:29:23 np0005558241 NetworkManager[50376]: <info>  [1765614563.8433] device (tap41420fc6-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:29:23 np0005558241 systemd[1]: Started Virtual Machine qemu-67-instance-0000003b.
Dec 13 03:29:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.864 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7a527502-7182-4c6c-81f7-987225a76b0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.866 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.868 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c0dc80c3-40ba-46d5-99b8-5eb15b9d74fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.904 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f038b60c-af0a-4867-8efa-7f2ebbdf7864]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.928 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d48cf201-e9a4-401c-bebc-c66d73e661a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6669b7a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:9e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713790, 'reachable_time': 32755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306683, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.949 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6edbe335-dba2-43a3-bf16-ffbce6a61936]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6669b7a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713803, 'tstamp': 713803}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306685, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6669b7a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713806, 'tstamp': 713806}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306685, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.952 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6669b7a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.953 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:23 np0005558241 nova_compute[248510]: 2025-12-13 08:29:23.955 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.955 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6669b7a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.955 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:29:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.956 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6669b7a-70, col_values=(('external_ids', {'iface-id': 'fc5b13bf-8bf9-4192-a467-a89c8b6706fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:23.956 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:29:24 np0005558241 nova_compute[248510]: 2025-12-13 08:29:24.361 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614564.3607767, 1c940191-84c7-423e-901a-233b14c2acec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:24 np0005558241 nova_compute[248510]: 2025-12-13 08:29:24.362 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] VM Started (Lifecycle Event)#033[00m
Dec 13 03:29:24 np0005558241 nova_compute[248510]: 2025-12-13 08:29:24.390 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:24 np0005558241 nova_compute[248510]: 2025-12-13 08:29:24.395 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614564.3609877, 1c940191-84c7-423e-901a-233b14c2acec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:24 np0005558241 nova_compute[248510]: 2025-12-13 08:29:24.395 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:29:24 np0005558241 nova_compute[248510]: 2025-12-13 08:29:24.419 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:24 np0005558241 nova_compute[248510]: 2025-12-13 08:29:24.423 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:29:24 np0005558241 nova_compute[248510]: 2025-12-13 08:29:24.448 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.074 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.131 248514 DEBUG nova.compute.manager [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Received event network-vif-plugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.132 248514 DEBUG oslo_concurrency.lockutils [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.132 248514 DEBUG oslo_concurrency.lockutils [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.133 248514 DEBUG oslo_concurrency.lockutils [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.133 248514 DEBUG nova.compute.manager [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] No waiting events found dispatching network-vif-plugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.133 248514 WARNING nova.compute.manager [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Received unexpected event network-vif-plugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.133 248514 DEBUG nova.compute.manager [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Received event network-vif-plugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.134 248514 DEBUG oslo_concurrency.lockutils [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1c940191-84c7-423e-901a-233b14c2acec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.134 248514 DEBUG oslo_concurrency.lockutils [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.134 248514 DEBUG oslo_concurrency.lockutils [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.135 248514 DEBUG nova.compute.manager [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Processing event network-vif-plugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.135 248514 DEBUG nova.compute.manager [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Received event network-vif-plugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.135 248514 DEBUG oslo_concurrency.lockutils [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1c940191-84c7-423e-901a-233b14c2acec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.135 248514 DEBUG oslo_concurrency.lockutils [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.136 248514 DEBUG oslo_concurrency.lockutils [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.136 248514 DEBUG nova.compute.manager [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] No waiting events found dispatching network-vif-plugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.136 248514 WARNING nova.compute.manager [req-16e52972-d301-4d67-96be-925f84cc9a35 req-941c0008-5d99-4bdb-bd2c-c89496aa2428 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Received unexpected event network-vif-plugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 for instance with vm_state building and task_state spawning.#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.137 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.163 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614565.1474512, 1c940191-84c7-423e-901a-233b14c2acec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.163 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.165 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.168 248514 INFO nova.virt.libvirt.driver [-] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Instance spawned successfully.#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.168 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.221 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.222 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.222 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.223 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.223 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.224 248514 DEBUG nova.virt.libvirt.driver [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.245 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.250 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:29:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.430 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:29:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 181 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.4 MiB/s wr, 213 op/s
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.480 248514 INFO nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Took 11.97 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.482 248514 DEBUG nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.583 248514 INFO nova.compute.manager [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Took 16.84 seconds to build instance.#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.614 248514 DEBUG oslo_concurrency.lockutils [None req-3450f2e9-f443-4c69-b1ea-980cc8a7e42b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:25.991 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:29:25 np0005558241 nova_compute[248510]: 2025-12-13 08:29:25.993 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:25.993 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:29:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1960: 321 pgs: 321 active+clean; 181 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.3 MiB/s wr, 201 op/s
Dec 13 03:29:27 np0005558241 nova_compute[248510]: 2025-12-13 08:29:27.657 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614552.6348193, 917c2aaf-7701-4198-802a-0bfc5753885a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:27 np0005558241 nova_compute[248510]: 2025-12-13 08:29:27.659 248514 INFO nova.compute.manager [-] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:29:27 np0005558241 nova_compute[248510]: 2025-12-13 08:29:27.820 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:27.996 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:28 np0005558241 nova_compute[248510]: 2025-12-13 08:29:28.050 248514 DEBUG nova.compute.manager [None req-e5e5bbeb-698b-439d-8010-78951fa7a7f7 - - - - - -] [instance: 917c2aaf-7701-4198-802a-0bfc5753885a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:28 np0005558241 nova_compute[248510]: 2025-12-13 08:29:28.882 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614553.644247, 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:28 np0005558241 nova_compute[248510]: 2025-12-13 08:29:28.884 248514 INFO nova.compute.manager [-] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:29:28 np0005558241 nova_compute[248510]: 2025-12-13 08:29:28.914 248514 DEBUG nova.compute.manager [None req-8f75ad4b-345d-4e85-94c3-3de360eb2982 - - - - - -] [instance: 9b45ce75-4bd3-4cc0-a772-2474ffc2cd52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.396 248514 DEBUG oslo_concurrency.lockutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.398 248514 DEBUG oslo_concurrency.lockutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.399 248514 DEBUG oslo_concurrency.lockutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.399 248514 DEBUG oslo_concurrency.lockutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.400 248514 DEBUG oslo_concurrency.lockutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.403 248514 INFO nova.compute.manager [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Terminating instance#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.405 248514 DEBUG nova.compute.manager [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:29:29 np0005558241 kernel: tapc0518450-5f (unregistering): left promiscuous mode
Dec 13 03:29:29 np0005558241 NetworkManager[50376]: <info>  [1765614569.4547] device (tapc0518450-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:29:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:29Z|00544|binding|INFO|Releasing lport c0518450-5f7c-4fa0-bf72-59ebcb5be073 from this chassis (sb_readonly=0)
Dec 13 03:29:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:29Z|00545|binding|INFO|Setting lport c0518450-5f7c-4fa0-bf72-59ebcb5be073 down in Southbound
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.463 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:29Z|00546|binding|INFO|Removing iface tapc0518450-5f ovn-installed in OVS
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.468 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.471 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:c1:45 10.100.0.9'], port_security=['fa:16:3e:ad:c1:45 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f0a98b431c940d98cf7e91fd7bdea03', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63655ec3-e19a-48cf-bc09-d1f4e53966e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3562b8c-e174-48df-b01e-6261323c4619, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c0518450-5f7c-4fa0-bf72-59ebcb5be073) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:29:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.472 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c0518450-5f7c-4fa0-bf72-59ebcb5be073 in datapath f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 unbound from our chassis#033[00m
Dec 13 03:29:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.474 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6669b7a-7d21-4e4e-96cd-84193f6fdcf3#033[00m
Dec 13 03:29:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1961: 321 pgs: 321 active+clean; 181 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 5.3 MiB/s wr, 318 op/s
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.490 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:29 np0005558241 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000039.scope: Deactivated successfully.
Dec 13 03:29:29 np0005558241 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000039.scope: Consumed 7.667s CPU time.
Dec 13 03:29:29 np0005558241 systemd-machined[210538]: Machine qemu-65-instance-00000039 terminated.
Dec 13 03:29:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.510 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5c240f41-f2d0-4324-8a49-c3bd27bce9cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.549 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[694ca023-bd79-4a57-9c3a-804810642fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.553 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[961e7163-5290-478b-8ce6-a76ee1ff27bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.587 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[66480a1e-93df-4c4f-8953-34cf6ce6f6a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.623 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aa36075d-ade7-431b-9ea3-dc758cb89123]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6669b7a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:9e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713790, 'reachable_time': 32755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306739, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.651 248514 INFO nova.virt.libvirt.driver [-] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Instance destroyed successfully.#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.652 248514 DEBUG nova.objects.instance [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lazy-loading 'resources' on Instance uuid b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.661 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c60a74-f9e3-4e5b-bb3f-1ef11fda70d9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6669b7a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713803, 'tstamp': 713803}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306744, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6669b7a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713806, 'tstamp': 713806}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306744, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.664 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6669b7a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.666 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.673 248514 DEBUG nova.virt.libvirt.vif [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-582157972',display_name='tempest-ListServersNegativeTestJSON-server-582157972-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-582157972-1',id=57,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f0a98b431c940d98cf7e91fd7bdea03',ramdisk_id='',reservation_id='r-1h3e6d75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-67858047',owner_user_name='tempest-ListServersNegativeTestJSON-67858047-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:29:23Z,user_data=None,user_id='3b4cd12d84d34c95ac78f304b6e7546d',uuid=b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.674 248514 DEBUG nova.network.os_vif_util [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converting VIF {"id": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "address": "fa:16:3e:ad:c1:45", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0518450-5f", "ovs_interfaceid": "c0518450-5f7c-4fa0-bf72-59ebcb5be073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.675 248514 DEBUG nova.network.os_vif_util [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:c1:45,bridge_name='br-int',has_traffic_filtering=True,id=c0518450-5f7c-4fa0-bf72-59ebcb5be073,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0518450-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:29:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.676 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6669b7a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.677 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:29:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.677 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6669b7a-70, col_values=(('external_ids', {'iface-id': 'fc5b13bf-8bf9-4192-a467-a89c8b6706fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.676 248514 DEBUG os_vif [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:c1:45,bridge_name='br-int',has_traffic_filtering=True,id=c0518450-5f7c-4fa0-bf72-59ebcb5be073,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0518450-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:29:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:29.678 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.680 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.681 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0518450-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.682 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.683 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.684 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:29 np0005558241 nova_compute[248510]: 2025-12-13 08:29:29.687 248514 INFO os_vif [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:c1:45,bridge_name='br-int',has_traffic_filtering=True,id=c0518450-5f7c-4fa0-bf72-59ebcb5be073,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0518450-5f')#033[00m
Dec 13 03:29:30 np0005558241 nova_compute[248510]: 2025-12-13 08:29:30.050 248514 INFO nova.virt.libvirt.driver [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Deleting instance files /var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_del#033[00m
Dec 13 03:29:30 np0005558241 nova_compute[248510]: 2025-12-13 08:29:30.052 248514 INFO nova.virt.libvirt.driver [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Deletion of /var/lib/nova/instances/b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d_del complete#033[00m
Dec 13 03:29:30 np0005558241 nova_compute[248510]: 2025-12-13 08:29:30.130 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:30 np0005558241 nova_compute[248510]: 2025-12-13 08:29:30.254 248514 INFO nova.compute.manager [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:29:30 np0005558241 nova_compute[248510]: 2025-12-13 08:29:30.255 248514 DEBUG oslo.service.loopingcall [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:29:30 np0005558241 nova_compute[248510]: 2025-12-13 08:29:30.255 248514 DEBUG nova.compute.manager [-] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:29:30 np0005558241 nova_compute[248510]: 2025-12-13 08:29:30.256 248514 DEBUG nova.network.neutron [-] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:29:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:29:31 np0005558241 nova_compute[248510]: 2025-12-13 08:29:31.017 248514 DEBUG nova.network.neutron [-] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:29:31 np0005558241 nova_compute[248510]: 2025-12-13 08:29:31.046 248514 INFO nova.compute.manager [-] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Took 0.79 seconds to deallocate network for instance.#033[00m
Dec 13 03:29:31 np0005558241 nova_compute[248510]: 2025-12-13 08:29:31.112 248514 DEBUG oslo_concurrency.lockutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:31 np0005558241 nova_compute[248510]: 2025-12-13 08:29:31.112 248514 DEBUG oslo_concurrency.lockutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:31 np0005558241 nova_compute[248510]: 2025-12-13 08:29:31.201 248514 DEBUG nova.compute.manager [req-c26de796-ac97-45a4-9675-57b51e89eb85 req-f5c60551-f274-4e74-9cc2-649526a33d8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Received event network-vif-deleted-c0518450-5f7c-4fa0-bf72-59ebcb5be073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:31 np0005558241 nova_compute[248510]: 2025-12-13 08:29:31.225 248514 DEBUG oslo_concurrency.processutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 181 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.0 MiB/s wr, 253 op/s
Dec 13 03:29:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:29:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2166302626' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:29:31 np0005558241 nova_compute[248510]: 2025-12-13 08:29:31.807 248514 DEBUG oslo_concurrency.processutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:31 np0005558241 nova_compute[248510]: 2025-12-13 08:29:31.815 248514 DEBUG nova.compute.provider_tree [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:29:31 np0005558241 nova_compute[248510]: 2025-12-13 08:29:31.834 248514 DEBUG nova.scheduler.client.report [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:29:31 np0005558241 nova_compute[248510]: 2025-12-13 08:29:31.878 248514 DEBUG oslo_concurrency.lockutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:31 np0005558241 nova_compute[248510]: 2025-12-13 08:29:31.916 248514 INFO nova.scheduler.client.report [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Deleted allocations for instance b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d#033[00m
Dec 13 03:29:32 np0005558241 nova_compute[248510]: 2025-12-13 08:29:32.005 248514 DEBUG oslo_concurrency.lockutils [None req-e0cce4a2-4fe4-4a70-a9fd-290f52eacb2a 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:29:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:29:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:29:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:29:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:29:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:29:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:29:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:29:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:29:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:29:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:29:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:29:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:29:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:29:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:29:32 np0005558241 podman[306934]: 2025-12-13 08:29:32.838911765 +0000 UTC m=+0.047091843 container create f25641a24bae3a66f2d82dbda4ffd6170b7990f87e67368de269ca1b27795a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:29:32 np0005558241 systemd[1]: Started libpod-conmon-f25641a24bae3a66f2d82dbda4ffd6170b7990f87e67368de269ca1b27795a23.scope.
Dec 13 03:29:32 np0005558241 podman[306934]: 2025-12-13 08:29:32.81681326 +0000 UTC m=+0.024993358 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:29:32 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:29:32 np0005558241 podman[306934]: 2025-12-13 08:29:32.954028765 +0000 UTC m=+0.162208833 container init f25641a24bae3a66f2d82dbda4ffd6170b7990f87e67368de269ca1b27795a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 03:29:32 np0005558241 podman[306934]: 2025-12-13 08:29:32.963666083 +0000 UTC m=+0.171846131 container start f25641a24bae3a66f2d82dbda4ffd6170b7990f87e67368de269ca1b27795a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 03:29:32 np0005558241 podman[306934]: 2025-12-13 08:29:32.967022566 +0000 UTC m=+0.175202644 container attach f25641a24bae3a66f2d82dbda4ffd6170b7990f87e67368de269ca1b27795a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:29:32 np0005558241 blissful_perlman[306950]: 167 167
Dec 13 03:29:32 np0005558241 systemd[1]: libpod-f25641a24bae3a66f2d82dbda4ffd6170b7990f87e67368de269ca1b27795a23.scope: Deactivated successfully.
Dec 13 03:29:33 np0005558241 podman[306955]: 2025-12-13 08:29:33.020868923 +0000 UTC m=+0.031900667 container died f25641a24bae3a66f2d82dbda4ffd6170b7990f87e67368de269ca1b27795a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:29:33 np0005558241 systemd[1]: var-lib-containers-storage-overlay-cf99735017157d0fb3ed06b9e8586a470ff55e1a60b5eafc8da2c3c2837fad39-merged.mount: Deactivated successfully.
Dec 13 03:29:33 np0005558241 podman[306955]: 2025-12-13 08:29:33.063333571 +0000 UTC m=+0.074365295 container remove f25641a24bae3a66f2d82dbda4ffd6170b7990f87e67368de269ca1b27795a23 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:29:33 np0005558241 systemd[1]: libpod-conmon-f25641a24bae3a66f2d82dbda4ffd6170b7990f87e67368de269ca1b27795a23.scope: Deactivated successfully.
Dec 13 03:29:33 np0005558241 podman[306977]: 2025-12-13 08:29:33.287728278 +0000 UTC m=+0.054318862 container create 56f48e2206018cacf1272ee114ef15e0d71ffebc9abe177f83b62bb114ecd84a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mclean, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 03:29:33 np0005558241 systemd[1]: Started libpod-conmon-56f48e2206018cacf1272ee114ef15e0d71ffebc9abe177f83b62bb114ecd84a.scope.
Dec 13 03:29:33 np0005558241 podman[306977]: 2025-12-13 08:29:33.264938875 +0000 UTC m=+0.031529489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:29:33 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:29:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767f7872f5d9b25be81bd1247c64a2fd6eab95b4eb135bb4866572614a0b72df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:29:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767f7872f5d9b25be81bd1247c64a2fd6eab95b4eb135bb4866572614a0b72df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:29:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767f7872f5d9b25be81bd1247c64a2fd6eab95b4eb135bb4866572614a0b72df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:29:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767f7872f5d9b25be81bd1247c64a2fd6eab95b4eb135bb4866572614a0b72df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:29:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767f7872f5d9b25be81bd1247c64a2fd6eab95b4eb135bb4866572614a0b72df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:29:33 np0005558241 podman[306977]: 2025-12-13 08:29:33.403285009 +0000 UTC m=+0.169875613 container init 56f48e2206018cacf1272ee114ef15e0d71ffebc9abe177f83b62bb114ecd84a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mclean, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 03:29:33 np0005558241 podman[306977]: 2025-12-13 08:29:33.410987249 +0000 UTC m=+0.177577833 container start 56f48e2206018cacf1272ee114ef15e0d71ffebc9abe177f83b62bb114ecd84a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mclean, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 03:29:33 np0005558241 podman[306977]: 2025-12-13 08:29:33.415613023 +0000 UTC m=+0.182203607 container attach 56f48e2206018cacf1272ee114ef15e0d71ffebc9abe177f83b62bb114ecd84a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mclean, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:29:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1963: 321 pgs: 321 active+clean; 154 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 39 KiB/s wr, 242 op/s
Dec 13 03:29:33 np0005558241 happy_mclean[306993]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:29:33 np0005558241 happy_mclean[306993]: --> All data devices are unavailable
Dec 13 03:29:33 np0005558241 systemd[1]: libpod-56f48e2206018cacf1272ee114ef15e0d71ffebc9abe177f83b62bb114ecd84a.scope: Deactivated successfully.
Dec 13 03:29:33 np0005558241 podman[306977]: 2025-12-13 08:29:33.984795107 +0000 UTC m=+0.751385691 container died 56f48e2206018cacf1272ee114ef15e0d71ffebc9abe177f83b62bb114ecd84a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mclean, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Dec 13 03:29:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-767f7872f5d9b25be81bd1247c64a2fd6eab95b4eb135bb4866572614a0b72df-merged.mount: Deactivated successfully.
Dec 13 03:29:34 np0005558241 podman[306977]: 2025-12-13 08:29:34.033047548 +0000 UTC m=+0.799638132 container remove 56f48e2206018cacf1272ee114ef15e0d71ffebc9abe177f83b62bb114ecd84a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mclean, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Dec 13 03:29:34 np0005558241 systemd[1]: libpod-conmon-56f48e2206018cacf1272ee114ef15e0d71ffebc9abe177f83b62bb114ecd84a.scope: Deactivated successfully.
Dec 13 03:29:34 np0005558241 podman[307088]: 2025-12-13 08:29:34.568263383 +0000 UTC m=+0.052618219 container create aeadfd4be36903141da38b2bd1c3bad48e1f19ed8b3c377e9203af92710810c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_benz, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:29:34 np0005558241 systemd[1]: Started libpod-conmon-aeadfd4be36903141da38b2bd1c3bad48e1f19ed8b3c377e9203af92710810c7.scope.
Dec 13 03:29:34 np0005558241 podman[307088]: 2025-12-13 08:29:34.546314752 +0000 UTC m=+0.030669608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:29:34 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:29:34 np0005558241 podman[307088]: 2025-12-13 08:29:34.662039847 +0000 UTC m=+0.146394713 container init aeadfd4be36903141da38b2bd1c3bad48e1f19ed8b3c377e9203af92710810c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_benz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Dec 13 03:29:34 np0005558241 podman[307088]: 2025-12-13 08:29:34.671632754 +0000 UTC m=+0.155987590 container start aeadfd4be36903141da38b2bd1c3bad48e1f19ed8b3c377e9203af92710810c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:29:34 np0005558241 inspiring_benz[307104]: 167 167
Dec 13 03:29:34 np0005558241 systemd[1]: libpod-aeadfd4be36903141da38b2bd1c3bad48e1f19ed8b3c377e9203af92710810c7.scope: Deactivated successfully.
Dec 13 03:29:34 np0005558241 podman[307088]: 2025-12-13 08:29:34.680151574 +0000 UTC m=+0.164506410 container attach aeadfd4be36903141da38b2bd1c3bad48e1f19ed8b3c377e9203af92710810c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_benz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 03:29:34 np0005558241 podman[307088]: 2025-12-13 08:29:34.68119047 +0000 UTC m=+0.165545306 container died aeadfd4be36903141da38b2bd1c3bad48e1f19ed8b3c377e9203af92710810c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_benz, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:29:34 np0005558241 nova_compute[248510]: 2025-12-13 08:29:34.683 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-94a73f73964b2e6b9b35bfb74e0cbc0a6f661be39fb8d598da99cee150d22736-merged.mount: Deactivated successfully.
Dec 13 03:29:34 np0005558241 podman[307088]: 2025-12-13 08:29:34.767380406 +0000 UTC m=+0.251735242 container remove aeadfd4be36903141da38b2bd1c3bad48e1f19ed8b3c377e9203af92710810c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:29:34 np0005558241 systemd[1]: libpod-conmon-aeadfd4be36903141da38b2bd1c3bad48e1f19ed8b3c377e9203af92710810c7.scope: Deactivated successfully.
Dec 13 03:29:34 np0005558241 podman[307127]: 2025-12-13 08:29:34.970720474 +0000 UTC m=+0.044103030 container create ccb113c3ebe8aa564045df71093ab4e4b38dfd41ec886d0e72f9c11f6766952b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_engelbart, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 03:29:35 np0005558241 systemd[1]: Started libpod-conmon-ccb113c3ebe8aa564045df71093ab4e4b38dfd41ec886d0e72f9c11f6766952b.scope.
Dec 13 03:29:35 np0005558241 podman[307127]: 2025-12-13 08:29:34.952080824 +0000 UTC m=+0.025463400 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:29:35 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:29:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6351239223a19975d43c2593b75d008b37b16912e33fc0a55b5c3acf4bbe9664/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:29:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6351239223a19975d43c2593b75d008b37b16912e33fc0a55b5c3acf4bbe9664/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:29:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6351239223a19975d43c2593b75d008b37b16912e33fc0a55b5c3acf4bbe9664/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:29:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6351239223a19975d43c2593b75d008b37b16912e33fc0a55b5c3acf4bbe9664/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:29:35 np0005558241 podman[307127]: 2025-12-13 08:29:35.079268882 +0000 UTC m=+0.152651458 container init ccb113c3ebe8aa564045df71093ab4e4b38dfd41ec886d0e72f9c11f6766952b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec 13 03:29:35 np0005558241 podman[307127]: 2025-12-13 08:29:35.088694114 +0000 UTC m=+0.162076670 container start ccb113c3ebe8aa564045df71093ab4e4b38dfd41ec886d0e72f9c11f6766952b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:29:35 np0005558241 podman[307127]: 2025-12-13 08:29:35.095428971 +0000 UTC m=+0.168811527 container attach ccb113c3ebe8aa564045df71093ab4e4b38dfd41ec886d0e72f9c11f6766952b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 03:29:35 np0005558241 nova_compute[248510]: 2025-12-13 08:29:35.132 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:35Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:20:15 10.100.0.11
Dec 13 03:29:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:35Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:20:15 10.100.0.11
Dec 13 03:29:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]: {
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:    "0": [
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:        {
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "devices": [
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "/dev/loop3"
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            ],
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "lv_name": "ceph_lv0",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "lv_size": "21470642176",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "name": "ceph_lv0",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "tags": {
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.cluster_name": "ceph",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.crush_device_class": "",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.encrypted": "0",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.objectstore": "bluestore",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.osd_id": "0",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.type": "block",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.vdo": "0",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.with_tpm": "0"
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            },
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "type": "block",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "vg_name": "ceph_vg0"
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:        }
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:    ],
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:    "1": [
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:        {
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "devices": [
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "/dev/loop4"
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            ],
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "lv_name": "ceph_lv1",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "lv_size": "21470642176",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "name": "ceph_lv1",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "tags": {
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.cluster_name": "ceph",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.crush_device_class": "",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.encrypted": "0",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.objectstore": "bluestore",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.osd_id": "1",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.type": "block",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.vdo": "0",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.with_tpm": "0"
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            },
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "type": "block",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "vg_name": "ceph_vg1"
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:        }
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:    ],
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:    "2": [
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:        {
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "devices": [
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "/dev/loop5"
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            ],
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "lv_name": "ceph_lv2",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "lv_size": "21470642176",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "name": "ceph_lv2",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "tags": {
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.cluster_name": "ceph",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.crush_device_class": "",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.encrypted": "0",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.objectstore": "bluestore",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.osd_id": "2",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.type": "block",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.vdo": "0",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:                "ceph.with_tpm": "0"
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            },
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "type": "block",
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:            "vg_name": "ceph_vg2"
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:        }
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]:    ]
Dec 13 03:29:35 np0005558241 exciting_engelbart[307143]: }
Dec 13 03:29:35 np0005558241 systemd[1]: libpod-ccb113c3ebe8aa564045df71093ab4e4b38dfd41ec886d0e72f9c11f6766952b.scope: Deactivated successfully.
Dec 13 03:29:35 np0005558241 podman[307127]: 2025-12-13 08:29:35.483038544 +0000 UTC m=+0.556421100 container died ccb113c3ebe8aa564045df71093ab4e4b38dfd41ec886d0e72f9c11f6766952b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_engelbart, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:29:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 135 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 184 KiB/s wr, 245 op/s
Dec 13 03:29:35 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6351239223a19975d43c2593b75d008b37b16912e33fc0a55b5c3acf4bbe9664-merged.mount: Deactivated successfully.
Dec 13 03:29:35 np0005558241 podman[307127]: 2025-12-13 08:29:35.541022955 +0000 UTC m=+0.614405531 container remove ccb113c3ebe8aa564045df71093ab4e4b38dfd41ec886d0e72f9c11f6766952b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 03:29:35 np0005558241 systemd[1]: libpod-conmon-ccb113c3ebe8aa564045df71093ab4e4b38dfd41ec886d0e72f9c11f6766952b.scope: Deactivated successfully.
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.005 248514 DEBUG oslo_concurrency.lockutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.007 248514 DEBUG oslo_concurrency.lockutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.008 248514 DEBUG oslo_concurrency.lockutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.009 248514 DEBUG oslo_concurrency.lockutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.010 248514 DEBUG oslo_concurrency.lockutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.013 248514 INFO nova.compute.manager [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Terminating instance#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.015 248514 DEBUG nova.compute.manager [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:29:36 np0005558241 kernel: tapdd059dc2-8c (unregistering): left promiscuous mode
Dec 13 03:29:36 np0005558241 NetworkManager[50376]: <info>  [1765614576.0778] device (tapdd059dc2-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.088 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:36 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:36Z|00547|binding|INFO|Releasing lport dd059dc2-8c1a-49c4-b820-7cf31293c210 from this chassis (sb_readonly=0)
Dec 13 03:29:36 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:36Z|00548|binding|INFO|Setting lport dd059dc2-8c1a-49c4-b820-7cf31293c210 down in Southbound
Dec 13 03:29:36 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:36Z|00549|binding|INFO|Removing iface tapdd059dc2-8c ovn-installed in OVS
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.090 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:36 np0005558241 podman[307227]: 2025-12-13 08:29:36.096204414 +0000 UTC m=+0.069523767 container create d518fab62125138ce5132133a65ae1b6b23fe2f0c5b62ceaa15fa00969e690a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bartik, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.096 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:20:15 10.100.0.11'], port_security=['fa:16:3e:5d:20:15 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'b9e0d5ab-483f-49a1-901a-c36f31ab710f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f0a98b431c940d98cf7e91fd7bdea03', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63655ec3-e19a-48cf-bc09-d1f4e53966e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3562b8c-e174-48df-b01e-6261323c4619, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=dd059dc2-8c1a-49c4-b820-7cf31293c210) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.100 158419 INFO neutron.agent.ovn.metadata.agent [-] Port dd059dc2-8c1a-49c4-b820-7cf31293c210 in datapath f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 unbound from our chassis#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.103 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6669b7a-7d21-4e4e-96cd-84193f6fdcf3#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.109 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:36 np0005558241 systemd[1]: Started libpod-conmon-d518fab62125138ce5132133a65ae1b6b23fe2f0c5b62ceaa15fa00969e690a7.scope.
Dec 13 03:29:36 np0005558241 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Dec 13 03:29:36 np0005558241 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003a.scope: Consumed 12.899s CPU time.
Dec 13 03:29:36 np0005558241 systemd-machined[210538]: Machine qemu-66-instance-0000003a terminated.
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.137 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fa916d65-f844-4e55-ae00-a5d9a2b00320]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:36 np0005558241 podman[307227]: 2025-12-13 08:29:36.057621862 +0000 UTC m=+0.030941315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:29:36 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:29:36 np0005558241 podman[307227]: 2025-12-13 08:29:36.180793411 +0000 UTC m=+0.154112844 container init d518fab62125138ce5132133a65ae1b6b23fe2f0c5b62ceaa15fa00969e690a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 03:29:36 np0005558241 podman[307227]: 2025-12-13 08:29:36.196305533 +0000 UTC m=+0.169624876 container start d518fab62125138ce5132133a65ae1b6b23fe2f0c5b62ceaa15fa00969e690a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bartik, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.195 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2d362cd2-9bad-46f2-b5b8-205c2549a075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:36 np0005558241 podman[307227]: 2025-12-13 08:29:36.200148578 +0000 UTC m=+0.173467991 container attach d518fab62125138ce5132133a65ae1b6b23fe2f0c5b62ceaa15fa00969e690a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bartik, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.200 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7f9ecb8b-3bbf-46c8-971c-1c4310e9cf30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:36 np0005558241 strange_bartik[307249]: 167 167
Dec 13 03:29:36 np0005558241 systemd[1]: libpod-d518fab62125138ce5132133a65ae1b6b23fe2f0c5b62ceaa15fa00969e690a7.scope: Deactivated successfully.
Dec 13 03:29:36 np0005558241 podman[307227]: 2025-12-13 08:29:36.204469405 +0000 UTC m=+0.177788748 container died d518fab62125138ce5132133a65ae1b6b23fe2f0c5b62ceaa15fa00969e690a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bartik, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:29:36 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7396f9fdae65fa922ac99dc1c38b68431d3bf19013a15af307ddb23f26a576ff-merged.mount: Deactivated successfully.
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.246 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:36 np0005558241 podman[307227]: 2025-12-13 08:29:36.251742081 +0000 UTC m=+0.225061444 container remove d518fab62125138ce5132133a65ae1b6b23fe2f0c5b62ceaa15fa00969e690a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.254 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.255 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e28b9d05-00ae-4568-a694-1f75b6ed1c67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.267 248514 INFO nova.virt.libvirt.driver [-] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Instance destroyed successfully.#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.268 248514 DEBUG nova.objects.instance [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lazy-loading 'resources' on Instance uuid b9e0d5ab-483f-49a1-901a-c36f31ab710f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:36 np0005558241 systemd[1]: libpod-conmon-d518fab62125138ce5132133a65ae1b6b23fe2f0c5b62ceaa15fa00969e690a7.scope: Deactivated successfully.
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.277 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2662d298-fb67-4591-8c7b-1bf1cc5be4b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6669b7a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8a:9e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713790, 'reachable_time': 32755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307278, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.300 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[beef004a-e44c-46c9-a5d6-3267c0ec075b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6669b7a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713803, 'tstamp': 713803}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307283, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6669b7a-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713806, 'tstamp': 713806}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307283, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.303 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6669b7a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.305 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.310 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.310 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6669b7a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.311 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.312 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6669b7a-70, col_values=(('external_ids', {'iface-id': 'fc5b13bf-8bf9-4192-a467-a89c8b6706fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.312 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.327 248514 DEBUG nova.virt.libvirt.vif [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-582157972',display_name='tempest-ListServersNegativeTestJSON-server-582157972-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-582157972-2',id=58,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-12-13T08:29:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f0a98b431c940d98cf7e91fd7bdea03',ramdisk_id='',reservation_id='r-1h3e6d75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-67858047',owner_user_name='tempest-ListServersNegativeTestJSON-67858047-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:29:23Z,user_data=None,user_id='3b4cd12d84d34c95ac78f304b6e7546d',uuid=b9e0d5ab-483f-49a1-901a-c36f31ab710f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "address": "fa:16:3e:5d:20:15", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd059dc2-8c", "ovs_interfaceid": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.327 248514 DEBUG nova.network.os_vif_util [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converting VIF {"id": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "address": "fa:16:3e:5d:20:15", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd059dc2-8c", "ovs_interfaceid": "dd059dc2-8c1a-49c4-b820-7cf31293c210", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.328 248514 DEBUG nova.network.os_vif_util [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:20:15,bridge_name='br-int',has_traffic_filtering=True,id=dd059dc2-8c1a-49c4-b820-7cf31293c210,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd059dc2-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.328 248514 DEBUG os_vif [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:20:15,bridge_name='br-int',has_traffic_filtering=True,id=dd059dc2-8c1a-49c4-b820-7cf31293c210,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd059dc2-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.330 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.330 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd059dc2-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.332 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.333 248514 DEBUG oslo_concurrency.lockutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "1c940191-84c7-423e-901a-233b14c2acec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.333 248514 DEBUG oslo_concurrency.lockutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.334 248514 DEBUG oslo_concurrency.lockutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "1c940191-84c7-423e-901a-233b14c2acec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.334 248514 DEBUG oslo_concurrency.lockutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.334 248514 DEBUG oslo_concurrency.lockutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.335 248514 INFO nova.compute.manager [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Terminating instance#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.336 248514 DEBUG nova.compute.manager [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.336 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.340 248514 INFO os_vif [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:20:15,bridge_name='br-int',has_traffic_filtering=True,id=dd059dc2-8c1a-49c4-b820-7cf31293c210,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd059dc2-8c')#033[00m
Dec 13 03:29:36 np0005558241 kernel: tap41420fc6-e9 (unregistering): left promiscuous mode
Dec 13 03:29:36 np0005558241 NetworkManager[50376]: <info>  [1765614576.3811] device (tap41420fc6-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:29:36 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:36Z|00550|binding|INFO|Releasing lport 41420fc6-e900-4745-a3c1-4f2541c9e1f5 from this chassis (sb_readonly=0)
Dec 13 03:29:36 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:36Z|00551|binding|INFO|Setting lport 41420fc6-e900-4745-a3c1-4f2541c9e1f5 down in Southbound
Dec 13 03:29:36 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:36Z|00552|binding|INFO|Removing iface tap41420fc6-e9 ovn-installed in OVS
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.385 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.389 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.393 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:08:03 10.100.0.4'], port_security=['fa:16:3e:de:08:03 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1c940191-84c7-423e-901a-233b14c2acec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f0a98b431c940d98cf7e91fd7bdea03', 'neutron:revision_number': '4', 'neutron:security_group_ids': '63655ec3-e19a-48cf-bc09-d1f4e53966e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3562b8c-e174-48df-b01e-6261323c4619, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=41420fc6-e900-4745-a3c1-4f2541c9e1f5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.394 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 41420fc6-e900-4745-a3c1-4f2541c9e1f5 in datapath f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 unbound from our chassis#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.396 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f6669b7a-7d21-4e4e-96cd-84193f6fdcf3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.397 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db99801d-5235-4b1d-af90-1ee1ff497b03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.398 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 namespace which is not needed anymore#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.409 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:36 np0005558241 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Dec 13 03:29:36 np0005558241 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003b.scope: Consumed 11.435s CPU time.
Dec 13 03:29:36 np0005558241 systemd-machined[210538]: Machine qemu-67-instance-0000003b terminated.
Dec 13 03:29:36 np0005558241 podman[307315]: 2025-12-13 08:29:36.48269668 +0000 UTC m=+0.052082516 container create fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_black, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:29:36 np0005558241 systemd[1]: Started libpod-conmon-fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951.scope.
Dec 13 03:29:36 np0005558241 podman[307315]: 2025-12-13 08:29:36.456727689 +0000 UTC m=+0.026113525 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:29:36 np0005558241 NetworkManager[50376]: <info>  [1765614576.5629] manager: (tap41420fc6-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/242)
Dec 13 03:29:36 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.560 248514 DEBUG nova.compute.manager [req-a63aa699-fa13-4ecc-9d64-85f5307ea88c req-88935790-ce08-4e2b-b9fd-e17a00de5cfe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Received event network-vif-unplugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.564 248514 DEBUG oslo_concurrency.lockutils [req-a63aa699-fa13-4ecc-9d64-85f5307ea88c req-88935790-ce08-4e2b-b9fd-e17a00de5cfe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.565 248514 DEBUG oslo_concurrency.lockutils [req-a63aa699-fa13-4ecc-9d64-85f5307ea88c req-88935790-ce08-4e2b-b9fd-e17a00de5cfe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.565 248514 DEBUG oslo_concurrency.lockutils [req-a63aa699-fa13-4ecc-9d64-85f5307ea88c req-88935790-ce08-4e2b-b9fd-e17a00de5cfe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.566 248514 DEBUG nova.compute.manager [req-a63aa699-fa13-4ecc-9d64-85f5307ea88c req-88935790-ce08-4e2b-b9fd-e17a00de5cfe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] No waiting events found dispatching network-vif-unplugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:29:36 np0005558241 neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3[306343]: [NOTICE]   (306348) : haproxy version is 2.8.14-c23fe91
Dec 13 03:29:36 np0005558241 neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3[306343]: [NOTICE]   (306348) : path to executable is /usr/sbin/haproxy
Dec 13 03:29:36 np0005558241 neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3[306343]: [WARNING]  (306348) : Exiting Master process...
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.566 248514 DEBUG nova.compute.manager [req-a63aa699-fa13-4ecc-9d64-85f5307ea88c req-88935790-ce08-4e2b-b9fd-e17a00de5cfe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Received event network-vif-unplugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:29:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15cafa148ff8e7f5cb76fe0b265a7fb3a964f8fe64d8d5d7a539f69a4f04b0df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:29:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15cafa148ff8e7f5cb76fe0b265a7fb3a964f8fe64d8d5d7a539f69a4f04b0df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:29:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15cafa148ff8e7f5cb76fe0b265a7fb3a964f8fe64d8d5d7a539f69a4f04b0df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:29:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15cafa148ff8e7f5cb76fe0b265a7fb3a964f8fe64d8d5d7a539f69a4f04b0df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:29:36 np0005558241 neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3[306343]: [ALERT]    (306348) : Current worker (306350) exited with code 143 (Terminated)
Dec 13 03:29:36 np0005558241 neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3[306343]: [WARNING]  (306348) : All workers exited. Exiting... (0)
Dec 13 03:29:36 np0005558241 systemd[1]: libpod-401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d.scope: Deactivated successfully.
Dec 13 03:29:36 np0005558241 podman[307345]: 2025-12-13 08:29:36.577270853 +0000 UTC m=+0.069928936 container died 401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.589 248514 INFO nova.virt.libvirt.driver [-] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Instance destroyed successfully.#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.589 248514 DEBUG nova.objects.instance [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lazy-loading 'resources' on Instance uuid 1c940191-84c7-423e-901a-233b14c2acec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:36 np0005558241 podman[307315]: 2025-12-13 08:29:36.600332331 +0000 UTC m=+0.169718187 container init fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:29:36 np0005558241 podman[307315]: 2025-12-13 08:29:36.616151462 +0000 UTC m=+0.185537288 container start fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_black, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 03:29:36 np0005558241 podman[307315]: 2025-12-13 08:29:36.620114629 +0000 UTC m=+0.189500475 container attach fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_black, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.623 248514 DEBUG nova.virt.libvirt.vif [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-582157972',display_name='tempest-ListServersNegativeTestJSON-server-582157972-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-582157972-3',id=59,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-12-13T08:29:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f0a98b431c940d98cf7e91fd7bdea03',ramdisk_id='',reservation_id='r-1h3e6d75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-67858047',owner_user_name='tempest-ListServersNegativeTestJSON-67858047-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:29:25Z,user_data=None,user_id='3b4cd12d84d34c95ac78f304b6e7546d',uuid=1c940191-84c7-423e-901a-233b14c2acec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.624 248514 DEBUG nova.network.os_vif_util [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converting VIF {"id": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "address": "fa:16:3e:de:08:03", "network": {"id": "f6669b7a-7d21-4e4e-96cd-84193f6fdcf3", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1778846175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f0a98b431c940d98cf7e91fd7bdea03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41420fc6-e9", "ovs_interfaceid": "41420fc6-e900-4745-a3c1-4f2541c9e1f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.625 248514 DEBUG nova.network.os_vif_util [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:08:03,bridge_name='br-int',has_traffic_filtering=True,id=41420fc6-e900-4745-a3c1-4f2541c9e1f5,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41420fc6-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.625 248514 DEBUG os_vif [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:08:03,bridge_name='br-int',has_traffic_filtering=True,id=41420fc6-e900-4745-a3c1-4f2541c9e1f5,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41420fc6-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.627 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.627 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41420fc6-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:36 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d-userdata-shm.mount: Deactivated successfully.
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.630 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.632 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:29:36 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7554b3fff23359996616576e29b124b752a5d83d38de1b07b4ecde93acadd26c-merged.mount: Deactivated successfully.
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.634 248514 INFO os_vif [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:08:03,bridge_name='br-int',has_traffic_filtering=True,id=41420fc6-e900-4745-a3c1-4f2541c9e1f5,network=Network(f6669b7a-7d21-4e4e-96cd-84193f6fdcf3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41420fc6-e9')#033[00m
Dec 13 03:29:36 np0005558241 podman[307345]: 2025-12-13 08:29:36.643838005 +0000 UTC m=+0.136496098 container cleanup 401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:29:36 np0005558241 systemd[1]: libpod-conmon-401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d.scope: Deactivated successfully.
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.702 248514 INFO nova.virt.libvirt.driver [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Deleting instance files /var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f_del#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.705 248514 INFO nova.virt.libvirt.driver [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Deletion of /var/lib/nova/instances/b9e0d5ab-483f-49a1-901a-c36f31ab710f_del complete#033[00m
Dec 13 03:29:36 np0005558241 podman[307406]: 2025-12-13 08:29:36.736414279 +0000 UTC m=+0.060180436 container remove 401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.744 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a8972899-a030-4978-a4dc-4f98860980a1]: (4, ('Sat Dec 13 08:29:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 (401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d)\n401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d\nSat Dec 13 08:29:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 (401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d)\n401e4e9a0c060ade57bc50c14d1373884a13602f72050b18bf501f22de9f269d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.747 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[096c2ab0-1ec8-46ec-bea5-7f00eb51f825]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.748 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6669b7a-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.749 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:36 np0005558241 kernel: tapf6669b7a-70: left promiscuous mode
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.768 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.772 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9c97d8c8-e72b-413d-a299-0c9dda0d616b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.781 248514 INFO nova.compute.manager [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.782 248514 DEBUG oslo.service.loopingcall [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.783 248514 DEBUG nova.compute.manager [-] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.783 248514 DEBUG nova.network.neutron [-] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.789 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed45a98-43c3-40e0-8eb3-00b2645c8296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.791 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[be669d8a-2ac2-48cc-9535-a50510b9582e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.814 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd34303-166d-473c-962d-002892c51990]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713781, 'reachable_time': 38343, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307424, 'error': None, 'target': 'ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:36 np0005558241 systemd[1]: run-netns-ovnmeta\x2df6669b7a\x2d7d21\x2d4e4e\x2d96cd\x2d84193f6fdcf3.mount: Deactivated successfully.
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.819 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f6669b7a-7d21-4e4e-96cd-84193f6fdcf3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:29:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:36.820 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[24bdd4cf-446b-48d9-81cd-9562700e5968]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.939 248514 INFO nova.virt.libvirt.driver [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Deleting instance files /var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec_del#033[00m
Dec 13 03:29:36 np0005558241 nova_compute[248510]: 2025-12-13 08:29:36.940 248514 INFO nova.virt.libvirt.driver [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Deletion of /var/lib/nova/instances/1c940191-84c7-423e-901a-233b14c2acec_del complete#033[00m
Dec 13 03:29:37 np0005558241 nova_compute[248510]: 2025-12-13 08:29:37.066 248514 INFO nova.compute.manager [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:29:37 np0005558241 nova_compute[248510]: 2025-12-13 08:29:37.067 248514 DEBUG oslo.service.loopingcall [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:29:37 np0005558241 nova_compute[248510]: 2025-12-13 08:29:37.068 248514 DEBUG nova.compute.manager [-] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:29:37 np0005558241 nova_compute[248510]: 2025-12-13 08:29:37.068 248514 DEBUG nova.network.neutron [-] [instance: 1c940191-84c7-423e-901a-233b14c2acec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:29:37 np0005558241 lvm[307496]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:29:37 np0005558241 lvm[307496]: VG ceph_vg0 finished
Dec 13 03:29:37 np0005558241 lvm[307498]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:29:37 np0005558241 lvm[307498]: VG ceph_vg1 finished
Dec 13 03:29:37 np0005558241 lvm[307499]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:29:37 np0005558241 lvm[307499]: VG ceph_vg2 finished
Dec 13 03:29:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 135 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 159 KiB/s wr, 178 op/s
Dec 13 03:29:37 np0005558241 eloquent_black[307361]: {}
Dec 13 03:29:37 np0005558241 systemd[1]: libpod-fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951.scope: Deactivated successfully.
Dec 13 03:29:37 np0005558241 systemd[1]: libpod-fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951.scope: Consumed 1.475s CPU time.
Dec 13 03:29:37 np0005558241 podman[307502]: 2025-12-13 08:29:37.59000461 +0000 UTC m=+0.026399552 container died fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_black, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 03:29:37 np0005558241 systemd[1]: var-lib-containers-storage-overlay-15cafa148ff8e7f5cb76fe0b265a7fb3a964f8fe64d8d5d7a539f69a4f04b0df-merged.mount: Deactivated successfully.
Dec 13 03:29:37 np0005558241 podman[307502]: 2025-12-13 08:29:37.630587662 +0000 UTC m=+0.066982594 container remove fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:29:37 np0005558241 systemd[1]: libpod-conmon-fcc55f7e5d855a7fc49942230e4e32942a51060b68493c1617931b0d009d1951.scope: Deactivated successfully.
Dec 13 03:29:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:29:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:29:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:29:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:29:37 np0005558241 nova_compute[248510]: 2025-12-13 08:29:37.900 248514 DEBUG nova.network.neutron [-] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:29:37 np0005558241 nova_compute[248510]: 2025-12-13 08:29:37.922 248514 INFO nova.compute.manager [-] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Took 0.85 seconds to deallocate network for instance.#033[00m
Dec 13 03:29:37 np0005558241 nova_compute[248510]: 2025-12-13 08:29:37.984 248514 DEBUG oslo_concurrency.lockutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:37 np0005558241 nova_compute[248510]: 2025-12-13 08:29:37.984 248514 DEBUG oslo_concurrency.lockutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.034 248514 DEBUG nova.compute.manager [req-fc8f03ff-c661-4622-9807-3eaf8ad83efc req-2fa2900d-1113-4ea5-95e5-4373e382cc47 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Received event network-vif-deleted-41420fc6-e900-4745-a3c1-4f2541c9e1f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.074 248514 DEBUG oslo_concurrency.processutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:29:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/428194467' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.624 248514 DEBUG oslo_concurrency.processutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.633 248514 DEBUG nova.compute.provider_tree [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.671 248514 DEBUG nova.scheduler.client.report [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.700 248514 DEBUG oslo_concurrency.lockutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.708 248514 DEBUG nova.compute.manager [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Received event network-vif-plugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.708 248514 DEBUG oslo_concurrency.lockutils [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.708 248514 DEBUG oslo_concurrency.lockutils [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.708 248514 DEBUG oslo_concurrency.lockutils [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.708 248514 DEBUG nova.compute.manager [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] No waiting events found dispatching network-vif-plugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.709 248514 WARNING nova.compute.manager [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Received unexpected event network-vif-plugged-dd059dc2-8c1a-49c4-b820-7cf31293c210 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.709 248514 DEBUG nova.compute.manager [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Received event network-vif-unplugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.709 248514 DEBUG oslo_concurrency.lockutils [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1c940191-84c7-423e-901a-233b14c2acec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.709 248514 DEBUG oslo_concurrency.lockutils [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.709 248514 DEBUG oslo_concurrency.lockutils [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.709 248514 DEBUG nova.compute.manager [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] No waiting events found dispatching network-vif-unplugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.710 248514 WARNING nova.compute.manager [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Received unexpected event network-vif-unplugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.710 248514 DEBUG nova.compute.manager [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Received event network-vif-plugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.710 248514 DEBUG oslo_concurrency.lockutils [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1c940191-84c7-423e-901a-233b14c2acec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.710 248514 DEBUG oslo_concurrency.lockutils [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.710 248514 DEBUG oslo_concurrency.lockutils [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.710 248514 DEBUG nova.compute.manager [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] No waiting events found dispatching network-vif-plugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.710 248514 WARNING nova.compute.manager [req-27524206-7a64-4a3a-bd52-7e781ce8d7b8 req-3c6b540d-b9fd-45e5-8471-de4ede8db627 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Received unexpected event network-vif-plugged-41420fc6-e900-4745-a3c1-4f2541c9e1f5 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:29:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:29:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.729 248514 INFO nova.scheduler.client.report [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Deleted allocations for instance 1c940191-84c7-423e-901a-233b14c2acec#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.818 248514 DEBUG oslo_concurrency.lockutils [None req-c8d278e9-8160-4e48-941d-682ebdb0611b 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "1c940191-84c7-423e-901a-233b14c2acec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:38 np0005558241 nova_compute[248510]: 2025-12-13 08:29:38.984 248514 DEBUG nova.network.neutron [-] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:29:39 np0005558241 nova_compute[248510]: 2025-12-13 08:29:39.006 248514 INFO nova.compute.manager [-] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Took 2.22 seconds to deallocate network for instance.#033[00m
Dec 13 03:29:39 np0005558241 nova_compute[248510]: 2025-12-13 08:29:39.080 248514 DEBUG oslo_concurrency.lockutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:39 np0005558241 nova_compute[248510]: 2025-12-13 08:29:39.081 248514 DEBUG oslo_concurrency.lockutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:39 np0005558241 nova_compute[248510]: 2025-12-13 08:29:39.140 248514 DEBUG oslo_concurrency.processutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1966: 321 pgs: 321 active+clean; 88 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.6 MiB/s wr, 260 op/s
Dec 13 03:29:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:29:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1952332824' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:29:39 np0005558241 nova_compute[248510]: 2025-12-13 08:29:39.695 248514 DEBUG oslo_concurrency.processutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:39 np0005558241 nova_compute[248510]: 2025-12-13 08:29:39.704 248514 DEBUG nova.compute.provider_tree [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:29:39 np0005558241 nova_compute[248510]: 2025-12-13 08:29:39.730 248514 DEBUG nova.scheduler.client.report [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:29:39 np0005558241 nova_compute[248510]: 2025-12-13 08:29:39.779 248514 DEBUG oslo_concurrency.lockutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:39 np0005558241 nova_compute[248510]: 2025-12-13 08:29:39.814 248514 INFO nova.scheduler.client.report [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Deleted allocations for instance b9e0d5ab-483f-49a1-901a-c36f31ab710f#033[00m
Dec 13 03:29:39 np0005558241 nova_compute[248510]: 2025-12-13 08:29:39.902 248514 DEBUG oslo_concurrency.lockutils [None req-08b4d715-3a00-491f-b14e-e9bd8b8804a7 3b4cd12d84d34c95ac78f304b6e7546d 1f0a98b431c940d98cf7e91fd7bdea03 - - default default] Lock "b9e0d5ab-483f-49a1-901a-c36f31ab710f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:29:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:29:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:29:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:29:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:29:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:29:40 np0005558241 nova_compute[248510]: 2025-12-13 08:29:40.134 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:29:41 np0005558241 nova_compute[248510]: 2025-12-13 08:29:41.123 248514 DEBUG nova.compute.manager [req-12a01825-d6ba-4cc7-a211-81702060d299 req-087453db-21cd-4676-ac67-1b28cbc95587 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Received event network-vif-deleted-dd059dc2-8c1a-49c4-b820-7cf31293c210 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 41 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.0 MiB/s wr, 163 op/s
Dec 13 03:29:41 np0005558241 nova_compute[248510]: 2025-12-13 08:29:41.632 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:42 np0005558241 nova_compute[248510]: 2025-12-13 08:29:42.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:29:43 np0005558241 nova_compute[248510]: 2025-12-13 08:29:43.377 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:29:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 41 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 2.0 MiB/s wr, 129 op/s
Dec 13 03:29:43 np0005558241 nova_compute[248510]: 2025-12-13 08:29:43.833 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:43 np0005558241 nova_compute[248510]: 2025-12-13 08:29:43.834 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:43 np0005558241 nova_compute[248510]: 2025-12-13 08:29:43.885 248514 DEBUG nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:29:43 np0005558241 nova_compute[248510]: 2025-12-13 08:29:43.998 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:43 np0005558241 nova_compute[248510]: 2025-12-13 08:29:43.999 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:44 np0005558241 nova_compute[248510]: 2025-12-13 08:29:44.007 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:29:44 np0005558241 nova_compute[248510]: 2025-12-13 08:29:44.008 248514 INFO nova.compute.claims [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:29:44 np0005558241 nova_compute[248510]: 2025-12-13 08:29:44.154 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:44 np0005558241 nova_compute[248510]: 2025-12-13 08:29:44.648 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614569.6474895, b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:44 np0005558241 nova_compute[248510]: 2025-12-13 08:29:44.650 248514 INFO nova.compute.manager [-] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:29:44 np0005558241 nova_compute[248510]: 2025-12-13 08:29:44.678 248514 DEBUG nova.compute.manager [None req-2c2fd371-5c54-4808-bce7-d438da2850ff - - - - - -] [instance: b5c9b931-8e5b-45ff-8c88-7a0b15a57a7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:29:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2953493313' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:29:44 np0005558241 nova_compute[248510]: 2025-12-13 08:29:44.717 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:44 np0005558241 nova_compute[248510]: 2025-12-13 08:29:44.724 248514 DEBUG nova.compute.provider_tree [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:29:44 np0005558241 nova_compute[248510]: 2025-12-13 08:29:44.747 248514 DEBUG nova.scheduler.client.report [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:29:44 np0005558241 nova_compute[248510]: 2025-12-13 08:29:44.772 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:44 np0005558241 nova_compute[248510]: 2025-12-13 08:29:44.773 248514 DEBUG nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:29:44 np0005558241 nova_compute[248510]: 2025-12-13 08:29:44.820 248514 DEBUG nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:29:44 np0005558241 nova_compute[248510]: 2025-12-13 08:29:44.821 248514 DEBUG nova.network.neutron [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:29:44 np0005558241 nova_compute[248510]: 2025-12-13 08:29:44.877 248514 INFO nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:29:44 np0005558241 nova_compute[248510]: 2025-12-13 08:29:44.906 248514 DEBUG nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.041 248514 DEBUG nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.043 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.044 248514 INFO nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Creating image(s)#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.067 248514 DEBUG nova.storage.rbd_utils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.094 248514 DEBUG nova.storage.rbd_utils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.121 248514 DEBUG nova.storage.rbd_utils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.125 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.161 248514 DEBUG nova.policy [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4d19f7d5ece8482dab03e4bc02fdf410', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c6718df841f0471ba710516400f126fa', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.164 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.195 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.196 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.196 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.197 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.219 248514 DEBUG nova.storage.rbd_utils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.223 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:29:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 41 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 341 KiB/s rd, 2.0 MiB/s wr, 117 op/s
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.581 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.659 248514 DEBUG nova.storage.rbd_utils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] resizing rbd image 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.739 248514 DEBUG nova.objects.instance [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'migration_context' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.776 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.777 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Ensure instance console log exists: /var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.777 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.778 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:45 np0005558241 nova_compute[248510]: 2025-12-13 08:29:45.778 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:46 np0005558241 nova_compute[248510]: 2025-12-13 08:29:46.215 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:46 np0005558241 nova_compute[248510]: 2025-12-13 08:29:46.631 248514 DEBUG nova.network.neutron [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Successfully created port: b5058a06-7109-4ac0-96d8-7562e66bee25 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:29:46 np0005558241 nova_compute[248510]: 2025-12-13 08:29:46.639 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:46 np0005558241 nova_compute[248510]: 2025-12-13 08:29:46.774 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:29:46 np0005558241 nova_compute[248510]: 2025-12-13 08:29:46.775 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:29:46 np0005558241 nova_compute[248510]: 2025-12-13 08:29:46.775 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:29:46 np0005558241 nova_compute[248510]: 2025-12-13 08:29:46.807 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec 13 03:29:46 np0005558241 nova_compute[248510]: 2025-12-13 08:29:46.808 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:29:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 41 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 1.9 MiB/s wr, 100 op/s
Dec 13 03:29:48 np0005558241 nova_compute[248510]: 2025-12-13 08:29:48.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:29:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 80 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.3 MiB/s wr, 127 op/s
Dec 13 03:29:49 np0005558241 nova_compute[248510]: 2025-12-13 08:29:49.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:29:49 np0005558241 nova_compute[248510]: 2025-12-13 08:29:49.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 13 03:29:50 np0005558241 nova_compute[248510]: 2025-12-13 08:29:50.092 248514 DEBUG nova.network.neutron [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Successfully updated port: b5058a06-7109-4ac0-96d8-7562e66bee25 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:29:50 np0005558241 nova_compute[248510]: 2025-12-13 08:29:50.115 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:29:50 np0005558241 nova_compute[248510]: 2025-12-13 08:29:50.115 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:29:50 np0005558241 nova_compute[248510]: 2025-12-13 08:29:50.116 248514 DEBUG nova.network.neutron [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:29:50 np0005558241 nova_compute[248510]: 2025-12-13 08:29:50.186 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:50 np0005558241 nova_compute[248510]: 2025-12-13 08:29:50.274 248514 DEBUG nova.compute.manager [req-1f19888e-1d70-478f-b0d4-1a4143346a51 req-e07c6ed0-15b3-4ce5-8ee0-c537b5e26f38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-changed-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:50 np0005558241 nova_compute[248510]: 2025-12-13 08:29:50.274 248514 DEBUG nova.compute.manager [req-1f19888e-1d70-478f-b0d4-1a4143346a51 req-e07c6ed0-15b3-4ce5-8ee0-c537b5e26f38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Refreshing instance network info cache due to event network-changed-b5058a06-7109-4ac0-96d8-7562e66bee25. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:29:50 np0005558241 nova_compute[248510]: 2025-12-13 08:29:50.274 248514 DEBUG oslo_concurrency.lockutils [req-1f19888e-1d70-478f-b0d4-1a4143346a51 req-e07c6ed0-15b3-4ce5-8ee0-c537b5e26f38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:29:50 np0005558241 nova_compute[248510]: 2025-12-13 08:29:50.345 248514 DEBUG nova.network.neutron [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:29:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:29:50 np0005558241 nova_compute[248510]: 2025-12-13 08:29:50.795 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:29:50 np0005558241 nova_compute[248510]: 2025-12-13 08:29:50.839 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:50 np0005558241 nova_compute[248510]: 2025-12-13 08:29:50.840 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:50 np0005558241 nova_compute[248510]: 2025-12-13 08:29:50.841 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:50 np0005558241 nova_compute[248510]: 2025-12-13 08:29:50.841 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:29:50 np0005558241 nova_compute[248510]: 2025-12-13 08:29:50.842 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:50 np0005558241 podman[307776]: 2025-12-13 08:29:50.988806426 +0000 UTC m=+0.064965774 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:29:50 np0005558241 podman[307777]: 2025-12-13 08:29:50.997195173 +0000 UTC m=+0.065258781 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 13 03:29:51 np0005558241 podman[307775]: 2025-12-13 08:29:51.020273293 +0000 UTC m=+0.100136362 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.039 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "d594d7c8-13f8-4e02-80d2-490469301cca" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.041 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.066 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.093 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.094 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.135 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.263 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614576.2622378, b9e0d5ab-483f-49a1-901a-c36f31ab710f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.264 248514 INFO nova.compute.manager [-] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:29:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:29:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2578582874' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.427 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.444 248514 DEBUG nova.compute.manager [None req-a473ef59-971c-4ede-8bd7-c61ef44ef0c3 - - - - - -] [instance: b9e0d5ab-483f-49a1-901a-c36f31ab710f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 88 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 131 KiB/s rd, 2.3 MiB/s wr, 46 op/s
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.502 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.503 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.505 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.516 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.516 248514 INFO nova.compute.claims [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.586 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614576.5860462, 1c940191-84c7-423e-901a-233b14c2acec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.586 248514 INFO nova.compute.manager [-] [instance: 1c940191-84c7-423e-901a-233b14c2acec] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.620 248514 DEBUG nova.compute.manager [None req-43ea25d4-a5db-415e-bfa3-dc068dc28615 - - - - - -] [instance: 1c940191-84c7-423e-901a-233b14c2acec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.642 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.683 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.684 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4108MB free_disk=59.9709356110543GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.684 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.800 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.963 248514 DEBUG nova.network.neutron [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.996 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.997 248514 DEBUG nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance network_info: |[{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.997 248514 DEBUG oslo_concurrency.lockutils [req-1f19888e-1d70-478f-b0d4-1a4143346a51 req-e07c6ed0-15b3-4ce5-8ee0-c537b5e26f38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:29:51 np0005558241 nova_compute[248510]: 2025-12-13 08:29:51.998 248514 DEBUG nova.network.neutron [req-1f19888e-1d70-478f-b0d4-1a4143346a51 req-e07c6ed0-15b3-4ce5-8ee0-c537b5e26f38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Refreshing network info cache for port b5058a06-7109-4ac0-96d8-7562e66bee25 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.003 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Start _get_guest_xml network_info=[{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.008 248514 WARNING nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.013 248514 DEBUG nova.virt.libvirt.host [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.014 248514 DEBUG nova.virt.libvirt.host [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.019 248514 DEBUG nova.virt.libvirt.host [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.020 248514 DEBUG nova.virt.libvirt.host [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.020 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.021 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.021 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.022 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.022 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.022 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.022 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.023 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.023 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.023 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.023 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.023 248514 DEBUG nova.virt.hardware [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.027 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:29:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3111001457' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.442 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.449 248514 DEBUG nova.compute.provider_tree [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.474 248514 DEBUG nova.scheduler.client.report [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.503 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.504 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.507 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.515 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.515 248514 INFO nova.compute.claims [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.586 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.587 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:29:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:29:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1078143762' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.619 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.652 248514 DEBUG nova.storage.rbd_utils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.657 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.699 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.728 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:29:52 np0005558241 nova_compute[248510]: 2025-12-13 08:29:52.775 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.175 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.177 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.178 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Creating image(s)#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.206 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image d594d7c8-13f8-4e02-80d2-490469301cca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:29:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/984297487' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.237 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image d594d7c8-13f8-4e02-80d2-490469301cca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.264 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image d594d7c8-13f8-4e02-80d2-490469301cca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.269 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.311 248514 DEBUG nova.policy [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1f703cc8fd3b4cdabb2b154345f70a7c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c25aa866d502481eb9410b7d92a1347b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.315 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.317 248514 DEBUG nova.virt.libvirt.vif [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.318 248514 DEBUG nova.network.os_vif_util [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.319 248514 DEBUG nova.network.os_vif_util [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.321 248514 DEBUG nova.objects.instance [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:29:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/658180110' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.353 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.354 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.355 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.355 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.377 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image d594d7c8-13f8-4e02-80d2-490469301cca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.381 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 d594d7c8-13f8-4e02-80d2-490469301cca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.414 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.424 248514 DEBUG nova.compute.provider_tree [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:29:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 88 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.742 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 d594d7c8-13f8-4e02-80d2-490469301cca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.800 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] resizing rbd image d594d7c8-13f8-4e02-80d2-490469301cca_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.834 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:29:53 np0005558241 nova_compute[248510]:  <uuid>3ced27d6-a2a8-4ce3-a7e7-494270418542</uuid>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:  <name>instance-0000003c</name>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerActionsTestJSON-server-286397061</nova:name>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:29:52</nova:creationTime>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:29:53 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:        <nova:user uuid="4d19f7d5ece8482dab03e4bc02fdf410">tempest-ServerActionsTestJSON-473360614-project-member</nova:user>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:        <nova:project uuid="c6718df841f0471ba710516400f126fa">tempest-ServerActionsTestJSON-473360614</nova:project>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:        <nova:port uuid="b5058a06-7109-4ac0-96d8-7562e66bee25">
Dec 13 03:29:53 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <entry name="serial">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <entry name="uuid">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk">
Dec 13 03:29:53 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:29:53 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config">
Dec 13 03:29:53 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:29:53 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:1f:d1:eb"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <target dev="tapb5058a06-71"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/console.log" append="off"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:29:53 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:29:53 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:29:53 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:29:53 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.835 248514 DEBUG nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Preparing to wait for external event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.835 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.835 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.835 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.836 248514 DEBUG nova.virt.libvirt.vif [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.836 248514 DEBUG nova.network.os_vif_util [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.837 248514 DEBUG nova.network.os_vif_util [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.837 248514 DEBUG os_vif [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.838 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.838 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.839 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.841 248514 DEBUG nova.scheduler.client.report [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.845 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.845 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5058a06-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.845 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5058a06-71, col_values=(('external_ids', {'iface-id': 'b5058a06-7109-4ac0-96d8-7562e66bee25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:d1:eb', 'vm-uuid': '3ced27d6-a2a8-4ce3-a7e7-494270418542'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:53 np0005558241 NetworkManager[50376]: <info>  [1765614593.8480] manager: (tapb5058a06-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.855 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.856 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.857 248514 INFO os_vif [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')#033[00m
Dec 13 03:29:53 np0005558241 nova_compute[248510]: 2025-12-13 08:29:53.917 248514 DEBUG nova.objects.instance [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'migration_context' on Instance uuid d594d7c8-13f8-4e02-80d2-490469301cca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.320 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.321 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Ensure instance console log exists: /var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.322 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.322 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.323 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.326 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.327 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.332 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 2.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.350 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.351 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.352 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] No VIF found with MAC fa:16:3e:1f:d1:eb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.353 248514 INFO nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Using config drive#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.386 248514 DEBUG nova.storage.rbd_utils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.442 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.444 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.476 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.498 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 3ced27d6-a2a8-4ce3-a7e7-494270418542 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.498 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance d594d7c8-13f8-4e02-80d2-490469301cca actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.498 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.499 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.499 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.514 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.606 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.656 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.660 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.661 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Creating image(s)#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.695 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.721 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.748 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.753 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.804 248514 DEBUG nova.policy [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1f703cc8fd3b4cdabb2b154345f70a7c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c25aa866d502481eb9410b7d92a1347b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.836 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.837 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.838 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.838 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.872 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:54 np0005558241 nova_compute[248510]: 2025-12-13 08:29:54.878 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.117 248514 INFO nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Creating config drive at /var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/disk.config#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.123 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwwvqxald execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.165 248514 DEBUG nova.network.neutron [req-1f19888e-1d70-478f-b0d4-1a4143346a51 req-e07c6ed0-15b3-4ce5-8ee0-c537b5e26f38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updated VIF entry in instance network info cache for port b5058a06-7109-4ac0-96d8-7562e66bee25. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.165 248514 DEBUG nova.network.neutron [req-1f19888e-1d70-478f-b0d4-1a4143346a51 req-e07c6ed0-15b3-4ce5-8ee0-c537b5e26f38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:29:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:29:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4290259365' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.188 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.206 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.238 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.238 248514 DEBUG oslo_concurrency.lockutils [req-1f19888e-1d70-478f-b0d4-1a4143346a51 req-e07c6ed0-15b3-4ce5-8ee0-c537b5e26f38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.271 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwwvqxald" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.295 248514 DEBUG nova.storage.rbd_utils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.299 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/disk.config 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.344 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] resizing rbd image 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:29:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.378 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Successfully created port: 13e6d879-e4c6-4a44-a982-10a62782047e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.387 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.411 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.413 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.413 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.439 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.450 248514 DEBUG nova.objects.instance [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'migration_context' on Instance uuid 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.453 248514 DEBUG oslo_concurrency.processutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/disk.config 3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.453 248514 INFO nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Deleting local config drive /var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/disk.config because it was imported into RBD.#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.467 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.468 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Ensure instance console log exists: /var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.469 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.469 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.470 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.471 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.471 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 126 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.4 MiB/s wr, 53 op/s
Dec 13 03:29:55 np0005558241 kernel: tapb5058a06-71: entered promiscuous mode
Dec 13 03:29:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:55Z|00553|binding|INFO|Claiming lport b5058a06-7109-4ac0-96d8-7562e66bee25 for this chassis.
Dec 13 03:29:55 np0005558241 NetworkManager[50376]: <info>  [1765614595.5140] manager: (tapb5058a06-71): new Tun device (/org/freedesktop/NetworkManager/Devices/244)
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.514 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:55Z|00554|binding|INFO|b5058a06-7109-4ac0-96d8-7562e66bee25: Claiming fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.517 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.527 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.529 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 bound to our chassis#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.531 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.544 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[899cdd55-b076-4dff-aa49-7a5a9ceda272]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.545 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43ee8730-a1 in ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:29:55 np0005558241 systemd-machined[210538]: New machine qemu-68-instance-0000003c.
Dec 13 03:29:55 np0005558241 systemd-udevd[308392]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.547 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43ee8730-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.548 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1659fd8a-469e-4c49-925f-116d27cc9f5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.549 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2a21fefc-a136-400c-8cdd-4aa30bfb9d88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:55 np0005558241 NetworkManager[50376]: <info>  [1765614595.5609] device (tapb5058a06-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:29:55 np0005558241 NetworkManager[50376]: <info>  [1765614595.5619] device (tapb5058a06-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.562 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[dee295c5-0966-4247-b819-dcbd1ddcbff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:55 np0005558241 systemd[1]: Started Virtual Machine qemu-68-instance-0000003c.
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.589 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.590 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a95186ad-926d-41b0-896a-2f29fe8d6d58]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.596 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:55Z|00555|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 ovn-installed in OVS
Dec 13 03:29:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:55Z|00556|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 up in Southbound
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.601 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.629 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[958d53be-9ced-4dc8-8488-25193aa6b752]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:55 np0005558241 NetworkManager[50376]: <info>  [1765614595.6368] manager: (tap43ee8730-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/245)
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.636 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fc091b04-0658-4d4e-b6cf-9edde1e675a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.673 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[67339d33-2690-4b20-a42e-447bc2d77255]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.677 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9236d60e-d3f7-496f-bc63-0d57022e50fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.699 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Successfully created port: b94af62b-6c45-4269-94d9-b235090f4778 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:29:55 np0005558241 NetworkManager[50376]: <info>  [1765614595.7049] device (tap43ee8730-a0): carrier: link connected
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.711 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8ac0fb-fba6-463c-944f-6a0d9f59c7b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.733 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4e70c810-69c8-4ebd-aaca-3bf0e74e646d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717292, 'reachable_time': 43850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308424, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.751 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8f9ca534-2e1a-4f2d-89b3-108eefb3457c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:bbcd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 717292, 'tstamp': 717292}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308425, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.772 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b1a57e-8145-4ba0-9191-24e14d339718]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 164], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717292, 'reachable_time': 43850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308426, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.812 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[80d67c90-9c3b-486c-8c24-a70e91e17ec6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.893 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[25a4aaf9-7aa0-433f-a84e-7bfab11bd656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.896 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.896 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.897 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:55 np0005558241 NetworkManager[50376]: <info>  [1765614595.8999] manager: (tap43ee8730-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Dec 13 03:29:55 np0005558241 kernel: tap43ee8730-a0: entered promiscuous mode
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.902 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:29:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:29:55Z|00557|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.922 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:55 np0005558241 nova_compute[248510]: 2025-12-13 08:29:55.923 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.924 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.925 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[30d0a5e3-0d6a-4cc2-9f95-f01b09b4826b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.926 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:29:55.926 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'env', 'PROCESS_TAG=haproxy-43ee8730-a174-4859-9c95-b3ad6dc68839', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43ee8730-a174-4859-9c95-b3ad6dc68839.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.053 248514 DEBUG nova.compute.manager [req-b1eb7213-7071-4d40-ae07-390f97d003b4 req-6f16589d-2c71-4a90-8fae-61f5170d7390 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.053 248514 DEBUG oslo_concurrency.lockutils [req-b1eb7213-7071-4d40-ae07-390f97d003b4 req-6f16589d-2c71-4a90-8fae-61f5170d7390 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.053 248514 DEBUG oslo_concurrency.lockutils [req-b1eb7213-7071-4d40-ae07-390f97d003b4 req-6f16589d-2c71-4a90-8fae-61f5170d7390 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.054 248514 DEBUG oslo_concurrency.lockutils [req-b1eb7213-7071-4d40-ae07-390f97d003b4 req-6f16589d-2c71-4a90-8fae-61f5170d7390 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.054 248514 DEBUG nova.compute.manager [req-b1eb7213-7071-4d40-ae07-390f97d003b4 req-6f16589d-2c71-4a90-8fae-61f5170d7390 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Processing event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.255 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614596.2554512, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.256 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Started (Lifecycle Event)#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.259 248514 DEBUG nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.262 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.267 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance spawned successfully.#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.268 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:29:56 np0005558241 podman[308500]: 2025-12-13 08:29:56.373945717 +0000 UTC m=+0.051671536 container create 291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.384 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.388 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.389 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.389 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.390 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.390 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.390 248514 DEBUG nova.virt.libvirt.driver [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.396 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:29:56 np0005558241 systemd[1]: Started libpod-conmon-291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795.scope.
Dec 13 03:29:56 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:29:56 np0005558241 podman[308500]: 2025-12-13 08:29:56.347425983 +0000 UTC m=+0.025151822 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:29:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9368b1b4038d92fdf8ac3d0acda6e2c35cf6264d3fdc71f92b90f08d6224132/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.451 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.451 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614596.2565768, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.451 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:29:56 np0005558241 podman[308500]: 2025-12-13 08:29:56.462505632 +0000 UTC m=+0.140231471 container init 291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 13 03:29:56 np0005558241 podman[308500]: 2025-12-13 08:29:56.467989298 +0000 UTC m=+0.145715117 container start 291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:29:56 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[308515]: [NOTICE]   (308519) : New worker (308521) forked
Dec 13 03:29:56 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[308515]: [NOTICE]   (308519) : Loading success.
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.540 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.545 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614596.2615957, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.545 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.582 248514 INFO nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Took 11.54 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.583 248514 DEBUG nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.584 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.591 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.645 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.681 248514 INFO nova.compute.manager [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Took 12.72 seconds to build instance.#033[00m
Dec 13 03:29:56 np0005558241 nova_compute[248510]: 2025-12-13 08:29:56.701 248514 DEBUG oslo_concurrency.lockutils [None req-a400e8fd-c470-4508-b162-1fe3fb85b4db 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.043 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Successfully updated port: 13e6d879-e4c6-4a44-a982-10a62782047e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.063 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "refresh_cache-d594d7c8-13f8-4e02-80d2-490469301cca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.064 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquired lock "refresh_cache-d594d7c8-13f8-4e02-80d2-490469301cca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.064 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.115 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Successfully updated port: b94af62b-6c45-4269-94d9-b235090f4778 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.136 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "refresh_cache-13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.136 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquired lock "refresh_cache-13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.137 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.304 248514 DEBUG nova.compute.manager [req-f4287a57-4d56-45de-a755-0afdeed3cab6 req-594f9bbc-c405-4b42-a9c3-e6833ceb7e8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Received event network-changed-b94af62b-6c45-4269-94d9-b235090f4778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.304 248514 DEBUG nova.compute.manager [req-f4287a57-4d56-45de-a755-0afdeed3cab6 req-594f9bbc-c405-4b42-a9c3-e6833ceb7e8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Refreshing instance network info cache due to event network-changed-b94af62b-6c45-4269-94d9-b235090f4778. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.304 248514 DEBUG oslo_concurrency.lockutils [req-f4287a57-4d56-45de-a755-0afdeed3cab6 req-594f9bbc-c405-4b42-a9c3-e6833ceb7e8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.305 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquiring lock "850bda47-d7a0-4d8d-a048-258b8388cab7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.305 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.336 248514 DEBUG nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.421 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.422 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.430 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.432 248514 INFO nova.compute.claims [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.446 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.453 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:29:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 126 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.4 MiB/s wr, 53 op/s
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.683 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.790 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.821 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.821 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 13 03:29:57 np0005558241 nova_compute[248510]: 2025-12-13 08:29:57.867 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 13 03:29:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:29:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2836552486' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.281 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.289 248514 DEBUG nova.compute.provider_tree [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.324 248514 DEBUG nova.scheduler.client.report [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.357 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.358 248514 DEBUG nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.427 248514 DEBUG nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.428 248514 DEBUG nova.network.neutron [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.451 248514 INFO nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.482 248514 DEBUG nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.611 248514 DEBUG nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.612 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.613 248514 INFO nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Creating image(s)#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.634 248514 DEBUG nova.storage.rbd_utils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] rbd image 850bda47-d7a0-4d8d-a048-258b8388cab7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.659 248514 DEBUG nova.storage.rbd_utils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] rbd image 850bda47-d7a0-4d8d-a048-258b8388cab7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.678 248514 DEBUG nova.storage.rbd_utils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] rbd image 850bda47-d7a0-4d8d-a048-258b8388cab7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.682 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.722 248514 DEBUG nova.compute.manager [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.723 248514 DEBUG oslo_concurrency.lockutils [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.723 248514 DEBUG oslo_concurrency.lockutils [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.723 248514 DEBUG oslo_concurrency.lockutils [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.724 248514 DEBUG nova.compute.manager [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.724 248514 WARNING nova.compute.manager [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.724 248514 DEBUG nova.compute.manager [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Received event network-changed-13e6d879-e4c6-4a44-a982-10a62782047e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.724 248514 DEBUG nova.compute.manager [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Refreshing instance network info cache due to event network-changed-13e6d879-e4c6-4a44-a982-10a62782047e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.724 248514 DEBUG oslo_concurrency.lockutils [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d594d7c8-13f8-4e02-80d2-490469301cca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.758 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.759 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.760 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.760 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.786 248514 DEBUG nova.storage.rbd_utils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] rbd image 850bda47-d7a0-4d8d-a048-258b8388cab7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.790 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 850bda47-d7a0-4d8d-a048-258b8388cab7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.838 248514 DEBUG nova.policy [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '865bfe2430ea4f9ca639a4f89c86899d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8fd9d373def4437880ac432124a30a67', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:29:58 np0005558241 nova_compute[248510]: 2025-12-13 08:29:58.848 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.144 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 850bda47-d7a0-4d8d-a048-258b8388cab7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.202 248514 DEBUG nova.storage.rbd_utils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] resizing rbd image 850bda47-d7a0-4d8d-a048-258b8388cab7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.235 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Updating instance_info_cache with network_info: [{"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.287 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Releasing lock "refresh_cache-13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.288 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Instance network_info: |[{"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.289 248514 DEBUG oslo_concurrency.lockutils [req-f4287a57-4d56-45de-a755-0afdeed3cab6 req-594f9bbc-c405-4b42-a9c3-e6833ceb7e8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.289 248514 DEBUG nova.network.neutron [req-f4287a57-4d56-45de-a755-0afdeed3cab6 req-594f9bbc-c405-4b42-a9c3-e6833ceb7e8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Refreshing network info cache for port b94af62b-6c45-4269-94d9-b235090f4778 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.292 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Start _get_guest_xml network_info=[{"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.297 248514 DEBUG nova.objects.instance [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lazy-loading 'migration_context' on Instance uuid 850bda47-d7a0-4d8d-a048-258b8388cab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.304 248514 WARNING nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.311 248514 DEBUG nova.virt.libvirt.host [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.312 248514 DEBUG nova.virt.libvirt.host [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.315 248514 DEBUG nova.virt.libvirt.host [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.316 248514 DEBUG nova.virt.libvirt.host [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.316 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.316 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.317 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.317 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.318 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.318 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.318 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.318 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.319 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.319 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.319 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.320 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.323 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.367 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.368 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Ensure instance console log exists: /var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.369 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.369 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.370 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:29:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 161 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.8 MiB/s wr, 129 op/s
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.527 248514 DEBUG nova.network.neutron [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Updating instance_info_cache with network_info: [{"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.561 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Releasing lock "refresh_cache-d594d7c8-13f8-4e02-80d2-490469301cca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.562 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Instance network_info: |[{"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.562 248514 DEBUG oslo_concurrency.lockutils [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d594d7c8-13f8-4e02-80d2-490469301cca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.562 248514 DEBUG nova.network.neutron [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Refreshing network info cache for port 13e6d879-e4c6-4a44-a982-10a62782047e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.565 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Start _get_guest_xml network_info=[{"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.570 248514 WARNING nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.577 248514 DEBUG nova.virt.libvirt.host [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.577 248514 DEBUG nova.virt.libvirt.host [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.581 248514 DEBUG nova.virt.libvirt.host [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.581 248514 DEBUG nova.virt.libvirt.host [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.582 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.582 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.583 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.583 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.583 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.583 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.584 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.584 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.584 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.584 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.585 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.585 248514 DEBUG nova.virt.hardware [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.589 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.818 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:29:59 np0005558241 NetworkManager[50376]: <info>  [1765614599.8872] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.887 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:29:59 np0005558241 NetworkManager[50376]: <info>  [1765614599.8882] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Dec 13 03:29:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:29:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2822640300' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.933 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.958 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:29:59 np0005558241 nova_compute[248510]: 2025-12-13 08:29:59.964 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.050 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:00 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:00Z|00558|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.062 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:30:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2624074968' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.196 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.211 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.235 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image d594d7c8-13f8-4e02-80d2-490469301cca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.240 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:30:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:30:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2547013222' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.556 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.560 248514 DEBUG nova.virt.libvirt.vif [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1395231476',display_name='tempest-tempest.common.compute-instance-1395231476-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1395231476-2',id=62,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-jt3b6t8k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleCreateTestJSON-478861069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:54Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=13a5d640-9e2a-49d7-9f95-be18ebbe1cfe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.560 248514 DEBUG nova.network.os_vif_util [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.562 248514 DEBUG nova.network.os_vif_util [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:af:90,bridge_name='br-int',has_traffic_filtering=True,id=b94af62b-6c45-4269-94d9-b235090f4778,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb94af62b-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.563 248514 DEBUG nova.objects.instance [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'pci_devices' on Instance uuid 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:30:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2383553656' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.836 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.838 248514 DEBUG nova.virt.libvirt.vif [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1395231476',display_name='tempest-tempest.common.compute-instance-1395231476-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1395231476-1',id=61,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-jt3b6t8k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleCreateTestJSON-478861069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:52Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=d594d7c8-13f8-4e02-80d2-490469301cca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.838 248514 DEBUG nova.network.os_vif_util [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.839 248514 DEBUG nova.network.os_vif_util [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:36:41,bridge_name='br-int',has_traffic_filtering=True,id=13e6d879-e4c6-4a44-a982-10a62782047e,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e6d879-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.840 248514 DEBUG nova.objects.instance [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'pci_devices' on Instance uuid d594d7c8-13f8-4e02-80d2-490469301cca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.857 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <uuid>13a5d640-9e2a-49d7-9f95-be18ebbe1cfe</uuid>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <name>instance-0000003e</name>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <nova:name>tempest-tempest.common.compute-instance-1395231476-2</nova:name>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:29:59</nova:creationTime>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <nova:user uuid="1f703cc8fd3b4cdabb2b154345f70a7c">tempest-MultipleCreateTestJSON-478861069-project-member</nova:user>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <nova:project uuid="c25aa866d502481eb9410b7d92a1347b">tempest-MultipleCreateTestJSON-478861069</nova:project>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <nova:port uuid="b94af62b-6c45-4269-94d9-b235090f4778">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <entry name="serial">13a5d640-9e2a-49d7-9f95-be18ebbe1cfe</entry>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <entry name="uuid">13a5d640-9e2a-49d7-9f95-be18ebbe1cfe</entry>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk.config">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:a6:af:90"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <target dev="tapb94af62b-6c"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe/console.log" append="off"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:30:00 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:30:00 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.870 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Preparing to wait for external event network-vif-plugged-b94af62b-6c45-4269-94d9-b235090f4778 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.871 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.872 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.872 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.874 248514 DEBUG nova.virt.libvirt.vif [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1395231476',display_name='tempest-tempest.common.compute-instance-1395231476-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1395231476-2',id=62,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-jt3b6t8k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='temp
est-MultipleCreateTestJSON-478861069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:54Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=13a5d640-9e2a-49d7-9f95-be18ebbe1cfe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.875 248514 DEBUG nova.network.os_vif_util [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.877 248514 DEBUG nova.network.os_vif_util [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:af:90,bridge_name='br-int',has_traffic_filtering=True,id=b94af62b-6c45-4269-94d9-b235090f4778,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb94af62b-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.877 248514 DEBUG os_vif [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:af:90,bridge_name='br-int',has_traffic_filtering=True,id=b94af62b-6c45-4269-94d9-b235090f4778,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb94af62b-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.878 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.879 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.880 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.885 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.885 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb94af62b-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.886 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb94af62b-6c, col_values=(('external_ids', {'iface-id': 'b94af62b-6c45-4269-94d9-b235090f4778', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:af:90', 'vm-uuid': '13a5d640-9e2a-49d7-9f95-be18ebbe1cfe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:00 np0005558241 NetworkManager[50376]: <info>  [1765614600.8894] manager: (tapb94af62b-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.888 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.896 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <uuid>d594d7c8-13f8-4e02-80d2-490469301cca</uuid>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <name>instance-0000003d</name>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <nova:name>tempest-tempest.common.compute-instance-1395231476-1</nova:name>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:29:59</nova:creationTime>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <nova:user uuid="1f703cc8fd3b4cdabb2b154345f70a7c">tempest-MultipleCreateTestJSON-478861069-project-member</nova:user>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <nova:project uuid="c25aa866d502481eb9410b7d92a1347b">tempest-MultipleCreateTestJSON-478861069</nova:project>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <nova:port uuid="13e6d879-e4c6-4a44-a982-10a62782047e">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <entry name="serial">d594d7c8-13f8-4e02-80d2-490469301cca</entry>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <entry name="uuid">d594d7c8-13f8-4e02-80d2-490469301cca</entry>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/d594d7c8-13f8-4e02-80d2-490469301cca_disk">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/d594d7c8-13f8-4e02-80d2-490469301cca_disk.config">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:89:36:41"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <target dev="tap13e6d879-e4"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca/console.log" append="off"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:30:00 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:30:00 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:30:00 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:30:00 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.896 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Preparing to wait for external event network-vif-plugged-13e6d879-e4c6-4a44-a982-10a62782047e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.903 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.903 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.904 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.904 248514 DEBUG nova.virt.libvirt.vif [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1395231476',display_name='tempest-tempest.common.compute-instance-1395231476-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1395231476-1',id=61,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-jt3b6t8k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='temp
est-MultipleCreateTestJSON-478861069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:52Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=d594d7c8-13f8-4e02-80d2-490469301cca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.905 248514 DEBUG nova.network.os_vif_util [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.905 248514 DEBUG nova.network.os_vif_util [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:36:41,bridge_name='br-int',has_traffic_filtering=True,id=13e6d879-e4c6-4a44-a982-10a62782047e,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e6d879-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.906 248514 DEBUG os_vif [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:36:41,bridge_name='br-int',has_traffic_filtering=True,id=13e6d879-e4c6-4a44-a982-10a62782047e,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e6d879-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.906 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.908 248514 INFO os_vif [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:af:90,bridge_name='br-int',has_traffic_filtering=True,id=b94af62b-6c45-4269-94d9-b235090f4778,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb94af62b-6c')#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.909 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.910 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.911 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.923 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.924 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13e6d879-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.924 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap13e6d879-e4, col_values=(('external_ids', {'iface-id': '13e6d879-e4c6-4a44-a982-10a62782047e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:36:41', 'vm-uuid': 'd594d7c8-13f8-4e02-80d2-490469301cca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.928 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.930 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:30:00 np0005558241 NetworkManager[50376]: <info>  [1765614600.9316] manager: (tap13e6d879-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.937 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:00 np0005558241 nova_compute[248510]: 2025-12-13 08:30:00.938 248514 INFO os_vif [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:36:41,bridge_name='br-int',has_traffic_filtering=True,id=13e6d879-e4c6-4a44-a982-10a62782047e,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e6d879-e4')#033[00m
Dec 13 03:30:01 np0005558241 nova_compute[248510]: 2025-12-13 08:30:01.075 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:30:01 np0005558241 nova_compute[248510]: 2025-12-13 08:30:01.076 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:30:01 np0005558241 nova_compute[248510]: 2025-12-13 08:30:01.076 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No VIF found with MAC fa:16:3e:89:36:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:30:01 np0005558241 nova_compute[248510]: 2025-12-13 08:30:01.077 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Using config drive#033[00m
Dec 13 03:30:01 np0005558241 nova_compute[248510]: 2025-12-13 08:30:01.102 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image d594d7c8-13f8-4e02-80d2-490469301cca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:01 np0005558241 nova_compute[248510]: 2025-12-13 08:30:01.118 248514 DEBUG nova.compute.manager [req-c90dd522-d865-4112-a1b1-3946b2f835bc req-8e5e29d0-77cf-447c-bd8b-5a38b53a39a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-changed-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:01 np0005558241 nova_compute[248510]: 2025-12-13 08:30:01.119 248514 DEBUG nova.compute.manager [req-c90dd522-d865-4112-a1b1-3946b2f835bc req-8e5e29d0-77cf-447c-bd8b-5a38b53a39a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Refreshing instance network info cache due to event network-changed-b5058a06-7109-4ac0-96d8-7562e66bee25. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:30:01 np0005558241 nova_compute[248510]: 2025-12-13 08:30:01.119 248514 DEBUG oslo_concurrency.lockutils [req-c90dd522-d865-4112-a1b1-3946b2f835bc req-8e5e29d0-77cf-447c-bd8b-5a38b53a39a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:30:01 np0005558241 nova_compute[248510]: 2025-12-13 08:30:01.119 248514 DEBUG oslo_concurrency.lockutils [req-c90dd522-d865-4112-a1b1-3946b2f835bc req-8e5e29d0-77cf-447c-bd8b-5a38b53a39a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:30:01 np0005558241 nova_compute[248510]: 2025-12-13 08:30:01.119 248514 DEBUG nova.network.neutron [req-c90dd522-d865-4112-a1b1-3946b2f835bc req-8e5e29d0-77cf-447c-bd8b-5a38b53a39a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Refreshing network info cache for port b5058a06-7109-4ac0-96d8-7562e66bee25 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:30:01 np0005558241 nova_compute[248510]: 2025-12-13 08:30:01.125 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:30:01 np0005558241 nova_compute[248510]: 2025-12-13 08:30:01.125 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:30:01 np0005558241 nova_compute[248510]: 2025-12-13 08:30:01.125 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No VIF found with MAC fa:16:3e:a6:af:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:30:01 np0005558241 nova_compute[248510]: 2025-12-13 08:30:01.126 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Using config drive#033[00m
Dec 13 03:30:01 np0005558241 nova_compute[248510]: 2025-12-13 08:30:01.149 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:01 np0005558241 nova_compute[248510]: 2025-12-13 08:30:01.166 248514 DEBUG nova.network.neutron [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Successfully created port: 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:30:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 210 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.5 MiB/s wr, 141 op/s
Dec 13 03:30:02 np0005558241 nova_compute[248510]: 2025-12-13 08:30:02.325 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Creating config drive at /var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe/disk.config#033[00m
Dec 13 03:30:02 np0005558241 nova_compute[248510]: 2025-12-13 08:30:02.331 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxugaqzdb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:02 np0005558241 nova_compute[248510]: 2025-12-13 08:30:02.477 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxugaqzdb" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:02 np0005558241 nova_compute[248510]: 2025-12-13 08:30:02.506 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:02 np0005558241 nova_compute[248510]: 2025-12-13 08:30:02.511 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe/disk.config 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:02 np0005558241 nova_compute[248510]: 2025-12-13 08:30:02.552 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Creating config drive at /var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca/disk.config#033[00m
Dec 13 03:30:02 np0005558241 nova_compute[248510]: 2025-12-13 08:30:02.558 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb7ulmk4g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:02 np0005558241 nova_compute[248510]: 2025-12-13 08:30:02.668 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe/disk.config 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:02 np0005558241 nova_compute[248510]: 2025-12-13 08:30:02.670 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Deleting local config drive /var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe/disk.config because it was imported into RBD.#033[00m
Dec 13 03:30:02 np0005558241 nova_compute[248510]: 2025-12-13 08:30:02.696 248514 DEBUG nova.network.neutron [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Updated VIF entry in instance network info cache for port 13e6d879-e4c6-4a44-a982-10a62782047e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:30:02 np0005558241 nova_compute[248510]: 2025-12-13 08:30:02.697 248514 DEBUG nova.network.neutron [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Updating instance_info_cache with network_info: [{"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:30:02 np0005558241 nova_compute[248510]: 2025-12-13 08:30:02.703 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb7ulmk4g" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:02 np0005558241 nova_compute[248510]: 2025-12-13 08:30:02.735 248514 DEBUG nova.storage.rbd_utils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image d594d7c8-13f8-4e02-80d2-490469301cca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:02 np0005558241 kernel: tapb94af62b-6c: entered promiscuous mode
Dec 13 03:30:02 np0005558241 NetworkManager[50376]: <info>  [1765614602.7418] manager: (tapb94af62b-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/251)
Dec 13 03:30:02 np0005558241 nova_compute[248510]: 2025-12-13 08:30:02.744 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca/disk.config d594d7c8-13f8-4e02-80d2-490469301cca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:02Z|00559|binding|INFO|Claiming lport b94af62b-6c45-4269-94d9-b235090f4778 for this chassis.
Dec 13 03:30:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:02Z|00560|binding|INFO|b94af62b-6c45-4269-94d9-b235090f4778: Claiming fa:16:3e:a6:af:90 10.100.0.11
Dec 13 03:30:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.755 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:af:90 10.100.0.11'], port_security=['fa:16:3e:a6:af:90 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '13a5d640-9e2a-49d7-9f95-be18ebbe1cfe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c25aa866d502481eb9410b7d92a1347b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '156063d6-b943-4f89-aea0-35fed05fb231', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be6d27f2-b986-4f4f-bd03-a90a181463f2, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b94af62b-6c45-4269-94d9-b235090f4778) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:30:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.757 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b94af62b-6c45-4269-94d9-b235090f4778 in datapath 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf bound to our chassis#033[00m
Dec 13 03:30:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.758 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf#033[00m
Dec 13 03:30:02 np0005558241 systemd-udevd[308956]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:30:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:02Z|00561|binding|INFO|Setting lport b94af62b-6c45-4269-94d9-b235090f4778 ovn-installed in OVS
Dec 13 03:30:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:02Z|00562|binding|INFO|Setting lport b94af62b-6c45-4269-94d9-b235090f4778 up in Southbound
Dec 13 03:30:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.772 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[da4e29a8-15cb-4951-b212-8e268a9df2f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.776 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2ca54d31-d1 in ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:30:02 np0005558241 NetworkManager[50376]: <info>  [1765614602.7832] device (tapb94af62b-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:30:02 np0005558241 NetworkManager[50376]: <info>  [1765614602.7842] device (tapb94af62b-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:30:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.785 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2ca54d31-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:30:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.786 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e67e7972-e9d3-42a4-8b16-488014a3a694]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:02 np0005558241 nova_compute[248510]: 2025-12-13 08:30:02.785 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.788 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[22052584-b073-424b-96de-1459bfd59648]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:02 np0005558241 systemd-machined[210538]: New machine qemu-69-instance-0000003e.
Dec 13 03:30:02 np0005558241 nova_compute[248510]: 2025-12-13 08:30:02.797 248514 DEBUG oslo_concurrency.lockutils [req-9c067897-2ceb-4edc-8393-ff27170b6b6f req-e6d613e9-80bf-4cb8-a071-36901b8b0204 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d594d7c8-13f8-4e02-80d2-490469301cca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:30:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.803 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[8087200d-79a7-43c9-b1ea-e4b0a497e304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:02 np0005558241 systemd[1]: Started Virtual Machine qemu-69-instance-0000003e.
Dec 13 03:30:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.821 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3d1d02d7-3934-4f95-bb34-77f109a164f6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.853 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[40fa3d20-3bec-4bd5-aae9-bc91bdd2896e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:02 np0005558241 NetworkManager[50376]: <info>  [1765614602.8617] manager: (tap2ca54d31-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/252)
Dec 13 03:30:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.862 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6b412861-3492-4ae8-8a58-81405c616ae1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.907 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4847f6ba-1a14-4631-a36b-795ffd05f2a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.911 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b4952305-6a7e-430c-9adb-6b3d6f1d313e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:02 np0005558241 nova_compute[248510]: 2025-12-13 08:30:02.933 248514 DEBUG oslo_concurrency.processutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca/disk.config d594d7c8-13f8-4e02-80d2-490469301cca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:02 np0005558241 nova_compute[248510]: 2025-12-13 08:30:02.934 248514 INFO nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Deleting local config drive /var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca/disk.config because it was imported into RBD.#033[00m
Dec 13 03:30:02 np0005558241 NetworkManager[50376]: <info>  [1765614602.9422] device (tap2ca54d31-d0): carrier: link connected
Dec 13 03:30:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.951 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3eff1e7e-7e46-496a-a821-22adc2afd548]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:02.979 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1580ff-faec-4ba7-a9f3-1aa6cfb97341]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ca54d31-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718015, 'reachable_time': 37588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309017, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:03 np0005558241 NetworkManager[50376]: <info>  [1765614603.0070] manager: (tap13e6d879-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/253)
Dec 13 03:30:03 np0005558241 kernel: tap13e6d879-e4: entered promiscuous mode
Dec 13 03:30:03 np0005558241 systemd-udevd[309009]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.009 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0a4f2196-a602-496c-99c5-9e630b891163]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:9738'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718015, 'tstamp': 718015}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309022, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:03Z|00563|binding|INFO|Claiming lport 13e6d879-e4c6-4a44-a982-10a62782047e for this chassis.
Dec 13 03:30:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:03Z|00564|binding|INFO|13e6d879-e4c6-4a44-a982-10a62782047e: Claiming fa:16:3e:89:36:41 10.100.0.6
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.013 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.023 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:36:41 10.100.0.6'], port_security=['fa:16:3e:89:36:41 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd594d7c8-13f8-4e02-80d2-490469301cca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c25aa866d502481eb9410b7d92a1347b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '156063d6-b943-4f89-aea0-35fed05fb231', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be6d27f2-b986-4f4f-bd03-a90a181463f2, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=13e6d879-e4c6-4a44-a982-10a62782047e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:30:03 np0005558241 NetworkManager[50376]: <info>  [1765614603.0318] device (tap13e6d879-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:30:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:03Z|00565|binding|INFO|Setting lport 13e6d879-e4c6-4a44-a982-10a62782047e ovn-installed in OVS
Dec 13 03:30:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:03Z|00566|binding|INFO|Setting lport 13e6d879-e4c6-4a44-a982-10a62782047e up in Southbound
Dec 13 03:30:03 np0005558241 NetworkManager[50376]: <info>  [1765614603.0326] device (tap13e6d879-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.032 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.036 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.041 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[346391a6-1e1b-4a49-a736-bffd1aef2874]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ca54d31-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718015, 'reachable_time': 37588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309027, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:03 np0005558241 systemd-machined[210538]: New machine qemu-70-instance-0000003d.
Dec 13 03:30:03 np0005558241 systemd[1]: Started Virtual Machine qemu-70-instance-0000003d.
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.103 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[def907e2-26e6-4b4b-ac3f-e908b7677e7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.127 248514 DEBUG nova.network.neutron [req-f4287a57-4d56-45de-a755-0afdeed3cab6 req-594f9bbc-c405-4b42-a9c3-e6833ceb7e8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Updated VIF entry in instance network info cache for port b94af62b-6c45-4269-94d9-b235090f4778. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.128 248514 DEBUG nova.network.neutron [req-f4287a57-4d56-45de-a755-0afdeed3cab6 req-594f9bbc-c405-4b42-a9c3-e6833ceb7e8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Updating instance_info_cache with network_info: [{"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.189 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a32f22-56e7-45e9-b932-6f70bb321195]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.195 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ca54d31-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.195 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.198 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ca54d31-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.201 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:03 np0005558241 NetworkManager[50376]: <info>  [1765614603.2017] manager: (tap2ca54d31-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Dec 13 03:30:03 np0005558241 kernel: tap2ca54d31-d0: entered promiscuous mode
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.205 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.208 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ca54d31-d0, col_values=(('external_ids', {'iface-id': '327a65c7-a67a-4fc2-b067-82e72753566c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:03Z|00567|binding|INFO|Releasing lport 327a65c7-a67a-4fc2-b067-82e72753566c from this chassis (sb_readonly=0)
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.209 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.226 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.228 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2ca54d31-d7ab-4904-a0d6-4e2970bb54bf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2ca54d31-d7ab-4904-a0d6-4e2970bb54bf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.230 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bed17384-eea5-493d-9e7b-2ee07d97e977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.231 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/2ca54d31-d7ab-4904-a0d6-4e2970bb54bf.pid.haproxy
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.231 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'env', 'PROCESS_TAG=haproxy-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2ca54d31-d7ab-4904-a0d6-4e2970bb54bf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:30:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1978: 321 pgs: 321 active+clean; 219 MiB data, 551 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.8 MiB/s wr, 156 op/s
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.521 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614603.5213628, 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.523 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] VM Started (Lifecycle Event)#033[00m
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.527 248514 DEBUG oslo_concurrency.lockutils [req-f4287a57-4d56-45de-a755-0afdeed3cab6 req-594f9bbc-c405-4b42-a9c3-e6833ceb7e8d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:30:03 np0005558241 podman[309152]: 2025-12-13 08:30:03.65556111 +0000 UTC m=+0.059019337 container create ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 03:30:03 np0005558241 systemd[1]: Started libpod-conmon-ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e.scope.
Dec 13 03:30:03 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:30:03 np0005558241 podman[309152]: 2025-12-13 08:30:03.623195392 +0000 UTC m=+0.026653649 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:30:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be07c0ff00db9044e874028d58961a61aa814490eafdd0ea0e2424e549f83981/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:03 np0005558241 podman[309152]: 2025-12-13 08:30:03.737633565 +0000 UTC m=+0.141091822 container init ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:30:03 np0005558241 podman[309152]: 2025-12-13 08:30:03.743267614 +0000 UTC m=+0.146725841 container start ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 03:30:03 np0005558241 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[309168]: [NOTICE]   (309172) : New worker (309174) forked
Dec 13 03:30:03 np0005558241 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[309168]: [NOTICE]   (309172) : Loading success.
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.814 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 13e6d879-e4c6-4a44-a982-10a62782047e in datapath 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf unbound from our chassis#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.816 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.836 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[285396d1-0ecf-4866-b587-662c877f4eb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.867 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[84561108-984a-4b36-963e-0c503636ac79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.871 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1b648267-53b3-4395-a77f-66af7a127a6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.911 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9b9e67e7-50cf-4f2f-b3cc-ce01d39c5f21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.932 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc8b85c-33a0-47ee-9746-04608c18dc9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ca54d31-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718015, 'reachable_time': 37588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309188, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.953 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f081fc74-617b-48df-92ab-1f26887a322a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2ca54d31-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718035, 'tstamp': 718035}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309189, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2ca54d31-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718039, 'tstamp': 718039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309189, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.955 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ca54d31-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.959 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ca54d31-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.959 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.959 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.959 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ca54d31-d0, col_values=(('external_ids', {'iface-id': '327a65c7-a67a-4fc2-b067-82e72753566c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:03.960 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.963 248514 DEBUG nova.compute.manager [req-11c6279c-7867-4676-aecc-bf87b0c8756c req-6d238f14-e5c0-49e0-9635-6d2f40159a98 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Received event network-vif-plugged-b94af62b-6c45-4269-94d9-b235090f4778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.964 248514 DEBUG oslo_concurrency.lockutils [req-11c6279c-7867-4676-aecc-bf87b0c8756c req-6d238f14-e5c0-49e0-9635-6d2f40159a98 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.964 248514 DEBUG oslo_concurrency.lockutils [req-11c6279c-7867-4676-aecc-bf87b0c8756c req-6d238f14-e5c0-49e0-9635-6d2f40159a98 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.965 248514 DEBUG oslo_concurrency.lockutils [req-11c6279c-7867-4676-aecc-bf87b0c8756c req-6d238f14-e5c0-49e0-9635-6d2f40159a98 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.965 248514 DEBUG nova.compute.manager [req-11c6279c-7867-4676-aecc-bf87b0c8756c req-6d238f14-e5c0-49e0-9635-6d2f40159a98 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Processing event network-vif-plugged-b94af62b-6c45-4269-94d9-b235090f4778 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.966 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.970 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.974 248514 INFO nova.virt.libvirt.driver [-] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Instance spawned successfully.#033[00m
Dec 13 03:30:03 np0005558241 nova_compute[248510]: 2025-12-13 08:30:03.975 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.724 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.732 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.737 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.737 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.738 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.739 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.739 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.740 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.755 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.756 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614603.5226912, 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.757 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.793 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.798 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614603.5227277, d594d7c8-13f8-4e02-80d2-490469301cca => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.799 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] VM Started (Lifecycle Event)#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.825 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.830 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614603.5227458, d594d7c8-13f8-4e02-80d2-490469301cca => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.830 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.842 248514 INFO nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Took 10.18 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.843 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.856 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.861 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.901 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.902 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614603.970301, 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.902 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.935 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.939 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.950 248514 INFO nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Took 13.47 seconds to build instance.#033[00m
Dec 13 03:30:04 np0005558241 nova_compute[248510]: 2025-12-13 08:30:04.968 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:05 np0005558241 nova_compute[248510]: 2025-12-13 08:30:05.025 248514 DEBUG nova.network.neutron [req-c90dd522-d865-4112-a1b1-3946b2f835bc req-8e5e29d0-77cf-447c-bd8b-5a38b53a39a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updated VIF entry in instance network info cache for port b5058a06-7109-4ac0-96d8-7562e66bee25. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:30:05 np0005558241 nova_compute[248510]: 2025-12-13 08:30:05.026 248514 DEBUG nova.network.neutron [req-c90dd522-d865-4112-a1b1-3946b2f835bc req-8e5e29d0-77cf-447c-bd8b-5a38b53a39a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:30:05 np0005558241 nova_compute[248510]: 2025-12-13 08:30:05.029 248514 DEBUG nova.network.neutron [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Successfully updated port: 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:30:05 np0005558241 nova_compute[248510]: 2025-12-13 08:30:05.083 248514 DEBUG oslo_concurrency.lockutils [req-c90dd522-d865-4112-a1b1-3946b2f835bc req-8e5e29d0-77cf-447c-bd8b-5a38b53a39a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:30:05 np0005558241 nova_compute[248510]: 2025-12-13 08:30:05.092 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquiring lock "refresh_cache-850bda47-d7a0-4d8d-a048-258b8388cab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:30:05 np0005558241 nova_compute[248510]: 2025-12-13 08:30:05.092 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquired lock "refresh_cache-850bda47-d7a0-4d8d-a048-258b8388cab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:30:05 np0005558241 nova_compute[248510]: 2025-12-13 08:30:05.093 248514 DEBUG nova.network.neutron [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:30:05 np0005558241 nova_compute[248510]: 2025-12-13 08:30:05.193 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:30:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1979: 321 pgs: 321 active+clean; 227 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.4 MiB/s wr, 182 op/s
Dec 13 03:30:05 np0005558241 nova_compute[248510]: 2025-12-13 08:30:05.631 248514 DEBUG nova.network.neutron [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:30:05 np0005558241 nova_compute[248510]: 2025-12-13 08:30:05.928 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:06 np0005558241 nova_compute[248510]: 2025-12-13 08:30:06.135 248514 DEBUG nova.compute.manager [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Received event network-vif-plugged-b94af62b-6c45-4269-94d9-b235090f4778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:06 np0005558241 nova_compute[248510]: 2025-12-13 08:30:06.136 248514 DEBUG oslo_concurrency.lockutils [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:06 np0005558241 nova_compute[248510]: 2025-12-13 08:30:06.137 248514 DEBUG oslo_concurrency.lockutils [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:06 np0005558241 nova_compute[248510]: 2025-12-13 08:30:06.137 248514 DEBUG oslo_concurrency.lockutils [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:06 np0005558241 nova_compute[248510]: 2025-12-13 08:30:06.137 248514 DEBUG nova.compute.manager [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] No waiting events found dispatching network-vif-plugged-b94af62b-6c45-4269-94d9-b235090f4778 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:30:06 np0005558241 nova_compute[248510]: 2025-12-13 08:30:06.137 248514 WARNING nova.compute.manager [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Received unexpected event network-vif-plugged-b94af62b-6c45-4269-94d9-b235090f4778 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:30:06 np0005558241 nova_compute[248510]: 2025-12-13 08:30:06.138 248514 DEBUG nova.compute.manager [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Received event network-changed-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:06 np0005558241 nova_compute[248510]: 2025-12-13 08:30:06.138 248514 DEBUG nova.compute.manager [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Refreshing instance network info cache due to event network-changed-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:30:06 np0005558241 nova_compute[248510]: 2025-12-13 08:30:06.138 248514 DEBUG oslo_concurrency.lockutils [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-850bda47-d7a0-4d8d-a048-258b8388cab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:30:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1980: 321 pgs: 321 active+clean; 227 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 155 op/s
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.661 248514 DEBUG nova.network.neutron [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Updating instance_info_cache with network_info: [{"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.687 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Releasing lock "refresh_cache-850bda47-d7a0-4d8d-a048-258b8388cab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.688 248514 DEBUG nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Instance network_info: |[{"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.689 248514 DEBUG oslo_concurrency.lockutils [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-850bda47-d7a0-4d8d-a048-258b8388cab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.689 248514 DEBUG nova.network.neutron [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Refreshing network info cache for port 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.692 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Start _get_guest_xml network_info=[{"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.697 248514 WARNING nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.703 248514 DEBUG nova.virt.libvirt.host [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.704 248514 DEBUG nova.virt.libvirt.host [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.706 248514 DEBUG nova.virt.libvirt.host [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.707 248514 DEBUG nova.virt.libvirt.host [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.707 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.707 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.708 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.709 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.709 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.709 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.710 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.711 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.711 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.711 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.712 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.712 248514 DEBUG nova.virt.hardware [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:30:07 np0005558241 nova_compute[248510]: 2025-12-13 08:30:07.715 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.232 248514 DEBUG nova.compute.manager [req-391ee370-8154-4f11-83cd-e1adba8c2954 req-5b5b0d7a-a515-43cd-a430-371b1b55c7f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Received event network-vif-plugged-13e6d879-e4c6-4a44-a982-10a62782047e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.233 248514 DEBUG oslo_concurrency.lockutils [req-391ee370-8154-4f11-83cd-e1adba8c2954 req-5b5b0d7a-a515-43cd-a430-371b1b55c7f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.234 248514 DEBUG oslo_concurrency.lockutils [req-391ee370-8154-4f11-83cd-e1adba8c2954 req-5b5b0d7a-a515-43cd-a430-371b1b55c7f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.234 248514 DEBUG oslo_concurrency.lockutils [req-391ee370-8154-4f11-83cd-e1adba8c2954 req-5b5b0d7a-a515-43cd-a430-371b1b55c7f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.235 248514 DEBUG nova.compute.manager [req-391ee370-8154-4f11-83cd-e1adba8c2954 req-5b5b0d7a-a515-43cd-a430-371b1b55c7f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Processing event network-vif-plugged-13e6d879-e4c6-4a44-a982-10a62782047e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.237 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.242 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614608.2415438, d594d7c8-13f8-4e02-80d2-490469301cca => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.243 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.246 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.252 248514 INFO nova.virt.libvirt.driver [-] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Instance spawned successfully.#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.253 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.271 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.291 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.297 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.298 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.299 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.300 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.300 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.301 248514 DEBUG nova.virt.libvirt.driver [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:30:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/357585265' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.335 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.336 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.362 248514 DEBUG nova.storage.rbd_utils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] rbd image 850bda47-d7a0-4d8d-a048-258b8388cab7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.367 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:08 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:08Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 03:30:08 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:08Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.415 248514 INFO nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Took 15.24 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.416 248514 DEBUG nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.676 248514 INFO nova.compute.manager [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Took 17.24 seconds to build instance.#033[00m
Dec 13 03:30:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:30:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4255884628' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.951 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.953 248514 DEBUG nova.virt.libvirt.vif [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-461167032',display_name='tempest-ImagesNegativeTestJSON-server-461167032',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-461167032',id=63,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8fd9d373def4437880ac432124a30a67',ramdisk_id='',reservation_id='r-ne053sg4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-865869987',owner_user_name='tempest-ImagesNegativeTestJ
SON-865869987-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:58Z,user_data=None,user_id='865bfe2430ea4f9ca639a4f89c86899d',uuid=850bda47-d7a0-4d8d-a048-258b8388cab7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.953 248514 DEBUG nova.network.os_vif_util [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Converting VIF {"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.954 248514 DEBUG nova.network.os_vif_util [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c0:75,bridge_name='br-int',has_traffic_filtering=True,id=554f8172-ce8f-4d4c-a511-9d7bfc8ecb45,network=Network(66bfc0e0-7de5-436f-90fb-b5e591519781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap554f8172-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:08 np0005558241 nova_compute[248510]: 2025-12-13 08:30:08.956 248514 DEBUG nova.objects.instance [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lazy-loading 'pci_devices' on Instance uuid 850bda47-d7a0-4d8d-a048-258b8388cab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:30:09
Dec 13 03:30:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:30:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:30:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'volumes', 'default.rgw.control', '.rgw.root', 'backups', 'default.rgw.meta', 'vms', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Dec 13 03:30:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.420 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:30:09 np0005558241 nova_compute[248510]:  <uuid>850bda47-d7a0-4d8d-a048-258b8388cab7</uuid>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:  <name>instance-0000003f</name>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <nova:name>tempest-ImagesNegativeTestJSON-server-461167032</nova:name>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:30:07</nova:creationTime>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:30:09 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:        <nova:user uuid="865bfe2430ea4f9ca639a4f89c86899d">tempest-ImagesNegativeTestJSON-865869987-project-member</nova:user>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:        <nova:project uuid="8fd9d373def4437880ac432124a30a67">tempest-ImagesNegativeTestJSON-865869987</nova:project>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:        <nova:port uuid="554f8172-ce8f-4d4c-a511-9d7bfc8ecb45">
Dec 13 03:30:09 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <entry name="serial">850bda47-d7a0-4d8d-a048-258b8388cab7</entry>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <entry name="uuid">850bda47-d7a0-4d8d-a048-258b8388cab7</entry>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/850bda47-d7a0-4d8d-a048-258b8388cab7_disk">
Dec 13 03:30:09 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:30:09 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/850bda47-d7a0-4d8d-a048-258b8388cab7_disk.config">
Dec 13 03:30:09 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:30:09 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:e7:c0:75"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <target dev="tap554f8172-ce"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7/console.log" append="off"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:30:09 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:30:09 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:30:09 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:30:09 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.424 248514 DEBUG nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Preparing to wait for external event network-vif-plugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.424 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquiring lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.426 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.427 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.429 248514 DEBUG nova.virt.libvirt.vif [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:29:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-461167032',display_name='tempest-ImagesNegativeTestJSON-server-461167032',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-461167032',id=63,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8fd9d373def4437880ac432124a30a67',ramdisk_id='',reservation_id='r-ne053sg4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-865869987',owner_user_name='tempest-ImagesNeg
ativeTestJSON-865869987-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:29:58Z,user_data=None,user_id='865bfe2430ea4f9ca639a4f89c86899d',uuid=850bda47-d7a0-4d8d-a048-258b8388cab7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.430 248514 DEBUG nova.network.os_vif_util [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Converting VIF {"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.433 248514 DEBUG nova.network.os_vif_util [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c0:75,bridge_name='br-int',has_traffic_filtering=True,id=554f8172-ce8f-4d4c-a511-9d7bfc8ecb45,network=Network(66bfc0e0-7de5-436f-90fb-b5e591519781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap554f8172-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.434 248514 DEBUG os_vif [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c0:75,bridge_name='br-int',has_traffic_filtering=True,id=554f8172-ce8f-4d4c-a511-9d7bfc8ecb45,network=Network(66bfc0e0-7de5-436f-90fb-b5e591519781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap554f8172-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.438 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.440 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.442 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.444 248514 DEBUG oslo_concurrency.lockutils [None req-21751314-365f-4df9-a641-189008445624 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.450 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.450 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap554f8172-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.451 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap554f8172-ce, col_values=(('external_ids', {'iface-id': '554f8172-ce8f-4d4c-a511-9d7bfc8ecb45', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:c0:75', 'vm-uuid': '850bda47-d7a0-4d8d-a048-258b8388cab7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:09 np0005558241 NetworkManager[50376]: <info>  [1765614609.4683] manager: (tap554f8172-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.467 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.472 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.475 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.476 248514 INFO os_vif [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c0:75,bridge_name='br-int',has_traffic_filtering=True,id=554f8172-ce8f-4d4c-a511-9d7bfc8ecb45,network=Network(66bfc0e0-7de5-436f-90fb-b5e591519781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap554f8172-ce')#033[00m
Dec 13 03:30:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1981: 321 pgs: 321 active+clean; 244 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.4 MiB/s wr, 234 op/s
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.722 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.723 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.723 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] No VIF found with MAC fa:16:3e:e7:c0:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.724 248514 INFO nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Using config drive#033[00m
Dec 13 03:30:09 np0005558241 nova_compute[248510]: 2025-12-13 08:30:09.750 248514 DEBUG nova.storage.rbd_utils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] rbd image 850bda47-d7a0-4d8d-a048-258b8388cab7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:30:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:30:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:30:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:30:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:30:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.198 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.224 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.258 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.258 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid d594d7c8-13f8-4e02-80d2-490469301cca _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.259 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.259 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 850bda47-d7a0-4d8d-a048-258b8388cab7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.259 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.259 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.259 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "d594d7c8-13f8-4e02-80d2-490469301cca" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.260 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "d594d7c8-13f8-4e02-80d2-490469301cca" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.260 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.260 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.260 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "850bda47-d7a0-4d8d-a048-258b8388cab7" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.277 248514 INFO nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Creating config drive at /var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7/disk.config#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.282 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpurlkq8l4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.330 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.332 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.332 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "d594d7c8-13f8-4e02-80d2-490469301cca" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.435 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpurlkq8l4" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.472 248514 DEBUG nova.storage.rbd_utils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] rbd image 850bda47-d7a0-4d8d-a048-258b8388cab7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.477 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7/disk.config 850bda47-d7a0-4d8d-a048-258b8388cab7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:30:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:30:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:30:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:30:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:30:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:30:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:30:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:30:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:30:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.654 248514 DEBUG oslo_concurrency.processutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7/disk.config 850bda47-d7a0-4d8d-a048-258b8388cab7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.655 248514 INFO nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Deleting local config drive /var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7/disk.config because it was imported into RBD.#033[00m
Dec 13 03:30:10 np0005558241 NetworkManager[50376]: <info>  [1765614610.7124] manager: (tap554f8172-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/256)
Dec 13 03:30:10 np0005558241 kernel: tap554f8172-ce: entered promiscuous mode
Dec 13 03:30:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:10Z|00568|binding|INFO|Claiming lport 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 for this chassis.
Dec 13 03:30:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:10Z|00569|binding|INFO|554f8172-ce8f-4d4c-a511-9d7bfc8ecb45: Claiming fa:16:3e:e7:c0:75 10.100.0.8
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.719 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.726 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:c0:75 10.100.0.8'], port_security=['fa:16:3e:e7:c0:75 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '850bda47-d7a0-4d8d-a048-258b8388cab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66bfc0e0-7de5-436f-90fb-b5e591519781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8fd9d373def4437880ac432124a30a67', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccef7c54-bce8-4925-91bd-ad28ca3c3b7f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb2b3e88-e3d3-490d-ba95-e13073ca0a5b, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=554f8172-ce8f-4d4c-a511-9d7bfc8ecb45) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:30:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.728 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 in datapath 66bfc0e0-7de5-436f-90fb-b5e591519781 bound to our chassis#033[00m
Dec 13 03:30:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.730 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 66bfc0e0-7de5-436f-90fb-b5e591519781#033[00m
Dec 13 03:30:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.743 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[349f1e67-d485-47a4-8e53-ed88306fd5a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.744 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap66bfc0e0-71 in ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:30:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.746 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap66bfc0e0-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:30:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.746 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[589877ff-1f16-49a7-a686-8e946bdffe6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.747 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cf17bef6-01b0-4419-8db0-13094171b174]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:10 np0005558241 systemd-udevd[309328]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.749 248514 DEBUG nova.network.neutron [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Updated VIF entry in instance network info cache for port 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.750 248514 DEBUG nova.network.neutron [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Updating instance_info_cache with network_info: [{"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:30:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:10Z|00570|binding|INFO|Setting lport 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 ovn-installed in OVS
Dec 13 03:30:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:10Z|00571|binding|INFO|Setting lport 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 up in Southbound
Dec 13 03:30:10 np0005558241 NetworkManager[50376]: <info>  [1765614610.7688] device (tap554f8172-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:30:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.765 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[af8db099-a8b7-4a05-8784-6a5fa1472ff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:10 np0005558241 NetworkManager[50376]: <info>  [1765614610.7698] device (tap554f8172-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:30:10 np0005558241 systemd-machined[210538]: New machine qemu-71-instance-0000003f.
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.868 248514 DEBUG oslo_concurrency.lockutils [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-850bda47-d7a0-4d8d-a048-258b8388cab7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.870 248514 DEBUG nova.compute.manager [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Received event network-vif-plugged-13e6d879-e4c6-4a44-a982-10a62782047e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.871 248514 DEBUG oslo_concurrency.lockutils [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.871 248514 DEBUG oslo_concurrency.lockutils [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.872 248514 DEBUG oslo_concurrency.lockutils [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.872 248514 DEBUG nova.compute.manager [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] No waiting events found dispatching network-vif-plugged-13e6d879-e4c6-4a44-a982-10a62782047e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.872 248514 WARNING nova.compute.manager [req-18a95c7d-df19-4490-a173-fd73bc50739b req-343c4135-9fca-4459-9d3b-4b6d67a65e68 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Received unexpected event network-vif-plugged-13e6d879-e4c6-4a44-a982-10a62782047e for instance with vm_state building and task_state spawning.#033[00m
Dec 13 03:30:10 np0005558241 nova_compute[248510]: 2025-12-13 08:30:10.873 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:10 np0005558241 systemd[1]: Started Virtual Machine qemu-71-instance-0000003f.
Dec 13 03:30:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.881 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[69ffe9e6-fcce-4c53-b0a6-c90e731a2807]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.919 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[cf2e3c47-1e3e-4d72-b487-1d4a1d934fb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:10 np0005558241 systemd-udevd[309332]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:30:10 np0005558241 NetworkManager[50376]: <info>  [1765614610.9318] manager: (tap66bfc0e0-70): new Veth device (/org/freedesktop/NetworkManager/Devices/257)
Dec 13 03:30:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.930 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1f6905-44fa-470b-b8e7-735f46f8fdb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.975 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[417269c5-720c-49e8-8f5c-5a081db737a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:10.980 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[93630a69-c965-4f35-a13b-b6b78f20c606]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:11 np0005558241 NetworkManager[50376]: <info>  [1765614611.0090] device (tap66bfc0e0-70): carrier: link connected
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.016 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0cc9745c-7b72-4c10-a222-00558f677948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.036 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[95d18016-e594-4779-9bbe-cf1d2bc6485e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap66bfc0e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:ed:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718822, 'reachable_time': 27903, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309361, 'error': None, 'target': 'ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.060 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[06d15214-525e-477b-b6bb-7de8218492ff]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe87:ed50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718822, 'tstamp': 718822}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309362, 'error': None, 'target': 'ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.086 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7a7a4ad3-2636-470e-b146-6833e9f7a930]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap66bfc0e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:ed:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718822, 'reachable_time': 27903, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309363, 'error': None, 'target': 'ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.127 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[be8b7a78-0062-427e-adc0-edad04d7a740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.194 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5047986f-7d66-4b3e-a4c1-dad9f1499b8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.196 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66bfc0e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.196 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.197 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66bfc0e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:11 np0005558241 nova_compute[248510]: 2025-12-13 08:30:11.199 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:11 np0005558241 NetworkManager[50376]: <info>  [1765614611.1997] manager: (tap66bfc0e0-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Dec 13 03:30:11 np0005558241 kernel: tap66bfc0e0-70: entered promiscuous mode
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.202 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap66bfc0e0-70, col_values=(('external_ids', {'iface-id': '679b65d4-8c76-4539-9e29-7edabab1f4fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:11 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:11Z|00572|binding|INFO|Releasing lport 679b65d4-8c76-4539-9e29-7edabab1f4fa from this chassis (sb_readonly=0)
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.222 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/66bfc0e0-7de5-436f-90fb-b5e591519781.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/66bfc0e0-7de5-436f-90fb-b5e591519781.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:30:11 np0005558241 nova_compute[248510]: 2025-12-13 08:30:11.223 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.225 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ca676fc7-1e89-49f5-b031-d52981b2932a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.226 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-66bfc0e0-7de5-436f-90fb-b5e591519781
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/66bfc0e0-7de5-436f-90fb-b5e591519781.pid.haproxy
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 66bfc0e0-7de5-436f-90fb-b5e591519781
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:30:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:11.227 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781', 'env', 'PROCESS_TAG=haproxy-66bfc0e0-7de5-436f-90fb-b5e591519781', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/66bfc0e0-7de5-436f-90fb-b5e591519781.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:30:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1982: 321 pgs: 321 active+clean; 256 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.3 MiB/s wr, 215 op/s
Dec 13 03:30:11 np0005558241 podman[309395]: 2025-12-13 08:30:11.615944711 +0000 UTC m=+0.057386067 container create 7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 03:30:11 np0005558241 systemd[1]: Started libpod-conmon-7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a.scope.
Dec 13 03:30:11 np0005558241 podman[309395]: 2025-12-13 08:30:11.585355366 +0000 UTC m=+0.026796722 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:30:11 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:30:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72003b56c84a1a54322c2566a9afa6d2b3bf57da10dc3d13cd92732eb061ede0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:11 np0005558241 podman[309395]: 2025-12-13 08:30:11.71397553 +0000 UTC m=+0.155416866 container init 7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:30:11 np0005558241 podman[309395]: 2025-12-13 08:30:11.723408203 +0000 UTC m=+0.164849539 container start 7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:30:11 np0005558241 neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781[309410]: [NOTICE]   (309414) : New worker (309430) forked
Dec 13 03:30:11 np0005558241 neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781[309410]: [NOTICE]   (309414) : Loading success.
Dec 13 03:30:11 np0005558241 nova_compute[248510]: 2025-12-13 08:30:11.948 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614611.9481275, 850bda47-d7a0-4d8d-a048-258b8388cab7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:11 np0005558241 nova_compute[248510]: 2025-12-13 08:30:11.949 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] VM Started (Lifecycle Event)#033[00m
Dec 13 03:30:11 np0005558241 nova_compute[248510]: 2025-12-13 08:30:11.979 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:11 np0005558241 nova_compute[248510]: 2025-12-13 08:30:11.984 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614611.9515266, 850bda47-d7a0-4d8d-a048-258b8388cab7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:11 np0005558241 nova_compute[248510]: 2025-12-13 08:30:11.985 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.016 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.021 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.046 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.550 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.933 248514 DEBUG nova.compute.manager [req-079bad9e-0aae-4d49-893f-d4233cf354cf req-535d6e41-0566-4d75-ac8f-7e13f4fe00a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Received event network-vif-plugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.934 248514 DEBUG oslo_concurrency.lockutils [req-079bad9e-0aae-4d49-893f-d4233cf354cf req-535d6e41-0566-4d75-ac8f-7e13f4fe00a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.935 248514 DEBUG oslo_concurrency.lockutils [req-079bad9e-0aae-4d49-893f-d4233cf354cf req-535d6e41-0566-4d75-ac8f-7e13f4fe00a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.935 248514 DEBUG oslo_concurrency.lockutils [req-079bad9e-0aae-4d49-893f-d4233cf354cf req-535d6e41-0566-4d75-ac8f-7e13f4fe00a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.936 248514 DEBUG nova.compute.manager [req-079bad9e-0aae-4d49-893f-d4233cf354cf req-535d6e41-0566-4d75-ac8f-7e13f4fe00a3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Processing event network-vif-plugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.937 248514 DEBUG nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.942 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.943 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614612.9411068, 850bda47-d7a0-4d8d-a048-258b8388cab7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.943 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.948 248514 INFO nova.virt.libvirt.driver [-] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Instance spawned successfully.#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.948 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.975 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.983 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.983 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.984 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.985 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.985 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.986 248514 DEBUG nova.virt.libvirt.driver [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:12 np0005558241 nova_compute[248510]: 2025-12-13 08:30:12.991 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.023 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.053 248514 INFO nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Took 14.44 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.053 248514 DEBUG nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.114 248514 DEBUG oslo_concurrency.lockutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "d594d7c8-13f8-4e02-80d2-490469301cca" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.115 248514 DEBUG oslo_concurrency.lockutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.115 248514 DEBUG oslo_concurrency.lockutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.115 248514 DEBUG oslo_concurrency.lockutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.115 248514 DEBUG oslo_concurrency.lockutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.117 248514 INFO nova.compute.manager [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Terminating instance#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.118 248514 DEBUG nova.compute.manager [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.129 248514 INFO nova.compute.manager [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Took 15.73 seconds to build instance.#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.151 248514 DEBUG oslo_concurrency.lockutils [None req-189aa421-bc49-4346-a600-00f47a7b3a53 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.151 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.152 248514 INFO nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.152 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:13 np0005558241 kernel: tap13e6d879-e4 (unregistering): left promiscuous mode
Dec 13 03:30:13 np0005558241 NetworkManager[50376]: <info>  [1765614613.1768] device (tap13e6d879-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:30:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:13Z|00573|binding|INFO|Releasing lport 13e6d879-e4c6-4a44-a982-10a62782047e from this chassis (sb_readonly=0)
Dec 13 03:30:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:13Z|00574|binding|INFO|Setting lport 13e6d879-e4c6-4a44-a982-10a62782047e down in Southbound
Dec 13 03:30:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:13Z|00575|binding|INFO|Removing iface tap13e6d879-e4 ovn-installed in OVS
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.182 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.188 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:36:41 10.100.0.6'], port_security=['fa:16:3e:89:36:41 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd594d7c8-13f8-4e02-80d2-490469301cca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c25aa866d502481eb9410b7d92a1347b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '156063d6-b943-4f89-aea0-35fed05fb231', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be6d27f2-b986-4f4f-bd03-a90a181463f2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=13e6d879-e4c6-4a44-a982-10a62782047e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.191 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 13e6d879-e4c6-4a44-a982-10a62782047e in datapath 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf unbound from our chassis#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.196 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.213 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.220 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6b908be6-ecc7-423d-915e-17b542810c7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:13 np0005558241 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Dec 13 03:30:13 np0005558241 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003d.scope: Consumed 5.413s CPU time.
Dec 13 03:30:13 np0005558241 systemd-machined[210538]: Machine qemu-70-instance-0000003d terminated.
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.254 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9e456535-8fc5-4042-afa7-44fe5ebdc947]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.259 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[15fd2139-7b62-473f-8cca-71e941b719ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.292 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad3d1df-bcf5-47c1-b05a-7bfa6fb7c23d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.295 248514 DEBUG oslo_concurrency.lockutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.296 248514 DEBUG oslo_concurrency.lockutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.296 248514 DEBUG oslo_concurrency.lockutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.297 248514 DEBUG oslo_concurrency.lockutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.297 248514 DEBUG oslo_concurrency.lockutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.298 248514 INFO nova.compute.manager [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Terminating instance#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.300 248514 DEBUG nova.compute.manager [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.316 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d820a4fd-642d-483c-8abf-b9dc643d11dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ca54d31-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718015, 'reachable_time': 37588, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309477, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.340 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1b61de-565e-4bd7-ae9e-6685a5a90829]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2ca54d31-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718035, 'tstamp': 718035}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309478, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2ca54d31-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 718039, 'tstamp': 718039}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309478, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.344 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ca54d31-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.346 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:13 np0005558241 kernel: tapb94af62b-6c (unregistering): left promiscuous mode
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.358 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:13 np0005558241 NetworkManager[50376]: <info>  [1765614613.3596] device (tapb94af62b-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.361 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ca54d31-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.361 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.362 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ca54d31-d0, col_values=(('external_ids', {'iface-id': '327a65c7-a67a-4fc2-b067-82e72753566c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.362 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.366 248514 INFO nova.virt.libvirt.driver [-] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Instance destroyed successfully.#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.368 248514 DEBUG nova.objects.instance [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'resources' on Instance uuid d594d7c8-13f8-4e02-80d2-490469301cca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.371 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:13Z|00576|binding|INFO|Releasing lport b94af62b-6c45-4269-94d9-b235090f4778 from this chassis (sb_readonly=0)
Dec 13 03:30:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:13Z|00577|binding|INFO|Setting lport b94af62b-6c45-4269-94d9-b235090f4778 down in Southbound
Dec 13 03:30:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:13Z|00578|binding|INFO|Removing iface tapb94af62b-6c ovn-installed in OVS
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.373 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.381 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:af:90 10.100.0.11'], port_security=['fa:16:3e:a6:af:90 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '13a5d640-9e2a-49d7-9f95-be18ebbe1cfe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c25aa866d502481eb9410b7d92a1347b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '156063d6-b943-4f89-aea0-35fed05fb231', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be6d27f2-b986-4f4f-bd03-a90a181463f2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b94af62b-6c45-4269-94d9-b235090f4778) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.385 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b94af62b-6c45-4269-94d9-b235090f4778 in datapath 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf unbound from our chassis#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.389 248514 DEBUG nova.virt.libvirt.vif [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1395231476',display_name='tempest-tempest.common.compute-instance-1395231476-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1395231476-1',id=61,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:30:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-jt3b6t8k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleCreateTestJSON-478861069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:08Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=d594d7c8-13f8-4e02-80d2-490469301cca,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.391 248514 DEBUG nova.network.os_vif_util [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "13e6d879-e4c6-4a44-a982-10a62782047e", "address": "fa:16:3e:89:36:41", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e6d879-e4", "ovs_interfaceid": "13e6d879-e4c6-4a44-a982-10a62782047e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.391 248514 DEBUG nova.network.os_vif_util [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:36:41,bridge_name='br-int',has_traffic_filtering=True,id=13e6d879-e4c6-4a44-a982-10a62782047e,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e6d879-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.394 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.396 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0c9379b9-368e-485c-b401-2c7530336a66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.397 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf namespace which is not needed anymore#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.407 248514 DEBUG os_vif [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:36:41,bridge_name='br-int',has_traffic_filtering=True,id=13e6d879-e4c6-4a44-a982-10a62782047e,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e6d879-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.410 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.412 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13e6d879-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.413 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.413 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.415 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.420 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.422 248514 INFO os_vif [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:36:41,bridge_name='br-int',has_traffic_filtering=True,id=13e6d879-e4c6-4a44-a982-10a62782047e,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e6d879-e4')#033[00m
Dec 13 03:30:13 np0005558241 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Dec 13 03:30:13 np0005558241 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003e.scope: Consumed 10.133s CPU time.
Dec 13 03:30:13 np0005558241 systemd-machined[210538]: Machine qemu-69-instance-0000003e terminated.
Dec 13 03:30:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1983: 321 pgs: 321 active+clean; 258 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.3 MiB/s wr, 201 op/s
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.543 248514 INFO nova.virt.libvirt.driver [-] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Instance destroyed successfully.#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.544 248514 DEBUG nova.objects.instance [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'resources' on Instance uuid 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.569 248514 DEBUG nova.virt.libvirt.vif [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1395231476',display_name='tempest-tempest.common.compute-instance-1395231476-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1395231476-2',id=62,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-12-13T08:30:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-jt3b6t8k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleCreateTestJSON-478861069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:04Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=13a5d640-9e2a-49d7-9f95-be18ebbe1cfe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.570 248514 DEBUG nova.network.os_vif_util [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "b94af62b-6c45-4269-94d9-b235090f4778", "address": "fa:16:3e:a6:af:90", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb94af62b-6c", "ovs_interfaceid": "b94af62b-6c45-4269-94d9-b235090f4778", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.571 248514 DEBUG nova.network.os_vif_util [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:af:90,bridge_name='br-int',has_traffic_filtering=True,id=b94af62b-6c45-4269-94d9-b235090f4778,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb94af62b-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.572 248514 DEBUG os_vif [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:af:90,bridge_name='br-int',has_traffic_filtering=True,id=b94af62b-6c45-4269-94d9-b235090f4778,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb94af62b-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.573 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.574 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb94af62b-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.576 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.578 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.578 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.581 248514 INFO os_vif [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:af:90,bridge_name='br-int',has_traffic_filtering=True,id=b94af62b-6c45-4269-94d9-b235090f4778,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb94af62b-6c')#033[00m
Dec 13 03:30:13 np0005558241 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[309168]: [NOTICE]   (309172) : haproxy version is 2.8.14-c23fe91
Dec 13 03:30:13 np0005558241 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[309168]: [NOTICE]   (309172) : path to executable is /usr/sbin/haproxy
Dec 13 03:30:13 np0005558241 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[309168]: [WARNING]  (309172) : Exiting Master process...
Dec 13 03:30:13 np0005558241 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[309168]: [ALERT]    (309172) : Current worker (309174) exited with code 143 (Terminated)
Dec 13 03:30:13 np0005558241 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[309168]: [WARNING]  (309172) : All workers exited. Exiting... (0)
Dec 13 03:30:13 np0005558241 systemd[1]: libpod-ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e.scope: Deactivated successfully.
Dec 13 03:30:13 np0005558241 podman[309530]: 2025-12-13 08:30:13.603533541 +0000 UTC m=+0.076814026 container died ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:30:13 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e-userdata-shm.mount: Deactivated successfully.
Dec 13 03:30:13 np0005558241 systemd[1]: var-lib-containers-storage-overlay-be07c0ff00db9044e874028d58961a61aa814490eafdd0ea0e2424e549f83981-merged.mount: Deactivated successfully.
Dec 13 03:30:13 np0005558241 podman[309530]: 2025-12-13 08:30:13.655550655 +0000 UTC m=+0.128831140 container cleanup ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:30:13 np0005558241 systemd[1]: libpod-conmon-ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e.scope: Deactivated successfully.
Dec 13 03:30:13 np0005558241 podman[309586]: 2025-12-13 08:30:13.737965408 +0000 UTC m=+0.058498354 container remove ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.749 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[57ed40af-edd3-420a-a71c-02fb2a5ab888]: (4, ('Sat Dec 13 08:30:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf (ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e)\nccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e\nSat Dec 13 08:30:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf (ccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e)\nccb7d7538a27fd7b4f04570dfa13852c00c0126c426ee9e4b1c98c8123af579e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.752 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a4f006-2375-41dd-a986-4043a3c04135]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.754 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ca54d31-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.756 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:13 np0005558241 kernel: tap2ca54d31-d0: left promiscuous mode
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.776 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.780 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[617daa5c-639e-4456-83ce-e49e1e34b575]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.802 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a1e9886f-275a-42e9-b3d0-f560944b24ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.803 248514 INFO nova.virt.libvirt.driver [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Deleting instance files /var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca_del#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.804 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8344d4ec-e07e-4959-a7c4-0a129e42c411]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.805 248514 INFO nova.virt.libvirt.driver [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Deletion of /var/lib/nova/instances/d594d7c8-13f8-4e02-80d2-490469301cca_del complete#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.828 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[25e76bb5-8fed-47dd-aea6-6ed5ae1d5d15]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718006, 'reachable_time': 18582, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309600, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:13 np0005558241 systemd[1]: run-netns-ovnmeta\x2d2ca54d31\x2dd7ab\x2d4904\x2da0d6\x2d4e2970bb54bf.mount: Deactivated successfully.
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.833 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:30:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:13.833 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[3e1cf3a3-ee05-4b7f-ae4f-61902620209a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.912 248514 INFO nova.virt.libvirt.driver [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Deleting instance files /var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_del#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.913 248514 INFO nova.virt.libvirt.driver [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Deletion of /var/lib/nova/instances/13a5d640-9e2a-49d7-9f95-be18ebbe1cfe_del complete#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.920 248514 INFO nova.compute.manager [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.920 248514 DEBUG oslo.service.loopingcall [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.921 248514 DEBUG nova.compute.manager [-] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.922 248514 DEBUG nova.network.neutron [-] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.972 248514 INFO nova.compute.manager [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.972 248514 DEBUG oslo.service.loopingcall [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.973 248514 DEBUG nova.compute.manager [-] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:30:13 np0005558241 nova_compute[248510]: 2025-12-13 08:30:13.973 248514 DEBUG nova.network.neutron [-] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:30:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:30:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1776667451' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:30:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:30:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1776667451' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.145 248514 DEBUG nova.network.neutron [-] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.169 248514 INFO nova.compute.manager [-] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Took 1.20 seconds to deallocate network for instance.#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.199 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.228 248514 DEBUG oslo_concurrency.lockutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.228 248514 DEBUG oslo_concurrency.lockutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.278 248514 DEBUG nova.compute.manager [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Received event network-vif-plugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.280 248514 DEBUG oslo_concurrency.lockutils [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.281 248514 DEBUG oslo_concurrency.lockutils [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.282 248514 DEBUG oslo_concurrency.lockutils [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.282 248514 DEBUG nova.compute.manager [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] No waiting events found dispatching network-vif-plugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.282 248514 WARNING nova.compute.manager [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Received unexpected event network-vif-plugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.283 248514 DEBUG nova.compute.manager [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Received event network-vif-unplugged-13e6d879-e4c6-4a44-a982-10a62782047e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.284 248514 DEBUG oslo_concurrency.lockutils [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.285 248514 DEBUG oslo_concurrency.lockutils [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.286 248514 DEBUG oslo_concurrency.lockutils [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:15 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.291 248514 DEBUG nova.compute.manager [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] No waiting events found dispatching network-vif-unplugged-13e6d879-e4c6-4a44-a982-10a62782047e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.292 248514 DEBUG nova.compute.manager [req-68e80a6d-0c59-40ab-a142-a1ca3131f8c3 req-0f309bd5-2ba6-4336-b720-41da7dad802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Received event network-vif-unplugged-13e6d879-e4c6-4a44-a982-10a62782047e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.297 248514 DEBUG nova.compute.manager [req-5b1b83b5-e444-4c81-a553-a9106891022e req-6fc4a89a-ced6-48e6-9489-9df6a63b3a29 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Received event network-vif-deleted-b94af62b-6c45-4269-94d9-b235090f4778 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.369 248514 DEBUG oslo_concurrency.processutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.461 248514 DEBUG oslo_concurrency.lockutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquiring lock "850bda47-d7a0-4d8d-a048-258b8388cab7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.462 248514 DEBUG oslo_concurrency.lockutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.462 248514 DEBUG oslo_concurrency.lockutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquiring lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.462 248514 DEBUG oslo_concurrency.lockutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.463 248514 DEBUG oslo_concurrency.lockutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.465 248514 INFO nova.compute.manager [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Terminating instance#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.466 248514 DEBUG nova.compute.manager [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:30:15 np0005558241 kernel: tap554f8172-ce (unregistering): left promiscuous mode
Dec 13 03:30:15 np0005558241 NetworkManager[50376]: <info>  [1765614615.5048] device (tap554f8172-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:30:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1984: 321 pgs: 321 active+clean; 189 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 2.7 MiB/s wr, 292 op/s
Dec 13 03:30:15 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:15Z|00579|binding|INFO|Releasing lport 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 from this chassis (sb_readonly=0)
Dec 13 03:30:15 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:15Z|00580|binding|INFO|Setting lport 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 down in Southbound
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.510 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:15 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:15Z|00581|binding|INFO|Removing iface tap554f8172-ce ovn-installed in OVS
Dec 13 03:30:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.521 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:c0:75 10.100.0.8'], port_security=['fa:16:3e:e7:c0:75 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '850bda47-d7a0-4d8d-a048-258b8388cab7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66bfc0e0-7de5-436f-90fb-b5e591519781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8fd9d373def4437880ac432124a30a67', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccef7c54-bce8-4925-91bd-ad28ca3c3b7f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb2b3e88-e3d3-490d-ba95-e13073ca0a5b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=554f8172-ce8f-4d4c-a511-9d7bfc8ecb45) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:30:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.523 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 in datapath 66bfc0e0-7de5-436f-90fb-b5e591519781 unbound from our chassis#033[00m
Dec 13 03:30:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.525 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 66bfc0e0-7de5-436f-90fb-b5e591519781, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:30:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.526 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[911a1804-dc34-41c8-8722-dda130d8e2c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.526 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781 namespace which is not needed anymore#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:15 np0005558241 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Dec 13 03:30:15 np0005558241 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003f.scope: Consumed 3.561s CPU time.
Dec 13 03:30:15 np0005558241 systemd-machined[210538]: Machine qemu-71-instance-0000003f terminated.
Dec 13 03:30:15 np0005558241 neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781[309410]: [NOTICE]   (309414) : haproxy version is 2.8.14-c23fe91
Dec 13 03:30:15 np0005558241 neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781[309410]: [NOTICE]   (309414) : path to executable is /usr/sbin/haproxy
Dec 13 03:30:15 np0005558241 neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781[309410]: [WARNING]  (309414) : Exiting Master process...
Dec 13 03:30:15 np0005558241 neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781[309410]: [WARNING]  (309414) : Exiting Master process...
Dec 13 03:30:15 np0005558241 neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781[309410]: [ALERT]    (309414) : Current worker (309430) exited with code 143 (Terminated)
Dec 13 03:30:15 np0005558241 neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781[309410]: [WARNING]  (309414) : All workers exited. Exiting... (0)
Dec 13 03:30:15 np0005558241 systemd[1]: libpod-7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a.scope: Deactivated successfully.
Dec 13 03:30:15 np0005558241 podman[309648]: 2025-12-13 08:30:15.678294824 +0000 UTC m=+0.047020042 container died 7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.694 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.703 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.712 248514 INFO nova.virt.libvirt.driver [-] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Instance destroyed successfully.#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.713 248514 DEBUG nova.objects.instance [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lazy-loading 'resources' on Instance uuid 850bda47-d7a0-4d8d-a048-258b8388cab7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a-userdata-shm.mount: Deactivated successfully.
Dec 13 03:30:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay-72003b56c84a1a54322c2566a9afa6d2b3bf57da10dc3d13cd92732eb061ede0-merged.mount: Deactivated successfully.
Dec 13 03:30:15 np0005558241 podman[309648]: 2025-12-13 08:30:15.735083145 +0000 UTC m=+0.103808363 container cleanup 7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.735 248514 DEBUG nova.virt.libvirt.vif [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-461167032',display_name='tempest-ImagesNegativeTestJSON-server-461167032',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-461167032',id=63,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:30:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8fd9d373def4437880ac432124a30a67',ramdisk_id='',reservation_id='r-ne053sg4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-865869987',owner_user_name='tempest-ImagesNegativeTestJSON-865869987-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:13Z,user_data=None,user_id='865bfe2430ea4f9ca639a4f89c86899d',uuid=850bda47-d7a0-4d8d-a048-258b8388cab7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.736 248514 DEBUG nova.network.os_vif_util [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Converting VIF {"id": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "address": "fa:16:3e:e7:c0:75", "network": {"id": "66bfc0e0-7de5-436f-90fb-b5e591519781", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-1147483234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fd9d373def4437880ac432124a30a67", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554f8172-ce", "ovs_interfaceid": "554f8172-ce8f-4d4c-a511-9d7bfc8ecb45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.736 248514 DEBUG nova.network.os_vif_util [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c0:75,bridge_name='br-int',has_traffic_filtering=True,id=554f8172-ce8f-4d4c-a511-9d7bfc8ecb45,network=Network(66bfc0e0-7de5-436f-90fb-b5e591519781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap554f8172-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.737 248514 DEBUG os_vif [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c0:75,bridge_name='br-int',has_traffic_filtering=True,id=554f8172-ce8f-4d4c-a511-9d7bfc8ecb45,network=Network(66bfc0e0-7de5-436f-90fb-b5e591519781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap554f8172-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.739 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.740 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap554f8172-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.742 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.745 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:30:15 np0005558241 systemd[1]: libpod-conmon-7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a.scope: Deactivated successfully.
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.750 248514 INFO os_vif [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c0:75,bridge_name='br-int',has_traffic_filtering=True,id=554f8172-ce8f-4d4c-a511-9d7bfc8ecb45,network=Network(66bfc0e0-7de5-436f-90fb-b5e591519781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap554f8172-ce')#033[00m
Dec 13 03:30:15 np0005558241 podman[309684]: 2025-12-13 08:30:15.82039349 +0000 UTC m=+0.057610443 container remove 7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 03:30:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.829 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1610c90e-9f57-4127-85dd-d60c1e2745c3]: (4, ('Sat Dec 13 08:30:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781 (7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a)\n7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a\nSat Dec 13 08:30:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781 (7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a)\n7452258c2c1e57665756648bc59a623371e524894aada5864ec5444b01ac277a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.832 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78590b6b-e342-458e-b530-26e53961a773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.834 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66bfc0e0-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.836 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:15 np0005558241 kernel: tap66bfc0e0-70: left promiscuous mode
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.855 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.867 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd309a3-0015-4e38-9df5-4b00aba5bc28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.882 248514 DEBUG nova.network.neutron [-] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:30:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.884 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6ed305-5278-4a81-b974-2e0a5dedf0f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.886 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c762e79e-6d48-4757-813a-179feaef6f25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.904 248514 INFO nova.compute.manager [-] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Took 1.98 seconds to deallocate network for instance.#033[00m
Dec 13 03:30:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.906 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aa17e711-72c5-4ad9-a118-8b609c73dd59]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 718812, 'reachable_time': 26080, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309717, 'error': None, 'target': 'ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:15 np0005558241 systemd[1]: run-netns-ovnmeta\x2d66bfc0e0\x2d7de5\x2d436f\x2d90fb\x2db5e591519781.mount: Deactivated successfully.
Dec 13 03:30:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.912 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-66bfc0e0-7de5-436f-90fb-b5e591519781 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:30:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:15.912 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d81962-929f-41bf-b73d-bcc476cb6515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.960 248514 DEBUG oslo_concurrency.lockutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:30:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3129252889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:30:15 np0005558241 nova_compute[248510]: 2025-12-13 08:30:15.997 248514 DEBUG oslo_concurrency.processutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:30:16 np0005558241 nova_compute[248510]: 2025-12-13 08:30:16.006 248514 DEBUG nova.compute.provider_tree [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:30:16 np0005558241 nova_compute[248510]: 2025-12-13 08:30:16.037 248514 DEBUG nova.scheduler.client.report [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:30:16 np0005558241 nova_compute[248510]: 2025-12-13 08:30:16.062 248514 DEBUG oslo_concurrency.lockutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:30:16 np0005558241 nova_compute[248510]: 2025-12-13 08:30:16.066 248514 DEBUG oslo_concurrency.lockutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:30:16 np0005558241 nova_compute[248510]: 2025-12-13 08:30:16.079 248514 INFO nova.virt.libvirt.driver [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Deleting instance files /var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7_del
Dec 13 03:30:16 np0005558241 nova_compute[248510]: 2025-12-13 08:30:16.080 248514 INFO nova.virt.libvirt.driver [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Deletion of /var/lib/nova/instances/850bda47-d7a0-4d8d-a048-258b8388cab7_del complete
Dec 13 03:30:16 np0005558241 nova_compute[248510]: 2025-12-13 08:30:16.094 248514 INFO nova.scheduler.client.report [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Deleted allocations for instance 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe
Dec 13 03:30:16 np0005558241 nova_compute[248510]: 2025-12-13 08:30:16.166 248514 INFO nova.compute.manager [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Took 0.70 seconds to destroy the instance on the hypervisor.
Dec 13 03:30:16 np0005558241 nova_compute[248510]: 2025-12-13 08:30:16.167 248514 DEBUG oslo.service.loopingcall [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 03:30:16 np0005558241 nova_compute[248510]: 2025-12-13 08:30:16.167 248514 DEBUG nova.compute.manager [-] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 03:30:16 np0005558241 nova_compute[248510]: 2025-12-13 08:30:16.168 248514 DEBUG nova.network.neutron [-] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 03:30:16 np0005558241 nova_compute[248510]: 2025-12-13 08:30:16.176 248514 DEBUG oslo_concurrency.processutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:30:16 np0005558241 nova_compute[248510]: 2025-12-13 08:30:16.212 248514 DEBUG oslo_concurrency.lockutils [None req-99741b2e-5ad6-48ae-b019-3397bfabbd5b 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "13a5d640-9e2a-49d7-9f95-be18ebbe1cfe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:30:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:30:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/337783534' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:30:16 np0005558241 nova_compute[248510]: 2025-12-13 08:30:16.721 248514 DEBUG oslo_concurrency.processutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:30:16 np0005558241 nova_compute[248510]: 2025-12-13 08:30:16.728 248514 DEBUG nova.compute.provider_tree [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:30:16 np0005558241 nova_compute[248510]: 2025-12-13 08:30:16.744 248514 DEBUG nova.scheduler.client.report [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:30:16 np0005558241 nova_compute[248510]: 2025-12-13 08:30:16.772 248514 DEBUG oslo_concurrency.lockutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:30:16 np0005558241 nova_compute[248510]: 2025-12-13 08:30:16.802 248514 INFO nova.scheduler.client.report [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Deleted allocations for instance d594d7c8-13f8-4e02-80d2-490469301cca
Dec 13 03:30:16 np0005558241 nova_compute[248510]: 2025-12-13 08:30:16.878 248514 DEBUG oslo_concurrency.lockutils [None req-f6f08876-eade-4cf5-844f-f384099b620e 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:30:17 np0005558241 nova_compute[248510]: 2025-12-13 08:30:17.408 248514 DEBUG nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Received event network-vif-plugged-13e6d879-e4c6-4a44-a982-10a62782047e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:30:17 np0005558241 nova_compute[248510]: 2025-12-13 08:30:17.409 248514 DEBUG oslo_concurrency.lockutils [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:30:17 np0005558241 nova_compute[248510]: 2025-12-13 08:30:17.409 248514 DEBUG oslo_concurrency.lockutils [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:30:17 np0005558241 nova_compute[248510]: 2025-12-13 08:30:17.410 248514 DEBUG oslo_concurrency.lockutils [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d594d7c8-13f8-4e02-80d2-490469301cca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:30:17 np0005558241 nova_compute[248510]: 2025-12-13 08:30:17.410 248514 DEBUG nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] No waiting events found dispatching network-vif-plugged-13e6d879-e4c6-4a44-a982-10a62782047e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:30:17 np0005558241 nova_compute[248510]: 2025-12-13 08:30:17.411 248514 WARNING nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Received unexpected event network-vif-plugged-13e6d879-e4c6-4a44-a982-10a62782047e for instance with vm_state deleted and task_state None.
Dec 13 03:30:17 np0005558241 nova_compute[248510]: 2025-12-13 08:30:17.411 248514 DEBUG nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Received event network-vif-deleted-13e6d879-e4c6-4a44-a982-10a62782047e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:30:17 np0005558241 nova_compute[248510]: 2025-12-13 08:30:17.411 248514 DEBUG nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Received event network-vif-unplugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:30:17 np0005558241 nova_compute[248510]: 2025-12-13 08:30:17.411 248514 DEBUG oslo_concurrency.lockutils [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:30:17 np0005558241 nova_compute[248510]: 2025-12-13 08:30:17.412 248514 DEBUG oslo_concurrency.lockutils [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:30:17 np0005558241 nova_compute[248510]: 2025-12-13 08:30:17.412 248514 DEBUG oslo_concurrency.lockutils [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:30:17 np0005558241 nova_compute[248510]: 2025-12-13 08:30:17.412 248514 DEBUG nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] No waiting events found dispatching network-vif-unplugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:30:17 np0005558241 nova_compute[248510]: 2025-12-13 08:30:17.412 248514 DEBUG nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Received event network-vif-unplugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 03:30:17 np0005558241 nova_compute[248510]: 2025-12-13 08:30:17.413 248514 DEBUG nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Received event network-vif-plugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:30:17 np0005558241 nova_compute[248510]: 2025-12-13 08:30:17.413 248514 DEBUG oslo_concurrency.lockutils [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:30:17 np0005558241 nova_compute[248510]: 2025-12-13 08:30:17.413 248514 DEBUG oslo_concurrency.lockutils [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:30:17 np0005558241 nova_compute[248510]: 2025-12-13 08:30:17.413 248514 DEBUG oslo_concurrency.lockutils [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:30:17 np0005558241 nova_compute[248510]: 2025-12-13 08:30:17.413 248514 DEBUG nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] No waiting events found dispatching network-vif-plugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:30:17 np0005558241 nova_compute[248510]: 2025-12-13 08:30:17.414 248514 WARNING nova.compute.manager [req-4ed7ab7b-e1cd-47c3-bc09-f6d466cdc55c req-2835c64c-e7f9-4717-a7ef-45b13db427ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Received unexpected event network-vif-plugged-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 for instance with vm_state active and task_state deleting.
Dec 13 03:30:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 189 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.1 MiB/s wr, 266 op/s
Dec 13 03:30:18 np0005558241 nova_compute[248510]: 2025-12-13 08:30:18.587 248514 DEBUG nova.network.neutron [-] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:30:18 np0005558241 nova_compute[248510]: 2025-12-13 08:30:18.615 248514 INFO nova.compute.manager [-] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Took 2.45 seconds to deallocate network for instance.
Dec 13 03:30:18 np0005558241 nova_compute[248510]: 2025-12-13 08:30:18.682 248514 DEBUG oslo_concurrency.lockutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:30:18 np0005558241 nova_compute[248510]: 2025-12-13 08:30:18.683 248514 DEBUG oslo_concurrency.lockutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:30:18 np0005558241 nova_compute[248510]: 2025-12-13 08:30:18.759 248514 DEBUG oslo_concurrency.processutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:30:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:30:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1970177452' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:30:19 np0005558241 nova_compute[248510]: 2025-12-13 08:30:19.349 248514 DEBUG oslo_concurrency.processutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:30:19 np0005558241 nova_compute[248510]: 2025-12-13 08:30:19.357 248514 DEBUG nova.compute.provider_tree [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:30:19 np0005558241 nova_compute[248510]: 2025-12-13 08:30:19.378 248514 DEBUG nova.scheduler.client.report [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:30:19 np0005558241 nova_compute[248510]: 2025-12-13 08:30:19.402 248514 DEBUG oslo_concurrency.lockutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:30:19 np0005558241 nova_compute[248510]: 2025-12-13 08:30:19.430 248514 INFO nova.scheduler.client.report [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Deleted allocations for instance 850bda47-d7a0-4d8d-a048-258b8388cab7
Dec 13 03:30:19 np0005558241 nova_compute[248510]: 2025-12-13 08:30:19.498 248514 DEBUG oslo_concurrency.lockutils [None req-5e2b2903-e1b4-441d-8362-b5b7f1bf9e0e 865bfe2430ea4f9ca639a4f89c86899d 8fd9d373def4437880ac432124a30a67 - - default default] Lock "850bda47-d7a0-4d8d-a048-258b8388cab7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:30:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1986: 321 pgs: 321 active+clean; 148 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.2 MiB/s wr, 332 op/s
Dec 13 03:30:19 np0005558241 nova_compute[248510]: 2025-12-13 08:30:19.653 248514 DEBUG oslo_concurrency.lockutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:30:19 np0005558241 nova_compute[248510]: 2025-12-13 08:30:19.654 248514 DEBUG oslo_concurrency.lockutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:30:19 np0005558241 nova_compute[248510]: 2025-12-13 08:30:19.654 248514 INFO nova.compute.manager [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Rebooting instance
Dec 13 03:30:19 np0005558241 nova_compute[248510]: 2025-12-13 08:30:19.671 248514 DEBUG oslo_concurrency.lockutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:30:19 np0005558241 nova_compute[248510]: 2025-12-13 08:30:19.672 248514 DEBUG oslo_concurrency.lockutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:30:19 np0005558241 nova_compute[248510]: 2025-12-13 08:30:19.672 248514 DEBUG nova.network.neutron [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:30:19 np0005558241 nova_compute[248510]: 2025-12-13 08:30:19.697 248514 DEBUG nova.compute.manager [req-5c8be7cf-62d0-4ec2-b6a8-c1389e3b32cf req-892ecbb6-29d3-403c-ba81-d62cd4361cd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Received event network-vif-deleted-554f8172-ce8f-4d4c-a511-9d7bfc8ecb45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:30:20 np0005558241 nova_compute[248510]: 2025-12-13 08:30:20.202 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:30:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:30:20 np0005558241 nova_compute[248510]: 2025-12-13 08:30:20.743 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008767810410926015 of space, bias 1.0, pg target 0.26303431232778046 quantized to 32 (current 32)
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006672346150261075 of space, bias 1.0, pg target 0.20017038450783223 quantized to 32 (current 32)
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.932567058150141e-07 of space, bias 4.0, pg target 0.000831908046978017 quantized to 16 (current 32)
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:30:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:30:21 np0005558241 nova_compute[248510]: 2025-12-13 08:30:21.135 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:30:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1987: 321 pgs: 321 active+clean; 121 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 445 KiB/s wr, 258 op/s
Dec 13 03:30:21 np0005558241 nova_compute[248510]: 2025-12-13 08:30:21.891 248514 DEBUG nova.network.neutron [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:30:21 np0005558241 nova_compute[248510]: 2025-12-13 08:30:21.914 248514 DEBUG oslo_concurrency.lockutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:30:21 np0005558241 nova_compute[248510]: 2025-12-13 08:30:21.916 248514 DEBUG nova.compute.manager [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:30:21 np0005558241 nova_compute[248510]: 2025-12-13 08:30:21.986 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "18653df3-1934-41dc-b6ab-d1dc122052f0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:30:21 np0005558241 nova_compute[248510]: 2025-12-13 08:30:21.986 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.014 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.036 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.037 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.066 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.103 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.104 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.112 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.112 248514 INFO nova.compute.claims [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.174 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:22 np0005558241 kernel: tapb5058a06-71 (unregistering): left promiscuous mode
Dec 13 03:30:22 np0005558241 NetworkManager[50376]: <info>  [1765614622.2842] device (tapb5058a06-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.277 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:22Z|00582|binding|INFO|Releasing lport b5058a06-7109-4ac0-96d8-7562e66bee25 from this chassis (sb_readonly=0)
Dec 13 03:30:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:22Z|00583|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 down in Southbound
Dec 13 03:30:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:22Z|00584|binding|INFO|Removing iface tapb5058a06-71 ovn-installed in OVS
Dec 13 03:30:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.299 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:30:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.300 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 unbound from our chassis#033[00m
Dec 13 03:30:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.302 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43ee8730-a174-4859-9c95-b3ad6dc68839, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:30:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.305 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[102d957d-86a4-4846-9c6d-9726500024dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.305 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace which is not needed anymore#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.321 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:22 np0005558241 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Dec 13 03:30:22 np0005558241 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003c.scope: Consumed 13.792s CPU time.
Dec 13 03:30:22 np0005558241 systemd-machined[210538]: Machine qemu-68-instance-0000003c terminated.
Dec 13 03:30:22 np0005558241 podman[309768]: 2025-12-13 08:30:22.34971384 +0000 UTC m=+0.073243678 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:30:22 np0005558241 podman[309767]: 2025-12-13 08:30:22.374281547 +0000 UTC m=+0.087885410 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 03:30:22 np0005558241 podman[309766]: 2025-12-13 08:30:22.381849033 +0000 UTC m=+0.111938073 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:30:22 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[308515]: [NOTICE]   (308519) : haproxy version is 2.8.14-c23fe91
Dec 13 03:30:22 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[308515]: [NOTICE]   (308519) : path to executable is /usr/sbin/haproxy
Dec 13 03:30:22 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[308515]: [WARNING]  (308519) : Exiting Master process...
Dec 13 03:30:22 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[308515]: [ALERT]    (308519) : Current worker (308521) exited with code 143 (Terminated)
Dec 13 03:30:22 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[308515]: [WARNING]  (308519) : All workers exited. Exiting... (0)
Dec 13 03:30:22 np0005558241 systemd[1]: libpod-291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795.scope: Deactivated successfully.
Dec 13 03:30:22 np0005558241 podman[309846]: 2025-12-13 08:30:22.454741872 +0000 UTC m=+0.048108218 container died 291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.482 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance destroyed successfully.#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.483 248514 DEBUG nova.objects.instance [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'resources' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:22 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795-userdata-shm.mount: Deactivated successfully.
Dec 13 03:30:22 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d9368b1b4038d92fdf8ac3d0acda6e2c35cf6264d3fdc71f92b90f08d6224132-merged.mount: Deactivated successfully.
Dec 13 03:30:22 np0005558241 podman[309846]: 2025-12-13 08:30:22.503898305 +0000 UTC m=+0.097264651 container cleanup 291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.504 248514 DEBUG nova.virt.libvirt.vif [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.506 248514 DEBUG nova.network.os_vif_util [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.508 248514 DEBUG nova.network.os_vif_util [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.508 248514 DEBUG os_vif [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.511 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.511 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5058a06-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.517 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:22 np0005558241 systemd[1]: libpod-conmon-291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795.scope: Deactivated successfully.
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.518 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.521 248514 INFO os_vif [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.529 248514 DEBUG nova.virt.libvirt.driver [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Start _get_guest_xml network_info=[{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.533 248514 WARNING nova.virt.libvirt.driver [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.538 248514 DEBUG nova.virt.libvirt.host [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.538 248514 DEBUG nova.virt.libvirt.host [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.542 248514 DEBUG nova.virt.libvirt.host [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.542 248514 DEBUG nova.virt.libvirt.host [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.543 248514 DEBUG nova.virt.libvirt.driver [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.543 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.544 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.544 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.544 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.544 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.545 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.545 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.545 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.545 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.546 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.546 248514 DEBUG nova.virt.hardware [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.546 248514 DEBUG nova.objects.instance [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.567 248514 DEBUG oslo_concurrency.processutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:22 np0005558241 podman[309906]: 2025-12-13 08:30:22.592660315 +0000 UTC m=+0.061360055 container remove 291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:30:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.599 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[122ab475-762e-40da-863f-c7dd1ecff877]: (4, ('Sat Dec 13 08:30:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795)\n291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795\nSat Dec 13 08:30:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795)\n291d4695d4f545bef5f5d34e9762c0e756de140723cac7007644e3c4f2a18795\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.601 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9093daff-49b3-4eec-9c90-d7b85f736d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.602 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:22 np0005558241 kernel: tap43ee8730-a0: left promiscuous mode
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.609 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.621 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.624 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2919ede9-b3a6-4bda-93b6-7cf4ce00c8c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.644 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[35a5d0d4-4344-4ea7-8c9c-c7e10bb6a945]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.646 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[034013fc-e9f0-4e96-9480-256ed1a92ed8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.667 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[42f2e1e8-54f6-4101-bce9-ce054be0dd6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717283, 'reachable_time': 39195, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309922, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:22 np0005558241 systemd[1]: run-netns-ovnmeta\x2d43ee8730\x2da174\x2d4859\x2d9c95\x2db3ad6dc68839.mount: Deactivated successfully.
Dec 13 03:30:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.672 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:30:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:22.672 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e54b8c81-3f7e-4777-aee3-b8a002c5eeff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:30:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1109453365' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.848 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.856 248514 DEBUG nova.compute.provider_tree [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.879 248514 DEBUG nova.scheduler.client.report [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.919 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.921 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.925 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.932 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:30:22 np0005558241 nova_compute[248510]: 2025-12-13 08:30:22.932 248514 INFO nova.compute.claims [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.015 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.016 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.049 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.079 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.154 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:30:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/854366638' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.209 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.211 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.211 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Creating image(s)#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.232 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 18653df3-1934-41dc-b6ab-d1dc122052f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.255 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 18653df3-1934-41dc-b6ab-d1dc122052f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.279 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 18653df3-1934-41dc-b6ab-d1dc122052f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.283 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.322 248514 DEBUG oslo_concurrency.processutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.756s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.327 248514 DEBUG nova.policy [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1f703cc8fd3b4cdabb2b154345f70a7c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c25aa866d502481eb9410b7d92a1347b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.369 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.377 248514 DEBUG oslo_concurrency.processutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.414 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.416 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.416 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.440 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 18653df3-1934-41dc-b6ab-d1dc122052f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.445 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 18653df3-1934-41dc-b6ab-d1dc122052f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 321 active+clean; 121 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 196 KiB/s wr, 202 op/s
Dec 13 03:30:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:30:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3678635955' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.791 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 18653df3-1934-41dc-b6ab-d1dc122052f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.816 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.845 248514 DEBUG nova.compute.provider_tree [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.850 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] resizing rbd image 18653df3-1934-41dc-b6ab-d1dc122052f0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.876 248514 DEBUG nova.scheduler.client.report [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.920 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.921 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.928 248514 DEBUG nova.objects.instance [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'migration_context' on Instance uuid 18653df3-1934-41dc-b6ab-d1dc122052f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:30:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1104556540' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.961 248514 DEBUG oslo_concurrency.processutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.962 248514 DEBUG nova.virt.libvirt.vif [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.962 248514 DEBUG nova.network.os_vif_util [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.963 248514 DEBUG nova.network.os_vif_util [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.964 248514 DEBUG nova.objects.instance [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.966 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.966 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Ensure instance console log exists: /var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.966 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.967 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.967 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.992 248514 DEBUG nova.virt.libvirt.driver [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:30:23 np0005558241 nova_compute[248510]:  <uuid>3ced27d6-a2a8-4ce3-a7e7-494270418542</uuid>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:  <name>instance-0000003c</name>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerActionsTestJSON-server-286397061</nova:name>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:30:22</nova:creationTime>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:30:23 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:        <nova:user uuid="4d19f7d5ece8482dab03e4bc02fdf410">tempest-ServerActionsTestJSON-473360614-project-member</nova:user>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:        <nova:project uuid="c6718df841f0471ba710516400f126fa">tempest-ServerActionsTestJSON-473360614</nova:project>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:        <nova:port uuid="b5058a06-7109-4ac0-96d8-7562e66bee25">
Dec 13 03:30:23 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <entry name="serial">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <entry name="uuid">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk">
Dec 13 03:30:23 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:30:23 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config">
Dec 13 03:30:23 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:30:23 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:1f:d1:eb"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <target dev="tapb5058a06-71"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/console.log" append="off"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <input type="keyboard" bus="usb"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:30:23 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:30:23 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:30:23 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:30:23 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.995 248514 DEBUG nova.virt.libvirt.driver [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.995 248514 DEBUG nova.virt.libvirt.driver [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.996 248514 DEBUG nova.virt.libvirt.vif [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.996 248514 DEBUG nova.network.os_vif_util [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.997 248514 DEBUG nova.network.os_vif_util [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.997 248514 DEBUG os_vif [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.998 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.999 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:23 np0005558241 nova_compute[248510]: 2025-12-13 08:30:23.999 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.006 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.007 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.012 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.012 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5058a06-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.013 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5058a06-71, col_values=(('external_ids', {'iface-id': 'b5058a06-7109-4ac0-96d8-7562e66bee25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:d1:eb', 'vm-uuid': '3ced27d6-a2a8-4ce3-a7e7-494270418542'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.038 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.044 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:24 np0005558241 NetworkManager[50376]: <info>  [1765614624.0465] manager: (tapb5058a06-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.049 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.051 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.053 248514 INFO os_vif [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.062 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:30:24 np0005558241 kernel: tapb5058a06-71: entered promiscuous mode
Dec 13 03:30:24 np0005558241 NetworkManager[50376]: <info>  [1765614624.1438] manager: (tapb5058a06-71): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Dec 13 03:30:24 np0005558241 systemd-udevd[309815]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:30:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:24Z|00585|binding|INFO|Claiming lport b5058a06-7109-4ac0-96d8-7562e66bee25 for this chassis.
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.145 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:24Z|00586|binding|INFO|b5058a06-7109-4ac0-96d8-7562e66bee25: Claiming fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.155 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:30:24 np0005558241 NetworkManager[50376]: <info>  [1765614624.1589] device (tapb5058a06-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.159 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 bound to our chassis#033[00m
Dec 13 03:30:24 np0005558241 NetworkManager[50376]: <info>  [1765614624.1595] device (tapb5058a06-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.160 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.166 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:30:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:24Z|00587|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 ovn-installed in OVS
Dec 13 03:30:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:24Z|00588|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 up in Southbound
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.168 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.168 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Creating image(s)#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.175 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78134152-3ad2-4c36-b544-4638b49fa2c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.176 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43ee8730-a1 in ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.179 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43ee8730-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.179 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6f378d6f-766d-4548-81e4-1b3af23775bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.181 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[656a7960-6803-4a40-8c01-55d17eef0d7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:24 np0005558241 systemd-machined[210538]: New machine qemu-72-instance-0000003c.
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.194 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[1a80b501-8bb0-4207-b3e5-d900c5c2b372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.197 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:24 np0005558241 systemd[1]: Started Virtual Machine qemu-72-instance-0000003c.
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.224 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0b92f1ba-f7c2-40f6-a952-d0128f7d70b7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.229 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.257 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.263 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6645fb35-5dae-4031-a7dc-3b80fff4483f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.265 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:24 np0005558241 NetworkManager[50376]: <info>  [1765614624.2716] manager: (tap43ee8730-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/261)
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.271 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[53a4bc3f-50d5-4d50-8d01-64b6328676b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.305 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.310 248514 DEBUG nova.policy [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1f703cc8fd3b4cdabb2b154345f70a7c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c25aa866d502481eb9410b7d92a1347b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.313 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[347ae49c-c52e-4a10-bc20-9fb90d8658a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.318 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[35627988-5e83-4c9d-bad7-38263dc59a8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:24 np0005558241 NetworkManager[50376]: <info>  [1765614624.3498] device (tap43ee8730-a0): carrier: link connected
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.357 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2afede01-a0bb-4691-be4e-57d535a956fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.370 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.371 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.371 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.372 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.386 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e72f7956-1bf9-439e-bfe5-ccb6c8660f36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720156, 'reachable_time': 21033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310276, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.397 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.402 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.402 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[23db0687-378d-438e-a1a7-3dbe3d79418b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:bbcd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 720156, 'tstamp': 720156}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310292, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.420 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1d64514b-cb68-4fc0-b0b2-7db1fff6ed95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720156, 'reachable_time': 21033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310300, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.466 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[290ccb7b-f30c-4df7-ba4b-c248e875e347]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.546 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9527ef78-ca1c-429c-aef1-3da578515e17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.548 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.549 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.549 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.551 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:24 np0005558241 kernel: tap43ee8730-a0: entered promiscuous mode
Dec 13 03:30:24 np0005558241 NetworkManager[50376]: <info>  [1765614624.5521] manager: (tap43ee8730-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/262)
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.554 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.555 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.556 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:24Z|00589|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.557 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.558 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.559 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ec035514-8b4d-4ce9-8d1f-15b3af04a4f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.560 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:30:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:24.560 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'env', 'PROCESS_TAG=haproxy-43ee8730-a174-4859-9c95-b3ad6dc68839', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43ee8730-a174-4859-9c95-b3ad6dc68839.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.575 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.604 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquiring lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.604 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.615 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 3ced27d6-a2a8-4ce3-a7e7-494270418542 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.616 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614624.615239, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.616 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.622 248514 DEBUG nova.compute.manager [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.626 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance rebooted successfully.#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.627 248514 DEBUG nova.compute.manager [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.643 248514 DEBUG nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.716 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.724 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.742 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.793 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.793 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614624.6167905, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.793 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Started (Lifecycle Event)#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.797 248514 DEBUG oslo_concurrency.lockutils [None req-6799bc78-d304-41e2-9f85-5aad84c2d223 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.807 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] resizing rbd image 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.846 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.851 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.903 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Successfully created port: 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.907 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.908 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.912 248514 DEBUG nova.compute.manager [req-316e5e1b-2f8d-47e4-b17a-41d08e7b9116 req-cf3db46d-73fc-42b8-8770-39227395e010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.912 248514 DEBUG oslo_concurrency.lockutils [req-316e5e1b-2f8d-47e4-b17a-41d08e7b9116 req-cf3db46d-73fc-42b8-8770-39227395e010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.912 248514 DEBUG oslo_concurrency.lockutils [req-316e5e1b-2f8d-47e4-b17a-41d08e7b9116 req-cf3db46d-73fc-42b8-8770-39227395e010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.912 248514 DEBUG oslo_concurrency.lockutils [req-316e5e1b-2f8d-47e4-b17a-41d08e7b9116 req-cf3db46d-73fc-42b8-8770-39227395e010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.912 248514 DEBUG nova.compute.manager [req-316e5e1b-2f8d-47e4-b17a-41d08e7b9116 req-cf3db46d-73fc-42b8-8770-39227395e010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.913 248514 WARNING nova.compute.manager [req-316e5e1b-2f8d-47e4-b17a-41d08e7b9116 req-cf3db46d-73fc-42b8-8770-39227395e010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.919 248514 DEBUG nova.objects.instance [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'migration_context' on Instance uuid 69f6dd3a-7c99-4537-8173-ec79bc6336a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.923 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.924 248514 INFO nova.compute.claims [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.955 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.955 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Ensure instance console log exists: /var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.956 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.956 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:24 np0005558241 nova_compute[248510]: 2025-12-13 08:30:24.956 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:24 np0005558241 podman[310461]: 2025-12-13 08:30:24.973257622 +0000 UTC m=+0.056981447 container create 6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 03:30:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:24Z|00590|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 03:30:25 np0005558241 systemd[1]: Started libpod-conmon-6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8.scope.
Dec 13 03:30:25 np0005558241 nova_compute[248510]: 2025-12-13 08:30:25.012 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Successfully created port: 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:30:25 np0005558241 podman[310461]: 2025-12-13 08:30:24.944152294 +0000 UTC m=+0.027876139 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:30:25 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:30:25 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c4e163921b64c8e9176bf88efb899af63dfc0f8f476d28a475a589e7ef09c5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:25 np0005558241 podman[310461]: 2025-12-13 08:30:25.067568819 +0000 UTC m=+0.151292644 container init 6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 03:30:25 np0005558241 podman[310461]: 2025-12-13 08:30:25.073757152 +0000 UTC m=+0.157480977 container start 6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 13 03:30:25 np0005558241 nova_compute[248510]: 2025-12-13 08:30:25.107 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:25 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[310476]: [NOTICE]   (310480) : New worker (310482) forked
Dec 13 03:30:25 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[310476]: [NOTICE]   (310480) : Loading success.
Dec 13 03:30:25 np0005558241 nova_compute[248510]: 2025-12-13 08:30:25.172 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:25 np0005558241 nova_compute[248510]: 2025-12-13 08:30:25.214 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:30:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1989: 321 pgs: 321 active+clean; 181 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 205 op/s
Dec 13 03:30:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:30:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/612064466' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:30:25 np0005558241 nova_compute[248510]: 2025-12-13 08:30:25.768 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:25 np0005558241 nova_compute[248510]: 2025-12-13 08:30:25.777 248514 DEBUG nova.compute.provider_tree [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:30:25 np0005558241 nova_compute[248510]: 2025-12-13 08:30:25.803 248514 DEBUG nova.scheduler.client.report [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:30:25 np0005558241 nova_compute[248510]: 2025-12-13 08:30:25.834 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:25 np0005558241 nova_compute[248510]: 2025-12-13 08:30:25.835 248514 DEBUG nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:30:25 np0005558241 nova_compute[248510]: 2025-12-13 08:30:25.904 248514 DEBUG nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:30:25 np0005558241 nova_compute[248510]: 2025-12-13 08:30:25.905 248514 DEBUG nova.network.neutron [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:30:25 np0005558241 nova_compute[248510]: 2025-12-13 08:30:25.926 248514 INFO nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:30:25 np0005558241 nova_compute[248510]: 2025-12-13 08:30:25.956 248514 DEBUG nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.097 248514 DEBUG nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.099 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.100 248514 INFO nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Creating image(s)#033[00m
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.125 248514 DEBUG nova.storage.rbd_utils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] rbd image bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.159 248514 DEBUG nova.storage.rbd_utils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] rbd image bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.184 248514 DEBUG nova.storage.rbd_utils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] rbd image bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.188 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.271 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.272 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.273 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.273 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.299 248514 DEBUG nova.storage.rbd_utils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] rbd image bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.304 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.633 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.707 248514 DEBUG nova.storage.rbd_utils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] resizing rbd image bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.792 248514 DEBUG nova.policy [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '83bbc7cfbcdc49ab885e530a79ae26f2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'def846f35c2747099dbe41221905d739', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.802 248514 DEBUG nova.objects.instance [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lazy-loading 'migration_context' on Instance uuid bc7aabfd-0b89-4d02-8aff-29f1bc423621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.824 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.825 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Ensure instance console log exists: /var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.825 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.826 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:30:26 np0005558241 nova_compute[248510]: 2025-12-13 08:30:26.826 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.416 248514 DEBUG nova.compute.manager [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.417 248514 DEBUG oslo_concurrency.lockutils [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.417 248514 DEBUG oslo_concurrency.lockutils [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.417 248514 DEBUG oslo_concurrency.lockutils [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.417 248514 DEBUG nova.compute.manager [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.418 248514 WARNING nova.compute.manager [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.418 248514 DEBUG nova.compute.manager [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.418 248514 DEBUG oslo_concurrency.lockutils [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.418 248514 DEBUG oslo_concurrency.lockutils [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.419 248514 DEBUG oslo_concurrency.lockutils [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.419 248514 DEBUG nova.compute.manager [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.419 248514 WARNING nova.compute.manager [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.419 248514 DEBUG nova.compute.manager [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.419 248514 DEBUG oslo_concurrency.lockutils [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.420 248514 DEBUG oslo_concurrency.lockutils [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.420 248514 DEBUG oslo_concurrency.lockutils [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.420 248514 DEBUG nova.compute.manager [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.420 248514 WARNING nova.compute.manager [req-9d99d2a2-4fea-47a1-a0e8-b4043ec16fab req-698da294-272a-424c-9837-5761eb1ef541 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.
Dec 13 03:30:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:27.426 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 03:30:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:27.427 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.463 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:30:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1990: 321 pgs: 321 active+clean; 181 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 976 KiB/s rd, 2.7 MiB/s wr, 99 op/s
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.536 248514 INFO nova.compute.manager [None req-7bae8189-25c1-4b14-b187-fa0c8ec7cfd5 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Get console output
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.543 248514 INFO oslo.privsep.daemon [None req-7bae8189-25c1-4b14-b187-fa0c8ec7cfd5 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpvghvmluz/privsep.sock']
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.586 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Successfully updated port: 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.613 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "refresh_cache-18653df3-1934-41dc-b6ab-d1dc122052f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.613 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquired lock "refresh_cache-18653df3-1934-41dc-b6ab-d1dc122052f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:30:27 np0005558241 nova_compute[248510]: 2025-12-13 08:30:27.614 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:30:28 np0005558241 nova_compute[248510]: 2025-12-13 08:30:28.364 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614613.3637993, d594d7c8-13f8-4e02-80d2-490469301cca => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:30:28 np0005558241 nova_compute[248510]: 2025-12-13 08:30:28.365 248514 INFO nova.compute.manager [-] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] VM Stopped (Lifecycle Event)
Dec 13 03:30:28 np0005558241 nova_compute[248510]: 2025-12-13 08:30:28.386 248514 DEBUG nova.compute.manager [None req-8f3d9467-9f47-4072-99a6-26a6d730183b - - - - - -] [instance: d594d7c8-13f8-4e02-80d2-490469301cca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:30:28 np0005558241 nova_compute[248510]: 2025-12-13 08:30:28.414 248514 INFO oslo.privsep.daemon [None req-7bae8189-25c1-4b14-b187-fa0c8ec7cfd5 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Spawned new privsep daemon via rootwrap
Dec 13 03:30:28 np0005558241 nova_compute[248510]: 2025-12-13 08:30:28.251 310683 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 13 03:30:28 np0005558241 nova_compute[248510]: 2025-12-13 08:30:28.255 310683 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 13 03:30:28 np0005558241 nova_compute[248510]: 2025-12-13 08:30:28.257 310683 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 13 03:30:28 np0005558241 nova_compute[248510]: 2025-12-13 08:30:28.257 310683 INFO oslo.privsep.daemon [-] privsep daemon running as pid 310683
Dec 13 03:30:28 np0005558241 nova_compute[248510]: 2025-12-13 08:30:28.498 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:30:28 np0005558241 nova_compute[248510]: 2025-12-13 08:30:28.511 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Successfully updated port: 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 03:30:28 np0005558241 nova_compute[248510]: 2025-12-13 08:30:28.510 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 13 03:30:28 np0005558241 nova_compute[248510]: 2025-12-13 08:30:28.538 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "refresh_cache-69f6dd3a-7c99-4537-8173-ec79bc6336a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:30:28 np0005558241 nova_compute[248510]: 2025-12-13 08:30:28.539 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquired lock "refresh_cache-69f6dd3a-7c99-4537-8173-ec79bc6336a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:30:28 np0005558241 nova_compute[248510]: 2025-12-13 08:30:28.539 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:30:28 np0005558241 nova_compute[248510]: 2025-12-13 08:30:28.544 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614613.5442035, 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:30:28 np0005558241 nova_compute[248510]: 2025-12-13 08:30:28.545 248514 INFO nova.compute.manager [-] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] VM Stopped (Lifecycle Event)
Dec 13 03:30:28 np0005558241 nova_compute[248510]: 2025-12-13 08:30:28.570 248514 DEBUG nova.compute.manager [None req-3fb44d25-f99b-4b4c-8594-d4db3156b6fa - - - - - -] [instance: 13a5d640-9e2a-49d7-9f95-be18ebbe1cfe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:30:28 np0005558241 nova_compute[248510]: 2025-12-13 08:30:28.597 248514 DEBUG nova.network.neutron [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Successfully created port: 132de588-b258-4d1f-9d17-7b0ef7d73a3b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 03:30:29 np0005558241 nova_compute[248510]: 2025-12-13 08:30:29.047 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:30:29 np0005558241 nova_compute[248510]: 2025-12-13 08:30:29.092 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:30:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1991: 321 pgs: 321 active+clean; 243 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.2 MiB/s wr, 198 op/s
Dec 13 03:30:29 np0005558241 nova_compute[248510]: 2025-12-13 08:30:29.670 248514 DEBUG nova.compute.manager [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Received event network-changed-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:30:29 np0005558241 nova_compute[248510]: 2025-12-13 08:30:29.671 248514 DEBUG nova.compute.manager [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Refreshing instance network info cache due to event network-changed-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 03:30:29 np0005558241 nova_compute[248510]: 2025-12-13 08:30:29.672 248514 DEBUG oslo_concurrency.lockutils [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-18653df3-1934-41dc-b6ab-d1dc122052f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:30:29 np0005558241 nova_compute[248510]: 2025-12-13 08:30:29.694 248514 DEBUG nova.network.neutron [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Successfully updated port: 132de588-b258-4d1f-9d17-7b0ef7d73a3b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 03:30:29 np0005558241 nova_compute[248510]: 2025-12-13 08:30:29.715 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquiring lock "refresh_cache-bc7aabfd-0b89-4d02-8aff-29f1bc423621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:30:29 np0005558241 nova_compute[248510]: 2025-12-13 08:30:29.716 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquired lock "refresh_cache-bc7aabfd-0b89-4d02-8aff-29f1bc423621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:30:29 np0005558241 nova_compute[248510]: 2025-12-13 08:30:29.716 248514 DEBUG nova.network.neutron [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:30:29 np0005558241 nova_compute[248510]: 2025-12-13 08:30:29.817 248514 DEBUG nova.compute.manager [req-57854bcb-5b7c-4aa7-93c4-6a51660ea915 req-bcb9773b-a717-42a8-a669-a8aa988a755d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Received event network-changed-132de588-b258-4d1f-9d17-7b0ef7d73a3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:30:29 np0005558241 nova_compute[248510]: 2025-12-13 08:30:29.818 248514 DEBUG nova.compute.manager [req-57854bcb-5b7c-4aa7-93c4-6a51660ea915 req-bcb9773b-a717-42a8-a669-a8aa988a755d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Refreshing instance network info cache due to event network-changed-132de588-b258-4d1f-9d17-7b0ef7d73a3b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 03:30:29 np0005558241 nova_compute[248510]: 2025-12-13 08:30:29.818 248514 DEBUG oslo_concurrency.lockutils [req-57854bcb-5b7c-4aa7-93c4-6a51660ea915 req-bcb9773b-a717-42a8-a669-a8aa988a755d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-bc7aabfd-0b89-4d02-8aff-29f1bc423621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:30:29 np0005558241 nova_compute[248510]: 2025-12-13 08:30:29.995 248514 DEBUG nova.network.neutron [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.162 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Updating instance_info_cache with network_info: [{"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.198 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Releasing lock "refresh_cache-18653df3-1934-41dc-b6ab-d1dc122052f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.199 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Instance network_info: |[{"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.199 248514 DEBUG oslo_concurrency.lockutils [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-18653df3-1934-41dc-b6ab-d1dc122052f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.200 248514 DEBUG nova.network.neutron [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Refreshing network info cache for port 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.204 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Start _get_guest_xml network_info=[{"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.208 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.217 248514 WARNING nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.223 248514 DEBUG nova.virt.libvirt.host [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.224 248514 DEBUG nova.virt.libvirt.host [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.227 248514 DEBUG nova.virt.libvirt.host [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.228 248514 DEBUG nova.virt.libvirt.host [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.228 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.229 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.229 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.229 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.230 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.230 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.230 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.231 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.231 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.231 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.232 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.232 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.235 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.331 248514 DEBUG nova.network.neutron [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Updating instance_info_cache with network_info: [{"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.365 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Releasing lock "refresh_cache-69f6dd3a-7c99-4537-8173-ec79bc6336a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.366 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Instance network_info: |[{"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.370 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Start _get_guest_xml network_info=[{"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.377 248514 WARNING nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.384 248514 DEBUG nova.virt.libvirt.host [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.384 248514 DEBUG nova.virt.libvirt.host [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.389 248514 DEBUG nova.virt.libvirt.host [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.389 248514 DEBUG nova.virt.libvirt.host [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.389 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.390 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.390 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.390 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.391 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.391 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.391 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.392 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.392 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.392 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.392 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.393 248514 DEBUG nova.virt.hardware [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.396 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:30:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:30.429 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.710 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614615.7088149, 850bda47-d7a0-4d8d-a048-258b8388cab7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.712 248514 INFO nova.compute.manager [-] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:30:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:30:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/532874596' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.862 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.887 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 18653df3-1934-41dc-b6ab-d1dc122052f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.892 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:30 np0005558241 nova_compute[248510]: 2025-12-13 08:30:30.930 248514 DEBUG nova.compute.manager [None req-9bb4bcea-d332-4d1e-a752-1ef914a2f6dd - - - - - -] [instance: 850bda47-d7a0-4d8d-a048-258b8388cab7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:30:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1002318066' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.002 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.027 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.032 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1992: 321 pgs: 321 active+clean; 260 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 157 op/s
Dec 13 03:30:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:30:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4055225063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.572 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.576 248514 DEBUG nova.virt.libvirt.vif [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:30:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1821338648',display_name='tempest-MultipleCreateTestJSON-server-1821338648-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1821338648-1',id=64,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-lfmihffr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleCreateTestJSON-478861069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:30:23Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=18653df3-1934-41dc-b6ab-d1dc122052f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.577 248514 DEBUG nova.network.os_vif_util [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.579 248514 DEBUG nova.network.os_vif_util [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:a5:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d0e1f86-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.581 248514 DEBUG nova.objects.instance [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'pci_devices' on Instance uuid 18653df3-1934-41dc-b6ab-d1dc122052f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:30:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:30:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3429205274' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.606 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.608 248514 DEBUG nova.virt.libvirt.vif [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:30:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1821338648',display_name='tempest-MultipleCreateTestJSON-server-1821338648-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1821338648-2',id=65,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-lfmihffr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleCreateTestJSON-478861069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:30:24Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=69f6dd3a-7c99-4537-8173-ec79bc6336a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.609 248514 DEBUG nova.network.os_vif_util [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.610 248514 DEBUG nova.network.os_vif_util [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:99:c4,bridge_name='br-int',has_traffic_filtering=True,id=4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4892c0f3-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.611 248514 DEBUG nova.objects.instance [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'pci_devices' on Instance uuid 69f6dd3a-7c99-4537-8173-ec79bc6336a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.617 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <uuid>18653df3-1934-41dc-b6ab-d1dc122052f0</uuid>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <name>instance-00000040</name>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <nova:name>tempest-MultipleCreateTestJSON-server-1821338648-1</nova:name>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:30:30</nova:creationTime>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <nova:user uuid="1f703cc8fd3b4cdabb2b154345f70a7c">tempest-MultipleCreateTestJSON-478861069-project-member</nova:user>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <nova:project uuid="c25aa866d502481eb9410b7d92a1347b">tempest-MultipleCreateTestJSON-478861069</nova:project>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <nova:port uuid="6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <entry name="serial">18653df3-1934-41dc-b6ab-d1dc122052f0</entry>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <entry name="uuid">18653df3-1934-41dc-b6ab-d1dc122052f0</entry>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/18653df3-1934-41dc-b6ab-d1dc122052f0_disk">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/18653df3-1934-41dc-b6ab-d1dc122052f0_disk.config">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:e4:a5:c5"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <target dev="tap6d0e1f86-2f"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0/console.log" append="off"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:30:31 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:30:31 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.631 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Preparing to wait for external event network-vif-plugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.631 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.632 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.632 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.633 248514 DEBUG nova.virt.libvirt.vif [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:30:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1821338648',display_name='tempest-MultipleCreateTestJSON-server-1821338648-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1821338648-1',id=64,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-lfmihffr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleCreateTestJSON-478861069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:30:23Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=18653df3-1934-41dc-b6ab-d1dc122052f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.634 248514 DEBUG nova.network.os_vif_util [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.635 248514 DEBUG nova.network.os_vif_util [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:a5:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d0e1f86-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.637 248514 DEBUG os_vif [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:a5:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d0e1f86-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.638 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.640 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.641 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.645 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <uuid>69f6dd3a-7c99-4537-8173-ec79bc6336a9</uuid>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <name>instance-00000041</name>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <nova:name>tempest-MultipleCreateTestJSON-server-1821338648-2</nova:name>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:30:30</nova:creationTime>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <nova:user uuid="1f703cc8fd3b4cdabb2b154345f70a7c">tempest-MultipleCreateTestJSON-478861069-project-member</nova:user>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <nova:project uuid="c25aa866d502481eb9410b7d92a1347b">tempest-MultipleCreateTestJSON-478861069</nova:project>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <nova:port uuid="4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <entry name="serial">69f6dd3a-7c99-4537-8173-ec79bc6336a9</entry>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <entry name="uuid">69f6dd3a-7c99-4537-8173-ec79bc6336a9</entry>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk.config">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:7f:99:c4"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <target dev="tap4892c0f3-fa"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9/console.log" append="off"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:30:31 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:30:31 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:30:31 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:30:31 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.654 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Preparing to wait for external event network-vif-plugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.655 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.655 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.655 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.656 248514 DEBUG nova.virt.libvirt.vif [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:30:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1821338648',display_name='tempest-MultipleCreateTestJSON-server-1821338648-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1821338648-2',id=65,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-lfmihffr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-Mu
ltipleCreateTestJSON-478861069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:30:24Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=69f6dd3a-7c99-4537-8173-ec79bc6336a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.657 248514 DEBUG nova.network.os_vif_util [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.657 248514 DEBUG nova.network.os_vif_util [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:99:c4,bridge_name='br-int',has_traffic_filtering=True,id=4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4892c0f3-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.658 248514 DEBUG os_vif [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:99:c4,bridge_name='br-int',has_traffic_filtering=True,id=4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4892c0f3-fa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.660 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.660 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.661 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.662 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.662 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d0e1f86-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.662 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6d0e1f86-2f, col_values=(('external_ids', {'iface-id': '6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:a5:c5', 'vm-uuid': '18653df3-1934-41dc-b6ab-d1dc122052f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.664 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:31 np0005558241 NetworkManager[50376]: <info>  [1765614631.6650] manager: (tap6d0e1f86-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.668 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.673 248514 DEBUG nova.network.neutron [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Updating instance_info_cache with network_info: [{"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.675 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.676 248514 INFO os_vif [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:a5:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d0e1f86-2f')#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.677 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.677 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4892c0f3-fa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.678 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4892c0f3-fa, col_values=(('external_ids', {'iface-id': '4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:99:c4', 'vm-uuid': '69f6dd3a-7c99-4537-8173-ec79bc6336a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:31 np0005558241 NetworkManager[50376]: <info>  [1765614631.6806] manager: (tap4892c0f3-fa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/264)
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.680 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.682 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.688 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.689 248514 INFO os_vif [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:99:c4,bridge_name='br-int',has_traffic_filtering=True,id=4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4892c0f3-fa')#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.698 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Releasing lock "refresh_cache-bc7aabfd-0b89-4d02-8aff-29f1bc423621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.698 248514 DEBUG nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Instance network_info: |[{"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.699 248514 DEBUG oslo_concurrency.lockutils [req-57854bcb-5b7c-4aa7-93c4-6a51660ea915 req-bcb9773b-a717-42a8-a669-a8aa988a755d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-bc7aabfd-0b89-4d02-8aff-29f1bc423621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.699 248514 DEBUG nova.network.neutron [req-57854bcb-5b7c-4aa7-93c4-6a51660ea915 req-bcb9773b-a717-42a8-a669-a8aa988a755d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Refreshing network info cache for port 132de588-b258-4d1f-9d17-7b0ef7d73a3b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.701 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Start _get_guest_xml network_info=[{"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.729 248514 WARNING nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.734 248514 DEBUG nova.virt.libvirt.host [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.735 248514 DEBUG nova.virt.libvirt.host [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.751 248514 DEBUG nova.virt.libvirt.host [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.751 248514 DEBUG nova.virt.libvirt.host [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.752 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.752 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.752 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.752 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.753 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.753 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.753 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.753 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.754 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.754 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.754 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.754 248514 DEBUG nova.virt.hardware [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.758 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.810 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.811 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.811 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No VIF found with MAC fa:16:3e:7f:99:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.812 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Using config drive#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.836 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.849 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.850 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.850 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] No VIF found with MAC fa:16:3e:e4:a5:c5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.851 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Using config drive#033[00m
Dec 13 03:30:31 np0005558241 nova_compute[248510]: 2025-12-13 08:30:31.873 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 18653df3-1934-41dc-b6ab-d1dc122052f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:30:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2671993912' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:30:32 np0005558241 nova_compute[248510]: 2025-12-13 08:30:32.359 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:32 np0005558241 nova_compute[248510]: 2025-12-13 08:30:32.981 248514 DEBUG nova.storage.rbd_utils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] rbd image bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:32 np0005558241 nova_compute[248510]: 2025-12-13 08:30:32.986 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.028 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Creating config drive at /var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0/disk.config#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.033 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl7h55uxi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.081 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Creating config drive at /var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9/disk.config#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.086 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjw00atwr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.187 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl7h55uxi" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.212 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 18653df3-1934-41dc-b6ab-d1dc122052f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.216 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0/disk.config 18653df3-1934-41dc-b6ab-d1dc122052f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.259 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjw00atwr" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.288 248514 DEBUG nova.storage.rbd_utils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] rbd image 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.293 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9/disk.config 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.377 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0/disk.config 18653df3-1934-41dc-b6ab-d1dc122052f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.378 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Deleting local config drive /var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0/disk.config because it was imported into RBD.#033[00m
Dec 13 03:30:33 np0005558241 kernel: tap6d0e1f86-2f: entered promiscuous mode
Dec 13 03:30:33 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:33Z|00591|binding|INFO|Claiming lport 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc for this chassis.
Dec 13 03:30:33 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:33Z|00592|binding|INFO|6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc: Claiming fa:16:3e:e4:a5:c5 10.100.0.10
Dec 13 03:30:33 np0005558241 NetworkManager[50376]: <info>  [1765614633.4479] manager: (tap6d0e1f86-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/265)
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.449 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.451 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:a5:c5 10.100.0.10'], port_security=['fa:16:3e:e4:a5:c5 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '18653df3-1934-41dc-b6ab-d1dc122052f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c25aa866d502481eb9410b7d92a1347b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '156063d6-b943-4f89-aea0-35fed05fb231', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be6d27f2-b986-4f4f-bd03-a90a181463f2, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.452 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc in datapath 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf bound to our chassis#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.455 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf#033[00m
Dec 13 03:30:33 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:33Z|00593|binding|INFO|Setting lport 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc ovn-installed in OVS
Dec 13 03:30:33 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:33Z|00594|binding|INFO|Setting lport 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc up in Southbound
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.471 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0b67d0dc-f05e-4821-83bc-2bbf2fb32bff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.472 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2ca54d31-d1 in ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.476 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:33 np0005558241 systemd-udevd[311006]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.474 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2ca54d31-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.475 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[da3ac371-136c-460c-bb68-45c986bc1eed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.479 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[48ab9c7a-12df-4f37-901d-7c95164782e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:33 np0005558241 systemd-machined[210538]: New machine qemu-73-instance-00000040.
Dec 13 03:30:33 np0005558241 systemd[1]: Started Virtual Machine qemu-73-instance-00000040.
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.498 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[63aa8bfc-3c8d-4f62-8222-5256343b7790]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:33 np0005558241 NetworkManager[50376]: <info>  [1765614633.5031] device (tap6d0e1f86-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:30:33 np0005558241 NetworkManager[50376]: <info>  [1765614633.5038] device (tap6d0e1f86-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:30:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1993: 321 pgs: 321 active+clean; 260 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 152 op/s
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.525 248514 DEBUG oslo_concurrency.processutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9/disk.config 69f6dd3a-7c99-4537-8173-ec79bc6336a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.526 248514 INFO nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Deleting local config drive /var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9/disk.config because it was imported into RBD.#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.526 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e9306d0e-1939-4f5a-aca1-86ca9ca3be62]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:30:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1676282674' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.576 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.577 248514 DEBUG nova.virt.libvirt.vif [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-406757652',display_name='tempest-ServerMetadataNegativeTestJSON-server-406757652',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-406757652',id=66,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='def846f35c2747099dbe41221905d739',ramdisk_id='',reservation_id='r-aon0aotv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-402645639',owner_user_name='tempest-ServerMetadataNegativeTestJSON-402645639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:30:26Z,user_data=None,user_id='83bbc7cfbcdc49ab885e530a79ae26f2',uuid=bc7aabfd-0b89-4d02-8aff-29f1bc423621,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.578 248514 DEBUG nova.network.os_vif_util [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Converting VIF {"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.579 248514 DEBUG nova.network.os_vif_util [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:5e:68,bridge_name='br-int',has_traffic_filtering=True,id=132de588-b258-4d1f-9d17-7b0ef7d73a3b,network=Network(96c67522-dadc-49c7-81e6-c5a1d8a2b085),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132de588-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.579 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2987eab0-963f-4300-9c7d-859423f6bf9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.581 248514 DEBUG nova.objects.instance [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lazy-loading 'pci_devices' on Instance uuid bc7aabfd-0b89-4d02-8aff-29f1bc423621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:33 np0005558241 NetworkManager[50376]: <info>  [1765614633.5873] manager: (tap2ca54d31-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/266)
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.586 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db12c5d8-6315-4029-8b66-49bce925a176]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.604 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:30:33 np0005558241 nova_compute[248510]:  <uuid>bc7aabfd-0b89-4d02-8aff-29f1bc423621</uuid>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:  <name>instance-00000042</name>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerMetadataNegativeTestJSON-server-406757652</nova:name>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:30:31</nova:creationTime>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:30:33 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:        <nova:user uuid="83bbc7cfbcdc49ab885e530a79ae26f2">tempest-ServerMetadataNegativeTestJSON-402645639-project-member</nova:user>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:        <nova:project uuid="def846f35c2747099dbe41221905d739">tempest-ServerMetadataNegativeTestJSON-402645639</nova:project>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:        <nova:port uuid="132de588-b258-4d1f-9d17-7b0ef7d73a3b">
Dec 13 03:30:33 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <entry name="serial">bc7aabfd-0b89-4d02-8aff-29f1bc423621</entry>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <entry name="uuid">bc7aabfd-0b89-4d02-8aff-29f1bc423621</entry>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk">
Dec 13 03:30:33 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:30:33 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk.config">
Dec 13 03:30:33 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:30:33 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:7d:5e:68"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <target dev="tap132de588-b2"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621/console.log" append="off"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:30:33 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:30:33 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:30:33 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:30:33 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.605 248514 DEBUG nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Preparing to wait for external event network-vif-plugged-132de588-b258-4d1f-9d17-7b0ef7d73a3b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.606 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquiring lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.606 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.606 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.609 248514 DEBUG nova.virt.libvirt.vif [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-406757652',display_name='tempest-ServerMetadataNegativeTestJSON-server-406757652',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-406757652',id=66,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='def846f35c2747099dbe41221905d739',ramdisk_id='',reservation_id='r-aon0aotv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-402645639',owner_user_name='tempest-ServerMetadataNegativeTestJSON-402645639-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:30:26Z,user_data=None,user_id='83bbc7cfbcdc49ab885e530a79ae26f2',uuid=bc7aabfd-0b89-4d02-8aff-29f1bc423621,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.609 248514 DEBUG nova.network.os_vif_util [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Converting VIF {"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.610 248514 DEBUG nova.network.os_vif_util [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:5e:68,bridge_name='br-int',has_traffic_filtering=True,id=132de588-b258-4d1f-9d17-7b0ef7d73a3b,network=Network(96c67522-dadc-49c7-81e6-c5a1d8a2b085),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132de588-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.610 248514 DEBUG os_vif [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:5e:68,bridge_name='br-int',has_traffic_filtering=True,id=132de588-b258-4d1f-9d17-7b0ef7d73a3b,network=Network(96c67522-dadc-49c7-81e6-c5a1d8a2b085),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132de588-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.611 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.611 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.612 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.614 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.614 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap132de588-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.615 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap132de588-b2, col_values=(('external_ids', {'iface-id': '132de588-b258-4d1f-9d17-7b0ef7d73a3b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:5e:68', 'vm-uuid': 'bc7aabfd-0b89-4d02-8aff-29f1bc423621'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:33 np0005558241 NetworkManager[50376]: <info>  [1765614633.6173] manager: (tap132de588-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.618 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.621 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.624 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.624 248514 INFO os_vif [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:5e:68,bridge_name='br-int',has_traffic_filtering=True,id=132de588-b258-4d1f-9d17-7b0ef7d73a3b,network=Network(96c67522-dadc-49c7-81e6-c5a1d8a2b085),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132de588-b2')#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.628 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[af841e84-70b9-4dad-9665-5fb79acbb969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.632 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5763f719-1fd2-403f-a77c-66d3edd8c70c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:33 np0005558241 NetworkManager[50376]: <info>  [1765614633.6369] manager: (tap4892c0f3-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/268)
Dec 13 03:30:33 np0005558241 systemd-udevd[311029]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:30:33 np0005558241 kernel: tap4892c0f3-fa: entered promiscuous mode
Dec 13 03:30:33 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:33Z|00595|binding|INFO|Claiming lport 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d for this chassis.
Dec 13 03:30:33 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:33Z|00596|binding|INFO|4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d: Claiming fa:16:3e:7f:99:c4 10.100.0.5
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.647 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:33 np0005558241 NetworkManager[50376]: <info>  [1765614633.6532] device (tap4892c0f3-fa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:30:33 np0005558241 NetworkManager[50376]: <info>  [1765614633.6540] device (tap4892c0f3-fa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.655 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:99:c4 10.100.0.5'], port_security=['fa:16:3e:7f:99:c4 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '69f6dd3a-7c99-4537-8173-ec79bc6336a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c25aa866d502481eb9410b7d92a1347b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '156063d6-b943-4f89-aea0-35fed05fb231', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be6d27f2-b986-4f4f-bd03-a90a181463f2, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:30:33 np0005558241 NetworkManager[50376]: <info>  [1765614633.6688] device (tap2ca54d31-d0): carrier: link connected
Dec 13 03:30:33 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:33Z|00597|binding|INFO|Setting lport 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d ovn-installed in OVS
Dec 13 03:30:33 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:33Z|00598|binding|INFO|Setting lport 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d up in Southbound
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.670 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.676 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8ec2d6-6d11-4cbd-9e56-568ee5a9120d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.679 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:33 np0005558241 systemd-machined[210538]: New machine qemu-74-instance-00000041.
Dec 13 03:30:33 np0005558241 systemd[1]: Started Virtual Machine qemu-74-instance-00000041.
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.699 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c9b796-dcca-476b-8858-f7a8f9a79254]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ca54d31-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721088, 'reachable_time': 41340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311062, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.704 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.705 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.705 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] No VIF found with MAC fa:16:3e:7d:5e:68, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.706 248514 INFO nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Using config drive#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.721 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1de71551-67cc-44c4-8786-65c4017f8261]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:9738'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721088, 'tstamp': 721088}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311063, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.744 248514 DEBUG nova.storage.rbd_utils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] rbd image bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.747 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[641a4896-2cf3-46cc-9c47-cb0cbe227c9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ca54d31-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721088, 'reachable_time': 41340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311079, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.788 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c92e6c21-df8b-48dd-af87-18f4f058c39b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.872 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0ab7ff-d76b-4f9b-a0a4-f61950ec2beb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.874 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ca54d31-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.874 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.874 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ca54d31-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:33 np0005558241 NetworkManager[50376]: <info>  [1765614633.8778] manager: (tap2ca54d31-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Dec 13 03:30:33 np0005558241 kernel: tap2ca54d31-d0: entered promiscuous mode
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.879 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.884 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.885 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ca54d31-d0, col_values=(('external_ids', {'iface-id': '327a65c7-a67a-4fc2-b067-82e72753566c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:33 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:33Z|00599|binding|INFO|Releasing lport 327a65c7-a67a-4fc2-b067-82e72753566c from this chassis (sb_readonly=0)
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.889 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.911 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.912 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2ca54d31-d7ab-4904-a0d6-4e2970bb54bf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2ca54d31-d7ab-4904-a0d6-4e2970bb54bf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.914 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f8fecc0e-00b8-4e5b-a624-a2b342c1863e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.916 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/2ca54d31-d7ab-4904-a0d6-4e2970bb54bf.pid.haproxy
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:30:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:33.918 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'env', 'PROCESS_TAG=haproxy-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2ca54d31-d7ab-4904-a0d6-4e2970bb54bf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.993 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614633.9927099, 18653df3-1934-41dc-b6ab-d1dc122052f0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:33 np0005558241 nova_compute[248510]: 2025-12-13 08:30:33.994 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] VM Started (Lifecycle Event)#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.018 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.023 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614633.9931607, 18653df3-1934-41dc-b6ab-d1dc122052f0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.023 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.043 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.048 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.073 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.091 248514 DEBUG nova.compute.manager [req-b2b24e06-4003-477d-943f-62f48cafe378 req-321f0368-68d4-4afd-9cf0-549f28bf0e80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Received event network-vif-plugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.091 248514 DEBUG oslo_concurrency.lockutils [req-b2b24e06-4003-477d-943f-62f48cafe378 req-321f0368-68d4-4afd-9cf0-549f28bf0e80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.092 248514 DEBUG oslo_concurrency.lockutils [req-b2b24e06-4003-477d-943f-62f48cafe378 req-321f0368-68d4-4afd-9cf0-549f28bf0e80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.092 248514 DEBUG oslo_concurrency.lockutils [req-b2b24e06-4003-477d-943f-62f48cafe378 req-321f0368-68d4-4afd-9cf0-549f28bf0e80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.092 248514 DEBUG nova.compute.manager [req-b2b24e06-4003-477d-943f-62f48cafe378 req-321f0368-68d4-4afd-9cf0-549f28bf0e80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Processing event network-vif-plugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.093 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.098 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614634.0960023, 18653df3-1934-41dc-b6ab-d1dc122052f0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.098 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.099 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.104 248514 INFO nova.virt.libvirt.driver [-] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Instance spawned successfully.#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.105 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.125 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.132 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.137 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.137 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.138 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.138 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.139 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.139 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.173 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.182 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614634.1819553, 69f6dd3a-7c99-4537-8173-ec79bc6336a9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.183 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] VM Started (Lifecycle Event)#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.221 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.225 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614634.1822293, 69f6dd3a-7c99-4537-8173-ec79bc6336a9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.226 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.230 248514 INFO nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Took 11.02 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.231 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.242 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.246 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.300 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.328 248514 INFO nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Took 12.25 seconds to build instance.#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.352 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.366s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:34 np0005558241 podman[311207]: 2025-12-13 08:30:34.38246361 +0000 UTC m=+0.072059069 container create 83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:30:34 np0005558241 podman[311207]: 2025-12-13 08:30:34.344987555 +0000 UTC m=+0.034583014 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:30:34 np0005558241 systemd[1]: Started libpod-conmon-83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82.scope.
Dec 13 03:30:34 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:30:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84e59ed93a5e8226cd62fab7d9bf9fc22cba6a596a769cf7e462aabfa2f131c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:34 np0005558241 podman[311207]: 2025-12-13 08:30:34.495721915 +0000 UTC m=+0.185317354 container init 83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 13 03:30:34 np0005558241 podman[311207]: 2025-12-13 08:30:34.501969579 +0000 UTC m=+0.191564998 container start 83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.510 248514 DEBUG oslo_concurrency.lockutils [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.510 248514 DEBUG oslo_concurrency.lockutils [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.511 248514 DEBUG nova.compute.manager [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.514 248514 DEBUG nova.compute.manager [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.515 248514 DEBUG nova.objects.instance [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'flavor' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:34 np0005558241 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[311222]: [NOTICE]   (311226) : New worker (311228) forked
Dec 13 03:30:34 np0005558241 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[311222]: [NOTICE]   (311226) : Loading success.
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.540 248514 DEBUG nova.virt.libvirt.driver [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec 13 03:30:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.564 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d in datapath 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf unbound from our chassis#033[00m
Dec 13 03:30:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.569 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf#033[00m
Dec 13 03:30:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.587 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d072ed02-141c-4276-9632-aef67cf96051]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.626 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[be92e3aa-c27e-41b3-b939-e8605c97df79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.631 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ed30da5d-d111-4273-829d-1ae4724b8dac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.666 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[656eb866-1a3f-4336-94b9-c839c816b0b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.686 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3347b323-29e2-4eb0-ada3-a23c84c83451]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ca54d31-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721088, 'reachable_time': 41340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311242, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.707 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0e155f-7ff1-465f-815f-424dc94adb17]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2ca54d31-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721103, 'tstamp': 721103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311243, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2ca54d31-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721107, 'tstamp': 721107}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311243, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.711 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ca54d31-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.714 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.721 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ca54d31-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.722 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.724 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ca54d31-d0, col_values=(('external_ids', {'iface-id': '327a65c7-a67a-4fc2-b067-82e72753566c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:34.724 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.739 248514 DEBUG nova.network.neutron [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Updated VIF entry in instance network info cache for port 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.739 248514 DEBUG nova.network.neutron [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Updating instance_info_cache with network_info: [{"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.768 248514 DEBUG oslo_concurrency.lockutils [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-18653df3-1934-41dc-b6ab-d1dc122052f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.769 248514 DEBUG nova.compute.manager [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Received event network-changed-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.774 248514 DEBUG nova.compute.manager [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Refreshing instance network info cache due to event network-changed-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.774 248514 DEBUG oslo_concurrency.lockutils [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-69f6dd3a-7c99-4537-8173-ec79bc6336a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.774 248514 DEBUG oslo_concurrency.lockutils [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-69f6dd3a-7c99-4537-8173-ec79bc6336a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:30:34 np0005558241 nova_compute[248510]: 2025-12-13 08:30:34.775 248514 DEBUG nova.network.neutron [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Refreshing network info cache for port 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:30:35 np0005558241 nova_compute[248510]: 2025-12-13 08:30:35.000 248514 INFO nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Creating config drive at /var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621/disk.config#033[00m
Dec 13 03:30:35 np0005558241 nova_compute[248510]: 2025-12-13 08:30:35.005 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3xdqikyh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:35 np0005558241 nova_compute[248510]: 2025-12-13 08:30:35.156 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3xdqikyh" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:35 np0005558241 nova_compute[248510]: 2025-12-13 08:30:35.185 248514 DEBUG nova.storage.rbd_utils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] rbd image bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:35 np0005558241 nova_compute[248510]: 2025-12-13 08:30:35.191 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621/disk.config bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:35 np0005558241 nova_compute[248510]: 2025-12-13 08:30:35.226 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:35 np0005558241 nova_compute[248510]: 2025-12-13 08:30:35.340 248514 DEBUG oslo_concurrency.processutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621/disk.config bc7aabfd-0b89-4d02-8aff-29f1bc423621_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:35 np0005558241 nova_compute[248510]: 2025-12-13 08:30:35.341 248514 INFO nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Deleting local config drive /var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621/disk.config because it was imported into RBD.#033[00m
Dec 13 03:30:35 np0005558241 kernel: tap132de588-b2: entered promiscuous mode
Dec 13 03:30:35 np0005558241 NetworkManager[50376]: <info>  [1765614635.4142] manager: (tap132de588-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/270)
Dec 13 03:30:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:35Z|00600|binding|INFO|Claiming lport 132de588-b258-4d1f-9d17-7b0ef7d73a3b for this chassis.
Dec 13 03:30:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:35Z|00601|binding|INFO|132de588-b258-4d1f-9d17-7b0ef7d73a3b: Claiming fa:16:3e:7d:5e:68 10.100.0.5
Dec 13 03:30:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:30:35 np0005558241 nova_compute[248510]: 2025-12-13 08:30:35.420 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:35 np0005558241 NetworkManager[50376]: <info>  [1765614635.4376] device (tap132de588-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:30:35 np0005558241 NetworkManager[50376]: <info>  [1765614635.4383] device (tap132de588-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:30:35 np0005558241 nova_compute[248510]: 2025-12-13 08:30:35.439 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:35Z|00602|binding|INFO|Setting lport 132de588-b258-4d1f-9d17-7b0ef7d73a3b ovn-installed in OVS
Dec 13 03:30:35 np0005558241 systemd-machined[210538]: New machine qemu-75-instance-00000042.
Dec 13 03:30:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 321 active+clean; 260 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.3 MiB/s wr, 168 op/s
Dec 13 03:30:35 np0005558241 systemd[1]: Started Virtual Machine qemu-75-instance-00000042.
Dec 13 03:30:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:35Z|00603|binding|INFO|Setting lport 132de588-b258-4d1f-9d17-7b0ef7d73a3b up in Southbound
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.543 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:5e:68 10.100.0.5'], port_security=['fa:16:3e:7d:5e:68 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'bc7aabfd-0b89-4d02-8aff-29f1bc423621', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96c67522-dadc-49c7-81e6-c5a1d8a2b085', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'def846f35c2747099dbe41221905d739', 'neutron:revision_number': '2', 'neutron:security_group_ids': '75dfe6d3-3b39-436a-9304-896fdf5a4d05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fbc462b-7eea-4b98-8702-5b666823a7a9, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=132de588-b258-4d1f-9d17-7b0ef7d73a3b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.545 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 132de588-b258-4d1f-9d17-7b0ef7d73a3b in datapath 96c67522-dadc-49c7-81e6-c5a1d8a2b085 bound to our chassis#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.547 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 96c67522-dadc-49c7-81e6-c5a1d8a2b085#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.564 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c547bf1d-0ac0-4a2b-a6a1-92f2646ac0db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.565 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap96c67522-d1 in ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.570 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap96c67522-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.570 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7588210a-9894-4236-a1c9-4f683529d1f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.571 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8efeeca6-57af-49ab-83ba-67f2f0280a9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.589 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[c5bd666d-ec15-444c-8815-4246756b3b72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.613 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e51b3bbe-f295-4d40-8c57-611d43145dfe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.654 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5f09cdcb-0bba-475e-836c-87dd789be7b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:35 np0005558241 NetworkManager[50376]: <info>  [1765614635.6639] manager: (tap96c67522-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/271)
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.663 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e14f6219-ede8-4e6b-b5b5-9201f159d096]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.707 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f34457-7d83-4388-b954-ffe2a9180bd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.714 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[38ece6c3-3116-4497-97a7-5c9776d9d594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:35 np0005558241 NetworkManager[50376]: <info>  [1765614635.7407] device (tap96c67522-d0): carrier: link connected
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.751 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3f44fb-aea0-4002-8896-4a1356a2e895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.779 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e5660175-9f18-4771-86ac-c6f34421b336]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96c67522-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:77:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721295, 'reachable_time': 37573, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311317, 'error': None, 'target': 'ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.802 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e76505e6-5259-463e-a072-d8fea8b8017e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe72:7701'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721295, 'tstamp': 721295}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311318, 'error': None, 'target': 'ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.832 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b6fabe07-01fc-45cd-8887-46dcf85e7dd7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96c67522-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:77:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721295, 'reachable_time': 37573, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311319, 'error': None, 'target': 'ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.880 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[84d7c76c-54b0-4083-ba7a-0b439a4ad5b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:35 np0005558241 nova_compute[248510]: 2025-12-13 08:30:35.925 248514 DEBUG nova.network.neutron [req-57854bcb-5b7c-4aa7-93c4-6a51660ea915 req-bcb9773b-a717-42a8-a669-a8aa988a755d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Updated VIF entry in instance network info cache for port 132de588-b258-4d1f-9d17-7b0ef7d73a3b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:30:35 np0005558241 nova_compute[248510]: 2025-12-13 08:30:35.926 248514 DEBUG nova.network.neutron [req-57854bcb-5b7c-4aa7-93c4-6a51660ea915 req-bcb9773b-a717-42a8-a669-a8aa988a755d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Updating instance_info_cache with network_info: [{"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.971 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0c4de90a-6e6f-4ac1-aa9e-af02b1b79657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.974 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96c67522-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.975 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.975 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96c67522-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:35 np0005558241 nova_compute[248510]: 2025-12-13 08:30:35.977 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:35 np0005558241 NetworkManager[50376]: <info>  [1765614635.9778] manager: (tap96c67522-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/272)
Dec 13 03:30:35 np0005558241 kernel: tap96c67522-d0: entered promiscuous mode
Dec 13 03:30:35 np0005558241 nova_compute[248510]: 2025-12-13 08:30:35.982 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.992 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap96c67522-d0, col_values=(('external_ids', {'iface-id': '3ab67b78-7a30-440a-987a-9b2a515dc605'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:35 np0005558241 nova_compute[248510]: 2025-12-13 08:30:35.994 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:35Z|00604|binding|INFO|Releasing lport 3ab67b78-7a30-440a-987a-9b2a515dc605 from this chassis (sb_readonly=0)
Dec 13 03:30:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.998 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/96c67522-dadc-49c7-81e6-c5a1d8a2b085.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/96c67522-dadc-49c7-81e6-c5a1d8a2b085.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:35.999 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[62d92560-3bab-402a-83d6-52d3355d7b68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:36.000 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-96c67522-dadc-49c7-81e6-c5a1d8a2b085
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/96c67522-dadc-49c7-81e6-c5a1d8a2b085.pid.haproxy
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 96c67522-dadc-49c7-81e6-c5a1d8a2b085
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:30:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:36.000 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085', 'env', 'PROCESS_TAG=haproxy-96c67522-dadc-49c7-81e6-c5a1d8a2b085', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/96c67522-dadc-49c7-81e6-c5a1d8a2b085.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.017 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.111 248514 DEBUG oslo_concurrency.lockutils [req-57854bcb-5b7c-4aa7-93c4-6a51660ea915 req-bcb9773b-a717-42a8-a669-a8aa988a755d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-bc7aabfd-0b89-4d02-8aff-29f1bc423621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.155 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614636.1546292, bc7aabfd-0b89-4d02-8aff-29f1bc423621 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.155 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] VM Started (Lifecycle Event)#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.185 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.190 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614636.1548202, bc7aabfd-0b89-4d02-8aff-29f1bc423621 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.190 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.349 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.361 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.382 248514 DEBUG nova.compute.manager [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Received event network-vif-plugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.384 248514 DEBUG oslo_concurrency.lockutils [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.385 248514 DEBUG oslo_concurrency.lockutils [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.385 248514 DEBUG oslo_concurrency.lockutils [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.385 248514 DEBUG nova.compute.manager [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] No waiting events found dispatching network-vif-plugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.385 248514 WARNING nova.compute.manager [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Received unexpected event network-vif-plugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc for instance with vm_state active and task_state None.#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.385 248514 DEBUG nova.compute.manager [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Received event network-vif-plugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.385 248514 DEBUG oslo_concurrency.lockutils [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.386 248514 DEBUG oslo_concurrency.lockutils [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.386 248514 DEBUG oslo_concurrency.lockutils [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.386 248514 DEBUG nova.compute.manager [req-106c0b30-4bec-42d7-bb1b-46ca2de6b402 req-27e5bf9c-54c2-49c0-a9d4-f882ccef1918 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Processing event network-vif-plugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.387 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.393 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.397 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.397 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614636.3940964, 69f6dd3a-7c99-4537-8173-ec79bc6336a9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.398 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.400 248514 INFO nova.virt.libvirt.driver [-] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Instance spawned successfully.#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.400 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:30:36 np0005558241 podman[311392]: 2025-12-13 08:30:36.420485716 +0000 UTC m=+0.070701236 container create 39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.427 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.436 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.436 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.437 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.437 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.440 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.440 248514 DEBUG nova.virt.libvirt.driver [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.446 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:30:36 np0005558241 systemd[1]: Started libpod-conmon-39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6.scope.
Dec 13 03:30:36 np0005558241 podman[311392]: 2025-12-13 08:30:36.387109272 +0000 UTC m=+0.037324812 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:30:36 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:30:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/594738986a7fafb8e2e8c20fa7ea6b435a01ddf06491d50b530c9beeaa2d34f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:36 np0005558241 podman[311392]: 2025-12-13 08:30:36.527991748 +0000 UTC m=+0.178207278 container init 39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.533 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:30:36 np0005558241 podman[311392]: 2025-12-13 08:30:36.5402173 +0000 UTC m=+0.190432810 container start 39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.542 248514 DEBUG nova.compute.manager [req-6991f432-be04-458f-ad65-d0b9be7a2c3b req-643762ef-ddc2-4430-b5b7-fb37f24862c4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Received event network-vif-plugged-132de588-b258-4d1f-9d17-7b0ef7d73a3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.542 248514 DEBUG oslo_concurrency.lockutils [req-6991f432-be04-458f-ad65-d0b9be7a2c3b req-643762ef-ddc2-4430-b5b7-fb37f24862c4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.542 248514 DEBUG oslo_concurrency.lockutils [req-6991f432-be04-458f-ad65-d0b9be7a2c3b req-643762ef-ddc2-4430-b5b7-fb37f24862c4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.542 248514 DEBUG oslo_concurrency.lockutils [req-6991f432-be04-458f-ad65-d0b9be7a2c3b req-643762ef-ddc2-4430-b5b7-fb37f24862c4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.543 248514 DEBUG nova.compute.manager [req-6991f432-be04-458f-ad65-d0b9be7a2c3b req-643762ef-ddc2-4430-b5b7-fb37f24862c4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Processing event network-vif-plugged-132de588-b258-4d1f-9d17-7b0ef7d73a3b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.543 248514 DEBUG nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.549 248514 INFO nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Took 12.38 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.549 248514 DEBUG nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.551 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614636.5506535, bc7aabfd-0b89-4d02-8aff-29f1bc423621 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.551 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.553 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.560 248514 INFO nova.virt.libvirt.driver [-] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Instance spawned successfully.#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.561 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:30:36 np0005558241 neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085[311407]: [NOTICE]   (311411) : New worker (311413) forked
Dec 13 03:30:36 np0005558241 neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085[311407]: [NOTICE]   (311411) : Loading success.
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.589 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.593 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.603 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.604 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.604 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.604 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.605 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.605 248514 DEBUG nova.virt.libvirt.driver [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.646 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.648 248514 INFO nova.compute.manager [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Took 14.50 seconds to build instance.#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.681 248514 DEBUG oslo_concurrency.lockutils [None req-c2336f3e-f656-458f-8f98-3e8092e47ac7 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.685 248514 INFO nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Took 10.59 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.686 248514 DEBUG nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.764 248514 INFO nova.compute.manager [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Took 11.96 seconds to build instance.#033[00m
Dec 13 03:30:36 np0005558241 nova_compute[248510]: 2025-12-13 08:30:36.787 248514 DEBUG oslo_concurrency.lockutils [None req-8e76febd-2c79-4ddf-a1e5-afdfc4c93b2c 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1995: 321 pgs: 321 active+clean; 260 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 140 op/s
Dec 13 03:30:37 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:37Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 03:30:37 np0005558241 nova_compute[248510]: 2025-12-13 08:30:37.949 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:38 np0005558241 nova_compute[248510]: 2025-12-13 08:30:38.122 248514 DEBUG nova.network.neutron [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Updated VIF entry in instance network info cache for port 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:30:38 np0005558241 nova_compute[248510]: 2025-12-13 08:30:38.123 248514 DEBUG nova.network.neutron [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Updating instance_info_cache with network_info: [{"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:30:38 np0005558241 nova_compute[248510]: 2025-12-13 08:30:38.169 248514 DEBUG oslo_concurrency.lockutils [req-747d9131-facc-41f4-bec0-2870469a6ea8 req-300677fc-ffbc-4e0f-89a5-d2b1330ffe17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-69f6dd3a-7c99-4537-8173-ec79bc6336a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:30:38 np0005558241 nova_compute[248510]: 2025-12-13 08:30:38.539 248514 DEBUG nova.compute.manager [req-e789f4e7-584e-4edb-8865-7814ccc3c80c req-0fb54eb0-e0e6-411a-9720-5d8e82894b1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Received event network-vif-plugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:38 np0005558241 nova_compute[248510]: 2025-12-13 08:30:38.539 248514 DEBUG oslo_concurrency.lockutils [req-e789f4e7-584e-4edb-8865-7814ccc3c80c req-0fb54eb0-e0e6-411a-9720-5d8e82894b1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:38 np0005558241 nova_compute[248510]: 2025-12-13 08:30:38.539 248514 DEBUG oslo_concurrency.lockutils [req-e789f4e7-584e-4edb-8865-7814ccc3c80c req-0fb54eb0-e0e6-411a-9720-5d8e82894b1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:38 np0005558241 nova_compute[248510]: 2025-12-13 08:30:38.540 248514 DEBUG oslo_concurrency.lockutils [req-e789f4e7-584e-4edb-8865-7814ccc3c80c req-0fb54eb0-e0e6-411a-9720-5d8e82894b1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:38 np0005558241 nova_compute[248510]: 2025-12-13 08:30:38.540 248514 DEBUG nova.compute.manager [req-e789f4e7-584e-4edb-8865-7814ccc3c80c req-0fb54eb0-e0e6-411a-9720-5d8e82894b1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] No waiting events found dispatching network-vif-plugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:30:38 np0005558241 nova_compute[248510]: 2025-12-13 08:30:38.540 248514 WARNING nova.compute.manager [req-e789f4e7-584e-4edb-8865-7814ccc3c80c req-0fb54eb0-e0e6-411a-9720-5d8e82894b1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Received unexpected event network-vif-plugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d for instance with vm_state active and task_state None.#033[00m
Dec 13 03:30:38 np0005558241 nova_compute[248510]: 2025-12-13 08:30:38.617 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:38 np0005558241 nova_compute[248510]: 2025-12-13 08:30:38.649 248514 DEBUG nova.compute.manager [req-1f885c8d-b519-43f1-8674-269a3f416f63 req-630a47c6-fb19-4c6e-a950-250ede468141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Received event network-vif-plugged-132de588-b258-4d1f-9d17-7b0ef7d73a3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:38 np0005558241 nova_compute[248510]: 2025-12-13 08:30:38.650 248514 DEBUG oslo_concurrency.lockutils [req-1f885c8d-b519-43f1-8674-269a3f416f63 req-630a47c6-fb19-4c6e-a950-250ede468141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:38 np0005558241 nova_compute[248510]: 2025-12-13 08:30:38.651 248514 DEBUG oslo_concurrency.lockutils [req-1f885c8d-b519-43f1-8674-269a3f416f63 req-630a47c6-fb19-4c6e-a950-250ede468141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:38 np0005558241 nova_compute[248510]: 2025-12-13 08:30:38.651 248514 DEBUG oslo_concurrency.lockutils [req-1f885c8d-b519-43f1-8674-269a3f416f63 req-630a47c6-fb19-4c6e-a950-250ede468141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:38 np0005558241 nova_compute[248510]: 2025-12-13 08:30:38.651 248514 DEBUG nova.compute.manager [req-1f885c8d-b519-43f1-8674-269a3f416f63 req-630a47c6-fb19-4c6e-a950-250ede468141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] No waiting events found dispatching network-vif-plugged-132de588-b258-4d1f-9d17-7b0ef7d73a3b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:30:38 np0005558241 nova_compute[248510]: 2025-12-13 08:30:38.652 248514 WARNING nova.compute.manager [req-1f885c8d-b519-43f1-8674-269a3f416f63 req-630a47c6-fb19-4c6e-a950-250ede468141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Received unexpected event network-vif-plugged-132de588-b258-4d1f-9d17-7b0ef7d73a3b for instance with vm_state active and task_state None.#033[00m
Dec 13 03:30:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:30:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:30:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:30:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:30:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:30:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:30:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:30:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:30:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:30:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:30:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:30:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:30:38 np0005558241 nova_compute[248510]: 2025-12-13 08:30:38.808 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:30:39 np0005558241 podman[311563]: 2025-12-13 08:30:39.11544892 +0000 UTC m=+0.055062080 container create 80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_agnesi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 03:30:39 np0005558241 systemd[1]: Started libpod-conmon-80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04.scope.
Dec 13 03:30:39 np0005558241 podman[311563]: 2025-12-13 08:30:39.088269099 +0000 UTC m=+0.027882279 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:30:39 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:30:39 np0005558241 podman[311563]: 2025-12-13 08:30:39.205066061 +0000 UTC m=+0.144679241 container init 80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_agnesi, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:30:39 np0005558241 podman[311563]: 2025-12-13 08:30:39.21432557 +0000 UTC m=+0.153938730 container start 80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_agnesi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 03:30:39 np0005558241 podman[311563]: 2025-12-13 08:30:39.218057952 +0000 UTC m=+0.157671112 container attach 80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 03:30:39 np0005558241 systemd[1]: libpod-80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04.scope: Deactivated successfully.
Dec 13 03:30:39 np0005558241 festive_agnesi[311579]: 167 167
Dec 13 03:30:39 np0005558241 conmon[311579]: conmon 80845921bdb9b9200b81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04.scope/container/memory.events
Dec 13 03:30:39 np0005558241 podman[311584]: 2025-12-13 08:30:39.275875808 +0000 UTC m=+0.032577545 container died 80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_agnesi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:30:39 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a55c1fa4d61a94eb03d6f45a1bcedabde23ad610f1df893689e8499b68a804ed-merged.mount: Deactivated successfully.
Dec 13 03:30:39 np0005558241 podman[311584]: 2025-12-13 08:30:39.333612383 +0000 UTC m=+0.090314100 container remove 80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_agnesi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 03:30:39 np0005558241 systemd[1]: libpod-conmon-80845921bdb9b9200b81623852efe88292e1bca056a61c72f2b674a7b982bf04.scope: Deactivated successfully.
Dec 13 03:30:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1996: 321 pgs: 321 active+clean; 260 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 301 op/s
Dec 13 03:30:39 np0005558241 podman[311607]: 2025-12-13 08:30:39.58033488 +0000 UTC m=+0.061985030 container create f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 03:30:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:30:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:30:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:30:39 np0005558241 systemd[1]: Started libpod-conmon-f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03.scope.
Dec 13 03:30:39 np0005558241 podman[311607]: 2025-12-13 08:30:39.548663779 +0000 UTC m=+0.030313959 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:30:39 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:30:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fab3a38a7904f890a1700099a6f1f1159c7dab734fc88ebb54059a103c465e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fab3a38a7904f890a1700099a6f1f1159c7dab734fc88ebb54059a103c465e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fab3a38a7904f890a1700099a6f1f1159c7dab734fc88ebb54059a103c465e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fab3a38a7904f890a1700099a6f1f1159c7dab734fc88ebb54059a103c465e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fab3a38a7904f890a1700099a6f1f1159c7dab734fc88ebb54059a103c465e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:39 np0005558241 podman[311607]: 2025-12-13 08:30:39.686303745 +0000 UTC m=+0.167953925 container init f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_austin, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 03:30:39 np0005558241 podman[311607]: 2025-12-13 08:30:39.698204989 +0000 UTC m=+0.179855139 container start f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_austin, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:30:39 np0005558241 podman[311607]: 2025-12-13 08:30:39.70230815 +0000 UTC m=+0.183958300 container attach f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_austin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:30:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:30:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:30:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:30:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:30:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:30:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:30:40 np0005558241 musing_austin[311623]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:30:40 np0005558241 musing_austin[311623]: --> All data devices are unavailable
Dec 13 03:30:40 np0005558241 systemd[1]: libpod-f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03.scope: Deactivated successfully.
Dec 13 03:30:40 np0005558241 conmon[311623]: conmon f64889132a7feba27ed9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03.scope/container/memory.events
Dec 13 03:30:40 np0005558241 podman[311607]: 2025-12-13 08:30:40.212132259 +0000 UTC m=+0.693782409 container died f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.212 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:40 np0005558241 systemd[1]: var-lib-containers-storage-overlay-09fab3a38a7904f890a1700099a6f1f1159c7dab734fc88ebb54059a103c465e-merged.mount: Deactivated successfully.
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.257 248514 DEBUG oslo_concurrency.lockutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "18653df3-1934-41dc-b6ab-d1dc122052f0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.258 248514 DEBUG oslo_concurrency.lockutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.259 248514 DEBUG oslo_concurrency.lockutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.259 248514 DEBUG oslo_concurrency.lockutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.259 248514 DEBUG oslo_concurrency.lockutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:40 np0005558241 podman[311607]: 2025-12-13 08:30:40.261311723 +0000 UTC m=+0.742961873 container remove f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_austin, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.261 248514 INFO nova.compute.manager [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Terminating instance#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.269 248514 DEBUG nova.compute.manager [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:30:40 np0005558241 systemd[1]: libpod-conmon-f64889132a7feba27ed9876642e4b53c636f3ba7d5e7e09f371074e0610bfd03.scope: Deactivated successfully.
Dec 13 03:30:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.461 248514 DEBUG oslo_concurrency.lockutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.462 248514 DEBUG oslo_concurrency.lockutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.462 248514 DEBUG oslo_concurrency.lockutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.463 248514 DEBUG oslo_concurrency.lockutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.463 248514 DEBUG oslo_concurrency.lockutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.464 248514 INFO nova.compute.manager [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Terminating instance#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.466 248514 DEBUG nova.compute.manager [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:30:40 np0005558241 kernel: tap6d0e1f86-2f (unregistering): left promiscuous mode
Dec 13 03:30:40 np0005558241 NetworkManager[50376]: <info>  [1765614640.6380] device (tap6d0e1f86-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:30:40 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:40Z|00605|binding|INFO|Releasing lport 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc from this chassis (sb_readonly=0)
Dec 13 03:30:40 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:40Z|00606|binding|INFO|Setting lport 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc down in Southbound
Dec 13 03:30:40 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:40Z|00607|binding|INFO|Removing iface tap6d0e1f86-2f ovn-installed in OVS
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.659 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:40 np0005558241 kernel: tap4892c0f3-fa (unregistering): left promiscuous mode
Dec 13 03:30:40 np0005558241 NetworkManager[50376]: <info>  [1765614640.6686] device (tap4892c0f3-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:30:40 np0005558241 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000040.scope: Deactivated successfully.
Dec 13 03:30:40 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:40Z|00608|binding|INFO|Releasing lport 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d from this chassis (sb_readonly=1)
Dec 13 03:30:40 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:40Z|00609|if_status|INFO|Dropped 4 log messages in last 124 seconds (most recently, 124 seconds ago) due to excessive rate
Dec 13 03:30:40 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:40Z|00610|if_status|INFO|Not setting lport 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d down as sb is readonly
Dec 13 03:30:40 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:40Z|00611|binding|INFO|Removing iface tap4892c0f3-fa ovn-installed in OVS
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.699 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:40 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:40Z|00612|binding|INFO|Setting lport 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d down in Southbound
Dec 13 03:30:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.702 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:a5:c5 10.100.0.10'], port_security=['fa:16:3e:e4:a5:c5 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '18653df3-1934-41dc-b6ab-d1dc122052f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c25aa866d502481eb9410b7d92a1347b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '156063d6-b943-4f89-aea0-35fed05fb231', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be6d27f2-b986-4f4f-bd03-a90a181463f2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:30:40 np0005558241 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000040.scope: Consumed 6.612s CPU time.
Dec 13 03:30:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.704 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc in datapath 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf unbound from our chassis#033[00m
Dec 13 03:30:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.709 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf#033[00m
Dec 13 03:30:40 np0005558241 systemd-machined[210538]: Machine qemu-73-instance-00000040 terminated.
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:40 np0005558241 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000041.scope: Deactivated successfully.
Dec 13 03:30:40 np0005558241 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000041.scope: Consumed 4.525s CPU time.
Dec 13 03:30:40 np0005558241 systemd-machined[210538]: Machine qemu-74-instance-00000041 terminated.
Dec 13 03:30:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.732 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:99:c4 10.100.0.5'], port_security=['fa:16:3e:7f:99:c4 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '69f6dd3a-7c99-4537-8173-ec79bc6336a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c25aa866d502481eb9410b7d92a1347b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '156063d6-b943-4f89-aea0-35fed05fb231', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be6d27f2-b986-4f4f-bd03-a90a181463f2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:30:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.745 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e8e0ac7d-07e7-4ff5-bd2d-347f583cbea9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.754 248514 INFO nova.virt.libvirt.driver [-] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Instance destroyed successfully.#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.754 248514 DEBUG nova.objects.instance [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'resources' on Instance uuid 18653df3-1934-41dc-b6ab-d1dc122052f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.783 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[00c7b527-d4e3-4851-8ea3-e6c01f80136c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.787 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[81f5d8aa-4e31-4fa7-99bb-a2154b460e5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.788 248514 DEBUG nova.virt.libvirt.vif [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:30:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1821338648',display_name='tempest-MultipleCreateTestJSON-server-1821338648-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1821338648-1',id=64,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:30:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-lfmihffr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleCreateTestJSON-478861069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:34Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=18653df3-1934-41dc-b6ab-d1dc122052f0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.788 248514 DEBUG nova.network.os_vif_util [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "address": "fa:16:3e:e4:a5:c5", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d0e1f86-2f", "ovs_interfaceid": "6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.790 248514 DEBUG nova.network.os_vif_util [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:a5:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d0e1f86-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.790 248514 DEBUG os_vif [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:a5:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d0e1f86-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.792 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.793 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d0e1f86-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.795 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.797 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.803 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.805 248514 INFO os_vif [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:a5:c5,bridge_name='br-int',has_traffic_filtering=True,id=6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d0e1f86-2f')#033[00m
Dec 13 03:30:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.821 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a38dc4c2-0ee2-49c0-8aa8-6ff99531ae19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.845 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[955fb94c-0795-4ed5-82eb-3c4a2b2e969e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ca54d31-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721088, 'reachable_time': 41340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311765, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:40 np0005558241 podman[311732]: 2025-12-13 08:30:40.855628257 +0000 UTC m=+0.105783241 container create 01edfd167a7a92abcccae5858565e23ba8515056587474c2454b8e315f6ebe59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:30:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.869 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8015f952-e719-47c4-98d8-aa242120aeb1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2ca54d31-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721103, 'tstamp': 721103}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311774, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2ca54d31-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721107, 'tstamp': 721107}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311774, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.872 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ca54d31-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.874 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.878 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.879 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ca54d31-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.879 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.880 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ca54d31-d0, col_values=(('external_ids', {'iface-id': '327a65c7-a67a-4fc2-b067-82e72753566c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.880 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.882 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d in datapath 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf unbound from our chassis#033[00m
Dec 13 03:30:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.884 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:30:40 np0005558241 podman[311732]: 2025-12-13 08:30:40.793846802 +0000 UTC m=+0.044001786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:30:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.885 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2e10d969-a950-450f-84f2-7fe0805993d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:40.886 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf namespace which is not needed anymore#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.914 248514 INFO nova.virt.libvirt.driver [-] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Instance destroyed successfully.#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.915 248514 DEBUG nova.objects.instance [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lazy-loading 'resources' on Instance uuid 69f6dd3a-7c99-4537-8173-ec79bc6336a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.931 248514 DEBUG nova.virt.libvirt.vif [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:30:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1821338648',display_name='tempest-MultipleCreateTestJSON-server-1821338648-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1821338648-2',id=65,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-12-13T08:30:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c25aa866d502481eb9410b7d92a1347b',ramdisk_id='',reservation_id='r-lfmihffr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-478861069',owner_user_name='tempest-MultipleCreateTestJSON-478861069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:36Z,user_data=None,user_id='1f703cc8fd3b4cdabb2b154345f70a7c',uuid=69f6dd3a-7c99-4537-8173-ec79bc6336a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.932 248514 DEBUG nova.network.os_vif_util [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converting VIF {"id": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "address": "fa:16:3e:7f:99:c4", "network": {"id": "2ca54d31-d7ab-4904-a0d6-4e2970bb54bf", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1520180554-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c25aa866d502481eb9410b7d92a1347b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4892c0f3-fa", "ovs_interfaceid": "4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.933 248514 DEBUG nova.network.os_vif_util [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:99:c4,bridge_name='br-int',has_traffic_filtering=True,id=4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4892c0f3-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.933 248514 DEBUG os_vif [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:99:c4,bridge_name='br-int',has_traffic_filtering=True,id=4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4892c0f3-fa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.934 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.935 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4892c0f3-fa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.967 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.969 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:30:40 np0005558241 nova_compute[248510]: 2025-12-13 08:30:40.974 248514 INFO os_vif [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:99:c4,bridge_name='br-int',has_traffic_filtering=True,id=4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d,network=Network(2ca54d31-d7ab-4904-a0d6-4e2970bb54bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4892c0f3-fa')#033[00m
Dec 13 03:30:41 np0005558241 systemd[1]: Started libpod-conmon-01edfd167a7a92abcccae5858565e23ba8515056587474c2454b8e315f6ebe59.scope.
Dec 13 03:30:41 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:30:41 np0005558241 nova_compute[248510]: 2025-12-13 08:30:41.088 248514 DEBUG nova.compute.manager [req-afa78fb2-7272-4632-a5aa-fb4f7cb5ed60 req-79c280bb-51e4-4e33-a4bd-808df22fb619 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Received event network-vif-unplugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:41 np0005558241 nova_compute[248510]: 2025-12-13 08:30:41.090 248514 DEBUG oslo_concurrency.lockutils [req-afa78fb2-7272-4632-a5aa-fb4f7cb5ed60 req-79c280bb-51e4-4e33-a4bd-808df22fb619 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:41 np0005558241 nova_compute[248510]: 2025-12-13 08:30:41.090 248514 DEBUG oslo_concurrency.lockutils [req-afa78fb2-7272-4632-a5aa-fb4f7cb5ed60 req-79c280bb-51e4-4e33-a4bd-808df22fb619 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:41 np0005558241 nova_compute[248510]: 2025-12-13 08:30:41.090 248514 DEBUG oslo_concurrency.lockutils [req-afa78fb2-7272-4632-a5aa-fb4f7cb5ed60 req-79c280bb-51e4-4e33-a4bd-808df22fb619 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:41 np0005558241 nova_compute[248510]: 2025-12-13 08:30:41.090 248514 DEBUG nova.compute.manager [req-afa78fb2-7272-4632-a5aa-fb4f7cb5ed60 req-79c280bb-51e4-4e33-a4bd-808df22fb619 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] No waiting events found dispatching network-vif-unplugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:30:41 np0005558241 nova_compute[248510]: 2025-12-13 08:30:41.090 248514 DEBUG nova.compute.manager [req-afa78fb2-7272-4632-a5aa-fb4f7cb5ed60 req-79c280bb-51e4-4e33-a4bd-808df22fb619 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Received event network-vif-unplugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:30:41 np0005558241 podman[311732]: 2025-12-13 08:30:41.263262544 +0000 UTC m=+0.513417558 container init 01edfd167a7a92abcccae5858565e23ba8515056587474c2454b8e315f6ebe59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:30:41 np0005558241 podman[311732]: 2025-12-13 08:30:41.274185633 +0000 UTC m=+0.524340617 container start 01edfd167a7a92abcccae5858565e23ba8515056587474c2454b8e315f6ebe59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:30:41 np0005558241 podman[311732]: 2025-12-13 08:30:41.279677459 +0000 UTC m=+0.529832463 container attach 01edfd167a7a92abcccae5858565e23ba8515056587474c2454b8e315f6ebe59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:30:41 np0005558241 serene_ganguly[311823]: 167 167
Dec 13 03:30:41 np0005558241 systemd[1]: libpod-01edfd167a7a92abcccae5858565e23ba8515056587474c2454b8e315f6ebe59.scope: Deactivated successfully.
Dec 13 03:30:41 np0005558241 podman[311732]: 2025-12-13 08:30:41.283745729 +0000 UTC m=+0.533900713 container died 01edfd167a7a92abcccae5858565e23ba8515056587474c2454b8e315f6ebe59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:30:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1997: 321 pgs: 321 active+clean; 260 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 1.1 MiB/s wr, 290 op/s
Dec 13 03:30:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7e577f61c2612131839de60dfc17e43bc739eef07e64d924df227be08868e254-merged.mount: Deactivated successfully.
Dec 13 03:30:41 np0005558241 podman[311732]: 2025-12-13 08:30:41.823595699 +0000 UTC m=+1.073750693 container remove 01edfd167a7a92abcccae5858565e23ba8515056587474c2454b8e315f6ebe59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:30:41 np0005558241 nova_compute[248510]: 2025-12-13 08:30:41.855 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:41 np0005558241 systemd[1]: libpod-conmon-01edfd167a7a92abcccae5858565e23ba8515056587474c2454b8e315f6ebe59.scope: Deactivated successfully.
Dec 13 03:30:42 np0005558241 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[311222]: [NOTICE]   (311226) : haproxy version is 2.8.14-c23fe91
Dec 13 03:30:42 np0005558241 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[311222]: [NOTICE]   (311226) : path to executable is /usr/sbin/haproxy
Dec 13 03:30:42 np0005558241 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[311222]: [WARNING]  (311226) : Exiting Master process...
Dec 13 03:30:42 np0005558241 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[311222]: [ALERT]    (311226) : Current worker (311228) exited with code 143 (Terminated)
Dec 13 03:30:42 np0005558241 neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf[311222]: [WARNING]  (311226) : All workers exited. Exiting... (0)
Dec 13 03:30:42 np0005558241 systemd[1]: libpod-83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82.scope: Deactivated successfully.
Dec 13 03:30:42 np0005558241 podman[311826]: 2025-12-13 08:30:42.079405361 +0000 UTC m=+1.041571601 container died 83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 03:30:42 np0005558241 podman[311868]: 2025-12-13 08:30:42.082583539 +0000 UTC m=+0.087233213 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:30:42 np0005558241 podman[311868]: 2025-12-13 08:30:42.393279305 +0000 UTC m=+0.397928959 container create a56381d8971c4e7d003d107c0bd6f6466e5abd2e0968d3ec77735210a7f5db2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shirley, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 03:30:42 np0005558241 systemd[1]: Started libpod-conmon-a56381d8971c4e7d003d107c0bd6f6466e5abd2e0968d3ec77735210a7f5db2a.scope.
Dec 13 03:30:42 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:30:42 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82-userdata-shm.mount: Deactivated successfully.
Dec 13 03:30:42 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a84e59ed93a5e8226cd62fab7d9bf9fc22cba6a596a769cf7e462aabfa2f131c-merged.mount: Deactivated successfully.
Dec 13 03:30:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/710f0e6b8cf035f0d507005dd07e5b07a03746e8d692015118a115a014a12faa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/710f0e6b8cf035f0d507005dd07e5b07a03746e8d692015118a115a014a12faa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/710f0e6b8cf035f0d507005dd07e5b07a03746e8d692015118a115a014a12faa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:42 np0005558241 podman[311826]: 2025-12-13 08:30:42.696711932 +0000 UTC m=+1.658878172 container cleanup 83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:30:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/710f0e6b8cf035f0d507005dd07e5b07a03746e8d692015118a115a014a12faa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:42 np0005558241 systemd[1]: libpod-conmon-83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82.scope: Deactivated successfully.
Dec 13 03:30:42 np0005558241 podman[311868]: 2025-12-13 08:30:42.714384628 +0000 UTC m=+0.719034332 container init a56381d8971c4e7d003d107c0bd6f6466e5abd2e0968d3ec77735210a7f5db2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:30:42 np0005558241 podman[311868]: 2025-12-13 08:30:42.725647136 +0000 UTC m=+0.730296790 container start a56381d8971c4e7d003d107c0bd6f6466e5abd2e0968d3ec77735210a7f5db2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shirley, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:30:42 np0005558241 nova_compute[248510]: 2025-12-13 08:30:42.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:30:42 np0005558241 podman[311868]: 2025-12-13 08:30:42.792958757 +0000 UTC m=+0.797608411 container attach a56381d8971c4e7d003d107c0bd6f6466e5abd2e0968d3ec77735210a7f5db2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]: {
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:    "0": [
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:        {
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "devices": [
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "/dev/loop3"
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            ],
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "lv_name": "ceph_lv0",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "lv_size": "21470642176",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "name": "ceph_lv0",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "tags": {
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.cluster_name": "ceph",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.crush_device_class": "",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.encrypted": "0",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.objectstore": "bluestore",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.osd_id": "0",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.type": "block",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.vdo": "0",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.with_tpm": "0"
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            },
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "type": "block",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "vg_name": "ceph_vg0"
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:        }
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:    ],
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:    "1": [
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:        {
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "devices": [
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "/dev/loop4"
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            ],
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "lv_name": "ceph_lv1",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "lv_size": "21470642176",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "name": "ceph_lv1",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "tags": {
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.cluster_name": "ceph",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.crush_device_class": "",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.encrypted": "0",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.objectstore": "bluestore",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.osd_id": "1",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.type": "block",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.vdo": "0",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.with_tpm": "0"
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            },
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "type": "block",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "vg_name": "ceph_vg1"
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:        }
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:    ],
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:    "2": [
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:        {
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "devices": [
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "/dev/loop5"
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            ],
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "lv_name": "ceph_lv2",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "lv_size": "21470642176",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "name": "ceph_lv2",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "tags": {
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.cluster_name": "ceph",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.crush_device_class": "",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.encrypted": "0",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.objectstore": "bluestore",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.osd_id": "2",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.type": "block",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.vdo": "0",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:                "ceph.with_tpm": "0"
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            },
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "type": "block",
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:            "vg_name": "ceph_vg2"
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:        }
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]:    ]
Dec 13 03:30:43 np0005558241 stupefied_shirley[311901]: }
Dec 13 03:30:43 np0005558241 systemd[1]: libpod-a56381d8971c4e7d003d107c0bd6f6466e5abd2e0968d3ec77735210a7f5db2a.scope: Deactivated successfully.
Dec 13 03:30:43 np0005558241 podman[311906]: 2025-12-13 08:30:43.092638161 +0000 UTC m=+0.361709485 container remove 83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:30:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:43.103 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[080315d8-0c7f-4570-bee2-6623506855ce]: (4, ('Sat Dec 13 08:30:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf (83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82)\n83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82\nSat Dec 13 08:30:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf (83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82)\n83af37c842d982d71333d649151875420a39a15b1576cf3ee837ce861b4dfd82\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:43 np0005558241 podman[311868]: 2025-12-13 08:30:43.107445667 +0000 UTC m=+1.112095341 container died a56381d8971c4e7d003d107c0bd6f6466e5abd2e0968d3ec77735210a7f5db2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shirley, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 03:30:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:43.106 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f1bb29-01fc-457d-a1de-6bce291c0413]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:43.107 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ca54d31-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:43 np0005558241 kernel: tap2ca54d31-d0: left promiscuous mode
Dec 13 03:30:43 np0005558241 nova_compute[248510]: 2025-12-13 08:30:43.111 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:43 np0005558241 nova_compute[248510]: 2025-12-13 08:30:43.128 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:43.134 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[89b4227f-0518-4486-a447-6a217e11bb85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:43.146 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6bd55093-1d76-4942-aa24-2a1e5a1cfd24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:43.148 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf2dff4-c797-4d0c-a2a1-448563415c25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:43.167 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba4cc12-6770-4784-aae2-91c217d290f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721078, 'reachable_time': 23831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311936, 'error': None, 'target': 'ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:43 np0005558241 systemd[1]: run-netns-ovnmeta\x2d2ca54d31\x2dd7ab\x2d4904\x2da0d6\x2d4e2970bb54bf.mount: Deactivated successfully.
Dec 13 03:30:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:43.173 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2ca54d31-d7ab-4904-a0d6-4e2970bb54bf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:30:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:43.174 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[78feedf5-fd68-4369-a967-2d12e7c9b89a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:43 np0005558241 systemd[1]: var-lib-containers-storage-overlay-710f0e6b8cf035f0d507005dd07e5b07a03746e8d692015118a115a014a12faa-merged.mount: Deactivated successfully.
Dec 13 03:30:43 np0005558241 nova_compute[248510]: 2025-12-13 08:30:43.356 248514 INFO nova.virt.libvirt.driver [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Deleting instance files /var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0_del#033[00m
Dec 13 03:30:43 np0005558241 nova_compute[248510]: 2025-12-13 08:30:43.357 248514 INFO nova.virt.libvirt.driver [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Deletion of /var/lib/nova/instances/18653df3-1934-41dc-b6ab-d1dc122052f0_del complete#033[00m
Dec 13 03:30:43 np0005558241 nova_compute[248510]: 2025-12-13 08:30:43.429 248514 INFO nova.compute.manager [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Took 3.16 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:30:43 np0005558241 nova_compute[248510]: 2025-12-13 08:30:43.430 248514 DEBUG oslo.service.loopingcall [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:30:43 np0005558241 nova_compute[248510]: 2025-12-13 08:30:43.430 248514 DEBUG nova.compute.manager [-] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:30:43 np0005558241 nova_compute[248510]: 2025-12-13 08:30:43.431 248514 DEBUG nova.network.neutron [-] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:30:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1998: 321 pgs: 321 active+clean; 247 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 51 KiB/s wr, 281 op/s
Dec 13 03:30:43 np0005558241 podman[311868]: 2025-12-13 08:30:43.579285969 +0000 UTC m=+1.583935623 container remove a56381d8971c4e7d003d107c0bd6f6466e5abd2e0968d3ec77735210a7f5db2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:30:43 np0005558241 systemd[1]: libpod-conmon-a56381d8971c4e7d003d107c0bd6f6466e5abd2e0968d3ec77735210a7f5db2a.scope: Deactivated successfully.
Dec 13 03:30:43 np0005558241 nova_compute[248510]: 2025-12-13 08:30:43.612 248514 INFO nova.virt.libvirt.driver [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Deleting instance files /var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9_del#033[00m
Dec 13 03:30:43 np0005558241 nova_compute[248510]: 2025-12-13 08:30:43.614 248514 INFO nova.virt.libvirt.driver [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Deletion of /var/lib/nova/instances/69f6dd3a-7c99-4537-8173-ec79bc6336a9_del complete#033[00m
Dec 13 03:30:43 np0005558241 nova_compute[248510]: 2025-12-13 08:30:43.695 248514 INFO nova.compute.manager [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Took 3.23 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:30:43 np0005558241 nova_compute[248510]: 2025-12-13 08:30:43.697 248514 DEBUG oslo.service.loopingcall [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:30:43 np0005558241 nova_compute[248510]: 2025-12-13 08:30:43.698 248514 DEBUG nova.compute.manager [-] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:30:43 np0005558241 nova_compute[248510]: 2025-12-13 08:30:43.699 248514 DEBUG nova.network.neutron [-] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:30:44 np0005558241 podman[312001]: 2025-12-13 08:30:44.103439112 +0000 UTC m=+0.045129265 container create e0463ec849dc6377707d5cf0f3a317b4ad9a989750fbf8adf87ef6181a822139 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_panini, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:30:44 np0005558241 systemd[1]: Started libpod-conmon-e0463ec849dc6377707d5cf0f3a317b4ad9a989750fbf8adf87ef6181a822139.scope.
Dec 13 03:30:44 np0005558241 podman[312001]: 2025-12-13 08:30:44.084775751 +0000 UTC m=+0.026465924 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:30:44 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:30:44 np0005558241 podman[312001]: 2025-12-13 08:30:44.216300426 +0000 UTC m=+0.157990579 container init e0463ec849dc6377707d5cf0f3a317b4ad9a989750fbf8adf87ef6181a822139 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_panini, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 03:30:44 np0005558241 podman[312001]: 2025-12-13 08:30:44.225243817 +0000 UTC m=+0.166933970 container start e0463ec849dc6377707d5cf0f3a317b4ad9a989750fbf8adf87ef6181a822139 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_panini, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 03:30:44 np0005558241 podman[312001]: 2025-12-13 08:30:44.229351428 +0000 UTC m=+0.171041581 container attach e0463ec849dc6377707d5cf0f3a317b4ad9a989750fbf8adf87ef6181a822139 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_panini, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 03:30:44 np0005558241 stupefied_panini[312017]: 167 167
Dec 13 03:30:44 np0005558241 systemd[1]: libpod-e0463ec849dc6377707d5cf0f3a317b4ad9a989750fbf8adf87ef6181a822139.scope: Deactivated successfully.
Dec 13 03:30:44 np0005558241 podman[312001]: 2025-12-13 08:30:44.234560047 +0000 UTC m=+0.176250200 container died e0463ec849dc6377707d5cf0f3a317b4ad9a989750fbf8adf87ef6181a822139 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_panini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:30:44 np0005558241 systemd[1]: var-lib-containers-storage-overlay-dc555c5033e25ffe0ac28911eef765c9f9c7dddc955539de127ec1bf687a9094-merged.mount: Deactivated successfully.
Dec 13 03:30:44 np0005558241 systemd[1]: libpod-conmon-e0463ec849dc6377707d5cf0f3a317b4ad9a989750fbf8adf87ef6181a822139.scope: Deactivated successfully.
Dec 13 03:30:44 np0005558241 podman[312001]: 2025-12-13 08:30:44.279556087 +0000 UTC m=+0.221246240 container remove e0463ec849dc6377707d5cf0f3a317b4ad9a989750fbf8adf87ef6181a822139 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_panini, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 03:30:44 np0005558241 nova_compute[248510]: 2025-12-13 08:30:44.419 248514 DEBUG nova.network.neutron [-] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:30:44 np0005558241 nova_compute[248510]: 2025-12-13 08:30:44.443 248514 INFO nova.compute.manager [-] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Took 1.01 seconds to deallocate network for instance.#033[00m
Dec 13 03:30:44 np0005558241 podman[312040]: 2025-12-13 08:30:44.489931608 +0000 UTC m=+0.053400549 container create 9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:30:44 np0005558241 nova_compute[248510]: 2025-12-13 08:30:44.495 248514 DEBUG oslo_concurrency.lockutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:44 np0005558241 nova_compute[248510]: 2025-12-13 08:30:44.496 248514 DEBUG oslo_concurrency.lockutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:44 np0005558241 systemd[1]: Started libpod-conmon-9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c.scope.
Dec 13 03:30:44 np0005558241 podman[312040]: 2025-12-13 08:30:44.468450698 +0000 UTC m=+0.031919659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:30:44 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:30:44 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f26c2d2e74e2aed451bb3575a37a03007eebd1a61aa68c45e2e1d4703c4f4b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:44 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f26c2d2e74e2aed451bb3575a37a03007eebd1a61aa68c45e2e1d4703c4f4b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:44 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f26c2d2e74e2aed451bb3575a37a03007eebd1a61aa68c45e2e1d4703c4f4b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:44 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f26c2d2e74e2aed451bb3575a37a03007eebd1a61aa68c45e2e1d4703c4f4b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:44 np0005558241 podman[312040]: 2025-12-13 08:30:44.600472444 +0000 UTC m=+0.163941405 container init 9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_wilbur, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:30:44 np0005558241 podman[312040]: 2025-12-13 08:30:44.609901887 +0000 UTC m=+0.173370828 container start 9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:30:44 np0005558241 podman[312040]: 2025-12-13 08:30:44.618023967 +0000 UTC m=+0.181492908 container attach 9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True)
Dec 13 03:30:44 np0005558241 nova_compute[248510]: 2025-12-13 08:30:44.644 248514 DEBUG nova.virt.libvirt.driver [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec 13 03:30:44 np0005558241 nova_compute[248510]: 2025-12-13 08:30:44.829 248514 DEBUG nova.compute.manager [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Received event network-vif-plugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:44 np0005558241 nova_compute[248510]: 2025-12-13 08:30:44.829 248514 DEBUG oslo_concurrency.lockutils [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:44 np0005558241 nova_compute[248510]: 2025-12-13 08:30:44.831 248514 DEBUG oslo_concurrency.lockutils [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:44 np0005558241 nova_compute[248510]: 2025-12-13 08:30:44.832 248514 DEBUG oslo_concurrency.lockutils [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:44 np0005558241 nova_compute[248510]: 2025-12-13 08:30:44.832 248514 DEBUG nova.compute.manager [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] No waiting events found dispatching network-vif-plugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:30:44 np0005558241 nova_compute[248510]: 2025-12-13 08:30:44.832 248514 WARNING nova.compute.manager [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Received unexpected event network-vif-plugged-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:30:44 np0005558241 nova_compute[248510]: 2025-12-13 08:30:44.833 248514 DEBUG nova.compute.manager [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Received event network-vif-unplugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:44 np0005558241 nova_compute[248510]: 2025-12-13 08:30:44.833 248514 DEBUG oslo_concurrency.lockutils [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:44 np0005558241 nova_compute[248510]: 2025-12-13 08:30:44.833 248514 DEBUG oslo_concurrency.lockutils [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:44 np0005558241 nova_compute[248510]: 2025-12-13 08:30:44.834 248514 DEBUG oslo_concurrency.lockutils [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:44 np0005558241 nova_compute[248510]: 2025-12-13 08:30:44.834 248514 DEBUG nova.compute.manager [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] No waiting events found dispatching network-vif-unplugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:30:44 np0005558241 nova_compute[248510]: 2025-12-13 08:30:44.834 248514 DEBUG nova.compute.manager [req-19319f3b-3d7e-4fef-ada1-ff35da8bff15 req-e7a0c9ff-56db-465e-b591-c1525ceb9444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Received event network-vif-unplugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:30:44 np0005558241 nova_compute[248510]: 2025-12-13 08:30:44.860 248514 DEBUG oslo_concurrency.processutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.026 248514 DEBUG oslo_concurrency.lockutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquiring lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.027 248514 DEBUG oslo_concurrency.lockutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.027 248514 DEBUG oslo_concurrency.lockutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquiring lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.028 248514 DEBUG oslo_concurrency.lockutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.028 248514 DEBUG oslo_concurrency.lockutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.029 248514 INFO nova.compute.manager [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Terminating instance#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.030 248514 DEBUG nova.compute.manager [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.058 248514 DEBUG nova.network.neutron [-] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:30:45 np0005558241 kernel: tap132de588-b2 (unregistering): left promiscuous mode
Dec 13 03:30:45 np0005558241 NetworkManager[50376]: <info>  [1765614645.0727] device (tap132de588-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:30:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:45Z|00613|binding|INFO|Releasing lport 132de588-b258-4d1f-9d17-7b0ef7d73a3b from this chassis (sb_readonly=0)
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.081 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:45Z|00614|binding|INFO|Setting lport 132de588-b258-4d1f-9d17-7b0ef7d73a3b down in Southbound
Dec 13 03:30:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:45Z|00615|binding|INFO|Removing iface tap132de588-b2 ovn-installed in OVS
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.083 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.088 248514 INFO nova.compute.manager [-] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Took 1.39 seconds to deallocate network for instance.#033[00m
Dec 13 03:30:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.088 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:5e:68 10.100.0.5'], port_security=['fa:16:3e:7d:5e:68 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'bc7aabfd-0b89-4d02-8aff-29f1bc423621', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96c67522-dadc-49c7-81e6-c5a1d8a2b085', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'def846f35c2747099dbe41221905d739', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75dfe6d3-3b39-436a-9304-896fdf5a4d05', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6fbc462b-7eea-4b98-8702-5b666823a7a9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=132de588-b258-4d1f-9d17-7b0ef7d73a3b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:30:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.089 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 132de588-b258-4d1f-9d17-7b0ef7d73a3b in datapath 96c67522-dadc-49c7-81e6-c5a1d8a2b085 unbound from our chassis#033[00m
Dec 13 03:30:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.091 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 96c67522-dadc-49c7-81e6-c5a1d8a2b085, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:30:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.092 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bbd13bfe-0fb3-4d22-b3c5-9d81bf194425]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.093 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085 namespace which is not needed anymore#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.110 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:45 np0005558241 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000042.scope: Deactivated successfully.
Dec 13 03:30:45 np0005558241 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000042.scope: Consumed 9.048s CPU time.
Dec 13 03:30:45 np0005558241 systemd-machined[210538]: Machine qemu-75-instance-00000042 terminated.
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.158 248514 DEBUG oslo_concurrency.lockutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.214 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.255 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.261 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.277 248514 INFO nova.virt.libvirt.driver [-] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Instance destroyed successfully.#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.278 248514 DEBUG nova.objects.instance [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lazy-loading 'resources' on Instance uuid bc7aabfd-0b89-4d02-8aff-29f1bc423621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:45 np0005558241 neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085[311407]: [NOTICE]   (311411) : haproxy version is 2.8.14-c23fe91
Dec 13 03:30:45 np0005558241 neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085[311407]: [NOTICE]   (311411) : path to executable is /usr/sbin/haproxy
Dec 13 03:30:45 np0005558241 neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085[311407]: [ALERT]    (311411) : Current worker (311413) exited with code 143 (Terminated)
Dec 13 03:30:45 np0005558241 neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085[311407]: [WARNING]  (311411) : All workers exited. Exiting... (0)
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.296 248514 DEBUG nova.virt.libvirt.vif [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-406757652',display_name='tempest-ServerMetadataNegativeTestJSON-server-406757652',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-406757652',id=66,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:30:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='def846f35c2747099dbe41221905d739',ramdisk_id='',reservation_id='r-aon0aotv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-402645639',owner_user_name='tempest-ServerMetadataNegativeTestJSON-402645639-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:36Z,user_data=None,user_id='83bbc7cfbcdc49ab885e530a79ae26f2',uuid=bc7aabfd-0b89-4d02-8aff-29f1bc423621,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.297 248514 DEBUG nova.network.os_vif_util [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Converting VIF {"id": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "address": "fa:16:3e:7d:5e:68", "network": {"id": "96c67522-dadc-49c7-81e6-c5a1d8a2b085", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-286033375-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "def846f35c2747099dbe41221905d739", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132de588-b2", "ovs_interfaceid": "132de588-b258-4d1f-9d17-7b0ef7d73a3b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.297 248514 DEBUG nova.network.os_vif_util [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:5e:68,bridge_name='br-int',has_traffic_filtering=True,id=132de588-b258-4d1f-9d17-7b0ef7d73a3b,network=Network(96c67522-dadc-49c7-81e6-c5a1d8a2b085),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132de588-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.298 248514 DEBUG os_vif [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:5e:68,bridge_name='br-int',has_traffic_filtering=True,id=132de588-b258-4d1f-9d17-7b0ef7d73a3b,network=Network(96c67522-dadc-49c7-81e6-c5a1d8a2b085),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132de588-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.300 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.300 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap132de588-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.302 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:45 np0005558241 systemd[1]: libpod-39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6.scope: Deactivated successfully.
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.306 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.309 248514 INFO os_vif [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:5e:68,bridge_name='br-int',has_traffic_filtering=True,id=132de588-b258-4d1f-9d17-7b0ef7d73a3b,network=Network(96c67522-dadc-49c7-81e6-c5a1d8a2b085),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132de588-b2')#033[00m
Dec 13 03:30:45 np0005558241 podman[312157]: 2025-12-13 08:30:45.312700558 +0000 UTC m=+0.084177848 container died 39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.362 248514 DEBUG nova.compute.manager [req-244defc8-2235-4951-aebf-04ae85afad64 req-a46f18ce-f877-47e0-a03a-fbce1f103c21 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Received event network-vif-deleted-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:45 np0005558241 lvm[312229]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:30:45 np0005558241 lvm[312229]: VG ceph_vg0 finished
Dec 13 03:30:45 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6-userdata-shm.mount: Deactivated successfully.
Dec 13 03:30:45 np0005558241 systemd[1]: var-lib-containers-storage-overlay-594738986a7fafb8e2e8c20fa7ea6b435a01ddf06491d50b530c9beeaa2d34f3-merged.mount: Deactivated successfully.
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:30:45 np0005558241 lvm[312231]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:30:45 np0005558241 lvm[312231]: VG ceph_vg1 finished
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.430163) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614645430219, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1121, "num_deletes": 258, "total_data_size": 1546693, "memory_usage": 1578176, "flush_reason": "Manual Compaction"}
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Dec 13 03:30:45 np0005558241 podman[312157]: 2025-12-13 08:30:45.431756385 +0000 UTC m=+0.203233685 container cleanup 39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Dec 13 03:30:45 np0005558241 lvm[312233]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:30:45 np0005558241 lvm[312233]: VG ceph_vg2 finished
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614645445258, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1518653, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37667, "largest_seqno": 38787, "table_properties": {"data_size": 1513251, "index_size": 2734, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12333, "raw_average_key_size": 19, "raw_value_size": 1502060, "raw_average_value_size": 2430, "num_data_blocks": 121, "num_entries": 618, "num_filter_entries": 618, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765614550, "oldest_key_time": 1765614550, "file_creation_time": 1765614645, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 15186 microseconds, and 5338 cpu microseconds.
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.445338) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1518653 bytes OK
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.445373) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.453926) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.454001) EVENT_LOG_v1 {"time_micros": 1765614645453986, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.454041) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1541384, prev total WAL file size 1541384, number of live WAL files 2.
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.455338) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323538' seq:72057594037927935, type:22 .. '6C6F676D0031353130' seq:0, type:0; will stop at (end)
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1483KB)], [83(8897KB)]
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614645455624, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 10629241, "oldest_snapshot_seqno": -1}
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2454432318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:30:45 np0005558241 systemd[1]: libpod-conmon-39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6.scope: Deactivated successfully.
Dec 13 03:30:45 np0005558241 relaxed_wilbur[312056]: {}
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.517 248514 DEBUG oslo_concurrency.processutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v1999: 321 pgs: 321 active+clean; 169 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 66 KiB/s wr, 317 op/s
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.524 248514 DEBUG nova.compute.provider_tree [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.542 248514 DEBUG nova.scheduler.client.report [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6311 keys, 10508206 bytes, temperature: kUnknown
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614645556300, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 10508206, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10464340, "index_size": 26988, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 160659, "raw_average_key_size": 25, "raw_value_size": 10349359, "raw_average_value_size": 1639, "num_data_blocks": 1095, "num_entries": 6311, "num_filter_entries": 6311, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765614645, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:30:45 np0005558241 systemd[1]: libpod-9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c.scope: Deactivated successfully.
Dec 13 03:30:45 np0005558241 systemd[1]: libpod-9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c.scope: Consumed 1.535s CPU time.
Dec 13 03:30:45 np0005558241 conmon[312056]: conmon 9c931010274a253ba756 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c.scope/container/memory.events
Dec 13 03:30:45 np0005558241 podman[312040]: 2025-12-13 08:30:45.559429095 +0000 UTC m=+1.122898036 container died 9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_wilbur, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.556801) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 10508206 bytes
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.560308) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.4 rd, 104.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 8.7 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(13.9) write-amplify(6.9) OK, records in: 6843, records dropped: 532 output_compression: NoCompression
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.560337) EVENT_LOG_v1 {"time_micros": 1765614645560325, "job": 48, "event": "compaction_finished", "compaction_time_micros": 100884, "compaction_time_cpu_micros": 31704, "output_level": 6, "num_output_files": 1, "total_output_size": 10508206, "num_input_records": 6843, "num_output_records": 6311, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614645560849, "job": 48, "event": "table_file_deletion", "file_number": 85}
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614645562766, "job": 48, "event": "table_file_deletion", "file_number": 83}
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.455224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.562995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.563004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.563006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.563016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:30:45.563018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.578 248514 DEBUG oslo_concurrency.lockutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.580 248514 DEBUG oslo_concurrency.lockutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:45 np0005558241 podman[312235]: 2025-12-13 08:30:45.583367336 +0000 UTC m=+0.122603466 container remove 39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:30:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.593 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[44d61177-38c9-4ae3-a3a5-2e8276b99dc6]: (4, ('Sat Dec 13 08:30:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085 (39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6)\n39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6\nSat Dec 13 08:30:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085 (39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6)\n39d124e64a4211f5fa5423aa01403f4a328c1e3b1cdadd5a51d9348f7f7250d6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.603 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8053dfb5-7fea-4149-be9c-3c0b5f1cdc90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.606 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96c67522-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:45 np0005558241 kernel: tap96c67522-d0: left promiscuous mode
Dec 13 03:30:45 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0f26c2d2e74e2aed451bb3575a37a03007eebd1a61aa68c45e2e1d4703c4f4b1-merged.mount: Deactivated successfully.
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.614 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.628 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.635 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[94a59cbc-5bc5-42fb-b6b4-54af7f856d50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.636 248514 INFO nova.scheduler.client.report [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Deleted allocations for instance 18653df3-1934-41dc-b6ab-d1dc122052f0#033[00m
Dec 13 03:30:45 np0005558241 podman[312040]: 2025-12-13 08:30:45.640000393 +0000 UTC m=+1.203469334 container remove 9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 03:30:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.648 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[964b332c-d8d6-4f9d-8b05-f5c9caa866cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:45 np0005558241 systemd[1]: libpod-conmon-9c931010274a253ba756223ca6a394310059cedf2e5f29d3736027f7c2d0824c.scope: Deactivated successfully.
Dec 13 03:30:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.650 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[433e8c54-7b93-4111-84e1-86a7617edcf9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.676 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b44cd41a-b2d4-42a7-9f54-56dfb8d3f862]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721286, 'reachable_time': 22927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312265, 'error': None, 'target': 'ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.679 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-96c67522-dadc-49c7-81e6-c5a1d8a2b085 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:30:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:45.679 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[463989cb-a1dc-447d-a8a4-9a63367a29f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:45 np0005558241 systemd[1]: run-netns-ovnmeta\x2d96c67522\x2ddadc\x2d49c7\x2d81e6\x2dc5a1d8a2b085.mount: Deactivated successfully.
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.730 248514 DEBUG oslo_concurrency.lockutils [None req-8a531247-67b4-4a56-957d-792bd4972c0c 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "18653df3-1934-41dc-b6ab-d1dc122052f0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.737 248514 DEBUG oslo_concurrency.processutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.858 248514 INFO nova.virt.libvirt.driver [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Deleting instance files /var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621_del#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.860 248514 INFO nova.virt.libvirt.driver [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Deletion of /var/lib/nova/instances/bc7aabfd-0b89-4d02-8aff-29f1bc423621_del complete#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.929 248514 INFO nova.compute.manager [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.930 248514 DEBUG oslo.service.loopingcall [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.930 248514 DEBUG nova.compute.manager [-] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:30:45 np0005558241 nova_compute[248510]: 2025-12-13 08:30:45.930 248514 DEBUG nova.network.neutron [-] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.141 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.142 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.161 248514 DEBUG nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.238 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:30:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1031531616' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.322 248514 DEBUG oslo_concurrency.processutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.328 248514 DEBUG nova.compute.provider_tree [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.349 248514 DEBUG nova.scheduler.client.report [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.390 248514 DEBUG oslo_concurrency.lockutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.393 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.401 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.401 248514 INFO nova.compute.claims [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.461 248514 INFO nova.scheduler.client.report [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Deleted allocations for instance 69f6dd3a-7c99-4537-8173-ec79bc6336a9#033[00m
Dec 13 03:30:46 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:30:46 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.569 248514 DEBUG oslo_concurrency.lockutils [None req-6b83bfcc-351e-41b1-80f7-d2256f5b47d2 1f703cc8fd3b4cdabb2b154345f70a7c c25aa866d502481eb9410b7d92a1347b - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.613 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.675 248514 DEBUG nova.network.neutron [-] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.701 248514 INFO nova.compute.manager [-] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Took 0.77 seconds to deallocate network for instance.#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.758 248514 DEBUG oslo_concurrency.lockutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:46 np0005558241 kernel: tapb5058a06-71 (unregistering): left promiscuous mode
Dec 13 03:30:46 np0005558241 NetworkManager[50376]: <info>  [1765614646.9091] device (tapb5058a06-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:30:46 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:46Z|00616|binding|INFO|Releasing lport b5058a06-7109-4ac0-96d8-7562e66bee25 from this chassis (sb_readonly=0)
Dec 13 03:30:46 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:46Z|00617|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 down in Southbound
Dec 13 03:30:46 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:46Z|00618|binding|INFO|Removing iface tapb5058a06-71 ovn-installed in OVS
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.921 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:46.925 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:30:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:46.926 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 unbound from our chassis#033[00m
Dec 13 03:30:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:46.928 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43ee8730-a174-4859-9c95-b3ad6dc68839, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:30:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:46.929 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[271c3ae6-6d78-45bb-af57-eb1c416d088d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:46.929 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace which is not needed anymore#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.953 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.961 248514 DEBUG nova.compute.manager [req-9c3018b5-b954-4f6c-9d0b-c1a14ca5ecf2 req-d78746ca-5e4f-428e-9ad4-9bcf2b209fe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Received event network-vif-plugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.962 248514 DEBUG oslo_concurrency.lockutils [req-9c3018b5-b954-4f6c-9d0b-c1a14ca5ecf2 req-d78746ca-5e4f-428e-9ad4-9bcf2b209fe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.963 248514 DEBUG oslo_concurrency.lockutils [req-9c3018b5-b954-4f6c-9d0b-c1a14ca5ecf2 req-d78746ca-5e4f-428e-9ad4-9bcf2b209fe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.963 248514 DEBUG oslo_concurrency.lockutils [req-9c3018b5-b954-4f6c-9d0b-c1a14ca5ecf2 req-d78746ca-5e4f-428e-9ad4-9bcf2b209fe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "69f6dd3a-7c99-4537-8173-ec79bc6336a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.963 248514 DEBUG nova.compute.manager [req-9c3018b5-b954-4f6c-9d0b-c1a14ca5ecf2 req-d78746ca-5e4f-428e-9ad4-9bcf2b209fe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] No waiting events found dispatching network-vif-plugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.964 248514 WARNING nova.compute.manager [req-9c3018b5-b954-4f6c-9d0b-c1a14ca5ecf2 req-d78746ca-5e4f-428e-9ad4-9bcf2b209fe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Received unexpected event network-vif-plugged-4892c0f3-faa8-4fc8-9c97-fd2ada3ead3d for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:30:46 np0005558241 nova_compute[248510]: 2025-12-13 08:30:46.964 248514 DEBUG nova.compute.manager [req-9c3018b5-b954-4f6c-9d0b-c1a14ca5ecf2 req-d78746ca-5e4f-428e-9ad4-9bcf2b209fe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Received event network-vif-deleted-6d0e1f86-2fcb-46f9-9cd1-35dca4ceadfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:46 np0005558241 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Dec 13 03:30:46 np0005558241 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003c.scope: Consumed 13.280s CPU time.
Dec 13 03:30:46 np0005558241 systemd-machined[210538]: Machine qemu-72-instance-0000003c terminated.
Dec 13 03:30:47 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[310476]: [NOTICE]   (310480) : haproxy version is 2.8.14-c23fe91
Dec 13 03:30:47 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[310476]: [NOTICE]   (310480) : path to executable is /usr/sbin/haproxy
Dec 13 03:30:47 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[310476]: [WARNING]  (310480) : Exiting Master process...
Dec 13 03:30:47 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[310476]: [WARNING]  (310480) : Exiting Master process...
Dec 13 03:30:47 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[310476]: [ALERT]    (310480) : Current worker (310482) exited with code 143 (Terminated)
Dec 13 03:30:47 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[310476]: [WARNING]  (310480) : All workers exited. Exiting... (0)
Dec 13 03:30:47 np0005558241 systemd[1]: libpod-6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8.scope: Deactivated successfully.
Dec 13 03:30:47 np0005558241 podman[312357]: 2025-12-13 08:30:47.080914636 +0000 UTC m=+0.046287683 container died 6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:30:47 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8-userdata-shm.mount: Deactivated successfully.
Dec 13 03:30:47 np0005558241 systemd[1]: var-lib-containers-storage-overlay-51c4e163921b64c8e9176bf88efb899af63dfc0f8f476d28a475a589e7ef09c5-merged.mount: Deactivated successfully.
Dec 13 03:30:47 np0005558241 NetworkManager[50376]: <info>  [1765614647.1401] manager: (tapb5058a06-71): new Tun device (/org/freedesktop/NetworkManager/Devices/273)
Dec 13 03:30:47 np0005558241 podman[312357]: 2025-12-13 08:30:47.140570338 +0000 UTC m=+0.105943385 container cleanup 6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.141 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.148 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:47 np0005558241 systemd[1]: libpod-conmon-6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8.scope: Deactivated successfully.
Dec 13 03:30:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:30:47 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1856961826' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:30:47 np0005558241 podman[312393]: 2025-12-13 08:30:47.218552462 +0000 UTC m=+0.049075592 container remove 6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.222 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:47.229 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb4d734-fe39-4061-b050-a5e719501b65]: (4, ('Sat Dec 13 08:30:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8)\n6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8\nSat Dec 13 08:30:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8)\n6d16b20ac309146a91bcab3752ab85bf2994e2903bfde4e672a4277b2e6369b8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.230 248514 DEBUG nova.compute.provider_tree [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:30:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:47.231 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b7b54ec3-d4e1-4224-9870-2a14a5e5b02b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:47.232 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.234 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:47 np0005558241 kernel: tap43ee8730-a0: left promiscuous mode
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.249 248514 DEBUG nova.scheduler.client.report [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.257 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:47.263 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[46428bf5-3154-44df-9574-76c45a00ef6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.274 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.275 248514 DEBUG nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.278 248514 DEBUG oslo_concurrency.lockutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:47.284 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[15d5a692-f19a-4e52-bd77-9b18d632c2b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:47.287 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[91679efa-08b2-455e-9c14-d5aca82cfcb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:47.306 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[374a2019-f168-4bf3-9acd-12b51d69127b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720147, 'reachable_time': 23962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312418, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:47.308 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:30:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:47.308 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[4e32594c-69bd-4977-bd81-14bf8e32b0e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:47 np0005558241 systemd[1]: run-netns-ovnmeta\x2d43ee8730\x2da174\x2d4859\x2d9c95\x2db3ad6dc68839.mount: Deactivated successfully.
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.328 248514 DEBUG nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.328 248514 DEBUG nova.network.neutron [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.355 248514 INFO nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.381 248514 DEBUG nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.392 248514 DEBUG oslo_concurrency.processutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.477 248514 DEBUG nova.compute.manager [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Received event network-vif-deleted-132de588-b258-4d1f-9d17-7b0ef7d73a3b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.477 248514 DEBUG nova.compute.manager [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.478 248514 DEBUG oslo_concurrency.lockutils [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.478 248514 DEBUG oslo_concurrency.lockutils [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.478 248514 DEBUG oslo_concurrency.lockutils [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.478 248514 DEBUG nova.compute.manager [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.479 248514 WARNING nova.compute.manager [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state powering-off.#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.479 248514 DEBUG nova.compute.manager [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.479 248514 DEBUG oslo_concurrency.lockutils [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.479 248514 DEBUG oslo_concurrency.lockutils [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.479 248514 DEBUG oslo_concurrency.lockutils [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.480 248514 DEBUG nova.compute.manager [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.480 248514 WARNING nova.compute.manager [req-18ae0cdb-2f3a-46ad-b6c7-54c2b9ab224b req-ca6c8155-21ce-4bcb-bff3-895c59b02c7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state powering-off.#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.485 248514 DEBUG nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.487 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.487 248514 INFO nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Creating image(s)#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.510 248514 DEBUG nova.storage.rbd_utils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2000: 321 pgs: 321 active+clean; 169 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 53 KiB/s wr, 301 op/s
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.538 248514 DEBUG nova.storage.rbd_utils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.562 248514 DEBUG nova.storage.rbd_utils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.566 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.604 248514 DEBUG nova.policy [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '91d0d3efedc943b48ad0fc4295b6fc7c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2de328b46a6e4f588f5e2a254db7f4ef', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.641 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.642 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.642 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.643 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.663 248514 DEBUG nova.storage.rbd_utils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.666 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.704 248514 INFO nova.virt.libvirt.driver [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance shutdown successfully after 13 seconds.#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.710 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance destroyed successfully.#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.710 248514 DEBUG nova.objects.instance [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'numa_topology' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.728 248514 DEBUG nova.compute.manager [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.788 248514 DEBUG oslo_concurrency.lockutils [None req-9dc772b1-02dc-4e2d-b1f4-d031ec832df8 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.791 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.928 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.929 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.929 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.929 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:30:47 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/680485059' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.964 248514 DEBUG oslo_concurrency.processutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.970 248514 DEBUG nova.compute.provider_tree [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:30:47 np0005558241 nova_compute[248510]: 2025-12-13 08:30:47.990 248514 DEBUG nova.scheduler.client.report [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:30:48 np0005558241 nova_compute[248510]: 2025-12-13 08:30:48.017 248514 DEBUG oslo_concurrency.lockutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:48 np0005558241 nova_compute[248510]: 2025-12-13 08:30:48.048 248514 INFO nova.scheduler.client.report [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Deleted allocations for instance bc7aabfd-0b89-4d02-8aff-29f1bc423621#033[00m
Dec 13 03:30:48 np0005558241 nova_compute[248510]: 2025-12-13 08:30:48.122 248514 DEBUG oslo_concurrency.lockutils [None req-e02b6fc6-d1fd-471d-8f22-31e09d53b33a 83bbc7cfbcdc49ab885e530a79ae26f2 def846f35c2747099dbe41221905d739 - - default default] Lock "bc7aabfd-0b89-4d02-8aff-29f1bc423621" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:48 np0005558241 nova_compute[248510]: 2025-12-13 08:30:48.418 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.752s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:48 np0005558241 nova_compute[248510]: 2025-12-13 08:30:48.483 248514 DEBUG nova.storage.rbd_utils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] resizing rbd image b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:30:48 np0005558241 nova_compute[248510]: 2025-12-13 08:30:48.651 248514 DEBUG nova.network.neutron [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Successfully created port: 0cba7327-b5c5-419e-84f1-4264a7dedf3c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:30:48 np0005558241 nova_compute[248510]: 2025-12-13 08:30:48.856 248514 DEBUG nova.objects.instance [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lazy-loading 'migration_context' on Instance uuid b4a46029-fa6e-4566-9187-16b9d1bfd6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:48 np0005558241 nova_compute[248510]: 2025-12-13 08:30:48.877 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:30:48 np0005558241 nova_compute[248510]: 2025-12-13 08:30:48.878 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Ensure instance console log exists: /var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:30:48 np0005558241 nova_compute[248510]: 2025-12-13 08:30:48.879 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:48 np0005558241 nova_compute[248510]: 2025-12-13 08:30:48.879 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:48 np0005558241 nova_compute[248510]: 2025-12-13 08:30:48.879 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:30:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 8484 writes, 38K keys, 8484 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 8484 writes, 8484 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1608 writes, 7468 keys, 1608 commit groups, 1.0 writes per commit group, ingest: 10.21 MB, 0.02 MB/s#012Interval WAL: 1608 writes, 1608 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.2      3.17              0.16        24    0.132       0      0       0.0       0.0#012  L6      1/0   10.02 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.8     62.0     51.4      3.62              0.56        23    0.157    123K    12K       0.0       0.0#012 Sum      1/0   10.02 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.8     33.1     34.5      6.79              0.72        47    0.144    123K    12K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.0     76.1     79.4      0.82              0.19        12    0.068     38K   3118       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     62.0     51.4      3.62              0.56        23    0.157    123K    12K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.3      3.17              0.16        23    0.138       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.047, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.23 GB write, 0.07 MB/s write, 0.22 GB read, 0.06 MB/s read, 6.8 seconds#012Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 304.00 MB usage: 26.30 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000273 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1641,25.37 MB,8.34618%) FilterBlock(48,346.55 KB,0.111324%) IndexBlock(48,606.03 KB,0.19468%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec 13 03:30:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2001: 321 pgs: 321 active+clean; 144 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 109 KiB/s wr, 317 op/s
Dec 13 03:30:49 np0005558241 nova_compute[248510]: 2025-12-13 08:30:49.577 248514 DEBUG nova.network.neutron [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Successfully updated port: 0cba7327-b5c5-419e-84f1-4264a7dedf3c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:30:49 np0005558241 nova_compute[248510]: 2025-12-13 08:30:49.601 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "refresh_cache-b4a46029-fa6e-4566-9187-16b9d1bfd6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:30:49 np0005558241 nova_compute[248510]: 2025-12-13 08:30:49.602 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquired lock "refresh_cache-b4a46029-fa6e-4566-9187-16b9d1bfd6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:30:49 np0005558241 nova_compute[248510]: 2025-12-13 08:30:49.602 248514 DEBUG nova.network.neutron [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:30:49 np0005558241 nova_compute[248510]: 2025-12-13 08:30:49.656 248514 DEBUG nova.objects.instance [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'flavor' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:49 np0005558241 nova_compute[248510]: 2025-12-13 08:30:49.687 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:30:49 np0005558241 nova_compute[248510]: 2025-12-13 08:30:49.692 248514 DEBUG oslo_concurrency.lockutils [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:30:49 np0005558241 nova_compute[248510]: 2025-12-13 08:30:49.699 248514 DEBUG nova.compute.manager [req-5edff062-d2c7-48a9-95a7-92e10d50cdeb req-f8b33ec9-529a-4bfb-a505-06886d3e37ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Received event network-changed-0cba7327-b5c5-419e-84f1-4264a7dedf3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:49 np0005558241 nova_compute[248510]: 2025-12-13 08:30:49.700 248514 DEBUG nova.compute.manager [req-5edff062-d2c7-48a9-95a7-92e10d50cdeb req-f8b33ec9-529a-4bfb-a505-06886d3e37ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Refreshing instance network info cache due to event network-changed-0cba7327-b5c5-419e-84f1-4264a7dedf3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:30:49 np0005558241 nova_compute[248510]: 2025-12-13 08:30:49.700 248514 DEBUG oslo_concurrency.lockutils [req-5edff062-d2c7-48a9-95a7-92e10d50cdeb req-f8b33ec9-529a-4bfb-a505-06886d3e37ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-b4a46029-fa6e-4566-9187-16b9d1bfd6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:30:49 np0005558241 nova_compute[248510]: 2025-12-13 08:30:49.719 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:30:49 np0005558241 nova_compute[248510]: 2025-12-13 08:30:49.719 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:30:49 np0005558241 nova_compute[248510]: 2025-12-13 08:30:49.720 248514 DEBUG oslo_concurrency.lockutils [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:30:49 np0005558241 nova_compute[248510]: 2025-12-13 08:30:49.720 248514 DEBUG nova.network.neutron [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:30:49 np0005558241 nova_compute[248510]: 2025-12-13 08:30:49.720 248514 DEBUG nova.objects.instance [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'info_cache' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:49 np0005558241 nova_compute[248510]: 2025-12-13 08:30:49.721 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:30:49 np0005558241 nova_compute[248510]: 2025-12-13 08:30:49.857 248514 DEBUG nova.network.neutron [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.216 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.302 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.654 248514 DEBUG nova.network.neutron [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Updating instance_info_cache with network_info: [{"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.679 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Releasing lock "refresh_cache-b4a46029-fa6e-4566-9187-16b9d1bfd6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.679 248514 DEBUG nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Instance network_info: |[{"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.680 248514 DEBUG oslo_concurrency.lockutils [req-5edff062-d2c7-48a9-95a7-92e10d50cdeb req-f8b33ec9-529a-4bfb-a505-06886d3e37ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-b4a46029-fa6e-4566-9187-16b9d1bfd6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.680 248514 DEBUG nova.network.neutron [req-5edff062-d2c7-48a9-95a7-92e10d50cdeb req-f8b33ec9-529a-4bfb-a505-06886d3e37ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Refreshing network info cache for port 0cba7327-b5c5-419e-84f1-4264a7dedf3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.683 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Start _get_guest_xml network_info=[{"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.688 248514 WARNING nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.693 248514 DEBUG nova.virt.libvirt.host [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.694 248514 DEBUG nova.virt.libvirt.host [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.696 248514 DEBUG nova.virt.libvirt.host [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.697 248514 DEBUG nova.virt.libvirt.host [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.698 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.698 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.699 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.699 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.699 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.699 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.699 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.700 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.700 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.700 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.700 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.701 248514 DEBUG nova.virt.hardware [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:30:50 np0005558241 nova_compute[248510]: 2025-12-13 08:30:50.704 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:30:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1223735418' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.300 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.321 248514 DEBUG nova.storage.rbd_utils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.326 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2002: 321 pgs: 321 active+clean; 156 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 175 op/s
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.802 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.802 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.802 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.802 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:30:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1205994326' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.879 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.882 248514 DEBUG nova.virt.libvirt.vif [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-729072340',display_name='tempest-ImagesOneServerNegativeTestJSON-server-729072340',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-729072340',id=67,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2de328b46a6e4f588f5e2a254db7f4ef',ramdisk_id='',reservation_id='r-vhrthntm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1826994500',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:30:47Z,user_data=None,user_id='91d0d3efedc943b48ad0fc4295b6fc7c',uuid=b4a46029-fa6e-4566-9187-16b9d1bfd6d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.882 248514 DEBUG nova.network.os_vif_util [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converting VIF {"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.883 248514 DEBUG nova.network.os_vif_util [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:e4:66,bridge_name='br-int',has_traffic_filtering=True,id=0cba7327-b5c5-419e-84f1-4264a7dedf3c,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cba7327-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.884 248514 DEBUG nova.objects.instance [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lazy-loading 'pci_devices' on Instance uuid b4a46029-fa6e-4566-9187-16b9d1bfd6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.909 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:30:51 np0005558241 nova_compute[248510]:  <uuid>b4a46029-fa6e-4566-9187-16b9d1bfd6d6</uuid>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:  <name>instance-00000043</name>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-729072340</nova:name>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:30:50</nova:creationTime>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:30:51 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:        <nova:user uuid="91d0d3efedc943b48ad0fc4295b6fc7c">tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member</nova:user>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:        <nova:project uuid="2de328b46a6e4f588f5e2a254db7f4ef">tempest-ImagesOneServerNegativeTestJSON-1826994500</nova:project>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:        <nova:port uuid="0cba7327-b5c5-419e-84f1-4264a7dedf3c">
Dec 13 03:30:51 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <entry name="serial">b4a46029-fa6e-4566-9187-16b9d1bfd6d6</entry>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <entry name="uuid">b4a46029-fa6e-4566-9187-16b9d1bfd6d6</entry>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk">
Dec 13 03:30:51 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:30:51 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk.config">
Dec 13 03:30:51 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:30:51 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:2e:e4:66"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <target dev="tap0cba7327-b5"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6/console.log" append="off"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:30:51 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:30:51 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:30:51 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:30:51 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.911 248514 DEBUG nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Preparing to wait for external event network-vif-plugged-0cba7327-b5c5-419e-84f1-4264a7dedf3c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.911 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.911 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.912 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.912 248514 DEBUG nova.virt.libvirt.vif [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-729072340',display_name='tempest-ImagesOneServerNegativeTestJSON-server-729072340',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-729072340',id=67,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2de328b46a6e4f588f5e2a254db7f4ef',ramdisk_id='',reservation_id='r-vhrthntm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1826994500',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:30:47Z,user_data=None,user_id='91d0d3efedc943b48ad0fc4295b6fc7c',uuid=b4a46029-fa6e-4566-9187-16b9d1bfd6d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.913 248514 DEBUG nova.network.os_vif_util [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converting VIF {"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.913 248514 DEBUG nova.network.os_vif_util [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:e4:66,bridge_name='br-int',has_traffic_filtering=True,id=0cba7327-b5c5-419e-84f1-4264a7dedf3c,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cba7327-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.914 248514 DEBUG os_vif [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:e4:66,bridge_name='br-int',has_traffic_filtering=True,id=0cba7327-b5c5-419e-84f1-4264a7dedf3c,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cba7327-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.914 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.915 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.916 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.919 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.920 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cba7327-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.920 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0cba7327-b5, col_values=(('external_ids', {'iface-id': '0cba7327-b5c5-419e-84f1-4264a7dedf3c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:e4:66', 'vm-uuid': 'b4a46029-fa6e-4566-9187-16b9d1bfd6d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.922 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:51 np0005558241 NetworkManager[50376]: <info>  [1765614651.9236] manager: (tap0cba7327-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.924 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.931 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:51 np0005558241 nova_compute[248510]: 2025-12-13 08:30:51.933 248514 INFO os_vif [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:e4:66,bridge_name='br-int',has_traffic_filtering=True,id=0cba7327-b5c5-419e-84f1-4264a7dedf3c,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cba7327-b5')#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.001 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.003 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.003 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No VIF found with MAC fa:16:3e:2e:e4:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.004 248514 INFO nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Using config drive#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.032 248514 DEBUG nova.storage.rbd_utils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:30:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1674983329' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.437 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.515 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.516 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.519 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.519 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.674 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.676 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4058MB free_disk=59.92822165135294GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.676 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.676 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.726 248514 INFO nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Creating config drive at /var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6/disk.config#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.731 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp24n24kn_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.774 248514 DEBUG nova.network.neutron [req-5edff062-d2c7-48a9-95a7-92e10d50cdeb req-f8b33ec9-529a-4bfb-a505-06886d3e37ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Updated VIF entry in instance network info cache for port 0cba7327-b5c5-419e-84f1-4264a7dedf3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.775 248514 DEBUG nova.network.neutron [req-5edff062-d2c7-48a9-95a7-92e10d50cdeb req-f8b33ec9-529a-4bfb-a505-06886d3e37ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Updating instance_info_cache with network_info: [{"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.863 248514 DEBUG oslo_concurrency.lockutils [req-5edff062-d2c7-48a9-95a7-92e10d50cdeb req-f8b33ec9-529a-4bfb-a505-06886d3e37ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-b4a46029-fa6e-4566-9187-16b9d1bfd6d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.879 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp24n24kn_" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.924 248514 DEBUG nova.storage.rbd_utils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.933 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6/disk.config b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.978 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 3ced27d6-a2a8-4ce3-a7e7-494270418542 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.978 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance b4a46029-fa6e-4566-9187-16b9d1bfd6d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.979 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:30:52 np0005558241 nova_compute[248510]: 2025-12-13 08:30:52.979 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:30:52 np0005558241 podman[312724]: 2025-12-13 08:30:52.982993891 +0000 UTC m=+0.069013024 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 13 03:30:53 np0005558241 podman[312725]: 2025-12-13 08:30:53.007216068 +0000 UTC m=+0.089887068 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 03:30:53 np0005558241 podman[312716]: 2025-12-13 08:30:53.012299864 +0000 UTC m=+0.100948062 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Dec 13 03:30:53 np0005558241 nova_compute[248510]: 2025-12-13 08:30:53.037 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 1.8 MiB/s wr, 109 op/s
Dec 13 03:30:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:30:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/17534862' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:30:54 np0005558241 nova_compute[248510]: 2025-12-13 08:30:54.073 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:54 np0005558241 nova_compute[248510]: 2025-12-13 08:30:54.083 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:30:54 np0005558241 nova_compute[248510]: 2025-12-13 08:30:54.729 248514 DEBUG nova.network.neutron [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.219 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.261 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.269 248514 DEBUG oslo_concurrency.lockutils [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.291 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.291 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.305 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance destroyed successfully.#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.306 248514 DEBUG nova.objects.instance [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'numa_topology' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.326 248514 DEBUG nova.objects.instance [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'resources' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.332 248514 DEBUG oslo_concurrency.processutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6/disk.config b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.333 248514 INFO nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Deleting local config drive /var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6/disk.config because it was imported into RBD.#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.340 248514 DEBUG nova.virt.libvirt.vif [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.340 248514 DEBUG nova.network.os_vif_util [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.341 248514 DEBUG nova.network.os_vif_util [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.342 248514 DEBUG os_vif [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.344 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.345 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5058a06-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.346 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.349 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.358 248514 INFO os_vif [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.370 248514 DEBUG nova.virt.libvirt.driver [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Start _get_guest_xml network_info=[{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.376 248514 WARNING nova.virt.libvirt.driver [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.384 248514 DEBUG nova.virt.libvirt.host [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.385 248514 DEBUG nova.virt.libvirt.host [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.392 248514 DEBUG nova.virt.libvirt.host [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.393 248514 DEBUG nova.virt.libvirt.host [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.394 248514 DEBUG nova.virt.libvirt.driver [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.394 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.395 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.395 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.395 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.395 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.395 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.396 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.396 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.396 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.396 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.397 248514 DEBUG nova.virt.hardware [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.397 248514 DEBUG nova.objects.instance [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:55 np0005558241 kernel: tap0cba7327-b5: entered promiscuous mode
Dec 13 03:30:55 np0005558241 NetworkManager[50376]: <info>  [1765614655.4092] manager: (tap0cba7327-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/275)
Dec 13 03:30:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:55Z|00619|binding|INFO|Claiming lport 0cba7327-b5c5-419e-84f1-4264a7dedf3c for this chassis.
Dec 13 03:30:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:55Z|00620|binding|INFO|0cba7327-b5c5-419e-84f1-4264a7dedf3c: Claiming fa:16:3e:2e:e4:66 10.100.0.10
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.411 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.412 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.412 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.410 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:55Z|00621|binding|INFO|Setting lport 0cba7327-b5c5-419e-84f1-4264a7dedf3c ovn-installed in OVS
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.428 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.433 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:55 np0005558241 systemd-udevd[312848]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:30:55 np0005558241 systemd-machined[210538]: New machine qemu-76-instance-00000043.
Dec 13 03:30:55 np0005558241 NetworkManager[50376]: <info>  [1765614655.4667] device (tap0cba7327-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:30:55 np0005558241 NetworkManager[50376]: <info>  [1765614655.4678] device (tap0cba7327-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:30:55 np0005558241 systemd[1]: Started Virtual Machine qemu-76-instance-00000043.
Dec 13 03:30:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2004: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 1.8 MiB/s wr, 94 op/s
Dec 13 03:30:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:55Z|00622|binding|INFO|Setting lport 0cba7327-b5c5-419e-84f1-4264a7dedf3c up in Southbound
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.548 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:e4:66 10.100.0.10'], port_security=['fa:16:3e:2e:e4:66 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b4a46029-fa6e-4566-9187-16b9d1bfd6d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2de328b46a6e4f588f5e2a254db7f4ef', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b109409-2f17-43b9-91ce-3ee865f0de12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6896dc77-88b0-4868-833d-01241883d6af, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0cba7327-b5c5-419e-84f1-4264a7dedf3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.550 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0cba7327-b5c5-419e-84f1-4264a7dedf3c in datapath 527d37da-eda0-4bfe-9f1d-310d58024d5d bound to our chassis#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.552 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 527d37da-eda0-4bfe-9f1d-310d58024d5d#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.558 248514 DEBUG oslo_concurrency.processutils [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.571 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6bcdf942-d247-4086-b66f-d9edbce2cc99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.572 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap527d37da-e1 in ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.575 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap527d37da-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.575 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a5fa1ffd-008d-48f7-be63-e7a416da4947]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.576 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[57819f36-5b72-4c73-8be2-b6a9af68c3da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.591 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e81f0fa2-b063-4f76-be19-6f2d90eba968]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.608 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0b8d11e0-b5e3-488d-bc0e-7da700ef611d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.651 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f3da4de7-4749-4aa4-b582-c92e62d653b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.658 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e08b9652-0d59-4fac-aed3-6916ed414f65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:55 np0005558241 NetworkManager[50376]: <info>  [1765614655.6590] manager: (tap527d37da-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/276)
Dec 13 03:30:55 np0005558241 systemd-udevd[312850]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.700 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[41e4481d-4deb-44b9-8d95-afee416ec292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.707 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[17e4a9ae-0884-4b91-840b-116bb5eaf64e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:55 np0005558241 NetworkManager[50376]: <info>  [1765614655.7401] device (tap527d37da-e0): carrier: link connected
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.748 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614640.7473803, 18653df3-1934-41dc-b6ab-d1dc122052f0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.749 248514 INFO nova.compute.manager [-] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.749 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9c8a7290-ede6-4063-af7e-f445c5d6d02f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.770 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc6bd92-2b18-4dd1-a95e-a669a9beea83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527d37da-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:e1:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723295, 'reachable_time': 34222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312901, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.783 248514 DEBUG nova.compute.manager [None req-337ee93a-d7f6-4087-8bb4-47c1ee370977 - - - - - -] [instance: 18653df3-1934-41dc-b6ab-d1dc122052f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.790 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d8227afc-fb76-4b75-9c9c-fe0d3a1d54b4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:e196'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 723295, 'tstamp': 723295}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312902, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.811 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a2333594-f27f-42b8-9323-379ad35cdb31]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527d37da-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:e1:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723295, 'reachable_time': 34222, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312903, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.852 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f24e824c-c185-46e0-a450-41066e95955a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.912 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614640.9121532, 69f6dd3a-7c99-4537-8173-ec79bc6336a9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.913 248514 INFO nova.compute.manager [-] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.923 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[88fe6e81-c271-45c2-b2c1-03b1c3fc9cd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.924 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527d37da-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.925 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.925 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap527d37da-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.927 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:55 np0005558241 NetworkManager[50376]: <info>  [1765614655.9277] manager: (tap527d37da-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Dec 13 03:30:55 np0005558241 kernel: tap527d37da-e0: entered promiscuous mode
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.929 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap527d37da-e0, col_values=(('external_ids', {'iface-id': '9bf9e6e9-c189-485c-8803-c58be1ee6099'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.930 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:55Z|00623|binding|INFO|Releasing lport 9bf9e6e9-c189-485c-8803-c58be1ee6099 from this chassis (sb_readonly=0)
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.947 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.949 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/527d37da-eda0-4bfe-9f1d-310d58024d5d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/527d37da-eda0-4bfe-9f1d-310d58024d5d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.950 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[519cd626-7cf1-4612-9cdf-76c08fce6784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.951 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-527d37da-eda0-4bfe-9f1d-310d58024d5d
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/527d37da-eda0-4bfe-9f1d-310d58024d5d.pid.haproxy
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 527d37da-eda0-4bfe-9f1d-310d58024d5d
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:55.953 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'env', 'PROCESS_TAG=haproxy-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/527d37da-eda0-4bfe-9f1d-310d58024d5d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:30:55 np0005558241 nova_compute[248510]: 2025-12-13 08:30:55.965 248514 DEBUG nova.compute.manager [None req-054332cc-abf0-4c53-9da6-5b4963f92f2a - - - - - -] [instance: 69f6dd3a-7c99-4537-8173-ec79bc6336a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:30:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1551382908' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.196 248514 DEBUG oslo_concurrency.processutils [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.232 248514 DEBUG oslo_concurrency.processutils [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.291 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.292 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.292 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:30:56 np0005558241 podman[312972]: 2025-12-13 08:30:56.342984793 +0000 UTC m=+0.054353802 container create 545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 03:30:56 np0005558241 systemd[1]: Started libpod-conmon-545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae.scope.
Dec 13 03:30:56 np0005558241 podman[312972]: 2025-12-13 08:30:56.31608521 +0000 UTC m=+0.027454249 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:30:56 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:30:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c47f5a0e1eb0f634e298d3a1de4c4aaa7f62306ce8a9171dbd54df477b2f667/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.428 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614656.428335, b4a46029-fa6e-4566-9187-16b9d1bfd6d6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.429 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] VM Started (Lifecycle Event)#033[00m
Dec 13 03:30:56 np0005558241 podman[312972]: 2025-12-13 08:30:56.43525409 +0000 UTC m=+0.146623139 container init 545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 03:30:56 np0005558241 podman[312972]: 2025-12-13 08:30:56.441294259 +0000 UTC m=+0.152663268 container start 545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 03:30:56 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[313021]: [NOTICE]   (313034) : New worker (313036) forked
Dec 13 03:30:56 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[313021]: [NOTICE]   (313034) : Loading success.
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.466 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.472 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614656.4285111, b4a46029-fa6e-4566-9187-16b9d1bfd6d6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.473 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.512 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.514 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.559 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:30:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:30:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1506052271' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.838 248514 DEBUG oslo_concurrency.processutils [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.840 248514 DEBUG nova.virt.libvirt.vif [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.840 248514 DEBUG nova.network.os_vif_util [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.841 248514 DEBUG nova.network.os_vif_util [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.843 248514 DEBUG nova.objects.instance [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.982 248514 DEBUG nova.virt.libvirt.driver [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:30:56 np0005558241 nova_compute[248510]:  <uuid>3ced27d6-a2a8-4ce3-a7e7-494270418542</uuid>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:  <name>instance-0000003c</name>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerActionsTestJSON-server-286397061</nova:name>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:30:55</nova:creationTime>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:30:56 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:        <nova:user uuid="4d19f7d5ece8482dab03e4bc02fdf410">tempest-ServerActionsTestJSON-473360614-project-member</nova:user>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:        <nova:project uuid="c6718df841f0471ba710516400f126fa">tempest-ServerActionsTestJSON-473360614</nova:project>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:        <nova:port uuid="b5058a06-7109-4ac0-96d8-7562e66bee25">
Dec 13 03:30:56 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <entry name="serial">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <entry name="uuid">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk">
Dec 13 03:30:56 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:30:56 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config">
Dec 13 03:30:56 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:30:56 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:1f:d1:eb"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <target dev="tapb5058a06-71"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/console.log" append="off"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <input type="keyboard" bus="usb"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:30:56 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:30:56 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:30:56 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:30:56 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.984 248514 DEBUG nova.virt.libvirt.driver [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.984 248514 DEBUG nova.virt.libvirt.driver [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.985 248514 DEBUG nova.virt.libvirt.vif [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:30:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.985 248514 DEBUG nova.network.os_vif_util [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.986 248514 DEBUG nova.network.os_vif_util [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.986 248514 DEBUG os_vif [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.987 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.987 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.988 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.990 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.990 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5058a06-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.991 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5058a06-71, col_values=(('external_ids', {'iface-id': 'b5058a06-7109-4ac0-96d8-7562e66bee25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:d1:eb', 'vm-uuid': '3ced27d6-a2a8-4ce3-a7e7-494270418542'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.992 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:56 np0005558241 NetworkManager[50376]: <info>  [1765614656.9946] manager: (tapb5058a06-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Dec 13 03:30:56 np0005558241 nova_compute[248510]: 2025-12-13 08:30:56.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:30:57 np0005558241 nova_compute[248510]: 2025-12-13 08:30:57.003 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:57 np0005558241 nova_compute[248510]: 2025-12-13 08:30:57.005 248514 INFO os_vif [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')#033[00m
Dec 13 03:30:57 np0005558241 kernel: tapb5058a06-71: entered promiscuous mode
Dec 13 03:30:57 np0005558241 NetworkManager[50376]: <info>  [1765614657.0916] manager: (tapb5058a06-71): new Tun device (/org/freedesktop/NetworkManager/Devices/279)
Dec 13 03:30:57 np0005558241 systemd-udevd[312872]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:30:57 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:57Z|00624|binding|INFO|Claiming lport b5058a06-7109-4ac0-96d8-7562e66bee25 for this chassis.
Dec 13 03:30:57 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:57Z|00625|binding|INFO|b5058a06-7109-4ac0-96d8-7562e66bee25: Claiming fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 03:30:57 np0005558241 nova_compute[248510]: 2025-12-13 08:30:57.092 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:57 np0005558241 NetworkManager[50376]: <info>  [1765614657.1046] device (tapb5058a06-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:30:57 np0005558241 NetworkManager[50376]: <info>  [1765614657.1056] device (tapb5058a06-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:30:57 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:57Z|00626|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 ovn-installed in OVS
Dec 13 03:30:57 np0005558241 nova_compute[248510]: 2025-12-13 08:30:57.110 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:57 np0005558241 nova_compute[248510]: 2025-12-13 08:30:57.113 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:57 np0005558241 systemd-machined[210538]: New machine qemu-77-instance-0000003c.
Dec 13 03:30:57 np0005558241 systemd[1]: Started Virtual Machine qemu-77-instance-0000003c.
Dec 13 03:30:57 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:57Z|00627|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 up in Southbound
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.297 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.299 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 bound to our chassis#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.302 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.322 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[843baf84-e8c3-4e5c-aef9-faa9776fadb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.324 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43ee8730-a1 in ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.326 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43ee8730-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.326 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3edbb4a4-bcc2-421c-8664-3333b050ceba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.327 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[89186f67-2c66-41f8-a61b-3d3c69dd8e55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.342 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[cb4919fb-1747-41a2-8647-a1bdd3989649]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.364 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cbae5e66-4230-472b-a7d5-5360347799ff]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.394 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fa346783-1f2f-4ac0-8e89-df6f52e61b42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.400 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fe80c21d-9e37-4c0d-a8d1-4647df2650aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:57 np0005558241 NetworkManager[50376]: <info>  [1765614657.4020] manager: (tap43ee8730-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/280)
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.441 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f306b98a-12a2-4d98-94ca-f7e80128524e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.445 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4b19776f-ecc5-4886-9c3e-1105c0bf44a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:57 np0005558241 NetworkManager[50376]: <info>  [1765614657.4729] device (tap43ee8730-a0): carrier: link connected
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.480 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[cf8bec8b-db7d-4535-990c-6166bf8ca1b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.499 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c6e035-cfd3-4f8e-b558-7286f8e5cdf0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723468, 'reachable_time': 38411, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313079, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.518 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ea25c2f4-0a7a-4ddf-9602-4a66ebb44c6e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:bbcd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 723468, 'tstamp': 723468}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313080, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.535 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef2fdff-fd66-466d-b696-7982373a888c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723468, 'reachable_time': 38411, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313081, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.571 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8072db-7e0f-4932-839b-8df6c3df91e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.641 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e6aaa3-f081-4ba1-a3ef-0e73a95e2790]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.642 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.643 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.643 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:57 np0005558241 NetworkManager[50376]: <info>  [1765614657.6463] manager: (tap43ee8730-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/281)
Dec 13 03:30:57 np0005558241 kernel: tap43ee8730-a0: entered promiscuous mode
Dec 13 03:30:57 np0005558241 nova_compute[248510]: 2025-12-13 08:30:57.647 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.649 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:30:57 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:57Z|00628|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 03:30:57 np0005558241 nova_compute[248510]: 2025-12-13 08:30:57.650 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:57 np0005558241 nova_compute[248510]: 2025-12-13 08:30:57.666 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.667 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.669 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[98a7940e-fe2b-46c4-a42e-a7b8edb4399c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.670 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:30:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:30:57.671 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'env', 'PROCESS_TAG=haproxy-43ee8730-a174-4859-9c95-b3ad6dc68839', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43ee8730-a174-4859-9c95-b3ad6dc68839.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:30:57 np0005558241 nova_compute[248510]: 2025-12-13 08:30:57.984 248514 DEBUG nova.compute.manager [req-184febec-a9e5-455c-b3bf-c830df7918f4 req-59990f42-7d4f-4d30-adc0-5cb802313dba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Received event network-vif-plugged-0cba7327-b5c5-419e-84f1-4264a7dedf3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:57 np0005558241 nova_compute[248510]: 2025-12-13 08:30:57.985 248514 DEBUG oslo_concurrency.lockutils [req-184febec-a9e5-455c-b3bf-c830df7918f4 req-59990f42-7d4f-4d30-adc0-5cb802313dba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:57 np0005558241 nova_compute[248510]: 2025-12-13 08:30:57.985 248514 DEBUG oslo_concurrency.lockutils [req-184febec-a9e5-455c-b3bf-c830df7918f4 req-59990f42-7d4f-4d30-adc0-5cb802313dba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:57 np0005558241 nova_compute[248510]: 2025-12-13 08:30:57.985 248514 DEBUG oslo_concurrency.lockutils [req-184febec-a9e5-455c-b3bf-c830df7918f4 req-59990f42-7d4f-4d30-adc0-5cb802313dba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:57 np0005558241 nova_compute[248510]: 2025-12-13 08:30:57.986 248514 DEBUG nova.compute.manager [req-184febec-a9e5-455c-b3bf-c830df7918f4 req-59990f42-7d4f-4d30-adc0-5cb802313dba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Processing event network-vif-plugged-0cba7327-b5c5-419e-84f1-4264a7dedf3c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:30:57 np0005558241 nova_compute[248510]: 2025-12-13 08:30:57.987 248514 DEBUG nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:30:57 np0005558241 nova_compute[248510]: 2025-12-13 08:30:57.993 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:30:57 np0005558241 nova_compute[248510]: 2025-12-13 08:30:57.996 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614657.9943354, b4a46029-fa6e-4566-9187-16b9d1bfd6d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:30:57 np0005558241 nova_compute[248510]: 2025-12-13 08:30:57.996 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.001 248514 INFO nova.virt.libvirt.driver [-] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Instance spawned successfully.#033[00m
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.002 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.017 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.021 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.030 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.031 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.032 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.032 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.033 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.033 248514 DEBUG nova.virt.libvirt.driver [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.057 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.090 248514 INFO nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Took 10.60 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.091 248514 DEBUG nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:30:58 np0005558241 podman[313113]: 2025-12-13 08:30:58.096208272 +0000 UTC m=+0.057341776 container create 0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:30:58 np0005558241 systemd[1]: Started libpod-conmon-0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5.scope.
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.155 248514 INFO nova.compute.manager [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Took 11.94 seconds to build instance.#033[00m
Dec 13 03:30:58 np0005558241 podman[313113]: 2025-12-13 08:30:58.06369848 +0000 UTC m=+0.024832044 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:30:58 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:30:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2dff49460421801b39e16b895a11c4e399a618cd7fbd30a89711e20d6bd165c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:30:58 np0005558241 podman[313113]: 2025-12-13 08:30:58.186540721 +0000 UTC m=+0.147674245 container init 0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 13 03:30:58 np0005558241 podman[313113]: 2025-12-13 08:30:58.192705873 +0000 UTC m=+0.153839377 container start 0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.192 248514 DEBUG oslo_concurrency.lockutils [None req-74ca7b52-e487-4807-82ee-e1ebdb8f070a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.202 248514 DEBUG nova.compute.manager [req-acda5fa0-e168-4f93-b179-dc17aec93401 req-edd037e9-3674-4858-b2a0-2054c65dafaa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.202 248514 DEBUG oslo_concurrency.lockutils [req-acda5fa0-e168-4f93-b179-dc17aec93401 req-edd037e9-3674-4858-b2a0-2054c65dafaa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.202 248514 DEBUG oslo_concurrency.lockutils [req-acda5fa0-e168-4f93-b179-dc17aec93401 req-edd037e9-3674-4858-b2a0-2054c65dafaa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.203 248514 DEBUG oslo_concurrency.lockutils [req-acda5fa0-e168-4f93-b179-dc17aec93401 req-edd037e9-3674-4858-b2a0-2054c65dafaa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.203 248514 DEBUG nova.compute.manager [req-acda5fa0-e168-4f93-b179-dc17aec93401 req-edd037e9-3674-4858-b2a0-2054c65dafaa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.203 248514 WARNING nova.compute.manager [req-acda5fa0-e168-4f93-b179-dc17aec93401 req-edd037e9-3674-4858-b2a0-2054c65dafaa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state stopped and task_state powering-on.#033[00m
Dec 13 03:30:58 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[313163]: [NOTICE]   (313171) : New worker (313174) forked
Dec 13 03:30:58 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[313163]: [NOTICE]   (313171) : Loading success.
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.241 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 3ced27d6-a2a8-4ce3-a7e7-494270418542 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.242 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614658.2412708, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.243 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Resumed (Lifecycle Event)
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.250 248514 DEBUG nova.compute.manager [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.254 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance rebooted successfully.
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.255 248514 DEBUG nova.compute.manager [None req-70c8de00-f468-4a80-ab71-f4b8c9646ca6 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:30:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:58Z|00629|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 03:30:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:58Z|00630|binding|INFO|Releasing lport 9bf9e6e9-c189-485c-8803-c58be1ee6099 from this chassis (sb_readonly=0)
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.296 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.299 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.353 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] During sync_power_state the instance has a pending task (powering-on). Skip.
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.353 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614658.250274, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.353 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Started (Lifecycle Event)
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.372 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.404 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:30:58 np0005558241 nova_compute[248510]: 2025-12-13 08:30:58.408 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:30:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:58Z|00631|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 03:30:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:30:58Z|00632|binding|INFO|Releasing lport 9bf9e6e9-c189-485c-8803-c58be1ee6099 from this chassis (sb_readonly=0)
Dec 13 03:30:59 np0005558241 nova_compute[248510]: 2025-12-13 08:30:59.018 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:30:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2006: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Dec 13 03:30:59 np0005558241 nova_compute[248510]: 2025-12-13 08:30:59.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:31:00 np0005558241 nova_compute[248510]: 2025-12-13 08:31:00.220 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:31:00 np0005558241 nova_compute[248510]: 2025-12-13 08:31:00.272 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614645.2704365, bc7aabfd-0b89-4d02-8aff-29f1bc423621 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:31:00 np0005558241 nova_compute[248510]: 2025-12-13 08:31:00.272 248514 INFO nova.compute.manager [-] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] VM Stopped (Lifecycle Event)
Dec 13 03:31:00 np0005558241 nova_compute[248510]: 2025-12-13 08:31:00.294 248514 DEBUG nova.compute.manager [None req-7ec94b77-6436-4707-873c-e8c30ba18268 - - - - - -] [instance: bc7aabfd-0b89-4d02-8aff-29f1bc423621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:31:00 np0005558241 nova_compute[248510]: 2025-12-13 08:31:00.354 248514 DEBUG nova.compute.manager [req-4b623bfb-40d8-4cb5-b9d5-12a25ef82b66 req-f3012280-cbc3-4364-90d8-52fb6c48cd86 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:31:00 np0005558241 nova_compute[248510]: 2025-12-13 08:31:00.354 248514 DEBUG oslo_concurrency.lockutils [req-4b623bfb-40d8-4cb5-b9d5-12a25ef82b66 req-f3012280-cbc3-4364-90d8-52fb6c48cd86 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:31:00 np0005558241 nova_compute[248510]: 2025-12-13 08:31:00.355 248514 DEBUG oslo_concurrency.lockutils [req-4b623bfb-40d8-4cb5-b9d5-12a25ef82b66 req-f3012280-cbc3-4364-90d8-52fb6c48cd86 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:31:00 np0005558241 nova_compute[248510]: 2025-12-13 08:31:00.355 248514 DEBUG oslo_concurrency.lockutils [req-4b623bfb-40d8-4cb5-b9d5-12a25ef82b66 req-f3012280-cbc3-4364-90d8-52fb6c48cd86 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:31:00 np0005558241 nova_compute[248510]: 2025-12-13 08:31:00.355 248514 DEBUG nova.compute.manager [req-4b623bfb-40d8-4cb5-b9d5-12a25ef82b66 req-f3012280-cbc3-4364-90d8-52fb6c48cd86 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:31:00 np0005558241 nova_compute[248510]: 2025-12-13 08:31:00.355 248514 WARNING nova.compute.manager [req-4b623bfb-40d8-4cb5-b9d5-12a25ef82b66 req-f3012280-cbc3-4364-90d8-52fb6c48cd86 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.
Dec 13 03:31:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:31:00 np0005558241 nova_compute[248510]: 2025-12-13 08:31:00.520 248514 DEBUG nova.compute.manager [req-295f4f4a-70b9-4278-8ade-1fef7a71fb23 req-135c0b82-2dc7-4cfb-8b4a-c94d8e4001e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Received event network-vif-plugged-0cba7327-b5c5-419e-84f1-4264a7dedf3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:31:00 np0005558241 nova_compute[248510]: 2025-12-13 08:31:00.520 248514 DEBUG oslo_concurrency.lockutils [req-295f4f4a-70b9-4278-8ade-1fef7a71fb23 req-135c0b82-2dc7-4cfb-8b4a-c94d8e4001e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:31:00 np0005558241 nova_compute[248510]: 2025-12-13 08:31:00.522 248514 DEBUG oslo_concurrency.lockutils [req-295f4f4a-70b9-4278-8ade-1fef7a71fb23 req-135c0b82-2dc7-4cfb-8b4a-c94d8e4001e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:31:00 np0005558241 nova_compute[248510]: 2025-12-13 08:31:00.523 248514 DEBUG oslo_concurrency.lockutils [req-295f4f4a-70b9-4278-8ade-1fef7a71fb23 req-135c0b82-2dc7-4cfb-8b4a-c94d8e4001e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:31:00 np0005558241 nova_compute[248510]: 2025-12-13 08:31:00.523 248514 DEBUG nova.compute.manager [req-295f4f4a-70b9-4278-8ade-1fef7a71fb23 req-135c0b82-2dc7-4cfb-8b4a-c94d8e4001e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] No waiting events found dispatching network-vif-plugged-0cba7327-b5c5-419e-84f1-4264a7dedf3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:31:00 np0005558241 nova_compute[248510]: 2025-12-13 08:31:00.523 248514 WARNING nova.compute.manager [req-295f4f4a-70b9-4278-8ade-1fef7a71fb23 req-135c0b82-2dc7-4cfb-8b4a-c94d8e4001e0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Received unexpected event network-vif-plugged-0cba7327-b5c5-419e-84f1-4264a7dedf3c for instance with vm_state active and task_state None.
Dec 13 03:31:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 111 op/s
Dec 13 03:31:01 np0005558241 nova_compute[248510]: 2025-12-13 08:31:01.993 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:31:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2008: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 606 KiB/s wr, 142 op/s
Dec 13 03:31:03 np0005558241 nova_compute[248510]: 2025-12-13 08:31:03.926 248514 INFO nova.compute.manager [None req-8ebbe8f1-3ffd-4ff2-afca-5df4106c2d82 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Pausing
Dec 13 03:31:03 np0005558241 nova_compute[248510]: 2025-12-13 08:31:03.928 248514 DEBUG nova.objects.instance [None req-8ebbe8f1-3ffd-4ff2-afca-5df4106c2d82 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'flavor' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:31:03 np0005558241 nova_compute[248510]: 2025-12-13 08:31:03.959 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614663.95928, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:31:03 np0005558241 nova_compute[248510]: 2025-12-13 08:31:03.960 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Paused (Lifecycle Event)
Dec 13 03:31:03 np0005558241 nova_compute[248510]: 2025-12-13 08:31:03.963 248514 DEBUG nova.compute.manager [None req-8ebbe8f1-3ffd-4ff2-afca-5df4106c2d82 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:31:03 np0005558241 nova_compute[248510]: 2025-12-13 08:31:03.988 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:31:03 np0005558241 nova_compute[248510]: 2025-12-13 08:31:03.993 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:31:04 np0005558241 nova_compute[248510]: 2025-12-13 08:31:04.025 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] During sync_power_state the instance has a pending task (pausing). Skip.
Dec 13 03:31:04 np0005558241 nova_compute[248510]: 2025-12-13 08:31:04.950 248514 INFO nova.compute.manager [None req-dec8e43c-f14d-4313-a70c-27f775a22bcd 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Unpausing
Dec 13 03:31:04 np0005558241 nova_compute[248510]: 2025-12-13 08:31:04.952 248514 DEBUG nova.objects.instance [None req-dec8e43c-f14d-4313-a70c-27f775a22bcd 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'flavor' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:31:05 np0005558241 nova_compute[248510]: 2025-12-13 08:31:05.000 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614665.0005004, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:31:05 np0005558241 virtqemud[248808]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 13 03:31:05 np0005558241 nova_compute[248510]: 2025-12-13 08:31:05.001 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Resumed (Lifecycle Event)
Dec 13 03:31:05 np0005558241 virtqemud[248808]: hostname: compute-0
Dec 13 03:31:05 np0005558241 virtqemud[248808]: argument unsupported: QEMU guest agent is not configured
Dec 13 03:31:05 np0005558241 nova_compute[248510]: 2025-12-13 08:31:05.005 248514 DEBUG nova.virt.libvirt.guest [None req-dec8e43c-f14d-4313-a70c-27f775a22bcd 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 13 03:31:05 np0005558241 nova_compute[248510]: 2025-12-13 08:31:05.005 248514 DEBUG nova.compute.manager [None req-dec8e43c-f14d-4313-a70c-27f775a22bcd 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:31:05 np0005558241 nova_compute[248510]: 2025-12-13 08:31:05.039 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:31:05 np0005558241 nova_compute[248510]: 2025-12-13 08:31:05.042 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:31:05 np0005558241 nova_compute[248510]: 2025-12-13 08:31:05.222 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:31:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:31:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2009: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 145 op/s
Dec 13 03:31:06 np0005558241 nova_compute[248510]: 2025-12-13 08:31:06.997 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:31:07 np0005558241 nova_compute[248510]: 2025-12-13 08:31:07.269 248514 DEBUG nova.compute.manager [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:31:07 np0005558241 nova_compute[248510]: 2025-12-13 08:31:07.326 248514 INFO nova.compute.manager [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] instance snapshotting
Dec 13 03:31:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2010: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 144 op/s
Dec 13 03:31:07 np0005558241 nova_compute[248510]: 2025-12-13 08:31:07.589 248514 INFO nova.virt.libvirt.driver [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Beginning live snapshot process
Dec 13 03:31:07 np0005558241 nova_compute[248510]: 2025-12-13 08:31:07.737 248514 DEBUG nova.virt.libvirt.imagebackend [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 03:31:07 np0005558241 nova_compute[248510]: 2025-12-13 08:31:07.972 248514 DEBUG nova.storage.rbd_utils [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] creating snapshot(229866037550404a8011c7271a534aa2) on rbd image(b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 03:31:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:31:09
Dec 13 03:31:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:31:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:31:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'backups', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'images', '.rgw.root']
Dec 13 03:31:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:31:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Dec 13 03:31:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Dec 13 03:31:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2011: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 146 op/s
Dec 13 03:31:09 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Dec 13 03:31:09 np0005558241 nova_compute[248510]: 2025-12-13 08:31:09.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:31:09 np0005558241 nova_compute[248510]: 2025-12-13 08:31:09.961 248514 DEBUG nova.storage.rbd_utils [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] cloning vms/b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk@229866037550404a8011c7271a534aa2 to images/26afe0db-dea0-4b7c-8b57-f424054157ea clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 03:31:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:31:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:31:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:31:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:31:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:31:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:31:10 np0005558241 nova_compute[248510]: 2025-12-13 08:31:10.225 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:31:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:31:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:31:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:31:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:31:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:31:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:31:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:31:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:31:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:31:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:31:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:31:10 np0005558241 nova_compute[248510]: 2025-12-13 08:31:10.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:31:11 np0005558241 nova_compute[248510]: 2025-12-13 08:31:11.087 248514 DEBUG nova.storage.rbd_utils [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] flattening images/26afe0db-dea0-4b7c-8b57-f424054157ea flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 03:31:11 np0005558241 nova_compute[248510]: 2025-12-13 08:31:11.508 248514 DEBUG nova.storage.rbd_utils [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] removing snapshot(229866037550404a8011c7271a534aa2) on rbd image(b4a46029-fa6e-4566-9187-16b9d1bfd6d6_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 03:31:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2013: 321 pgs: 321 active+clean; 169 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 204 B/s wr, 98 op/s
Dec 13 03:31:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Dec 13 03:31:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Dec 13 03:31:11 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Dec 13 03:31:11 np0005558241 nova_compute[248510]: 2025-12-13 08:31:11.666 248514 DEBUG nova.storage.rbd_utils [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] creating snapshot(snap) on rbd image(26afe0db-dea0-4b7c-8b57-f424054157ea) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 03:31:12 np0005558241 nova_compute[248510]: 2025-12-13 08:31:12.000 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:31:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Dec 13 03:31:12 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:12Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 03:31:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Dec 13 03:31:12 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Dec 13 03:31:12 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 26afe0db-dea0-4b7c-8b57-f424054157ea could not be found.
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     image = self._client.call(
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 26afe0db-dea0-4b7c-8b57-f424054157ea
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver 
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver 
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     image = self._client.call(
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 26afe0db-dea0-4b7c-8b57-f424054157ea could not be found.
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.015 248514 ERROR nova.virt.libvirt.driver #033[00m
Dec 13 03:31:13 np0005558241 nova_compute[248510]: 2025-12-13 08:31:13.054 248514 DEBUG nova.storage.rbd_utils [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] removing snapshot(snap) on rbd image(26afe0db-dea0-4b7c-8b57-f424054157ea) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec 13 03:31:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2016: 321 pgs: 321 active+clean; 215 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 205 op/s
Dec 13 03:31:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:13Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:e4:66 10.100.0.10
Dec 13 03:31:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:13Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:e4:66 10.100.0.10
Dec 13 03:31:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Dec 13 03:31:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Dec 13 03:31:13 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Dec 13 03:31:14 np0005558241 nova_compute[248510]: 2025-12-13 08:31:14.481 248514 WARNING nova.compute.manager [None req-d4620be9-ef18-4de9-a45e-fd6d6d2c6460 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Image not found during snapshot: nova.exception.ImageNotFound: Image 26afe0db-dea0-4b7c-8b57-f424054157ea could not be found.#033[00m
Dec 13 03:31:15 np0005558241 nova_compute[248510]: 2025-12-13 08:31:15.228 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:31:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2018: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 227 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 7.9 MiB/s wr, 364 op/s
Dec 13 03:31:15 np0005558241 nova_compute[248510]: 2025-12-13 08:31:15.999 248514 DEBUG oslo_concurrency.lockutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:16 np0005558241 nova_compute[248510]: 2025-12-13 08:31:16.000 248514 DEBUG oslo_concurrency.lockutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:16 np0005558241 nova_compute[248510]: 2025-12-13 08:31:16.000 248514 DEBUG oslo_concurrency.lockutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:16 np0005558241 nova_compute[248510]: 2025-12-13 08:31:16.001 248514 DEBUG oslo_concurrency.lockutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:16 np0005558241 nova_compute[248510]: 2025-12-13 08:31:16.001 248514 DEBUG oslo_concurrency.lockutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:16 np0005558241 nova_compute[248510]: 2025-12-13 08:31:16.002 248514 INFO nova.compute.manager [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Terminating instance#033[00m
Dec 13 03:31:16 np0005558241 nova_compute[248510]: 2025-12-13 08:31:16.003 248514 DEBUG nova.compute.manager [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:31:16 np0005558241 nova_compute[248510]: 2025-12-13 08:31:16.414 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:16 np0005558241 nova_compute[248510]: 2025-12-13 08:31:16.937 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:17 np0005558241 nova_compute[248510]: 2025-12-13 08:31:17.001 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2019: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 227 MiB data, 612 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 7.8 MiB/s wr, 354 op/s
Dec 13 03:31:17 np0005558241 kernel: tap0cba7327-b5 (unregistering): left promiscuous mode
Dec 13 03:31:17 np0005558241 NetworkManager[50376]: <info>  [1765614677.6736] device (tap0cba7327-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:31:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:17Z|00633|binding|INFO|Releasing lport 0cba7327-b5c5-419e-84f1-4264a7dedf3c from this chassis (sb_readonly=0)
Dec 13 03:31:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:17Z|00634|binding|INFO|Setting lport 0cba7327-b5c5-419e-84f1-4264a7dedf3c down in Southbound
Dec 13 03:31:17 np0005558241 nova_compute[248510]: 2025-12-13 08:31:17.683 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:17Z|00635|binding|INFO|Removing iface tap0cba7327-b5 ovn-installed in OVS
Dec 13 03:31:17 np0005558241 nova_compute[248510]: 2025-12-13 08:31:17.685 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:17 np0005558241 nova_compute[248510]: 2025-12-13 08:31:17.728 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:17 np0005558241 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000043.scope: Deactivated successfully.
Dec 13 03:31:17 np0005558241 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000043.scope: Consumed 13.775s CPU time.
Dec 13 03:31:17 np0005558241 systemd-machined[210538]: Machine qemu-76-instance-00000043 terminated.
Dec 13 03:31:17 np0005558241 nova_compute[248510]: 2025-12-13 08:31:17.856 248514 INFO nova.virt.libvirt.driver [-] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Instance destroyed successfully.#033[00m
Dec 13 03:31:17 np0005558241 nova_compute[248510]: 2025-12-13 08:31:17.856 248514 DEBUG nova.objects.instance [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lazy-loading 'resources' on Instance uuid b4a46029-fa6e-4566-9187-16b9d1bfd6d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:31:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:18.002 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:e4:66 10.100.0.10'], port_security=['fa:16:3e:2e:e4:66 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b4a46029-fa6e-4566-9187-16b9d1bfd6d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2de328b46a6e4f588f5e2a254db7f4ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b109409-2f17-43b9-91ce-3ee865f0de12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6896dc77-88b0-4868-833d-01241883d6af, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0cba7327-b5c5-419e-84f1-4264a7dedf3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:31:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:18.004 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0cba7327-b5c5-419e-84f1-4264a7dedf3c in datapath 527d37da-eda0-4bfe-9f1d-310d58024d5d unbound from our chassis#033[00m
Dec 13 03:31:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:18.008 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 527d37da-eda0-4bfe-9f1d-310d58024d5d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:31:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:18.010 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c8bd7968-67d4-4831-b429-942786ba3aaf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:18.011 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d namespace which is not needed anymore#033[00m
Dec 13 03:31:18 np0005558241 nova_compute[248510]: 2025-12-13 08:31:18.067 248514 DEBUG nova.virt.libvirt.vif [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:30:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-729072340',display_name='tempest-ImagesOneServerNegativeTestJSON-server-729072340',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-729072340',id=67,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2de328b46a6e4f588f5e2a254db7f4ef',ramdisk_id='',reservation_id='r-vhrthntm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1826994500',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:31:14Z,user_data=None,user_id='91d0d3efedc943b48ad0fc4295b6fc7c',uuid=b4a46029-fa6e-4566-9187-16b9d1bfd6d6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:31:18 np0005558241 nova_compute[248510]: 2025-12-13 08:31:18.068 248514 DEBUG nova.network.os_vif_util [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converting VIF {"id": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "address": "fa:16:3e:2e:e4:66", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0cba7327-b5", "ovs_interfaceid": "0cba7327-b5c5-419e-84f1-4264a7dedf3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:31:18 np0005558241 nova_compute[248510]: 2025-12-13 08:31:18.069 248514 DEBUG nova.network.os_vif_util [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:e4:66,bridge_name='br-int',has_traffic_filtering=True,id=0cba7327-b5c5-419e-84f1-4264a7dedf3c,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cba7327-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:31:18 np0005558241 nova_compute[248510]: 2025-12-13 08:31:18.069 248514 DEBUG os_vif [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:e4:66,bridge_name='br-int',has_traffic_filtering=True,id=0cba7327-b5c5-419e-84f1-4264a7dedf3c,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cba7327-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:31:18 np0005558241 nova_compute[248510]: 2025-12-13 08:31:18.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:18 np0005558241 nova_compute[248510]: 2025-12-13 08:31:18.071 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cba7327-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:18 np0005558241 nova_compute[248510]: 2025-12-13 08:31:18.074 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:18 np0005558241 nova_compute[248510]: 2025-12-13 08:31:18.076 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:18 np0005558241 nova_compute[248510]: 2025-12-13 08:31:18.079 248514 INFO os_vif [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:e4:66,bridge_name='br-int',has_traffic_filtering=True,id=0cba7327-b5c5-419e-84f1-4264a7dedf3c,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0cba7327-b5')#033[00m
Dec 13 03:31:19 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[313021]: [NOTICE]   (313034) : haproxy version is 2.8.14-c23fe91
Dec 13 03:31:19 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[313021]: [NOTICE]   (313034) : path to executable is /usr/sbin/haproxy
Dec 13 03:31:19 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[313021]: [WARNING]  (313034) : Exiting Master process...
Dec 13 03:31:19 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[313021]: [WARNING]  (313034) : Exiting Master process...
Dec 13 03:31:19 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[313021]: [ALERT]    (313034) : Current worker (313036) exited with code 143 (Terminated)
Dec 13 03:31:19 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[313021]: [WARNING]  (313034) : All workers exited. Exiting... (0)
Dec 13 03:31:19 np0005558241 systemd[1]: libpod-545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae.scope: Deactivated successfully.
Dec 13 03:31:19 np0005558241 podman[313403]: 2025-12-13 08:31:19.1899885 +0000 UTC m=+1.050889100 container died 545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 03:31:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2020: 321 pgs: 321 active+clean; 202 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.9 MiB/s wr, 294 op/s
Dec 13 03:31:20 np0005558241 nova_compute[248510]: 2025-12-13 08:31:20.232 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:31:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Dec 13 03:31:20 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae-userdata-shm.mount: Deactivated successfully.
Dec 13 03:31:20 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2c47f5a0e1eb0f634e298d3a1de4c4aaa7f62306ce8a9171dbd54df477b2f667-merged.mount: Deactivated successfully.
Dec 13 03:31:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015243929236818743 of space, bias 1.0, pg target 0.4573178771045623 quantized to 32 (current 32)
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006673486932783207 of space, bias 1.0, pg target 0.2002046079834962 quantized to 32 (current 32)
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.087537217654078e-07 of space, bias 4.0, pg target 0.0007305044661184894 quantized to 16 (current 32)
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:31:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:31:21 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Dec 13 03:31:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2022: 321 pgs: 321 active+clean; 202 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.5 MiB/s wr, 147 op/s
Dec 13 03:31:21 np0005558241 podman[313403]: 2025-12-13 08:31:21.873155453 +0000 UTC m=+3.734056023 container cleanup 545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 03:31:21 np0005558241 systemd[1]: libpod-conmon-545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae.scope: Deactivated successfully.
Dec 13 03:31:21 np0005558241 nova_compute[248510]: 2025-12-13 08:31:21.998 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:21 np0005558241 nova_compute[248510]: 2025-12-13 08:31:21.998 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:22 np0005558241 nova_compute[248510]: 2025-12-13 08:31:22.021 248514 DEBUG nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:31:22 np0005558241 podman[313445]: 2025-12-13 08:31:22.435390515 +0000 UTC m=+0.536633601 container remove 545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:31:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:22.442 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[25a15ce8-afdf-4e33-ac72-3fc9e28cb993]: (4, ('Sat Dec 13 08:31:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d (545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae)\n545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae\nSat Dec 13 08:31:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d (545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae)\n545062fa3626d094a83f4ed171f241a694d30cd66992c85e3ff532eb6149d1ae\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:22.445 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[59d3cd0f-ea4b-4692-9256-498fe0ac1965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:22.446 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527d37da-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:22 np0005558241 nova_compute[248510]: 2025-12-13 08:31:22.449 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:22 np0005558241 kernel: tap527d37da-e0: left promiscuous mode
Dec 13 03:31:22 np0005558241 nova_compute[248510]: 2025-12-13 08:31:22.466 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:22.470 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9063e7a2-f4b1-4ff9-b688-0f36a6db8bf8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:22.487 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[563ecb1c-cbe4-44a9-a36e-b6b76409a169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:22.489 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b64858f9-e04f-4315-a5b5-fd9865b5a08b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:22.517 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6ffd95bb-5ca0-4f74-ae50-aea1a7e6f82b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723285, 'reachable_time': 33752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313458, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:22 np0005558241 systemd[1]: run-netns-ovnmeta\x2d527d37da\x2deda0\x2d4bfe\x2d9f1d\x2d310d58024d5d.mount: Deactivated successfully.
Dec 13 03:31:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:22.521 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:31:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:22.521 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[10ae18ed-cdf2-4c7f-93a4-5c2a38abda5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:22 np0005558241 nova_compute[248510]: 2025-12-13 08:31:22.566 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:22 np0005558241 nova_compute[248510]: 2025-12-13 08:31:22.566 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:22 np0005558241 nova_compute[248510]: 2025-12-13 08:31:22.576 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:31:22 np0005558241 nova_compute[248510]: 2025-12-13 08:31:22.576 248514 INFO nova.compute.claims [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:31:22 np0005558241 nova_compute[248510]: 2025-12-13 08:31:22.888 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:23 np0005558241 nova_compute[248510]: 2025-12-13 08:31:23.102 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:31:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4245757975' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:31:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2023: 321 pgs: 321 active+clean; 176 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 132 op/s
Dec 13 03:31:23 np0005558241 nova_compute[248510]: 2025-12-13 08:31:23.548 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.660s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:23 np0005558241 nova_compute[248510]: 2025-12-13 08:31:23.556 248514 DEBUG nova.compute.provider_tree [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:31:23 np0005558241 nova_compute[248510]: 2025-12-13 08:31:23.578 248514 DEBUG nova.scheduler.client.report [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:31:23 np0005558241 nova_compute[248510]: 2025-12-13 08:31:23.600 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:23 np0005558241 nova_compute[248510]: 2025-12-13 08:31:23.601 248514 DEBUG nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:31:23 np0005558241 nova_compute[248510]: 2025-12-13 08:31:23.666 248514 DEBUG nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:31:23 np0005558241 nova_compute[248510]: 2025-12-13 08:31:23.667 248514 DEBUG nova.network.neutron [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:31:23 np0005558241 nova_compute[248510]: 2025-12-13 08:31:23.702 248514 INFO nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:31:23 np0005558241 nova_compute[248510]: 2025-12-13 08:31:23.727 248514 DEBUG nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:31:23 np0005558241 nova_compute[248510]: 2025-12-13 08:31:23.829 248514 DEBUG nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:31:23 np0005558241 nova_compute[248510]: 2025-12-13 08:31:23.831 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:31:23 np0005558241 nova_compute[248510]: 2025-12-13 08:31:23.831 248514 INFO nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Creating image(s)#033[00m
Dec 13 03:31:23 np0005558241 nova_compute[248510]: 2025-12-13 08:31:23.858 248514 DEBUG nova.storage.rbd_utils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:23 np0005558241 nova_compute[248510]: 2025-12-13 08:31:23.886 248514 DEBUG nova.storage.rbd_utils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:23 np0005558241 nova_compute[248510]: 2025-12-13 08:31:23.926 248514 DEBUG nova.storage.rbd_utils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:23 np0005558241 nova_compute[248510]: 2025-12-13 08:31:23.948 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:23 np0005558241 nova_compute[248510]: 2025-12-13 08:31:23.994 248514 DEBUG nova.policy [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '93eec08d500a4f03afb3281e9899bd6a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '71e2453379684f0ca0563f8c370ea4a3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:31:24 np0005558241 podman[313540]: 2025-12-13 08:31:24.005585097 +0000 UTC m=+0.079793559 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:31:24 np0005558241 podman[313522]: 2025-12-13 08:31:24.020723801 +0000 UTC m=+0.101375452 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:31:24 np0005558241 podman[313538]: 2025-12-13 08:31:24.032470741 +0000 UTC m=+0.106105838 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:31:24 np0005558241 nova_compute[248510]: 2025-12-13 08:31:24.036 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:24 np0005558241 nova_compute[248510]: 2025-12-13 08:31:24.037 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:24 np0005558241 nova_compute[248510]: 2025-12-13 08:31:24.037 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:24 np0005558241 nova_compute[248510]: 2025-12-13 08:31:24.037 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:24 np0005558241 nova_compute[248510]: 2025-12-13 08:31:24.062 248514 DEBUG nova.storage.rbd_utils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:24 np0005558241 nova_compute[248510]: 2025-12-13 08:31:24.068 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 a9c6de9d-63c0-43a5-9d6e-be356e504837_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:24 np0005558241 nova_compute[248510]: 2025-12-13 08:31:24.897 248514 DEBUG nova.network.neutron [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Successfully created port: 7b3b1c0a-882e-4f33-a582-667d018090d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:31:25 np0005558241 nova_compute[248510]: 2025-12-13 08:31:25.277 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2024: 321 pgs: 321 active+clean; 130 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 17 KiB/s wr, 38 op/s
Dec 13 03:31:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:31:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Dec 13 03:31:25 np0005558241 nova_compute[248510]: 2025-12-13 08:31:25.836 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:25 np0005558241 nova_compute[248510]: 2025-12-13 08:31:25.837 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:25 np0005558241 nova_compute[248510]: 2025-12-13 08:31:25.881 248514 DEBUG nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:31:25 np0005558241 nova_compute[248510]: 2025-12-13 08:31:25.894 248514 DEBUG nova.network.neutron [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Successfully updated port: 7b3b1c0a-882e-4f33-a582-667d018090d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:31:25 np0005558241 nova_compute[248510]: 2025-12-13 08:31:25.926 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "refresh_cache-a9c6de9d-63c0-43a5-9d6e-be356e504837" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:31:25 np0005558241 nova_compute[248510]: 2025-12-13 08:31:25.927 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquired lock "refresh_cache-a9c6de9d-63c0-43a5-9d6e-be356e504837" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:31:25 np0005558241 nova_compute[248510]: 2025-12-13 08:31:25.927 248514 DEBUG nova.network.neutron [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:31:25 np0005558241 nova_compute[248510]: 2025-12-13 08:31:25.986 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:25 np0005558241 nova_compute[248510]: 2025-12-13 08:31:25.987 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:25 np0005558241 nova_compute[248510]: 2025-12-13 08:31:25.998 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:31:25 np0005558241 nova_compute[248510]: 2025-12-13 08:31:25.999 248514 INFO nova.compute.claims [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:31:26 np0005558241 nova_compute[248510]: 2025-12-13 08:31:26.020 248514 DEBUG nova.compute.manager [req-a7710e10-ea69-469f-9e95-8966a48eacb0 req-afe88da5-81c5-419d-8e9c-9cc7eb2d04ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-changed-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:31:26 np0005558241 nova_compute[248510]: 2025-12-13 08:31:26.020 248514 DEBUG nova.compute.manager [req-a7710e10-ea69-469f-9e95-8966a48eacb0 req-afe88da5-81c5-419d-8e9c-9cc7eb2d04ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Refreshing instance network info cache due to event network-changed-7b3b1c0a-882e-4f33-a582-667d018090d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:31:26 np0005558241 nova_compute[248510]: 2025-12-13 08:31:26.021 248514 DEBUG oslo_concurrency.lockutils [req-a7710e10-ea69-469f-9e95-8966a48eacb0 req-afe88da5-81c5-419d-8e9c-9cc7eb2d04ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-a9c6de9d-63c0-43a5-9d6e-be356e504837" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:31:26 np0005558241 nova_compute[248510]: 2025-12-13 08:31:26.140 248514 DEBUG nova.network.neutron [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:31:26 np0005558241 nova_compute[248510]: 2025-12-13 08:31:26.205 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Dec 13 03:31:26 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Dec 13 03:31:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:31:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2929692289' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:31:26 np0005558241 nova_compute[248510]: 2025-12-13 08:31:26.948 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.743s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:26 np0005558241 nova_compute[248510]: 2025-12-13 08:31:26.957 248514 DEBUG nova.compute.provider_tree [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.023 248514 DEBUG nova.scheduler.client.report [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.323 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 a9c6de9d-63c0-43a5-9d6e-be356e504837_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.380 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.381 248514 DEBUG nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.391 248514 DEBUG nova.storage.rbd_utils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] resizing rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:31:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2026: 321 pgs: 321 active+clean; 130 MiB data, 581 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 18 KiB/s wr, 23 op/s
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.562 248514 DEBUG nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.563 248514 DEBUG nova.network.neutron [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.590 248514 INFO nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.611 248514 DEBUG nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.741 248514 DEBUG nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.743 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.743 248514 INFO nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Creating image(s)#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.766 248514 DEBUG nova.storage.rbd_utils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.790 248514 DEBUG nova.storage.rbd_utils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.811 248514 DEBUG nova.storage.rbd_utils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.815 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.902 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.904 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.905 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.905 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.924 248514 DEBUG nova.storage.rbd_utils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:27 np0005558241 nova_compute[248510]: 2025-12-13 08:31:27.928 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.031 248514 DEBUG nova.objects.instance [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'migration_context' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.056 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.056 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Ensure instance console log exists: /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.057 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.057 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.057 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.106 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.163 248514 INFO nova.virt.libvirt.driver [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Deleting instance files /var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6_del#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.164 248514 INFO nova.virt.libvirt.driver [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Deletion of /var/lib/nova/instances/b4a46029-fa6e-4566-9187-16b9d1bfd6d6_del complete#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.224 248514 INFO nova.compute.manager [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Took 12.22 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.225 248514 DEBUG oslo.service.loopingcall [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.227 248514 DEBUG nova.compute.manager [-] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.227 248514 DEBUG nova.network.neutron [-] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.599 248514 DEBUG nova.network.neutron [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Updating instance_info_cache with network_info: [{"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.604 248514 DEBUG nova.policy [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5b310bdebec646949fad4ea1821b4c3f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.627 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Releasing lock "refresh_cache-a9c6de9d-63c0-43a5-9d6e-be356e504837" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.628 248514 DEBUG nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance network_info: |[{"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.628 248514 DEBUG oslo_concurrency.lockutils [req-a7710e10-ea69-469f-9e95-8966a48eacb0 req-afe88da5-81c5-419d-8e9c-9cc7eb2d04ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-a9c6de9d-63c0-43a5-9d6e-be356e504837" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.629 248514 DEBUG nova.network.neutron [req-a7710e10-ea69-469f-9e95-8966a48eacb0 req-afe88da5-81c5-419d-8e9c-9cc7eb2d04ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Refreshing network info cache for port 7b3b1c0a-882e-4f33-a582-667d018090d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.631 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Start _get_guest_xml network_info=[{"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.635 248514 WARNING nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.643 248514 DEBUG nova.virt.libvirt.host [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.643 248514 DEBUG nova.virt.libvirt.host [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.648 248514 DEBUG nova.virt.libvirt.host [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.649 248514 DEBUG nova.virt.libvirt.host [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.649 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.649 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.650 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.650 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.650 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.650 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.650 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.651 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.651 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.651 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.651 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.651 248514 DEBUG nova.virt.hardware [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.655 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.694 248514 DEBUG oslo_concurrency.lockutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.695 248514 DEBUG oslo_concurrency.lockutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.696 248514 INFO nova.compute.manager [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Rebooting instance#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.721 248514 DEBUG oslo_concurrency.lockutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.722 248514 DEBUG oslo_concurrency.lockutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.722 248514 DEBUG nova.network.neutron [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.858 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.930s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:28 np0005558241 nova_compute[248510]: 2025-12-13 08:31:28.936 248514 DEBUG nova.storage.rbd_utils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] resizing rbd image ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.014 248514 DEBUG nova.objects.instance [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'migration_context' on Instance uuid ce9adb21-8832-4d3e-867e-b0b49bdb6850 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.033 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.034 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Ensure instance console log exists: /var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.034 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.035 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.035 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:31:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2468583397' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.226 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.250 248514 DEBUG nova.storage.rbd_utils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.254 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2027: 321 pgs: 321 active+clean; 157 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.9 MiB/s wr, 60 op/s
Dec 13 03:31:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:31:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4132604061' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.895 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.897 248514 DEBUG nova.virt.libvirt.vif [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:31:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1742357064',display_name='tempest-ServerRescueTestJSON-server-1742357064',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1742357064',id=68,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71e2453379684f0ca0563f8c370ea4a3',ramdisk_id='',reservation_id='r-hpc6t4lx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1425963100',owner_user_name='tempest-ServerRescueTestJSON-14
25963100-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:31:23Z,user_data=None,user_id='93eec08d500a4f03afb3281e9899bd6a',uuid=a9c6de9d-63c0-43a5-9d6e-be356e504837,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.897 248514 DEBUG nova.network.os_vif_util [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converting VIF {"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.898 248514 DEBUG nova.network.os_vif_util [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:56:12,bridge_name='br-int',has_traffic_filtering=True,id=7b3b1c0a-882e-4f33-a582-667d018090d4,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3b1c0a-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.899 248514 DEBUG nova.objects.instance [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.921 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:31:29 np0005558241 nova_compute[248510]:  <uuid>a9c6de9d-63c0-43a5-9d6e-be356e504837</uuid>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:  <name>instance-00000044</name>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerRescueTestJSON-server-1742357064</nova:name>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:31:28</nova:creationTime>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:31:29 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:        <nova:user uuid="93eec08d500a4f03afb3281e9899bd6a">tempest-ServerRescueTestJSON-1425963100-project-member</nova:user>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:        <nova:project uuid="71e2453379684f0ca0563f8c370ea4a3">tempest-ServerRescueTestJSON-1425963100</nova:project>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:        <nova:port uuid="7b3b1c0a-882e-4f33-a582-667d018090d4">
Dec 13 03:31:29 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <entry name="serial">a9c6de9d-63c0-43a5-9d6e-be356e504837</entry>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <entry name="uuid">a9c6de9d-63c0-43a5-9d6e-be356e504837</entry>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/a9c6de9d-63c0-43a5-9d6e-be356e504837_disk">
Dec 13 03:31:29 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:31:29 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config">
Dec 13 03:31:29 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:31:29 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:41:56:12"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <target dev="tap7b3b1c0a-88"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/console.log" append="off"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:31:29 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:31:29 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:31:29 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:31:29 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.922 248514 DEBUG nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Preparing to wait for external event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.922 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.922 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.922 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.923 248514 DEBUG nova.virt.libvirt.vif [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:31:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1742357064',display_name='tempest-ServerRescueTestJSON-server-1742357064',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1742357064',id=68,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71e2453379684f0ca0563f8c370ea4a3',ramdisk_id='',reservation_id='r-hpc6t4lx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1425963100',owner_user_name='tempest-ServerRescueTestJSON-1425963100-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:31:23Z,user_data=None,user_id='93eec08d500a4f03afb3281e9899bd6a',uuid=a9c6de9d-63c0-43a5-9d6e-be356e504837,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.923 248514 DEBUG nova.network.os_vif_util [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converting VIF {"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.924 248514 DEBUG nova.network.os_vif_util [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:56:12,bridge_name='br-int',has_traffic_filtering=True,id=7b3b1c0a-882e-4f33-a582-667d018090d4,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3b1c0a-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.924 248514 DEBUG os_vif [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:56:12,bridge_name='br-int',has_traffic_filtering=True,id=7b3b1c0a-882e-4f33-a582-667d018090d4,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3b1c0a-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.925 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.926 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.926 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.930 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.930 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b3b1c0a-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.931 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7b3b1c0a-88, col_values=(('external_ids', {'iface-id': '7b3b1c0a-882e-4f33-a582-667d018090d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:56:12', 'vm-uuid': 'a9c6de9d-63c0-43a5-9d6e-be356e504837'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.933 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:29 np0005558241 NetworkManager[50376]: <info>  [1765614689.9343] manager: (tap7b3b1c0a-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.935 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.939 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:29 np0005558241 nova_compute[248510]: 2025-12-13 08:31:29.940 248514 INFO os_vif [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:56:12,bridge_name='br-int',has_traffic_filtering=True,id=7b3b1c0a-882e-4f33-a582-667d018090d4,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3b1c0a-88')#033[00m
Dec 13 03:31:30 np0005558241 nova_compute[248510]: 2025-12-13 08:31:30.003 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:31:30 np0005558241 nova_compute[248510]: 2025-12-13 08:31:30.003 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:31:30 np0005558241 nova_compute[248510]: 2025-12-13 08:31:30.004 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No VIF found with MAC fa:16:3e:41:56:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:31:30 np0005558241 nova_compute[248510]: 2025-12-13 08:31:30.004 248514 INFO nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Using config drive#033[00m
Dec 13 03:31:30 np0005558241 nova_compute[248510]: 2025-12-13 08:31:30.026 248514 DEBUG nova.storage.rbd_utils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:30.211 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:31:30 np0005558241 nova_compute[248510]: 2025-12-13 08:31:30.211 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:30.212 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:31:30 np0005558241 nova_compute[248510]: 2025-12-13 08:31:30.246 248514 DEBUG nova.network.neutron [-] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:31:30 np0005558241 nova_compute[248510]: 2025-12-13 08:31:30.268 248514 INFO nova.compute.manager [-] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Took 2.04 seconds to deallocate network for instance.#033[00m
Dec 13 03:31:30 np0005558241 nova_compute[248510]: 2025-12-13 08:31:30.279 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:30 np0005558241 nova_compute[248510]: 2025-12-13 08:31:30.329 248514 DEBUG oslo_concurrency.lockutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:30 np0005558241 nova_compute[248510]: 2025-12-13 08:31:30.330 248514 DEBUG oslo_concurrency.lockutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:30 np0005558241 nova_compute[248510]: 2025-12-13 08:31:30.438 248514 INFO nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Creating config drive at /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config#033[00m
Dec 13 03:31:30 np0005558241 nova_compute[248510]: 2025-12-13 08:31:30.444 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbiqgkl39 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:30 np0005558241 nova_compute[248510]: 2025-12-13 08:31:30.481 248514 DEBUG oslo_concurrency.processutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:30 np0005558241 nova_compute[248510]: 2025-12-13 08:31:30.536 248514 DEBUG nova.compute.manager [req-e21598cd-f65b-4ac5-b937-d399f3a57a8d req-b283e31d-cf01-40a8-a2ce-51c39d3cfe9e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Received event network-vif-deleted-0cba7327-b5c5-419e-84f1-4264a7dedf3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:31:30 np0005558241 nova_compute[248510]: 2025-12-13 08:31:30.587 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbiqgkl39" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:30 np0005558241 nova_compute[248510]: 2025-12-13 08:31:30.613 248514 DEBUG nova.storage.rbd_utils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:30 np0005558241 nova_compute[248510]: 2025-12-13 08:31:30.617 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:31:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/76940838' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:31:31 np0005558241 nova_compute[248510]: 2025-12-13 08:31:31.060 248514 DEBUG oslo_concurrency.processutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:31 np0005558241 nova_compute[248510]: 2025-12-13 08:31:31.069 248514 DEBUG nova.compute.provider_tree [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:31:31 np0005558241 nova_compute[248510]: 2025-12-13 08:31:31.126 248514 DEBUG nova.scheduler.client.report [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:31:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:31:31 np0005558241 nova_compute[248510]: 2025-12-13 08:31:31.166 248514 DEBUG oslo_concurrency.lockutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:31 np0005558241 nova_compute[248510]: 2025-12-13 08:31:31.212 248514 INFO nova.scheduler.client.report [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Deleted allocations for instance b4a46029-fa6e-4566-9187-16b9d1bfd6d6#033[00m
Dec 13 03:31:31 np0005558241 nova_compute[248510]: 2025-12-13 08:31:31.253 248514 DEBUG nova.network.neutron [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:31:31 np0005558241 nova_compute[248510]: 2025-12-13 08:31:31.283 248514 DEBUG oslo_concurrency.lockutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:31:31 np0005558241 nova_compute[248510]: 2025-12-13 08:31:31.286 248514 DEBUG nova.compute.manager [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:31:31 np0005558241 nova_compute[248510]: 2025-12-13 08:31:31.294 248514 DEBUG oslo_concurrency.lockutils [None req-67d704dd-932c-4be4-8e26-1fc522345e03 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "b4a46029-fa6e-4566-9187-16b9d1bfd6d6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 15.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:31 np0005558241 nova_compute[248510]: 2025-12-13 08:31:31.492 248514 DEBUG nova.network.neutron [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Successfully created port: b2ee664d-ff99-4665-a5cc-70bd7aeb1546 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:31:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2028: 321 pgs: 321 active+clean; 201 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 3.7 MiB/s wr, 82 op/s
Dec 13 03:31:31 np0005558241 nova_compute[248510]: 2025-12-13 08:31:31.748 248514 DEBUG nova.network.neutron [req-a7710e10-ea69-469f-9e95-8966a48eacb0 req-afe88da5-81c5-419d-8e9c-9cc7eb2d04ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Updated VIF entry in instance network info cache for port 7b3b1c0a-882e-4f33-a582-667d018090d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:31:31 np0005558241 nova_compute[248510]: 2025-12-13 08:31:31.750 248514 DEBUG nova.network.neutron [req-a7710e10-ea69-469f-9e95-8966a48eacb0 req-afe88da5-81c5-419d-8e9c-9cc7eb2d04ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Updating instance_info_cache with network_info: [{"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:31:31 np0005558241 nova_compute[248510]: 2025-12-13 08:31:31.775 248514 DEBUG oslo_concurrency.lockutils [req-a7710e10-ea69-469f-9e95-8966a48eacb0 req-afe88da5-81c5-419d-8e9c-9cc7eb2d04ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-a9c6de9d-63c0-43a5-9d6e-be356e504837" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:31:32 np0005558241 kernel: tapb5058a06-71 (unregistering): left promiscuous mode
Dec 13 03:31:32 np0005558241 NetworkManager[50376]: <info>  [1765614692.5433] device (tapb5058a06-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:31:32 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:32Z|00636|binding|INFO|Releasing lport b5058a06-7109-4ac0-96d8-7562e66bee25 from this chassis (sb_readonly=0)
Dec 13 03:31:32 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:32Z|00637|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 down in Southbound
Dec 13 03:31:32 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:32Z|00638|binding|INFO|Removing iface tapb5058a06-71 ovn-installed in OVS
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.552 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:32.565 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:31:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:32.566 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 unbound from our chassis#033[00m
Dec 13 03:31:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:32.568 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43ee8730-a174-4859-9c95-b3ad6dc68839, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:31:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:32.569 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[788c9a22-fc91-4352-83e1-2693e482fda2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:32.569 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace which is not needed anymore#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.592 248514 DEBUG nova.network.neutron [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Successfully updated port: b2ee664d-ff99-4665-a5cc-70bd7aeb1546 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.594 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:32 np0005558241 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Dec 13 03:31:32 np0005558241 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000003c.scope: Consumed 14.208s CPU time.
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.612 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.613 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquired lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.613 248514 DEBUG nova.network.neutron [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:31:32 np0005558241 systemd-machined[210538]: Machine qemu-77-instance-0000003c terminated.
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.688 248514 DEBUG nova.compute.manager [req-46787674-02ec-40b6-8264-175180264f15 req-bf2c6479-b1a8-422b-8301-2963aacf6c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received event network-changed-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.688 248514 DEBUG nova.compute.manager [req-46787674-02ec-40b6-8264-175180264f15 req-bf2c6479-b1a8-422b-8301-2963aacf6c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Refreshing instance network info cache due to event network-changed-b2ee664d-ff99-4665-a5cc-70bd7aeb1546. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.689 248514 DEBUG oslo_concurrency.lockutils [req-46787674-02ec-40b6-8264-175180264f15 req-bf2c6479-b1a8-422b-8301-2963aacf6c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.846 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance destroyed successfully.#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.847 248514 DEBUG nova.objects.instance [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'resources' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.854 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614677.853914, b4a46029-fa6e-4566-9187-16b9d1bfd6d6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.854 248514 INFO nova.compute.manager [-] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.871 248514 DEBUG nova.virt.libvirt.vif [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:31:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.871 248514 DEBUG nova.network.os_vif_util [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.872 248514 DEBUG nova.network.os_vif_util [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.873 248514 DEBUG os_vif [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.874 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.874 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5058a06-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.876 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.878 248514 DEBUG nova.compute.manager [None req-60d99fb7-2e76-40e0-a2c6-e05c03a091d1 - - - - - -] [instance: b4a46029-fa6e-4566-9187-16b9d1bfd6d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.879 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.881 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.886 248514 INFO os_vif [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.893 248514 DEBUG nova.virt.libvirt.driver [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Start _get_guest_xml network_info=[{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.896 248514 WARNING nova.virt.libvirt.driver [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.902 248514 DEBUG nova.virt.libvirt.host [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.903 248514 DEBUG nova.virt.libvirt.host [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.906 248514 DEBUG nova.virt.libvirt.host [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.906 248514 DEBUG nova.virt.libvirt.host [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.907 248514 DEBUG nova.virt.libvirt.driver [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.907 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.907 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.907 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.908 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.908 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.908 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.908 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.909 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.909 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.909 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.909 248514 DEBUG nova.virt.hardware [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.909 248514 DEBUG nova.objects.instance [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.929 248514 DEBUG oslo_concurrency.processutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:32 np0005558241 nova_compute[248510]: 2025-12-13 08:31:32.967 248514 DEBUG nova.network.neutron [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:31:33 np0005558241 nova_compute[248510]: 2025-12-13 08:31:33.352 248514 DEBUG nova.compute.manager [req-33b03887-6400-4006-8882-8fa8ae755e38 req-a7df91d5-40a6-441b-b383-ab4a04776532 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:31:33 np0005558241 nova_compute[248510]: 2025-12-13 08:31:33.352 248514 DEBUG oslo_concurrency.lockutils [req-33b03887-6400-4006-8882-8fa8ae755e38 req-a7df91d5-40a6-441b-b383-ab4a04776532 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:33 np0005558241 nova_compute[248510]: 2025-12-13 08:31:33.353 248514 DEBUG oslo_concurrency.lockutils [req-33b03887-6400-4006-8882-8fa8ae755e38 req-a7df91d5-40a6-441b-b383-ab4a04776532 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:33 np0005558241 nova_compute[248510]: 2025-12-13 08:31:33.353 248514 DEBUG oslo_concurrency.lockutils [req-33b03887-6400-4006-8882-8fa8ae755e38 req-a7df91d5-40a6-441b-b383-ab4a04776532 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:33 np0005558241 nova_compute[248510]: 2025-12-13 08:31:33.353 248514 DEBUG nova.compute.manager [req-33b03887-6400-4006-8882-8fa8ae755e38 req-a7df91d5-40a6-441b-b383-ab4a04776532 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:31:33 np0005558241 nova_compute[248510]: 2025-12-13 08:31:33.353 248514 WARNING nova.compute.manager [req-33b03887-6400-4006-8882-8fa8ae755e38 req-a7df91d5-40a6-441b-b383-ab4a04776532 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec 13 03:31:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2029: 321 pgs: 321 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 4.3 MiB/s wr, 87 op/s
Dec 13 03:31:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:31:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1316122110' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:31:33 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[313163]: [NOTICE]   (313171) : haproxy version is 2.8.14-c23fe91
Dec 13 03:31:33 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[313163]: [NOTICE]   (313171) : path to executable is /usr/sbin/haproxy
Dec 13 03:31:33 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[313163]: [WARNING]  (313171) : Exiting Master process...
Dec 13 03:31:33 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[313163]: [WARNING]  (313171) : Exiting Master process...
Dec 13 03:31:33 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[313163]: [ALERT]    (313171) : Current worker (313174) exited with code 143 (Terminated)
Dec 13 03:31:33 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[313163]: [WARNING]  (313171) : All workers exited. Exiting... (0)
Dec 13 03:31:33 np0005558241 systemd[1]: libpod-0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5.scope: Deactivated successfully.
Dec 13 03:31:33 np0005558241 nova_compute[248510]: 2025-12-13 08:31:33.817 248514 DEBUG oslo_concurrency.processutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.889s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:33 np0005558241 podman[314072]: 2025-12-13 08:31:33.8239087 +0000 UTC m=+1.167675822 container died 0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 03:31:33 np0005558241 nova_compute[248510]: 2025-12-13 08:31:33.877 248514 DEBUG oslo_concurrency.processutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:33 np0005558241 nova_compute[248510]: 2025-12-13 08:31:33.960 248514 DEBUG nova.network.neutron [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Updating instance_info_cache with network_info: [{"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:31:33 np0005558241 nova_compute[248510]: 2025-12-13 08:31:33.988 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Releasing lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:31:33 np0005558241 nova_compute[248510]: 2025-12-13 08:31:33.989 248514 DEBUG nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Instance network_info: |[{"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:31:33 np0005558241 nova_compute[248510]: 2025-12-13 08:31:33.989 248514 DEBUG oslo_concurrency.lockutils [req-46787674-02ec-40b6-8264-175180264f15 req-bf2c6479-b1a8-422b-8301-2963aacf6c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:31:33 np0005558241 nova_compute[248510]: 2025-12-13 08:31:33.990 248514 DEBUG nova.network.neutron [req-46787674-02ec-40b6-8264-175180264f15 req-bf2c6479-b1a8-422b-8301-2963aacf6c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Refreshing network info cache for port b2ee664d-ff99-4665-a5cc-70bd7aeb1546 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:31:33 np0005558241 nova_compute[248510]: 2025-12-13 08:31:33.995 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Start _get_guest_xml network_info=[{"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.002 248514 WARNING nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.007 248514 DEBUG nova.virt.libvirt.host [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.008 248514 DEBUG nova.virt.libvirt.host [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.014 248514 DEBUG nova.virt.libvirt.host [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.014 248514 DEBUG nova.virt.libvirt.host [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.015 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.015 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.016 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.016 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.016 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.016 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.016 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.017 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.017 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.017 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.017 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.017 248514 DEBUG nova.virt.hardware [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.021 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a2dff49460421801b39e16b895a11c4e399a618cd7fbd30a89711e20d6bd165c-merged.mount: Deactivated successfully.
Dec 13 03:31:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5-userdata-shm.mount: Deactivated successfully.
Dec 13 03:31:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:31:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300550188' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.504 248514 DEBUG oslo_concurrency.processutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.506 248514 DEBUG nova.virt.libvirt.vif [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:31:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.506 248514 DEBUG nova.network.os_vif_util [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.507 248514 DEBUG nova.network.os_vif_util [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.509 248514 DEBUG nova.objects.instance [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.524 248514 DEBUG nova.virt.libvirt.driver [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:31:34 np0005558241 nova_compute[248510]:  <uuid>3ced27d6-a2a8-4ce3-a7e7-494270418542</uuid>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:  <name>instance-0000003c</name>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerActionsTestJSON-server-286397061</nova:name>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:31:32</nova:creationTime>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:31:34 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:        <nova:user uuid="4d19f7d5ece8482dab03e4bc02fdf410">tempest-ServerActionsTestJSON-473360614-project-member</nova:user>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:        <nova:project uuid="c6718df841f0471ba710516400f126fa">tempest-ServerActionsTestJSON-473360614</nova:project>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:        <nova:port uuid="b5058a06-7109-4ac0-96d8-7562e66bee25">
Dec 13 03:31:34 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <entry name="serial">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <entry name="uuid">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk">
Dec 13 03:31:34 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:31:34 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config">
Dec 13 03:31:34 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:31:34 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:1f:d1:eb"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <target dev="tapb5058a06-71"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/console.log" append="off"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <input type="keyboard" bus="usb"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:31:34 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:31:34 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:31:34 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:31:34 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.526 248514 DEBUG nova.virt.libvirt.driver [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.526 248514 DEBUG nova.virt.libvirt.driver [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.527 248514 DEBUG nova.virt.libvirt.vif [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:31:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.527 248514 DEBUG nova.network.os_vif_util [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.528 248514 DEBUG nova.network.os_vif_util [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.528 248514 DEBUG os_vif [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.529 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.530 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.530 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.532 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.532 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5058a06-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.533 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5058a06-71, col_values=(('external_ids', {'iface-id': 'b5058a06-7109-4ac0-96d8-7562e66bee25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:d1:eb', 'vm-uuid': '3ced27d6-a2a8-4ce3-a7e7-494270418542'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.535 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:34 np0005558241 NetworkManager[50376]: <info>  [1765614694.5363] manager: (tapb5058a06-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.537 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.547 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.548 248514 INFO os_vif [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.555 248514 DEBUG oslo_concurrency.processutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.938s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.555 248514 INFO nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Deleting local config drive /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config because it was imported into RBD.#033[00m
Dec 13 03:31:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:31:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3026574021' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.589 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.622 248514 DEBUG nova.storage.rbd_utils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:34 np0005558241 NetworkManager[50376]: <info>  [1765614694.6295] manager: (tap7b3b1c0a-88): new Tun device (/org/freedesktop/NetworkManager/Devices/284)
Dec 13 03:31:34 np0005558241 kernel: tap7b3b1c0a-88: entered promiscuous mode
Dec 13 03:31:34 np0005558241 systemd-udevd[314051]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.633 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:34Z|00639|binding|INFO|Claiming lport 7b3b1c0a-882e-4f33-a582-667d018090d4 for this chassis.
Dec 13 03:31:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:34Z|00640|binding|INFO|7b3b1c0a-882e-4f33-a582-667d018090d4: Claiming fa:16:3e:41:56:12 10.100.0.13
Dec 13 03:31:34 np0005558241 NetworkManager[50376]: <info>  [1765614694.6441] device (tap7b3b1c0a-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:31:34 np0005558241 NetworkManager[50376]: <info>  [1765614694.6450] device (tap7b3b1c0a-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:31:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:34.648 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:56:12 10.100.0.13'], port_security=['fa:16:3e:41:56:12 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a9c6de9d-63c0-43a5-9d6e-be356e504837', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7b3b1c0a-882e-4f33-a582-667d018090d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:31:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:34Z|00641|binding|INFO|Setting lport 7b3b1c0a-882e-4f33-a582-667d018090d4 ovn-installed in OVS
Dec 13 03:31:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:34Z|00642|binding|INFO|Setting lport 7b3b1c0a-882e-4f33-a582-667d018090d4 up in Southbound
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.675 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:34 np0005558241 systemd-machined[210538]: New machine qemu-78-instance-00000044.
Dec 13 03:31:34 np0005558241 systemd[1]: Started Virtual Machine qemu-78-instance-00000044.
Dec 13 03:31:34 np0005558241 NetworkManager[50376]: <info>  [1765614694.8337] manager: (tapb5058a06-71): new Tun device (/org/freedesktop/NetworkManager/Devices/285)
Dec 13 03:31:34 np0005558241 kernel: tapb5058a06-71: entered promiscuous mode
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.835 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:34Z|00643|binding|INFO|Claiming lport b5058a06-7109-4ac0-96d8-7562e66bee25 for this chassis.
Dec 13 03:31:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:34Z|00644|binding|INFO|b5058a06-7109-4ac0-96d8-7562e66bee25: Claiming fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 03:31:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:34.849 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:31:34 np0005558241 NetworkManager[50376]: <info>  [1765614694.8522] device (tapb5058a06-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:31:34 np0005558241 NetworkManager[50376]: <info>  [1765614694.8527] device (tapb5058a06-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:31:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:34Z|00645|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 ovn-installed in OVS
Dec 13 03:31:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:34Z|00646|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 up in Southbound
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.859 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:34 np0005558241 nova_compute[248510]: 2025-12-13 08:31:34.867 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:34 np0005558241 systemd-machined[210538]: New machine qemu-79-instance-0000003c.
Dec 13 03:31:34 np0005558241 systemd[1]: Started Virtual Machine qemu-79-instance-0000003c.
Dec 13 03:31:35 np0005558241 podman[314072]: 2025-12-13 08:31:35.066304324 +0000 UTC m=+2.410071446 container cleanup 0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 03:31:35 np0005558241 systemd[1]: libpod-conmon-0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5.scope: Deactivated successfully.
Dec 13 03:31:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:31:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1502144583' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.260 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.262 248514 DEBUG nova.virt.libvirt.vif [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:31:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2099788276',display_name='tempest-ServerActionsTestOtherA-server-2099788276',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2099788276',id=69,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIxJgtD1FEWUw7tJ8pibGATgtrZITyeOCdRqSR73HftGqNDavcdP1XHx0prQ71D2yOjUOO7ZJAEgnPXlpVAfW1QGvCbp1snKSBX1V/4lwFnsJaGPS7QewWPvSMs5UYFhVA==',key_name='tempest-keypair-180617026',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-b9qwtikj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:31:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=ce9adb21-8832-4d3e-867e-b0b49bdb6850,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.262 248514 DEBUG nova.network.os_vif_util [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.263 248514 DEBUG nova.network.os_vif_util [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:3a:53,bridge_name='br-int',has_traffic_filtering=True,id=b2ee664d-ff99-4665-a5cc-70bd7aeb1546,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2ee664d-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.265 248514 DEBUG nova.objects.instance [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'pci_devices' on Instance uuid ce9adb21-8832-4d3e-867e-b0b49bdb6850 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.332 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:31:35 np0005558241 nova_compute[248510]:  <uuid>ce9adb21-8832-4d3e-867e-b0b49bdb6850</uuid>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:  <name>instance-00000045</name>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerActionsTestOtherA-server-2099788276</nova:name>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:31:34</nova:creationTime>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:31:35 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:        <nova:user uuid="5b310bdebec646949fad4ea1821b4c3f">tempest-ServerActionsTestOtherA-1325599242-project-member</nova:user>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:        <nova:project uuid="b4d2999518df4b9f8ccbabe38976dc3c">tempest-ServerActionsTestOtherA-1325599242</nova:project>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:        <nova:port uuid="b2ee664d-ff99-4665-a5cc-70bd7aeb1546">
Dec 13 03:31:35 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <entry name="serial">ce9adb21-8832-4d3e-867e-b0b49bdb6850</entry>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <entry name="uuid">ce9adb21-8832-4d3e-867e-b0b49bdb6850</entry>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk">
Dec 13 03:31:35 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:31:35 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk.config">
Dec 13 03:31:35 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:31:35 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:63:3a:53"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <target dev="tapb2ee664d-ff"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850/console.log" append="off"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:31:35 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:31:35 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:31:35 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:31:35 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.334 248514 DEBUG nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Preparing to wait for external event network-vif-plugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.334 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.335 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.335 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.336 248514 DEBUG nova.virt.libvirt.vif [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:31:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2099788276',display_name='tempest-ServerActionsTestOtherA-server-2099788276',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2099788276',id=69,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIxJgtD1FEWUw7tJ8pibGATgtrZITyeOCdRqSR73HftGqNDavcdP1XHx0prQ71D2yOjUOO7ZJAEgnPXlpVAfW1QGvCbp1snKSBX1V/4lwFnsJaGPS7QewWPvSMs5UYFhVA==',key_name='tempest-keypair-180617026',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-b9qwtikj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:31:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=ce9adb21-8832-4d3e-867e-b0b49bdb6850,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.336 248514 DEBUG nova.network.os_vif_util [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.338 248514 DEBUG nova.network.os_vif_util [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:3a:53,bridge_name='br-int',has_traffic_filtering=True,id=b2ee664d-ff99-4665-a5cc-70bd7aeb1546,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2ee664d-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.339 248514 DEBUG os_vif [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:3a:53,bridge_name='br-int',has_traffic_filtering=True,id=b2ee664d-ff99-4665-a5cc-70bd7aeb1546,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2ee664d-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.340 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.341 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.342 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.342 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.346 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.347 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2ee664d-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.348 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb2ee664d-ff, col_values=(('external_ids', {'iface-id': 'b2ee664d-ff99-4665-a5cc-70bd7aeb1546', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:3a:53', 'vm-uuid': 'ce9adb21-8832-4d3e-867e-b0b49bdb6850'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.349 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:35 np0005558241 NetworkManager[50376]: <info>  [1765614695.3509] manager: (tapb2ee664d-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.353 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.357 248514 INFO os_vif [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:3a:53,bridge_name='br-int',has_traffic_filtering=True,id=b2ee664d-ff99-4665-a5cc-70bd7aeb1546,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2ee664d-ff')#033[00m
Dec 13 03:31:35 np0005558241 podman[314288]: 2025-12-13 08:31:35.449644053 +0000 UTC m=+0.348288955 container remove 0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.461 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614695.4604409, a9c6de9d-63c0-43a5-9d6e-be356e504837 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.461 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] VM Started (Lifecycle Event)#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.461 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[97418971-ae57-4582-a104-ed4695b8aea3]: (4, ('Sat Dec 13 08:31:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5)\n0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5\nSat Dec 13 08:31:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5)\n0e99beaf3b437ea5d553b7870d2192d19f1fd61b4c5fa9413f1c33831487daf5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.464 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[53b9756d-2229-4090-9c73-1edc09bb245d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.466 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.468 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:35 np0005558241 kernel: tap43ee8730-a0: left promiscuous mode
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.491 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.494 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.494 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f43b02b5-deb0-46bc-880c-f6d1e0186d7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.501 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614695.460597, a9c6de9d-63c0-43a5-9d6e-be356e504837 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.502 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.506 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.506 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.506 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No VIF found with MAC fa:16:3e:63:3a:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.507 248514 INFO nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Using config drive#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.509 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0774e96b-c46d-4d54-aaf0-a4d403888fc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.515 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d6c814c2-aae7-4a7d-9e5b-89e96f2304ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.538 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[135ce90a-f232-4366-ae41-1a1711ad7a82]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 723460, 'reachable_time': 21436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314360, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.543 248514 DEBUG nova.storage.rbd_utils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.544 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:31:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2030: 321 pgs: 321 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 4.3 MiB/s wr, 88 op/s
Dec 13 03:31:35 np0005558241 systemd[1]: run-netns-ovnmeta\x2d43ee8730\x2da174\x2d4859\x2d9c95\x2db3ad6dc68839.mount: Deactivated successfully.
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.544 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3a8ff1-0a51-469a-a1aa-157af1f214a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.546 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7b3b1c0a-882e-4f33-a582-667d018090d4 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 unbound from our chassis#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.548 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.549 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[09a5c771-f4fd-4c17-9319-27a5ccf066e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.550 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 unbound from our chassis#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.551 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.557 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.566 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.566 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[07d95d1d-c645-451d-afb4-9f28cbf94096]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.568 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43ee8730-a1 in ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.570 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43ee8730-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.570 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[69dd3bb7-0afe-4ee5-84dd-36368b29e66e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.571 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2db8fb15-2889-4b48-ba13-d89732fe5302]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.584 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[1092c58e-ec1e-484a-8512-8ad7031fb8c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.599 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.601 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6e3687c8-fe34-439b-b20d-e7c437d259fa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.645 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ce2ca0-0d30-47e5-b0dc-5bcb498ce8d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 NetworkManager[50376]: <info>  [1765614695.6528] manager: (tap43ee8730-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/287)
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.652 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[52dd9cd5-d092-46e0-adf8-2abb014f85c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 systemd-udevd[314391]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.696 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[92f4da8e-3739-42bf-ab8d-af9c93a1b16d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.703 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b7289652-a343-41c1-be11-3b666abffc91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 NetworkManager[50376]: <info>  [1765614695.7350] device (tap43ee8730-a0): carrier: link connected
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.742 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c38d44-8aec-43bf-840e-e51be8ef7c13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.761 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bac9e3a8-346b-402c-9fed-99e988ebc5a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727295, 'reachable_time': 29620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314412, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.777 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[42629ff2-9efc-4daa-8e5a-18cb961d8c8d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:bbcd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727295, 'tstamp': 727295}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314429, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.793 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[44023f03-d916-40f3-9b83-11b5a99747ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727295, 'reachable_time': 29620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314432, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.819 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0556b5-ecd3-4a8a-bb07-bb78908dd400]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.877 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[84d655e6-7e4b-412c-a7e3-ea422c38f016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.879 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.879 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.879 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.881 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:35 np0005558241 NetworkManager[50376]: <info>  [1765614695.8822] manager: (tap43ee8730-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Dec 13 03:31:35 np0005558241 kernel: tap43ee8730-a0: entered promiscuous mode
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.886 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.887 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.888 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:35Z|00647|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.904 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.909 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.910 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.911 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8871b9bc-55b8-4334-b678-693f1b8035ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.911 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:31:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:35.912 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'env', 'PROCESS_TAG=haproxy-43ee8730-a174-4859-9c95-b3ad6dc68839', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43ee8730-a174-4859-9c95-b3ad6dc68839.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.983 248514 DEBUG nova.compute.manager [req-8c5059c9-bac5-4d1c-9bd0-6d26ae0c7c65 req-bca9079e-74c2-4bdb-ae80-a1e65f01e939 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.984 248514 DEBUG oslo_concurrency.lockutils [req-8c5059c9-bac5-4d1c-9bd0-6d26ae0c7c65 req-bca9079e-74c2-4bdb-ae80-a1e65f01e939 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.984 248514 DEBUG oslo_concurrency.lockutils [req-8c5059c9-bac5-4d1c-9bd0-6d26ae0c7c65 req-bca9079e-74c2-4bdb-ae80-a1e65f01e939 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.984 248514 DEBUG oslo_concurrency.lockutils [req-8c5059c9-bac5-4d1c-9bd0-6d26ae0c7c65 req-bca9079e-74c2-4bdb-ae80-a1e65f01e939 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.984 248514 DEBUG nova.compute.manager [req-8c5059c9-bac5-4d1c-9bd0-6d26ae0c7c65 req-bca9079e-74c2-4bdb-ae80-a1e65f01e939 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Processing event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.985 248514 DEBUG nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.991 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614695.9903412, a9c6de9d-63c0-43a5-9d6e-be356e504837 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.991 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:31:35 np0005558241 nova_compute[248510]: 2025-12-13 08:31:35.993 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.002 248514 INFO nova.virt.libvirt.driver [-] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance spawned successfully.#033[00m
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.003 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.021 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.025 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.029 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.030 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.030 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.031 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.031 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.031 248514 DEBUG nova.virt.libvirt.driver [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.039 248514 INFO nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Creating config drive at /var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850/disk.config
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.045 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyunprnlr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.090 248514 DEBUG nova.network.neutron [req-46787674-02ec-40b6-8264-175180264f15 req-bf2c6479-b1a8-422b-8301-2963aacf6c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Updated VIF entry in instance network info cache for port b2ee664d-ff99-4665-a5cc-70bd7aeb1546. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.091 248514 DEBUG nova.network.neutron [req-46787674-02ec-40b6-8264-175180264f15 req-bf2c6479-b1a8-422b-8301-2963aacf6c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Updating instance_info_cache with network_info: [{"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.094 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.097 248514 DEBUG nova.compute.manager [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.097 248514 DEBUG oslo_concurrency.lockutils [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.098 248514 DEBUG oslo_concurrency.lockutils [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.098 248514 DEBUG oslo_concurrency.lockutils [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.099 248514 DEBUG nova.compute.manager [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.099 248514 WARNING nova.compute.manager [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state reboot_started_hard.
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.099 248514 DEBUG nova.compute.manager [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.100 248514 DEBUG oslo_concurrency.lockutils [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.100 248514 DEBUG oslo_concurrency.lockutils [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.100 248514 DEBUG oslo_concurrency.lockutils [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.101 248514 DEBUG nova.compute.manager [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.101 248514 WARNING nova.compute.manager [req-185e9e4b-2fd3-44a2-b3c0-b5e1af0af1bf req-c2ab6777-9f59-442a-ac4f-c26ca117b272 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state reboot_started_hard.
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.102 248514 DEBUG nova.compute.manager [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.103 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 3ced27d6-a2a8-4ce3-a7e7-494270418542 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.103 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614696.0367146, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.104 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Resumed (Lifecycle Event)
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.107 248514 INFO nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Took 12.28 seconds to spawn the instance on the hypervisor.
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.107 248514 DEBUG nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.113 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance rebooted successfully.
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.114 248514 DEBUG nova.compute.manager [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:31:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.158 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.161 248514 DEBUG oslo_concurrency.lockutils [req-46787674-02ec-40b6-8264-175180264f15 req-bf2c6479-b1a8-422b-8301-2963aacf6c18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.164 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.195 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyunprnlr" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.215 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.228 248514 DEBUG nova.storage.rbd_utils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.234 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850/disk.config ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.277 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.277 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614696.0368996, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.278 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Started (Lifecycle Event)
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.281 248514 DEBUG oslo_concurrency.lockutils [None req-f33a7c6e-16e9-45ad-82f0-418ff9a5a400 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.289 248514 INFO nova.compute.manager [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Took 13.76 seconds to build instance.
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.320 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.321 248514 DEBUG oslo_concurrency.lockutils [None req-b470771f-7ad6-4278-95c5-1353e00e70ca 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.326 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:31:36 np0005558241 podman[314494]: 2025-12-13 08:31:36.341014576 +0000 UTC m=+0.071997267 container create f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 03:31:36 np0005558241 systemd[1]: Started libpod-conmon-f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce.scope.
Dec 13 03:31:36 np0005558241 podman[314494]: 2025-12-13 08:31:36.297234096 +0000 UTC m=+0.028216807 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:31:36 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:31:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf5248c08c37cabcce8b13f5f177f9d27ffd14bb5e2bc32ec9054b3d6fc2fbb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:31:36 np0005558241 podman[314494]: 2025-12-13 08:31:36.439764003 +0000 UTC m=+0.170746724 container init f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:31:36 np0005558241 podman[314494]: 2025-12-13 08:31:36.445860103 +0000 UTC m=+0.176842794 container start f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.456 248514 DEBUG oslo_concurrency.processutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850/disk.config ce9adb21-8832-4d3e-867e-b0b49bdb6850_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.458 248514 INFO nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Deleting local config drive /var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850/disk.config because it was imported into RBD.
Dec 13 03:31:36 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[314526]: [NOTICE]   (314530) : New worker (314533) forked
Dec 13 03:31:36 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[314526]: [NOTICE]   (314530) : Loading success.
Dec 13 03:31:36 np0005558241 kernel: tapb2ee664d-ff: entered promiscuous mode
Dec 13 03:31:36 np0005558241 NetworkManager[50376]: <info>  [1765614696.5176] manager: (tapb2ee664d-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/289)
Dec 13 03:31:36 np0005558241 systemd-udevd[314395]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:31:36 np0005558241 NetworkManager[50376]: <info>  [1765614696.5399] device (tapb2ee664d-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:31:36 np0005558241 NetworkManager[50376]: <info>  [1765614696.5408] device (tapb2ee664d-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:31:36 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:36Z|00648|binding|INFO|Claiming lport b2ee664d-ff99-4665-a5cc-70bd7aeb1546 for this chassis.
Dec 13 03:31:36 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:36Z|00649|binding|INFO|b2ee664d-ff99-4665-a5cc-70bd7aeb1546: Claiming fa:16:3e:63:3a:53 10.100.0.3
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.551 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.559 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:3a:53 10.100.0.3'], port_security=['fa:16:3e:63:3a:53 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ce9adb21-8832-4d3e-867e-b0b49bdb6850', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b4a5bf7b-dd16-4d92-81b7-546493ad4db6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6794829-4e4e-4196-8b70-d8e6b3d73dd5, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b2ee664d-ff99-4665-a5cc-70bd7aeb1546) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.560 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b2ee664d-ff99-4665-a5cc-70bd7aeb1546 in datapath 409fc0bb-caf3-4b90-9e44-83ff383dc88f bound to our chassis
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.563 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 409fc0bb-caf3-4b90-9e44-83ff383dc88f
Dec 13 03:31:36 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:36Z|00650|binding|INFO|Setting lport b2ee664d-ff99-4665-a5cc-70bd7aeb1546 ovn-installed in OVS
Dec 13 03:31:36 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:36Z|00651|binding|INFO|Setting lport b2ee664d-ff99-4665-a5cc-70bd7aeb1546 up in Southbound
Dec 13 03:31:36 np0005558241 nova_compute[248510]: 2025-12-13 08:31:36.571 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:31:36 np0005558241 systemd-machined[210538]: New machine qemu-80-instance-00000045.
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.583 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9efae554-9552-4654-801e-f80a2e61b2b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.584 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap409fc0bb-c1 in ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.586 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap409fc0bb-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.586 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d982545-4a92-4eef-a8fa-6acf4e477c30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.587 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fb1100f0-667a-4960-8c5b-0c2b84cc58bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:31:36 np0005558241 systemd[1]: Started Virtual Machine qemu-80-instance-00000045.
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.607 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9bc01b-e705-4d09-aabe-931d7dcfb9fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.628 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[743842b1-9c00-48b5-b43e-eb68de8b4908]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.676 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1076277f-5f91-47bf-8a42-de8498f6000c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:31:36 np0005558241 NetworkManager[50376]: <info>  [1765614696.6857] manager: (tap409fc0bb-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/290)
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.684 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[82532825-a922-4b4a-bbd4-acfcbb323a44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.732 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[aaf613de-d2aa-4521-bb7d-6f8ae632c402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.737 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[631d89ea-01e4-4e46-b264-03957dde4a25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:36 np0005558241 NetworkManager[50376]: <info>  [1765614696.7694] device (tap409fc0bb-c0): carrier: link connected
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.776 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[31bc3869-7d8d-4814-82e6-d5cac2de6d4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.800 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[38d64295-ab01-4e4e-95e2-ce75045bd611]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap409fc0bb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:3b:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727398, 'reachable_time': 29799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314570, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.820 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[45200e70-4b22-4d98-a477-e7c0a51c7585]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec9:3b3f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727398, 'tstamp': 727398}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314571, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.842 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[83d8482d-6020-43f0-8aae-b5eccac63582]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap409fc0bb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:3b:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727398, 'reachable_time': 29799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314572, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:36.902 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3a259058-7a0b-4ba5-ab3a-04f89abeb4b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:37.006 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[acad6b18-dee7-4063-b2fb-19a986e9fe86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:37.008 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap409fc0bb-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:37.009 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:37.009 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap409fc0bb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:37 np0005558241 NetworkManager[50376]: <info>  [1765614697.0122] manager: (tap409fc0bb-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/291)
Dec 13 03:31:37 np0005558241 nova_compute[248510]: 2025-12-13 08:31:37.011 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:37 np0005558241 kernel: tap409fc0bb-c0: entered promiscuous mode
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:37.018 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap409fc0bb-c0, col_values=(('external_ids', {'iface-id': 'c8e8a31b-a5fe-4e2d-bc19-65995078988f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:37 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:37Z|00652|binding|INFO|Releasing lport c8e8a31b-a5fe-4e2d-bc19-65995078988f from this chassis (sb_readonly=0)
Dec 13 03:31:37 np0005558241 nova_compute[248510]: 2025-12-13 08:31:37.023 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:37.025 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/409fc0bb-caf3-4b90-9e44-83ff383dc88f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/409fc0bb-caf3-4b90-9e44-83ff383dc88f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:37.026 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[93656d86-8bf2-4d0d-8f71-1356e4c2ebc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:37.027 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-409fc0bb-caf3-4b90-9e44-83ff383dc88f
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/409fc0bb-caf3-4b90-9e44-83ff383dc88f.pid.haproxy
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 409fc0bb-caf3-4b90-9e44-83ff383dc88f
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:31:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:37.029 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'env', 'PROCESS_TAG=haproxy-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/409fc0bb-caf3-4b90-9e44-83ff383dc88f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:31:37 np0005558241 nova_compute[248510]: 2025-12-13 08:31:37.040 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:37 np0005558241 nova_compute[248510]: 2025-12-13 08:31:37.063 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614697.0624516, ce9adb21-8832-4d3e-867e-b0b49bdb6850 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:31:37 np0005558241 nova_compute[248510]: 2025-12-13 08:31:37.063 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] VM Started (Lifecycle Event)#033[00m
Dec 13 03:31:37 np0005558241 nova_compute[248510]: 2025-12-13 08:31:37.095 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:31:37 np0005558241 nova_compute[248510]: 2025-12-13 08:31:37.099 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614697.063127, ce9adb21-8832-4d3e-867e-b0b49bdb6850 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:31:37 np0005558241 nova_compute[248510]: 2025-12-13 08:31:37.100 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:31:37 np0005558241 nova_compute[248510]: 2025-12-13 08:31:37.129 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:37 np0005558241 nova_compute[248510]: 2025-12-13 08:31:37.130 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:37 np0005558241 nova_compute[248510]: 2025-12-13 08:31:37.133 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:31:37 np0005558241 nova_compute[248510]: 2025-12-13 08:31:37.137 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:31:37 np0005558241 nova_compute[248510]: 2025-12-13 08:31:37.160 248514 DEBUG nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:31:37 np0005558241 nova_compute[248510]: 2025-12-13 08:31:37.165 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:31:37 np0005558241 nova_compute[248510]: 2025-12-13 08:31:37.256 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:37 np0005558241 nova_compute[248510]: 2025-12-13 08:31:37.257 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:37 np0005558241 nova_compute[248510]: 2025-12-13 08:31:37.265 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:31:37 np0005558241 nova_compute[248510]: 2025-12-13 08:31:37.265 248514 INFO nova.compute.claims [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:31:37 np0005558241 nova_compute[248510]: 2025-12-13 08:31:37.423 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:37 np0005558241 podman[314646]: 2025-12-13 08:31:37.483622708 +0000 UTC m=+0.068258235 container create 2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 03:31:37 np0005558241 systemd[1]: Started libpod-conmon-2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a.scope.
Dec 13 03:31:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2031: 321 pgs: 321 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 3.9 MiB/s wr, 81 op/s
Dec 13 03:31:37 np0005558241 podman[314646]: 2025-12-13 08:31:37.45287817 +0000 UTC m=+0.037513717 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:31:37 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:31:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe1480fb3e22837a053f76acd0fd060e8f26ee729f63461240b6103c38efbe6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:31:37 np0005558241 podman[314646]: 2025-12-13 08:31:37.576130581 +0000 UTC m=+0.160766118 container init 2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:31:37 np0005558241 podman[314646]: 2025-12-13 08:31:37.584457106 +0000 UTC m=+0.169092633 container start 2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:31:37 np0005558241 neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f[314661]: [NOTICE]   (314684) : New worker (314686) forked
Dec 13 03:31:37 np0005558241 neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f[314661]: [NOTICE]   (314684) : Loading success.
Dec 13 03:31:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:31:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4005998734' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.012 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.020 248514 DEBUG nova.compute.provider_tree [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.045 248514 DEBUG nova.scheduler.client.report [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.071 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.071 248514 DEBUG nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.114 248514 DEBUG nova.compute.manager [req-f1300c9a-5838-49c3-bb7b-61de61ef9423 req-7c88d50a-5208-4d90-85e6-66dcbb5eeb18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.114 248514 DEBUG oslo_concurrency.lockutils [req-f1300c9a-5838-49c3-bb7b-61de61ef9423 req-7c88d50a-5208-4d90-85e6-66dcbb5eeb18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.115 248514 DEBUG oslo_concurrency.lockutils [req-f1300c9a-5838-49c3-bb7b-61de61ef9423 req-7c88d50a-5208-4d90-85e6-66dcbb5eeb18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.115 248514 DEBUG oslo_concurrency.lockutils [req-f1300c9a-5838-49c3-bb7b-61de61ef9423 req-7c88d50a-5208-4d90-85e6-66dcbb5eeb18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.115 248514 DEBUG nova.compute.manager [req-f1300c9a-5838-49c3-bb7b-61de61ef9423 req-7c88d50a-5208-4d90-85e6-66dcbb5eeb18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] No waiting events found dispatching network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.115 248514 WARNING nova.compute.manager [req-f1300c9a-5838-49c3-bb7b-61de61ef9423 req-7c88d50a-5208-4d90-85e6-66dcbb5eeb18 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received unexpected event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 for instance with vm_state active and task_state None.
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.121 248514 DEBUG nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.121 248514 DEBUG nova.network.neutron [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.146 248514 INFO nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.172 248514 DEBUG nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.196 248514 DEBUG nova.compute.manager [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.197 248514 DEBUG oslo_concurrency.lockutils [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.197 248514 DEBUG oslo_concurrency.lockutils [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.197 248514 DEBUG oslo_concurrency.lockutils [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.198 248514 DEBUG nova.compute.manager [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.198 248514 WARNING nova.compute.manager [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.198 248514 DEBUG nova.compute.manager [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received event network-vif-plugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.198 248514 DEBUG oslo_concurrency.lockutils [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.198 248514 DEBUG oslo_concurrency.lockutils [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.199 248514 DEBUG oslo_concurrency.lockutils [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.199 248514 DEBUG nova.compute.manager [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Processing event network-vif-plugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.199 248514 DEBUG nova.compute.manager [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received event network-vif-plugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.199 248514 DEBUG oslo_concurrency.lockutils [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.200 248514 DEBUG oslo_concurrency.lockutils [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.200 248514 DEBUG oslo_concurrency.lockutils [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.200 248514 DEBUG nova.compute.manager [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] No waiting events found dispatching network-vif-plugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.200 248514 WARNING nova.compute.manager [req-0f6f18d0-0f1d-4979-8ece-5e112497221b req-d6d6d0ef-7218-4088-ae8c-94de368929f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received unexpected event network-vif-plugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 for instance with vm_state building and task_state spawning.
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.201 248514 DEBUG nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.214 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614698.2123458, ce9adb21-8832-4d3e-867e-b0b49bdb6850 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.215 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] VM Resumed (Lifecycle Event)
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.217 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.235 248514 INFO nova.virt.libvirt.driver [-] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Instance spawned successfully.
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.236 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.259 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.270 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.277 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.277 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.278 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.278 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.279 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.279 248514 DEBUG nova.virt.libvirt.driver [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.320 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.323 248514 DEBUG nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.324 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.324 248514 INFO nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Creating image(s)
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.345 248514 DEBUG nova.storage.rbd_utils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.367 248514 DEBUG nova.storage.rbd_utils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.390 248514 DEBUG nova.storage.rbd_utils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.393 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.436 248514 DEBUG nova.policy [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '91d0d3efedc943b48ad0fc4295b6fc7c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2de328b46a6e4f588f5e2a254db7f4ef', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.444 248514 INFO nova.compute.manager [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Rescuing
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.444 248514 DEBUG oslo_concurrency.lockutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "refresh_cache-a9c6de9d-63c0-43a5-9d6e-be356e504837" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.444 248514 DEBUG oslo_concurrency.lockutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquired lock "refresh_cache-a9c6de9d-63c0-43a5-9d6e-be356e504837" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.445 248514 DEBUG nova.network.neutron [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.448 248514 INFO nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Took 10.71 seconds to spawn the instance on the hypervisor.
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.449 248514 DEBUG nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.486 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.487 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.488 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.488 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.516 248514 DEBUG nova.storage.rbd_utils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.526 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.588 248514 INFO nova.compute.manager [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Took 12.64 seconds to build instance.
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.652 248514 DEBUG oslo_concurrency.lockutils [None req-7f914086-93ba-4033-a5cc-9b097a5405ec 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:31:38 np0005558241 nova_compute[248510]: 2025-12-13 08:31:38.925 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:31:39 np0005558241 nova_compute[248510]: 2025-12-13 08:31:39.016 248514 DEBUG nova.storage.rbd_utils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] resizing rbd image 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:31:39 np0005558241 nova_compute[248510]: 2025-12-13 08:31:39.147 248514 DEBUG nova.objects.instance [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lazy-loading 'migration_context' on Instance uuid 4d93482e-582f-4d44-ab53-87cd5f6aa66a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:31:39 np0005558241 nova_compute[248510]: 2025-12-13 08:31:39.176 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:31:39 np0005558241 nova_compute[248510]: 2025-12-13 08:31:39.176 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Ensure instance console log exists: /var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:31:39 np0005558241 nova_compute[248510]: 2025-12-13 08:31:39.178 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:31:39 np0005558241 nova_compute[248510]: 2025-12-13 08:31:39.178 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:31:39 np0005558241 nova_compute[248510]: 2025-12-13 08:31:39.178 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:31:39 np0005558241 nova_compute[248510]: 2025-12-13 08:31:39.505 248514 DEBUG nova.network.neutron [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Successfully created port: 35b172ab-1be7-44b2-9a76-0f60de6851ab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 03:31:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2032: 321 pgs: 321 active+clean; 225 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 181 op/s
Dec 13 03:31:39 np0005558241 nova_compute[248510]: 2025-12-13 08:31:39.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:31:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:31:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:31:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:31:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:31:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:31:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:31:40 np0005558241 nova_compute[248510]: 2025-12-13 08:31:40.332 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:40 np0005558241 nova_compute[248510]: 2025-12-13 08:31:40.350 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:40 np0005558241 nova_compute[248510]: 2025-12-13 08:31:40.742 248514 DEBUG nova.network.neutron [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Updating instance_info_cache with network_info: [{"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:31:40 np0005558241 nova_compute[248510]: 2025-12-13 08:31:40.769 248514 DEBUG oslo_concurrency.lockutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Releasing lock "refresh_cache-a9c6de9d-63c0-43a5-9d6e-be356e504837" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:31:41 np0005558241 nova_compute[248510]: 2025-12-13 08:31:41.066 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec 13 03:31:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:31:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2033: 321 pgs: 321 active+clean; 239 MiB data, 618 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.3 MiB/s wr, 237 op/s
Dec 13 03:31:42 np0005558241 nova_compute[248510]: 2025-12-13 08:31:42.494 248514 DEBUG nova.compute.manager [req-fcc51384-0683-429a-acef-3ba125251881 req-90e6600c-a7e1-46c6-8fc7-93cd30cb99b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received event network-changed-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:31:42 np0005558241 nova_compute[248510]: 2025-12-13 08:31:42.494 248514 DEBUG nova.compute.manager [req-fcc51384-0683-429a-acef-3ba125251881 req-90e6600c-a7e1-46c6-8fc7-93cd30cb99b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Refreshing instance network info cache due to event network-changed-b2ee664d-ff99-4665-a5cc-70bd7aeb1546. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:31:42 np0005558241 nova_compute[248510]: 2025-12-13 08:31:42.494 248514 DEBUG oslo_concurrency.lockutils [req-fcc51384-0683-429a-acef-3ba125251881 req-90e6600c-a7e1-46c6-8fc7-93cd30cb99b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:31:42 np0005558241 nova_compute[248510]: 2025-12-13 08:31:42.495 248514 DEBUG oslo_concurrency.lockutils [req-fcc51384-0683-429a-acef-3ba125251881 req-90e6600c-a7e1-46c6-8fc7-93cd30cb99b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:31:42 np0005558241 nova_compute[248510]: 2025-12-13 08:31:42.495 248514 DEBUG nova.network.neutron [req-fcc51384-0683-429a-acef-3ba125251881 req-90e6600c-a7e1-46c6-8fc7-93cd30cb99b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Refreshing network info cache for port b2ee664d-ff99-4665-a5cc-70bd7aeb1546 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:31:42 np0005558241 nova_compute[248510]: 2025-12-13 08:31:42.657 248514 DEBUG nova.network.neutron [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Successfully updated port: 35b172ab-1be7-44b2-9a76-0f60de6851ab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:31:42 np0005558241 nova_compute[248510]: 2025-12-13 08:31:42.682 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "refresh_cache-4d93482e-582f-4d44-ab53-87cd5f6aa66a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:31:42 np0005558241 nova_compute[248510]: 2025-12-13 08:31:42.682 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquired lock "refresh_cache-4d93482e-582f-4d44-ab53-87cd5f6aa66a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:31:42 np0005558241 nova_compute[248510]: 2025-12-13 08:31:42.683 248514 DEBUG nova.network.neutron [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:31:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2034: 321 pgs: 321 active+clean; 262 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.3 MiB/s wr, 249 op/s
Dec 13 03:31:43 np0005558241 nova_compute[248510]: 2025-12-13 08:31:43.768 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:31:43 np0005558241 nova_compute[248510]: 2025-12-13 08:31:43.774 248514 DEBUG nova.network.neutron [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:31:44 np0005558241 nova_compute[248510]: 2025-12-13 08:31:44.807 248514 DEBUG nova.compute.manager [req-5e6ebe5f-76ef-40e6-8ab9-d36b1f2ac521 req-1964bfba-cbdc-4b34-9409-38bde69dec8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Received event network-changed-35b172ab-1be7-44b2-9a76-0f60de6851ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:31:44 np0005558241 nova_compute[248510]: 2025-12-13 08:31:44.807 248514 DEBUG nova.compute.manager [req-5e6ebe5f-76ef-40e6-8ab9-d36b1f2ac521 req-1964bfba-cbdc-4b34-9409-38bde69dec8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Refreshing instance network info cache due to event network-changed-35b172ab-1be7-44b2-9a76-0f60de6851ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:31:44 np0005558241 nova_compute[248510]: 2025-12-13 08:31:44.807 248514 DEBUG oslo_concurrency.lockutils [req-5e6ebe5f-76ef-40e6-8ab9-d36b1f2ac521 req-1964bfba-cbdc-4b34-9409-38bde69dec8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-4d93482e-582f-4d44-ab53-87cd5f6aa66a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:31:45 np0005558241 nova_compute[248510]: 2025-12-13 08:31:45.334 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:45 np0005558241 nova_compute[248510]: 2025-12-13 08:31:45.352 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2035: 321 pgs: 321 active+clean; 262 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 245 op/s
Dec 13 03:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.003 248514 DEBUG nova.network.neutron [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Updating instance_info_cache with network_info: [{"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.012 248514 DEBUG nova.network.neutron [req-fcc51384-0683-429a-acef-3ba125251881 req-90e6600c-a7e1-46c6-8fc7-93cd30cb99b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Updated VIF entry in instance network info cache for port b2ee664d-ff99-4665-a5cc-70bd7aeb1546. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.012 248514 DEBUG nova.network.neutron [req-fcc51384-0683-429a-acef-3ba125251881 req-90e6600c-a7e1-46c6-8fc7-93cd30cb99b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Updating instance_info_cache with network_info: [{"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.027 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Releasing lock "refresh_cache-4d93482e-582f-4d44-ab53-87cd5f6aa66a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:31:47 np0005558241 podman[315004]: 2025-12-13 08:31:47.026470803 +0000 UTC m=+0.050479279 container create c5ba0b2c4b4596f69e2627508123c73973dd34e966e7259dc0cb33c8daf9da99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_rhodes, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.029 248514 DEBUG nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Instance network_info: |[{"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.030 248514 DEBUG oslo_concurrency.lockutils [req-5e6ebe5f-76ef-40e6-8ab9-d36b1f2ac521 req-1964bfba-cbdc-4b34-9409-38bde69dec8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-4d93482e-582f-4d44-ab53-87cd5f6aa66a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.030 248514 DEBUG nova.network.neutron [req-5e6ebe5f-76ef-40e6-8ab9-d36b1f2ac521 req-1964bfba-cbdc-4b34-9409-38bde69dec8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Refreshing network info cache for port 35b172ab-1be7-44b2-9a76-0f60de6851ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.036 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Start _get_guest_xml network_info=[{"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.038 248514 DEBUG oslo_concurrency.lockutils [req-fcc51384-0683-429a-acef-3ba125251881 req-90e6600c-a7e1-46c6-8fc7-93cd30cb99b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.043 248514 WARNING nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.052 248514 DEBUG nova.virt.libvirt.host [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.054 248514 DEBUG nova.virt.libvirt.host [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.061 248514 DEBUG nova.virt.libvirt.host [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.061 248514 DEBUG nova.virt.libvirt.host [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.062 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.062 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.063 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.063 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.064 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.064 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.065 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.065 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.065 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.066 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.066 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.067 248514 DEBUG nova.virt.hardware [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.071 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:47 np0005558241 systemd[1]: Started libpod-conmon-c5ba0b2c4b4596f69e2627508123c73973dd34e966e7259dc0cb33c8daf9da99.scope.
Dec 13 03:31:47 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:31:47 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:31:47 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:31:47 np0005558241 podman[315004]: 2025-12-13 08:31:47.001907663 +0000 UTC m=+0.025916159 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:31:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:31:47 np0005558241 podman[315004]: 2025-12-13 08:31:47.125513457 +0000 UTC m=+0.149521953 container init c5ba0b2c4b4596f69e2627508123c73973dd34e966e7259dc0cb33c8daf9da99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:31:47 np0005558241 podman[315004]: 2025-12-13 08:31:47.134286509 +0000 UTC m=+0.158294985 container start c5ba0b2c4b4596f69e2627508123c73973dd34e966e7259dc0cb33c8daf9da99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:31:47 np0005558241 podman[315004]: 2025-12-13 08:31:47.138256552 +0000 UTC m=+0.162265248 container attach c5ba0b2c4b4596f69e2627508123c73973dd34e966e7259dc0cb33c8daf9da99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:31:47 np0005558241 intelligent_rhodes[315019]: 167 167
Dec 13 03:31:47 np0005558241 systemd[1]: libpod-c5ba0b2c4b4596f69e2627508123c73973dd34e966e7259dc0cb33c8daf9da99.scope: Deactivated successfully.
Dec 13 03:31:47 np0005558241 podman[315004]: 2025-12-13 08:31:47.144047245 +0000 UTC m=+0.168055721 container died c5ba0b2c4b4596f69e2627508123c73973dd34e966e7259dc0cb33c8daf9da99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:31:47 np0005558241 systemd[1]: var-lib-containers-storage-overlay-79082616f053c878e9656e6427e7e5ef0b1fcbdb23034c72f4840fb69ab67c4d-merged.mount: Deactivated successfully.
Dec 13 03:31:47 np0005558241 podman[315004]: 2025-12-13 08:31:47.196179975 +0000 UTC m=+0.220188451 container remove c5ba0b2c4b4596f69e2627508123c73973dd34e966e7259dc0cb33c8daf9da99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:31:47 np0005558241 systemd[1]: libpod-conmon-c5ba0b2c4b4596f69e2627508123c73973dd34e966e7259dc0cb33c8daf9da99.scope: Deactivated successfully.
Dec 13 03:31:47 np0005558241 podman[315062]: 2025-12-13 08:31:47.406960256 +0000 UTC m=+0.049128021 container create 145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jennings, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 03:31:47 np0005558241 systemd[1]: Started libpod-conmon-145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305.scope.
Dec 13 03:31:47 np0005558241 podman[315062]: 2025-12-13 08:31:47.386435722 +0000 UTC m=+0.028603507 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:31:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:31:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65af3e791781eefdc3c22bf0cf8de0c62610e531afcc438d8e484dd4dcb086c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:31:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65af3e791781eefdc3c22bf0cf8de0c62610e531afcc438d8e484dd4dcb086c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:31:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65af3e791781eefdc3c22bf0cf8de0c62610e531afcc438d8e484dd4dcb086c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:31:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65af3e791781eefdc3c22bf0cf8de0c62610e531afcc438d8e484dd4dcb086c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:31:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65af3e791781eefdc3c22bf0cf8de0c62610e531afcc438d8e484dd4dcb086c8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:31:47 np0005558241 podman[315062]: 2025-12-13 08:31:47.507931524 +0000 UTC m=+0.150099309 container init 145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jennings, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:31:47 np0005558241 podman[315062]: 2025-12-13 08:31:47.516525069 +0000 UTC m=+0.158692834 container start 145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 03:31:47 np0005558241 podman[315062]: 2025-12-13 08:31:47.520807855 +0000 UTC m=+0.162975640 container attach 145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 03:31:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2036: 321 pgs: 321 active+clean; 262 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 240 op/s
Dec 13 03:31:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:31:47 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2490801119' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.695 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.730 248514 DEBUG nova.storage.rbd_utils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.737 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.792 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.794 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:31:47 np0005558241 nova_compute[248510]: 2025-12-13 08:31:47.826 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:31:48 np0005558241 elated_jennings[315078]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:31:48 np0005558241 elated_jennings[315078]: --> All data devices are unavailable
Dec 13 03:31:48 np0005558241 systemd[1]: libpod-145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305.scope: Deactivated successfully.
Dec 13 03:31:48 np0005558241 conmon[315078]: conmon 145369c4449b2c3669c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305.scope/container/memory.events
Dec 13 03:31:48 np0005558241 podman[315062]: 2025-12-13 08:31:48.143157713 +0000 UTC m=+0.785325488 container died 145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jennings, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:31:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-65af3e791781eefdc3c22bf0cf8de0c62610e531afcc438d8e484dd4dcb086c8-merged.mount: Deactivated successfully.
Dec 13 03:31:48 np0005558241 podman[315062]: 2025-12-13 08:31:48.200970041 +0000 UTC m=+0.843137806 container remove 145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:31:48 np0005558241 systemd[1]: libpod-conmon-145369c4449b2c3669c6fac963479c176215e317eb9f20503ab11577d5068305.scope: Deactivated successfully.
Dec 13 03:31:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:48Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 03:31:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:31:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1830471578' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.447 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.711s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.450 248514 DEBUG nova.virt.libvirt.vif [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:31:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1782155204',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1782155204',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1782155204',id=70,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2de328b46a6e4f588f5e2a254db7f4ef',ramdisk_id='',reservation_id='r-e9s8cfpo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1826994500',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:31:38Z,user_data=None,user_id='91d0d3efedc943b48ad0fc4295b6fc7c',uuid=4d93482e-582f-4d44-ab53-87cd5f6aa66a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.450 248514 DEBUG nova.network.os_vif_util [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converting VIF {"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.451 248514 DEBUG nova.network.os_vif_util [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:86:fc,bridge_name='br-int',has_traffic_filtering=True,id=35b172ab-1be7-44b2-9a76-0f60de6851ab,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b172ab-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.453 248514 DEBUG nova.objects.instance [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lazy-loading 'pci_devices' on Instance uuid 4d93482e-582f-4d44-ab53-87cd5f6aa66a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.477 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:31:48 np0005558241 nova_compute[248510]:  <uuid>4d93482e-582f-4d44-ab53-87cd5f6aa66a</uuid>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:  <name>instance-00000046</name>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1782155204</nova:name>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:31:47</nova:creationTime>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:31:48 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:        <nova:user uuid="91d0d3efedc943b48ad0fc4295b6fc7c">tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member</nova:user>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:        <nova:project uuid="2de328b46a6e4f588f5e2a254db7f4ef">tempest-ImagesOneServerNegativeTestJSON-1826994500</nova:project>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:        <nova:port uuid="35b172ab-1be7-44b2-9a76-0f60de6851ab">
Dec 13 03:31:48 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <entry name="serial">4d93482e-582f-4d44-ab53-87cd5f6aa66a</entry>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <entry name="uuid">4d93482e-582f-4d44-ab53-87cd5f6aa66a</entry>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk">
Dec 13 03:31:48 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:31:48 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk.config">
Dec 13 03:31:48 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:31:48 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:67:86:fc"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <target dev="tap35b172ab-1b"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a/console.log" append="off"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:31:48 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:31:48 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:31:48 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:31:48 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.484 248514 DEBUG nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Preparing to wait for external event network-vif-plugged-35b172ab-1be7-44b2-9a76-0f60de6851ab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.484 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.485 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.485 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.486 248514 DEBUG nova.virt.libvirt.vif [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:31:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1782155204',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1782155204',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1782155204',id=70,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2de328b46a6e4f588f5e2a254db7f4ef',ramdisk_id='',reservation_id='r-e9s8cfpo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-182699
4500',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:31:38Z,user_data=None,user_id='91d0d3efedc943b48ad0fc4295b6fc7c',uuid=4d93482e-582f-4d44-ab53-87cd5f6aa66a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.487 248514 DEBUG nova.network.os_vif_util [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converting VIF {"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.488 248514 DEBUG nova.network.os_vif_util [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:86:fc,bridge_name='br-int',has_traffic_filtering=True,id=35b172ab-1be7-44b2-9a76-0f60de6851ab,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b172ab-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.488 248514 DEBUG os_vif [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:86:fc,bridge_name='br-int',has_traffic_filtering=True,id=35b172ab-1be7-44b2-9a76-0f60de6851ab,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b172ab-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.490 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.491 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.495 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.496 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35b172ab-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.497 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap35b172ab-1b, col_values=(('external_ids', {'iface-id': '35b172ab-1be7-44b2-9a76-0f60de6851ab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:86:fc', 'vm-uuid': '4d93482e-582f-4d44-ab53-87cd5f6aa66a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.499 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:48 np0005558241 NetworkManager[50376]: <info>  [1765614708.5002] manager: (tap35b172ab-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/292)
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.503 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.508 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.509 248514 INFO os_vif [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:86:fc,bridge_name='br-int',has_traffic_filtering=True,id=35b172ab-1be7-44b2-9a76-0f60de6851ab,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b172ab-1b')#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.573 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.574 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.575 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No VIF found with MAC fa:16:3e:67:86:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.575 248514 INFO nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Using config drive#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.616 248514 DEBUG nova.storage.rbd_utils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:48 np0005558241 podman[315235]: 2025-12-13 08:31:48.708292438 +0000 UTC m=+0.045800796 container create b183ca3d575c1fddfdaa0040d38f53fcf6cdb499ff2ee9da048db9ee4ff3d5b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ardinghelli, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 03:31:48 np0005558241 systemd[1]: Started libpod-conmon-b183ca3d575c1fddfdaa0040d38f53fcf6cdb499ff2ee9da048db9ee4ff3d5b2.scope.
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:31:48 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:31:48 np0005558241 podman[315235]: 2025-12-13 08:31:48.689175206 +0000 UTC m=+0.026683584 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:31:48 np0005558241 podman[315235]: 2025-12-13 08:31:48.796390336 +0000 UTC m=+0.133898714 container init b183ca3d575c1fddfdaa0040d38f53fcf6cdb499ff2ee9da048db9ee4ff3d5b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ardinghelli, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:31:48 np0005558241 podman[315235]: 2025-12-13 08:31:48.806056977 +0000 UTC m=+0.143565335 container start b183ca3d575c1fddfdaa0040d38f53fcf6cdb499ff2ee9da048db9ee4ff3d5b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 03:31:48 np0005558241 distracted_ardinghelli[315250]: 167 167
Dec 13 03:31:48 np0005558241 systemd[1]: libpod-b183ca3d575c1fddfdaa0040d38f53fcf6cdb499ff2ee9da048db9ee4ff3d5b2.scope: Deactivated successfully.
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.951 248514 INFO nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Creating config drive at /var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a/disk.config#033[00m
Dec 13 03:31:48 np0005558241 nova_compute[248510]: 2025-12-13 08:31:48.958 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpswaztoxc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:49 np0005558241 podman[315235]: 2025-12-13 08:31:49.096779649 +0000 UTC m=+0.434288047 container attach b183ca3d575c1fddfdaa0040d38f53fcf6cdb499ff2ee9da048db9ee4ff3d5b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 03:31:49 np0005558241 podman[315235]: 2025-12-13 08:31:49.098499544 +0000 UTC m=+0.436007902 container died b183ca3d575c1fddfdaa0040d38f53fcf6cdb499ff2ee9da048db9ee4ff3d5b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:31:49 np0005558241 nova_compute[248510]: 2025-12-13 08:31:49.114 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpswaztoxc" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:49 np0005558241 nova_compute[248510]: 2025-12-13 08:31:49.146 248514 DEBUG nova.storage.rbd_utils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:49 np0005558241 nova_compute[248510]: 2025-12-13 08:31:49.158 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a/disk.config 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:49 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3c7f1de502834039167cc4e7f79ca686d348c8e365c18e5d123c840791d4e869-merged.mount: Deactivated successfully.
Dec 13 03:31:49 np0005558241 nova_compute[248510]: 2025-12-13 08:31:49.201 248514 DEBUG nova.network.neutron [req-5e6ebe5f-76ef-40e6-8ab9-d36b1f2ac521 req-1964bfba-cbdc-4b34-9409-38bde69dec8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Updated VIF entry in instance network info cache for port 35b172ab-1be7-44b2-9a76-0f60de6851ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:31:49 np0005558241 nova_compute[248510]: 2025-12-13 08:31:49.202 248514 DEBUG nova.network.neutron [req-5e6ebe5f-76ef-40e6-8ab9-d36b1f2ac521 req-1964bfba-cbdc-4b34-9409-38bde69dec8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Updating instance_info_cache with network_info: [{"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:31:49 np0005558241 nova_compute[248510]: 2025-12-13 08:31:49.223 248514 DEBUG oslo_concurrency.lockutils [req-5e6ebe5f-76ef-40e6-8ab9-d36b1f2ac521 req-1964bfba-cbdc-4b34-9409-38bde69dec8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-4d93482e-582f-4d44-ab53-87cd5f6aa66a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:31:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2037: 321 pgs: 321 active+clean; 280 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 3.3 MiB/s wr, 290 op/s
Dec 13 03:31:49 np0005558241 podman[315235]: 2025-12-13 08:31:49.575657307 +0000 UTC m=+0.913165685 container remove b183ca3d575c1fddfdaa0040d38f53fcf6cdb499ff2ee9da048db9ee4ff3d5b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:31:49 np0005558241 systemd[1]: libpod-conmon-b183ca3d575c1fddfdaa0040d38f53fcf6cdb499ff2ee9da048db9ee4ff3d5b2.scope: Deactivated successfully.
Dec 13 03:31:49 np0005558241 nova_compute[248510]: 2025-12-13 08:31:49.656 248514 DEBUG oslo_concurrency.processutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a/disk.config 4d93482e-582f-4d44-ab53-87cd5f6aa66a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:49 np0005558241 nova_compute[248510]: 2025-12-13 08:31:49.657 248514 INFO nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Deleting local config drive /var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a/disk.config because it was imported into RBD.#033[00m
Dec 13 03:31:49 np0005558241 kernel: tap35b172ab-1b: entered promiscuous mode
Dec 13 03:31:49 np0005558241 NetworkManager[50376]: <info>  [1765614709.7198] manager: (tap35b172ab-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/293)
Dec 13 03:31:49 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:49Z|00653|binding|INFO|Claiming lport 35b172ab-1be7-44b2-9a76-0f60de6851ab for this chassis.
Dec 13 03:31:49 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:49Z|00654|binding|INFO|35b172ab-1be7-44b2-9a76-0f60de6851ab: Claiming fa:16:3e:67:86:fc 10.100.0.3
Dec 13 03:31:49 np0005558241 nova_compute[248510]: 2025-12-13 08:31:49.722 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.735 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:86:fc 10.100.0.3'], port_security=['fa:16:3e:67:86:fc 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4d93482e-582f-4d44-ab53-87cd5f6aa66a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2de328b46a6e4f588f5e2a254db7f4ef', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b109409-2f17-43b9-91ce-3ee865f0de12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6896dc77-88b0-4868-833d-01241883d6af, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=35b172ab-1be7-44b2-9a76-0f60de6851ab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:31:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.737 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 35b172ab-1be7-44b2-9a76-0f60de6851ab in datapath 527d37da-eda0-4bfe-9f1d-310d58024d5d bound to our chassis#033[00m
Dec 13 03:31:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.740 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 527d37da-eda0-4bfe-9f1d-310d58024d5d#033[00m
Dec 13 03:31:49 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:49Z|00655|binding|INFO|Setting lport 35b172ab-1be7-44b2-9a76-0f60de6851ab ovn-installed in OVS
Dec 13 03:31:49 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:49Z|00656|binding|INFO|Setting lport 35b172ab-1be7-44b2-9a76-0f60de6851ab up in Southbound
Dec 13 03:31:49 np0005558241 nova_compute[248510]: 2025-12-13 08:31:49.753 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:49 np0005558241 nova_compute[248510]: 2025-12-13 08:31:49.758 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.758 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[86a472ac-752c-444c-9f84-b1158d099cdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.759 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap527d37da-e1 in ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:31:49 np0005558241 systemd-udevd[315337]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:31:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.762 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap527d37da-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:31:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.762 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dba48cbb-99da-4f6b-a22f-efa3d4d90e2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.763 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b3e4fd-ea05-422b-bdc8-16bdc3f65c70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.776 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[ed499bc1-3d74-46ac-bf2a-137e6e8e2965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:49 np0005558241 systemd-machined[210538]: New machine qemu-81-instance-00000046.
Dec 13 03:31:49 np0005558241 NetworkManager[50376]: <info>  [1765614709.7893] device (tap35b172ab-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:31:49 np0005558241 NetworkManager[50376]: <info>  [1765614709.7904] device (tap35b172ab-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:31:49 np0005558241 systemd[1]: Started Virtual Machine qemu-81-instance-00000046.
Dec 13 03:31:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.806 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[42298666-283f-403a-8aae-f3027db99df4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.837 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1331c18b-1ebc-4d4a-9a69-2018f7b7fe89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.842 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[46e60fa4-e0e5-43ec-a96c-6c9773c3f5e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:49 np0005558241 NetworkManager[50376]: <info>  [1765614709.8435] manager: (tap527d37da-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/294)
Dec 13 03:31:49 np0005558241 podman[315326]: 2025-12-13 08:31:49.785164742 +0000 UTC m=+0.040994916 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:31:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.888 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9caf9d28-f92e-4a8b-bb23-8e73e761bb95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.894 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2efdfb5b-b46f-4417-be67-07edd0b29367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:49 np0005558241 NetworkManager[50376]: <info>  [1765614709.9246] device (tap527d37da-e0): carrier: link connected
Dec 13 03:31:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.933 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[463acf46-57a3-4db5-a36b-e78be3081404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.966 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[643f2016-caae-4688-ac6b-9844537765f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527d37da-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:e1:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728714, 'reachable_time': 35714, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315373, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:49.993 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cf78d08c-b81a-4901-affb-5b40a0b749a7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:e196'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 728714, 'tstamp': 728714}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315374, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.013 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea13872-f6a0-4e06-9fbe-349daad71488]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527d37da-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:e1:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728714, 'reachable_time': 35714, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315375, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.050 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7ca510a7-5711-47da-b0a2-593d596adaf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.132 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d9364f90-e5c3-4d4a-a356-119125340703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.133 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527d37da-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.134 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.134 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap527d37da-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.136 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:50 np0005558241 NetworkManager[50376]: <info>  [1765614710.1369] manager: (tap527d37da-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/295)
Dec 13 03:31:50 np0005558241 kernel: tap527d37da-e0: entered promiscuous mode
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.139 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.140 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:50 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:50Z|00657|binding|INFO|Releasing lport 9bf9e6e9-c189-485c-8803-c58be1ee6099 from this chassis (sb_readonly=0)
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.139 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap527d37da-e0, col_values=(('external_ids', {'iface-id': '9bf9e6e9-c189-485c-8803-c58be1ee6099'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.161 248514 DEBUG nova.compute.manager [req-c2562ef3-b1c9-42e8-8441-cd37feabf0cf req-4358cf95-c8ed-4ca7-8af0-126db98e9c81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Received event network-vif-plugged-35b172ab-1be7-44b2-9a76-0f60de6851ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.162 248514 DEBUG oslo_concurrency.lockutils [req-c2562ef3-b1c9-42e8-8441-cd37feabf0cf req-4358cf95-c8ed-4ca7-8af0-126db98e9c81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.162 248514 DEBUG oslo_concurrency.lockutils [req-c2562ef3-b1c9-42e8-8441-cd37feabf0cf req-4358cf95-c8ed-4ca7-8af0-126db98e9c81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.162 248514 DEBUG oslo_concurrency.lockutils [req-c2562ef3-b1c9-42e8-8441-cd37feabf0cf req-4358cf95-c8ed-4ca7-8af0-126db98e9c81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.162 248514 DEBUG nova.compute.manager [req-c2562ef3-b1c9-42e8-8441-cd37feabf0cf req-4358cf95-c8ed-4ca7-8af0-126db98e9c81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Processing event network-vif-plugged-35b172ab-1be7-44b2-9a76-0f60de6851ab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.215 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.216 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/527d37da-eda0-4bfe-9f1d-310d58024d5d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/527d37da-eda0-4bfe-9f1d-310d58024d5d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.217 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d89f7720-9069-4fc6-b7a4-6a6442337d38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.218 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-527d37da-eda0-4bfe-9f1d-310d58024d5d
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/527d37da-eda0-4bfe-9f1d-310d58024d5d.pid.haproxy
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 527d37da-eda0-4bfe-9f1d-310d58024d5d
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.218 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'env', 'PROCESS_TAG=haproxy-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/527d37da-eda0-4bfe-9f1d-310d58024d5d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:31:50 np0005558241 podman[315326]: 2025-12-13 08:31:50.292846055 +0000 UTC m=+0.548676209 container create a5a814dc9ea53edc063f1902b5a55753d0cf88fe8bc026c7021e33cfc863cead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_heyrovsky, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.337 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:50 np0005558241 systemd[1]: Started libpod-conmon-a5a814dc9ea53edc063f1902b5a55753d0cf88fe8bc026c7021e33cfc863cead.scope.
Dec 13 03:31:50 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:31:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c6d3f60db00aa8c7ed42f546c3ad84461dd8ae4ba8069c90252e3fede71a4fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:31:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c6d3f60db00aa8c7ed42f546c3ad84461dd8ae4ba8069c90252e3fede71a4fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:31:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c6d3f60db00aa8c7ed42f546c3ad84461dd8ae4ba8069c90252e3fede71a4fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:31:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c6d3f60db00aa8c7ed42f546c3ad84461dd8ae4ba8069c90252e3fede71a4fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:31:50 np0005558241 podman[315326]: 2025-12-13 08:31:50.437809609 +0000 UTC m=+0.693639783 container init a5a814dc9ea53edc063f1902b5a55753d0cf88fe8bc026c7021e33cfc863cead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:31:50 np0005558241 podman[315326]: 2025-12-13 08:31:50.451518327 +0000 UTC m=+0.707348481 container start a5a814dc9ea53edc063f1902b5a55753d0cf88fe8bc026c7021e33cfc863cead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_heyrovsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 03:31:50 np0005558241 podman[315326]: 2025-12-13 08:31:50.472206198 +0000 UTC m=+0.728036452 container attach a5a814dc9ea53edc063f1902b5a55753d0cf88fe8bc026c7021e33cfc863cead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.613 248514 DEBUG nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.615 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614710.614457, 4d93482e-582f-4d44-ab53-87cd5f6aa66a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.615 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] VM Started (Lifecycle Event)#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.623 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.629 248514 INFO nova.virt.libvirt.driver [-] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Instance spawned successfully.#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.629 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.643 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.658 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.663 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.665 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.666 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.666 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.666 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.667 248514 DEBUG nova.virt.libvirt.driver [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.678 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.679 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614710.6147447, 4d93482e-582f-4d44-ab53-87cd5f6aa66a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.679 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.706 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.710 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614710.6232316, 4d93482e-582f-4d44-ab53-87cd5f6aa66a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.710 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:31:50 np0005558241 podman[315460]: 2025-12-13 08:31:50.71629886 +0000 UTC m=+0.074804570 container create 00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.741 248514 INFO nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Took 12.42 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.741 248514 DEBUG nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.742 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.751 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:31:50 np0005558241 podman[315460]: 2025-12-13 08:31:50.677538541 +0000 UTC m=+0.036044271 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:31:50 np0005558241 systemd[1]: Started libpod-conmon-00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223.scope.
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.797 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:31:50 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:31:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/375cfa30e4ad2df2d660069f03564a6feb3d318d0dbd2e0180ee77a13cc3fe04/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]: {
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:    "0": [
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:        {
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "devices": [
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "/dev/loop3"
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            ],
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "lv_name": "ceph_lv0",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "lv_size": "21470642176",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "name": "ceph_lv0",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "tags": {
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.cluster_name": "ceph",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.crush_device_class": "",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.encrypted": "0",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.objectstore": "bluestore",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.osd_id": "0",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.type": "block",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.vdo": "0",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.with_tpm": "0"
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            },
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "type": "block",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "vg_name": "ceph_vg0"
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:        }
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:    ],
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:    "1": [
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:        {
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "devices": [
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "/dev/loop4"
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            ],
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "lv_name": "ceph_lv1",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "lv_size": "21470642176",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "name": "ceph_lv1",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "tags": {
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.cluster_name": "ceph",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.crush_device_class": "",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.encrypted": "0",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.objectstore": "bluestore",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.osd_id": "1",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.type": "block",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.vdo": "0",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.with_tpm": "0"
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            },
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "type": "block",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "vg_name": "ceph_vg1"
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:        }
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:    ],
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:    "2": [
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:        {
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "devices": [
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "/dev/loop5"
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            ],
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "lv_name": "ceph_lv2",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "lv_size": "21470642176",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "name": "ceph_lv2",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "tags": {
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.cluster_name": "ceph",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.crush_device_class": "",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.encrypted": "0",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.objectstore": "bluestore",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.osd_id": "2",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.type": "block",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.vdo": "0",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:                "ceph.with_tpm": "0"
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            },
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "type": "block",
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:            "vg_name": "ceph_vg2"
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:        }
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]:    ]
Dec 13 03:31:50 np0005558241 great_heyrovsky[315409]: }
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.825 248514 INFO nova.compute.manager [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Took 13.61 seconds to build instance.#033[00m
Dec 13 03:31:50 np0005558241 podman[315460]: 2025-12-13 08:31:50.839054736 +0000 UTC m=+0.197560466 container init 00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 03:31:50 np0005558241 podman[315460]: 2025-12-13 08:31:50.845609412 +0000 UTC m=+0.204115122 container start 00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 03:31:50 np0005558241 nova_compute[248510]: 2025-12-13 08:31:50.845 248514 DEBUG oslo_concurrency.lockutils [None req-2e4877dd-60f6-4ff2-a515-4519dd8fb76a 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:50 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[315479]: [NOTICE]   (315483) : New worker (315485) forked
Dec 13 03:31:50 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[315479]: [NOTICE]   (315483) : Loading success.
Dec 13 03:31:50 np0005558241 systemd[1]: libpod-a5a814dc9ea53edc063f1902b5a55753d0cf88fe8bc026c7021e33cfc863cead.scope: Deactivated successfully.
Dec 13 03:31:50 np0005558241 podman[315326]: 2025-12-13 08:31:50.897151187 +0000 UTC m=+1.152981341 container died a5a814dc9ea53edc063f1902b5a55753d0cf88fe8bc026c7021e33cfc863cead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_heyrovsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.967 158419 INFO oslo_service.service [-] Child 315396 exited with status 0#033[00m
Dec 13 03:31:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:50.968 158419 WARNING oslo_service.service [-] pid 315396 not in child list#033[00m
Dec 13 03:31:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:31:51 np0005558241 nova_compute[248510]: 2025-12-13 08:31:51.226 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec 13 03:31:51 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7c6d3f60db00aa8c7ed42f546c3ad84461dd8ae4ba8069c90252e3fede71a4fa-merged.mount: Deactivated successfully.
Dec 13 03:31:51 np0005558241 podman[315326]: 2025-12-13 08:31:51.253797621 +0000 UTC m=+1.509627775 container remove a5a814dc9ea53edc063f1902b5a55753d0cf88fe8bc026c7021e33cfc863cead (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_heyrovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:31:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:51Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:63:3a:53 10.100.0.3
Dec 13 03:31:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:51Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:63:3a:53 10.100.0.3
Dec 13 03:31:51 np0005558241 systemd[1]: libpod-conmon-a5a814dc9ea53edc063f1902b5a55753d0cf88fe8bc026c7021e33cfc863cead.scope: Deactivated successfully.
Dec 13 03:31:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2038: 321 pgs: 321 active+clean; 287 MiB data, 642 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.5 MiB/s wr, 217 op/s
Dec 13 03:31:51 np0005558241 podman[315571]: 2025-12-13 08:31:51.797962043 +0000 UTC m=+0.029084297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:31:51 np0005558241 podman[315571]: 2025-12-13 08:31:51.904349607 +0000 UTC m=+0.135471841 container create 96e127678286a7b91689e6358bade526ceb9d6c312a6112ffc258c7f14429d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_turing, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 03:31:52 np0005558241 nova_compute[248510]: 2025-12-13 08:31:52.240 248514 DEBUG nova.compute.manager [req-9c657b90-146c-4291-804e-fe16ef2bc674 req-4360c6d7-c34c-447e-ad77-3e79128bc9e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Received event network-vif-plugged-35b172ab-1be7-44b2-9a76-0f60de6851ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:31:52 np0005558241 nova_compute[248510]: 2025-12-13 08:31:52.241 248514 DEBUG oslo_concurrency.lockutils [req-9c657b90-146c-4291-804e-fe16ef2bc674 req-4360c6d7-c34c-447e-ad77-3e79128bc9e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:52 np0005558241 nova_compute[248510]: 2025-12-13 08:31:52.243 248514 DEBUG oslo_concurrency.lockutils [req-9c657b90-146c-4291-804e-fe16ef2bc674 req-4360c6d7-c34c-447e-ad77-3e79128bc9e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:52 np0005558241 nova_compute[248510]: 2025-12-13 08:31:52.243 248514 DEBUG oslo_concurrency.lockutils [req-9c657b90-146c-4291-804e-fe16ef2bc674 req-4360c6d7-c34c-447e-ad77-3e79128bc9e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:52 np0005558241 nova_compute[248510]: 2025-12-13 08:31:52.244 248514 DEBUG nova.compute.manager [req-9c657b90-146c-4291-804e-fe16ef2bc674 req-4360c6d7-c34c-447e-ad77-3e79128bc9e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] No waiting events found dispatching network-vif-plugged-35b172ab-1be7-44b2-9a76-0f60de6851ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:31:52 np0005558241 nova_compute[248510]: 2025-12-13 08:31:52.245 248514 WARNING nova.compute.manager [req-9c657b90-146c-4291-804e-fe16ef2bc674 req-4360c6d7-c34c-447e-ad77-3e79128bc9e7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Received unexpected event network-vif-plugged-35b172ab-1be7-44b2-9a76-0f60de6851ab for instance with vm_state active and task_state None.#033[00m
Dec 13 03:31:52 np0005558241 systemd[1]: Started libpod-conmon-96e127678286a7b91689e6358bade526ceb9d6c312a6112ffc258c7f14429d64.scope.
Dec 13 03:31:52 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:31:52 np0005558241 podman[315571]: 2025-12-13 08:31:52.586611825 +0000 UTC m=+0.817734099 container init 96e127678286a7b91689e6358bade526ceb9d6c312a6112ffc258c7f14429d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:31:52 np0005558241 podman[315571]: 2025-12-13 08:31:52.597227687 +0000 UTC m=+0.828349921 container start 96e127678286a7b91689e6358bade526ceb9d6c312a6112ffc258c7f14429d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_turing, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 03:31:52 np0005558241 podman[315571]: 2025-12-13 08:31:52.604587807 +0000 UTC m=+0.835710091 container attach 96e127678286a7b91689e6358bade526ceb9d6c312a6112ffc258c7f14429d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_turing, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:31:52 np0005558241 dazzling_turing[315588]: 167 167
Dec 13 03:31:52 np0005558241 systemd[1]: libpod-96e127678286a7b91689e6358bade526ceb9d6c312a6112ffc258c7f14429d64.scope: Deactivated successfully.
Dec 13 03:31:52 np0005558241 podman[315571]: 2025-12-13 08:31:52.610746165 +0000 UTC m=+0.841868399 container died 96e127678286a7b91689e6358bade526ceb9d6c312a6112ffc258c7f14429d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_turing, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 13 03:31:52 np0005558241 systemd[1]: var-lib-containers-storage-overlay-fc4f9154a2fcda621531acdc66d3e03531958e0941ddf0c98db6f6138c4e2df4-merged.mount: Deactivated successfully.
Dec 13 03:31:52 np0005558241 nova_compute[248510]: 2025-12-13 08:31:52.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:31:52 np0005558241 nova_compute[248510]: 2025-12-13 08:31:52.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:31:52 np0005558241 podman[315571]: 2025-12-13 08:31:52.791971378 +0000 UTC m=+1.023093632 container remove 96e127678286a7b91689e6358bade526ceb9d6c312a6112ffc258c7f14429d64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_turing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 03:31:52 np0005558241 nova_compute[248510]: 2025-12-13 08:31:52.796 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:52 np0005558241 nova_compute[248510]: 2025-12-13 08:31:52.796 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:52 np0005558241 nova_compute[248510]: 2025-12-13 08:31:52.797 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:52 np0005558241 nova_compute[248510]: 2025-12-13 08:31:52.797 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:31:52 np0005558241 nova_compute[248510]: 2025-12-13 08:31:52.797 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:52 np0005558241 systemd[1]: libpod-conmon-96e127678286a7b91689e6358bade526ceb9d6c312a6112ffc258c7f14429d64.scope: Deactivated successfully.
Dec 13 03:31:52 np0005558241 nova_compute[248510]: 2025-12-13 08:31:52.928 248514 DEBUG nova.compute.manager [None req-3b4cf12d-beea-41c5-9501-a2afa6d243ee 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:31:52 np0005558241 nova_compute[248510]: 2025-12-13 08:31:52.973 248514 INFO nova.compute.manager [None req-3b4cf12d-beea-41c5-9501-a2afa6d243ee 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] instance snapshotting#033[00m
Dec 13 03:31:53 np0005558241 podman[315615]: 2025-12-13 08:31:53.005905537 +0000 UTC m=+0.039040882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:31:53 np0005558241 podman[315615]: 2025-12-13 08:31:53.098777872 +0000 UTC m=+0.131913177 container create 51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mccarthy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 03:31:53 np0005558241 systemd[1]: Started libpod-conmon-51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658.scope.
Dec 13 03:31:53 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:31:53 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465adff802e94268f49fd0fbcb5082f0339d44b90de7cfebfe9614c7fe06396f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:31:53 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465adff802e94268f49fd0fbcb5082f0339d44b90de7cfebfe9614c7fe06396f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:31:53 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465adff802e94268f49fd0fbcb5082f0339d44b90de7cfebfe9614c7fe06396f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:31:53 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465adff802e94268f49fd0fbcb5082f0339d44b90de7cfebfe9614c7fe06396f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:31:53 np0005558241 nova_compute[248510]: 2025-12-13 08:31:53.212 248514 WARNING nova.compute.manager [None req-3b4cf12d-beea-41c5-9501-a2afa6d243ee 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Image not found during snapshot: nova.exception.ImageNotFound: Image 612b7faa-fd2d-4fbd-b80b-aebf1594e00f could not be found.#033[00m
Dec 13 03:31:53 np0005558241 podman[315615]: 2025-12-13 08:31:53.225994153 +0000 UTC m=+0.259129458 container init 51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:31:53 np0005558241 podman[315615]: 2025-12-13 08:31:53.236311022 +0000 UTC m=+0.269446327 container start 51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mccarthy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:31:53 np0005558241 podman[315615]: 2025-12-13 08:31:53.241292409 +0000 UTC m=+0.274427724 container attach 51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mccarthy, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:31:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:31:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/531632031' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:31:53 np0005558241 nova_compute[248510]: 2025-12-13 08:31:53.444 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:53 np0005558241 nova_compute[248510]: 2025-12-13 08:31:53.499 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2039: 321 pgs: 321 active+clean; 303 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.7 MiB/s wr, 204 op/s
Dec 13 03:31:53 np0005558241 nova_compute[248510]: 2025-12-13 08:31:53.600 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:31:53 np0005558241 nova_compute[248510]: 2025-12-13 08:31:53.600 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:31:53 np0005558241 nova_compute[248510]: 2025-12-13 08:31:53.607 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:31:53 np0005558241 nova_compute[248510]: 2025-12-13 08:31:53.608 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:31:53 np0005558241 nova_compute[248510]: 2025-12-13 08:31:53.615 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:31:53 np0005558241 nova_compute[248510]: 2025-12-13 08:31:53.615 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:31:53 np0005558241 nova_compute[248510]: 2025-12-13 08:31:53.619 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:31:53 np0005558241 nova_compute[248510]: 2025-12-13 08:31:53.619 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:31:53 np0005558241 nova_compute[248510]: 2025-12-13 08:31:53.908 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:31:53 np0005558241 nova_compute[248510]: 2025-12-13 08:31:53.909 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3278MB free_disk=59.85584915988147GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:31:53 np0005558241 nova_compute[248510]: 2025-12-13 08:31:53.909 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:53 np0005558241 nova_compute[248510]: 2025-12-13 08:31:53.909 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:54 np0005558241 kernel: tap7b3b1c0a-88 (unregistering): left promiscuous mode
Dec 13 03:31:54 np0005558241 NetworkManager[50376]: <info>  [1765614714.0161] device (tap7b3b1c0a-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:31:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:54Z|00658|binding|INFO|Releasing lport 7b3b1c0a-882e-4f33-a582-667d018090d4 from this chassis (sb_readonly=0)
Dec 13 03:31:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:54Z|00659|binding|INFO|Setting lport 7b3b1c0a-882e-4f33-a582-667d018090d4 down in Southbound
Dec 13 03:31:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:54Z|00660|binding|INFO|Removing iface tap7b3b1c0a-88 ovn-installed in OVS
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.032 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.037 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:56:12 10.100.0.13'], port_security=['fa:16:3e:41:56:12 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a9c6de9d-63c0-43a5-9d6e-be356e504837', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7b3b1c0a-882e-4f33-a582-667d018090d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:31:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.038 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7b3b1c0a-882e-4f33-a582-667d018090d4 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 unbound from our chassis#033[00m
Dec 13 03:31:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.040 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 13 03:31:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.043 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea92200-d0b1-475f-a9f7-477102b7fc5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.045 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 3ced27d6-a2a8-4ce3-a7e7-494270418542 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.046 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance a9c6de9d-63c0-43a5-9d6e-be356e504837 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.046 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance ce9adb21-8832-4d3e-867e-b0b49bdb6850 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.046 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 4d93482e-582f-4d44-ab53-87cd5f6aa66a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.046 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.047 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.055 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.071 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 13 03:31:54 np0005558241 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000044.scope: Deactivated successfully.
Dec 13 03:31:54 np0005558241 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000044.scope: Consumed 13.966s CPU time.
Dec 13 03:31:54 np0005558241 systemd-machined[210538]: Machine qemu-78-instance-00000044 terminated.
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.105 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.107 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.123 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 13 03:31:54 np0005558241 lvm[315781]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:31:54 np0005558241 lvm[315770]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:31:54 np0005558241 lvm[315770]: VG ceph_vg0 finished
Dec 13 03:31:54 np0005558241 lvm[315781]: VG ceph_vg1 finished
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.149 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 13 03:31:54 np0005558241 lvm[315789]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:31:54 np0005558241 lvm[315789]: VG ceph_vg2 finished
Dec 13 03:31:54 np0005558241 podman[315725]: 2025-12-13 08:31:54.153660679 +0000 UTC m=+0.086664065 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 03:31:54 np0005558241 podman[315723]: 2025-12-13 08:31:54.17157431 +0000 UTC m=+0.111697156 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.179 248514 DEBUG oslo_concurrency.lockutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.180 248514 DEBUG oslo_concurrency.lockutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.180 248514 DEBUG oslo_concurrency.lockutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.180 248514 DEBUG oslo_concurrency.lockutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.180 248514 DEBUG oslo_concurrency.lockutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.181 248514 INFO nova.compute.manager [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Terminating instance#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.183 248514 DEBUG nova.compute.manager [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:31:54 np0005558241 podman[315724]: 2025-12-13 08:31:54.18672562 +0000 UTC m=+0.132856508 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202)
Dec 13 03:31:54 np0005558241 lvm[315798]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:31:54 np0005558241 lvm[315798]: VG ceph_vg1 finished
Dec 13 03:31:54 np0005558241 vigorous_mccarthy[315646]: {}
Dec 13 03:31:54 np0005558241 lvm[315801]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:31:54 np0005558241 lvm[315801]: VG ceph_vg1 finished
Dec 13 03:31:54 np0005558241 kernel: tap35b172ab-1b (unregistering): left promiscuous mode
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.254 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:54 np0005558241 systemd[1]: libpod-51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658.scope: Deactivated successfully.
Dec 13 03:31:54 np0005558241 systemd[1]: libpod-51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658.scope: Consumed 1.547s CPU time.
Dec 13 03:31:54 np0005558241 podman[315615]: 2025-12-13 08:31:54.260313795 +0000 UTC m=+1.293449100 container died 51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:31:54 np0005558241 NetworkManager[50376]: <info>  [1765614714.2617] device (tap35b172ab-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:31:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:54Z|00661|binding|INFO|Releasing lport 35b172ab-1be7-44b2-9a76-0f60de6851ab from this chassis (sb_readonly=0)
Dec 13 03:31:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:54Z|00662|binding|INFO|Setting lport 35b172ab-1be7-44b2-9a76-0f60de6851ab down in Southbound
Dec 13 03:31:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:54Z|00663|binding|INFO|Removing iface tap35b172ab-1b ovn-installed in OVS
Dec 13 03:31:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.296 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:86:fc 10.100.0.3'], port_security=['fa:16:3e:67:86:fc 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4d93482e-582f-4d44-ab53-87cd5f6aa66a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2de328b46a6e4f588f5e2a254db7f4ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b109409-2f17-43b9-91ce-3ee865f0de12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6896dc77-88b0-4868-833d-01241883d6af, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=35b172ab-1be7-44b2-9a76-0f60de6851ab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:31:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.297 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 35b172ab-1be7-44b2-9a76-0f60de6851ab in datapath 527d37da-eda0-4bfe-9f1d-310d58024d5d unbound from our chassis#033[00m
Dec 13 03:31:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.299 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 527d37da-eda0-4bfe-9f1d-310d58024d5d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:31:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.301 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cc73a615-1dd2-4935-82f8-81dd4302a5e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.302 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d namespace which is not needed anymore#033[00m
Dec 13 03:31:54 np0005558241 systemd[1]: var-lib-containers-storage-overlay-465adff802e94268f49fd0fbcb5082f0339d44b90de7cfebfe9614c7fe06396f-merged.mount: Deactivated successfully.
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.315 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.321 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.324 248514 INFO nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance shutdown successfully after 13 seconds.#033[00m
Dec 13 03:31:54 np0005558241 podman[315615]: 2025-12-13 08:31:54.330670159 +0000 UTC m=+1.363805464 container remove 51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mccarthy, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.331 248514 INFO nova.virt.libvirt.driver [-] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance destroyed successfully.#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.331 248514 DEBUG nova.objects.instance [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'numa_topology' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:31:54 np0005558241 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000046.scope: Deactivated successfully.
Dec 13 03:31:54 np0005558241 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000046.scope: Consumed 3.936s CPU time.
Dec 13 03:31:54 np0005558241 systemd-machined[210538]: Machine qemu-81-instance-00000046 terminated.
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.349 248514 DEBUG nova.compute.manager [req-6bf0dc6c-1453-41c2-9f16-fb0ee92c7e92 req-a6da1dae-8d1c-47f8-9dee-73407737c161 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-unplugged-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.350 248514 DEBUG oslo_concurrency.lockutils [req-6bf0dc6c-1453-41c2-9f16-fb0ee92c7e92 req-a6da1dae-8d1c-47f8-9dee-73407737c161 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.350 248514 DEBUG oslo_concurrency.lockutils [req-6bf0dc6c-1453-41c2-9f16-fb0ee92c7e92 req-a6da1dae-8d1c-47f8-9dee-73407737c161 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.350 248514 DEBUG oslo_concurrency.lockutils [req-6bf0dc6c-1453-41c2-9f16-fb0ee92c7e92 req-a6da1dae-8d1c-47f8-9dee-73407737c161 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.350 248514 DEBUG nova.compute.manager [req-6bf0dc6c-1453-41c2-9f16-fb0ee92c7e92 req-a6da1dae-8d1c-47f8-9dee-73407737c161 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] No waiting events found dispatching network-vif-unplugged-7b3b1c0a-882e-4f33-a582-667d018090d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.351 248514 WARNING nova.compute.manager [req-6bf0dc6c-1453-41c2-9f16-fb0ee92c7e92 req-a6da1dae-8d1c-47f8-9dee-73407737c161 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received unexpected event network-vif-unplugged-7b3b1c0a-882e-4f33-a582-667d018090d4 for instance with vm_state active and task_state rescuing.#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.358 248514 INFO nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Attempting rescue#033[00m
Dec 13 03:31:54 np0005558241 systemd[1]: libpod-conmon-51cc74ad892e19ad2253ccb27a4480373c5f417dfc7f4b71fb97700bf4ce1658.scope: Deactivated successfully.
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.359 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.364 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.365 248514 INFO nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Creating image(s)#033[00m
Dec 13 03:31:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:31:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:31:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.391 248514 DEBUG nova.storage.rbd_utils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.398 248514 DEBUG nova.objects.instance [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:31:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.467 248514 DEBUG nova.storage.rbd_utils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:54 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[315479]: [NOTICE]   (315483) : haproxy version is 2.8.14-c23fe91
Dec 13 03:31:54 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[315479]: [NOTICE]   (315483) : path to executable is /usr/sbin/haproxy
Dec 13 03:31:54 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[315479]: [WARNING]  (315483) : Exiting Master process...
Dec 13 03:31:54 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[315479]: [ALERT]    (315483) : Current worker (315485) exited with code 143 (Terminated)
Dec 13 03:31:54 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[315479]: [WARNING]  (315483) : All workers exited. Exiting... (0)
Dec 13 03:31:54 np0005558241 systemd[1]: libpod-00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223.scope: Deactivated successfully.
Dec 13 03:31:54 np0005558241 conmon[315479]: conmon 00e45c34b9e65e4ee06b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223.scope/container/memory.events
Dec 13 03:31:54 np0005558241 podman[315863]: 2025-12-13 08:31:54.489122701 +0000 UTC m=+0.060494466 container died 00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.494 248514 DEBUG nova.storage.rbd_utils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.501 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:54 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223-userdata-shm.mount: Deactivated successfully.
Dec 13 03:31:54 np0005558241 systemd[1]: var-lib-containers-storage-overlay-375cfa30e4ad2df2d660069f03564a6feb3d318d0dbd2e0180ee77a13cc3fe04-merged.mount: Deactivated successfully.
Dec 13 03:31:54 np0005558241 podman[315863]: 2025-12-13 08:31:54.541375717 +0000 UTC m=+0.112747482 container cleanup 00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.547 248514 INFO nova.virt.libvirt.driver [-] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Instance destroyed successfully.#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.549 248514 DEBUG nova.objects.instance [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lazy-loading 'resources' on Instance uuid 4d93482e-582f-4d44-ab53-87cd5f6aa66a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:31:54 np0005558241 systemd[1]: libpod-conmon-00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223.scope: Deactivated successfully.
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.580 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.581 248514 DEBUG oslo_concurrency.lockutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.582 248514 DEBUG oslo_concurrency.lockutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.582 248514 DEBUG oslo_concurrency.lockutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.606 248514 DEBUG nova.storage.rbd_utils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:54 np0005558241 podman[315980]: 2025-12-13 08:31:54.608909269 +0000 UTC m=+0.046209954 container remove 00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.612 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.617 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[92dfc657-244e-4a84-a062-910ac6cbe7aa]: (4, ('Sat Dec 13 08:31:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d (00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223)\n00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223\nSat Dec 13 08:31:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d (00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223)\n00e45c34b9e65e4ee06bf1663bde2d234156331496f3124c982d3050fa4bf223\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.618 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c6344b2b-b5dd-4d50-b482-2fb7e7d3b3d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.619 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527d37da-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:54 np0005558241 kernel: tap527d37da-e0: left promiscuous mode
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.653 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.657 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11b5d54c-e3a2-4b20-a2b0-0b5dc30f9500]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.675 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6a4548-a5eb-406c-b408-b32dd14e67e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.677 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[81758caa-c0f8-452e-8dfa-47f6ad4a08b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.696 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac9d06d-a3d7-4dd7-87b4-12391cfdb7d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728704, 'reachable_time': 16366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316022, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.699 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:31:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:54.699 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7d8a85-6d22-43e2-be68-e8217c70eb37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:54 np0005558241 systemd[1]: run-netns-ovnmeta\x2d527d37da\x2deda0\x2d4bfe\x2d9f1d\x2d310d58024d5d.mount: Deactivated successfully.
Dec 13 03:31:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:31:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1284377546' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:31:54 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:31:54 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.880 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.888 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.991 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.379s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:54 np0005558241 nova_compute[248510]: 2025-12-13 08:31:54.992 248514 DEBUG nova.objects.instance [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'migration_context' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.308 248514 DEBUG nova.compute.manager [req-ee51a36d-050c-4f66-b7a9-c4d08be0da62 req-d96c2820-6184-4408-93b3-d9c9a27c881a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Received event network-vif-unplugged-35b172ab-1be7-44b2-9a76-0f60de6851ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.309 248514 DEBUG oslo_concurrency.lockutils [req-ee51a36d-050c-4f66-b7a9-c4d08be0da62 req-d96c2820-6184-4408-93b3-d9c9a27c881a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.309 248514 DEBUG oslo_concurrency.lockutils [req-ee51a36d-050c-4f66-b7a9-c4d08be0da62 req-d96c2820-6184-4408-93b3-d9c9a27c881a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.309 248514 DEBUG oslo_concurrency.lockutils [req-ee51a36d-050c-4f66-b7a9-c4d08be0da62 req-d96c2820-6184-4408-93b3-d9c9a27c881a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.310 248514 DEBUG nova.compute.manager [req-ee51a36d-050c-4f66-b7a9-c4d08be0da62 req-d96c2820-6184-4408-93b3-d9c9a27c881a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] No waiting events found dispatching network-vif-unplugged-35b172ab-1be7-44b2-9a76-0f60de6851ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.310 248514 DEBUG nova.compute.manager [req-ee51a36d-050c-4f66-b7a9-c4d08be0da62 req-d96c2820-6184-4408-93b3-d9c9a27c881a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Received event network-vif-unplugged-35b172ab-1be7-44b2-9a76-0f60de6851ab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.340 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.358 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.362 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.362 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Start _get_guest_xml network_info=[{"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1650812198-network", "vif_mac": "fa:16:3e:41:56:12"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.362 248514 DEBUG nova.objects.instance [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'resources' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.364 248514 DEBUG nova.virt.libvirt.vif [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:31:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1782155204',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1782155204',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1782155204',id=70,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:31:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2de328b46a6e4f588f5e2a254db7f4ef',ramdisk_id='',reservation_id='r-e9s8cfpo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',imag
e_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1826994500',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:31:53Z,user_data=None,user_id='91d0d3efedc943b48ad0fc4295b6fc7c',uuid=4d93482e-582f-4d44-ab53-87cd5f6aa66a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.364 248514 DEBUG nova.network.os_vif_util [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converting VIF {"id": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "address": "fa:16:3e:67:86:fc", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35b172ab-1b", "ovs_interfaceid": "35b172ab-1be7-44b2-9a76-0f60de6851ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.365 248514 DEBUG nova.network.os_vif_util [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:86:fc,bridge_name='br-int',has_traffic_filtering=True,id=35b172ab-1be7-44b2-9a76-0f60de6851ab,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b172ab-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.365 248514 DEBUG os_vif [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:86:fc,bridge_name='br-int',has_traffic_filtering=True,id=35b172ab-1be7-44b2-9a76-0f60de6851ab,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b172ab-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.367 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.368 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35b172ab-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.370 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.372 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.375 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.377 248514 INFO os_vif [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:86:fc,bridge_name='br-int',has_traffic_filtering=True,id=35b172ab-1be7-44b2-9a76-0f60de6851ab,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35b172ab-1b')#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.403 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.404 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.405 248514 WARNING nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.413 248514 DEBUG nova.virt.libvirt.host [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:31:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:55.412 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:55.413 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.413 248514 DEBUG nova.virt.libvirt.host [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:31:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:55.414 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.418 248514 DEBUG nova.virt.libvirt.host [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.419 248514 DEBUG nova.virt.libvirt.host [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.419 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.419 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.419 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.420 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.420 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.420 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.420 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.420 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.420 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.421 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.421 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.421 248514 DEBUG nova.virt.hardware [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.421 248514 DEBUG nova.objects.instance [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.443 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2040: 321 pgs: 321 active+clean; 326 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.3 MiB/s wr, 261 op/s
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.639 248514 INFO nova.virt.libvirt.driver [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Deleting instance files /var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a_del#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.641 248514 INFO nova.virt.libvirt.driver [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Deletion of /var/lib/nova/instances/4d93482e-582f-4d44-ab53-87cd5f6aa66a_del complete#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.701 248514 INFO nova.compute.manager [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Took 1.52 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.702 248514 DEBUG oslo.service.loopingcall [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.702 248514 DEBUG nova.compute.manager [-] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:31:55 np0005558241 nova_compute[248510]: 2025-12-13 08:31:55.703 248514 DEBUG nova.network.neutron [-] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:31:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:31:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1480405954' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:31:56 np0005558241 nova_compute[248510]: 2025-12-13 08:31:56.030 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:56 np0005558241 nova_compute[248510]: 2025-12-13 08:31:56.032 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:31:56 np0005558241 nova_compute[248510]: 2025-12-13 08:31:56.404 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:31:56 np0005558241 nova_compute[248510]: 2025-12-13 08:31:56.404 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:31:56 np0005558241 nova_compute[248510]: 2025-12-13 08:31:56.405 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:31:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:31:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2317881052' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:31:56 np0005558241 nova_compute[248510]: 2025-12-13 08:31:56.611 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:56 np0005558241 nova_compute[248510]: 2025-12-13 08:31:56.612 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:56 np0005558241 nova_compute[248510]: 2025-12-13 08:31:56.922 248514 DEBUG nova.compute.manager [req-7a948055-ef2f-40e1-aeb5-977f99e84d48 req-fb117710-6550-4f9f-9235-a46ed77f911d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:31:56 np0005558241 nova_compute[248510]: 2025-12-13 08:31:56.923 248514 DEBUG oslo_concurrency.lockutils [req-7a948055-ef2f-40e1-aeb5-977f99e84d48 req-fb117710-6550-4f9f-9235-a46ed77f911d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:56 np0005558241 nova_compute[248510]: 2025-12-13 08:31:56.923 248514 DEBUG oslo_concurrency.lockutils [req-7a948055-ef2f-40e1-aeb5-977f99e84d48 req-fb117710-6550-4f9f-9235-a46ed77f911d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:56 np0005558241 nova_compute[248510]: 2025-12-13 08:31:56.923 248514 DEBUG oslo_concurrency.lockutils [req-7a948055-ef2f-40e1-aeb5-977f99e84d48 req-fb117710-6550-4f9f-9235-a46ed77f911d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:56 np0005558241 nova_compute[248510]: 2025-12-13 08:31:56.924 248514 DEBUG nova.compute.manager [req-7a948055-ef2f-40e1-aeb5-977f99e84d48 req-fb117710-6550-4f9f-9235-a46ed77f911d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] No waiting events found dispatching network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:31:56 np0005558241 nova_compute[248510]: 2025-12-13 08:31:56.924 248514 WARNING nova.compute.manager [req-7a948055-ef2f-40e1-aeb5-977f99e84d48 req-fb117710-6550-4f9f-9235-a46ed77f911d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received unexpected event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 for instance with vm_state active and task_state rescuing.#033[00m
Dec 13 03:31:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:31:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/42037945' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.143 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.145 248514 DEBUG nova.virt.libvirt.vif [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:31:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1742357064',display_name='tempest-ServerRescueTestJSON-server-1742357064',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1742357064',id=68,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:31:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='71e2453379684f0ca0563f8c370ea4a3',ramdisk_id='',reservation_id='r-hpc6t4lx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1425963100',owner_user_name='tempest-ServerRescueTestJSON-1425963100-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:31:36Z,user_data=None,user_id='93eec08d500a4f03afb3281e9899bd6a',uuid=a9c6de9d-63c0-43a5-9d6e-be356e504837,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1650812198-network", "vif_mac": "fa:16:3e:41:56:12"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.145 248514 DEBUG nova.network.os_vif_util [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converting VIF {"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1650812198-network", "vif_mac": "fa:16:3e:41:56:12"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.146 248514 DEBUG nova.network.os_vif_util [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:41:56:12,bridge_name='br-int',has_traffic_filtering=True,id=7b3b1c0a-882e-4f33-a582-667d018090d4,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3b1c0a-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.148 248514 DEBUG nova.objects.instance [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.175 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:  <uuid>a9c6de9d-63c0-43a5-9d6e-be356e504837</uuid>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:  <name>instance-00000044</name>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerRescueTestJSON-server-1742357064</nova:name>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:31:55</nova:creationTime>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:        <nova:user uuid="93eec08d500a4f03afb3281e9899bd6a">tempest-ServerRescueTestJSON-1425963100-project-member</nova:user>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:        <nova:project uuid="71e2453379684f0ca0563f8c370ea4a3">tempest-ServerRescueTestJSON-1425963100</nova:project>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:        <nova:port uuid="7b3b1c0a-882e-4f33-a582-667d018090d4">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <entry name="serial">a9c6de9d-63c0-43a5-9d6e-be356e504837</entry>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <entry name="uuid">a9c6de9d-63c0-43a5-9d6e-be356e504837</entry>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.rescue">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/a9c6de9d-63c0-43a5-9d6e-be356e504837_disk">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <target dev="vdb" bus="virtio"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config.rescue">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:41:56:12"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <target dev="tap7b3b1c0a-88"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/console.log" append="off"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:31:57 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:31:57 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:31:57 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:31:57 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.184 248514 INFO nova.virt.libvirt.driver [-] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance destroyed successfully.#033[00m
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.291 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.291 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.292 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.292 248514 DEBUG nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No VIF found with MAC fa:16:3e:41:56:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.293 248514 INFO nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Using config drive#033[00m
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.315 248514 DEBUG nova.storage.rbd_utils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.342 248514 DEBUG nova.objects.instance [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.408 248514 DEBUG nova.objects.instance [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'keypairs' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:31:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2041: 321 pgs: 321 active+clean; 326 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.3 MiB/s wr, 250 op/s
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.930 248514 DEBUG nova.compute.manager [req-a6fb2437-d538-4c3e-baa5-5f0f43f728d5 req-616cd1b2-3d5c-443c-996b-5813e25466ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Received event network-vif-plugged-35b172ab-1be7-44b2-9a76-0f60de6851ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.930 248514 DEBUG oslo_concurrency.lockutils [req-a6fb2437-d538-4c3e-baa5-5f0f43f728d5 req-616cd1b2-3d5c-443c-996b-5813e25466ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.931 248514 DEBUG oslo_concurrency.lockutils [req-a6fb2437-d538-4c3e-baa5-5f0f43f728d5 req-616cd1b2-3d5c-443c-996b-5813e25466ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.931 248514 DEBUG oslo_concurrency.lockutils [req-a6fb2437-d538-4c3e-baa5-5f0f43f728d5 req-616cd1b2-3d5c-443c-996b-5813e25466ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.931 248514 DEBUG nova.compute.manager [req-a6fb2437-d538-4c3e-baa5-5f0f43f728d5 req-616cd1b2-3d5c-443c-996b-5813e25466ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] No waiting events found dispatching network-vif-plugged-35b172ab-1be7-44b2-9a76-0f60de6851ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:31:57 np0005558241 nova_compute[248510]: 2025-12-13 08:31:57.932 248514 WARNING nova.compute.manager [req-a6fb2437-d538-4c3e-baa5-5f0f43f728d5 req-616cd1b2-3d5c-443c-996b-5813e25466ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Received unexpected event network-vif-plugged-35b172ab-1be7-44b2-9a76-0f60de6851ab for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:31:58 np0005558241 nova_compute[248510]: 2025-12-13 08:31:58.491 248514 INFO nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Creating config drive at /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config.rescue#033[00m
Dec 13 03:31:58 np0005558241 nova_compute[248510]: 2025-12-13 08:31:58.498 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7oounzy0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:58 np0005558241 nova_compute[248510]: 2025-12-13 08:31:58.544 248514 DEBUG nova.network.neutron [-] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:31:58 np0005558241 nova_compute[248510]: 2025-12-13 08:31:58.566 248514 INFO nova.compute.manager [-] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Took 2.86 seconds to deallocate network for instance.#033[00m
Dec 13 03:31:58 np0005558241 nova_compute[248510]: 2025-12-13 08:31:58.637 248514 DEBUG oslo_concurrency.lockutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:31:58 np0005558241 nova_compute[248510]: 2025-12-13 08:31:58.637 248514 DEBUG oslo_concurrency.lockutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:31:58 np0005558241 nova_compute[248510]: 2025-12-13 08:31:58.652 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7oounzy0" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:58 np0005558241 nova_compute[248510]: 2025-12-13 08:31:58.677 248514 DEBUG nova.storage.rbd_utils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:31:58 np0005558241 nova_compute[248510]: 2025-12-13 08:31:58.682 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config.rescue a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:58 np0005558241 nova_compute[248510]: 2025-12-13 08:31:58.806 248514 DEBUG oslo_concurrency.processutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:31:58 np0005558241 nova_compute[248510]: 2025-12-13 08:31:58.847 248514 DEBUG oslo_concurrency.processutils [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config.rescue a9c6de9d-63c0-43a5-9d6e-be356e504837_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:58 np0005558241 nova_compute[248510]: 2025-12-13 08:31:58.848 248514 INFO nova.virt.libvirt.driver [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Deleting local config drive /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837/disk.config.rescue because it was imported into RBD.#033[00m
Dec 13 03:31:58 np0005558241 kernel: tap7b3b1c0a-88: entered promiscuous mode
Dec 13 03:31:58 np0005558241 NetworkManager[50376]: <info>  [1765614718.9176] manager: (tap7b3b1c0a-88): new Tun device (/org/freedesktop/NetworkManager/Devices/296)
Dec 13 03:31:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:58Z|00664|binding|INFO|Claiming lport 7b3b1c0a-882e-4f33-a582-667d018090d4 for this chassis.
Dec 13 03:31:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:58Z|00665|binding|INFO|7b3b1c0a-882e-4f33-a582-667d018090d4: Claiming fa:16:3e:41:56:12 10.100.0.13
Dec 13 03:31:58 np0005558241 nova_compute[248510]: 2025-12-13 08:31:58.925 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:58.935 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:56:12 10.100.0.13'], port_security=['fa:16:3e:41:56:12 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a9c6de9d-63c0-43a5-9d6e-be356e504837', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7b3b1c0a-882e-4f33-a582-667d018090d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:31:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:58.937 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7b3b1c0a-882e-4f33-a582-667d018090d4 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 bound to our chassis#033[00m
Dec 13 03:31:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:58.938 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 13 03:31:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:58Z|00666|binding|INFO|Setting lport 7b3b1c0a-882e-4f33-a582-667d018090d4 up in Southbound
Dec 13 03:31:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:31:58Z|00667|binding|INFO|Setting lport 7b3b1c0a-882e-4f33-a582-667d018090d4 ovn-installed in OVS
Dec 13 03:31:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:31:58.939 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5fddc54e-3923-405f-ab48-e67ee1c92370]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:31:58 np0005558241 nova_compute[248510]: 2025-12-13 08:31:58.939 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:58 np0005558241 nova_compute[248510]: 2025-12-13 08:31:58.942 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:58 np0005558241 nova_compute[248510]: 2025-12-13 08:31:58.946 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:31:58 np0005558241 systemd-udevd[316204]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:31:58 np0005558241 NetworkManager[50376]: <info>  [1765614718.9698] device (tap7b3b1c0a-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:31:58 np0005558241 NetworkManager[50376]: <info>  [1765614718.9704] device (tap7b3b1c0a-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:31:58 np0005558241 systemd-machined[210538]: New machine qemu-82-instance-00000044.
Dec 13 03:31:58 np0005558241 systemd[1]: Started Virtual Machine qemu-82-instance-00000044.
Dec 13 03:31:59 np0005558241 nova_compute[248510]: 2025-12-13 08:31:59.451 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for a9c6de9d-63c0-43a5-9d6e-be356e504837 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 03:31:59 np0005558241 nova_compute[248510]: 2025-12-13 08:31:59.452 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614719.4505186, a9c6de9d-63c0-43a5-9d6e-be356e504837 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:31:59 np0005558241 nova_compute[248510]: 2025-12-13 08:31:59.452 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:31:59 np0005558241 nova_compute[248510]: 2025-12-13 08:31:59.459 248514 DEBUG nova.compute.manager [None req-119e4e4e-bb2c-40c3-8172-2260c2ba06e9 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:31:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:31:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3298807745' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:31:59 np0005558241 nova_compute[248510]: 2025-12-13 08:31:59.493 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:31:59 np0005558241 nova_compute[248510]: 2025-12-13 08:31:59.496 248514 DEBUG oslo_concurrency.processutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.690s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:31:59 np0005558241 nova_compute[248510]: 2025-12-13 08:31:59.498 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:31:59 np0005558241 nova_compute[248510]: 2025-12-13 08:31:59.503 248514 DEBUG nova.compute.provider_tree [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:31:59 np0005558241 nova_compute[248510]: 2025-12-13 08:31:59.527 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Dec 13 03:31:59 np0005558241 nova_compute[248510]: 2025-12-13 08:31:59.527 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614719.4515405, a9c6de9d-63c0-43a5-9d6e-be356e504837 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:31:59 np0005558241 nova_compute[248510]: 2025-12-13 08:31:59.528 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] VM Started (Lifecycle Event)#033[00m
Dec 13 03:31:59 np0005558241 nova_compute[248510]: 2025-12-13 08:31:59.532 248514 DEBUG nova.scheduler.client.report [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:31:59 np0005558241 nova_compute[248510]: 2025-12-13 08:31:59.548 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:31:59 np0005558241 nova_compute[248510]: 2025-12-13 08:31:59.551 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:31:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2042: 321 pgs: 321 active+clean; 336 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.8 MiB/s wr, 293 op/s
Dec 13 03:32:00 np0005558241 nova_compute[248510]: 2025-12-13 08:32:00.092 248514 DEBUG oslo_concurrency.lockutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:00 np0005558241 nova_compute[248510]: 2025-12-13 08:32:00.146 248514 INFO nova.scheduler.client.report [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Deleted allocations for instance 4d93482e-582f-4d44-ab53-87cd5f6aa66a#033[00m
Dec 13 03:32:00 np0005558241 nova_compute[248510]: 2025-12-13 08:32:00.246 248514 DEBUG nova.compute.manager [req-557d4be1-e12e-4639-be60-a984ec878484 req-b03798b0-2ffe-489c-bcc3-e88a5dfc6620 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Received event network-vif-deleted-35b172ab-1be7-44b2-9a76-0f60de6851ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:00 np0005558241 nova_compute[248510]: 2025-12-13 08:32:00.252 248514 DEBUG oslo_concurrency.lockutils [None req-80d77c92-c506-403c-ae0a-75ccdf412bea 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "4d93482e-582f-4d44-ab53-87cd5f6aa66a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:00 np0005558241 nova_compute[248510]: 2025-12-13 08:32:00.345 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:00 np0005558241 nova_compute[248510]: 2025-12-13 08:32:00.370 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:00 np0005558241 nova_compute[248510]: 2025-12-13 08:32:00.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:32:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:32:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2043: 321 pgs: 321 active+clean; 328 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.6 MiB/s wr, 251 op/s
Dec 13 03:32:02 np0005558241 nova_compute[248510]: 2025-12-13 08:32:02.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:32:03 np0005558241 nova_compute[248510]: 2025-12-13 08:32:03.383 248514 DEBUG nova.compute.manager [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:03 np0005558241 nova_compute[248510]: 2025-12-13 08:32:03.383 248514 DEBUG oslo_concurrency.lockutils [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:03 np0005558241 nova_compute[248510]: 2025-12-13 08:32:03.384 248514 DEBUG oslo_concurrency.lockutils [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:03 np0005558241 nova_compute[248510]: 2025-12-13 08:32:03.384 248514 DEBUG oslo_concurrency.lockutils [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:03 np0005558241 nova_compute[248510]: 2025-12-13 08:32:03.384 248514 DEBUG nova.compute.manager [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] No waiting events found dispatching network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:32:03 np0005558241 nova_compute[248510]: 2025-12-13 08:32:03.385 248514 WARNING nova.compute.manager [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received unexpected event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 for instance with vm_state rescued and task_state None.#033[00m
Dec 13 03:32:03 np0005558241 nova_compute[248510]: 2025-12-13 08:32:03.385 248514 DEBUG nova.compute.manager [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:03 np0005558241 nova_compute[248510]: 2025-12-13 08:32:03.385 248514 DEBUG oslo_concurrency.lockutils [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:03 np0005558241 nova_compute[248510]: 2025-12-13 08:32:03.385 248514 DEBUG oslo_concurrency.lockutils [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:03 np0005558241 nova_compute[248510]: 2025-12-13 08:32:03.385 248514 DEBUG oslo_concurrency.lockutils [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:03 np0005558241 nova_compute[248510]: 2025-12-13 08:32:03.386 248514 DEBUG nova.compute.manager [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] No waiting events found dispatching network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:32:03 np0005558241 nova_compute[248510]: 2025-12-13 08:32:03.386 248514 WARNING nova.compute.manager [req-4794d2a9-7e0d-49c1-ab3b-04cde715bba5 req-d03ec605-24dd-4390-805e-d000bc766018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received unexpected event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 for instance with vm_state rescued and task_state None.#033[00m
Dec 13 03:32:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2044: 321 pgs: 321 active+clean; 328 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.1 MiB/s wr, 249 op/s
Dec 13 03:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.1 total, 600.0 interval
Cumulative writes: 22K writes, 93K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
Cumulative WAL: 22K writes, 7420 syncs, 3.07 writes per sync, written: 0.09 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 9146 writes, 36K keys, 9146 commit groups, 1.0 writes per commit group, ingest: 36.34 MB, 0.06 MB/s
Interval WAL: 9146 writes, 3584 syncs, 2.55 writes per sync, written: 0.04 GB, 0.06 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 03:32:03 np0005558241 ceph-osd[87041]: bluestore.MempoolThread fragmentation_score=0.002893 took=0.000055s
Dec 13 03:32:04 np0005558241 nova_compute[248510]: 2025-12-13 08:32:04.692 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:04 np0005558241 nova_compute[248510]: 2025-12-13 08:32:04.693 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:04 np0005558241 nova_compute[248510]: 2025-12-13 08:32:04.983 248514 DEBUG nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:32:05 np0005558241 nova_compute[248510]: 2025-12-13 08:32:05.008 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:05 np0005558241 nova_compute[248510]: 2025-12-13 08:32:05.009 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:05 np0005558241 nova_compute[248510]: 2025-12-13 08:32:05.054 248514 DEBUG nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:32:05 np0005558241 nova_compute[248510]: 2025-12-13 08:32:05.090 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:05 np0005558241 nova_compute[248510]: 2025-12-13 08:32:05.092 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:05 np0005558241 nova_compute[248510]: 2025-12-13 08:32:05.106 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:32:05 np0005558241 nova_compute[248510]: 2025-12-13 08:32:05.107 248514 INFO nova.compute.claims [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:32:05 np0005558241 nova_compute[248510]: 2025-12-13 08:32:05.149 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:05 np0005558241 nova_compute[248510]: 2025-12-13 08:32:05.346 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:05 np0005558241 nova_compute[248510]: 2025-12-13 08:32:05.350 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:05 np0005558241 nova_compute[248510]: 2025-12-13 08:32:05.388 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2045: 321 pgs: 321 active+clean; 328 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.1 MiB/s wr, 219 op/s
Dec 13 03:32:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:32:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1642367905' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:32:05 np0005558241 nova_compute[248510]: 2025-12-13 08:32:05.921 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:05 np0005558241 nova_compute[248510]: 2025-12-13 08:32:05.930 248514 DEBUG nova.compute.provider_tree [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:32:05 np0005558241 nova_compute[248510]: 2025-12-13 08:32:05.959 248514 DEBUG nova.scheduler.client.report [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:32:05 np0005558241 nova_compute[248510]: 2025-12-13 08:32:05.994 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:05 np0005558241 nova_compute[248510]: 2025-12-13 08:32:05.995 248514 DEBUG nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:32:05 np0005558241 nova_compute[248510]: 2025-12-13 08:32:05.998 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.007 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.007 248514 INFO nova.compute.claims [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.102 248514 DEBUG nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.103 248514 DEBUG nova.network.neutron [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.139 248514 INFO nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:32:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.175 248514 DEBUG nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.300 248514 DEBUG nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.302 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.302 248514 INFO nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Creating image(s)#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.326 248514 DEBUG nova.storage.rbd_utils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.351 248514 DEBUG nova.storage.rbd_utils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.375 248514 DEBUG nova.storage.rbd_utils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.380 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.418 248514 DEBUG nova.policy [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '93eec08d500a4f03afb3281e9899bd6a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '71e2453379684f0ca0563f8c370ea4a3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.440 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.474 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.475 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.476 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.477 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.499 248514 DEBUG nova.storage.rbd_utils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.504 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c5edbf88-6361-407a-a0f1-c133f70b50e9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.553 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "dc076c88-fe0b-4674-ac32-fb22420b78bc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.554 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.589 248514 DEBUG nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:32:06 np0005558241 nova_compute[248510]: 2025-12-13 08:32:06.682 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:32:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/72469475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.023 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.033 248514 DEBUG nova.compute.provider_tree [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.075 248514 DEBUG nova.scheduler.client.report [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.117 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.118 248514 DEBUG nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.128 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.148 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.152 248514 INFO nova.compute.claims [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.228 248514 DEBUG nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.228 248514 DEBUG nova.network.neutron [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.262 248514 INFO nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.321 248514 DEBUG nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.477 248514 DEBUG nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.480 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.480 248514 INFO nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Creating image(s)#033[00m
Dec 13 03:32:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2046: 321 pgs: 321 active+clean; 328 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.612 248514 DEBUG nova.storage.rbd_utils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.654 248514 DEBUG nova.storage.rbd_utils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.686 248514 DEBUG nova.storage.rbd_utils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.691 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.732 248514 DEBUG nova.policy [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5b310bdebec646949fad4ea1821b4c3f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.773 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.774 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.775 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.775 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.801 248514 DEBUG nova.storage.rbd_utils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.806 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 5d34feed-2663-4e17-b951-65a37bd3a275_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.852 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:32:07 np0005558241 nova_compute[248510]: 2025-12-13 08:32:07.981 248514 DEBUG nova.network.neutron [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Successfully created port: 2bdcea64-a01f-4a75-b664-9c9c971533a6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 03:32:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:32:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/544839028' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.475 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.483 248514 DEBUG nova.compute.provider_tree [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.521 248514 DEBUG nova.network.neutron [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Successfully created port: 533958d1-8a74-4963-9731-40767b4bb127 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.526 248514 DEBUG nova.scheduler.client.report [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.556 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.429s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.557 248514 DEBUG nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.602 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c5edbf88-6361-407a-a0f1-c133f70b50e9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.654 248514 DEBUG nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.654 248514 DEBUG nova.network.neutron [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.711 248514 INFO nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.721 248514 DEBUG nova.storage.rbd_utils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] resizing rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.765 248514 DEBUG nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.825 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 5d34feed-2663-4e17-b951-65a37bd3a275_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.872 248514 DEBUG nova.objects.instance [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'migration_context' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.907 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.907 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Ensure instance console log exists: /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.908 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.908 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.909 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.915 248514 DEBUG nova.storage.rbd_utils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] resizing rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.985 248514 DEBUG nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.988 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:32:08 np0005558241 nova_compute[248510]: 2025-12-13 08:32:08.989 248514 INFO nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Creating image(s)
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.012 248514 DEBUG nova.storage.rbd_utils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image dc076c88-fe0b-4674-ac32-fb22420b78bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.040 248514 DEBUG nova.storage.rbd_utils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image dc076c88-fe0b-4674-ac32-fb22420b78bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.071 248514 DEBUG nova.storage.rbd_utils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image dc076c88-fe0b-4674-ac32-fb22420b78bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.076 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.125 248514 DEBUG nova.policy [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '91d0d3efedc943b48ad0fc4295b6fc7c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2de328b46a6e4f588f5e2a254db7f4ef', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.166 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.167 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.167 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.168 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.188 248514 DEBUG nova.storage.rbd_utils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image dc076c88-fe0b-4674-ac32-fb22420b78bc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.192 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 dc076c88-fe0b-4674-ac32-fb22420b78bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:32:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:32:09
Dec 13 03:32:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:32:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:32:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['volumes', 'vms', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'images']
Dec 13 03:32:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.312 248514 DEBUG nova.objects.instance [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'migration_context' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.344 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.345 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Ensure instance console log exists: /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.346 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.346 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.346 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.545 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614714.4352958, 4d93482e-582f-4d44-ab53-87cd5f6aa66a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.546 248514 INFO nova.compute.manager [-] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] VM Stopped (Lifecycle Event)
Dec 13 03:32:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2047: 321 pgs: 321 active+clean; 351 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 146 op/s
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.571 248514 DEBUG nova.network.neutron [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Successfully updated port: 2bdcea64-a01f-4a75-b664-9c9c971533a6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.579 248514 DEBUG nova.compute.manager [None req-8ee42bc1-df6f-4227-a95a-22c04ff1bac7 - - - - - -] [instance: 4d93482e-582f-4d44-ab53-87cd5f6aa66a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.611 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.611 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquired lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.611 248514 DEBUG nova.network.neutron [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.770 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 dc076c88-fe0b-4674-ac32-fb22420b78bc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.840 248514 DEBUG nova.storage.rbd_utils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] resizing rbd image dc076c88-fe0b-4674-ac32-fb22420b78bc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:32:09 np0005558241 nova_compute[248510]: 2025-12-13 08:32:09.938 248514 DEBUG nova.objects.instance [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lazy-loading 'migration_context' on Instance uuid dc076c88-fe0b-4674-ac32-fb22420b78bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:32:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:32:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:32:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:32:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:32:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:32:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:32:10 np0005558241 nova_compute[248510]: 2025-12-13 08:32:10.117 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:32:10 np0005558241 nova_compute[248510]: 2025-12-13 08:32:10.118 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Ensure instance console log exists: /var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:32:10 np0005558241 nova_compute[248510]: 2025-12-13 08:32:10.119 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:32:10 np0005558241 nova_compute[248510]: 2025-12-13 08:32:10.119 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:32:10 np0005558241 nova_compute[248510]: 2025-12-13 08:32:10.119 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:32:10 np0005558241 nova_compute[248510]: 2025-12-13 08:32:10.389 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:32:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:32:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:32:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:32:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:32:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:32:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:32:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:32:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:32:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:32:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:32:10 np0005558241 nova_compute[248510]: 2025-12-13 08:32:10.655 248514 DEBUG nova.network.neutron [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Successfully created port: 7903e65c-c0bf-4bb5-b044-95f5693f5c38 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 03:32:10 np0005558241 nova_compute[248510]: 2025-12-13 08:32:10.880 248514 DEBUG nova.compute.manager [req-81a71b1b-3e55-429a-9365-c1bdb738b9ea req-dbbdbebf-b62c-434c-8495-25da89f4fd38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-changed-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:32:10 np0005558241 nova_compute[248510]: 2025-12-13 08:32:10.880 248514 DEBUG nova.compute.manager [req-81a71b1b-3e55-429a-9365-c1bdb738b9ea req-dbbdbebf-b62c-434c-8495-25da89f4fd38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Refreshing instance network info cache due to event network-changed-2bdcea64-a01f-4a75-b664-9c9c971533a6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 03:32:10 np0005558241 nova_compute[248510]: 2025-12-13 08:32:10.880 248514 DEBUG oslo_concurrency.lockutils [req-81a71b1b-3e55-429a-9365-c1bdb738b9ea req-dbbdbebf-b62c-434c-8495-25da89f4fd38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:32:10 np0005558241 nova_compute[248510]: 2025-12-13 08:32:10.881 248514 DEBUG nova.network.neutron [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:32:11 np0005558241 nova_compute[248510]: 2025-12-13 08:32:11.035 248514 DEBUG nova.network.neutron [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Successfully updated port: 533958d1-8a74-4963-9731-40767b4bb127 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 03:32:11 np0005558241 nova_compute[248510]: 2025-12-13 08:32:11.057 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "refresh_cache-5d34feed-2663-4e17-b951-65a37bd3a275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:32:11 np0005558241 nova_compute[248510]: 2025-12-13 08:32:11.057 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquired lock "refresh_cache-5d34feed-2663-4e17-b951-65a37bd3a275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:32:11 np0005558241 nova_compute[248510]: 2025-12-13 08:32:11.057 248514 DEBUG nova.network.neutron [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:32:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:32:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3601.8 total, 600.0 interval#012Cumulative writes: 25K writes, 95K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s#012Cumulative WAL: 25K writes, 8466 syncs, 2.95 writes per sync, written: 0.08 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 38.37 MB, 0.06 MB/s#012Interval WAL: 10K writes, 4107 syncs, 2.50 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 03:32:11 np0005558241 ceph-osd[88086]: bluestore.MempoolThread fragmentation_score=0.002571 took=0.000042s
Dec 13 03:32:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:32:11 np0005558241 nova_compute[248510]: 2025-12-13 08:32:11.296 248514 DEBUG nova.network.neutron [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:32:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2048: 321 pgs: 321 active+clean; 418 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 122 op/s
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.030 248514 DEBUG nova.network.neutron [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Successfully updated port: 7903e65c-c0bf-4bb5-b044-95f5693f5c38 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.048 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "refresh_cache-dc076c88-fe0b-4674-ac32-fb22420b78bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.049 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquired lock "refresh_cache-dc076c88-fe0b-4674-ac32-fb22420b78bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.049 248514 DEBUG nova.network.neutron [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.282 248514 DEBUG nova.network.neutron [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.374 248514 DEBUG nova.compute.manager [req-7b4ad618-0267-4797-a69a-af0879b5e156 req-fc26382a-6e40-484a-98f1-299e3e04908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Received event network-changed-7903e65c-c0bf-4bb5-b044-95f5693f5c38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.375 248514 DEBUG nova.compute.manager [req-7b4ad618-0267-4797-a69a-af0879b5e156 req-fc26382a-6e40-484a-98f1-299e3e04908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Refreshing instance network info cache due to event network-changed-7903e65c-c0bf-4bb5-b044-95f5693f5c38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.375 248514 DEBUG oslo_concurrency.lockutils [req-7b4ad618-0267-4797-a69a-af0879b5e156 req-fc26382a-6e40-484a-98f1-299e3e04908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-dc076c88-fe0b-4674-ac32-fb22420b78bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.405 248514 DEBUG nova.network.neutron [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Updating instance_info_cache with network_info: [{"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.436 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Releasing lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.437 248514 DEBUG nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance network_info: |[{"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.437 248514 DEBUG oslo_concurrency.lockutils [req-81a71b1b-3e55-429a-9365-c1bdb738b9ea req-dbbdbebf-b62c-434c-8495-25da89f4fd38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.438 248514 DEBUG nova.network.neutron [req-81a71b1b-3e55-429a-9365-c1bdb738b9ea req-dbbdbebf-b62c-434c-8495-25da89f4fd38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Refreshing network info cache for port 2bdcea64-a01f-4a75-b664-9c9c971533a6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.442 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Start _get_guest_xml network_info=[{"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.446 248514 WARNING nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.458 248514 DEBUG nova.virt.libvirt.host [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.459 248514 DEBUG nova.virt.libvirt.host [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.464 248514 DEBUG nova.virt.libvirt.host [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.465 248514 DEBUG nova.virt.libvirt.host [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.465 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.465 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.466 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.466 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.467 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.467 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.467 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.467 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.468 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.468 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.468 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.468 248514 DEBUG nova.virt.hardware [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.472 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.868 248514 DEBUG nova.network.neutron [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Updating instance_info_cache with network_info: [{"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.903 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Releasing lock "refresh_cache-5d34feed-2663-4e17-b951-65a37bd3a275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.904 248514 DEBUG nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance network_info: |[{"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.907 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Start _get_guest_xml network_info=[{"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.913 248514 WARNING nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.920 248514 DEBUG nova.virt.libvirt.host [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.920 248514 DEBUG nova.virt.libvirt.host [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.924 248514 DEBUG nova.virt.libvirt.host [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.924 248514 DEBUG nova.virt.libvirt.host [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.925 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.925 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.926 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.926 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.926 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.926 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.927 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.927 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.927 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.927 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.928 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.928 248514 DEBUG nova.virt.hardware [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:32:12 np0005558241 nova_compute[248510]: 2025-12-13 08:32:12.932 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:32:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1448719229' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.099 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.128 248514 DEBUG nova.storage.rbd_utils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.133 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.177 248514 DEBUG nova.compute.manager [req-ea06376b-4ae5-4c27-8766-d849c908f24c req-681219eb-f37e-4041-82f2-5da35e23f0bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-changed-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.178 248514 DEBUG nova.compute.manager [req-ea06376b-4ae5-4c27-8766-d849c908f24c req-681219eb-f37e-4041-82f2-5da35e23f0bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Refreshing instance network info cache due to event network-changed-533958d1-8a74-4963-9731-40767b4bb127. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.179 248514 DEBUG oslo_concurrency.lockutils [req-ea06376b-4ae5-4c27-8766-d849c908f24c req-681219eb-f37e-4041-82f2-5da35e23f0bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-5d34feed-2663-4e17-b951-65a37bd3a275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.179 248514 DEBUG oslo_concurrency.lockutils [req-ea06376b-4ae5-4c27-8766-d849c908f24c req-681219eb-f37e-4041-82f2-5da35e23f0bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-5d34feed-2663-4e17-b951-65a37bd3a275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.179 248514 DEBUG nova.network.neutron [req-ea06376b-4ae5-4c27-8766-d849c908f24c req-681219eb-f37e-4041-82f2-5da35e23f0bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Refreshing network info cache for port 533958d1-8a74-4963-9731-40767b4bb127 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.345 248514 DEBUG nova.network.neutron [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Updating instance_info_cache with network_info: [{"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.373 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Releasing lock "refresh_cache-dc076c88-fe0b-4674-ac32-fb22420b78bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.374 248514 DEBUG nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Instance network_info: |[{"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.374 248514 DEBUG oslo_concurrency.lockutils [req-7b4ad618-0267-4797-a69a-af0879b5e156 req-fc26382a-6e40-484a-98f1-299e3e04908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-dc076c88-fe0b-4674-ac32-fb22420b78bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.375 248514 DEBUG nova.network.neutron [req-7b4ad618-0267-4797-a69a-af0879b5e156 req-fc26382a-6e40-484a-98f1-299e3e04908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Refreshing network info cache for port 7903e65c-c0bf-4bb5-b044-95f5693f5c38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.378 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Start _get_guest_xml network_info=[{"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.391 248514 WARNING nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.396 248514 DEBUG nova.virt.libvirt.host [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.397 248514 DEBUG nova.virt.libvirt.host [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.405 248514 DEBUG nova.virt.libvirt.host [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.405 248514 DEBUG nova.virt.libvirt.host [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.406 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.406 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.407 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.407 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.407 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.407 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.407 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.408 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.408 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.408 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.408 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.409 248514 DEBUG nova.virt.hardware [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.412 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:32:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2049: 321 pgs: 321 active+clean; 455 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.8 MiB/s wr, 155 op/s
Dec 13 03:32:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2298876853' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.584 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.611 248514 DEBUG nova.storage.rbd_utils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.615 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:32:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1499395576' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.784 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.789 248514 DEBUG nova.virt.libvirt.vif [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-607419756',display_name='tempest-ServerRescueTestJSON-server-607419756',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-607419756',id=71,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71e2453379684f0ca0563f8c370ea4a3',ramdisk_id='',reservation_id='r-h6adjjd8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1425963100',owner_user_name='tempest-ServerRescueTestJSON-14259
63100-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:06Z,user_data=None,user_id='93eec08d500a4f03afb3281e9899bd6a',uuid=c5edbf88-6361-407a-a0f1-c133f70b50e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.791 248514 DEBUG nova.network.os_vif_util [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converting VIF {"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.793 248514 DEBUG nova.network.os_vif_util [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:4d:d0,bridge_name='br-int',has_traffic_filtering=True,id=2bdcea64-a01f-4a75-b664-9c9c971533a6,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bdcea64-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.796 248514 DEBUG nova.objects.instance [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.818 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:32:13 np0005558241 nova_compute[248510]:  <uuid>c5edbf88-6361-407a-a0f1-c133f70b50e9</uuid>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:  <name>instance-00000047</name>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerRescueTestJSON-server-607419756</nova:name>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:32:12</nova:creationTime>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:32:13 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:        <nova:user uuid="93eec08d500a4f03afb3281e9899bd6a">tempest-ServerRescueTestJSON-1425963100-project-member</nova:user>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:        <nova:project uuid="71e2453379684f0ca0563f8c370ea4a3">tempest-ServerRescueTestJSON-1425963100</nova:project>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:        <nova:port uuid="2bdcea64-a01f-4a75-b664-9c9c971533a6">
Dec 13 03:32:13 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <entry name="serial">c5edbf88-6361-407a-a0f1-c133f70b50e9</entry>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <entry name="uuid">c5edbf88-6361-407a-a0f1-c133f70b50e9</entry>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/c5edbf88-6361-407a-a0f1-c133f70b50e9_disk">
Dec 13 03:32:13 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:32:13 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config">
Dec 13 03:32:13 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:32:13 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:13:4d:d0"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <target dev="tap2bdcea64-a0"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/console.log" append="off"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:32:13 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:32:13 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:32:13 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:32:13 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.819 248514 DEBUG nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Preparing to wait for external event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.820 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.820 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.820 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.821 248514 DEBUG nova.virt.libvirt.vif [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-607419756',display_name='tempest-ServerRescueTestJSON-server-607419756',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-607419756',id=71,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71e2453379684f0ca0563f8c370ea4a3',ramdisk_id='',reservation_id='r-h6adjjd8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1425963100',owner_user_name='tempest-ServerRescueTestJSON-1425963100-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:06Z,user_data=None,user_id='93eec08d500a4f03afb3281e9899bd6a',uuid=c5edbf88-6361-407a-a0f1-c133f70b50e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.822 248514 DEBUG nova.network.os_vif_util [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converting VIF {"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.822 248514 DEBUG nova.network.os_vif_util [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:4d:d0,bridge_name='br-int',has_traffic_filtering=True,id=2bdcea64-a01f-4a75-b664-9c9c971533a6,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bdcea64-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.823 248514 DEBUG os_vif [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:4d:d0,bridge_name='br-int',has_traffic_filtering=True,id=2bdcea64-a01f-4a75-b664-9c9c971533a6,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bdcea64-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.824 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.824 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.825 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.828 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.829 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2bdcea64-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.829 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2bdcea64-a0, col_values=(('external_ids', {'iface-id': '2bdcea64-a01f-4a75-b664-9c9c971533a6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:4d:d0', 'vm-uuid': 'c5edbf88-6361-407a-a0f1-c133f70b50e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.832 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:13 np0005558241 NetworkManager[50376]: <info>  [1765614733.8332] manager: (tap2bdcea64-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.834 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.841 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.843 248514 INFO os_vif [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:4d:d0,bridge_name='br-int',has_traffic_filtering=True,id=2bdcea64-a01f-4a75-b664-9c9c971533a6,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bdcea64-a0')#033[00m
Dec 13 03:32:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:32:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2388076238' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.948 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.949 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.949 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No VIF found with MAC fa:16:3e:13:4d:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.950 248514 INFO nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Using config drive#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.974 248514 DEBUG nova.storage.rbd_utils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:13 np0005558241 nova_compute[248510]: 2025-12-13 08:32:13.980 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.002 248514 DEBUG nova.storage.rbd_utils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image dc076c88-fe0b-4674-ac32-fb22420b78bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.006 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:32:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376110682' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.189 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.190 248514 DEBUG nova.virt.libvirt.vif [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1505100715',display_name='tempest-tempest.common.compute-instance-1505100715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1505100715',id=72,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-kk2o7y6y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:07Z,user_data=None,user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=5d34feed-2663-4e17-b951-65a37bd3a275,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.191 248514 DEBUG nova.network.os_vif_util [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.191 248514 DEBUG nova.network.os_vif_util [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.192 248514 DEBUG nova.objects.instance [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.229 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <uuid>5d34feed-2663-4e17-b951-65a37bd3a275</uuid>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <name>instance-00000048</name>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <nova:name>tempest-tempest.common.compute-instance-1505100715</nova:name>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:32:12</nova:creationTime>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <nova:user uuid="5b310bdebec646949fad4ea1821b4c3f">tempest-ServerActionsTestOtherA-1325599242-project-member</nova:user>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <nova:project uuid="b4d2999518df4b9f8ccbabe38976dc3c">tempest-ServerActionsTestOtherA-1325599242</nova:project>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <nova:port uuid="533958d1-8a74-4963-9731-40767b4bb127">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <entry name="serial">5d34feed-2663-4e17-b951-65a37bd3a275</entry>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <entry name="uuid">5d34feed-2663-4e17-b951-65a37bd3a275</entry>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/5d34feed-2663-4e17-b951-65a37bd3a275_disk">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/5d34feed-2663-4e17-b951-65a37bd3a275_disk.config">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:66:ab:b9"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <target dev="tap533958d1-8a"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/console.log" append="off"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:32:14 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:32:14 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.232 248514 DEBUG nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Preparing to wait for external event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.233 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.233 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.234 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.235 248514 DEBUG nova.virt.libvirt.vif [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1505100715',display_name='tempest-tempest.common.compute-instance-1505100715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1505100715',id=72,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-kk2o7y6y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempes
t-ServerActionsTestOtherA-1325599242-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:07Z,user_data=None,user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=5d34feed-2663-4e17-b951-65a37bd3a275,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.235 248514 DEBUG nova.network.os_vif_util [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.237 248514 DEBUG nova.network.os_vif_util [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.237 248514 DEBUG os_vif [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.238 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.239 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.240 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.244 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.245 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap533958d1-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.246 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap533958d1-8a, col_values=(('external_ids', {'iface-id': '533958d1-8a74-4963-9731-40767b4bb127', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:ab:b9', 'vm-uuid': '5d34feed-2663-4e17-b951-65a37bd3a275'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.248 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:14 np0005558241 NetworkManager[50376]: <info>  [1765614734.2490] manager: (tap533958d1-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.250 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.256 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.258 248514 INFO os_vif [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a')#033[00m
Dec 13 03:32:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:32:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3081812408' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.624 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.626 248514 DEBUG nova.virt.libvirt.vif [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1364276133',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1364276133',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1364276133',id=73,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2de328b46a6e4f588f5e2a254db7f4ef',ramdisk_id='',reservation_id='r-1zyw8g5c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1826994500',owne
r_user_name='tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:08Z,user_data=None,user_id='91d0d3efedc943b48ad0fc4295b6fc7c',uuid=dc076c88-fe0b-4674-ac32-fb22420b78bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.626 248514 DEBUG nova.network.os_vif_util [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converting VIF {"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.627 248514 DEBUG nova.network.os_vif_util [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:56:55,bridge_name='br-int',has_traffic_filtering=True,id=7903e65c-c0bf-4bb5-b044-95f5693f5c38,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7903e65c-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.628 248514 DEBUG nova.objects.instance [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lazy-loading 'pci_devices' on Instance uuid dc076c88-fe0b-4674-ac32-fb22420b78bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.823 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <uuid>dc076c88-fe0b-4674-ac32-fb22420b78bc</uuid>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <name>instance-00000049</name>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1364276133</nova:name>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:32:13</nova:creationTime>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <nova:user uuid="91d0d3efedc943b48ad0fc4295b6fc7c">tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member</nova:user>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <nova:project uuid="2de328b46a6e4f588f5e2a254db7f4ef">tempest-ImagesOneServerNegativeTestJSON-1826994500</nova:project>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <nova:port uuid="7903e65c-c0bf-4bb5-b044-95f5693f5c38">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <entry name="serial">dc076c88-fe0b-4674-ac32-fb22420b78bc</entry>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <entry name="uuid">dc076c88-fe0b-4674-ac32-fb22420b78bc</entry>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/dc076c88-fe0b-4674-ac32-fb22420b78bc_disk">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/dc076c88-fe0b-4674-ac32-fb22420b78bc_disk.config">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:cc:56:55"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <target dev="tap7903e65c-c0"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc/console.log" append="off"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:32:14 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:32:14 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:32:14 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:32:14 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.825 248514 DEBUG nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Preparing to wait for external event network-vif-plugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.825 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.826 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.826 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.828 248514 DEBUG nova.virt.libvirt.vif [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1364276133',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1364276133',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1364276133',id=73,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2de328b46a6e4f588f5e2a254db7f4ef',ramdisk_id='',reservation_id='r-1zyw8g5c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1826994500',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:08Z,user_data=None,user_id='91d0d3efedc943b48ad0fc4295b6fc7c',uuid=dc076c88-fe0b-4674-ac32-fb22420b78bc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.829 248514 DEBUG nova.network.os_vif_util [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converting VIF {"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.830 248514 DEBUG nova.network.os_vif_util [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:56:55,bridge_name='br-int',has_traffic_filtering=True,id=7903e65c-c0bf-4bb5-b044-95f5693f5c38,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7903e65c-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.831 248514 DEBUG os_vif [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:56:55,bridge_name='br-int',has_traffic_filtering=True,id=7903e65c-c0bf-4bb5-b044-95f5693f5c38,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7903e65c-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.832 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.832 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.833 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.837 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.838 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7903e65c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.839 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7903e65c-c0, col_values=(('external_ids', {'iface-id': '7903e65c-c0bf-4bb5-b044-95f5693f5c38', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:56:55', 'vm-uuid': 'dc076c88-fe0b-4674-ac32-fb22420b78bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:14 np0005558241 NetworkManager[50376]: <info>  [1765614734.8869] manager: (tap7903e65c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/299)
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.888 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.890 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.894 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.894 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.895 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No VIF found with MAC fa:16:3e:66:ab:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.895 248514 INFO nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Using config drive#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.918 248514 DEBUG nova.storage.rbd_utils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.923 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.925 248514 INFO os_vif [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:56:55,bridge_name='br-int',has_traffic_filtering=True,id=7903e65c-c0bf-4bb5-b044-95f5693f5c38,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7903e65c-c0')#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.985 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.986 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.986 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] No VIF found with MAC fa:16:3e:cc:56:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:32:14 np0005558241 nova_compute[248510]: 2025-12-13 08:32:14.987 248514 INFO nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Using config drive#033[00m
Dec 13 03:32:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:32:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/287174040' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:32:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:32:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/287174040' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:32:15 np0005558241 nova_compute[248510]: 2025-12-13 08:32:15.011 248514 DEBUG nova.storage.rbd_utils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image dc076c88-fe0b-4674-ac32-fb22420b78bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:15 np0005558241 nova_compute[248510]: 2025-12-13 08:32:15.159 248514 INFO nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Creating config drive at /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config#033[00m
Dec 13 03:32:15 np0005558241 nova_compute[248510]: 2025-12-13 08:32:15.164 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6nlukd81 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:15 np0005558241 nova_compute[248510]: 2025-12-13 08:32:15.319 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6nlukd81" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:15 np0005558241 nova_compute[248510]: 2025-12-13 08:32:15.349 248514 DEBUG nova.storage.rbd_utils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:15 np0005558241 nova_compute[248510]: 2025-12-13 08:32:15.352 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:15 np0005558241 nova_compute[248510]: 2025-12-13 08:32:15.391 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2050: 321 pgs: 321 active+clean; 467 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.3 MiB/s wr, 180 op/s
Dec 13 03:32:15 np0005558241 nova_compute[248510]: 2025-12-13 08:32:15.957 248514 DEBUG oslo_concurrency.processutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:15 np0005558241 nova_compute[248510]: 2025-12-13 08:32:15.958 248514 INFO nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Deleting local config drive /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config because it was imported into RBD.#033[00m
Dec 13 03:32:16 np0005558241 NetworkManager[50376]: <info>  [1765614736.0191] manager: (tap2bdcea64-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/300)
Dec 13 03:32:16 np0005558241 kernel: tap2bdcea64-a0: entered promiscuous mode
Dec 13 03:32:16 np0005558241 nova_compute[248510]: 2025-12-13 08:32:16.028 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:16Z|00668|binding|INFO|Claiming lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 for this chassis.
Dec 13 03:32:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:16Z|00669|binding|INFO|2bdcea64-a01f-4a75-b664-9c9c971533a6: Claiming fa:16:3e:13:4d:d0 10.100.0.6
Dec 13 03:32:16 np0005558241 systemd-udevd[317157]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:32:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.057 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:4d:d0 10.100.0.6'], port_security=['fa:16:3e:13:4d:d0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c5edbf88-6361-407a-a0f1-c133f70b50e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2bdcea64-a01f-4a75-b664-9c9c971533a6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:32:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.058 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2bdcea64-a01f-4a75-b664-9c9c971533a6 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 bound to our chassis#033[00m
Dec 13 03:32:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.059 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 13 03:32:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.060 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[caa6fe11-64a3-43e6-a853-b77e2d4e761a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:16Z|00670|binding|INFO|Setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 ovn-installed in OVS
Dec 13 03:32:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:16Z|00671|binding|INFO|Setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 up in Southbound
Dec 13 03:32:16 np0005558241 NetworkManager[50376]: <info>  [1765614736.0682] device (tap2bdcea64-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:32:16 np0005558241 nova_compute[248510]: 2025-12-13 08:32:16.067 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:16 np0005558241 NetworkManager[50376]: <info>  [1765614736.0700] device (tap2bdcea64-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:32:16 np0005558241 systemd-machined[210538]: New machine qemu-83-instance-00000047.
Dec 13 03:32:16 np0005558241 nova_compute[248510]: 2025-12-13 08:32:16.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:16 np0005558241 systemd[1]: Started Virtual Machine qemu-83-instance-00000047.
Dec 13 03:32:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:32:16 np0005558241 nova_compute[248510]: 2025-12-13 08:32:16.259 248514 INFO nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Creating config drive at /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config#033[00m
Dec 13 03:32:16 np0005558241 nova_compute[248510]: 2025-12-13 08:32:16.266 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpponz1jqm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:16 np0005558241 nova_compute[248510]: 2025-12-13 08:32:16.409 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpponz1jqm" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:16 np0005558241 nova_compute[248510]: 2025-12-13 08:32:16.438 248514 DEBUG nova.storage.rbd_utils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:16 np0005558241 nova_compute[248510]: 2025-12-13 08:32:16.443 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:16 np0005558241 nova_compute[248510]: 2025-12-13 08:32:16.697 248514 DEBUG oslo_concurrency.processutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:16 np0005558241 nova_compute[248510]: 2025-12-13 08:32:16.698 248514 INFO nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Deleting local config drive /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config because it was imported into RBD.#033[00m
Dec 13 03:32:16 np0005558241 NetworkManager[50376]: <info>  [1765614736.7582] manager: (tap533958d1-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/301)
Dec 13 03:32:16 np0005558241 kernel: tap533958d1-8a: entered promiscuous mode
Dec 13 03:32:16 np0005558241 systemd-udevd[317163]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:32:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:16Z|00672|binding|INFO|Claiming lport 533958d1-8a74-4963-9731-40767b4bb127 for this chassis.
Dec 13 03:32:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:16Z|00673|binding|INFO|533958d1-8a74-4963-9731-40767b4bb127: Claiming fa:16:3e:66:ab:b9 10.100.0.5
Dec 13 03:32:16 np0005558241 nova_compute[248510]: 2025-12-13 08:32:16.769 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.776 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:ab:b9 10.100.0.5'], port_security=['fa:16:3e:66:ab:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5d34feed-2663-4e17-b951-65a37bd3a275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45afb483-a012-4442-b20c-edd0f1f0f374', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6794829-4e4e-4196-8b70-d8e6b3d73dd5, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=533958d1-8a74-4963-9731-40767b4bb127) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:32:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.777 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 533958d1-8a74-4963-9731-40767b4bb127 in datapath 409fc0bb-caf3-4b90-9e44-83ff383dc88f bound to our chassis#033[00m
Dec 13 03:32:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.779 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 409fc0bb-caf3-4b90-9e44-83ff383dc88f#033[00m
Dec 13 03:32:16 np0005558241 NetworkManager[50376]: <info>  [1765614736.7807] device (tap533958d1-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:32:16 np0005558241 NetworkManager[50376]: <info>  [1765614736.7885] device (tap533958d1-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:32:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:16Z|00674|binding|INFO|Setting lport 533958d1-8a74-4963-9731-40767b4bb127 ovn-installed in OVS
Dec 13 03:32:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:16Z|00675|binding|INFO|Setting lport 533958d1-8a74-4963-9731-40767b4bb127 up in Southbound
Dec 13 03:32:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.799 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8c7c87e1-8f7c-49be-a125-c176606131af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:16 np0005558241 nova_compute[248510]: 2025-12-13 08:32:16.800 248514 DEBUG nova.network.neutron [req-7b4ad618-0267-4797-a69a-af0879b5e156 req-fc26382a-6e40-484a-98f1-299e3e04908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Updated VIF entry in instance network info cache for port 7903e65c-c0bf-4bb5-b044-95f5693f5c38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:32:16 np0005558241 nova_compute[248510]: 2025-12-13 08:32:16.801 248514 DEBUG nova.network.neutron [req-7b4ad618-0267-4797-a69a-af0879b5e156 req-fc26382a-6e40-484a-98f1-299e3e04908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Updating instance_info_cache with network_info: [{"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:32:16 np0005558241 nova_compute[248510]: 2025-12-13 08:32:16.803 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:16 np0005558241 systemd-machined[210538]: New machine qemu-84-instance-00000048.
Dec 13 03:32:16 np0005558241 systemd[1]: Started Virtual Machine qemu-84-instance-00000048.
Dec 13 03:32:16 np0005558241 nova_compute[248510]: 2025-12-13 08:32:16.828 248514 DEBUG oslo_concurrency.lockutils [req-7b4ad618-0267-4797-a69a-af0879b5e156 req-fc26382a-6e40-484a-98f1-299e3e04908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-dc076c88-fe0b-4674-ac32-fb22420b78bc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:32:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.835 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[39342143-a25f-4afc-a76e-77698e4106bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.840 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f97c6b1e-af70-4f49-a455-703e99201d63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.877 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d0cd9e83-b1a6-4849-a5c0-ef61bed3a353]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.899 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f78a7dbb-41bf-44f8-a8a3-45fe96a151ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap409fc0bb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:3b:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727398, 'reachable_time': 29799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317236, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.922 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[415b25c3-c415-4f07-b2ba-0b9fce7d653c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727416, 'tstamp': 727416}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317238, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727421, 'tstamp': 727421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317238, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.925 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap409fc0bb-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:16 np0005558241 nova_compute[248510]: 2025-12-13 08:32:16.927 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:16 np0005558241 nova_compute[248510]: 2025-12-13 08:32:16.933 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.934 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap409fc0bb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.935 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:32:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.935 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap409fc0bb-c0, col_values=(('external_ids', {'iface-id': 'c8e8a31b-a5fe-4e2d-bc19-65995078988f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:16.936 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:32:16 np0005558241 nova_compute[248510]: 2025-12-13 08:32:16.945 248514 INFO nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Creating config drive at /var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc/disk.config#033[00m
Dec 13 03:32:16 np0005558241 nova_compute[248510]: 2025-12-13 08:32:16.952 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp085tm9rq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.003 248514 DEBUG nova.network.neutron [req-81a71b1b-3e55-429a-9365-c1bdb738b9ea req-dbbdbebf-b62c-434c-8495-25da89f4fd38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Updated VIF entry in instance network info cache for port 2bdcea64-a01f-4a75-b664-9c9c971533a6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.004 248514 DEBUG nova.network.neutron [req-81a71b1b-3e55-429a-9365-c1bdb738b9ea req-dbbdbebf-b62c-434c-8495-25da89f4fd38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Updating instance_info_cache with network_info: [{"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.035 248514 DEBUG oslo_concurrency.lockutils [req-81a71b1b-3e55-429a-9365-c1bdb738b9ea req-dbbdbebf-b62c-434c-8495-25da89f4fd38 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.104 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp085tm9rq" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.139 248514 DEBUG nova.storage.rbd_utils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] rbd image dc076c88-fe0b-4674-ac32-fb22420b78bc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.145 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc/disk.config dc076c88-fe0b-4674-ac32-fb22420b78bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.312 248514 DEBUG oslo_concurrency.processutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc/disk.config dc076c88-fe0b-4674-ac32-fb22420b78bc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.313 248514 INFO nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Deleting local config drive /var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc/disk.config because it was imported into RBD.#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.325 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614737.3251662, c5edbf88-6361-407a-a0f1-c133f70b50e9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.326 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] VM Started (Lifecycle Event)#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.363 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.368 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614737.3325074, c5edbf88-6361-407a-a0f1-c133f70b50e9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.369 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:32:17 np0005558241 kernel: tap7903e65c-c0: entered promiscuous mode
Dec 13 03:32:17 np0005558241 NetworkManager[50376]: <info>  [1765614737.3713] manager: (tap7903e65c-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/302)
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.374 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:17Z|00676|binding|INFO|Claiming lport 7903e65c-c0bf-4bb5-b044-95f5693f5c38 for this chassis.
Dec 13 03:32:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:17Z|00677|binding|INFO|7903e65c-c0bf-4bb5-b044-95f5693f5c38: Claiming fa:16:3e:cc:56:55 10.100.0.9
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.385 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:56:55 10.100.0.9'], port_security=['fa:16:3e:cc:56:55 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dc076c88-fe0b-4674-ac32-fb22420b78bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2de328b46a6e4f588f5e2a254db7f4ef', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b109409-2f17-43b9-91ce-3ee865f0de12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6896dc77-88b0-4868-833d-01241883d6af, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7903e65c-c0bf-4bb5-b044-95f5693f5c38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.387 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7903e65c-c0bf-4bb5-b044-95f5693f5c38 in datapath 527d37da-eda0-4bfe-9f1d-310d58024d5d bound to our chassis#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.389 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 527d37da-eda0-4bfe-9f1d-310d58024d5d#033[00m
Dec 13 03:32:17 np0005558241 NetworkManager[50376]: <info>  [1765614737.3944] device (tap7903e65c-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:32:17 np0005558241 NetworkManager[50376]: <info>  [1765614737.3952] device (tap7903e65c-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.398 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:17Z|00678|binding|INFO|Setting lport 7903e65c-c0bf-4bb5-b044-95f5693f5c38 ovn-installed in OVS
Dec 13 03:32:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:17Z|00679|binding|INFO|Setting lport 7903e65c-c0bf-4bb5-b044-95f5693f5c38 up in Southbound
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.401 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.402 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.406 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f23dec5d-5926-4268-9035-aa49aecdf954]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.407 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap527d37da-e1 in ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.409 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap527d37da-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.409 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[19e9c0b9-4a75-401a-99d5-86c6b1cdaebe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.411 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[edf07889-ca73-472f-a4e0-effc644d569d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.411 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:32:17 np0005558241 systemd-machined[210538]: New machine qemu-85-instance-00000049.
Dec 13 03:32:17 np0005558241 systemd[1]: Started Virtual Machine qemu-85-instance-00000049.
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.427 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb893c2-9a8a-44e7-a5d2-c660e1b65416]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.441 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.441 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614737.4354827, 5d34feed-2663-4e17-b951-65a37bd3a275 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.441 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] VM Started (Lifecycle Event)#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.445 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5e0346-c074-4914-9b7a-b7292207ab4e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.465 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.472 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614737.435613, 5d34feed-2663-4e17-b951-65a37bd3a275 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.472 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.479 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7f9b1b60-9948-4f4c-a9d7-b76647b881b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:17 np0005558241 NetworkManager[50376]: <info>  [1765614737.4887] manager: (tap527d37da-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/303)
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.488 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8b0ea2f5-2835-47d9-9488-c27779132590]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.498 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.505 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.527 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d133cded-edbc-42d7-b4e2-f42a6c127e17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.529 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.530 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[59f95452-27d9-45cd-b9d9-8534e2e9b783]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:17 np0005558241 NetworkManager[50376]: <info>  [1765614737.5564] device (tap527d37da-e0): carrier: link connected
Dec 13 03:32:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2051: 321 pgs: 321 active+clean; 467 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 659 KiB/s rd, 5.3 MiB/s wr, 141 op/s
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.565 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[dba5458c-443c-4e03-a0ad-a863dcce3c83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.588 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[54d72114-3316-49e8-a771-7ea8d99bb137]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527d37da-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:e1:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731477, 'reachable_time': 37710, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317414, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.609 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[18c7fe41-a824-416d-b08a-e2b89d68bff4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:e196'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731477, 'tstamp': 731477}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317415, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.627 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9770f970-7ded-4d1b-98a5-97ad218bf00d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527d37da-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:e1:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731477, 'reachable_time': 37710, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317416, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.671 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef597c0-e8dd-4c0c-8d83-749d9038389a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.749 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0b71dad1-924e-4a4a-8355-f4f9a6a398a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.751 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527d37da-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.751 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.751 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap527d37da-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:17 np0005558241 kernel: tap527d37da-e0: entered promiscuous mode
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.754 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:17 np0005558241 NetworkManager[50376]: <info>  [1765614737.7551] manager: (tap527d37da-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/304)
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.757 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap527d37da-e0, col_values=(('external_ids', {'iface-id': '9bf9e6e9-c189-485c-8803-c58be1ee6099'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:17Z|00680|binding|INFO|Releasing lport 9bf9e6e9-c189-485c-8803-c58be1ee6099 from this chassis (sb_readonly=0)
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.758 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:17 np0005558241 nova_compute[248510]: 2025-12-13 08:32:17.776 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.778 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/527d37da-eda0-4bfe-9f1d-310d58024d5d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/527d37da-eda0-4bfe-9f1d-310d58024d5d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.779 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[492dd429-bda1-4b42-bff1-c46804cd48f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.780 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-527d37da-eda0-4bfe-9f1d-310d58024d5d
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/527d37da-eda0-4bfe-9f1d-310d58024d5d.pid.haproxy
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 527d37da-eda0-4bfe-9f1d-310d58024d5d
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:32:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:17.781 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'env', 'PROCESS_TAG=haproxy-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/527d37da-eda0-4bfe-9f1d-310d58024d5d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.176 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614738.1762235, dc076c88-fe0b-4674-ac32-fb22420b78bc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.177 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] VM Started (Lifecycle Event)#033[00m
Dec 13 03:32:18 np0005558241 podman[317489]: 2025-12-13 08:32:18.206832464 +0000 UTC m=+0.068202582 container create 7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.210 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.221 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614738.1764863, dc076c88-fe0b-4674-ac32-fb22420b78bc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.221 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:32:18 np0005558241 systemd[1]: Started libpod-conmon-7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3.scope.
Dec 13 03:32:18 np0005558241 podman[317489]: 2025-12-13 08:32:18.171024134 +0000 UTC m=+0.032394242 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:32:18 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:32:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9542c431d09130ceda6f2031f37bc321f382444a062bd064f5a5f75508c11df/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:32:18 np0005558241 podman[317489]: 2025-12-13 08:32:18.303434462 +0000 UTC m=+0.164804570 container init 7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:32:18 np0005558241 podman[317489]: 2025-12-13 08:32:18.309113349 +0000 UTC m=+0.170483437 container start 7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 03:32:18 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[317505]: [NOTICE]   (317509) : New worker (317511) forked
Dec 13 03:32:18 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[317505]: [NOTICE]   (317509) : Loading success.
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.386 248514 DEBUG nova.network.neutron [req-ea06376b-4ae5-4c27-8766-d849c908f24c req-681219eb-f37e-4041-82f2-5da35e23f0bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Updated VIF entry in instance network info cache for port 533958d1-8a74-4963-9731-40767b4bb127. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.386 248514 DEBUG nova.network.neutron [req-ea06376b-4ae5-4c27-8766-d849c908f24c req-681219eb-f37e-4041-82f2-5da35e23f0bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Updating instance_info_cache with network_info: [{"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.397 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.403 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.407 248514 DEBUG oslo_concurrency.lockutils [req-ea06376b-4ae5-4c27-8766-d849c908f24c req-681219eb-f37e-4041-82f2-5da35e23f0bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-5d34feed-2663-4e17-b951-65a37bd3a275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.428 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.868 248514 DEBUG nova.compute.manager [req-9400019d-ea10-489e-aebb-59ee85dfb86d req-3b9d641e-4e9a-4167-af3e-b4554cb7b55e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.869 248514 DEBUG oslo_concurrency.lockutils [req-9400019d-ea10-489e-aebb-59ee85dfb86d req-3b9d641e-4e9a-4167-af3e-b4554cb7b55e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.869 248514 DEBUG oslo_concurrency.lockutils [req-9400019d-ea10-489e-aebb-59ee85dfb86d req-3b9d641e-4e9a-4167-af3e-b4554cb7b55e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.869 248514 DEBUG oslo_concurrency.lockutils [req-9400019d-ea10-489e-aebb-59ee85dfb86d req-3b9d641e-4e9a-4167-af3e-b4554cb7b55e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.869 248514 DEBUG nova.compute.manager [req-9400019d-ea10-489e-aebb-59ee85dfb86d req-3b9d641e-4e9a-4167-af3e-b4554cb7b55e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Processing event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.870 248514 DEBUG nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.875 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614738.8752797, c5edbf88-6361-407a-a0f1-c133f70b50e9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.875 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.878 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.883 248514 INFO nova.virt.libvirt.driver [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance spawned successfully.#033[00m
Dec 13 03:32:18 np0005558241 nova_compute[248510]: 2025-12-13 08:32:18.883 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:32:19 np0005558241 nova_compute[248510]: 2025-12-13 08:32:19.059 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:19 np0005558241 nova_compute[248510]: 2025-12-13 08:32:19.067 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:32:19 np0005558241 nova_compute[248510]: 2025-12-13 08:32:19.071 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:32:19 np0005558241 nova_compute[248510]: 2025-12-13 08:32:19.071 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:32:19 np0005558241 nova_compute[248510]: 2025-12-13 08:32:19.072 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:32:19 np0005558241 nova_compute[248510]: 2025-12-13 08:32:19.072 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:32:19 np0005558241 nova_compute[248510]: 2025-12-13 08:32:19.072 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:32:19 np0005558241 nova_compute[248510]: 2025-12-13 08:32:19.073 248514 DEBUG nova.virt.libvirt.driver [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:32:19 np0005558241 nova_compute[248510]: 2025-12-13 08:32:19.104 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:32:19 np0005558241 nova_compute[248510]: 2025-12-13 08:32:19.163 248514 INFO nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Took 12.86 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:32:19 np0005558241 nova_compute[248510]: 2025-12-13 08:32:19.164 248514 DEBUG nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:19 np0005558241 nova_compute[248510]: 2025-12-13 08:32:19.244 248514 INFO nova.compute.manager [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Took 14.18 seconds to build instance.#033[00m
Dec 13 03:32:19 np0005558241 nova_compute[248510]: 2025-12-13 08:32:19.267 248514 DEBUG oslo_concurrency.lockutils [None req-d83cc1b7-6606-4c7c-9e36-9d12ca0bc48c 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2052: 321 pgs: 321 active+clean; 467 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 5.4 MiB/s wr, 154 op/s
Dec 13 03:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.2 total, 600.0 interval#012Cumulative writes: 19K writes, 79K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 19K writes, 6254 syncs, 3.12 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7515 writes, 29K keys, 7515 commit groups, 1.0 writes per commit group, ingest: 28.11 MB, 0.05 MB/s#012Interval WAL: 7515 writes, 2963 syncs, 2.54 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 03:32:19 np0005558241 ceph-osd[89221]: bluestore.MempoolThread fragmentation_score=0.002473 took=0.000052s
Dec 13 03:32:19 np0005558241 nova_compute[248510]: 2025-12-13 08:32:19.887 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:20 np0005558241 nova_compute[248510]: 2025-12-13 08:32:20.393 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0036801846766212594 of space, bias 1.0, pg target 1.1040554029863778 quantized to 32 (current 32)
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006673386640797728 of space, bias 1.0, pg target 0.19953426055985207 quantized to 32 (current 32)
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.056797568854111e-07 of space, bias 4.0, pg target 0.0007243929892349517 quantized to 16 (current 32)
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:32:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Dec 13 03:32:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.214 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.214 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.215 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.217 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.217 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.217 248514 WARNING nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.217 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.218 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.218 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.218 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.218 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Processing event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.219 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.219 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.219 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.219 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.220 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] No waiting events found dispatching network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.220 248514 WARNING nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received unexpected event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 for instance with vm_state building and task_state spawning.#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.220 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Received event network-vif-plugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.220 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.220 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.221 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.221 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Processing event network-vif-plugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.221 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Received event network-vif-plugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.221 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.222 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.222 248514 DEBUG oslo_concurrency.lockutils [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.222 248514 DEBUG nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] No waiting events found dispatching network-vif-plugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.222 248514 WARNING nova.compute.manager [req-dad993c5-2a92-49fc-90d9-2f5b4deae392 req-4dd4cd2d-5e39-4235-b600-50b1b8e51f0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Received unexpected event network-vif-plugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 for instance with vm_state building and task_state spawning.
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.223 248514 DEBUG nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.224 248514 DEBUG nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.231 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614741.2305806, dc076c88-fe0b-4674-ac32-fb22420b78bc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.232 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] VM Resumed (Lifecycle Event)
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.234 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.234 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.239 248514 INFO nova.virt.libvirt.driver [-] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Instance spawned successfully.
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.239 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.242 248514 INFO nova.virt.libvirt.driver [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance spawned successfully.
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.243 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.258 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.263 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.279 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.280 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.280 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.280 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.281 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.281 248514 DEBUG nova.virt.libvirt.driver [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.293 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.294 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.295 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.295 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.295 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.296 248514 DEBUG nova.virt.libvirt.driver [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.342 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.343 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614741.2339013, 5d34feed-2663-4e17-b951-65a37bd3a275 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.343 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] VM Resumed (Lifecycle Event)
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.398 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.401 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.437 248514 INFO nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Took 12.45 seconds to spawn the instance on the hypervisor.
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.437 248514 DEBUG nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.438 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.546 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.547 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.564 248514 INFO nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Took 14.09 seconds to spawn the instance on the hypervisor.
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.565 248514 DEBUG nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:32:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2053: 321 pgs: 321 active+clean; 469 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.0 MiB/s wr, 165 op/s
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.576 248514 DEBUG nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.660 248514 INFO nova.compute.manager [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Took 15.00 seconds to build instance.
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.696 248514 DEBUG oslo_concurrency.lockutils [None req-f5ba589a-00a3-40e4-b04b-2f9913bb4393 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.708 248514 INFO nova.compute.manager [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Took 16.59 seconds to build instance.
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.728 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.728 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.733 248514 DEBUG oslo_concurrency.lockutils [None req-cce19ee9-526b-4b6e-8757-a1770edd56e6 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.736 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:32:21 np0005558241 nova_compute[248510]: 2025-12-13 08:32:21.736 248514 INFO nova.compute.claims [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:32:22 np0005558241 nova_compute[248510]: 2025-12-13 08:32:22.132 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:32:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:32:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/531208489' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:32:22 np0005558241 nova_compute[248510]: 2025-12-13 08:32:22.919 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.788s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:32:22 np0005558241 nova_compute[248510]: 2025-12-13 08:32:22.927 248514 DEBUG nova.compute.provider_tree [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:32:22 np0005558241 nova_compute[248510]: 2025-12-13 08:32:22.956 248514 DEBUG nova.scheduler.client.report [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:32:22 np0005558241 nova_compute[248510]: 2025-12-13 08:32:22.990 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:32:22 np0005558241 nova_compute[248510]: 2025-12-13 08:32:22.991 248514 DEBUG nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.049 248514 DEBUG nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.050 248514 DEBUG nova.network.neutron [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.078 248514 INFO nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.102 248514 DEBUG nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:32:23 np0005558241 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.459 248514 DEBUG nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.460 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.460 248514 INFO nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Creating image(s)
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.488 248514 DEBUG nova.storage.rbd_utils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.520 248514 DEBUG nova.storage.rbd_utils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.548 248514 DEBUG nova.storage.rbd_utils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.553 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:32:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2054: 321 pgs: 321 active+clean; 469 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 191 op/s
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.597 248514 DEBUG nova.policy [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4d19f7d5ece8482dab03e4bc02fdf410', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c6718df841f0471ba710516400f126fa', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.640 248514 DEBUG oslo_concurrency.lockutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "dc076c88-fe0b-4674-ac32-fb22420b78bc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.641 248514 DEBUG oslo_concurrency.lockutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.642 248514 DEBUG oslo_concurrency.lockutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.642 248514 DEBUG oslo_concurrency.lockutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.642 248514 DEBUG oslo_concurrency.lockutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.644 248514 INFO nova.compute.manager [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Terminating instance
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.645 248514 DEBUG nova.compute.manager [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.646 248514 INFO nova.compute.manager [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Rescuing
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.646 248514 DEBUG oslo_concurrency.lockutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.646 248514 DEBUG oslo_concurrency.lockutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquired lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.646 248514 DEBUG nova.network.neutron [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.649 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.650 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.650 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.650 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.706 248514 DEBUG nova.storage.rbd_utils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.715 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 bb8c91ff-01cb-4fd5-ab69-005313784b57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:23 np0005558241 kernel: tap7903e65c-c0 (unregistering): left promiscuous mode
Dec 13 03:32:23 np0005558241 NetworkManager[50376]: <info>  [1765614743.9364] device (tap7903e65c-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:32:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:23Z|00681|binding|INFO|Releasing lport 7903e65c-c0bf-4bb5-b044-95f5693f5c38 from this chassis (sb_readonly=0)
Dec 13 03:32:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:23Z|00682|binding|INFO|Setting lport 7903e65c-c0bf-4bb5-b044-95f5693f5c38 down in Southbound
Dec 13 03:32:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:23Z|00683|binding|INFO|Removing iface tap7903e65c-c0 ovn-installed in OVS
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.951 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:23.966 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:56:55 10.100.0.9'], port_security=['fa:16:3e:cc:56:55 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'dc076c88-fe0b-4674-ac32-fb22420b78bc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2de328b46a6e4f588f5e2a254db7f4ef', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b109409-2f17-43b9-91ce-3ee865f0de12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6896dc77-88b0-4868-833d-01241883d6af, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7903e65c-c0bf-4bb5-b044-95f5693f5c38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:32:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:23.967 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7903e65c-c0bf-4bb5-b044-95f5693f5c38 in datapath 527d37da-eda0-4bfe-9f1d-310d58024d5d unbound from our chassis#033[00m
Dec 13 03:32:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:23.969 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 527d37da-eda0-4bfe-9f1d-310d58024d5d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:32:23 np0005558241 nova_compute[248510]: 2025-12-13 08:32:23.980 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:23.977 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3c3583fa-5a42-477e-8577-3010472d391d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:23.983 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d namespace which is not needed anymore#033[00m
Dec 13 03:32:23 np0005558241 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000049.scope: Deactivated successfully.
Dec 13 03:32:23 np0005558241 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000049.scope: Consumed 3.330s CPU time.
Dec 13 03:32:24 np0005558241 systemd-machined[210538]: Machine qemu-85-instance-00000049 terminated.
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.203 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.211 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.221 248514 INFO nova.virt.libvirt.driver [-] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Instance destroyed successfully.#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.221 248514 DEBUG nova.objects.instance [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lazy-loading 'resources' on Instance uuid dc076c88-fe0b-4674-ac32-fb22420b78bc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:24 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[317505]: [NOTICE]   (317509) : haproxy version is 2.8.14-c23fe91
Dec 13 03:32:24 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[317505]: [NOTICE]   (317509) : path to executable is /usr/sbin/haproxy
Dec 13 03:32:24 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[317505]: [WARNING]  (317509) : Exiting Master process...
Dec 13 03:32:24 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[317505]: [ALERT]    (317509) : Current worker (317511) exited with code 143 (Terminated)
Dec 13 03:32:24 np0005558241 neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d[317505]: [WARNING]  (317509) : All workers exited. Exiting... (0)
Dec 13 03:32:24 np0005558241 systemd[1]: libpod-7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3.scope: Deactivated successfully.
Dec 13 03:32:24 np0005558241 podman[317659]: 2025-12-13 08:32:24.246251398 +0000 UTC m=+0.126755819 container died 7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.245 248514 DEBUG nova.virt.libvirt.vif [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:32:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1364276133',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1364276133',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1364276133',id=73,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:32:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2de328b46a6e4f588f5e2a254db7f4ef',ramdisk_id='',reservation_id='r-1zyw8g5c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1826994500',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1826994500-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:32:21Z,user_data=None,user_id='91d0d3efedc943b48ad0fc4295b6fc7c',uuid=dc076c88-fe0b-4674-ac32-fb22420b78bc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.246 248514 DEBUG nova.network.os_vif_util [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converting VIF {"id": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "address": "fa:16:3e:cc:56:55", "network": {"id": "527d37da-eda0-4bfe-9f1d-310d58024d5d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-144643297-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2de328b46a6e4f588f5e2a254db7f4ef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7903e65c-c0", "ovs_interfaceid": "7903e65c-c0bf-4bb5-b044-95f5693f5c38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.247 248514 DEBUG nova.network.os_vif_util [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:56:55,bridge_name='br-int',has_traffic_filtering=True,id=7903e65c-c0bf-4bb5-b044-95f5693f5c38,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7903e65c-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.248 248514 DEBUG os_vif [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:56:55,bridge_name='br-int',has_traffic_filtering=True,id=7903e65c-c0bf-4bb5-b044-95f5693f5c38,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7903e65c-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.251 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.252 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7903e65c-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.254 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.257 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.260 248514 INFO os_vif [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:56:55,bridge_name='br-int',has_traffic_filtering=True,id=7903e65c-c0bf-4bb5-b044-95f5693f5c38,network=Network(527d37da-eda0-4bfe-9f1d-310d58024d5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7903e65c-c0')#033[00m
Dec 13 03:32:24 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3-userdata-shm.mount: Deactivated successfully.
Dec 13 03:32:24 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a9542c431d09130ceda6f2031f37bc321f382444a062bd064f5a5f75508c11df-merged.mount: Deactivated successfully.
Dec 13 03:32:24 np0005558241 podman[317659]: 2025-12-13 08:32:24.322275886 +0000 UTC m=+0.202780307 container cleanup 7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 13 03:32:24 np0005558241 systemd[1]: libpod-conmon-7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3.scope: Deactivated successfully.
Dec 13 03:32:24 np0005558241 podman[317688]: 2025-12-13 08:32:24.387860285 +0000 UTC m=+0.093365560 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.388 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 bb8c91ff-01cb-4fd5-ab69-005313784b57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:24 np0005558241 podman[317740]: 2025-12-13 08:32:24.405333949 +0000 UTC m=+0.058552495 container remove 7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:32:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:24.416 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aecda265-9364-41cb-bb75-424049f46d03]: (4, ('Sat Dec 13 08:32:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d (7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3)\n7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3\nSat Dec 13 08:32:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d (7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3)\n7770894eda72c3357c4c62df2e5c42b7f08fd4ef95460017b38178a532bd1fb3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:24.419 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[af1c6e88-5465-42ea-90e1-0a0aaa5da5d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:24.420 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527d37da-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:24 np0005558241 kernel: tap527d37da-e0: left promiscuous mode
Dec 13 03:32:24 np0005558241 podman[317684]: 2025-12-13 08:32:24.431033807 +0000 UTC m=+0.157873932 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:32:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:24.446 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[70755ccc-2a4e-4952-81ae-be90277df4e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.449 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:24.459 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b468c0-4187-43dd-88eb-c0049c7fb32a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:24.461 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f96dff-1b2f-4472-a0e5-47ad6959d1b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:24 np0005558241 podman[317682]: 2025-12-13 08:32:24.478168228 +0000 UTC m=+0.199619149 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:32:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:24.482 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5cc71971-4b27-4b1e-a1fc-294a159e56d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731468, 'reachable_time': 32811, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317804, 'error': None, 'target': 'ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:24 np0005558241 systemd[1]: run-netns-ovnmeta\x2d527d37da\x2deda0\x2d4bfe\x2d9f1d\x2d310d58024d5d.mount: Deactivated successfully.
Dec 13 03:32:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:24.487 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-527d37da-eda0-4bfe-9f1d-310d58024d5d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:32:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:24.487 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[0dcdc51d-e969-4c31-9638-2e02fc104ced]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.516 248514 DEBUG nova.storage.rbd_utils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] resizing rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.606 248514 DEBUG nova.objects.instance [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'migration_context' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.626 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.626 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Ensure instance console log exists: /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.627 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.627 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.628 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.706 248514 INFO nova.virt.libvirt.driver [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Deleting instance files /var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc_del#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.707 248514 INFO nova.virt.libvirt.driver [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Deletion of /var/lib/nova/instances/dc076c88-fe0b-4674-ac32-fb22420b78bc_del complete#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.791 248514 INFO nova.compute.manager [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Took 1.15 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.792 248514 DEBUG oslo.service.loopingcall [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.794 248514 DEBUG nova.compute.manager [-] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:32:24 np0005558241 nova_compute[248510]: 2025-12-13 08:32:24.794 248514 DEBUG nova.network.neutron [-] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:32:25 np0005558241 nova_compute[248510]: 2025-12-13 08:32:25.252 248514 DEBUG nova.network.neutron [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Successfully created port: d001c32a-bc2d-4374-9cf1-cea4a3723c66 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:32:25 np0005558241 nova_compute[248510]: 2025-12-13 08:32:25.397 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:25 np0005558241 nova_compute[248510]: 2025-12-13 08:32:25.508 248514 DEBUG nova.compute.manager [req-e55fdfeb-da45-465a-83e6-78a76429f362 req-7aecb702-3615-45bd-bbfc-ea666b928345 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Received event network-vif-unplugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:25 np0005558241 nova_compute[248510]: 2025-12-13 08:32:25.508 248514 DEBUG oslo_concurrency.lockutils [req-e55fdfeb-da45-465a-83e6-78a76429f362 req-7aecb702-3615-45bd-bbfc-ea666b928345 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:25 np0005558241 nova_compute[248510]: 2025-12-13 08:32:25.508 248514 DEBUG oslo_concurrency.lockutils [req-e55fdfeb-da45-465a-83e6-78a76429f362 req-7aecb702-3615-45bd-bbfc-ea666b928345 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:25 np0005558241 nova_compute[248510]: 2025-12-13 08:32:25.509 248514 DEBUG oslo_concurrency.lockutils [req-e55fdfeb-da45-465a-83e6-78a76429f362 req-7aecb702-3615-45bd-bbfc-ea666b928345 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:25 np0005558241 nova_compute[248510]: 2025-12-13 08:32:25.509 248514 DEBUG nova.compute.manager [req-e55fdfeb-da45-465a-83e6-78a76429f362 req-7aecb702-3615-45bd-bbfc-ea666b928345 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] No waiting events found dispatching network-vif-unplugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:32:25 np0005558241 nova_compute[248510]: 2025-12-13 08:32:25.509 248514 DEBUG nova.compute.manager [req-e55fdfeb-da45-465a-83e6-78a76429f362 req-7aecb702-3615-45bd-bbfc-ea666b928345 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Received event network-vif-unplugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:32:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2055: 321 pgs: 321 active+clean; 467 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 1.5 MiB/s wr, 309 op/s
Dec 13 03:32:25 np0005558241 nova_compute[248510]: 2025-12-13 08:32:25.683 248514 DEBUG oslo_concurrency.lockutils [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:25 np0005558241 nova_compute[248510]: 2025-12-13 08:32:25.683 248514 DEBUG oslo_concurrency.lockutils [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:25 np0005558241 nova_compute[248510]: 2025-12-13 08:32:25.684 248514 DEBUG nova.compute.manager [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:25 np0005558241 nova_compute[248510]: 2025-12-13 08:32:25.687 248514 DEBUG nova.compute.manager [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Dec 13 03:32:25 np0005558241 nova_compute[248510]: 2025-12-13 08:32:25.690 248514 DEBUG nova.objects.instance [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'flavor' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:25 np0005558241 nova_compute[248510]: 2025-12-13 08:32:25.732 248514 DEBUG nova.virt.libvirt.driver [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec 13 03:32:25 np0005558241 nova_compute[248510]: 2025-12-13 08:32:25.740 248514 DEBUG nova.network.neutron [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Updating instance_info_cache with network_info: [{"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:32:25 np0005558241 nova_compute[248510]: 2025-12-13 08:32:25.760 248514 DEBUG oslo_concurrency.lockutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Releasing lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:32:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:32:26 np0005558241 nova_compute[248510]: 2025-12-13 08:32:26.170 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec 13 03:32:26 np0005558241 nova_compute[248510]: 2025-12-13 08:32:26.319 248514 DEBUG nova.network.neutron [-] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:32:26 np0005558241 nova_compute[248510]: 2025-12-13 08:32:26.352 248514 INFO nova.compute.manager [-] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Took 1.56 seconds to deallocate network for instance.#033[00m
Dec 13 03:32:26 np0005558241 nova_compute[248510]: 2025-12-13 08:32:26.409 248514 DEBUG oslo_concurrency.lockutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:26 np0005558241 nova_compute[248510]: 2025-12-13 08:32:26.410 248514 DEBUG oslo_concurrency.lockutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:26 np0005558241 nova_compute[248510]: 2025-12-13 08:32:26.627 248514 DEBUG oslo_concurrency.processutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:26 np0005558241 nova_compute[248510]: 2025-12-13 08:32:26.840 248514 DEBUG nova.network.neutron [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Successfully updated port: d001c32a-bc2d-4374-9cf1-cea4a3723c66 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:32:26 np0005558241 nova_compute[248510]: 2025-12-13 08:32:26.863 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "refresh_cache-bb8c91ff-01cb-4fd5-ab69-005313784b57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:32:26 np0005558241 nova_compute[248510]: 2025-12-13 08:32:26.863 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquired lock "refresh_cache-bb8c91ff-01cb-4fd5-ab69-005313784b57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:32:26 np0005558241 nova_compute[248510]: 2025-12-13 08:32:26.863 248514 DEBUG nova.network.neutron [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:32:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:32:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3852630843' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:32:27 np0005558241 nova_compute[248510]: 2025-12-13 08:32:27.241 248514 DEBUG oslo_concurrency.processutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:27 np0005558241 nova_compute[248510]: 2025-12-13 08:32:27.246 248514 DEBUG nova.compute.provider_tree [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:32:27 np0005558241 nova_compute[248510]: 2025-12-13 08:32:27.266 248514 DEBUG nova.scheduler.client.report [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:32:27 np0005558241 nova_compute[248510]: 2025-12-13 08:32:27.430 248514 DEBUG oslo_concurrency.lockutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:27 np0005558241 nova_compute[248510]: 2025-12-13 08:32:27.470 248514 INFO nova.scheduler.client.report [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Deleted allocations for instance dc076c88-fe0b-4674-ac32-fb22420b78bc#033[00m
Dec 13 03:32:27 np0005558241 nova_compute[248510]: 2025-12-13 08:32:27.473 248514 DEBUG nova.network.neutron [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:32:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2056: 321 pgs: 321 active+clean; 467 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 1.0 MiB/s wr, 250 op/s
Dec 13 03:32:27 np0005558241 nova_compute[248510]: 2025-12-13 08:32:27.615 248514 DEBUG nova.compute.manager [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Received event network-vif-plugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:27 np0005558241 nova_compute[248510]: 2025-12-13 08:32:27.616 248514 DEBUG oslo_concurrency.lockutils [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:27 np0005558241 nova_compute[248510]: 2025-12-13 08:32:27.616 248514 DEBUG oslo_concurrency.lockutils [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:27 np0005558241 nova_compute[248510]: 2025-12-13 08:32:27.616 248514 DEBUG oslo_concurrency.lockutils [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:27 np0005558241 nova_compute[248510]: 2025-12-13 08:32:27.616 248514 DEBUG nova.compute.manager [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] No waiting events found dispatching network-vif-plugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:32:27 np0005558241 nova_compute[248510]: 2025-12-13 08:32:27.616 248514 WARNING nova.compute.manager [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Received unexpected event network-vif-plugged-7903e65c-c0bf-4bb5-b044-95f5693f5c38 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:32:27 np0005558241 nova_compute[248510]: 2025-12-13 08:32:27.617 248514 DEBUG nova.compute.manager [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Received event network-vif-deleted-7903e65c-c0bf-4bb5-b044-95f5693f5c38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:27 np0005558241 nova_compute[248510]: 2025-12-13 08:32:27.617 248514 DEBUG nova.compute.manager [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-changed-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:27 np0005558241 nova_compute[248510]: 2025-12-13 08:32:27.617 248514 DEBUG nova.compute.manager [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Refreshing instance network info cache due to event network-changed-d001c32a-bc2d-4374-9cf1-cea4a3723c66. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:32:27 np0005558241 nova_compute[248510]: 2025-12-13 08:32:27.617 248514 DEBUG oslo_concurrency.lockutils [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-bb8c91ff-01cb-4fd5-ab69-005313784b57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:32:27 np0005558241 nova_compute[248510]: 2025-12-13 08:32:27.620 248514 DEBUG oslo_concurrency.lockutils [None req-e63b63c1-b3c4-40f3-876e-fa7169716cb1 91d0d3efedc943b48ad0fc4295b6fc7c 2de328b46a6e4f588f5e2a254db7f4ef - - default default] Lock "dc076c88-fe0b-4674-ac32-fb22420b78bc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.803 248514 DEBUG nova.network.neutron [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Updating instance_info_cache with network_info: [{"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.833 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Releasing lock "refresh_cache-bb8c91ff-01cb-4fd5-ab69-005313784b57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.833 248514 DEBUG nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance network_info: |[{"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.834 248514 DEBUG oslo_concurrency.lockutils [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-bb8c91ff-01cb-4fd5-ab69-005313784b57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.834 248514 DEBUG nova.network.neutron [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Refreshing network info cache for port d001c32a-bc2d-4374-9cf1-cea4a3723c66 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.837 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Start _get_guest_xml network_info=[{"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.841 248514 WARNING nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.848 248514 DEBUG nova.virt.libvirt.host [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.849 248514 DEBUG nova.virt.libvirt.host [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.852 248514 DEBUG nova.virt.libvirt.host [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.853 248514 DEBUG nova.virt.libvirt.host [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.853 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.854 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.854 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.854 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.854 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.854 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.855 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.855 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.855 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.855 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.855 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.856 248514 DEBUG nova.virt.hardware [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:32:28 np0005558241 nova_compute[248510]: 2025-12-13 08:32:28.859 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:29 np0005558241 nova_compute[248510]: 2025-12-13 08:32:29.256 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:32:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2514561463' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:32:29 np0005558241 nova_compute[248510]: 2025-12-13 08:32:29.433 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:29 np0005558241 nova_compute[248510]: 2025-12-13 08:32:29.456 248514 DEBUG nova.storage.rbd_utils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:29 np0005558241 nova_compute[248510]: 2025-12-13 08:32:29.461 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2057: 321 pgs: 321 active+clean; 469 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 276 op/s
Dec 13 03:32:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:32:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1214043577' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.064 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.066 248514 DEBUG nova.virt.libvirt.vif [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:32:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-81806715',display_name='tempest-tempest.common.compute-instance-81806715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-81806715',id=74,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-yhg7rdps',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTest
JSON-473360614-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:23Z,user_data=None,user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=bb8c91ff-01cb-4fd5-ab69-005313784b57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.066 248514 DEBUG nova.network.os_vif_util [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.067 248514 DEBUG nova.network.os_vif_util [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.068 248514 DEBUG nova.objects.instance [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_devices' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.273 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:32:30 np0005558241 nova_compute[248510]:  <uuid>bb8c91ff-01cb-4fd5-ab69-005313784b57</uuid>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:  <name>instance-0000004a</name>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <nova:name>tempest-tempest.common.compute-instance-81806715</nova:name>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:32:28</nova:creationTime>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:32:30 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:        <nova:user uuid="4d19f7d5ece8482dab03e4bc02fdf410">tempest-ServerActionsTestJSON-473360614-project-member</nova:user>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:        <nova:project uuid="c6718df841f0471ba710516400f126fa">tempest-ServerActionsTestJSON-473360614</nova:project>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:        <nova:port uuid="d001c32a-bc2d-4374-9cf1-cea4a3723c66">
Dec 13 03:32:30 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <entry name="serial">bb8c91ff-01cb-4fd5-ab69-005313784b57</entry>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <entry name="uuid">bb8c91ff-01cb-4fd5-ab69-005313784b57</entry>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/bb8c91ff-01cb-4fd5-ab69-005313784b57_disk">
Dec 13 03:32:30 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:32:30 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config">
Dec 13 03:32:30 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:32:30 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:63:10:dc"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <target dev="tapd001c32a-bc"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/console.log" append="off"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:32:30 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:32:30 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:32:30 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:32:30 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.275 248514 DEBUG nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Preparing to wait for external event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.275 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.276 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.276 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.277 248514 DEBUG nova.virt.libvirt.vif [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:32:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-81806715',display_name='tempest-tempest.common.compute-instance-81806715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-81806715',id=74,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-yhg7rdps',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerA
ctionsTestJSON-473360614-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:23Z,user_data=None,user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=bb8c91ff-01cb-4fd5-ab69-005313784b57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.277 248514 DEBUG nova.network.os_vif_util [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.278 248514 DEBUG nova.network.os_vif_util [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.278 248514 DEBUG os_vif [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.279 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.280 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.280 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.283 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.284 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd001c32a-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.284 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd001c32a-bc, col_values=(('external_ids', {'iface-id': 'd001c32a-bc2d-4374-9cf1-cea4a3723c66', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:10:dc', 'vm-uuid': 'bb8c91ff-01cb-4fd5-ab69-005313784b57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.286 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:30 np0005558241 NetworkManager[50376]: <info>  [1765614750.2870] manager: (tapd001c32a-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.289 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.292 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.294 248514 INFO os_vif [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc')#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.357 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.357 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.358 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] No VIF found with MAC fa:16:3e:63:10:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.358 248514 INFO nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Using config drive#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.384 248514 DEBUG nova.storage.rbd_utils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.398 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.878 248514 INFO nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Creating config drive at /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.884 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa0dcnw3y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.934 248514 DEBUG nova.network.neutron [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Updated VIF entry in instance network info cache for port d001c32a-bc2d-4374-9cf1-cea4a3723c66. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:32:30 np0005558241 nova_compute[248510]: 2025-12-13 08:32:30.935 248514 DEBUG nova.network.neutron [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Updating instance_info_cache with network_info: [{"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.032 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa0dcnw3y" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.061 248514 DEBUG nova.storage.rbd_utils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.065 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.074 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:32:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.075 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.115 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.119 248514 DEBUG oslo_concurrency.lockutils [req-58b3d52f-625b-44d6-a574-7ab6dfff9a27 req-2d2ce474-27f3-4ad5-8416-1f041caace36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-bb8c91ff-01cb-4fd5-ab69-005313784b57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.229 248514 DEBUG oslo_concurrency.processutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.231 248514 INFO nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Deleting local config drive /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config because it was imported into RBD.#033[00m
Dec 13 03:32:31 np0005558241 kernel: tapd001c32a-bc: entered promiscuous mode
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.291 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:31 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:31Z|00684|binding|INFO|Claiming lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 for this chassis.
Dec 13 03:32:31 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:31Z|00685|binding|INFO|d001c32a-bc2d-4374-9cf1-cea4a3723c66: Claiming fa:16:3e:63:10:dc 10.100.0.13
Dec 13 03:32:31 np0005558241 NetworkManager[50376]: <info>  [1765614751.2971] manager: (tapd001c32a-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/306)
Dec 13 03:32:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.304 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:10:dc 10.100.0.13'], port_security=['fa:16:3e:63:10:dc 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bb8c91ff-01cb-4fd5-ab69-005313784b57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8f5cceeb-71e0-4eee-9335-64d44ec2d969', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d001c32a-bc2d-4374-9cf1-cea4a3723c66) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:32:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.305 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d001c32a-bc2d-4374-9cf1-cea4a3723c66 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 bound to our chassis#033[00m
Dec 13 03:32:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.309 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839#033[00m
Dec 13 03:32:31 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:31Z|00686|binding|INFO|Setting lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 ovn-installed in OVS
Dec 13 03:32:31 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:31Z|00687|binding|INFO|Setting lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 up in Southbound
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.319 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.322 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:31 np0005558241 systemd-udevd[318016]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:32:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.331 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8f94dc89-09f1-45a9-9052-e1b84ff473a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:31 np0005558241 NetworkManager[50376]: <info>  [1765614751.3431] device (tapd001c32a-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:32:31 np0005558241 NetworkManager[50376]: <info>  [1765614751.3441] device (tapd001c32a-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:32:31 np0005558241 systemd-machined[210538]: New machine qemu-86-instance-0000004a.
Dec 13 03:32:31 np0005558241 systemd[1]: Started Virtual Machine qemu-86-instance-0000004a.
Dec 13 03:32:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.376 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7af3cdd8-9f6d-4b4c-b94e-e4eecf85761b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.379 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0822d0b6-671d-49d9-950f-019cf8c63ada]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.413 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f51befd5-1773-49d6-86e1-f0cbb668e61f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.448 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3ff81c76-b002-4f8e-9a93-6214fabfc0b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727295, 'reachable_time': 29620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318030, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.468 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b53bfb5c-e5cf-426f-91cf-c1cc9d95c8c1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap43ee8730-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727305, 'tstamp': 727305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318032, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap43ee8730-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727308, 'tstamp': 727308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318032, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.471 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.473 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.475 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.475 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:32:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.475 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:31.476 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:32:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2058: 321 pgs: 321 active+clean; 469 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 263 op/s
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.634149) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614751634215, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1267, "num_deletes": 253, "total_data_size": 1870054, "memory_usage": 1901392, "flush_reason": "Manual Compaction"}
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614751648568, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 1825298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38788, "largest_seqno": 40054, "table_properties": {"data_size": 1819273, "index_size": 3292, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13478, "raw_average_key_size": 20, "raw_value_size": 1806879, "raw_average_value_size": 2725, "num_data_blocks": 146, "num_entries": 663, "num_filter_entries": 663, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765614645, "oldest_key_time": 1765614645, "file_creation_time": 1765614751, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 14512 microseconds, and 6202 cpu microseconds.
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.648627) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 1825298 bytes OK
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.648686) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.651399) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.651415) EVENT_LOG_v1 {"time_micros": 1765614751651410, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.651442) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 1864244, prev total WAL file size 1864244, number of live WAL files 2.
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.652172) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(1782KB)], [86(10MB)]
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614751652480, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 12333504, "oldest_snapshot_seqno": -1}
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6452 keys, 10557422 bytes, temperature: kUnknown
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614751748021, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 10557422, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10512489, "index_size": 27671, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 164414, "raw_average_key_size": 25, "raw_value_size": 10395008, "raw_average_value_size": 1611, "num_data_blocks": 1117, "num_entries": 6452, "num_filter_entries": 6452, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765614751, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.748462) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10557422 bytes
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.750800) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.9 rd, 110.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 10.0 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(12.5) write-amplify(5.8) OK, records in: 6974, records dropped: 522 output_compression: NoCompression
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.750840) EVENT_LOG_v1 {"time_micros": 1765614751750825, "job": 50, "event": "compaction_finished", "compaction_time_micros": 95697, "compaction_time_cpu_micros": 32593, "output_level": 6, "num_output_files": 1, "total_output_size": 10557422, "num_input_records": 6974, "num_output_records": 6452, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614751751698, "job": 50, "event": "table_file_deletion", "file_number": 88}
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614751755608, "job": 50, "event": "table_file_deletion", "file_number": 86}
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.652051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.755690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.755694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.755696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.755698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:32:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:32:31.755699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.802 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614751.8011086, bb8c91ff-01cb-4fd5-ab69-005313784b57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.802 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] VM Started (Lifecycle Event)#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.829 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.836 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614751.8050423, bb8c91ff-01cb-4fd5-ab69-005313784b57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.836 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.861 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.865 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.887 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.914 248514 DEBUG nova.compute.manager [req-304c7f5a-3b75-46c0-98d8-47475f258609 req-be1bc0f8-f6e2-4938-8704-9265d9774566 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.914 248514 DEBUG oslo_concurrency.lockutils [req-304c7f5a-3b75-46c0-98d8-47475f258609 req-be1bc0f8-f6e2-4938-8704-9265d9774566 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.914 248514 DEBUG oslo_concurrency.lockutils [req-304c7f5a-3b75-46c0-98d8-47475f258609 req-be1bc0f8-f6e2-4938-8704-9265d9774566 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.915 248514 DEBUG oslo_concurrency.lockutils [req-304c7f5a-3b75-46c0-98d8-47475f258609 req-be1bc0f8-f6e2-4938-8704-9265d9774566 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.915 248514 DEBUG nova.compute.manager [req-304c7f5a-3b75-46c0-98d8-47475f258609 req-be1bc0f8-f6e2-4938-8704-9265d9774566 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Processing event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.915 248514 DEBUG nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.918 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614751.918699, bb8c91ff-01cb-4fd5-ab69-005313784b57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.919 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.921 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.925 248514 INFO nova.virt.libvirt.driver [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance spawned successfully.#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.925 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.951 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.962 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.970 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.971 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.972 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.973 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.974 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:32:31 np0005558241 nova_compute[248510]: 2025-12-13 08:32:31.975 248514 DEBUG nova.virt.libvirt.driver [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:32:32 np0005558241 nova_compute[248510]: 2025-12-13 08:32:32.009 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:32:32 np0005558241 nova_compute[248510]: 2025-12-13 08:32:32.180 248514 INFO nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Took 8.72 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:32:32 np0005558241 nova_compute[248510]: 2025-12-13 08:32:32.181 248514 DEBUG nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:32 np0005558241 nova_compute[248510]: 2025-12-13 08:32:32.290 248514 INFO nova.compute.manager [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Took 10.60 seconds to build instance.#033[00m
Dec 13 03:32:32 np0005558241 nova_compute[248510]: 2025-12-13 08:32:32.317 248514 DEBUG oslo_concurrency.lockutils [None req-12cdfe55-d852-47d2-80c3-52fef9396908 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2059: 321 pgs: 321 active+clean; 471 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.1 MiB/s wr, 245 op/s
Dec 13 03:32:34 np0005558241 nova_compute[248510]: 2025-12-13 08:32:34.683 248514 DEBUG nova.compute.manager [req-76a313f6-41bf-4d9e-a96b-b8782f1dc956 req-8262d44c-d897-47f9-9e9b-ebbaac9d4b01 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:34 np0005558241 nova_compute[248510]: 2025-12-13 08:32:34.684 248514 DEBUG oslo_concurrency.lockutils [req-76a313f6-41bf-4d9e-a96b-b8782f1dc956 req-8262d44c-d897-47f9-9e9b-ebbaac9d4b01 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:34 np0005558241 nova_compute[248510]: 2025-12-13 08:32:34.684 248514 DEBUG oslo_concurrency.lockutils [req-76a313f6-41bf-4d9e-a96b-b8782f1dc956 req-8262d44c-d897-47f9-9e9b-ebbaac9d4b01 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:34 np0005558241 nova_compute[248510]: 2025-12-13 08:32:34.684 248514 DEBUG oslo_concurrency.lockutils [req-76a313f6-41bf-4d9e-a96b-b8782f1dc956 req-8262d44c-d897-47f9-9e9b-ebbaac9d4b01 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:34 np0005558241 nova_compute[248510]: 2025-12-13 08:32:34.685 248514 DEBUG nova.compute.manager [req-76a313f6-41bf-4d9e-a96b-b8782f1dc956 req-8262d44c-d897-47f9-9e9b-ebbaac9d4b01 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] No waiting events found dispatching network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:32:34 np0005558241 nova_compute[248510]: 2025-12-13 08:32:34.685 248514 WARNING nova.compute.manager [req-76a313f6-41bf-4d9e-a96b-b8782f1dc956 req-8262d44c-d897-47f9-9e9b-ebbaac9d4b01 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received unexpected event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:32:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:35.077 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:35 np0005558241 nova_compute[248510]: 2025-12-13 08:32:35.288 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:35 np0005558241 nova_compute[248510]: 2025-12-13 08:32:35.401 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:35Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:ab:b9 10.100.0.5
Dec 13 03:32:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:35Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:ab:b9 10.100.0.5
Dec 13 03:32:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2060: 321 pgs: 321 active+clean; 500 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.0 MiB/s wr, 298 op/s
Dec 13 03:32:35 np0005558241 nova_compute[248510]: 2025-12-13 08:32:35.782 248514 DEBUG nova.virt.libvirt.driver [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec 13 03:32:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:32:36 np0005558241 nova_compute[248510]: 2025-12-13 08:32:36.216 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec 13 03:32:36 np0005558241 nova_compute[248510]: 2025-12-13 08:32:36.745 248514 INFO nova.compute.manager [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Rebuilding instance#033[00m
Dec 13 03:32:37 np0005558241 nova_compute[248510]: 2025-12-13 08:32:37.095 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'trusted_certs' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:37 np0005558241 nova_compute[248510]: 2025-12-13 08:32:37.117 248514 DEBUG nova.compute.manager [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:37 np0005558241 nova_compute[248510]: 2025-12-13 08:32:37.184 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_requests' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:37 np0005558241 nova_compute[248510]: 2025-12-13 08:32:37.198 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_devices' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:37 np0005558241 nova_compute[248510]: 2025-12-13 08:32:37.215 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'resources' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:37 np0005558241 nova_compute[248510]: 2025-12-13 08:32:37.229 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'migration_context' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:37 np0005558241 nova_compute[248510]: 2025-12-13 08:32:37.242 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec 13 03:32:37 np0005558241 nova_compute[248510]: 2025-12-13 08:32:37.246 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec 13 03:32:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2061: 321 pgs: 321 active+clean; 500 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.0 MiB/s wr, 139 op/s
Dec 13 03:32:38 np0005558241 kernel: tap533958d1-8a (unregistering): left promiscuous mode
Dec 13 03:32:38 np0005558241 NetworkManager[50376]: <info>  [1765614758.4375] device (tap533958d1-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:32:38 np0005558241 nova_compute[248510]: 2025-12-13 08:32:38.447 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:38Z|00688|binding|INFO|Releasing lport 533958d1-8a74-4963-9731-40767b4bb127 from this chassis (sb_readonly=0)
Dec 13 03:32:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:38Z|00689|binding|INFO|Setting lport 533958d1-8a74-4963-9731-40767b4bb127 down in Southbound
Dec 13 03:32:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:38Z|00690|binding|INFO|Removing iface tap533958d1-8a ovn-installed in OVS
Dec 13 03:32:38 np0005558241 nova_compute[248510]: 2025-12-13 08:32:38.449 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:38 np0005558241 nova_compute[248510]: 2025-12-13 08:32:38.469 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:38 np0005558241 kernel: tap2bdcea64-a0 (unregistering): left promiscuous mode
Dec 13 03:32:38 np0005558241 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000048.scope: Deactivated successfully.
Dec 13 03:32:38 np0005558241 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000048.scope: Consumed 14.954s CPU time.
Dec 13 03:32:38 np0005558241 NetworkManager[50376]: <info>  [1765614758.5090] device (tap2bdcea64-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:32:38 np0005558241 systemd-machined[210538]: Machine qemu-84-instance-00000048 terminated.
Dec 13 03:32:38 np0005558241 nova_compute[248510]: 2025-12-13 08:32:38.521 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:38Z|00691|binding|INFO|Releasing lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 from this chassis (sb_readonly=1)
Dec 13 03:32:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:38Z|00692|binding|INFO|Removing iface tap2bdcea64-a0 ovn-installed in OVS
Dec 13 03:32:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:38Z|00693|if_status|INFO|Dropped 1 log messages in last 118 seconds (most recently, 118 seconds ago) due to excessive rate
Dec 13 03:32:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:38Z|00694|if_status|INFO|Not setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 down as sb is readonly
Dec 13 03:32:38 np0005558241 nova_compute[248510]: 2025-12-13 08:32:38.527 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:38 np0005558241 nova_compute[248510]: 2025-12-13 08:32:38.544 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:38 np0005558241 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000047.scope: Deactivated successfully.
Dec 13 03:32:38 np0005558241 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000047.scope: Consumed 16.649s CPU time.
Dec 13 03:32:38 np0005558241 systemd-machined[210538]: Machine qemu-83-instance-00000047 terminated.
Dec 13 03:32:38 np0005558241 nova_compute[248510]: 2025-12-13 08:32:38.679 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:38 np0005558241 nova_compute[248510]: 2025-12-13 08:32:38.687 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:38 np0005558241 NetworkManager[50376]: <info>  [1765614758.6996] manager: (tap2bdcea64-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/307)
Dec 13 03:32:38 np0005558241 nova_compute[248510]: 2025-12-13 08:32:38.799 248514 INFO nova.virt.libvirt.driver [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance shutdown successfully after 13 seconds.#033[00m
Dec 13 03:32:38 np0005558241 nova_compute[248510]: 2025-12-13 08:32:38.805 248514 INFO nova.virt.libvirt.driver [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance destroyed successfully.#033[00m
Dec 13 03:32:38 np0005558241 nova_compute[248510]: 2025-12-13 08:32:38.806 248514 DEBUG nova.objects.instance [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'numa_topology' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:38Z|00695|binding|INFO|Setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 down in Southbound
Dec 13 03:32:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:38.959 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:ab:b9 10.100.0.5'], port_security=['fa:16:3e:66:ab:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5d34feed-2663-4e17-b951-65a37bd3a275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45afb483-a012-4442-b20c-edd0f1f0f374', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6794829-4e4e-4196-8b70-d8e6b3d73dd5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=533958d1-8a74-4963-9731-40767b4bb127) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:32:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:38.960 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 533958d1-8a74-4963-9731-40767b4bb127 in datapath 409fc0bb-caf3-4b90-9e44-83ff383dc88f unbound from our chassis#033[00m
Dec 13 03:32:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:38.963 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 409fc0bb-caf3-4b90-9e44-83ff383dc88f#033[00m
Dec 13 03:32:38 np0005558241 nova_compute[248510]: 2025-12-13 08:32:38.976 248514 DEBUG nova.compute.manager [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:38.981 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[070fe869-5b07-4e63-8160-31b60f0ace64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.015 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4edbc03b-d2f5-453c-929b-d2d844b27c22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.019 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[96eb5a84-8a09-4727-be80-0b8e8491de44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.040 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:4d:d0 10.100.0.6'], port_security=['fa:16:3e:13:4d:d0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c5edbf88-6361-407a-a0f1-c133f70b50e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2bdcea64-a01f-4a75-b664-9c9c971533a6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:32:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.063 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a0550654-c774-4588-a556-ff88c4a7ce09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.081 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[27c44905-08e6-4979-ae9f-9a05d9c1dd81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap409fc0bb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:3b:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727398, 'reachable_time': 29799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318119, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.101 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[90b526af-1ddf-4154-84a8-ef214226b8c6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727416, 'tstamp': 727416}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318120, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727421, 'tstamp': 727421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318120, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.103 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap409fc0bb-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:39 np0005558241 nova_compute[248510]: 2025-12-13 08:32:39.104 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:39 np0005558241 nova_compute[248510]: 2025-12-13 08:32:39.112 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.113 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap409fc0bb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.113 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:32:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.114 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap409fc0bb-c0, col_values=(('external_ids', {'iface-id': 'c8e8a31b-a5fe-4e2d-bc19-65995078988f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.114 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:32:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.116 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2bdcea64-a01f-4a75-b664-9c9c971533a6 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 unbound from our chassis#033[00m
Dec 13 03:32:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.117 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 13 03:32:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:39.119 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[01700b5f-7403-495b-8a0a-22533f6f2591]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:39 np0005558241 nova_compute[248510]: 2025-12-13 08:32:39.210 248514 DEBUG oslo_concurrency.lockutils [None req-ef0d53a8-f475-42a2-802e-d66d248d9e10 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:39 np0005558241 nova_compute[248510]: 2025-12-13 08:32:39.217 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614744.2162104, dc076c88-fe0b-4674-ac32-fb22420b78bc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:32:39 np0005558241 nova_compute[248510]: 2025-12-13 08:32:39.219 248514 INFO nova.compute.manager [-] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:32:39 np0005558241 nova_compute[248510]: 2025-12-13 08:32:39.233 248514 INFO nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance shutdown successfully after 13 seconds.#033[00m
Dec 13 03:32:39 np0005558241 nova_compute[248510]: 2025-12-13 08:32:39.242 248514 INFO nova.virt.libvirt.driver [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance destroyed successfully.#033[00m
Dec 13 03:32:39 np0005558241 nova_compute[248510]: 2025-12-13 08:32:39.242 248514 DEBUG nova.objects.instance [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'numa_topology' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2062: 321 pgs: 321 active+clean; 535 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 5.1 MiB/s wr, 226 op/s
Dec 13 03:32:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:32:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:32:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:32:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:32:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:32:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.143 248514 DEBUG nova.compute.manager [None req-9386d99e-eccc-4eec-bd52-f3a1048eeeac - - - - - -] [instance: dc076c88-fe0b-4674-ac32-fb22420b78bc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.149 248514 INFO nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Attempting rescue#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.150 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.156 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.157 248514 INFO nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Creating image(s)#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.185 248514 DEBUG nova.storage.rbd_utils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.190 248514 DEBUG nova.objects.instance [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.273 248514 DEBUG nova.storage.rbd_utils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.296 248514 DEBUG nova.storage.rbd_utils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.301 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.346 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.403 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.408 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.409 248514 DEBUG oslo_concurrency.lockutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.410 248514 DEBUG oslo_concurrency.lockutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.410 248514 DEBUG oslo_concurrency.lockutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.434 248514 DEBUG nova.storage.rbd_utils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.438 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.783 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.784 248514 DEBUG nova.objects.instance [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'migration_context' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.885 248514 DEBUG nova.compute.manager [req-ee51b5fd-fb61-4919-8089-b662ffe04f48 req-a7be5b12-9362-405f-b1db-0d732c95aeea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-vif-unplugged-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.886 248514 DEBUG oslo_concurrency.lockutils [req-ee51b5fd-fb61-4919-8089-b662ffe04f48 req-a7be5b12-9362-405f-b1db-0d732c95aeea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.886 248514 DEBUG oslo_concurrency.lockutils [req-ee51b5fd-fb61-4919-8089-b662ffe04f48 req-a7be5b12-9362-405f-b1db-0d732c95aeea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.887 248514 DEBUG oslo_concurrency.lockutils [req-ee51b5fd-fb61-4919-8089-b662ffe04f48 req-a7be5b12-9362-405f-b1db-0d732c95aeea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.887 248514 DEBUG nova.compute.manager [req-ee51b5fd-fb61-4919-8089-b662ffe04f48 req-a7be5b12-9362-405f-b1db-0d732c95aeea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] No waiting events found dispatching network-vif-unplugged-533958d1-8a74-4963-9731-40767b4bb127 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.887 248514 WARNING nova.compute.manager [req-ee51b5fd-fb61-4919-8089-b662ffe04f48 req-a7be5b12-9362-405f-b1db-0d732c95aeea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received unexpected event network-vif-unplugged-533958d1-8a74-4963-9731-40767b4bb127 for instance with vm_state stopped and task_state None.#033[00m
Dec 13 03:32:40 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:40Z|00696|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 03:32:40 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:40Z|00697|binding|INFO|Releasing lport c8e8a31b-a5fe-4e2d-bc19-65995078988f from this chassis (sb_readonly=0)
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.918 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.920 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Start _get_guest_xml network_info=[{"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1650812198-network", "vif_mac": "fa:16:3e:13:4d:d0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.921 248514 DEBUG nova.objects.instance [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'resources' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:40 np0005558241 nova_compute[248510]: 2025-12-13 08:32:40.968 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.022 248514 WARNING nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.035 248514 DEBUG nova.virt.libvirt.host [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.036 248514 DEBUG nova.virt.libvirt.host [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.039 248514 DEBUG nova.virt.libvirt.host [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.040 248514 DEBUG nova.virt.libvirt.host [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.040 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.041 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.041 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.041 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.042 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.042 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.042 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.042 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.043 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.043 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.043 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.043 248514 DEBUG nova.virt.hardware [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.044 248514 DEBUG nova.objects.instance [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.063 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:32:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2063: 321 pgs: 321 active+clean; 535 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 207 op/s
Dec 13 03:32:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:32:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3564628827' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.705 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:41 np0005558241 nova_compute[248510]: 2025-12-13 08:32:41.706 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:32:42 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1542106509' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:32:42 np0005558241 nova_compute[248510]: 2025-12-13 08:32:42.387 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.681s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:42 np0005558241 nova_compute[248510]: 2025-12-13 08:32:42.389 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:32:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/661846961' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.057 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.060 248514 DEBUG nova.virt.libvirt.vif [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-607419756',display_name='tempest-ServerRescueTestJSON-server-607419756',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-607419756',id=71,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:32:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='71e2453379684f0ca0563f8c370ea4a3',ramdisk_id='',reservation_id='r-h6adjjd8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1425963100',owner_user_name='tempest-ServerRescueTestJSON-1425963100-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:19Z,user_data=None,user_id='93eec08d500a4f03afb3281e9899bd6a',uuid=c5edbf88-6361-407a-a0f1-c133f70b50e9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1650812198-network", "vif_mac": "fa:16:3e:13:4d:d0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.060 248514 DEBUG nova.network.os_vif_util [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converting VIF {"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1650812198-network", "vif_mac": "fa:16:3e:13:4d:d0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.061 248514 DEBUG nova.network.os_vif_util [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:13:4d:d0,bridge_name='br-int',has_traffic_filtering=True,id=2bdcea64-a01f-4a75-b664-9c9c971533a6,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bdcea64-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.062 248514 DEBUG nova.objects.instance [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.102 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:  <uuid>c5edbf88-6361-407a-a0f1-c133f70b50e9</uuid>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:  <name>instance-00000047</name>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerRescueTestJSON-server-607419756</nova:name>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:32:41</nova:creationTime>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:        <nova:user uuid="93eec08d500a4f03afb3281e9899bd6a">tempest-ServerRescueTestJSON-1425963100-project-member</nova:user>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:        <nova:project uuid="71e2453379684f0ca0563f8c370ea4a3">tempest-ServerRescueTestJSON-1425963100</nova:project>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:        <nova:port uuid="2bdcea64-a01f-4a75-b664-9c9c971533a6">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <entry name="serial">c5edbf88-6361-407a-a0f1-c133f70b50e9</entry>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <entry name="uuid">c5edbf88-6361-407a-a0f1-c133f70b50e9</entry>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.rescue">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/c5edbf88-6361-407a-a0f1-c133f70b50e9_disk">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <target dev="vdb" bus="virtio"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config.rescue">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:13:4d:d0"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <target dev="tap2bdcea64-a0"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/console.log" append="off"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:32:43 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:32:43 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:32:43 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:32:43 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.111 248514 INFO nova.virt.libvirt.driver [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance destroyed successfully.#033[00m
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.212 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.213 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.213 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.213 248514 DEBUG nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] No VIF found with MAC fa:16:3e:13:4d:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.214 248514 INFO nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Using config drive#033[00m
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.235 248514 DEBUG nova.storage.rbd_utils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.268 248514 DEBUG nova.objects.instance [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.305 248514 DEBUG nova.objects.instance [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'keypairs' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2064: 321 pgs: 321 active+clean; 553 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.9 MiB/s wr, 211 op/s
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.651 248514 INFO nova.compute.manager [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Rebuilding instance#033[00m
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.810 248514 INFO nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Creating config drive at /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config.rescue#033[00m
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.815 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplhwsqjta execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.962 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplhwsqjta" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:43 np0005558241 nova_compute[248510]: 2025-12-13 08:32:43.998 248514 DEBUG nova.storage.rbd_utils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] rbd image c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.003 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config.rescue c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.374 248514 DEBUG nova.compute.manager [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.375 248514 DEBUG oslo_concurrency.lockutils [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.375 248514 DEBUG oslo_concurrency.lockutils [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.376 248514 DEBUG oslo_concurrency.lockutils [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.376 248514 DEBUG nova.compute.manager [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] No waiting events found dispatching network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.376 248514 WARNING nova.compute.manager [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received unexpected event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 for instance with vm_state stopped and task_state rebuilding.#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.376 248514 DEBUG nova.compute.manager [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-unplugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.377 248514 DEBUG oslo_concurrency.lockutils [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.377 248514 DEBUG oslo_concurrency.lockutils [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.377 248514 DEBUG oslo_concurrency.lockutils [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.377 248514 DEBUG nova.compute.manager [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-unplugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.378 248514 WARNING nova.compute.manager [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-unplugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state active and task_state rescuing.#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.378 248514 DEBUG nova.compute.manager [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.378 248514 DEBUG oslo_concurrency.lockutils [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.378 248514 DEBUG oslo_concurrency.lockutils [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.379 248514 DEBUG oslo_concurrency.lockutils [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.379 248514 DEBUG nova.compute.manager [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.379 248514 WARNING nova.compute.manager [req-7f2d229d-5ad5-4be4-bce5-718d12cbb8ff req-38fad923-40ef-47a8-b74c-a10923644dd2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state active and task_state rescuing.#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.426 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.451 248514 DEBUG nova.compute.manager [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.509 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'pci_requests' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.526 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.546 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'resources' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.562 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'migration_context' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.605 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.611 248514 INFO nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance already shutdown.#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.617 248514 INFO nova.virt.libvirt.driver [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance destroyed successfully.#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.624 248514 INFO nova.virt.libvirt.driver [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance destroyed successfully.#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.625 248514 DEBUG nova.virt.libvirt.vif [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1505100715',display_name='tempest-tempest.common.compute-instance-1505100715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1505100715',id=72,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:32:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-kk2o7y6y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-
member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:42Z,user_data=None,user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=5d34feed-2663-4e17-b951-65a37bd3a275,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.626 248514 DEBUG nova.network.os_vif_util [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.627 248514 DEBUG nova.network.os_vif_util [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.628 248514 DEBUG os_vif [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.631 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.632 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap533958d1-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.637 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.643 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.647 248514 INFO os_vif [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a')#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.773 248514 DEBUG oslo_concurrency.processutils [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config.rescue c5edbf88-6361-407a-a0f1-c133f70b50e9_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.770s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.773 248514 INFO nova.virt.libvirt.driver [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Deleting local config drive /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9/disk.config.rescue because it was imported into RBD.#033[00m
Dec 13 03:32:44 np0005558241 kernel: tap2bdcea64-a0: entered promiscuous mode
Dec 13 03:32:44 np0005558241 NetworkManager[50376]: <info>  [1765614764.8176] manager: (tap2bdcea64-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/308)
Dec 13 03:32:44 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:44Z|00698|binding|INFO|Claiming lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 for this chassis.
Dec 13 03:32:44 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:44Z|00699|binding|INFO|2bdcea64-a01f-4a75-b664-9c9c971533a6: Claiming fa:16:3e:13:4d:d0 10.100.0.6
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.818 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:44.826 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:4d:d0 10.100.0.6'], port_security=['fa:16:3e:13:4d:d0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c5edbf88-6361-407a-a0f1-c133f70b50e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2bdcea64-a01f-4a75-b664-9c9c971533a6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:32:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:44.827 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2bdcea64-a01f-4a75-b664-9c9c971533a6 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 bound to our chassis#033[00m
Dec 13 03:32:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:44.829 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 13 03:32:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:44.830 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[063f0f72-2a06-4b56-876c-d465b5872341]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:44 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:44Z|00700|binding|INFO|Setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 ovn-installed in OVS
Dec 13 03:32:44 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:44Z|00701|binding|INFO|Setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 up in Southbound
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.837 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:44 np0005558241 nova_compute[248510]: 2025-12-13 08:32:44.841 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:44 np0005558241 systemd-udevd[318374]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:32:44 np0005558241 systemd-machined[210538]: New machine qemu-87-instance-00000047.
Dec 13 03:32:44 np0005558241 NetworkManager[50376]: <info>  [1765614764.8610] device (tap2bdcea64-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:32:44 np0005558241 NetworkManager[50376]: <info>  [1765614764.8623] device (tap2bdcea64-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:32:44 np0005558241 systemd[1]: Started Virtual Machine qemu-87-instance-00000047.
Dec 13 03:32:45 np0005558241 nova_compute[248510]: 2025-12-13 08:32:45.405 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2065: 321 pgs: 321 active+clean; 581 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.8 MiB/s wr, 212 op/s
Dec 13 03:32:45 np0005558241 nova_compute[248510]: 2025-12-13 08:32:45.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:32:45 np0005558241 nova_compute[248510]: 2025-12-13 08:32:45.902 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for c5edbf88-6361-407a-a0f1-c133f70b50e9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 03:32:45 np0005558241 nova_compute[248510]: 2025-12-13 08:32:45.904 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614765.9022048, c5edbf88-6361-407a-a0f1-c133f70b50e9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:32:45 np0005558241 nova_compute[248510]: 2025-12-13 08:32:45.904 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:32:45 np0005558241 nova_compute[248510]: 2025-12-13 08:32:45.911 248514 DEBUG nova.compute.manager [None req-432b77e5-f9cb-4f6d-a7db-9b52e5a3a262 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:45 np0005558241 nova_compute[248510]: 2025-12-13 08:32:45.941 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:45 np0005558241 nova_compute[248510]: 2025-12-13 08:32:45.945 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:32:45 np0005558241 nova_compute[248510]: 2025-12-13 08:32:45.976 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Dec 13 03:32:45 np0005558241 nova_compute[248510]: 2025-12-13 08:32:45.977 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614765.9077804, c5edbf88-6361-407a-a0f1-c133f70b50e9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:32:45 np0005558241 nova_compute[248510]: 2025-12-13 08:32:45.977 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] VM Started (Lifecycle Event)#033[00m
Dec 13 03:32:45 np0005558241 nova_compute[248510]: 2025-12-13 08:32:45.999 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:46 np0005558241 nova_compute[248510]: 2025-12-13 08:32:46.003 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:32:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:32:46 np0005558241 nova_compute[248510]: 2025-12-13 08:32:46.582 248514 DEBUG nova.compute.manager [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:46 np0005558241 nova_compute[248510]: 2025-12-13 08:32:46.583 248514 DEBUG oslo_concurrency.lockutils [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:46 np0005558241 nova_compute[248510]: 2025-12-13 08:32:46.583 248514 DEBUG oslo_concurrency.lockutils [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:46 np0005558241 nova_compute[248510]: 2025-12-13 08:32:46.583 248514 DEBUG oslo_concurrency.lockutils [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:46 np0005558241 nova_compute[248510]: 2025-12-13 08:32:46.584 248514 DEBUG nova.compute.manager [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:32:46 np0005558241 nova_compute[248510]: 2025-12-13 08:32:46.584 248514 WARNING nova.compute.manager [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state rescued and task_state None.#033[00m
Dec 13 03:32:46 np0005558241 nova_compute[248510]: 2025-12-13 08:32:46.584 248514 DEBUG nova.compute.manager [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:46 np0005558241 nova_compute[248510]: 2025-12-13 08:32:46.584 248514 DEBUG oslo_concurrency.lockutils [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:46 np0005558241 nova_compute[248510]: 2025-12-13 08:32:46.585 248514 DEBUG oslo_concurrency.lockutils [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:46 np0005558241 nova_compute[248510]: 2025-12-13 08:32:46.585 248514 DEBUG oslo_concurrency.lockutils [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:46 np0005558241 nova_compute[248510]: 2025-12-13 08:32:46.585 248514 DEBUG nova.compute.manager [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:32:46 np0005558241 nova_compute[248510]: 2025-12-13 08:32:46.586 248514 WARNING nova.compute.manager [req-30fe3dfc-43cd-48d7-a911-7ad55afc4180 req-861673e9-7fd3-425c-90c4-d87b0935c4a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state rescued and task_state None.#033[00m
Dec 13 03:32:47 np0005558241 nova_compute[248510]: 2025-12-13 08:32:47.394 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec 13 03:32:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2066: 321 pgs: 321 active+clean; 581 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 641 KiB/s rd, 2.9 MiB/s wr, 114 op/s
Dec 13 03:32:47 np0005558241 nova_compute[248510]: 2025-12-13 08:32:47.759 248514 INFO nova.compute.manager [None req-035d46f3-c7e8-402d-842b-141826a37da1 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Unrescuing#033[00m
Dec 13 03:32:47 np0005558241 nova_compute[248510]: 2025-12-13 08:32:47.760 248514 DEBUG oslo_concurrency.lockutils [None req-035d46f3-c7e8-402d-842b-141826a37da1 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:32:47 np0005558241 nova_compute[248510]: 2025-12-13 08:32:47.760 248514 DEBUG oslo_concurrency.lockutils [None req-035d46f3-c7e8-402d-842b-141826a37da1 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquired lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:32:47 np0005558241 nova_compute[248510]: 2025-12-13 08:32:47.760 248514 DEBUG nova.network.neutron [None req-035d46f3-c7e8-402d-842b-141826a37da1 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:32:47 np0005558241 nova_compute[248510]: 2025-12-13 08:32:47.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:32:47 np0005558241 nova_compute[248510]: 2025-12-13 08:32:47.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:32:47 np0005558241 nova_compute[248510]: 2025-12-13 08:32:47.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:32:47 np0005558241 nova_compute[248510]: 2025-12-13 08:32:47.997 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:32:47 np0005558241 nova_compute[248510]: 2025-12-13 08:32:47.998 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:32:47 np0005558241 nova_compute[248510]: 2025-12-13 08:32:47.998 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:32:47 np0005558241 nova_compute[248510]: 2025-12-13 08:32:47.998 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:48 np0005558241 nova_compute[248510]: 2025-12-13 08:32:48.780 248514 INFO nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Deleting instance files /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275_del#033[00m
Dec 13 03:32:48 np0005558241 nova_compute[248510]: 2025-12-13 08:32:48.782 248514 INFO nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Deletion of /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275_del complete#033[00m
Dec 13 03:32:49 np0005558241 nova_compute[248510]: 2025-12-13 08:32:49.398 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:32:49 np0005558241 nova_compute[248510]: 2025-12-13 08:32:49.399 248514 INFO nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Creating image(s)#033[00m
Dec 13 03:32:49 np0005558241 nova_compute[248510]: 2025-12-13 08:32:49.427 248514 DEBUG nova.storage.rbd_utils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:49 np0005558241 nova_compute[248510]: 2025-12-13 08:32:49.455 248514 DEBUG nova.storage.rbd_utils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:49 np0005558241 nova_compute[248510]: 2025-12-13 08:32:49.486 248514 DEBUG nova.storage.rbd_utils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:49 np0005558241 nova_compute[248510]: 2025-12-13 08:32:49.490 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:49 np0005558241 nova_compute[248510]: 2025-12-13 08:32:49.566 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:49 np0005558241 nova_compute[248510]: 2025-12-13 08:32:49.567 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:49 np0005558241 nova_compute[248510]: 2025-12-13 08:32:49.569 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:49 np0005558241 nova_compute[248510]: 2025-12-13 08:32:49.569 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2067: 321 pgs: 321 active+clean; 547 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.2 MiB/s wr, 196 op/s
Dec 13 03:32:49 np0005558241 nova_compute[248510]: 2025-12-13 08:32:49.590 248514 DEBUG nova.storage.rbd_utils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:49 np0005558241 nova_compute[248510]: 2025-12-13 08:32:49.597 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 5d34feed-2663-4e17-b951-65a37bd3a275_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:49 np0005558241 nova_compute[248510]: 2025-12-13 08:32:49.631 248514 DEBUG nova.network.neutron [None req-035d46f3-c7e8-402d-842b-141826a37da1 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Updating instance_info_cache with network_info: [{"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:32:49 np0005558241 nova_compute[248510]: 2025-12-13 08:32:49.673 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.081 248514 DEBUG oslo_concurrency.lockutils [None req-035d46f3-c7e8-402d-842b-141826a37da1 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Releasing lock "refresh_cache-c5edbf88-6361-407a-a0f1-c133f70b50e9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.084 248514 DEBUG nova.objects.instance [None req-035d46f3-c7e8-402d-842b-141826a37da1 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'flavor' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.336 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 5d34feed-2663-4e17-b951-65a37bd3a275_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.739s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.395 248514 DEBUG nova.storage.rbd_utils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] resizing rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.424 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:50 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:50Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:63:10:dc 10.100.0.13
Dec 13 03:32:50 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:50Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:63:10:dc 10.100.0.13
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.546 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.547 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Ensure instance console log exists: /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.548 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.548 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.549 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.553 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Start _get_guest_xml network_info=[{"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.559 248514 WARNING nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.566 248514 DEBUG nova.virt.libvirt.host [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.567 248514 DEBUG nova.virt.libvirt.host [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.570 248514 DEBUG nova.virt.libvirt.host [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.571 248514 DEBUG nova.virt.libvirt.host [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.572 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.572 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.573 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.573 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.574 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.574 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.575 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.575 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.575 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.576 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.576 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.577 248514 DEBUG nova.virt.hardware [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:32:50 np0005558241 nova_compute[248510]: 2025-12-13 08:32:50.577 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:51 np0005558241 nova_compute[248510]: 2025-12-13 08:32:51.143 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:32:51 np0005558241 kernel: tap2bdcea64-a0 (unregistering): left promiscuous mode
Dec 13 03:32:51 np0005558241 NetworkManager[50376]: <info>  [1765614771.4005] device (tap2bdcea64-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:32:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:51Z|00702|binding|INFO|Releasing lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 from this chassis (sb_readonly=0)
Dec 13 03:32:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:51Z|00703|binding|INFO|Setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 down in Southbound
Dec 13 03:32:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:51Z|00704|binding|INFO|Removing iface tap2bdcea64-a0 ovn-installed in OVS
Dec 13 03:32:51 np0005558241 nova_compute[248510]: 2025-12-13 08:32:51.438 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:51 np0005558241 nova_compute[248510]: 2025-12-13 08:32:51.443 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:51 np0005558241 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d00000047.scope: Deactivated successfully.
Dec 13 03:32:51 np0005558241 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d00000047.scope: Consumed 5.297s CPU time.
Dec 13 03:32:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:51.455 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:4d:d0 10.100.0.6'], port_security=['fa:16:3e:13:4d:d0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c5edbf88-6361-407a-a0f1-c133f70b50e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2bdcea64-a01f-4a75-b664-9c9c971533a6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:32:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:51.456 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2bdcea64-a01f-4a75-b664-9c9c971533a6 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 unbound from our chassis#033[00m
Dec 13 03:32:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:51.458 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 13 03:32:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:51.459 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4a5a36-830b-4230-b8f5-8deb211455d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:51 np0005558241 systemd-machined[210538]: Machine qemu-87-instance-00000047 terminated.
Dec 13 03:32:51 np0005558241 nova_compute[248510]: 2025-12-13 08:32:51.542 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:51 np0005558241 nova_compute[248510]: 2025-12-13 08:32:51.549 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:51 np0005558241 nova_compute[248510]: 2025-12-13 08:32:51.559 248514 INFO nova.virt.libvirt.driver [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance destroyed successfully.#033[00m
Dec 13 03:32:51 np0005558241 nova_compute[248510]: 2025-12-13 08:32:51.561 248514 DEBUG nova.objects.instance [None req-035d46f3-c7e8-402d-842b-141826a37da1 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'numa_topology' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2068: 321 pgs: 321 active+clean; 523 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Dec 13 03:32:51 np0005558241 kernel: tap2bdcea64-a0: entered promiscuous mode
Dec 13 03:32:51 np0005558241 systemd-udevd[318635]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:32:51 np0005558241 NetworkManager[50376]: <info>  [1765614771.6838] manager: (tap2bdcea64-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/309)
Dec 13 03:32:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:51Z|00705|binding|INFO|Claiming lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 for this chassis.
Dec 13 03:32:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:51Z|00706|binding|INFO|2bdcea64-a01f-4a75-b664-9c9c971533a6: Claiming fa:16:3e:13:4d:d0 10.100.0.6
Dec 13 03:32:51 np0005558241 nova_compute[248510]: 2025-12-13 08:32:51.686 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:51 np0005558241 nova_compute[248510]: 2025-12-13 08:32:51.692 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:32:51 np0005558241 NetworkManager[50376]: <info>  [1765614771.6961] device (tap2bdcea64-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:32:51 np0005558241 NetworkManager[50376]: <info>  [1765614771.6969] device (tap2bdcea64-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:32:51 np0005558241 nova_compute[248510]: 2025-12-13 08:32:51.706 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:51Z|00707|binding|INFO|Setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 ovn-installed in OVS
Dec 13 03:32:51 np0005558241 nova_compute[248510]: 2025-12-13 08:32:51.711 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:51Z|00708|binding|INFO|Setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 up in Southbound
Dec 13 03:32:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:51.723 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:4d:d0 10.100.0.6'], port_security=['fa:16:3e:13:4d:d0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c5edbf88-6361-407a-a0f1-c133f70b50e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2bdcea64-a01f-4a75-b664-9c9c971533a6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:32:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:51.725 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2bdcea64-a01f-4a75-b664-9c9c971533a6 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 bound to our chassis#033[00m
Dec 13 03:32:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:51.726 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 13 03:32:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:51.727 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7925d9c3-8eae-4ed1-bade-45f0ac49c907]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:51 np0005558241 systemd-machined[210538]: New machine qemu-88-instance-00000047.
Dec 13 03:32:51 np0005558241 nova_compute[248510]: 2025-12-13 08:32:51.740 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:32:51 np0005558241 nova_compute[248510]: 2025-12-13 08:32:51.740 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:32:51 np0005558241 nova_compute[248510]: 2025-12-13 08:32:51.741 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:32:51 np0005558241 systemd[1]: Started Virtual Machine qemu-88-instance-00000047.
Dec 13 03:32:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:32:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3348443458' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:32:51 np0005558241 nova_compute[248510]: 2025-12-13 08:32:51.781 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:51 np0005558241 nova_compute[248510]: 2025-12-13 08:32:51.812 248514 DEBUG nova.storage.rbd_utils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:51 np0005558241 nova_compute[248510]: 2025-12-13 08:32:51.817 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.284 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for c5edbf88-6361-407a-a0f1-c133f70b50e9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.286 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614772.28383, c5edbf88-6361-407a-a0f1-c133f70b50e9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.287 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:32:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:32:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3078105784' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.449 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.451 248514 DEBUG nova.virt.libvirt.vif [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1505100715',display_name='tempest-tempest.common.compute-instance-1505100715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1505100715',id=72,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:32:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-kk2o7y6y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:49Z,user_data=None,user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=5d34feed-2663-4e17-b951-65a37bd3a275,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.451 248514 DEBUG nova.network.os_vif_util [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.452 248514 DEBUG nova.network.os_vif_util [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.455 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:32:52 np0005558241 nova_compute[248510]:  <uuid>5d34feed-2663-4e17-b951-65a37bd3a275</uuid>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:  <name>instance-00000048</name>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <nova:name>tempest-tempest.common.compute-instance-1505100715</nova:name>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:32:50</nova:creationTime>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:32:52 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:        <nova:user uuid="5b310bdebec646949fad4ea1821b4c3f">tempest-ServerActionsTestOtherA-1325599242-project-member</nova:user>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:        <nova:project uuid="b4d2999518df4b9f8ccbabe38976dc3c">tempest-ServerActionsTestOtherA-1325599242</nova:project>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="c10f1898-20b3-4bc9-8a36-2ee01b39c9ce"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:        <nova:port uuid="533958d1-8a74-4963-9731-40767b4bb127">
Dec 13 03:32:52 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <entry name="serial">5d34feed-2663-4e17-b951-65a37bd3a275</entry>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <entry name="uuid">5d34feed-2663-4e17-b951-65a37bd3a275</entry>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/5d34feed-2663-4e17-b951-65a37bd3a275_disk">
Dec 13 03:32:52 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:32:52 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/5d34feed-2663-4e17-b951-65a37bd3a275_disk.config">
Dec 13 03:32:52 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:32:52 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:66:ab:b9"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <target dev="tap533958d1-8a"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/console.log" append="off"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:32:52 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:32:52 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:32:52 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:32:52 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.455 248514 DEBUG nova.compute.manager [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Preparing to wait for external event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.455 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.456 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.456 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.457 248514 DEBUG nova.virt.libvirt.vif [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1505100715',display_name='tempest-tempest.common.compute-instance-1505100715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1505100715',id=72,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:32:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-kk2o7y6y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:49Z,user_data=None,user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=5d34feed-2663-4e17-b951-65a37bd3a275,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.457 248514 DEBUG nova.network.os_vif_util [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.458 248514 DEBUG nova.network.os_vif_util [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.458 248514 DEBUG os_vif [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.459 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.459 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.460 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.464 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.465 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap533958d1-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.466 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap533958d1-8a, col_values=(('external_ids', {'iface-id': '533958d1-8a74-4963-9731-40767b4bb127', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:ab:b9', 'vm-uuid': '5d34feed-2663-4e17-b951-65a37bd3a275'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:52 np0005558241 NetworkManager[50376]: <info>  [1765614772.4701] manager: (tap533958d1-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/310)
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.469 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.472 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.476 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.477 248514 INFO os_vif [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a')#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.618 248514 DEBUG nova.compute.manager [None req-035d46f3-c7e8-402d-842b-141826a37da1 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:32:52 np0005558241 nova_compute[248510]: 2025-12-13 08:32:52.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:32:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2069: 321 pgs: 321 active+clean; 528 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 189 op/s
Dec 13 03:32:53 np0005558241 nova_compute[248510]: 2025-12-13 08:32:53.684 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614758.6838434, 5d34feed-2663-4e17-b951-65a37bd3a275 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:32:53 np0005558241 nova_compute[248510]: 2025-12-13 08:32:53.685 248514 INFO nova.compute.manager [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:32:54 np0005558241 podman[318799]: 2025-12-13 08:32:54.70652776 +0000 UTC m=+0.074592953 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:32:54 np0005558241 podman[318798]: 2025-12-13 08:32:54.735253814 +0000 UTC m=+0.103493551 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:32:54 np0005558241 podman[318797]: 2025-12-13 08:32:54.770375826 +0000 UTC m=+0.137336812 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.064 248514 DEBUG nova.compute.manager [req-ae46d4e3-83ab-47a1-8fc7-415d5ed1efd7 req-c28648b3-5e52-4c2c-81f1-e1d96f652583 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-unplugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.065 248514 DEBUG oslo_concurrency.lockutils [req-ae46d4e3-83ab-47a1-8fc7-415d5ed1efd7 req-c28648b3-5e52-4c2c-81f1-e1d96f652583 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.066 248514 DEBUG oslo_concurrency.lockutils [req-ae46d4e3-83ab-47a1-8fc7-415d5ed1efd7 req-c28648b3-5e52-4c2c-81f1-e1d96f652583 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.066 248514 DEBUG oslo_concurrency.lockutils [req-ae46d4e3-83ab-47a1-8fc7-415d5ed1efd7 req-c28648b3-5e52-4c2c-81f1-e1d96f652583 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.066 248514 DEBUG nova.compute.manager [req-ae46d4e3-83ab-47a1-8fc7-415d5ed1efd7 req-c28648b3-5e52-4c2c-81f1-e1d96f652583 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-unplugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.067 248514 WARNING nova.compute.manager [req-ae46d4e3-83ab-47a1-8fc7-415d5ed1efd7 req-c28648b3-5e52-4c2c-81f1-e1d96f652583 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-unplugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state rescued and task_state unrescuing.#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.112 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.113 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.113 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.113 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.114 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.150 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.155 248514 DEBUG nova.compute.manager [None req-989d6202-c2ac-4efa-a641-3dc013fd9c98 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.160 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.165 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.165 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.166 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No VIF found with MAC fa:16:3e:66:ab:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.167 248514 INFO nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Using config drive#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.201 248514 DEBUG nova.storage.rbd_utils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.213 248514 DEBUG nova.compute.manager [None req-989d6202-c2ac-4efa-a641-3dc013fd9c98 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.254 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.259 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614772.286051, c5edbf88-6361-407a-a0f1-c133f70b50e9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.259 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] VM Started (Lifecycle Event)#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.260 248514 INFO nova.compute.manager [None req-989d6202-c2ac-4efa-a641-3dc013fd9c98 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.358 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.362 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'keypairs' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.370 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.409 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:55.413 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:55.415 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:55.417 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2070: 321 pgs: 321 active+clean; 534 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.1 MiB/s wr, 285 op/s
Dec 13 03:32:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:32:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2511849649' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:32:55 np0005558241 nova_compute[248510]: 2025-12-13 08:32:55.720 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:32:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:32:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:32:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:32:56 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:32:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:32:56 np0005558241 podman[319087]: 2025-12-13 08:32:56.39330302 +0000 UTC m=+0.025850723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:32:56 np0005558241 podman[319087]: 2025-12-13 08:32:56.492608426 +0000 UTC m=+0.125156149 container create e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:32:56 np0005558241 nova_compute[248510]: 2025-12-13 08:32:56.594 248514 INFO nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Creating config drive at /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config#033[00m
Dec 13 03:32:56 np0005558241 nova_compute[248510]: 2025-12-13 08:32:56.605 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplbn39prn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:56 np0005558241 systemd[1]: Started libpod-conmon-e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0.scope.
Dec 13 03:32:56 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:32:56 np0005558241 podman[319087]: 2025-12-13 08:32:56.713687566 +0000 UTC m=+0.346235269 container init e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:32:56 np0005558241 podman[319087]: 2025-12-13 08:32:56.725298965 +0000 UTC m=+0.357846648 container start e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 03:32:56 np0005558241 systemd[1]: libpod-e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0.scope: Deactivated successfully.
Dec 13 03:32:56 np0005558241 quizzical_visvesvaraya[319107]: 167 167
Dec 13 03:32:56 np0005558241 podman[319087]: 2025-12-13 08:32:56.739654891 +0000 UTC m=+0.372202574 container attach e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 03:32:56 np0005558241 conmon[319107]: conmon e3fc15f770ee55958587 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0.scope/container/memory.events
Dec 13 03:32:56 np0005558241 podman[319087]: 2025-12-13 08:32:56.741866276 +0000 UTC m=+0.374413959 container died e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_visvesvaraya, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:32:56 np0005558241 nova_compute[248510]: 2025-12-13 08:32:56.758 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplbn39prn" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:56 np0005558241 nova_compute[248510]: 2025-12-13 08:32:56.785 248514 DEBUG nova.storage.rbd_utils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:32:56 np0005558241 nova_compute[248510]: 2025-12-13 08:32:56.790 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:56 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4bb7bd5d9ea982ce7c440cf2db7fcdd94f56bba2a8ce7d938eda21b0711740e8-merged.mount: Deactivated successfully.
Dec 13 03:32:57 np0005558241 podman[319087]: 2025-12-13 08:32:57.010898957 +0000 UTC m=+0.643446660 container remove e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_visvesvaraya, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:32:57 np0005558241 systemd[1]: libpod-conmon-e3fc15f770ee55958587a3439b4ce099f2601e79d1b32bd8f28c5e6d8808c3c0.scope: Deactivated successfully.
Dec 13 03:32:57 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:32:57 np0005558241 podman[319166]: 2025-12-13 08:32:57.265797146 +0000 UTC m=+0.065762214 container create e02d2c30279afa8c93e4a66c6d91e0286a332890c3d3c5348e1908383ef6b427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_kirch, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.306 248514 DEBUG oslo_concurrency.processutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config 5d34feed-2663-4e17-b951-65a37bd3a275_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.308 248514 INFO nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Deleting local config drive /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275/disk.config because it was imported into RBD.#033[00m
Dec 13 03:32:57 np0005558241 podman[319166]: 2025-12-13 08:32:57.226813558 +0000 UTC m=+0.026778646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:32:57 np0005558241 systemd[1]: Started libpod-conmon-e02d2c30279afa8c93e4a66c6d91e0286a332890c3d3c5348e1908383ef6b427.scope.
Dec 13 03:32:57 np0005558241 kernel: tap533958d1-8a: entered promiscuous mode
Dec 13 03:32:57 np0005558241 NetworkManager[50376]: <info>  [1765614777.3721] manager: (tap533958d1-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/311)
Dec 13 03:32:57 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:57Z|00709|binding|INFO|Claiming lport 533958d1-8a74-4963-9731-40767b4bb127 for this chassis.
Dec 13 03:32:57 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:57Z|00710|binding|INFO|533958d1-8a74-4963-9731-40767b4bb127: Claiming fa:16:3e:66:ab:b9 10.100.0.5
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.375 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:57 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:32:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18d5db5659cd996d58892e016e1a0114f85c56b4e6c64b13470a2c8eec1d01b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:32:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18d5db5659cd996d58892e016e1a0114f85c56b4e6c64b13470a2c8eec1d01b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:32:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18d5db5659cd996d58892e016e1a0114f85c56b4e6c64b13470a2c8eec1d01b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:32:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18d5db5659cd996d58892e016e1a0114f85c56b4e6c64b13470a2c8eec1d01b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:32:57 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:57Z|00711|binding|INFO|Setting lport 533958d1-8a74-4963-9731-40767b4bb127 ovn-installed in OVS
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.407 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.410 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:57 np0005558241 systemd-machined[210538]: New machine qemu-89-instance-00000048.
Dec 13 03:32:57 np0005558241 systemd-udevd[319199]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:32:57 np0005558241 NetworkManager[50376]: <info>  [1765614777.4437] device (tap533958d1-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:32:57 np0005558241 NetworkManager[50376]: <info>  [1765614777.4445] device (tap533958d1-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:32:57 np0005558241 systemd[1]: Started Virtual Machine qemu-89-instance-00000048.
Dec 13 03:32:57 np0005558241 podman[319166]: 2025-12-13 08:32:57.457427785 +0000 UTC m=+0.257392853 container init e02d2c30279afa8c93e4a66c6d91e0286a332890c3d3c5348e1908383ef6b427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 03:32:57 np0005558241 podman[319166]: 2025-12-13 08:32:57.466643914 +0000 UTC m=+0.266608982 container start e02d2c30279afa8c93e4a66c6d91e0286a332890c3d3c5348e1908383ef6b427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_kirch, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.469 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:57 np0005558241 podman[319166]: 2025-12-13 08:32:57.534825267 +0000 UTC m=+0.334790335 container attach e02d2c30279afa8c93e4a66c6d91e0286a332890c3d3c5348e1908383ef6b427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_kirch, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:32:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2071: 321 pgs: 321 active+clean; 534 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 269 op/s
Dec 13 03:32:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.656 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:ab:b9 10.100.0.5'], port_security=['fa:16:3e:66:ab:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5d34feed-2663-4e17-b951-65a37bd3a275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '45afb483-a012-4442-b20c-edd0f1f0f374', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6794829-4e4e-4196-8b70-d8e6b3d73dd5, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=533958d1-8a74-4963-9731-40767b4bb127) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:32:57 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:57Z|00712|binding|INFO|Setting lport 533958d1-8a74-4963-9731-40767b4bb127 up in Southbound
Dec 13 03:32:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.660 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 533958d1-8a74-4963-9731-40767b4bb127 in datapath 409fc0bb-caf3-4b90-9e44-83ff383dc88f bound to our chassis#033[00m
Dec 13 03:32:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.662 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 409fc0bb-caf3-4b90-9e44-83ff383dc88f#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.680 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.681 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:32:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.685 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[40425b5c-b8ee-405c-b94b-ec7ad47a5bff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.686 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.686 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.694 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.694 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.700 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.700 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.701 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.704 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.704 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.710 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.710 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:32:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.725 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c9afc4-89ca-435b-8d7f-8d50057fceb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.729 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca15c4c-1d08-48f2-9459-e6f579a6e1d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.765 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f390372b-f2f5-4842-8b84-2926b67bf2f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.786 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c6973906-16f0-4322-88c5-cec79f64fef2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap409fc0bb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:3b:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727398, 'reachable_time': 29799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319215, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.809 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[412a930c-3c2c-498c-a173-0cb89b738edf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727416, 'tstamp': 727416}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319216, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727421, 'tstamp': 727421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319216, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.812 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap409fc0bb-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.813 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.815 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap409fc0bb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.815 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:32:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.816 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap409fc0bb-c0, col_values=(('external_ids', {'iface-id': 'c8e8a31b-a5fe-4e2d-bc19-65995078988f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:57.816 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.935 248514 DEBUG nova.compute.manager [req-ccba2215-9a1e-4d80-abdc-66fee0ccdf8c req-10f3efc6-16ac-43fd-a5a5-fcde704f7e56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.937 248514 DEBUG oslo_concurrency.lockutils [req-ccba2215-9a1e-4d80-abdc-66fee0ccdf8c req-10f3efc6-16ac-43fd-a5a5-fcde704f7e56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.937 248514 DEBUG oslo_concurrency.lockutils [req-ccba2215-9a1e-4d80-abdc-66fee0ccdf8c req-10f3efc6-16ac-43fd-a5a5-fcde704f7e56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.938 248514 DEBUG oslo_concurrency.lockutils [req-ccba2215-9a1e-4d80-abdc-66fee0ccdf8c req-10f3efc6-16ac-43fd-a5a5-fcde704f7e56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.938 248514 DEBUG nova.compute.manager [req-ccba2215-9a1e-4d80-abdc-66fee0ccdf8c req-10f3efc6-16ac-43fd-a5a5-fcde704f7e56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.938 248514 WARNING nova.compute.manager [req-ccba2215-9a1e-4d80-abdc-66fee0ccdf8c req-10f3efc6-16ac-43fd-a5a5-fcde704f7e56 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.977 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.978 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3027MB free_disk=59.72048218920827GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.979 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:57 np0005558241 nova_compute[248510]: 2025-12-13 08:32:57.979 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.093 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614778.0930796, 5d34feed-2663-4e17-b951-65a37bd3a275 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.094 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] VM Started (Lifecycle Event)#033[00m
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]: [
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:    {
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:        "available": false,
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:        "being_replaced": false,
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:        "ceph_device_lvm": false,
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:        "lsm_data": {},
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:        "lvs": [],
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:        "path": "/dev/sr0",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:        "rejected_reasons": [
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "Insufficient space (<5GB)",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "Has a FileSystem"
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:        ],
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:        "sys_api": {
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "actuators": null,
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "device_nodes": [
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:                "sr0"
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            ],
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "devname": "sr0",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "human_readable_size": "482.00 KB",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "id_bus": "ata",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "model": "QEMU DVD-ROM",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "nr_requests": "2",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "parent": "/dev/sr0",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "partitions": {},
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "path": "/dev/sr0",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "removable": "1",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "rev": "2.5+",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "ro": "0",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "rotational": "1",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "sas_address": "",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "sas_device_handle": "",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "scheduler_mode": "mq-deadline",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "sectors": 0,
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "sectorsize": "2048",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "size": 493568.0,
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "support_discard": "2048",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "type": "disk",
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:            "vendor": "QEMU"
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:        }
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]:    }
Dec 13 03:32:58 np0005558241 eloquent_kirch[319188]: ]
Dec 13 03:32:58 np0005558241 systemd[1]: libpod-e02d2c30279afa8c93e4a66c6d91e0286a332890c3d3c5348e1908383ef6b427.scope: Deactivated successfully.
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.373 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.377 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614778.0933862, 5d34feed-2663-4e17-b951-65a37bd3a275 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.377 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:32:58 np0005558241 podman[320131]: 2025-12-13 08:32:58.396929527 +0000 UTC m=+0.032379185 container died e02d2c30279afa8c93e4a66c6d91e0286a332890c3d3c5348e1908383ef6b427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.397 248514 DEBUG oslo_concurrency.lockutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.398 248514 DEBUG oslo_concurrency.lockutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.398 248514 DEBUG oslo_concurrency.lockutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.399 248514 DEBUG oslo_concurrency.lockutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.399 248514 DEBUG oslo_concurrency.lockutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.402 248514 INFO nova.compute.manager [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Terminating instance#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.403 248514 DEBUG nova.compute.manager [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.406 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.411 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:32:58 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b18d5db5659cd996d58892e016e1a0114f85c56b4e6c64b13470a2c8eec1d01b-merged.mount: Deactivated successfully.
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.434 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 3ced27d6-a2a8-4ce3-a7e7-494270418542 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.434 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance a9c6de9d-63c0-43a5-9d6e-be356e504837 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.435 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance ce9adb21-8832-4d3e-867e-b0b49bdb6850 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.435 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance c5edbf88-6361-407a-a0f1-c133f70b50e9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.435 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 5d34feed-2663-4e17-b951-65a37bd3a275 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.435 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance bb8c91ff-01cb-4fd5-ab69-005313784b57 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.436 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.436 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.440 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec 13 03:32:58 np0005558241 kernel: tap2bdcea64-a0 (unregistering): left promiscuous mode
Dec 13 03:32:58 np0005558241 NetworkManager[50376]: <info>  [1765614778.4505] device (tap2bdcea64-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:32:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:58Z|00713|binding|INFO|Releasing lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 from this chassis (sb_readonly=0)
Dec 13 03:32:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:58Z|00714|binding|INFO|Setting lport 2bdcea64-a01f-4a75-b664-9c9c971533a6 down in Southbound
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.453 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:32:58Z|00715|binding|INFO|Removing iface tap2bdcea64-a0 ovn-installed in OVS
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.457 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:58.463 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:4d:d0 10.100.0.6'], port_security=['fa:16:3e:13:4d:d0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c5edbf88-6361-407a-a0f1-c133f70b50e9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '8', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2bdcea64-a01f-4a75-b664-9c9c971533a6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:32:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:58.465 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2bdcea64-a01f-4a75-b664-9c9c971533a6 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 unbound from our chassis#033[00m
Dec 13 03:32:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:58.466 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 13 03:32:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:32:58.468 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[abdae7f8-9cee-4644-bd01-b3077cf791a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.479 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:58 np0005558241 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d00000047.scope: Deactivated successfully.
Dec 13 03:32:58 np0005558241 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d00000047.scope: Consumed 6.726s CPU time.
Dec 13 03:32:58 np0005558241 systemd-machined[210538]: Machine qemu-88-instance-00000047 terminated.
Dec 13 03:32:58 np0005558241 podman[320131]: 2025-12-13 08:32:58.510443156 +0000 UTC m=+0.145892794 container remove e02d2c30279afa8c93e4a66c6d91e0286a332890c3d3c5348e1908383ef6b427 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_kirch, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:32:58 np0005558241 systemd[1]: libpod-conmon-e02d2c30279afa8c93e4a66c6d91e0286a332890c3d3c5348e1908383ef6b427.scope: Deactivated successfully.
Dec 13 03:32:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:32:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:32:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:32:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:32:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:32:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:32:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:32:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.608 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:32:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:32:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:32:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:32:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:32:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:32:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:32:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:32:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.661 248514 INFO nova.virt.libvirt.driver [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Instance destroyed successfully.#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.662 248514 DEBUG nova.objects.instance [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'resources' on Instance uuid c5edbf88-6361-407a-a0f1-c133f70b50e9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.684 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.705 248514 DEBUG nova.virt.libvirt.vif [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-607419756',display_name='tempest-ServerRescueTestJSON-server-607419756',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-607419756',id=71,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:32:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='71e2453379684f0ca0563f8c370ea4a3',ramdisk_id='',reservation_id='r-h6adjjd8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1425963100',owner_user_name='tempest-ServerRescueTestJSON-1425963100-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:32:55Z,user_data=None,user_id='93eec08d500a4f03afb3281e9899bd6a',uuid=c5edbf88-6361-407a-a0f1-c133f70b50e9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.706 248514 DEBUG nova.network.os_vif_util [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converting VIF {"id": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "address": "fa:16:3e:13:4d:d0", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2bdcea64-a0", "ovs_interfaceid": "2bdcea64-a01f-4a75-b664-9c9c971533a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.707 248514 DEBUG nova.network.os_vif_util [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:13:4d:d0,bridge_name='br-int',has_traffic_filtering=True,id=2bdcea64-a01f-4a75-b664-9c9c971533a6,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bdcea64-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.708 248514 DEBUG os_vif [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:13:4d:d0,bridge_name='br-int',has_traffic_filtering=True,id=2bdcea64-a01f-4a75-b664-9c9c971533a6,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bdcea64-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.713 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.713 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2bdcea64-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.715 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.718 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:32:58 np0005558241 nova_compute[248510]: 2025-12-13 08:32:58.720 248514 INFO os_vif [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:13:4d:d0,bridge_name='br-int',has_traffic_filtering=True,id=2bdcea64-a01f-4a75-b664-9c9c971533a6,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2bdcea64-a0')#033[00m
Dec 13 03:32:59 np0005558241 podman[320261]: 2025-12-13 08:32:59.191391897 +0000 UTC m=+0.104719592 container create 083672824b70fd8ece4d77f3be7cbf2c1c8b1c615c9cd831c2460d0abea0c82d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_aryabhata, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:32:59 np0005558241 podman[320261]: 2025-12-13 08:32:59.11139391 +0000 UTC m=+0.024721595 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:32:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:32:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/677728074' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:32:59 np0005558241 nova_compute[248510]: 2025-12-13 08:32:59.247 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:32:59 np0005558241 nova_compute[248510]: 2025-12-13 08:32:59.254 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:32:59 np0005558241 systemd[1]: Started libpod-conmon-083672824b70fd8ece4d77f3be7cbf2c1c8b1c615c9cd831c2460d0abea0c82d.scope.
Dec 13 03:32:59 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:32:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2072: 321 pgs: 321 active+clean; 534 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 291 op/s
Dec 13 03:32:59 np0005558241 podman[320261]: 2025-12-13 08:32:59.662268331 +0000 UTC m=+0.575596026 container init 083672824b70fd8ece4d77f3be7cbf2c1c8b1c615c9cd831c2460d0abea0c82d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_aryabhata, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:32:59 np0005558241 podman[320261]: 2025-12-13 08:32:59.67071168 +0000 UTC m=+0.584039345 container start 083672824b70fd8ece4d77f3be7cbf2c1c8b1c615c9cd831c2460d0abea0c82d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:32:59 np0005558241 exciting_aryabhata[320279]: 167 167
Dec 13 03:32:59 np0005558241 systemd[1]: libpod-083672824b70fd8ece4d77f3be7cbf2c1c8b1c615c9cd831c2460d0abea0c82d.scope: Deactivated successfully.
Dec 13 03:32:59 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:32:59 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:32:59 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:32:59 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:32:59 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:32:59 np0005558241 podman[320261]: 2025-12-13 08:32:59.721373338 +0000 UTC m=+0.634701083 container attach 083672824b70fd8ece4d77f3be7cbf2c1c8b1c615c9cd831c2460d0abea0c82d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 03:32:59 np0005558241 podman[320261]: 2025-12-13 08:32:59.722971788 +0000 UTC m=+0.636299473 container died 083672824b70fd8ece4d77f3be7cbf2c1c8b1c615c9cd831c2460d0abea0c82d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:32:59 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6da9fd8bc8eacf29ac568dd99aeee4338c425deaf00158873bf2e186b8913dfb-merged.mount: Deactivated successfully.
Dec 13 03:33:00 np0005558241 podman[320261]: 2025-12-13 08:33:00.02019826 +0000 UTC m=+0.933525925 container remove 083672824b70fd8ece4d77f3be7cbf2c1c8b1c615c9cd831c2460d0abea0c82d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:33:00 np0005558241 systemd[1]: libpod-conmon-083672824b70fd8ece4d77f3be7cbf2c1c8b1c615c9cd831c2460d0abea0c82d.scope: Deactivated successfully.
Dec 13 03:33:00 np0005558241 nova_compute[248510]: 2025-12-13 08:33:00.179 248514 INFO nova.virt.libvirt.driver [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Deleting instance files /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9_del#033[00m
Dec 13 03:33:00 np0005558241 nova_compute[248510]: 2025-12-13 08:33:00.181 248514 INFO nova.virt.libvirt.driver [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Deletion of /var/lib/nova/instances/c5edbf88-6361-407a-a0f1-c133f70b50e9_del complete#033[00m
Dec 13 03:33:00 np0005558241 podman[320305]: 2025-12-13 08:33:00.254851377 +0000 UTC m=+0.060215136 container create 6a538c58f6d8c8a4f16284f92864ef28eb70150a2c43df0afe2b7773010e192b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:33:00 np0005558241 systemd[1]: Started libpod-conmon-6a538c58f6d8c8a4f16284f92864ef28eb70150a2c43df0afe2b7773010e192b.scope.
Dec 13 03:33:00 np0005558241 podman[320305]: 2025-12-13 08:33:00.234913532 +0000 UTC m=+0.040277311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:33:00 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:33:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c52eecdc9094507c8104f170f4f5b87a9999f17e867e6bc89409d5835cad0b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:33:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c52eecdc9094507c8104f170f4f5b87a9999f17e867e6bc89409d5835cad0b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:33:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c52eecdc9094507c8104f170f4f5b87a9999f17e867e6bc89409d5835cad0b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:33:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c52eecdc9094507c8104f170f4f5b87a9999f17e867e6bc89409d5835cad0b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:33:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c52eecdc9094507c8104f170f4f5b87a9999f17e867e6bc89409d5835cad0b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:33:00 np0005558241 podman[320305]: 2025-12-13 08:33:00.358586233 +0000 UTC m=+0.163950002 container init 6a538c58f6d8c8a4f16284f92864ef28eb70150a2c43df0afe2b7773010e192b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:33:00 np0005558241 podman[320305]: 2025-12-13 08:33:00.366781267 +0000 UTC m=+0.172145026 container start 6a538c58f6d8c8a4f16284f92864ef28eb70150a2c43df0afe2b7773010e192b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 03:33:00 np0005558241 podman[320305]: 2025-12-13 08:33:00.370992651 +0000 UTC m=+0.176356430 container attach 6a538c58f6d8c8a4f16284f92864ef28eb70150a2c43df0afe2b7773010e192b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 03:33:00 np0005558241 nova_compute[248510]: 2025-12-13 08:33:00.411 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:00 np0005558241 great_colden[320322]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:33:00 np0005558241 great_colden[320322]: --> All data devices are unavailable
Dec 13 03:33:00 np0005558241 systemd[1]: libpod-6a538c58f6d8c8a4f16284f92864ef28eb70150a2c43df0afe2b7773010e192b.scope: Deactivated successfully.
Dec 13 03:33:00 np0005558241 podman[320305]: 2025-12-13 08:33:00.895472055 +0000 UTC m=+0.700835824 container died 6a538c58f6d8c8a4f16284f92864ef28eb70150a2c43df0afe2b7773010e192b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:33:01 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1c52eecdc9094507c8104f170f4f5b87a9999f17e867e6bc89409d5835cad0b6-merged.mount: Deactivated successfully.
Dec 13 03:33:01 np0005558241 podman[320305]: 2025-12-13 08:33:01.132359358 +0000 UTC m=+0.937723157 container remove 6a538c58f6d8c8a4f16284f92864ef28eb70150a2c43df0afe2b7773010e192b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:33:01 np0005558241 systemd[1]: libpod-conmon-6a538c58f6d8c8a4f16284f92864ef28eb70150a2c43df0afe2b7773010e192b.scope: Deactivated successfully.
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.242723) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614781242780, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 532, "num_deletes": 250, "total_data_size": 545480, "memory_usage": 555480, "flush_reason": "Manual Compaction"}
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614781287500, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 526503, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40055, "largest_seqno": 40586, "table_properties": {"data_size": 523564, "index_size": 911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6053, "raw_average_key_size": 16, "raw_value_size": 517704, "raw_average_value_size": 1395, "num_data_blocks": 40, "num_entries": 371, "num_filter_entries": 371, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765614752, "oldest_key_time": 1765614752, "file_creation_time": 1765614781, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 44831 microseconds, and 2440 cpu microseconds.
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.287554) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 526503 bytes OK
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.287578) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.289533) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.289548) EVENT_LOG_v1 {"time_micros": 1765614781289543, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.289567) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 542418, prev total WAL file size 542418, number of live WAL files 2.
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.290139) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(514KB)], [89(10MB)]
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614781290370, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11083925, "oldest_snapshot_seqno": -1}
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6311 keys, 10321792 bytes, temperature: kUnknown
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614781373311, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 10321792, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10277722, "index_size": 27192, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 163295, "raw_average_key_size": 25, "raw_value_size": 10162487, "raw_average_value_size": 1610, "num_data_blocks": 1077, "num_entries": 6311, "num_filter_entries": 6311, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765614781, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.373647) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 10321792 bytes
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.375168) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.4 rd, 124.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.1 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(40.7) write-amplify(19.6) OK, records in: 6823, records dropped: 512 output_compression: NoCompression
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.375185) EVENT_LOG_v1 {"time_micros": 1765614781375177, "job": 52, "event": "compaction_finished", "compaction_time_micros": 83092, "compaction_time_cpu_micros": 33349, "output_level": 6, "num_output_files": 1, "total_output_size": 10321792, "num_input_records": 6823, "num_output_records": 6311, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614781375375, "job": 52, "event": "table_file_deletion", "file_number": 91}
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614781377209, "job": 52, "event": "table_file_deletion", "file_number": 89}
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.289979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.377239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.377243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.377245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.377246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:33:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:33:01.377249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:33:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2073: 321 pgs: 321 active+clean; 535 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 224 op/s
Dec 13 03:33:01 np0005558241 podman[320420]: 2025-12-13 08:33:01.717357686 +0000 UTC m=+0.072133142 container create 8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:33:01 np0005558241 podman[320420]: 2025-12-13 08:33:01.67562909 +0000 UTC m=+0.030404566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:33:01 np0005558241 systemd[1]: Started libpod-conmon-8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45.scope.
Dec 13 03:33:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:33:01 np0005558241 kernel: tapd001c32a-bc (unregistering): left promiscuous mode
Dec 13 03:33:01 np0005558241 NetworkManager[50376]: <info>  [1765614781.9666] device (tapd001c32a-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:33:01 np0005558241 podman[320420]: 2025-12-13 08:33:01.976487331 +0000 UTC m=+0.331262807 container init 8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 03:33:01 np0005558241 nova_compute[248510]: 2025-12-13 08:33:01.976 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:01 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:01Z|00716|binding|INFO|Releasing lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 from this chassis (sb_readonly=0)
Dec 13 03:33:01 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:01Z|00717|binding|INFO|Setting lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 down in Southbound
Dec 13 03:33:01 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:01Z|00718|binding|INFO|Removing iface tapd001c32a-bc ovn-installed in OVS
Dec 13 03:33:01 np0005558241 nova_compute[248510]: 2025-12-13 08:33:01.980 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:01 np0005558241 podman[320420]: 2025-12-13 08:33:01.986298265 +0000 UTC m=+0.341073721 container start 8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 03:33:01 np0005558241 podman[320420]: 2025-12-13 08:33:01.991468013 +0000 UTC m=+0.346243519 container attach 8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:33:01 np0005558241 systemd[1]: libpod-8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45.scope: Deactivated successfully.
Dec 13 03:33:01 np0005558241 eloquent_bhaskara[320436]: 167 167
Dec 13 03:33:01 np0005558241 conmon[320436]: conmon 8e614ed305013b613de1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45.scope/container/memory.events
Dec 13 03:33:01 np0005558241 nova_compute[248510]: 2025-12-13 08:33:01.997 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:02 np0005558241 podman[320445]: 2025-12-13 08:33:02.045494435 +0000 UTC m=+0.030636992 container died 8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 03:33:02 np0005558241 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Dec 13 03:33:02 np0005558241 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000004a.scope: Consumed 15.901s CPU time.
Dec 13 03:33:02 np0005558241 systemd-machined[210538]: Machine qemu-86-instance-0000004a terminated.
Dec 13 03:33:02 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1a46cf14a40daa16884c103e54cb76efd65f460f4760bf80edfb426c00a0d706-merged.mount: Deactivated successfully.
Dec 13 03:33:02 np0005558241 podman[320445]: 2025-12-13 08:33:02.209459927 +0000 UTC m=+0.194602454 container remove 8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 03:33:02 np0005558241 systemd[1]: libpod-conmon-8e614ed305013b613de1041c7a097a59a64c9ce4d2ea794c8678e3bdc5c0bb45.scope: Deactivated successfully.
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.492 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:10:dc 10.100.0.13'], port_security=['fa:16:3e:63:10:dc 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bb8c91ff-01cb-4fd5-ab69-005313784b57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8f5cceeb-71e0-4eee-9335-64d44ec2d969', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d001c32a-bc2d-4374-9cf1-cea4a3723c66) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.493 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d001c32a-bc2d-4374-9cf1-cea4a3723c66 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 unbound from our chassis#033[00m
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.495 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839#033[00m
Dec 13 03:33:02 np0005558241 podman[320480]: 2025-12-13 08:33:02.404334036 +0000 UTC m=+0.030749504 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.504 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.505 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.505 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.505 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.505 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.505 248514 WARNING nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.506 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.506 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.506 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.506 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.506 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.507 248514 WARNING nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.507 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.507 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.507 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.507 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.507 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Processing event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.508 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.508 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.508 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.508 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.508 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] No waiting events found dispatching network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.508 248514 WARNING nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received unexpected event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 for instance with vm_state stopped and task_state rebuild_spawning.#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.509 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-unplugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.509 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.509 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.509 248514 DEBUG oslo_concurrency.lockutils [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.509 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-unplugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.509 248514 DEBUG nova.compute.manager [req-b2ee3377-620b-4528-814f-a365403537a8 req-badcf419-9272-4ee4-949e-5c767ed678c1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-unplugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.510 248514 DEBUG nova.compute.manager [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.513 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2a8a16-40af-4059-a203-3a8a6bbf818d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.514 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614782.5146015, 5d34feed-2663-4e17-b951-65a37bd3a275 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.515 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.518 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:33:02 np0005558241 podman[320480]: 2025-12-13 08:33:02.520427089 +0000 UTC m=+0.146842527 container create 45cfaa3252f2a9eaaee7f0361bb741634958887327f5abe77783ead1921ae95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.522 248514 INFO nova.virt.libvirt.driver [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance spawned successfully.#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.522 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.525 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.548 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[227d1804-c924-4413-ac7f-daac01a6c475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.552 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6e938092-10c9-42de-83c2-89166fd12ba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.580 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[dbbd2c29-586d-4bc5-838b-cd92700f3661]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.597 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.600 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.601 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.604 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.603 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[53bc3772-44f6-4ed4-b0aa-82e7f3389470]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727295, 'reachable_time': 29620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320502, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:02 np0005558241 systemd[1]: Started libpod-conmon-45cfaa3252f2a9eaaee7f0361bb741634958887327f5abe77783ead1921ae95a.scope.
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.611 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.611 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.612 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.612 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.613 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.613 248514 DEBUG nova.virt.libvirt.driver [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.622 248514 INFO nova.compute.manager [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Took 4.22 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.622 248514 DEBUG oslo.service.loopingcall [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.623 248514 DEBUG nova.compute.manager [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.623 248514 DEBUG nova.network.neutron [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:33:02 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:33:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e6d5640372a21c1b8cef302f52a30b06f4dec59f93141f683aff1f0cf2166b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:33:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e6d5640372a21c1b8cef302f52a30b06f4dec59f93141f683aff1f0cf2166b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:33:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e6d5640372a21c1b8cef302f52a30b06f4dec59f93141f683aff1f0cf2166b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:33:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e6d5640372a21c1b8cef302f52a30b06f4dec59f93141f683aff1f0cf2166b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.628 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba47d38-2ca4-479c-a228-a86b30eb8189]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap43ee8730-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727305, 'tstamp': 727305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320505, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap43ee8730-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727308, 'tstamp': 727308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320505, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.636 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.638 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.644 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.644 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.645 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.646 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.646 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.647 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec 13 03:33:02 np0005558241 podman[320480]: 2025-12-13 08:33:02.687275253 +0000 UTC m=+0.313690711 container init 45cfaa3252f2a9eaaee7f0361bb741634958887327f5abe77783ead1921ae95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.693 248514 DEBUG nova.compute.manager [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:02 np0005558241 podman[320480]: 2025-12-13 08:33:02.697493407 +0000 UTC m=+0.323908845 container start 45cfaa3252f2a9eaaee7f0361bb741634958887327f5abe77783ead1921ae95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 03:33:02 np0005558241 podman[320480]: 2025-12-13 08:33:02.701107446 +0000 UTC m=+0.327522894 container attach 45cfaa3252f2a9eaaee7f0361bb741634958887327f5abe77783ead1921ae95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.712 248514 INFO nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance shutdown successfully after 25 seconds.#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.718 248514 INFO nova.virt.libvirt.driver [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance destroyed successfully.#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.723 248514 INFO nova.virt.libvirt.driver [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance destroyed successfully.#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.724 248514 DEBUG nova.virt.libvirt.vif [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:32:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-81806715',display_name='tempest-ServerActionsTestJSON-server-1327556776',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-81806715',id=74,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:32:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-yhg7rdps',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-proje
ct-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:32:36Z,user_data=None,user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=bb8c91ff-01cb-4fd5-ab69-005313784b57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.724 248514 DEBUG nova.network.os_vif_util [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.725 248514 DEBUG nova.network.os_vif_util [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.725 248514 DEBUG os_vif [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.726 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.727 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd001c32a-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.729 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.731 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.734 248514 INFO os_vif [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc')#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.769 248514 INFO nova.compute.manager [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] bringing vm to original state: 'stopped'#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.862 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.862 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.863 248514 DEBUG nova.compute.manager [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.867 248514 DEBUG nova.compute.manager [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Dec 13 03:33:02 np0005558241 kernel: tap533958d1-8a (unregistering): left promiscuous mode
Dec 13 03:33:02 np0005558241 NetworkManager[50376]: <info>  [1765614782.9045] device (tap533958d1-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:33:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:02Z|00719|binding|INFO|Releasing lport 533958d1-8a74-4963-9731-40767b4bb127 from this chassis (sb_readonly=0)
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.911 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:02Z|00720|binding|INFO|Setting lport 533958d1-8a74-4963-9731-40767b4bb127 down in Southbound
Dec 13 03:33:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:02Z|00721|binding|INFO|Removing iface tap533958d1-8a ovn-installed in OVS
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.920 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:ab:b9 10.100.0.5'], port_security=['fa:16:3e:66:ab:b9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5d34feed-2663-4e17-b951-65a37bd3a275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '45afb483-a012-4442-b20c-edd0f1f0f374', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6794829-4e4e-4196-8b70-d8e6b3d73dd5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=533958d1-8a74-4963-9731-40767b4bb127) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.921 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 533958d1-8a74-4963-9731-40767b4bb127 in datapath 409fc0bb-caf3-4b90-9e44-83ff383dc88f unbound from our chassis#033[00m
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.923 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 409fc0bb-caf3-4b90-9e44-83ff383dc88f#033[00m
Dec 13 03:33:02 np0005558241 nova_compute[248510]: 2025-12-13 08:33:02.930 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.939 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[276489f2-760e-4ad3-a1b8-4f7b772073dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:02 np0005558241 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d00000048.scope: Deactivated successfully.
Dec 13 03:33:02 np0005558241 systemd-machined[210538]: Machine qemu-89-instance-00000048 terminated.
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.966 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[25baca03-4caa-4b6a-bb9a-16accdd71127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.968 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c93a577c-2412-4bd2-89e3-b806213b2990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:02.994 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fa0785f5-601f-4aa0-913c-7f322db62dd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]: {
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:    "0": [
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:        {
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "devices": [
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "/dev/loop3"
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            ],
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "lv_name": "ceph_lv0",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "lv_size": "21470642176",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "name": "ceph_lv0",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "tags": {
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.cluster_name": "ceph",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.crush_device_class": "",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.encrypted": "0",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.objectstore": "bluestore",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.osd_id": "0",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.type": "block",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.vdo": "0",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.with_tpm": "0"
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            },
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "type": "block",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "vg_name": "ceph_vg0"
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:        }
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:    ],
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:    "1": [
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:        {
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "devices": [
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "/dev/loop4"
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            ],
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "lv_name": "ceph_lv1",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "lv_size": "21470642176",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "name": "ceph_lv1",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "tags": {
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.cluster_name": "ceph",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.crush_device_class": "",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.encrypted": "0",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.objectstore": "bluestore",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.osd_id": "1",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.type": "block",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.vdo": "0",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.with_tpm": "0"
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            },
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "type": "block",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "vg_name": "ceph_vg1"
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:        }
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:    ],
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:    "2": [
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:        {
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "devices": [
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "/dev/loop5"
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            ],
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "lv_name": "ceph_lv2",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "lv_size": "21470642176",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "name": "ceph_lv2",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "tags": {
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.cluster_name": "ceph",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.crush_device_class": "",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.encrypted": "0",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.objectstore": "bluestore",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.osd_id": "2",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.type": "block",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.vdo": "0",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:                "ceph.with_tpm": "0"
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            },
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "type": "block",
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:            "vg_name": "ceph_vg2"
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:        }
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]:    ]
Dec 13 03:33:03 np0005558241 crazy_swanson[320503]: }
Dec 13 03:33:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:03.017 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[36171559-8111-4fa6-af6f-0b8b6704043b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap409fc0bb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:3b:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727398, 'reachable_time': 29799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320540, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:03 np0005558241 systemd[1]: libpod-45cfaa3252f2a9eaaee7f0361bb741634958887327f5abe77783ead1921ae95a.scope: Deactivated successfully.
Dec 13 03:33:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:03.036 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fa90035d-5009-44f9-82ff-1eda351bb212]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727416, 'tstamp': 727416}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320541, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727421, 'tstamp': 727421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320541, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:03.039 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap409fc0bb-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:03 np0005558241 nova_compute[248510]: 2025-12-13 08:33:03.040 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:03 np0005558241 nova_compute[248510]: 2025-12-13 08:33:03.045 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:03.046 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap409fc0bb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:03.046 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:33:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:03.046 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap409fc0bb-c0, col_values=(('external_ids', {'iface-id': 'c8e8a31b-a5fe-4e2d-bc19-65995078988f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:03.046 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:33:03 np0005558241 podman[320543]: 2025-12-13 08:33:03.085502543 +0000 UTC m=+0.032684143 container died 45cfaa3252f2a9eaaee7f0361bb741634958887327f5abe77783ead1921ae95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:33:03 np0005558241 nova_compute[248510]: 2025-12-13 08:33:03.115 248514 INFO nova.virt.libvirt.driver [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance destroyed successfully.#033[00m
Dec 13 03:33:03 np0005558241 nova_compute[248510]: 2025-12-13 08:33:03.115 248514 DEBUG nova.compute.manager [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:03 np0005558241 nova_compute[248510]: 2025-12-13 08:33:03.187 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.325s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:03 np0005558241 nova_compute[248510]: 2025-12-13 08:33:03.218 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:03 np0005558241 nova_compute[248510]: 2025-12-13 08:33:03.218 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:03 np0005558241 nova_compute[248510]: 2025-12-13 08:33:03.218 248514 DEBUG nova.objects.instance [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec 13 03:33:03 np0005558241 nova_compute[248510]: 2025-12-13 08:33:03.295 248514 DEBUG oslo_concurrency.lockutils [None req-2d10340c-0e6a-4b72-aa1f-3614c1d0e6bd 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:03 np0005558241 nova_compute[248510]: 2025-12-13 08:33:03.395 248514 DEBUG nova.compute.manager [req-b7d3fed2-93b2-4ab7-b6f4-729d369f9392 req-8766cc66-0a15-4d9d-8da2-ee48c75a0a34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-vif-unplugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:03 np0005558241 nova_compute[248510]: 2025-12-13 08:33:03.395 248514 DEBUG oslo_concurrency.lockutils [req-b7d3fed2-93b2-4ab7-b6f4-729d369f9392 req-8766cc66-0a15-4d9d-8da2-ee48c75a0a34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:03 np0005558241 nova_compute[248510]: 2025-12-13 08:33:03.396 248514 DEBUG oslo_concurrency.lockutils [req-b7d3fed2-93b2-4ab7-b6f4-729d369f9392 req-8766cc66-0a15-4d9d-8da2-ee48c75a0a34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:03 np0005558241 nova_compute[248510]: 2025-12-13 08:33:03.396 248514 DEBUG oslo_concurrency.lockutils [req-b7d3fed2-93b2-4ab7-b6f4-729d369f9392 req-8766cc66-0a15-4d9d-8da2-ee48c75a0a34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:03 np0005558241 nova_compute[248510]: 2025-12-13 08:33:03.396 248514 DEBUG nova.compute.manager [req-b7d3fed2-93b2-4ab7-b6f4-729d369f9392 req-8766cc66-0a15-4d9d-8da2-ee48c75a0a34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] No waiting events found dispatching network-vif-unplugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:03 np0005558241 nova_compute[248510]: 2025-12-13 08:33:03.396 248514 WARNING nova.compute.manager [req-b7d3fed2-93b2-4ab7-b6f4-729d369f9392 req-8766cc66-0a15-4d9d-8da2-ee48c75a0a34 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received unexpected event network-vif-unplugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 for instance with vm_state active and task_state rebuilding.#033[00m
Dec 13 03:33:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2074: 321 pgs: 321 active+clean; 513 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 181 op/s
Dec 13 03:33:03 np0005558241 systemd[1]: var-lib-containers-storage-overlay-14e6d5640372a21c1b8cef302f52a30b06f4dec59f93141f683aff1f0cf2166b-merged.mount: Deactivated successfully.
Dec 13 03:33:03 np0005558241 nova_compute[248510]: 2025-12-13 08:33:03.911 248514 DEBUG nova.network.neutron [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:33:03 np0005558241 nova_compute[248510]: 2025-12-13 08:33:03.938 248514 INFO nova.compute.manager [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Took 1.32 seconds to deallocate network for instance.#033[00m
Dec 13 03:33:03 np0005558241 podman[320543]: 2025-12-13 08:33:03.98575373 +0000 UTC m=+0.932935310 container remove 45cfaa3252f2a9eaaee7f0361bb741634958887327f5abe77783ead1921ae95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 03:33:03 np0005558241 nova_compute[248510]: 2025-12-13 08:33:03.990 248514 DEBUG oslo_concurrency.lockutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:03 np0005558241 nova_compute[248510]: 2025-12-13 08:33:03.990 248514 DEBUG oslo_concurrency.lockutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:03 np0005558241 systemd[1]: libpod-conmon-45cfaa3252f2a9eaaee7f0361bb741634958887327f5abe77783ead1921ae95a.scope: Deactivated successfully.
Dec 13 03:33:04 np0005558241 nova_compute[248510]: 2025-12-13 08:33:04.160 248514 DEBUG oslo_concurrency.processutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:04 np0005558241 podman[320648]: 2025-12-13 08:33:04.572191762 +0000 UTC m=+0.104339462 container create 69d3fea6efd0e6e99c93b9f3470cf9f5c2a3ec1dcea90ac39336197517d300c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lovelace, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:33:04 np0005558241 podman[320648]: 2025-12-13 08:33:04.491755095 +0000 UTC m=+0.023902785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:33:04 np0005558241 nova_compute[248510]: 2025-12-13 08:33:04.601 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:33:04 np0005558241 nova_compute[248510]: 2025-12-13 08:33:04.602 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:33:04 np0005558241 nova_compute[248510]: 2025-12-13 08:33:04.603 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:33:04 np0005558241 nova_compute[248510]: 2025-12-13 08:33:04.603 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:33:04 np0005558241 systemd[1]: Started libpod-conmon-69d3fea6efd0e6e99c93b9f3470cf9f5c2a3ec1dcea90ac39336197517d300c7.scope.
Dec 13 03:33:04 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:33:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:33:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3911432715' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:33:04 np0005558241 podman[320648]: 2025-12-13 08:33:04.93089779 +0000 UTC m=+0.463045500 container init 69d3fea6efd0e6e99c93b9f3470cf9f5c2a3ec1dcea90ac39336197517d300c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lovelace, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:33:04 np0005558241 nova_compute[248510]: 2025-12-13 08:33:04.933 248514 DEBUG oslo_concurrency.processutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.772s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:04 np0005558241 nova_compute[248510]: 2025-12-13 08:33:04.944 248514 DEBUG nova.compute.provider_tree [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:33:04 np0005558241 podman[320648]: 2025-12-13 08:33:04.944457337 +0000 UTC m=+0.476604997 container start 69d3fea6efd0e6e99c93b9f3470cf9f5c2a3ec1dcea90ac39336197517d300c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lovelace, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:33:04 np0005558241 sharp_lovelace[320666]: 167 167
Dec 13 03:33:04 np0005558241 systemd[1]: libpod-69d3fea6efd0e6e99c93b9f3470cf9f5c2a3ec1dcea90ac39336197517d300c7.scope: Deactivated successfully.
Dec 13 03:33:04 np0005558241 nova_compute[248510]: 2025-12-13 08:33:04.967 248514 DEBUG nova.scheduler.client.report [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:33:04 np0005558241 podman[320648]: 2025-12-13 08:33:04.990688625 +0000 UTC m=+0.522836315 container attach 69d3fea6efd0e6e99c93b9f3470cf9f5c2a3ec1dcea90ac39336197517d300c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lovelace, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:33:04 np0005558241 podman[320648]: 2025-12-13 08:33:04.991313391 +0000 UTC m=+0.523461081 container died 69d3fea6efd0e6e99c93b9f3470cf9f5c2a3ec1dcea90ac39336197517d300c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 03:33:04 np0005558241 nova_compute[248510]: 2025-12-13 08:33:04.995 248514 DEBUG oslo_concurrency.lockutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.031 248514 INFO nova.scheduler.client.report [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Deleted allocations for instance c5edbf88-6361-407a-a0f1-c133f70b50e9#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.120 248514 DEBUG oslo_concurrency.lockutils [None req-35896c2b-8584-4018-aeda-5389edef3722 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.233 248514 DEBUG nova.compute.manager [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.233 248514 DEBUG oslo_concurrency.lockutils [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.234 248514 DEBUG oslo_concurrency.lockutils [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.235 248514 DEBUG oslo_concurrency.lockutils [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c5edbf88-6361-407a-a0f1-c133f70b50e9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.235 248514 DEBUG nova.compute.manager [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] No waiting events found dispatching network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.235 248514 WARNING nova.compute.manager [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received unexpected event network-vif-plugged-2bdcea64-a01f-4a75-b664-9c9c971533a6 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.235 248514 DEBUG nova.compute.manager [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-vif-unplugged-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.235 248514 DEBUG oslo_concurrency.lockutils [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.236 248514 DEBUG oslo_concurrency.lockutils [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.236 248514 DEBUG oslo_concurrency.lockutils [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.236 248514 DEBUG nova.compute.manager [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] No waiting events found dispatching network-vif-unplugged-533958d1-8a74-4963-9731-40767b4bb127 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.236 248514 WARNING nova.compute.manager [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received unexpected event network-vif-unplugged-533958d1-8a74-4963-9731-40767b4bb127 for instance with vm_state stopped and task_state None.#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.236 248514 DEBUG nova.compute.manager [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.236 248514 DEBUG oslo_concurrency.lockutils [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.236 248514 DEBUG oslo_concurrency.lockutils [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.237 248514 DEBUG oslo_concurrency.lockutils [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.237 248514 DEBUG nova.compute.manager [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] No waiting events found dispatching network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.237 248514 WARNING nova.compute.manager [req-2eadd067-7559-4ab9-9080-2b4bb1036de2 req-fc04a623-cae3-4bc9-abf3-da1d627f1745 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received unexpected event network-vif-plugged-533958d1-8a74-4963-9731-40767b4bb127 for instance with vm_state stopped and task_state None.#033[00m
Dec 13 03:33:05 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f4b7550db5ef0b9210bdc205ee0b9550ad95ef4d3c7ae9b3363eaffbc9a2aa9d-merged.mount: Deactivated successfully.
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.421 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:05 np0005558241 podman[320648]: 2025-12-13 08:33:05.531546997 +0000 UTC m=+1.063694657 container remove 69d3fea6efd0e6e99c93b9f3470cf9f5c2a3ec1dcea90ac39336197517d300c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lovelace, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.538 248514 DEBUG nova.compute.manager [req-f4a044a9-9293-4154-9f85-d8093df9ccbf req-51cf80f0-e9fc-40e5-baa7-8272337c5d88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.539 248514 DEBUG oslo_concurrency.lockutils [req-f4a044a9-9293-4154-9f85-d8093df9ccbf req-51cf80f0-e9fc-40e5-baa7-8272337c5d88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.539 248514 DEBUG oslo_concurrency.lockutils [req-f4a044a9-9293-4154-9f85-d8093df9ccbf req-51cf80f0-e9fc-40e5-baa7-8272337c5d88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.539 248514 DEBUG oslo_concurrency.lockutils [req-f4a044a9-9293-4154-9f85-d8093df9ccbf req-51cf80f0-e9fc-40e5-baa7-8272337c5d88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.539 248514 DEBUG nova.compute.manager [req-f4a044a9-9293-4154-9f85-d8093df9ccbf req-51cf80f0-e9fc-40e5-baa7-8272337c5d88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] No waiting events found dispatching network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.540 248514 WARNING nova.compute.manager [req-f4a044a9-9293-4154-9f85-d8093df9ccbf req-51cf80f0-e9fc-40e5-baa7-8272337c5d88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received unexpected event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 for instance with vm_state active and task_state rebuilding.#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.540 248514 DEBUG nova.compute.manager [req-f4a044a9-9293-4154-9f85-d8093df9ccbf req-51cf80f0-e9fc-40e5-baa7-8272337c5d88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Received event network-vif-deleted-2bdcea64-a01f-4a75-b664-9c9c971533a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:05 np0005558241 systemd[1]: libpod-conmon-69d3fea6efd0e6e99c93b9f3470cf9f5c2a3ec1dcea90ac39336197517d300c7.scope: Deactivated successfully.
Dec 13 03:33:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2075: 321 pgs: 321 active+clean; 395 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 165 op/s
Dec 13 03:33:05 np0005558241 podman[320690]: 2025-12-13 08:33:05.75152183 +0000 UTC m=+0.026853618 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:33:05 np0005558241 podman[320690]: 2025-12-13 08:33:05.961386492 +0000 UTC m=+0.236718250 container create 5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.962 248514 DEBUG oslo_concurrency.lockutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.963 248514 DEBUG oslo_concurrency.lockutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.964 248514 DEBUG oslo_concurrency.lockutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.965 248514 DEBUG oslo_concurrency.lockutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.965 248514 DEBUG oslo_concurrency.lockutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.967 248514 INFO nova.compute.manager [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Terminating instance#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.968 248514 DEBUG nova.compute.manager [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.978 248514 INFO nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Deleting instance files /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57_del#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.979 248514 INFO nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Deletion of /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57_del complete#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.989 248514 INFO nova.virt.libvirt.driver [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Instance destroyed successfully.#033[00m
Dec 13 03:33:05 np0005558241 nova_compute[248510]: 2025-12-13 08:33:05.990 248514 DEBUG nova.objects.instance [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'resources' on Instance uuid 5d34feed-2663-4e17-b951-65a37bd3a275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.014 248514 DEBUG nova.virt.libvirt.vif [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:32:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1505100715',display_name='tempest-tempest.common.compute-instance-1505100715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1505100715',id=72,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:33:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-kk2o7y6y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:33:03Z,user_data=None,user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=5d34feed-2663-4e17-b951-65a37bd3a275,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.014 248514 DEBUG nova.network.os_vif_util [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "533958d1-8a74-4963-9731-40767b4bb127", "address": "fa:16:3e:66:ab:b9", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap533958d1-8a", "ovs_interfaceid": "533958d1-8a74-4963-9731-40767b4bb127", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.015 248514 DEBUG nova.network.os_vif_util [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.015 248514 DEBUG os_vif [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.018 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.018 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap533958d1-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.019 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.021 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.023 248514 INFO os_vif [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:ab:b9,bridge_name='br-int',has_traffic_filtering=True,id=533958d1-8a74-4963-9731-40767b4bb127,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap533958d1-8a')#033[00m
Dec 13 03:33:06 np0005558241 systemd[1]: Started libpod-conmon-5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be.scope.
Dec 13 03:33:06 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:33:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4936bd69fac4a138aa07744c131934183e99b4ee8c320e3cca08c2853adcc5b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:33:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4936bd69fac4a138aa07744c131934183e99b4ee8c320e3cca08c2853adcc5b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:33:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4936bd69fac4a138aa07744c131934183e99b4ee8c320e3cca08c2853adcc5b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:33:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4936bd69fac4a138aa07744c131934183e99b4ee8c320e3cca08c2853adcc5b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.195 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.196 248514 INFO nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Creating image(s)#033[00m
Dec 13 03:33:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.224 248514 DEBUG nova.storage.rbd_utils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:06 np0005558241 podman[320690]: 2025-12-13 08:33:06.257785642 +0000 UTC m=+0.533117440 container init 5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.261 248514 DEBUG nova.storage.rbd_utils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:06 np0005558241 podman[320690]: 2025-12-13 08:33:06.26654636 +0000 UTC m=+0.541878128 container start 5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.286 248514 DEBUG nova.storage.rbd_utils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.291 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.332 248514 DEBUG oslo_concurrency.lockutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.332 248514 DEBUG oslo_concurrency.lockutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.333 248514 DEBUG oslo_concurrency.lockutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.333 248514 DEBUG oslo_concurrency.lockutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.333 248514 DEBUG oslo_concurrency.lockutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.335 248514 INFO nova.compute.manager [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Terminating instance#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.337 248514 DEBUG nova.compute.manager [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:33:06 np0005558241 podman[320690]: 2025-12-13 08:33:06.369507147 +0000 UTC m=+0.644839015 container attach 5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.381 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.382 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.383 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.383 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.406 248514 DEBUG nova.storage.rbd_utils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.410 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc bb8c91ff-01cb-4fd5-ab69-005313784b57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:06 np0005558241 kernel: tap7b3b1c0a-88 (unregistering): left promiscuous mode
Dec 13 03:33:06 np0005558241 NetworkManager[50376]: <info>  [1765614786.5709] device (tap7b3b1c0a-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:33:06 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:06Z|00722|binding|INFO|Releasing lport 7b3b1c0a-882e-4f33-a582-667d018090d4 from this chassis (sb_readonly=0)
Dec 13 03:33:06 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:06Z|00723|binding|INFO|Setting lport 7b3b1c0a-882e-4f33-a582-667d018090d4 down in Southbound
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.591 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:06 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:06Z|00724|binding|INFO|Removing iface tap7b3b1c0a-88 ovn-installed in OVS
Dec 13 03:33:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:06.606 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:56:12 10.100.0.13'], port_security=['fa:16:3e:41:56:12 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a9c6de9d-63c0-43a5-9d6e-be356e504837', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189fba04-5215-4f6f-8ce4-10038442ab30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71e2453379684f0ca0563f8c370ea4a3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '90778dd6-e0e3-4218-b41d-4ccc9c7bcba4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01261386-9137-452e-bab0-3013eb5f3942, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7b3b1c0a-882e-4f33-a582-667d018090d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:33:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:06.608 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7b3b1c0a-882e-4f33-a582-667d018090d4 in datapath 189fba04-5215-4f6f-8ce4-10038442ab30 unbound from our chassis#033[00m
Dec 13 03:33:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:06.609 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 189fba04-5215-4f6f-8ce4-10038442ab30 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 13 03:33:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:06.610 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ceda403d-8c25-4123-b8d0-f9c37ac00905]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.613 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:06 np0005558241 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000044.scope: Deactivated successfully.
Dec 13 03:33:06 np0005558241 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000044.scope: Consumed 14.821s CPU time.
Dec 13 03:33:06 np0005558241 systemd-machined[210538]: Machine qemu-82-instance-00000044 terminated.
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.782 248514 INFO nova.virt.libvirt.driver [-] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Instance destroyed successfully.#033[00m
Dec 13 03:33:06 np0005558241 nova_compute[248510]: 2025-12-13 08:33:06.784 248514 DEBUG nova.objects.instance [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lazy-loading 'resources' on Instance uuid a9c6de9d-63c0-43a5-9d6e-be356e504837 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:07 np0005558241 lvm[320910]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:33:07 np0005558241 lvm[320910]: VG ceph_vg0 finished
Dec 13 03:33:07 np0005558241 lvm[320912]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:33:07 np0005558241 lvm[320912]: VG ceph_vg1 finished
Dec 13 03:33:07 np0005558241 lvm[320914]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:33:07 np0005558241 lvm[320914]: VG ceph_vg2 finished
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.139 248514 DEBUG nova.virt.libvirt.vif [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:31:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1742357064',display_name='tempest-ServerRescueTestJSON-server-1742357064',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1742357064',id=68,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:31:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='71e2453379684f0ca0563f8c370ea4a3',ramdisk_id='',reservation_id='r-hpc6t4lx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1425963100',owner_user_name='tempest-ServerRescueTestJSON-1425963100-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:31:59Z,user_data=None,user_id='93eec08d500a4f03afb3281e9899bd6a',uuid=a9c6de9d-63c0-43a5-9d6e-be356e504837,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.141 248514 DEBUG nova.network.os_vif_util [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converting VIF {"id": "7b3b1c0a-882e-4f33-a582-667d018090d4", "address": "fa:16:3e:41:56:12", "network": {"id": "189fba04-5215-4f6f-8ce4-10038442ab30", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1650812198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "71e2453379684f0ca0563f8c370ea4a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b3b1c0a-88", "ovs_interfaceid": "7b3b1c0a-882e-4f33-a582-667d018090d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.142 248514 DEBUG nova.network.os_vif_util [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:41:56:12,bridge_name='br-int',has_traffic_filtering=True,id=7b3b1c0a-882e-4f33-a582-667d018090d4,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3b1c0a-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.143 248514 DEBUG os_vif [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:56:12,bridge_name='br-int',has_traffic_filtering=True,id=7b3b1c0a-882e-4f33-a582-667d018090d4,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3b1c0a-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.146 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.146 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b3b1c0a-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.148 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.150 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.153 248514 INFO os_vif [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:56:12,bridge_name='br-int',has_traffic_filtering=True,id=7b3b1c0a-882e-4f33-a582-667d018090d4,network=Network(189fba04-5215-4f6f-8ce4-10038442ab30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b3b1c0a-88')#033[00m
Dec 13 03:33:07 np0005558241 laughing_vaughan[320722]: {}
Dec 13 03:33:07 np0005558241 systemd[1]: libpod-5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be.scope: Deactivated successfully.
Dec 13 03:33:07 np0005558241 systemd[1]: libpod-5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be.scope: Consumed 1.485s CPU time.
Dec 13 03:33:07 np0005558241 podman[320690]: 2025-12-13 08:33:07.209957829 +0000 UTC m=+1.485289597 container died 5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 03:33:07 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4936bd69fac4a138aa07744c131934183e99b4ee8c320e3cca08c2853adcc5b2-merged.mount: Deactivated successfully.
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.328 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc bb8c91ff-01cb-4fd5-ab69-005313784b57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.918s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:07 np0005558241 podman[320690]: 2025-12-13 08:33:07.338515321 +0000 UTC m=+1.613847099 container remove 5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_vaughan, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 03:33:07 np0005558241 systemd[1]: libpod-conmon-5b52793c163e070544e95851bc4e3e0d635ec3b185125d46aa26ffcb4a9897be.scope: Deactivated successfully.
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.392 248514 DEBUG nova.storage.rbd_utils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] resizing rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:33:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:33:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:33:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:33:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:33:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2076: 321 pgs: 321 active+clean; 395 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 581 KiB/s rd, 43 KiB/s wr, 65 op/s
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.702 248514 DEBUG nova.compute.manager [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-unplugged-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.704 248514 DEBUG oslo_concurrency.lockutils [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.705 248514 DEBUG oslo_concurrency.lockutils [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.705 248514 DEBUG oslo_concurrency.lockutils [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.705 248514 DEBUG nova.compute.manager [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] No waiting events found dispatching network-vif-unplugged-7b3b1c0a-882e-4f33-a582-667d018090d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.705 248514 DEBUG nova.compute.manager [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-unplugged-7b3b1c0a-882e-4f33-a582-667d018090d4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.705 248514 DEBUG nova.compute.manager [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.705 248514 DEBUG oslo_concurrency.lockutils [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.705 248514 DEBUG oslo_concurrency.lockutils [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.706 248514 DEBUG oslo_concurrency.lockutils [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.706 248514 DEBUG nova.compute.manager [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] No waiting events found dispatching network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.706 248514 WARNING nova.compute.manager [req-54bbabb6-7083-40a2-b280-5fa1fc03b29f req-7186a749-4d2b-41f1-b8ae-e4b72ec3d53b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received unexpected event network-vif-plugged-7b3b1c0a-882e-4f33-a582-667d018090d4 for instance with vm_state rescued and task_state deleting.#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.751 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.751 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Ensure instance console log exists: /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.752 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.752 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.752 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.754 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Start _get_guest_xml network_info=[{"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.759 248514 WARNING nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.764 248514 DEBUG nova.virt.libvirt.host [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.765 248514 DEBUG nova.virt.libvirt.host [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.769 248514 DEBUG nova.virt.libvirt.host [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.769 248514 DEBUG nova.virt.libvirt.host [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.770 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.770 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.771 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.771 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.771 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.771 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.771 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.771 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.772 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.772 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.772 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.772 248514 DEBUG nova.virt.hardware [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.772 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'vcpu_model' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:33:07 np0005558241 nova_compute[248510]: 2025-12-13 08:33:07.800 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:33:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:33:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3350825097' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.360 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.381 248514 DEBUG nova.storage.rbd_utils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.386 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:33:08 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:33:08 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.790 248514 INFO nova.virt.libvirt.driver [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Deleting instance files /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275_del
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.791 248514 INFO nova.virt.libvirt.driver [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Deletion of /var/lib/nova/instances/5d34feed-2663-4e17-b951-65a37bd3a275_del complete
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.846 248514 INFO nova.compute.manager [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Took 2.88 seconds to destroy the instance on the hypervisor.
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.847 248514 DEBUG oslo.service.loopingcall [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.847 248514 DEBUG nova.compute.manager [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.847 248514 DEBUG nova.network.neutron [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 03:33:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:33:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3552837108' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.984 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.986 248514 DEBUG nova.virt.libvirt.vif [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:32:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-81806715',display_name='tempest-ServerActionsTestJSON-server-1327556776',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-81806715',id=74,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:32:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-yhg7rdps',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name=
'tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:33:06Z,user_data=None,user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=bb8c91ff-01cb-4fd5-ab69-005313784b57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.986 248514 DEBUG nova.network.os_vif_util [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.987 248514 DEBUG nova.network.os_vif_util [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.991 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:33:08 np0005558241 nova_compute[248510]:  <uuid>bb8c91ff-01cb-4fd5-ab69-005313784b57</uuid>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:  <name>instance-0000004a</name>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerActionsTestJSON-server-1327556776</nova:name>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:33:07</nova:creationTime>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:33:08 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:        <nova:user uuid="4d19f7d5ece8482dab03e4bc02fdf410">tempest-ServerActionsTestJSON-473360614-project-member</nova:user>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:        <nova:project uuid="c6718df841f0471ba710516400f126fa">tempest-ServerActionsTestJSON-473360614</nova:project>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="c10f1898-20b3-4bc9-8a36-2ee01b39c9ce"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:        <nova:port uuid="d001c32a-bc2d-4374-9cf1-cea4a3723c66">
Dec 13 03:33:08 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <entry name="serial">bb8c91ff-01cb-4fd5-ab69-005313784b57</entry>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <entry name="uuid">bb8c91ff-01cb-4fd5-ab69-005313784b57</entry>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/bb8c91ff-01cb-4fd5-ab69-005313784b57_disk">
Dec 13 03:33:08 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:33:08 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config">
Dec 13 03:33:08 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:33:08 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:63:10:dc"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <target dev="tapd001c32a-bc"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/console.log" append="off"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:33:08 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:33:08 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:33:08 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:33:08 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.993 248514 DEBUG nova.compute.manager [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Preparing to wait for external event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.993 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.993 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.993 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.994 248514 DEBUG nova.virt.libvirt.vif [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:32:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-81806715',display_name='tempest-ServerActionsTestJSON-server-1327556776',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-81806715',id=74,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:32:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-yhg7rdps',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name=
'tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:33:06Z,user_data=None,user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=bb8c91ff-01cb-4fd5-ab69-005313784b57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.994 248514 DEBUG nova.network.os_vif_util [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.995 248514 DEBUG nova.network.os_vif_util [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.995 248514 DEBUG os_vif [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.996 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.996 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:08 np0005558241 nova_compute[248510]: 2025-12-13 08:33:08.997 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.000 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.000 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd001c32a-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.001 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd001c32a-bc, col_values=(('external_ids', {'iface-id': 'd001c32a-bc2d-4374-9cf1-cea4a3723c66', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:10:dc', 'vm-uuid': 'bb8c91ff-01cb-4fd5-ab69-005313784b57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:09 np0005558241 NetworkManager[50376]: <info>  [1765614789.0375] manager: (tapd001c32a-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.037 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.041 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.046 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.047 248514 INFO os_vif [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc')#033[00m
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.124 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.125 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.125 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] No VIF found with MAC fa:16:3e:63:10:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.126 248514 INFO nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Using config drive#033[00m
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.148 248514 DEBUG nova.storage.rbd_utils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.172 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'ec2_ids' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.223 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'keypairs' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:33:09
Dec 13 03:33:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:33:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:33:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'images', 'cephfs.cephfs.meta', '.mgr', 'vms', 'volumes', 'default.rgw.log', 'default.rgw.meta']
Dec 13 03:33:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:33:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2077: 321 pgs: 321 active+clean; 344 MiB data, 717 MiB used, 59 GiB / 60 GiB avail; 627 KiB/s rd, 1.4 MiB/s wr, 126 op/s
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.891 248514 INFO nova.virt.libvirt.driver [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Deleting instance files /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837_del#033[00m
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.892 248514 INFO nova.virt.libvirt.driver [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Deletion of /var/lib/nova/instances/a9c6de9d-63c0-43a5-9d6e-be356e504837_del complete#033[00m
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.957 248514 INFO nova.compute.manager [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Took 3.62 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.958 248514 DEBUG oslo.service.loopingcall [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.959 248514 DEBUG nova.compute.manager [-] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:33:09 np0005558241 nova_compute[248510]: 2025-12-13 08:33:09.959 248514 DEBUG nova.network.neutron [-] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:33:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:33:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:33:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:33:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:33:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:33:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:33:10 np0005558241 nova_compute[248510]: 2025-12-13 08:33:10.147 248514 INFO nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Creating config drive at /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config#033[00m
Dec 13 03:33:10 np0005558241 nova_compute[248510]: 2025-12-13 08:33:10.152 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptov3s5sj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:10 np0005558241 nova_compute[248510]: 2025-12-13 08:33:10.295 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptov3s5sj" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:10 np0005558241 nova_compute[248510]: 2025-12-13 08:33:10.323 248514 DEBUG nova.storage.rbd_utils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] rbd image bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:10 np0005558241 nova_compute[248510]: 2025-12-13 08:33:10.329 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:10 np0005558241 nova_compute[248510]: 2025-12-13 08:33:10.422 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:10 np0005558241 nova_compute[248510]: 2025-12-13 08:33:10.550 248514 DEBUG oslo_concurrency.processutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config bb8c91ff-01cb-4fd5-ab69-005313784b57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:10 np0005558241 nova_compute[248510]: 2025-12-13 08:33:10.551 248514 INFO nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Deleting local config drive /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57/disk.config because it was imported into RBD.#033[00m
Dec 13 03:33:10 np0005558241 kernel: tapd001c32a-bc: entered promiscuous mode
Dec 13 03:33:10 np0005558241 NetworkManager[50376]: <info>  [1765614790.6056] manager: (tapd001c32a-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/313)
Dec 13 03:33:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:10Z|00725|binding|INFO|Claiming lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 for this chassis.
Dec 13 03:33:10 np0005558241 nova_compute[248510]: 2025-12-13 08:33:10.607 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:10Z|00726|binding|INFO|d001c32a-bc2d-4374-9cf1-cea4a3723c66: Claiming fa:16:3e:63:10:dc 10.100.0.13
Dec 13 03:33:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:10Z|00727|binding|INFO|Setting lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 ovn-installed in OVS
Dec 13 03:33:10 np0005558241 nova_compute[248510]: 2025-12-13 08:33:10.628 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:33:10 np0005558241 systemd-udevd[321183]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:33:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:33:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:33:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:33:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:33:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:33:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:33:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:33:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:33:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:33:10 np0005558241 systemd-machined[210538]: New machine qemu-90-instance-0000004a.
Dec 13 03:33:10 np0005558241 NetworkManager[50376]: <info>  [1765614790.6521] device (tapd001c32a-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:33:10 np0005558241 NetworkManager[50376]: <info>  [1765614790.6526] device (tapd001c32a-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:33:10 np0005558241 systemd[1]: Started Virtual Machine qemu-90-instance-0000004a.
Dec 13 03:33:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:10Z|00728|binding|INFO|Setting lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 up in Southbound
Dec 13 03:33:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.665 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:10:dc 10.100.0.13'], port_security=['fa:16:3e:63:10:dc 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bb8c91ff-01cb-4fd5-ab69-005313784b57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8f5cceeb-71e0-4eee-9335-64d44ec2d969', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d001c32a-bc2d-4374-9cf1-cea4a3723c66) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:33:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.666 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d001c32a-bc2d-4374-9cf1-cea4a3723c66 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 bound to our chassis#033[00m
Dec 13 03:33:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.668 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839#033[00m
Dec 13 03:33:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.686 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4c8fc3-49cd-4127-bd1b-c1051dd39fc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.715 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a3375000-1912-42b2-99cb-ef2ea9246832]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.718 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f27519ef-ac5a-4e7c-9239-a40f025a6fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.745 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[eacbbad5-eb60-4b14-9be3-70fa8f1aa536]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.765 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[26d03e3d-4f91-4314-b8c9-8bc05bda1c16]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727295, 'reachable_time': 29620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321198, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.780 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a9cef977-917d-4e02-a0a3-41eceb39db8b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap43ee8730-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727305, 'tstamp': 727305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321199, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap43ee8730-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727308, 'tstamp': 727308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321199, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.782 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:10 np0005558241 nova_compute[248510]: 2025-12-13 08:33:10.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:10 np0005558241 nova_compute[248510]: 2025-12-13 08:33:10.785 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.785 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.785 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:33:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.786 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:10.786 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:33:11 np0005558241 nova_compute[248510]: 2025-12-13 08:33:11.077 248514 DEBUG nova.network.neutron [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:33:11 np0005558241 nova_compute[248510]: 2025-12-13 08:33:11.082 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for bb8c91ff-01cb-4fd5-ab69-005313784b57 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 03:33:11 np0005558241 nova_compute[248510]: 2025-12-13 08:33:11.083 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614791.0813084, bb8c91ff-01cb-4fd5-ab69-005313784b57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:33:11 np0005558241 nova_compute[248510]: 2025-12-13 08:33:11.083 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] VM Started (Lifecycle Event)#033[00m
Dec 13 03:33:11 np0005558241 nova_compute[248510]: 2025-12-13 08:33:11.125 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:11 np0005558241 nova_compute[248510]: 2025-12-13 08:33:11.130 248514 INFO nova.compute.manager [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Took 2.28 seconds to deallocate network for instance.#033[00m
Dec 13 03:33:11 np0005558241 nova_compute[248510]: 2025-12-13 08:33:11.137 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614791.0815198, bb8c91ff-01cb-4fd5-ab69-005313784b57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:33:11 np0005558241 nova_compute[248510]: 2025-12-13 08:33:11.137 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:33:11 np0005558241 nova_compute[248510]: 2025-12-13 08:33:11.197 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:11 np0005558241 nova_compute[248510]: 2025-12-13 08:33:11.202 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:33:11 np0005558241 nova_compute[248510]: 2025-12-13 08:33:11.211 248514 DEBUG oslo_concurrency.lockutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:11 np0005558241 nova_compute[248510]: 2025-12-13 08:33:11.212 248514 DEBUG oslo_concurrency.lockutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:11 np0005558241 nova_compute[248510]: 2025-12-13 08:33:11.217 248514 DEBUG nova.compute.manager [req-81daf74f-1d5c-4aed-bb50-0259ac766e96 req-9ee9eb7f-64ae-4492-8737-2f71ad5baac9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Received event network-vif-deleted-533958d1-8a74-4963-9731-40767b4bb127 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:33:11 np0005558241 nova_compute[248510]: 2025-12-13 08:33:11.241 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec 13 03:33:11 np0005558241 nova_compute[248510]: 2025-12-13 08:33:11.370 248514 DEBUG oslo_concurrency.processutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2078: 321 pgs: 321 active+clean; 305 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 1.8 MiB/s wr, 152 op/s
Dec 13 03:33:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:33:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/423383259' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:33:11 np0005558241 nova_compute[248510]: 2025-12-13 08:33:11.953 248514 DEBUG oslo_concurrency.processutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:11 np0005558241 nova_compute[248510]: 2025-12-13 08:33:11.959 248514 DEBUG nova.compute.provider_tree [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:33:12 np0005558241 nova_compute[248510]: 2025-12-13 08:33:12.126 248514 DEBUG nova.scheduler.client.report [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:33:12 np0005558241 nova_compute[248510]: 2025-12-13 08:33:12.757 248514 DEBUG nova.network.neutron [-] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:33:12 np0005558241 nova_compute[248510]: 2025-12-13 08:33:12.763 248514 DEBUG nova.compute.manager [req-735623e6-432f-4623-968c-b96688d045fe req-ccce2a6c-cb96-4f66-b033-47614369de63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Received event network-vif-deleted-7b3b1c0a-882e-4f33-a582-667d018090d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:12 np0005558241 nova_compute[248510]: 2025-12-13 08:33:12.763 248514 INFO nova.compute.manager [req-735623e6-432f-4623-968c-b96688d045fe req-ccce2a6c-cb96-4f66-b033-47614369de63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Neutron deleted interface 7b3b1c0a-882e-4f33-a582-667d018090d4; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 03:33:12 np0005558241 nova_compute[248510]: 2025-12-13 08:33:12.763 248514 DEBUG nova.network.neutron [req-735623e6-432f-4623-968c-b96688d045fe req-ccce2a6c-cb96-4f66-b033-47614369de63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:33:12 np0005558241 nova_compute[248510]: 2025-12-13 08:33:12.788 248514 DEBUG oslo_concurrency.lockutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:12 np0005558241 nova_compute[248510]: 2025-12-13 08:33:12.793 248514 INFO nova.compute.manager [-] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Took 2.83 seconds to deallocate network for instance.#033[00m
Dec 13 03:33:12 np0005558241 nova_compute[248510]: 2025-12-13 08:33:12.800 248514 DEBUG nova.compute.manager [req-735623e6-432f-4623-968c-b96688d045fe req-ccce2a6c-cb96-4f66-b033-47614369de63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Detach interface failed, port_id=7b3b1c0a-882e-4f33-a582-667d018090d4, reason: Instance a9c6de9d-63c0-43a5-9d6e-be356e504837 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec 13 03:33:12 np0005558241 nova_compute[248510]: 2025-12-13 08:33:12.834 248514 INFO nova.scheduler.client.report [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Deleted allocations for instance 5d34feed-2663-4e17-b951-65a37bd3a275#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.040 248514 DEBUG oslo_concurrency.lockutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.041 248514 DEBUG oslo_concurrency.lockutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.158 248514 DEBUG oslo_concurrency.processutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.351 248514 DEBUG nova.compute.manager [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.352 248514 DEBUG oslo_concurrency.lockutils [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.353 248514 DEBUG oslo_concurrency.lockutils [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.353 248514 DEBUG oslo_concurrency.lockutils [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.353 248514 DEBUG nova.compute.manager [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Processing event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.354 248514 DEBUG nova.compute.manager [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.354 248514 DEBUG oslo_concurrency.lockutils [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.355 248514 DEBUG oslo_concurrency.lockutils [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.355 248514 DEBUG oslo_concurrency.lockutils [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.355 248514 DEBUG nova.compute.manager [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] No waiting events found dispatching network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.355 248514 WARNING nova.compute.manager [req-e93cd8ed-cb4b-490d-9177-0f5f51117d93 req-6dc9a527-5a2d-4ae8-8055-4437fdc14bd4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received unexpected event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.357 248514 DEBUG nova.compute.manager [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.362 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.364 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614793.3622882, bb8c91ff-01cb-4fd5-ab69-005313784b57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.364 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.373 248514 INFO nova.virt.libvirt.driver [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance spawned successfully.#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.374 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.381 248514 DEBUG oslo_concurrency.lockutils [None req-cabe3549-a3d4-4537-ad6c-9f1029a59a08 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "5d34feed-2663-4e17-b951-65a37bd3a275" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.397 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.408 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.421 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.422 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.423 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.423 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.424 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.425 248514 DEBUG nova.virt.libvirt.driver [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.453 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.554 248514 DEBUG nova.compute.manager [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2079: 321 pgs: 321 active+clean; 273 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 1.8 MiB/s wr, 150 op/s
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.624 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.660 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614778.6589525, c5edbf88-6361-407a-a0f1-c133f70b50e9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.660 248514 INFO nova.compute.manager [-] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.721 248514 DEBUG nova.compute.manager [None req-04348bb6-9266-49b7-844a-5fc1bf7eaa61 - - - - - -] [instance: c5edbf88-6361-407a-a0f1-c133f70b50e9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:33:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3120018909' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.754 248514 DEBUG oslo_concurrency.processutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:13 np0005558241 nova_compute[248510]: 2025-12-13 08:33:13.759 248514 DEBUG nova.compute.provider_tree [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:33:14 np0005558241 nova_compute[248510]: 2025-12-13 08:33:14.037 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:14 np0005558241 nova_compute[248510]: 2025-12-13 08:33:14.478 248514 DEBUG nova.scheduler.client.report [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:33:14 np0005558241 nova_compute[248510]: 2025-12-13 08:33:14.498 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:33:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/953622266' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:33:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:33:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/953622266' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:33:15 np0005558241 nova_compute[248510]: 2025-12-13 08:33:15.088 248514 DEBUG oslo_concurrency.lockutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:15 np0005558241 nova_compute[248510]: 2025-12-13 08:33:15.091 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 1.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:15 np0005558241 nova_compute[248510]: 2025-12-13 08:33:15.092 248514 DEBUG nova.objects.instance [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec 13 03:33:15 np0005558241 nova_compute[248510]: 2025-12-13 08:33:15.122 248514 INFO nova.scheduler.client.report [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Deleted allocations for instance a9c6de9d-63c0-43a5-9d6e-be356e504837#033[00m
Dec 13 03:33:15 np0005558241 nova_compute[248510]: 2025-12-13 08:33:15.168 248514 DEBUG oslo_concurrency.lockutils [None req-d9ece206-8586-458b-b550-021fe06554d7 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:15 np0005558241 nova_compute[248510]: 2025-12-13 08:33:15.217 248514 DEBUG oslo_concurrency.lockutils [None req-605b37f2-decb-41e6-8ee4-0a1167b8cdcb 93eec08d500a4f03afb3281e9899bd6a 71e2453379684f0ca0563f8c370ea4a3 - - default default] Lock "a9c6de9d-63c0-43a5-9d6e-be356e504837" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:15 np0005558241 nova_compute[248510]: 2025-12-13 08:33:15.423 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2080: 321 pgs: 321 active+clean; 249 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 708 KiB/s rd, 1.8 MiB/s wr, 181 op/s
Dec 13 03:33:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:33:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2081: 321 pgs: 321 active+clean; 249 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 694 KiB/s rd, 1.8 MiB/s wr, 159 op/s
Dec 13 03:33:18 np0005558241 nova_compute[248510]: 2025-12-13 08:33:18.113 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614783.1120586, 5d34feed-2663-4e17-b951-65a37bd3a275 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:33:18 np0005558241 nova_compute[248510]: 2025-12-13 08:33:18.114 248514 INFO nova.compute.manager [-] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:33:18 np0005558241 nova_compute[248510]: 2025-12-13 08:33:18.335 248514 DEBUG nova.compute.manager [None req-32937ac4-a574-4c32-a091-4b040a77c725 - - - - - -] [instance: 5d34feed-2663-4e17-b951-65a37bd3a275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.091 248514 DEBUG oslo_concurrency.lockutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.091 248514 DEBUG oslo_concurrency.lockutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.092 248514 DEBUG oslo_concurrency.lockutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.092 248514 DEBUG oslo_concurrency.lockutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.092 248514 DEBUG oslo_concurrency.lockutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.094 248514 INFO nova.compute.manager [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Terminating instance#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.095 248514 DEBUG nova.compute.manager [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.178 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:19 np0005558241 kernel: tapd001c32a-bc (unregistering): left promiscuous mode
Dec 13 03:33:19 np0005558241 NetworkManager[50376]: <info>  [1765614799.2317] device (tapd001c32a-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.240 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:19 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:19Z|00729|binding|INFO|Releasing lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 from this chassis (sb_readonly=0)
Dec 13 03:33:19 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:19Z|00730|binding|INFO|Setting lport d001c32a-bc2d-4374-9cf1-cea4a3723c66 down in Southbound
Dec 13 03:33:19 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:19Z|00731|binding|INFO|Removing iface tapd001c32a-bc ovn-installed in OVS
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.244 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.253 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:10:dc 10.100.0.13'], port_security=['fa:16:3e:63:10:dc 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bb8c91ff-01cb-4fd5-ab69-005313784b57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8f5cceeb-71e0-4eee-9335-64d44ec2d969', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d001c32a-bc2d-4374-9cf1-cea4a3723c66) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:33:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.255 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d001c32a-bc2d-4374-9cf1-cea4a3723c66 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 unbound from our chassis#033[00m
Dec 13 03:33:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.257 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.261 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.273 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9ddaf093-d8f9-4fb0-acb6-b12730dcad38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:19 np0005558241 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Dec 13 03:33:19 np0005558241 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004a.scope: Consumed 6.320s CPU time.
Dec 13 03:33:19 np0005558241 systemd-machined[210538]: Machine qemu-90-instance-0000004a terminated.
Dec 13 03:33:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.308 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ee9e54-2cfe-4243-ac3e-b93eba764d9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.312 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[075c2fe4-0c90-41f1-b597-4796442c5c0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.348 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d7bb9f2b-39e1-4f0f-b34a-2f94f0346c76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.367 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ab8fe5e0-177a-4936-a08e-39d191b88387]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727295, 'reachable_time': 29620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321298, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.389 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[24563474-dfad-4deb-9773-7a8f02462746]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap43ee8730-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727305, 'tstamp': 727305}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321299, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap43ee8730-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727308, 'tstamp': 727308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321299, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.392 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.394 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:19 np0005558241 NetworkManager[50376]: <info>  [1765614799.3987] manager: (tapd001c32a-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/314)
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.399 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.399 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.400 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:33:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.400 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:19.400 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.418 248514 INFO nova.virt.libvirt.driver [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Instance destroyed successfully.#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.418 248514 DEBUG nova.objects.instance [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'resources' on Instance uuid bb8c91ff-01cb-4fd5-ab69-005313784b57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.437 248514 DEBUG nova.virt.libvirt.vif [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:32:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-81806715',display_name='tempest-ServerActionsTestJSON-server-1327556776',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-81806715',id=74,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:33:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-yhg7rdps',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio
',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:33:15Z,user_data=None,user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=bb8c91ff-01cb-4fd5-ab69-005313784b57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.437 248514 DEBUG nova.network.os_vif_util [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "address": "fa:16:3e:63:10:dc", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd001c32a-bc", "ovs_interfaceid": "d001c32a-bc2d-4374-9cf1-cea4a3723c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.438 248514 DEBUG nova.network.os_vif_util [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.438 248514 DEBUG os_vif [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.439 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.440 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd001c32a-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.441 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.442 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.445 248514 INFO os_vif [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:10:dc,bridge_name='br-int',has_traffic_filtering=True,id=d001c32a-bc2d-4374-9cf1-cea4a3723c66,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd001c32a-bc')#033[00m
Dec 13 03:33:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2082: 321 pgs: 321 active+clean; 249 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 202 op/s
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.717 248514 INFO nova.virt.libvirt.driver [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Deleting instance files /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57_del#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.718 248514 INFO nova.virt.libvirt.driver [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Deletion of /var/lib/nova/instances/bb8c91ff-01cb-4fd5-ab69-005313784b57_del complete#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.781 248514 INFO nova.compute.manager [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.782 248514 DEBUG oslo.service.loopingcall [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.783 248514 DEBUG nova.compute.manager [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:33:19 np0005558241 nova_compute[248510]: 2025-12-13 08:33:19.784 248514 DEBUG nova.network.neutron [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:33:20 np0005558241 nova_compute[248510]: 2025-12-13 08:33:20.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001877099840313598 of space, bias 1.0, pg target 0.5631299520940793 quantized to 32 (current 32)
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00066741625840538 of space, bias 1.0, pg target 0.200224877521614 quantized to 32 (current 32)
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.013637859932945e-07 of space, bias 4.0, pg target 0.0007216365431919533 quantized to 16 (current 32)
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:33:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:33:21 np0005558241 nova_compute[248510]: 2025-12-13 08:33:21.191 248514 DEBUG nova.network.neutron [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:33:21 np0005558241 nova_compute[248510]: 2025-12-13 08:33:21.218 248514 INFO nova.compute.manager [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Took 1.43 seconds to deallocate network for instance.#033[00m
Dec 13 03:33:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:33:21 np0005558241 nova_compute[248510]: 2025-12-13 08:33:21.286 248514 DEBUG oslo_concurrency.lockutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:21 np0005558241 nova_compute[248510]: 2025-12-13 08:33:21.287 248514 DEBUG oslo_concurrency.lockutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:21 np0005558241 nova_compute[248510]: 2025-12-13 08:33:21.310 248514 DEBUG nova.compute.manager [req-95512933-be9f-4742-885a-d4b297a04e9d req-73a201fe-723d-4e9b-b990-a3e69adbbca2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-vif-unplugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:21 np0005558241 nova_compute[248510]: 2025-12-13 08:33:21.311 248514 DEBUG oslo_concurrency.lockutils [req-95512933-be9f-4742-885a-d4b297a04e9d req-73a201fe-723d-4e9b-b990-a3e69adbbca2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:21 np0005558241 nova_compute[248510]: 2025-12-13 08:33:21.311 248514 DEBUG oslo_concurrency.lockutils [req-95512933-be9f-4742-885a-d4b297a04e9d req-73a201fe-723d-4e9b-b990-a3e69adbbca2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:21 np0005558241 nova_compute[248510]: 2025-12-13 08:33:21.311 248514 DEBUG oslo_concurrency.lockutils [req-95512933-be9f-4742-885a-d4b297a04e9d req-73a201fe-723d-4e9b-b990-a3e69adbbca2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:21 np0005558241 nova_compute[248510]: 2025-12-13 08:33:21.311 248514 DEBUG nova.compute.manager [req-95512933-be9f-4742-885a-d4b297a04e9d req-73a201fe-723d-4e9b-b990-a3e69adbbca2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] No waiting events found dispatching network-vif-unplugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:21 np0005558241 nova_compute[248510]: 2025-12-13 08:33:21.311 248514 WARNING nova.compute.manager [req-95512933-be9f-4742-885a-d4b297a04e9d req-73a201fe-723d-4e9b-b990-a3e69adbbca2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received unexpected event network-vif-unplugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:33:21 np0005558241 nova_compute[248510]: 2025-12-13 08:33:21.405 248514 DEBUG oslo_concurrency.processutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2083: 321 pgs: 321 active+clean; 233 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 480 KiB/s wr, 146 op/s
Dec 13 03:33:21 np0005558241 nova_compute[248510]: 2025-12-13 08:33:21.780 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614786.7785583, a9c6de9d-63c0-43a5-9d6e-be356e504837 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:33:21 np0005558241 nova_compute[248510]: 2025-12-13 08:33:21.781 248514 INFO nova.compute.manager [-] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:33:21 np0005558241 nova_compute[248510]: 2025-12-13 08:33:21.804 248514 DEBUG nova.compute.manager [None req-69261dad-8bd9-461b-bd4d-31fe12630dda - - - - - -] [instance: a9c6de9d-63c0-43a5-9d6e-be356e504837] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:33:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1598863755' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:33:21 np0005558241 nova_compute[248510]: 2025-12-13 08:33:21.988 248514 DEBUG oslo_concurrency.processutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:21 np0005558241 nova_compute[248510]: 2025-12-13 08:33:21.995 248514 DEBUG nova.compute.provider_tree [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:33:22 np0005558241 nova_compute[248510]: 2025-12-13 08:33:22.026 248514 DEBUG nova.scheduler.client.report [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:33:22 np0005558241 nova_compute[248510]: 2025-12-13 08:33:22.074 248514 DEBUG oslo_concurrency.lockutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:22 np0005558241 nova_compute[248510]: 2025-12-13 08:33:22.167 248514 INFO nova.scheduler.client.report [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Deleted allocations for instance bb8c91ff-01cb-4fd5-ab69-005313784b57#033[00m
Dec 13 03:33:22 np0005558241 nova_compute[248510]: 2025-12-13 08:33:22.263 248514 DEBUG oslo_concurrency.lockutils [None req-64b595eb-0de0-4f75-9d7d-2c49f58d1b6c 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2084: 321 pgs: 321 active+clean; 219 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 108 op/s
Dec 13 03:33:24 np0005558241 nova_compute[248510]: 2025-12-13 08:33:24.412 248514 DEBUG nova.compute.manager [req-8834ecae-828c-4dd5-a1eb-59ccc33301b0 req-4ce77b77-b333-420d-81db-283316235112 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:24 np0005558241 nova_compute[248510]: 2025-12-13 08:33:24.413 248514 DEBUG oslo_concurrency.lockutils [req-8834ecae-828c-4dd5-a1eb-59ccc33301b0 req-4ce77b77-b333-420d-81db-283316235112 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:24 np0005558241 nova_compute[248510]: 2025-12-13 08:33:24.413 248514 DEBUG oslo_concurrency.lockutils [req-8834ecae-828c-4dd5-a1eb-59ccc33301b0 req-4ce77b77-b333-420d-81db-283316235112 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:24 np0005558241 nova_compute[248510]: 2025-12-13 08:33:24.413 248514 DEBUG oslo_concurrency.lockutils [req-8834ecae-828c-4dd5-a1eb-59ccc33301b0 req-4ce77b77-b333-420d-81db-283316235112 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb8c91ff-01cb-4fd5-ab69-005313784b57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:24 np0005558241 nova_compute[248510]: 2025-12-13 08:33:24.414 248514 DEBUG nova.compute.manager [req-8834ecae-828c-4dd5-a1eb-59ccc33301b0 req-4ce77b77-b333-420d-81db-283316235112 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] No waiting events found dispatching network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:24 np0005558241 nova_compute[248510]: 2025-12-13 08:33:24.414 248514 WARNING nova.compute.manager [req-8834ecae-828c-4dd5-a1eb-59ccc33301b0 req-4ce77b77-b333-420d-81db-283316235112 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received unexpected event network-vif-plugged-d001c32a-bc2d-4374-9cf1-cea4a3723c66 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:33:24 np0005558241 nova_compute[248510]: 2025-12-13 08:33:24.414 248514 DEBUG nova.compute.manager [req-8834ecae-828c-4dd5-a1eb-59ccc33301b0 req-4ce77b77-b333-420d-81db-283316235112 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Received event network-vif-deleted-d001c32a-bc2d-4374-9cf1-cea4a3723c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:24 np0005558241 nova_compute[248510]: 2025-12-13 08:33:24.441 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:24 np0005558241 podman[321348]: 2025-12-13 08:33:24.988505978 +0000 UTC m=+0.073183408 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 03:33:25 np0005558241 podman[321347]: 2025-12-13 08:33:25.014984176 +0000 UTC m=+0.096408885 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 13 03:33:25 np0005558241 podman[321346]: 2025-12-13 08:33:25.031196039 +0000 UTC m=+0.116150146 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 03:33:25 np0005558241 nova_compute[248510]: 2025-12-13 08:33:25.428 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2085: 321 pgs: 321 active+clean; 202 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.1 KiB/s wr, 107 op/s
Dec 13 03:33:26 np0005558241 nova_compute[248510]: 2025-12-13 08:33:26.202 248514 DEBUG oslo_concurrency.lockutils [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:26 np0005558241 nova_compute[248510]: 2025-12-13 08:33:26.203 248514 DEBUG oslo_concurrency.lockutils [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:26 np0005558241 nova_compute[248510]: 2025-12-13 08:33:26.203 248514 DEBUG nova.compute.manager [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:26 np0005558241 nova_compute[248510]: 2025-12-13 08:33:26.209 248514 DEBUG nova.compute.manager [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Dec 13 03:33:26 np0005558241 nova_compute[248510]: 2025-12-13 08:33:26.210 248514 DEBUG nova.objects.instance [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'flavor' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:33:26 np0005558241 nova_compute[248510]: 2025-12-13 08:33:26.249 248514 DEBUG nova.virt.libvirt.driver [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec 13 03:33:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:26Z|00732|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 03:33:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:26Z|00733|binding|INFO|Releasing lport c8e8a31b-a5fe-4e2d-bc19-65995078988f from this chassis (sb_readonly=0)
Dec 13 03:33:26 np0005558241 nova_compute[248510]: 2025-12-13 08:33:26.424 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:26 np0005558241 nova_compute[248510]: 2025-12-13 08:33:26.845 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:27 np0005558241 nova_compute[248510]: 2025-12-13 08:33:27.564 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:27 np0005558241 nova_compute[248510]: 2025-12-13 08:33:27.565 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:27 np0005558241 nova_compute[248510]: 2025-12-13 08:33:27.587 248514 DEBUG nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:33:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2086: 321 pgs: 321 active+clean; 202 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.7 KiB/s wr, 70 op/s
Dec 13 03:33:27 np0005558241 nova_compute[248510]: 2025-12-13 08:33:27.674 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:27 np0005558241 nova_compute[248510]: 2025-12-13 08:33:27.675 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:27 np0005558241 nova_compute[248510]: 2025-12-13 08:33:27.684 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:33:27 np0005558241 nova_compute[248510]: 2025-12-13 08:33:27.685 248514 INFO nova.compute.claims [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:33:27 np0005558241 nova_compute[248510]: 2025-12-13 08:33:27.842 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:33:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3044164758' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.446 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.453 248514 DEBUG nova.compute.provider_tree [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:33:28 np0005558241 kernel: tapb5058a06-71 (unregistering): left promiscuous mode
Dec 13 03:33:28 np0005558241 NetworkManager[50376]: <info>  [1765614808.4771] device (tapb5058a06-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.478 248514 DEBUG nova.scheduler.client.report [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:33:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:28Z|00734|binding|INFO|Releasing lport b5058a06-7109-4ac0-96d8-7562e66bee25 from this chassis (sb_readonly=0)
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:28Z|00735|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 down in Southbound
Dec 13 03:33:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:28Z|00736|binding|INFO|Removing iface tapb5058a06-71 ovn-installed in OVS
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.494 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.498 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:33:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.500 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 unbound from our chassis#033[00m
Dec 13 03:33:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.502 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43ee8730-a174-4859-9c95-b3ad6dc68839, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:33:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.503 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[06ef4e92-4fba-430a-bc1c-dbf73a195999]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.504 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace which is not needed anymore#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.512 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.513 248514 DEBUG nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.517 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:28 np0005558241 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Dec 13 03:33:28 np0005558241 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d0000003c.scope: Consumed 17.934s CPU time.
Dec 13 03:33:28 np0005558241 systemd-machined[210538]: Machine qemu-79-instance-0000003c terminated.
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.578 248514 DEBUG nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.579 248514 DEBUG nova.network.neutron [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.610 248514 INFO nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.638 248514 DEBUG nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:33:28 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[314526]: [NOTICE]   (314530) : haproxy version is 2.8.14-c23fe91
Dec 13 03:33:28 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[314526]: [NOTICE]   (314530) : path to executable is /usr/sbin/haproxy
Dec 13 03:33:28 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[314526]: [WARNING]  (314530) : Exiting Master process...
Dec 13 03:33:28 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[314526]: [ALERT]    (314530) : Current worker (314533) exited with code 143 (Terminated)
Dec 13 03:33:28 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[314526]: [WARNING]  (314530) : All workers exited. Exiting... (0)
Dec 13 03:33:28 np0005558241 systemd[1]: libpod-f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce.scope: Deactivated successfully.
Dec 13 03:33:28 np0005558241 podman[321454]: 2025-12-13 08:33:28.652036888 +0000 UTC m=+0.047234164 container died f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 03:33:28 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce-userdata-shm.mount: Deactivated successfully.
Dec 13 03:33:28 np0005558241 systemd[1]: var-lib-containers-storage-overlay-cf5248c08c37cabcce8b13f5f177f9d27ffd14bb5e2bc32ec9054b3d6fc2fbb6-merged.mount: Deactivated successfully.
Dec 13 03:33:28 np0005558241 podman[321454]: 2025-12-13 08:33:28.695488057 +0000 UTC m=+0.090685313 container cleanup f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:33:28 np0005558241 systemd[1]: libpod-conmon-f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce.scope: Deactivated successfully.
Dec 13 03:33:28 np0005558241 podman[321485]: 2025-12-13 08:33:28.766444889 +0000 UTC m=+0.047905871 container remove f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:33:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.774 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3d932cab-084a-40e2-9f4b-7ca96d60e428]: (4, ('Sat Dec 13 08:33:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce)\nf362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce\nSat Dec 13 08:33:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (f362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce)\nf362db5021724a140e37a45f8ca70feb2cbf98f0f2e0bfc1c83fc55c578749ce\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.777 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6c4415d1-7d88-4d41-9969-bcef5582ecdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.778 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.780 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:28 np0005558241 kernel: tap43ee8730-a0: left promiscuous mode
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.800 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.804 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6f5f7802-afe6-463e-893e-d252ac8a1eb2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.813 248514 DEBUG nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.814 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.815 248514 INFO nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Creating image(s)#033[00m
Dec 13 03:33:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.824 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d62b36a9-70ef-49ab-b625-c144f6133d76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.825 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[79254998-3065-4ab6-b1ed-255bd818e53d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.838 248514 DEBUG nova.storage.rbd_utils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.841 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[73af01c0-6e8d-426a-913e-c61372831bee]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727285, 'reachable_time': 40152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321526, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:28 np0005558241 systemd[1]: run-netns-ovnmeta\x2d43ee8730\x2da174\x2d4859\x2d9c95\x2db3ad6dc68839.mount: Deactivated successfully.
Dec 13 03:33:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.847 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:33:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:28.847 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[4052a517-4fd2-4d47-9b6d-40f23ce3b2dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.866 248514 DEBUG nova.storage.rbd_utils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.893 248514 DEBUG nova.storage.rbd_utils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.899 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.946 248514 DEBUG nova.policy [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5b310bdebec646949fad4ea1821b4c3f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.988 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.988 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.989 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:28 np0005558241 nova_compute[248510]: 2025-12-13 08:33:28.990 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.016 248514 DEBUG nova.storage.rbd_utils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.021 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.102 248514 DEBUG nova.compute.manager [req-8fe51fe2-5292-4f88-9992-7f85303d5eda req-0168bafa-620a-4704-97e6-3e4bc3eeb595 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.102 248514 DEBUG oslo_concurrency.lockutils [req-8fe51fe2-5292-4f88-9992-7f85303d5eda req-0168bafa-620a-4704-97e6-3e4bc3eeb595 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.103 248514 DEBUG oslo_concurrency.lockutils [req-8fe51fe2-5292-4f88-9992-7f85303d5eda req-0168bafa-620a-4704-97e6-3e4bc3eeb595 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.103 248514 DEBUG oslo_concurrency.lockutils [req-8fe51fe2-5292-4f88-9992-7f85303d5eda req-0168bafa-620a-4704-97e6-3e4bc3eeb595 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.103 248514 DEBUG nova.compute.manager [req-8fe51fe2-5292-4f88-9992-7f85303d5eda req-0168bafa-620a-4704-97e6-3e4bc3eeb595 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.103 248514 WARNING nova.compute.manager [req-8fe51fe2-5292-4f88-9992-7f85303d5eda req-0168bafa-620a-4704-97e6-3e4bc3eeb595 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state powering-off.#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.271 248514 INFO nova.virt.libvirt.driver [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance shutdown successfully after 3 seconds.#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.280 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance destroyed successfully.#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.281 248514 DEBUG nova.objects.instance [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'numa_topology' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.303 248514 DEBUG nova.compute.manager [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.351 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.383 248514 DEBUG oslo_concurrency.lockutils [None req-33fb19f6-92a8-4590-bdce-7051c41f9eb4 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.419 248514 DEBUG nova.storage.rbd_utils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] resizing rbd image c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.447 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.502 248514 DEBUG nova.objects.instance [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'migration_context' on Instance uuid c98e670c-9bea-41c0-87ad-fcaba6d2be2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.520 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.521 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Ensure instance console log exists: /var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.522 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.522 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.523 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2087: 321 pgs: 321 active+clean; 202 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 6.7 KiB/s wr, 71 op/s
Dec 13 03:33:29 np0005558241 nova_compute[248510]: 2025-12-13 08:33:29.765 248514 DEBUG nova.network.neutron [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Successfully created port: 6b1400c2-c07a-450e-8698-6c2b60d0227e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:33:30 np0005558241 nova_compute[248510]: 2025-12-13 08:33:30.431 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:30 np0005558241 nova_compute[248510]: 2025-12-13 08:33:30.903 248514 DEBUG nova.network.neutron [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Successfully updated port: 6b1400c2-c07a-450e-8698-6c2b60d0227e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:33:30 np0005558241 nova_compute[248510]: 2025-12-13 08:33:30.932 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "refresh_cache-c98e670c-9bea-41c0-87ad-fcaba6d2be2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:33:30 np0005558241 nova_compute[248510]: 2025-12-13 08:33:30.933 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquired lock "refresh_cache-c98e670c-9bea-41c0-87ad-fcaba6d2be2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:33:30 np0005558241 nova_compute[248510]: 2025-12-13 08:33:30.933 248514 DEBUG nova.network.neutron [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:33:31 np0005558241 nova_compute[248510]: 2025-12-13 08:33:31.139 248514 DEBUG nova.compute.manager [req-711ff389-93e7-4683-926b-2e83a857a93e req-82964ed9-261d-4f78-b170-335bdbfa46a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received event network-changed-6b1400c2-c07a-450e-8698-6c2b60d0227e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:31 np0005558241 nova_compute[248510]: 2025-12-13 08:33:31.140 248514 DEBUG nova.compute.manager [req-711ff389-93e7-4683-926b-2e83a857a93e req-82964ed9-261d-4f78-b170-335bdbfa46a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Refreshing instance network info cache due to event network-changed-6b1400c2-c07a-450e-8698-6c2b60d0227e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:33:31 np0005558241 nova_compute[248510]: 2025-12-13 08:33:31.140 248514 DEBUG oslo_concurrency.lockutils [req-711ff389-93e7-4683-926b-2e83a857a93e req-82964ed9-261d-4f78-b170-335bdbfa46a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c98e670c-9bea-41c0-87ad-fcaba6d2be2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:33:31 np0005558241 nova_compute[248510]: 2025-12-13 08:33:31.189 248514 DEBUG nova.network.neutron [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:33:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:33:31 np0005558241 nova_compute[248510]: 2025-12-13 08:33:31.401 248514 DEBUG nova.objects.instance [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'flavor' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:31 np0005558241 nova_compute[248510]: 2025-12-13 08:33:31.427 248514 DEBUG oslo_concurrency.lockutils [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:33:31 np0005558241 nova_compute[248510]: 2025-12-13 08:33:31.428 248514 DEBUG oslo_concurrency.lockutils [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:33:31 np0005558241 nova_compute[248510]: 2025-12-13 08:33:31.428 248514 DEBUG nova.network.neutron [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:33:31 np0005558241 nova_compute[248510]: 2025-12-13 08:33:31.428 248514 DEBUG nova.objects.instance [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'info_cache' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:31 np0005558241 nova_compute[248510]: 2025-12-13 08:33:31.474 248514 DEBUG nova.compute.manager [req-893244e9-8666-4e2c-8b20-705e0531b1d9 req-c1f4ae4c-5e98-4875-a346-aafadb5df54f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:31 np0005558241 nova_compute[248510]: 2025-12-13 08:33:31.475 248514 DEBUG oslo_concurrency.lockutils [req-893244e9-8666-4e2c-8b20-705e0531b1d9 req-c1f4ae4c-5e98-4875-a346-aafadb5df54f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:31 np0005558241 nova_compute[248510]: 2025-12-13 08:33:31.475 248514 DEBUG oslo_concurrency.lockutils [req-893244e9-8666-4e2c-8b20-705e0531b1d9 req-c1f4ae4c-5e98-4875-a346-aafadb5df54f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:31 np0005558241 nova_compute[248510]: 2025-12-13 08:33:31.475 248514 DEBUG oslo_concurrency.lockutils [req-893244e9-8666-4e2c-8b20-705e0531b1d9 req-c1f4ae4c-5e98-4875-a346-aafadb5df54f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:31 np0005558241 nova_compute[248510]: 2025-12-13 08:33:31.476 248514 DEBUG nova.compute.manager [req-893244e9-8666-4e2c-8b20-705e0531b1d9 req-c1f4ae4c-5e98-4875-a346-aafadb5df54f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:31 np0005558241 nova_compute[248510]: 2025-12-13 08:33:31.476 248514 WARNING nova.compute.manager [req-893244e9-8666-4e2c-8b20-705e0531b1d9 req-c1f4ae4c-5e98-4875-a346-aafadb5df54f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state stopped and task_state powering-on.#033[00m
Dec 13 03:33:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2088: 321 pgs: 321 active+clean; 223 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 686 KiB/s wr, 29 op/s
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.689 248514 DEBUG nova.network.neutron [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Updating instance_info_cache with network_info: [{"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.712 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Releasing lock "refresh_cache-c98e670c-9bea-41c0-87ad-fcaba6d2be2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.712 248514 DEBUG nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Instance network_info: |[{"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.712 248514 DEBUG oslo_concurrency.lockutils [req-711ff389-93e7-4683-926b-2e83a857a93e req-82964ed9-261d-4f78-b170-335bdbfa46a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c98e670c-9bea-41c0-87ad-fcaba6d2be2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.713 248514 DEBUG nova.network.neutron [req-711ff389-93e7-4683-926b-2e83a857a93e req-82964ed9-261d-4f78-b170-335bdbfa46a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Refreshing network info cache for port 6b1400c2-c07a-450e-8698-6c2b60d0227e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.715 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Start _get_guest_xml network_info=[{"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.721 248514 WARNING nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.727 248514 DEBUG nova.virt.libvirt.host [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.727 248514 DEBUG nova.virt.libvirt.host [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.737 248514 DEBUG nova.virt.libvirt.host [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.738 248514 DEBUG nova.virt.libvirt.host [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.738 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.738 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.739 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.739 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.739 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.740 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.740 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.740 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.740 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.741 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.741 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.741 248514 DEBUG nova.virt.hardware [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:33:32 np0005558241 nova_compute[248510]: 2025-12-13 08:33:32.744 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.288 248514 DEBUG nova.network.neutron [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:33:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:33:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3656069266' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.315 248514 DEBUG oslo_concurrency.lockutils [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.328 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.353 248514 DEBUG nova.storage.rbd_utils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.359 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.407 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "84abd1d4-6b7b-459e-9783-fdc15d7e8bde" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.408 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "84abd1d4-6b7b-459e-9783-fdc15d7e8bde" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.423 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance destroyed successfully.#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.424 248514 DEBUG nova.objects.instance [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'numa_topology' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.439 248514 DEBUG nova.compute.manager [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.444 248514 DEBUG nova.objects.instance [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'resources' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.465 248514 DEBUG nova.virt.libvirt.vif [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:33:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.466 248514 DEBUG nova.network.os_vif_util [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.467 248514 DEBUG nova.network.os_vif_util [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.467 248514 DEBUG os_vif [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.470 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.470 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5058a06-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.472 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.474 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.477 248514 INFO os_vif [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.486 248514 DEBUG nova.virt.libvirt.driver [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Start _get_guest_xml network_info=[{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.492 248514 WARNING nova.virt.libvirt.driver [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.501 248514 DEBUG nova.virt.libvirt.host [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.502 248514 DEBUG nova.virt.libvirt.host [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.507 248514 DEBUG nova.virt.libvirt.host [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.508 248514 DEBUG nova.virt.libvirt.host [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.509 248514 DEBUG nova.virt.libvirt.driver [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.510 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.511 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.511 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.511 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.511 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.511 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.512 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.512 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.512 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.513 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.513 248514 DEBUG nova.virt.hardware [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.513 248514 DEBUG nova.objects.instance [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.537 248514 DEBUG oslo_concurrency.processutils [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.593 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.595 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2089: 321 pgs: 321 active+clean; 245 MiB data, 639 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.7 MiB/s wr, 38 op/s
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.613 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.613 248514 INFO nova.compute.claims [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.853 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:33:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/331903218' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.967 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.969 248514 DEBUG nova.virt.libvirt.vif [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:33:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1353488107',display_name='tempest-ServerActionsTestOtherA-server-1353488107',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1353488107',id=75,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-9n9e56kl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:33:28Z,user_data=None,user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=c98e670c-9bea-41c0-87ad-fcaba6d2be2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.970 248514 DEBUG nova.network.os_vif_util [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.971 248514 DEBUG nova.network.os_vif_util [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:af:77,bridge_name='br-int',has_traffic_filtering=True,id=6b1400c2-c07a-450e-8698-6c2b60d0227e,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b1400c2-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.972 248514 DEBUG nova.objects.instance [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'pci_devices' on Instance uuid c98e670c-9bea-41c0-87ad-fcaba6d2be2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.978 248514 DEBUG nova.network.neutron [req-711ff389-93e7-4683-926b-2e83a857a93e req-82964ed9-261d-4f78-b170-335bdbfa46a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Updated VIF entry in instance network info cache for port 6b1400c2-c07a-450e-8698-6c2b60d0227e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.979 248514 DEBUG nova.network.neutron [req-711ff389-93e7-4683-926b-2e83a857a93e req-82964ed9-261d-4f78-b170-335bdbfa46a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Updating instance_info_cache with network_info: [{"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:33:33 np0005558241 nova_compute[248510]: 2025-12-13 08:33:33.998 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <uuid>c98e670c-9bea-41c0-87ad-fcaba6d2be2c</uuid>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <name>instance-0000004b</name>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerActionsTestOtherA-server-1353488107</nova:name>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:33:32</nova:creationTime>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <nova:user uuid="5b310bdebec646949fad4ea1821b4c3f">tempest-ServerActionsTestOtherA-1325599242-project-member</nova:user>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <nova:project uuid="b4d2999518df4b9f8ccbabe38976dc3c">tempest-ServerActionsTestOtherA-1325599242</nova:project>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <nova:port uuid="6b1400c2-c07a-450e-8698-6c2b60d0227e">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <entry name="serial">c98e670c-9bea-41c0-87ad-fcaba6d2be2c</entry>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <entry name="uuid">c98e670c-9bea-41c0-87ad-fcaba6d2be2c</entry>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk.config">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:ee:af:77"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <target dev="tap6b1400c2-c0"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c/console.log" append="off"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:33:34 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:33:34 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.004 248514 DEBUG nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Preparing to wait for external event network-vif-plugged-6b1400c2-c07a-450e-8698-6c2b60d0227e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.005 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.005 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.006 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.007 248514 DEBUG nova.virt.libvirt.vif [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:33:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1353488107',display_name='tempest-ServerActionsTestOtherA-server-1353488107',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1353488107',id=75,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-9n9e56kl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:33:28Z,user_data=None,user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=c98e670c-9bea-41c0-87ad-fcaba6d2be2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.007 248514 DEBUG nova.network.os_vif_util [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.008 248514 DEBUG nova.network.os_vif_util [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:af:77,bridge_name='br-int',has_traffic_filtering=True,id=6b1400c2-c07a-450e-8698-6c2b60d0227e,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b1400c2-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.009 248514 DEBUG os_vif [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:af:77,bridge_name='br-int',has_traffic_filtering=True,id=6b1400c2-c07a-450e-8698-6c2b60d0227e,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b1400c2-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.011 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.012 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.012 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.013 248514 DEBUG oslo_concurrency.lockutils [req-711ff389-93e7-4683-926b-2e83a857a93e req-82964ed9-261d-4f78-b170-335bdbfa46a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c98e670c-9bea-41c0-87ad-fcaba6d2be2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.016 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.016 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b1400c2-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.017 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b1400c2-c0, col_values=(('external_ids', {'iface-id': '6b1400c2-c07a-450e-8698-6c2b60d0227e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:af:77', 'vm-uuid': 'c98e670c-9bea-41c0-87ad-fcaba6d2be2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.018 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:34 np0005558241 NetworkManager[50376]: <info>  [1765614814.0195] manager: (tap6b1400c2-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.022 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.024 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.025 248514 INFO os_vif [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:af:77,bridge_name='br-int',has_traffic_filtering=True,id=6b1400c2-c07a-450e-8698-6c2b60d0227e,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b1400c2-c0')#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.088 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.089 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.089 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] No VIF found with MAC fa:16:3e:ee:af:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.090 248514 INFO nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Using config drive#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.112 248514 DEBUG nova.storage.rbd_utils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:33:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1255405447' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.172 248514 DEBUG oslo_concurrency.processutils [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.204 248514 DEBUG oslo_concurrency.processutils [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:33:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1088339276' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.417 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614799.4151325, bb8c91ff-01cb-4fd5-ab69-005313784b57 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.418 248514 INFO nova.compute.manager [-] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.434 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.443 248514 DEBUG nova.compute.manager [None req-eda2318f-9fdd-404a-9040-41417c93f847 - - - - - -] [instance: bb8c91ff-01cb-4fd5-ab69-005313784b57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.446 248514 DEBUG nova.compute.provider_tree [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.456 248514 INFO nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Creating config drive at /var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c/disk.config#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.462 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3gxmdtgw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.495 248514 DEBUG nova.scheduler.client.report [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.523 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.524 248514 DEBUG nova.compute.manager [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.577 248514 DEBUG nova.compute.manager [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.601 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3gxmdtgw" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.645 248514 DEBUG nova.storage.rbd_utils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] rbd image c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.650 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c/disk.config c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.690 248514 INFO nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.714 248514 DEBUG nova.compute.manager [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.801 248514 DEBUG nova.compute.manager [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.802 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.803 248514 INFO nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Creating image(s)#033[00m
Dec 13 03:33:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:33:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2608325763' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.824 248514 DEBUG nova.storage.rbd_utils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.845 248514 DEBUG nova.storage.rbd_utils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.864 248514 DEBUG nova.storage.rbd_utils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.869 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.903 248514 DEBUG oslo_concurrency.processutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c/disk.config c98e670c-9bea-41c0-87ad-fcaba6d2be2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.906 248514 INFO nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Deleting local config drive /var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c/disk.config because it was imported into RBD.
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.907 248514 DEBUG oslo_concurrency.processutils [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.704s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.909 248514 DEBUG nova.virt.libvirt.vif [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:33:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.910 248514 DEBUG nova.network.os_vif_util [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.911 248514 DEBUG nova.network.os_vif_util [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.914 248514 DEBUG nova.objects.instance [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.936 248514 DEBUG nova.virt.libvirt.driver [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <uuid>3ced27d6-a2a8-4ce3-a7e7-494270418542</uuid>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <name>instance-0000003c</name>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerActionsTestJSON-server-286397061</nova:name>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:33:33</nova:creationTime>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <nova:user uuid="4d19f7d5ece8482dab03e4bc02fdf410">tempest-ServerActionsTestJSON-473360614-project-member</nova:user>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <nova:project uuid="c6718df841f0471ba710516400f126fa">tempest-ServerActionsTestJSON-473360614</nova:project>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <nova:port uuid="b5058a06-7109-4ac0-96d8-7562e66bee25">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <entry name="serial">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <entry name="uuid">3ced27d6-a2a8-4ce3-a7e7-494270418542</entry>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/3ced27d6-a2a8-4ce3-a7e7-494270418542_disk.config">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:1f:d1:eb"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <target dev="tapb5058a06-71"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542/console.log" append="off"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <input type="keyboard" bus="usb"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:33:34 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:33:34 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:33:34 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:33:34 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.938 248514 DEBUG nova.virt.libvirt.driver [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.939 248514 DEBUG nova.virt.libvirt.driver [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.940 248514 DEBUG nova.virt.libvirt.vif [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:33:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.940 248514 DEBUG nova.network.os_vif_util [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.941 248514 DEBUG nova.network.os_vif_util [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.941 248514 DEBUG os_vif [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.941 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.942 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.943 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.945 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.945 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5058a06-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.946 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5058a06-71, col_values=(('external_ids', {'iface-id': 'b5058a06-7109-4ac0-96d8-7562e66bee25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:d1:eb', 'vm-uuid': '3ced27d6-a2a8-4ce3-a7e7-494270418542'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.947 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:33:34 np0005558241 NetworkManager[50376]: <info>  [1765614814.9488] manager: (tapb5058a06-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/316)
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.954 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.956 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:33:34 np0005558241 kernel: tap6b1400c2-c0: entered promiscuous mode
Dec 13 03:33:34 np0005558241 NetworkManager[50376]: <info>  [1765614814.9598] manager: (tap6b1400c2-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/317)
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.957 248514 INFO os_vif [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.963 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:33:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:34Z|00737|binding|INFO|Claiming lport 6b1400c2-c07a-450e-8698-6c2b60d0227e for this chassis.
Dec 13 03:33:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:34Z|00738|binding|INFO|6b1400c2-c07a-450e-8698-6c2b60d0227e: Claiming fa:16:3e:ee:af:77 10.100.0.6
Dec 13 03:33:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:34.976 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:af:77 10.100.0.6'], port_security=['fa:16:3e:ee:af:77 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c98e670c-9bea-41c0-87ad-fcaba6d2be2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45afb483-a012-4442-b20c-edd0f1f0f374', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6794829-4e4e-4196-8b70-d8e6b3d73dd5, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6b1400c2-c07a-450e-8698-6c2b60d0227e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:33:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:34.978 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6b1400c2-c07a-450e-8698-6c2b60d0227e in datapath 409fc0bb-caf3-4b90-9e44-83ff383dc88f bound to our chassis#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.977 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.978 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.979 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:34 np0005558241 nova_compute[248510]: 2025-12-13 08:33:34.979 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:34.980 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 409fc0bb-caf3-4b90-9e44-83ff383dc88f#033[00m
Dec 13 03:33:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:34Z|00739|binding|INFO|Setting lport 6b1400c2-c07a-450e-8698-6c2b60d0227e ovn-installed in OVS
Dec 13 03:33:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:34Z|00740|binding|INFO|Setting lport 6b1400c2-c07a-450e-8698-6c2b60d0227e up in Southbound
Dec 13 03:33:35 np0005558241 systemd-udevd[321967]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.000 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[81b49983-a96f-427c-8704-ddf080332e9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 systemd-machined[210538]: New machine qemu-91-instance-0000004b.
Dec 13 03:33:35 np0005558241 NetworkManager[50376]: <info>  [1765614815.0165] device (tap6b1400c2-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:33:35 np0005558241 systemd[1]: Started Virtual Machine qemu-91-instance-0000004b.
Dec 13 03:33:35 np0005558241 NetworkManager[50376]: <info>  [1765614815.0174] device (tap6b1400c2-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.066 248514 DEBUG nova.storage.rbd_utils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.071 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.082 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[65c443e4-0ef2-454d-80c8-bcfbc6900977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.086 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f60e638b-f557-47bf-bcfb-18fb3ea47bea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.110 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.122 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[04490eac-5476-4907-a24b-3871e30fdf8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.142 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f9c11690-9e16-45b3-bdd2-ddf898ca0a2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap409fc0bb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:3b:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727398, 'reachable_time': 29799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321992, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.158 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[13c9c145-0a7b-4a3d-9363-391ea11bb05c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727416, 'tstamp': 727416}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321998, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727421, 'tstamp': 727421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321998, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.161 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap409fc0bb-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.164 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.170 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.171 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap409fc0bb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.171 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.172 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap409fc0bb-c0, col_values=(('external_ids', {'iface-id': 'c8e8a31b-a5fe-4e2d-bc19-65995078988f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:35 np0005558241 kernel: tapb5058a06-71: entered promiscuous mode
Dec 13 03:33:35 np0005558241 NetworkManager[50376]: <info>  [1765614815.1733] manager: (tapb5058a06-71): new Tun device (/org/freedesktop/NetworkManager/Devices/318)
Dec 13 03:33:35 np0005558241 systemd-udevd[321981]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:33:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:35Z|00741|binding|INFO|Claiming lport b5058a06-7109-4ac0-96d8-7562e66bee25 for this chassis.
Dec 13 03:33:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:35Z|00742|binding|INFO|b5058a06-7109-4ac0-96d8-7562e66bee25: Claiming fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.174 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.173 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.182 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.184 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 bound to our chassis#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.186 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839#033[00m
Dec 13 03:33:35 np0005558241 NetworkManager[50376]: <info>  [1765614815.1904] device (tapb5058a06-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:33:35 np0005558241 NetworkManager[50376]: <info>  [1765614815.1912] device (tapb5058a06-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:33:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:35Z|00743|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 ovn-installed in OVS
Dec 13 03:33:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:35Z|00744|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 up in Southbound
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.196 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.198 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.205 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[adbfcbeb-8d24-41f4-941f-e7dd3cdda83b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.206 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43ee8730-a1 in ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.208 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43ee8730-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.209 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ec98436c-6437-4d36-9232-30f2974d2043]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.210 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4f30b058-33c5-4667-860d-cb27c80c553a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 systemd-machined[210538]: New machine qemu-92-instance-0000003c.
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.226 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[fc8735f0-8a87-48ba-a8e2-eee967c328ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 systemd[1]: Started Virtual Machine qemu-92-instance-0000003c.
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.251 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[049f3777-4654-440a-b831-b6ef22a884a5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.289 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4639f5fd-6f8f-489d-8347-48b363b61b30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 NetworkManager[50376]: <info>  [1765614815.2972] manager: (tap43ee8730-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/319)
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.298 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe75b5f-50c1-48ed-8afa-2dfe2ec200bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.340 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b13a5f13-b06c-4009-83cf-a14d6cd6dd65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.343 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b0ce37-839f-42f6-8530-67f8ab59d2d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 NetworkManager[50376]: <info>  [1765614815.3694] device (tap43ee8730-a0): carrier: link connected
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.376 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5a282176-f98d-4dc8-8089-6a897408fd1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.395 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c79b87b4-34f9-47f7-97a4-3fa94660c1e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739258, 'reachable_time': 40636, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322057, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.416 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[473ae6a1-1c96-48bf-a973-5851c78eea0e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:bbcd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739258, 'tstamp': 739258}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322058, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.431 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.437 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3e5ec931-c27d-441e-a244-532ab5d85296]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739258, 'reachable_time': 40636, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322059, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.444 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.481 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[da74336c-c477-49d2-a74b-4100bf0ee61b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.534 248514 DEBUG nova.storage.rbd_utils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] resizing rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.551 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[10710c00-dd2d-42fb-bd63-4a205b55be70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.553 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.553 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.553 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:35 np0005558241 NetworkManager[50376]: <info>  [1765614815.5564] manager: (tap43ee8730-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/320)
Dec 13 03:33:35 np0005558241 kernel: tap43ee8730-a0: entered promiscuous mode
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.560 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:35Z|00745|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.575 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.578 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.579 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[410214dc-8770-4b67-818b-5b400a5857ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.580 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.581 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'env', 'PROCESS_TAG=haproxy-43ee8730-a174-4859-9c95-b3ad6dc68839', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43ee8730-a174-4859-9c95-b3ad6dc68839.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:33:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2090: 321 pgs: 321 active+clean; 248 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.640 248514 DEBUG nova.objects.instance [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'migration_context' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.658 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.658 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Ensure instance console log exists: /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.659 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.659 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.659 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.661 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.666 248514 WARNING nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.671 248514 DEBUG nova.virt.libvirt.host [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.672 248514 DEBUG nova.virt.libvirt.host [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.675 248514 DEBUG nova.virt.libvirt.host [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.676 248514 DEBUG nova.virt.libvirt.host [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.676 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.676 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.677 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.677 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.677 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.677 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.677 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.678 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.678 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.678 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.678 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.678 248514 DEBUG nova.virt.hardware [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.681 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.716 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614815.696059, c98e670c-9bea-41c0-87ad-fcaba6d2be2c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.717 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] VM Started (Lifecycle Event)#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.748 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.752 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614815.69667, c98e670c-9bea-41c0-87ad-fcaba6d2be2c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.753 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.781 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.785 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.812 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:33:35 np0005558241 nova_compute[248510]: 2025-12-13 08:33:35.957 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:35.960 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:33:36 np0005558241 podman[322225]: 2025-12-13 08:33:36.05176753 +0000 UTC m=+0.070993863 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:33:36 np0005558241 podman[322225]: 2025-12-13 08:33:36.191671615 +0000 UTC m=+0.210897958 container create eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:33:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:33:36 np0005558241 systemd[1]: Started libpod-conmon-eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959.scope.
Dec 13 03:33:36 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:33:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c4ee944e775e18453f7f96f287b0d219a31177c28ca489631fbbf96e995c67/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:33:36 np0005558241 podman[322225]: 2025-12-13 08:33:36.313765257 +0000 UTC m=+0.332991580 container init eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.316 248514 DEBUG nova.compute.manager [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.319 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 3ced27d6-a2a8-4ce3-a7e7-494270418542 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.319 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614816.3178973, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.319 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:33:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:33:36 np0005558241 podman[322225]: 2025-12-13 08:33:36.321875009 +0000 UTC m=+0.341101312 container start eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true)
Dec 13 03:33:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3968502624' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.326 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance rebooted successfully.#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.327 248514 DEBUG nova.compute.manager [None req-5b83253d-9b0c-41d7-a8d8-cddaeb821f36 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.343 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:36 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322282]: [NOTICE]   (322288) : New worker (322291) forked
Dec 13 03:33:36 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322282]: [NOTICE]   (322288) : Loading success.
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.352 248514 DEBUG nova.compute.manager [req-4a506106-98f9-4ae1-b51e-ad72afbd4834 req-26044286-1703-48f7-a58d-8fb86bf40a5b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received event network-vif-plugged-6b1400c2-c07a-450e-8698-6c2b60d0227e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.353 248514 DEBUG oslo_concurrency.lockutils [req-4a506106-98f9-4ae1-b51e-ad72afbd4834 req-26044286-1703-48f7-a58d-8fb86bf40a5b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.353 248514 DEBUG oslo_concurrency.lockutils [req-4a506106-98f9-4ae1-b51e-ad72afbd4834 req-26044286-1703-48f7-a58d-8fb86bf40a5b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.353 248514 DEBUG oslo_concurrency.lockutils [req-4a506106-98f9-4ae1-b51e-ad72afbd4834 req-26044286-1703-48f7-a58d-8fb86bf40a5b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.354 248514 DEBUG nova.compute.manager [req-4a506106-98f9-4ae1-b51e-ad72afbd4834 req-26044286-1703-48f7-a58d-8fb86bf40a5b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Processing event network-vif-plugged-6b1400c2-c07a-450e-8698-6c2b60d0227e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.355 248514 DEBUG nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.355 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.359 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.360 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.679s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.383 248514 DEBUG nova.storage.rbd_utils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.393 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:36.402 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.430 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.431 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614816.3181283, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.432 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Started (Lifecycle Event)#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.444 248514 DEBUG nova.compute.manager [req-b89358d6-4904-4a24-ba47-88b21270a8e0 req-d5114624-b951-4623-be6c-51c0e7d1adcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.444 248514 DEBUG oslo_concurrency.lockutils [req-b89358d6-4904-4a24-ba47-88b21270a8e0 req-d5114624-b951-4623-be6c-51c0e7d1adcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.445 248514 DEBUG oslo_concurrency.lockutils [req-b89358d6-4904-4a24-ba47-88b21270a8e0 req-d5114624-b951-4623-be6c-51c0e7d1adcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.445 248514 DEBUG oslo_concurrency.lockutils [req-b89358d6-4904-4a24-ba47-88b21270a8e0 req-d5114624-b951-4623-be6c-51c0e7d1adcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.446 248514 DEBUG nova.compute.manager [req-b89358d6-4904-4a24-ba47-88b21270a8e0 req-d5114624-b951-4623-be6c-51c0e7d1adcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.446 248514 WARNING nova.compute.manager [req-b89358d6-4904-4a24-ba47-88b21270a8e0 req-d5114624-b951-4623-be6c-51c0e7d1adcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.449 248514 INFO nova.virt.libvirt.driver [-] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Instance spawned successfully.#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.449 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.475 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.481 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.486 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.486 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.487 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.487 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.488 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.488 248514 DEBUG nova.virt.libvirt.driver [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.521 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614816.3589265, c98e670c-9bea-41c0-87ad-fcaba6d2be2c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.521 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.547 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.551 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.559 248514 INFO nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Took 7.75 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.560 248514 DEBUG nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.592 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.660 248514 INFO nova.compute.manager [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Took 9.01 seconds to build instance.#033[00m
Dec 13 03:33:36 np0005558241 nova_compute[248510]: 2025-12-13 08:33:36.686 248514 DEBUG oslo_concurrency.lockutils [None req-e2c9f494-74d5-4a6d-a7aa-818c4a4f800f 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:33:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3284486636' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:33:37 np0005558241 nova_compute[248510]: 2025-12-13 08:33:37.004 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:37 np0005558241 nova_compute[248510]: 2025-12-13 08:33:37.007 248514 DEBUG nova.objects.instance [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'pci_devices' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:37 np0005558241 nova_compute[248510]: 2025-12-13 08:33:37.024 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:33:37 np0005558241 nova_compute[248510]:  <uuid>84abd1d4-6b7b-459e-9783-fdc15d7e8bde</uuid>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:  <name>instance-0000004c</name>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerShowV254Test-server-116887759</nova:name>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:33:35</nova:creationTime>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:33:37 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:        <nova:user uuid="dfd2c0543e264c50b5b818f8b1bef249">tempest-ServerShowV254Test-1640329662-project-member</nova:user>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:        <nova:project uuid="94f9c66cba1c4ab683e5ee108b067558">tempest-ServerShowV254Test-1640329662</nova:project>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <entry name="serial">84abd1d4-6b7b-459e-9783-fdc15d7e8bde</entry>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <entry name="uuid">84abd1d4-6b7b-459e-9783-fdc15d7e8bde</entry>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk">
Dec 13 03:33:37 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:33:37 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config">
Dec 13 03:33:37 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:33:37 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/console.log" append="off"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:33:37 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:33:37 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:33:37 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:33:37 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:33:37 np0005558241 nova_compute[248510]: 2025-12-13 08:33:37.215 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:33:37 np0005558241 nova_compute[248510]: 2025-12-13 08:33:37.215 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:33:37 np0005558241 nova_compute[248510]: 2025-12-13 08:33:37.216 248514 INFO nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Using config drive#033[00m
Dec 13 03:33:37 np0005558241 nova_compute[248510]: 2025-12-13 08:33:37.242 248514 DEBUG nova.storage.rbd_utils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:37 np0005558241 nova_compute[248510]: 2025-12-13 08:33:37.590 248514 INFO nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Creating config drive at /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config#033[00m
Dec 13 03:33:37 np0005558241 nova_compute[248510]: 2025-12-13 08:33:37.597 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8uf9svio execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2091: 321 pgs: 321 active+clean; 248 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Dec 13 03:33:37 np0005558241 nova_compute[248510]: 2025-12-13 08:33:37.737 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8uf9svio" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:37 np0005558241 nova_compute[248510]: 2025-12-13 08:33:37.766 248514 DEBUG nova.storage.rbd_utils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:37 np0005558241 nova_compute[248510]: 2025-12-13 08:33:37.770 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:38 np0005558241 nova_compute[248510]: 2025-12-13 08:33:38.407 248514 DEBUG oslo_concurrency.processutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:38 np0005558241 nova_compute[248510]: 2025-12-13 08:33:38.409 248514 INFO nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Deleting local config drive /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config because it was imported into RBD.#033[00m
Dec 13 03:33:38 np0005558241 systemd-machined[210538]: New machine qemu-93-instance-0000004c.
Dec 13 03:33:38 np0005558241 systemd[1]: Started Virtual Machine qemu-93-instance-0000004c.
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.150 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614819.1500237, 84abd1d4-6b7b-459e-9783-fdc15d7e8bde => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.151 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.155 248514 DEBUG nova.compute.manager [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.156 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.161 248514 INFO nova.virt.libvirt.driver [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance spawned successfully.#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.162 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.187 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.195 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.199 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.199 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.200 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.200 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.200 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.201 248514 DEBUG nova.virt.libvirt.driver [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.236 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.237 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614819.150698, 84abd1d4-6b7b-459e-9783-fdc15d7e8bde => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.237 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] VM Started (Lifecycle Event)#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.281 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.284 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.292 248514 INFO nova.compute.manager [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Took 4.49 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.292 248514 DEBUG nova.compute.manager [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.318 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.372 248514 INFO nova.compute.manager [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Took 5.88 seconds to build instance.#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.391 248514 DEBUG oslo_concurrency.lockutils [None req-52927781-45b9-4eed-987a-e2c0e270e901 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "84abd1d4-6b7b-459e-9783-fdc15d7e8bde" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2092: 321 pgs: 321 active+clean; 264 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.4 MiB/s wr, 132 op/s
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.902 248514 DEBUG nova.compute.manager [req-4cf9db59-8b04-46d3-83e5-c7cf5163771b req-9eb53be2-9135-4d8e-8ae2-d19f3ff0db42 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received event network-vif-plugged-6b1400c2-c07a-450e-8698-6c2b60d0227e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.902 248514 DEBUG oslo_concurrency.lockutils [req-4cf9db59-8b04-46d3-83e5-c7cf5163771b req-9eb53be2-9135-4d8e-8ae2-d19f3ff0db42 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.902 248514 DEBUG oslo_concurrency.lockutils [req-4cf9db59-8b04-46d3-83e5-c7cf5163771b req-9eb53be2-9135-4d8e-8ae2-d19f3ff0db42 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.902 248514 DEBUG oslo_concurrency.lockutils [req-4cf9db59-8b04-46d3-83e5-c7cf5163771b req-9eb53be2-9135-4d8e-8ae2-d19f3ff0db42 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.903 248514 DEBUG nova.compute.manager [req-4cf9db59-8b04-46d3-83e5-c7cf5163771b req-9eb53be2-9135-4d8e-8ae2-d19f3ff0db42 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] No waiting events found dispatching network-vif-plugged-6b1400c2-c07a-450e-8698-6c2b60d0227e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.903 248514 WARNING nova.compute.manager [req-4cf9db59-8b04-46d3-83e5-c7cf5163771b req-9eb53be2-9135-4d8e-8ae2-d19f3ff0db42 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received unexpected event network-vif-plugged-6b1400c2-c07a-450e-8698-6c2b60d0227e for instance with vm_state active and task_state None.#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.949 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.990 248514 DEBUG nova.compute.manager [req-5e9c7ca5-10d9-4526-b343-2e8a226ed00b req-89a40027-47e2-4f82-a4d9-e9834932ba0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.991 248514 DEBUG oslo_concurrency.lockutils [req-5e9c7ca5-10d9-4526-b343-2e8a226ed00b req-89a40027-47e2-4f82-a4d9-e9834932ba0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.991 248514 DEBUG oslo_concurrency.lockutils [req-5e9c7ca5-10d9-4526-b343-2e8a226ed00b req-89a40027-47e2-4f82-a4d9-e9834932ba0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.991 248514 DEBUG oslo_concurrency.lockutils [req-5e9c7ca5-10d9-4526-b343-2e8a226ed00b req-89a40027-47e2-4f82-a4d9-e9834932ba0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.992 248514 DEBUG nova.compute.manager [req-5e9c7ca5-10d9-4526-b343-2e8a226ed00b req-89a40027-47e2-4f82-a4d9-e9834932ba0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:39 np0005558241 nova_compute[248510]: 2025-12-13 08:33:39.992 248514 WARNING nova.compute.manager [req-5e9c7ca5-10d9-4526-b343-2e8a226ed00b req-89a40027-47e2-4f82-a4d9-e9834932ba0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:33:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:33:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:33:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:33:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:33:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:33:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:33:40 np0005558241 nova_compute[248510]: 2025-12-13 08:33:40.434 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:33:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2093: 321 pgs: 321 active+clean; 295 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 210 op/s
Dec 13 03:33:41 np0005558241 nova_compute[248510]: 2025-12-13 08:33:41.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:33:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:43.404 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2094: 321 pgs: 321 active+clean; 295 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.9 MiB/s wr, 223 op/s
Dec 13 03:33:43 np0005558241 nova_compute[248510]: 2025-12-13 08:33:43.919 248514 DEBUG nova.compute.manager [req-6c998532-d5fb-4e93-8849-700d41487cb9 req-fd3220d5-f741-4ed5-87a6-faec567ebd7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received event network-changed-6b1400c2-c07a-450e-8698-6c2b60d0227e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:43 np0005558241 nova_compute[248510]: 2025-12-13 08:33:43.919 248514 DEBUG nova.compute.manager [req-6c998532-d5fb-4e93-8849-700d41487cb9 req-fd3220d5-f741-4ed5-87a6-faec567ebd7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Refreshing instance network info cache due to event network-changed-6b1400c2-c07a-450e-8698-6c2b60d0227e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:33:43 np0005558241 nova_compute[248510]: 2025-12-13 08:33:43.919 248514 DEBUG oslo_concurrency.lockutils [req-6c998532-d5fb-4e93-8849-700d41487cb9 req-fd3220d5-f741-4ed5-87a6-faec567ebd7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c98e670c-9bea-41c0-87ad-fcaba6d2be2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:33:43 np0005558241 nova_compute[248510]: 2025-12-13 08:33:43.919 248514 DEBUG oslo_concurrency.lockutils [req-6c998532-d5fb-4e93-8849-700d41487cb9 req-fd3220d5-f741-4ed5-87a6-faec567ebd7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c98e670c-9bea-41c0-87ad-fcaba6d2be2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:33:43 np0005558241 nova_compute[248510]: 2025-12-13 08:33:43.919 248514 DEBUG nova.network.neutron [req-6c998532-d5fb-4e93-8849-700d41487cb9 req-fd3220d5-f741-4ed5-87a6-faec567ebd7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Refreshing network info cache for port 6b1400c2-c07a-450e-8698-6c2b60d0227e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:33:44 np0005558241 nova_compute[248510]: 2025-12-13 08:33:44.912 248514 INFO nova.compute.manager [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Rebuilding instance#033[00m
Dec 13 03:33:44 np0005558241 nova_compute[248510]: 2025-12-13 08:33:44.954 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:45 np0005558241 nova_compute[248510]: 2025-12-13 08:33:45.221 248514 DEBUG nova.objects.instance [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:45 np0005558241 nova_compute[248510]: 2025-12-13 08:33:45.241 248514 DEBUG nova.compute.manager [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:45 np0005558241 nova_compute[248510]: 2025-12-13 08:33:45.297 248514 DEBUG nova.objects.instance [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'pci_requests' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:45 np0005558241 nova_compute[248510]: 2025-12-13 08:33:45.316 248514 DEBUG nova.objects.instance [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'pci_devices' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:45 np0005558241 nova_compute[248510]: 2025-12-13 08:33:45.328 248514 DEBUG nova.objects.instance [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'resources' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:45 np0005558241 nova_compute[248510]: 2025-12-13 08:33:45.339 248514 DEBUG nova.objects.instance [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'migration_context' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:45 np0005558241 nova_compute[248510]: 2025-12-13 08:33:45.352 248514 DEBUG nova.objects.instance [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec 13 03:33:45 np0005558241 nova_compute[248510]: 2025-12-13 08:33:45.357 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec 13 03:33:45 np0005558241 nova_compute[248510]: 2025-12-13 08:33:45.438 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2095: 321 pgs: 321 active+clean; 295 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.9 MiB/s wr, 257 op/s
Dec 13 03:33:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:33:46 np0005558241 nova_compute[248510]: 2025-12-13 08:33:46.688 248514 DEBUG nova.network.neutron [req-6c998532-d5fb-4e93-8849-700d41487cb9 req-fd3220d5-f741-4ed5-87a6-faec567ebd7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Updated VIF entry in instance network info cache for port 6b1400c2-c07a-450e-8698-6c2b60d0227e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:33:46 np0005558241 nova_compute[248510]: 2025-12-13 08:33:46.688 248514 DEBUG nova.network.neutron [req-6c998532-d5fb-4e93-8849-700d41487cb9 req-fd3220d5-f741-4ed5-87a6-faec567ebd7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Updating instance_info_cache with network_info: [{"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:33:46 np0005558241 nova_compute[248510]: 2025-12-13 08:33:46.730 248514 DEBUG oslo_concurrency.lockutils [req-6c998532-d5fb-4e93-8849-700d41487cb9 req-fd3220d5-f741-4ed5-87a6-faec567ebd7e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c98e670c-9bea-41c0-87ad-fcaba6d2be2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.216 248514 DEBUG oslo_concurrency.lockutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.217 248514 DEBUG oslo_concurrency.lockutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.218 248514 DEBUG oslo_concurrency.lockutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.218 248514 DEBUG oslo_concurrency.lockutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.219 248514 DEBUG oslo_concurrency.lockutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.222 248514 INFO nova.compute.manager [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Terminating instance#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.225 248514 DEBUG nova.compute.manager [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:33:47 np0005558241 kernel: tap6b1400c2-c0 (unregistering): left promiscuous mode
Dec 13 03:33:47 np0005558241 NetworkManager[50376]: <info>  [1765614827.2618] device (tap6b1400c2-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:33:47 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:47Z|00746|binding|INFO|Releasing lport 6b1400c2-c07a-450e-8698-6c2b60d0227e from this chassis (sb_readonly=0)
Dec 13 03:33:47 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:47Z|00747|binding|INFO|Setting lport 6b1400c2-c07a-450e-8698-6c2b60d0227e down in Southbound
Dec 13 03:33:47 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:47Z|00748|binding|INFO|Removing iface tap6b1400c2-c0 ovn-installed in OVS
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.329 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.333 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:af:77 10.100.0.6'], port_security=['fa:16:3e:ee:af:77 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c98e670c-9bea-41c0-87ad-fcaba6d2be2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6794829-4e4e-4196-8b70-d8e6b3d73dd5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6b1400c2-c07a-450e-8698-6c2b60d0227e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:33:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.335 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6b1400c2-c07a-450e-8698-6c2b60d0227e in datapath 409fc0bb-caf3-4b90-9e44-83ff383dc88f unbound from our chassis#033[00m
Dec 13 03:33:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.337 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 409fc0bb-caf3-4b90-9e44-83ff383dc88f#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.344 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.358 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aa028625-7a5a-40de-a226-0e58d0009840]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:47 np0005558241 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Dec 13 03:33:47 np0005558241 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d0000004b.scope: Consumed 11.415s CPU time.
Dec 13 03:33:47 np0005558241 systemd-machined[210538]: Machine qemu-91-instance-0000004b terminated.
Dec 13 03:33:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.392 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d507412f-f2b3-4ec9-8186-a02cf90c1884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.396 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9afd2b32-f3a8-4fc3-a61f-65bfadddc3e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.426 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a71c80b0-8ee2-4957-9c0c-1484f5906cfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.452 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c432448b-6817-47e6-bc27-c52b5efc629f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap409fc0bb-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:3b:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 195], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727398, 'reachable_time': 29799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322467, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.467 248514 INFO nova.virt.libvirt.driver [-] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Instance destroyed successfully.#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.468 248514 DEBUG nova.objects.instance [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'resources' on Instance uuid c98e670c-9bea-41c0-87ad-fcaba6d2be2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.477 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7f42b334-183e-490b-adc0-12ee1624f5bc]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727416, 'tstamp': 727416}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322477, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap409fc0bb-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727421, 'tstamp': 727421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322477, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.480 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap409fc0bb-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.481 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.483 248514 DEBUG nova.virt.libvirt.vif [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:33:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1353488107',display_name='tempest-ServerActionsTestOtherA-server-1353488107',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1353488107',id=75,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:33:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-9n9e56kl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:33:36Z,user_data=None,user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=c98e670c-9bea-41c0-87ad-fcaba6d2be2c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.484 248514 DEBUG nova.network.os_vif_util [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "address": "fa:16:3e:ee:af:77", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b1400c2-c0", "ovs_interfaceid": "6b1400c2-c07a-450e-8698-6c2b60d0227e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.484 248514 DEBUG nova.network.os_vif_util [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ee:af:77,bridge_name='br-int',has_traffic_filtering=True,id=6b1400c2-c07a-450e-8698-6c2b60d0227e,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b1400c2-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.485 248514 DEBUG os_vif [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:af:77,bridge_name='br-int',has_traffic_filtering=True,id=6b1400c2-c07a-450e-8698-6c2b60d0227e,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b1400c2-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.488 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b1400c2-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.493 248514 INFO os_vif [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:af:77,bridge_name='br-int',has_traffic_filtering=True,id=6b1400c2-c07a-450e-8698-6c2b60d0227e,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b1400c2-c0')#033[00m
Dec 13 03:33:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.499 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap409fc0bb-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.499 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:33:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.499 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap409fc0bb-c0, col_values=(('external_ids', {'iface-id': 'c8e8a31b-a5fe-4e2d-bc19-65995078988f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:47.499 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:33:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2096: 321 pgs: 321 active+clean; 295 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 245 op/s
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.820 248514 DEBUG nova.objects.instance [None req-dda55a40-86ef-4341-99c9-9af150cfd938 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.865 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614827.865392, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.866 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.893 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.899 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:33:47 np0005558241 nova_compute[248510]: 2025-12-13 08:33:47.931 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Dec 13 03:33:48 np0005558241 nova_compute[248510]: 2025-12-13 08:33:48.012 248514 INFO nova.virt.libvirt.driver [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Deleting instance files /var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c_del#033[00m
Dec 13 03:33:48 np0005558241 nova_compute[248510]: 2025-12-13 08:33:48.014 248514 INFO nova.virt.libvirt.driver [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Deletion of /var/lib/nova/instances/c98e670c-9bea-41c0-87ad-fcaba6d2be2c_del complete#033[00m
Dec 13 03:33:48 np0005558241 nova_compute[248510]: 2025-12-13 08:33:48.040 248514 DEBUG nova.compute.manager [req-12798290-313a-4564-83f4-a73ed90f68f7 req-243acd7b-f00d-4612-a8c8-f2250aa16f11 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received event network-vif-unplugged-6b1400c2-c07a-450e-8698-6c2b60d0227e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:48 np0005558241 nova_compute[248510]: 2025-12-13 08:33:48.041 248514 DEBUG oslo_concurrency.lockutils [req-12798290-313a-4564-83f4-a73ed90f68f7 req-243acd7b-f00d-4612-a8c8-f2250aa16f11 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:48 np0005558241 nova_compute[248510]: 2025-12-13 08:33:48.041 248514 DEBUG oslo_concurrency.lockutils [req-12798290-313a-4564-83f4-a73ed90f68f7 req-243acd7b-f00d-4612-a8c8-f2250aa16f11 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:48 np0005558241 nova_compute[248510]: 2025-12-13 08:33:48.041 248514 DEBUG oslo_concurrency.lockutils [req-12798290-313a-4564-83f4-a73ed90f68f7 req-243acd7b-f00d-4612-a8c8-f2250aa16f11 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:48 np0005558241 nova_compute[248510]: 2025-12-13 08:33:48.041 248514 DEBUG nova.compute.manager [req-12798290-313a-4564-83f4-a73ed90f68f7 req-243acd7b-f00d-4612-a8c8-f2250aa16f11 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] No waiting events found dispatching network-vif-unplugged-6b1400c2-c07a-450e-8698-6c2b60d0227e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:48 np0005558241 nova_compute[248510]: 2025-12-13 08:33:48.042 248514 DEBUG nova.compute.manager [req-12798290-313a-4564-83f4-a73ed90f68f7 req-243acd7b-f00d-4612-a8c8-f2250aa16f11 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received event network-vif-unplugged-6b1400c2-c07a-450e-8698-6c2b60d0227e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:33:48 np0005558241 nova_compute[248510]: 2025-12-13 08:33:48.066 248514 INFO nova.compute.manager [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:33:48 np0005558241 nova_compute[248510]: 2025-12-13 08:33:48.066 248514 DEBUG oslo.service.loopingcall [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:33:48 np0005558241 nova_compute[248510]: 2025-12-13 08:33:48.067 248514 DEBUG nova.compute.manager [-] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:33:48 np0005558241 nova_compute[248510]: 2025-12-13 08:33:48.067 248514 DEBUG nova.network.neutron [-] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:33:48 np0005558241 nova_compute[248510]: 2025-12-13 08:33:48.128 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:33:48 np0005558241 nova_compute[248510]: 2025-12-13 08:33:48.129 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:33:48 np0005558241 nova_compute[248510]: 2025-12-13 08:33:48.129 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:33:48 np0005558241 kernel: tapb5058a06-71 (unregistering): left promiscuous mode
Dec 13 03:33:48 np0005558241 NetworkManager[50376]: <info>  [1765614828.8532] device (tapb5058a06-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:33:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:48Z|00749|binding|INFO|Releasing lport b5058a06-7109-4ac0-96d8-7562e66bee25 from this chassis (sb_readonly=0)
Dec 13 03:33:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:48Z|00750|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 down in Southbound
Dec 13 03:33:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:48Z|00751|binding|INFO|Removing iface tapb5058a06-71 ovn-installed in OVS
Dec 13 03:33:48 np0005558241 nova_compute[248510]: 2025-12-13 08:33:48.860 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:48.868 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:33:48 np0005558241 nova_compute[248510]: 2025-12-13 08:33:48.864 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:48.870 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 unbound from our chassis#033[00m
Dec 13 03:33:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:48.872 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43ee8730-a174-4859-9c95-b3ad6dc68839, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:33:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:48.873 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f828d7-4b88-4557-9cb3-68f52f130c75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:48.879 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace which is not needed anymore#033[00m
Dec 13 03:33:48 np0005558241 nova_compute[248510]: 2025-12-13 08:33:48.884 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:48 np0005558241 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Dec 13 03:33:48 np0005558241 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d0000003c.scope: Consumed 12.492s CPU time.
Dec 13 03:33:48 np0005558241 systemd-machined[210538]: Machine qemu-92-instance-0000003c terminated.
Dec 13 03:33:49 np0005558241 nova_compute[248510]: 2025-12-13 08:33:49.017 248514 DEBUG nova.compute.manager [None req-dda55a40-86ef-4341-99c9-9af150cfd938 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:49 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322282]: [NOTICE]   (322288) : haproxy version is 2.8.14-c23fe91
Dec 13 03:33:49 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322282]: [NOTICE]   (322288) : path to executable is /usr/sbin/haproxy
Dec 13 03:33:49 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322282]: [WARNING]  (322288) : Exiting Master process...
Dec 13 03:33:49 np0005558241 systemd[1]: libpod-eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959.scope: Deactivated successfully.
Dec 13 03:33:49 np0005558241 conmon[322282]: conmon eab32040dc9843d1afa4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959.scope/container/memory.events
Dec 13 03:33:49 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322282]: [ALERT]    (322288) : Current worker (322291) exited with code 143 (Terminated)
Dec 13 03:33:49 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322282]: [WARNING]  (322288) : All workers exited. Exiting... (0)
Dec 13 03:33:49 np0005558241 podman[322525]: 2025-12-13 08:33:49.05581231 +0000 UTC m=+0.056481094 container died eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:33:49 np0005558241 nova_compute[248510]: 2025-12-13 08:33:49.164 248514 DEBUG nova.network.neutron [-] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:33:49 np0005558241 nova_compute[248510]: 2025-12-13 08:33:49.189 248514 INFO nova.compute.manager [-] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Took 1.12 seconds to deallocate network for instance.#033[00m
Dec 13 03:33:49 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959-userdata-shm.mount: Deactivated successfully.
Dec 13 03:33:49 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c9c4ee944e775e18453f7f96f287b0d219a31177c28ca489631fbbf96e995c67-merged.mount: Deactivated successfully.
Dec 13 03:33:49 np0005558241 nova_compute[248510]: 2025-12-13 08:33:49.246 248514 DEBUG oslo_concurrency.lockutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:49 np0005558241 nova_compute[248510]: 2025-12-13 08:33:49.247 248514 DEBUG oslo_concurrency.lockutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:49 np0005558241 podman[322525]: 2025-12-13 08:33:49.347933214 +0000 UTC m=+0.348601998 container cleanup eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 03:33:49 np0005558241 systemd[1]: libpod-conmon-eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959.scope: Deactivated successfully.
Dec 13 03:33:49 np0005558241 nova_compute[248510]: 2025-12-13 08:33:49.357 248514 DEBUG nova.compute.manager [req-32cb44e0-81f5-4930-982c-f5d51eec5d24 req-59a2b2b3-3ab6-44e5-8a52-949a4a0586ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:49 np0005558241 nova_compute[248510]: 2025-12-13 08:33:49.358 248514 DEBUG oslo_concurrency.lockutils [req-32cb44e0-81f5-4930-982c-f5d51eec5d24 req-59a2b2b3-3ab6-44e5-8a52-949a4a0586ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:49 np0005558241 nova_compute[248510]: 2025-12-13 08:33:49.358 248514 DEBUG oslo_concurrency.lockutils [req-32cb44e0-81f5-4930-982c-f5d51eec5d24 req-59a2b2b3-3ab6-44e5-8a52-949a4a0586ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:49 np0005558241 nova_compute[248510]: 2025-12-13 08:33:49.359 248514 DEBUG oslo_concurrency.lockutils [req-32cb44e0-81f5-4930-982c-f5d51eec5d24 req-59a2b2b3-3ab6-44e5-8a52-949a4a0586ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:49 np0005558241 nova_compute[248510]: 2025-12-13 08:33:49.359 248514 DEBUG nova.compute.manager [req-32cb44e0-81f5-4930-982c-f5d51eec5d24 req-59a2b2b3-3ab6-44e5-8a52-949a4a0586ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:49 np0005558241 nova_compute[248510]: 2025-12-13 08:33:49.360 248514 WARNING nova.compute.manager [req-32cb44e0-81f5-4930-982c-f5d51eec5d24 req-59a2b2b3-3ab6-44e5-8a52-949a4a0586ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state suspended and task_state None.#033[00m
Dec 13 03:33:49 np0005558241 nova_compute[248510]: 2025-12-13 08:33:49.367 248514 DEBUG oslo_concurrency.processutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:49 np0005558241 podman[322567]: 2025-12-13 08:33:49.422523197 +0000 UTC m=+0.050561167 container remove eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:33:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:49.432 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b4227b17-968c-40c7-8aef-551ec489642a]: (4, ('Sat Dec 13 08:33:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959)\neab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959\nSat Dec 13 08:33:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (eab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959)\neab32040dc9843d1afa440d76ccaf461308656496cc8e127c70cf2bf3306d959\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:49.434 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[57404be3-359d-40b6-bee0-ab785be362f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:49.434 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:49 np0005558241 kernel: tap43ee8730-a0: left promiscuous mode
Dec 13 03:33:49 np0005558241 nova_compute[248510]: 2025-12-13 08:33:49.440 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:49 np0005558241 nova_compute[248510]: 2025-12-13 08:33:49.458 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:49.461 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2222e729-1519-4b81-8bf9-3aafccd2e046]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:49.473 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[18cf5e86-3f75-451f-bf79-0ca7f4b15bbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:49.475 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6c019fa9-afca-4e30-ac29-9491b514f8ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:49.492 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3252d9a2-27a8-43d1-b01f-09d07a88d672]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739249, 'reachable_time': 26335, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322586, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:49 np0005558241 systemd[1]: run-netns-ovnmeta\x2d43ee8730\x2da174\x2d4859\x2d9c95\x2db3ad6dc68839.mount: Deactivated successfully.
Dec 13 03:33:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:49.498 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:33:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:49.498 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f7f8bd-988c-433b-964f-011bb5c05948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2097: 321 pgs: 321 active+clean; 269 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 1.8 MiB/s wr, 287 op/s
Dec 13 03:33:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:33:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3300441500' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:33:49 np0005558241 nova_compute[248510]: 2025-12-13 08:33:49.953 248514 DEBUG oslo_concurrency.processutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:49 np0005558241 nova_compute[248510]: 2025-12-13 08:33:49.959 248514 DEBUG nova.compute.provider_tree [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:33:49 np0005558241 nova_compute[248510]: 2025-12-13 08:33:49.980 248514 DEBUG nova.scheduler.client.report [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:33:50 np0005558241 nova_compute[248510]: 2025-12-13 08:33:50.010 248514 DEBUG oslo_concurrency.lockutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:50 np0005558241 nova_compute[248510]: 2025-12-13 08:33:50.035 248514 INFO nova.scheduler.client.report [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Deleted allocations for instance c98e670c-9bea-41c0-87ad-fcaba6d2be2c#033[00m
Dec 13 03:33:50 np0005558241 nova_compute[248510]: 2025-12-13 08:33:50.135 248514 DEBUG oslo_concurrency.lockutils [None req-528fb862-4cbf-47cb-83bc-3bea0aa3e9b8 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:50 np0005558241 nova_compute[248510]: 2025-12-13 08:33:50.144 248514 DEBUG nova.compute.manager [req-4ea93ca3-6f20-4c73-b7aa-98f8f9ed03c8 req-1a636bf9-cf12-4bbe-a335-6a609adb8c9d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received event network-vif-plugged-6b1400c2-c07a-450e-8698-6c2b60d0227e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:50 np0005558241 nova_compute[248510]: 2025-12-13 08:33:50.145 248514 DEBUG oslo_concurrency.lockutils [req-4ea93ca3-6f20-4c73-b7aa-98f8f9ed03c8 req-1a636bf9-cf12-4bbe-a335-6a609adb8c9d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:50 np0005558241 nova_compute[248510]: 2025-12-13 08:33:50.145 248514 DEBUG oslo_concurrency.lockutils [req-4ea93ca3-6f20-4c73-b7aa-98f8f9ed03c8 req-1a636bf9-cf12-4bbe-a335-6a609adb8c9d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:50 np0005558241 nova_compute[248510]: 2025-12-13 08:33:50.145 248514 DEBUG oslo_concurrency.lockutils [req-4ea93ca3-6f20-4c73-b7aa-98f8f9ed03c8 req-1a636bf9-cf12-4bbe-a335-6a609adb8c9d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c98e670c-9bea-41c0-87ad-fcaba6d2be2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:50 np0005558241 nova_compute[248510]: 2025-12-13 08:33:50.146 248514 DEBUG nova.compute.manager [req-4ea93ca3-6f20-4c73-b7aa-98f8f9ed03c8 req-1a636bf9-cf12-4bbe-a335-6a609adb8c9d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] No waiting events found dispatching network-vif-plugged-6b1400c2-c07a-450e-8698-6c2b60d0227e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:50 np0005558241 nova_compute[248510]: 2025-12-13 08:33:50.146 248514 WARNING nova.compute.manager [req-4ea93ca3-6f20-4c73-b7aa-98f8f9ed03c8 req-1a636bf9-cf12-4bbe-a335-6a609adb8c9d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received unexpected event network-vif-plugged-6b1400c2-c07a-450e-8698-6c2b60d0227e for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:33:50 np0005558241 nova_compute[248510]: 2025-12-13 08:33:50.146 248514 DEBUG nova.compute.manager [req-4ea93ca3-6f20-4c73-b7aa-98f8f9ed03c8 req-1a636bf9-cf12-4bbe-a335-6a609adb8c9d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Received event network-vif-deleted-6b1400c2-c07a-450e-8698-6c2b60d0227e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:50 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Dec 13 03:33:50 np0005558241 nova_compute[248510]: 2025-12-13 08:33:50.439 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.089 248514 INFO nova.compute.manager [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Resuming#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.090 248514 DEBUG nova.objects.instance [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'flavor' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.155 248514 DEBUG oslo_concurrency.lockutils [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.155 248514 DEBUG oslo_concurrency.lockutils [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquired lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.155 248514 DEBUG nova.network.neutron [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:33:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.349 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Updating instance_info_cache with network_info: [{"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.372 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-ce9adb21-8832-4d3e-867e-b0b49bdb6850" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.372 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.442 248514 DEBUG oslo_concurrency.lockutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.442 248514 DEBUG oslo_concurrency.lockutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.443 248514 DEBUG oslo_concurrency.lockutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.443 248514 DEBUG oslo_concurrency.lockutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.443 248514 DEBUG oslo_concurrency.lockutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.444 248514 INFO nova.compute.manager [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Terminating instance#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.446 248514 DEBUG nova.compute.manager [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:33:51 np0005558241 kernel: tapb2ee664d-ff (unregistering): left promiscuous mode
Dec 13 03:33:51 np0005558241 NetworkManager[50376]: <info>  [1765614831.4990] device (tapb2ee664d-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:33:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:51Z|00752|binding|INFO|Releasing lport b2ee664d-ff99-4665-a5cc-70bd7aeb1546 from this chassis (sb_readonly=0)
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.506 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:51Z|00753|binding|INFO|Setting lport b2ee664d-ff99-4665-a5cc-70bd7aeb1546 down in Southbound
Dec 13 03:33:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:51Z|00754|binding|INFO|Removing iface tapb2ee664d-ff ovn-installed in OVS
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.509 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:51 np0005558241 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000045.scope: Deactivated successfully.
Dec 13 03:33:51 np0005558241 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000045.scope: Consumed 18.401s CPU time.
Dec 13 03:33:51 np0005558241 systemd-machined[210538]: Machine qemu-80-instance-00000045 terminated.
Dec 13 03:33:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2098: 321 pgs: 321 active+clean; 249 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.2 MiB/s wr, 203 op/s
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.675 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.683 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.689 248514 INFO nova.virt.libvirt.driver [-] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Instance destroyed successfully.#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.690 248514 DEBUG nova.objects.instance [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lazy-loading 'resources' on Instance uuid ce9adb21-8832-4d3e-867e-b0b49bdb6850 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:51.703 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:3a:53 10.100.0.3'], port_security=['fa:16:3e:63:3a:53 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ce9adb21-8832-4d3e-867e-b0b49bdb6850', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4d2999518df4b9f8ccbabe38976dc3c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b4a5bf7b-dd16-4d92-81b7-546493ad4db6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b6794829-4e4e-4196-8b70-d8e6b3d73dd5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b2ee664d-ff99-4665-a5cc-70bd7aeb1546) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:33:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:51.705 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b2ee664d-ff99-4665-a5cc-70bd7aeb1546 in datapath 409fc0bb-caf3-4b90-9e44-83ff383dc88f unbound from our chassis#033[00m
Dec 13 03:33:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:51.707 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 409fc0bb-caf3-4b90-9e44-83ff383dc88f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:33:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:51.709 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[13ef923e-abfc-4bb2-8a95-fe6ba2bf1c7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:51.709 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f namespace which is not needed anymore#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.721 248514 DEBUG nova.virt.libvirt.vif [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:31:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-2099788276',display_name='tempest-ServerActionsTestOtherA-server-2099788276',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-2099788276',id=69,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIxJgtD1FEWUw7tJ8pibGATgtrZITyeOCdRqSR73HftGqNDavcdP1XHx0prQ71D2yOjUOO7ZJAEgnPXlpVAfW1QGvCbp1snKSBX1V/4lwFnsJaGPS7QewWPvSMs5UYFhVA==',key_name='tempest-keypair-180617026',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:31:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b4d2999518df4b9f8ccbabe38976dc3c',ramdisk_id='',reservation_id='r-b9qwtikj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1325599242',owner_user_name='tempest-ServerActionsTestOtherA-1325599242-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:31:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5b310bdebec646949fad4ea1821b4c3f',uuid=ce9adb21-8832-4d3e-867e-b0b49bdb6850,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.721 248514 DEBUG nova.network.os_vif_util [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converting VIF {"id": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "address": "fa:16:3e:63:3a:53", "network": {"id": "409fc0bb-caf3-4b90-9e44-83ff383dc88f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1820585267-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b4d2999518df4b9f8ccbabe38976dc3c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2ee664d-ff", "ovs_interfaceid": "b2ee664d-ff99-4665-a5cc-70bd7aeb1546", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.722 248514 DEBUG nova.network.os_vif_util [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:63:3a:53,bridge_name='br-int',has_traffic_filtering=True,id=b2ee664d-ff99-4665-a5cc-70bd7aeb1546,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2ee664d-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.724 248514 DEBUG os_vif [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:3a:53,bridge_name='br-int',has_traffic_filtering=True,id=b2ee664d-ff99-4665-a5cc-70bd7aeb1546,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2ee664d-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.726 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.727 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2ee664d-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.728 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.731 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.733 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.736 248514 INFO os_vif [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:3a:53,bridge_name='br-int',has_traffic_filtering=True,id=b2ee664d-ff99-4665-a5cc-70bd7aeb1546,network=Network(409fc0bb-caf3-4b90-9e44-83ff383dc88f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2ee664d-ff')#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.799 248514 DEBUG nova.compute.manager [req-ab615d12-0f53-48b0-b014-cdfff62139a1 req-418b5083-d697-45bc-83c6-c87f76935471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.799 248514 DEBUG oslo_concurrency.lockutils [req-ab615d12-0f53-48b0-b014-cdfff62139a1 req-418b5083-d697-45bc-83c6-c87f76935471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.800 248514 DEBUG oslo_concurrency.lockutils [req-ab615d12-0f53-48b0-b014-cdfff62139a1 req-418b5083-d697-45bc-83c6-c87f76935471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.800 248514 DEBUG oslo_concurrency.lockutils [req-ab615d12-0f53-48b0-b014-cdfff62139a1 req-418b5083-d697-45bc-83c6-c87f76935471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.800 248514 DEBUG nova.compute.manager [req-ab615d12-0f53-48b0-b014-cdfff62139a1 req-418b5083-d697-45bc-83c6-c87f76935471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:51 np0005558241 nova_compute[248510]: 2025-12-13 08:33:51.800 248514 WARNING nova.compute.manager [req-ab615d12-0f53-48b0-b014-cdfff62139a1 req-418b5083-d697-45bc-83c6-c87f76935471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state suspended and task_state resuming.#033[00m
Dec 13 03:33:51 np0005558241 neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f[314661]: [NOTICE]   (314684) : haproxy version is 2.8.14-c23fe91
Dec 13 03:33:51 np0005558241 neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f[314661]: [NOTICE]   (314684) : path to executable is /usr/sbin/haproxy
Dec 13 03:33:51 np0005558241 neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f[314661]: [WARNING]  (314684) : Exiting Master process...
Dec 13 03:33:51 np0005558241 neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f[314661]: [WARNING]  (314684) : Exiting Master process...
Dec 13 03:33:51 np0005558241 neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f[314661]: [ALERT]    (314684) : Current worker (314686) exited with code 143 (Terminated)
Dec 13 03:33:51 np0005558241 neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f[314661]: [WARNING]  (314684) : All workers exited. Exiting... (0)
Dec 13 03:33:51 np0005558241 systemd[1]: libpod-2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a.scope: Deactivated successfully.
Dec 13 03:33:51 np0005558241 conmon[314661]: conmon 2eb827e287b29525194b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a.scope/container/memory.events
Dec 13 03:33:51 np0005558241 podman[322663]: 2025-12-13 08:33:51.882520447 +0000 UTC m=+0.050494255 container died 2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:33:51 np0005558241 systemd[1]: var-lib-containers-storage-overlay-abe1480fb3e22837a053f76acd0fd060e8f26ee729f63461240b6103c38efbe6-merged.mount: Deactivated successfully.
Dec 13 03:33:51 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a-userdata-shm.mount: Deactivated successfully.
Dec 13 03:33:51 np0005558241 podman[322663]: 2025-12-13 08:33:51.93095432 +0000 UTC m=+0.098928108 container cleanup 2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Dec 13 03:33:51 np0005558241 systemd[1]: libpod-conmon-2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a.scope: Deactivated successfully.
Dec 13 03:33:52 np0005558241 nova_compute[248510]: 2025-12-13 08:33:52.036 248514 INFO nova.virt.libvirt.driver [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Deleting instance files /var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850_del#033[00m
Dec 13 03:33:52 np0005558241 nova_compute[248510]: 2025-12-13 08:33:52.038 248514 INFO nova.virt.libvirt.driver [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Deletion of /var/lib/nova/instances/ce9adb21-8832-4d3e-867e-b0b49bdb6850_del complete#033[00m
Dec 13 03:33:52 np0005558241 podman[322691]: 2025-12-13 08:33:52.042456929 +0000 UTC m=+0.074473030 container remove 2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 03:33:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:52.054 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[306b4d1e-97e2-4787-834e-b7d736127ccc]: (4, ('Sat Dec 13 08:33:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f (2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a)\n2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a\nSat Dec 13 08:33:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f (2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a)\n2eb827e287b29525194bebb61c01b324e2eb4e88ae8ae5dc58645dc28cdf938a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:52.056 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7468d828-9f13-4b59-8db1-ff096dd62ea6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:52.058 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap409fc0bb-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:52 np0005558241 nova_compute[248510]: 2025-12-13 08:33:52.061 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:52 np0005558241 kernel: tap409fc0bb-c0: left promiscuous mode
Dec 13 03:33:52 np0005558241 nova_compute[248510]: 2025-12-13 08:33:52.088 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:52.091 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3c4b13ce-ddee-4868-b1ac-4d4088033042]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:52.105 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[47519533-a880-417a-a442-f73b9a0a0470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:52.106 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[95cdd325-e0de-473b-8d1a-14ed6af56600]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:52 np0005558241 nova_compute[248510]: 2025-12-13 08:33:52.115 248514 INFO nova.compute.manager [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:33:52 np0005558241 nova_compute[248510]: 2025-12-13 08:33:52.117 248514 DEBUG oslo.service.loopingcall [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:33:52 np0005558241 nova_compute[248510]: 2025-12-13 08:33:52.119 248514 DEBUG nova.compute.manager [-] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:33:52 np0005558241 nova_compute[248510]: 2025-12-13 08:33:52.120 248514 DEBUG nova.network.neutron [-] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:33:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:52.130 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[62a28e36-5abf-4677-98a3-7f476ffe9ed9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727388, 'reachable_time': 24704, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322713, 'error': None, 'target': 'ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:52.132 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-409fc0bb-caf3-4b90-9e44-83ff383dc88f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:33:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:52.132 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b31f1dbf-e2ca-46a4-ab3a-b2778dc481e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:52 np0005558241 systemd[1]: run-netns-ovnmeta\x2d409fc0bb\x2dcaf3\x2d4b90\x2d9e44\x2d83ff383dc88f.mount: Deactivated successfully.
Dec 13 03:33:52 np0005558241 nova_compute[248510]: 2025-12-13 08:33:52.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:33:52 np0005558241 nova_compute[248510]: 2025-12-13 08:33:52.947 248514 DEBUG nova.network.neutron [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [{"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.213 248514 DEBUG oslo_concurrency.lockutils [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Releasing lock "refresh_cache-3ced27d6-a2a8-4ce3-a7e7-494270418542" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.221 248514 DEBUG nova.virt.libvirt.vif [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:33:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.221 248514 DEBUG nova.network.os_vif_util [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.222 248514 DEBUG nova.network.os_vif_util [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.223 248514 DEBUG os_vif [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.224 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.224 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.225 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.228 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.229 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5058a06-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.229 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5058a06-71, col_values=(('external_ids', {'iface-id': 'b5058a06-7109-4ac0-96d8-7562e66bee25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:d1:eb', 'vm-uuid': '3ced27d6-a2a8-4ce3-a7e7-494270418542'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.230 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.230 248514 INFO os_vif [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.256 248514 DEBUG nova.objects.instance [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'numa_topology' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:53 np0005558241 kernel: tapb5058a06-71: entered promiscuous mode
Dec 13 03:33:53 np0005558241 NetworkManager[50376]: <info>  [1765614833.4121] manager: (tapb5058a06-71): new Tun device (/org/freedesktop/NetworkManager/Devices/321)
Dec 13 03:33:53 np0005558241 systemd-udevd[322614]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.414 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:53Z|00755|binding|INFO|Claiming lport b5058a06-7109-4ac0-96d8-7562e66bee25 for this chassis.
Dec 13 03:33:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:53Z|00756|binding|INFO|b5058a06-7109-4ac0-96d8-7562e66bee25: Claiming fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.425 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.426 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 bound to our chassis#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.427 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43ee8730-a174-4859-9c95-b3ad6dc68839#033[00m
Dec 13 03:33:53 np0005558241 NetworkManager[50376]: <info>  [1765614833.4291] device (tapb5058a06-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:33:53 np0005558241 NetworkManager[50376]: <info>  [1765614833.4300] device (tapb5058a06-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.439 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[20902f1b-dde0-44dc-a9c2-a93bfa926fee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.440 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43ee8730-a1 in ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.441 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.442 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43ee8730-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.443 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5d3d80fd-941c-42ce-8e53-e4ed810cbb76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.443 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d9c8ac-cce1-4ef6-88b4-51b6c1ca6fa5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:53Z|00757|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 ovn-installed in OVS
Dec 13 03:33:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:53Z|00758|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 up in Southbound
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.446 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.448 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:53 np0005558241 systemd-machined[210538]: New machine qemu-94-instance-0000003c.
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.458 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[02a4d913-674d-4eb0-b45a-3837f9c42d47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:53 np0005558241 systemd[1]: Started Virtual Machine qemu-94-instance-0000003c.
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.484 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3866408b-3084-497e-a135-4f1e7b74f849]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.521 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0966b1a7-239f-4562-bf6a-1fc710c5096f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:53 np0005558241 NetworkManager[50376]: <info>  [1765614833.5277] manager: (tap43ee8730-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/322)
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.527 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[48964b7f-a792-4577-9536-038b88b216e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.558 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d29914ea-9b9a-472d-bf8c-42c2a222d739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.563 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6c434a-dd22-4add-b67f-5719f701a48b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:53 np0005558241 NetworkManager[50376]: <info>  [1765614833.5817] device (tap43ee8730-a0): carrier: link connected
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.585 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[12e9fe47-009b-497a-9ca9-c2ad239011a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.604 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1fdf4631-150b-433d-b45e-2fef4bc32856]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741079, 'reachable_time': 39670, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322758, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2099: 321 pgs: 321 active+clean; 228 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 402 KiB/s wr, 152 op/s
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.623 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f716781f-8c6a-4470-a900-54c8c092805a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:bbcd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 741079, 'tstamp': 741079}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322759, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.642 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[07393a51-16f2-4a8e-b9d2-9c07065d5f9b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43ee8730-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:bb:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741079, 'reachable_time': 39670, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322760, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.674 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b91cab-4f3f-43d3-81fd-80c79a4b1fad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.749 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[188be0a6-8adb-40ff-8904-cac2020227a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.750 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.751 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.751 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ee8730-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.754 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:53 np0005558241 NetworkManager[50376]: <info>  [1765614833.7550] manager: (tap43ee8730-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/323)
Dec 13 03:33:53 np0005558241 kernel: tap43ee8730-a0: entered promiscuous mode
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.757 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.759 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43ee8730-a0, col_values=(('external_ids', {'iface-id': '46196864-922e-451f-891b-cf9666992dd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:53Z|00759|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.762 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.791 248514 DEBUG nova.network.neutron [-] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.792 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.793 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.795 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.795 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8badeb85-de21-4f41-8c31-783b1fcd9f6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.795 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.795 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.796 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.796 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.796 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/43ee8730-a174-4859-9c95-b3ad6dc68839.pid.haproxy
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 43ee8730-a174-4859-9c95-b3ad6dc68839
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:33:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:53.799 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'env', 'PROCESS_TAG=haproxy-43ee8730-a174-4859-9c95-b3ad6dc68839', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43ee8730-a174-4859-9c95-b3ad6dc68839.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.831 248514 INFO nova.compute.manager [-] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Took 1.71 seconds to deallocate network for instance.#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.909 248514 DEBUG oslo_concurrency.lockutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:53 np0005558241 nova_compute[248510]: 2025-12-13 08:33:53.910 248514 DEBUG oslo_concurrency.lockutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.020 248514 DEBUG oslo_concurrency.processutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:54 np0005558241 podman[322812]: 2025-12-13 08:33:54.200301708 +0000 UTC m=+0.048052365 container create e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.202 248514 DEBUG nova.compute.manager [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received event network-vif-unplugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.203 248514 DEBUG oslo_concurrency.lockutils [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.203 248514 DEBUG oslo_concurrency.lockutils [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.204 248514 DEBUG oslo_concurrency.lockutils [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.204 248514 DEBUG nova.compute.manager [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] No waiting events found dispatching network-vif-unplugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.204 248514 WARNING nova.compute.manager [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received unexpected event network-vif-unplugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.204 248514 DEBUG nova.compute.manager [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received event network-vif-plugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.205 248514 DEBUG oslo_concurrency.lockutils [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.205 248514 DEBUG oslo_concurrency.lockutils [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.205 248514 DEBUG oslo_concurrency.lockutils [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.205 248514 DEBUG nova.compute.manager [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] No waiting events found dispatching network-vif-plugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.206 248514 WARNING nova.compute.manager [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received unexpected event network-vif-plugged-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.206 248514 DEBUG nova.compute.manager [req-2742a8a4-b096-4d76-8abd-cced5ed511e2 req-e472aae1-ef73-4b6d-afd7-84b7042e3875 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Received event network-vif-deleted-b2ee664d-ff99-4665-a5cc-70bd7aeb1546 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:54 np0005558241 systemd[1]: Started libpod-conmon-e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b.scope.
Dec 13 03:33:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:33:54 np0005558241 podman[322812]: 2025-12-13 08:33:54.17825588 +0000 UTC m=+0.026006567 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:33:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ce670cf208a55e7edf7d049fefbc3f51f1fb991fa118f368570ceaef6ceea3e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:33:54 np0005558241 podman[322812]: 2025-12-13 08:33:54.2917981 +0000 UTC m=+0.139548757 container init e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:33:54 np0005558241 podman[322812]: 2025-12-13 08:33:54.297938651 +0000 UTC m=+0.145689308 container start e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:33:54 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322844]: [NOTICE]   (322864) : New worker (322882) forked
Dec 13 03:33:54 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322844]: [NOTICE]   (322864) : Loading success.
Dec 13 03:33:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:33:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2021240571' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.349 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:33:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1644318088' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.625 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 3ced27d6-a2a8-4ce3-a7e7-494270418542 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.628 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614834.6253257, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.628 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Started (Lifecycle Event)#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.642 248514 DEBUG oslo_concurrency.processutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.647 248514 DEBUG nova.compute.provider_tree [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.678 248514 DEBUG nova.compute.manager [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.678 248514 DEBUG nova.objects.instance [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.793 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.796 248514 DEBUG nova.scheduler.client.report [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.805 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.806 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.808 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.814 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance running successfully.#033[00m
Dec 13 03:33:54 np0005558241 virtqemud[248808]: argument unsupported: QEMU guest agent is not configured
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.816 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.817 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.818 248514 DEBUG nova.virt.libvirt.guest [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.818 248514 DEBUG nova.compute.manager [None req-0d39d142-3cc0-4d47-b474-81e3e2498bd0 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.977 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.978 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3701MB free_disk=59.888090652413666GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:33:54 np0005558241 nova_compute[248510]: 2025-12-13 08:33:54.979 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:55 np0005558241 nova_compute[248510]: 2025-12-13 08:33:55.025 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Dec 13 03:33:55 np0005558241 nova_compute[248510]: 2025-12-13 08:33:55.025 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614834.632862, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:33:55 np0005558241 nova_compute[248510]: 2025-12-13 08:33:55.026 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:33:55 np0005558241 nova_compute[248510]: 2025-12-13 08:33:55.028 248514 DEBUG oslo_concurrency.lockutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:55 np0005558241 nova_compute[248510]: 2025-12-13 08:33:55.031 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:55 np0005558241 nova_compute[248510]: 2025-12-13 08:33:55.073 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:33:55 np0005558241 nova_compute[248510]: 2025-12-13 08:33:55.076 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:33:55 np0005558241 nova_compute[248510]: 2025-12-13 08:33:55.080 248514 INFO nova.scheduler.client.report [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Deleted allocations for instance ce9adb21-8832-4d3e-867e-b0b49bdb6850#033[00m
Dec 13 03:33:55 np0005558241 nova_compute[248510]: 2025-12-13 08:33:55.161 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 3ced27d6-a2a8-4ce3-a7e7-494270418542 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:33:55 np0005558241 nova_compute[248510]: 2025-12-13 08:33:55.161 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 84abd1d4-6b7b-459e-9783-fdc15d7e8bde actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:33:55 np0005558241 nova_compute[248510]: 2025-12-13 08:33:55.162 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:33:55 np0005558241 nova_compute[248510]: 2025-12-13 08:33:55.162 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:33:55 np0005558241 nova_compute[248510]: 2025-12-13 08:33:55.189 248514 DEBUG oslo_concurrency.lockutils [None req-9af62070-572b-4e11-8267-8aac35f072a7 5b310bdebec646949fad4ea1821b4c3f b4d2999518df4b9f8ccbabe38976dc3c - - default default] Lock "ce9adb21-8832-4d3e-867e-b0b49bdb6850" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:55 np0005558241 nova_compute[248510]: 2025-12-13 08:33:55.268 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:55.414 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:55.415 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:55.416 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:55 np0005558241 nova_compute[248510]: 2025-12-13 08:33:55.433 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec 13 03:33:55 np0005558241 nova_compute[248510]: 2025-12-13 08:33:55.442 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:55Z|00760|binding|INFO|Releasing lport 46196864-922e-451f-891b-cf9666992dd4 from this chassis (sb_readonly=0)
Dec 13 03:33:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2100: 321 pgs: 321 active+clean; 202 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 204 op/s
Dec 13 03:33:55 np0005558241 nova_compute[248510]: 2025-12-13 08:33:55.628 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:33:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2262848978' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:33:55 np0005558241 nova_compute[248510]: 2025-12-13 08:33:55.884 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:55 np0005558241 nova_compute[248510]: 2025-12-13 08:33:55.892 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:33:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:55Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1f:d1:eb 10.100.0.5
Dec 13 03:33:55 np0005558241 podman[322930]: 2025-12-13 08:33:55.970464337 +0000 UTC m=+0.053943601 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 13 03:33:55 np0005558241 podman[322929]: 2025-12-13 08:33:55.986091355 +0000 UTC m=+0.074815839 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:33:56 np0005558241 podman[322928]: 2025-12-13 08:33:56.015320401 +0000 UTC m=+0.104656620 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:33:56 np0005558241 nova_compute[248510]: 2025-12-13 08:33:56.086 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:33:56 np0005558241 nova_compute[248510]: 2025-12-13 08:33:56.122 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:33:56 np0005558241 nova_compute[248510]: 2025-12-13 08:33:56.122 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:33:56 np0005558241 nova_compute[248510]: 2025-12-13 08:33:56.729 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2101: 321 pgs: 321 active+clean; 202 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 771 KiB/s rd, 2.1 MiB/s wr, 156 op/s
Dec 13 03:33:57 np0005558241 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Dec 13 03:33:57 np0005558241 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004c.scope: Consumed 12.701s CPU time.
Dec 13 03:33:57 np0005558241 systemd-machined[210538]: Machine qemu-93-instance-0000004c terminated.
Dec 13 03:33:57 np0005558241 nova_compute[248510]: 2025-12-13 08:33:57.779 248514 DEBUG nova.compute.manager [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:57 np0005558241 nova_compute[248510]: 2025-12-13 08:33:57.780 248514 DEBUG oslo_concurrency.lockutils [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:57 np0005558241 nova_compute[248510]: 2025-12-13 08:33:57.781 248514 DEBUG oslo_concurrency.lockutils [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:57 np0005558241 nova_compute[248510]: 2025-12-13 08:33:57.781 248514 DEBUG oslo_concurrency.lockutils [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:57 np0005558241 nova_compute[248510]: 2025-12-13 08:33:57.782 248514 DEBUG nova.compute.manager [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:57 np0005558241 nova_compute[248510]: 2025-12-13 08:33:57.782 248514 WARNING nova.compute.manager [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:33:57 np0005558241 nova_compute[248510]: 2025-12-13 08:33:57.783 248514 DEBUG nova.compute.manager [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:57 np0005558241 nova_compute[248510]: 2025-12-13 08:33:57.783 248514 DEBUG oslo_concurrency.lockutils [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:57 np0005558241 nova_compute[248510]: 2025-12-13 08:33:57.784 248514 DEBUG oslo_concurrency.lockutils [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:57 np0005558241 nova_compute[248510]: 2025-12-13 08:33:57.784 248514 DEBUG oslo_concurrency.lockutils [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:57 np0005558241 nova_compute[248510]: 2025-12-13 08:33:57.785 248514 DEBUG nova.compute.manager [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:57 np0005558241 nova_compute[248510]: 2025-12-13 08:33:57.785 248514 WARNING nova.compute.manager [req-15bbd14d-fc6c-4c84-857d-dfc8942f524d req-2066eed3-a58e-4ae1-8eba-ce4d56c078d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.281 248514 DEBUG oslo_concurrency.lockutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.282 248514 DEBUG oslo_concurrency.lockutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.282 248514 DEBUG oslo_concurrency.lockutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.283 248514 DEBUG oslo_concurrency.lockutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.283 248514 DEBUG oslo_concurrency.lockutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.285 248514 INFO nova.compute.manager [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Terminating instance#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.286 248514 DEBUG nova.compute.manager [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:33:58 np0005558241 kernel: tapb5058a06-71 (unregistering): left promiscuous mode
Dec 13 03:33:58 np0005558241 NetworkManager[50376]: <info>  [1765614838.3251] device (tapb5058a06-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.332 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:58Z|00761|binding|INFO|Releasing lport b5058a06-7109-4ac0-96d8-7562e66bee25 from this chassis (sb_readonly=0)
Dec 13 03:33:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:58Z|00762|binding|INFO|Setting lport b5058a06-7109-4ac0-96d8-7562e66bee25 down in Southbound
Dec 13 03:33:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:33:58Z|00763|binding|INFO|Removing iface tapb5058a06-71 ovn-installed in OVS
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.334 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.344 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:d1:eb 10.100.0.5'], port_security=['fa:16:3e:1f:d1:eb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3ced27d6-a2a8-4ce3-a7e7-494270418542', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43ee8730-a174-4859-9c95-b3ad6dc68839', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6718df841f0471ba710516400f126fa', 'neutron:revision_number': '14', 'neutron:security_group_ids': 'f93bca94-72c5-44be-ad3b-b7af451eaa1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b55a0b9c-b9df-43a6-98bb-19309eedcef6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b5058a06-7109-4ac0-96d8-7562e66bee25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:33:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.345 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b5058a06-7109-4ac0-96d8-7562e66bee25 in datapath 43ee8730-a174-4859-9c95-b3ad6dc68839 unbound from our chassis#033[00m
Dec 13 03:33:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.347 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43ee8730-a174-4859-9c95-b3ad6dc68839, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:33:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.348 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0d806e59-e7ad-4799-b918-a521fb8b44eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.350 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 namespace which is not needed anymore#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.354 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:58 np0005558241 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Dec 13 03:33:58 np0005558241 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d0000003c.scope: Consumed 1.764s CPU time.
Dec 13 03:33:58 np0005558241 systemd-machined[210538]: Machine qemu-94-instance-0000003c terminated.
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.451 248514 INFO nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance shutdown successfully after 13 seconds.#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.458 248514 INFO nova.virt.libvirt.driver [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance destroyed successfully.#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.464 248514 INFO nova.virt.libvirt.driver [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance destroyed successfully.#033[00m
Dec 13 03:33:58 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322844]: [NOTICE]   (322864) : haproxy version is 2.8.14-c23fe91
Dec 13 03:33:58 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322844]: [NOTICE]   (322864) : path to executable is /usr/sbin/haproxy
Dec 13 03:33:58 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322844]: [ALERT]    (322864) : Current worker (322882) exited with code 143 (Terminated)
Dec 13 03:33:58 np0005558241 neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839[322844]: [WARNING]  (322864) : All workers exited. Exiting... (0)
Dec 13 03:33:58 np0005558241 systemd[1]: libpod-e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b.scope: Deactivated successfully.
Dec 13 03:33:58 np0005558241 podman[323012]: 2025-12-13 08:33:58.500485977 +0000 UTC m=+0.058084874 container died e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:33:58 np0005558241 NetworkManager[50376]: <info>  [1765614838.5058] manager: (tapb5058a06-71): new Tun device (/org/freedesktop/NetworkManager/Devices/324)
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.509 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.517 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.520 248514 INFO nova.virt.libvirt.driver [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Instance destroyed successfully.#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.521 248514 DEBUG nova.objects.instance [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lazy-loading 'resources' on Instance uuid 3ced27d6-a2a8-4ce3-a7e7-494270418542 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:58 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7ce670cf208a55e7edf7d049fefbc3f51f1fb991fa118f368570ceaef6ceea3e-merged.mount: Deactivated successfully.
Dec 13 03:33:58 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b-userdata-shm.mount: Deactivated successfully.
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.538 248514 DEBUG nova.virt.libvirt.vif [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:29:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-286397061',display_name='tempest-ServerActionsTestJSON-server-286397061',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-286397061',id=60,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbY9xhJ+DAJz8BC5zDQLO5zmfn+VHuSL4Hz1LNu//ohevuRHYfQJCjWCw3wit/s6HxgA4NHVoJ2SkaNSX4Fg3CiP//dAPOziG5bhQEMnSjapVxiv9+Y14JhJuF4JJy7NQ==',key_name='tempest-keypair-1106266935',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:29:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c6718df841f0471ba710516400f126fa',ramdisk_id='',reservation_id='r-3b97yk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-473360614',owner_user_name='tempest-ServerActionsTestJSON-473360614-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:33:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4d19f7d5ece8482dab03e4bc02fdf410',uuid=3ced27d6-a2a8-4ce3-a7e7-494270418542,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.539 248514 DEBUG nova.network.os_vif_util [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converting VIF {"id": "b5058a06-7109-4ac0-96d8-7562e66bee25", "address": "fa:16:3e:1f:d1:eb", "network": {"id": "43ee8730-a174-4859-9c95-b3ad6dc68839", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1767680052-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6718df841f0471ba710516400f126fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5058a06-71", "ovs_interfaceid": "b5058a06-7109-4ac0-96d8-7562e66bee25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:33:58 np0005558241 podman[323012]: 2025-12-13 08:33:58.541343312 +0000 UTC m=+0.098942209 container cleanup e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.541 248514 DEBUG nova.network.os_vif_util [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.542 248514 DEBUG os_vif [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.544 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.545 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5058a06-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.547 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:58 np0005558241 systemd[1]: libpod-conmon-e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b.scope: Deactivated successfully.
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.551 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.555 248514 INFO os_vif [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:d1:eb,bridge_name='br-int',has_traffic_filtering=True,id=b5058a06-7109-4ac0-96d8-7562e66bee25,network=Network(43ee8730-a174-4859-9c95-b3ad6dc68839),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5058a06-71')#033[00m
Dec 13 03:33:58 np0005558241 podman[323066]: 2025-12-13 08:33:58.621213295 +0000 UTC m=+0.052986707 container remove e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 03:33:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.628 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ca795ba8-a6bf-4cd7-98c1-10c227cbb9a0]: (4, ('Sat Dec 13 08:33:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b)\ne24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b\nSat Dec 13 08:33:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 (e24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b)\ne24ff13dd6aa6cebc9e390608831cc1d17d6ca589e8767d776fcaef2182af69b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.631 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[163274d0-2eb8-4fc3-919b-fdcf431a099f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.632 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ee8730-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.634 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:58 np0005558241 kernel: tap43ee8730-a0: left promiscuous mode
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.650 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:33:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.653 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[96551753-cbcf-489f-a146-aaad7c2fdd26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.672 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc5fa5e-4b88-4bb1-8ef4-0d4e30f2a556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.673 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a77c5daf-7b8f-41d9-a15f-eb2afe60470b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.688 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[131f9f46-be06-4180-8725-d47f5be1c59a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741073, 'reachable_time': 15043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323100, 'error': None, 'target': 'ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:58 np0005558241 systemd[1]: run-netns-ovnmeta\x2d43ee8730\x2da174\x2d4859\x2d9c95\x2db3ad6dc68839.mount: Deactivated successfully.
Dec 13 03:33:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.692 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43ee8730-a174-4859-9c95-b3ad6dc68839 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:33:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:33:58.692 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[ad4f7831-941b-4c2d-8674-5d93ad3b6de3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.769 248514 INFO nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Deleting instance files /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde_del#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.770 248514 INFO nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Deletion of /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde_del complete#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.805 248514 INFO nova.virt.libvirt.driver [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Deleting instance files /var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542_del#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.806 248514 INFO nova.virt.libvirt.driver [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Deletion of /var/lib/nova/instances/3ced27d6-a2a8-4ce3-a7e7-494270418542_del complete#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.903 248514 INFO nova.compute.manager [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Took 0.62 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.905 248514 DEBUG oslo.service.loopingcall [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.905 248514 DEBUG nova.compute.manager [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:33:58 np0005558241 nova_compute[248510]: 2025-12-13 08:33:58.905 248514 DEBUG nova.network.neutron [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.007 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.008 248514 INFO nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Creating image(s)#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.033 248514 DEBUG nova.storage.rbd_utils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.061 248514 DEBUG nova.storage.rbd_utils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.088 248514 DEBUG nova.storage.rbd_utils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.092 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.135 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.136 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.136 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.177 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.179 248514 DEBUG oslo_concurrency.lockutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.180 248514 DEBUG oslo_concurrency.lockutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.180 248514 DEBUG oslo_concurrency.lockutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.199 248514 DEBUG nova.storage.rbd_utils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.202 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.531 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.606 248514 DEBUG nova.storage.rbd_utils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] resizing rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:33:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2102: 321 pgs: 321 active+clean; 153 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 910 KiB/s rd, 2.1 MiB/s wr, 187 op/s
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.713 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.713 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Ensure instance console log exists: /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.714 248514 DEBUG oslo_concurrency.lockutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.714 248514 DEBUG oslo_concurrency.lockutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.714 248514 DEBUG oslo_concurrency.lockutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.716 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.722 248514 WARNING nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.735 248514 DEBUG nova.virt.libvirt.host [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.735 248514 DEBUG nova.virt.libvirt.host [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.739 248514 DEBUG nova.virt.libvirt.host [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.740 248514 DEBUG nova.virt.libvirt.host [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.740 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.740 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.741 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.741 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.742 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.742 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.742 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.742 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.743 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.743 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.743 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.743 248514 DEBUG nova.virt.hardware [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.744 248514 DEBUG nova.objects.instance [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.775 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.933 248514 DEBUG nova.compute.manager [req-7b92f5c7-4f8d-4319-9a15-615e3abdbbcd req-9066bf92-3d63-42e7-b4dd-a52b677fbb7c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.934 248514 DEBUG oslo_concurrency.lockutils [req-7b92f5c7-4f8d-4319-9a15-615e3abdbbcd req-9066bf92-3d63-42e7-b4dd-a52b677fbb7c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.934 248514 DEBUG oslo_concurrency.lockutils [req-7b92f5c7-4f8d-4319-9a15-615e3abdbbcd req-9066bf92-3d63-42e7-b4dd-a52b677fbb7c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.935 248514 DEBUG oslo_concurrency.lockutils [req-7b92f5c7-4f8d-4319-9a15-615e3abdbbcd req-9066bf92-3d63-42e7-b4dd-a52b677fbb7c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.935 248514 DEBUG nova.compute.manager [req-7b92f5c7-4f8d-4319-9a15-615e3abdbbcd req-9066bf92-3d63-42e7-b4dd-a52b677fbb7c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:33:59 np0005558241 nova_compute[248510]: 2025-12-13 08:33:59.935 248514 DEBUG nova.compute.manager [req-7b92f5c7-4f8d-4319-9a15-615e3abdbbcd req-9066bf92-3d63-42e7-b4dd-a52b677fbb7c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-unplugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:34:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:34:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4236151226' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:34:00 np0005558241 nova_compute[248510]: 2025-12-13 08:34:00.360 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:34:00 np0005558241 nova_compute[248510]: 2025-12-13 08:34:00.391 248514 DEBUG nova.storage.rbd_utils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:34:00 np0005558241 nova_compute[248510]: 2025-12-13 08:34:00.395 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:34:00 np0005558241 nova_compute[248510]: 2025-12-13 08:34:00.444 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:34:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2445963606' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:34:00 np0005558241 nova_compute[248510]: 2025-12-13 08:34:00.963 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:34:00 np0005558241 nova_compute[248510]: 2025-12-13 08:34:00.968 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:34:00 np0005558241 nova_compute[248510]:  <uuid>84abd1d4-6b7b-459e-9783-fdc15d7e8bde</uuid>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:  <name>instance-0000004c</name>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerShowV254Test-server-116887759</nova:name>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:33:59</nova:creationTime>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:34:00 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:        <nova:user uuid="dfd2c0543e264c50b5b818f8b1bef249">tempest-ServerShowV254Test-1640329662-project-member</nova:user>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:        <nova:project uuid="94f9c66cba1c4ab683e5ee108b067558">tempest-ServerShowV254Test-1640329662</nova:project>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="c10f1898-20b3-4bc9-8a36-2ee01b39c9ce"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <entry name="serial">84abd1d4-6b7b-459e-9783-fdc15d7e8bde</entry>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <entry name="uuid">84abd1d4-6b7b-459e-9783-fdc15d7e8bde</entry>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk">
Dec 13 03:34:00 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:34:00 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config">
Dec 13 03:34:00 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:34:00 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/console.log" append="off"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:34:00 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:34:00 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:34:00 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:34:00 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:34:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:34:01 np0005558241 nova_compute[248510]: 2025-12-13 08:34:01.326 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:34:01 np0005558241 nova_compute[248510]: 2025-12-13 08:34:01.327 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:34:01 np0005558241 nova_compute[248510]: 2025-12-13 08:34:01.328 248514 INFO nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Using config drive#033[00m
Dec 13 03:34:01 np0005558241 nova_compute[248510]: 2025-12-13 08:34:01.362 248514 DEBUG nova.storage.rbd_utils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:34:01 np0005558241 nova_compute[248510]: 2025-12-13 08:34:01.385 248514 DEBUG nova.objects.instance [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:34:01 np0005558241 nova_compute[248510]: 2025-12-13 08:34:01.423 248514 DEBUG nova.network.neutron [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:34:01 np0005558241 nova_compute[248510]: 2025-12-13 08:34:01.450 248514 INFO nova.compute.manager [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Took 2.54 seconds to deallocate network for instance.#033[00m
Dec 13 03:34:01 np0005558241 nova_compute[248510]: 2025-12-13 08:34:01.512 248514 DEBUG oslo_concurrency.lockutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:34:01 np0005558241 nova_compute[248510]: 2025-12-13 08:34:01.513 248514 DEBUG oslo_concurrency.lockutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:34:01 np0005558241 nova_compute[248510]: 2025-12-13 08:34:01.612 248514 DEBUG oslo_concurrency.processutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:34:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2103: 321 pgs: 321 active+clean; 96 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 619 KiB/s rd, 2.4 MiB/s wr, 193 op/s
Dec 13 03:34:01 np0005558241 nova_compute[248510]: 2025-12-13 08:34:01.766 248514 INFO nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Creating config drive at /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config
Dec 13 03:34:01 np0005558241 nova_compute[248510]: 2025-12-13 08:34:01.774 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9robjiyq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:34:01 np0005558241 nova_compute[248510]: 2025-12-13 08:34:01.811 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:34:01 np0005558241 nova_compute[248510]: 2025-12-13 08:34:01.919 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9robjiyq" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.061 248514 DEBUG nova.storage.rbd_utils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] rbd image 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.068 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.119 248514 DEBUG nova.compute.manager [req-b308ebd6-aaf5-4aab-8c7a-fcda26499f50 req-9c816abf-4edb-42d0-ad15-6943de9297b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.119 248514 DEBUG oslo_concurrency.lockutils [req-b308ebd6-aaf5-4aab-8c7a-fcda26499f50 req-9c816abf-4edb-42d0-ad15-6943de9297b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.120 248514 DEBUG oslo_concurrency.lockutils [req-b308ebd6-aaf5-4aab-8c7a-fcda26499f50 req-9c816abf-4edb-42d0-ad15-6943de9297b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.120 248514 DEBUG oslo_concurrency.lockutils [req-b308ebd6-aaf5-4aab-8c7a-fcda26499f50 req-9c816abf-4edb-42d0-ad15-6943de9297b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.120 248514 DEBUG nova.compute.manager [req-b308ebd6-aaf5-4aab-8c7a-fcda26499f50 req-9c816abf-4edb-42d0-ad15-6943de9297b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] No waiting events found dispatching network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.120 248514 WARNING nova.compute.manager [req-b308ebd6-aaf5-4aab-8c7a-fcda26499f50 req-9c816abf-4edb-42d0-ad15-6943de9297b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received unexpected event network-vif-plugged-b5058a06-7109-4ac0-96d8-7562e66bee25 for instance with vm_state deleted and task_state None.
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.120 248514 DEBUG nova.compute.manager [req-b308ebd6-aaf5-4aab-8c7a-fcda26499f50 req-9c816abf-4edb-42d0-ad15-6943de9297b9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Received event network-vif-deleted-b5058a06-7109-4ac0-96d8-7562e66bee25 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:34:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:34:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2918785895' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.216 248514 DEBUG oslo_concurrency.processutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.225 248514 DEBUG nova.compute.provider_tree [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.245 248514 DEBUG nova.scheduler.client.report [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.270 248514 DEBUG oslo_concurrency.lockutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.320 248514 INFO nova.scheduler.client.report [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Deleted allocations for instance 3ced27d6-a2a8-4ce3-a7e7-494270418542
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.410 248514 DEBUG oslo_concurrency.lockutils [None req-1f54b52c-0f26-4aa0-b869-454407a65391 4d19f7d5ece8482dab03e4bc02fdf410 c6718df841f0471ba710516400f126fa - - default default] Lock "3ced27d6-a2a8-4ce3-a7e7-494270418542" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.463 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614827.4630418, c98e670c-9bea-41c0-87ad-fcaba6d2be2c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.464 248514 INFO nova.compute.manager [-] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] VM Stopped (Lifecycle Event)
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.497 248514 DEBUG nova.compute.manager [None req-8b03e052-71b2-4f49-83fe-c6e80449fdbc - - - - - -] [instance: c98e670c-9bea-41c0-87ad-fcaba6d2be2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.524 248514 DEBUG oslo_concurrency.processutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config 84abd1d4-6b7b-459e-9783-fdc15d7e8bde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:34:02 np0005558241 nova_compute[248510]: 2025-12-13 08:34:02.524 248514 INFO nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Deleting local config drive /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde/disk.config because it was imported into RBD.
Dec 13 03:34:02 np0005558241 systemd-machined[210538]: New machine qemu-95-instance-0000004c.
Dec 13 03:34:02 np0005558241 systemd[1]: Started Virtual Machine qemu-95-instance-0000004c.
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.548 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.594 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 84abd1d4-6b7b-459e-9783-fdc15d7e8bde due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.594 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614843.5941281, 84abd1d4-6b7b-459e-9783-fdc15d7e8bde => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.595 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] VM Resumed (Lifecycle Event)
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.597 248514 DEBUG nova.compute.manager [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.598 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.602 248514 INFO nova.virt.libvirt.driver [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance spawned successfully.
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.602 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 03:34:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2104: 321 pgs: 321 active+clean; 69 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 587 KiB/s rd, 2.8 MiB/s wr, 218 op/s
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.625 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.630 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.631 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.631 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.632 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.632 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.632 248514 DEBUG nova.virt.libvirt.driver [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.637 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.667 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.668 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614843.5968723, 84abd1d4-6b7b-459e-9783-fdc15d7e8bde => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.668 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] VM Started (Lifecycle Event)
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.717 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.722 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.731 248514 DEBUG nova.compute.manager [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.764 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.820 248514 DEBUG oslo_concurrency.lockutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.820 248514 DEBUG oslo_concurrency.lockutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.821 248514 DEBUG nova.objects.instance [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 03:34:03 np0005558241 nova_compute[248510]: 2025-12-13 08:34:03.915 248514 DEBUG oslo_concurrency.lockutils [None req-fbcd7450-b05f-4f0f-9938-bd0caad60a8f dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:34:05 np0005558241 nova_compute[248510]: 2025-12-13 08:34:05.099 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:34:05 np0005558241 nova_compute[248510]: 2025-12-13 08:34:05.247 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:34:05 np0005558241 nova_compute[248510]: 2025-12-13 08:34:05.365 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "84abd1d4-6b7b-459e-9783-fdc15d7e8bde" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:34:05 np0005558241 nova_compute[248510]: 2025-12-13 08:34:05.366 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "84abd1d4-6b7b-459e-9783-fdc15d7e8bde" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:34:05 np0005558241 nova_compute[248510]: 2025-12-13 08:34:05.366 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "84abd1d4-6b7b-459e-9783-fdc15d7e8bde-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:34:05 np0005558241 nova_compute[248510]: 2025-12-13 08:34:05.366 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "84abd1d4-6b7b-459e-9783-fdc15d7e8bde-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:34:05 np0005558241 nova_compute[248510]: 2025-12-13 08:34:05.367 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "84abd1d4-6b7b-459e-9783-fdc15d7e8bde-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:34:05 np0005558241 nova_compute[248510]: 2025-12-13 08:34:05.368 248514 INFO nova.compute.manager [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Terminating instance
Dec 13 03:34:05 np0005558241 nova_compute[248510]: 2025-12-13 08:34:05.368 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "refresh_cache-84abd1d4-6b7b-459e-9783-fdc15d7e8bde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:34:05 np0005558241 nova_compute[248510]: 2025-12-13 08:34:05.369 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquired lock "refresh_cache-84abd1d4-6b7b-459e-9783-fdc15d7e8bde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:34:05 np0005558241 nova_compute[248510]: 2025-12-13 08:34:05.369 248514 DEBUG nova.network.neutron [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:34:05 np0005558241 nova_compute[248510]: 2025-12-13 08:34:05.445 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:34:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2105: 321 pgs: 321 active+clean; 88 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 852 KiB/s rd, 3.6 MiB/s wr, 239 op/s
Dec 13 03:34:05 np0005558241 nova_compute[248510]: 2025-12-13 08:34:05.677 248514 DEBUG nova.network.neutron [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:34:06 np0005558241 nova_compute[248510]: 2025-12-13 08:34:06.082 248514 DEBUG nova.network.neutron [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:34:06 np0005558241 nova_compute[248510]: 2025-12-13 08:34:06.115 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Releasing lock "refresh_cache-84abd1d4-6b7b-459e-9783-fdc15d7e8bde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:34:06 np0005558241 nova_compute[248510]: 2025-12-13 08:34:06.115 248514 DEBUG nova.compute.manager [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 03:34:06 np0005558241 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Dec 13 03:34:06 np0005558241 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d0000004c.scope: Consumed 3.420s CPU time.
Dec 13 03:34:06 np0005558241 systemd-machined[210538]: Machine qemu-95-instance-0000004c terminated.
Dec 13 03:34:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:34:06 np0005558241 nova_compute[248510]: 2025-12-13 08:34:06.337 248514 INFO nova.virt.libvirt.driver [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance destroyed successfully.
Dec 13 03:34:06 np0005558241 nova_compute[248510]: 2025-12-13 08:34:06.338 248514 DEBUG nova.objects.instance [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lazy-loading 'resources' on Instance uuid 84abd1d4-6b7b-459e-9783-fdc15d7e8bde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:34:06 np0005558241 nova_compute[248510]: 2025-12-13 08:34:06.687 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614831.6862476, ce9adb21-8832-4d3e-867e-b0b49bdb6850 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:34:06 np0005558241 nova_compute[248510]: 2025-12-13 08:34:06.689 248514 INFO nova.compute.manager [-] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] VM Stopped (Lifecycle Event)
Dec 13 03:34:06 np0005558241 nova_compute[248510]: 2025-12-13 08:34:06.714 248514 DEBUG nova.compute.manager [None req-0250c1e3-382d-49a3-8da1-aa750c5a424f - - - - - -] [instance: ce9adb21-8832-4d3e-867e-b0b49bdb6850] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:34:06 np0005558241 nova_compute[248510]: 2025-12-13 08:34:06.885 248514 INFO nova.virt.libvirt.driver [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Deleting instance files /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde_del
Dec 13 03:34:06 np0005558241 nova_compute[248510]: 2025-12-13 08:34:06.886 248514 INFO nova.virt.libvirt.driver [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Deletion of /var/lib/nova/instances/84abd1d4-6b7b-459e-9783-fdc15d7e8bde_del complete
Dec 13 03:34:06 np0005558241 nova_compute[248510]: 2025-12-13 08:34:06.956 248514 INFO nova.compute.manager [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Took 0.84 seconds to destroy the instance on the hypervisor.
Dec 13 03:34:06 np0005558241 nova_compute[248510]: 2025-12-13 08:34:06.957 248514 DEBUG oslo.service.loopingcall [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 03:34:06 np0005558241 nova_compute[248510]: 2025-12-13 08:34:06.958 248514 DEBUG nova.compute.manager [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 03:34:06 np0005558241 nova_compute[248510]: 2025-12-13 08:34:06.958 248514 DEBUG nova.network.neutron [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 03:34:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2106: 321 pgs: 321 active+clean; 88 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 545 KiB/s rd, 1.8 MiB/s wr, 173 op/s
Dec 13 03:34:07 np0005558241 nova_compute[248510]: 2025-12-13 08:34:07.640 248514 DEBUG nova.network.neutron [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:34:07 np0005558241 nova_compute[248510]: 2025-12-13 08:34:07.661 248514 DEBUG nova.network.neutron [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:34:07 np0005558241 nova_compute[248510]: 2025-12-13 08:34:07.676 248514 INFO nova.compute.manager [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Took 0.72 seconds to deallocate network for instance.
Dec 13 03:34:07 np0005558241 nova_compute[248510]: 2025-12-13 08:34:07.734 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:34:07 np0005558241 nova_compute[248510]: 2025-12-13 08:34:07.735 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:34:07 np0005558241 nova_compute[248510]: 2025-12-13 08:34:07.820 248514 DEBUG oslo_concurrency.processutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:34:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:34:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1408091150' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:34:08 np0005558241 nova_compute[248510]: 2025-12-13 08:34:08.389 248514 DEBUG oslo_concurrency.processutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:34:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:34:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:34:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:34:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:34:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:34:08 np0005558241 nova_compute[248510]: 2025-12-13 08:34:08.398 248514 DEBUG nova.compute.provider_tree [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:34:08 np0005558241 nova_compute[248510]: 2025-12-13 08:34:08.433 248514 DEBUG nova.scheduler.client.report [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:34:08 np0005558241 nova_compute[248510]: 2025-12-13 08:34:08.463 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:34:08 np0005558241 nova_compute[248510]: 2025-12-13 08:34:08.496 248514 INFO nova.scheduler.client.report [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Deleted allocations for instance 84abd1d4-6b7b-459e-9783-fdc15d7e8bde
Dec 13 03:34:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:34:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:34:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:34:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:34:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:34:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:34:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:34:08 np0005558241 nova_compute[248510]: 2025-12-13 08:34:08.551 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:34:08 np0005558241 nova_compute[248510]: 2025-12-13 08:34:08.570 248514 DEBUG oslo_concurrency.lockutils [None req-01425157-be67-4e1c-8ada-f701d8260896 dfd2c0543e264c50b5b818f8b1bef249 94f9c66cba1c4ab683e5ee108b067558 - - default default] Lock "84abd1d4-6b7b-459e-9783-fdc15d7e8bde" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:34:08 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:34:08 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:34:08 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:34:08 np0005558241 podman[323654]: 2025-12-13 08:34:08.973798549 +0000 UTC m=+0.048172278 container create 393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Dec 13 03:34:09 np0005558241 systemd[1]: Started libpod-conmon-393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0.scope.
Dec 13 03:34:09 np0005558241 podman[323654]: 2025-12-13 08:34:08.953531785 +0000 UTC m=+0.027905544 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:34:09 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:34:09 np0005558241 podman[323654]: 2025-12-13 08:34:09.084763264 +0000 UTC m=+0.159137013 container init 393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:34:09 np0005558241 podman[323654]: 2025-12-13 08:34:09.094123357 +0000 UTC m=+0.168497086 container start 393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:34:09 np0005558241 podman[323654]: 2025-12-13 08:34:09.097377798 +0000 UTC m=+0.171751527 container attach 393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:34:09 np0005558241 beautiful_morse[323670]: 167 167
Dec 13 03:34:09 np0005558241 systemd[1]: libpod-393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0.scope: Deactivated successfully.
Dec 13 03:34:09 np0005558241 conmon[323670]: conmon 393bfe5579ffe1c02a35 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0.scope/container/memory.events
Dec 13 03:34:09 np0005558241 podman[323654]: 2025-12-13 08:34:09.103271274 +0000 UTC m=+0.177645043 container died 393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 03:34:09 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9da754c065afcd3a88d6a2aa4ff2ecb0d062243bd7ef488b8a4739c395629f07-merged.mount: Deactivated successfully.
Dec 13 03:34:09 np0005558241 podman[323654]: 2025-12-13 08:34:09.150997909 +0000 UTC m=+0.225371638 container remove 393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 03:34:09 np0005558241 systemd[1]: libpod-conmon-393bfe5579ffe1c02a358618cf67061ee654d6e9cc6380e50aea14f8b646aba0.scope: Deactivated successfully.
Dec 13 03:34:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:34:09
Dec 13 03:34:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:34:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:34:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['vms', '.mgr', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'volumes', 'default.rgw.control']
Dec 13 03:34:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:34:09 np0005558241 podman[323693]: 2025-12-13 08:34:09.352436042 +0000 UTC m=+0.069083187 container create 9600659660f4093eba73437ee40c789011b964f22c89063f3a395efc20a7d666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:34:09 np0005558241 systemd[1]: Started libpod-conmon-9600659660f4093eba73437ee40c789011b964f22c89063f3a395efc20a7d666.scope.
Dec 13 03:34:09 np0005558241 podman[323693]: 2025-12-13 08:34:09.309725881 +0000 UTC m=+0.026373026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:34:09 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:34:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796d49250dc66b0c98a1fc82a3dc6473bf6450ce91334b20a3cf44458abe0d7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:34:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796d49250dc66b0c98a1fc82a3dc6473bf6450ce91334b20a3cf44458abe0d7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:34:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796d49250dc66b0c98a1fc82a3dc6473bf6450ce91334b20a3cf44458abe0d7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:34:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796d49250dc66b0c98a1fc82a3dc6473bf6450ce91334b20a3cf44458abe0d7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:34:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796d49250dc66b0c98a1fc82a3dc6473bf6450ce91334b20a3cf44458abe0d7b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:34:09 np0005558241 podman[323693]: 2025-12-13 08:34:09.462730211 +0000 UTC m=+0.179377356 container init 9600659660f4093eba73437ee40c789011b964f22c89063f3a395efc20a7d666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 03:34:09 np0005558241 podman[323693]: 2025-12-13 08:34:09.471783706 +0000 UTC m=+0.188430851 container start 9600659660f4093eba73437ee40c789011b964f22c89063f3a395efc20a7d666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:34:09 np0005558241 podman[323693]: 2025-12-13 08:34:09.523831598 +0000 UTC m=+0.240478783 container attach 9600659660f4093eba73437ee40c789011b964f22c89063f3a395efc20a7d666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:34:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2107: 321 pgs: 321 active+clean; 49 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 257 op/s
Dec 13 03:34:09 np0005558241 fervent_shamir[323709]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:34:09 np0005558241 fervent_shamir[323709]: --> All data devices are unavailable
Dec 13 03:34:09 np0005558241 systemd[1]: libpod-9600659660f4093eba73437ee40c789011b964f22c89063f3a395efc20a7d666.scope: Deactivated successfully.
Dec 13 03:34:09 np0005558241 podman[323693]: 2025-12-13 08:34:09.984904028 +0000 UTC m=+0.701551233 container died 9600659660f4093eba73437ee40c789011b964f22c89063f3a395efc20a7d666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:34:10 np0005558241 systemd[1]: var-lib-containers-storage-overlay-796d49250dc66b0c98a1fc82a3dc6473bf6450ce91334b20a3cf44458abe0d7b-merged.mount: Deactivated successfully.
Dec 13 03:34:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:34:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:34:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:34:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:34:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:34:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:34:10 np0005558241 podman[323693]: 2025-12-13 08:34:10.223638287 +0000 UTC m=+0.940285442 container remove 9600659660f4093eba73437ee40c789011b964f22c89063f3a395efc20a7d666 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_shamir, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 03:34:10 np0005558241 systemd[1]: libpod-conmon-9600659660f4093eba73437ee40c789011b964f22c89063f3a395efc20a7d666.scope: Deactivated successfully.
Dec 13 03:34:10 np0005558241 nova_compute[248510]: 2025-12-13 08:34:10.448 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:34:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:34:10 np0005558241 podman[323804]: 2025-12-13 08:34:10.69729279 +0000 UTC m=+0.028855368 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:34:10 np0005558241 podman[323804]: 2025-12-13 08:34:10.906589918 +0000 UTC m=+0.238152466 container create a4d3f9f1d7c9348226533a34b0931d992f8b43f0c33493c43a78850ce079cc10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:34:11 np0005558241 systemd[1]: Started libpod-conmon-a4d3f9f1d7c9348226533a34b0931d992f8b43f0c33493c43a78850ce079cc10.scope.
Dec 13 03:34:11 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:34:11 np0005558241 podman[323804]: 2025-12-13 08:34:11.337482169 +0000 UTC m=+0.669044747 container init a4d3f9f1d7c9348226533a34b0931d992f8b43f0c33493c43a78850ce079cc10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 03:34:11 np0005558241 podman[323804]: 2025-12-13 08:34:11.346372869 +0000 UTC m=+0.677935437 container start a4d3f9f1d7c9348226533a34b0931d992f8b43f0c33493c43a78850ce079cc10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:34:11 np0005558241 bold_wright[323820]: 167 167
Dec 13 03:34:11 np0005558241 systemd[1]: libpod-a4d3f9f1d7c9348226533a34b0931d992f8b43f0c33493c43a78850ce079cc10.scope: Deactivated successfully.
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.371208) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614851371294, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 840, "num_deletes": 250, "total_data_size": 1138734, "memory_usage": 1167544, "flush_reason": "Manual Compaction"}
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614851420356, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 718803, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40587, "largest_seqno": 41426, "table_properties": {"data_size": 715299, "index_size": 1284, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9582, "raw_average_key_size": 20, "raw_value_size": 707776, "raw_average_value_size": 1538, "num_data_blocks": 57, "num_entries": 460, "num_filter_entries": 460, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765614782, "oldest_key_time": 1765614782, "file_creation_time": 1765614851, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 49196 microseconds, and 3033 cpu microseconds.
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.420401) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 718803 bytes OK
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.420423) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.432659) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.432705) EVENT_LOG_v1 {"time_micros": 1765614851432697, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.432731) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 1134548, prev total WAL file size 1134548, number of live WAL files 2.
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.433385) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353039' seq:72057594037927935, type:22 .. '6D6772737461740031373630' seq:0, type:0; will stop at (end)
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(701KB)], [92(10079KB)]
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614851433437, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 11040595, "oldest_snapshot_seqno": -1}
Dec 13 03:34:11 np0005558241 podman[323804]: 2025-12-13 08:34:11.433886922 +0000 UTC m=+0.765449480 container attach a4d3f9f1d7c9348226533a34b0931d992f8b43f0c33493c43a78850ce079cc10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 03:34:11 np0005558241 podman[323804]: 2025-12-13 08:34:11.434458397 +0000 UTC m=+0.766020965 container died a4d3f9f1d7c9348226533a34b0931d992f8b43f0c33493c43a78850ce079cc10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6289 keys, 7976431 bytes, temperature: kUnknown
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614851517513, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 7976431, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7936493, "index_size": 23124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 163088, "raw_average_key_size": 25, "raw_value_size": 7825623, "raw_average_value_size": 1244, "num_data_blocks": 910, "num_entries": 6289, "num_filter_entries": 6289, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765614851, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.517786) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 7976431 bytes
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.523222) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.2 rd, 94.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.8 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(26.5) write-amplify(11.1) OK, records in: 6771, records dropped: 482 output_compression: NoCompression
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.523246) EVENT_LOG_v1 {"time_micros": 1765614851523236, "job": 54, "event": "compaction_finished", "compaction_time_micros": 84169, "compaction_time_cpu_micros": 23661, "output_level": 6, "num_output_files": 1, "total_output_size": 7976431, "num_input_records": 6771, "num_output_records": 6289, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614851523475, "job": 54, "event": "table_file_deletion", "file_number": 94}
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765614851526697, "job": 54, "event": "table_file_deletion", "file_number": 92}
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.433328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.526829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.526838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.526841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.526844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:34:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:34:11.526848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:34:11 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0b5429439c6aa4411a2f5ff73cd2a081a27be70a895f74b871df63796a5c4be4-merged.mount: Deactivated successfully.
Dec 13 03:34:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2108: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 240 op/s
Dec 13 03:34:11 np0005558241 podman[323804]: 2025-12-13 08:34:11.796573 +0000 UTC m=+1.128135588 container remove a4d3f9f1d7c9348226533a34b0931d992f8b43f0c33493c43a78850ce079cc10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:34:11 np0005558241 systemd[1]: libpod-conmon-a4d3f9f1d7c9348226533a34b0931d992f8b43f0c33493c43a78850ce079cc10.scope: Deactivated successfully.
Dec 13 03:34:12 np0005558241 podman[323845]: 2025-12-13 08:34:12.013565708 +0000 UTC m=+0.051706455 container create d0374bd99506a805be8197f0f74952721e874ba0689bf41099acb1f0a50bc919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_booth, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 03:34:12 np0005558241 systemd[1]: Started libpod-conmon-d0374bd99506a805be8197f0f74952721e874ba0689bf41099acb1f0a50bc919.scope.
Dec 13 03:34:12 np0005558241 podman[323845]: 2025-12-13 08:34:11.990855444 +0000 UTC m=+0.028996221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:34:12 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:34:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac2a4082592707cc0916f5daedfccb96dafe2f0fed097f8039966fa9c9ef5ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:34:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac2a4082592707cc0916f5daedfccb96dafe2f0fed097f8039966fa9c9ef5ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:34:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac2a4082592707cc0916f5daedfccb96dafe2f0fed097f8039966fa9c9ef5ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:34:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac2a4082592707cc0916f5daedfccb96dafe2f0fed097f8039966fa9c9ef5ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:34:12 np0005558241 podman[323845]: 2025-12-13 08:34:12.098780225 +0000 UTC m=+0.136920992 container init d0374bd99506a805be8197f0f74952721e874ba0689bf41099acb1f0a50bc919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_booth, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 03:34:12 np0005558241 podman[323845]: 2025-12-13 08:34:12.109237644 +0000 UTC m=+0.147378371 container start d0374bd99506a805be8197f0f74952721e874ba0689bf41099acb1f0a50bc919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 03:34:12 np0005558241 podman[323845]: 2025-12-13 08:34:12.112795533 +0000 UTC m=+0.150936260 container attach d0374bd99506a805be8197f0f74952721e874ba0689bf41099acb1f0a50bc919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_booth, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:34:12 np0005558241 agitated_booth[323862]: {
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:    "0": [
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:        {
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "devices": [
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "/dev/loop3"
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            ],
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "lv_name": "ceph_lv0",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "lv_size": "21470642176",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "name": "ceph_lv0",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "tags": {
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.cluster_name": "ceph",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.crush_device_class": "",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.encrypted": "0",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.objectstore": "bluestore",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.osd_id": "0",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.type": "block",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.vdo": "0",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.with_tpm": "0"
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            },
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "type": "block",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "vg_name": "ceph_vg0"
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:        }
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:    ],
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:    "1": [
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:        {
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "devices": [
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "/dev/loop4"
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            ],
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "lv_name": "ceph_lv1",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "lv_size": "21470642176",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "name": "ceph_lv1",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "tags": {
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.cluster_name": "ceph",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.crush_device_class": "",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.encrypted": "0",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.objectstore": "bluestore",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.osd_id": "1",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.type": "block",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.vdo": "0",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.with_tpm": "0"
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            },
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "type": "block",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "vg_name": "ceph_vg1"
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:        }
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:    ],
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:    "2": [
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:        {
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "devices": [
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "/dev/loop5"
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            ],
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "lv_name": "ceph_lv2",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "lv_size": "21470642176",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "name": "ceph_lv2",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "tags": {
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.cluster_name": "ceph",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.crush_device_class": "",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.encrypted": "0",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.objectstore": "bluestore",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.osd_id": "2",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.type": "block",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.vdo": "0",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:                "ceph.with_tpm": "0"
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            },
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "type": "block",
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:            "vg_name": "ceph_vg2"
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:        }
Dec 13 03:34:12 np0005558241 agitated_booth[323862]:    ]
Dec 13 03:34:12 np0005558241 agitated_booth[323862]: }
Dec 13 03:34:12 np0005558241 systemd[1]: libpod-d0374bd99506a805be8197f0f74952721e874ba0689bf41099acb1f0a50bc919.scope: Deactivated successfully.
Dec 13 03:34:12 np0005558241 podman[323845]: 2025-12-13 08:34:12.484029111 +0000 UTC m=+0.522169838 container died d0374bd99506a805be8197f0f74952721e874ba0689bf41099acb1f0a50bc919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:34:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay-bac2a4082592707cc0916f5daedfccb96dafe2f0fed097f8039966fa9c9ef5ce-merged.mount: Deactivated successfully.
Dec 13 03:34:12 np0005558241 podman[323845]: 2025-12-13 08:34:12.528120396 +0000 UTC m=+0.566261123 container remove d0374bd99506a805be8197f0f74952721e874ba0689bf41099acb1f0a50bc919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_booth, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:34:12 np0005558241 systemd[1]: libpod-conmon-d0374bd99506a805be8197f0f74952721e874ba0689bf41099acb1f0a50bc919.scope: Deactivated successfully.
Dec 13 03:34:13 np0005558241 podman[323945]: 2025-12-13 08:34:12.999578484 +0000 UTC m=+0.045778098 container create 0c1eb9c2fec422b1cd10ee9c217706c9195f2683596b59add37f867d9d3570a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_booth, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:34:13 np0005558241 systemd[1]: Started libpod-conmon-0c1eb9c2fec422b1cd10ee9c217706c9195f2683596b59add37f867d9d3570a0.scope.
Dec 13 03:34:13 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:34:13 np0005558241 podman[323945]: 2025-12-13 08:34:12.977618309 +0000 UTC m=+0.023817893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:34:13 np0005558241 podman[323945]: 2025-12-13 08:34:13.082342499 +0000 UTC m=+0.128542073 container init 0c1eb9c2fec422b1cd10ee9c217706c9195f2683596b59add37f867d9d3570a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:34:13 np0005558241 podman[323945]: 2025-12-13 08:34:13.090196965 +0000 UTC m=+0.136396529 container start 0c1eb9c2fec422b1cd10ee9c217706c9195f2683596b59add37f867d9d3570a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_booth, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:34:13 np0005558241 podman[323945]: 2025-12-13 08:34:13.093932877 +0000 UTC m=+0.140132541 container attach 0c1eb9c2fec422b1cd10ee9c217706c9195f2683596b59add37f867d9d3570a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_booth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 03:34:13 np0005558241 loving_booth[323961]: 167 167
Dec 13 03:34:13 np0005558241 systemd[1]: libpod-0c1eb9c2fec422b1cd10ee9c217706c9195f2683596b59add37f867d9d3570a0.scope: Deactivated successfully.
Dec 13 03:34:13 np0005558241 podman[323945]: 2025-12-13 08:34:13.099001913 +0000 UTC m=+0.145201477 container died 0c1eb9c2fec422b1cd10ee9c217706c9195f2683596b59add37f867d9d3570a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_booth, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:34:13 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0c3d44a51e73ab35b2fffd74f28ad81e713f69b640fa8800b4853c9676b108ec-merged.mount: Deactivated successfully.
Dec 13 03:34:13 np0005558241 podman[323945]: 2025-12-13 08:34:13.147503138 +0000 UTC m=+0.193702702 container remove 0c1eb9c2fec422b1cd10ee9c217706c9195f2683596b59add37f867d9d3570a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_booth, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 03:34:13 np0005558241 systemd[1]: libpod-conmon-0c1eb9c2fec422b1cd10ee9c217706c9195f2683596b59add37f867d9d3570a0.scope: Deactivated successfully.
Dec 13 03:34:13 np0005558241 podman[323985]: 2025-12-13 08:34:13.349636697 +0000 UTC m=+0.054401512 container create 6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_keller, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 03:34:13 np0005558241 systemd[1]: Started libpod-conmon-6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634.scope.
Dec 13 03:34:13 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:34:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f8401aa8143b40705c10116c80b2b74a90b6a6c0c13515f031d3bf8452a433/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:34:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f8401aa8143b40705c10116c80b2b74a90b6a6c0c13515f031d3bf8452a433/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:34:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f8401aa8143b40705c10116c80b2b74a90b6a6c0c13515f031d3bf8452a433/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:34:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f8401aa8143b40705c10116c80b2b74a90b6a6c0c13515f031d3bf8452a433/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:34:13 np0005558241 podman[323985]: 2025-12-13 08:34:13.330845721 +0000 UTC m=+0.035610556 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:34:13 np0005558241 podman[323985]: 2025-12-13 08:34:13.430219959 +0000 UTC m=+0.134984804 container init 6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_keller, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:34:13 np0005558241 podman[323985]: 2025-12-13 08:34:13.438011202 +0000 UTC m=+0.142776037 container start 6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:34:13 np0005558241 podman[323985]: 2025-12-13 08:34:13.442307369 +0000 UTC m=+0.147072194 container attach 6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_keller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:34:13 np0005558241 nova_compute[248510]: 2025-12-13 08:34:13.517 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614838.516485, 3ced27d6-a2a8-4ce3-a7e7-494270418542 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:34:13 np0005558241 nova_compute[248510]: 2025-12-13 08:34:13.519 248514 INFO nova.compute.manager [-] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:34:13 np0005558241 nova_compute[248510]: 2025-12-13 08:34:13.555 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:13 np0005558241 nova_compute[248510]: 2025-12-13 08:34:13.558 248514 DEBUG nova.compute.manager [None req-e3387d21-6b2e-404e-a0ea-4e95b51f9504 - - - - - -] [instance: 3ced27d6-a2a8-4ce3-a7e7-494270418542] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:34:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2109: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 192 op/s
Dec 13 03:34:14 np0005558241 lvm[324080]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:34:14 np0005558241 lvm[324079]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:34:14 np0005558241 lvm[324080]: VG ceph_vg1 finished
Dec 13 03:34:14 np0005558241 lvm[324079]: VG ceph_vg0 finished
Dec 13 03:34:14 np0005558241 lvm[324082]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:34:14 np0005558241 lvm[324082]: VG ceph_vg2 finished
Dec 13 03:34:14 np0005558241 funny_keller[324001]: {}
Dec 13 03:34:14 np0005558241 systemd[1]: libpod-6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634.scope: Deactivated successfully.
Dec 13 03:34:14 np0005558241 systemd[1]: libpod-6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634.scope: Consumed 1.369s CPU time.
Dec 13 03:34:14 np0005558241 podman[323985]: 2025-12-13 08:34:14.289442157 +0000 UTC m=+0.994207022 container died 6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:34:14 np0005558241 systemd[1]: var-lib-containers-storage-overlay-22f8401aa8143b40705c10116c80b2b74a90b6a6c0c13515f031d3bf8452a433-merged.mount: Deactivated successfully.
Dec 13 03:34:14 np0005558241 podman[323985]: 2025-12-13 08:34:14.341982851 +0000 UTC m=+1.046747676 container remove 6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_keller, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 03:34:14 np0005558241 systemd[1]: libpod-conmon-6579997bcbb5a8cfc173fe3009129038e57a4b3120b85235d0f8cbe617bf0634.scope: Deactivated successfully.
Dec 13 03:34:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:34:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:34:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:34:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:34:14 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:34:14 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:34:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:34:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2818477851' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:34:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:34:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2818477851' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:34:15 np0005558241 nova_compute[248510]: 2025-12-13 08:34:15.449 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2110: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 148 op/s
Dec 13 03:34:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:34:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2111: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.2 KiB/s wr, 98 op/s
Dec 13 03:34:18 np0005558241 nova_compute[248510]: 2025-12-13 08:34:18.557 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2112: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.2 KiB/s wr, 98 op/s
Dec 13 03:34:20 np0005558241 nova_compute[248510]: 2025-12-13 08:34:20.450 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.48350654070969e-06 of space, bias 1.0, pg target 0.002545051962212907 quantized to 32 (current 32)
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006672809573754347 of space, bias 1.0, pg target 0.2001842872126304 quantized to 32 (current 32)
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.983829715642068e-07 of space, bias 4.0, pg target 0.0007180595658770481 quantized to 16 (current 32)
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:34:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:34:21 np0005558241 nova_compute[248510]: 2025-12-13 08:34:21.336 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765614846.335488, 84abd1d4-6b7b-459e-9783-fdc15d7e8bde => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:34:21 np0005558241 nova_compute[248510]: 2025-12-13 08:34:21.337 248514 INFO nova.compute.manager [-] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:34:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:34:21 np0005558241 nova_compute[248510]: 2025-12-13 08:34:21.367 248514 DEBUG nova.compute.manager [None req-ac388c50-0e24-42a7-b43d-bf4ddf87ddaa - - - - - -] [instance: 84abd1d4-6b7b-459e-9783-fdc15d7e8bde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:34:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2113: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 9.9 KiB/s rd, 341 B/s wr, 14 op/s
Dec 13 03:34:23 np0005558241 nova_compute[248510]: 2025-12-13 08:34:23.516 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquiring lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:34:23 np0005558241 nova_compute[248510]: 2025-12-13 08:34:23.516 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:34:23 np0005558241 nova_compute[248510]: 2025-12-13 08:34:23.545 248514 DEBUG nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:34:23 np0005558241 nova_compute[248510]: 2025-12-13 08:34:23.559 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:23 np0005558241 nova_compute[248510]: 2025-12-13 08:34:23.622 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:34:23 np0005558241 nova_compute[248510]: 2025-12-13 08:34:23.622 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:34:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2114: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:34:23 np0005558241 nova_compute[248510]: 2025-12-13 08:34:23.634 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:34:23 np0005558241 nova_compute[248510]: 2025-12-13 08:34:23.635 248514 INFO nova.compute.claims [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:34:23 np0005558241 nova_compute[248510]: 2025-12-13 08:34:23.765 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:34:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:34:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3244428344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.356 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.363 248514 DEBUG nova.compute.provider_tree [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.383 248514 DEBUG nova.scheduler.client.report [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.404 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.405 248514 DEBUG nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.482 248514 DEBUG nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.483 248514 DEBUG nova.network.neutron [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.515 248514 INFO nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.537 248514 DEBUG nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.647 248514 DEBUG nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.649 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.649 248514 INFO nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Creating image(s)#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.676 248514 DEBUG nova.storage.rbd_utils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] rbd image 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.709 248514 DEBUG nova.storage.rbd_utils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] rbd image 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.740 248514 DEBUG nova.storage.rbd_utils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] rbd image 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.745 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.818 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.819 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.819 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.820 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.850 248514 DEBUG nova.storage.rbd_utils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] rbd image 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.854 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:34:24 np0005558241 nova_compute[248510]: 2025-12-13 08:34:24.956 248514 DEBUG nova.policy [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1daf78b2d25748d183901a09e4605044', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '082f98d662a4450a9fd767ac13a76391', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:34:25 np0005558241 nova_compute[248510]: 2025-12-13 08:34:25.452 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2115: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:34:25 np0005558241 nova_compute[248510]: 2025-12-13 08:34:25.666 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.812s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:34:25 np0005558241 nova_compute[248510]: 2025-12-13 08:34:25.721 248514 DEBUG nova.storage.rbd_utils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] resizing rbd image 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:34:26 np0005558241 nova_compute[248510]: 2025-12-13 08:34:26.041 248514 DEBUG nova.objects.instance [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lazy-loading 'migration_context' on Instance uuid 68f568e2-917b-4b70-8345-8d04c0f5d1a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:34:26 np0005558241 nova_compute[248510]: 2025-12-13 08:34:26.061 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:34:26 np0005558241 nova_compute[248510]: 2025-12-13 08:34:26.062 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Ensure instance console log exists: /var/lib/nova/instances/68f568e2-917b-4b70-8345-8d04c0f5d1a6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:34:26 np0005558241 nova_compute[248510]: 2025-12-13 08:34:26.063 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:34:26 np0005558241 nova_compute[248510]: 2025-12-13 08:34:26.063 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:34:26 np0005558241 nova_compute[248510]: 2025-12-13 08:34:26.064 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:34:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:34:26 np0005558241 nova_compute[248510]: 2025-12-13 08:34:26.474 248514 DEBUG nova.network.neutron [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Successfully created port: 9c81d394-98c0-444d-a1f2-9b909c7e2559 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:34:26 np0005558241 podman[324314]: 2025-12-13 08:34:26.990103432 +0000 UTC m=+0.074667696 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 03:34:26 np0005558241 podman[324313]: 2025-12-13 08:34:26.994683875 +0000 UTC m=+0.078165132 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:34:27 np0005558241 podman[324312]: 2025-12-13 08:34:27.060380297 +0000 UTC m=+0.141776932 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 03:34:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2116: 321 pgs: 321 active+clean; 41 MiB data, 569 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:34:28 np0005558241 nova_compute[248510]: 2025-12-13 08:34:28.443 248514 DEBUG nova.network.neutron [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Successfully updated port: 9c81d394-98c0-444d-a1f2-9b909c7e2559 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:34:28 np0005558241 nova_compute[248510]: 2025-12-13 08:34:28.469 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquiring lock "refresh_cache-68f568e2-917b-4b70-8345-8d04c0f5d1a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:34:28 np0005558241 nova_compute[248510]: 2025-12-13 08:34:28.469 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquired lock "refresh_cache-68f568e2-917b-4b70-8345-8d04c0f5d1a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:34:28 np0005558241 nova_compute[248510]: 2025-12-13 08:34:28.470 248514 DEBUG nova.network.neutron [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:34:28 np0005558241 nova_compute[248510]: 2025-12-13 08:34:28.561 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:28 np0005558241 nova_compute[248510]: 2025-12-13 08:34:28.670 248514 DEBUG nova.network.neutron [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:34:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2117: 321 pgs: 321 active+clean; 72 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.066 248514 DEBUG nova.compute.manager [req-a26fe1c3-175b-468e-8d0a-53dee6594fad req-1b1ecf34-72a3-4b1c-9d85-e7d0250b14e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received event network-changed-9c81d394-98c0-444d-a1f2-9b909c7e2559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.067 248514 DEBUG nova.compute.manager [req-a26fe1c3-175b-468e-8d0a-53dee6594fad req-1b1ecf34-72a3-4b1c-9d85-e7d0250b14e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Refreshing instance network info cache due to event network-changed-9c81d394-98c0-444d-a1f2-9b909c7e2559. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.067 248514 DEBUG oslo_concurrency.lockutils [req-a26fe1c3-175b-468e-8d0a-53dee6594fad req-1b1ecf34-72a3-4b1c-9d85-e7d0250b14e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-68f568e2-917b-4b70-8345-8d04c0f5d1a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.454 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.652 248514 DEBUG nova.network.neutron [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Updating instance_info_cache with network_info: [{"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.679 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Releasing lock "refresh_cache-68f568e2-917b-4b70-8345-8d04c0f5d1a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.679 248514 DEBUG nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Instance network_info: |[{"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.680 248514 DEBUG oslo_concurrency.lockutils [req-a26fe1c3-175b-468e-8d0a-53dee6594fad req-1b1ecf34-72a3-4b1c-9d85-e7d0250b14e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-68f568e2-917b-4b70-8345-8d04c0f5d1a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.680 248514 DEBUG nova.network.neutron [req-a26fe1c3-175b-468e-8d0a-53dee6594fad req-1b1ecf34-72a3-4b1c-9d85-e7d0250b14e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Refreshing network info cache for port 9c81d394-98c0-444d-a1f2-9b909c7e2559 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.683 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Start _get_guest_xml network_info=[{"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.687 248514 WARNING nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.694 248514 DEBUG nova.virt.libvirt.host [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.695 248514 DEBUG nova.virt.libvirt.host [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.702 248514 DEBUG nova.virt.libvirt.host [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.703 248514 DEBUG nova.virt.libvirt.host [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.703 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.704 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.704 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.704 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.704 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.705 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.705 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.705 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.705 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.706 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.706 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.706 248514 DEBUG nova.virt.hardware [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:34:30 np0005558241 nova_compute[248510]: 2025-12-13 08:34:30.709 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:34:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:34:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2111517588' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:34:31 np0005558241 nova_compute[248510]: 2025-12-13 08:34:31.281 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:34:31 np0005558241 nova_compute[248510]: 2025-12-13 08:34:31.307 248514 DEBUG nova.storage.rbd_utils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] rbd image 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:34:31 np0005558241 nova_compute[248510]: 2025-12-13 08:34:31.311 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:34:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:34:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2118: 321 pgs: 321 active+clean; 88 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:34:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:34:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2484614483' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.029 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.718s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.032 248514 DEBUG nova.virt.libvirt.vif [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:34:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-458198005',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-458198005',id=77,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='082f98d662a4450a9fd767ac13a76391',ramdisk_id='',reservation_id='r-rt0w0l4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-2087650193',owner_user_name='tempest-AttachInterfacesV270Test-2087650193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:34:24Z,user_data=None,user_id='1daf78b2d25748d183901a09e4605044',uuid=68f568e2-917b-4b70-8345-8d04c0f5d1a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.032 248514 DEBUG nova.network.os_vif_util [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Converting VIF {"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.033 248514 DEBUG nova.network.os_vif_util [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:28:0b,bridge_name='br-int',has_traffic_filtering=True,id=9c81d394-98c0-444d-a1f2-9b909c7e2559,network=Network(dfb72597-ac7e-412b-834c-574618fe1a4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c81d394-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.035 248514 DEBUG nova.objects.instance [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lazy-loading 'pci_devices' on Instance uuid 68f568e2-917b-4b70-8345-8d04c0f5d1a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.059 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:34:32 np0005558241 nova_compute[248510]:  <uuid>68f568e2-917b-4b70-8345-8d04c0f5d1a6</uuid>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:  <name>instance-0000004d</name>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <nova:name>tempest-AttachInterfacesV270Test-server-458198005</nova:name>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:34:30</nova:creationTime>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:34:32 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:        <nova:user uuid="1daf78b2d25748d183901a09e4605044">tempest-AttachInterfacesV270Test-2087650193-project-member</nova:user>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:        <nova:project uuid="082f98d662a4450a9fd767ac13a76391">tempest-AttachInterfacesV270Test-2087650193</nova:project>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:        <nova:port uuid="9c81d394-98c0-444d-a1f2-9b909c7e2559">
Dec 13 03:34:32 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <entry name="serial">68f568e2-917b-4b70-8345-8d04c0f5d1a6</entry>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <entry name="uuid">68f568e2-917b-4b70-8345-8d04c0f5d1a6</entry>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk">
Dec 13 03:34:32 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:34:32 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk.config">
Dec 13 03:34:32 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:34:32 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:07:28:0b"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <target dev="tap9c81d394-98"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/68f568e2-917b-4b70-8345-8d04c0f5d1a6/console.log" append="off"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:34:32 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:34:32 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:34:32 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:34:32 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.061 248514 DEBUG nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Preparing to wait for external event network-vif-plugged-9c81d394-98c0-444d-a1f2-9b909c7e2559 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.062 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquiring lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.063 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.063 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.064 248514 DEBUG nova.virt.libvirt.vif [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:34:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-458198005',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-458198005',id=77,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='082f98d662a4450a9fd767ac13a76391',ramdisk_id='',reservation_id='r-rt0w0l4h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-2087650193',owner_user_name='tempest-AttachInterfacesV270Test-2087650193-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:34:24Z,user_data=None,user_id='1daf78b2d25748d183901a09e4605044',uuid=68f568e2-917b-4b70-8345-8d04c0f5d1a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.065 248514 DEBUG nova.network.os_vif_util [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Converting VIF {"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.066 248514 DEBUG nova.network.os_vif_util [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:28:0b,bridge_name='br-int',has_traffic_filtering=True,id=9c81d394-98c0-444d-a1f2-9b909c7e2559,network=Network(dfb72597-ac7e-412b-834c-574618fe1a4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c81d394-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.067 248514 DEBUG os_vif [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:28:0b,bridge_name='br-int',has_traffic_filtering=True,id=9c81d394-98c0-444d-a1f2-9b909c7e2559,network=Network(dfb72597-ac7e-412b-834c-574618fe1a4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c81d394-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.068 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.069 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.070 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.075 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.076 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9c81d394-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.076 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9c81d394-98, col_values=(('external_ids', {'iface-id': '9c81d394-98c0-444d-a1f2-9b909c7e2559', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:28:0b', 'vm-uuid': '68f568e2-917b-4b70-8345-8d04c0f5d1a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:34:32 np0005558241 NetworkManager[50376]: <info>  [1765614872.0810] manager: (tap9c81d394-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/325)
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.081 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.087 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.089 248514 INFO os_vif [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:28:0b,bridge_name='br-int',has_traffic_filtering=True,id=9c81d394-98c0-444d-a1f2-9b909c7e2559,network=Network(dfb72597-ac7e-412b-834c-574618fe1a4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c81d394-98')#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.336 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.337 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.337 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] No VIF found with MAC fa:16:3e:07:28:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.339 248514 INFO nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Using config drive#033[00m
Dec 13 03:34:32 np0005558241 nova_compute[248510]: 2025-12-13 08:34:32.372 248514 DEBUG nova.storage.rbd_utils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] rbd image 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:34:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2119: 321 pgs: 321 active+clean; 88 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:34:33 np0005558241 nova_compute[248510]: 2025-12-13 08:34:33.904 248514 DEBUG nova.network.neutron [req-a26fe1c3-175b-468e-8d0a-53dee6594fad req-1b1ecf34-72a3-4b1c-9d85-e7d0250b14e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Updated VIF entry in instance network info cache for port 9c81d394-98c0-444d-a1f2-9b909c7e2559. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:34:33 np0005558241 nova_compute[248510]: 2025-12-13 08:34:33.904 248514 DEBUG nova.network.neutron [req-a26fe1c3-175b-468e-8d0a-53dee6594fad req-1b1ecf34-72a3-4b1c-9d85-e7d0250b14e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Updating instance_info_cache with network_info: [{"id": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "address": "fa:16:3e:07:28:0b", "network": {"id": "dfb72597-ac7e-412b-834c-574618fe1a4c", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-1410380687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "082f98d662a4450a9fd767ac13a76391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c81d394-98", "ovs_interfaceid": "9c81d394-98c0-444d-a1f2-9b909c7e2559", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:34:33 np0005558241 nova_compute[248510]: 2025-12-13 08:34:33.940 248514 DEBUG oslo_concurrency.lockutils [req-a26fe1c3-175b-468e-8d0a-53dee6594fad req-1b1ecf34-72a3-4b1c-9d85-e7d0250b14e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-68f568e2-917b-4b70-8345-8d04c0f5d1a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.017 248514 INFO nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Creating config drive at /var/lib/nova/instances/68f568e2-917b-4b70-8345-8d04c0f5d1a6/disk.config#033[00m
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.023 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/68f568e2-917b-4b70-8345-8d04c0f5d1a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvh0o5s5v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.174 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/68f568e2-917b-4b70-8345-8d04c0f5d1a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvh0o5s5v" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.199 248514 DEBUG nova.storage.rbd_utils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] rbd image 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.203 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/68f568e2-917b-4b70-8345-8d04c0f5d1a6/disk.config 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.364 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Acquiring lock "287069fd-8250-4b7f-92c5-74450b2e92ca" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.365 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "287069fd-8250-4b7f-92c5-74450b2e92ca" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.398 248514 DEBUG oslo_concurrency.processutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/68f568e2-917b-4b70-8345-8d04c0f5d1a6/disk.config 68f568e2-917b-4b70-8345-8d04c0f5d1a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.398 248514 INFO nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Deleting local config drive /var/lib/nova/instances/68f568e2-917b-4b70-8345-8d04c0f5d1a6/disk.config because it was imported into RBD.#033[00m
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.405 248514 DEBUG nova.compute.manager [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:34:34 np0005558241 kernel: tap9c81d394-98: entered promiscuous mode
Dec 13 03:34:34 np0005558241 NetworkManager[50376]: <info>  [1765614874.4562] manager: (tap9c81d394-98): new Tun device (/org/freedesktop/NetworkManager/Devices/326)
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.457 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:34:34Z|00764|binding|INFO|Claiming lport 9c81d394-98c0-444d-a1f2-9b909c7e2559 for this chassis.
Dec 13 03:34:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:34:34Z|00765|binding|INFO|9c81d394-98c0-444d-a1f2-9b909c7e2559: Claiming fa:16:3e:07:28:0b 10.100.0.13
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.468 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.478 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:28:0b 10.100.0.13'], port_security=['fa:16:3e:07:28:0b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '68f568e2-917b-4b70-8345-8d04c0f5d1a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfb72597-ac7e-412b-834c-574618fe1a4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '082f98d662a4450a9fd767ac13a76391', 'neutron:revision_number': '2', 'neutron:security_group_ids': '79fb568a-41f5-487e-bdd9-c21c5553decf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=778b417f-43c9-4614-a0b9-308a3289fddb, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=9c81d394-98c0-444d-a1f2-9b909c7e2559) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.480 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 9c81d394-98c0-444d-a1f2-9b909c7e2559 in datapath dfb72597-ac7e-412b-834c-574618fe1a4c bound to our chassis#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.481 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dfb72597-ac7e-412b-834c-574618fe1a4c#033[00m
Dec 13 03:34:34 np0005558241 systemd-udevd[324507]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.496 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[756c0737-679c-4f55-9709-a23363da1aa2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.497 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdfb72597-a1 in ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:34:34 np0005558241 systemd-machined[210538]: New machine qemu-96-instance-0000004d.
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.500 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdfb72597-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.500 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f99d37-5b4b-40ab-90a5-5d9e03c809a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.501 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[06dd563f-6e80-4f28-a6f1-01361cbb1144]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.508 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:34:34 np0005558241 NetworkManager[50376]: <info>  [1765614874.5101] device (tap9c81d394-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:34:34 np0005558241 NetworkManager[50376]: <info>  [1765614874.5107] device (tap9c81d394-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.509 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.511 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[502ba1ea-f0e0-411b-ab84-86f65d3f4222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.519 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.520 248514 INFO nova.compute.claims [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:34:34 np0005558241 systemd[1]: Started Virtual Machine qemu-96-instance-0000004d.
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.535 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:34:34Z|00766|binding|INFO|Setting lport 9c81d394-98c0-444d-a1f2-9b909c7e2559 ovn-installed in OVS
Dec 13 03:34:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:34:34Z|00767|binding|INFO|Setting lport 9c81d394-98c0-444d-a1f2-9b909c7e2559 up in Southbound
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.539 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.543 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9640a060-b765-4d50-bbb6-f54513930479]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.581 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4492c4ca-0bd6-472b-8794-da119ff63c9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.587 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[913d7d0e-1b8b-4169-ac2b-9dd2fe8f0149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:34:34 np0005558241 NetworkManager[50376]: <info>  [1765614874.5882] manager: (tapdfb72597-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/327)
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.621 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[102a8b1b-016c-4868-8b5a-edd7e3676e22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.624 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[305ca2c8-37d6-490b-b44f-8f0a4ff3df4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:34:34 np0005558241 NetworkManager[50376]: <info>  [1765614874.6476] device (tapdfb72597-a0): carrier: link connected
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.653 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a681b59e-3e11-4f84-be3d-db514f24a34e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.676 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3f71afc4-e413-4e3b-83f1-c9b90921b210]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfb72597-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:fa:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745186, 'reachable_time': 24813, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324541, 'error': None, 'target': 'ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.694 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.696 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bc44c915-1554-4cd4-9d98-3630fef3ec28]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:fa74'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 745186, 'tstamp': 745186}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324542, 'error': None, 'target': 'ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.720 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1f116ccd-c153-472e-b4fb-ce152de8e60c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfb72597-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:fa:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745186, 'reachable_time': 24813, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324543, 'error': None, 'target': 'ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.763 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f21c0570-54e7-4c03-a38a-39dbd2c5be33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.820 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b859b036-f552-4da6-8597-104502c338cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.821 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfb72597-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.822 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.822 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdfb72597-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:34:34 np0005558241 kernel: tapdfb72597-a0: entered promiscuous mode
Dec 13 03:34:34 np0005558241 NetworkManager[50376]: <info>  [1765614874.8249] manager: (tapdfb72597-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/328)
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.827 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdfb72597-a0, col_values=(('external_ids', {'iface-id': '5ddcc36f-0a9b-4b88-89cc-ebf23344a1de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:34:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:34:34Z|00768|binding|INFO|Releasing lport 5ddcc36f-0a9b-4b88-89cc-ebf23344a1de from this chassis (sb_readonly=0)
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.830 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dfb72597-ac7e-412b-834c-574618fe1a4c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dfb72597-ac7e-412b-834c-574618fe1a4c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.831 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6adb75e6-d935-4129-830a-76a3877bca82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.832 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-dfb72597-ac7e-412b-834c-574618fe1a4c
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/dfb72597-ac7e-412b-834c-574618fe1a4c.pid.haproxy
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID dfb72597-ac7e-412b-834c-574618fe1a4c
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:34:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:34.834 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c', 'env', 'PROCESS_TAG=haproxy-dfb72597-ac7e-412b-834c-574618fe1a4c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dfb72597-ac7e-412b-834c-574618fe1a4c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:34:34 np0005558241 nova_compute[248510]: 2025-12-13 08:34:34.840 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.144 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614875.1437128, 68f568e2-917b-4b70-8345-8d04c0f5d1a6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.146 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] VM Started (Lifecycle Event)#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.178 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.187 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614875.1447654, 68f568e2-917b-4b70-8345-8d04c0f5d1a6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.188 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.213 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.218 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.238 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:34:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:34:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1100857933' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:34:35 np0005558241 podman[324635]: 2025-12-13 08:34:35.234532401 +0000 UTC m=+0.029065233 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.331 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.340 248514 DEBUG nova.compute.provider_tree [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.361 248514 DEBUG nova.scheduler.client.report [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.395 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.396 248514 DEBUG nova.compute.manager [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:34:35 np0005558241 podman[324635]: 2025-12-13 08:34:35.412608764 +0000 UTC m=+0.207141576 container create 5eb92aeceaf3c32adf1d41d203152dd85a809a4357a03fc9ed9a1fcbbaedb65e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.456 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.464 248514 DEBUG nova.compute.manager [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.465 248514 DEBUG nova.network.neutron [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:34:35 np0005558241 systemd[1]: Started libpod-conmon-5eb92aeceaf3c32adf1d41d203152dd85a809a4357a03fc9ed9a1fcbbaedb65e.scope.
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.492 248514 INFO nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:34:35 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:34:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f3be40f551ba5f18c4992eac5c86f2009e1b17fb3be624151e23ab2639672f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:34:35 np0005558241 podman[324635]: 2025-12-13 08:34:35.516177276 +0000 UTC m=+0.310710108 container init 5eb92aeceaf3c32adf1d41d203152dd85a809a4357a03fc9ed9a1fcbbaedb65e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 13 03:34:35 np0005558241 podman[324635]: 2025-12-13 08:34:35.522708358 +0000 UTC m=+0.317241170 container start 5eb92aeceaf3c32adf1d41d203152dd85a809a4357a03fc9ed9a1fcbbaedb65e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.523 248514 DEBUG nova.compute.manager [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:34:35 np0005558241 neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c[324652]: [NOTICE]   (324656) : New worker (324658) forked
Dec 13 03:34:35 np0005558241 neutron-haproxy-ovnmeta-dfb72597-ac7e-412b-834c-574618fe1a4c[324652]: [NOTICE]   (324656) : Loading success.
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.626 248514 DEBUG nova.compute.manager [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.627 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.628 248514 INFO nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Creating image(s)#033[00m
Dec 13 03:34:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2120: 321 pgs: 321 active+clean; 88 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.652 248514 DEBUG nova.storage.rbd_utils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] rbd image 287069fd-8250-4b7f-92c5-74450b2e92ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.679 248514 DEBUG nova.storage.rbd_utils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] rbd image 287069fd-8250-4b7f-92c5-74450b2e92ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.703 248514 DEBUG nova.storage.rbd_utils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] rbd image 287069fd-8250-4b7f-92c5-74450b2e92ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.707 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.803 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.805 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.805 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.806 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.831 248514 DEBUG nova.storage.rbd_utils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] rbd image 287069fd-8250-4b7f-92c5-74450b2e92ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.837 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 287069fd-8250-4b7f-92c5-74450b2e92ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:34:35 np0005558241 nova_compute[248510]: 2025-12-13 08:34:35.873 248514 DEBUG nova.policy [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c023c9703ede4646a33bf3a4c18781e1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fec71de612e44bc68abc52770bf74e0c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:34:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:34:36 np0005558241 nova_compute[248510]: 2025-12-13 08:34:36.699 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 287069fd-8250-4b7f-92c5-74450b2e92ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.863s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:34:36 np0005558241 nova_compute[248510]: 2025-12-13 08:34:36.781 248514 DEBUG nova.storage.rbd_utils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] resizing rbd image 287069fd-8250-4b7f-92c5-74450b2e92ca_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:34:36 np0005558241 nova_compute[248510]: 2025-12-13 08:34:36.960 248514 DEBUG nova.objects.instance [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lazy-loading 'migration_context' on Instance uuid 287069fd-8250-4b7f-92c5-74450b2e92ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:34:36 np0005558241 nova_compute[248510]: 2025-12-13 08:34:36.979 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:34:36 np0005558241 nova_compute[248510]: 2025-12-13 08:34:36.980 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Ensure instance console log exists: /var/lib/nova/instances/287069fd-8250-4b7f-92c5-74450b2e92ca/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:34:36 np0005558241 nova_compute[248510]: 2025-12-13 08:34:36.981 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:34:36 np0005558241 nova_compute[248510]: 2025-12-13 08:34:36.981 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:34:36 np0005558241 nova_compute[248510]: 2025-12-13 08:34:36.982 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:34:37 np0005558241 nova_compute[248510]: 2025-12-13 08:34:37.080 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:37 np0005558241 nova_compute[248510]: 2025-12-13 08:34:37.427 248514 DEBUG nova.network.neutron [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Successfully created port: 65b19603-026e-4844-b71e-20239b4c9b1b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:34:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2121: 321 pgs: 321 active+clean; 88 MiB data, 590 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:34:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2122: 321 pgs: 321 active+clean; 126 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 3.3 MiB/s wr, 39 op/s
Dec 13 03:34:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:34:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:34:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:34:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:34:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:34:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.277 248514 DEBUG nova.network.neutron [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Successfully updated port: 65b19603-026e-4844-b71e-20239b4c9b1b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.300 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Acquiring lock "refresh_cache-287069fd-8250-4b7f-92c5-74450b2e92ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.301 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Acquired lock "refresh_cache-287069fd-8250-4b7f-92c5-74450b2e92ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.301 248514 DEBUG nova.network.neutron [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.458 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.618 248514 DEBUG nova.network.neutron [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:34:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:40.679 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.681 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:34:40.682 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.862 248514 DEBUG nova.compute.manager [req-e6d69b1f-756e-4e76-956c-b09548732b3b req-a8bd2817-1f78-4c6d-beb4-3801338c6ed6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received event network-vif-plugged-9c81d394-98c0-444d-a1f2-9b909c7e2559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.862 248514 DEBUG oslo_concurrency.lockutils [req-e6d69b1f-756e-4e76-956c-b09548732b3b req-a8bd2817-1f78-4c6d-beb4-3801338c6ed6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.863 248514 DEBUG oslo_concurrency.lockutils [req-e6d69b1f-756e-4e76-956c-b09548732b3b req-a8bd2817-1f78-4c6d-beb4-3801338c6ed6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.863 248514 DEBUG oslo_concurrency.lockutils [req-e6d69b1f-756e-4e76-956c-b09548732b3b req-a8bd2817-1f78-4c6d-beb4-3801338c6ed6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.863 248514 DEBUG nova.compute.manager [req-e6d69b1f-756e-4e76-956c-b09548732b3b req-a8bd2817-1f78-4c6d-beb4-3801338c6ed6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Processing event network-vif-plugged-9c81d394-98c0-444d-a1f2-9b909c7e2559 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.864 248514 DEBUG nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.867 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765614880.867357, 68f568e2-917b-4b70-8345-8d04c0f5d1a6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.867 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.870 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.875 248514 INFO nova.virt.libvirt.driver [-] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Instance spawned successfully.#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.875 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.915 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.925 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.926 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.926 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.927 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.927 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.928 248514 DEBUG nova.virt.libvirt.driver [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.933 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.985 248514 DEBUG nova.compute.manager [req-7e8f200f-8f04-4284-b5fa-92764bf944f2 req-d6674ac8-69cf-45aa-b48f-f5119e52bf27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Received event network-changed-65b19603-026e-4844-b71e-20239b4c9b1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.986 248514 DEBUG nova.compute.manager [req-7e8f200f-8f04-4284-b5fa-92764bf944f2 req-d6674ac8-69cf-45aa-b48f-f5119e52bf27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Refreshing instance network info cache due to event network-changed-65b19603-026e-4844-b71e-20239b4c9b1b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.986 248514 DEBUG oslo_concurrency.lockutils [req-7e8f200f-8f04-4284-b5fa-92764bf944f2 req-d6674ac8-69cf-45aa-b48f-f5119e52bf27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-287069fd-8250-4b7f-92c5-74450b2e92ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:34:40 np0005558241 nova_compute[248510]: 2025-12-13 08:34:40.989 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:34:41 np0005558241 nova_compute[248510]: 2025-12-13 08:34:41.049 248514 INFO nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Took 16.40 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:34:41 np0005558241 nova_compute[248510]: 2025-12-13 08:34:41.050 248514 DEBUG nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:34:41 np0005558241 nova_compute[248510]: 2025-12-13 08:34:41.160 248514 INFO nova.compute.manager [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Took 17.57 seconds to build instance.#033[00m
Dec 13 03:34:41 np0005558241 nova_compute[248510]: 2025-12-13 08:34:41.188 248514 DEBUG oslo_concurrency.lockutils [None req-8c7e2a92-e20b-4d54-9565-669dadb4d548 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:34:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:34:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2123: 321 pgs: 321 active+clean; 134 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.1 MiB/s wr, 39 op/s
Dec 13 03:34:41 np0005558241 nova_compute[248510]: 2025-12-13 08:34:41.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.084 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.768 248514 DEBUG nova.network.neutron [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Updating instance_info_cache with network_info: [{"id": "65b19603-026e-4844-b71e-20239b4c9b1b", "address": "fa:16:3e:12:23:89", "network": {"id": "ac80588a-2bf0-4858-a3dc-62f2f67c7bfd", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-649535640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fec71de612e44bc68abc52770bf74e0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65b19603-02", "ovs_interfaceid": "65b19603-026e-4844-b71e-20239b4c9b1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.781 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Acquiring lock "a393861c-2eea-4309-8914-042ec8bcc873" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.781 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lock "a393861c-2eea-4309-8914-042ec8bcc873" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.801 248514 DEBUG oslo_concurrency.lockutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Releasing lock "refresh_cache-287069fd-8250-4b7f-92c5-74450b2e92ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.801 248514 DEBUG nova.compute.manager [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Instance network_info: |[{"id": "65b19603-026e-4844-b71e-20239b4c9b1b", "address": "fa:16:3e:12:23:89", "network": {"id": "ac80588a-2bf0-4858-a3dc-62f2f67c7bfd", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-649535640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fec71de612e44bc68abc52770bf74e0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65b19603-02", "ovs_interfaceid": "65b19603-026e-4844-b71e-20239b4c9b1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.801 248514 DEBUG nova.compute.manager [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.803 248514 DEBUG oslo_concurrency.lockutils [req-7e8f200f-8f04-4284-b5fa-92764bf944f2 req-d6674ac8-69cf-45aa-b48f-f5119e52bf27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-287069fd-8250-4b7f-92c5-74450b2e92ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.804 248514 DEBUG nova.network.neutron [req-7e8f200f-8f04-4284-b5fa-92764bf944f2 req-d6674ac8-69cf-45aa-b48f-f5119e52bf27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Refreshing network info cache for port 65b19603-026e-4844-b71e-20239b4c9b1b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.806 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] Start _get_guest_xml network_info=[{"id": "65b19603-026e-4844-b71e-20239b4c9b1b", "address": "fa:16:3e:12:23:89", "network": {"id": "ac80588a-2bf0-4858-a3dc-62f2f67c7bfd", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-649535640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fec71de612e44bc68abc52770bf74e0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65b19603-02", "ovs_interfaceid": "65b19603-026e-4844-b71e-20239b4c9b1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.811 248514 WARNING nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.817 248514 DEBUG nova.virt.libvirt.host [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.818 248514 DEBUG nova.virt.libvirt.host [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.822 248514 DEBUG nova.virt.libvirt.host [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.822 248514 DEBUG nova.virt.libvirt.host [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.823 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.823 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.823 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.823 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.824 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.824 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.824 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.824 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.824 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.825 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.825 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.825 248514 DEBUG nova.virt.hardware [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.828 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.919 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.920 248514 DEBUG oslo_concurrency.lockutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.930 248514 DEBUG nova.virt.hardware [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:34:42 np0005558241 nova_compute[248510]: 2025-12-13 08:34:42.930 248514 INFO nova.compute.claims [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] [instance: a393861c-2eea-4309-8914-042ec8bcc873] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:34:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:34:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4071280628' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:34:43 np0005558241 nova_compute[248510]: 2025-12-13 08:34:43.445 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:34:43 np0005558241 nova_compute[248510]: 2025-12-13 08:34:43.474 248514 DEBUG nova.storage.rbd_utils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] rbd image 287069fd-8250-4b7f-92c5-74450b2e92ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:34:43 np0005558241 nova_compute[248510]: 2025-12-13 08:34:43.479 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:34:43 np0005558241 nova_compute[248510]: 2025-12-13 08:34:43.519 248514 DEBUG oslo_concurrency.processutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:34:43 np0005558241 nova_compute[248510]: 2025-12-13 08:34:43.554 248514 DEBUG nova.compute.manager [req-b71e2da7-3b6a-40de-a2b8-4ed8fe1dad38 req-e38ff72b-df15-4c2d-870b-7d09735d18af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received event network-vif-plugged-9c81d394-98c0-444d-a1f2-9b909c7e2559 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:34:43 np0005558241 nova_compute[248510]: 2025-12-13 08:34:43.555 248514 DEBUG oslo_concurrency.lockutils [req-b71e2da7-3b6a-40de-a2b8-4ed8fe1dad38 req-e38ff72b-df15-4c2d-870b-7d09735d18af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:34:43 np0005558241 nova_compute[248510]: 2025-12-13 08:34:43.555 248514 DEBUG oslo_concurrency.lockutils [req-b71e2da7-3b6a-40de-a2b8-4ed8fe1dad38 req-e38ff72b-df15-4c2d-870b-7d09735d18af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:34:43 np0005558241 nova_compute[248510]: 2025-12-13 08:34:43.556 248514 DEBUG oslo_concurrency.lockutils [req-b71e2da7-3b6a-40de-a2b8-4ed8fe1dad38 req-e38ff72b-df15-4c2d-870b-7d09735d18af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "68f568e2-917b-4b70-8345-8d04c0f5d1a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:34:43 np0005558241 nova_compute[248510]: 2025-12-13 08:34:43.556 248514 DEBUG nova.compute.manager [req-b71e2da7-3b6a-40de-a2b8-4ed8fe1dad38 req-e38ff72b-df15-4c2d-870b-7d09735d18af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] No waiting events found dispatching network-vif-plugged-9c81d394-98c0-444d-a1f2-9b909c7e2559 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:34:43 np0005558241 nova_compute[248510]: 2025-12-13 08:34:43.556 248514 WARNING nova.compute.manager [req-b71e2da7-3b6a-40de-a2b8-4ed8fe1dad38 req-e38ff72b-df15-4c2d-870b-7d09735d18af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] Received unexpected event network-vif-plugged-9c81d394-98c0-444d-a1f2-9b909c7e2559 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:34:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2124: 321 pgs: 321 active+clean; 134 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 75 op/s
Dec 13 03:34:43 np0005558241 nova_compute[248510]: 2025-12-13 08:34:43.749 248514 DEBUG oslo_concurrency.lockutils [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Acquiring lock "interface-68f568e2-917b-4b70-8345-8d04c0f5d1a6-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:34:43 np0005558241 nova_compute[248510]: 2025-12-13 08:34:43.750 248514 DEBUG oslo_concurrency.lockutils [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lock "interface-68f568e2-917b-4b70-8345-8d04c0f5d1a6-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:34:43 np0005558241 nova_compute[248510]: 2025-12-13 08:34:43.750 248514 DEBUG nova.objects.instance [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lazy-loading 'flavor' on Instance uuid 68f568e2-917b-4b70-8345-8d04c0f5d1a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:34:43 np0005558241 nova_compute[248510]: 2025-12-13 08:34:43.783 248514 DEBUG nova.objects.instance [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] Lazy-loading 'pci_requests' on Instance uuid 68f568e2-917b-4b70-8345-8d04c0f5d1a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:34:43 np0005558241 nova_compute[248510]: 2025-12-13 08:34:43.802 248514 DEBUG nova.network.neutron [None req-4788990e-af34-461e-9657-3c51014549bd 1daf78b2d25748d183901a09e4605044 082f98d662a4450a9fd767ac13a76391 - - default default] [instance: 68f568e2-917b-4b70-8345-8d04c0f5d1a6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:34:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:34:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1068031923' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:34:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:34:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3088948199' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:34:44 np0005558241 nova_compute[248510]: 2025-12-13 08:34:44.122 248514 DEBUG oslo_concurrency.processutils [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:34:44 np0005558241 nova_compute[248510]: 2025-12-13 08:34:44.128 248514 DEBUG nova.compute.provider_tree [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:34:44 np0005558241 nova_compute[248510]: 2025-12-13 08:34:44.131 248514 DEBUG oslo_concurrency.processutils [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:34:44 np0005558241 nova_compute[248510]: 2025-12-13 08:34:44.132 248514 DEBUG nova.virt.libvirt.vif [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:34:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1887941666',display_name='tempest-ServerAddressesTestJSON-server-1887941666',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1887941666',id=78,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fec71de612e44bc68abc52770bf74e0c',ramdisk_id='',reservation_id='r-a75xsshy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1681799594',owner_user_name='tempest-ServerAddressesTestJSON-1681799594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:34:35Z,user_data=None,user_id='c023c9703ede4646a33bf3a4c18781e1',uuid=287069fd-8250-4b7f-92c5-74450b2e92ca,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "65b19603-026e-4844-b71e-20239b4c9b1b", "address": "fa:16:3e:12:23:89", "network": {"id": "ac80588a-2bf0-4858-a3dc-62f2f67c7bfd", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-649535640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fec71de612e44bc68abc52770bf74e0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65b19603-02", "ovs_interfaceid": "65b19603-026e-4844-b71e-20239b4c9b1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:34:44 np0005558241 nova_compute[248510]: 2025-12-13 08:34:44.133 248514 DEBUG nova.network.os_vif_util [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Converting VIF {"id": "65b19603-026e-4844-b71e-20239b4c9b1b", "address": "fa:16:3e:12:23:89", "network": {"id": "ac80588a-2bf0-4858-a3dc-62f2f67c7bfd", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-649535640-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fec71de612e44bc68abc52770bf74e0c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65b19603-02", "ovs_interfaceid": "65b19603-026e-4844-b71e-20239b4c9b1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:34:44 np0005558241 nova_compute[248510]: 2025-12-13 08:34:44.134 248514 DEBUG nova.network.os_vif_util [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:23:89,bridge_name='br-int',has_traffic_filtering=True,id=65b19603-026e-4844-b71e-20239b4c9b1b,network=Network(ac80588a-2bf0-4858-a3dc-62f2f67c7bfd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65b19603-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:34:44 np0005558241 nova_compute[248510]: 2025-12-13 08:34:44.136 248514 DEBUG nova.objects.instance [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] Lazy-loading 'pci_devices' on Instance uuid 287069fd-8250-4b7f-92c5-74450b2e92ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:34:44 np0005558241 nova_compute[248510]: 2025-12-13 08:34:44.154 248514 DEBUG nova.scheduler.client.report [None req-3cdcc197-14a0-4019-bc58-803965c54c30 f0b9d8d9077a42d49daac45b8edf2a49 e8467605fc934a9cbc34a7e2300c34ac - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:34:44 np0005558241 nova_compute[248510]: 2025-12-13 08:34:44.160 248514 DEBUG nova.virt.libvirt.driver [None req-7c63308e-98a1-4afc-8b65-b75c4f9f6e27 c023c9703ede4646a33bf3a4c18781e1 fec71de612e44bc68abc52770bf74e0c - - default default] [instance: 287069fd-8250-4b7f-92c5-74450b2e92ca] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:34:44 np0005558241 nova_compute[248510]:  <uuid>287069fd-8250-4b7f-92c5-74450b2e92ca</uuid>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:  <name>instance-0000004e</name>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerAddressesTestJSON-server-1887941666</nova:name>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:34:42</nova:creationTime>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:34:44 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:        <nova:user uuid="c023c9703ede4646a33bf3a4c18781e1">tempest-ServerAddressesTestJSON-1681799594-project-member</nova:user>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:        <nova:project uuid="fec71de612e44bc68abc52770bf74e0c">tempest-ServerAddressesTestJSON-1681799594</nova:project>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:        <nova:port uuid="65b19603-026e-4844-b71e-20239b4c9b1b">
Dec 13 03:34:44 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <entry name="serial">287069fd-8250-4b7f-92c5-74450b2e92ca</entry>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <entry name="uuid">287069fd-8250-4b7f-92c5-74450b2e92ca</entry>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/287069fd-8250-4b7f-92c5-74450b2e92ca_disk">
Dec 13 03:34:44 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:34:44 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/287069fd-8250-4b7f-92c5-74450b2e92ca_disk.config">
Dec 13 03:34:44 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:34:44 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:12:23:89"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <target dev="tap65b19603-02"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/287069fd-8250-4b7f-92c5-74450b2e92ca/console.log" append="off"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:34:44 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:34:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2214: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 91 op/s
Dec 13 03:37:43 np0005558241 nova_compute[248510]: 2025-12-13 08:37:43.745 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquiring lock "fa9180aa-8387-44cb-9e70-535f0652e390" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:37:43 np0005558241 nova_compute[248510]: 2025-12-13 08:37:43.745 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:37:43 np0005558241 nova_compute[248510]: 2025-12-13 08:37:43.769 248514 DEBUG nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:37:43 np0005558241 rsyslogd[1002]: imjournal: 5487 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec 13 03:37:43 np0005558241 nova_compute[248510]: 2025-12-13 08:37:43.869 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:37:43 np0005558241 nova_compute[248510]: 2025-12-13 08:37:43.870 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:37:43 np0005558241 nova_compute[248510]: 2025-12-13 08:37:43.881 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:37:43 np0005558241 nova_compute[248510]: 2025-12-13 08:37:43.881 248514 INFO nova.compute.claims [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:37:44 np0005558241 nova_compute[248510]: 2025-12-13 08:37:44.049 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:37:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:37:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/967052416' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:37:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:44.589 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:37:44 np0005558241 nova_compute[248510]: 2025-12-13 08:37:44.590 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:37:44 np0005558241 nova_compute[248510]: 2025-12-13 08:37:44.591 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:37:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:44.591 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:37:44 np0005558241 nova_compute[248510]: 2025-12-13 08:37:44.596 248514 DEBUG nova.compute.provider_tree [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:37:44 np0005558241 nova_compute[248510]: 2025-12-13 08:37:44.650 248514 DEBUG nova.scheduler.client.report [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:37:44 np0005558241 nova_compute[248510]: 2025-12-13 08:37:44.682 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:37:44 np0005558241 nova_compute[248510]: 2025-12-13 08:37:44.683 248514 DEBUG nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:37:44 np0005558241 nova_compute[248510]: 2025-12-13 08:37:44.737 248514 DEBUG nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:37:44 np0005558241 nova_compute[248510]: 2025-12-13 08:37:44.737 248514 DEBUG nova.network.neutron [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:37:44 np0005558241 nova_compute[248510]: 2025-12-13 08:37:44.767 248514 INFO nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:37:44 np0005558241 nova_compute[248510]: 2025-12-13 08:37:44.789 248514 DEBUG nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:37:44 np0005558241 nova_compute[248510]: 2025-12-13 08:37:44.917 248514 DEBUG nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:37:44 np0005558241 nova_compute[248510]: 2025-12-13 08:37:44.918 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:37:44 np0005558241 nova_compute[248510]: 2025-12-13 08:37:44.919 248514 INFO nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Creating image(s)#033[00m
Dec 13 03:37:44 np0005558241 nova_compute[248510]: 2025-12-13 08:37:44.943 248514 DEBUG nova.storage.rbd_utils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] rbd image fa9180aa-8387-44cb-9e70-535f0652e390_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:37:44 np0005558241 nova_compute[248510]: 2025-12-13 08:37:44.967 248514 DEBUG nova.storage.rbd_utils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] rbd image fa9180aa-8387-44cb-9e70-535f0652e390_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:37:44 np0005558241 nova_compute[248510]: 2025-12-13 08:37:44.985 248514 DEBUG nova.storage.rbd_utils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] rbd image fa9180aa-8387-44cb-9e70-535f0652e390_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:37:44 np0005558241 nova_compute[248510]: 2025-12-13 08:37:44.989 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:37:45 np0005558241 nova_compute[248510]: 2025-12-13 08:37:45.080 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:37:45 np0005558241 nova_compute[248510]: 2025-12-13 08:37:45.082 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:37:45 np0005558241 nova_compute[248510]: 2025-12-13 08:37:45.083 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:37:45 np0005558241 nova_compute[248510]: 2025-12-13 08:37:45.084 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:37:45 np0005558241 nova_compute[248510]: 2025-12-13 08:37:45.115 248514 DEBUG nova.storage.rbd_utils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] rbd image fa9180aa-8387-44cb-9e70-535f0652e390_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:37:45 np0005558241 nova_compute[248510]: 2025-12-13 08:37:45.121 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 fa9180aa-8387-44cb-9e70-535f0652e390_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:37:45 np0005558241 nova_compute[248510]: 2025-12-13 08:37:45.153 248514 DEBUG nova.policy [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'be7c161a509a4392b0b424b31178f424', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4497dd13cb734db4999e9c01823dc0fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:37:45 np0005558241 nova_compute[248510]: 2025-12-13 08:37:45.606 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 fa9180aa-8387-44cb-9e70-535f0652e390_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:37:45 np0005558241 nova_compute[248510]: 2025-12-13 08:37:45.674 248514 DEBUG nova.storage.rbd_utils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] resizing rbd image fa9180aa-8387-44cb-9e70-535f0652e390_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:37:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2215: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec 13 03:37:45 np0005558241 nova_compute[248510]: 2025-12-13 08:37:45.728 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:37:45 np0005558241 nova_compute[248510]: 2025-12-13 08:37:45.757 248514 DEBUG nova.objects.instance [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lazy-loading 'migration_context' on Instance uuid fa9180aa-8387-44cb-9e70-535f0652e390 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:37:45 np0005558241 nova_compute[248510]: 2025-12-13 08:37:45.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:37:45 np0005558241 nova_compute[248510]: 2025-12-13 08:37:45.776 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:37:45 np0005558241 nova_compute[248510]: 2025-12-13 08:37:45.777 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Ensure instance console log exists: /var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:37:45 np0005558241 nova_compute[248510]: 2025-12-13 08:37:45.777 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:37:45 np0005558241 nova_compute[248510]: 2025-12-13 08:37:45.777 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:37:45 np0005558241 nova_compute[248510]: 2025-12-13 08:37:45.778 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:37:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:37:46 np0005558241 nova_compute[248510]: 2025-12-13 08:37:46.716 248514 DEBUG nova.network.neutron [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Successfully created port: 3b4d29b5-8158-4612-8f60-a6a57327d01c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:37:47 np0005558241 nova_compute[248510]: 2025-12-13 08:37:47.644 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:37:47 np0005558241 ovn_controller[148476]: 2025-12-13T08:37:47Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e0:82:3a 10.100.0.3
Dec 13 03:37:47 np0005558241 ovn_controller[148476]: 2025-12-13T08:37:47Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e0:82:3a 10.100.0.3
Dec 13 03:37:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2216: 321 pgs: 321 active+clean; 88 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 64 op/s
Dec 13 03:37:48 np0005558241 nova_compute[248510]: 2025-12-13 08:37:48.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:37:49 np0005558241 nova_compute[248510]: 2025-12-13 08:37:49.361 248514 DEBUG nova.network.neutron [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Successfully updated port: 3b4d29b5-8158-4612-8f60-a6a57327d01c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:37:49 np0005558241 nova_compute[248510]: 2025-12-13 08:37:49.380 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquiring lock "refresh_cache-fa9180aa-8387-44cb-9e70-535f0652e390" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:37:49 np0005558241 nova_compute[248510]: 2025-12-13 08:37:49.381 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquired lock "refresh_cache-fa9180aa-8387-44cb-9e70-535f0652e390" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:37:49 np0005558241 nova_compute[248510]: 2025-12-13 08:37:49.381 248514 DEBUG nova.network.neutron [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:37:49 np0005558241 nova_compute[248510]: 2025-12-13 08:37:49.663 248514 DEBUG nova.compute.manager [req-7ea6c541-637f-41b1-9a0e-08f0991d8cae req-e8e12638-4dd6-4a81-ba54-3f628571908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Received event network-changed-3b4d29b5-8158-4612-8f60-a6a57327d01c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:37:49 np0005558241 nova_compute[248510]: 2025-12-13 08:37:49.664 248514 DEBUG nova.compute.manager [req-7ea6c541-637f-41b1-9a0e-08f0991d8cae req-e8e12638-4dd6-4a81-ba54-3f628571908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Refreshing instance network info cache due to event network-changed-3b4d29b5-8158-4612-8f60-a6a57327d01c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:37:49 np0005558241 nova_compute[248510]: 2025-12-13 08:37:49.664 248514 DEBUG oslo_concurrency.lockutils [req-7ea6c541-637f-41b1-9a0e-08f0991d8cae req-e8e12638-4dd6-4a81-ba54-3f628571908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-fa9180aa-8387-44cb-9e70-535f0652e390" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:37:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2217: 321 pgs: 321 active+clean; 133 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 121 op/s
Dec 13 03:37:49 np0005558241 nova_compute[248510]: 2025-12-13 08:37:49.754 248514 DEBUG nova.network.neutron [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:37:50 np0005558241 nova_compute[248510]: 2025-12-13 08:37:50.285 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:37:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:50.594 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:37:50 np0005558241 nova_compute[248510]: 2025-12-13 08:37:50.729 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:37:50 np0005558241 nova_compute[248510]: 2025-12-13 08:37:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:37:50 np0005558241 nova_compute[248510]: 2025-12-13 08:37:50.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:37:50 np0005558241 nova_compute[248510]: 2025-12-13 08:37:50.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:37:50 np0005558241 nova_compute[248510]: 2025-12-13 08:37:50.805 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.214 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.215 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.215 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.216 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.267 248514 DEBUG nova.network.neutron [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Updating instance_info_cache with network_info: [{"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.296 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Releasing lock "refresh_cache-fa9180aa-8387-44cb-9e70-535f0652e390" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.296 248514 DEBUG nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Instance network_info: |[{"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.297 248514 DEBUG oslo_concurrency.lockutils [req-7ea6c541-637f-41b1-9a0e-08f0991d8cae req-e8e12638-4dd6-4a81-ba54-3f628571908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-fa9180aa-8387-44cb-9e70-535f0652e390" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.298 248514 DEBUG nova.network.neutron [req-7ea6c541-637f-41b1-9a0e-08f0991d8cae req-e8e12638-4dd6-4a81-ba54-3f628571908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Refreshing network info cache for port 3b4d29b5-8158-4612-8f60-a6a57327d01c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.305 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Start _get_guest_xml network_info=[{"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.311 248514 WARNING nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.325 248514 DEBUG nova.virt.libvirt.host [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.325 248514 DEBUG nova.virt.libvirt.host [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.331 248514 DEBUG nova.virt.libvirt.host [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.332 248514 DEBUG nova.virt.libvirt.host [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.333 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.333 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.333 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.334 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.334 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.334 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.334 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.335 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.335 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.335 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.335 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.336 248514 DEBUG nova.virt.hardware [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.340 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:37:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:37:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2218: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 479 KiB/s rd, 3.9 MiB/s wr, 94 op/s
Dec 13 03:37:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:37:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/112773468' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.962 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.990 248514 DEBUG nova.storage.rbd_utils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] rbd image fa9180aa-8387-44cb-9e70-535f0652e390_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:37:51 np0005558241 nova_compute[248510]: 2025-12-13 08:37:51.993 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:37:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:37:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2305898004' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.570 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.572 248514 DEBUG nova.virt.libvirt.vif [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:37:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-354314434',display_name='tempest-ServerPasswordTestJSON-server-354314434',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-354314434',id=86,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4497dd13cb734db4999e9c01823dc0fc',ramdisk_id='',reservation_id='r-dolnm7xr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1689122570',owner_user_name='tempest-ServerPasswordTest
JSON-1689122570-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:37:44Z,user_data=None,user_id='be7c161a509a4392b0b424b31178f424',uuid=fa9180aa-8387-44cb-9e70-535f0652e390,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.572 248514 DEBUG nova.network.os_vif_util [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Converting VIF {"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.573 248514 DEBUG nova.network.os_vif_util [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:47:7f,bridge_name='br-int',has_traffic_filtering=True,id=3b4d29b5-8158-4612-8f60-a6a57327d01c,network=Network(b0588191-fbf6-4ff4-8e6f-2ba53eda08f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b4d29b5-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.574 248514 DEBUG nova.objects.instance [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lazy-loading 'pci_devices' on Instance uuid fa9180aa-8387-44cb-9e70-535f0652e390 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.590 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:37:52 np0005558241 nova_compute[248510]:  <uuid>fa9180aa-8387-44cb-9e70-535f0652e390</uuid>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:  <name>instance-00000056</name>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerPasswordTestJSON-server-354314434</nova:name>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:37:51</nova:creationTime>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:37:52 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:        <nova:user uuid="be7c161a509a4392b0b424b31178f424">tempest-ServerPasswordTestJSON-1689122570-project-member</nova:user>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:        <nova:project uuid="4497dd13cb734db4999e9c01823dc0fc">tempest-ServerPasswordTestJSON-1689122570</nova:project>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:        <nova:port uuid="3b4d29b5-8158-4612-8f60-a6a57327d01c">
Dec 13 03:37:52 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <entry name="serial">fa9180aa-8387-44cb-9e70-535f0652e390</entry>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <entry name="uuid">fa9180aa-8387-44cb-9e70-535f0652e390</entry>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/fa9180aa-8387-44cb-9e70-535f0652e390_disk">
Dec 13 03:37:52 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:37:52 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/fa9180aa-8387-44cb-9e70-535f0652e390_disk.config">
Dec 13 03:37:52 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:37:52 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:07:47:7f"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <target dev="tap3b4d29b5-81"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390/console.log" append="off"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:37:52 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:37:52 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:37:52 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:37:52 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.591 248514 DEBUG nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Preparing to wait for external event network-vif-plugged-3b4d29b5-8158-4612-8f60-a6a57327d01c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.591 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquiring lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.592 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.592 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.593 248514 DEBUG nova.virt.libvirt.vif [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:37:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-354314434',display_name='tempest-ServerPasswordTestJSON-server-354314434',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-354314434',id=86,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4497dd13cb734db4999e9c01823dc0fc',ramdisk_id='',reservation_id='r-dolnm7xr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1689122570',owner_user_name='tempest-ServerPa
sswordTestJSON-1689122570-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:37:44Z,user_data=None,user_id='be7c161a509a4392b0b424b31178f424',uuid=fa9180aa-8387-44cb-9e70-535f0652e390,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.593 248514 DEBUG nova.network.os_vif_util [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Converting VIF {"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.593 248514 DEBUG nova.network.os_vif_util [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:47:7f,bridge_name='br-int',has_traffic_filtering=True,id=3b4d29b5-8158-4612-8f60-a6a57327d01c,network=Network(b0588191-fbf6-4ff4-8e6f-2ba53eda08f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b4d29b5-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.594 248514 DEBUG os_vif [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:47:7f,bridge_name='br-int',has_traffic_filtering=True,id=3b4d29b5-8158-4612-8f60-a6a57327d01c,network=Network(b0588191-fbf6-4ff4-8e6f-2ba53eda08f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b4d29b5-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.594 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.595 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.595 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.597 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.598 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b4d29b5-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.598 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b4d29b5-81, col_values=(('external_ids', {'iface-id': '3b4d29b5-8158-4612-8f60-a6a57327d01c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:47:7f', 'vm-uuid': 'fa9180aa-8387-44cb-9e70-535f0652e390'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.599 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:37:52 np0005558241 NetworkManager[50376]: <info>  [1765615072.6008] manager: (tap3b4d29b5-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.602 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.609 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.610 248514 INFO os_vif [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:47:7f,bridge_name='br-int',has_traffic_filtering=True,id=3b4d29b5-8158-4612-8f60-a6a57327d01c,network=Network(b0588191-fbf6-4ff4-8e6f-2ba53eda08f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b4d29b5-81')#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.669 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.669 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.669 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] No VIF found with MAC fa:16:3e:07:47:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.670 248514 INFO nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Using config drive#033[00m
Dec 13 03:37:52 np0005558241 nova_compute[248510]: 2025-12-13 08:37:52.697 248514 DEBUG nova.storage.rbd_utils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] rbd image fa9180aa-8387-44cb-9e70-535f0652e390_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:37:53 np0005558241 nova_compute[248510]: 2025-12-13 08:37:53.339 248514 INFO nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Creating config drive at /var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390/disk.config#033[00m
Dec 13 03:37:53 np0005558241 nova_compute[248510]: 2025-12-13 08:37:53.344 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53veqlx4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:37:53 np0005558241 nova_compute[248510]: 2025-12-13 08:37:53.487 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53veqlx4" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:37:53 np0005558241 nova_compute[248510]: 2025-12-13 08:37:53.515 248514 DEBUG nova.storage.rbd_utils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] rbd image fa9180aa-8387-44cb-9e70-535f0652e390_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:37:53 np0005558241 nova_compute[248510]: 2025-12-13 08:37:53.518 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390/disk.config fa9180aa-8387-44cb-9e70-535f0652e390_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:37:53 np0005558241 nova_compute[248510]: 2025-12-13 08:37:53.657 248514 DEBUG oslo_concurrency.processutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390/disk.config fa9180aa-8387-44cb-9e70-535f0652e390_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:37:53 np0005558241 nova_compute[248510]: 2025-12-13 08:37:53.658 248514 INFO nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Deleting local config drive /var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390/disk.config because it was imported into RBD.#033[00m
Dec 13 03:37:53 np0005558241 kernel: tap3b4d29b5-81: entered promiscuous mode
Dec 13 03:37:53 np0005558241 NetworkManager[50376]: <info>  [1765615073.7116] manager: (tap3b4d29b5-81): new Tun device (/org/freedesktop/NetworkManager/Devices/370)
Dec 13 03:37:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:37:53Z|00854|binding|INFO|Claiming lport 3b4d29b5-8158-4612-8f60-a6a57327d01c for this chassis.
Dec 13 03:37:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:37:53Z|00855|binding|INFO|3b4d29b5-8158-4612-8f60-a6a57327d01c: Claiming fa:16:3e:07:47:7f 10.100.0.11
Dec 13 03:37:53 np0005558241 nova_compute[248510]: 2025-12-13 08:37:53.713 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:37:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.720 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:47:7f 10.100.0.11'], port_security=['fa:16:3e:07:47:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'fa9180aa-8387-44cb-9e70-535f0652e390', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4497dd13cb734db4999e9c01823dc0fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82b41d33-9cc9-41ca-a7e4-ff901fea742b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1169a38-50f8-4ffe-a49f-5841c99ba14f, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3b4d29b5-8158-4612-8f60-a6a57327d01c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:37:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.722 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3b4d29b5-8158-4612-8f60-a6a57327d01c in datapath b0588191-fbf6-4ff4-8e6f-2ba53eda08f1 bound to our chassis#033[00m
Dec 13 03:37:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.724 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b0588191-fbf6-4ff4-8e6f-2ba53eda08f1#033[00m
Dec 13 03:37:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:37:53Z|00856|binding|INFO|Setting lport 3b4d29b5-8158-4612-8f60-a6a57327d01c ovn-installed in OVS
Dec 13 03:37:53 np0005558241 ovn_controller[148476]: 2025-12-13T08:37:53Z|00857|binding|INFO|Setting lport 3b4d29b5-8158-4612-8f60-a6a57327d01c up in Southbound
Dec 13 03:37:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2219: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 294 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Dec 13 03:37:53 np0005558241 nova_compute[248510]: 2025-12-13 08:37:53.728 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:37:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.736 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[701de2c0-4316-4f90-a72e-a2e02ed6c15c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:37:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.737 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb0588191-f1 in ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:37:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.739 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb0588191-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:37:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.739 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f764fa34-4007-48d1-8522-e7eeca0e0634]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:37:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.740 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aff6e5b1-34d0-4ca9-900f-569422e129a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:37:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.753 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa4b131-3c10-40a7-8345-0c45954f5316]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:37:53 np0005558241 systemd-machined[210538]: New machine qemu-106-instance-00000056.
Dec 13 03:37:53 np0005558241 systemd[1]: Started Virtual Machine qemu-106-instance-00000056.
Dec 13 03:37:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.770 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[102aff35-16fd-427d-ad5c-a0e1474c72ec]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:37:53 np0005558241 systemd-udevd[332533]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:37:53 np0005558241 NetworkManager[50376]: <info>  [1765615073.7928] device (tap3b4d29b5-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:37:53 np0005558241 NetworkManager[50376]: <info>  [1765615073.7934] device (tap3b4d29b5-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:37:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.798 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7fd24c2e-70b3-437d-a03d-0c37140da830]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:37:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.803 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f761b2b0-7ec8-423b-84d1-d99c5493dcfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:37:53 np0005558241 systemd-udevd[332536]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:37:53 np0005558241 NetworkManager[50376]: <info>  [1765615073.8048] manager: (tapb0588191-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/371)
Dec 13 03:37:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.833 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9348087e-31a8-42e1-b1a0-a6a94903d413]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:37:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.836 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[026ed689-1e9b-409a-8deb-e0872afa9557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:37:53 np0005558241 NetworkManager[50376]: <info>  [1765615073.8637] device (tapb0588191-f0): carrier: link connected
Dec 13 03:37:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.871 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[08701887-d5b7-4ac2-852e-818359e80a55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:37:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.892 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3c993e89-f292-4f3f-a9dd-2b49450e09fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0588191-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:00:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 259], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765107, 'reachable_time': 31885, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332562, 'error': None, 'target': 'ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:37:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.909 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c0f1c212-3a29-4e70-8b25-1e24649d7954]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7e:aa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 765107, 'tstamp': 765107}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332563, 'error': None, 'target': 'ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:37:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.928 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[499307b2-b434-40bb-b16b-64afb43a89c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb0588191-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7e:00:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 259], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765107, 'reachable_time': 31885, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 332564, 'error': None, 'target': 'ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:37:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:53.965 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aff49922-9306-44df-b934-04b8ce7f6cc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:54.042 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[230afc5b-3292-4230-b0bb-02bdfc6dd0f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:54.044 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0588191-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:54.044 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:54.045 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0588191-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:37:54 np0005558241 NetworkManager[50376]: <info>  [1765615074.0476] manager: (tapb0588191-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Dec 13 03:37:54 np0005558241 nova_compute[248510]: 2025-12-13 08:37:54.047 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:37:54 np0005558241 kernel: tapb0588191-f0: entered promiscuous mode
Dec 13 03:37:54 np0005558241 nova_compute[248510]: 2025-12-13 08:37:54.050 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:54.052 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb0588191-f0, col_values=(('external_ids', {'iface-id': '4ccc34c2-c568-4745-9202-7dc43ff5cdee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:37:54 np0005558241 nova_compute[248510]: 2025-12-13 08:37:54.053 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:37:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:37:54Z|00858|binding|INFO|Releasing lport 4ccc34c2-c568-4745-9202-7dc43ff5cdee from this chassis (sb_readonly=0)
Dec 13 03:37:54 np0005558241 nova_compute[248510]: 2025-12-13 08:37:54.068 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:54.069 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b0588191-fbf6-4ff4-8e6f-2ba53eda08f1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b0588191-fbf6-4ff4-8e6f-2ba53eda08f1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:54.070 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2bccaa2e-f4bb-4f69-b415-a130a8fde0fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:54.071 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/b0588191-fbf6-4ff4-8e6f-2ba53eda08f1.pid.haproxy
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID b0588191-fbf6-4ff4-8e6f-2ba53eda08f1
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:37:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:54.071 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1', 'env', 'PROCESS_TAG=haproxy-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b0588191-fbf6-4ff4-8e6f-2ba53eda08f1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:37:54 np0005558241 nova_compute[248510]: 2025-12-13 08:37:54.133 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615074.132913, fa9180aa-8387-44cb-9e70-535f0652e390 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:37:54 np0005558241 nova_compute[248510]: 2025-12-13 08:37:54.133 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] VM Started (Lifecycle Event)#033[00m
Dec 13 03:37:54 np0005558241 nova_compute[248510]: 2025-12-13 08:37:54.249 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:37:54 np0005558241 nova_compute[248510]: 2025-12-13 08:37:54.262 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615074.1332505, fa9180aa-8387-44cb-9e70-535f0652e390 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:37:54 np0005558241 nova_compute[248510]: 2025-12-13 08:37:54.262 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:37:54 np0005558241 nova_compute[248510]: 2025-12-13 08:37:54.287 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:37:54 np0005558241 nova_compute[248510]: 2025-12-13 08:37:54.291 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:37:54 np0005558241 nova_compute[248510]: 2025-12-13 08:37:54.318 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:37:54 np0005558241 podman[332638]: 2025-12-13 08:37:54.466733408 +0000 UTC m=+0.050946216 container create 064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:37:54 np0005558241 nova_compute[248510]: 2025-12-13 08:37:54.474 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updating instance_info_cache with network_info: [{"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:37:54 np0005558241 nova_compute[248510]: 2025-12-13 08:37:54.479 248514 DEBUG nova.network.neutron [req-7ea6c541-637f-41b1-9a0e-08f0991d8cae req-e8e12638-4dd6-4a81-ba54-3f628571908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Updated VIF entry in instance network info cache for port 3b4d29b5-8158-4612-8f60-a6a57327d01c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:37:54 np0005558241 nova_compute[248510]: 2025-12-13 08:37:54.480 248514 DEBUG nova.network.neutron [req-7ea6c541-637f-41b1-9a0e-08f0991d8cae req-e8e12638-4dd6-4a81-ba54-3f628571908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Updating instance_info_cache with network_info: [{"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:37:54 np0005558241 systemd[1]: Started libpod-conmon-064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac.scope.
Dec 13 03:37:54 np0005558241 nova_compute[248510]: 2025-12-13 08:37:54.514 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:37:54 np0005558241 nova_compute[248510]: 2025-12-13 08:37:54.514 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:37:54 np0005558241 nova_compute[248510]: 2025-12-13 08:37:54.515 248514 DEBUG oslo_concurrency.lockutils [req-7ea6c541-637f-41b1-9a0e-08f0991d8cae req-e8e12638-4dd6-4a81-ba54-3f628571908b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-fa9180aa-8387-44cb-9e70-535f0652e390" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:37:54 np0005558241 nova_compute[248510]: 2025-12-13 08:37:54.516 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:37:54 np0005558241 podman[332638]: 2025-12-13 08:37:54.439781899 +0000 UTC m=+0.023994727 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:37:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:37:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5316c9592aac0df80f7827a4e0b5f1ce24a99c5a9f3cbdd4429ab127d248b430/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:37:54 np0005558241 podman[332638]: 2025-12-13 08:37:54.581422266 +0000 UTC m=+0.165635094 container init 064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 03:37:54 np0005558241 podman[332638]: 2025-12-13 08:37:54.587313713 +0000 UTC m=+0.171526521 container start 064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:37:54 np0005558241 neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1[332653]: [NOTICE]   (332657) : New worker (332659) forked
Dec 13 03:37:54 np0005558241 neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1[332653]: [NOTICE]   (332657) : Loading success.
Dec 13 03:37:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:55.418 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:37:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:55.419 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:37:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:37:55.420 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:37:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2220: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Dec 13 03:37:55 np0005558241 nova_compute[248510]: 2025-12-13 08:37:55.731 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:37:55 np0005558241 nova_compute[248510]: 2025-12-13 08:37:55.978 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:37:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:37:56 np0005558241 nova_compute[248510]: 2025-12-13 08:37:56.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:37:56 np0005558241 nova_compute[248510]: 2025-12-13 08:37:56.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:37:56 np0005558241 nova_compute[248510]: 2025-12-13 08:37:56.805 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:37:56 np0005558241 nova_compute[248510]: 2025-12-13 08:37:56.805 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:37:56 np0005558241 nova_compute[248510]: 2025-12-13 08:37:56.805 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:37:56 np0005558241 nova_compute[248510]: 2025-12-13 08:37:56.806 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:37:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:37:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4143835827' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:37:57 np0005558241 nova_compute[248510]: 2025-12-13 08:37:57.381 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:37:57 np0005558241 nova_compute[248510]: 2025-12-13 08:37:57.472 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:37:57 np0005558241 nova_compute[248510]: 2025-12-13 08:37:57.473 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:37:57 np0005558241 nova_compute[248510]: 2025-12-13 08:37:57.477 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:37:57 np0005558241 nova_compute[248510]: 2025-12-13 08:37:57.477 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:37:57 np0005558241 nova_compute[248510]: 2025-12-13 08:37:57.645 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:37:57 np0005558241 nova_compute[248510]: 2025-12-13 08:37:57.720 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:37:57 np0005558241 nova_compute[248510]: 2025-12-13 08:37:57.722 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3714MB free_disk=59.921302248723805GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:37:57 np0005558241 nova_compute[248510]: 2025-12-13 08:37:57.722 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:37:57 np0005558241 nova_compute[248510]: 2025-12-13 08:37:57.723 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:37:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2221: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Dec 13 03:37:57 np0005558241 nova_compute[248510]: 2025-12-13 08:37:57.816 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9b486227-b98c-4393-9a3c-aae3e3c419a8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:37:57 np0005558241 nova_compute[248510]: 2025-12-13 08:37:57.816 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance fa9180aa-8387-44cb-9e70-535f0652e390 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:37:57 np0005558241 nova_compute[248510]: 2025-12-13 08:37:57.816 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:37:57 np0005558241 nova_compute[248510]: 2025-12-13 08:37:57.816 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:37:57 np0005558241 nova_compute[248510]: 2025-12-13 08:37:57.894 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:37:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:37:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/776613182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:37:58 np0005558241 nova_compute[248510]: 2025-12-13 08:37:58.452 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:37:58 np0005558241 nova_compute[248510]: 2025-12-13 08:37:58.459 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:37:58 np0005558241 nova_compute[248510]: 2025-12-13 08:37:58.479 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:37:58 np0005558241 nova_compute[248510]: 2025-12-13 08:37:58.514 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:37:58 np0005558241 nova_compute[248510]: 2025-12-13 08:37:58.515 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:37:59 np0005558241 nova_compute[248510]: 2025-12-13 08:37:59.515 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:37:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2222: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Dec 13 03:37:59 np0005558241 nova_compute[248510]: 2025-12-13 08:37:59.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:37:59 np0005558241 nova_compute[248510]: 2025-12-13 08:37:59.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:38:00 np0005558241 nova_compute[248510]: 2025-12-13 08:38:00.467 248514 DEBUG nova.compute.manager [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:00 np0005558241 nova_compute[248510]: 2025-12-13 08:38:00.511 248514 INFO nova.compute.manager [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] instance snapshotting#033[00m
Dec 13 03:38:00 np0005558241 nova_compute[248510]: 2025-12-13 08:38:00.512 248514 DEBUG nova.objects.instance [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'flavor' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:00 np0005558241 nova_compute[248510]: 2025-12-13 08:38:00.734 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:00 np0005558241 nova_compute[248510]: 2025-12-13 08:38:00.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:38:00 np0005558241 nova_compute[248510]: 2025-12-13 08:38:00.841 248514 INFO nova.virt.libvirt.driver [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Beginning live snapshot process#033[00m
Dec 13 03:38:00 np0005558241 nova_compute[248510]: 2025-12-13 08:38:00.920 248514 DEBUG nova.compute.manager [req-cc26685b-7995-410e-abda-01551f004f33 req-51ef4106-681d-4636-a509-8ed62ec555dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Received event network-vif-plugged-3b4d29b5-8158-4612-8f60-a6a57327d01c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:38:00 np0005558241 nova_compute[248510]: 2025-12-13 08:38:00.920 248514 DEBUG oslo_concurrency.lockutils [req-cc26685b-7995-410e-abda-01551f004f33 req-51ef4106-681d-4636-a509-8ed62ec555dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:00 np0005558241 nova_compute[248510]: 2025-12-13 08:38:00.920 248514 DEBUG oslo_concurrency.lockutils [req-cc26685b-7995-410e-abda-01551f004f33 req-51ef4106-681d-4636-a509-8ed62ec555dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:00 np0005558241 nova_compute[248510]: 2025-12-13 08:38:00.921 248514 DEBUG oslo_concurrency.lockutils [req-cc26685b-7995-410e-abda-01551f004f33 req-51ef4106-681d-4636-a509-8ed62ec555dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:00 np0005558241 nova_compute[248510]: 2025-12-13 08:38:00.921 248514 DEBUG nova.compute.manager [req-cc26685b-7995-410e-abda-01551f004f33 req-51ef4106-681d-4636-a509-8ed62ec555dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Processing event network-vif-plugged-3b4d29b5-8158-4612-8f60-a6a57327d01c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:38:00 np0005558241 nova_compute[248510]: 2025-12-13 08:38:00.922 248514 DEBUG nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:38:00 np0005558241 nova_compute[248510]: 2025-12-13 08:38:00.927 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615080.9268777, fa9180aa-8387-44cb-9e70-535f0652e390 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:38:00 np0005558241 nova_compute[248510]: 2025-12-13 08:38:00.927 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:38:00 np0005558241 nova_compute[248510]: 2025-12-13 08:38:00.930 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:38:00 np0005558241 nova_compute[248510]: 2025-12-13 08:38:00.947 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:00 np0005558241 nova_compute[248510]: 2025-12-13 08:38:00.948 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:00 np0005558241 nova_compute[248510]: 2025-12-13 08:38:00.972 248514 INFO nova.virt.libvirt.driver [-] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Instance spawned successfully.#033[00m
Dec 13 03:38:00 np0005558241 nova_compute[248510]: 2025-12-13 08:38:00.972 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:38:00 np0005558241 podman[332715]: 2025-12-13 08:38:00.992250404 +0000 UTC m=+0.060465642 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 03:38:01 np0005558241 podman[332714]: 2025-12-13 08:38:01.021015978 +0000 UTC m=+0.103023869 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.024 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.027 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.035 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.035 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.035 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.036 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.036 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.037 248514 DEBUG nova.virt.libvirt.driver [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:01 np0005558241 podman[332713]: 2025-12-13 08:38:01.040256716 +0000 UTC m=+0.119830947 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.051 248514 DEBUG nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.090 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.297 248514 INFO nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Took 16.38 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.297 248514 DEBUG nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.307 248514 DEBUG nova.virt.libvirt.imagebackend [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.315 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.315 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.323 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.323 248514 INFO nova.compute.claims [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.399 248514 INFO nova.compute.manager [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Took 17.57 seconds to build instance.#033[00m
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.459 248514 DEBUG oslo_concurrency.lockutils [None req-c21c61c5-9985-4f14-b3d3-c7fe139554be be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.546 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:01 np0005558241 nova_compute[248510]: 2025-12-13 08:38:01.616 248514 DEBUG nova.storage.rbd_utils [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(4de1c3c190d94cda92595db338170aee) on rbd image(9b486227-b98c-4393-9a3c-aae3e3c419a8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:38:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2223: 321 pgs: 321 active+clean; 167 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Dec 13 03:38:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Dec 13 03:38:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Dec 13 03:38:02 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.148 248514 DEBUG nova.storage.rbd_utils [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] cloning vms/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk@4de1c3c190d94cda92595db338170aee to images/bd9a009b-216b-49be-81a8-600047037026 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:38:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:38:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2030052450' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.207 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.213 248514 DEBUG nova.compute.provider_tree [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.245 248514 DEBUG nova.storage.rbd_utils [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] flattening images/bd9a009b-216b-49be-81a8-600047037026 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.286 248514 DEBUG nova.scheduler.client.report [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.334 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.335 248514 DEBUG nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.400 248514 DEBUG nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.401 248514 DEBUG nova.network.neutron [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.426 248514 INFO nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.456 248514 DEBUG nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.574 248514 DEBUG nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.577 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.580 248514 INFO nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Creating image(s)#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.600 248514 DEBUG nova.storage.rbd_utils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 6fb6c605-344c-4ed9-806d-96964b0474f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.623 248514 DEBUG nova.storage.rbd_utils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 6fb6c605-344c-4ed9-806d-96964b0474f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.655 248514 DEBUG nova.storage.rbd_utils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 6fb6c605-344c-4ed9-806d-96964b0474f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.660 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.692 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.696 248514 DEBUG nova.policy [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '302853b7f91745baadf52361a0a7d535', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c968daf58e624ebda00676d79f6bde96', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.739 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.739 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.740 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.740 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.758 248514 DEBUG nova.storage.rbd_utils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 6fb6c605-344c-4ed9-806d-96964b0474f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.761 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 6fb6c605-344c-4ed9-806d-96964b0474f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:02 np0005558241 nova_compute[248510]: 2025-12-13 08:38:02.801 248514 DEBUG nova.storage.rbd_utils [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] removing snapshot(4de1c3c190d94cda92595db338170aee) on rbd image(9b486227-b98c-4393-9a3c-aae3e3c419a8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec 13 03:38:03 np0005558241 nova_compute[248510]: 2025-12-13 08:38:03.061 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 6fb6c605-344c-4ed9-806d-96964b0474f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:03 np0005558241 nova_compute[248510]: 2025-12-13 08:38:03.085 248514 DEBUG nova.compute.manager [req-410f8055-5b23-4d3e-976e-b29b091ee14f req-a964e031-bbe0-4374-97dd-5f4bb04cbe49 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Received event network-vif-plugged-3b4d29b5-8158-4612-8f60-a6a57327d01c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:38:03 np0005558241 nova_compute[248510]: 2025-12-13 08:38:03.085 248514 DEBUG oslo_concurrency.lockutils [req-410f8055-5b23-4d3e-976e-b29b091ee14f req-a964e031-bbe0-4374-97dd-5f4bb04cbe49 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:03 np0005558241 nova_compute[248510]: 2025-12-13 08:38:03.085 248514 DEBUG oslo_concurrency.lockutils [req-410f8055-5b23-4d3e-976e-b29b091ee14f req-a964e031-bbe0-4374-97dd-5f4bb04cbe49 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:03 np0005558241 nova_compute[248510]: 2025-12-13 08:38:03.086 248514 DEBUG oslo_concurrency.lockutils [req-410f8055-5b23-4d3e-976e-b29b091ee14f req-a964e031-bbe0-4374-97dd-5f4bb04cbe49 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:03 np0005558241 nova_compute[248510]: 2025-12-13 08:38:03.086 248514 DEBUG nova.compute.manager [req-410f8055-5b23-4d3e-976e-b29b091ee14f req-a964e031-bbe0-4374-97dd-5f4bb04cbe49 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] No waiting events found dispatching network-vif-plugged-3b4d29b5-8158-4612-8f60-a6a57327d01c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:38:03 np0005558241 nova_compute[248510]: 2025-12-13 08:38:03.086 248514 WARNING nova.compute.manager [req-410f8055-5b23-4d3e-976e-b29b091ee14f req-a964e031-bbe0-4374-97dd-5f4bb04cbe49 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Received unexpected event network-vif-plugged-3b4d29b5-8158-4612-8f60-a6a57327d01c for instance with vm_state active and task_state None.#033[00m
Dec 13 03:38:03 np0005558241 nova_compute[248510]: 2025-12-13 08:38:03.112 248514 DEBUG nova.storage.rbd_utils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] resizing rbd image 6fb6c605-344c-4ed9-806d-96964b0474f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:38:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Dec 13 03:38:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Dec 13 03:38:03 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Dec 13 03:38:03 np0005558241 nova_compute[248510]: 2025-12-13 08:38:03.160 248514 DEBUG nova.storage.rbd_utils [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(snap) on rbd image(bd9a009b-216b-49be-81a8-600047037026) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:38:03 np0005558241 nova_compute[248510]: 2025-12-13 08:38:03.241 248514 DEBUG nova.objects.instance [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'migration_context' on Instance uuid 6fb6c605-344c-4ed9-806d-96964b0474f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:03 np0005558241 nova_compute[248510]: 2025-12-13 08:38:03.265 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:38:03 np0005558241 nova_compute[248510]: 2025-12-13 08:38:03.266 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Ensure instance console log exists: /var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:38:03 np0005558241 nova_compute[248510]: 2025-12-13 08:38:03.267 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:03 np0005558241 nova_compute[248510]: 2025-12-13 08:38:03.267 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:03 np0005558241 nova_compute[248510]: 2025-12-13 08:38:03.267 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2226: 321 pgs: 321 active+clean; 199 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.0 MiB/s wr, 51 op/s
Dec 13 03:38:03 np0005558241 nova_compute[248510]: 2025-12-13 08:38:03.773 248514 DEBUG nova.network.neutron [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Successfully created port: 3b8730f4-e225-4c0a-bf95-708c9c122a4f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.088 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.089 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.117 248514 DEBUG nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:38:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Dec 13 03:38:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Dec 13 03:38:04 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.248 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.249 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.261 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.262 248514 INFO nova.compute.claims [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.423 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.503 248514 DEBUG oslo_concurrency.lockutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquiring lock "fa9180aa-8387-44cb-9e70-535f0652e390" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.503 248514 DEBUG oslo_concurrency.lockutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.504 248514 DEBUG oslo_concurrency.lockutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquiring lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.504 248514 DEBUG oslo_concurrency.lockutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.504 248514 DEBUG oslo_concurrency.lockutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.506 248514 INFO nova.compute.manager [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Terminating instance#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.507 248514 DEBUG nova.compute.manager [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:38:04 np0005558241 kernel: tap3b4d29b5-81 (unregistering): left promiscuous mode
Dec 13 03:38:04 np0005558241 NetworkManager[50376]: <info>  [1765615084.5427] device (tap3b4d29b5-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:38:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:04Z|00859|binding|INFO|Releasing lport 3b4d29b5-8158-4612-8f60-a6a57327d01c from this chassis (sb_readonly=0)
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.550 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:04Z|00860|binding|INFO|Setting lport 3b4d29b5-8158-4612-8f60-a6a57327d01c down in Southbound
Dec 13 03:38:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:04Z|00861|binding|INFO|Removing iface tap3b4d29b5-81 ovn-installed in OVS
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.553 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.558 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:47:7f 10.100.0.11'], port_security=['fa:16:3e:07:47:7f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'fa9180aa-8387-44cb-9e70-535f0652e390', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4497dd13cb734db4999e9c01823dc0fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82b41d33-9cc9-41ca-a7e4-ff901fea742b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1169a38-50f8-4ffe-a49f-5841c99ba14f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3b4d29b5-8158-4612-8f60-a6a57327d01c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:38:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.560 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3b4d29b5-8158-4612-8f60-a6a57327d01c in datapath b0588191-fbf6-4ff4-8e6f-2ba53eda08f1 unbound from our chassis#033[00m
Dec 13 03:38:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.563 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b0588191-fbf6-4ff4-8e6f-2ba53eda08f1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:38:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.564 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4449b9ea-4d84-4b4d-8693-77d5af4c1b5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.565 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1 namespace which is not needed anymore#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.567 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:04 np0005558241 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000056.scope: Deactivated successfully.
Dec 13 03:38:04 np0005558241 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d00000056.scope: Consumed 4.023s CPU time.
Dec 13 03:38:04 np0005558241 systemd-machined[210538]: Machine qemu-106-instance-00000056 terminated.
Dec 13 03:38:04 np0005558241 neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1[332653]: [NOTICE]   (332657) : haproxy version is 2.8.14-c23fe91
Dec 13 03:38:04 np0005558241 neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1[332653]: [NOTICE]   (332657) : path to executable is /usr/sbin/haproxy
Dec 13 03:38:04 np0005558241 neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1[332653]: [WARNING]  (332657) : Exiting Master process...
Dec 13 03:38:04 np0005558241 neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1[332653]: [ALERT]    (332657) : Current worker (332659) exited with code 143 (Terminated)
Dec 13 03:38:04 np0005558241 neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1[332653]: [WARNING]  (332657) : All workers exited. Exiting... (0)
Dec 13 03:38:04 np0005558241 systemd[1]: libpod-064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac.scope: Deactivated successfully.
Dec 13 03:38:04 np0005558241 podman[333149]: 2025-12-13 08:38:04.711570489 +0000 UTC m=+0.040072767 container died 064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:04 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5316c9592aac0df80f7827a4e0b5f1ce24a99c5a9f3cbdd4429ab127d248b430-merged.mount: Deactivated successfully.
Dec 13 03:38:04 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac-userdata-shm.mount: Deactivated successfully.
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.747 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.752 248514 INFO nova.virt.libvirt.driver [-] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Instance destroyed successfully.#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.753 248514 DEBUG nova.objects.instance [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lazy-loading 'resources' on Instance uuid fa9180aa-8387-44cb-9e70-535f0652e390 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:04 np0005558241 podman[333149]: 2025-12-13 08:38:04.757232322 +0000 UTC m=+0.085734610 container cleanup 064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.774 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.775 248514 DEBUG nova.virt.libvirt.vif [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:37:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-354314434',display_name='tempest-ServerPasswordTestJSON-server-354314434',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-354314434',id=86,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:38:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4497dd13cb734db4999e9c01823dc0fc',ramdisk_id='',reservation_id='r-dolnm7xr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-1689122570',owner_user_name='tempest-ServerPasswordTestJSON-1689122570-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:38:04Z,user_data=None,user_id='be7c161a509a4392b0b424b31178f424',uuid=fa9180aa-8387-44cb-9e70-535f0652e390,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.776 248514 DEBUG nova.network.os_vif_util [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Converting VIF {"id": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "address": "fa:16:3e:07:47:7f", "network": {"id": "b0588191-fbf6-4ff4-8e6f-2ba53eda08f1", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-340962445-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4497dd13cb734db4999e9c01823dc0fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b4d29b5-81", "ovs_interfaceid": "3b4d29b5-8158-4612-8f60-a6a57327d01c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:38:04 np0005558241 systemd[1]: libpod-conmon-064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac.scope: Deactivated successfully.
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.777 248514 DEBUG nova.network.os_vif_util [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:47:7f,bridge_name='br-int',has_traffic_filtering=True,id=3b4d29b5-8158-4612-8f60-a6a57327d01c,network=Network(b0588191-fbf6-4ff4-8e6f-2ba53eda08f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b4d29b5-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.777 248514 DEBUG os_vif [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:47:7f,bridge_name='br-int',has_traffic_filtering=True,id=3b4d29b5-8158-4612-8f60-a6a57327d01c,network=Network(b0588191-fbf6-4ff4-8e6f-2ba53eda08f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b4d29b5-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.779 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.779 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b4d29b5-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.781 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.785 248514 INFO os_vif [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:47:7f,bridge_name='br-int',has_traffic_filtering=True,id=3b4d29b5-8158-4612-8f60-a6a57327d01c,network=Network(b0588191-fbf6-4ff4-8e6f-2ba53eda08f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b4d29b5-81')#033[00m
Dec 13 03:38:04 np0005558241 podman[333191]: 2025-12-13 08:38:04.837162797 +0000 UTC m=+0.044483495 container remove 064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:38:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.843 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[64e45504-ec32-4457-ac65-f58d4ce805b4]: (4, ('Sat Dec 13 08:38:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1 (064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac)\n064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac\nSat Dec 13 08:38:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1 (064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac)\n064c0b3cbc3bbb4c1d489c520f7be3e93a1c1a2796ffbf3dc9b17a77da2bb7ac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.845 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[672a7193-c261-48a4-90c1-93377de0015a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.846 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0588191-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.848 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:04 np0005558241 kernel: tapb0588191-f0: left promiscuous mode
Dec 13 03:38:04 np0005558241 nova_compute[248510]: 2025-12-13 08:38:04.863 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.866 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db41451e-2bfe-4cb4-88c6-70f2762e6c25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.882 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[461cd323-6d2a-4c54-a443-542900ba44df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.883 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[75b17f18-d9e1-4010-bca1-92aebfcfecd6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.900 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cb7f520d-a208-40a7-848a-e1a107f2023e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765100, 'reachable_time': 24787, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333223, 'error': None, 'target': 'ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.902 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b0588191-fbf6-4ff4-8e6f-2ba53eda08f1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:38:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:04.903 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e45f4e06-f3da-4673-904a-2da79dd9e07e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:04 np0005558241 systemd[1]: run-netns-ovnmeta\x2db0588191\x2dfbf6\x2d4ff4\x2d8e6f\x2d2ba53eda08f1.mount: Deactivated successfully.
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.003 248514 DEBUG nova.network.neutron [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Successfully updated port: 3b8730f4-e225-4c0a-bf95-708c9c122a4f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.034 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "refresh_cache-6fb6c605-344c-4ed9-806d-96964b0474f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.035 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquired lock "refresh_cache-6fb6c605-344c-4ed9-806d-96964b0474f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.035 248514 DEBUG nova.network.neutron [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.047 248514 INFO nova.virt.libvirt.driver [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Deleting instance files /var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390_del#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.048 248514 INFO nova.virt.libvirt.driver [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Deletion of /var/lib/nova/instances/fa9180aa-8387-44cb-9e70-535f0652e390_del complete#033[00m
Dec 13 03:38:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:38:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1255841871' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.073 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.078 248514 DEBUG nova.compute.provider_tree [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.097 248514 DEBUG nova.scheduler.client.report [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.103 248514 INFO nova.compute.manager [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.104 248514 DEBUG oslo.service.loopingcall [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.104 248514 DEBUG nova.compute.manager [-] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.104 248514 DEBUG nova.network.neutron [-] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.127 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.127 248514 DEBUG nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.196 248514 DEBUG nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.196 248514 DEBUG nova.network.neutron [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.218 248514 INFO nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.236 248514 DEBUG nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.351 248514 DEBUG nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.353 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.354 248514 INFO nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Creating image(s)#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.380 248514 DEBUG nova.storage.rbd_utils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.404 248514 DEBUG nova.storage.rbd_utils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.425 248514 DEBUG nova.storage.rbd_utils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.429 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.461 248514 DEBUG nova.network.neutron [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.471 248514 DEBUG nova.compute.manager [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Received event network-vif-unplugged-3b4d29b5-8158-4612-8f60-a6a57327d01c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.471 248514 DEBUG oslo_concurrency.lockutils [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.471 248514 DEBUG oslo_concurrency.lockutils [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.472 248514 DEBUG oslo_concurrency.lockutils [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.472 248514 DEBUG nova.compute.manager [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] No waiting events found dispatching network-vif-unplugged-3b4d29b5-8158-4612-8f60-a6a57327d01c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.472 248514 DEBUG nova.compute.manager [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Received event network-vif-unplugged-3b4d29b5-8158-4612-8f60-a6a57327d01c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.472 248514 DEBUG nova.compute.manager [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-changed-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.472 248514 DEBUG nova.compute.manager [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Refreshing instance network info cache due to event network-changed-3b8730f4-e225-4c0a-bf95-708c9c122a4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.473 248514 DEBUG oslo_concurrency.lockutils [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-6fb6c605-344c-4ed9-806d-96964b0474f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.502 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.502 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.503 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.503 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.521 248514 DEBUG nova.storage.rbd_utils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.524 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 88cb43c1-f01b-4098-84ea-d372176a0e20_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.718 248514 DEBUG nova.policy [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '302853b7f91745baadf52361a0a7d535', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c968daf58e624ebda00676d79f6bde96', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:38:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2228: 321 pgs: 321 active+clean; 270 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 9.5 MiB/s wr, 332 op/s
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.736 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.926 248514 INFO nova.virt.libvirt.driver [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Snapshot image upload complete#033[00m
Dec 13 03:38:05 np0005558241 nova_compute[248510]: 2025-12-13 08:38:05.927 248514 INFO nova.compute.manager [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Took 5.38 seconds to snapshot the instance on the hypervisor.#033[00m
Dec 13 03:38:06 np0005558241 nova_compute[248510]: 2025-12-13 08:38:06.194 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 88cb43c1-f01b-4098-84ea-d372176a0e20_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:06 np0005558241 nova_compute[248510]: 2025-12-13 08:38:06.282 248514 DEBUG nova.storage.rbd_utils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] resizing rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:38:06 np0005558241 nova_compute[248510]: 2025-12-13 08:38:06.405 248514 DEBUG nova.objects.instance [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'migration_context' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:06 np0005558241 nova_compute[248510]: 2025-12-13 08:38:06.414 248514 DEBUG nova.compute.manager [None req-1b134e74-9daa-40b7-9005-42ec1cd7483f 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Dec 13 03:38:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:38:06 np0005558241 nova_compute[248510]: 2025-12-13 08:38:06.488 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:38:06 np0005558241 nova_compute[248510]: 2025-12-13 08:38:06.488 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Ensure instance console log exists: /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:38:06 np0005558241 nova_compute[248510]: 2025-12-13 08:38:06.489 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:06 np0005558241 nova_compute[248510]: 2025-12-13 08:38:06.489 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:06 np0005558241 nova_compute[248510]: 2025-12-13 08:38:06.490 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:06 np0005558241 nova_compute[248510]: 2025-12-13 08:38:06.728 248514 DEBUG nova.network.neutron [-] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:38:06 np0005558241 nova_compute[248510]: 2025-12-13 08:38:06.754 248514 INFO nova.compute.manager [-] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Took 1.65 seconds to deallocate network for instance.#033[00m
Dec 13 03:38:06 np0005558241 nova_compute[248510]: 2025-12-13 08:38:06.836 248514 DEBUG oslo_concurrency.lockutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:06 np0005558241 nova_compute[248510]: 2025-12-13 08:38:06.836 248514 DEBUG oslo_concurrency.lockutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.024 248514 DEBUG oslo_concurrency.processutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.467 248514 DEBUG nova.network.neutron [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Updating instance_info_cache with network_info: [{"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:38:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:38:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2392605823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.570 248514 DEBUG oslo_concurrency.processutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.578 248514 DEBUG nova.compute.provider_tree [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.693 248514 DEBUG nova.compute.manager [req-9c89f976-ee75-4a06-934e-a774ab177cb7 req-2af585e1-98dd-49fb-bff8-19e52c8bedce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Received event network-vif-plugged-3b4d29b5-8158-4612-8f60-a6a57327d01c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.694 248514 DEBUG oslo_concurrency.lockutils [req-9c89f976-ee75-4a06-934e-a774ab177cb7 req-2af585e1-98dd-49fb-bff8-19e52c8bedce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.695 248514 DEBUG oslo_concurrency.lockutils [req-9c89f976-ee75-4a06-934e-a774ab177cb7 req-2af585e1-98dd-49fb-bff8-19e52c8bedce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.695 248514 DEBUG oslo_concurrency.lockutils [req-9c89f976-ee75-4a06-934e-a774ab177cb7 req-2af585e1-98dd-49fb-bff8-19e52c8bedce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.695 248514 DEBUG nova.compute.manager [req-9c89f976-ee75-4a06-934e-a774ab177cb7 req-2af585e1-98dd-49fb-bff8-19e52c8bedce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] No waiting events found dispatching network-vif-plugged-3b4d29b5-8158-4612-8f60-a6a57327d01c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.695 248514 WARNING nova.compute.manager [req-9c89f976-ee75-4a06-934e-a774ab177cb7 req-2af585e1-98dd-49fb-bff8-19e52c8bedce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Received unexpected event network-vif-plugged-3b4d29b5-8158-4612-8f60-a6a57327d01c for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.696 248514 DEBUG nova.compute.manager [req-9c89f976-ee75-4a06-934e-a774ab177cb7 req-2af585e1-98dd-49fb-bff8-19e52c8bedce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Received event network-vif-deleted-3b4d29b5-8158-4612-8f60-a6a57327d01c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.698 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Releasing lock "refresh_cache-6fb6c605-344c-4ed9-806d-96964b0474f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.698 248514 DEBUG nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Instance network_info: |[{"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.699 248514 DEBUG oslo_concurrency.lockutils [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-6fb6c605-344c-4ed9-806d-96964b0474f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.699 248514 DEBUG nova.network.neutron [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Refreshing network info cache for port 3b8730f4-e225-4c0a-bf95-708c9c122a4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.702 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Start _get_guest_xml network_info=[{"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.706 248514 DEBUG nova.scheduler.client.report [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.713 248514 WARNING nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.718 248514 DEBUG nova.virt.libvirt.host [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.719 248514 DEBUG nova.virt.libvirt.host [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.724 248514 DEBUG nova.virt.libvirt.host [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.725 248514 DEBUG nova.virt.libvirt.host [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.725 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.725 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.726 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.726 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.727 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.727 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.727 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.727 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.727 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.728 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.728 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.728 248514 DEBUG nova.virt.hardware [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:38:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2229: 321 pgs: 321 active+clean; 270 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 9.5 MiB/s wr, 332 op/s
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.732 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.772 248514 DEBUG nova.network.neutron [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Successfully created port: 7527f90e-f037-4a94-a011-f952b6e72722 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.782 248514 DEBUG oslo_concurrency.lockutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.823 248514 INFO nova.scheduler.client.report [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Deleted allocations for instance fa9180aa-8387-44cb-9e70-535f0652e390#033[00m
Dec 13 03:38:07 np0005558241 nova_compute[248510]: 2025-12-13 08:38:07.931 248514 DEBUG oslo_concurrency.lockutils [None req-c9ee1f26-ffcf-48d2-8917-f07d21877ca5 be7c161a509a4392b0b424b31178f424 4497dd13cb734db4999e9c01823dc0fc - - default default] Lock "fa9180aa-8387-44cb-9e70-535f0652e390" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:38:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2241218707' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.311 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.332 248514 DEBUG nova.storage.rbd_utils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 6fb6c605-344c-4ed9-806d-96964b0474f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.337 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.624 248514 DEBUG nova.compute.manager [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.665 248514 INFO nova.compute.manager [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] instance snapshotting#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.666 248514 DEBUG nova.objects.instance [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'flavor' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:38:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1701116610' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.935 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.936 248514 DEBUG nova.virt.libvirt.vif [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:37:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1394190380',display_name='tempest-ServerRescueNegativeTestJSON-server-1394190380',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1394190380',id=87,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c968daf58e624ebda00676d79f6bde96',ramdisk_id='',reservation_id='r-689d01nr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1030815648',owner_user_name='tempest-ServerRescueNegativeTestJSON-1030815648-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:38:02Z,user_data=None,user_id='302853b7f91745baadf52361a0a7d535',uuid=6fb6c605-344c-4ed9-806d-96964b0474f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.936 248514 DEBUG nova.network.os_vif_util [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converting VIF {"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.937 248514 DEBUG nova.network.os_vif_util [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:9a:26,bridge_name='br-int',has_traffic_filtering=True,id=3b8730f4-e225-4c0a-bf95-708c9c122a4f,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b8730f4-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.938 248514 DEBUG nova.objects.instance [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6fb6c605-344c-4ed9-806d-96964b0474f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.960 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:38:08 np0005558241 nova_compute[248510]:  <uuid>6fb6c605-344c-4ed9-806d-96964b0474f9</uuid>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:  <name>instance-00000057</name>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1394190380</nova:name>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:38:07</nova:creationTime>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:38:08 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:        <nova:user uuid="302853b7f91745baadf52361a0a7d535">tempest-ServerRescueNegativeTestJSON-1030815648-project-member</nova:user>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:        <nova:project uuid="c968daf58e624ebda00676d79f6bde96">tempest-ServerRescueNegativeTestJSON-1030815648</nova:project>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:        <nova:port uuid="3b8730f4-e225-4c0a-bf95-708c9c122a4f">
Dec 13 03:38:08 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <entry name="serial">6fb6c605-344c-4ed9-806d-96964b0474f9</entry>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <entry name="uuid">6fb6c605-344c-4ed9-806d-96964b0474f9</entry>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/6fb6c605-344c-4ed9-806d-96964b0474f9_disk">
Dec 13 03:38:08 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:38:08 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/6fb6c605-344c-4ed9-806d-96964b0474f9_disk.config">
Dec 13 03:38:08 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:38:08 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:e7:9a:26"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <target dev="tap3b8730f4-e2"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9/console.log" append="off"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:38:08 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:38:08 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:38:08 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:38:08 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.961 248514 DEBUG nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Preparing to wait for external event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.961 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.962 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.962 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.963 248514 DEBUG nova.virt.libvirt.vif [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:37:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1394190380',display_name='tempest-ServerRescueNegativeTestJSON-server-1394190380',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1394190380',id=87,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c968daf58e624ebda00676d79f6bde96',ramdisk_id='',reservation_id='r-689d01nr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1030815648',owner_user_name='tempest-ServerRescueNegativeTestJSON-1030815648-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:38:02Z,user_data=None,user_id='302853b7f91745baadf52361a0a7d535',uuid=6fb6c605-344c-4ed9-806d-96964b0474f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.963 248514 DEBUG nova.network.os_vif_util [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converting VIF {"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.963 248514 DEBUG nova.network.os_vif_util [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:9a:26,bridge_name='br-int',has_traffic_filtering=True,id=3b8730f4-e225-4c0a-bf95-708c9c122a4f,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b8730f4-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.964 248514 DEBUG os_vif [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:9a:26,bridge_name='br-int',has_traffic_filtering=True,id=3b8730f4-e225-4c0a-bf95-708c9c122a4f,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b8730f4-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.964 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.965 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.965 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.969 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.970 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b8730f4-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.970 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b8730f4-e2, col_values=(('external_ids', {'iface-id': '3b8730f4-e225-4c0a-bf95-708c9c122a4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:9a:26', 'vm-uuid': '6fb6c605-344c-4ed9-806d-96964b0474f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:08 np0005558241 NetworkManager[50376]: <info>  [1765615088.9723] manager: (tap3b8730f4-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.974 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.976 248514 INFO nova.virt.libvirt.driver [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Beginning live snapshot process#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.982 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:08 np0005558241 nova_compute[248510]: 2025-12-13 08:38:08.983 248514 INFO os_vif [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:9a:26,bridge_name='br-int',has_traffic_filtering=True,id=3b8730f4-e225-4c0a-bf95-708c9c122a4f,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b8730f4-e2')#033[00m
Dec 13 03:38:09 np0005558241 nova_compute[248510]: 2025-12-13 08:38:09.116 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:38:09 np0005558241 nova_compute[248510]: 2025-12-13 08:38:09.116 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:38:09 np0005558241 nova_compute[248510]: 2025-12-13 08:38:09.116 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No VIF found with MAC fa:16:3e:e7:9a:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:38:09 np0005558241 nova_compute[248510]: 2025-12-13 08:38:09.117 248514 INFO nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Using config drive#033[00m
Dec 13 03:38:09 np0005558241 nova_compute[248510]: 2025-12-13 08:38:09.139 248514 DEBUG nova.storage.rbd_utils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 6fb6c605-344c-4ed9-806d-96964b0474f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:09 np0005558241 nova_compute[248510]: 2025-12-13 08:38:09.188 248514 DEBUG nova.virt.libvirt.imagebackend [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec 13 03:38:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:38:09
Dec 13 03:38:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:38:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:38:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'vms', 'volumes', 'backups']
Dec 13 03:38:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:38:09 np0005558241 nova_compute[248510]: 2025-12-13 08:38:09.727 248514 DEBUG nova.storage.rbd_utils [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(c2d837b896cf41d2a900caaedc6b42bf) on rbd image(9b486227-b98c-4393-9a3c-aae3e3c419a8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:38:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2230: 321 pgs: 321 active+clean; 291 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 11 MiB/s wr, 302 op/s
Dec 13 03:38:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:38:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:38:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:38:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:38:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:38:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.219 248514 DEBUG nova.network.neutron [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Successfully updated port: 7527f90e-f037-4a94-a011-f952b6e72722 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.248 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "refresh_cache-88cb43c1-f01b-4098-84ea-d372176a0e20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.249 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquired lock "refresh_cache-88cb43c1-f01b-4098-84ea-d372176a0e20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.249 248514 DEBUG nova.network.neutron [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.396 248514 DEBUG nova.compute.manager [req-dcb75a97-d3ba-4e40-93b3-82bbc4c24622 req-b9eca1d4-eb56-4431-b12b-899ee579b1fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-changed-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.397 248514 DEBUG nova.compute.manager [req-dcb75a97-d3ba-4e40-93b3-82bbc4c24622 req-b9eca1d4-eb56-4431-b12b-899ee579b1fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Refreshing instance network info cache due to event network-changed-7527f90e-f037-4a94-a011-f952b6e72722. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.397 248514 DEBUG oslo_concurrency.lockutils [req-dcb75a97-d3ba-4e40-93b3-82bbc4c24622 req-b9eca1d4-eb56-4431-b12b-899ee579b1fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-88cb43c1-f01b-4098-84ea-d372176a0e20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.594 248514 INFO nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Creating config drive at /var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9/disk.config
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.601 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb1nkvx5o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:38:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Dec 13 03:38:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Dec 13 03:38:10 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.642 248514 DEBUG nova.network.neutron [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.649 248514 DEBUG nova.network.neutron [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Updated VIF entry in instance network info cache for port 3b8730f4-e225-4c0a-bf95-708c9c122a4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.650 248514 DEBUG nova.network.neutron [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Updating instance_info_cache with network_info: [{"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.670 248514 DEBUG nova.storage.rbd_utils [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] cloning vms/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk@c2d837b896cf41d2a900caaedc6b42bf to images/54456b38-e7f9-43d3-a9dc-39087af61485 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 03:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:38:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.705 248514 DEBUG oslo_concurrency.lockutils [req-8b4919eb-b647-40e7-9464-c6f55b8cbd50 req-8250caee-f738-4b99-ae88-4fcf5cfaceff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-6fb6c605-344c-4ed9-806d-96964b0474f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.738 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.745 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb1nkvx5o" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.776 248514 DEBUG nova.storage.rbd_utils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 6fb6c605-344c-4ed9-806d-96964b0474f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.781 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9/disk.config 6fb6c605-344c-4ed9-806d-96964b0474f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.822 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:38:10 np0005558241 nova_compute[248510]: 2025-12-13 08:38:10.841 248514 DEBUG nova.storage.rbd_utils [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] flattening images/54456b38-e7f9-43d3-a9dc-39087af61485 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 03:38:11 np0005558241 nova_compute[248510]: 2025-12-13 08:38:11.114 248514 DEBUG oslo_concurrency.processutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9/disk.config 6fb6c605-344c-4ed9-806d-96964b0474f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:38:11 np0005558241 nova_compute[248510]: 2025-12-13 08:38:11.115 248514 INFO nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Deleting local config drive /var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9/disk.config because it was imported into RBD.
Dec 13 03:38:11 np0005558241 kernel: tap3b8730f4-e2: entered promiscuous mode
Dec 13 03:38:11 np0005558241 NetworkManager[50376]: <info>  [1765615091.1621] manager: (tap3b8730f4-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/374)
Dec 13 03:38:11 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:11Z|00862|binding|INFO|Claiming lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f for this chassis.
Dec 13 03:38:11 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:11Z|00863|binding|INFO|3b8730f4-e225-4c0a-bf95-708c9c122a4f: Claiming fa:16:3e:e7:9a:26 10.100.0.6
Dec 13 03:38:11 np0005558241 nova_compute[248510]: 2025-12-13 08:38:11.163 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:38:11 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:11Z|00864|binding|INFO|Setting lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f ovn-installed in OVS
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.181 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:9a:26 10.100.0.6'], port_security=['fa:16:3e:e7:9a:26 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6fb6c605-344c-4ed9-806d-96964b0474f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f249527d-f9e6-43ce-a178-f71fc1d38891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c968daf58e624ebda00676d79f6bde96', 'neutron:revision_number': '2', 'neutron:security_group_ids': '10b8d803-068c-4438-9c54-bb9fca806dca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83315ce7-c99d-41f6-afce-aabb37f0e27c, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3b8730f4-e225-4c0a-bf95-708c9c122a4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:38:11 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:11Z|00865|binding|INFO|Setting lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f up in Southbound
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.183 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3b8730f4-e225-4c0a-bf95-708c9c122a4f in datapath f249527d-f9e6-43ce-a178-f71fc1d38891 bound to our chassis
Dec 13 03:38:11 np0005558241 nova_compute[248510]: 2025-12-13 08:38:11.181 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:38:11 np0005558241 nova_compute[248510]: 2025-12-13 08:38:11.183 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.185 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f249527d-f9e6-43ce-a178-f71fc1d38891
Dec 13 03:38:11 np0005558241 systemd-udevd[333657]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.196 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d454df-17e2-4374-919d-cd4bb0877447]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.197 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf249527d-f1 in ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.199 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf249527d-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.199 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d137f9ac-27d4-4116-953c-38f2d76f3e25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.200 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1425ceff-bcf5-4b8f-b4a6-8daa865aa041]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:38:11 np0005558241 systemd-machined[210538]: New machine qemu-107-instance-00000057.
Dec 13 03:38:11 np0005558241 NetworkManager[50376]: <info>  [1765615091.2077] device (tap3b8730f4-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:38:11 np0005558241 NetworkManager[50376]: <info>  [1765615091.2086] device (tap3b8730f4-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:38:11 np0005558241 systemd[1]: Started Virtual Machine qemu-107-instance-00000057.
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.211 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[9203f856-5441-4900-91db-f27860424ea4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.226 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[750b0465-ca13-43b2-b76b-48a44840596a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.255 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[731c8516-4026-4ca8-8dc5-1725a7c807ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.260 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ffbb41b8-1a62-414f-b722-9c6670ff16f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:38:11 np0005558241 NetworkManager[50376]: <info>  [1765615091.2609] manager: (tapf249527d-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/375)
Dec 13 03:38:11 np0005558241 nova_compute[248510]: 2025-12-13 08:38:11.297 248514 DEBUG nova.storage.rbd_utils [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] removing snapshot(c2d837b896cf41d2a900caaedc6b42bf) on rbd image(9b486227-b98c-4393-9a3c-aae3e3c419a8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.303 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[47d2a422-2e5e-4a19-b18c-02ed97a34160]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.306 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6e7ae40b-a82e-4e9d-a23d-0a6de0602ba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:38:11 np0005558241 NetworkManager[50376]: <info>  [1765615091.3285] device (tapf249527d-f0): carrier: link connected
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.332 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0154d072-2b6c-4e1c-83c6-2ee57b85645f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.349 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[91ef3c13-b95d-45dd-93e6-fc7c9d880c9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf249527d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:d2:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766854, 'reachable_time': 22354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333707, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.366 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[131f2321-ed77-4e34-a603-1b9e5e690d0a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe15:d2c4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766854, 'tstamp': 766854}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333708, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.385 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[af3c0be6-7d92-4988-a8e3-5b16733c3f5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf249527d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:d2:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766854, 'reachable_time': 22354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333709, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.420 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5b0d51-cba2-4bb5-9a1d-93e061e4f6f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.473 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c71ccb06-39d5-4297-b318-7ff9c0cceff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.474 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf249527d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.475 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.475 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf249527d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:38:11 np0005558241 NetworkManager[50376]: <info>  [1765615091.4782] manager: (tapf249527d-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Dec 13 03:38:11 np0005558241 nova_compute[248510]: 2025-12-13 08:38:11.478 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:11 np0005558241 kernel: tapf249527d-f0: entered promiscuous mode
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.482 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf249527d-f0, col_values=(('external_ids', {'iface-id': '238a7791-e0e6-4f94-b696-bd1f6886a564'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:38:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Dec 13 03:38:11 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:11Z|00866|binding|INFO|Releasing lport 238a7791-e0e6-4f94-b696-bd1f6886a564 from this chassis (sb_readonly=0)
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.485 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f249527d-f9e6-43ce-a178-f71fc1d38891.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f249527d-f9e6-43ce-a178-f71fc1d38891.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.485 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[68dbe151-e804-4535-8fdb-7d5cea1170fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.486 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-f249527d-f9e6-43ce-a178-f71fc1d38891
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/f249527d-f9e6-43ce-a178-f71fc1d38891.pid.haproxy
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID f249527d-f9e6-43ce-a178-f71fc1d38891
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:38:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:11.487 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'env', 'PROCESS_TAG=haproxy-f249527d-f9e6-43ce-a178-f71fc1d38891', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f249527d-f9e6-43ce-a178-f71fc1d38891.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:38:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Dec 13 03:38:11 np0005558241 nova_compute[248510]: 2025-12-13 08:38:11.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:11 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Dec 13 03:38:11 np0005558241 nova_compute[248510]: 2025-12-13 08:38:11.501 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:11 np0005558241 nova_compute[248510]: 2025-12-13 08:38:11.519 248514 DEBUG nova.storage.rbd_utils [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(snap) on rbd image(54456b38-e7f9-43d3-a9dc-39087af61485) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:38:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2233: 321 pgs: 321 active+clean; 292 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.3 MiB/s wr, 261 op/s
Dec 13 03:38:11 np0005558241 podman[333758]: 2025-12-13 08:38:11.875018073 +0000 UTC m=+0.060168806 container create a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 03:38:11 np0005558241 nova_compute[248510]: 2025-12-13 08:38:11.924 248514 DEBUG nova.compute.manager [req-183ddfb5-ea7a-4517-b643-aecee202569d req-3d17a553-e96e-4748-8145-b6dda46ec102 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:38:11 np0005558241 nova_compute[248510]: 2025-12-13 08:38:11.926 248514 DEBUG oslo_concurrency.lockutils [req-183ddfb5-ea7a-4517-b643-aecee202569d req-3d17a553-e96e-4748-8145-b6dda46ec102 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:11 np0005558241 nova_compute[248510]: 2025-12-13 08:38:11.927 248514 DEBUG oslo_concurrency.lockutils [req-183ddfb5-ea7a-4517-b643-aecee202569d req-3d17a553-e96e-4748-8145-b6dda46ec102 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:11 np0005558241 nova_compute[248510]: 2025-12-13 08:38:11.927 248514 DEBUG oslo_concurrency.lockutils [req-183ddfb5-ea7a-4517-b643-aecee202569d req-3d17a553-e96e-4748-8145-b6dda46ec102 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:11 np0005558241 nova_compute[248510]: 2025-12-13 08:38:11.927 248514 DEBUG nova.compute.manager [req-183ddfb5-ea7a-4517-b643-aecee202569d req-3d17a553-e96e-4748-8145-b6dda46ec102 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Processing event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:38:11 np0005558241 podman[333758]: 2025-12-13 08:38:11.838711991 +0000 UTC m=+0.023862774 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:38:11 np0005558241 systemd[1]: Started libpod-conmon-a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441.scope.
Dec 13 03:38:11 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:38:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1630a06e863d3fc2328692d0f3a97579d4a2f822414283bd74ab2bb685b53f54/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:38:11 np0005558241 podman[333758]: 2025-12-13 08:38:11.988381538 +0000 UTC m=+0.173532251 container init a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 03:38:11 np0005558241 podman[333758]: 2025-12-13 08:38:11.993345091 +0000 UTC m=+0.178495784 container start a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 03:38:12 np0005558241 neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891[333774]: [NOTICE]   (333778) : New worker (333780) forked
Dec 13 03:38:12 np0005558241 neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891[333774]: [NOTICE]   (333778) : Loading success.
Dec 13 03:38:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Dec 13 03:38:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Dec 13 03:38:12 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.718 248514 DEBUG nova.network.neutron [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Updating instance_info_cache with network_info: [{"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.966 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Releasing lock "refresh_cache-88cb43c1-f01b-4098-84ea-d372176a0e20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.966 248514 DEBUG nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance network_info: |[{"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.966 248514 DEBUG oslo_concurrency.lockutils [req-dcb75a97-d3ba-4e40-93b3-82bbc4c24622 req-b9eca1d4-eb56-4431-b12b-899ee579b1fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-88cb43c1-f01b-4098-84ea-d372176a0e20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.967 248514 DEBUG nova.network.neutron [req-dcb75a97-d3ba-4e40-93b3-82bbc4c24622 req-b9eca1d4-eb56-4431-b12b-899ee579b1fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Refreshing network info cache for port 7527f90e-f037-4a94-a011-f952b6e72722 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.970 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Start _get_guest_xml network_info=[{"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.976 248514 WARNING nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.983 248514 DEBUG nova.virt.libvirt.host [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.983 248514 DEBUG nova.virt.libvirt.host [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.989 248514 DEBUG nova.virt.libvirt.host [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.989 248514 DEBUG nova.virt.libvirt.host [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.990 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.990 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.991 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.991 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.991 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.991 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.992 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.992 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.992 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.992 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.993 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.993 248514 DEBUG nova.virt.hardware [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:38:12 np0005558241 nova_compute[248510]: 2025-12-13 08:38:12.996 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.041 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615093.0412018, 6fb6c605-344c-4ed9-806d-96964b0474f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.042 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] VM Started (Lifecycle Event)#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.044 248514 DEBUG nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.049 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.053 248514 INFO nova.virt.libvirt.driver [-] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Instance spawned successfully.#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.053 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.072 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.081 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.081 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.082 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.082 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.083 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.083 248514 DEBUG nova.virt.libvirt.driver [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.092 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.120 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.121 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615093.0438383, 6fb6c605-344c-4ed9-806d-96964b0474f9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.121 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.192 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.196 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615093.046616, 6fb6c605-344c-4ed9-806d-96964b0474f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.196 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.208 248514 INFO nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Took 10.63 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.208 248514 DEBUG nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.243 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.253 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.280 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.295 248514 INFO nova.compute.manager [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Took 12.00 seconds to build instance.#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.318 248514 DEBUG oslo_concurrency.lockutils [None req-a6889683-b1d8-4286-94af-a46204133660 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:38:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/161677277' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.630 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.673 248514 DEBUG nova.storage.rbd_utils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.680 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2235: 321 pgs: 321 active+clean; 329 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 8.6 MiB/s wr, 160 op/s
Dec 13 03:38:13 np0005558241 nova_compute[248510]: 2025-12-13 08:38:13.973 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:38:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3398371592' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.299 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.301 248514 DEBUG nova.virt.libvirt.vif [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:38:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1217173991',display_name='tempest-ServerRescueNegativeTestJSON-server-1217173991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1217173991',id=88,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c968daf58e624ebda00676d79f6bde96',ramdisk_id='',reservation_id='r-axcrlu30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1030815648',owner_user_name='tempest-ServerRescueNegativeTestJSON-1030815648-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:38:05Z,user_data=None,user_id='302853b7f91745baadf52361a0a7d535',uuid=88cb43c1-f01b-4098-84ea-d372176a0e20,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.301 248514 DEBUG nova.network.os_vif_util [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converting VIF {"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.302 248514 DEBUG nova.network.os_vif_util [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f9:b9,bridge_name='br-int',has_traffic_filtering=True,id=7527f90e-f037-4a94-a011-f952b6e72722,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7527f90e-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.303 248514 DEBUG nova.objects.instance [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'pci_devices' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.407 248514 DEBUG nova.compute.manager [req-7562e53d-4832-4ea8-b923-8a78d46924a8 req-43f0f719-13bc-4900-860f-48e4aa392b5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.408 248514 DEBUG oslo_concurrency.lockutils [req-7562e53d-4832-4ea8-b923-8a78d46924a8 req-43f0f719-13bc-4900-860f-48e4aa392b5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.408 248514 DEBUG oslo_concurrency.lockutils [req-7562e53d-4832-4ea8-b923-8a78d46924a8 req-43f0f719-13bc-4900-860f-48e4aa392b5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.408 248514 DEBUG oslo_concurrency.lockutils [req-7562e53d-4832-4ea8-b923-8a78d46924a8 req-43f0f719-13bc-4900-860f-48e4aa392b5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.409 248514 DEBUG nova.compute.manager [req-7562e53d-4832-4ea8-b923-8a78d46924a8 req-43f0f719-13bc-4900-860f-48e4aa392b5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] No waiting events found dispatching network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.409 248514 WARNING nova.compute.manager [req-7562e53d-4832-4ea8-b923-8a78d46924a8 req-43f0f719-13bc-4900-860f-48e4aa392b5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received unexpected event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f for instance with vm_state active and task_state None.#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.500 248514 INFO nova.virt.libvirt.driver [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Snapshot image upload complete#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.501 248514 INFO nova.compute.manager [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Took 5.81 seconds to snapshot the instance on the hypervisor.#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.855 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:38:14 np0005558241 nova_compute[248510]:  <uuid>88cb43c1-f01b-4098-84ea-d372176a0e20</uuid>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:  <name>instance-00000058</name>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1217173991</nova:name>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:38:12</nova:creationTime>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:38:14 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:        <nova:user uuid="302853b7f91745baadf52361a0a7d535">tempest-ServerRescueNegativeTestJSON-1030815648-project-member</nova:user>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:        <nova:project uuid="c968daf58e624ebda00676d79f6bde96">tempest-ServerRescueNegativeTestJSON-1030815648</nova:project>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:        <nova:port uuid="7527f90e-f037-4a94-a011-f952b6e72722">
Dec 13 03:38:14 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <entry name="serial">88cb43c1-f01b-4098-84ea-d372176a0e20</entry>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <entry name="uuid">88cb43c1-f01b-4098-84ea-d372176a0e20</entry>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/88cb43c1-f01b-4098-84ea-d372176a0e20_disk">
Dec 13 03:38:14 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:38:14 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config">
Dec 13 03:38:14 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:38:14 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:f6:f9:b9"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <target dev="tap7527f90e-f0"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/console.log" append="off"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:38:14 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:38:14 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:38:14 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:38:14 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
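[editor's note, not part of the log] The record above is the complete libvirt domain definition that `_get_guest_xml` emitted for instance-00000058. A minimal sketch of summarizing such a dump with Python's stdlib XML parser; the literal below is an abridged copy of the XML above, and the helper name is illustrative, not Nova code:

```python
# Sketch: summarize a libvirt domain XML like the one in the log above.
# DOMAIN_XML is abridged from the logged dump (metadata/controllers omitted).
import xml.etree.ElementTree as ET

DOMAIN_XML = """<domain type="kvm">
  <uuid>88cb43c1-f01b-4098-84ea-d372176a0e20</uuid>
  <name>instance-00000058</name>
  <memory>131072</memory>
  <vcpu>1</vcpu>
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/88cb43c1-f01b-4098-84ea-d372176a0e20_disk"/>
      <target dev="vda" bus="virtio"/>
    </disk>
    <interface type="ethernet">
      <mac address="fa:16:3e:f6:f9:b9"/>
      <target dev="tap7527f90e-f0"/>
    </interface>
  </devices>
</domain>"""

def summarize_domain(xml_text: str) -> dict:
    """Pull the key guest facts out of a libvirt domain XML dump."""
    root = ET.fromstring(xml_text)
    return {
        "name": root.findtext("name"),
        "uuid": root.findtext("uuid"),
        "memory_kib": int(root.findtext("memory")),   # libvirt memory unit defaults to KiB
        "vcpus": int(root.findtext("vcpu")),
        "disks": [d.find("target").get("dev") for d in root.findall("./devices/disk")],
        "macs": [i.find("mac").get("address") for i in root.findall("./devices/interface")],
    }

print(summarize_domain(DOMAIN_XML))
```

On the abridged XML this yields the flavor facts also visible in the `<nova:flavor name="m1.nano">` metadata: 131072 KiB (128 MB) of memory, one vCPU, one virtio root disk `vda`.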
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.857 248514 DEBUG nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Preparing to wait for external event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.858 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.858 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.859 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.860 248514 DEBUG nova.virt.libvirt.vif [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:38:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1217173991',display_name='tempest-ServerRescueNegativeTestJSON-server-1217173991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1217173991',id=88,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c968daf58e624ebda00676d79f6bde96',ramdisk_id='',reservation_id='r-axcrlu30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1030815648',owner_user_name='tempest-ServerRescueNegativeTestJSON-1030815648-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:38:05Z,user_data=None,user_id='302853b7f91745baadf52361a0a7d535',uuid=88cb43c1-f01b-4098-84ea-d372176a0e20,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.860 248514 DEBUG nova.network.os_vif_util [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converting VIF {"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.861 248514 DEBUG nova.network.os_vif_util [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f9:b9,bridge_name='br-int',has_traffic_filtering=True,id=7527f90e-f037-4a94-a011-f952b6e72722,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7527f90e-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.861 248514 DEBUG os_vif [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f9:b9,bridge_name='br-int',has_traffic_filtering=True,id=7527f90e-f037-4a94-a011-f952b6e72722,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7527f90e-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.862 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.863 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.863 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.866 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.866 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7527f90e-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.867 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7527f90e-f0, col_values=(('external_ids', {'iface-id': '7527f90e-f037-4a94-a011-f952b6e72722', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:f9:b9', 'vm-uuid': '88cb43c1-f01b-4098-84ea-d372176a0e20'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.868 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:14 np0005558241 NetworkManager[50376]: <info>  [1765615094.8693] manager: (tap7527f90e-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.871 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.875 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:14 np0005558241 nova_compute[248510]: 2025-12-13 08:38:14.876 248514 INFO os_vif [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f9:b9,bridge_name='br-int',has_traffic_filtering=True,id=7527f90e-f037-4a94-a011-f952b6e72722,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7527f90e-f0')#033[00m
Dec 13 03:38:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:38:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1377857341' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:38:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:38:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1377857341' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:38:15 np0005558241 nova_compute[248510]: 2025-12-13 08:38:15.152 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:38:15 np0005558241 nova_compute[248510]: 2025-12-13 08:38:15.152 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:38:15 np0005558241 nova_compute[248510]: 2025-12-13 08:38:15.152 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No VIF found with MAC fa:16:3e:f6:f9:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:38:15 np0005558241 nova_compute[248510]: 2025-12-13 08:38:15.153 248514 INFO nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Using config drive#033[00m
Dec 13 03:38:15 np0005558241 nova_compute[248510]: 2025-12-13 08:38:15.172 248514 DEBUG nova.storage.rbd_utils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2236: 321 pgs: 321 active+clean; 372 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 9.5 MiB/s rd, 8.8 MiB/s wr, 329 op/s
Dec 13 03:38:15 np0005558241 nova_compute[248510]: 2025-12-13 08:38:15.740 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:15 np0005558241 nova_compute[248510]: 2025-12-13 08:38:15.887 248514 DEBUG nova.compute.manager [None req-747137e4-b117-444c-bed1-521ec4e38c22 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Dec 13 03:38:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:38:16 np0005558241 nova_compute[248510]: 2025-12-13 08:38:16.666 248514 INFO nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Creating config drive at /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config#033[00m
Dec 13 03:38:16 np0005558241 nova_compute[248510]: 2025-12-13 08:38:16.672 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeca5gtue execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:16 np0005558241 nova_compute[248510]: 2025-12-13 08:38:16.830 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeca5gtue" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:16Z|00867|binding|INFO|Releasing lport 6c33e57f-b143-4ed7-a271-dc33d6e81057 from this chassis (sb_readonly=0)
Dec 13 03:38:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:16Z|00868|binding|INFO|Releasing lport 238a7791-e0e6-4f94-b696-bd1f6886a564 from this chassis (sb_readonly=0)
Dec 13 03:38:16 np0005558241 nova_compute[248510]: 2025-12-13 08:38:16.879 248514 DEBUG nova.storage.rbd_utils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:16 np0005558241 nova_compute[248510]: 2025-12-13 08:38:16.890 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:16 np0005558241 nova_compute[248510]: 2025-12-13 08:38:16.931 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.047 248514 DEBUG oslo_concurrency.processutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.048 248514 INFO nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Deleting local config drive /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config because it was imported into RBD.#033[00m
Dec 13 03:38:17 np0005558241 kernel: tap7527f90e-f0: entered promiscuous mode
Dec 13 03:38:17 np0005558241 NetworkManager[50376]: <info>  [1765615097.1046] manager: (tap7527f90e-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/378)
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.106 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:17Z|00869|binding|INFO|Claiming lport 7527f90e-f037-4a94-a011-f952b6e72722 for this chassis.
Dec 13 03:38:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:17Z|00870|binding|INFO|7527f90e-f037-4a94-a011-f952b6e72722: Claiming fa:16:3e:f6:f9:b9 10.100.0.7
Dec 13 03:38:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:17Z|00871|binding|INFO|Setting lport 7527f90e-f037-4a94-a011-f952b6e72722 ovn-installed in OVS
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.124 248514 DEBUG nova.network.neutron [req-dcb75a97-d3ba-4e40-93b3-82bbc4c24622 req-b9eca1d4-eb56-4431-b12b-899ee579b1fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Updated VIF entry in instance network info cache for port 7527f90e-f037-4a94-a011-f952b6e72722. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.125 248514 DEBUG nova.network.neutron [req-dcb75a97-d3ba-4e40-93b3-82bbc4c24622 req-b9eca1d4-eb56-4431-b12b-899ee579b1fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Updating instance_info_cache with network_info: [{"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.127 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:17Z|00872|binding|INFO|Setting lport 7527f90e-f037-4a94-a011-f952b6e72722 up in Southbound
Dec 13 03:38:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.130 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:f9:b9 10.100.0.7'], port_security=['fa:16:3e:f6:f9:b9 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '88cb43c1-f01b-4098-84ea-d372176a0e20', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f249527d-f9e6-43ce-a178-f71fc1d38891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c968daf58e624ebda00676d79f6bde96', 'neutron:revision_number': '2', 'neutron:security_group_ids': '10b8d803-068c-4438-9c54-bb9fca806dca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83315ce7-c99d-41f6-afce-aabb37f0e27c, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7527f90e-f037-4a94-a011-f952b6e72722) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:38:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.132 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7527f90e-f037-4a94-a011-f952b6e72722 in datapath f249527d-f9e6-43ce-a178-f71fc1d38891 bound to our chassis#033[00m
Dec 13 03:38:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.134 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f249527d-f9e6-43ce-a178-f71fc1d38891#033[00m
Dec 13 03:38:17 np0005558241 systemd-udevd[333968]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.157 248514 DEBUG oslo_concurrency.lockutils [req-dcb75a97-d3ba-4e40-93b3-82bbc4c24622 req-b9eca1d4-eb56-4431-b12b-899ee579b1fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-88cb43c1-f01b-4098-84ea-d372176a0e20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:38:17 np0005558241 systemd-machined[210538]: New machine qemu-108-instance-00000058.
Dec 13 03:38:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.158 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[20261668-c7ae-464a-b63b-20347ec3e595]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:17 np0005558241 NetworkManager[50376]: <info>  [1765615097.1650] device (tap7527f90e-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:38:17 np0005558241 NetworkManager[50376]: <info>  [1765615097.1662] device (tap7527f90e-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:38:17 np0005558241 systemd[1]: Started Virtual Machine qemu-108-instance-00000058.
Dec 13 03:38:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.205 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[95e05d14-8958-4853-b869-84fed2b96213]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.208 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0fd6aa-7de0-4916-a61d-640f3ccfaf19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.236 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bf7e951b-2c83-4461-926b-49cdfc0a97f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.253 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fcba9291-7acb-4ee5-a8e5-2e8df3449436]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf249527d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:d2:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766854, 'reachable_time': 22354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333981, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.270 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a99f37ed-dec8-4adb-8e12-c15d4ea5c413]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf249527d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766865, 'tstamp': 766865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333982, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf249527d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766868, 'tstamp': 766868}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333982, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.272 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf249527d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.273 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.275 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf249527d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.275 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:38:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.275 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf249527d-f0, col_values=(('external_ids', {'iface-id': '238a7791-e0e6-4f94-b696-bd1f6886a564'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:17.276 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.491 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615097.490849, 88cb43c1-f01b-4098-84ea-d372176a0e20 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.492 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] VM Started (Lifecycle Event)#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.540 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.545 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615097.49206, 88cb43c1-f01b-4098-84ea-d372176a0e20 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.545 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.586 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.591 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.616 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.622 248514 DEBUG nova.compute.manager [req-93cfa849-f3ab-4a86-ac96-de4adfddd09f req-1a736295-0831-4b62-856b-766a84577140 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.622 248514 DEBUG oslo_concurrency.lockutils [req-93cfa849-f3ab-4a86-ac96-de4adfddd09f req-1a736295-0831-4b62-856b-766a84577140 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.623 248514 DEBUG oslo_concurrency.lockutils [req-93cfa849-f3ab-4a86-ac96-de4adfddd09f req-1a736295-0831-4b62-856b-766a84577140 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.623 248514 DEBUG oslo_concurrency.lockutils [req-93cfa849-f3ab-4a86-ac96-de4adfddd09f req-1a736295-0831-4b62-856b-766a84577140 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.623 248514 DEBUG nova.compute.manager [req-93cfa849-f3ab-4a86-ac96-de4adfddd09f req-1a736295-0831-4b62-856b-766a84577140 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Processing event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.624 248514 DEBUG nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.628 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615097.628413, 88cb43c1-f01b-4098-84ea-d372176a0e20 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.629 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.631 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.637 248514 INFO nova.virt.libvirt.driver [-] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance spawned successfully.#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.638 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.718 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.718 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.719 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.719 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.720 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.720 248514 DEBUG nova.virt.libvirt.driver [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2237: 321 pgs: 321 active+clean; 372 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 6.6 MiB/s wr, 203 op/s
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.787 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.797 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.889 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.952 248514 INFO nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Took 12.60 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:38:17 np0005558241 nova_compute[248510]: 2025-12-13 08:38:17.953 248514 DEBUG nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:18 np0005558241 nova_compute[248510]: 2025-12-13 08:38:18.344 248514 INFO nova.compute.manager [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Took 14.12 seconds to build instance.#033[00m
Dec 13 03:38:18 np0005558241 nova_compute[248510]: 2025-12-13 08:38:18.415 248514 DEBUG oslo_concurrency.lockutils [None req-a5d1b4ff-b131-4407-9e01-0c7fe716ef9c 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2238: 321 pgs: 321 active+clean; 372 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 9.4 MiB/s rd, 5.7 MiB/s wr, 268 op/s
Dec 13 03:38:19 np0005558241 nova_compute[248510]: 2025-12-13 08:38:19.751 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615084.7509744, fa9180aa-8387-44cb-9e70-535f0652e390 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:38:19 np0005558241 nova_compute[248510]: 2025-12-13 08:38:19.753 248514 INFO nova.compute.manager [-] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:38:19 np0005558241 nova_compute[248510]: 2025-12-13 08:38:19.870 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:20 np0005558241 nova_compute[248510]: 2025-12-13 08:38:20.324 248514 DEBUG nova.compute.manager [None req-7b783003-2c4c-42f2-a2cf-a81cd6384099 - - - - - -] [instance: fa9180aa-8387-44cb-9e70-535f0652e390] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:20 np0005558241 nova_compute[248510]: 2025-12-13 08:38:20.589 248514 DEBUG nova.compute.manager [req-cf768c1a-9b18-4697-8758-c601ba650e11 req-3f0b5cd7-e8ed-40fa-8798-027d0a13bd65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:38:20 np0005558241 nova_compute[248510]: 2025-12-13 08:38:20.589 248514 DEBUG oslo_concurrency.lockutils [req-cf768c1a-9b18-4697-8758-c601ba650e11 req-3f0b5cd7-e8ed-40fa-8798-027d0a13bd65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:20 np0005558241 nova_compute[248510]: 2025-12-13 08:38:20.590 248514 DEBUG oslo_concurrency.lockutils [req-cf768c1a-9b18-4697-8758-c601ba650e11 req-3f0b5cd7-e8ed-40fa-8798-027d0a13bd65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:20 np0005558241 nova_compute[248510]: 2025-12-13 08:38:20.590 248514 DEBUG oslo_concurrency.lockutils [req-cf768c1a-9b18-4697-8758-c601ba650e11 req-3f0b5cd7-e8ed-40fa-8798-027d0a13bd65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:20 np0005558241 nova_compute[248510]: 2025-12-13 08:38:20.590 248514 DEBUG nova.compute.manager [req-cf768c1a-9b18-4697-8758-c601ba650e11 req-3f0b5cd7-e8ed-40fa-8798-027d0a13bd65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] No waiting events found dispatching network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:38:20 np0005558241 nova_compute[248510]: 2025-12-13 08:38:20.591 248514 WARNING nova.compute.manager [req-cf768c1a-9b18-4697-8758-c601ba650e11 req-3f0b5cd7-e8ed-40fa-8798-027d0a13bd65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received unexpected event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:38:20 np0005558241 nova_compute[248510]: 2025-12-13 08:38:20.616 248514 DEBUG nova.compute.manager [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:20 np0005558241 nova_compute[248510]: 2025-12-13 08:38:20.673 248514 INFO nova.compute.manager [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] instance snapshotting#033[00m
Dec 13 03:38:20 np0005558241 nova_compute[248510]: 2025-12-13 08:38:20.675 248514 DEBUG nova.objects.instance [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'flavor' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:20 np0005558241 nova_compute[248510]: 2025-12-13 08:38:20.742 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:20 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014668542161819687 of space, bias 1.0, pg target 0.4400562648545906 quantized to 32 (current 32)
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0021822167660036987 of space, bias 1.0, pg target 0.6546650298011096 quantized to 32 (current 32)
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.947345789036047e-07 of space, bias 4.0, pg target 0.0007136814946843256 quantized to 16 (current 32)
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:38:21 np0005558241 nova_compute[248510]: 2025-12-13 08:38:21.473 248514 INFO nova.compute.manager [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Rescuing#033[00m
Dec 13 03:38:21 np0005558241 nova_compute[248510]: 2025-12-13 08:38:21.474 248514 DEBUG oslo_concurrency.lockutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "refresh_cache-88cb43c1-f01b-4098-84ea-d372176a0e20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:38:21 np0005558241 nova_compute[248510]: 2025-12-13 08:38:21.474 248514 DEBUG oslo_concurrency.lockutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquired lock "refresh_cache-88cb43c1-f01b-4098-84ea-d372176a0e20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:38:21 np0005558241 nova_compute[248510]: 2025-12-13 08:38:21.474 248514 DEBUG nova.network.neutron [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:38:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:38:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Dec 13 03:38:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Dec 13 03:38:21 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Dec 13 03:38:21 np0005558241 nova_compute[248510]: 2025-12-13 08:38:21.645 248514 INFO nova.virt.libvirt.driver [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Beginning live snapshot process#033[00m
Dec 13 03:38:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2240: 321 pgs: 321 active+clean; 372 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 5.1 MiB/s wr, 269 op/s
Dec 13 03:38:21 np0005558241 nova_compute[248510]: 2025-12-13 08:38:21.998 248514 DEBUG nova.virt.libvirt.imagebackend [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec 13 03:38:22 np0005558241 nova_compute[248510]: 2025-12-13 08:38:22.561 248514 DEBUG nova.storage.rbd_utils [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(e1b63fbb027f475faf9cba72882a99bc) on rbd image(9b486227-b98c-4393-9a3c-aae3e3c419a8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:38:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Dec 13 03:38:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Dec 13 03:38:23 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Dec 13 03:38:23 np0005558241 nova_compute[248510]: 2025-12-13 08:38:23.544 248514 DEBUG nova.storage.rbd_utils [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] cloning vms/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk@e1b63fbb027f475faf9cba72882a99bc to images/8b0d2f78-fa72-484b-b0f0-842d1746a5ea clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:38:23 np0005558241 nova_compute[248510]: 2025-12-13 08:38:23.618 248514 DEBUG nova.storage.rbd_utils [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] flattening images/8b0d2f78-fa72-484b-b0f0-842d1746a5ea flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec 13 03:38:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2242: 321 pgs: 321 active+clean; 372 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 22 KiB/s wr, 168 op/s
Dec 13 03:38:24 np0005558241 nova_compute[248510]: 2025-12-13 08:38:24.114 248514 DEBUG nova.storage.rbd_utils [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] removing snapshot(e1b63fbb027f475faf9cba72882a99bc) on rbd image(9b486227-b98c-4393-9a3c-aae3e3c419a8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec 13 03:38:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Dec 13 03:38:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Dec 13 03:38:24 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Dec 13 03:38:24 np0005558241 nova_compute[248510]: 2025-12-13 08:38:24.539 248514 DEBUG nova.storage.rbd_utils [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(snap) on rbd image(8b0d2f78-fa72-484b-b0f0-842d1746a5ea) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:38:24 np0005558241 nova_compute[248510]: 2025-12-13 08:38:24.722 248514 DEBUG nova.network.neutron [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Updating instance_info_cache with network_info: [{"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:38:24 np0005558241 nova_compute[248510]: 2025-12-13 08:38:24.831 248514 DEBUG oslo_concurrency.lockutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Releasing lock "refresh_cache-88cb43c1-f01b-4098-84ea-d372176a0e20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:38:24 np0005558241 nova_compute[248510]: 2025-12-13 08:38:24.873 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Dec 13 03:38:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Dec 13 03:38:25 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Dec 13 03:38:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2245: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 429 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 9.0 MiB/s rd, 9.7 MiB/s wr, 313 op/s
Dec 13 03:38:25 np0005558241 nova_compute[248510]: 2025-12-13 08:38:25.744 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:25 np0005558241 nova_compute[248510]: 2025-12-13 08:38:25.767 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec 13 03:38:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:25Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e7:9a:26 10.100.0.6
Dec 13 03:38:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:25Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e7:9a:26 10.100.0.6
Dec 13 03:38:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:38:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2246: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 429 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.9 MiB/s wr, 221 op/s
Dec 13 03:38:28 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Dec 13 03:38:28 np0005558241 nova_compute[248510]: 2025-12-13 08:38:28.398 248514 INFO nova.virt.libvirt.driver [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Snapshot image upload complete
Dec 13 03:38:28 np0005558241 nova_compute[248510]: 2025-12-13 08:38:28.399 248514 INFO nova.compute.manager [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Took 7.69 seconds to snapshot the instance on the hypervisor.
Dec 13 03:38:28 np0005558241 nova_compute[248510]: 2025-12-13 08:38:28.890 248514 DEBUG nova.compute.manager [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Dec 13 03:38:28 np0005558241 nova_compute[248510]: 2025-12-13 08:38:28.890 248514 DEBUG nova.compute.manager [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458
Dec 13 03:38:28 np0005558241 nova_compute[248510]: 2025-12-13 08:38:28.890 248514 DEBUG nova.compute.manager [None req-1cce2bf8-d395-4636-9aa0-6457e2cfee7e 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Deleting image bd9a009b-216b-49be-81a8-600047037026 _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463
Dec 13 03:38:29 np0005558241 nova_compute[248510]: 2025-12-13 08:38:29.270 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "0d52c1df-d252-4012-b05c-40737f1089bb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:38:29 np0005558241 nova_compute[248510]: 2025-12-13 08:38:29.270 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0d52c1df-d252-4012-b05c-40737f1089bb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:38:29 np0005558241 nova_compute[248510]: 2025-12-13 08:38:29.289 248514 DEBUG nova.compute.manager [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:38:29 np0005558241 nova_compute[248510]: 2025-12-13 08:38:29.397 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:38:29 np0005558241 nova_compute[248510]: 2025-12-13 08:38:29.398 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:38:29 np0005558241 nova_compute[248510]: 2025-12-13 08:38:29.406 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:38:29 np0005558241 nova_compute[248510]: 2025-12-13 08:38:29.407 248514 INFO nova.compute.claims [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:38:29 np0005558241 nova_compute[248510]: 2025-12-13 08:38:29.613 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:38:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:29Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:f9:b9 10.100.0.7
Dec 13 03:38:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:29Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:f9:b9 10.100.0.7
Dec 13 03:38:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2247: 321 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 316 active+clean; 479 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 12 MiB/s wr, 277 op/s
Dec 13 03:38:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Dec 13 03:38:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Dec 13 03:38:29 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Dec 13 03:38:29 np0005558241 nova_compute[248510]: 2025-12-13 08:38:29.877 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:38:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:38:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2564059498' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.215 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.221 248514 DEBUG nova.compute.provider_tree [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.256 248514 DEBUG nova.scheduler.client.report [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.280 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.281 248514 DEBUG nova.compute.manager [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.338 248514 DEBUG nova.compute.manager [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.416 248514 INFO nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.462 248514 DEBUG nova.compute.manager [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.620 248514 DEBUG nova.compute.manager [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.621 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.622 248514 INFO nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Creating image(s)
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.642 248514 DEBUG nova.storage.rbd_utils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0d52c1df-d252-4012-b05c-40737f1089bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.664 248514 DEBUG nova.storage.rbd_utils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0d52c1df-d252-4012-b05c-40737f1089bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.688 248514 DEBUG nova.storage.rbd_utils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0d52c1df-d252-4012-b05c-40737f1089bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.692 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.746 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.770 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.771 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.771 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.771 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.793 248514 DEBUG nova.storage.rbd_utils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0d52c1df-d252-4012-b05c-40737f1089bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:38:30 np0005558241 nova_compute[248510]: 2025-12-13 08:38:30.798 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0d52c1df-d252-4012-b05c-40737f1089bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.099 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0d52c1df-d252-4012-b05c-40737f1089bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.153 248514 DEBUG nova.storage.rbd_utils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] resizing rbd image 0d52c1df-d252-4012-b05c-40737f1089bb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.221 248514 DEBUG nova.objects.instance [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'migration_context' on Instance uuid 0d52c1df-d252-4012-b05c-40737f1089bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.231 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "0187165c-81b1-43b8-81f5-05e847fe1fa6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.231 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0187165c-81b1-43b8-81f5-05e847fe1fa6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.239 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.239 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Ensure instance console log exists: /var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.239 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.240 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.240 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.241 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.246 248514 WARNING nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.251 248514 DEBUG nova.virt.libvirt.host [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.251 248514 DEBUG nova.virt.libvirt.host [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.254 248514 DEBUG nova.compute.manager [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.257 248514 DEBUG nova.virt.libvirt.host [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.258 248514 DEBUG nova.virt.libvirt.host [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.258 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.258 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.259 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.259 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.259 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.259 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.260 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.260 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.260 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.260 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.261 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.261 248514 DEBUG nova.virt.hardware [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.263 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.352 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.352 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.361 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.361 248514 INFO nova.compute.claims [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:38:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.603 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2249: 321 pgs: 321 active+clean; 494 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 8.4 MiB/s wr, 253 op/s
Dec 13 03:38:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:38:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/697837179' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.815 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.844 248514 DEBUG nova.storage.rbd_utils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0d52c1df-d252-4012-b05c-40737f1089bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:31 np0005558241 nova_compute[248510]: 2025-12-13 08:38:31.849 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:31 np0005558241 podman[334417]: 2025-12-13 08:38:31.986918085 +0000 UTC m=+0.068994554 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:38:32 np0005558241 podman[334418]: 2025-12-13 08:38:32.005156898 +0000 UTC m=+0.083570536 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:38:32 np0005558241 podman[334416]: 2025-12-13 08:38:32.016533901 +0000 UTC m=+0.097446731 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Dec 13 03:38:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:38:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3368706835' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.235 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.241 248514 DEBUG nova.compute.provider_tree [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.378 248514 DEBUG nova.scheduler.client.report [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.417 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.419 248514 DEBUG nova.compute.manager [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:38:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:38:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3392828137' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.483 248514 DEBUG nova.compute.manager [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.495 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.496 248514 DEBUG nova.objects.instance [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0d52c1df-d252-4012-b05c-40737f1089bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.509 248514 INFO nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.517 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:38:32 np0005558241 nova_compute[248510]:  <uuid>0d52c1df-d252-4012-b05c-40737f1089bb</uuid>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:  <name>instance-00000059</name>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerShowV247Test-server-990588052</nova:name>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:38:31</nova:creationTime>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:38:32 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:        <nova:user uuid="0e8471eedc0e4ae0a028132802bc1967">tempest-ServerShowV247Test-2063492104-project-member</nova:user>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:        <nova:project uuid="c4b40eb3fb314c44867a54f3ba244ec1">tempest-ServerShowV247Test-2063492104</nova:project>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <entry name="serial">0d52c1df-d252-4012-b05c-40737f1089bb</entry>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <entry name="uuid">0d52c1df-d252-4012-b05c-40737f1089bb</entry>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/0d52c1df-d252-4012-b05c-40737f1089bb_disk">
Dec 13 03:38:32 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:38:32 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/0d52c1df-d252-4012-b05c-40737f1089bb_disk.config">
Dec 13 03:38:32 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:38:32 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb/console.log" append="off"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:38:32 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:38:32 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:38:32 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:38:32 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.532 248514 DEBUG nova.compute.manager [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.656 248514 DEBUG nova.compute.manager [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.658 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.658 248514 INFO nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Creating image(s)#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.677 248514 DEBUG nova.storage.rbd_utils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.701 248514 DEBUG nova.storage.rbd_utils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.721 248514 DEBUG nova.storage.rbd_utils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.725 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.787 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.788 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.788 248514 INFO nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Using config drive#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.808 248514 DEBUG nova.storage.rbd_utils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0d52c1df-d252-4012-b05c-40737f1089bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.815 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.815 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.816 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.816 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.962 248514 DEBUG nova.storage.rbd_utils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:32 np0005558241 nova_compute[248510]: 2025-12-13 08:38:32.966 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:33 np0005558241 nova_compute[248510]: 2025-12-13 08:38:33.344 248514 INFO nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Creating config drive at /var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb/disk.config#033[00m
Dec 13 03:38:33 np0005558241 nova_compute[248510]: 2025-12-13 08:38:33.349 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp1sgfnms execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:33 np0005558241 nova_compute[248510]: 2025-12-13 08:38:33.509 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp1sgfnms" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:33 np0005558241 nova_compute[248510]: 2025-12-13 08:38:33.541 248514 DEBUG nova.storage.rbd_utils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0d52c1df-d252-4012-b05c-40737f1089bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:33 np0005558241 nova_compute[248510]: 2025-12-13 08:38:33.546 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb/disk.config 0d52c1df-d252-4012-b05c-40737f1089bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2250: 321 pgs: 321 active+clean; 489 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 7.3 MiB/s wr, 234 op/s
Dec 13 03:38:33 np0005558241 nova_compute[248510]: 2025-12-13 08:38:33.837 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.871s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:33 np0005558241 nova_compute[248510]: 2025-12-13 08:38:33.881 248514 DEBUG oslo_concurrency.processutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb/disk.config 0d52c1df-d252-4012-b05c-40737f1089bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:33 np0005558241 nova_compute[248510]: 2025-12-13 08:38:33.882 248514 INFO nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Deleting local config drive /var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb/disk.config because it was imported into RBD.#033[00m
Dec 13 03:38:33 np0005558241 nova_compute[248510]: 2025-12-13 08:38:33.918 248514 DEBUG nova.storage.rbd_utils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] resizing rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:38:33 np0005558241 systemd-machined[210538]: New machine qemu-109-instance-00000059.
Dec 13 03:38:33 np0005558241 systemd[1]: Started Virtual Machine qemu-109-instance-00000059.
Dec 13 03:38:33 np0005558241 nova_compute[248510]: 2025-12-13 08:38:33.984 248514 DEBUG nova.objects.instance [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'migration_context' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.008 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.008 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Ensure instance console log exists: /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.009 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.009 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.010 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.011 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.017 248514 WARNING nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.021 248514 DEBUG nova.virt.libvirt.host [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.022 248514 DEBUG nova.virt.libvirt.host [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.024 248514 DEBUG nova.virt.libvirt.host [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.025 248514 DEBUG nova.virt.libvirt.host [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.025 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.025 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.026 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.026 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.026 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.027 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.027 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.027 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.027 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.028 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.028 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.028 248514 DEBUG nova.virt.hardware [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.031 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.359 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615114.3590355, 0d52c1df-d252-4012-b05c-40737f1089bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.360 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.371 248514 DEBUG nova.compute.manager [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.372 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.376 248514 INFO nova.virt.libvirt.driver [-] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Instance spawned successfully.#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.376 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.396 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.403 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.407 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.408 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.408 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.409 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.409 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.409 248514 DEBUG nova.virt.libvirt.driver [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.444 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.445 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615114.3593073, 0d52c1df-d252-4012-b05c-40737f1089bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.445 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] VM Started (Lifecycle Event)#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.486 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.495 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.503 248514 INFO nova.compute.manager [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Took 3.88 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.504 248514 DEBUG nova.compute.manager [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.537 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:38:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:38:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1732251910' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.575 248514 INFO nova.compute.manager [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Took 5.22 seconds to build instance.#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.593 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.621 248514 DEBUG nova.storage.rbd_utils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.629 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.665 248514 DEBUG oslo_concurrency.lockutils [None req-78d08471-c938-4222-a6c0-212e82202ab5 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0d52c1df-d252-4012-b05c-40737f1089bb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:34 np0005558241 nova_compute[248510]: 2025-12-13 08:38:34.882 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Dec 13 03:38:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Dec 13 03:38:35 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Dec 13 03:38:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:38:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:38:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:38:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:38:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:38:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:38:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:38:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:38:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:38:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:38:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:38:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:38:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:38:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3605667858' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:38:35 np0005558241 nova_compute[248510]: 2025-12-13 08:38:35.256 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:35 np0005558241 nova_compute[248510]: 2025-12-13 08:38:35.258 248514 DEBUG nova.objects.instance [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:35 np0005558241 nova_compute[248510]: 2025-12-13 08:38:35.486 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:38:35 np0005558241 nova_compute[248510]:  <uuid>0187165c-81b1-43b8-81f5-05e847fe1fa6</uuid>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:  <name>instance-0000005a</name>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerShowV247Test-server-1806406257</nova:name>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:38:34</nova:creationTime>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:38:35 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:        <nova:user uuid="0e8471eedc0e4ae0a028132802bc1967">tempest-ServerShowV247Test-2063492104-project-member</nova:user>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:        <nova:project uuid="c4b40eb3fb314c44867a54f3ba244ec1">tempest-ServerShowV247Test-2063492104</nova:project>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <entry name="serial">0187165c-81b1-43b8-81f5-05e847fe1fa6</entry>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <entry name="uuid">0187165c-81b1-43b8-81f5-05e847fe1fa6</entry>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/0187165c-81b1-43b8-81f5-05e847fe1fa6_disk">
Dec 13 03:38:35 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:38:35 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config">
Dec 13 03:38:35 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:38:35 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/console.log" append="off"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:38:35 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:38:35 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:38:35 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:38:35 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:38:35 np0005558241 podman[334988]: 2025-12-13 08:38:35.538784491 +0000 UTC m=+0.021922655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:38:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2252: 321 pgs: 321 active+clean; 511 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 11 MiB/s wr, 326 op/s
Dec 13 03:38:35 np0005558241 nova_compute[248510]: 2025-12-13 08:38:35.747 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:35 np0005558241 podman[334988]: 2025-12-13 08:38:35.824450406 +0000 UTC m=+0.307588550 container create 3352c8ce567a28670bd3d5d93c6b48a185c21bbf7e7dcd787df3a199c057a9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_snyder, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:38:35 np0005558241 nova_compute[248510]: 2025-12-13 08:38:35.852 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:38:35 np0005558241 nova_compute[248510]: 2025-12-13 08:38:35.853 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:38:35 np0005558241 nova_compute[248510]: 2025-12-13 08:38:35.854 248514 INFO nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Using config drive#033[00m
Dec 13 03:38:35 np0005558241 nova_compute[248510]: 2025-12-13 08:38:35.883 248514 DEBUG nova.storage.rbd_utils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:36 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:38:36 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:38:36 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:38:36 np0005558241 nova_compute[248510]: 2025-12-13 08:38:36.030 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec 13 03:38:36 np0005558241 systemd[1]: Started libpod-conmon-3352c8ce567a28670bd3d5d93c6b48a185c21bbf7e7dcd787df3a199c057a9e8.scope.
Dec 13 03:38:36 np0005558241 nova_compute[248510]: 2025-12-13 08:38:36.076 248514 INFO nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Creating config drive at /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config#033[00m
Dec 13 03:38:36 np0005558241 nova_compute[248510]: 2025-12-13 08:38:36.082 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkmt80305 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:36 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:38:36 np0005558241 nova_compute[248510]: 2025-12-13 08:38:36.229 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkmt80305" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:36 np0005558241 podman[334988]: 2025-12-13 08:38:36.260174886 +0000 UTC m=+0.743313080 container init 3352c8ce567a28670bd3d5d93c6b48a185c21bbf7e7dcd787df3a199c057a9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_snyder, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:38:36 np0005558241 podman[334988]: 2025-12-13 08:38:36.270168384 +0000 UTC m=+0.753306538 container start 3352c8ce567a28670bd3d5d93c6b48a185c21bbf7e7dcd787df3a199c057a9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_snyder, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:38:36 np0005558241 festive_snyder[335022]: 167 167
Dec 13 03:38:36 np0005558241 systemd[1]: libpod-3352c8ce567a28670bd3d5d93c6b48a185c21bbf7e7dcd787df3a199c057a9e8.scope: Deactivated successfully.
Dec 13 03:38:36 np0005558241 nova_compute[248510]: 2025-12-13 08:38:36.293 248514 DEBUG nova.storage.rbd_utils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:36 np0005558241 nova_compute[248510]: 2025-12-13 08:38:36.298 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:36 np0005558241 podman[334988]: 2025-12-13 08:38:36.309331767 +0000 UTC m=+0.792469931 container attach 3352c8ce567a28670bd3d5d93c6b48a185c21bbf7e7dcd787df3a199c057a9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_snyder, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:38:36 np0005558241 podman[334988]: 2025-12-13 08:38:36.309777408 +0000 UTC m=+0.792915572 container died 3352c8ce567a28670bd3d5d93c6b48a185c21bbf7e7dcd787df3a199c057a9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_snyder, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 03:38:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:38:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Dec 13 03:38:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Dec 13 03:38:36 np0005558241 systemd[1]: var-lib-containers-storage-overlay-df3a950504ea75905efcbb2541771f6d9d73773bdda7d80eef6ce72f2d8f6f35-merged.mount: Deactivated successfully.
Dec 13 03:38:36 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Dec 13 03:38:37 np0005558241 podman[334988]: 2025-12-13 08:38:37.161637642 +0000 UTC m=+1.644775786 container remove 3352c8ce567a28670bd3d5d93c6b48a185c21bbf7e7dcd787df3a199c057a9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 03:38:37 np0005558241 systemd[1]: libpod-conmon-3352c8ce567a28670bd3d5d93c6b48a185c21bbf7e7dcd787df3a199c057a9e8.scope: Deactivated successfully.
Dec 13 03:38:37 np0005558241 podman[335085]: 2025-12-13 08:38:37.339314445 +0000 UTC m=+0.025933185 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:38:37 np0005558241 podman[335085]: 2025-12-13 08:38:37.566757804 +0000 UTC m=+0.253376544 container create f680e015cb4185a41027b04888772e9450b2d78ffb3e1a35e7904bfa5374ed2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:38:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2254: 321 pgs: 321 active+clean; 511 MiB data, 888 MiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 6.3 MiB/s wr, 172 op/s
Dec 13 03:38:37 np0005558241 nova_compute[248510]: 2025-12-13 08:38:37.836 248514 DEBUG oslo_concurrency.processutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:37 np0005558241 nova_compute[248510]: 2025-12-13 08:38:37.838 248514 INFO nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Deleting local config drive /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config because it was imported into RBD.#033[00m
Dec 13 03:38:37 np0005558241 systemd[1]: Started libpod-conmon-f680e015cb4185a41027b04888772e9450b2d78ffb3e1a35e7904bfa5374ed2a.scope.
Dec 13 03:38:37 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:38:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f6f28ea7988bd928640a610f81d226df3c4123c8dbabe56cc718efedf03727/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:38:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f6f28ea7988bd928640a610f81d226df3c4123c8dbabe56cc718efedf03727/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:38:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f6f28ea7988bd928640a610f81d226df3c4123c8dbabe56cc718efedf03727/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:38:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f6f28ea7988bd928640a610f81d226df3c4123c8dbabe56cc718efedf03727/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:38:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99f6f28ea7988bd928640a610f81d226df3c4123c8dbabe56cc718efedf03727/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:38:37 np0005558241 systemd-machined[210538]: New machine qemu-110-instance-0000005a.
Dec 13 03:38:37 np0005558241 systemd[1]: Started Virtual Machine qemu-110-instance-0000005a.
Dec 13 03:38:37 np0005558241 podman[335085]: 2025-12-13 08:38:37.968479041 +0000 UTC m=+0.655097791 container init f680e015cb4185a41027b04888772e9450b2d78ffb3e1a35e7904bfa5374ed2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mahavira, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 03:38:37 np0005558241 podman[335085]: 2025-12-13 08:38:37.978644124 +0000 UTC m=+0.665262844 container start f680e015cb4185a41027b04888772e9450b2d78ffb3e1a35e7904bfa5374ed2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mahavira, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 03:38:37 np0005558241 podman[335085]: 2025-12-13 08:38:37.989394441 +0000 UTC m=+0.676013161 container attach f680e015cb4185a41027b04888772e9450b2d78ffb3e1a35e7904bfa5374ed2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:38:38 np0005558241 gifted_mahavira[335104]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:38:38 np0005558241 gifted_mahavira[335104]: --> All data devices are unavailable
Dec 13 03:38:38 np0005558241 systemd[1]: libpod-f680e015cb4185a41027b04888772e9450b2d78ffb3e1a35e7904bfa5374ed2a.scope: Deactivated successfully.
Dec 13 03:38:38 np0005558241 podman[335085]: 2025-12-13 08:38:38.538095717 +0000 UTC m=+1.224714437 container died f680e015cb4185a41027b04888772e9450b2d78ffb3e1a35e7904bfa5374ed2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mahavira, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:38:38 np0005558241 systemd[1]: var-lib-containers-storage-overlay-99f6f28ea7988bd928640a610f81d226df3c4123c8dbabe56cc718efedf03727-merged.mount: Deactivated successfully.
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.713 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615118.7073314, 0187165c-81b1-43b8-81f5-05e847fe1fa6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.716 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.720 248514 DEBUG nova.compute.manager [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.722 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:38:38 np0005558241 podman[335085]: 2025-12-13 08:38:38.723546813 +0000 UTC m=+1.410165543 container remove f680e015cb4185a41027b04888772e9450b2d78ffb3e1a35e7904bfa5374ed2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mahavira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.728 248514 INFO nova.virt.libvirt.driver [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance spawned successfully.#033[00m
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.730 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:38:38 np0005558241 systemd[1]: libpod-conmon-f680e015cb4185a41027b04888772e9450b2d78ffb3e1a35e7904bfa5374ed2a.scope: Deactivated successfully.
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.784 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.792 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.792 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.794 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.794 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.795 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.796 248514 DEBUG nova.virt.libvirt.driver [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.803 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:38:38 np0005558241 kernel: tap7527f90e-f0 (unregistering): left promiscuous mode
Dec 13 03:38:38 np0005558241 NetworkManager[50376]: <info>  [1765615118.8535] device (tap7527f90e-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.860 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.860 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615118.7079344, 0187165c-81b1-43b8-81f5-05e847fe1fa6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.861 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] VM Started (Lifecycle Event)#033[00m
Dec 13 03:38:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:38Z|00873|binding|INFO|Releasing lport 7527f90e-f037-4a94-a011-f952b6e72722 from this chassis (sb_readonly=0)
Dec 13 03:38:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:38Z|00874|binding|INFO|Setting lport 7527f90e-f037-4a94-a011-f952b6e72722 down in Southbound
Dec 13 03:38:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:38Z|00875|binding|INFO|Removing iface tap7527f90e-f0 ovn-installed in OVS
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.868 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.870 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:38.880 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:f9:b9 10.100.0.7'], port_security=['fa:16:3e:f6:f9:b9 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '88cb43c1-f01b-4098-84ea-d372176a0e20', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f249527d-f9e6-43ce-a178-f71fc1d38891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c968daf58e624ebda00676d79f6bde96', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10b8d803-068c-4438-9c54-bb9fca806dca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83315ce7-c99d-41f6-afce-aabb37f0e27c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7527f90e-f037-4a94-a011-f952b6e72722) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:38:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:38.881 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7527f90e-f037-4a94-a011-f952b6e72722 in datapath f249527d-f9e6-43ce-a178-f71fc1d38891 unbound from our chassis#033[00m
Dec 13 03:38:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:38.883 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f249527d-f9e6-43ce-a178-f71fc1d38891#033[00m
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.886 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:38.901 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[179f9efb-02cd-4b2e-8853-4e91ac9b727f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.918 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.925 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:38:38 np0005558241 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000058.scope: Deactivated successfully.
Dec 13 03:38:38 np0005558241 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000058.scope: Consumed 12.032s CPU time.
Dec 13 03:38:38 np0005558241 systemd-machined[210538]: Machine qemu-108-instance-00000058 terminated.
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.938 248514 INFO nova.compute.manager [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Took 6.28 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.939 248514 DEBUG nova.compute.manager [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:38.939 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9e888ab3-de0c-4630-8d19-5399325b136d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:38.946 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fac61326-7d56-4cec-a568-e19ce4b05f70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:38 np0005558241 nova_compute[248510]: 2025-12-13 08:38:38.951 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:38:38 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:38.981 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a962a4fb-b489-40ef-8dd3-cf86bdd7a9c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:39.005 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8563d4f2-dd24-41e4-a1d7-a90e4e7cd883]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf249527d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:d2:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766854, 'reachable_time': 22354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335250, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:39.027 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2d51e569-9731-44c9-9e1c-67e96867f383]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf249527d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766865, 'tstamp': 766865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335251, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf249527d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766868, 'tstamp': 766868}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335251, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:39.031 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf249527d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.031 248514 INFO nova.compute.manager [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Took 7.71 seconds to build instance.#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.034 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.039 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:39.043 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf249527d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:39.043 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:38:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:39.046 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf249527d-f0, col_values=(('external_ids', {'iface-id': '238a7791-e0e6-4f94-b696-bd1f6886a564'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:39.046 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.055 248514 DEBUG oslo_concurrency.lockutils [None req-d973d64c-da94-4492-b4ab-17291df41c71 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0187165c-81b1-43b8-81f5-05e847fe1fa6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.111 248514 INFO nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance shutdown successfully after 13 seconds.#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.117 248514 INFO nova.virt.libvirt.driver [-] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance destroyed successfully.#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.117 248514 DEBUG nova.objects.instance [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'numa_topology' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.137 248514 INFO nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Attempting rescue#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.138 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.143 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.144 248514 INFO nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Creating image(s)#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.164 248514 DEBUG nova.storage.rbd_utils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.173 248514 DEBUG nova.objects.instance [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.239 248514 DEBUG nova.storage.rbd_utils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.265 248514 DEBUG nova.storage.rbd_utils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.272 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.307 248514 DEBUG nova.compute.manager [req-d5b24a6a-4f94-4041-8beb-20c124d2431d req-dae07ef8-1039-47eb-ae9e-97f7863fce6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-unplugged-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.309 248514 DEBUG oslo_concurrency.lockutils [req-d5b24a6a-4f94-4041-8beb-20c124d2431d req-dae07ef8-1039-47eb-ae9e-97f7863fce6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.310 248514 DEBUG oslo_concurrency.lockutils [req-d5b24a6a-4f94-4041-8beb-20c124d2431d req-dae07ef8-1039-47eb-ae9e-97f7863fce6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.311 248514 DEBUG oslo_concurrency.lockutils [req-d5b24a6a-4f94-4041-8beb-20c124d2431d req-dae07ef8-1039-47eb-ae9e-97f7863fce6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.311 248514 DEBUG nova.compute.manager [req-d5b24a6a-4f94-4041-8beb-20c124d2431d req-dae07ef8-1039-47eb-ae9e-97f7863fce6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] No waiting events found dispatching network-vif-unplugged-7527f90e-f037-4a94-a011-f952b6e72722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.312 248514 WARNING nova.compute.manager [req-d5b24a6a-4f94-4041-8beb-20c124d2431d req-dae07ef8-1039-47eb-ae9e-97f7863fce6f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received unexpected event network-vif-unplugged-7527f90e-f037-4a94-a011-f952b6e72722 for instance with vm_state active and task_state rescuing.#033[00m
Dec 13 03:38:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Dec 13 03:38:39 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.365 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.366 248514 DEBUG oslo_concurrency.lockutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.367 248514 DEBUG oslo_concurrency.lockutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.368 248514 DEBUG oslo_concurrency.lockutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.396 248514 DEBUG nova.storage.rbd_utils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.400 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:39 np0005558241 podman[335350]: 2025-12-13 08:38:39.539982539 +0000 UTC m=+0.097060732 container create df6c08df2201c6941d0e6528f5275479ef7cadc55f1d5416349030d1ccf7e820 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lumiere, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 03:38:39 np0005558241 podman[335350]: 2025-12-13 08:38:39.474221156 +0000 UTC m=+0.031299369 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:38:39 np0005558241 systemd[1]: Started libpod-conmon-df6c08df2201c6941d0e6528f5275479ef7cadc55f1d5416349030d1ccf7e820.scope.
Dec 13 03:38:39 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:38:39 np0005558241 podman[335350]: 2025-12-13 08:38:39.671542666 +0000 UTC m=+0.228620859 container init df6c08df2201c6941d0e6528f5275479ef7cadc55f1d5416349030d1ccf7e820 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 03:38:39 np0005558241 podman[335350]: 2025-12-13 08:38:39.682143639 +0000 UTC m=+0.239221832 container start df6c08df2201c6941d0e6528f5275479ef7cadc55f1d5416349030d1ccf7e820 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lumiere, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:38:39 np0005558241 eloquent_lumiere[335384]: 167 167
Dec 13 03:38:39 np0005558241 systemd[1]: libpod-df6c08df2201c6941d0e6528f5275479ef7cadc55f1d5416349030d1ccf7e820.scope: Deactivated successfully.
Dec 13 03:38:39 np0005558241 podman[335350]: 2025-12-13 08:38:39.710247527 +0000 UTC m=+0.267325750 container attach df6c08df2201c6941d0e6528f5275479ef7cadc55f1d5416349030d1ccf7e820 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:38:39 np0005558241 podman[335350]: 2025-12-13 08:38:39.711372895 +0000 UTC m=+0.268451088 container died df6c08df2201c6941d0e6528f5275479ef7cadc55f1d5416349030d1ccf7e820 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:38:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2256: 321 pgs: 321 active+clean; 473 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.6 MiB/s wr, 261 op/s
Dec 13 03:38:39 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4b2e97aec0366d44b2018c10784c3bd5ae3e323c75c1156fef2c3b9bb1c6f5d3-merged.mount: Deactivated successfully.
Dec 13 03:38:39 np0005558241 nova_compute[248510]: 2025-12-13 08:38:39.885 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:40 np0005558241 podman[335350]: 2025-12-13 08:38:40.07234128 +0000 UTC m=+0.629419473 container remove df6c08df2201c6941d0e6528f5275479ef7cadc55f1d5416349030d1ccf7e820 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_lumiere, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:38:40 np0005558241 systemd[1]: libpod-conmon-df6c08df2201c6941d0e6528f5275479ef7cadc55f1d5416349030d1ccf7e820.scope: Deactivated successfully.
Dec 13 03:38:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:38:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:38:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:38:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:38:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:38:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.135 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.735s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.136 248514 DEBUG nova.objects.instance [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'migration_context' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.166 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.168 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Start _get_guest_xml network_info=[{"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "vif_mac": "fa:16:3e:f6:f9:b9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.169 248514 DEBUG nova.objects.instance [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'resources' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.214 248514 WARNING nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.220 248514 DEBUG nova.virt.libvirt.host [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.221 248514 DEBUG nova.virt.libvirt.host [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.232 248514 DEBUG nova.virt.libvirt.host [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.232 248514 DEBUG nova.virt.libvirt.host [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.233 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.233 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.234 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.234 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.235 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.235 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.235 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.236 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.236 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.236 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.237 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.237 248514 DEBUG nova.virt.hardware [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.237 248514 DEBUG nova.objects.instance [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.261 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:40 np0005558241 podman[335409]: 2025-12-13 08:38:40.345127014 +0000 UTC m=+0.093982565 container create 10e25d9fce63cc79599f4dc2d75ec17ecd9e065fdc10d35e2bb15959b0112652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:38:40 np0005558241 podman[335409]: 2025-12-13 08:38:40.282912899 +0000 UTC m=+0.031768480 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:38:40 np0005558241 systemd[1]: Started libpod-conmon-10e25d9fce63cc79599f4dc2d75ec17ecd9e065fdc10d35e2bb15959b0112652.scope.
Dec 13 03:38:40 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:38:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da147e85234dd9aa360e25f142be6ee74ecd0ebc5efa85cb0135f62bab008e9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:38:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da147e85234dd9aa360e25f142be6ee74ecd0ebc5efa85cb0135f62bab008e9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:38:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da147e85234dd9aa360e25f142be6ee74ecd0ebc5efa85cb0135f62bab008e9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:38:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da147e85234dd9aa360e25f142be6ee74ecd0ebc5efa85cb0135f62bab008e9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:38:40 np0005558241 podman[335409]: 2025-12-13 08:38:40.484664439 +0000 UTC m=+0.233520020 container init 10e25d9fce63cc79599f4dc2d75ec17ecd9e065fdc10d35e2bb15959b0112652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:38:40 np0005558241 podman[335409]: 2025-12-13 08:38:40.491320824 +0000 UTC m=+0.240176385 container start 10e25d9fce63cc79599f4dc2d75ec17ecd9e065fdc10d35e2bb15959b0112652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 03:38:40 np0005558241 podman[335409]: 2025-12-13 08:38:40.4996143 +0000 UTC m=+0.248469861 container attach 10e25d9fce63cc79599f4dc2d75ec17ecd9e065fdc10d35e2bb15959b0112652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wozniak, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.749 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:38:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/771877666' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.817 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:40 np0005558241 nova_compute[248510]: 2025-12-13 08:38:40.818 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]: {
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:    "0": [
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:        {
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "devices": [
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "/dev/loop3"
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            ],
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "lv_name": "ceph_lv0",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "lv_size": "21470642176",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "name": "ceph_lv0",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "tags": {
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.cluster_name": "ceph",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.crush_device_class": "",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.encrypted": "0",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.objectstore": "bluestore",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.osd_id": "0",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.type": "block",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.vdo": "0",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.with_tpm": "0"
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            },
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "type": "block",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "vg_name": "ceph_vg0"
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:        }
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:    ],
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:    "1": [
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:        {
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "devices": [
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "/dev/loop4"
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            ],
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "lv_name": "ceph_lv1",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "lv_size": "21470642176",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "name": "ceph_lv1",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "tags": {
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.cluster_name": "ceph",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.crush_device_class": "",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.encrypted": "0",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.objectstore": "bluestore",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.osd_id": "1",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.type": "block",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.vdo": "0",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.with_tpm": "0"
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            },
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "type": "block",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "vg_name": "ceph_vg1"
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:        }
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:    ],
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:    "2": [
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:        {
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "devices": [
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "/dev/loop5"
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            ],
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "lv_name": "ceph_lv2",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "lv_size": "21470642176",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "name": "ceph_lv2",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "tags": {
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.cluster_name": "ceph",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.crush_device_class": "",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.encrypted": "0",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.objectstore": "bluestore",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.osd_id": "2",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.type": "block",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.vdo": "0",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:                "ceph.with_tpm": "0"
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            },
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "type": "block",
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:            "vg_name": "ceph_vg2"
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:        }
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]:    ]
Dec 13 03:38:40 np0005558241 practical_wozniak[335445]: }
Dec 13 03:38:40 np0005558241 systemd[1]: libpod-10e25d9fce63cc79599f4dc2d75ec17ecd9e065fdc10d35e2bb15959b0112652.scope: Deactivated successfully.
Dec 13 03:38:40 np0005558241 podman[335409]: 2025-12-13 08:38:40.87010134 +0000 UTC m=+0.618956901 container died 10e25d9fce63cc79599f4dc2d75ec17ecd9e065fdc10d35e2bb15959b0112652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wozniak, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 03:38:40 np0005558241 systemd[1]: var-lib-containers-storage-overlay-da147e85234dd9aa360e25f142be6ee74ecd0ebc5efa85cb0135f62bab008e9e-merged.mount: Deactivated successfully.
Dec 13 03:38:40 np0005558241 podman[335409]: 2025-12-13 08:38:40.943974005 +0000 UTC m=+0.692829566 container remove 10e25d9fce63cc79599f4dc2d75ec17ecd9e065fdc10d35e2bb15959b0112652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wozniak, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 03:38:40 np0005558241 systemd[1]: libpod-conmon-10e25d9fce63cc79599f4dc2d75ec17ecd9e065fdc10d35e2bb15959b0112652.scope: Deactivated successfully.
Dec 13 03:38:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:38:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1823226581' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:38:41 np0005558241 nova_compute[248510]: 2025-12-13 08:38:41.416 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:41 np0005558241 nova_compute[248510]: 2025-12-13 08:38:41.418 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:41 np0005558241 podman[335551]: 2025-12-13 08:38:41.383539631 +0000 UTC m=+0.021845164 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:38:41 np0005558241 podman[335551]: 2025-12-13 08:38:41.566799642 +0000 UTC m=+0.205105155 container create 5a0f1343ccb48ffdb52fbdb45eae617cd8e330782532500ce71144c78435d9fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chebyshev, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 03:38:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:38:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2257: 321 pgs: 321 active+clean; 455 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.6 MiB/s wr, 228 op/s
Dec 13 03:38:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:38:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4050967200' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:38:41 np0005558241 nova_compute[248510]: 2025-12-13 08:38:41.961 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:41 np0005558241 nova_compute[248510]: 2025-12-13 08:38:41.963 248514 DEBUG nova.virt.libvirt.vif [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:38:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1217173991',display_name='tempest-ServerRescueNegativeTestJSON-server-1217173991',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1217173991',id=88,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:38:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c968daf58e624ebda00676d79f6bde96',ramdisk_id='',reservation_id='r-axcrlu30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1030815648',owner_user_name='tempest-ServerRescueNegativeTestJSON-1030815648-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:38:18Z,user_data=None,user_id='302853b7f91745baadf52361a0a7d535',uuid=88cb43c1-f01b-4098-84ea-d372176a0e20,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "vif_mac": "fa:16:3e:f6:f9:b9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:38:41 np0005558241 nova_compute[248510]: 2025-12-13 08:38:41.963 248514 DEBUG nova.network.os_vif_util [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converting VIF {"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "vif_mac": "fa:16:3e:f6:f9:b9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:38:41 np0005558241 nova_compute[248510]: 2025-12-13 08:38:41.964 248514 DEBUG nova.network.os_vif_util [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f9:b9,bridge_name='br-int',has_traffic_filtering=True,id=7527f90e-f037-4a94-a011-f952b6e72722,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7527f90e-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:38:41 np0005558241 nova_compute[248510]: 2025-12-13 08:38:41.965 248514 DEBUG nova.objects.instance [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'pci_devices' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:41 np0005558241 systemd[1]: Started libpod-conmon-5a0f1343ccb48ffdb52fbdb45eae617cd8e330782532500ce71144c78435d9fb.scope.
Dec 13 03:38:42 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:38:42 np0005558241 podman[335551]: 2025-12-13 08:38:42.055377845 +0000 UTC m=+0.693683358 container init 5a0f1343ccb48ffdb52fbdb45eae617cd8e330782532500ce71144c78435d9fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chebyshev, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 03:38:42 np0005558241 podman[335551]: 2025-12-13 08:38:42.062479121 +0000 UTC m=+0.700784634 container start 5a0f1343ccb48ffdb52fbdb45eae617cd8e330782532500ce71144c78435d9fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:38:42 np0005558241 focused_chebyshev[335588]: 167 167
Dec 13 03:38:42 np0005558241 systemd[1]: libpod-5a0f1343ccb48ffdb52fbdb45eae617cd8e330782532500ce71144c78435d9fb.scope: Deactivated successfully.
Dec 13 03:38:42 np0005558241 podman[335551]: 2025-12-13 08:38:42.099255085 +0000 UTC m=+0.737560598 container attach 5a0f1343ccb48ffdb52fbdb45eae617cd8e330782532500ce71144c78435d9fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 03:38:42 np0005558241 podman[335551]: 2025-12-13 08:38:42.099519081 +0000 UTC m=+0.737824594 container died 5a0f1343ccb48ffdb52fbdb45eae617cd8e330782532500ce71144c78435d9fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.145 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:  <uuid>88cb43c1-f01b-4098-84ea-d372176a0e20</uuid>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:  <name>instance-00000058</name>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1217173991</nova:name>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:38:40</nova:creationTime>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:        <nova:user uuid="302853b7f91745baadf52361a0a7d535">tempest-ServerRescueNegativeTestJSON-1030815648-project-member</nova:user>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:        <nova:project uuid="c968daf58e624ebda00676d79f6bde96">tempest-ServerRescueNegativeTestJSON-1030815648</nova:project>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:        <nova:port uuid="7527f90e-f037-4a94-a011-f952b6e72722">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <entry name="serial">88cb43c1-f01b-4098-84ea-d372176a0e20</entry>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <entry name="uuid">88cb43c1-f01b-4098-84ea-d372176a0e20</entry>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/88cb43c1-f01b-4098-84ea-d372176a0e20_disk.rescue">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/88cb43c1-f01b-4098-84ea-d372176a0e20_disk">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <target dev="vdb" bus="virtio"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config.rescue">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:f6:f9:b9"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <target dev="tap7527f90e-f0"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/console.log" append="off"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:38:42 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:38:42 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:38:42 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:38:42 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.159 248514 INFO nova.virt.libvirt.driver [-] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance destroyed successfully.
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.230 248514 INFO nova.compute.manager [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Rebuilding instance
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.318 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.318 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.319 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.319 248514 DEBUG nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] No VIF found with MAC fa:16:3e:f6:f9:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.319 248514 INFO nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Using config drive
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.343 248514 DEBUG nova.storage.rbd_utils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.353 248514 DEBUG nova.compute.manager [req-f861c44a-3b38-4883-8b63-35383d6edabd req-25a1a3ec-f320-488f-a44c-e446d90f43f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.354 248514 DEBUG oslo_concurrency.lockutils [req-f861c44a-3b38-4883-8b63-35383d6edabd req-25a1a3ec-f320-488f-a44c-e446d90f43f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.354 248514 DEBUG oslo_concurrency.lockutils [req-f861c44a-3b38-4883-8b63-35383d6edabd req-25a1a3ec-f320-488f-a44c-e446d90f43f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.354 248514 DEBUG oslo_concurrency.lockutils [req-f861c44a-3b38-4883-8b63-35383d6edabd req-25a1a3ec-f320-488f-a44c-e446d90f43f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.354 248514 DEBUG nova.compute.manager [req-f861c44a-3b38-4883-8b63-35383d6edabd req-25a1a3ec-f320-488f-a44c-e446d90f43f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] No waiting events found dispatching network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.355 248514 WARNING nova.compute.manager [req-f861c44a-3b38-4883-8b63-35383d6edabd req-25a1a3ec-f320-488f-a44c-e446d90f43f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received unexpected event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 for instance with vm_state active and task_state rescuing.
Dec 13 03:38:42 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e734d26709e43dba86abeaf0044ea193bc6dffd26e94601e451ecb6546ec2746-merged.mount: Deactivated successfully.
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.404 248514 DEBUG nova.objects.instance [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.438 248514 DEBUG nova.objects.instance [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'keypairs' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:38:42 np0005558241 podman[335551]: 2025-12-13 08:38:42.674209943 +0000 UTC m=+1.312515456 container remove 5a0f1343ccb48ffdb52fbdb45eae617cd8e330782532500ce71144c78435d9fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_chebyshev, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:38:42 np0005558241 systemd[1]: libpod-conmon-5a0f1343ccb48ffdb52fbdb45eae617cd8e330782532500ce71144c78435d9fb.scope: Deactivated successfully.
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.820 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.845 248514 DEBUG nova.compute.manager [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.901 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'pci_requests' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.923 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:38:42 np0005558241 podman[335634]: 2025-12-13 08:38:42.848103332 +0000 UTC m=+0.031184596 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.953 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'resources' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.967 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'migration_context' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.986 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 03:38:42 np0005558241 nova_compute[248510]: 2025-12-13 08:38:42.989 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 03:38:43 np0005558241 podman[335634]: 2025-12-13 08:38:43.030937442 +0000 UTC m=+0.214018676 container create e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hofstadter, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 03:38:43 np0005558241 systemd[1]: Started libpod-conmon-e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e.scope.
Dec 13 03:38:43 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:38:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4366aef7e89853f4ab417813372af5e842a5aadac9a6ac87e294ccc9d86c2efd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:38:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4366aef7e89853f4ab417813372af5e842a5aadac9a6ac87e294ccc9d86c2efd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:38:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4366aef7e89853f4ab417813372af5e842a5aadac9a6ac87e294ccc9d86c2efd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:38:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4366aef7e89853f4ab417813372af5e842a5aadac9a6ac87e294ccc9d86c2efd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:38:43 np0005558241 nova_compute[248510]: 2025-12-13 08:38:43.288 248514 INFO nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Creating config drive at /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config.rescue
Dec 13 03:38:43 np0005558241 podman[335634]: 2025-12-13 08:38:43.291638446 +0000 UTC m=+0.474719710 container init e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hofstadter, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:38:43 np0005558241 nova_compute[248510]: 2025-12-13 08:38:43.299 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi5g0lf52 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:38:43 np0005558241 podman[335634]: 2025-12-13 08:38:43.300354433 +0000 UTC m=+0.483435667 container start e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hofstadter, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 03:38:43 np0005558241 podman[335634]: 2025-12-13 08:38:43.326312208 +0000 UTC m=+0.509393462 container attach e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hofstadter, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 03:38:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2258: 321 pgs: 321 active+clean; 439 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 2.2 MiB/s wr, 248 op/s
Dec 13 03:38:44 np0005558241 lvm[335730]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:38:44 np0005558241 lvm[335731]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:38:44 np0005558241 lvm[335731]: VG ceph_vg1 finished
Dec 13 03:38:44 np0005558241 lvm[335730]: VG ceph_vg0 finished
Dec 13 03:38:44 np0005558241 lvm[335733]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:38:44 np0005558241 lvm[335733]: VG ceph_vg2 finished
Dec 13 03:38:44 np0005558241 strange_hofstadter[335649]: {}
Dec 13 03:38:44 np0005558241 systemd[1]: libpod-e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e.scope: Deactivated successfully.
Dec 13 03:38:44 np0005558241 systemd[1]: libpod-e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e.scope: Consumed 1.364s CPU time.
Dec 13 03:38:44 np0005558241 podman[335736]: 2025-12-13 08:38:44.180712346 +0000 UTC m=+0.023235028 container died e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:38:44 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4366aef7e89853f4ab417813372af5e842a5aadac9a6ac87e294ccc9d86c2efd-merged.mount: Deactivated successfully.
Dec 13 03:38:44 np0005558241 podman[335736]: 2025-12-13 08:38:44.233102036 +0000 UTC m=+0.075624728 container remove e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_hofstadter, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 03:38:44 np0005558241 systemd[1]: libpod-conmon-e8375d0e3341b94887112f1194c7b07195b4464bc3c0ca59e6aeaac1ee66040e.scope: Deactivated successfully.
Dec 13 03:38:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:38:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:38:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:38:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:38:44 np0005558241 nova_compute[248510]: 2025-12-13 08:38:44.354 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi5g0lf52" returned: 0 in 1.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:38:44 np0005558241 nova_compute[248510]: 2025-12-13 08:38:44.376 248514 DEBUG nova.storage.rbd_utils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] rbd image 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:38:44 np0005558241 nova_compute[248510]: 2025-12-13 08:38:44.380 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config.rescue 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:38:44 np0005558241 nova_compute[248510]: 2025-12-13 08:38:44.507 248514 DEBUG oslo_concurrency.processutils [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config.rescue 88cb43c1-f01b-4098-84ea-d372176a0e20_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:38:44 np0005558241 nova_compute[248510]: 2025-12-13 08:38:44.508 248514 INFO nova.virt.libvirt.driver [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Deleting local config drive /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20/disk.config.rescue because it was imported into RBD.
Dec 13 03:38:44 np0005558241 kernel: tap7527f90e-f0: entered promiscuous mode
Dec 13 03:38:44 np0005558241 systemd-udevd[335728]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:38:44 np0005558241 NetworkManager[50376]: <info>  [1765615124.5829] manager: (tap7527f90e-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/379)
Dec 13 03:38:44 np0005558241 nova_compute[248510]: 2025-12-13 08:38:44.586 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:38:44 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:44Z|00876|binding|INFO|Claiming lport 7527f90e-f037-4a94-a011-f952b6e72722 for this chassis.
Dec 13 03:38:44 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:44Z|00877|binding|INFO|7527f90e-f037-4a94-a011-f952b6e72722: Claiming fa:16:3e:f6:f9:b9 10.100.0.7
Dec 13 03:38:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.595 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:f9:b9 10.100.0.7'], port_security=['fa:16:3e:f6:f9:b9 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '88cb43c1-f01b-4098-84ea-d372176a0e20', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f249527d-f9e6-43ce-a178-f71fc1d38891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c968daf58e624ebda00676d79f6bde96', 'neutron:revision_number': '5', 'neutron:security_group_ids': '10b8d803-068c-4438-9c54-bb9fca806dca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83315ce7-c99d-41f6-afce-aabb37f0e27c, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7527f90e-f037-4a94-a011-f952b6e72722) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 03:38:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.597 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7527f90e-f037-4a94-a011-f952b6e72722 in datapath f249527d-f9e6-43ce-a178-f71fc1d38891 bound to our chassis
Dec 13 03:38:44 np0005558241 NetworkManager[50376]: <info>  [1765615124.5978] device (tap7527f90e-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:38:44 np0005558241 NetworkManager[50376]: <info>  [1765615124.5988] device (tap7527f90e-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:38:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.599 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f249527d-f9e6-43ce-a178-f71fc1d38891#033[00m
Dec 13 03:38:44 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:44Z|00878|binding|INFO|Setting lport 7527f90e-f037-4a94-a011-f952b6e72722 ovn-installed in OVS
Dec 13 03:38:44 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:44Z|00879|binding|INFO|Setting lport 7527f90e-f037-4a94-a011-f952b6e72722 up in Southbound
Dec 13 03:38:44 np0005558241 nova_compute[248510]: 2025-12-13 08:38:44.610 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:44 np0005558241 nova_compute[248510]: 2025-12-13 08:38:44.614 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.623 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[62aed008-e1b9-47d2-9abf-c73458753945]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:44 np0005558241 systemd-machined[210538]: New machine qemu-111-instance-00000058.
Dec 13 03:38:44 np0005558241 systemd[1]: Started Virtual Machine qemu-111-instance-00000058.
Dec 13 03:38:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.655 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2a19b711-9c95-4bba-9830-31b0dd7b683d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.660 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[eacc9ea9-2ff3-48c8-8dc0-96d87d394cb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.692 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[51990ac6-e592-4a2f-b3b2-5d4177dfcebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.710 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8fcee6e8-0a06-4521-bc4f-dce691625cb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf249527d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:d2:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766854, 'reachable_time': 22354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335835, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.736 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bd815f8d-0ddd-4277-86a6-d3a5c7402c53]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf249527d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766865, 'tstamp': 766865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335838, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf249527d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766868, 'tstamp': 766868}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335838, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.737 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf249527d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:44 np0005558241 nova_compute[248510]: 2025-12-13 08:38:44.739 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:44 np0005558241 nova_compute[248510]: 2025-12-13 08:38:44.740 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.741 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf249527d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.741 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:38:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.741 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf249527d-f0, col_values=(('external_ids', {'iface-id': '238a7791-e0e6-4f94-b696-bd1f6886a564'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:44.742 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:38:44 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:38:44 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:38:44 np0005558241 nova_compute[248510]: 2025-12-13 08:38:44.889 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:45 np0005558241 nova_compute[248510]: 2025-12-13 08:38:45.253 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 88cb43c1-f01b-4098-84ea-d372176a0e20 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 03:38:45 np0005558241 nova_compute[248510]: 2025-12-13 08:38:45.254 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615125.2533755, 88cb43c1-f01b-4098-84ea-d372176a0e20 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:38:45 np0005558241 nova_compute[248510]: 2025-12-13 08:38:45.255 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:38:45 np0005558241 nova_compute[248510]: 2025-12-13 08:38:45.260 248514 DEBUG nova.compute.manager [None req-6a21cd9a-8fd8-44ec-9ad8-dd7bcfb19726 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:45 np0005558241 nova_compute[248510]: 2025-12-13 08:38:45.357 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:45 np0005558241 nova_compute[248510]: 2025-12-13 08:38:45.360 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:38:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2259: 321 pgs: 321 active+clean; 418 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.4 MiB/s wr, 266 op/s
Dec 13 03:38:45 np0005558241 nova_compute[248510]: 2025-12-13 08:38:45.751 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:45 np0005558241 nova_compute[248510]: 2025-12-13 08:38:45.798 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Dec 13 03:38:45 np0005558241 nova_compute[248510]: 2025-12-13 08:38:45.798 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615125.254314, 88cb43c1-f01b-4098-84ea-d372176a0e20 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:38:45 np0005558241 nova_compute[248510]: 2025-12-13 08:38:45.799 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] VM Started (Lifecycle Event)#033[00m
Dec 13 03:38:45 np0005558241 nova_compute[248510]: 2025-12-13 08:38:45.881 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:45 np0005558241 nova_compute[248510]: 2025-12-13 08:38:45.885 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:38:45 np0005558241 nova_compute[248510]: 2025-12-13 08:38:45.961 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Dec 13 03:38:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:38:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Dec 13 03:38:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Dec 13 03:38:46 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Dec 13 03:38:46 np0005558241 nova_compute[248510]: 2025-12-13 08:38:46.649 248514 DEBUG nova.compute.manager [req-ed5656db-b671-4abd-aedd-7ee6aa687b00 req-ae00445d-ad36-4dcb-823e-cc9b6ad77fed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:38:46 np0005558241 nova_compute[248510]: 2025-12-13 08:38:46.650 248514 DEBUG oslo_concurrency.lockutils [req-ed5656db-b671-4abd-aedd-7ee6aa687b00 req-ae00445d-ad36-4dcb-823e-cc9b6ad77fed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:46 np0005558241 nova_compute[248510]: 2025-12-13 08:38:46.651 248514 DEBUG oslo_concurrency.lockutils [req-ed5656db-b671-4abd-aedd-7ee6aa687b00 req-ae00445d-ad36-4dcb-823e-cc9b6ad77fed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:46 np0005558241 nova_compute[248510]: 2025-12-13 08:38:46.651 248514 DEBUG oslo_concurrency.lockutils [req-ed5656db-b671-4abd-aedd-7ee6aa687b00 req-ae00445d-ad36-4dcb-823e-cc9b6ad77fed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:46 np0005558241 nova_compute[248510]: 2025-12-13 08:38:46.651 248514 DEBUG nova.compute.manager [req-ed5656db-b671-4abd-aedd-7ee6aa687b00 req-ae00445d-ad36-4dcb-823e-cc9b6ad77fed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] No waiting events found dispatching network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:38:46 np0005558241 nova_compute[248510]: 2025-12-13 08:38:46.651 248514 WARNING nova.compute.manager [req-ed5656db-b671-4abd-aedd-7ee6aa687b00 req-ae00445d-ad36-4dcb-823e-cc9b6ad77fed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received unexpected event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 for instance with vm_state rescued and task_state None.#033[00m
Dec 13 03:38:46 np0005558241 nova_compute[248510]: 2025-12-13 08:38:46.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:38:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:46.923 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:38:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:46.924 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:38:46 np0005558241 nova_compute[248510]: 2025-12-13 08:38:46.924 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2261: 321 pgs: 321 active+clean; 418 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.5 MiB/s wr, 185 op/s
Dec 13 03:38:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:48Z|00880|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec 13 03:38:48 np0005558241 nova_compute[248510]: 2025-12-13 08:38:48.827 248514 DEBUG nova.compute.manager [req-1c0c7024-f1bc-4433-b1bd-cdac04f200c8 req-fcefe544-9f80-4212-9f35-b8e3562396ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:38:48 np0005558241 nova_compute[248510]: 2025-12-13 08:38:48.827 248514 DEBUG oslo_concurrency.lockutils [req-1c0c7024-f1bc-4433-b1bd-cdac04f200c8 req-fcefe544-9f80-4212-9f35-b8e3562396ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:48 np0005558241 nova_compute[248510]: 2025-12-13 08:38:48.827 248514 DEBUG oslo_concurrency.lockutils [req-1c0c7024-f1bc-4433-b1bd-cdac04f200c8 req-fcefe544-9f80-4212-9f35-b8e3562396ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:48 np0005558241 nova_compute[248510]: 2025-12-13 08:38:48.828 248514 DEBUG oslo_concurrency.lockutils [req-1c0c7024-f1bc-4433-b1bd-cdac04f200c8 req-fcefe544-9f80-4212-9f35-b8e3562396ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:48 np0005558241 nova_compute[248510]: 2025-12-13 08:38:48.828 248514 DEBUG nova.compute.manager [req-1c0c7024-f1bc-4433-b1bd-cdac04f200c8 req-fcefe544-9f80-4212-9f35-b8e3562396ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] No waiting events found dispatching network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:38:48 np0005558241 nova_compute[248510]: 2025-12-13 08:38:48.828 248514 WARNING nova.compute.manager [req-1c0c7024-f1bc-4433-b1bd-cdac04f200c8 req-fcefe544-9f80-4212-9f35-b8e3562396ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received unexpected event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 for instance with vm_state rescued and task_state None.#033[00m
Dec 13 03:38:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2262: 321 pgs: 321 active+clean; 436 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.6 MiB/s wr, 268 op/s
Dec 13 03:38:49 np0005558241 nova_compute[248510]: 2025-12-13 08:38:49.890 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:50 np0005558241 nova_compute[248510]: 2025-12-13 08:38:50.521 248514 INFO nova.compute.manager [None req-3b87bcea-62e9-4416-a763-2cb56083df2d 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Pausing#033[00m
Dec 13 03:38:50 np0005558241 nova_compute[248510]: 2025-12-13 08:38:50.522 248514 DEBUG nova.objects.instance [None req-3b87bcea-62e9-4416-a763-2cb56083df2d 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'flavor' on Instance uuid 6fb6c605-344c-4ed9-806d-96964b0474f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:50 np0005558241 nova_compute[248510]: 2025-12-13 08:38:50.555 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615130.5549407, 6fb6c605-344c-4ed9-806d-96964b0474f9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:38:50 np0005558241 nova_compute[248510]: 2025-12-13 08:38:50.555 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:38:50 np0005558241 nova_compute[248510]: 2025-12-13 08:38:50.557 248514 DEBUG nova.compute.manager [None req-3b87bcea-62e9-4416-a763-2cb56083df2d 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:50 np0005558241 nova_compute[248510]: 2025-12-13 08:38:50.601 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:50 np0005558241 nova_compute[248510]: 2025-12-13 08:38:50.608 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:38:50 np0005558241 nova_compute[248510]: 2025-12-13 08:38:50.753 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:50 np0005558241 nova_compute[248510]: 2025-12-13 08:38:50.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:38:50 np0005558241 nova_compute[248510]: 2025-12-13 08:38:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:38:50 np0005558241 nova_compute[248510]: 2025-12-13 08:38:50.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:38:50 np0005558241 nova_compute[248510]: 2025-12-13 08:38:50.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:38:51 np0005558241 nova_compute[248510]: 2025-12-13 08:38:51.148 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:38:51 np0005558241 nova_compute[248510]: 2025-12-13 08:38:51.148 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:38:51 np0005558241 nova_compute[248510]: 2025-12-13 08:38:51.148 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:38:51 np0005558241 nova_compute[248510]: 2025-12-13 08:38:51.148 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:38:51 np0005558241 nova_compute[248510]: 2025-12-13 08:38:51.676 248514 INFO nova.compute.manager [None req-33feb07c-a376-4c65-a100-6bc40bbe0a95 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Unpausing#033[00m
Dec 13 03:38:51 np0005558241 nova_compute[248510]: 2025-12-13 08:38:51.677 248514 DEBUG nova.objects.instance [None req-33feb07c-a376-4c65-a100-6bc40bbe0a95 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'flavor' on Instance uuid 6fb6c605-344c-4ed9-806d-96964b0474f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:51 np0005558241 nova_compute[248510]: 2025-12-13 08:38:51.706 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615131.7060363, 6fb6c605-344c-4ed9-806d-96964b0474f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:38:51 np0005558241 nova_compute[248510]: 2025-12-13 08:38:51.707 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:38:51 np0005558241 virtqemud[248808]: argument unsupported: QEMU guest agent is not configured
Dec 13 03:38:51 np0005558241 nova_compute[248510]: 2025-12-13 08:38:51.710 248514 DEBUG nova.virt.libvirt.guest [None req-33feb07c-a376-4c65-a100-6bc40bbe0a95 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Dec 13 03:38:51 np0005558241 nova_compute[248510]: 2025-12-13 08:38:51.711 248514 DEBUG nova.compute.manager [None req-33feb07c-a376-4c65-a100-6bc40bbe0a95 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:51 np0005558241 nova_compute[248510]: 2025-12-13 08:38:51.734 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:38:51 np0005558241 nova_compute[248510]: 2025-12-13 08:38:51.736 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:38:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2263: 321 pgs: 321 active+clean; 451 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.6 MiB/s wr, 257 op/s
Dec 13 03:38:51 np0005558241 nova_compute[248510]: 2025-12-13 08:38:51.761 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Dec 13 03:38:52 np0005558241 nova_compute[248510]: 2025-12-13 08:38:52.651 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updating instance_info_cache with network_info: [{"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:38:52 np0005558241 nova_compute[248510]: 2025-12-13 08:38:52.670 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:38:52 np0005558241 nova_compute[248510]: 2025-12-13 08:38:52.671 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:38:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Dec 13 03:38:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Dec 13 03:38:52 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Dec 13 03:38:53 np0005558241 nova_compute[248510]: 2025-12-13 08:38:53.033 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec 13 03:38:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2265: 321 pgs: 321 active+clean; 467 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.3 MiB/s wr, 276 op/s
Dec 13 03:38:53 np0005558241 nova_compute[248510]: 2025-12-13 08:38:53.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.044 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "658e5f04-399b-4a8a-8680-5ae9717949c0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.044 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.137 248514 DEBUG nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.231 248514 DEBUG oslo_concurrency.lockutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.232 248514 DEBUG oslo_concurrency.lockutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.232 248514 DEBUG oslo_concurrency.lockutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.232 248514 DEBUG oslo_concurrency.lockutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.232 248514 DEBUG oslo_concurrency.lockutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.233 248514 INFO nova.compute.manager [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Terminating instance#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.234 248514 DEBUG nova.compute.manager [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.247 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.247 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.254 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.254 248514 INFO nova.compute.claims [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:38:54 np0005558241 kernel: tap7527f90e-f0 (unregistering): left promiscuous mode
Dec 13 03:38:54 np0005558241 NetworkManager[50376]: <info>  [1765615134.2769] device (tap7527f90e-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.290 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:54Z|00881|binding|INFO|Releasing lport 7527f90e-f037-4a94-a011-f952b6e72722 from this chassis (sb_readonly=0)
Dec 13 03:38:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:54Z|00882|binding|INFO|Setting lport 7527f90e-f037-4a94-a011-f952b6e72722 down in Southbound
Dec 13 03:38:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:54Z|00883|binding|INFO|Removing iface tap7527f90e-f0 ovn-installed in OVS
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.292 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.300 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:f9:b9 10.100.0.7'], port_security=['fa:16:3e:f6:f9:b9 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '88cb43c1-f01b-4098-84ea-d372176a0e20', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f249527d-f9e6-43ce-a178-f71fc1d38891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c968daf58e624ebda00676d79f6bde96', 'neutron:revision_number': '6', 'neutron:security_group_ids': '10b8d803-068c-4438-9c54-bb9fca806dca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83315ce7-c99d-41f6-afce-aabb37f0e27c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=7527f90e-f037-4a94-a011-f952b6e72722) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:38:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.301 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 7527f90e-f037-4a94-a011-f952b6e72722 in datapath f249527d-f9e6-43ce-a178-f71fc1d38891 unbound from our chassis#033[00m
Dec 13 03:38:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.302 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f249527d-f9e6-43ce-a178-f71fc1d38891#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.309 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.320 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[87c44d16-a827-4fa4-86cb-4e98fa3ebd18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.347 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[118d0996-9b78-47f7-afef-d6e8ef016cb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.349 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b2953401-58ab-4851-a619-6f958bd05c67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:54 np0005558241 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000058.scope: Deactivated successfully.
Dec 13 03:38:54 np0005558241 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000058.scope: Consumed 9.713s CPU time.
Dec 13 03:38:54 np0005558241 systemd-machined[210538]: Machine qemu-111-instance-00000058 terminated.
Dec 13 03:38:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.378 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[19331db5-c74d-4981-9e93-0ed8a0b5f15e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.395 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e02352af-9955-4215-8d3a-a8e8c94bb729]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf249527d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:d2:c4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766854, 'reachable_time': 22354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335911, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.412 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9f15d4f2-45bb-4666-92fc-1450c5fb9ba9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf249527d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766865, 'tstamp': 766865}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335912, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf249527d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766868, 'tstamp': 766868}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335912, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.413 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf249527d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.415 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.419 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.419 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf249527d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.419 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:38:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.420 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf249527d-f0, col_values=(('external_ids', {'iface-id': '238a7791-e0e6-4f94-b696-bd1f6886a564'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:54.420 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.474 248514 INFO nova.virt.libvirt.driver [-] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Instance destroyed successfully.#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.474 248514 DEBUG nova.objects.instance [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'resources' on Instance uuid 88cb43c1-f01b-4098-84ea-d372176a0e20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.492 248514 DEBUG nova.virt.libvirt.vif [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:38:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1217173991',display_name='tempest-ServerRescueNegativeTestJSON-server-1217173991',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1217173991',id=88,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:38:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c968daf58e624ebda00676d79f6bde96',ramdisk_id='',reservation_id='r-axcrlu30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_
model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1030815648',owner_user_name='tempest-ServerRescueNegativeTestJSON-1030815648-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:38:45Z,user_data=None,user_id='302853b7f91745baadf52361a0a7d535',uuid=88cb43c1-f01b-4098-84ea-d372176a0e20,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.493 248514 DEBUG nova.network.os_vif_util [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converting VIF {"id": "7527f90e-f037-4a94-a011-f952b6e72722", "address": "fa:16:3e:f6:f9:b9", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7527f90e-f0", "ovs_interfaceid": "7527f90e-f037-4a94-a011-f952b6e72722", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.494 248514 DEBUG nova.network.os_vif_util [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f9:b9,bridge_name='br-int',has_traffic_filtering=True,id=7527f90e-f037-4a94-a011-f952b6e72722,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7527f90e-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.494 248514 DEBUG os_vif [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f9:b9,bridge_name='br-int',has_traffic_filtering=True,id=7527f90e-f037-4a94-a011-f952b6e72722,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7527f90e-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.495 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.495 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7527f90e-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.499 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.500 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.502 248514 INFO os_vif [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f9:b9,bridge_name='br-int',has_traffic_filtering=True,id=7527f90e-f037-4a94-a011-f952b6e72722,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7527f90e-f0')#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.543 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.950 248514 INFO nova.virt.libvirt.driver [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Deleting instance files /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20_del#033[00m
Dec 13 03:38:54 np0005558241 nova_compute[248510]: 2025-12-13 08:38:54.951 248514 INFO nova.virt.libvirt.driver [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Deletion of /var/lib/nova/instances/88cb43c1-f01b-4098-84ea-d372176a0e20_del complete#033[00m
Dec 13 03:38:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:38:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3243695784' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:38:55 np0005558241 nova_compute[248510]: 2025-12-13 08:38:55.106 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:55 np0005558241 nova_compute[248510]: 2025-12-13 08:38:55.111 248514 DEBUG nova.compute.provider_tree [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:38:55 np0005558241 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Dec 13 03:38:55 np0005558241 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d0000005a.scope: Consumed 13.384s CPU time.
Dec 13 03:38:55 np0005558241 systemd-machined[210538]: Machine qemu-110-instance-0000005a terminated.
Dec 13 03:38:55 np0005558241 nova_compute[248510]: 2025-12-13 08:38:55.327 248514 DEBUG nova.scheduler.client.report [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:38:55 np0005558241 nova_compute[248510]: 2025-12-13 08:38:55.386 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:55 np0005558241 nova_compute[248510]: 2025-12-13 08:38:55.387 248514 DEBUG nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:38:55 np0005558241 nova_compute[248510]: 2025-12-13 08:38:55.396 248514 INFO nova.compute.manager [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Took 1.16 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:38:55 np0005558241 nova_compute[248510]: 2025-12-13 08:38:55.397 248514 DEBUG oslo.service.loopingcall [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:38:55 np0005558241 nova_compute[248510]: 2025-12-13 08:38:55.397 248514 DEBUG nova.compute.manager [-] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:38:55 np0005558241 nova_compute[248510]: 2025-12-13 08:38:55.397 248514 DEBUG nova.network.neutron [-] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:38:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:55.419 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:55.419 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:55.420 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:55 np0005558241 nova_compute[248510]: 2025-12-13 08:38:55.499 248514 DEBUG nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:38:55 np0005558241 nova_compute[248510]: 2025-12-13 08:38:55.500 248514 DEBUG nova.network.neutron [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:38:55 np0005558241 nova_compute[248510]: 2025-12-13 08:38:55.568 248514 INFO nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:38:55 np0005558241 nova_compute[248510]: 2025-12-13 08:38:55.630 248514 DEBUG nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:38:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2266: 321 pgs: 321 active+clean; 460 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.6 MiB/s wr, 284 op/s
Dec 13 03:38:55 np0005558241 nova_compute[248510]: 2025-12-13 08:38:55.756 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:55 np0005558241 nova_compute[248510]: 2025-12-13 08:38:55.828 248514 DEBUG nova.policy [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7507939da64e4320a1c6f389d0fc9045', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:38:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Dec 13 03:38:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Dec 13 03:38:55 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Dec 13 03:38:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:55.927 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.081 248514 DEBUG nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.083 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.083 248514 INFO nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Creating image(s)#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.104 248514 DEBUG nova.storage.rbd_utils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 658e5f04-399b-4a8a-8680-5ae9717949c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.128 248514 DEBUG nova.storage.rbd_utils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 658e5f04-399b-4a8a-8680-5ae9717949c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.152 248514 DEBUG nova.storage.rbd_utils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 658e5f04-399b-4a8a-8680-5ae9717949c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.156 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.199 248514 DEBUG nova.compute.manager [req-4981d1c5-10be-4bd5-b55a-fb71f249cfd8 req-f6f8bfb2-0be3-4fb7-a213-b90390a07cee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-unplugged-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.200 248514 DEBUG oslo_concurrency.lockutils [req-4981d1c5-10be-4bd5-b55a-fb71f249cfd8 req-f6f8bfb2-0be3-4fb7-a213-b90390a07cee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.200 248514 DEBUG oslo_concurrency.lockutils [req-4981d1c5-10be-4bd5-b55a-fb71f249cfd8 req-f6f8bfb2-0be3-4fb7-a213-b90390a07cee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.200 248514 DEBUG oslo_concurrency.lockutils [req-4981d1c5-10be-4bd5-b55a-fb71f249cfd8 req-f6f8bfb2-0be3-4fb7-a213-b90390a07cee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.201 248514 DEBUG nova.compute.manager [req-4981d1c5-10be-4bd5-b55a-fb71f249cfd8 req-f6f8bfb2-0be3-4fb7-a213-b90390a07cee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] No waiting events found dispatching network-vif-unplugged-7527f90e-f037-4a94-a011-f952b6e72722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.201 248514 DEBUG nova.compute.manager [req-4981d1c5-10be-4bd5-b55a-fb71f249cfd8 req-f6f8bfb2-0be3-4fb7-a213-b90390a07cee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-unplugged-7527f90e-f037-4a94-a011-f952b6e72722 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.204 248514 INFO nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance shutdown successfully after 13 seconds.#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.213 248514 INFO nova.virt.libvirt.driver [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance destroyed successfully.#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.219 248514 INFO nova.virt.libvirt.driver [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance destroyed successfully.#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.240 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.241 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.241 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.242 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.263 248514 DEBUG nova.storage.rbd_utils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 658e5f04-399b-4a8a-8680-5ae9717949c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.267 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 658e5f04-399b-4a8a-8680-5ae9717949c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.574 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 658e5f04-399b-4a8a-8680-5ae9717949c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.631 248514 INFO nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Deleting instance files /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6_del#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.632 248514 INFO nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Deletion of /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6_del complete#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.640 248514 DEBUG nova.storage.rbd_utils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] resizing rbd image 658e5f04-399b-4a8a-8680-5ae9717949c0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.735 248514 DEBUG nova.objects.instance [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'migration_context' on Instance uuid 658e5f04-399b-4a8a-8680-5ae9717949c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.776 248514 DEBUG nova.network.neutron [-] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.778 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.778 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Ensure instance console log exists: /var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.779 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.779 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.779 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.871 248514 INFO nova.compute.manager [-] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Took 1.47 seconds to deallocate network for instance.#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.940 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.941 248514 INFO nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Creating image(s)#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.967 248514 DEBUG nova.storage.rbd_utils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:56 np0005558241 nova_compute[248510]: 2025-12-13 08:38:56.990 248514 DEBUG nova.storage.rbd_utils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.015 248514 DEBUG nova.storage.rbd_utils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.019 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.056 248514 DEBUG oslo_concurrency.lockutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.057 248514 DEBUG oslo_concurrency.lockutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.096 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.097 248514 DEBUG oslo_concurrency.lockutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.098 248514 DEBUG oslo_concurrency.lockutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.098 248514 DEBUG oslo_concurrency.lockutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.117 248514 DEBUG nova.storage.rbd_utils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.120 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.285 248514 DEBUG oslo_concurrency.processutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.454 248514 DEBUG nova.network.neutron [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Successfully created port: f767e871-4f9e-414e-a61d-c70cffe80128 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:38:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2268: 321 pgs: 321 active+clean; 460 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.6 MiB/s wr, 185 op/s
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.779 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.814 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.846 248514 DEBUG nova.storage.rbd_utils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] resizing rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:38:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Dec 13 03:38:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Dec 13 03:38:57 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.931 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.932 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Ensure instance console log exists: /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.932 248514 DEBUG oslo_concurrency.lockutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.933 248514 DEBUG oslo_concurrency.lockutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.933 248514 DEBUG oslo_concurrency.lockutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.934 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:38:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:38:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1505906422' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.939 248514 WARNING nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.960 248514 DEBUG oslo_concurrency.processutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.961 248514 DEBUG nova.virt.libvirt.host [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.961 248514 DEBUG nova.virt.libvirt.host [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.965 248514 DEBUG nova.compute.provider_tree [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.967 248514 DEBUG nova.virt.libvirt.host [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.968 248514 DEBUG nova.virt.libvirt.host [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.968 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.968 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.969 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.969 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.969 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.969 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.969 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.970 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.970 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.970 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.970 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.970 248514 DEBUG nova.virt.hardware [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.971 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.989 248514 DEBUG nova.scheduler.client.report [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:38:57 np0005558241 nova_compute[248510]: 2025-12-13 08:38:57.996 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.065 248514 DEBUG oslo_concurrency.lockutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.070 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.070 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.071 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.071 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.165 248514 INFO nova.scheduler.client.report [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Deleted allocations for instance 88cb43c1-f01b-4098-84ea-d372176a0e20#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.262 248514 DEBUG nova.compute.manager [req-fd2a86c4-dc05-499e-bdd1-f79d3ba9f69d req-269e614a-1b37-425c-ad03-61322c061f30 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.262 248514 DEBUG oslo_concurrency.lockutils [req-fd2a86c4-dc05-499e-bdd1-f79d3ba9f69d req-269e614a-1b37-425c-ad03-61322c061f30 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.263 248514 DEBUG oslo_concurrency.lockutils [req-fd2a86c4-dc05-499e-bdd1-f79d3ba9f69d req-269e614a-1b37-425c-ad03-61322c061f30 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.263 248514 DEBUG oslo_concurrency.lockutils [req-fd2a86c4-dc05-499e-bdd1-f79d3ba9f69d req-269e614a-1b37-425c-ad03-61322c061f30 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.263 248514 DEBUG nova.compute.manager [req-fd2a86c4-dc05-499e-bdd1-f79d3ba9f69d req-269e614a-1b37-425c-ad03-61322c061f30 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] No waiting events found dispatching network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.264 248514 WARNING nova.compute.manager [req-fd2a86c4-dc05-499e-bdd1-f79d3ba9f69d req-269e614a-1b37-425c-ad03-61322c061f30 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received unexpected event network-vif-plugged-7527f90e-f037-4a94-a011-f952b6e72722 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.264 248514 DEBUG nova.compute.manager [req-fd2a86c4-dc05-499e-bdd1-f79d3ba9f69d req-269e614a-1b37-425c-ad03-61322c061f30 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Received event network-vif-deleted-7527f90e-f037-4a94-a011-f952b6e72722 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.269 248514 DEBUG oslo_concurrency.lockutils [None req-8a1dfcfa-39b4-45cb-b1ad-fc16e8b59e66 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "88cb43c1-f01b-4098-84ea-d372176a0e20" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.356 248514 DEBUG nova.network.neutron [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Successfully updated port: f767e871-4f9e-414e-a61d-c70cffe80128 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.372 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "refresh_cache-658e5f04-399b-4a8a-8680-5ae9717949c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.372 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquired lock "refresh_cache-658e5f04-399b-4a8a-8680-5ae9717949c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.372 248514 DEBUG nova.network.neutron [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.493 248514 DEBUG nova.compute.manager [req-713be700-3b85-4a3c-812a-3f5a031d1113 req-8ec4841b-8d60-4bd2-898f-ccf18283c904 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Received event network-changed-f767e871-4f9e-414e-a61d-c70cffe80128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.493 248514 DEBUG nova.compute.manager [req-713be700-3b85-4a3c-812a-3f5a031d1113 req-8ec4841b-8d60-4bd2-898f-ccf18283c904 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Refreshing instance network info cache due to event network-changed-f767e871-4f9e-414e-a61d-c70cffe80128. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.493 248514 DEBUG oslo_concurrency.lockutils [req-713be700-3b85-4a3c-812a-3f5a031d1113 req-8ec4841b-8d60-4bd2-898f-ccf18283c904 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-658e5f04-399b-4a8a-8680-5ae9717949c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:38:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:38:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3833616363' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.574 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.608 248514 DEBUG nova.storage.rbd_utils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.615 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:38:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2421621160' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.660 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.672 248514 DEBUG nova.network.neutron [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.691 248514 DEBUG oslo_concurrency.lockutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.691 248514 DEBUG oslo_concurrency.lockutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.692 248514 DEBUG oslo_concurrency.lockutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.692 248514 DEBUG oslo_concurrency.lockutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.692 248514 DEBUG oslo_concurrency.lockutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.695 248514 INFO nova.compute.manager [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Terminating instance#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.696 248514 DEBUG nova.compute.manager [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:38:58 np0005558241 kernel: tap3b8730f4-e2 (unregistering): left promiscuous mode
Dec 13 03:38:58 np0005558241 NetworkManager[50376]: <info>  [1765615138.7482] device (tap3b8730f4-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:38:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:58Z|00884|binding|INFO|Releasing lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f from this chassis (sb_readonly=0)
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.757 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:58Z|00885|binding|INFO|Setting lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f down in Southbound
Dec 13 03:38:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:58Z|00886|binding|INFO|Removing iface tap3b8730f4-e2 ovn-installed in OVS
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:58.767 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:9a:26 10.100.0.6'], port_security=['fa:16:3e:e7:9a:26 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6fb6c605-344c-4ed9-806d-96964b0474f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f249527d-f9e6-43ce-a178-f71fc1d38891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c968daf58e624ebda00676d79f6bde96', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10b8d803-068c-4438-9c54-bb9fca806dca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83315ce7-c99d-41f6-afce-aabb37f0e27c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3b8730f4-e225-4c0a-bf95-708c9c122a4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:38:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:58.769 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3b8730f4-e225-4c0a-bf95-708c9c122a4f in datapath f249527d-f9e6-43ce-a178-f71fc1d38891 unbound from our chassis#033[00m
Dec 13 03:38:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:58.771 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f249527d-f9e6-43ce-a178-f71fc1d38891, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:38:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:58.772 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec09ef2-7583-4fc1-b1a4-8510d67d563e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:58.773 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891 namespace which is not needed anymore#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.775 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:58 np0005558241 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000057.scope: Deactivated successfully.
Dec 13 03:38:58 np0005558241 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000057.scope: Consumed 15.149s CPU time.
Dec 13 03:38:58 np0005558241 systemd-machined[210538]: Machine qemu-107-instance-00000057 terminated.
Dec 13 03:38:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Dec 13 03:38:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Dec 13 03:38:58 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Dec 13 03:38:58 np0005558241 kernel: tap3b8730f4-e2: entered promiscuous mode
Dec 13 03:38:58 np0005558241 systemd-udevd[336427]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:38:58 np0005558241 kernel: tap3b8730f4-e2 (unregistering): left promiscuous mode
Dec 13 03:38:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:58Z|00887|binding|INFO|Claiming lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f for this chassis.
Dec 13 03:38:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:58Z|00888|binding|INFO|3b8730f4-e225-4c0a-bf95-708c9c122a4f: Claiming fa:16:3e:e7:9a:26 10.100.0.6
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.918 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:58.930 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:9a:26 10.100.0.6'], port_security=['fa:16:3e:e7:9a:26 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6fb6c605-344c-4ed9-806d-96964b0474f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f249527d-f9e6-43ce-a178-f71fc1d38891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c968daf58e624ebda00676d79f6bde96', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10b8d803-068c-4438-9c54-bb9fca806dca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83315ce7-c99d-41f6-afce-aabb37f0e27c, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3b8730f4-e225-4c0a-bf95-708c9c122a4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:38:58 np0005558241 neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891[333774]: [NOTICE]   (333778) : haproxy version is 2.8.14-c23fe91
Dec 13 03:38:58 np0005558241 neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891[333774]: [NOTICE]   (333778) : path to executable is /usr/sbin/haproxy
Dec 13 03:38:58 np0005558241 neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891[333774]: [WARNING]  (333778) : Exiting Master process...
Dec 13 03:38:58 np0005558241 neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891[333774]: [WARNING]  (333778) : Exiting Master process...
Dec 13 03:38:58 np0005558241 neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891[333774]: [ALERT]    (333778) : Current worker (333780) exited with code 143 (Terminated)
Dec 13 03:38:58 np0005558241 neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891[333774]: [WARNING]  (333778) : All workers exited. Exiting... (0)
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.941 248514 INFO nova.virt.libvirt.driver [-] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Instance destroyed successfully.#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.942 248514 DEBUG nova.objects.instance [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lazy-loading 'resources' on Instance uuid 6fb6c605-344c-4ed9-806d-96964b0474f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:58 np0005558241 systemd[1]: libpod-a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441.scope: Deactivated successfully.
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.948 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:58Z|00889|binding|INFO|Setting lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f ovn-installed in OVS
Dec 13 03:38:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:58Z|00890|binding|INFO|Setting lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f up in Southbound
Dec 13 03:38:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:58Z|00891|binding|INFO|Releasing lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f from this chassis (sb_readonly=1)
Dec 13 03:38:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:58Z|00892|if_status|INFO|Dropped 4 log messages in last 381 seconds (most recently, 381 seconds ago) due to excessive rate
Dec 13 03:38:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:58Z|00893|if_status|INFO|Not setting lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f down as sb is readonly
Dec 13 03:38:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:58Z|00894|binding|INFO|Removing iface tap3b8730f4-e2 ovn-installed in OVS
Dec 13 03:38:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:58Z|00895|binding|INFO|Releasing lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f from this chassis (sb_readonly=0)
Dec 13 03:38:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:38:58Z|00896|binding|INFO|Setting lport 3b8730f4-e225-4c0a-bf95-708c9c122a4f down in Southbound
Dec 13 03:38:58 np0005558241 podman[336447]: 2025-12-13 08:38:58.95959103 +0000 UTC m=+0.076170283 container died a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.965 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.972 248514 DEBUG nova.virt.libvirt.vif [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:37:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1394190380',display_name='tempest-ServerRescueNegativeTestJSON-server-1394190380',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1394190380',id=87,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:38:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c968daf58e624ebda00676d79f6bde96',ramdisk_id='',reservation_id='r-689d01nr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1030815648',owner_user_name='tempest-ServerRescueNegativeTestJSON-1030815648-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:38:51Z,user_data=None,user_id='302853b7f91745baadf52361a0a7d535',uuid=6fb6c605-344c-4ed9-806d-96964b0474f9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.973 248514 DEBUG nova.network.os_vif_util [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converting VIF {"id": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "address": "fa:16:3e:e7:9a:26", "network": {"id": "f249527d-f9e6-43ce-a178-f71fc1d38891", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2147099147-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c968daf58e624ebda00676d79f6bde96", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b8730f4-e2", "ovs_interfaceid": "3b8730f4-e225-4c0a-bf95-708c9c122a4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.974 248514 DEBUG nova.network.os_vif_util [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:9a:26,bridge_name='br-int',has_traffic_filtering=True,id=3b8730f4-e225-4c0a-bf95-708c9c122a4f,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b8730f4-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.975 248514 DEBUG os_vif [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:9a:26,bridge_name='br-int',has_traffic_filtering=True,id=3b8730f4-e225-4c0a-bf95-708c9c122a4f,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b8730f4-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:38:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:58.975 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:9a:26 10.100.0.6'], port_security=['fa:16:3e:e7:9a:26 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6fb6c605-344c-4ed9-806d-96964b0474f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f249527d-f9e6-43ce-a178-f71fc1d38891', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c968daf58e624ebda00676d79f6bde96', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10b8d803-068c-4438-9c54-bb9fca806dca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83315ce7-c99d-41f6-afce-aabb37f0e27c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3b8730f4-e225-4c0a-bf95-708c9c122a4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.977 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.977 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b8730f4-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.984 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:38:58 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441-userdata-shm.mount: Deactivated successfully.
Dec 13 03:38:58 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1630a06e863d3fc2328692d0f3a97579d4a2f822414283bd74ab2bb685b53f54-merged.mount: Deactivated successfully.
Dec 13 03:38:58 np0005558241 nova_compute[248510]: 2025-12-13 08:38:58.992 248514 INFO os_vif [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:9a:26,bridge_name='br-int',has_traffic_filtering=True,id=3b8730f4-e225-4c0a-bf95-708c9c122a4f,network=Network(f249527d-f9e6-43ce-a178-f71fc1d38891),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b8730f4-e2')#033[00m
Dec 13 03:38:59 np0005558241 podman[336447]: 2025-12-13 08:38:59.006475224 +0000 UTC m=+0.123054477 container cleanup a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.011 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.011 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:38:59 np0005558241 systemd[1]: libpod-conmon-a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441.scope: Deactivated successfully.
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.017 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.018 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.023 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.023 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:38:59 np0005558241 podman[336493]: 2025-12-13 08:38:59.076292178 +0000 UTC m=+0.047708326 container remove a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:38:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.081 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[66fea91a-c5b7-43c0-b2e0-c7b920c4580f]: (4, ('Sat Dec 13 08:38:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891 (a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441)\na833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441\nSat Dec 13 08:38:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891 (a833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441)\na833d2f3386789d4607aa929bb612cb86b7cb4d54b497ec704d80959f8e5d441\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.084 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[442f5feb-1dde-4313-a783-f71f268db8b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.086 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf249527d-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.089 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:59 np0005558241 kernel: tapf249527d-f0: left promiscuous mode
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.125 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:38:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.129 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a097e4-fb29-4c1b-bac3-00ed3efb605c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.141 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a895af-9bb2-4dcc-863e-15216b363103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.145 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db6e3c38-6411-4a55-bb6d-21daccdc347f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.165 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[18a09cd6-a366-4720-be16-de1764e54653]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766846, 'reachable_time': 37032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336513, 'error': None, 'target': 'ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:59 np0005558241 systemd[1]: run-netns-ovnmeta\x2df249527d\x2df9e6\x2d43ce\x2da178\x2df71fc1d38891.mount: Deactivated successfully.
Dec 13 03:38:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.170 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f249527d-f9e6-43ce-a178-f71fc1d38891 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:38:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.171 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc4ee7d-2951-474b-a615-a2d570183d0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.172 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3b8730f4-e225-4c0a-bf95-708c9c122a4f in datapath f249527d-f9e6-43ce-a178-f71fc1d38891 unbound from our chassis#033[00m
Dec 13 03:38:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.173 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f249527d-f9e6-43ce-a178-f71fc1d38891, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:38:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.174 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[76892199-5ea3-4d97-ab47-6d8d491983d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.174 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3b8730f4-e225-4c0a-bf95-708c9c122a4f in datapath f249527d-f9e6-43ce-a178-f71fc1d38891 unbound from our chassis#033[00m
Dec 13 03:38:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.175 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f249527d-f9e6-43ce-a178-f71fc1d38891, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:38:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:38:59.175 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[09ab73ae-ab57-40ed-ae03-95ccc94fa585]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:38:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:38:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/185825765' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.259 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.261 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:38:59 np0005558241 nova_compute[248510]:  <uuid>0187165c-81b1-43b8-81f5-05e847fe1fa6</uuid>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:  <name>instance-0000005a</name>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerShowV247Test-server-1806406257</nova:name>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:38:57</nova:creationTime>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:38:59 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:        <nova:user uuid="0e8471eedc0e4ae0a028132802bc1967">tempest-ServerShowV247Test-2063492104-project-member</nova:user>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:        <nova:project uuid="c4b40eb3fb314c44867a54f3ba244ec1">tempest-ServerShowV247Test-2063492104</nova:project>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="c10f1898-20b3-4bc9-8a36-2ee01b39c9ce"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <entry name="serial">0187165c-81b1-43b8-81f5-05e847fe1fa6</entry>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <entry name="uuid">0187165c-81b1-43b8-81f5-05e847fe1fa6</entry>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/0187165c-81b1-43b8-81f5-05e847fe1fa6_disk">
Dec 13 03:38:59 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:38:59 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config">
Dec 13 03:38:59 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:38:59 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/console.log" append="off"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:38:59 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:38:59 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:38:59 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:38:59 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.294 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.295 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3201MB free_disk=59.752280401065946GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.296 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.296 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.321 248514 INFO nova.virt.libvirt.driver [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Deleting instance files /var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9_del#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.322 248514 INFO nova.virt.libvirt.driver [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Deletion of /var/lib/nova/instances/6fb6c605-344c-4ed9-806d-96964b0474f9_del complete#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.328 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.328 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.328 248514 INFO nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Using config drive#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.346 248514 DEBUG nova.storage.rbd_utils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.396 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.435 248514 INFO nova.compute.manager [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.435 248514 DEBUG oslo.service.loopingcall [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.436 248514 DEBUG nova.compute.manager [-] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.436 248514 DEBUG nova.network.neutron [-] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.439 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'keypairs' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.444 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9b486227-b98c-4393-9a3c-aae3e3c419a8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.444 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 6fb6c605-344c-4ed9-806d-96964b0474f9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.445 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 0d52c1df-d252-4012-b05c-40737f1089bb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.445 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 0187165c-81b1-43b8-81f5-05e847fe1fa6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.445 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 658e5f04-399b-4a8a-8680-5ae9717949c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.445 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.446 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.582 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:38:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2271: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 377 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 446 KiB/s rd, 7.8 MiB/s wr, 304 op/s
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.879 248514 INFO nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Creating config drive at /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config#033[00m
Dec 13 03:38:59 np0005558241 nova_compute[248510]: 2025-12-13 08:38:59.884 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8z49aori execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.026 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8z49aori" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.057 248514 DEBUG nova.storage.rbd_utils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] rbd image 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.061 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.095 248514 DEBUG nova.network.neutron [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Updating instance_info_cache with network_info: [{"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.129 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Releasing lock "refresh_cache-658e5f04-399b-4a8a-8680-5ae9717949c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.130 248514 DEBUG nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Instance network_info: |[{"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.130 248514 DEBUG oslo_concurrency.lockutils [req-713be700-3b85-4a3c-812a-3f5a031d1113 req-8ec4841b-8d60-4bd2-898f-ccf18283c904 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-658e5f04-399b-4a8a-8680-5ae9717949c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.130 248514 DEBUG nova.network.neutron [req-713be700-3b85-4a3c-812a-3f5a031d1113 req-8ec4841b-8d60-4bd2-898f-ccf18283c904 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Refreshing network info cache for port f767e871-4f9e-414e-a61d-c70cffe80128 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.134 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Start _get_guest_xml network_info=[{"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:39:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:39:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2527932293' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.140 248514 WARNING nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.148 248514 DEBUG nova.virt.libvirt.host [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.149 248514 DEBUG nova.virt.libvirt.host [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.155 248514 DEBUG nova.virt.libvirt.host [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.155 248514 DEBUG nova.virt.libvirt.host [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.156 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.156 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.157 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.157 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.157 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.157 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.158 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.158 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.158 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.158 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.159 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.159 248514 DEBUG nova.virt.hardware [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.163 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.193 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.201 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.237 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.281 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.282 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.417 248514 DEBUG oslo_concurrency.processutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config 0187165c-81b1-43b8-81f5-05e847fe1fa6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.418 248514 INFO nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Deleting local config drive /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6/disk.config because it was imported into RBD.#033[00m
Dec 13 03:39:00 np0005558241 systemd-machined[210538]: New machine qemu-112-instance-0000005a.
Dec 13 03:39:00 np0005558241 systemd[1]: Started Virtual Machine qemu-112-instance-0000005a.
Dec 13 03:39:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:39:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2674907218' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.756 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.769 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.790 248514 DEBUG nova.storage.rbd_utils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 658e5f04-399b-4a8a-8680-5ae9717949c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.794 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.910 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 0187165c-81b1-43b8-81f5-05e847fe1fa6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.911 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615140.9099798, 0187165c-81b1-43b8-81f5-05e847fe1fa6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.911 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.915 248514 DEBUG nova.compute.manager [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.915 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.920 248514 INFO nova.virt.libvirt.driver [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance spawned successfully.#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.921 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.979 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.986 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.990 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.991 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.991 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.992 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.992 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:39:00 np0005558241 nova_compute[248510]: 2025-12-13 08:39:00.993 248514 DEBUG nova.virt.libvirt.driver [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.047 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.048 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615140.9167469, 0187165c-81b1-43b8-81f5-05e847fe1fa6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.048 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] VM Started (Lifecycle Event)#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.116 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.119 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.155 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.166 248514 DEBUG nova.compute.manager [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.195 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-unplugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.196 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.196 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.197 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.197 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] No waiting events found dispatching network-vif-unplugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.197 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-unplugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.197 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.198 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.198 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.198 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.198 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] No waiting events found dispatching network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.199 248514 WARNING nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received unexpected event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.199 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.199 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.199 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.200 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.200 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] No waiting events found dispatching network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.200 248514 WARNING nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received unexpected event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.200 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.201 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.201 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.201 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.201 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] No waiting events found dispatching network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.202 248514 WARNING nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received unexpected event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.202 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-unplugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.202 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.202 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.203 248514 DEBUG oslo_concurrency.lockutils [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.203 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] No waiting events found dispatching network-vif-unplugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.203 248514 DEBUG nova.compute.manager [req-a1e9a72f-23ad-40d2-b287-bf59144768ed req-8af7d4dc-d8cf-4a22-8915-d4278209a7a0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-unplugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.260 248514 DEBUG oslo_concurrency.lockutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.261 248514 DEBUG oslo_concurrency.lockutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.261 248514 DEBUG nova.objects.instance [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.364 248514 DEBUG nova.network.neutron [-] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.395 248514 DEBUG oslo_concurrency.lockutils [None req-e5828df4-d297-4aa8-b648-3d412a459ec9 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.409 248514 INFO nova.compute.manager [-] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Took 1.97 seconds to deallocate network for instance.#033[00m
Dec 13 03:39:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:39:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3269903454' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.481 248514 DEBUG oslo_concurrency.lockutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.482 248514 DEBUG oslo_concurrency.lockutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.491 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.697s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.493 248514 DEBUG nova.virt.libvirt.vif [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:38:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2058674229',display_name='tempest-ServerActionsTestOtherB-server-2058674229',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2058674229',id=91,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-hq6btaus',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:38:55Z,user_data=None,user_id='7507939da64e4320a1c6f389d0fc9045',uuid=658e5f04-399b-4a8a-8680-5ae9717949c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.493 248514 DEBUG nova.network.os_vif_util [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.494 248514 DEBUG nova.network.os_vif_util [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:d2:10,bridge_name='br-int',has_traffic_filtering=True,id=f767e871-4f9e-414e-a61d-c70cffe80128,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf767e871-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.496 248514 DEBUG nova.objects.instance [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 658e5f04-399b-4a8a-8680-5ae9717949c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.511 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:39:01 np0005558241 nova_compute[248510]:  <uuid>658e5f04-399b-4a8a-8680-5ae9717949c0</uuid>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:  <name>instance-0000005b</name>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerActionsTestOtherB-server-2058674229</nova:name>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:39:00</nova:creationTime>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:39:01 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:        <nova:user uuid="7507939da64e4320a1c6f389d0fc9045">tempest-ServerActionsTestOtherB-1515133862-project-member</nova:user>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:        <nova:project uuid="f0aee359fbaa484eb7ead3f81eef51e7">tempest-ServerActionsTestOtherB-1515133862</nova:project>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:        <nova:port uuid="f767e871-4f9e-414e-a61d-c70cffe80128">
Dec 13 03:39:01 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <entry name="serial">658e5f04-399b-4a8a-8680-5ae9717949c0</entry>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <entry name="uuid">658e5f04-399b-4a8a-8680-5ae9717949c0</entry>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/658e5f04-399b-4a8a-8680-5ae9717949c0_disk">
Dec 13 03:39:01 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:39:01 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/658e5f04-399b-4a8a-8680-5ae9717949c0_disk.config">
Dec 13 03:39:01 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:39:01 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:96:d2:10"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <target dev="tapf767e871-4f"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0/console.log" append="off"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:39:01 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:39:01 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:39:01 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:39:01 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.518 248514 DEBUG nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Preparing to wait for external event network-vif-plugged-f767e871-4f9e-414e-a61d-c70cffe80128 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.518 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.519 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.519 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.520 248514 DEBUG nova.virt.libvirt.vif [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:38:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2058674229',display_name='tempest-ServerActionsTestOtherB-server-2058674229',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2058674229',id=91,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-hq6btaus',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:38:55Z,user_data=None,user_id='7507939da64e4320a1c6f389d0fc9045',uuid=658e5f04-399b-4a8a-8680-5ae9717949c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.520 248514 DEBUG nova.network.os_vif_util [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.521 248514 DEBUG nova.network.os_vif_util [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:d2:10,bridge_name='br-int',has_traffic_filtering=True,id=f767e871-4f9e-414e-a61d-c70cffe80128,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf767e871-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.522 248514 DEBUG os_vif [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:d2:10,bridge_name='br-int',has_traffic_filtering=True,id=f767e871-4f9e-414e-a61d-c70cffe80128,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf767e871-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.523 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.524 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.524 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.528 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf767e871-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.529 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf767e871-4f, col_values=(('external_ids', {'iface-id': 'f767e871-4f9e-414e-a61d-c70cffe80128', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:d2:10', 'vm-uuid': '658e5f04-399b-4a8a-8680-5ae9717949c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:01 np0005558241 NetworkManager[50376]: <info>  [1765615141.5323] manager: (tapf767e871-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.537 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.538 248514 INFO os_vif [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:d2:10,bridge_name='br-int',has_traffic_filtering=True,id=f767e871-4f9e-414e-a61d-c70cffe80128,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf767e871-4f')#033[00m
Dec 13 03:39:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.602 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.602 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.603 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No VIF found with MAC fa:16:3e:96:d2:10, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.603 248514 INFO nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Using config drive#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.623 248514 DEBUG nova.storage.rbd_utils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 658e5f04-399b-4a8a-8680-5ae9717949c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:39:01 np0005558241 nova_compute[248510]: 2025-12-13 08:39:01.695 248514 DEBUG oslo_concurrency.processutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2272: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 350 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 284 KiB/s rd, 7.2 MiB/s wr, 410 op/s
Dec 13 03:39:02 np0005558241 nova_compute[248510]: 2025-12-13 08:39:02.188 248514 INFO nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Creating config drive at /var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0/disk.config#033[00m
Dec 13 03:39:02 np0005558241 nova_compute[248510]: 2025-12-13 08:39:02.198 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp690hln9a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:39:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1524472740' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:39:02 np0005558241 nova_compute[248510]: 2025-12-13 08:39:02.283 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:39:02 np0005558241 nova_compute[248510]: 2025-12-13 08:39:02.284 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:39:02 np0005558241 nova_compute[248510]: 2025-12-13 08:39:02.284 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:39:02 np0005558241 nova_compute[248510]: 2025-12-13 08:39:02.285 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:39:02 np0005558241 nova_compute[248510]: 2025-12-13 08:39:02.299 248514 DEBUG oslo_concurrency.processutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:02 np0005558241 nova_compute[248510]: 2025-12-13 08:39:02.305 248514 DEBUG nova.compute.provider_tree [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:39:02 np0005558241 nova_compute[248510]: 2025-12-13 08:39:02.344 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp690hln9a" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:02 np0005558241 nova_compute[248510]: 2025-12-13 08:39:02.371 248514 DEBUG nova.storage.rbd_utils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 658e5f04-399b-4a8a-8680-5ae9717949c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:39:02 np0005558241 nova_compute[248510]: 2025-12-13 08:39:02.374 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0/disk.config 658e5f04-399b-4a8a-8680-5ae9717949c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:02 np0005558241 nova_compute[248510]: 2025-12-13 08:39:02.451 248514 DEBUG nova.scheduler.client.report [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:39:02 np0005558241 nova_compute[248510]: 2025-12-13 08:39:02.492 248514 DEBUG oslo_concurrency.processutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0/disk.config 658e5f04-399b-4a8a-8680-5ae9717949c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:02 np0005558241 nova_compute[248510]: 2025-12-13 08:39:02.493 248514 INFO nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Deleting local config drive /var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0/disk.config because it was imported into RBD.#033[00m
Dec 13 03:39:02 np0005558241 kernel: tapf767e871-4f: entered promiscuous mode
Dec 13 03:39:02 np0005558241 NetworkManager[50376]: <info>  [1765615142.5405] manager: (tapf767e871-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/381)
Dec 13 03:39:02 np0005558241 nova_compute[248510]: 2025-12-13 08:39:02.543 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:02Z|00897|binding|INFO|Claiming lport f767e871-4f9e-414e-a61d-c70cffe80128 for this chassis.
Dec 13 03:39:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:02Z|00898|binding|INFO|f767e871-4f9e-414e-a61d-c70cffe80128: Claiming fa:16:3e:96:d2:10 10.100.0.4
Dec 13 03:39:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:02Z|00899|binding|INFO|Setting lport f767e871-4f9e-414e-a61d-c70cffe80128 ovn-installed in OVS
Dec 13 03:39:02 np0005558241 nova_compute[248510]: 2025-12-13 08:39:02.567 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:02 np0005558241 systemd-machined[210538]: New machine qemu-113-instance-0000005b.
Dec 13 03:39:02 np0005558241 systemd[1]: Started Virtual Machine qemu-113-instance-0000005b.
Dec 13 03:39:02 np0005558241 systemd-udevd[336837]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:39:02 np0005558241 NetworkManager[50376]: <info>  [1765615142.6298] device (tapf767e871-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:39:02 np0005558241 NetworkManager[50376]: <info>  [1765615142.6318] device (tapf767e871-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:39:02 np0005558241 podman[336809]: 2025-12-13 08:39:02.681992651 +0000 UTC m=+0.107052659 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 03:39:02 np0005558241 podman[336810]: 2025-12-13 08:39:02.699576678 +0000 UTC m=+0.112660349 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:39:02 np0005558241 podman[336808]: 2025-12-13 08:39:02.708492079 +0000 UTC m=+0.135079235 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 03:39:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:02Z|00900|binding|INFO|Setting lport f767e871-4f9e-414e-a61d-c70cffe80128 up in Southbound
Dec 13 03:39:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:02.882 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:d2:10 10.100.0.4'], port_security=['fa:16:3e:96:d2:10 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '658e5f04-399b-4a8a-8680-5ae9717949c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-369f7528-6571-47b6-a030-5281647e1eac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77183472-893b-4c33-ab3e-e88f01770ed8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ade703f-c35f-4093-80be-9470cf548d7c, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=f767e871-4f9e-414e-a61d-c70cffe80128) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:39:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:02.883 158419 INFO neutron.agent.ovn.metadata.agent [-] Port f767e871-4f9e-414e-a61d-c70cffe80128 in datapath 369f7528-6571-47b6-a030-5281647e1eac bound to our chassis#033[00m
Dec 13 03:39:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:02.885 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 369f7528-6571-47b6-a030-5281647e1eac#033[00m
Dec 13 03:39:02 np0005558241 nova_compute[248510]: 2025-12-13 08:39:02.909 248514 DEBUG oslo_concurrency.lockutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:02.911 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[057690bf-00c8-4cbc-97e1-25d2f9a467c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:02 np0005558241 nova_compute[248510]: 2025-12-13 08:39:02.939 248514 INFO nova.scheduler.client.report [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Deleted allocations for instance 6fb6c605-344c-4ed9-806d-96964b0474f9#033[00m
Dec 13 03:39:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:02.945 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a636c15d-8cfa-4c2c-bed1-44372008ed7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:02.948 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[48a19ed5-1d2a-451d-b302-1c6a170da622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:02.975 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a08efb6a-5096-437a-b74d-1dbb36f8d77e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:02.997 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3566b6ef-6bfd-411f-907e-881ba1368fe5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap369f7528-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:b1:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763105, 'reachable_time': 21848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336886, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:03.017 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ba331a-7d18-4e03-b6cf-f3cfc68b2ca3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763117, 'tstamp': 763117}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336887, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763121, 'tstamp': 763121}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336887, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:03.019 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap369f7528-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.020 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.021 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:03.021 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap369f7528-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:03.021 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:39:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:03.022 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap369f7528-60, col_values=(('external_ids', {'iface-id': '6c33e57f-b143-4ed7-a271-dc33d6e81057'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:03.022 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.034 248514 DEBUG nova.network.neutron [req-713be700-3b85-4a3c-812a-3f5a031d1113 req-8ec4841b-8d60-4bd2-898f-ccf18283c904 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Updated VIF entry in instance network info cache for port f767e871-4f9e-414e-a61d-c70cffe80128. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.035 248514 DEBUG nova.network.neutron [req-713be700-3b85-4a3c-812a-3f5a031d1113 req-8ec4841b-8d60-4bd2-898f-ccf18283c904 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Updating instance_info_cache with network_info: [{"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.234 248514 DEBUG oslo_concurrency.lockutils [req-713be700-3b85-4a3c-812a-3f5a031d1113 req-8ec4841b-8d60-4bd2-898f-ccf18283c904 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-658e5f04-399b-4a8a-8680-5ae9717949c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.368 248514 DEBUG oslo_concurrency.lockutils [None req-344c326d-b7ad-4eec-8ebc-bb36c819e22f 302853b7f91745baadf52361a0a7d535 c968daf58e624ebda00676d79f6bde96 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.659 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615143.6589673, 658e5f04-399b-4a8a-8680-5ae9717949c0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.660 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] VM Started (Lifecycle Event)#033[00m
Dec 13 03:39:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2273: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 315 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 5.5 MiB/s wr, 409 op/s
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.961 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "0187165c-81b1-43b8-81f5-05e847fe1fa6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.961 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0187165c-81b1-43b8-81f5-05e847fe1fa6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.962 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "0187165c-81b1-43b8-81f5-05e847fe1fa6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.962 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0187165c-81b1-43b8-81f5-05e847fe1fa6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.962 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0187165c-81b1-43b8-81f5-05e847fe1fa6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.963 248514 INFO nova.compute.manager [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Terminating instance#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.964 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "refresh_cache-0187165c-81b1-43b8-81f5-05e847fe1fa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.964 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquired lock "refresh_cache-0187165c-81b1-43b8-81f5-05e847fe1fa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.964 248514 DEBUG nova.network.neutron [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.969 248514 DEBUG nova.compute.manager [req-7acd0104-8fff-4b88-bb8e-dd964c037dc0 req-56e11f52-1313-4e40-99f6-bbde76211336 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.969 248514 DEBUG oslo_concurrency.lockutils [req-7acd0104-8fff-4b88-bb8e-dd964c037dc0 req-56e11f52-1313-4e40-99f6-bbde76211336 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.969 248514 DEBUG oslo_concurrency.lockutils [req-7acd0104-8fff-4b88-bb8e-dd964c037dc0 req-56e11f52-1313-4e40-99f6-bbde76211336 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.969 248514 DEBUG oslo_concurrency.lockutils [req-7acd0104-8fff-4b88-bb8e-dd964c037dc0 req-56e11f52-1313-4e40-99f6-bbde76211336 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6fb6c605-344c-4ed9-806d-96964b0474f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.970 248514 DEBUG nova.compute.manager [req-7acd0104-8fff-4b88-bb8e-dd964c037dc0 req-56e11f52-1313-4e40-99f6-bbde76211336 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] No waiting events found dispatching network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.970 248514 WARNING nova.compute.manager [req-7acd0104-8fff-4b88-bb8e-dd964c037dc0 req-56e11f52-1313-4e40-99f6-bbde76211336 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received unexpected event network-vif-plugged-3b8730f4-e225-4c0a-bf95-708c9c122a4f for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.970 248514 DEBUG nova.compute.manager [req-7acd0104-8fff-4b88-bb8e-dd964c037dc0 req-56e11f52-1313-4e40-99f6-bbde76211336 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Received event network-vif-deleted-3b8730f4-e225-4c0a-bf95-708c9c122a4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.971 248514 DEBUG nova.compute.manager [req-30cea5a4-5382-4445-ba66-5e7999094afc req-e5d2229b-9092-48fb-bb3c-35847acc41e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Received event network-vif-plugged-f767e871-4f9e-414e-a61d-c70cffe80128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.971 248514 DEBUG oslo_concurrency.lockutils [req-30cea5a4-5382-4445-ba66-5e7999094afc req-e5d2229b-9092-48fb-bb3c-35847acc41e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.971 248514 DEBUG oslo_concurrency.lockutils [req-30cea5a4-5382-4445-ba66-5e7999094afc req-e5d2229b-9092-48fb-bb3c-35847acc41e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.972 248514 DEBUG oslo_concurrency.lockutils [req-30cea5a4-5382-4445-ba66-5e7999094afc req-e5d2229b-9092-48fb-bb3c-35847acc41e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.972 248514 DEBUG nova.compute.manager [req-30cea5a4-5382-4445-ba66-5e7999094afc req-e5d2229b-9092-48fb-bb3c-35847acc41e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Processing event network-vif-plugged-f767e871-4f9e-414e-a61d-c70cffe80128 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.973 248514 DEBUG nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.976 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.979 248514 INFO nova.virt.libvirt.driver [-] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Instance spawned successfully.#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.980 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.987 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:03 np0005558241 nova_compute[248510]: 2025-12-13 08:39:03.995 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:39:04 np0005558241 nova_compute[248510]: 2025-12-13 08:39:04.077 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:39:04 np0005558241 nova_compute[248510]: 2025-12-13 08:39:04.078 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:39:04 np0005558241 nova_compute[248510]: 2025-12-13 08:39:04.078 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:39:04 np0005558241 nova_compute[248510]: 2025-12-13 08:39:04.079 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:39:04 np0005558241 nova_compute[248510]: 2025-12-13 08:39:04.079 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:39:04 np0005558241 nova_compute[248510]: 2025-12-13 08:39:04.079 248514 DEBUG nova.virt.libvirt.driver [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:39:04 np0005558241 nova_compute[248510]: 2025-12-13 08:39:04.082 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:39:04 np0005558241 nova_compute[248510]: 2025-12-13 08:39:04.082 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615143.6591413, 658e5f04-399b-4a8a-8680-5ae9717949c0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:39:04 np0005558241 nova_compute[248510]: 2025-12-13 08:39:04.083 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:39:04 np0005558241 nova_compute[248510]: 2025-12-13 08:39:04.162 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:04 np0005558241 nova_compute[248510]: 2025-12-13 08:39:04.165 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615143.9764216, 658e5f04-399b-4a8a-8680-5ae9717949c0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:39:04 np0005558241 nova_compute[248510]: 2025-12-13 08:39:04.166 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:39:04 np0005558241 nova_compute[248510]: 2025-12-13 08:39:04.455 248514 INFO nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Took 8.37 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:39:04 np0005558241 nova_compute[248510]: 2025-12-13 08:39:04.455 248514 DEBUG nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:04 np0005558241 nova_compute[248510]: 2025-12-13 08:39:04.493 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:04 np0005558241 nova_compute[248510]: 2025-12-13 08:39:04.496 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:39:04 np0005558241 nova_compute[248510]: 2025-12-13 08:39:04.676 248514 INFO nova.compute.manager [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Took 10.46 seconds to build instance.#033[00m
Dec 13 03:39:04 np0005558241 nova_compute[248510]: 2025-12-13 08:39:04.693 248514 DEBUG nova.network.neutron [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:39:04 np0005558241 nova_compute[248510]: 2025-12-13 08:39:04.701 248514 DEBUG oslo_concurrency.lockutils [None req-c0136a09-2cdf-4c12-b8ee-862cbc33c4f8 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:05 np0005558241 nova_compute[248510]: 2025-12-13 08:39:05.451 248514 DEBUG nova.network.neutron [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:39:05 np0005558241 nova_compute[248510]: 2025-12-13 08:39:05.474 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Releasing lock "refresh_cache-0187165c-81b1-43b8-81f5-05e847fe1fa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:39:05 np0005558241 nova_compute[248510]: 2025-12-13 08:39:05.474 248514 DEBUG nova.compute.manager [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:39:05 np0005558241 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Dec 13 03:39:05 np0005558241 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005a.scope: Consumed 5.042s CPU time.
Dec 13 03:39:05 np0005558241 systemd-machined[210538]: Machine qemu-112-instance-0000005a terminated.
Dec 13 03:39:05 np0005558241 nova_compute[248510]: 2025-12-13 08:39:05.696 248514 INFO nova.virt.libvirt.driver [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance destroyed successfully.#033[00m
Dec 13 03:39:05 np0005558241 nova_compute[248510]: 2025-12-13 08:39:05.697 248514 DEBUG nova.objects.instance [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'resources' on Instance uuid 0187165c-81b1-43b8-81f5-05e847fe1fa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:39:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2274: 321 pgs: 321 active+clean; 293 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.4 MiB/s wr, 492 op/s
Dec 13 03:39:05 np0005558241 nova_compute[248510]: 2025-12-13 08:39:05.759 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:05 np0005558241 nova_compute[248510]: 2025-12-13 08:39:05.817 248514 INFO nova.compute.manager [None req-53f112dd-3570-4f28-844f-fd194a4e95f5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Get console output#033[00m
Dec 13 03:39:05 np0005558241 nova_compute[248510]: 2025-12-13 08:39:05.956 248514 INFO nova.virt.libvirt.driver [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Deleting instance files /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6_del#033[00m
Dec 13 03:39:05 np0005558241 nova_compute[248510]: 2025-12-13 08:39:05.957 248514 INFO nova.virt.libvirt.driver [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Deletion of /var/lib/nova/instances/0187165c-81b1-43b8-81f5-05e847fe1fa6_del complete#033[00m
Dec 13 03:39:06 np0005558241 nova_compute[248510]: 2025-12-13 08:39:06.043 248514 INFO nova.compute.manager [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:39:06 np0005558241 nova_compute[248510]: 2025-12-13 08:39:06.044 248514 DEBUG oslo.service.loopingcall [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:39:06 np0005558241 nova_compute[248510]: 2025-12-13 08:39:06.044 248514 DEBUG nova.compute.manager [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:39:06 np0005558241 nova_compute[248510]: 2025-12-13 08:39:06.044 248514 DEBUG nova.network.neutron [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:39:06 np0005558241 nova_compute[248510]: 2025-12-13 08:39:06.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:39:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Dec 13 03:39:06 np0005558241 nova_compute[248510]: 2025-12-13 08:39:06.634 248514 DEBUG nova.network.neutron [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:39:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Dec 13 03:39:06 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Dec 13 03:39:06 np0005558241 nova_compute[248510]: 2025-12-13 08:39:06.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:39:06 np0005558241 nova_compute[248510]: 2025-12-13 08:39:06.903 248514 DEBUG nova.compute.manager [req-b7374614-711c-4531-aa25-b0624776ea84 req-aa4b62d2-1c67-4de6-b342-d4e3e5784b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Received event network-vif-plugged-f767e871-4f9e-414e-a61d-c70cffe80128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:39:06 np0005558241 nova_compute[248510]: 2025-12-13 08:39:06.904 248514 DEBUG oslo_concurrency.lockutils [req-b7374614-711c-4531-aa25-b0624776ea84 req-aa4b62d2-1c67-4de6-b342-d4e3e5784b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:06 np0005558241 nova_compute[248510]: 2025-12-13 08:39:06.904 248514 DEBUG oslo_concurrency.lockutils [req-b7374614-711c-4531-aa25-b0624776ea84 req-aa4b62d2-1c67-4de6-b342-d4e3e5784b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:06 np0005558241 nova_compute[248510]: 2025-12-13 08:39:06.905 248514 DEBUG oslo_concurrency.lockutils [req-b7374614-711c-4531-aa25-b0624776ea84 req-aa4b62d2-1c67-4de6-b342-d4e3e5784b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:06 np0005558241 nova_compute[248510]: 2025-12-13 08:39:06.905 248514 DEBUG nova.compute.manager [req-b7374614-711c-4531-aa25-b0624776ea84 req-aa4b62d2-1c67-4de6-b342-d4e3e5784b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] No waiting events found dispatching network-vif-plugged-f767e871-4f9e-414e-a61d-c70cffe80128 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:39:06 np0005558241 nova_compute[248510]: 2025-12-13 08:39:06.905 248514 WARNING nova.compute.manager [req-b7374614-711c-4531-aa25-b0624776ea84 req-aa4b62d2-1c67-4de6-b342-d4e3e5784b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Received unexpected event network-vif-plugged-f767e871-4f9e-414e-a61d-c70cffe80128 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:39:06 np0005558241 nova_compute[248510]: 2025-12-13 08:39:06.918 248514 DEBUG nova.network.neutron [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:39:06 np0005558241 nova_compute[248510]: 2025-12-13 08:39:06.944 248514 INFO nova.compute.manager [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Took 0.90 seconds to deallocate network for instance.#033[00m
Dec 13 03:39:07 np0005558241 nova_compute[248510]: 2025-12-13 08:39:07.005 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:07 np0005558241 nova_compute[248510]: 2025-12-13 08:39:07.006 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:07 np0005558241 nova_compute[248510]: 2025-12-13 08:39:07.164 248514 DEBUG oslo_concurrency.processutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:39:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2141122604' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:39:07 np0005558241 nova_compute[248510]: 2025-12-13 08:39:07.745 248514 DEBUG oslo_concurrency.processutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:07 np0005558241 nova_compute[248510]: 2025-12-13 08:39:07.751 248514 DEBUG nova.compute.provider_tree [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:39:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2276: 321 pgs: 321 active+clean; 293 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 2.4 MiB/s wr, 339 op/s
Dec 13 03:39:07 np0005558241 nova_compute[248510]: 2025-12-13 08:39:07.779 248514 DEBUG nova.scheduler.client.report [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:39:07 np0005558241 nova_compute[248510]: 2025-12-13 08:39:07.815 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:07 np0005558241 nova_compute[248510]: 2025-12-13 08:39:07.875 248514 INFO nova.scheduler.client.report [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Deleted allocations for instance 0187165c-81b1-43b8-81f5-05e847fe1fa6#033[00m
Dec 13 03:39:07 np0005558241 nova_compute[248510]: 2025-12-13 08:39:07.995 248514 DEBUG oslo_concurrency.lockutils [None req-cb5fb63d-9fac-4b95-80e2-c7a39def5ec3 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0187165c-81b1-43b8-81f5-05e847fe1fa6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Dec 13 03:39:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Dec 13 03:39:08 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Dec 13 03:39:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:39:09
Dec 13 03:39:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:39:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:39:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'volumes', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'vms']
Dec 13 03:39:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:39:09 np0005558241 nova_compute[248510]: 2025-12-13 08:39:09.471 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615134.469909, 88cb43c1-f01b-4098-84ea-d372176a0e20 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:39:09 np0005558241 nova_compute[248510]: 2025-12-13 08:39:09.472 248514 INFO nova.compute.manager [-] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:39:09 np0005558241 nova_compute[248510]: 2025-12-13 08:39:09.494 248514 DEBUG nova.compute.manager [None req-a355d750-857c-4b61-a3bf-4e5cd5976a5a - - - - - -] [instance: 88cb43c1-f01b-4098-84ea-d372176a0e20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2278: 321 pgs: 321 active+clean; 259 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 42 KiB/s wr, 279 op/s
Dec 13 03:39:09 np0005558241 nova_compute[248510]: 2025-12-13 08:39:09.851 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "0d52c1df-d252-4012-b05c-40737f1089bb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:09 np0005558241 nova_compute[248510]: 2025-12-13 08:39:09.852 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0d52c1df-d252-4012-b05c-40737f1089bb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:09 np0005558241 nova_compute[248510]: 2025-12-13 08:39:09.852 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "0d52c1df-d252-4012-b05c-40737f1089bb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:09 np0005558241 nova_compute[248510]: 2025-12-13 08:39:09.853 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0d52c1df-d252-4012-b05c-40737f1089bb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:09 np0005558241 nova_compute[248510]: 2025-12-13 08:39:09.853 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0d52c1df-d252-4012-b05c-40737f1089bb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:09 np0005558241 nova_compute[248510]: 2025-12-13 08:39:09.854 248514 INFO nova.compute.manager [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Terminating instance#033[00m
Dec 13 03:39:09 np0005558241 nova_compute[248510]: 2025-12-13 08:39:09.855 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "refresh_cache-0d52c1df-d252-4012-b05c-40737f1089bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:39:09 np0005558241 nova_compute[248510]: 2025-12-13 08:39:09.855 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquired lock "refresh_cache-0d52c1df-d252-4012-b05c-40737f1089bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:39:09 np0005558241 nova_compute[248510]: 2025-12-13 08:39:09.855 248514 DEBUG nova.network.neutron [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:39:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:39:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:39:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:39:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:39:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:39:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:39:10 np0005558241 nova_compute[248510]: 2025-12-13 08:39:10.139 248514 DEBUG nova.network.neutron [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:39:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:10Z|00901|binding|INFO|Releasing lport 6c33e57f-b143-4ed7-a271-dc33d6e81057 from this chassis (sb_readonly=0)
Dec 13 03:39:10 np0005558241 nova_compute[248510]: 2025-12-13 08:39:10.251 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:10 np0005558241 nova_compute[248510]: 2025-12-13 08:39:10.516 248514 DEBUG nova.network.neutron [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:39:10 np0005558241 nova_compute[248510]: 2025-12-13 08:39:10.546 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Releasing lock "refresh_cache-0d52c1df-d252-4012-b05c-40737f1089bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:39:10 np0005558241 nova_compute[248510]: 2025-12-13 08:39:10.547 248514 DEBUG nova.compute.manager [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:39:10 np0005558241 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d00000059.scope: Deactivated successfully.
Dec 13 03:39:10 np0005558241 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d00000059.scope: Consumed 13.035s CPU time.
Dec 13 03:39:10 np0005558241 systemd-machined[210538]: Machine qemu-109-instance-00000059 terminated.
Dec 13 03:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:39:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:39:10 np0005558241 nova_compute[248510]: 2025-12-13 08:39:10.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:10 np0005558241 nova_compute[248510]: 2025-12-13 08:39:10.767 248514 INFO nova.virt.libvirt.driver [-] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Instance destroyed successfully.#033[00m
Dec 13 03:39:10 np0005558241 nova_compute[248510]: 2025-12-13 08:39:10.768 248514 DEBUG nova.objects.instance [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lazy-loading 'resources' on Instance uuid 0d52c1df-d252-4012-b05c-40737f1089bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:39:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Dec 13 03:39:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Dec 13 03:39:10 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Dec 13 03:39:11 np0005558241 nova_compute[248510]: 2025-12-13 08:39:11.057 248514 INFO nova.virt.libvirt.driver [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Deleting instance files /var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb_del#033[00m
Dec 13 03:39:11 np0005558241 nova_compute[248510]: 2025-12-13 08:39:11.058 248514 INFO nova.virt.libvirt.driver [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Deletion of /var/lib/nova/instances/0d52c1df-d252-4012-b05c-40737f1089bb_del complete#033[00m
Dec 13 03:39:11 np0005558241 nova_compute[248510]: 2025-12-13 08:39:11.121 248514 INFO nova.compute.manager [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:39:11 np0005558241 nova_compute[248510]: 2025-12-13 08:39:11.122 248514 DEBUG oslo.service.loopingcall [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:39:11 np0005558241 nova_compute[248510]: 2025-12-13 08:39:11.122 248514 DEBUG nova.compute.manager [-] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:39:11 np0005558241 nova_compute[248510]: 2025-12-13 08:39:11.122 248514 DEBUG nova.network.neutron [-] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:39:11 np0005558241 nova_compute[248510]: 2025-12-13 08:39:11.300 248514 DEBUG nova.network.neutron [-] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:39:11 np0005558241 nova_compute[248510]: 2025-12-13 08:39:11.325 248514 DEBUG nova.network.neutron [-] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:39:11 np0005558241 nova_compute[248510]: 2025-12-13 08:39:11.356 248514 INFO nova.compute.manager [-] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Took 0.23 seconds to deallocate network for instance.#033[00m
Dec 13 03:39:11 np0005558241 nova_compute[248510]: 2025-12-13 08:39:11.423 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:11 np0005558241 nova_compute[248510]: 2025-12-13 08:39:11.424 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:11 np0005558241 nova_compute[248510]: 2025-12-13 08:39:11.534 248514 DEBUG oslo_concurrency.processutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:11 np0005558241 nova_compute[248510]: 2025-12-13 08:39:11.572 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:39:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2280: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.7 KiB/s wr, 180 op/s
Dec 13 03:39:11 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Dec 13 03:39:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:11.993235) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:39:11 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Dec 13 03:39:11 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615151993278, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1485, "num_deletes": 257, "total_data_size": 2146999, "memory_usage": 2177232, "flush_reason": "Manual Compaction"}
Dec 13 03:39:11 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615152009289, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 2084922, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43587, "largest_seqno": 45071, "table_properties": {"data_size": 2077834, "index_size": 4099, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15447, "raw_average_key_size": 20, "raw_value_size": 2063406, "raw_average_value_size": 2758, "num_data_blocks": 181, "num_entries": 748, "num_filter_entries": 748, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615046, "oldest_key_time": 1765615046, "file_creation_time": 1765615151, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 16355 microseconds, and 5543 cpu microseconds.
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.009587) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 2084922 bytes OK
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.009687) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.011688) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.011702) EVENT_LOG_v1 {"time_micros": 1765615152011698, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.011718) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2140307, prev total WAL file size 2140307, number of live WAL files 2.
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.012997) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(2036KB)], [101(9024KB)]
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615152013060, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11325617, "oldest_snapshot_seqno": -1}
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3344372783' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6627 keys, 9594797 bytes, temperature: kUnknown
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615152085172, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9594797, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9550445, "index_size": 26704, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 172670, "raw_average_key_size": 26, "raw_value_size": 9431515, "raw_average_value_size": 1423, "num_data_blocks": 1048, "num_entries": 6627, "num_filter_entries": 6627, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765615152, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.085464) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9594797 bytes
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.087022) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.8 rd, 132.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 8.8 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(10.0) write-amplify(4.6) OK, records in: 7156, records dropped: 529 output_compression: NoCompression
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.087041) EVENT_LOG_v1 {"time_micros": 1765615152087032, "job": 60, "event": "compaction_finished", "compaction_time_micros": 72211, "compaction_time_cpu_micros": 24607, "output_level": 6, "num_output_files": 1, "total_output_size": 9594797, "num_input_records": 7156, "num_output_records": 6627, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615152087512, "job": 60, "event": "table_file_deletion", "file_number": 103}
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615152089009, "job": 60, "event": "table_file_deletion", "file_number": 101}
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.012892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.089059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.089080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.089082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.089084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:39:12 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:39:12.089085) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:39:12 np0005558241 nova_compute[248510]: 2025-12-13 08:39:12.104 248514 DEBUG oslo_concurrency.processutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:12 np0005558241 nova_compute[248510]: 2025-12-13 08:39:12.110 248514 DEBUG nova.compute.provider_tree [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:39:12 np0005558241 nova_compute[248510]: 2025-12-13 08:39:12.135 248514 DEBUG nova.scheduler.client.report [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:39:12 np0005558241 nova_compute[248510]: 2025-12-13 08:39:12.170 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:12 np0005558241 nova_compute[248510]: 2025-12-13 08:39:12.205 248514 INFO nova.scheduler.client.report [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Deleted allocations for instance 0d52c1df-d252-4012-b05c-40737f1089bb#033[00m
Dec 13 03:39:12 np0005558241 nova_compute[248510]: 2025-12-13 08:39:12.324 248514 DEBUG oslo_concurrency.lockutils [None req-852089b0-dc04-40e0-92d3-b600b3a4cb1e 0e8471eedc0e4ae0a028132802bc1967 c4b40eb3fb314c44867a54f3ba244ec1 - - default default] Lock "0d52c1df-d252-4012-b05c-40737f1089bb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2281: 321 pgs: 321 active+clean; 234 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.8 KiB/s wr, 179 op/s
Dec 13 03:39:13 np0005558241 nova_compute[248510]: 2025-12-13 08:39:13.939 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615138.9386199, 6fb6c605-344c-4ed9-806d-96964b0474f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:39:13 np0005558241 nova_compute[248510]: 2025-12-13 08:39:13.940 248514 INFO nova.compute.manager [-] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:39:13 np0005558241 nova_compute[248510]: 2025-12-13 08:39:13.969 248514 DEBUG nova.compute.manager [None req-8c8d1fa7-6f98-40e2-8ffe-12c4933c9474 - - - - - -] [instance: 6fb6c605-344c-4ed9-806d-96964b0474f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:39:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2352041844' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:39:15 np0005558241 nova_compute[248510]: 2025-12-13 08:39:15.053 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:15 np0005558241 nova_compute[248510]: 2025-12-13 08:39:15.053 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:39:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2352041844' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:39:15 np0005558241 nova_compute[248510]: 2025-12-13 08:39:15.082 248514 DEBUG nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:39:15 np0005558241 nova_compute[248510]: 2025-12-13 08:39:15.191 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:15 np0005558241 nova_compute[248510]: 2025-12-13 08:39:15.191 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:15 np0005558241 nova_compute[248510]: 2025-12-13 08:39:15.201 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:39:15 np0005558241 nova_compute[248510]: 2025-12-13 08:39:15.202 248514 INFO nova.compute.claims [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:39:15 np0005558241 nova_compute[248510]: 2025-12-13 08:39:15.411 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2282: 321 pgs: 321 active+clean; 169 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 311 KiB/s wr, 218 op/s
Dec 13 03:39:15 np0005558241 nova_compute[248510]: 2025-12-13 08:39:15.808 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:39:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2448708107' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.068 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.075 248514 DEBUG nova.compute.provider_tree [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.108 248514 DEBUG nova.scheduler.client.report [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.137 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.138 248514 DEBUG nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.229 248514 DEBUG nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.229 248514 DEBUG nova.network.neutron [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.317 248514 INFO nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.341 248514 DEBUG nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:39:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:16Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:96:d2:10 10.100.0.4
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.461 248514 DEBUG nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:39:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:16Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:96:d2:10 10.100.0.4
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.462 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.462 248514 INFO nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Creating image(s)#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.482 248514 DEBUG nova.storage.rbd_utils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.506 248514 DEBUG nova.storage.rbd_utils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.528 248514 DEBUG nova.storage.rbd_utils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.532 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.575 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.625 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.626 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.627 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.627 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.652 248514 DEBUG nova.storage.rbd_utils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.656 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:39:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Dec 13 03:39:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Dec 13 03:39:16 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Dec 13 03:39:16 np0005558241 nova_compute[248510]: 2025-12-13 08:39:16.695 248514 DEBUG nova.policy [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7507939da64e4320a1c6f389d0fc9045', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:39:17 np0005558241 nova_compute[248510]: 2025-12-13 08:39:17.009 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:17 np0005558241 nova_compute[248510]: 2025-12-13 08:39:17.066 248514 DEBUG nova.storage.rbd_utils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] resizing rbd image 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:39:17 np0005558241 nova_compute[248510]: 2025-12-13 08:39:17.137 248514 DEBUG nova.objects.instance [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'migration_context' on Instance uuid 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:39:17 np0005558241 nova_compute[248510]: 2025-12-13 08:39:17.161 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:39:17 np0005558241 nova_compute[248510]: 2025-12-13 08:39:17.162 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Ensure instance console log exists: /var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:39:17 np0005558241 nova_compute[248510]: 2025-12-13 08:39:17.162 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:17 np0005558241 nova_compute[248510]: 2025-12-13 08:39:17.162 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:17 np0005558241 nova_compute[248510]: 2025-12-13 08:39:17.163 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2284: 321 pgs: 321 active+clean; 169 MiB data, 690 MiB used, 59 GiB / 60 GiB avail; 245 KiB/s rd, 309 KiB/s wr, 123 op/s
Dec 13 03:39:18 np0005558241 nova_compute[248510]: 2025-12-13 08:39:18.464 248514 DEBUG nova.network.neutron [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Successfully created port: ac5a6aec-ff77-4185-a9cb-f95e9ee9461a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:39:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2285: 321 pgs: 321 active+clean; 219 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 4.0 MiB/s wr, 140 op/s
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.772 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.773 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.773 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.774 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.774 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.774 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.933 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.970 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.970 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Image id 0ed20320-9c25-4108-ad76-64b3cb3500ce yields fingerprint 7e19890462cb757da298333dcef0801755c35301 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.970 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] image 0ed20320-9c25-4108-ad76-64b3cb3500ce at (/var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301): checking#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.971 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] image 0ed20320-9c25-4108-ad76-64b3cb3500ce at (/var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.973 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.973 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] 9b486227-b98c-4393-9a3c-aae3e3c419a8 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.973 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] 658e5f04-399b-4a8a-8680-5ae9717949c0 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.974 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.974 248514 WARNING nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Unknown base file: /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.974 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Active base files: /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.974 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Removable base files: /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.975 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.975 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.975 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.975 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Dec 13 03:39:19 np0005558241 nova_compute[248510]: 2025-12-13 08:39:19.976 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Dec 13 03:39:20 np0005558241 nova_compute[248510]: 2025-12-13 08:39:20.694 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615145.692605, 0187165c-81b1-43b8-81f5-05e847fe1fa6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:39:20 np0005558241 nova_compute[248510]: 2025-12-13 08:39:20.694 248514 INFO nova.compute.manager [-] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:39:20 np0005558241 nova_compute[248510]: 2025-12-13 08:39:20.719 248514 DEBUG nova.compute.manager [None req-b98b473b-1560-416c-bc4f-862fbc8a59a9 - - - - - -] [instance: 0187165c-81b1-43b8-81f5-05e847fe1fa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:20 np0005558241 nova_compute[248510]: 2025-12-13 08:39:20.811 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016849448052577893 of space, bias 1.0, pg target 0.5054834415773368 quantized to 32 (current 32)
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006675473055647338 of space, bias 1.0, pg target 0.20026419166942014 quantized to 32 (current 32)
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.955729329617855e-07 of space, bias 4.0, pg target 0.0007146875195541426 quantized to 16 (current 32)
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:39:21 np0005558241 nova_compute[248510]: 2025-12-13 08:39:21.577 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:21 np0005558241 nova_compute[248510]: 2025-12-13 08:39:21.588 248514 DEBUG nova.network.neutron [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Successfully updated port: ac5a6aec-ff77-4185-a9cb-f95e9ee9461a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:39:21 np0005558241 nova_compute[248510]: 2025-12-13 08:39:21.616 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:39:21 np0005558241 nova_compute[248510]: 2025-12-13 08:39:21.616 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquired lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:39:21 np0005558241 nova_compute[248510]: 2025-12-13 08:39:21.616 248514 DEBUG nova.network.neutron [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:39:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:39:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2286: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 526 KiB/s rd, 4.7 MiB/s wr, 167 op/s
Dec 13 03:39:22 np0005558241 nova_compute[248510]: 2025-12-13 08:39:22.083 248514 DEBUG nova.compute.manager [req-3f1cc351-e550-47b0-92d1-1483273a8271 req-625e46f2-89b4-4b87-882f-e122f3a8f280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Received event network-changed-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:39:22 np0005558241 nova_compute[248510]: 2025-12-13 08:39:22.084 248514 DEBUG nova.compute.manager [req-3f1cc351-e550-47b0-92d1-1483273a8271 req-625e46f2-89b4-4b87-882f-e122f3a8f280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Refreshing instance network info cache due to event network-changed-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:39:22 np0005558241 nova_compute[248510]: 2025-12-13 08:39:22.084 248514 DEBUG oslo_concurrency.lockutils [req-3f1cc351-e550-47b0-92d1-1483273a8271 req-625e46f2-89b4-4b87-882f-e122f3a8f280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:39:22 np0005558241 nova_compute[248510]: 2025-12-13 08:39:22.218 248514 DEBUG nova.network.neutron [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:39:22 np0005558241 nova_compute[248510]: 2025-12-13 08:39:22.289 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2287: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 513 KiB/s rd, 4.7 MiB/s wr, 149 op/s
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.031 248514 DEBUG nova.network.neutron [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Updating instance_info_cache with network_info: [{"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.067 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Releasing lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.068 248514 DEBUG nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Instance network_info: |[{"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.069 248514 DEBUG oslo_concurrency.lockutils [req-3f1cc351-e550-47b0-92d1-1483273a8271 req-625e46f2-89b4-4b87-882f-e122f3a8f280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.070 248514 DEBUG nova.network.neutron [req-3f1cc351-e550-47b0-92d1-1483273a8271 req-625e46f2-89b4-4b87-882f-e122f3a8f280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Refreshing network info cache for port ac5a6aec-ff77-4185-a9cb-f95e9ee9461a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.073 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Start _get_guest_xml network_info=[{"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.077 248514 WARNING nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.084 248514 DEBUG nova.virt.libvirt.host [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.085 248514 DEBUG nova.virt.libvirt.host [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.093 248514 DEBUG nova.virt.libvirt.host [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.095 248514 DEBUG nova.virt.libvirt.host [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.096 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.096 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.097 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.097 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.098 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.098 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.098 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.099 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.099 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.099 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.100 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.100 248514 DEBUG nova.virt.hardware [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.104 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:39:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2031110307' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.746 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2288: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 456 KiB/s rd, 4.5 MiB/s wr, 101 op/s
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.766 248514 DEBUG nova.storage.rbd_utils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.769 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.800 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615150.765342, 0d52c1df-d252-4012-b05c-40737f1089bb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.801 248514 INFO nova.compute.manager [-] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.813 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:25 np0005558241 nova_compute[248510]: 2025-12-13 08:39:25.830 248514 DEBUG nova.compute.manager [None req-77d26965-f904-4a80-aef8-45990654c32e - - - - - -] [instance: 0d52c1df-d252-4012-b05c-40737f1089bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:39:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3886587860' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.336 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.338 248514 DEBUG nova.virt.libvirt.vif [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:39:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1902812623',display_name='tempest-ServerActionsTestOtherB-server-1902812623',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1902812623',id=92,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-8adwm7jj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:39:16Z,user_data=None,user_id='7507939da64e4320a1c6f389d0fc9045',uuid=2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.339 248514 DEBUG nova.network.os_vif_util [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.340 248514 DEBUG nova.network.os_vif_util [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ee:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac5a6aec-ff77-4185-a9cb-f95e9ee9461a,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5a6aec-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.342 248514 DEBUG nova.objects.instance [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.467 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:39:26 np0005558241 nova_compute[248510]:  <uuid>2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be</uuid>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:  <name>instance-0000005c</name>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerActionsTestOtherB-server-1902812623</nova:name>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:39:25</nova:creationTime>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:39:26 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:        <nova:user uuid="7507939da64e4320a1c6f389d0fc9045">tempest-ServerActionsTestOtherB-1515133862-project-member</nova:user>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:        <nova:project uuid="f0aee359fbaa484eb7ead3f81eef51e7">tempest-ServerActionsTestOtherB-1515133862</nova:project>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:        <nova:port uuid="ac5a6aec-ff77-4185-a9cb-f95e9ee9461a">
Dec 13 03:39:26 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <entry name="serial">2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be</entry>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <entry name="uuid">2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be</entry>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk">
Dec 13 03:39:26 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:39:26 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk.config">
Dec 13 03:39:26 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:39:26 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:1f:ee:c9"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <target dev="tapac5a6aec-ff"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be/console.log" append="off"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:39:26 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:39:26 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:39:26 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:39:26 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.469 248514 DEBUG nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Preparing to wait for external event network-vif-plugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.470 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.470 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.470 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.471 248514 DEBUG nova.virt.libvirt.vif [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:39:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1902812623',display_name='tempest-ServerActionsTestOtherB-server-1902812623',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1902812623',id=92,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-8adwm7jj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:39:16Z,user_data=None,user_id='7507939da64e4320a1c6f389d0fc9045',uuid=2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.472 248514 DEBUG nova.network.os_vif_util [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.472 248514 DEBUG nova.network.os_vif_util [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ee:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac5a6aec-ff77-4185-a9cb-f95e9ee9461a,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5a6aec-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.473 248514 DEBUG os_vif [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ee:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac5a6aec-ff77-4185-a9cb-f95e9ee9461a,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5a6aec-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.474 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.474 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.474 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.478 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.479 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac5a6aec-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.480 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac5a6aec-ff, col_values=(('external_ids', {'iface-id': 'ac5a6aec-ff77-4185-a9cb-f95e9ee9461a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:ee:c9', 'vm-uuid': '2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.482 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:26 np0005558241 NetworkManager[50376]: <info>  [1765615166.4835] manager: (tapac5a6aec-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/382)
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.484 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.491 248514 INFO os_vif [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ee:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac5a6aec-ff77-4185-a9cb-f95e9ee9461a,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5a6aec-ff')#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.577 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.578 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.578 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No VIF found with MAC fa:16:3e:1f:ee:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.579 248514 INFO nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Using config drive#033[00m
Dec 13 03:39:26 np0005558241 nova_compute[248510]: 2025-12-13 08:39:26.602 248514 DEBUG nova.storage.rbd_utils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:39:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:39:27 np0005558241 nova_compute[248510]: 2025-12-13 08:39:27.406 248514 INFO nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Creating config drive at /var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be/disk.config#033[00m
Dec 13 03:39:27 np0005558241 nova_compute[248510]: 2025-12-13 08:39:27.417 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4s1bd4bs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:27 np0005558241 nova_compute[248510]: 2025-12-13 08:39:27.586 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4s1bd4bs" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:27 np0005558241 nova_compute[248510]: 2025-12-13 08:39:27.617 248514 DEBUG nova.storage.rbd_utils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:39:27 np0005558241 nova_compute[248510]: 2025-12-13 08:39:27.624 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be/disk.config 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2289: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 411 KiB/s rd, 4.0 MiB/s wr, 91 op/s
Dec 13 03:39:27 np0005558241 nova_compute[248510]: 2025-12-13 08:39:27.765 248514 DEBUG oslo_concurrency.processutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be/disk.config 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:27 np0005558241 nova_compute[248510]: 2025-12-13 08:39:27.766 248514 INFO nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Deleting local config drive /var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be/disk.config because it was imported into RBD.#033[00m
Dec 13 03:39:27 np0005558241 NetworkManager[50376]: <info>  [1765615167.8235] manager: (tapac5a6aec-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/383)
Dec 13 03:39:27 np0005558241 kernel: tapac5a6aec-ff: entered promiscuous mode
Dec 13 03:39:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:27Z|00902|binding|INFO|Claiming lport ac5a6aec-ff77-4185-a9cb-f95e9ee9461a for this chassis.
Dec 13 03:39:27 np0005558241 nova_compute[248510]: 2025-12-13 08:39:27.825 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:27Z|00903|binding|INFO|ac5a6aec-ff77-4185-a9cb-f95e9ee9461a: Claiming fa:16:3e:1f:ee:c9 10.100.0.13
Dec 13 03:39:27 np0005558241 nova_compute[248510]: 2025-12-13 08:39:27.844 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:27Z|00904|binding|INFO|Setting lport ac5a6aec-ff77-4185-a9cb-f95e9ee9461a ovn-installed in OVS
Dec 13 03:39:27 np0005558241 nova_compute[248510]: 2025-12-13 08:39:27.847 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:27 np0005558241 systemd-udevd[337343]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:39:27 np0005558241 systemd-machined[210538]: New machine qemu-114-instance-0000005c.
Dec 13 03:39:27 np0005558241 NetworkManager[50376]: <info>  [1765615167.8675] device (tapac5a6aec-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:39:27 np0005558241 NetworkManager[50376]: <info>  [1765615167.8680] device (tapac5a6aec-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:39:27 np0005558241 systemd[1]: Started Virtual Machine qemu-114-instance-0000005c.
Dec 13 03:39:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.051 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:ee:c9 10.100.0.13'], port_security=['fa:16:3e:1f:ee:c9 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-369f7528-6571-47b6-a030-5281647e1eac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77183472-893b-4c33-ab3e-e88f01770ed8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ade703f-c35f-4093-80be-9470cf548d7c, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ac5a6aec-ff77-4185-a9cb-f95e9ee9461a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:39:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:28Z|00905|binding|INFO|Setting lport ac5a6aec-ff77-4185-a9cb-f95e9ee9461a up in Southbound
Dec 13 03:39:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.053 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ac5a6aec-ff77-4185-a9cb-f95e9ee9461a in datapath 369f7528-6571-47b6-a030-5281647e1eac bound to our chassis#033[00m
Dec 13 03:39:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.055 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 369f7528-6571-47b6-a030-5281647e1eac#033[00m
Dec 13 03:39:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.071 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[64181ff1-0f91-463d-8f98-2a17fe7bc3b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.117 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c23e4d4b-c745-4f5e-8140-b8dc08d66cef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.120 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fc439d7b-dc04-4bd3-920f-5b9029e2645d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.148 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[50747038-9244-412f-b9e5-836b488c7ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.166 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2dddcd82-e552-4bc8-8859-3482a4440908]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap369f7528-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:b1:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763105, 'reachable_time': 34886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337358, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.180 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ad939c4f-943d-4af3-b869-d130b6f1e4b8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763117, 'tstamp': 763117}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337359, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763121, 'tstamp': 763121}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337359, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.182 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap369f7528-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:28 np0005558241 nova_compute[248510]: 2025-12-13 08:39:28.184 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.185 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap369f7528-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.186 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:39:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.186 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap369f7528-60, col_values=(('external_ids', {'iface-id': '6c33e57f-b143-4ed7-a271-dc33d6e81057'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:28.186 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:39:28 np0005558241 nova_compute[248510]: 2025-12-13 08:39:28.364 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615168.3643646, 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:39:28 np0005558241 nova_compute[248510]: 2025-12-13 08:39:28.365 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] VM Started (Lifecycle Event)#033[00m
Dec 13 03:39:28 np0005558241 nova_compute[248510]: 2025-12-13 08:39:28.405 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:28 np0005558241 nova_compute[248510]: 2025-12-13 08:39:28.409 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615168.3645506, 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:39:28 np0005558241 nova_compute[248510]: 2025-12-13 08:39:28.409 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:39:28 np0005558241 nova_compute[248510]: 2025-12-13 08:39:28.450 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:28 np0005558241 nova_compute[248510]: 2025-12-13 08:39:28.453 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:39:28 np0005558241 nova_compute[248510]: 2025-12-13 08:39:28.484 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:39:28 np0005558241 nova_compute[248510]: 2025-12-13 08:39:28.625 248514 DEBUG nova.network.neutron [req-3f1cc351-e550-47b0-92d1-1483273a8271 req-625e46f2-89b4-4b87-882f-e122f3a8f280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Updated VIF entry in instance network info cache for port ac5a6aec-ff77-4185-a9cb-f95e9ee9461a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:39:28 np0005558241 nova_compute[248510]: 2025-12-13 08:39:28.625 248514 DEBUG nova.network.neutron [req-3f1cc351-e550-47b0-92d1-1483273a8271 req-625e46f2-89b4-4b87-882f-e122f3a8f280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Updating instance_info_cache with network_info: [{"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:39:28 np0005558241 nova_compute[248510]: 2025-12-13 08:39:28.658 248514 DEBUG oslo_concurrency.lockutils [req-3f1cc351-e550-47b0-92d1-1483273a8271 req-625e46f2-89b4-4b87-882f-e122f3a8f280 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:39:29 np0005558241 nova_compute[248510]: 2025-12-13 08:39:29.504 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2290: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 3.7 MiB/s wr, 90 op/s
Dec 13 03:39:30 np0005558241 nova_compute[248510]: 2025-12-13 08:39:30.815 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:31 np0005558241 nova_compute[248510]: 2025-12-13 08:39:31.483 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:39:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2291: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 1.0 MiB/s wr, 44 op/s
Dec 13 03:39:32 np0005558241 podman[337404]: 2025-12-13 08:39:32.974309738 +0000 UTC m=+0.058981866 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 13 03:39:32 np0005558241 podman[337403]: 2025-12-13 08:39:32.997432062 +0000 UTC m=+0.076430679 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Dec 13 03:39:33 np0005558241 podman[337402]: 2025-12-13 08:39:33.011305636 +0000 UTC m=+0.101075661 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 03:39:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2292: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 25 KiB/s wr, 10 op/s
Dec 13 03:39:34 np0005558241 nova_compute[248510]: 2025-12-13 08:39:34.970 248514 DEBUG nova.compute.manager [req-0ae57db2-272d-4414-9f1d-5e9823a3ed0d req-f2a81742-6ccb-4019-8b00-9a8a6b746d74 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Received event network-vif-plugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:39:34 np0005558241 nova_compute[248510]: 2025-12-13 08:39:34.970 248514 DEBUG oslo_concurrency.lockutils [req-0ae57db2-272d-4414-9f1d-5e9823a3ed0d req-f2a81742-6ccb-4019-8b00-9a8a6b746d74 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:34 np0005558241 nova_compute[248510]: 2025-12-13 08:39:34.971 248514 DEBUG oslo_concurrency.lockutils [req-0ae57db2-272d-4414-9f1d-5e9823a3ed0d req-f2a81742-6ccb-4019-8b00-9a8a6b746d74 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:34 np0005558241 nova_compute[248510]: 2025-12-13 08:39:34.971 248514 DEBUG oslo_concurrency.lockutils [req-0ae57db2-272d-4414-9f1d-5e9823a3ed0d req-f2a81742-6ccb-4019-8b00-9a8a6b746d74 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:34 np0005558241 nova_compute[248510]: 2025-12-13 08:39:34.971 248514 DEBUG nova.compute.manager [req-0ae57db2-272d-4414-9f1d-5e9823a3ed0d req-f2a81742-6ccb-4019-8b00-9a8a6b746d74 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Processing event network-vif-plugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:39:34 np0005558241 nova_compute[248510]: 2025-12-13 08:39:34.972 248514 DEBUG nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:39:34 np0005558241 nova_compute[248510]: 2025-12-13 08:39:34.974 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615174.974791, 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:39:34 np0005558241 nova_compute[248510]: 2025-12-13 08:39:34.975 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:39:34 np0005558241 nova_compute[248510]: 2025-12-13 08:39:34.976 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:39:34 np0005558241 nova_compute[248510]: 2025-12-13 08:39:34.979 248514 INFO nova.virt.libvirt.driver [-] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Instance spawned successfully.#033[00m
Dec 13 03:39:34 np0005558241 nova_compute[248510]: 2025-12-13 08:39:34.979 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.077 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.081 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.081 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.082 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.082 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.083 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.083 248514 DEBUG nova.virt.libvirt.driver [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.087 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.138 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.195 248514 INFO nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Took 18.73 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.195 248514 DEBUG nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.284 248514 INFO nova.compute.manager [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Took 20.12 seconds to build instance.#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.427 248514 DEBUG oslo_concurrency.lockutils [None req-cb4d0387-0008-40af-9df0-a22b0cfd6317 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.445 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquiring lock "70a00398-fa02-482c-a33e-f190895c8d28" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.445 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.472 248514 DEBUG nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.587 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.588 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.597 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.597 248514 INFO nova.compute.claims [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:39:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2293: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 13 KiB/s wr, 9 op/s
Dec 13 03:39:35 np0005558241 nova_compute[248510]: 2025-12-13 08:39:35.815 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:36 np0005558241 nova_compute[248510]: 2025-12-13 08:39:36.136 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:36 np0005558241 nova_compute[248510]: 2025-12-13 08:39:36.485 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:39:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3273581284' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:39:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:39:36 np0005558241 nova_compute[248510]: 2025-12-13 08:39:36.683 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:36 np0005558241 nova_compute[248510]: 2025-12-13 08:39:36.690 248514 DEBUG nova.compute.provider_tree [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:39:36 np0005558241 nova_compute[248510]: 2025-12-13 08:39:36.762 248514 DEBUG nova.scheduler.client.report [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:39:36 np0005558241 nova_compute[248510]: 2025-12-13 08:39:36.794 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:36 np0005558241 nova_compute[248510]: 2025-12-13 08:39:36.795 248514 DEBUG nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:39:36 np0005558241 nova_compute[248510]: 2025-12-13 08:39:36.936 248514 DEBUG nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:39:36 np0005558241 nova_compute[248510]: 2025-12-13 08:39:36.936 248514 DEBUG nova.network.neutron [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:39:36 np0005558241 nova_compute[248510]: 2025-12-13 08:39:36.991 248514 INFO nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.024 248514 DEBUG nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.128 248514 DEBUG nova.compute.manager [req-fb0a8cf8-08d9-41de-9772-a817c9892a8f req-d2febb84-d6c1-4552-a5d6-3043107dcbe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Received event network-vif-plugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.130 248514 DEBUG oslo_concurrency.lockutils [req-fb0a8cf8-08d9-41de-9772-a817c9892a8f req-d2febb84-d6c1-4552-a5d6-3043107dcbe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.131 248514 DEBUG oslo_concurrency.lockutils [req-fb0a8cf8-08d9-41de-9772-a817c9892a8f req-d2febb84-d6c1-4552-a5d6-3043107dcbe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.131 248514 DEBUG oslo_concurrency.lockutils [req-fb0a8cf8-08d9-41de-9772-a817c9892a8f req-d2febb84-d6c1-4552-a5d6-3043107dcbe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.132 248514 DEBUG nova.compute.manager [req-fb0a8cf8-08d9-41de-9772-a817c9892a8f req-d2febb84-d6c1-4552-a5d6-3043107dcbe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] No waiting events found dispatching network-vif-plugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.133 248514 WARNING nova.compute.manager [req-fb0a8cf8-08d9-41de-9772-a817c9892a8f req-d2febb84-d6c1-4552-a5d6-3043107dcbe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Received unexpected event network-vif-plugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a for instance with vm_state active and task_state pausing.#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.189 248514 INFO nova.compute.manager [None req-a437fb84-64ec-418f-aa6e-6d859119238c 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Pausing#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.191 248514 DEBUG nova.objects.instance [None req-a437fb84-64ec-418f-aa6e-6d859119238c 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'flavor' on Instance uuid 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.237 248514 DEBUG nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.239 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.239 248514 INFO nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Creating image(s)#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.260 248514 DEBUG nova.storage.rbd_utils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] rbd image 70a00398-fa02-482c-a33e-f190895c8d28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.282 248514 DEBUG nova.storage.rbd_utils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] rbd image 70a00398-fa02-482c-a33e-f190895c8d28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.308 248514 DEBUG nova.storage.rbd_utils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] rbd image 70a00398-fa02-482c-a33e-f190895c8d28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.312 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.359 248514 DEBUG nova.policy [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ab92f76b5ae549d8bae02bb7911221d6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '791fc8cca65d4bfd9d9f2f19018d60fb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.367 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615177.3676865, 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.368 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.370 248514 DEBUG nova.compute.manager [None req-a437fb84-64ec-418f-aa6e-6d859119238c 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.403 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.407 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.411 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.412 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.413 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.413 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.438 248514 DEBUG nova.storage.rbd_utils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] rbd image 70a00398-fa02-482c-a33e-f190895c8d28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.443 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 70a00398-fa02-482c-a33e-f190895c8d28_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2294: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.793 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 70a00398-fa02-482c-a33e-f190895c8d28_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.858 248514 DEBUG nova.storage.rbd_utils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] resizing rbd image 70a00398-fa02-482c-a33e-f190895c8d28_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:39:37 np0005558241 nova_compute[248510]: 2025-12-13 08:39:37.941 248514 DEBUG nova.objects.instance [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lazy-loading 'migration_context' on Instance uuid 70a00398-fa02-482c-a33e-f190895c8d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:39:38 np0005558241 nova_compute[248510]: 2025-12-13 08:39:38.021 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:39:38 np0005558241 nova_compute[248510]: 2025-12-13 08:39:38.022 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Ensure instance console log exists: /var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:39:38 np0005558241 nova_compute[248510]: 2025-12-13 08:39:38.023 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:38 np0005558241 nova_compute[248510]: 2025-12-13 08:39:38.023 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:38 np0005558241 nova_compute[248510]: 2025-12-13 08:39:38.024 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:39 np0005558241 nova_compute[248510]: 2025-12-13 08:39:39.107 248514 DEBUG nova.network.neutron [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Successfully created port: 004ae3d0-5827-4393-953d-aa704915956b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:39:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2295: 321 pgs: 321 active+clean; 272 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 53 op/s
Dec 13 03:39:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:39:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:39:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:39:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:39:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:39:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:39:40 np0005558241 nova_compute[248510]: 2025-12-13 08:39:40.844 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:41 np0005558241 nova_compute[248510]: 2025-12-13 08:39:41.488 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:41 np0005558241 nova_compute[248510]: 2025-12-13 08:39:41.526 248514 DEBUG nova.network.neutron [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Successfully updated port: 004ae3d0-5827-4393-953d-aa704915956b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:39:41 np0005558241 nova_compute[248510]: 2025-12-13 08:39:41.554 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquiring lock "refresh_cache-70a00398-fa02-482c-a33e-f190895c8d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:39:41 np0005558241 nova_compute[248510]: 2025-12-13 08:39:41.554 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquired lock "refresh_cache-70a00398-fa02-482c-a33e-f190895c8d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:39:41 np0005558241 nova_compute[248510]: 2025-12-13 08:39:41.555 248514 DEBUG nova.network.neutron [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:39:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:39:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2296: 321 pgs: 321 active+clean; 293 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Dec 13 03:39:41 np0005558241 nova_compute[248510]: 2025-12-13 08:39:41.813 248514 DEBUG oslo_concurrency.lockutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:41 np0005558241 nova_compute[248510]: 2025-12-13 08:39:41.813 248514 DEBUG oslo_concurrency.lockutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:41 np0005558241 nova_compute[248510]: 2025-12-13 08:39:41.814 248514 INFO nova.compute.manager [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Shelving#033[00m
Dec 13 03:39:41 np0005558241 nova_compute[248510]: 2025-12-13 08:39:41.854 248514 DEBUG nova.compute.manager [req-611da9d4-1adc-4393-a8a7-7973f2e12686 req-a7a3090a-8aa6-4fa0-9c4c-48717a8eb98f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Received event network-changed-004ae3d0-5827-4393-953d-aa704915956b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:39:41 np0005558241 nova_compute[248510]: 2025-12-13 08:39:41.854 248514 DEBUG nova.compute.manager [req-611da9d4-1adc-4393-a8a7-7973f2e12686 req-a7a3090a-8aa6-4fa0-9c4c-48717a8eb98f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Refreshing instance network info cache due to event network-changed-004ae3d0-5827-4393-953d-aa704915956b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:39:41 np0005558241 nova_compute[248510]: 2025-12-13 08:39:41.855 248514 DEBUG oslo_concurrency.lockutils [req-611da9d4-1adc-4393-a8a7-7973f2e12686 req-a7a3090a-8aa6-4fa0-9c4c-48717a8eb98f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-70a00398-fa02-482c-a33e-f190895c8d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:39:41 np0005558241 kernel: tapac5a6aec-ff (unregistering): left promiscuous mode
Dec 13 03:39:41 np0005558241 NetworkManager[50376]: <info>  [1765615181.8668] device (tapac5a6aec-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:39:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:41Z|00906|binding|INFO|Releasing lport ac5a6aec-ff77-4185-a9cb-f95e9ee9461a from this chassis (sb_readonly=0)
Dec 13 03:39:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:41Z|00907|binding|INFO|Setting lport ac5a6aec-ff77-4185-a9cb-f95e9ee9461a down in Southbound
Dec 13 03:39:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:41Z|00908|binding|INFO|Removing iface tapac5a6aec-ff ovn-installed in OVS
Dec 13 03:39:41 np0005558241 nova_compute[248510]: 2025-12-13 08:39:41.941 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:41.947 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:ee:c9 10.100.0.13'], port_security=['fa:16:3e:1f:ee:c9 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-369f7528-6571-47b6-a030-5281647e1eac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '77183472-893b-4c33-ab3e-e88f01770ed8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ade703f-c35f-4093-80be-9470cf548d7c, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ac5a6aec-ff77-4185-a9cb-f95e9ee9461a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:39:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:41.948 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ac5a6aec-ff77-4185-a9cb-f95e9ee9461a in datapath 369f7528-6571-47b6-a030-5281647e1eac unbound from our chassis#033[00m
Dec 13 03:39:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:41.949 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 369f7528-6571-47b6-a030-5281647e1eac#033[00m
Dec 13 03:39:41 np0005558241 nova_compute[248510]: 2025-12-13 08:39:41.958 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:41.967 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b9947fee-f03b-4f85-a711-492b4bb46b50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:41 np0005558241 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Dec 13 03:39:41 np0005558241 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005c.scope: Consumed 2.927s CPU time.
Dec 13 03:39:41 np0005558241 systemd-machined[210538]: Machine qemu-114-instance-0000005c terminated.
Dec 13 03:39:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:41.996 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[862e1021-4b15-4051-a830-16640b8a6016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:41.999 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[50c28bc3-fc9a-4981-a96d-e623bb58dc4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:42 np0005558241 nova_compute[248510]: 2025-12-13 08:39:42.009 248514 DEBUG nova.network.neutron [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:39:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:42.025 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[31b9fa0b-ad35-403e-843b-0f3f5314fa76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:42.044 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[51e53e98-948e-4d6e-89ed-84310ccbc7ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap369f7528-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:b1:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763105, 'reachable_time': 34886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337661, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:42 np0005558241 nova_compute[248510]: 2025-12-13 08:39:42.055 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:42 np0005558241 nova_compute[248510]: 2025-12-13 08:39:42.059 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:42.062 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[94203b55-d89d-4272-b378-c4b2d5dee372]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763117, 'tstamp': 763117}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337664, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763121, 'tstamp': 763121}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337664, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:42.063 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap369f7528-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:42 np0005558241 nova_compute[248510]: 2025-12-13 08:39:42.064 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:42 np0005558241 nova_compute[248510]: 2025-12-13 08:39:42.069 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:42.069 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap369f7528-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:42.069 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:39:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:42.070 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap369f7528-60, col_values=(('external_ids', {'iface-id': '6c33e57f-b143-4ed7-a271-dc33d6e81057'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:42.070 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:39:42 np0005558241 nova_compute[248510]: 2025-12-13 08:39:42.072 248514 INFO nova.virt.libvirt.driver [-] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Instance destroyed successfully.#033[00m
Dec 13 03:39:42 np0005558241 nova_compute[248510]: 2025-12-13 08:39:42.072 248514 DEBUG nova.objects.instance [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:39:43 np0005558241 nova_compute[248510]: 2025-12-13 08:39:43.147 248514 INFO nova.virt.libvirt.driver [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Beginning cold snapshot process#033[00m
Dec 13 03:39:43 np0005558241 nova_compute[248510]: 2025-12-13 08:39:43.346 248514 DEBUG nova.virt.libvirt.imagebackend [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec 13 03:39:43 np0005558241 nova_compute[248510]: 2025-12-13 08:39:43.738 248514 DEBUG nova.storage.rbd_utils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(5f10e4dbf0b14f0fa5e1d5c60a24ded4) on rbd image(2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:39:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2297: 321 pgs: 321 active+clean; 293 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Dec 13 03:39:43 np0005558241 nova_compute[248510]: 2025-12-13 08:39:43.861 248514 DEBUG nova.network.neutron [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Updating instance_info_cache with network_info: [{"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.090 248514 DEBUG nova.compute.manager [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Received event network-vif-unplugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.091 248514 DEBUG oslo_concurrency.lockutils [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.091 248514 DEBUG oslo_concurrency.lockutils [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.091 248514 DEBUG oslo_concurrency.lockutils [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.091 248514 DEBUG nova.compute.manager [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] No waiting events found dispatching network-vif-unplugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.091 248514 WARNING nova.compute.manager [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Received unexpected event network-vif-unplugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.092 248514 DEBUG nova.compute.manager [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Received event network-vif-plugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.092 248514 DEBUG oslo_concurrency.lockutils [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.092 248514 DEBUG oslo_concurrency.lockutils [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.092 248514 DEBUG oslo_concurrency.lockutils [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.092 248514 DEBUG nova.compute.manager [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] No waiting events found dispatching network-vif-plugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.093 248514 WARNING nova.compute.manager [req-02e63b35-43bf-4f6a-8dc2-ee93adc95267 req-3836ab2b-7bb7-4ec6-99f8-3668cd5e5685 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Received unexpected event network-vif-plugged-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.096 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Releasing lock "refresh_cache-70a00398-fa02-482c-a33e-f190895c8d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.097 248514 DEBUG nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Instance network_info: |[{"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.097 248514 DEBUG oslo_concurrency.lockutils [req-611da9d4-1adc-4393-a8a7-7973f2e12686 req-a7a3090a-8aa6-4fa0-9c4c-48717a8eb98f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-70a00398-fa02-482c-a33e-f190895c8d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.097 248514 DEBUG nova.network.neutron [req-611da9d4-1adc-4393-a8a7-7973f2e12686 req-a7a3090a-8aa6-4fa0-9c4c-48717a8eb98f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Refreshing network info cache for port 004ae3d0-5827-4393-953d-aa704915956b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.100 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Start _get_guest_xml network_info=[{"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.105 248514 WARNING nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.112 248514 DEBUG nova.virt.libvirt.host [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.113 248514 DEBUG nova.virt.libvirt.host [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.116 248514 DEBUG nova.virt.libvirt.host [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.116 248514 DEBUG nova.virt.libvirt.host [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.116 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.117 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.117 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.117 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.117 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.117 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.118 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.118 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.118 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.118 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.118 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.119 248514 DEBUG nova.virt.hardware [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.121 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:39:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571528689' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.713 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.740 248514 DEBUG nova.storage.rbd_utils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] rbd image 70a00398-fa02-482c-a33e-f190895c8d28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:39:44 np0005558241 nova_compute[248510]: 2025-12-13 08:39:44.746 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Dec 13 03:39:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:39:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:39:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:39:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:39:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:39:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:39:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3109038917' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.511 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.765s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.513 248514 DEBUG nova.virt.libvirt.vif [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:39:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-764535591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-764535591',id=93,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='791fc8cca65d4bfd9d9f2f19018d60fb',ramdisk_id='',reservation_id='r-m1z94rmm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1097359454',owner_user_name='tempest-ServerTagsTestJSON-1097359454-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:39:37Z,user_data=None,user_id='ab92f76b5ae549d8bae02bb7911221d6',uuid=70a00398-fa02-482c-a33e-f190895c8d28,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.513 248514 DEBUG nova.network.os_vif_util [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Converting VIF {"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.514 248514 DEBUG nova.network.os_vif_util [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7b:f4,bridge_name='br-int',has_traffic_filtering=True,id=004ae3d0-5827-4393-953d-aa704915956b,network=Network(6c5b1dd4-5a39-41f9-a98f-1f901a1328cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap004ae3d0-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.515 248514 DEBUG nova.objects.instance [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lazy-loading 'pci_devices' on Instance uuid 70a00398-fa02-482c-a33e-f190895c8d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.540 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:39:45 np0005558241 nova_compute[248510]:  <uuid>70a00398-fa02-482c-a33e-f190895c8d28</uuid>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:  <name>instance-0000005d</name>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerTagsTestJSON-server-764535591</nova:name>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:39:44</nova:creationTime>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:39:45 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:        <nova:user uuid="ab92f76b5ae549d8bae02bb7911221d6">tempest-ServerTagsTestJSON-1097359454-project-member</nova:user>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:        <nova:project uuid="791fc8cca65d4bfd9d9f2f19018d60fb">tempest-ServerTagsTestJSON-1097359454</nova:project>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:        <nova:port uuid="004ae3d0-5827-4393-953d-aa704915956b">
Dec 13 03:39:45 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <entry name="serial">70a00398-fa02-482c-a33e-f190895c8d28</entry>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <entry name="uuid">70a00398-fa02-482c-a33e-f190895c8d28</entry>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/70a00398-fa02-482c-a33e-f190895c8d28_disk">
Dec 13 03:39:45 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:39:45 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/70a00398-fa02-482c-a33e-f190895c8d28_disk.config">
Dec 13 03:39:45 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:39:45 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:b0:7b:f4"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <target dev="tap004ae3d0-58"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28/console.log" append="off"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:39:45 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:39:45 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:39:45 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:39:45 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.541 248514 DEBUG nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Preparing to wait for external event network-vif-plugged-004ae3d0-5827-4393-953d-aa704915956b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.541 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquiring lock "70a00398-fa02-482c-a33e-f190895c8d28-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.541 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.542 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.542 248514 DEBUG nova.virt.libvirt.vif [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:39:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-764535591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-764535591',id=93,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='791fc8cca65d4bfd9d9f2f19018d60fb',ramdisk_id='',reservation_id='r-m1z94rmm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1097359454',owner_user_name='tempest-ServerTagsTestJSON-1097359454-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:39:37Z,user_data=None,user_id='ab92f76b5ae549d8bae02bb7911221d6',uuid=70a00398-fa02-482c-a33e-f190895c8d28,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.542 248514 DEBUG nova.network.os_vif_util [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Converting VIF {"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.543 248514 DEBUG nova.network.os_vif_util [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7b:f4,bridge_name='br-int',has_traffic_filtering=True,id=004ae3d0-5827-4393-953d-aa704915956b,network=Network(6c5b1dd4-5a39-41f9-a98f-1f901a1328cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap004ae3d0-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.543 248514 DEBUG os_vif [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7b:f4,bridge_name='br-int',has_traffic_filtering=True,id=004ae3d0-5827-4393-953d-aa704915956b,network=Network(6c5b1dd4-5a39-41f9-a98f-1f901a1328cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap004ae3d0-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.544 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.544 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.545 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.548 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.548 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap004ae3d0-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.549 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap004ae3d0-58, col_values=(('external_ids', {'iface-id': '004ae3d0-5827-4393-953d-aa704915956b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:7b:f4', 'vm-uuid': '70a00398-fa02-482c-a33e-f190895c8d28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.551 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.552 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:39:45 np0005558241 NetworkManager[50376]: <info>  [1765615185.5528] manager: (tap004ae3d0-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.559 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.560 248514 INFO os_vif [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7b:f4,bridge_name='br-int',has_traffic_filtering=True,id=004ae3d0-5827-4393-953d-aa704915956b,network=Network(6c5b1dd4-5a39-41f9-a98f-1f901a1328cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap004ae3d0-58')#033[00m
Dec 13 03:39:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:39:45 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Dec 13 03:39:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:39:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:39:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:39:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:39:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:39:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.761 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.762 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.762 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] No VIF found with MAC fa:16:3e:b0:7b:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.762 248514 INFO nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Using config drive#033[00m
Dec 13 03:39:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2299: 321 pgs: 321 active+clean; 293 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.785 248514 DEBUG nova.storage.rbd_utils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] rbd image 70a00398-fa02-482c-a33e-f190895c8d28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:39:45 np0005558241 nova_compute[248510]: 2025-12-13 08:39:45.845 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:46 np0005558241 podman[337950]: 2025-12-13 08:39:46.064665848 +0000 UTC m=+0.040966669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:39:46 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:39:46 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:39:46 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:39:46 np0005558241 podman[337950]: 2025-12-13 08:39:46.381898486 +0000 UTC m=+0.358199227 container create af37f8239fc1bdbd9fb16a1da097d349229bb8372341b4b07bc02513f25b141c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hofstadter, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:39:46 np0005558241 nova_compute[248510]: 2025-12-13 08:39:46.481 248514 DEBUG nova.storage.rbd_utils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] cloning vms/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk@5f10e4dbf0b14f0fa5e1d5c60a24ded4 to images/764d0410-b25b-4414-843f-9f74e17a2d49 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:39:46 np0005558241 systemd[1]: Started libpod-conmon-af37f8239fc1bdbd9fb16a1da097d349229bb8372341b4b07bc02513f25b141c.scope.
Dec 13 03:39:46 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:39:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:39:46 np0005558241 podman[337950]: 2025-12-13 08:39:46.814521279 +0000 UTC m=+0.790822060 container init af37f8239fc1bdbd9fb16a1da097d349229bb8372341b4b07bc02513f25b141c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hofstadter, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:39:46 np0005558241 podman[337950]: 2025-12-13 08:39:46.821268407 +0000 UTC m=+0.797569158 container start af37f8239fc1bdbd9fb16a1da097d349229bb8372341b4b07bc02513f25b141c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hofstadter, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:39:46 np0005558241 jovial_hofstadter[337973]: 167 167
Dec 13 03:39:46 np0005558241 systemd[1]: libpod-af37f8239fc1bdbd9fb16a1da097d349229bb8372341b4b07bc02513f25b141c.scope: Deactivated successfully.
Dec 13 03:39:46 np0005558241 podman[337950]: 2025-12-13 08:39:46.84072447 +0000 UTC m=+0.817025311 container attach af37f8239fc1bdbd9fb16a1da097d349229bb8372341b4b07bc02513f25b141c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hofstadter, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 03:39:46 np0005558241 podman[337950]: 2025-12-13 08:39:46.842232888 +0000 UTC m=+0.818533649 container died af37f8239fc1bdbd9fb16a1da097d349229bb8372341b4b07bc02513f25b141c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hofstadter, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 03:39:46 np0005558241 nova_compute[248510]: 2025-12-13 08:39:46.853 248514 INFO nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Creating config drive at /var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28/disk.config#033[00m
Dec 13 03:39:46 np0005558241 nova_compute[248510]: 2025-12-13 08:39:46.858 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf3u11fmb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d6a68ac05c552a44b21e6e055d43115fe9a26b1dc6215dc5ad7b5f0d95ce6a48-merged.mount: Deactivated successfully.
Dec 13 03:39:46 np0005558241 nova_compute[248510]: 2025-12-13 08:39:46.992 248514 DEBUG nova.storage.rbd_utils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] flattening images/764d0410-b25b-4414-843f-9f74e17a2d49 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec 13 03:39:47 np0005558241 podman[337950]: 2025-12-13 08:39:47.009537052 +0000 UTC m=+0.985837783 container remove af37f8239fc1bdbd9fb16a1da097d349229bb8372341b4b07bc02513f25b141c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_hofstadter, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:39:47 np0005558241 systemd[1]: libpod-conmon-af37f8239fc1bdbd9fb16a1da097d349229bb8372341b4b07bc02513f25b141c.scope: Deactivated successfully.
Dec 13 03:39:47 np0005558241 nova_compute[248510]: 2025-12-13 08:39:47.038 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf3u11fmb" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:47 np0005558241 nova_compute[248510]: 2025-12-13 08:39:47.073 248514 DEBUG nova.storage.rbd_utils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] rbd image 70a00398-fa02-482c-a33e-f190895c8d28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:39:47 np0005558241 nova_compute[248510]: 2025-12-13 08:39:47.077 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28/disk.config 70a00398-fa02-482c-a33e-f190895c8d28_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:47 np0005558241 nova_compute[248510]: 2025-12-13 08:39:47.211 248514 DEBUG nova.network.neutron [req-611da9d4-1adc-4393-a8a7-7973f2e12686 req-a7a3090a-8aa6-4fa0-9c4c-48717a8eb98f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Updated VIF entry in instance network info cache for port 004ae3d0-5827-4393-953d-aa704915956b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:39:47 np0005558241 nova_compute[248510]: 2025-12-13 08:39:47.212 248514 DEBUG nova.network.neutron [req-611da9d4-1adc-4393-a8a7-7973f2e12686 req-a7a3090a-8aa6-4fa0-9c4c-48717a8eb98f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Updating instance_info_cache with network_info: [{"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:39:47 np0005558241 nova_compute[248510]: 2025-12-13 08:39:47.232 248514 DEBUG oslo_concurrency.lockutils [req-611da9d4-1adc-4393-a8a7-7973f2e12686 req-a7a3090a-8aa6-4fa0-9c4c-48717a8eb98f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-70a00398-fa02-482c-a33e-f190895c8d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:39:47 np0005558241 podman[338065]: 2025-12-13 08:39:47.248199599 +0000 UTC m=+0.094669792 container create 075859b9e287511c235366a97b289d82150dcc33887576d4c55d55d51743436d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_feynman, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 03:39:47 np0005558241 podman[338065]: 2025-12-13 08:39:47.17896584 +0000 UTC m=+0.025436063 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:39:47 np0005558241 systemd[1]: Started libpod-conmon-075859b9e287511c235366a97b289d82150dcc33887576d4c55d55d51743436d.scope.
Dec 13 03:39:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:39:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d724d3b2b9147e965b80f78b9d0c6b485909289fe1e5dd38841710bada7708/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:39:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d724d3b2b9147e965b80f78b9d0c6b485909289fe1e5dd38841710bada7708/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:39:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d724d3b2b9147e965b80f78b9d0c6b485909289fe1e5dd38841710bada7708/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:39:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d724d3b2b9147e965b80f78b9d0c6b485909289fe1e5dd38841710bada7708/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:39:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d724d3b2b9147e965b80f78b9d0c6b485909289fe1e5dd38841710bada7708/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:39:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2300: 321 pgs: 321 active+clean; 293 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Dec 13 03:39:47 np0005558241 nova_compute[248510]: 2025-12-13 08:39:47.977 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:39:48 np0005558241 podman[338065]: 2025-12-13 08:39:48.042443784 +0000 UTC m=+0.888914027 container init 075859b9e287511c235366a97b289d82150dcc33887576d4c55d55d51743436d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:39:48 np0005558241 podman[338065]: 2025-12-13 08:39:48.050609377 +0000 UTC m=+0.897079570 container start 075859b9e287511c235366a97b289d82150dcc33887576d4c55d55d51743436d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_feynman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.092 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:39:48 np0005558241 nova_compute[248510]: 2025-12-13 08:39:48.093 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.093 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:39:48 np0005558241 podman[338065]: 2025-12-13 08:39:48.373204928 +0000 UTC m=+1.219675141 container attach 075859b9e287511c235366a97b289d82150dcc33887576d4c55d55d51743436d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_feynman, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 03:39:48 np0005558241 nova_compute[248510]: 2025-12-13 08:39:48.583 248514 DEBUG oslo_concurrency.processutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28/disk.config 70a00398-fa02-482c-a33e-f190895c8d28_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:48 np0005558241 nova_compute[248510]: 2025-12-13 08:39:48.584 248514 INFO nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Deleting local config drive /var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28/disk.config because it was imported into RBD.#033[00m
Dec 13 03:39:48 np0005558241 nova_compute[248510]: 2025-12-13 08:39:48.600 248514 DEBUG nova.storage.rbd_utils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] removing snapshot(5f10e4dbf0b14f0fa5e1d5c60a24ded4) on rbd image(2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec 13 03:39:48 np0005558241 kernel: tap004ae3d0-58: entered promiscuous mode
Dec 13 03:39:48 np0005558241 NetworkManager[50376]: <info>  [1765615188.6428] manager: (tap004ae3d0-58): new Tun device (/org/freedesktop/NetworkManager/Devices/385)
Dec 13 03:39:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:48Z|00909|binding|INFO|Claiming lport 004ae3d0-5827-4393-953d-aa704915956b for this chassis.
Dec 13 03:39:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:48Z|00910|binding|INFO|004ae3d0-5827-4393-953d-aa704915956b: Claiming fa:16:3e:b0:7b:f4 10.100.0.4
Dec 13 03:39:48 np0005558241 nova_compute[248510]: 2025-12-13 08:39:48.647 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.655 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:7b:f4 10.100.0.4'], port_security=['fa:16:3e:b0:7b:f4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '70a00398-fa02-482c-a33e-f190895c8d28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '791fc8cca65d4bfd9d9f2f19018d60fb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '480cd559-e8d2-47d2-86f5-7cc9a5743178', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e0ff201-d141-4a6c-ab2a-8538bea7e081, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=004ae3d0-5827-4393-953d-aa704915956b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.656 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 004ae3d0-5827-4393-953d-aa704915956b in datapath 6c5b1dd4-5a39-41f9-a98f-1f901a1328cf bound to our chassis#033[00m
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.657 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c5b1dd4-5a39-41f9-a98f-1f901a1328cf#033[00m
Dec 13 03:39:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:48Z|00911|binding|INFO|Setting lport 004ae3d0-5827-4393-953d-aa704915956b ovn-installed in OVS
Dec 13 03:39:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:48Z|00912|binding|INFO|Setting lport 004ae3d0-5827-4393-953d-aa704915956b up in Southbound
Dec 13 03:39:48 np0005558241 nova_compute[248510]: 2025-12-13 08:39:48.670 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.670 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[de7d369f-fcad-40af-9fa4-684ae8b845b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.671 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c5b1dd4-51 in ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.673 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c5b1dd4-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.673 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a8654e7b-5bc5-40a6-a9b4-b1ebc3ea9828]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:48 np0005558241 nova_compute[248510]: 2025-12-13 08:39:48.676 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.677 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2773a0f6-f50e-4650-b34b-aa77050ec541]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:48 np0005558241 systemd-udevd[338154]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:39:48 np0005558241 systemd-machined[210538]: New machine qemu-115-instance-0000005d.
Dec 13 03:39:48 np0005558241 NetworkManager[50376]: <info>  [1765615188.6907] device (tap004ae3d0-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:39:48 np0005558241 NetworkManager[50376]: <info>  [1765615188.6913] device (tap004ae3d0-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.689 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[5479c4a2-f6c4-4fcc-82c3-198fa67290c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:48 np0005558241 systemd[1]: Started Virtual Machine qemu-115-instance-0000005d.
Dec 13 03:39:48 np0005558241 eager_feynman[338097]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:39:48 np0005558241 eager_feynman[338097]: --> All data devices are unavailable
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.714 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fc2444f8-8018-4137-9dd7-bb113d48477d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:48 np0005558241 systemd[1]: libpod-075859b9e287511c235366a97b289d82150dcc33887576d4c55d55d51743436d.scope: Deactivated successfully.
Dec 13 03:39:48 np0005558241 podman[338065]: 2025-12-13 08:39:48.730758207 +0000 UTC m=+1.577228410 container died 075859b9e287511c235366a97b289d82150dcc33887576d4c55d55d51743436d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.744 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3421fc3f-1ed8-4fcc-a52b-6cc979faa064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:48 np0005558241 NetworkManager[50376]: <info>  [1765615188.7522] manager: (tap6c5b1dd4-50): new Veth device (/org/freedesktop/NetworkManager/Devices/386)
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.752 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[65c4ee34-db3b-4707-89a7-04f8b1bb5e01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-55d724d3b2b9147e965b80f78b9d0c6b485909289fe1e5dd38841710bada7708-merged.mount: Deactivated successfully.
Dec 13 03:39:48 np0005558241 podman[338065]: 2025-12-13 08:39:48.779175549 +0000 UTC m=+1.625645742 container remove 075859b9e287511c235366a97b289d82150dcc33887576d4c55d55d51743436d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:39:48 np0005558241 systemd[1]: libpod-conmon-075859b9e287511c235366a97b289d82150dcc33887576d4c55d55d51743436d.scope: Deactivated successfully.
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.794 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4f53a573-0bae-47a4-b422-7b1178b1b6c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.798 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[09d825e4-65d2-4e12-b644-c2f2e6375418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:48 np0005558241 NetworkManager[50376]: <info>  [1765615188.8249] device (tap6c5b1dd4-50): carrier: link connected
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.831 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c272d29f-0413-43f6-8820-27b4700ae2b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.847 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[43bf0cd5-3220-4d94-b7ec-36378c2a7016]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c5b1dd4-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:fd:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776604, 'reachable_time': 23218, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338205, 'error': None, 'target': 'ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.863 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ae65dce2-9104-4e3b-a963-c7b48b011a43]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:fd77'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 776604, 'tstamp': 776604}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338219, 'error': None, 'target': 'ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.882 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed52c01-f0b2-47c6-8d0f-222d10e541ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c5b1dd4-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:fd:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776604, 'reachable_time': 23218, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338225, 'error': None, 'target': 'ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.925 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a78f4dd1-8d0f-44e5-95cb-088b8a0d4399]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.986 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0a3c83b3-f0da-4c83-b7a8-334ce5ef5859]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.988 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c5b1dd4-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.988 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.989 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c5b1dd4-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:48 np0005558241 nova_compute[248510]: 2025-12-13 08:39:48.990 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:48 np0005558241 kernel: tap6c5b1dd4-50: entered promiscuous mode
Dec 13 03:39:48 np0005558241 NetworkManager[50376]: <info>  [1765615188.9927] manager: (tap6c5b1dd4-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Dec 13 03:39:48 np0005558241 nova_compute[248510]: 2025-12-13 08:39:48.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:48.996 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c5b1dd4-50, col_values=(('external_ids', {'iface-id': 'b1e704a1-ed75-41ef-b655-bc56c3a0d60e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:48 np0005558241 nova_compute[248510]: 2025-12-13 08:39:48.997 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:48Z|00913|binding|INFO|Releasing lport b1e704a1-ed75-41ef-b655-bc56c3a0d60e from this chassis (sb_readonly=0)
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.013 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.017 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:49.018 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c5b1dd4-5a39-41f9-a98f-1f901a1328cf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c5b1dd4-5a39-41f9-a98f-1f901a1328cf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:49.019 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a72565f6-9dad-425d-b061-98de22e34955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:49.020 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/6c5b1dd4-5a39-41f9-a98f-1f901a1328cf.pid.haproxy
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 6c5b1dd4-5a39-41f9-a98f-1f901a1328cf
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 03:39:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:49.023 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf', 'env', 'PROCESS_TAG=haproxy-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c5b1dd4-5a39-41f9-a98f-1f901a1328cf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.240 248514 DEBUG nova.compute.manager [req-dd1f6388-5d43-4ad6-a305-bbb09a85fd91 req-83610825-db66-4ce0-8678-5e1d3c9b5dce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Received event network-vif-plugged-004ae3d0-5827-4393-953d-aa704915956b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.240 248514 DEBUG oslo_concurrency.lockutils [req-dd1f6388-5d43-4ad6-a305-bbb09a85fd91 req-83610825-db66-4ce0-8678-5e1d3c9b5dce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "70a00398-fa02-482c-a33e-f190895c8d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.240 248514 DEBUG oslo_concurrency.lockutils [req-dd1f6388-5d43-4ad6-a305-bbb09a85fd91 req-83610825-db66-4ce0-8678-5e1d3c9b5dce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.241 248514 DEBUG oslo_concurrency.lockutils [req-dd1f6388-5d43-4ad6-a305-bbb09a85fd91 req-83610825-db66-4ce0-8678-5e1d3c9b5dce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.241 248514 DEBUG nova.compute.manager [req-dd1f6388-5d43-4ad6-a305-bbb09a85fd91 req-83610825-db66-4ce0-8678-5e1d3c9b5dce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Processing event network-vif-plugged-004ae3d0-5827-4393-953d-aa704915956b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 03:39:49 np0005558241 podman[338277]: 2025-12-13 08:39:49.254149765 +0000 UTC m=+0.038634101 container create c742943d43c1d0c5f1352455a911f90281892082176fa160d5c881ffd7dbd02c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hofstadter, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:39:49 np0005558241 systemd[1]: Started libpod-conmon-c742943d43c1d0c5f1352455a911f90281892082176fa160d5c881ffd7dbd02c.scope.
Dec 13 03:39:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:39:49 np0005558241 podman[338277]: 2025-12-13 08:39:49.330154952 +0000 UTC m=+0.114639298 container init c742943d43c1d0c5f1352455a911f90281892082176fa160d5c881ffd7dbd02c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 03:39:49 np0005558241 podman[338277]: 2025-12-13 08:39:49.236224509 +0000 UTC m=+0.020708835 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:39:49 np0005558241 podman[338277]: 2025-12-13 08:39:49.337061534 +0000 UTC m=+0.121545860 container start c742943d43c1d0c5f1352455a911f90281892082176fa160d5c881ffd7dbd02c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hofstadter, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 03:39:49 np0005558241 podman[338277]: 2025-12-13 08:39:49.341272888 +0000 UTC m=+0.125757234 container attach c742943d43c1d0c5f1352455a911f90281892082176fa160d5c881ffd7dbd02c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hofstadter, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:39:49 np0005558241 exciting_hofstadter[338309]: 167 167
Dec 13 03:39:49 np0005558241 systemd[1]: libpod-c742943d43c1d0c5f1352455a911f90281892082176fa160d5c881ffd7dbd02c.scope: Deactivated successfully.
Dec 13 03:39:49 np0005558241 podman[338277]: 2025-12-13 08:39:49.345433031 +0000 UTC m=+0.129917357 container died c742943d43c1d0c5f1352455a911f90281892082176fa160d5c881ffd7dbd02c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hofstadter, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:39:49 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f3273eec25cc77b6dd7c90bd62fc36041eb13640fbd996f106f9a30af5a1c27a-merged.mount: Deactivated successfully.
Dec 13 03:39:49 np0005558241 podman[338277]: 2025-12-13 08:39:49.390437769 +0000 UTC m=+0.174922095 container remove c742943d43c1d0c5f1352455a911f90281892082176fa160d5c881ffd7dbd02c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 03:39:49 np0005558241 systemd[1]: libpod-conmon-c742943d43c1d0c5f1352455a911f90281892082176fa160d5c881ffd7dbd02c.scope: Deactivated successfully.
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.444 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615189.4445612, 70a00398-fa02-482c-a33e-f190895c8d28 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.445 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] VM Started (Lifecycle Event)
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.447 248514 DEBUG nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.451 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.456 248514 INFO nova.virt.libvirt.driver [-] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Instance spawned successfully.
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.457 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 03:39:49 np0005558241 podman[338374]: 2025-12-13 08:39:49.458799137 +0000 UTC m=+0.061060808 container create e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.476 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.482 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.484 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.484 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.485 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.485 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.485 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.486 248514 DEBUG nova.virt.libvirt.driver [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:39:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Dec 13 03:39:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Dec 13 03:39:49 np0005558241 systemd[1]: Started libpod-conmon-e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb.scope.
Dec 13 03:39:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.519 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.520 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615189.4471893, 70a00398-fa02-482c-a33e-f190895c8d28 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.520 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] VM Paused (Lifecycle Event)
Dec 13 03:39:49 np0005558241 podman[338374]: 2025-12-13 08:39:49.421124041 +0000 UTC m=+0.023385742 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:39:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.548 248514 DEBUG nova.storage.rbd_utils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(snap) on rbd image(764d0410-b25b-4414-843f-9f74e17a2d49) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 03:39:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d7c4b47c7bcc837568d4b5d03309c58f78909de53f1978dfe18a183e466a4e2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:39:49 np0005558241 podman[338374]: 2025-12-13 08:39:49.571362562 +0000 UTC m=+0.173624273 container init e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:39:49 np0005558241 podman[338374]: 2025-12-13 08:39:49.579066773 +0000 UTC m=+0.181328444 container start e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.588 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.590 248514 INFO nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Took 12.35 seconds to spawn the instance on the hypervisor.
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.590 248514 DEBUG nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.595 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615189.4495208, 70a00398-fa02-482c-a33e-f190895c8d28 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.596 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] VM Resumed (Lifecycle Event)
Dec 13 03:39:49 np0005558241 neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf[338393]: [NOTICE]   (338430) : New worker (338432) forked
Dec 13 03:39:49 np0005558241 neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf[338393]: [NOTICE]   (338430) : Loading success.
Dec 13 03:39:49 np0005558241 podman[338400]: 2025-12-13 08:39:49.618343929 +0000 UTC m=+0.060539215 container create 9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.630 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.633 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:39:49 np0005558241 systemd[1]: Started libpod-conmon-9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0.scope.
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.664 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.677 248514 INFO nova.compute.manager [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Took 14.12 seconds to build instance.
Dec 13 03:39:49 np0005558241 podman[338400]: 2025-12-13 08:39:49.592317363 +0000 UTC m=+0.034512659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:39:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:39:49 np0005558241 nova_compute[248510]: 2025-12-13 08:39:49.700 248514 DEBUG oslo_concurrency.lockutils [None req-f438925f-ca2f-4ff4-80ef-8ac501ea9ef9 ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:39:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2600ad51e38076428193a4f4df35ff6c1755749176ffe170315af33ae1e3ffbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:39:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2600ad51e38076428193a4f4df35ff6c1755749176ffe170315af33ae1e3ffbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:39:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2600ad51e38076428193a4f4df35ff6c1755749176ffe170315af33ae1e3ffbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:39:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2600ad51e38076428193a4f4df35ff6c1755749176ffe170315af33ae1e3ffbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:39:49 np0005558241 podman[338400]: 2025-12-13 08:39:49.725954401 +0000 UTC m=+0.168149707 container init 9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bohr, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:39:49 np0005558241 podman[338400]: 2025-12-13 08:39:49.732427952 +0000 UTC m=+0.174623238 container start 9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 03:39:49 np0005558241 podman[338400]: 2025-12-13 08:39:49.737119699 +0000 UTC m=+0.179314985 container attach 9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bohr, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 03:39:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2302: 321 pgs: 321 active+clean; 332 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.0 MiB/s wr, 68 op/s
Dec 13 03:39:50 np0005558241 charming_bohr[338446]: {
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:    "0": [
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:        {
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "devices": [
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "/dev/loop3"
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            ],
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "lv_name": "ceph_lv0",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "lv_size": "21470642176",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "name": "ceph_lv0",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "tags": {
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.cluster_name": "ceph",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.crush_device_class": "",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.encrypted": "0",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.objectstore": "bluestore",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.osd_id": "0",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.type": "block",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.vdo": "0",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.with_tpm": "0"
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            },
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "type": "block",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "vg_name": "ceph_vg0"
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:        }
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:    ],
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:    "1": [
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:        {
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "devices": [
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "/dev/loop4"
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            ],
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "lv_name": "ceph_lv1",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "lv_size": "21470642176",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "name": "ceph_lv1",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "tags": {
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.cluster_name": "ceph",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.crush_device_class": "",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.encrypted": "0",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.objectstore": "bluestore",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.osd_id": "1",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.type": "block",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.vdo": "0",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.with_tpm": "0"
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            },
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "type": "block",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "vg_name": "ceph_vg1"
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:        }
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:    ],
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:    "2": [
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:        {
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "devices": [
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "/dev/loop5"
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            ],
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "lv_name": "ceph_lv2",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "lv_size": "21470642176",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "name": "ceph_lv2",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "tags": {
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.cluster_name": "ceph",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.crush_device_class": "",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.encrypted": "0",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.objectstore": "bluestore",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.osd_id": "2",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.type": "block",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.vdo": "0",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:                "ceph.with_tpm": "0"
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            },
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "type": "block",
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:            "vg_name": "ceph_vg2"
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:        }
Dec 13 03:39:50 np0005558241 charming_bohr[338446]:    ]
Dec 13 03:39:50 np0005558241 charming_bohr[338446]: }
Dec 13 03:39:50 np0005558241 systemd[1]: libpod-9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0.scope: Deactivated successfully.
Dec 13 03:39:50 np0005558241 conmon[338446]: conmon 9e23823f88c288c2deb0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0.scope/container/memory.events
Dec 13 03:39:50 np0005558241 podman[338400]: 2025-12-13 08:39:50.080130657 +0000 UTC m=+0.522325973 container died 9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:39:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2600ad51e38076428193a4f4df35ff6c1755749176ffe170315af33ae1e3ffbe-merged.mount: Deactivated successfully.
Dec 13 03:39:50 np0005558241 podman[338400]: 2025-12-13 08:39:50.13017992 +0000 UTC m=+0.572375206 container remove 9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 03:39:50 np0005558241 systemd[1]: libpod-conmon-9e23823f88c288c2deb0d08578268f16e60deb1ced855a986114a673f32fbfe0.scope: Deactivated successfully.
Dec 13 03:39:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Dec 13 03:39:50 np0005558241 nova_compute[248510]: 2025-12-13 08:39:50.552 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:39:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Dec 13 03:39:50 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Dec 13 03:39:50 np0005558241 podman[338529]: 2025-12-13 08:39:50.649868316 +0000 UTC m=+0.053983602 container create 88642009bb7c1fb24f46ce1d97a59311419d75187ca772e6c934984c0df03412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_wing, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 03:39:50 np0005558241 systemd[1]: Started libpod-conmon-88642009bb7c1fb24f46ce1d97a59311419d75187ca772e6c934984c0df03412.scope.
Dec 13 03:39:50 np0005558241 podman[338529]: 2025-12-13 08:39:50.625321456 +0000 UTC m=+0.029436772 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:39:50 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:39:50 np0005558241 podman[338529]: 2025-12-13 08:39:50.745846329 +0000 UTC m=+0.149961645 container init 88642009bb7c1fb24f46ce1d97a59311419d75187ca772e6c934984c0df03412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_wing, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:39:50 np0005558241 podman[338529]: 2025-12-13 08:39:50.75392154 +0000 UTC m=+0.158036826 container start 88642009bb7c1fb24f46ce1d97a59311419d75187ca772e6c934984c0df03412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_wing, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:39:50 np0005558241 podman[338529]: 2025-12-13 08:39:50.758232107 +0000 UTC m=+0.162347533 container attach 88642009bb7c1fb24f46ce1d97a59311419d75187ca772e6c934984c0df03412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_wing, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:39:50 np0005558241 romantic_wing[338545]: 167 167
Dec 13 03:39:50 np0005558241 systemd[1]: libpod-88642009bb7c1fb24f46ce1d97a59311419d75187ca772e6c934984c0df03412.scope: Deactivated successfully.
Dec 13 03:39:50 np0005558241 podman[338529]: 2025-12-13 08:39:50.763161759 +0000 UTC m=+0.167277065 container died 88642009bb7c1fb24f46ce1d97a59311419d75187ca772e6c934984c0df03412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 03:39:50 np0005558241 nova_compute[248510]: 2025-12-13 08:39:50.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:39:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-cdde3396188335eb6dbd89cf1e4f56051c67d6068b35bcf917d12fcd2dd91be7-merged.mount: Deactivated successfully.
Dec 13 03:39:50 np0005558241 podman[338529]: 2025-12-13 08:39:50.808358572 +0000 UTC m=+0.212473858 container remove 88642009bb7c1fb24f46ce1d97a59311419d75187ca772e6c934984c0df03412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:39:50 np0005558241 systemd[1]: libpod-conmon-88642009bb7c1fb24f46ce1d97a59311419d75187ca772e6c934984c0df03412.scope: Deactivated successfully.
Dec 13 03:39:50 np0005558241 nova_compute[248510]: 2025-12-13 08:39:50.846 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:39:50 np0005558241 podman[338568]: 2025-12-13 08:39:50.989271065 +0000 UTC m=+0.041926983 container create 15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:39:51 np0005558241 systemd[1]: Started libpod-conmon-15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27.scope.
Dec 13 03:39:51 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:39:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e807f731104d8fb8f9798c1bb9330d40381422564891f2dac416f29eac2a623/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:39:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e807f731104d8fb8f9798c1bb9330d40381422564891f2dac416f29eac2a623/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:39:51 np0005558241 podman[338568]: 2025-12-13 08:39:50.971107273 +0000 UTC m=+0.023763211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:39:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e807f731104d8fb8f9798c1bb9330d40381422564891f2dac416f29eac2a623/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:39:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e807f731104d8fb8f9798c1bb9330d40381422564891f2dac416f29eac2a623/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:39:51 np0005558241 podman[338568]: 2025-12-13 08:39:51.080647354 +0000 UTC m=+0.133303292 container init 15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_morse, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:39:51 np0005558241 podman[338568]: 2025-12-13 08:39:51.088731745 +0000 UTC m=+0.141387663 container start 15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_morse, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:39:51 np0005558241 podman[338568]: 2025-12-13 08:39:51.092258502 +0000 UTC m=+0.144914440 container attach 15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_morse, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:39:51 np0005558241 nova_compute[248510]: 2025-12-13 08:39:51.479 248514 DEBUG nova.compute.manager [req-8f6de1e3-85e1-4f5f-99a1-9e55c02e0544 req-26be639b-f0bd-4e78-95d2-0c6a1152b9dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Received event network-vif-plugged-004ae3d0-5827-4393-953d-aa704915956b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:39:51 np0005558241 nova_compute[248510]: 2025-12-13 08:39:51.480 248514 DEBUG oslo_concurrency.lockutils [req-8f6de1e3-85e1-4f5f-99a1-9e55c02e0544 req-26be639b-f0bd-4e78-95d2-0c6a1152b9dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "70a00398-fa02-482c-a33e-f190895c8d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:39:51 np0005558241 nova_compute[248510]: 2025-12-13 08:39:51.480 248514 DEBUG oslo_concurrency.lockutils [req-8f6de1e3-85e1-4f5f-99a1-9e55c02e0544 req-26be639b-f0bd-4e78-95d2-0c6a1152b9dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:39:51 np0005558241 nova_compute[248510]: 2025-12-13 08:39:51.480 248514 DEBUG oslo_concurrency.lockutils [req-8f6de1e3-85e1-4f5f-99a1-9e55c02e0544 req-26be639b-f0bd-4e78-95d2-0c6a1152b9dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:39:51 np0005558241 nova_compute[248510]: 2025-12-13 08:39:51.480 248514 DEBUG nova.compute.manager [req-8f6de1e3-85e1-4f5f-99a1-9e55c02e0544 req-26be639b-f0bd-4e78-95d2-0c6a1152b9dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] No waiting events found dispatching network-vif-plugged-004ae3d0-5827-4393-953d-aa704915956b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:39:51 np0005558241 nova_compute[248510]: 2025-12-13 08:39:51.480 248514 WARNING nova.compute.manager [req-8f6de1e3-85e1-4f5f-99a1-9e55c02e0544 req-26be639b-f0bd-4e78-95d2-0c6a1152b9dd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Received unexpected event network-vif-plugged-004ae3d0-5827-4393-953d-aa704915956b for instance with vm_state active and task_state None.
Dec 13 03:39:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:39:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2304: 321 pgs: 321 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 86 op/s
Dec 13 03:39:51 np0005558241 lvm[338662]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:39:51 np0005558241 lvm[338665]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:39:51 np0005558241 lvm[338662]: VG ceph_vg0 finished
Dec 13 03:39:51 np0005558241 lvm[338665]: VG ceph_vg2 finished
Dec 13 03:39:51 np0005558241 lvm[338664]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:39:51 np0005558241 lvm[338664]: VG ceph_vg1 finished
Dec 13 03:39:51 np0005558241 pensive_morse[338584]: {}
Dec 13 03:39:51 np0005558241 systemd[1]: libpod-15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27.scope: Deactivated successfully.
Dec 13 03:39:51 np0005558241 podman[338568]: 2025-12-13 08:39:51.909911328 +0000 UTC m=+0.962567276 container died 15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Dec 13 03:39:51 np0005558241 systemd[1]: libpod-15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27.scope: Consumed 1.368s CPU time.
Dec 13 03:39:51 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4e807f731104d8fb8f9798c1bb9330d40381422564891f2dac416f29eac2a623-merged.mount: Deactivated successfully.
Dec 13 03:39:51 np0005558241 podman[338568]: 2025-12-13 08:39:51.958962306 +0000 UTC m=+1.011618214 container remove 15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_morse, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec 13 03:39:51 np0005558241 systemd[1]: libpod-conmon-15c3974bc3b50a5476067265c3d38611bca620879b1d67d905ed2f9c2c978e27.scope: Deactivated successfully.
Dec 13 03:39:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:39:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:39:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:39:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:39:52 np0005558241 nova_compute[248510]: 2025-12-13 08:39:52.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:39:52 np0005558241 nova_compute[248510]: 2025-12-13 08:39:52.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:39:52 np0005558241 nova_compute[248510]: 2025-12-13 08:39:52.895 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:39:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:39:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:39:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2305: 321 pgs: 321 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.7 MiB/s wr, 134 op/s
Dec 13 03:39:54 np0005558241 nova_compute[248510]: 2025-12-13 08:39:54.198 248514 INFO nova.virt.libvirt.driver [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Snapshot image upload complete#033[00m
Dec 13 03:39:54 np0005558241 nova_compute[248510]: 2025-12-13 08:39:54.199 248514 DEBUG nova.compute.manager [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:54 np0005558241 nova_compute[248510]: 2025-12-13 08:39:54.604 248514 INFO nova.compute.manager [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Shelve offloading#033[00m
Dec 13 03:39:54 np0005558241 nova_compute[248510]: 2025-12-13 08:39:54.610 248514 INFO nova.virt.libvirt.driver [-] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Instance destroyed successfully.#033[00m
Dec 13 03:39:54 np0005558241 nova_compute[248510]: 2025-12-13 08:39:54.611 248514 DEBUG nova.compute.manager [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:54 np0005558241 nova_compute[248510]: 2025-12-13 08:39:54.613 248514 DEBUG oslo_concurrency.lockutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:39:54 np0005558241 nova_compute[248510]: 2025-12-13 08:39:54.613 248514 DEBUG oslo_concurrency.lockutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquired lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:39:54 np0005558241 nova_compute[248510]: 2025-12-13 08:39:54.613 248514 DEBUG nova.network.neutron [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:39:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:55.097 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:55.419 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:55.421 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:55.422 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.556 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.693 248514 DEBUG oslo_concurrency.lockutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquiring lock "70a00398-fa02-482c-a33e-f190895c8d28" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.693 248514 DEBUG oslo_concurrency.lockutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.693 248514 DEBUG oslo_concurrency.lockutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquiring lock "70a00398-fa02-482c-a33e-f190895c8d28-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.694 248514 DEBUG oslo_concurrency.lockutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.694 248514 DEBUG oslo_concurrency.lockutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.695 248514 INFO nova.compute.manager [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Terminating instance#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.696 248514 DEBUG nova.compute.manager [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:39:55 np0005558241 kernel: tap004ae3d0-58 (unregistering): left promiscuous mode
Dec 13 03:39:55 np0005558241 NetworkManager[50376]: <info>  [1765615195.7448] device (tap004ae3d0-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.764 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:55Z|00914|binding|INFO|Releasing lport 004ae3d0-5827-4393-953d-aa704915956b from this chassis (sb_readonly=0)
Dec 13 03:39:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:55Z|00915|binding|INFO|Setting lport 004ae3d0-5827-4393-953d-aa704915956b down in Southbound
Dec 13 03:39:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:39:55Z|00916|binding|INFO|Removing iface tap004ae3d0-58 ovn-installed in OVS
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.767 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:39:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2306: 321 pgs: 321 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.7 MiB/s wr, 195 op/s
Dec 13 03:39:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:55.780 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:7b:f4 10.100.0.4'], port_security=['fa:16:3e:b0:7b:f4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '70a00398-fa02-482c-a33e-f190895c8d28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '791fc8cca65d4bfd9d9f2f19018d60fb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '480cd559-e8d2-47d2-86f5-7cc9a5743178', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e0ff201-d141-4a6c-ab2a-8538bea7e081, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=004ae3d0-5827-4393-953d-aa704915956b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:39:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:55.781 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 004ae3d0-5827-4393-953d-aa704915956b in datapath 6c5b1dd4-5a39-41f9-a98f-1f901a1328cf unbound from our chassis#033[00m
Dec 13 03:39:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:55.783 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c5b1dd4-5a39-41f9-a98f-1f901a1328cf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:39:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:55.785 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[518d9150-f100-4f46-a1c7-7bea45b6ba93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:55.786 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf namespace which is not needed anymore#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:55 np0005558241 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Dec 13 03:39:55 np0005558241 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005d.scope: Consumed 7.154s CPU time.
Dec 13 03:39:55 np0005558241 systemd-machined[210538]: Machine qemu-115-instance-0000005d terminated.
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.849 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.926 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:55 np0005558241 neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf[338393]: [NOTICE]   (338430) : haproxy version is 2.8.14-c23fe91
Dec 13 03:39:55 np0005558241 neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf[338393]: [NOTICE]   (338430) : path to executable is /usr/sbin/haproxy
Dec 13 03:39:55 np0005558241 neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf[338393]: [WARNING]  (338430) : Exiting Master process...
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.936 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:55 np0005558241 neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf[338393]: [ALERT]    (338430) : Current worker (338432) exited with code 143 (Terminated)
Dec 13 03:39:55 np0005558241 neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf[338393]: [WARNING]  (338430) : All workers exited. Exiting... (0)
Dec 13 03:39:55 np0005558241 systemd[1]: libpod-e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb.scope: Deactivated successfully.
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.943 248514 INFO nova.virt.libvirt.driver [-] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Instance destroyed successfully.#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.944 248514 DEBUG nova.objects.instance [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lazy-loading 'resources' on Instance uuid 70a00398-fa02-482c-a33e-f190895c8d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:39:55 np0005558241 podman[338731]: 2025-12-13 08:39:55.946452669 +0000 UTC m=+0.050989037 container died e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.958 248514 DEBUG nova.virt.libvirt.vif [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:39:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-764535591',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-764535591',id=93,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:39:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='791fc8cca65d4bfd9d9f2f19018d60fb',ramdisk_id='',reservation_id='r-m1z94rmm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='te
mpest-ServerTagsTestJSON-1097359454',owner_user_name='tempest-ServerTagsTestJSON-1097359454-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:39:49Z,user_data=None,user_id='ab92f76b5ae549d8bae02bb7911221d6',uuid=70a00398-fa02-482c-a33e-f190895c8d28,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.958 248514 DEBUG nova.network.os_vif_util [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Converting VIF {"id": "004ae3d0-5827-4393-953d-aa704915956b", "address": "fa:16:3e:b0:7b:f4", "network": {"id": "6c5b1dd4-5a39-41f9-a98f-1f901a1328cf", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-504776780-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "791fc8cca65d4bfd9d9f2f19018d60fb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap004ae3d0-58", "ovs_interfaceid": "004ae3d0-5827-4393-953d-aa704915956b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.959 248514 DEBUG nova.network.os_vif_util [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7b:f4,bridge_name='br-int',has_traffic_filtering=True,id=004ae3d0-5827-4393-953d-aa704915956b,network=Network(6c5b1dd4-5a39-41f9-a98f-1f901a1328cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap004ae3d0-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.959 248514 DEBUG os_vif [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7b:f4,bridge_name='br-int',has_traffic_filtering=True,id=004ae3d0-5827-4393-953d-aa704915956b,network=Network(6c5b1dd4-5a39-41f9-a98f-1f901a1328cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap004ae3d0-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.961 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.962 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap004ae3d0-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.963 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.966 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.968 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:55 np0005558241 nova_compute[248510]: 2025-12-13 08:39:55.970 248514 INFO os_vif [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:7b:f4,bridge_name='br-int',has_traffic_filtering=True,id=004ae3d0-5827-4393-953d-aa704915956b,network=Network(6c5b1dd4-5a39-41f9-a98f-1f901a1328cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap004ae3d0-58')#033[00m
Dec 13 03:39:55 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb-userdata-shm.mount: Deactivated successfully.
Dec 13 03:39:55 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2d7c4b47c7bcc837568d4b5d03309c58f78909de53f1978dfe18a183e466a4e2-merged.mount: Deactivated successfully.
Dec 13 03:39:55 np0005558241 podman[338731]: 2025-12-13 08:39:55.994344309 +0000 UTC m=+0.098880667 container cleanup e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 03:39:56 np0005558241 systemd[1]: libpod-conmon-e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb.scope: Deactivated successfully.
Dec 13 03:39:56 np0005558241 podman[338785]: 2025-12-13 08:39:56.083877432 +0000 UTC m=+0.062627146 container remove e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:39:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:56.090 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[92261bfe-e782-49c2-a1c6-6c84286b07e4]: (4, ('Sat Dec 13 08:39:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf (e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb)\ne243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb\nSat Dec 13 08:39:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf (e243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb)\ne243679e8a4de11f89279de0415a5aab5f771fec3b503d5ce3201ce8d11d58cb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:56.092 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f2039ab3-9697-4eaa-9600-f8deab0867cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:56.093 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c5b1dd4-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:56 np0005558241 nova_compute[248510]: 2025-12-13 08:39:56.095 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:56 np0005558241 kernel: tap6c5b1dd4-50: left promiscuous mode
Dec 13 03:39:56 np0005558241 nova_compute[248510]: 2025-12-13 08:39:56.115 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:56.119 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d4bf7765-05b7-4813-9f20-4f52e50d1b05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:56.132 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[71daf30b-8761-45e5-8d7a-9543ded72dfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:56.133 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[09193c50-6b10-41e3-bf1e-52527b27d70b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:56.152 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2abc9b2a-62de-49fa-b0d7-5eb2645459aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 776595, 'reachable_time': 29728, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338808, 'error': None, 'target': 'ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:56 np0005558241 systemd[1]: run-netns-ovnmeta\x2d6c5b1dd4\x2d5a39\x2d41f9\x2da98f\x2d1f901a1328cf.mount: Deactivated successfully.
Dec 13 03:39:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:56.157 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c5b1dd4-5a39-41f9-a98f-1f901a1328cf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:39:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:39:56.157 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[9642dff8-076b-4e69-9c33-f18defec409b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:39:56 np0005558241 nova_compute[248510]: 2025-12-13 08:39:56.246 248514 INFO nova.virt.libvirt.driver [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Deleting instance files /var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28_del#033[00m
Dec 13 03:39:56 np0005558241 nova_compute[248510]: 2025-12-13 08:39:56.247 248514 INFO nova.virt.libvirt.driver [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Deletion of /var/lib/nova/instances/70a00398-fa02-482c-a33e-f190895c8d28_del complete#033[00m
Dec 13 03:39:56 np0005558241 nova_compute[248510]: 2025-12-13 08:39:56.288 248514 DEBUG nova.compute.manager [req-c8752636-70f4-4da3-9a6f-7190302aed3d req-b80573d0-2bb3-4b6f-9be4-24cc850225ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Received event network-vif-unplugged-004ae3d0-5827-4393-953d-aa704915956b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:39:56 np0005558241 nova_compute[248510]: 2025-12-13 08:39:56.289 248514 DEBUG oslo_concurrency.lockutils [req-c8752636-70f4-4da3-9a6f-7190302aed3d req-b80573d0-2bb3-4b6f-9be4-24cc850225ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "70a00398-fa02-482c-a33e-f190895c8d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:56 np0005558241 nova_compute[248510]: 2025-12-13 08:39:56.289 248514 DEBUG oslo_concurrency.lockutils [req-c8752636-70f4-4da3-9a6f-7190302aed3d req-b80573d0-2bb3-4b6f-9be4-24cc850225ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:56 np0005558241 nova_compute[248510]: 2025-12-13 08:39:56.290 248514 DEBUG oslo_concurrency.lockutils [req-c8752636-70f4-4da3-9a6f-7190302aed3d req-b80573d0-2bb3-4b6f-9be4-24cc850225ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:56 np0005558241 nova_compute[248510]: 2025-12-13 08:39:56.290 248514 DEBUG nova.compute.manager [req-c8752636-70f4-4da3-9a6f-7190302aed3d req-b80573d0-2bb3-4b6f-9be4-24cc850225ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] No waiting events found dispatching network-vif-unplugged-004ae3d0-5827-4393-953d-aa704915956b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:39:56 np0005558241 nova_compute[248510]: 2025-12-13 08:39:56.290 248514 DEBUG nova.compute.manager [req-c8752636-70f4-4da3-9a6f-7190302aed3d req-b80573d0-2bb3-4b6f-9be4-24cc850225ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Received event network-vif-unplugged-004ae3d0-5827-4393-953d-aa704915956b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:39:56 np0005558241 nova_compute[248510]: 2025-12-13 08:39:56.327 248514 INFO nova.compute.manager [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Took 0.63 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:39:56 np0005558241 nova_compute[248510]: 2025-12-13 08:39:56.328 248514 DEBUG oslo.service.loopingcall [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:39:56 np0005558241 nova_compute[248510]: 2025-12-13 08:39:56.328 248514 DEBUG nova.compute.manager [-] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:39:56 np0005558241 nova_compute[248510]: 2025-12-13 08:39:56.328 248514 DEBUG nova.network.neutron [-] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:39:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:39:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Dec 13 03:39:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Dec 13 03:39:56 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Dec 13 03:39:57 np0005558241 nova_compute[248510]: 2025-12-13 08:39:57.071 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615182.0693662, 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:39:57 np0005558241 nova_compute[248510]: 2025-12-13 08:39:57.072 248514 INFO nova.compute.manager [-] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:39:57 np0005558241 nova_compute[248510]: 2025-12-13 08:39:57.098 248514 DEBUG nova.compute.manager [None req-77d70a0e-a256-4cc1-b7b9-4d611b8cac16 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:39:57 np0005558241 nova_compute[248510]: 2025-12-13 08:39:57.103 248514 DEBUG nova.compute.manager [None req-77d70a0e-a256-4cc1-b7b9-4d611b8cac16 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:39:57 np0005558241 nova_compute[248510]: 2025-12-13 08:39:57.124 248514 INFO nova.compute.manager [None req-77d70a0e-a256-4cc1-b7b9-4d611b8cac16 - - - - - -] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] During sync_power_state the instance has a pending task (shelving_offloading). Skip.#033[00m
Dec 13 03:39:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2308: 321 pgs: 321 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 723 KiB/s wr, 138 op/s
Dec 13 03:39:58 np0005558241 nova_compute[248510]: 2025-12-13 08:39:58.192 248514 DEBUG nova.network.neutron [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Updating instance_info_cache with network_info: [{"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:39:58 np0005558241 nova_compute[248510]: 2025-12-13 08:39:58.212 248514 DEBUG oslo_concurrency.lockutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Releasing lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:39:58 np0005558241 nova_compute[248510]: 2025-12-13 08:39:58.524 248514 DEBUG nova.compute.manager [req-2c7683eb-f9c4-4615-9107-a01a84eec28b req-787c7991-32b2-4688-870b-7d94cb304592 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Received event network-vif-plugged-004ae3d0-5827-4393-953d-aa704915956b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:39:58 np0005558241 nova_compute[248510]: 2025-12-13 08:39:58.524 248514 DEBUG oslo_concurrency.lockutils [req-2c7683eb-f9c4-4615-9107-a01a84eec28b req-787c7991-32b2-4688-870b-7d94cb304592 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "70a00398-fa02-482c-a33e-f190895c8d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:58 np0005558241 nova_compute[248510]: 2025-12-13 08:39:58.524 248514 DEBUG oslo_concurrency.lockutils [req-2c7683eb-f9c4-4615-9107-a01a84eec28b req-787c7991-32b2-4688-870b-7d94cb304592 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:58 np0005558241 nova_compute[248510]: 2025-12-13 08:39:58.525 248514 DEBUG oslo_concurrency.lockutils [req-2c7683eb-f9c4-4615-9107-a01a84eec28b req-787c7991-32b2-4688-870b-7d94cb304592 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:58 np0005558241 nova_compute[248510]: 2025-12-13 08:39:58.525 248514 DEBUG nova.compute.manager [req-2c7683eb-f9c4-4615-9107-a01a84eec28b req-787c7991-32b2-4688-870b-7d94cb304592 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] No waiting events found dispatching network-vif-plugged-004ae3d0-5827-4393-953d-aa704915956b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:39:58 np0005558241 nova_compute[248510]: 2025-12-13 08:39:58.525 248514 WARNING nova.compute.manager [req-2c7683eb-f9c4-4615-9107-a01a84eec28b req-787c7991-32b2-4688-870b-7d94cb304592 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Received unexpected event network-vif-plugged-004ae3d0-5827-4393-953d-aa704915956b for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:39:58 np0005558241 nova_compute[248510]: 2025-12-13 08:39:58.776 248514 DEBUG nova.network.neutron [-] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:39:58 np0005558241 nova_compute[248510]: 2025-12-13 08:39:58.811 248514 INFO nova.compute.manager [-] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Took 2.48 seconds to deallocate network for instance.#033[00m
Dec 13 03:39:58 np0005558241 nova_compute[248510]: 2025-12-13 08:39:58.865 248514 DEBUG oslo_concurrency.lockutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:58 np0005558241 nova_compute[248510]: 2025-12-13 08:39:58.866 248514 DEBUG oslo_concurrency.lockutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.003 248514 DEBUG oslo_concurrency.processutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:39:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/298707572' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.598 248514 DEBUG oslo_concurrency.processutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.607 248514 DEBUG nova.compute.provider_tree [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.643 248514 DEBUG nova.scheduler.client.report [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.673 248514 DEBUG oslo_concurrency.lockutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.700 248514 INFO nova.scheduler.client.report [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Deleted allocations for instance 70a00398-fa02-482c-a33e-f190895c8d28#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:39:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2309: 321 pgs: 321 active+clean; 313 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.2 KiB/s wr, 146 op/s
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.813 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.814 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.814 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.814 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.815 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.864 248514 DEBUG oslo_concurrency.lockutils [None req-f98b46ef-d7ec-45a3-9fe8-57bd67f2f01e ab92f76b5ae549d8bae02bb7911221d6 791fc8cca65d4bfd9d9f2f19018d60fb - - default default] Lock "70a00398-fa02-482c-a33e-f190895c8d28" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.873 248514 INFO nova.virt.libvirt.driver [-] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Instance destroyed successfully.#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.875 248514 DEBUG nova.objects.instance [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'resources' on Instance uuid 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.896 248514 DEBUG nova.virt.libvirt.vif [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:39:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1902812623',display_name='tempest-ServerActionsTestOtherB-server-1902812623',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1902812623',id=92,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:39:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-8adwm7jj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member',shelved_at='2025-12-13T08:39:54.199494',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='764d0410-b25b-4414-843f-9f74e17a2d49'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:39:43Z,user_data=None,user_id='7507939da64e4320a1c6f389d0fc9045',uuid=2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.897 248514 DEBUG nova.network.os_vif_util [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.898 248514 DEBUG nova.network.os_vif_util [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ee:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac5a6aec-ff77-4185-a9cb-f95e9ee9461a,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5a6aec-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.899 248514 DEBUG os_vif [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ee:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac5a6aec-ff77-4185-a9cb-f95e9ee9461a,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5a6aec-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.902 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.903 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac5a6aec-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.908 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.910 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:39:59 np0005558241 nova_compute[248510]: 2025-12-13 08:39:59.914 248514 INFO os_vif [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:ee:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac5a6aec-ff77-4185-a9cb-f95e9ee9461a,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac5a6aec-ff')#033[00m
Dec 13 03:40:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:40:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1602304390' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.421 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.521 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.521 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.524 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.524 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.527 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.527 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.706 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.708 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3405MB free_disk=59.87046145275235GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.708 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.708 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.791 248514 DEBUG nova.compute.manager [req-804fad03-454d-4a94-b3c2-934cbf4c9b16 req-e3a6ef46-0a14-410e-9c87-b76ef3bd45d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Received event network-vif-deleted-004ae3d0-5827-4393-953d-aa704915956b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.792 248514 DEBUG nova.compute.manager [req-804fad03-454d-4a94-b3c2-934cbf4c9b16 req-e3a6ef46-0a14-410e-9c87-b76ef3bd45d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Received event network-changed-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.792 248514 DEBUG nova.compute.manager [req-804fad03-454d-4a94-b3c2-934cbf4c9b16 req-e3a6ef46-0a14-410e-9c87-b76ef3bd45d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Refreshing instance network info cache due to event network-changed-ac5a6aec-ff77-4185-a9cb-f95e9ee9461a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.792 248514 DEBUG oslo_concurrency.lockutils [req-804fad03-454d-4a94-b3c2-934cbf4c9b16 req-e3a6ef46-0a14-410e-9c87-b76ef3bd45d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.792 248514 DEBUG oslo_concurrency.lockutils [req-804fad03-454d-4a94-b3c2-934cbf4c9b16 req-e3a6ef46-0a14-410e-9c87-b76ef3bd45d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.793 248514 DEBUG nova.network.neutron [req-804fad03-454d-4a94-b3c2-934cbf4c9b16 req-e3a6ef46-0a14-410e-9c87-b76ef3bd45d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Refreshing network info cache for port ac5a6aec-ff77-4185-a9cb-f95e9ee9461a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.806 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9b486227-b98c-4393-9a3c-aae3e3c419a8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.807 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 658e5f04-399b-4a8a-8680-5ae9717949c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.807 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.807 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.807 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.851 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:00 np0005558241 nova_compute[248510]: 2025-12-13 08:40:00.908 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:40:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:40:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3083906428' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:40:01 np0005558241 nova_compute[248510]: 2025-12-13 08:40:01.466 248514 INFO nova.virt.libvirt.driver [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Deleting instance files /var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_del#033[00m
Dec 13 03:40:01 np0005558241 nova_compute[248510]: 2025-12-13 08:40:01.467 248514 INFO nova.virt.libvirt.driver [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Deletion of /var/lib/nova/instances/2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be_del complete#033[00m
Dec 13 03:40:01 np0005558241 nova_compute[248510]: 2025-12-13 08:40:01.480 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:40:01 np0005558241 nova_compute[248510]: 2025-12-13 08:40:01.485 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:40:01 np0005558241 nova_compute[248510]: 2025-12-13 08:40:01.508 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:40:01 np0005558241 nova_compute[248510]: 2025-12-13 08:40:01.550 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:40:01 np0005558241 nova_compute[248510]: 2025-12-13 08:40:01.550 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:40:01 np0005558241 nova_compute[248510]: 2025-12-13 08:40:01.608 248514 INFO nova.scheduler.client.report [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Deleted allocations for instance 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be#033[00m
Dec 13 03:40:01 np0005558241 nova_compute[248510]: 2025-12-13 08:40:01.684 248514 DEBUG oslo_concurrency.lockutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:01 np0005558241 nova_compute[248510]: 2025-12-13 08:40:01.684 248514 DEBUG oslo_concurrency.lockutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:40:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2310: 321 pgs: 321 active+clean; 293 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 KiB/s wr, 144 op/s
Dec 13 03:40:01 np0005558241 nova_compute[248510]: 2025-12-13 08:40:01.792 248514 DEBUG oslo_concurrency.processutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:40:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:40:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/771570506' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:40:02 np0005558241 nova_compute[248510]: 2025-12-13 08:40:02.408 248514 DEBUG oslo_concurrency.processutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:40:02 np0005558241 nova_compute[248510]: 2025-12-13 08:40:02.415 248514 DEBUG nova.compute.provider_tree [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:40:02 np0005558241 nova_compute[248510]: 2025-12-13 08:40:02.439 248514 DEBUG nova.scheduler.client.report [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:40:02 np0005558241 nova_compute[248510]: 2025-12-13 08:40:02.477 248514 DEBUG oslo_concurrency.lockutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:40:02 np0005558241 nova_compute[248510]: 2025-12-13 08:40:02.552 248514 DEBUG oslo_concurrency.lockutils [None req-badbb9f0-b336-46ca-92bb-db8cd6023902 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 20.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:40:02 np0005558241 nova_compute[248510]: 2025-12-13 08:40:02.964 248514 DEBUG nova.network.neutron [req-804fad03-454d-4a94-b3c2-934cbf4c9b16 req-e3a6ef46-0a14-410e-9c87-b76ef3bd45d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Updated VIF entry in instance network info cache for port ac5a6aec-ff77-4185-a9cb-f95e9ee9461a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:40:02 np0005558241 nova_compute[248510]: 2025-12-13 08:40:02.965 248514 DEBUG nova.network.neutron [req-804fad03-454d-4a94-b3c2-934cbf4c9b16 req-e3a6ef46-0a14-410e-9c87-b76ef3bd45d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be] Updating instance_info_cache with network_info: [{"id": "ac5a6aec-ff77-4185-a9cb-f95e9ee9461a", "address": "fa:16:3e:1f:ee:c9", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": null, "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapac5a6aec-ff", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:40:02 np0005558241 nova_compute[248510]: 2025-12-13 08:40:02.998 248514 DEBUG oslo_concurrency.lockutils [req-804fad03-454d-4a94-b3c2-934cbf4c9b16 req-e3a6ef46-0a14-410e-9c87-b76ef3bd45d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2b8fcd61-b63b-4b4e-86e0-8a57bbcf97be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:40:03 np0005558241 nova_compute[248510]: 2025-12-13 08:40:03.550 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:40:03 np0005558241 nova_compute[248510]: 2025-12-13 08:40:03.550 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:40:03 np0005558241 nova_compute[248510]: 2025-12-13 08:40:03.551 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:40:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:40:03Z|00917|binding|INFO|Releasing lport 6c33e57f-b143-4ed7-a271-dc33d6e81057 from this chassis (sb_readonly=0)
Dec 13 03:40:03 np0005558241 nova_compute[248510]: 2025-12-13 08:40:03.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2311: 321 pgs: 321 active+clean; 284 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.2 KiB/s wr, 102 op/s
Dec 13 03:40:03 np0005558241 podman[338919]: 2025-12-13 08:40:03.996984454 +0000 UTC m=+0.079557467 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 13 03:40:04 np0005558241 podman[338917]: 2025-12-13 08:40:04.004943002 +0000 UTC m=+0.086892419 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:40:04 np0005558241 podman[338918]: 2025-12-13 08:40:04.004978263 +0000 UTC m=+0.085611078 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:40:04 np0005558241 nova_compute[248510]: 2025-12-13 08:40:04.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:05 np0005558241 nova_compute[248510]: 2025-12-13 08:40:05.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:40:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2312: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 12 KiB/s wr, 74 op/s
Dec 13 03:40:05 np0005558241 nova_compute[248510]: 2025-12-13 08:40:05.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:40:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2313: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 11 KiB/s wr, 67 op/s
Dec 13 03:40:07 np0005558241 nova_compute[248510]: 2025-12-13 08:40:07.957 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:40:07 np0005558241 nova_compute[248510]: 2025-12-13 08:40:07.958 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:40:07 np0005558241 nova_compute[248510]: 2025-12-13 08:40:07.958 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 13 03:40:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:40:09
Dec 13 03:40:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:40:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:40:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'images', '.mgr', 'default.rgw.log', 'backups', 'default.rgw.control', 'vms', 'volumes']
Dec 13 03:40:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:40:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2314: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 10 KiB/s wr, 61 op/s
Dec 13 03:40:09 np0005558241 nova_compute[248510]: 2025-12-13 08:40:09.910 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:40:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:40:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:40:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:40:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:40:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:40:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:40:10 np0005558241 nova_compute[248510]: 2025-12-13 08:40:10.920 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:10 np0005558241 nova_compute[248510]: 2025-12-13 08:40:10.941 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615195.9397519, 70a00398-fa02-482c-a33e-f190895c8d28 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:40:10 np0005558241 nova_compute[248510]: 2025-12-13 08:40:10.942 248514 INFO nova.compute.manager [-] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:40:11 np0005558241 nova_compute[248510]: 2025-12-13 08:40:11.027 248514 DEBUG nova.compute.manager [None req-74174837-704d-49e6-9cf0-233c092090c9 - - - - - -] [instance: 70a00398-fa02-482c-a33e-f190895c8d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:40:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2315: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 9.9 KiB/s wr, 35 op/s
Dec 13 03:40:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:40:11 np0005558241 nova_compute[248510]: 2025-12-13 08:40:11.919 248514 DEBUG oslo_concurrency.lockutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:11 np0005558241 nova_compute[248510]: 2025-12-13 08:40:11.919 248514 DEBUG oslo_concurrency.lockutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:11 np0005558241 nova_compute[248510]: 2025-12-13 08:40:11.919 248514 INFO nova.compute.manager [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Shelving#033[00m
Dec 13 03:40:11 np0005558241 nova_compute[248510]: 2025-12-13 08:40:11.945 248514 DEBUG nova.virt.libvirt.driver [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec 13 03:40:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2316: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Dec 13 03:40:13 np0005558241 nova_compute[248510]: 2025-12-13 08:40:13.801 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:40:14 np0005558241 kernel: tapea5aafe7-a7 (unregistering): left promiscuous mode
Dec 13 03:40:14 np0005558241 NetworkManager[50376]: <info>  [1765615214.4369] device (tapea5aafe7-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:40:14 np0005558241 nova_compute[248510]: 2025-12-13 08:40:14.443 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:40:14Z|00918|binding|INFO|Releasing lport ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 from this chassis (sb_readonly=0)
Dec 13 03:40:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:40:14Z|00919|binding|INFO|Setting lport ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 down in Southbound
Dec 13 03:40:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:40:14Z|00920|binding|INFO|Removing iface tapea5aafe7-a7 ovn-installed in OVS
Dec 13 03:40:14 np0005558241 nova_compute[248510]: 2025-12-13 08:40:14.446 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.453 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:82:3a 10.100.0.3'], port_security=['fa:16:3e:e0:82:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9b486227-b98c-4393-9a3c-aae3e3c419a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-369f7528-6571-47b6-a030-5281647e1eac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '35769c0b-1e0e-43bc-832c-d54c65a53a36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ade703f-c35f-4093-80be-9470cf548d7c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:40:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.454 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 in datapath 369f7528-6571-47b6-a030-5281647e1eac unbound from our chassis#033[00m
Dec 13 03:40:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.456 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 369f7528-6571-47b6-a030-5281647e1eac#033[00m
Dec 13 03:40:14 np0005558241 nova_compute[248510]: 2025-12-13 08:40:14.461 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.478 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b0149cac-e8e6-4e49-af00-07aeb0edd33b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:14 np0005558241 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000055.scope: Deactivated successfully.
Dec 13 03:40:14 np0005558241 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d00000055.scope: Consumed 18.396s CPU time.
Dec 13 03:40:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.509 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f4a5f3-7ea2-43f9-a2df-98bbdbfa1aeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:14 np0005558241 systemd-machined[210538]: Machine qemu-105-instance-00000055 terminated.
Dec 13 03:40:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.513 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[dae2f18f-2cfe-45ef-b110-eb42b82ade61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.541 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[74915ce2-c23d-49f1-8346-9a6d8c0ddf6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.561 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a66b3529-4531-4c54-8a5a-930648d8b8c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap369f7528-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:b1:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763105, 'reachable_time': 34886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338992, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.579 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[af967b49-d845-4970-ba4b-4af782b465a7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763117, 'tstamp': 763117}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338993, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763121, 'tstamp': 763121}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338993, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.581 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap369f7528-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:40:14 np0005558241 nova_compute[248510]: 2025-12-13 08:40:14.582 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:14 np0005558241 nova_compute[248510]: 2025-12-13 08:40:14.587 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.588 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap369f7528-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:40:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.589 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:40:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.589 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap369f7528-60, col_values=(('external_ids', {'iface-id': '6c33e57f-b143-4ed7-a271-dc33d6e81057'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:40:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:14.589 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:40:14 np0005558241 nova_compute[248510]: 2025-12-13 08:40:14.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:40:14 np0005558241 nova_compute[248510]: 2025-12-13 08:40:14.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 13 03:40:14 np0005558241 nova_compute[248510]: 2025-12-13 08:40:14.838 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 13 03:40:14 np0005558241 nova_compute[248510]: 2025-12-13 08:40:14.912 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:14 np0005558241 nova_compute[248510]: 2025-12-13 08:40:14.964 248514 INFO nova.virt.libvirt.driver [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Instance shutdown successfully after 3 seconds.#033[00m
Dec 13 03:40:14 np0005558241 nova_compute[248510]: 2025-12-13 08:40:14.969 248514 INFO nova.virt.libvirt.driver [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Instance destroyed successfully.#033[00m
Dec 13 03:40:14 np0005558241 nova_compute[248510]: 2025-12-13 08:40:14.969 248514 DEBUG nova.objects.instance [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:40:15 np0005558241 nova_compute[248510]: 2025-12-13 08:40:15.090 248514 DEBUG nova.compute.manager [req-708fd4db-aee2-46bd-ae74-e0d0be4aa784 req-17c8062b-5982-40e4-aa4a-d8e880140db2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-vif-unplugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:40:15 np0005558241 nova_compute[248510]: 2025-12-13 08:40:15.091 248514 DEBUG oslo_concurrency.lockutils [req-708fd4db-aee2-46bd-ae74-e0d0be4aa784 req-17c8062b-5982-40e4-aa4a-d8e880140db2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:15 np0005558241 nova_compute[248510]: 2025-12-13 08:40:15.091 248514 DEBUG oslo_concurrency.lockutils [req-708fd4db-aee2-46bd-ae74-e0d0be4aa784 req-17c8062b-5982-40e4-aa4a-d8e880140db2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:15 np0005558241 nova_compute[248510]: 2025-12-13 08:40:15.091 248514 DEBUG oslo_concurrency.lockutils [req-708fd4db-aee2-46bd-ae74-e0d0be4aa784 req-17c8062b-5982-40e4-aa4a-d8e880140db2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:40:15 np0005558241 nova_compute[248510]: 2025-12-13 08:40:15.091 248514 DEBUG nova.compute.manager [req-708fd4db-aee2-46bd-ae74-e0d0be4aa784 req-17c8062b-5982-40e4-aa4a-d8e880140db2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] No waiting events found dispatching network-vif-unplugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:40:15 np0005558241 nova_compute[248510]: 2025-12-13 08:40:15.092 248514 WARNING nova.compute.manager [req-708fd4db-aee2-46bd-ae74-e0d0be4aa784 req-17c8062b-5982-40e4-aa4a-d8e880140db2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received unexpected event network-vif-unplugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 for instance with vm_state active and task_state shelving.#033[00m
Dec 13 03:40:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:40:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2771698614' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:40:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:40:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2771698614' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:40:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2317: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 17 KiB/s wr, 19 op/s
Dec 13 03:40:15 np0005558241 nova_compute[248510]: 2025-12-13 08:40:15.829 248514 INFO nova.virt.libvirt.driver [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Beginning cold snapshot process#033[00m
Dec 13 03:40:16 np0005558241 nova_compute[248510]: 2025-12-13 08:40:16.015 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:16 np0005558241 nova_compute[248510]: 2025-12-13 08:40:16.023 248514 DEBUG nova.virt.libvirt.imagebackend [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec 13 03:40:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:40:16 np0005558241 nova_compute[248510]: 2025-12-13 08:40:16.909 248514 DEBUG nova.storage.rbd_utils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(687ede9d59134649b503496d80eb8703) on rbd image(9b486227-b98c-4393-9a3c-aae3e3c419a8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:40:17 np0005558241 nova_compute[248510]: 2025-12-13 08:40:17.340 248514 DEBUG nova.compute.manager [req-86118057-169d-4afd-9ec4-039a79e6b9ef req-641b403e-c5ce-4a86-becd-1640dcdae7e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:40:17 np0005558241 nova_compute[248510]: 2025-12-13 08:40:17.340 248514 DEBUG oslo_concurrency.lockutils [req-86118057-169d-4afd-9ec4-039a79e6b9ef req-641b403e-c5ce-4a86-becd-1640dcdae7e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:17 np0005558241 nova_compute[248510]: 2025-12-13 08:40:17.340 248514 DEBUG oslo_concurrency.lockutils [req-86118057-169d-4afd-9ec4-039a79e6b9ef req-641b403e-c5ce-4a86-becd-1640dcdae7e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:17 np0005558241 nova_compute[248510]: 2025-12-13 08:40:17.341 248514 DEBUG oslo_concurrency.lockutils [req-86118057-169d-4afd-9ec4-039a79e6b9ef req-641b403e-c5ce-4a86-becd-1640dcdae7e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:40:17 np0005558241 nova_compute[248510]: 2025-12-13 08:40:17.341 248514 DEBUG nova.compute.manager [req-86118057-169d-4afd-9ec4-039a79e6b9ef req-641b403e-c5ce-4a86-becd-1640dcdae7e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] No waiting events found dispatching network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:40:17 np0005558241 nova_compute[248510]: 2025-12-13 08:40:17.341 248514 WARNING nova.compute.manager [req-86118057-169d-4afd-9ec4-039a79e6b9ef req-641b403e-c5ce-4a86-becd-1640dcdae7e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received unexpected event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Dec 13 03:40:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Dec 13 03:40:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Dec 13 03:40:17 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Dec 13 03:40:17 np0005558241 nova_compute[248510]: 2025-12-13 08:40:17.456 248514 DEBUG nova.storage.rbd_utils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] cloning vms/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk@687ede9d59134649b503496d80eb8703 to images/f328af0c-ddf7-42aa-aa98-c602105202af clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 03:40:17 np0005558241 nova_compute[248510]: 2025-12-13 08:40:17.541 248514 DEBUG nova.storage.rbd_utils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] flattening images/f328af0c-ddf7-42aa-aa98-c602105202af flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 03:40:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2319: 321 pgs: 321 active+clean; 246 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 10 KiB/s wr, 2 op/s
Dec 13 03:40:17 np0005558241 nova_compute[248510]: 2025-12-13 08:40:17.999 248514 DEBUG nova.storage.rbd_utils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] removing snapshot(687ede9d59134649b503496d80eb8703) on rbd image(9b486227-b98c-4393-9a3c-aae3e3c419a8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 03:40:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Dec 13 03:40:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Dec 13 03:40:18 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Dec 13 03:40:18 np0005558241 nova_compute[248510]: 2025-12-13 08:40:18.448 248514 DEBUG nova.storage.rbd_utils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] creating snapshot(snap) on rbd image(f328af0c-ddf7-42aa-aa98-c602105202af) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 03:40:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Dec 13 03:40:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Dec 13 03:40:19 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Dec 13 03:40:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2322: 321 pgs: 321 active+clean; 285 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 4.1 MiB/s wr, 66 op/s
Dec 13 03:40:19 np0005558241 nova_compute[248510]: 2025-12-13 08:40:19.914 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:40:20 np0005558241 nova_compute[248510]: 2025-12-13 08:40:20.973 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001530852208824031 of space, bias 1.0, pg target 0.4592556626472093 quantized to 32 (current 32)
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014091162288174185 of space, bias 1.0, pg target 0.4227348686452256 quantized to 32 (current 32)
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.938651746951207e-07 of space, bias 4.0, pg target 0.0007126382096341448 quantized to 16 (current 32)
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:40:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2323: 321 pgs: 321 active+clean; 320 MiB data, 774 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 146 op/s
Dec 13 03:40:22 np0005558241 nova_compute[248510]: 2025-12-13 08:40:22.342 248514 INFO nova.virt.libvirt.driver [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Snapshot image upload complete
Dec 13 03:40:22 np0005558241 nova_compute[248510]: 2025-12-13 08:40:22.342 248514 DEBUG nova.compute.manager [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:40:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:40:22 np0005558241 nova_compute[248510]: 2025-12-13 08:40:22.425 248514 INFO nova.compute.manager [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Shelve offloading
Dec 13 03:40:22 np0005558241 nova_compute[248510]: 2025-12-13 08:40:22.434 248514 INFO nova.virt.libvirt.driver [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Instance destroyed successfully.
Dec 13 03:40:22 np0005558241 nova_compute[248510]: 2025-12-13 08:40:22.435 248514 DEBUG nova.compute.manager [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:40:22 np0005558241 nova_compute[248510]: 2025-12-13 08:40:22.438 248514 DEBUG oslo_concurrency.lockutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:40:22 np0005558241 nova_compute[248510]: 2025-12-13 08:40:22.438 248514 DEBUG oslo_concurrency.lockutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquired lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:40:22 np0005558241 nova_compute[248510]: 2025-12-13 08:40:22.438 248514 DEBUG nova.network.neutron [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:40:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2324: 321 pgs: 321 active+clean; 325 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 7.3 MiB/s wr, 146 op/s
Dec 13 03:40:23 np0005558241 nova_compute[248510]: 2025-12-13 08:40:23.800 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:40:24 np0005558241 nova_compute[248510]: 2025-12-13 08:40:24.915 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:40:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2325: 321 pgs: 321 active+clean; 325 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 129 op/s
Dec 13 03:40:25 np0005558241 nova_compute[248510]: 2025-12-13 08:40:25.975 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:40:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:40:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Dec 13 03:40:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Dec 13 03:40:27 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Dec 13 03:40:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2327: 321 pgs: 321 active+clean; 325 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.7 MiB/s wr, 78 op/s
Dec 13 03:40:28 np0005558241 nova_compute[248510]: 2025-12-13 08:40:28.238 248514 DEBUG nova.network.neutron [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updating instance_info_cache with network_info: [{"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:40:28 np0005558241 nova_compute[248510]: 2025-12-13 08:40:28.734 248514 DEBUG oslo_concurrency.lockutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Releasing lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:40:29 np0005558241 nova_compute[248510]: 2025-12-13 08:40:29.679 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615214.6782806, 9b486227-b98c-4393-9a3c-aae3e3c419a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:40:29 np0005558241 nova_compute[248510]: 2025-12-13 08:40:29.680 248514 INFO nova.compute.manager [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] VM Stopped (Lifecycle Event)
Dec 13 03:40:29 np0005558241 nova_compute[248510]: 2025-12-13 08:40:29.729 248514 DEBUG nova.compute.manager [None req-41c58ff6-cc5f-49f7-ae18-ea71ff4c1d42 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:40:29 np0005558241 nova_compute[248510]: 2025-12-13 08:40:29.732 248514 DEBUG nova.compute.manager [None req-41c58ff6-cc5f-49f7-ae18-ea71ff4c1d42 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:40:29 np0005558241 nova_compute[248510]: 2025-12-13 08:40:29.766 248514 INFO nova.compute.manager [None req-41c58ff6-cc5f-49f7-ae18-ea71ff4c1d42 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Dec 13 03:40:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2328: 321 pgs: 321 active+clean; 325 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 65 op/s
Dec 13 03:40:29 np0005558241 nova_compute[248510]: 2025-12-13 08:40:29.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:40:30 np0005558241 nova_compute[248510]: 2025-12-13 08:40:30.649 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.051 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.059 248514 INFO nova.virt.libvirt.driver [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Instance destroyed successfully.
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.059 248514 DEBUG nova.objects.instance [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'resources' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.076 248514 DEBUG nova.virt.libvirt.vif [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:37:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-647049604',display_name='tempest-ServerActionsTestOtherB-server-647049604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-647049604',id=85,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMCBc3yf7DLFWm969JJ3AJvRq1SqBawRmsOjScixeqlFSyjq4/Kpbcw0olzxybOT1DbERtB0mKMV4pquo3M97LIG1LWOqbG4HPkmobMKh41xqoYhtSOyaVVjlfwlnNokPA==',key_name='tempest-keypair-2123324397',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:37:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-ys1cli8v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member',shelved_at='2025-12-13T08:40:22.342504',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='f328af0c-ddf7-42aa-aa98-c602105202af'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:40:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7507939da64e4320a1c6f389d0fc9045',uuid=9b486227-b98c-4393-9a3c-aae3e3c419a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.076 248514 DEBUG nova.network.os_vif_util [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.077 248514 DEBUG nova.network.os_vif_util [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.078 248514 DEBUG os_vif [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.080 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.080 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea5aafe7-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.083 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.085 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.087 248514 INFO os_vif [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7')
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.346 248514 DEBUG nova.compute.manager [req-f73f6bfe-e7f3-4784-ae58-6919f9dfdb2c req-1f101453-51df-474d-87fb-7a3b598b262b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-changed-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.346 248514 DEBUG nova.compute.manager [req-f73f6bfe-e7f3-4784-ae58-6919f9dfdb2c req-1f101453-51df-474d-87fb-7a3b598b262b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Refreshing instance network info cache due to event network-changed-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.346 248514 DEBUG oslo_concurrency.lockutils [req-f73f6bfe-e7f3-4784-ae58-6919f9dfdb2c req-1f101453-51df-474d-87fb-7a3b598b262b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.346 248514 DEBUG oslo_concurrency.lockutils [req-f73f6bfe-e7f3-4784-ae58-6919f9dfdb2c req-1f101453-51df-474d-87fb-7a3b598b262b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.347 248514 DEBUG nova.network.neutron [req-f73f6bfe-e7f3-4784-ae58-6919f9dfdb2c req-1f101453-51df-474d-87fb-7a3b598b262b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Refreshing network info cache for port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.379 248514 INFO nova.virt.libvirt.driver [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Deleting instance files /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8_del
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.379 248514 INFO nova.virt.libvirt.driver [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Deletion of /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8_del complete
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.501 248514 INFO nova.scheduler.client.report [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Deleted allocations for instance 9b486227-b98c-4393-9a3c-aae3e3c419a8
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.561 248514 DEBUG oslo_concurrency.lockutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.562 248514 DEBUG oslo_concurrency.lockutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:40:31 np0005558241 nova_compute[248510]: 2025-12-13 08:40:31.619 248514 DEBUG oslo_concurrency.processutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:40:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2329: 321 pgs: 321 active+clean; 325 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 7.8 KiB/s wr, 15 op/s
Dec 13 03:40:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:40:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3382742547' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:40:32 np0005558241 nova_compute[248510]: 2025-12-13 08:40:32.186 248514 DEBUG oslo_concurrency.processutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:40:32 np0005558241 nova_compute[248510]: 2025-12-13 08:40:32.194 248514 DEBUG nova.compute.provider_tree [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:40:32 np0005558241 nova_compute[248510]: 2025-12-13 08:40:32.216 248514 DEBUG nova.scheduler.client.report [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:40:32 np0005558241 nova_compute[248510]: 2025-12-13 08:40:32.247 248514 DEBUG oslo_concurrency.lockutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:40:32 np0005558241 nova_compute[248510]: 2025-12-13 08:40:32.302 248514 DEBUG oslo_concurrency.lockutils [None req-76f8a320-3acf-40a0-aa57-3ae5877838a6 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 20.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:40:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:40:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2330: 321 pgs: 321 active+clean; 298 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 9.8 KiB/s rd, 716 B/s wr, 13 op/s
Dec 13 03:40:34 np0005558241 nova_compute[248510]: 2025-12-13 08:40:34.359 248514 DEBUG nova.network.neutron [req-f73f6bfe-e7f3-4784-ae58-6919f9dfdb2c req-1f101453-51df-474d-87fb-7a3b598b262b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updated VIF entry in instance network info cache for port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:40:34 np0005558241 nova_compute[248510]: 2025-12-13 08:40:34.360 248514 DEBUG nova.network.neutron [req-f73f6bfe-e7f3-4784-ae58-6919f9dfdb2c req-1f101453-51df-474d-87fb-7a3b598b262b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updating instance_info_cache with network_info: [{"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": null, "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:40:34 np0005558241 nova_compute[248510]: 2025-12-13 08:40:34.394 248514 DEBUG oslo_concurrency.lockutils [req-f73f6bfe-e7f3-4784-ae58-6919f9dfdb2c req-1f101453-51df-474d-87fb-7a3b598b262b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:40:34 np0005558241 podman[339191]: 2025-12-13 08:40:34.976095546 +0000 UTC m=+0.048546234 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 03:40:34 np0005558241 podman[339190]: 2025-12-13 08:40:34.983339946 +0000 UTC m=+0.062483199 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:40:35 np0005558241 podman[339189]: 2025-12-13 08:40:35.003042053 +0000 UTC m=+0.082115904 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:40:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2331: 321 pgs: 321 active+clean; 246 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Dec 13 03:40:36 np0005558241 nova_compute[248510]: 2025-12-13 08:40:36.054 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:36 np0005558241 nova_compute[248510]: 2025-12-13 08:40:36.084 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:36 np0005558241 nova_compute[248510]: 2025-12-13 08:40:36.261 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:36 np0005558241 nova_compute[248510]: 2025-12-13 08:40:36.262 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:36 np0005558241 nova_compute[248510]: 2025-12-13 08:40:36.294 248514 DEBUG nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:40:36 np0005558241 nova_compute[248510]: 2025-12-13 08:40:36.433 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:36 np0005558241 nova_compute[248510]: 2025-12-13 08:40:36.434 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:36 np0005558241 nova_compute[248510]: 2025-12-13 08:40:36.445 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:40:36 np0005558241 nova_compute[248510]: 2025-12-13 08:40:36.445 248514 INFO nova.compute.claims [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:40:36 np0005558241 nova_compute[248510]: 2025-12-13 08:40:36.604 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:40:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:40:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3256936633' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.262 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.270 248514 DEBUG nova.compute.provider_tree [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.293 248514 DEBUG nova.scheduler.client.report [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.432 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.433 248514 DEBUG nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.484 248514 DEBUG nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.484 248514 DEBUG nova.network.neutron [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.514 248514 INFO nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.539 248514 DEBUG nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:40:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.746 248514 DEBUG nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.748 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.748 248514 INFO nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Creating image(s)#033[00m
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.780 248514 DEBUG nova.storage.rbd_utils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:40:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2332: 321 pgs: 321 active+clean; 246 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.809 248514 DEBUG nova.storage.rbd_utils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.836 248514 DEBUG nova.storage.rbd_utils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.841 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.947 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.948 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.949 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.950 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.976 248514 DEBUG nova.storage.rbd_utils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:40:37 np0005558241 nova_compute[248510]: 2025-12-13 08:40:37.980 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:40:38 np0005558241 nova_compute[248510]: 2025-12-13 08:40:38.215 248514 DEBUG nova.policy [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3fcdbbf0ede24d3e820e927d9136712b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:40:38 np0005558241 nova_compute[248510]: 2025-12-13 08:40:38.291 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:40:38 np0005558241 nova_compute[248510]: 2025-12-13 08:40:38.332 248514 WARNING nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.#033[00m
Dec 13 03:40:38 np0005558241 nova_compute[248510]: 2025-12-13 08:40:38.332 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 658e5f04-399b-4a8a-8680-5ae9717949c0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 13 03:40:38 np0005558241 nova_compute[248510]: 2025-12-13 08:40:38.333 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 13 03:40:38 np0005558241 nova_compute[248510]: 2025-12-13 08:40:38.334 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "658e5f04-399b-4a8a-8680-5ae9717949c0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:38 np0005558241 nova_compute[248510]: 2025-12-13 08:40:38.335 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:38 np0005558241 nova_compute[248510]: 2025-12-13 08:40:38.336 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:38 np0005558241 nova_compute[248510]: 2025-12-13 08:40:38.372 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:40:39 np0005558241 nova_compute[248510]: 2025-12-13 08:40:39.002 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:39 np0005558241 nova_compute[248510]: 2025-12-13 08:40:39.003 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:39 np0005558241 nova_compute[248510]: 2025-12-13 08:40:39.003 248514 INFO nova.compute.manager [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Unshelving#033[00m
Dec 13 03:40:39 np0005558241 nova_compute[248510]: 2025-12-13 08:40:39.107 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:39 np0005558241 nova_compute[248510]: 2025-12-13 08:40:39.108 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:39 np0005558241 nova_compute[248510]: 2025-12-13 08:40:39.112 248514 DEBUG nova.objects.instance [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:40:39 np0005558241 nova_compute[248510]: 2025-12-13 08:40:39.131 248514 DEBUG nova.objects.instance [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:40:39 np0005558241 nova_compute[248510]: 2025-12-13 08:40:39.146 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:40:39 np0005558241 nova_compute[248510]: 2025-12-13 08:40:39.147 248514 INFO nova.compute.claims [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:40:39 np0005558241 nova_compute[248510]: 2025-12-13 08:40:39.321 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:40:39 np0005558241 nova_compute[248510]: 2025-12-13 08:40:39.709 248514 DEBUG nova.network.neutron [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Successfully created port: eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:40:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2333: 321 pgs: 321 active+clean; 248 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 93 KiB/s wr, 28 op/s
Dec 13 03:40:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:40:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2681529047' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:40:40 np0005558241 nova_compute[248510]: 2025-12-13 08:40:40.062 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.741s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:40:40 np0005558241 nova_compute[248510]: 2025-12-13 08:40:40.068 248514 DEBUG nova.compute.provider_tree [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:40:40 np0005558241 nova_compute[248510]: 2025-12-13 08:40:40.087 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:40:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:40:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:40:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:40:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:40:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:40:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:40:40 np0005558241 nova_compute[248510]: 2025-12-13 08:40:40.171 248514 DEBUG nova.scheduler.client.report [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:40:40 np0005558241 nova_compute[248510]: 2025-12-13 08:40:40.202 248514 DEBUG nova.storage.rbd_utils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] resizing rbd image 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:40:40 np0005558241 nova_compute[248510]: 2025-12-13 08:40:40.334 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:40:40 np0005558241 nova_compute[248510]: 2025-12-13 08:40:40.821 248514 INFO nova.network.neutron [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updating port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Dec 13 03:40:41 np0005558241 nova_compute[248510]: 2025-12-13 08:40:41.055 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:41 np0005558241 nova_compute[248510]: 2025-12-13 08:40:41.086 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:41 np0005558241 nova_compute[248510]: 2025-12-13 08:40:41.314 248514 DEBUG nova.objects.instance [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'migration_context' on Instance uuid 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:40:41 np0005558241 nova_compute[248510]: 2025-12-13 08:40:41.340 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:40:41 np0005558241 nova_compute[248510]: 2025-12-13 08:40:41.340 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Ensure instance console log exists: /var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:40:41 np0005558241 nova_compute[248510]: 2025-12-13 08:40:41.340 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:41 np0005558241 nova_compute[248510]: 2025-12-13 08:40:41.341 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:41 np0005558241 nova_compute[248510]: 2025-12-13 08:40:41.341 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:40:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2334: 321 pgs: 321 active+clean; 272 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 711 KiB/s wr, 42 op/s
Dec 13 03:40:42 np0005558241 nova_compute[248510]: 2025-12-13 08:40:42.386 248514 DEBUG nova.network.neutron [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Successfully updated port: eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:40:42 np0005558241 nova_compute[248510]: 2025-12-13 08:40:42.406 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:40:42 np0005558241 nova_compute[248510]: 2025-12-13 08:40:42.407 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquired lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:40:42 np0005558241 nova_compute[248510]: 2025-12-13 08:40:42.407 248514 DEBUG nova.network.neutron [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:40:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:40:43 np0005558241 nova_compute[248510]: 2025-12-13 08:40:43.104 248514 DEBUG nova.network.neutron [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:40:43 np0005558241 nova_compute[248510]: 2025-12-13 08:40:43.140 248514 DEBUG nova.compute.manager [req-1094199e-ac93-406d-bde3-4152d110758a req-08cdd099-8bee-4aa1-9aff-e2795c4b726e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Received event network-changed-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:40:43 np0005558241 nova_compute[248510]: 2025-12-13 08:40:43.140 248514 DEBUG nova.compute.manager [req-1094199e-ac93-406d-bde3-4152d110758a req-08cdd099-8bee-4aa1-9aff-e2795c4b726e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Refreshing instance network info cache due to event network-changed-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:40:43 np0005558241 nova_compute[248510]: 2025-12-13 08:40:43.141 248514 DEBUG oslo_concurrency.lockutils [req-1094199e-ac93-406d-bde3-4152d110758a req-08cdd099-8bee-4aa1-9aff-e2795c4b726e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:40:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2335: 321 pgs: 321 active+clean; 283 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1020 KiB/s wr, 43 op/s
Dec 13 03:40:44 np0005558241 nova_compute[248510]: 2025-12-13 08:40:44.767 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:40:44 np0005558241 nova_compute[248510]: 2025-12-13 08:40:44.768 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquired lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:40:44 np0005558241 nova_compute[248510]: 2025-12-13 08:40:44.768 248514 DEBUG nova.network.neutron [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:40:45 np0005558241 nova_compute[248510]: 2025-12-13 08:40:45.458 248514 DEBUG nova.compute.manager [req-657eafdd-20e0-49c2-a0ab-d5b037b5cc32 req-993b5582-ef6d-43d1-9bc2-86110ab5eb8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-changed-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:40:45 np0005558241 nova_compute[248510]: 2025-12-13 08:40:45.458 248514 DEBUG nova.compute.manager [req-657eafdd-20e0-49c2-a0ab-d5b037b5cc32 req-993b5582-ef6d-43d1-9bc2-86110ab5eb8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Refreshing instance network info cache due to event network-changed-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:40:45 np0005558241 nova_compute[248510]: 2025-12-13 08:40:45.459 248514 DEBUG oslo_concurrency.lockutils [req-657eafdd-20e0-49c2-a0ab-d5b037b5cc32 req-993b5582-ef6d-43d1-9bc2-86110ab5eb8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:40:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2336: 321 pgs: 321 active+clean; 292 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.057 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.088 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.696 248514 DEBUG nova.network.neutron [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Updating instance_info_cache with network_info: [{"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.758 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Releasing lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.759 248514 DEBUG nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Instance network_info: |[{"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.759 248514 DEBUG oslo_concurrency.lockutils [req-1094199e-ac93-406d-bde3-4152d110758a req-08cdd099-8bee-4aa1-9aff-e2795c4b726e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.759 248514 DEBUG nova.network.neutron [req-1094199e-ac93-406d-bde3-4152d110758a req-08cdd099-8bee-4aa1-9aff-e2795c4b726e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Refreshing network info cache for port eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.764 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Start _get_guest_xml network_info=[{"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.769 248514 WARNING nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.776 248514 DEBUG nova.virt.libvirt.host [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.776 248514 DEBUG nova.virt.libvirt.host [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.781 248514 DEBUG nova.virt.libvirt.host [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.781 248514 DEBUG nova.virt.libvirt.host [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.782 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.782 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.782 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.783 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.783 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.783 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.783 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.784 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.784 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.784 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.784 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.785 248514 DEBUG nova.virt.hardware [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:40:46 np0005558241 nova_compute[248510]: 2025-12-13 08:40:46.787 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:40:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:40:47 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2897695458' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:40:47 np0005558241 nova_compute[248510]: 2025-12-13 08:40:47.355 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:40:47 np0005558241 nova_compute[248510]: 2025-12-13 08:40:47.384 248514 DEBUG nova.storage.rbd_utils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:40:47 np0005558241 nova_compute[248510]: 2025-12-13 08:40:47.390 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:40:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2337: 321 pgs: 321 active+clean; 292 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:40:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:40:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:40:47 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2285801360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:40:47 np0005558241 nova_compute[248510]: 2025-12-13 08:40:47.985 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:40:47 np0005558241 nova_compute[248510]: 2025-12-13 08:40:47.986 248514 DEBUG nova.virt.libvirt.vif [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:40:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-694899991',display_name='tempest-₡-694899991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--694899991',id=94,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-vc3q6lf7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:40:37Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:40:47 np0005558241 nova_compute[248510]: 2025-12-13 08:40:47.987 248514 DEBUG nova.network.os_vif_util [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:40:47 np0005558241 nova_compute[248510]: 2025-12-13 08:40:47.988 248514 DEBUG nova.network.os_vif_util [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:83:5f,bridge_name='br-int',has_traffic_filtering=True,id=eacb4de9-7daa-4e47-a0b7-b0a986dc50f1,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacb4de9-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:40:47 np0005558241 nova_compute[248510]: 2025-12-13 08:40:47.989 248514 DEBUG nova.objects.instance [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'pci_devices' on Instance uuid 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.007 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:40:48 np0005558241 nova_compute[248510]:  <uuid>9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c</uuid>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:  <name>instance-0000005e</name>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <nova:name>tempest-₡-694899991</nova:name>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:40:46</nova:creationTime>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:40:48 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:        <nova:user uuid="3fcdbbf0ede24d3e820e927d9136712b">tempest-ServersTestJSON-1443479207-project-member</nova:user>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:        <nova:project uuid="3fb4f27cdc7f468faa36b4e7a461d7be">tempest-ServersTestJSON-1443479207</nova:project>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:        <nova:port uuid="eacb4de9-7daa-4e47-a0b7-b0a986dc50f1">
Dec 13 03:40:48 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <entry name="serial">9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c</entry>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <entry name="uuid">9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c</entry>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk">
Dec 13 03:40:48 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:40:48 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk.config">
Dec 13 03:40:48 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:40:48 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:b3:83:5f"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <target dev="tapeacb4de9-7d"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c/console.log" append="off"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:40:48 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:40:48 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:40:48 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:40:48 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.008 248514 DEBUG nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Preparing to wait for external event network-vif-plugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.009 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.009 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.010 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.010 248514 DEBUG nova.virt.libvirt.vif [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:40:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-694899991',display_name='tempest-₡-694899991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--694899991',id=94,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-vc3q6lf7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:40:37Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.011 248514 DEBUG nova.network.os_vif_util [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.011 248514 DEBUG nova.network.os_vif_util [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:83:5f,bridge_name='br-int',has_traffic_filtering=True,id=eacb4de9-7daa-4e47-a0b7-b0a986dc50f1,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacb4de9-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.012 248514 DEBUG os_vif [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:83:5f,bridge_name='br-int',has_traffic_filtering=True,id=eacb4de9-7daa-4e47-a0b7-b0a986dc50f1,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacb4de9-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.012 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.013 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.013 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.016 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.016 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeacb4de9-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.017 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeacb4de9-7d, col_values=(('external_ids', {'iface-id': 'eacb4de9-7daa-4e47-a0b7-b0a986dc50f1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:83:5f', 'vm-uuid': '9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.018 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:48 np0005558241 NetworkManager[50376]: <info>  [1765615248.0196] manager: (tapeacb4de9-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.021 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.023 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.023 248514 INFO os_vif [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:83:5f,bridge_name='br-int',has_traffic_filtering=True,id=eacb4de9-7daa-4e47-a0b7-b0a986dc50f1,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacb4de9-7d')#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.202 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.203 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.203 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No VIF found with MAC fa:16:3e:b3:83:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.203 248514 INFO nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Using config drive#033[00m
Dec 13 03:40:48 np0005558241 nova_compute[248510]: 2025-12-13 08:40:48.228 248514 DEBUG nova.storage.rbd_utils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:40:49 np0005558241 nova_compute[248510]: 2025-12-13 08:40:49.260 248514 INFO nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Creating config drive at /var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c/disk.config#033[00m
Dec 13 03:40:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:49.265 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:40:49 np0005558241 nova_compute[248510]: 2025-12-13 08:40:49.266 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbq35jnxb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:40:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:49.266 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:40:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:40:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 10K writes, 45K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1524 writes, 7100 keys, 1524 commit groups, 1.0 writes per commit group, ingest: 9.94 MB, 0.02 MB/s#012Interval WAL: 1523 writes, 1523 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     16.7      3.36              0.19        30    0.112       0      0       0.0       0.0#012  L6      1/0    9.15 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.3     70.0     58.5      4.09              0.72        29    0.141    164K    16K       0.0       0.0#012 Sum      1/0    9.15 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.3     38.4     39.7      7.45              0.92        59    0.126    164K    16K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.7     93.7     92.4      0.66              0.20        12    0.055     41K   3083       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     70.0     58.5      4.09              0.72        29    0.141    164K    16K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     16.8      3.35              0.19        29    0.116       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.055, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.29 GB write, 0.07 MB/s write, 0.28 GB read, 0.07 MB/s read, 7.5 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 304.00 MB usage: 33.62 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000724 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2101,32.40 MB,10.6574%) FilterBlock(60,456.05 KB,0.146499%) IndexBlock(60,792.34 KB,0.254531%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec 13 03:40:49 np0005558241 nova_compute[248510]: 2025-12-13 08:40:49.305 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:49 np0005558241 nova_compute[248510]: 2025-12-13 08:40:49.332 248514 DEBUG nova.network.neutron [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updating instance_info_cache with network_info: [{"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:40:49 np0005558241 nova_compute[248510]: 2025-12-13 08:40:49.372 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Releasing lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:40:49 np0005558241 nova_compute[248510]: 2025-12-13 08:40:49.374 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:40:49 np0005558241 nova_compute[248510]: 2025-12-13 08:40:49.375 248514 INFO nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Creating image(s)#033[00m
Dec 13 03:40:49 np0005558241 nova_compute[248510]: 2025-12-13 08:40:49.395 248514 DEBUG nova.storage.rbd_utils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:40:49 np0005558241 nova_compute[248510]: 2025-12-13 08:40:49.398 248514 DEBUG nova.objects.instance [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:40:49 np0005558241 nova_compute[248510]: 2025-12-13 08:40:49.401 248514 DEBUG oslo_concurrency.lockutils [req-657eafdd-20e0-49c2-a0ab-d5b037b5cc32 req-993b5582-ef6d-43d1-9bc2-86110ab5eb8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:40:49 np0005558241 nova_compute[248510]: 2025-12-13 08:40:49.401 248514 DEBUG nova.network.neutron [req-657eafdd-20e0-49c2-a0ab-d5b037b5cc32 req-993b5582-ef6d-43d1-9bc2-86110ab5eb8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Refreshing network info cache for port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:40:49 np0005558241 nova_compute[248510]: 2025-12-13 08:40:49.413 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbq35jnxb" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:40:49 np0005558241 nova_compute[248510]: 2025-12-13 08:40:49.434 248514 DEBUG nova.storage.rbd_utils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:40:49 np0005558241 nova_compute[248510]: 2025-12-13 08:40:49.437 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c/disk.config 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:40:49 np0005558241 nova_compute[248510]: 2025-12-13 08:40:49.493 248514 DEBUG nova.storage.rbd_utils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:40:49 np0005558241 nova_compute[248510]: 2025-12-13 08:40:49.513 248514 DEBUG nova.storage.rbd_utils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:40:49 np0005558241 nova_compute[248510]: 2025-12-13 08:40:49.517 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "9013f4a5ad7d819d713aea28bc14bb02d0de3469" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:49 np0005558241 nova_compute[248510]: 2025-12-13 08:40:49.517 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9013f4a5ad7d819d713aea28bc14bb02d0de3469" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2338: 321 pgs: 321 active+clean; 292 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:40:49 np0005558241 nova_compute[248510]: 2025-12-13 08:40:49.817 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:40:50 np0005558241 nova_compute[248510]: 2025-12-13 08:40:50.651 248514 DEBUG oslo_concurrency.processutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c/disk.config 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:40:50 np0005558241 nova_compute[248510]: 2025-12-13 08:40:50.652 248514 INFO nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Deleting local config drive /var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c/disk.config because it was imported into RBD.#033[00m
Dec 13 03:40:50 np0005558241 kernel: tapeacb4de9-7d: entered promiscuous mode
Dec 13 03:40:50 np0005558241 NetworkManager[50376]: <info>  [1765615250.7237] manager: (tapeacb4de9-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/389)
Dec 13 03:40:50 np0005558241 ovn_controller[148476]: 2025-12-13T08:40:50Z|00921|binding|INFO|Claiming lport eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 for this chassis.
Dec 13 03:40:50 np0005558241 ovn_controller[148476]: 2025-12-13T08:40:50Z|00922|binding|INFO|eacb4de9-7daa-4e47-a0b7-b0a986dc50f1: Claiming fa:16:3e:b3:83:5f 10.100.0.5
Dec 13 03:40:50 np0005558241 nova_compute[248510]: 2025-12-13 08:40:50.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:50 np0005558241 ovn_controller[148476]: 2025-12-13T08:40:50Z|00923|binding|INFO|Setting lport eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 ovn-installed in OVS
Dec 13 03:40:50 np0005558241 nova_compute[248510]: 2025-12-13 08:40:50.743 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:50 np0005558241 systemd-udevd[339650]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:40:50 np0005558241 systemd-machined[210538]: New machine qemu-116-instance-0000005e.
Dec 13 03:40:50 np0005558241 ovn_controller[148476]: 2025-12-13T08:40:50Z|00924|binding|INFO|Setting lport eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 up in Southbound
Dec 13 03:40:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.767 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:83:5f 10.100.0.5'], port_security=['fa:16:3e:b3:83:5f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=eacb4de9-7daa-4e47-a0b7-b0a986dc50f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:40:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.769 158419 INFO neutron.agent.ovn.metadata.agent [-] Port eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 bound to our chassis#033[00m
Dec 13 03:40:50 np0005558241 NetworkManager[50376]: <info>  [1765615250.7709] device (tapeacb4de9-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:40:50 np0005558241 NetworkManager[50376]: <info>  [1765615250.7719] device (tapeacb4de9-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:40:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.772 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3#033[00m
Dec 13 03:40:50 np0005558241 systemd[1]: Started Virtual Machine qemu-116-instance-0000005e.
Dec 13 03:40:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.786 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6e3c83-39c6-4e51-99e3-e574b91762c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.787 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8f018c93-d1 in ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:40:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.789 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8f018c93-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:40:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.789 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[318ae8bb-b169-44c0-bf13-11732bb4bcda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.790 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[145df534-0e4b-4286-b07b-1762d98613cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.802 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[6ae4af74-3ba8-44bf-8236-f56dffdef1c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.831 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[701b6df4-da94-4d1a-817b-39b2468b4f9d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.860 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[69a8c840-f262-4901-b7fc-c0b200242bc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.864 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[88a179ed-a9c3-4ac0-9087-841ae0e2e20c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:50 np0005558241 systemd-udevd[339653]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:40:50 np0005558241 NetworkManager[50376]: <info>  [1765615250.8661] manager: (tap8f018c93-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/390)
Dec 13 03:40:50 np0005558241 nova_compute[248510]: 2025-12-13 08:40:50.877 248514 DEBUG nova.virt.libvirt.imagebackend [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Image locations are: [{'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/f328af0c-ddf7-42aa-aa98-c602105202af/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/f328af0c-ddf7-42aa-aa98-c602105202af/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec 13 03:40:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.899 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2b426b4b-4e51-49be-bb30-83a37363fbc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.902 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9c56f5bd-da09-4280-94ba-826380f336f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:50 np0005558241 NetworkManager[50376]: <info>  [1765615250.9269] device (tap8f018c93-d0): carrier: link connected
Dec 13 03:40:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.932 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[61744960-09c4-43c7-901a-c7d70b3bcdad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.950 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[56dfe86b-a185-487b-8975-761cb3d6fe24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339706, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.964 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[685147c0-df74-43e9-8a46-406a330c6e3f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:fc06'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782814, 'tstamp': 782814}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339707, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:50.986 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2a92f857-83be-42ee-b514-30776ee134c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339708, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.017 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[697a6994-69a6-45aa-9b46-002ee4413fb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.035 248514 DEBUG nova.virt.libvirt.imagebackend [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Selected location: {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/f328af0c-ddf7-42aa-aa98-c602105202af/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.036 248514 DEBUG nova.storage.rbd_utils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] cloning images/f328af0c-ddf7-42aa-aa98-c602105202af@snap to None/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.068 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.076 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[71439b13-1f74-407b-8dfc-40cfb8d04c9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.077 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.078 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.078 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.080 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:51 np0005558241 kernel: tap8f018c93-d0: entered promiscuous mode
Dec 13 03:40:51 np0005558241 NetworkManager[50376]: <info>  [1765615251.0810] manager: (tap8f018c93-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/391)
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.082 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.085 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.086 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:40:51Z|00925|binding|INFO|Releasing lport 0f8d26a1-147d-466a-8164-8d2036166124 from this chassis (sb_readonly=0)
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.087 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8f018c93-df47-4a6c-acdb-f508a51f75b3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8f018c93-df47-4a6c-acdb-f508a51f75b3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.088 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e12f883e-e138-474c-b210-d6e95137ff87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.089 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-8f018c93-df47-4a6c-acdb-f508a51f75b3
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/8f018c93-df47-4a6c-acdb-f508a51f75b3.pid.haproxy
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 8f018c93-df47-4a6c-acdb-f508a51f75b3
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:40:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:51.090 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'env', 'PROCESS_TAG=haproxy-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8f018c93-df47-4a6c-acdb-f508a51f75b3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.102 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.255 248514 DEBUG nova.network.neutron [req-1094199e-ac93-406d-bde3-4152d110758a req-08cdd099-8bee-4aa1-9aff-e2795c4b726e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Updated VIF entry in instance network info cache for port eacb4de9-7daa-4e47-a0b7-b0a986dc50f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.257 248514 DEBUG nova.network.neutron [req-1094199e-ac93-406d-bde3-4152d110758a req-08cdd099-8bee-4aa1-9aff-e2795c4b726e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Updating instance_info_cache with network_info: [{"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.281 248514 DEBUG oslo_concurrency.lockutils [req-1094199e-ac93-406d-bde3-4152d110758a req-08cdd099-8bee-4aa1-9aff-e2795c4b726e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:40:51 np0005558241 podman[339816]: 2025-12-13 08:40:51.433651184 +0000 UTC m=+0.022805399 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.601 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615251.6009712, 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.602 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] VM Started (Lifecycle Event)#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.632 248514 DEBUG nova.compute.manager [req-bb42b9f2-ef2e-4ab7-a7cf-06ade69bc3e6 req-65c2f735-f367-4561-ae55-530783aa285d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Received event network-vif-plugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.633 248514 DEBUG oslo_concurrency.lockutils [req-bb42b9f2-ef2e-4ab7-a7cf-06ade69bc3e6 req-65c2f735-f367-4561-ae55-530783aa285d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.633 248514 DEBUG oslo_concurrency.lockutils [req-bb42b9f2-ef2e-4ab7-a7cf-06ade69bc3e6 req-65c2f735-f367-4561-ae55-530783aa285d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.634 248514 DEBUG oslo_concurrency.lockutils [req-bb42b9f2-ef2e-4ab7-a7cf-06ade69bc3e6 req-65c2f735-f367-4561-ae55-530783aa285d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.634 248514 DEBUG nova.compute.manager [req-bb42b9f2-ef2e-4ab7-a7cf-06ade69bc3e6 req-65c2f735-f367-4561-ae55-530783aa285d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Processing event network-vif-plugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.635 248514 DEBUG nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.640 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.641 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.644 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.647 248514 INFO nova.virt.libvirt.driver [-] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Instance spawned successfully.#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.647 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.705 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.705 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615251.60113, 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.705 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.710 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.710 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.711 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.711 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.711 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.712 248514 DEBUG nova.virt.libvirt.driver [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:40:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2339: 321 pgs: 321 active+clean; 292 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 28 op/s
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.834 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.838 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615251.6406531, 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.839 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.874 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.877 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.882 248514 INFO nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Took 14.14 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.883 248514 DEBUG nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.901 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:40:51 np0005558241 podman[339816]: 2025-12-13 08:40:51.919943328 +0000 UTC m=+0.509097523 container create 009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.961 248514 INFO nova.compute.manager [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Took 15.55 seconds to build instance.#033[00m
Dec 13 03:40:51 np0005558241 nova_compute[248510]: 2025-12-13 08:40:51.990 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9013f4a5ad7d819d713aea28bc14bb02d0de3469" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:40:52 np0005558241 systemd[1]: Started libpod-conmon-009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6.scope.
Dec 13 03:40:52 np0005558241 nova_compute[248510]: 2025-12-13 08:40:52.029 248514 DEBUG oslo_concurrency.lockutils [None req-cf1ff583-166c-4756-86ab-667fa43d29e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:40:52 np0005558241 nova_compute[248510]: 2025-12-13 08:40:52.031 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 13.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:52 np0005558241 nova_compute[248510]: 2025-12-13 08:40:52.031 248514 INFO nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:40:52 np0005558241 nova_compute[248510]: 2025-12-13 08:40:52.031 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:40:52 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:40:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dfb9bdf13c872c9b470ea097dce2795d900c75827e3e75f827321fc99371a11/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:40:52 np0005558241 podman[339816]: 2025-12-13 08:40:52.196955783 +0000 UTC m=+0.786109978 container init 009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:40:52 np0005558241 podman[339816]: 2025-12-13 08:40:52.203390821 +0000 UTC m=+0.792545016 container start 009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 03:40:52 np0005558241 neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3[339862]: [NOTICE]   (339943) : New worker (339953) forked
Dec 13 03:40:52 np0005558241 neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3[339862]: [NOTICE]   (339943) : Loading success.
Dec 13 03:40:52 np0005558241 nova_compute[248510]: 2025-12-13 08:40:52.253 248514 DEBUG nova.objects.instance [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:40:52 np0005558241 nova_compute[248510]: 2025-12-13 08:40:52.432 248514 DEBUG nova.storage.rbd_utils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] flattening vms/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec 13 03:40:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:40:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:40:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:40:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:40:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:40:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:40:53 np0005558241 nova_compute[248510]: 2025-12-13 08:40:53.020 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:53 np0005558241 nova_compute[248510]: 2025-12-13 08:40:53.329 248514 DEBUG nova.network.neutron [req-657eafdd-20e0-49c2-a0ab-d5b037b5cc32 req-993b5582-ef6d-43d1-9bc2-86110ab5eb8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updated VIF entry in instance network info cache for port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:40:53 np0005558241 nova_compute[248510]: 2025-12-13 08:40:53.330 248514 DEBUG nova.network.neutron [req-657eafdd-20e0-49c2-a0ab-d5b037b5cc32 req-993b5582-ef6d-43d1-9bc2-86110ab5eb8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updating instance_info_cache with network_info: [{"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:40:53 np0005558241 nova_compute[248510]: 2025-12-13 08:40:53.372 248514 DEBUG oslo_concurrency.lockutils [req-657eafdd-20e0-49c2-a0ab-d5b037b5cc32 req-993b5582-ef6d-43d1-9bc2-86110ab5eb8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9b486227-b98c-4393-9a3c-aae3e3c419a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:40:53 np0005558241 nova_compute[248510]: 2025-12-13 08:40:53.785 248514 DEBUG nova.compute.manager [req-ad8a1197-dfaf-471b-8fa8-f0521ad6df1f req-bf65eeb2-bcda-4472-aa96-9c52dd5ac800 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Received event network-vif-plugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:40:53 np0005558241 nova_compute[248510]: 2025-12-13 08:40:53.785 248514 DEBUG oslo_concurrency.lockutils [req-ad8a1197-dfaf-471b-8fa8-f0521ad6df1f req-bf65eeb2-bcda-4472-aa96-9c52dd5ac800 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:53 np0005558241 nova_compute[248510]: 2025-12-13 08:40:53.786 248514 DEBUG oslo_concurrency.lockutils [req-ad8a1197-dfaf-471b-8fa8-f0521ad6df1f req-bf65eeb2-bcda-4472-aa96-9c52dd5ac800 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:53 np0005558241 nova_compute[248510]: 2025-12-13 08:40:53.786 248514 DEBUG oslo_concurrency.lockutils [req-ad8a1197-dfaf-471b-8fa8-f0521ad6df1f req-bf65eeb2-bcda-4472-aa96-9c52dd5ac800 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:40:53 np0005558241 nova_compute[248510]: 2025-12-13 08:40:53.786 248514 DEBUG nova.compute.manager [req-ad8a1197-dfaf-471b-8fa8-f0521ad6df1f req-bf65eeb2-bcda-4472-aa96-9c52dd5ac800 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] No waiting events found dispatching network-vif-plugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:40:53 np0005558241 nova_compute[248510]: 2025-12-13 08:40:53.786 248514 WARNING nova.compute.manager [req-ad8a1197-dfaf-471b-8fa8-f0521ad6df1f req-bf65eeb2-bcda-4472-aa96-9c52dd5ac800 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Received unexpected event network-vif-plugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:40:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2340: 321 pgs: 321 active+clean; 293 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 963 KiB/s rd, 1.1 MiB/s wr, 31 op/s
Dec 13 03:40:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:40:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:40:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:40:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:40:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:40:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:40:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:40:54 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:40:54 np0005558241 podman[340089]: 2025-12-13 08:40:54.587901196 +0000 UTC m=+0.029850214 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:40:54 np0005558241 nova_compute[248510]: 2025-12-13 08:40:54.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:40:54 np0005558241 nova_compute[248510]: 2025-12-13 08:40:54.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:40:54 np0005558241 nova_compute[248510]: 2025-12-13 08:40:54.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:40:55 np0005558241 podman[340089]: 2025-12-13 08:40:55.049184814 +0000 UTC m=+0.491133732 container create 33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_stonebraker, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 03:40:55 np0005558241 nova_compute[248510]: 2025-12-13 08:40:55.174 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-658e5f04-399b-4a8a-8680-5ae9717949c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:40:55 np0005558241 nova_compute[248510]: 2025-12-13 08:40:55.175 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-658e5f04-399b-4a8a-8680-5ae9717949c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:40:55 np0005558241 nova_compute[248510]: 2025-12-13 08:40:55.175 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:40:55 np0005558241 nova_compute[248510]: 2025-12-13 08:40:55.175 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 658e5f04-399b-4a8a-8680-5ae9717949c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:40:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:55.268 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:40:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:55.420 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:55.420 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:40:55.421 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:40:55 np0005558241 systemd[1]: Started libpod-conmon-33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b.scope.
Dec 13 03:40:55 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:40:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2341: 321 pgs: 321 active+clean; 318 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 2.3 MiB/s wr, 141 op/s
Dec 13 03:40:56 np0005558241 nova_compute[248510]: 2025-12-13 08:40:56.060 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:56 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:40:56 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:40:56 np0005558241 podman[340089]: 2025-12-13 08:40:56.889971769 +0000 UTC m=+2.331920707 container init 33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_stonebraker, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 03:40:56 np0005558241 podman[340089]: 2025-12-13 08:40:56.904990293 +0000 UTC m=+2.346939211 container start 33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_stonebraker, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:40:56 np0005558241 wizardly_stonebraker[340106]: 167 167
Dec 13 03:40:56 np0005558241 systemd[1]: libpod-33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b.scope: Deactivated successfully.
Dec 13 03:40:56 np0005558241 conmon[340106]: conmon 33ac59d649209929289d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b.scope/container/memory.events
Dec 13 03:40:57 np0005558241 podman[340089]: 2025-12-13 08:40:57.301672236 +0000 UTC m=+2.743621154 container attach 33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_stonebraker, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 03:40:57 np0005558241 podman[340089]: 2025-12-13 08:40:57.303532415 +0000 UTC m=+2.745481343 container died 33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_stonebraker, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:40:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2342: 321 pgs: 321 active+clean; 318 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.5 MiB/s wr, 130 op/s
Dec 13 03:40:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:40:57 np0005558241 systemd[1]: var-lib-containers-storage-overlay-fe49bf938c9072578a61d4995a6e04bbfcd819c3ced6ba46b9dff0db01fbffd5-merged.mount: Deactivated successfully.
Dec 13 03:40:57 np0005558241 nova_compute[248510]: 2025-12-13 08:40:57.959 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Updating instance_info_cache with network_info: [{"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:40:57 np0005558241 nova_compute[248510]: 2025-12-13 08:40:57.997 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Image rbd:vms/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Dec 13 03:40:57 np0005558241 nova_compute[248510]: 2025-12-13 08:40:57.998 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:40:57 np0005558241 nova_compute[248510]: 2025-12-13 08:40:57.998 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Ensure instance console log exists: /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:40:57 np0005558241 nova_compute[248510]: 2025-12-13 08:40:57.999 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:57 np0005558241 nova_compute[248510]: 2025-12-13 08:40:57.999 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:57 np0005558241 nova_compute[248510]: 2025-12-13 08:40:57.999 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.001 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Start _get_guest_xml network_info=[{"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:40:11Z,direct_url=<?>,disk_format='raw',id=f328af0c-ddf7-42aa-aa98-c602105202af,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-647049604-shelved',owner='f0aee359fbaa484eb7ead3f81eef51e7',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:40:21Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.005 248514 WARNING nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.010 248514 DEBUG nova.virt.libvirt.host [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.010 248514 DEBUG nova.virt.libvirt.host [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.013 248514 DEBUG nova.virt.libvirt.host [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.013 248514 DEBUG nova.virt.libvirt.host [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.014 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.014 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:40:11Z,direct_url=<?>,disk_format='raw',id=f328af0c-ddf7-42aa-aa98-c602105202af,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-647049604-shelved',owner='f0aee359fbaa484eb7ead3f81eef51e7',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:40:21Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.014 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.014 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.015 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.015 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.015 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.015 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.016 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.016 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.016 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.016 248514 DEBUG nova.virt.hardware [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.016 248514 DEBUG nova.objects.instance [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.024 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.034 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.070 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-658e5f04-399b-4a8a-8680-5ae9717949c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.073 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.075 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:40:58 np0005558241 podman[340089]: 2025-12-13 08:40:58.257220106 +0000 UTC m=+3.699169024 container remove 33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec 13 03:40:58 np0005558241 systemd[1]: libpod-conmon-33ac59d649209929289da533cce7fc3c5fb6831389656d1aae75b8bbc2bb968b.scope: Deactivated successfully.
Dec 13 03:40:58 np0005558241 podman[340150]: 2025-12-13 08:40:58.416810641 +0000 UTC m=+0.025552931 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:40:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:40:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2287762503' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.617 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.643 248514 DEBUG nova.storage.rbd_utils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:40:58 np0005558241 nova_compute[248510]: 2025-12-13 08:40:58.648 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:40:58 np0005558241 podman[340150]: 2025-12-13 08:40:58.684534613 +0000 UTC m=+0.293276873 container create 50cd7ddc1c03757b1994cfaf86fe1dce2ecb7c6000697d97a2849d637abc202e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_buck, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Dec 13 03:40:58 np0005558241 systemd[1]: Started libpod-conmon-50cd7ddc1c03757b1994cfaf86fe1dce2ecb7c6000697d97a2849d637abc202e.scope.
Dec 13 03:40:58 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:40:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cffa0d71fe3ca389f759e4dfea784340475e0310391daff34bed5caca2492032/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:40:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cffa0d71fe3ca389f759e4dfea784340475e0310391daff34bed5caca2492032/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:40:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cffa0d71fe3ca389f759e4dfea784340475e0310391daff34bed5caca2492032/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:40:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cffa0d71fe3ca389f759e4dfea784340475e0310391daff34bed5caca2492032/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:40:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cffa0d71fe3ca389f759e4dfea784340475e0310391daff34bed5caca2492032/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:40:58 np0005558241 podman[340150]: 2025-12-13 08:40:58.871497466 +0000 UTC m=+0.480239756 container init 50cd7ddc1c03757b1994cfaf86fe1dce2ecb7c6000697d97a2849d637abc202e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_buck, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:40:58 np0005558241 podman[340150]: 2025-12-13 08:40:58.880339978 +0000 UTC m=+0.489082238 container start 50cd7ddc1c03757b1994cfaf86fe1dce2ecb7c6000697d97a2849d637abc202e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_buck, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:40:58 np0005558241 podman[340150]: 2025-12-13 08:40:58.924761033 +0000 UTC m=+0.533503313 container attach 50cd7ddc1c03757b1994cfaf86fe1dce2ecb7c6000697d97a2849d637abc202e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_buck, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:40:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:40:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1201357498' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.257 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.263 248514 DEBUG nova.virt.libvirt.vif [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:37:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-647049604',display_name='tempest-ServerActionsTestOtherB-server-647049604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-647049604',id=85,image_ref='f328af0c-ddf7-42aa-aa98-c602105202af',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-2123324397',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:37:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-ys1cli8v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member',shelved_at='2025-12-13T08:40:22.342504',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='f328af0c-ddf7-42aa-aa98-c602105202af'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:40:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7507939da64e4320a1c6f389d0fc9045',uuid=9b486227-b98c-4393-9a3c-aae3e3c419a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.265 248514 DEBUG nova.network.os_vif_util [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.267 248514 DEBUG nova.network.os_vif_util [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.269 248514 DEBUG nova.objects.instance [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.296 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:40:59 np0005558241 nova_compute[248510]:  <uuid>9b486227-b98c-4393-9a3c-aae3e3c419a8</uuid>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:  <name>instance-00000055</name>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerActionsTestOtherB-server-647049604</nova:name>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:40:58</nova:creationTime>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:40:59 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:        <nova:user uuid="7507939da64e4320a1c6f389d0fc9045">tempest-ServerActionsTestOtherB-1515133862-project-member</nova:user>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:        <nova:project uuid="f0aee359fbaa484eb7ead3f81eef51e7">tempest-ServerActionsTestOtherB-1515133862</nova:project>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="f328af0c-ddf7-42aa-aa98-c602105202af"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:        <nova:port uuid="ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01">
Dec 13 03:40:59 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <entry name="serial">9b486227-b98c-4393-9a3c-aae3e3c419a8</entry>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <entry name="uuid">9b486227-b98c-4393-9a3c-aae3e3c419a8</entry>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk">
Dec 13 03:40:59 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:40:59 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/9b486227-b98c-4393-9a3c-aae3e3c419a8_disk.config">
Dec 13 03:40:59 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:40:59 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:e0:82:3a"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <target dev="tapea5aafe7-a7"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/console.log" append="off"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <input type="keyboard" bus="usb"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:40:59 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:40:59 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:40:59 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:40:59 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.312 248514 DEBUG nova.compute.manager [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Preparing to wait for external event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.313 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.314 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.314 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.316 248514 DEBUG nova.virt.libvirt.vif [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:37:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-647049604',display_name='tempest-ServerActionsTestOtherB-server-647049604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-647049604',id=85,image_ref='f328af0c-ddf7-42aa-aa98-c602105202af',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-2123324397',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:37:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-ys1cli8v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member',shelved_at='2025-12-13T08:40:22.342504',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='f328af0c-ddf7-42aa-aa98-c602105202af'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:40:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7507939da64e4320a1c6f389d0fc9045',uuid=9b486227-b98c-4393-9a3c-aae3e3c419a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.317 248514 DEBUG nova.network.os_vif_util [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.319 248514 DEBUG nova.network.os_vif_util [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.320 248514 DEBUG os_vif [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.321 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.322 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.322 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.326 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.326 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea5aafe7-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.327 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapea5aafe7-a7, col_values=(('external_ids', {'iface-id': 'ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:82:3a', 'vm-uuid': '9b486227-b98c-4393-9a3c-aae3e3c419a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.329 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:59 np0005558241 NetworkManager[50376]: <info>  [1765615259.3302] manager: (tapea5aafe7-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.331 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.339 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.341 248514 INFO os_vif [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7')#033[00m
Dec 13 03:40:59 np0005558241 laughing_buck[340188]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:40:59 np0005558241 laughing_buck[340188]: --> All data devices are unavailable
Dec 13 03:40:59 np0005558241 systemd[1]: libpod-50cd7ddc1c03757b1994cfaf86fe1dce2ecb7c6000697d97a2849d637abc202e.scope: Deactivated successfully.
Dec 13 03:40:59 np0005558241 podman[340150]: 2025-12-13 08:40:59.401115216 +0000 UTC m=+1.009857486 container died 50cd7ddc1c03757b1994cfaf86fe1dce2ecb7c6000697d97a2849d637abc202e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.439 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.440 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.440 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] No VIF found with MAC fa:16:3e:e0:82:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.441 248514 INFO nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Using config drive#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.458 248514 DEBUG nova.storage.rbd_utils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.478 248514 DEBUG nova.objects.instance [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:40:59 np0005558241 systemd[1]: var-lib-containers-storage-overlay-cffa0d71fe3ca389f759e4dfea784340475e0310391daff34bed5caca2492032-merged.mount: Deactivated successfully.
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.538 248514 DEBUG nova.objects.instance [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'keypairs' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:40:59 np0005558241 podman[340150]: 2025-12-13 08:40:59.616091404 +0000 UTC m=+1.224833664 container remove 50cd7ddc1c03757b1994cfaf86fe1dce2ecb7c6000697d97a2849d637abc202e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_buck, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 03:40:59 np0005558241 systemd[1]: libpod-conmon-50cd7ddc1c03757b1994cfaf86fe1dce2ecb7c6000697d97a2849d637abc202e.scope: Deactivated successfully.
Dec 13 03:40:59 np0005558241 nova_compute[248510]: 2025-12-13 08:40:59.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:40:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2343: 321 pgs: 321 active+clean; 354 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.2 MiB/s wr, 141 op/s
Dec 13 03:41:00 np0005558241 podman[340322]: 2025-12-13 08:41:00.076010755 +0000 UTC m=+0.031190089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:41:00 np0005558241 podman[340322]: 2025-12-13 08:41:00.288507107 +0000 UTC m=+0.243686411 container create 006a6be6931f90af8030b06cad2362a7b5c2167ff8ca0d9d10f6cacc975cae7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:41:00 np0005558241 systemd[1]: Started libpod-conmon-006a6be6931f90af8030b06cad2362a7b5c2167ff8ca0d9d10f6cacc975cae7d.scope.
Dec 13 03:41:00 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:41:00 np0005558241 podman[340322]: 2025-12-13 08:41:00.390547943 +0000 UTC m=+0.345727267 container init 006a6be6931f90af8030b06cad2362a7b5c2167ff8ca0d9d10f6cacc975cae7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:41:00 np0005558241 podman[340322]: 2025-12-13 08:41:00.398579174 +0000 UTC m=+0.353758478 container start 006a6be6931f90af8030b06cad2362a7b5c2167ff8ca0d9d10f6cacc975cae7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:41:00 np0005558241 goofy_shockley[340340]: 167 167
Dec 13 03:41:00 np0005558241 systemd[1]: libpod-006a6be6931f90af8030b06cad2362a7b5c2167ff8ca0d9d10f6cacc975cae7d.scope: Deactivated successfully.
Dec 13 03:41:00 np0005558241 podman[340322]: 2025-12-13 08:41:00.404354655 +0000 UTC m=+0.359533989 container attach 006a6be6931f90af8030b06cad2362a7b5c2167ff8ca0d9d10f6cacc975cae7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:41:00 np0005558241 podman[340322]: 2025-12-13 08:41:00.404630743 +0000 UTC m=+0.359810057 container died 006a6be6931f90af8030b06cad2362a7b5c2167ff8ca0d9d10f6cacc975cae7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_shockley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:41:00 np0005558241 systemd[1]: var-lib-containers-storage-overlay-cab5a099aed85c6e0b8768e9af6cb597d8a61f4e310b1812c48718425f6dc349-merged.mount: Deactivated successfully.
Dec 13 03:41:00 np0005558241 podman[340322]: 2025-12-13 08:41:00.447008144 +0000 UTC m=+0.402187448 container remove 006a6be6931f90af8030b06cad2362a7b5c2167ff8ca0d9d10f6cacc975cae7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 03:41:00 np0005558241 systemd[1]: libpod-conmon-006a6be6931f90af8030b06cad2362a7b5c2167ff8ca0d9d10f6cacc975cae7d.scope: Deactivated successfully.
Dec 13 03:41:00 np0005558241 nova_compute[248510]: 2025-12-13 08:41:00.627 248514 INFO nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Creating config drive at /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/disk.config#033[00m
Dec 13 03:41:00 np0005558241 nova_compute[248510]: 2025-12-13 08:41:00.636 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8bbe45o2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:00 np0005558241 podman[340364]: 2025-12-13 08:41:00.661553891 +0000 UTC m=+0.069753131 container create 954d7dc72a3841ffc61055e3cc4ad946d4dbe0f0068df4bb69e33c0f66e16704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_rosalind, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:41:00 np0005558241 podman[340364]: 2025-12-13 08:41:00.617735931 +0000 UTC m=+0.025935241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:41:00 np0005558241 systemd[1]: Started libpod-conmon-954d7dc72a3841ffc61055e3cc4ad946d4dbe0f0068df4bb69e33c0f66e16704.scope.
Dec 13 03:41:00 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:41:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dc6fbcf9aba334eaa344dd8dde57fee2f89c81fbb36648aa78e3b1f384428e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:41:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dc6fbcf9aba334eaa344dd8dde57fee2f89c81fbb36648aa78e3b1f384428e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:41:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dc6fbcf9aba334eaa344dd8dde57fee2f89c81fbb36648aa78e3b1f384428e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:41:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dc6fbcf9aba334eaa344dd8dde57fee2f89c81fbb36648aa78e3b1f384428e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:41:00 np0005558241 podman[340364]: 2025-12-13 08:41:00.77895431 +0000 UTC m=+0.187153580 container init 954d7dc72a3841ffc61055e3cc4ad946d4dbe0f0068df4bb69e33c0f66e16704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:41:00 np0005558241 podman[340364]: 2025-12-13 08:41:00.791576471 +0000 UTC m=+0.199775721 container start 954d7dc72a3841ffc61055e3cc4ad946d4dbe0f0068df4bb69e33c0f66e16704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:41:00 np0005558241 podman[340364]: 2025-12-13 08:41:00.794946939 +0000 UTC m=+0.203146209 container attach 954d7dc72a3841ffc61055e3cc4ad946d4dbe0f0068df4bb69e33c0f66e16704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_rosalind, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:41:00 np0005558241 nova_compute[248510]: 2025-12-13 08:41:00.804 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8bbe45o2" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:00 np0005558241 nova_compute[248510]: 2025-12-13 08:41:00.831 248514 DEBUG nova.storage.rbd_utils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] rbd image 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:41:00 np0005558241 nova_compute[248510]: 2025-12-13 08:41:00.834 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/disk.config 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:00 np0005558241 nova_compute[248510]: 2025-12-13 08:41:00.982 248514 DEBUG oslo_concurrency.processutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/disk.config 9b486227-b98c-4393-9a3c-aae3e3c419a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:00 np0005558241 nova_compute[248510]: 2025-12-13 08:41:00.983 248514 INFO nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Deleting local config drive /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8/disk.config because it was imported into RBD.#033[00m
Dec 13 03:41:01 np0005558241 kernel: tapea5aafe7-a7: entered promiscuous mode
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.040 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:01 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:01Z|00926|binding|INFO|Claiming lport ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 for this chassis.
Dec 13 03:41:01 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:01Z|00927|binding|INFO|ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01: Claiming fa:16:3e:e0:82:3a 10.100.0.3
Dec 13 03:41:01 np0005558241 NetworkManager[50376]: <info>  [1765615261.0506] manager: (tapea5aafe7-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/393)
Dec 13 03:41:01 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:01Z|00928|binding|INFO|Setting lport ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 ovn-installed in OVS
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.065 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.069 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:01 np0005558241 systemd-udevd[340441]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:41:01 np0005558241 systemd-machined[210538]: New machine qemu-117-instance-00000055.
Dec 13 03:41:01 np0005558241 NetworkManager[50376]: <info>  [1765615261.0980] device (tapea5aafe7-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:41:01 np0005558241 NetworkManager[50376]: <info>  [1765615261.0988] device (tapea5aafe7-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:41:01 np0005558241 systemd[1]: Started Virtual Machine qemu-117-instance-00000055.
Dec 13 03:41:01 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:01Z|00929|binding|INFO|Setting lport ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 up in Southbound
Dec 13 03:41:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.105 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:82:3a 10.100.0.3'], port_security=['fa:16:3e:e0:82:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9b486227-b98c-4393-9a3c-aae3e3c419a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-369f7528-6571-47b6-a030-5281647e1eac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'neutron:revision_number': '7', 'neutron:security_group_ids': '35769c0b-1e0e-43bc-832c-d54c65a53a36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ade703f-c35f-4093-80be-9470cf548d7c, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:41:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.107 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 in datapath 369f7528-6571-47b6-a030-5281647e1eac bound to our chassis#033[00m
Dec 13 03:41:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.109 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 369f7528-6571-47b6-a030-5281647e1eac#033[00m
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]: {
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:    "0": [
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:        {
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "devices": [
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "/dev/loop3"
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            ],
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "lv_name": "ceph_lv0",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "lv_size": "21470642176",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "name": "ceph_lv0",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "tags": {
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.cluster_name": "ceph",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.crush_device_class": "",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.encrypted": "0",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.objectstore": "bluestore",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.osd_id": "0",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.type": "block",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.vdo": "0",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.with_tpm": "0"
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            },
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "type": "block",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "vg_name": "ceph_vg0"
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:        }
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:    ],
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:    "1": [
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:        {
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "devices": [
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "/dev/loop4"
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            ],
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "lv_name": "ceph_lv1",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "lv_size": "21470642176",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "name": "ceph_lv1",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "tags": {
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.cluster_name": "ceph",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.crush_device_class": "",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.encrypted": "0",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.objectstore": "bluestore",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.osd_id": "1",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.type": "block",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.vdo": "0",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.with_tpm": "0"
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            },
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "type": "block",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "vg_name": "ceph_vg1"
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:        }
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:    ],
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:    "2": [
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:        {
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "devices": [
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "/dev/loop5"
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            ],
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "lv_name": "ceph_lv2",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "lv_size": "21470642176",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "name": "ceph_lv2",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "tags": {
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.cluster_name": "ceph",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.crush_device_class": "",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.encrypted": "0",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.objectstore": "bluestore",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.osd_id": "2",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.type": "block",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.vdo": "0",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:                "ceph.with_tpm": "0"
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            },
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "type": "block",
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:            "vg_name": "ceph_vg2"
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:        }
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]:    ]
Dec 13 03:41:01 np0005558241 loving_rosalind[340383]: }
Dec 13 03:41:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.128 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[68916ac6-63dc-4c98-ac5c-012d2f500348]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:01 np0005558241 podman[340364]: 2025-12-13 08:41:01.171004441 +0000 UTC m=+0.579203711 container died 954d7dc72a3841ffc61055e3cc4ad946d4dbe0f0068df4bb69e33c0f66e16704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_rosalind, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 03:41:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.171 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d77d5eb4-8b3b-465c-8536-2c423735ed56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:01 np0005558241 systemd[1]: libpod-954d7dc72a3841ffc61055e3cc4ad946d4dbe0f0068df4bb69e33c0f66e16704.scope: Deactivated successfully.
Dec 13 03:41:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.174 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[646cc3aa-d601-460b-80da-f76e5efd02f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:01 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7dc6fbcf9aba334eaa344dd8dde57fee2f89c81fbb36648aa78e3b1f384428e0-merged.mount: Deactivated successfully.
Dec 13 03:41:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.218 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2f5920-33e6-42db-b42e-a93acf033d79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:01 np0005558241 podman[340364]: 2025-12-13 08:41:01.225789738 +0000 UTC m=+0.633988988 container remove 954d7dc72a3841ffc61055e3cc4ad946d4dbe0f0068df4bb69e33c0f66e16704 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:41:01 np0005558241 systemd[1]: libpod-conmon-954d7dc72a3841ffc61055e3cc4ad946d4dbe0f0068df4bb69e33c0f66e16704.scope: Deactivated successfully.
Dec 13 03:41:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.248 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d0eae443-11b0-4838-a140-474ea57f001c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap369f7528-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:b1:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763105, 'reachable_time': 34886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340468, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.266 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[90a7a7a4-7ef7-4168-a0f6-68de4af2c23a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763117, 'tstamp': 763117}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340469, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763121, 'tstamp': 763121}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340469, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.268 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap369f7528-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.270 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.271 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.272 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap369f7528-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.272 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:41:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.272 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap369f7528-60, col_values=(('external_ids', {'iface-id': '6c33e57f-b143-4ed7-a271-dc33d6e81057'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:01.272 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.536 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615261.5351875, 9b486227-b98c-4393-9a3c-aae3e3c419a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.537 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] VM Started (Lifecycle Event)#033[00m
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.594 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.599 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615261.5353959, 9b486227-b98c-4393-9a3c-aae3e3c419a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.599 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:41:01 np0005558241 podman[340575]: 2025-12-13 08:41:01.7211681 +0000 UTC m=+0.039223910 container create f1c7cbd67a466a042683137b9f779750447c4bede02f40b8dc47562724c8f8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shirley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.762 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:41:01 np0005558241 systemd[1]: Started libpod-conmon-f1c7cbd67a466a042683137b9f779750447c4bede02f40b8dc47562724c8f8ac.scope.
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.771 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:41:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:41:01 np0005558241 podman[340575]: 2025-12-13 08:41:01.70514541 +0000 UTC m=+0.023201270 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:41:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2344: 321 pgs: 321 active+clean; 372 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.9 MiB/s wr, 150 op/s
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.813 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.814 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.815 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.815 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.815 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:01 np0005558241 podman[340575]: 2025-12-13 08:41:01.825527397 +0000 UTC m=+0.143583237 container init f1c7cbd67a466a042683137b9f779750447c4bede02f40b8dc47562724c8f8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shirley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 03:41:01 np0005558241 podman[340575]: 2025-12-13 08:41:01.836091714 +0000 UTC m=+0.154147534 container start f1c7cbd67a466a042683137b9f779750447c4bede02f40b8dc47562724c8f8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shirley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:41:01 np0005558241 podman[340575]: 2025-12-13 08:41:01.841027283 +0000 UTC m=+0.159083103 container attach f1c7cbd67a466a042683137b9f779750447c4bede02f40b8dc47562724c8f8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shirley, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:41:01 np0005558241 stoic_shirley[340591]: 167 167
Dec 13 03:41:01 np0005558241 systemd[1]: libpod-f1c7cbd67a466a042683137b9f779750447c4bede02f40b8dc47562724c8f8ac.scope: Deactivated successfully.
Dec 13 03:41:01 np0005558241 podman[340575]: 2025-12-13 08:41:01.844854924 +0000 UTC m=+0.162910744 container died f1c7cbd67a466a042683137b9f779750447c4bede02f40b8dc47562724c8f8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 13 03:41:01 np0005558241 nova_compute[248510]: 2025-12-13 08:41:01.854 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:41:01 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f4a726975b97b3b1a6e8f6824650558b134928b5f564be8bd9d4d6ad49ba1b02-merged.mount: Deactivated successfully.
Dec 13 03:41:01 np0005558241 podman[340575]: 2025-12-13 08:41:01.926451144 +0000 UTC m=+0.244506964 container remove f1c7cbd67a466a042683137b9f779750447c4bede02f40b8dc47562724c8f8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:41:01 np0005558241 systemd[1]: libpod-conmon-f1c7cbd67a466a042683137b9f779750447c4bede02f40b8dc47562724c8f8ac.scope: Deactivated successfully.
Dec 13 03:41:02 np0005558241 podman[340634]: 2025-12-13 08:41:02.169150179 +0000 UTC m=+0.079382543 container create 82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 03:41:02 np0005558241 podman[340634]: 2025-12-13 08:41:02.115885812 +0000 UTC m=+0.026118176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:41:02 np0005558241 systemd[1]: Started libpod-conmon-82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a.scope.
Dec 13 03:41:02 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:41:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7824ed3fc95ef9cb80422daf27631a37749c75184aae74d3eebf6d0ccca20b53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:41:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7824ed3fc95ef9cb80422daf27631a37749c75184aae74d3eebf6d0ccca20b53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:41:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7824ed3fc95ef9cb80422daf27631a37749c75184aae74d3eebf6d0ccca20b53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:41:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7824ed3fc95ef9cb80422daf27631a37749c75184aae74d3eebf6d0ccca20b53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:41:02 np0005558241 podman[340634]: 2025-12-13 08:41:02.288367455 +0000 UTC m=+0.198599819 container init 82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_panini, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 03:41:02 np0005558241 podman[340634]: 2025-12-13 08:41:02.298802589 +0000 UTC m=+0.209034953 container start 82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:41:02 np0005558241 podman[340634]: 2025-12-13 08:41:02.302120006 +0000 UTC m=+0.212352390 container attach 82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_panini, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.397 248514 DEBUG nova.compute.manager [req-0b04eb6b-6083-4606-a391-d0870878f47b req-b6dbc1a8-2ee7-4766-bfd2-e843b60b0ffe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.399 248514 DEBUG oslo_concurrency.lockutils [req-0b04eb6b-6083-4606-a391-d0870878f47b req-b6dbc1a8-2ee7-4766-bfd2-e843b60b0ffe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.399 248514 DEBUG oslo_concurrency.lockutils [req-0b04eb6b-6083-4606-a391-d0870878f47b req-b6dbc1a8-2ee7-4766-bfd2-e843b60b0ffe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.400 248514 DEBUG oslo_concurrency.lockutils [req-0b04eb6b-6083-4606-a391-d0870878f47b req-b6dbc1a8-2ee7-4766-bfd2-e843b60b0ffe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.400 248514 DEBUG nova.compute.manager [req-0b04eb6b-6083-4606-a391-d0870878f47b req-b6dbc1a8-2ee7-4766-bfd2-e843b60b0ffe 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Processing event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.405 248514 DEBUG nova.compute.manager [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.410 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615262.4102623, 9b486227-b98c-4393-9a3c-aae3e3c419a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.411 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.433 248514 DEBUG nova.virt.libvirt.driver [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.440 248514 INFO nova.virt.libvirt.driver [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Instance spawned successfully.#033[00m
Dec 13 03:41:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:41:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/221487140' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.475 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.623 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.628 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.666 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.677 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.678 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.683 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.683 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.689 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.689 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:41:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.938 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.942 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3257MB free_disk=59.87570957839489GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.942 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:02 np0005558241 nova_compute[248510]: 2025-12-13 08:41:02.942 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:03 np0005558241 lvm[340729]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:41:03 np0005558241 lvm[340729]: VG ceph_vg0 finished
Dec 13 03:41:03 np0005558241 lvm[340731]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:41:03 np0005558241 lvm[340731]: VG ceph_vg1 finished
Dec 13 03:41:03 np0005558241 lvm[340734]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:41:03 np0005558241 lvm[340734]: VG ceph_vg2 finished
Dec 13 03:41:03 np0005558241 nova_compute[248510]: 2025-12-13 08:41:03.155 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 658e5f04-399b-4a8a-8680-5ae9717949c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:41:03 np0005558241 nova_compute[248510]: 2025-12-13 08:41:03.156 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:41:03 np0005558241 nova_compute[248510]: 2025-12-13 08:41:03.156 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9b486227-b98c-4393-9a3c-aae3e3c419a8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:41:03 np0005558241 nova_compute[248510]: 2025-12-13 08:41:03.156 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:41:03 np0005558241 nova_compute[248510]: 2025-12-13 08:41:03.157 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:41:03 np0005558241 suspicious_panini[340651]: {}
Dec 13 03:41:03 np0005558241 systemd[1]: libpod-82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a.scope: Deactivated successfully.
Dec 13 03:41:03 np0005558241 systemd[1]: libpod-82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a.scope: Consumed 1.435s CPU time.
Dec 13 03:41:03 np0005558241 conmon[340651]: conmon 82c2b06c04589e96dbe0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a.scope/container/memory.events
Dec 13 03:41:03 np0005558241 podman[340634]: 2025-12-13 08:41:03.253322482 +0000 UTC m=+1.163554856 container died 82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_panini, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:41:03 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7824ed3fc95ef9cb80422daf27631a37749c75184aae74d3eebf6d0ccca20b53-merged.mount: Deactivated successfully.
Dec 13 03:41:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Dec 13 03:41:03 np0005558241 podman[340634]: 2025-12-13 08:41:03.310335727 +0000 UTC m=+1.220568091 container remove 82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:41:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Dec 13 03:41:03 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Dec 13 03:41:03 np0005558241 systemd[1]: libpod-conmon-82c2b06c04589e96dbe00c51dc611dc1f83cb5992a537a095fd8db1ed523376a.scope: Deactivated successfully.
Dec 13 03:41:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:41:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:41:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:41:03 np0005558241 nova_compute[248510]: 2025-12-13 08:41:03.397 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:41:03 np0005558241 nova_compute[248510]: 2025-12-13 08:41:03.456 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "f5485560-d9b8-44ef-9425-57a45ac866af" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:03 np0005558241 nova_compute[248510]: 2025-12-13 08:41:03.457 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:03 np0005558241 nova_compute[248510]: 2025-12-13 08:41:03.512 248514 DEBUG nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:41:03 np0005558241 nova_compute[248510]: 2025-12-13 08:41:03.665 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2346: 321 pgs: 321 active+clean; 372 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 4.7 MiB/s wr, 160 op/s
Dec 13 03:41:03 np0005558241 nova_compute[248510]: 2025-12-13 08:41:03.850 248514 DEBUG nova.compute.manager [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:41:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:41:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1833703554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:41:04 np0005558241 nova_compute[248510]: 2025-12-13 08:41:04.063 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:04 np0005558241 nova_compute[248510]: 2025-12-13 08:41:04.070 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:41:04 np0005558241 nova_compute[248510]: 2025-12-13 08:41:04.150 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:41:04 np0005558241 nova_compute[248510]: 2025-12-13 08:41:04.166 248514 DEBUG oslo_concurrency.lockutils [None req-a0901c81-b592-4ada-bb8c-5a8de29546c5 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 25.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:41:04 np0005558241 nova_compute[248510]: 2025-12-13 08:41:04.195 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 03:41:04 np0005558241 nova_compute[248510]: 2025-12-13 08:41:04.195 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:41:04 np0005558241 nova_compute[248510]: 2025-12-13 08:41:04.195 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:41:04 np0005558241 nova_compute[248510]: 2025-12-13 08:41:04.204 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:41:04 np0005558241 nova_compute[248510]: 2025-12-13 08:41:04.205 248514 INFO nova.compute.claims [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:41:04 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:41:04 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:41:04 np0005558241 nova_compute[248510]: 2025-12-13 08:41:04.330 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:41:04 np0005558241 nova_compute[248510]: 2025-12-13 08:41:04.533 248514 DEBUG nova.compute.manager [req-72308b1d-fd0d-45e7-815d-d173cd9881e8 req-2917cc2e-49bd-4616-a82b-c0565d19438b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:41:04 np0005558241 nova_compute[248510]: 2025-12-13 08:41:04.533 248514 DEBUG oslo_concurrency.lockutils [req-72308b1d-fd0d-45e7-815d-d173cd9881e8 req-2917cc2e-49bd-4616-a82b-c0565d19438b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:41:04 np0005558241 nova_compute[248510]: 2025-12-13 08:41:04.533 248514 DEBUG oslo_concurrency.lockutils [req-72308b1d-fd0d-45e7-815d-d173cd9881e8 req-2917cc2e-49bd-4616-a82b-c0565d19438b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:41:04 np0005558241 nova_compute[248510]: 2025-12-13 08:41:04.534 248514 DEBUG oslo_concurrency.lockutils [req-72308b1d-fd0d-45e7-815d-d173cd9881e8 req-2917cc2e-49bd-4616-a82b-c0565d19438b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:41:04 np0005558241 nova_compute[248510]: 2025-12-13 08:41:04.534 248514 DEBUG nova.compute.manager [req-72308b1d-fd0d-45e7-815d-d173cd9881e8 req-2917cc2e-49bd-4616-a82b-c0565d19438b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] No waiting events found dispatching network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:41:04 np0005558241 nova_compute[248510]: 2025-12-13 08:41:04.534 248514 WARNING nova.compute.manager [req-72308b1d-fd0d-45e7-815d-d173cd9881e8 req-2917cc2e-49bd-4616-a82b-c0565d19438b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received unexpected event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 for instance with vm_state active and task_state None.
Dec 13 03:41:04 np0005558241 nova_compute[248510]: 2025-12-13 08:41:04.617 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:41:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:41:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2993607602' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:41:05 np0005558241 nova_compute[248510]: 2025-12-13 08:41:05.263 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:41:05 np0005558241 nova_compute[248510]: 2025-12-13 08:41:05.271 248514 DEBUG nova.compute.provider_tree [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:41:05 np0005558241 nova_compute[248510]: 2025-12-13 08:41:05.301 248514 DEBUG nova.scheduler.client.report [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:41:05 np0005558241 nova_compute[248510]: 2025-12-13 08:41:05.327 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:41:05 np0005558241 nova_compute[248510]: 2025-12-13 08:41:05.328 248514 DEBUG nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:41:05 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:05Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:83:5f 10.100.0.5
Dec 13 03:41:05 np0005558241 nova_compute[248510]: 2025-12-13 08:41:05.475 248514 DEBUG nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:41:05 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:05Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:83:5f 10.100.0.5
Dec 13 03:41:05 np0005558241 nova_compute[248510]: 2025-12-13 08:41:05.476 248514 DEBUG nova.network.neutron [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:41:05 np0005558241 nova_compute[248510]: 2025-12-13 08:41:05.558 248514 INFO nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:41:05 np0005558241 nova_compute[248510]: 2025-12-13 08:41:05.640 248514 DEBUG nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:41:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2347: 321 pgs: 321 active+clean; 323 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.2 MiB/s wr, 179 op/s
Dec 13 03:41:05 np0005558241 nova_compute[248510]: 2025-12-13 08:41:05.890 248514 DEBUG nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:41:05 np0005558241 nova_compute[248510]: 2025-12-13 08:41:05.891 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:41:05 np0005558241 nova_compute[248510]: 2025-12-13 08:41:05.891 248514 INFO nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Creating image(s)
Dec 13 03:41:05 np0005558241 nova_compute[248510]: 2025-12-13 08:41:05.928 248514 DEBUG nova.storage.rbd_utils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image f5485560-d9b8-44ef-9425-57a45ac866af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:41:05 np0005558241 nova_compute[248510]: 2025-12-13 08:41:05.969 248514 DEBUG nova.storage.rbd_utils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image f5485560-d9b8-44ef-9425-57a45ac866af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:41:05 np0005558241 podman[340825]: 2025-12-13 08:41:05.991575994 +0000 UTC m=+0.073363515 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:41:06 np0005558241 podman[340826]: 2025-12-13 08:41:06.003393534 +0000 UTC m=+0.082016662 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 13 03:41:06 np0005558241 nova_compute[248510]: 2025-12-13 08:41:06.004 248514 DEBUG nova.storage.rbd_utils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image f5485560-d9b8-44ef-9425-57a45ac866af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:41:06 np0005558241 nova_compute[248510]: 2025-12-13 08:41:06.014 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:41:06 np0005558241 podman[340817]: 2025-12-13 08:41:06.042315284 +0000 UTC m=+0.124505186 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:41:06 np0005558241 nova_compute[248510]: 2025-12-13 08:41:06.069 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:41:06 np0005558241 nova_compute[248510]: 2025-12-13 08:41:06.102 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:41:06 np0005558241 nova_compute[248510]: 2025-12-13 08:41:06.103 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:41:06 np0005558241 nova_compute[248510]: 2025-12-13 08:41:06.104 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:41:06 np0005558241 nova_compute[248510]: 2025-12-13 08:41:06.104 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:41:06 np0005558241 nova_compute[248510]: 2025-12-13 08:41:06.124 248514 DEBUG nova.storage.rbd_utils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image f5485560-d9b8-44ef-9425-57a45ac866af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:41:06 np0005558241 nova_compute[248510]: 2025-12-13 08:41:06.127 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 f5485560-d9b8-44ef-9425-57a45ac866af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:41:06 np0005558241 nova_compute[248510]: 2025-12-13 08:41:06.356 248514 DEBUG nova.policy [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3fcdbbf0ede24d3e820e927d9136712b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:41:07 np0005558241 nova_compute[248510]: 2025-12-13 08:41:07.069 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 f5485560-d9b8-44ef-9425-57a45ac866af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.942s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:41:07 np0005558241 nova_compute[248510]: 2025-12-13 08:41:07.135 248514 DEBUG nova.storage.rbd_utils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] resizing rbd image f5485560-d9b8-44ef-9425-57a45ac866af_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:41:07 np0005558241 nova_compute[248510]: 2025-12-13 08:41:07.193 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:41:07 np0005558241 nova_compute[248510]: 2025-12-13 08:41:07.194 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 03:41:07 np0005558241 nova_compute[248510]: 2025-12-13 08:41:07.713 248514 DEBUG nova.objects.instance [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'migration_context' on Instance uuid f5485560-d9b8-44ef-9425-57a45ac866af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:41:07 np0005558241 nova_compute[248510]: 2025-12-13 08:41:07.737 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:41:07 np0005558241 nova_compute[248510]: 2025-12-13 08:41:07.737 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Ensure instance console log exists: /var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:41:07 np0005558241 nova_compute[248510]: 2025-12-13 08:41:07.738 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:41:07 np0005558241 nova_compute[248510]: 2025-12-13 08:41:07.738 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:41:07 np0005558241 nova_compute[248510]: 2025-12-13 08:41:07.739 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:41:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2348: 321 pgs: 321 active+clean; 323 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.2 MiB/s wr, 179 op/s
Dec 13 03:41:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:41:08 np0005558241 nova_compute[248510]: 2025-12-13 08:41:08.434 248514 DEBUG nova.network.neutron [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Successfully created port: 685c5b77-ceb9-42c7-86cb-933b381677ba _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 03:41:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Dec 13 03:41:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Dec 13 03:41:09 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Dec 13 03:41:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:41:09
Dec 13 03:41:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:41:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:41:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['images', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.mgr']
Dec 13 03:41:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:41:09 np0005558241 nova_compute[248510]: 2025-12-13 08:41:09.333 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:41:09 np0005558241 nova_compute[248510]: 2025-12-13 08:41:09.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:41:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2350: 321 pgs: 321 active+clean; 350 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.6 MiB/s wr, 277 op/s
Dec 13 03:41:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:41:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:41:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:41:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:41:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:41:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:41:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:41:11 np0005558241 nova_compute[248510]: 2025-12-13 08:41:11.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:41:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2351: 321 pgs: 321 active+clean; 364 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.5 MiB/s wr, 262 op/s
Dec 13 03:41:11 np0005558241 nova_compute[248510]: 2025-12-13 08:41:11.838 248514 DEBUG nova.network.neutron [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Successfully updated port: 685c5b77-ceb9-42c7-86cb-933b381677ba _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 03:41:11 np0005558241 nova_compute[248510]: 2025-12-13 08:41:11.942 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "refresh_cache-f5485560-d9b8-44ef-9425-57a45ac866af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:41:11 np0005558241 nova_compute[248510]: 2025-12-13 08:41:11.942 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquired lock "refresh_cache-f5485560-d9b8-44ef-9425-57a45ac866af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:41:11 np0005558241 nova_compute[248510]: 2025-12-13 08:41:11.942 248514 DEBUG nova.network.neutron [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:41:12 np0005558241 nova_compute[248510]: 2025-12-13 08:41:12.029 248514 DEBUG nova.compute.manager [req-4daae2a9-97d4-4483-8d2a-c2c53423fee2 req-e5f84b13-602f-4987-b71d-e7a0b0dad03a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Received event network-changed-685c5b77-ceb9-42c7-86cb-933b381677ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:41:12 np0005558241 nova_compute[248510]: 2025-12-13 08:41:12.030 248514 DEBUG nova.compute.manager [req-4daae2a9-97d4-4483-8d2a-c2c53423fee2 req-e5f84b13-602f-4987-b71d-e7a0b0dad03a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Refreshing instance network info cache due to event network-changed-685c5b77-ceb9-42c7-86cb-933b381677ba. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 03:41:12 np0005558241 nova_compute[248510]: 2025-12-13 08:41:12.030 248514 DEBUG oslo_concurrency.lockutils [req-4daae2a9-97d4-4483-8d2a-c2c53423fee2 req-e5f84b13-602f-4987-b71d-e7a0b0dad03a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-f5485560-d9b8-44ef-9425-57a45ac866af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:41:12 np0005558241 nova_compute[248510]: 2025-12-13 08:41:12.299 248514 DEBUG nova.network.neutron [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:41:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:41:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Dec 13 03:41:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Dec 13 03:41:12 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Dec 13 03:41:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2353: 321 pgs: 321 active+clean; 353 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 596 KiB/s rd, 4.3 MiB/s wr, 117 op/s
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.161 248514 DEBUG oslo_concurrency.lockutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "658e5f04-399b-4a8a-8680-5ae9717949c0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.161 248514 DEBUG oslo_concurrency.lockutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.162 248514 DEBUG oslo_concurrency.lockutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.162 248514 DEBUG oslo_concurrency.lockutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.162 248514 DEBUG oslo_concurrency.lockutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.163 248514 INFO nova.compute.manager [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Terminating instance#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.164 248514 DEBUG nova.compute.manager [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.335 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:14 np0005558241 kernel: tapf767e871-4f (unregistering): left promiscuous mode
Dec 13 03:41:14 np0005558241 NetworkManager[50376]: <info>  [1765615274.3580] device (tapf767e871-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.367 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:14Z|00930|binding|INFO|Releasing lport f767e871-4f9e-414e-a61d-c70cffe80128 from this chassis (sb_readonly=0)
Dec 13 03:41:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:14Z|00931|binding|INFO|Setting lport f767e871-4f9e-414e-a61d-c70cffe80128 down in Southbound
Dec 13 03:41:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:14Z|00932|binding|INFO|Removing iface tapf767e871-4f ovn-installed in OVS
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.370 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.388 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:14 np0005558241 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Dec 13 03:41:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.415 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:d2:10 10.100.0.4'], port_security=['fa:16:3e:96:d2:10 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '658e5f04-399b-4a8a-8680-5ae9717949c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-369f7528-6571-47b6-a030-5281647e1eac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '77183472-893b-4c33-ab3e-e88f01770ed8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ade703f-c35f-4093-80be-9470cf548d7c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=f767e871-4f9e-414e-a61d-c70cffe80128) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:41:14 np0005558241 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005b.scope: Consumed 17.854s CPU time.
Dec 13 03:41:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.416 158419 INFO neutron.agent.ovn.metadata.agent [-] Port f767e871-4f9e-414e-a61d-c70cffe80128 in datapath 369f7528-6571-47b6-a030-5281647e1eac unbound from our chassis#033[00m
Dec 13 03:41:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.418 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 369f7528-6571-47b6-a030-5281647e1eac#033[00m
Dec 13 03:41:14 np0005558241 systemd-machined[210538]: Machine qemu-113-instance-0000005b terminated.
Dec 13 03:41:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.435 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb10d85-ab0a-45c4-b14b-4602c556cd06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.471 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[65d8db0c-e4cc-438e-8b7f-5f47426b7faf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.474 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1f0b04-1b0e-49cc-8d84-ff9eeff857fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.502 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fda9e149-121f-49df-9cda-ebfb624f0484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.521 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c86546f9-ffe5-413f-b1e8-6be8503865fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap369f7528-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:b1:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763105, 'reachable_time': 34886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341058, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.573 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[88ad8d02-d5d3-4126-8fa9-0f9a2b9d3f64]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763117, 'tstamp': 763117}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341059, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap369f7528-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763121, 'tstamp': 763121}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341059, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.575 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap369f7528-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.577 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.583 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap369f7528-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.582 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.583 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:41:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.584 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap369f7528-60, col_values=(('external_ids', {'iface-id': '6c33e57f-b143-4ed7-a271-dc33d6e81057'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:14.584 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.616 248514 INFO nova.virt.libvirt.driver [-] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Instance destroyed successfully.#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.617 248514 DEBUG nova.objects.instance [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'resources' on Instance uuid 658e5f04-399b-4a8a-8680-5ae9717949c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.645 248514 DEBUG nova.virt.libvirt.vif [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:38:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2058674229',display_name='tempest-ServerActionsTestOtherB-server-2058674229',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2058674229',id=91,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:39:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-hq6btaus',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:39:04Z,user_data=None,user_id='7507939da64e4320a1c6f389d0fc9045',uuid=658e5f04-399b-4a8a-8680-5ae9717949c0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.645 248514 DEBUG nova.network.os_vif_util [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "f767e871-4f9e-414e-a61d-c70cffe80128", "address": "fa:16:3e:96:d2:10", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf767e871-4f", "ovs_interfaceid": "f767e871-4f9e-414e-a61d-c70cffe80128", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.646 248514 DEBUG nova.network.os_vif_util [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:d2:10,bridge_name='br-int',has_traffic_filtering=True,id=f767e871-4f9e-414e-a61d-c70cffe80128,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf767e871-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.646 248514 DEBUG os_vif [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:d2:10,bridge_name='br-int',has_traffic_filtering=True,id=f767e871-4f9e-414e-a61d-c70cffe80128,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf767e871-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.648 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.648 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf767e871-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.650 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.652 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.655 248514 INFO os_vif [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:d2:10,bridge_name='br-int',has_traffic_filtering=True,id=f767e871-4f9e-414e-a61d-c70cffe80128,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf767e871-4f')#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.738 248514 DEBUG nova.network.neutron [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Updating instance_info_cache with network_info: [{"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.764 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Releasing lock "refresh_cache-f5485560-d9b8-44ef-9425-57a45ac866af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.765 248514 DEBUG nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Instance network_info: |[{"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.765 248514 DEBUG oslo_concurrency.lockutils [req-4daae2a9-97d4-4483-8d2a-c2c53423fee2 req-e5f84b13-602f-4987-b71d-e7a0b0dad03a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-f5485560-d9b8-44ef-9425-57a45ac866af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.766 248514 DEBUG nova.network.neutron [req-4daae2a9-97d4-4483-8d2a-c2c53423fee2 req-e5f84b13-602f-4987-b71d-e7a0b0dad03a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Refreshing network info cache for port 685c5b77-ceb9-42c7-86cb-933b381677ba _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.769 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Start _get_guest_xml network_info=[{"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.774 248514 WARNING nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.785 248514 DEBUG nova.virt.libvirt.host [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.787 248514 DEBUG nova.virt.libvirt.host [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.792 248514 DEBUG nova.virt.libvirt.host [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.792 248514 DEBUG nova.virt.libvirt.host [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.793 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.793 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.794 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.794 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.794 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.795 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.795 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.795 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.796 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.796 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.796 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.797 248514 DEBUG nova.virt.hardware [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:41:14 np0005558241 nova_compute[248510]: 2025-12-13 08:41:14.800 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:41:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/673103078' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:41:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:41:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/673103078' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:41:15 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:15Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e0:82:3a 10.100.0.3
Dec 13 03:41:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:41:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/640700649' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:41:15 np0005558241 nova_compute[248510]: 2025-12-13 08:41:15.468 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:15 np0005558241 nova_compute[248510]: 2025-12-13 08:41:15.498 248514 DEBUG nova.storage.rbd_utils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image f5485560-d9b8-44ef-9425-57a45ac866af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:41:15 np0005558241 nova_compute[248510]: 2025-12-13 08:41:15.504 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2354: 321 pgs: 321 active+clean; 326 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 737 KiB/s rd, 4.3 MiB/s wr, 133 op/s
Dec 13 03:41:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:41:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1695515673' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.073 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.077 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.079 248514 DEBUG nova.virt.libvirt.vif [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:41:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1093776704',display_name='tempest-ServersTestJSON-server-1093776704',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1093776704',id=95,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-8z05o414',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-
member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:41:05Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=f5485560-d9b8-44ef-9425-57a45ac866af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.080 248514 DEBUG nova.network.os_vif_util [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.081 248514 DEBUG nova.network.os_vif_util [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:03:3c,bridge_name='br-int',has_traffic_filtering=True,id=685c5b77-ceb9-42c7-86cb-933b381677ba,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685c5b77-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.082 248514 DEBUG nova.objects.instance [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'pci_devices' on Instance uuid f5485560-d9b8-44ef-9425-57a45ac866af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.102 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:41:16 np0005558241 nova_compute[248510]:  <uuid>f5485560-d9b8-44ef-9425-57a45ac866af</uuid>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:  <name>instance-0000005f</name>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersTestJSON-server-1093776704</nova:name>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:41:14</nova:creationTime>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:41:16 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:        <nova:user uuid="3fcdbbf0ede24d3e820e927d9136712b">tempest-ServersTestJSON-1443479207-project-member</nova:user>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:        <nova:project uuid="3fb4f27cdc7f468faa36b4e7a461d7be">tempest-ServersTestJSON-1443479207</nova:project>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:        <nova:port uuid="685c5b77-ceb9-42c7-86cb-933b381677ba">
Dec 13 03:41:16 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <entry name="serial">f5485560-d9b8-44ef-9425-57a45ac866af</entry>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <entry name="uuid">f5485560-d9b8-44ef-9425-57a45ac866af</entry>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/f5485560-d9b8-44ef-9425-57a45ac866af_disk">
Dec 13 03:41:16 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:41:16 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/f5485560-d9b8-44ef-9425-57a45ac866af_disk.config">
Dec 13 03:41:16 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:41:16 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:c7:03:3c"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <target dev="tap685c5b77-ce"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af/console.log" append="off"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:41:16 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:41:16 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:41:16 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:41:16 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.104 248514 DEBUG nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Preparing to wait for external event network-vif-plugged-685c5b77-ceb9-42c7-86cb-933b381677ba prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.104 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.105 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.106 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.107 248514 DEBUG nova.virt.libvirt.vif [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:41:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1093776704',display_name='tempest-ServersTestJSON-server-1093776704',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1093776704',id=95,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-8z05o414',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-144347920
7-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:41:05Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=f5485560-d9b8-44ef-9425-57a45ac866af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.107 248514 DEBUG nova.network.os_vif_util [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.108 248514 DEBUG nova.network.os_vif_util [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:03:3c,bridge_name='br-int',has_traffic_filtering=True,id=685c5b77-ceb9-42c7-86cb-933b381677ba,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685c5b77-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.108 248514 DEBUG os_vif [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:03:3c,bridge_name='br-int',has_traffic_filtering=True,id=685c5b77-ceb9-42c7-86cb-933b381677ba,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685c5b77-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.109 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.109 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.110 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.113 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.113 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap685c5b77-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.114 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap685c5b77-ce, col_values=(('external_ids', {'iface-id': '685c5b77-ceb9-42c7-86cb-933b381677ba', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:03:3c', 'vm-uuid': 'f5485560-d9b8-44ef-9425-57a45ac866af'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.115 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:16 np0005558241 NetworkManager[50376]: <info>  [1765615276.1171] manager: (tap685c5b77-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.117 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.121 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.123 248514 INFO os_vif [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:03:3c,bridge_name='br-int',has_traffic_filtering=True,id=685c5b77-ceb9-42c7-86cb-933b381677ba,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685c5b77-ce')#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.586 248514 DEBUG nova.compute.manager [req-c9ce9b6f-9654-4aec-a69d-25a5c47c7a65 req-0eebe3bb-6b1d-4a6a-a34a-e351220bde26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Received event network-vif-unplugged-f767e871-4f9e-414e-a61d-c70cffe80128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.586 248514 DEBUG oslo_concurrency.lockutils [req-c9ce9b6f-9654-4aec-a69d-25a5c47c7a65 req-0eebe3bb-6b1d-4a6a-a34a-e351220bde26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.586 248514 DEBUG oslo_concurrency.lockutils [req-c9ce9b6f-9654-4aec-a69d-25a5c47c7a65 req-0eebe3bb-6b1d-4a6a-a34a-e351220bde26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.586 248514 DEBUG oslo_concurrency.lockutils [req-c9ce9b6f-9654-4aec-a69d-25a5c47c7a65 req-0eebe3bb-6b1d-4a6a-a34a-e351220bde26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.587 248514 DEBUG nova.compute.manager [req-c9ce9b6f-9654-4aec-a69d-25a5c47c7a65 req-0eebe3bb-6b1d-4a6a-a34a-e351220bde26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] No waiting events found dispatching network-vif-unplugged-f767e871-4f9e-414e-a61d-c70cffe80128 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.587 248514 DEBUG nova.compute.manager [req-c9ce9b6f-9654-4aec-a69d-25a5c47c7a65 req-0eebe3bb-6b1d-4a6a-a34a-e351220bde26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Received event network-vif-unplugged-f767e871-4f9e-414e-a61d-c70cffe80128 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.600 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.601 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.601 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No VIF found with MAC fa:16:3e:c7:03:3c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.601 248514 INFO nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Using config drive#033[00m
Dec 13 03:41:16 np0005558241 nova_compute[248510]: 2025-12-13 08:41:16.963 248514 DEBUG nova.storage.rbd_utils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image f5485560-d9b8-44ef-9425-57a45ac866af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:41:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2355: 321 pgs: 321 active+clean; 326 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 215 KiB/s rd, 1.6 MiB/s wr, 73 op/s
Dec 13 03:41:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:41:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Dec 13 03:41:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Dec 13 03:41:17 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Dec 13 03:41:18 np0005558241 nova_compute[248510]: 2025-12-13 08:41:18.809 248514 INFO nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Creating config drive at /var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af/disk.config#033[00m
Dec 13 03:41:18 np0005558241 nova_compute[248510]: 2025-12-13 08:41:18.815 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqr_292fb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:18 np0005558241 nova_compute[248510]: 2025-12-13 08:41:18.962 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqr_292fb" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:18 np0005558241 nova_compute[248510]: 2025-12-13 08:41:18.990 248514 DEBUG nova.storage.rbd_utils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image f5485560-d9b8-44ef-9425-57a45ac866af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:41:18 np0005558241 nova_compute[248510]: 2025-12-13 08:41:18.994 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af/disk.config f5485560-d9b8-44ef-9425-57a45ac866af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.036 248514 DEBUG nova.compute.manager [req-4394d195-2b11-4a75-9da7-2dc0506e8653 req-f7476d20-7072-424b-b37c-949942630c69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Received event network-vif-plugged-f767e871-4f9e-414e-a61d-c70cffe80128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.037 248514 DEBUG oslo_concurrency.lockutils [req-4394d195-2b11-4a75-9da7-2dc0506e8653 req-f7476d20-7072-424b-b37c-949942630c69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.037 248514 DEBUG oslo_concurrency.lockutils [req-4394d195-2b11-4a75-9da7-2dc0506e8653 req-f7476d20-7072-424b-b37c-949942630c69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.037 248514 DEBUG oslo_concurrency.lockutils [req-4394d195-2b11-4a75-9da7-2dc0506e8653 req-f7476d20-7072-424b-b37c-949942630c69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.037 248514 DEBUG nova.compute.manager [req-4394d195-2b11-4a75-9da7-2dc0506e8653 req-f7476d20-7072-424b-b37c-949942630c69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] No waiting events found dispatching network-vif-plugged-f767e871-4f9e-414e-a61d-c70cffe80128 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.038 248514 WARNING nova.compute.manager [req-4394d195-2b11-4a75-9da7-2dc0506e8653 req-f7476d20-7072-424b-b37c-949942630c69 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Received unexpected event network-vif-plugged-f767e871-4f9e-414e-a61d-c70cffe80128 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.152 248514 INFO nova.virt.libvirt.driver [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Deleting instance files /var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0_del#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.153 248514 INFO nova.virt.libvirt.driver [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Deletion of /var/lib/nova/instances/658e5f04-399b-4a8a-8680-5ae9717949c0_del complete#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.198 248514 DEBUG oslo_concurrency.processutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af/disk.config f5485560-d9b8-44ef-9425-57a45ac866af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.199 248514 INFO nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Deleting local config drive /var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af/disk.config because it was imported into RBD.#033[00m
Dec 13 03:41:19 np0005558241 kernel: tap685c5b77-ce: entered promiscuous mode
Dec 13 03:41:19 np0005558241 NetworkManager[50376]: <info>  [1765615279.2547] manager: (tap685c5b77-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/395)
Dec 13 03:41:19 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:19Z|00933|binding|INFO|Claiming lport 685c5b77-ceb9-42c7-86cb-933b381677ba for this chassis.
Dec 13 03:41:19 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:19Z|00934|binding|INFO|685c5b77-ceb9-42c7-86cb-933b381677ba: Claiming fa:16:3e:c7:03:3c 10.100.0.11
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.255 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.263 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:03:3c 10.100.0.11'], port_security=['fa:16:3e:c7:03:3c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f5485560-d9b8-44ef-9425-57a45ac866af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=685c5b77-ceb9-42c7-86cb-933b381677ba) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:41:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.266 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 685c5b77-ceb9-42c7-86cb-933b381677ba in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 bound to our chassis#033[00m
Dec 13 03:41:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.268 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3#033[00m
Dec 13 03:41:19 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:19Z|00935|binding|INFO|Setting lport 685c5b77-ceb9-42c7-86cb-933b381677ba ovn-installed in OVS
Dec 13 03:41:19 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:19Z|00936|binding|INFO|Setting lport 685c5b77-ceb9-42c7-86cb-933b381677ba up in Southbound
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.273 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.275 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.285 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e3584a43-acc3-4260-9e5b-2864319d3c2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:19 np0005558241 systemd-udevd[341228]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:41:19 np0005558241 systemd-machined[210538]: New machine qemu-118-instance-0000005f.
Dec 13 03:41:19 np0005558241 NetworkManager[50376]: <info>  [1765615279.2998] device (tap685c5b77-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:41:19 np0005558241 NetworkManager[50376]: <info>  [1765615279.3012] device (tap685c5b77-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:41:19 np0005558241 systemd[1]: Started Virtual Machine qemu-118-instance-0000005f.
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.304 248514 INFO nova.compute.manager [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Took 5.14 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.305 248514 DEBUG oslo.service.loopingcall [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.305 248514 DEBUG nova.compute.manager [-] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.305 248514 DEBUG nova.network.neutron [-] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:41:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.320 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7f7abd-0a99-4f31-baec-2427c0f4487f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.324 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bd4c70e3-32c7-43b2-ac20-049fc90c2002]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.357 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6471eeff-96bb-4e7e-9b6e-3c1ddc10f7b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.376 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b509f8fd-78cd-4c08-bff0-91e8d092cf59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341240, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.398 248514 DEBUG nova.network.neutron [req-4daae2a9-97d4-4483-8d2a-c2c53423fee2 req-e5f84b13-602f-4987-b71d-e7a0b0dad03a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Updated VIF entry in instance network info cache for port 685c5b77-ceb9-42c7-86cb-933b381677ba. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.399 248514 DEBUG nova.network.neutron [req-4daae2a9-97d4-4483-8d2a-c2c53423fee2 req-e5f84b13-602f-4987-b71d-e7a0b0dad03a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Updating instance_info_cache with network_info: [{"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:41:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.401 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5f006b-6e87-4ea2-b26a-96bc697955e7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341242, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341242, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.403 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.404 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.405 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.406 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.406 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:41:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.406 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:19.407 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.425 248514 DEBUG oslo_concurrency.lockutils [req-4daae2a9-97d4-4483-8d2a-c2c53423fee2 req-e5f84b13-602f-4987-b71d-e7a0b0dad03a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-f5485560-d9b8-44ef-9425-57a45ac866af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.582 248514 DEBUG nova.compute.manager [req-fe8cb056-70e9-46c2-9028-3041e79eb035 req-503300f1-58ec-495c-8b54-e437523f2a27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Received event network-vif-plugged-685c5b77-ceb9-42c7-86cb-933b381677ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.583 248514 DEBUG oslo_concurrency.lockutils [req-fe8cb056-70e9-46c2-9028-3041e79eb035 req-503300f1-58ec-495c-8b54-e437523f2a27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.583 248514 DEBUG oslo_concurrency.lockutils [req-fe8cb056-70e9-46c2-9028-3041e79eb035 req-503300f1-58ec-495c-8b54-e437523f2a27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.584 248514 DEBUG oslo_concurrency.lockutils [req-fe8cb056-70e9-46c2-9028-3041e79eb035 req-503300f1-58ec-495c-8b54-e437523f2a27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.584 248514 DEBUG nova.compute.manager [req-fe8cb056-70e9-46c2-9028-3041e79eb035 req-503300f1-58ec-495c-8b54-e437523f2a27 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Processing event network-vif-plugged-685c5b77-ceb9-42c7-86cb-933b381677ba _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:41:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2357: 321 pgs: 321 active+clean; 268 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 843 KiB/s rd, 40 KiB/s wr, 136 op/s
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.867 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615279.8673797, f5485560-d9b8-44ef-9425-57a45ac866af => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.868 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] VM Started (Lifecycle Event)#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.870 248514 DEBUG nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.872 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.875 248514 INFO nova.virt.libvirt.driver [-] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Instance spawned successfully.#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.876 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.897 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.901 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.901 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.901 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.902 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.902 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.902 248514 DEBUG nova.virt.libvirt.driver [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.906 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.979 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.979 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615279.8676324, f5485560-d9b8-44ef-9425-57a45ac866af => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:41:19 np0005558241 nova_compute[248510]: 2025-12-13 08:41:19.979 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:41:20 np0005558241 nova_compute[248510]: 2025-12-13 08:41:20.019 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:41:20 np0005558241 nova_compute[248510]: 2025-12-13 08:41:20.023 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615279.8720002, f5485560-d9b8-44ef-9425-57a45ac866af => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:41:20 np0005558241 nova_compute[248510]: 2025-12-13 08:41:20.024 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:41:20 np0005558241 nova_compute[248510]: 2025-12-13 08:41:20.028 248514 INFO nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Took 14.14 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:41:20 np0005558241 nova_compute[248510]: 2025-12-13 08:41:20.029 248514 DEBUG nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:41:20 np0005558241 nova_compute[248510]: 2025-12-13 08:41:20.073 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:41:20 np0005558241 nova_compute[248510]: 2025-12-13 08:41:20.077 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:41:20 np0005558241 nova_compute[248510]: 2025-12-13 08:41:20.116 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:41:20 np0005558241 nova_compute[248510]: 2025-12-13 08:41:20.854 248514 INFO nova.compute.manager [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Took 17.22 seconds to build instance.#033[00m
Dec 13 03:41:20 np0005558241 nova_compute[248510]: 2025-12-13 08:41:20.902 248514 DEBUG oslo_concurrency.lockutils [None req-9ee989d6-b911-422c-a11a-51ebc5839623 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020360573277216046 of space, bias 1.0, pg target 0.6108171983164814 quantized to 32 (current 32)
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677608684985178 of space, bias 1.0, pg target 0.20032826054955533 quantized to 32 (current 32)
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.938341245448177e-07 of space, bias 4.0, pg target 0.0007126009494537812 quantized to 16 (current 32)
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:41:21 np0005558241 nova_compute[248510]: 2025-12-13 08:41:21.075 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:21 np0005558241 nova_compute[248510]: 2025-12-13 08:41:21.115 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:21 np0005558241 nova_compute[248510]: 2025-12-13 08:41:21.418 248514 DEBUG nova.network.neutron [-] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:41:21 np0005558241 nova_compute[248510]: 2025-12-13 08:41:21.445 248514 INFO nova.compute.manager [-] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Took 2.14 seconds to deallocate network for instance.#033[00m
Dec 13 03:41:21 np0005558241 nova_compute[248510]: 2025-12-13 08:41:21.512 248514 DEBUG oslo_concurrency.lockutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:21 np0005558241 nova_compute[248510]: 2025-12-13 08:41:21.513 248514 DEBUG oslo_concurrency.lockutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:21 np0005558241 nova_compute[248510]: 2025-12-13 08:41:21.691 248514 DEBUG oslo_concurrency.processutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:21 np0005558241 nova_compute[248510]: 2025-12-13 08:41:21.741 248514 DEBUG nova.compute.manager [req-d40485aa-2201-4a76-ba17-7b1a46e36d5a req-af2c55ba-70e8-42b9-ada2-4c6043a8a3cd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Received event network-vif-plugged-685c5b77-ceb9-42c7-86cb-933b381677ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:41:21 np0005558241 nova_compute[248510]: 2025-12-13 08:41:21.741 248514 DEBUG oslo_concurrency.lockutils [req-d40485aa-2201-4a76-ba17-7b1a46e36d5a req-af2c55ba-70e8-42b9-ada2-4c6043a8a3cd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:21 np0005558241 nova_compute[248510]: 2025-12-13 08:41:21.742 248514 DEBUG oslo_concurrency.lockutils [req-d40485aa-2201-4a76-ba17-7b1a46e36d5a req-af2c55ba-70e8-42b9-ada2-4c6043a8a3cd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:21 np0005558241 nova_compute[248510]: 2025-12-13 08:41:21.742 248514 DEBUG oslo_concurrency.lockutils [req-d40485aa-2201-4a76-ba17-7b1a46e36d5a req-af2c55ba-70e8-42b9-ada2-4c6043a8a3cd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:21 np0005558241 nova_compute[248510]: 2025-12-13 08:41:21.742 248514 DEBUG nova.compute.manager [req-d40485aa-2201-4a76-ba17-7b1a46e36d5a req-af2c55ba-70e8-42b9-ada2-4c6043a8a3cd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] No waiting events found dispatching network-vif-plugged-685c5b77-ceb9-42c7-86cb-933b381677ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:41:21 np0005558241 nova_compute[248510]: 2025-12-13 08:41:21.742 248514 WARNING nova.compute.manager [req-d40485aa-2201-4a76-ba17-7b1a46e36d5a req-af2c55ba-70e8-42b9-ada2-4c6043a8a3cd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Received unexpected event network-vif-plugged-685c5b77-ceb9-42c7-86cb-933b381677ba for instance with vm_state active and task_state None.#033[00m
Dec 13 03:41:21 np0005558241 nova_compute[248510]: 2025-12-13 08:41:21.742 248514 DEBUG nova.compute.manager [req-d40485aa-2201-4a76-ba17-7b1a46e36d5a req-af2c55ba-70e8-42b9-ada2-4c6043a8a3cd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Received event network-vif-deleted-f767e871-4f9e-414e-a61d-c70cffe80128 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:41:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2358: 321 pgs: 321 active+clean; 246 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 754 KiB/s rd, 52 KiB/s wr, 125 op/s
Dec 13 03:41:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:41:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3739840725' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:41:22 np0005558241 nova_compute[248510]: 2025-12-13 08:41:22.297 248514 DEBUG oslo_concurrency.processutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:22 np0005558241 nova_compute[248510]: 2025-12-13 08:41:22.303 248514 DEBUG nova.compute.provider_tree [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:41:22 np0005558241 nova_compute[248510]: 2025-12-13 08:41:22.325 248514 DEBUG nova.scheduler.client.report [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:41:22 np0005558241 nova_compute[248510]: 2025-12-13 08:41:22.352 248514 DEBUG oslo_concurrency.lockutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:22 np0005558241 nova_compute[248510]: 2025-12-13 08:41:22.380 248514 INFO nova.scheduler.client.report [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Deleted allocations for instance 658e5f04-399b-4a8a-8680-5ae9717949c0#033[00m
Dec 13 03:41:22 np0005558241 nova_compute[248510]: 2025-12-13 08:41:22.499 248514 DEBUG oslo_concurrency.lockutils [None req-34a6a7cc-58cd-4d15-8f55-2c5885db60e2 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "658e5f04-399b-4a8a-8680-5ae9717949c0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:41:22 np0005558241 nova_compute[248510]: 2025-12-13 08:41:22.984 248514 DEBUG oslo_concurrency.lockutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:22 np0005558241 nova_compute[248510]: 2025-12-13 08:41:22.984 248514 DEBUG oslo_concurrency.lockutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:22 np0005558241 nova_compute[248510]: 2025-12-13 08:41:22.985 248514 DEBUG oslo_concurrency.lockutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:22 np0005558241 nova_compute[248510]: 2025-12-13 08:41:22.985 248514 DEBUG oslo_concurrency.lockutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:22 np0005558241 nova_compute[248510]: 2025-12-13 08:41:22.985 248514 DEBUG oslo_concurrency.lockutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:22 np0005558241 nova_compute[248510]: 2025-12-13 08:41:22.987 248514 INFO nova.compute.manager [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Terminating instance#033[00m
Dec 13 03:41:22 np0005558241 nova_compute[248510]: 2025-12-13 08:41:22.988 248514 DEBUG nova.compute.manager [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:41:23 np0005558241 kernel: tapea5aafe7-a7 (unregistering): left promiscuous mode
Dec 13 03:41:23 np0005558241 NetworkManager[50376]: <info>  [1765615283.2592] device (tapea5aafe7-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:41:23 np0005558241 nova_compute[248510]: 2025-12-13 08:41:23.313 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:23Z|00937|binding|INFO|Releasing lport ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 from this chassis (sb_readonly=0)
Dec 13 03:41:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:23Z|00938|binding|INFO|Setting lport ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 down in Southbound
Dec 13 03:41:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:23Z|00939|binding|INFO|Removing iface tapea5aafe7-a7 ovn-installed in OVS
Dec 13 03:41:23 np0005558241 nova_compute[248510]: 2025-12-13 08:41:23.316 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:23.323 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:82:3a 10.100.0.3'], port_security=['fa:16:3e:e0:82:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9b486227-b98c-4393-9a3c-aae3e3c419a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-369f7528-6571-47b6-a030-5281647e1eac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0aee359fbaa484eb7ead3f81eef51e7', 'neutron:revision_number': '9', 'neutron:security_group_ids': '35769c0b-1e0e-43bc-832c-d54c65a53a36', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.181', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ade703f-c35f-4093-80be-9470cf548d7c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:41:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:23.324 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 in datapath 369f7528-6571-47b6-a030-5281647e1eac unbound from our chassis#033[00m
Dec 13 03:41:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:23.326 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 369f7528-6571-47b6-a030-5281647e1eac, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:41:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:23.327 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5a3f39c1-6c83-42ed-b6b6-9544858c73c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:23.328 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-369f7528-6571-47b6-a030-5281647e1eac namespace which is not needed anymore#033[00m
Dec 13 03:41:23 np0005558241 nova_compute[248510]: 2025-12-13 08:41:23.336 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:23 np0005558241 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d00000055.scope: Deactivated successfully.
Dec 13 03:41:23 np0005558241 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d00000055.scope: Consumed 13.542s CPU time.
Dec 13 03:41:23 np0005558241 systemd-machined[210538]: Machine qemu-117-instance-00000055 terminated.
Dec 13 03:41:23 np0005558241 nova_compute[248510]: 2025-12-13 08:41:23.420 248514 INFO nova.virt.libvirt.driver [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Instance destroyed successfully.#033[00m
Dec 13 03:41:23 np0005558241 nova_compute[248510]: 2025-12-13 08:41:23.421 248514 DEBUG nova.objects.instance [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lazy-loading 'resources' on Instance uuid 9b486227-b98c-4393-9a3c-aae3e3c419a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:41:23 np0005558241 nova_compute[248510]: 2025-12-13 08:41:23.441 248514 DEBUG nova.virt.libvirt.vif [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:37:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-647049604',display_name='tempest-ServerActionsTestOtherB-server-647049604',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-647049604',id=85,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMCBc3yf7DLFWm969JJ3AJvRq1SqBawRmsOjScixeqlFSyjq4/Kpbcw0olzxybOT1DbERtB0mKMV4pquo3M97LIG1LWOqbG4HPkmobMKh41xqoYhtSOyaVVjlfwlnNokPA==',key_name='tempest-keypair-2123324397',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:41:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f0aee359fbaa484eb7ead3f81eef51e7',ramdisk_id='',reservation_id='r-ys1cli8v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1515133862',owner_user_name='tempest-ServerActionsTestOtherB-1515133862-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:41:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7507939da64e4320a1c6f389d0fc9045',uuid=9b486227-b98c-4393-9a3c-aae3e3c419a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:41:23 np0005558241 nova_compute[248510]: 2025-12-13 08:41:23.442 248514 DEBUG nova.network.os_vif_util [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converting VIF {"id": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "address": "fa:16:3e:e0:82:3a", "network": {"id": "369f7528-6571-47b6-a030-5281647e1eac", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-333782570-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0aee359fbaa484eb7ead3f81eef51e7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea5aafe7-a7", "ovs_interfaceid": "ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:41:23 np0005558241 nova_compute[248510]: 2025-12-13 08:41:23.443 248514 DEBUG nova.network.os_vif_util [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:41:23 np0005558241 nova_compute[248510]: 2025-12-13 08:41:23.443 248514 DEBUG os_vif [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:41:23 np0005558241 nova_compute[248510]: 2025-12-13 08:41:23.444 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:23 np0005558241 nova_compute[248510]: 2025-12-13 08:41:23.445 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea5aafe7-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:23 np0005558241 nova_compute[248510]: 2025-12-13 08:41:23.447 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:23 np0005558241 nova_compute[248510]: 2025-12-13 08:41:23.449 248514 INFO os_vif [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:82:3a,bridge_name='br-int',has_traffic_filtering=True,id=ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01,network=Network(369f7528-6571-47b6-a030-5281647e1eac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea5aafe7-a7')#033[00m
Dec 13 03:41:23 np0005558241 neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac[332189]: [NOTICE]   (332193) : haproxy version is 2.8.14-c23fe91
Dec 13 03:41:23 np0005558241 neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac[332189]: [NOTICE]   (332193) : path to executable is /usr/sbin/haproxy
Dec 13 03:41:23 np0005558241 neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac[332189]: [WARNING]  (332193) : Exiting Master process...
Dec 13 03:41:23 np0005558241 neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac[332189]: [ALERT]    (332193) : Current worker (332195) exited with code 143 (Terminated)
Dec 13 03:41:23 np0005558241 neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac[332189]: [WARNING]  (332193) : All workers exited. Exiting... (0)
Dec 13 03:41:23 np0005558241 systemd[1]: libpod-2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6.scope: Deactivated successfully.
Dec 13 03:41:23 np0005558241 podman[341337]: 2025-12-13 08:41:23.673207535 +0000 UTC m=+0.246770892 container died 2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:41:23 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0c47fe0120125029b14bb1c725f8596f8bf16f2a2dd657933ca7281e9bf15dff-merged.mount: Deactivated successfully.
Dec 13 03:41:23 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6-userdata-shm.mount: Deactivated successfully.
Dec 13 03:41:23 np0005558241 podman[341337]: 2025-12-13 08:41:23.789524986 +0000 UTC m=+0.363088323 container cleanup 2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:41:23 np0005558241 systemd[1]: libpod-conmon-2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6.scope: Deactivated successfully.
Dec 13 03:41:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2359: 321 pgs: 321 active+clean; 246 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 46 KiB/s wr, 133 op/s
Dec 13 03:41:24 np0005558241 nova_compute[248510]: 2025-12-13 08:41:24.154 248514 DEBUG nova.compute.manager [req-cdb0ebe3-ebea-46f2-9198-55f7894ea14e req-7476fa08-fff6-4e8c-a66a-7ccb8ce64767 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-vif-unplugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:41:24 np0005558241 nova_compute[248510]: 2025-12-13 08:41:24.155 248514 DEBUG oslo_concurrency.lockutils [req-cdb0ebe3-ebea-46f2-9198-55f7894ea14e req-7476fa08-fff6-4e8c-a66a-7ccb8ce64767 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:24 np0005558241 nova_compute[248510]: 2025-12-13 08:41:24.155 248514 DEBUG oslo_concurrency.lockutils [req-cdb0ebe3-ebea-46f2-9198-55f7894ea14e req-7476fa08-fff6-4e8c-a66a-7ccb8ce64767 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:24 np0005558241 nova_compute[248510]: 2025-12-13 08:41:24.155 248514 DEBUG oslo_concurrency.lockutils [req-cdb0ebe3-ebea-46f2-9198-55f7894ea14e req-7476fa08-fff6-4e8c-a66a-7ccb8ce64767 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:24 np0005558241 nova_compute[248510]: 2025-12-13 08:41:24.155 248514 DEBUG nova.compute.manager [req-cdb0ebe3-ebea-46f2-9198-55f7894ea14e req-7476fa08-fff6-4e8c-a66a-7ccb8ce64767 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] No waiting events found dispatching network-vif-unplugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:41:24 np0005558241 nova_compute[248510]: 2025-12-13 08:41:24.156 248514 DEBUG nova.compute.manager [req-cdb0ebe3-ebea-46f2-9198-55f7894ea14e req-7476fa08-fff6-4e8c-a66a-7ccb8ce64767 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-vif-unplugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:41:24 np0005558241 nova_compute[248510]: 2025-12-13 08:41:24.957 248514 DEBUG oslo_concurrency.lockutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "f5485560-d9b8-44ef-9425-57a45ac866af" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:24 np0005558241 nova_compute[248510]: 2025-12-13 08:41:24.958 248514 DEBUG oslo_concurrency.lockutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:24 np0005558241 nova_compute[248510]: 2025-12-13 08:41:24.958 248514 DEBUG oslo_concurrency.lockutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:24 np0005558241 nova_compute[248510]: 2025-12-13 08:41:24.958 248514 DEBUG oslo_concurrency.lockutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:24 np0005558241 nova_compute[248510]: 2025-12-13 08:41:24.958 248514 DEBUG oslo_concurrency.lockutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:24 np0005558241 nova_compute[248510]: 2025-12-13 08:41:24.959 248514 INFO nova.compute.manager [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Terminating instance#033[00m
Dec 13 03:41:24 np0005558241 nova_compute[248510]: 2025-12-13 08:41:24.960 248514 DEBUG nova.compute.manager [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:41:25 np0005558241 kernel: tap685c5b77-ce (unregistering): left promiscuous mode
Dec 13 03:41:25 np0005558241 NetworkManager[50376]: <info>  [1765615285.4384] device (tap685c5b77-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:41:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:25Z|00940|binding|INFO|Releasing lport 685c5b77-ceb9-42c7-86cb-933b381677ba from this chassis (sb_readonly=0)
Dec 13 03:41:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:25Z|00941|binding|INFO|Setting lport 685c5b77-ceb9-42c7-86cb-933b381677ba down in Southbound
Dec 13 03:41:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:25Z|00942|binding|INFO|Removing iface tap685c5b77-ce ovn-installed in OVS
Dec 13 03:41:25 np0005558241 nova_compute[248510]: 2025-12-13 08:41:25.447 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:25 np0005558241 nova_compute[248510]: 2025-12-13 08:41:25.448 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.457 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:03:3c 10.100.0.11'], port_security=['fa:16:3e:c7:03:3c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f5485560-d9b8-44ef-9425-57a45ac866af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=685c5b77-ceb9-42c7-86cb-933b381677ba) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:41:25 np0005558241 nova_compute[248510]: 2025-12-13 08:41:25.461 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:25 np0005558241 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Dec 13 03:41:25 np0005558241 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d0000005f.scope: Consumed 5.635s CPU time.
Dec 13 03:41:25 np0005558241 systemd-machined[210538]: Machine qemu-118-instance-0000005f terminated.
Dec 13 03:41:25 np0005558241 podman[341388]: 2025-12-13 08:41:25.570791409 +0000 UTC m=+1.758041225 container remove 2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.578 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[94619704-9147-43a1-9516-b3c16fa6322d]: (4, ('Sat Dec 13 08:41:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac (2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6)\n2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6\nSat Dec 13 08:41:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-369f7528-6571-47b6-a030-5281647e1eac (2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6)\n2142c9eeec00ee4579a195d1da079571d9778cb57ceefd1919ef04966de790a6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.579 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0dcba0-3a1f-494e-8644-309b75064070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.581 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap369f7528-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:25 np0005558241 nova_compute[248510]: 2025-12-13 08:41:25.584 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:25 np0005558241 nova_compute[248510]: 2025-12-13 08:41:25.600 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:25 np0005558241 kernel: tap369f7528-60: left promiscuous mode
Dec 13 03:41:25 np0005558241 nova_compute[248510]: 2025-12-13 08:41:25.607 248514 INFO nova.virt.libvirt.driver [-] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Instance destroyed successfully.#033[00m
Dec 13 03:41:25 np0005558241 nova_compute[248510]: 2025-12-13 08:41:25.608 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:25 np0005558241 nova_compute[248510]: 2025-12-13 08:41:25.608 248514 DEBUG nova.objects.instance [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'resources' on Instance uuid f5485560-d9b8-44ef-9425-57a45ac866af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.610 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b3f7e8-e6da-4fb5-8815-4f2b667646a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.622 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fe9afcd2-9e89-4f95-9b27-9383032768c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.624 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9162d51a-ce1b-40a0-8616-0a40f67c248c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.640 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6dd2f670-f2f0-45ee-a52a-723589349837]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763096, 'reachable_time': 31742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341421, 'error': None, 'target': 'ovnmeta-369f7528-6571-47b6-a030-5281647e1eac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.643 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-369f7528-6571-47b6-a030-5281647e1eac deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.643 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[dbcdc5c8-3909-4c69-a249-e660d08354c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.644 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 685c5b77-ceb9-42c7-86cb-933b381677ba in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 unbound from our chassis#033[00m
Dec 13 03:41:25 np0005558241 systemd[1]: run-netns-ovnmeta\x2d369f7528\x2d6571\x2d47b6\x2da030\x2d5281647e1eac.mount: Deactivated successfully.
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.645 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3#033[00m
Dec 13 03:41:25 np0005558241 nova_compute[248510]: 2025-12-13 08:41:25.656 248514 DEBUG nova.virt.libvirt.vif [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:41:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1093776704',display_name='tempest-ServersTestJSON-server-1093776704',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1093776704',id=95,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:41:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-8z05o414',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:41:20Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=f5485560-d9b8-44ef-9425-57a45ac866af,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:41:25 np0005558241 nova_compute[248510]: 2025-12-13 08:41:25.657 248514 DEBUG nova.network.os_vif_util [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "685c5b77-ceb9-42c7-86cb-933b381677ba", "address": "fa:16:3e:c7:03:3c", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap685c5b77-ce", "ovs_interfaceid": "685c5b77-ceb9-42c7-86cb-933b381677ba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:41:25 np0005558241 nova_compute[248510]: 2025-12-13 08:41:25.657 248514 DEBUG nova.network.os_vif_util [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:03:3c,bridge_name='br-int',has_traffic_filtering=True,id=685c5b77-ceb9-42c7-86cb-933b381677ba,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685c5b77-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:41:25 np0005558241 nova_compute[248510]: 2025-12-13 08:41:25.658 248514 DEBUG os_vif [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:03:3c,bridge_name='br-int',has_traffic_filtering=True,id=685c5b77-ceb9-42c7-86cb-933b381677ba,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685c5b77-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:41:25 np0005558241 nova_compute[248510]: 2025-12-13 08:41:25.659 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:25 np0005558241 nova_compute[248510]: 2025-12-13 08:41:25.659 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap685c5b77-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:25 np0005558241 nova_compute[248510]: 2025-12-13 08:41:25.661 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:25 np0005558241 nova_compute[248510]: 2025-12-13 08:41:25.662 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.662 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[41558524-2254-4708-b4e3-771376eafc29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:25 np0005558241 nova_compute[248510]: 2025-12-13 08:41:25.664 248514 INFO os_vif [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:03:3c,bridge_name='br-int',has_traffic_filtering=True,id=685c5b77-ceb9-42c7-86cb-933b381677ba,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap685c5b77-ce')#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.688 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0a937081-8f3e-48e5-afa6-5e65028a4cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.691 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[349fa624-4ddd-44d5-a31b-1b4494a53825]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.721 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5811524a-4c6b-4f61-8e2a-135959980820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.738 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0fcf9cb9-f2c7-488c-869d-331d8af832e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341446, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.762 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a1864ff4-40c3-418d-a096-6c858dd27182]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341447, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341447, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.764 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:25 np0005558241 nova_compute[248510]: 2025-12-13 08:41:25.766 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.769 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.769 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.769 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:25.770 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:41:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2360: 321 pgs: 321 active+clean; 205 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 44 KiB/s wr, 180 op/s
Dec 13 03:41:26 np0005558241 nova_compute[248510]: 2025-12-13 08:41:26.077 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:26 np0005558241 nova_compute[248510]: 2025-12-13 08:41:26.465 248514 DEBUG nova.compute.manager [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:41:26 np0005558241 nova_compute[248510]: 2025-12-13 08:41:26.466 248514 DEBUG oslo_concurrency.lockutils [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:26 np0005558241 nova_compute[248510]: 2025-12-13 08:41:26.466 248514 DEBUG oslo_concurrency.lockutils [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:26 np0005558241 nova_compute[248510]: 2025-12-13 08:41:26.466 248514 DEBUG oslo_concurrency.lockutils [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:26 np0005558241 nova_compute[248510]: 2025-12-13 08:41:26.466 248514 DEBUG nova.compute.manager [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] No waiting events found dispatching network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:41:26 np0005558241 nova_compute[248510]: 2025-12-13 08:41:26.466 248514 WARNING nova.compute.manager [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received unexpected event network-vif-plugged-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:41:26 np0005558241 nova_compute[248510]: 2025-12-13 08:41:26.467 248514 DEBUG nova.compute.manager [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Received event network-vif-unplugged-685c5b77-ceb9-42c7-86cb-933b381677ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:41:26 np0005558241 nova_compute[248510]: 2025-12-13 08:41:26.467 248514 DEBUG oslo_concurrency.lockutils [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:26 np0005558241 nova_compute[248510]: 2025-12-13 08:41:26.467 248514 DEBUG oslo_concurrency.lockutils [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:26 np0005558241 nova_compute[248510]: 2025-12-13 08:41:26.467 248514 DEBUG oslo_concurrency.lockutils [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:26 np0005558241 nova_compute[248510]: 2025-12-13 08:41:26.468 248514 DEBUG nova.compute.manager [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] No waiting events found dispatching network-vif-unplugged-685c5b77-ceb9-42c7-86cb-933b381677ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:41:26 np0005558241 nova_compute[248510]: 2025-12-13 08:41:26.468 248514 DEBUG nova.compute.manager [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Received event network-vif-unplugged-685c5b77-ceb9-42c7-86cb-933b381677ba for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:41:26 np0005558241 nova_compute[248510]: 2025-12-13 08:41:26.468 248514 DEBUG nova.compute.manager [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Received event network-vif-plugged-685c5b77-ceb9-42c7-86cb-933b381677ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:41:26 np0005558241 nova_compute[248510]: 2025-12-13 08:41:26.468 248514 DEBUG oslo_concurrency.lockutils [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:26 np0005558241 nova_compute[248510]: 2025-12-13 08:41:26.468 248514 DEBUG oslo_concurrency.lockutils [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:26 np0005558241 nova_compute[248510]: 2025-12-13 08:41:26.469 248514 DEBUG oslo_concurrency.lockutils [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:26 np0005558241 nova_compute[248510]: 2025-12-13 08:41:26.469 248514 DEBUG nova.compute.manager [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] No waiting events found dispatching network-vif-plugged-685c5b77-ceb9-42c7-86cb-933b381677ba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:41:26 np0005558241 nova_compute[248510]: 2025-12-13 08:41:26.469 248514 WARNING nova.compute.manager [req-8d254648-68da-444f-a541-81aaffc9112c req-567f1f5e-c3b9-4e38-8b7c-c40fef9f6a28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Received unexpected event network-vif-plugged-685c5b77-ceb9-42c7-86cb-933b381677ba for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:41:27 np0005558241 nova_compute[248510]: 2025-12-13 08:41:27.124 248514 INFO nova.virt.libvirt.driver [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Deleting instance files /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8_del#033[00m
Dec 13 03:41:27 np0005558241 nova_compute[248510]: 2025-12-13 08:41:27.124 248514 INFO nova.virt.libvirt.driver [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Deletion of /var/lib/nova/instances/9b486227-b98c-4393-9a3c-aae3e3c419a8_del complete#033[00m
Dec 13 03:41:27 np0005558241 nova_compute[248510]: 2025-12-13 08:41:27.226 248514 INFO nova.compute.manager [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Took 4.24 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:41:27 np0005558241 nova_compute[248510]: 2025-12-13 08:41:27.227 248514 DEBUG oslo.service.loopingcall [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:41:27 np0005558241 nova_compute[248510]: 2025-12-13 08:41:27.227 248514 DEBUG nova.compute.manager [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:41:27 np0005558241 nova_compute[248510]: 2025-12-13 08:41:27.227 248514 DEBUG nova.network.neutron [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:41:27 np0005558241 nova_compute[248510]: 2025-12-13 08:41:27.275 248514 INFO nova.virt.libvirt.driver [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Deleting instance files /var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af_del#033[00m
Dec 13 03:41:27 np0005558241 nova_compute[248510]: 2025-12-13 08:41:27.276 248514 INFO nova.virt.libvirt.driver [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Deletion of /var/lib/nova/instances/f5485560-d9b8-44ef-9425-57a45ac866af_del complete#033[00m
Dec 13 03:41:27 np0005558241 nova_compute[248510]: 2025-12-13 08:41:27.429 248514 INFO nova.compute.manager [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Took 2.47 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:41:27 np0005558241 nova_compute[248510]: 2025-12-13 08:41:27.429 248514 DEBUG oslo.service.loopingcall [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:41:27 np0005558241 nova_compute[248510]: 2025-12-13 08:41:27.430 248514 DEBUG nova.compute.manager [-] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:41:27 np0005558241 nova_compute[248510]: 2025-12-13 08:41:27.430 248514 DEBUG nova.network.neutron [-] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:41:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2361: 321 pgs: 321 active+clean; 205 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 44 KiB/s wr, 180 op/s
Dec 13 03:41:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:41:28 np0005558241 nova_compute[248510]: 2025-12-13 08:41:28.927 248514 DEBUG nova.network.neutron [-] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:41:28 np0005558241 nova_compute[248510]: 2025-12-13 08:41:28.955 248514 INFO nova.compute.manager [-] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Took 1.52 seconds to deallocate network for instance.#033[00m
Dec 13 03:41:29 np0005558241 nova_compute[248510]: 2025-12-13 08:41:29.049 248514 DEBUG oslo_concurrency.lockutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:29 np0005558241 nova_compute[248510]: 2025-12-13 08:41:29.049 248514 DEBUG oslo_concurrency.lockutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:29 np0005558241 nova_compute[248510]: 2025-12-13 08:41:29.167 248514 DEBUG oslo_concurrency.processutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:29 np0005558241 nova_compute[248510]: 2025-12-13 08:41:29.239 248514 DEBUG nova.compute.manager [req-f851926b-3c58-4bc4-89fd-0c732140294b req-6ae919cf-39c3-421c-a3bb-0bfa95df46cc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Received event network-vif-deleted-685c5b77-ceb9-42c7-86cb-933b381677ba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:41:29 np0005558241 nova_compute[248510]: 2025-12-13 08:41:29.240 248514 DEBUG nova.network.neutron [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:41:29 np0005558241 nova_compute[248510]: 2025-12-13 08:41:29.272 248514 INFO nova.compute.manager [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Took 2.04 seconds to deallocate network for instance.#033[00m
Dec 13 03:41:29 np0005558241 nova_compute[248510]: 2025-12-13 08:41:29.394 248514 DEBUG oslo_concurrency.lockutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:29 np0005558241 nova_compute[248510]: 2025-12-13 08:41:29.616 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615274.6148403, 658e5f04-399b-4a8a-8680-5ae9717949c0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:41:29 np0005558241 nova_compute[248510]: 2025-12-13 08:41:29.616 248514 INFO nova.compute.manager [-] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:41:29 np0005558241 nova_compute[248510]: 2025-12-13 08:41:29.639 248514 DEBUG nova.compute.manager [None req-0215539b-c72d-461a-bfff-97fd15ede42b - - - - - -] [instance: 658e5f04-399b-4a8a-8680-5ae9717949c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:41:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:41:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4163737752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:41:29 np0005558241 nova_compute[248510]: 2025-12-13 08:41:29.745 248514 DEBUG oslo_concurrency.processutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:29 np0005558241 nova_compute[248510]: 2025-12-13 08:41:29.751 248514 DEBUG nova.compute.provider_tree [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:41:29 np0005558241 nova_compute[248510]: 2025-12-13 08:41:29.770 248514 DEBUG nova.scheduler.client.report [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:41:29 np0005558241 nova_compute[248510]: 2025-12-13 08:41:29.805 248514 DEBUG oslo_concurrency.lockutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:29 np0005558241 nova_compute[248510]: 2025-12-13 08:41:29.807 248514 DEBUG oslo_concurrency.lockutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2362: 321 pgs: 321 active+clean; 142 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 37 KiB/s wr, 154 op/s
Dec 13 03:41:29 np0005558241 nova_compute[248510]: 2025-12-13 08:41:29.837 248514 INFO nova.scheduler.client.report [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Deleted allocations for instance f5485560-d9b8-44ef-9425-57a45ac866af#033[00m
Dec 13 03:41:29 np0005558241 nova_compute[248510]: 2025-12-13 08:41:29.923 248514 DEBUG oslo_concurrency.processutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:29 np0005558241 nova_compute[248510]: 2025-12-13 08:41:29.987 248514 DEBUG oslo_concurrency.lockutils [None req-ebcac027-5a41-47e2-8ce4-df7a65585f5d 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "f5485560-d9b8-44ef-9425-57a45ac866af" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:30 np0005558241 nova_compute[248510]: 2025-12-13 08:41:30.662 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:41:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4051534305' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:41:30 np0005558241 nova_compute[248510]: 2025-12-13 08:41:30.839 248514 DEBUG oslo_concurrency.processutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.915s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:30 np0005558241 nova_compute[248510]: 2025-12-13 08:41:30.845 248514 DEBUG nova.compute.provider_tree [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:41:30 np0005558241 nova_compute[248510]: 2025-12-13 08:41:30.872 248514 DEBUG nova.scheduler.client.report [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:41:30 np0005558241 nova_compute[248510]: 2025-12-13 08:41:30.905 248514 DEBUG oslo_concurrency.lockutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:30 np0005558241 nova_compute[248510]: 2025-12-13 08:41:30.954 248514 INFO nova.scheduler.client.report [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Deleted allocations for instance 9b486227-b98c-4393-9a3c-aae3e3c419a8#033[00m
Dec 13 03:41:31 np0005558241 nova_compute[248510]: 2025-12-13 08:41:31.046 248514 DEBUG oslo_concurrency.lockutils [None req-91b7155b-1067-4b01-be60-1cd0641868b7 7507939da64e4320a1c6f389d0fc9045 f0aee359fbaa484eb7ead3f81eef51e7 - - default default] Lock "9b486227-b98c-4393-9a3c-aae3e3c419a8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:31 np0005558241 nova_compute[248510]: 2025-12-13 08:41:31.118 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:31 np0005558241 nova_compute[248510]: 2025-12-13 08:41:31.338 248514 DEBUG nova.compute.manager [req-9e4097a7-463d-42e7-8761-723b3bb406dc req-fef85a29-f142-432f-b0f0-d8740d7cd5ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Received event network-vif-deleted-ea5aafe7-a7e1-4e65-9df5-b93bf8a00a01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:41:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2363: 321 pgs: 321 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 130 op/s
Dec 13 03:41:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:41:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2364: 321 pgs: 321 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 127 op/s
Dec 13 03:41:34 np0005558241 nova_compute[248510]: 2025-12-13 08:41:34.568 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "58550847-ec24-44b6-a066-642db86841ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:34 np0005558241 nova_compute[248510]: 2025-12-13 08:41:34.568 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:34 np0005558241 nova_compute[248510]: 2025-12-13 08:41:34.588 248514 DEBUG nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:41:34 np0005558241 nova_compute[248510]: 2025-12-13 08:41:34.670 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:34 np0005558241 nova_compute[248510]: 2025-12-13 08:41:34.671 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:34 np0005558241 nova_compute[248510]: 2025-12-13 08:41:34.679 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:41:34 np0005558241 nova_compute[248510]: 2025-12-13 08:41:34.679 248514 INFO nova.compute.claims [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:41:34 np0005558241 nova_compute[248510]: 2025-12-13 08:41:34.920 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:41:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3805570996' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:41:35 np0005558241 nova_compute[248510]: 2025-12-13 08:41:35.521 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:35 np0005558241 nova_compute[248510]: 2025-12-13 08:41:35.528 248514 DEBUG nova.compute.provider_tree [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:41:35 np0005558241 nova_compute[248510]: 2025-12-13 08:41:35.550 248514 DEBUG nova.scheduler.client.report [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:41:35 np0005558241 nova_compute[248510]: 2025-12-13 08:41:35.573 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:35 np0005558241 nova_compute[248510]: 2025-12-13 08:41:35.573 248514 DEBUG nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:41:35 np0005558241 nova_compute[248510]: 2025-12-13 08:41:35.640 248514 DEBUG nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:41:35 np0005558241 nova_compute[248510]: 2025-12-13 08:41:35.641 248514 DEBUG nova.network.neutron [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:41:35 np0005558241 nova_compute[248510]: 2025-12-13 08:41:35.704 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:35 np0005558241 nova_compute[248510]: 2025-12-13 08:41:35.705 248514 INFO nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:41:35 np0005558241 nova_compute[248510]: 2025-12-13 08:41:35.727 248514 DEBUG nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:41:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2365: 321 pgs: 321 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 14 KiB/s wr, 91 op/s
Dec 13 03:41:35 np0005558241 nova_compute[248510]: 2025-12-13 08:41:35.839 248514 DEBUG nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:41:35 np0005558241 nova_compute[248510]: 2025-12-13 08:41:35.840 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:41:35 np0005558241 nova_compute[248510]: 2025-12-13 08:41:35.841 248514 INFO nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Creating image(s)
Dec 13 03:41:35 np0005558241 nova_compute[248510]: 2025-12-13 08:41:35.867 248514 DEBUG nova.storage.rbd_utils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 58550847-ec24-44b6-a066-642db86841ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:41:35 np0005558241 nova_compute[248510]: 2025-12-13 08:41:35.895 248514 DEBUG nova.storage.rbd_utils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 58550847-ec24-44b6-a066-642db86841ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:41:35 np0005558241 nova_compute[248510]: 2025-12-13 08:41:35.929 248514 DEBUG nova.storage.rbd_utils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 58550847-ec24-44b6-a066-642db86841ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:41:35 np0005558241 nova_compute[248510]: 2025-12-13 08:41:35.935 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:41:36 np0005558241 nova_compute[248510]: 2025-12-13 08:41:36.014 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:41:36 np0005558241 nova_compute[248510]: 2025-12-13 08:41:36.016 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:41:36 np0005558241 nova_compute[248510]: 2025-12-13 08:41:36.016 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:41:36 np0005558241 nova_compute[248510]: 2025-12-13 08:41:36.017 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:41:36 np0005558241 nova_compute[248510]: 2025-12-13 08:41:36.038 248514 DEBUG nova.storage.rbd_utils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 58550847-ec24-44b6-a066-642db86841ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:41:36 np0005558241 nova_compute[248510]: 2025-12-13 08:41:36.041 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 58550847-ec24-44b6-a066-642db86841ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:41:36 np0005558241 nova_compute[248510]: 2025-12-13 08:41:36.122 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:41:36 np0005558241 nova_compute[248510]: 2025-12-13 08:41:36.399 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 58550847-ec24-44b6-a066-642db86841ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:41:36 np0005558241 nova_compute[248510]: 2025-12-13 08:41:36.432 248514 DEBUG nova.policy [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3fcdbbf0ede24d3e820e927d9136712b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:41:36 np0005558241 nova_compute[248510]: 2025-12-13 08:41:36.466 248514 DEBUG nova.storage.rbd_utils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] resizing rbd image 58550847-ec24-44b6-a066-642db86841ce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:41:36 np0005558241 nova_compute[248510]: 2025-12-13 08:41:36.555 248514 DEBUG nova.objects.instance [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'migration_context' on Instance uuid 58550847-ec24-44b6-a066-642db86841ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:41:36 np0005558241 nova_compute[248510]: 2025-12-13 08:41:36.765 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:41:36 np0005558241 nova_compute[248510]: 2025-12-13 08:41:36.766 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Ensure instance console log exists: /var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:41:36 np0005558241 nova_compute[248510]: 2025-12-13 08:41:36.766 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:41:36 np0005558241 nova_compute[248510]: 2025-12-13 08:41:36.766 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:41:36 np0005558241 nova_compute[248510]: 2025-12-13 08:41:36.767 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:41:36 np0005558241 podman[341685]: 2025-12-13 08:41:36.980008722 +0000 UTC m=+0.059906182 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 13 03:41:36 np0005558241 podman[341684]: 2025-12-13 08:41:36.994995905 +0000 UTC m=+0.074898345 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 03:41:37 np0005558241 podman[341683]: 2025-12-13 08:41:37.016886629 +0000 UTC m=+0.096286976 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller)
Dec 13 03:41:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2366: 321 pgs: 321 active+clean; 121 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.0 KiB/s wr, 41 op/s
Dec 13 03:41:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:41:38 np0005558241 nova_compute[248510]: 2025-12-13 08:41:38.421 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615283.4194164, 9b486227-b98c-4393-9a3c-aae3e3c419a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:41:38 np0005558241 nova_compute[248510]: 2025-12-13 08:41:38.421 248514 INFO nova.compute.manager [-] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] VM Stopped (Lifecycle Event)
Dec 13 03:41:38 np0005558241 nova_compute[248510]: 2025-12-13 08:41:38.470 248514 DEBUG nova.network.neutron [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Successfully created port: 0f023371-37d6-4847-a15a-fa86a3f0a3f5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 03:41:38 np0005558241 nova_compute[248510]: 2025-12-13 08:41:38.493 248514 DEBUG nova.compute.manager [None req-a49f4f48-0949-40ff-9eab-6f80881d1690 - - - - - -] [instance: 9b486227-b98c-4393-9a3c-aae3e3c419a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:41:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2367: 321 pgs: 321 active+clean; 150 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.1 MiB/s wr, 55 op/s
Dec 13 03:41:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:41:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:41:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:41:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:41:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:41:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:41:40 np0005558241 nova_compute[248510]: 2025-12-13 08:41:40.605 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615285.604048, f5485560-d9b8-44ef-9425-57a45ac866af => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:41:40 np0005558241 nova_compute[248510]: 2025-12-13 08:41:40.606 248514 INFO nova.compute.manager [-] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] VM Stopped (Lifecycle Event)
Dec 13 03:41:40 np0005558241 nova_compute[248510]: 2025-12-13 08:41:40.629 248514 DEBUG nova.compute.manager [None req-d23f33a4-6df9-40f4-bcc2-07bc5f2211f4 - - - - - -] [instance: f5485560-d9b8-44ef-9425-57a45ac866af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:41:40 np0005558241 nova_compute[248510]: 2025-12-13 08:41:40.707 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:41:41 np0005558241 nova_compute[248510]: 2025-12-13 08:41:41.123 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:41:41 np0005558241 nova_compute[248510]: 2025-12-13 08:41:41.462 248514 DEBUG nova.network.neutron [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Successfully updated port: 0f023371-37d6-4847-a15a-fa86a3f0a3f5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 03:41:41 np0005558241 nova_compute[248510]: 2025-12-13 08:41:41.486 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "refresh_cache-58550847-ec24-44b6-a066-642db86841ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:41:41 np0005558241 nova_compute[248510]: 2025-12-13 08:41:41.487 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquired lock "refresh_cache-58550847-ec24-44b6-a066-642db86841ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:41:41 np0005558241 nova_compute[248510]: 2025-12-13 08:41:41.487 248514 DEBUG nova.network.neutron [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:41:41 np0005558241 nova_compute[248510]: 2025-12-13 08:41:41.589 248514 DEBUG nova.compute.manager [req-9bc21949-1334-47b1-8df0-b2d233463a62 req-8cb9322b-4b68-4227-ba78-cd1652ab117a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Received event network-changed-0f023371-37d6-4847-a15a-fa86a3f0a3f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:41:41 np0005558241 nova_compute[248510]: 2025-12-13 08:41:41.590 248514 DEBUG nova.compute.manager [req-9bc21949-1334-47b1-8df0-b2d233463a62 req-8cb9322b-4b68-4227-ba78-cd1652ab117a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Refreshing instance network info cache due to event network-changed-0f023371-37d6-4847-a15a-fa86a3f0a3f5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 03:41:41 np0005558241 nova_compute[248510]: 2025-12-13 08:41:41.590 248514 DEBUG oslo_concurrency.lockutils [req-9bc21949-1334-47b1-8df0-b2d233463a62 req-8cb9322b-4b68-4227-ba78-cd1652ab117a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-58550847-ec24-44b6-a066-642db86841ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:41:41 np0005558241 nova_compute[248510]: 2025-12-13 08:41:41.706 248514 DEBUG nova.network.neutron [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:41:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:41Z|00943|binding|INFO|Releasing lport 0f8d26a1-147d-466a-8164-8d2036166124 from this chassis (sb_readonly=0)
Dec 13 03:41:41 np0005558241 nova_compute[248510]: 2025-12-13 08:41:41.806 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:41:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2368: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec 13 03:41:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:41Z|00944|binding|INFO|Releasing lport 0f8d26a1-147d-466a-8164-8d2036166124 from this chassis (sb_readonly=0)
Dec 13 03:41:41 np0005558241 nova_compute[248510]: 2025-12-13 08:41:41.900 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:41:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:41:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2369: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.649 248514 DEBUG nova.network.neutron [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Updating instance_info_cache with network_info: [{"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.893 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Releasing lock "refresh_cache-58550847-ec24-44b6-a066-642db86841ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.893 248514 DEBUG nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Instance network_info: |[{"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.893 248514 DEBUG oslo_concurrency.lockutils [req-9bc21949-1334-47b1-8df0-b2d233463a62 req-8cb9322b-4b68-4227-ba78-cd1652ab117a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-58550847-ec24-44b6-a066-642db86841ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.894 248514 DEBUG nova.network.neutron [req-9bc21949-1334-47b1-8df0-b2d233463a62 req-8cb9322b-4b68-4227-ba78-cd1652ab117a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Refreshing network info cache for port 0f023371-37d6-4847-a15a-fa86a3f0a3f5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.897 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Start _get_guest_xml network_info=[{"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.903 248514 WARNING nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.921 248514 DEBUG nova.virt.libvirt.host [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.922 248514 DEBUG nova.virt.libvirt.host [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.928 248514 DEBUG nova.virt.libvirt.host [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.928 248514 DEBUG nova.virt.libvirt.host [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
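The two probes above show nova's `host.py` first failing to find a cgroups v1 CPU controller and then finding one via cgroups v2. On a v2 host that check ultimately comes down to whether `cpu` appears in `/sys/fs/cgroup/cgroup.controllers`, a standard kernel interface. A minimal sketch (function names are hypothetical, not nova's):

```python
from pathlib import Path

def cpu_controller_listed(controllers_text: str) -> bool:
    # cgroup.controllers holds one space-separated line of enabled
    # controllers, e.g. "cpuset cpu io memory pids".
    return "cpu" in controllers_text.split()

def host_has_cgroupsv2_cpu_controller(root: str = "/sys/fs/cgroup") -> bool:
    # On a pure-v1 host this file does not exist, which mirrors the
    # "CPU controller missing" / "found" pair of log lines above.
    path = Path(root) / "cgroup.controllers"
    return path.is_file() and cpu_controller_listed(path.read_text())
```

Note the exact-word split: a host exposing only `cpuset` must not count as having the `cpu` controller.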
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.929 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.929 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.930 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.930 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.930 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.930 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.931 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.931 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.931 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.932 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.932 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.932 248514 DEBUG nova.virt.hardware [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
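The topology lines above walk from "Build topologies for 1 vcpu(s)" with limits of 65536 per dimension down to a single candidate `VirtCPUTopology(cores=1,sockets=1,threads=1)`. The enumeration nova performs can be sketched as finding every (sockets, cores, threads) triple whose product equals the vCPU count, within the per-dimension limits — a simplified stand-in for `nova.virt.hardware._get_possible_cpu_topologies`, not the actual implementation:

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Topology:
    sockets: int
    cores: int
    threads: int

def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
    # No dimension can exceed the vCPU count, so capping each range at
    # vcpus keeps the search small even with 65536-wide limits.
    dim = lambda m: range(1, min(m, vcpus) + 1)
    return [Topology(s, c, t)
            for s, c, t in product(dim(max_sockets), dim(max_cores),
                                   dim(max_threads))
            if s * c * t == vcpus]
```

For the 1-vCPU `m1.nano` flavor in the log, the only solution is 1:1:1, matching the "Got 1 possible topologies" line.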
Dec 13 03:41:44 np0005558241 nova_compute[248510]: 2025-12-13 08:41:44.937 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:41:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/179130776' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:41:45 np0005558241 nova_compute[248510]: 2025-12-13 08:41:45.562 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
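The paired "Running cmd (subprocess)" / "returned: 0 in 0.626s" lines come from `oslo_concurrency.processutils.execute` shelling out to the `ceph` CLI. A rough stand-in showing the same run-and-time pattern (this is an illustrative sketch, not oslo's API, whose real signature differs):

```python
import shlex
import subprocess
import time

def execute(cmd: str):
    # Split the command line, run it without a shell, and report the
    # return code, stdout, and elapsed seconds -- the three facts the
    # log lines above record for each ceph invocation.
    start = time.monotonic()
    proc = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
    return proc.returncode, proc.stdout, time.monotonic() - start
```

In the log, each such call also shows up on the ceph-mon side as a `mon_command` dispatch audited for `client.openstack`.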
Dec 13 03:41:45 np0005558241 nova_compute[248510]: 2025-12-13 08:41:45.586 248514 DEBUG nova.storage.rbd_utils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 58550847-ec24-44b6-a066-642db86841ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:41:45 np0005558241 nova_compute[248510]: 2025-12-13 08:41:45.590 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:45 np0005558241 nova_compute[248510]: 2025-12-13 08:41:45.710 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2370: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.125 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:41:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/738661585' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.160 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.161 248514 DEBUG nova.virt.libvirt.vif [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:41:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1050406831',display_name='tempest-ServersTestJSON-server-1050406831',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1050406831',id=96,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMYqQi2FIq7Gsymh9E4l7vrJYWPFYm4cOxQBvQKtSgT87c9NTs3K+3wQ1sFvpE/1xNj+RqzGI4ZZeSDRTg9ddxk3LWkpDXgbT9zZytcv9YGDwfjOIh6BYT6dInWo/hgQVA==',key_name='tempest-key-912387218',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-nljn6049',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:41:35Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=58550847-ec24-44b6-a066-642db86841ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.162 248514 DEBUG nova.network.os_vif_util [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.163 248514 DEBUG nova.network.os_vif_util [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:bf:1d,bridge_name='br-int',has_traffic_filtering=True,id=0f023371-37d6-4847-a15a-fa86a3f0a3f5,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f023371-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
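The "Converting VIF … Converted object VIFOpenVSwitch(…)" pair above shows `os_vif_util` reducing nova's large VIF dict to the handful of fields os-vif needs. A minimal sketch of that reduction (the function name is hypothetical; the real converter builds an `os_vif` object rather than a dict):

```python
import json

def summarize_ovs_vif(vif_json: str) -> dict:
    # Keep only the fields visible in the VIFOpenVSwitch repr in the log:
    # port id, MAC, integration bridge, tap device name, and whether the
    # port does its own traffic filtering.
    vif = json.loads(vif_json)
    return {
        "id": vif["id"],
        "address": vif["address"],
        "bridge_name": vif["details"]["bridge_name"],
        "vif_name": vif["devname"],
        "has_traffic_filtering": vif["details"]["port_filter"],
    }
```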
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.164 248514 DEBUG nova.objects.instance [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'pci_devices' on Instance uuid 58550847-ec24-44b6-a066-642db86841ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.201 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:41:46 np0005558241 nova_compute[248510]:  <uuid>58550847-ec24-44b6-a066-642db86841ce</uuid>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:  <name>instance-00000060</name>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersTestJSON-server-1050406831</nova:name>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:41:44</nova:creationTime>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:41:46 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:        <nova:user uuid="3fcdbbf0ede24d3e820e927d9136712b">tempest-ServersTestJSON-1443479207-project-member</nova:user>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:        <nova:project uuid="3fb4f27cdc7f468faa36b4e7a461d7be">tempest-ServersTestJSON-1443479207</nova:project>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:        <nova:port uuid="0f023371-37d6-4847-a15a-fa86a3f0a3f5">
Dec 13 03:41:46 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <entry name="serial">58550847-ec24-44b6-a066-642db86841ce</entry>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <entry name="uuid">58550847-ec24-44b6-a066-642db86841ce</entry>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/58550847-ec24-44b6-a066-642db86841ce_disk">
Dec 13 03:41:46 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:41:46 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/58550847-ec24-44b6-a066-642db86841ce_disk.config">
Dec 13 03:41:46 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:41:46 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:75:bf:1d"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <target dev="tap0f023371-37"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce/console.log" append="off"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:41:46 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:41:46 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:41:46 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:41:46 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
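The domain XML dumped above defines two `type="network"` disks backed by RBD (`vms/<uuid>_disk` and the `_disk.config` cdrom). When inspecting such a dump, the disk backends can be pulled out with the standard library's ElementTree; a small sketch (helper name is hypothetical):

```python
import xml.etree.ElementTree as ET

def network_disk_sources(domain_xml: str):
    # Return (protocol, source name) for every network-backed disk in a
    # libvirt domain XML, e.g. the RBD vda/sda devices in the dump above.
    root = ET.fromstring(domain_xml)
    sources = []
    for disk in root.findall("./devices/disk[@type='network']"):
        src = disk.find("source")
        if src is not None:
            sources.append((src.get("protocol"), src.get("name")))
    return sources
```

Against the XML in this log, this would yield the `vms/58550847-…_disk` and `vms/58550847-…_disk.config` RBD images, matching the `rbd_utils` lookups earlier in the record.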
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.202 248514 DEBUG nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Preparing to wait for external event network-vif-plugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.203 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "58550847-ec24-44b6-a066-642db86841ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.203 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.204 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.204 248514 DEBUG nova.virt.libvirt.vif [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:41:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1050406831',display_name='tempest-ServersTestJSON-server-1050406831',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1050406831',id=96,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMYqQi2FIq7Gsymh9E4l7vrJYWPFYm4cOxQBvQKtSgT87c9NTs3K+3wQ1sFvpE/1xNj+RqzGI4ZZeSDRTg9ddxk3LWkpDXgbT9zZytcv9YGDwfjOIh6BYT6dInWo/hgQVA==',key_name='tempest-key-912387218',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-nljn6049',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:41:35Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=58550847-ec24-44b6-a066-642db86841ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.205 248514 DEBUG nova.network.os_vif_util [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.206 248514 DEBUG nova.network.os_vif_util [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:bf:1d,bridge_name='br-int',has_traffic_filtering=True,id=0f023371-37d6-4847-a15a-fa86a3f0a3f5,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f023371-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.206 248514 DEBUG os_vif [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:bf:1d,bridge_name='br-int',has_traffic_filtering=True,id=0f023371-37d6-4847-a15a-fa86a3f0a3f5,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f023371-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.207 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.207 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.208 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.211 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.211 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f023371-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.212 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0f023371-37, col_values=(('external_ids', {'iface-id': '0f023371-37d6-4847-a15a-fa86a3f0a3f5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:bf:1d', 'vm-uuid': '58550847-ec24-44b6-a066-642db86841ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.213 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:46 np0005558241 NetworkManager[50376]: <info>  [1765615306.2146] manager: (tap0f023371-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.216 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.218 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.219 248514 INFO os_vif [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:bf:1d,bridge_name='br-int',has_traffic_filtering=True,id=0f023371-37d6-4847-a15a-fa86a3f0a3f5,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f023371-37')#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.366 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.367 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.367 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No VIF found with MAC fa:16:3e:75:bf:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.368 248514 INFO nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Using config drive#033[00m
Dec 13 03:41:46 np0005558241 nova_compute[248510]: 2025-12-13 08:41:46.385 248514 DEBUG nova.storage.rbd_utils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 58550847-ec24-44b6-a066-642db86841ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:41:47 np0005558241 nova_compute[248510]: 2025-12-13 08:41:47.362 248514 INFO nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Creating config drive at /var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce/disk.config#033[00m
Dec 13 03:41:47 np0005558241 nova_compute[248510]: 2025-12-13 08:41:47.367 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp500hl267 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:47 np0005558241 nova_compute[248510]: 2025-12-13 08:41:47.512 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp500hl267" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:47 np0005558241 nova_compute[248510]: 2025-12-13 08:41:47.537 248514 DEBUG nova.storage.rbd_utils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 58550847-ec24-44b6-a066-642db86841ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:41:47 np0005558241 nova_compute[248510]: 2025-12-13 08:41:47.541 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce/disk.config 58550847-ec24-44b6-a066-642db86841ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2371: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:41:47 np0005558241 nova_compute[248510]: 2025-12-13 08:41:47.993 248514 DEBUG nova.network.neutron [req-9bc21949-1334-47b1-8df0-b2d233463a62 req-8cb9322b-4b68-4227-ba78-cd1652ab117a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Updated VIF entry in instance network info cache for port 0f023371-37d6-4847-a15a-fa86a3f0a3f5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:41:47 np0005558241 nova_compute[248510]: 2025-12-13 08:41:47.994 248514 DEBUG nova.network.neutron [req-9bc21949-1334-47b1-8df0-b2d233463a62 req-8cb9322b-4b68-4227-ba78-cd1652ab117a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Updating instance_info_cache with network_info: [{"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:41:48 np0005558241 nova_compute[248510]: 2025-12-13 08:41:48.021 248514 DEBUG oslo_concurrency.lockutils [req-9bc21949-1334-47b1-8df0-b2d233463a62 req-8cb9322b-4b68-4227-ba78-cd1652ab117a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-58550847-ec24-44b6-a066-642db86841ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:41:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:41:48 np0005558241 nova_compute[248510]: 2025-12-13 08:41:48.524 248514 DEBUG oslo_concurrency.processutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce/disk.config 58550847-ec24-44b6-a066-642db86841ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.983s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:48 np0005558241 nova_compute[248510]: 2025-12-13 08:41:48.525 248514 INFO nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Deleting local config drive /var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce/disk.config because it was imported into RBD.#033[00m
Dec 13 03:41:48 np0005558241 kernel: tap0f023371-37: entered promiscuous mode
Dec 13 03:41:48 np0005558241 NetworkManager[50376]: <info>  [1765615308.5776] manager: (tap0f023371-37): new Tun device (/org/freedesktop/NetworkManager/Devices/397)
Dec 13 03:41:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:48Z|00945|binding|INFO|Claiming lport 0f023371-37d6-4847-a15a-fa86a3f0a3f5 for this chassis.
Dec 13 03:41:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:48Z|00946|binding|INFO|0f023371-37d6-4847-a15a-fa86a3f0a3f5: Claiming fa:16:3e:75:bf:1d 10.100.0.7
Dec 13 03:41:48 np0005558241 nova_compute[248510]: 2025-12-13 08:41:48.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.750 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:bf:1d 10.100.0.7'], port_security=['fa:16:3e:75:bf:1d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '58550847-ec24-44b6-a066-642db86841ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0f023371-37d6-4847-a15a-fa86a3f0a3f5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:41:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.751 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0f023371-37d6-4847-a15a-fa86a3f0a3f5 in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 bound to our chassis#033[00m
Dec 13 03:41:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:48Z|00947|binding|INFO|Setting lport 0f023371-37d6-4847-a15a-fa86a3f0a3f5 ovn-installed in OVS
Dec 13 03:41:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:48Z|00948|binding|INFO|Setting lport 0f023371-37d6-4847-a15a-fa86a3f0a3f5 up in Southbound
Dec 13 03:41:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.752 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3#033[00m
Dec 13 03:41:48 np0005558241 nova_compute[248510]: 2025-12-13 08:41:48.752 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:48 np0005558241 systemd-udevd[341880]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:41:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.768 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[43ecd0d1-5574-46d9-8cd3-71a78cafb1df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:48 np0005558241 systemd-machined[210538]: New machine qemu-119-instance-00000060.
Dec 13 03:41:48 np0005558241 NetworkManager[50376]: <info>  [1765615308.7804] device (tap0f023371-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:41:48 np0005558241 NetworkManager[50376]: <info>  [1765615308.7811] device (tap0f023371-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:41:48 np0005558241 systemd[1]: Started Virtual Machine qemu-119-instance-00000060.
Dec 13 03:41:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.798 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8a21afe0-05fe-41ce-a549-f8d15b277e38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.801 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[51e4aa6a-038a-4e62-84cf-4d7f783ea339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.831 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f892f062-a450-46d9-8214-1f9983d39605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.850 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[efcaa825-d101-4edb-9c1a-be1acb9af06a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341893, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.867 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a57fcdd7-b4e2-4d45-bbfb-6aae5c355182]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341894, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341894, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.869 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:48 np0005558241 nova_compute[248510]: 2025-12-13 08:41:48.870 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:48 np0005558241 nova_compute[248510]: 2025-12-13 08:41:48.871 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.872 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.872 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:41:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.872 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:48.872 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.150 248514 DEBUG nova.compute.manager [req-f98176a9-1311-4e19-a24a-8e770c1a33cf req-523b8da9-cc0d-44f4-a781-0f536e84b071 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Received event network-vif-plugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.151 248514 DEBUG oslo_concurrency.lockutils [req-f98176a9-1311-4e19-a24a-8e770c1a33cf req-523b8da9-cc0d-44f4-a781-0f536e84b071 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "58550847-ec24-44b6-a066-642db86841ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.151 248514 DEBUG oslo_concurrency.lockutils [req-f98176a9-1311-4e19-a24a-8e770c1a33cf req-523b8da9-cc0d-44f4-a781-0f536e84b071 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.151 248514 DEBUG oslo_concurrency.lockutils [req-f98176a9-1311-4e19-a24a-8e770c1a33cf req-523b8da9-cc0d-44f4-a781-0f536e84b071 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.151 248514 DEBUG nova.compute.manager [req-f98176a9-1311-4e19-a24a-8e770c1a33cf req-523b8da9-cc0d-44f4-a781-0f536e84b071 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Processing event network-vif-plugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:41:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2372: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.895 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615309.8946912, 58550847-ec24-44b6-a066-642db86841ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.896 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] VM Started (Lifecycle Event)#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.898 248514 DEBUG nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.903 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.907 248514 INFO nova.virt.libvirt.driver [-] [instance: 58550847-ec24-44b6-a066-642db86841ce] Instance spawned successfully.#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.908 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.932 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.937 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.948 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.949 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.949 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.950 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.951 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.951 248514 DEBUG nova.virt.libvirt.driver [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.987 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.987 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615309.8949203, 58550847-ec24-44b6-a066-642db86841ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:41:49 np0005558241 nova_compute[248510]: 2025-12-13 08:41:49.988 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:41:50 np0005558241 nova_compute[248510]: 2025-12-13 08:41:50.034 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:41:50 np0005558241 nova_compute[248510]: 2025-12-13 08:41:50.040 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615309.9022415, 58550847-ec24-44b6-a066-642db86841ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:41:50 np0005558241 nova_compute[248510]: 2025-12-13 08:41:50.040 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:41:50 np0005558241 nova_compute[248510]: 2025-12-13 08:41:50.052 248514 INFO nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Took 14.21 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:41:50 np0005558241 nova_compute[248510]: 2025-12-13 08:41:50.053 248514 DEBUG nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:41:50 np0005558241 nova_compute[248510]: 2025-12-13 08:41:50.064 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:41:50 np0005558241 nova_compute[248510]: 2025-12-13 08:41:50.067 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:41:50 np0005558241 nova_compute[248510]: 2025-12-13 08:41:50.102 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:41:50 np0005558241 nova_compute[248510]: 2025-12-13 08:41:50.150 248514 INFO nova.compute.manager [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Took 15.50 seconds to build instance.#033[00m
Dec 13 03:41:50 np0005558241 nova_compute[248510]: 2025-12-13 08:41:50.173 248514 DEBUG oslo_concurrency.lockutils [None req-09a40d7c-0395-45a7-8aaa-e77d6266ad38 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:51 np0005558241 nova_compute[248510]: 2025-12-13 08:41:51.140 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:51 np0005558241 nova_compute[248510]: 2025-12-13 08:41:51.214 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:51 np0005558241 nova_compute[248510]: 2025-12-13 08:41:51.340 248514 DEBUG nova.compute.manager [req-44151237-8a05-434b-a4f3-04a56463d530 req-351d9ed9-65a5-483b-8d3a-b81eade81691 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Received event network-vif-plugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:41:51 np0005558241 nova_compute[248510]: 2025-12-13 08:41:51.341 248514 DEBUG oslo_concurrency.lockutils [req-44151237-8a05-434b-a4f3-04a56463d530 req-351d9ed9-65a5-483b-8d3a-b81eade81691 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "58550847-ec24-44b6-a066-642db86841ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:51 np0005558241 nova_compute[248510]: 2025-12-13 08:41:51.341 248514 DEBUG oslo_concurrency.lockutils [req-44151237-8a05-434b-a4f3-04a56463d530 req-351d9ed9-65a5-483b-8d3a-b81eade81691 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:51 np0005558241 nova_compute[248510]: 2025-12-13 08:41:51.342 248514 DEBUG oslo_concurrency.lockutils [req-44151237-8a05-434b-a4f3-04a56463d530 req-351d9ed9-65a5-483b-8d3a-b81eade81691 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:51 np0005558241 nova_compute[248510]: 2025-12-13 08:41:51.342 248514 DEBUG nova.compute.manager [req-44151237-8a05-434b-a4f3-04a56463d530 req-351d9ed9-65a5-483b-8d3a-b81eade81691 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] No waiting events found dispatching network-vif-plugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:41:51 np0005558241 nova_compute[248510]: 2025-12-13 08:41:51.342 248514 WARNING nova.compute.manager [req-44151237-8a05-434b-a4f3-04a56463d530 req-351d9ed9-65a5-483b-8d3a-b81eade81691 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Received unexpected event network-vif-plugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:41:51 np0005558241 nova_compute[248510]: 2025-12-13 08:41:51.742 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Acquiring lock "e7ecf467-c27c-44d4-b072-2537dba749e2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:51 np0005558241 nova_compute[248510]: 2025-12-13 08:41:51.743 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "e7ecf467-c27c-44d4-b072-2537dba749e2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:51 np0005558241 nova_compute[248510]: 2025-12-13 08:41:51.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:41:51 np0005558241 nova_compute[248510]: 2025-12-13 08:41:51.765 248514 DEBUG nova.compute.manager [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:41:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2373: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 663 KiB/s wr, 21 op/s
Dec 13 03:41:51 np0005558241 nova_compute[248510]: 2025-12-13 08:41:51.848 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:51 np0005558241 nova_compute[248510]: 2025-12-13 08:41:51.849 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:51 np0005558241 nova_compute[248510]: 2025-12-13 08:41:51.857 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:41:51 np0005558241 nova_compute[248510]: 2025-12-13 08:41:51.857 248514 INFO nova.compute.claims [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.028 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:41:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2351467125' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.649 248514 DEBUG oslo_concurrency.lockutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "58550847-ec24-44b6-a066-642db86841ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.651 248514 DEBUG oslo_concurrency.lockutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.651 248514 DEBUG oslo_concurrency.lockutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "58550847-ec24-44b6-a066-642db86841ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.651 248514 DEBUG oslo_concurrency.lockutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.652 248514 DEBUG oslo_concurrency.lockutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.654 248514 INFO nova.compute.manager [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Terminating instance#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.655 248514 DEBUG nova.compute.manager [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.657 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.665 248514 DEBUG nova.compute.provider_tree [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.684 248514 DEBUG nova.scheduler.client.report [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.712 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.713 248514 DEBUG nova.compute.manager [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:41:52 np0005558241 kernel: tap0f023371-37 (unregistering): left promiscuous mode
Dec 13 03:41:52 np0005558241 NetworkManager[50376]: <info>  [1765615312.7538] device (tap0f023371-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:52 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:52Z|00949|binding|INFO|Releasing lport 0f023371-37d6-4847-a15a-fa86a3f0a3f5 from this chassis (sb_readonly=0)
Dec 13 03:41:52 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:52Z|00950|binding|INFO|Setting lport 0f023371-37d6-4847-a15a-fa86a3f0a3f5 down in Southbound
Dec 13 03:41:52 np0005558241 ovn_controller[148476]: 2025-12-13T08:41:52Z|00951|binding|INFO|Removing iface tap0f023371-37 ovn-installed in OVS
Dec 13 03:41:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.770 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:bf:1d 10.100.0.7'], port_security=['fa:16:3e:75:bf:1d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '58550847-ec24-44b6-a066-642db86841ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0f023371-37d6-4847-a15a-fa86a3f0a3f5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:41:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.772 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0f023371-37d6-4847-a15a-fa86a3f0a3f5 in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 unbound from our chassis#033[00m
Dec 13 03:41:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.773 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.774 248514 DEBUG nova.compute.manager [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.782 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.790 248514 INFO nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:41:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.790 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[255c37b1-edc1-4c1f-a6ee-1282ebb232a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:52 np0005558241 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000060.scope: Deactivated successfully.
Dec 13 03:41:52 np0005558241 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000060.scope: Consumed 3.717s CPU time.
Dec 13 03:41:52 np0005558241 systemd-machined[210538]: Machine qemu-119-instance-00000060 terminated.
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.810 248514 DEBUG nova.compute.manager [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:41:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.827 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[94ddc125-51c0-448c-bdb6-a08a1b2cd594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.831 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[578bc9fd-e294-4fce-9b89-72bfca46eb2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.868 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1b27d78a-4de9-488a-be7e-5cf0f92f0616]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:52 np0005558241 kernel: tap0f023371-37: entered promiscuous mode
Dec 13 03:41:52 np0005558241 kernel: tap0f023371-37 (unregistering): left promiscuous mode
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.888 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.894 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[55ba9b0a-bb53-4f6a-b51e-f142599a4963]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341973, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.904 248514 INFO nova.virt.libvirt.driver [-] [instance: 58550847-ec24-44b6-a066-642db86841ce] Instance destroyed successfully.#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.905 248514 DEBUG nova.objects.instance [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'resources' on Instance uuid 58550847-ec24-44b6-a066-642db86841ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:41:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.915 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[72a8eacc-cfb2-490b-b36d-a55e379a5437]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341982, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341982, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:41:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.917 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.919 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.923 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.923 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.924 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:41:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.924 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:52.924 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.933 248514 DEBUG nova.virt.libvirt.vif [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:41:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1050406831',display_name='tempest-ServersTestJSON-server-1050406831',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1050406831',id=96,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMYqQi2FIq7Gsymh9E4l7vrJYWPFYm4cOxQBvQKtSgT87c9NTs3K+3wQ1sFvpE/1xNj+RqzGI4ZZeSDRTg9ddxk3LWkpDXgbT9zZytcv9YGDwfjOIh6BYT6dInWo/hgQVA==',key_name='tempest-key-912387218',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:41:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-nljn6049',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:41:50Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=58550847-ec24-44b6-a066-642db86841ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.934 248514 DEBUG nova.network.os_vif_util [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "address": "fa:16:3e:75:bf:1d", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f023371-37", "ovs_interfaceid": "0f023371-37d6-4847-a15a-fa86a3f0a3f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.935 248514 DEBUG nova.network.os_vif_util [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:bf:1d,bridge_name='br-int',has_traffic_filtering=True,id=0f023371-37d6-4847-a15a-fa86a3f0a3f5,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f023371-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.935 248514 DEBUG os_vif [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:bf:1d,bridge_name='br-int',has_traffic_filtering=True,id=0f023371-37d6-4847-a15a-fa86a3f0a3f5,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f023371-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.937 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.937 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f023371-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.939 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.940 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.942 248514 INFO os_vif [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:bf:1d,bridge_name='br-int',has_traffic_filtering=True,id=0f023371-37d6-4847-a15a-fa86a3f0a3f5,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f023371-37')#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.960 248514 DEBUG nova.compute.manager [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.962 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.962 248514 INFO nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Creating image(s)#033[00m
Dec 13 03:41:52 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.980 248514 DEBUG nova.storage.rbd_utils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] rbd image e7ecf467-c27c-44d4-b072-2537dba749e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:52.999 248514 DEBUG nova.storage.rbd_utils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] rbd image e7ecf467-c27c-44d4-b072-2537dba749e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.019 248514 DEBUG nova.storage.rbd_utils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] rbd image e7ecf467-c27c-44d4-b072-2537dba749e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.023 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.098 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.099 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.099 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.100 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.118 248514 DEBUG nova.storage.rbd_utils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] rbd image e7ecf467-c27c-44d4-b072-2537dba749e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.121 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 e7ecf467-c27c-44d4-b072-2537dba749e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:41:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:53.226 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:41:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:53.227 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.227 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.577 248514 DEBUG nova.compute.manager [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Received event network-vif-unplugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.577 248514 DEBUG oslo_concurrency.lockutils [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "58550847-ec24-44b6-a066-642db86841ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.578 248514 DEBUG oslo_concurrency.lockutils [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.578 248514 DEBUG oslo_concurrency.lockutils [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.578 248514 DEBUG nova.compute.manager [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] No waiting events found dispatching network-vif-unplugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.578 248514 DEBUG nova.compute.manager [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Received event network-vif-unplugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.578 248514 DEBUG nova.compute.manager [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Received event network-vif-plugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.579 248514 DEBUG oslo_concurrency.lockutils [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "58550847-ec24-44b6-a066-642db86841ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.579 248514 DEBUG oslo_concurrency.lockutils [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.579 248514 DEBUG oslo_concurrency.lockutils [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.579 248514 DEBUG nova.compute.manager [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] No waiting events found dispatching network-vif-plugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.579 248514 WARNING nova.compute.manager [req-5aec6a06-46c4-434f-b05e-0de17ed2fbed req-1fdbccbd-6bfa-4b3a-ab06-6cad5f0be69c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Received unexpected event network-vif-plugged-0f023371-37d6-4847-a15a-fa86a3f0a3f5 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.717 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 e7ecf467-c27c-44d4-b072-2537dba749e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.777 248514 DEBUG nova.storage.rbd_utils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] resizing rbd image e7ecf467-c27c-44d4-b072-2537dba749e2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:41:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2374: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 466 KiB/s rd, 12 KiB/s wr, 23 op/s
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.868 248514 DEBUG nova.objects.instance [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lazy-loading 'migration_context' on Instance uuid e7ecf467-c27c-44d4-b072-2537dba749e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.887 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.888 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Ensure instance console log exists: /var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.889 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.889 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.889 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.890 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.896 248514 WARNING nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.902 248514 DEBUG nova.virt.libvirt.host [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.903 248514 DEBUG nova.virt.libvirt.host [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.908 248514 DEBUG nova.virt.libvirt.host [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.908 248514 DEBUG nova.virt.libvirt.host [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.909 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.909 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.910 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.910 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.910 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.910 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.911 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.911 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.911 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.911 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.911 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.912 248514 DEBUG nova.virt.hardware [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.914 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.963 248514 INFO nova.virt.libvirt.driver [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Deleting instance files /var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce_del#033[00m
Dec 13 03:41:53 np0005558241 nova_compute[248510]: 2025-12-13 08:41:53.964 248514 INFO nova.virt.libvirt.driver [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Deletion of /var/lib/nova/instances/58550847-ec24-44b6-a066-642db86841ce_del complete#033[00m
Dec 13 03:41:54 np0005558241 nova_compute[248510]: 2025-12-13 08:41:54.058 248514 INFO nova.compute.manager [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Took 1.40 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:41:54 np0005558241 nova_compute[248510]: 2025-12-13 08:41:54.059 248514 DEBUG oslo.service.loopingcall [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:41:54 np0005558241 nova_compute[248510]: 2025-12-13 08:41:54.059 248514 DEBUG nova.compute.manager [-] [instance: 58550847-ec24-44b6-a066-642db86841ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:41:54 np0005558241 nova_compute[248510]: 2025-12-13 08:41:54.059 248514 DEBUG nova.network.neutron [-] [instance: 58550847-ec24-44b6-a066-642db86841ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:41:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:41:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2428530057' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:41:54 np0005558241 nova_compute[248510]: 2025-12-13 08:41:54.498 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:54 np0005558241 nova_compute[248510]: 2025-12-13 08:41:54.520 248514 DEBUG nova.storage.rbd_utils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] rbd image e7ecf467-c27c-44d4-b072-2537dba749e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:41:54 np0005558241 nova_compute[248510]: 2025-12-13 08:41:54.524 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:41:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3805700237' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.133 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.135 248514 DEBUG nova.objects.instance [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lazy-loading 'pci_devices' on Instance uuid e7ecf467-c27c-44d4-b072-2537dba749e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.143 248514 DEBUG nova.network.neutron [-] [instance: 58550847-ec24-44b6-a066-642db86841ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.164 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:41:55 np0005558241 nova_compute[248510]:  <uuid>e7ecf467-c27c-44d4-b072-2537dba749e2</uuid>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:  <name>instance-00000061</name>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersAaction247Test-server-2007909855</nova:name>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:41:53</nova:creationTime>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:41:55 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:        <nova:user uuid="7001a6774c58458f8817dd3d0a51c534">tempest-ServersAaction247Test-1329667427-project-member</nova:user>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:        <nova:project uuid="3789df2219b04c028efeab8c1eae2147">tempest-ServersAaction247Test-1329667427</nova:project>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <entry name="serial">e7ecf467-c27c-44d4-b072-2537dba749e2</entry>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <entry name="uuid">e7ecf467-c27c-44d4-b072-2537dba749e2</entry>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/e7ecf467-c27c-44d4-b072-2537dba749e2_disk">
Dec 13 03:41:55 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:41:55 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/e7ecf467-c27c-44d4-b072-2537dba749e2_disk.config">
Dec 13 03:41:55 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:41:55 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2/console.log" append="off"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:41:55 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:41:55 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:41:55 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:41:55 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.168 248514 INFO nova.compute.manager [-] [instance: 58550847-ec24-44b6-a066-642db86841ce] Took 1.11 seconds to deallocate network for instance.#033[00m
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.235 248514 DEBUG oslo_concurrency.lockutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.236 248514 DEBUG oslo_concurrency.lockutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.255 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.256 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.256 248514 INFO nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Using config drive#033[00m
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.278 248514 DEBUG nova.storage.rbd_utils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] rbd image e7ecf467-c27c-44d4-b072-2537dba749e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.355 248514 DEBUG oslo_concurrency.processutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:55.421 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:41:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:55.421 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:41:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:55.421 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.692 248514 INFO nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Creating config drive at /var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2/disk.config#033[00m
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.697 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphp91auaf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.740 248514 DEBUG nova.compute.manager [req-b11e83ae-f922-4e23-a5d5-7d6c92133147 req-f37abdf5-20e7-405f-8791-f046ac504590 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 58550847-ec24-44b6-a066-642db86841ce] Received event network-vif-deleted-0f023371-37d6-4847-a15a-fa86a3f0a3f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:41:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2375: 321 pgs: 321 active+clean; 171 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 114 op/s
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.846 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphp91auaf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.873 248514 DEBUG nova.storage.rbd_utils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] rbd image e7ecf467-c27c-44d4-b072-2537dba749e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.877 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2/disk.config e7ecf467-c27c-44d4-b072-2537dba749e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:41:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:41:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1901742580' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.938 248514 DEBUG oslo_concurrency.processutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.945 248514 DEBUG nova.compute.provider_tree [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.965 248514 DEBUG nova.scheduler.client.report [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:41:55 np0005558241 nova_compute[248510]: 2025-12-13 08:41:55.996 248514 DEBUG oslo_concurrency.lockutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.023 248514 INFO nova.scheduler.client.report [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Deleted allocations for instance 58550847-ec24-44b6-a066-642db86841ce
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.040 248514 DEBUG oslo_concurrency.processutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2/disk.config e7ecf467-c27c-44d4-b072-2537dba749e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.041 248514 INFO nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Deleting local config drive /var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2/disk.config because it was imported into RBD.
Dec 13 03:41:56 np0005558241 systemd-machined[210538]: New machine qemu-120-instance-00000061.
Dec 13 03:41:56 np0005558241 systemd[1]: Started Virtual Machine qemu-120-instance-00000061.
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.137 248514 DEBUG oslo_concurrency.lockutils [None req-42554e3b-d853-4e29-9794-5887103d233c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "58550847-ec24-44b6-a066-642db86841ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.142 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.629 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615316.6288662, e7ecf467-c27c-44d4-b072-2537dba749e2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.630 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] VM Resumed (Lifecycle Event)
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.632 248514 DEBUG nova.compute.manager [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.633 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.637 248514 INFO nova.virt.libvirt.driver [-] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Instance spawned successfully.
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.637 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.652 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.658 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.662 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.662 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.663 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.663 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.664 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.664 248514 DEBUG nova.virt.libvirt.driver [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.698 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.698 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615316.630049, e7ecf467-c27c-44d4-b072-2537dba749e2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.699 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] VM Started (Lifecycle Event)
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.732 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.735 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.742 248514 INFO nova.compute.manager [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Took 3.78 seconds to spawn the instance on the hypervisor.
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.743 248514 DEBUG nova.compute.manager [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.769 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.814 248514 INFO nova.compute.manager [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Took 5.00 seconds to build instance.
Dec 13 03:41:56 np0005558241 nova_compute[248510]: 2025-12-13 08:41:56.842 248514 DEBUG oslo_concurrency.lockutils [None req-2042c585-a162-47e9-96c4-f09b0b57dfc7 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "e7ecf467-c27c-44d4-b072-2537dba749e2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:41:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:41:57.230 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:41:57 np0005558241 nova_compute[248510]: 2025-12-13 08:41:57.251 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:41:57 np0005558241 nova_compute[248510]: 2025-12-13 08:41:57.251 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:41:57 np0005558241 nova_compute[248510]: 2025-12-13 08:41:57.252 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 03:41:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2376: 321 pgs: 321 active+clean; 171 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 114 op/s
Dec 13 03:41:57 np0005558241 nova_compute[248510]: 2025-12-13 08:41:57.939 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:41:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:41:58 np0005558241 nova_compute[248510]: 2025-12-13 08:41:58.230 248514 DEBUG nova.compute.manager [None req-2970bca0-40e9-4504-8e56-baa8b9ac7971 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:41:58 np0005558241 nova_compute[248510]: 2025-12-13 08:41:58.286 248514 INFO nova.compute.manager [None req-2970bca0-40e9-4504-8e56-baa8b9ac7971 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] instance snapshotting
Dec 13 03:41:58 np0005558241 nova_compute[248510]: 2025-12-13 08:41:58.287 248514 DEBUG nova.objects.instance [None req-2970bca0-40e9-4504-8e56-baa8b9ac7971 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lazy-loading 'flavor' on Instance uuid e7ecf467-c27c-44d4-b072-2537dba749e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:41:58 np0005558241 nova_compute[248510]: 2025-12-13 08:41:58.561 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Acquiring lock "e7ecf467-c27c-44d4-b072-2537dba749e2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:41:58 np0005558241 nova_compute[248510]: 2025-12-13 08:41:58.562 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "e7ecf467-c27c-44d4-b072-2537dba749e2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:41:58 np0005558241 nova_compute[248510]: 2025-12-13 08:41:58.562 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Acquiring lock "e7ecf467-c27c-44d4-b072-2537dba749e2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:41:58 np0005558241 nova_compute[248510]: 2025-12-13 08:41:58.563 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "e7ecf467-c27c-44d4-b072-2537dba749e2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:41:58 np0005558241 nova_compute[248510]: 2025-12-13 08:41:58.563 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "e7ecf467-c27c-44d4-b072-2537dba749e2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:41:58 np0005558241 nova_compute[248510]: 2025-12-13 08:41:58.564 248514 INFO nova.compute.manager [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Terminating instance
Dec 13 03:41:58 np0005558241 nova_compute[248510]: 2025-12-13 08:41:58.565 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Acquiring lock "refresh_cache-e7ecf467-c27c-44d4-b072-2537dba749e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:41:58 np0005558241 nova_compute[248510]: 2025-12-13 08:41:58.565 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Acquired lock "refresh_cache-e7ecf467-c27c-44d4-b072-2537dba749e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:41:58 np0005558241 nova_compute[248510]: 2025-12-13 08:41:58.565 248514 DEBUG nova.network.neutron [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:41:58 np0005558241 nova_compute[248510]: 2025-12-13 08:41:58.638 248514 INFO nova.virt.libvirt.driver [None req-2970bca0-40e9-4504-8e56-baa8b9ac7971 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Beginning live snapshot process
Dec 13 03:41:58 np0005558241 nova_compute[248510]: 2025-12-13 08:41:58.684 248514 DEBUG nova.compute.manager [None req-2970bca0-40e9-4504-8e56-baa8b9ac7971 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Dec 13 03:41:58 np0005558241 nova_compute[248510]: 2025-12-13 08:41:58.747 248514 DEBUG nova.network.neutron [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:41:59 np0005558241 nova_compute[248510]: 2025-12-13 08:41:59.215 248514 DEBUG nova.network.neutron [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:41:59 np0005558241 nova_compute[248510]: 2025-12-13 08:41:59.231 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Releasing lock "refresh_cache-e7ecf467-c27c-44d4-b072-2537dba749e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:41:59 np0005558241 nova_compute[248510]: 2025-12-13 08:41:59.231 248514 DEBUG nova.compute.manager [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 03:41:59 np0005558241 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000061.scope: Deactivated successfully.
Dec 13 03:41:59 np0005558241 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000061.scope: Consumed 3.218s CPU time.
Dec 13 03:41:59 np0005558241 systemd-machined[210538]: Machine qemu-120-instance-00000061 terminated.
Dec 13 03:41:59 np0005558241 nova_compute[248510]: 2025-12-13 08:41:59.329 248514 DEBUG nova.compute.manager [None req-2970bca0-40e9-4504-8e56-baa8b9ac7971 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Dec 13 03:41:59 np0005558241 nova_compute[248510]: 2025-12-13 08:41:59.453 248514 INFO nova.virt.libvirt.driver [-] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Instance destroyed successfully.
Dec 13 03:41:59 np0005558241 nova_compute[248510]: 2025-12-13 08:41:59.453 248514 DEBUG nova.objects.instance [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lazy-loading 'resources' on Instance uuid e7ecf467-c27c-44d4-b072-2537dba749e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:41:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2377: 321 pgs: 321 active+clean; 167 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 192 op/s
Dec 13 03:41:59 np0005558241 nova_compute[248510]: 2025-12-13 08:41:59.929 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Updating instance_info_cache with network_info: [{"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.166 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.166 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.166 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.204 248514 INFO nova.virt.libvirt.driver [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Deleting instance files /var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2_del
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.205 248514 INFO nova.virt.libvirt.driver [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Deletion of /var/lib/nova/instances/e7ecf467-c27c-44d4-b072-2537dba749e2_del complete
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.299 248514 INFO nova.compute.manager [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Took 1.07 seconds to destroy the instance on the hypervisor.
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.300 248514 DEBUG oslo.service.loopingcall [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.300 248514 DEBUG nova.compute.manager [-] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.300 248514 DEBUG nova.network.neutron [-] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.513 248514 DEBUG nova.network.neutron [-] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.530 248514 DEBUG nova.network.neutron [-] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.548 248514 INFO nova.compute.manager [-] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Took 0.25 seconds to deallocate network for instance.
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.601 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.602 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.637 248514 DEBUG nova.scheduler.client.report [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.676 248514 DEBUG nova.scheduler.client.report [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.677 248514 DEBUG nova.compute.provider_tree [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.702 248514 DEBUG nova.scheduler.client.report [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.740 248514 DEBUG nova.scheduler.client.report [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.837 248514 DEBUG oslo_concurrency.processutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.959 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "2e9e6711-892f-4278-b911-bfacbff9b48e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.960 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:00 np0005558241 nova_compute[248510]: 2025-12-13 08:42:00.992 248514 DEBUG nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:42:01 np0005558241 nova_compute[248510]: 2025-12-13 08:42:01.112 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:01 np0005558241 nova_compute[248510]: 2025-12-13 08:42:01.144 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:42:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1997877790' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:42:01 np0005558241 nova_compute[248510]: 2025-12-13 08:42:01.408 248514 DEBUG oslo_concurrency.processutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:01 np0005558241 nova_compute[248510]: 2025-12-13 08:42:01.414 248514 DEBUG nova.compute.provider_tree [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:42:01 np0005558241 nova_compute[248510]: 2025-12-13 08:42:01.440 248514 DEBUG nova.scheduler.client.report [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:42:01 np0005558241 nova_compute[248510]: 2025-12-13 08:42:01.478 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:01 np0005558241 nova_compute[248510]: 2025-12-13 08:42:01.480 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:01 np0005558241 nova_compute[248510]: 2025-12-13 08:42:01.490 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:42:01 np0005558241 nova_compute[248510]: 2025-12-13 08:42:01.491 248514 INFO nova.compute.claims [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:42:01 np0005558241 nova_compute[248510]: 2025-12-13 08:42:01.541 248514 INFO nova.scheduler.client.report [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Deleted allocations for instance e7ecf467-c27c-44d4-b072-2537dba749e2#033[00m
Dec 13 03:42:01 np0005558241 nova_compute[248510]: 2025-12-13 08:42:01.654 248514 DEBUG oslo_concurrency.lockutils [None req-a8ee087f-55a4-4389-9b80-e62860dabdab 7001a6774c58458f8817dd3d0a51c534 3789df2219b04c028efeab8c1eae2147 - - default default] Lock "e7ecf467-c27c-44d4-b072-2537dba749e2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:01 np0005558241 nova_compute[248510]: 2025-12-13 08:42:01.675 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2378: 321 pgs: 321 active+clean; 160 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 208 op/s
Dec 13 03:42:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:42:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2075129003' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.234 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.239 248514 DEBUG nova.compute.provider_tree [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.260 248514 DEBUG nova.scheduler.client.report [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.291 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.292 248514 DEBUG nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.361 248514 DEBUG nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.362 248514 DEBUG nova.network.neutron [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.401 248514 INFO nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.431 248514 DEBUG nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.610 248514 DEBUG nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.612 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.612 248514 INFO nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Creating image(s)#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.640 248514 DEBUG nova.storage.rbd_utils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 2e9e6711-892f-4278-b911-bfacbff9b48e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.753 248514 DEBUG nova.storage.rbd_utils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 2e9e6711-892f-4278-b911-bfacbff9b48e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.773 248514 DEBUG nova.storage.rbd_utils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 2e9e6711-892f-4278-b911-bfacbff9b48e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.776 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.811 248514 DEBUG nova.policy [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3fcdbbf0ede24d3e820e927d9136712b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.815 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.816 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.850 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.850 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.851 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.851 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.851 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.891 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.893 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.895 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.895 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.925 248514 DEBUG nova.storage.rbd_utils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 2e9e6711-892f-4278-b911-bfacbff9b48e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.928 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2e9e6711-892f-4278-b911-bfacbff9b48e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:02 np0005558241 nova_compute[248510]: 2025-12-13 08:42:02.965 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:42:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:42:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3311972622' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:42:03 np0005558241 nova_compute[248510]: 2025-12-13 08:42:03.426 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:03 np0005558241 nova_compute[248510]: 2025-12-13 08:42:03.549 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:42:03 np0005558241 nova_compute[248510]: 2025-12-13 08:42:03.550 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:42:03 np0005558241 nova_compute[248510]: 2025-12-13 08:42:03.740 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:42:03 np0005558241 nova_compute[248510]: 2025-12-13 08:42:03.742 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3510MB free_disk=59.92250001523644GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:42:03 np0005558241 nova_compute[248510]: 2025-12-13 08:42:03.742 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:03 np0005558241 nova_compute[248510]: 2025-12-13 08:42:03.743 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:42:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 30K writes, 122K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s#012Cumulative WAL: 30K writes, 10K syncs, 2.90 writes per sync, written: 0.12 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7746 writes, 28K keys, 7746 commit groups, 1.0 writes per commit group, ingest: 28.13 MB, 0.05 MB/s#012Interval WAL: 7745 writes, 3091 syncs, 2.51 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 03:42:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2379: 321 pgs: 321 active+clean; 148 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Dec 13 03:42:03 np0005558241 nova_compute[248510]: 2025-12-13 08:42:03.853 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:42:03 np0005558241 nova_compute[248510]: 2025-12-13 08:42:03.853 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 2e9e6711-892f-4278-b911-bfacbff9b48e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:42:03 np0005558241 nova_compute[248510]: 2025-12-13 08:42:03.854 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:42:03 np0005558241 nova_compute[248510]: 2025-12-13 08:42:03.854 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:42:03 np0005558241 nova_compute[248510]: 2025-12-13 08:42:03.920 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:04 np0005558241 nova_compute[248510]: 2025-12-13 08:42:04.060 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2e9e6711-892f-4278-b911-bfacbff9b48e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:04 np0005558241 nova_compute[248510]: 2025-12-13 08:42:04.135 248514 DEBUG nova.storage.rbd_utils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] resizing rbd image 2e9e6711-892f-4278-b911-bfacbff9b48e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:42:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:42:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:42:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:42:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:42:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:42:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:42:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:42:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:42:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:42:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:42:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:42:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:42:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:42:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2260077907' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:42:04 np0005558241 nova_compute[248510]: 2025-12-13 08:42:04.513 248514 DEBUG nova.objects.instance [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'migration_context' on Instance uuid 2e9e6711-892f-4278-b911-bfacbff9b48e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:42:04 np0005558241 nova_compute[248510]: 2025-12-13 08:42:04.535 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:04 np0005558241 nova_compute[248510]: 2025-12-13 08:42:04.541 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:42:04 np0005558241 nova_compute[248510]: 2025-12-13 08:42:04.635 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:42:04 np0005558241 nova_compute[248510]: 2025-12-13 08:42:04.635 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Ensure instance console log exists: /var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:42:04 np0005558241 nova_compute[248510]: 2025-12-13 08:42:04.636 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:04 np0005558241 nova_compute[248510]: 2025-12-13 08:42:04.636 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:04 np0005558241 nova_compute[248510]: 2025-12-13 08:42:04.636 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:04 np0005558241 nova_compute[248510]: 2025-12-13 08:42:04.637 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:42:04 np0005558241 nova_compute[248510]: 2025-12-13 08:42:04.656 248514 DEBUG nova.network.neutron [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Successfully created port: a6d7b148-2fef-4c47-a3fa-c8948759b4a9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:42:04 np0005558241 nova_compute[248510]: 2025-12-13 08:42:04.661 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:42:04 np0005558241 nova_compute[248510]: 2025-12-13 08:42:04.661 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:04 np0005558241 podman[342788]: 2025-12-13 08:42:04.81081969 +0000 UTC m=+0.053679939 container create 743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 03:42:04 np0005558241 systemd[1]: Started libpod-conmon-743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911.scope.
Dec 13 03:42:04 np0005558241 podman[342788]: 2025-12-13 08:42:04.777828764 +0000 UTC m=+0.020689053 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:42:04 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:42:04 np0005558241 podman[342788]: 2025-12-13 08:42:04.981046194 +0000 UTC m=+0.223906483 container init 743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hamilton, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Dec 13 03:42:04 np0005558241 podman[342788]: 2025-12-13 08:42:04.988471669 +0000 UTC m=+0.231331928 container start 743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hamilton, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:42:04 np0005558241 fervent_hamilton[342804]: 167 167
Dec 13 03:42:04 np0005558241 systemd[1]: libpod-743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911.scope: Deactivated successfully.
Dec 13 03:42:04 np0005558241 conmon[342804]: conmon 743d8260ff3896b596ef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911.scope/container/memory.events
Dec 13 03:42:05 np0005558241 podman[342788]: 2025-12-13 08:42:05.001049749 +0000 UTC m=+0.243910008 container attach 743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:42:05 np0005558241 podman[342788]: 2025-12-13 08:42:05.001544822 +0000 UTC m=+0.244405081 container died 743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hamilton, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:42:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:42:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:42:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:42:05 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d3344eebdd1ed99d30df31abf8f0f640e0828bbbf6c98b4e2477cf7a75a24f54-merged.mount: Deactivated successfully.
Dec 13 03:42:05 np0005558241 podman[342788]: 2025-12-13 08:42:05.152052949 +0000 UTC m=+0.394913208 container remove 743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hamilton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:42:05 np0005558241 systemd[1]: libpod-conmon-743d8260ff3896b596ef292edfbd385a94652c7afae8baa9b2237d65339ca911.scope: Deactivated successfully.
Dec 13 03:42:05 np0005558241 podman[342827]: 2025-12-13 08:42:05.302456063 +0000 UTC m=+0.022277715 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:42:05 np0005558241 podman[342827]: 2025-12-13 08:42:05.704053875 +0000 UTC m=+0.423875507 container create 6ff1b7e38cc43f001c53897ddcab58d83f987561a8dc2d037ad1ad859b10c55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lamarr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 03:42:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2380: 321 pgs: 321 active+clean; 151 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 218 op/s
Dec 13 03:42:05 np0005558241 systemd[1]: Started libpod-conmon-6ff1b7e38cc43f001c53897ddcab58d83f987561a8dc2d037ad1ad859b10c55f.scope.
Dec 13 03:42:05 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:42:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6886cbff8e9720111208439e2b4f782052fc2d768f7181fccb338f69a35670b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:42:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6886cbff8e9720111208439e2b4f782052fc2d768f7181fccb338f69a35670b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:42:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6886cbff8e9720111208439e2b4f782052fc2d768f7181fccb338f69a35670b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:42:05 np0005558241 nova_compute[248510]: 2025-12-13 08:42:05.944 248514 DEBUG nova.network.neutron [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Successfully updated port: a6d7b148-2fef-4c47-a3fa-c8948759b4a9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:42:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6886cbff8e9720111208439e2b4f782052fc2d768f7181fccb338f69a35670b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:42:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6886cbff8e9720111208439e2b4f782052fc2d768f7181fccb338f69a35670b8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:42:05 np0005558241 nova_compute[248510]: 2025-12-13 08:42:05.970 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "refresh_cache-2e9e6711-892f-4278-b911-bfacbff9b48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:42:05 np0005558241 nova_compute[248510]: 2025-12-13 08:42:05.970 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquired lock "refresh_cache-2e9e6711-892f-4278-b911-bfacbff9b48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:42:05 np0005558241 nova_compute[248510]: 2025-12-13 08:42:05.970 248514 DEBUG nova.network.neutron [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:42:06 np0005558241 nova_compute[248510]: 2025-12-13 08:42:06.146 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:06 np0005558241 nova_compute[248510]: 2025-12-13 08:42:06.161 248514 DEBUG nova.compute.manager [req-b89c5683-c8b4-45eb-8c62-b236c8aa39df req-c2b3901a-42b8-410c-ab1b-c9f4a5d36096 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Received event network-changed-a6d7b148-2fef-4c47-a3fa-c8948759b4a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:42:06 np0005558241 nova_compute[248510]: 2025-12-13 08:42:06.162 248514 DEBUG nova.compute.manager [req-b89c5683-c8b4-45eb-8c62-b236c8aa39df req-c2b3901a-42b8-410c-ab1b-c9f4a5d36096 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Refreshing instance network info cache due to event network-changed-a6d7b148-2fef-4c47-a3fa-c8948759b4a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:42:06 np0005558241 nova_compute[248510]: 2025-12-13 08:42:06.162 248514 DEBUG oslo_concurrency.lockutils [req-b89c5683-c8b4-45eb-8c62-b236c8aa39df req-c2b3901a-42b8-410c-ab1b-c9f4a5d36096 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2e9e6711-892f-4278-b911-bfacbff9b48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:42:06 np0005558241 podman[342827]: 2025-12-13 08:42:06.237712631 +0000 UTC m=+0.957534283 container init 6ff1b7e38cc43f001c53897ddcab58d83f987561a8dc2d037ad1ad859b10c55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lamarr, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:42:06 np0005558241 nova_compute[248510]: 2025-12-13 08:42:06.244 248514 DEBUG nova.network.neutron [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:42:06 np0005558241 podman[342827]: 2025-12-13 08:42:06.245522636 +0000 UTC m=+0.965344268 container start 6ff1b7e38cc43f001c53897ddcab58d83f987561a8dc2d037ad1ad859b10c55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lamarr, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 03:42:06 np0005558241 podman[342827]: 2025-12-13 08:42:06.288826622 +0000 UTC m=+1.008648284 container attach 6ff1b7e38cc43f001c53897ddcab58d83f987561a8dc2d037ad1ad859b10c55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lamarr, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec 13 03:42:06 np0005558241 happy_lamarr[342843]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:42:06 np0005558241 happy_lamarr[342843]: --> All data devices are unavailable
Dec 13 03:42:06 np0005558241 systemd[1]: libpod-6ff1b7e38cc43f001c53897ddcab58d83f987561a8dc2d037ad1ad859b10c55f.scope: Deactivated successfully.
Dec 13 03:42:06 np0005558241 podman[342827]: 2025-12-13 08:42:06.793384724 +0000 UTC m=+1.513206376 container died 6ff1b7e38cc43f001c53897ddcab58d83f987561a8dc2d037ad1ad859b10c55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lamarr, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.362 248514 DEBUG nova.network.neutron [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Updating instance_info_cache with network_info: [{"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.386 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Releasing lock "refresh_cache-2e9e6711-892f-4278-b911-bfacbff9b48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.386 248514 DEBUG nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Instance network_info: |[{"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.386 248514 DEBUG oslo_concurrency.lockutils [req-b89c5683-c8b4-45eb-8c62-b236c8aa39df req-c2b3901a-42b8-410c-ab1b-c9f4a5d36096 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2e9e6711-892f-4278-b911-bfacbff9b48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.387 248514 DEBUG nova.network.neutron [req-b89c5683-c8b4-45eb-8c62-b236c8aa39df req-c2b3901a-42b8-410c-ab1b-c9f4a5d36096 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Refreshing network info cache for port a6d7b148-2fef-4c47-a3fa-c8948759b4a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.389 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Start _get_guest_xml network_info=[{"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.394 248514 WARNING nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.400 248514 DEBUG nova.virt.libvirt.host [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.402 248514 DEBUG nova.virt.libvirt.host [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.410 248514 DEBUG nova.virt.libvirt.host [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.410 248514 DEBUG nova.virt.libvirt.host [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.411 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.411 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.412 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.412 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.412 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.412 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.413 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.413 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.413 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.413 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.413 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.414 248514 DEBUG nova.virt.hardware [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.418 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:07 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6886cbff8e9720111208439e2b4f782052fc2d768f7181fccb338f69a35670b8-merged.mount: Deactivated successfully.
Dec 13 03:42:07 np0005558241 podman[342827]: 2025-12-13 08:42:07.647132844 +0000 UTC m=+2.366954476 container remove 6ff1b7e38cc43f001c53897ddcab58d83f987561a8dc2d037ad1ad859b10c55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lamarr, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 03:42:07 np0005558241 systemd[1]: libpod-conmon-6ff1b7e38cc43f001c53897ddcab58d83f987561a8dc2d037ad1ad859b10c55f.scope: Deactivated successfully.
Dec 13 03:42:07 np0005558241 podman[342877]: 2025-12-13 08:42:07.729180026 +0000 UTC m=+0.295535532 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 13 03:42:07 np0005558241 podman[342876]: 2025-12-13 08:42:07.73391469 +0000 UTC m=+0.296563708 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:42:07 np0005558241 podman[342875]: 2025-12-13 08:42:07.754979433 +0000 UTC m=+0.325933809 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller)
Dec 13 03:42:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2381: 321 pgs: 321 active+clean; 151 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 128 op/s
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.903 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615312.901425, 58550847-ec24-44b6-a066-642db86841ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.904 248514 INFO nova.compute.manager [-] [instance: 58550847-ec24-44b6-a066-642db86841ce] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:42:07 np0005558241 nova_compute[248510]: 2025-12-13 08:42:07.967 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.003 248514 DEBUG nova.compute.manager [None req-84a96038-e98b-48f8-9a50-beb4b40deb75 - - - - - -] [instance: 58550847-ec24-44b6-a066-642db86841ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:42:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:42:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/238631318' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.031 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.053 248514 DEBUG nova.storage.rbd_utils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 2e9e6711-892f-4278-b911-bfacbff9b48e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.058 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:08 np0005558241 podman[343041]: 2025-12-13 08:42:08.111225866 +0000 UTC m=+0.019960625 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:42:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:42:08 np0005558241 podman[343041]: 2025-12-13 08:42:08.29446929 +0000 UTC m=+0.203204029 container create c15942718e710019499eac7ee123da18214578e5bb4dae4010d44ed0a0971756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:42:08 np0005558241 systemd[1]: Started libpod-conmon-c15942718e710019499eac7ee123da18214578e5bb4dae4010d44ed0a0971756.scope.
Dec 13 03:42:08 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:42:08 np0005558241 podman[343041]: 2025-12-13 08:42:08.451907679 +0000 UTC m=+0.360642448 container init c15942718e710019499eac7ee123da18214578e5bb4dae4010d44ed0a0971756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_knuth, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 03:42:08 np0005558241 podman[343041]: 2025-12-13 08:42:08.459785526 +0000 UTC m=+0.368520265 container start c15942718e710019499eac7ee123da18214578e5bb4dae4010d44ed0a0971756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_knuth, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:42:08 np0005558241 exciting_knuth[343076]: 167 167
Dec 13 03:42:08 np0005558241 systemd[1]: libpod-c15942718e710019499eac7ee123da18214578e5bb4dae4010d44ed0a0971756.scope: Deactivated successfully.
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.618 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.618 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:42:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:42:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4043436435' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.697 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.699 248514 DEBUG nova.virt.libvirt.vif [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:41:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1551844735',display_name='tempest-ServersTestJSON-server-1551844735',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1551844735',id=98,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-6ci0kbz7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:42:02Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=2e9e6711-892f-4278-b911-bfacbff9b48e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.699 248514 DEBUG nova.network.os_vif_util [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.700 248514 DEBUG nova.network.os_vif_util [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:df:21,bridge_name='br-int',has_traffic_filtering=True,id=a6d7b148-2fef-4c47-a3fa-c8948759b4a9,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d7b148-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.702 248514 DEBUG nova.objects.instance [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'pci_devices' on Instance uuid 2e9e6711-892f-4278-b911-bfacbff9b48e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.726 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:42:08 np0005558241 nova_compute[248510]:  <uuid>2e9e6711-892f-4278-b911-bfacbff9b48e</uuid>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:  <name>instance-00000062</name>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersTestJSON-server-1551844735</nova:name>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:42:07</nova:creationTime>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:42:08 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:        <nova:user uuid="3fcdbbf0ede24d3e820e927d9136712b">tempest-ServersTestJSON-1443479207-project-member</nova:user>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:        <nova:project uuid="3fb4f27cdc7f468faa36b4e7a461d7be">tempest-ServersTestJSON-1443479207</nova:project>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:        <nova:port uuid="a6d7b148-2fef-4c47-a3fa-c8948759b4a9">
Dec 13 03:42:08 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <entry name="serial">2e9e6711-892f-4278-b911-bfacbff9b48e</entry>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <entry name="uuid">2e9e6711-892f-4278-b911-bfacbff9b48e</entry>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/2e9e6711-892f-4278-b911-bfacbff9b48e_disk">
Dec 13 03:42:08 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:42:08 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/2e9e6711-892f-4278-b911-bfacbff9b48e_disk.config">
Dec 13 03:42:08 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:42:08 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:e5:df:21"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <target dev="tapa6d7b148-2f"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e/console.log" append="off"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:42:08 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:42:08 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:42:08 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:42:08 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.727 248514 DEBUG nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Preparing to wait for external event network-vif-plugged-a6d7b148-2fef-4c47-a3fa-c8948759b4a9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.727 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.728 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.728 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.729 248514 DEBUG nova.virt.libvirt.vif [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:41:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1551844735',display_name='tempest-ServersTestJSON-server-1551844735',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1551844735',id=98,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-6ci0kbz7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-projec
t-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:42:02Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=2e9e6711-892f-4278-b911-bfacbff9b48e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.730 248514 DEBUG nova.network.os_vif_util [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.730 248514 DEBUG nova.network.os_vif_util [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:df:21,bridge_name='br-int',has_traffic_filtering=True,id=a6d7b148-2fef-4c47-a3fa-c8948759b4a9,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d7b148-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.731 248514 DEBUG os_vif [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:df:21,bridge_name='br-int',has_traffic_filtering=True,id=a6d7b148-2fef-4c47-a3fa-c8948759b4a9,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d7b148-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.731 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.732 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.733 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.736 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.737 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6d7b148-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.737 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa6d7b148-2f, col_values=(('external_ids', {'iface-id': 'a6d7b148-2fef-4c47-a3fa-c8948759b4a9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:df:21', 'vm-uuid': '2e9e6711-892f-4278-b911-bfacbff9b48e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:08 np0005558241 podman[343041]: 2025-12-13 08:42:08.782737725 +0000 UTC m=+0.691472514 container attach c15942718e710019499eac7ee123da18214578e5bb4dae4010d44ed0a0971756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 03:42:08 np0005558241 podman[343041]: 2025-12-13 08:42:08.783815454 +0000 UTC m=+0.692550193 container died c15942718e710019499eac7ee123da18214578e5bb4dae4010d44ed0a0971756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_knuth, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:08 np0005558241 NetworkManager[50376]: <info>  [1765615328.7843] manager: (tapa6d7b148-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.787 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.792 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:08 np0005558241 nova_compute[248510]: 2025-12-13 08:42:08.793 248514 INFO os_vif [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:df:21,bridge_name='br-int',has_traffic_filtering=True,id=a6d7b148-2fef-4c47-a3fa-c8948759b4a9,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d7b148-2f')#033[00m
Dec 13 03:42:09 np0005558241 nova_compute[248510]: 2025-12-13 08:42:09.008 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:42:09 np0005558241 nova_compute[248510]: 2025-12-13 08:42:09.008 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:42:09 np0005558241 nova_compute[248510]: 2025-12-13 08:42:09.008 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No VIF found with MAC fa:16:3e:e5:df:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:42:09 np0005558241 nova_compute[248510]: 2025-12-13 08:42:09.009 248514 INFO nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Using config drive#033[00m
Dec 13 03:42:09 np0005558241 nova_compute[248510]: 2025-12-13 08:42:09.025 248514 DEBUG nova.storage.rbd_utils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 2e9e6711-892f-4278-b911-bfacbff9b48e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:09 np0005558241 nova_compute[248510]: 2025-12-13 08:42:09.185 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "4edf7378-73e8-4119-a019-6ab7098ee4ad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:09 np0005558241 nova_compute[248510]: 2025-12-13 08:42:09.185 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "4edf7378-73e8-4119-a019-6ab7098ee4ad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:09 np0005558241 nova_compute[248510]: 2025-12-13 08:42:09.210 248514 DEBUG nova.compute.manager [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:42:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:42:09
Dec 13 03:42:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:42:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:42:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', '.rgw.root', 'images', 'default.rgw.log', '.mgr', 'backups', 'cephfs.cephfs.data', 'volumes']
Dec 13 03:42:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:42:09 np0005558241 systemd[1]: var-lib-containers-storage-overlay-afc30dafa26c339b2ac11af23d91469e3e7a227dd26f2a7da50aafe3536f8890-merged.mount: Deactivated successfully.
Dec 13 03:42:09 np0005558241 nova_compute[248510]: 2025-12-13 08:42:09.310 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:09 np0005558241 nova_compute[248510]: 2025-12-13 08:42:09.310 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:09 np0005558241 nova_compute[248510]: 2025-12-13 08:42:09.318 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:42:09 np0005558241 nova_compute[248510]: 2025-12-13 08:42:09.319 248514 INFO nova.compute.claims [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:42:09 np0005558241 nova_compute[248510]: 2025-12-13 08:42:09.497 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2382: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 141 op/s
Dec 13 03:42:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:42:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2353635197' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.074 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.080 248514 DEBUG nova.compute.provider_tree [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:42:10 np0005558241 podman[343041]: 2025-12-13 08:42:10.097463475 +0000 UTC m=+2.006198214 container remove c15942718e710019499eac7ee123da18214578e5bb4dae4010d44ed0a0971756 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.102 248514 DEBUG nova.scheduler.client.report [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:42:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:42:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:42:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:42:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:42:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:42:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:42:10 np0005558241 systemd[1]: libpod-conmon-c15942718e710019499eac7ee123da18214578e5bb4dae4010d44ed0a0971756.scope: Deactivated successfully.
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.145 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.146 248514 DEBUG nova.compute.manager [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.200 248514 DEBUG nova.compute.manager [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.220 248514 INFO nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.241 248514 DEBUG nova.compute.manager [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.279 248514 INFO nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Creating config drive at /var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e/disk.config#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.284 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmyrcnbfr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:10 np0005558241 podman[343144]: 2025-12-13 08:42:10.249184894 +0000 UTC m=+0.022347647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.367 248514 DEBUG nova.compute.manager [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.369 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.369 248514 INFO nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Creating image(s)#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.545 248514 DEBUG nova.storage.rbd_utils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:10 np0005558241 podman[343144]: 2025-12-13 08:42:10.550395834 +0000 UTC m=+0.323558567 container create 923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_blackwell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.585 248514 DEBUG nova.storage.rbd_utils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.607 248514 DEBUG nova.storage.rbd_utils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.611 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.656 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmyrcnbfr" returned: 0 in 0.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:10 np0005558241 systemd[1]: Started libpod-conmon-923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028.scope.
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.685 248514 DEBUG nova.storage.rbd_utils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 2e9e6711-892f-4278-b911-bfacbff9b48e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.690 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e/disk.config 2e9e6711-892f-4278-b911-bfacbff9b48e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:42:10 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:42:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:42:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad90636d2cb1eae1ff154e3bc7ea4ed551514efe72712f1b1823918adb1ae87a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:42:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad90636d2cb1eae1ff154e3bc7ea4ed551514efe72712f1b1823918adb1ae87a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:42:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad90636d2cb1eae1ff154e3bc7ea4ed551514efe72712f1b1823918adb1ae87a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:42:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad90636d2cb1eae1ff154e3bc7ea4ed551514efe72712f1b1823918adb1ae87a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.733 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.738 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.739 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.739 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:10 np0005558241 podman[343144]: 2025-12-13 08:42:10.746294371 +0000 UTC m=+0.519457104 container init 923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 03:42:10 np0005558241 podman[343144]: 2025-12-13 08:42:10.755641947 +0000 UTC m=+0.528804680 container start 923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_blackwell, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:42:10 np0005558241 podman[343144]: 2025-12-13 08:42:10.761326056 +0000 UTC m=+0.534488809 container attach 923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_blackwell, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.761 248514 DEBUG nova.storage.rbd_utils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.768 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.806 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.848 248514 DEBUG oslo_concurrency.processutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e/disk.config 2e9e6711-892f-4278-b911-bfacbff9b48e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.849 248514 INFO nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Deleting local config drive /var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e/disk.config because it was imported into RBD.#033[00m
Dec 13 03:42:10 np0005558241 kernel: tapa6d7b148-2f: entered promiscuous mode
Dec 13 03:42:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:42:10Z|00952|binding|INFO|Claiming lport a6d7b148-2fef-4c47-a3fa-c8948759b4a9 for this chassis.
Dec 13 03:42:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:42:10Z|00953|binding|INFO|a6d7b148-2fef-4c47-a3fa-c8948759b4a9: Claiming fa:16:3e:e5:df:21 10.100.0.9
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.908 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:10 np0005558241 NetworkManager[50376]: <info>  [1765615330.9114] manager: (tapa6d7b148-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/399)
Dec 13 03:42:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:10.916 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:df:21 10.100.0.9'], port_security=['fa:16:3e:e5:df:21 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2e9e6711-892f-4278-b911-bfacbff9b48e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=a6d7b148-2fef-4c47-a3fa-c8948759b4a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:42:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:10.917 158419 INFO neutron.agent.ovn.metadata.agent [-] Port a6d7b148-2fef-4c47-a3fa-c8948759b4a9 in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 bound to our chassis#033[00m
Dec 13 03:42:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:10.918 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3#033[00m
Dec 13 03:42:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:42:10Z|00954|binding|INFO|Setting lport a6d7b148-2fef-4c47-a3fa-c8948759b4a9 ovn-installed in OVS
Dec 13 03:42:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:42:10Z|00955|binding|INFO|Setting lport a6d7b148-2fef-4c47-a3fa-c8948759b4a9 up in Southbound
Dec 13 03:42:10 np0005558241 nova_compute[248510]: 2025-12-13 08:42:10.934 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:10.943 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4beaa995-dc9d-4d1a-bdf8-fefadc049250]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:10 np0005558241 systemd-udevd[343314]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:42:10 np0005558241 systemd-machined[210538]: New machine qemu-121-instance-00000062.
Dec 13 03:42:10 np0005558241 systemd[1]: Started Virtual Machine qemu-121-instance-00000062.
Dec 13 03:42:10 np0005558241 NetworkManager[50376]: <info>  [1765615330.9720] device (tapa6d7b148-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:42:10 np0005558241 NetworkManager[50376]: <info>  [1765615330.9733] device (tapa6d7b148-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:42:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:10.981 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c52f151a-4314-45d8-bab0-033194563f47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:10.986 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb2f233-b9f5-4cff-8524-df4b765ba60e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:11.021 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[53627624-ab8a-4668-9f3d-5653ca617291]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:11.043 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3b02cf8a-8e06-45a0-bb53-e764b787c4e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343324, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:11 np0005558241 nova_compute[248510]: 2025-12-13 08:42:11.055 248514 DEBUG nova.network.neutron [req-b89c5683-c8b4-45eb-8c62-b236c8aa39df req-c2b3901a-42b8-410c-ab1b-c9f4a5d36096 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Updated VIF entry in instance network info cache for port a6d7b148-2fef-4c47-a3fa-c8948759b4a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:42:11 np0005558241 nova_compute[248510]: 2025-12-13 08:42:11.055 248514 DEBUG nova.network.neutron [req-b89c5683-c8b4-45eb-8c62-b236c8aa39df req-c2b3901a-42b8-410c-ab1b-c9f4a5d36096 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Updating instance_info_cache with network_info: [{"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:42:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:11.062 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8e848b33-c023-45e2-b075-31e04ecb1c72]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343330, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343330, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:11.064 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:11 np0005558241 nova_compute[248510]: 2025-12-13 08:42:11.066 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:11 np0005558241 nova_compute[248510]: 2025-12-13 08:42:11.067 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:11.067 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:11.068 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:42:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:11.068 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:11.069 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:42:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:42:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4201.8 total, 600.0 interval#012Cumulative writes: 31K writes, 120K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.03 MB/s#012Cumulative WAL: 31K writes, 11K syncs, 2.83 writes per sync, written: 0.11 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6963 writes, 25K keys, 6963 commit groups, 1.0 writes per commit group, ingest: 25.25 MB, 0.04 MB/s#012Interval WAL: 6963 writes, 2848 syncs, 2.44 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 03:42:11 np0005558241 nova_compute[248510]: 2025-12-13 08:42:11.087 248514 DEBUG oslo_concurrency.lockutils [req-b89c5683-c8b4-45eb-8c62-b236c8aa39df req-c2b3901a-42b8-410c-ab1b-c9f4a5d36096 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2e9e6711-892f-4278-b911-bfacbff9b48e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]: {
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:    "0": [
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:        {
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "devices": [
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "/dev/loop3"
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            ],
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "lv_name": "ceph_lv0",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "lv_size": "21470642176",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "name": "ceph_lv0",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "tags": {
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.cluster_name": "ceph",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.crush_device_class": "",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.encrypted": "0",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.objectstore": "bluestore",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.osd_id": "0",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.type": "block",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.vdo": "0",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.with_tpm": "0"
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            },
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "type": "block",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "vg_name": "ceph_vg0"
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:        }
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:    ],
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:    "1": [
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:        {
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "devices": [
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "/dev/loop4"
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            ],
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "lv_name": "ceph_lv1",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "lv_size": "21470642176",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "name": "ceph_lv1",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "tags": {
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.cluster_name": "ceph",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.crush_device_class": "",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.encrypted": "0",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.objectstore": "bluestore",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.osd_id": "1",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.type": "block",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.vdo": "0",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.with_tpm": "0"
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            },
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "type": "block",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "vg_name": "ceph_vg1"
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:        }
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:    ],
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:    "2": [
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:        {
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "devices": [
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "/dev/loop5"
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            ],
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "lv_name": "ceph_lv2",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "lv_size": "21470642176",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "name": "ceph_lv2",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "tags": {
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.cluster_name": "ceph",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.crush_device_class": "",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.encrypted": "0",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.objectstore": "bluestore",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.osd_id": "2",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.type": "block",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.vdo": "0",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:                "ceph.with_tpm": "0"
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            },
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "type": "block",
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:            "vg_name": "ceph_vg2"
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:        }
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]:    ]
Dec 13 03:42:11 np0005558241 recursing_blackwell[343233]: }
Dec 13 03:42:11 np0005558241 systemd[1]: libpod-923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028.scope: Deactivated successfully.
Dec 13 03:42:11 np0005558241 conmon[343233]: conmon 923b29edb526804908c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028.scope/container/memory.events
Dec 13 03:42:11 np0005558241 podman[343144]: 2025-12-13 08:42:11.134842891 +0000 UTC m=+0.908005634 container died 923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 03:42:11 np0005558241 nova_compute[248510]: 2025-12-13 08:42:11.210 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:11 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ad90636d2cb1eae1ff154e3bc7ea4ed551514efe72712f1b1823918adb1ae87a-merged.mount: Deactivated successfully.
Dec 13 03:42:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2383: 321 pgs: 321 active+clean; 167 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 166 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 13 03:42:11 np0005558241 nova_compute[248510]: 2025-12-13 08:42:11.881 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615331.880728, 2e9e6711-892f-4278-b911-bfacbff9b48e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:42:11 np0005558241 nova_compute[248510]: 2025-12-13 08:42:11.882 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] VM Started (Lifecycle Event)#033[00m
Dec 13 03:42:11 np0005558241 nova_compute[248510]: 2025-12-13 08:42:11.905 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:42:11 np0005558241 nova_compute[248510]: 2025-12-13 08:42:11.910 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615331.8814082, 2e9e6711-892f-4278-b911-bfacbff9b48e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:42:11 np0005558241 nova_compute[248510]: 2025-12-13 08:42:11.910 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:42:11 np0005558241 nova_compute[248510]: 2025-12-13 08:42:11.931 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:42:11 np0005558241 nova_compute[248510]: 2025-12-13 08:42:11.935 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:42:11 np0005558241 nova_compute[248510]: 2025-12-13 08:42:11.961 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:42:12 np0005558241 nova_compute[248510]: 2025-12-13 08:42:12.351 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:12 np0005558241 nova_compute[248510]: 2025-12-13 08:42:12.408 248514 DEBUG nova.storage.rbd_utils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] resizing rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:42:12 np0005558241 podman[343144]: 2025-12-13 08:42:12.749786444 +0000 UTC m=+2.522949167 container remove 923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 03:42:12 np0005558241 systemd[1]: libpod-conmon-923b29edb526804908c17a6f4042cfe18d90428f841911fbc18cd5d094993028.scope: Deactivated successfully.
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.177 248514 DEBUG nova.objects.instance [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'migration_context' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.199 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.200 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Ensure instance console log exists: /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.200 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.200 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.201 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.202 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.206 248514 WARNING nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.213 248514 DEBUG nova.virt.libvirt.host [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.214 248514 DEBUG nova.virt.libvirt.host [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.216 248514 DEBUG nova.virt.libvirt.host [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.217 248514 DEBUG nova.virt.libvirt.host [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.217 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.217 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.218 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.218 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.218 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.218 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.219 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.219 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.219 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.219 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.220 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.220 248514 DEBUG nova.virt.hardware [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.223 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:42:13 np0005558241 podman[343509]: 2025-12-13 08:42:13.242271329 +0000 UTC m=+0.070064768 container create 81fb27842a6340f0302ee01d9ce7d0d931c03b6cf50444458799b1d834e2c02f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_beaver, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:42:13 np0005558241 systemd[1]: Started libpod-conmon-81fb27842a6340f0302ee01d9ce7d0d931c03b6cf50444458799b1d834e2c02f.scope.
Dec 13 03:42:13 np0005558241 podman[343509]: 2025-12-13 08:42:13.197760382 +0000 UTC m=+0.025553851 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:42:13 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:42:13 np0005558241 podman[343509]: 2025-12-13 08:42:13.319805323 +0000 UTC m=+0.147598792 container init 81fb27842a6340f0302ee01d9ce7d0d931c03b6cf50444458799b1d834e2c02f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_beaver, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 03:42:13 np0005558241 podman[343509]: 2025-12-13 08:42:13.327842434 +0000 UTC m=+0.155635883 container start 81fb27842a6340f0302ee01d9ce7d0d931c03b6cf50444458799b1d834e2c02f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:42:13 np0005558241 awesome_beaver[343537]: 167 167
Dec 13 03:42:13 np0005558241 systemd[1]: libpod-81fb27842a6340f0302ee01d9ce7d0d931c03b6cf50444458799b1d834e2c02f.scope: Deactivated successfully.
Dec 13 03:42:13 np0005558241 podman[343509]: 2025-12-13 08:42:13.436714429 +0000 UTC m=+0.264507908 container attach 81fb27842a6340f0302ee01d9ce7d0d931c03b6cf50444458799b1d834e2c02f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_beaver, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:42:13 np0005558241 podman[343509]: 2025-12-13 08:42:13.43751131 +0000 UTC m=+0.265304759 container died 81fb27842a6340f0302ee01d9ce7d0d931c03b6cf50444458799b1d834e2c02f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:42:13 np0005558241 systemd[1]: var-lib-containers-storage-overlay-99b0812f99cc9e7246bf12bb313e8f17059e39200e558e3f23d0a814978f6d91-merged.mount: Deactivated successfully.
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.786 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:42:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2982063026' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.813 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2384: 321 pgs: 321 active+clean; 195 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.7 MiB/s wr, 56 op/s
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.840 248514 DEBUG nova.storage.rbd_utils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:13 np0005558241 nova_compute[248510]: 2025-12-13 08:42:13.844 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:13 np0005558241 podman[343509]: 2025-12-13 08:42:13.884323118 +0000 UTC m=+0.712116567 container remove 81fb27842a6340f0302ee01d9ce7d0d931c03b6cf50444458799b1d834e2c02f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_beaver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 03:42:13 np0005558241 systemd[1]: libpod-conmon-81fb27842a6340f0302ee01d9ce7d0d931c03b6cf50444458799b1d834e2c02f.scope: Deactivated successfully.
Dec 13 03:42:14 np0005558241 podman[343620]: 2025-12-13 08:42:14.041671424 +0000 UTC m=+0.023349243 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:42:14 np0005558241 podman[343620]: 2025-12-13 08:42:14.152398438 +0000 UTC m=+0.134076237 container create 7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_haibt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:42:14 np0005558241 systemd[1]: Started libpod-conmon-7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8.scope.
Dec 13 03:42:14 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:42:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7492aa08b4b88b847681456390b31332da1d67c25f871fc110df0cdd4290cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:42:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7492aa08b4b88b847681456390b31332da1d67c25f871fc110df0cdd4290cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:42:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7492aa08b4b88b847681456390b31332da1d67c25f871fc110df0cdd4290cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:42:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7492aa08b4b88b847681456390b31332da1d67c25f871fc110df0cdd4290cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:42:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:42:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2609557199' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:42:14 np0005558241 nova_compute[248510]: 2025-12-13 08:42:14.434 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:14 np0005558241 nova_compute[248510]: 2025-12-13 08:42:14.437 248514 DEBUG nova.objects.instance [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:42:14 np0005558241 nova_compute[248510]: 2025-12-13 08:42:14.452 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615319.45055, e7ecf467-c27c-44d4-b072-2537dba749e2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:42:14 np0005558241 nova_compute[248510]: 2025-12-13 08:42:14.452 248514 INFO nova.compute.manager [-] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:42:14 np0005558241 nova_compute[248510]: 2025-12-13 08:42:14.470 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:42:14 np0005558241 nova_compute[248510]:  <uuid>4edf7378-73e8-4119-a019-6ab7098ee4ad</uuid>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:  <name>instance-00000063</name>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerShowV257Test-server-1971514055</nova:name>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:42:13</nova:creationTime>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:42:14 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:        <nova:user uuid="e382ad6b2fdc41398bfa58dbb651d4be">tempest-ServerShowV257Test-371152472-project-member</nova:user>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:        <nova:project uuid="6d41fdecfe334aaeaaba54b9c6eaeb00">tempest-ServerShowV257Test-371152472</nova:project>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <entry name="serial">4edf7378-73e8-4119-a019-6ab7098ee4ad</entry>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <entry name="uuid">4edf7378-73e8-4119-a019-6ab7098ee4ad</entry>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/4edf7378-73e8-4119-a019-6ab7098ee4ad_disk">
Dec 13 03:42:14 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:42:14 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config">
Dec 13 03:42:14 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:42:14 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/console.log" append="off"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:42:14 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:42:14 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:42:14 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:42:14 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:42:14 np0005558241 nova_compute[248510]: 2025-12-13 08:42:14.477 248514 DEBUG nova.compute.manager [None req-cf932fe5-e3dd-4a11-8748-9ba4909b7cbe - - - - - -] [instance: e7ecf467-c27c-44d4-b072-2537dba749e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:42:14 np0005558241 podman[343620]: 2025-12-13 08:42:14.525841602 +0000 UTC m=+0.507519431 container init 7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_haibt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:42:14 np0005558241 podman[343620]: 2025-12-13 08:42:14.535406043 +0000 UTC m=+0.517083852 container start 7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_haibt, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 03:42:14 np0005558241 podman[343620]: 2025-12-13 08:42:14.576333786 +0000 UTC m=+0.558011605 container attach 7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_haibt, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 03:42:14 np0005558241 nova_compute[248510]: 2025-12-13 08:42:14.602 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:42:14 np0005558241 nova_compute[248510]: 2025-12-13 08:42:14.603 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:42:14 np0005558241 nova_compute[248510]: 2025-12-13 08:42:14.603 248514 INFO nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Using config drive#033[00m
Dec 13 03:42:14 np0005558241 nova_compute[248510]: 2025-12-13 08:42:14.627 248514 DEBUG nova.storage.rbd_utils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:14 np0005558241 nova_compute[248510]: 2025-12-13 08:42:14.867 248514 INFO nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Creating config drive at /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config#033[00m
Dec 13 03:42:14 np0005558241 nova_compute[248510]: 2025-12-13 08:42:14.875 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphqmz0mm7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:15 np0005558241 nova_compute[248510]: 2025-12-13 08:42:15.021 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphqmz0mm7" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:42:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3453947686' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:42:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:42:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3453947686' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:42:15 np0005558241 nova_compute[248510]: 2025-12-13 08:42:15.056 248514 DEBUG nova.storage.rbd_utils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:42:15 np0005558241 nova_compute[248510]: 2025-12-13 08:42:15.060 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:42:15 np0005558241 lvm[343771]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:42:15 np0005558241 lvm[343771]: VG ceph_vg0 finished
Dec 13 03:42:15 np0005558241 lvm[343774]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:42:15 np0005558241 lvm[343774]: VG ceph_vg1 finished
Dec 13 03:42:15 np0005558241 lvm[343776]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:42:15 np0005558241 lvm[343776]: VG ceph_vg2 finished
Dec 13 03:42:15 np0005558241 cool_haibt[343636]: {}
Dec 13 03:42:15 np0005558241 systemd[1]: libpod-7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8.scope: Deactivated successfully.
Dec 13 03:42:15 np0005558241 systemd[1]: libpod-7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8.scope: Consumed 1.316s CPU time.
Dec 13 03:42:15 np0005558241 podman[343620]: 2025-12-13 08:42:15.392375737 +0000 UTC m=+1.374053536 container died 7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 03:42:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7f7492aa08b4b88b847681456390b31332da1d67c25f871fc110df0cdd4290cb-merged.mount: Deactivated successfully.
Dec 13 03:42:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2385: 321 pgs: 321 active+clean; 213 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 65 op/s
Dec 13 03:42:16 np0005558241 nova_compute[248510]: 2025-12-13 08:42:16.212 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:42:16 np0005558241 podman[343620]: 2025-12-13 08:42:16.223456423 +0000 UTC m=+2.205134242 container remove 7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 03:42:16 np0005558241 nova_compute[248510]: 2025-12-13 08:42:16.233 248514 DEBUG oslo_concurrency.processutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:42:16 np0005558241 nova_compute[248510]: 2025-12-13 08:42:16.234 248514 INFO nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Deleting local config drive /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config because it was imported into RBD.
Dec 13 03:42:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:42:16 np0005558241 systemd[1]: libpod-conmon-7475d699258077e4a3c2c3b9962bfa4e31f805a6533f5fdfc0271bdfbfad20f8.scope: Deactivated successfully.
Dec 13 03:42:16 np0005558241 systemd-machined[210538]: New machine qemu-122-instance-00000063.
Dec 13 03:42:16 np0005558241 systemd[1]: Started Virtual Machine qemu-122-instance-00000063.
Dec 13 03:42:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:42:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:42:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.177 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615337.1766627, 4edf7378-73e8-4119-a019-6ab7098ee4ad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.178 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] VM Resumed (Lifecycle Event)
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.181 248514 DEBUG nova.compute.manager [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.181 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.184 248514 INFO nova.virt.libvirt.driver [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance spawned successfully.
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.185 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.207 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.212 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.212 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.213 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.213 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.213 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.214 248514 DEBUG nova.virt.libvirt.driver [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.218 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.253 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.253 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615337.1777263, 4edf7378-73e8-4119-a019-6ab7098ee4ad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.253 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] VM Started (Lifecycle Event)
Dec 13 03:42:17 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:42:17 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.292 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.299 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.307 248514 INFO nova.compute.manager [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Took 6.94 seconds to spawn the instance on the hypervisor.
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.307 248514 DEBUG nova.compute.manager [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.335 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.383 248514 INFO nova.compute.manager [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Took 8.10 seconds to build instance.
Dec 13 03:42:17 np0005558241 nova_compute[248510]: 2025-12-13 08:42:17.407 248514 DEBUG oslo_concurrency.lockutils [None req-e694a3f6-001c-4b88-a747-84cbeaaf464b e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "4edf7378-73e8-4119-a019-6ab7098ee4ad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:42:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2386: 321 pgs: 321 active+clean; 213 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Dec 13 03:42:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:42:18 np0005558241 nova_compute[248510]: 2025-12-13 08:42:18.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:42:18 np0005558241 nova_compute[248510]: 2025-12-13 08:42:18.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:42:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:42:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.2 total, 600.0 interval
Cumulative writes: 25K writes, 101K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
Cumulative WAL: 25K writes, 8616 syncs, 2.95 writes per sync, written: 0.10 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5918 writes, 22K keys, 5918 commit groups, 1.0 writes per commit group, ingest: 23.38 MB, 0.04 MB/s
Interval WAL: 5918 writes, 2362 syncs, 2.51 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.709 248514 INFO nova.compute.manager [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Rebuilding instance
Dec 13 03:42:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2387: 321 pgs: 321 active+clean; 213 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.900 248514 DEBUG nova.compute.manager [req-4d7d75be-146b-45a6-8da6-a324e9e99a5b req-3299a24f-2b48-4117-a927-59c7e1416f4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Received event network-vif-plugged-a6d7b148-2fef-4c47-a3fa-c8948759b4a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.901 248514 DEBUG oslo_concurrency.lockutils [req-4d7d75be-146b-45a6-8da6-a324e9e99a5b req-3299a24f-2b48-4117-a927-59c7e1416f4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.902 248514 DEBUG oslo_concurrency.lockutils [req-4d7d75be-146b-45a6-8da6-a324e9e99a5b req-3299a24f-2b48-4117-a927-59c7e1416f4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.902 248514 DEBUG oslo_concurrency.lockutils [req-4d7d75be-146b-45a6-8da6-a324e9e99a5b req-3299a24f-2b48-4117-a927-59c7e1416f4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.902 248514 DEBUG nova.compute.manager [req-4d7d75be-146b-45a6-8da6-a324e9e99a5b req-3299a24f-2b48-4117-a927-59c7e1416f4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Processing event network-vif-plugged-a6d7b148-2fef-4c47-a3fa-c8948759b4a9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.903 248514 DEBUG nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Instance event wait completed in 8 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.913 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615339.912378, 2e9e6711-892f-4278-b911-bfacbff9b48e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.914 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] VM Resumed (Lifecycle Event)
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.916 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.926 248514 INFO nova.virt.libvirt.driver [-] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Instance spawned successfully.
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.926 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.936 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.942 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.962 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.963 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.963 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.964 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.965 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.965 248514 DEBUG nova.virt.libvirt.driver [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:42:19 np0005558241 nova_compute[248510]: 2025-12-13 08:42:19.971 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:42:20 np0005558241 nova_compute[248510]: 2025-12-13 08:42:20.045 248514 INFO nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Took 17.44 seconds to spawn the instance on the hypervisor.
Dec 13 03:42:20 np0005558241 nova_compute[248510]: 2025-12-13 08:42:20.046 248514 DEBUG nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:42:20 np0005558241 nova_compute[248510]: 2025-12-13 08:42:20.071 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:42:20 np0005558241 nova_compute[248510]: 2025-12-13 08:42:20.098 248514 DEBUG nova.compute.manager [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:42:20 np0005558241 nova_compute[248510]: 2025-12-13 08:42:20.116 248514 INFO nova.compute.manager [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Took 19.03 seconds to build instance.
Dec 13 03:42:20 np0005558241 nova_compute[248510]: 2025-12-13 08:42:20.148 248514 DEBUG oslo_concurrency.lockutils [None req-3bdd0d37-5d1c-4322-a4ab-4015755a5b99 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:42:20 np0005558241 nova_compute[248510]: 2025-12-13 08:42:20.152 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'pci_requests' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:42:20 np0005558241 nova_compute[248510]: 2025-12-13 08:42:20.166 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:42:20 np0005558241 nova_compute[248510]: 2025-12-13 08:42:20.183 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'resources' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:42:20 np0005558241 nova_compute[248510]: 2025-12-13 08:42:20.198 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'migration_context' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:42:20 np0005558241 nova_compute[248510]: 2025-12-13 08:42:20.276 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 03:42:20 np0005558241 nova_compute[248510]: 2025-12-13 08:42:20.279 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014647506305992414 of space, bias 1.0, pg target 0.4394251891797724 quantized to 32 (current 32)
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006677122905383687 of space, bias 1.0, pg target 0.20031368716151063 quantized to 32 (current 32)
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.927163191339098e-07 of space, bias 4.0, pg target 0.0007112595829606918 quantized to 16 (current 32)
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:42:21 np0005558241 nova_compute[248510]: 2025-12-13 08:42:21.215 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:42:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2388: 321 pgs: 321 active+clean; 214 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Dec 13 03:42:22 np0005558241 nova_compute[248510]: 2025-12-13 08:42:22.030 248514 DEBUG nova.compute.manager [req-455efc10-3847-43ff-9ff4-ed9506bb0651 req-da7448f0-f556-45b9-9736-a15d3726833d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Received event network-vif-plugged-a6d7b148-2fef-4c47-a3fa-c8948759b4a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:42:22 np0005558241 nova_compute[248510]: 2025-12-13 08:42:22.031 248514 DEBUG oslo_concurrency.lockutils [req-455efc10-3847-43ff-9ff4-ed9506bb0651 req-da7448f0-f556-45b9-9736-a15d3726833d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:42:22 np0005558241 nova_compute[248510]: 2025-12-13 08:42:22.031 248514 DEBUG oslo_concurrency.lockutils [req-455efc10-3847-43ff-9ff4-ed9506bb0651 req-da7448f0-f556-45b9-9736-a15d3726833d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:42:22 np0005558241 nova_compute[248510]: 2025-12-13 08:42:22.032 248514 DEBUG oslo_concurrency.lockutils [req-455efc10-3847-43ff-9ff4-ed9506bb0651 req-da7448f0-f556-45b9-9736-a15d3726833d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:42:22 np0005558241 nova_compute[248510]: 2025-12-13 08:42:22.032 248514 DEBUG nova.compute.manager [req-455efc10-3847-43ff-9ff4-ed9506bb0651 req-da7448f0-f556-45b9-9736-a15d3726833d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] No waiting events found dispatching network-vif-plugged-a6d7b148-2fef-4c47-a3fa-c8948759b4a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:42:22 np0005558241 nova_compute[248510]: 2025-12-13 08:42:22.032 248514 WARNING nova.compute.manager [req-455efc10-3847-43ff-9ff4-ed9506bb0651 req-da7448f0-f556-45b9-9736-a15d3726833d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Received unexpected event network-vif-plugged-a6d7b148-2fef-4c47-a3fa-c8948759b4a9 for instance with vm_state active and task_state None.
Dec 13 03:42:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:42:23 np0005558241 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 03:42:23 np0005558241 nova_compute[248510]: 2025-12-13 08:42:23.789 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:42:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2389: 321 pgs: 321 active+clean; 214 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Dec 13 03:42:24 np0005558241 nova_compute[248510]: 2025-12-13 08:42:24.844 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:42:24 np0005558241 nova_compute[248510]: 2025-12-13 08:42:24.845 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:42:24 np0005558241 nova_compute[248510]: 2025-12-13 08:42:24.864 248514 DEBUG nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:42:24 np0005558241 nova_compute[248510]: 2025-12-13 08:42:24.941 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:42:24 np0005558241 nova_compute[248510]: 2025-12-13 08:42:24.942 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:42:24 np0005558241 nova_compute[248510]: 2025-12-13 08:42:24.952 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:42:24 np0005558241 nova_compute[248510]: 2025-12-13 08:42:24.953 248514 INFO nova.compute.claims [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:42:25 np0005558241 nova_compute[248510]: 2025-12-13 08:42:25.176 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:42:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:42:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2868920489' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:42:25 np0005558241 nova_compute[248510]: 2025-12-13 08:42:25.770 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:42:25 np0005558241 nova_compute[248510]: 2025-12-13 08:42:25.775 248514 DEBUG nova.compute.provider_tree [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:42:25 np0005558241 nova_compute[248510]: 2025-12-13 08:42:25.799 248514 DEBUG nova.scheduler.client.report [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:42:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2390: 321 pgs: 321 active+clean; 214 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 875 KiB/s wr, 159 op/s
Dec 13 03:42:25 np0005558241 nova_compute[248510]: 2025-12-13 08:42:25.844 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:42:25 np0005558241 nova_compute[248510]: 2025-12-13 08:42:25.845 248514 DEBUG nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:42:25 np0005558241 nova_compute[248510]: 2025-12-13 08:42:25.926 248514 DEBUG nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:42:25 np0005558241 nova_compute[248510]: 2025-12-13 08:42:25.926 248514 DEBUG nova.network.neutron [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:42:25 np0005558241 nova_compute[248510]: 2025-12-13 08:42:25.956 248514 INFO nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:42:25 np0005558241 nova_compute[248510]: 2025-12-13 08:42:25.990 248514 DEBUG nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:42:26 np0005558241 nova_compute[248510]: 2025-12-13 08:42:26.109 248514 DEBUG nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:42:26 np0005558241 nova_compute[248510]: 2025-12-13 08:42:26.110 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:42:26 np0005558241 nova_compute[248510]: 2025-12-13 08:42:26.110 248514 INFO nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Creating image(s)
Dec 13 03:42:26 np0005558241 nova_compute[248510]: 2025-12-13 08:42:26.129 248514 DEBUG nova.storage.rbd_utils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:42:26 np0005558241 nova_compute[248510]: 2025-12-13 08:42:26.150 248514 DEBUG nova.storage.rbd_utils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:42:26 np0005558241 nova_compute[248510]: 2025-12-13 08:42:26.173 248514 DEBUG nova.storage.rbd_utils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:42:26 np0005558241 nova_compute[248510]: 2025-12-13 08:42:26.177 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:42:26 np0005558241 nova_compute[248510]: 2025-12-13 08:42:26.216 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:42:26 np0005558241 nova_compute[248510]: 2025-12-13 08:42:26.265 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:42:26 np0005558241 nova_compute[248510]: 2025-12-13 08:42:26.265 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:42:26 np0005558241 nova_compute[248510]: 2025-12-13 08:42:26.266 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:42:26 np0005558241 nova_compute[248510]: 2025-12-13 08:42:26.266 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:42:26 np0005558241 nova_compute[248510]: 2025-12-13 08:42:26.286 248514 DEBUG nova.storage.rbd_utils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:42:26 np0005558241 nova_compute[248510]: 2025-12-13 08:42:26.290 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:42:26 np0005558241 nova_compute[248510]: 2025-12-13 08:42:26.328 248514 DEBUG nova.policy [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3fcdbbf0ede24d3e820e927d9136712b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:42:27 np0005558241 nova_compute[248510]: 2025-12-13 08:42:27.302 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:42:27 np0005558241 nova_compute[248510]: 2025-12-13 08:42:27.369 248514 DEBUG nova.storage.rbd_utils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] resizing rbd image 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:42:27 np0005558241 nova_compute[248510]: 2025-12-13 08:42:27.483 248514 DEBUG nova.objects.instance [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'migration_context' on Instance uuid 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:42:27 np0005558241 nova_compute[248510]: 2025-12-13 08:42:27.594 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:42:27 np0005558241 nova_compute[248510]: 2025-12-13 08:42:27.596 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Ensure instance console log exists: /var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:42:27 np0005558241 nova_compute[248510]: 2025-12-13 08:42:27.596 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:42:27 np0005558241 nova_compute[248510]: 2025-12-13 08:42:27.597 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:42:27 np0005558241 nova_compute[248510]: 2025-12-13 08:42:27.597 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:42:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2391: 321 pgs: 321 active+clean; 214 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 137 op/s
Dec 13 03:42:28 np0005558241 nova_compute[248510]: 2025-12-13 08:42:28.001 248514 DEBUG nova.network.neutron [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Successfully created port: 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 03:42:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:42:28 np0005558241 nova_compute[248510]: 2025-12-13 08:42:28.792 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:42:29 np0005558241 nova_compute[248510]: 2025-12-13 08:42:29.106 248514 DEBUG nova.network.neutron [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Successfully updated port: 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 03:42:29 np0005558241 nova_compute[248510]: 2025-12-13 08:42:29.137 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "refresh_cache-6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:42:29 np0005558241 nova_compute[248510]: 2025-12-13 08:42:29.137 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquired lock "refresh_cache-6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:42:29 np0005558241 nova_compute[248510]: 2025-12-13 08:42:29.137 248514 DEBUG nova.network.neutron [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:42:29 np0005558241 nova_compute[248510]: 2025-12-13 08:42:29.285 248514 DEBUG nova.compute.manager [req-7860456d-5cff-475f-8ad1-c2fd808d8b40 req-59801b75-003f-4e2c-b491-703a86932cce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Received event network-changed-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:42:29 np0005558241 nova_compute[248510]: 2025-12-13 08:42:29.286 248514 DEBUG nova.compute.manager [req-7860456d-5cff-475f-8ad1-c2fd808d8b40 req-59801b75-003f-4e2c-b491-703a86932cce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Refreshing instance network info cache due to event network-changed-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 03:42:29 np0005558241 nova_compute[248510]: 2025-12-13 08:42:29.286 248514 DEBUG oslo_concurrency.lockutils [req-7860456d-5cff-475f-8ad1-c2fd808d8b40 req-59801b75-003f-4e2c-b491-703a86932cce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:42:29 np0005558241 nova_compute[248510]: 2025-12-13 08:42:29.394 248514 DEBUG nova.network.neutron [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:42:29 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Dec 13 03:42:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2392: 321 pgs: 321 active+clean; 233 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.0 MiB/s wr, 164 op/s
Dec 13 03:42:30 np0005558241 nova_compute[248510]: 2025-12-13 08:42:30.432 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.205 248514 DEBUG nova.network.neutron [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Updating instance_info_cache with network_info: [{"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.218 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.234 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Releasing lock "refresh_cache-6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.235 248514 DEBUG nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Instance network_info: |[{"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.235 248514 DEBUG oslo_concurrency.lockutils [req-7860456d-5cff-475f-8ad1-c2fd808d8b40 req-59801b75-003f-4e2c-b491-703a86932cce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.235 248514 DEBUG nova.network.neutron [req-7860456d-5cff-475f-8ad1-c2fd808d8b40 req-59801b75-003f-4e2c-b491-703a86932cce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Refreshing network info cache for port 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.238 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Start _get_guest_xml network_info=[{"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.241 248514 WARNING nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.247 248514 DEBUG nova.virt.libvirt.host [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.248 248514 DEBUG nova.virt.libvirt.host [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.251 248514 DEBUG nova.virt.libvirt.host [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.252 248514 DEBUG nova.virt.libvirt.host [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.252 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.253 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.254 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.254 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.255 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.255 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.255 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.256 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.256 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.256 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.257 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.257 248514 DEBUG nova.virt.hardware [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.262 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:42:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2289946204' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.831 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2393: 321 pgs: 321 active+clean; 264 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.861 248514 DEBUG nova.storage.rbd_utils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:31 np0005558241 nova_compute[248510]: 2025-12-13 08:42:31.865 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:42:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3299341507' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.456 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.458 248514 DEBUG nova.virt.libvirt.vif [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1551844735',display_name='tempest-ServersTestJSON-server-1551844735',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1551844735',id=100,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-2qzmyk70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'
},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:42:26Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.458 248514 DEBUG nova.network.os_vif_util [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.459 248514 DEBUG nova.network.os_vif_util [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:6b:01,bridge_name='br-int',has_traffic_filtering=True,id=6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8f6d0b-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.460 248514 DEBUG nova.objects.instance [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'pci_devices' on Instance uuid 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.480 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:42:32 np0005558241 nova_compute[248510]:  <uuid>6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c</uuid>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:  <name>instance-00000064</name>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersTestJSON-server-1551844735</nova:name>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:42:31</nova:creationTime>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:42:32 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:        <nova:user uuid="3fcdbbf0ede24d3e820e927d9136712b">tempest-ServersTestJSON-1443479207-project-member</nova:user>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:        <nova:project uuid="3fb4f27cdc7f468faa36b4e7a461d7be">tempest-ServersTestJSON-1443479207</nova:project>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:        <nova:port uuid="6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965">
Dec 13 03:42:32 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <entry name="serial">6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c</entry>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <entry name="uuid">6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c</entry>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk">
Dec 13 03:42:32 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:42:32 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk.config">
Dec 13 03:42:32 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:42:32 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:89:6b:01"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <target dev="tap6a8f6d0b-00"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c/console.log" append="off"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:42:32 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:42:32 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:42:32 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:42:32 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.481 248514 DEBUG nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Preparing to wait for external event network-vif-plugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.481 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.481 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.481 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.482 248514 DEBUG nova.virt.libvirt.vif [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1551844735',display_name='tempest-ServersTestJSON-server-1551844735',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1551844735',id=100,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-2qzmyk70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:42:26Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.482 248514 DEBUG nova.network.os_vif_util [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.483 248514 DEBUG nova.network.os_vif_util [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:6b:01,bridge_name='br-int',has_traffic_filtering=True,id=6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8f6d0b-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.483 248514 DEBUG os_vif [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:6b:01,bridge_name='br-int',has_traffic_filtering=True,id=6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8f6d0b-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.484 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.484 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.485 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.487 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a8f6d0b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.488 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6a8f6d0b-00, col_values=(('external_ids', {'iface-id': '6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:6b:01', 'vm-uuid': '6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:32 np0005558241 NetworkManager[50376]: <info>  [1765615352.4901] manager: (tap6a8f6d0b-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.494 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.497 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.497 248514 INFO os_vif [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:6b:01,bridge_name='br-int',has_traffic_filtering=True,id=6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8f6d0b-00')#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.558 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.559 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.559 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No VIF found with MAC fa:16:3e:89:6b:01, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.560 248514 INFO nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Using config drive#033[00m
Dec 13 03:42:32 np0005558241 nova_compute[248510]: 2025-12-13 08:42:32.578 248514 DEBUG nova.storage.rbd_utils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.253600) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615353253636, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2100, "num_deletes": 255, "total_data_size": 3434009, "memory_usage": 3492000, "flush_reason": "Manual Compaction"}
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615353270530, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 2052033, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45072, "largest_seqno": 47171, "table_properties": {"data_size": 2044946, "index_size": 3777, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18775, "raw_average_key_size": 21, "raw_value_size": 2029063, "raw_average_value_size": 2295, "num_data_blocks": 170, "num_entries": 884, "num_filter_entries": 884, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615152, "oldest_key_time": 1765615152, "file_creation_time": 1765615353, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 16987 microseconds, and 5072 cpu microseconds.
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.270580) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 2052033 bytes OK
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.270602) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.272600) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.272618) EVENT_LOG_v1 {"time_micros": 1765615353272613, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.272637) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3425090, prev total WAL file size 3425090, number of live WAL files 2.
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.273751) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373539' seq:72057594037927935, type:22 .. '6D6772737461740032303131' seq:0, type:0; will stop at (end)
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(2003KB)], [104(9369KB)]
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615353273808, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 11646830, "oldest_snapshot_seqno": -1}
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7081 keys, 9465697 bytes, temperature: kUnknown
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615353341668, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 9465697, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9420086, "index_size": 26811, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17733, "raw_key_size": 182502, "raw_average_key_size": 25, "raw_value_size": 9294990, "raw_average_value_size": 1312, "num_data_blocks": 1059, "num_entries": 7081, "num_filter_entries": 7081, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765615353, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.341914) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 9465697 bytes
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.343945) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.4 rd, 139.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.2 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(10.3) write-amplify(4.6) OK, records in: 7511, records dropped: 430 output_compression: NoCompression
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.343966) EVENT_LOG_v1 {"time_micros": 1765615353343957, "job": 62, "event": "compaction_finished", "compaction_time_micros": 67944, "compaction_time_cpu_micros": 26537, "output_level": 6, "num_output_files": 1, "total_output_size": 9465697, "num_input_records": 7511, "num_output_records": 7081, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615353344558, "job": 62, "event": "table_file_deletion", "file_number": 106}
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615353346542, "job": 62, "event": "table_file_deletion", "file_number": 104}
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.273673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.346671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.346677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.346679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.346680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:42:33 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:33.346682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:42:33 np0005558241 nova_compute[248510]: 2025-12-13 08:42:33.590 248514 INFO nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Creating config drive at /var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c/disk.config#033[00m
Dec 13 03:42:33 np0005558241 nova_compute[248510]: 2025-12-13 08:42:33.595 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqyoe2iv9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:33 np0005558241 nova_compute[248510]: 2025-12-13 08:42:33.709 248514 DEBUG nova.network.neutron [req-7860456d-5cff-475f-8ad1-c2fd808d8b40 req-59801b75-003f-4e2c-b491-703a86932cce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Updated VIF entry in instance network info cache for port 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:42:33 np0005558241 nova_compute[248510]: 2025-12-13 08:42:33.710 248514 DEBUG nova.network.neutron [req-7860456d-5cff-475f-8ad1-c2fd808d8b40 req-59801b75-003f-4e2c-b491-703a86932cce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Updating instance_info_cache with network_info: [{"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:42:33 np0005558241 nova_compute[248510]: 2025-12-13 08:42:33.733 248514 DEBUG oslo_concurrency.lockutils [req-7860456d-5cff-475f-8ad1-c2fd808d8b40 req-59801b75-003f-4e2c-b491-703a86932cce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:42:33 np0005558241 nova_compute[248510]: 2025-12-13 08:42:33.743 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqyoe2iv9" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:33 np0005558241 nova_compute[248510]: 2025-12-13 08:42:33.765 248514 DEBUG nova.storage.rbd_utils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:33 np0005558241 nova_compute[248510]: 2025-12-13 08:42:33.769 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c/disk.config 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2394: 321 pgs: 321 active+clean; 272 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.2 MiB/s wr, 154 op/s
Dec 13 03:42:33 np0005558241 ovn_controller[148476]: 2025-12-13T08:42:33Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e5:df:21 10.100.0.9
Dec 13 03:42:33 np0005558241 ovn_controller[148476]: 2025-12-13T08:42:33Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e5:df:21 10.100.0.9
Dec 13 03:42:34 np0005558241 nova_compute[248510]: 2025-12-13 08:42:34.401 248514 DEBUG oslo_concurrency.processutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c/disk.config 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:34 np0005558241 nova_compute[248510]: 2025-12-13 08:42:34.402 248514 INFO nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Deleting local config drive /var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c/disk.config because it was imported into RBD.#033[00m
Dec 13 03:42:34 np0005558241 NetworkManager[50376]: <info>  [1765615354.4537] manager: (tap6a8f6d0b-00): new Tun device (/org/freedesktop/NetworkManager/Devices/401)
Dec 13 03:42:34 np0005558241 kernel: tap6a8f6d0b-00: entered promiscuous mode
Dec 13 03:42:34 np0005558241 nova_compute[248510]: 2025-12-13 08:42:34.456 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:42:34Z|00956|binding|INFO|Claiming lport 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 for this chassis.
Dec 13 03:42:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:42:34Z|00957|binding|INFO|6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965: Claiming fa:16:3e:89:6b:01 10.100.0.7
Dec 13 03:42:34 np0005558241 nova_compute[248510]: 2025-12-13 08:42:34.459 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.468 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:6b:01 10.100.0.7'], port_security=['fa:16:3e:89:6b:01 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:42:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.469 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 bound to our chassis#033[00m
Dec 13 03:42:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.471 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3#033[00m
Dec 13 03:42:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:42:34Z|00958|binding|INFO|Setting lport 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 ovn-installed in OVS
Dec 13 03:42:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:42:34Z|00959|binding|INFO|Setting lport 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 up in Southbound
Dec 13 03:42:34 np0005558241 nova_compute[248510]: 2025-12-13 08:42:34.476 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:34 np0005558241 systemd-udevd[344201]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:42:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.495 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cc74c8b2-5760-4ccc-8d17-f6cb46af33e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:34 np0005558241 systemd-machined[210538]: New machine qemu-123-instance-00000064.
Dec 13 03:42:34 np0005558241 systemd[1]: Started Virtual Machine qemu-123-instance-00000064.
Dec 13 03:42:34 np0005558241 NetworkManager[50376]: <info>  [1765615354.5130] device (tap6a8f6d0b-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:42:34 np0005558241 NetworkManager[50376]: <info>  [1765615354.5152] device (tap6a8f6d0b-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:42:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.543 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[885cbbea-d3cd-4a22-b704-ae3bd089c30d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.547 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ed0e4f82-95c5-4ef3-afd5-8cef12e6356c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.582 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5d456f-2db3-41bf-b80c-2fc37388c71c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.600 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e89350fc-9398-4997-a140-7a3fc92fc30a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344215, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.616 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[09fdb1f7-c6bc-455c-8ec9-192866b19977]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344216, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344216, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.618 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:34 np0005558241 nova_compute[248510]: 2025-12-13 08:42:34.619 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:34 np0005558241 nova_compute[248510]: 2025-12-13 08:42:34.621 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.621 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.621 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:42:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.622 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:34.622 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:42:34 np0005558241 nova_compute[248510]: 2025-12-13 08:42:34.976 248514 DEBUG nova.compute.manager [req-d2f30b2d-1405-48fc-a7de-24ae5dbece68 req-cf778766-fe3a-4d84-8f1f-fe9c33c68fdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Received event network-vif-plugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:42:34 np0005558241 nova_compute[248510]: 2025-12-13 08:42:34.977 248514 DEBUG oslo_concurrency.lockutils [req-d2f30b2d-1405-48fc-a7de-24ae5dbece68 req-cf778766-fe3a-4d84-8f1f-fe9c33c68fdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:34 np0005558241 nova_compute[248510]: 2025-12-13 08:42:34.977 248514 DEBUG oslo_concurrency.lockutils [req-d2f30b2d-1405-48fc-a7de-24ae5dbece68 req-cf778766-fe3a-4d84-8f1f-fe9c33c68fdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:34 np0005558241 nova_compute[248510]: 2025-12-13 08:42:34.977 248514 DEBUG oslo_concurrency.lockutils [req-d2f30b2d-1405-48fc-a7de-24ae5dbece68 req-cf778766-fe3a-4d84-8f1f-fe9c33c68fdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:34 np0005558241 nova_compute[248510]: 2025-12-13 08:42:34.978 248514 DEBUG nova.compute.manager [req-d2f30b2d-1405-48fc-a7de-24ae5dbece68 req-cf778766-fe3a-4d84-8f1f-fe9c33c68fdd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Processing event network-vif-plugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.114 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615355.1143548, 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.115 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] VM Started (Lifecycle Event)#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.117 248514 DEBUG nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.119 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.122 248514 INFO nova.virt.libvirt.driver [-] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Instance spawned successfully.#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.122 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.144 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.150 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.155 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.155 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.156 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.156 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.157 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.159 248514 DEBUG nova.virt.libvirt.driver [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.195 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.195 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615355.1144834, 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.195 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.230 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.234 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615355.1190386, 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.234 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.249 248514 INFO nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Took 9.14 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.249 248514 DEBUG nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.261 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.263 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.295 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.355 248514 INFO nova.compute.manager [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Took 10.44 seconds to build instance.#033[00m
Dec 13 03:42:35 np0005558241 nova_compute[248510]: 2025-12-13 08:42:35.380 248514 DEBUG oslo_concurrency.lockutils [None req-293f7d5b-b2dc-4d32-9f4e-dd0ccc9b3119 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2395: 321 pgs: 321 active+clean; 318 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.0 MiB/s wr, 195 op/s
Dec 13 03:42:36 np0005558241 nova_compute[248510]: 2025-12-13 08:42:36.222 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:37 np0005558241 nova_compute[248510]: 2025-12-13 08:42:37.120 248514 DEBUG nova.compute.manager [req-4615e44a-bcc2-4c3e-ae81-8313cb1ad513 req-9f299343-fad7-4903-9c1a-fa8d5401799e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Received event network-vif-plugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:42:37 np0005558241 nova_compute[248510]: 2025-12-13 08:42:37.120 248514 DEBUG oslo_concurrency.lockutils [req-4615e44a-bcc2-4c3e-ae81-8313cb1ad513 req-9f299343-fad7-4903-9c1a-fa8d5401799e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:37 np0005558241 nova_compute[248510]: 2025-12-13 08:42:37.120 248514 DEBUG oslo_concurrency.lockutils [req-4615e44a-bcc2-4c3e-ae81-8313cb1ad513 req-9f299343-fad7-4903-9c1a-fa8d5401799e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:37 np0005558241 nova_compute[248510]: 2025-12-13 08:42:37.121 248514 DEBUG oslo_concurrency.lockutils [req-4615e44a-bcc2-4c3e-ae81-8313cb1ad513 req-9f299343-fad7-4903-9c1a-fa8d5401799e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:37 np0005558241 nova_compute[248510]: 2025-12-13 08:42:37.121 248514 DEBUG nova.compute.manager [req-4615e44a-bcc2-4c3e-ae81-8313cb1ad513 req-9f299343-fad7-4903-9c1a-fa8d5401799e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] No waiting events found dispatching network-vif-plugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:42:37 np0005558241 nova_compute[248510]: 2025-12-13 08:42:37.121 248514 WARNING nova.compute.manager [req-4615e44a-bcc2-4c3e-ae81-8313cb1ad513 req-9f299343-fad7-4903-9c1a-fa8d5401799e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Received unexpected event network-vif-plugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:42:37 np0005558241 nova_compute[248510]: 2025-12-13 08:42:37.491 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2396: 321 pgs: 321 active+clean; 318 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 630 KiB/s rd, 6.0 MiB/s wr, 146 op/s
Dec 13 03:42:37 np0005558241 podman[344260]: 2025-12-13 08:42:37.982833564 +0000 UTC m=+0.066813133 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 03:42:37 np0005558241 podman[344261]: 2025-12-13 08:42:37.999820109 +0000 UTC m=+0.083070069 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 13 03:42:38 np0005558241 podman[344259]: 2025-12-13 08:42:38.028980044 +0000 UTC m=+0.113144678 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.496894) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615358496922, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 291, "num_deletes": 251, "total_data_size": 101214, "memory_usage": 108200, "flush_reason": "Manual Compaction"}
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615358528246, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 100159, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47172, "largest_seqno": 47462, "table_properties": {"data_size": 98208, "index_size": 179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4961, "raw_average_key_size": 18, "raw_value_size": 94431, "raw_average_value_size": 348, "num_data_blocks": 8, "num_entries": 271, "num_filter_entries": 271, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615354, "oldest_key_time": 1765615354, "file_creation_time": 1765615358, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 31402 microseconds, and 1018 cpu microseconds.
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.528290) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 100159 bytes OK
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.528310) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.615547) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.615593) EVENT_LOG_v1 {"time_micros": 1765615358615583, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.615619) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 99070, prev total WAL file size 99070, number of live WAL files 2.
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.618118) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(97KB)], [107(9243KB)]
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615358618177, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 9565856, "oldest_snapshot_seqno": -1}
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6843 keys, 7777672 bytes, temperature: kUnknown
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615358871222, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 7777672, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7735044, "index_size": 24404, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 178244, "raw_average_key_size": 26, "raw_value_size": 7615469, "raw_average_value_size": 1112, "num_data_blocks": 949, "num_entries": 6843, "num_filter_entries": 6843, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765615358, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.871638) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 7777672 bytes
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.878159) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 37.8 rd, 30.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 9.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(173.2) write-amplify(77.7) OK, records in: 7352, records dropped: 509 output_compression: NoCompression
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.878190) EVENT_LOG_v1 {"time_micros": 1765615358878177, "job": 64, "event": "compaction_finished", "compaction_time_micros": 253170, "compaction_time_cpu_micros": 19160, "output_level": 6, "num_output_files": 1, "total_output_size": 7777672, "num_input_records": 7352, "num_output_records": 6843, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615358878377, "job": 64, "event": "table_file_deletion", "file_number": 109}
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615358880412, "job": 64, "event": "table_file_deletion", "file_number": 107}
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.618026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.880513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.880518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.880520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.880521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:42:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:42:38.880523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:42:38 np0005558241 nova_compute[248510]: 2025-12-13 08:42:38.994 248514 DEBUG oslo_concurrency.lockutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:38 np0005558241 nova_compute[248510]: 2025-12-13 08:42:38.995 248514 DEBUG oslo_concurrency.lockutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:38 np0005558241 nova_compute[248510]: 2025-12-13 08:42:38.995 248514 DEBUG oslo_concurrency.lockutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:38 np0005558241 nova_compute[248510]: 2025-12-13 08:42:38.995 248514 DEBUG oslo_concurrency.lockutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:38 np0005558241 nova_compute[248510]: 2025-12-13 08:42:38.996 248514 DEBUG oslo_concurrency.lockutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:38 np0005558241 nova_compute[248510]: 2025-12-13 08:42:38.997 248514 INFO nova.compute.manager [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Terminating instance#033[00m
Dec 13 03:42:38 np0005558241 nova_compute[248510]: 2025-12-13 08:42:38.998 248514 DEBUG nova.compute.manager [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:42:39 np0005558241 kernel: tap6a8f6d0b-00 (unregistering): left promiscuous mode
Dec 13 03:42:39 np0005558241 NetworkManager[50376]: <info>  [1765615359.0633] device (tap6a8f6d0b-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:42:39 np0005558241 ovn_controller[148476]: 2025-12-13T08:42:39Z|00960|binding|INFO|Releasing lport 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 from this chassis (sb_readonly=0)
Dec 13 03:42:39 np0005558241 ovn_controller[148476]: 2025-12-13T08:42:39Z|00961|binding|INFO|Setting lport 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 down in Southbound
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.070 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:39 np0005558241 ovn_controller[148476]: 2025-12-13T08:42:39Z|00962|binding|INFO|Removing iface tap6a8f6d0b-00 ovn-installed in OVS
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.073 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.085 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:6b:01 10.100.0.7'], port_security=['fa:16:3e:89:6b:01 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:42:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.086 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 unbound from our chassis#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.086 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.088 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3#033[00m
Dec 13 03:42:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.103 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b10647db-a289-4f73-ad84-321245fced13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:39 np0005558241 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Deactivated successfully.
Dec 13 03:42:39 np0005558241 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Consumed 4.498s CPU time.
Dec 13 03:42:39 np0005558241 systemd-machined[210538]: Machine qemu-123-instance-00000064 terminated.
Dec 13 03:42:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.132 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c745a6a5-9b3e-46e5-9e4c-b1250d32afff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.137 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f9c96f-247e-467a-966c-f73b74ce4c7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.167 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e680adc7-7785-4d79-9c4d-5ce45b3a884b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.186 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[269992d1-af2f-453d-a0b4-4461960e37ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 17, 'rx_bytes': 658, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 17, 'rx_bytes': 658, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344332, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.203 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[40a3248c-0719-49b7-9e35-9b311eb52241]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344333, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344333, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.205 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.206 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.211 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.212 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.213 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:42:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.213 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:39.213 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.216 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.222 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.229 248514 INFO nova.virt.libvirt.driver [-] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Instance destroyed successfully.#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.230 248514 DEBUG nova.objects.instance [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'resources' on Instance uuid 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.253 248514 DEBUG nova.virt.libvirt.vif [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1551844735',display_name='tempest-ServersTestJSON-server-1551844735',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1551844735',id=100,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:42:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-2qzmyk70',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',imag
e_min_ram='0',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:42:35Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.253 248514 DEBUG nova.network.os_vif_util [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "address": "fa:16:3e:89:6b:01", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a8f6d0b-00", "ovs_interfaceid": "6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.254 248514 DEBUG nova.network.os_vif_util [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:6b:01,bridge_name='br-int',has_traffic_filtering=True,id=6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8f6d0b-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.255 248514 DEBUG os_vif [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:6b:01,bridge_name='br-int',has_traffic_filtering=True,id=6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8f6d0b-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.256 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.257 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a8f6d0b-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.258 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.260 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.262 248514 INFO os_vif [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:6b:01,bridge_name='br-int',has_traffic_filtering=True,id=6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a8f6d0b-00')#033[00m
Dec 13 03:42:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2397: 321 pgs: 321 active+clean; 326 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 6.0 MiB/s wr, 196 op/s
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.867 248514 DEBUG nova.compute.manager [req-e1efcb37-bfff-4083-aad3-0a65b882dc35 req-ea2c795a-8989-4663-9325-d37640379ca4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Received event network-vif-unplugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.867 248514 DEBUG oslo_concurrency.lockutils [req-e1efcb37-bfff-4083-aad3-0a65b882dc35 req-ea2c795a-8989-4663-9325-d37640379ca4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.868 248514 DEBUG oslo_concurrency.lockutils [req-e1efcb37-bfff-4083-aad3-0a65b882dc35 req-ea2c795a-8989-4663-9325-d37640379ca4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.868 248514 DEBUG oslo_concurrency.lockutils [req-e1efcb37-bfff-4083-aad3-0a65b882dc35 req-ea2c795a-8989-4663-9325-d37640379ca4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.868 248514 DEBUG nova.compute.manager [req-e1efcb37-bfff-4083-aad3-0a65b882dc35 req-ea2c795a-8989-4663-9325-d37640379ca4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] No waiting events found dispatching network-vif-unplugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:42:39 np0005558241 nova_compute[248510]: 2025-12-13 08:42:39.869 248514 DEBUG nova.compute.manager [req-e1efcb37-bfff-4083-aad3-0a65b882dc35 req-ea2c795a-8989-4663-9325-d37640379ca4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Received event network-vif-unplugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:42:40 np0005558241 nova_compute[248510]: 2025-12-13 08:42:40.054 248514 INFO nova.virt.libvirt.driver [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Deleting instance files /var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_del#033[00m
Dec 13 03:42:40 np0005558241 nova_compute[248510]: 2025-12-13 08:42:40.054 248514 INFO nova.virt.libvirt.driver [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Deletion of /var/lib/nova/instances/6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c_del complete#033[00m
Dec 13 03:42:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:42:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:42:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:42:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:42:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:42:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:42:40 np0005558241 nova_compute[248510]: 2025-12-13 08:42:40.130 248514 INFO nova.compute.manager [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Took 1.13 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:42:40 np0005558241 nova_compute[248510]: 2025-12-13 08:42:40.130 248514 DEBUG oslo.service.loopingcall [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:42:40 np0005558241 nova_compute[248510]: 2025-12-13 08:42:40.130 248514 DEBUG nova.compute.manager [-] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:42:40 np0005558241 nova_compute[248510]: 2025-12-13 08:42:40.131 248514 DEBUG nova.network.neutron [-] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:42:41 np0005558241 nova_compute[248510]: 2025-12-13 08:42:41.222 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:41 np0005558241 nova_compute[248510]: 2025-12-13 08:42:41.480 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Dec 13 03:42:41 np0005558241 nova_compute[248510]: 2025-12-13 08:42:41.605 248514 DEBUG nova.network.neutron [-] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:42:41 np0005558241 nova_compute[248510]: 2025-12-13 08:42:41.628 248514 INFO nova.compute.manager [-] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Took 1.50 seconds to deallocate network for instance.#033[00m
Dec 13 03:42:41 np0005558241 nova_compute[248510]: 2025-12-13 08:42:41.691 248514 DEBUG oslo_concurrency.lockutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:41 np0005558241 nova_compute[248510]: 2025-12-13 08:42:41.692 248514 DEBUG oslo_concurrency.lockutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:41 np0005558241 nova_compute[248510]: 2025-12-13 08:42:41.744 248514 DEBUG nova.compute.manager [req-44fd1b34-730f-44ed-a57e-7dca291c8f83 req-e53774e9-d2e6-4e3d-aef3-53c57c96b172 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Received event network-vif-deleted-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:42:41 np0005558241 nova_compute[248510]: 2025-12-13 08:42:41.833 248514 DEBUG oslo_concurrency.processutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2398: 321 pgs: 321 active+clean; 298 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 207 op/s
Dec 13 03:42:41 np0005558241 nova_compute[248510]: 2025-12-13 08:42:41.994 248514 DEBUG nova.compute.manager [req-2e9c1a12-bf53-4de8-b111-c0df572bbae1 req-5eb187af-9a72-41e3-90fa-fd6302d7c9bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Received event network-vif-plugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:42:41 np0005558241 nova_compute[248510]: 2025-12-13 08:42:41.994 248514 DEBUG oslo_concurrency.lockutils [req-2e9c1a12-bf53-4de8-b111-c0df572bbae1 req-5eb187af-9a72-41e3-90fa-fd6302d7c9bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:41 np0005558241 nova_compute[248510]: 2025-12-13 08:42:41.995 248514 DEBUG oslo_concurrency.lockutils [req-2e9c1a12-bf53-4de8-b111-c0df572bbae1 req-5eb187af-9a72-41e3-90fa-fd6302d7c9bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:41 np0005558241 nova_compute[248510]: 2025-12-13 08:42:41.995 248514 DEBUG oslo_concurrency.lockutils [req-2e9c1a12-bf53-4de8-b111-c0df572bbae1 req-5eb187af-9a72-41e3-90fa-fd6302d7c9bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:41 np0005558241 nova_compute[248510]: 2025-12-13 08:42:41.995 248514 DEBUG nova.compute.manager [req-2e9c1a12-bf53-4de8-b111-c0df572bbae1 req-5eb187af-9a72-41e3-90fa-fd6302d7c9bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] No waiting events found dispatching network-vif-plugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:42:41 np0005558241 nova_compute[248510]: 2025-12-13 08:42:41.995 248514 WARNING nova.compute.manager [req-2e9c1a12-bf53-4de8-b111-c0df572bbae1 req-5eb187af-9a72-41e3-90fa-fd6302d7c9bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Received unexpected event network-vif-plugged-6a8f6d0b-0003-4c1c-a1b9-e3c76cd86965 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:42:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:42:42 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4273022990' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:42:42 np0005558241 nova_compute[248510]: 2025-12-13 08:42:42.405 248514 DEBUG oslo_concurrency.processutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:42 np0005558241 nova_compute[248510]: 2025-12-13 08:42:42.411 248514 DEBUG nova.compute.provider_tree [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:42:42 np0005558241 nova_compute[248510]: 2025-12-13 08:42:42.434 248514 DEBUG nova.scheduler.client.report [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:42:42 np0005558241 nova_compute[248510]: 2025-12-13 08:42:42.467 248514 DEBUG oslo_concurrency.lockutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:42 np0005558241 nova_compute[248510]: 2025-12-13 08:42:42.520 248514 INFO nova.scheduler.client.report [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Deleted allocations for instance 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c#033[00m
Dec 13 03:42:42 np0005558241 nova_compute[248510]: 2025-12-13 08:42:42.627 248514 DEBUG oslo_concurrency.lockutils [None req-6631b906-287d-4522-8836-ee1189019b76 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.248 248514 DEBUG oslo_concurrency.lockutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "2e9e6711-892f-4278-b911-bfacbff9b48e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.248 248514 DEBUG oslo_concurrency.lockutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.248 248514 DEBUG oslo_concurrency.lockutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.249 248514 DEBUG oslo_concurrency.lockutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.249 248514 DEBUG oslo_concurrency.lockutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.250 248514 INFO nova.compute.manager [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Terminating instance#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.251 248514 DEBUG nova.compute.manager [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:42:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:42:43 np0005558241 kernel: tapa6d7b148-2f (unregistering): left promiscuous mode
Dec 13 03:42:43 np0005558241 NetworkManager[50376]: <info>  [1765615363.3568] device (tapa6d7b148-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.361 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:43 np0005558241 ovn_controller[148476]: 2025-12-13T08:42:43Z|00963|binding|INFO|Releasing lport a6d7b148-2fef-4c47-a3fa-c8948759b4a9 from this chassis (sb_readonly=0)
Dec 13 03:42:43 np0005558241 ovn_controller[148476]: 2025-12-13T08:42:43Z|00964|binding|INFO|Setting lport a6d7b148-2fef-4c47-a3fa-c8948759b4a9 down in Southbound
Dec 13 03:42:43 np0005558241 ovn_controller[148476]: 2025-12-13T08:42:43Z|00965|binding|INFO|Removing iface tapa6d7b148-2f ovn-installed in OVS
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.363 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.376 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:43 np0005558241 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000062.scope: Deactivated successfully.
Dec 13 03:42:43 np0005558241 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000062.scope: Consumed 13.268s CPU time.
Dec 13 03:42:43 np0005558241 systemd-machined[210538]: Machine qemu-121-instance-00000062 terminated.
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.487 248514 INFO nova.virt.libvirt.driver [-] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Instance destroyed successfully.#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.488 248514 DEBUG nova.objects.instance [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'resources' on Instance uuid 2e9e6711-892f-4278-b911-bfacbff9b48e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:42:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.545 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:df:21 10.100.0.9'], port_security=['fa:16:3e:e5:df:21 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2e9e6711-892f-4278-b911-bfacbff9b48e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=a6d7b148-2fef-4c47-a3fa-c8948759b4a9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:42:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.546 158419 INFO neutron.agent.ovn.metadata.agent [-] Port a6d7b148-2fef-4c47-a3fa-c8948759b4a9 in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 unbound from our chassis#033[00m
Dec 13 03:42:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.547 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.563 248514 DEBUG nova.virt.libvirt.vif [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:41:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1551844735',display_name='tempest-ServersTestJSON-server-1551844735',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1551844735',id=98,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:42:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-6ci0kbz7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image
_min_ram='0',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:42:20Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=2e9e6711-892f-4278-b911-bfacbff9b48e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.563 248514 DEBUG nova.network.os_vif_util [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "address": "fa:16:3e:e5:df:21", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d7b148-2f", "ovs_interfaceid": "a6d7b148-2fef-4c47-a3fa-c8948759b4a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.564 248514 DEBUG nova.network.os_vif_util [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:df:21,bridge_name='br-int',has_traffic_filtering=True,id=a6d7b148-2fef-4c47-a3fa-c8948759b4a9,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d7b148-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.565 248514 DEBUG os_vif [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:df:21,bridge_name='br-int',has_traffic_filtering=True,id=a6d7b148-2fef-4c47-a3fa-c8948759b4a9,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d7b148-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.566 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.566 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6d7b148-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.568 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.570 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:42:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.570 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78a0f578-2fbe-4168-8f82-2f01c381407b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.572 248514 INFO os_vif [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:df:21,bridge_name='br-int',has_traffic_filtering=True,id=a6d7b148-2fef-4c47-a3fa-c8948759b4a9,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d7b148-2f')#033[00m
Dec 13 03:42:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.606 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1afe41ae-7fde-4c89-b696-3c658cf84d53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.610 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd4aa5f-f2f0-4587-ab2a-65dc7ee2ce5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.646 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4663de1b-a58d-4aec-b66a-b8133659c7a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.671 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ea65c441-a543-479b-a73d-c41157e28b01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344426, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.699 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b9693afd-a969-4ea7-b409-2dfe4262a7dd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344427, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344427, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:42:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.702 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.703 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:42:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.708 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.708 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:42:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.709 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:42:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:43.709 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:42:43 np0005558241 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000063.scope: Deactivated successfully.
Dec 13 03:42:43 np0005558241 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000063.scope: Consumed 13.446s CPU time.
Dec 13 03:42:43 np0005558241 systemd-machined[210538]: Machine qemu-122-instance-00000063 terminated.
Dec 13 03:42:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2399: 321 pgs: 321 active+clean; 285 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.8 MiB/s wr, 224 op/s
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.866 248514 INFO nova.virt.libvirt.driver [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Deleting instance files /var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e_del#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.867 248514 INFO nova.virt.libvirt.driver [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Deletion of /var/lib/nova/instances/2e9e6711-892f-4278-b911-bfacbff9b48e_del complete#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.939 248514 INFO nova.compute.manager [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.940 248514 DEBUG oslo.service.loopingcall [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.940 248514 DEBUG nova.compute.manager [-] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:42:43 np0005558241 nova_compute[248510]: 2025-12-13 08:42:43.940 248514 DEBUG nova.network.neutron [-] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:42:44 np0005558241 nova_compute[248510]: 2025-12-13 08:42:44.495 248514 INFO nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance shutdown successfully after 24 seconds.#033[00m
Dec 13 03:42:44 np0005558241 nova_compute[248510]: 2025-12-13 08:42:44.501 248514 INFO nova.virt.libvirt.driver [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance destroyed successfully.#033[00m
Dec 13 03:42:44 np0005558241 nova_compute[248510]: 2025-12-13 08:42:44.506 248514 INFO nova.virt.libvirt.driver [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance destroyed successfully.#033[00m
Dec 13 03:42:44 np0005558241 nova_compute[248510]: 2025-12-13 08:42:44.802 248514 INFO nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Deleting instance files /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad_del#033[00m
Dec 13 03:42:44 np0005558241 nova_compute[248510]: 2025-12-13 08:42:44.803 248514 INFO nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Deletion of /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad_del complete#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.013 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.014 248514 INFO nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Creating image(s)#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.037 248514 DEBUG nova.storage.rbd_utils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.064 248514 DEBUG nova.storage.rbd_utils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.091 248514 DEBUG nova.storage.rbd_utils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.099 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.186 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.187 248514 DEBUG oslo_concurrency.lockutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.187 248514 DEBUG oslo_concurrency.lockutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.187 248514 DEBUG oslo_concurrency.lockutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.210 248514 DEBUG nova.storage.rbd_utils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.214 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.527 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.585 248514 DEBUG nova.storage.rbd_utils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] resizing rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.681 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.683 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Ensure instance console log exists: /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.684 248514 DEBUG oslo_concurrency.lockutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.684 248514 DEBUG oslo_concurrency.lockutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.685 248514 DEBUG oslo_concurrency.lockutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.687 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.693 248514 WARNING nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.699 248514 DEBUG nova.virt.libvirt.host [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.699 248514 DEBUG nova.virt.libvirt.host [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.703 248514 DEBUG nova.virt.libvirt.host [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.704 248514 DEBUG nova.virt.libvirt.host [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.704 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.704 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.705 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.705 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.705 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.705 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.706 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.706 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.706 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.706 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.707 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.707 248514 DEBUG nova.virt.hardware [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.707 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.736 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.814 248514 DEBUG nova.network.neutron [-] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:42:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2400: 321 pgs: 321 active+clean; 204 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 215 op/s
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.892 248514 INFO nova.compute.manager [-] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Took 1.95 seconds to deallocate network for instance.
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.907 248514 DEBUG nova.compute.manager [req-c2edb3a6-263d-4cc5-82af-0fcd4cdf4ec7 req-66ccdf7b-db3c-4330-8174-f5cb3c4b783b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Received event network-vif-deleted-a6d7b148-2fef-4c47-a3fa-c8948759b4a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.908 248514 INFO nova.compute.manager [req-c2edb3a6-263d-4cc5-82af-0fcd4cdf4ec7 req-66ccdf7b-db3c-4330-8174-f5cb3c4b783b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Neutron deleted interface a6d7b148-2fef-4c47-a3fa-c8948759b4a9; detaching it from the instance and deleting it from the info cache
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.908 248514 DEBUG nova.network.neutron [req-c2edb3a6-263d-4cc5-82af-0fcd4cdf4ec7 req-66ccdf7b-db3c-4330-8174-f5cb3c4b783b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.949 248514 DEBUG nova.compute.manager [req-c2edb3a6-263d-4cc5-82af-0fcd4cdf4ec7 req-66ccdf7b-db3c-4330-8174-f5cb3c4b783b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Detach interface failed, port_id=a6d7b148-2fef-4c47-a3fa-c8948759b4a9, reason: Instance 2e9e6711-892f-4278-b911-bfacbff9b48e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.980 248514 DEBUG oslo_concurrency.lockutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:42:45 np0005558241 nova_compute[248510]: 2025-12-13 08:42:45.981 248514 DEBUG oslo_concurrency.lockutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:42:46 np0005558241 nova_compute[248510]: 2025-12-13 08:42:46.094 248514 DEBUG oslo_concurrency.processutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:42:46 np0005558241 nova_compute[248510]: 2025-12-13 08:42:46.227 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:42:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:42:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1058594592' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:42:46 np0005558241 nova_compute[248510]: 2025-12-13 08:42:46.396 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:42:46 np0005558241 nova_compute[248510]: 2025-12-13 08:42:46.420 248514 DEBUG nova.storage.rbd_utils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:42:46 np0005558241 nova_compute[248510]: 2025-12-13 08:42:46.424 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:42:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:42:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2517483989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:42:46 np0005558241 nova_compute[248510]: 2025-12-13 08:42:46.693 248514 DEBUG oslo_concurrency.processutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:42:46 np0005558241 nova_compute[248510]: 2025-12-13 08:42:46.699 248514 DEBUG nova.compute.provider_tree [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:42:46 np0005558241 nova_compute[248510]: 2025-12-13 08:42:46.737 248514 DEBUG nova.scheduler.client.report [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:42:46 np0005558241 nova_compute[248510]: 2025-12-13 08:42:46.770 248514 DEBUG oslo_concurrency.lockutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:42:46 np0005558241 nova_compute[248510]: 2025-12-13 08:42:46.917 248514 INFO nova.scheduler.client.report [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Deleted allocations for instance 2e9e6711-892f-4278-b911-bfacbff9b48e
Dec 13 03:42:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:42:47 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2459931686' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:42:47 np0005558241 nova_compute[248510]: 2025-12-13 08:42:47.066 248514 DEBUG oslo_concurrency.lockutils [None req-34f73fd6-d87a-4826-a2fd-9ac3c54fe40c 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "2e9e6711-892f-4278-b911-bfacbff9b48e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:42:47 np0005558241 nova_compute[248510]: 2025-12-13 08:42:47.068 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:42:47 np0005558241 nova_compute[248510]: 2025-12-13 08:42:47.072 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:42:47 np0005558241 nova_compute[248510]:  <uuid>4edf7378-73e8-4119-a019-6ab7098ee4ad</uuid>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:  <name>instance-00000063</name>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServerShowV257Test-server-1971514055</nova:name>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:42:45</nova:creationTime>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:42:47 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:        <nova:user uuid="e382ad6b2fdc41398bfa58dbb651d4be">tempest-ServerShowV257Test-371152472-project-member</nova:user>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:        <nova:project uuid="6d41fdecfe334aaeaaba54b9c6eaeb00">tempest-ServerShowV257Test-371152472</nova:project>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="c10f1898-20b3-4bc9-8a36-2ee01b39c9ce"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <entry name="serial">4edf7378-73e8-4119-a019-6ab7098ee4ad</entry>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <entry name="uuid">4edf7378-73e8-4119-a019-6ab7098ee4ad</entry>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/4edf7378-73e8-4119-a019-6ab7098ee4ad_disk">
Dec 13 03:42:47 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:42:47 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config">
Dec 13 03:42:47 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:42:47 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/console.log" append="off"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:42:47 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:42:47 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:42:47 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:42:47 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 03:42:47 np0005558241 nova_compute[248510]: 2025-12-13 08:42:47.152 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 03:42:47 np0005558241 nova_compute[248510]: 2025-12-13 08:42:47.153 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 03:42:47 np0005558241 nova_compute[248510]: 2025-12-13 08:42:47.154 248514 INFO nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Using config drive
Dec 13 03:42:47 np0005558241 nova_compute[248510]: 2025-12-13 08:42:47.183 248514 DEBUG nova.storage.rbd_utils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:42:47 np0005558241 nova_compute[248510]: 2025-12-13 08:42:47.220 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:42:47 np0005558241 nova_compute[248510]: 2025-12-13 08:42:47.277 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'keypairs' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:42:47 np0005558241 nova_compute[248510]: 2025-12-13 08:42:47.480 248514 INFO nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Creating config drive at /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config
Dec 13 03:42:47 np0005558241 nova_compute[248510]: 2025-12-13 08:42:47.486 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4kcn1t6n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:42:47 np0005558241 nova_compute[248510]: 2025-12-13 08:42:47.630 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4kcn1t6n" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:42:47 np0005558241 nova_compute[248510]: 2025-12-13 08:42:47.652 248514 DEBUG nova.storage.rbd_utils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] rbd image 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:42:47 np0005558241 nova_compute[248510]: 2025-12-13 08:42:47.656 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:42:47 np0005558241 nova_compute[248510]: 2025-12-13 08:42:47.803 248514 DEBUG oslo_concurrency.processutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config 4edf7378-73e8-4119-a019-6ab7098ee4ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:42:47 np0005558241 nova_compute[248510]: 2025-12-13 08:42:47.806 248514 INFO nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Deleting local config drive /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad/disk.config because it was imported into RBD.
Dec 13 03:42:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2401: 321 pgs: 321 active+clean; 204 MiB data, 767 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 126 KiB/s wr, 135 op/s
Dec 13 03:42:47 np0005558241 systemd-machined[210538]: New machine qemu-124-instance-00000063.
Dec 13 03:42:47 np0005558241 systemd[1]: Started Virtual Machine qemu-124-instance-00000063.
Dec 13 03:42:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.568 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.593 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 4edf7378-73e8-4119-a019-6ab7098ee4ad due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.594 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615368.5933955, 4edf7378-73e8-4119-a019-6ab7098ee4ad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.594 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] VM Resumed (Lifecycle Event)
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.597 248514 DEBUG nova.compute.manager [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.597 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.601 248514 INFO nova.virt.libvirt.driver [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance spawned successfully.
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.602 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.632 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.636 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.637 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.637 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.637 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.638 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.638 248514 DEBUG nova.virt.libvirt.driver [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.642 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.679 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.679 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615368.5968769, 4edf7378-73e8-4119-a019-6ab7098ee4ad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.680 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] VM Started (Lifecycle Event)
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.714 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.718 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.740 248514 DEBUG nova.compute.manager [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.753 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.840 248514 DEBUG oslo_concurrency.lockutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.841 248514 DEBUG oslo_concurrency.lockutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.842 248514 DEBUG nova.objects.instance [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 13 03:42:48 np0005558241 nova_compute[248510]: 2025-12-13 08:42:48.986 248514 DEBUG oslo_concurrency.lockutils [None req-6496babc-964a-450d-84f4-e1e5a4448a2d e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:42:49 np0005558241 nova_compute[248510]: 2025-12-13 08:42:49.465 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "4edf7378-73e8-4119-a019-6ab7098ee4ad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:42:49 np0005558241 nova_compute[248510]: 2025-12-13 08:42:49.466 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "4edf7378-73e8-4119-a019-6ab7098ee4ad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:42:49 np0005558241 nova_compute[248510]: 2025-12-13 08:42:49.466 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "4edf7378-73e8-4119-a019-6ab7098ee4ad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:42:49 np0005558241 nova_compute[248510]: 2025-12-13 08:42:49.466 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "4edf7378-73e8-4119-a019-6ab7098ee4ad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:42:49 np0005558241 nova_compute[248510]: 2025-12-13 08:42:49.467 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "4edf7378-73e8-4119-a019-6ab7098ee4ad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:42:49 np0005558241 nova_compute[248510]: 2025-12-13 08:42:49.468 248514 INFO nova.compute.manager [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Terminating instance
Dec 13 03:42:49 np0005558241 nova_compute[248510]: 2025-12-13 08:42:49.469 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "refresh_cache-4edf7378-73e8-4119-a019-6ab7098ee4ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:42:49 np0005558241 nova_compute[248510]: 2025-12-13 08:42:49.470 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquired lock "refresh_cache-4edf7378-73e8-4119-a019-6ab7098ee4ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:42:49 np0005558241 nova_compute[248510]: 2025-12-13 08:42:49.470 248514 DEBUG nova.network.neutron [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:42:49 np0005558241 nova_compute[248510]: 2025-12-13 08:42:49.647 248514 DEBUG nova.network.neutron [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:42:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2402: 321 pgs: 321 active+clean; 146 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 199 op/s
Dec 13 03:42:49 np0005558241 nova_compute[248510]: 2025-12-13 08:42:49.928 248514 DEBUG nova.network.neutron [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:42:49 np0005558241 nova_compute[248510]: 2025-12-13 08:42:49.950 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Releasing lock "refresh_cache-4edf7378-73e8-4119-a019-6ab7098ee4ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:42:49 np0005558241 nova_compute[248510]: 2025-12-13 08:42:49.951 248514 DEBUG nova.compute.manager [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 03:42:50 np0005558241 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000063.scope: Deactivated successfully.
Dec 13 03:42:50 np0005558241 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000063.scope: Consumed 2.148s CPU time.
Dec 13 03:42:50 np0005558241 systemd-machined[210538]: Machine qemu-124-instance-00000063 terminated.
Dec 13 03:42:50 np0005558241 nova_compute[248510]: 2025-12-13 08:42:50.172 248514 INFO nova.virt.libvirt.driver [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance destroyed successfully.
Dec 13 03:42:50 np0005558241 nova_compute[248510]: 2025-12-13 08:42:50.172 248514 DEBUG nova.objects.instance [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lazy-loading 'resources' on Instance uuid 4edf7378-73e8-4119-a019-6ab7098ee4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:42:50 np0005558241 nova_compute[248510]: 2025-12-13 08:42:50.458 248514 INFO nova.virt.libvirt.driver [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Deleting instance files /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad_del
Dec 13 03:42:50 np0005558241 nova_compute[248510]: 2025-12-13 08:42:50.459 248514 INFO nova.virt.libvirt.driver [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Deletion of /var/lib/nova/instances/4edf7378-73e8-4119-a019-6ab7098ee4ad_del complete
Dec 13 03:42:50 np0005558241 nova_compute[248510]: 2025-12-13 08:42:50.527 248514 INFO nova.compute.manager [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Took 0.57 seconds to destroy the instance on the hypervisor.
Dec 13 03:42:50 np0005558241 nova_compute[248510]: 2025-12-13 08:42:50.528 248514 DEBUG oslo.service.loopingcall [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 03:42:50 np0005558241 nova_compute[248510]: 2025-12-13 08:42:50.529 248514 DEBUG nova.compute.manager [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 03:42:50 np0005558241 nova_compute[248510]: 2025-12-13 08:42:50.529 248514 DEBUG nova.network.neutron [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 03:42:50 np0005558241 nova_compute[248510]: 2025-12-13 08:42:50.722 248514 DEBUG nova.network.neutron [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:42:50 np0005558241 nova_compute[248510]: 2025-12-13 08:42:50.739 248514 DEBUG nova.network.neutron [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:42:50 np0005558241 nova_compute[248510]: 2025-12-13 08:42:50.764 248514 INFO nova.compute.manager [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Took 0.23 seconds to deallocate network for instance.
Dec 13 03:42:50 np0005558241 nova_compute[248510]: 2025-12-13 08:42:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:42:50 np0005558241 nova_compute[248510]: 2025-12-13 08:42:50.816 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:42:50 np0005558241 nova_compute[248510]: 2025-12-13 08:42:50.817 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:42:50 np0005558241 nova_compute[248510]: 2025-12-13 08:42:50.931 248514 DEBUG oslo_concurrency.processutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:42:51 np0005558241 nova_compute[248510]: 2025-12-13 08:42:51.228 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:42:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:42:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/962436280' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:42:51 np0005558241 nova_compute[248510]: 2025-12-13 08:42:51.499 248514 DEBUG oslo_concurrency.processutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:42:51 np0005558241 nova_compute[248510]: 2025-12-13 08:42:51.506 248514 DEBUG nova.compute.provider_tree [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:42:51 np0005558241 nova_compute[248510]: 2025-12-13 08:42:51.538 248514 DEBUG nova.scheduler.client.report [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:42:51 np0005558241 nova_compute[248510]: 2025-12-13 08:42:51.622 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:42:51 np0005558241 nova_compute[248510]: 2025-12-13 08:42:51.659 248514 INFO nova.scheduler.client.report [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Deleted allocations for instance 4edf7378-73e8-4119-a019-6ab7098ee4ad
Dec 13 03:42:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2403: 321 pgs: 321 active+clean; 167 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Dec 13 03:42:51 np0005558241 nova_compute[248510]: 2025-12-13 08:42:51.931 248514 DEBUG oslo_concurrency.lockutils [None req-d884c40e-551c-467d-b962-227cc807503a e382ad6b2fdc41398bfa58dbb651d4be 6d41fdecfe334aaeaaba54b9c6eaeb00 - - default default] Lock "4edf7378-73e8-4119-a019-6ab7098ee4ad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:42:52 np0005558241 nova_compute[248510]: 2025-12-13 08:42:52.569 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "47b7be19-6608-45b4-9f0e-74393969e3f4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:42:52 np0005558241 nova_compute[248510]: 2025-12-13 08:42:52.569 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:42:52 np0005558241 nova_compute[248510]: 2025-12-13 08:42:52.618 248514 DEBUG nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:42:52 np0005558241 nova_compute[248510]: 2025-12-13 08:42:52.703 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:42:52 np0005558241 nova_compute[248510]: 2025-12-13 08:42:52.704 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:42:52 np0005558241 nova_compute[248510]: 2025-12-13 08:42:52.711 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:42:52 np0005558241 nova_compute[248510]: 2025-12-13 08:42:52.712 248514 INFO nova.compute.claims [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:42:52 np0005558241 nova_compute[248510]: 2025-12-13 08:42:52.935 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:42:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:42:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:53.431 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 03:42:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:53.432 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 03:42:53 np0005558241 nova_compute[248510]: 2025-12-13 08:42:53.432 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:42:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:42:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1438340591' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:42:53 np0005558241 nova_compute[248510]: 2025-12-13 08:42:53.520 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:42:53 np0005558241 nova_compute[248510]: 2025-12-13 08:42:53.526 248514 DEBUG nova.compute.provider_tree [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:42:53 np0005558241 nova_compute[248510]: 2025-12-13 08:42:53.561 248514 DEBUG nova.scheduler.client.report [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:42:53 np0005558241 nova_compute[248510]: 2025-12-13 08:42:53.570 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:42:53 np0005558241 nova_compute[248510]: 2025-12-13 08:42:53.644 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:42:53 np0005558241 nova_compute[248510]: 2025-12-13 08:42:53.645 248514 DEBUG nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:42:53 np0005558241 nova_compute[248510]: 2025-12-13 08:42:53.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:42:53 np0005558241 nova_compute[248510]: 2025-12-13 08:42:53.779 248514 DEBUG nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:42:53 np0005558241 nova_compute[248510]: 2025-12-13 08:42:53.780 248514 DEBUG nova.network.neutron [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:42:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2404: 321 pgs: 321 active+clean; 153 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.159 248514 INFO nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.228 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615359.2278087, 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.229 248514 INFO nova.compute.manager [-] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] VM Stopped (Lifecycle Event)
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.241 248514 DEBUG nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.272 248514 DEBUG nova.compute.manager [None req-f95517ad-de54-41f3-acc0-cbc72a6366e4 - - - - - -] [instance: 6c1cce36-8ed9-43dd-bdb9-f9526ab49f0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:42:54 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.389 248514 DEBUG nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.391 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.391 248514 INFO nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Creating image(s)
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.421 248514 DEBUG nova.storage.rbd_utils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 47b7be19-6608-45b4-9f0e-74393969e3f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.446 248514 DEBUG nova.storage.rbd_utils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 47b7be19-6608-45b4-9f0e-74393969e3f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.467 248514 DEBUG nova.storage.rbd_utils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 47b7be19-6608-45b4-9f0e-74393969e3f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.471 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.507 248514 DEBUG nova.policy [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3fcdbbf0ede24d3e820e927d9136712b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.541 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.542 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.543 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.543 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.565 248514 DEBUG nova.storage.rbd_utils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 47b7be19-6608-45b4-9f0e-74393969e3f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.568 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 47b7be19-6608-45b4-9f0e-74393969e3f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.871 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 47b7be19-6608-45b4-9f0e-74393969e3f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:42:54 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.932 248514 DEBUG nova.storage.rbd_utils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] resizing rbd image 47b7be19-6608-45b4-9f0e-74393969e3f4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:42:55 np0005558241 nova_compute[248510]: 2025-12-13 08:42:54.999 248514 DEBUG nova.objects.instance [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'migration_context' on Instance uuid 47b7be19-6608-45b4-9f0e-74393969e3f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:42:55 np0005558241 nova_compute[248510]: 2025-12-13 08:42:55.101 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:42:55 np0005558241 nova_compute[248510]: 2025-12-13 08:42:55.101 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Ensure instance console log exists: /var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:42:55 np0005558241 nova_compute[248510]: 2025-12-13 08:42:55.102 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:42:55 np0005558241 nova_compute[248510]: 2025-12-13 08:42:55.102 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:42:55 np0005558241 nova_compute[248510]: 2025-12-13 08:42:55.103 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:42:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:55.422 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:42:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:55.422 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:42:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:42:55.423 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:42:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2405: 321 pgs: 321 active+clean; 124 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 195 op/s
Dec 13 03:42:56 np0005558241 nova_compute[248510]: 2025-12-13 08:42:56.233 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:42:56 np0005558241 nova_compute[248510]: 2025-12-13 08:42:56.618 248514 DEBUG nova.network.neutron [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Successfully created port: ab92e3fe-6177-4ba7-962b-08377c7056ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 03:42:57 np0005558241 nova_compute[248510]: 2025-12-13 08:42:57.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:42:57 np0005558241 nova_compute[248510]: 2025-12-13 08:42:57.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 03:42:57 np0005558241 nova_compute[248510]: 2025-12-13 08:42:57.809 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 03:42:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2406: 321 pgs: 321 active+clean; 124 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 171 op/s
Dec 13 03:42:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:42:58 np0005558241 nova_compute[248510]: 2025-12-13 08:42:58.486 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615363.4846647, 2e9e6711-892f-4278-b911-bfacbff9b48e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:42:58 np0005558241 nova_compute[248510]: 2025-12-13 08:42:58.487 248514 INFO nova.compute.manager [-] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] VM Stopped (Lifecycle Event)
Dec 13 03:42:58 np0005558241 nova_compute[248510]: 2025-12-13 08:42:58.574 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:42:58 np0005558241 nova_compute[248510]: 2025-12-13 08:42:58.670 248514 DEBUG nova.compute.manager [None req-89e29a84-a949-4f03-ae05-1f772516bbb1 - - - - - -] [instance: 2e9e6711-892f-4278-b911-bfacbff9b48e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:42:58 np0005558241 nova_compute[248510]: 2025-12-13 08:42:58.697 248514 DEBUG nova.network.neutron [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Successfully updated port: ab92e3fe-6177-4ba7-962b-08377c7056ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 03:42:58 np0005558241 nova_compute[248510]: 2025-12-13 08:42:58.743 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "refresh_cache-47b7be19-6608-45b4-9f0e-74393969e3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:42:58 np0005558241 nova_compute[248510]: 2025-12-13 08:42:58.743 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquired lock "refresh_cache-47b7be19-6608-45b4-9f0e-74393969e3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:42:58 np0005558241 nova_compute[248510]: 2025-12-13 08:42:58.743 248514 DEBUG nova.network.neutron [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:42:58 np0005558241 nova_compute[248510]: 2025-12-13 08:42:58.914 248514 DEBUG nova.compute.manager [req-83076adc-844e-4b51-a773-dc953e9dd5da req-ac2b20bd-c3e7-4495-b6b9-61e2cceca533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Received event network-changed-ab92e3fe-6177-4ba7-962b-08377c7056ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:42:58 np0005558241 nova_compute[248510]: 2025-12-13 08:42:58.915 248514 DEBUG nova.compute.manager [req-83076adc-844e-4b51-a773-dc953e9dd5da req-ac2b20bd-c3e7-4495-b6b9-61e2cceca533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Refreshing instance network info cache due to event network-changed-ab92e3fe-6177-4ba7-962b-08377c7056ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 03:42:58 np0005558241 nova_compute[248510]: 2025-12-13 08:42:58.915 248514 DEBUG oslo_concurrency.lockutils [req-83076adc-844e-4b51-a773-dc953e9dd5da req-ac2b20bd-c3e7-4495-b6b9-61e2cceca533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-47b7be19-6608-45b4-9f0e-74393969e3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:42:59 np0005558241 nova_compute[248510]: 2025-12-13 08:42:59.235 248514 DEBUG nova.network.neutron [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:42:59 np0005558241 nova_compute[248510]: 2025-12-13 08:42:59.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:42:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2407: 321 pgs: 321 active+clean; 167 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 187 op/s
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.097 248514 DEBUG nova.network.neutron [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Updating instance_info_cache with network_info: [{"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.170 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Releasing lock "refresh_cache-47b7be19-6608-45b4-9f0e-74393969e3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.171 248514 DEBUG nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Instance network_info: |[{"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.173 248514 DEBUG oslo_concurrency.lockutils [req-83076adc-844e-4b51-a773-dc953e9dd5da req-ac2b20bd-c3e7-4495-b6b9-61e2cceca533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-47b7be19-6608-45b4-9f0e-74393969e3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.173 248514 DEBUG nova.network.neutron [req-83076adc-844e-4b51-a773-dc953e9dd5da req-ac2b20bd-c3e7-4495-b6b9-61e2cceca533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Refreshing network info cache for port ab92e3fe-6177-4ba7-962b-08377c7056ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.177 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Start _get_guest_xml network_info=[{"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.182 248514 WARNING nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.188 248514 DEBUG nova.virt.libvirt.host [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.189 248514 DEBUG nova.virt.libvirt.host [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.195 248514 DEBUG nova.virt.libvirt.host [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.195 248514 DEBUG nova.virt.libvirt.host [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.195 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.196 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.196 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.196 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.197 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.197 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.197 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.198 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.198 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.198 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.198 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.199 248514 DEBUG nova.virt.hardware [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.201 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.268 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:43:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3708720613' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.756 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.786 248514 DEBUG nova.storage.rbd_utils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 47b7be19-6608-45b4-9f0e-74393969e3f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.792 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:43:01 np0005558241 nova_compute[248510]: 2025-12-13 08:43:01.839 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:43:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2408: 321 pgs: 321 active+clean; 167 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 123 op/s
Dec 13 03:43:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:43:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1683537550' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.365 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.367 248514 DEBUG nova.virt.libvirt.vif [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:42:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-173101990',display_name='tempest-ServersTestJSON-server-173101990',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-173101990',id=101,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-l0nko7x7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:42:54Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=47b7be19-6608-45b4-9f0e-74393969e3f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.368 248514 DEBUG nova.network.os_vif_util [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.369 248514 DEBUG nova.network.os_vif_util [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4e:c6,bridge_name='br-int',has_traffic_filtering=True,id=ab92e3fe-6177-4ba7-962b-08377c7056ad,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92e3fe-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.371 248514 DEBUG nova.objects.instance [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'pci_devices' on Instance uuid 47b7be19-6608-45b4-9f0e-74393969e3f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.402 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:43:02 np0005558241 nova_compute[248510]:  <uuid>47b7be19-6608-45b4-9f0e-74393969e3f4</uuid>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:  <name>instance-00000065</name>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersTestJSON-server-173101990</nova:name>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:43:01</nova:creationTime>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:43:02 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:        <nova:user uuid="3fcdbbf0ede24d3e820e927d9136712b">tempest-ServersTestJSON-1443479207-project-member</nova:user>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:        <nova:project uuid="3fb4f27cdc7f468faa36b4e7a461d7be">tempest-ServersTestJSON-1443479207</nova:project>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:        <nova:port uuid="ab92e3fe-6177-4ba7-962b-08377c7056ad">
Dec 13 03:43:02 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <entry name="serial">47b7be19-6608-45b4-9f0e-74393969e3f4</entry>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <entry name="uuid">47b7be19-6608-45b4-9f0e-74393969e3f4</entry>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/47b7be19-6608-45b4-9f0e-74393969e3f4_disk">
Dec 13 03:43:02 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:43:02 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/47b7be19-6608-45b4-9f0e-74393969e3f4_disk.config">
Dec 13 03:43:02 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:43:02 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:aa:4e:c6"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <target dev="tapab92e3fe-61"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4/console.log" append="off"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:43:02 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:43:02 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:43:02 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:43:02 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.404 248514 DEBUG nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Preparing to wait for external event network-vif-plugged-ab92e3fe-6177-4ba7-962b-08377c7056ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.404 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.405 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.405 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.406 248514 DEBUG nova.virt.libvirt.vif [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:42:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-173101990',display_name='tempest-ServersTestJSON-server-173101990',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-173101990',id=101,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-l0nko7x7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:42:54Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=47b7be19-6608-45b4-9f0e-74393969e3f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.406 248514 DEBUG nova.network.os_vif_util [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.407 248514 DEBUG nova.network.os_vif_util [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4e:c6,bridge_name='br-int',has_traffic_filtering=True,id=ab92e3fe-6177-4ba7-962b-08377c7056ad,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92e3fe-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.407 248514 DEBUG os_vif [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4e:c6,bridge_name='br-int',has_traffic_filtering=True,id=ab92e3fe-6177-4ba7-962b-08377c7056ad,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92e3fe-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.407 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.408 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.408 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.411 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.411 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab92e3fe-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.411 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapab92e3fe-61, col_values=(('external_ids', {'iface-id': 'ab92e3fe-6177-4ba7-962b-08377c7056ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:4e:c6', 'vm-uuid': '47b7be19-6608-45b4-9f0e-74393969e3f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.466 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:02 np0005558241 NetworkManager[50376]: <info>  [1765615382.4667] manager: (tapab92e3fe-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.469 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.471 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.472 248514 INFO os_vif [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4e:c6,bridge_name='br-int',has_traffic_filtering=True,id=ab92e3fe-6177-4ba7-962b-08377c7056ad,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92e3fe-61')#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.755 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.756 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.756 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No VIF found with MAC fa:16:3e:aa:4e:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.757 248514 INFO nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Using config drive#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.779 248514 DEBUG nova.storage.rbd_utils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 47b7be19-6608-45b4-9f0e-74393969e3f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.785 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.914 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.915 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.916 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.916 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:43:02 np0005558241 nova_compute[248510]: 2025-12-13 08:43:02.916 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:43:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:43:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:03.434 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:43:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:43:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814005455' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:43:03 np0005558241 nova_compute[248510]: 2025-12-13 08:43:03.501 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:43:03 np0005558241 nova_compute[248510]: 2025-12-13 08:43:03.586 248514 INFO nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Creating config drive at /var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4/disk.config#033[00m
Dec 13 03:43:03 np0005558241 nova_compute[248510]: 2025-12-13 08:43:03.591 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf73n2v4t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:43:03 np0005558241 nova_compute[248510]: 2025-12-13 08:43:03.732 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf73n2v4t" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:43:03 np0005558241 nova_compute[248510]: 2025-12-13 08:43:03.765 248514 DEBUG nova.storage.rbd_utils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image 47b7be19-6608-45b4-9f0e-74393969e3f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:43:03 np0005558241 nova_compute[248510]: 2025-12-13 08:43:03.769 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4/disk.config 47b7be19-6608-45b4-9f0e-74393969e3f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:43:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2409: 321 pgs: 321 active+clean; 167 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Dec 13 03:43:03 np0005558241 nova_compute[248510]: 2025-12-13 08:43:03.900 248514 DEBUG oslo_concurrency.processutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4/disk.config 47b7be19-6608-45b4-9f0e-74393969e3f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:43:03 np0005558241 nova_compute[248510]: 2025-12-13 08:43:03.901 248514 INFO nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Deleting local config drive /var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4/disk.config because it was imported into RBD.#033[00m
Dec 13 03:43:03 np0005558241 kernel: tapab92e3fe-61: entered promiscuous mode
Dec 13 03:43:03 np0005558241 nova_compute[248510]: 2025-12-13 08:43:03.948 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:43:03Z|00966|binding|INFO|Claiming lport ab92e3fe-6177-4ba7-962b-08377c7056ad for this chassis.
Dec 13 03:43:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:43:03Z|00967|binding|INFO|ab92e3fe-6177-4ba7-962b-08377c7056ad: Claiming fa:16:3e:aa:4e:c6 10.100.0.6
Dec 13 03:43:03 np0005558241 NetworkManager[50376]: <info>  [1765615383.9492] manager: (tapab92e3fe-61): new Tun device (/org/freedesktop/NetworkManager/Devices/403)
Dec 13 03:43:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:43:03Z|00968|binding|INFO|Setting lport ab92e3fe-6177-4ba7-962b-08377c7056ad ovn-installed in OVS
Dec 13 03:43:03 np0005558241 nova_compute[248510]: 2025-12-13 08:43:03.964 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:03 np0005558241 nova_compute[248510]: 2025-12-13 08:43:03.965 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:03 np0005558241 systemd-udevd[345205]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:43:03 np0005558241 systemd-machined[210538]: New machine qemu-125-instance-00000065.
Dec 13 03:43:03 np0005558241 NetworkManager[50376]: <info>  [1765615383.9835] device (tapab92e3fe-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:43:03 np0005558241 NetworkManager[50376]: <info>  [1765615383.9844] device (tapab92e3fe-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:43:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:43:03Z|00969|binding|INFO|Setting lport ab92e3fe-6177-4ba7-962b-08377c7056ad up in Southbound
Dec 13 03:43:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:03.986 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:4e:c6 10.100.0.6'], port_security=['fa:16:3e:aa:4e:c6 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '47b7be19-6608-45b4-9f0e-74393969e3f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ab92e3fe-6177-4ba7-962b-08377c7056ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:43:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:03.987 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ab92e3fe-6177-4ba7-962b-08377c7056ad in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 bound to our chassis#033[00m
Dec 13 03:43:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:03.989 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3#033[00m
Dec 13 03:43:03 np0005558241 systemd[1]: Started Virtual Machine qemu-125-instance-00000065.
Dec 13 03:43:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.004 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8b59ef43-c26b-4c23-b746-996c084f2499]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.010 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.010 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:43:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.031 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[633a455f-39f2-4949-9c83-b62e1575e0e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:43:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.034 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[eb0f88e6-b08f-4850-b53c-d397ad3d0d74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.052 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.053 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:43:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.062 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f18fb8-5097-4c8d-a2a4-13d8dbb5124e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:43:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.081 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f96e8e69-2f63-4061-9ffc-f4b3630c5156]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345219, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:43:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.096 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d121803b-17d0-4d2a-8ed1-084185e56b16]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345221, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345221, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:43:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.099 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.100 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.101 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:43:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.102 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:43:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.102 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:43:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:04.103 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.183 248514 DEBUG nova.network.neutron [req-83076adc-844e-4b51-a773-dc953e9dd5da req-ac2b20bd-c3e7-4495-b6b9-61e2cceca533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Updated VIF entry in instance network info cache for port ab92e3fe-6177-4ba7-962b-08377c7056ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.183 248514 DEBUG nova.network.neutron [req-83076adc-844e-4b51-a773-dc953e9dd5da req-ac2b20bd-c3e7-4495-b6b9-61e2cceca533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Updating instance_info_cache with network_info: [{"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.217 248514 DEBUG oslo_concurrency.lockutils [req-83076adc-844e-4b51-a773-dc953e9dd5da req-ac2b20bd-c3e7-4495-b6b9-61e2cceca533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-47b7be19-6608-45b4-9f0e-74393969e3f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.225 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.226 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3499MB free_disk=59.92131443321705GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.226 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.227 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.584 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615384.5835812, 47b7be19-6608-45b4-9f0e-74393969e3f4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.584 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] VM Started (Lifecycle Event)#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.719 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.723 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615384.5846167, 47b7be19-6608-45b4-9f0e-74393969e3f4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.723 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.763 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.764 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 47b7be19-6608-45b4-9f0e-74393969e3f4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.764 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.764 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.829 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.832 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.856 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:43:04 np0005558241 nova_compute[248510]: 2025-12-13 08:43:04.891 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.106 248514 DEBUG nova.compute.manager [req-c6e52120-a7c0-418a-9734-5bd6faca1644 req-b5eb0e8f-a0e1-42de-bdfd-7e4c7def2611 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Received event network-vif-plugged-ab92e3fe-6177-4ba7-962b-08377c7056ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.106 248514 DEBUG oslo_concurrency.lockutils [req-c6e52120-a7c0-418a-9734-5bd6faca1644 req-b5eb0e8f-a0e1-42de-bdfd-7e4c7def2611 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.106 248514 DEBUG oslo_concurrency.lockutils [req-c6e52120-a7c0-418a-9734-5bd6faca1644 req-b5eb0e8f-a0e1-42de-bdfd-7e4c7def2611 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.107 248514 DEBUG oslo_concurrency.lockutils [req-c6e52120-a7c0-418a-9734-5bd6faca1644 req-b5eb0e8f-a0e1-42de-bdfd-7e4c7def2611 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.107 248514 DEBUG nova.compute.manager [req-c6e52120-a7c0-418a-9734-5bd6faca1644 req-b5eb0e8f-a0e1-42de-bdfd-7e4c7def2611 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Processing event network-vif-plugged-ab92e3fe-6177-4ba7-962b-08377c7056ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.108 248514 DEBUG nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.127 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615385.1117122, 47b7be19-6608-45b4-9f0e-74393969e3f4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.128 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.130 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.152 248514 INFO nova.virt.libvirt.driver [-] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Instance spawned successfully.#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.153 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.170 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615370.1698139, 4edf7378-73e8-4119-a019-6ab7098ee4ad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.171 248514 INFO nova.compute.manager [-] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:43:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:43:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1756913723' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.404 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.404 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.405 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.405 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.406 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.406 248514 DEBUG nova.virt.libvirt.driver [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.409 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.411 248514 DEBUG nova.compute.manager [None req-7c3baadc-27e3-44f3-8111-d1a796d054a6 - - - - - -] [instance: 4edf7378-73e8-4119-a019-6ab7098ee4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.411 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.418 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.420 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.456 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.464 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.495 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.496 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.506 248514 INFO nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Took 11.12 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.506 248514 DEBUG nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.590 248514 INFO nova.compute.manager [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Took 12.91 seconds to build instance.#033[00m
Dec 13 03:43:05 np0005558241 nova_compute[248510]: 2025-12-13 08:43:05.644 248514 DEBUG oslo_concurrency.lockutils [None req-ed7df3b6-4cfa-4dca-b720-fb3d305dcb98 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:43:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2410: 321 pgs: 321 active+clean; 167 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 694 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Dec 13 03:43:06 np0005558241 nova_compute[248510]: 2025-12-13 08:43:06.302 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:06 np0005558241 nova_compute[248510]: 2025-12-13 08:43:06.483 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:43:06 np0005558241 nova_compute[248510]: 2025-12-13 08:43:06.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:43:06 np0005558241 nova_compute[248510]: 2025-12-13 08:43:06.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:43:07 np0005558241 nova_compute[248510]: 2025-12-13 08:43:07.248 248514 DEBUG nova.compute.manager [req-9d29abce-9243-43e8-8fe0-62747ec47383 req-7dcd9ac5-54ed-4196-90e5-700e5702c18f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Received event network-vif-plugged-ab92e3fe-6177-4ba7-962b-08377c7056ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:43:07 np0005558241 nova_compute[248510]: 2025-12-13 08:43:07.249 248514 DEBUG oslo_concurrency.lockutils [req-9d29abce-9243-43e8-8fe0-62747ec47383 req-7dcd9ac5-54ed-4196-90e5-700e5702c18f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:43:07 np0005558241 nova_compute[248510]: 2025-12-13 08:43:07.249 248514 DEBUG oslo_concurrency.lockutils [req-9d29abce-9243-43e8-8fe0-62747ec47383 req-7dcd9ac5-54ed-4196-90e5-700e5702c18f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:43:07 np0005558241 nova_compute[248510]: 2025-12-13 08:43:07.249 248514 DEBUG oslo_concurrency.lockutils [req-9d29abce-9243-43e8-8fe0-62747ec47383 req-7dcd9ac5-54ed-4196-90e5-700e5702c18f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:43:07 np0005558241 nova_compute[248510]: 2025-12-13 08:43:07.250 248514 DEBUG nova.compute.manager [req-9d29abce-9243-43e8-8fe0-62747ec47383 req-7dcd9ac5-54ed-4196-90e5-700e5702c18f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] No waiting events found dispatching network-vif-plugged-ab92e3fe-6177-4ba7-962b-08377c7056ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:43:07 np0005558241 nova_compute[248510]: 2025-12-13 08:43:07.250 248514 WARNING nova.compute.manager [req-9d29abce-9243-43e8-8fe0-62747ec47383 req-7dcd9ac5-54ed-4196-90e5-700e5702c18f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Received unexpected event network-vif-plugged-ab92e3fe-6177-4ba7-962b-08377c7056ad for instance with vm_state active and task_state None.#033[00m
Dec 13 03:43:07 np0005558241 nova_compute[248510]: 2025-12-13 08:43:07.519 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2411: 321 pgs: 321 active+clean; 167 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 22 op/s
Dec 13 03:43:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:43:08 np0005558241 podman[345287]: 2025-12-13 08:43:08.980612098 +0000 UTC m=+0.068868657 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:43:09 np0005558241 podman[345286]: 2025-12-13 08:43:09.014059785 +0000 UTC m=+0.102213292 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:43:09 np0005558241 podman[345288]: 2025-12-13 08:43:09.019905138 +0000 UTC m=+0.096116482 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Dec 13 03:43:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:43:09
Dec 13 03:43:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:43:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:43:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'backups', 'volumes', '.mgr']
Dec 13 03:43:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:43:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2412: 321 pgs: 321 active+clean; 167 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Dec 13 03:43:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:43:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:43:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:43:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:43:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:43:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:43:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:43:11 np0005558241 nova_compute[248510]: 2025-12-13 08:43:11.318 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2413: 321 pgs: 321 active+clean; 167 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.480 248514 DEBUG oslo_concurrency.lockutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "47b7be19-6608-45b4-9f0e-74393969e3f4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.480 248514 DEBUG oslo_concurrency.lockutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.481 248514 DEBUG oslo_concurrency.lockutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.481 248514 DEBUG oslo_concurrency.lockutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.481 248514 DEBUG oslo_concurrency.lockutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.482 248514 INFO nova.compute.manager [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Terminating instance#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.483 248514 DEBUG nova.compute.manager [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:43:12 np0005558241 kernel: tapab92e3fe-61 (unregistering): left promiscuous mode
Dec 13 03:43:12 np0005558241 NetworkManager[50376]: <info>  [1765615392.5104] device (tapab92e3fe-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.518 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:12 np0005558241 ovn_controller[148476]: 2025-12-13T08:43:12Z|00970|binding|INFO|Releasing lport ab92e3fe-6177-4ba7-962b-08377c7056ad from this chassis (sb_readonly=0)
Dec 13 03:43:12 np0005558241 ovn_controller[148476]: 2025-12-13T08:43:12Z|00971|binding|INFO|Setting lport ab92e3fe-6177-4ba7-962b-08377c7056ad down in Southbound
Dec 13 03:43:12 np0005558241 ovn_controller[148476]: 2025-12-13T08:43:12Z|00972|binding|INFO|Removing iface tapab92e3fe-61 ovn-installed in OVS
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.520 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.521 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.535 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:4e:c6 10.100.0.6'], port_security=['fa:16:3e:aa:4e:c6 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '47b7be19-6608-45b4-9f0e-74393969e3f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=ab92e3fe-6177-4ba7-962b-08377c7056ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:43:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.536 158419 INFO neutron.agent.ovn.metadata.agent [-] Port ab92e3fe-6177-4ba7-962b-08377c7056ad in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 unbound from our chassis#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.537 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.538 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3#033[00m
Dec 13 03:43:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.555 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f7684574-0b4d-4b89-80a6-045309fcbc50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:43:12 np0005558241 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000065.scope: Deactivated successfully.
Dec 13 03:43:12 np0005558241 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000065.scope: Consumed 8.041s CPU time.
Dec 13 03:43:12 np0005558241 systemd-machined[210538]: Machine qemu-125-instance-00000065 terminated.
Dec 13 03:43:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.583 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[17f4a8be-c7b4-4f83-af55-358809a36ea9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:43:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.586 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[258fbaec-e008-4d07-9bb8-dd71a0be1fbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:43:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.622 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[24aa6173-9295-405b-a6e0-0ba8c047424a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:43:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.639 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dad05a1d-0b9e-4050-812c-4a9e28cc29fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345361, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:43:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.657 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d044de69-0f40-4aaa-8a1c-1dd458ed13df]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345362, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345362, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:43:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.659 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.660 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.664 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.665 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:43:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.665 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:43:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.666 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:43:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:12.666 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.717 248514 INFO nova.virt.libvirt.driver [-] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Instance destroyed successfully.#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.718 248514 DEBUG nova.objects.instance [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'resources' on Instance uuid 47b7be19-6608-45b4-9f0e-74393969e3f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.737 248514 DEBUG nova.virt.libvirt.vif [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:42:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-173101990',display_name='tempest-ServersTestJSON-server-173101990',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-173101990',id=101,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:43:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-l0nko7x7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_dis
k='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:43:09Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=47b7be19-6608-45b4-9f0e-74393969e3f4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.738 248514 DEBUG nova.network.os_vif_util [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "address": "fa:16:3e:aa:4e:c6", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92e3fe-61", "ovs_interfaceid": "ab92e3fe-6177-4ba7-962b-08377c7056ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.739 248514 DEBUG nova.network.os_vif_util [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4e:c6,bridge_name='br-int',has_traffic_filtering=True,id=ab92e3fe-6177-4ba7-962b-08377c7056ad,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92e3fe-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.739 248514 DEBUG os_vif [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4e:c6,bridge_name='br-int',has_traffic_filtering=True,id=ab92e3fe-6177-4ba7-962b-08377c7056ad,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92e3fe-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.740 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.741 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab92e3fe-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.743 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.745 248514 INFO os_vif [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4e:c6,bridge_name='br-int',has_traffic_filtering=True,id=ab92e3fe-6177-4ba7-962b-08377c7056ad,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92e3fe-61')#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.959 248514 DEBUG nova.compute.manager [req-58da14c1-71ad-47d8-9631-17b3b7ddf2d8 req-43c80123-da2e-45af-b4ff-87ece7088b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Received event network-vif-unplugged-ab92e3fe-6177-4ba7-962b-08377c7056ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.959 248514 DEBUG oslo_concurrency.lockutils [req-58da14c1-71ad-47d8-9631-17b3b7ddf2d8 req-43c80123-da2e-45af-b4ff-87ece7088b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.960 248514 DEBUG oslo_concurrency.lockutils [req-58da14c1-71ad-47d8-9631-17b3b7ddf2d8 req-43c80123-da2e-45af-b4ff-87ece7088b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.960 248514 DEBUG oslo_concurrency.lockutils [req-58da14c1-71ad-47d8-9631-17b3b7ddf2d8 req-43c80123-da2e-45af-b4ff-87ece7088b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.960 248514 DEBUG nova.compute.manager [req-58da14c1-71ad-47d8-9631-17b3b7ddf2d8 req-43c80123-da2e-45af-b4ff-87ece7088b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] No waiting events found dispatching network-vif-unplugged-ab92e3fe-6177-4ba7-962b-08377c7056ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:43:12 np0005558241 nova_compute[248510]: 2025-12-13 08:43:12.960 248514 DEBUG nova.compute.manager [req-58da14c1-71ad-47d8-9631-17b3b7ddf2d8 req-43c80123-da2e-45af-b4ff-87ece7088b33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Received event network-vif-unplugged-ab92e3fe-6177-4ba7-962b-08377c7056ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:43:13 np0005558241 nova_compute[248510]: 2025-12-13 08:43:13.043 248514 INFO nova.virt.libvirt.driver [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Deleting instance files /var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4_del#033[00m
Dec 13 03:43:13 np0005558241 nova_compute[248510]: 2025-12-13 08:43:13.044 248514 INFO nova.virt.libvirt.driver [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Deletion of /var/lib/nova/instances/47b7be19-6608-45b4-9f0e-74393969e3f4_del complete#033[00m
Dec 13 03:43:13 np0005558241 nova_compute[248510]: 2025-12-13 08:43:13.117 248514 INFO nova.compute.manager [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Took 0.63 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:43:13 np0005558241 nova_compute[248510]: 2025-12-13 08:43:13.118 248514 DEBUG oslo.service.loopingcall [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:43:13 np0005558241 nova_compute[248510]: 2025-12-13 08:43:13.118 248514 DEBUG nova.compute.manager [-] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:43:13 np0005558241 nova_compute[248510]: 2025-12-13 08:43:13.119 248514 DEBUG nova.network.neutron [-] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:43:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:43:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2414: 321 pgs: 321 active+clean; 148 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 13 03:43:14 np0005558241 nova_compute[248510]: 2025-12-13 08:43:14.137 248514 DEBUG nova.network.neutron [-] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:43:14 np0005558241 nova_compute[248510]: 2025-12-13 08:43:14.524 248514 INFO nova.compute.manager [-] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Took 1.41 seconds to deallocate network for instance.
Dec 13 03:43:14 np0005558241 nova_compute[248510]: 2025-12-13 08:43:14.612 248514 DEBUG oslo_concurrency.lockutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:43:14 np0005558241 nova_compute[248510]: 2025-12-13 08:43:14.613 248514 DEBUG oslo_concurrency.lockutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:43:14 np0005558241 nova_compute[248510]: 2025-12-13 08:43:14.738 248514 DEBUG oslo_concurrency.processutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:43:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:43:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1741052178' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:43:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:43:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1741052178' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:43:15 np0005558241 nova_compute[248510]: 2025-12-13 08:43:15.205 248514 DEBUG nova.compute.manager [req-142a43cc-00d1-48ba-812e-db34455a1cc9 req-6d17ba69-ca54-4031-8efb-eac966309c23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Received event network-vif-plugged-ab92e3fe-6177-4ba7-962b-08377c7056ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:43:15 np0005558241 nova_compute[248510]: 2025-12-13 08:43:15.205 248514 DEBUG oslo_concurrency.lockutils [req-142a43cc-00d1-48ba-812e-db34455a1cc9 req-6d17ba69-ca54-4031-8efb-eac966309c23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:43:15 np0005558241 nova_compute[248510]: 2025-12-13 08:43:15.206 248514 DEBUG oslo_concurrency.lockutils [req-142a43cc-00d1-48ba-812e-db34455a1cc9 req-6d17ba69-ca54-4031-8efb-eac966309c23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:43:15 np0005558241 nova_compute[248510]: 2025-12-13 08:43:15.206 248514 DEBUG oslo_concurrency.lockutils [req-142a43cc-00d1-48ba-812e-db34455a1cc9 req-6d17ba69-ca54-4031-8efb-eac966309c23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:43:15 np0005558241 nova_compute[248510]: 2025-12-13 08:43:15.206 248514 DEBUG nova.compute.manager [req-142a43cc-00d1-48ba-812e-db34455a1cc9 req-6d17ba69-ca54-4031-8efb-eac966309c23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] No waiting events found dispatching network-vif-plugged-ab92e3fe-6177-4ba7-962b-08377c7056ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:43:15 np0005558241 nova_compute[248510]: 2025-12-13 08:43:15.206 248514 WARNING nova.compute.manager [req-142a43cc-00d1-48ba-812e-db34455a1cc9 req-6d17ba69-ca54-4031-8efb-eac966309c23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Received unexpected event network-vif-plugged-ab92e3fe-6177-4ba7-962b-08377c7056ad for instance with vm_state deleted and task_state None.
Dec 13 03:43:15 np0005558241 nova_compute[248510]: 2025-12-13 08:43:15.207 248514 DEBUG nova.compute.manager [req-142a43cc-00d1-48ba-812e-db34455a1cc9 req-6d17ba69-ca54-4031-8efb-eac966309c23 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Received event network-vif-deleted-ab92e3fe-6177-4ba7-962b-08377c7056ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:43:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:43:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1301256832' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:43:15 np0005558241 nova_compute[248510]: 2025-12-13 08:43:15.337 248514 DEBUG oslo_concurrency.processutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:43:15 np0005558241 nova_compute[248510]: 2025-12-13 08:43:15.346 248514 DEBUG nova.compute.provider_tree [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:43:15 np0005558241 nova_compute[248510]: 2025-12-13 08:43:15.544 248514 DEBUG nova.scheduler.client.report [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:43:15 np0005558241 nova_compute[248510]: 2025-12-13 08:43:15.584 248514 DEBUG oslo_concurrency.lockutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:43:15 np0005558241 nova_compute[248510]: 2025-12-13 08:43:15.639 248514 INFO nova.scheduler.client.report [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Deleted allocations for instance 47b7be19-6608-45b4-9f0e-74393969e3f4
Dec 13 03:43:15 np0005558241 nova_compute[248510]: 2025-12-13 08:43:15.714 248514 DEBUG oslo_concurrency.lockutils [None req-d56b8d32-88f9-4d82-be4e-00ddccea2cc5 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "47b7be19-6608-45b4-9f0e-74393969e3f4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:43:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2415: 321 pgs: 321 active+clean; 121 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 99 op/s
Dec 13 03:43:16 np0005558241 nova_compute[248510]: 2025-12-13 08:43:16.320 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:43:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:43:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:43:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:43:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:43:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:43:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:43:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:43:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:43:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:43:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:43:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:43:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:43:17 np0005558241 nova_compute[248510]: 2025-12-13 08:43:17.742 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:43:17 np0005558241 podman[345559]: 2025-12-13 08:43:17.785441929 +0000 UTC m=+0.040665748 container create d74e4ddfb52137e0b73f5cb22ac011140a131e654d93fd9e0a9f106dd8506fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 03:43:17 np0005558241 systemd[1]: Started libpod-conmon-d74e4ddfb52137e0b73f5cb22ac011140a131e654d93fd9e0a9f106dd8506fe6.scope.
Dec 13 03:43:17 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:43:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2416: 321 pgs: 321 active+clean; 121 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 938 B/s wr, 91 op/s
Dec 13 03:43:17 np0005558241 podman[345559]: 2025-12-13 08:43:17.766445141 +0000 UTC m=+0.021669010 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:43:17 np0005558241 podman[345559]: 2025-12-13 08:43:17.866543586 +0000 UTC m=+0.121767405 container init d74e4ddfb52137e0b73f5cb22ac011140a131e654d93fd9e0a9f106dd8506fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:43:17 np0005558241 podman[345559]: 2025-12-13 08:43:17.874748871 +0000 UTC m=+0.129972690 container start d74e4ddfb52137e0b73f5cb22ac011140a131e654d93fd9e0a9f106dd8506fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:43:17 np0005558241 podman[345559]: 2025-12-13 08:43:17.879057354 +0000 UTC m=+0.134281193 container attach d74e4ddfb52137e0b73f5cb22ac011140a131e654d93fd9e0a9f106dd8506fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 03:43:17 np0005558241 angry_goldberg[345575]: 167 167
Dec 13 03:43:17 np0005558241 systemd[1]: libpod-d74e4ddfb52137e0b73f5cb22ac011140a131e654d93fd9e0a9f106dd8506fe6.scope: Deactivated successfully.
Dec 13 03:43:17 np0005558241 podman[345559]: 2025-12-13 08:43:17.881873128 +0000 UTC m=+0.137096947 container died d74e4ddfb52137e0b73f5cb22ac011140a131e654d93fd9e0a9f106dd8506fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:43:17 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5b1b91f45ff5d262d143db26dcd11c3d4447feff050b268ef605cbe282a57332-merged.mount: Deactivated successfully.
Dec 13 03:43:17 np0005558241 podman[345559]: 2025-12-13 08:43:17.924990529 +0000 UTC m=+0.180214358 container remove d74e4ddfb52137e0b73f5cb22ac011140a131e654d93fd9e0a9f106dd8506fe6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:43:17 np0005558241 systemd[1]: libpod-conmon-d74e4ddfb52137e0b73f5cb22ac011140a131e654d93fd9e0a9f106dd8506fe6.scope: Deactivated successfully.
Dec 13 03:43:18 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:43:18 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:43:18 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:43:18 np0005558241 podman[345599]: 2025-12-13 08:43:18.110617747 +0000 UTC m=+0.046803799 container create 35477e27d63dda2b55777ff3a35931070ce9be9c7bbe1a604f7777d519d986aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_jepsen, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:43:18 np0005558241 systemd[1]: Started libpod-conmon-35477e27d63dda2b55777ff3a35931070ce9be9c7bbe1a604f7777d519d986aa.scope.
Dec 13 03:43:18 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:43:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6a6ef9744ece2eb195dc2122cdef789e4daf6dcaaba1749d3c6578af9ad907/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6a6ef9744ece2eb195dc2122cdef789e4daf6dcaaba1749d3c6578af9ad907/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6a6ef9744ece2eb195dc2122cdef789e4daf6dcaaba1749d3c6578af9ad907/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6a6ef9744ece2eb195dc2122cdef789e4daf6dcaaba1749d3c6578af9ad907/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6a6ef9744ece2eb195dc2122cdef789e4daf6dcaaba1749d3c6578af9ad907/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:18 np0005558241 podman[345599]: 2025-12-13 08:43:18.180616873 +0000 UTC m=+0.116802925 container init 35477e27d63dda2b55777ff3a35931070ce9be9c7bbe1a604f7777d519d986aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_jepsen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:43:18 np0005558241 podman[345599]: 2025-12-13 08:43:18.094396721 +0000 UTC m=+0.030582793 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:43:18 np0005558241 podman[345599]: 2025-12-13 08:43:18.192975307 +0000 UTC m=+0.129161359 container start 35477e27d63dda2b55777ff3a35931070ce9be9c7bbe1a604f7777d519d986aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_jepsen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:43:18 np0005558241 podman[345599]: 2025-12-13 08:43:18.19691915 +0000 UTC m=+0.133105262 container attach 35477e27d63dda2b55777ff3a35931070ce9be9c7bbe1a604f7777d519d986aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_jepsen, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:43:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:43:18 np0005558241 beautiful_jepsen[345616]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:43:18 np0005558241 beautiful_jepsen[345616]: --> All data devices are unavailable
Dec 13 03:43:18 np0005558241 systemd[1]: libpod-35477e27d63dda2b55777ff3a35931070ce9be9c7bbe1a604f7777d519d986aa.scope: Deactivated successfully.
Dec 13 03:43:18 np0005558241 podman[345599]: 2025-12-13 08:43:18.674493705 +0000 UTC m=+0.610679757 container died 35477e27d63dda2b55777ff3a35931070ce9be9c7bbe1a604f7777d519d986aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:18 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4f6a6ef9744ece2eb195dc2122cdef789e4daf6dcaaba1749d3c6578af9ad907-merged.mount: Deactivated successfully.
Dec 13 03:43:18 np0005558241 podman[345599]: 2025-12-13 08:43:18.744530692 +0000 UTC m=+0.680716744 container remove 35477e27d63dda2b55777ff3a35931070ce9be9c7bbe1a604f7777d519d986aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:43:18 np0005558241 systemd[1]: libpod-conmon-35477e27d63dda2b55777ff3a35931070ce9be9c7bbe1a604f7777d519d986aa.scope: Deactivated successfully.
Dec 13 03:43:19 np0005558241 podman[345709]: 2025-12-13 08:43:19.185445265 +0000 UTC m=+0.037680269 container create 19be2c8ac80f10a1eb9a49356df9e77c008ee5f943c7a8701380c03876dd67ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 03:43:19 np0005558241 systemd[1]: Started libpod-conmon-19be2c8ac80f10a1eb9a49356df9e77c008ee5f943c7a8701380c03876dd67ba.scope.
Dec 13 03:43:19 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:43:19 np0005558241 podman[345709]: 2025-12-13 08:43:19.170689898 +0000 UTC m=+0.022924922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:43:19 np0005558241 podman[345709]: 2025-12-13 08:43:19.271847971 +0000 UTC m=+0.124082995 container init 19be2c8ac80f10a1eb9a49356df9e77c008ee5f943c7a8701380c03876dd67ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_khayyam, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:43:19 np0005558241 podman[345709]: 2025-12-13 08:43:19.278338731 +0000 UTC m=+0.130573735 container start 19be2c8ac80f10a1eb9a49356df9e77c008ee5f943c7a8701380c03876dd67ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 03:43:19 np0005558241 podman[345709]: 2025-12-13 08:43:19.281250368 +0000 UTC m=+0.133485392 container attach 19be2c8ac80f10a1eb9a49356df9e77c008ee5f943c7a8701380c03876dd67ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_khayyam, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 03:43:19 np0005558241 adoring_khayyam[345726]: 167 167
Dec 13 03:43:19 np0005558241 systemd[1]: libpod-19be2c8ac80f10a1eb9a49356df9e77c008ee5f943c7a8701380c03876dd67ba.scope: Deactivated successfully.
Dec 13 03:43:19 np0005558241 podman[345709]: 2025-12-13 08:43:19.28360938 +0000 UTC m=+0.135844384 container died 19be2c8ac80f10a1eb9a49356df9e77c008ee5f943c7a8701380c03876dd67ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_khayyam, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:43:19 np0005558241 systemd[1]: var-lib-containers-storage-overlay-bb1c593cbcc464ef6a69426517d46638a03b126b2ddfb2563805e3352e1ec9c8-merged.mount: Deactivated successfully.
Dec 13 03:43:19 np0005558241 podman[345709]: 2025-12-13 08:43:19.341500978 +0000 UTC m=+0.193735982 container remove 19be2c8ac80f10a1eb9a49356df9e77c008ee5f943c7a8701380c03876dd67ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_khayyam, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:19 np0005558241 systemd[1]: libpod-conmon-19be2c8ac80f10a1eb9a49356df9e77c008ee5f943c7a8701380c03876dd67ba.scope: Deactivated successfully.
Dec 13 03:43:19 np0005558241 podman[345751]: 2025-12-13 08:43:19.524058976 +0000 UTC m=+0.049069858 container create 2b9d7206a15ac84b50ce83e944f4e15919268de66815c67e51074622197aa522 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:19 np0005558241 systemd[1]: Started libpod-conmon-2b9d7206a15ac84b50ce83e944f4e15919268de66815c67e51074622197aa522.scope.
Dec 13 03:43:19 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:43:19 np0005558241 podman[345751]: 2025-12-13 08:43:19.503854296 +0000 UTC m=+0.028865188 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:43:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ffb0c44dda29d499c64dc05134aa80b3e606f45d1e4a017cc7147cff8fea8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ffb0c44dda29d499c64dc05134aa80b3e606f45d1e4a017cc7147cff8fea8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ffb0c44dda29d499c64dc05134aa80b3e606f45d1e4a017cc7147cff8fea8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ffb0c44dda29d499c64dc05134aa80b3e606f45d1e4a017cc7147cff8fea8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:19 np0005558241 podman[345751]: 2025-12-13 08:43:19.61420584 +0000 UTC m=+0.139216752 container init 2b9d7206a15ac84b50ce83e944f4e15919268de66815c67e51074622197aa522 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 03:43:19 np0005558241 podman[345751]: 2025-12-13 08:43:19.620514825 +0000 UTC m=+0.145525707 container start 2b9d7206a15ac84b50ce83e944f4e15919268de66815c67e51074622197aa522 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 03:43:19 np0005558241 podman[345751]: 2025-12-13 08:43:19.632860849 +0000 UTC m=+0.157871731 container attach 2b9d7206a15ac84b50ce83e944f4e15919268de66815c67e51074622197aa522 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:43:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2417: 321 pgs: 321 active+clean; 121 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 92 op/s
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]: {
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:    "0": [
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:        {
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "devices": [
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "/dev/loop3"
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            ],
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "lv_name": "ceph_lv0",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "lv_size": "21470642176",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "name": "ceph_lv0",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "tags": {
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.cluster_name": "ceph",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.crush_device_class": "",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.encrypted": "0",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.objectstore": "bluestore",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.osd_id": "0",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.type": "block",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.vdo": "0",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.with_tpm": "0"
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            },
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "type": "block",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "vg_name": "ceph_vg0"
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:        }
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:    ],
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:    "1": [
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:        {
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "devices": [
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "/dev/loop4"
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            ],
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "lv_name": "ceph_lv1",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "lv_size": "21470642176",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "name": "ceph_lv1",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "tags": {
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.cluster_name": "ceph",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.crush_device_class": "",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.encrypted": "0",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.objectstore": "bluestore",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.osd_id": "1",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.type": "block",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.vdo": "0",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.with_tpm": "0"
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            },
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "type": "block",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "vg_name": "ceph_vg1"
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:        }
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:    ],
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:    "2": [
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:        {
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "devices": [
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "/dev/loop5"
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            ],
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "lv_name": "ceph_lv2",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "lv_size": "21470642176",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "name": "ceph_lv2",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "tags": {
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.cluster_name": "ceph",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.crush_device_class": "",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.encrypted": "0",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.objectstore": "bluestore",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.osd_id": "2",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.type": "block",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.vdo": "0",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:                "ceph.with_tpm": "0"
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            },
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "type": "block",
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:            "vg_name": "ceph_vg2"
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:        }
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]:    ]
Dec 13 03:43:19 np0005558241 vigorous_cartwright[345768]: }
Dec 13 03:43:19 np0005558241 systemd[1]: libpod-2b9d7206a15ac84b50ce83e944f4e15919268de66815c67e51074622197aa522.scope: Deactivated successfully.
Dec 13 03:43:19 np0005558241 podman[345751]: 2025-12-13 08:43:19.952509951 +0000 UTC m=+0.477520833 container died 2b9d7206a15ac84b50ce83e944f4e15919268de66815c67e51074622197aa522 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:43:19 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f6ffb0c44dda29d499c64dc05134aa80b3e606f45d1e4a017cc7147cff8fea8b-merged.mount: Deactivated successfully.
Dec 13 03:43:20 np0005558241 podman[345751]: 2025-12-13 08:43:20.00126984 +0000 UTC m=+0.526280722 container remove 2b9d7206a15ac84b50ce83e944f4e15919268de66815c67e51074622197aa522 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:43:20 np0005558241 systemd[1]: libpod-conmon-2b9d7206a15ac84b50ce83e944f4e15919268de66815c67e51074622197aa522.scope: Deactivated successfully.
Dec 13 03:43:20 np0005558241 podman[345850]: 2025-12-13 08:43:20.470867315 +0000 UTC m=+0.044048406 container create 22d3f63e828821d048434eb4a6b4eac0a163e00a85e55a63d4215cc67ddb3e82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ganguly, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 03:43:20 np0005558241 systemd[1]: Started libpod-conmon-22d3f63e828821d048434eb4a6b4eac0a163e00a85e55a63d4215cc67ddb3e82.scope.
Dec 13 03:43:20 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:43:20 np0005558241 podman[345850]: 2025-12-13 08:43:20.449155936 +0000 UTC m=+0.022337047 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:43:20 np0005558241 podman[345850]: 2025-12-13 08:43:20.552145387 +0000 UTC m=+0.125326488 container init 22d3f63e828821d048434eb4a6b4eac0a163e00a85e55a63d4215cc67ddb3e82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ganguly, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:43:20 np0005558241 podman[345850]: 2025-12-13 08:43:20.561475412 +0000 UTC m=+0.134656503 container start 22d3f63e828821d048434eb4a6b4eac0a163e00a85e55a63d4215cc67ddb3e82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 03:43:20 np0005558241 podman[345850]: 2025-12-13 08:43:20.565769014 +0000 UTC m=+0.138950135 container attach 22d3f63e828821d048434eb4a6b4eac0a163e00a85e55a63d4215cc67ddb3e82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ganguly, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 03:43:20 np0005558241 keen_ganguly[345866]: 167 167
Dec 13 03:43:20 np0005558241 systemd[1]: libpod-22d3f63e828821d048434eb4a6b4eac0a163e00a85e55a63d4215cc67ddb3e82.scope: Deactivated successfully.
Dec 13 03:43:20 np0005558241 podman[345850]: 2025-12-13 08:43:20.568693911 +0000 UTC m=+0.141875032 container died 22d3f63e828821d048434eb4a6b4eac0a163e00a85e55a63d4215cc67ddb3e82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ganguly, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 03:43:20 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c9c75ac06866a93e10ee20456aab578ac31522c8c17efd1b6a859fd07373c3ec-merged.mount: Deactivated successfully.
Dec 13 03:43:20 np0005558241 podman[345850]: 2025-12-13 08:43:20.606370599 +0000 UTC m=+0.179551690 container remove 22d3f63e828821d048434eb4a6b4eac0a163e00a85e55a63d4215cc67ddb3e82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_ganguly, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 03:43:20 np0005558241 systemd[1]: libpod-conmon-22d3f63e828821d048434eb4a6b4eac0a163e00a85e55a63d4215cc67ddb3e82.scope: Deactivated successfully.
Dec 13 03:43:20 np0005558241 podman[345890]: 2025-12-13 08:43:20.77762167 +0000 UTC m=+0.038221523 container create d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 03:43:20 np0005558241 systemd[1]: Started libpod-conmon-d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b.scope.
Dec 13 03:43:20 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:43:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e3783fc2f21d7788bd2408319e3f4db0dc5abfd155717eb9b871a171dd2bbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e3783fc2f21d7788bd2408319e3f4db0dc5abfd155717eb9b871a171dd2bbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e3783fc2f21d7788bd2408319e3f4db0dc5abfd155717eb9b871a171dd2bbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e3783fc2f21d7788bd2408319e3f4db0dc5abfd155717eb9b871a171dd2bbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:20 np0005558241 podman[345890]: 2025-12-13 08:43:20.761362564 +0000 UTC m=+0.021962437 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:43:20 np0005558241 podman[345890]: 2025-12-13 08:43:20.878449215 +0000 UTC m=+0.139049098 container init d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:43:20 np0005558241 podman[345890]: 2025-12-13 08:43:20.886926667 +0000 UTC m=+0.147526550 container start d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 03:43:20 np0005558241 podman[345890]: 2025-12-13 08:43:20.890459209 +0000 UTC m=+0.151059062 container attach d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007703559523006975 of space, bias 1.0, pg target 0.23110678569020926 quantized to 32 (current 32)
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006676265921235325 of space, bias 1.0, pg target 0.20028797763705977 quantized to 32 (current 32)
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.905117584623971e-07 of space, bias 4.0, pg target 0.0007086141101548765 quantized to 16 (current 32)
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:43:21 np0005558241 nova_compute[248510]: 2025-12-13 08:43:21.322 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:21 np0005558241 nova_compute[248510]: 2025-12-13 08:43:21.411 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:43:21 np0005558241 nova_compute[248510]: 2025-12-13 08:43:21.411 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:43:21 np0005558241 nova_compute[248510]: 2025-12-13 08:43:21.445 248514 DEBUG nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:43:21 np0005558241 nova_compute[248510]: 2025-12-13 08:43:21.536 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:43:21 np0005558241 nova_compute[248510]: 2025-12-13 08:43:21.536 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:43:21 np0005558241 nova_compute[248510]: 2025-12-13 08:43:21.543 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:43:21 np0005558241 nova_compute[248510]: 2025-12-13 08:43:21.544 248514 INFO nova.compute.claims [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:43:21 np0005558241 lvm[345984]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:43:21 np0005558241 lvm[345985]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:43:21 np0005558241 lvm[345985]: VG ceph_vg1 finished
Dec 13 03:43:21 np0005558241 lvm[345984]: VG ceph_vg0 finished
Dec 13 03:43:21 np0005558241 lvm[345987]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:43:21 np0005558241 lvm[345987]: VG ceph_vg2 finished
Dec 13 03:43:21 np0005558241 nova_compute[248510]: 2025-12-13 08:43:21.752 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:43:21 np0005558241 heuristic_williamson[345906]: {}
Dec 13 03:43:21 np0005558241 systemd[1]: libpod-d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b.scope: Deactivated successfully.
Dec 13 03:43:21 np0005558241 systemd[1]: libpod-d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b.scope: Consumed 1.461s CPU time.
Dec 13 03:43:21 np0005558241 podman[345890]: 2025-12-13 08:43:21.82427664 +0000 UTC m=+1.084876533 container died d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 03:43:21 np0005558241 systemd[1]: var-lib-containers-storage-overlay-66e3783fc2f21d7788bd2408319e3f4db0dc5abfd155717eb9b871a171dd2bbf-merged.mount: Deactivated successfully.
Dec 13 03:43:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2418: 321 pgs: 321 active+clean; 121 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec 13 03:43:21 np0005558241 podman[345890]: 2025-12-13 08:43:21.873811049 +0000 UTC m=+1.134410902 container remove d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_williamson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 03:43:21 np0005558241 systemd[1]: libpod-conmon-d3ec457d0b3c3726da641cbd3b6917a86948b06592fa7576477964d732e4744b.scope: Deactivated successfully.
Dec 13 03:43:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:43:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:43:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:43:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:43:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:43:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1040416571' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.325 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.333 248514 DEBUG nova.compute.provider_tree [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.357 248514 DEBUG nova.scheduler.client.report [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.405 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.406 248514 DEBUG nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.473 248514 DEBUG nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.473 248514 DEBUG nova.network.neutron [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.655 248514 INFO nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.697 248514 DEBUG nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.745 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.820 248514 DEBUG nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.821 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.821 248514 INFO nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Creating image(s)#033[00m
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.840 248514 DEBUG nova.storage.rbd_utils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.864 248514 DEBUG nova.storage.rbd_utils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.885 248514 DEBUG nova.storage.rbd_utils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.889 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:43:22 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:43:22 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.974 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.975 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.976 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:43:22 np0005558241 nova_compute[248510]: 2025-12-13 08:43:22.976 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:43:23 np0005558241 nova_compute[248510]: 2025-12-13 08:43:23.001 248514 DEBUG nova.storage.rbd_utils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:43:23 np0005558241 nova_compute[248510]: 2025-12-13 08:43:23.006 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:43:23 np0005558241 nova_compute[248510]: 2025-12-13 08:43:23.061 248514 DEBUG nova.policy [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3fcdbbf0ede24d3e820e927d9136712b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:43:23 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Dec 13 03:43:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:43:23 np0005558241 nova_compute[248510]: 2025-12-13 08:43:23.612 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:43:23 np0005558241 nova_compute[248510]: 2025-12-13 08:43:23.673 248514 DEBUG nova.storage.rbd_utils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] resizing rbd image d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:43:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2419: 321 pgs: 321 active+clean; 121 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 13 03:43:23 np0005558241 nova_compute[248510]: 2025-12-13 08:43:23.897 248514 DEBUG nova.objects.instance [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'migration_context' on Instance uuid d7f248dd-d4a5-4de5-b69a-bf263bfa30ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:43:23 np0005558241 nova_compute[248510]: 2025-12-13 08:43:23.917 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:43:23 np0005558241 nova_compute[248510]: 2025-12-13 08:43:23.917 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Ensure instance console log exists: /var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:43:23 np0005558241 nova_compute[248510]: 2025-12-13 08:43:23.918 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:43:23 np0005558241 nova_compute[248510]: 2025-12-13 08:43:23.918 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:43:23 np0005558241 nova_compute[248510]: 2025-12-13 08:43:23.918 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:43:24 np0005558241 nova_compute[248510]: 2025-12-13 08:43:24.232 248514 DEBUG nova.network.neutron [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Successfully created port: 6168919d-f5d8-46ba-a89a-e352f37e674d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:43:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2420: 321 pgs: 321 active+clean; 156 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.3 MiB/s wr, 50 op/s
Dec 13 03:43:26 np0005558241 nova_compute[248510]: 2025-12-13 08:43:26.011 248514 DEBUG nova.network.neutron [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Successfully updated port: 6168919d-f5d8-46ba-a89a-e352f37e674d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:43:26 np0005558241 nova_compute[248510]: 2025-12-13 08:43:26.041 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "refresh_cache-d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:43:26 np0005558241 nova_compute[248510]: 2025-12-13 08:43:26.042 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquired lock "refresh_cache-d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:43:26 np0005558241 nova_compute[248510]: 2025-12-13 08:43:26.042 248514 DEBUG nova.network.neutron [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:43:26 np0005558241 nova_compute[248510]: 2025-12-13 08:43:26.255 248514 DEBUG nova.compute.manager [req-7705b6a0-08f9-4d6b-8e5c-b13cee38442d req-558f8e0a-700b-4448-ae53-5752e5e1b229 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Received event network-changed-6168919d-f5d8-46ba-a89a-e352f37e674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:43:26 np0005558241 nova_compute[248510]: 2025-12-13 08:43:26.256 248514 DEBUG nova.compute.manager [req-7705b6a0-08f9-4d6b-8e5c-b13cee38442d req-558f8e0a-700b-4448-ae53-5752e5e1b229 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Refreshing instance network info cache due to event network-changed-6168919d-f5d8-46ba-a89a-e352f37e674d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:43:26 np0005558241 nova_compute[248510]: 2025-12-13 08:43:26.256 248514 DEBUG oslo_concurrency.lockutils [req-7705b6a0-08f9-4d6b-8e5c-b13cee38442d req-558f8e0a-700b-4448-ae53-5752e5e1b229 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:43:26 np0005558241 nova_compute[248510]: 2025-12-13 08:43:26.366 248514 DEBUG nova.network.neutron [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:43:26 np0005558241 nova_compute[248510]: 2025-12-13 08:43:26.374 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:27 np0005558241 nova_compute[248510]: 2025-12-13 08:43:27.716 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615392.7151818, 47b7be19-6608-45b4-9f0e-74393969e3f4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:43:27 np0005558241 nova_compute[248510]: 2025-12-13 08:43:27.717 248514 INFO nova.compute.manager [-] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:43:27 np0005558241 nova_compute[248510]: 2025-12-13 08:43:27.740 248514 DEBUG nova.compute.manager [None req-7faea41c-af8f-422e-a34b-98f70d7fc7c1 - - - - - -] [instance: 47b7be19-6608-45b4-9f0e-74393969e3f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:43:27 np0005558241 nova_compute[248510]: 2025-12-13 08:43:27.749 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2421: 321 pgs: 321 active+clean; 156 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 26 op/s
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.174 248514 DEBUG nova.network.neutron [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Updating instance_info_cache with network_info: [{"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.213 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Releasing lock "refresh_cache-d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.214 248514 DEBUG nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Instance network_info: |[{"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.215 248514 DEBUG oslo_concurrency.lockutils [req-7705b6a0-08f9-4d6b-8e5c-b13cee38442d req-558f8e0a-700b-4448-ae53-5752e5e1b229 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.215 248514 DEBUG nova.network.neutron [req-7705b6a0-08f9-4d6b-8e5c-b13cee38442d req-558f8e0a-700b-4448-ae53-5752e5e1b229 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Refreshing network info cache for port 6168919d-f5d8-46ba-a89a-e352f37e674d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.220 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Start _get_guest_xml network_info=[{"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.226 248514 WARNING nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.234 248514 DEBUG nova.virt.libvirt.host [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.234 248514 DEBUG nova.virt.libvirt.host [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.238 248514 DEBUG nova.virt.libvirt.host [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.238 248514 DEBUG nova.virt.libvirt.host [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.239 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.239 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.239 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.240 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.240 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.240 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.240 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.240 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.241 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.241 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.241 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.241 248514 DEBUG nova.virt.hardware [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.244 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:43:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:43:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:43:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3767693065' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.805 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.834 248514 DEBUG nova.storage.rbd_utils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:43:28 np0005558241 nova_compute[248510]: 2025-12-13 08:43:28.840 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:43:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:43:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/761894302' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.493 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.653s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.495 248514 DEBUG nova.virt.libvirt.vif [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:43:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-596298568',display_name='tempest-ServersTestJSON-server-596298568',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-596298568',id=102,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-gzr370um',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:43:22Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=d7f248dd-d4a5-4de5-b69a-bf263bfa30ad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.496 248514 DEBUG nova.network.os_vif_util [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.496 248514 DEBUG nova.network.os_vif_util [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:fb:ce,bridge_name='br-int',has_traffic_filtering=True,id=6168919d-f5d8-46ba-a89a-e352f37e674d,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6168919d-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.498 248514 DEBUG nova.objects.instance [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'pci_devices' on Instance uuid d7f248dd-d4a5-4de5-b69a-bf263bfa30ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.515 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:43:29 np0005558241 nova_compute[248510]:  <uuid>d7f248dd-d4a5-4de5-b69a-bf263bfa30ad</uuid>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:  <name>instance-00000066</name>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersTestJSON-server-596298568</nova:name>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:43:28</nova:creationTime>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:43:29 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:        <nova:user uuid="3fcdbbf0ede24d3e820e927d9136712b">tempest-ServersTestJSON-1443479207-project-member</nova:user>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:        <nova:project uuid="3fb4f27cdc7f468faa36b4e7a461d7be">tempest-ServersTestJSON-1443479207</nova:project>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:        <nova:port uuid="6168919d-f5d8-46ba-a89a-e352f37e674d">
Dec 13 03:43:29 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <entry name="serial">d7f248dd-d4a5-4de5-b69a-bf263bfa30ad</entry>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <entry name="uuid">d7f248dd-d4a5-4de5-b69a-bf263bfa30ad</entry>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk">
Dec 13 03:43:29 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:43:29 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk.config">
Dec 13 03:43:29 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:43:29 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:24:fb:ce"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <target dev="tap6168919d-f5"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad/console.log" append="off"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:43:29 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:43:29 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:43:29 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:43:29 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.517 248514 DEBUG nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Preparing to wait for external event network-vif-plugged-6168919d-f5d8-46ba-a89a-e352f37e674d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.518 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.518 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.518 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.519 248514 DEBUG nova.virt.libvirt.vif [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:43:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-596298568',display_name='tempest-ServersTestJSON-server-596298568',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-596298568',id=102,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-gzr370um',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-
member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:43:22Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=d7f248dd-d4a5-4de5-b69a-bf263bfa30ad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.519 248514 DEBUG nova.network.os_vif_util [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.520 248514 DEBUG nova.network.os_vif_util [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:fb:ce,bridge_name='br-int',has_traffic_filtering=True,id=6168919d-f5d8-46ba-a89a-e352f37e674d,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6168919d-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.520 248514 DEBUG os_vif [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:fb:ce,bridge_name='br-int',has_traffic_filtering=True,id=6168919d-f5d8-46ba-a89a-e352f37e674d,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6168919d-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.521 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.521 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.522 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.525 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.525 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6168919d-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.526 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6168919d-f5, col_values=(('external_ids', {'iface-id': '6168919d-f5d8-46ba-a89a-e352f37e674d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:fb:ce', 'vm-uuid': 'd7f248dd-d4a5-4de5-b69a-bf263bfa30ad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:29 np0005558241 NetworkManager[50376]: <info>  [1765615409.5298] manager: (tap6168919d-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/404)
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.530 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.536 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.537 248514 INFO os_vif [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:fb:ce,bridge_name='br-int',has_traffic_filtering=True,id=6168919d-f5d8-46ba-a89a-e352f37e674d,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6168919d-f5')#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.604 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.605 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.605 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] No VIF found with MAC fa:16:3e:24:fb:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.606 248514 INFO nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Using config drive#033[00m
Dec 13 03:43:29 np0005558241 nova_compute[248510]: 2025-12-13 08:43:29.634 248514 DEBUG nova.storage.rbd_utils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:43:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2422: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 13 03:43:30 np0005558241 nova_compute[248510]: 2025-12-13 08:43:30.695 248514 INFO nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Creating config drive at /var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad/disk.config#033[00m
Dec 13 03:43:30 np0005558241 nova_compute[248510]: 2025-12-13 08:43:30.701 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpahiph2_e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:43:30 np0005558241 nova_compute[248510]: 2025-12-13 08:43:30.856 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpahiph2_e" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:43:30 np0005558241 nova_compute[248510]: 2025-12-13 08:43:30.883 248514 DEBUG nova.storage.rbd_utils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] rbd image d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:43:30 np0005558241 nova_compute[248510]: 2025-12-13 08:43:30.888 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad/disk.config d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:43:31 np0005558241 nova_compute[248510]: 2025-12-13 08:43:31.058 248514 DEBUG oslo_concurrency.processutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad/disk.config d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:43:31 np0005558241 nova_compute[248510]: 2025-12-13 08:43:31.059 248514 INFO nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Deleting local config drive /var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad/disk.config because it was imported into RBD.#033[00m
Dec 13 03:43:31 np0005558241 kernel: tap6168919d-f5: entered promiscuous mode
Dec 13 03:43:31 np0005558241 NetworkManager[50376]: <info>  [1765615411.1109] manager: (tap6168919d-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/405)
Dec 13 03:43:31 np0005558241 nova_compute[248510]: 2025-12-13 08:43:31.111 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:31 np0005558241 ovn_controller[148476]: 2025-12-13T08:43:31Z|00973|binding|INFO|Claiming lport 6168919d-f5d8-46ba-a89a-e352f37e674d for this chassis.
Dec 13 03:43:31 np0005558241 ovn_controller[148476]: 2025-12-13T08:43:31Z|00974|binding|INFO|6168919d-f5d8-46ba-a89a-e352f37e674d: Claiming fa:16:3e:24:fb:ce 10.100.0.13
Dec 13 03:43:31 np0005558241 ovn_controller[148476]: 2025-12-13T08:43:31Z|00975|binding|INFO|Setting lport 6168919d-f5d8-46ba-a89a-e352f37e674d ovn-installed in OVS
Dec 13 03:43:31 np0005558241 ovn_controller[148476]: 2025-12-13T08:43:31Z|00976|binding|INFO|Setting lport 6168919d-f5d8-46ba-a89a-e352f37e674d up in Southbound
Dec 13 03:43:31 np0005558241 nova_compute[248510]: 2025-12-13 08:43:31.127 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.128 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:fb:ce 10.100.0.13'], port_security=['fa:16:3e:24:fb:ce 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd7f248dd-d4a5-4de5-b69a-bf263bfa30ad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6168919d-f5d8-46ba-a89a-e352f37e674d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:43:31 np0005558241 nova_compute[248510]: 2025-12-13 08:43:31.129 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.130 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6168919d-f5d8-46ba-a89a-e352f37e674d in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 bound to our chassis#033[00m
Dec 13 03:43:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.131 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3#033[00m
Dec 13 03:43:31 np0005558241 systemd-udevd[346350]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:43:31 np0005558241 systemd-machined[210538]: New machine qemu-126-instance-00000066.
Dec 13 03:43:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.147 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e6fd89fd-7ddb-4578-9c9e-ddcd0e785eb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:43:31 np0005558241 NetworkManager[50376]: <info>  [1765615411.1512] device (tap6168919d-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:43:31 np0005558241 NetworkManager[50376]: <info>  [1765615411.1521] device (tap6168919d-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:43:31 np0005558241 systemd[1]: Started Virtual Machine qemu-126-instance-00000066.
Dec 13 03:43:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.174 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[db995b64-b3bc-4154-8438-de28adb16484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:43:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.177 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7e63e6f5-787e-4855-8843-44edb22c7103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:43:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.205 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fa0c6adc-1e9d-4666-87bf-a4c936339b0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:43:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.222 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2d721847-dad0-4afc-802b-c04bdbc658f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346365, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:43:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.237 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ff274458-cf18-4d7f-9d70-7b5b24541b97]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346366, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346366, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:43:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.239 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:43:31 np0005558241 nova_compute[248510]: 2025-12-13 08:43:31.281 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:31 np0005558241 nova_compute[248510]: 2025-12-13 08:43:31.282 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.283 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:43:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.284 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:43:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.285 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:43:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:31.286 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:43:31 np0005558241 nova_compute[248510]: 2025-12-13 08:43:31.309 248514 DEBUG nova.network.neutron [req-7705b6a0-08f9-4d6b-8e5c-b13cee38442d req-558f8e0a-700b-4448-ae53-5752e5e1b229 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Updated VIF entry in instance network info cache for port 6168919d-f5d8-46ba-a89a-e352f37e674d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:43:31 np0005558241 nova_compute[248510]: 2025-12-13 08:43:31.310 248514 DEBUG nova.network.neutron [req-7705b6a0-08f9-4d6b-8e5c-b13cee38442d req-558f8e0a-700b-4448-ae53-5752e5e1b229 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Updating instance_info_cache with network_info: [{"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:43:31 np0005558241 nova_compute[248510]: 2025-12-13 08:43:31.341 248514 DEBUG oslo_concurrency.lockutils [req-7705b6a0-08f9-4d6b-8e5c-b13cee38442d req-558f8e0a-700b-4448-ae53-5752e5e1b229 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:43:31 np0005558241 nova_compute[248510]: 2025-12-13 08:43:31.376 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:31 np0005558241 nova_compute[248510]: 2025-12-13 08:43:31.642 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615411.6415884, d7f248dd-d4a5-4de5-b69a-bf263bfa30ad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:43:31 np0005558241 nova_compute[248510]: 2025-12-13 08:43:31.643 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] VM Started (Lifecycle Event)#033[00m
Dec 13 03:43:31 np0005558241 nova_compute[248510]: 2025-12-13 08:43:31.695 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:43:31 np0005558241 nova_compute[248510]: 2025-12-13 08:43:31.702 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615411.6417358, d7f248dd-d4a5-4de5-b69a-bf263bfa30ad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:43:31 np0005558241 nova_compute[248510]: 2025-12-13 08:43:31.702 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:43:31 np0005558241 nova_compute[248510]: 2025-12-13 08:43:31.762 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:43:31 np0005558241 nova_compute[248510]: 2025-12-13 08:43:31.767 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:43:31 np0005558241 nova_compute[248510]: 2025-12-13 08:43:31.825 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:43:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2423: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:43:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:43:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2424: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:43:34 np0005558241 nova_compute[248510]: 2025-12-13 08:43:34.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.357 248514 DEBUG nova.compute.manager [req-a65256fa-463f-4878-a575-487beb434c7a req-f683755a-1110-40d1-879f-2a5843891eec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Received event network-vif-plugged-6168919d-f5d8-46ba-a89a-e352f37e674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.358 248514 DEBUG oslo_concurrency.lockutils [req-a65256fa-463f-4878-a575-487beb434c7a req-f683755a-1110-40d1-879f-2a5843891eec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.359 248514 DEBUG oslo_concurrency.lockutils [req-a65256fa-463f-4878-a575-487beb434c7a req-f683755a-1110-40d1-879f-2a5843891eec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.359 248514 DEBUG oslo_concurrency.lockutils [req-a65256fa-463f-4878-a575-487beb434c7a req-f683755a-1110-40d1-879f-2a5843891eec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.359 248514 DEBUG nova.compute.manager [req-a65256fa-463f-4878-a575-487beb434c7a req-f683755a-1110-40d1-879f-2a5843891eec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Processing event network-vif-plugged-6168919d-f5d8-46ba-a89a-e352f37e674d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.360 248514 DEBUG nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.363 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615415.363652, d7f248dd-d4a5-4de5-b69a-bf263bfa30ad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.364 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.366 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.369 248514 INFO nova.virt.libvirt.driver [-] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Instance spawned successfully.#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.369 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.407 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.407 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.408 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.408 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.408 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.409 248514 DEBUG nova.virt.libvirt.driver [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.413 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.416 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.492 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.506 248514 INFO nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Took 12.69 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.507 248514 DEBUG nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.600 248514 INFO nova.compute.manager [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Took 14.09 seconds to build instance.#033[00m
Dec 13 03:43:35 np0005558241 nova_compute[248510]: 2025-12-13 08:43:35.643 248514 DEBUG oslo_concurrency.lockutils [None req-4ba2dd0e-3100-4aa1-a865-589e18621267 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:43:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2425: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 13 03:43:36 np0005558241 nova_compute[248510]: 2025-12-13 08:43:36.378 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2426: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 454 KiB/s wr, 11 op/s
Dec 13 03:43:38 np0005558241 nova_compute[248510]: 2025-12-13 08:43:38.016 248514 DEBUG nova.compute.manager [req-3f22058e-039e-4ae6-a0ed-3310d6151fe1 req-2959f37b-d44e-46f7-a32d-e450942b2f97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Received event network-vif-plugged-6168919d-f5d8-46ba-a89a-e352f37e674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:43:38 np0005558241 nova_compute[248510]: 2025-12-13 08:43:38.016 248514 DEBUG oslo_concurrency.lockutils [req-3f22058e-039e-4ae6-a0ed-3310d6151fe1 req-2959f37b-d44e-46f7-a32d-e450942b2f97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:43:38 np0005558241 nova_compute[248510]: 2025-12-13 08:43:38.016 248514 DEBUG oslo_concurrency.lockutils [req-3f22058e-039e-4ae6-a0ed-3310d6151fe1 req-2959f37b-d44e-46f7-a32d-e450942b2f97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:43:38 np0005558241 nova_compute[248510]: 2025-12-13 08:43:38.017 248514 DEBUG oslo_concurrency.lockutils [req-3f22058e-039e-4ae6-a0ed-3310d6151fe1 req-2959f37b-d44e-46f7-a32d-e450942b2f97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:43:38 np0005558241 nova_compute[248510]: 2025-12-13 08:43:38.017 248514 DEBUG nova.compute.manager [req-3f22058e-039e-4ae6-a0ed-3310d6151fe1 req-2959f37b-d44e-46f7-a32d-e450942b2f97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] No waiting events found dispatching network-vif-plugged-6168919d-f5d8-46ba-a89a-e352f37e674d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:43:38 np0005558241 nova_compute[248510]: 2025-12-13 08:43:38.017 248514 WARNING nova.compute.manager [req-3f22058e-039e-4ae6-a0ed-3310d6151fe1 req-2959f37b-d44e-46f7-a32d-e450942b2f97 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Received unexpected event network-vif-plugged-6168919d-f5d8-46ba-a89a-e352f37e674d for instance with vm_state active and task_state None.#033[00m
Dec 13 03:43:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:43:39 np0005558241 nova_compute[248510]: 2025-12-13 08:43:39.530 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2427: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 777 KiB/s rd, 454 KiB/s wr, 37 op/s
Dec 13 03:43:39 np0005558241 podman[346411]: 2025-12-13 08:43:39.97757188 +0000 UTC m=+0.064154174 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 03:43:39 np0005558241 podman[346410]: 2025-12-13 08:43:39.986198036 +0000 UTC m=+0.073879639 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:43:40 np0005558241 podman[346409]: 2025-12-13 08:43:40.040259614 +0000 UTC m=+0.127934336 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Dec 13 03:43:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:43:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:43:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:43:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:43:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:43:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:43:41 np0005558241 nova_compute[248510]: 2025-12-13 08:43:41.380 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2428: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 03:43:42 np0005558241 nova_compute[248510]: 2025-12-13 08:43:42.841 248514 DEBUG oslo_concurrency.lockutils [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:43:42 np0005558241 nova_compute[248510]: 2025-12-13 08:43:42.842 248514 DEBUG oslo_concurrency.lockutils [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:43:42 np0005558241 nova_compute[248510]: 2025-12-13 08:43:42.842 248514 DEBUG nova.compute.manager [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:43:42 np0005558241 nova_compute[248510]: 2025-12-13 08:43:42.846 248514 DEBUG nova.compute.manager [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Dec 13 03:43:42 np0005558241 nova_compute[248510]: 2025-12-13 08:43:42.847 248514 DEBUG nova.objects.instance [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'flavor' on Instance uuid d7f248dd-d4a5-4de5-b69a-bf263bfa30ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:43:42 np0005558241 nova_compute[248510]: 2025-12-13 08:43:42.890 248514 DEBUG nova.virt.libvirt.driver [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 03:43:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:43:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2429: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 03:43:44 np0005558241 nova_compute[248510]: 2025-12-13 08:43:44.533 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:43:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2430: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 511 B/s wr, 73 op/s
Dec 13 03:43:46 np0005558241 nova_compute[248510]: 2025-12-13 08:43:46.382 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:43:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2431: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 13 03:43:47 np0005558241 ovn_controller[148476]: 2025-12-13T08:43:47Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:fb:ce 10.100.0.13
Dec 13 03:43:47 np0005558241 ovn_controller[148476]: 2025-12-13T08:43:47Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:fb:ce 10.100.0.13
Dec 13 03:43:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:43:49 np0005558241 nova_compute[248510]: 2025-12-13 08:43:49.536 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:43:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2432: 321 pgs: 321 active+clean; 181 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 995 KiB/s wr, 108 op/s
Dec 13 03:43:51 np0005558241 nova_compute[248510]: 2025-12-13 08:43:51.384 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:43:51 np0005558241 nova_compute[248510]: 2025-12-13 08:43:51.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:43:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2433: 321 pgs: 321 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Dec 13 03:43:52 np0005558241 nova_compute[248510]: 2025-12-13 08:43:52.934 248514 DEBUG nova.virt.libvirt.driver [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 13 03:43:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2434: 321 pgs: 321 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 03:43:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:43:54 np0005558241 nova_compute[248510]: 2025-12-13 08:43:54.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:43:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:54.048 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 03:43:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:54.050 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 03:43:54 np0005558241 nova_compute[248510]: 2025-12-13 08:43:54.539 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:43:54 np0005558241 nova_compute[248510]: 2025-12-13 08:43:54.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:43:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:55.422 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:43:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:55.422 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:43:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:55.423 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:43:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2435: 321 pgs: 321 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 03:43:56 np0005558241 nova_compute[248510]: 2025-12-13 08:43:56.388 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:43:56 np0005558241 kernel: tap6168919d-f5 (unregistering): left promiscuous mode
Dec 13 03:43:56 np0005558241 NetworkManager[50376]: <info>  [1765615436.8019] device (tap6168919d-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:43:56 np0005558241 ovn_controller[148476]: 2025-12-13T08:43:56Z|00977|binding|INFO|Releasing lport 6168919d-f5d8-46ba-a89a-e352f37e674d from this chassis (sb_readonly=0)
Dec 13 03:43:56 np0005558241 nova_compute[248510]: 2025-12-13 08:43:56.813 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:43:56 np0005558241 ovn_controller[148476]: 2025-12-13T08:43:56Z|00978|binding|INFO|Setting lport 6168919d-f5d8-46ba-a89a-e352f37e674d down in Southbound
Dec 13 03:43:56 np0005558241 ovn_controller[148476]: 2025-12-13T08:43:56Z|00979|binding|INFO|Removing iface tap6168919d-f5 ovn-installed in OVS
Dec 13 03:43:56 np0005558241 nova_compute[248510]: 2025-12-13 08:43:56.817 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:43:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.823 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:fb:ce 10.100.0.13'], port_security=['fa:16:3e:24:fb:ce 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd7f248dd-d4a5-4de5-b69a-bf263bfa30ad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6168919d-f5d8-46ba-a89a-e352f37e674d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 03:43:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.826 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6168919d-f5d8-46ba-a89a-e352f37e674d in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 unbound from our chassis
Dec 13 03:43:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.829 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8f018c93-df47-4a6c-acdb-f508a51f75b3
Dec 13 03:43:56 np0005558241 nova_compute[248510]: 2025-12-13 08:43:56.834 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:43:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.848 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[68aa16b4-6e0a-4799-b995-57fe1859341a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:43:56 np0005558241 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000066.scope: Deactivated successfully.
Dec 13 03:43:56 np0005558241 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000066.scope: Consumed 12.761s CPU time.
Dec 13 03:43:56 np0005558241 systemd-machined[210538]: Machine qemu-126-instance-00000066 terminated.
Dec 13 03:43:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.892 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a2050349-d12c-4837-b3ac-bbe573bab088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:43:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.896 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[39468f78-86e9-4818-9aa4-5b322f3cc469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:43:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.935 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d79e20-f8d7-4ef3-b428-51d1f2162ebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:43:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.955 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8092b18b-8b7f-422a-b027-c6c12ca778d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8f018c93-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:fc:06'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 27, 'rx_bytes': 742, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 27, 'rx_bytes': 742, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 276], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782814, 'reachable_time': 25819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346482, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:43:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.972 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[63b99c19-9a1b-496d-9a2a-8cbee49e8e4e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782825, 'tstamp': 782825}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346483, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8f018c93-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782828, 'tstamp': 782828}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346483, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:43:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.974 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:43:56 np0005558241 nova_compute[248510]: 2025-12-13 08:43:56.976 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:43:56 np0005558241 nova_compute[248510]: 2025-12-13 08:43:56.980 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:43:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.981 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f018c93-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:43:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.982 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 03:43:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.982 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8f018c93-d0, col_values=(('external_ids', {'iface-id': '0f8d26a1-147d-466a-8164-8d2036166124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:43:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:56.983 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 03:43:57 np0005558241 nova_compute[248510]: 2025-12-13 08:43:57.049 248514 INFO nova.virt.libvirt.driver [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Instance shutdown successfully after 14 seconds.
Dec 13 03:43:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:43:57.052 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:43:57 np0005558241 nova_compute[248510]: 2025-12-13 08:43:57.055 248514 INFO nova.virt.libvirt.driver [-] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Instance destroyed successfully.
Dec 13 03:43:57 np0005558241 nova_compute[248510]: 2025-12-13 08:43:57.057 248514 DEBUG nova.objects.instance [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'numa_topology' on Instance uuid d7f248dd-d4a5-4de5-b69a-bf263bfa30ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:43:57 np0005558241 nova_compute[248510]: 2025-12-13 08:43:57.078 248514 DEBUG nova.compute.manager [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:43:57 np0005558241 nova_compute[248510]: 2025-12-13 08:43:57.152 248514 DEBUG oslo_concurrency.lockutils [None req-64b9e0c4-6569-48f8-89db-fb1b4fec341e 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 14.311s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:43:57 np0005558241 nova_compute[248510]: 2025-12-13 08:43:57.288 248514 DEBUG nova.compute.manager [req-9f9f6209-77bd-42e2-8a49-c3a136ac6ed1 req-6894a51f-7aae-4b6e-9922-76ca9ff107f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Received event network-vif-unplugged-6168919d-f5d8-46ba-a89a-e352f37e674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:43:57 np0005558241 nova_compute[248510]: 2025-12-13 08:43:57.289 248514 DEBUG oslo_concurrency.lockutils [req-9f9f6209-77bd-42e2-8a49-c3a136ac6ed1 req-6894a51f-7aae-4b6e-9922-76ca9ff107f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:43:57 np0005558241 nova_compute[248510]: 2025-12-13 08:43:57.289 248514 DEBUG oslo_concurrency.lockutils [req-9f9f6209-77bd-42e2-8a49-c3a136ac6ed1 req-6894a51f-7aae-4b6e-9922-76ca9ff107f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:43:57 np0005558241 nova_compute[248510]: 2025-12-13 08:43:57.289 248514 DEBUG oslo_concurrency.lockutils [req-9f9f6209-77bd-42e2-8a49-c3a136ac6ed1 req-6894a51f-7aae-4b6e-9922-76ca9ff107f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:43:57 np0005558241 nova_compute[248510]: 2025-12-13 08:43:57.289 248514 DEBUG nova.compute.manager [req-9f9f6209-77bd-42e2-8a49-c3a136ac6ed1 req-6894a51f-7aae-4b6e-9922-76ca9ff107f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] No waiting events found dispatching network-vif-unplugged-6168919d-f5d8-46ba-a89a-e352f37e674d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:43:57 np0005558241 nova_compute[248510]: 2025-12-13 08:43:57.290 248514 WARNING nova.compute.manager [req-9f9f6209-77bd-42e2-8a49-c3a136ac6ed1 req-6894a51f-7aae-4b6e-9922-76ca9ff107f0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Received unexpected event network-vif-unplugged-6168919d-f5d8-46ba-a89a-e352f37e674d for instance with vm_state stopped and task_state None.
Dec 13 03:43:57 np0005558241 nova_compute[248510]: 2025-12-13 08:43:57.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:43:57 np0005558241 nova_compute[248510]: 2025-12-13 08:43:57.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 03:43:57 np0005558241 nova_compute[248510]: 2025-12-13 08:43:57.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 03:43:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2436: 321 pgs: 321 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 03:43:58 np0005558241 nova_compute[248510]: 2025-12-13 08:43:58.139 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:43:58 np0005558241 nova_compute[248510]: 2025-12-13 08:43:58.140 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:43:58 np0005558241 nova_compute[248510]: 2025-12-13 08:43:58.140 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 03:43:58 np0005558241 nova_compute[248510]: 2025-12-13 08:43:58.140 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:43:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:43:58 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Dec 13 03:43:58 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:58.985134) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:43:58 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Dec 13 03:43:58 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615438985167, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 909, "num_deletes": 255, "total_data_size": 1279599, "memory_usage": 1306488, "flush_reason": "Manual Compaction"}
Dec 13 03:43:58 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615439006118, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 1257136, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47463, "largest_seqno": 48371, "table_properties": {"data_size": 1252591, "index_size": 2133, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 9859, "raw_average_key_size": 19, "raw_value_size": 1243515, "raw_average_value_size": 2424, "num_data_blocks": 95, "num_entries": 513, "num_filter_entries": 513, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615359, "oldest_key_time": 1765615359, "file_creation_time": 1765615438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 21051 microseconds, and 4430 cpu microseconds.
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.006176) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 1257136 bytes OK
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.006199) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.014576) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.014591) EVENT_LOG_v1 {"time_micros": 1765615439014587, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.014608) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1275165, prev total WAL file size 1275165, number of live WAL files 2.
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.015054) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373630' seq:72057594037927935, type:22 .. '6C6F676D0032303131' seq:0, type:0; will stop at (end)
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(1227KB)], [110(7595KB)]
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615439015158, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 9034808, "oldest_snapshot_seqno": -1}
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6834 keys, 8900177 bytes, temperature: kUnknown
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615439084308, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 8900177, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8856070, "index_size": 25947, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 178993, "raw_average_key_size": 26, "raw_value_size": 8735101, "raw_average_value_size": 1278, "num_data_blocks": 1013, "num_entries": 6834, "num_filter_entries": 6834, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765615439, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.084552) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 8900177 bytes
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.087880) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.5 rd, 128.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 7.4 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(14.3) write-amplify(7.1) OK, records in: 7356, records dropped: 522 output_compression: NoCompression
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.087897) EVENT_LOG_v1 {"time_micros": 1765615439087889, "job": 66, "event": "compaction_finished", "compaction_time_micros": 69230, "compaction_time_cpu_micros": 21009, "output_level": 6, "num_output_files": 1, "total_output_size": 8900177, "num_input_records": 7356, "num_output_records": 6834, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615439088314, "job": 66, "event": "table_file_deletion", "file_number": 112}
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615439089742, "job": 66, "event": "table_file_deletion", "file_number": 110}
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.014979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.089772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.089776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.089778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.089779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:43:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:43:59.089781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:43:59 np0005558241 nova_compute[248510]: 2025-12-13 08:43:59.542 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:43:59 np0005558241 nova_compute[248510]: 2025-12-13 08:43:59.834 248514 DEBUG nova.compute.manager [req-9b3d6826-6395-4d1e-8894-690fcdb28a50 req-16b60c43-8f40-4d5a-ae4f-d1155dfaa4ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Received event network-vif-plugged-6168919d-f5d8-46ba-a89a-e352f37e674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:43:59 np0005558241 nova_compute[248510]: 2025-12-13 08:43:59.834 248514 DEBUG oslo_concurrency.lockutils [req-9b3d6826-6395-4d1e-8894-690fcdb28a50 req-16b60c43-8f40-4d5a-ae4f-d1155dfaa4ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:43:59 np0005558241 nova_compute[248510]: 2025-12-13 08:43:59.834 248514 DEBUG oslo_concurrency.lockutils [req-9b3d6826-6395-4d1e-8894-690fcdb28a50 req-16b60c43-8f40-4d5a-ae4f-d1155dfaa4ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:43:59 np0005558241 nova_compute[248510]: 2025-12-13 08:43:59.834 248514 DEBUG oslo_concurrency.lockutils [req-9b3d6826-6395-4d1e-8894-690fcdb28a50 req-16b60c43-8f40-4d5a-ae4f-d1155dfaa4ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:43:59 np0005558241 nova_compute[248510]: 2025-12-13 08:43:59.835 248514 DEBUG nova.compute.manager [req-9b3d6826-6395-4d1e-8894-690fcdb28a50 req-16b60c43-8f40-4d5a-ae4f-d1155dfaa4ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] No waiting events found dispatching network-vif-plugged-6168919d-f5d8-46ba-a89a-e352f37e674d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:43:59 np0005558241 nova_compute[248510]: 2025-12-13 08:43:59.835 248514 WARNING nova.compute.manager [req-9b3d6826-6395-4d1e-8894-690fcdb28a50 req-16b60c43-8f40-4d5a-ae4f-d1155dfaa4ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Received unexpected event network-vif-plugged-6168919d-f5d8-46ba-a89a-e352f37e674d for instance with vm_state stopped and task_state None.#033[00m
Dec 13 03:43:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2437: 321 pgs: 321 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.656 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Updating instance_info_cache with network_info: [{"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.686 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.686 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.686 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.854 248514 DEBUG oslo_concurrency.lockutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.855 248514 DEBUG oslo_concurrency.lockutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.855 248514 DEBUG oslo_concurrency.lockutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.855 248514 DEBUG oslo_concurrency.lockutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.855 248514 DEBUG oslo_concurrency.lockutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.857 248514 INFO nova.compute.manager [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Terminating instance#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.858 248514 DEBUG nova.compute.manager [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.867 248514 INFO nova.virt.libvirt.driver [-] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Instance destroyed successfully.#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.868 248514 DEBUG nova.objects.instance [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'resources' on Instance uuid d7f248dd-d4a5-4de5-b69a-bf263bfa30ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:44:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2438: 321 pgs: 321 active+clean; 200 MiB data, 786 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 1.2 MiB/s wr, 26 op/s
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.888 248514 DEBUG nova.virt.libvirt.vif [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:43:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-596298568',display_name='tempest-Íñstáñcé-1719169917',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-596298568',id=102,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:43:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-gzr370um',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1443479207',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:43:58Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=d7f248dd-d4a5-4de5-b69a-bf263bfa30ad,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.889 248514 DEBUG nova.network.os_vif_util [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "6168919d-f5d8-46ba-a89a-e352f37e674d", "address": "fa:16:3e:24:fb:ce", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6168919d-f5", "ovs_interfaceid": "6168919d-f5d8-46ba-a89a-e352f37e674d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.890 248514 DEBUG nova.network.os_vif_util [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:fb:ce,bridge_name='br-int',has_traffic_filtering=True,id=6168919d-f5d8-46ba-a89a-e352f37e674d,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6168919d-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.890 248514 DEBUG os_vif [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:fb:ce,bridge_name='br-int',has_traffic_filtering=True,id=6168919d-f5d8-46ba-a89a-e352f37e674d,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6168919d-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.891 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.892 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6168919d-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.893 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.894 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.894 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:01 np0005558241 nova_compute[248510]: 2025-12-13 08:44:01.897 248514 INFO os_vif [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:fb:ce,bridge_name='br-int',has_traffic_filtering=True,id=6168919d-f5d8-46ba-a89a-e352f37e674d,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6168919d-f5')#033[00m
Dec 13 03:44:02 np0005558241 nova_compute[248510]: 2025-12-13 08:44:02.177 248514 INFO nova.virt.libvirt.driver [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Deleting instance files /var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_del#033[00m
Dec 13 03:44:02 np0005558241 nova_compute[248510]: 2025-12-13 08:44:02.178 248514 INFO nova.virt.libvirt.driver [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Deletion of /var/lib/nova/instances/d7f248dd-d4a5-4de5-b69a-bf263bfa30ad_del complete#033[00m
Dec 13 03:44:02 np0005558241 nova_compute[248510]: 2025-12-13 08:44:02.269 248514 INFO nova.compute.manager [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Took 0.41 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:44:02 np0005558241 nova_compute[248510]: 2025-12-13 08:44:02.270 248514 DEBUG oslo.service.loopingcall [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:44:02 np0005558241 nova_compute[248510]: 2025-12-13 08:44:02.270 248514 DEBUG nova.compute.manager [-] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:44:02 np0005558241 nova_compute[248510]: 2025-12-13 08:44:02.270 248514 DEBUG nova.network.neutron [-] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:44:03 np0005558241 nova_compute[248510]: 2025-12-13 08:44:03.583 248514 DEBUG nova.compute.manager [req-435ba39d-b917-4902-944e-8df29b78d197 req-841e4eb7-4524-4d94-a374-dcdf9e716b08 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Received event network-vif-deleted-6168919d-f5d8-46ba-a89a-e352f37e674d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:44:03 np0005558241 nova_compute[248510]: 2025-12-13 08:44:03.584 248514 INFO nova.compute.manager [req-435ba39d-b917-4902-944e-8df29b78d197 req-841e4eb7-4524-4d94-a374-dcdf9e716b08 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Neutron deleted interface 6168919d-f5d8-46ba-a89a-e352f37e674d; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 03:44:03 np0005558241 nova_compute[248510]: 2025-12-13 08:44:03.585 248514 DEBUG nova.network.neutron [req-435ba39d-b917-4902-944e-8df29b78d197 req-841e4eb7-4524-4d94-a374-dcdf9e716b08 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:44:03 np0005558241 nova_compute[248510]: 2025-12-13 08:44:03.677 248514 DEBUG nova.network.neutron [-] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:44:03 np0005558241 nova_compute[248510]: 2025-12-13 08:44:03.698 248514 INFO nova.compute.manager [-] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Took 1.43 seconds to deallocate network for instance.#033[00m
Dec 13 03:44:03 np0005558241 nova_compute[248510]: 2025-12-13 08:44:03.701 248514 DEBUG nova.compute.manager [req-435ba39d-b917-4902-944e-8df29b78d197 req-841e4eb7-4524-4d94-a374-dcdf9e716b08 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Detach interface failed, port_id=6168919d-f5d8-46ba-a89a-e352f37e674d, reason: Instance d7f248dd-d4a5-4de5-b69a-bf263bfa30ad could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec 13 03:44:03 np0005558241 nova_compute[248510]: 2025-12-13 08:44:03.759 248514 DEBUG oslo_concurrency.lockutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:44:03 np0005558241 nova_compute[248510]: 2025-12-13 08:44:03.759 248514 DEBUG oslo_concurrency.lockutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:44:03 np0005558241 nova_compute[248510]: 2025-12-13 08:44:03.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:44:03 np0005558241 nova_compute[248510]: 2025-12-13 08:44:03.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:44:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2439: 321 pgs: 321 active+clean; 178 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 26 KiB/s wr, 29 op/s
Dec 13 03:44:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:44:04 np0005558241 nova_compute[248510]: 2025-12-13 08:44:04.060 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:44:04 np0005558241 nova_compute[248510]: 2025-12-13 08:44:04.176 248514 DEBUG oslo_concurrency.processutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:44:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:44:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3592116885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:44:04 np0005558241 nova_compute[248510]: 2025-12-13 08:44:04.754 248514 DEBUG oslo_concurrency.processutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:44:04 np0005558241 nova_compute[248510]: 2025-12-13 08:44:04.762 248514 DEBUG nova.compute.provider_tree [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:44:04 np0005558241 nova_compute[248510]: 2025-12-13 08:44:04.833 248514 DEBUG nova.scheduler.client.report [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:44:04 np0005558241 nova_compute[248510]: 2025-12-13 08:44:04.872 248514 DEBUG oslo_concurrency.lockutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:44:04 np0005558241 nova_compute[248510]: 2025-12-13 08:44:04.875 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:44:04 np0005558241 nova_compute[248510]: 2025-12-13 08:44:04.876 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:44:04 np0005558241 nova_compute[248510]: 2025-12-13 08:44:04.876 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:44:04 np0005558241 nova_compute[248510]: 2025-12-13 08:44:04.876 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:44:04 np0005558241 nova_compute[248510]: 2025-12-13 08:44:04.969 248514 INFO nova.scheduler.client.report [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Deleted allocations for instance d7f248dd-d4a5-4de5-b69a-bf263bfa30ad#033[00m
Dec 13 03:44:05 np0005558241 nova_compute[248510]: 2025-12-13 08:44:05.094 248514 DEBUG oslo_concurrency.lockutils [None req-767d7d4d-7325-463d-8654-719a9014f6e4 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "d7f248dd-d4a5-4de5-b69a-bf263bfa30ad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:44:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:44:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1456379895' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:44:05 np0005558241 nova_compute[248510]: 2025-12-13 08:44:05.531 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:44:05 np0005558241 nova_compute[248510]: 2025-12-13 08:44:05.682 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:44:05 np0005558241 nova_compute[248510]: 2025-12-13 08:44:05.683 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:44:05 np0005558241 nova_compute[248510]: 2025-12-13 08:44:05.851 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:44:05 np0005558241 nova_compute[248510]: 2025-12-13 08:44:05.853 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3598MB free_disk=59.9078850755468GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:44:05 np0005558241 nova_compute[248510]: 2025-12-13 08:44:05.853 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:44:05 np0005558241 nova_compute[248510]: 2025-12-13 08:44:05.853 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:44:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2440: 321 pgs: 321 active+clean; 121 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 27 KiB/s wr, 90 op/s
Dec 13 03:44:06 np0005558241 nova_compute[248510]: 2025-12-13 08:44:06.042 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:44:06 np0005558241 nova_compute[248510]: 2025-12-13 08:44:06.043 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:44:06 np0005558241 nova_compute[248510]: 2025-12-13 08:44:06.043 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:44:06 np0005558241 nova_compute[248510]: 2025-12-13 08:44:06.151 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:44:06 np0005558241 nova_compute[248510]: 2025-12-13 08:44:06.429 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:44:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/537074341' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:44:06 np0005558241 nova_compute[248510]: 2025-12-13 08:44:06.774 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:44:06 np0005558241 nova_compute[248510]: 2025-12-13 08:44:06.779 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:44:06 np0005558241 nova_compute[248510]: 2025-12-13 08:44:06.881 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:44:06 np0005558241 nova_compute[248510]: 2025-12-13 08:44:06.894 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:07 np0005558241 nova_compute[248510]: 2025-12-13 08:44:07.000 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:44:07 np0005558241 nova_compute[248510]: 2025-12-13 08:44:07.001 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:44:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2441: 321 pgs: 321 active+clean; 121 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 4.5 KiB/s wr, 87 op/s
Dec 13 03:44:08 np0005558241 nova_compute[248510]: 2025-12-13 08:44:08.907 248514 DEBUG oslo_concurrency.lockutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:44:08 np0005558241 nova_compute[248510]: 2025-12-13 08:44:08.907 248514 DEBUG oslo_concurrency.lockutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:44:08 np0005558241 nova_compute[248510]: 2025-12-13 08:44:08.907 248514 DEBUG oslo_concurrency.lockutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:44:08 np0005558241 nova_compute[248510]: 2025-12-13 08:44:08.908 248514 DEBUG oslo_concurrency.lockutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:44:08 np0005558241 nova_compute[248510]: 2025-12-13 08:44:08.908 248514 DEBUG oslo_concurrency.lockutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:44:08 np0005558241 nova_compute[248510]: 2025-12-13 08:44:08.910 248514 INFO nova.compute.manager [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Terminating instance#033[00m
Dec 13 03:44:08 np0005558241 nova_compute[248510]: 2025-12-13 08:44:08.911 248514 DEBUG nova.compute.manager [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:44:08 np0005558241 kernel: tapeacb4de9-7d (unregistering): left promiscuous mode
Dec 13 03:44:08 np0005558241 NetworkManager[50376]: <info>  [1765615448.9589] device (tapeacb4de9-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:44:08 np0005558241 ovn_controller[148476]: 2025-12-13T08:44:08Z|00980|binding|INFO|Releasing lport eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 from this chassis (sb_readonly=0)
Dec 13 03:44:08 np0005558241 ovn_controller[148476]: 2025-12-13T08:44:08Z|00981|binding|INFO|Setting lport eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 down in Southbound
Dec 13 03:44:08 np0005558241 nova_compute[248510]: 2025-12-13 08:44:08.964 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:08 np0005558241 ovn_controller[148476]: 2025-12-13T08:44:08Z|00982|binding|INFO|Removing iface tapeacb4de9-7d ovn-installed in OVS
Dec 13 03:44:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:08.974 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:83:5f 10.100.0.5'], port_security=['fa:16:3e:b3:83:5f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3fb4f27cdc7f468faa36b4e7a461d7be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64472756-128f-45dd-8691-0aa1384b733d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66e50f00-39b5-402d-8fa5-9c6323d89a88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=eacb4de9-7daa-4e47-a0b7-b0a986dc50f1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:44:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:08.975 158419 INFO neutron.agent.ovn.metadata.agent [-] Port eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 in datapath 8f018c93-df47-4a6c-acdb-f508a51f75b3 unbound from our chassis#033[00m
Dec 13 03:44:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:08.976 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8f018c93-df47-4a6c-acdb-f508a51f75b3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:44:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:08.977 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0c80236e-4ce4-4bbe-831f-1f0ba9827ce6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:08.978 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3 namespace which is not needed anymore#033[00m
Dec 13 03:44:08 np0005558241 nova_compute[248510]: 2025-12-13 08:44:08.983 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:08.999 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.000 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.000 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:44:09 np0005558241 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Dec 13 03:44:09 np0005558241 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005e.scope: Consumed 20.716s CPU time.
Dec 13 03:44:09 np0005558241 systemd-machined[210538]: Machine qemu-116-instance-0000005e terminated.
Dec 13 03:44:09 np0005558241 neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3[339862]: [NOTICE]   (339943) : haproxy version is 2.8.14-c23fe91
Dec 13 03:44:09 np0005558241 neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3[339862]: [NOTICE]   (339943) : path to executable is /usr/sbin/haproxy
Dec 13 03:44:09 np0005558241 neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3[339862]: [WARNING]  (339943) : Exiting Master process...
Dec 13 03:44:09 np0005558241 neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3[339862]: [ALERT]    (339943) : Current worker (339953) exited with code 143 (Terminated)
Dec 13 03:44:09 np0005558241 neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3[339862]: [WARNING]  (339943) : All workers exited. Exiting... (0)
Dec 13 03:44:09 np0005558241 systemd[1]: libpod-009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6.scope: Deactivated successfully.
Dec 13 03:44:09 np0005558241 conmon[339862]: conmon 009ac3a8f481e99b8c11 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6.scope/container/memory.events
Dec 13 03:44:09 np0005558241 podman[346607]: 2025-12-13 08:44:09.124504942 +0000 UTC m=+0.044675982 container died 009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.145 248514 INFO nova.virt.libvirt.driver [-] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Instance destroyed successfully.#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.146 248514 DEBUG nova.objects.instance [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lazy-loading 'resources' on Instance uuid 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:44:09 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6-userdata-shm.mount: Deactivated successfully.
Dec 13 03:44:09 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3dfb9bdf13c872c9b470ea097dce2795d900c75827e3e75f827321fc99371a11-merged.mount: Deactivated successfully.
Dec 13 03:44:09 np0005558241 podman[346607]: 2025-12-13 08:44:09.179345138 +0000 UTC m=+0.099516178 container cleanup 009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:44:09 np0005558241 systemd[1]: libpod-conmon-009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6.scope: Deactivated successfully.
Dec 13 03:44:09 np0005558241 podman[346645]: 2025-12-13 08:44:09.238493693 +0000 UTC m=+0.038902928 container remove 009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:09.244 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eae5aa31-6c44-4bb5-945a-0081381e7e54]: (4, ('Sat Dec 13 08:44:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3 (009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6)\n009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6\nSat Dec 13 08:44:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3 (009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6)\n009ac3a8f481e99b8c114074dd9dd2d84ecb11326b6348ef329ec295aa29ddc6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:09.246 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d57aa912-c973-4a96-b674-d24228eb628d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:09.246 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f018c93-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.248 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:09 np0005558241 kernel: tap8f018c93-d0: left promiscuous mode
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.264 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:09.267 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0af3db03-49fc-4358-b459-22ebc5fec122]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:44:09
Dec 13 03:44:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:44:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:44:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'vms', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups']
Dec 13 03:44:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:44:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:09.284 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b46ddfe0-699c-4beb-979f-a46608911ab0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:09.285 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c578a285-460a-43a8-b1ce-fff6c03e0387]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:09.304 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d927264-ba02-4c9f-a2c6-449b92ef85c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782807, 'reachable_time': 35416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346664, 'error': None, 'target': 'ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:09 np0005558241 systemd[1]: run-netns-ovnmeta\x2d8f018c93\x2ddf47\x2d4a6c\x2dacdb\x2df508a51f75b3.mount: Deactivated successfully.
Dec 13 03:44:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:09.309 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8f018c93-df47-4a6c-acdb-f508a51f75b3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:44:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:09.309 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[bbf2c675-1d2c-4621-b889-efa26da4b401]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.431 248514 DEBUG nova.virt.libvirt.vif [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:40:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-694899991',display_name='tempest-₡-694899991',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--694899991',id=94,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:40:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3fb4f27cdc7f468faa36b4e7a461d7be',ramdisk_id='',reservation_id='r-vc3q6lf7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1443479207
',owner_user_name='tempest-ServersTestJSON-1443479207-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:40:51Z,user_data=None,user_id='3fcdbbf0ede24d3e820e927d9136712b',uuid=9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.432 248514 DEBUG nova.network.os_vif_util [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converting VIF {"id": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "address": "fa:16:3e:b3:83:5f", "network": {"id": "8f018c93-df47-4a6c-acdb-f508a51f75b3", "bridge": "br-int", "label": "tempest-ServersTestJSON-112861479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3fb4f27cdc7f468faa36b4e7a461d7be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeacb4de9-7d", "ovs_interfaceid": "eacb4de9-7daa-4e47-a0b7-b0a986dc50f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.434 248514 DEBUG nova.network.os_vif_util [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:83:5f,bridge_name='br-int',has_traffic_filtering=True,id=eacb4de9-7daa-4e47-a0b7-b0a986dc50f1,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacb4de9-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.434 248514 DEBUG os_vif [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:83:5f,bridge_name='br-int',has_traffic_filtering=True,id=eacb4de9-7daa-4e47-a0b7-b0a986dc50f1,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacb4de9-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.436 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.436 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeacb4de9-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.438 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.439 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.441 248514 INFO os_vif [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:83:5f,bridge_name='br-int',has_traffic_filtering=True,id=eacb4de9-7daa-4e47-a0b7-b0a986dc50f1,network=Network(8f018c93-df47-4a6c-acdb-f508a51f75b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeacb4de9-7d')#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.687 248514 DEBUG nova.compute.manager [req-e59ea5ea-8f4a-4cea-bb46-ba270340763b req-99f5814c-9fb2-4ce2-8991-ed0143dcd13a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Received event network-vif-unplugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.688 248514 DEBUG oslo_concurrency.lockutils [req-e59ea5ea-8f4a-4cea-bb46-ba270340763b req-99f5814c-9fb2-4ce2-8991-ed0143dcd13a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.689 248514 DEBUG oslo_concurrency.lockutils [req-e59ea5ea-8f4a-4cea-bb46-ba270340763b req-99f5814c-9fb2-4ce2-8991-ed0143dcd13a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.689 248514 DEBUG oslo_concurrency.lockutils [req-e59ea5ea-8f4a-4cea-bb46-ba270340763b req-99f5814c-9fb2-4ce2-8991-ed0143dcd13a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.689 248514 DEBUG nova.compute.manager [req-e59ea5ea-8f4a-4cea-bb46-ba270340763b req-99f5814c-9fb2-4ce2-8991-ed0143dcd13a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] No waiting events found dispatching network-vif-unplugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.689 248514 DEBUG nova.compute.manager [req-e59ea5ea-8f4a-4cea-bb46-ba270340763b req-99f5814c-9fb2-4ce2-8991-ed0143dcd13a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Received event network-vif-unplugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.708 248514 INFO nova.virt.libvirt.driver [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Deleting instance files /var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_del#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.709 248514 INFO nova.virt.libvirt.driver [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Deletion of /var/lib/nova/instances/9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c_del complete#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.803 248514 INFO nova.compute.manager [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.803 248514 DEBUG oslo.service.loopingcall [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.804 248514 DEBUG nova.compute.manager [-] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:44:09 np0005558241 nova_compute[248510]: 2025-12-13 08:44:09.804 248514 DEBUG nova.network.neutron [-] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:44:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2442: 321 pgs: 321 active+clean; 121 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 4.5 KiB/s wr, 87 op/s
Dec 13 03:44:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:44:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:44:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:44:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:44:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:44:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:44:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:44:10 np0005558241 podman[346686]: 2025-12-13 08:44:10.963216083 +0000 UTC m=+0.051151994 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:44:10 np0005558241 podman[346685]: 2025-12-13 08:44:10.969623634 +0000 UTC m=+0.062394067 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:10 np0005558241 podman[346684]: 2025-12-13 08:44:10.989121893 +0000 UTC m=+0.082116221 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 03:44:11 np0005558241 nova_compute[248510]: 2025-12-13 08:44:11.245 248514 DEBUG nova.network.neutron [-] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:44:11 np0005558241 nova_compute[248510]: 2025-12-13 08:44:11.286 248514 INFO nova.compute.manager [-] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Took 1.48 seconds to deallocate network for instance.#033[00m
Dec 13 03:44:11 np0005558241 nova_compute[248510]: 2025-12-13 08:44:11.353 248514 DEBUG oslo_concurrency.lockutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:44:11 np0005558241 nova_compute[248510]: 2025-12-13 08:44:11.353 248514 DEBUG oslo_concurrency.lockutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:44:11 np0005558241 nova_compute[248510]: 2025-12-13 08:44:11.405 248514 DEBUG oslo_concurrency.processutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:44:11 np0005558241 nova_compute[248510]: 2025-12-13 08:44:11.471 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2443: 321 pgs: 321 active+clean; 91 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.5 KiB/s wr, 106 op/s
Dec 13 03:44:11 np0005558241 nova_compute[248510]: 2025-12-13 08:44:11.895 248514 DEBUG nova.compute.manager [req-cf8089a7-29dc-4d92-a342-df3ac780c160 req-71badfbe-e9b6-4a7a-9f98-9804f76a9c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Received event network-vif-plugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:44:11 np0005558241 nova_compute[248510]: 2025-12-13 08:44:11.896 248514 DEBUG oslo_concurrency.lockutils [req-cf8089a7-29dc-4d92-a342-df3ac780c160 req-71badfbe-e9b6-4a7a-9f98-9804f76a9c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:44:11 np0005558241 nova_compute[248510]: 2025-12-13 08:44:11.896 248514 DEBUG oslo_concurrency.lockutils [req-cf8089a7-29dc-4d92-a342-df3ac780c160 req-71badfbe-e9b6-4a7a-9f98-9804f76a9c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:44:11 np0005558241 nova_compute[248510]: 2025-12-13 08:44:11.896 248514 DEBUG oslo_concurrency.lockutils [req-cf8089a7-29dc-4d92-a342-df3ac780c160 req-71badfbe-e9b6-4a7a-9f98-9804f76a9c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:44:11 np0005558241 nova_compute[248510]: 2025-12-13 08:44:11.897 248514 DEBUG nova.compute.manager [req-cf8089a7-29dc-4d92-a342-df3ac780c160 req-71badfbe-e9b6-4a7a-9f98-9804f76a9c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] No waiting events found dispatching network-vif-plugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:44:11 np0005558241 nova_compute[248510]: 2025-12-13 08:44:11.897 248514 WARNING nova.compute.manager [req-cf8089a7-29dc-4d92-a342-df3ac780c160 req-71badfbe-e9b6-4a7a-9f98-9804f76a9c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Received unexpected event network-vif-plugged-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:44:11 np0005558241 nova_compute[248510]: 2025-12-13 08:44:11.897 248514 DEBUG nova.compute.manager [req-cf8089a7-29dc-4d92-a342-df3ac780c160 req-71badfbe-e9b6-4a7a-9f98-9804f76a9c12 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Received event network-vif-deleted-eacb4de9-7daa-4e47-a0b7-b0a986dc50f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:44:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:44:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/19150681' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:44:11 np0005558241 nova_compute[248510]: 2025-12-13 08:44:11.964 248514 DEBUG oslo_concurrency.processutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:44:11 np0005558241 nova_compute[248510]: 2025-12-13 08:44:11.971 248514 DEBUG nova.compute.provider_tree [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:44:11 np0005558241 nova_compute[248510]: 2025-12-13 08:44:11.990 248514 DEBUG nova.scheduler.client.report [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:44:12 np0005558241 nova_compute[248510]: 2025-12-13 08:44:12.021 248514 DEBUG oslo_concurrency.lockutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:44:12 np0005558241 nova_compute[248510]: 2025-12-13 08:44:12.048 248514 INFO nova.scheduler.client.report [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Deleted allocations for instance 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c#033[00m
Dec 13 03:44:12 np0005558241 nova_compute[248510]: 2025-12-13 08:44:12.049 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615437.0484374, d7f248dd-d4a5-4de5-b69a-bf263bfa30ad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:44:12 np0005558241 nova_compute[248510]: 2025-12-13 08:44:12.049 248514 INFO nova.compute.manager [-] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:44:12 np0005558241 nova_compute[248510]: 2025-12-13 08:44:12.079 248514 DEBUG nova.compute.manager [None req-62f8c865-dfcb-47e6-b9ca-72a786c88328 - - - - - -] [instance: d7f248dd-d4a5-4de5-b69a-bf263bfa30ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:44:12 np0005558241 nova_compute[248510]: 2025-12-13 08:44:12.172 248514 DEBUG oslo_concurrency.lockutils [None req-c9adf9fa-a109-4414-83b0-8200d71cf5e1 3fcdbbf0ede24d3e820e927d9136712b 3fb4f27cdc7f468faa36b4e7a461d7be - - default default] Lock "9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.265s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:44:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2444: 321 pgs: 321 active+clean; 65 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 2.3 KiB/s wr, 108 op/s
Dec 13 03:44:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:44:14 np0005558241 nova_compute[248510]: 2025-12-13 08:44:14.438 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:14 np0005558241 nova_compute[248510]: 2025-12-13 08:44:14.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:44:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:44:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/267277573' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:44:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:44:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/267277573' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:44:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2445: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 KiB/s wr, 88 op/s
Dec 13 03:44:16 np0005558241 nova_compute[248510]: 2025-12-13 08:44:16.474 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2446: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 03:44:18 np0005558241 nova_compute[248510]: 2025-12-13 08:44:18.391 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:44:19 np0005558241 nova_compute[248510]: 2025-12-13 08:44:19.440 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2447: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.2116032574507006e-05 of space, bias 1.0, pg target 0.0036348097723521017 quantized to 32 (current 32)
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006676247291145143 of space, bias 1.0, pg target 0.2002874187343543 quantized to 32 (current 32)
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.905117584623971e-07 of space, bias 4.0, pg target 0.0007086141101548765 quantized to 16 (current 32)
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:44:21 np0005558241 nova_compute[248510]: 2025-12-13 08:44:21.475 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:21 np0005558241 nova_compute[248510]: 2025-12-13 08:44:21.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:44:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2448: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 03:44:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:44:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:44:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:44:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:44:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:44:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:44:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:44:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:44:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:44:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:44:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:23 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:44:23 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:44:23 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:44:23 np0005558241 podman[346910]: 2025-12-13 08:44:23.141960207 +0000 UTC m=+0.049717439 container create 6dc6ed5ae775c6ea00e403daafa1cfbf75e871e9e7236a734230b820dfd80c4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 03:44:23 np0005558241 systemd[1]: Started libpod-conmon-6dc6ed5ae775c6ea00e403daafa1cfbf75e871e9e7236a734230b820dfd80c4e.scope.
Dec 13 03:44:23 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:44:23 np0005558241 podman[346910]: 2025-12-13 08:44:23.115350099 +0000 UTC m=+0.023107351 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:23 np0005558241 podman[346910]: 2025-12-13 08:44:23.279290173 +0000 UTC m=+0.187047405 container init 6dc6ed5ae775c6ea00e403daafa1cfbf75e871e9e7236a734230b820dfd80c4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:23 np0005558241 podman[346910]: 2025-12-13 08:44:23.292749571 +0000 UTC m=+0.200506803 container start 6dc6ed5ae775c6ea00e403daafa1cfbf75e871e9e7236a734230b820dfd80c4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:23 np0005558241 podman[346910]: 2025-12-13 08:44:23.296647649 +0000 UTC m=+0.204404901 container attach 6dc6ed5ae775c6ea00e403daafa1cfbf75e871e9e7236a734230b820dfd80c4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Dec 13 03:44:23 np0005558241 priceless_austin[346926]: 167 167
Dec 13 03:44:23 np0005558241 systemd[1]: libpod-6dc6ed5ae775c6ea00e403daafa1cfbf75e871e9e7236a734230b820dfd80c4e.scope: Deactivated successfully.
Dec 13 03:44:23 np0005558241 podman[346910]: 2025-12-13 08:44:23.299839929 +0000 UTC m=+0.207597181 container died 6dc6ed5ae775c6ea00e403daafa1cfbf75e871e9e7236a734230b820dfd80c4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2449: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 852 B/s wr, 6 op/s
Dec 13 03:44:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:44:24 np0005558241 nova_compute[248510]: 2025-12-13 08:44:24.142 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615449.139949, 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:44:24 np0005558241 nova_compute[248510]: 2025-12-13 08:44:24.144 248514 INFO nova.compute.manager [-] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:44:24 np0005558241 nova_compute[248510]: 2025-12-13 08:44:24.167 248514 DEBUG nova.compute.manager [None req-dba7eb62-7c58-4601-9903-7c677bcc5f1b - - - - - -] [instance: 9fb5c08b-d1e1-4ad0-8c18-8f63081cfb0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:44:24 np0005558241 nova_compute[248510]: 2025-12-13 08:44:24.441 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:24 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1c6cbc5f7fb3943d6989285ba9a964270fe01325c40adafe607816445aa9c3b0-merged.mount: Deactivated successfully.
Dec 13 03:44:24 np0005558241 podman[346910]: 2025-12-13 08:44:24.508352485 +0000 UTC m=+1.416109737 container remove 6dc6ed5ae775c6ea00e403daafa1cfbf75e871e9e7236a734230b820dfd80c4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:24 np0005558241 systemd[1]: libpod-conmon-6dc6ed5ae775c6ea00e403daafa1cfbf75e871e9e7236a734230b820dfd80c4e.scope: Deactivated successfully.
Dec 13 03:44:24 np0005558241 podman[346951]: 2025-12-13 08:44:24.673231393 +0000 UTC m=+0.045897003 container create 7849b7031facfd84f26af116bdab0a91b4a6608ad56c6256c432de2f463adf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_moser, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:44:24 np0005558241 systemd[1]: Started libpod-conmon-7849b7031facfd84f26af116bdab0a91b4a6608ad56c6256c432de2f463adf64.scope.
Dec 13 03:44:24 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:44:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb1bbc2ad28a3f82bb48ed2f6cf30572a4a2b44fcb57b754a410400ac161e2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb1bbc2ad28a3f82bb48ed2f6cf30572a4a2b44fcb57b754a410400ac161e2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb1bbc2ad28a3f82bb48ed2f6cf30572a4a2b44fcb57b754a410400ac161e2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb1bbc2ad28a3f82bb48ed2f6cf30572a4a2b44fcb57b754a410400ac161e2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eb1bbc2ad28a3f82bb48ed2f6cf30572a4a2b44fcb57b754a410400ac161e2e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:24 np0005558241 podman[346951]: 2025-12-13 08:44:24.656313239 +0000 UTC m=+0.028978879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:24 np0005558241 podman[346951]: 2025-12-13 08:44:24.754610525 +0000 UTC m=+0.127276155 container init 7849b7031facfd84f26af116bdab0a91b4a6608ad56c6256c432de2f463adf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 03:44:24 np0005558241 podman[346951]: 2025-12-13 08:44:24.762904664 +0000 UTC m=+0.135570274 container start 7849b7031facfd84f26af116bdab0a91b4a6608ad56c6256c432de2f463adf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_moser, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 03:44:24 np0005558241 podman[346951]: 2025-12-13 08:44:24.766952955 +0000 UTC m=+0.139618595 container attach 7849b7031facfd84f26af116bdab0a91b4a6608ad56c6256c432de2f463adf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_moser, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:25 np0005558241 intelligent_moser[346969]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:44:25 np0005558241 intelligent_moser[346969]: --> All data devices are unavailable
Dec 13 03:44:25 np0005558241 systemd[1]: libpod-7849b7031facfd84f26af116bdab0a91b4a6608ad56c6256c432de2f463adf64.scope: Deactivated successfully.
Dec 13 03:44:25 np0005558241 podman[346951]: 2025-12-13 08:44:25.229141474 +0000 UTC m=+0.601807104 container died 7849b7031facfd84f26af116bdab0a91b4a6608ad56c6256c432de2f463adf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_moser, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:25 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0eb1bbc2ad28a3f82bb48ed2f6cf30572a4a2b44fcb57b754a410400ac161e2e-merged.mount: Deactivated successfully.
Dec 13 03:44:25 np0005558241 podman[346951]: 2025-12-13 08:44:25.276098842 +0000 UTC m=+0.648764442 container remove 7849b7031facfd84f26af116bdab0a91b4a6608ad56c6256c432de2f463adf64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:25 np0005558241 systemd[1]: libpod-conmon-7849b7031facfd84f26af116bdab0a91b4a6608ad56c6256c432de2f463adf64.scope: Deactivated successfully.
Dec 13 03:44:25 np0005558241 podman[347060]: 2025-12-13 08:44:25.769404952 +0000 UTC m=+0.075132337 container create 79f283218dbd20b00a5b4431eb90698ef4900d186554d04302532dae37133c00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_shtern, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 03:44:25 np0005558241 systemd[1]: Started libpod-conmon-79f283218dbd20b00a5b4431eb90698ef4900d186554d04302532dae37133c00.scope.
Dec 13 03:44:25 np0005558241 podman[347060]: 2025-12-13 08:44:25.720601707 +0000 UTC m=+0.026329112 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:25 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:44:25 np0005558241 podman[347060]: 2025-12-13 08:44:25.856230161 +0000 UTC m=+0.161957576 container init 79f283218dbd20b00a5b4431eb90698ef4900d186554d04302532dae37133c00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_shtern, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:25 np0005558241 podman[347060]: 2025-12-13 08:44:25.863932324 +0000 UTC m=+0.169659709 container start 79f283218dbd20b00a5b4431eb90698ef4900d186554d04302532dae37133c00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:25 np0005558241 quizzical_shtern[347076]: 167 167
Dec 13 03:44:25 np0005558241 systemd[1]: libpod-79f283218dbd20b00a5b4431eb90698ef4900d186554d04302532dae37133c00.scope: Deactivated successfully.
Dec 13 03:44:25 np0005558241 podman[347060]: 2025-12-13 08:44:25.868673463 +0000 UTC m=+0.174400868 container attach 79f283218dbd20b00a5b4431eb90698ef4900d186554d04302532dae37133c00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_shtern, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 03:44:25 np0005558241 podman[347060]: 2025-12-13 08:44:25.86934648 +0000 UTC m=+0.175073865 container died 79f283218dbd20b00a5b4431eb90698ef4900d186554d04302532dae37133c00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 03:44:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2450: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Dec 13 03:44:25 np0005558241 systemd[1]: var-lib-containers-storage-overlay-fa22edd0862fb2d535e77cf18625f0f7108232ce79b4b679ac392f5251cc2a7d-merged.mount: Deactivated successfully.
Dec 13 03:44:25 np0005558241 podman[347060]: 2025-12-13 08:44:25.911386005 +0000 UTC m=+0.217113380 container remove 79f283218dbd20b00a5b4431eb90698ef4900d186554d04302532dae37133c00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_shtern, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 03:44:25 np0005558241 systemd[1]: libpod-conmon-79f283218dbd20b00a5b4431eb90698ef4900d186554d04302532dae37133c00.scope: Deactivated successfully.
Dec 13 03:44:26 np0005558241 podman[347098]: 2025-12-13 08:44:26.103397044 +0000 UTC m=+0.046378725 container create d9d97de0b3b06a7a00241bc4847be4d6b1ec5d1597dde93a7d0bb5337f8e431e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_satoshi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:26 np0005558241 systemd[1]: Started libpod-conmon-d9d97de0b3b06a7a00241bc4847be4d6b1ec5d1597dde93a7d0bb5337f8e431e.scope.
Dec 13 03:44:26 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:44:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc80aff31efd023cf5709add2f58158e0d46b8a2dcca79d6f6e22ddb3b95713/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc80aff31efd023cf5709add2f58158e0d46b8a2dcca79d6f6e22ddb3b95713/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc80aff31efd023cf5709add2f58158e0d46b8a2dcca79d6f6e22ddb3b95713/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc80aff31efd023cf5709add2f58158e0d46b8a2dcca79d6f6e22ddb3b95713/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:26 np0005558241 podman[347098]: 2025-12-13 08:44:26.175569355 +0000 UTC m=+0.118551056 container init d9d97de0b3b06a7a00241bc4847be4d6b1ec5d1597dde93a7d0bb5337f8e431e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_satoshi, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:26 np0005558241 podman[347098]: 2025-12-13 08:44:26.084002027 +0000 UTC m=+0.026983728 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:26 np0005558241 podman[347098]: 2025-12-13 08:44:26.181538685 +0000 UTC m=+0.124520366 container start d9d97de0b3b06a7a00241bc4847be4d6b1ec5d1597dde93a7d0bb5337f8e431e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:44:26 np0005558241 podman[347098]: 2025-12-13 08:44:26.184980001 +0000 UTC m=+0.127961702 container attach d9d97de0b3b06a7a00241bc4847be4d6b1ec5d1597dde93a7d0bb5337f8e431e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]: {
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:    "0": [
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:        {
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "devices": [
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "/dev/loop3"
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            ],
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "lv_name": "ceph_lv0",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "lv_size": "21470642176",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "name": "ceph_lv0",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "tags": {
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.cluster_name": "ceph",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.crush_device_class": "",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.encrypted": "0",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.objectstore": "bluestore",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.osd_id": "0",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.type": "block",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.vdo": "0",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.with_tpm": "0"
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            },
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "type": "block",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "vg_name": "ceph_vg0"
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:        }
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:    ],
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:    "1": [
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:        {
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "devices": [
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "/dev/loop4"
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            ],
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "lv_name": "ceph_lv1",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "lv_size": "21470642176",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "name": "ceph_lv1",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "tags": {
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.cluster_name": "ceph",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.crush_device_class": "",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.encrypted": "0",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.objectstore": "bluestore",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.osd_id": "1",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.type": "block",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.vdo": "0",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.with_tpm": "0"
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            },
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "type": "block",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "vg_name": "ceph_vg1"
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:        }
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:    ],
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:    "2": [
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:        {
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "devices": [
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "/dev/loop5"
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            ],
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "lv_name": "ceph_lv2",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "lv_size": "21470642176",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "name": "ceph_lv2",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "tags": {
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.cluster_name": "ceph",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.crush_device_class": "",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.encrypted": "0",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.objectstore": "bluestore",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.osd_id": "2",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.type": "block",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.vdo": "0",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:                "ceph.with_tpm": "0"
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            },
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "type": "block",
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:            "vg_name": "ceph_vg2"
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:        }
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]:    ]
Dec 13 03:44:26 np0005558241 eloquent_satoshi[347115]: }
Dec 13 03:44:26 np0005558241 systemd[1]: libpod-d9d97de0b3b06a7a00241bc4847be4d6b1ec5d1597dde93a7d0bb5337f8e431e.scope: Deactivated successfully.
Dec 13 03:44:26 np0005558241 podman[347098]: 2025-12-13 08:44:26.527767053 +0000 UTC m=+0.470748744 container died d9d97de0b3b06a7a00241bc4847be4d6b1ec5d1597dde93a7d0bb5337f8e431e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_satoshi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:26 np0005558241 nova_compute[248510]: 2025-12-13 08:44:26.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:26 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0cc80aff31efd023cf5709add2f58158e0d46b8a2dcca79d6f6e22ddb3b95713-merged.mount: Deactivated successfully.
Dec 13 03:44:26 np0005558241 podman[347098]: 2025-12-13 08:44:26.5722859 +0000 UTC m=+0.515267581 container remove d9d97de0b3b06a7a00241bc4847be4d6b1ec5d1597dde93a7d0bb5337f8e431e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:44:26 np0005558241 systemd[1]: libpod-conmon-d9d97de0b3b06a7a00241bc4847be4d6b1ec5d1597dde93a7d0bb5337f8e431e.scope: Deactivated successfully.
Dec 13 03:44:27 np0005558241 podman[347195]: 2025-12-13 08:44:27.04765154 +0000 UTC m=+0.023217044 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:27 np0005558241 podman[347195]: 2025-12-13 08:44:27.366639655 +0000 UTC m=+0.342205139 container create ef4556e28cda52484d39c055d84dffe540339b9248d03efcb954be14fa368626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 03:44:27 np0005558241 systemd[1]: Started libpod-conmon-ef4556e28cda52484d39c055d84dffe540339b9248d03efcb954be14fa368626.scope.
Dec 13 03:44:27 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:44:27 np0005558241 podman[347195]: 2025-12-13 08:44:27.46369343 +0000 UTC m=+0.439258944 container init ef4556e28cda52484d39c055d84dffe540339b9248d03efcb954be14fa368626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 03:44:27 np0005558241 podman[347195]: 2025-12-13 08:44:27.471223079 +0000 UTC m=+0.446788563 container start ef4556e28cda52484d39c055d84dffe540339b9248d03efcb954be14fa368626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_heyrovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:27 np0005558241 nice_heyrovsky[347211]: 167 167
Dec 13 03:44:27 np0005558241 systemd[1]: libpod-ef4556e28cda52484d39c055d84dffe540339b9248d03efcb954be14fa368626.scope: Deactivated successfully.
Dec 13 03:44:27 np0005558241 podman[347195]: 2025-12-13 08:44:27.477831275 +0000 UTC m=+0.453396789 container attach ef4556e28cda52484d39c055d84dffe540339b9248d03efcb954be14fa368626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:27 np0005558241 podman[347195]: 2025-12-13 08:44:27.478169883 +0000 UTC m=+0.453735367 container died ef4556e28cda52484d39c055d84dffe540339b9248d03efcb954be14fa368626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:27 np0005558241 systemd[1]: var-lib-containers-storage-overlay-984628a64d9e75d499299216c2d9e2ea0007e09439a00f6e303c8a2dd5c6e82b-merged.mount: Deactivated successfully.
Dec 13 03:44:27 np0005558241 podman[347195]: 2025-12-13 08:44:27.511469359 +0000 UTC m=+0.487034843 container remove ef4556e28cda52484d39c055d84dffe540339b9248d03efcb954be14fa368626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Dec 13 03:44:27 np0005558241 systemd[1]: libpod-conmon-ef4556e28cda52484d39c055d84dffe540339b9248d03efcb954be14fa368626.scope: Deactivated successfully.
Dec 13 03:44:27 np0005558241 podman[347235]: 2025-12-13 08:44:27.675733491 +0000 UTC m=+0.052203531 container create 837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_curie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 03:44:27 np0005558241 systemd[1]: Started libpod-conmon-837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8.scope.
Dec 13 03:44:27 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:44:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0da932d9a8c24972398944ac47a3386d1db142199bfad1f1c560331463db3a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0da932d9a8c24972398944ac47a3386d1db142199bfad1f1c560331463db3a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0da932d9a8c24972398944ac47a3386d1db142199bfad1f1c560331463db3a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:27 np0005558241 podman[347235]: 2025-12-13 08:44:27.654591881 +0000 UTC m=+0.031061931 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0da932d9a8c24972398944ac47a3386d1db142199bfad1f1c560331463db3a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:27 np0005558241 podman[347235]: 2025-12-13 08:44:27.753314448 +0000 UTC m=+0.129784478 container init 837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:44:27 np0005558241 podman[347235]: 2025-12-13 08:44:27.760769085 +0000 UTC m=+0.137239125 container start 837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:27 np0005558241 podman[347235]: 2025-12-13 08:44:27.764281863 +0000 UTC m=+0.140751933 container attach 837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_curie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:44:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2451: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:44:28 np0005558241 lvm[347329]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:44:28 np0005558241 lvm[347331]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:44:28 np0005558241 lvm[347329]: VG ceph_vg0 finished
Dec 13 03:44:28 np0005558241 lvm[347331]: VG ceph_vg1 finished
Dec 13 03:44:28 np0005558241 lvm[347333]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:44:28 np0005558241 lvm[347333]: VG ceph_vg2 finished
Dec 13 03:44:28 np0005558241 upbeat_curie[347252]: {}
Dec 13 03:44:28 np0005558241 systemd[1]: libpod-837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8.scope: Deactivated successfully.
Dec 13 03:44:28 np0005558241 podman[347235]: 2025-12-13 08:44:28.572865004 +0000 UTC m=+0.949335074 container died 837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_curie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 03:44:28 np0005558241 systemd[1]: libpod-837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8.scope: Consumed 1.240s CPU time.
Dec 13 03:44:28 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c0da932d9a8c24972398944ac47a3386d1db142199bfad1f1c560331463db3a1-merged.mount: Deactivated successfully.
Dec 13 03:44:28 np0005558241 podman[347235]: 2025-12-13 08:44:28.641780643 +0000 UTC m=+1.018250683 container remove 837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_curie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:28 np0005558241 systemd[1]: libpod-conmon-837a494e9702220ce3a90bf4afb2d9f1c22d1639dfdebdddd88febc6156e95d8.scope: Deactivated successfully.
Dec 13 03:44:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:44:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:44:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:44:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:44:29 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:44:29 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:44:29 np0005558241 nova_compute[248510]: 2025-12-13 08:44:29.443 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2452: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:44:31 np0005558241 nova_compute[248510]: 2025-12-13 08:44:31.530 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2453: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:44:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2454: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:44:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:44:34 np0005558241 nova_compute[248510]: 2025-12-13 08:44:34.447 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2455: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:44:36 np0005558241 nova_compute[248510]: 2025-12-13 08:44:36.577 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2456: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:44:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:44:39 np0005558241 nova_compute[248510]: 2025-12-13 08:44:39.448 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2457: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:44:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:44:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:44:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:44:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:44:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:44:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:44:41 np0005558241 nova_compute[248510]: 2025-12-13 08:44:41.579 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2458: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:44:41 np0005558241 podman[347375]: 2025-12-13 08:44:41.973427431 +0000 UTC m=+0.060823277 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:44:41 np0005558241 podman[347374]: 2025-12-13 08:44:41.977670958 +0000 UTC m=+0.066817578 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 03:44:42 np0005558241 podman[347373]: 2025-12-13 08:44:42.003841474 +0000 UTC m=+0.093121627 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2459: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:44:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:44:44 np0005558241 nova_compute[248510]: 2025-12-13 08:44:44.450 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2460: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:44:46 np0005558241 nova_compute[248510]: 2025-12-13 08:44:46.308 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:44:46 np0005558241 nova_compute[248510]: 2025-12-13 08:44:46.309 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:44:46 np0005558241 nova_compute[248510]: 2025-12-13 08:44:46.340 248514 DEBUG nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:44:46 np0005558241 nova_compute[248510]: 2025-12-13 08:44:46.469 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:44:46 np0005558241 nova_compute[248510]: 2025-12-13 08:44:46.470 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:44:46 np0005558241 nova_compute[248510]: 2025-12-13 08:44:46.482 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:44:46 np0005558241 nova_compute[248510]: 2025-12-13 08:44:46.482 248514 INFO nova.compute.claims [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:44:46 np0005558241 nova_compute[248510]: 2025-12-13 08:44:46.581 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:46 np0005558241 nova_compute[248510]: 2025-12-13 08:44:46.711 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:44:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:44:47 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2387109520' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.295 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.301 248514 DEBUG nova.compute.provider_tree [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.322 248514 DEBUG nova.scheduler.client.report [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.449 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.450 248514 DEBUG nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.551 248514 DEBUG nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.551 248514 DEBUG nova.network.neutron [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.596 248514 INFO nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.628 248514 DEBUG nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.731 248514 DEBUG nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.733 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.733 248514 INFO nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Creating image(s)#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.757 248514 DEBUG nova.storage.rbd_utils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.783 248514 DEBUG nova.storage.rbd_utils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.805 248514 DEBUG nova.storage.rbd_utils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.809 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.901 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.902 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.903 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.903 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:44:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2461: 321 pgs: 321 active+clean; 41 MiB data, 697 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.927 248514 DEBUG nova.storage.rbd_utils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.930 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c3fb322f-a9db-4396-b659-2307698e5524_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:44:47 np0005558241 nova_compute[248510]: 2025-12-13 08:44:47.968 248514 DEBUG nova.policy [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8948a1b0c26f43129cb50ef6f3872ecd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:44:48 np0005558241 nova_compute[248510]: 2025-12-13 08:44:48.224 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c3fb322f-a9db-4396-b659-2307698e5524_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:44:48 np0005558241 nova_compute[248510]: 2025-12-13 08:44:48.287 248514 DEBUG nova.storage.rbd_utils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] resizing rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:44:48 np0005558241 nova_compute[248510]: 2025-12-13 08:44:48.350 248514 DEBUG nova.objects.instance [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'migration_context' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:44:48 np0005558241 nova_compute[248510]: 2025-12-13 08:44:48.410 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:44:48 np0005558241 nova_compute[248510]: 2025-12-13 08:44:48.410 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Ensure instance console log exists: /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:44:48 np0005558241 nova_compute[248510]: 2025-12-13 08:44:48.411 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:44:48 np0005558241 nova_compute[248510]: 2025-12-13 08:44:48.411 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:44:48 np0005558241 nova_compute[248510]: 2025-12-13 08:44:48.412 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:44:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:44:49 np0005558241 nova_compute[248510]: 2025-12-13 08:44:49.452 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:49 np0005558241 ovn_controller[148476]: 2025-12-13T08:44:49Z|00983|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 13 03:44:49 np0005558241 nova_compute[248510]: 2025-12-13 08:44:49.747 248514 DEBUG nova.network.neutron [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Successfully created port: 2d164f50-a56a-4eaf-ad60-84274a0eb413 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:44:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2462: 321 pgs: 321 active+clean; 54 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 596 B/s rd, 643 KiB/s wr, 1 op/s
Dec 13 03:44:51 np0005558241 nova_compute[248510]: 2025-12-13 08:44:51.583 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:51 np0005558241 nova_compute[248510]: 2025-12-13 08:44:51.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:44:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2463: 321 pgs: 321 active+clean; 68 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 24 op/s
Dec 13 03:44:52 np0005558241 nova_compute[248510]: 2025-12-13 08:44:52.107 248514 DEBUG nova.network.neutron [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Successfully updated port: 2d164f50-a56a-4eaf-ad60-84274a0eb413 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:44:52 np0005558241 nova_compute[248510]: 2025-12-13 08:44:52.126 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:44:52 np0005558241 nova_compute[248510]: 2025-12-13 08:44:52.127 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquired lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:44:52 np0005558241 nova_compute[248510]: 2025-12-13 08:44:52.127 248514 DEBUG nova.network.neutron [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:44:52 np0005558241 nova_compute[248510]: 2025-12-13 08:44:52.297 248514 DEBUG nova.compute.manager [req-f30e50e3-3a5a-4847-985e-f4c2ae04691a req-783435e1-a8d0-47a4-9707-96a3604986b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-changed-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:44:52 np0005558241 nova_compute[248510]: 2025-12-13 08:44:52.298 248514 DEBUG nova.compute.manager [req-f30e50e3-3a5a-4847-985e-f4c2ae04691a req-783435e1-a8d0-47a4-9707-96a3604986b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Refreshing instance network info cache due to event network-changed-2d164f50-a56a-4eaf-ad60-84274a0eb413. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:44:52 np0005558241 nova_compute[248510]: 2025-12-13 08:44:52.299 248514 DEBUG oslo_concurrency.lockutils [req-f30e50e3-3a5a-4847-985e-f4c2ae04691a req-783435e1-a8d0-47a4-9707-96a3604986b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:44:52 np0005558241 nova_compute[248510]: 2025-12-13 08:44:52.570 248514 DEBUG nova.network.neutron [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:44:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2464: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:44:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:44:54 np0005558241 nova_compute[248510]: 2025-12-13 08:44:54.454 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:54 np0005558241 nova_compute[248510]: 2025-12-13 08:44:54.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:44:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:55.423 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:44:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:55.423 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:44:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:55.423 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.424 248514 DEBUG nova.network.neutron [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating instance_info_cache with network_info: [{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.544 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Releasing lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.545 248514 DEBUG nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance network_info: |[{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.545 248514 DEBUG oslo_concurrency.lockutils [req-f30e50e3-3a5a-4847-985e-f4c2ae04691a req-783435e1-a8d0-47a4-9707-96a3604986b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.546 248514 DEBUG nova.network.neutron [req-f30e50e3-3a5a-4847-985e-f4c2ae04691a req-783435e1-a8d0-47a4-9707-96a3604986b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Refreshing network info cache for port 2d164f50-a56a-4eaf-ad60-84274a0eb413 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.549 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Start _get_guest_xml network_info=[{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.553 248514 WARNING nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.560 248514 DEBUG nova.virt.libvirt.host [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.561 248514 DEBUG nova.virt.libvirt.host [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.570 248514 DEBUG nova.virt.libvirt.host [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.571 248514 DEBUG nova.virt.libvirt.host [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.571 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.572 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.572 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.572 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.572 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.572 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.573 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.573 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.573 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.573 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.573 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.574 248514 DEBUG nova.virt.hardware [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:44:55 np0005558241 nova_compute[248510]: 2025-12-13 08:44:55.577 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:44:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2465: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:44:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:44:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3290291857' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.147 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.167 248514 DEBUG nova.storage.rbd_utils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.171 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.584 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:44:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1683683758' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.710 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.712 248514 DEBUG nova.virt.libvirt.vif [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-235457723',display_name='tempest-ServersNegativeTestJSON-server-235457723',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-235457723',id=103,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-u6zqzes4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-ServersNegativeTestJSON-1471623163-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:44:47Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=c3fb322f-a9db-4396-b659-2307698e5524,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.712 248514 DEBUG nova.network.os_vif_util [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.713 248514 DEBUG nova.network.os_vif_util [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.714 248514 DEBUG nova.objects.instance [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'pci_devices' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.737 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:44:56 np0005558241 nova_compute[248510]:  <uuid>c3fb322f-a9db-4396-b659-2307698e5524</uuid>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:  <name>instance-00000067</name>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersNegativeTestJSON-server-235457723</nova:name>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:44:55</nova:creationTime>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:44:56 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:        <nova:user uuid="8948a1b0c26f43129cb50ef6f3872ecd">tempest-ServersNegativeTestJSON-1471623163-project-member</nova:user>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:        <nova:project uuid="d2d4d23379cc4b03bbdd72a9134fdd9b">tempest-ServersNegativeTestJSON-1471623163</nova:project>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:        <nova:port uuid="2d164f50-a56a-4eaf-ad60-84274a0eb413">
Dec 13 03:44:56 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <entry name="serial">c3fb322f-a9db-4396-b659-2307698e5524</entry>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <entry name="uuid">c3fb322f-a9db-4396-b659-2307698e5524</entry>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/c3fb322f-a9db-4396-b659-2307698e5524_disk">
Dec 13 03:44:56 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:44:56 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/c3fb322f-a9db-4396-b659-2307698e5524_disk.config">
Dec 13 03:44:56 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:44:56 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:e2:39:62"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <target dev="tap2d164f50-a5"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/console.log" append="off"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:44:56 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:44:56 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:44:56 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:44:56 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.738 248514 DEBUG nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Preparing to wait for external event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.739 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.739 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.739 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.740 248514 DEBUG nova.virt.libvirt.vif [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-235457723',display_name='tempest-ServersNegativeTestJSON-server-235457723',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-235457723',id=103,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-u6zqzes4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-Ser
versNegativeTestJSON-1471623163-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:44:47Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=c3fb322f-a9db-4396-b659-2307698e5524,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.740 248514 DEBUG nova.network.os_vif_util [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.741 248514 DEBUG nova.network.os_vif_util [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.741 248514 DEBUG os_vif [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.742 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.743 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.743 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.746 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.746 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d164f50-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.747 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d164f50-a5, col_values=(('external_ids', {'iface-id': '2d164f50-a56a-4eaf-ad60-84274a0eb413', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:39:62', 'vm-uuid': 'c3fb322f-a9db-4396-b659-2307698e5524'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:44:56 np0005558241 NetworkManager[50376]: <info>  [1765615496.7500] manager: (tap2d164f50-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.751 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.755 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.756 248514 INFO os_vif [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5')#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.929 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.930 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.930 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No VIF found with MAC fa:16:3e:e2:39:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.931 248514 INFO nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Using config drive#033[00m
Dec 13 03:44:56 np0005558241 nova_compute[248510]: 2025-12-13 08:44:56.953 248514 DEBUG nova.storage.rbd_utils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:44:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:57.077 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:44:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:57.078 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:44:57 np0005558241 nova_compute[248510]: 2025-12-13 08:44:57.115 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:57 np0005558241 nova_compute[248510]: 2025-12-13 08:44:57.757 248514 INFO nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Creating config drive at /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config#033[00m
Dec 13 03:44:57 np0005558241 nova_compute[248510]: 2025-12-13 08:44:57.763 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8q0huyow execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:44:57 np0005558241 nova_compute[248510]: 2025-12-13 08:44:57.906 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8q0huyow" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:44:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2466: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:44:57 np0005558241 nova_compute[248510]: 2025-12-13 08:44:57.938 248514 DEBUG nova.storage.rbd_utils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:44:57 np0005558241 nova_compute[248510]: 2025-12-13 08:44:57.943 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config c3fb322f-a9db-4396-b659-2307698e5524_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:44:58 np0005558241 nova_compute[248510]: 2025-12-13 08:44:58.249 248514 DEBUG nova.network.neutron [req-f30e50e3-3a5a-4847-985e-f4c2ae04691a req-783435e1-a8d0-47a4-9707-96a3604986b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updated VIF entry in instance network info cache for port 2d164f50-a56a-4eaf-ad60-84274a0eb413. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:44:58 np0005558241 nova_compute[248510]: 2025-12-13 08:44:58.250 248514 DEBUG nova.network.neutron [req-f30e50e3-3a5a-4847-985e-f4c2ae04691a req-783435e1-a8d0-47a4-9707-96a3604986b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating instance_info_cache with network_info: [{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:44:58 np0005558241 nova_compute[248510]: 2025-12-13 08:44:58.275 248514 DEBUG oslo_concurrency.lockutils [req-f30e50e3-3a5a-4847-985e-f4c2ae04691a req-783435e1-a8d0-47a4-9707-96a3604986b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:44:58 np0005558241 nova_compute[248510]: 2025-12-13 08:44:58.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:44:58 np0005558241 nova_compute[248510]: 2025-12-13 08:44:58.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:44:58 np0005558241 nova_compute[248510]: 2025-12-13 08:44:58.815 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:44:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:44:59 np0005558241 nova_compute[248510]: 2025-12-13 08:44:59.468 248514 DEBUG oslo_concurrency.processutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config c3fb322f-a9db-4396-b659-2307698e5524_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:44:59 np0005558241 nova_compute[248510]: 2025-12-13 08:44:59.469 248514 INFO nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Deleting local config drive /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config because it was imported into RBD.#033[00m
Dec 13 03:44:59 np0005558241 kernel: tap2d164f50-a5: entered promiscuous mode
Dec 13 03:44:59 np0005558241 ovn_controller[148476]: 2025-12-13T08:44:59Z|00984|binding|INFO|Claiming lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 for this chassis.
Dec 13 03:44:59 np0005558241 ovn_controller[148476]: 2025-12-13T08:44:59Z|00985|binding|INFO|2d164f50-a56a-4eaf-ad60-84274a0eb413: Claiming fa:16:3e:e2:39:62 10.100.0.6
Dec 13 03:44:59 np0005558241 nova_compute[248510]: 2025-12-13 08:44:59.519 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:59 np0005558241 NetworkManager[50376]: <info>  [1765615499.5226] manager: (tap2d164f50-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/407)
Dec 13 03:44:59 np0005558241 nova_compute[248510]: 2025-12-13 08:44:59.523 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:59 np0005558241 nova_compute[248510]: 2025-12-13 08:44:59.526 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:59 np0005558241 systemd-udevd[347759]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:44:59 np0005558241 NetworkManager[50376]: <info>  [1765615499.5592] device (tap2d164f50-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:44:59 np0005558241 NetworkManager[50376]: <info>  [1765615499.5599] device (tap2d164f50-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:44:59 np0005558241 nova_compute[248510]: 2025-12-13 08:44:59.581 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:59 np0005558241 ovn_controller[148476]: 2025-12-13T08:44:59Z|00986|binding|INFO|Setting lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 ovn-installed in OVS
Dec 13 03:44:59 np0005558241 nova_compute[248510]: 2025-12-13 08:44:59.585 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:59 np0005558241 systemd-machined[210538]: New machine qemu-127-instance-00000067.
Dec 13 03:44:59 np0005558241 ovn_controller[148476]: 2025-12-13T08:44:59Z|00987|binding|INFO|Setting lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 up in Southbound
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.610 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:39:62 10.100.0.6'], port_security=['fa:16:3e:e2:39:62 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c3fb322f-a9db-4396-b659-2307698e5524', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2d164f50-a56a-4eaf-ad60-84274a0eb413) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.610 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2d164f50-a56a-4eaf-ad60-84274a0eb413 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b bound to our chassis#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.612 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b#033[00m
Dec 13 03:44:59 np0005558241 systemd[1]: Started Virtual Machine qemu-127-instance-00000067.
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.625 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7144342d-2192-40a2-b11f-7f82b2fe9674]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.626 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6ad7f755-f1 in ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.627 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6ad7f755-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.627 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[680a7cbb-ca1c-45da-9ea8-dbc03fa1c0f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.628 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[22d80d02-685f-4187-bc20-0ffce223257e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.641 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[7f809824-bb72-41a4-9d2d-ec42e3607b5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.668 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[93365a6a-5d07-44c1-9e8e-2581d19b8c5b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.697 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d6bcb6-9fa6-4fa1-a624-0507e56203c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:59 np0005558241 systemd-udevd[347761]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:44:59 np0005558241 NetworkManager[50376]: <info>  [1765615499.7049] manager: (tap6ad7f755-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/408)
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.704 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef61ceea-e621-4df0-a091-ffe86c116758]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.733 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b35abfb4-6886-46db-b0cb-c9881e5ea76d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.736 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[68ac012c-1129-4ecd-b38f-c410f44bc5c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:59 np0005558241 NetworkManager[50376]: <info>  [1765615499.7612] device (tap6ad7f755-f0): carrier: link connected
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.767 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[db80ee6f-0079-47ad-a2df-c69e3a351168]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.785 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ab30ab2a-3094-4397-bd0e-328ca637ee1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 807697, 'reachable_time': 38471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347795, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.802 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba8c0a2-d034-43f9-9b20-e85b5c406b14]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe26:35ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 807697, 'tstamp': 807697}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347796, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.827 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3db5e29f-5f79-4a0d-a953-395faad21e7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 807697, 'reachable_time': 38471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 347798, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.860 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[53e811c2-5d29-4b0d-869d-04c8d1682c41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2467: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.917 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4d7b5640-05d1-4694-8686-92efab9b67d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.918 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.919 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.919 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ad7f755-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:44:59 np0005558241 nova_compute[248510]: 2025-12-13 08:44:59.921 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:59 np0005558241 kernel: tap6ad7f755-f0: entered promiscuous mode
Dec 13 03:44:59 np0005558241 NetworkManager[50376]: <info>  [1765615499.9222] manager: (tap6ad7f755-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Dec 13 03:44:59 np0005558241 nova_compute[248510]: 2025-12-13 08:44:59.924 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.924 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ad7f755-f0, col_values=(('external_ids', {'iface-id': '683d7da0-6f1e-41a6-9158-6204fb05ee50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:44:59 np0005558241 nova_compute[248510]: 2025-12-13 08:44:59.925 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:59 np0005558241 ovn_controller[148476]: 2025-12-13T08:44:59Z|00988|binding|INFO|Releasing lport 683d7da0-6f1e-41a6-9158-6204fb05ee50 from this chassis (sb_readonly=0)
Dec 13 03:44:59 np0005558241 nova_compute[248510]: 2025-12-13 08:44:59.938 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.939 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.943 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[63ef8c00-3dc0-489e-a8bf-1ee80161e7b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.943 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-6ad7f755-fa29-40dd-89c4-988d0a51cf9b
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.pid.haproxy
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 6ad7f755-fa29-40dd-89c4-988d0a51cf9b
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:44:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:44:59.944 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'env', 'PROCESS_TAG=haproxy-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:45:00 np0005558241 nova_compute[248510]: 2025-12-13 08:45:00.090 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615500.0895076, c3fb322f-a9db-4396-b659-2307698e5524 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:45:00 np0005558241 nova_compute[248510]: 2025-12-13 08:45:00.091 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Started (Lifecycle Event)#033[00m
Dec 13 03:45:00 np0005558241 nova_compute[248510]: 2025-12-13 08:45:00.124 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:45:00 np0005558241 nova_compute[248510]: 2025-12-13 08:45:00.128 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615500.0896235, c3fb322f-a9db-4396-b659-2307698e5524 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:45:00 np0005558241 nova_compute[248510]: 2025-12-13 08:45:00.129 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:45:00 np0005558241 nova_compute[248510]: 2025-12-13 08:45:00.156 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:45:00 np0005558241 nova_compute[248510]: 2025-12-13 08:45:00.159 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:45:00 np0005558241 nova_compute[248510]: 2025-12-13 08:45:00.201 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:45:00 np0005558241 podman[347871]: 2025-12-13 08:45:00.29151851 +0000 UTC m=+0.027015289 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:45:00 np0005558241 podman[347871]: 2025-12-13 08:45:00.631374879 +0000 UTC m=+0.366871588 container create 44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 03:45:00 np0005558241 systemd[1]: Started libpod-conmon-44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7.scope.
Dec 13 03:45:00 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:45:00 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b7ca1c7f01e35473290f2070a95e46c25e0e0fafb8bf13da3280c15aeb90b0b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:00 np0005558241 podman[347871]: 2025-12-13 08:45:00.743674758 +0000 UTC m=+0.479171467 container init 44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:00 np0005558241 podman[347871]: 2025-12-13 08:45:00.749388091 +0000 UTC m=+0.484884790 container start 44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:45:00 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[347886]: [NOTICE]   (347890) : New worker (347892) forked
Dec 13 03:45:00 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[347886]: [NOTICE]   (347890) : Loading success.
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.459 248514 DEBUG nova.compute.manager [req-72c160c8-009c-4c09-af87-b10eb307afaf req-bfce8667-d032-4511-b164-4a6d4e87a840 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.460 248514 DEBUG oslo_concurrency.lockutils [req-72c160c8-009c-4c09-af87-b10eb307afaf req-bfce8667-d032-4511-b164-4a6d4e87a840 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.460 248514 DEBUG oslo_concurrency.lockutils [req-72c160c8-009c-4c09-af87-b10eb307afaf req-bfce8667-d032-4511-b164-4a6d4e87a840 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.460 248514 DEBUG oslo_concurrency.lockutils [req-72c160c8-009c-4c09-af87-b10eb307afaf req-bfce8667-d032-4511-b164-4a6d4e87a840 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.460 248514 DEBUG nova.compute.manager [req-72c160c8-009c-4c09-af87-b10eb307afaf req-bfce8667-d032-4511-b164-4a6d4e87a840 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Processing event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.461 248514 DEBUG nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.464 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615501.4645479, c3fb322f-a9db-4396-b659-2307698e5524 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.464 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.466 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.471 248514 INFO nova.virt.libvirt.driver [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance spawned successfully.#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.471 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.509 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.515 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.518 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.519 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.519 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.520 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.520 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.520 248514 DEBUG nova.virt.libvirt.driver [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.564 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.586 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.626 248514 INFO nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Took 13.89 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.626 248514 DEBUG nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.749 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.797 248514 INFO nova.compute.manager [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Took 15.38 seconds to build instance.#033[00m
Dec 13 03:45:01 np0005558241 nova_compute[248510]: 2025-12-13 08:45:01.862 248514 DEBUG oslo_concurrency.lockutils [None req-7a8562ca-b588-4985-8d2e-a9a6eaba8c34 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:45:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2468: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.2 MiB/s wr, 33 op/s
Dec 13 03:45:03 np0005558241 nova_compute[248510]: 2025-12-13 08:45:03.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:45:03 np0005558241 nova_compute[248510]: 2025-12-13 08:45:03.863 248514 DEBUG nova.compute.manager [req-057147a7-8944-4c28-869a-433a53c1293c req-a29b63fd-f85d-4b6e-bac8-76a542653078 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:45:03 np0005558241 nova_compute[248510]: 2025-12-13 08:45:03.863 248514 DEBUG oslo_concurrency.lockutils [req-057147a7-8944-4c28-869a-433a53c1293c req-a29b63fd-f85d-4b6e-bac8-76a542653078 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:45:03 np0005558241 nova_compute[248510]: 2025-12-13 08:45:03.864 248514 DEBUG oslo_concurrency.lockutils [req-057147a7-8944-4c28-869a-433a53c1293c req-a29b63fd-f85d-4b6e-bac8-76a542653078 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:45:03 np0005558241 nova_compute[248510]: 2025-12-13 08:45:03.864 248514 DEBUG oslo_concurrency.lockutils [req-057147a7-8944-4c28-869a-433a53c1293c req-a29b63fd-f85d-4b6e-bac8-76a542653078 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:45:03 np0005558241 nova_compute[248510]: 2025-12-13 08:45:03.864 248514 DEBUG nova.compute.manager [req-057147a7-8944-4c28-869a-433a53c1293c req-a29b63fd-f85d-4b6e-bac8-76a542653078 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] No waiting events found dispatching network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:45:03 np0005558241 nova_compute[248510]: 2025-12-13 08:45:03.864 248514 WARNING nova.compute.manager [req-057147a7-8944-4c28-869a-433a53c1293c req-a29b63fd-f85d-4b6e-bac8-76a542653078 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received unexpected event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:45:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2469: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 622 KiB/s rd, 548 KiB/s wr, 31 op/s
Dec 13 03:45:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:45:05 np0005558241 nova_compute[248510]: 2025-12-13 08:45:05.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:45:05 np0005558241 nova_compute[248510]: 2025-12-13 08:45:05.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:45:05 np0005558241 nova_compute[248510]: 2025-12-13 08:45:05.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:45:05 np0005558241 nova_compute[248510]: 2025-12-13 08:45:05.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:45:05 np0005558241 nova_compute[248510]: 2025-12-13 08:45:05.808 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:45:05 np0005558241 nova_compute[248510]: 2025-12-13 08:45:05.808 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:45:05 np0005558241 nova_compute[248510]: 2025-12-13 08:45:05.808 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:45:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2470: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 03:45:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:06.081 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:45:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:45:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2304269005' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:45:06 np0005558241 nova_compute[248510]: 2025-12-13 08:45:06.368 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:45:06 np0005558241 nova_compute[248510]: 2025-12-13 08:45:06.450 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:45:06 np0005558241 nova_compute[248510]: 2025-12-13 08:45:06.450 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:45:06 np0005558241 nova_compute[248510]: 2025-12-13 08:45:06.588 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:06 np0005558241 nova_compute[248510]: 2025-12-13 08:45:06.615 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:45:06 np0005558241 nova_compute[248510]: 2025-12-13 08:45:06.616 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3646MB free_disk=59.96665725391358GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:45:06 np0005558241 nova_compute[248510]: 2025-12-13 08:45:06.616 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:45:06 np0005558241 nova_compute[248510]: 2025-12-13 08:45:06.617 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:45:06 np0005558241 nova_compute[248510]: 2025-12-13 08:45:06.747 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance c3fb322f-a9db-4396-b659-2307698e5524 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:45:06 np0005558241 nova_compute[248510]: 2025-12-13 08:45:06.747 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:45:06 np0005558241 nova_compute[248510]: 2025-12-13 08:45:06.747 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:45:06 np0005558241 nova_compute[248510]: 2025-12-13 08:45:06.752 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:06 np0005558241 nova_compute[248510]: 2025-12-13 08:45:06.823 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:45:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:45:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/648866817' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:45:07 np0005558241 nova_compute[248510]: 2025-12-13 08:45:07.435 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:45:07 np0005558241 nova_compute[248510]: 2025-12-13 08:45:07.441 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:45:07 np0005558241 nova_compute[248510]: 2025-12-13 08:45:07.464 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:45:07 np0005558241 nova_compute[248510]: 2025-12-13 08:45:07.496 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:45:07 np0005558241 nova_compute[248510]: 2025-12-13 08:45:07.497 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:45:07 np0005558241 nova_compute[248510]: 2025-12-13 08:45:07.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:45:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2471: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 03:45:09 np0005558241 nova_compute[248510]: 2025-12-13 08:45:09.024 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:45:09 np0005558241 nova_compute[248510]: 2025-12-13 08:45:09.024 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:45:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:45:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:45:09
Dec 13 03:45:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:45:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:45:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'backups', 'default.rgw.control', 'default.rgw.log', 'volumes', 'images', 'cephfs.cephfs.meta', 'vms']
Dec 13 03:45:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:45:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2472: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 03:45:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:45:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:45:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:45:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:45:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:45:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:45:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:45:10 np0005558241 nova_compute[248510]: 2025-12-13 08:45:10.930 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:45:10 np0005558241 nova_compute[248510]: 2025-12-13 08:45:10.931 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:45:10 np0005558241 nova_compute[248510]: 2025-12-13 08:45:10.954 248514 DEBUG nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:45:11 np0005558241 nova_compute[248510]: 2025-12-13 08:45:11.053 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:45:11 np0005558241 nova_compute[248510]: 2025-12-13 08:45:11.054 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:45:11 np0005558241 nova_compute[248510]: 2025-12-13 08:45:11.061 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:45:11 np0005558241 nova_compute[248510]: 2025-12-13 08:45:11.062 248514 INFO nova.compute.claims [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:45:11 np0005558241 nova_compute[248510]: 2025-12-13 08:45:11.226 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:45:11 np0005558241 nova_compute[248510]: 2025-12-13 08:45:11.590 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:11 np0005558241 nova_compute[248510]: 2025-12-13 08:45:11.752 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:45:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/378654992' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:45:11 np0005558241 nova_compute[248510]: 2025-12-13 08:45:11.825 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:45:11 np0005558241 nova_compute[248510]: 2025-12-13 08:45:11.831 248514 DEBUG nova.compute.provider_tree [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:45:11 np0005558241 nova_compute[248510]: 2025-12-13 08:45:11.878 248514 DEBUG nova.scheduler.client.report [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:45:11 np0005558241 nova_compute[248510]: 2025-12-13 08:45:11.900 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:45:11 np0005558241 nova_compute[248510]: 2025-12-13 08:45:11.901 248514 DEBUG nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:45:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2473: 321 pgs: 321 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Dec 13 03:45:11 np0005558241 nova_compute[248510]: 2025-12-13 08:45:11.956 248514 DEBUG nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:45:11 np0005558241 nova_compute[248510]: 2025-12-13 08:45:11.957 248514 DEBUG nova.network.neutron [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:45:11 np0005558241 nova_compute[248510]: 2025-12-13 08:45:11.982 248514 INFO nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.009 248514 DEBUG nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.132 248514 DEBUG nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.133 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.133 248514 INFO nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Creating image(s)#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.153 248514 DEBUG nova.storage.rbd_utils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.173 248514 DEBUG nova.storage.rbd_utils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.191 248514 DEBUG nova.storage.rbd_utils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.194 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.263 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.264 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.265 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.265 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.283 248514 DEBUG nova.storage.rbd_utils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.286 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.578 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.649 248514 DEBUG nova.storage.rbd_utils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] resizing rbd image 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.769 248514 DEBUG nova.objects.instance [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'migration_context' on Instance uuid 55a85a4e-6537-4498-8c7d-5c062cd421e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.802 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.803 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Ensure instance console log exists: /var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.803 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.804 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.805 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:45:12 np0005558241 nova_compute[248510]: 2025-12-13 08:45:12.859 248514 DEBUG nova.policy [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8948a1b0c26f43129cb50ef6f3872ecd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:45:12 np0005558241 podman[348135]: 2025-12-13 08:45:12.996146252 +0000 UTC m=+0.071166527 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Dec 13 03:45:13 np0005558241 podman[348134]: 2025-12-13 08:45:13.0088039 +0000 UTC m=+0.088513123 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:45:13 np0005558241 podman[348136]: 2025-12-13 08:45:13.041312695 +0000 UTC m=+0.102315828 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 03:45:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:45:13Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:39:62 10.100.0.6
Dec 13 03:45:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:45:13Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:39:62 10.100.0.6
Dec 13 03:45:13 np0005558241 nova_compute[248510]: 2025-12-13 08:45:13.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:45:13 np0005558241 nova_compute[248510]: 2025-12-13 08:45:13.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 13 03:45:13 np0005558241 nova_compute[248510]: 2025-12-13 08:45:13.877 248514 DEBUG nova.network.neutron [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Successfully created port: c5e3056c-03cb-4408-8f11-96bd3e735ff6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:45:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2474: 321 pgs: 321 active+clean; 115 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 98 op/s
Dec 13 03:45:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:45:14 np0005558241 nova_compute[248510]: 2025-12-13 08:45:14.795 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:45:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:45:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1515492163' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:45:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:45:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1515492163' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:45:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2475: 321 pgs: 321 active+clean; 165 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.8 MiB/s wr, 122 op/s
Dec 13 03:45:16 np0005558241 nova_compute[248510]: 2025-12-13 08:45:16.099 248514 DEBUG nova.network.neutron [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Successfully updated port: c5e3056c-03cb-4408-8f11-96bd3e735ff6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:45:16 np0005558241 nova_compute[248510]: 2025-12-13 08:45:16.135 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "refresh_cache-55a85a4e-6537-4498-8c7d-5c062cd421e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:45:16 np0005558241 nova_compute[248510]: 2025-12-13 08:45:16.136 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquired lock "refresh_cache-55a85a4e-6537-4498-8c7d-5c062cd421e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:45:16 np0005558241 nova_compute[248510]: 2025-12-13 08:45:16.136 248514 DEBUG nova.network.neutron [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:45:16 np0005558241 nova_compute[248510]: 2025-12-13 08:45:16.372 248514 DEBUG nova.compute.manager [req-be620b57-dec8-49cb-a331-b7bc1cb86095 req-7532e86f-252a-4463-b7bc-bd87dafd8b65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-changed-c5e3056c-03cb-4408-8f11-96bd3e735ff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:45:16 np0005558241 nova_compute[248510]: 2025-12-13 08:45:16.373 248514 DEBUG nova.compute.manager [req-be620b57-dec8-49cb-a331-b7bc1cb86095 req-7532e86f-252a-4463-b7bc-bd87dafd8b65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Refreshing instance network info cache due to event network-changed-c5e3056c-03cb-4408-8f11-96bd3e735ff6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:45:16 np0005558241 nova_compute[248510]: 2025-12-13 08:45:16.373 248514 DEBUG oslo_concurrency.lockutils [req-be620b57-dec8-49cb-a331-b7bc1cb86095 req-7532e86f-252a-4463-b7bc-bd87dafd8b65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-55a85a4e-6537-4498-8c7d-5c062cd421e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:45:16 np0005558241 nova_compute[248510]: 2025-12-13 08:45:16.593 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:16 np0005558241 nova_compute[248510]: 2025-12-13 08:45:16.754 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:16 np0005558241 nova_compute[248510]: 2025-12-13 08:45:16.952 248514 DEBUG nova.network.neutron [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:45:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2476: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.799 248514 DEBUG nova.network.neutron [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Updating instance_info_cache with network_info: [{"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.846 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Releasing lock "refresh_cache-55a85a4e-6537-4498-8c7d-5c062cd421e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.847 248514 DEBUG nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Instance network_info: |[{"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.847 248514 DEBUG oslo_concurrency.lockutils [req-be620b57-dec8-49cb-a331-b7bc1cb86095 req-7532e86f-252a-4463-b7bc-bd87dafd8b65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-55a85a4e-6537-4498-8c7d-5c062cd421e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.847 248514 DEBUG nova.network.neutron [req-be620b57-dec8-49cb-a331-b7bc1cb86095 req-7532e86f-252a-4463-b7bc-bd87dafd8b65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Refreshing network info cache for port c5e3056c-03cb-4408-8f11-96bd3e735ff6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.850 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Start _get_guest_xml network_info=[{"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.855 248514 WARNING nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.860 248514 DEBUG nova.virt.libvirt.host [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.861 248514 DEBUG nova.virt.libvirt.host [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.864 248514 DEBUG nova.virt.libvirt.host [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.864 248514 DEBUG nova.virt.libvirt.host [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.864 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.865 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.865 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.865 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.865 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.866 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.866 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.866 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.866 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.867 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.867 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.867 248514 DEBUG nova.virt.hardware [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:45:18 np0005558241 nova_compute[248510]: 2025-12-13 08:45:18.870 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:45:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:45:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:45:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4288817914' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:45:19 np0005558241 nova_compute[248510]: 2025-12-13 08:45:19.512 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:45:19 np0005558241 nova_compute[248510]: 2025-12-13 08:45:19.532 248514 DEBUG nova.storage.rbd_utils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:45:19 np0005558241 nova_compute[248510]: 2025-12-13 08:45:19.535 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:45:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2477: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Dec 13 03:45:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:45:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2434092488' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.077 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.079 248514 DEBUG nova.virt.libvirt.vif [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:45:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-981558366',display_name='tempest-ServersNegativeTestJSON-server-981558366',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-981558366',id=104,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-j9mepcx2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-ServersNegati
veTestJSON-1471623163-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:45:12Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=55a85a4e-6537-4498-8c7d-5c062cd421e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.080 248514 DEBUG nova.network.os_vif_util [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.081 248514 DEBUG nova.network.os_vif_util [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:0e:dd,bridge_name='br-int',has_traffic_filtering=True,id=c5e3056c-03cb-4408-8f11-96bd3e735ff6,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5e3056c-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.082 248514 DEBUG nova.objects.instance [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'pci_devices' on Instance uuid 55a85a4e-6537-4498-8c7d-5c062cd421e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.117 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:45:20 np0005558241 nova_compute[248510]:  <uuid>55a85a4e-6537-4498-8c7d-5c062cd421e2</uuid>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:  <name>instance-00000068</name>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersNegativeTestJSON-server-981558366</nova:name>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:45:18</nova:creationTime>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:45:20 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:        <nova:user uuid="8948a1b0c26f43129cb50ef6f3872ecd">tempest-ServersNegativeTestJSON-1471623163-project-member</nova:user>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:        <nova:project uuid="d2d4d23379cc4b03bbdd72a9134fdd9b">tempest-ServersNegativeTestJSON-1471623163</nova:project>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:        <nova:port uuid="c5e3056c-03cb-4408-8f11-96bd3e735ff6">
Dec 13 03:45:20 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <entry name="serial">55a85a4e-6537-4498-8c7d-5c062cd421e2</entry>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <entry name="uuid">55a85a4e-6537-4498-8c7d-5c062cd421e2</entry>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/55a85a4e-6537-4498-8c7d-5c062cd421e2_disk">
Dec 13 03:45:20 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:45:20 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/55a85a4e-6537-4498-8c7d-5c062cd421e2_disk.config">
Dec 13 03:45:20 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:45:20 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:2b:0e:dd"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <target dev="tapc5e3056c-03"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2/console.log" append="off"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:45:20 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:45:20 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:45:20 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:45:20 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.118 248514 DEBUG nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Preparing to wait for external event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.119 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.120 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.120 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.121 248514 DEBUG nova.virt.libvirt.vif [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:45:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-981558366',display_name='tempest-ServersNegativeTestJSON-server-981558366',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-981558366',id=104,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-j9mepcx2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-Ser
versNegativeTestJSON-1471623163-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:45:12Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=55a85a4e-6537-4498-8c7d-5c062cd421e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.121 248514 DEBUG nova.network.os_vif_util [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.122 248514 DEBUG nova.network.os_vif_util [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:0e:dd,bridge_name='br-int',has_traffic_filtering=True,id=c5e3056c-03cb-4408-8f11-96bd3e735ff6,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5e3056c-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.122 248514 DEBUG os_vif [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:0e:dd,bridge_name='br-int',has_traffic_filtering=True,id=c5e3056c-03cb-4408-8f11-96bd3e735ff6,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5e3056c-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.123 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.124 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.124 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.129 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.129 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5e3056c-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.130 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc5e3056c-03, col_values=(('external_ids', {'iface-id': 'c5e3056c-03cb-4408-8f11-96bd3e735ff6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:0e:dd', 'vm-uuid': '55a85a4e-6537-4498-8c7d-5c062cd421e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.131 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:20 np0005558241 NetworkManager[50376]: <info>  [1765615520.1327] manager: (tapc5e3056c-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.133 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.138 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.141 248514 INFO os_vif [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:0e:dd,bridge_name='br-int',has_traffic_filtering=True,id=c5e3056c-03cb-4408-8f11-96bd3e735ff6,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5e3056c-03')#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.238 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.239 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.239 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No VIF found with MAC fa:16:3e:2b:0e:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.241 248514 INFO nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Using config drive#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.270 248514 DEBUG nova.storage.rbd_utils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.850 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.954 248514 INFO nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Creating config drive at /var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2/disk.config#033[00m
Dec 13 03:45:20 np0005558241 nova_compute[248510]: 2025-12-13 08:45:20.959 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdqmdb9fd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001113967037902661 of space, bias 1.0, pg target 0.3341901113707983 quantized to 32 (current 32)
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006676236113091034 of space, bias 1.0, pg target 0.20028708339273102 quantized to 32 (current 32)
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.90077056358155e-07 of space, bias 4.0, pg target 0.000708092467629786 quantized to 16 (current 32)
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.116 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdqmdb9fd" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.142 248514 DEBUG nova.storage.rbd_utils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.145 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2/disk.config 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.276 248514 DEBUG oslo_concurrency.processutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2/disk.config 55a85a4e-6537-4498-8c7d-5c062cd421e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.277 248514 INFO nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Deleting local config drive /var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2/disk.config because it was imported into RBD.#033[00m
Dec 13 03:45:21 np0005558241 kernel: tapc5e3056c-03: entered promiscuous mode
Dec 13 03:45:21 np0005558241 NetworkManager[50376]: <info>  [1765615521.3229] manager: (tapc5e3056c-03): new Tun device (/org/freedesktop/NetworkManager/Devices/411)
Dec 13 03:45:21 np0005558241 ovn_controller[148476]: 2025-12-13T08:45:21Z|00989|binding|INFO|Claiming lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 for this chassis.
Dec 13 03:45:21 np0005558241 ovn_controller[148476]: 2025-12-13T08:45:21Z|00990|binding|INFO|c5e3056c-03cb-4408-8f11-96bd3e735ff6: Claiming fa:16:3e:2b:0e:dd 10.100.0.3
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.325 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.342 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:0e:dd 10.100.0.3'], port_security=['fa:16:3e:2b:0e:dd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '55a85a4e-6537-4498-8c7d-5c062cd421e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c5e3056c-03cb-4408-8f11-96bd3e735ff6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:45:21 np0005558241 ovn_controller[148476]: 2025-12-13T08:45:21Z|00991|binding|INFO|Setting lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 ovn-installed in OVS
Dec 13 03:45:21 np0005558241 ovn_controller[148476]: 2025-12-13T08:45:21Z|00992|binding|INFO|Setting lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 up in Southbound
Dec 13 03:45:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.344 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c5e3056c-03cb-4408-8f11-96bd3e735ff6 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b bound to our chassis#033[00m
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.344 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.347 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b#033[00m
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.348 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:21 np0005558241 systemd-udevd[348331]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:45:21 np0005558241 systemd-machined[210538]: New machine qemu-128-instance-00000068.
Dec 13 03:45:21 np0005558241 NetworkManager[50376]: <info>  [1765615521.3681] device (tapc5e3056c-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:45:21 np0005558241 NetworkManager[50376]: <info>  [1765615521.3687] device (tapc5e3056c-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:45:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.370 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5520604c-3fcd-407b-adba-4900ef51eca3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:21 np0005558241 systemd[1]: Started Virtual Machine qemu-128-instance-00000068.
Dec 13 03:45:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.401 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3ac0784d-f63e-4921-ae1a-66751240020f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.404 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c75e2202-1273-4439-b97d-15483f991e6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.438 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5fbf31-26eb-47ed-8014-02f4e421e894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.455 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8222d065-9c4a-450c-9a44-526ec4a8da9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 807697, 'reachable_time': 38471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348345, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.468 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bc258d9b-51d0-4b4e-9cba-51c40064f93c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6ad7f755-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 807710, 'tstamp': 807710}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348346, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6ad7f755-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 807712, 'tstamp': 807712}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348346, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.469 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.471 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.472 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ad7f755-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:45:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.472 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:45:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.473 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ad7f755-f0, col_values=(('external_ids', {'iface-id': '683d7da0-6f1e-41a6-9158-6204fb05ee50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:45:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:21.473 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.594 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.735 248514 DEBUG nova.network.neutron [req-be620b57-dec8-49cb-a331-b7bc1cb86095 req-7532e86f-252a-4463-b7bc-bd87dafd8b65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Updated VIF entry in instance network info cache for port c5e3056c-03cb-4408-8f11-96bd3e735ff6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.736 248514 DEBUG nova.network.neutron [req-be620b57-dec8-49cb-a331-b7bc1cb86095 req-7532e86f-252a-4463-b7bc-bd87dafd8b65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Updating instance_info_cache with network_info: [{"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.754 248514 DEBUG nova.compute.manager [req-474a3b96-6035-4788-91d3-839a6a5bc83f req-dbbbcf85-43f9-4ce4-a230-33d0a2d1f316 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.755 248514 DEBUG oslo_concurrency.lockutils [req-474a3b96-6035-4788-91d3-839a6a5bc83f req-dbbbcf85-43f9-4ce4-a230-33d0a2d1f316 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.755 248514 DEBUG oslo_concurrency.lockutils [req-474a3b96-6035-4788-91d3-839a6a5bc83f req-dbbbcf85-43f9-4ce4-a230-33d0a2d1f316 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.756 248514 DEBUG oslo_concurrency.lockutils [req-474a3b96-6035-4788-91d3-839a6a5bc83f req-dbbbcf85-43f9-4ce4-a230-33d0a2d1f316 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.756 248514 DEBUG nova.compute.manager [req-474a3b96-6035-4788-91d3-839a6a5bc83f req-dbbbcf85-43f9-4ce4-a230-33d0a2d1f316 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Processing event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.769 248514 DEBUG oslo_concurrency.lockutils [req-be620b57-dec8-49cb-a331-b7bc1cb86095 req-7532e86f-252a-4463-b7bc-bd87dafd8b65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-55a85a4e-6537-4498-8c7d-5c062cd421e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:45:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2478: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.992 248514 DEBUG nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.993 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615521.991312, 55a85a4e-6537-4498-8c7d-5c062cd421e2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.993 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] VM Started (Lifecycle Event)#033[00m
Dec 13 03:45:21 np0005558241 nova_compute[248510]: 2025-12-13 08:45:21.997 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.002 248514 INFO nova.virt.libvirt.driver [-] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Instance spawned successfully.#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.002 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.070 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.078 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.078 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.079 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.080 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.081 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.081 248514 DEBUG nova.virt.libvirt.driver [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.086 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.122 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.123 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615521.9917, 55a85a4e-6537-4498-8c7d-5c062cd421e2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.123 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.158 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.162 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615521.996112, 55a85a4e-6537-4498-8c7d-5c062cd421e2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.162 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.214 248514 INFO nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Took 10.08 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.215 248514 DEBUG nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.270 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.273 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.330 248514 INFO nova.compute.manager [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Took 11.30 seconds to build instance.#033[00m
Dec 13 03:45:22 np0005558241 nova_compute[248510]: 2025-12-13 08:45:22.387 248514 DEBUG oslo_concurrency.lockutils [None req-14a527a1-2cdd-4893-8208-54761ada2a8a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:45:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2479: 321 pgs: 321 active+clean; 167 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 832 KiB/s rd, 3.9 MiB/s wr, 111 op/s
Dec 13 03:45:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:45:24 np0005558241 nova_compute[248510]: 2025-12-13 08:45:24.152 248514 DEBUG nova.compute.manager [req-cc4ddbc5-f2dc-460c-a2b0-506f38331f01 req-d54e08de-640d-44cb-8d69-f46abba54e88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:45:24 np0005558241 nova_compute[248510]: 2025-12-13 08:45:24.153 248514 DEBUG oslo_concurrency.lockutils [req-cc4ddbc5-f2dc-460c-a2b0-506f38331f01 req-d54e08de-640d-44cb-8d69-f46abba54e88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:45:24 np0005558241 nova_compute[248510]: 2025-12-13 08:45:24.153 248514 DEBUG oslo_concurrency.lockutils [req-cc4ddbc5-f2dc-460c-a2b0-506f38331f01 req-d54e08de-640d-44cb-8d69-f46abba54e88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:45:24 np0005558241 nova_compute[248510]: 2025-12-13 08:45:24.153 248514 DEBUG oslo_concurrency.lockutils [req-cc4ddbc5-f2dc-460c-a2b0-506f38331f01 req-d54e08de-640d-44cb-8d69-f46abba54e88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:45:24 np0005558241 nova_compute[248510]: 2025-12-13 08:45:24.154 248514 DEBUG nova.compute.manager [req-cc4ddbc5-f2dc-460c-a2b0-506f38331f01 req-d54e08de-640d-44cb-8d69-f46abba54e88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] No waiting events found dispatching network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:45:24 np0005558241 nova_compute[248510]: 2025-12-13 08:45:24.154 248514 WARNING nova.compute.manager [req-cc4ddbc5-f2dc-460c-a2b0-506f38331f01 req-d54e08de-640d-44cb-8d69-f46abba54e88 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received unexpected event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:45:24 np0005558241 nova_compute[248510]: 2025-12-13 08:45:24.796 248514 DEBUG oslo_concurrency.lockutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:45:24 np0005558241 nova_compute[248510]: 2025-12-13 08:45:24.797 248514 DEBUG oslo_concurrency.lockutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:45:24 np0005558241 nova_compute[248510]: 2025-12-13 08:45:24.797 248514 DEBUG oslo_concurrency.lockutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:45:24 np0005558241 nova_compute[248510]: 2025-12-13 08:45:24.797 248514 DEBUG oslo_concurrency.lockutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:45:24 np0005558241 nova_compute[248510]: 2025-12-13 08:45:24.798 248514 DEBUG oslo_concurrency.lockutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:45:24 np0005558241 nova_compute[248510]: 2025-12-13 08:45:24.799 248514 INFO nova.compute.manager [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Terminating instance#033[00m
Dec 13 03:45:24 np0005558241 nova_compute[248510]: 2025-12-13 08:45:24.800 248514 DEBUG nova.compute.manager [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:45:24 np0005558241 kernel: tapc5e3056c-03 (unregistering): left promiscuous mode
Dec 13 03:45:24 np0005558241 NetworkManager[50376]: <info>  [1765615524.8551] device (tapc5e3056c-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:45:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:45:24Z|00993|binding|INFO|Releasing lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 from this chassis (sb_readonly=0)
Dec 13 03:45:24 np0005558241 nova_compute[248510]: 2025-12-13 08:45:24.908 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:45:24Z|00994|binding|INFO|Setting lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 down in Southbound
Dec 13 03:45:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:45:24Z|00995|binding|INFO|Removing iface tapc5e3056c-03 ovn-installed in OVS
Dec 13 03:45:24 np0005558241 nova_compute[248510]: 2025-12-13 08:45:24.910 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:24.922 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:0e:dd 10.100.0.3'], port_security=['fa:16:3e:2b:0e:dd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '55a85a4e-6537-4498-8c7d-5c062cd421e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c5e3056c-03cb-4408-8f11-96bd3e735ff6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:45:24 np0005558241 nova_compute[248510]: 2025-12-13 08:45:24.922 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:24.924 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c5e3056c-03cb-4408-8f11-96bd3e735ff6 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b unbound from our chassis#033[00m
Dec 13 03:45:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:24.925 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b#033[00m
Dec 13 03:45:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:24.941 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[68ca1ebd-b9c7-42bd-845e-25f746dae6a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:24.969 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[84f99f27-d488-464d-842e-8273fc24b62c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:24 np0005558241 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000068.scope: Deactivated successfully.
Dec 13 03:45:24 np0005558241 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000068.scope: Consumed 3.214s CPU time.
Dec 13 03:45:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:24.972 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e31eb2-27fd-402c-89e1-03ea17ebdcf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:24 np0005558241 systemd-machined[210538]: Machine qemu-128-instance-00000068 terminated.
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.001 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[344c0436-98b6-49d9-bd1a-8d64d34d71b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:25 np0005558241 kernel: tapc5e3056c-03: entered promiscuous mode
Dec 13 03:45:25 np0005558241 kernel: tapc5e3056c-03 (unregistering): left promiscuous mode
Dec 13 03:45:25 np0005558241 NetworkManager[50376]: <info>  [1765615525.0177] manager: (tapc5e3056c-03): new Tun device (/org/freedesktop/NetworkManager/Devices/412)
Dec 13 03:45:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:45:25Z|00996|binding|INFO|Claiming lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 for this chassis.
Dec 13 03:45:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:45:25Z|00997|binding|INFO|c5e3056c-03cb-4408-8f11-96bd3e735ff6: Claiming fa:16:3e:2b:0e:dd 10.100.0.3
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.019 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.021 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e98982b1-5e43-40d0-85d9-bd7aec1f1655]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 807697, 'reachable_time': 38471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348402, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.027 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:0e:dd 10.100.0.3'], port_security=['fa:16:3e:2b:0e:dd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '55a85a4e-6537-4498-8c7d-5c062cd421e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c5e3056c-03cb-4408-8f11-96bd3e735ff6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.034 248514 INFO nova.virt.libvirt.driver [-] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Instance destroyed successfully.#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.035 248514 DEBUG nova.objects.instance [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'resources' on Instance uuid 55a85a4e-6537-4498-8c7d-5c062cd421e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:45:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:45:25Z|00998|binding|INFO|Setting lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 ovn-installed in OVS
Dec 13 03:45:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:45:25Z|00999|binding|INFO|Setting lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 up in Southbound
Dec 13 03:45:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:45:25Z|01000|binding|INFO|Releasing lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 from this chassis (sb_readonly=1)
Dec 13 03:45:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:45:25Z|01001|if_status|INFO|Dropped 2 log messages in last 386 seconds (most recently, 386 seconds ago) due to excessive rate
Dec 13 03:45:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:45:25Z|01002|if_status|INFO|Not setting lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 down as sb is readonly
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.041 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:45:25Z|01003|binding|INFO|Removing iface tapc5e3056c-03 ovn-installed in OVS
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.040 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[37993ee2-0d9c-45ad-ad77-9a2f6130b37e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6ad7f755-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 807710, 'tstamp': 807710}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348406, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6ad7f755-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 807712, 'tstamp': 807712}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348406, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.043 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.044 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:45:25Z|01004|binding|INFO|Releasing lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 from this chassis (sb_readonly=0)
Dec 13 03:45:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:45:25Z|01005|binding|INFO|Setting lport c5e3056c-03cb-4408-8f11-96bd3e735ff6 down in Southbound
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.053 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.059 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ad7f755-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.059 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.059 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ad7f755-f0, col_values=(('external_ids', {'iface-id': '683d7da0-6f1e-41a6-9158-6204fb05ee50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.060 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.059 248514 DEBUG nova.virt.libvirt.vif [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:45:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-981558366',display_name='tempest-ServersNegativeTestJSON-server-981558366',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-981558366',id=104,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:45:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-j9mepcx2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',im
age_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-ServersNegativeTestJSON-1471623163-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:45:22Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=55a85a4e-6537-4498-8c7d-5c062cd421e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.060 248514 DEBUG nova.network.os_vif_util [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "address": "fa:16:3e:2b:0e:dd", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5e3056c-03", "ovs_interfaceid": "c5e3056c-03cb-4408-8f11-96bd3e735ff6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.061 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c5e3056c-03cb-4408-8f11-96bd3e735ff6 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b unbound from our chassis#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.060 248514 DEBUG nova.network.os_vif_util [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:0e:dd,bridge_name='br-int',has_traffic_filtering=True,id=c5e3056c-03cb-4408-8f11-96bd3e735ff6,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5e3056c-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.061 248514 DEBUG os_vif [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:0e:dd,bridge_name='br-int',has_traffic_filtering=True,id=c5e3056c-03cb-4408-8f11-96bd3e735ff6,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5e3056c-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.062 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.062 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.063 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:0e:dd 10.100.0.3'], port_security=['fa:16:3e:2b:0e:dd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '55a85a4e-6537-4498-8c7d-5c062cd421e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c5e3056c-03cb-4408-8f11-96bd3e735ff6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.063 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5e3056c-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.063 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.064 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.066 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.068 248514 INFO os_vif [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:0e:dd,bridge_name='br-int',has_traffic_filtering=True,id=c5e3056c-03cb-4408-8f11-96bd3e735ff6,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5e3056c-03')#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.075 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8de863-7fc9-483c-86c2-f96aa0f065f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.115 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[66c0c79a-8383-4627-b358-42e08ebcf469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.118 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[191c1228-6c30-4789-a972-bb90ecade421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.150 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2a20ff71-0bcd-43c4-b6a6-ea7631b1c3c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.172 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cf29b573-a992-4c09-89a0-d9aa50ed54be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 807697, 'reachable_time': 38471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348432, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.190 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7e9941e4-198d-4999-9933-353181d95a66]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6ad7f755-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 807710, 'tstamp': 807710}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348433, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6ad7f755-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 807712, 'tstamp': 807712}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348433, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.192 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.194 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.196 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ad7f755-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.197 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.197 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ad7f755-f0, col_values=(('external_ids', {'iface-id': '683d7da0-6f1e-41a6-9158-6204fb05ee50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.198 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.198 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c5e3056c-03cb-4408-8f11-96bd3e735ff6 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b unbound from our chassis#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.199 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.216 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2564bb3e-0ed6-4a7b-8d89-16ab1359fdd0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.256 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[75931ddd-c762-4bdd-8bd3-cf4825beacb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.260 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e6291014-6401-4883-82b5-428eb9d17357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.290 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[26f79c52-67d8-4753-a151-841175a3f246]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.300 248514 INFO nova.virt.libvirt.driver [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Deleting instance files /var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2_del#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.301 248514 INFO nova.virt.libvirt.driver [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Deletion of /var/lib/nova/instances/55a85a4e-6537-4498-8c7d-5c062cd421e2_del complete#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.306 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd826a2c-43c9-4be1-adcb-d84802fb281b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 807697, 'reachable_time': 38471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348442, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.322 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5c064fef-691b-4d5f-874c-749b426c6e3d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6ad7f755-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 807710, 'tstamp': 807710}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348443, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6ad7f755-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 807712, 'tstamp': 807712}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348443, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.324 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.326 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.327 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.328 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ad7f755-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.328 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.329 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ad7f755-f0, col_values=(('external_ids', {'iface-id': '683d7da0-6f1e-41a6-9158-6204fb05ee50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:45:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:25.330 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.550 248514 INFO nova.compute.manager [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.551 248514 DEBUG oslo.service.loopingcall [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.552 248514 DEBUG nova.compute.manager [-] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:45:25 np0005558241 nova_compute[248510]: 2025-12-13 08:45:25.552 248514 DEBUG nova.network.neutron [-] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:45:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2480: 321 pgs: 321 active+clean; 141 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 124 op/s
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.426 248514 DEBUG nova.compute.manager [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-vif-unplugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.426 248514 DEBUG oslo_concurrency.lockutils [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.427 248514 DEBUG oslo_concurrency.lockutils [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.427 248514 DEBUG oslo_concurrency.lockutils [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.427 248514 DEBUG nova.compute.manager [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] No waiting events found dispatching network-vif-unplugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.427 248514 DEBUG nova.compute.manager [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-vif-unplugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.427 248514 DEBUG nova.compute.manager [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.428 248514 DEBUG oslo_concurrency.lockutils [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.428 248514 DEBUG oslo_concurrency.lockutils [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.428 248514 DEBUG oslo_concurrency.lockutils [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.428 248514 DEBUG nova.compute.manager [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] No waiting events found dispatching network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.429 248514 WARNING nova.compute.manager [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received unexpected event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.429 248514 DEBUG nova.compute.manager [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.429 248514 DEBUG oslo_concurrency.lockutils [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.429 248514 DEBUG oslo_concurrency.lockutils [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.429 248514 DEBUG oslo_concurrency.lockutils [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.430 248514 DEBUG nova.compute.manager [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] No waiting events found dispatching network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.430 248514 WARNING nova.compute.manager [req-142ccc72-a521-4e39-9726-f35545fde7b9 req-30db3342-2242-4c4b-880a-f0cd29adbf82 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received unexpected event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.595 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.824 248514 DEBUG nova.network.neutron [-] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.850 248514 INFO nova.compute.manager [-] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Took 1.30 seconds to deallocate network for instance.#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.908 248514 DEBUG oslo_concurrency.lockutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:45:26 np0005558241 nova_compute[248510]: 2025-12-13 08:45:26.909 248514 DEBUG oslo_concurrency.lockutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:45:27 np0005558241 nova_compute[248510]: 2025-12-13 08:45:27.011 248514 DEBUG oslo_concurrency.processutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:45:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:45:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1026977894' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:45:27 np0005558241 nova_compute[248510]: 2025-12-13 08:45:27.551 248514 DEBUG oslo_concurrency.processutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:45:27 np0005558241 nova_compute[248510]: 2025-12-13 08:45:27.559 248514 DEBUG nova.compute.provider_tree [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:45:27 np0005558241 nova_compute[248510]: 2025-12-13 08:45:27.579 248514 DEBUG nova.scheduler.client.report [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:45:27 np0005558241 nova_compute[248510]: 2025-12-13 08:45:27.667 248514 DEBUG oslo_concurrency.lockutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:45:27 np0005558241 nova_compute[248510]: 2025-12-13 08:45:27.716 248514 INFO nova.scheduler.client.report [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Deleted allocations for instance 55a85a4e-6537-4498-8c7d-5c062cd421e2#033[00m
Dec 13 03:45:27 np0005558241 nova_compute[248510]: 2025-12-13 08:45:27.807 248514 DEBUG oslo_concurrency.lockutils [None req-6c946c95-7243-4779-94a0-0987ff3a2ec1 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:45:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2481: 321 pgs: 321 active+clean; 121 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 81 KiB/s wr, 104 op/s
Dec 13 03:45:28 np0005558241 nova_compute[248510]: 2025-12-13 08:45:28.603 248514 DEBUG nova.compute.manager [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:45:28 np0005558241 nova_compute[248510]: 2025-12-13 08:45:28.603 248514 DEBUG oslo_concurrency.lockutils [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:45:28 np0005558241 nova_compute[248510]: 2025-12-13 08:45:28.604 248514 DEBUG oslo_concurrency.lockutils [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:45:28 np0005558241 nova_compute[248510]: 2025-12-13 08:45:28.604 248514 DEBUG oslo_concurrency.lockutils [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:45:28 np0005558241 nova_compute[248510]: 2025-12-13 08:45:28.604 248514 DEBUG nova.compute.manager [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] No waiting events found dispatching network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:45:28 np0005558241 nova_compute[248510]: 2025-12-13 08:45:28.604 248514 WARNING nova.compute.manager [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received unexpected event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 for instance with vm_state deleted and task_state None.
Dec 13 03:45:28 np0005558241 nova_compute[248510]: 2025-12-13 08:45:28.604 248514 DEBUG nova.compute.manager [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:45:28 np0005558241 nova_compute[248510]: 2025-12-13 08:45:28.605 248514 DEBUG oslo_concurrency.lockutils [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:45:28 np0005558241 nova_compute[248510]: 2025-12-13 08:45:28.605 248514 DEBUG oslo_concurrency.lockutils [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:45:28 np0005558241 nova_compute[248510]: 2025-12-13 08:45:28.605 248514 DEBUG oslo_concurrency.lockutils [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "55a85a4e-6537-4498-8c7d-5c062cd421e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:45:28 np0005558241 nova_compute[248510]: 2025-12-13 08:45:28.605 248514 DEBUG nova.compute.manager [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] No waiting events found dispatching network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:45:28 np0005558241 nova_compute[248510]: 2025-12-13 08:45:28.605 248514 WARNING nova.compute.manager [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received unexpected event network-vif-plugged-c5e3056c-03cb-4408-8f11-96bd3e735ff6 for instance with vm_state deleted and task_state None.
Dec 13 03:45:28 np0005558241 nova_compute[248510]: 2025-12-13 08:45:28.606 248514 DEBUG nova.compute.manager [req-004cb201-d229-4ac9-aa64-1900c8aef661 req-c24bb499-f078-4aca-8e2a-75f87f038559 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Received event network-vif-deleted-c5e3056c-03cb-4408-8f11-96bd3e735ff6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:45:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:45:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:45:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:45:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:45:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:45:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2482: 321 pgs: 321 active+clean; 121 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 92 op/s
Dec 13 03:45:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:45:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:45:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:45:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:45:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:45:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:45:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:45:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:45:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:45:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:45:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:30 np0005558241 nova_compute[248510]: 2025-12-13 08:45:30.064 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:45:30 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:45:30 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:45:30 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:45:30 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:45:30 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:45:30 np0005558241 podman[348680]: 2025-12-13 08:45:30.419656852 +0000 UTC m=+0.049499793 container create 70c40a1581ca9a1d823afde63a5a4fc020437382cc246c90910985afae8dce3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_agnesi, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 03:45:30 np0005558241 systemd[1]: Started libpod-conmon-70c40a1581ca9a1d823afde63a5a4fc020437382cc246c90910985afae8dce3d.scope.
Dec 13 03:45:30 np0005558241 podman[348680]: 2025-12-13 08:45:30.392463109 +0000 UTC m=+0.022306120 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:30 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:45:30 np0005558241 podman[348680]: 2025-12-13 08:45:30.55666815 +0000 UTC m=+0.186511111 container init 70c40a1581ca9a1d823afde63a5a4fc020437382cc246c90910985afae8dce3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_agnesi, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:45:30 np0005558241 podman[348680]: 2025-12-13 08:45:30.564526747 +0000 UTC m=+0.194369678 container start 70c40a1581ca9a1d823afde63a5a4fc020437382cc246c90910985afae8dce3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_agnesi, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:30 np0005558241 xenodochial_agnesi[348696]: 167 167
Dec 13 03:45:30 np0005558241 systemd[1]: libpod-70c40a1581ca9a1d823afde63a5a4fc020437382cc246c90910985afae8dce3d.scope: Deactivated successfully.
Dec 13 03:45:30 np0005558241 podman[348680]: 2025-12-13 08:45:30.631367015 +0000 UTC m=+0.261209946 container attach 70c40a1581ca9a1d823afde63a5a4fc020437382cc246c90910985afae8dce3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 03:45:30 np0005558241 podman[348680]: 2025-12-13 08:45:30.631951239 +0000 UTC m=+0.261794170 container died 70c40a1581ca9a1d823afde63a5a4fc020437382cc246c90910985afae8dce3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_agnesi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5bde2c46014e63e527fbd3d3217b811af3d46c27c6eb7d7fbb16dd12675af55d-merged.mount: Deactivated successfully.
Dec 13 03:45:30 np0005558241 podman[348680]: 2025-12-13 08:45:30.763145452 +0000 UTC m=+0.392988383 container remove 70c40a1581ca9a1d823afde63a5a4fc020437382cc246c90910985afae8dce3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_agnesi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:45:30 np0005558241 systemd[1]: libpod-conmon-70c40a1581ca9a1d823afde63a5a4fc020437382cc246c90910985afae8dce3d.scope: Deactivated successfully.
Dec 13 03:45:30 np0005558241 podman[348721]: 2025-12-13 08:45:30.967306985 +0000 UTC m=+0.081314422 container create 0044e547ffc1286571c7fb73e3b92a700b8c7bd7b6ac11755411a68ad4232d8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:45:31 np0005558241 podman[348721]: 2025-12-13 08:45:30.913549876 +0000 UTC m=+0.027557333 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:31 np0005558241 systemd[1]: Started libpod-conmon-0044e547ffc1286571c7fb73e3b92a700b8c7bd7b6ac11755411a68ad4232d8b.scope.
Dec 13 03:45:31 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:45:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0278d58159ef24f2cf7788a374bc09556feb4871b903b43fdfb85aa2e24298ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0278d58159ef24f2cf7788a374bc09556feb4871b903b43fdfb85aa2e24298ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0278d58159ef24f2cf7788a374bc09556feb4871b903b43fdfb85aa2e24298ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0278d58159ef24f2cf7788a374bc09556feb4871b903b43fdfb85aa2e24298ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0278d58159ef24f2cf7788a374bc09556feb4871b903b43fdfb85aa2e24298ba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:31 np0005558241 podman[348721]: 2025-12-13 08:45:31.059667943 +0000 UTC m=+0.173675390 container init 0044e547ffc1286571c7fb73e3b92a700b8c7bd7b6ac11755411a68ad4232d8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:31 np0005558241 podman[348721]: 2025-12-13 08:45:31.068006252 +0000 UTC m=+0.182013689 container start 0044e547ffc1286571c7fb73e3b92a700b8c7bd7b6ac11755411a68ad4232d8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 03:45:31 np0005558241 podman[348721]: 2025-12-13 08:45:31.072086585 +0000 UTC m=+0.186094022 container attach 0044e547ffc1286571c7fb73e3b92a700b8c7bd7b6ac11755411a68ad4232d8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:45:31 np0005558241 dazzling_taussig[348737]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:45:31 np0005558241 dazzling_taussig[348737]: --> All data devices are unavailable
Dec 13 03:45:31 np0005558241 systemd[1]: libpod-0044e547ffc1286571c7fb73e3b92a700b8c7bd7b6ac11755411a68ad4232d8b.scope: Deactivated successfully.
Dec 13 03:45:31 np0005558241 podman[348721]: 2025-12-13 08:45:31.600211198 +0000 UTC m=+0.714218665 container died 0044e547ffc1286571c7fb73e3b92a700b8c7bd7b6ac11755411a68ad4232d8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:31 np0005558241 nova_compute[248510]: 2025-12-13 08:45:31.598 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:45:31 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0278d58159ef24f2cf7788a374bc09556feb4871b903b43fdfb85aa2e24298ba-merged.mount: Deactivated successfully.
Dec 13 03:45:31 np0005558241 podman[348721]: 2025-12-13 08:45:31.798892304 +0000 UTC m=+0.912899771 container remove 0044e547ffc1286571c7fb73e3b92a700b8c7bd7b6ac11755411a68ad4232d8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_taussig, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 03:45:31 np0005558241 systemd[1]: libpod-conmon-0044e547ffc1286571c7fb73e3b92a700b8c7bd7b6ac11755411a68ad4232d8b.scope: Deactivated successfully.
Dec 13 03:45:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2483: 321 pgs: 321 active+clean; 121 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 100 op/s
Dec 13 03:45:32 np0005558241 podman[348833]: 2025-12-13 08:45:32.366358923 +0000 UTC m=+0.110401711 container create 7d59711d4a25d70123221974cd8bbcc5d7f846422698b57ce7245b9105634da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:32 np0005558241 podman[348833]: 2025-12-13 08:45:32.276702923 +0000 UTC m=+0.020745732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:32 np0005558241 systemd[1]: Started libpod-conmon-7d59711d4a25d70123221974cd8bbcc5d7f846422698b57ce7245b9105634da2.scope.
Dec 13 03:45:32 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:45:32 np0005558241 podman[348833]: 2025-12-13 08:45:32.527873276 +0000 UTC m=+0.271916184 container init 7d59711d4a25d70123221974cd8bbcc5d7f846422698b57ce7245b9105634da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_golick, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:45:32 np0005558241 podman[348833]: 2025-12-13 08:45:32.537538359 +0000 UTC m=+0.281581147 container start 7d59711d4a25d70123221974cd8bbcc5d7f846422698b57ce7245b9105634da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:32 np0005558241 zealous_golick[348850]: 167 167
Dec 13 03:45:32 np0005558241 systemd[1]: libpod-7d59711d4a25d70123221974cd8bbcc5d7f846422698b57ce7245b9105634da2.scope: Deactivated successfully.
Dec 13 03:45:32 np0005558241 podman[348833]: 2025-12-13 08:45:32.588696453 +0000 UTC m=+0.332739271 container attach 7d59711d4a25d70123221974cd8bbcc5d7f846422698b57ce7245b9105634da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_golick, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:45:32 np0005558241 podman[348833]: 2025-12-13 08:45:32.589947714 +0000 UTC m=+0.333990562 container died 7d59711d4a25d70123221974cd8bbcc5d7f846422698b57ce7245b9105634da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_golick, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:45:32 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a81075742ae245bdba68f6c10336aa612ac21d930ce1a6291b67929bccf40e29-merged.mount: Deactivated successfully.
Dec 13 03:45:32 np0005558241 podman[348833]: 2025-12-13 08:45:32.861845657 +0000 UTC m=+0.605888445 container remove 7d59711d4a25d70123221974cd8bbcc5d7f846422698b57ce7245b9105634da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_golick, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 03:45:32 np0005558241 systemd[1]: libpod-conmon-7d59711d4a25d70123221974cd8bbcc5d7f846422698b57ce7245b9105634da2.scope: Deactivated successfully.
Dec 13 03:45:33 np0005558241 podman[348873]: 2025-12-13 08:45:33.034220203 +0000 UTC m=+0.046575920 container create 7c7510e22172fd8ede53e1b004dae325eb2ecce88ddfb19bc20d3f23dc1fb07f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:45:33 np0005558241 systemd[1]: Started libpod-conmon-7c7510e22172fd8ede53e1b004dae325eb2ecce88ddfb19bc20d3f23dc1fb07f.scope.
Dec 13 03:45:33 np0005558241 podman[348873]: 2025-12-13 08:45:33.014442967 +0000 UTC m=+0.026798704 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:33 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:45:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f1328e78004b1f87a759f4394ac0b14a0ab357067ebec3833ebf05319b6cf6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f1328e78004b1f87a759f4394ac0b14a0ab357067ebec3833ebf05319b6cf6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f1328e78004b1f87a759f4394ac0b14a0ab357067ebec3833ebf05319b6cf6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f1328e78004b1f87a759f4394ac0b14a0ab357067ebec3833ebf05319b6cf6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:33 np0005558241 podman[348873]: 2025-12-13 08:45:33.136216142 +0000 UTC m=+0.148571879 container init 7c7510e22172fd8ede53e1b004dae325eb2ecce88ddfb19bc20d3f23dc1fb07f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 03:45:33 np0005558241 podman[348873]: 2025-12-13 08:45:33.144745497 +0000 UTC m=+0.157101214 container start 7c7510e22172fd8ede53e1b004dae325eb2ecce88ddfb19bc20d3f23dc1fb07f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_merkle, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:33 np0005558241 podman[348873]: 2025-12-13 08:45:33.149415534 +0000 UTC m=+0.161771251 container attach 7c7510e22172fd8ede53e1b004dae325eb2ecce88ddfb19bc20d3f23dc1fb07f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_merkle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 03:45:33 np0005558241 kind_merkle[348889]: {
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:    "0": [
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:        {
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "devices": [
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "/dev/loop3"
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            ],
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "lv_name": "ceph_lv0",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "lv_size": "21470642176",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "name": "ceph_lv0",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "tags": {
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.cluster_name": "ceph",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.crush_device_class": "",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.encrypted": "0",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.objectstore": "bluestore",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.osd_id": "0",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.type": "block",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.vdo": "0",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.with_tpm": "0"
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            },
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "type": "block",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "vg_name": "ceph_vg0"
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:        }
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:    ],
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:    "1": [
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:        {
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "devices": [
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "/dev/loop4"
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            ],
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "lv_name": "ceph_lv1",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "lv_size": "21470642176",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "name": "ceph_lv1",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "tags": {
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.cluster_name": "ceph",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.crush_device_class": "",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.encrypted": "0",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.objectstore": "bluestore",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.osd_id": "1",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.type": "block",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.vdo": "0",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.with_tpm": "0"
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            },
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "type": "block",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "vg_name": "ceph_vg1"
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:        }
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:    ],
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:    "2": [
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:        {
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "devices": [
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "/dev/loop5"
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            ],
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "lv_name": "ceph_lv2",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "lv_size": "21470642176",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "name": "ceph_lv2",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "tags": {
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.cluster_name": "ceph",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.crush_device_class": "",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.encrypted": "0",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.objectstore": "bluestore",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.osd_id": "2",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.type": "block",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.vdo": "0",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:                "ceph.with_tpm": "0"
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            },
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "type": "block",
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:            "vg_name": "ceph_vg2"
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:        }
Dec 13 03:45:33 np0005558241 kind_merkle[348889]:    ]
Dec 13 03:45:33 np0005558241 kind_merkle[348889]: }
Dec 13 03:45:33 np0005558241 systemd[1]: libpod-7c7510e22172fd8ede53e1b004dae325eb2ecce88ddfb19bc20d3f23dc1fb07f.scope: Deactivated successfully.
Dec 13 03:45:33 np0005558241 podman[348873]: 2025-12-13 08:45:33.469385423 +0000 UTC m=+0.481741140 container died 7c7510e22172fd8ede53e1b004dae325eb2ecce88ddfb19bc20d3f23dc1fb07f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_merkle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:33 np0005558241 systemd[1]: var-lib-containers-storage-overlay-21f1328e78004b1f87a759f4394ac0b14a0ab357067ebec3833ebf05319b6cf6-merged.mount: Deactivated successfully.
Dec 13 03:45:33 np0005558241 podman[348873]: 2025-12-13 08:45:33.522918827 +0000 UTC m=+0.535274554 container remove 7c7510e22172fd8ede53e1b004dae325eb2ecce88ddfb19bc20d3f23dc1fb07f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:33 np0005558241 systemd[1]: libpod-conmon-7c7510e22172fd8ede53e1b004dae325eb2ecce88ddfb19bc20d3f23dc1fb07f.scope: Deactivated successfully.
Dec 13 03:45:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2484: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 100 op/s
Dec 13 03:45:33 np0005558241 podman[348974]: 2025-12-13 08:45:33.96376112 +0000 UTC m=+0.038007005 container create 4a63baa1136c6d62279edaa2b9927bb048f098560291ce53d1b348af907ecb36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 03:45:34 np0005558241 systemd[1]: Started libpod-conmon-4a63baa1136c6d62279edaa2b9927bb048f098560291ce53d1b348af907ecb36.scope.
Dec 13 03:45:34 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:45:34 np0005558241 podman[348974]: 2025-12-13 08:45:33.947980824 +0000 UTC m=+0.022226729 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:34 np0005558241 podman[348974]: 2025-12-13 08:45:34.051803829 +0000 UTC m=+0.126049744 container init 4a63baa1136c6d62279edaa2b9927bb048f098560291ce53d1b348af907ecb36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mcclintock, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:34 np0005558241 podman[348974]: 2025-12-13 08:45:34.060412855 +0000 UTC m=+0.134658740 container start 4a63baa1136c6d62279edaa2b9927bb048f098560291ce53d1b348af907ecb36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:45:34 np0005558241 podman[348974]: 2025-12-13 08:45:34.063538504 +0000 UTC m=+0.137784389 container attach 4a63baa1136c6d62279edaa2b9927bb048f098560291ce53d1b348af907ecb36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mcclintock, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:34 np0005558241 upbeat_mcclintock[348990]: 167 167
Dec 13 03:45:34 np0005558241 systemd[1]: libpod-4a63baa1136c6d62279edaa2b9927bb048f098560291ce53d1b348af907ecb36.scope: Deactivated successfully.
Dec 13 03:45:34 np0005558241 podman[348974]: 2025-12-13 08:45:34.067104503 +0000 UTC m=+0.141350388 container died 4a63baa1136c6d62279edaa2b9927bb048f098560291ce53d1b348af907ecb36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-805ca287a976607b427314ae0ae7845c6c34f0cf3d71a24b6be7a0f4e86f2926-merged.mount: Deactivated successfully.
Dec 13 03:45:34 np0005558241 podman[348974]: 2025-12-13 08:45:34.102062311 +0000 UTC m=+0.176308196 container remove 4a63baa1136c6d62279edaa2b9927bb048f098560291ce53d1b348af907ecb36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 03:45:34 np0005558241 systemd[1]: libpod-conmon-4a63baa1136c6d62279edaa2b9927bb048f098560291ce53d1b348af907ecb36.scope: Deactivated successfully.
Dec 13 03:45:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:45:34 np0005558241 podman[349014]: 2025-12-13 08:45:34.281429742 +0000 UTC m=+0.054491469 container create 30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:34 np0005558241 systemd[1]: Started libpod-conmon-30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94.scope.
Dec 13 03:45:34 np0005558241 podman[349014]: 2025-12-13 08:45:34.254557417 +0000 UTC m=+0.027619174 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:34 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:45:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce271b3aa9330609f83801f426734b040e1052659edd775332687fad7a0fd03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce271b3aa9330609f83801f426734b040e1052659edd775332687fad7a0fd03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce271b3aa9330609f83801f426734b040e1052659edd775332687fad7a0fd03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce271b3aa9330609f83801f426734b040e1052659edd775332687fad7a0fd03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:34 np0005558241 podman[349014]: 2025-12-13 08:45:34.368827005 +0000 UTC m=+0.141888732 container init 30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:34 np0005558241 podman[349014]: 2025-12-13 08:45:34.375677617 +0000 UTC m=+0.148739324 container start 30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 03:45:34 np0005558241 podman[349014]: 2025-12-13 08:45:34.378707813 +0000 UTC m=+0.151769530 container attach 30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:35 np0005558241 lvm[349110]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:45:35 np0005558241 lvm[349109]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:45:35 np0005558241 lvm[349110]: VG ceph_vg1 finished
Dec 13 03:45:35 np0005558241 lvm[349109]: VG ceph_vg0 finished
Dec 13 03:45:35 np0005558241 lvm[349112]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:45:35 np0005558241 lvm[349112]: VG ceph_vg2 finished
Dec 13 03:45:35 np0005558241 nova_compute[248510]: 2025-12-13 08:45:35.128 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:35 np0005558241 ecstatic_keldysh[349031]: {}
Dec 13 03:45:35 np0005558241 systemd[1]: libpod-30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94.scope: Deactivated successfully.
Dec 13 03:45:35 np0005558241 systemd[1]: libpod-30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94.scope: Consumed 1.360s CPU time.
Dec 13 03:45:35 np0005558241 podman[349014]: 2025-12-13 08:45:35.258014889 +0000 UTC m=+1.031076616 container died 30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 03:45:35 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9ce271b3aa9330609f83801f426734b040e1052659edd775332687fad7a0fd03-merged.mount: Deactivated successfully.
Dec 13 03:45:35 np0005558241 podman[349014]: 2025-12-13 08:45:35.560137302 +0000 UTC m=+1.333199009 container remove 30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 03:45:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:45:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:45:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:45:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:45:35 np0005558241 systemd[1]: libpod-conmon-30369c79db87fd21bc7c307a78dafa6266679c359b0c74bd6de338ba9e7e8d94.scope: Deactivated successfully.
Dec 13 03:45:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2485: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 14 KiB/s wr, 79 op/s
Dec 13 03:45:36 np0005558241 nova_compute[248510]: 2025-12-13 08:45:36.599 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:36 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:45:36 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:45:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2486: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 283 KiB/s rd, 2.2 KiB/s wr, 34 op/s
Dec 13 03:45:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:45:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2487: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 1023 B/s wr, 9 op/s
Dec 13 03:45:40 np0005558241 nova_compute[248510]: 2025-12-13 08:45:40.034 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615525.032338, 55a85a4e-6537-4498-8c7d-5c062cd421e2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:45:40 np0005558241 nova_compute[248510]: 2025-12-13 08:45:40.035 248514 INFO nova.compute.manager [-] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:45:40 np0005558241 nova_compute[248510]: 2025-12-13 08:45:40.077 248514 DEBUG nova.compute.manager [None req-3289fc79-ecc5-497b-86d7-33e13ff6b862 - - - - - -] [instance: 55a85a4e-6537-4498-8c7d-5c062cd421e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:45:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:45:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:45:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:45:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:45:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:45:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:45:40 np0005558241 nova_compute[248510]: 2025-12-13 08:45:40.188 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Dec 13 03:45:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Dec 13 03:45:41 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Dec 13 03:45:41 np0005558241 nova_compute[248510]: 2025-12-13 08:45:41.601 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2489: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s wr, 0 op/s
Dec 13 03:45:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2490: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 614 B/s rd, 102 B/s wr, 0 op/s
Dec 13 03:45:43 np0005558241 podman[349152]: 2025-12-13 08:45:43.994478652 +0000 UTC m=+0.072033279 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:45:43 np0005558241 podman[349153]: 2025-12-13 08:45:43.995672732 +0000 UTC m=+0.073269380 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:45:44 np0005558241 podman[349151]: 2025-12-13 08:45:44.075138476 +0000 UTC m=+0.151817611 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 13 03:45:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:45:45 np0005558241 nova_compute[248510]: 2025-12-13 08:45:45.192 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2491: 321 pgs: 321 active+clean; 137 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.6 MiB/s wr, 23 op/s
Dec 13 03:45:46 np0005558241 nova_compute[248510]: 2025-12-13 08:45:46.604 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Dec 13 03:45:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Dec 13 03:45:47 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Dec 13 03:45:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2493: 321 pgs: 321 active+clean; 193 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 9.0 MiB/s wr, 51 op/s
Dec 13 03:45:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:45:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Dec 13 03:45:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Dec 13 03:45:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Dec 13 03:45:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2495: 321 pgs: 321 active+clean; 233 MiB data, 852 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 14 MiB/s wr, 51 op/s
Dec 13 03:45:50 np0005558241 nova_compute[248510]: 2025-12-13 08:45:50.195 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Dec 13 03:45:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Dec 13 03:45:50 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Dec 13 03:45:51 np0005558241 nova_compute[248510]: 2025-12-13 08:45:51.605 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:51 np0005558241 nova_compute[248510]: 2025-12-13 08:45:51.851 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:45:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2497: 321 pgs: 321 active+clean; 217 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 16 MiB/s wr, 85 op/s
Dec 13 03:45:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2498: 321 pgs: 321 active+clean; 161 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 14 MiB/s wr, 86 op/s
Dec 13 03:45:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:45:55 np0005558241 nova_compute[248510]: 2025-12-13 08:45:55.197 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:55.424 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:45:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:55.424 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:45:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:45:55.425 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:45:55 np0005558241 nova_compute[248510]: 2025-12-13 08:45:55.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:45:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2499: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 5.0 MiB/s wr, 74 op/s
Dec 13 03:45:56 np0005558241 nova_compute[248510]: 2025-12-13 08:45:56.608 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:45:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2500: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.9 KiB/s wr, 67 op/s
Dec 13 03:45:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:45:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Dec 13 03:45:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Dec 13 03:45:59 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Dec 13 03:45:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2502: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 03:46:00 np0005558241 nova_compute[248510]: 2025-12-13 08:46:00.248 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:00 np0005558241 nova_compute[248510]: 2025-12-13 08:46:00.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:46:00 np0005558241 nova_compute[248510]: 2025-12-13 08:46:00.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:46:00 np0005558241 nova_compute[248510]: 2025-12-13 08:46:00.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:46:01 np0005558241 nova_compute[248510]: 2025-12-13 08:46:01.228 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:46:01 np0005558241 nova_compute[248510]: 2025-12-13 08:46:01.229 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:46:01 np0005558241 nova_compute[248510]: 2025-12-13 08:46:01.229 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:46:01 np0005558241 nova_compute[248510]: 2025-12-13 08:46:01.229 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:46:01 np0005558241 nova_compute[248510]: 2025-12-13 08:46:01.610 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:01.763 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:46:01 np0005558241 nova_compute[248510]: 2025-12-13 08:46:01.763 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:01.764 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:46:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2503: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 03:46:03 np0005558241 nova_compute[248510]: 2025-12-13 08:46:03.756 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating instance_info_cache with network_info: [{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:46:03 np0005558241 nova_compute[248510]: 2025-12-13 08:46:03.787 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:46:03 np0005558241 nova_compute[248510]: 2025-12-13 08:46:03.788 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:46:03 np0005558241 nova_compute[248510]: 2025-12-13 08:46:03.788 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:46:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2504: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 921 B/s wr, 18 op/s
Dec 13 03:46:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:46:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:04.766 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:46:05 np0005558241 nova_compute[248510]: 2025-12-13 08:46:05.250 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:05 np0005558241 nova_compute[248510]: 2025-12-13 08:46:05.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:46:05 np0005558241 nova_compute[248510]: 2025-12-13 08:46:05.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:46:05 np0005558241 nova_compute[248510]: 2025-12-13 08:46:05.832 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:46:05 np0005558241 nova_compute[248510]: 2025-12-13 08:46:05.834 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:46:05 np0005558241 nova_compute[248510]: 2025-12-13 08:46:05.835 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:46:05 np0005558241 nova_compute[248510]: 2025-12-13 08:46:05.835 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:46:05 np0005558241 nova_compute[248510]: 2025-12-13 08:46:05.835 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:46:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2505: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:46:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:46:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3169512374' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:46:06 np0005558241 nova_compute[248510]: 2025-12-13 08:46:06.409 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:46:06 np0005558241 nova_compute[248510]: 2025-12-13 08:46:06.508 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:46:06 np0005558241 nova_compute[248510]: 2025-12-13 08:46:06.508 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:46:06 np0005558241 nova_compute[248510]: 2025-12-13 08:46:06.612 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:06 np0005558241 nova_compute[248510]: 2025-12-13 08:46:06.690 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:46:06 np0005558241 nova_compute[248510]: 2025-12-13 08:46:06.692 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3669MB free_disk=59.942045137286186GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:46:06 np0005558241 nova_compute[248510]: 2025-12-13 08:46:06.692 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:46:06 np0005558241 nova_compute[248510]: 2025-12-13 08:46:06.692 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:46:06 np0005558241 nova_compute[248510]: 2025-12-13 08:46:06.989 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance c3fb322f-a9db-4396-b659-2307698e5524 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:46:06 np0005558241 nova_compute[248510]: 2025-12-13 08:46:06.989 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:46:06 np0005558241 nova_compute[248510]: 2025-12-13 08:46:06.989 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:46:07 np0005558241 nova_compute[248510]: 2025-12-13 08:46:07.179 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:46:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:46:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1125902812' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:46:07 np0005558241 nova_compute[248510]: 2025-12-13 08:46:07.722 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:46:07 np0005558241 nova_compute[248510]: 2025-12-13 08:46:07.728 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:46:07 np0005558241 nova_compute[248510]: 2025-12-13 08:46:07.762 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:46:07 np0005558241 nova_compute[248510]: 2025-12-13 08:46:07.882 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:46:07 np0005558241 nova_compute[248510]: 2025-12-13 08:46:07.883 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:46:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2506: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:46:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:46:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:46:09
Dec 13 03:46:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:46:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:46:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'images', 'default.rgw.meta', 'backups']
Dec 13 03:46:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:46:09 np0005558241 nova_compute[248510]: 2025-12-13 08:46:09.884 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:46:09 np0005558241 nova_compute[248510]: 2025-12-13 08:46:09.884 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:46:09 np0005558241 nova_compute[248510]: 2025-12-13 08:46:09.884 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:46:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2507: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:46:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:46:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:46:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:46:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:46:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:46:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:46:10 np0005558241 nova_compute[248510]: 2025-12-13 08:46:10.253 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:46:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.104446) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615571104493, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1420, "num_deletes": 252, "total_data_size": 2239492, "memory_usage": 2284832, "flush_reason": "Manual Compaction"}
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615571147494, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 2176533, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48372, "largest_seqno": 49791, "table_properties": {"data_size": 2169885, "index_size": 3783, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14152, "raw_average_key_size": 20, "raw_value_size": 2156446, "raw_average_value_size": 3058, "num_data_blocks": 169, "num_entries": 705, "num_filter_entries": 705, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615439, "oldest_key_time": 1765615439, "file_creation_time": 1765615571, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 43110 microseconds, and 6176 cpu microseconds.
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.147550) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 2176533 bytes OK
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.147575) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.150007) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.150030) EVENT_LOG_v1 {"time_micros": 1765615571150025, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.150067) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 2233200, prev total WAL file size 2233200, number of live WAL files 2.
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.150851) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(2125KB)], [113(8691KB)]
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615571150906, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 11076710, "oldest_snapshot_seqno": -1}
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 7019 keys, 9307756 bytes, temperature: kUnknown
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615571221027, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 9307756, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9262092, "index_size": 26992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17605, "raw_key_size": 183516, "raw_average_key_size": 26, "raw_value_size": 9137620, "raw_average_value_size": 1301, "num_data_blocks": 1052, "num_entries": 7019, "num_filter_entries": 7019, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765615571, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.221309) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 9307756 bytes
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.233730) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.7 rd, 132.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 8.5 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(9.4) write-amplify(4.3) OK, records in: 7539, records dropped: 520 output_compression: NoCompression
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.233768) EVENT_LOG_v1 {"time_micros": 1765615571233754, "job": 68, "event": "compaction_finished", "compaction_time_micros": 70242, "compaction_time_cpu_micros": 24518, "output_level": 6, "num_output_files": 1, "total_output_size": 9307756, "num_input_records": 7539, "num_output_records": 7019, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615571234361, "job": 68, "event": "table_file_deletion", "file_number": 115}
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615571235724, "job": 68, "event": "table_file_deletion", "file_number": 113}
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.150802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.235799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.235805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.235807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.235808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:46:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:46:11.235809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:46:11 np0005558241 nova_compute[248510]: 2025-12-13 08:46:11.614 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2508: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:46:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2509: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:46:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:46:14 np0005558241 podman[349259]: 2025-12-13 08:46:14.973276455 +0000 UTC m=+0.054744685 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 13 03:46:14 np0005558241 podman[349258]: 2025-12-13 08:46:14.975341006 +0000 UTC m=+0.056301113 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:46:14 np0005558241 podman[349257]: 2025-12-13 08:46:14.993387889 +0000 UTC m=+0.085843515 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Dec 13 03:46:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:46:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1959195663' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:46:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:46:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1959195663' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:46:15 np0005558241 nova_compute[248510]: 2025-12-13 08:46:15.255 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:15 np0005558241 nova_compute[248510]: 2025-12-13 08:46:15.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:46:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2510: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:46:16 np0005558241 nova_compute[248510]: 2025-12-13 08:46:16.616 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2511: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:46:18 np0005558241 nova_compute[248510]: 2025-12-13 08:46:18.202 248514 INFO nova.compute.manager [None req-528e3b7d-601c-43ab-a174-be31e484ad0c 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Pausing#033[00m
Dec 13 03:46:18 np0005558241 nova_compute[248510]: 2025-12-13 08:46:18.204 248514 DEBUG nova.objects.instance [None req-528e3b7d-601c-43ab-a174-be31e484ad0c 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'flavor' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:46:18 np0005558241 nova_compute[248510]: 2025-12-13 08:46:18.307 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615578.3069959, c3fb322f-a9db-4396-b659-2307698e5524 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:46:18 np0005558241 nova_compute[248510]: 2025-12-13 08:46:18.307 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:46:18 np0005558241 nova_compute[248510]: 2025-12-13 08:46:18.309 248514 DEBUG nova.compute.manager [None req-528e3b7d-601c-43ab-a174-be31e484ad0c 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:46:18 np0005558241 nova_compute[248510]: 2025-12-13 08:46:18.444 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:46:18 np0005558241 nova_compute[248510]: 2025-12-13 08:46:18.448 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:46:18 np0005558241 nova_compute[248510]: 2025-12-13 08:46:18.506 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Dec 13 03:46:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:46:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2512: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s wr, 1 op/s
Dec 13 03:46:20 np0005558241 nova_compute[248510]: 2025-12-13 08:46:20.308 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000770751562265708 of space, bias 1.0, pg target 0.23122546867971241 quantized to 32 (current 32)
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006681201032124484 of space, bias 1.0, pg target 0.20043603096373452 quantized to 32 (current 32)
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.839601767484647e-07 of space, bias 4.0, pg target 0.0007007522120981576 quantized to 16 (current 32)
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:46:21 np0005558241 nova_compute[248510]: 2025-12-13 08:46:21.566 248514 INFO nova.compute.manager [None req-01197d19-f7a0-48d8-b0e9-2e06b7add471 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Unpausing#033[00m
Dec 13 03:46:21 np0005558241 nova_compute[248510]: 2025-12-13 08:46:21.567 248514 DEBUG nova.objects.instance [None req-01197d19-f7a0-48d8-b0e9-2e06b7add471 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'flavor' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:46:21 np0005558241 nova_compute[248510]: 2025-12-13 08:46:21.599 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615581.599589, c3fb322f-a9db-4396-b659-2307698e5524 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:46:21 np0005558241 nova_compute[248510]: 2025-12-13 08:46:21.600 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:46:21 np0005558241 virtqemud[248808]: argument unsupported: QEMU guest agent is not configured
Dec 13 03:46:21 np0005558241 nova_compute[248510]: 2025-12-13 08:46:21.604 248514 DEBUG nova.virt.libvirt.guest [None req-01197d19-f7a0-48d8-b0e9-2e06b7add471 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Dec 13 03:46:21 np0005558241 nova_compute[248510]: 2025-12-13 08:46:21.605 248514 DEBUG nova.compute.manager [None req-01197d19-f7a0-48d8-b0e9-2e06b7add471 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:46:21 np0005558241 nova_compute[248510]: 2025-12-13 08:46:21.618 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:21 np0005558241 nova_compute[248510]: 2025-12-13 08:46:21.654 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:46:21 np0005558241 nova_compute[248510]: 2025-12-13 08:46:21.657 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:46:21 np0005558241 nova_compute[248510]: 2025-12-13 08:46:21.698 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Dec 13 03:46:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2513: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Dec 13 03:46:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2514: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Dec 13 03:46:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:46:24 np0005558241 nova_compute[248510]: 2025-12-13 08:46:24.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:46:25 np0005558241 nova_compute[248510]: 2025-12-13 08:46:25.311 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2515: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Dec 13 03:46:26 np0005558241 nova_compute[248510]: 2025-12-13 08:46:26.620 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2516: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Dec 13 03:46:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:46:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2517: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Dec 13 03:46:30 np0005558241 nova_compute[248510]: 2025-12-13 08:46:30.314 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:31 np0005558241 nova_compute[248510]: 2025-12-13 08:46:31.623 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2518: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Dec 13 03:46:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2519: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:46:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:46:35 np0005558241 nova_compute[248510]: 2025-12-13 08:46:35.316 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2520: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:46:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 03:46:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 03:46:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:46:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:46:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:46:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:46:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:46:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:46:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:46:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:46:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:46:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:46:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:46:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:46:36 np0005558241 nova_compute[248510]: 2025-12-13 08:46:36.624 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:36 np0005558241 podman[349466]: 2025-12-13 08:46:36.820355184 +0000 UTC m=+0.049212536 container create 1fa2c8d554aa31cda354447f48a444aac817162a3fd9a9f887627e233507a2f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 03:46:36 np0005558241 systemd[1]: Started libpod-conmon-1fa2c8d554aa31cda354447f48a444aac817162a3fd9a9f887627e233507a2f3.scope.
Dec 13 03:46:36 np0005558241 podman[349466]: 2025-12-13 08:46:36.796224379 +0000 UTC m=+0.025081721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:46:36 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:46:36 np0005558241 podman[349466]: 2025-12-13 08:46:36.916875306 +0000 UTC m=+0.145732668 container init 1fa2c8d554aa31cda354447f48a444aac817162a3fd9a9f887627e233507a2f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_elbakyan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:46:36 np0005558241 podman[349466]: 2025-12-13 08:46:36.925473142 +0000 UTC m=+0.154330454 container start 1fa2c8d554aa31cda354447f48a444aac817162a3fd9a9f887627e233507a2f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Dec 13 03:46:36 np0005558241 peaceful_elbakyan[349483]: 167 167
Dec 13 03:46:36 np0005558241 systemd[1]: libpod-1fa2c8d554aa31cda354447f48a444aac817162a3fd9a9f887627e233507a2f3.scope: Deactivated successfully.
Dec 13 03:46:36 np0005558241 podman[349466]: 2025-12-13 08:46:36.933216897 +0000 UTC m=+0.162074219 container attach 1fa2c8d554aa31cda354447f48a444aac817162a3fd9a9f887627e233507a2f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 03:46:36 np0005558241 podman[349466]: 2025-12-13 08:46:36.934464268 +0000 UTC m=+0.163321590 container died 1fa2c8d554aa31cda354447f48a444aac817162a3fd9a9f887627e233507a2f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Dec 13 03:46:36 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b75d4a3f798b042ae552d92066605b9f818fd7c601d570181bdfc85730251278-merged.mount: Deactivated successfully.
Dec 13 03:46:36 np0005558241 podman[349466]: 2025-12-13 08:46:36.977259252 +0000 UTC m=+0.206116564 container remove 1fa2c8d554aa31cda354447f48a444aac817162a3fd9a9f887627e233507a2f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:46:36 np0005558241 systemd[1]: libpod-conmon-1fa2c8d554aa31cda354447f48a444aac817162a3fd9a9f887627e233507a2f3.scope: Deactivated successfully.
Dec 13 03:46:37 np0005558241 podman[349507]: 2025-12-13 08:46:37.155843673 +0000 UTC m=+0.041368939 container create 8b2e03aef9e096a291702cd6e445076dcea555e0fc0c25b9b88ae060d313ea47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 03:46:37 np0005558241 systemd[1]: Started libpod-conmon-8b2e03aef9e096a291702cd6e445076dcea555e0fc0c25b9b88ae060d313ea47.scope.
Dec 13 03:46:37 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 03:46:37 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:46:37 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:46:37 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:46:37 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:46:37 np0005558241 podman[349507]: 2025-12-13 08:46:37.135904433 +0000 UTC m=+0.021429699 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:46:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd97c874009ea6ce0e691bc157e9ee27f1a85ced685a2783ff2d8fc1f96747a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd97c874009ea6ce0e691bc157e9ee27f1a85ced685a2783ff2d8fc1f96747a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd97c874009ea6ce0e691bc157e9ee27f1a85ced685a2783ff2d8fc1f96747a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd97c874009ea6ce0e691bc157e9ee27f1a85ced685a2783ff2d8fc1f96747a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd97c874009ea6ce0e691bc157e9ee27f1a85ced685a2783ff2d8fc1f96747a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:37 np0005558241 podman[349507]: 2025-12-13 08:46:37.247588226 +0000 UTC m=+0.133113492 container init 8b2e03aef9e096a291702cd6e445076dcea555e0fc0c25b9b88ae060d313ea47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:46:37 np0005558241 podman[349507]: 2025-12-13 08:46:37.259455414 +0000 UTC m=+0.144980660 container start 8b2e03aef9e096a291702cd6e445076dcea555e0fc0c25b9b88ae060d313ea47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 03:46:37 np0005558241 podman[349507]: 2025-12-13 08:46:37.263335491 +0000 UTC m=+0.148860757 container attach 8b2e03aef9e096a291702cd6e445076dcea555e0fc0c25b9b88ae060d313ea47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:46:37 np0005558241 wonderful_payne[349523]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:46:37 np0005558241 wonderful_payne[349523]: --> All data devices are unavailable
Dec 13 03:46:37 np0005558241 systemd[1]: libpod-8b2e03aef9e096a291702cd6e445076dcea555e0fc0c25b9b88ae060d313ea47.scope: Deactivated successfully.
Dec 13 03:46:37 np0005558241 podman[349507]: 2025-12-13 08:46:37.792537531 +0000 UTC m=+0.678062797 container died 8b2e03aef9e096a291702cd6e445076dcea555e0fc0c25b9b88ae060d313ea47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:46:37 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8bd97c874009ea6ce0e691bc157e9ee27f1a85ced685a2783ff2d8fc1f96747a-merged.mount: Deactivated successfully.
Dec 13 03:46:37 np0005558241 podman[349507]: 2025-12-13 08:46:37.839825978 +0000 UTC m=+0.725351224 container remove 8b2e03aef9e096a291702cd6e445076dcea555e0fc0c25b9b88ae060d313ea47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_payne, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:46:37 np0005558241 systemd[1]: libpod-conmon-8b2e03aef9e096a291702cd6e445076dcea555e0fc0c25b9b88ae060d313ea47.scope: Deactivated successfully.
Dec 13 03:46:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2521: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:46:38 np0005558241 podman[349616]: 2025-12-13 08:46:38.306973161 +0000 UTC m=+0.038467616 container create 18abbf525c9f002cb1222409a8b09084c825ad632b2c8d14d82948dc0b8161a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:46:38 np0005558241 systemd[1]: Started libpod-conmon-18abbf525c9f002cb1222409a8b09084c825ad632b2c8d14d82948dc0b8161a4.scope.
Dec 13 03:46:38 np0005558241 podman[349616]: 2025-12-13 08:46:38.290881858 +0000 UTC m=+0.022376323 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:46:38 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:46:38 np0005558241 podman[349616]: 2025-12-13 08:46:38.403193576 +0000 UTC m=+0.134688031 container init 18abbf525c9f002cb1222409a8b09084c825ad632b2c8d14d82948dc0b8161a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 03:46:38 np0005558241 podman[349616]: 2025-12-13 08:46:38.412413567 +0000 UTC m=+0.143908002 container start 18abbf525c9f002cb1222409a8b09084c825ad632b2c8d14d82948dc0b8161a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:46:38 np0005558241 podman[349616]: 2025-12-13 08:46:38.415812543 +0000 UTC m=+0.147307008 container attach 18abbf525c9f002cb1222409a8b09084c825ad632b2c8d14d82948dc0b8161a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_raman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 03:46:38 np0005558241 great_raman[349632]: 167 167
Dec 13 03:46:38 np0005558241 systemd[1]: libpod-18abbf525c9f002cb1222409a8b09084c825ad632b2c8d14d82948dc0b8161a4.scope: Deactivated successfully.
Dec 13 03:46:38 np0005558241 podman[349616]: 2025-12-13 08:46:38.418768207 +0000 UTC m=+0.150262642 container died 18abbf525c9f002cb1222409a8b09084c825ad632b2c8d14d82948dc0b8161a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_raman, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 03:46:38 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d42d3b9980440864b7c9f9ec2e5444a633c4fbd4ec5678596c45541cd821da42-merged.mount: Deactivated successfully.
Dec 13 03:46:38 np0005558241 podman[349616]: 2025-12-13 08:46:38.459196001 +0000 UTC m=+0.190690436 container remove 18abbf525c9f002cb1222409a8b09084c825ad632b2c8d14d82948dc0b8161a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 03:46:38 np0005558241 systemd[1]: libpod-conmon-18abbf525c9f002cb1222409a8b09084c825ad632b2c8d14d82948dc0b8161a4.scope: Deactivated successfully.
Dec 13 03:46:38 np0005558241 podman[349655]: 2025-12-13 08:46:38.623696889 +0000 UTC m=+0.039508512 container create 49636d7739e45f3bc27a2146e57f79a9022d71f58dc2fd550f3d9a0136e49e55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:46:38 np0005558241 systemd[1]: Started libpod-conmon-49636d7739e45f3bc27a2146e57f79a9022d71f58dc2fd550f3d9a0136e49e55.scope.
Dec 13 03:46:38 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:46:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eac8b695bb486b6e2989e448b40bd438332c60f91929ba36e5c4167e76f09a31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eac8b695bb486b6e2989e448b40bd438332c60f91929ba36e5c4167e76f09a31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eac8b695bb486b6e2989e448b40bd438332c60f91929ba36e5c4167e76f09a31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eac8b695bb486b6e2989e448b40bd438332c60f91929ba36e5c4167e76f09a31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:38 np0005558241 podman[349655]: 2025-12-13 08:46:38.69264173 +0000 UTC m=+0.108453383 container init 49636d7739e45f3bc27a2146e57f79a9022d71f58dc2fd550f3d9a0136e49e55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:46:38 np0005558241 podman[349655]: 2025-12-13 08:46:38.698450315 +0000 UTC m=+0.114261928 container start 49636d7739e45f3bc27a2146e57f79a9022d71f58dc2fd550f3d9a0136e49e55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:46:38 np0005558241 podman[349655]: 2025-12-13 08:46:38.701335598 +0000 UTC m=+0.117147251 container attach 49636d7739e45f3bc27a2146e57f79a9022d71f58dc2fd550f3d9a0136e49e55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:46:38 np0005558241 podman[349655]: 2025-12-13 08:46:38.606707903 +0000 UTC m=+0.022519566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]: {
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:    "0": [
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:        {
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "devices": [
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "/dev/loop3"
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            ],
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "lv_name": "ceph_lv0",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "lv_size": "21470642176",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "name": "ceph_lv0",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "tags": {
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.cluster_name": "ceph",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.crush_device_class": "",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.encrypted": "0",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.objectstore": "bluestore",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.osd_id": "0",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.type": "block",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.vdo": "0",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.with_tpm": "0"
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            },
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "type": "block",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "vg_name": "ceph_vg0"
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:        }
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:    ],
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:    "1": [
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:        {
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "devices": [
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "/dev/loop4"
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            ],
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "lv_name": "ceph_lv1",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "lv_size": "21470642176",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "name": "ceph_lv1",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "tags": {
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.cluster_name": "ceph",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.crush_device_class": "",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.encrypted": "0",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.objectstore": "bluestore",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.osd_id": "1",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.type": "block",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.vdo": "0",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.with_tpm": "0"
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            },
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "type": "block",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "vg_name": "ceph_vg1"
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:        }
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:    ],
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:    "2": [
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:        {
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "devices": [
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "/dev/loop5"
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            ],
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "lv_name": "ceph_lv2",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "lv_size": "21470642176",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "name": "ceph_lv2",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "tags": {
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.cluster_name": "ceph",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.crush_device_class": "",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.encrypted": "0",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.objectstore": "bluestore",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.osd_id": "2",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.type": "block",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.vdo": "0",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:                "ceph.with_tpm": "0"
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            },
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "type": "block",
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:            "vg_name": "ceph_vg2"
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:        }
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]:    ]
Dec 13 03:46:38 np0005558241 kind_lovelace[349671]: }
Dec 13 03:46:39 np0005558241 systemd[1]: libpod-49636d7739e45f3bc27a2146e57f79a9022d71f58dc2fd550f3d9a0136e49e55.scope: Deactivated successfully.
Dec 13 03:46:39 np0005558241 podman[349655]: 2025-12-13 08:46:39.019291037 +0000 UTC m=+0.435102690 container died 49636d7739e45f3bc27a2146e57f79a9022d71f58dc2fd550f3d9a0136e49e55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 03:46:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:46:39 np0005558241 systemd[1]: var-lib-containers-storage-overlay-eac8b695bb486b6e2989e448b40bd438332c60f91929ba36e5c4167e76f09a31-merged.mount: Deactivated successfully.
Dec 13 03:46:39 np0005558241 podman[349655]: 2025-12-13 08:46:39.399763496 +0000 UTC m=+0.815575119 container remove 49636d7739e45f3bc27a2146e57f79a9022d71f58dc2fd550f3d9a0136e49e55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 03:46:39 np0005558241 systemd[1]: libpod-conmon-49636d7739e45f3bc27a2146e57f79a9022d71f58dc2fd550f3d9a0136e49e55.scope: Deactivated successfully.
Dec 13 03:46:39 np0005558241 podman[349753]: 2025-12-13 08:46:39.84503478 +0000 UTC m=+0.027691756 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:46:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2522: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:46:40 np0005558241 podman[349753]: 2025-12-13 08:46:40.027198841 +0000 UTC m=+0.209855797 container create 8812ac7bfdc343bfb620ab244e130dda73764f53be6a2323aa6e9337a56a0234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_euler, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:46:40 np0005558241 systemd[1]: Started libpod-conmon-8812ac7bfdc343bfb620ab244e130dda73764f53be6a2323aa6e9337a56a0234.scope.
Dec 13 03:46:40 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:46:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:46:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:46:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:46:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:46:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:46:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:46:40 np0005558241 podman[349753]: 2025-12-13 08:46:40.245507199 +0000 UTC m=+0.428164175 container init 8812ac7bfdc343bfb620ab244e130dda73764f53be6a2323aa6e9337a56a0234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:46:40 np0005558241 podman[349753]: 2025-12-13 08:46:40.259368546 +0000 UTC m=+0.442025502 container start 8812ac7bfdc343bfb620ab244e130dda73764f53be6a2323aa6e9337a56a0234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:46:40 np0005558241 eager_euler[349769]: 167 167
Dec 13 03:46:40 np0005558241 systemd[1]: libpod-8812ac7bfdc343bfb620ab244e130dda73764f53be6a2323aa6e9337a56a0234.scope: Deactivated successfully.
Dec 13 03:46:40 np0005558241 podman[349753]: 2025-12-13 08:46:40.270275989 +0000 UTC m=+0.452932975 container attach 8812ac7bfdc343bfb620ab244e130dda73764f53be6a2323aa6e9337a56a0234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_euler, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 03:46:40 np0005558241 podman[349753]: 2025-12-13 08:46:40.27190659 +0000 UTC m=+0.454563586 container died 8812ac7bfdc343bfb620ab244e130dda73764f53be6a2323aa6e9337a56a0234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_euler, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Dec 13 03:46:40 np0005558241 nova_compute[248510]: 2025-12-13 08:46:40.318 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:40 np0005558241 systemd[1]: var-lib-containers-storage-overlay-432fad8ed84535cd4f63ae4841d3e4f2c0c9f27b1eb59a8ee048412a03bd482d-merged.mount: Deactivated successfully.
Dec 13 03:46:40 np0005558241 podman[349753]: 2025-12-13 08:46:40.44880005 +0000 UTC m=+0.631456996 container remove 8812ac7bfdc343bfb620ab244e130dda73764f53be6a2323aa6e9337a56a0234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_euler, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:46:40 np0005558241 systemd[1]: libpod-conmon-8812ac7bfdc343bfb620ab244e130dda73764f53be6a2323aa6e9337a56a0234.scope: Deactivated successfully.
Dec 13 03:46:40 np0005558241 podman[349792]: 2025-12-13 08:46:40.613901293 +0000 UTC m=+0.024730402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:46:40 np0005558241 podman[349792]: 2025-12-13 08:46:40.740418558 +0000 UTC m=+0.151247637 container create f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_ellis, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 03:46:40 np0005558241 systemd[1]: Started libpod-conmon-f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1.scope.
Dec 13 03:46:40 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:46:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a7aaa2af434992e1418d1490a72497e11fc345ea0519bcd9c1bbb9574e3a883/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a7aaa2af434992e1418d1490a72497e11fc345ea0519bcd9c1bbb9574e3a883/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a7aaa2af434992e1418d1490a72497e11fc345ea0519bcd9c1bbb9574e3a883/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a7aaa2af434992e1418d1490a72497e11fc345ea0519bcd9c1bbb9574e3a883/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:40 np0005558241 podman[349792]: 2025-12-13 08:46:40.982983315 +0000 UTC m=+0.393812474 container init f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:46:40 np0005558241 podman[349792]: 2025-12-13 08:46:40.991870168 +0000 UTC m=+0.402699247 container start f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_ellis, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:46:41 np0005558241 podman[349792]: 2025-12-13 08:46:41.00030325 +0000 UTC m=+0.411132349 container attach f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:46:41 np0005558241 nova_compute[248510]: 2025-12-13 08:46:41.626 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:41 np0005558241 lvm[349886]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:46:41 np0005558241 lvm[349887]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:46:41 np0005558241 lvm[349887]: VG ceph_vg1 finished
Dec 13 03:46:41 np0005558241 lvm[349886]: VG ceph_vg0 finished
Dec 13 03:46:41 np0005558241 lvm[349889]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:46:41 np0005558241 lvm[349889]: VG ceph_vg2 finished
Dec 13 03:46:41 np0005558241 hardcore_ellis[349808]: {}
Dec 13 03:46:41 np0005558241 systemd[1]: libpod-f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1.scope: Deactivated successfully.
Dec 13 03:46:41 np0005558241 systemd[1]: libpod-f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1.scope: Consumed 1.355s CPU time.
Dec 13 03:46:41 np0005558241 podman[349792]: 2025-12-13 08:46:41.803737002 +0000 UTC m=+1.214566101 container died f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_ellis, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:46:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8a7aaa2af434992e1418d1490a72497e11fc345ea0519bcd9c1bbb9574e3a883-merged.mount: Deactivated successfully.
Dec 13 03:46:41 np0005558241 podman[349792]: 2025-12-13 08:46:41.858669311 +0000 UTC m=+1.269498390 container remove f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 03:46:41 np0005558241 systemd[1]: libpod-conmon-f5180f42bf0678cceed6c3d9f6754d42f045051800d9665c99a7f6040268bea1.scope: Deactivated successfully.
Dec 13 03:46:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:46:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:46:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:46:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:46:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2523: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:46:42 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:46:42 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:46:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2524: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:46:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:46:44 np0005558241 nova_compute[248510]: 2025-12-13 08:46:44.264 248514 DEBUG oslo_concurrency.lockutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:46:44 np0005558241 nova_compute[248510]: 2025-12-13 08:46:44.264 248514 DEBUG oslo_concurrency.lockutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:46:44 np0005558241 nova_compute[248510]: 2025-12-13 08:46:44.264 248514 INFO nova.compute.manager [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Shelving#033[00m
Dec 13 03:46:44 np0005558241 nova_compute[248510]: 2025-12-13 08:46:44.293 248514 DEBUG nova.virt.libvirt.driver [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec 13 03:46:45 np0005558241 nova_compute[248510]: 2025-12-13 08:46:45.322 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2525: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Dec 13 03:46:45 np0005558241 podman[349932]: 2025-12-13 08:46:45.981063471 +0000 UTC m=+0.066243914 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec 13 03:46:45 np0005558241 podman[349933]: 2025-12-13 08:46:45.994399236 +0000 UTC m=+0.077216019 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 03:46:46 np0005558241 podman[349931]: 2025-12-13 08:46:46.009986647 +0000 UTC m=+0.098059732 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 03:46:46 np0005558241 kernel: tap2d164f50-a5 (unregistering): left promiscuous mode
Dec 13 03:46:46 np0005558241 NetworkManager[50376]: <info>  [1765615606.5601] device (tap2d164f50-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:46:46 np0005558241 ovn_controller[148476]: 2025-12-13T08:46:46Z|01006|binding|INFO|Releasing lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 from this chassis (sb_readonly=0)
Dec 13 03:46:46 np0005558241 ovn_controller[148476]: 2025-12-13T08:46:46Z|01007|binding|INFO|Setting lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 down in Southbound
Dec 13 03:46:46 np0005558241 ovn_controller[148476]: 2025-12-13T08:46:46Z|01008|binding|INFO|Removing iface tap2d164f50-a5 ovn-installed in OVS
Dec 13 03:46:46 np0005558241 nova_compute[248510]: 2025-12-13 08:46:46.569 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:46 np0005558241 nova_compute[248510]: 2025-12-13 08:46:46.572 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.577 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:39:62 10.100.0.6'], port_security=['fa:16:3e:e2:39:62 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c3fb322f-a9db-4396-b659-2307698e5524', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2d164f50-a56a-4eaf-ad60-84274a0eb413) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:46:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.579 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2d164f50-a56a-4eaf-ad60-84274a0eb413 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b unbound from our chassis#033[00m
Dec 13 03:46:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.580 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:46:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.583 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6c965579-235e-46d1-a366-bb066f00776e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:46:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.584 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b namespace which is not needed anymore#033[00m
Dec 13 03:46:46 np0005558241 nova_compute[248510]: 2025-12-13 08:46:46.593 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:46 np0005558241 nova_compute[248510]: 2025-12-13 08:46:46.628 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:46 np0005558241 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000067.scope: Deactivated successfully.
Dec 13 03:46:46 np0005558241 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000067.scope: Consumed 16.572s CPU time.
Dec 13 03:46:46 np0005558241 systemd-machined[210538]: Machine qemu-127-instance-00000067 terminated.
Dec 13 03:46:46 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[347886]: [NOTICE]   (347890) : haproxy version is 2.8.14-c23fe91
Dec 13 03:46:46 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[347886]: [NOTICE]   (347890) : path to executable is /usr/sbin/haproxy
Dec 13 03:46:46 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[347886]: [WARNING]  (347890) : Exiting Master process...
Dec 13 03:46:46 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[347886]: [WARNING]  (347890) : Exiting Master process...
Dec 13 03:46:46 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[347886]: [ALERT]    (347890) : Current worker (347892) exited with code 143 (Terminated)
Dec 13 03:46:46 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[347886]: [WARNING]  (347890) : All workers exited. Exiting... (0)
Dec 13 03:46:46 np0005558241 systemd[1]: libpod-44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7.scope: Deactivated successfully.
Dec 13 03:46:46 np0005558241 podman[350016]: 2025-12-13 08:46:46.736113409 +0000 UTC m=+0.047576905 container died 44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 03:46:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7-userdata-shm.mount: Deactivated successfully.
Dec 13 03:46:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4b7ca1c7f01e35473290f2070a95e46c25e0e0fafb8bf13da3280c15aeb90b0b-merged.mount: Deactivated successfully.
Dec 13 03:46:46 np0005558241 podman[350016]: 2025-12-13 08:46:46.778517643 +0000 UTC m=+0.089981139 container cleanup 44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:46:46 np0005558241 systemd[1]: libpod-conmon-44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7.scope: Deactivated successfully.
Dec 13 03:46:46 np0005558241 podman[350048]: 2025-12-13 08:46:46.844521939 +0000 UTC m=+0.046605440 container remove 44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:46:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.851 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[306cd8ba-2bd1-4002-85fc-10c5eb295908]: (4, ('Sat Dec 13 08:46:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b (44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7)\n44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7\nSat Dec 13 08:46:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b (44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7)\n44f5c3a35d975b2df836aa3eba312c332ff875f4a1af5f598914121c70b72ec7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:46:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.853 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c4e8d963-92b1-4885-99f2-689bd42d946d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:46:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.854 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:46:46 np0005558241 nova_compute[248510]: 2025-12-13 08:46:46.855 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:46 np0005558241 kernel: tap6ad7f755-f0: left promiscuous mode
Dec 13 03:46:46 np0005558241 nova_compute[248510]: 2025-12-13 08:46:46.872 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.876 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b232dac2-2576-4235-9d34-d503bd5169eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:46:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.891 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d30c1e-586b-42d2-a713-1f3f889aa834]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:46:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.893 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f485bb04-362e-44b4-8f92-4864f5628bf2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:46:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.910 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[68bdda6c-3ec7-4db2-baf0-e2cfb887827e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 807690, 'reachable_time': 38466, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350076, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:46:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.913 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:46:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:46.913 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[7e030b0b-95ec-487d-ac83-aee49dd3a5a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:46:46 np0005558241 systemd[1]: run-netns-ovnmeta\x2d6ad7f755\x2dfa29\x2d40dd\x2d89c4\x2d988d0a51cf9b.mount: Deactivated successfully.
Dec 13 03:46:47 np0005558241 nova_compute[248510]: 2025-12-13 08:46:47.061 248514 DEBUG nova.compute.manager [req-b6afe536-9ec3-447c-be47-5dbd2850a957 req-bf57ad3b-966c-4b72-a5fd-ca0657d0e8be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-unplugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:46:47 np0005558241 nova_compute[248510]: 2025-12-13 08:46:47.062 248514 DEBUG oslo_concurrency.lockutils [req-b6afe536-9ec3-447c-be47-5dbd2850a957 req-bf57ad3b-966c-4b72-a5fd-ca0657d0e8be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:46:47 np0005558241 nova_compute[248510]: 2025-12-13 08:46:47.062 248514 DEBUG oslo_concurrency.lockutils [req-b6afe536-9ec3-447c-be47-5dbd2850a957 req-bf57ad3b-966c-4b72-a5fd-ca0657d0e8be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:46:47 np0005558241 nova_compute[248510]: 2025-12-13 08:46:47.062 248514 DEBUG oslo_concurrency.lockutils [req-b6afe536-9ec3-447c-be47-5dbd2850a957 req-bf57ad3b-966c-4b72-a5fd-ca0657d0e8be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:46:47 np0005558241 nova_compute[248510]: 2025-12-13 08:46:47.062 248514 DEBUG nova.compute.manager [req-b6afe536-9ec3-447c-be47-5dbd2850a957 req-bf57ad3b-966c-4b72-a5fd-ca0657d0e8be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] No waiting events found dispatching network-vif-unplugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:46:47 np0005558241 nova_compute[248510]: 2025-12-13 08:46:47.063 248514 WARNING nova.compute.manager [req-b6afe536-9ec3-447c-be47-5dbd2850a957 req-bf57ad3b-966c-4b72-a5fd-ca0657d0e8be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received unexpected event network-vif-unplugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 for instance with vm_state active and task_state shelving.#033[00m
Dec 13 03:46:47 np0005558241 nova_compute[248510]: 2025-12-13 08:46:47.310 248514 INFO nova.virt.libvirt.driver [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance shutdown successfully after 3 seconds.#033[00m
Dec 13 03:46:47 np0005558241 nova_compute[248510]: 2025-12-13 08:46:47.317 248514 INFO nova.virt.libvirt.driver [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance destroyed successfully.#033[00m
Dec 13 03:46:47 np0005558241 nova_compute[248510]: 2025-12-13 08:46:47.317 248514 DEBUG nova.objects.instance [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'numa_topology' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:46:47 np0005558241 nova_compute[248510]: 2025-12-13 08:46:47.637 248514 INFO nova.virt.libvirt.driver [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Beginning cold snapshot process#033[00m
Dec 13 03:46:47 np0005558241 nova_compute[248510]: 2025-12-13 08:46:47.823 248514 DEBUG nova.virt.libvirt.imagebackend [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec 13 03:46:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2526: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 5.4 KiB/s wr, 1 op/s
Dec 13 03:46:48 np0005558241 nova_compute[248510]: 2025-12-13 08:46:48.132 248514 DEBUG nova.storage.rbd_utils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] creating snapshot(eb3953769aa74c6bbf42ade93aaf4c37) on rbd image(c3fb322f-a9db-4396-b659-2307698e5524_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:46:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Dec 13 03:46:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Dec 13 03:46:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Dec 13 03:46:49 np0005558241 nova_compute[248510]: 2025-12-13 08:46:49.089 248514 DEBUG nova.storage.rbd_utils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] cloning vms/c3fb322f-a9db-4396-b659-2307698e5524_disk@eb3953769aa74c6bbf42ade93aaf4c37 to images/04012c2a-611f-4e76-a6ad-a0a4e85f7f7e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:46:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:46:49 np0005558241 nova_compute[248510]: 2025-12-13 08:46:49.161 248514 DEBUG nova.storage.rbd_utils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] flattening images/04012c2a-611f-4e76-a6ad-a0a4e85f7f7e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec 13 03:46:49 np0005558241 nova_compute[248510]: 2025-12-13 08:46:49.225 248514 DEBUG nova.compute.manager [req-ca317561-40cb-4be8-bd25-1e61957f6940 req-6d77a9d0-0ef0-4153-90c9-6ad1dda35663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:46:49 np0005558241 nova_compute[248510]: 2025-12-13 08:46:49.225 248514 DEBUG oslo_concurrency.lockutils [req-ca317561-40cb-4be8-bd25-1e61957f6940 req-6d77a9d0-0ef0-4153-90c9-6ad1dda35663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:46:49 np0005558241 nova_compute[248510]: 2025-12-13 08:46:49.225 248514 DEBUG oslo_concurrency.lockutils [req-ca317561-40cb-4be8-bd25-1e61957f6940 req-6d77a9d0-0ef0-4153-90c9-6ad1dda35663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:46:49 np0005558241 nova_compute[248510]: 2025-12-13 08:46:49.226 248514 DEBUG oslo_concurrency.lockutils [req-ca317561-40cb-4be8-bd25-1e61957f6940 req-6d77a9d0-0ef0-4153-90c9-6ad1dda35663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:46:49 np0005558241 nova_compute[248510]: 2025-12-13 08:46:49.227 248514 DEBUG nova.compute.manager [req-ca317561-40cb-4be8-bd25-1e61957f6940 req-6d77a9d0-0ef0-4153-90c9-6ad1dda35663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] No waiting events found dispatching network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:46:49 np0005558241 nova_compute[248510]: 2025-12-13 08:46:49.228 248514 WARNING nova.compute.manager [req-ca317561-40cb-4be8-bd25-1e61957f6940 req-6d77a9d0-0ef0-4153-90c9-6ad1dda35663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received unexpected event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Dec 13 03:46:49 np0005558241 nova_compute[248510]: 2025-12-13 08:46:49.626 248514 DEBUG nova.storage.rbd_utils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] removing snapshot(eb3953769aa74c6bbf42ade93aaf4c37) on rbd image(c3fb322f-a9db-4396-b659-2307698e5524_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec 13 03:46:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2528: 321 pgs: 321 active+clean; 121 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 815 KiB/s rd, 10 KiB/s wr, 4 op/s
Dec 13 03:46:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Dec 13 03:46:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Dec 13 03:46:50 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Dec 13 03:46:50 np0005558241 nova_compute[248510]: 2025-12-13 08:46:50.098 248514 DEBUG nova.storage.rbd_utils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] creating snapshot(snap) on rbd image(04012c2a-611f-4e76-a6ad-a0a4e85f7f7e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:46:50 np0005558241 nova_compute[248510]: 2025-12-13 08:46:50.411 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Dec 13 03:46:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Dec 13 03:46:51 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Dec 13 03:46:51 np0005558241 nova_compute[248510]: 2025-12-13 08:46:51.630 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2531: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 136 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 1.7 MiB/s wr, 61 op/s
Dec 13 03:46:52 np0005558241 nova_compute[248510]: 2025-12-13 08:46:52.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:46:53 np0005558241 nova_compute[248510]: 2025-12-13 08:46:53.210 248514 INFO nova.virt.libvirt.driver [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Snapshot image upload complete#033[00m
Dec 13 03:46:53 np0005558241 nova_compute[248510]: 2025-12-13 08:46:53.210 248514 DEBUG nova.compute.manager [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:46:53 np0005558241 nova_compute[248510]: 2025-12-13 08:46:53.345 248514 INFO nova.compute.manager [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Shelve offloading#033[00m
Dec 13 03:46:53 np0005558241 nova_compute[248510]: 2025-12-13 08:46:53.353 248514 INFO nova.virt.libvirt.driver [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance destroyed successfully.#033[00m
Dec 13 03:46:53 np0005558241 nova_compute[248510]: 2025-12-13 08:46:53.353 248514 DEBUG nova.compute.manager [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:46:53 np0005558241 nova_compute[248510]: 2025-12-13 08:46:53.356 248514 DEBUG oslo_concurrency.lockutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:46:53 np0005558241 nova_compute[248510]: 2025-12-13 08:46:53.356 248514 DEBUG oslo_concurrency.lockutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquired lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:46:53 np0005558241 nova_compute[248510]: 2025-12-13 08:46:53.356 248514 DEBUG nova.network.neutron [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:46:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2532: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 164 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 4.6 MiB/s wr, 156 op/s
Dec 13 03:46:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:46:55 np0005558241 nova_compute[248510]: 2025-12-13 08:46:55.413 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:55.425 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:46:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:55.426 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:46:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:46:55.426 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:46:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2533: 321 pgs: 321 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 6.7 MiB/s wr, 150 op/s
Dec 13 03:46:56 np0005558241 nova_compute[248510]: 2025-12-13 08:46:56.182 248514 DEBUG nova.network.neutron [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating instance_info_cache with network_info: [{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:46:56 np0005558241 nova_compute[248510]: 2025-12-13 08:46:56.207 248514 DEBUG oslo_concurrency.lockutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Releasing lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:46:56 np0005558241 nova_compute[248510]: 2025-12-13 08:46:56.672 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:56 np0005558241 nova_compute[248510]: 2025-12-13 08:46:56.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:46:57 np0005558241 nova_compute[248510]: 2025-12-13 08:46:57.599 248514 INFO nova.virt.libvirt.driver [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance destroyed successfully.#033[00m
Dec 13 03:46:57 np0005558241 nova_compute[248510]: 2025-12-13 08:46:57.599 248514 DEBUG nova.objects.instance [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'resources' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:46:57 np0005558241 nova_compute[248510]: 2025-12-13 08:46:57.631 248514 DEBUG nova.virt.libvirt.vif [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-235457723',display_name='tempest-ServersNegativeTestJSON-server-235457723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-235457723',id=103,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:45:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-u6zqzes4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-ServersNegativeTestJSON-1471623163-project-member',shelved_at='2025-12-13T08:46:53.210929',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='04012c2a-611f-4e76-a6ad-a0a4e85f7f7e'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:46:47Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=c3fb322f-a9db-4396-b659-2307698e5524,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:46:57 np0005558241 nova_compute[248510]: 2025-12-13 08:46:57.632 248514 DEBUG nova.network.os_vif_util [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:46:57 np0005558241 nova_compute[248510]: 2025-12-13 08:46:57.634 248514 DEBUG nova.network.os_vif_util [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:46:57 np0005558241 nova_compute[248510]: 2025-12-13 08:46:57.635 248514 DEBUG os_vif [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:46:57 np0005558241 nova_compute[248510]: 2025-12-13 08:46:57.637 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:57 np0005558241 nova_compute[248510]: 2025-12-13 08:46:57.637 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d164f50-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:46:57 np0005558241 nova_compute[248510]: 2025-12-13 08:46:57.639 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:57 np0005558241 nova_compute[248510]: 2025-12-13 08:46:57.641 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:46:57 np0005558241 nova_compute[248510]: 2025-12-13 08:46:57.646 248514 INFO os_vif [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5')#033[00m
Dec 13 03:46:57 np0005558241 nova_compute[248510]: 2025-12-13 08:46:57.750 248514 DEBUG nova.compute.manager [req-d02ed319-4e9f-44b0-8d58-3449277e0c62 req-6fa92fe4-b62b-4996-ac3d-65b5cb3f9c67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-changed-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:46:57 np0005558241 nova_compute[248510]: 2025-12-13 08:46:57.750 248514 DEBUG nova.compute.manager [req-d02ed319-4e9f-44b0-8d58-3449277e0c62 req-6fa92fe4-b62b-4996-ac3d-65b5cb3f9c67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Refreshing instance network info cache due to event network-changed-2d164f50-a56a-4eaf-ad60-84274a0eb413. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:46:57 np0005558241 nova_compute[248510]: 2025-12-13 08:46:57.751 248514 DEBUG oslo_concurrency.lockutils [req-d02ed319-4e9f-44b0-8d58-3449277e0c62 req-6fa92fe4-b62b-4996-ac3d-65b5cb3f9c67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:46:57 np0005558241 nova_compute[248510]: 2025-12-13 08:46:57.751 248514 DEBUG oslo_concurrency.lockutils [req-d02ed319-4e9f-44b0-8d58-3449277e0c62 req-6fa92fe4-b62b-4996-ac3d-65b5cb3f9c67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:46:57 np0005558241 nova_compute[248510]: 2025-12-13 08:46:57.751 248514 DEBUG nova.network.neutron [req-d02ed319-4e9f-44b0-8d58-3449277e0c62 req-6fa92fe4-b62b-4996-ac3d-65b5cb3f9c67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Refreshing network info cache for port 2d164f50-a56a-4eaf-ad60-84274a0eb413 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:46:57 np0005558241 nova_compute[248510]: 2025-12-13 08:46:57.946 248514 INFO nova.virt.libvirt.driver [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Deleting instance files /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524_del#033[00m
Dec 13 03:46:57 np0005558241 nova_compute[248510]: 2025-12-13 08:46:57.947 248514 INFO nova.virt.libvirt.driver [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Deletion of /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524_del complete#033[00m
Dec 13 03:46:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2534: 321 pgs: 321 active+clean; 170 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 5.8 MiB/s wr, 130 op/s
Dec 13 03:46:58 np0005558241 nova_compute[248510]: 2025-12-13 08:46:58.061 248514 INFO nova.scheduler.client.report [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Deleted allocations for instance c3fb322f-a9db-4396-b659-2307698e5524#033[00m
Dec 13 03:46:58 np0005558241 nova_compute[248510]: 2025-12-13 08:46:58.139 248514 DEBUG oslo_concurrency.lockutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:46:58 np0005558241 nova_compute[248510]: 2025-12-13 08:46:58.139 248514 DEBUG oslo_concurrency.lockutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:46:58 np0005558241 nova_compute[248510]: 2025-12-13 08:46:58.174 248514 DEBUG oslo_concurrency.processutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:46:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:46:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3384703289' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:46:58 np0005558241 nova_compute[248510]: 2025-12-13 08:46:58.774 248514 DEBUG oslo_concurrency.processutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:46:58 np0005558241 nova_compute[248510]: 2025-12-13 08:46:58.783 248514 DEBUG nova.compute.provider_tree [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:46:58 np0005558241 nova_compute[248510]: 2025-12-13 08:46:58.811 248514 DEBUG nova.scheduler.client.report [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:46:58 np0005558241 nova_compute[248510]: 2025-12-13 08:46:58.845 248514 DEBUG oslo_concurrency.lockutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:46:58 np0005558241 nova_compute[248510]: 2025-12-13 08:46:58.904 248514 DEBUG oslo_concurrency.lockutils [None req-d0e2b302-0ad6-44ac-943b-a56e3af9828a 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:46:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:46:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Dec 13 03:46:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Dec 13 03:46:59 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Dec 13 03:46:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2536: 321 pgs: 321 active+clean; 144 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.1 MiB/s wr, 92 op/s
Dec 13 03:47:00 np0005558241 nova_compute[248510]: 2025-12-13 08:47:00.943 248514 DEBUG nova.network.neutron [req-d02ed319-4e9f-44b0-8d58-3449277e0c62 req-6fa92fe4-b62b-4996-ac3d-65b5cb3f9c67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updated VIF entry in instance network info cache for port 2d164f50-a56a-4eaf-ad60-84274a0eb413. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:47:00 np0005558241 nova_compute[248510]: 2025-12-13 08:47:00.944 248514 DEBUG nova.network.neutron [req-d02ed319-4e9f-44b0-8d58-3449277e0c62 req-6fa92fe4-b62b-4996-ac3d-65b5cb3f9c67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating instance_info_cache with network_info: [{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": null, "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap2d164f50-a5", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:47:00 np0005558241 nova_compute[248510]: 2025-12-13 08:47:00.973 248514 DEBUG oslo_concurrency.lockutils [req-d02ed319-4e9f-44b0-8d58-3449277e0c62 req-6fa92fe4-b62b-4996-ac3d-65b5cb3f9c67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:47:01 np0005558241 nova_compute[248510]: 2025-12-13 08:47:01.674 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:01 np0005558241 nova_compute[248510]: 2025-12-13 08:47:01.811 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615606.810217, c3fb322f-a9db-4396-b659-2307698e5524 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:47:01 np0005558241 nova_compute[248510]: 2025-12-13 08:47:01.812 248514 INFO nova.compute.manager [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:47:01 np0005558241 nova_compute[248510]: 2025-12-13 08:47:01.843 248514 DEBUG nova.compute.manager [None req-46bdaf19-b3a0-40b3-be4b-8f2a4ac9312e - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:47:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2537: 321 pgs: 321 active+clean; 120 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.7 MiB/s wr, 101 op/s
Dec 13 03:47:02 np0005558241 nova_compute[248510]: 2025-12-13 08:47:02.641 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:02 np0005558241 nova_compute[248510]: 2025-12-13 08:47:02.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:47:02 np0005558241 nova_compute[248510]: 2025-12-13 08:47:02.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:47:02 np0005558241 nova_compute[248510]: 2025-12-13 08:47:02.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:47:02 np0005558241 nova_compute[248510]: 2025-12-13 08:47:02.805 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:47:03 np0005558241 nova_compute[248510]: 2025-12-13 08:47:03.641 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:47:03 np0005558241 nova_compute[248510]: 2025-12-13 08:47:03.642 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:47:03 np0005558241 nova_compute[248510]: 2025-12-13 08:47:03.642 248514 INFO nova.compute.manager [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Unshelving#033[00m
Dec 13 03:47:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:03.749 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:47:03 np0005558241 nova_compute[248510]: 2025-12-13 08:47:03.750 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:03.750 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:47:03 np0005558241 nova_compute[248510]: 2025-12-13 08:47:03.762 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:47:03 np0005558241 nova_compute[248510]: 2025-12-13 08:47:03.763 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:47:03 np0005558241 nova_compute[248510]: 2025-12-13 08:47:03.768 248514 DEBUG nova.objects.instance [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'pci_requests' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:47:03 np0005558241 nova_compute[248510]: 2025-12-13 08:47:03.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:47:03 np0005558241 nova_compute[248510]: 2025-12-13 08:47:03.801 248514 DEBUG nova.objects.instance [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'numa_topology' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:47:03 np0005558241 nova_compute[248510]: 2025-12-13 08:47:03.827 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:47:03 np0005558241 nova_compute[248510]: 2025-12-13 08:47:03.828 248514 INFO nova.compute.claims [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:47:03 np0005558241 nova_compute[248510]: 2025-12-13 08:47:03.963 248514 DEBUG nova.scheduler.client.report [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 13 03:47:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2538: 321 pgs: 321 active+clean; 120 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 1.9 MiB/s wr, 43 op/s
Dec 13 03:47:03 np0005558241 nova_compute[248510]: 2025-12-13 08:47:03.995 248514 DEBUG nova.scheduler.client.report [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 13 03:47:03 np0005558241 nova_compute[248510]: 2025-12-13 08:47:03.996 248514 DEBUG nova.compute.provider_tree [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 03:47:04 np0005558241 nova_compute[248510]: 2025-12-13 08:47:04.030 248514 DEBUG nova.scheduler.client.report [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 13 03:47:04 np0005558241 nova_compute[248510]: 2025-12-13 08:47:04.057 248514 DEBUG nova.scheduler.client.report [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 13 03:47:04 np0005558241 nova_compute[248510]: 2025-12-13 08:47:04.100 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:47:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:47:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:47:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1798797823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:47:04 np0005558241 nova_compute[248510]: 2025-12-13 08:47:04.717 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:47:04 np0005558241 nova_compute[248510]: 2025-12-13 08:47:04.737 248514 DEBUG nova.compute.provider_tree [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:47:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:04.753 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:04 np0005558241 nova_compute[248510]: 2025-12-13 08:47:04.768 248514 DEBUG nova.scheduler.client.report [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:47:04 np0005558241 nova_compute[248510]: 2025-12-13 08:47:04.796 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:47:05 np0005558241 nova_compute[248510]: 2025-12-13 08:47:05.150 248514 INFO nova.network.neutron [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating port 2d164f50-a56a-4eaf-ad60-84274a0eb413 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Dec 13 03:47:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2539: 321 pgs: 321 active+clean; 120 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Dec 13 03:47:06 np0005558241 nova_compute[248510]: 2025-12-13 08:47:06.677 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:06 np0005558241 nova_compute[248510]: 2025-12-13 08:47:06.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:47:06 np0005558241 nova_compute[248510]: 2025-12-13 08:47:06.858 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:47:06 np0005558241 nova_compute[248510]: 2025-12-13 08:47:06.858 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquired lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:47:06 np0005558241 nova_compute[248510]: 2025-12-13 08:47:06.859 248514 DEBUG nova.network.neutron [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:47:07 np0005558241 nova_compute[248510]: 2025-12-13 08:47:07.009 248514 DEBUG nova.compute.manager [req-f1c3d60b-e617-4f73-8abc-3db23ec9819b req-5d0f27b8-d8cf-4139-a988-76d744ea1b4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-changed-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:47:07 np0005558241 nova_compute[248510]: 2025-12-13 08:47:07.010 248514 DEBUG nova.compute.manager [req-f1c3d60b-e617-4f73-8abc-3db23ec9819b req-5d0f27b8-d8cf-4139-a988-76d744ea1b4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Refreshing instance network info cache due to event network-changed-2d164f50-a56a-4eaf-ad60-84274a0eb413. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:47:07 np0005558241 nova_compute[248510]: 2025-12-13 08:47:07.011 248514 DEBUG oslo_concurrency.lockutils [req-f1c3d60b-e617-4f73-8abc-3db23ec9819b req-5d0f27b8-d8cf-4139-a988-76d744ea1b4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:47:07 np0005558241 nova_compute[248510]: 2025-12-13 08:47:07.643 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:07 np0005558241 nova_compute[248510]: 2025-12-13 08:47:07.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:47:07 np0005558241 nova_compute[248510]: 2025-12-13 08:47:07.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:47:07 np0005558241 nova_compute[248510]: 2025-12-13 08:47:07.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:47:07 np0005558241 nova_compute[248510]: 2025-12-13 08:47:07.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:47:07 np0005558241 nova_compute[248510]: 2025-12-13 08:47:07.805 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:47:07 np0005558241 nova_compute[248510]: 2025-12-13 08:47:07.805 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:47:07 np0005558241 nova_compute[248510]: 2025-12-13 08:47:07.805 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:47:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2540: 321 pgs: 321 active+clean; 120 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 818 B/s wr, 29 op/s
Dec 13 03:47:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:47:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3961085225' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:47:08 np0005558241 nova_compute[248510]: 2025-12-13 08:47:08.397 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:47:08 np0005558241 nova_compute[248510]: 2025-12-13 08:47:08.565 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:47:08 np0005558241 nova_compute[248510]: 2025-12-13 08:47:08.566 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3758MB free_disk=59.98753574863076GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:47:08 np0005558241 nova_compute[248510]: 2025-12-13 08:47:08.567 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:47:08 np0005558241 nova_compute[248510]: 2025-12-13 08:47:08.567 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:47:08 np0005558241 nova_compute[248510]: 2025-12-13 08:47:08.687 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance c3fb322f-a9db-4396-b659-2307698e5524 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:47:08 np0005558241 nova_compute[248510]: 2025-12-13 08:47:08.688 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:47:08 np0005558241 nova_compute[248510]: 2025-12-13 08:47:08.688 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:47:08 np0005558241 nova_compute[248510]: 2025-12-13 08:47:08.768 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:47:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:47:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:47:09
Dec 13 03:47:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:47:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:47:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'backups', '.mgr', 'vms', 'default.rgw.log']
Dec 13 03:47:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:47:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:47:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2791921854' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:47:09 np0005558241 nova_compute[248510]: 2025-12-13 08:47:09.361 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:47:09 np0005558241 nova_compute[248510]: 2025-12-13 08:47:09.369 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:47:09 np0005558241 nova_compute[248510]: 2025-12-13 08:47:09.394 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:47:09 np0005558241 nova_compute[248510]: 2025-12-13 08:47:09.435 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:47:09 np0005558241 nova_compute[248510]: 2025-12-13 08:47:09.436 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:47:09 np0005558241 nova_compute[248510]: 2025-12-13 08:47:09.648 248514 DEBUG nova.network.neutron [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating instance_info_cache with network_info: [{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:47:09 np0005558241 nova_compute[248510]: 2025-12-13 08:47:09.672 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Releasing lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:47:09 np0005558241 nova_compute[248510]: 2025-12-13 08:47:09.674 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:47:09 np0005558241 nova_compute[248510]: 2025-12-13 08:47:09.674 248514 INFO nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Creating image(s)#033[00m
Dec 13 03:47:09 np0005558241 nova_compute[248510]: 2025-12-13 08:47:09.698 248514 DEBUG nova.storage.rbd_utils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:47:09 np0005558241 nova_compute[248510]: 2025-12-13 08:47:09.701 248514 DEBUG nova.objects.instance [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'trusted_certs' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:47:09 np0005558241 nova_compute[248510]: 2025-12-13 08:47:09.702 248514 DEBUG oslo_concurrency.lockutils [req-f1c3d60b-e617-4f73-8abc-3db23ec9819b req-5d0f27b8-d8cf-4139-a988-76d744ea1b4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:47:09 np0005558241 nova_compute[248510]: 2025-12-13 08:47:09.703 248514 DEBUG nova.network.neutron [req-f1c3d60b-e617-4f73-8abc-3db23ec9819b req-5d0f27b8-d8cf-4139-a988-76d744ea1b4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Refreshing network info cache for port 2d164f50-a56a-4eaf-ad60-84274a0eb413 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:47:09 np0005558241 nova_compute[248510]: 2025-12-13 08:47:09.744 248514 DEBUG nova.storage.rbd_utils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:47:09 np0005558241 nova_compute[248510]: 2025-12-13 08:47:09.762 248514 DEBUG nova.storage.rbd_utils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:47:09 np0005558241 nova_compute[248510]: 2025-12-13 08:47:09.765 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "fbdf5d23e6e4d187216e212e7434ae52f5a80494" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:47:09 np0005558241 nova_compute[248510]: 2025-12-13 08:47:09.766 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "fbdf5d23e6e4d187216e212e7434ae52f5a80494" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:47:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2541: 321 pgs: 321 active+clean; 120 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 757 B/s wr, 27 op/s
Dec 13 03:47:10 np0005558241 nova_compute[248510]: 2025-12-13 08:47:10.109 248514 DEBUG nova.virt.libvirt.imagebackend [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Image locations are: [{'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/04012c2a-611f-4e76-a6ad-a0a4e85f7f7e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/04012c2a-611f-4e76-a6ad-a0a4e85f7f7e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec 13 03:47:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:47:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:47:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:47:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:47:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:47:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:47:10 np0005558241 nova_compute[248510]: 2025-12-13 08:47:10.160 248514 DEBUG nova.virt.libvirt.imagebackend [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Selected location: {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/04012c2a-611f-4e76-a6ad-a0a4e85f7f7e/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Dec 13 03:47:10 np0005558241 nova_compute[248510]: 2025-12-13 08:47:10.161 248514 DEBUG nova.storage.rbd_utils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] cloning images/04012c2a-611f-4e76-a6ad-a0a4e85f7f7e@snap to None/c3fb322f-a9db-4396-b659-2307698e5524_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:47:10 np0005558241 nova_compute[248510]: 2025-12-13 08:47:10.641 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "fbdf5d23e6e4d187216e212e7434ae52f5a80494" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:47:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:47:10 np0005558241 nova_compute[248510]: 2025-12-13 08:47:10.756 248514 DEBUG nova.objects.instance [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'migration_context' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:47:10 np0005558241 nova_compute[248510]: 2025-12-13 08:47:10.940 248514 DEBUG nova.storage.rbd_utils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] flattening vms/c3fb322f-a9db-4396-b659-2307698e5524_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec 13 03:47:11 np0005558241 nova_compute[248510]: 2025-12-13 08:47:11.436 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:47:11 np0005558241 nova_compute[248510]: 2025-12-13 08:47:11.436 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:47:11 np0005558241 nova_compute[248510]: 2025-12-13 08:47:11.679 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2542: 321 pgs: 321 active+clean; 120 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 682 B/s wr, 15 op/s
Dec 13 03:47:12 np0005558241 nova_compute[248510]: 2025-12-13 08:47:12.319 248514 DEBUG nova.network.neutron [req-f1c3d60b-e617-4f73-8abc-3db23ec9819b req-5d0f27b8-d8cf-4139-a988-76d744ea1b4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updated VIF entry in instance network info cache for port 2d164f50-a56a-4eaf-ad60-84274a0eb413. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:47:12 np0005558241 nova_compute[248510]: 2025-12-13 08:47:12.320 248514 DEBUG nova.network.neutron [req-f1c3d60b-e617-4f73-8abc-3db23ec9819b req-5d0f27b8-d8cf-4139-a988-76d744ea1b4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating instance_info_cache with network_info: [{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:47:12 np0005558241 nova_compute[248510]: 2025-12-13 08:47:12.339 248514 DEBUG oslo_concurrency.lockutils [req-f1c3d60b-e617-4f73-8abc-3db23ec9819b req-5d0f27b8-d8cf-4139-a988-76d744ea1b4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:47:12 np0005558241 nova_compute[248510]: 2025-12-13 08:47:12.644 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.429 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Image rbd:vms/c3fb322f-a9db-4396-b659-2307698e5524_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.429 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.430 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Ensure instance console log exists: /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.430 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.430 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.431 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.434 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Start _get_guest_xml network_info=[{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:46:44Z,direct_url=<?>,disk_format='raw',id=04012c2a-611f-4e76-a6ad-a0a4e85f7f7e,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-235457723-shelved',owner='d2d4d23379cc4b03bbdd72a9134fdd9b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:46:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.439 248514 WARNING nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.445 248514 DEBUG nova.virt.libvirt.host [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.445 248514 DEBUG nova.virt.libvirt.host [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.449 248514 DEBUG nova.virt.libvirt.host [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.450 248514 DEBUG nova.virt.libvirt.host [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.450 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.450 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:46:44Z,direct_url=<?>,disk_format='raw',id=04012c2a-611f-4e76-a6ad-a0a4e85f7f7e,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-235457723-shelved',owner='d2d4d23379cc4b03bbdd72a9134fdd9b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:46:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.451 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.451 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.451 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.451 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.452 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.452 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.452 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.452 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.452 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.453 248514 DEBUG nova.virt.hardware [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.453 248514 DEBUG nova.objects.instance [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'vcpu_model' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:47:13 np0005558241 nova_compute[248510]: 2025-12-13 08:47:13.487 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:47:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2543: 321 pgs: 321 active+clean; 143 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.2 MiB/s wr, 28 op/s
Dec 13 03:47:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:47:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2434996582' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.058 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.088 248514 DEBUG nova.storage.rbd_utils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.092 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:47:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:47:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:47:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/535499270' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.628 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.630 248514 DEBUG nova.virt.libvirt.vif [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-235457723',display_name='tempest-ServersNegativeTestJSON-server-235457723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-235457723',id=103,image_ref='04012c2a-611f-4e76-a6ad-a0a4e85f7f7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:45:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-u6zqzes4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='vir
tio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-ServersNegativeTestJSON-1471623163-project-member',shelved_at='2025-12-13T08:46:53.210929',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='04012c2a-611f-4e76-a6ad-a0a4e85f7f7e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:47:03Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=c3fb322f-a9db-4396-b659-2307698e5524,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.631 248514 DEBUG nova.network.os_vif_util [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.631 248514 DEBUG nova.network.os_vif_util [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.633 248514 DEBUG nova.objects.instance [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'pci_devices' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.668 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:47:14 np0005558241 nova_compute[248510]:  <uuid>c3fb322f-a9db-4396-b659-2307698e5524</uuid>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:  <name>instance-00000067</name>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <nova:name>tempest-ServersNegativeTestJSON-server-235457723</nova:name>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:47:13</nova:creationTime>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:47:14 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:        <nova:user uuid="8948a1b0c26f43129cb50ef6f3872ecd">tempest-ServersNegativeTestJSON-1471623163-project-member</nova:user>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:        <nova:project uuid="d2d4d23379cc4b03bbdd72a9134fdd9b">tempest-ServersNegativeTestJSON-1471623163</nova:project>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="04012c2a-611f-4e76-a6ad-a0a4e85f7f7e"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:        <nova:port uuid="2d164f50-a56a-4eaf-ad60-84274a0eb413">
Dec 13 03:47:14 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <entry name="serial">c3fb322f-a9db-4396-b659-2307698e5524</entry>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <entry name="uuid">c3fb322f-a9db-4396-b659-2307698e5524</entry>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/c3fb322f-a9db-4396-b659-2307698e5524_disk">
Dec 13 03:47:14 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:47:14 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/c3fb322f-a9db-4396-b659-2307698e5524_disk.config">
Dec 13 03:47:14 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:47:14 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:e2:39:62"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <target dev="tap2d164f50-a5"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/console.log" append="off"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <input type="keyboard" bus="usb"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:47:14 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:47:14 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:47:14 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:47:14 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.669 248514 DEBUG nova.compute.manager [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Preparing to wait for external event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.670 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.670 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.671 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.672 248514 DEBUG nova.virt.libvirt.vif [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-235457723',display_name='tempest-ServersNegativeTestJSON-server-235457723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-235457723',id=103,image_ref='04012c2a-611f-4e76-a6ad-a0a4e85f7f7e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:45:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-u6zqzes4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_
model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-ServersNegativeTestJSON-1471623163-project-member',shelved_at='2025-12-13T08:46:53.210929',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='04012c2a-611f-4e76-a6ad-a0a4e85f7f7e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:47:03Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=c3fb322f-a9db-4396-b659-2307698e5524,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.672 248514 DEBUG nova.network.os_vif_util [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.673 248514 DEBUG nova.network.os_vif_util [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.673 248514 DEBUG os_vif [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.674 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.675 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.675 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.678 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.678 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d164f50-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.679 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d164f50-a5, col_values=(('external_ids', {'iface-id': '2d164f50-a56a-4eaf-ad60-84274a0eb413', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:39:62', 'vm-uuid': 'c3fb322f-a9db-4396-b659-2307698e5524'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:14 np0005558241 NetworkManager[50376]: <info>  [1765615634.6817] manager: (tap2d164f50-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/413)
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.681 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.684 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.687 248514 INFO os_vif [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5')#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.887 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.888 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.889 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] No VIF found with MAC fa:16:3e:e2:39:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.889 248514 INFO nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Using config drive#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.913 248514 DEBUG nova.storage.rbd_utils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.945 248514 DEBUG nova.objects.instance [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'ec2_ids' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:47:14 np0005558241 nova_compute[248510]: 2025-12-13 08:47:14.989 248514 DEBUG nova.objects.instance [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'keypairs' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:47:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:47:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/48211187' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:47:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:47:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/48211187' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:47:15 np0005558241 nova_compute[248510]: 2025-12-13 08:47:15.622 248514 INFO nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Creating config drive at /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config#033[00m
Dec 13 03:47:15 np0005558241 nova_compute[248510]: 2025-12-13 08:47:15.627 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1757rw03 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:47:15 np0005558241 nova_compute[248510]: 2025-12-13 08:47:15.777 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1757rw03" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:47:15 np0005558241 nova_compute[248510]: 2025-12-13 08:47:15.811 248514 DEBUG nova.storage.rbd_utils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] rbd image c3fb322f-a9db-4396-b659-2307698e5524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:47:15 np0005558241 nova_compute[248510]: 2025-12-13 08:47:15.815 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config c3fb322f-a9db-4396-b659-2307698e5524_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:47:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2544: 321 pgs: 321 active+clean; 199 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 75 op/s
Dec 13 03:47:16 np0005558241 nova_compute[248510]: 2025-12-13 08:47:16.731 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:16 np0005558241 nova_compute[248510]: 2025-12-13 08:47:16.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:47:16 np0005558241 podman[350668]: 2025-12-13 08:47:16.992941346 +0000 UTC m=+0.065355431 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 03:47:17 np0005558241 podman[350667]: 2025-12-13 08:47:17.013046221 +0000 UTC m=+0.092683857 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 03:47:17 np0005558241 podman[350666]: 2025-12-13 08:47:17.017471712 +0000 UTC m=+0.096733359 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:47:17 np0005558241 nova_compute[248510]: 2025-12-13 08:47:17.344 248514 DEBUG oslo_concurrency.processutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config c3fb322f-a9db-4396-b659-2307698e5524_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:47:17 np0005558241 nova_compute[248510]: 2025-12-13 08:47:17.345 248514 INFO nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Deleting local config drive /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524/disk.config because it was imported into RBD.#033[00m
Dec 13 03:47:17 np0005558241 kernel: tap2d164f50-a5: entered promiscuous mode
Dec 13 03:47:17 np0005558241 NetworkManager[50376]: <info>  [1765615637.4090] manager: (tap2d164f50-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/414)
Dec 13 03:47:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:17Z|01009|binding|INFO|Claiming lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 for this chassis.
Dec 13 03:47:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:17Z|01010|binding|INFO|2d164f50-a56a-4eaf-ad60-84274a0eb413: Claiming fa:16:3e:e2:39:62 10.100.0.6
Dec 13 03:47:17 np0005558241 nova_compute[248510]: 2025-12-13 08:47:17.412 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:17Z|01011|binding|INFO|Setting lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 ovn-installed in OVS
Dec 13 03:47:17 np0005558241 nova_compute[248510]: 2025-12-13 08:47:17.432 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.434 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:39:62 10.100.0.6'], port_security=['fa:16:3e:e2:39:62 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c3fb322f-a9db-4396-b659-2307698e5524', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2d164f50-a56a-4eaf-ad60-84274a0eb413) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:47:17 np0005558241 nova_compute[248510]: 2025-12-13 08:47:17.435 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:17Z|01012|binding|INFO|Setting lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 up in Southbound
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.436 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2d164f50-a56a-4eaf-ad60-84274a0eb413 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b bound to our chassis#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.437 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b#033[00m
Dec 13 03:47:17 np0005558241 systemd-machined[210538]: New machine qemu-129-instance-00000067.
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.453 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[39c2bf4b-870a-4456-aeb7-944224effc91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.454 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6ad7f755-f1 in ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.457 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6ad7f755-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.457 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1c5cf569-688b-4643-9b18-7884d81eeffc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.458 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c25a6c77-fc9c-476b-97f7-69ccf3e7fcae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:17 np0005558241 systemd[1]: Started Virtual Machine qemu-129-instance-00000067.
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.471 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e111bacb-1439-40c1-bbce-f8ba37c6ae6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:17 np0005558241 systemd-udevd[350745]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.489 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:5d:27 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3cd63fa2-b81b-489a-a8cf-c4a874eedf7b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cd63fa2-b81b-489a-a8cf-c4a874eedf7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '062cf52f23fa4a85853b3a51dc81e171', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f01d2e7-04a2-4e90-bd8e-58cdccdf83ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=14861846-11b2-42ff-8374-23cd7b7371bb) old=Port_Binding(mac=['fa:16:3e:9a:5d:27 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-3cd63fa2-b81b-489a-a8cf-c4a874eedf7b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3cd63fa2-b81b-489a-a8cf-c4a874eedf7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '062cf52f23fa4a85853b3a51dc81e171', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:47:17 np0005558241 NetworkManager[50376]: <info>  [1765615637.4915] device (tap2d164f50-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.492 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[718e6fb4-0c9b-446c-ae9b-b6576559d778]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:17 np0005558241 NetworkManager[50376]: <info>  [1765615637.4937] device (tap2d164f50-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.532 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e99f56dc-69cc-48ff-9854-dad8fe009a5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:17 np0005558241 NetworkManager[50376]: <info>  [1765615637.5386] manager: (tap6ad7f755-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/415)
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.537 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d37b9515-8748-4af6-84a6-abea7b2df971]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.569 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa594db-9ad2-43f6-9774-c51f81179de2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.573 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[959307ab-b1a2-4739-bb88-c5e36e3e7c98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:17 np0005558241 NetworkManager[50376]: <info>  [1765615637.5974] device (tap6ad7f755-f0): carrier: link connected
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.602 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[26e57f56-5792-449b-94a0-84df4e2475d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.623 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ee771fa4-9cf4-4dc0-a971-6b602d5c3248]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 821481, 'reachable_time': 32904, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350776, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.640 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f6d8e68-b480-452f-834b-e1cd59eb1636]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe26:35ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 821481, 'tstamp': 821481}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350777, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.665 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[357b12e8-fb99-4c77-82c8-9c041f0a5e10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 299], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 821481, 'reachable_time': 32904, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350778, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.706 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0a814d6b-01df-4061-b11b-5170bed1e20a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.782 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f1ea9529-117e-4c95-b20d-489329329dea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.784 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.784 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.784 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ad7f755-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:17 np0005558241 kernel: tap6ad7f755-f0: entered promiscuous mode
Dec 13 03:47:17 np0005558241 NetworkManager[50376]: <info>  [1765615637.7869] manager: (tap6ad7f755-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Dec 13 03:47:17 np0005558241 nova_compute[248510]: 2025-12-13 08:47:17.786 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.788 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ad7f755-f0, col_values=(('external_ids', {'iface-id': '683d7da0-6f1e-41a6-9158-6204fb05ee50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:17Z|01013|binding|INFO|Releasing lport 683d7da0-6f1e-41a6-9158-6204fb05ee50 from this chassis (sb_readonly=0)
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.790 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.798 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[04077414-8523-4fa7-908d-6ba21207b31c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.799 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-6ad7f755-fa29-40dd-89c4-988d0a51cf9b
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.pid.haproxy
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 6ad7f755-fa29-40dd-89c4-988d0a51cf9b
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:47:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:17.800 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'env', 'PROCESS_TAG=haproxy-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:47:17 np0005558241 nova_compute[248510]: 2025-12-13 08:47:17.805 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2545: 321 pgs: 321 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 78 op/s
Dec 13 03:47:18 np0005558241 podman[350847]: 2025-12-13 08:47:18.206320836 +0000 UTC m=+0.065086454 container create 14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:47:18 np0005558241 nova_compute[248510]: 2025-12-13 08:47:18.238 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615638.2374232, c3fb322f-a9db-4396-b659-2307698e5524 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:47:18 np0005558241 nova_compute[248510]: 2025-12-13 08:47:18.240 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Started (Lifecycle Event)#033[00m
Dec 13 03:47:18 np0005558241 systemd[1]: Started libpod-conmon-14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81.scope.
Dec 13 03:47:18 np0005558241 podman[350847]: 2025-12-13 08:47:18.174488977 +0000 UTC m=+0.033254625 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:47:18 np0005558241 nova_compute[248510]: 2025-12-13 08:47:18.270 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:47:18 np0005558241 nova_compute[248510]: 2025-12-13 08:47:18.277 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615638.2375324, c3fb322f-a9db-4396-b659-2307698e5524 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:47:18 np0005558241 nova_compute[248510]: 2025-12-13 08:47:18.278 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:47:18 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:47:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1c5e1c88792b2cd7c895939660c80ec1d136af36d046687f8916aa03ba3455/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:18 np0005558241 nova_compute[248510]: 2025-12-13 08:47:18.302 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:47:18 np0005558241 podman[350847]: 2025-12-13 08:47:18.305528676 +0000 UTC m=+0.164294324 container init 14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:47:18 np0005558241 nova_compute[248510]: 2025-12-13 08:47:18.308 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:47:18 np0005558241 podman[350847]: 2025-12-13 08:47:18.311933586 +0000 UTC m=+0.170699204 container start 14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 03:47:18 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[350868]: [NOTICE]   (350872) : New worker (350874) forked
Dec 13 03:47:18 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[350868]: [NOTICE]   (350872) : Loading success.
Dec 13 03:47:18 np0005558241 nova_compute[248510]: 2025-12-13 08:47:18.340 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:47:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:18.372 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 14861846-11b2-42ff-8374-23cd7b7371bb in datapath 3cd63fa2-b81b-489a-a8cf-c4a874eedf7b updated#033[00m
Dec 13 03:47:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:18.374 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3cd63fa2-b81b-489a-a8cf-c4a874eedf7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:47:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:18.375 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[32da3c98-e39c-4e4c-8538-a956838365ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:18 np0005558241 nova_compute[248510]: 2025-12-13 08:47:18.919 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:47:18 np0005558241 nova_compute[248510]: 2025-12-13 08:47:18.919 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:47:18 np0005558241 nova_compute[248510]: 2025-12-13 08:47:18.942 248514 DEBUG nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:47:19 np0005558241 nova_compute[248510]: 2025-12-13 08:47:19.028 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:47:19 np0005558241 nova_compute[248510]: 2025-12-13 08:47:19.028 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:47:19 np0005558241 nova_compute[248510]: 2025-12-13 08:47:19.034 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:47:19 np0005558241 nova_compute[248510]: 2025-12-13 08:47:19.035 248514 INFO nova.compute.claims [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:47:19 np0005558241 nova_compute[248510]: 2025-12-13 08:47:19.159 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:47:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:47:19 np0005558241 nova_compute[248510]: 2025-12-13 08:47:19.682 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:47:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/261967043' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:47:19 np0005558241 nova_compute[248510]: 2025-12-13 08:47:19.815 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:47:19 np0005558241 nova_compute[248510]: 2025-12-13 08:47:19.821 248514 DEBUG nova.compute.provider_tree [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:47:19 np0005558241 nova_compute[248510]: 2025-12-13 08:47:19.842 248514 DEBUG nova.scheduler.client.report [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:47:19 np0005558241 nova_compute[248510]: 2025-12-13 08:47:19.870 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.842s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:47:19 np0005558241 nova_compute[248510]: 2025-12-13 08:47:19.871 248514 DEBUG nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:47:19 np0005558241 nova_compute[248510]: 2025-12-13 08:47:19.931 248514 DEBUG nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:47:19 np0005558241 nova_compute[248510]: 2025-12-13 08:47:19.931 248514 DEBUG nova.network.neutron [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:47:19 np0005558241 nova_compute[248510]: 2025-12-13 08:47:19.958 248514 INFO nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:47:19 np0005558241 nova_compute[248510]: 2025-12-13 08:47:19.979 248514 DEBUG nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:47:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2546: 321 pgs: 321 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 86 op/s
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.088 248514 DEBUG nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.090 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.090 248514 INFO nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Creating image(s)#033[00m
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.110 248514 DEBUG nova.storage.rbd_utils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.131 248514 DEBUG nova.storage.rbd_utils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.159 248514 DEBUG nova.storage.rbd_utils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.162 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.204 248514 DEBUG nova.policy [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c377508dda354c0aa762d15d52aa130c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.237 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.239 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.241 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.242 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.270 248514 DEBUG nova.storage.rbd_utils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.275 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.636 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.705 248514 DEBUG nova.storage.rbd_utils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] resizing rbd image 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.796 248514 DEBUG nova.objects.instance [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'migration_context' on Instance uuid 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.816 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.816 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Ensure instance console log exists: /var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.817 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.817 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:47:20 np0005558241 nova_compute[248510]: 2025-12-13 08:47:20.817 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007717642784429154 of space, bias 1.0, pg target 0.2315292835328746 quantized to 32 (current 32)
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014251084379551505 of space, bias 1.0, pg target 0.4275325313865451 quantized to 32 (current 32)
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.839601767484647e-07 of space, bias 4.0, pg target 0.0007007522120981576 quantized to 16 (current 32)
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:47:21 np0005558241 nova_compute[248510]: 2025-12-13 08:47:21.733 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:47:21 np0005558241 nova_compute[248510]: 2025-12-13 08:47:21.846 248514 DEBUG nova.network.neutron [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Successfully created port: 1a00c927-1c7f-4af5-9337-d6e58800dc3c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 03:47:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2547: 321 pgs: 321 active+clean; 200 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 86 op/s
Dec 13 03:47:22 np0005558241 nova_compute[248510]: 2025-12-13 08:47:22.299 248514 DEBUG nova.compute.manager [req-dd637808-06bd-43ea-91d8-34d820229025 req-6ca500dc-dfed-4ffd-8e22-c0d0a355d196 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:47:22 np0005558241 nova_compute[248510]: 2025-12-13 08:47:22.300 248514 DEBUG oslo_concurrency.lockutils [req-dd637808-06bd-43ea-91d8-34d820229025 req-6ca500dc-dfed-4ffd-8e22-c0d0a355d196 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:47:22 np0005558241 nova_compute[248510]: 2025-12-13 08:47:22.300 248514 DEBUG oslo_concurrency.lockutils [req-dd637808-06bd-43ea-91d8-34d820229025 req-6ca500dc-dfed-4ffd-8e22-c0d0a355d196 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:47:22 np0005558241 nova_compute[248510]: 2025-12-13 08:47:22.300 248514 DEBUG oslo_concurrency.lockutils [req-dd637808-06bd-43ea-91d8-34d820229025 req-6ca500dc-dfed-4ffd-8e22-c0d0a355d196 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:47:22 np0005558241 nova_compute[248510]: 2025-12-13 08:47:22.300 248514 DEBUG nova.compute.manager [req-dd637808-06bd-43ea-91d8-34d820229025 req-6ca500dc-dfed-4ffd-8e22-c0d0a355d196 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Processing event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 03:47:22 np0005558241 nova_compute[248510]: 2025-12-13 08:47:22.301 248514 DEBUG nova.compute.manager [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:47:22 np0005558241 nova_compute[248510]: 2025-12-13 08:47:22.304 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615642.3041523, c3fb322f-a9db-4396-b659-2307698e5524 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:47:22 np0005558241 nova_compute[248510]: 2025-12-13 08:47:22.304 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Resumed (Lifecycle Event)
Dec 13 03:47:22 np0005558241 nova_compute[248510]: 2025-12-13 08:47:22.306 248514 DEBUG nova.virt.libvirt.driver [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 03:47:22 np0005558241 nova_compute[248510]: 2025-12-13 08:47:22.309 248514 INFO nova.virt.libvirt.driver [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance spawned successfully.
Dec 13 03:47:22 np0005558241 nova_compute[248510]: 2025-12-13 08:47:22.334 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:47:22 np0005558241 nova_compute[248510]: 2025-12-13 08:47:22.338 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:47:22 np0005558241 nova_compute[248510]: 2025-12-13 08:47:22.369 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:47:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Dec 13 03:47:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Dec 13 03:47:23 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Dec 13 03:47:23 np0005558241 nova_compute[248510]: 2025-12-13 08:47:23.566 248514 DEBUG nova.network.neutron [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Successfully updated port: 1a00c927-1c7f-4af5-9337-d6e58800dc3c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 03:47:23 np0005558241 nova_compute[248510]: 2025-12-13 08:47:23.586 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:47:23 np0005558241 nova_compute[248510]: 2025-12-13 08:47:23.586 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquired lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:47:23 np0005558241 nova_compute[248510]: 2025-12-13 08:47:23.586 248514 DEBUG nova.network.neutron [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:47:23 np0005558241 nova_compute[248510]: 2025-12-13 08:47:23.642 248514 DEBUG nova.compute.manager [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:47:23 np0005558241 nova_compute[248510]: 2025-12-13 08:47:23.752 248514 DEBUG oslo_concurrency.lockutils [None req-9b21c4d9-34cb-4431-86d5-7a3b6668e761 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 20.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:47:23 np0005558241 nova_compute[248510]: 2025-12-13 08:47:23.760 248514 DEBUG nova.compute.manager [req-93915638-8152-41ce-ab26-6eb3675d5010 req-0ad7c881-8dcb-42db-b683-a855dac5cb33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Received event network-changed-1a00c927-1c7f-4af5-9337-d6e58800dc3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:47:23 np0005558241 nova_compute[248510]: 2025-12-13 08:47:23.760 248514 DEBUG nova.compute.manager [req-93915638-8152-41ce-ab26-6eb3675d5010 req-0ad7c881-8dcb-42db-b683-a855dac5cb33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Refreshing instance network info cache due to event network-changed-1a00c927-1c7f-4af5-9337-d6e58800dc3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 03:47:23 np0005558241 nova_compute[248510]: 2025-12-13 08:47:23.761 248514 DEBUG oslo_concurrency.lockutils [req-93915638-8152-41ce-ab26-6eb3675d5010 req-0ad7c881-8dcb-42db-b683-a855dac5cb33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:47:23 np0005558241 nova_compute[248510]: 2025-12-13 08:47:23.912 248514 DEBUG nova.network.neutron [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:47:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2549: 321 pgs: 321 active+clean; 238 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.1 MiB/s wr, 135 op/s
Dec 13 03:47:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:47:24 np0005558241 nova_compute[248510]: 2025-12-13 08:47:24.685 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:47:25 np0005558241 nova_compute[248510]: 2025-12-13 08:47:25.228 248514 DEBUG nova.compute.manager [req-f0b6e4c7-1863-4a86-b20f-43027d47f648 req-6ea4690c-7916-4625-81d0-754a1c2c3018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:47:25 np0005558241 nova_compute[248510]: 2025-12-13 08:47:25.229 248514 DEBUG oslo_concurrency.lockutils [req-f0b6e4c7-1863-4a86-b20f-43027d47f648 req-6ea4690c-7916-4625-81d0-754a1c2c3018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:47:25 np0005558241 nova_compute[248510]: 2025-12-13 08:47:25.229 248514 DEBUG oslo_concurrency.lockutils [req-f0b6e4c7-1863-4a86-b20f-43027d47f648 req-6ea4690c-7916-4625-81d0-754a1c2c3018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:47:25 np0005558241 nova_compute[248510]: 2025-12-13 08:47:25.229 248514 DEBUG oslo_concurrency.lockutils [req-f0b6e4c7-1863-4a86-b20f-43027d47f648 req-6ea4690c-7916-4625-81d0-754a1c2c3018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:47:25 np0005558241 nova_compute[248510]: 2025-12-13 08:47:25.230 248514 DEBUG nova.compute.manager [req-f0b6e4c7-1863-4a86-b20f-43027d47f648 req-6ea4690c-7916-4625-81d0-754a1c2c3018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] No waiting events found dispatching network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:47:25 np0005558241 nova_compute[248510]: 2025-12-13 08:47:25.230 248514 WARNING nova.compute.manager [req-f0b6e4c7-1863-4a86-b20f-43027d47f648 req-6ea4690c-7916-4625-81d0-754a1c2c3018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received unexpected event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 for instance with vm_state active and task_state None.
Dec 13 03:47:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2550: 321 pgs: 321 active+clean; 210 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Dec 13 03:47:26 np0005558241 nova_compute[248510]: 2025-12-13 08:47:26.736 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.531 248514 DEBUG nova.network.neutron [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updating instance_info_cache with network_info: [{"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.561 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Releasing lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.562 248514 DEBUG nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Instance network_info: |[{"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.562 248514 DEBUG oslo_concurrency.lockutils [req-93915638-8152-41ce-ab26-6eb3675d5010 req-0ad7c881-8dcb-42db-b683-a855dac5cb33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.562 248514 DEBUG nova.network.neutron [req-93915638-8152-41ce-ab26-6eb3675d5010 req-0ad7c881-8dcb-42db-b683-a855dac5cb33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Refreshing network info cache for port 1a00c927-1c7f-4af5-9337-d6e58800dc3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.566 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Start _get_guest_xml network_info=[{"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.573 248514 WARNING nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.581 248514 DEBUG nova.virt.libvirt.host [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.582 248514 DEBUG nova.virt.libvirt.host [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.596 248514 DEBUG nova.virt.libvirt.host [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.597 248514 DEBUG nova.virt.libvirt.host [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.597 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.598 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.598 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.598 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.599 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.599 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.599 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.599 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.600 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.600 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.600 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.600 248514 DEBUG nova.virt.hardware [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 03:47:27 np0005558241 nova_compute[248510]: 2025-12-13 08:47:27.604 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:47:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2551: 321 pgs: 321 active+clean; 182 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 146 op/s
Dec 13 03:47:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:47:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/190927087' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.251 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.275 248514 DEBUG nova.storage.rbd_utils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.279 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:47:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:47:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2807636306' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.855 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.857 248514 DEBUG nova.virt.libvirt.vif [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:47:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-1397234678',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-1397234678',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=105,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOgNSj2RX2tEOr5Rxtdc3T7qrIqjyVapwoURlTzSwBUNw2HAjV8i9+69CD+ahp0R2Tk6YrJ3W0cDR2tzHXyNVMUTiAkgjDao6U5yvxeoFoLQPs8Nmve95azrQ/Z/Vbs68Q==',key_name='tempest-TestSecurityGroupsBasicOps-80214463',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-c1fl6h76',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:47:20Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=1d73d88c-ca9a-4136-80de-fa2cf028ffb7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.857 248514 DEBUG nova.network.os_vif_util [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.858 248514 DEBUG nova.network.os_vif_util [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:2b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a00c927-1c7f-4af5-9337-d6e58800dc3c,network=Network(b7bba2fe-699d-4423-a6e2-09604625a8f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a00c927-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.859 248514 DEBUG nova.objects.instance [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.882 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:47:28 np0005558241 nova_compute[248510]:  <uuid>1d73d88c-ca9a-4136-80de-fa2cf028ffb7</uuid>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:  <name>instance-00000069</name>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-1397234678</nova:name>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:47:27</nova:creationTime>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:47:28 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:        <nova:user uuid="c377508dda354c0aa762d15d52aa130c">tempest-TestSecurityGroupsBasicOps-1856512026-project-member</nova:user>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:        <nova:project uuid="1064539fdf2b494ea705dd5e74afcd3b">tempest-TestSecurityGroupsBasicOps-1856512026</nova:project>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:        <nova:port uuid="1a00c927-1c7f-4af5-9337-d6e58800dc3c">
Dec 13 03:47:28 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <entry name="serial">1d73d88c-ca9a-4136-80de-fa2cf028ffb7</entry>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <entry name="uuid">1d73d88c-ca9a-4136-80de-fa2cf028ffb7</entry>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk">
Dec 13 03:47:28 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:47:28 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk.config">
Dec 13 03:47:28 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:47:28 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:9c:2b:42"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <target dev="tap1a00c927-1c"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7/console.log" append="off"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:47:28 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:47:28 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:47:28 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:47:28 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.884 248514 DEBUG nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Preparing to wait for external event network-vif-plugged-1a00c927-1c7f-4af5-9337-d6e58800dc3c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.884 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.884 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.885 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.885 248514 DEBUG nova.virt.libvirt.vif [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:47:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-1397234678',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-1397234678',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=105,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOgNSj2RX2tEOr5Rxtdc3T7qrIqjyVapwoURlTzSwBUNw2HAjV8i9+69CD+ahp0R2Tk6YrJ3W0cDR2tzHXyNVMUTiAkgjDao6U5yvxeoFoLQPs8Nmve95azrQ/Z/Vbs68Q==',key_name='tempest-TestSecurityGroupsBasicOps-80214463',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-c1fl6h76',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:47:20Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=1d73d88c-ca9a-4136-80de-fa2cf028ffb7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.886 248514 DEBUG nova.network.os_vif_util [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.886 248514 DEBUG nova.network.os_vif_util [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:2b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a00c927-1c7f-4af5-9337-d6e58800dc3c,network=Network(b7bba2fe-699d-4423-a6e2-09604625a8f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a00c927-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.887 248514 DEBUG os_vif [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:2b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a00c927-1c7f-4af5-9337-d6e58800dc3c,network=Network(b7bba2fe-699d-4423-a6e2-09604625a8f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a00c927-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.887 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.888 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.889 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.893 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.894 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a00c927-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.894 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1a00c927-1c, col_values=(('external_ids', {'iface-id': '1a00c927-1c7f-4af5-9337-d6e58800dc3c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:2b:42', 'vm-uuid': '1d73d88c-ca9a-4136-80de-fa2cf028ffb7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.896 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:28 np0005558241 NetworkManager[50376]: <info>  [1765615648.8976] manager: (tap1a00c927-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/417)
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.898 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.903 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.904 248514 INFO os_vif [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:2b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a00c927-1c7f-4af5-9337-d6e58800dc3c,network=Network(b7bba2fe-699d-4423-a6e2-09604625a8f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a00c927-1c')#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.969 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.970 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.970 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No VIF found with MAC fa:16:3e:9c:2b:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.971 248514 INFO nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Using config drive#033[00m
Dec 13 03:47:28 np0005558241 nova_compute[248510]: 2025-12-13 08:47:28.991 248514 DEBUG nova.storage.rbd_utils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:47:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:47:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Dec 13 03:47:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Dec 13 03:47:29 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Dec 13 03:47:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2553: 321 pgs: 321 active+clean; 167 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 171 op/s
Dec 13 03:47:30 np0005558241 nova_compute[248510]: 2025-12-13 08:47:30.456 248514 DEBUG nova.network.neutron [req-93915638-8152-41ce-ab26-6eb3675d5010 req-0ad7c881-8dcb-42db-b683-a855dac5cb33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updated VIF entry in instance network info cache for port 1a00c927-1c7f-4af5-9337-d6e58800dc3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:47:30 np0005558241 nova_compute[248510]: 2025-12-13 08:47:30.457 248514 DEBUG nova.network.neutron [req-93915638-8152-41ce-ab26-6eb3675d5010 req-0ad7c881-8dcb-42db-b683-a855dac5cb33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updating instance_info_cache with network_info: [{"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:47:30 np0005558241 nova_compute[248510]: 2025-12-13 08:47:30.507 248514 DEBUG oslo_concurrency.lockutils [req-93915638-8152-41ce-ab26-6eb3675d5010 req-0ad7c881-8dcb-42db-b683-a855dac5cb33 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:47:30 np0005558241 nova_compute[248510]: 2025-12-13 08:47:30.753 248514 INFO nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Creating config drive at /var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7/disk.config#033[00m
Dec 13 03:47:30 np0005558241 nova_compute[248510]: 2025-12-13 08:47:30.758 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj1k28ezz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:47:30 np0005558241 nova_compute[248510]: 2025-12-13 08:47:30.902 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj1k28ezz" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:47:30 np0005558241 nova_compute[248510]: 2025-12-13 08:47:30.927 248514 DEBUG nova.storage.rbd_utils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:47:30 np0005558241 nova_compute[248510]: 2025-12-13 08:47:30.931 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7/disk.config 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:47:31 np0005558241 nova_compute[248510]: 2025-12-13 08:47:31.082 248514 DEBUG oslo_concurrency.processutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7/disk.config 1d73d88c-ca9a-4136-80de-fa2cf028ffb7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:47:31 np0005558241 nova_compute[248510]: 2025-12-13 08:47:31.083 248514 INFO nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Deleting local config drive /var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7/disk.config because it was imported into RBD.#033[00m
Dec 13 03:47:31 np0005558241 NetworkManager[50376]: <info>  [1765615651.1417] manager: (tap1a00c927-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/418)
Dec 13 03:47:31 np0005558241 kernel: tap1a00c927-1c: entered promiscuous mode
Dec 13 03:47:31 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:31Z|01014|binding|INFO|Claiming lport 1a00c927-1c7f-4af5-9337-d6e58800dc3c for this chassis.
Dec 13 03:47:31 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:31Z|01015|binding|INFO|1a00c927-1c7f-4af5-9337-d6e58800dc3c: Claiming fa:16:3e:9c:2b:42 10.100.0.9
Dec 13 03:47:31 np0005558241 nova_compute[248510]: 2025-12-13 08:47:31.150 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.165 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:2b:42 10.100.0.9'], port_security=['fa:16:3e:9c:2b:42 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1d73d88c-ca9a-4136-80de-fa2cf028ffb7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7bba2fe-699d-4423-a6e2-09604625a8f5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '531d7c80-e840-46e0-9afc-03ae0558f787 73f9632c-0914-472c-9969-a269b215d831', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ffad85fc-28b3-4529-8d06-0367d9c3d476, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1a00c927-1c7f-4af5-9337-d6e58800dc3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.167 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1a00c927-1c7f-4af5-9337-d6e58800dc3c in datapath b7bba2fe-699d-4423-a6e2-09604625a8f5 bound to our chassis#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.168 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b7bba2fe-699d-4423-a6e2-09604625a8f5#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.180 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e644bdeb-1b0a-4fd3-8084-8450aed965a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.181 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb7bba2fe-61 in ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.183 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb7bba2fe-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.183 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fd542ad7-6990-4744-83ac-9e516e184e8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.183 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9604580e-c702-4686-9765-e023a421cea7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:31 np0005558241 systemd-machined[210538]: New machine qemu-130-instance-00000069.
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.197 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e3cdbe-70fd-4c9e-853f-4d65c6d7a72c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:31 np0005558241 systemd[1]: Started Virtual Machine qemu-130-instance-00000069.
Dec 13 03:47:31 np0005558241 nova_compute[248510]: 2025-12-13 08:47:31.220 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.222 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3af3f93a-25e9-46b5-a1a9-31bf75ed559b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:31 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:31Z|01016|binding|INFO|Setting lport 1a00c927-1c7f-4af5-9337-d6e58800dc3c ovn-installed in OVS
Dec 13 03:47:31 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:31Z|01017|binding|INFO|Setting lport 1a00c927-1c7f-4af5-9337-d6e58800dc3c up in Southbound
Dec 13 03:47:31 np0005558241 systemd-udevd[351210]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:47:31 np0005558241 nova_compute[248510]: 2025-12-13 08:47:31.226 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:31 np0005558241 NetworkManager[50376]: <info>  [1765615651.2425] device (tap1a00c927-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:47:31 np0005558241 NetworkManager[50376]: <info>  [1765615651.2436] device (tap1a00c927-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.254 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[26b7320e-3c76-4ccb-9baf-3bbadb921076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:31 np0005558241 NetworkManager[50376]: <info>  [1765615651.2606] manager: (tapb7bba2fe-60): new Veth device (/org/freedesktop/NetworkManager/Devices/419)
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.259 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b3fffa-0dfb-438f-882e-6c7ac4365c28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.296 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bc11b992-5b17-47ca-9b99-2b0cf3817630]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.299 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[83dcb12c-a990-4be5-b2a3-ff4410ff9bc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:31 np0005558241 NetworkManager[50376]: <info>  [1765615651.3304] device (tapb7bba2fe-60): carrier: link connected
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.337 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[86723ff8-4d94-4683-98e7-403dd1e8282d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.356 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f4890f25-5c0a-4c22-a642-1bca1c6703b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb7bba2fe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:f5:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 301], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822854, 'reachable_time': 33108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351240, 'error': None, 'target': 'ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.374 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[70a6dc1e-d951-470c-94cb-544fe43822a6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:f5f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 822854, 'tstamp': 822854}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351241, 'error': None, 'target': 'ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.394 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4430bf70-8f9c-4b67-9bf4-4e2350b34e07]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb7bba2fe-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:f5:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 301], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822854, 'reachable_time': 33108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351242, 'error': None, 'target': 'ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.432 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[60ee3e0f-8c81-458b-b3e6-50db2d23cf92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.514 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b8122516-7c5c-47a6-bb8b-f19f391297f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.516 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7bba2fe-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.517 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.518 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7bba2fe-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:31 np0005558241 NetworkManager[50376]: <info>  [1765615651.5214] manager: (tapb7bba2fe-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/420)
Dec 13 03:47:31 np0005558241 nova_compute[248510]: 2025-12-13 08:47:31.520 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:31 np0005558241 kernel: tapb7bba2fe-60: entered promiscuous mode
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.528 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb7bba2fe-60, col_values=(('external_ids', {'iface-id': 'd344c035-f22d-4b1b-95ff-ed098d3a946c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:31 np0005558241 nova_compute[248510]: 2025-12-13 08:47:31.530 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:31 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:31Z|01018|binding|INFO|Releasing lport d344c035-f22d-4b1b-95ff-ed098d3a946c from this chassis (sb_readonly=0)
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.534 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b7bba2fe-699d-4423-a6e2-09604625a8f5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b7bba2fe-699d-4423-a6e2-09604625a8f5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.535 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c677640f-833d-49f7-b3c4-41648ce03c0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.537 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-b7bba2fe-699d-4423-a6e2-09604625a8f5
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/b7bba2fe-699d-4423-a6e2-09604625a8f5.pid.haproxy
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID b7bba2fe-699d-4423-a6e2-09604625a8f5
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:47:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:31.537 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5', 'env', 'PROCESS_TAG=haproxy-b7bba2fe-699d-4423-a6e2-09604625a8f5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b7bba2fe-699d-4423-a6e2-09604625a8f5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:47:31 np0005558241 nova_compute[248510]: 2025-12-13 08:47:31.546 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:31 np0005558241 nova_compute[248510]: 2025-12-13 08:47:31.627 248514 DEBUG nova.compute.manager [req-0b9e67e0-036c-4611-933e-9205444bf9b9 req-8014a17b-891a-4eae-a722-0c06c895cf09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Received event network-vif-plugged-1a00c927-1c7f-4af5-9337-d6e58800dc3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:47:31 np0005558241 nova_compute[248510]: 2025-12-13 08:47:31.628 248514 DEBUG oslo_concurrency.lockutils [req-0b9e67e0-036c-4611-933e-9205444bf9b9 req-8014a17b-891a-4eae-a722-0c06c895cf09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:47:31 np0005558241 nova_compute[248510]: 2025-12-13 08:47:31.629 248514 DEBUG oslo_concurrency.lockutils [req-0b9e67e0-036c-4611-933e-9205444bf9b9 req-8014a17b-891a-4eae-a722-0c06c895cf09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:47:31 np0005558241 nova_compute[248510]: 2025-12-13 08:47:31.629 248514 DEBUG oslo_concurrency.lockutils [req-0b9e67e0-036c-4611-933e-9205444bf9b9 req-8014a17b-891a-4eae-a722-0c06c895cf09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:47:31 np0005558241 nova_compute[248510]: 2025-12-13 08:47:31.629 248514 DEBUG nova.compute.manager [req-0b9e67e0-036c-4611-933e-9205444bf9b9 req-8014a17b-891a-4eae-a722-0c06c895cf09 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Processing event network-vif-plugged-1a00c927-1c7f-4af5-9337-d6e58800dc3c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:47:31 np0005558241 nova_compute[248510]: 2025-12-13 08:47:31.738 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2554: 321 pgs: 321 active+clean; 167 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.5 MiB/s wr, 144 op/s
Dec 13 03:47:31 np0005558241 podman[351287]: 2025-12-13 08:47:31.89612744 +0000 UTC m=+0.023681675 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.102 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615652.1016548, 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.102 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] VM Started (Lifecycle Event)#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.104 248514 DEBUG nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.109 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.112 248514 INFO nova.virt.libvirt.driver [-] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Instance spawned successfully.#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.112 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.137 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.137 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.138 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.138 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.139 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.140 248514 DEBUG nova.virt.libvirt.driver [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.149 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.160 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:47:32 np0005558241 podman[351287]: 2025-12-13 08:47:32.17386896 +0000 UTC m=+0.301423165 container create 63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.188 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.189 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615652.1018097, 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.189 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.226 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.231 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615652.1075459, 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.232 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.239 248514 INFO nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Took 12.15 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.239 248514 DEBUG nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.261 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.269 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:47:32 np0005558241 systemd[1]: Started libpod-conmon-63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5.scope.
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.317 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:47:32 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:47:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27792fd296ec41c0b07a50cc3d8774b88ded6a613398922a1314c6ce563b646f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.353 248514 INFO nova.compute.manager [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Took 13.35 seconds to build instance.#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.380 248514 DEBUG oslo_concurrency.lockutils [None req-63fa87a0-c0e9-4742-a142-9c8b7c1f4078 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.461s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:47:32 np0005558241 podman[351287]: 2025-12-13 08:47:32.444514082 +0000 UTC m=+0.572068287 container init 63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 03:47:32 np0005558241 podman[351287]: 2025-12-13 08:47:32.450915983 +0000 UTC m=+0.578470188 container start 63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 03:47:32 np0005558241 neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5[351329]: [NOTICE]   (351333) : New worker (351335) forked
Dec 13 03:47:32 np0005558241 neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5[351329]: [NOTICE]   (351333) : Loading success.
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.866 248514 DEBUG nova.objects.instance [None req-8cb325ee-f759-4832-ac11-e8a30850aa6f 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'pci_devices' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.902 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615652.9024022, c3fb322f-a9db-4396-b659-2307698e5524 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.903 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.928 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.934 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:47:32 np0005558241 nova_compute[248510]: 2025-12-13 08:47:32.959 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Dec 13 03:47:33 np0005558241 nova_compute[248510]: 2025-12-13 08:47:33.862 248514 DEBUG nova.compute.manager [req-7bc6caba-2bfc-45b3-816e-c462fb16ddfc req-12210518-6963-4e73-978c-fcf2efc6efcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Received event network-vif-plugged-1a00c927-1c7f-4af5-9337-d6e58800dc3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:47:33 np0005558241 nova_compute[248510]: 2025-12-13 08:47:33.862 248514 DEBUG oslo_concurrency.lockutils [req-7bc6caba-2bfc-45b3-816e-c462fb16ddfc req-12210518-6963-4e73-978c-fcf2efc6efcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:47:33 np0005558241 nova_compute[248510]: 2025-12-13 08:47:33.862 248514 DEBUG oslo_concurrency.lockutils [req-7bc6caba-2bfc-45b3-816e-c462fb16ddfc req-12210518-6963-4e73-978c-fcf2efc6efcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:47:33 np0005558241 nova_compute[248510]: 2025-12-13 08:47:33.863 248514 DEBUG oslo_concurrency.lockutils [req-7bc6caba-2bfc-45b3-816e-c462fb16ddfc req-12210518-6963-4e73-978c-fcf2efc6efcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:47:33 np0005558241 nova_compute[248510]: 2025-12-13 08:47:33.863 248514 DEBUG nova.compute.manager [req-7bc6caba-2bfc-45b3-816e-c462fb16ddfc req-12210518-6963-4e73-978c-fcf2efc6efcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] No waiting events found dispatching network-vif-plugged-1a00c927-1c7f-4af5-9337-d6e58800dc3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:47:33 np0005558241 nova_compute[248510]: 2025-12-13 08:47:33.863 248514 WARNING nova.compute.manager [req-7bc6caba-2bfc-45b3-816e-c462fb16ddfc req-12210518-6963-4e73-978c-fcf2efc6efcb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Received unexpected event network-vif-plugged-1a00c927-1c7f-4af5-9337-d6e58800dc3c for instance with vm_state active and task_state None.#033[00m
Dec 13 03:47:33 np0005558241 nova_compute[248510]: 2025-12-13 08:47:33.898 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2555: 321 pgs: 321 active+clean; 167 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 298 KiB/s wr, 135 op/s
Dec 13 03:47:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:47:35 np0005558241 kernel: tap2d164f50-a5 (unregistering): left promiscuous mode
Dec 13 03:47:35 np0005558241 NetworkManager[50376]: <info>  [1765615655.3270] device (tap2d164f50-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:47:35 np0005558241 nova_compute[248510]: 2025-12-13 08:47:35.345 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:35Z|01019|binding|INFO|Releasing lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 from this chassis (sb_readonly=0)
Dec 13 03:47:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:35Z|01020|binding|INFO|Setting lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 down in Southbound
Dec 13 03:47:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:35Z|01021|binding|INFO|Removing iface tap2d164f50-a5 ovn-installed in OVS
Dec 13 03:47:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:35.354 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:39:62 10.100.0.6'], port_security=['fa:16:3e:e2:39:62 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c3fb322f-a9db-4396-b659-2307698e5524', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2d164f50-a56a-4eaf-ad60-84274a0eb413) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:47:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:35.355 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2d164f50-a56a-4eaf-ad60-84274a0eb413 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b unbound from our chassis#033[00m
Dec 13 03:47:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:35.357 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:47:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:35.358 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[23d463a6-ecd0-42b9-bdf8-4c3ecba1e749]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:35.359 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b namespace which is not needed anymore#033[00m
Dec 13 03:47:35 np0005558241 nova_compute[248510]: 2025-12-13 08:47:35.362 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:35 np0005558241 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000067.scope: Deactivated successfully.
Dec 13 03:47:35 np0005558241 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000067.scope: Consumed 11.550s CPU time.
Dec 13 03:47:35 np0005558241 systemd-machined[210538]: Machine qemu-129-instance-00000067 terminated.
Dec 13 03:47:35 np0005558241 nova_compute[248510]: 2025-12-13 08:47:35.548 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:35 np0005558241 nova_compute[248510]: 2025-12-13 08:47:35.555 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:35 np0005558241 nova_compute[248510]: 2025-12-13 08:47:35.561 248514 DEBUG nova.compute.manager [None req-8cb325ee-f759-4832-ac11-e8a30850aa6f 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:47:35 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[350868]: [NOTICE]   (350872) : haproxy version is 2.8.14-c23fe91
Dec 13 03:47:35 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[350868]: [NOTICE]   (350872) : path to executable is /usr/sbin/haproxy
Dec 13 03:47:35 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[350868]: [WARNING]  (350872) : Exiting Master process...
Dec 13 03:47:35 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[350868]: [WARNING]  (350872) : Exiting Master process...
Dec 13 03:47:35 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[350868]: [ALERT]    (350872) : Current worker (350874) exited with code 143 (Terminated)
Dec 13 03:47:35 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[350868]: [WARNING]  (350872) : All workers exited. Exiting... (0)
Dec 13 03:47:35 np0005558241 systemd[1]: libpod-14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81.scope: Deactivated successfully.
Dec 13 03:47:35 np0005558241 podman[351371]: 2025-12-13 08:47:35.640999517 +0000 UTC m=+0.186314786 container died 14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:47:35 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81-userdata-shm.mount: Deactivated successfully.
Dec 13 03:47:35 np0005558241 systemd[1]: var-lib-containers-storage-overlay-bd1c5e1c88792b2cd7c895939660c80ec1d136af36d046687f8916aa03ba3455-merged.mount: Deactivated successfully.
Dec 13 03:47:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2556: 321 pgs: 321 active+clean; 167 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 103 op/s
Dec 13 03:47:36 np0005558241 nova_compute[248510]: 2025-12-13 08:47:36.019 248514 DEBUG nova.compute.manager [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-unplugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:47:36 np0005558241 nova_compute[248510]: 2025-12-13 08:47:36.019 248514 DEBUG oslo_concurrency.lockutils [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:47:36 np0005558241 nova_compute[248510]: 2025-12-13 08:47:36.020 248514 DEBUG oslo_concurrency.lockutils [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:47:36 np0005558241 nova_compute[248510]: 2025-12-13 08:47:36.020 248514 DEBUG oslo_concurrency.lockutils [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:47:36 np0005558241 nova_compute[248510]: 2025-12-13 08:47:36.020 248514 DEBUG nova.compute.manager [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] No waiting events found dispatching network-vif-unplugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:47:36 np0005558241 nova_compute[248510]: 2025-12-13 08:47:36.020 248514 WARNING nova.compute.manager [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received unexpected event network-vif-unplugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 for instance with vm_state suspended and task_state None.#033[00m
Dec 13 03:47:36 np0005558241 nova_compute[248510]: 2025-12-13 08:47:36.020 248514 DEBUG nova.compute.manager [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:47:36 np0005558241 nova_compute[248510]: 2025-12-13 08:47:36.021 248514 DEBUG oslo_concurrency.lockutils [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:47:36 np0005558241 nova_compute[248510]: 2025-12-13 08:47:36.021 248514 DEBUG oslo_concurrency.lockutils [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:47:36 np0005558241 nova_compute[248510]: 2025-12-13 08:47:36.021 248514 DEBUG oslo_concurrency.lockutils [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:47:36 np0005558241 nova_compute[248510]: 2025-12-13 08:47:36.021 248514 DEBUG nova.compute.manager [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] No waiting events found dispatching network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:47:36 np0005558241 nova_compute[248510]: 2025-12-13 08:47:36.022 248514 WARNING nova.compute.manager [req-9387478f-975a-42f5-995c-6a16c05f2e37 req-d6d6b02c-2cc7-485c-9430-16149277078f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received unexpected event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 for instance with vm_state suspended and task_state None.#033[00m
Dec 13 03:47:36 np0005558241 podman[351371]: 2025-12-13 08:47:36.63773469 +0000 UTC m=+1.183049959 container cleanup 14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:47:36 np0005558241 systemd[1]: libpod-conmon-14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81.scope: Deactivated successfully.
Dec 13 03:47:36 np0005558241 nova_compute[248510]: 2025-12-13 08:47:36.740 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:37 np0005558241 podman[351410]: 2025-12-13 08:47:37.482604123 +0000 UTC m=+0.821106627 container remove 14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 03:47:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:37.490 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb7ce44-0a5c-46d1-9925-d9015331932c]: (4, ('Sat Dec 13 08:47:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b (14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81)\n14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81\nSat Dec 13 08:47:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b (14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81)\n14ea5b392de974c50ae29d917c9d9501fcb243d5a5cec80d64f5deb22dadbd81\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:37.492 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bb68b473-70ec-4c94-a71c-04a633cdbba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:37.493 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:37 np0005558241 nova_compute[248510]: 2025-12-13 08:47:37.494 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:37 np0005558241 kernel: tap6ad7f755-f0: left promiscuous mode
Dec 13 03:47:37 np0005558241 nova_compute[248510]: 2025-12-13 08:47:37.512 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:37.516 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ca530a-f3b3-41e7-83ac-a19c6a53ac26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:37.539 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eb078e37-affb-4893-b7d2-02453f38c093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:37.541 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[75edd375-4a23-4115-8b12-8de5cfa826ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:37.557 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4e07c10c-45c1-4de9-b780-13b9831d9ffe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 821473, 'reachable_time': 36630, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351428, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:37 np0005558241 systemd[1]: run-netns-ovnmeta\x2d6ad7f755\x2dfa29\x2d40dd\x2d89c4\x2d988d0a51cf9b.mount: Deactivated successfully.
Dec 13 03:47:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:37.560 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:47:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:37.560 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[d5ca0a82-788d-49de-ac89-836111ac0e85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2557: 321 pgs: 321 active+clean; 167 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 90 op/s
Dec 13 03:47:38 np0005558241 nova_compute[248510]: 2025-12-13 08:47:38.571 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:38 np0005558241 NetworkManager[50376]: <info>  [1765615658.5738] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Dec 13 03:47:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:38Z|01022|binding|INFO|Releasing lport d344c035-f22d-4b1b-95ff-ed098d3a946c from this chassis (sb_readonly=0)
Dec 13 03:47:38 np0005558241 NetworkManager[50376]: <info>  [1765615658.5805] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/422)
Dec 13 03:47:38 np0005558241 nova_compute[248510]: 2025-12-13 08:47:38.618 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:38 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:38Z|01023|binding|INFO|Releasing lport d344c035-f22d-4b1b-95ff-ed098d3a946c from this chassis (sb_readonly=0)
Dec 13 03:47:38 np0005558241 nova_compute[248510]: 2025-12-13 08:47:38.626 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:38 np0005558241 nova_compute[248510]: 2025-12-13 08:47:38.901 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:47:39 np0005558241 nova_compute[248510]: 2025-12-13 08:47:39.378 248514 INFO nova.compute.manager [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Resuming#033[00m
Dec 13 03:47:39 np0005558241 nova_compute[248510]: 2025-12-13 08:47:39.379 248514 DEBUG nova.objects.instance [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'flavor' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:47:39 np0005558241 nova_compute[248510]: 2025-12-13 08:47:39.426 248514 DEBUG oslo_concurrency.lockutils [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:47:39 np0005558241 nova_compute[248510]: 2025-12-13 08:47:39.427 248514 DEBUG oslo_concurrency.lockutils [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquired lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:47:39 np0005558241 nova_compute[248510]: 2025-12-13 08:47:39.427 248514 DEBUG nova.network.neutron [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:47:39 np0005558241 nova_compute[248510]: 2025-12-13 08:47:39.929 248514 DEBUG nova.compute.manager [req-beaf0ac2-be48-473d-a405-e66e0832a515 req-e8bf523b-ff1a-47fa-84a5-0fe486230fd8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Received event network-changed-1a00c927-1c7f-4af5-9337-d6e58800dc3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:47:39 np0005558241 nova_compute[248510]: 2025-12-13 08:47:39.931 248514 DEBUG nova.compute.manager [req-beaf0ac2-be48-473d-a405-e66e0832a515 req-e8bf523b-ff1a-47fa-84a5-0fe486230fd8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Refreshing instance network info cache due to event network-changed-1a00c927-1c7f-4af5-9337-d6e58800dc3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:47:39 np0005558241 nova_compute[248510]: 2025-12-13 08:47:39.931 248514 DEBUG oslo_concurrency.lockutils [req-beaf0ac2-be48-473d-a405-e66e0832a515 req-e8bf523b-ff1a-47fa-84a5-0fe486230fd8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:47:39 np0005558241 nova_compute[248510]: 2025-12-13 08:47:39.932 248514 DEBUG oslo_concurrency.lockutils [req-beaf0ac2-be48-473d-a405-e66e0832a515 req-e8bf523b-ff1a-47fa-84a5-0fe486230fd8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:47:39 np0005558241 nova_compute[248510]: 2025-12-13 08:47:39.932 248514 DEBUG nova.network.neutron [req-beaf0ac2-be48-473d-a405-e66e0832a515 req-e8bf523b-ff1a-47fa-84a5-0fe486230fd8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Refreshing network info cache for port 1a00c927-1c7f-4af5-9337-d6e58800dc3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:47:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2558: 321 pgs: 321 active+clean; 167 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 84 op/s
Dec 13 03:47:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:47:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:47:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:47:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:47:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:47:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:47:41 np0005558241 nova_compute[248510]: 2025-12-13 08:47:41.542 248514 DEBUG nova.network.neutron [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating instance_info_cache with network_info: [{"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:47:41 np0005558241 nova_compute[248510]: 2025-12-13 08:47:41.568 248514 DEBUG oslo_concurrency.lockutils [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Releasing lock "refresh_cache-c3fb322f-a9db-4396-b659-2307698e5524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:47:41 np0005558241 nova_compute[248510]: 2025-12-13 08:47:41.576 248514 DEBUG nova.virt.libvirt.vif [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-235457723',display_name='tempest-ServersNegativeTestJSON-server-235457723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-235457723',id=103,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:47:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-u6zqzes4',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-ServersNegativeTestJSON-1471623163-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:47:35Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=c3fb322f-a9db-4396-b659-2307698e5524,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:47:41 np0005558241 nova_compute[248510]: 2025-12-13 08:47:41.578 248514 DEBUG nova.network.os_vif_util [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:47:41 np0005558241 nova_compute[248510]: 2025-12-13 08:47:41.580 248514 DEBUG nova.network.os_vif_util [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:47:41 np0005558241 nova_compute[248510]: 2025-12-13 08:47:41.581 248514 DEBUG os_vif [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:47:41 np0005558241 nova_compute[248510]: 2025-12-13 08:47:41.583 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:41 np0005558241 nova_compute[248510]: 2025-12-13 08:47:41.584 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:41 np0005558241 nova_compute[248510]: 2025-12-13 08:47:41.584 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:47:41 np0005558241 nova_compute[248510]: 2025-12-13 08:47:41.588 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:41 np0005558241 nova_compute[248510]: 2025-12-13 08:47:41.589 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d164f50-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:41 np0005558241 nova_compute[248510]: 2025-12-13 08:47:41.589 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d164f50-a5, col_values=(('external_ids', {'iface-id': '2d164f50-a56a-4eaf-ad60-84274a0eb413', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:39:62', 'vm-uuid': 'c3fb322f-a9db-4396-b659-2307698e5524'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:41 np0005558241 nova_compute[248510]: 2025-12-13 08:47:41.590 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:47:41 np0005558241 nova_compute[248510]: 2025-12-13 08:47:41.591 248514 INFO os_vif [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5')#033[00m
Dec 13 03:47:41 np0005558241 nova_compute[248510]: 2025-12-13 08:47:41.742 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:41 np0005558241 nova_compute[248510]: 2025-12-13 08:47:41.789 248514 DEBUG nova.objects.instance [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'numa_topology' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:47:41 np0005558241 kernel: tap2d164f50-a5: entered promiscuous mode
Dec 13 03:47:41 np0005558241 NetworkManager[50376]: <info>  [1765615661.8807] manager: (tap2d164f50-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/423)
Dec 13 03:47:41 np0005558241 nova_compute[248510]: 2025-12-13 08:47:41.885 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:41Z|01024|binding|INFO|Claiming lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 for this chassis.
Dec 13 03:47:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:41Z|01025|binding|INFO|2d164f50-a56a-4eaf-ad60-84274a0eb413: Claiming fa:16:3e:e2:39:62 10.100.0.6
Dec 13 03:47:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.895 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:39:62 10.100.0.6'], port_security=['fa:16:3e:e2:39:62 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c3fb322f-a9db-4396-b659-2307698e5524', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2d164f50-a56a-4eaf-ad60-84274a0eb413) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:47:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.896 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2d164f50-a56a-4eaf-ad60-84274a0eb413 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b bound to our chassis#033[00m
Dec 13 03:47:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.898 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b#033[00m
Dec 13 03:47:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:41Z|01026|binding|INFO|Setting lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 ovn-installed in OVS
Dec 13 03:47:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:41Z|01027|binding|INFO|Setting lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 up in Southbound
Dec 13 03:47:41 np0005558241 nova_compute[248510]: 2025-12-13 08:47:41.903 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:41 np0005558241 nova_compute[248510]: 2025-12-13 08:47:41.906 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.910 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e94671de-60d5-44ca-bf55-ec170bdeb49f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.912 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6ad7f755-f1 in ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:47:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.915 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6ad7f755-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:47:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.916 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b73b22fe-1f2d-4d84-9259-b1be862d85ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:41 np0005558241 systemd-udevd[351445]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:47:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.921 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e0ad63-d6fb-4ef4-b2fd-544aef79f591]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:41 np0005558241 systemd-machined[210538]: New machine qemu-131-instance-00000067.
Dec 13 03:47:41 np0005558241 NetworkManager[50376]: <info>  [1765615661.9336] device (tap2d164f50-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:47:41 np0005558241 NetworkManager[50376]: <info>  [1765615661.9350] device (tap2d164f50-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:47:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.937 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b0bdded4-801c-491d-8dbe-a3cd1b87b21f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:41 np0005558241 systemd[1]: Started Virtual Machine qemu-131-instance-00000067.
Dec 13 03:47:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.957 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a27ee49b-0cdf-4f36-9e63-13a1d3caf878]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:41.993 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e2870624-1bf8-4c4a-a22a-dc62203c80f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.000 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[683b76d3-439c-42f2-aa1c-bfb12e29814f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:42 np0005558241 NetworkManager[50376]: <info>  [1765615662.0017] manager: (tap6ad7f755-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/424)
Dec 13 03:47:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2559: 321 pgs: 321 active+clean; 167 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.046 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[da029a23-e7a7-4f32-861b-8ab254a020ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.051 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0ce27be8-65f7-4ba3-9763-c19b0daf1e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:42 np0005558241 NetworkManager[50376]: <info>  [1765615662.1058] device (tap6ad7f755-f0): carrier: link connected
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.113 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[224bd1f2-2ee8-466d-bb57-ed0101ca0049]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.141 248514 DEBUG nova.compute.manager [req-f3f65767-c2b3-46cd-993f-d7174fc19e9b req-8790064e-6e0e-402c-b115-1f185862a2c9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.141 248514 DEBUG oslo_concurrency.lockutils [req-f3f65767-c2b3-46cd-993f-d7174fc19e9b req-8790064e-6e0e-402c-b115-1f185862a2c9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.141 248514 DEBUG oslo_concurrency.lockutils [req-f3f65767-c2b3-46cd-993f-d7174fc19e9b req-8790064e-6e0e-402c-b115-1f185862a2c9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.141 248514 DEBUG oslo_concurrency.lockutils [req-f3f65767-c2b3-46cd-993f-d7174fc19e9b req-8790064e-6e0e-402c-b115-1f185862a2c9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.141 248514 DEBUG nova.compute.manager [req-f3f65767-c2b3-46cd-993f-d7174fc19e9b req-8790064e-6e0e-402c-b115-1f185862a2c9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] No waiting events found dispatching network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.142 248514 WARNING nova.compute.manager [req-f3f65767-c2b3-46cd-993f-d7174fc19e9b req-8790064e-6e0e-402c-b115-1f185862a2c9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received unexpected event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 for instance with vm_state suspended and task_state resuming.#033[00m
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.145 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f7bd69c2-4da2-4d81-b2be-22d153999619]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 304], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823932, 'reachable_time': 32706, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351502, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.168 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0855daec-c0f4-4154-ac6a-c3aeb1fe84c1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe26:35ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 823932, 'tstamp': 823932}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351509, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.194 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e02a94-79f6-4f21-ad38-85e7b1146992]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6ad7f755-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:35:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 304], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823932, 'reachable_time': 32706, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351523, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.244 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ae9e61c0-2031-4462-80a4-9775ca3151af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.317 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7f119b79-cdeb-46e6-9f32-9f3c7bf1bba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.320 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.321 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.321 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ad7f755-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.324 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:42 np0005558241 NetworkManager[50376]: <info>  [1765615662.3249] manager: (tap6ad7f755-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Dec 13 03:47:42 np0005558241 kernel: tap6ad7f755-f0: entered promiscuous mode
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.329 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.337 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6ad7f755-f0, col_values=(('external_ids', {'iface-id': '683d7da0-6f1e-41a6-9158-6204fb05ee50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.338 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:42 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:42Z|01028|binding|INFO|Releasing lport 683d7da0-6f1e-41a6-9158-6204fb05ee50 from this chassis (sb_readonly=0)
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.339 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.340 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.352 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.353 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[49a99b30-6087-43b6-9824-764e9a0c2e1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.354 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-6ad7f755-fa29-40dd-89c4-988d0a51cf9b
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.pid.haproxy
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 6ad7f755-fa29-40dd-89c4-988d0a51cf9b
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:47:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:42.355 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'env', 'PROCESS_TAG=haproxy-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6ad7f755-fa29-40dd-89c4-988d0a51cf9b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.691 248514 DEBUG nova.network.neutron [req-beaf0ac2-be48-473d-a405-e66e0832a515 req-e8bf523b-ff1a-47fa-84a5-0fe486230fd8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updated VIF entry in instance network info cache for port 1a00c927-1c7f-4af5-9337-d6e58800dc3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.693 248514 DEBUG nova.network.neutron [req-beaf0ac2-be48-473d-a405-e66e0832a515 req-e8bf523b-ff1a-47fa-84a5-0fe486230fd8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updating instance_info_cache with network_info: [{"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.737 248514 DEBUG oslo_concurrency.lockutils [req-beaf0ac2-be48-473d-a405-e66e0832a515 req-e8bf523b-ff1a-47fa-84a5-0fe486230fd8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:47:42 np0005558241 podman[351624]: 2025-12-13 08:47:42.773783254 +0000 UTC m=+0.101337744 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 03:47:42 np0005558241 podman[351655]: 2025-12-13 08:47:42.818398543 +0000 UTC m=+0.071585307 container create 0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 03:47:42 np0005558241 podman[351655]: 2025-12-13 08:47:42.771037315 +0000 UTC m=+0.024224109 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:47:42 np0005558241 systemd[1]: Started libpod-conmon-0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771.scope.
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.889 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for c3fb322f-a9db-4396-b659-2307698e5524 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.892 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615662.888831, c3fb322f-a9db-4396-b659-2307698e5524 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.892 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Started (Lifecycle Event)#033[00m
Dec 13 03:47:42 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:47:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2d7bff872b6cdff54ab27f3dd8aa237d58f134c1191165160b971ebebd88a7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:42 np0005558241 podman[351655]: 2025-12-13 08:47:42.921648084 +0000 UTC m=+0.174834858 container init 0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:47:42 np0005558241 podman[351624]: 2025-12-13 08:47:42.924137207 +0000 UTC m=+0.251691697 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.932 248514 DEBUG nova.compute.manager [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.934 248514 DEBUG nova.objects.instance [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'pci_devices' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:47:42 np0005558241 podman[351655]: 2025-12-13 08:47:42.936015465 +0000 UTC m=+0.189202229 container start 0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.951 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.957 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:47:42 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[351676]: [NOTICE]   (351689) : New worker (351691) forked
Dec 13 03:47:42 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[351676]: [NOTICE]   (351689) : Loading success.
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.976 248514 INFO nova.virt.libvirt.driver [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance running successfully.#033[00m
Dec 13 03:47:42 np0005558241 virtqemud[248808]: argument unsupported: QEMU guest agent is not configured
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.984 248514 DEBUG nova.virt.libvirt.guest [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.984 248514 DEBUG nova.compute.manager [None req-2c81d923-63df-4e5f-be54-bbe3f5bc2421 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.995 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.997 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615662.8959033, c3fb322f-a9db-4396-b659-2307698e5524 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:47:42 np0005558241 nova_compute[248510]: 2025-12-13 08:47:42.997 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:47:43 np0005558241 nova_compute[248510]: 2025-12-13 08:47:43.060 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:47:43 np0005558241 nova_compute[248510]: 2025-12-13 08:47:43.068 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:47:43 np0005558241 nova_compute[248510]: 2025-12-13 08:47:43.903 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:47:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:47:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:47:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:47:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2560: 321 pgs: 321 active+clean; 174 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 529 KiB/s wr, 109 op/s
Dec 13 03:47:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:47:44 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:44Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:39:62 10.100.0.6
Dec 13 03:47:44 np0005558241 nova_compute[248510]: 2025-12-13 08:47:44.524 248514 DEBUG nova.compute.manager [req-c425d6ab-6cda-437b-90f6-ea487b3a0642 req-33b0df5f-6897-4941-80ae-bc8e953f0104 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:47:44 np0005558241 nova_compute[248510]: 2025-12-13 08:47:44.525 248514 DEBUG oslo_concurrency.lockutils [req-c425d6ab-6cda-437b-90f6-ea487b3a0642 req-33b0df5f-6897-4941-80ae-bc8e953f0104 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:47:44 np0005558241 nova_compute[248510]: 2025-12-13 08:47:44.525 248514 DEBUG oslo_concurrency.lockutils [req-c425d6ab-6cda-437b-90f6-ea487b3a0642 req-33b0df5f-6897-4941-80ae-bc8e953f0104 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:47:44 np0005558241 nova_compute[248510]: 2025-12-13 08:47:44.525 248514 DEBUG oslo_concurrency.lockutils [req-c425d6ab-6cda-437b-90f6-ea487b3a0642 req-33b0df5f-6897-4941-80ae-bc8e953f0104 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:47:44 np0005558241 nova_compute[248510]: 2025-12-13 08:47:44.526 248514 DEBUG nova.compute.manager [req-c425d6ab-6cda-437b-90f6-ea487b3a0642 req-33b0df5f-6897-4941-80ae-bc8e953f0104 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] No waiting events found dispatching network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:47:44 np0005558241 nova_compute[248510]: 2025-12-13 08:47:44.526 248514 WARNING nova.compute.manager [req-c425d6ab-6cda-437b-90f6-ea487b3a0642 req-33b0df5f-6897-4941-80ae-bc8e953f0104 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received unexpected event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:47:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:47:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:47:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:47:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:47:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:47:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:47:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:47:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:47:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:47:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:47:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:47:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:47:45 np0005558241 podman[351998]: 2025-12-13 08:47:45.13868221 +0000 UTC m=+0.022709861 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:47:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:47:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:47:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:47:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:47:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:47:45 np0005558241 podman[351998]: 2025-12-13 08:47:45.361921283 +0000 UTC m=+0.245948914 container create db055a2f5b5ca073ba82094843ed79600b6346777f6add0ffa65266b9edd600a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brattain, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:47:45 np0005558241 systemd[1]: Started libpod-conmon-db055a2f5b5ca073ba82094843ed79600b6346777f6add0ffa65266b9edd600a.scope.
Dec 13 03:47:45 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:47:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:45Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:2b:42 10.100.0.9
Dec 13 03:47:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:47:45Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:2b:42 10.100.0.9
Dec 13 03:47:45 np0005558241 podman[351998]: 2025-12-13 08:47:45.648578816 +0000 UTC m=+0.532606467 container init db055a2f5b5ca073ba82094843ed79600b6346777f6add0ffa65266b9edd600a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec 13 03:47:45 np0005558241 podman[351998]: 2025-12-13 08:47:45.655168522 +0000 UTC m=+0.539196153 container start db055a2f5b5ca073ba82094843ed79600b6346777f6add0ffa65266b9edd600a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brattain, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:47:45 np0005558241 bold_brattain[352014]: 167 167
Dec 13 03:47:45 np0005558241 systemd[1]: libpod-db055a2f5b5ca073ba82094843ed79600b6346777f6add0ffa65266b9edd600a.scope: Deactivated successfully.
Dec 13 03:47:45 np0005558241 podman[351998]: 2025-12-13 08:47:45.690651912 +0000 UTC m=+0.574679573 container attach db055a2f5b5ca073ba82094843ed79600b6346777f6add0ffa65266b9edd600a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:47:45 np0005558241 podman[351998]: 2025-12-13 08:47:45.693159125 +0000 UTC m=+0.577186776 container died db055a2f5b5ca073ba82094843ed79600b6346777f6add0ffa65266b9edd600a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brattain, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:47:45 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c30d9fbbce8b58164be1ea382ea944e9a3ea9f3be9e8b59ed7f55edc72f75bdd-merged.mount: Deactivated successfully.
Dec 13 03:47:45 np0005558241 podman[351998]: 2025-12-13 08:47:45.9344196 +0000 UTC m=+0.818447231 container remove db055a2f5b5ca073ba82094843ed79600b6346777f6add0ffa65266b9edd600a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brattain, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 03:47:45 np0005558241 systemd[1]: libpod-conmon-db055a2f5b5ca073ba82094843ed79600b6346777f6add0ffa65266b9edd600a.scope: Deactivated successfully.
Dec 13 03:47:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2561: 321 pgs: 321 active+clean; 174 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 900 KiB/s rd, 517 KiB/s wr, 55 op/s
Dec 13 03:47:46 np0005558241 podman[352040]: 2025-12-13 08:47:46.138322556 +0000 UTC m=+0.047049291 container create cad315589342525cd9cee59136754a9970593ecc026a826dd42170f458eb8c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dirac, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:47:46 np0005558241 systemd[1]: Started libpod-conmon-cad315589342525cd9cee59136754a9970593ecc026a826dd42170f458eb8c8d.scope.
Dec 13 03:47:46 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:47:46 np0005558241 podman[352040]: 2025-12-13 08:47:46.118509529 +0000 UTC m=+0.027236294 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:47:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c68ba8d1d955ba286c624f56d78a42fbc2761e6b7c951aab1181a6bc93980e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c68ba8d1d955ba286c624f56d78a42fbc2761e6b7c951aab1181a6bc93980e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c68ba8d1d955ba286c624f56d78a42fbc2761e6b7c951aab1181a6bc93980e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c68ba8d1d955ba286c624f56d78a42fbc2761e6b7c951aab1181a6bc93980e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c68ba8d1d955ba286c624f56d78a42fbc2761e6b7c951aab1181a6bc93980e9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:46 np0005558241 podman[352040]: 2025-12-13 08:47:46.24247397 +0000 UTC m=+0.151200715 container init cad315589342525cd9cee59136754a9970593ecc026a826dd42170f458eb8c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 03:47:46 np0005558241 podman[352040]: 2025-12-13 08:47:46.25003386 +0000 UTC m=+0.158760595 container start cad315589342525cd9cee59136754a9970593ecc026a826dd42170f458eb8c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:47:46 np0005558241 podman[352040]: 2025-12-13 08:47:46.254202014 +0000 UTC m=+0.162928769 container attach cad315589342525cd9cee59136754a9970593ecc026a826dd42170f458eb8c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 03:47:46 np0005558241 cranky_dirac[352056]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:47:46 np0005558241 cranky_dirac[352056]: --> All data devices are unavailable
Dec 13 03:47:46 np0005558241 nova_compute[248510]: 2025-12-13 08:47:46.745 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:46 np0005558241 systemd[1]: libpod-cad315589342525cd9cee59136754a9970593ecc026a826dd42170f458eb8c8d.scope: Deactivated successfully.
Dec 13 03:47:46 np0005558241 podman[352040]: 2025-12-13 08:47:46.799790706 +0000 UTC m=+0.708517441 container died cad315589342525cd9cee59136754a9970593ecc026a826dd42170f458eb8c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dirac, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:47:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9c68ba8d1d955ba286c624f56d78a42fbc2761e6b7c951aab1181a6bc93980e9-merged.mount: Deactivated successfully.
Dec 13 03:47:46 np0005558241 podman[352040]: 2025-12-13 08:47:46.854054938 +0000 UTC m=+0.762781673 container remove cad315589342525cd9cee59136754a9970593ecc026a826dd42170f458eb8c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 03:47:46 np0005558241 systemd[1]: libpod-conmon-cad315589342525cd9cee59136754a9970593ecc026a826dd42170f458eb8c8d.scope: Deactivated successfully.
Dec 13 03:47:47 np0005558241 podman[352139]: 2025-12-13 08:47:47.142412434 +0000 UTC m=+0.070301755 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 03:47:47 np0005558241 podman[352137]: 2025-12-13 08:47:47.163199496 +0000 UTC m=+0.093899687 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 03:47:47 np0005558241 podman[352138]: 2025-12-13 08:47:47.163477643 +0000 UTC m=+0.092772959 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 03:47:47 np0005558241 podman[352212]: 2025-12-13 08:47:47.412166394 +0000 UTC m=+0.085559879 container create 5ed8c80d8668cb0f0a9261bace2b43334ba34734ad2452b6db41946675b4ec5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 03:47:47 np0005558241 podman[352212]: 2025-12-13 08:47:47.347731406 +0000 UTC m=+0.021124981 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:47:47 np0005558241 systemd[1]: Started libpod-conmon-5ed8c80d8668cb0f0a9261bace2b43334ba34734ad2452b6db41946675b4ec5a.scope.
Dec 13 03:47:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:47:47 np0005558241 podman[352212]: 2025-12-13 08:47:47.544058523 +0000 UTC m=+0.217452038 container init 5ed8c80d8668cb0f0a9261bace2b43334ba34734ad2452b6db41946675b4ec5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:47:47 np0005558241 podman[352212]: 2025-12-13 08:47:47.552573187 +0000 UTC m=+0.225966672 container start 5ed8c80d8668cb0f0a9261bace2b43334ba34734ad2452b6db41946675b4ec5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:47:47 np0005558241 podman[352212]: 2025-12-13 08:47:47.557225584 +0000 UTC m=+0.230619089 container attach 5ed8c80d8668cb0f0a9261bace2b43334ba34734ad2452b6db41946675b4ec5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 03:47:47 np0005558241 compassionate_ritchie[352228]: 167 167
Dec 13 03:47:47 np0005558241 systemd[1]: libpod-5ed8c80d8668cb0f0a9261bace2b43334ba34734ad2452b6db41946675b4ec5a.scope: Deactivated successfully.
Dec 13 03:47:47 np0005558241 podman[352212]: 2025-12-13 08:47:47.559198443 +0000 UTC m=+0.232591928 container died 5ed8c80d8668cb0f0a9261bace2b43334ba34734ad2452b6db41946675b4ec5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 03:47:47 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6752dfce27e47be13ed8d34d7e31ac603ad8064349764cef160a8df3ddb872a0-merged.mount: Deactivated successfully.
Dec 13 03:47:47 np0005558241 podman[352212]: 2025-12-13 08:47:47.605211978 +0000 UTC m=+0.278605463 container remove 5ed8c80d8668cb0f0a9261bace2b43334ba34734ad2452b6db41946675b4ec5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:47:47 np0005558241 systemd[1]: libpod-conmon-5ed8c80d8668cb0f0a9261bace2b43334ba34734ad2452b6db41946675b4ec5a.scope: Deactivated successfully.
Dec 13 03:47:47 np0005558241 podman[352254]: 2025-12-13 08:47:47.774966798 +0000 UTC m=+0.025252445 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:47:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2562: 321 pgs: 321 active+clean; 187 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 84 op/s
Dec 13 03:47:48 np0005558241 podman[352254]: 2025-12-13 08:47:48.247860405 +0000 UTC m=+0.498146032 container create 0b077ed73d7ad0e51f61b92bfeeb8bfce27cec691cd561c8e860b24f15651fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:47:48 np0005558241 systemd[1]: Started libpod-conmon-0b077ed73d7ad0e51f61b92bfeeb8bfce27cec691cd561c8e860b24f15651fae.scope.
Dec 13 03:47:48 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:47:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fda450b0af2f08bd92367c72ccfe302d32c7494900c45429213321833b9659d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fda450b0af2f08bd92367c72ccfe302d32c7494900c45429213321833b9659d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fda450b0af2f08bd92367c72ccfe302d32c7494900c45429213321833b9659d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fda450b0af2f08bd92367c72ccfe302d32c7494900c45429213321833b9659d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:48 np0005558241 podman[352254]: 2025-12-13 08:47:48.519977093 +0000 UTC m=+0.770262730 container init 0b077ed73d7ad0e51f61b92bfeeb8bfce27cec691cd561c8e860b24f15651fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:47:48 np0005558241 podman[352254]: 2025-12-13 08:47:48.526354283 +0000 UTC m=+0.776639910 container start 0b077ed73d7ad0e51f61b92bfeeb8bfce27cec691cd561c8e860b24f15651fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:47:48 np0005558241 podman[352254]: 2025-12-13 08:47:48.676561942 +0000 UTC m=+0.926847569 container attach 0b077ed73d7ad0e51f61b92bfeeb8bfce27cec691cd561c8e860b24f15651fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]: {
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:    "0": [
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:        {
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "devices": [
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "/dev/loop3"
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            ],
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "lv_name": "ceph_lv0",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "lv_size": "21470642176",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "name": "ceph_lv0",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "tags": {
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.cluster_name": "ceph",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.crush_device_class": "",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.encrypted": "0",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.objectstore": "bluestore",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.osd_id": "0",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.type": "block",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.vdo": "0",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.with_tpm": "0"
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            },
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "type": "block",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "vg_name": "ceph_vg0"
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:        }
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:    ],
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:    "1": [
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:        {
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "devices": [
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "/dev/loop4"
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            ],
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "lv_name": "ceph_lv1",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "lv_size": "21470642176",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "name": "ceph_lv1",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "tags": {
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.cluster_name": "ceph",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.crush_device_class": "",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.encrypted": "0",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.objectstore": "bluestore",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.osd_id": "1",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.type": "block",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.vdo": "0",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.with_tpm": "0"
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            },
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "type": "block",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "vg_name": "ceph_vg1"
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:        }
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:    ],
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:    "2": [
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:        {
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "devices": [
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "/dev/loop5"
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            ],
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "lv_name": "ceph_lv2",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "lv_size": "21470642176",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "name": "ceph_lv2",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "tags": {
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.cluster_name": "ceph",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.crush_device_class": "",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.encrypted": "0",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.objectstore": "bluestore",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.osd_id": "2",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.type": "block",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.vdo": "0",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:                "ceph.with_tpm": "0"
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            },
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "type": "block",
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:            "vg_name": "ceph_vg2"
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:        }
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]:    ]
Dec 13 03:47:48 np0005558241 vigorous_robinson[352270]: }
Dec 13 03:47:48 np0005558241 systemd[1]: libpod-0b077ed73d7ad0e51f61b92bfeeb8bfce27cec691cd561c8e860b24f15651fae.scope: Deactivated successfully.
Dec 13 03:47:48 np0005558241 podman[352254]: 2025-12-13 08:47:48.90122284 +0000 UTC m=+1.151508527 container died 0b077ed73d7ad0e51f61b92bfeeb8bfce27cec691cd561c8e860b24f15651fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:47:48 np0005558241 nova_compute[248510]: 2025-12-13 08:47:48.906 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:47:49 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1fda450b0af2f08bd92367c72ccfe302d32c7494900c45429213321833b9659d-merged.mount: Deactivated successfully.
Dec 13 03:47:49 np0005558241 podman[352254]: 2025-12-13 08:47:49.97552508 +0000 UTC m=+2.225810707 container remove 0b077ed73d7ad0e51f61b92bfeeb8bfce27cec691cd561c8e860b24f15651fae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 03:47:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2563: 321 pgs: 321 active+clean; 200 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 895 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Dec 13 03:47:50 np0005558241 systemd[1]: libpod-conmon-0b077ed73d7ad0e51f61b92bfeeb8bfce27cec691cd561c8e860b24f15651fae.scope: Deactivated successfully.
Dec 13 03:47:50 np0005558241 podman[352354]: 2025-12-13 08:47:50.516992468 +0000 UTC m=+0.026633469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:47:50 np0005558241 podman[352354]: 2025-12-13 08:47:50.695817145 +0000 UTC m=+0.205458116 container create 3bc038f96523841851040daa67edefb63843ee2ce6aba345df2eda791fedf234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:47:50 np0005558241 systemd[1]: Started libpod-conmon-3bc038f96523841851040daa67edefb63843ee2ce6aba345df2eda791fedf234.scope.
Dec 13 03:47:50 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:47:50 np0005558241 podman[352354]: 2025-12-13 08:47:50.809051357 +0000 UTC m=+0.318692348 container init 3bc038f96523841851040daa67edefb63843ee2ce6aba345df2eda791fedf234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_carver, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 03:47:50 np0005558241 podman[352354]: 2025-12-13 08:47:50.818859873 +0000 UTC m=+0.328501084 container start 3bc038f96523841851040daa67edefb63843ee2ce6aba345df2eda791fedf234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 03:47:50 np0005558241 angry_carver[352371]: 167 167
Dec 13 03:47:50 np0005558241 systemd[1]: libpod-3bc038f96523841851040daa67edefb63843ee2ce6aba345df2eda791fedf234.scope: Deactivated successfully.
Dec 13 03:47:50 np0005558241 podman[352354]: 2025-12-13 08:47:50.824569847 +0000 UTC m=+0.334210828 container attach 3bc038f96523841851040daa67edefb63843ee2ce6aba345df2eda791fedf234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 03:47:50 np0005558241 podman[352354]: 2025-12-13 08:47:50.827167992 +0000 UTC m=+0.336808963 container died 3bc038f96523841851040daa67edefb63843ee2ce6aba345df2eda791fedf234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_carver, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 03:47:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-12bbb450115cb2191eeb73e6884af7a739747be49b61eba162552827a0d764bb-merged.mount: Deactivated successfully.
Dec 13 03:47:50 np0005558241 podman[352354]: 2025-12-13 08:47:50.919919569 +0000 UTC m=+0.429560540 container remove 3bc038f96523841851040daa67edefb63843ee2ce6aba345df2eda791fedf234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_carver, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:47:50 np0005558241 systemd[1]: libpod-conmon-3bc038f96523841851040daa67edefb63843ee2ce6aba345df2eda791fedf234.scope: Deactivated successfully.
Dec 13 03:47:51 np0005558241 podman[352394]: 2025-12-13 08:47:51.1120161 +0000 UTC m=+0.046426406 container create a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_cartwright, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:47:51 np0005558241 systemd[1]: Started libpod-conmon-a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4.scope.
Dec 13 03:47:51 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:47:51 np0005558241 podman[352394]: 2025-12-13 08:47:51.092417448 +0000 UTC m=+0.026827784 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:47:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7984461fe67cd477978a519b3cef3ab29e19d1c3ee51ef7ec6eb5f0727c0a274/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7984461fe67cd477978a519b3cef3ab29e19d1c3ee51ef7ec6eb5f0727c0a274/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7984461fe67cd477978a519b3cef3ab29e19d1c3ee51ef7ec6eb5f0727c0a274/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7984461fe67cd477978a519b3cef3ab29e19d1c3ee51ef7ec6eb5f0727c0a274/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:51 np0005558241 podman[352394]: 2025-12-13 08:47:51.233716424 +0000 UTC m=+0.168126760 container init a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:47:51 np0005558241 podman[352394]: 2025-12-13 08:47:51.24589033 +0000 UTC m=+0.180300636 container start a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:47:51 np0005558241 podman[352394]: 2025-12-13 08:47:51.254164968 +0000 UTC m=+0.188575264 container attach a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 03:47:51 np0005558241 nova_compute[248510]: 2025-12-13 08:47:51.748 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2564: 321 pgs: 321 active+clean; 200 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 894 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Dec 13 03:47:52 np0005558241 lvm[352489]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:47:52 np0005558241 lvm[352489]: VG ceph_vg1 finished
Dec 13 03:47:52 np0005558241 lvm[352488]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:47:52 np0005558241 lvm[352488]: VG ceph_vg0 finished
Dec 13 03:47:52 np0005558241 lvm[352491]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:47:52 np0005558241 lvm[352491]: VG ceph_vg2 finished
Dec 13 03:47:52 np0005558241 inspiring_cartwright[352410]: {}
Dec 13 03:47:52 np0005558241 systemd[1]: libpod-a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4.scope: Deactivated successfully.
Dec 13 03:47:52 np0005558241 podman[352394]: 2025-12-13 08:47:52.267980118 +0000 UTC m=+1.202390444 container died a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_cartwright, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:47:52 np0005558241 systemd[1]: libpod-a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4.scope: Consumed 1.647s CPU time.
Dec 13 03:47:52 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7984461fe67cd477978a519b3cef3ab29e19d1c3ee51ef7ec6eb5f0727c0a274-merged.mount: Deactivated successfully.
Dec 13 03:47:52 np0005558241 podman[352394]: 2025-12-13 08:47:52.325188524 +0000 UTC m=+1.259598830 container remove a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:47:52 np0005558241 systemd[1]: libpod-conmon-a5b6abfbefe9f59d8701f7e63430e760769dcfcdf8a569e13a10e3bb8c83e9b4.scope: Deactivated successfully.
Dec 13 03:47:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:47:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:47:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:47:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:47:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:47:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:47:53 np0005558241 nova_compute[248510]: 2025-12-13 08:47:53.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:47:53 np0005558241 nova_compute[248510]: 2025-12-13 08:47:53.910 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2565: 321 pgs: 321 active+clean; 202 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 894 KiB/s rd, 2.2 MiB/s wr, 112 op/s
Dec 13 03:47:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:55.426 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:55.427 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:47:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:47:55.428 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:47:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2566: 321 pgs: 321 active+clean; 202 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 659 KiB/s rd, 1.7 MiB/s wr, 78 op/s
Dec 13 03:47:56 np0005558241 nova_compute[248510]: 2025-12-13 08:47:56.750 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:57 np0005558241 nova_compute[248510]: 2025-12-13 08:47:57.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:47:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2567: 321 pgs: 321 active+clean; 202 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 659 KiB/s rd, 1.7 MiB/s wr, 78 op/s
Dec 13 03:47:58 np0005558241 nova_compute[248510]: 2025-12-13 08:47:58.914 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:47:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:48:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2568: 321 pgs: 321 active+clean; 202 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 590 KiB/s wr, 49 op/s
Dec 13 03:48:01 np0005558241 nova_compute[248510]: 2025-12-13 08:48:01.474 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:01 np0005558241 nova_compute[248510]: 2025-12-13 08:48:01.800 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:01 np0005558241 nova_compute[248510]: 2025-12-13 08:48:01.910 248514 DEBUG oslo_concurrency.lockutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:48:01 np0005558241 nova_compute[248510]: 2025-12-13 08:48:01.911 248514 DEBUG oslo_concurrency.lockutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:48:01 np0005558241 nova_compute[248510]: 2025-12-13 08:48:01.911 248514 DEBUG oslo_concurrency.lockutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:48:01 np0005558241 nova_compute[248510]: 2025-12-13 08:48:01.911 248514 DEBUG oslo_concurrency.lockutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:48:01 np0005558241 nova_compute[248510]: 2025-12-13 08:48:01.912 248514 DEBUG oslo_concurrency.lockutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:48:01 np0005558241 nova_compute[248510]: 2025-12-13 08:48:01.913 248514 INFO nova.compute.manager [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Terminating instance#033[00m
Dec 13 03:48:01 np0005558241 nova_compute[248510]: 2025-12-13 08:48:01.913 248514 DEBUG nova.compute.manager [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:48:01 np0005558241 kernel: tap2d164f50-a5 (unregistering): left promiscuous mode
Dec 13 03:48:01 np0005558241 NetworkManager[50376]: <info>  [1765615681.9847] device (tap2d164f50-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:48:01 np0005558241 ovn_controller[148476]: 2025-12-13T08:48:01Z|01029|binding|INFO|Releasing lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 from this chassis (sb_readonly=0)
Dec 13 03:48:01 np0005558241 ovn_controller[148476]: 2025-12-13T08:48:01Z|01030|binding|INFO|Setting lport 2d164f50-a56a-4eaf-ad60-84274a0eb413 down in Southbound
Dec 13 03:48:01 np0005558241 nova_compute[248510]: 2025-12-13 08:48:01.996 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:01 np0005558241 ovn_controller[148476]: 2025-12-13T08:48:01Z|01031|binding|INFO|Removing iface tap2d164f50-a5 ovn-installed in OVS
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:01.999 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.007 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:39:62 10.100.0.6'], port_security=['fa:16:3e:e2:39:62 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c3fb322f-a9db-4396-b659-2307698e5524', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2d4d23379cc4b03bbdd72a9134fdd9b', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'f3732966-b58f-403f-8b6d-f814639eb59b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb47a1c8-b027-42a5-95ca-41b2f56663d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2d164f50-a56a-4eaf-ad60-84274a0eb413) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:48:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.009 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2d164f50-a56a-4eaf-ad60-84274a0eb413 in datapath 6ad7f755-fa29-40dd-89c4-988d0a51cf9b unbound from our chassis#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.010 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.012 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6ad7f755-fa29-40dd-89c4-988d0a51cf9b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:48:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.014 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[971872bf-331b-4140-ab22-6a029cb61627]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.014 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b namespace which is not needed anymore#033[00m
Dec 13 03:48:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2569: 321 pgs: 321 active+clean; 202 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 22 KiB/s wr, 1 op/s
Dec 13 03:48:02 np0005558241 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000067.scope: Deactivated successfully.
Dec 13 03:48:02 np0005558241 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000067.scope: Consumed 2.857s CPU time.
Dec 13 03:48:02 np0005558241 systemd-machined[210538]: Machine qemu-131-instance-00000067 terminated.
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.143 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.149 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.164 248514 INFO nova.virt.libvirt.driver [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Instance destroyed successfully.#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.164 248514 DEBUG nova.objects.instance [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lazy-loading 'resources' on Instance uuid c3fb322f-a9db-4396-b659-2307698e5524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:48:02 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[351676]: [NOTICE]   (351689) : haproxy version is 2.8.14-c23fe91
Dec 13 03:48:02 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[351676]: [NOTICE]   (351689) : path to executable is /usr/sbin/haproxy
Dec 13 03:48:02 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[351676]: [WARNING]  (351689) : Exiting Master process...
Dec 13 03:48:02 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[351676]: [WARNING]  (351689) : Exiting Master process...
Dec 13 03:48:02 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[351676]: [ALERT]    (351689) : Current worker (351691) exited with code 143 (Terminated)
Dec 13 03:48:02 np0005558241 neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b[351676]: [WARNING]  (351689) : All workers exited. Exiting... (0)
Dec 13 03:48:02 np0005558241 systemd[1]: libpod-0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771.scope: Deactivated successfully.
Dec 13 03:48:02 np0005558241 podman[352557]: 2025-12-13 08:48:02.178505036 +0000 UTC m=+0.052567941 container died 0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.195 248514 DEBUG nova.virt.libvirt.vif [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:44:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-235457723',display_name='tempest-ServersNegativeTestJSON-server-235457723',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-235457723',id=103,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:47:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d2d4d23379cc4b03bbdd72a9134fdd9b',ramdisk_id='',reservation_id='r-u6zqzes4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1471623163',owner_user_name='tempest-ServersNegativeTestJSON-1471623163-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:47:43Z,user_data=None,user_id='8948a1b0c26f43129cb50ef6f3872ecd',uuid=c3fb322f-a9db-4396-b659-2307698e5524,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.196 248514 DEBUG nova.network.os_vif_util [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converting VIF {"id": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "address": "fa:16:3e:e2:39:62", "network": {"id": "6ad7f755-fa29-40dd-89c4-988d0a51cf9b", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2096945333-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2d4d23379cc4b03bbdd72a9134fdd9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d164f50-a5", "ovs_interfaceid": "2d164f50-a56a-4eaf-ad60-84274a0eb413", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.197 248514 DEBUG nova.network.os_vif_util [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.198 248514 DEBUG os_vif [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.201 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.202 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d164f50-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.204 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.206 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.210 248514 INFO os_vif [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:39:62,bridge_name='br-int',has_traffic_filtering=True,id=2d164f50-a56a-4eaf-ad60-84274a0eb413,network=Network(6ad7f755-fa29-40dd-89c4-988d0a51cf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d164f50-a5')#033[00m
Dec 13 03:48:02 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7c2d7bff872b6cdff54ab27f3dd8aa237d58f134c1191165160b971ebebd88a7-merged.mount: Deactivated successfully.
Dec 13 03:48:02 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771-userdata-shm.mount: Deactivated successfully.
Dec 13 03:48:02 np0005558241 podman[352557]: 2025-12-13 08:48:02.224157911 +0000 UTC m=+0.098220806 container cleanup 0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:48:02 np0005558241 systemd[1]: libpod-conmon-0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771.scope: Deactivated successfully.
Dec 13 03:48:02 np0005558241 podman[352606]: 2025-12-13 08:48:02.307040771 +0000 UTC m=+0.056436457 container remove 0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 03:48:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.313 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[46085592-3495-4d3e-93b7-4d44676adb8a]: (4, ('Sat Dec 13 08:48:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b (0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771)\n0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771\nSat Dec 13 08:48:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b (0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771)\n0caaf2cdb07a328228e72c673fa7bdb63ea9cbd1d795885012afb6132e3ae771\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.315 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1e7547-1546-4584-b819-7beef157c1c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.316 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ad7f755-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.319 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:02 np0005558241 kernel: tap6ad7f755-f0: left promiscuous mode
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.321 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.325 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d42a7dda-c150-4705-8e53-347b2ed02fc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.337 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.342 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1284b20c-1889-4faf-88a6-b6e121f73b21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.344 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a7819ee7-9866-4e85-8ae5-816c2e44e1b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.361 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fd8a11f0-6604-4774-b526-8c4fb8e401a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823920, 'reachable_time': 39110, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352624, 'error': None, 'target': 'ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.365 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6ad7f755-fa29-40dd-89c4-988d0a51cf9b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:48:02 np0005558241 systemd[1]: run-netns-ovnmeta\x2d6ad7f755\x2dfa29\x2d40dd\x2d89c4\x2d988d0a51cf9b.mount: Deactivated successfully.
Dec 13 03:48:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:02.366 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[695d02c8-ce4e-44ae-a5c6-19bffdabb91c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.489 248514 INFO nova.virt.libvirt.driver [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Deleting instance files /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524_del#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.490 248514 INFO nova.virt.libvirt.driver [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Deletion of /var/lib/nova/instances/c3fb322f-a9db-4396-b659-2307698e5524_del complete#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.556 248514 INFO nova.compute.manager [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.557 248514 DEBUG oslo.service.loopingcall [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.557 248514 DEBUG nova.compute.manager [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.558 248514 DEBUG nova.network.neutron [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:48:02 np0005558241 nova_compute[248510]: 2025-12-13 08:48:02.795 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Dec 13 03:48:03 np0005558241 nova_compute[248510]: 2025-12-13 08:48:03.529 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:48:03 np0005558241 nova_compute[248510]: 2025-12-13 08:48:03.530 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:48:03 np0005558241 nova_compute[248510]: 2025-12-13 08:48:03.530 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:48:03 np0005558241 nova_compute[248510]: 2025-12-13 08:48:03.530 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:48:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:03.722 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:1f:a9 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-15538243-d813-425c-a420-0747a4cf75d2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15538243-d813-425c-a420-0747a4cf75d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '062cf52f23fa4a85853b3a51dc81e171', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=496055dc-0896-49fa-bf70-209d5a08ecd5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=50178087-37ed-4900-b708-4cc10cbf6678) old=Port_Binding(mac=['fa:16:3e:92:1f:a9 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-15538243-d813-425c-a420-0747a4cf75d2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15538243-d813-425c-a420-0747a4cf75d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '062cf52f23fa4a85853b3a51dc81e171', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:48:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:03.723 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 50178087-37ed-4900-b708-4cc10cbf6678 in datapath 15538243-d813-425c-a420-0747a4cf75d2 updated#033[00m
Dec 13 03:48:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:03.724 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 15538243-d813-425c-a420-0747a4cf75d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:48:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:03.725 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6bcf0719-cad5-4562-97af-819f072cd123]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2570: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 25 KiB/s wr, 29 op/s
Dec 13 03:48:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:48:04 np0005558241 nova_compute[248510]: 2025-12-13 08:48:04.798 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:05 np0005558241 nova_compute[248510]: 2025-12-13 08:48:05.135 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:05.455 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:48:05 np0005558241 nova_compute[248510]: 2025-12-13 08:48:05.456 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:05.457 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:48:05 np0005558241 nova_compute[248510]: 2025-12-13 08:48:05.729 248514 DEBUG nova.network.neutron [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:48:05 np0005558241 nova_compute[248510]: 2025-12-13 08:48:05.785 248514 INFO nova.compute.manager [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Took 3.23 seconds to deallocate network for instance.#033[00m
Dec 13 03:48:05 np0005558241 nova_compute[248510]: 2025-12-13 08:48:05.858 248514 DEBUG oslo_concurrency.lockutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:48:05 np0005558241 nova_compute[248510]: 2025-12-13 08:48:05.858 248514 DEBUG oslo_concurrency.lockutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:48:05 np0005558241 nova_compute[248510]: 2025-12-13 08:48:05.890 248514 DEBUG nova.compute.manager [req-acc0e784-9171-4a55-ac8a-a533f48bd852 req-730c725d-e9fe-4724-bb34-f6cecaa19430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-deleted-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:48:05 np0005558241 nova_compute[248510]: 2025-12-13 08:48:05.983 248514 DEBUG oslo_concurrency.processutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:48:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2571: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec 13 03:48:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:48:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2914377555' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:48:06 np0005558241 nova_compute[248510]: 2025-12-13 08:48:06.601 248514 DEBUG oslo_concurrency.processutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:48:06 np0005558241 nova_compute[248510]: 2025-12-13 08:48:06.609 248514 DEBUG nova.compute.provider_tree [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:48:06 np0005558241 nova_compute[248510]: 2025-12-13 08:48:06.644 248514 DEBUG nova.scheduler.client.report [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:48:06 np0005558241 nova_compute[248510]: 2025-12-13 08:48:06.682 248514 DEBUG oslo_concurrency.lockutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:48:06 np0005558241 nova_compute[248510]: 2025-12-13 08:48:06.712 248514 INFO nova.scheduler.client.report [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Deleted allocations for instance c3fb322f-a9db-4396-b659-2307698e5524#033[00m
Dec 13 03:48:06 np0005558241 nova_compute[248510]: 2025-12-13 08:48:06.802 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:06 np0005558241 nova_compute[248510]: 2025-12-13 08:48:06.836 248514 DEBUG oslo_concurrency.lockutils [None req-132c87a5-320b-4b5f-b12d-4194dfbbdeca 8948a1b0c26f43129cb50ef6f3872ecd d2d4d23379cc4b03bbdd72a9134fdd9b - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:48:07 np0005558241 nova_compute[248510]: 2025-12-13 08:48:07.099 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updating instance_info_cache with network_info: [{"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:48:07 np0005558241 nova_compute[248510]: 2025-12-13 08:48:07.143 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:48:07 np0005558241 nova_compute[248510]: 2025-12-13 08:48:07.144 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:48:07 np0005558241 nova_compute[248510]: 2025-12-13 08:48:07.145 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:48:07 np0005558241 nova_compute[248510]: 2025-12-13 08:48:07.204 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:07.459 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:48:07 np0005558241 ovn_controller[148476]: 2025-12-13T08:48:07Z|01032|binding|INFO|Releasing lport d344c035-f22d-4b1b-95ff-ed098d3a946c from this chassis (sb_readonly=0)
Dec 13 03:48:07 np0005558241 nova_compute[248510]: 2025-12-13 08:48:07.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:07 np0005558241 nova_compute[248510]: 2025-12-13 08:48:07.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:48:07 np0005558241 nova_compute[248510]: 2025-12-13 08:48:07.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:48:07 np0005558241 nova_compute[248510]: 2025-12-13 08:48:07.811 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:48:07 np0005558241 nova_compute[248510]: 2025-12-13 08:48:07.812 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:48:07 np0005558241 nova_compute[248510]: 2025-12-13 08:48:07.812 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:48:07 np0005558241 nova_compute[248510]: 2025-12-13 08:48:07.812 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:48:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2572: 321 pgs: 321 active+clean; 121 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec 13 03:48:08 np0005558241 nova_compute[248510]: 2025-12-13 08:48:08.041 248514 DEBUG nova.compute.manager [req-86008551-14e8-4cf9-b0cd-6c5f12299361 req-a32492ec-9ab7-44f4-a713-68d62fe42340 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:48:08 np0005558241 nova_compute[248510]: 2025-12-13 08:48:08.042 248514 DEBUG oslo_concurrency.lockutils [req-86008551-14e8-4cf9-b0cd-6c5f12299361 req-a32492ec-9ab7-44f4-a713-68d62fe42340 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c3fb322f-a9db-4396-b659-2307698e5524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:48:08 np0005558241 nova_compute[248510]: 2025-12-13 08:48:08.042 248514 DEBUG oslo_concurrency.lockutils [req-86008551-14e8-4cf9-b0cd-6c5f12299361 req-a32492ec-9ab7-44f4-a713-68d62fe42340 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:48:08 np0005558241 nova_compute[248510]: 2025-12-13 08:48:08.043 248514 DEBUG oslo_concurrency.lockutils [req-86008551-14e8-4cf9-b0cd-6c5f12299361 req-a32492ec-9ab7-44f4-a713-68d62fe42340 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c3fb322f-a9db-4396-b659-2307698e5524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:48:08 np0005558241 nova_compute[248510]: 2025-12-13 08:48:08.043 248514 DEBUG nova.compute.manager [req-86008551-14e8-4cf9-b0cd-6c5f12299361 req-a32492ec-9ab7-44f4-a713-68d62fe42340 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] No waiting events found dispatching network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:48:08 np0005558241 nova_compute[248510]: 2025-12-13 08:48:08.043 248514 WARNING nova.compute.manager [req-86008551-14e8-4cf9-b0cd-6c5f12299361 req-a32492ec-9ab7-44f4-a713-68d62fe42340 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Received unexpected event network-vif-plugged-2d164f50-a56a-4eaf-ad60-84274a0eb413 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:48:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:48:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1566965958' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:48:08 np0005558241 nova_compute[248510]: 2025-12-13 08:48:08.424 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:48:08 np0005558241 nova_compute[248510]: 2025-12-13 08:48:08.808 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:48:08 np0005558241 nova_compute[248510]: 2025-12-13 08:48:08.809 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:48:08 np0005558241 nova_compute[248510]: 2025-12-13 08:48:08.971 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:48:08 np0005558241 nova_compute[248510]: 2025-12-13 08:48:08.972 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3528MB free_disk=59.941981153562665GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:48:08 np0005558241 nova_compute[248510]: 2025-12-13 08:48:08.973 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:48:08 np0005558241 nova_compute[248510]: 2025-12-13 08:48:08.973 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:48:09 np0005558241 nova_compute[248510]: 2025-12-13 08:48:09.051 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:48:09 np0005558241 nova_compute[248510]: 2025-12-13 08:48:09.052 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:48:09 np0005558241 nova_compute[248510]: 2025-12-13 08:48:09.052 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:48:09 np0005558241 nova_compute[248510]: 2025-12-13 08:48:09.096 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:48:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:48:09
Dec 13 03:48:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:48:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:48:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'images']
Dec 13 03:48:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:48:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:48:09Z|01033|binding|INFO|Releasing lport d344c035-f22d-4b1b-95ff-ed098d3a946c from this chassis (sb_readonly=0)
Dec 13 03:48:09 np0005558241 nova_compute[248510]: 2025-12-13 08:48:09.539 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:48:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/852006070' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:48:09 np0005558241 nova_compute[248510]: 2025-12-13 08:48:09.693 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:48:09 np0005558241 nova_compute[248510]: 2025-12-13 08:48:09.699 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:48:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:48:10 np0005558241 nova_compute[248510]: 2025-12-13 08:48:10.010 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:48:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2573: 321 pgs: 321 active+clean; 121 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec 13 03:48:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:48:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:48:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:48:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:48:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:48:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:48:10 np0005558241 nova_compute[248510]: 2025-12-13 08:48:10.243 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:48:10 np0005558241 nova_compute[248510]: 2025-12-13 08:48:10.244 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:48:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:48:11 np0005558241 nova_compute[248510]: 2025-12-13 08:48:11.245 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:48:11 np0005558241 nova_compute[248510]: 2025-12-13 08:48:11.250 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:48:11 np0005558241 nova_compute[248510]: 2025-12-13 08:48:11.250 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:48:11 np0005558241 nova_compute[248510]: 2025-12-13 08:48:11.250 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:48:11 np0005558241 nova_compute[248510]: 2025-12-13 08:48:11.804 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2574: 321 pgs: 321 active+clean; 121 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Dec 13 03:48:12 np0005558241 nova_compute[248510]: 2025-12-13 08:48:12.205 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:12 np0005558241 nova_compute[248510]: 2025-12-13 08:48:12.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2575: 321 pgs: 321 active+clean; 121 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Dec 13 03:48:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:48:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:48:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2807869102' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:48:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:48:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2807869102' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:48:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2576: 321 pgs: 321 active+clean; 121 MiB data, 761 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:48:16 np0005558241 ovn_controller[148476]: 2025-12-13T08:48:16Z|01034|binding|INFO|Releasing lport d344c035-f22d-4b1b-95ff-ed098d3a946c from this chassis (sb_readonly=0)
Dec 13 03:48:16 np0005558241 nova_compute[248510]: 2025-12-13 08:48:16.831 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:17 np0005558241 nova_compute[248510]: 2025-12-13 08:48:17.163 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615682.1614242, c3fb322f-a9db-4396-b659-2307698e5524 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:48:17 np0005558241 nova_compute[248510]: 2025-12-13 08:48:17.163 248514 INFO nova.compute.manager [-] [instance: c3fb322f-a9db-4396-b659-2307698e5524] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:48:17 np0005558241 nova_compute[248510]: 2025-12-13 08:48:17.194 248514 DEBUG nova.compute.manager [None req-bc299604-a0ba-4be8-9bda-478d8b4323df - - - - - -] [instance: c3fb322f-a9db-4396-b659-2307698e5524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:48:17 np0005558241 nova_compute[248510]: 2025-12-13 08:48:17.207 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:17 np0005558241 nova_compute[248510]: 2025-12-13 08:48:17.776 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:48:17 np0005558241 podman[352695]: 2025-12-13 08:48:17.969820446 +0000 UTC m=+0.053059813 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 03:48:17 np0005558241 podman[352694]: 2025-12-13 08:48:17.976886313 +0000 UTC m=+0.061534635 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 03:48:18 np0005558241 podman[352693]: 2025-12-13 08:48:18.006017354 +0000 UTC m=+0.092385450 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Dec 13 03:48:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2577: 321 pgs: 321 active+clean; 121 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Dec 13 03:48:18 np0005558241 nova_compute[248510]: 2025-12-13 08:48:18.884 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquiring lock "99100320-043d-4f13-ac93-5fd3309abbf7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:48:18 np0005558241 nova_compute[248510]: 2025-12-13 08:48:18.885 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:48:18 np0005558241 nova_compute[248510]: 2025-12-13 08:48:18.941 248514 DEBUG nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:48:19 np0005558241 nova_compute[248510]: 2025-12-13 08:48:19.034 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:48:19 np0005558241 nova_compute[248510]: 2025-12-13 08:48:19.035 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:48:19 np0005558241 nova_compute[248510]: 2025-12-13 08:48:19.041 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:48:19 np0005558241 nova_compute[248510]: 2025-12-13 08:48:19.042 248514 INFO nova.compute.claims [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:48:19 np0005558241 nova_compute[248510]: 2025-12-13 08:48:19.180 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:48:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:48:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3636471413' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:48:19 np0005558241 nova_compute[248510]: 2025-12-13 08:48:19.753 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:48:19 np0005558241 nova_compute[248510]: 2025-12-13 08:48:19.759 248514 DEBUG nova.compute.provider_tree [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:48:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:48:19 np0005558241 nova_compute[248510]: 2025-12-13 08:48:19.785 248514 DEBUG nova.scheduler.client.report [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:48:19 np0005558241 nova_compute[248510]: 2025-12-13 08:48:19.822 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:48:19 np0005558241 nova_compute[248510]: 2025-12-13 08:48:19.823 248514 DEBUG nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:48:19 np0005558241 nova_compute[248510]: 2025-12-13 08:48:19.894 248514 DEBUG nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:48:19 np0005558241 nova_compute[248510]: 2025-12-13 08:48:19.895 248514 DEBUG nova.network.neutron [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:48:19 np0005558241 nova_compute[248510]: 2025-12-13 08:48:19.924 248514 INFO nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:48:19 np0005558241 nova_compute[248510]: 2025-12-13 08:48:19.947 248514 DEBUG nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:48:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2578: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.066 248514 DEBUG nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.068 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.068 248514 INFO nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Creating image(s)#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.087 248514 DEBUG nova.storage.rbd_utils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] rbd image 99100320-043d-4f13-ac93-5fd3309abbf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.104 248514 DEBUG nova.storage.rbd_utils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] rbd image 99100320-043d-4f13-ac93-5fd3309abbf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.134 248514 DEBUG nova.storage.rbd_utils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] rbd image 99100320-043d-4f13-ac93-5fd3309abbf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.139 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.225 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.226 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.227 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.227 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.249 248514 DEBUG nova.storage.rbd_utils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] rbd image 99100320-043d-4f13-ac93-5fd3309abbf7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.252 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 99100320-043d-4f13-ac93-5fd3309abbf7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.447 248514 DEBUG nova.policy [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '649a4118d92a4ee68ff645ddec797a5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b75a9df2d3584458bf4c9c127010a4d1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.569 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 99100320-043d-4f13-ac93-5fd3309abbf7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.641 248514 DEBUG nova.storage.rbd_utils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] resizing rbd image 99100320-043d-4f13-ac93-5fd3309abbf7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.709 248514 DEBUG nova.objects.instance [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 99100320-043d-4f13-ac93-5fd3309abbf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.728 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.728 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Ensure instance console log exists: /var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.728 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.729 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:48:20 np0005558241 nova_compute[248510]: 2025-12-13 08:48:20.729 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:48:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:20.954 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:1f:a9 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-15538243-d813-425c-a420-0747a4cf75d2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15538243-d813-425c-a420-0747a4cf75d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '062cf52f23fa4a85853b3a51dc81e171', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=496055dc-0896-49fa-bf70-209d5a08ecd5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=50178087-37ed-4900-b708-4cc10cbf6678) old=Port_Binding(mac=['fa:16:3e:92:1f:a9 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-15538243-d813-425c-a420-0747a4cf75d2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-15538243-d813-425c-a420-0747a4cf75d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '062cf52f23fa4a85853b3a51dc81e171', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:48:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:20.956 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 50178087-37ed-4900-b708-4cc10cbf6678 in datapath 15538243-d813-425c-a420-0747a4cf75d2 updated#033[00m
Dec 13 03:48:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:20.957 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 15538243-d813-425c-a420-0747a4cf75d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:48:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:20.958 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3f77e214-20e6-4560-bfa1-11c51348e617]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007718826105657202 of space, bias 1.0, pg target 0.23156478316971604 quantized to 32 (current 32)
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006682139988669646 of space, bias 1.0, pg target 0.20046419966008938 quantized to 32 (current 32)
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.841775278005856e-07 of space, bias 4.0, pg target 0.0007010130333607028 quantized to 16 (current 32)
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:48:21 np0005558241 nova_compute[248510]: 2025-12-13 08:48:21.832 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2579: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Dec 13 03:48:22 np0005558241 nova_compute[248510]: 2025-12-13 08:48:22.184 248514 DEBUG nova.network.neutron [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Successfully created port: 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:48:22 np0005558241 nova_compute[248510]: 2025-12-13 08:48:22.208 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:23 np0005558241 nova_compute[248510]: 2025-12-13 08:48:23.858 248514 DEBUG nova.network.neutron [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Successfully updated port: 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:48:23 np0005558241 nova_compute[248510]: 2025-12-13 08:48:23.893 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquiring lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:48:23 np0005558241 nova_compute[248510]: 2025-12-13 08:48:23.894 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquired lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:48:23 np0005558241 nova_compute[248510]: 2025-12-13 08:48:23.894 248514 DEBUG nova.network.neutron [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:48:24 np0005558241 nova_compute[248510]: 2025-12-13 08:48:24.027 248514 DEBUG nova.compute.manager [req-dfbd6584-d2d3-468a-88c4-ad45ab3a472a req-8127859b-0951-49dc-8159-24dc1298f056 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received event network-changed-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:48:24 np0005558241 nova_compute[248510]: 2025-12-13 08:48:24.028 248514 DEBUG nova.compute.manager [req-dfbd6584-d2d3-468a-88c4-ad45ab3a472a req-8127859b-0951-49dc-8159-24dc1298f056 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Refreshing instance network info cache due to event network-changed-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:48:24 np0005558241 nova_compute[248510]: 2025-12-13 08:48:24.028 248514 DEBUG oslo_concurrency.lockutils [req-dfbd6584-d2d3-468a-88c4-ad45ab3a472a req-8127859b-0951-49dc-8159-24dc1298f056 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:48:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2580: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:48:24 np0005558241 nova_compute[248510]: 2025-12-13 08:48:24.572 248514 DEBUG nova.network.neutron [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:48:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:48:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2581: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.515 248514 DEBUG nova.network.neutron [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Updating instance_info_cache with network_info: [{"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.575 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Releasing lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.575 248514 DEBUG nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Instance network_info: |[{"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.576 248514 DEBUG oslo_concurrency.lockutils [req-dfbd6584-d2d3-468a-88c4-ad45ab3a472a req-8127859b-0951-49dc-8159-24dc1298f056 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.577 248514 DEBUG nova.network.neutron [req-dfbd6584-d2d3-468a-88c4-ad45ab3a472a req-8127859b-0951-49dc-8159-24dc1298f056 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Refreshing network info cache for port 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.582 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Start _get_guest_xml network_info=[{"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.587 248514 WARNING nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.593 248514 DEBUG nova.virt.libvirt.host [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.594 248514 DEBUG nova.virt.libvirt.host [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.599 248514 DEBUG nova.virt.libvirt.host [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.599 248514 DEBUG nova.virt.libvirt.host [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.600 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.601 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.602 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.603 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.603 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.604 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.604 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.605 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.606 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.606 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.607 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.608 248514 DEBUG nova.virt.hardware [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.615 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:48:26 np0005558241 nova_compute[248510]: 2025-12-13 08:48:26.834 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:48:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4241298965' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.187 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.212 248514 DEBUG nova.storage.rbd_utils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] rbd image 99100320-043d-4f13-ac93-5fd3309abbf7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.216 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.252 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:48:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4131002482' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.821 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.822 248514 DEBUG nova.virt.libvirt.vif [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:48:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-647979759-access_point-1909280991',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-647979759-access_point-1909280991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-647979759-acc',id=106,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAq9JxVWdGwadoCAlDLVLF/FgbwuunvGPYPWGxv9o2qZSwXgRBKF+h53qswJupwtL+dZgpz/rFzjqXvS7XDDi2cr6DR4JG28HfzJLgzD6wSuJyxP2VMxhs/n7K+Z53vcIw==',key_name='tempest-TestSecurityGroupsBasicOps-1306110225',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b75a9df2d3584458bf4c9c127010a4d1',ramdisk_id='',reservation_id='r-vazklzs4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-647979759',owner_user_name='tempest-TestSecurityGroupsBasicOps-647979759-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:48:19Z,user_data=None,user_id='649a4118d92a4ee68ff645ddec797a5a',uuid=99100320-043d-4f13-ac93-5fd3309abbf7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.823 248514 DEBUG nova.network.os_vif_util [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Converting VIF {"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.824 248514 DEBUG nova.network.os_vif_util [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:76:90,bridge_name='br-int',has_traffic_filtering=True,id=5027cfa3-f4ed-4668-87ec-ebfe75f4fb14,network=Network(8a649c29-105b-45ad-91b4-9f7a7c58b419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5027cfa3-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.825 248514 DEBUG nova.objects.instance [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 99100320-043d-4f13-ac93-5fd3309abbf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.854 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:48:27 np0005558241 nova_compute[248510]:  <uuid>99100320-043d-4f13-ac93-5fd3309abbf7</uuid>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:  <name>instance-0000006a</name>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-647979759-access_point-1909280991</nova:name>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:48:26</nova:creationTime>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:48:27 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:        <nova:user uuid="649a4118d92a4ee68ff645ddec797a5a">tempest-TestSecurityGroupsBasicOps-647979759-project-member</nova:user>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:        <nova:project uuid="b75a9df2d3584458bf4c9c127010a4d1">tempest-TestSecurityGroupsBasicOps-647979759</nova:project>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:        <nova:port uuid="5027cfa3-f4ed-4668-87ec-ebfe75f4fb14">
Dec 13 03:48:27 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <entry name="serial">99100320-043d-4f13-ac93-5fd3309abbf7</entry>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <entry name="uuid">99100320-043d-4f13-ac93-5fd3309abbf7</entry>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/99100320-043d-4f13-ac93-5fd3309abbf7_disk">
Dec 13 03:48:27 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:48:27 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/99100320-043d-4f13-ac93-5fd3309abbf7_disk.config">
Dec 13 03:48:27 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:48:27 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:df:76:90"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <target dev="tap5027cfa3-f4"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7/console.log" append="off"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:48:27 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:48:27 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:48:27 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:48:27 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.855 248514 DEBUG nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Preparing to wait for external event network-vif-plugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.855 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquiring lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.855 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.855 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.856 248514 DEBUG nova.virt.libvirt.vif [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:48:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-647979759-access_point-1909280991',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-647979759-access_point-1909280991',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-647979759-acc',id=106,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAq9JxVWdGwadoCAlDLVLF/FgbwuunvGPYPWGxv9o2qZSwXgRBKF+h53qswJupwtL+dZgpz/rFzjqXvS7XDDi2cr6DR4JG28HfzJLgzD6wSuJyxP2VMxhs/n7K+Z53vcIw==',key_name='tempest-TestSecurityGroupsBasicOps-1306110225',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b75a9df2d3584458bf4c9c127010a4d1',ramdisk_id='',reservation_id='r-vazklzs4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-647979759',owner_user_name='tempest-TestSecurityGroupsBasicOps-647979759-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:48:19Z,user_data=None,user_id='649a4118d92a4ee68ff645ddec797a5a',uuid=99100320-043d-4f13-ac93-5fd3309abbf7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.856 248514 DEBUG nova.network.os_vif_util [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Converting VIF {"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.857 248514 DEBUG nova.network.os_vif_util [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:76:90,bridge_name='br-int',has_traffic_filtering=True,id=5027cfa3-f4ed-4668-87ec-ebfe75f4fb14,network=Network(8a649c29-105b-45ad-91b4-9f7a7c58b419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5027cfa3-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.857 248514 DEBUG os_vif [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:76:90,bridge_name='br-int',has_traffic_filtering=True,id=5027cfa3-f4ed-4668-87ec-ebfe75f4fb14,network=Network(8a649c29-105b-45ad-91b4-9f7a7c58b419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5027cfa3-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.857 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.858 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.858 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.861 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.861 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5027cfa3-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.861 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5027cfa3-f4, col_values=(('external_ids', {'iface-id': '5027cfa3-f4ed-4668-87ec-ebfe75f4fb14', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:76:90', 'vm-uuid': '99100320-043d-4f13-ac93-5fd3309abbf7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:27 np0005558241 NetworkManager[50376]: <info>  [1765615707.9182] manager: (tap5027cfa3-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/426)
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.920 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.924 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.926 248514 INFO os_vif [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:76:90,bridge_name='br-int',has_traffic_filtering=True,id=5027cfa3-f4ed-4668-87ec-ebfe75f4fb14,network=Network(8a649c29-105b-45ad-91b4-9f7a7c58b419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5027cfa3-f4')#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.984 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.984 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.984 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] No VIF found with MAC fa:16:3e:df:76:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:48:27 np0005558241 nova_compute[248510]: 2025-12-13 08:48:27.985 248514 INFO nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Using config drive#033[00m
Dec 13 03:48:28 np0005558241 nova_compute[248510]: 2025-12-13 08:48:28.003 248514 DEBUG nova.storage.rbd_utils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] rbd image 99100320-043d-4f13-ac93-5fd3309abbf7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:48:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2582: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:48:28 np0005558241 nova_compute[248510]: 2025-12-13 08:48:28.938 248514 INFO nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Creating config drive at /var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7/disk.config#033[00m
Dec 13 03:48:28 np0005558241 nova_compute[248510]: 2025-12-13 08:48:28.943 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphgbhu9dq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:48:29 np0005558241 nova_compute[248510]: 2025-12-13 08:48:29.087 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphgbhu9dq" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:48:29 np0005558241 nova_compute[248510]: 2025-12-13 08:48:29.120 248514 DEBUG nova.storage.rbd_utils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] rbd image 99100320-043d-4f13-ac93-5fd3309abbf7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:48:29 np0005558241 nova_compute[248510]: 2025-12-13 08:48:29.125 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7/disk.config 99100320-043d-4f13-ac93-5fd3309abbf7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:48:29 np0005558241 nova_compute[248510]: 2025-12-13 08:48:29.286 248514 DEBUG oslo_concurrency.processutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7/disk.config 99100320-043d-4f13-ac93-5fd3309abbf7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:48:29 np0005558241 nova_compute[248510]: 2025-12-13 08:48:29.287 248514 INFO nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Deleting local config drive /var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7/disk.config because it was imported into RBD.#033[00m
Dec 13 03:48:29 np0005558241 kernel: tap5027cfa3-f4: entered promiscuous mode
Dec 13 03:48:29 np0005558241 NetworkManager[50376]: <info>  [1765615709.3361] manager: (tap5027cfa3-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/427)
Dec 13 03:48:29 np0005558241 nova_compute[248510]: 2025-12-13 08:48:29.391 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:48:29Z|01035|binding|INFO|Claiming lport 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 for this chassis.
Dec 13 03:48:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:48:29Z|01036|binding|INFO|5027cfa3-f4ed-4668-87ec-ebfe75f4fb14: Claiming fa:16:3e:df:76:90 10.100.0.10
Dec 13 03:48:29 np0005558241 systemd-udevd[353075]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.401 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:76:90 10.100.0.10'], port_security=['fa:16:3e:df:76:90 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '99100320-043d-4f13-ac93-5fd3309abbf7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a649c29-105b-45ad-91b4-9f7a7c58b419', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b75a9df2d3584458bf4c9c127010a4d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2a4d558-076e-40f7-a16c-096cc56882f7 fa0ae845-58e9-4e37-8129-696ba9497adb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e9afcf2-3319-47a2-a247-8f6d1e6c6308, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=5027cfa3-f4ed-4668-87ec-ebfe75f4fb14) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.403 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 in datapath 8a649c29-105b-45ad-91b4-9f7a7c58b419 bound to our chassis#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.405 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8a649c29-105b-45ad-91b4-9f7a7c58b419#033[00m
Dec 13 03:48:29 np0005558241 NetworkManager[50376]: <info>  [1765615709.4108] device (tap5027cfa3-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:48:29 np0005558241 NetworkManager[50376]: <info>  [1765615709.4124] device (tap5027cfa3-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:48:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:48:29Z|01037|binding|INFO|Setting lport 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 ovn-installed in OVS
Dec 13 03:48:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:48:29Z|01038|binding|INFO|Setting lport 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 up in Southbound
Dec 13 03:48:29 np0005558241 nova_compute[248510]: 2025-12-13 08:48:29.413 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:29 np0005558241 nova_compute[248510]: 2025-12-13 08:48:29.415 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.421 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5dc2d2-42e8-4156-ae38-7633750c70f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.422 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8a649c29-11 in ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:48:29 np0005558241 systemd-machined[210538]: New machine qemu-132-instance-0000006a.
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.425 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8a649c29-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.426 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[88f6a94d-68f1-46ab-8dbf-a84d56d2c68d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.427 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e5f78f-3130-42da-bc43-ca7fd18d3cd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:29 np0005558241 systemd[1]: Started Virtual Machine qemu-132-instance-0000006a.
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.439 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[46cf6f70-5934-46a5-a5a5-195337b74296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:29 np0005558241 nova_compute[248510]: 2025-12-13 08:48:29.461 248514 DEBUG nova.network.neutron [req-dfbd6584-d2d3-468a-88c4-ad45ab3a472a req-8127859b-0951-49dc-8159-24dc1298f056 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Updated VIF entry in instance network info cache for port 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:48:29 np0005558241 nova_compute[248510]: 2025-12-13 08:48:29.461 248514 DEBUG nova.network.neutron [req-dfbd6584-d2d3-468a-88c4-ad45ab3a472a req-8127859b-0951-49dc-8159-24dc1298f056 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Updating instance_info_cache with network_info: [{"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.465 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6cd65d10-40a9-4742-b542-0f05cc210951]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:29 np0005558241 nova_compute[248510]: 2025-12-13 08:48:29.480 248514 DEBUG oslo_concurrency.lockutils [req-dfbd6584-d2d3-468a-88c4-ad45ab3a472a req-8127859b-0951-49dc-8159-24dc1298f056 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.504 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5e0ef653-65bc-4556-a808-87ea77011920]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.511 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dabd3144-6a58-4b44-ac27-7632d2e471e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:29 np0005558241 NetworkManager[50376]: <info>  [1765615709.5126] manager: (tap8a649c29-10): new Veth device (/org/freedesktop/NetworkManager/Devices/428)
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.549 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e64eac0e-d73d-48ec-9aad-ee602cd23309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.553 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[33a53044-cbb4-4684-b28d-955e2c9c836c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:29 np0005558241 NetworkManager[50376]: <info>  [1765615709.5751] device (tap8a649c29-10): carrier: link connected
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.580 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4da2660f-fac9-4053-8234-21312867e3e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.597 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[afd7071b-8bd1-42c3-9097-b56999388957]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a649c29-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:39:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 828679, 'reachable_time': 44018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353113, 'error': None, 'target': 'ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.612 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[af2f2617-10bb-493f-96d2-0eb0b64a1342]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe96:3988'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 828679, 'tstamp': 828679}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353114, 'error': None, 'target': 'ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.627 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[614ef3ff-a4d6-4691-a804-c7585918e8d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a649c29-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:96:39:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 828679, 'reachable_time': 44018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 353115, 'error': None, 'target': 'ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.661 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f45bf186-602a-4d22-96d1-43a0616ccef7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:29 np0005558241 kernel: tap8a649c29-10: entered promiscuous mode
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.722 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[73772902-0529-4dc0-b899-b8e93dd6a121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.723 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a649c29-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.723 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.724 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a649c29-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:48:29 np0005558241 nova_compute[248510]: 2025-12-13 08:48:29.726 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:29 np0005558241 nova_compute[248510]: 2025-12-13 08:48:29.728 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.729 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8a649c29-10, col_values=(('external_ids', {'iface-id': '7a1c6f9d-5534-4f2e-8492-9a55e2589a5b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:48:29 np0005558241 nova_compute[248510]: 2025-12-13 08:48:29.730 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:48:29Z|01039|binding|INFO|Releasing lport 7a1c6f9d-5534-4f2e-8492-9a55e2589a5b from this chassis (sb_readonly=0)
Dec 13 03:48:29 np0005558241 NetworkManager[50376]: <info>  [1765615709.7357] manager: (tap8a649c29-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/429)
Dec 13 03:48:29 np0005558241 nova_compute[248510]: 2025-12-13 08:48:29.745 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.747 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8a649c29-105b-45ad-91b4-9f7a7c58b419.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8a649c29-105b-45ad-91b4-9f7a7c58b419.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.747 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7a2832-f67f-4c2b-959a-9d2d51a02268]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.748 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-8a649c29-105b-45ad-91b4-9f7a7c58b419
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/8a649c29-105b-45ad-91b4-9f7a7c58b419.pid.haproxy
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 8a649c29-105b-45ad-91b4-9f7a7c58b419
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:48:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:29.749 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419', 'env', 'PROCESS_TAG=haproxy-8a649c29-105b-45ad-91b4-9f7a7c58b419', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8a649c29-105b-45ad-91b4-9f7a7c58b419.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:48:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:48:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2583: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 13 03:48:30 np0005558241 podman[353145]: 2025-12-13 08:48:30.114693411 +0000 UTC m=+0.052016726 container create 1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 03:48:30 np0005558241 systemd[1]: Started libpod-conmon-1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429.scope.
Dec 13 03:48:30 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:48:30 np0005558241 podman[353145]: 2025-12-13 08:48:30.086386341 +0000 UTC m=+0.023709676 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:48:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7930ac85e7195342e64df4fba87493093ddc740f0c0e309c4994565a0f79486e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:48:30 np0005558241 podman[353145]: 2025-12-13 08:48:30.205133091 +0000 UTC m=+0.142456416 container init 1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:48:30 np0005558241 podman[353145]: 2025-12-13 08:48:30.211850489 +0000 UTC m=+0.149173804 container start 1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:48:30 np0005558241 neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419[353160]: [NOTICE]   (353164) : New worker (353166) forked
Dec 13 03:48:30 np0005558241 neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419[353160]: [NOTICE]   (353164) : Loading success.
Dec 13 03:48:30 np0005558241 nova_compute[248510]: 2025-12-13 08:48:30.384 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615710.3838782, 99100320-043d-4f13-ac93-5fd3309abbf7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:48:30 np0005558241 nova_compute[248510]: 2025-12-13 08:48:30.385 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] VM Started (Lifecycle Event)#033[00m
Dec 13 03:48:30 np0005558241 nova_compute[248510]: 2025-12-13 08:48:30.411 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:48:30 np0005558241 nova_compute[248510]: 2025-12-13 08:48:30.415 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615710.3844368, 99100320-043d-4f13-ac93-5fd3309abbf7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:48:30 np0005558241 nova_compute[248510]: 2025-12-13 08:48:30.415 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:48:30 np0005558241 nova_compute[248510]: 2025-12-13 08:48:30.437 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:48:30 np0005558241 nova_compute[248510]: 2025-12-13 08:48:30.441 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:48:30 np0005558241 nova_compute[248510]: 2025-12-13 08:48:30.467 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:48:31 np0005558241 nova_compute[248510]: 2025-12-13 08:48:31.836 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2584: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 13 03:48:32 np0005558241 nova_compute[248510]: 2025-12-13 08:48:32.951 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2585: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.674 248514 DEBUG nova.compute.manager [req-ade74023-ed04-466c-8ed7-f35df06a885a req-27fc161a-d3b7-4e0c-9a92-e8d25467bd3d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received event network-vif-plugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.674 248514 DEBUG oslo_concurrency.lockutils [req-ade74023-ed04-466c-8ed7-f35df06a885a req-27fc161a-d3b7-4e0c-9a92-e8d25467bd3d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.675 248514 DEBUG oslo_concurrency.lockutils [req-ade74023-ed04-466c-8ed7-f35df06a885a req-27fc161a-d3b7-4e0c-9a92-e8d25467bd3d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.675 248514 DEBUG oslo_concurrency.lockutils [req-ade74023-ed04-466c-8ed7-f35df06a885a req-27fc161a-d3b7-4e0c-9a92-e8d25467bd3d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.675 248514 DEBUG nova.compute.manager [req-ade74023-ed04-466c-8ed7-f35df06a885a req-27fc161a-d3b7-4e0c-9a92-e8d25467bd3d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Processing event network-vif-plugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.676 248514 DEBUG nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.678 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615714.6786327, 99100320-043d-4f13-ac93-5fd3309abbf7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.679 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.681 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.683 248514 INFO nova.virt.libvirt.driver [-] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Instance spawned successfully.#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.683 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.749 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.754 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.758 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.758 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.759 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.759 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.759 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.760 248514 DEBUG nova.virt.libvirt.driver [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:48:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.794 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.920 248514 INFO nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Took 14.85 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:48:34 np0005558241 nova_compute[248510]: 2025-12-13 08:48:34.921 248514 DEBUG nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:48:35 np0005558241 nova_compute[248510]: 2025-12-13 08:48:35.021 248514 INFO nova.compute.manager [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Took 16.02 seconds to build instance.#033[00m
Dec 13 03:48:35 np0005558241 nova_compute[248510]: 2025-12-13 08:48:35.064 248514 DEBUG oslo_concurrency.lockutils [None req-cffb09b0-dce3-4bdd-9517-a224e80c74b5 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:48:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2586: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 21 KiB/s wr, 11 op/s
Dec 13 03:48:36 np0005558241 nova_compute[248510]: 2025-12-13 08:48:36.838 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:36 np0005558241 nova_compute[248510]: 2025-12-13 08:48:36.881 248514 DEBUG nova.compute.manager [req-ce73764a-947c-49d9-bfe3-4ad5c729bbd7 req-63721c7a-b316-4a14-86b9-2f63f879905f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received event network-vif-plugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:48:36 np0005558241 nova_compute[248510]: 2025-12-13 08:48:36.881 248514 DEBUG oslo_concurrency.lockutils [req-ce73764a-947c-49d9-bfe3-4ad5c729bbd7 req-63721c7a-b316-4a14-86b9-2f63f879905f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:48:36 np0005558241 nova_compute[248510]: 2025-12-13 08:48:36.882 248514 DEBUG oslo_concurrency.lockutils [req-ce73764a-947c-49d9-bfe3-4ad5c729bbd7 req-63721c7a-b316-4a14-86b9-2f63f879905f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:48:36 np0005558241 nova_compute[248510]: 2025-12-13 08:48:36.882 248514 DEBUG oslo_concurrency.lockutils [req-ce73764a-947c-49d9-bfe3-4ad5c729bbd7 req-63721c7a-b316-4a14-86b9-2f63f879905f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:48:36 np0005558241 nova_compute[248510]: 2025-12-13 08:48:36.883 248514 DEBUG nova.compute.manager [req-ce73764a-947c-49d9-bfe3-4ad5c729bbd7 req-63721c7a-b316-4a14-86b9-2f63f879905f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] No waiting events found dispatching network-vif-plugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:48:36 np0005558241 nova_compute[248510]: 2025-12-13 08:48:36.883 248514 WARNING nova.compute.manager [req-ce73764a-947c-49d9-bfe3-4ad5c729bbd7 req-63721c7a-b316-4a14-86b9-2f63f879905f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received unexpected event network-vif-plugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:48:38 np0005558241 nova_compute[248510]: 2025-12-13 08:48:38.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2587: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 954 KiB/s rd, 21 KiB/s wr, 41 op/s
Dec 13 03:48:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:48:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2588: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 74 op/s
Dec 13 03:48:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:48:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:48:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:48:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:48:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:48:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:48:41 np0005558241 nova_compute[248510]: 2025-12-13 08:48:41.842 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2589: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 03:48:42 np0005558241 nova_compute[248510]: 2025-12-13 08:48:42.147 248514 DEBUG nova.compute.manager [req-4edc7b7d-038c-4a62-bc08-f9c676dfa146 req-3383ebb0-e915-42f4-82c9-4f4835f172fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received event network-changed-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:48:42 np0005558241 nova_compute[248510]: 2025-12-13 08:48:42.148 248514 DEBUG nova.compute.manager [req-4edc7b7d-038c-4a62-bc08-f9c676dfa146 req-3383ebb0-e915-42f4-82c9-4f4835f172fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Refreshing instance network info cache due to event network-changed-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:48:42 np0005558241 nova_compute[248510]: 2025-12-13 08:48:42.148 248514 DEBUG oslo_concurrency.lockutils [req-4edc7b7d-038c-4a62-bc08-f9c676dfa146 req-3383ebb0-e915-42f4-82c9-4f4835f172fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:48:42 np0005558241 nova_compute[248510]: 2025-12-13 08:48:42.148 248514 DEBUG oslo_concurrency.lockutils [req-4edc7b7d-038c-4a62-bc08-f9c676dfa146 req-3383ebb0-e915-42f4-82c9-4f4835f172fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:48:42 np0005558241 nova_compute[248510]: 2025-12-13 08:48:42.149 248514 DEBUG nova.network.neutron [req-4edc7b7d-038c-4a62-bc08-f9c676dfa146 req-3383ebb0-e915-42f4-82c9-4f4835f172fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Refreshing network info cache for port 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:48:43 np0005558241 nova_compute[248510]: 2025-12-13 08:48:43.008 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2590: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 03:48:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:48:44 np0005558241 nova_compute[248510]: 2025-12-13 08:48:44.863 248514 DEBUG nova.network.neutron [req-4edc7b7d-038c-4a62-bc08-f9c676dfa146 req-3383ebb0-e915-42f4-82c9-4f4835f172fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Updated VIF entry in instance network info cache for port 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:48:44 np0005558241 nova_compute[248510]: 2025-12-13 08:48:44.864 248514 DEBUG nova.network.neutron [req-4edc7b7d-038c-4a62-bc08-f9c676dfa146 req-3383ebb0-e915-42f4-82c9-4f4835f172fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Updating instance_info_cache with network_info: [{"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:48:44 np0005558241 nova_compute[248510]: 2025-12-13 08:48:44.887 248514 DEBUG oslo_concurrency.lockutils [req-4edc7b7d-038c-4a62-bc08-f9c676dfa146 req-3383ebb0-e915-42f4-82c9-4f4835f172fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:48:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2591: 321 pgs: 321 active+clean; 167 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 13 03:48:46 np0005558241 ovn_controller[148476]: 2025-12-13T08:48:46Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:df:76:90 10.100.0.10
Dec 13 03:48:46 np0005558241 ovn_controller[148476]: 2025-12-13T08:48:46Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:df:76:90 10.100.0.10
Dec 13 03:48:46 np0005558241 nova_compute[248510]: 2025-12-13 08:48:46.843 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:48 np0005558241 nova_compute[248510]: 2025-12-13 08:48:48.010 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2592: 321 pgs: 321 active+clean; 174 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 706 KiB/s wr, 82 op/s
Dec 13 03:48:48 np0005558241 podman[353222]: 2025-12-13 08:48:48.986845025 +0000 UTC m=+0.066708316 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:48:48 np0005558241 podman[353221]: 2025-12-13 08:48:48.991479661 +0000 UTC m=+0.071534116 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2)
Dec 13 03:48:49 np0005558241 podman[353220]: 2025-12-13 08:48:49.011007281 +0000 UTC m=+0.104342400 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:48:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:48:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2593: 321 pgs: 321 active+clean; 200 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Dec 13 03:48:51 np0005558241 nova_compute[248510]: 2025-12-13 08:48:51.845 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2594: 321 pgs: 321 active+clean; 200 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 03:48:53 np0005558241 nova_compute[248510]: 2025-12-13 08:48:53.013 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:48:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:48:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:48:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:48:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:48:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:48:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:48:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:48:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:48:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:48:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:48:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:48:53 np0005558241 podman[353425]: 2025-12-13 08:48:53.664244446 +0000 UTC m=+0.041411469 container create 318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:48:53 np0005558241 systemd[1]: Started libpod-conmon-318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80.scope.
Dec 13 03:48:53 np0005558241 podman[353425]: 2025-12-13 08:48:53.64565211 +0000 UTC m=+0.022819163 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:48:53 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:48:53 np0005558241 podman[353425]: 2025-12-13 08:48:53.833283054 +0000 UTC m=+0.210450177 container init 318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 03:48:53 np0005558241 podman[353425]: 2025-12-13 08:48:53.844153517 +0000 UTC m=+0.221320580 container start 318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:48:53 np0005558241 podman[353425]: 2025-12-13 08:48:53.851383098 +0000 UTC m=+0.228550151 container attach 318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:48:53 np0005558241 quirky_pare[353441]: 167 167
Dec 13 03:48:53 np0005558241 systemd[1]: libpod-318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80.scope: Deactivated successfully.
Dec 13 03:48:53 np0005558241 conmon[353441]: conmon 318a9775b505a30abb2f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80.scope/container/memory.events
Dec 13 03:48:53 np0005558241 podman[353425]: 2025-12-13 08:48:53.854319332 +0000 UTC m=+0.231486355 container died 318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:48:53 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d5a70b09ab64113ba21e2d8ed4ce3d378893a6174d4deb25e7cfc39527f7b535-merged.mount: Deactivated successfully.
Dec 13 03:48:53 np0005558241 podman[353425]: 2025-12-13 08:48:53.902234043 +0000 UTC m=+0.279401066 container remove 318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 03:48:53 np0005558241 systemd[1]: libpod-conmon-318a9775b505a30abb2fc98c52c3e84fcc6b320cfd4aeb2236add7c4af4e8a80.scope: Deactivated successfully.
Dec 13 03:48:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2595: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 03:48:54 np0005558241 podman[353465]: 2025-12-13 08:48:54.087927479 +0000 UTC m=+0.022871895 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:48:54 np0005558241 podman[353465]: 2025-12-13 08:48:54.188303556 +0000 UTC m=+0.123247922 container create 3f0fb66be30ec6dcc36def4e840c2dcf055242acbe1aa07c7a2b72466b9e1c1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:48:54 np0005558241 systemd[1]: Started libpod-conmon-3f0fb66be30ec6dcc36def4e840c2dcf055242acbe1aa07c7a2b72466b9e1c1e.scope.
Dec 13 03:48:54 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:48:54 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:48:54 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:48:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:48:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/019ce3a5b8bc331aae88f36e7e57f277fd4dfb2bc0b669e0b31f236b4d348186/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:48:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/019ce3a5b8bc331aae88f36e7e57f277fd4dfb2bc0b669e0b31f236b4d348186/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:48:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/019ce3a5b8bc331aae88f36e7e57f277fd4dfb2bc0b669e0b31f236b4d348186/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:48:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/019ce3a5b8bc331aae88f36e7e57f277fd4dfb2bc0b669e0b31f236b4d348186/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:48:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/019ce3a5b8bc331aae88f36e7e57f277fd4dfb2bc0b669e0b31f236b4d348186/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:48:54 np0005558241 podman[353465]: 2025-12-13 08:48:54.505992561 +0000 UTC m=+0.440936957 container init 3f0fb66be30ec6dcc36def4e840c2dcf055242acbe1aa07c7a2b72466b9e1c1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:48:54 np0005558241 podman[353465]: 2025-12-13 08:48:54.514420093 +0000 UTC m=+0.449364459 container start 3f0fb66be30ec6dcc36def4e840c2dcf055242acbe1aa07c7a2b72466b9e1c1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wiles, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:48:54 np0005558241 podman[353465]: 2025-12-13 08:48:54.523634364 +0000 UTC m=+0.458578740 container attach 3f0fb66be30ec6dcc36def4e840c2dcf055242acbe1aa07c7a2b72466b9e1c1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:48:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:48:55 np0005558241 confident_wiles[353481]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:48:55 np0005558241 confident_wiles[353481]: --> All data devices are unavailable
Dec 13 03:48:55 np0005558241 systemd[1]: libpod-3f0fb66be30ec6dcc36def4e840c2dcf055242acbe1aa07c7a2b72466b9e1c1e.scope: Deactivated successfully.
Dec 13 03:48:55 np0005558241 podman[353465]: 2025-12-13 08:48:55.078960367 +0000 UTC m=+1.013904733 container died 3f0fb66be30ec6dcc36def4e840c2dcf055242acbe1aa07c7a2b72466b9e1c1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:48:55 np0005558241 systemd[1]: var-lib-containers-storage-overlay-019ce3a5b8bc331aae88f36e7e57f277fd4dfb2bc0b669e0b31f236b4d348186-merged.mount: Deactivated successfully.
Dec 13 03:48:55 np0005558241 podman[353465]: 2025-12-13 08:48:55.142654354 +0000 UTC m=+1.077598720 container remove 3f0fb66be30ec6dcc36def4e840c2dcf055242acbe1aa07c7a2b72466b9e1c1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:48:55 np0005558241 systemd[1]: libpod-conmon-3f0fb66be30ec6dcc36def4e840c2dcf055242acbe1aa07c7a2b72466b9e1c1e.scope: Deactivated successfully.
Dec 13 03:48:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:55.427 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:48:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:55.429 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:48:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:48:55.429 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:48:55 np0005558241 podman[353577]: 2025-12-13 08:48:55.678692234 +0000 UTC m=+0.042448085 container create 4c83a67f7ddd8092ca91d48c7d217d09640f4dfa5382baa31747d6bcd8e79af9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:48:55 np0005558241 systemd[1]: Started libpod-conmon-4c83a67f7ddd8092ca91d48c7d217d09640f4dfa5382baa31747d6bcd8e79af9.scope.
Dec 13 03:48:55 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:48:55 np0005558241 podman[353577]: 2025-12-13 08:48:55.748593417 +0000 UTC m=+0.112349288 container init 4c83a67f7ddd8092ca91d48c7d217d09640f4dfa5382baa31747d6bcd8e79af9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:48:55 np0005558241 podman[353577]: 2025-12-13 08:48:55.658388335 +0000 UTC m=+0.022144206 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:48:55 np0005558241 podman[353577]: 2025-12-13 08:48:55.756090775 +0000 UTC m=+0.119846646 container start 4c83a67f7ddd8092ca91d48c7d217d09640f4dfa5382baa31747d6bcd8e79af9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hertz, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 03:48:55 np0005558241 podman[353577]: 2025-12-13 08:48:55.759213883 +0000 UTC m=+0.122969734 container attach 4c83a67f7ddd8092ca91d48c7d217d09640f4dfa5382baa31747d6bcd8e79af9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:48:55 np0005558241 clever_hertz[353594]: 167 167
Dec 13 03:48:55 np0005558241 systemd[1]: libpod-4c83a67f7ddd8092ca91d48c7d217d09640f4dfa5382baa31747d6bcd8e79af9.scope: Deactivated successfully.
Dec 13 03:48:55 np0005558241 podman[353577]: 2025-12-13 08:48:55.760842414 +0000 UTC m=+0.124598255 container died 4c83a67f7ddd8092ca91d48c7d217d09640f4dfa5382baa31747d6bcd8e79af9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 03:48:55 np0005558241 nova_compute[248510]: 2025-12-13 08:48:55.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:48:55 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d716b08ff5d2e9cfcf9b58877788931e2f2f205dee78bb27f4b82cda8bf932d0-merged.mount: Deactivated successfully.
Dec 13 03:48:55 np0005558241 podman[353577]: 2025-12-13 08:48:55.797063252 +0000 UTC m=+0.160819093 container remove 4c83a67f7ddd8092ca91d48c7d217d09640f4dfa5382baa31747d6bcd8e79af9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hertz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:48:55 np0005558241 systemd[1]: libpod-conmon-4c83a67f7ddd8092ca91d48c7d217d09640f4dfa5382baa31747d6bcd8e79af9.scope: Deactivated successfully.
Dec 13 03:48:55 np0005558241 podman[353617]: 2025-12-13 08:48:55.979583638 +0000 UTC m=+0.042279791 container create ba824f942facc77b8909bbda5f684f2499f9f96b269e1beacd8153346aa872e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chebyshev, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:48:56 np0005558241 systemd[1]: Started libpod-conmon-ba824f942facc77b8909bbda5f684f2499f9f96b269e1beacd8153346aa872e8.scope.
Dec 13 03:48:56 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:48:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2596: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 03:48:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17b4524f8fd7c654732e03a2e50a31fe5b57cfd685eaae8f7e872c31a9d1f26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:48:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17b4524f8fd7c654732e03a2e50a31fe5b57cfd685eaae8f7e872c31a9d1f26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:48:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17b4524f8fd7c654732e03a2e50a31fe5b57cfd685eaae8f7e872c31a9d1f26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:48:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17b4524f8fd7c654732e03a2e50a31fe5b57cfd685eaae8f7e872c31a9d1f26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:48:56 np0005558241 podman[353617]: 2025-12-13 08:48:55.963331671 +0000 UTC m=+0.026027844 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:48:56 np0005558241 podman[353617]: 2025-12-13 08:48:56.069703728 +0000 UTC m=+0.132399931 container init ba824f942facc77b8909bbda5f684f2499f9f96b269e1beacd8153346aa872e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 03:48:56 np0005558241 podman[353617]: 2025-12-13 08:48:56.077337669 +0000 UTC m=+0.140033822 container start ba824f942facc77b8909bbda5f684f2499f9f96b269e1beacd8153346aa872e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chebyshev, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 03:48:56 np0005558241 podman[353617]: 2025-12-13 08:48:56.08096691 +0000 UTC m=+0.143663073 container attach ba824f942facc77b8909bbda5f684f2499f9f96b269e1beacd8153346aa872e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chebyshev, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]: {
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:    "0": [
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:        {
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "devices": [
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "/dev/loop3"
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            ],
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "lv_name": "ceph_lv0",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "lv_size": "21470642176",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "name": "ceph_lv0",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "tags": {
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.cluster_name": "ceph",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.crush_device_class": "",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.encrypted": "0",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.objectstore": "bluestore",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.osd_id": "0",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.type": "block",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.vdo": "0",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.with_tpm": "0"
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            },
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "type": "block",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "vg_name": "ceph_vg0"
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:        }
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:    ],
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:    "1": [
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:        {
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "devices": [
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "/dev/loop4"
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            ],
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "lv_name": "ceph_lv1",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "lv_size": "21470642176",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "name": "ceph_lv1",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "tags": {
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.cluster_name": "ceph",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.crush_device_class": "",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.encrypted": "0",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.objectstore": "bluestore",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.osd_id": "1",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.type": "block",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.vdo": "0",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.with_tpm": "0"
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            },
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "type": "block",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "vg_name": "ceph_vg1"
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:        }
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:    ],
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:    "2": [
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:        {
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "devices": [
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "/dev/loop5"
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            ],
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "lv_name": "ceph_lv2",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "lv_size": "21470642176",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "name": "ceph_lv2",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "tags": {
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.cluster_name": "ceph",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.crush_device_class": "",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.encrypted": "0",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.objectstore": "bluestore",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.osd_id": "2",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.type": "block",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.vdo": "0",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:                "ceph.with_tpm": "0"
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            },
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "type": "block",
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:            "vg_name": "ceph_vg2"
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:        }
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]:    ]
Dec 13 03:48:56 np0005558241 cool_chebyshev[353633]: }
Dec 13 03:48:56 np0005558241 systemd[1]: libpod-ba824f942facc77b8909bbda5f684f2499f9f96b269e1beacd8153346aa872e8.scope: Deactivated successfully.
Dec 13 03:48:56 np0005558241 podman[353642]: 2025-12-13 08:48:56.434541265 +0000 UTC m=+0.024949467 container died ba824f942facc77b8909bbda5f684f2499f9f96b269e1beacd8153346aa872e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chebyshev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:48:56 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c17b4524f8fd7c654732e03a2e50a31fe5b57cfd685eaae8f7e872c31a9d1f26-merged.mount: Deactivated successfully.
Dec 13 03:48:56 np0005558241 podman[353642]: 2025-12-13 08:48:56.475160333 +0000 UTC m=+0.065568525 container remove ba824f942facc77b8909bbda5f684f2499f9f96b269e1beacd8153346aa872e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chebyshev, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 03:48:56 np0005558241 systemd[1]: libpod-conmon-ba824f942facc77b8909bbda5f684f2499f9f96b269e1beacd8153346aa872e8.scope: Deactivated successfully.
Dec 13 03:48:56 np0005558241 nova_compute[248510]: 2025-12-13 08:48:56.848 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:56 np0005558241 podman[353717]: 2025-12-13 08:48:56.921093634 +0000 UTC m=+0.040500847 container create 2bd91fbb8cc361d50747e2930f965826b9af4e0532b5951fb499d0653746cd36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lewin, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 03:48:56 np0005558241 systemd[1]: Started libpod-conmon-2bd91fbb8cc361d50747e2930f965826b9af4e0532b5951fb499d0653746cd36.scope.
Dec 13 03:48:56 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:48:56 np0005558241 podman[353717]: 2025-12-13 08:48:56.902645831 +0000 UTC m=+0.022053064 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:48:57 np0005558241 podman[353717]: 2025-12-13 08:48:57.000285929 +0000 UTC m=+0.119693172 container init 2bd91fbb8cc361d50747e2930f965826b9af4e0532b5951fb499d0653746cd36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 03:48:57 np0005558241 podman[353717]: 2025-12-13 08:48:57.00908445 +0000 UTC m=+0.128491663 container start 2bd91fbb8cc361d50747e2930f965826b9af4e0532b5951fb499d0653746cd36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:48:57 np0005558241 podman[353717]: 2025-12-13 08:48:57.013050909 +0000 UTC m=+0.132458182 container attach 2bd91fbb8cc361d50747e2930f965826b9af4e0532b5951fb499d0653746cd36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lewin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:48:57 np0005558241 zen_lewin[353734]: 167 167
Dec 13 03:48:57 np0005558241 systemd[1]: libpod-2bd91fbb8cc361d50747e2930f965826b9af4e0532b5951fb499d0653746cd36.scope: Deactivated successfully.
Dec 13 03:48:57 np0005558241 podman[353739]: 2025-12-13 08:48:57.059543755 +0000 UTC m=+0.029102791 container died 2bd91fbb8cc361d50747e2930f965826b9af4e0532b5951fb499d0653746cd36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:48:57 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d961f2016584af4e21d2ee14f666e1ee307e64512d42a05366b6f8bc0636d81e-merged.mount: Deactivated successfully.
Dec 13 03:48:57 np0005558241 podman[353739]: 2025-12-13 08:48:57.093659891 +0000 UTC m=+0.063218887 container remove 2bd91fbb8cc361d50747e2930f965826b9af4e0532b5951fb499d0653746cd36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_lewin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 03:48:57 np0005558241 systemd[1]: libpod-conmon-2bd91fbb8cc361d50747e2930f965826b9af4e0532b5951fb499d0653746cd36.scope: Deactivated successfully.
Dec 13 03:48:57 np0005558241 podman[353762]: 2025-12-13 08:48:57.279946371 +0000 UTC m=+0.040392893 container create f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 03:48:57 np0005558241 systemd[1]: Started libpod-conmon-f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed.scope.
Dec 13 03:48:57 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:48:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55037d95bbfff40f9d28ffaf1d8603a419e00da9a81eaac3288efdc667492088/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:48:57 np0005558241 podman[353762]: 2025-12-13 08:48:57.262775621 +0000 UTC m=+0.023222163 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:48:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55037d95bbfff40f9d28ffaf1d8603a419e00da9a81eaac3288efdc667492088/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:48:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55037d95bbfff40f9d28ffaf1d8603a419e00da9a81eaac3288efdc667492088/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:48:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55037d95bbfff40f9d28ffaf1d8603a419e00da9a81eaac3288efdc667492088/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:48:57 np0005558241 podman[353762]: 2025-12-13 08:48:57.377830496 +0000 UTC m=+0.138277048 container init f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:48:57 np0005558241 podman[353762]: 2025-12-13 08:48:57.38439221 +0000 UTC m=+0.144838732 container start f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_meninsky, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 03:48:57 np0005558241 podman[353762]: 2025-12-13 08:48:57.389126139 +0000 UTC m=+0.149572691 container attach f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:48:58 np0005558241 nova_compute[248510]: 2025-12-13 08:48:58.016 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:48:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2597: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 03:48:58 np0005558241 lvm[353858]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:48:58 np0005558241 lvm[353857]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:48:58 np0005558241 lvm[353858]: VG ceph_vg1 finished
Dec 13 03:48:58 np0005558241 lvm[353857]: VG ceph_vg0 finished
Dec 13 03:48:58 np0005558241 lvm[353860]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:48:58 np0005558241 lvm[353860]: VG ceph_vg2 finished
Dec 13 03:48:58 np0005558241 nervous_meninsky[353779]: {}
Dec 13 03:48:58 np0005558241 systemd[1]: libpod-f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed.scope: Deactivated successfully.
Dec 13 03:48:58 np0005558241 systemd[1]: libpod-f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed.scope: Consumed 1.385s CPU time.
Dec 13 03:48:58 np0005558241 podman[353762]: 2025-12-13 08:48:58.24509456 +0000 UTC m=+1.005541112 container died f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_meninsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:48:58 np0005558241 systemd[1]: var-lib-containers-storage-overlay-55037d95bbfff40f9d28ffaf1d8603a419e00da9a81eaac3288efdc667492088-merged.mount: Deactivated successfully.
Dec 13 03:48:58 np0005558241 podman[353762]: 2025-12-13 08:48:58.289794061 +0000 UTC m=+1.050240583 container remove f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_meninsky, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:48:58 np0005558241 systemd[1]: libpod-conmon-f49e719c5e89ad762e9631821df9ec4d2276f4b77caa41a73e9e9b00fdd917ed.scope: Deactivated successfully.
Dec 13 03:48:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:48:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:48:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:48:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:48:58 np0005558241 nova_compute[248510]: 2025-12-13 08:48:58.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:48:59 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:48:59 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:48:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:49:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2598: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 265 KiB/s rd, 1.5 MiB/s wr, 44 op/s
Dec 13 03:49:01 np0005558241 nova_compute[248510]: 2025-12-13 08:49:01.850 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2599: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Dec 13 03:49:03 np0005558241 nova_compute[248510]: 2025-12-13 08:49:03.020 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2600: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 1 op/s
Dec 13 03:49:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:49:04 np0005558241 nova_compute[248510]: 2025-12-13 08:49:04.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:49:04 np0005558241 nova_compute[248510]: 2025-12-13 08:49:04.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:49:04 np0005558241 nova_compute[248510]: 2025-12-13 08:49:04.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:49:05 np0005558241 nova_compute[248510]: 2025-12-13 08:49:05.087 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:49:05 np0005558241 nova_compute[248510]: 2025-12-13 08:49:05.087 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:49:05 np0005558241 nova_compute[248510]: 2025-12-13 08:49:05.088 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:49:05 np0005558241 nova_compute[248510]: 2025-12-13 08:49:05.088 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:49:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2601: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 426 B/s wr, 1 op/s
Dec 13 03:49:06 np0005558241 nova_compute[248510]: 2025-12-13 08:49:06.864 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:07 np0005558241 nova_compute[248510]: 2025-12-13 08:49:07.630 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updating instance_info_cache with network_info: [{"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:49:07 np0005558241 nova_compute[248510]: 2025-12-13 08:49:07.658 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:49:07 np0005558241 nova_compute[248510]: 2025-12-13 08:49:07.659 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:49:07 np0005558241 nova_compute[248510]: 2025-12-13 08:49:07.660 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:49:07 np0005558241 nova_compute[248510]: 2025-12-13 08:49:07.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:49:07 np0005558241 nova_compute[248510]: 2025-12-13 08:49:07.797 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:49:07 np0005558241 nova_compute[248510]: 2025-12-13 08:49:07.797 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:49:07 np0005558241 nova_compute[248510]: 2025-12-13 08:49:07.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:49:07 np0005558241 nova_compute[248510]: 2025-12-13 08:49:07.798 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:49:07 np0005558241 nova_compute[248510]: 2025-12-13 08:49:07.799 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:49:08 np0005558241 nova_compute[248510]: 2025-12-13 08:49:08.024 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2602: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 767 B/s wr, 1 op/s
Dec 13 03:49:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:08.326 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:49:08 np0005558241 nova_compute[248510]: 2025-12-13 08:49:08.327 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:08.327 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:49:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:49:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3256843846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:49:08 np0005558241 nova_compute[248510]: 2025-12-13 08:49:08.424 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:49:08 np0005558241 nova_compute[248510]: 2025-12-13 08:49:08.535 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:49:08 np0005558241 nova_compute[248510]: 2025-12-13 08:49:08.536 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:49:08 np0005558241 nova_compute[248510]: 2025-12-13 08:49:08.539 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:49:08 np0005558241 nova_compute[248510]: 2025-12-13 08:49:08.540 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:49:08 np0005558241 nova_compute[248510]: 2025-12-13 08:49:08.709 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:49:08 np0005558241 nova_compute[248510]: 2025-12-13 08:49:08.710 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3299MB free_disk=59.89645113516599GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:49:08 np0005558241 nova_compute[248510]: 2025-12-13 08:49:08.710 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:49:08 np0005558241 nova_compute[248510]: 2025-12-13 08:49:08.711 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:49:08 np0005558241 nova_compute[248510]: 2025-12-13 08:49:08.872 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:49:08 np0005558241 nova_compute[248510]: 2025-12-13 08:49:08.872 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 99100320-043d-4f13-ac93-5fd3309abbf7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:49:08 np0005558241 nova_compute[248510]: 2025-12-13 08:49:08.872 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:49:08 np0005558241 nova_compute[248510]: 2025-12-13 08:49:08.873 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:49:08 np0005558241 nova_compute[248510]: 2025-12-13 08:49:08.988 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.081 248514 DEBUG nova.compute.manager [req-32befc81-0a20-4cbb-b397-f797917cacf5 req-1fb6e238-d78e-46c1-be92-171ce26d1fa2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received event network-changed-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.082 248514 DEBUG nova.compute.manager [req-32befc81-0a20-4cbb-b397-f797917cacf5 req-1fb6e238-d78e-46c1-be92-171ce26d1fa2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Refreshing instance network info cache due to event network-changed-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.082 248514 DEBUG oslo_concurrency.lockutils [req-32befc81-0a20-4cbb-b397-f797917cacf5 req-1fb6e238-d78e-46c1-be92-171ce26d1fa2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.082 248514 DEBUG oslo_concurrency.lockutils [req-32befc81-0a20-4cbb-b397-f797917cacf5 req-1fb6e238-d78e-46c1-be92-171ce26d1fa2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.083 248514 DEBUG nova.network.neutron [req-32befc81-0a20-4cbb-b397-f797917cacf5 req-1fb6e238-d78e-46c1-be92-171ce26d1fa2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Refreshing network info cache for port 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.154 248514 DEBUG oslo_concurrency.lockutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquiring lock "99100320-043d-4f13-ac93-5fd3309abbf7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.155 248514 DEBUG oslo_concurrency.lockutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.156 248514 DEBUG oslo_concurrency.lockutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquiring lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.156 248514 DEBUG oslo_concurrency.lockutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.157 248514 DEBUG oslo_concurrency.lockutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.159 248514 INFO nova.compute.manager [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Terminating instance#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.161 248514 DEBUG nova.compute.manager [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:49:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:49:09
Dec 13 03:49:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:49:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:49:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', '.rgw.root', 'volumes', 'images', 'vms', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Dec 13 03:49:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:49:09 np0005558241 kernel: tap5027cfa3-f4 (unregistering): left promiscuous mode
Dec 13 03:49:09 np0005558241 NetworkManager[50376]: <info>  [1765615749.4769] device (tap5027cfa3-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:49:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:49:09Z|01040|binding|INFO|Releasing lport 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 from this chassis (sb_readonly=0)
Dec 13 03:49:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:49:09Z|01041|binding|INFO|Setting lport 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 down in Southbound
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.483 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:49:09Z|01042|binding|INFO|Removing iface tap5027cfa3-f4 ovn-installed in OVS
Dec 13 03:49:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:09.491 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:76:90 10.100.0.10'], port_security=['fa:16:3e:df:76:90 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '99100320-043d-4f13-ac93-5fd3309abbf7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a649c29-105b-45ad-91b4-9f7a7c58b419', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b75a9df2d3584458bf4c9c127010a4d1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2a4d558-076e-40f7-a16c-096cc56882f7 fa0ae845-58e9-4e37-8129-696ba9497adb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e9afcf2-3319-47a2-a247-8f6d1e6c6308, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=5027cfa3-f4ed-4668-87ec-ebfe75f4fb14) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:49:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:09.492 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 in datapath 8a649c29-105b-45ad-91b4-9f7a7c58b419 unbound from our chassis#033[00m
Dec 13 03:49:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:09.494 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8a649c29-105b-45ad-91b4-9f7a7c58b419, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:49:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:09.499 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[976c2973-c817-4966-a4b9-372c92116832]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.507 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:09.507 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419 namespace which is not needed anymore#033[00m
Dec 13 03:49:09 np0005558241 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Dec 13 03:49:09 np0005558241 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d0000006a.scope: Consumed 14.213s CPU time.
Dec 13 03:49:09 np0005558241 systemd-machined[210538]: Machine qemu-132-instance-0000006a terminated.
Dec 13 03:49:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:49:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4062591854' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.598 248514 INFO nova.virt.libvirt.driver [-] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Instance destroyed successfully.#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.599 248514 DEBUG nova.objects.instance [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lazy-loading 'resources' on Instance uuid 99100320-043d-4f13-ac93-5fd3309abbf7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.617 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.620 248514 DEBUG nova.virt.libvirt.vif [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:48:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-647979759-access_point-1909280991',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-647979759-access_point-1909280991',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-647979759-acc',id=106,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAq9JxVWdGwadoCAlDLVLF/FgbwuunvGPYPWGxv9o2qZSwXgRBKF+h53qswJupwtL+dZgpz/rFzjqXvS7XDDi2cr6DR4JG28HfzJLgzD6wSuJyxP2VMxhs/n7K+Z53vcIw==',key_name='tempest-TestSecurityGroupsBasicOps-1306110225',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:48:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b75a9df2d3584458bf4c9c127010a4d1',ramdisk_id='',reservation_id='r-vazklzs4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-647979759',owner_user_name='tempest-TestSecurityGroupsBasicOps-647979759-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:48:34Z,user_data=None,user_id='649a4118d92a4ee68ff645ddec797a5a',uuid=99100320-043d-4f13-ac93-5fd3309abbf7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.621 248514 DEBUG nova.network.os_vif_util [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Converting VIF {"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.622 248514 DEBUG nova.network.os_vif_util [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:76:90,bridge_name='br-int',has_traffic_filtering=True,id=5027cfa3-f4ed-4668-87ec-ebfe75f4fb14,network=Network(8a649c29-105b-45ad-91b4-9f7a7c58b419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5027cfa3-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.623 248514 DEBUG os_vif [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:76:90,bridge_name='br-int',has_traffic_filtering=True,id=5027cfa3-f4ed-4668-87ec-ebfe75f4fb14,network=Network(8a649c29-105b-45ad-91b4-9f7a7c58b419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5027cfa3-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.626 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.626 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5027cfa3-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.628 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.630 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.633 248514 INFO os_vif [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:76:90,bridge_name='br-int',has_traffic_filtering=True,id=5027cfa3-f4ed-4668-87ec-ebfe75f4fb14,network=Network(8a649c29-105b-45ad-91b4-9f7a7c58b419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5027cfa3-f4')#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.653 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.678 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.703 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.704 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:49:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.953 248514 DEBUG nova.compute.manager [req-8c1ea787-0a21-4a83-a737-6f7e9e9ffd3d req-8e9df576-00bb-4c30-b214-772d0626d64f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received event network-vif-unplugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.954 248514 DEBUG oslo_concurrency.lockutils [req-8c1ea787-0a21-4a83-a737-6f7e9e9ffd3d req-8e9df576-00bb-4c30-b214-772d0626d64f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.954 248514 DEBUG oslo_concurrency.lockutils [req-8c1ea787-0a21-4a83-a737-6f7e9e9ffd3d req-8e9df576-00bb-4c30-b214-772d0626d64f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.954 248514 DEBUG oslo_concurrency.lockutils [req-8c1ea787-0a21-4a83-a737-6f7e9e9ffd3d req-8e9df576-00bb-4c30-b214-772d0626d64f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.955 248514 DEBUG nova.compute.manager [req-8c1ea787-0a21-4a83-a737-6f7e9e9ffd3d req-8e9df576-00bb-4c30-b214-772d0626d64f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] No waiting events found dispatching network-vif-unplugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:49:09 np0005558241 nova_compute[248510]: 2025-12-13 08:49:09.955 248514 DEBUG nova.compute.manager [req-8c1ea787-0a21-4a83-a737-6f7e9e9ffd3d req-8e9df576-00bb-4c30-b214-772d0626d64f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received event network-vif-unplugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:49:10 np0005558241 neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419[353160]: [NOTICE]   (353164) : haproxy version is 2.8.14-c23fe91
Dec 13 03:49:10 np0005558241 neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419[353160]: [NOTICE]   (353164) : path to executable is /usr/sbin/haproxy
Dec 13 03:49:10 np0005558241 neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419[353160]: [WARNING]  (353164) : Exiting Master process...
Dec 13 03:49:10 np0005558241 neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419[353160]: [ALERT]    (353164) : Current worker (353166) exited with code 143 (Terminated)
Dec 13 03:49:10 np0005558241 neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419[353160]: [WARNING]  (353164) : All workers exited. Exiting... (0)
Dec 13 03:49:10 np0005558241 systemd[1]: libpod-1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429.scope: Deactivated successfully.
Dec 13 03:49:10 np0005558241 podman[353977]: 2025-12-13 08:49:10.013116934 +0000 UTC m=+0.401948169 container died 1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:49:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2603: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 4.7 KiB/s wr, 1 op/s
Dec 13 03:49:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:49:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:49:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:49:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:49:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:49:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:49:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:49:11 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429-userdata-shm.mount: Deactivated successfully.
Dec 13 03:49:11 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7930ac85e7195342e64df4fba87493093ddc740f0c0e309c4994565a0f79486e-merged.mount: Deactivated successfully.
Dec 13 03:49:11 np0005558241 podman[353977]: 2025-12-13 08:49:11.434290007 +0000 UTC m=+1.823121172 container cleanup 1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:49:11 np0005558241 systemd[1]: libpod-conmon-1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429.scope: Deactivated successfully.
Dec 13 03:49:11 np0005558241 nova_compute[248510]: 2025-12-13 08:49:11.627 248514 DEBUG nova.network.neutron [req-32befc81-0a20-4cbb-b397-f797917cacf5 req-1fb6e238-d78e-46c1-be92-171ce26d1fa2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Updated VIF entry in instance network info cache for port 5027cfa3-f4ed-4668-87ec-ebfe75f4fb14. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:49:11 np0005558241 nova_compute[248510]: 2025-12-13 08:49:11.628 248514 DEBUG nova.network.neutron [req-32befc81-0a20-4cbb-b397-f797917cacf5 req-1fb6e238-d78e-46c1-be92-171ce26d1fa2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Updating instance_info_cache with network_info: [{"id": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "address": "fa:16:3e:df:76:90", "network": {"id": "8a649c29-105b-45ad-91b4-9f7a7c58b419", "bridge": "br-int", "label": "tempest-network-smoke--977636721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b75a9df2d3584458bf4c9c127010a4d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5027cfa3-f4", "ovs_interfaceid": "5027cfa3-f4ed-4668-87ec-ebfe75f4fb14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:49:11 np0005558241 nova_compute[248510]: 2025-12-13 08:49:11.657 248514 DEBUG oslo_concurrency.lockutils [req-32befc81-0a20-4cbb-b397-f797917cacf5 req-1fb6e238-d78e-46c1-be92-171ce26d1fa2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-99100320-043d-4f13-ac93-5fd3309abbf7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:49:11 np0005558241 nova_compute[248510]: 2025-12-13 08:49:11.704 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:49:11 np0005558241 nova_compute[248510]: 2025-12-13 08:49:11.705 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:49:11 np0005558241 nova_compute[248510]: 2025-12-13 08:49:11.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:49:11 np0005558241 nova_compute[248510]: 2025-12-13 08:49:11.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:49:11 np0005558241 nova_compute[248510]: 2025-12-13 08:49:11.866 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:12 np0005558241 podman[354024]: 2025-12-13 08:49:12.042491556 +0000 UTC m=+0.581671095 container remove 1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 03:49:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:12.052 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b606d539-ccfb-4c79-a6bf-4048e90c8fef]: (4, ('Sat Dec 13 08:49:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419 (1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429)\n1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429\nSat Dec 13 08:49:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419 (1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429)\n1588748b3496f8e1549df916282c711ca97e245c428a6601a8c57a47e97b6429\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:49:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:12.054 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[399e8820-b24e-4f1f-a9f2-c99e9d05ff54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:49:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:12.056 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a649c29-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:49:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2604: 321 pgs: 321 active+clean; 200 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 4.7 KiB/s wr, 1 op/s
Dec 13 03:49:12 np0005558241 nova_compute[248510]: 2025-12-13 08:49:12.058 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:12 np0005558241 kernel: tap8a649c29-10: left promiscuous mode
Dec 13 03:49:12 np0005558241 nova_compute[248510]: 2025-12-13 08:49:12.086 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:12.091 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5ff60f-5bca-4d9e-a759-9f7048ea578f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:49:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:12.106 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[18c849b1-4240-413c-b36c-9eacd5a4deac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:49:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:12.108 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f474818c-0431-45c9-8884-acc62d957b8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:49:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:12.128 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[40d61dab-a145-4058-84d0-e4e8fb7b46ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 828671, 'reachable_time': 22002, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354040, 'error': None, 'target': 'ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:49:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:12.131 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8a649c29-105b-45ad-91b4-9f7a7c58b419 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:49:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:12.131 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[dd747def-d8d4-4ac6-91cd-f694b2e215b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:49:12 np0005558241 systemd[1]: run-netns-ovnmeta\x2d8a649c29\x2d105b\x2d45ad\x2d91b4\x2d9f7a7c58b419.mount: Deactivated successfully.
Dec 13 03:49:12 np0005558241 nova_compute[248510]: 2025-12-13 08:49:12.196 248514 DEBUG nova.compute.manager [req-9f2b85e1-bff9-452b-81b7-e3a5fe8a6398 req-8c6e794a-55ca-49dd-9faf-714fd2dd86b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received event network-vif-plugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:49:12 np0005558241 nova_compute[248510]: 2025-12-13 08:49:12.197 248514 DEBUG oslo_concurrency.lockutils [req-9f2b85e1-bff9-452b-81b7-e3a5fe8a6398 req-8c6e794a-55ca-49dd-9faf-714fd2dd86b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:49:12 np0005558241 nova_compute[248510]: 2025-12-13 08:49:12.197 248514 DEBUG oslo_concurrency.lockutils [req-9f2b85e1-bff9-452b-81b7-e3a5fe8a6398 req-8c6e794a-55ca-49dd-9faf-714fd2dd86b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:49:12 np0005558241 nova_compute[248510]: 2025-12-13 08:49:12.198 248514 DEBUG oslo_concurrency.lockutils [req-9f2b85e1-bff9-452b-81b7-e3a5fe8a6398 req-8c6e794a-55ca-49dd-9faf-714fd2dd86b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:49:12 np0005558241 nova_compute[248510]: 2025-12-13 08:49:12.198 248514 DEBUG nova.compute.manager [req-9f2b85e1-bff9-452b-81b7-e3a5fe8a6398 req-8c6e794a-55ca-49dd-9faf-714fd2dd86b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] No waiting events found dispatching network-vif-plugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:49:12 np0005558241 nova_compute[248510]: 2025-12-13 08:49:12.198 248514 WARNING nova.compute.manager [req-9f2b85e1-bff9-452b-81b7-e3a5fe8a6398 req-8c6e794a-55ca-49dd-9faf-714fd2dd86b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received unexpected event network-vif-plugged-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:49:13 np0005558241 nova_compute[248510]: 2025-12-13 08:49:13.382 248514 INFO nova.virt.libvirt.driver [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Deleting instance files /var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7_del#033[00m
Dec 13 03:49:13 np0005558241 nova_compute[248510]: 2025-12-13 08:49:13.383 248514 INFO nova.virt.libvirt.driver [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Deletion of /var/lib/nova/instances/99100320-043d-4f13-ac93-5fd3309abbf7_del complete#033[00m
Dec 13 03:49:13 np0005558241 nova_compute[248510]: 2025-12-13 08:49:13.456 248514 INFO nova.compute.manager [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Took 4.29 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:49:13 np0005558241 nova_compute[248510]: 2025-12-13 08:49:13.457 248514 DEBUG oslo.service.loopingcall [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:49:13 np0005558241 nova_compute[248510]: 2025-12-13 08:49:13.457 248514 DEBUG nova.compute.manager [-] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:49:13 np0005558241 nova_compute[248510]: 2025-12-13 08:49:13.458 248514 DEBUG nova.network.neutron [-] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:49:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2605: 321 pgs: 321 active+clean; 121 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 5.7 KiB/s wr, 28 op/s
Dec 13 03:49:14 np0005558241 nova_compute[248510]: 2025-12-13 08:49:14.243 248514 DEBUG nova.network.neutron [-] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:49:14 np0005558241 nova_compute[248510]: 2025-12-13 08:49:14.267 248514 INFO nova.compute.manager [-] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Took 0.81 seconds to deallocate network for instance.#033[00m
Dec 13 03:49:14 np0005558241 nova_compute[248510]: 2025-12-13 08:49:14.361 248514 DEBUG oslo_concurrency.lockutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:49:14 np0005558241 nova_compute[248510]: 2025-12-13 08:49:14.362 248514 DEBUG oslo_concurrency.lockutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:49:14 np0005558241 nova_compute[248510]: 2025-12-13 08:49:14.383 248514 DEBUG nova.compute.manager [req-a62180f4-9147-45bb-bdd4-4b04b03647d2 req-469578e0-c8e7-432d-8499-62e79a75ce3c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Received event network-vif-deleted-5027cfa3-f4ed-4668-87ec-ebfe75f4fb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:49:14 np0005558241 nova_compute[248510]: 2025-12-13 08:49:14.444 248514 DEBUG oslo_concurrency.processutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:49:14 np0005558241 nova_compute[248510]: 2025-12-13 08:49:14.630 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:49:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:49:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3237925263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:49:15 np0005558241 nova_compute[248510]: 2025-12-13 08:49:15.038 248514 DEBUG oslo_concurrency.processutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:49:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:49:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1050045149' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:49:15 np0005558241 nova_compute[248510]: 2025-12-13 08:49:15.047 248514 DEBUG nova.compute.provider_tree [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:49:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:49:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1050045149' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:49:15 np0005558241 nova_compute[248510]: 2025-12-13 08:49:15.070 248514 DEBUG nova.scheduler.client.report [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:49:15 np0005558241 nova_compute[248510]: 2025-12-13 08:49:15.102 248514 DEBUG oslo_concurrency.lockutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:49:15 np0005558241 nova_compute[248510]: 2025-12-13 08:49:15.132 248514 INFO nova.scheduler.client.report [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Deleted allocations for instance 99100320-043d-4f13-ac93-5fd3309abbf7#033[00m
Dec 13 03:49:15 np0005558241 nova_compute[248510]: 2025-12-13 08:49:15.249 248514 DEBUG oslo_concurrency.lockutils [None req-e67269f0-bb39-4767-88ca-a2a08919437d 649a4118d92a4ee68ff645ddec797a5a b75a9df2d3584458bf4c9c127010a4d1 - - default default] Lock "99100320-043d-4f13-ac93-5fd3309abbf7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:49:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2606: 321 pgs: 321 active+clean; 121 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 5.6 KiB/s wr, 27 op/s
Dec 13 03:49:16 np0005558241 nova_compute[248510]: 2025-12-13 08:49:16.868 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:17.329 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:49:17 np0005558241 nova_compute[248510]: 2025-12-13 08:49:17.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:49:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2607: 321 pgs: 321 active+clean; 121 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Dec 13 03:49:19 np0005558241 nova_compute[248510]: 2025-12-13 08:49:19.635 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:49:19 np0005558241 podman[354066]: 2025-12-13 08:49:19.978606906 +0000 UTC m=+0.061209406 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:49:20 np0005558241 podman[354067]: 2025-12-13 08:49:20.004878095 +0000 UTC m=+0.082881509 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 03:49:20 np0005558241 podman[354065]: 2025-12-13 08:49:20.005624294 +0000 UTC m=+0.089257729 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 03:49:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2608: 321 pgs: 321 active+clean; 121 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 28 op/s
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007723968476299632 of space, bias 1.0, pg target 0.23171905428898898 quantized to 32 (current 32)
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006682139988669646 of space, bias 1.0, pg target 0.20046419966008938 quantized to 32 (current 32)
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.841775278005856e-07 of space, bias 4.0, pg target 0.0007010130333607028 quantized to 16 (current 32)
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:49:21 np0005558241 ovn_controller[148476]: 2025-12-13T08:49:21Z|01043|binding|INFO|Releasing lport d344c035-f22d-4b1b-95ff-ed098d3a946c from this chassis (sb_readonly=0)
Dec 13 03:49:21 np0005558241 nova_compute[248510]: 2025-12-13 08:49:21.813 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:21 np0005558241 nova_compute[248510]: 2025-12-13 08:49:21.870 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2609: 321 pgs: 321 active+clean; 121 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 03:49:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2610: 321 pgs: 321 active+clean; 121 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 03:49:24 np0005558241 nova_compute[248510]: 2025-12-13 08:49:24.597 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615749.5951822, 99100320-043d-4f13-ac93-5fd3309abbf7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:49:24 np0005558241 nova_compute[248510]: 2025-12-13 08:49:24.597 248514 INFO nova.compute.manager [-] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:49:24 np0005558241 nova_compute[248510]: 2025-12-13 08:49:24.637 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:24 np0005558241 nova_compute[248510]: 2025-12-13 08:49:24.672 248514 DEBUG nova.compute.manager [None req-1b1c37f6-c2a7-4bfd-b5ab-0b03c84f2aff - - - - - -] [instance: 99100320-043d-4f13-ac93-5fd3309abbf7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:49:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.142 248514 DEBUG nova.compute.manager [req-b083aa3b-edfe-4c8b-8266-8348ea1053c7 req-a2058cc4-af8e-4a5d-a2d9-38f94b245f8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Received event network-changed-1a00c927-1c7f-4af5-9337-d6e58800dc3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.142 248514 DEBUG nova.compute.manager [req-b083aa3b-edfe-4c8b-8266-8348ea1053c7 req-a2058cc4-af8e-4a5d-a2d9-38f94b245f8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Refreshing instance network info cache due to event network-changed-1a00c927-1c7f-4af5-9337-d6e58800dc3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.143 248514 DEBUG oslo_concurrency.lockutils [req-b083aa3b-edfe-4c8b-8266-8348ea1053c7 req-a2058cc4-af8e-4a5d-a2d9-38f94b245f8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.143 248514 DEBUG oslo_concurrency.lockutils [req-b083aa3b-edfe-4c8b-8266-8348ea1053c7 req-a2058cc4-af8e-4a5d-a2d9-38f94b245f8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.143 248514 DEBUG nova.network.neutron [req-b083aa3b-edfe-4c8b-8266-8348ea1053c7 req-a2058cc4-af8e-4a5d-a2d9-38f94b245f8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Refreshing network info cache for port 1a00c927-1c7f-4af5-9337-d6e58800dc3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.316 248514 DEBUG oslo_concurrency.lockutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.317 248514 DEBUG oslo_concurrency.lockutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.317 248514 DEBUG oslo_concurrency.lockutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.317 248514 DEBUG oslo_concurrency.lockutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.317 248514 DEBUG oslo_concurrency.lockutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.318 248514 INFO nova.compute.manager [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Terminating instance#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.319 248514 DEBUG nova.compute.manager [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:49:25 np0005558241 kernel: tap1a00c927-1c (unregistering): left promiscuous mode
Dec 13 03:49:25 np0005558241 NetworkManager[50376]: <info>  [1765615765.6118] device (tap1a00c927-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.617 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:49:25Z|01044|binding|INFO|Releasing lport 1a00c927-1c7f-4af5-9337-d6e58800dc3c from this chassis (sb_readonly=0)
Dec 13 03:49:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:49:25Z|01045|binding|INFO|Setting lport 1a00c927-1c7f-4af5-9337-d6e58800dc3c down in Southbound
Dec 13 03:49:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:49:25Z|01046|binding|INFO|Removing iface tap1a00c927-1c ovn-installed in OVS
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.619 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.628 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:2b:42 10.100.0.9'], port_security=['fa:16:3e:9c:2b:42 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1d73d88c-ca9a-4136-80de-fa2cf028ffb7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7bba2fe-699d-4423-a6e2-09604625a8f5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '531d7c80-e840-46e0-9afc-03ae0558f787 73f9632c-0914-472c-9969-a269b215d831', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ffad85fc-28b3-4529-8d06-0367d9c3d476, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1a00c927-1c7f-4af5-9337-d6e58800dc3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:49:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.629 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1a00c927-1c7f-4af5-9337-d6e58800dc3c in datapath b7bba2fe-699d-4423-a6e2-09604625a8f5 unbound from our chassis#033[00m
Dec 13 03:49:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.630 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b7bba2fe-699d-4423-a6e2-09604625a8f5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:49:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.631 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a2ff63-1a9d-49d6-8d7a-cc876948b702]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:49:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.631 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5 namespace which is not needed anymore#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.634 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:25 np0005558241 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000069.scope: Deactivated successfully.
Dec 13 03:49:25 np0005558241 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000069.scope: Consumed 17.468s CPU time.
Dec 13 03:49:25 np0005558241 systemd-machined[210538]: Machine qemu-130-instance-00000069 terminated.
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.757 248514 INFO nova.virt.libvirt.driver [-] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Instance destroyed successfully.#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.758 248514 DEBUG nova.objects.instance [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'resources' on Instance uuid 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:49:25 np0005558241 neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5[351329]: [NOTICE]   (351333) : haproxy version is 2.8.14-c23fe91
Dec 13 03:49:25 np0005558241 neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5[351329]: [NOTICE]   (351333) : path to executable is /usr/sbin/haproxy
Dec 13 03:49:25 np0005558241 neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5[351329]: [WARNING]  (351333) : Exiting Master process...
Dec 13 03:49:25 np0005558241 neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5[351329]: [ALERT]    (351333) : Current worker (351335) exited with code 143 (Terminated)
Dec 13 03:49:25 np0005558241 neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5[351329]: [WARNING]  (351333) : All workers exited. Exiting... (0)
Dec 13 03:49:25 np0005558241 systemd[1]: libpod-63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5.scope: Deactivated successfully.
Dec 13 03:49:25 np0005558241 podman[354154]: 2025-12-13 08:49:25.773134699 +0000 UTC m=+0.053620445 container died 63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.779 248514 DEBUG nova.virt.libvirt.vif [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:47:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-1397234678',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-1397234678',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=105,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOgNSj2RX2tEOr5Rxtdc3T7qrIqjyVapwoURlTzSwBUNw2HAjV8i9+69CD+ahp0R2Tk6YrJ3W0cDR2tzHXyNVMUTiAkgjDao6U5yvxeoFoLQPs8Nmve95azrQ/Z/Vbs68Q==',key_name='tempest-TestSecurityGroupsBasicOps-80214463',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:47:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-c1fl6h76',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:47:32Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=1d73d88c-ca9a-4136-80de-fa2cf028ffb7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.780 248514 DEBUG nova.network.os_vif_util [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.781 248514 DEBUG nova.network.os_vif_util [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:2b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a00c927-1c7f-4af5-9337-d6e58800dc3c,network=Network(b7bba2fe-699d-4423-a6e2-09604625a8f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a00c927-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.781 248514 DEBUG os_vif [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:2b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a00c927-1c7f-4af5-9337-d6e58800dc3c,network=Network(b7bba2fe-699d-4423-a6e2-09604625a8f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a00c927-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.784 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a00c927-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.823 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.826 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.829 248514 INFO os_vif [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:2b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a00c927-1c7f-4af5-9337-d6e58800dc3c,network=Network(b7bba2fe-699d-4423-a6e2-09604625a8f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a00c927-1c')#033[00m
Dec 13 03:49:25 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5-userdata-shm.mount: Deactivated successfully.
Dec 13 03:49:25 np0005558241 systemd[1]: var-lib-containers-storage-overlay-27792fd296ec41c0b07a50cc3d8774b88ded6a613398922a1314c6ce563b646f-merged.mount: Deactivated successfully.
Dec 13 03:49:25 np0005558241 podman[354154]: 2025-12-13 08:49:25.852726175 +0000 UTC m=+0.133211921 container cleanup 63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:49:25 np0005558241 systemd[1]: libpod-conmon-63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5.scope: Deactivated successfully.
Dec 13 03:49:25 np0005558241 podman[354211]: 2025-12-13 08:49:25.93829525 +0000 UTC m=+0.060888527 container remove 63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:49:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.944 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d25c9046-1a5b-410b-bef4-71324205ff0d]: (4, ('Sat Dec 13 08:49:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5 (63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5)\n63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5\nSat Dec 13 08:49:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5 (63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5)\n63f848e966e343478e5b790e989cbf5ebfda0357a97dd22afc2905727d3576b5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:49:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.946 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a0bfcb30-efe2-495e-af4a-482ef90deee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:49:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.947 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7bba2fe-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.949 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:25 np0005558241 kernel: tapb7bba2fe-60: left promiscuous mode
Dec 13 03:49:25 np0005558241 nova_compute[248510]: 2025-12-13 08:49:25.961 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.965 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[844dd31f-f042-4b08-8d9e-68b7fc45a674]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:49:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.982 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eab7bc23-627f-44a7-8417-13d62bed30b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:49:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:25.984 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6e5068f2-d45a-4e69-8da2-f2a7123f0bce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:49:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:26.002 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5f344d33-5a25-408f-ad2e-9a2774a2cad7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822846, 'reachable_time': 18390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354226, 'error': None, 'target': 'ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:49:26 np0005558241 systemd[1]: run-netns-ovnmeta\x2db7bba2fe\x2d699d\x2d4423\x2da6e2\x2d09604625a8f5.mount: Deactivated successfully.
Dec 13 03:49:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:26.005 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b7bba2fe-699d-4423-a6e2-09604625a8f5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:49:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:26.005 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[92d2317e-9966-4087-a145-ffa1b76dd5a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:49:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2611: 321 pgs: 321 active+clean; 121 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Dec 13 03:49:26 np0005558241 nova_compute[248510]: 2025-12-13 08:49:26.138 248514 INFO nova.virt.libvirt.driver [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Deleting instance files /var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7_del#033[00m
Dec 13 03:49:26 np0005558241 nova_compute[248510]: 2025-12-13 08:49:26.139 248514 INFO nova.virt.libvirt.driver [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Deletion of /var/lib/nova/instances/1d73d88c-ca9a-4136-80de-fa2cf028ffb7_del complete#033[00m
Dec 13 03:49:26 np0005558241 nova_compute[248510]: 2025-12-13 08:49:26.215 248514 INFO nova.compute.manager [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:49:26 np0005558241 nova_compute[248510]: 2025-12-13 08:49:26.216 248514 DEBUG oslo.service.loopingcall [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:49:26 np0005558241 nova_compute[248510]: 2025-12-13 08:49:26.217 248514 DEBUG nova.compute.manager [-] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:49:26 np0005558241 nova_compute[248510]: 2025-12-13 08:49:26.217 248514 DEBUG nova.network.neutron [-] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:49:26 np0005558241 nova_compute[248510]: 2025-12-13 08:49:26.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2612: 321 pgs: 321 active+clean; 85 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 3 op/s
Dec 13 03:49:28 np0005558241 nova_compute[248510]: 2025-12-13 08:49:28.818 248514 DEBUG nova.network.neutron [req-b083aa3b-edfe-4c8b-8266-8348ea1053c7 req-a2058cc4-af8e-4a5d-a2d9-38f94b245f8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updated VIF entry in instance network info cache for port 1a00c927-1c7f-4af5-9337-d6e58800dc3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:49:28 np0005558241 nova_compute[248510]: 2025-12-13 08:49:28.819 248514 DEBUG nova.network.neutron [req-b083aa3b-edfe-4c8b-8266-8348ea1053c7 req-a2058cc4-af8e-4a5d-a2d9-38f94b245f8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updating instance_info_cache with network_info: [{"id": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "address": "fa:16:3e:9c:2b:42", "network": {"id": "b7bba2fe-699d-4423-a6e2-09604625a8f5", "bridge": "br-int", "label": "tempest-network-smoke--83273027", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a00c927-1c", "ovs_interfaceid": "1a00c927-1c7f-4af5-9337-d6e58800dc3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:49:28 np0005558241 nova_compute[248510]: 2025-12-13 08:49:28.854 248514 DEBUG oslo_concurrency.lockutils [req-b083aa3b-edfe-4c8b-8266-8348ea1053c7 req-a2058cc4-af8e-4a5d-a2d9-38f94b245f8b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1d73d88c-ca9a-4136-80de-fa2cf028ffb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:49:28 np0005558241 nova_compute[248510]: 2025-12-13 08:49:28.985 248514 DEBUG nova.network.neutron [-] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:49:29 np0005558241 nova_compute[248510]: 2025-12-13 08:49:29.003 248514 INFO nova.compute.manager [-] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Took 2.79 seconds to deallocate network for instance.#033[00m
Dec 13 03:49:29 np0005558241 nova_compute[248510]: 2025-12-13 08:49:29.062 248514 DEBUG oslo_concurrency.lockutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:49:29 np0005558241 nova_compute[248510]: 2025-12-13 08:49:29.063 248514 DEBUG oslo_concurrency.lockutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:49:29 np0005558241 nova_compute[248510]: 2025-12-13 08:49:29.074 248514 DEBUG nova.compute.manager [req-b384e0ae-a2f0-4fa7-be25-211747d1b2c0 req-a19622b2-ec4c-4bdf-ba52-bccbe4a37ae8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Received event network-vif-deleted-1a00c927-1c7f-4af5-9337-d6e58800dc3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:49:29 np0005558241 nova_compute[248510]: 2025-12-13 08:49:29.141 248514 DEBUG oslo_concurrency.processutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:49:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:49:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2572251386' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:49:29 np0005558241 nova_compute[248510]: 2025-12-13 08:49:29.748 248514 DEBUG oslo_concurrency.processutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:49:29 np0005558241 nova_compute[248510]: 2025-12-13 08:49:29.755 248514 DEBUG nova.compute.provider_tree [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:49:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:49:29 np0005558241 nova_compute[248510]: 2025-12-13 08:49:29.782 248514 DEBUG nova.scheduler.client.report [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:49:29 np0005558241 nova_compute[248510]: 2025-12-13 08:49:29.829 248514 DEBUG oslo_concurrency.lockutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:49:29 np0005558241 nova_compute[248510]: 2025-12-13 08:49:29.864 248514 INFO nova.scheduler.client.report [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Deleted allocations for instance 1d73d88c-ca9a-4136-80de-fa2cf028ffb7#033[00m
Dec 13 03:49:29 np0005558241 nova_compute[248510]: 2025-12-13 08:49:29.954 248514 DEBUG oslo_concurrency.lockutils [None req-7da9f03e-50ca-46f3-9494-e0a2e623e76b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "1d73d88c-ca9a-4136-80de-fa2cf028ffb7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:49:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2613: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 03:49:30 np0005558241 nova_compute[248510]: 2025-12-13 08:49:30.825 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:31 np0005558241 nova_compute[248510]: 2025-12-13 08:49:31.957 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2614: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 03:49:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2615: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 03:49:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:49:35 np0005558241 nova_compute[248510]: 2025-12-13 08:49:35.828 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2616: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 03:49:36 np0005558241 nova_compute[248510]: 2025-12-13 08:49:36.959 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:37 np0005558241 nova_compute[248510]: 2025-12-13 08:49:37.077 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:37 np0005558241 nova_compute[248510]: 2025-12-13 08:49:37.151 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2617: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 03:49:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:49:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2618: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 682 B/s wr, 24 op/s
Dec 13 03:49:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:49:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:49:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:49:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:49:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:49:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:49:40 np0005558241 nova_compute[248510]: 2025-12-13 08:49:40.757 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615765.7549932, 1d73d88c-ca9a-4136-80de-fa2cf028ffb7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:49:40 np0005558241 nova_compute[248510]: 2025-12-13 08:49:40.757 248514 INFO nova.compute.manager [-] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:49:40 np0005558241 nova_compute[248510]: 2025-12-13 08:49:40.785 248514 DEBUG nova.compute.manager [None req-82d416e1-f49b-4fbc-9f2b-93c38ffb3798 - - - - - -] [instance: 1d73d88c-ca9a-4136-80de-fa2cf028ffb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:49:40 np0005558241 nova_compute[248510]: 2025-12-13 08:49:40.831 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:41 np0005558241 nova_compute[248510]: 2025-12-13 08:49:41.961 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2619: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:49:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2620: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:49:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:49:45 np0005558241 nova_compute[248510]: 2025-12-13 08:49:45.834 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2621: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:49:47 np0005558241 nova_compute[248510]: 2025-12-13 08:49:47.009 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2622: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 03:49:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:49:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2623: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 03:49:50 np0005558241 nova_compute[248510]: 2025-12-13 08:49:50.836 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:49:50 np0005558241 podman[354253]: 2025-12-13 08:49:50.956683994 +0000 UTC m=+0.046447385 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:49:50 np0005558241 podman[354252]: 2025-12-13 08:49:50.966976183 +0000 UTC m=+0.060633842 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec 13 03:49:50 np0005558241 podman[354251]: 2025-12-13 08:49:50.989061826 +0000 UTC m=+0.083324120 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 13 03:49:52 np0005558241 nova_compute[248510]: 2025-12-13 08:49:52.013 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:49:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2624: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.383534) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615793383624, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2087, "num_deletes": 253, "total_data_size": 3662861, "memory_usage": 3723280, "flush_reason": "Manual Compaction"}
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615793410404, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3563511, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49792, "largest_seqno": 51878, "table_properties": {"data_size": 3553860, "index_size": 6145, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19427, "raw_average_key_size": 20, "raw_value_size": 3534757, "raw_average_value_size": 3701, "num_data_blocks": 271, "num_entries": 955, "num_filter_entries": 955, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615572, "oldest_key_time": 1765615572, "file_creation_time": 1765615793, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 26931 microseconds, and 9825 cpu microseconds.
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.410462) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3563511 bytes OK
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.410489) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.413901) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.413920) EVENT_LOG_v1 {"time_micros": 1765615793413915, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.413938) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3654095, prev total WAL file size 3654095, number of live WAL files 2.
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.415052) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3479KB)], [116(9089KB)]
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615793415131, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 12871267, "oldest_snapshot_seqno": -1}
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7452 keys, 11081509 bytes, temperature: kUnknown
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615793497937, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 11081509, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11031253, "index_size": 30496, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18693, "raw_key_size": 193220, "raw_average_key_size": 25, "raw_value_size": 10897514, "raw_average_value_size": 1462, "num_data_blocks": 1198, "num_entries": 7452, "num_filter_entries": 7452, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765615793, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.498510) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 11081509 bytes
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.500880) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.8 rd, 133.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 8.9 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7974, records dropped: 522 output_compression: NoCompression
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.500909) EVENT_LOG_v1 {"time_micros": 1765615793500895, "job": 70, "event": "compaction_finished", "compaction_time_micros": 83133, "compaction_time_cpu_micros": 27448, "output_level": 6, "num_output_files": 1, "total_output_size": 11081509, "num_input_records": 7974, "num_output_records": 7452, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615793502131, "job": 70, "event": "table_file_deletion", "file_number": 118}
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615793504256, "job": 70, "event": "table_file_deletion", "file_number": 116}
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.414937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.504404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.504410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.504412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.504414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:49:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:49:53.504415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:49:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2625: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 03:49:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:49:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:55.427 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:49:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:55.428 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:49:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:49:55.428 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:49:55 np0005558241 nova_compute[248510]: 2025-12-13 08:49:55.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:49:55 np0005558241 nova_compute[248510]: 2025-12-13 08:49:55.839 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:49:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2626: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 03:49:56 np0005558241 nova_compute[248510]: 2025-12-13 08:49:56.973 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "8d919892-73fd-4a11-be79-2a1a9280a987" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:49:56 np0005558241 nova_compute[248510]: 2025-12-13 08:49:56.973 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:49:56 np0005558241 nova_compute[248510]: 2025-12-13 08:49:56.994 248514 DEBUG nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:49:57 np0005558241 nova_compute[248510]: 2025-12-13 08:49:57.034 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:49:57 np0005558241 nova_compute[248510]: 2025-12-13 08:49:57.128 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:49:57 np0005558241 nova_compute[248510]: 2025-12-13 08:49:57.128 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:49:57 np0005558241 nova_compute[248510]: 2025-12-13 08:49:57.138 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:49:57 np0005558241 nova_compute[248510]: 2025-12-13 08:49:57.139 248514 INFO nova.compute.claims [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:49:57 np0005558241 nova_compute[248510]: 2025-12-13 08:49:57.269 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:49:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:49:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1769668349' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:49:57 np0005558241 nova_compute[248510]: 2025-12-13 08:49:57.907 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:49:57 np0005558241 nova_compute[248510]: 2025-12-13 08:49:57.916 248514 DEBUG nova.compute.provider_tree [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:49:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2627: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.133 248514 DEBUG nova.scheduler.client.report [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.174 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.175 248514 DEBUG nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.250 248514 DEBUG nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.250 248514 DEBUG nova.network.neutron [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.291 248514 INFO nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.330 248514 DEBUG nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.435 248514 DEBUG nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.437 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.438 248514 INFO nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Creating image(s)
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.465 248514 DEBUG nova.storage.rbd_utils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 8d919892-73fd-4a11-be79-2a1a9280a987_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.503 248514 DEBUG nova.storage.rbd_utils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 8d919892-73fd-4a11-be79-2a1a9280a987_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.532 248514 DEBUG nova.storage.rbd_utils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 8d919892-73fd-4a11-be79-2a1a9280a987_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.536 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.629 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.630 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.630 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.631 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.655 248514 DEBUG nova.storage.rbd_utils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 8d919892-73fd-4a11-be79-2a1a9280a987_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.659 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 8d919892-73fd-4a11-be79-2a1a9280a987_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:49:58 np0005558241 nova_compute[248510]: 2025-12-13 08:49:58.717 248514 DEBUG nova.policy [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c377508dda354c0aa762d15d52aa130c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:49:59 np0005558241 nova_compute[248510]: 2025-12-13 08:49:59.035 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 8d919892-73fd-4a11-be79-2a1a9280a987_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.376s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:49:59 np0005558241 nova_compute[248510]: 2025-12-13 08:49:59.107 248514 DEBUG nova.storage.rbd_utils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] resizing rbd image 8d919892-73fd-4a11-be79-2a1a9280a987_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:49:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:49:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:49:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:49:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:49:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:49:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:49:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:49:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:49:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:49:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:49:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:49:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:49:59 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:49:59 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:49:59 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:49:59 np0005558241 nova_compute[248510]: 2025-12-13 08:49:59.206 248514 DEBUG nova.objects.instance [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'migration_context' on Instance uuid 8d919892-73fd-4a11-be79-2a1a9280a987 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:49:59 np0005558241 nova_compute[248510]: 2025-12-13 08:49:59.227 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:49:59 np0005558241 nova_compute[248510]: 2025-12-13 08:49:59.228 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Ensure instance console log exists: /var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:49:59 np0005558241 nova_compute[248510]: 2025-12-13 08:49:59.229 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:49:59 np0005558241 nova_compute[248510]: 2025-12-13 08:49:59.229 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:49:59 np0005558241 nova_compute[248510]: 2025-12-13 08:49:59.230 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:49:59 np0005558241 podman[354648]: 2025-12-13 08:49:59.518510824 +0000 UTC m=+0.039079031 container create b5621dde5670ab3df7e2b577c00fc97ab104569fb4ad08f6a9a58f8109978705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mccarthy, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 03:49:59 np0005558241 systemd[1]: Started libpod-conmon-b5621dde5670ab3df7e2b577c00fc97ab104569fb4ad08f6a9a58f8109978705.scope.
Dec 13 03:49:59 np0005558241 podman[354648]: 2025-12-13 08:49:59.501266551 +0000 UTC m=+0.021834798 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:49:59 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:49:59 np0005558241 podman[354648]: 2025-12-13 08:49:59.615843434 +0000 UTC m=+0.136411671 container init b5621dde5670ab3df7e2b577c00fc97ab104569fb4ad08f6a9a58f8109978705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mccarthy, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:49:59 np0005558241 podman[354648]: 2025-12-13 08:49:59.628174253 +0000 UTC m=+0.148742480 container start b5621dde5670ab3df7e2b577c00fc97ab104569fb4ad08f6a9a58f8109978705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 03:49:59 np0005558241 podman[354648]: 2025-12-13 08:49:59.632401119 +0000 UTC m=+0.152969356 container attach b5621dde5670ab3df7e2b577c00fc97ab104569fb4ad08f6a9a58f8109978705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 03:49:59 np0005558241 cool_mccarthy[354664]: 167 167
Dec 13 03:49:59 np0005558241 systemd[1]: libpod-b5621dde5670ab3df7e2b577c00fc97ab104569fb4ad08f6a9a58f8109978705.scope: Deactivated successfully.
Dec 13 03:49:59 np0005558241 podman[354648]: 2025-12-13 08:49:59.637475856 +0000 UTC m=+0.158044093 container died b5621dde5670ab3df7e2b577c00fc97ab104569fb4ad08f6a9a58f8109978705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec 13 03:49:59 np0005558241 systemd[1]: var-lib-containers-storage-overlay-765e921f3a175de5a49a3b38ce96ffdd9cdba80c63da2a5ed882bb02b16e0642-merged.mount: Deactivated successfully.
Dec 13 03:49:59 np0005558241 podman[354648]: 2025-12-13 08:49:59.683153882 +0000 UTC m=+0.203722099 container remove b5621dde5670ab3df7e2b577c00fc97ab104569fb4ad08f6a9a58f8109978705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 03:49:59 np0005558241 systemd[1]: libpod-conmon-b5621dde5670ab3df7e2b577c00fc97ab104569fb4ad08f6a9a58f8109978705.scope: Deactivated successfully.
Dec 13 03:49:59 np0005558241 nova_compute[248510]: 2025-12-13 08:49:59.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:49:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:49:59 np0005558241 podman[354689]: 2025-12-13 08:49:59.851135813 +0000 UTC m=+0.044462775 container create 3a1c2c975de824ea8b783b1e6de7795737f70acda3c06e0f4e24c7272f7eba19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:49:59 np0005558241 systemd[1]: Started libpod-conmon-3a1c2c975de824ea8b783b1e6de7795737f70acda3c06e0f4e24c7272f7eba19.scope.
Dec 13 03:49:59 np0005558241 podman[354689]: 2025-12-13 08:49:59.828873955 +0000 UTC m=+0.022200917 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:49:59 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:49:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3bbae4fd2dcc422abe54115f19eefab53f68fa2c25fc913c29b5563116f63b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:49:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3bbae4fd2dcc422abe54115f19eefab53f68fa2c25fc913c29b5563116f63b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:49:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3bbae4fd2dcc422abe54115f19eefab53f68fa2c25fc913c29b5563116f63b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:49:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3bbae4fd2dcc422abe54115f19eefab53f68fa2c25fc913c29b5563116f63b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:49:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3bbae4fd2dcc422abe54115f19eefab53f68fa2c25fc913c29b5563116f63b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:49:59 np0005558241 podman[354689]: 2025-12-13 08:49:59.945683264 +0000 UTC m=+0.139010226 container init 3a1c2c975de824ea8b783b1e6de7795737f70acda3c06e0f4e24c7272f7eba19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:49:59 np0005558241 podman[354689]: 2025-12-13 08:49:59.952389342 +0000 UTC m=+0.145716274 container start 3a1c2c975de824ea8b783b1e6de7795737f70acda3c06e0f4e24c7272f7eba19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_yalow, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:49:59 np0005558241 podman[354689]: 2025-12-13 08:49:59.956779212 +0000 UTC m=+0.150106144 container attach 3a1c2c975de824ea8b783b1e6de7795737f70acda3c06e0f4e24c7272f7eba19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_yalow, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:50:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2628: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 5.4 KiB/s rd, 85 B/s wr, 10 op/s
Dec 13 03:50:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Dec 13 03:50:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Dec 13 03:50:00 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Dec 13 03:50:00 np0005558241 nova_compute[248510]: 2025-12-13 08:50:00.413 248514 DEBUG nova.network.neutron [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Successfully created port: 8d84f494-97c1-4708-b8df-444e42f55484 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:50:00 np0005558241 goofy_yalow[354705]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:50:00 np0005558241 goofy_yalow[354705]: --> All data devices are unavailable
Dec 13 03:50:00 np0005558241 systemd[1]: libpod-3a1c2c975de824ea8b783b1e6de7795737f70acda3c06e0f4e24c7272f7eba19.scope: Deactivated successfully.
Dec 13 03:50:00 np0005558241 podman[354689]: 2025-12-13 08:50:00.454261226 +0000 UTC m=+0.647588158 container died 3a1c2c975de824ea8b783b1e6de7795737f70acda3c06e0f4e24c7272f7eba19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:50:00 np0005558241 systemd[1]: var-lib-containers-storage-overlay-dd3bbae4fd2dcc422abe54115f19eefab53f68fa2c25fc913c29b5563116f63b-merged.mount: Deactivated successfully.
Dec 13 03:50:00 np0005558241 podman[354689]: 2025-12-13 08:50:00.500335881 +0000 UTC m=+0.693662813 container remove 3a1c2c975de824ea8b783b1e6de7795737f70acda3c06e0f4e24c7272f7eba19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:50:00 np0005558241 systemd[1]: libpod-conmon-3a1c2c975de824ea8b783b1e6de7795737f70acda3c06e0f4e24c7272f7eba19.scope: Deactivated successfully.
Dec 13 03:50:00 np0005558241 nova_compute[248510]: 2025-12-13 08:50:00.842 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:01 np0005558241 podman[354799]: 2025-12-13 08:50:01.009431094 +0000 UTC m=+0.066690803 container create 34e76a40a8cf3aacb09b811aab72435d1bcfffd624bb5db05be10c171b409a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_dhawan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:50:01 np0005558241 podman[354799]: 2025-12-13 08:50:00.970386535 +0000 UTC m=+0.027646324 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:50:01 np0005558241 systemd[1]: Started libpod-conmon-34e76a40a8cf3aacb09b811aab72435d1bcfffd624bb5db05be10c171b409a99.scope.
Dec 13 03:50:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:50:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Dec 13 03:50:01 np0005558241 podman[354799]: 2025-12-13 08:50:01.287208959 +0000 UTC m=+0.344468698 container init 34e76a40a8cf3aacb09b811aab72435d1bcfffd624bb5db05be10c171b409a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_dhawan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:50:01 np0005558241 podman[354799]: 2025-12-13 08:50:01.295128407 +0000 UTC m=+0.352388156 container start 34e76a40a8cf3aacb09b811aab72435d1bcfffd624bb5db05be10c171b409a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 03:50:01 np0005558241 bold_dhawan[354816]: 167 167
Dec 13 03:50:01 np0005558241 systemd[1]: libpod-34e76a40a8cf3aacb09b811aab72435d1bcfffd624bb5db05be10c171b409a99.scope: Deactivated successfully.
Dec 13 03:50:01 np0005558241 podman[354799]: 2025-12-13 08:50:01.357623934 +0000 UTC m=+0.414883733 container attach 34e76a40a8cf3aacb09b811aab72435d1bcfffd624bb5db05be10c171b409a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_dhawan, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:50:01 np0005558241 podman[354799]: 2025-12-13 08:50:01.358354972 +0000 UTC m=+0.415614741 container died 34e76a40a8cf3aacb09b811aab72435d1bcfffd624bb5db05be10c171b409a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:50:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Dec 13 03:50:01 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Dec 13 03:50:01 np0005558241 systemd[1]: var-lib-containers-storage-overlay-bf37ab939bf1855dd1052c77111df13da9d844b28c4046cc91c5d88f39da154f-merged.mount: Deactivated successfully.
Dec 13 03:50:01 np0005558241 podman[354799]: 2025-12-13 08:50:01.415048534 +0000 UTC m=+0.472308243 container remove 34e76a40a8cf3aacb09b811aab72435d1bcfffd624bb5db05be10c171b409a99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_dhawan, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 03:50:01 np0005558241 systemd[1]: libpod-conmon-34e76a40a8cf3aacb09b811aab72435d1bcfffd624bb5db05be10c171b409a99.scope: Deactivated successfully.
Dec 13 03:50:01 np0005558241 podman[354842]: 2025-12-13 08:50:01.580288327 +0000 UTC m=+0.042360083 container create da51fe4f1e69ba1545bafcb45dd4e2bebeb7a22b69cf1263db9a08008e82bd78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hofstadter, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:50:01 np0005558241 systemd[1]: Started libpod-conmon-da51fe4f1e69ba1545bafcb45dd4e2bebeb7a22b69cf1263db9a08008e82bd78.scope.
Dec 13 03:50:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:50:01 np0005558241 podman[354842]: 2025-12-13 08:50:01.561950037 +0000 UTC m=+0.024021823 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:50:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ee0441f175fbe1e0138ccf8d9c3f51c5b0d8b974c9c9128b38ff1fa1ce181c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ee0441f175fbe1e0138ccf8d9c3f51c5b0d8b974c9c9128b38ff1fa1ce181c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ee0441f175fbe1e0138ccf8d9c3f51c5b0d8b974c9c9128b38ff1fa1ce181c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13ee0441f175fbe1e0138ccf8d9c3f51c5b0d8b974c9c9128b38ff1fa1ce181c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:01 np0005558241 podman[354842]: 2025-12-13 08:50:01.675814402 +0000 UTC m=+0.137886178 container init da51fe4f1e69ba1545bafcb45dd4e2bebeb7a22b69cf1263db9a08008e82bd78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hofstadter, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 03:50:01 np0005558241 podman[354842]: 2025-12-13 08:50:01.683461184 +0000 UTC m=+0.145532940 container start da51fe4f1e69ba1545bafcb45dd4e2bebeb7a22b69cf1263db9a08008e82bd78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:50:01 np0005558241 podman[354842]: 2025-12-13 08:50:01.687391902 +0000 UTC m=+0.149463678 container attach da51fe4f1e69ba1545bafcb45dd4e2bebeb7a22b69cf1263db9a08008e82bd78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]: {
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:    "0": [
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:        {
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "devices": [
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "/dev/loop3"
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            ],
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "lv_name": "ceph_lv0",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "lv_size": "21470642176",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "name": "ceph_lv0",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "tags": {
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.cluster_name": "ceph",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.crush_device_class": "",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.encrypted": "0",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.objectstore": "bluestore",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.osd_id": "0",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.type": "block",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.vdo": "0",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.with_tpm": "0"
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            },
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "type": "block",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "vg_name": "ceph_vg0"
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:        }
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:    ],
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:    "1": [
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:        {
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "devices": [
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "/dev/loop4"
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            ],
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "lv_name": "ceph_lv1",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "lv_size": "21470642176",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "name": "ceph_lv1",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "tags": {
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.cluster_name": "ceph",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.crush_device_class": "",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.encrypted": "0",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.objectstore": "bluestore",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.osd_id": "1",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.type": "block",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.vdo": "0",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.with_tpm": "0"
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            },
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "type": "block",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "vg_name": "ceph_vg1"
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:        }
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:    ],
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:    "2": [
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:        {
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "devices": [
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "/dev/loop5"
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            ],
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "lv_name": "ceph_lv2",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "lv_size": "21470642176",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "name": "ceph_lv2",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "tags": {
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.cluster_name": "ceph",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.crush_device_class": "",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.encrypted": "0",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.objectstore": "bluestore",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.osd_id": "2",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.type": "block",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.vdo": "0",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:                "ceph.with_tpm": "0"
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            },
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "type": "block",
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:            "vg_name": "ceph_vg2"
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:        }
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]:    ]
Dec 13 03:50:02 np0005558241 sharp_hofstadter[354859]: }
Dec 13 03:50:02 np0005558241 nova_compute[248510]: 2025-12-13 08:50:02.037 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:02 np0005558241 systemd[1]: libpod-da51fe4f1e69ba1545bafcb45dd4e2bebeb7a22b69cf1263db9a08008e82bd78.scope: Deactivated successfully.
Dec 13 03:50:02 np0005558241 podman[354842]: 2025-12-13 08:50:02.044990308 +0000 UTC m=+0.507062064 container died da51fe4f1e69ba1545bafcb45dd4e2bebeb7a22b69cf1263db9a08008e82bd78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 03:50:02 np0005558241 systemd[1]: var-lib-containers-storage-overlay-13ee0441f175fbe1e0138ccf8d9c3f51c5b0d8b974c9c9128b38ff1fa1ce181c-merged.mount: Deactivated successfully.
Dec 13 03:50:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2631: 321 pgs: 321 active+clean; 41 MiB data, 711 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 127 B/s wr, 0 op/s
Dec 13 03:50:02 np0005558241 nova_compute[248510]: 2025-12-13 08:50:02.093 248514 DEBUG nova.network.neutron [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Successfully updated port: 8d84f494-97c1-4708-b8df-444e42f55484 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:50:02 np0005558241 podman[354842]: 2025-12-13 08:50:02.096362116 +0000 UTC m=+0.558433872 container remove da51fe4f1e69ba1545bafcb45dd4e2bebeb7a22b69cf1263db9a08008e82bd78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:50:02 np0005558241 systemd[1]: libpod-conmon-da51fe4f1e69ba1545bafcb45dd4e2bebeb7a22b69cf1263db9a08008e82bd78.scope: Deactivated successfully.
Dec 13 03:50:02 np0005558241 nova_compute[248510]: 2025-12-13 08:50:02.120 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:50:02 np0005558241 nova_compute[248510]: 2025-12-13 08:50:02.120 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquired lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:50:02 np0005558241 nova_compute[248510]: 2025-12-13 08:50:02.121 248514 DEBUG nova.network.neutron [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:50:02 np0005558241 nova_compute[248510]: 2025-12-13 08:50:02.255 248514 DEBUG nova.compute.manager [req-27beef2e-633f-4090-897a-a4de6cec5fc0 req-0481f37c-14f7-4429-96e2-2863ee514141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received event network-changed-8d84f494-97c1-4708-b8df-444e42f55484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:50:02 np0005558241 nova_compute[248510]: 2025-12-13 08:50:02.256 248514 DEBUG nova.compute.manager [req-27beef2e-633f-4090-897a-a4de6cec5fc0 req-0481f37c-14f7-4429-96e2-2863ee514141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Refreshing instance network info cache due to event network-changed-8d84f494-97c1-4708-b8df-444e42f55484. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:50:02 np0005558241 nova_compute[248510]: 2025-12-13 08:50:02.256 248514 DEBUG oslo_concurrency.lockutils [req-27beef2e-633f-4090-897a-a4de6cec5fc0 req-0481f37c-14f7-4429-96e2-2863ee514141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:50:02 np0005558241 nova_compute[248510]: 2025-12-13 08:50:02.360 248514 DEBUG nova.network.neutron [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:50:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Dec 13 03:50:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Dec 13 03:50:02 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Dec 13 03:50:02 np0005558241 podman[354941]: 2025-12-13 08:50:02.608946708 +0000 UTC m=+0.042202589 container create e370bca12945b2dc4a1dae0fdfa44287636c2538206e40c887e63eddbebdab7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:50:02 np0005558241 systemd[1]: Started libpod-conmon-e370bca12945b2dc4a1dae0fdfa44287636c2538206e40c887e63eddbebdab7a.scope.
Dec 13 03:50:02 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:50:02 np0005558241 podman[354941]: 2025-12-13 08:50:02.59068582 +0000 UTC m=+0.023941721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:50:02 np0005558241 podman[354941]: 2025-12-13 08:50:02.690576705 +0000 UTC m=+0.123832606 container init e370bca12945b2dc4a1dae0fdfa44287636c2538206e40c887e63eddbebdab7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 03:50:02 np0005558241 podman[354941]: 2025-12-13 08:50:02.698086663 +0000 UTC m=+0.131342544 container start e370bca12945b2dc4a1dae0fdfa44287636c2538206e40c887e63eddbebdab7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_goodall, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 03:50:02 np0005558241 priceless_goodall[354957]: 167 167
Dec 13 03:50:02 np0005558241 systemd[1]: libpod-e370bca12945b2dc4a1dae0fdfa44287636c2538206e40c887e63eddbebdab7a.scope: Deactivated successfully.
Dec 13 03:50:02 np0005558241 podman[354941]: 2025-12-13 08:50:02.707875639 +0000 UTC m=+0.141131510 container attach e370bca12945b2dc4a1dae0fdfa44287636c2538206e40c887e63eddbebdab7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 03:50:02 np0005558241 podman[354941]: 2025-12-13 08:50:02.711241433 +0000 UTC m=+0.144497324 container died e370bca12945b2dc4a1dae0fdfa44287636c2538206e40c887e63eddbebdab7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 03:50:02 np0005558241 systemd[1]: var-lib-containers-storage-overlay-fdce28a27de5d06019b981e271a3a5c5f9c16727fa70b449f1bbc2c361ea39cc-merged.mount: Deactivated successfully.
Dec 13 03:50:02 np0005558241 podman[354941]: 2025-12-13 08:50:02.762280383 +0000 UTC m=+0.195536294 container remove e370bca12945b2dc4a1dae0fdfa44287636c2538206e40c887e63eddbebdab7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_goodall, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 03:50:02 np0005558241 systemd[1]: libpod-conmon-e370bca12945b2dc4a1dae0fdfa44287636c2538206e40c887e63eddbebdab7a.scope: Deactivated successfully.
Dec 13 03:50:02 np0005558241 podman[354981]: 2025-12-13 08:50:02.951037186 +0000 UTC m=+0.044763344 container create d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:50:02 np0005558241 systemd[1]: Started libpod-conmon-d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1.scope.
Dec 13 03:50:03 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:50:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25e93c80d4a66e0260f051f22b1366c81d8f06882a63e94f06939154234ba7fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25e93c80d4a66e0260f051f22b1366c81d8f06882a63e94f06939154234ba7fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25e93c80d4a66e0260f051f22b1366c81d8f06882a63e94f06939154234ba7fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25e93c80d4a66e0260f051f22b1366c81d8f06882a63e94f06939154234ba7fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:03 np0005558241 podman[354981]: 2025-12-13 08:50:03.030544869 +0000 UTC m=+0.124271057 container init d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_babbage, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 03:50:03 np0005558241 podman[354981]: 2025-12-13 08:50:02.933384213 +0000 UTC m=+0.027110391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:50:03 np0005558241 podman[354981]: 2025-12-13 08:50:03.038413866 +0000 UTC m=+0.132140024 container start d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_babbage, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 03:50:03 np0005558241 podman[354981]: 2025-12-13 08:50:03.042353075 +0000 UTC m=+0.136079243 container attach d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 03:50:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Dec 13 03:50:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Dec 13 03:50:03 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Dec 13 03:50:03 np0005558241 lvm[355077]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:50:03 np0005558241 lvm[355076]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:50:03 np0005558241 lvm[355077]: VG ceph_vg1 finished
Dec 13 03:50:03 np0005558241 lvm[355076]: VG ceph_vg0 finished
Dec 13 03:50:03 np0005558241 lvm[355079]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:50:03 np0005558241 lvm[355079]: VG ceph_vg2 finished
Dec 13 03:50:03 np0005558241 intelligent_babbage[354998]: {}
Dec 13 03:50:03 np0005558241 systemd[1]: libpod-d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1.scope: Deactivated successfully.
Dec 13 03:50:03 np0005558241 systemd[1]: libpod-d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1.scope: Consumed 1.309s CPU time.
Dec 13 03:50:03 np0005558241 podman[354981]: 2025-12-13 08:50:03.87891604 +0000 UTC m=+0.972642198 container died d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:50:03 np0005558241 nova_compute[248510]: 2025-12-13 08:50:03.887 248514 DEBUG nova.network.neutron [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updating instance_info_cache with network_info: [{"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:50:03 np0005558241 systemd[1]: var-lib-containers-storage-overlay-25e93c80d4a66e0260f051f22b1366c81d8f06882a63e94f06939154234ba7fa-merged.mount: Deactivated successfully.
Dec 13 03:50:03 np0005558241 podman[354981]: 2025-12-13 08:50:03.929293613 +0000 UTC m=+1.023019761 container remove d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_babbage, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:50:03 np0005558241 systemd[1]: libpod-conmon-d602e4fe243f3b4ddef81ce173275de57257d31f0a451d8e2edf966c67fcc6b1.scope: Deactivated successfully.
Dec 13 03:50:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:50:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:50:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.001 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Releasing lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.002 248514 DEBUG nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Instance network_info: |[{"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.002 248514 DEBUG oslo_concurrency.lockutils [req-27beef2e-633f-4090-897a-a4de6cec5fc0 req-0481f37c-14f7-4429-96e2-2863ee514141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.002 248514 DEBUG nova.network.neutron [req-27beef2e-633f-4090-897a-a4de6cec5fc0 req-0481f37c-14f7-4429-96e2-2863ee514141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Refreshing network info cache for port 8d84f494-97c1-4708-b8df-444e42f55484 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:50:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.005 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Start _get_guest_xml network_info=[{"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.011 248514 WARNING nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.019 248514 DEBUG nova.virt.libvirt.host [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.020 248514 DEBUG nova.virt.libvirt.host [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.024 248514 DEBUG nova.virt.libvirt.host [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.025 248514 DEBUG nova.virt.libvirt.host [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.025 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.025 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.026 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.026 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.026 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.026 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.026 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.026 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.027 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.027 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.027 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.027 248514 DEBUG nova.virt.hardware [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.031 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:50:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2634: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 151 KiB/s rd, 5.3 MiB/s wr, 223 op/s
Dec 13 03:50:04 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:50:04 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:50:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:50:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1327126050' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.616 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.638 248514 DEBUG nova.storage.rbd_utils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 8d919892-73fd-4a11-be79-2a1a9280a987_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:50:04 np0005558241 nova_compute[248510]: 2025-12-13 08:50:04.642 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:50:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:50:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Dec 13 03:50:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Dec 13 03:50:05 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Dec 13 03:50:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:50:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/506370385' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.225 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.227 248514 DEBUG nova.virt.libvirt.vif [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:49:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-415223340',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-415223340',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=107,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHr0tqCFuWCJmy9b1oflh+eI2eFf4+EocjRlIktK5eBb1qKCRDnUDy1aaX29KorZ+3GlNFyu3OLFeV0DAzkIQCZVEoFaSpXloiDb56F5zzuYqySsB4TKa4TVSZgtsxz/9w==',key_name='tempest-TestSecurityGroupsBasicOps-305772843',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-rsplf7ip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:49:58Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=8d919892-73fd-4a11-be79-2a1a9280a987,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.228 248514 DEBUG nova.network.os_vif_util [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.229 248514 DEBUG nova.network.os_vif_util [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=8d84f494-97c1-4708-b8df-444e42f55484,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d84f494-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.230 248514 DEBUG nova.objects.instance [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 8d919892-73fd-4a11-be79-2a1a9280a987 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.252 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:50:05 np0005558241 nova_compute[248510]:  <uuid>8d919892-73fd-4a11-be79-2a1a9280a987</uuid>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:  <name>instance-0000006b</name>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-415223340</nova:name>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:50:04</nova:creationTime>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:50:05 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:        <nova:user uuid="c377508dda354c0aa762d15d52aa130c">tempest-TestSecurityGroupsBasicOps-1856512026-project-member</nova:user>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:        <nova:project uuid="1064539fdf2b494ea705dd5e74afcd3b">tempest-TestSecurityGroupsBasicOps-1856512026</nova:project>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:        <nova:port uuid="8d84f494-97c1-4708-b8df-444e42f55484">
Dec 13 03:50:05 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <entry name="serial">8d919892-73fd-4a11-be79-2a1a9280a987</entry>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <entry name="uuid">8d919892-73fd-4a11-be79-2a1a9280a987</entry>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/8d919892-73fd-4a11-be79-2a1a9280a987_disk">
Dec 13 03:50:05 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:50:05 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/8d919892-73fd-4a11-be79-2a1a9280a987_disk.config">
Dec 13 03:50:05 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:50:05 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:87:03:ce"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <target dev="tap8d84f494-97"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987/console.log" append="off"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:50:05 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:50:05 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:50:05 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:50:05 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.252 248514 DEBUG nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Preparing to wait for external event network-vif-plugged-8d84f494-97c1-4708-b8df-444e42f55484 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.253 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.253 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.253 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.254 248514 DEBUG nova.virt.libvirt.vif [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:49:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-415223340',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-415223340',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=107,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHr0tqCFuWCJmy9b1oflh+eI2eFf4+EocjRlIktK5eBb1qKCRDnUDy1aaX29KorZ+3GlNFyu3OLFeV0DAzkIQCZVEoFaSpXloiDb56F5zzuYqySsB4TKa4TVSZgtsxz/9w==',key_name='tempest-TestSecurityGroupsBasicOps-305772843',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-rsplf7ip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:49:58Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=8d919892-73fd-4a11-be79-2a1a9280a987,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.255 248514 DEBUG nova.network.os_vif_util [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.255 248514 DEBUG nova.network.os_vif_util [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=8d84f494-97c1-4708-b8df-444e42f55484,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d84f494-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.256 248514 DEBUG os_vif [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=8d84f494-97c1-4708-b8df-444e42f55484,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d84f494-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.256 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.257 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.257 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.261 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.261 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d84f494-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.262 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d84f494-97, col_values=(('external_ids', {'iface-id': '8d84f494-97c1-4708-b8df-444e42f55484', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:03:ce', 'vm-uuid': '8d919892-73fd-4a11-be79-2a1a9280a987'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.263 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:05 np0005558241 NetworkManager[50376]: <info>  [1765615805.2654] manager: (tap8d84f494-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.267 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.272 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.273 248514 INFO os_vif [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=8d84f494-97c1-4708-b8df-444e42f55484,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d84f494-97')#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.346 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.347 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.347 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No VIF found with MAC fa:16:3e:87:03:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.348 248514 INFO nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Using config drive#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.373 248514 DEBUG nova.storage.rbd_utils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 8d919892-73fd-4a11-be79-2a1a9280a987_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.803 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.803 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.864 248514 INFO nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Creating config drive at /var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987/disk.config#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.871 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwev1jesv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.957 248514 DEBUG nova.network.neutron [req-27beef2e-633f-4090-897a-a4de6cec5fc0 req-0481f37c-14f7-4429-96e2-2863ee514141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updated VIF entry in instance network info cache for port 8d84f494-97c1-4708-b8df-444e42f55484. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.958 248514 DEBUG nova.network.neutron [req-27beef2e-633f-4090-897a-a4de6cec5fc0 req-0481f37c-14f7-4429-96e2-2863ee514141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updating instance_info_cache with network_info: [{"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:50:05 np0005558241 nova_compute[248510]: 2025-12-13 08:50:05.987 248514 DEBUG oslo_concurrency.lockutils [req-27beef2e-633f-4090-897a-a4de6cec5fc0 req-0481f37c-14f7-4429-96e2-2863ee514141 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:50:06 np0005558241 nova_compute[248510]: 2025-12-13 08:50:06.018 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwev1jesv" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:50:06 np0005558241 nova_compute[248510]: 2025-12-13 08:50:06.042 248514 DEBUG nova.storage.rbd_utils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 8d919892-73fd-4a11-be79-2a1a9280a987_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:50:06 np0005558241 nova_compute[248510]: 2025-12-13 08:50:06.046 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987/disk.config 8d919892-73fd-4a11-be79-2a1a9280a987_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:50:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2636: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 128 KiB/s rd, 4.5 MiB/s wr, 189 op/s
Dec 13 03:50:06 np0005558241 nova_compute[248510]: 2025-12-13 08:50:06.334 248514 DEBUG oslo_concurrency.processutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987/disk.config 8d919892-73fd-4a11-be79-2a1a9280a987_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:50:06 np0005558241 nova_compute[248510]: 2025-12-13 08:50:06.335 248514 INFO nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Deleting local config drive /var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987/disk.config because it was imported into RBD.#033[00m
Dec 13 03:50:06 np0005558241 kernel: tap8d84f494-97: entered promiscuous mode
Dec 13 03:50:06 np0005558241 NetworkManager[50376]: <info>  [1765615806.3915] manager: (tap8d84f494-97): new Tun device (/org/freedesktop/NetworkManager/Devices/431)
Dec 13 03:50:06 np0005558241 systemd-udevd[355074]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:50:06 np0005558241 ovn_controller[148476]: 2025-12-13T08:50:06Z|01047|binding|INFO|Claiming lport 8d84f494-97c1-4708-b8df-444e42f55484 for this chassis.
Dec 13 03:50:06 np0005558241 ovn_controller[148476]: 2025-12-13T08:50:06Z|01048|binding|INFO|8d84f494-97c1-4708-b8df-444e42f55484: Claiming fa:16:3e:87:03:ce 10.100.0.11
Dec 13 03:50:06 np0005558241 nova_compute[248510]: 2025-12-13 08:50:06.392 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:06 np0005558241 nova_compute[248510]: 2025-12-13 08:50:06.396 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:06 np0005558241 NetworkManager[50376]: <info>  [1765615806.4063] device (tap8d84f494-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:50:06 np0005558241 NetworkManager[50376]: <info>  [1765615806.4073] device (tap8d84f494-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.409 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:03:ce 10.100.0.11'], port_security=['fa:16:3e:87:03:ce 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8d919892-73fd-4a11-be79-2a1a9280a987', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c8d25b9-2d2e-4d46-aaac-2d96c7d8db60 8b6ad7fe-fef3-4e9f-8568-5f8d9cabfe04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d2f617a-f802-4ec5-b71a-0c176c8c3ae6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=8d84f494-97c1-4708-b8df-444e42f55484) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.410 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 8d84f494-97c1-4708-b8df-444e42f55484 in datapath abf04c22-5ac7-46ee-bfad-53f95095fba3 bound to our chassis#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.412 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abf04c22-5ac7-46ee-bfad-53f95095fba3#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.423 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f80541-3b26-4317-8b56-fac69c52ec9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.424 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabf04c22-51 in ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.426 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabf04c22-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.426 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[66d63047-507c-48f3-9610-4d00c8599b4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.427 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[86727e10-733e-4c87-bf07-923fa2391cb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:06 np0005558241 systemd-machined[210538]: New machine qemu-133-instance-0000006b.
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.437 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1c6e24-97f2-4c89-92ea-9443fb6fa859]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:06 np0005558241 nova_compute[248510]: 2025-12-13 08:50:06.455 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:06 np0005558241 systemd[1]: Started Virtual Machine qemu-133-instance-0000006b.
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.460 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d90d3c-1e52-4fdd-b0df-28629171d423]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:06 np0005558241 ovn_controller[148476]: 2025-12-13T08:50:06Z|01049|binding|INFO|Setting lport 8d84f494-97c1-4708-b8df-444e42f55484 ovn-installed in OVS
Dec 13 03:50:06 np0005558241 ovn_controller[148476]: 2025-12-13T08:50:06Z|01050|binding|INFO|Setting lport 8d84f494-97c1-4708-b8df-444e42f55484 up in Southbound
Dec 13 03:50:06 np0005558241 nova_compute[248510]: 2025-12-13 08:50:06.464 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.491 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c8da15cc-ebdd-452d-b7a3-00abd5f49378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:06 np0005558241 NetworkManager[50376]: <info>  [1765615806.4982] manager: (tapabf04c22-50): new Veth device (/org/freedesktop/NetworkManager/Devices/432)
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.497 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a6869f6c-3cde-4b90-ac1c-2f18f44d086a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.528 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e29fab5c-cba4-4369-9217-77d68af8c35d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.530 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1754fd29-5c10-45fe-b78d-8b8790dcbb01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:06 np0005558241 NetworkManager[50376]: <info>  [1765615806.5553] device (tapabf04c22-50): carrier: link connected
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.561 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ee91d2b7-dc3d-4899-8df0-e9cad23fcb77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.577 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1e954ae1-ab88-4b17-8c86-cf7085a1bb5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabf04c22-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:36:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 311], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 838377, 'reachable_time': 33674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355288, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.592 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[65057111-031c-4295-ada4-626f893c1156]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:36c6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 838377, 'tstamp': 838377}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355289, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.610 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7269fbd2-57e7-47c5-a459-a5ef839ad9da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabf04c22-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:36:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 311], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 838377, 'reachable_time': 33674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355290, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.640 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f7166421-7766-475c-a1dd-0966ce042042]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.709 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[757dba42-bede-42fd-bb70-2b5dd8900ed9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.711 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabf04c22-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.712 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.712 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabf04c22-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:50:06 np0005558241 NetworkManager[50376]: <info>  [1765615806.7151] manager: (tapabf04c22-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/433)
Dec 13 03:50:06 np0005558241 nova_compute[248510]: 2025-12-13 08:50:06.714 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:06 np0005558241 kernel: tapabf04c22-50: entered promiscuous mode
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.717 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabf04c22-50, col_values=(('external_ids', {'iface-id': '6b94eeb9-e344-4933-88eb-29577cf3087f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:50:06 np0005558241 ovn_controller[148476]: 2025-12-13T08:50:06Z|01051|binding|INFO|Releasing lport 6b94eeb9-e344-4933-88eb-29577cf3087f from this chassis (sb_readonly=0)
Dec 13 03:50:06 np0005558241 nova_compute[248510]: 2025-12-13 08:50:06.718 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:06 np0005558241 nova_compute[248510]: 2025-12-13 08:50:06.719 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.720 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abf04c22-5ac7-46ee-bfad-53f95095fba3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abf04c22-5ac7-46ee-bfad-53f95095fba3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.721 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[16428ed9-8e3f-4a01-aa5d-df314fc3e228]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.722 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-abf04c22-5ac7-46ee-bfad-53f95095fba3
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/abf04c22-5ac7-46ee-bfad-53f95095fba3.pid.haproxy
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID abf04c22-5ac7-46ee-bfad-53f95095fba3
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:50:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:06.723 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'env', 'PROCESS_TAG=haproxy-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abf04c22-5ac7-46ee-bfad-53f95095fba3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:50:06 np0005558241 nova_compute[248510]: 2025-12-13 08:50:06.753 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Dec 13 03:50:07 np0005558241 nova_compute[248510]: 2025-12-13 08:50:07.076 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:07 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Dec 13 03:50:07 np0005558241 podman[355347]: 2025-12-13 08:50:07.073589329 +0000 UTC m=+0.022789752 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:50:07 np0005558241 podman[355347]: 2025-12-13 08:50:07.735762552 +0000 UTC m=+0.684962965 container create 0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:50:07 np0005558241 nova_compute[248510]: 2025-12-13 08:50:07.750 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615807.7500963, 8d919892-73fd-4a11-be79-2a1a9280a987 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:50:07 np0005558241 nova_compute[248510]: 2025-12-13 08:50:07.750 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] VM Started (Lifecycle Event)#033[00m
Dec 13 03:50:07 np0005558241 nova_compute[248510]: 2025-12-13 08:50:07.779 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:50:07 np0005558241 nova_compute[248510]: 2025-12-13 08:50:07.784 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615807.7502682, 8d919892-73fd-4a11-be79-2a1a9280a987 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:50:07 np0005558241 nova_compute[248510]: 2025-12-13 08:50:07.785 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:50:07 np0005558241 systemd[1]: Started libpod-conmon-0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112.scope.
Dec 13 03:50:07 np0005558241 nova_compute[248510]: 2025-12-13 08:50:07.809 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:50:07 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:50:07 np0005558241 nova_compute[248510]: 2025-12-13 08:50:07.814 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:50:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7015d7066325922029e49e3e8c4de1d3ef5bbeec39e3ee942291a993c00912b6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:07 np0005558241 nova_compute[248510]: 2025-12-13 08:50:07.841 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:50:08 np0005558241 podman[355347]: 2025-12-13 08:50:08.046270646 +0000 UTC m=+0.995471059 container init 0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:50:08 np0005558241 podman[355347]: 2025-12-13 08:50:08.052295027 +0000 UTC m=+1.001495430 container start 0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 03:50:08 np0005558241 neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3[355380]: [NOTICE]   (355384) : New worker (355386) forked
Dec 13 03:50:08 np0005558241 neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3[355380]: [NOTICE]   (355384) : Loading success.
Dec 13 03:50:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2638: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 3.7 MiB/s wr, 169 op/s
Dec 13 03:50:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:08.952 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:50:08 np0005558241 nova_compute[248510]: 2025-12-13 08:50:08.952 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:08.954 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:50:08 np0005558241 nova_compute[248510]: 2025-12-13 08:50:08.969 248514 DEBUG nova.compute.manager [req-01fab348-6bb4-4041-a45b-358247f28293 req-3fc17e0e-6820-4f1c-a634-12f892708401 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received event network-vif-plugged-8d84f494-97c1-4708-b8df-444e42f55484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:50:08 np0005558241 nova_compute[248510]: 2025-12-13 08:50:08.970 248514 DEBUG oslo_concurrency.lockutils [req-01fab348-6bb4-4041-a45b-358247f28293 req-3fc17e0e-6820-4f1c-a634-12f892708401 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:50:08 np0005558241 nova_compute[248510]: 2025-12-13 08:50:08.970 248514 DEBUG oslo_concurrency.lockutils [req-01fab348-6bb4-4041-a45b-358247f28293 req-3fc17e0e-6820-4f1c-a634-12f892708401 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:50:08 np0005558241 nova_compute[248510]: 2025-12-13 08:50:08.970 248514 DEBUG oslo_concurrency.lockutils [req-01fab348-6bb4-4041-a45b-358247f28293 req-3fc17e0e-6820-4f1c-a634-12f892708401 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:50:08 np0005558241 nova_compute[248510]: 2025-12-13 08:50:08.970 248514 DEBUG nova.compute.manager [req-01fab348-6bb4-4041-a45b-358247f28293 req-3fc17e0e-6820-4f1c-a634-12f892708401 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Processing event network-vif-plugged-8d84f494-97c1-4708-b8df-444e42f55484 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:50:08 np0005558241 nova_compute[248510]: 2025-12-13 08:50:08.971 248514 DEBUG nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:50:08 np0005558241 nova_compute[248510]: 2025-12-13 08:50:08.975 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615808.9756334, 8d919892-73fd-4a11-be79-2a1a9280a987 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:50:08 np0005558241 nova_compute[248510]: 2025-12-13 08:50:08.976 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:50:08 np0005558241 nova_compute[248510]: 2025-12-13 08:50:08.977 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:50:08 np0005558241 nova_compute[248510]: 2025-12-13 08:50:08.981 248514 INFO nova.virt.libvirt.driver [-] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Instance spawned successfully.#033[00m
Dec 13 03:50:08 np0005558241 nova_compute[248510]: 2025-12-13 08:50:08.982 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:50:09 np0005558241 nova_compute[248510]: 2025-12-13 08:50:09.004 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:50:09 np0005558241 nova_compute[248510]: 2025-12-13 08:50:09.009 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:50:09 np0005558241 nova_compute[248510]: 2025-12-13 08:50:09.010 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:50:09 np0005558241 nova_compute[248510]: 2025-12-13 08:50:09.010 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:50:09 np0005558241 nova_compute[248510]: 2025-12-13 08:50:09.011 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:50:09 np0005558241 nova_compute[248510]: 2025-12-13 08:50:09.011 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:50:09 np0005558241 nova_compute[248510]: 2025-12-13 08:50:09.012 248514 DEBUG nova.virt.libvirt.driver [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:50:09 np0005558241 nova_compute[248510]: 2025-12-13 08:50:09.016 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:50:09 np0005558241 nova_compute[248510]: 2025-12-13 08:50:09.049 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:50:09 np0005558241 nova_compute[248510]: 2025-12-13 08:50:09.100 248514 INFO nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Took 10.66 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:50:09 np0005558241 nova_compute[248510]: 2025-12-13 08:50:09.101 248514 DEBUG nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:50:09 np0005558241 nova_compute[248510]: 2025-12-13 08:50:09.183 248514 INFO nova.compute.manager [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Took 12.13 seconds to build instance.#033[00m
Dec 13 03:50:09 np0005558241 nova_compute[248510]: 2025-12-13 08:50:09.220 248514 DEBUG oslo_concurrency.lockutils [None req-cee2264b-679b-4463-8a7e-7566a4ba5e44 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:50:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:50:09
Dec 13 03:50:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:50:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:50:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'vms']
Dec 13 03:50:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:50:09 np0005558241 nova_compute[248510]: 2025-12-13 08:50:09.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:50:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:50:09 np0005558241 nova_compute[248510]: 2025-12-13 08:50:09.800 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:50:09 np0005558241 nova_compute[248510]: 2025-12-13 08:50:09.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:50:09 np0005558241 nova_compute[248510]: 2025-12-13 08:50:09.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:50:09 np0005558241 nova_compute[248510]: 2025-12-13 08:50:09.801 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:50:09 np0005558241 nova_compute[248510]: 2025-12-13 08:50:09.801 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:50:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2639: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 2.0 MiB/s wr, 149 op/s
Dec 13 03:50:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:50:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:50:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:50:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:50:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:50:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:50:10 np0005558241 nova_compute[248510]: 2025-12-13 08:50:10.264 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:50:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2688414309' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:50:10 np0005558241 nova_compute[248510]: 2025-12-13 08:50:10.388 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:50:10 np0005558241 nova_compute[248510]: 2025-12-13 08:50:10.476 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:50:10 np0005558241 nova_compute[248510]: 2025-12-13 08:50:10.476 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:50:10 np0005558241 nova_compute[248510]: 2025-12-13 08:50:10.632 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:50:10 np0005558241 nova_compute[248510]: 2025-12-13 08:50:10.634 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3647MB free_disk=59.96662239357829GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:50:10 np0005558241 nova_compute[248510]: 2025-12-13 08:50:10.634 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:50:10 np0005558241 nova_compute[248510]: 2025-12-13 08:50:10.634 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:50:10 np0005558241 nova_compute[248510]: 2025-12-13 08:50:10.745 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 8d919892-73fd-4a11-be79-2a1a9280a987 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:50:10 np0005558241 nova_compute[248510]: 2025-12-13 08:50:10.745 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:50:10 np0005558241 nova_compute[248510]: 2025-12-13 08:50:10.746 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:50:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:50:10 np0005558241 nova_compute[248510]: 2025-12-13 08:50:10.785 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:50:11 np0005558241 nova_compute[248510]: 2025-12-13 08:50:11.093 248514 DEBUG nova.compute.manager [req-0eb4336a-5b00-4c51-827e-fbaec608f5fe req-42f22a6f-c3d4-45c3-b8f2-094c39fe3b93 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received event network-vif-plugged-8d84f494-97c1-4708-b8df-444e42f55484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:50:11 np0005558241 nova_compute[248510]: 2025-12-13 08:50:11.093 248514 DEBUG oslo_concurrency.lockutils [req-0eb4336a-5b00-4c51-827e-fbaec608f5fe req-42f22a6f-c3d4-45c3-b8f2-094c39fe3b93 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:50:11 np0005558241 nova_compute[248510]: 2025-12-13 08:50:11.094 248514 DEBUG oslo_concurrency.lockutils [req-0eb4336a-5b00-4c51-827e-fbaec608f5fe req-42f22a6f-c3d4-45c3-b8f2-094c39fe3b93 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:50:11 np0005558241 nova_compute[248510]: 2025-12-13 08:50:11.094 248514 DEBUG oslo_concurrency.lockutils [req-0eb4336a-5b00-4c51-827e-fbaec608f5fe req-42f22a6f-c3d4-45c3-b8f2-094c39fe3b93 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:50:11 np0005558241 nova_compute[248510]: 2025-12-13 08:50:11.094 248514 DEBUG nova.compute.manager [req-0eb4336a-5b00-4c51-827e-fbaec608f5fe req-42f22a6f-c3d4-45c3-b8f2-094c39fe3b93 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] No waiting events found dispatching network-vif-plugged-8d84f494-97c1-4708-b8df-444e42f55484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:50:11 np0005558241 nova_compute[248510]: 2025-12-13 08:50:11.095 248514 WARNING nova.compute.manager [req-0eb4336a-5b00-4c51-827e-fbaec608f5fe req-42f22a6f-c3d4-45c3-b8f2-094c39fe3b93 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received unexpected event network-vif-plugged-8d84f494-97c1-4708-b8df-444e42f55484 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:50:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Dec 13 03:50:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Dec 13 03:50:11 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Dec 13 03:50:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:50:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2638770936' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:50:11 np0005558241 nova_compute[248510]: 2025-12-13 08:50:11.389 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:50:11 np0005558241 nova_compute[248510]: 2025-12-13 08:50:11.396 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:50:11 np0005558241 nova_compute[248510]: 2025-12-13 08:50:11.414 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:50:11 np0005558241 nova_compute[248510]: 2025-12-13 08:50:11.437 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:50:11 np0005558241 nova_compute[248510]: 2025-12-13 08:50:11.438 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:50:12 np0005558241 nova_compute[248510]: 2025-12-13 08:50:12.079 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2641: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 27 KiB/s wr, 62 op/s
Dec 13 03:50:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Dec 13 03:50:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Dec 13 03:50:12 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Dec 13 03:50:12 np0005558241 ovn_controller[148476]: 2025-12-13T08:50:12Z|01052|binding|INFO|Releasing lport 6b94eeb9-e344-4933-88eb-29577cf3087f from this chassis (sb_readonly=0)
Dec 13 03:50:12 np0005558241 NetworkManager[50376]: <info>  [1765615812.3901] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Dec 13 03:50:12 np0005558241 NetworkManager[50376]: <info>  [1765615812.3911] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/435)
Dec 13 03:50:12 np0005558241 nova_compute[248510]: 2025-12-13 08:50:12.394 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:12 np0005558241 nova_compute[248510]: 2025-12-13 08:50:12.438 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:50:12 np0005558241 nova_compute[248510]: 2025-12-13 08:50:12.439 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:50:12 np0005558241 ovn_controller[148476]: 2025-12-13T08:50:12Z|01053|binding|INFO|Releasing lport 6b94eeb9-e344-4933-88eb-29577cf3087f from this chassis (sb_readonly=0)
Dec 13 03:50:12 np0005558241 nova_compute[248510]: 2025-12-13 08:50:12.471 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:13 np0005558241 nova_compute[248510]: 2025-12-13 08:50:13.222 248514 DEBUG nova.compute.manager [req-25ef3645-5c6a-4ea4-8660-fd7f1bdd6e64 req-5a2592c2-4e89-45f5-aa87-e7eaed0fef81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received event network-changed-8d84f494-97c1-4708-b8df-444e42f55484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:50:13 np0005558241 nova_compute[248510]: 2025-12-13 08:50:13.222 248514 DEBUG nova.compute.manager [req-25ef3645-5c6a-4ea4-8660-fd7f1bdd6e64 req-5a2592c2-4e89-45f5-aa87-e7eaed0fef81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Refreshing instance network info cache due to event network-changed-8d84f494-97c1-4708-b8df-444e42f55484. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:50:13 np0005558241 nova_compute[248510]: 2025-12-13 08:50:13.223 248514 DEBUG oslo_concurrency.lockutils [req-25ef3645-5c6a-4ea4-8660-fd7f1bdd6e64 req-5a2592c2-4e89-45f5-aa87-e7eaed0fef81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:50:13 np0005558241 nova_compute[248510]: 2025-12-13 08:50:13.223 248514 DEBUG oslo_concurrency.lockutils [req-25ef3645-5c6a-4ea4-8660-fd7f1bdd6e64 req-5a2592c2-4e89-45f5-aa87-e7eaed0fef81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:50:13 np0005558241 nova_compute[248510]: 2025-12-13 08:50:13.224 248514 DEBUG nova.network.neutron [req-25ef3645-5c6a-4ea4-8660-fd7f1bdd6e64 req-5a2592c2-4e89-45f5-aa87-e7eaed0fef81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Refreshing network info cache for port 8d84f494-97c1-4708-b8df-444e42f55484 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:50:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Dec 13 03:50:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Dec 13 03:50:13 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Dec 13 03:50:13 np0005558241 nova_compute[248510]: 2025-12-13 08:50:13.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:50:13 np0005558241 nova_compute[248510]: 2025-12-13 08:50:13.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:50:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:13.958 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:50:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2644: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 36 KiB/s wr, 295 op/s
Dec 13 03:50:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Dec 13 03:50:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Dec 13 03:50:14 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Dec 13 03:50:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:50:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Dec 13 03:50:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Dec 13 03:50:14 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Dec 13 03:50:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:50:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1470245815' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:50:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:50:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1470245815' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:50:15 np0005558241 nova_compute[248510]: 2025-12-13 08:50:15.267 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Dec 13 03:50:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Dec 13 03:50:15 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Dec 13 03:50:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2648: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 9.3 KiB/s wr, 373 op/s
Dec 13 03:50:16 np0005558241 nova_compute[248510]: 2025-12-13 08:50:16.453 248514 DEBUG nova.network.neutron [req-25ef3645-5c6a-4ea4-8660-fd7f1bdd6e64 req-5a2592c2-4e89-45f5-aa87-e7eaed0fef81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updated VIF entry in instance network info cache for port 8d84f494-97c1-4708-b8df-444e42f55484. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:50:16 np0005558241 nova_compute[248510]: 2025-12-13 08:50:16.454 248514 DEBUG nova.network.neutron [req-25ef3645-5c6a-4ea4-8660-fd7f1bdd6e64 req-5a2592c2-4e89-45f5-aa87-e7eaed0fef81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updating instance_info_cache with network_info: [{"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:50:16 np0005558241 nova_compute[248510]: 2025-12-13 08:50:16.480 248514 DEBUG oslo_concurrency.lockutils [req-25ef3645-5c6a-4ea4-8660-fd7f1bdd6e64 req-5a2592c2-4e89-45f5-aa87-e7eaed0fef81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:50:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Dec 13 03:50:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Dec 13 03:50:16 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Dec 13 03:50:17 np0005558241 nova_compute[248510]: 2025-12-13 08:50:17.123 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:17 np0005558241 nova_compute[248510]: 2025-12-13 08:50:17.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:50:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2650: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 4.0 KiB/s wr, 74 op/s
Dec 13 03:50:18 np0005558241 nova_compute[248510]: 2025-12-13 08:50:18.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:50:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:50:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2651: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 8.6 KiB/s wr, 151 op/s
Dec 13 03:50:20 np0005558241 nova_compute[248510]: 2025-12-13 08:50:20.269 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:20 np0005558241 ovn_controller[148476]: 2025-12-13T08:50:20Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:03:ce 10.100.0.11
Dec 13 03:50:20 np0005558241 ovn_controller[148476]: 2025-12-13T08:50:20Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:03:ce 10.100.0.11
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00036111357405043337 of space, bias 1.0, pg target 0.10833407221513001 quantized to 32 (current 32)
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000669159180967263 of space, bias 1.0, pg target 0.2007477542901789 quantized to 32 (current 32)
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.734186507205972e-07 of space, bias 4.0, pg target 0.0006881023808647166 quantized to 16 (current 32)
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:50:21 np0005558241 nova_compute[248510]: 2025-12-13 08:50:21.790 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:50:21 np0005558241 nova_compute[248510]: 2025-12-13 08:50:21.790 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 13 03:50:21 np0005558241 nova_compute[248510]: 2025-12-13 08:50:21.809 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 13 03:50:21 np0005558241 podman[355444]: 2025-12-13 08:50:21.968270286 +0000 UTC m=+0.054268651 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 13 03:50:21 np0005558241 podman[355443]: 2025-12-13 08:50:21.973031476 +0000 UTC m=+0.066060397 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:50:22 np0005558241 podman[355442]: 2025-12-13 08:50:22.003535001 +0000 UTC m=+0.098890791 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:50:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2652: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 86 KiB/s rd, 6.7 KiB/s wr, 118 op/s
Dec 13 03:50:22 np0005558241 nova_compute[248510]: 2025-12-13 08:50:22.124 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2653: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 546 KiB/s rd, 3.1 MiB/s wr, 195 op/s
Dec 13 03:50:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:50:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Dec 13 03:50:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Dec 13 03:50:24 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Dec 13 03:50:25 np0005558241 nova_compute[248510]: 2025-12-13 08:50:25.273 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2655: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 488 KiB/s rd, 2.8 MiB/s wr, 174 op/s
Dec 13 03:50:27 np0005558241 nova_compute[248510]: 2025-12-13 08:50:27.127 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:27 np0005558241 nova_compute[248510]: 2025-12-13 08:50:27.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:50:27 np0005558241 nova_compute[248510]: 2025-12-13 08:50:27.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 13 03:50:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2656: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 432 KiB/s rd, 2.6 MiB/s wr, 132 op/s
Dec 13 03:50:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Dec 13 03:50:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Dec 13 03:50:29 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Dec 13 03:50:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:50:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2658: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 489 KiB/s rd, 3.2 MiB/s wr, 98 op/s
Dec 13 03:50:30 np0005558241 nova_compute[248510]: 2025-12-13 08:50:30.274 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:31 np0005558241 nova_compute[248510]: 2025-12-13 08:50:31.789 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:50:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2659: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 1.5 KiB/s rd, 2.7 KiB/s wr, 3 op/s
Dec 13 03:50:32 np0005558241 nova_compute[248510]: 2025-12-13 08:50:32.131 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:32 np0005558241 nova_compute[248510]: 2025-12-13 08:50:32.656 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Acquiring lock "35cf794f-191d-48c6-9cee-746c7f1345d6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:50:32 np0005558241 nova_compute[248510]: 2025-12-13 08:50:32.656 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "35cf794f-191d-48c6-9cee-746c7f1345d6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:50:32 np0005558241 nova_compute[248510]: 2025-12-13 08:50:32.677 248514 DEBUG nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:50:32 np0005558241 nova_compute[248510]: 2025-12-13 08:50:32.770 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:50:32 np0005558241 nova_compute[248510]: 2025-12-13 08:50:32.771 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:50:32 np0005558241 nova_compute[248510]: 2025-12-13 08:50:32.780 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:50:32 np0005558241 nova_compute[248510]: 2025-12-13 08:50:32.780 248514 INFO nova.compute.claims [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:50:32 np0005558241 nova_compute[248510]: 2025-12-13 08:50:32.924 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:50:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:50:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2968745251' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:50:33 np0005558241 nova_compute[248510]: 2025-12-13 08:50:33.488 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:50:33 np0005558241 nova_compute[248510]: 2025-12-13 08:50:33.493 248514 DEBUG nova.compute.provider_tree [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:50:33 np0005558241 nova_compute[248510]: 2025-12-13 08:50:33.516 248514 DEBUG nova.scheduler.client.report [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:50:33 np0005558241 nova_compute[248510]: 2025-12-13 08:50:33.541 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:50:33 np0005558241 nova_compute[248510]: 2025-12-13 08:50:33.541 248514 DEBUG nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:50:33 np0005558241 nova_compute[248510]: 2025-12-13 08:50:33.603 248514 DEBUG nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:50:33 np0005558241 nova_compute[248510]: 2025-12-13 08:50:33.603 248514 DEBUG nova.network.neutron [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:50:33 np0005558241 nova_compute[248510]: 2025-12-13 08:50:33.628 248514 INFO nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:50:33 np0005558241 nova_compute[248510]: 2025-12-13 08:50:33.650 248514 DEBUG nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:50:33 np0005558241 nova_compute[248510]: 2025-12-13 08:50:33.746 248514 DEBUG nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:50:33 np0005558241 nova_compute[248510]: 2025-12-13 08:50:33.747 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:50:33 np0005558241 nova_compute[248510]: 2025-12-13 08:50:33.747 248514 INFO nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Creating image(s)#033[00m
Dec 13 03:50:33 np0005558241 nova_compute[248510]: 2025-12-13 08:50:33.772 248514 DEBUG nova.storage.rbd_utils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] rbd image 35cf794f-191d-48c6-9cee-746c7f1345d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:50:33 np0005558241 nova_compute[248510]: 2025-12-13 08:50:33.794 248514 DEBUG nova.storage.rbd_utils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] rbd image 35cf794f-191d-48c6-9cee-746c7f1345d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:50:33 np0005558241 nova_compute[248510]: 2025-12-13 08:50:33.816 248514 DEBUG nova.storage.rbd_utils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] rbd image 35cf794f-191d-48c6-9cee-746c7f1345d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:50:33 np0005558241 nova_compute[248510]: 2025-12-13 08:50:33.820 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Acquiring lock "6788d23df91f0893ccfbbff5ab81ae6cb178abb0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:50:33 np0005558241 nova_compute[248510]: 2025-12-13 08:50:33.821 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "6788d23df91f0893ccfbbff5ab81ae6cb178abb0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:50:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2660: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 18 KiB/s wr, 16 op/s
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.100 248514 DEBUG nova.virt.libvirt.imagebackend [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Image locations are: [{'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/2a6d824f-44d5-41be-b030-32dfee0e816a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/2a6d824f-44d5-41be-b030-32dfee0e816a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.160 248514 DEBUG nova.virt.libvirt.imagebackend [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Selected location: {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/2a6d824f-44d5-41be-b030-32dfee0e816a/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.161 248514 DEBUG nova.storage.rbd_utils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] cloning images/2a6d824f-44d5-41be-b030-32dfee0e816a@snap to None/35cf794f-191d-48c6-9cee-746c7f1345d6_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.192 248514 DEBUG nova.network.neutron [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.193 248514 DEBUG nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.259 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "6788d23df91f0893ccfbbff5ab81ae6cb178abb0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.381 248514 DEBUG nova.storage.rbd_utils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] resizing rbd image 35cf794f-191d-48c6-9cee-746c7f1345d6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.448 248514 DEBUG nova.objects.instance [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lazy-loading 'migration_context' on Instance uuid 35cf794f-191d-48c6-9cee-746c7f1345d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.476 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.477 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Ensure instance console log exists: /var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.478 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.478 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.478 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.480 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='e4b42f14e17f85e97648723f4cbbb1bd',container_format='bare',created_at=2025-12-13T08:50:28Z,direct_url=<?>,disk_format='raw',id=2a6d824f-44d5-41be-b030-32dfee0e816a,min_disk=0,min_ram=0,name='tempest-image-dependency-test-509129331',owner='adc96857d97b46b8834f6fb544e19670',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-12-13T08:50:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '2a6d824f-44d5-41be-b030-32dfee0e816a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.485 248514 WARNING nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.491 248514 DEBUG nova.virt.libvirt.host [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.492 248514 DEBUG nova.virt.libvirt.host [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.495 248514 DEBUG nova.virt.libvirt.host [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.495 248514 DEBUG nova.virt.libvirt.host [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.496 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.496 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='e4b42f14e17f85e97648723f4cbbb1bd',container_format='bare',created_at=2025-12-13T08:50:28Z,direct_url=<?>,disk_format='raw',id=2a6d824f-44d5-41be-b030-32dfee0e816a,min_disk=0,min_ram=0,name='tempest-image-dependency-test-509129331',owner='adc96857d97b46b8834f6fb544e19670',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-12-13T08:50:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.497 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.497 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.497 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.497 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.498 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.498 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.498 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.499 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.499 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.499 248514 DEBUG nova.virt.hardware [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:50:34 np0005558241 nova_compute[248510]: 2025-12-13 08:50:34.502 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:50:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:50:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:50:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/649305437' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:50:35 np0005558241 nova_compute[248510]: 2025-12-13 08:50:35.099 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:50:35 np0005558241 nova_compute[248510]: 2025-12-13 08:50:35.121 248514 DEBUG nova.storage.rbd_utils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] rbd image 35cf794f-191d-48c6-9cee-746c7f1345d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:50:35 np0005558241 nova_compute[248510]: 2025-12-13 08:50:35.125 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:50:35 np0005558241 nova_compute[248510]: 2025-12-13 08:50:35.276 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:50:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2426095890' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:50:35 np0005558241 nova_compute[248510]: 2025-12-13 08:50:35.694 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:50:35 np0005558241 nova_compute[248510]: 2025-12-13 08:50:35.696 248514 DEBUG nova.objects.instance [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lazy-loading 'pci_devices' on Instance uuid 35cf794f-191d-48c6-9cee-746c7f1345d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:50:35 np0005558241 nova_compute[248510]: 2025-12-13 08:50:35.740 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:50:35 np0005558241 nova_compute[248510]:  <uuid>35cf794f-191d-48c6-9cee-746c7f1345d6</uuid>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:  <name>instance-0000006c</name>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <nova:name>instance-depend-image</nova:name>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:50:34</nova:creationTime>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:50:35 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:        <nova:user uuid="e7ff6f539f764ed2a1c838a697e60b52">tempest-ImageDependencyTests-1370295620-project-member</nova:user>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:        <nova:project uuid="adc96857d97b46b8834f6fb544e19670">tempest-ImageDependencyTests-1370295620</nova:project>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="2a6d824f-44d5-41be-b030-32dfee0e816a"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <entry name="serial">35cf794f-191d-48c6-9cee-746c7f1345d6</entry>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <entry name="uuid">35cf794f-191d-48c6-9cee-746c7f1345d6</entry>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/35cf794f-191d-48c6-9cee-746c7f1345d6_disk">
Dec 13 03:50:35 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:50:35 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/35cf794f-191d-48c6-9cee-746c7f1345d6_disk.config">
Dec 13 03:50:35 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:50:35 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6/console.log" append="off"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:50:35 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:50:35 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:50:35 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:50:35 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:50:35 np0005558241 nova_compute[248510]: 2025-12-13 08:50:35.814 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:50:35 np0005558241 nova_compute[248510]: 2025-12-13 08:50:35.815 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:50:35 np0005558241 nova_compute[248510]: 2025-12-13 08:50:35.816 248514 INFO nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Using config drive#033[00m
Dec 13 03:50:35 np0005558241 nova_compute[248510]: 2025-12-13 08:50:35.845 248514 DEBUG nova.storage.rbd_utils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] rbd image 35cf794f-191d-48c6-9cee-746c7f1345d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.060 248514 INFO nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Creating config drive at /var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6/disk.config#033[00m
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.066 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm9qjvcnn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:50:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2661: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 16 KiB/s wr, 15 op/s
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.219 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm9qjvcnn" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.220 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "67dc02f2-2430-4cc6-a575-bdfd238a5460" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.220 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.253 248514 DEBUG nova.storage.rbd_utils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] rbd image 35cf794f-191d-48c6-9cee-746c7f1345d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.257 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6/disk.config 35cf794f-191d-48c6-9cee-746c7f1345d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.322 248514 DEBUG nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.418 248514 DEBUG oslo_concurrency.processutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6/disk.config 35cf794f-191d-48c6-9cee-746c7f1345d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.419 248514 INFO nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Deleting local config drive /var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6/disk.config because it was imported into RBD.
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.422 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.423 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.430 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.431 248514 INFO nova.compute.claims [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:50:36 np0005558241 systemd-machined[210538]: New machine qemu-134-instance-0000006c.
Dec 13 03:50:36 np0005558241 systemd[1]: Started Virtual Machine qemu-134-instance-0000006c.
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.748 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.922 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615836.9207609, 35cf794f-191d-48c6-9cee-746c7f1345d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.923 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] VM Resumed (Lifecycle Event)
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.927 248514 DEBUG nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.927 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.931 248514 INFO nova.virt.libvirt.driver [-] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Instance spawned successfully.
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.932 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.957 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.963 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.967 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.967 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.968 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.968 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.969 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:50:36 np0005558241 nova_compute[248510]: 2025-12-13 08:50:36.969 248514 DEBUG nova.virt.libvirt.driver [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.008 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.011 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615836.922719, 35cf794f-191d-48c6-9cee-746c7f1345d6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.012 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] VM Started (Lifecycle Event)
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.043 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.048 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.078 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.084 248514 INFO nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Took 3.34 seconds to spawn the instance on the hypervisor.
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.084 248514 DEBUG nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.184 248514 INFO nova.compute.manager [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Took 4.45 seconds to build instance.
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.186 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.217 248514 DEBUG oslo_concurrency.lockutils [None req-336307ef-46af-49c6-ae33-8c5588c632d7 e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "35cf794f-191d-48c6-9cee-746c7f1345d6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:50:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:50:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1411183811' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.326 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.333 248514 DEBUG nova.compute.provider_tree [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.350 248514 DEBUG nova.scheduler.client.report [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.378 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.379 248514 DEBUG nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.441 248514 DEBUG nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.442 248514 DEBUG nova.network.neutron [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.469 248514 INFO nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.496 248514 DEBUG nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.601 248514 DEBUG nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.603 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.604 248514 INFO nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Creating image(s)
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.625 248514 DEBUG nova.storage.rbd_utils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.656 248514 DEBUG nova.storage.rbd_utils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.684 248514 DEBUG nova.storage.rbd_utils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.687 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.763 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.763 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.764 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.764 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.784 248514 DEBUG nova.storage.rbd_utils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.788 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:50:37 np0005558241 nova_compute[248510]: 2025-12-13 08:50:37.889 248514 DEBUG nova.policy [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c377508dda354c0aa762d15d52aa130c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:50:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2662: 321 pgs: 321 active+clean; 121 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 17 KiB/s wr, 25 op/s
Dec 13 03:50:38 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Dec 13 03:50:38 np0005558241 nova_compute[248510]: 2025-12-13 08:50:38.369 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:50:38 np0005558241 nova_compute[248510]: 2025-12-13 08:50:38.447 248514 DEBUG nova.storage.rbd_utils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] resizing rbd image 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:50:38 np0005558241 nova_compute[248510]: 2025-12-13 08:50:38.559 248514 DEBUG nova.objects.instance [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'migration_context' on Instance uuid 67dc02f2-2430-4cc6-a575-bdfd238a5460 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:50:38 np0005558241 nova_compute[248510]: 2025-12-13 08:50:38.577 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:50:38 np0005558241 nova_compute[248510]: 2025-12-13 08:50:38.577 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Ensure instance console log exists: /var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:50:38 np0005558241 nova_compute[248510]: 2025-12-13 08:50:38.578 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:50:38 np0005558241 nova_compute[248510]: 2025-12-13 08:50:38.578 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:50:38 np0005558241 nova_compute[248510]: 2025-12-13 08:50:38.578 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:50:39 np0005558241 nova_compute[248510]: 2025-12-13 08:50:39.304 248514 DEBUG nova.network.neutron [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Successfully created port: 5c3b12ae-3db8-4a4c-b558-19ab89a179ff _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 03:50:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:50:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2663: 321 pgs: 321 active+clean; 150 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.1 MiB/s wr, 84 op/s
Dec 13 03:50:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:50:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:50:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:50:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:50:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:50:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:50:40 np0005558241 nova_compute[248510]: 2025-12-13 08:50:40.136 248514 DEBUG nova.compute.manager [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:50:40 np0005558241 nova_compute[248510]: 2025-12-13 08:50:40.183 248514 INFO nova.compute.manager [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] instance snapshotting#033[00m
Dec 13 03:50:40 np0005558241 nova_compute[248510]: 2025-12-13 08:50:40.278 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:40 np0005558241 nova_compute[248510]: 2025-12-13 08:50:40.419 248514 INFO nova.virt.libvirt.driver [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Beginning live snapshot process#033[00m
Dec 13 03:50:40 np0005558241 nova_compute[248510]: 2025-12-13 08:50:40.559 248514 DEBUG nova.storage.rbd_utils [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] creating snapshot(c19d2b933f9942ada3db82e0a518b226) on rbd image(35cf794f-191d-48c6-9cee-746c7f1345d6_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:50:40 np0005558241 nova_compute[248510]: 2025-12-13 08:50:40.806 248514 DEBUG nova.network.neutron [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Successfully updated port: 5c3b12ae-3db8-4a4c-b558-19ab89a179ff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:50:40 np0005558241 nova_compute[248510]: 2025-12-13 08:50:40.835 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "refresh_cache-67dc02f2-2430-4cc6-a575-bdfd238a5460" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:50:40 np0005558241 nova_compute[248510]: 2025-12-13 08:50:40.835 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquired lock "refresh_cache-67dc02f2-2430-4cc6-a575-bdfd238a5460" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:50:40 np0005558241 nova_compute[248510]: 2025-12-13 08:50:40.835 248514 DEBUG nova.network.neutron [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:50:40 np0005558241 nova_compute[248510]: 2025-12-13 08:50:40.921 248514 DEBUG nova.compute.manager [req-69422283-9ef6-484b-83dc-c3ea31780e5a req-d3b1eae6-da61-4226-a245-f4cc4467c241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Received event network-changed-5c3b12ae-3db8-4a4c-b558-19ab89a179ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:50:40 np0005558241 nova_compute[248510]: 2025-12-13 08:50:40.922 248514 DEBUG nova.compute.manager [req-69422283-9ef6-484b-83dc-c3ea31780e5a req-d3b1eae6-da61-4226-a245-f4cc4467c241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Refreshing instance network info cache due to event network-changed-5c3b12ae-3db8-4a4c-b558-19ab89a179ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:50:40 np0005558241 nova_compute[248510]: 2025-12-13 08:50:40.922 248514 DEBUG oslo_concurrency.lockutils [req-69422283-9ef6-484b-83dc-c3ea31780e5a req-d3b1eae6-da61-4226-a245-f4cc4467c241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-67dc02f2-2430-4cc6-a575-bdfd238a5460" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:50:41 np0005558241 nova_compute[248510]: 2025-12-13 08:50:41.008 248514 DEBUG nova.network.neutron [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:50:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Dec 13 03:50:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Dec 13 03:50:41 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Dec 13 03:50:41 np0005558241 nova_compute[248510]: 2025-12-13 08:50:41.457 248514 DEBUG nova.storage.rbd_utils [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] cloning vms/35cf794f-191d-48c6-9cee-746c7f1345d6_disk@c19d2b933f9942ada3db82e0a518b226 to images/25f6e4ed-352a-433b-befc-4cedd791f581 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:50:41 np0005558241 nova_compute[248510]: 2025-12-13 08:50:41.544 248514 DEBUG nova.storage.rbd_utils [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] flattening images/25f6e4ed-352a-433b-befc-4cedd791f581 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec 13 03:50:41 np0005558241 nova_compute[248510]: 2025-12-13 08:50:41.689 248514 DEBUG nova.storage.rbd_utils [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] removing snapshot(c19d2b933f9942ada3db82e0a518b226) on rbd image(35cf794f-191d-48c6-9cee-746c7f1345d6_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec 13 03:50:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2665: 321 pgs: 321 active+clean; 150 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 1.2 MiB/s wr, 91 op/s
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.189 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Dec 13 03:50:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Dec 13 03:50:42 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.889 248514 DEBUG nova.network.neutron [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Updating instance_info_cache with network_info: [{"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.917 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Releasing lock "refresh_cache-67dc02f2-2430-4cc6-a575-bdfd238a5460" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.917 248514 DEBUG nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Instance network_info: |[{"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.918 248514 DEBUG oslo_concurrency.lockutils [req-69422283-9ef6-484b-83dc-c3ea31780e5a req-d3b1eae6-da61-4226-a245-f4cc4467c241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-67dc02f2-2430-4cc6-a575-bdfd238a5460" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.918 248514 DEBUG nova.network.neutron [req-69422283-9ef6-484b-83dc-c3ea31780e5a req-d3b1eae6-da61-4226-a245-f4cc4467c241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Refreshing network info cache for port 5c3b12ae-3db8-4a4c-b558-19ab89a179ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.921 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Start _get_guest_xml network_info=[{"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.925 248514 WARNING nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.930 248514 DEBUG nova.virt.libvirt.host [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.931 248514 DEBUG nova.virt.libvirt.host [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.933 248514 DEBUG nova.virt.libvirt.host [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.934 248514 DEBUG nova.virt.libvirt.host [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.934 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.935 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.935 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.935 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.936 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.936 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.936 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.937 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.937 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.937 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.938 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.938 248514 DEBUG nova.virt.hardware [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:50:42 np0005558241 nova_compute[248510]: 2025-12-13 08:50:42.941 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:50:43 np0005558241 nova_compute[248510]: 2025-12-13 08:50:43.057 248514 DEBUG nova.storage.rbd_utils [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] creating snapshot(snap) on rbd image(25f6e4ed-352a-433b-befc-4cedd791f581) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:50:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:50:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/855967601' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:50:43 np0005558241 nova_compute[248510]: 2025-12-13 08:50:43.536 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:50:43 np0005558241 nova_compute[248510]: 2025-12-13 08:50:43.563 248514 DEBUG nova.storage.rbd_utils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:50:43 np0005558241 nova_compute[248510]: 2025-12-13 08:50:43.567 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:50:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Dec 13 03:50:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Dec 13 03:50:43 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Dec 13 03:50:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2668: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 3.6 MiB/s wr, 240 op/s
Dec 13 03:50:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:50:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/697935915' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.185 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.187 248514 DEBUG nova.virt.libvirt.vif [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:50:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-0-1507938238',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-0-1507938238',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ge',id=109,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHr0tqCFuWCJmy9b1oflh+eI2eFf4+EocjRlIktK5eBb1qKCRDnUDy1aaX29KorZ+3GlNFyu3OLFeV0DAzkIQCZVEoFaSpXloiDb56F5zzuYqySsB4TKa4TVSZgtsxz/9w==',key_name='tempest-TestSecurityGroupsBasicOps-305772843',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-9wzr47fu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:50:37Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=67dc02f2-2430-4cc6-a575-bdfd238a5460,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.187 248514 DEBUG nova.network.os_vif_util [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.188 248514 DEBUG nova.network.os_vif_util [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:2f:3f,bridge_name='br-int',has_traffic_filtering=True,id=5c3b12ae-3db8-4a4c-b558-19ab89a179ff,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c3b12ae-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.189 248514 DEBUG nova.objects.instance [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 67dc02f2-2430-4cc6-a575-bdfd238a5460 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.223 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:50:44 np0005558241 nova_compute[248510]:  <uuid>67dc02f2-2430-4cc6-a575-bdfd238a5460</uuid>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:  <name>instance-0000006d</name>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-0-1507938238</nova:name>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:50:42</nova:creationTime>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:50:44 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:        <nova:user uuid="c377508dda354c0aa762d15d52aa130c">tempest-TestSecurityGroupsBasicOps-1856512026-project-member</nova:user>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:        <nova:project uuid="1064539fdf2b494ea705dd5e74afcd3b">tempest-TestSecurityGroupsBasicOps-1856512026</nova:project>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:        <nova:port uuid="5c3b12ae-3db8-4a4c-b558-19ab89a179ff">
Dec 13 03:50:44 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <entry name="serial">67dc02f2-2430-4cc6-a575-bdfd238a5460</entry>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <entry name="uuid">67dc02f2-2430-4cc6-a575-bdfd238a5460</entry>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/67dc02f2-2430-4cc6-a575-bdfd238a5460_disk">
Dec 13 03:50:44 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:50:44 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/67dc02f2-2430-4cc6-a575-bdfd238a5460_disk.config">
Dec 13 03:50:44 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:50:44 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:8f:2f:3f"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <target dev="tap5c3b12ae-3d"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460/console.log" append="off"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:50:44 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:50:44 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:50:44 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:50:44 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.225 248514 DEBUG nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Preparing to wait for external event network-vif-plugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.225 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.225 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.226 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.227 248514 DEBUG nova.virt.libvirt.vif [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:50:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-0-1507938238',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-0-1507938238',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ge',id=109,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHr0tqCFuWCJmy9b1oflh+eI2eFf4+EocjRlIktK5eBb1qKCRDnUDy1aaX29KorZ+3GlNFyu3OLFeV0DAzkIQCZVEoFaSpXloiDb56F5zzuYqySsB4TKa4TVSZgtsxz/9w==',key_name='tempest-TestSecurityGroupsBasicOps-305772843',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-9wzr47fu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:50:37Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=67dc02f2-2430-4cc6-a575-bdfd238a5460,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.227 248514 DEBUG nova.network.os_vif_util [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.228 248514 DEBUG nova.network.os_vif_util [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:2f:3f,bridge_name='br-int',has_traffic_filtering=True,id=5c3b12ae-3db8-4a4c-b558-19ab89a179ff,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c3b12ae-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.228 248514 DEBUG os_vif [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:2f:3f,bridge_name='br-int',has_traffic_filtering=True,id=5c3b12ae-3db8-4a4c-b558-19ab89a179ff,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c3b12ae-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.229 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.230 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.230 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.233 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.234 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c3b12ae-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.234 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5c3b12ae-3d, col_values=(('external_ids', {'iface-id': '5c3b12ae-3db8-4a4c-b558-19ab89a179ff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:2f:3f', 'vm-uuid': '67dc02f2-2430-4cc6-a575-bdfd238a5460'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.236 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:44 np0005558241 NetworkManager[50376]: <info>  [1765615844.2373] manager: (tap5c3b12ae-3d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/436)
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.239 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.242 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:44 np0005558241 nova_compute[248510]: 2025-12-13 08:50:44.243 248514 INFO os_vif [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:2f:3f,bridge_name='br-int',has_traffic_filtering=True,id=5c3b12ae-3db8-4a4c-b558-19ab89a179ff,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c3b12ae-3d')#033[00m
Dec 13 03:50:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:50:45 np0005558241 nova_compute[248510]: 2025-12-13 08:50:45.178 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:50:45 np0005558241 nova_compute[248510]: 2025-12-13 08:50:45.179 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:50:45 np0005558241 nova_compute[248510]: 2025-12-13 08:50:45.179 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No VIF found with MAC fa:16:3e:8f:2f:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:50:45 np0005558241 nova_compute[248510]: 2025-12-13 08:50:45.180 248514 INFO nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Using config drive#033[00m
Dec 13 03:50:45 np0005558241 nova_compute[248510]: 2025-12-13 08:50:45.257 248514 DEBUG nova.storage.rbd_utils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.003 248514 INFO nova.virt.libvirt.driver [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Snapshot image upload complete#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.004 248514 INFO nova.compute.manager [None req-09d6b7c1-9c1e-4882-a527-28da1e11f97d e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Took 5.82 seconds to snapshot the instance on the hypervisor.#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.019 248514 INFO nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Creating config drive at /var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460/disk.config#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.024 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9qejnvxl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:50:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2669: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 96 KiB/s rd, 1.7 MiB/s wr, 125 op/s
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.177 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9qejnvxl" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.212 248514 DEBUG nova.storage.rbd_utils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.217 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460/disk.config 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.374 248514 DEBUG oslo_concurrency.processutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460/disk.config 67dc02f2-2430-4cc6-a575-bdfd238a5460_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.376 248514 INFO nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Deleting local config drive /var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460/disk.config because it was imported into RBD.#033[00m
Dec 13 03:50:46 np0005558241 kernel: tap5c3b12ae-3d: entered promiscuous mode
Dec 13 03:50:46 np0005558241 NetworkManager[50376]: <info>  [1765615846.4223] manager: (tap5c3b12ae-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/437)
Dec 13 03:50:46 np0005558241 ovn_controller[148476]: 2025-12-13T08:50:46Z|01054|binding|INFO|Claiming lport 5c3b12ae-3db8-4a4c-b558-19ab89a179ff for this chassis.
Dec 13 03:50:46 np0005558241 ovn_controller[148476]: 2025-12-13T08:50:46Z|01055|binding|INFO|5c3b12ae-3db8-4a4c-b558-19ab89a179ff: Claiming fa:16:3e:8f:2f:3f 10.100.0.9
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.424 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.433 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:2f:3f 10.100.0.9'], port_security=['fa:16:3e:8f:2f:3f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '67dc02f2-2430-4cc6-a575-bdfd238a5460', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8b6ad7fe-fef3-4e9f-8568-5f8d9cabfe04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d2f617a-f802-4ec5-b71a-0c176c8c3ae6, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=5c3b12ae-3db8-4a4c-b558-19ab89a179ff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:50:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.434 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 5c3b12ae-3db8-4a4c-b558-19ab89a179ff in datapath abf04c22-5ac7-46ee-bfad-53f95095fba3 bound to our chassis#033[00m
Dec 13 03:50:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.435 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abf04c22-5ac7-46ee-bfad-53f95095fba3#033[00m
Dec 13 03:50:46 np0005558241 ovn_controller[148476]: 2025-12-13T08:50:46Z|01056|binding|INFO|Setting lport 5c3b12ae-3db8-4a4c-b558-19ab89a179ff ovn-installed in OVS
Dec 13 03:50:46 np0005558241 ovn_controller[148476]: 2025-12-13T08:50:46Z|01057|binding|INFO|Setting lport 5c3b12ae-3db8-4a4c-b558-19ab89a179ff up in Southbound
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.439 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.441 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.449 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c1249969-1027-4f65-8bc4-08521a1b4235]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:46 np0005558241 systemd-udevd[356366]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:50:46 np0005558241 NetworkManager[50376]: <info>  [1765615846.4685] device (tap5c3b12ae-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:50:46 np0005558241 NetworkManager[50376]: <info>  [1765615846.4693] device (tap5c3b12ae-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:50:46 np0005558241 systemd-machined[210538]: New machine qemu-135-instance-0000006d.
Dec 13 03:50:46 np0005558241 systemd[1]: Started Virtual Machine qemu-135-instance-0000006d.
Dec 13 03:50:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.478 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0d864852-ed1f-4879-ae6b-dc14d1936a2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.482 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[91653346-a7dd-4e39-9ad9-5d864518523f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.509 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3a1feb41-816d-4edf-b8ae-98a753f585c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.528 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6254f071-f187-43c6-bc08-36d5ed57c01a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabf04c22-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:36:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 311], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 838377, 'reachable_time': 33674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356377, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.544 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b479791f-9a9c-494e-8d73-c0a6def91227]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapabf04c22-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 838387, 'tstamp': 838387}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356381, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapabf04c22-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 838390, 'tstamp': 838390}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356381, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:50:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.546 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabf04c22-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:50:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.549 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabf04c22-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:50:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.549 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.548 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.549 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabf04c22-50, col_values=(('external_ids', {'iface-id': '6b94eeb9-e344-4933-88eb-29577cf3087f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:50:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:46.550 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.589 248514 DEBUG nova.network.neutron [req-69422283-9ef6-484b-83dc-c3ea31780e5a req-d3b1eae6-da61-4226-a245-f4cc4467c241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Updated VIF entry in instance network info cache for port 5c3b12ae-3db8-4a4c-b558-19ab89a179ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.590 248514 DEBUG nova.network.neutron [req-69422283-9ef6-484b-83dc-c3ea31780e5a req-d3b1eae6-da61-4226-a245-f4cc4467c241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Updating instance_info_cache with network_info: [{"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.620 248514 DEBUG oslo_concurrency.lockutils [req-69422283-9ef6-484b-83dc-c3ea31780e5a req-d3b1eae6-da61-4226-a245-f4cc4467c241 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-67dc02f2-2430-4cc6-a575-bdfd238a5460" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.821 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615846.8207915, 67dc02f2-2430-4cc6-a575-bdfd238a5460 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.822 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] VM Started (Lifecycle Event)#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.849 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.853 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615846.8217306, 67dc02f2-2430-4cc6-a575-bdfd238a5460 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.853 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.880 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.883 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.894 248514 DEBUG nova.compute.manager [req-7436e66e-ef8f-4de7-a97a-4e6aa1d0a243 req-6685bae9-5fac-43c9-aff0-1010b2f10b08 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Received event network-vif-plugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.894 248514 DEBUG oslo_concurrency.lockutils [req-7436e66e-ef8f-4de7-a97a-4e6aa1d0a243 req-6685bae9-5fac-43c9-aff0-1010b2f10b08 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.895 248514 DEBUG oslo_concurrency.lockutils [req-7436e66e-ef8f-4de7-a97a-4e6aa1d0a243 req-6685bae9-5fac-43c9-aff0-1010b2f10b08 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.895 248514 DEBUG oslo_concurrency.lockutils [req-7436e66e-ef8f-4de7-a97a-4e6aa1d0a243 req-6685bae9-5fac-43c9-aff0-1010b2f10b08 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.895 248514 DEBUG nova.compute.manager [req-7436e66e-ef8f-4de7-a97a-4e6aa1d0a243 req-6685bae9-5fac-43c9-aff0-1010b2f10b08 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Processing event network-vif-plugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.896 248514 DEBUG nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.900 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.902 248514 INFO nova.virt.libvirt.driver [-] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Instance spawned successfully.#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.902 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.908 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.909 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615846.8988144, 67dc02f2-2430-4cc6-a575-bdfd238a5460 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.909 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.958 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.963 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.963 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.963 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.964 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.964 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.964 248514 DEBUG nova.virt.libvirt.driver [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:50:46 np0005558241 nova_compute[248510]: 2025-12-13 08:50:46.969 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:50:47 np0005558241 nova_compute[248510]: 2025-12-13 08:50:47.031 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:50:47 np0005558241 nova_compute[248510]: 2025-12-13 08:50:47.056 248514 INFO nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Took 9.45 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:50:47 np0005558241 nova_compute[248510]: 2025-12-13 08:50:47.057 248514 DEBUG nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:50:47 np0005558241 nova_compute[248510]: 2025-12-13 08:50:47.144 248514 INFO nova.compute.manager [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Took 10.74 seconds to build instance.#033[00m
Dec 13 03:50:47 np0005558241 nova_compute[248510]: 2025-12-13 08:50:47.163 248514 DEBUG oslo_concurrency.lockutils [None req-e99d1857-3f3f-4f7d-b22a-7d14f2642024 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:50:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Dec 13 03:50:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Dec 13 03:50:47 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Dec 13 03:50:47 np0005558241 nova_compute[248510]: 2025-12-13 08:50:47.232 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:47 np0005558241 nova_compute[248510]: 2025-12-13 08:50:47.989 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Acquiring lock "35cf794f-191d-48c6-9cee-746c7f1345d6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:50:47 np0005558241 nova_compute[248510]: 2025-12-13 08:50:47.989 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "35cf794f-191d-48c6-9cee-746c7f1345d6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:50:47 np0005558241 nova_compute[248510]: 2025-12-13 08:50:47.989 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Acquiring lock "35cf794f-191d-48c6-9cee-746c7f1345d6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:50:47 np0005558241 nova_compute[248510]: 2025-12-13 08:50:47.990 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "35cf794f-191d-48c6-9cee-746c7f1345d6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:50:47 np0005558241 nova_compute[248510]: 2025-12-13 08:50:47.990 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "35cf794f-191d-48c6-9cee-746c7f1345d6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:50:47 np0005558241 nova_compute[248510]: 2025-12-13 08:50:47.991 248514 INFO nova.compute.manager [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Terminating instance#033[00m
Dec 13 03:50:47 np0005558241 nova_compute[248510]: 2025-12-13 08:50:47.991 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Acquiring lock "refresh_cache-35cf794f-191d-48c6-9cee-746c7f1345d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:50:47 np0005558241 nova_compute[248510]: 2025-12-13 08:50:47.991 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Acquired lock "refresh_cache-35cf794f-191d-48c6-9cee-746c7f1345d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:50:47 np0005558241 nova_compute[248510]: 2025-12-13 08:50:47.992 248514 DEBUG nova.network.neutron [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:50:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2671: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 1.7 MiB/s wr, 171 op/s
Dec 13 03:50:48 np0005558241 nova_compute[248510]: 2025-12-13 08:50:48.220 248514 DEBUG nova.network.neutron [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:50:49 np0005558241 nova_compute[248510]: 2025-12-13 08:50:49.051 248514 DEBUG nova.network.neutron [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:50:49 np0005558241 nova_compute[248510]: 2025-12-13 08:50:49.075 248514 DEBUG nova.compute.manager [req-41addae0-65af-42ba-9e2f-198e8c776d22 req-38df3c9e-e062-41c1-bd10-8041fda55de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Received event network-vif-plugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:50:49 np0005558241 nova_compute[248510]: 2025-12-13 08:50:49.076 248514 DEBUG oslo_concurrency.lockutils [req-41addae0-65af-42ba-9e2f-198e8c776d22 req-38df3c9e-e062-41c1-bd10-8041fda55de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:50:49 np0005558241 nova_compute[248510]: 2025-12-13 08:50:49.076 248514 DEBUG oslo_concurrency.lockutils [req-41addae0-65af-42ba-9e2f-198e8c776d22 req-38df3c9e-e062-41c1-bd10-8041fda55de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:50:49 np0005558241 nova_compute[248510]: 2025-12-13 08:50:49.076 248514 DEBUG oslo_concurrency.lockutils [req-41addae0-65af-42ba-9e2f-198e8c776d22 req-38df3c9e-e062-41c1-bd10-8041fda55de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:50:49 np0005558241 nova_compute[248510]: 2025-12-13 08:50:49.077 248514 DEBUG nova.compute.manager [req-41addae0-65af-42ba-9e2f-198e8c776d22 req-38df3c9e-e062-41c1-bd10-8041fda55de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] No waiting events found dispatching network-vif-plugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:50:49 np0005558241 nova_compute[248510]: 2025-12-13 08:50:49.077 248514 WARNING nova.compute.manager [req-41addae0-65af-42ba-9e2f-198e8c776d22 req-38df3c9e-e062-41c1-bd10-8041fda55de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Received unexpected event network-vif-plugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff for instance with vm_state active and task_state None.#033[00m
Dec 13 03:50:49 np0005558241 nova_compute[248510]: 2025-12-13 08:50:49.089 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Releasing lock "refresh_cache-35cf794f-191d-48c6-9cee-746c7f1345d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:50:49 np0005558241 nova_compute[248510]: 2025-12-13 08:50:49.089 248514 DEBUG nova.compute.manager [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:50:49 np0005558241 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Dec 13 03:50:49 np0005558241 systemd-machined[210538]: Machine qemu-134-instance-0000006c terminated.
Dec 13 03:50:49 np0005558241 nova_compute[248510]: 2025-12-13 08:50:49.236 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:50:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 11K writes, 52K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1483 writes, 6641 keys, 1483 commit groups, 1.0 writes per commit group, ingest: 9.77 MB, 0.02 MB/s#012Interval WAL: 1484 writes, 1484 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     18.6      3.50              0.22        35    0.100       0      0       0.0       0.0#012  L6      1/0   10.57 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.4     72.9     61.2      4.64              0.84        34    0.136    202K    18K       0.0       0.0#012 Sum      1/0   10.57 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.4     41.6     42.9      8.13              1.06        69    0.118    202K    18K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.1     75.7     77.7      0.68              0.15        10    0.068     37K   2503       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0     72.9     61.2      4.64              0.84        34    0.136    202K    18K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     18.6      3.49              0.22        34    0.103       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.0 total, 600.0 interval#012Flush(GB): cumulative 0.063, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.34 GB write, 0.07 MB/s write, 0.33 GB read, 0.07 MB/s read, 8.1 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 304.00 MB usage: 40.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000385 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2543,39.22 MB,12.9002%) FilterBlock(70,565.23 KB,0.181575%) IndexBlock(70,956.06 KB,0.307123%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec 13 03:50:49 np0005558241 nova_compute[248510]: 2025-12-13 08:50:49.309 248514 INFO nova.virt.libvirt.driver [-] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Instance destroyed successfully.#033[00m
Dec 13 03:50:49 np0005558241 nova_compute[248510]: 2025-12-13 08:50:49.310 248514 DEBUG nova.objects.instance [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lazy-loading 'resources' on Instance uuid 35cf794f-191d-48c6-9cee-746c7f1345d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:50:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:50:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Dec 13 03:50:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Dec 13 03:50:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Dec 13 03:50:49 np0005558241 nova_compute[248510]: 2025-12-13 08:50:49.967 248514 INFO nova.virt.libvirt.driver [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Deleting instance files /var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6_del#033[00m
Dec 13 03:50:49 np0005558241 nova_compute[248510]: 2025-12-13 08:50:49.968 248514 INFO nova.virt.libvirt.driver [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Deletion of /var/lib/nova/instances/35cf794f-191d-48c6-9cee-746c7f1345d6_del complete#033[00m
Dec 13 03:50:50 np0005558241 nova_compute[248510]: 2025-12-13 08:50:50.067 248514 INFO nova.compute.manager [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:50:50 np0005558241 nova_compute[248510]: 2025-12-13 08:50:50.068 248514 DEBUG oslo.service.loopingcall [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:50:50 np0005558241 nova_compute[248510]: 2025-12-13 08:50:50.068 248514 DEBUG nova.compute.manager [-] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:50:50 np0005558241 nova_compute[248510]: 2025-12-13 08:50:50.068 248514 DEBUG nova.network.neutron [-] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:50:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2673: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 28 KiB/s wr, 220 op/s
Dec 13 03:50:50 np0005558241 nova_compute[248510]: 2025-12-13 08:50:50.959 248514 DEBUG nova.network.neutron [-] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:50:51 np0005558241 nova_compute[248510]: 2025-12-13 08:50:51.038 248514 DEBUG nova.network.neutron [-] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:50:51 np0005558241 nova_compute[248510]: 2025-12-13 08:50:51.055 248514 INFO nova.compute.manager [-] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Took 0.99 seconds to deallocate network for instance.#033[00m
Dec 13 03:50:51 np0005558241 nova_compute[248510]: 2025-12-13 08:50:51.167 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:50:51 np0005558241 nova_compute[248510]: 2025-12-13 08:50:51.167 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:50:51 np0005558241 nova_compute[248510]: 2025-12-13 08:50:51.404 248514 DEBUG oslo_concurrency.processutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:50:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:50:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/880413754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:50:51 np0005558241 nova_compute[248510]: 2025-12-13 08:50:51.977 248514 DEBUG oslo_concurrency.processutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:50:51 np0005558241 nova_compute[248510]: 2025-12-13 08:50:51.986 248514 DEBUG nova.compute.provider_tree [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:50:52 np0005558241 nova_compute[248510]: 2025-12-13 08:50:52.009 248514 DEBUG nova.scheduler.client.report [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:50:52 np0005558241 nova_compute[248510]: 2025-12-13 08:50:52.037 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:50:52 np0005558241 nova_compute[248510]: 2025-12-13 08:50:52.067 248514 INFO nova.scheduler.client.report [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Deleted allocations for instance 35cf794f-191d-48c6-9cee-746c7f1345d6#033[00m
Dec 13 03:50:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2674: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 21 KiB/s wr, 156 op/s
Dec 13 03:50:52 np0005558241 nova_compute[248510]: 2025-12-13 08:50:52.139 248514 DEBUG oslo_concurrency.lockutils [None req-1c1ff241-00f4-4e1b-80f4-4814fe54b59b e7ff6f539f764ed2a1c838a697e60b52 adc96857d97b46b8834f6fb544e19670 - - default default] Lock "35cf794f-191d-48c6-9cee-746c7f1345d6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:50:52 np0005558241 nova_compute[248510]: 2025-12-13 08:50:52.234 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:53 np0005558241 podman[356470]: 2025-12-13 08:50:53.002906424 +0000 UTC m=+0.058576080 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 03:50:53 np0005558241 podman[356469]: 2025-12-13 08:50:53.005879149 +0000 UTC m=+0.079267119 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 03:50:53 np0005558241 podman[356468]: 2025-12-13 08:50:53.072984171 +0000 UTC m=+0.147931400 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 13 03:50:53 np0005558241 nova_compute[248510]: 2025-12-13 08:50:53.224 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:50:53 np0005558241 nova_compute[248510]: 2025-12-13 08:50:53.251 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 8d919892-73fd-4a11-be79-2a1a9280a987 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 13 03:50:53 np0005558241 nova_compute[248510]: 2025-12-13 08:50:53.252 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 67dc02f2-2430-4cc6-a575-bdfd238a5460 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 13 03:50:53 np0005558241 nova_compute[248510]: 2025-12-13 08:50:53.252 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "8d919892-73fd-4a11-be79-2a1a9280a987" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:50:53 np0005558241 nova_compute[248510]: 2025-12-13 08:50:53.253 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "8d919892-73fd-4a11-be79-2a1a9280a987" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:50:53 np0005558241 nova_compute[248510]: 2025-12-13 08:50:53.253 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "67dc02f2-2430-4cc6-a575-bdfd238a5460" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:50:53 np0005558241 nova_compute[248510]: 2025-12-13 08:50:53.254 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:50:53 np0005558241 nova_compute[248510]: 2025-12-13 08:50:53.290 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "8d919892-73fd-4a11-be79-2a1a9280a987" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:50:53 np0005558241 nova_compute[248510]: 2025-12-13 08:50:53.291 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:50:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2675: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 25 KiB/s wr, 232 op/s
Dec 13 03:50:54 np0005558241 nova_compute[248510]: 2025-12-13 08:50:54.239 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.808157) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615854808180, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1029, "num_deletes": 254, "total_data_size": 1342178, "memory_usage": 1364584, "flush_reason": "Manual Compaction"}
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615854819167, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 973086, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51879, "largest_seqno": 52907, "table_properties": {"data_size": 968409, "index_size": 2201, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11437, "raw_average_key_size": 21, "raw_value_size": 958737, "raw_average_value_size": 1782, "num_data_blocks": 97, "num_entries": 538, "num_filter_entries": 538, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615794, "oldest_key_time": 1765615794, "file_creation_time": 1765615854, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 11071 microseconds, and 3264 cpu microseconds.
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.819218) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 973086 bytes OK
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.819242) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.821246) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.821270) EVENT_LOG_v1 {"time_micros": 1765615854821263, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.821291) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 1337229, prev total WAL file size 1337229, number of live WAL files 2.
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.822005) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303130' seq:72057594037927935, type:22 .. '6D6772737461740032323632' seq:0, type:0; will stop at (end)
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(950KB)], [119(10MB)]
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615854822032, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 12054595, "oldest_snapshot_seqno": -1}
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7492 keys, 8923456 bytes, temperature: kUnknown
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615854895765, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 8923456, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8876283, "index_size": 27349, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18757, "raw_key_size": 194381, "raw_average_key_size": 25, "raw_value_size": 8745073, "raw_average_value_size": 1167, "num_data_blocks": 1067, "num_entries": 7492, "num_filter_entries": 7492, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765615854, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.896142) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 8923456 bytes
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.900292) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.0 rd, 120.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.6 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(21.6) write-amplify(9.2) OK, records in: 7990, records dropped: 498 output_compression: NoCompression
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.900324) EVENT_LOG_v1 {"time_micros": 1765615854900313, "job": 72, "event": "compaction_finished", "compaction_time_micros": 73940, "compaction_time_cpu_micros": 23237, "output_level": 6, "num_output_files": 1, "total_output_size": 8923456, "num_input_records": 7990, "num_output_records": 7492, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615854901084, "job": 72, "event": "table_file_deletion", "file_number": 121}
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615854903369, "job": 72, "event": "table_file_deletion", "file_number": 119}
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.821927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.903597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.903603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.903604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.903605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:50:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:54.903608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:50:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:55.429 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:50:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:55.429 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:50:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:50:55.430 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:50:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2677: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.4 KiB/s wr, 198 op/s
Dec 13 03:50:56 np0005558241 nova_compute[248510]: 2025-12-13 08:50:56.802 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:50:57 np0005558241 nova_compute[248510]: 2025-12-13 08:50:57.237 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2678: 321 pgs: 321 active+clean; 181 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.5 MiB/s wr, 108 op/s
Dec 13 03:50:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:50:58Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8f:2f:3f 10.100.0.9
Dec 13 03:50:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:50:58Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8f:2f:3f 10.100.0.9
Dec 13 03:50:59 np0005558241 nova_compute[248510]: 2025-12-13 08:50:59.243 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:50:59 np0005558241 nova_compute[248510]: 2025-12-13 08:50:59.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.807355) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615859807430, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 297, "num_deletes": 256, "total_data_size": 74663, "memory_usage": 80680, "flush_reason": "Manual Compaction"}
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615859809974, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 74024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52908, "largest_seqno": 53204, "table_properties": {"data_size": 72098, "index_size": 154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4874, "raw_average_key_size": 17, "raw_value_size": 68266, "raw_average_value_size": 245, "num_data_blocks": 7, "num_entries": 278, "num_filter_entries": 278, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615855, "oldest_key_time": 1765615855, "file_creation_time": 1765615859, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 2657 microseconds, and 1058 cpu microseconds.
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.810023) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 74024 bytes OK
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.810039) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.811313) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.811325) EVENT_LOG_v1 {"time_micros": 1765615859811321, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.811337) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 72472, prev total WAL file size 72472, number of live WAL files 2.
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.811652) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303130' seq:72057594037927935, type:22 .. '6C6F676D0032323632' seq:0, type:0; will stop at (end)
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(72KB)], [122(8714KB)]
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615859811700, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 8997480, "oldest_snapshot_seqno": -1}
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7251 keys, 8884822 bytes, temperature: kUnknown
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615859890815, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 8884822, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8838622, "index_size": 26952, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18181, "raw_key_size": 190319, "raw_average_key_size": 26, "raw_value_size": 8710997, "raw_average_value_size": 1201, "num_data_blocks": 1047, "num_entries": 7251, "num_filter_entries": 7251, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765615859, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.891118) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 8884822 bytes
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.893304) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.6 rd, 112.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 8.5 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(241.6) write-amplify(120.0) OK, records in: 7770, records dropped: 519 output_compression: NoCompression
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.893328) EVENT_LOG_v1 {"time_micros": 1765615859893317, "job": 74, "event": "compaction_finished", "compaction_time_micros": 79195, "compaction_time_cpu_micros": 32724, "output_level": 6, "num_output_files": 1, "total_output_size": 8884822, "num_input_records": 7770, "num_output_records": 7251, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615859893468, "job": 74, "event": "table_file_deletion", "file_number": 124}
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615859894847, "job": 74, "event": "table_file_deletion", "file_number": 122}
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.811574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.894903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.894908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.894911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.894913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:50:59 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:50:59.894915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:51:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2679: 321 pgs: 321 active+clean; 194 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.5 MiB/s wr, 123 op/s
Dec 13 03:51:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2680: 321 pgs: 321 active+clean; 194 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.5 MiB/s wr, 123 op/s
Dec 13 03:51:02 np0005558241 nova_compute[248510]: 2025-12-13 08:51:02.239 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2681: 321 pgs: 321 active+clean; 200 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.246 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.290 248514 DEBUG oslo_concurrency.lockutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "67dc02f2-2430-4cc6-a575-bdfd238a5460" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.290 248514 DEBUG oslo_concurrency.lockutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.290 248514 DEBUG oslo_concurrency.lockutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.291 248514 DEBUG oslo_concurrency.lockutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.291 248514 DEBUG oslo_concurrency.lockutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.292 248514 INFO nova.compute.manager [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Terminating instance#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.293 248514 DEBUG nova.compute.manager [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.308 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615849.3071628, 35cf794f-191d-48c6-9cee-746c7f1345d6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.309 248514 INFO nova.compute.manager [-] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.334 248514 DEBUG nova.compute.manager [None req-fba204c5-69c7-4e77-b0b3-a3a36281b492 - - - - - -] [instance: 35cf794f-191d-48c6-9cee-746c7f1345d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:51:04 np0005558241 kernel: tap5c3b12ae-3d (unregistering): left promiscuous mode
Dec 13 03:51:04 np0005558241 NetworkManager[50376]: <info>  [1765615864.3559] device (tap5c3b12ae-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:51:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:51:04Z|01058|binding|INFO|Releasing lport 5c3b12ae-3db8-4a4c-b558-19ab89a179ff from this chassis (sb_readonly=0)
Dec 13 03:51:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:51:04Z|01059|binding|INFO|Setting lport 5c3b12ae-3db8-4a4c-b558-19ab89a179ff down in Southbound
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.363 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:51:04Z|01060|binding|INFO|Removing iface tap5c3b12ae-3d ovn-installed in OVS
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.365 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.372 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:2f:3f 10.100.0.9'], port_security=['fa:16:3e:8f:2f:3f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '67dc02f2-2430-4cc6-a575-bdfd238a5460', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8b6ad7fe-fef3-4e9f-8568-5f8d9cabfe04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d2f617a-f802-4ec5-b71a-0c176c8c3ae6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=5c3b12ae-3db8-4a4c-b558-19ab89a179ff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:51:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.373 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 5c3b12ae-3db8-4a4c-b558-19ab89a179ff in datapath abf04c22-5ac7-46ee-bfad-53f95095fba3 unbound from our chassis#033[00m
Dec 13 03:51:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.375 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abf04c22-5ac7-46ee-bfad-53f95095fba3#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.383 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.392 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[52e548c8-d0a6-4ca0-92ff-eb7436898a36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:04 np0005558241 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Dec 13 03:51:04 np0005558241 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006d.scope: Consumed 11.890s CPU time.
Dec 13 03:51:04 np0005558241 systemd-machined[210538]: Machine qemu-135-instance-0000006d terminated.
Dec 13 03:51:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.422 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[660b76c3-ae1b-44e2-a98a-4ac0ea874d95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.425 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[daea573f-9450-405a-9bd5-46ecec6f0eac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.452 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7e0246c6-baef-488f-ad41-d7754fa45ca5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.469 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fa957d66-792e-497c-84c4-c8b2eb4445c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabf04c22-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:36:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 311], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 838377, 'reachable_time': 33674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356605, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.485 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bcaf4be6-44b4-49f1-8966-c4032f496610]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapabf04c22-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 838387, 'tstamp': 838387}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356606, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapabf04c22-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 838390, 'tstamp': 838390}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356606, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.487 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabf04c22-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.494 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.494 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabf04c22-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:51:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.495 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:51:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.495 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabf04c22-50, col_values=(('external_ids', {'iface-id': '6b94eeb9-e344-4933-88eb-29577cf3087f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:51:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:04.495 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.528 248514 INFO nova.virt.libvirt.driver [-] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Instance destroyed successfully.#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.529 248514 DEBUG nova.objects.instance [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'resources' on Instance uuid 67dc02f2-2430-4cc6-a575-bdfd238a5460 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.552 248514 DEBUG nova.virt.libvirt.vif [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:50:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-0-1507938238',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-0-1507938238',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ge',id=109,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHr0tqCFuWCJmy9b1oflh+eI2eFf4+EocjRlIktK5eBb1qKCRDnUDy1aaX29KorZ+3GlNFyu3OLFeV0DAzkIQCZVEoFaSpXloiDb56F5zzuYqySsB4TKa4TVSZgtsxz/9w==',key_name='tempest-TestSecurityGroupsBasicOps-305772843',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:50:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-9wzr47fu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:50:47Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=67dc02f2-2430-4cc6-a575-bdfd238a5460,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.553 248514 DEBUG nova.network.os_vif_util [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "address": "fa:16:3e:8f:2f:3f", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c3b12ae-3d", "ovs_interfaceid": "5c3b12ae-3db8-4a4c-b558-19ab89a179ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.554 248514 DEBUG nova.network.os_vif_util [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:2f:3f,bridge_name='br-int',has_traffic_filtering=True,id=5c3b12ae-3db8-4a4c-b558-19ab89a179ff,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c3b12ae-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.554 248514 DEBUG os_vif [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:2f:3f,bridge_name='br-int',has_traffic_filtering=True,id=5c3b12ae-3db8-4a4c-b558-19ab89a179ff,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c3b12ae-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.556 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.556 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c3b12ae-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.558 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.559 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.561 248514 INFO os_vif [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:2f:3f,bridge_name='br-int',has_traffic_filtering=True,id=5c3b12ae-3db8-4a4c-b558-19ab89a179ff,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c3b12ae-3d')#033[00m
Dec 13 03:51:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:51:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:51:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:51:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:51:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:51:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:51:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:51:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:51:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:51:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:51:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:51:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:51:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.865 248514 INFO nova.virt.libvirt.driver [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Deleting instance files /var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460_del#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.866 248514 INFO nova.virt.libvirt.driver [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Deletion of /var/lib/nova/instances/67dc02f2-2430-4cc6-a575-bdfd238a5460_del complete#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.925 248514 INFO nova.compute.manager [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Took 0.63 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.925 248514 DEBUG oslo.service.loopingcall [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.926 248514 DEBUG nova.compute.manager [-] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:51:04 np0005558241 nova_compute[248510]: 2025-12-13 08:51:04.926 248514 DEBUG nova.network.neutron [-] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:51:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:51:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:51:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:51:05 np0005558241 podman[356718]: 2025-12-13 08:51:05.199209686 +0000 UTC m=+0.052360643 container create 5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 03:51:05 np0005558241 systemd[1]: Started libpod-conmon-5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba.scope.
Dec 13 03:51:05 np0005558241 podman[356718]: 2025-12-13 08:51:05.175151963 +0000 UTC m=+0.028302950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:51:05 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:51:05 np0005558241 podman[356718]: 2025-12-13 08:51:05.291760657 +0000 UTC m=+0.144911644 container init 5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:51:05 np0005558241 podman[356718]: 2025-12-13 08:51:05.298990568 +0000 UTC m=+0.152141525 container start 5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cartwright, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:51:05 np0005558241 podman[356718]: 2025-12-13 08:51:05.302204809 +0000 UTC m=+0.155355776 container attach 5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:51:05 np0005558241 systemd[1]: libpod-5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba.scope: Deactivated successfully.
Dec 13 03:51:05 np0005558241 priceless_cartwright[356734]: 167 167
Dec 13 03:51:05 np0005558241 podman[356718]: 2025-12-13 08:51:05.305595074 +0000 UTC m=+0.158746031 container died 5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 03:51:05 np0005558241 conmon[356734]: conmon 5b9db9a18e95c1abd83d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba.scope/container/memory.events
Dec 13 03:51:05 np0005558241 systemd[1]: var-lib-containers-storage-overlay-cdfea8696b6bce665fa277c0d16a617ad5fb98b4d18fc79086896d9603878240-merged.mount: Deactivated successfully.
Dec 13 03:51:05 np0005558241 podman[356718]: 2025-12-13 08:51:05.346334415 +0000 UTC m=+0.199485372 container remove 5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:51:05 np0005558241 systemd[1]: libpod-conmon-5b9db9a18e95c1abd83d80730606e209a19be5ca5a39ce1bafff3ff3410e3aba.scope: Deactivated successfully.
Dec 13 03:51:05 np0005558241 nova_compute[248510]: 2025-12-13 08:51:05.430 248514 DEBUG nova.compute.manager [req-fd5f4f52-905c-43de-85f1-8ec81a7e3361 req-cc79f6eb-17c5-4945-aaf9-20947d45eb6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Received event network-vif-unplugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:51:05 np0005558241 nova_compute[248510]: 2025-12-13 08:51:05.432 248514 DEBUG oslo_concurrency.lockutils [req-fd5f4f52-905c-43de-85f1-8ec81a7e3361 req-cc79f6eb-17c5-4945-aaf9-20947d45eb6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:51:05 np0005558241 nova_compute[248510]: 2025-12-13 08:51:05.432 248514 DEBUG oslo_concurrency.lockutils [req-fd5f4f52-905c-43de-85f1-8ec81a7e3361 req-cc79f6eb-17c5-4945-aaf9-20947d45eb6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:51:05 np0005558241 nova_compute[248510]: 2025-12-13 08:51:05.432 248514 DEBUG oslo_concurrency.lockutils [req-fd5f4f52-905c-43de-85f1-8ec81a7e3361 req-cc79f6eb-17c5-4945-aaf9-20947d45eb6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:51:05 np0005558241 nova_compute[248510]: 2025-12-13 08:51:05.433 248514 DEBUG nova.compute.manager [req-fd5f4f52-905c-43de-85f1-8ec81a7e3361 req-cc79f6eb-17c5-4945-aaf9-20947d45eb6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] No waiting events found dispatching network-vif-unplugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:51:05 np0005558241 nova_compute[248510]: 2025-12-13 08:51:05.433 248514 DEBUG nova.compute.manager [req-fd5f4f52-905c-43de-85f1-8ec81a7e3361 req-cc79f6eb-17c5-4945-aaf9-20947d45eb6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Received event network-vif-unplugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:51:05 np0005558241 podman[356759]: 2025-12-13 08:51:05.52999829 +0000 UTC m=+0.044407824 container create 918dad42c969acbcebd09a208406f2510d2e326240393d33f18575685b3bd911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cerf, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 03:51:05 np0005558241 systemd[1]: Started libpod-conmon-918dad42c969acbcebd09a208406f2510d2e326240393d33f18575685b3bd911.scope.
Dec 13 03:51:05 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:51:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb7332060821fc5ae92fcbf20dcbf3cd42b34ebdf72f28e0febdbac2c14a6df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb7332060821fc5ae92fcbf20dcbf3cd42b34ebdf72f28e0febdbac2c14a6df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb7332060821fc5ae92fcbf20dcbf3cd42b34ebdf72f28e0febdbac2c14a6df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb7332060821fc5ae92fcbf20dcbf3cd42b34ebdf72f28e0febdbac2c14a6df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bb7332060821fc5ae92fcbf20dcbf3cd42b34ebdf72f28e0febdbac2c14a6df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:05 np0005558241 podman[356759]: 2025-12-13 08:51:05.511166138 +0000 UTC m=+0.025575692 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:51:05 np0005558241 podman[356759]: 2025-12-13 08:51:05.625146766 +0000 UTC m=+0.139556320 container init 918dad42c969acbcebd09a208406f2510d2e326240393d33f18575685b3bd911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 03:51:05 np0005558241 podman[356759]: 2025-12-13 08:51:05.634322856 +0000 UTC m=+0.148732390 container start 918dad42c969acbcebd09a208406f2510d2e326240393d33f18575685b3bd911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cerf, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:51:05 np0005558241 podman[356759]: 2025-12-13 08:51:05.638689625 +0000 UTC m=+0.153099179 container attach 918dad42c969acbcebd09a208406f2510d2e326240393d33f18575685b3bd911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cerf, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 03:51:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2682: 321 pgs: 321 active+clean; 200 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 2.3 MiB/s wr, 68 op/s
Dec 13 03:51:06 np0005558241 determined_cerf[356775]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:51:06 np0005558241 determined_cerf[356775]: --> All data devices are unavailable
Dec 13 03:51:06 np0005558241 systemd[1]: libpod-918dad42c969acbcebd09a208406f2510d2e326240393d33f18575685b3bd911.scope: Deactivated successfully.
Dec 13 03:51:06 np0005558241 podman[356759]: 2025-12-13 08:51:06.132708402 +0000 UTC m=+0.647117936 container died 918dad42c969acbcebd09a208406f2510d2e326240393d33f18575685b3bd911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 03:51:06 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4bb7332060821fc5ae92fcbf20dcbf3cd42b34ebdf72f28e0febdbac2c14a6df-merged.mount: Deactivated successfully.
Dec 13 03:51:06 np0005558241 nova_compute[248510]: 2025-12-13 08:51:06.167 248514 DEBUG nova.network.neutron [-] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:51:06 np0005558241 podman[356759]: 2025-12-13 08:51:06.173155046 +0000 UTC m=+0.687564580 container remove 918dad42c969acbcebd09a208406f2510d2e326240393d33f18575685b3bd911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cerf, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:51:06 np0005558241 systemd[1]: libpod-conmon-918dad42c969acbcebd09a208406f2510d2e326240393d33f18575685b3bd911.scope: Deactivated successfully.
Dec 13 03:51:06 np0005558241 nova_compute[248510]: 2025-12-13 08:51:06.193 248514 INFO nova.compute.manager [-] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Took 1.27 seconds to deallocate network for instance.#033[00m
Dec 13 03:51:06 np0005558241 nova_compute[248510]: 2025-12-13 08:51:06.246 248514 DEBUG oslo_concurrency.lockutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:51:06 np0005558241 nova_compute[248510]: 2025-12-13 08:51:06.247 248514 DEBUG oslo_concurrency.lockutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:51:06 np0005558241 nova_compute[248510]: 2025-12-13 08:51:06.541 248514 DEBUG oslo_concurrency.processutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:51:06 np0005558241 podman[356870]: 2025-12-13 08:51:06.616211715 +0000 UTC m=+0.043732248 container create 783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:51:06 np0005558241 systemd[1]: Started libpod-conmon-783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f.scope.
Dec 13 03:51:06 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:51:06 np0005558241 podman[356870]: 2025-12-13 08:51:06.69180655 +0000 UTC m=+0.119327103 container init 783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 03:51:06 np0005558241 podman[356870]: 2025-12-13 08:51:06.596958622 +0000 UTC m=+0.024479175 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:51:06 np0005558241 podman[356870]: 2025-12-13 08:51:06.698265412 +0000 UTC m=+0.125785945 container start 783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:51:06 np0005558241 podman[356870]: 2025-12-13 08:51:06.7017805 +0000 UTC m=+0.129301063 container attach 783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:51:06 np0005558241 systemd[1]: libpod-783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f.scope: Deactivated successfully.
Dec 13 03:51:06 np0005558241 kind_goldberg[356886]: 167 167
Dec 13 03:51:06 np0005558241 conmon[356886]: conmon 783588bc265afa382791 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f.scope/container/memory.events
Dec 13 03:51:06 np0005558241 podman[356870]: 2025-12-13 08:51:06.704342944 +0000 UTC m=+0.131863637 container died 783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:51:06 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d0af0a68954049caddaf836efbc2b57535a9024b0e35dae25037e20fcf5e1f05-merged.mount: Deactivated successfully.
Dec 13 03:51:06 np0005558241 podman[356870]: 2025-12-13 08:51:06.742901461 +0000 UTC m=+0.170421994 container remove 783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 03:51:06 np0005558241 systemd[1]: libpod-conmon-783588bc265afa382791b04ac8c31267ef5981b9e672196da8b34992f11d004f.scope: Deactivated successfully.
Dec 13 03:51:06 np0005558241 podman[356928]: 2025-12-13 08:51:06.90437746 +0000 UTC m=+0.041967393 container create 26c5acd1a28268b0f3db679ba41c9df88fb7ca14a0ad37a37241041ccce787be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:51:06 np0005558241 systemd[1]: Started libpod-conmon-26c5acd1a28268b0f3db679ba41c9df88fb7ca14a0ad37a37241041ccce787be.scope.
Dec 13 03:51:06 np0005558241 podman[356928]: 2025-12-13 08:51:06.886980604 +0000 UTC m=+0.024570557 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:51:06 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:51:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeaf5996d3eb0e89ea2dd0804d0390aaeeba807d71c9a287e4391316e174129/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeaf5996d3eb0e89ea2dd0804d0390aaeeba807d71c9a287e4391316e174129/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeaf5996d3eb0e89ea2dd0804d0390aaeeba807d71c9a287e4391316e174129/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeaf5996d3eb0e89ea2dd0804d0390aaeeba807d71c9a287e4391316e174129/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:07 np0005558241 podman[356928]: 2025-12-13 08:51:07.023802145 +0000 UTC m=+0.161392088 container init 26c5acd1a28268b0f3db679ba41c9df88fb7ca14a0ad37a37241041ccce787be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_rubin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:51:07 np0005558241 podman[356928]: 2025-12-13 08:51:07.030812831 +0000 UTC m=+0.168402774 container start 26c5acd1a28268b0f3db679ba41c9df88fb7ca14a0ad37a37241041ccce787be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_rubin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 03:51:07 np0005558241 podman[356928]: 2025-12-13 08:51:07.035410616 +0000 UTC m=+0.173000549 container attach 26c5acd1a28268b0f3db679ba41c9df88fb7ca14a0ad37a37241041ccce787be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 03:51:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:51:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3713157814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:51:07 np0005558241 nova_compute[248510]: 2025-12-13 08:51:07.133 248514 DEBUG oslo_concurrency.processutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:51:07 np0005558241 nova_compute[248510]: 2025-12-13 08:51:07.142 248514 DEBUG nova.compute.provider_tree [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:51:07 np0005558241 nova_compute[248510]: 2025-12-13 08:51:07.165 248514 DEBUG nova.scheduler.client.report [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:51:07 np0005558241 nova_compute[248510]: 2025-12-13 08:51:07.194 248514 DEBUG oslo_concurrency.lockutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:51:07 np0005558241 nova_compute[248510]: 2025-12-13 08:51:07.236 248514 INFO nova.scheduler.client.report [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Deleted allocations for instance 67dc02f2-2430-4cc6-a575-bdfd238a5460#033[00m
Dec 13 03:51:07 np0005558241 nova_compute[248510]: 2025-12-13 08:51:07.241 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:07 np0005558241 bold_rubin[356944]: {
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:    "0": [
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:        {
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "devices": [
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "/dev/loop3"
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            ],
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "lv_name": "ceph_lv0",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "lv_size": "21470642176",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "name": "ceph_lv0",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "tags": {
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.cluster_name": "ceph",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.crush_device_class": "",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.encrypted": "0",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.objectstore": "bluestore",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.osd_id": "0",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.type": "block",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.vdo": "0",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.with_tpm": "0"
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            },
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "type": "block",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "vg_name": "ceph_vg0"
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:        }
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:    ],
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:    "1": [
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:        {
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "devices": [
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "/dev/loop4"
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            ],
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "lv_name": "ceph_lv1",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "lv_size": "21470642176",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "name": "ceph_lv1",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "tags": {
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.cluster_name": "ceph",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.crush_device_class": "",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.encrypted": "0",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.objectstore": "bluestore",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.osd_id": "1",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.type": "block",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.vdo": "0",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.with_tpm": "0"
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            },
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "type": "block",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "vg_name": "ceph_vg1"
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:        }
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:    ],
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:    "2": [
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:        {
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "devices": [
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "/dev/loop5"
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            ],
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "lv_name": "ceph_lv2",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "lv_size": "21470642176",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "name": "ceph_lv2",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "tags": {
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.cluster_name": "ceph",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.crush_device_class": "",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.encrypted": "0",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.objectstore": "bluestore",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.osd_id": "2",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.type": "block",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.vdo": "0",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:                "ceph.with_tpm": "0"
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            },
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "type": "block",
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:            "vg_name": "ceph_vg2"
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:        }
Dec 13 03:51:07 np0005558241 bold_rubin[356944]:    ]
Dec 13 03:51:07 np0005558241 bold_rubin[356944]: }
Dec 13 03:51:07 np0005558241 nova_compute[248510]: 2025-12-13 08:51:07.343 248514 DEBUG oslo_concurrency.lockutils [None req-14b5263e-ab77-4dff-8f3c-a8861c673919 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:51:07 np0005558241 systemd[1]: libpod-26c5acd1a28268b0f3db679ba41c9df88fb7ca14a0ad37a37241041ccce787be.scope: Deactivated successfully.
Dec 13 03:51:07 np0005558241 podman[356928]: 2025-12-13 08:51:07.351873061 +0000 UTC m=+0.489463004 container died 26c5acd1a28268b0f3db679ba41c9df88fb7ca14a0ad37a37241041ccce787be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:51:07 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ebeaf5996d3eb0e89ea2dd0804d0390aaeeba807d71c9a287e4391316e174129-merged.mount: Deactivated successfully.
Dec 13 03:51:07 np0005558241 podman[356928]: 2025-12-13 08:51:07.465449379 +0000 UTC m=+0.603039312 container remove 26c5acd1a28268b0f3db679ba41c9df88fb7ca14a0ad37a37241041ccce787be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Dec 13 03:51:07 np0005558241 systemd[1]: libpod-conmon-26c5acd1a28268b0f3db679ba41c9df88fb7ca14a0ad37a37241041ccce787be.scope: Deactivated successfully.
Dec 13 03:51:07 np0005558241 nova_compute[248510]: 2025-12-13 08:51:07.660 248514 DEBUG nova.compute.manager [req-a956d732-ed78-438a-b3b5-9c5e2ccd00c1 req-ce461616-127c-4b88-9b05-f6386c2d01ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Received event network-vif-plugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:51:07 np0005558241 nova_compute[248510]: 2025-12-13 08:51:07.660 248514 DEBUG oslo_concurrency.lockutils [req-a956d732-ed78-438a-b3b5-9c5e2ccd00c1 req-ce461616-127c-4b88-9b05-f6386c2d01ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:51:07 np0005558241 nova_compute[248510]: 2025-12-13 08:51:07.660 248514 DEBUG oslo_concurrency.lockutils [req-a956d732-ed78-438a-b3b5-9c5e2ccd00c1 req-ce461616-127c-4b88-9b05-f6386c2d01ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:51:07 np0005558241 nova_compute[248510]: 2025-12-13 08:51:07.661 248514 DEBUG oslo_concurrency.lockutils [req-a956d732-ed78-438a-b3b5-9c5e2ccd00c1 req-ce461616-127c-4b88-9b05-f6386c2d01ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "67dc02f2-2430-4cc6-a575-bdfd238a5460-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:51:07 np0005558241 nova_compute[248510]: 2025-12-13 08:51:07.661 248514 DEBUG nova.compute.manager [req-a956d732-ed78-438a-b3b5-9c5e2ccd00c1 req-ce461616-127c-4b88-9b05-f6386c2d01ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] No waiting events found dispatching network-vif-plugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:51:07 np0005558241 nova_compute[248510]: 2025-12-13 08:51:07.661 248514 WARNING nova.compute.manager [req-a956d732-ed78-438a-b3b5-9c5e2ccd00c1 req-ce461616-127c-4b88-9b05-f6386c2d01ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Received unexpected event network-vif-plugged-5c3b12ae-3db8-4a4c-b558-19ab89a179ff for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:51:07 np0005558241 nova_compute[248510]: 2025-12-13 08:51:07.661 248514 DEBUG nova.compute.manager [req-a956d732-ed78-438a-b3b5-9c5e2ccd00c1 req-ce461616-127c-4b88-9b05-f6386c2d01ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Received event network-vif-deleted-5c3b12ae-3db8-4a4c-b558-19ab89a179ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:51:07 np0005558241 nova_compute[248510]: 2025-12-13 08:51:07.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:51:07 np0005558241 nova_compute[248510]: 2025-12-13 08:51:07.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:51:07 np0005558241 nova_compute[248510]: 2025-12-13 08:51:07.774 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:51:08 np0005558241 podman[357028]: 2025-12-13 08:51:07.906893468 +0000 UTC m=+0.021329335 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:51:08 np0005558241 podman[357028]: 2025-12-13 08:51:08.051697219 +0000 UTC m=+0.166133056 container create 05e12ad0e72de86539564441778aed5cdf6df8e04ca7b8a18108521036d08e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:51:08 np0005558241 nova_compute[248510]: 2025-12-13 08:51:08.083 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:51:08 np0005558241 nova_compute[248510]: 2025-12-13 08:51:08.083 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:51:08 np0005558241 nova_compute[248510]: 2025-12-13 08:51:08.083 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:51:08 np0005558241 nova_compute[248510]: 2025-12-13 08:51:08.084 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8d919892-73fd-4a11-be79-2a1a9280a987 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:51:08 np0005558241 systemd[1]: Started libpod-conmon-05e12ad0e72de86539564441778aed5cdf6df8e04ca7b8a18108521036d08e8e.scope.
Dec 13 03:51:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2683: 321 pgs: 321 active+clean; 163 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Dec 13 03:51:08 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:51:08 np0005558241 podman[357028]: 2025-12-13 08:51:08.21768239 +0000 UTC m=+0.332118257 container init 05e12ad0e72de86539564441778aed5cdf6df8e04ca7b8a18108521036d08e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tharp, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 13 03:51:08 np0005558241 podman[357028]: 2025-12-13 08:51:08.224931392 +0000 UTC m=+0.339367229 container start 05e12ad0e72de86539564441778aed5cdf6df8e04ca7b8a18108521036d08e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 03:51:08 np0005558241 hardcore_tharp[357044]: 167 167
Dec 13 03:51:08 np0005558241 systemd[1]: libpod-05e12ad0e72de86539564441778aed5cdf6df8e04ca7b8a18108521036d08e8e.scope: Deactivated successfully.
Dec 13 03:51:08 np0005558241 podman[357028]: 2025-12-13 08:51:08.229032645 +0000 UTC m=+0.343468502 container attach 05e12ad0e72de86539564441778aed5cdf6df8e04ca7b8a18108521036d08e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tharp, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 03:51:08 np0005558241 podman[357028]: 2025-12-13 08:51:08.230690076 +0000 UTC m=+0.345125923 container died 05e12ad0e72de86539564441778aed5cdf6df8e04ca7b8a18108521036d08e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tharp, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:51:08 np0005558241 systemd[1]: var-lib-containers-storage-overlay-391ae54c8cc2df406a282594583ab9804f7bb7721424fcb2eee669a48b7b01d7-merged.mount: Deactivated successfully.
Dec 13 03:51:08 np0005558241 podman[357028]: 2025-12-13 08:51:08.310840736 +0000 UTC m=+0.425276573 container remove 05e12ad0e72de86539564441778aed5cdf6df8e04ca7b8a18108521036d08e8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:51:08 np0005558241 systemd[1]: libpod-conmon-05e12ad0e72de86539564441778aed5cdf6df8e04ca7b8a18108521036d08e8e.scope: Deactivated successfully.
Dec 13 03:51:08 np0005558241 podman[357070]: 2025-12-13 08:51:08.518462182 +0000 UTC m=+0.077746330 container create 13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:51:08 np0005558241 systemd[1]: Started libpod-conmon-13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f.scope.
Dec 13 03:51:08 np0005558241 podman[357070]: 2025-12-13 08:51:08.463397661 +0000 UTC m=+0.022681829 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:51:08 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:51:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f29de4a3aea2bc2038ff6ee9e898d3fd85c9663e4ac7f540257a55e535f4dcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f29de4a3aea2bc2038ff6ee9e898d3fd85c9663e4ac7f540257a55e535f4dcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f29de4a3aea2bc2038ff6ee9e898d3fd85c9663e4ac7f540257a55e535f4dcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f29de4a3aea2bc2038ff6ee9e898d3fd85c9663e4ac7f540257a55e535f4dcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:08 np0005558241 podman[357070]: 2025-12-13 08:51:08.650968334 +0000 UTC m=+0.210252492 container init 13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:51:08 np0005558241 podman[357070]: 2025-12-13 08:51:08.657374045 +0000 UTC m=+0.216658193 container start 13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Dec 13 03:51:08 np0005558241 podman[357070]: 2025-12-13 08:51:08.66316034 +0000 UTC m=+0.222444488 container attach 13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dubinsky, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:51:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:09.256 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:51:09 np0005558241 nova_compute[248510]: 2025-12-13 08:51:09.257 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:09.259 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:51:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:51:09
Dec 13 03:51:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:51:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:51:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', '.mgr', 'backups', 'vms', 'default.rgw.log', 'default.rgw.meta', 'images']
Dec 13 03:51:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:51:09 np0005558241 lvm[357162]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:51:09 np0005558241 lvm[357162]: VG ceph_vg0 finished
Dec 13 03:51:09 np0005558241 lvm[357165]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:51:09 np0005558241 lvm[357165]: VG ceph_vg1 finished
Dec 13 03:51:09 np0005558241 lvm[357167]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:51:09 np0005558241 lvm[357167]: VG ceph_vg2 finished
Dec 13 03:51:09 np0005558241 lvm[357168]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:51:09 np0005558241 lvm[357168]: VG ceph_vg1 finished
Dec 13 03:51:09 np0005558241 lvm[357170]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:51:09 np0005558241 lvm[357170]: VG ceph_vg1 finished
Dec 13 03:51:09 np0005558241 nova_compute[248510]: 2025-12-13 08:51:09.558 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:09 np0005558241 sleepy_dubinsky[357086]: {}
Dec 13 03:51:09 np0005558241 systemd[1]: libpod-13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f.scope: Deactivated successfully.
Dec 13 03:51:09 np0005558241 systemd[1]: libpod-13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f.scope: Consumed 1.312s CPU time.
Dec 13 03:51:09 np0005558241 podman[357070]: 2025-12-13 08:51:09.590662645 +0000 UTC m=+1.149946793 container died 13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dubinsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:51:09 np0005558241 nova_compute[248510]: 2025-12-13 08:51:09.776 248514 DEBUG nova.compute.manager [req-160ea061-c9c0-4042-8c12-16afb93bbeb6 req-972c5095-303f-41b9-85f0-03ca43cd1444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received event network-changed-8d84f494-97c1-4708-b8df-444e42f55484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:51:09 np0005558241 nova_compute[248510]: 2025-12-13 08:51:09.776 248514 DEBUG nova.compute.manager [req-160ea061-c9c0-4042-8c12-16afb93bbeb6 req-972c5095-303f-41b9-85f0-03ca43cd1444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Refreshing instance network info cache due to event network-changed-8d84f494-97c1-4708-b8df-444e42f55484. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:51:09 np0005558241 nova_compute[248510]: 2025-12-13 08:51:09.777 248514 DEBUG oslo_concurrency.lockutils [req-160ea061-c9c0-4042-8c12-16afb93bbeb6 req-972c5095-303f-41b9-85f0-03ca43cd1444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:51:09 np0005558241 nova_compute[248510]: 2025-12-13 08:51:09.828 248514 DEBUG oslo_concurrency.lockutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "8d919892-73fd-4a11-be79-2a1a9280a987" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:51:09 np0005558241 nova_compute[248510]: 2025-12-13 08:51:09.829 248514 DEBUG oslo_concurrency.lockutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:51:09 np0005558241 nova_compute[248510]: 2025-12-13 08:51:09.830 248514 DEBUG oslo_concurrency.lockutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:51:09 np0005558241 nova_compute[248510]: 2025-12-13 08:51:09.831 248514 DEBUG oslo_concurrency.lockutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:51:09 np0005558241 nova_compute[248510]: 2025-12-13 08:51:09.831 248514 DEBUG oslo_concurrency.lockutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:51:09 np0005558241 nova_compute[248510]: 2025-12-13 08:51:09.833 248514 INFO nova.compute.manager [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Terminating instance#033[00m
Dec 13 03:51:09 np0005558241 nova_compute[248510]: 2025-12-13 08:51:09.835 248514 DEBUG nova.compute.manager [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:51:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2684: 321 pgs: 321 active+clean; 121 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 238 KiB/s rd, 1.1 MiB/s wr, 68 op/s
Dec 13 03:51:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:51:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:51:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:51:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:51:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:51:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:51:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.227 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updating instance_info_cache with network_info: [{"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.255 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.255 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.256 248514 DEBUG oslo_concurrency.lockutils [req-160ea061-c9c0-4042-8c12-16afb93bbeb6 req-972c5095-303f-41b9-85f0-03ca43cd1444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.256 248514 DEBUG nova.network.neutron [req-160ea061-c9c0-4042-8c12-16afb93bbeb6 req-972c5095-303f-41b9-85f0-03ca43cd1444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Refreshing network info cache for port 8d84f494-97c1-4708-b8df-444e42f55484 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.257 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:51:10 np0005558241 kernel: tap8d84f494-97 (unregistering): left promiscuous mode
Dec 13 03:51:10 np0005558241 NetworkManager[50376]: <info>  [1765615870.3573] device (tap8d84f494-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.365 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:51:10Z|01061|binding|INFO|Releasing lport 8d84f494-97c1-4708-b8df-444e42f55484 from this chassis (sb_readonly=0)
Dec 13 03:51:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:51:10Z|01062|binding|INFO|Setting lport 8d84f494-97c1-4708-b8df-444e42f55484 down in Southbound
Dec 13 03:51:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:51:10Z|01063|binding|INFO|Removing iface tap8d84f494-97 ovn-installed in OVS
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.368 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.390 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:10 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4f29de4a3aea2bc2038ff6ee9e898d3fd85c9663e4ac7f540257a55e535f4dcf-merged.mount: Deactivated successfully.
Dec 13 03:51:10 np0005558241 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Dec 13 03:51:10 np0005558241 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006b.scope: Consumed 14.946s CPU time.
Dec 13 03:51:10 np0005558241 systemd-machined[210538]: Machine qemu-133-instance-0000006b terminated.
Dec 13 03:51:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:10.428 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:03:ce 10.100.0.11'], port_security=['fa:16:3e:87:03:ce 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8d919892-73fd-4a11-be79-2a1a9280a987', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c8d25b9-2d2e-4d46-aaac-2d96c7d8db60 8b6ad7fe-fef3-4e9f-8568-5f8d9cabfe04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d2f617a-f802-4ec5-b71a-0c176c8c3ae6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=8d84f494-97c1-4708-b8df-444e42f55484) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:51:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:10.429 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 8d84f494-97c1-4708-b8df-444e42f55484 in datapath abf04c22-5ac7-46ee-bfad-53f95095fba3 unbound from our chassis#033[00m
Dec 13 03:51:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:10.430 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abf04c22-5ac7-46ee-bfad-53f95095fba3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:51:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:10.430 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[854e6854-7e27-4804-b05d-204bd9dc6f4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:10.431 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3 namespace which is not needed anymore#033[00m
Dec 13 03:51:10 np0005558241 NetworkManager[50376]: <info>  [1765615870.4574] manager: (tap8d84f494-97): new Tun device (/org/freedesktop/NetworkManager/Devices/438)
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.487 248514 INFO nova.virt.libvirt.driver [-] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Instance destroyed successfully.#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.488 248514 DEBUG nova.objects.instance [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'resources' on Instance uuid 8d919892-73fd-4a11-be79-2a1a9280a987 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.515 248514 DEBUG nova.virt.libvirt.vif [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:49:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-415223340',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-415223340',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=107,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHr0tqCFuWCJmy9b1oflh+eI2eFf4+EocjRlIktK5eBb1qKCRDnUDy1aaX29KorZ+3GlNFyu3OLFeV0DAzkIQCZVEoFaSpXloiDb56F5zzuYqySsB4TKa4TVSZgtsxz/9w==',key_name='tempest-TestSecurityGroupsBasicOps-305772843',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:50:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-rsplf7ip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:50:09Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=8d919892-73fd-4a11-be79-2a1a9280a987,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.515 248514 DEBUG nova.network.os_vif_util [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.516 248514 DEBUG nova.network.os_vif_util [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=8d84f494-97c1-4708-b8df-444e42f55484,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d84f494-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.516 248514 DEBUG os_vif [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=8d84f494-97c1-4708-b8df-444e42f55484,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d84f494-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.518 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.518 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d84f494-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.521 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.524 248514 INFO os_vif [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:03:ce,bridge_name='br-int',has_traffic_filtering=True,id=8d84f494-97c1-4708-b8df-444e42f55484,network=Network(abf04c22-5ac7-46ee-bfad-53f95095fba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d84f494-97')#033[00m
Dec 13 03:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:51:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:51:10 np0005558241 podman[357070]: 2025-12-13 08:51:10.795652318 +0000 UTC m=+2.354936466 container remove 13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:51:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:51:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:51:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:51:10 np0005558241 systemd[1]: libpod-conmon-13fafbd5afc0151a3a762e99fda9ea5251828a716da8a5a4dd2d5ba4e9a6782f.scope: Deactivated successfully.
Dec 13 03:51:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.913 248514 DEBUG nova.compute.manager [req-507c8d6f-e9c6-4f62-abac-056cf0891277 req-9868a998-a110-4957-ad18-2843e88ef6ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received event network-vif-unplugged-8d84f494-97c1-4708-b8df-444e42f55484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.914 248514 DEBUG oslo_concurrency.lockutils [req-507c8d6f-e9c6-4f62-abac-056cf0891277 req-9868a998-a110-4957-ad18-2843e88ef6ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.914 248514 DEBUG oslo_concurrency.lockutils [req-507c8d6f-e9c6-4f62-abac-056cf0891277 req-9868a998-a110-4957-ad18-2843e88ef6ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.915 248514 DEBUG oslo_concurrency.lockutils [req-507c8d6f-e9c6-4f62-abac-056cf0891277 req-9868a998-a110-4957-ad18-2843e88ef6ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.915 248514 DEBUG nova.compute.manager [req-507c8d6f-e9c6-4f62-abac-056cf0891277 req-9868a998-a110-4957-ad18-2843e88ef6ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] No waiting events found dispatching network-vif-unplugged-8d84f494-97c1-4708-b8df-444e42f55484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:51:10 np0005558241 nova_compute[248510]: 2025-12-13 08:51:10.915 248514 DEBUG nova.compute.manager [req-507c8d6f-e9c6-4f62-abac-056cf0891277 req-9868a998-a110-4957-ad18-2843e88ef6ec 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received event network-vif-unplugged-8d84f494-97c1-4708-b8df-444e42f55484 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:51:10 np0005558241 neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3[355380]: [NOTICE]   (355384) : haproxy version is 2.8.14-c23fe91
Dec 13 03:51:10 np0005558241 neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3[355380]: [NOTICE]   (355384) : path to executable is /usr/sbin/haproxy
Dec 13 03:51:10 np0005558241 neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3[355380]: [ALERT]    (355384) : Current worker (355386) exited with code 143 (Terminated)
Dec 13 03:51:10 np0005558241 neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3[355380]: [WARNING]  (355384) : All workers exited. Exiting... (0)
Dec 13 03:51:10 np0005558241 systemd[1]: libpod-0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112.scope: Deactivated successfully.
Dec 13 03:51:10 np0005558241 podman[357232]: 2025-12-13 08:51:10.93413716 +0000 UTC m=+0.060580700 container died 0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:51:10 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112-userdata-shm.mount: Deactivated successfully.
Dec 13 03:51:10 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7015d7066325922029e49e3e8c4de1d3ef5bbeec39e3ee942291a993c00912b6-merged.mount: Deactivated successfully.
Dec 13 03:51:10 np0005558241 podman[357232]: 2025-12-13 08:51:10.985419426 +0000 UTC m=+0.111862966 container cleanup 0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 03:51:10 np0005558241 systemd[1]: libpod-conmon-0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112.scope: Deactivated successfully.
Dec 13 03:51:11 np0005558241 podman[357286]: 2025-12-13 08:51:11.063419942 +0000 UTC m=+0.050981580 container remove 0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:51:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:11.071 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d7507ef4-5fcb-484a-92ed-d180b5e7738c]: (4, ('Sat Dec 13 08:51:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3 (0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112)\n0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112\nSat Dec 13 08:51:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3 (0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112)\n0deaac015f2127ce32674b716ca1153e1234a0cbf7196f2fa1a6a31261e3b112\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:11.073 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6ddc1c-6a1c-414e-9692-70f90ce9091f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:11.075 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabf04c22-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:51:11 np0005558241 nova_compute[248510]: 2025-12-13 08:51:11.078 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:11 np0005558241 kernel: tapabf04c22-50: left promiscuous mode
Dec 13 03:51:11 np0005558241 nova_compute[248510]: 2025-12-13 08:51:11.080 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:11.085 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[52ae25d7-255e-4eff-a814-11a2f52c8199]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:11 np0005558241 nova_compute[248510]: 2025-12-13 08:51:11.093 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:11.112 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b807d3a6-4d3c-4b4e-95f1-c783f30931ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:11.113 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d8dcb86f-c269-458a-bdba-f23e4d3886fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:11 np0005558241 nova_compute[248510]: 2025-12-13 08:51:11.121 248514 INFO nova.virt.libvirt.driver [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Deleting instance files /var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987_del#033[00m
Dec 13 03:51:11 np0005558241 nova_compute[248510]: 2025-12-13 08:51:11.122 248514 INFO nova.virt.libvirt.driver [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Deletion of /var/lib/nova/instances/8d919892-73fd-4a11-be79-2a1a9280a987_del complete#033[00m
Dec 13 03:51:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:11.131 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[54b47e68-9340-4160-a56e-770c6f55ac73]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 838370, 'reachable_time': 25825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357301, 'error': None, 'target': 'ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:11 np0005558241 systemd[1]: run-netns-ovnmeta\x2dabf04c22\x2d5ac7\x2d46ee\x2dbfad\x2d53f95095fba3.mount: Deactivated successfully.
Dec 13 03:51:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:11.135 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abf04c22-5ac7-46ee-bfad-53f95095fba3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:51:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:11.135 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a92c3e-180d-450e-932d-588c3b165192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:11 np0005558241 nova_compute[248510]: 2025-12-13 08:51:11.189 248514 INFO nova.compute.manager [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Took 1.35 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:51:11 np0005558241 nova_compute[248510]: 2025-12-13 08:51:11.190 248514 DEBUG oslo.service.loopingcall [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:51:11 np0005558241 nova_compute[248510]: 2025-12-13 08:51:11.191 248514 DEBUG nova.compute.manager [-] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:51:11 np0005558241 nova_compute[248510]: 2025-12-13 08:51:11.191 248514 DEBUG nova.network.neutron [-] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:51:11 np0005558241 nova_compute[248510]: 2025-12-13 08:51:11.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:51:11 np0005558241 nova_compute[248510]: 2025-12-13 08:51:11.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:51:11 np0005558241 nova_compute[248510]: 2025-12-13 08:51:11.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:51:11 np0005558241 nova_compute[248510]: 2025-12-13 08:51:11.811 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:51:11 np0005558241 nova_compute[248510]: 2025-12-13 08:51:11.811 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:51:11 np0005558241 nova_compute[248510]: 2025-12-13 08:51:11.812 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:51:11 np0005558241 nova_compute[248510]: 2025-12-13 08:51:11.849 248514 DEBUG nova.network.neutron [req-160ea061-c9c0-4042-8c12-16afb93bbeb6 req-972c5095-303f-41b9-85f0-03ca43cd1444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updated VIF entry in instance network info cache for port 8d84f494-97c1-4708-b8df-444e42f55484. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:51:11 np0005558241 nova_compute[248510]: 2025-12-13 08:51:11.850 248514 DEBUG nova.network.neutron [req-160ea061-c9c0-4042-8c12-16afb93bbeb6 req-972c5095-303f-41b9-85f0-03ca43cd1444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updating instance_info_cache with network_info: [{"id": "8d84f494-97c1-4708-b8df-444e42f55484", "address": "fa:16:3e:87:03:ce", "network": {"id": "abf04c22-5ac7-46ee-bfad-53f95095fba3", "bridge": "br-int", "label": "tempest-network-smoke--23107118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d84f494-97", "ovs_interfaceid": "8d84f494-97c1-4708-b8df-444e42f55484", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:51:11 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:51:11 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:51:11 np0005558241 nova_compute[248510]: 2025-12-13 08:51:11.873 248514 DEBUG oslo_concurrency.lockutils [req-160ea061-c9c0-4042-8c12-16afb93bbeb6 req-972c5095-303f-41b9-85f0-03ca43cd1444 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-8d919892-73fd-4a11-be79-2a1a9280a987" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:51:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2685: 321 pgs: 321 active+clean; 121 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 57 KiB/s wr, 40 op/s
Dec 13 03:51:12 np0005558241 nova_compute[248510]: 2025-12-13 08:51:12.243 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:12 np0005558241 nova_compute[248510]: 2025-12-13 08:51:12.254 248514 DEBUG nova.network.neutron [-] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:51:12 np0005558241 nova_compute[248510]: 2025-12-13 08:51:12.277 248514 INFO nova.compute.manager [-] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Took 1.09 seconds to deallocate network for instance.#033[00m
Dec 13 03:51:12 np0005558241 nova_compute[248510]: 2025-12-13 08:51:12.310 248514 DEBUG nova.compute.manager [req-eeed2041-55e4-4392-9cdd-ae764484f00b req-d8f7805f-d19c-4138-b0dc-ce975cf9bbf8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received event network-vif-deleted-8d84f494-97c1-4708-b8df-444e42f55484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:51:12 np0005558241 nova_compute[248510]: 2025-12-13 08:51:12.366 248514 DEBUG oslo_concurrency.lockutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:51:12 np0005558241 nova_compute[248510]: 2025-12-13 08:51:12.366 248514 DEBUG oslo_concurrency.lockutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:51:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:51:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1490537596' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:51:12 np0005558241 nova_compute[248510]: 2025-12-13 08:51:12.398 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:51:12 np0005558241 nova_compute[248510]: 2025-12-13 08:51:12.593 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:51:12 np0005558241 nova_compute[248510]: 2025-12-13 08:51:12.595 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3712MB free_disk=59.94190831575543GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:51:12 np0005558241 nova_compute[248510]: 2025-12-13 08:51:12.596 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:51:12 np0005558241 nova_compute[248510]: 2025-12-13 08:51:12.614 248514 DEBUG oslo_concurrency.processutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:51:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:51:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2684781816' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:51:13 np0005558241 nova_compute[248510]: 2025-12-13 08:51:13.192 248514 DEBUG oslo_concurrency.processutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:51:13 np0005558241 nova_compute[248510]: 2025-12-13 08:51:13.197 248514 DEBUG nova.compute.manager [req-b469ab80-e09b-42aa-9f54-47495b830c93 req-a0a2fc58-0c46-485c-8767-4d2b028d7b41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received event network-vif-plugged-8d84f494-97c1-4708-b8df-444e42f55484 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:51:13 np0005558241 nova_compute[248510]: 2025-12-13 08:51:13.198 248514 DEBUG oslo_concurrency.lockutils [req-b469ab80-e09b-42aa-9f54-47495b830c93 req-a0a2fc58-0c46-485c-8767-4d2b028d7b41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:51:13 np0005558241 nova_compute[248510]: 2025-12-13 08:51:13.198 248514 DEBUG oslo_concurrency.lockutils [req-b469ab80-e09b-42aa-9f54-47495b830c93 req-a0a2fc58-0c46-485c-8767-4d2b028d7b41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:51:13 np0005558241 nova_compute[248510]: 2025-12-13 08:51:13.199 248514 DEBUG oslo_concurrency.lockutils [req-b469ab80-e09b-42aa-9f54-47495b830c93 req-a0a2fc58-0c46-485c-8767-4d2b028d7b41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:51:13 np0005558241 nova_compute[248510]: 2025-12-13 08:51:13.199 248514 DEBUG nova.compute.manager [req-b469ab80-e09b-42aa-9f54-47495b830c93 req-a0a2fc58-0c46-485c-8767-4d2b028d7b41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] No waiting events found dispatching network-vif-plugged-8d84f494-97c1-4708-b8df-444e42f55484 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:51:13 np0005558241 nova_compute[248510]: 2025-12-13 08:51:13.200 248514 WARNING nova.compute.manager [req-b469ab80-e09b-42aa-9f54-47495b830c93 req-a0a2fc58-0c46-485c-8767-4d2b028d7b41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Received unexpected event network-vif-plugged-8d84f494-97c1-4708-b8df-444e42f55484 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:51:13 np0005558241 nova_compute[248510]: 2025-12-13 08:51:13.206 248514 DEBUG nova.compute.provider_tree [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:51:13 np0005558241 nova_compute[248510]: 2025-12-13 08:51:13.228 248514 DEBUG nova.scheduler.client.report [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:51:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:13.262 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:51:13 np0005558241 nova_compute[248510]: 2025-12-13 08:51:13.306 248514 DEBUG oslo_concurrency.lockutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:51:13 np0005558241 nova_compute[248510]: 2025-12-13 08:51:13.309 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:51:13 np0005558241 nova_compute[248510]: 2025-12-13 08:51:13.345 248514 INFO nova.scheduler.client.report [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Deleted allocations for instance 8d919892-73fd-4a11-be79-2a1a9280a987#033[00m
Dec 13 03:51:13 np0005558241 nova_compute[248510]: 2025-12-13 08:51:13.394 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:51:13 np0005558241 nova_compute[248510]: 2025-12-13 08:51:13.395 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:51:13 np0005558241 nova_compute[248510]: 2025-12-13 08:51:13.416 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:51:13 np0005558241 nova_compute[248510]: 2025-12-13 08:51:13.458 248514 DEBUG oslo_concurrency.lockutils [None req-22fa73b8-11cb-4ab5-847c-19ebf05cb620 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "8d919892-73fd-4a11-be79-2a1a9280a987" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:51:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:51:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/987824012' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:51:14 np0005558241 nova_compute[248510]: 2025-12-13 08:51:14.012 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:51:14 np0005558241 nova_compute[248510]: 2025-12-13 08:51:14.018 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:51:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2686: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 59 KiB/s wr, 68 op/s
Dec 13 03:51:14 np0005558241 nova_compute[248510]: 2025-12-13 08:51:14.222 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:51:14 np0005558241 nova_compute[248510]: 2025-12-13 08:51:14.251 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:51:14 np0005558241 nova_compute[248510]: 2025-12-13 08:51:14.252 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:51:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:51:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/341224126' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:51:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:51:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/341224126' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:51:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:51:15 np0005558241 nova_compute[248510]: 2025-12-13 08:51:15.521 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2687: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 4.3 KiB/s wr, 55 op/s
Dec 13 03:51:16 np0005558241 nova_compute[248510]: 2025-12-13 08:51:16.254 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:51:16 np0005558241 nova_compute[248510]: 2025-12-13 08:51:16.254 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:51:17 np0005558241 nova_compute[248510]: 2025-12-13 08:51:17.245 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2688: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 4.3 KiB/s wr, 55 op/s
Dec 13 03:51:18 np0005558241 nova_compute[248510]: 2025-12-13 08:51:18.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:51:19 np0005558241 nova_compute[248510]: 2025-12-13 08:51:19.478 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:19 np0005558241 nova_compute[248510]: 2025-12-13 08:51:19.527 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615864.527167, 67dc02f2-2430-4cc6-a575-bdfd238a5460 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:51:19 np0005558241 nova_compute[248510]: 2025-12-13 08:51:19.528 248514 INFO nova.compute.manager [-] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:51:19 np0005558241 nova_compute[248510]: 2025-12-13 08:51:19.560 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:19 np0005558241 nova_compute[248510]: 2025-12-13 08:51:19.562 248514 DEBUG nova.compute.manager [None req-bae99fae-eecb-4046-90e2-3168abe9b9ee - - - - - -] [instance: 67dc02f2-2430-4cc6-a575-bdfd238a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:51:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2689: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.5 KiB/s wr, 42 op/s
Dec 13 03:51:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:51:20 np0005558241 nova_compute[248510]: 2025-12-13 08:51:20.524 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.3073215557897495e-05 of space, bias 1.0, pg target 0.003921964667369248 quantized to 32 (current 32)
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006693612553454349 of space, bias 1.0, pg target 0.2008083766036305 quantized to 32 (current 32)
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.740707038769602e-07 of space, bias 4.0, pg target 0.0006888848446523523 quantized to 16 (current 32)
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:51:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2690: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Dec 13 03:51:22 np0005558241 nova_compute[248510]: 2025-12-13 08:51:22.289 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:23 np0005558241 podman[357371]: 2025-12-13 08:51:23.962799542 +0000 UTC m=+0.053642776 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:51:23 np0005558241 podman[357372]: 2025-12-13 08:51:23.98466101 +0000 UTC m=+0.062055707 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 03:51:23 np0005558241 podman[357370]: 2025-12-13 08:51:23.984687421 +0000 UTC m=+0.072021717 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:51:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2691: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Dec 13 03:51:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:51:25 np0005558241 nova_compute[248510]: 2025-12-13 08:51:25.485 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615870.4848216, 8d919892-73fd-4a11-be79-2a1a9280a987 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:51:25 np0005558241 nova_compute[248510]: 2025-12-13 08:51:25.486 248514 INFO nova.compute.manager [-] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:51:25 np0005558241 nova_compute[248510]: 2025-12-13 08:51:25.524 248514 DEBUG nova.compute.manager [None req-fba8d9c3-2733-4074-bd94-914db2bc6961 - - - - - -] [instance: 8d919892-73fd-4a11-be79-2a1a9280a987] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:51:25 np0005558241 nova_compute[248510]: 2025-12-13 08:51:25.526 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2692: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:51:27 np0005558241 nova_compute[248510]: 2025-12-13 08:51:27.291 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2693: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:51:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2694: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:51:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:51:30 np0005558241 nova_compute[248510]: 2025-12-13 08:51:30.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2695: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:51:32 np0005558241 nova_compute[248510]: 2025-12-13 08:51:32.333 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2696: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:51:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:51:35 np0005558241 nova_compute[248510]: 2025-12-13 08:51:35.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2697: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:51:37 np0005558241 nova_compute[248510]: 2025-12-13 08:51:37.336 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2698: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:51:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2699: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:51:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:51:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:51:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:51:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:51:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:51:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:51:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:51:40 np0005558241 nova_compute[248510]: 2025-12-13 08:51:40.533 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:41 np0005558241 nova_compute[248510]: 2025-12-13 08:51:41.935 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "32f46650-28ec-40b8-8cbb-afb8a34cda45" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:51:41 np0005558241 nova_compute[248510]: 2025-12-13 08:51:41.935 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:51:41 np0005558241 nova_compute[248510]: 2025-12-13 08:51:41.969 248514 DEBUG nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:51:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2700: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:51:42 np0005558241 nova_compute[248510]: 2025-12-13 08:51:42.223 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:51:42 np0005558241 nova_compute[248510]: 2025-12-13 08:51:42.224 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:51:42 np0005558241 nova_compute[248510]: 2025-12-13 08:51:42.233 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:51:42 np0005558241 nova_compute[248510]: 2025-12-13 08:51:42.233 248514 INFO nova.compute.claims [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:51:42 np0005558241 nova_compute[248510]: 2025-12-13 08:51:42.337 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:42 np0005558241 nova_compute[248510]: 2025-12-13 08:51:42.372 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:51:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:51:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2123659522' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:51:43 np0005558241 nova_compute[248510]: 2025-12-13 08:51:43.075 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.703s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:51:43 np0005558241 nova_compute[248510]: 2025-12-13 08:51:43.082 248514 DEBUG nova.compute.provider_tree [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:51:43 np0005558241 nova_compute[248510]: 2025-12-13 08:51:43.109 248514 DEBUG nova.scheduler.client.report [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:51:43 np0005558241 nova_compute[248510]: 2025-12-13 08:51:43.438 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:51:43 np0005558241 nova_compute[248510]: 2025-12-13 08:51:43.438 248514 DEBUG nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:51:43 np0005558241 nova_compute[248510]: 2025-12-13 08:51:43.512 248514 DEBUG nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:51:43 np0005558241 nova_compute[248510]: 2025-12-13 08:51:43.512 248514 DEBUG nova.network.neutron [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:51:43 np0005558241 nova_compute[248510]: 2025-12-13 08:51:43.542 248514 INFO nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:51:43 np0005558241 nova_compute[248510]: 2025-12-13 08:51:43.566 248514 DEBUG nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.066 248514 DEBUG nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.068 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.069 248514 INFO nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Creating image(s)#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.111 248514 DEBUG nova.storage.rbd_utils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:51:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2701: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.136 248514 DEBUG nova.storage.rbd_utils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.157 248514 DEBUG nova.storage.rbd_utils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.160 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.203 248514 DEBUG nova.policy [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c377508dda354c0aa762d15d52aa130c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.243 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.244 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.245 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.245 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.265 248514 DEBUG nova.storage.rbd_utils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.269 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.581 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.666 248514 DEBUG nova.storage.rbd_utils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] resizing rbd image 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.762 248514 DEBUG nova.objects.instance [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'migration_context' on Instance uuid 32f46650-28ec-40b8-8cbb-afb8a34cda45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.780 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.780 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Ensure instance console log exists: /var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.781 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.781 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:51:44 np0005558241 nova_compute[248510]: 2025-12-13 08:51:44.781 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:51:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:51:45 np0005558241 nova_compute[248510]: 2025-12-13 08:51:45.536 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2702: 321 pgs: 321 active+clean; 41 MiB data, 739 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:51:47 np0005558241 nova_compute[248510]: 2025-12-13 08:51:47.067 248514 DEBUG nova.network.neutron [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Successfully created port: 4b6e89d0-3078-468e-ad37-fa04ed14c96a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:51:47 np0005558241 nova_compute[248510]: 2025-12-13 08:51:47.339 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2703: 321 pgs: 321 active+clean; 53 MiB data, 742 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s wr, 0 op/s
Dec 13 03:51:48 np0005558241 nova_compute[248510]: 2025-12-13 08:51:48.727 248514 DEBUG nova.network.neutron [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Successfully updated port: 4b6e89d0-3078-468e-ad37-fa04ed14c96a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:51:48 np0005558241 nova_compute[248510]: 2025-12-13 08:51:48.745 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:51:48 np0005558241 nova_compute[248510]: 2025-12-13 08:51:48.746 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquired lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:51:48 np0005558241 nova_compute[248510]: 2025-12-13 08:51:48.746 248514 DEBUG nova.network.neutron [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:51:48 np0005558241 nova_compute[248510]: 2025-12-13 08:51:48.988 248514 DEBUG nova.compute.manager [req-a2e95953-5b05-4859-ba06-1f455a9877ab req-1510ad8d-730d-4e5d-8943-37a29197624e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received event network-changed-4b6e89d0-3078-468e-ad37-fa04ed14c96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:51:48 np0005558241 nova_compute[248510]: 2025-12-13 08:51:48.988 248514 DEBUG nova.compute.manager [req-a2e95953-5b05-4859-ba06-1f455a9877ab req-1510ad8d-730d-4e5d-8943-37a29197624e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Refreshing instance network info cache due to event network-changed-4b6e89d0-3078-468e-ad37-fa04ed14c96a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:51:48 np0005558241 nova_compute[248510]: 2025-12-13 08:51:48.989 248514 DEBUG oslo_concurrency.lockutils [req-a2e95953-5b05-4859-ba06-1f455a9877ab req-1510ad8d-730d-4e5d-8943-37a29197624e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:51:50 np0005558241 nova_compute[248510]: 2025-12-13 08:51:50.040 248514 DEBUG nova.network.neutron [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:51:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2704: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:51:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:51:50 np0005558241 nova_compute[248510]: 2025-12-13 08:51:50.538 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.763 248514 DEBUG nova.network.neutron [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updating instance_info_cache with network_info: [{"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.845 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Releasing lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.845 248514 DEBUG nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Instance network_info: |[{"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.846 248514 DEBUG oslo_concurrency.lockutils [req-a2e95953-5b05-4859-ba06-1f455a9877ab req-1510ad8d-730d-4e5d-8943-37a29197624e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.846 248514 DEBUG nova.network.neutron [req-a2e95953-5b05-4859-ba06-1f455a9877ab req-1510ad8d-730d-4e5d-8943-37a29197624e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Refreshing network info cache for port 4b6e89d0-3078-468e-ad37-fa04ed14c96a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.848 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Start _get_guest_xml network_info=[{"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.862 248514 WARNING nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.882 248514 DEBUG nova.virt.libvirt.host [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.884 248514 DEBUG nova.virt.libvirt.host [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.890 248514 DEBUG nova.virt.libvirt.host [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.890 248514 DEBUG nova.virt.libvirt.host [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.890 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.891 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.891 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.891 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.891 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.892 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.892 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.892 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.892 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.893 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.893 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.893 248514 DEBUG nova.virt.hardware [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:51:51 np0005558241 nova_compute[248510]: 2025-12-13 08:51:51.896 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:51:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2705: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:51:52 np0005558241 nova_compute[248510]: 2025-12-13 08:51:52.381 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:51:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2994822390' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:51:52 np0005558241 nova_compute[248510]: 2025-12-13 08:51:52.514 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:51:52 np0005558241 nova_compute[248510]: 2025-12-13 08:51:52.535 248514 DEBUG nova.storage.rbd_utils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:51:52 np0005558241 nova_compute[248510]: 2025-12-13 08:51:52.540 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:51:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:51:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3645111657' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.130 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.132 248514 DEBUG nova.virt.libvirt.vif [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:51:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-737999557',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-737999557',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=110,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1EAPswQkN6gGnzWb6nWTEPMlUNbetceQGhufBgelanH3kUDSBVad+EWLTxUJKeHTg22cPL3Ixvag9/dm2M/FjTcKf+ix54cOXq9k631rEiL8V03+5FUWKZfsVC6yBvBw==',key_name='tempest-TestSecurityGroupsBasicOps-451827369',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-lhganq1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:51:43Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=32f46650-28ec-40b8-8cbb-afb8a34cda45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.132 248514 DEBUG nova.network.os_vif_util [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.133 248514 DEBUG nova.network.os_vif_util [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:e1:26,bridge_name='br-int',has_traffic_filtering=True,id=4b6e89d0-3078-468e-ad37-fa04ed14c96a,network=Network(e742c2df-df4d-48f6-8153-8353ec98fe5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b6e89d0-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.135 248514 DEBUG nova.objects.instance [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 32f46650-28ec-40b8-8cbb-afb8a34cda45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.154 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:51:53 np0005558241 nova_compute[248510]:  <uuid>32f46650-28ec-40b8-8cbb-afb8a34cda45</uuid>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:  <name>instance-0000006e</name>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-737999557</nova:name>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:51:51</nova:creationTime>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:51:53 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:        <nova:user uuid="c377508dda354c0aa762d15d52aa130c">tempest-TestSecurityGroupsBasicOps-1856512026-project-member</nova:user>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:        <nova:project uuid="1064539fdf2b494ea705dd5e74afcd3b">tempest-TestSecurityGroupsBasicOps-1856512026</nova:project>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:        <nova:port uuid="4b6e89d0-3078-468e-ad37-fa04ed14c96a">
Dec 13 03:51:53 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <entry name="serial">32f46650-28ec-40b8-8cbb-afb8a34cda45</entry>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <entry name="uuid">32f46650-28ec-40b8-8cbb-afb8a34cda45</entry>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/32f46650-28ec-40b8-8cbb-afb8a34cda45_disk">
Dec 13 03:51:53 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:51:53 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/32f46650-28ec-40b8-8cbb-afb8a34cda45_disk.config">
Dec 13 03:51:53 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:51:53 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:c0:e1:26"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <target dev="tap4b6e89d0-30"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45/console.log" append="off"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:51:53 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:51:53 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:51:53 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:51:53 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.155 248514 DEBUG nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Preparing to wait for external event network-vif-plugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.156 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.156 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.156 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.157 248514 DEBUG nova.virt.libvirt.vif [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:51:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-737999557',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-737999557',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=110,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1EAPswQkN6gGnzWb6nWTEPMlUNbetceQGhufBgelanH3kUDSBVad+EWLTxUJKeHTg22cPL3Ixvag9/dm2M/FjTcKf+ix54cOXq9k631rEiL8V03+5FUWKZfsVC6yBvBw==',key_name='tempest-TestSecurityGroupsBasicOps-451827369',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-lhganq1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:51:43Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=32f46650-28ec-40b8-8cbb-afb8a34cda45,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.157 248514 DEBUG nova.network.os_vif_util [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.158 248514 DEBUG nova.network.os_vif_util [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:e1:26,bridge_name='br-int',has_traffic_filtering=True,id=4b6e89d0-3078-468e-ad37-fa04ed14c96a,network=Network(e742c2df-df4d-48f6-8153-8353ec98fe5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b6e89d0-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.158 248514 DEBUG os_vif [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:e1:26,bridge_name='br-int',has_traffic_filtering=True,id=4b6e89d0-3078-468e-ad37-fa04ed14c96a,network=Network(e742c2df-df4d-48f6-8153-8353ec98fe5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b6e89d0-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.159 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.159 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.160 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.163 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.163 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b6e89d0-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.164 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4b6e89d0-30, col_values=(('external_ids', {'iface-id': '4b6e89d0-3078-468e-ad37-fa04ed14c96a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:e1:26', 'vm-uuid': '32f46650-28ec-40b8-8cbb-afb8a34cda45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.165 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:53 np0005558241 NetworkManager[50376]: <info>  [1765615913.1679] manager: (tap4b6e89d0-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/439)
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.168 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.174 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.175 248514 INFO os_vif [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:e1:26,bridge_name='br-int',has_traffic_filtering=True,id=4b6e89d0-3078-468e-ad37-fa04ed14c96a,network=Network(e742c2df-df4d-48f6-8153-8353ec98fe5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b6e89d0-30')#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.577 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.578 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.578 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No VIF found with MAC fa:16:3e:c0:e1:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.578 248514 INFO nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Using config drive#033[00m
Dec 13 03:51:53 np0005558241 nova_compute[248510]: 2025-12-13 08:51:53.597 248514 DEBUG nova.storage.rbd_utils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:51:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2706: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:51:54 np0005558241 nova_compute[248510]: 2025-12-13 08:51:54.958 248514 INFO nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Creating config drive at /var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45/disk.config#033[00m
Dec 13 03:51:54 np0005558241 nova_compute[248510]: 2025-12-13 08:51:54.963 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp44ywgd7f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:51:54 np0005558241 podman[357705]: 2025-12-13 08:51:54.977219435 +0000 UTC m=+0.065335389 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:51:54 np0005558241 podman[357706]: 2025-12-13 08:51:54.99098498 +0000 UTC m=+0.077914354 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 03:51:54 np0005558241 podman[357704]: 2025-12-13 08:51:54.997237407 +0000 UTC m=+0.085303880 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:51:55 np0005558241 nova_compute[248510]: 2025-12-13 08:51:55.108 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp44ywgd7f" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:51:55 np0005558241 nova_compute[248510]: 2025-12-13 08:51:55.133 248514 DEBUG nova.storage.rbd_utils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:51:55 np0005558241 nova_compute[248510]: 2025-12-13 08:51:55.138 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45/disk.config 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:51:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:51:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:55.430 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:51:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:55.431 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:51:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:55.431 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:51:55 np0005558241 nova_compute[248510]: 2025-12-13 08:51:55.472 248514 DEBUG nova.network.neutron [req-a2e95953-5b05-4859-ba06-1f455a9877ab req-1510ad8d-730d-4e5d-8943-37a29197624e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updated VIF entry in instance network info cache for port 4b6e89d0-3078-468e-ad37-fa04ed14c96a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:51:55 np0005558241 nova_compute[248510]: 2025-12-13 08:51:55.473 248514 DEBUG nova.network.neutron [req-a2e95953-5b05-4859-ba06-1f455a9877ab req-1510ad8d-730d-4e5d-8943-37a29197624e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updating instance_info_cache with network_info: [{"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:51:55 np0005558241 nova_compute[248510]: 2025-12-13 08:51:55.529 248514 DEBUG oslo_concurrency.lockutils [req-a2e95953-5b05-4859-ba06-1f455a9877ab req-1510ad8d-730d-4e5d-8943-37a29197624e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:51:55 np0005558241 nova_compute[248510]: 2025-12-13 08:51:55.913 248514 DEBUG oslo_concurrency.processutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45/disk.config 32f46650-28ec-40b8-8cbb-afb8a34cda45_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.776s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:51:55 np0005558241 nova_compute[248510]: 2025-12-13 08:51:55.914 248514 INFO nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Deleting local config drive /var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45/disk.config because it was imported into RBD.#033[00m
Dec 13 03:51:55 np0005558241 kernel: tap4b6e89d0-30: entered promiscuous mode
Dec 13 03:51:55 np0005558241 NetworkManager[50376]: <info>  [1765615915.9889] manager: (tap4b6e89d0-30): new Tun device (/org/freedesktop/NetworkManager/Devices/440)
Dec 13 03:51:56 np0005558241 ovn_controller[148476]: 2025-12-13T08:51:56Z|01064|binding|INFO|Claiming lport 4b6e89d0-3078-468e-ad37-fa04ed14c96a for this chassis.
Dec 13 03:51:56 np0005558241 nova_compute[248510]: 2025-12-13 08:51:56.008 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:56 np0005558241 ovn_controller[148476]: 2025-12-13T08:51:56Z|01065|binding|INFO|4b6e89d0-3078-468e-ad37-fa04ed14c96a: Claiming fa:16:3e:c0:e1:26 10.100.0.9
Dec 13 03:51:56 np0005558241 nova_compute[248510]: 2025-12-13 08:51:56.014 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:56 np0005558241 systemd-udevd[357819]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:51:56 np0005558241 NetworkManager[50376]: <info>  [1765615916.0381] device (tap4b6e89d0-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:51:56 np0005558241 NetworkManager[50376]: <info>  [1765615916.0391] device (tap4b6e89d0-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:51:56 np0005558241 ovn_controller[148476]: 2025-12-13T08:51:56Z|01066|binding|INFO|Setting lport 4b6e89d0-3078-468e-ad37-fa04ed14c96a ovn-installed in OVS
Dec 13 03:51:56 np0005558241 nova_compute[248510]: 2025-12-13 08:51:56.082 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:56 np0005558241 systemd-machined[210538]: New machine qemu-136-instance-0000006e.
Dec 13 03:51:56 np0005558241 systemd[1]: Started Virtual Machine qemu-136-instance-0000006e.
Dec 13 03:51:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2707: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.178 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:e1:26 10.100.0.9'], port_security=['fa:16:3e:c0:e1:26 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '32f46650-28ec-40b8-8cbb-afb8a34cda45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e742c2df-df4d-48f6-8153-8353ec98fe5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1c6d054d-d66c-470b-bd1c-b4cabcf90c1c a55bf908-21fd-47cc-b7fd-f8685207b408', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=369f5416-159a-497c-b005-e2677c61c320, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4b6e89d0-3078-468e-ad37-fa04ed14c96a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:51:56 np0005558241 ovn_controller[148476]: 2025-12-13T08:51:56Z|01067|binding|INFO|Setting lport 4b6e89d0-3078-468e-ad37-fa04ed14c96a up in Southbound
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.181 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4b6e89d0-3078-468e-ad37-fa04ed14c96a in datapath e742c2df-df4d-48f6-8153-8353ec98fe5c bound to our chassis#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.183 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e742c2df-df4d-48f6-8153-8353ec98fe5c#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.203 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2148b79a-2ad5-4622-b861-c148b6bd9a65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.205 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape742c2df-d1 in ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.208 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape742c2df-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.208 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c6927761-d9b0-4127-ba90-ea5a80902b32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.210 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef29aa7a-dcef-4a83-b142-f5e432185dc3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.220 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[4170b7da-8b17-458f-9924-97da400471cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.239 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[39ccc8c3-b25a-4050-8c55-8c72b023a7d4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.270 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e6e5f8ee-1b80-4475-96b0-54cd092eb01b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:56 np0005558241 NetworkManager[50376]: <info>  [1765615916.2797] manager: (tape742c2df-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/441)
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.281 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1c420707-d6b7-47c8-98a1-4815e2c10fe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.320 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[eadc1e36-415d-4cdb-9c61-eb823be30a7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.327 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ca432ee4-db57-4a31-949b-2f08ebefa3dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:56 np0005558241 NetworkManager[50376]: <info>  [1765615916.3603] device (tape742c2df-d0): carrier: link connected
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.367 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8e728c55-a17d-47dc-95d0-478919045b24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.385 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e693c7-ead2-48a2-aea5-2c6fc494cb23]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape742c2df-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:a4:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 849357, 'reachable_time': 27765, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357855, 'error': None, 'target': 'ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.410 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b0b32d-d397-46bc-a3c6-9e6b3f60d914]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea2:a4f3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 849357, 'tstamp': 849357}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357856, 'error': None, 'target': 'ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.432 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c194cc03-e56a-4858-9c36-ded6c6a49114]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape742c2df-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:a4:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 849357, 'reachable_time': 27765, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357857, 'error': None, 'target': 'ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.480 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[268bdc71-a7b3-4122-8f2c-07ba69c8ded3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.548 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6df07c28-5e5b-4e05-8cc4-6bb6faca93e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.550 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape742c2df-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.550 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.551 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape742c2df-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:51:56 np0005558241 kernel: tape742c2df-d0: entered promiscuous mode
Dec 13 03:51:56 np0005558241 NetworkManager[50376]: <info>  [1765615916.5530] manager: (tape742c2df-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/442)
Dec 13 03:51:56 np0005558241 nova_compute[248510]: 2025-12-13 08:51:56.552 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.555 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape742c2df-d0, col_values=(('external_ids', {'iface-id': 'a0154a44-7c71-4193-bf4a-ea45abd45ba7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:51:56 np0005558241 ovn_controller[148476]: 2025-12-13T08:51:56Z|01068|binding|INFO|Releasing lport a0154a44-7c71-4193-bf4a-ea45abd45ba7 from this chassis (sb_readonly=0)
Dec 13 03:51:56 np0005558241 nova_compute[248510]: 2025-12-13 08:51:56.571 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.572 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e742c2df-df4d-48f6-8153-8353ec98fe5c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e742c2df-df4d-48f6-8153-8353ec98fe5c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.573 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e6cd18db-27f5-44bf-81df-b54562a52169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.574 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-e742c2df-df4d-48f6-8153-8353ec98fe5c
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/e742c2df-df4d-48f6-8153-8353ec98fe5c.pid.haproxy
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID e742c2df-df4d-48f6-8153-8353ec98fe5c
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:51:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:51:56.575 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c', 'env', 'PROCESS_TAG=haproxy-e742c2df-df4d-48f6-8153-8353ec98fe5c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e742c2df-df4d-48f6-8153-8353ec98fe5c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:51:56 np0005558241 nova_compute[248510]: 2025-12-13 08:51:56.742 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615916.7417943, 32f46650-28ec-40b8-8cbb-afb8a34cda45 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:51:56 np0005558241 nova_compute[248510]: 2025-12-13 08:51:56.743 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] VM Started (Lifecycle Event)#033[00m
Dec 13 03:51:56 np0005558241 nova_compute[248510]: 2025-12-13 08:51:56.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:51:56 np0005558241 nova_compute[248510]: 2025-12-13 08:51:56.780 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:51:56 np0005558241 nova_compute[248510]: 2025-12-13 08:51:56.785 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615916.7431455, 32f46650-28ec-40b8-8cbb-afb8a34cda45 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:51:56 np0005558241 nova_compute[248510]: 2025-12-13 08:51:56.786 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:51:56 np0005558241 nova_compute[248510]: 2025-12-13 08:51:56.820 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:51:56 np0005558241 nova_compute[248510]: 2025-12-13 08:51:56.824 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:51:56 np0005558241 nova_compute[248510]: 2025-12-13 08:51:56.906 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:51:56 np0005558241 podman[357931]: 2025-12-13 08:51:56.981965699 +0000 UTC m=+0.072980371 container create ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:51:57 np0005558241 systemd[1]: Started libpod-conmon-ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d.scope.
Dec 13 03:51:57 np0005558241 podman[357931]: 2025-12-13 08:51:56.932473528 +0000 UTC m=+0.023488230 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:51:57 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:51:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82de2f5cdb66864aa371b69f7e8060f09c82097adfafe67c8d328dd59b19987/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:57 np0005558241 podman[357931]: 2025-12-13 08:51:57.079248948 +0000 UTC m=+0.170263640 container init ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 13 03:51:57 np0005558241 podman[357931]: 2025-12-13 08:51:57.084450358 +0000 UTC m=+0.175465030 container start ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:51:57 np0005558241 neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c[357946]: [NOTICE]   (357952) : New worker (357954) forked
Dec 13 03:51:57 np0005558241 neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c[357946]: [NOTICE]   (357952) : Loading success.
Dec 13 03:51:57 np0005558241 nova_compute[248510]: 2025-12-13 08:51:57.384 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2708: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.166 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.585 248514 DEBUG nova.compute.manager [req-d39c2ee5-f484-472c-aa0d-938d06e18bb7 req-6f3cbbbf-434d-4c8b-9d05-b17189f71dbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received event network-vif-plugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.586 248514 DEBUG oslo_concurrency.lockutils [req-d39c2ee5-f484-472c-aa0d-938d06e18bb7 req-6f3cbbbf-434d-4c8b-9d05-b17189f71dbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.586 248514 DEBUG oslo_concurrency.lockutils [req-d39c2ee5-f484-472c-aa0d-938d06e18bb7 req-6f3cbbbf-434d-4c8b-9d05-b17189f71dbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.586 248514 DEBUG oslo_concurrency.lockutils [req-d39c2ee5-f484-472c-aa0d-938d06e18bb7 req-6f3cbbbf-434d-4c8b-9d05-b17189f71dbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.586 248514 DEBUG nova.compute.manager [req-d39c2ee5-f484-472c-aa0d-938d06e18bb7 req-6f3cbbbf-434d-4c8b-9d05-b17189f71dbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Processing event network-vif-plugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.587 248514 DEBUG nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.594 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615918.5937185, 32f46650-28ec-40b8-8cbb-afb8a34cda45 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.594 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.596 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.600 248514 INFO nova.virt.libvirt.driver [-] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Instance spawned successfully.#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.601 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.620 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.627 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.632 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.632 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.633 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.633 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.634 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.634 248514 DEBUG nova.virt.libvirt.driver [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.750 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.840 248514 INFO nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Took 14.77 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.840 248514 DEBUG nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:51:58 np0005558241 nova_compute[248510]: 2025-12-13 08:51:58.955 248514 INFO nova.compute.manager [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Took 16.78 seconds to build instance.#033[00m
Dec 13 03:51:59 np0005558241 nova_compute[248510]: 2025-12-13 08:51:59.015 248514 DEBUG oslo_concurrency.lockutils [None req-5bd411a4-d89c-4852-8709-0d4a222394f2 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:51:59 np0005558241 nova_compute[248510]: 2025-12-13 08:51:59.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:52:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2709: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.4 MiB/s wr, 36 op/s
Dec 13 03:52:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:52:01 np0005558241 nova_compute[248510]: 2025-12-13 08:52:01.083 248514 DEBUG nova.compute.manager [req-d8dd8e1c-b362-40e1-a536-a195e2fa7f7d req-81c3fe37-e633-4823-b65c-a97544c6b74b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received event network-vif-plugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:52:01 np0005558241 nova_compute[248510]: 2025-12-13 08:52:01.083 248514 DEBUG oslo_concurrency.lockutils [req-d8dd8e1c-b362-40e1-a536-a195e2fa7f7d req-81c3fe37-e633-4823-b65c-a97544c6b74b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:52:01 np0005558241 nova_compute[248510]: 2025-12-13 08:52:01.083 248514 DEBUG oslo_concurrency.lockutils [req-d8dd8e1c-b362-40e1-a536-a195e2fa7f7d req-81c3fe37-e633-4823-b65c-a97544c6b74b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:52:01 np0005558241 nova_compute[248510]: 2025-12-13 08:52:01.084 248514 DEBUG oslo_concurrency.lockutils [req-d8dd8e1c-b362-40e1-a536-a195e2fa7f7d req-81c3fe37-e633-4823-b65c-a97544c6b74b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:52:01 np0005558241 nova_compute[248510]: 2025-12-13 08:52:01.084 248514 DEBUG nova.compute.manager [req-d8dd8e1c-b362-40e1-a536-a195e2fa7f7d req-81c3fe37-e633-4823-b65c-a97544c6b74b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] No waiting events found dispatching network-vif-plugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:52:01 np0005558241 nova_compute[248510]: 2025-12-13 08:52:01.084 248514 WARNING nova.compute.manager [req-d8dd8e1c-b362-40e1-a536-a195e2fa7f7d req-81c3fe37-e633-4823-b65c-a97544c6b74b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received unexpected event network-vif-plugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a for instance with vm_state active and task_state None.#033[00m
Dec 13 03:52:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2710: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Dec 13 03:52:02 np0005558241 nova_compute[248510]: 2025-12-13 08:52:02.386 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:03 np0005558241 nova_compute[248510]: 2025-12-13 08:52:03.168 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:52:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 34K writes, 136K keys, 34K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s#012Cumulative WAL: 34K writes, 12K syncs, 2.83 writes per sync, written: 0.13 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4214 writes, 14K keys, 4214 commit groups, 1.0 writes per commit group, ingest: 14.64 MB, 0.02 MB/s#012Interval WAL: 4215 writes, 1764 syncs, 2.39 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 03:52:03 np0005558241 nova_compute[248510]: 2025-12-13 08:52:03.982 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:03 np0005558241 NetworkManager[50376]: <info>  [1765615923.9846] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/443)
Dec 13 03:52:03 np0005558241 NetworkManager[50376]: <info>  [1765615923.9854] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/444)
Dec 13 03:52:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2711: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 03:52:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:04Z|01069|binding|INFO|Releasing lport a0154a44-7c71-4193-bf4a-ea45abd45ba7 from this chassis (sb_readonly=0)
Dec 13 03:52:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:04Z|01070|binding|INFO|Releasing lport a0154a44-7c71-4193-bf4a-ea45abd45ba7 from this chassis (sb_readonly=0)
Dec 13 03:52:04 np0005558241 nova_compute[248510]: 2025-12-13 08:52:04.229 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:52:05 np0005558241 nova_compute[248510]: 2025-12-13 08:52:05.716 248514 DEBUG nova.compute.manager [req-de6ed77f-dec5-4879-b4ae-ffdfba34af2a req-b1ebfe37-cb52-4e6d-b4de-06756d0d632b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received event network-changed-4b6e89d0-3078-468e-ad37-fa04ed14c96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:52:05 np0005558241 nova_compute[248510]: 2025-12-13 08:52:05.717 248514 DEBUG nova.compute.manager [req-de6ed77f-dec5-4879-b4ae-ffdfba34af2a req-b1ebfe37-cb52-4e6d-b4de-06756d0d632b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Refreshing instance network info cache due to event network-changed-4b6e89d0-3078-468e-ad37-fa04ed14c96a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:52:05 np0005558241 nova_compute[248510]: 2025-12-13 08:52:05.718 248514 DEBUG oslo_concurrency.lockutils [req-de6ed77f-dec5-4879-b4ae-ffdfba34af2a req-b1ebfe37-cb52-4e6d-b4de-06756d0d632b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:52:05 np0005558241 nova_compute[248510]: 2025-12-13 08:52:05.718 248514 DEBUG oslo_concurrency.lockutils [req-de6ed77f-dec5-4879-b4ae-ffdfba34af2a req-b1ebfe37-cb52-4e6d-b4de-06756d0d632b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:52:05 np0005558241 nova_compute[248510]: 2025-12-13 08:52:05.718 248514 DEBUG nova.network.neutron [req-de6ed77f-dec5-4879-b4ae-ffdfba34af2a req-b1ebfe37-cb52-4e6d-b4de-06756d0d632b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Refreshing network info cache for port 4b6e89d0-3078-468e-ad37-fa04ed14c96a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:52:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2712: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 03:52:07 np0005558241 nova_compute[248510]: 2025-12-13 08:52:07.388 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:07 np0005558241 nova_compute[248510]: 2025-12-13 08:52:07.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:52:07 np0005558241 nova_compute[248510]: 2025-12-13 08:52:07.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:52:07 np0005558241 nova_compute[248510]: 2025-12-13 08:52:07.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:52:08 np0005558241 nova_compute[248510]: 2025-12-13 08:52:08.088 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:52:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2713: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 03:52:08 np0005558241 nova_compute[248510]: 2025-12-13 08:52:08.132 248514 DEBUG nova.network.neutron [req-de6ed77f-dec5-4879-b4ae-ffdfba34af2a req-b1ebfe37-cb52-4e6d-b4de-06756d0d632b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updated VIF entry in instance network info cache for port 4b6e89d0-3078-468e-ad37-fa04ed14c96a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:52:08 np0005558241 nova_compute[248510]: 2025-12-13 08:52:08.136 248514 DEBUG nova.network.neutron [req-de6ed77f-dec5-4879-b4ae-ffdfba34af2a req-b1ebfe37-cb52-4e6d-b4de-06756d0d632b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updating instance_info_cache with network_info: [{"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:52:08 np0005558241 nova_compute[248510]: 2025-12-13 08:52:08.164 248514 DEBUG oslo_concurrency.lockutils [req-de6ed77f-dec5-4879-b4ae-ffdfba34af2a req-b1ebfe37-cb52-4e6d-b4de-06756d0d632b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:52:08 np0005558241 nova_compute[248510]: 2025-12-13 08:52:08.165 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:52:08 np0005558241 nova_compute[248510]: 2025-12-13 08:52:08.166 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:52:08 np0005558241 nova_compute[248510]: 2025-12-13 08:52:08.166 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 32f46650-28ec-40b8-8cbb-afb8a34cda45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:52:08 np0005558241 nova_compute[248510]: 2025-12-13 08:52:08.170 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:52:09
Dec 13 03:52:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:52:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:52:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['vms', 'images', '.rgw.root', 'volumes', 'default.rgw.control', '.mgr', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log']
Dec 13 03:52:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:52:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:52:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:52:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:52:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:52:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:52:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:52:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2714: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 03:52:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:52:10 np0005558241 nova_compute[248510]: 2025-12-13 08:52:10.399 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updating instance_info_cache with network_info: [{"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:52:10 np0005558241 nova_compute[248510]: 2025-12-13 08:52:10.435 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:52:10 np0005558241 nova_compute[248510]: 2025-12-13 08:52:10.435 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:52:10 np0005558241 nova_compute[248510]: 2025-12-13 08:52:10.435 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:52:10 np0005558241 nova_compute[248510]: 2025-12-13 08:52:10.448 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:10 np0005558241 nova_compute[248510]: 2025-12-13 08:52:10.500 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:52:10 np0005558241 nova_compute[248510]: 2025-12-13 08:52:10.501 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:52:10 np0005558241 nova_compute[248510]: 2025-12-13 08:52:10.548 248514 DEBUG nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:52:10 np0005558241 nova_compute[248510]: 2025-12-13 08:52:10.657 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:52:10 np0005558241 nova_compute[248510]: 2025-12-13 08:52:10.659 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:52:10 np0005558241 nova_compute[248510]: 2025-12-13 08:52:10.671 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:52:10 np0005558241 nova_compute[248510]: 2025-12-13 08:52:10.671 248514 INFO nova.compute.claims [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:52:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:52:10 np0005558241 nova_compute[248510]: 2025-12-13 08:52:10.778 248514 DEBUG nova.scheduler.client.report [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 13 03:52:10 np0005558241 nova_compute[248510]: 2025-12-13 08:52:10.807 248514 DEBUG nova.scheduler.client.report [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 13 03:52:10 np0005558241 nova_compute[248510]: 2025-12-13 08:52:10.807 248514 DEBUG nova.compute.provider_tree [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 03:52:10 np0005558241 nova_compute[248510]: 2025-12-13 08:52:10.828 248514 DEBUG nova.scheduler.client.report [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 13 03:52:10 np0005558241 nova_compute[248510]: 2025-12-13 08:52:10.875 248514 DEBUG nova.scheduler.client.report [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 13 03:52:10 np0005558241 nova_compute[248510]: 2025-12-13 08:52:10.951 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:52:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:52:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4801.8 total, 600.0 interval#012Cumulative writes: 35K writes, 135K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.78 writes per sync, written: 0.12 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3929 writes, 15K keys, 3929 commit groups, 1.0 writes per commit group, ingest: 14.64 MB, 0.02 MB/s#012Interval WAL: 3928 writes, 1587 syncs, 2.48 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 03:52:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:52:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/751258432' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:52:11 np0005558241 nova_compute[248510]: 2025-12-13 08:52:11.544 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:52:11 np0005558241 nova_compute[248510]: 2025-12-13 08:52:11.555 248514 DEBUG nova.compute.provider_tree [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:52:11 np0005558241 nova_compute[248510]: 2025-12-13 08:52:11.583 248514 DEBUG nova.scheduler.client.report [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:52:11 np0005558241 nova_compute[248510]: 2025-12-13 08:52:11.610 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:52:11 np0005558241 nova_compute[248510]: 2025-12-13 08:52:11.611 248514 DEBUG nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:52:11 np0005558241 nova_compute[248510]: 2025-12-13 08:52:11.668 248514 DEBUG nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:52:11 np0005558241 nova_compute[248510]: 2025-12-13 08:52:11.668 248514 DEBUG nova.network.neutron [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:52:11 np0005558241 nova_compute[248510]: 2025-12-13 08:52:11.701 248514 INFO nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:52:11 np0005558241 nova_compute[248510]: 2025-12-13 08:52:11.758 248514 DEBUG nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:52:11 np0005558241 nova_compute[248510]: 2025-12-13 08:52:11.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:52:11 np0005558241 nova_compute[248510]: 2025-12-13 08:52:11.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:52:11 np0005558241 nova_compute[248510]: 2025-12-13 08:52:11.811 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:52:11 np0005558241 nova_compute[248510]: 2025-12-13 08:52:11.811 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:52:11 np0005558241 nova_compute[248510]: 2025-12-13 08:52:11.811 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:52:11 np0005558241 nova_compute[248510]: 2025-12-13 08:52:11.811 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:52:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:52:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:52:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:52:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:52:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:52:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:52:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:52:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:52:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:52:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:52:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:52:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:52:11 np0005558241 nova_compute[248510]: 2025-12-13 08:52:11.931 248514 DEBUG nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:52:11 np0005558241 nova_compute[248510]: 2025-12-13 08:52:11.934 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:52:11 np0005558241 nova_compute[248510]: 2025-12-13 08:52:11.935 248514 INFO nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Creating image(s)#033[00m
Dec 13 03:52:11 np0005558241 nova_compute[248510]: 2025-12-13 08:52:11.967 248514 DEBUG nova.storage.rbd_utils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.017 248514 DEBUG nova.storage.rbd_utils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.053 248514 DEBUG nova.storage.rbd_utils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.060 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.112 248514 DEBUG nova.policy [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fa34623cd3de4a47aa57959f09b3ff79', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff4d2c6ad4dc4848ac9f55ff1b9e829a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:52:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2715: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.164 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.164 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.165 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.165 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.189 248514 DEBUG nova.storage.rbd_utils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.196 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.390 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:12 np0005558241 podman[358241]: 2025-12-13 08:52:12.424969016 +0000 UTC m=+0.098394848 container create b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:52:12 np0005558241 podman[358241]: 2025-12-13 08:52:12.37208175 +0000 UTC m=+0.045507602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:52:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:52:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/117600970' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.493 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:52:12 np0005558241 systemd[1]: Started libpod-conmon-b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a.scope.
Dec 13 03:52:12 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.594 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.595 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:52:12 np0005558241 podman[358241]: 2025-12-13 08:52:12.657514617 +0000 UTC m=+0.330940479 container init b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:52:12 np0005558241 podman[358241]: 2025-12-13 08:52:12.66803381 +0000 UTC m=+0.341459632 container start b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:52:12 np0005558241 ecstatic_mirzakhani[358261]: 167 167
Dec 13 03:52:12 np0005558241 systemd[1]: libpod-b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a.scope: Deactivated successfully.
Dec 13 03:52:12 np0005558241 conmon[358261]: conmon b31401648f204b1477c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a.scope/container/memory.events
Dec 13 03:52:12 np0005558241 podman[358241]: 2025-12-13 08:52:12.800684536 +0000 UTC m=+0.474110378 container attach b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 03:52:12 np0005558241 podman[358241]: 2025-12-13 08:52:12.801381864 +0000 UTC m=+0.474807716 container died b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.837 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.838 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3481MB free_disk=59.96660049352795GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.838 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.839 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:52:12 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:52:12 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:52:12 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.922 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 32f46650-28ec-40b8-8cbb-afb8a34cda45 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.922 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance fcc617ec-f5f9-41bb-ad4b-86d790622e74 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.923 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:52:12 np0005558241 nova_compute[248510]: 2025-12-13 08:52:12.923 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:52:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:13.007 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:52:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:13.009 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:52:13 np0005558241 nova_compute[248510]: 2025-12-13 08:52:13.009 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:13 np0005558241 nova_compute[248510]: 2025-12-13 08:52:13.017 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:52:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:13Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c0:e1:26 10.100.0.9
Dec 13 03:52:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:13Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:e1:26 10.100.0.9
Dec 13 03:52:13 np0005558241 systemd[1]: var-lib-containers-storage-overlay-645e5c0440744b070078b618d95a01689c91062b7e3c914a4faebc4de817fe72-merged.mount: Deactivated successfully.
Dec 13 03:52:13 np0005558241 nova_compute[248510]: 2025-12-13 08:52:13.232 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:13 np0005558241 nova_compute[248510]: 2025-12-13 08:52:13.299 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:52:13 np0005558241 nova_compute[248510]: 2025-12-13 08:52:13.363 248514 DEBUG nova.storage.rbd_utils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] resizing rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:52:13 np0005558241 podman[358241]: 2025-12-13 08:52:13.453449632 +0000 UTC m=+1.126875464 container remove b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:52:13 np0005558241 systemd[1]: libpod-conmon-b31401648f204b1477c9374b0a38e68cdb03a88c8d92d4077a2843fce2bebb4a.scope: Deactivated successfully.
Dec 13 03:52:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:52:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1982669787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:52:13 np0005558241 nova_compute[248510]: 2025-12-13 08:52:13.674 248514 DEBUG nova.objects.instance [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'migration_context' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:52:13 np0005558241 nova_compute[248510]: 2025-12-13 08:52:13.678 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:52:13 np0005558241 nova_compute[248510]: 2025-12-13 08:52:13.684 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:52:13 np0005558241 nova_compute[248510]: 2025-12-13 08:52:13.704 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:52:13 np0005558241 nova_compute[248510]: 2025-12-13 08:52:13.705 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Ensure instance console log exists: /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:52:13 np0005558241 nova_compute[248510]: 2025-12-13 08:52:13.707 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:52:13 np0005558241 nova_compute[248510]: 2025-12-13 08:52:13.708 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:52:13 np0005558241 nova_compute[248510]: 2025-12-13 08:52:13.708 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:52:13 np0005558241 nova_compute[248510]: 2025-12-13 08:52:13.710 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:52:13 np0005558241 podman[358361]: 2025-12-13 08:52:13.735466613 +0000 UTC m=+0.112752378 container create 95b99f3c798addad5149e3bb7c4081341658711725775b2763ff4088b32d00aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 03:52:13 np0005558241 podman[358361]: 2025-12-13 08:52:13.649156389 +0000 UTC m=+0.026442184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:52:13 np0005558241 nova_compute[248510]: 2025-12-13 08:52:13.749 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:52:13 np0005558241 nova_compute[248510]: 2025-12-13 08:52:13.750 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:52:13 np0005558241 systemd[1]: Started libpod-conmon-95b99f3c798addad5149e3bb7c4081341658711725775b2763ff4088b32d00aa.scope.
Dec 13 03:52:13 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:52:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d76a08cb412284e11cb2b7f48daf4957038a58a2fe62ba91cf86a8b2574ce6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d76a08cb412284e11cb2b7f48daf4957038a58a2fe62ba91cf86a8b2574ce6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d76a08cb412284e11cb2b7f48daf4957038a58a2fe62ba91cf86a8b2574ce6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d76a08cb412284e11cb2b7f48daf4957038a58a2fe62ba91cf86a8b2574ce6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d76a08cb412284e11cb2b7f48daf4957038a58a2fe62ba91cf86a8b2574ce6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:13 np0005558241 podman[358361]: 2025-12-13 08:52:13.972416594 +0000 UTC m=+0.349702449 container init 95b99f3c798addad5149e3bb7c4081341658711725775b2763ff4088b32d00aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bouman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:52:13 np0005558241 podman[358361]: 2025-12-13 08:52:13.980015294 +0000 UTC m=+0.357301059 container start 95b99f3c798addad5149e3bb7c4081341658711725775b2763ff4088b32d00aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:52:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2716: 321 pgs: 321 active+clean; 159 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 130 op/s
Dec 13 03:52:14 np0005558241 podman[358361]: 2025-12-13 08:52:14.216400231 +0000 UTC m=+0.593686016 container attach 95b99f3c798addad5149e3bb7c4081341658711725775b2763ff4088b32d00aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:52:14 np0005558241 nova_compute[248510]: 2025-12-13 08:52:14.225 248514 DEBUG nova.network.neutron [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Successfully created port: b2143648-4c23-49b5-8777-433a5b34c7ce _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:52:14 np0005558241 goofy_bouman[358398]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:52:14 np0005558241 goofy_bouman[358398]: --> All data devices are unavailable
Dec 13 03:52:14 np0005558241 systemd[1]: libpod-95b99f3c798addad5149e3bb7c4081341658711725775b2763ff4088b32d00aa.scope: Deactivated successfully.
Dec 13 03:52:14 np0005558241 podman[358418]: 2025-12-13 08:52:14.54102641 +0000 UTC m=+0.026834913 container died 95b99f3c798addad5149e3bb7c4081341658711725775b2763ff4088b32d00aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bouman, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:52:14 np0005558241 nova_compute[248510]: 2025-12-13 08:52:14.750 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:52:14 np0005558241 nova_compute[248510]: 2025-12-13 08:52:14.751 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:52:14 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a8d76a08cb412284e11cb2b7f48daf4957038a58a2fe62ba91cf86a8b2574ce6-merged.mount: Deactivated successfully.
Dec 13 03:52:14 np0005558241 podman[358418]: 2025-12-13 08:52:14.942754053 +0000 UTC m=+0.428562506 container remove 95b99f3c798addad5149e3bb7c4081341658711725775b2763ff4088b32d00aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:52:14 np0005558241 systemd[1]: libpod-conmon-95b99f3c798addad5149e3bb7c4081341658711725775b2763ff4088b32d00aa.scope: Deactivated successfully.
Dec 13 03:52:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:15.011 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:52:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:52:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3748475953' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:52:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:52:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3748475953' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:52:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:52:15 np0005558241 podman[358496]: 2025-12-13 08:52:15.442239236 +0000 UTC m=+0.023640813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:52:15 np0005558241 podman[358496]: 2025-12-13 08:52:15.673834923 +0000 UTC m=+0.255236490 container create 2360feb7992f1045dd1bf43081e33b93da3a54605c77e0cff59de01f6e6286ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:52:15 np0005558241 nova_compute[248510]: 2025-12-13 08:52:15.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:52:15 np0005558241 nova_compute[248510]: 2025-12-13 08:52:15.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:52:16 np0005558241 systemd[1]: Started libpod-conmon-2360feb7992f1045dd1bf43081e33b93da3a54605c77e0cff59de01f6e6286ce.scope.
Dec 13 03:52:16 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:52:16 np0005558241 podman[358496]: 2025-12-13 08:52:16.076510729 +0000 UTC m=+0.657912306 container init 2360feb7992f1045dd1bf43081e33b93da3a54605c77e0cff59de01f6e6286ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_burnell, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:52:16 np0005558241 podman[358496]: 2025-12-13 08:52:16.083593597 +0000 UTC m=+0.664995144 container start 2360feb7992f1045dd1bf43081e33b93da3a54605c77e0cff59de01f6e6286ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_burnell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 03:52:16 np0005558241 admiring_burnell[358512]: 167 167
Dec 13 03:52:16 np0005558241 systemd[1]: libpod-2360feb7992f1045dd1bf43081e33b93da3a54605c77e0cff59de01f6e6286ce.scope: Deactivated successfully.
Dec 13 03:52:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2717: 321 pgs: 321 active+clean; 159 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 245 KiB/s rd, 3.8 MiB/s wr, 66 op/s
Dec 13 03:52:16 np0005558241 podman[358496]: 2025-12-13 08:52:16.146142495 +0000 UTC m=+0.727544052 container attach 2360feb7992f1045dd1bf43081e33b93da3a54605c77e0cff59de01f6e6286ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_burnell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 03:52:16 np0005558241 podman[358496]: 2025-12-13 08:52:16.146594546 +0000 UTC m=+0.727996103 container died 2360feb7992f1045dd1bf43081e33b93da3a54605c77e0cff59de01f6e6286ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 03:52:16 np0005558241 nova_compute[248510]: 2025-12-13 08:52:16.200 248514 DEBUG nova.network.neutron [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Successfully updated port: b2143648-4c23-49b5-8777-433a5b34c7ce _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:52:16 np0005558241 systemd[1]: var-lib-containers-storage-overlay-998edd489d72cafcb72f30530f93fc419d01c24864bc4ffb05f21b6632771eca-merged.mount: Deactivated successfully.
Dec 13 03:52:16 np0005558241 nova_compute[248510]: 2025-12-13 08:52:16.275 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:52:16 np0005558241 nova_compute[248510]: 2025-12-13 08:52:16.276 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquired lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:52:16 np0005558241 nova_compute[248510]: 2025-12-13 08:52:16.276 248514 DEBUG nova.network.neutron [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:52:16 np0005558241 nova_compute[248510]: 2025-12-13 08:52:16.379 248514 DEBUG nova.compute.manager [req-8422cab8-7548-48bf-8865-cab788263622 req-87100c38-4e1b-4d9b-bc8e-8a902b2501ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:52:16 np0005558241 nova_compute[248510]: 2025-12-13 08:52:16.379 248514 DEBUG nova.compute.manager [req-8422cab8-7548-48bf-8865-cab788263622 req-87100c38-4e1b-4d9b-bc8e-8a902b2501ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing instance network info cache due to event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:52:16 np0005558241 nova_compute[248510]: 2025-12-13 08:52:16.379 248514 DEBUG oslo_concurrency.lockutils [req-8422cab8-7548-48bf-8865-cab788263622 req-87100c38-4e1b-4d9b-bc8e-8a902b2501ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:52:16 np0005558241 podman[358496]: 2025-12-13 08:52:16.44069135 +0000 UTC m=+1.022092897 container remove 2360feb7992f1045dd1bf43081e33b93da3a54605c77e0cff59de01f6e6286ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:52:16 np0005558241 systemd[1]: libpod-conmon-2360feb7992f1045dd1bf43081e33b93da3a54605c77e0cff59de01f6e6286ce.scope: Deactivated successfully.
Dec 13 03:52:16 np0005558241 nova_compute[248510]: 2025-12-13 08:52:16.649 248514 DEBUG nova.network.neutron [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:52:16 np0005558241 podman[358540]: 2025-12-13 08:52:16.622620462 +0000 UTC m=+0.026401663 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:52:16 np0005558241 podman[358540]: 2025-12-13 08:52:16.941716341 +0000 UTC m=+0.345497522 container create 15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:52:17 np0005558241 systemd[1]: Started libpod-conmon-15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd.scope.
Dec 13 03:52:17 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:52:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31feae384e36765e6db510118cc359668ad9b32646beed1d92c5ae5ba48eae7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31feae384e36765e6db510118cc359668ad9b32646beed1d92c5ae5ba48eae7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31feae384e36765e6db510118cc359668ad9b32646beed1d92c5ae5ba48eae7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31feae384e36765e6db510118cc359668ad9b32646beed1d92c5ae5ba48eae7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:17 np0005558241 podman[358540]: 2025-12-13 08:52:17.175174615 +0000 UTC m=+0.578955816 container init 15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_feynman, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:52:17 np0005558241 podman[358540]: 2025-12-13 08:52:17.185295898 +0000 UTC m=+0.589077079 container start 15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_feynman, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 03:52:17 np0005558241 podman[358540]: 2025-12-13 08:52:17.202613313 +0000 UTC m=+0.606394524 container attach 15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:52:17 np0005558241 nova_compute[248510]: 2025-12-13 08:52:17.393 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]: {
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:    "0": [
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:        {
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "devices": [
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "/dev/loop3"
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            ],
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "lv_name": "ceph_lv0",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "lv_size": "21470642176",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "name": "ceph_lv0",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "tags": {
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.cluster_name": "ceph",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.crush_device_class": "",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.encrypted": "0",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.objectstore": "bluestore",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.osd_id": "0",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.type": "block",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.vdo": "0",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.with_tpm": "0"
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            },
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "type": "block",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "vg_name": "ceph_vg0"
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:        }
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:    ],
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:    "1": [
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:        {
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "devices": [
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "/dev/loop4"
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            ],
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "lv_name": "ceph_lv1",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "lv_size": "21470642176",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "name": "ceph_lv1",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "tags": {
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.cluster_name": "ceph",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.crush_device_class": "",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.encrypted": "0",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.objectstore": "bluestore",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.osd_id": "1",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.type": "block",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.vdo": "0",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.with_tpm": "0"
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            },
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "type": "block",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "vg_name": "ceph_vg1"
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:        }
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:    ],
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:    "2": [
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:        {
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "devices": [
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "/dev/loop5"
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            ],
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "lv_name": "ceph_lv2",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "lv_size": "21470642176",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "name": "ceph_lv2",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "tags": {
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.cluster_name": "ceph",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.crush_device_class": "",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.encrypted": "0",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.objectstore": "bluestore",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.osd_id": "2",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.type": "block",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.vdo": "0",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:                "ceph.with_tpm": "0"
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            },
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "type": "block",
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:            "vg_name": "ceph_vg2"
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:        }
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]:    ]
Dec 13 03:52:17 np0005558241 relaxed_feynman[358557]: }
Dec 13 03:52:17 np0005558241 systemd[1]: libpod-15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd.scope: Deactivated successfully.
Dec 13 03:52:17 np0005558241 conmon[358557]: conmon 15095555ef4af562efe8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd.scope/container/memory.events
Dec 13 03:52:17 np0005558241 podman[358540]: 2025-12-13 08:52:17.545703515 +0000 UTC m=+0.949484716 container died 15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_feynman, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:52:17 np0005558241 systemd[1]: var-lib-containers-storage-overlay-31feae384e36765e6db510118cc359668ad9b32646beed1d92c5ae5ba48eae7d-merged.mount: Deactivated successfully.
Dec 13 03:52:17 np0005558241 podman[358540]: 2025-12-13 08:52:17.785569739 +0000 UTC m=+1.189350920 container remove 15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_feynman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:52:17 np0005558241 systemd[1]: libpod-conmon-15095555ef4af562efe8e4f462e927ab55d9b0547941e975ba8c6bb859c2decd.scope: Deactivated successfully.
Dec 13 03:52:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:17Z|01071|binding|INFO|Releasing lport a0154a44-7c71-4193-bf4a-ea45abd45ba7 from this chassis (sb_readonly=0)
Dec 13 03:52:17 np0005558241 nova_compute[248510]: 2025-12-13 08:52:17.898 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2718: 321 pgs: 321 active+clean; 163 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 282 KiB/s rd, 3.8 MiB/s wr, 77 op/s
Dec 13 03:52:18 np0005558241 nova_compute[248510]: 2025-12-13 08:52:18.234 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:18 np0005558241 podman[358638]: 2025-12-13 08:52:18.375171152 +0000 UTC m=+0.077056073 container create 176be4c4abf3a775eeb8d24921f9f24452d9bdbe1c5f20c2acdbaa2fde64629f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cray, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 03:52:18 np0005558241 podman[358638]: 2025-12-13 08:52:18.322114562 +0000 UTC m=+0.023999523 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:52:18 np0005558241 systemd[1]: Started libpod-conmon-176be4c4abf3a775eeb8d24921f9f24452d9bdbe1c5f20c2acdbaa2fde64629f.scope.
Dec 13 03:52:18 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:52:18 np0005558241 podman[358638]: 2025-12-13 08:52:18.654791493 +0000 UTC m=+0.356676434 container init 176be4c4abf3a775eeb8d24921f9f24452d9bdbe1c5f20c2acdbaa2fde64629f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cray, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 03:52:18 np0005558241 podman[358638]: 2025-12-13 08:52:18.662423645 +0000 UTC m=+0.364308566 container start 176be4c4abf3a775eeb8d24921f9f24452d9bdbe1c5f20c2acdbaa2fde64629f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cray, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:52:18 np0005558241 affectionate_cray[358654]: 167 167
Dec 13 03:52:18 np0005558241 systemd[1]: libpod-176be4c4abf3a775eeb8d24921f9f24452d9bdbe1c5f20c2acdbaa2fde64629f.scope: Deactivated successfully.
Dec 13 03:52:18 np0005558241 podman[358638]: 2025-12-13 08:52:18.778444744 +0000 UTC m=+0.480329655 container attach 176be4c4abf3a775eeb8d24921f9f24452d9bdbe1c5f20c2acdbaa2fde64629f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cray, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:52:18 np0005558241 podman[358638]: 2025-12-13 08:52:18.779981742 +0000 UTC m=+0.481866663 container died 176be4c4abf3a775eeb8d24921f9f24452d9bdbe1c5f20c2acdbaa2fde64629f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cray, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 03:52:18 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b191b7ade96d6cc4c83c07e6d47bb47cc0d2901416d2e9ae8ecdc834a1c185d4-merged.mount: Deactivated successfully.
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.160 248514 DEBUG nova.network.neutron [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.185 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Releasing lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.186 248514 DEBUG nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance network_info: |[{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.188 248514 DEBUG oslo_concurrency.lockutils [req-8422cab8-7548-48bf-8865-cab788263622 req-87100c38-4e1b-4d9b-bc8e-8a902b2501ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.188 248514 DEBUG nova.network.neutron [req-8422cab8-7548-48bf-8865-cab788263622 req-87100c38-4e1b-4d9b-bc8e-8a902b2501ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.191 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Start _get_guest_xml network_info=[{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.195 248514 WARNING nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.201 248514 DEBUG nova.virt.libvirt.host [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.203 248514 DEBUG nova.virt.libvirt.host [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.209 248514 DEBUG nova.virt.libvirt.host [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.210 248514 DEBUG nova.virt.libvirt.host [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.210 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.211 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.211 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.211 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.211 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.212 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.212 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.212 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.212 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.213 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.213 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.213 248514 DEBUG nova.virt.hardware [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.216 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:52:19 np0005558241 podman[358638]: 2025-12-13 08:52:19.417173718 +0000 UTC m=+1.119058649 container remove 176be4c4abf3a775eeb8d24921f9f24452d9bdbe1c5f20c2acdbaa2fde64629f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cray, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 03:52:19 np0005558241 systemd[1]: libpod-conmon-176be4c4abf3a775eeb8d24921f9f24452d9bdbe1c5f20c2acdbaa2fde64629f.scope: Deactivated successfully.
Dec 13 03:52:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:52:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.2 total, 600.0 interval#012Cumulative writes: 28K writes, 114K keys, 28K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s#012Cumulative WAL: 28K writes, 9954 syncs, 2.88 writes per sync, written: 0.11 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3291 writes, 12K keys, 3291 commit groups, 1.0 writes per commit group, ingest: 13.13 MB, 0.02 MB/s#012Interval WAL: 3291 writes, 1338 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 03:52:19 np0005558241 podman[358701]: 2025-12-13 08:52:19.603812198 +0000 UTC m=+0.027286935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:52:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:52:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/103592188' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.864 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.889 248514 DEBUG nova.storage.rbd_utils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:52:19 np0005558241 nova_compute[248510]: 2025-12-13 08:52:19.893 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:52:20 np0005558241 podman[358701]: 2025-12-13 08:52:20.022876315 +0000 UTC m=+0.446351032 container create 22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_banach, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:52:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2719: 321 pgs: 321 active+clean; 167 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Dec 13 03:52:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:52:20 np0005558241 systemd[1]: Started libpod-conmon-22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6.scope.
Dec 13 03:52:20 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:52:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e2b5a9cafa3512546ead1382e8841912a692d6775ac38b2f4a4914b927541/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e2b5a9cafa3512546ead1382e8841912a692d6775ac38b2f4a4914b927541/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e2b5a9cafa3512546ead1382e8841912a692d6775ac38b2f4a4914b927541/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e2b5a9cafa3512546ead1382e8841912a692d6775ac38b2f4a4914b927541/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:20 np0005558241 podman[358701]: 2025-12-13 08:52:20.620752155 +0000 UTC m=+1.044226882 container init 22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 03:52:20 np0005558241 podman[358701]: 2025-12-13 08:52:20.632457608 +0000 UTC m=+1.055932325 container start 22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:52:20 np0005558241 nova_compute[248510]: 2025-12-13 08:52:20.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011156126648185698 of space, bias 1.0, pg target 0.3346837994455709 quantized to 32 (current 32)
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006693546727135707 of space, bias 1.0, pg target 0.2008064018140712 quantized to 32 (current 32)
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.736360017727181e-07 of space, bias 4.0, pg target 0.0006883632021272618 quantized to 16 (current 32)
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:52:21 np0005558241 podman[358701]: 2025-12-13 08:52:21.293626995 +0000 UTC m=+1.717101702 container attach 22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_banach, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:52:21 np0005558241 lvm[358836]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:52:21 np0005558241 lvm[358837]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:52:21 np0005558241 lvm[358836]: VG ceph_vg1 finished
Dec 13 03:52:21 np0005558241 lvm[358837]: VG ceph_vg0 finished
Dec 13 03:52:21 np0005558241 lvm[358839]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:52:21 np0005558241 lvm[358839]: VG ceph_vg2 finished
Dec 13 03:52:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:52:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2711485502' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.432 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.434 248514 DEBUG nova.virt.libvirt.vif [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:52:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1747506169',display_name='tempest-TestShelveInstance-server-1747506169',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1747506169',id=111,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAebkVWpSolmv9rvqo0lcOY35Sse8ZKcOxj7o5K69+ccF5rX0zOaHwOkKpIPYjoJDZ0lGFUDC6z1VdzfkWYgp+l6o2jOhZfRbhSP9Loovk747mSM12eSN6XwaL50YLDn8Q==',key_name='tempest-TestShelveInstance-1284595260',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff4d2c6ad4dc4848ac9f55ff1b9e829a',ramdisk_id='',reservation_id='r-wfxenjt6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-2105398574',owner_user_name='tempest-TestShelveInstance-2105398574-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:52:11Z,user_data=None,user_id='fa34623cd3de4a47aa57959f09b3ff79',uuid=fcc617ec-f5f9-41bb-ad4b-86d790622e74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.434 248514 DEBUG nova.network.os_vif_util [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converting VIF {"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.435 248514 DEBUG nova.network.os_vif_util [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.436 248514 DEBUG nova.objects.instance [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'pci_devices' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.458 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:52:21 np0005558241 nova_compute[248510]:  <uuid>fcc617ec-f5f9-41bb-ad4b-86d790622e74</uuid>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:  <name>instance-0000006f</name>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestShelveInstance-server-1747506169</nova:name>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:52:19</nova:creationTime>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:52:21 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:        <nova:user uuid="fa34623cd3de4a47aa57959f09b3ff79">tempest-TestShelveInstance-2105398574-project-member</nova:user>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:        <nova:project uuid="ff4d2c6ad4dc4848ac9f55ff1b9e829a">tempest-TestShelveInstance-2105398574</nova:project>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:        <nova:port uuid="b2143648-4c23-49b5-8777-433a5b34c7ce">
Dec 13 03:52:21 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <entry name="serial">fcc617ec-f5f9-41bb-ad4b-86d790622e74</entry>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <entry name="uuid">fcc617ec-f5f9-41bb-ad4b-86d790622e74</entry>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk">
Dec 13 03:52:21 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:52:21 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config">
Dec 13 03:52:21 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:52:21 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:02:b7:87"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <target dev="tapb2143648-4c"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/console.log" append="off"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:52:21 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:52:21 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:52:21 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:52:21 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.459 248514 DEBUG nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Preparing to wait for external event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.459 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.459 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.459 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.460 248514 DEBUG nova.virt.libvirt.vif [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:52:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1747506169',display_name='tempest-TestShelveInstance-server-1747506169',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1747506169',id=111,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAebkVWpSolmv9rvqo0lcOY35Sse8ZKcOxj7o5K69+ccF5rX0zOaHwOkKpIPYjoJDZ0lGFUDC6z1VdzfkWYgp+l6o2jOhZfRbhSP9Loovk747mSM12eSN6XwaL50YLDn8Q==',key_name='tempest-TestShelveInstance-1284595260',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff4d2c6ad4dc4848ac9f55ff1b9e829a',ramdisk_id='',reservation_id='r-wfxenjt6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-2105398574',owner_user_name='tempest-TestShelveInstance-2105398574-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:52:11Z,user_data=None,user_id='fa34623cd3de4a47aa57959f09b3ff79',uuid=fcc617ec-f5f9-41bb-ad4b-86d790622e74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.460 248514 DEBUG nova.network.os_vif_util [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converting VIF {"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.461 248514 DEBUG nova.network.os_vif_util [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.461 248514 DEBUG os_vif [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.462 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.462 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.462 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:52:21 np0005558241 musing_banach[358757]: {}
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.466 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.466 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2143648-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.466 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb2143648-4c, col_values=(('external_ids', {'iface-id': 'b2143648-4c23-49b5-8777-433a5b34c7ce', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:b7:87', 'vm-uuid': 'fcc617ec-f5f9-41bb-ad4b-86d790622e74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.468 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:21 np0005558241 NetworkManager[50376]: <info>  [1765615941.4708] manager: (tapb2143648-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/445)
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.471 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.476 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.478 248514 INFO os_vif [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c')#033[00m
Dec 13 03:52:21 np0005558241 systemd[1]: libpod-22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6.scope: Deactivated successfully.
Dec 13 03:52:21 np0005558241 systemd[1]: libpod-22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6.scope: Consumed 1.462s CPU time.
Dec 13 03:52:21 np0005558241 podman[358845]: 2025-12-13 08:52:21.529251613 +0000 UTC m=+0.024800433 container died 22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.915 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.915 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.915 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] No VIF found with MAC fa:16:3e:02:b7:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.916 248514 INFO nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Using config drive#033[00m
Dec 13 03:52:21 np0005558241 nova_compute[248510]: 2025-12-13 08:52:21.966 248514 DEBUG nova.storage.rbd_utils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:52:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2720: 321 pgs: 321 active+clean; 167 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Dec 13 03:52:22 np0005558241 systemd[1]: var-lib-containers-storage-overlay-914e2b5a9cafa3512546ead1382e8841912a692d6775ac38b2f4a4914b927541-merged.mount: Deactivated successfully.
Dec 13 03:52:22 np0005558241 nova_compute[248510]: 2025-12-13 08:52:22.396 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:22 np0005558241 podman[358845]: 2025-12-13 08:52:22.421954375 +0000 UTC m=+0.917503175 container remove 22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 03:52:22 np0005558241 systemd[1]: libpod-conmon-22016ae0af5ac82cc2fab2d7e54483556a7978f7c458f4d9cdedb7e1cc5604b6.scope: Deactivated successfully.
Dec 13 03:52:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:52:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:52:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:52:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:52:22 np0005558241 nova_compute[248510]: 2025-12-13 08:52:22.921 248514 DEBUG nova.network.neutron [req-8422cab8-7548-48bf-8865-cab788263622 req-87100c38-4e1b-4d9b-bc8e-8a902b2501ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updated VIF entry in instance network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:52:22 np0005558241 nova_compute[248510]: 2025-12-13 08:52:22.923 248514 DEBUG nova.network.neutron [req-8422cab8-7548-48bf-8865-cab788263622 req-87100c38-4e1b-4d9b-bc8e-8a902b2501ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:52:22 np0005558241 nova_compute[248510]: 2025-12-13 08:52:22.952 248514 DEBUG oslo_concurrency.lockutils [req-8422cab8-7548-48bf-8865-cab788263622 req-87100c38-4e1b-4d9b-bc8e-8a902b2501ab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:52:23 np0005558241 nova_compute[248510]: 2025-12-13 08:52:23.005 248514 INFO nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Creating config drive at /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config#033[00m
Dec 13 03:52:23 np0005558241 nova_compute[248510]: 2025-12-13 08:52:23.011 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcrtstmjd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:52:23 np0005558241 nova_compute[248510]: 2025-12-13 08:52:23.179 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcrtstmjd" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:52:23 np0005558241 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 03:52:23 np0005558241 nova_compute[248510]: 2025-12-13 08:52:23.517 248514 DEBUG nova.storage.rbd_utils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:52:23 np0005558241 nova_compute[248510]: 2025-12-13 08:52:23.521 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:52:23 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:52:23 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:52:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2721: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Dec 13 03:52:24 np0005558241 nova_compute[248510]: 2025-12-13 08:52:24.410 248514 DEBUG oslo_concurrency.processutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.889s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:52:24 np0005558241 nova_compute[248510]: 2025-12-13 08:52:24.412 248514 INFO nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Deleting local config drive /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config because it was imported into RBD.#033[00m
Dec 13 03:52:24 np0005558241 kernel: tapb2143648-4c: entered promiscuous mode
Dec 13 03:52:24 np0005558241 NetworkManager[50376]: <info>  [1765615944.4992] manager: (tapb2143648-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/446)
Dec 13 03:52:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:24Z|01072|binding|INFO|Claiming lport b2143648-4c23-49b5-8777-433a5b34c7ce for this chassis.
Dec 13 03:52:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:24Z|01073|binding|INFO|b2143648-4c23-49b5-8777-433a5b34c7ce: Claiming fa:16:3e:02:b7:87 10.100.0.4
Dec 13 03:52:24 np0005558241 nova_compute[248510]: 2025-12-13 08:52:24.499 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.507 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:b7:87 10.100.0.4'], port_security=['fa:16:3e:02:b7:87 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fcc617ec-f5f9-41bb-ad4b-86d790622e74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db56c55-78f1-455f-855e-db3acef05ff3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff4d2c6ad4dc4848ac9f55ff1b9e829a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9bebfabc-b3c7-415b-9881-5a1028b3b8d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=efdd787e-21fc-422f-be64-57ef7368490d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b2143648-4c23-49b5-8777-433a5b34c7ce) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.509 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b2143648-4c23-49b5-8777-433a5b34c7ce in datapath 6db56c55-78f1-455f-855e-db3acef05ff3 bound to our chassis#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.510 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6db56c55-78f1-455f-855e-db3acef05ff3#033[00m
Dec 13 03:52:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:24Z|01074|binding|INFO|Setting lport b2143648-4c23-49b5-8777-433a5b34c7ce ovn-installed in OVS
Dec 13 03:52:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:24Z|01075|binding|INFO|Setting lport b2143648-4c23-49b5-8777-433a5b34c7ce up in Southbound
Dec 13 03:52:24 np0005558241 nova_compute[248510]: 2025-12-13 08:52:24.518 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:24 np0005558241 nova_compute[248510]: 2025-12-13 08:52:24.522 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.525 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[18bca0c0-6228-45b2-b15b-108cfd7694d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.526 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6db56c55-71 in ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.528 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6db56c55-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.529 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3f41a372-5f46-4c7d-a245-b7d18a08a275]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.530 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ca186484-ebcc-4ad0-acfd-5dab4227c68d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:24 np0005558241 systemd-udevd[358955]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:52:24 np0005558241 systemd-machined[210538]: New machine qemu-137-instance-0000006f.
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.545 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[02d96675-384e-4d16-ae99-6e54c6cc9783]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:24 np0005558241 NetworkManager[50376]: <info>  [1765615944.5513] device (tapb2143648-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:52:24 np0005558241 NetworkManager[50376]: <info>  [1765615944.5520] device (tapb2143648-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:52:24 np0005558241 systemd[1]: Started Virtual Machine qemu-137-instance-0000006f.
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.565 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[27e0e555-1210-4285-be43-aa6c3e7d01a0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.601 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[618021f8-bae8-4452-a346-cc23b32d5d9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:24 np0005558241 NetworkManager[50376]: <info>  [1765615944.6107] manager: (tap6db56c55-70): new Veth device (/org/freedesktop/NetworkManager/Devices/447)
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.610 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[87f71812-19da-490d-85ee-667e0f91b915]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.657 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a2593440-539c-4ac4-a217-4cb810314e41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.661 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2015a230-50fa-418a-819d-fc1160f1286d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:24 np0005558241 NetworkManager[50376]: <info>  [1765615944.6878] device (tap6db56c55-70): carrier: link connected
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.697 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[15fbee32-87d1-4678-993d-441a6d01e78e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.718 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6fbf83-f28a-4636-b7f2-462e94ac6d2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6db56c55-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:4b:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 318], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852190, 'reachable_time': 33294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358989, 'error': None, 'target': 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.736 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[20ef2a45-0af6-43d7-8c4f-8179cc150a61]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:4b23'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 852190, 'tstamp': 852190}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358990, 'error': None, 'target': 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.755 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[08e39f62-ec16-4bf0-b6e1-dcbfa1666ab7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6db56c55-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:4b:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 318], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852190, 'reachable_time': 33294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 358991, 'error': None, 'target': 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.790 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[600ba002-2331-40c1-86a1-cdc7544a8c33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.884 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b93a8a7f-841c-4e9c-a7c9-f5f9121b3630]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.886 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6db56c55-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.886 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.886 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6db56c55-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:52:24 np0005558241 kernel: tap6db56c55-70: entered promiscuous mode
Dec 13 03:52:24 np0005558241 NetworkManager[50376]: <info>  [1765615944.8891] manager: (tap6db56c55-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/448)
Dec 13 03:52:24 np0005558241 nova_compute[248510]: 2025-12-13 08:52:24.889 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:24 np0005558241 nova_compute[248510]: 2025-12-13 08:52:24.922 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.924 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6db56c55-70, col_values=(('external_ids', {'iface-id': '401abfe8-06be-4b57-8432-310dcd747a81'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:52:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:24Z|01076|binding|INFO|Releasing lport 401abfe8-06be-4b57-8432-310dcd747a81 from this chassis (sb_readonly=0)
Dec 13 03:52:24 np0005558241 nova_compute[248510]: 2025-12-13 08:52:24.925 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:24 np0005558241 nova_compute[248510]: 2025-12-13 08:52:24.926 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.927 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6db56c55-78f1-455f-855e-db3acef05ff3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6db56c55-78f1-455f-855e-db3acef05ff3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.928 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fe16d0cc-ca4b-41ef-8aa5-489b01d88157]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.929 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-6db56c55-78f1-455f-855e-db3acef05ff3
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/6db56c55-78f1-455f-855e-db3acef05ff3.pid.haproxy
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 6db56c55-78f1-455f-855e-db3acef05ff3
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:52:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:24.929 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'env', 'PROCESS_TAG=haproxy-6db56c55-78f1-455f-855e-db3acef05ff3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6db56c55-78f1-455f-855e-db3acef05ff3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:52:24 np0005558241 nova_compute[248510]: 2025-12-13 08:52:24.940 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:52:25 np0005558241 podman[359059]: 2025-12-13 08:52:25.28300925 +0000 UTC m=+0.024870245 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:52:25 np0005558241 nova_compute[248510]: 2025-12-13 08:52:25.608 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615945.608165, fcc617ec-f5f9-41bb-ad4b-86d790622e74 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:52:25 np0005558241 nova_compute[248510]: 2025-12-13 08:52:25.609 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] VM Started (Lifecycle Event)#033[00m
Dec 13 03:52:25 np0005558241 nova_compute[248510]: 2025-12-13 08:52:25.639 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:52:25 np0005558241 nova_compute[248510]: 2025-12-13 08:52:25.643 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615945.6095579, fcc617ec-f5f9-41bb-ad4b-86d790622e74 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:52:25 np0005558241 nova_compute[248510]: 2025-12-13 08:52:25.644 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:52:25 np0005558241 nova_compute[248510]: 2025-12-13 08:52:25.925 248514 DEBUG nova.compute.manager [req-f01bc60b-3e00-486f-9737-cf2d1a81c631 req-5414d73b-3097-460f-b477-f59cc1b05df3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:52:25 np0005558241 nova_compute[248510]: 2025-12-13 08:52:25.925 248514 DEBUG oslo_concurrency.lockutils [req-f01bc60b-3e00-486f-9737-cf2d1a81c631 req-5414d73b-3097-460f-b477-f59cc1b05df3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:52:25 np0005558241 nova_compute[248510]: 2025-12-13 08:52:25.926 248514 DEBUG oslo_concurrency.lockutils [req-f01bc60b-3e00-486f-9737-cf2d1a81c631 req-5414d73b-3097-460f-b477-f59cc1b05df3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:52:25 np0005558241 nova_compute[248510]: 2025-12-13 08:52:25.926 248514 DEBUG oslo_concurrency.lockutils [req-f01bc60b-3e00-486f-9737-cf2d1a81c631 req-5414d73b-3097-460f-b477-f59cc1b05df3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:52:25 np0005558241 nova_compute[248510]: 2025-12-13 08:52:25.926 248514 DEBUG nova.compute.manager [req-f01bc60b-3e00-486f-9737-cf2d1a81c631 req-5414d73b-3097-460f-b477-f59cc1b05df3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Processing event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:52:25 np0005558241 nova_compute[248510]: 2025-12-13 08:52:25.927 248514 DEBUG nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:52:25 np0005558241 nova_compute[248510]: 2025-12-13 08:52:25.930 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:52:25 np0005558241 nova_compute[248510]: 2025-12-13 08:52:25.935 248514 INFO nova.virt.libvirt.driver [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance spawned successfully.#033[00m
Dec 13 03:52:25 np0005558241 nova_compute[248510]: 2025-12-13 08:52:25.936 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:52:25 np0005558241 nova_compute[248510]: 2025-12-13 08:52:25.944 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:52:25 np0005558241 nova_compute[248510]: 2025-12-13 08:52:25.947 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615945.9299753, fcc617ec-f5f9-41bb-ad4b-86d790622e74 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:52:25 np0005558241 nova_compute[248510]: 2025-12-13 08:52:25.948 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:52:25 np0005558241 podman[359059]: 2025-12-13 08:52:25.988358645 +0000 UTC m=+0.730219620 container create b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 13 03:52:26 np0005558241 nova_compute[248510]: 2025-12-13 08:52:26.011 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:52:26 np0005558241 nova_compute[248510]: 2025-12-13 08:52:26.016 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:52:26 np0005558241 nova_compute[248510]: 2025-12-13 08:52:26.016 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:52:26 np0005558241 nova_compute[248510]: 2025-12-13 08:52:26.017 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:52:26 np0005558241 nova_compute[248510]: 2025-12-13 08:52:26.017 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:52:26 np0005558241 nova_compute[248510]: 2025-12-13 08:52:26.017 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:52:26 np0005558241 nova_compute[248510]: 2025-12-13 08:52:26.018 248514 DEBUG nova.virt.libvirt.driver [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:52:26 np0005558241 nova_compute[248510]: 2025-12-13 08:52:26.021 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:52:26 np0005558241 nova_compute[248510]: 2025-12-13 08:52:26.065 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:52:26 np0005558241 systemd[1]: Started libpod-conmon-b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06.scope.
Dec 13 03:52:26 np0005558241 nova_compute[248510]: 2025-12-13 08:52:26.089 248514 INFO nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Took 14.16 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:52:26 np0005558241 nova_compute[248510]: 2025-12-13 08:52:26.089 248514 DEBUG nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:52:26 np0005558241 podman[359080]: 2025-12-13 08:52:26.123370891 +0000 UTC m=+0.206788716 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 13 03:52:26 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:52:26 np0005558241 podman[359079]: 2025-12-13 08:52:26.130153811 +0000 UTC m=+0.212927330 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 03:52:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d101a5c71da0f833ee85e3864334a8ddce618ff29ffdbe45b70d5c22bb3233b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2722: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 103 KiB/s wr, 23 op/s
Dec 13 03:52:26 np0005558241 nova_compute[248510]: 2025-12-13 08:52:26.210 248514 INFO nova.compute.manager [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Took 15.60 seconds to build instance.#033[00m
Dec 13 03:52:26 np0005558241 nova_compute[248510]: 2025-12-13 08:52:26.243 248514 DEBUG oslo_concurrency.lockutils [None req-68fcd59b-0f7a-4ff9-88c2-2fddaa3d36b5 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:52:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:26Z|01077|binding|INFO|Releasing lport a0154a44-7c71-4193-bf4a-ea45abd45ba7 from this chassis (sb_readonly=0)
Dec 13 03:52:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:26Z|01078|binding|INFO|Releasing lport 401abfe8-06be-4b57-8432-310dcd747a81 from this chassis (sb_readonly=0)
Dec 13 03:52:26 np0005558241 nova_compute[248510]: 2025-12-13 08:52:26.310 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:26 np0005558241 podman[359059]: 2025-12-13 08:52:26.365797879 +0000 UTC m=+1.107658894 container init b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 03:52:26 np0005558241 podman[359059]: 2025-12-13 08:52:26.371458641 +0000 UTC m=+1.113319626 container start b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 03:52:26 np0005558241 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[359133]: [NOTICE]   (359147) : New worker (359149) forked
Dec 13 03:52:26 np0005558241 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[359133]: [NOTICE]   (359147) : Loading success.
Dec 13 03:52:26 np0005558241 podman[359078]: 2025-12-13 08:52:26.399824973 +0000 UTC m=+0.487307590 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 13 03:52:26 np0005558241 nova_compute[248510]: 2025-12-13 08:52:26.469 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:27 np0005558241 nova_compute[248510]: 2025-12-13 08:52:27.398 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:28 np0005558241 nova_compute[248510]: 2025-12-13 08:52:28.086 248514 DEBUG nova.compute.manager [req-11de4aaf-f597-4bc0-a90c-853a805c0b16 req-c2ad2ffe-c987-4c5a-bbb2-af84d6954cf5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:52:28 np0005558241 nova_compute[248510]: 2025-12-13 08:52:28.086 248514 DEBUG oslo_concurrency.lockutils [req-11de4aaf-f597-4bc0-a90c-853a805c0b16 req-c2ad2ffe-c987-4c5a-bbb2-af84d6954cf5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:52:28 np0005558241 nova_compute[248510]: 2025-12-13 08:52:28.087 248514 DEBUG oslo_concurrency.lockutils [req-11de4aaf-f597-4bc0-a90c-853a805c0b16 req-c2ad2ffe-c987-4c5a-bbb2-af84d6954cf5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:52:28 np0005558241 nova_compute[248510]: 2025-12-13 08:52:28.087 248514 DEBUG oslo_concurrency.lockutils [req-11de4aaf-f597-4bc0-a90c-853a805c0b16 req-c2ad2ffe-c987-4c5a-bbb2-af84d6954cf5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:52:28 np0005558241 nova_compute[248510]: 2025-12-13 08:52:28.087 248514 DEBUG nova.compute.manager [req-11de4aaf-f597-4bc0-a90c-853a805c0b16 req-c2ad2ffe-c987-4c5a-bbb2-af84d6954cf5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] No waiting events found dispatching network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:52:28 np0005558241 nova_compute[248510]: 2025-12-13 08:52:28.087 248514 WARNING nova.compute.manager [req-11de4aaf-f597-4bc0-a90c-853a805c0b16 req-c2ad2ffe-c987-4c5a-bbb2-af84d6954cf5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received unexpected event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce for instance with vm_state active and task_state None.#033[00m
Dec 13 03:52:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2723: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 115 KiB/s wr, 73 op/s
Dec 13 03:52:28 np0005558241 nova_compute[248510]: 2025-12-13 08:52:28.861 248514 DEBUG oslo_concurrency.lockutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "32f46650-28ec-40b8-8cbb-afb8a34cda45" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:52:28 np0005558241 nova_compute[248510]: 2025-12-13 08:52:28.861 248514 DEBUG oslo_concurrency.lockutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:52:28 np0005558241 nova_compute[248510]: 2025-12-13 08:52:28.861 248514 DEBUG oslo_concurrency.lockutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:52:28 np0005558241 nova_compute[248510]: 2025-12-13 08:52:28.862 248514 DEBUG oslo_concurrency.lockutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:52:28 np0005558241 nova_compute[248510]: 2025-12-13 08:52:28.862 248514 DEBUG oslo_concurrency.lockutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:52:28 np0005558241 nova_compute[248510]: 2025-12-13 08:52:28.863 248514 INFO nova.compute.manager [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Terminating instance#033[00m
Dec 13 03:52:28 np0005558241 nova_compute[248510]: 2025-12-13 08:52:28.864 248514 DEBUG nova.compute.manager [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:52:29 np0005558241 kernel: tap4b6e89d0-30 (unregistering): left promiscuous mode
Dec 13 03:52:29 np0005558241 NetworkManager[50376]: <info>  [1765615949.4000] device (tap4b6e89d0-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:52:29 np0005558241 nova_compute[248510]: 2025-12-13 08:52:29.407 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:29Z|01079|binding|INFO|Releasing lport 4b6e89d0-3078-468e-ad37-fa04ed14c96a from this chassis (sb_readonly=0)
Dec 13 03:52:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:29Z|01080|binding|INFO|Setting lport 4b6e89d0-3078-468e-ad37-fa04ed14c96a down in Southbound
Dec 13 03:52:29 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:29Z|01081|binding|INFO|Removing iface tap4b6e89d0-30 ovn-installed in OVS
Dec 13 03:52:29 np0005558241 nova_compute[248510]: 2025-12-13 08:52:29.410 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:29.423 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:e1:26 10.100.0.9'], port_security=['fa:16:3e:c0:e1:26 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '32f46650-28ec-40b8-8cbb-afb8a34cda45', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e742c2df-df4d-48f6-8153-8353ec98fe5c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1c6d054d-d66c-470b-bd1c-b4cabcf90c1c a55bf908-21fd-47cc-b7fd-f8685207b408', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=369f5416-159a-497c-b005-e2677c61c320, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4b6e89d0-3078-468e-ad37-fa04ed14c96a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:52:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:29.424 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4b6e89d0-3078-468e-ad37-fa04ed14c96a in datapath e742c2df-df4d-48f6-8153-8353ec98fe5c unbound from our chassis#033[00m
Dec 13 03:52:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:29.426 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e742c2df-df4d-48f6-8153-8353ec98fe5c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:52:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:29.426 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[48dc9f4d-907b-42f6-819c-dee2a6dab20e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:29.427 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c namespace which is not needed anymore#033[00m
Dec 13 03:52:29 np0005558241 nova_compute[248510]: 2025-12-13 08:52:29.433 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:29 np0005558241 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Dec 13 03:52:29 np0005558241 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006e.scope: Consumed 13.825s CPU time.
Dec 13 03:52:29 np0005558241 systemd-machined[210538]: Machine qemu-136-instance-0000006e terminated.
Dec 13 03:52:29 np0005558241 nova_compute[248510]: 2025-12-13 08:52:29.503 248514 INFO nova.virt.libvirt.driver [-] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Instance destroyed successfully.#033[00m
Dec 13 03:52:29 np0005558241 nova_compute[248510]: 2025-12-13 08:52:29.503 248514 DEBUG nova.objects.instance [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'resources' on Instance uuid 32f46650-28ec-40b8-8cbb-afb8a34cda45 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:52:29 np0005558241 nova_compute[248510]: 2025-12-13 08:52:29.529 248514 DEBUG nova.virt.libvirt.vif [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:51:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-737999557',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-737999557',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=110,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1EAPswQkN6gGnzWb6nWTEPMlUNbetceQGhufBgelanH3kUDSBVad+EWLTxUJKeHTg22cPL3Ixvag9/dm2M/FjTcKf+ix54cOXq9k631rEiL8V03+5FUWKZfsVC6yBvBw==',key_name='tempest-TestSecurityGroupsBasicOps-451827369',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:51:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-lhganq1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:51:58Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=32f46650-28ec-40b8-8cbb-afb8a34cda45,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:52:29 np0005558241 nova_compute[248510]: 2025-12-13 08:52:29.529 248514 DEBUG nova.network.os_vif_util [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:52:29 np0005558241 nova_compute[248510]: 2025-12-13 08:52:29.530 248514 DEBUG nova.network.os_vif_util [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c0:e1:26,bridge_name='br-int',has_traffic_filtering=True,id=4b6e89d0-3078-468e-ad37-fa04ed14c96a,network=Network(e742c2df-df4d-48f6-8153-8353ec98fe5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b6e89d0-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:52:29 np0005558241 nova_compute[248510]: 2025-12-13 08:52:29.530 248514 DEBUG os_vif [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:e1:26,bridge_name='br-int',has_traffic_filtering=True,id=4b6e89d0-3078-468e-ad37-fa04ed14c96a,network=Network(e742c2df-df4d-48f6-8153-8353ec98fe5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b6e89d0-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:52:29 np0005558241 nova_compute[248510]: 2025-12-13 08:52:29.532 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:29 np0005558241 nova_compute[248510]: 2025-12-13 08:52:29.533 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b6e89d0-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:52:29 np0005558241 nova_compute[248510]: 2025-12-13 08:52:29.534 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:29 np0005558241 nova_compute[248510]: 2025-12-13 08:52:29.536 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:29 np0005558241 nova_compute[248510]: 2025-12-13 08:52:29.538 248514 INFO os_vif [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:e1:26,bridge_name='br-int',has_traffic_filtering=True,id=4b6e89d0-3078-468e-ad37-fa04ed14c96a,network=Network(e742c2df-df4d-48f6-8153-8353ec98fe5c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b6e89d0-30')#033[00m
Dec 13 03:52:29 np0005558241 neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c[357946]: [NOTICE]   (357952) : haproxy version is 2.8.14-c23fe91
Dec 13 03:52:29 np0005558241 neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c[357946]: [NOTICE]   (357952) : path to executable is /usr/sbin/haproxy
Dec 13 03:52:29 np0005558241 neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c[357946]: [WARNING]  (357952) : Exiting Master process...
Dec 13 03:52:29 np0005558241 neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c[357946]: [ALERT]    (357952) : Current worker (357954) exited with code 143 (Terminated)
Dec 13 03:52:29 np0005558241 neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c[357946]: [WARNING]  (357952) : All workers exited. Exiting... (0)
Dec 13 03:52:29 np0005558241 systemd[1]: libpod-ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d.scope: Deactivated successfully.
Dec 13 03:52:29 np0005558241 podman[359192]: 2025-12-13 08:52:29.796700602 +0000 UTC m=+0.281142700 container died ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:52:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2724: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 108 KiB/s wr, 86 op/s
Dec 13 03:52:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d-userdata-shm.mount: Deactivated successfully.
Dec 13 03:52:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a82de2f5cdb66864aa371b69f7e8060f09c82097adfafe67c8d328dd59b19987-merged.mount: Deactivated successfully.
Dec 13 03:52:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:52:30 np0005558241 nova_compute[248510]: 2025-12-13 08:52:30.233 248514 DEBUG nova.compute.manager [req-f3e868ad-a54b-454c-be68-668bd7c8690a req-d623e4b4-9849-4ab9-9b3c-9221c505ad26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received event network-changed-4b6e89d0-3078-468e-ad37-fa04ed14c96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:52:30 np0005558241 nova_compute[248510]: 2025-12-13 08:52:30.233 248514 DEBUG nova.compute.manager [req-f3e868ad-a54b-454c-be68-668bd7c8690a req-d623e4b4-9849-4ab9-9b3c-9221c505ad26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Refreshing instance network info cache due to event network-changed-4b6e89d0-3078-468e-ad37-fa04ed14c96a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:52:30 np0005558241 nova_compute[248510]: 2025-12-13 08:52:30.234 248514 DEBUG oslo_concurrency.lockutils [req-f3e868ad-a54b-454c-be68-668bd7c8690a req-d623e4b4-9849-4ab9-9b3c-9221c505ad26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:52:30 np0005558241 nova_compute[248510]: 2025-12-13 08:52:30.234 248514 DEBUG oslo_concurrency.lockutils [req-f3e868ad-a54b-454c-be68-668bd7c8690a req-d623e4b4-9849-4ab9-9b3c-9221c505ad26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:52:30 np0005558241 nova_compute[248510]: 2025-12-13 08:52:30.234 248514 DEBUG nova.network.neutron [req-f3e868ad-a54b-454c-be68-668bd7c8690a req-d623e4b4-9849-4ab9-9b3c-9221c505ad26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Refreshing network info cache for port 4b6e89d0-3078-468e-ad37-fa04ed14c96a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:52:30 np0005558241 podman[359192]: 2025-12-13 08:52:30.488795515 +0000 UTC m=+0.973237623 container cleanup ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 03:52:30 np0005558241 systemd[1]: libpod-conmon-ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d.scope: Deactivated successfully.
Dec 13 03:52:31 np0005558241 podman[359240]: 2025-12-13 08:52:31.852568308 +0000 UTC m=+1.335808823 container remove ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:52:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:31.862 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a27d4422-64a7-441e-b463-9a4bcd234238]: (4, ('Sat Dec 13 08:52:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c (ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d)\nae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d\nSat Dec 13 08:52:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c (ae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d)\nae734f5b06863810c307a45eb803c89579838514ed5d3979661d7c5a4a91183d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:31.864 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[12192075-fb88-4562-9c29-e27bc309ecd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:31.865 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape742c2df-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:52:31 np0005558241 nova_compute[248510]: 2025-12-13 08:52:31.866 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:31 np0005558241 kernel: tape742c2df-d0: left promiscuous mode
Dec 13 03:52:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:31.874 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3585c421-895b-4ed7-b3b3-13803584c960]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:31 np0005558241 nova_compute[248510]: 2025-12-13 08:52:31.890 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:31.898 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7b9c5df7-219e-4dae-8585-ebe665e1fb18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:31.900 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a7daddc7-1837-4368-b494-22eada63437f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:31.916 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d6184e9b-6d5a-4bbf-9e86-622072b7bfa2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 849348, 'reachable_time': 28673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359256, 'error': None, 'target': 'ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:31 np0005558241 systemd[1]: run-netns-ovnmeta\x2de742c2df\x2ddf4d\x2d48f6\x2d8153\x2d8353ec98fe5c.mount: Deactivated successfully.
Dec 13 03:52:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:31.921 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e742c2df-df4d-48f6-8153-8353ec98fe5c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:52:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:31.921 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[dcea2618-c011-4ec7-a438-73db2fcaca06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2725: 321 pgs: 321 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Dec 13 03:52:32 np0005558241 nova_compute[248510]: 2025-12-13 08:52:32.402 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:32 np0005558241 nova_compute[248510]: 2025-12-13 08:52:32.420 248514 DEBUG nova.compute.manager [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received event network-vif-unplugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:52:32 np0005558241 nova_compute[248510]: 2025-12-13 08:52:32.421 248514 DEBUG oslo_concurrency.lockutils [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:52:32 np0005558241 nova_compute[248510]: 2025-12-13 08:52:32.421 248514 DEBUG oslo_concurrency.lockutils [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:52:32 np0005558241 nova_compute[248510]: 2025-12-13 08:52:32.421 248514 DEBUG oslo_concurrency.lockutils [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:52:32 np0005558241 nova_compute[248510]: 2025-12-13 08:52:32.421 248514 DEBUG nova.compute.manager [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] No waiting events found dispatching network-vif-unplugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:52:32 np0005558241 nova_compute[248510]: 2025-12-13 08:52:32.422 248514 DEBUG nova.compute.manager [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received event network-vif-unplugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:52:32 np0005558241 nova_compute[248510]: 2025-12-13 08:52:32.422 248514 DEBUG nova.compute.manager [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received event network-vif-plugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:52:32 np0005558241 nova_compute[248510]: 2025-12-13 08:52:32.422 248514 DEBUG oslo_concurrency.lockutils [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:52:32 np0005558241 nova_compute[248510]: 2025-12-13 08:52:32.423 248514 DEBUG oslo_concurrency.lockutils [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:52:32 np0005558241 nova_compute[248510]: 2025-12-13 08:52:32.424 248514 DEBUG oslo_concurrency.lockutils [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:52:32 np0005558241 nova_compute[248510]: 2025-12-13 08:52:32.424 248514 DEBUG nova.compute.manager [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] No waiting events found dispatching network-vif-plugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:52:32 np0005558241 nova_compute[248510]: 2025-12-13 08:52:32.425 248514 WARNING nova.compute.manager [req-d79937e6-4249-4a93-9ad9-ae020bc923c7 req-b6f94b57-3e6a-4912-9efb-417113177ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received unexpected event network-vif-plugged-4b6e89d0-3078-468e-ad37-fa04ed14c96a for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:52:32 np0005558241 nova_compute[248510]: 2025-12-13 08:52:32.557 248514 DEBUG nova.network.neutron [req-f3e868ad-a54b-454c-be68-668bd7c8690a req-d623e4b4-9849-4ab9-9b3c-9221c505ad26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updated VIF entry in instance network info cache for port 4b6e89d0-3078-468e-ad37-fa04ed14c96a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 03:52:32 np0005558241 nova_compute[248510]: 2025-12-13 08:52:32.558 248514 DEBUG nova.network.neutron [req-f3e868ad-a54b-454c-be68-668bd7c8690a req-d623e4b4-9849-4ab9-9b3c-9221c505ad26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updating instance_info_cache with network_info: [{"id": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "address": "fa:16:3e:c0:e1:26", "network": {"id": "e742c2df-df4d-48f6-8153-8353ec98fe5c", "bridge": "br-int", "label": "tempest-network-smoke--955427403", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b6e89d0-30", "ovs_interfaceid": "4b6e89d0-3078-468e-ad37-fa04ed14c96a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:52:32 np0005558241 nova_compute[248510]: 2025-12-13 08:52:32.691 248514 DEBUG oslo_concurrency.lockutils [req-f3e868ad-a54b-454c-be68-668bd7c8690a req-d623e4b4-9849-4ab9-9b3c-9221c505ad26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-32f46650-28ec-40b8-8cbb-afb8a34cda45" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:52:33 np0005558241 nova_compute[248510]: 2025-12-13 08:52:33.503 248514 DEBUG nova.compute.manager [req-203ea66b-a75d-481a-9603-a2a3fb091234 req-b6597e08-8645-4169-824b-f082da771e0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:52:33 np0005558241 nova_compute[248510]: 2025-12-13 08:52:33.504 248514 DEBUG nova.compute.manager [req-203ea66b-a75d-481a-9603-a2a3fb091234 req-b6597e08-8645-4169-824b-f082da771e0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing instance network info cache due to event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 03:52:33 np0005558241 nova_compute[248510]: 2025-12-13 08:52:33.504 248514 DEBUG oslo_concurrency.lockutils [req-203ea66b-a75d-481a-9603-a2a3fb091234 req-b6597e08-8645-4169-824b-f082da771e0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:52:33 np0005558241 nova_compute[248510]: 2025-12-13 08:52:33.505 248514 DEBUG oslo_concurrency.lockutils [req-203ea66b-a75d-481a-9603-a2a3fb091234 req-b6597e08-8645-4169-824b-f082da771e0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:52:33 np0005558241 nova_compute[248510]: 2025-12-13 08:52:33.505 248514 DEBUG nova.network.neutron [req-203ea66b-a75d-481a-9603-a2a3fb091234 req-b6597e08-8645-4169-824b-f082da771e0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 03:52:33 np0005558241 nova_compute[248510]: 2025-12-13 08:52:33.799 248514 INFO nova.virt.libvirt.driver [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Deleting instance files /var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45_del
Dec 13 03:52:33 np0005558241 nova_compute[248510]: 2025-12-13 08:52:33.801 248514 INFO nova.virt.libvirt.driver [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Deletion of /var/lib/nova/instances/32f46650-28ec-40b8-8cbb-afb8a34cda45_del complete
Dec 13 03:52:33 np0005558241 nova_compute[248510]: 2025-12-13 08:52:33.888 248514 INFO nova.compute.manager [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Took 5.02 seconds to destroy the instance on the hypervisor.
Dec 13 03:52:33 np0005558241 nova_compute[248510]: 2025-12-13 08:52:33.889 248514 DEBUG oslo.service.loopingcall [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 03:52:33 np0005558241 nova_compute[248510]: 2025-12-13 08:52:33.890 248514 DEBUG nova.compute.manager [-] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 03:52:33 np0005558241 nova_compute[248510]: 2025-12-13 08:52:33.890 248514 DEBUG nova.network.neutron [-] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 03:52:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2726: 321 pgs: 321 active+clean; 88 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 100 op/s
Dec 13 03:52:34 np0005558241 nova_compute[248510]: 2025-12-13 08:52:34.537 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:52:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:52:35 np0005558241 nova_compute[248510]: 2025-12-13 08:52:35.294 248514 DEBUG nova.network.neutron [-] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:52:35 np0005558241 nova_compute[248510]: 2025-12-13 08:52:35.324 248514 INFO nova.compute.manager [-] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Took 1.43 seconds to deallocate network for instance.
Dec 13 03:52:35 np0005558241 nova_compute[248510]: 2025-12-13 08:52:35.393 248514 DEBUG oslo_concurrency.lockutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:52:35 np0005558241 nova_compute[248510]: 2025-12-13 08:52:35.394 248514 DEBUG oslo_concurrency.lockutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:52:35 np0005558241 nova_compute[248510]: 2025-12-13 08:52:35.430 248514 DEBUG nova.compute.manager [req-e34162b2-06cf-4fe2-bd67-0488f3e365ce req-a11d7639-6bbf-44e3-af3d-2f9caf6e8ab8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Received event network-vif-deleted-4b6e89d0-3078-468e-ad37-fa04ed14c96a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:52:35 np0005558241 nova_compute[248510]: 2025-12-13 08:52:35.501 248514 DEBUG oslo_concurrency.processutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:52:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:52:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1544682842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:52:36 np0005558241 nova_compute[248510]: 2025-12-13 08:52:36.122 248514 DEBUG oslo_concurrency.processutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:52:36 np0005558241 nova_compute[248510]: 2025-12-13 08:52:36.130 248514 DEBUG nova.compute.provider_tree [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:52:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2727: 321 pgs: 321 active+clean; 88 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 100 op/s
Dec 13 03:52:36 np0005558241 nova_compute[248510]: 2025-12-13 08:52:36.157 248514 DEBUG nova.scheduler.client.report [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:52:36 np0005558241 nova_compute[248510]: 2025-12-13 08:52:36.190 248514 DEBUG oslo_concurrency.lockutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:52:36 np0005558241 nova_compute[248510]: 2025-12-13 08:52:36.217 248514 INFO nova.scheduler.client.report [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Deleted allocations for instance 32f46650-28ec-40b8-8cbb-afb8a34cda45
Dec 13 03:52:36 np0005558241 nova_compute[248510]: 2025-12-13 08:52:36.318 248514 DEBUG nova.network.neutron [req-203ea66b-a75d-481a-9603-a2a3fb091234 req-b6597e08-8645-4169-824b-f082da771e0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updated VIF entry in instance network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 03:52:36 np0005558241 nova_compute[248510]: 2025-12-13 08:52:36.319 248514 DEBUG nova.network.neutron [req-203ea66b-a75d-481a-9603-a2a3fb091234 req-b6597e08-8645-4169-824b-f082da771e0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:52:36 np0005558241 nova_compute[248510]: 2025-12-13 08:52:36.331 248514 DEBUG oslo_concurrency.lockutils [None req-404bd98c-09eb-470c-b5e3-b9c511f4f922 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "32f46650-28ec-40b8-8cbb-afb8a34cda45" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:52:36 np0005558241 nova_compute[248510]: 2025-12-13 08:52:36.345 248514 DEBUG oslo_concurrency.lockutils [req-203ea66b-a75d-481a-9603-a2a3fb091234 req-b6597e08-8645-4169-824b-f082da771e0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:52:36 np0005558241 nova_compute[248510]: 2025-12-13 08:52:36.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:52:37 np0005558241 nova_compute[248510]: 2025-12-13 08:52:37.448 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:52:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2728: 321 pgs: 321 active+clean; 88 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 105 op/s
Dec 13 03:52:39 np0005558241 nova_compute[248510]: 2025-12-13 08:52:39.540 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:52:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:52:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:52:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:52:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:52:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:52:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:52:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2729: 321 pgs: 321 active+clean; 88 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 696 KiB/s rd, 14 KiB/s wr, 60 op/s
Dec 13 03:52:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:52:41 np0005558241 nova_compute[248510]: 2025-12-13 08:52:41.112 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:52:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:41Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:b7:87 10.100.0.4
Dec 13 03:52:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:41Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:b7:87 10.100.0.4
Dec 13 03:52:41 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:41Z|01082|binding|INFO|Releasing lport 401abfe8-06be-4b57-8432-310dcd747a81 from this chassis (sb_readonly=0)
Dec 13 03:52:41 np0005558241 nova_compute[248510]: 2025-12-13 08:52:41.798 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:52:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2730: 321 pgs: 321 active+clean; 88 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 14 KiB/s wr, 36 op/s
Dec 13 03:52:42 np0005558241 nova_compute[248510]: 2025-12-13 08:52:42.451 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:52:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2731: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 278 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Dec 13 03:52:44 np0005558241 nova_compute[248510]: 2025-12-13 08:52:44.499 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615949.4979377, 32f46650-28ec-40b8-8cbb-afb8a34cda45 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:52:44 np0005558241 nova_compute[248510]: 2025-12-13 08:52:44.500 248514 INFO nova.compute.manager [-] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] VM Stopped (Lifecycle Event)
Dec 13 03:52:44 np0005558241 nova_compute[248510]: 2025-12-13 08:52:44.525 248514 DEBUG nova.compute.manager [None req-db214919-404c-4918-a1b5-8a8f72fd62aa - - - - - -] [instance: 32f46650-28ec-40b8-8cbb-afb8a34cda45] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:52:44 np0005558241 nova_compute[248510]: 2025-12-13 08:52:44.544 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:52:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:52:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2732: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 13 03:52:47 np0005558241 nova_compute[248510]: 2025-12-13 08:52:47.453 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:52:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2733: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 267 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 03:52:49 np0005558241 nova_compute[248510]: 2025-12-13 08:52:49.302 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:52:49 np0005558241 nova_compute[248510]: 2025-12-13 08:52:49.546 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:52:50 np0005558241 nova_compute[248510]: 2025-12-13 08:52:50.051 248514 DEBUG oslo_concurrency.lockutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:52:50 np0005558241 nova_compute[248510]: 2025-12-13 08:52:50.052 248514 DEBUG oslo_concurrency.lockutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:52:50 np0005558241 nova_compute[248510]: 2025-12-13 08:52:50.052 248514 INFO nova.compute.manager [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Shelving
Dec 13 03:52:50 np0005558241 nova_compute[248510]: 2025-12-13 08:52:50.075 248514 DEBUG nova.virt.libvirt.driver [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 13 03:52:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2734: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Dec 13 03:52:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:52:50 np0005558241 nova_compute[248510]: 2025-12-13 08:52:50.405 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:52:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2735: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 238 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Dec 13 03:52:52 np0005558241 kernel: tapb2143648-4c (unregistering): left promiscuous mode
Dec 13 03:52:52 np0005558241 NetworkManager[50376]: <info>  [1765615972.3476] device (tapb2143648-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:52:52 np0005558241 nova_compute[248510]: 2025-12-13 08:52:52.357 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:52:52 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:52Z|01083|binding|INFO|Releasing lport b2143648-4c23-49b5-8777-433a5b34c7ce from this chassis (sb_readonly=0)
Dec 13 03:52:52 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:52Z|01084|binding|INFO|Setting lport b2143648-4c23-49b5-8777-433a5b34c7ce down in Southbound
Dec 13 03:52:52 np0005558241 ovn_controller[148476]: 2025-12-13T08:52:52Z|01085|binding|INFO|Removing iface tapb2143648-4c ovn-installed in OVS
Dec 13 03:52:52 np0005558241 nova_compute[248510]: 2025-12-13 08:52:52.360 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:52:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.365 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:b7:87 10.100.0.4'], port_security=['fa:16:3e:02:b7:87 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fcc617ec-f5f9-41bb-ad4b-86d790622e74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db56c55-78f1-455f-855e-db3acef05ff3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff4d2c6ad4dc4848ac9f55ff1b9e829a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9bebfabc-b3c7-415b-9881-5a1028b3b8d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=efdd787e-21fc-422f-be64-57ef7368490d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b2143648-4c23-49b5-8777-433a5b34c7ce) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 03:52:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.367 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b2143648-4c23-49b5-8777-433a5b34c7ce in datapath 6db56c55-78f1-455f-855e-db3acef05ff3 unbound from our chassis
Dec 13 03:52:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.368 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db56c55-78f1-455f-855e-db3acef05ff3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 03:52:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.369 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[61a9ed37-94fc-4092-a325-f1d1aa2c0e7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:52:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.369 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 namespace which is not needed anymore
Dec 13 03:52:52 np0005558241 nova_compute[248510]: 2025-12-13 08:52:52.377 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:52:52 np0005558241 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Dec 13 03:52:52 np0005558241 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006f.scope: Consumed 14.061s CPU time.
Dec 13 03:52:52 np0005558241 systemd-machined[210538]: Machine qemu-137-instance-0000006f terminated.
Dec 13 03:52:52 np0005558241 nova_compute[248510]: 2025-12-13 08:52:52.455 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:52 np0005558241 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[359133]: [NOTICE]   (359147) : haproxy version is 2.8.14-c23fe91
Dec 13 03:52:52 np0005558241 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[359133]: [NOTICE]   (359147) : path to executable is /usr/sbin/haproxy
Dec 13 03:52:52 np0005558241 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[359133]: [WARNING]  (359147) : Exiting Master process...
Dec 13 03:52:52 np0005558241 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[359133]: [ALERT]    (359147) : Current worker (359149) exited with code 143 (Terminated)
Dec 13 03:52:52 np0005558241 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[359133]: [WARNING]  (359147) : All workers exited. Exiting... (0)
Dec 13 03:52:52 np0005558241 systemd[1]: libpod-b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06.scope: Deactivated successfully.
Dec 13 03:52:52 np0005558241 podman[359305]: 2025-12-13 08:52:52.519311708 +0000 UTC m=+0.049950223 container died b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 03:52:52 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06-userdata-shm.mount: Deactivated successfully.
Dec 13 03:52:52 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1d101a5c71da0f833ee85e3864334a8ddce618ff29ffdbe45b70d5c22bb3233b-merged.mount: Deactivated successfully.
Dec 13 03:52:52 np0005558241 podman[359305]: 2025-12-13 08:52:52.585481717 +0000 UTC m=+0.116120242 container cleanup b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Dec 13 03:52:52 np0005558241 systemd[1]: libpod-conmon-b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06.scope: Deactivated successfully.
Dec 13 03:52:52 np0005558241 podman[359343]: 2025-12-13 08:52:52.664528849 +0000 UTC m=+0.053907163 container remove b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 03:52:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.671 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c0bcf982-b677-4e53-9b35-e4f0fc2651b1]: (4, ('Sat Dec 13 08:52:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 (b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06)\nb21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06\nSat Dec 13 08:52:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 (b21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06)\nb21f11c5b9bda35b0d1c224506c5f6313ae4e966cee285db4a29afd45022ec06\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:52:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.672 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2b086b53-03f3-45ed-a07f-81330416bfee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:52:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.674 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6db56c55-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:52:52 np0005558241 nova_compute[248510]: 2025-12-13 08:52:52.697 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:52:52 np0005558241 kernel: tap6db56c55-70: left promiscuous mode
Dec 13 03:52:52 np0005558241 nova_compute[248510]: 2025-12-13 08:52:52.714 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.717 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5a3e0992-de1b-4fb8-8574-9002884763bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.738 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6b265c1d-e107-4629-81d3-80c30fb53da0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.739 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d37cadd0-a076-4777-8ee9-fd052c21deb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.758 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0c56412f-ae1d-4d3f-9c0c-94d7a57aabbe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 852181, 'reachable_time': 29470, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359365, 'error': None, 'target': 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.761 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:52:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:52.761 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[7eab78b6-99e7-4104-b716-db935eb15266]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:52:52 np0005558241 systemd[1]: run-netns-ovnmeta\x2d6db56c55\x2d78f1\x2d455f\x2d855e\x2ddb3acef05ff3.mount: Deactivated successfully.
Dec 13 03:52:53 np0005558241 nova_compute[248510]: 2025-12-13 08:52:53.093 248514 INFO nova.virt.libvirt.driver [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance shutdown successfully after 3 seconds.#033[00m
Dec 13 03:52:53 np0005558241 nova_compute[248510]: 2025-12-13 08:52:53.098 248514 INFO nova.virt.libvirt.driver [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance destroyed successfully.#033[00m
Dec 13 03:52:53 np0005558241 nova_compute[248510]: 2025-12-13 08:52:53.099 248514 DEBUG nova.objects.instance [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'numa_topology' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:52:53 np0005558241 nova_compute[248510]: 2025-12-13 08:52:53.427 248514 INFO nova.virt.libvirt.driver [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Beginning cold snapshot process#033[00m
Dec 13 03:52:54 np0005558241 nova_compute[248510]: 2025-12-13 08:52:54.102 248514 DEBUG nova.virt.libvirt.imagebackend [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec 13 03:52:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2736: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 241 KiB/s rd, 2.2 MiB/s wr, 58 op/s
Dec 13 03:52:54 np0005558241 nova_compute[248510]: 2025-12-13 08:52:54.323 248514 DEBUG nova.storage.rbd_utils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] creating snapshot(ef9ff3d602fb4ef884bff796ec97a142) on rbd image(fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:52:54 np0005558241 nova_compute[248510]: 2025-12-13 08:52:54.550 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Dec 13 03:52:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Dec 13 03:52:54 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Dec 13 03:52:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:52:55 np0005558241 nova_compute[248510]: 2025-12-13 08:52:55.407 248514 DEBUG nova.storage.rbd_utils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] cloning vms/fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk@ef9ff3d602fb4ef884bff796ec97a142 to images/e80a280c-5146-4d78-99c1-0d3591de049e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:52:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:55.431 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:52:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:55.431 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:52:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:52:55.431 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:52:55 np0005558241 nova_compute[248510]: 2025-12-13 08:52:55.701 248514 DEBUG nova.storage.rbd_utils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] flattening images/e80a280c-5146-4d78-99c1-0d3591de049e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec 13 03:52:55 np0005558241 nova_compute[248510]: 2025-12-13 08:52:55.818 248514 DEBUG nova.compute.manager [req-f8ce9988-fa63-4974-b649-c4f9a08f8eb2 req-ad4305a3-5183-42fc-9201-75cf5c3d59d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-unplugged-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:52:55 np0005558241 nova_compute[248510]: 2025-12-13 08:52:55.818 248514 DEBUG oslo_concurrency.lockutils [req-f8ce9988-fa63-4974-b649-c4f9a08f8eb2 req-ad4305a3-5183-42fc-9201-75cf5c3d59d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:52:55 np0005558241 nova_compute[248510]: 2025-12-13 08:52:55.818 248514 DEBUG oslo_concurrency.lockutils [req-f8ce9988-fa63-4974-b649-c4f9a08f8eb2 req-ad4305a3-5183-42fc-9201-75cf5c3d59d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:52:55 np0005558241 nova_compute[248510]: 2025-12-13 08:52:55.819 248514 DEBUG oslo_concurrency.lockutils [req-f8ce9988-fa63-4974-b649-c4f9a08f8eb2 req-ad4305a3-5183-42fc-9201-75cf5c3d59d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:52:55 np0005558241 nova_compute[248510]: 2025-12-13 08:52:55.819 248514 DEBUG nova.compute.manager [req-f8ce9988-fa63-4974-b649-c4f9a08f8eb2 req-ad4305a3-5183-42fc-9201-75cf5c3d59d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] No waiting events found dispatching network-vif-unplugged-b2143648-4c23-49b5-8777-433a5b34c7ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:52:55 np0005558241 nova_compute[248510]: 2025-12-13 08:52:55.819 248514 WARNING nova.compute.manager [req-f8ce9988-fa63-4974-b649-c4f9a08f8eb2 req-ad4305a3-5183-42fc-9201-75cf5c3d59d6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received unexpected event network-vif-unplugged-b2143648-4c23-49b5-8777-433a5b34c7ce for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Dec 13 03:52:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2738: 321 pgs: 321 active+clean; 121 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 56 KiB/s wr, 9 op/s
Dec 13 03:52:56 np0005558241 nova_compute[248510]: 2025-12-13 08:52:56.684 248514 DEBUG nova.storage.rbd_utils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] removing snapshot(ef9ff3d602fb4ef884bff796ec97a142) on rbd image(fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec 13 03:52:56 np0005558241 podman[359491]: 2025-12-13 08:52:56.962041509 +0000 UTC m=+0.052865647 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:52:56 np0005558241 podman[359490]: 2025-12-13 08:52:56.967115086 +0000 UTC m=+0.060425326 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 03:52:56 np0005558241 podman[359489]: 2025-12-13 08:52:56.991924808 +0000 UTC m=+0.086378977 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:52:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Dec 13 03:52:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Dec 13 03:52:57 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Dec 13 03:52:57 np0005558241 nova_compute[248510]: 2025-12-13 08:52:57.458 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:57 np0005558241 nova_compute[248510]: 2025-12-13 08:52:57.578 248514 DEBUG nova.storage.rbd_utils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] creating snapshot(snap) on rbd image(e80a280c-5146-4d78-99c1-0d3591de049e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:52:57 np0005558241 nova_compute[248510]: 2025-12-13 08:52:57.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:52:58 np0005558241 nova_compute[248510]: 2025-12-13 08:52:58.110 248514 DEBUG nova.compute.manager [req-7ff21b7c-2121-4bb9-b421-8721351dbc34 req-29ffaadc-1dd8-4fcc-9f80-6b40d9fc7927 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:52:58 np0005558241 nova_compute[248510]: 2025-12-13 08:52:58.110 248514 DEBUG oslo_concurrency.lockutils [req-7ff21b7c-2121-4bb9-b421-8721351dbc34 req-29ffaadc-1dd8-4fcc-9f80-6b40d9fc7927 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:52:58 np0005558241 nova_compute[248510]: 2025-12-13 08:52:58.110 248514 DEBUG oslo_concurrency.lockutils [req-7ff21b7c-2121-4bb9-b421-8721351dbc34 req-29ffaadc-1dd8-4fcc-9f80-6b40d9fc7927 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:52:58 np0005558241 nova_compute[248510]: 2025-12-13 08:52:58.111 248514 DEBUG oslo_concurrency.lockutils [req-7ff21b7c-2121-4bb9-b421-8721351dbc34 req-29ffaadc-1dd8-4fcc-9f80-6b40d9fc7927 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:52:58 np0005558241 nova_compute[248510]: 2025-12-13 08:52:58.111 248514 DEBUG nova.compute.manager [req-7ff21b7c-2121-4bb9-b421-8721351dbc34 req-29ffaadc-1dd8-4fcc-9f80-6b40d9fc7927 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] No waiting events found dispatching network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:52:58 np0005558241 nova_compute[248510]: 2025-12-13 08:52:58.111 248514 WARNING nova.compute.manager [req-7ff21b7c-2121-4bb9-b421-8721351dbc34 req-29ffaadc-1dd8-4fcc-9f80-6b40d9fc7927 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received unexpected event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Dec 13 03:52:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2740: 321 pgs: 321 active+clean; 135 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 681 KiB/s wr, 43 op/s
Dec 13 03:52:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Dec 13 03:52:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Dec 13 03:52:58 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Dec 13 03:52:58 np0005558241 nova_compute[248510]: 2025-12-13 08:52:58.447 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:52:59 np0005558241 nova_compute[248510]: 2025-12-13 08:52:59.552 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2742: 321 pgs: 321 active+clean; 200 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 150 op/s
Dec 13 03:53:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:53:01 np0005558241 nova_compute[248510]: 2025-12-13 08:53:01.070 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "af2dc023-560c-4c66-b330-e41218a7a4eb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:01 np0005558241 nova_compute[248510]: 2025-12-13 08:53:01.070 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:01 np0005558241 nova_compute[248510]: 2025-12-13 08:53:01.094 248514 DEBUG nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:53:01 np0005558241 nova_compute[248510]: 2025-12-13 08:53:01.214 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:01 np0005558241 nova_compute[248510]: 2025-12-13 08:53:01.215 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:01 np0005558241 nova_compute[248510]: 2025-12-13 08:53:01.225 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:53:01 np0005558241 nova_compute[248510]: 2025-12-13 08:53:01.225 248514 INFO nova.compute.claims [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:53:01 np0005558241 nova_compute[248510]: 2025-12-13 08:53:01.444 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:01 np0005558241 nova_compute[248510]: 2025-12-13 08:53:01.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:53:01 np0005558241 nova_compute[248510]: 2025-12-13 08:53:01.810 248514 INFO nova.virt.libvirt.driver [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Snapshot image upload complete#033[00m
Dec 13 03:53:01 np0005558241 nova_compute[248510]: 2025-12-13 08:53:01.811 248514 DEBUG nova.compute.manager [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:01 np0005558241 nova_compute[248510]: 2025-12-13 08:53:01.865 248514 INFO nova.compute.manager [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Shelve offloading#033[00m
Dec 13 03:53:01 np0005558241 nova_compute[248510]: 2025-12-13 08:53:01.875 248514 INFO nova.virt.libvirt.driver [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance destroyed successfully.
Dec 13 03:53:01 np0005558241 nova_compute[248510]: 2025-12-13 08:53:01.876 248514 DEBUG nova.compute.manager [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:53:01 np0005558241 nova_compute[248510]: 2025-12-13 08:53:01.879 248514 DEBUG oslo_concurrency.lockutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:53:01 np0005558241 nova_compute[248510]: 2025-12-13 08:53:01.880 248514 DEBUG oslo_concurrency.lockutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquired lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:53:01 np0005558241 nova_compute[248510]: 2025-12-13 08:53:01.880 248514 DEBUG nova.network.neutron [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:53:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:53:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4039086334' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.051 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.057 248514 DEBUG nova.compute.provider_tree [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.077 248514 DEBUG nova.scheduler.client.report [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.106 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.107 248514 DEBUG nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:53:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2743: 321 pgs: 321 active+clean; 200 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 6.4 MiB/s wr, 123 op/s
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.183 248514 DEBUG nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.184 248514 DEBUG nova.network.neutron [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.210 248514 INFO nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.237 248514 DEBUG nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.399 248514 DEBUG nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.400 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.400 248514 INFO nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Creating image(s)
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.539 248514 DEBUG nova.storage.rbd_utils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image af2dc023-560c-4c66-b330-e41218a7a4eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.560 248514 DEBUG nova.storage.rbd_utils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image af2dc023-560c-4c66-b330-e41218a7a4eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.586 248514 DEBUG nova.storage.rbd_utils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image af2dc023-560c-4c66-b330-e41218a7a4eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.589 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.636 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.640 248514 DEBUG nova.policy [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.674 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.675 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.675 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.676 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.701 248514 DEBUG nova.storage.rbd_utils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image af2dc023-560c-4c66-b330-e41218a7a4eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:53:02 np0005558241 nova_compute[248510]: 2025-12-13 08:53:02.705 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 af2dc023-560c-4c66-b330-e41218a7a4eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:53:03 np0005558241 nova_compute[248510]: 2025-12-13 08:53:03.763 248514 DEBUG nova.network.neutron [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:53:03 np0005558241 nova_compute[248510]: 2025-12-13 08:53:03.820 248514 DEBUG oslo_concurrency.lockutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Releasing lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:53:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2744: 321 pgs: 321 active+clean; 209 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 6.1 MiB/s wr, 135 op/s
Dec 13 03:53:04 np0005558241 nova_compute[248510]: 2025-12-13 08:53:04.434 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 af2dc023-560c-4c66-b330-e41218a7a4eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.729s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:53:04 np0005558241 nova_compute[248510]: 2025-12-13 08:53:04.493 248514 DEBUG nova.network.neutron [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Successfully created port: 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 03:53:04 np0005558241 nova_compute[248510]: 2025-12-13 08:53:04.498 248514 DEBUG nova.storage.rbd_utils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image af2dc023-560c-4c66-b330-e41218a7a4eb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:53:04 np0005558241 nova_compute[248510]: 2025-12-13 08:53:04.556 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:53:04 np0005558241 nova_compute[248510]: 2025-12-13 08:53:04.746 248514 DEBUG nova.objects.instance [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid af2dc023-560c-4c66-b330-e41218a7a4eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:53:04 np0005558241 nova_compute[248510]: 2025-12-13 08:53:04.797 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:53:04 np0005558241 nova_compute[248510]: 2025-12-13 08:53:04.798 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Ensure instance console log exists: /var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:53:04 np0005558241 nova_compute[248510]: 2025-12-13 08:53:04.799 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:53:04 np0005558241 nova_compute[248510]: 2025-12-13 08:53:04.799 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:53:04 np0005558241 nova_compute[248510]: 2025-12-13 08:53:04.799 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:53:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:53:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Dec 13 03:53:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Dec 13 03:53:05 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.448 248514 INFO nova.virt.libvirt.driver [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance destroyed successfully.
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.449 248514 DEBUG nova.objects.instance [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'resources' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.466 248514 DEBUG nova.virt.libvirt.vif [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:52:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1747506169',display_name='tempest-TestShelveInstance-server-1747506169',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1747506169',id=111,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAebkVWpSolmv9rvqo0lcOY35Sse8ZKcOxj7o5K69+ccF5rX0zOaHwOkKpIPYjoJDZ0lGFUDC6z1VdzfkWYgp+l6o2jOhZfRbhSP9Loovk747mSM12eSN6XwaL50YLDn8Q==',key_name='tempest-TestShelveInstance-1284595260',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:52:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ff4d2c6ad4dc4848ac9f55ff1b9e829a',ramdisk_id='',reservation_id='r-wfxenjt6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-2105398574',owner_user_name='tempest-TestShelveInstance-2105398574-project-member',shelved_at='2025-12-13T08:53:01.811270',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='e80a280c-5146-4d78-99c1-0d3591de049e'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:52:53Z,user_data=None,user_id='fa34623cd3de4a47aa57959f09b3ff79',uuid=fcc617ec-f5f9-41bb-ad4b-86d790622e74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.467 248514 DEBUG nova.network.os_vif_util [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converting VIF {"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.468 248514 DEBUG nova.network.os_vif_util [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.470 248514 DEBUG os_vif [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.472 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.472 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2143648-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.474 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.476 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.479 248514 INFO os_vif [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c')
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.573 248514 DEBUG nova.compute.manager [req-3f6d4ba3-0c9b-4b72-a59c-7a4f85a5b7f2 req-c04952e0-e16c-42d1-ba77-4b584885f5f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.574 248514 DEBUG nova.compute.manager [req-3f6d4ba3-0c9b-4b72-a59c-7a4f85a5b7f2 req-c04952e0-e16c-42d1-ba77-4b584885f5f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing instance network info cache due to event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.574 248514 DEBUG oslo_concurrency.lockutils [req-3f6d4ba3-0c9b-4b72-a59c-7a4f85a5b7f2 req-c04952e0-e16c-42d1-ba77-4b584885f5f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.574 248514 DEBUG oslo_concurrency.lockutils [req-3f6d4ba3-0c9b-4b72-a59c-7a4f85a5b7f2 req-c04952e0-e16c-42d1-ba77-4b584885f5f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.575 248514 DEBUG nova.network.neutron [req-3f6d4ba3-0c9b-4b72-a59c-7a4f85a5b7f2 req-c04952e0-e16c-42d1-ba77-4b584885f5f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.850 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.851 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.874 248514 DEBUG nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.974 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.974 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.983 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:53:05 np0005558241 nova_compute[248510]: 2025-12-13 08:53:05.984 248514 INFO nova.compute.claims [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:53:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2746: 321 pgs: 321 active+clean; 209 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.5 MiB/s wr, 98 op/s
Dec 13 03:53:06 np0005558241 nova_compute[248510]: 2025-12-13 08:53:06.171 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:53:06 np0005558241 nova_compute[248510]: 2025-12-13 08:53:06.217 248514 INFO nova.virt.libvirt.driver [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Deleting instance files /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74_del
Dec 13 03:53:06 np0005558241 nova_compute[248510]: 2025-12-13 08:53:06.218 248514 INFO nova.virt.libvirt.driver [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Deletion of /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74_del complete
Dec 13 03:53:06 np0005558241 nova_compute[248510]: 2025-12-13 08:53:06.332 248514 INFO nova.scheduler.client.report [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Deleted allocations for instance fcc617ec-f5f9-41bb-ad4b-86d790622e74
Dec 13 03:53:06 np0005558241 nova_compute[248510]: 2025-12-13 08:53:06.414 248514 DEBUG oslo_concurrency.lockutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:53:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:53:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3111711744' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:53:06 np0005558241 nova_compute[248510]: 2025-12-13 08:53:06.871 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.701s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:53:06 np0005558241 nova_compute[248510]: 2025-12-13 08:53:06.878 248514 DEBUG nova.compute.provider_tree [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:53:06 np0005558241 nova_compute[248510]: 2025-12-13 08:53:06.898 248514 DEBUG nova.scheduler.client.report [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:53:06 np0005558241 nova_compute[248510]: 2025-12-13 08:53:06.923 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:53:06 np0005558241 nova_compute[248510]: 2025-12-13 08:53:06.924 248514 DEBUG nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:53:06 np0005558241 nova_compute[248510]: 2025-12-13 08:53:06.928 248514 DEBUG oslo_concurrency.lockutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.515s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:53:06 np0005558241 nova_compute[248510]: 2025-12-13 08:53:06.984 248514 DEBUG nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:53:06 np0005558241 nova_compute[248510]: 2025-12-13 08:53:06.985 248514 DEBUG nova.network.neutron [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.001 248514 DEBUG oslo_concurrency.processutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.051 248514 INFO nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.093 248514 DEBUG nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.225 248514 DEBUG nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.228 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.228 248514 INFO nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Creating image(s)
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.257 248514 DEBUG nova.storage.rbd_utils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.283 248514 DEBUG nova.storage.rbd_utils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.307 248514 DEBUG nova.storage.rbd_utils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.312 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.369 248514 DEBUG nova.network.neutron [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Successfully updated port: 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.373 248514 DEBUG nova.policy [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c377508dda354c0aa762d15d52aa130c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.400 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.400 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.401 248514 DEBUG nova.network.neutron [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.422 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.423 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.423 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.424 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.442 248514 DEBUG nova.storage.rbd_utils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.445 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.491 248514 DEBUG nova.compute.manager [req-ba935414-9593-4266-82f2-2d872bf410c3 req-565847ea-4e2e-496c-8759-6ccf2828daf2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received event network-changed-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.491 248514 DEBUG nova.compute.manager [req-ba935414-9593-4266-82f2-2d872bf410c3 req-565847ea-4e2e-496c-8759-6ccf2828daf2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Refreshing instance network info cache due to event network-changed-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.491 248514 DEBUG oslo_concurrency.lockutils [req-ba935414-9593-4266-82f2-2d872bf410c3 req-565847ea-4e2e-496c-8759-6ccf2828daf2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.592 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765615972.5882154, fcc617ec-f5f9-41bb-ad4b-86d790622e74 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.592 248514 INFO nova.compute.manager [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] VM Stopped (Lifecycle Event)
Dec 13 03:53:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:53:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3872345773' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.617 248514 DEBUG nova.compute.manager [None req-b82a58b8-d8e1-427b-a513-05b6739c36f7 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.626 248514 DEBUG oslo_concurrency.processutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.632 248514 DEBUG nova.compute.provider_tree [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.669 248514 DEBUG nova.scheduler.client.report [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.709 248514 DEBUG nova.network.neutron [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.712 248514 DEBUG oslo_concurrency.lockutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.765 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.791 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.791 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.791 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.832 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.832 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.832 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.834 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.836 248514 DEBUG oslo_concurrency.lockutils [None req-6e2bdb8f-9927-4e63-9461-13b18ad6ec06 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 17.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.840 248514 DEBUG nova.storage.rbd_utils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] resizing rbd image 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.926 248514 DEBUG nova.objects.instance [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'migration_context' on Instance uuid 2d2a33c7-0a90-4b64-b291-b268d37dce5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.955 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.955 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Ensure instance console log exists: /var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.956 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.956 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:53:07 np0005558241 nova_compute[248510]: 2025-12-13 08:53:07.956 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:53:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2747: 321 pgs: 321 active+clean; 194 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 MiB/s wr, 132 op/s
Dec 13 03:53:08 np0005558241 nova_compute[248510]: 2025-12-13 08:53:08.346 248514 DEBUG nova.network.neutron [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Successfully created port: 1babd66f-ec6a-4702-8a8f-839d32ba8761 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 03:53:08 np0005558241 nova_compute[248510]: 2025-12-13 08:53:08.623 248514 DEBUG nova.network.neutron [req-3f6d4ba3-0c9b-4b72-a59c-7a4f85a5b7f2 req-c04952e0-e16c-42d1-ba77-4b584885f5f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updated VIF entry in instance network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 03:53:08 np0005558241 nova_compute[248510]: 2025-12-13 08:53:08.623 248514 DEBUG nova.network.neutron [req-3f6d4ba3-0c9b-4b72-a59c-7a4f85a5b7f2 req-c04952e0-e16c-42d1-ba77-4b584885f5f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": null, "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapb2143648-4c", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:53:08 np0005558241 nova_compute[248510]: 2025-12-13 08:53:08.646 248514 DEBUG oslo_concurrency.lockutils [req-3f6d4ba3-0c9b-4b72-a59c-7a4f85a5b7f2 req-c04952e0-e16c-42d1-ba77-4b584885f5f5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.193739) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615989193864, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 1357, "num_deletes": 252, "total_data_size": 2146655, "memory_usage": 2187696, "flush_reason": "Manual Compaction"}
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.214 248514 DEBUG nova.network.neutron [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Successfully updated port: 1babd66f-ec6a-4702-8a8f-839d32ba8761 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615989218182, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 2096873, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53205, "largest_seqno": 54561, "table_properties": {"data_size": 2090460, "index_size": 3616, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13719, "raw_average_key_size": 20, "raw_value_size": 2077476, "raw_average_value_size": 3041, "num_data_blocks": 162, "num_entries": 683, "num_filter_entries": 683, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615860, "oldest_key_time": 1765615860, "file_creation_time": 1765615989, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 24489 microseconds, and 5351 cpu microseconds.
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.218232) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 2096873 bytes OK
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.218254) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.226045) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.226113) EVENT_LOG_v1 {"time_micros": 1765615989226104, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.226139) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 2140576, prev total WAL file size 2140576, number of live WAL files 2.
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.227254) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(2047KB)], [125(8676KB)]
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615989227376, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 10981695, "oldest_snapshot_seqno": -1}
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.235 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.235 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquired lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.235 248514 DEBUG nova.network.neutron [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.282 248514 DEBUG nova.network.neutron [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Updating instance_info_cache with network_info: [{"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:53:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:53:09
Dec 13 03:53:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:53:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:53:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'default.rgw.control', 'vms']
Dec 13 03:53:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.313 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.313 248514 DEBUG nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Instance network_info: |[{"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.314 248514 DEBUG oslo_concurrency.lockutils [req-ba935414-9593-4266-82f2-2d872bf410c3 req-565847ea-4e2e-496c-8759-6ccf2828daf2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.314 248514 DEBUG nova.network.neutron [req-ba935414-9593-4266-82f2-2d872bf410c3 req-565847ea-4e2e-496c-8759-6ccf2828daf2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Refreshing network info cache for port 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.317 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Start _get_guest_xml network_info=[{"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.321 248514 DEBUG nova.compute.manager [req-e193f05f-9f72-4286-ba76-41766b4474ea req-afff6b18-b9da-45aa-848c-07d3fafb131e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received event network-changed-1babd66f-ec6a-4702-8a8f-839d32ba8761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.321 248514 DEBUG nova.compute.manager [req-e193f05f-9f72-4286-ba76-41766b4474ea req-afff6b18-b9da-45aa-848c-07d3fafb131e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Refreshing instance network info cache due to event network-changed-1babd66f-ec6a-4702-8a8f-839d32ba8761. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.321 248514 DEBUG oslo_concurrency.lockutils [req-e193f05f-9f72-4286-ba76-41766b4474ea req-afff6b18-b9da-45aa-848c-07d3fafb131e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.323 248514 WARNING nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.329 248514 DEBUG nova.virt.libvirt.host [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.329 248514 DEBUG nova.virt.libvirt.host [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.333 248514 DEBUG nova.virt.libvirt.host [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.334 248514 DEBUG nova.virt.libvirt.host [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.334 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.334 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.335 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.335 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.335 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.335 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.336 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.336 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.336 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.336 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.337 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.337 248514 DEBUG nova.virt.hardware [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7414 keys, 9200330 bytes, temperature: kUnknown
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615989338246, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 9200330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9152687, "index_size": 28010, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18565, "raw_key_size": 194409, "raw_average_key_size": 26, "raw_value_size": 9021865, "raw_average_value_size": 1216, "num_data_blocks": 1086, "num_entries": 7414, "num_filter_entries": 7414, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765615989, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.341 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.338459) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 9200330 bytes
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.344558) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 99.2 rd, 83.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 8.5 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(9.6) write-amplify(4.4) OK, records in: 7934, records dropped: 520 output_compression: NoCompression
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.344591) EVENT_LOG_v1 {"time_micros": 1765615989344578, "job": 76, "event": "compaction_finished", "compaction_time_micros": 110722, "compaction_time_cpu_micros": 48275, "output_level": 6, "num_output_files": 1, "total_output_size": 9200330, "num_input_records": 7934, "num_output_records": 7414, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615989345121, "job": 76, "event": "table_file_deletion", "file_number": 127}
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765615989346655, "job": 76, "event": "table_file_deletion", "file_number": 125}
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.226996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.346693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.346697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.346699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.346700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:53:09.346702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.511 248514 DEBUG nova.network.neutron [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:53:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:09.646 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2 2001:db8::f816:3eff:fea4:3f9d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a97dbe15-88c4-44cf-8dbe-4f1cf8c59d01, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cad8a42-b0dd-468b-aee7-cbceaefdb18d) old=Port_Binding(mac=['fa:16:3e:a4:3f:9d 2001:db8::f816:3eff:fea4:3f9d'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:53:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:09.648 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cad8a42-b0dd-468b-aee7-cbceaefdb18d in datapath a100b8b9-7333-44af-8310-9d269980caa2 updated#033[00m
Dec 13 03:53:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:09.650 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a100b8b9-7333-44af-8310-9d269980caa2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:53:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:09.651 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb37c20-6473-4a5e-b099-2bec28f618cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:53:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2912515147' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.928 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.951 248514 DEBUG nova.storage.rbd_utils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image af2dc023-560c-4c66-b330-e41218a7a4eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:09 np0005558241 nova_compute[248510]: 2025-12-13 08:53:09.956 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:53:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:53:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:53:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:53:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:53:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:53:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2748: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 4.3 MiB/s wr, 110 op/s
Dec 13 03:53:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.474 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:53:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/862715630' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.561 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.564 248514 DEBUG nova.virt.libvirt.vif [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:52:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-673770696',display_name='tempest-TestNetworkBasicOps-server-673770696',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-673770696',id=112,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFWWKdP0apeEEX6KLq89U2vRGSHeV3KAUwR7F/v8SOdmJ9w4un8uAKW6W1VsXiUAnc8fLGuX3ip0yk759e6Z6EnqMVZe+COaAk19ulIyzOUeifphXpMKMaa2a+4orpaKaw==',key_name='tempest-TestNetworkBasicOps-724159602',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-ahio6eh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:02Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=af2dc023-560c-4c66-b330-e41218a7a4eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.565 248514 DEBUG nova.network.os_vif_util [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.566 248514 DEBUG nova.network.os_vif_util [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=0eac2381-7f12-4f67-bde8-76c8fb9ae0b0,network=Network(09b8bd52-e920-467f-994b-646113fcb821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eac2381-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.569 248514 DEBUG nova.objects.instance [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid af2dc023-560c-4c66-b330-e41218a7a4eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.596 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:53:10 np0005558241 nova_compute[248510]:  <uuid>af2dc023-560c-4c66-b330-e41218a7a4eb</uuid>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:  <name>instance-00000070</name>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkBasicOps-server-673770696</nova:name>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:53:09</nova:creationTime>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:53:10 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:        <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:        <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:        <nova:port uuid="0eac2381-7f12-4f67-bde8-76c8fb9ae0b0">
Dec 13 03:53:10 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <entry name="serial">af2dc023-560c-4c66-b330-e41218a7a4eb</entry>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <entry name="uuid">af2dc023-560c-4c66-b330-e41218a7a4eb</entry>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/af2dc023-560c-4c66-b330-e41218a7a4eb_disk">
Dec 13 03:53:10 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:53:10 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/af2dc023-560c-4c66-b330-e41218a7a4eb_disk.config">
Dec 13 03:53:10 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:53:10 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:6a:eb:91"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <target dev="tap0eac2381-7f"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb/console.log" append="off"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:53:10 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:53:10 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:53:10 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:53:10 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.597 248514 DEBUG nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Preparing to wait for external event network-vif-plugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.597 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.598 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.598 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.600 248514 DEBUG nova.virt.libvirt.vif [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:52:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-673770696',display_name='tempest-TestNetworkBasicOps-server-673770696',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-673770696',id=112,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFWWKdP0apeEEX6KLq89U2vRGSHeV3KAUwR7F/v8SOdmJ9w4un8uAKW6W1VsXiUAnc8fLGuX3ip0yk759e6Z6EnqMVZe+COaAk19ulIyzOUeifphXpMKMaa2a+4orpaKaw==',key_name='tempest-TestNetworkBasicOps-724159602',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-ahio6eh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:02Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=af2dc023-560c-4c66-b330-e41218a7a4eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.600 248514 DEBUG nova.network.os_vif_util [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.601 248514 DEBUG nova.network.os_vif_util [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=0eac2381-7f12-4f67-bde8-76c8fb9ae0b0,network=Network(09b8bd52-e920-467f-994b-646113fcb821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eac2381-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.602 248514 DEBUG os_vif [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=0eac2381-7f12-4f67-bde8-76c8fb9ae0b0,network=Network(09b8bd52-e920-467f-994b-646113fcb821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eac2381-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.603 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.603 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.604 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.608 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.608 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0eac2381-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.609 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0eac2381-7f, col_values=(('external_ids', {'iface-id': '0eac2381-7f12-4f67-bde8-76c8fb9ae0b0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:eb:91', 'vm-uuid': 'af2dc023-560c-4c66-b330-e41218a7a4eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.611 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:10 np0005558241 NetworkManager[50376]: <info>  [1765615990.6119] manager: (tap0eac2381-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/449)
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.613 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.616 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.616 248514 INFO os_vif [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=0eac2381-7f12-4f67-bde8-76c8fb9ae0b0,network=Network(09b8bd52-e920-467f-994b-646113fcb821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eac2381-7f')#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.702 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.702 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.702 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:6a:eb:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.703 248514 INFO nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Using config drive#033[00m
Dec 13 03:53:10 np0005558241 nova_compute[248510]: 2025-12-13 08:53:10.722 248514 DEBUG nova.storage.rbd_utils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image af2dc023-560c-4c66-b330-e41218a7a4eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:53:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.082 248514 DEBUG nova.network.neutron [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Updating instance_info_cache with network_info: [{"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.108 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Releasing lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.108 248514 DEBUG nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Instance network_info: |[{"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.108 248514 DEBUG oslo_concurrency.lockutils [req-e193f05f-9f72-4286-ba76-41766b4474ea req-afff6b18-b9da-45aa-848c-07d3fafb131e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.109 248514 DEBUG nova.network.neutron [req-e193f05f-9f72-4286-ba76-41766b4474ea req-afff6b18-b9da-45aa-848c-07d3fafb131e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Refreshing network info cache for port 1babd66f-ec6a-4702-8a8f-839d32ba8761 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.111 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Start _get_guest_xml network_info=[{"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.115 248514 WARNING nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.124 248514 DEBUG nova.virt.libvirt.host [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.124 248514 DEBUG nova.virt.libvirt.host [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.132 248514 DEBUG nova.virt.libvirt.host [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.132 248514 DEBUG nova.virt.libvirt.host [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.133 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.133 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.133 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.133 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.134 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.134 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.134 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.134 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.134 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.134 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.135 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.135 248514 DEBUG nova.virt.hardware [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.138 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.307 248514 INFO nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Creating config drive at /var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb/disk.config#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.312 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnx0mqudc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.457 248514 DEBUG nova.network.neutron [req-ba935414-9593-4266-82f2-2d872bf410c3 req-565847ea-4e2e-496c-8759-6ccf2828daf2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Updated VIF entry in instance network info cache for port 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.458 248514 DEBUG nova.network.neutron [req-ba935414-9593-4266-82f2-2d872bf410c3 req-565847ea-4e2e-496c-8759-6ccf2828daf2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Updating instance_info_cache with network_info: [{"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.461 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnx0mqudc" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.486 248514 DEBUG nova.storage.rbd_utils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image af2dc023-560c-4c66-b330-e41218a7a4eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.490 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb/disk.config af2dc023-560c-4c66-b330-e41218a7a4eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.534 248514 DEBUG oslo_concurrency.lockutils [req-ba935414-9593-4266-82f2-2d872bf410c3 req-565847ea-4e2e-496c-8759-6ccf2828daf2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.653 248514 DEBUG oslo_concurrency.processutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb/disk.config af2dc023-560c-4c66-b330-e41218a7a4eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.653 248514 INFO nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Deleting local config drive /var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb/disk.config because it was imported into RBD.#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.664 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.668 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.668 248514 INFO nova.compute.manager [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Unshelving#033[00m
Dec 13 03:53:11 np0005558241 kernel: tap0eac2381-7f: entered promiscuous mode
Dec 13 03:53:11 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:11Z|01086|binding|INFO|Claiming lport 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 for this chassis.
Dec 13 03:53:11 np0005558241 NetworkManager[50376]: <info>  [1765615991.7089] manager: (tap0eac2381-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/450)
Dec 13 03:53:11 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:11Z|01087|binding|INFO|0eac2381-7f12-4f67-bde8-76c8fb9ae0b0: Claiming fa:16:3e:6a:eb:91 10.100.0.8
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.710 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.716 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:eb:91 10.100.0.8'], port_security=['fa:16:3e:6a:eb:91 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'af2dc023-560c-4c66-b330-e41218a7a4eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09b8bd52-e920-467f-994b-646113fcb821', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e6cedfe2-a795-4750-8f73-fd0610750728', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7e75a11b-9fc0-4a04-84da-8ed3853196e7, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0eac2381-7f12-4f67-bde8-76c8fb9ae0b0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:53:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.719 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 in datapath 09b8bd52-e920-467f-994b-646113fcb821 bound to our chassis#033[00m
Dec 13 03:53:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.721 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 09b8bd52-e920-467f-994b-646113fcb821#033[00m
Dec 13 03:53:11 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:11Z|01088|binding|INFO|Setting lport 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 ovn-installed in OVS
Dec 13 03:53:11 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:11Z|01089|binding|INFO|Setting lport 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 up in Southbound
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.730 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.738 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b37dae6e-b762-4875-8cb2-62223f739a72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.740 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap09b8bd52-e1 in ovnmeta-09b8bd52-e920-467f-994b-646113fcb821 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:53:11 np0005558241 systemd-machined[210538]: New machine qemu-138-instance-00000070.
Dec 13 03:53:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.743 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap09b8bd52-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:53:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.744 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1526e835-221f-4d34-bac4-abd5ba3ac571]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.746 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[adfbaadd-70b3-48b8-bb9b-a80adc1da533]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:11 np0005558241 systemd-udevd[360141]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:53:11 np0005558241 systemd[1]: Started Virtual Machine qemu-138-instance-00000070.
Dec 13 03:53:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.761 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[3534c9a7-a7ac-4482-befe-7b9d40661ca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:11 np0005558241 NetworkManager[50376]: <info>  [1765615991.7673] device (tap0eac2381-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:53:11 np0005558241 NetworkManager[50376]: <info>  [1765615991.7679] device (tap0eac2381-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:53:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:53:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2374952932' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.776 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.776 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.784 248514 DEBUG nova.objects.instance [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'pci_requests' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:53:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.789 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[61474119-ca60-416a-afd0-5d53f35337a8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.798 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.660s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.821 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[18ffba9f-72ef-4d20-acb9-152a4c8ef946]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.827 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5d75155c-647b-4fdf-837c-16f0831f988d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:11 np0005558241 NetworkManager[50376]: <info>  [1765615991.8282] manager: (tap09b8bd52-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/451)
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.829 248514 DEBUG nova.storage.rbd_utils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.838 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.861 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[89621412-6cb4-459b-b854-b70d9e9c915c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.865 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[10c40cd0-4411-45ae-80ee-9498c0265377]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:11 np0005558241 NetworkManager[50376]: <info>  [1765615991.8903] device (tap09b8bd52-e0): carrier: link connected
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.893 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.895 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[74251966-5e0b-46c1-b65c-19632d969938]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.899 248514 DEBUG nova.objects.instance [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'numa_topology' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:53:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.916 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db0f2abc-d5ef-43ac-aeb7-28291285a69d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09b8bd52-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:21:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 322], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856910, 'reachable_time': 28899, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360194, 'error': None, 'target': 'ovnmeta-09b8bd52-e920-467f-994b-646113fcb821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.923 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:53:11 np0005558241 nova_compute[248510]: 2025-12-13 08:53:11.924 248514 INFO nova.compute.claims [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:53:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.940 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f58d0cbc-4ce6-4c4b-a376-b1e8b080ded4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:213c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 856910, 'tstamp': 856910}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360195, 'error': None, 'target': 'ovnmeta-09b8bd52-e920-467f-994b-646113fcb821', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:11.966 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[57538ebc-5408-4ab6-9649-654bf312dadc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap09b8bd52-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:21:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 322], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856910, 'reachable_time': 28899, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360211, 'error': None, 'target': 'ovnmeta-09b8bd52-e920-467f-994b-646113fcb821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.008 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[183b15dc-7c9a-4c35-b3e3-6671db4cd850]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.080 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c009e97d-e50e-4d93-b9b6-468235b881a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.083 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09b8bd52-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.084 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.085 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09b8bd52-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.126 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:12 np0005558241 NetworkManager[50376]: <info>  [1765615992.1349] manager: (tap09b8bd52-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Dec 13 03:53:12 np0005558241 kernel: tap09b8bd52-e0: entered promiscuous mode
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.140 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap09b8bd52-e0, col_values=(('external_ids', {'iface-id': 'eef5d4b2-f2d3-4d15-9528-0e68d65ce454'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.144 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/09b8bd52-e920-467f-994b-646113fcb821.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/09b8bd52-e920-467f-994b-646113fcb821.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:53:12 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:12Z|01090|binding|INFO|Releasing lport eef5d4b2-f2d3-4d15-9528-0e68d65ce454 from this chassis (sb_readonly=0)
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.149 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[346c753e-55ef-4212-9d38-2cf51b400b5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.151 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-09b8bd52-e920-467f-994b-646113fcb821
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/09b8bd52-e920-467f-994b-646113fcb821.pid.haproxy
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 09b8bd52-e920-467f-994b-646113fcb821
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:53:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:12.151 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-09b8bd52-e920-467f-994b-646113fcb821', 'env', 'PROCESS_TAG=haproxy-09b8bd52-e920-467f-994b-646113fcb821', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/09b8bd52-e920-467f-994b-646113fcb821.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:53:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2749: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 4.3 MiB/s wr, 110 op/s
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.167 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615992.150785, af2dc023-560c-4c66-b330-e41218a7a4eb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.168 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] VM Started (Lifecycle Event)#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.171 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.198 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.205 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615992.1513333, af2dc023-560c-4c66-b330-e41218a7a4eb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.206 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.228 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.236 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.247 248514 DEBUG nova.compute.manager [req-6433a2a5-cd70-4ee3-a76a-359c8186ae59 req-4f294c77-636a-421a-8d63-8899e0115788 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received event network-vif-plugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.248 248514 DEBUG oslo_concurrency.lockutils [req-6433a2a5-cd70-4ee3-a76a-359c8186ae59 req-4f294c77-636a-421a-8d63-8899e0115788 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.248 248514 DEBUG oslo_concurrency.lockutils [req-6433a2a5-cd70-4ee3-a76a-359c8186ae59 req-4f294c77-636a-421a-8d63-8899e0115788 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.248 248514 DEBUG oslo_concurrency.lockutils [req-6433a2a5-cd70-4ee3-a76a-359c8186ae59 req-4f294c77-636a-421a-8d63-8899e0115788 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.248 248514 DEBUG nova.compute.manager [req-6433a2a5-cd70-4ee3-a76a-359c8186ae59 req-4f294c77-636a-421a-8d63-8899e0115788 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Processing event network-vif-plugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.249 248514 DEBUG nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.253 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.261 248514 INFO nova.virt.libvirt.driver [-] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Instance spawned successfully.#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.261 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.268 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.269 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615992.252289, af2dc023-560c-4c66-b330-e41218a7a4eb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.269 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.286 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.288 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.289 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.289 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.289 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.290 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.290 248514 DEBUG nova.virt.libvirt.driver [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.319 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.372 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.414 248514 INFO nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Took 10.01 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.414 248514 DEBUG nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:53:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/385961892' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.464 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.472 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.473 248514 DEBUG nova.virt.libvirt.vif [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:53:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-473378416',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-473378416',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=113,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGUq/AunPgAiW/gJ0J2O7Gg2tWHaC1FajarqmNhDhKsCdHoif6mzxgzomxd0iSuMUg8lqNyZetu6VpRVmxHZgqpb0ZC3j0IoZa8EwnwWYlucFJrp/EYR/i0MEEr0woib3Q==',key_name='tempest-TestSecurityGroupsBasicOps-33804060',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-ogye2bl2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:07Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=2d2a33c7-0a90-4b64-b291-b268d37dce5e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.474 248514 DEBUG nova.network.os_vif_util [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.475 248514 DEBUG nova.network.os_vif_util [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:81:46,bridge_name='br-int',has_traffic_filtering=True,id=1babd66f-ec6a-4702-8a8f-839d32ba8761,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1babd66f-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.476 248514 DEBUG nova.objects.instance [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d2a33c7-0a90-4b64-b291-b268d37dce5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.531 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:53:12 np0005558241 nova_compute[248510]:  <uuid>2d2a33c7-0a90-4b64-b291-b268d37dce5e</uuid>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:  <name>instance-00000071</name>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-473378416</nova:name>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:53:11</nova:creationTime>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:53:12 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:        <nova:user uuid="c377508dda354c0aa762d15d52aa130c">tempest-TestSecurityGroupsBasicOps-1856512026-project-member</nova:user>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:        <nova:project uuid="1064539fdf2b494ea705dd5e74afcd3b">tempest-TestSecurityGroupsBasicOps-1856512026</nova:project>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:        <nova:port uuid="1babd66f-ec6a-4702-8a8f-839d32ba8761">
Dec 13 03:53:12 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <entry name="serial">2d2a33c7-0a90-4b64-b291-b268d37dce5e</entry>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <entry name="uuid">2d2a33c7-0a90-4b64-b291-b268d37dce5e</entry>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk">
Dec 13 03:53:12 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:53:12 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk.config">
Dec 13 03:53:12 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:53:12 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:b1:81:46"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <target dev="tap1babd66f-ec"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e/console.log" append="off"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:53:12 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:53:12 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:53:12 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:53:12 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.531 248514 DEBUG nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Preparing to wait for external event network-vif-plugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.531 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.532 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.532 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.532 248514 DEBUG nova.virt.libvirt.vif [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:53:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-473378416',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-473378416',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=113,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGUq/AunPgAiW/gJ0J2O7Gg2tWHaC1FajarqmNhDhKsCdHoif6mzxgzomxd0iSuMUg8lqNyZetu6VpRVmxHZgqpb0ZC3j0IoZa8EwnwWYlucFJrp/EYR/i0MEEr0woib3Q==',key_name='tempest-TestSecurityGroupsBasicOps-33804060',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-ogye2bl2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:07Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=2d2a33c7-0a90-4b64-b291-b268d37dce5e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.533 248514 DEBUG nova.network.os_vif_util [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.533 248514 DEBUG nova.network.os_vif_util [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:81:46,bridge_name='br-int',has_traffic_filtering=True,id=1babd66f-ec6a-4702-8a8f-839d32ba8761,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1babd66f-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.534 248514 DEBUG os_vif [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:81:46,bridge_name='br-int',has_traffic_filtering=True,id=1babd66f-ec6a-4702-8a8f-839d32ba8761,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1babd66f-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.534 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.535 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.535 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.541 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1babd66f-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.542 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1babd66f-ec, col_values=(('external_ids', {'iface-id': '1babd66f-ec6a-4702-8a8f-839d32ba8761', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b1:81:46', 'vm-uuid': '2d2a33c7-0a90-4b64-b291-b268d37dce5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.543 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:12 np0005558241 NetworkManager[50376]: <info>  [1765615992.5441] manager: (tap1babd66f-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/453)
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.546 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.551 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.552 248514 INFO os_vif [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:81:46,bridge_name='br-int',has_traffic_filtering=True,id=1babd66f-ec6a-4702-8a8f-839d32ba8761,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1babd66f-ec')#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.565 248514 INFO nova.compute.manager [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Took 11.40 seconds to build instance.#033[00m
Dec 13 03:53:12 np0005558241 podman[360310]: 2025-12-13 08:53:12.56824727 +0000 UTC m=+0.062351744 container create e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.599 248514 DEBUG oslo_concurrency.lockutils [None req-479754dd-da92-4392-a6a7-4476eea39c83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:12 np0005558241 systemd[1]: Started libpod-conmon-e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d.scope.
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.621 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.622 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.622 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No VIF found with MAC fa:16:3e:b1:81:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.623 248514 INFO nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Using config drive#033[00m
Dec 13 03:53:12 np0005558241 podman[360310]: 2025-12-13 08:53:12.530435382 +0000 UTC m=+0.024539876 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:53:12 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.652 248514 DEBUG nova.storage.rbd_utils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8de9dacfc504e0470aec8b697e93be4e6449bd328978b3076d04e8364d2802e5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:12 np0005558241 podman[360310]: 2025-12-13 08:53:12.67431901 +0000 UTC m=+0.168423504 container init e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:53:12 np0005558241 podman[360310]: 2025-12-13 08:53:12.680959656 +0000 UTC m=+0.175064130 container start e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:53:12 np0005558241 neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821[360327]: [NOTICE]   (360349) : New worker (360351) forked
Dec 13 03:53:12 np0005558241 neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821[360327]: [NOTICE]   (360349) : Loading success.
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.797 248514 DEBUG nova.network.neutron [req-e193f05f-9f72-4286-ba76-41766b4474ea req-afff6b18-b9da-45aa-848c-07d3fafb131e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Updated VIF entry in instance network info cache for port 1babd66f-ec6a-4702-8a8f-839d32ba8761. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.797 248514 DEBUG nova.network.neutron [req-e193f05f-9f72-4286-ba76-41766b4474ea req-afff6b18-b9da-45aa-848c-07d3fafb131e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Updating instance_info_cache with network_info: [{"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.817 248514 DEBUG oslo_concurrency.lockutils [req-e193f05f-9f72-4286-ba76-41766b4474ea req-afff6b18-b9da-45aa-848c-07d3fafb131e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:53:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:53:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2999988297' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.867 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.741s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.874 248514 DEBUG nova.compute.provider_tree [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.899 248514 DEBUG nova.scheduler.client.report [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.924 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.928 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 1.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.928 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.928 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:53:12 np0005558241 nova_compute[248510]: 2025-12-13 08:53:12.929 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:13 np0005558241 nova_compute[248510]: 2025-12-13 08:53:13.135 248514 INFO nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Creating config drive at /var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e/disk.config#033[00m
Dec 13 03:53:13 np0005558241 nova_compute[248510]: 2025-12-13 08:53:13.140 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpti75ytre execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:13 np0005558241 nova_compute[248510]: 2025-12-13 08:53:13.225 248514 INFO nova.network.neutron [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating port b2143648-4c23-49b5-8777-433a5b34c7ce with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Dec 13 03:53:13 np0005558241 nova_compute[248510]: 2025-12-13 08:53:13.294 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpti75ytre" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:13 np0005558241 nova_compute[248510]: 2025-12-13 08:53:13.317 248514 DEBUG nova.storage.rbd_utils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:13 np0005558241 nova_compute[248510]: 2025-12-13 08:53:13.322 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e/disk.config 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.372 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:53:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.374 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:53:13 np0005558241 nova_compute[248510]: 2025-12-13 08:53:13.373 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:53:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/336295226' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:53:13 np0005558241 nova_compute[248510]: 2025-12-13 08:53:13.579 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:13 np0005558241 nova_compute[248510]: 2025-12-13 08:53:13.664 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:53:13 np0005558241 nova_compute[248510]: 2025-12-13 08:53:13.664 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:53:13 np0005558241 nova_compute[248510]: 2025-12-13 08:53:13.670 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:53:13 np0005558241 nova_compute[248510]: 2025-12-13 08:53:13.671 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:53:13 np0005558241 nova_compute[248510]: 2025-12-13 08:53:13.733 248514 DEBUG oslo_concurrency.processutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e/disk.config 2d2a33c7-0a90-4b64-b291-b268d37dce5e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:13 np0005558241 nova_compute[248510]: 2025-12-13 08:53:13.734 248514 INFO nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Deleting local config drive /var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e/disk.config because it was imported into RBD.#033[00m
Dec 13 03:53:13 np0005558241 kernel: tap1babd66f-ec: entered promiscuous mode
Dec 13 03:53:13 np0005558241 systemd-udevd[360188]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:53:13 np0005558241 NetworkManager[50376]: <info>  [1765615993.8007] manager: (tap1babd66f-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/454)
Dec 13 03:53:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:13Z|01091|binding|INFO|Claiming lport 1babd66f-ec6a-4702-8a8f-839d32ba8761 for this chassis.
Dec 13 03:53:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:13Z|01092|binding|INFO|1babd66f-ec6a-4702-8a8f-839d32ba8761: Claiming fa:16:3e:b1:81:46 10.100.0.10
Dec 13 03:53:13 np0005558241 nova_compute[248510]: 2025-12-13 08:53:13.809 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.810 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:81:46 10.100.0.10'], port_security=['fa:16:3e:b1:81:46 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2d2a33c7-0a90-4b64-b291-b268d37dce5e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3479ed9a-2670-4333-b282-6f40685ff746', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '57e2154e-1e2d-4537-afe5-11c61b80fdbc ee7df75c-fefa-4bc0-977e-537259cc7755', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9839b158-8451-4098-b558-d860fc6a92ca, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1babd66f-ec6a-4702-8a8f-839d32ba8761) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:53:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.811 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1babd66f-ec6a-4702-8a8f-839d32ba8761 in datapath 3479ed9a-2670-4333-b282-6f40685ff746 bound to our chassis#033[00m
Dec 13 03:53:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.813 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3479ed9a-2670-4333-b282-6f40685ff746#033[00m
Dec 13 03:53:13 np0005558241 NetworkManager[50376]: <info>  [1765615993.8179] device (tap1babd66f-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:53:13 np0005558241 NetworkManager[50376]: <info>  [1765615993.8191] device (tap1babd66f-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:53:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:13Z|01093|binding|INFO|Setting lport 1babd66f-ec6a-4702-8a8f-839d32ba8761 ovn-installed in OVS
Dec 13 03:53:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:13Z|01094|binding|INFO|Setting lport 1babd66f-ec6a-4702-8a8f-839d32ba8761 up in Southbound
Dec 13 03:53:13 np0005558241 nova_compute[248510]: 2025-12-13 08:53:13.823 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.831 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[43fc1189-5dda-4053-9151-b7a91f8462b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.833 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3479ed9a-21 in ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:53:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.835 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3479ed9a-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:53:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.835 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ce53cb34-7cb3-4862-bed9-94d5a2b984e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.837 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[71801a36-a1ce-49e3-90b8-d4d6896bc640]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.851 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[a191b44d-d86c-42cc-ae4f-74ee96acd85d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:13 np0005558241 systemd-machined[210538]: New machine qemu-139-instance-00000071.
Dec 13 03:53:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.869 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3245e391-111d-4d5c-9e0a-4c7a70642f53]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:13 np0005558241 systemd[1]: Started Virtual Machine qemu-139-instance-00000071.
Dec 13 03:53:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.909 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8fe2fe-6f8c-45ab-82ef-4b221930b6bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:13 np0005558241 NetworkManager[50376]: <info>  [1765615993.9202] manager: (tap3479ed9a-20): new Veth device (/org/freedesktop/NetworkManager/Devices/455)
Dec 13 03:53:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.921 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[00f652fd-887f-4a28-8d2d-f5a71ac9bc39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.966 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8b145423-6918-4ea1-a706-36994331ae59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:13.971 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[65749387-8a21-4af9-b93c-aa077e162d44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:14 np0005558241 NetworkManager[50376]: <info>  [1765615994.0004] device (tap3479ed9a-20): carrier: link connected
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.006 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b87cf192-90a1-4611-9a7e-12831297f894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.023 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[03d7d402-2c41-4deb-980b-fffce20be722]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3479ed9a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:9f:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 857121, 'reachable_time': 24238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360454, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:14 np0005558241 nova_compute[248510]: 2025-12-13 08:53:14.066 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.064 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4550e5cf-6260-416a-a2fd-5f5c5df62a8c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3d:9fbc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 857121, 'tstamp': 857121}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360455, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:14 np0005558241 nova_compute[248510]: 2025-12-13 08:53:14.068 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3512MB free_disk=59.946029658429325GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:53:14 np0005558241 nova_compute[248510]: 2025-12-13 08:53:14.069 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:14 np0005558241 nova_compute[248510]: 2025-12-13 08:53:14.069 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.089 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db3d6e35-b0af-485b-82f3-ab1eb4c34f83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3479ed9a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:9f:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 857121, 'reachable_time': 24238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360456, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.152 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b7ce1b-cb55-48a2-9917-887a5ec92425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:14 np0005558241 nova_compute[248510]: 2025-12-13 08:53:14.157 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance af2dc023-560c-4c66-b330-e41218a7a4eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:53:14 np0005558241 nova_compute[248510]: 2025-12-13 08:53:14.158 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 2d2a33c7-0a90-4b64-b291-b268d37dce5e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:53:14 np0005558241 nova_compute[248510]: 2025-12-13 08:53:14.158 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance fcc617ec-f5f9-41bb-ad4b-86d790622e74 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:53:14 np0005558241 nova_compute[248510]: 2025-12-13 08:53:14.158 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:53:14 np0005558241 nova_compute[248510]: 2025-12-13 08:53:14.159 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:53:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2750: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.1 MiB/s wr, 154 op/s
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.223 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6938b3cc-37c0-4454-ab69-e70f2e55518f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.225 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3479ed9a-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.225 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.225 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3479ed9a-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:14 np0005558241 kernel: tap3479ed9a-20: entered promiscuous mode
Dec 13 03:53:14 np0005558241 NetworkManager[50376]: <info>  [1765615994.2281] manager: (tap3479ed9a-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/456)
Dec 13 03:53:14 np0005558241 nova_compute[248510]: 2025-12-13 08:53:14.228 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.230 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3479ed9a-20, col_values=(('external_ids', {'iface-id': 'd8892183-3d82-41f5-b0bd-dc5a1c170b1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:14Z|01095|binding|INFO|Releasing lport d8892183-3d82-41f5-b0bd-dc5a1c170b1a from this chassis (sb_readonly=0)
Dec 13 03:53:14 np0005558241 nova_compute[248510]: 2025-12-13 08:53:14.234 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.294 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3479ed9a-2670-4333-b282-6f40685ff746.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3479ed9a-2670-4333-b282-6f40685ff746.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.295 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d2101a19-03f0-4dab-a2fe-ca075593e4d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.296 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-3479ed9a-2670-4333-b282-6f40685ff746
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/3479ed9a-2670-4333-b282-6f40685ff746.pid.haproxy
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 3479ed9a-2670-4333-b282-6f40685ff746
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:53:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:14.297 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'env', 'PROCESS_TAG=haproxy-3479ed9a-2670-4333-b282-6f40685ff746', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3479ed9a-2670-4333-b282-6f40685ff746.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:53:14 np0005558241 nova_compute[248510]: 2025-12-13 08:53:14.295 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:14 np0005558241 podman[360507]: 2025-12-13 08:53:14.76347091 +0000 UTC m=+0.112778629 container create b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 03:53:14 np0005558241 podman[360507]: 2025-12-13 08:53:14.676600742 +0000 UTC m=+0.025908491 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:53:14 np0005558241 systemd[1]: Started libpod-conmon-b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a.scope.
Dec 13 03:53:14 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:53:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15bfa81170ec655fc806f556ce96cba9f04e8c1b8d6c8296ea616ed5bdca96a0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:53:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2191403147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:53:14 np0005558241 podman[360507]: 2025-12-13 08:53:14.862238557 +0000 UTC m=+0.211546306 container init b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 03:53:14 np0005558241 podman[360507]: 2025-12-13 08:53:14.871084638 +0000 UTC m=+0.220392357 container start b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:53:14 np0005558241 nova_compute[248510]: 2025-12-13 08:53:14.888 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:14 np0005558241 neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746[360521]: [NOTICE]   (360543) : New worker (360547) forked
Dec 13 03:53:14 np0005558241 neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746[360521]: [NOTICE]   (360543) : Loading success.
Dec 13 03:53:14 np0005558241 nova_compute[248510]: 2025-12-13 08:53:14.897 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:53:14 np0005558241 nova_compute[248510]: 2025-12-13 08:53:14.919 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:53:14 np0005558241 nova_compute[248510]: 2025-12-13 08:53:14.944 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:53:14 np0005558241 nova_compute[248510]: 2025-12-13 08:53:14.944 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:53:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3088222618' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:53:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:53:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3088222618' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:53:15 np0005558241 nova_compute[248510]: 2025-12-13 08:53:15.065 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615995.0646038, 2d2a33c7-0a90-4b64-b291-b268d37dce5e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:53:15 np0005558241 nova_compute[248510]: 2025-12-13 08:53:15.065 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] VM Started (Lifecycle Event)#033[00m
Dec 13 03:53:15 np0005558241 nova_compute[248510]: 2025-12-13 08:53:15.095 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:15 np0005558241 nova_compute[248510]: 2025-12-13 08:53:15.100 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615995.0702167, 2d2a33c7-0a90-4b64-b291-b268d37dce5e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:53:15 np0005558241 nova_compute[248510]: 2025-12-13 08:53:15.100 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:53:15 np0005558241 nova_compute[248510]: 2025-12-13 08:53:15.123 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:15 np0005558241 nova_compute[248510]: 2025-12-13 08:53:15.127 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:53:15 np0005558241 nova_compute[248510]: 2025-12-13 08:53:15.151 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:53:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:53:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2751: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.8 MiB/s wr, 144 op/s
Dec 13 03:53:16 np0005558241 nova_compute[248510]: 2025-12-13 08:53:16.555 248514 DEBUG nova.compute.manager [req-8ffe3d03-5fd6-4497-a9ed-d343428f5b6d req-96e6711f-7f11-4e76-9757-8cf3d447ee4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received event network-vif-plugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:53:16 np0005558241 nova_compute[248510]: 2025-12-13 08:53:16.556 248514 DEBUG oslo_concurrency.lockutils [req-8ffe3d03-5fd6-4497-a9ed-d343428f5b6d req-96e6711f-7f11-4e76-9757-8cf3d447ee4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:16 np0005558241 nova_compute[248510]: 2025-12-13 08:53:16.556 248514 DEBUG oslo_concurrency.lockutils [req-8ffe3d03-5fd6-4497-a9ed-d343428f5b6d req-96e6711f-7f11-4e76-9757-8cf3d447ee4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:16 np0005558241 nova_compute[248510]: 2025-12-13 08:53:16.556 248514 DEBUG oslo_concurrency.lockutils [req-8ffe3d03-5fd6-4497-a9ed-d343428f5b6d req-96e6711f-7f11-4e76-9757-8cf3d447ee4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:16 np0005558241 nova_compute[248510]: 2025-12-13 08:53:16.556 248514 DEBUG nova.compute.manager [req-8ffe3d03-5fd6-4497-a9ed-d343428f5b6d req-96e6711f-7f11-4e76-9757-8cf3d447ee4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] No waiting events found dispatching network-vif-plugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:53:16 np0005558241 nova_compute[248510]: 2025-12-13 08:53:16.556 248514 WARNING nova.compute.manager [req-8ffe3d03-5fd6-4497-a9ed-d343428f5b6d req-96e6711f-7f11-4e76-9757-8cf3d447ee4e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received unexpected event network-vif-plugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:53:17 np0005558241 nova_compute[248510]: 2025-12-13 08:53:17.467 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:17 np0005558241 nova_compute[248510]: 2025-12-13 08:53:17.543 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:17 np0005558241 nova_compute[248510]: 2025-12-13 08:53:17.957 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:53:17 np0005558241 nova_compute[248510]: 2025-12-13 08:53:17.957 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:53:17 np0005558241 nova_compute[248510]: 2025-12-13 08:53:17.958 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:53:17 np0005558241 nova_compute[248510]: 2025-12-13 08:53:17.958 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:53:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2752: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 159 op/s
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.239 248514 DEBUG nova.compute.manager [req-7b4049a3-56b2-47c3-a3ef-f496d64a8dc7 req-712cf4c0-add8-4f3b-9a05-96dbfdfa343c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received event network-vif-plugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.239 248514 DEBUG oslo_concurrency.lockutils [req-7b4049a3-56b2-47c3-a3ef-f496d64a8dc7 req-712cf4c0-add8-4f3b-9a05-96dbfdfa343c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.239 248514 DEBUG oslo_concurrency.lockutils [req-7b4049a3-56b2-47c3-a3ef-f496d64a8dc7 req-712cf4c0-add8-4f3b-9a05-96dbfdfa343c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.240 248514 DEBUG oslo_concurrency.lockutils [req-7b4049a3-56b2-47c3-a3ef-f496d64a8dc7 req-712cf4c0-add8-4f3b-9a05-96dbfdfa343c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.240 248514 DEBUG nova.compute.manager [req-7b4049a3-56b2-47c3-a3ef-f496d64a8dc7 req-712cf4c0-add8-4f3b-9a05-96dbfdfa343c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Processing event network-vif-plugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.240 248514 DEBUG nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.252 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765615999.2442622, 2d2a33c7-0a90-4b64-b291-b268d37dce5e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.259 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.261 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.271 248514 INFO nova.virt.libvirt.driver [-] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Instance spawned successfully.#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.272 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.287 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.291 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.312 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.314 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.314 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.314 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.315 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.315 248514 DEBUG nova.virt.libvirt.driver [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.319 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.377 248514 INFO nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Took 12.15 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.377 248514 DEBUG nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.475 248514 INFO nova.compute.manager [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Took 13.54 seconds to build instance.#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.481 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.481 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquired lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.482 248514 DEBUG nova.network.neutron [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:53:19 np0005558241 nova_compute[248510]: 2025-12-13 08:53:19.511 248514 DEBUG oslo_concurrency.lockutils [None req-488e49b1-84ce-453f-b0fa-fc1e21ba6249 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2753: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Dec 13 03:53:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:20.376 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007091986137092553 of space, bias 1.0, pg target 0.21275958411277657 quantized to 32 (current 32)
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014283697439170003 of space, bias 1.0, pg target 0.4285109231751001 quantized to 32 (current 32)
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.735894265472636e-07 of space, bias 4.0, pg target 0.0006883073118567163 quantized to 16 (current 32)
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:53:21 np0005558241 nova_compute[248510]: 2025-12-13 08:53:21.433 248514 DEBUG nova.compute.manager [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received event network-vif-plugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:53:21 np0005558241 nova_compute[248510]: 2025-12-13 08:53:21.434 248514 DEBUG oslo_concurrency.lockutils [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:21 np0005558241 nova_compute[248510]: 2025-12-13 08:53:21.435 248514 DEBUG oslo_concurrency.lockutils [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:21 np0005558241 nova_compute[248510]: 2025-12-13 08:53:21.435 248514 DEBUG oslo_concurrency.lockutils [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:21 np0005558241 nova_compute[248510]: 2025-12-13 08:53:21.435 248514 DEBUG nova.compute.manager [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] No waiting events found dispatching network-vif-plugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:53:21 np0005558241 nova_compute[248510]: 2025-12-13 08:53:21.436 248514 WARNING nova.compute.manager [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received unexpected event network-vif-plugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:53:21 np0005558241 nova_compute[248510]: 2025-12-13 08:53:21.436 248514 DEBUG nova.compute.manager [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:53:21 np0005558241 nova_compute[248510]: 2025-12-13 08:53:21.436 248514 DEBUG nova.compute.manager [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing instance network info cache due to event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:53:21 np0005558241 nova_compute[248510]: 2025-12-13 08:53:21.436 248514 DEBUG oslo_concurrency.lockutils [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:53:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:21.626 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3f:9d 2001:db8::f816:3eff:fea4:3f9d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a97dbe15-88c4-44cf-8dbe-4f1cf8c59d01, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cad8a42-b0dd-468b-aee7-cbceaefdb18d) old=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2 2001:db8::f816:3eff:fea4:3f9d'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:53:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:21.627 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cad8a42-b0dd-468b-aee7-cbceaefdb18d in datapath a100b8b9-7333-44af-8310-9d269980caa2 updated#033[00m
Dec 13 03:53:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:21.629 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a100b8b9-7333-44af-8310-9d269980caa2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:53:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:21.630 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[93b65367-23f4-423c-9fe6-3576e86adba1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:21 np0005558241 nova_compute[248510]: 2025-12-13 08:53:21.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:53:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2754: 321 pgs: 321 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 83 op/s
Dec 13 03:53:22 np0005558241 nova_compute[248510]: 2025-12-13 08:53:22.470 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:22 np0005558241 nova_compute[248510]: 2025-12-13 08:53:22.545 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:22 np0005558241 nova_compute[248510]: 2025-12-13 08:53:22.561 248514 DEBUG nova.network.neutron [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:53:22 np0005558241 nova_compute[248510]: 2025-12-13 08:53:22.592 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Releasing lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:53:22 np0005558241 nova_compute[248510]: 2025-12-13 08:53:22.593 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:53:22 np0005558241 nova_compute[248510]: 2025-12-13 08:53:22.594 248514 INFO nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Creating image(s)#033[00m
Dec 13 03:53:22 np0005558241 nova_compute[248510]: 2025-12-13 08:53:22.619 248514 DEBUG nova.storage.rbd_utils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:22 np0005558241 nova_compute[248510]: 2025-12-13 08:53:22.623 248514 DEBUG nova.objects.instance [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'trusted_certs' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:53:22 np0005558241 nova_compute[248510]: 2025-12-13 08:53:22.625 248514 DEBUG oslo_concurrency.lockutils [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:53:22 np0005558241 nova_compute[248510]: 2025-12-13 08:53:22.626 248514 DEBUG nova.network.neutron [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:53:22 np0005558241 nova_compute[248510]: 2025-12-13 08:53:22.679 248514 DEBUG nova.storage.rbd_utils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:22 np0005558241 nova_compute[248510]: 2025-12-13 08:53:22.710 248514 DEBUG nova.storage.rbd_utils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:22 np0005558241 nova_compute[248510]: 2025-12-13 08:53:22.716 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "904114c6e9ea91bfc56a15099c4749b640a96cc9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:22 np0005558241 nova_compute[248510]: 2025-12-13 08:53:22.718 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "904114c6e9ea91bfc56a15099c4749b640a96cc9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:23 np0005558241 nova_compute[248510]: 2025-12-13 08:53:23.071 248514 DEBUG nova.compute.manager [req-c65cca04-5bcd-439a-a139-734ccba10481 req-6621083e-0d3c-4396-ab65-cdce2d773652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received event network-changed-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:53:23 np0005558241 nova_compute[248510]: 2025-12-13 08:53:23.072 248514 DEBUG nova.compute.manager [req-c65cca04-5bcd-439a-a139-734ccba10481 req-6621083e-0d3c-4396-ab65-cdce2d773652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Refreshing instance network info cache due to event network-changed-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:53:23 np0005558241 nova_compute[248510]: 2025-12-13 08:53:23.073 248514 DEBUG oslo_concurrency.lockutils [req-c65cca04-5bcd-439a-a139-734ccba10481 req-6621083e-0d3c-4396-ab65-cdce2d773652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:53:23 np0005558241 nova_compute[248510]: 2025-12-13 08:53:23.073 248514 DEBUG oslo_concurrency.lockutils [req-c65cca04-5bcd-439a-a139-734ccba10481 req-6621083e-0d3c-4396-ab65-cdce2d773652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:53:23 np0005558241 nova_compute[248510]: 2025-12-13 08:53:23.073 248514 DEBUG nova.network.neutron [req-c65cca04-5bcd-439a-a139-734ccba10481 req-6621083e-0d3c-4396-ab65-cdce2d773652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Refreshing network info cache for port 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:53:23 np0005558241 nova_compute[248510]: 2025-12-13 08:53:23.343 248514 DEBUG nova.virt.libvirt.imagebackend [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Image locations are: [{'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/e80a280c-5146-4d78-99c1-0d3591de049e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/e80a280c-5146-4d78-99c1-0d3591de049e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec 13 03:53:23 np0005558241 nova_compute[248510]: 2025-12-13 08:53:23.397 248514 DEBUG nova.virt.libvirt.imagebackend [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Selected location: {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/e80a280c-5146-4d78-99c1-0d3591de049e/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Dec 13 03:53:23 np0005558241 nova_compute[248510]: 2025-12-13 08:53:23.398 248514 DEBUG nova.storage.rbd_utils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] cloning images/e80a280c-5146-4d78-99c1-0d3591de049e@snap to None/fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:53:23 np0005558241 nova_compute[248510]: 2025-12-13 08:53:23.635 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "904114c6e9ea91bfc56a15099c4749b640a96cc9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:53:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:53:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:53:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:53:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:53:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:53:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:53:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:53:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:53:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:53:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:53:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:53:23 np0005558241 nova_compute[248510]: 2025-12-13 08:53:23.856 248514 DEBUG nova.objects.instance [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'migration_context' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:53:23 np0005558241 nova_compute[248510]: 2025-12-13 08:53:23.959 248514 DEBUG nova.storage.rbd_utils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] flattening vms/fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec 13 03:53:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2755: 321 pgs: 321 active+clean; 230 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.7 MiB/s wr, 208 op/s
Dec 13 03:53:24 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:53:24 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:53:24 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:53:24 np0005558241 podman[360939]: 2025-12-13 08:53:24.34203296 +0000 UTC m=+0.088478479 container create 670346e64f349b49a82159d4b8d8f13980b937efa2dc5c6ff71e750ab7ecaceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_johnson, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:53:24 np0005558241 podman[360939]: 2025-12-13 08:53:24.282474597 +0000 UTC m=+0.028920146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:53:24 np0005558241 systemd[1]: Started libpod-conmon-670346e64f349b49a82159d4b8d8f13980b937efa2dc5c6ff71e750ab7ecaceb.scope.
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.439 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Image rbd:vms/fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.440 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
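The sequence above — clone the disk from the Glance RBD snapshot (08:53:23.398), then flatten the clone (08:53:23.959) so it no longer depends on the parent snapshot while unshelving — can be modeled with a toy copy-on-write sketch. `FakeRBDImage` is a conceptual stand-in, not the real python-rbd API:

```python
class FakeRBDImage:
    """Toy model of an RBD image: self-contained, or a COW clone of a parent."""
    def __init__(self, name, data=b"", parent=None):
        self.name = name
        self._data = data
        self.parent = parent  # (parent_image, snap_name) or None

    def read(self):
        # A clone with no local data reads through to its parent snapshot.
        if self.parent and not self._data:
            return self.parent[0].read()
        return self._data

    def flatten(self):
        # Copy the parent's data locally and drop the parent link, as nova
        # does here so the shelved Glance image can later be deleted safely.
        if self.parent:
            self._data = self.parent[0].read()
            self.parent = None

def clone(parent, snap, child_name):
    """Create a child image backed by a parent snapshot (no data copied)."""
    return FakeRBDImage(child_name, parent=(parent, snap))
```

In the log, the child is `vms/fcc617ec-...-_disk` and the parent is the `images/e80a280c-.../snap` snapshot; after flatten, deleting the parent no longer breaks the instance disk.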
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.440 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Ensure instance console log exists: /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.441 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.441 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.441 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.443 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Start _get_guest_xml network_info=[{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:52:49Z,direct_url=<?>,disk_format='raw',id=e80a280c-5146-4d78-99c1-0d3591de049e,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1747506169-shelved',owner='ff4d2c6ad4dc4848ac9f55ff1b9e829a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:53:01Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m

Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.450 248514 WARNING nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:53:24 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:53:24 np0005558241 podman[360939]: 2025-12-13 08:53:24.477886346 +0000 UTC m=+0.224331895 container init 670346e64f349b49a82159d4b8d8f13980b937efa2dc5c6ff71e750ab7ecaceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 03:53:24 np0005558241 podman[360939]: 2025-12-13 08:53:24.486206635 +0000 UTC m=+0.232652154 container start 670346e64f349b49a82159d4b8d8f13980b937efa2dc5c6ff71e750ab7ecaceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 03:53:24 np0005558241 podman[360939]: 2025-12-13 08:53:24.489620591 +0000 UTC m=+0.236066110 container attach 670346e64f349b49a82159d4b8d8f13980b937efa2dc5c6ff71e750ab7ecaceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 03:53:24 np0005558241 systemd[1]: libpod-670346e64f349b49a82159d4b8d8f13980b937efa2dc5c6ff71e750ab7ecaceb.scope: Deactivated successfully.
Dec 13 03:53:24 np0005558241 happy_johnson[360955]: 167 167
Dec 13 03:53:24 np0005558241 podman[360939]: 2025-12-13 08:53:24.49559933 +0000 UTC m=+0.242044869 container died 670346e64f349b49a82159d4b8d8f13980b937efa2dc5c6ff71e750ab7ecaceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_johnson, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.506 248514 DEBUG nova.virt.libvirt.host [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.507 248514 DEBUG nova.virt.libvirt.host [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:53:24 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b1430df9bd3976963e60d4a8c2bbae54a5f199eb326f57c1b14cbde57cbf515b-merged.mount: Deactivated successfully.
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.544 248514 DEBUG nova.virt.libvirt.host [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.544 248514 DEBUG nova.virt.libvirt.host [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.545 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.545 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:52:49Z,direct_url=<?>,disk_format='raw',id=e80a280c-5146-4d78-99c1-0d3591de049e,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1747506169-shelved',owner='ff4d2c6ad4dc4848ac9f55ff1b9e829a',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:53:01Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.545 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.545 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:53:24 np0005558241 podman[360939]: 2025-12-13 08:53:24.546130557 +0000 UTC m=+0.292576076 container remove 670346e64f349b49a82159d4b8d8f13980b937efa2dc5c6ff71e750ab7ecaceb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_johnson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.546 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.546 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.546 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.546 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.547 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.547 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.547 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.547 248514 DEBUG nova.virt.hardware [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
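The topology records above (no flavor or image constraints, limits 65536:65536:65536, one vCPU, single possible topology 1:1:1) show nova enumerating (sockets, cores, threads) triples for the guest. A simplified sketch, assuming the product must exactly equal the vCPU count; nova's real logic in nova/virt/hardware.py additionally weighs preferences and partial constraints:

```python
def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
    """Enumerate (sockets, cores, threads) triples whose product equals
    the vCPU count, within the given per-dimension maxima."""
    found = []
    for s in range(1, max_sockets + 1):
        if s > vcpus:
            break
        for c in range(1, max_cores + 1):
            if s * c > vcpus:
                break
            for t in range(1, max_threads + 1):
                if s * c * t == vcpus:
                    found.append((s, c, t))
                elif s * c * t > vcpus:
                    break
    return found
```

For the 1-vCPU m1.nano flavor above this yields only (1, 1, 1), matching "Got 1 possible topologies"; a 4-vCPU guest under maxima of 2:2:2 would yield several candidates for the preference sort to order.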
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.547 248514 DEBUG nova.objects.instance [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'vcpu_model' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:53:24 np0005558241 systemd[1]: libpod-conmon-670346e64f349b49a82159d4b8d8f13980b937efa2dc5c6ff71e750ab7ecaceb.scope: Deactivated successfully.
Dec 13 03:53:24 np0005558241 nova_compute[248510]: 2025-12-13 08:53:24.698 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
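Here nova shells out to `ceph mon dump --format=json` to learn the monitor addresses it will embed in the libvirt RBD disk source. A small sketch of extracting addresses from that JSON; the SAMPLE document below is illustrative and trimmed to the fields used (real output carries many more keys):

```python
import json

# Illustrative sample of `ceph mon dump --format=json` output, trimmed.
SAMPLE = '''{
  "fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
  "mons": [
    {"name": "compute-0",
     "public_addrs": {"addrvec": [
        {"type": "v2", "addr": "192.168.122.100:3300", "nonce": 0},
        {"type": "v1", "addr": "192.168.122.100:6789", "nonce": 0}]}}
  ]
}'''

def mon_addresses(dump_json, msgr_type="v1"):
    """Pull monitor addresses of one messenger type out of a mon dump."""
    dump = json.loads(dump_json)
    addrs = []
    for mon in dump["mons"]:
        for a in mon["public_addrs"]["addrvec"]:
            if a["type"] == msgr_type:
                addrs.append(a["addr"])
    return addrs
```

The v1 (port 6789) addresses are what a legacy-messenger client would dial; filtering on `msgr_type="v2"` would instead return the port-3300 endpoints.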
Dec 13 03:53:24 np0005558241 podman[360980]: 2025-12-13 08:53:24.728865318 +0000 UTC m=+0.045899471 container create 558c6bc15a1640e928323b7770eb221aa8bfcaa87b9907a0ea95ff70cef0fa8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_goldstine, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 03:53:24 np0005558241 systemd[1]: Started libpod-conmon-558c6bc15a1640e928323b7770eb221aa8bfcaa87b9907a0ea95ff70cef0fa8d.scope.
Dec 13 03:53:24 np0005558241 podman[360980]: 2025-12-13 08:53:24.709833761 +0000 UTC m=+0.026867934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:53:24 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:53:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9239f39c32eff7e354d1c4dce3d63b8dd62835a564f753613dff75ec3521fd71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9239f39c32eff7e354d1c4dce3d63b8dd62835a564f753613dff75ec3521fd71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9239f39c32eff7e354d1c4dce3d63b8dd62835a564f753613dff75ec3521fd71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9239f39c32eff7e354d1c4dce3d63b8dd62835a564f753613dff75ec3521fd71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9239f39c32eff7e354d1c4dce3d63b8dd62835a564f753613dff75ec3521fd71/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:24 np0005558241 podman[360980]: 2025-12-13 08:53:24.852044697 +0000 UTC m=+0.169078860 container init 558c6bc15a1640e928323b7770eb221aa8bfcaa87b9907a0ea95ff70cef0fa8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 03:53:24 np0005558241 podman[360980]: 2025-12-13 08:53:24.861545135 +0000 UTC m=+0.178579288 container start 558c6bc15a1640e928323b7770eb221aa8bfcaa87b9907a0ea95ff70cef0fa8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_goldstine, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 03:53:24 np0005558241 podman[360980]: 2025-12-13 08:53:24.866376846 +0000 UTC m=+0.183411039 container attach 558c6bc15a1640e928323b7770eb221aa8bfcaa87b9907a0ea95ff70cef0fa8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_goldstine, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 03:53:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:25.180 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2 2001:db8::f816:3eff:fea4:3f9d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a97dbe15-88c4-44cf-8dbe-4f1cf8c59d01, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cad8a42-b0dd-468b-aee7-cbceaefdb18d) old=Port_Binding(mac=['fa:16:3e:a4:3f:9d 2001:db8::f816:3eff:fea4:3f9d'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:53:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:25.183 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cad8a42-b0dd-468b-aee7-cbceaefdb18d in datapath a100b8b9-7333-44af-8310-9d269980caa2 updated#033[00m
Dec 13 03:53:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:25.186 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a100b8b9-7333-44af-8310-9d269980caa2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:53:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:25.187 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1afc8b5d-bb3f-44b9-9e79-55bb7968eeee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:53:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2566843178' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:53:25 np0005558241 nova_compute[248510]: 2025-12-13 08:53:25.338 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:25 np0005558241 happy_goldstine[360998]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:53:25 np0005558241 happy_goldstine[360998]: --> All data devices are unavailable
Dec 13 03:53:25 np0005558241 systemd[1]: libpod-558c6bc15a1640e928323b7770eb221aa8bfcaa87b9907a0ea95ff70cef0fa8d.scope: Deactivated successfully.
Dec 13 03:53:25 np0005558241 podman[360980]: 2025-12-13 08:53:25.372242999 +0000 UTC m=+0.689277152 container died 558c6bc15a1640e928323b7770eb221aa8bfcaa87b9907a0ea95ff70cef0fa8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Dec 13 03:53:25 np0005558241 nova_compute[248510]: 2025-12-13 08:53:25.392 248514 DEBUG nova.storage.rbd_utils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:25 np0005558241 nova_compute[248510]: 2025-12-13 08:53:25.400 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:25 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9239f39c32eff7e354d1c4dce3d63b8dd62835a564f753613dff75ec3521fd71-merged.mount: Deactivated successfully.
Dec 13 03:53:25 np0005558241 podman[360980]: 2025-12-13 08:53:25.43010556 +0000 UTC m=+0.747139713 container remove 558c6bc15a1640e928323b7770eb221aa8bfcaa87b9907a0ea95ff70cef0fa8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 03:53:25 np0005558241 systemd[1]: libpod-conmon-558c6bc15a1640e928323b7770eb221aa8bfcaa87b9907a0ea95ff70cef0fa8d.scope: Deactivated successfully.
Dec 13 03:53:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:25Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6a:eb:91 10.100.0.8
Dec 13 03:53:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:25Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:eb:91 10.100.0.8
Dec 13 03:53:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:53:25 np0005558241 nova_compute[248510]: 2025-12-13 08:53:25.569 248514 DEBUG nova.network.neutron [req-c65cca04-5bcd-439a-a139-734ccba10481 req-6621083e-0d3c-4396-ab65-cdce2d773652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Updated VIF entry in instance network info cache for port 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:53:25 np0005558241 nova_compute[248510]: 2025-12-13 08:53:25.571 248514 DEBUG nova.network.neutron [req-c65cca04-5bcd-439a-a139-734ccba10481 req-6621083e-0d3c-4396-ab65-cdce2d773652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Updating instance_info_cache with network_info: [{"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:53:25 np0005558241 nova_compute[248510]: 2025-12-13 08:53:25.601 248514 DEBUG nova.network.neutron [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updated VIF entry in instance network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:53:25 np0005558241 nova_compute[248510]: 2025-12-13 08:53:25.602 248514 DEBUG nova.network.neutron [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:53:25 np0005558241 nova_compute[248510]: 2025-12-13 08:53:25.649 248514 DEBUG oslo_concurrency.lockutils [req-de3403e2-b22f-4ac4-9a66-960487cb8e8f req-d4338a99-d675-4e94-a8d2-6a96a88a2a03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:53:25 np0005558241 nova_compute[248510]: 2025-12-13 08:53:25.651 248514 DEBUG oslo_concurrency.lockutils [req-c65cca04-5bcd-439a-a139-734ccba10481 req-6621083e-0d3c-4396-ab65-cdce2d773652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:53:25 np0005558241 podman[361151]: 2025-12-13 08:53:25.892106184 +0000 UTC m=+0.035810079 container create e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 03:53:25 np0005558241 systemd[1]: Started libpod-conmon-e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81.scope.
Dec 13 03:53:25 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:53:25 np0005558241 podman[361151]: 2025-12-13 08:53:25.876574654 +0000 UTC m=+0.020278569 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:53:25 np0005558241 podman[361151]: 2025-12-13 08:53:25.991293761 +0000 UTC m=+0.134997676 container init e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 03:53:26 np0005558241 podman[361151]: 2025-12-13 08:53:26.00164261 +0000 UTC m=+0.145346515 container start e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 03:53:26 np0005558241 podman[361151]: 2025-12-13 08:53:26.005841246 +0000 UTC m=+0.149545161 container attach e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 03:53:26 np0005558241 nervous_aryabhata[361167]: 167 167
Dec 13 03:53:26 np0005558241 systemd[1]: libpod-e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81.scope: Deactivated successfully.
Dec 13 03:53:26 np0005558241 conmon[361167]: conmon e1999cab1bbfa4d4db22 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81.scope/container/memory.events
Dec 13 03:53:26 np0005558241 podman[361151]: 2025-12-13 08:53:26.009441676 +0000 UTC m=+0.153145581 container died e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:53:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:53:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1515509598' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:53:26 np0005558241 systemd[1]: var-lib-containers-storage-overlay-200e3598393003f8d37d71505ac742004b7087b005c2874610048f9eb3832b50-merged.mount: Deactivated successfully.
Dec 13 03:53:26 np0005558241 podman[361151]: 2025-12-13 08:53:26.051939911 +0000 UTC m=+0.195643806 container remove e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.058 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.060 248514 DEBUG nova.virt.libvirt.vif [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:52:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1747506169',display_name='tempest-TestShelveInstance-server-1747506169',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1747506169',id=111,image_ref='e80a280c-5146-4d78-99c1-0d3591de049e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1284595260',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:52:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='ff4d2c6ad4dc4848ac9f55ff1b9e829a',ramdisk_id='',reservation_id='r-wfxenjt6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-2105398574',owner_user_name='tempest-TestShelveInstance-2105398574-project-member',shelved_at='2025-12-13T08:53:01.811270',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='e80a280c-5146-4d78-99c1-0d3591de049e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:11Z,user_data=None,user_id='fa34623cd3de4a47aa57959f09b3ff79',uuid=fcc617ec-f5f9-41bb-ad4b-86d790622e74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.061 248514 DEBUG nova.network.os_vif_util [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converting VIF {"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.062 248514 DEBUG nova.network.os_vif_util [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:53:26 np0005558241 systemd[1]: libpod-conmon-e1999cab1bbfa4d4db22279976904da725a2523e28a475ff9bdc78f6a9bb7e81.scope: Deactivated successfully.
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.063 248514 DEBUG nova.objects.instance [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'pci_devices' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.090 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:53:26 np0005558241 nova_compute[248510]:  <uuid>fcc617ec-f5f9-41bb-ad4b-86d790622e74</uuid>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:  <name>instance-0000006f</name>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestShelveInstance-server-1747506169</nova:name>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:53:24</nova:creationTime>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:53:26 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:        <nova:user uuid="fa34623cd3de4a47aa57959f09b3ff79">tempest-TestShelveInstance-2105398574-project-member</nova:user>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:        <nova:project uuid="ff4d2c6ad4dc4848ac9f55ff1b9e829a">tempest-TestShelveInstance-2105398574</nova:project>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="e80a280c-5146-4d78-99c1-0d3591de049e"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:        <nova:port uuid="b2143648-4c23-49b5-8777-433a5b34c7ce">
Dec 13 03:53:26 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <entry name="serial">fcc617ec-f5f9-41bb-ad4b-86d790622e74</entry>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <entry name="uuid">fcc617ec-f5f9-41bb-ad4b-86d790622e74</entry>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk">
Dec 13 03:53:26 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:53:26 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config">
Dec 13 03:53:26 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:53:26 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:02:b7:87"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <target dev="tapb2143648-4c"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/console.log" append="off"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <input type="keyboard" bus="usb"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:53:26 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:53:26 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:53:26 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:53:26 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.091 248514 DEBUG nova.compute.manager [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Preparing to wait for external event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.091 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.091 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.092 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.092 248514 DEBUG nova.virt.libvirt.vif [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:52:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1747506169',display_name='tempest-TestShelveInstance-server-1747506169',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1747506169',id=111,image_ref='e80a280c-5146-4d78-99c1-0d3591de049e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1284595260',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:52:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='ff4d2c6ad4dc4848ac9f55ff1b9e829a',ramdisk_id='',reservation_id='r-wfxenjt6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='
virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-2105398574',owner_user_name='tempest-TestShelveInstance-2105398574-project-member',shelved_at='2025-12-13T08:53:01.811270',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='e80a280c-5146-4d78-99c1-0d3591de049e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:11Z,user_data=None,user_id='fa34623cd3de4a47aa57959f09b3ff79',uuid=fcc617ec-f5f9-41bb-ad4b-86d790622e74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.093 248514 DEBUG nova.network.os_vif_util [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converting VIF {"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.093 248514 DEBUG nova.network.os_vif_util [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.094 248514 DEBUG os_vif [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.094 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.094 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.095 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.099 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.099 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2143648-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.099 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb2143648-4c, col_values=(('external_ids', {'iface-id': 'b2143648-4c23-49b5-8777-433a5b34c7ce', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:b7:87', 'vm-uuid': 'fcc617ec-f5f9-41bb-ad4b-86d790622e74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.101 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:26 np0005558241 NetworkManager[50376]: <info>  [1765616006.1025] manager: (tapb2143648-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/457)
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.104 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.108 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.108 248514 INFO os_vif [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c')#033[00m
Dec 13 03:53:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2756: 321 pgs: 321 active+clean; 230 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.7 MiB/s wr, 156 op/s
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.218 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.219 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.219 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] No VIF found with MAC fa:16:3e:02:b7:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.220 248514 INFO nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Using config drive#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.239 248514 DEBUG nova.storage.rbd_utils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.262 248514 DEBUG nova.objects.instance [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'ec2_ids' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:53:26 np0005558241 podman[361196]: 2025-12-13 08:53:26.219277507 +0000 UTC m=+0.029188093 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:53:26 np0005558241 nova_compute[248510]: 2025-12-13 08:53:26.328 248514 DEBUG nova.objects.instance [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'keypairs' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:53:27 np0005558241 podman[361196]: 2025-12-13 08:53:27.131051658 +0000 UTC m=+0.940962214 container create fbbf523e0f1120ee2e2792040101c425b1f0604ccaa49073c298869bd11fb606 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jemison, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:53:27 np0005558241 nova_compute[248510]: 2025-12-13 08:53:27.160 248514 DEBUG nova.compute.manager [req-35e03774-1c4b-4f35-b0db-cc84e1b31d19 req-ed0d5e57-d199-479f-b844-184587c87fd0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received event network-changed-1babd66f-ec6a-4702-8a8f-839d32ba8761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:53:27 np0005558241 nova_compute[248510]: 2025-12-13 08:53:27.161 248514 DEBUG nova.compute.manager [req-35e03774-1c4b-4f35-b0db-cc84e1b31d19 req-ed0d5e57-d199-479f-b844-184587c87fd0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Refreshing instance network info cache due to event network-changed-1babd66f-ec6a-4702-8a8f-839d32ba8761. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:53:27 np0005558241 nova_compute[248510]: 2025-12-13 08:53:27.161 248514 DEBUG oslo_concurrency.lockutils [req-35e03774-1c4b-4f35-b0db-cc84e1b31d19 req-ed0d5e57-d199-479f-b844-184587c87fd0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:53:27 np0005558241 nova_compute[248510]: 2025-12-13 08:53:27.161 248514 DEBUG oslo_concurrency.lockutils [req-35e03774-1c4b-4f35-b0db-cc84e1b31d19 req-ed0d5e57-d199-479f-b844-184587c87fd0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:53:27 np0005558241 nova_compute[248510]: 2025-12-13 08:53:27.161 248514 DEBUG nova.network.neutron [req-35e03774-1c4b-4f35-b0db-cc84e1b31d19 req-ed0d5e57-d199-479f-b844-184587c87fd0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Refreshing network info cache for port 1babd66f-ec6a-4702-8a8f-839d32ba8761 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:53:27 np0005558241 systemd[1]: Started libpod-conmon-fbbf523e0f1120ee2e2792040101c425b1f0604ccaa49073c298869bd11fb606.scope.
Dec 13 03:53:27 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:53:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e40a41685922ea8b8ba8ad7363f4998f0553c0bb1de592889901f2c600ce464/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e40a41685922ea8b8ba8ad7363f4998f0553c0bb1de592889901f2c600ce464/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e40a41685922ea8b8ba8ad7363f4998f0553c0bb1de592889901f2c600ce464/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e40a41685922ea8b8ba8ad7363f4998f0553c0bb1de592889901f2c600ce464/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:27 np0005558241 podman[361196]: 2025-12-13 08:53:27.233341292 +0000 UTC m=+1.043251878 container init fbbf523e0f1120ee2e2792040101c425b1f0604ccaa49073c298869bd11fb606 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:53:27 np0005558241 podman[361196]: 2025-12-13 08:53:27.242454811 +0000 UTC m=+1.052365367 container start fbbf523e0f1120ee2e2792040101c425b1f0604ccaa49073c298869bd11fb606 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jemison, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:53:27 np0005558241 podman[361196]: 2025-12-13 08:53:27.252299998 +0000 UTC m=+1.062210554 container attach fbbf523e0f1120ee2e2792040101c425b1f0604ccaa49073c298869bd11fb606 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jemison, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 03:53:27 np0005558241 podman[361233]: 2025-12-13 08:53:27.259286763 +0000 UTC m=+0.077101454 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 13 03:53:27 np0005558241 podman[361232]: 2025-12-13 08:53:27.272133175 +0000 UTC m=+0.091555477 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:53:27 np0005558241 podman[361229]: 2025-12-13 08:53:27.297915701 +0000 UTC m=+0.117478946 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Dec 13 03:53:27 np0005558241 nova_compute[248510]: 2025-12-13 08:53:27.472 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:27 np0005558241 nova_compute[248510]: 2025-12-13 08:53:27.495 248514 INFO nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Creating config drive at /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config#033[00m
Dec 13 03:53:27 np0005558241 nova_compute[248510]: 2025-12-13 08:53:27.501 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2s1lrv_2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]: {
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:    "0": [
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:        {
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "devices": [
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "/dev/loop3"
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            ],
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "lv_name": "ceph_lv0",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "lv_size": "21470642176",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "name": "ceph_lv0",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "tags": {
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.cluster_name": "ceph",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.crush_device_class": "",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.encrypted": "0",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.objectstore": "bluestore",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.osd_id": "0",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.type": "block",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.vdo": "0",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.with_tpm": "0"
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            },
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "type": "block",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "vg_name": "ceph_vg0"
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:        }
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:    ],
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:    "1": [
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:        {
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "devices": [
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "/dev/loop4"
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            ],
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "lv_name": "ceph_lv1",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "lv_size": "21470642176",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "name": "ceph_lv1",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "tags": {
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.cluster_name": "ceph",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.crush_device_class": "",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.encrypted": "0",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.objectstore": "bluestore",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.osd_id": "1",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.type": "block",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.vdo": "0",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.with_tpm": "0"
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            },
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "type": "block",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "vg_name": "ceph_vg1"
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:        }
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:    ],
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:    "2": [
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:        {
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "devices": [
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "/dev/loop5"
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            ],
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "lv_name": "ceph_lv2",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "lv_size": "21470642176",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "name": "ceph_lv2",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "tags": {
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.cluster_name": "ceph",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.crush_device_class": "",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.encrypted": "0",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.objectstore": "bluestore",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.osd_id": "2",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.type": "block",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.vdo": "0",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:                "ceph.with_tpm": "0"
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            },
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "type": "block",
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:            "vg_name": "ceph_vg2"
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:        }
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]:    ]
Dec 13 03:53:27 np0005558241 stupefied_jemison[361234]: }
Dec 13 03:53:27 np0005558241 systemd[1]: libpod-fbbf523e0f1120ee2e2792040101c425b1f0604ccaa49073c298869bd11fb606.scope: Deactivated successfully.
Dec 13 03:53:27 np0005558241 podman[361196]: 2025-12-13 08:53:27.601323549 +0000 UTC m=+1.411234105 container died fbbf523e0f1120ee2e2792040101c425b1f0604ccaa49073c298869bd11fb606 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 03:53:27 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5e40a41685922ea8b8ba8ad7363f4998f0553c0bb1de592889901f2c600ce464-merged.mount: Deactivated successfully.
Dec 13 03:53:27 np0005558241 nova_compute[248510]: 2025-12-13 08:53:27.661 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2s1lrv_2" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:27 np0005558241 podman[361196]: 2025-12-13 08:53:27.681668053 +0000 UTC m=+1.491578609 container remove fbbf523e0f1120ee2e2792040101c425b1f0604ccaa49073c298869bd11fb606 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:53:27 np0005558241 systemd[1]: libpod-conmon-fbbf523e0f1120ee2e2792040101c425b1f0604ccaa49073c298869bd11fb606.scope: Deactivated successfully.
Dec 13 03:53:27 np0005558241 nova_compute[248510]: 2025-12-13 08:53:27.697 248514 DEBUG nova.storage.rbd_utils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] rbd image fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:27 np0005558241 nova_compute[248510]: 2025-12-13 08:53:27.701 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:28 np0005558241 nova_compute[248510]: 2025-12-13 08:53:28.034 248514 DEBUG oslo_concurrency.processutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config fcc617ec-f5f9-41bb-ad4b-86d790622e74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:28 np0005558241 nova_compute[248510]: 2025-12-13 08:53:28.036 248514 INFO nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Deleting local config drive /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74/disk.config because it was imported into RBD.#033[00m
Dec 13 03:53:28 np0005558241 kernel: tapb2143648-4c: entered promiscuous mode
Dec 13 03:53:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:28Z|01096|binding|INFO|Claiming lport b2143648-4c23-49b5-8777-433a5b34c7ce for this chassis.
Dec 13 03:53:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:28Z|01097|binding|INFO|b2143648-4c23-49b5-8777-433a5b34c7ce: Claiming fa:16:3e:02:b7:87 10.100.0.4
Dec 13 03:53:28 np0005558241 nova_compute[248510]: 2025-12-13 08:53:28.099 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:28 np0005558241 NetworkManager[50376]: <info>  [1765616008.1035] manager: (tapb2143648-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/458)
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.105 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:b7:87 10.100.0.4'], port_security=['fa:16:3e:02:b7:87 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fcc617ec-f5f9-41bb-ad4b-86d790622e74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db56c55-78f1-455f-855e-db3acef05ff3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff4d2c6ad4dc4848ac9f55ff1b9e829a', 'neutron:revision_number': '7', 'neutron:security_group_ids': '9bebfabc-b3c7-415b-9881-5a1028b3b8d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=efdd787e-21fc-422f-be64-57ef7368490d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b2143648-4c23-49b5-8777-433a5b34c7ce) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.107 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b2143648-4c23-49b5-8777-433a5b34c7ce in datapath 6db56c55-78f1-455f-855e-db3acef05ff3 bound to our chassis#033[00m
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.109 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6db56c55-78f1-455f-855e-db3acef05ff3#033[00m
Dec 13 03:53:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:28Z|01098|binding|INFO|Setting lport b2143648-4c23-49b5-8777-433a5b34c7ce ovn-installed in OVS
Dec 13 03:53:28 np0005558241 nova_compute[248510]: 2025-12-13 08:53:28.123 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:28Z|01099|binding|INFO|Setting lport b2143648-4c23-49b5-8777-433a5b34c7ce up in Southbound
Dec 13 03:53:28 np0005558241 nova_compute[248510]: 2025-12-13 08:53:28.125 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.131 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eb0b2531-f3db-4419-afd3-7d49d09e47af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.132 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6db56c55-71 in ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.134 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6db56c55-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.134 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9e6de256-b9fb-402d-a0e4-0f51020d9e70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.135 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5590f2a5-2f30-4e0e-8e96-56d8e8796427]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.150 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[0dcae99d-f9b3-4714-b7ad-e833bc757ab5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:28 np0005558241 systemd-udevd[361427]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:53:28 np0005558241 systemd-machined[210538]: New machine qemu-140-instance-0000006f.
Dec 13 03:53:28 np0005558241 systemd[1]: Started Virtual Machine qemu-140-instance-0000006f.
Dec 13 03:53:28 np0005558241 NetworkManager[50376]: <info>  [1765616008.1702] device (tapb2143648-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:53:28 np0005558241 NetworkManager[50376]: <info>  [1765616008.1711] device (tapb2143648-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.167 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eded7415-70b3-4606-bbdd-4d55a7c87be2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2757: 321 pgs: 321 active+clean; 281 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 195 op/s
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.204 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0e38d1e0-509a-4f50-8901-fe085a6ff4ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:28 np0005558241 NetworkManager[50376]: <info>  [1765616008.2130] manager: (tap6db56c55-70): new Veth device (/org/freedesktop/NetworkManager/Devices/459)
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.211 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7794c3-e653-4fba-b5a7-924cebf73063]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:28 np0005558241 systemd-udevd[361436]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.251 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d182d9-1d23-4550-9b03-1a68b63c9036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.256 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b88f2769-f2f9-44dc-85fe-97d26b2cf98f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:28 np0005558241 podman[361425]: 2025-12-13 08:53:28.26821993 +0000 UTC m=+0.104392979 container create 942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:53:28 np0005558241 NetworkManager[50376]: <info>  [1765616008.2872] device (tap6db56c55-70): carrier: link connected
Dec 13 03:53:28 np0005558241 podman[361425]: 2025-12-13 08:53:28.197779464 +0000 UTC m=+0.033952543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.295 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f611e0ba-2400-4a72-9f63-e4ac0981682c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.315 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eebdb9e1-5792-4137-8103-9e0573512f77]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6db56c55-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:4b:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858550, 'reachable_time': 35475, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361470, 'error': None, 'target': 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:28 np0005558241 systemd[1]: Started libpod-conmon-942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e.scope.
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.341 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1989fd-1ecb-4474-be90-aef5058b1799]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:4b23'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858550, 'tstamp': 858550}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361473, 'error': None, 'target': 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.360 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0a720e9d-b1c9-4cd7-8a12-d3cf09c380b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6db56c55-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:4b:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 326], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858550, 'reachable_time': 35475, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361477, 'error': None, 'target': 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:28 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.400 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[79311578-0cb8-4adc-80ae-cdc1d7fc1f8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.458 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e85945d4-ae63-4fb1-a0bf-77fcfbfe16ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.459 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6db56c55-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.459 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.460 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6db56c55-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:28 np0005558241 nova_compute[248510]: 2025-12-13 08:53:28.462 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:28 np0005558241 kernel: tap6db56c55-70: entered promiscuous mode
Dec 13 03:53:28 np0005558241 NetworkManager[50376]: <info>  [1765616008.4626] manager: (tap6db56c55-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/460)
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.471 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6db56c55-70, col_values=(('external_ids', {'iface-id': '401abfe8-06be-4b57-8432-310dcd747a81'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:28Z|01100|binding|INFO|Releasing lport 401abfe8-06be-4b57-8432-310dcd747a81 from this chassis (sb_readonly=0)
Dec 13 03:53:28 np0005558241 nova_compute[248510]: 2025-12-13 08:53:28.472 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:28 np0005558241 podman[361425]: 2025-12-13 08:53:28.475187158 +0000 UTC m=+0.311360237 container init 942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_swirles, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 03:53:28 np0005558241 podman[361425]: 2025-12-13 08:53:28.483940948 +0000 UTC m=+0.320114027 container start 942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.488 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6db56c55-78f1-455f-855e-db3acef05ff3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6db56c55-78f1-455f-855e-db3acef05ff3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:53:28 np0005558241 nova_compute[248510]: 2025-12-13 08:53:28.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:28 np0005558241 gifted_swirles[361474]: 167 167
Dec 13 03:53:28 np0005558241 systemd[1]: libpod-942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e.scope: Deactivated successfully.
Dec 13 03:53:28 np0005558241 conmon[361474]: conmon 942894bcba2677ad5a77 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e.scope/container/memory.events
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.492 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5385aa5a-61da-4439-8509-d12b307269e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.493 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-6db56c55-78f1-455f-855e-db3acef05ff3
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/6db56c55-78f1-455f-855e-db3acef05ff3.pid.haproxy
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 6db56c55-78f1-455f-855e-db3acef05ff3
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:53:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:28.493 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'env', 'PROCESS_TAG=haproxy-6db56c55-78f1-455f-855e-db3acef05ff3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6db56c55-78f1-455f-855e-db3acef05ff3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:53:28 np0005558241 podman[361425]: 2025-12-13 08:53:28.745547247 +0000 UTC m=+0.581720316 container attach 942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:53:28 np0005558241 podman[361425]: 2025-12-13 08:53:28.747017254 +0000 UTC m=+0.583190313 container died 942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:53:28 np0005558241 nova_compute[248510]: 2025-12-13 08:53:28.862 248514 DEBUG nova.network.neutron [req-35e03774-1c4b-4f35-b0db-cc84e1b31d19 req-ed0d5e57-d199-479f-b844-184587c87fd0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Updated VIF entry in instance network info cache for port 1babd66f-ec6a-4702-8a8f-839d32ba8761. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:53:28 np0005558241 nova_compute[248510]: 2025-12-13 08:53:28.863 248514 DEBUG nova.network.neutron [req-35e03774-1c4b-4f35-b0db-cc84e1b31d19 req-ed0d5e57-d199-479f-b844-184587c87fd0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Updating instance_info_cache with network_info: [{"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:53:28 np0005558241 nova_compute[248510]: 2025-12-13 08:53:28.892 248514 DEBUG oslo_concurrency.lockutils [req-35e03774-1c4b-4f35-b0db-cc84e1b31d19 req-ed0d5e57-d199-479f-b844-184587c87fd0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:53:29 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7600f55b210fa74d466a8b1ac569a0c090401c1d0835a102c474385c023cb414-merged.mount: Deactivated successfully.
Dec 13 03:53:29 np0005558241 podman[361425]: 2025-12-13 08:53:29.156215623 +0000 UTC m=+0.992388672 container remove 942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:53:29 np0005558241 podman[361522]: 2025-12-13 08:53:29.170207354 +0000 UTC m=+0.119720323 container create 60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 03:53:29 np0005558241 podman[361522]: 2025-12-13 08:53:29.076678479 +0000 UTC m=+0.026191478 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:53:29 np0005558241 systemd[1]: Started libpod-conmon-60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0.scope.
Dec 13 03:53:29 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:53:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00eeaad21779e0d1e4a21f624f33c14a068fe0d0787499d4a34517514951359/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.280 248514 DEBUG nova.compute.manager [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.282 248514 DEBUG oslo_concurrency.lockutils [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.282 248514 DEBUG oslo_concurrency.lockutils [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.283 248514 DEBUG oslo_concurrency.lockutils [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.283 248514 DEBUG nova.compute.manager [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Processing event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.283 248514 DEBUG nova.compute.manager [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.284 248514 DEBUG oslo_concurrency.lockutils [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.284 248514 DEBUG oslo_concurrency.lockutils [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.284 248514 DEBUG oslo_concurrency.lockutils [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.284 248514 DEBUG nova.compute.manager [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] No waiting events found dispatching network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.285 248514 WARNING nova.compute.manager [req-00d9f6d1-42e6-4b43-bc03-224fe7fd48f9 req-6d84d908-ab0e-42a4-beb0-4659845e7dcd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received unexpected event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce for instance with vm_state shelved_offloaded and task_state spawning.#033[00m
Dec 13 03:53:29 np0005558241 podman[361522]: 2025-12-13 08:53:29.308956323 +0000 UTC m=+0.258469322 container init 60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 03:53:29 np0005558241 podman[361522]: 2025-12-13 08:53:29.314238025 +0000 UTC m=+0.263750994 container start 60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:53:29 np0005558241 systemd[1]: libpod-conmon-942894bcba2677ad5a77a00a04b808681f1f1491ec62ee72b21f52f6b1fb450e.scope: Deactivated successfully.
Dec 13 03:53:29 np0005558241 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[361538]: [NOTICE]   (361549) : New worker (361560) forked
Dec 13 03:53:29 np0005558241 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[361538]: [NOTICE]   (361549) : Loading success.
Dec 13 03:53:29 np0005558241 podman[361546]: 2025-12-13 08:53:29.378944088 +0000 UTC m=+0.058693843 container create b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 13 03:53:29 np0005558241 systemd[1]: Started libpod-conmon-b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380.scope.
Dec 13 03:53:29 np0005558241 podman[361546]: 2025-12-13 08:53:29.361972852 +0000 UTC m=+0.041722817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:53:29 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:53:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac544eb799c93be48db0053ecb792d4e1bc104ca717a3e0662d311a3735fa07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac544eb799c93be48db0053ecb792d4e1bc104ca717a3e0662d311a3735fa07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac544eb799c93be48db0053ecb792d4e1bc104ca717a3e0662d311a3735fa07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac544eb799c93be48db0053ecb792d4e1bc104ca717a3e0662d311a3735fa07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:29 np0005558241 podman[361546]: 2025-12-13 08:53:29.482425652 +0000 UTC m=+0.162175417 container init b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_maxwell, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 03:53:29 np0005558241 podman[361546]: 2025-12-13 08:53:29.489658584 +0000 UTC m=+0.169408349 container start b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_maxwell, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:53:29 np0005558241 podman[361546]: 2025-12-13 08:53:29.49309824 +0000 UTC m=+0.172848035 container attach b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_maxwell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.756 248514 DEBUG nova.compute.manager [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.756 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616009.7554352, fcc617ec-f5f9-41bb-ad4b-86d790622e74 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.757 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] VM Started (Lifecycle Event)#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.763 248514 DEBUG nova.virt.libvirt.driver [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.769 248514 INFO nova.virt.libvirt.driver [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance spawned successfully.#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.789 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.792 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.813 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.814 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616009.7557135, fcc617ec-f5f9-41bb-ad4b-86d790622e74 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.814 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.838 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.852 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616009.7627423, fcc617ec-f5f9-41bb-ad4b-86d790622e74 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.852 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.880 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.886 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:53:29 np0005558241 nova_compute[248510]: 2025-12-13 08:53:29.908 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:53:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Dec 13 03:53:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Dec 13 03:53:30 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Dec 13 03:53:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2759: 321 pgs: 321 active+clean; 325 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 7.2 MiB/s wr, 247 op/s
Dec 13 03:53:30 np0005558241 lvm[361691]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:53:30 np0005558241 lvm[361691]: VG ceph_vg0 finished
Dec 13 03:53:30 np0005558241 lvm[361693]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:53:30 np0005558241 lvm[361693]: VG ceph_vg1 finished
Dec 13 03:53:30 np0005558241 lvm[361694]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:53:30 np0005558241 lvm[361694]: VG ceph_vg2 finished
Dec 13 03:53:30 np0005558241 compassionate_maxwell[361574]: {}
Dec 13 03:53:30 np0005558241 systemd[1]: libpod-b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380.scope: Deactivated successfully.
Dec 13 03:53:30 np0005558241 podman[361546]: 2025-12-13 08:53:30.384349266 +0000 UTC m=+1.064099031 container died b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 03:53:30 np0005558241 systemd[1]: libpod-b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380.scope: Consumed 1.398s CPU time.
Dec 13 03:53:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8ac544eb799c93be48db0053ecb792d4e1bc104ca717a3e0662d311a3735fa07-merged.mount: Deactivated successfully.
Dec 13 03:53:30 np0005558241 podman[361546]: 2025-12-13 08:53:30.506123369 +0000 UTC m=+1.185873134 container remove b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:53:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:53:30 np0005558241 systemd[1]: libpod-conmon-b70636f1f6f4137dff6e3cb60dcde461d9aa48292d7079b24ec0279bdca0f380.scope: Deactivated successfully.
Dec 13 03:53:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:53:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:53:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:53:30 np0005558241 nova_compute[248510]: 2025-12-13 08:53:30.583 248514 DEBUG nova.compute.manager [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:53:30 np0005558241 nova_compute[248510]: 2025-12-13 08:53:30.663 248514 DEBUG oslo_concurrency.lockutils [None req-53035912-a9db-45de-8f54-6461df6f5f7a fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 18.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:30 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Dec 13 03:53:31 np0005558241 nova_compute[248510]: 2025-12-13 08:53:31.101 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:31 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:53:31 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:53:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:31.471 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a97dbe15-88c4-44cf-8dbe-4f1cf8c59d01, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cad8a42-b0dd-468b-aee7-cbceaefdb18d) old=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2 2001:db8::f816:3eff:fea4:3f9d'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:53:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:31.473 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cad8a42-b0dd-468b-aee7-cbceaefdb18d in datapath a100b8b9-7333-44af-8310-9d269980caa2 updated#033[00m
Dec 13 03:53:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:31.475 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a100b8b9-7333-44af-8310-9d269980caa2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:53:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:31.476 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[743fa8f2-2586-467f-a71f-91dff596ac07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:31 np0005558241 nova_compute[248510]: 2025-12-13 08:53:31.892 248514 INFO nova.compute.manager [None req-a00f6f9a-ea03-479b-9d64-e20cdf6ae142 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Get console output#033[00m
Dec 13 03:53:31 np0005558241 nova_compute[248510]: 2025-12-13 08:53:31.899 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 03:53:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2760: 321 pgs: 321 active+clean; 325 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 7.2 MiB/s wr, 247 op/s
Dec 13 03:53:32 np0005558241 nova_compute[248510]: 2025-12-13 08:53:32.475 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:32 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:32Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b1:81:46 10.100.0.10
Dec 13 03:53:32 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:32Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b1:81:46 10.100.0.10
Dec 13 03:53:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:33.185 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2 2001:db8::f816:3eff:fea4:3f9d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a97dbe15-88c4-44cf-8dbe-4f1cf8c59d01, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cad8a42-b0dd-468b-aee7-cbceaefdb18d) old=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:53:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:33.186 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cad8a42-b0dd-468b-aee7-cbceaefdb18d in datapath a100b8b9-7333-44af-8310-9d269980caa2 updated#033[00m
Dec 13 03:53:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:33.188 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a100b8b9-7333-44af-8310-9d269980caa2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:53:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:33.188 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7b5e7c98-f02d-4052-bd76-a2987323147f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2761: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 7.8 MiB/s wr, 281 op/s
Dec 13 03:53:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:53:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Dec 13 03:53:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Dec 13 03:53:35 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Dec 13 03:53:36 np0005558241 nova_compute[248510]: 2025-12-13 08:53:36.103 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2763: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 6.4 MiB/s wr, 294 op/s
Dec 13 03:53:37 np0005558241 nova_compute[248510]: 2025-12-13 08:53:37.479 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:37.686 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a97dbe15-88c4-44cf-8dbe-4f1cf8c59d01, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cad8a42-b0dd-468b-aee7-cbceaefdb18d) old=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2 2001:db8::f816:3eff:fea4:3f9d'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:53:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:37.687 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cad8a42-b0dd-468b-aee7-cbceaefdb18d in datapath a100b8b9-7333-44af-8310-9d269980caa2 updated#033[00m
Dec 13 03:53:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:37.689 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a100b8b9-7333-44af-8310-9d269980caa2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:53:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:37.690 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[878e19b1-e46c-4a58-b6c2-99a3b39b8a82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2764: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.2 MiB/s wr, 229 op/s
Dec 13 03:53:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:53:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:53:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:53:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:53:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:53:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:53:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2765: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 185 op/s
Dec 13 03:53:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:53:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:40.650 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2 2001:db8::f816:3eff:fea4:3f9d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a97dbe15-88c4-44cf-8dbe-4f1cf8c59d01, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cad8a42-b0dd-468b-aee7-cbceaefdb18d) old=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:53:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:40.651 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cad8a42-b0dd-468b-aee7-cbceaefdb18d in datapath a100b8b9-7333-44af-8310-9d269980caa2 updated#033[00m
Dec 13 03:53:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:40.653 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a100b8b9-7333-44af-8310-9d269980caa2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:53:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:40.654 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd44a25-a181-43e4-9ab8-c729792b95dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:41 np0005558241 nova_compute[248510]: 2025-12-13 08:53:41.105 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2766: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 185 op/s
Dec 13 03:53:42 np0005558241 nova_compute[248510]: 2025-12-13 08:53:42.481 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:43 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:43Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:b7:87 10.100.0.4
Dec 13 03:53:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2767: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 436 KiB/s rd, 40 KiB/s wr, 33 op/s
Dec 13 03:53:45 np0005558241 nova_compute[248510]: 2025-12-13 08:53:45.109 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "24e9bc91-cab7-4459-921c-5000eb9839b7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:45 np0005558241 nova_compute[248510]: 2025-12-13 08:53:45.110 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:45 np0005558241 nova_compute[248510]: 2025-12-13 08:53:45.181 248514 DEBUG nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:53:45 np0005558241 nova_compute[248510]: 2025-12-13 08:53:45.284 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:45 np0005558241 nova_compute[248510]: 2025-12-13 08:53:45.284 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:45 np0005558241 nova_compute[248510]: 2025-12-13 08:53:45.296 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:53:45 np0005558241 nova_compute[248510]: 2025-12-13 08:53:45.296 248514 INFO nova.compute.claims [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:53:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:53:45 np0005558241 nova_compute[248510]: 2025-12-13 08:53:45.663 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:46 np0005558241 nova_compute[248510]: 2025-12-13 08:53:46.107 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2768: 321 pgs: 321 active+clean; 279 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 410 KiB/s rd, 38 KiB/s wr, 31 op/s
Dec 13 03:53:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:53:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1435371036' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:53:46 np0005558241 nova_compute[248510]: 2025-12-13 08:53:46.242 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:46 np0005558241 nova_compute[248510]: 2025-12-13 08:53:46.249 248514 DEBUG nova.compute.provider_tree [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:53:46 np0005558241 nova_compute[248510]: 2025-12-13 08:53:46.276 248514 DEBUG nova.scheduler.client.report [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:53:46 np0005558241 nova_compute[248510]: 2025-12-13 08:53:46.450 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:46 np0005558241 nova_compute[248510]: 2025-12-13 08:53:46.450 248514 DEBUG nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:53:46 np0005558241 nova_compute[248510]: 2025-12-13 08:53:46.539 248514 DEBUG nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:53:46 np0005558241 nova_compute[248510]: 2025-12-13 08:53:46.540 248514 DEBUG nova.network.neutron [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:53:46 np0005558241 nova_compute[248510]: 2025-12-13 08:53:46.620 248514 INFO nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:53:46 np0005558241 nova_compute[248510]: 2025-12-13 08:53:46.797 248514 DEBUG nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:53:46 np0005558241 nova_compute[248510]: 2025-12-13 08:53:46.804 248514 DEBUG nova.policy [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.002 248514 DEBUG nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.003 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.004 248514 INFO nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Creating image(s)#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.030 248514 DEBUG nova.storage.rbd_utils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 24e9bc91-cab7-4459-921c-5000eb9839b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.061 248514 DEBUG nova.storage.rbd_utils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 24e9bc91-cab7-4459-921c-5000eb9839b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.090 248514 DEBUG nova.storage.rbd_utils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 24e9bc91-cab7-4459-921c-5000eb9839b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.094 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.169 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.170 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.171 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.171 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.199 248514 DEBUG nova.storage.rbd_utils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 24e9bc91-cab7-4459-921c-5000eb9839b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.202 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 24e9bc91-cab7-4459-921c-5000eb9839b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.484 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.511 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 24e9bc91-cab7-4459-921c-5000eb9839b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.565 248514 DEBUG nova.storage.rbd_utils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image 24e9bc91-cab7-4459-921c-5000eb9839b7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.645 248514 DEBUG nova.objects.instance [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid 24e9bc91-cab7-4459-921c-5000eb9839b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.677 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.678 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Ensure instance console log exists: /var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.678 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.679 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.679 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.941 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.941 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:47 np0005558241 nova_compute[248510]: 2025-12-13 08:53:47.964 248514 DEBUG nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:53:48 np0005558241 nova_compute[248510]: 2025-12-13 08:53:48.042 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:48 np0005558241 nova_compute[248510]: 2025-12-13 08:53:48.042 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:48 np0005558241 nova_compute[248510]: 2025-12-13 08:53:48.050 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:53:48 np0005558241 nova_compute[248510]: 2025-12-13 08:53:48.050 248514 INFO nova.compute.claims [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:53:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2769: 321 pgs: 321 active+clean; 300 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 539 KiB/s rd, 926 KiB/s wr, 47 op/s
Dec 13 03:53:48 np0005558241 nova_compute[248510]: 2025-12-13 08:53:48.279 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:48 np0005558241 nova_compute[248510]: 2025-12-13 08:53:48.647 248514 DEBUG nova.network.neutron [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Successfully created port: 0f4fe885-0823-4b8e-93ad-70a45aba4b2e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:53:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:53:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315581974' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:53:48 np0005558241 nova_compute[248510]: 2025-12-13 08:53:48.855 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:48 np0005558241 nova_compute[248510]: 2025-12-13 08:53:48.862 248514 DEBUG nova.compute.provider_tree [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:53:48 np0005558241 nova_compute[248510]: 2025-12-13 08:53:48.889 248514 DEBUG nova.scheduler.client.report [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:53:48 np0005558241 nova_compute[248510]: 2025-12-13 08:53:48.923 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:48 np0005558241 nova_compute[248510]: 2025-12-13 08:53:48.924 248514 DEBUG nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:53:48 np0005558241 nova_compute[248510]: 2025-12-13 08:53:48.978 248514 DEBUG nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:53:48 np0005558241 nova_compute[248510]: 2025-12-13 08:53:48.978 248514 DEBUG nova.network.neutron [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.005 248514 INFO nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.030 248514 DEBUG nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.153 248514 DEBUG nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.155 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.155 248514 INFO nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Creating image(s)#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.180 248514 DEBUG nova.storage.rbd_utils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.205 248514 DEBUG nova.storage.rbd_utils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.226 248514 DEBUG nova.storage.rbd_utils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.230 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.284 248514 DEBUG nova.policy [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c377508dda354c0aa762d15d52aa130c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.337 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.337 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.338 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.338 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.358 248514 DEBUG nova.storage.rbd_utils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.362 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:49.463 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3f:9d 2001:db8::f816:3eff:fea4:3f9d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a97dbe15-88c4-44cf-8dbe-4f1cf8c59d01, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cad8a42-b0dd-468b-aee7-cbceaefdb18d) old=Port_Binding(mac=['fa:16:3e:a4:3f:9d 10.100.0.2 2001:db8::f816:3eff:fea4:3f9d'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:53:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:49.465 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cad8a42-b0dd-468b-aee7-cbceaefdb18d in datapath a100b8b9-7333-44af-8310-9d269980caa2 updated#033[00m
Dec 13 03:53:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:49.466 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a100b8b9-7333-44af-8310-9d269980caa2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:53:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:49.467 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[feddfe8e-1d70-4994-af02-d36b0a9bffe1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.749 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.819 248514 DEBUG nova.storage.rbd_utils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] resizing rbd image fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.953 248514 DEBUG nova.objects.instance [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'migration_context' on Instance uuid fa7f7cf9-50d4-461e-ab73-21e65aa729a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.985 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.985 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Ensure instance console log exists: /var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.986 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.986 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:49 np0005558241 nova_compute[248510]: 2025-12-13 08:53:49.987 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2770: 321 pgs: 321 active+clean; 327 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 556 KiB/s rd, 1.8 MiB/s wr, 73 op/s
Dec 13 03:53:50 np0005558241 nova_compute[248510]: 2025-12-13 08:53:50.266 248514 DEBUG nova.network.neutron [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Successfully created port: 6585c17a-67f3-4c7c-a637-4acf71e85c4f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:53:50 np0005558241 nova_compute[248510]: 2025-12-13 08:53:50.435 248514 DEBUG nova.network.neutron [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Successfully updated port: 0f4fe885-0823-4b8e-93ad-70a45aba4b2e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:53:50 np0005558241 nova_compute[248510]: 2025-12-13 08:53:50.456 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-24e9bc91-cab7-4459-921c-5000eb9839b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:53:50 np0005558241 nova_compute[248510]: 2025-12-13 08:53:50.456 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-24e9bc91-cab7-4459-921c-5000eb9839b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:53:50 np0005558241 nova_compute[248510]: 2025-12-13 08:53:50.456 248514 DEBUG nova.network.neutron [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:53:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:53:50 np0005558241 nova_compute[248510]: 2025-12-13 08:53:50.633 248514 DEBUG nova.compute.manager [req-ad99d582-a810-4f0b-a606-c2d0102d527c req-8fa000e4-8859-4a04-b0e8-40a39e716651 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Received event network-changed-0f4fe885-0823-4b8e-93ad-70a45aba4b2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:53:50 np0005558241 nova_compute[248510]: 2025-12-13 08:53:50.633 248514 DEBUG nova.compute.manager [req-ad99d582-a810-4f0b-a606-c2d0102d527c req-8fa000e4-8859-4a04-b0e8-40a39e716651 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Refreshing instance network info cache due to event network-changed-0f4fe885-0823-4b8e-93ad-70a45aba4b2e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:53:50 np0005558241 nova_compute[248510]: 2025-12-13 08:53:50.634 248514 DEBUG oslo_concurrency.lockutils [req-ad99d582-a810-4f0b-a606-c2d0102d527c req-8fa000e4-8859-4a04-b0e8-40a39e716651 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-24e9bc91-cab7-4459-921c-5000eb9839b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:53:50 np0005558241 nova_compute[248510]: 2025-12-13 08:53:50.686 248514 DEBUG nova.network.neutron [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.110 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.444 248514 DEBUG nova.network.neutron [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Successfully updated port: 6585c17a-67f3-4c7c-a637-4acf71e85c4f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.463 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.464 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquired lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.464 248514 DEBUG nova.network.neutron [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.563 248514 DEBUG nova.compute.manager [req-d4969542-a4b5-4519-af0a-7b456b67986d req-7a215a0f-0c1d-4cc2-a825-d9db7d5c05dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received event network-changed-6585c17a-67f3-4c7c-a637-4acf71e85c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.564 248514 DEBUG nova.compute.manager [req-d4969542-a4b5-4519-af0a-7b456b67986d req-7a215a0f-0c1d-4cc2-a825-d9db7d5c05dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Refreshing instance network info cache due to event network-changed-6585c17a-67f3-4c7c-a637-4acf71e85c4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.565 248514 DEBUG oslo_concurrency.lockutils [req-d4969542-a4b5-4519-af0a-7b456b67986d req-7a215a0f-0c1d-4cc2-a825-d9db7d5c05dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.745 248514 DEBUG nova.network.neutron [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.909 248514 DEBUG nova.network.neutron [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Updating instance_info_cache with network_info: [{"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.943 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-24e9bc91-cab7-4459-921c-5000eb9839b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.943 248514 DEBUG nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Instance network_info: |[{"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.944 248514 DEBUG oslo_concurrency.lockutils [req-ad99d582-a810-4f0b-a606-c2d0102d527c req-8fa000e4-8859-4a04-b0e8-40a39e716651 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-24e9bc91-cab7-4459-921c-5000eb9839b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.944 248514 DEBUG nova.network.neutron [req-ad99d582-a810-4f0b-a606-c2d0102d527c req-8fa000e4-8859-4a04-b0e8-40a39e716651 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Refreshing network info cache for port 0f4fe885-0823-4b8e-93ad-70a45aba4b2e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.947 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Start _get_guest_xml network_info=[{"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.951 248514 WARNING nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.956 248514 DEBUG nova.virt.libvirt.host [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.957 248514 DEBUG nova.virt.libvirt.host [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.960 248514 DEBUG nova.virt.libvirt.host [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.960 248514 DEBUG nova.virt.libvirt.host [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.961 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.961 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.961 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.962 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.962 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.962 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.962 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.962 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.963 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.963 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.963 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.963 248514 DEBUG nova.virt.hardware [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:53:51 np0005558241 nova_compute[248510]: 2025-12-13 08:53:51.967 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2771: 321 pgs: 321 active+clean; 327 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 556 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Dec 13 03:53:52 np0005558241 nova_compute[248510]: 2025-12-13 08:53:52.526 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:53:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1324248881' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:53:52 np0005558241 nova_compute[248510]: 2025-12-13 08:53:52.588 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:52 np0005558241 nova_compute[248510]: 2025-12-13 08:53:52.611 248514 DEBUG nova.storage.rbd_utils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 24e9bc91-cab7-4459-921c-5000eb9839b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:52 np0005558241 nova_compute[248510]: 2025-12-13 08:53:52.615 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.044 248514 DEBUG nova.network.neutron [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Updating instance_info_cache with network_info: [{"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.077 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Releasing lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.077 248514 DEBUG nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Instance network_info: |[{"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.078 248514 DEBUG oslo_concurrency.lockutils [req-d4969542-a4b5-4519-af0a-7b456b67986d req-7a215a0f-0c1d-4cc2-a825-d9db7d5c05dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.078 248514 DEBUG nova.network.neutron [req-d4969542-a4b5-4519-af0a-7b456b67986d req-7a215a0f-0c1d-4cc2-a825-d9db7d5c05dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Refreshing network info cache for port 6585c17a-67f3-4c7c-a637-4acf71e85c4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.081 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Start _get_guest_xml network_info=[{"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.085 248514 WARNING nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.089 248514 DEBUG nova.virt.libvirt.host [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.090 248514 DEBUG nova.virt.libvirt.host [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.097 248514 DEBUG nova.virt.libvirt.host [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.098 248514 DEBUG nova.virt.libvirt.host [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.098 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.098 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.099 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.099 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.099 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.099 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.099 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.100 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.100 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.100 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.100 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.100 248514 DEBUG nova.virt.hardware [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.104 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:53:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2818679212' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.207 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.208 248514 DEBUG nova.virt.libvirt.vif [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1197553871',display_name='tempest-TestNetworkBasicOps-server-1197553871',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1197553871',id=114,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN0tz1T7CupVnJjmfwxRYFIICN0QfqtBB3GDRo75b9UqyPvHjXgcUKDczGDgsRsdgI58Js+Fgc15P+M+AHNMFdqZevDxnjQbmKdK1Wi86XTXa0E7byhCNYmQdGF2ON/oDA==',key_name='tempest-TestNetworkBasicOps-982734592',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-03pexmx7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:46Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=24e9bc91-cab7-4459-921c-5000eb9839b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.209 248514 DEBUG nova.network.os_vif_util [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.210 248514 DEBUG nova.network.os_vif_util [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:ed:41,bridge_name='br-int',has_traffic_filtering=True,id=0f4fe885-0823-4b8e-93ad-70a45aba4b2e,network=Network(2e91b4b4-aa39-4f2a-bb46-9126feb64b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f4fe885-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.211 248514 DEBUG nova.objects.instance [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid 24e9bc91-cab7-4459-921c-5000eb9839b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.234 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:53:53 np0005558241 nova_compute[248510]:  <uuid>24e9bc91-cab7-4459-921c-5000eb9839b7</uuid>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:  <name>instance-00000072</name>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkBasicOps-server-1197553871</nova:name>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:53:51</nova:creationTime>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:53:53 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:        <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:        <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:        <nova:port uuid="0f4fe885-0823-4b8e-93ad-70a45aba4b2e">
Dec 13 03:53:53 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <entry name="serial">24e9bc91-cab7-4459-921c-5000eb9839b7</entry>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <entry name="uuid">24e9bc91-cab7-4459-921c-5000eb9839b7</entry>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/24e9bc91-cab7-4459-921c-5000eb9839b7_disk">
Dec 13 03:53:53 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:53:53 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/24e9bc91-cab7-4459-921c-5000eb9839b7_disk.config">
Dec 13 03:53:53 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:53:53 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:23:ed:41"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <target dev="tap0f4fe885-08"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7/console.log" append="off"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:53:53 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:53:53 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:53:53 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:53:53 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.235 248514 DEBUG nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Preparing to wait for external event network-vif-plugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.235 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.236 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.236 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.237 248514 DEBUG nova.virt.libvirt.vif [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1197553871',display_name='tempest-TestNetworkBasicOps-server-1197553871',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1197553871',id=114,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN0tz1T7CupVnJjmfwxRYFIICN0QfqtBB3GDRo75b9UqyPvHjXgcUKDczGDgsRsdgI58Js+Fgc15P+M+AHNMFdqZevDxnjQbmKdK1Wi86XTXa0E7byhCNYmQdGF2ON/oDA==',key_name='tempest-TestNetworkBasicOps-982734592',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-03pexmx7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:46Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=24e9bc91-cab7-4459-921c-5000eb9839b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.237 248514 DEBUG nova.network.os_vif_util [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.238 248514 DEBUG nova.network.os_vif_util [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:ed:41,bridge_name='br-int',has_traffic_filtering=True,id=0f4fe885-0823-4b8e-93ad-70a45aba4b2e,network=Network(2e91b4b4-aa39-4f2a-bb46-9126feb64b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f4fe885-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.239 248514 DEBUG os_vif [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:ed:41,bridge_name='br-int',has_traffic_filtering=True,id=0f4fe885-0823-4b8e-93ad-70a45aba4b2e,network=Network(2e91b4b4-aa39-4f2a-bb46-9126feb64b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f4fe885-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.239 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.240 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.240 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.244 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.244 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f4fe885-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.244 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0f4fe885-08, col_values=(('external_ids', {'iface-id': '0f4fe885-0823-4b8e-93ad-70a45aba4b2e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:ed:41', 'vm-uuid': '24e9bc91-cab7-4459-921c-5000eb9839b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.246 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:53 np0005558241 NetworkManager[50376]: <info>  [1765616033.2473] manager: (tap0f4fe885-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.248 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.257 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.259 248514 INFO os_vif [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:ed:41,bridge_name='br-int',has_traffic_filtering=True,id=0f4fe885-0823-4b8e-93ad-70a45aba4b2e,network=Network(2e91b4b4-aa39-4f2a-bb46-9126feb64b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f4fe885-08')#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.339 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.339 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.339 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:23:ed:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.340 248514 INFO nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Using config drive#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.359 248514 DEBUG nova.storage.rbd_utils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 24e9bc91-cab7-4459-921c-5000eb9839b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.474 248514 DEBUG nova.network.neutron [req-ad99d582-a810-4f0b-a606-c2d0102d527c req-8fa000e4-8859-4a04-b0e8-40a39e716651 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Updated VIF entry in instance network info cache for port 0f4fe885-0823-4b8e-93ad-70a45aba4b2e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.475 248514 DEBUG nova.network.neutron [req-ad99d582-a810-4f0b-a606-c2d0102d527c req-8fa000e4-8859-4a04-b0e8-40a39e716651 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Updating instance_info_cache with network_info: [{"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.496 248514 DEBUG oslo_concurrency.lockutils [req-ad99d582-a810-4f0b-a606-c2d0102d527c req-8fa000e4-8859-4a04-b0e8-40a39e716651 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-24e9bc91-cab7-4459-921c-5000eb9839b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:53:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:53:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3522719023' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.700 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.726 248514 DEBUG nova.storage.rbd_utils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:53 np0005558241 nova_compute[248510]: 2025-12-13 08:53:53.730 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.091 248514 INFO nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Creating config drive at /var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7/disk.config#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.096 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3k2pccjs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2772: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 573 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.238 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3k2pccjs" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.262 248514 DEBUG nova.storage.rbd_utils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 24e9bc91-cab7-4459-921c-5000eb9839b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.266 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7/disk.config 24e9bc91-cab7-4459-921c-5000eb9839b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:53:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2450293502' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.327 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.329 248514 DEBUG nova.virt.libvirt.vif [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:53:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-537914191',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-537914191',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ge',id=115,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGUq/AunPgAiW/gJ0J2O7Gg2tWHaC1FajarqmNhDhKsCdHoif6mzxgzomxd0iSuMUg8lqNyZetu6VpRVmxHZgqpb0ZC3j0IoZa8EwnwWYlucFJrp/EYR/i0MEEr0woib3Q==',key_name='tempest-TestSecurityGroupsBasicOps-33804060',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-e5gqnowb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:49Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=fa7f7cf9-50d4-461e-ab73-21e65aa729a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.330 248514 DEBUG nova.network.os_vif_util [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.331 248514 DEBUG nova.network.os_vif_util [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:d3:da,bridge_name='br-int',has_traffic_filtering=True,id=6585c17a-67f3-4c7c-a637-4acf71e85c4f,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6585c17a-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.332 248514 DEBUG nova.objects.instance [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'pci_devices' on Instance uuid fa7f7cf9-50d4-461e-ab73-21e65aa729a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.352 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:53:54 np0005558241 nova_compute[248510]:  <uuid>fa7f7cf9-50d4-461e-ab73-21e65aa729a4</uuid>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:  <name>instance-00000073</name>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-537914191</nova:name>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:53:53</nova:creationTime>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:53:54 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:        <nova:user uuid="c377508dda354c0aa762d15d52aa130c">tempest-TestSecurityGroupsBasicOps-1856512026-project-member</nova:user>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:        <nova:project uuid="1064539fdf2b494ea705dd5e74afcd3b">tempest-TestSecurityGroupsBasicOps-1856512026</nova:project>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:        <nova:port uuid="6585c17a-67f3-4c7c-a637-4acf71e85c4f">
Dec 13 03:53:54 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <entry name="serial">fa7f7cf9-50d4-461e-ab73-21e65aa729a4</entry>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <entry name="uuid">fa7f7cf9-50d4-461e-ab73-21e65aa729a4</entry>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk">
Dec 13 03:53:54 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:53:54 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk.config">
Dec 13 03:53:54 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:53:54 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:f5:d3:da"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <target dev="tap6585c17a-67"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4/console.log" append="off"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:53:54 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:53:54 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:53:54 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:53:54 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.353 248514 DEBUG nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Preparing to wait for external event network-vif-plugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.354 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.354 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.354 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.355 248514 DEBUG nova.virt.libvirt.vif [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:53:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-537914191',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-537914191',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ge',id=115,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGUq/AunPgAiW/gJ0J2O7Gg2tWHaC1FajarqmNhDhKsCdHoif6mzxgzomxd0iSuMUg8lqNyZetu6VpRVmxHZgqpb0ZC3j0IoZa8EwnwWYlucFJrp/EYR/i0MEEr0woib3Q==',key_name='tempest-TestSecurityGroupsBasicOps-33804060',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-e5gqnowb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:53:49Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=fa7f7cf9-50d4-461e-ab73-21e65aa729a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.355 248514 DEBUG nova.network.os_vif_util [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.356 248514 DEBUG nova.network.os_vif_util [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:d3:da,bridge_name='br-int',has_traffic_filtering=True,id=6585c17a-67f3-4c7c-a637-4acf71e85c4f,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6585c17a-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.356 248514 DEBUG os_vif [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:d3:da,bridge_name='br-int',has_traffic_filtering=True,id=6585c17a-67f3-4c7c-a637-4acf71e85c4f,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6585c17a-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.357 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.358 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.358 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.360 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.361 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6585c17a-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.361 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6585c17a-67, col_values=(('external_ids', {'iface-id': '6585c17a-67f3-4c7c-a637-4acf71e85c4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:d3:da', 'vm-uuid': 'fa7f7cf9-50d4-461e-ab73-21e65aa729a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.363 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:54 np0005558241 NetworkManager[50376]: <info>  [1765616034.3646] manager: (tap6585c17a-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/462)
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.367 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.375 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.377 248514 INFO os_vif [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:d3:da,bridge_name='br-int',has_traffic_filtering=True,id=6585c17a-67f3-4c7c-a637-4acf71e85c4f,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6585c17a-67')#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.418 248514 DEBUG oslo_concurrency.processutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7/disk.config 24e9bc91-cab7-4459-921c-5000eb9839b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.418 248514 INFO nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Deleting local config drive /var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7/disk.config because it was imported into RBD.#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.446 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.446 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.446 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No VIF found with MAC fa:16:3e:f5:d3:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.447 248514 INFO nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Using config drive#033[00m
Dec 13 03:53:54 np0005558241 kernel: tap0f4fe885-08: entered promiscuous mode
Dec 13 03:53:54 np0005558241 NetworkManager[50376]: <info>  [1765616034.4691] manager: (tap0f4fe885-08): new Tun device (/org/freedesktop/NetworkManager/Devices/463)
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.471 248514 DEBUG nova.storage.rbd_utils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:54Z|01101|binding|INFO|Claiming lport 0f4fe885-0823-4b8e-93ad-70a45aba4b2e for this chassis.
Dec 13 03:53:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:54Z|01102|binding|INFO|0f4fe885-0823-4b8e-93ad-70a45aba4b2e: Claiming fa:16:3e:23:ed:41 10.100.0.19
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.485 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:ed:41 10.100.0.19'], port_security=['fa:16:3e:23:ed:41 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': '24e9bc91-cab7-4459-921c-5000eb9839b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e91b4b4-aa39-4f2a-bb46-9126feb64b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2d56de33-8a42-4b7a-a729-f5a7b63e022f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19856422-45a4-439c-b584-d577928b61a5, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0f4fe885-0823-4b8e-93ad-70a45aba4b2e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.486 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0f4fe885-0823-4b8e-93ad-70a45aba4b2e in datapath 2e91b4b4-aa39-4f2a-bb46-9126feb64b26 bound to our chassis#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.488 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2e91b4b4-aa39-4f2a-bb46-9126feb64b26#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.500 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7155bc-4e13-49d7-aa70-d46cdfbc261e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.501 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2e91b4b4-a1 in ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.503 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2e91b4b4-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.503 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[02f072bb-30e4-4e71-a869-a100407ef68f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.504 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd647922-1d6c-4326-bd75-227384e06ebf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:54 np0005558241 systemd-udevd[362328]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.514 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[95d7cb35-c5ef-47d3-89b9-7c0707580626]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:54 np0005558241 systemd-machined[210538]: New machine qemu-141-instance-00000072.
Dec 13 03:53:54 np0005558241 NetworkManager[50376]: <info>  [1765616034.5260] device (tap0f4fe885-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:53:54 np0005558241 NetworkManager[50376]: <info>  [1765616034.5266] device (tap0f4fe885-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:53:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:54Z|01103|binding|INFO|Setting lport 0f4fe885-0823-4b8e-93ad-70a45aba4b2e ovn-installed in OVS
Dec 13 03:53:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:54Z|01104|binding|INFO|Setting lport 0f4fe885-0823-4b8e-93ad-70a45aba4b2e up in Southbound
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:54 np0005558241 systemd[1]: Started Virtual Machine qemu-141-instance-00000072.
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.531 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ce534d93-f0ff-4dbf-89cb-640a9b5f8df5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.558 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[92fa9ea4-e249-4a0d-8466-73ddfe0c2dc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:54 np0005558241 NetworkManager[50376]: <info>  [1765616034.5652] manager: (tap2e91b4b4-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/464)
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.565 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[96bd98c2-e45a-45e7-83d7-2171b7503242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.601 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[62e23a2f-7644-445b-94a9-d360eccbcfd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.605 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fe9de645-f18d-4fe9-a305-49a4ebfb6296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:54 np0005558241 NetworkManager[50376]: <info>  [1765616034.6323] device (tap2e91b4b4-a0): carrier: link connected
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.638 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8a75ee-90b7-4310-963c-4b2189156d29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.657 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6cc21fa1-ed68-44e3-a948-fad35ff7fa65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e91b4b4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:eb:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 328], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861184, 'reachable_time': 20409, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362365, 'error': None, 'target': 'ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.679 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[733019c9-0dbe-4d4e-b0b5-980473591a7c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea6:ebfd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 861184, 'tstamp': 861184}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362366, 'error': None, 'target': 'ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.699 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[31c6c457-f496-46af-9fab-a124e99ce7fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2e91b4b4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a6:eb:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 328], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861184, 'reachable_time': 20409, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362367, 'error': None, 'target': 'ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.737 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9987e2-a9dc-4ae0-9f29-9422dc473739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.797 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d28700dc-e35b-4d46-8176-97b9927de3b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.799 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e91b4b4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.799 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.799 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e91b4b4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.806 248514 DEBUG nova.compute.manager [req-b674d2ec-e1aa-4df5-92ce-3af8ff8f71f9 req-1232f7a3-afaa-4c2a-9fb4-5199cdc7f010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Received event network-vif-plugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.806 248514 DEBUG oslo_concurrency.lockutils [req-b674d2ec-e1aa-4df5-92ce-3af8ff8f71f9 req-1232f7a3-afaa-4c2a-9fb4-5199cdc7f010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.806 248514 DEBUG oslo_concurrency.lockutils [req-b674d2ec-e1aa-4df5-92ce-3af8ff8f71f9 req-1232f7a3-afaa-4c2a-9fb4-5199cdc7f010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.807 248514 DEBUG oslo_concurrency.lockutils [req-b674d2ec-e1aa-4df5-92ce-3af8ff8f71f9 req-1232f7a3-afaa-4c2a-9fb4-5199cdc7f010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.807 248514 DEBUG nova.compute.manager [req-b674d2ec-e1aa-4df5-92ce-3af8ff8f71f9 req-1232f7a3-afaa-4c2a-9fb4-5199cdc7f010 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Processing event network-vif-plugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.847 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:54 np0005558241 NetworkManager[50376]: <info>  [1765616034.8479] manager: (tap2e91b4b4-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/465)
Dec 13 03:53:54 np0005558241 kernel: tap2e91b4b4-a0: entered promiscuous mode
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.852 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.855 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2e91b4b4-a0, col_values=(('external_ids', {'iface-id': '924f5729-755e-4be7-818a-17dd23445f7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.857 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:54Z|01105|binding|INFO|Releasing lport 924f5729-755e-4be7-818a-17dd23445f7d from this chassis (sb_readonly=0)
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.872 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.877 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.879 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2e91b4b4-aa39-4f2a-bb46-9126feb64b26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2e91b4b4-aa39-4f2a-bb46-9126feb64b26.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.880 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d0154801-7470-4335-8dab-696838f2c136]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.882 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-2e91b4b4-aa39-4f2a-bb46-9126feb64b26
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/2e91b4b4-aa39-4f2a-bb46-9126feb64b26.pid.haproxy
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 2e91b4b4-aa39-4f2a-bb46-9126feb64b26
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:53:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:54.883 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26', 'env', 'PROCESS_TAG=haproxy-2e91b4b4-aa39-4f2a-bb46-9126feb64b26', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2e91b4b4-aa39-4f2a-bb46-9126feb64b26.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.936 248514 INFO nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Creating config drive at /var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4/disk.config#033[00m
Dec 13 03:53:54 np0005558241 nova_compute[248510]: 2025-12-13 08:53:54.941 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpywqikfqb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.087 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpywqikfqb" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.114 248514 DEBUG nova.storage.rbd_utils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.124 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4/disk.config fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.175 248514 DEBUG nova.network.neutron [req-d4969542-a4b5-4519-af0a-7b456b67986d req-7a215a0f-0c1d-4cc2-a825-d9db7d5c05dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Updated VIF entry in instance network info cache for port 6585c17a-67f3-4c7c-a637-4acf71e85c4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.176 248514 DEBUG nova.network.neutron [req-d4969542-a4b5-4519-af0a-7b456b67986d req-7a215a0f-0c1d-4cc2-a825-d9db7d5c05dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Updating instance_info_cache with network_info: [{"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.274 248514 DEBUG oslo_concurrency.processutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4/disk.config fa7f7cf9-50d4-461e-ab73-21e65aa729a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.275 248514 INFO nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Deleting local config drive /var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4/disk.config because it was imported into RBD.#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.307 248514 DEBUG oslo_concurrency.lockutils [req-d4969542-a4b5-4519-af0a-7b456b67986d req-7a215a0f-0c1d-4cc2-a825-d9db7d5c05dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:53:55 np0005558241 podman[362452]: 2025-12-13 08:53:55.342275267 +0000 UTC m=+0.065168635 container create 5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:53:55 np0005558241 NetworkManager[50376]: <info>  [1765616035.3457] manager: (tap6585c17a-67): new Tun device (/org/freedesktop/NetworkManager/Devices/466)
Dec 13 03:53:55 np0005558241 systemd-udevd[362346]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:53:55 np0005558241 kernel: tap6585c17a-67: entered promiscuous mode
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.354 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:55Z|01106|binding|INFO|Claiming lport 6585c17a-67f3-4c7c-a637-4acf71e85c4f for this chassis.
Dec 13 03:53:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:55Z|01107|binding|INFO|6585c17a-67f3-4c7c-a637-4acf71e85c4f: Claiming fa:16:3e:f5:d3:da 10.100.0.11
Dec 13 03:53:55 np0005558241 NetworkManager[50376]: <info>  [1765616035.3615] device (tap6585c17a-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:53:55 np0005558241 NetworkManager[50376]: <info>  [1765616035.3628] device (tap6585c17a-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.362 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:d3:da 10.100.0.11'], port_security=['fa:16:3e:f5:d3:da 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'fa7f7cf9-50d4-461e-ab73-21e65aa729a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3479ed9a-2670-4333-b282-6f40685ff746', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee7df75c-fefa-4bc0-977e-537259cc7755', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9839b158-8451-4098-b558-d860fc6a92ca, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6585c17a-67f3-4c7c-a637-4acf71e85c4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:53:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:55Z|01108|binding|INFO|Setting lport 6585c17a-67f3-4c7c-a637-4acf71e85c4f ovn-installed in OVS
Dec 13 03:53:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:53:55Z|01109|binding|INFO|Setting lport 6585c17a-67f3-4c7c-a637-4acf71e85c4f up in Southbound
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.377 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:55 np0005558241 systemd[1]: Started libpod-conmon-5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d.scope.
Dec 13 03:53:55 np0005558241 systemd-machined[210538]: New machine qemu-142-instance-00000073.
Dec 13 03:53:55 np0005558241 podman[362452]: 2025-12-13 08:53:55.308450589 +0000 UTC m=+0.031343977 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:53:55 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 03:53:55 np0005558241 systemd[1]: Started Virtual Machine qemu-142-instance-00000073.
Dec 13 03:53:55 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 03:53:55 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:53:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c970440edf2f59bc003d5f5f6092186ad69c47ce3109c48710c40f04e5794b1f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.431 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.432 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.433 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:55 np0005558241 podman[362452]: 2025-12-13 08:53:55.443952246 +0000 UTC m=+0.166845624 container init 5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:53:55 np0005558241 podman[362452]: 2025-12-13 08:53:55.450468279 +0000 UTC m=+0.173361647 container start 5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 03:53:55 np0005558241 neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26[362511]: [NOTICE]   (362518) : New worker (362524) forked
Dec 13 03:53:55 np0005558241 neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26[362511]: [NOTICE]   (362518) : Loading success.
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.504 248514 DEBUG nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.506 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616035.5039809, 24e9bc91-cab7-4459-921c-5000eb9839b7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.506 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] VM Started (Lifecycle Event)#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.511 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.515 248514 INFO nova.virt.libvirt.driver [-] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Instance spawned successfully.#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.516 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:53:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.544 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.554 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6585c17a-67f3-4c7c-a637-4acf71e85c4f in datapath 3479ed9a-2670-4333-b282-6f40685ff746 unbound from our chassis#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.554 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.556 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3479ed9a-2670-4333-b282-6f40685ff746#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.565 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.566 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.567 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.567 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.568 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.568 248514 DEBUG nova.virt.libvirt.driver [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.581 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8035f2-65bc-4d15-ae9a-98c0c2b45820]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.604 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.604 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616035.5052018, 24e9bc91-cab7-4459-921c-5000eb9839b7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.605 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.618 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[111c214e-1b48-407f-88d6-556fec01d122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.621 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a3724390-e9f8-4178-a53f-e7f7bb205263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.641 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.645 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616035.5105426, 24e9bc91-cab7-4459-921c-5000eb9839b7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.646 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.650 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed1e944-b2b4-42c6-909d-17e654002743]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.657 248514 INFO nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Took 8.66 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.658 248514 DEBUG nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.668 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.669 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[856b93aa-0ea4-4b46-8cfd-1134e069bff7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3479ed9a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:9f:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 857121, 'reachable_time': 24238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362539, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.672 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.686 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9d65761d-faf3-4454-a668-fc54ed048d51]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3479ed9a-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 857140, 'tstamp': 857140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362547, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3479ed9a-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 857143, 'tstamp': 857143}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362547, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.688 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3479ed9a-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.690 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.691 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.691 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3479ed9a-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.692 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.692 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3479ed9a-20, col_values=(('external_ids', {'iface-id': 'd8892183-3d82-41f5-b0bd-dc5a1c170b1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:53:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:53:55.693 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.708 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.726 248514 INFO nova.compute.manager [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Took 10.47 seconds to build instance.#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.745 248514 DEBUG oslo_concurrency.lockutils [None req-2b4e19ad-a137-4665-ab34-81d05ab72126 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.837 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616035.836684, fa7f7cf9-50d4-461e-ab73-21e65aa729a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.837 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] VM Started (Lifecycle Event)#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.866 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.871 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616035.83682, fa7f7cf9-50d4-461e-ab73-21e65aa729a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.871 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.910 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.915 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:53:55 np0005558241 nova_compute[248510]: 2025-12-13 08:53:55.952 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:53:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2773: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 210 KiB/s rd, 3.6 MiB/s wr, 72 op/s
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.439 248514 DEBUG nova.compute.manager [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Received event network-vif-plugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.440 248514 DEBUG oslo_concurrency.lockutils [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.441 248514 DEBUG oslo_concurrency.lockutils [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.441 248514 DEBUG oslo_concurrency.lockutils [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.441 248514 DEBUG nova.compute.manager [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] No waiting events found dispatching network-vif-plugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.442 248514 WARNING nova.compute.manager [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Received unexpected event network-vif-plugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e for instance with vm_state active and task_state None.#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.442 248514 DEBUG nova.compute.manager [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received event network-vif-plugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.442 248514 DEBUG oslo_concurrency.lockutils [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.443 248514 DEBUG oslo_concurrency.lockutils [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.443 248514 DEBUG oslo_concurrency.lockutils [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.443 248514 DEBUG nova.compute.manager [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Processing event network-vif-plugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.444 248514 DEBUG nova.compute.manager [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received event network-vif-plugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.444 248514 DEBUG oslo_concurrency.lockutils [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.444 248514 DEBUG oslo_concurrency.lockutils [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.445 248514 DEBUG oslo_concurrency.lockutils [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.445 248514 DEBUG nova.compute.manager [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] No waiting events found dispatching network-vif-plugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.445 248514 WARNING nova.compute.manager [req-5fc4e622-70c3-4ec9-a3ef-7247c3b81c91 req-523193a6-ca29-4feb-a076-4f6a931df430 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received unexpected event network-vif-plugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f for instance with vm_state building and task_state spawning.#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.446 248514 DEBUG nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.450 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616037.4500172, fa7f7cf9-50d4-461e-ab73-21e65aa729a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.450 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.452 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.457 248514 INFO nova.virt.libvirt.driver [-] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Instance spawned successfully.#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.458 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.533 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.542 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.550 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.553 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.554 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.554 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.555 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.555 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.556 248514 DEBUG nova.virt.libvirt.driver [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.623 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.689 248514 INFO nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Took 8.54 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.689 248514 DEBUG nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.819 248514 INFO nova.compute.manager [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Took 9.81 seconds to build instance.#033[00m
Dec 13 03:53:57 np0005558241 nova_compute[248510]: 2025-12-13 08:53:57.928 248514 DEBUG oslo_concurrency.lockutils [None req-6e909904-5765-4671-b402-ac1f21a5257f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:53:57 np0005558241 podman[362585]: 2025-12-13 08:53:57.980302608 +0000 UTC m=+0.064374755 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 03:53:57 np0005558241 podman[362584]: 2025-12-13 08:53:57.991963541 +0000 UTC m=+0.078270544 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 03:53:58 np0005558241 podman[362583]: 2025-12-13 08:53:58.044131449 +0000 UTC m=+0.131771575 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_controller)
Dec 13 03:53:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2774: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 115 op/s
Dec 13 03:53:59 np0005558241 nova_compute[248510]: 2025-12-13 08:53:59.364 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:53:59 np0005558241 nova_compute[248510]: 2025-12-13 08:53:59.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:54:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2775: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 180 op/s
Dec 13 03:54:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:54:01 np0005558241 nova_compute[248510]: 2025-12-13 08:54:01.318 248514 DEBUG nova.compute.manager [req-f97ebfba-308d-4289-bdd9-ad8e063ea7a8 req-1427aa1e-97e2-4299-92c7-861a21ddf9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received event network-changed-6585c17a-67f3-4c7c-a637-4acf71e85c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:01 np0005558241 nova_compute[248510]: 2025-12-13 08:54:01.319 248514 DEBUG nova.compute.manager [req-f97ebfba-308d-4289-bdd9-ad8e063ea7a8 req-1427aa1e-97e2-4299-92c7-861a21ddf9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Refreshing instance network info cache due to event network-changed-6585c17a-67f3-4c7c-a637-4acf71e85c4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:54:01 np0005558241 nova_compute[248510]: 2025-12-13 08:54:01.319 248514 DEBUG oslo_concurrency.lockutils [req-f97ebfba-308d-4289-bdd9-ad8e063ea7a8 req-1427aa1e-97e2-4299-92c7-861a21ddf9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:54:01 np0005558241 nova_compute[248510]: 2025-12-13 08:54:01.319 248514 DEBUG oslo_concurrency.lockutils [req-f97ebfba-308d-4289-bdd9-ad8e063ea7a8 req-1427aa1e-97e2-4299-92c7-861a21ddf9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:54:01 np0005558241 nova_compute[248510]: 2025-12-13 08:54:01.319 248514 DEBUG nova.network.neutron [req-f97ebfba-308d-4289-bdd9-ad8e063ea7a8 req-1427aa1e-97e2-4299-92c7-861a21ddf9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Refreshing network info cache for port 6585c17a-67f3-4c7c-a637-4acf71e85c4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:54:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2776: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Dec 13 03:54:02 np0005558241 nova_compute[248510]: 2025-12-13 08:54:02.532 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:02 np0005558241 nova_compute[248510]: 2025-12-13 08:54:02.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:54:03 np0005558241 nova_compute[248510]: 2025-12-13 08:54:03.026 248514 DEBUG nova.network.neutron [req-f97ebfba-308d-4289-bdd9-ad8e063ea7a8 req-1427aa1e-97e2-4299-92c7-861a21ddf9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Updated VIF entry in instance network info cache for port 6585c17a-67f3-4c7c-a637-4acf71e85c4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:54:03 np0005558241 nova_compute[248510]: 2025-12-13 08:54:03.027 248514 DEBUG nova.network.neutron [req-f97ebfba-308d-4289-bdd9-ad8e063ea7a8 req-1427aa1e-97e2-4299-92c7-861a21ddf9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Updating instance_info_cache with network_info: [{"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:54:03 np0005558241 nova_compute[248510]: 2025-12-13 08:54:03.051 248514 DEBUG oslo_concurrency.lockutils [req-f97ebfba-308d-4289-bdd9-ad8e063ea7a8 req-1427aa1e-97e2-4299-92c7-861a21ddf9da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:54:03 np0005558241 nova_compute[248510]: 2025-12-13 08:54:03.413 248514 DEBUG nova.compute.manager [req-aa9e19a3-8f0e-4064-9134-5113601cf237 req-e54b48e9-4b29-4b8d-9feb-f802830848dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received event network-changed-6585c17a-67f3-4c7c-a637-4acf71e85c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:03 np0005558241 nova_compute[248510]: 2025-12-13 08:54:03.414 248514 DEBUG nova.compute.manager [req-aa9e19a3-8f0e-4064-9134-5113601cf237 req-e54b48e9-4b29-4b8d-9feb-f802830848dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Refreshing instance network info cache due to event network-changed-6585c17a-67f3-4c7c-a637-4acf71e85c4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:54:03 np0005558241 nova_compute[248510]: 2025-12-13 08:54:03.415 248514 DEBUG oslo_concurrency.lockutils [req-aa9e19a3-8f0e-4064-9134-5113601cf237 req-e54b48e9-4b29-4b8d-9feb-f802830848dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:54:03 np0005558241 nova_compute[248510]: 2025-12-13 08:54:03.415 248514 DEBUG oslo_concurrency.lockutils [req-aa9e19a3-8f0e-4064-9134-5113601cf237 req-e54b48e9-4b29-4b8d-9feb-f802830848dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:54:03 np0005558241 nova_compute[248510]: 2025-12-13 08:54:03.415 248514 DEBUG nova.network.neutron [req-aa9e19a3-8f0e-4064-9134-5113601cf237 req-e54b48e9-4b29-4b8d-9feb-f802830848dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Refreshing network info cache for port 6585c17a-67f3-4c7c-a637-4acf71e85c4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:54:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2777: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 235 op/s
Dec 13 03:54:04 np0005558241 nova_compute[248510]: 2025-12-13 08:54:04.367 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:54:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2778: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 31 KiB/s wr, 208 op/s
Dec 13 03:54:07 np0005558241 nova_compute[248510]: 2025-12-13 08:54:07.535 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:07 np0005558241 nova_compute[248510]: 2025-12-13 08:54:07.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:54:07 np0005558241 nova_compute[248510]: 2025-12-13 08:54:07.774 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:54:07 np0005558241 nova_compute[248510]: 2025-12-13 08:54:07.775 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:54:07 np0005558241 nova_compute[248510]: 2025-12-13 08:54:07.885 248514 DEBUG nova.network.neutron [req-aa9e19a3-8f0e-4064-9134-5113601cf237 req-e54b48e9-4b29-4b8d-9feb-f802830848dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Updated VIF entry in instance network info cache for port 6585c17a-67f3-4c7c-a637-4acf71e85c4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:54:07 np0005558241 nova_compute[248510]: 2025-12-13 08:54:07.886 248514 DEBUG nova.network.neutron [req-aa9e19a3-8f0e-4064-9134-5113601cf237 req-e54b48e9-4b29-4b8d-9feb-f802830848dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Updating instance_info_cache with network_info: [{"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:54:07 np0005558241 nova_compute[248510]: 2025-12-13 08:54:07.911 248514 DEBUG oslo_concurrency.lockutils [req-aa9e19a3-8f0e-4064-9134-5113601cf237 req-e54b48e9-4b29-4b8d-9feb-f802830848dc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-fa7f7cf9-50d4-461e-ab73-21e65aa729a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:54:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2779: 321 pgs: 321 active+clean; 374 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 32 KiB/s wr, 212 op/s
Dec 13 03:54:08 np0005558241 nova_compute[248510]: 2025-12-13 08:54:08.508 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:54:08 np0005558241 nova_compute[248510]: 2025-12-13 08:54:08.510 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:54:08 np0005558241 nova_compute[248510]: 2025-12-13 08:54:08.510 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:54:08 np0005558241 nova_compute[248510]: 2025-12-13 08:54:08.511 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:54:08 np0005558241 nova_compute[248510]: 2025-12-13 08:54:08.643 248514 DEBUG nova.compute.manager [req-9e49bc04-6c43-4f5a-bd02-1ecfba3d5bfc req-5c9d9e0f-1967-4610-8781-655de80ead5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:08 np0005558241 nova_compute[248510]: 2025-12-13 08:54:08.645 248514 DEBUG nova.compute.manager [req-9e49bc04-6c43-4f5a-bd02-1ecfba3d5bfc req-5c9d9e0f-1967-4610-8781-655de80ead5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing instance network info cache due to event network-changed-b2143648-4c23-49b5-8777-433a5b34c7ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:54:08 np0005558241 nova_compute[248510]: 2025-12-13 08:54:08.645 248514 DEBUG oslo_concurrency.lockutils [req-9e49bc04-6c43-4f5a-bd02-1ecfba3d5bfc req-5c9d9e0f-1967-4610-8781-655de80ead5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:54:08 np0005558241 nova_compute[248510]: 2025-12-13 08:54:08.716 248514 DEBUG oslo_concurrency.lockutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:08 np0005558241 nova_compute[248510]: 2025-12-13 08:54:08.716 248514 DEBUG oslo_concurrency.lockutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:08 np0005558241 nova_compute[248510]: 2025-12-13 08:54:08.717 248514 DEBUG oslo_concurrency.lockutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:08 np0005558241 nova_compute[248510]: 2025-12-13 08:54:08.717 248514 DEBUG oslo_concurrency.lockutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:08 np0005558241 nova_compute[248510]: 2025-12-13 08:54:08.718 248514 DEBUG oslo_concurrency.lockutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:08 np0005558241 nova_compute[248510]: 2025-12-13 08:54:08.719 248514 INFO nova.compute.manager [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Terminating instance#033[00m
Dec 13 03:54:08 np0005558241 nova_compute[248510]: 2025-12-13 08:54:08.720 248514 DEBUG nova.compute.manager [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:54:09 np0005558241 kernel: tapb2143648-4c (unregistering): left promiscuous mode
Dec 13 03:54:09 np0005558241 NetworkManager[50376]: <info>  [1765616049.1512] device (tapb2143648-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:54:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:09Z|01110|binding|INFO|Releasing lport b2143648-4c23-49b5-8777-433a5b34c7ce from this chassis (sb_readonly=0)
Dec 13 03:54:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:09Z|01111|binding|INFO|Setting lport b2143648-4c23-49b5-8777-433a5b34c7ce down in Southbound
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.163 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:09 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:09Z|01112|binding|INFO|Removing iface tapb2143648-4c ovn-installed in OVS
Dec 13 03:54:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:09.173 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:b7:87 10.100.0.4'], port_security=['fa:16:3e:02:b7:87 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fcc617ec-f5f9-41bb-ad4b-86d790622e74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6db56c55-78f1-455f-855e-db3acef05ff3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff4d2c6ad4dc4848ac9f55ff1b9e829a', 'neutron:revision_number': '9', 'neutron:security_group_ids': '9bebfabc-b3c7-415b-9881-5a1028b3b8d0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=efdd787e-21fc-422f-be64-57ef7368490d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b2143648-4c23-49b5-8777-433a5b34c7ce) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:54:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:09.175 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b2143648-4c23-49b5-8777-433a5b34c7ce in datapath 6db56c55-78f1-455f-855e-db3acef05ff3 unbound from our chassis#033[00m
Dec 13 03:54:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:09.182 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6db56c55-78f1-455f-855e-db3acef05ff3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:54:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:09.183 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3e83a02c-b790-45b0-b781-6a369f137c23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:09.184 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 namespace which is not needed anymore#033[00m
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.198 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:09 np0005558241 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Dec 13 03:54:09 np0005558241 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006f.scope: Consumed 17.066s CPU time.
Dec 13 03:54:09 np0005558241 systemd-machined[210538]: Machine qemu-140-instance-0000006f terminated.
Dec 13 03:54:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:54:09
Dec 13 03:54:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:54:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:54:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['backups', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'default.rgw.control']
Dec 13 03:54:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.381 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.394 248514 INFO nova.virt.libvirt.driver [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Instance destroyed successfully.#033[00m
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.394 248514 DEBUG nova.objects.instance [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lazy-loading 'resources' on Instance uuid fcc617ec-f5f9-41bb-ad4b-86d790622e74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.415 248514 DEBUG nova.virt.libvirt.vif [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T08:52:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1747506169',display_name='tempest-TestShelveInstance-server-1747506169',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1747506169',id=111,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAebkVWpSolmv9rvqo0lcOY35Sse8ZKcOxj7o5K69+ccF5rX0zOaHwOkKpIPYjoJDZ0lGFUDC6z1VdzfkWYgp+l6o2jOhZfRbhSP9Loovk747mSM12eSN6XwaL50YLDn8Q==',key_name='tempest-TestShelveInstance-1284595260',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:53:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff4d2c6ad4dc4848ac9f55ff1b9e829a',ramdisk_id='',reservation_id='r-wfxenjt6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-2105398574',owner_user_name='tempest-TestShelveInstance-2105398574-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:53:30Z,user_data=None,user_id='fa34623cd3de4a47aa57959f09b3ff79',uuid=fcc617ec-f5f9-41bb-ad4b-86d790622e74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.416 248514 DEBUG nova.network.os_vif_util [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converting VIF {"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.417 248514 DEBUG nova.network.os_vif_util [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.417 248514 DEBUG os_vif [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.418 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.419 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2143648-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.420 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.424 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.427 248514 INFO os_vif [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:b7:87,bridge_name='br-int',has_traffic_filtering=True,id=b2143648-4c23-49b5-8777-433a5b34c7ce,network=Network(6db56c55-78f1-455f-855e-db3acef05ff3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2143648-4c')#033[00m
Dec 13 03:54:09 np0005558241 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[361538]: [NOTICE]   (361549) : haproxy version is 2.8.14-c23fe91
Dec 13 03:54:09 np0005558241 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[361538]: [NOTICE]   (361549) : path to executable is /usr/sbin/haproxy
Dec 13 03:54:09 np0005558241 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[361538]: [WARNING]  (361549) : Exiting Master process...
Dec 13 03:54:09 np0005558241 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[361538]: [ALERT]    (361549) : Current worker (361560) exited with code 143 (Terminated)
Dec 13 03:54:09 np0005558241 neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3[361538]: [WARNING]  (361549) : All workers exited. Exiting... (0)
Dec 13 03:54:09 np0005558241 systemd[1]: libpod-60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0.scope: Deactivated successfully.
Dec 13 03:54:09 np0005558241 podman[362667]: 2025-12-13 08:54:09.515239629 +0000 UTC m=+0.230008228 container died 60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:54:09 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0-userdata-shm.mount: Deactivated successfully.
Dec 13 03:54:09 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d00eeaad21779e0d1e4a21f624f33c14a068fe0d0787499d4a34517514951359-merged.mount: Deactivated successfully.
Dec 13 03:54:09 np0005558241 podman[362667]: 2025-12-13 08:54:09.801751513 +0000 UTC m=+0.516520112 container cleanup 60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:54:09 np0005558241 systemd[1]: libpod-conmon-60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0.scope: Deactivated successfully.
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.926 248514 DEBUG nova.compute.manager [req-fbbfa0c9-7344-429d-bb60-831f3a5e3d73 req-68ef9488-b103-4ae4-ae72-a92cfd25e3ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-unplugged-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.927 248514 DEBUG oslo_concurrency.lockutils [req-fbbfa0c9-7344-429d-bb60-831f3a5e3d73 req-68ef9488-b103-4ae4-ae72-a92cfd25e3ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.927 248514 DEBUG oslo_concurrency.lockutils [req-fbbfa0c9-7344-429d-bb60-831f3a5e3d73 req-68ef9488-b103-4ae4-ae72-a92cfd25e3ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.927 248514 DEBUG oslo_concurrency.lockutils [req-fbbfa0c9-7344-429d-bb60-831f3a5e3d73 req-68ef9488-b103-4ae4-ae72-a92cfd25e3ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.928 248514 DEBUG nova.compute.manager [req-fbbfa0c9-7344-429d-bb60-831f3a5e3d73 req-68ef9488-b103-4ae4-ae72-a92cfd25e3ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] No waiting events found dispatching network-vif-unplugged-b2143648-4c23-49b5-8777-433a5b34c7ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:54:09 np0005558241 nova_compute[248510]: 2025-12-13 08:54:09.928 248514 DEBUG nova.compute.manager [req-fbbfa0c9-7344-429d-bb60-831f3a5e3d73 req-68ef9488-b103-4ae4-ae72-a92cfd25e3ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-unplugged-b2143648-4c23-49b5-8777-433a5b34c7ce for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:54:10 np0005558241 podman[362723]: 2025-12-13 08:54:10.11075132 +0000 UTC m=+0.283403306 container remove 60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:54:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:10.126 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[633222af-5b6e-4af8-b452-145cde6ebdef]: (4, ('Sat Dec 13 08:54:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 (60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0)\n60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0\nSat Dec 13 08:54:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 (60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0)\n60b067095a3a3d6f65ee31050419640728b6314fef4479aac17b7e4954113da0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:10.131 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5952eda4-5e68-453e-9fa0-f2b4ad4c9b10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:10.132 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6db56c55-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:54:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:54:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:54:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:54:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:54:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:54:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:54:10 np0005558241 nova_compute[248510]: 2025-12-13 08:54:10.135 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:10 np0005558241 kernel: tap6db56c55-70: left promiscuous mode
Dec 13 03:54:10 np0005558241 nova_compute[248510]: 2025-12-13 08:54:10.150 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:10 np0005558241 nova_compute[248510]: 2025-12-13 08:54:10.153 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:10.156 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[84cd24fc-ebf9-45d2-a17c-dc27adcab0d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:10.170 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3122cdbd-e800-45a0-b4c1-3ba2df1a6530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:10.174 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7d8f8bd8-6873-4dd4-8286-ae87c02b2120]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:10.197 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[86ee3b01-60f2-4be5-9fc1-b1170c41a62b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858541, 'reachable_time': 37108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362738, 'error': None, 'target': 'ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:10 np0005558241 systemd[1]: run-netns-ovnmeta\x2d6db56c55\x2d78f1\x2d455f\x2d855e\x2ddb3acef05ff3.mount: Deactivated successfully.
Dec 13 03:54:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:10.200 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6db56c55-78f1-455f-855e-db3acef05ff3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:54:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:10.201 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[d885ffe2-a325-40f5-bafa-7c5b18f5abb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2780: 321 pgs: 321 active+clean; 378 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 442 KiB/s wr, 182 op/s
Dec 13 03:54:10 np0005558241 nova_compute[248510]: 2025-12-13 08:54:10.378 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:54:10 np0005558241 nova_compute[248510]: 2025-12-13 08:54:10.403 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:54:10 np0005558241 nova_compute[248510]: 2025-12-13 08:54:10.403 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:54:10 np0005558241 nova_compute[248510]: 2025-12-13 08:54:10.403 248514 DEBUG oslo_concurrency.lockutils [req-9e49bc04-6c43-4f5a-bd02-1ecfba3d5bfc req-5c9d9e0f-1967-4610-8781-655de80ead5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:54:10 np0005558241 nova_compute[248510]: 2025-12-13 08:54:10.403 248514 DEBUG nova.network.neutron [req-9e49bc04-6c43-4f5a-bd02-1ecfba3d5bfc req-5c9d9e0f-1967-4610-8781-655de80ead5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Refreshing network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:54:10 np0005558241 nova_compute[248510]: 2025-12-13 08:54:10.404 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:54:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:54:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:54:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:10Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:ed:41 10.100.0.19
Dec 13 03:54:10 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:10Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:ed:41 10.100.0.19
Dec 13 03:54:11 np0005558241 nova_compute[248510]: 2025-12-13 08:54:11.215 248514 INFO nova.virt.libvirt.driver [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Deleting instance files /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74_del#033[00m
Dec 13 03:54:11 np0005558241 nova_compute[248510]: 2025-12-13 08:54:11.217 248514 INFO nova.virt.libvirt.driver [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Deletion of /var/lib/nova/instances/fcc617ec-f5f9-41bb-ad4b-86d790622e74_del complete#033[00m
Dec 13 03:54:11 np0005558241 nova_compute[248510]: 2025-12-13 08:54:11.289 248514 INFO nova.compute.manager [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Took 2.57 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:54:11 np0005558241 nova_compute[248510]: 2025-12-13 08:54:11.292 248514 DEBUG oslo.service.loopingcall [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:54:11 np0005558241 nova_compute[248510]: 2025-12-13 08:54:11.294 248514 DEBUG nova.compute.manager [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:54:11 np0005558241 nova_compute[248510]: 2025-12-13 08:54:11.294 248514 DEBUG nova.network.neutron [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.026 248514 DEBUG nova.compute.manager [req-c3e99845-31c9-4712-b15d-49a7e997978d req-dceef5af-2de8-409a-acca-55a7658a9fd9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.027 248514 DEBUG oslo_concurrency.lockutils [req-c3e99845-31c9-4712-b15d-49a7e997978d req-dceef5af-2de8-409a-acca-55a7658a9fd9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.027 248514 DEBUG oslo_concurrency.lockutils [req-c3e99845-31c9-4712-b15d-49a7e997978d req-dceef5af-2de8-409a-acca-55a7658a9fd9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.027 248514 DEBUG oslo_concurrency.lockutils [req-c3e99845-31c9-4712-b15d-49a7e997978d req-dceef5af-2de8-409a-acca-55a7658a9fd9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.028 248514 DEBUG nova.compute.manager [req-c3e99845-31c9-4712-b15d-49a7e997978d req-dceef5af-2de8-409a-acca-55a7658a9fd9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] No waiting events found dispatching network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.028 248514 WARNING nova.compute.manager [req-c3e99845-31c9-4712-b15d-49a7e997978d req-dceef5af-2de8-409a-acca-55a7658a9fd9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received unexpected event network-vif-plugged-b2143648-4c23-49b5-8777-433a5b34c7ce for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:54:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2781: 321 pgs: 321 active+clean; 378 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 911 KiB/s rd, 440 KiB/s wr, 97 op/s
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.436 248514 DEBUG nova.network.neutron [req-9e49bc04-6c43-4f5a-bd02-1ecfba3d5bfc req-5c9d9e0f-1967-4610-8781-655de80ead5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updated VIF entry in instance network info cache for port b2143648-4c23-49b5-8777-433a5b34c7ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.437 248514 DEBUG nova.network.neutron [req-9e49bc04-6c43-4f5a-bd02-1ecfba3d5bfc req-5c9d9e0f-1967-4610-8781-655de80ead5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [{"id": "b2143648-4c23-49b5-8777-433a5b34c7ce", "address": "fa:16:3e:02:b7:87", "network": {"id": "6db56c55-78f1-455f-855e-db3acef05ff3", "bridge": "br-int", "label": "tempest-TestShelveInstance-1741572243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff4d2c6ad4dc4848ac9f55ff1b9e829a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2143648-4c", "ovs_interfaceid": "b2143648-4c23-49b5-8777-433a5b34c7ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.467 248514 DEBUG oslo_concurrency.lockutils [req-9e49bc04-6c43-4f5a-bd02-1ecfba3d5bfc req-5c9d9e0f-1967-4610-8781-655de80ead5e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-fcc617ec-f5f9-41bb-ad4b-86d790622e74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.538 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.689 248514 DEBUG nova.network.neutron [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.711 248514 INFO nova.compute.manager [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Took 1.42 seconds to deallocate network for instance.#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.864 248514 DEBUG nova.compute.manager [req-68d59fe8-a22b-41bf-8f29-40298b2088c8 req-fd8ff017-dfc1-4563-8df9-420f240f9cbd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Received event network-vif-deleted-b2143648-4c23-49b5-8777-433a5b34c7ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.900 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.901 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.901 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.901 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.902 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.944 248514 DEBUG oslo_concurrency.lockutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:12 np0005558241 nova_compute[248510]: 2025-12-13 08:54:12.945 248514 DEBUG oslo_concurrency.lockutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.110 248514 DEBUG oslo_concurrency.processutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:54:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:13Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f5:d3:da 10.100.0.11
Dec 13 03:54:13 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:13Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f5:d3:da 10.100.0.11
Dec 13 03:54:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:54:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/309752790' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:54:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:13.512 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:54:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:13.512 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.514 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.527 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.645 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.646 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.650 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.650 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.654 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.655 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.659 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.659 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:54:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:54:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/468648036' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.712 248514 DEBUG oslo_concurrency.processutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.719 248514 DEBUG nova.compute.provider_tree [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.740 248514 DEBUG nova.scheduler.client.report [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.766 248514 DEBUG oslo_concurrency.lockutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.794 248514 INFO nova.scheduler.client.report [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Deleted allocations for instance fcc617ec-f5f9-41bb-ad4b-86d790622e74#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.873 248514 DEBUG oslo_concurrency.lockutils [None req-63fdaf4a-840e-46f9-8ff8-0c9006bce076 fa34623cd3de4a47aa57959f09b3ff79 ff4d2c6ad4dc4848ac9f55ff1b9e829a - - default default] Lock "fcc617ec-f5f9-41bb-ad4b-86d790622e74" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.938 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.940 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2831MB free_disk=59.802959844470024GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.940 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:13 np0005558241 nova_compute[248510]: 2025-12-13 08:54:13.940 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:14 np0005558241 nova_compute[248510]: 2025-12-13 08:54:14.017 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance af2dc023-560c-4c66-b330-e41218a7a4eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:54:14 np0005558241 nova_compute[248510]: 2025-12-13 08:54:14.017 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 2d2a33c7-0a90-4b64-b291-b268d37dce5e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:54:14 np0005558241 nova_compute[248510]: 2025-12-13 08:54:14.017 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 24e9bc91-cab7-4459-921c-5000eb9839b7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:54:14 np0005558241 nova_compute[248510]: 2025-12-13 08:54:14.018 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance fa7f7cf9-50d4-461e-ab73-21e65aa729a4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:54:14 np0005558241 nova_compute[248510]: 2025-12-13 08:54:14.018 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:54:14 np0005558241 nova_compute[248510]: 2025-12-13 08:54:14.018 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:54:14 np0005558241 nova_compute[248510]: 2025-12-13 08:54:14.120 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:54:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2782: 321 pgs: 321 active+clean; 355 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.3 MiB/s wr, 237 op/s
Dec 13 03:54:14 np0005558241 nova_compute[248510]: 2025-12-13 08:54:14.421 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:54:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1321916222' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:54:14 np0005558241 nova_compute[248510]: 2025-12-13 08:54:14.759 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:54:14 np0005558241 nova_compute[248510]: 2025-12-13 08:54:14.766 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:54:14 np0005558241 nova_compute[248510]: 2025-12-13 08:54:14.786 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:54:14 np0005558241 nova_compute[248510]: 2025-12-13 08:54:14.811 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:54:14 np0005558241 nova_compute[248510]: 2025-12-13 08:54:14.812 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:54:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2636081950' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:54:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:54:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2636081950' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:54:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:54:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2783: 321 pgs: 321 active+clean; 355 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 867 KiB/s rd, 4.3 MiB/s wr, 157 op/s
Dec 13 03:54:16 np0005558241 nova_compute[248510]: 2025-12-13 08:54:16.813 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:54:16 np0005558241 nova_compute[248510]: 2025-12-13 08:54:16.813 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:54:16 np0005558241 nova_compute[248510]: 2025-12-13 08:54:16.813 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:54:16 np0005558241 nova_compute[248510]: 2025-12-13 08:54:16.813 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:54:17 np0005558241 nova_compute[248510]: 2025-12-13 08:54:17.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2784: 321 pgs: 321 active+clean; 359 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 895 KiB/s rd, 4.3 MiB/s wr, 161 op/s
Dec 13 03:54:19 np0005558241 nova_compute[248510]: 2025-12-13 08:54:19.423 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2785: 321 pgs: 321 active+clean; 359 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 772 KiB/s rd, 4.3 MiB/s wr, 159 op/s
Dec 13 03:54:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0030516155096611615 of space, bias 1.0, pg target 0.9154846528983485 quantized to 32 (current 32)
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697195585548564 of space, bias 1.0, pg target 0.2009158675664569 quantized to 32 (current 32)
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.738067775993847e-07 of space, bias 4.0, pg target 0.0006885681331192617 quantized to 16 (current 32)
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:54:21 np0005558241 nova_compute[248510]: 2025-12-13 08:54:21.374 248514 DEBUG oslo_concurrency.lockutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:21 np0005558241 nova_compute[248510]: 2025-12-13 08:54:21.375 248514 DEBUG oslo_concurrency.lockutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:21 np0005558241 nova_compute[248510]: 2025-12-13 08:54:21.375 248514 DEBUG oslo_concurrency.lockutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:21 np0005558241 nova_compute[248510]: 2025-12-13 08:54:21.375 248514 DEBUG oslo_concurrency.lockutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:21 np0005558241 nova_compute[248510]: 2025-12-13 08:54:21.375 248514 DEBUG oslo_concurrency.lockutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:21 np0005558241 nova_compute[248510]: 2025-12-13 08:54:21.377 248514 INFO nova.compute.manager [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Terminating instance#033[00m
Dec 13 03:54:21 np0005558241 nova_compute[248510]: 2025-12-13 08:54:21.378 248514 DEBUG nova.compute.manager [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:54:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:21.515 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:54:21 np0005558241 nova_compute[248510]: 2025-12-13 08:54:21.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:54:21 np0005558241 kernel: tap6585c17a-67 (unregistering): left promiscuous mode
Dec 13 03:54:21 np0005558241 NetworkManager[50376]: <info>  [1765616061.8967] device (tap6585c17a-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:54:21 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:21Z|01113|binding|INFO|Releasing lport 6585c17a-67f3-4c7c-a637-4acf71e85c4f from this chassis (sb_readonly=0)
Dec 13 03:54:21 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:21Z|01114|binding|INFO|Setting lport 6585c17a-67f3-4c7c-a637-4acf71e85c4f down in Southbound
Dec 13 03:54:21 np0005558241 nova_compute[248510]: 2025-12-13 08:54:21.906 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:21 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:21Z|01115|binding|INFO|Removing iface tap6585c17a-67 ovn-installed in OVS
Dec 13 03:54:21 np0005558241 nova_compute[248510]: 2025-12-13 08:54:21.909 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:21.923 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:d3:da 10.100.0.11', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'fa7f7cf9-50d4-461e-ab73-21e65aa729a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3479ed9a-2670-4333-b282-6f40685ff746', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9839b158-8451-4098-b558-d860fc6a92ca, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6585c17a-67f3-4c7c-a637-4acf71e85c4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:54:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:21.926 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6585c17a-67f3-4c7c-a637-4acf71e85c4f in datapath 3479ed9a-2670-4333-b282-6f40685ff746 unbound from our chassis#033[00m
Dec 13 03:54:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:21.929 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3479ed9a-2670-4333-b282-6f40685ff746#033[00m
Dec 13 03:54:21 np0005558241 nova_compute[248510]: 2025-12-13 08:54:21.938 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:21.960 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[001b701f-4db9-4424-a2a0-8c3b94592a2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:21.990 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1f705b3d-54e7-406f-957f-62e9d58a21c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:21 np0005558241 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000073.scope: Deactivated successfully.
Dec 13 03:54:21 np0005558241 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000073.scope: Consumed 15.477s CPU time.
Dec 13 03:54:21 np0005558241 systemd-machined[210538]: Machine qemu-142-instance-00000073 terminated.
Dec 13 03:54:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:21.995 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[72584c34-8caf-4a5e-b9ef-45945342b359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.022 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2432e567-6925-42e1-94b7-6a914d4056cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.039 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e2745099-e6d7-42d1-a7da-d5d568687e96]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3479ed9a-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:9f:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1482, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1482, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 857121, 'reachable_time': 24238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 14, 'inoctets': 992, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 14, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 992, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 14, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362820, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.062 248514 DEBUG oslo_concurrency.lockutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "24e9bc91-cab7-4459-921c-5000eb9839b7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.062 248514 DEBUG oslo_concurrency.lockutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.063 248514 DEBUG oslo_concurrency.lockutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.063 248514 DEBUG oslo_concurrency.lockutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.063 248514 DEBUG oslo_concurrency.lockutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.063 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6404d0f1-5aca-478f-bc84-17172e52e2de]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3479ed9a-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 857140, 'tstamp': 857140}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362821, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3479ed9a-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 857143, 'tstamp': 857143}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362821, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.064 248514 INFO nova.compute.manager [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Terminating instance#033[00m
Dec 13 03:54:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.065 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3479ed9a-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.065 248514 DEBUG nova.compute.manager [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.066 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.072 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3479ed9a-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:54:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.072 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:54:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.073 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3479ed9a-20, col_values=(('external_ids', {'iface-id': 'd8892183-3d82-41f5-b0bd-dc5a1c170b1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:54:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.073 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:54:22 np0005558241 kernel: tap0f4fe885-08 (unregistering): left promiscuous mode
Dec 13 03:54:22 np0005558241 NetworkManager[50376]: <info>  [1765616062.1299] device (tap0f4fe885-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:54:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:22Z|01116|binding|INFO|Releasing lport 0f4fe885-0823-4b8e-93ad-70a45aba4b2e from this chassis (sb_readonly=0)
Dec 13 03:54:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:22Z|01117|binding|INFO|Setting lport 0f4fe885-0823-4b8e-93ad-70a45aba4b2e down in Southbound
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.141 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:22Z|01118|binding|INFO|Removing iface tap0f4fe885-08 ovn-installed in OVS
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.143 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.147 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:ed:41 10.100.0.19'], port_security=['fa:16:3e:23:ed:41 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': '24e9bc91-cab7-4459-921c-5000eb9839b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2e91b4b4-aa39-4f2a-bb46-9126feb64b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2d56de33-8a42-4b7a-a729-f5a7b63e022f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19856422-45a4-439c-b584-d577928b61a5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0f4fe885-0823-4b8e-93ad-70a45aba4b2e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:54:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.149 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0f4fe885-0823-4b8e-93ad-70a45aba4b2e in datapath 2e91b4b4-aa39-4f2a-bb46-9126feb64b26 unbound from our chassis#033[00m
Dec 13 03:54:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.151 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2e91b4b4-aa39-4f2a-bb46-9126feb64b26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:54:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.152 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e3efacda-dfc3-4754-8c8b-b48816d70add]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:22.153 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26 namespace which is not needed anymore#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.160 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:22 np0005558241 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000072.scope: Deactivated successfully.
Dec 13 03:54:22 np0005558241 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000072.scope: Consumed 16.204s CPU time.
Dec 13 03:54:22 np0005558241 systemd-machined[210538]: Machine qemu-141-instance-00000072 terminated.
Dec 13 03:54:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2786: 321 pgs: 321 active+clean; 359 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 659 KiB/s rd, 3.9 MiB/s wr, 147 op/s
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.215 248514 INFO nova.virt.libvirt.driver [-] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Instance destroyed successfully.#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.216 248514 DEBUG nova.objects.instance [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'resources' on Instance uuid fa7f7cf9-50d4-461e-ab73-21e65aa729a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.236 248514 DEBUG nova.virt.libvirt.vif [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:53:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-537914191',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-537914191',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ge',id=115,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGUq/AunPgAiW/gJ0J2O7Gg2tWHaC1FajarqmNhDhKsCdHoif6mzxgzomxd0iSuMUg8lqNyZetu6VpRVmxHZgqpb0ZC3j0IoZa8EwnwWYlucFJrp/EYR/i0MEEr0woib3Q==',key_name='tempest-TestSecurityGroupsBasicOps-33804060',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:53:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-e5gqnowb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:53:57Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=fa7f7cf9-50d4-461e-ab73-21e65aa729a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.236 248514 DEBUG nova.network.os_vif_util [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "address": "fa:16:3e:f5:d3:da", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6585c17a-67", "ovs_interfaceid": "6585c17a-67f3-4c7c-a637-4acf71e85c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.237 248514 DEBUG nova.network.os_vif_util [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f5:d3:da,bridge_name='br-int',has_traffic_filtering=True,id=6585c17a-67f3-4c7c-a637-4acf71e85c4f,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6585c17a-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.238 248514 DEBUG os_vif [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:d3:da,bridge_name='br-int',has_traffic_filtering=True,id=6585c17a-67f3-4c7c-a637-4acf71e85c4f,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6585c17a-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.239 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.239 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6585c17a-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.241 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.244 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.247 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.250 248514 INFO os_vif [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:d3:da,bridge_name='br-int',has_traffic_filtering=True,id=6585c17a-67f3-4c7c-a637-4acf71e85c4f,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6585c17a-67')#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.329 248514 INFO nova.virt.libvirt.driver [-] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Instance destroyed successfully.#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.330 248514 DEBUG nova.objects.instance [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid 24e9bc91-cab7-4459-921c-5000eb9839b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.344 248514 DEBUG nova.virt.libvirt.vif [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:53:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1197553871',display_name='tempest-TestNetworkBasicOps-server-1197553871',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1197553871',id=114,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN0tz1T7CupVnJjmfwxRYFIICN0QfqtBB3GDRo75b9UqyPvHjXgcUKDczGDgsRsdgI58Js+Fgc15P+M+AHNMFdqZevDxnjQbmKdK1Wi86XTXa0E7byhCNYmQdGF2ON/oDA==',key_name='tempest-TestNetworkBasicOps-982734592',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:53:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-03pexmx7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:53:55Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=24e9bc91-cab7-4459-921c-5000eb9839b7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.344 248514 DEBUG nova.network.os_vif_util [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "address": "fa:16:3e:23:ed:41", "network": {"id": "2e91b4b4-aa39-4f2a-bb46-9126feb64b26", "bridge": "br-int", "label": "tempest-network-smoke--253609953", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f4fe885-08", "ovs_interfaceid": "0f4fe885-0823-4b8e-93ad-70a45aba4b2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.345 248514 DEBUG nova.network.os_vif_util [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:ed:41,bridge_name='br-int',has_traffic_filtering=True,id=0f4fe885-0823-4b8e-93ad-70a45aba4b2e,network=Network(2e91b4b4-aa39-4f2a-bb46-9126feb64b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f4fe885-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.345 248514 DEBUG os_vif [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:ed:41,bridge_name='br-int',has_traffic_filtering=True,id=0f4fe885-0823-4b8e-93ad-70a45aba4b2e,network=Network(2e91b4b4-aa39-4f2a-bb46-9126feb64b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f4fe885-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.347 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.347 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f4fe885-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.349 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.350 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.353 248514 INFO os_vif [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:ed:41,bridge_name='br-int',has_traffic_filtering=True,id=0f4fe885-0823-4b8e-93ad-70a45aba4b2e,network=Network(2e91b4b4-aa39-4f2a-bb46-9126feb64b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f4fe885-08')#033[00m
Dec 13 03:54:22 np0005558241 neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26[362511]: [NOTICE]   (362518) : haproxy version is 2.8.14-c23fe91
Dec 13 03:54:22 np0005558241 neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26[362511]: [NOTICE]   (362518) : path to executable is /usr/sbin/haproxy
Dec 13 03:54:22 np0005558241 neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26[362511]: [WARNING]  (362518) : Exiting Master process...
Dec 13 03:54:22 np0005558241 neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26[362511]: [WARNING]  (362518) : Exiting Master process...
Dec 13 03:54:22 np0005558241 neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26[362511]: [ALERT]    (362518) : Current worker (362524) exited with code 143 (Terminated)
Dec 13 03:54:22 np0005558241 neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26[362511]: [WARNING]  (362518) : All workers exited. Exiting... (0)
Dec 13 03:54:22 np0005558241 systemd[1]: libpod-5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d.scope: Deactivated successfully.
Dec 13 03:54:22 np0005558241 podman[362858]: 2025-12-13 08:54:22.442686423 +0000 UTC m=+0.187233375 container died 5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.543 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.591 248514 DEBUG nova.compute.manager [req-62f67993-2368-45c9-b4ce-22d24d490a5e req-d9d7e890-e84d-437f-aa70-71c4291d8022 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received event network-vif-unplugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.591 248514 DEBUG oslo_concurrency.lockutils [req-62f67993-2368-45c9-b4ce-22d24d490a5e req-d9d7e890-e84d-437f-aa70-71c4291d8022 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.592 248514 DEBUG oslo_concurrency.lockutils [req-62f67993-2368-45c9-b4ce-22d24d490a5e req-d9d7e890-e84d-437f-aa70-71c4291d8022 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.592 248514 DEBUG oslo_concurrency.lockutils [req-62f67993-2368-45c9-b4ce-22d24d490a5e req-d9d7e890-e84d-437f-aa70-71c4291d8022 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.592 248514 DEBUG nova.compute.manager [req-62f67993-2368-45c9-b4ce-22d24d490a5e req-d9d7e890-e84d-437f-aa70-71c4291d8022 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] No waiting events found dispatching network-vif-unplugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.592 248514 DEBUG nova.compute.manager [req-62f67993-2368-45c9-b4ce-22d24d490a5e req-d9d7e890-e84d-437f-aa70-71c4291d8022 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received event network-vif-unplugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:54:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:22Z|01119|binding|INFO|Releasing lport d8892183-3d82-41f5-b0bd-dc5a1c170b1a from this chassis (sb_readonly=0)
Dec 13 03:54:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:22Z|01120|binding|INFO|Releasing lport eef5d4b2-f2d3-4d15-9528-0e68d65ce454 from this chassis (sb_readonly=0)
Dec 13 03:54:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:22Z|01121|binding|INFO|Releasing lport 924f5729-755e-4be7-818a-17dd23445f7d from this chassis (sb_readonly=0)
Dec 13 03:54:22 np0005558241 nova_compute[248510]: 2025-12-13 08:54:22.898 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:23 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d-userdata-shm.mount: Deactivated successfully.
Dec 13 03:54:23 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c970440edf2f59bc003d5f5f6092186ad69c47ce3109c48710c40f04e5794b1f-merged.mount: Deactivated successfully.
Dec 13 03:54:23 np0005558241 podman[362858]: 2025-12-13 08:54:23.503155143 +0000 UTC m=+1.247702105 container cleanup 5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 13 03:54:23 np0005558241 systemd[1]: libpod-conmon-5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d.scope: Deactivated successfully.
Dec 13 03:54:23 np0005558241 podman[362939]: 2025-12-13 08:54:23.654379644 +0000 UTC m=+0.124847191 container remove 5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:54:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:23.660 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d9e359c7-a9a0-4e43-959b-fa75bf07367f]: (4, ('Sat Dec 13 08:54:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26 (5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d)\n5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d\nSat Dec 13 08:54:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26 (5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d)\n5d55a542174844f12c8273254a116c4b40a5bc0c9eb1c59f54f12a8c2d06676d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:23.662 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e4688b7e-6dc4-4154-a724-d061e38bc45c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:23.663 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e91b4b4-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:54:23 np0005558241 nova_compute[248510]: 2025-12-13 08:54:23.666 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:23 np0005558241 kernel: tap2e91b4b4-a0: left promiscuous mode
Dec 13 03:54:23 np0005558241 nova_compute[248510]: 2025-12-13 08:54:23.681 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:23.684 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[24140a55-c054-4de6-a987-9a7b4e5a1db2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:23.701 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d62f610c-3d8d-4554-94a2-fda71b254706]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:23.703 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8477655b-b702-47df-9c4d-78423fdd28ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:23.725 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[67ea16b8-d63e-4454-8565-7930e67da868]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861176, 'reachable_time': 18940, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362955, 'error': None, 'target': 'ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:23.727 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2e91b4b4-aa39-4f2a-bb46-9126feb64b26 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:54:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:23.728 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[817ed9c3-86b6-496b-9381-88a98c6937f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:23 np0005558241 systemd[1]: run-netns-ovnmeta\x2d2e91b4b4\x2daa39\x2d4f2a\x2dbb46\x2d9126feb64b26.mount: Deactivated successfully.
Dec 13 03:54:23 np0005558241 nova_compute[248510]: 2025-12-13 08:54:23.921 248514 INFO nova.virt.libvirt.driver [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Deleting instance files /var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4_del#033[00m
Dec 13 03:54:23 np0005558241 nova_compute[248510]: 2025-12-13 08:54:23.922 248514 INFO nova.virt.libvirt.driver [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Deletion of /var/lib/nova/instances/fa7f7cf9-50d4-461e-ab73-21e65aa729a4_del complete#033[00m
Dec 13 03:54:23 np0005558241 nova_compute[248510]: 2025-12-13 08:54:23.931 248514 INFO nova.virt.libvirt.driver [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Deleting instance files /var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7_del#033[00m
Dec 13 03:54:23 np0005558241 nova_compute[248510]: 2025-12-13 08:54:23.932 248514 INFO nova.virt.libvirt.driver [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Deletion of /var/lib/nova/instances/24e9bc91-cab7-4459-921c-5000eb9839b7_del complete#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.047 248514 INFO nova.compute.manager [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Took 1.98 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.048 248514 DEBUG oslo.service.loopingcall [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.048 248514 DEBUG nova.compute.manager [-] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.048 248514 DEBUG nova.network.neutron [-] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.072 248514 INFO nova.compute.manager [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Took 2.69 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.073 248514 DEBUG oslo.service.loopingcall [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.073 248514 DEBUG nova.compute.manager [-] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.073 248514 DEBUG nova.network.neutron [-] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:54:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2787: 321 pgs: 321 active+clean; 266 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 678 KiB/s rd, 3.9 MiB/s wr, 177 op/s
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.393 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616049.3917692, fcc617ec-f5f9-41bb-ad4b-86d790622e74 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.393 248514 INFO nova.compute.manager [-] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.425 248514 DEBUG nova.compute.manager [None req-eccedc19-d0b2-4dbf-a796-f7b480405e59 - - - - - -] [instance: fcc617ec-f5f9-41bb-ad4b-86d790622e74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.756 248514 DEBUG nova.compute.manager [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received event network-vif-plugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.757 248514 DEBUG oslo_concurrency.lockutils [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.757 248514 DEBUG oslo_concurrency.lockutils [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.757 248514 DEBUG oslo_concurrency.lockutils [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.758 248514 DEBUG nova.compute.manager [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] No waiting events found dispatching network-vif-plugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.758 248514 WARNING nova.compute.manager [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received unexpected event network-vif-plugged-6585c17a-67f3-4c7c-a637-4acf71e85c4f for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.758 248514 DEBUG nova.compute.manager [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Received event network-vif-unplugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.758 248514 DEBUG oslo_concurrency.lockutils [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.759 248514 DEBUG oslo_concurrency.lockutils [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.759 248514 DEBUG oslo_concurrency.lockutils [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.759 248514 DEBUG nova.compute.manager [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] No waiting events found dispatching network-vif-unplugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.759 248514 DEBUG nova.compute.manager [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Received event network-vif-unplugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.760 248514 DEBUG nova.compute.manager [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Received event network-vif-plugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.760 248514 DEBUG oslo_concurrency.lockutils [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.760 248514 DEBUG oslo_concurrency.lockutils [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.760 248514 DEBUG oslo_concurrency.lockutils [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.761 248514 DEBUG nova.compute.manager [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] No waiting events found dispatching network-vif-plugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:54:24 np0005558241 nova_compute[248510]: 2025-12-13 08:54:24.761 248514 WARNING nova.compute.manager [req-1993b26e-0df0-4ec4-a65f-140a5487dd5f req-5a520ee8-998d-4c45-a8c8-2818cf16f5da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Received unexpected event network-vif-plugged-0f4fe885-0823-4b8e-93ad-70a45aba4b2e for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:54:25 np0005558241 nova_compute[248510]: 2025-12-13 08:54:25.442 248514 DEBUG nova.network.neutron [-] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:54:25 np0005558241 nova_compute[248510]: 2025-12-13 08:54:25.479 248514 DEBUG nova.network.neutron [-] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:54:25 np0005558241 nova_compute[248510]: 2025-12-13 08:54:25.500 248514 INFO nova.compute.manager [-] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Took 1.45 seconds to deallocate network for instance.#033[00m
Dec 13 03:54:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:54:25 np0005558241 nova_compute[248510]: 2025-12-13 08:54:25.594 248514 INFO nova.compute.manager [-] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Took 1.52 seconds to deallocate network for instance.#033[00m
Dec 13 03:54:25 np0005558241 nova_compute[248510]: 2025-12-13 08:54:25.730 248514 DEBUG oslo_concurrency.lockutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:25 np0005558241 nova_compute[248510]: 2025-12-13 08:54:25.731 248514 DEBUG oslo_concurrency.lockutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:25 np0005558241 nova_compute[248510]: 2025-12-13 08:54:25.877 248514 DEBUG oslo_concurrency.lockutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:25 np0005558241 nova_compute[248510]: 2025-12-13 08:54:25.919 248514 DEBUG nova.compute.manager [req-41de5a8d-8ff7-4991-b2ed-5f920c8e663f req-7e77f453-c30c-4ead-a4a7-2019edb98290 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Received event network-vif-deleted-0f4fe885-0823-4b8e-93ad-70a45aba4b2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:26 np0005558241 nova_compute[248510]: 2025-12-13 08:54:26.027 248514 DEBUG oslo_concurrency.processutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:54:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2788: 321 pgs: 321 active+clean; 266 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 44 KiB/s wr, 36 op/s
Dec 13 03:54:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:54:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/742887035' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:54:26 np0005558241 nova_compute[248510]: 2025-12-13 08:54:26.740 248514 DEBUG oslo_concurrency.processutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.713s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:54:26 np0005558241 nova_compute[248510]: 2025-12-13 08:54:26.749 248514 DEBUG nova.compute.provider_tree [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:54:26 np0005558241 nova_compute[248510]: 2025-12-13 08:54:26.798 248514 DEBUG nova.scheduler.client.report [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:54:26 np0005558241 nova_compute[248510]: 2025-12-13 08:54:26.885 248514 DEBUG oslo_concurrency.lockutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:26 np0005558241 nova_compute[248510]: 2025-12-13 08:54:26.888 248514 DEBUG oslo_concurrency.lockutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:26 np0005558241 nova_compute[248510]: 2025-12-13 08:54:26.951 248514 INFO nova.scheduler.client.report [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance 24e9bc91-cab7-4459-921c-5000eb9839b7#033[00m
Dec 13 03:54:27 np0005558241 nova_compute[248510]: 2025-12-13 08:54:27.067 248514 DEBUG oslo_concurrency.processutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:54:27 np0005558241 nova_compute[248510]: 2025-12-13 08:54:27.118 248514 DEBUG oslo_concurrency.lockutils [None req-4a0e59b3-06a8-4c96-97f3-3d3124c94a4d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "24e9bc91-cab7-4459-921c-5000eb9839b7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:27 np0005558241 nova_compute[248510]: 2025-12-13 08:54:27.350 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:27 np0005558241 nova_compute[248510]: 2025-12-13 08:54:27.545 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:54:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/225026825' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:54:27 np0005558241 nova_compute[248510]: 2025-12-13 08:54:27.744 248514 DEBUG oslo_concurrency.processutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.677s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:54:27 np0005558241 nova_compute[248510]: 2025-12-13 08:54:27.750 248514 DEBUG nova.compute.provider_tree [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:54:27 np0005558241 nova_compute[248510]: 2025-12-13 08:54:27.809 248514 DEBUG nova.scheduler.client.report [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:54:27 np0005558241 nova_compute[248510]: 2025-12-13 08:54:27.920 248514 DEBUG oslo_concurrency.lockutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:28 np0005558241 nova_compute[248510]: 2025-12-13 08:54:28.128 248514 DEBUG nova.compute.manager [req-abd24438-a2f6-42b9-88dd-a70f3a663452 req-684ce535-7b6c-4373-8876-10a6e1c6d357 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Received event network-vif-deleted-6585c17a-67f3-4c7c-a637-4acf71e85c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:28 np0005558241 nova_compute[248510]: 2025-12-13 08:54:28.131 248514 INFO nova.scheduler.client.report [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Deleted allocations for instance fa7f7cf9-50d4-461e-ab73-21e65aa729a4#033[00m
Dec 13 03:54:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2789: 321 pgs: 321 active+clean; 200 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 45 KiB/s wr, 62 op/s
Dec 13 03:54:28 np0005558241 nova_compute[248510]: 2025-12-13 08:54:28.389 248514 DEBUG oslo_concurrency.lockutils [None req-e787c06b-ebde-4540-b9d3-f1ba75642043 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "fa7f7cf9-50d4-461e-ab73-21e65aa729a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:28 np0005558241 podman[363004]: 2025-12-13 08:54:28.974940816 +0000 UTC m=+0.060429616 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 13 03:54:28 np0005558241 podman[363003]: 2025-12-13 08:54:28.983172813 +0000 UTC m=+0.066961480 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 03:54:29 np0005558241 podman[363002]: 2025-12-13 08:54:29.013268517 +0000 UTC m=+0.097856734 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 03:54:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2790: 321 pgs: 321 active+clean; 200 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 47 KiB/s wr, 59 op/s
Dec 13 03:54:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:54:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:54:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:54:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:54:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:54:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:54:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:54:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:54:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:54:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:54:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:54:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:54:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:54:31 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:31Z|01122|binding|INFO|Releasing lport d8892183-3d82-41f5-b0bd-dc5a1c170b1a from this chassis (sb_readonly=0)
Dec 13 03:54:31 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:31Z|01123|binding|INFO|Releasing lport eef5d4b2-f2d3-4d15-9528-0e68d65ce454 from this chassis (sb_readonly=0)
Dec 13 03:54:31 np0005558241 nova_compute[248510]: 2025-12-13 08:54:31.746 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:31 np0005558241 podman[363208]: 2025-12-13 08:54:31.920790118 +0000 UTC m=+0.041254756 container create 5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_austin, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 03:54:31 np0005558241 systemd[1]: Started libpod-conmon-5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918.scope.
Dec 13 03:54:31 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:54:31 np0005558241 podman[363208]: 2025-12-13 08:54:31.902264213 +0000 UTC m=+0.022728881 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:54:32 np0005558241 podman[363208]: 2025-12-13 08:54:32.009965734 +0000 UTC m=+0.130430392 container init 5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:54:32 np0005558241 podman[363208]: 2025-12-13 08:54:32.021574565 +0000 UTC m=+0.142039203 container start 5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_austin, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:54:32 np0005558241 podman[363208]: 2025-12-13 08:54:32.025585685 +0000 UTC m=+0.146050333 container attach 5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_austin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:54:32 np0005558241 stupefied_austin[363224]: 167 167
Dec 13 03:54:32 np0005558241 systemd[1]: libpod-5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918.scope: Deactivated successfully.
Dec 13 03:54:32 np0005558241 conmon[363224]: conmon 5a7a0b452dbaa768b1ea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918.scope/container/memory.events
Dec 13 03:54:32 np0005558241 podman[363208]: 2025-12-13 08:54:32.029504783 +0000 UTC m=+0.149969431 container died 5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:54:32 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9383458ef939be1283d2cdc5362744be178b85fc5bf2cd2471978c1b22b7f876-merged.mount: Deactivated successfully.
Dec 13 03:54:32 np0005558241 podman[363208]: 2025-12-13 08:54:32.142518257 +0000 UTC m=+0.262982895 container remove 5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_austin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:54:32 np0005558241 systemd[1]: libpod-conmon-5a7a0b452dbaa768b1ead26e52bf176bef1a44d210b2cd2828ad4fc499c30918.scope: Deactivated successfully.
Dec 13 03:54:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2791: 321 pgs: 321 active+clean; 200 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 14 KiB/s wr, 57 op/s
Dec 13 03:54:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:54:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:54:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:54:32 np0005558241 podman[363249]: 2025-12-13 08:54:32.3344858 +0000 UTC m=+0.060236461 container create b218b4cb70212cf90e37262377d9ac8d4fcd1566239dd54066e060361b1e45f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:54:32 np0005558241 nova_compute[248510]: 2025-12-13 08:54:32.352 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:32 np0005558241 systemd[1]: Started libpod-conmon-b218b4cb70212cf90e37262377d9ac8d4fcd1566239dd54066e060361b1e45f1.scope.
Dec 13 03:54:32 np0005558241 podman[363249]: 2025-12-13 08:54:32.299092553 +0000 UTC m=+0.024843234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:54:32 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:54:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7ff9388716a3b5bb0e72b660ba5a5428d9ad9d3c05e3cf2dcb016e3aeb1169/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7ff9388716a3b5bb0e72b660ba5a5428d9ad9d3c05e3cf2dcb016e3aeb1169/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7ff9388716a3b5bb0e72b660ba5a5428d9ad9d3c05e3cf2dcb016e3aeb1169/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7ff9388716a3b5bb0e72b660ba5a5428d9ad9d3c05e3cf2dcb016e3aeb1169/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb7ff9388716a3b5bb0e72b660ba5a5428d9ad9d3c05e3cf2dcb016e3aeb1169/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:32 np0005558241 podman[363249]: 2025-12-13 08:54:32.450617512 +0000 UTC m=+0.176368193 container init b218b4cb70212cf90e37262377d9ac8d4fcd1566239dd54066e060361b1e45f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:54:32 np0005558241 podman[363249]: 2025-12-13 08:54:32.458692105 +0000 UTC m=+0.184442766 container start b218b4cb70212cf90e37262377d9ac8d4fcd1566239dd54066e060361b1e45f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_swartz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:54:32 np0005558241 podman[363249]: 2025-12-13 08:54:32.490984954 +0000 UTC m=+0.216735635 container attach b218b4cb70212cf90e37262377d9ac8d4fcd1566239dd54066e060361b1e45f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Dec 13 03:54:32 np0005558241 nova_compute[248510]: 2025-12-13 08:54:32.547 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:32 np0005558241 nova_compute[248510]: 2025-12-13 08:54:32.713 248514 DEBUG nova.compute.manager [req-40ed5e33-428b-40da-9740-e5c680d435e4 req-40eaa335-04df-43d3-8fca-d140a5e865d2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received event network-changed-1babd66f-ec6a-4702-8a8f-839d32ba8761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:32 np0005558241 nova_compute[248510]: 2025-12-13 08:54:32.714 248514 DEBUG nova.compute.manager [req-40ed5e33-428b-40da-9740-e5c680d435e4 req-40eaa335-04df-43d3-8fca-d140a5e865d2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Refreshing instance network info cache due to event network-changed-1babd66f-ec6a-4702-8a8f-839d32ba8761. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:54:32 np0005558241 nova_compute[248510]: 2025-12-13 08:54:32.714 248514 DEBUG oslo_concurrency.lockutils [req-40ed5e33-428b-40da-9740-e5c680d435e4 req-40eaa335-04df-43d3-8fca-d140a5e865d2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:54:32 np0005558241 nova_compute[248510]: 2025-12-13 08:54:32.714 248514 DEBUG oslo_concurrency.lockutils [req-40ed5e33-428b-40da-9740-e5c680d435e4 req-40eaa335-04df-43d3-8fca-d140a5e865d2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:54:32 np0005558241 nova_compute[248510]: 2025-12-13 08:54:32.715 248514 DEBUG nova.network.neutron [req-40ed5e33-428b-40da-9740-e5c680d435e4 req-40eaa335-04df-43d3-8fca-d140a5e865d2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Refreshing network info cache for port 1babd66f-ec6a-4702-8a8f-839d32ba8761 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:54:32 np0005558241 nova_compute[248510]: 2025-12-13 08:54:32.815 248514 DEBUG oslo_concurrency.lockutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:32 np0005558241 nova_compute[248510]: 2025-12-13 08:54:32.816 248514 DEBUG oslo_concurrency.lockutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:32 np0005558241 nova_compute[248510]: 2025-12-13 08:54:32.817 248514 DEBUG oslo_concurrency.lockutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:32 np0005558241 nova_compute[248510]: 2025-12-13 08:54:32.817 248514 DEBUG oslo_concurrency.lockutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:32 np0005558241 nova_compute[248510]: 2025-12-13 08:54:32.817 248514 DEBUG oslo_concurrency.lockutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:32 np0005558241 nova_compute[248510]: 2025-12-13 08:54:32.818 248514 INFO nova.compute.manager [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Terminating instance#033[00m
Dec 13 03:54:32 np0005558241 nova_compute[248510]: 2025-12-13 08:54:32.819 248514 DEBUG nova.compute.manager [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:54:32 np0005558241 kernel: tap1babd66f-ec (unregistering): left promiscuous mode
Dec 13 03:54:32 np0005558241 NetworkManager[50376]: <info>  [1765616072.9504] device (tap1babd66f-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:54:32 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:32Z|01124|binding|INFO|Releasing lport 1babd66f-ec6a-4702-8a8f-839d32ba8761 from this chassis (sb_readonly=0)
Dec 13 03:54:32 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:32Z|01125|binding|INFO|Setting lport 1babd66f-ec6a-4702-8a8f-839d32ba8761 down in Southbound
Dec 13 03:54:32 np0005558241 nova_compute[248510]: 2025-12-13 08:54:32.959 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:32 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:32Z|01126|binding|INFO|Removing iface tap1babd66f-ec ovn-installed in OVS
Dec 13 03:54:32 np0005558241 nova_compute[248510]: 2025-12-13 08:54:32.962 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:32 np0005558241 sleepy_swartz[363265]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:54:32 np0005558241 sleepy_swartz[363265]: --> All data devices are unavailable
Dec 13 03:54:32 np0005558241 nova_compute[248510]: 2025-12-13 08:54:32.978 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:33 np0005558241 systemd[1]: libpod-b218b4cb70212cf90e37262377d9ac8d4fcd1566239dd54066e060361b1e45f1.scope: Deactivated successfully.
Dec 13 03:54:33 np0005558241 podman[363249]: 2025-12-13 08:54:33.003103584 +0000 UTC m=+0.728854245 container died b218b4cb70212cf90e37262377d9ac8d4fcd1566239dd54066e060361b1e45f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Dec 13 03:54:33 np0005558241 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d00000071.scope: Deactivated successfully.
Dec 13 03:54:33 np0005558241 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d00000071.scope: Consumed 16.542s CPU time.
Dec 13 03:54:33 np0005558241 systemd-machined[210538]: Machine qemu-139-instance-00000071 terminated.
Dec 13 03:54:33 np0005558241 nova_compute[248510]: 2025-12-13 08:54:33.062 248514 INFO nova.virt.libvirt.driver [-] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Instance destroyed successfully.#033[00m
Dec 13 03:54:33 np0005558241 nova_compute[248510]: 2025-12-13 08:54:33.063 248514 DEBUG nova.objects.instance [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'resources' on Instance uuid 2d2a33c7-0a90-4b64-b291-b268d37dce5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:54:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:33.109 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:81:46 10.100.0.10'], port_security=['fa:16:3e:b1:81:46 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2d2a33c7-0a90-4b64-b291-b268d37dce5e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3479ed9a-2670-4333-b282-6f40685ff746', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '57e2154e-1e2d-4537-afe5-11c61b80fdbc ee7df75c-fefa-4bc0-977e-537259cc7755', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9839b158-8451-4098-b558-d860fc6a92ca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1babd66f-ec6a-4702-8a8f-839d32ba8761) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:54:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:33.110 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1babd66f-ec6a-4702-8a8f-839d32ba8761 in datapath 3479ed9a-2670-4333-b282-6f40685ff746 unbound from our chassis#033[00m
Dec 13 03:54:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:33.111 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3479ed9a-2670-4333-b282-6f40685ff746, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:54:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:33.113 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[174c950a-a2fa-418e-b6aa-7adfad7ba150]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:33.114 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746 namespace which is not needed anymore#033[00m
Dec 13 03:54:33 np0005558241 nova_compute[248510]: 2025-12-13 08:54:33.128 248514 DEBUG nova.virt.libvirt.vif [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:53:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-473378416',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-473378416',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=113,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGUq/AunPgAiW/gJ0J2O7Gg2tWHaC1FajarqmNhDhKsCdHoif6mzxgzomxd0iSuMUg8lqNyZetu6VpRVmxHZgqpb0ZC3j0IoZa8EwnwWYlucFJrp/EYR/i0MEEr0woib3Q==',key_name='tempest-TestSecurityGroupsBasicOps-33804060',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:53:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-ogye2bl2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:53:19Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=2d2a33c7-0a90-4b64-b291-b268d37dce5e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:54:33 np0005558241 nova_compute[248510]: 2025-12-13 08:54:33.129 248514 DEBUG nova.network.os_vif_util [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:54:33 np0005558241 nova_compute[248510]: 2025-12-13 08:54:33.129 248514 DEBUG nova.network.os_vif_util [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b1:81:46,bridge_name='br-int',has_traffic_filtering=True,id=1babd66f-ec6a-4702-8a8f-839d32ba8761,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1babd66f-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:54:33 np0005558241 nova_compute[248510]: 2025-12-13 08:54:33.130 248514 DEBUG os_vif [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b1:81:46,bridge_name='br-int',has_traffic_filtering=True,id=1babd66f-ec6a-4702-8a8f-839d32ba8761,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1babd66f-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:54:33 np0005558241 nova_compute[248510]: 2025-12-13 08:54:33.132 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:33 np0005558241 nova_compute[248510]: 2025-12-13 08:54:33.132 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1babd66f-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:54:33 np0005558241 nova_compute[248510]: 2025-12-13 08:54:33.134 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:33 np0005558241 nova_compute[248510]: 2025-12-13 08:54:33.136 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:54:33 np0005558241 nova_compute[248510]: 2025-12-13 08:54:33.138 248514 INFO os_vif [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b1:81:46,bridge_name='br-int',has_traffic_filtering=True,id=1babd66f-ec6a-4702-8a8f-839d32ba8761,network=Network(3479ed9a-2670-4333-b282-6f40685ff746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1babd66f-ec')#033[00m
Dec 13 03:54:33 np0005558241 systemd[1]: var-lib-containers-storage-overlay-fb7ff9388716a3b5bb0e72b660ba5a5428d9ad9d3c05e3cf2dcb016e3aeb1169-merged.mount: Deactivated successfully.
Dec 13 03:54:33 np0005558241 podman[363249]: 2025-12-13 08:54:33.742617666 +0000 UTC m=+1.468368327 container remove b218b4cb70212cf90e37262377d9ac8d4fcd1566239dd54066e060361b1e45f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_swartz, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:54:33 np0005558241 systemd[1]: libpod-conmon-b218b4cb70212cf90e37262377d9ac8d4fcd1566239dd54066e060361b1e45f1.scope: Deactivated successfully.
Dec 13 03:54:33 np0005558241 neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746[360521]: [NOTICE]   (360543) : haproxy version is 2.8.14-c23fe91
Dec 13 03:54:33 np0005558241 neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746[360521]: [NOTICE]   (360543) : path to executable is /usr/sbin/haproxy
Dec 13 03:54:33 np0005558241 neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746[360521]: [WARNING]  (360543) : Exiting Master process...
Dec 13 03:54:33 np0005558241 neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746[360521]: [WARNING]  (360543) : Exiting Master process...
Dec 13 03:54:33 np0005558241 neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746[360521]: [ALERT]    (360543) : Current worker (360547) exited with code 143 (Terminated)
Dec 13 03:54:33 np0005558241 neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746[360521]: [WARNING]  (360543) : All workers exited. Exiting... (0)
Dec 13 03:54:33 np0005558241 systemd[1]: libpod-b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a.scope: Deactivated successfully.
Dec 13 03:54:33 np0005558241 podman[363372]: 2025-12-13 08:54:33.912473325 +0000 UTC m=+0.050121838 container died b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 03:54:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a-userdata-shm.mount: Deactivated successfully.
Dec 13 03:54:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-15bfa81170ec655fc806f556ce96cba9f04e8c1b8d6c8296ea616ed5bdca96a0-merged.mount: Deactivated successfully.
Dec 13 03:54:34 np0005558241 podman[363372]: 2025-12-13 08:54:34.184324391 +0000 UTC m=+0.321972894 container cleanup b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:54:34 np0005558241 systemd[1]: libpod-conmon-b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a.scope: Deactivated successfully.
Dec 13 03:54:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2792: 321 pgs: 321 active+clean; 168 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 14 KiB/s wr, 61 op/s
Dec 13 03:54:34 np0005558241 podman[363438]: 2025-12-13 08:54:34.265609429 +0000 UTC m=+0.057787170 container remove b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:54:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:34.272 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eaadab1d-ba3a-42cf-be0b-7b834c50545b]: (4, ('Sat Dec 13 08:54:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746 (b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a)\nb2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a\nSat Dec 13 08:54:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746 (b2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a)\nb2fc8e04fb4576ca51a45d52078978e830f8328aa191ab3c1c6de2d4932fb30a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:34.274 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f88f5d0e-f7bb-45c5-a04c-5ae9683613d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:34.275 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3479ed9a-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:54:34 np0005558241 podman[363439]: 2025-12-13 08:54:34.275687982 +0000 UTC m=+0.069522804 container create de7dcd55023894b978500ad67aeb3a212305aa3c5087a21e5c984f60fb412dac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 03:54:34 np0005558241 kernel: tap3479ed9a-20: left promiscuous mode
Dec 13 03:54:34 np0005558241 nova_compute[248510]: 2025-12-13 08:54:34.278 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:34 np0005558241 nova_compute[248510]: 2025-12-13 08:54:34.293 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:34.295 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[abc18ef4-c31e-4e53-aeda-77d9e51fbbd5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:34.311 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb6fa77-4de5-470f-b583-fc341e2f7cf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:34.312 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d3d7003c-0bcf-4535-93ad-32fad9eaee71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:34 np0005558241 podman[363439]: 2025-12-13 08:54:34.231514764 +0000 UTC m=+0.025349616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:54:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:34.329 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[228cff81-c5ce-49a0-8e76-cfe6737fcd1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 857111, 'reachable_time': 36220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363467, 'error': None, 'target': 'ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:34.332 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3479ed9a-2670-4333-b282-6f40685ff746 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:54:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:34.332 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[bde75909-b5b5-42ab-82ac-9480c753f266]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:34 np0005558241 systemd[1]: run-netns-ovnmeta\x2d3479ed9a\x2d2670\x2d4333\x2db282\x2d6f40685ff746.mount: Deactivated successfully.
Dec 13 03:54:34 np0005558241 systemd[1]: Started libpod-conmon-de7dcd55023894b978500ad67aeb3a212305aa3c5087a21e5c984f60fb412dac.scope.
Dec 13 03:54:34 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:54:34 np0005558241 podman[363439]: 2025-12-13 08:54:34.589377327 +0000 UTC m=+0.383212189 container init de7dcd55023894b978500ad67aeb3a212305aa3c5087a21e5c984f60fb412dac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 03:54:34 np0005558241 podman[363439]: 2025-12-13 08:54:34.597707356 +0000 UTC m=+0.391542188 container start de7dcd55023894b978500ad67aeb3a212305aa3c5087a21e5c984f60fb412dac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_elion, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:54:34 np0005558241 dazzling_elion[363470]: 167 167
Dec 13 03:54:34 np0005558241 systemd[1]: libpod-de7dcd55023894b978500ad67aeb3a212305aa3c5087a21e5c984f60fb412dac.scope: Deactivated successfully.
Dec 13 03:54:34 np0005558241 nova_compute[248510]: 2025-12-13 08:54:34.649 248514 DEBUG nova.compute.manager [req-86e29c42-1e83-4c32-9d52-d18e09806654 req-203e37a5-2aea-48a9-be18-aa33c9864770 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received event network-vif-unplugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:34 np0005558241 nova_compute[248510]: 2025-12-13 08:54:34.650 248514 DEBUG oslo_concurrency.lockutils [req-86e29c42-1e83-4c32-9d52-d18e09806654 req-203e37a5-2aea-48a9-be18-aa33c9864770 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:34 np0005558241 nova_compute[248510]: 2025-12-13 08:54:34.650 248514 DEBUG oslo_concurrency.lockutils [req-86e29c42-1e83-4c32-9d52-d18e09806654 req-203e37a5-2aea-48a9-be18-aa33c9864770 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:34 np0005558241 nova_compute[248510]: 2025-12-13 08:54:34.651 248514 DEBUG oslo_concurrency.lockutils [req-86e29c42-1e83-4c32-9d52-d18e09806654 req-203e37a5-2aea-48a9-be18-aa33c9864770 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:34 np0005558241 nova_compute[248510]: 2025-12-13 08:54:34.651 248514 DEBUG nova.compute.manager [req-86e29c42-1e83-4c32-9d52-d18e09806654 req-203e37a5-2aea-48a9-be18-aa33c9864770 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] No waiting events found dispatching network-vif-unplugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:54:34 np0005558241 nova_compute[248510]: 2025-12-13 08:54:34.651 248514 DEBUG nova.compute.manager [req-86e29c42-1e83-4c32-9d52-d18e09806654 req-203e37a5-2aea-48a9-be18-aa33c9864770 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received event network-vif-unplugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:54:34 np0005558241 podman[363439]: 2025-12-13 08:54:34.708606206 +0000 UTC m=+0.502441038 container attach de7dcd55023894b978500ad67aeb3a212305aa3c5087a21e5c984f60fb412dac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 03:54:34 np0005558241 podman[363439]: 2025-12-13 08:54:34.7091399 +0000 UTC m=+0.502974732 container died de7dcd55023894b978500ad67aeb3a212305aa3c5087a21e5c984f60fb412dac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_elion, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 03:54:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-92ac515acc6bf0a20b0985c124b8eb1fcc7ef2345ba74df4b0fe39e7bd1bd4ca-merged.mount: Deactivated successfully.
Dec 13 03:54:34 np0005558241 podman[363439]: 2025-12-13 08:54:34.859339935 +0000 UTC m=+0.653174777 container remove de7dcd55023894b978500ad67aeb3a212305aa3c5087a21e5c984f60fb412dac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 03:54:34 np0005558241 systemd[1]: libpod-conmon-de7dcd55023894b978500ad67aeb3a212305aa3c5087a21e5c984f60fb412dac.scope: Deactivated successfully.
Dec 13 03:54:34 np0005558241 nova_compute[248510]: 2025-12-13 08:54:34.967 248514 INFO nova.virt.libvirt.driver [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Deleting instance files /var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e_del#033[00m
Dec 13 03:54:34 np0005558241 nova_compute[248510]: 2025-12-13 08:54:34.969 248514 INFO nova.virt.libvirt.driver [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Deletion of /var/lib/nova/instances/2d2a33c7-0a90-4b64-b291-b268d37dce5e_del complete#033[00m
Dec 13 03:54:35 np0005558241 podman[363496]: 2025-12-13 08:54:35.054934429 +0000 UTC m=+0.051273076 container create f551c11553e4e21f25ca83306dca87591f073c61a526d1ab84ddf76c85e28744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_einstein, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.087 248514 DEBUG nova.compute.manager [req-2c0b044b-a2ac-446b-b77d-d5578f2126fa req-a2883b08-7e0d-46b9-8d2d-2029d32d0b81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received event network-changed-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.087 248514 DEBUG nova.compute.manager [req-2c0b044b-a2ac-446b-b77d-d5578f2126fa req-a2883b08-7e0d-46b9-8d2d-2029d32d0b81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Refreshing instance network info cache due to event network-changed-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.087 248514 DEBUG oslo_concurrency.lockutils [req-2c0b044b-a2ac-446b-b77d-d5578f2126fa req-a2883b08-7e0d-46b9-8d2d-2029d32d0b81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.087 248514 DEBUG oslo_concurrency.lockutils [req-2c0b044b-a2ac-446b-b77d-d5578f2126fa req-a2883b08-7e0d-46b9-8d2d-2029d32d0b81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.088 248514 DEBUG nova.network.neutron [req-2c0b044b-a2ac-446b-b77d-d5578f2126fa req-a2883b08-7e0d-46b9-8d2d-2029d32d0b81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Refreshing network info cache for port 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:54:35 np0005558241 systemd[1]: Started libpod-conmon-f551c11553e4e21f25ca83306dca87591f073c61a526d1ab84ddf76c85e28744.scope.
Dec 13 03:54:35 np0005558241 podman[363496]: 2025-12-13 08:54:35.027348838 +0000 UTC m=+0.023687505 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:54:35 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:54:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fff8362eb3112d90c4b1a03f4c209e0db3d1108ec59cc5e446f93f3224e27c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fff8362eb3112d90c4b1a03f4c209e0db3d1108ec59cc5e446f93f3224e27c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fff8362eb3112d90c4b1a03f4c209e0db3d1108ec59cc5e446f93f3224e27c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fff8362eb3112d90c4b1a03f4c209e0db3d1108ec59cc5e446f93f3224e27c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.144 248514 INFO nova.compute.manager [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Took 2.32 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.144 248514 DEBUG oslo.service.loopingcall [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.144 248514 DEBUG nova.compute.manager [-] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.145 248514 DEBUG nova.network.neutron [-] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.159 248514 DEBUG oslo_concurrency.lockutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "af2dc023-560c-4c66-b330-e41218a7a4eb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.160 248514 DEBUG oslo_concurrency.lockutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.160 248514 DEBUG oslo_concurrency.lockutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.160 248514 DEBUG oslo_concurrency.lockutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.160 248514 DEBUG oslo_concurrency.lockutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:35 np0005558241 podman[363496]: 2025-12-13 08:54:35.16144272 +0000 UTC m=+0.157781387 container init f551c11553e4e21f25ca83306dca87591f073c61a526d1ab84ddf76c85e28744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_einstein, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.161 248514 INFO nova.compute.manager [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Terminating instance#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.162 248514 DEBUG nova.compute.manager [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:54:35 np0005558241 podman[363496]: 2025-12-13 08:54:35.170329043 +0000 UTC m=+0.166667690 container start f551c11553e4e21f25ca83306dca87591f073c61a526d1ab84ddf76c85e28744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:54:35 np0005558241 podman[363496]: 2025-12-13 08:54:35.370816779 +0000 UTC m=+0.367155466 container attach f551c11553e4e21f25ca83306dca87591f073c61a526d1ab84ddf76c85e28744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_einstein, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]: {
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:    "0": [
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:        {
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "devices": [
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "/dev/loop3"
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            ],
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "lv_name": "ceph_lv0",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "lv_size": "21470642176",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "name": "ceph_lv0",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "tags": {
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.cluster_name": "ceph",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.crush_device_class": "",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.encrypted": "0",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.objectstore": "bluestore",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.osd_id": "0",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.type": "block",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.vdo": "0",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.with_tpm": "0"
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            },
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "type": "block",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "vg_name": "ceph_vg0"
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:        }
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:    ],
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:    "1": [
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:        {
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "devices": [
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "/dev/loop4"
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            ],
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "lv_name": "ceph_lv1",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "lv_size": "21470642176",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "name": "ceph_lv1",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "tags": {
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.cluster_name": "ceph",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.crush_device_class": "",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.encrypted": "0",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.objectstore": "bluestore",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.osd_id": "1",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.type": "block",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.vdo": "0",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.with_tpm": "0"
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            },
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "type": "block",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "vg_name": "ceph_vg1"
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:        }
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:    ],
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:    "2": [
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:        {
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "devices": [
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "/dev/loop5"
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            ],
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "lv_name": "ceph_lv2",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "lv_size": "21470642176",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "name": "ceph_lv2",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "tags": {
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.cluster_name": "ceph",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.crush_device_class": "",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.encrypted": "0",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.objectstore": "bluestore",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.osd_id": "2",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.type": "block",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.vdo": "0",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:                "ceph.with_tpm": "0"
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            },
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "type": "block",
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:            "vg_name": "ceph_vg2"
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:        }
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]:    ]
Dec 13 03:54:35 np0005558241 awesome_einstein[363513]: }
Dec 13 03:54:35 np0005558241 systemd[1]: libpod-f551c11553e4e21f25ca83306dca87591f073c61a526d1ab84ddf76c85e28744.scope: Deactivated successfully.
Dec 13 03:54:35 np0005558241 podman[363496]: 2025-12-13 08:54:35.473650198 +0000 UTC m=+0.469988845 container died f551c11553e4e21f25ca83306dca87591f073c61a526d1ab84ddf76c85e28744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_einstein, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 03:54:35 np0005558241 kernel: tap0eac2381-7f (unregistering): left promiscuous mode
Dec 13 03:54:35 np0005558241 NetworkManager[50376]: <info>  [1765616075.5193] device (tap0eac2381-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:54:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:35Z|01127|binding|INFO|Releasing lport 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 from this chassis (sb_readonly=0)
Dec 13 03:54:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:35Z|01128|binding|INFO|Setting lport 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 down in Southbound
Dec 13 03:54:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:54:35Z|01129|binding|INFO|Removing iface tap0eac2381-7f ovn-installed in OVS
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.545 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:35.574 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:eb:91 10.100.0.8'], port_security=['fa:16:3e:6a:eb:91 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'af2dc023-560c-4c66-b330-e41218a7a4eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-09b8bd52-e920-467f-994b-646113fcb821', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e6cedfe2-a795-4750-8f73-fd0610750728', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7e75a11b-9fc0-4a04-84da-8ed3853196e7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0eac2381-7f12-4f67-bde8-76c8fb9ae0b0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:54:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:35.575 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 in datapath 09b8bd52-e920-467f-994b-646113fcb821 unbound from our chassis#033[00m
Dec 13 03:54:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:35.577 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 09b8bd52-e920-467f-994b-646113fcb821, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:54:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:35.578 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[941a0877-994b-4932-81c1-ebe3a417da37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:35.579 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-09b8bd52-e920-467f-994b-646113fcb821 namespace which is not needed anymore#033[00m
Dec 13 03:54:35 np0005558241 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d00000070.scope: Deactivated successfully.
Dec 13 03:54:35 np0005558241 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d00000070.scope: Consumed 15.836s CPU time.
Dec 13 03:54:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:54:35 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7fff8362eb3112d90c4b1a03f4c209e0db3d1108ec59cc5e446f93f3224e27c1-merged.mount: Deactivated successfully.
Dec 13 03:54:35 np0005558241 systemd-machined[210538]: Machine qemu-138-instance-00000070 terminated.
Dec 13 03:54:35 np0005558241 podman[363496]: 2025-12-13 08:54:35.69866839 +0000 UTC m=+0.695007037 container remove f551c11553e4e21f25ca83306dca87591f073c61a526d1ab84ddf76c85e28744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_einstein, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:54:35 np0005558241 systemd[1]: libpod-conmon-f551c11553e4e21f25ca83306dca87591f073c61a526d1ab84ddf76c85e28744.scope: Deactivated successfully.
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.762 248514 DEBUG nova.network.neutron [req-40ed5e33-428b-40da-9740-e5c680d435e4 req-40eaa335-04df-43d3-8fca-d140a5e865d2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Updated VIF entry in instance network info cache for port 1babd66f-ec6a-4702-8a8f-839d32ba8761. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.763 248514 DEBUG nova.network.neutron [req-40ed5e33-428b-40da-9740-e5c680d435e4 req-40eaa335-04df-43d3-8fca-d140a5e865d2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Updating instance_info_cache with network_info: [{"id": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "address": "fa:16:3e:b1:81:46", "network": {"id": "3479ed9a-2670-4333-b282-6f40685ff746", "bridge": "br-int", "label": "tempest-network-smoke--1943394626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1babd66f-ec", "ovs_interfaceid": "1babd66f-ec6a-4702-8a8f-839d32ba8761", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:54:35 np0005558241 neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821[360327]: [NOTICE]   (360349) : haproxy version is 2.8.14-c23fe91
Dec 13 03:54:35 np0005558241 neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821[360327]: [NOTICE]   (360349) : path to executable is /usr/sbin/haproxy
Dec 13 03:54:35 np0005558241 neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821[360327]: [WARNING]  (360349) : Exiting Master process...
Dec 13 03:54:35 np0005558241 neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821[360327]: [WARNING]  (360349) : Exiting Master process...
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.789 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:35 np0005558241 neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821[360327]: [ALERT]    (360349) : Current worker (360351) exited with code 143 (Terminated)
Dec 13 03:54:35 np0005558241 neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821[360327]: [WARNING]  (360349) : All workers exited. Exiting... (0)
Dec 13 03:54:35 np0005558241 systemd[1]: libpod-e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d.scope: Deactivated successfully.
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.799 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:35 np0005558241 podman[363557]: 2025-12-13 08:54:35.8031808 +0000 UTC m=+0.063763160 container died e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.809 248514 INFO nova.virt.libvirt.driver [-] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Instance destroyed successfully.#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.809 248514 DEBUG nova.objects.instance [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid af2dc023-560c-4c66-b330-e41218a7a4eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.888 248514 DEBUG nova.virt.libvirt.vif [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:52:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-673770696',display_name='tempest-TestNetworkBasicOps-server-673770696',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-673770696',id=112,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFWWKdP0apeEEX6KLq89U2vRGSHeV3KAUwR7F/v8SOdmJ9w4un8uAKW6W1VsXiUAnc8fLGuX3ip0yk759e6Z6EnqMVZe+COaAk19ulIyzOUeifphXpMKMaa2a+4orpaKaw==',key_name='tempest-TestNetworkBasicOps-724159602',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:53:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-ahio6eh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:53:12Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=af2dc023-560c-4c66-b330-e41218a7a4eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.889 248514 DEBUG nova.network.os_vif_util [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.890 248514 DEBUG nova.network.os_vif_util [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=0eac2381-7f12-4f67-bde8-76c8fb9ae0b0,network=Network(09b8bd52-e920-467f-994b-646113fcb821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eac2381-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.891 248514 DEBUG os_vif [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=0eac2381-7f12-4f67-bde8-76c8fb9ae0b0,network=Network(09b8bd52-e920-467f-994b-646113fcb821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eac2381-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.893 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.893 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0eac2381-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.894 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.896 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.898 248514 INFO os_vif [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:eb:91,bridge_name='br-int',has_traffic_filtering=True,id=0eac2381-7f12-4f67-bde8-76c8fb9ae0b0,network=Network(09b8bd52-e920-467f-994b-646113fcb821),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0eac2381-7f')#033[00m
Dec 13 03:54:35 np0005558241 nova_compute[248510]: 2025-12-13 08:54:35.914 248514 DEBUG oslo_concurrency.lockutils [req-40ed5e33-428b-40da-9740-e5c680d435e4 req-40eaa335-04df-43d3-8fca-d140a5e865d2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2d2a33c7-0a90-4b64-b291-b268d37dce5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:54:35 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d-userdata-shm.mount: Deactivated successfully.
Dec 13 03:54:35 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8de9dacfc504e0470aec8b697e93be4e6449bd328978b3076d04e8364d2802e5-merged.mount: Deactivated successfully.
Dec 13 03:54:35 np0005558241 podman[363557]: 2025-12-13 08:54:35.97986647 +0000 UTC m=+0.240448800 container cleanup e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:54:35 np0005558241 systemd[1]: libpod-conmon-e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d.scope: Deactivated successfully.
Dec 13 03:54:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2793: 321 pgs: 321 active+clean; 168 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 9.8 KiB/s wr, 31 op/s
Dec 13 03:54:36 np0005558241 podman[363664]: 2025-12-13 08:54:36.309158335 +0000 UTC m=+0.301558511 container remove e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 03:54:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:36.315 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e80e60bb-109d-4ddb-9a82-4256d6100c21]: (4, ('Sat Dec 13 08:54:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821 (e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d)\ne1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d\nSat Dec 13 08:54:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-09b8bd52-e920-467f-994b-646113fcb821 (e1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d)\ne1d5bb734bd063f13a98dfdef73c9a54836d3b0e8f7c929b0c923f4ef1ff9e7d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:36.317 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[41ad0d82-b5a4-4f0c-a830-0576f8f92945]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:36.318 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09b8bd52-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:54:36 np0005558241 nova_compute[248510]: 2025-12-13 08:54:36.319 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:36 np0005558241 kernel: tap09b8bd52-e0: left promiscuous mode
Dec 13 03:54:36 np0005558241 nova_compute[248510]: 2025-12-13 08:54:36.336 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:36.339 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[145e0cc3-69dc-4588-a7e7-99f32354503b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:36.353 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7a295e-3f28-4fec-b2b9-eb774f5c443d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:36.355 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f6afc58e-f936-44b3-b2d6-4fb68f04b16b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:36.373 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8d567227-3b7f-4e4b-9695-0bf8388e6093]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856903, 'reachable_time': 40139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363701, 'error': None, 'target': 'ovnmeta-09b8bd52-e920-467f-994b-646113fcb821', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:36 np0005558241 systemd[1]: run-netns-ovnmeta\x2d09b8bd52\x2de920\x2d467f\x2d994b\x2d646113fcb821.mount: Deactivated successfully.
Dec 13 03:54:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:36.376 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-09b8bd52-e920-467f-994b-646113fcb821 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:54:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:36.376 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[8ddc12f1-f279-494b-8d33-944d480b85f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:36 np0005558241 podman[363688]: 2025-12-13 08:54:36.407514671 +0000 UTC m=+0.100007148 container create 31fc6c6c9909d27933cdef3e4322a6cc658c381d145f8c803960cdf87708cf2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:54:36 np0005558241 podman[363688]: 2025-12-13 08:54:36.333735021 +0000 UTC m=+0.026227528 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:54:36 np0005558241 systemd[1]: Started libpod-conmon-31fc6c6c9909d27933cdef3e4322a6cc658c381d145f8c803960cdf87708cf2b.scope.
Dec 13 03:54:36 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:54:36 np0005558241 podman[363688]: 2025-12-13 08:54:36.587779671 +0000 UTC m=+0.280272238 container init 31fc6c6c9909d27933cdef3e4322a6cc658c381d145f8c803960cdf87708cf2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_tharp, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:54:36 np0005558241 podman[363688]: 2025-12-13 08:54:36.597367981 +0000 UTC m=+0.289860469 container start 31fc6c6c9909d27933cdef3e4322a6cc658c381d145f8c803960cdf87708cf2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_tharp, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:54:36 np0005558241 agitated_tharp[363709]: 167 167
Dec 13 03:54:36 np0005558241 systemd[1]: libpod-31fc6c6c9909d27933cdef3e4322a6cc658c381d145f8c803960cdf87708cf2b.scope: Deactivated successfully.
Dec 13 03:54:36 np0005558241 podman[363688]: 2025-12-13 08:54:36.650569125 +0000 UTC m=+0.343061612 container attach 31fc6c6c9909d27933cdef3e4322a6cc658c381d145f8c803960cdf87708cf2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 03:54:36 np0005558241 podman[363688]: 2025-12-13 08:54:36.65116687 +0000 UTC m=+0.343659347 container died 31fc6c6c9909d27933cdef3e4322a6cc658c381d145f8c803960cdf87708cf2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_tharp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:54:36 np0005558241 nova_compute[248510]: 2025-12-13 08:54:36.843 248514 DEBUG nova.network.neutron [-] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:54:36 np0005558241 nova_compute[248510]: 2025-12-13 08:54:36.862 248514 INFO nova.compute.manager [-] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Took 1.72 seconds to deallocate network for instance.#033[00m
Dec 13 03:54:36 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d5d9d265d2552d906e1e709f82daaf6ae3ca8b15fd8aa36f5257c8cd1024ee9f-merged.mount: Deactivated successfully.
Dec 13 03:54:36 np0005558241 nova_compute[248510]: 2025-12-13 08:54:36.909 248514 DEBUG nova.compute.manager [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received event network-vif-plugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:36 np0005558241 nova_compute[248510]: 2025-12-13 08:54:36.909 248514 DEBUG oslo_concurrency.lockutils [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:36 np0005558241 nova_compute[248510]: 2025-12-13 08:54:36.910 248514 DEBUG oslo_concurrency.lockutils [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:36 np0005558241 nova_compute[248510]: 2025-12-13 08:54:36.910 248514 DEBUG oslo_concurrency.lockutils [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:36 np0005558241 nova_compute[248510]: 2025-12-13 08:54:36.910 248514 DEBUG nova.compute.manager [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] No waiting events found dispatching network-vif-plugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:54:36 np0005558241 nova_compute[248510]: 2025-12-13 08:54:36.910 248514 WARNING nova.compute.manager [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received unexpected event network-vif-plugged-1babd66f-ec6a-4702-8a8f-839d32ba8761 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:54:36 np0005558241 nova_compute[248510]: 2025-12-13 08:54:36.911 248514 DEBUG nova.compute.manager [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received event network-vif-unplugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:36 np0005558241 nova_compute[248510]: 2025-12-13 08:54:36.911 248514 DEBUG oslo_concurrency.lockutils [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:36 np0005558241 nova_compute[248510]: 2025-12-13 08:54:36.911 248514 DEBUG oslo_concurrency.lockutils [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:36 np0005558241 nova_compute[248510]: 2025-12-13 08:54:36.912 248514 DEBUG oslo_concurrency.lockutils [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:36 np0005558241 nova_compute[248510]: 2025-12-13 08:54:36.912 248514 DEBUG nova.compute.manager [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] No waiting events found dispatching network-vif-unplugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:54:36 np0005558241 nova_compute[248510]: 2025-12-13 08:54:36.912 248514 DEBUG nova.compute.manager [req-b8d2d0bc-efa6-4124-bd9a-4a732b229832 req-97382a22-4cc0-4466-8d41-21adff32ff22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received event network-vif-unplugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:54:36 np0005558241 nova_compute[248510]: 2025-12-13 08:54:36.923 248514 DEBUG oslo_concurrency.lockutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:36 np0005558241 nova_compute[248510]: 2025-12-13 08:54:36.924 248514 DEBUG oslo_concurrency.lockutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:36 np0005558241 podman[363688]: 2025-12-13 08:54:36.944518455 +0000 UTC m=+0.637010943 container remove 31fc6c6c9909d27933cdef3e4322a6cc658c381d145f8c803960cdf87708cf2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_tharp, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:54:36 np0005558241 systemd[1]: libpod-conmon-31fc6c6c9909d27933cdef3e4322a6cc658c381d145f8c803960cdf87708cf2b.scope: Deactivated successfully.
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.014 248514 DEBUG oslo_concurrency.processutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:54:37 np0005558241 podman[363737]: 2025-12-13 08:54:37.168375568 +0000 UTC m=+0.084796407 container create 222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 03:54:37 np0005558241 podman[363737]: 2025-12-13 08:54:37.108720552 +0000 UTC m=+0.025141411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:54:37 np0005558241 systemd[1]: Started libpod-conmon-222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c.scope.
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.214 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616062.2129524, fa7f7cf9-50d4-461e-ab73-21e65aa729a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.215 248514 INFO nova.compute.manager [-] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:54:37 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:54:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b42cbd1e3a447c23341df03fdaf73a114c8a32494b7ddc4723dc2122d31b00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.244 248514 DEBUG nova.compute.manager [None req-88a6f8da-c6f3-46af-ab82-23ce71795305 - - - - - -] [instance: fa7f7cf9-50d4-461e-ab73-21e65aa729a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:54:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b42cbd1e3a447c23341df03fdaf73a114c8a32494b7ddc4723dc2122d31b00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b42cbd1e3a447c23341df03fdaf73a114c8a32494b7ddc4723dc2122d31b00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b42cbd1e3a447c23341df03fdaf73a114c8a32494b7ddc4723dc2122d31b00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:37 np0005558241 podman[363737]: 2025-12-13 08:54:37.256776155 +0000 UTC m=+0.173196994 container init 222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:54:37 np0005558241 podman[363737]: 2025-12-13 08:54:37.267793791 +0000 UTC m=+0.184214630 container start 222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_euclid, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 03:54:37 np0005558241 podman[363737]: 2025-12-13 08:54:37.271562555 +0000 UTC m=+0.187983394 container attach 222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.271 248514 INFO nova.virt.libvirt.driver [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Deleting instance files /var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb_del#033[00m
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.271 248514 INFO nova.virt.libvirt.driver [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Deletion of /var/lib/nova/instances/af2dc023-560c-4c66-b330-e41218a7a4eb_del complete#033[00m
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.326 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616062.2975323, 24e9bc91-cab7-4459-921c-5000eb9839b7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.326 248514 INFO nova.compute.manager [-] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.331 248514 INFO nova.compute.manager [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Took 2.17 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.332 248514 DEBUG oslo.service.loopingcall [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.332 248514 DEBUG nova.compute.manager [-] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.333 248514 DEBUG nova.network.neutron [-] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.348 248514 DEBUG nova.compute.manager [None req-9f5000d1-baea-47e8-add8-9b3aa2274ff4 - - - - - -] [instance: 24e9bc91-cab7-4459-921c-5000eb9839b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.549 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:54:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4159612600' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.646 248514 DEBUG oslo_concurrency.processutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.655 248514 DEBUG nova.compute.provider_tree [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.679 248514 DEBUG nova.scheduler.client.report [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.714 248514 DEBUG oslo_concurrency.lockutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.752 248514 INFO nova.scheduler.client.report [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Deleted allocations for instance 2d2a33c7-0a90-4b64-b291-b268d37dce5e#033[00m
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.823 248514 DEBUG nova.network.neutron [req-2c0b044b-a2ac-446b-b77d-d5578f2126fa req-a2883b08-7e0d-46b9-8d2d-2029d32d0b81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Updated VIF entry in instance network info cache for port 0eac2381-7f12-4f67-bde8-76c8fb9ae0b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.823 248514 DEBUG nova.network.neutron [req-2c0b044b-a2ac-446b-b77d-d5578f2126fa req-a2883b08-7e0d-46b9-8d2d-2029d32d0b81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Updating instance_info_cache with network_info: [{"id": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "address": "fa:16:3e:6a:eb:91", "network": {"id": "09b8bd52-e920-467f-994b-646113fcb821", "bridge": "br-int", "label": "tempest-network-smoke--149806603", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0eac2381-7f", "ovs_interfaceid": "0eac2381-7f12-4f67-bde8-76c8fb9ae0b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.865 248514 DEBUG oslo_concurrency.lockutils [None req-a5d91c7e-5d1f-430e-b42d-5ab4045ce3ce c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2d2a33c7-0a90-4b64-b291-b268d37dce5e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:37 np0005558241 nova_compute[248510]: 2025-12-13 08:54:37.901 248514 DEBUG oslo_concurrency.lockutils [req-2c0b044b-a2ac-446b-b77d-d5578f2126fa req-a2883b08-7e0d-46b9-8d2d-2029d32d0b81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-af2dc023-560c-4c66-b330-e41218a7a4eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:54:38 np0005558241 lvm[363849]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:54:38 np0005558241 lvm[363849]: VG ceph_vg0 finished
Dec 13 03:54:38 np0005558241 lvm[363851]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:54:38 np0005558241 lvm[363851]: VG ceph_vg1 finished
Dec 13 03:54:38 np0005558241 lvm[363852]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:54:38 np0005558241 lvm[363852]: VG ceph_vg2 finished
Dec 13 03:54:38 np0005558241 adoring_euclid[363772]: {}
Dec 13 03:54:38 np0005558241 nova_compute[248510]: 2025-12-13 08:54:38.180 248514 DEBUG nova.network.neutron [-] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:54:38 np0005558241 nova_compute[248510]: 2025-12-13 08:54:38.204 248514 INFO nova.compute.manager [-] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Took 0.87 seconds to deallocate network for instance.#033[00m
Dec 13 03:54:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2794: 321 pgs: 321 active+clean; 132 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 11 KiB/s wr, 57 op/s
Dec 13 03:54:38 np0005558241 systemd[1]: libpod-222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c.scope: Deactivated successfully.
Dec 13 03:54:38 np0005558241 systemd[1]: libpod-222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c.scope: Consumed 1.484s CPU time.
Dec 13 03:54:38 np0005558241 podman[363737]: 2025-12-13 08:54:38.230797756 +0000 UTC m=+1.147218605 container died 222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 03:54:38 np0005558241 nova_compute[248510]: 2025-12-13 08:54:38.255 248514 DEBUG oslo_concurrency.lockutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:38 np0005558241 nova_compute[248510]: 2025-12-13 08:54:38.256 248514 DEBUG oslo_concurrency.lockutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:38 np0005558241 systemd[1]: var-lib-containers-storage-overlay-81b42cbd1e3a447c23341df03fdaf73a114c8a32494b7ddc4723dc2122d31b00-merged.mount: Deactivated successfully.
Dec 13 03:54:38 np0005558241 podman[363737]: 2025-12-13 08:54:38.280642766 +0000 UTC m=+1.197063605 container remove 222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_euclid, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 03:54:38 np0005558241 systemd[1]: libpod-conmon-222271f460bef8bc1111b96c1ea95a8e064004952c41290c26e3ecff5d1e4c4c.scope: Deactivated successfully.
Dec 13 03:54:38 np0005558241 nova_compute[248510]: 2025-12-13 08:54:38.310 248514 DEBUG oslo_concurrency.processutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:54:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:54:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:54:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:54:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:54:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:54:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3781666245' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:54:38 np0005558241 nova_compute[248510]: 2025-12-13 08:54:38.886 248514 DEBUG oslo_concurrency.processutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:54:38 np0005558241 nova_compute[248510]: 2025-12-13 08:54:38.893 248514 DEBUG nova.compute.provider_tree [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:54:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:54:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:54:39 np0005558241 nova_compute[248510]: 2025-12-13 08:54:39.464 248514 DEBUG nova.compute.manager [req-479e7175-9d38-48d6-9923-c32c027fb62b req-f52ff50d-db4b-4bfb-9aa6-b4f07ee7a8ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received event network-vif-plugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:39 np0005558241 nova_compute[248510]: 2025-12-13 08:54:39.464 248514 DEBUG oslo_concurrency.lockutils [req-479e7175-9d38-48d6-9923-c32c027fb62b req-f52ff50d-db4b-4bfb-9aa6-b4f07ee7a8ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:39 np0005558241 nova_compute[248510]: 2025-12-13 08:54:39.465 248514 DEBUG oslo_concurrency.lockutils [req-479e7175-9d38-48d6-9923-c32c027fb62b req-f52ff50d-db4b-4bfb-9aa6-b4f07ee7a8ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:39 np0005558241 nova_compute[248510]: 2025-12-13 08:54:39.465 248514 DEBUG oslo_concurrency.lockutils [req-479e7175-9d38-48d6-9923-c32c027fb62b req-f52ff50d-db4b-4bfb-9aa6-b4f07ee7a8ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:39 np0005558241 nova_compute[248510]: 2025-12-13 08:54:39.465 248514 DEBUG nova.compute.manager [req-479e7175-9d38-48d6-9923-c32c027fb62b req-f52ff50d-db4b-4bfb-9aa6-b4f07ee7a8ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] No waiting events found dispatching network-vif-plugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:54:39 np0005558241 nova_compute[248510]: 2025-12-13 08:54:39.465 248514 WARNING nova.compute.manager [req-479e7175-9d38-48d6-9923-c32c027fb62b req-f52ff50d-db4b-4bfb-9aa6-b4f07ee7a8ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received unexpected event network-vif-plugged-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:54:39 np0005558241 nova_compute[248510]: 2025-12-13 08:54:39.466 248514 DEBUG nova.compute.manager [req-479e7175-9d38-48d6-9923-c32c027fb62b req-f52ff50d-db4b-4bfb-9aa6-b4f07ee7a8ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Received event network-vif-deleted-1babd66f-ec6a-4702-8a8f-839d32ba8761 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:39 np0005558241 nova_compute[248510]: 2025-12-13 08:54:39.466 248514 DEBUG nova.compute.manager [req-479e7175-9d38-48d6-9923-c32c027fb62b req-f52ff50d-db4b-4bfb-9aa6-b4f07ee7a8ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Received event network-vif-deleted-0eac2381-7f12-4f67-bde8-76c8fb9ae0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:54:39 np0005558241 nova_compute[248510]: 2025-12-13 08:54:39.484 248514 DEBUG nova.scheduler.client.report [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:54:39 np0005558241 nova_compute[248510]: 2025-12-13 08:54:39.515 248514 DEBUG oslo_concurrency.lockutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:39 np0005558241 nova_compute[248510]: 2025-12-13 08:54:39.825 248514 INFO nova.scheduler.client.report [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance af2dc023-560c-4c66-b330-e41218a7a4eb#033[00m
Dec 13 03:54:40 np0005558241 nova_compute[248510]: 2025-12-13 08:54:40.107 248514 DEBUG oslo_concurrency.lockutils [None req-5ef2158e-86fb-4fb5-923e-a34d3c0937bd 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "af2dc023-560c-4c66-b330-e41218a7a4eb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:54:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:54:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:54:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:54:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:54:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:54:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2795: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 11 KiB/s wr, 56 op/s
Dec 13 03:54:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:54:40 np0005558241 nova_compute[248510]: 2025-12-13 08:54:40.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:54:40 np0005558241 nova_compute[248510]: 2025-12-13 08:54:40.895 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:41.039 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3f:9d 2001:db8:0:1:f816:3eff:fea4:3f9d'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a97dbe15-88c4-44cf-8dbe-4f1cf8c59d01, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=1cad8a42-b0dd-468b-aee7-cbceaefdb18d) old=Port_Binding(mac=['fa:16:3e:a4:3f:9d 2001:db8::f816:3eff:fea4:3f9d'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea4:3f9d/64', 'neutron:device_id': 'ovnmeta-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a100b8b9-7333-44af-8310-9d269980caa2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ef3ec6f2b544929ac9f319d6eb124d2', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:54:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:41.040 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 1cad8a42-b0dd-468b-aee7-cbceaefdb18d in datapath a100b8b9-7333-44af-8310-9d269980caa2 updated#033[00m
Dec 13 03:54:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:41.041 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a100b8b9-7333-44af-8310-9d269980caa2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:54:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:41.042 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[19cdb1be-caca-4228-9b13-e10d9f699f5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:54:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2796: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.7 KiB/s wr, 55 op/s
Dec 13 03:54:42 np0005558241 nova_compute[248510]: 2025-12-13 08:54:42.551 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2797: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.7 KiB/s wr, 55 op/s
Dec 13 03:54:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:54:45 np0005558241 nova_compute[248510]: 2025-12-13 08:54:45.897 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2798: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.2 KiB/s wr, 51 op/s
Dec 13 03:54:47 np0005558241 nova_compute[248510]: 2025-12-13 08:54:47.205 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:47 np0005558241 nova_compute[248510]: 2025-12-13 08:54:47.348 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:47 np0005558241 nova_compute[248510]: 2025-12-13 08:54:47.554 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:48 np0005558241 nova_compute[248510]: 2025-12-13 08:54:48.060 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616073.0588737, 2d2a33c7-0a90-4b64-b291-b268d37dce5e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:54:48 np0005558241 nova_compute[248510]: 2025-12-13 08:54:48.060 248514 INFO nova.compute.manager [-] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:54:48 np0005558241 nova_compute[248510]: 2025-12-13 08:54:48.118 248514 DEBUG nova.compute.manager [None req-c0f17856-9737-46af-8db9-df2719d861ce - - - - - -] [instance: 2d2a33c7-0a90-4b64-b291-b268d37dce5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:54:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2799: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.2 KiB/s wr, 51 op/s
Dec 13 03:54:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2800: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 511 B/s wr, 25 op/s
Dec 13 03:54:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:54:50 np0005558241 nova_compute[248510]: 2025-12-13 08:54:50.806 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616075.8042293, af2dc023-560c-4c66-b330-e41218a7a4eb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:54:50 np0005558241 nova_compute[248510]: 2025-12-13 08:54:50.806 248514 INFO nova.compute.manager [-] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:54:50 np0005558241 nova_compute[248510]: 2025-12-13 08:54:50.827 248514 DEBUG nova.compute.manager [None req-df016397-2540-4195-939a-5fdcaff1b5bf - - - - - -] [instance: af2dc023-560c-4c66-b330-e41218a7a4eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:54:50 np0005558241 nova_compute[248510]: 2025-12-13 08:54:50.899 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2801: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:54:52 np0005558241 nova_compute[248510]: 2025-12-13 08:54:52.557 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2802: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:54:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:55.431 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:54:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:55.432 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:54:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:54:55.432 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:54:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:54:55 np0005558241 nova_compute[248510]: 2025-12-13 08:54:55.901 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2803: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:54:57 np0005558241 nova_compute[248510]: 2025-12-13 08:54:57.556 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:54:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2804: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:54:59 np0005558241 podman[363916]: 2025-12-13 08:54:59.980517867 +0000 UTC m=+0.064527309 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 13 03:55:00 np0005558241 podman[363914]: 2025-12-13 08:55:00.014635263 +0000 UTC m=+0.103789123 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:55:00 np0005558241 podman[363915]: 2025-12-13 08:55:00.014449478 +0000 UTC m=+0.098688555 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd)
Dec 13 03:55:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2805: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:55:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:55:00 np0005558241 nova_compute[248510]: 2025-12-13 08:55:00.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:55:00 np0005558241 nova_compute[248510]: 2025-12-13 08:55:00.904 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2806: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:55:02 np0005558241 nova_compute[248510]: 2025-12-13 08:55:02.558 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:02 np0005558241 nova_compute[248510]: 2025-12-13 08:55:02.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:55:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2807: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:55:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:55:05 np0005558241 nova_compute[248510]: 2025-12-13 08:55:05.906 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2808: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:55:07 np0005558241 nova_compute[248510]: 2025-12-13 08:55:07.560 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2809: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:55:08 np0005558241 nova_compute[248510]: 2025-12-13 08:55:08.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:55:08 np0005558241 nova_compute[248510]: 2025-12-13 08:55:08.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:55:09 np0005558241 nova_compute[248510]: 2025-12-13 08:55:09.122 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:55:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:55:09
Dec 13 03:55:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:55:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:55:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'images', 'backups']
Dec 13 03:55:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:55:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:55:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:55:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:55:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:55:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:55:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:55:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2810: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:55:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:55:10 np0005558241 nova_compute[248510]: 2025-12-13 08:55:10.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:55:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:55:10 np0005558241 nova_compute[248510]: 2025-12-13 08:55:10.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2811: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:55:12 np0005558241 nova_compute[248510]: 2025-12-13 08:55:12.562 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:13.818 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:55:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:13.819 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:55:13 np0005558241 nova_compute[248510]: 2025-12-13 08:55:13.820 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:14 np0005558241 nova_compute[248510]: 2025-12-13 08:55:14.211 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "2dacd79d-d668-430f-89e3-bd607a8298ba" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:55:14 np0005558241 nova_compute[248510]: 2025-12-13 08:55:14.211 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:55:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2812: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:55:14 np0005558241 nova_compute[248510]: 2025-12-13 08:55:14.236 248514 DEBUG nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:55:14 np0005558241 nova_compute[248510]: 2025-12-13 08:55:14.336 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:55:14 np0005558241 nova_compute[248510]: 2025-12-13 08:55:14.337 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:55:14 np0005558241 nova_compute[248510]: 2025-12-13 08:55:14.345 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:55:14 np0005558241 nova_compute[248510]: 2025-12-13 08:55:14.345 248514 INFO nova.compute.claims [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:55:14 np0005558241 nova_compute[248510]: 2025-12-13 08:55:14.361 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:55:14 np0005558241 nova_compute[248510]: 2025-12-13 08:55:14.361 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:55:14 np0005558241 nova_compute[248510]: 2025-12-13 08:55:14.393 248514 DEBUG nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:55:14 np0005558241 nova_compute[248510]: 2025-12-13 08:55:14.501 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:55:14 np0005558241 nova_compute[248510]: 2025-12-13 08:55:14.528 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:55:14 np0005558241 nova_compute[248510]: 2025-12-13 08:55:14.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:55:14 np0005558241 nova_compute[248510]: 2025-12-13 08:55:14.802 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:55:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:14.822 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:55:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:55:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1607725416' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:55:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:55:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1607725416' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:55:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:55:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3570311342' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.141 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.148 248514 DEBUG nova.compute.provider_tree [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.169 248514 DEBUG nova.scheduler.client.report [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.194 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.194 248514 DEBUG nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.196 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.205 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.206 248514 INFO nova.compute.claims [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.278 248514 DEBUG nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.278 248514 DEBUG nova.network.neutron [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.331 248514 INFO nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.368 248514 DEBUG nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.405 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.514 248514 DEBUG nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.516 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.516 248514 INFO nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Creating image(s)
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.547 248514 DEBUG nova.storage.rbd_utils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2dacd79d-d668-430f-89e3-bd607a8298ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.580 248514 DEBUG nova.storage.rbd_utils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2dacd79d-d668-430f-89e3-bd607a8298ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:55:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.604 248514 DEBUG nova.storage.rbd_utils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2dacd79d-d668-430f-89e3-bd607a8298ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.609 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.663 248514 DEBUG nova.policy [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c377508dda354c0aa762d15d52aa130c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.707 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.708 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.709 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.710 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.734 248514 DEBUG nova.storage.rbd_utils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2dacd79d-d668-430f-89e3-bd607a8298ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.738 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2dacd79d-d668-430f-89e3-bd607a8298ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:55:15 np0005558241 nova_compute[248510]: 2025-12-13 08:55:15.911 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:55:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:55:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2386839979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.035 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.043 248514 DEBUG nova.compute.provider_tree [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.065 248514 DEBUG nova.scheduler.client.report [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.088 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.089 248514 DEBUG nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.091 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 2dacd79d-d668-430f-89e3-bd607a8298ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.092 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 1.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.092 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.092 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.093 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.203 248514 DEBUG nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.204 248514 DEBUG nova.network.neutron [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.213 248514 DEBUG nova.storage.rbd_utils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] resizing rbd image 2dacd79d-d668-430f-89e3-bd607a8298ba_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:55:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2813: 321 pgs: 321 active+clean; 41 MiB data, 769 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.243 248514 INFO nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.267 248514 DEBUG nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.303 248514 DEBUG nova.objects.instance [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'migration_context' on Instance uuid 2dacd79d-d668-430f-89e3-bd607a8298ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.328 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.328 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Ensure instance console log exists: /var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.329 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.329 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.329 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.372 248514 DEBUG nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.373 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.374 248514 INFO nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Creating image(s)
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.403 248514 DEBUG nova.storage.rbd_utils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5c900cfb-46bb-436b-a574-7985be3447da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.427 248514 DEBUG nova.storage.rbd_utils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5c900cfb-46bb-436b-a574-7985be3447da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.459 248514 DEBUG nova.storage.rbd_utils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5c900cfb-46bb-436b-a574-7985be3447da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.467 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.567 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.569 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.570 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.571 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.601 248514 DEBUG nova.storage.rbd_utils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5c900cfb-46bb-436b-a574-7985be3447da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.606 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 5c900cfb-46bb-436b-a574-7985be3447da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:55:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:55:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/719210447' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.677 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.695 248514 DEBUG nova.policy [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.905 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.906 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3579MB free_disk=59.987517456524074GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.906 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.906 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:55:16 np0005558241 nova_compute[248510]: 2025-12-13 08:55:16.997 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 5c900cfb-46bb-436b-a574-7985be3447da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.022 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 2dacd79d-d668-430f-89e3-bd607a8298ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.022 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 5c900cfb-46bb-436b-a574-7985be3447da actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.023 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.023 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.060 248514 DEBUG nova.storage.rbd_utils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image 5c900cfb-46bb-436b-a574-7985be3447da_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.103 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.187 248514 DEBUG nova.objects.instance [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid 5c900cfb-46bb-436b-a574-7985be3447da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.205 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.205 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Ensure instance console log exists: /var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.206 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.206 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.206 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.395 248514 DEBUG nova.network.neutron [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Successfully created port: 59834f67-f81d-41bf-9bec-95eea737178e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.565 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:55:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3648480722' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.709 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.715 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.751 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.817 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.818 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:55:17 np0005558241 nova_compute[248510]: 2025-12-13 08:55:17.973 248514 DEBUG nova.network.neutron [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Successfully created port: a010c1a2-26e3-477b-9539-f12ad28801ca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:55:18 np0005558241 nova_compute[248510]: 2025-12-13 08:55:18.197 248514 DEBUG nova.network.neutron [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Successfully updated port: 59834f67-f81d-41bf-9bec-95eea737178e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:55:18 np0005558241 nova_compute[248510]: 2025-12-13 08:55:18.229 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:55:18 np0005558241 nova_compute[248510]: 2025-12-13 08:55:18.230 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquired lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:55:18 np0005558241 nova_compute[248510]: 2025-12-13 08:55:18.230 248514 DEBUG nova.network.neutron [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:55:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2814: 321 pgs: 321 active+clean; 74 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 1.3 MiB/s wr, 12 op/s
Dec 13 03:55:18 np0005558241 nova_compute[248510]: 2025-12-13 08:55:18.788 248514 DEBUG nova.network.neutron [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:55:18 np0005558241 nova_compute[248510]: 2025-12-13 08:55:18.819 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:55:18 np0005558241 nova_compute[248510]: 2025-12-13 08:55:18.819 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:55:18 np0005558241 nova_compute[248510]: 2025-12-13 08:55:18.819 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:55:18 np0005558241 nova_compute[248510]: 2025-12-13 08:55:18.819 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:55:19 np0005558241 nova_compute[248510]: 2025-12-13 08:55:19.416 248514 DEBUG nova.compute.manager [req-b10e8f72-725e-438d-a56a-576198c60322 req-7353117f-22f1-4a0e-98e8-5b3cb09b4cc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received event network-changed-59834f67-f81d-41bf-9bec-95eea737178e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:55:19 np0005558241 nova_compute[248510]: 2025-12-13 08:55:19.416 248514 DEBUG nova.compute.manager [req-b10e8f72-725e-438d-a56a-576198c60322 req-7353117f-22f1-4a0e-98e8-5b3cb09b4cc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Refreshing instance network info cache due to event network-changed-59834f67-f81d-41bf-9bec-95eea737178e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:55:19 np0005558241 nova_compute[248510]: 2025-12-13 08:55:19.417 248514 DEBUG oslo_concurrency.lockutils [req-b10e8f72-725e-438d-a56a-576198c60322 req-7353117f-22f1-4a0e-98e8-5b3cb09b4cc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:55:19 np0005558241 nova_compute[248510]: 2025-12-13 08:55:19.435 248514 DEBUG nova.network.neutron [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Successfully updated port: a010c1a2-26e3-477b-9539-f12ad28801ca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:55:19 np0005558241 nova_compute[248510]: 2025-12-13 08:55:19.452 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:55:19 np0005558241 nova_compute[248510]: 2025-12-13 08:55:19.453 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:55:19 np0005558241 nova_compute[248510]: 2025-12-13 08:55:19.453 248514 DEBUG nova.network.neutron [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:55:19 np0005558241 nova_compute[248510]: 2025-12-13 08:55:19.617 248514 DEBUG nova.compute.manager [req-8766994d-107b-4386-af9d-8ea2cd8c2189 req-0f3037bb-295c-4b50-91cc-eb2b8e4cf12e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-changed-a010c1a2-26e3-477b-9539-f12ad28801ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:55:19 np0005558241 nova_compute[248510]: 2025-12-13 08:55:19.617 248514 DEBUG nova.compute.manager [req-8766994d-107b-4386-af9d-8ea2cd8c2189 req-0f3037bb-295c-4b50-91cc-eb2b8e4cf12e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Refreshing instance network info cache due to event network-changed-a010c1a2-26e3-477b-9539-f12ad28801ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:55:19 np0005558241 nova_compute[248510]: 2025-12-13 08:55:19.618 248514 DEBUG oslo_concurrency.lockutils [req-8766994d-107b-4386-af9d-8ea2cd8c2189 req-0f3037bb-295c-4b50-91cc-eb2b8e4cf12e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:55:19 np0005558241 nova_compute[248510]: 2025-12-13 08:55:19.698 248514 DEBUG nova.network.neutron [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.006 248514 DEBUG nova.network.neutron [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updating instance_info_cache with network_info: [{"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.028 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Releasing lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.029 248514 DEBUG nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Instance network_info: |[{"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.029 248514 DEBUG oslo_concurrency.lockutils [req-b10e8f72-725e-438d-a56a-576198c60322 req-7353117f-22f1-4a0e-98e8-5b3cb09b4cc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.030 248514 DEBUG nova.network.neutron [req-b10e8f72-725e-438d-a56a-576198c60322 req-7353117f-22f1-4a0e-98e8-5b3cb09b4cc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Refreshing network info cache for port 59834f67-f81d-41bf-9bec-95eea737178e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.032 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Start _get_guest_xml network_info=[{"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.036 248514 WARNING nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.043 248514 DEBUG nova.virt.libvirt.host [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.044 248514 DEBUG nova.virt.libvirt.host [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.047 248514 DEBUG nova.virt.libvirt.host [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.048 248514 DEBUG nova.virt.libvirt.host [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.049 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.049 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.050 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.050 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.051 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.051 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.051 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.052 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.052 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.053 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.053 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.053 248514 DEBUG nova.virt.hardware [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.058 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:55:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2815: 321 pgs: 321 active+clean; 134 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Dec 13 03:55:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:55:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:55:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1599522550' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.644 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.665 248514 DEBUG nova.storage.rbd_utils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2dacd79d-d668-430f-89e3-bd607a8298ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.668 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.811 248514 DEBUG nova.network.neutron [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updating instance_info_cache with network_info: [{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.831 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.831 248514 DEBUG nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Instance network_info: |[{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.832 248514 DEBUG oslo_concurrency.lockutils [req-8766994d-107b-4386-af9d-8ea2cd8c2189 req-0f3037bb-295c-4b50-91cc-eb2b8e4cf12e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.832 248514 DEBUG nova.network.neutron [req-8766994d-107b-4386-af9d-8ea2cd8c2189 req-0f3037bb-295c-4b50-91cc-eb2b8e4cf12e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Refreshing network info cache for port a010c1a2-26e3-477b-9539-f12ad28801ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.835 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Start _get_guest_xml network_info=[{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.838 248514 WARNING nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.845 248514 DEBUG nova.virt.libvirt.host [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.846 248514 DEBUG nova.virt.libvirt.host [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.849 248514 DEBUG nova.virt.libvirt.host [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.849 248514 DEBUG nova.virt.libvirt.host [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.850 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.850 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.850 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.851 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.851 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.851 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.851 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.851 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.852 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.852 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.852 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.852 248514 DEBUG nova.virt.hardware [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.857 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:55:20 np0005558241 nova_compute[248510]: 2025-12-13 08:55:20.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007053977648106654 of space, bias 1.0, pg target 0.2116193294431996 quantized to 32 (current 32)
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697138298021254 of space, bias 1.0, pg target 0.2009141489406376 quantized to 32 (current 32)
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.738067775993847e-07 of space, bias 4.0, pg target 0.0006885681331192617 quantized to 16 (current 32)
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:55:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:55:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2339603879' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.290 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.293 248514 DEBUG nova.virt.libvirt.vif [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:55:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-288019889',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-288019889',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=116,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHu0mZ4KdPQNIQDmStfb8oGr9IxxGZIxLFalN/4uGlZNxfEdmbl3D4ueCINh2ZhW4F33mK6Ev46I8YbLOH/wB4QDYG50l143kirQCN41tCbmF+vCIghQWMs2Kzj6YH+pg==',key_name='tempest-TestSecurityGroupsBasicOps-398328978',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-9i8zv62t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:55:15Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=2dacd79d-d668-430f-89e3-bd607a8298ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.294 248514 DEBUG nova.network.os_vif_util [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.295 248514 DEBUG nova.network.os_vif_util [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:dc:ee,bridge_name='br-int',has_traffic_filtering=True,id=59834f67-f81d-41bf-9bec-95eea737178e,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59834f67-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.296 248514 DEBUG nova.objects.instance [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 2dacd79d-d668-430f-89e3-bd607a8298ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.315 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:55:21 np0005558241 nova_compute[248510]:  <uuid>2dacd79d-d668-430f-89e3-bd607a8298ba</uuid>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:  <name>instance-00000074</name>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-288019889</nova:name>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:55:20</nova:creationTime>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:55:21 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:        <nova:user uuid="c377508dda354c0aa762d15d52aa130c">tempest-TestSecurityGroupsBasicOps-1856512026-project-member</nova:user>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:        <nova:project uuid="1064539fdf2b494ea705dd5e74afcd3b">tempest-TestSecurityGroupsBasicOps-1856512026</nova:project>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:        <nova:port uuid="59834f67-f81d-41bf-9bec-95eea737178e">
Dec 13 03:55:21 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <entry name="serial">2dacd79d-d668-430f-89e3-bd607a8298ba</entry>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <entry name="uuid">2dacd79d-d668-430f-89e3-bd607a8298ba</entry>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/2dacd79d-d668-430f-89e3-bd607a8298ba_disk">
Dec 13 03:55:21 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:55:21 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/2dacd79d-d668-430f-89e3-bd607a8298ba_disk.config">
Dec 13 03:55:21 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:55:21 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:fc:dc:ee"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <target dev="tap59834f67-f8"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba/console.log" append="off"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:55:21 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:55:21 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:55:21 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:55:21 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.317 248514 DEBUG nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Preparing to wait for external event network-vif-plugged-59834f67-f81d-41bf-9bec-95eea737178e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.317 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.318 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.318 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.319 248514 DEBUG nova.virt.libvirt.vif [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:55:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-288019889',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-288019889',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=116,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHu0mZ4KdPQNIQDmStfb8oGr9IxxGZIxLFalN/4uGlZNxfEdmbl3D4ueCINh2ZhW4F33mK6Ev46I8YbLOH/wB4QDYG50l143kirQCN41tCbmF+vCIghQWMs2Kzj6YH+pg==',key_name='tempest-TestSecurityGroupsBasicOps-398328978',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-9i8zv62t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:55:15Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=2dacd79d-d668-430f-89e3-bd607a8298ba,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.320 248514 DEBUG nova.network.os_vif_util [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.320 248514 DEBUG nova.network.os_vif_util [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:dc:ee,bridge_name='br-int',has_traffic_filtering=True,id=59834f67-f81d-41bf-9bec-95eea737178e,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59834f67-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.321 248514 DEBUG os_vif [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:dc:ee,bridge_name='br-int',has_traffic_filtering=True,id=59834f67-f81d-41bf-9bec-95eea737178e,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59834f67-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.322 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.323 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.324 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.327 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.328 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap59834f67-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.328 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap59834f67-f8, col_values=(('external_ids', {'iface-id': '59834f67-f81d-41bf-9bec-95eea737178e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:dc:ee', 'vm-uuid': '2dacd79d-d668-430f-89e3-bd607a8298ba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.330 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:21 np0005558241 NetworkManager[50376]: <info>  [1765616121.3311] manager: (tap59834f67-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/467)
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.333 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.337 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.338 248514 INFO os_vif [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:dc:ee,bridge_name='br-int',has_traffic_filtering=True,id=59834f67-f81d-41bf-9bec-95eea737178e,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59834f67-f8')#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.394 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.394 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.394 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No VIF found with MAC fa:16:3e:fc:dc:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.395 248514 INFO nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Using config drive#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.416 248514 DEBUG nova.storage.rbd_utils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2dacd79d-d668-430f-89e3-bd607a8298ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:55:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:55:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/427886782' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.504 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.528 248514 DEBUG nova.storage.rbd_utils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5c900cfb-46bb-436b-a574-7985be3447da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.533 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.572 248514 DEBUG nova.network.neutron [req-b10e8f72-725e-438d-a56a-576198c60322 req-7353117f-22f1-4a0e-98e8-5b3cb09b4cc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updated VIF entry in instance network info cache for port 59834f67-f81d-41bf-9bec-95eea737178e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.574 248514 DEBUG nova.network.neutron [req-b10e8f72-725e-438d-a56a-576198c60322 req-7353117f-22f1-4a0e-98e8-5b3cb09b4cc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updating instance_info_cache with network_info: [{"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.603 248514 DEBUG oslo_concurrency.lockutils [req-b10e8f72-725e-438d-a56a-576198c60322 req-7353117f-22f1-4a0e-98e8-5b3cb09b4cc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:55:21 np0005558241 nova_compute[248510]: 2025-12-13 08:55:21.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.033 248514 INFO nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Creating config drive at /var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba/disk.config#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.038 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu32c49wo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:55:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:55:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2292909053' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.112 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.114 248514 DEBUG nova.virt.libvirt.vif [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659365118',display_name='tempest-TestNetworkBasicOps-server-1659365118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659365118',id=117,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz16q3FD1p8k0tar2d1eIkC//skJH4xKj4kgjtgiTA4Jsb+bBhJVSUYYnxuUA+hH7IUJhmzn5ldV/32mNa3yep4fi6cHpPuzrb3SBRYiJiHxef1Ww8f4LbH3ZmQueGLKQ==',key_name='tempest-TestNetworkBasicOps-755840351',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-h08l4d8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:55:16Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5c900cfb-46bb-436b-a574-7985be3447da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.114 248514 DEBUG nova.network.os_vif_util [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.115 248514 DEBUG nova.network.os_vif_util [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:87:a3,bridge_name='br-int',has_traffic_filtering=True,id=a010c1a2-26e3-477b-9539-f12ad28801ca,network=Network(b2d9e215-0c32-4abc-92a1-ad5f852b369d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa010c1a2-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.116 248514 DEBUG nova.objects.instance [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid 5c900cfb-46bb-436b-a574-7985be3447da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.133 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:55:22 np0005558241 nova_compute[248510]:  <uuid>5c900cfb-46bb-436b-a574-7985be3447da</uuid>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:  <name>instance-00000075</name>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkBasicOps-server-1659365118</nova:name>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:55:20</nova:creationTime>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:55:22 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:        <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:        <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:        <nova:port uuid="a010c1a2-26e3-477b-9539-f12ad28801ca">
Dec 13 03:55:22 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <entry name="serial">5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <entry name="uuid">5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/5c900cfb-46bb-436b-a574-7985be3447da_disk">
Dec 13 03:55:22 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:55:22 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/5c900cfb-46bb-436b-a574-7985be3447da_disk.config">
Dec 13 03:55:22 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:55:22 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:c9:87:a3"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <target dev="tapa010c1a2-26"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log" append="off"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:55:22 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:55:22 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:55:22 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:55:22 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.134 248514 DEBUG nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Preparing to wait for external event network-vif-plugged-a010c1a2-26e3-477b-9539-f12ad28801ca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.134 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.135 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.135 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.135 248514 DEBUG nova.virt.libvirt.vif [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659365118',display_name='tempest-TestNetworkBasicOps-server-1659365118',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659365118',id=117,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz16q3FD1p8k0tar2d1eIkC//skJH4xKj4kgjtgiTA4Jsb+bBhJVSUYYnxuUA+hH7IUJhmzn5ldV/32mNa3yep4fi6cHpPuzrb3SBRYiJiHxef1Ww8f4LbH3ZmQueGLKQ==',key_name='tempest-TestNetworkBasicOps-755840351',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-h08l4d8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:55:16Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5c900cfb-46bb-436b-a574-7985be3447da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.136 248514 DEBUG nova.network.os_vif_util [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.136 248514 DEBUG nova.network.os_vif_util [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:87:a3,bridge_name='br-int',has_traffic_filtering=True,id=a010c1a2-26e3-477b-9539-f12ad28801ca,network=Network(b2d9e215-0c32-4abc-92a1-ad5f852b369d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa010c1a2-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.137 248514 DEBUG os_vif [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:87:a3,bridge_name='br-int',has_traffic_filtering=True,id=a010c1a2-26e3-477b-9539-f12ad28801ca,network=Network(b2d9e215-0c32-4abc-92a1-ad5f852b369d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa010c1a2-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.137 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.138 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.138 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.140 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.140 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa010c1a2-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.141 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa010c1a2-26, col_values=(('external_ids', {'iface-id': 'a010c1a2-26e3-477b-9539-f12ad28801ca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:87:a3', 'vm-uuid': '5c900cfb-46bb-436b-a574-7985be3447da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.142 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:22 np0005558241 NetworkManager[50376]: <info>  [1765616122.1432] manager: (tapa010c1a2-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.144 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.149 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.150 248514 INFO os_vif [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:87:a3,bridge_name='br-int',has_traffic_filtering=True,id=a010c1a2-26e3-477b-9539-f12ad28801ca,network=Network(b2d9e215-0c32-4abc-92a1-ad5f852b369d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa010c1a2-26')#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.194 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu32c49wo" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.217 248514 DEBUG nova.storage.rbd_utils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 2dacd79d-d668-430f-89e3-bd607a8298ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.221 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba/disk.config 2dacd79d-d668-430f-89e3-bd607a8298ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:55:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2816: 321 pgs: 321 active+clean; 134 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.260 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.261 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.261 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:c9:87:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.262 248514 INFO nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Using config drive#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.287 248514 DEBUG nova.storage.rbd_utils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5c900cfb-46bb-436b-a574-7985be3447da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.361 248514 DEBUG oslo_concurrency.processutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba/disk.config 2dacd79d-d668-430f-89e3-bd607a8298ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.362 248514 INFO nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Deleting local config drive /var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba/disk.config because it was imported into RBD.#033[00m
Dec 13 03:55:22 np0005558241 NetworkManager[50376]: <info>  [1765616122.4089] manager: (tap59834f67-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/469)
Dec 13 03:55:22 np0005558241 kernel: tap59834f67-f8: entered promiscuous mode
Dec 13 03:55:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:22Z|01130|binding|INFO|Claiming lport 59834f67-f81d-41bf-9bec-95eea737178e for this chassis.
Dec 13 03:55:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:22Z|01131|binding|INFO|59834f67-f81d-41bf-9bec-95eea737178e: Claiming fa:16:3e:fc:dc:ee 10.100.0.7
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.412 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.425 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:dc:ee 10.100.0.7'], port_security=['fa:16:3e:fc:dc:ee 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2dacd79d-d668-430f-89e3-bd607a8298ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '49b9fdf8-f095-49d6-8a3e-6b41045e0020 daf1c258-d9fc-43cc-a960-fdfffc57ef37', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ef6794c-20c0-44ef-9932-95bf5c168e3e, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=59834f67-f81d-41bf-9bec-95eea737178e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.427 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 59834f67-f81d-41bf-9bec-95eea737178e in datapath d62e4a11-9334-4dbd-978f-dcabebeb9f79 bound to our chassis#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.429 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d62e4a11-9334-4dbd-978f-dcabebeb9f79#033[00m
Dec 13 03:55:22 np0005558241 systemd-udevd[364618]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.438 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[961fe3b8-cd00-4f34-92e4-625071dbeea2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.439 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd62e4a11-91 in ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.441 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd62e4a11-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.441 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[77bc4996-b7f7-4c6e-928d-850d15e9917a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.442 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8a718843-adfd-4611-a87a-7aec9a655dfd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:22 np0005558241 NetworkManager[50376]: <info>  [1765616122.4538] device (tap59834f67-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:55:22 np0005558241 NetworkManager[50376]: <info>  [1765616122.4558] device (tap59834f67-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.455 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[84afdf5a-bd34-4711-8b6d-e37e3f6d481b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:22 np0005558241 systemd-machined[210538]: New machine qemu-143-instance-00000074.
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.481 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5240100-8de9-40ef-b5d1-191fa6829a93]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.490 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:22 np0005558241 systemd[1]: Started Virtual Machine qemu-143-instance-00000074.
Dec 13 03:55:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:22Z|01132|binding|INFO|Setting lport 59834f67-f81d-41bf-9bec-95eea737178e ovn-installed in OVS
Dec 13 03:55:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:22Z|01133|binding|INFO|Setting lport 59834f67-f81d-41bf-9bec-95eea737178e up in Southbound
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.501 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.520 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba06fbd-5221-4b9d-a38f-5cd5fe8ca307]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.526 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[46a7bec6-9f3f-4f96-aecb-2c21ffa4d917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:22 np0005558241 NetworkManager[50376]: <info>  [1765616122.5279] manager: (tapd62e4a11-90): new Veth device (/org/freedesktop/NetworkManager/Devices/470)
Dec 13 03:55:22 np0005558241 systemd-udevd[364622]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.567 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.571 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b16c7f53-9104-4334-90f1-ed5e6b2b5ae6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.574 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[14db448d-17d7-4e87-b3d3-4e7fcfef3caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:22 np0005558241 NetworkManager[50376]: <info>  [1765616122.6006] device (tapd62e4a11-90): carrier: link connected
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.606 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1cb1e9-3f4a-49b4-ae34-83a842e164f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.627 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[03440b92-cf18-4be7-8f6d-7a38d523e1fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd62e4a11-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:64:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869981, 'reachable_time': 20363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364656, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.647 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[35903600-c192-44f7-83a3-87801e99501d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:643c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 869981, 'tstamp': 869981}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364657, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.671 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[150bba2e-7661-46c9-ba2d-ffd80bbe4030]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd62e4a11-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:64:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869981, 'reachable_time': 20363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364658, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.702 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd4a760-9ae9-4045-9bec-b4d60205301a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.756 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[22df6ace-c48e-4f78-a0c8-8cf04597355f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.758 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd62e4a11-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.758 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.758 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd62e4a11-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:55:22 np0005558241 NetworkManager[50376]: <info>  [1765616122.7607] manager: (tapd62e4a11-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/471)
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.761 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:22 np0005558241 kernel: tapd62e4a11-90: entered promiscuous mode
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.766 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd62e4a11-90, col_values=(('external_ids', {'iface-id': '3d979ee9-5b95-4edf-8ffc-3de7e778c5ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.767 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:22 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:22Z|01134|binding|INFO|Releasing lport 3d979ee9-5b95-4edf-8ffc-3de7e778c5ff from this chassis (sb_readonly=0)
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:22 np0005558241 nova_compute[248510]: 2025-12-13 08:55:22.786 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.786 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d62e4a11-9334-4dbd-978f-dcabebeb9f79.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d62e4a11-9334-4dbd-978f-dcabebeb9f79.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.787 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[76bd67fd-43a7-4e37-b063-ee94fbe9ec86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.788 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-d62e4a11-9334-4dbd-978f-dcabebeb9f79
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/d62e4a11-9334-4dbd-978f-dcabebeb9f79.pid.haproxy
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID d62e4a11-9334-4dbd-978f-dcabebeb9f79
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:55:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:22.789 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'env', 'PROCESS_TAG=haproxy-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d62e4a11-9334-4dbd-978f-dcabebeb9f79.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:55:23 np0005558241 podman[364693]: 2025-12-13 08:55:23.16845949 +0000 UTC m=+0.053323488 container create 88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:55:23 np0005558241 systemd[1]: Started libpod-conmon-88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55.scope.
Dec 13 03:55:23 np0005558241 podman[364693]: 2025-12-13 08:55:23.140683773 +0000 UTC m=+0.025547791 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:55:23 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:55:23 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e03dea7b690d3db119f67c61ad462e71c1d2570616b62b014daca58c0b50d9d7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:23 np0005558241 podman[364693]: 2025-12-13 08:55:23.262139889 +0000 UTC m=+0.147003907 container init 88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:55:23 np0005558241 podman[364693]: 2025-12-13 08:55:23.268935589 +0000 UTC m=+0.153799597 container start 88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.285 248514 DEBUG nova.network.neutron [req-8766994d-107b-4386-af9d-8ea2cd8c2189 req-0f3037bb-295c-4b50-91cc-eb2b8e4cf12e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updated VIF entry in instance network info cache for port a010c1a2-26e3-477b-9539-f12ad28801ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.285 248514 DEBUG nova.network.neutron [req-8766994d-107b-4386-af9d-8ea2cd8c2189 req-0f3037bb-295c-4b50-91cc-eb2b8e4cf12e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updating instance_info_cache with network_info: [{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:55:23 np0005558241 neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79[364708]: [NOTICE]   (364730) : New worker (364747) forked
Dec 13 03:55:23 np0005558241 neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79[364708]: [NOTICE]   (364730) : Loading success.
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.307 248514 DEBUG oslo_concurrency.lockutils [req-8766994d-107b-4386-af9d-8ea2cd8c2189 req-0f3037bb-295c-4b50-91cc-eb2b8e4cf12e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.408 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616123.4079223, 2dacd79d-d668-430f-89e3-bd607a8298ba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.408 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] VM Started (Lifecycle Event)#033[00m
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.436 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.441 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616123.4099493, 2dacd79d-d668-430f-89e3-bd607a8298ba => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.442 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.480 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.484 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.510 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.593 248514 INFO nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Creating config drive at /var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/disk.config#033[00m
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.598 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzea2pki2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.743 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzea2pki2" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.770 248514 DEBUG nova.storage.rbd_utils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5c900cfb-46bb-436b-a574-7985be3447da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.774 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/disk.config 5c900cfb-46bb-436b-a574-7985be3447da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.919 248514 DEBUG oslo_concurrency.processutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/disk.config 5c900cfb-46bb-436b-a574-7985be3447da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.920 248514 INFO nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Deleting local config drive /var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/disk.config because it was imported into RBD.#033[00m
Dec 13 03:55:23 np0005558241 kernel: tapa010c1a2-26: entered promiscuous mode
Dec 13 03:55:23 np0005558241 NetworkManager[50376]: <info>  [1765616123.9759] manager: (tapa010c1a2-26): new Tun device (/org/freedesktop/NetworkManager/Devices/472)
Dec 13 03:55:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:23Z|01135|binding|INFO|Claiming lport a010c1a2-26e3-477b-9539-f12ad28801ca for this chassis.
Dec 13 03:55:23 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:23Z|01136|binding|INFO|a010c1a2-26e3-477b-9539-f12ad28801ca: Claiming fa:16:3e:c9:87:a3 10.100.0.8
Dec 13 03:55:23 np0005558241 systemd-udevd[364635]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.976 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:23 np0005558241 nova_compute[248510]: 2025-12-13 08:55:23.983 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:23 np0005558241 NetworkManager[50376]: <info>  [1765616123.9930] device (tapa010c1a2-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:55:23 np0005558241 NetworkManager[50376]: <info>  [1765616123.9960] device (tapa010c1a2-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.046 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:24Z|01137|binding|INFO|Setting lport a010c1a2-26e3-477b-9539-f12ad28801ca ovn-installed in OVS
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.050 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:24Z|01138|binding|INFO|Setting lport a010c1a2-26e3-477b-9539-f12ad28801ca up in Southbound
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.067 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:87:a3 10.100.0.8'], port_security=['fa:16:3e:c9:87:a3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '5c900cfb-46bb-436b-a574-7985be3447da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2d9e215-0c32-4abc-92a1-ad5f852b369d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd2e8c6a7-9a03-4a93-a9dd-d4be82f5297d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d20f24f-0d1e-4b3a-97e8-eb661209feb7, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=a010c1a2-26e3-477b-9539-f12ad28801ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.070 158419 INFO neutron.agent.ovn.metadata.agent [-] Port a010c1a2-26e3-477b-9539-f12ad28801ca in datapath b2d9e215-0c32-4abc-92a1-ad5f852b369d bound to our chassis#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.073 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b2d9e215-0c32-4abc-92a1-ad5f852b369d#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.084 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[add1a308-a0aa-4a6e-a671-be6f5a1d92f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.085 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb2d9e215-01 in ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.087 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb2d9e215-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.087 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f558e84c-b330-49f5-9159-2260abcdb073]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.088 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d277cebb-cc3a-4fa8-8f79-c93db76eebb2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:24 np0005558241 systemd-machined[210538]: New machine qemu-144-instance-00000075.
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.098 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[40faf508-3539-483a-8baf-43dd304da564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:24 np0005558241 systemd[1]: Started Virtual Machine qemu-144-instance-00000075.
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.112 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[adf0eba1-77e6-47aa-8961-85513784d04d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.136 248514 DEBUG nova.compute.manager [req-b5a3cdc3-49c3-4d14-b5de-12a492b1490c req-3dd670a8-220c-4a01-b3a3-3a5b40d4ec05 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received event network-vif-plugged-59834f67-f81d-41bf-9bec-95eea737178e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.137 248514 DEBUG oslo_concurrency.lockutils [req-b5a3cdc3-49c3-4d14-b5de-12a492b1490c req-3dd670a8-220c-4a01-b3a3-3a5b40d4ec05 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.137 248514 DEBUG oslo_concurrency.lockutils [req-b5a3cdc3-49c3-4d14-b5de-12a492b1490c req-3dd670a8-220c-4a01-b3a3-3a5b40d4ec05 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.137 248514 DEBUG oslo_concurrency.lockutils [req-b5a3cdc3-49c3-4d14-b5de-12a492b1490c req-3dd670a8-220c-4a01-b3a3-3a5b40d4ec05 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.138 248514 DEBUG nova.compute.manager [req-b5a3cdc3-49c3-4d14-b5de-12a492b1490c req-3dd670a8-220c-4a01-b3a3-3a5b40d4ec05 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Processing event network-vif-plugged-59834f67-f81d-41bf-9bec-95eea737178e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.138 248514 DEBUG nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.147 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616124.147333, 2dacd79d-d668-430f-89e3-bd607a8298ba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.148 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.152 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.153 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b7d7b1-844a-46a6-8a5c-4adf0af15313]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.160 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bc27da14-239c-43b7-94dc-dfc5ef001393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:24 np0005558241 NetworkManager[50376]: <info>  [1765616124.1622] manager: (tapb2d9e215-00): new Veth device (/org/freedesktop/NetworkManager/Devices/473)
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.163 248514 INFO nova.virt.libvirt.driver [-] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Instance spawned successfully.#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.164 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.186 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.192 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.196 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.196 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.197 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.197 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.198 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.198 248514 DEBUG nova.virt.libvirt.driver [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.201 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[28b0cedd-4319-4c39-b424-5cd53df7d185]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.204 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b293b59f-fe1c-413d-be37-d99bf521f7ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.231 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:55:24 np0005558241 NetworkManager[50376]: <info>  [1765616124.2321] device (tapb2d9e215-00): carrier: link connected
Dec 13 03:55:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2817: 321 pgs: 321 active+clean; 134 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.242 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9b58fa18-686b-4e8d-8474-5b848292869c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.259 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[689afcc8-cc9b-4b10-bdc0-4d642833e59f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2d9e215-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:a0:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870144, 'reachable_time': 38470, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364837, 'error': None, 'target': 'ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.276 248514 INFO nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Took 8.76 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.278 248514 DEBUG nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.279 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1f7318-7b93-42fd-a6ff-25aa86e1a50e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:a0fb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 870144, 'tstamp': 870144}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364838, 'error': None, 'target': 'ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.299 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c85e3034-d5df-4f2c-bb7f-a4252cbd50cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb2d9e215-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:a0:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870144, 'reachable_time': 38470, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364839, 'error': None, 'target': 'ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.335 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5567cb88-02d9-4f5f-a785-9fd01b381c6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.351 248514 INFO nova.compute.manager [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Took 10.04 seconds to build instance.#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.369 248514 DEBUG oslo_concurrency.lockutils [None req-9ca8ede3-ea81-49e2-bc61-76cab478e847 c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.384 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f9380591-3307-46e5-bf81-c49b2810b052]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.385 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2d9e215-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.385 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.386 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2d9e215-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:55:24 np0005558241 kernel: tapb2d9e215-00: entered promiscuous mode
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.416 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:24 np0005558241 NetworkManager[50376]: <info>  [1765616124.4226] manager: (tapb2d9e215-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/474)
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.423 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.425 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb2d9e215-00, col_values=(('external_ids', {'iface-id': '091240c0-aa08-4e16-a096-0471c0ff1f24'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:24 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:24Z|01139|binding|INFO|Releasing lport 091240c0-aa08-4e16-a096-0471c0ff1f24 from this chassis (sb_readonly=0)
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.430 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.429 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b2d9e215-0c32-4abc-92a1-ad5f852b369d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b2d9e215-0c32-4abc-92a1-ad5f852b369d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.432 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[90f1c49d-f1fc-472c-82c7-731916cc08cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.432 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-b2d9e215-0c32-4abc-92a1-ad5f852b369d
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/b2d9e215-0c32-4abc-92a1-ad5f852b369d.pid.haproxy
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID b2d9e215-0c32-4abc-92a1-ad5f852b369d
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:55:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:24.434 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d', 'env', 'PROCESS_TAG=haproxy-b2d9e215-0c32-4abc-92a1-ad5f852b369d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b2d9e215-0c32-4abc-92a1-ad5f852b369d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.443 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.690 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616124.6895673, 5c900cfb-46bb-436b-a574-7985be3447da => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.690 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] VM Started (Lifecycle Event)#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.721 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.727 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616124.6897125, 5c900cfb-46bb-436b-a574-7985be3447da => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.728 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.759 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.763 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:55:24 np0005558241 nova_compute[248510]: 2025-12-13 08:55:24.793 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:55:24 np0005558241 podman[364910]: 2025-12-13 08:55:24.835980259 +0000 UTC m=+0.054929198 container create d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:55:24 np0005558241 systemd[1]: Started libpod-conmon-d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6.scope.
Dec 13 03:55:24 np0005558241 podman[364910]: 2025-12-13 08:55:24.809918176 +0000 UTC m=+0.028867125 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:55:24 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:55:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761b232b1aa41b0d759478588d2b31a2caafbf1df47bdacf3a2927b25a18515d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:24 np0005558241 podman[364910]: 2025-12-13 08:55:24.927947215 +0000 UTC m=+0.146896164 container init d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 03:55:24 np0005558241 podman[364910]: 2025-12-13 08:55:24.933609307 +0000 UTC m=+0.152558246 container start d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:55:24 np0005558241 neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d[364925]: [NOTICE]   (364929) : New worker (364931) forked
Dec 13 03:55:24 np0005558241 neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d[364925]: [NOTICE]   (364929) : Loading success.
Dec 13 03:55:25 np0005558241 nova_compute[248510]: 2025-12-13 08:55:25.489 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "75f348ef-4044-47a1-ba1b-f1b66513450c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:55:25 np0005558241 nova_compute[248510]: 2025-12-13 08:55:25.491 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:55:25 np0005558241 nova_compute[248510]: 2025-12-13 08:55:25.510 248514 DEBUG nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:55:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:55:25 np0005558241 nova_compute[248510]: 2025-12-13 08:55:25.617 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:55:25 np0005558241 nova_compute[248510]: 2025-12-13 08:55:25.618 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:55:25 np0005558241 nova_compute[248510]: 2025-12-13 08:55:25.628 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:55:25 np0005558241 nova_compute[248510]: 2025-12-13 08:55:25.629 248514 INFO nova.compute.claims [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:55:25 np0005558241 nova_compute[248510]: 2025-12-13 08:55:25.825 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:55:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2818: 321 pgs: 321 active+clean; 134 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Dec 13 03:55:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:55:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2357623257' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.672 248514 DEBUG nova.compute.manager [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received event network-vif-plugged-59834f67-f81d-41bf-9bec-95eea737178e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.673 248514 DEBUG oslo_concurrency.lockutils [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.673 248514 DEBUG oslo_concurrency.lockutils [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.674 248514 DEBUG oslo_concurrency.lockutils [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.674 248514 DEBUG nova.compute.manager [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] No waiting events found dispatching network-vif-plugged-59834f67-f81d-41bf-9bec-95eea737178e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.674 248514 WARNING nova.compute.manager [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received unexpected event network-vif-plugged-59834f67-f81d-41bf-9bec-95eea737178e for instance with vm_state active and task_state None.#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.674 248514 DEBUG nova.compute.manager [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-plugged-a010c1a2-26e3-477b-9539-f12ad28801ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.675 248514 DEBUG oslo_concurrency.lockutils [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.675 248514 DEBUG oslo_concurrency.lockutils [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.675 248514 DEBUG oslo_concurrency.lockutils [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.676 248514 DEBUG nova.compute.manager [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Processing event network-vif-plugged-a010c1a2-26e3-477b-9539-f12ad28801ca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.676 248514 DEBUG nova.compute.manager [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-plugged-a010c1a2-26e3-477b-9539-f12ad28801ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.676 248514 DEBUG oslo_concurrency.lockutils [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.676 248514 DEBUG oslo_concurrency.lockutils [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.677 248514 DEBUG oslo_concurrency.lockutils [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.677 248514 DEBUG nova.compute.manager [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] No waiting events found dispatching network-vif-plugged-a010c1a2-26e3-477b-9539-f12ad28801ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.677 248514 WARNING nova.compute.manager [req-15e3c6e9-0c2c-4a1a-8196-2a1869878222 req-73f58733-f423-4ecd-a853-3a6edaf680d4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received unexpected event network-vif-plugged-a010c1a2-26e3-477b-9539-f12ad28801ca for instance with vm_state building and task_state spawning.#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.678 248514 DEBUG nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.886 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616126.8864288, 5c900cfb-46bb-436b-a574-7985be3447da => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.888 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.891 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.894 248514 INFO nova.virt.libvirt.driver [-] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Instance spawned successfully.#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.895 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.903 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.908 248514 DEBUG nova.compute.provider_tree [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.915 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.918 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.932 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.935 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.936 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.936 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.936 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.937 248514 DEBUG nova.virt.libvirt.driver [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.943 248514 DEBUG nova.scheduler.client.report [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.946 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.982 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:55:26 np0005558241 nova_compute[248510]: 2025-12-13 08:55:26.983 248514 DEBUG nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.050 248514 INFO nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Took 10.68 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.051 248514 DEBUG nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.092 248514 DEBUG nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.092 248514 DEBUG nova.network.neutron [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.131 248514 INFO nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.143 248514 INFO nova.compute.manager [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Took 12.67 seconds to build instance.#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.145 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.155 248514 DEBUG nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.167 248514 DEBUG oslo_concurrency.lockutils [None req-bc671385-590e-42c4-bb9e-33a0d58fe10d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.255 248514 DEBUG nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.259 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.259 248514 INFO nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Creating image(s)#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.284 248514 DEBUG nova.storage.rbd_utils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 75f348ef-4044-47a1-ba1b-f1b66513450c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.307 248514 DEBUG nova.storage.rbd_utils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 75f348ef-4044-47a1-ba1b-f1b66513450c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.329 248514 DEBUG nova.storage.rbd_utils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 75f348ef-4044-47a1-ba1b-f1b66513450c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.336 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.417 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.418 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.419 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.420 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.441 248514 DEBUG nova.storage.rbd_utils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 75f348ef-4044-47a1-ba1b-f1b66513450c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.446 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 75f348ef-4044-47a1-ba1b-f1b66513450c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.569 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.584 248514 DEBUG nova.policy [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '81fb01d9d08845c3b626079ab726db7a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6c21c2eb2d0c4465ae562381f358fbd8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.843 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 75f348ef-4044-47a1-ba1b-f1b66513450c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:55:27 np0005558241 nova_compute[248510]: 2025-12-13 08:55:27.945 248514 DEBUG nova.storage.rbd_utils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] resizing rbd image 75f348ef-4044-47a1-ba1b-f1b66513450c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:55:28 np0005558241 nova_compute[248510]: 2025-12-13 08:55:28.032 248514 DEBUG nova.objects.instance [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lazy-loading 'migration_context' on Instance uuid 75f348ef-4044-47a1-ba1b-f1b66513450c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:55:28 np0005558241 nova_compute[248510]: 2025-12-13 08:55:28.049 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:55:28 np0005558241 nova_compute[248510]: 2025-12-13 08:55:28.050 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Ensure instance console log exists: /var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:55:28 np0005558241 nova_compute[248510]: 2025-12-13 08:55:28.050 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:55:28 np0005558241 nova_compute[248510]: 2025-12-13 08:55:28.051 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:55:28 np0005558241 nova_compute[248510]: 2025-12-13 08:55:28.051 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:55:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2819: 321 pgs: 321 active+clean; 148 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.3 MiB/s wr, 122 op/s
Dec 13 03:55:28 np0005558241 nova_compute[248510]: 2025-12-13 08:55:28.794 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:55:28 np0005558241 nova_compute[248510]: 2025-12-13 08:55:28.795 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 13 03:55:28 np0005558241 nova_compute[248510]: 2025-12-13 08:55:28.853 248514 DEBUG nova.network.neutron [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Successfully created port: 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:55:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2820: 321 pgs: 321 active+clean; 181 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 4.1 MiB/s wr, 212 op/s
Dec 13 03:55:30 np0005558241 nova_compute[248510]: 2025-12-13 08:55:30.542 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:30 np0005558241 NetworkManager[50376]: <info>  [1765616130.5455] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/475)
Dec 13 03:55:30 np0005558241 NetworkManager[50376]: <info>  [1765616130.5463] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/476)
Dec 13 03:55:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:55:30 np0005558241 nova_compute[248510]: 2025-12-13 08:55:30.674 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:30 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:30Z|01140|binding|INFO|Releasing lport 091240c0-aa08-4e16-a096-0471c0ff1f24 from this chassis (sb_readonly=0)
Dec 13 03:55:30 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:30Z|01141|binding|INFO|Releasing lport 3d979ee9-5b95-4edf-8ffc-3de7e778c5ff from this chassis (sb_readonly=0)
Dec 13 03:55:30 np0005558241 nova_compute[248510]: 2025-12-13 08:55:30.692 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:30 np0005558241 podman[365130]: 2025-12-13 08:55:30.977579344 +0000 UTC m=+0.063255467 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 13 03:55:30 np0005558241 podman[365131]: 2025-12-13 08:55:30.992584051 +0000 UTC m=+0.076218572 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:55:31 np0005558241 podman[365129]: 2025-12-13 08:55:31.005157216 +0000 UTC m=+0.092240764 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 03:55:31 np0005558241 nova_compute[248510]: 2025-12-13 08:55:31.302 248514 DEBUG nova.compute.manager [req-9efa6877-6398-4634-b7de-0a6b11bdbfb1 req-4bbd980a-d126-4714-be04-2052e0d239b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received event network-changed-59834f67-f81d-41bf-9bec-95eea737178e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:55:31 np0005558241 nova_compute[248510]: 2025-12-13 08:55:31.303 248514 DEBUG nova.compute.manager [req-9efa6877-6398-4634-b7de-0a6b11bdbfb1 req-4bbd980a-d126-4714-be04-2052e0d239b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Refreshing instance network info cache due to event network-changed-59834f67-f81d-41bf-9bec-95eea737178e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:55:31 np0005558241 nova_compute[248510]: 2025-12-13 08:55:31.303 248514 DEBUG oslo_concurrency.lockutils [req-9efa6877-6398-4634-b7de-0a6b11bdbfb1 req-4bbd980a-d126-4714-be04-2052e0d239b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:55:31 np0005558241 nova_compute[248510]: 2025-12-13 08:55:31.304 248514 DEBUG oslo_concurrency.lockutils [req-9efa6877-6398-4634-b7de-0a6b11bdbfb1 req-4bbd980a-d126-4714-be04-2052e0d239b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:55:31 np0005558241 nova_compute[248510]: 2025-12-13 08:55:31.304 248514 DEBUG nova.network.neutron [req-9efa6877-6398-4634-b7de-0a6b11bdbfb1 req-4bbd980a-d126-4714-be04-2052e0d239b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Refreshing network info cache for port 59834f67-f81d-41bf-9bec-95eea737178e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:55:31 np0005558241 nova_compute[248510]: 2025-12-13 08:55:31.347 248514 DEBUG nova.network.neutron [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Successfully updated port: 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:55:31 np0005558241 nova_compute[248510]: 2025-12-13 08:55:31.369 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:55:31 np0005558241 nova_compute[248510]: 2025-12-13 08:55:31.369 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquired lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:55:31 np0005558241 nova_compute[248510]: 2025-12-13 08:55:31.370 248514 DEBUG nova.network.neutron [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:55:31 np0005558241 nova_compute[248510]: 2025-12-13 08:55:31.712 248514 DEBUG nova.network.neutron [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:55:31 np0005558241 nova_compute[248510]: 2025-12-13 08:55:31.921 248514 DEBUG nova.compute.manager [req-179aa61f-a606-4943-9afd-6d10faee4bcc req-18dee1d1-22c1-4388-8cd4-83b0c2efb494 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-changed-a010c1a2-26e3-477b-9539-f12ad28801ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:55:31 np0005558241 nova_compute[248510]: 2025-12-13 08:55:31.921 248514 DEBUG nova.compute.manager [req-179aa61f-a606-4943-9afd-6d10faee4bcc req-18dee1d1-22c1-4388-8cd4-83b0c2efb494 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Refreshing instance network info cache due to event network-changed-a010c1a2-26e3-477b-9539-f12ad28801ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:55:31 np0005558241 nova_compute[248510]: 2025-12-13 08:55:31.922 248514 DEBUG oslo_concurrency.lockutils [req-179aa61f-a606-4943-9afd-6d10faee4bcc req-18dee1d1-22c1-4388-8cd4-83b0c2efb494 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:55:31 np0005558241 nova_compute[248510]: 2025-12-13 08:55:31.922 248514 DEBUG oslo_concurrency.lockutils [req-179aa61f-a606-4943-9afd-6d10faee4bcc req-18dee1d1-22c1-4388-8cd4-83b0c2efb494 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:55:31 np0005558241 nova_compute[248510]: 2025-12-13 08:55:31.923 248514 DEBUG nova.network.neutron [req-179aa61f-a606-4943-9afd-6d10faee4bcc req-18dee1d1-22c1-4388-8cd4-83b0c2efb494 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Refreshing network info cache for port a010c1a2-26e3-477b-9539-f12ad28801ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:55:32 np0005558241 nova_compute[248510]: 2025-12-13 08:55:32.148 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:55:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2821: 321 pgs: 321 active+clean; 181 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 170 op/s
Dec 13 03:55:32 np0005558241 nova_compute[248510]: 2025-12-13 08:55:32.573 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.112 248514 DEBUG nova.network.neutron [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updating instance_info_cache with network_info: [{"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.145 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Releasing lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.149 248514 DEBUG nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Instance network_info: |[{"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.152 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Start _get_guest_xml network_info=[{"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.159 248514 WARNING nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.166 248514 DEBUG nova.virt.libvirt.host [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.166 248514 DEBUG nova.virt.libvirt.host [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.170 248514 DEBUG nova.virt.libvirt.host [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.170 248514 DEBUG nova.virt.libvirt.host [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.171 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.171 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.171 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.172 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.172 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.172 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.173 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.173 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.173 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.174 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.174 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.174 248514 DEBUG nova.virt.hardware [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.178 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.295 248514 DEBUG nova.network.neutron [req-9efa6877-6398-4634-b7de-0a6b11bdbfb1 req-4bbd980a-d126-4714-be04-2052e0d239b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updated VIF entry in instance network info cache for port 59834f67-f81d-41bf-9bec-95eea737178e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.296 248514 DEBUG nova.network.neutron [req-9efa6877-6398-4634-b7de-0a6b11bdbfb1 req-4bbd980a-d126-4714-be04-2052e0d239b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updating instance_info_cache with network_info: [{"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.355 248514 DEBUG oslo_concurrency.lockutils [req-9efa6877-6398-4634-b7de-0a6b11bdbfb1 req-4bbd980a-d126-4714-be04-2052e0d239b5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.443 248514 DEBUG nova.compute.manager [req-4f5afd0b-5e29-4581-b91d-afe9693194b4 req-2e76664c-563b-4664-86ac-e07c3fdd42b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received event network-changed-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.444 248514 DEBUG nova.compute.manager [req-4f5afd0b-5e29-4581-b91d-afe9693194b4 req-2e76664c-563b-4664-86ac-e07c3fdd42b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Refreshing instance network info cache due to event network-changed-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.444 248514 DEBUG oslo_concurrency.lockutils [req-4f5afd0b-5e29-4581-b91d-afe9693194b4 req-2e76664c-563b-4664-86ac-e07c3fdd42b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.444 248514 DEBUG oslo_concurrency.lockutils [req-4f5afd0b-5e29-4581-b91d-afe9693194b4 req-2e76664c-563b-4664-86ac-e07c3fdd42b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.445 248514 DEBUG nova.network.neutron [req-4f5afd0b-5e29-4581-b91d-afe9693194b4 req-2e76664c-563b-4664-86ac-e07c3fdd42b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Refreshing network info cache for port 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.673 248514 DEBUG nova.network.neutron [req-179aa61f-a606-4943-9afd-6d10faee4bcc req-18dee1d1-22c1-4388-8cd4-83b0c2efb494 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updated VIF entry in instance network info cache for port a010c1a2-26e3-477b-9539-f12ad28801ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.674 248514 DEBUG nova.network.neutron [req-179aa61f-a606-4943-9afd-6d10faee4bcc req-18dee1d1-22c1-4388-8cd4-83b0c2efb494 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updating instance_info_cache with network_info: [{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.703 248514 DEBUG oslo_concurrency.lockutils [req-179aa61f-a606-4943-9afd-6d10faee4bcc req-18dee1d1-22c1-4388-8cd4-83b0c2efb494 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:55:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:55:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4271251360' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.789 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.825 248514 DEBUG nova.storage.rbd_utils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 75f348ef-4044-47a1-ba1b-f1b66513450c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.831 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.872 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.873 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 03:55:33 np0005558241 nova_compute[248510]: 2025-12-13 08:55:33.903 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 03:55:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2822: 321 pgs: 321 active+clean; 181 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Dec 13 03:55:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:55:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3066197622' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.423 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.425 248514 DEBUG nova.virt.libvirt.vif [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:55:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-608841865',display_name='tempest-TestSnapshotPattern-server-608841865',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-608841865',id=118,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMAc1KgPQ7YZO/wn247Z4w7R3iPp09OVvor/iF1+jUHxvgVaItQTtOvsdjy6HTDjQXAsMtHqR3YUaichzeZD01aIcPKJkcVch2R8490ZPNprDBfXeJ2j92c1nlW1ddLVg==',key_name='tempest-TestSnapshotPattern-2059242288',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c21c2eb2d0c4465ae562381f358fbd8',ramdisk_id='',reservation_id='r-dx9z8wv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1494512648',owner_user_name='tempest-TestSnapshotPattern-1494512648-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:55:27Z,user_data=None,user_id='81fb01d9d08845c3b626079ab726db7a',uuid=75f348ef-4044-47a1-ba1b-f1b66513450c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.425 248514 DEBUG nova.network.os_vif_util [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converting VIF {"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.426 248514 DEBUG nova.network.os_vif_util [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:e7:bf,bridge_name='br-int',has_traffic_filtering=True,id=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a84e8b-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.427 248514 DEBUG nova.objects.instance [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 75f348ef-4044-47a1-ba1b-f1b66513450c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.465 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:55:34 np0005558241 nova_compute[248510]:  <uuid>75f348ef-4044-47a1-ba1b-f1b66513450c</uuid>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:  <name>instance-00000076</name>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestSnapshotPattern-server-608841865</nova:name>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:55:33</nova:creationTime>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:55:34 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:        <nova:user uuid="81fb01d9d08845c3b626079ab726db7a">tempest-TestSnapshotPattern-1494512648-project-member</nova:user>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:        <nova:project uuid="6c21c2eb2d0c4465ae562381f358fbd8">tempest-TestSnapshotPattern-1494512648</nova:project>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:        <nova:port uuid="63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed">
Dec 13 03:55:34 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <entry name="serial">75f348ef-4044-47a1-ba1b-f1b66513450c</entry>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <entry name="uuid">75f348ef-4044-47a1-ba1b-f1b66513450c</entry>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/75f348ef-4044-47a1-ba1b-f1b66513450c_disk">
Dec 13 03:55:34 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:55:34 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/75f348ef-4044-47a1-ba1b-f1b66513450c_disk.config">
Dec 13 03:55:34 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:55:34 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:7d:e7:bf"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <target dev="tap63a84e8b-c1"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c/console.log" append="off"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:55:34 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:55:34 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:55:34 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:55:34 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.470 248514 DEBUG nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Preparing to wait for external event network-vif-plugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.470 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.471 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.471 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.472 248514 DEBUG nova.virt.libvirt.vif [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:55:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-608841865',display_name='tempest-TestSnapshotPattern-server-608841865',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-608841865',id=118,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMAc1KgPQ7YZO/wn247Z4w7R3iPp09OVvor/iF1+jUHxvgVaItQTtOvsdjy6HTDjQXAsMtHqR3YUaichzeZD01aIcPKJkcVch2R8490ZPNprDBfXeJ2j92c1nlW1ddLVg==',key_name='tempest-TestSnapshotPattern-2059242288',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c21c2eb2d0c4465ae562381f358fbd8',ramdisk_id='',reservation_id='r-dx9z8wv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1494512648',owner_user_name='tempest-TestSnapshotPattern-1494512648-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:55:27Z,user_data=None,user_id='81fb01d9d08845c3b626079ab726db7a',uuid=75f348ef-4044-47a1-ba1b-f1b66513450c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.472 248514 DEBUG nova.network.os_vif_util [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converting VIF {"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.473 248514 DEBUG nova.network.os_vif_util [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:e7:bf,bridge_name='br-int',has_traffic_filtering=True,id=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a84e8b-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.473 248514 DEBUG os_vif [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:e7:bf,bridge_name='br-int',has_traffic_filtering=True,id=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a84e8b-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.474 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.474 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.475 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.478 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.478 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63a84e8b-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.479 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63a84e8b-c1, col_values=(('external_ids', {'iface-id': '63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:e7:bf', 'vm-uuid': '75f348ef-4044-47a1-ba1b-f1b66513450c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.480 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:55:34 np0005558241 NetworkManager[50376]: <info>  [1765616134.4810] manager: (tap63a84e8b-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/477)
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.482 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.487 248514 INFO os_vif [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:e7:bf,bridge_name='br-int',has_traffic_filtering=True,id=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a84e8b-c1')
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.556 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.569 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.569 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] No VIF found with MAC fa:16:3e:7d:e7:bf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.570 248514 INFO nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Using config drive
Dec 13 03:55:34 np0005558241 nova_compute[248510]: 2025-12-13 08:55:34.599 248514 DEBUG nova.storage.rbd_utils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 75f348ef-4044-47a1-ba1b-f1b66513450c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:55:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:55:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2823: 321 pgs: 321 active+clean; 181 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Dec 13 03:55:36 np0005558241 nova_compute[248510]: 2025-12-13 08:55:36.654 248514 INFO nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Creating config drive at /var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c/disk.config
Dec 13 03:55:36 np0005558241 nova_compute[248510]: 2025-12-13 08:55:36.660 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6gtvb51z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:55:36 np0005558241 nova_compute[248510]: 2025-12-13 08:55:36.804 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6gtvb51z" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:55:36 np0005558241 nova_compute[248510]: 2025-12-13 08:55:36.832 248514 DEBUG nova.storage.rbd_utils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 75f348ef-4044-47a1-ba1b-f1b66513450c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:55:36 np0005558241 nova_compute[248510]: 2025-12-13 08:55:36.835 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c/disk.config 75f348ef-4044-47a1-ba1b-f1b66513450c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:55:36 np0005558241 nova_compute[248510]: 2025-12-13 08:55:36.978 248514 DEBUG oslo_concurrency.processutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c/disk.config 75f348ef-4044-47a1-ba1b-f1b66513450c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:55:36 np0005558241 nova_compute[248510]: 2025-12-13 08:55:36.979 248514 INFO nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Deleting local config drive /var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c/disk.config because it was imported into RBD.#033[00m
Dec 13 03:55:37 np0005558241 kernel: tap63a84e8b-c1: entered promiscuous mode
Dec 13 03:55:37 np0005558241 NetworkManager[50376]: <info>  [1765616137.0244] manager: (tap63a84e8b-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/478)
Dec 13 03:55:37 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:37Z|01142|binding|INFO|Claiming lport 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed for this chassis.
Dec 13 03:55:37 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:37Z|01143|binding|INFO|63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed: Claiming fa:16:3e:7d:e7:bf 10.100.0.3
Dec 13 03:55:37 np0005558241 nova_compute[248510]: 2025-12-13 08:55:37.032 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:37 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:37Z|01144|binding|INFO|Setting lport 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed ovn-installed in OVS
Dec 13 03:55:37 np0005558241 nova_compute[248510]: 2025-12-13 08:55:37.045 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:37 np0005558241 nova_compute[248510]: 2025-12-13 08:55:37.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:37 np0005558241 systemd-machined[210538]: New machine qemu-145-instance-00000076.
Dec 13 03:55:37 np0005558241 systemd[1]: Started Virtual Machine qemu-145-instance-00000076.
Dec 13 03:55:37 np0005558241 systemd-udevd[365326]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:55:37 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:37Z|01145|binding|INFO|Setting lport 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed up in Southbound
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.101 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:e7:bf 10.100.0.3'], port_security=['fa:16:3e:7d:e7:bf 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '75f348ef-4044-47a1-ba1b-f1b66513450c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c21c2eb2d0c4465ae562381f358fbd8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e1e2d18-e674-4ee6-b553-a8676e40259f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c30b2fc-21fd-4778-91da-98384d5e05df, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.103 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed in datapath d2ff4cff-54cc-40c6-a486-7e7532c2462b bound to our chassis#033[00m
Dec 13 03:55:37 np0005558241 NetworkManager[50376]: <info>  [1765616137.1064] device (tap63a84e8b-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:55:37 np0005558241 NetworkManager[50376]: <info>  [1765616137.1069] device (tap63a84e8b-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.106 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2ff4cff-54cc-40c6-a486-7e7532c2462b#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.118 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c72d57ae-3b15-4d13-a9d6-798f602da9d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.119 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd2ff4cff-51 in ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.121 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd2ff4cff-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.121 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd223605-cb24-44a0-aaf1-d9e7b8f697da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.122 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c2255a2a-c379-411b-a8ce-b675caddcf5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.133 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[0164c1a5-51de-4bf7-9605-6fc36549b6f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:37 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:37Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:dc:ee 10.100.0.7
Dec 13 03:55:37 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:37Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:dc:ee 10.100.0.7
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.146 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[21cb71be-e2ad-4a86-9420-fb1895d74a4e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.173 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8df202c4-a6c5-410b-bba4-33f3be40d90a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:37 np0005558241 NetworkManager[50376]: <info>  [1765616137.1913] manager: (tapd2ff4cff-50): new Veth device (/org/freedesktop/NetworkManager/Devices/479)
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.193 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[00fb54ce-0a90-4687-ac04-4b5e688cb6e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.227 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef13cd9-545d-421a-ac93-c0c36d6a407b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.230 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[960f8656-5c45-41c2-b380-5d8fc7fadebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:37 np0005558241 NetworkManager[50376]: <info>  [1765616137.2601] device (tapd2ff4cff-50): carrier: link connected
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.270 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[04fc2aaa-f9c2-485e-9d6e-e86ee13def12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.287 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8ccd7f90-a8d3-461a-a385-cfb2da0bf118]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2ff4cff-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:85:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 871447, 'reachable_time': 40005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365361, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.303 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[610031df-31ef-4f20-aa96-2c1393bdc619]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1b:856b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 871447, 'tstamp': 871447}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365362, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.319 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ca490901-a5bb-4ebd-a362-524529be4acc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2ff4cff-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:85:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 871447, 'reachable_time': 40005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 365363, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.352 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cd048956-11aa-433a-911a-4286aa0fe7be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:37 np0005558241 nova_compute[248510]: 2025-12-13 08:55:37.397 248514 DEBUG nova.compute.manager [req-ea51372b-7d85-4b87-af64-abed7c5c5f5d req-48598077-8147-48d7-82d6-1538c2afe8e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received event network-vif-plugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:55:37 np0005558241 nova_compute[248510]: 2025-12-13 08:55:37.397 248514 DEBUG oslo_concurrency.lockutils [req-ea51372b-7d85-4b87-af64-abed7c5c5f5d req-48598077-8147-48d7-82d6-1538c2afe8e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:55:37 np0005558241 nova_compute[248510]: 2025-12-13 08:55:37.397 248514 DEBUG oslo_concurrency.lockutils [req-ea51372b-7d85-4b87-af64-abed7c5c5f5d req-48598077-8147-48d7-82d6-1538c2afe8e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:55:37 np0005558241 nova_compute[248510]: 2025-12-13 08:55:37.398 248514 DEBUG oslo_concurrency.lockutils [req-ea51372b-7d85-4b87-af64-abed7c5c5f5d req-48598077-8147-48d7-82d6-1538c2afe8e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:55:37 np0005558241 nova_compute[248510]: 2025-12-13 08:55:37.398 248514 DEBUG nova.compute.manager [req-ea51372b-7d85-4b87-af64-abed7c5c5f5d req-48598077-8147-48d7-82d6-1538c2afe8e8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Processing event network-vif-plugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.421 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ee240e56-e84f-454d-a5ac-b6377e847f27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.424 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2ff4cff-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.424 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.424 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2ff4cff-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:55:37 np0005558241 nova_compute[248510]: 2025-12-13 08:55:37.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:37 np0005558241 NetworkManager[50376]: <info>  [1765616137.4273] manager: (tapd2ff4cff-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/480)
Dec 13 03:55:37 np0005558241 kernel: tapd2ff4cff-50: entered promiscuous mode
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.437 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2ff4cff-50, col_values=(('external_ids', {'iface-id': '47f45749-b232-4d0c-bf37-be042ea606c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:55:37 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:37Z|01146|binding|INFO|Releasing lport 47f45749-b232-4d0c-bf37-be042ea606c8 from this chassis (sb_readonly=0)
Dec 13 03:55:37 np0005558241 nova_compute[248510]: 2025-12-13 08:55:37.439 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.446 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d2ff4cff-54cc-40c6-a486-7e7532c2462b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d2ff4cff-54cc-40c6-a486-7e7532c2462b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.447 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[81375666-fbab-4161-a1cd-972b3c2800a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.447 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-d2ff4cff-54cc-40c6-a486-7e7532c2462b
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/d2ff4cff-54cc-40c6-a486-7e7532c2462b.pid.haproxy
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID d2ff4cff-54cc-40c6-a486-7e7532c2462b
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:55:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:37.448 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'env', 'PROCESS_TAG=haproxy-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d2ff4cff-54cc-40c6-a486-7e7532c2462b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:55:37 np0005558241 nova_compute[248510]: 2025-12-13 08:55:37.455 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:37 np0005558241 nova_compute[248510]: 2025-12-13 08:55:37.574 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:37 np0005558241 nova_compute[248510]: 2025-12-13 08:55:37.726 248514 DEBUG nova.network.neutron [req-4f5afd0b-5e29-4581-b91d-afe9693194b4 req-2e76664c-563b-4664-86ac-e07c3fdd42b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updated VIF entry in instance network info cache for port 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:55:37 np0005558241 nova_compute[248510]: 2025-12-13 08:55:37.728 248514 DEBUG nova.network.neutron [req-4f5afd0b-5e29-4581-b91d-afe9693194b4 req-2e76664c-563b-4664-86ac-e07c3fdd42b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updating instance_info_cache with network_info: [{"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:55:37 np0005558241 nova_compute[248510]: 2025-12-13 08:55:37.746 248514 DEBUG oslo_concurrency.lockutils [req-4f5afd0b-5e29-4581-b91d-afe9693194b4 req-2e76664c-563b-4664-86ac-e07c3fdd42b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:55:37 np0005558241 podman[365403]: 2025-12-13 08:55:37.963311675 +0000 UTC m=+0.108441460 container create 5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 03:55:37 np0005558241 podman[365403]: 2025-12-13 08:55:37.895694419 +0000 UTC m=+0.040824234 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:55:38 np0005558241 systemd[1]: Started libpod-conmon-5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e.scope.
Dec 13 03:55:38 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:55:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fffb7936c063d506819829d9345338322d52c45114e0f00e692ced85a23dd5f4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.075 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616138.0748463, 75f348ef-4044-47a1-ba1b-f1b66513450c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.076 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] VM Started (Lifecycle Event)
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.079 248514 DEBUG nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.084 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.091 248514 INFO nova.virt.libvirt.driver [-] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Instance spawned successfully.
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.092 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 03:55:38 np0005558241 podman[365403]: 2025-12-13 08:55:38.098776961 +0000 UTC m=+0.243906776 container init 5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 13 03:55:38 np0005558241 podman[365403]: 2025-12-13 08:55:38.10790754 +0000 UTC m=+0.253037325 container start 5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.126 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.133 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:55:38 np0005558241 neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b[365450]: [NOTICE]   (365455) : New worker (365457) forked
Dec 13 03:55:38 np0005558241 neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b[365450]: [NOTICE]   (365455) : Loading success.
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.140 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.140 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.141 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.141 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.142 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.142 248514 DEBUG nova.virt.libvirt.driver [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.179 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.180 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616138.0751743, 75f348ef-4044-47a1-ba1b-f1b66513450c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.180 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] VM Paused (Lifecycle Event)
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.209 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.214 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616138.0829155, 75f348ef-4044-47a1-ba1b-f1b66513450c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.215 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] VM Resumed (Lifecycle Event)
Dec 13 03:55:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2824: 321 pgs: 321 active+clean; 192 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 190 op/s
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.441 248514 INFO nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Took 11.18 seconds to spawn the instance on the hypervisor.
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.442 248514 DEBUG nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.444 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.450 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.481 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.511 248514 INFO nova.compute.manager [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Took 12.93 seconds to build instance.
Dec 13 03:55:38 np0005558241 nova_compute[248510]: 2025-12-13 08:55:38.530 248514 DEBUG oslo_concurrency.lockutils [None req-48c7fc04-ae3a-4a7b-92a3-3e4f0f86bef6 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:55:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:55:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:55:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:55:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:55:39 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:39Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c9:87:a3 10.100.0.8
Dec 13 03:55:39 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:39Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c9:87:a3 10.100.0.8
Dec 13 03:55:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:55:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:55:39 np0005558241 nova_compute[248510]: 2025-12-13 08:55:39.481 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:55:39 np0005558241 nova_compute[248510]: 2025-12-13 08:55:39.516 248514 DEBUG nova.compute.manager [req-ac099ad8-fbc0-432f-928d-fc88fa5e8e91 req-c13fc052-850f-4ce9-bab8-871831bd9b9e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received event network-vif-plugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:55:39 np0005558241 nova_compute[248510]: 2025-12-13 08:55:39.517 248514 DEBUG oslo_concurrency.lockutils [req-ac099ad8-fbc0-432f-928d-fc88fa5e8e91 req-c13fc052-850f-4ce9-bab8-871831bd9b9e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:55:39 np0005558241 nova_compute[248510]: 2025-12-13 08:55:39.517 248514 DEBUG oslo_concurrency.lockutils [req-ac099ad8-fbc0-432f-928d-fc88fa5e8e91 req-c13fc052-850f-4ce9-bab8-871831bd9b9e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:55:39 np0005558241 nova_compute[248510]: 2025-12-13 08:55:39.517 248514 DEBUG oslo_concurrency.lockutils [req-ac099ad8-fbc0-432f-928d-fc88fa5e8e91 req-c13fc052-850f-4ce9-bab8-871831bd9b9e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:55:39 np0005558241 nova_compute[248510]: 2025-12-13 08:55:39.517 248514 DEBUG nova.compute.manager [req-ac099ad8-fbc0-432f-928d-fc88fa5e8e91 req-c13fc052-850f-4ce9-bab8-871831bd9b9e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] No waiting events found dispatching network-vif-plugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 03:55:39 np0005558241 nova_compute[248510]: 2025-12-13 08:55:39.517 248514 WARNING nova.compute.manager [req-ac099ad8-fbc0-432f-928d-fc88fa5e8e91 req-c13fc052-850f-4ce9-bab8-871831bd9b9e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received unexpected event network-vif-plugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed for instance with vm_state active and task_state None.
Dec 13 03:55:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:55:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:55:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:55:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:55:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:55:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:55:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:55:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:55:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:55:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:55:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:55:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:55:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:55:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:55:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:55:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:55:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:55:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:55:40 np0005558241 podman[365679]: 2025-12-13 08:55:40.185526402 +0000 UTC m=+0.036426955 container create 3b8466fa8cff80210aa600b41ac83bb5a0f6a6118ccb0fe0760a56a06b78eebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bhaskara, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:55:40 np0005558241 systemd[1]: Started libpod-conmon-3b8466fa8cff80210aa600b41ac83bb5a0f6a6118ccb0fe0760a56a06b78eebe.scope.
Dec 13 03:55:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2825: 321 pgs: 321 active+clean; 231 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.9 MiB/s wr, 241 op/s
Dec 13 03:55:40 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:55:40 np0005558241 podman[365679]: 2025-12-13 08:55:40.169761857 +0000 UTC m=+0.020662410 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:55:40 np0005558241 podman[365679]: 2025-12-13 08:55:40.265678531 +0000 UTC m=+0.116579124 container init 3b8466fa8cff80210aa600b41ac83bb5a0f6a6118ccb0fe0760a56a06b78eebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec 13 03:55:40 np0005558241 podman[365679]: 2025-12-13 08:55:40.278435121 +0000 UTC m=+0.129335674 container start 3b8466fa8cff80210aa600b41ac83bb5a0f6a6118ccb0fe0760a56a06b78eebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:55:40 np0005558241 charming_bhaskara[365693]: 167 167
Dec 13 03:55:40 np0005558241 systemd[1]: libpod-3b8466fa8cff80210aa600b41ac83bb5a0f6a6118ccb0fe0760a56a06b78eebe.scope: Deactivated successfully.
Dec 13 03:55:40 np0005558241 podman[365679]: 2025-12-13 08:55:40.286471643 +0000 UTC m=+0.137372216 container attach 3b8466fa8cff80210aa600b41ac83bb5a0f6a6118ccb0fe0760a56a06b78eebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bhaskara, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 03:55:40 np0005558241 podman[365679]: 2025-12-13 08:55:40.287097288 +0000 UTC m=+0.137997871 container died 3b8466fa8cff80210aa600b41ac83bb5a0f6a6118ccb0fe0760a56a06b78eebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bhaskara, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:55:40 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ebb5885c065a1855bfde3261867530d1e691054171e261ec4be75d51f4724e66-merged.mount: Deactivated successfully.
Dec 13 03:55:40 np0005558241 podman[365679]: 2025-12-13 08:55:40.335269036 +0000 UTC m=+0.186169589 container remove 3b8466fa8cff80210aa600b41ac83bb5a0f6a6118ccb0fe0760a56a06b78eebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:55:40 np0005558241 systemd[1]: libpod-conmon-3b8466fa8cff80210aa600b41ac83bb5a0f6a6118ccb0fe0760a56a06b78eebe.scope: Deactivated successfully.
Dec 13 03:55:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:55:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:55:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:55:40 np0005558241 podman[365716]: 2025-12-13 08:55:40.536111922 +0000 UTC m=+0.047698617 container create 05ee0978101d6717faa2fbe3e83a697274c401477fc10e123a26010d1b093513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_haibt, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 03:55:40 np0005558241 systemd[1]: Started libpod-conmon-05ee0978101d6717faa2fbe3e83a697274c401477fc10e123a26010d1b093513.scope.
Dec 13 03:55:40 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:55:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:55:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da812d6b83d18d6df049483f79be7b133a44f57921e3f7b7bedcf3b2e14f896e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:40 np0005558241 podman[365716]: 2025-12-13 08:55:40.515321991 +0000 UTC m=+0.026908716 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:55:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da812d6b83d18d6df049483f79be7b133a44f57921e3f7b7bedcf3b2e14f896e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da812d6b83d18d6df049483f79be7b133a44f57921e3f7b7bedcf3b2e14f896e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da812d6b83d18d6df049483f79be7b133a44f57921e3f7b7bedcf3b2e14f896e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da812d6b83d18d6df049483f79be7b133a44f57921e3f7b7bedcf3b2e14f896e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:40 np0005558241 podman[365716]: 2025-12-13 08:55:40.636316864 +0000 UTC m=+0.147903579 container init 05ee0978101d6717faa2fbe3e83a697274c401477fc10e123a26010d1b093513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:55:40 np0005558241 podman[365716]: 2025-12-13 08:55:40.64451831 +0000 UTC m=+0.156105005 container start 05ee0978101d6717faa2fbe3e83a697274c401477fc10e123a26010d1b093513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 03:55:40 np0005558241 podman[365716]: 2025-12-13 08:55:40.648405557 +0000 UTC m=+0.159992252 container attach 05ee0978101d6717faa2fbe3e83a697274c401477fc10e123a26010d1b093513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 03:55:41 np0005558241 angry_haibt[365733]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:55:41 np0005558241 angry_haibt[365733]: --> All data devices are unavailable
Dec 13 03:55:41 np0005558241 systemd[1]: libpod-05ee0978101d6717faa2fbe3e83a697274c401477fc10e123a26010d1b093513.scope: Deactivated successfully.
Dec 13 03:55:41 np0005558241 podman[365716]: 2025-12-13 08:55:41.173776099 +0000 UTC m=+0.685362794 container died 05ee0978101d6717faa2fbe3e83a697274c401477fc10e123a26010d1b093513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_haibt, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:55:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay-da812d6b83d18d6df049483f79be7b133a44f57921e3f7b7bedcf3b2e14f896e-merged.mount: Deactivated successfully.
Dec 13 03:55:41 np0005558241 podman[365716]: 2025-12-13 08:55:41.221182068 +0000 UTC m=+0.732768763 container remove 05ee0978101d6717faa2fbe3e83a697274c401477fc10e123a26010d1b093513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_haibt, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:55:41 np0005558241 systemd[1]: libpod-conmon-05ee0978101d6717faa2fbe3e83a697274c401477fc10e123a26010d1b093513.scope: Deactivated successfully.
Dec 13 03:55:41 np0005558241 podman[365828]: 2025-12-13 08:55:41.719170664 +0000 UTC m=+0.042617360 container create 72e32efda23f8df2da6f0979aeb909acbf2b24369431945e282e59432913dc37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lalande, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Dec 13 03:55:41 np0005558241 systemd[1]: Started libpod-conmon-72e32efda23f8df2da6f0979aeb909acbf2b24369431945e282e59432913dc37.scope.
Dec 13 03:55:41 np0005558241 podman[365828]: 2025-12-13 08:55:41.699322026 +0000 UTC m=+0.022768732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:55:41 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:55:41 np0005558241 podman[365828]: 2025-12-13 08:55:41.818784831 +0000 UTC m=+0.142231547 container init 72e32efda23f8df2da6f0979aeb909acbf2b24369431945e282e59432913dc37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 03:55:41 np0005558241 podman[365828]: 2025-12-13 08:55:41.827353306 +0000 UTC m=+0.150799992 container start 72e32efda23f8df2da6f0979aeb909acbf2b24369431945e282e59432913dc37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lalande, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:55:41 np0005558241 festive_lalande[365844]: 167 167
Dec 13 03:55:41 np0005558241 podman[365828]: 2025-12-13 08:55:41.832718411 +0000 UTC m=+0.156165097 container attach 72e32efda23f8df2da6f0979aeb909acbf2b24369431945e282e59432913dc37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lalande, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Dec 13 03:55:41 np0005558241 systemd[1]: libpod-72e32efda23f8df2da6f0979aeb909acbf2b24369431945e282e59432913dc37.scope: Deactivated successfully.
Dec 13 03:55:41 np0005558241 podman[365828]: 2025-12-13 08:55:41.834541096 +0000 UTC m=+0.157987782 container died 72e32efda23f8df2da6f0979aeb909acbf2b24369431945e282e59432913dc37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lalande, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:55:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a154a1c4401a97e1c1542382c49e95b3c57b7f45a656aaa8e6aee4cc111e2f1e-merged.mount: Deactivated successfully.
Dec 13 03:55:41 np0005558241 podman[365828]: 2025-12-13 08:55:41.881282918 +0000 UTC m=+0.204729604 container remove 72e32efda23f8df2da6f0979aeb909acbf2b24369431945e282e59432913dc37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:55:41 np0005558241 systemd[1]: libpod-conmon-72e32efda23f8df2da6f0979aeb909acbf2b24369431945e282e59432913dc37.scope: Deactivated successfully.
Dec 13 03:55:42 np0005558241 podman[365868]: 2025-12-13 08:55:42.065086677 +0000 UTC m=+0.030281621 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:55:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2826: 321 pgs: 321 active+clean; 231 MiB data, 892 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 139 op/s
Dec 13 03:55:42 np0005558241 podman[365868]: 2025-12-13 08:55:42.283861332 +0000 UTC m=+0.249056256 container create bfaa6a20afe92cfd2fc131fc486dde2ad60f0162d459f518444c720e50de6266 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lichterman, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:55:42 np0005558241 systemd[1]: Started libpod-conmon-bfaa6a20afe92cfd2fc131fc486dde2ad60f0162d459f518444c720e50de6266.scope.
Dec 13 03:55:42 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:55:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f8fa33b6908cee47c05336400d0f2fcbe03763e241e71379b39155fdb9e316/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f8fa33b6908cee47c05336400d0f2fcbe03763e241e71379b39155fdb9e316/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f8fa33b6908cee47c05336400d0f2fcbe03763e241e71379b39155fdb9e316/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f8fa33b6908cee47c05336400d0f2fcbe03763e241e71379b39155fdb9e316/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:42 np0005558241 podman[365868]: 2025-12-13 08:55:42.508203717 +0000 UTC m=+0.473398671 container init bfaa6a20afe92cfd2fc131fc486dde2ad60f0162d459f518444c720e50de6266 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 03:55:42 np0005558241 podman[365868]: 2025-12-13 08:55:42.515530631 +0000 UTC m=+0.480725555 container start bfaa6a20afe92cfd2fc131fc486dde2ad60f0162d459f518444c720e50de6266 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lichterman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:55:42 np0005558241 podman[365868]: 2025-12-13 08:55:42.523039789 +0000 UTC m=+0.488234733 container attach bfaa6a20afe92cfd2fc131fc486dde2ad60f0162d459f518444c720e50de6266 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lichterman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:55:42 np0005558241 nova_compute[248510]: 2025-12-13 08:55:42.575 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]: {
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:    "0": [
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:        {
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "devices": [
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "/dev/loop3"
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            ],
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "lv_name": "ceph_lv0",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "lv_size": "21470642176",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "name": "ceph_lv0",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "tags": {
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.cluster_name": "ceph",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.crush_device_class": "",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.encrypted": "0",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.objectstore": "bluestore",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.osd_id": "0",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.type": "block",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.vdo": "0",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.with_tpm": "0"
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            },
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "type": "block",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "vg_name": "ceph_vg0"
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:        }
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:    ],
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:    "1": [
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:        {
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "devices": [
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "/dev/loop4"
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            ],
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "lv_name": "ceph_lv1",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "lv_size": "21470642176",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "name": "ceph_lv1",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "tags": {
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.cluster_name": "ceph",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.crush_device_class": "",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.encrypted": "0",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.objectstore": "bluestore",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.osd_id": "1",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.type": "block",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.vdo": "0",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.with_tpm": "0"
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            },
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "type": "block",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "vg_name": "ceph_vg1"
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:        }
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:    ],
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:    "2": [
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:        {
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "devices": [
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "/dev/loop5"
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            ],
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "lv_name": "ceph_lv2",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "lv_size": "21470642176",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "name": "ceph_lv2",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "tags": {
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.cluster_name": "ceph",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.crush_device_class": "",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.encrypted": "0",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.objectstore": "bluestore",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.osd_id": "2",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.type": "block",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.vdo": "0",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:                "ceph.with_tpm": "0"
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            },
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "type": "block",
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:            "vg_name": "ceph_vg2"
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:        }
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]:    ]
Dec 13 03:55:42 np0005558241 trusting_lichterman[365885]: }
Dec 13 03:55:42 np0005558241 systemd[1]: libpod-bfaa6a20afe92cfd2fc131fc486dde2ad60f0162d459f518444c720e50de6266.scope: Deactivated successfully.
Dec 13 03:55:42 np0005558241 podman[365868]: 2025-12-13 08:55:42.849311419 +0000 UTC m=+0.814506343 container died bfaa6a20afe92cfd2fc131fc486dde2ad60f0162d459f518444c720e50de6266 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lichterman, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:55:42 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b4f8fa33b6908cee47c05336400d0f2fcbe03763e241e71379b39155fdb9e316-merged.mount: Deactivated successfully.
Dec 13 03:55:42 np0005558241 podman[365868]: 2025-12-13 08:55:42.952539148 +0000 UTC m=+0.917734072 container remove bfaa6a20afe92cfd2fc131fc486dde2ad60f0162d459f518444c720e50de6266 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:55:42 np0005558241 systemd[1]: libpod-conmon-bfaa6a20afe92cfd2fc131fc486dde2ad60f0162d459f518444c720e50de6266.scope: Deactivated successfully.
Dec 13 03:55:43 np0005558241 podman[365968]: 2025-12-13 08:55:43.419402723 +0000 UTC m=+0.050076706 container create be833b0b06f2ff467ac96fe6f535b1ac92ea954725a79735f63ab3c9c3b11ada (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_banach, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 03:55:43 np0005558241 systemd[1]: Started libpod-conmon-be833b0b06f2ff467ac96fe6f535b1ac92ea954725a79735f63ab3c9c3b11ada.scope.
Dec 13 03:55:43 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:55:43 np0005558241 podman[365968]: 2025-12-13 08:55:43.398649703 +0000 UTC m=+0.029323716 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:55:43 np0005558241 podman[365968]: 2025-12-13 08:55:43.50464347 +0000 UTC m=+0.135317473 container init be833b0b06f2ff467ac96fe6f535b1ac92ea954725a79735f63ab3c9c3b11ada (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:55:43 np0005558241 podman[365968]: 2025-12-13 08:55:43.512845366 +0000 UTC m=+0.143519359 container start be833b0b06f2ff467ac96fe6f535b1ac92ea954725a79735f63ab3c9c3b11ada (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_banach, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:55:43 np0005558241 loving_banach[365986]: 167 167
Dec 13 03:55:43 np0005558241 podman[365968]: 2025-12-13 08:55:43.52017179 +0000 UTC m=+0.150845803 container attach be833b0b06f2ff467ac96fe6f535b1ac92ea954725a79735f63ab3c9c3b11ada (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_banach, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 03:55:43 np0005558241 systemd[1]: libpod-be833b0b06f2ff467ac96fe6f535b1ac92ea954725a79735f63ab3c9c3b11ada.scope: Deactivated successfully.
Dec 13 03:55:43 np0005558241 podman[365968]: 2025-12-13 08:55:43.52178329 +0000 UTC m=+0.152457273 container died be833b0b06f2ff467ac96fe6f535b1ac92ea954725a79735f63ab3c9c3b11ada (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:55:43 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7ce08ebe4d8283237fdcf0287261c36c40283b164276d072f0458629e6df175c-merged.mount: Deactivated successfully.
Dec 13 03:55:43 np0005558241 podman[365968]: 2025-12-13 08:55:43.576699447 +0000 UTC m=+0.207373430 container remove be833b0b06f2ff467ac96fe6f535b1ac92ea954725a79735f63ab3c9c3b11ada (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:55:43 np0005558241 systemd[1]: libpod-conmon-be833b0b06f2ff467ac96fe6f535b1ac92ea954725a79735f63ab3c9c3b11ada.scope: Deactivated successfully.
Dec 13 03:55:43 np0005558241 podman[366010]: 2025-12-13 08:55:43.7722423 +0000 UTC m=+0.044835805 container create 42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poincare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:55:43 np0005558241 systemd[1]: Started libpod-conmon-42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c.scope.
Dec 13 03:55:43 np0005558241 podman[366010]: 2025-12-13 08:55:43.752048694 +0000 UTC m=+0.024642219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:55:43 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:55:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c62a26d13ab40f05f63ffaeabdc04bfc74811ffcfd2cc1676a69606f749293/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c62a26d13ab40f05f63ffaeabdc04bfc74811ffcfd2cc1676a69606f749293/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c62a26d13ab40f05f63ffaeabdc04bfc74811ffcfd2cc1676a69606f749293/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4c62a26d13ab40f05f63ffaeabdc04bfc74811ffcfd2cc1676a69606f749293/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:43 np0005558241 podman[366010]: 2025-12-13 08:55:43.879246223 +0000 UTC m=+0.151839738 container init 42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:55:43 np0005558241 podman[366010]: 2025-12-13 08:55:43.886476004 +0000 UTC m=+0.159069499 container start 42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:55:43 np0005558241 podman[366010]: 2025-12-13 08:55:43.890478394 +0000 UTC m=+0.163071909 container attach 42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poincare, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 03:55:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2827: 321 pgs: 321 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.3 MiB/s wr, 206 op/s
Dec 13 03:55:44 np0005558241 nova_compute[248510]: 2025-12-13 08:55:44.483 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:44 np0005558241 lvm[366104]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:55:44 np0005558241 lvm[366105]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:55:44 np0005558241 lvm[366104]: VG ceph_vg0 finished
Dec 13 03:55:44 np0005558241 lvm[366105]: VG ceph_vg1 finished
Dec 13 03:55:44 np0005558241 lvm[366107]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:55:44 np0005558241 lvm[366107]: VG ceph_vg2 finished
Dec 13 03:55:44 np0005558241 vigilant_poincare[366026]: {}
Dec 13 03:55:44 np0005558241 systemd[1]: libpod-42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c.scope: Deactivated successfully.
Dec 13 03:55:44 np0005558241 systemd[1]: libpod-42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c.scope: Consumed 1.400s CPU time.
Dec 13 03:55:44 np0005558241 podman[366010]: 2025-12-13 08:55:44.779699009 +0000 UTC m=+1.052292504 container died 42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Dec 13 03:55:44 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e4c62a26d13ab40f05f63ffaeabdc04bfc74811ffcfd2cc1676a69606f749293-merged.mount: Deactivated successfully.
Dec 13 03:55:44 np0005558241 podman[366010]: 2025-12-13 08:55:44.825976659 +0000 UTC m=+1.098570154 container remove 42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 03:55:44 np0005558241 systemd[1]: libpod-conmon-42a636dc6034a15551e074eec7029537efdf16ae8d759d01f4f783774dd3da1c.scope: Deactivated successfully.
Dec 13 03:55:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:55:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:55:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:55:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:55:45 np0005558241 nova_compute[248510]: 2025-12-13 08:55:45.108 248514 DEBUG nova.compute.manager [req-01850f13-16f5-41ff-a5e0-9021e00bace9 req-ef73d305-737a-4a3f-ace3-2bb635c4113d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received event network-changed-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:55:45 np0005558241 nova_compute[248510]: 2025-12-13 08:55:45.109 248514 DEBUG nova.compute.manager [req-01850f13-16f5-41ff-a5e0-9021e00bace9 req-ef73d305-737a-4a3f-ace3-2bb635c4113d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Refreshing instance network info cache due to event network-changed-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:55:45 np0005558241 nova_compute[248510]: 2025-12-13 08:55:45.109 248514 DEBUG oslo_concurrency.lockutils [req-01850f13-16f5-41ff-a5e0-9021e00bace9 req-ef73d305-737a-4a3f-ace3-2bb635c4113d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:55:45 np0005558241 nova_compute[248510]: 2025-12-13 08:55:45.109 248514 DEBUG oslo_concurrency.lockutils [req-01850f13-16f5-41ff-a5e0-9021e00bace9 req-ef73d305-737a-4a3f-ace3-2bb635c4113d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:55:45 np0005558241 nova_compute[248510]: 2025-12-13 08:55:45.109 248514 DEBUG nova.network.neutron [req-01850f13-16f5-41ff-a5e0-9021e00bace9 req-ef73d305-737a-4a3f-ace3-2bb635c4113d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Refreshing network info cache for port 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:55:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:55:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:55:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:55:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2828: 321 pgs: 321 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 202 op/s
Dec 13 03:55:47 np0005558241 nova_compute[248510]: 2025-12-13 08:55:47.122 248514 INFO nova.compute.manager [None req-ac1e67fe-5ffe-4f5a-956c-dfb234f7e89d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Get console output#033[00m
Dec 13 03:55:47 np0005558241 nova_compute[248510]: 2025-12-13 08:55:47.128 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 03:55:47 np0005558241 nova_compute[248510]: 2025-12-13 08:55:47.580 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:47 np0005558241 nova_compute[248510]: 2025-12-13 08:55:47.989 248514 DEBUG nova.network.neutron [req-01850f13-16f5-41ff-a5e0-9021e00bace9 req-ef73d305-737a-4a3f-ace3-2bb635c4113d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updated VIF entry in instance network info cache for port 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:55:47 np0005558241 nova_compute[248510]: 2025-12-13 08:55:47.989 248514 DEBUG nova.network.neutron [req-01850f13-16f5-41ff-a5e0-9021e00bace9 req-ef73d305-737a-4a3f-ace3-2bb635c4113d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updating instance_info_cache with network_info: [{"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:55:48 np0005558241 nova_compute[248510]: 2025-12-13 08:55:48.016 248514 DEBUG oslo_concurrency.lockutils [req-01850f13-16f5-41ff-a5e0-9021e00bace9 req-ef73d305-737a-4a3f-ace3-2bb635c4113d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:55:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2829: 321 pgs: 321 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 202 op/s
Dec 13 03:55:49 np0005558241 nova_compute[248510]: 2025-12-13 08:55:49.486 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2830: 321 pgs: 321 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.1 MiB/s wr, 177 op/s
Dec 13 03:55:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:55:50 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:50Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:e7:bf 10.100.0.3
Dec 13 03:55:50 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:50Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:e7:bf 10.100.0.3
Dec 13 03:55:51 np0005558241 nova_compute[248510]: 2025-12-13 08:55:51.328 248514 DEBUG oslo_concurrency.lockutils [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "interface-5c900cfb-46bb-436b-a574-7985be3447da-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:55:51 np0005558241 nova_compute[248510]: 2025-12-13 08:55:51.328 248514 DEBUG oslo_concurrency.lockutils [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "interface-5c900cfb-46bb-436b-a574-7985be3447da-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:55:51 np0005558241 nova_compute[248510]: 2025-12-13 08:55:51.328 248514 DEBUG nova.objects.instance [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'flavor' on Instance uuid 5c900cfb-46bb-436b-a574-7985be3447da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:55:51 np0005558241 nova_compute[248510]: 2025-12-13 08:55:51.851 248514 DEBUG nova.objects.instance [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_requests' on Instance uuid 5c900cfb-46bb-436b-a574-7985be3447da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:55:51 np0005558241 nova_compute[248510]: 2025-12-13 08:55:51.870 248514 DEBUG nova.network.neutron [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:55:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2831: 321 pgs: 321 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 377 KiB/s wr, 68 op/s
Dec 13 03:55:52 np0005558241 nova_compute[248510]: 2025-12-13 08:55:52.283 248514 DEBUG nova.policy [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:55:52 np0005558241 nova_compute[248510]: 2025-12-13 08:55:52.346 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "822fa9f6-0a5d-490e-89d7-446df19a068b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:55:52 np0005558241 nova_compute[248510]: 2025-12-13 08:55:52.347 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:55:52 np0005558241 nova_compute[248510]: 2025-12-13 08:55:52.365 248514 DEBUG nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:55:52 np0005558241 nova_compute[248510]: 2025-12-13 08:55:52.455 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:55:52 np0005558241 nova_compute[248510]: 2025-12-13 08:55:52.456 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:55:52 np0005558241 nova_compute[248510]: 2025-12-13 08:55:52.468 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:55:52 np0005558241 nova_compute[248510]: 2025-12-13 08:55:52.469 248514 INFO nova.compute.claims [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:55:52 np0005558241 nova_compute[248510]: 2025-12-13 08:55:52.619 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:52 np0005558241 nova_compute[248510]: 2025-12-13 08:55:52.665 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:55:52 np0005558241 nova_compute[248510]: 2025-12-13 08:55:52.905 248514 DEBUG nova.network.neutron [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Successfully created port: e7c59bd7-06ff-4220-ac42-ec02c9e22e2c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:55:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:55:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2594600232' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.250 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.256 248514 DEBUG nova.compute.provider_tree [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.276 248514 DEBUG nova.scheduler.client.report [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.307 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.307 248514 DEBUG nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.363 248514 DEBUG nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.364 248514 DEBUG nova.network.neutron [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.387 248514 INFO nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.404 248514 DEBUG nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.495 248514 DEBUG nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.497 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.498 248514 INFO nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Creating image(s)
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.523 248514 DEBUG nova.storage.rbd_utils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 822fa9f6-0a5d-490e-89d7-446df19a068b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.658 248514 DEBUG nova.storage.rbd_utils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 822fa9f6-0a5d-490e-89d7-446df19a068b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.816 248514 DEBUG nova.storage.rbd_utils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 822fa9f6-0a5d-490e-89d7-446df19a068b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.821 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.864 248514 DEBUG nova.policy [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c377508dda354c0aa762d15d52aa130c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.901 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.902 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.903 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.903 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.930 248514 DEBUG nova.storage.rbd_utils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 822fa9f6-0a5d-490e-89d7-446df19a068b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.935 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 822fa9f6-0a5d-490e-89d7-446df19a068b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.979 248514 DEBUG nova.network.neutron [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Successfully updated port: e7c59bd7-06ff-4220-ac42-ec02c9e22e2c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.996 248514 DEBUG oslo_concurrency.lockutils [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.996 248514 DEBUG oslo_concurrency.lockutils [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:55:53 np0005558241 nova_compute[248510]: 2025-12-13 08:55:53.997 248514 DEBUG nova.network.neutron [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:55:54 np0005558241 nova_compute[248510]: 2025-12-13 08:55:54.079 248514 DEBUG nova.compute.manager [req-6fb68cce-b8d2-48b2-838f-b9c96f62c8f8 req-20cea67b-9e3a-4f6d-b915-4ca8807f06ac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-changed-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:55:54 np0005558241 nova_compute[248510]: 2025-12-13 08:55:54.079 248514 DEBUG nova.compute.manager [req-6fb68cce-b8d2-48b2-838f-b9c96f62c8f8 req-20cea67b-9e3a-4f6d-b915-4ca8807f06ac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Refreshing instance network info cache due to event network-changed-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 03:55:54 np0005558241 nova_compute[248510]: 2025-12-13 08:55:54.080 248514 DEBUG oslo_concurrency.lockutils [req-6fb68cce-b8d2-48b2-838f-b9c96f62c8f8 req-20cea67b-9e3a-4f6d-b915-4ca8807f06ac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:55:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2832: 321 pgs: 321 active+clean; 279 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.5 MiB/s wr, 130 op/s
Dec 13 03:55:54 np0005558241 nova_compute[248510]: 2025-12-13 08:55:54.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:55:55 np0005558241 nova_compute[248510]: 2025-12-13 08:55:55.054 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 822fa9f6-0a5d-490e-89d7-446df19a068b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:55:55 np0005558241 nova_compute[248510]: 2025-12-13 08:55:55.122 248514 DEBUG nova.storage.rbd_utils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] resizing rbd image 822fa9f6-0a5d-490e-89d7-446df19a068b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 03:55:55 np0005558241 nova_compute[248510]: 2025-12-13 08:55:55.309 248514 DEBUG nova.objects.instance [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'migration_context' on Instance uuid 822fa9f6-0a5d-490e-89d7-446df19a068b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 03:55:55 np0005558241 nova_compute[248510]: 2025-12-13 08:55:55.328 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 03:55:55 np0005558241 nova_compute[248510]: 2025-12-13 08:55:55.329 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Ensure instance console log exists: /var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 03:55:55 np0005558241 nova_compute[248510]: 2025-12-13 08:55:55.329 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:55:55 np0005558241 nova_compute[248510]: 2025-12-13 08:55:55.329 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:55:55 np0005558241 nova_compute[248510]: 2025-12-13 08:55:55.330 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:55:55 np0005558241 nova_compute[248510]: 2025-12-13 08:55:55.345 248514 DEBUG nova.network.neutron [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Successfully created port: f069307e-1a47-4342-b244-d88f04ff512b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 03:55:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:55.432 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:55:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:55.434 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:55:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:55.435 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:55:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:55:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2833: 321 pgs: 321 active+clean; 279 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 03:55:57 np0005558241 nova_compute[248510]: 2025-12-13 08:55:57.623 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.047 248514 DEBUG nova.network.neutron [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Successfully updated port: f069307e-1a47-4342-b244-d88f04ff512b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.064 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "refresh_cache-822fa9f6-0a5d-490e-89d7-446df19a068b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.065 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquired lock "refresh_cache-822fa9f6-0a5d-490e-89d7-446df19a068b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.065 248514 DEBUG nova.network.neutron [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.126 248514 DEBUG nova.network.neutron [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updating instance_info_cache with network_info: [{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.165 248514 DEBUG oslo_concurrency.lockutils [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.166 248514 DEBUG oslo_concurrency.lockutils [req-6fb68cce-b8d2-48b2-838f-b9c96f62c8f8 req-20cea67b-9e3a-4f6d-b915-4ca8807f06ac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.166 248514 DEBUG nova.network.neutron [req-6fb68cce-b8d2-48b2-838f-b9c96f62c8f8 req-20cea67b-9e3a-4f6d-b915-4ca8807f06ac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Refreshing network info cache for port e7c59bd7-06ff-4220-ac42-ec02c9e22e2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.169 248514 DEBUG nova.virt.libvirt.vif [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659365118',display_name='tempest-TestNetworkBasicOps-server-1659365118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659365118',id=117,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz16q3FD1p8k0tar2d1eIkC//skJH4xKj4kgjtgiTA4Jsb+bBhJVSUYYnxuUA+hH7IUJhmzn5ldV/32mNa3yep4fi6cHpPuzrb3SBRYiJiHxef1Ww8f4LbH3ZmQueGLKQ==',key_name='tempest-TestNetworkBasicOps-755840351',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:55:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-h08l4d8t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:55:27Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5c900cfb-46bb-436b-a574-7985be3447da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.169 248514 DEBUG nova.network.os_vif_util [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.170 248514 DEBUG nova.network.os_vif_util [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.170 248514 DEBUG os_vif [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.171 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.171 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.171 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.176 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.176 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7c59bd7-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.176 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape7c59bd7-06, col_values=(('external_ids', {'iface-id': 'e7c59bd7-06ff-4220-ac42-ec02c9e22e2c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:f1:bc', 'vm-uuid': '5c900cfb-46bb-436b-a574-7985be3447da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.216 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:55:58 np0005558241 NetworkManager[50376]: <info>  [1765616158.2174] manager: (tape7c59bd7-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/481)
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.218 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.226 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.227 248514 INFO os_vif [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06')#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.227 248514 DEBUG nova.virt.libvirt.vif [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659365118',display_name='tempest-TestNetworkBasicOps-server-1659365118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659365118',id=117,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz16q3FD1p8k0tar2d1eIkC//skJH4xKj4kgjtgiTA4Jsb+bBhJVSUYYnxuUA+hH7IUJhmzn5ldV/32mNa3yep4fi6cHpPuzrb3SBRYiJiHxef1Ww8f4LbH3ZmQueGLKQ==',key_name='tempest-TestNetworkBasicOps-755840351',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:55:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-h08l4d8t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:55:27Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5c900cfb-46bb-436b-a574-7985be3447da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.228 248514 DEBUG nova.network.os_vif_util [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.228 248514 DEBUG nova.network.os_vif_util [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.231 248514 DEBUG nova.virt.libvirt.guest [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] attach device xml: <interface type="ethernet">
Dec 13 03:55:58 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:1d:f1:bc"/>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:  <target dev="tape7c59bd7-06"/>
Dec 13 03:55:58 np0005558241 nova_compute[248510]: </interface>
Dec 13 03:55:58 np0005558241 nova_compute[248510]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec 13 03:55:58 np0005558241 kernel: tape7c59bd7-06: entered promiscuous mode
Dec 13 03:55:58 np0005558241 NetworkManager[50376]: <info>  [1765616158.2470] manager: (tape7c59bd7-06): new Tun device (/org/freedesktop/NetworkManager/Devices/482)
Dec 13 03:55:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:58Z|01147|binding|INFO|Claiming lport e7c59bd7-06ff-4220-ac42-ec02c9e22e2c for this chassis.
Dec 13 03:55:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:58Z|01148|binding|INFO|e7c59bd7-06ff-4220-ac42-ec02c9e22e2c: Claiming fa:16:3e:1d:f1:bc 10.100.0.18
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.248 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2834: 321 pgs: 321 active+clean; 297 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 2.9 MiB/s wr, 87 op/s
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.256 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:f1:bc 10.100.0.18'], port_security=['fa:16:3e:1d:f1:bc 10.100.0.18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': '5c900cfb-46bb-436b-a574-7985be3447da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96311132-fe8d-4c3e-acff-901d94244605', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '213b92ea-ddd9-4749-8abe-382a90d446f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=482a9af5-c769-4836-8604-7eb97e93b8de, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.257 158419 INFO neutron.agent.ovn.metadata.agent [-] Port e7c59bd7-06ff-4220-ac42-ec02c9e22e2c in datapath 96311132-fe8d-4c3e-acff-901d94244605 bound to our chassis#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.259 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 96311132-fe8d-4c3e-acff-901d94244605#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.275 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[83f7df94-49fc-4551-be29-bded5e1ebf17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.276 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap96311132-f1 in ovnmeta-96311132-fe8d-4c3e-acff-901d94244605 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.279 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap96311132-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.279 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a9063495-cba7-423e-aa41-d7167952ec91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.279 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[08f165cf-520d-4a22-9140-d6af84a88289]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:58 np0005558241 systemd-udevd[366341]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.293 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[ff973fa4-d796-4470-bc5c-10a1fbe4879c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.296 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:58Z|01149|binding|INFO|Setting lport e7c59bd7-06ff-4220-ac42-ec02c9e22e2c ovn-installed in OVS
Dec 13 03:55:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:58Z|01150|binding|INFO|Setting lport e7c59bd7-06ff-4220-ac42-ec02c9e22e2c up in Southbound
Dec 13 03:55:58 np0005558241 NetworkManager[50376]: <info>  [1765616158.2995] device (tape7c59bd7-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.300 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:58 np0005558241 NetworkManager[50376]: <info>  [1765616158.3014] device (tape7c59bd7-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.308 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[35c947de-fa6b-44f3-acfa-cdfe88f5a1b0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.337 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[60c652da-65e6-4007-a085-e38676b073e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.344 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe1b430-7529-41a5-af13-246694ec0764]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:58 np0005558241 systemd-udevd[366344]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:55:58 np0005558241 NetworkManager[50376]: <info>  [1765616158.3448] manager: (tap96311132-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/483)
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.362 248514 DEBUG nova.virt.libvirt.driver [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.362 248514 DEBUG nova.virt.libvirt.driver [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.363 248514 DEBUG nova.virt.libvirt.driver [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:c9:87:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.363 248514 DEBUG nova.virt.libvirt.driver [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:1d:f1:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.377 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[968a1e9b-6567-4613-a5f9-ec10add872ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.381 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa9552b-be68-4a77-8a45-cb6030aa5522]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.382 248514 DEBUG nova.network.neutron [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:55:58 np0005558241 NetworkManager[50376]: <info>  [1765616158.4037] device (tap96311132-f0): carrier: link connected
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.407 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6174a6a9-4dbf-47a8-9ffe-80d779b04173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.413 248514 DEBUG nova.virt.libvirt.guest [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:55:58 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:  <nova:name>tempest-TestNetworkBasicOps-server-1659365118</nova:name>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:55:58</nova:creationTime>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:55:58 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:    <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:    <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:    <nova:port uuid="a010c1a2-26e3-477b-9539-f12ad28801ca">
Dec 13 03:55:58 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:    <nova:port uuid="e7c59bd7-06ff-4220-ac42-ec02c9e22e2c">
Dec 13 03:55:58 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:55:58 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:55:58 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:55:58 np0005558241 nova_compute[248510]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.428 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ab732f64-2d26-4fac-b758-7ef1ddc5b999]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96311132-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:48:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873561, 'reachable_time': 22765, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366367, 'error': None, 'target': 'ovnmeta-96311132-fe8d-4c3e-acff-901d94244605', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.444 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[da9c7440-b0f6-473c-a2fd-1df4d49a9775]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefc:4848'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 873561, 'tstamp': 873561}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366368, 'error': None, 'target': 'ovnmeta-96311132-fe8d-4c3e-acff-901d94244605', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.448 248514 DEBUG oslo_concurrency.lockutils [None req-d0cbc717-a826-4b76-aab0-0a6ca9a115e6 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "interface-5c900cfb-46bb-436b-a574-7985be3447da-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.462 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f0fc8c81-a73d-4ac4-9aca-cc25a64b2c92]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96311132-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:48:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873561, 'reachable_time': 22765, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366369, 'error': None, 'target': 'ovnmeta-96311132-fe8d-4c3e-acff-901d94244605', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.500 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eb731a66-6125-4f48-b01a-b0e841bebfb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.545 248514 DEBUG nova.compute.manager [req-bb382b98-377e-4864-8167-6c17b7bbe486 req-1d011190-e9eb-45b1-a746-147244094c31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received event network-changed-f069307e-1a47-4342-b244-d88f04ff512b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.546 248514 DEBUG nova.compute.manager [req-bb382b98-377e-4864-8167-6c17b7bbe486 req-1d011190-e9eb-45b1-a746-147244094c31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Refreshing instance network info cache due to event network-changed-f069307e-1a47-4342-b244-d88f04ff512b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.546 248514 DEBUG oslo_concurrency.lockutils [req-bb382b98-377e-4864-8167-6c17b7bbe486 req-1d011190-e9eb-45b1-a746-147244094c31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-822fa9f6-0a5d-490e-89d7-446df19a068b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.568 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e342a79c-4635-4fca-97c7-f49140830a54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.569 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96311132-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.570 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.570 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96311132-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:55:58 np0005558241 kernel: tap96311132-f0: entered promiscuous mode
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.572 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:58 np0005558241 NetworkManager[50376]: <info>  [1765616158.5744] manager: (tap96311132-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/484)
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.575 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap96311132-f0, col_values=(('external_ids', {'iface-id': '48362829-5f00-4442-8cbd-a68a3c85a0da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.576 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:58 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:58Z|01151|binding|INFO|Releasing lport 48362829-5f00-4442-8cbd-a68a3c85a0da from this chassis (sb_readonly=0)
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.591 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.592 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/96311132-fe8d-4c3e-acff-901d94244605.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/96311132-fe8d-4c3e-acff-901d94244605.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.593 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11eb1ca3-80aa-4561-89ba-5e035d21f051]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.594 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-96311132-fe8d-4c3e-acff-901d94244605
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/96311132-fe8d-4c3e-acff-901d94244605.pid.haproxy
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 96311132-fe8d-4c3e-acff-901d94244605
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:55:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:55:58.595 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-96311132-fe8d-4c3e-acff-901d94244605', 'env', 'PROCESS_TAG=haproxy-96311132-fe8d-4c3e-acff-901d94244605', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/96311132-fe8d-4c3e-acff-901d94244605.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.641 248514 DEBUG nova.compute.manager [req-3c9003e0-6980-4e73-897a-d60c414302eb req-24c47067-d7cb-48cf-be3c-927968520b6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-plugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.642 248514 DEBUG oslo_concurrency.lockutils [req-3c9003e0-6980-4e73-897a-d60c414302eb req-24c47067-d7cb-48cf-be3c-927968520b6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.642 248514 DEBUG oslo_concurrency.lockutils [req-3c9003e0-6980-4e73-897a-d60c414302eb req-24c47067-d7cb-48cf-be3c-927968520b6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.643 248514 DEBUG oslo_concurrency.lockutils [req-3c9003e0-6980-4e73-897a-d60c414302eb req-24c47067-d7cb-48cf-be3c-927968520b6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.643 248514 DEBUG nova.compute.manager [req-3c9003e0-6980-4e73-897a-d60c414302eb req-24c47067-d7cb-48cf-be3c-927968520b6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] No waiting events found dispatching network-vif-plugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:55:58 np0005558241 nova_compute[248510]: 2025-12-13 08:55:58.643 248514 WARNING nova.compute.manager [req-3c9003e0-6980-4e73-897a-d60c414302eb req-24c47067-d7cb-48cf-be3c-927968520b6c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received unexpected event network-vif-plugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c for instance with vm_state active and task_state None.#033[00m
Dec 13 03:55:59 np0005558241 podman[366401]: 2025-12-13 08:55:59.009439107 +0000 UTC m=+0.058875097 container create b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:55:59 np0005558241 systemd[1]: Started libpod-conmon-b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512.scope.
Dec 13 03:55:59 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:55:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb244b2b4cd5882665305b63ffc666cdd1d1c64088549bf395ad8f99e02bfcdc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:59 np0005558241 podman[366401]: 2025-12-13 08:55:58.975585228 +0000 UTC m=+0.025021238 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:55:59 np0005558241 podman[366401]: 2025-12-13 08:55:59.082845357 +0000 UTC m=+0.132281377 container init b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:55:59 np0005558241 podman[366401]: 2025-12-13 08:55:59.087997067 +0000 UTC m=+0.137433057 container start b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:55:59 np0005558241 neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605[366417]: [NOTICE]   (366421) : New worker (366423) forked
Dec 13 03:55:59 np0005558241 neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605[366417]: [NOTICE]   (366421) : Loading success.
Dec 13 03:55:59 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:59Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1d:f1:bc 10.100.0.18
Dec 13 03:55:59 np0005558241 ovn_controller[148476]: 2025-12-13T08:55:59Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1d:f1:bc 10.100.0.18
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.066 248514 DEBUG nova.network.neutron [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Updating instance_info_cache with network_info: [{"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.091 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Releasing lock "refresh_cache-822fa9f6-0a5d-490e-89d7-446df19a068b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.093 248514 DEBUG nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Instance network_info: |[{"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.094 248514 DEBUG oslo_concurrency.lockutils [req-bb382b98-377e-4864-8167-6c17b7bbe486 req-1d011190-e9eb-45b1-a746-147244094c31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-822fa9f6-0a5d-490e-89d7-446df19a068b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.094 248514 DEBUG nova.network.neutron [req-bb382b98-377e-4864-8167-6c17b7bbe486 req-1d011190-e9eb-45b1-a746-147244094c31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Refreshing network info cache for port f069307e-1a47-4342-b244-d88f04ff512b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.097 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Start _get_guest_xml network_info=[{"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.103 248514 WARNING nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.112 248514 DEBUG nova.virt.libvirt.host [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.113 248514 DEBUG nova.virt.libvirt.host [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.122 248514 DEBUG nova.virt.libvirt.host [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.123 248514 DEBUG nova.virt.libvirt.host [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.123 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.123 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.124 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.124 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.124 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.124 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.124 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.124 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.125 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.125 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.125 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.125 248514 DEBUG nova.virt.hardware [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.128 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:56:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2835: 321 pgs: 321 active+clean; 326 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Dec 13 03:56:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:56:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:56:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4019380606' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.710 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.737 248514 DEBUG nova.storage.rbd_utils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 822fa9f6-0a5d-490e-89d7-446df19a068b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:56:00 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.741 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:00.999 248514 DEBUG nova.compute.manager [req-ab486c4a-c6d8-40cb-be98-8b1c2a7af543 req-372ad8ad-3f5c-4f38-8eca-000cc0ee441b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-plugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.000 248514 DEBUG oslo_concurrency.lockutils [req-ab486c4a-c6d8-40cb-be98-8b1c2a7af543 req-372ad8ad-3f5c-4f38-8eca-000cc0ee441b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.000 248514 DEBUG oslo_concurrency.lockutils [req-ab486c4a-c6d8-40cb-be98-8b1c2a7af543 req-372ad8ad-3f5c-4f38-8eca-000cc0ee441b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.000 248514 DEBUG oslo_concurrency.lockutils [req-ab486c4a-c6d8-40cb-be98-8b1c2a7af543 req-372ad8ad-3f5c-4f38-8eca-000cc0ee441b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.001 248514 DEBUG nova.compute.manager [req-ab486c4a-c6d8-40cb-be98-8b1c2a7af543 req-372ad8ad-3f5c-4f38-8eca-000cc0ee441b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] No waiting events found dispatching network-vif-plugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.001 248514 WARNING nova.compute.manager [req-ab486c4a-c6d8-40cb-be98-8b1c2a7af543 req-372ad8ad-3f5c-4f38-8eca-000cc0ee441b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received unexpected event network-vif-plugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c for instance with vm_state active and task_state None.#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.123 248514 DEBUG oslo_concurrency.lockutils [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "interface-5c900cfb-46bb-436b-a574-7985be3447da-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.124 248514 DEBUG oslo_concurrency.lockutils [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "interface-5c900cfb-46bb-436b-a574-7985be3447da-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.142 248514 DEBUG nova.network.neutron [req-6fb68cce-b8d2-48b2-838f-b9c96f62c8f8 req-20cea67b-9e3a-4f6d-b915-4ca8807f06ac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updated VIF entry in instance network info cache for port e7c59bd7-06ff-4220-ac42-ec02c9e22e2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.142 248514 DEBUG nova.network.neutron [req-6fb68cce-b8d2-48b2-838f-b9c96f62c8f8 req-20cea67b-9e3a-4f6d-b915-4ca8807f06ac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updating instance_info_cache with network_info: [{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.148 248514 DEBUG nova.objects.instance [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'flavor' on Instance uuid 5c900cfb-46bb-436b-a574-7985be3447da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.175 248514 DEBUG oslo_concurrency.lockutils [req-6fb68cce-b8d2-48b2-838f-b9c96f62c8f8 req-20cea67b-9e3a-4f6d-b915-4ca8807f06ac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.180 248514 DEBUG nova.virt.libvirt.vif [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659365118',display_name='tempest-TestNetworkBasicOps-server-1659365118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659365118',id=117,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz16q3FD1p8k0tar2d1eIkC//skJH4xKj4kgjtgiTA4Jsb+bBhJVSUYYnxuUA+hH7IUJhmzn5ldV/32mNa3yep4fi6cHpPuzrb3SBRYiJiHxef1Ww8f4LbH3ZmQueGLKQ==',key_name='tempest-TestNetworkBasicOps-755840351',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:55:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-h08l4d8t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:55:27Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5c900cfb-46bb-436b-a574-7985be3447da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.180 248514 DEBUG nova.network.os_vif_util [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.181 248514 DEBUG nova.network.os_vif_util [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.185 248514 DEBUG nova.virt.libvirt.guest [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.187 248514 DEBUG nova.virt.libvirt.guest [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.190 248514 DEBUG nova.virt.libvirt.driver [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Attempting to detach device tape7c59bd7-06 from instance 5c900cfb-46bb-436b-a574-7985be3447da from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.190 248514 DEBUG nova.virt.libvirt.guest [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] detach device xml: <interface type="ethernet">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:1d:f1:bc"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <target dev="tape7c59bd7-06"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]: </interface>
Dec 13 03:56:01 np0005558241 nova_compute[248510]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.195 248514 DEBUG nova.virt.libvirt.guest [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.198 248514 DEBUG nova.virt.libvirt.guest [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface>not found in domain: <domain type='kvm' id='144'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <name>instance-00000075</name>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <uuid>5c900cfb-46bb-436b-a574-7985be3447da</uuid>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:name>tempest-TestNetworkBasicOps-server-1659365118</nova:name>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:55:58</nova:creationTime>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:port uuid="a010c1a2-26e3-477b-9539-f12ad28801ca">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:port uuid="e7c59bd7-06ff-4220-ac42-ec02c9e22e2c">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:56:01 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <resource>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </resource>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <entry name='serial'>5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <entry name='uuid'>5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/5c900cfb-46bb-436b-a574-7985be3447da_disk' index='2'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/5c900cfb-46bb-436b-a574-7985be3447da_disk.config' index='1'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:c9:87:a3'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target dev='tapa010c1a2-26'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:1d:f1:bc'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target dev='tape7c59bd7-06'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='net1'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <source path='/dev/pts/1'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log' append='off'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      </target>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/1'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <source path='/dev/pts/1'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log' append='off'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </console>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c584,c742</label>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c584,c742</imagelabel>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:56:01 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:56:01 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.198 248514 INFO nova.virt.libvirt.driver [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully detached device tape7c59bd7-06 from instance 5c900cfb-46bb-436b-a574-7985be3447da from the persistent domain config.
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.198 248514 DEBUG nova.virt.libvirt.driver [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] (1/8): Attempting to detach device tape7c59bd7-06 with device alias net1 from instance 5c900cfb-46bb-436b-a574-7985be3447da from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.199 248514 DEBUG nova.virt.libvirt.guest [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] detach device xml: <interface type="ethernet">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:1d:f1:bc"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <target dev="tape7c59bd7-06"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]: </interface>
Dec 13 03:56:01 np0005558241 nova_compute[248510]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 03:56:01 np0005558241 kernel: tape7c59bd7-06 (unregistering): left promiscuous mode
Dec 13 03:56:01 np0005558241 NetworkManager[50376]: <info>  [1765616161.3221] device (tape7c59bd7-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.336 248514 DEBUG nova.virt.libvirt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Received event <DeviceRemovedEvent: 1765616161.3357909, 5c900cfb-46bb-436b-a574-7985be3447da => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 03:56:01 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:01Z|01152|binding|INFO|Releasing lport e7c59bd7-06ff-4220-ac42-ec02c9e22e2c from this chassis (sb_readonly=0)
Dec 13 03:56:01 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:01Z|01153|binding|INFO|Setting lport e7c59bd7-06ff-4220-ac42-ec02c9e22e2c down in Southbound
Dec 13 03:56:01 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:01Z|01154|binding|INFO|Removing iface tape7c59bd7-06 ovn-installed in OVS
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.348 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.353 248514 DEBUG nova.virt.libvirt.driver [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Start waiting for the detach event from libvirt for device tape7c59bd7-06 with device alias net1 for instance 5c900cfb-46bb-436b-a574-7985be3447da _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.354 248514 DEBUG nova.virt.libvirt.guest [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 03:56:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.354 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:f1:bc 10.100.0.18'], port_security=['fa:16:3e:1d:f1:bc 10.100.0.18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': '5c900cfb-46bb-436b-a574-7985be3447da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96311132-fe8d-4c3e-acff-901d94244605', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '213b92ea-ddd9-4749-8abe-382a90d446f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=482a9af5-c769-4836-8604-7eb97e93b8de, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 03:56:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.356 158419 INFO neutron.agent.ovn.metadata.agent [-] Port e7c59bd7-06ff-4220-ac42-ec02c9e22e2c in datapath 96311132-fe8d-4c3e-acff-901d94244605 unbound from our chassis
Dec 13 03:56:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.358 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 96311132-fe8d-4c3e-acff-901d94244605, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 03:56:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.359 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[70f98c7a-17d0-4fa6-b38f-c2216152d6b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:56:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.360 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-96311132-fe8d-4c3e-acff-901d94244605 namespace which is not needed anymore
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.368 248514 DEBUG nova.virt.libvirt.guest [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface>not found in domain: <domain type='kvm' id='144'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <name>instance-00000075</name>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <uuid>5c900cfb-46bb-436b-a574-7985be3447da</uuid>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:name>tempest-TestNetworkBasicOps-server-1659365118</nova:name>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:55:58</nova:creationTime>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:port uuid="a010c1a2-26e3-477b-9539-f12ad28801ca">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:port uuid="e7c59bd7-06ff-4220-ac42-ec02c9e22e2c">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:56:01 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <resource>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </resource>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <entry name='serial'>5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <entry name='uuid'>5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/5c900cfb-46bb-436b-a574-7985be3447da_disk' index='2'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/5c900cfb-46bb-436b-a574-7985be3447da_disk.config' index='1'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:c9:87:a3'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target dev='tapa010c1a2-26'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <source path='/dev/pts/1'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log' append='off'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      </target>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/1'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <source path='/dev/pts/1'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log' append='off'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </console>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c584,c742</label>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c584,c742</imagelabel>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:56:01 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:56:01 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.369 248514 INFO nova.virt.libvirt.driver [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully detached device tape7c59bd7-06 from instance 5c900cfb-46bb-436b-a574-7985be3447da from the live domain config.
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.370 248514 DEBUG nova.virt.libvirt.vif [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659365118',display_name='tempest-TestNetworkBasicOps-server-1659365118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659365118',id=117,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz16q3FD1p8k0tar2d1eIkC//skJH4xKj4kgjtgiTA4Jsb+bBhJVSUYYnxuUA+hH7IUJhmzn5ldV/32mNa3yep4fi6cHpPuzrb3SBRYiJiHxef1Ww8f4LbH3ZmQueGLKQ==',key_name='tempest-TestNetworkBasicOps-755840351',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:55:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-h08l4d8t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:55:27Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5c900cfb-46bb-436b-a574-7985be3447da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.370 248514 DEBUG nova.network.os_vif_util [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.370 248514 DEBUG nova.network.os_vif_util [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.371 248514 DEBUG os_vif [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.373 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.374 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7c59bd7-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:56:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/992201784' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.375 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.377 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.379 248514 INFO os_vif [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06')#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.380 248514 DEBUG nova.virt.libvirt.guest [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:name>tempest-TestNetworkBasicOps-server-1659365118</nova:name>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:56:01</nova:creationTime>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:port uuid="a010c1a2-26e3-477b-9539-f12ad28801ca">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:56:01 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:56:01 np0005558241 nova_compute[248510]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.423 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.424 248514 DEBUG nova.virt.libvirt.vif [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:55:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-695339999',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-695339999',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ge',id=119,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHu0mZ4KdPQNIQDmStfb8oGr9IxxGZIxLFalN/4uGlZNxfEdmbl3D4ueCINh2ZhW4F33mK6Ev46I8YbLOH/wB4QDYG50l143kirQCN41tCbmF+vCIghQWMs2Kzj6YH+pg==',key_name='tempest-TestSecurityGroupsBasicOps-398328978',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-hzbxz9gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:55:53Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=822fa9f6-0a5d-490e-89d7-446df19a068b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.425 248514 DEBUG nova.network.os_vif_util [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.425 248514 DEBUG nova.network.os_vif_util [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:ef:3d,bridge_name='br-int',has_traffic_filtering=True,id=f069307e-1a47-4342-b244-d88f04ff512b,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069307e-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.426 248514 DEBUG nova.objects.instance [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 822fa9f6-0a5d-490e-89d7-446df19a068b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.429 248514 DEBUG nova.compute.manager [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.445 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <uuid>822fa9f6-0a5d-490e-89d7-446df19a068b</uuid>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <name>instance-00000077</name>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-695339999</nova:name>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:56:00</nova:creationTime>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <nova:user uuid="c377508dda354c0aa762d15d52aa130c">tempest-TestSecurityGroupsBasicOps-1856512026-project-member</nova:user>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <nova:project uuid="1064539fdf2b494ea705dd5e74afcd3b">tempest-TestSecurityGroupsBasicOps-1856512026</nova:project>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <nova:port uuid="f069307e-1a47-4342-b244-d88f04ff512b">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <entry name="serial">822fa9f6-0a5d-490e-89d7-446df19a068b</entry>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <entry name="uuid">822fa9f6-0a5d-490e-89d7-446df19a068b</entry>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/822fa9f6-0a5d-490e-89d7-446df19a068b_disk">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/822fa9f6-0a5d-490e-89d7-446df19a068b_disk.config">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:dd:ef:3d"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <target dev="tapf069307e-1a"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b/console.log" append="off"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:56:01 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:56:01 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:56:01 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:56:01 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.445 248514 DEBUG nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Preparing to wait for external event network-vif-plugged-f069307e-1a47-4342-b244-d88f04ff512b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.445 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.446 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.446 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:01 np0005558241 podman[366496]: 2025-12-13 08:56:01.447951918 +0000 UTC m=+0.097103526 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.447 248514 DEBUG nova.virt.libvirt.vif [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:55:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-695339999',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-695339999',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ge',id=119,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHu0mZ4KdPQNIQDmStfb8oGr9IxxGZIxLFalN/4uGlZNxfEdmbl3D4ueCINh2ZhW4F33mK6Ev46I8YbLOH/wB4QDYG50l143kirQCN41tCbmF+vCIghQWMs2Kzj6YH+pg==',key_name='tempest-TestSecurityGroupsBasicOps-398328978',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-hzbxz9gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:55:53Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=822fa9f6-0a5d-490e-89d7-446df19a068b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.447 248514 DEBUG nova.network.os_vif_util [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.447 248514 DEBUG nova.network.os_vif_util [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:ef:3d,bridge_name='br-int',has_traffic_filtering=True,id=f069307e-1a47-4342-b244-d88f04ff512b,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069307e-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.448 248514 DEBUG os_vif [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:ef:3d,bridge_name='br-int',has_traffic_filtering=True,id=f069307e-1a47-4342-b244-d88f04ff512b,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069307e-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.449 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.449 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.449 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.451 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.452 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf069307e-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.452 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf069307e-1a, col_values=(('external_ids', {'iface-id': 'f069307e-1a47-4342-b244-d88f04ff512b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:ef:3d', 'vm-uuid': '822fa9f6-0a5d-490e-89d7-446df19a068b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.454 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:01 np0005558241 NetworkManager[50376]: <info>  [1765616161.4556] manager: (tapf069307e-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/485)
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.456 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.463 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.464 248514 INFO os_vif [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:ef:3d,bridge_name='br-int',has_traffic_filtering=True,id=f069307e-1a47-4342-b244-d88f04ff512b,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069307e-1a')#033[00m
Dec 13 03:56:01 np0005558241 podman[366495]: 2025-12-13 08:56:01.481152541 +0000 UTC m=+0.134261758 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.492 248514 INFO nova.compute.manager [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] instance snapshotting#033[00m
Dec 13 03:56:01 np0005558241 neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605[366417]: [NOTICE]   (366421) : haproxy version is 2.8.14-c23fe91
Dec 13 03:56:01 np0005558241 neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605[366417]: [NOTICE]   (366421) : path to executable is /usr/sbin/haproxy
Dec 13 03:56:01 np0005558241 neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605[366417]: [WARNING]  (366421) : Exiting Master process...
Dec 13 03:56:01 np0005558241 podman[366492]: 2025-12-13 08:56:01.497940651 +0000 UTC m=+0.146136175 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:56:01 np0005558241 systemd[1]: libpod-b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512.scope: Deactivated successfully.
Dec 13 03:56:01 np0005558241 neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605[366417]: [ALERT]    (366421) : Current worker (366423) exited with code 143 (Terminated)
Dec 13 03:56:01 np0005558241 neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605[366417]: [WARNING]  (366421) : All workers exited. Exiting... (0)
Dec 13 03:56:01 np0005558241 conmon[366417]: conmon b3630e20ea88cdd8cebd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512.scope/container/memory.events
Dec 13 03:56:01 np0005558241 podman[366572]: 2025-12-13 08:56:01.507672566 +0000 UTC m=+0.046915348 container died b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.523 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.523 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.524 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] No VIF found with MAC fa:16:3e:dd:ef:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.524 248514 INFO nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Using config drive#033[00m
Dec 13 03:56:01 np0005558241 systemd[1]: var-lib-containers-storage-overlay-eb244b2b4cd5882665305b63ffc666cdd1d1c64088549bf395ad8f99e02bfcdc-merged.mount: Deactivated successfully.
Dec 13 03:56:01 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512-userdata-shm.mount: Deactivated successfully.
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.552 248514 DEBUG nova.storage.rbd_utils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 822fa9f6-0a5d-490e-89d7-446df19a068b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:56:01 np0005558241 podman[366572]: 2025-12-13 08:56:01.554882199 +0000 UTC m=+0.094124981 container cleanup b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 03:56:01 np0005558241 systemd[1]: libpod-conmon-b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512.scope: Deactivated successfully.
Dec 13 03:56:01 np0005558241 podman[366623]: 2025-12-13 08:56:01.622161196 +0000 UTC m=+0.044233360 container remove b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 03:56:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.628 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3287c2-be09-4925-83d7-f41ffa774be5]: (4, ('Sat Dec 13 08:56:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605 (b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512)\nb3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512\nSat Dec 13 08:56:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-96311132-fe8d-4c3e-acff-901d94244605 (b3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512)\nb3630e20ea88cdd8cebdf26d9de6ca69a7be4f96c21817acd568742867df3512\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.630 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3f2b3f-8752-4e24-b385-bed7f5aba1ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.631 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96311132-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:01 np0005558241 kernel: tap96311132-f0: left promiscuous mode
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.633 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.651 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.654 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e244cf-d8cb-4f5f-9c0f-a892d52230b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.669 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7727b6a5-b3aa-4a96-be33-59394aa549dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.671 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e48d3961-4575-45db-b2e7-2d882bdf2cd8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.688 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[df70c662-23ce-4ada-b30b-7fbb9ddfe80a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873554, 'reachable_time': 41631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366641, 'error': None, 'target': 'ovnmeta-96311132-fe8d-4c3e-acff-901d94244605', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.691 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-96311132-fe8d-4c3e-acff-901d94244605 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:56:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:01.691 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[61235515-6c81-44b9-b0ee-d0706fdbb5f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:01 np0005558241 systemd[1]: run-netns-ovnmeta\x2d96311132\x2dfe8d\x2d4c3e\x2dacff\x2d901d94244605.mount: Deactivated successfully.
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.803 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.845 248514 INFO nova.virt.libvirt.driver [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Beginning live snapshot process#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.986 248514 INFO nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Creating config drive at /var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b/disk.config#033[00m
Dec 13 03:56:01 np0005558241 nova_compute[248510]: 2025-12-13 08:56:01.992 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8gl_1ect execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:56:02 np0005558241 nova_compute[248510]: 2025-12-13 08:56:02.062 248514 DEBUG nova.virt.libvirt.imagebackend [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] No parent info for 0ed20320-9c25-4108-ad76-64b3cb3500ce; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Dec 13 03:56:02 np0005558241 nova_compute[248510]: 2025-12-13 08:56:02.166 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8gl_1ect" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:56:02 np0005558241 nova_compute[248510]: 2025-12-13 08:56:02.193 248514 DEBUG nova.storage.rbd_utils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] rbd image 822fa9f6-0a5d-490e-89d7-446df19a068b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:56:02 np0005558241 nova_compute[248510]: 2025-12-13 08:56:02.198 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b/disk.config 822fa9f6-0a5d-490e-89d7-446df19a068b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:56:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2836: 321 pgs: 321 active+clean; 326 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 339 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Dec 13 03:56:02 np0005558241 nova_compute[248510]: 2025-12-13 08:56:02.344 248514 DEBUG oslo_concurrency.processutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b/disk.config 822fa9f6-0a5d-490e-89d7-446df19a068b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:56:02 np0005558241 nova_compute[248510]: 2025-12-13 08:56:02.345 248514 INFO nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Deleting local config drive /var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b/disk.config because it was imported into RBD.#033[00m
Dec 13 03:56:02 np0005558241 nova_compute[248510]: 2025-12-13 08:56:02.364 248514 DEBUG nova.storage.rbd_utils [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] creating snapshot(5af9316f0a95453cbbf8adf24d1ded6b) on rbd image(75f348ef-4044-47a1-ba1b-f1b66513450c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:56:02 np0005558241 kernel: tapf069307e-1a: entered promiscuous mode
Dec 13 03:56:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:02Z|01155|binding|INFO|Claiming lport f069307e-1a47-4342-b244-d88f04ff512b for this chassis.
Dec 13 03:56:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:02Z|01156|binding|INFO|f069307e-1a47-4342-b244-d88f04ff512b: Claiming fa:16:3e:dd:ef:3d 10.100.0.10
Dec 13 03:56:02 np0005558241 systemd-udevd[366527]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:56:02 np0005558241 NetworkManager[50376]: <info>  [1765616162.4007] manager: (tapf069307e-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/486)
Dec 13 03:56:02 np0005558241 nova_compute[248510]: 2025-12-13 08:56:02.405 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.407 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:ef:3d 10.100.0.10'], port_security=['fa:16:3e:dd:ef:3d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '822fa9f6-0a5d-490e-89d7-446df19a068b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '49b9fdf8-f095-49d6-8a3e-6b41045e0020', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ef6794c-20c0-44ef-9932-95bf5c168e3e, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=f069307e-1a47-4342-b244-d88f04ff512b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:56:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.408 158419 INFO neutron.agent.ovn.metadata.agent [-] Port f069307e-1a47-4342-b244-d88f04ff512b in datapath d62e4a11-9334-4dbd-978f-dcabebeb9f79 bound to our chassis#033[00m
Dec 13 03:56:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.410 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d62e4a11-9334-4dbd-978f-dcabebeb9f79#033[00m
Dec 13 03:56:02 np0005558241 NetworkManager[50376]: <info>  [1765616162.4131] device (tapf069307e-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:56:02 np0005558241 NetworkManager[50376]: <info>  [1765616162.4141] device (tapf069307e-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:56:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:02Z|01157|binding|INFO|Setting lport f069307e-1a47-4342-b244-d88f04ff512b ovn-installed in OVS
Dec 13 03:56:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:02Z|01158|binding|INFO|Setting lport f069307e-1a47-4342-b244-d88f04ff512b up in Southbound
Dec 13 03:56:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.428 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[149396da-7c7e-4d57-864d-5803928e1ea2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:02 np0005558241 nova_compute[248510]: 2025-12-13 08:56:02.428 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:02 np0005558241 systemd-machined[210538]: New machine qemu-146-instance-00000077.
Dec 13 03:56:02 np0005558241 systemd[1]: Started Virtual Machine qemu-146-instance-00000077.
Dec 13 03:56:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.463 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d8ce85-1646-40c6-ad51-7b009b771a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.466 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[56d7445a-385e-4cb5-b18a-b309769a5989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.493 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[54cc44e0-448c-4947-a28d-67d7d59ece10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.510 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c631881d-a1ff-4ec7-aafd-aa310676c5ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd62e4a11-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:64:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869981, 'reachable_time': 20363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366756, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.531 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8bff13ca-7454-4e68-86e5-2a98a5f31682]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd62e4a11-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 869993, 'tstamp': 869993}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366758, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd62e4a11-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 869996, 'tstamp': 869996}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366758, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.533 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd62e4a11-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:02 np0005558241 nova_compute[248510]: 2025-12-13 08:56:02.535 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.537 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd62e4a11-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.537 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:56:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.537 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd62e4a11-90, col_values=(('external_ids', {'iface-id': '3d979ee9-5b95-4edf-8ffc-3de7e778c5ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:02.538 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:56:02 np0005558241 nova_compute[248510]: 2025-12-13 08:56:02.625 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:02 np0005558241 nova_compute[248510]: 2025-12-13 08:56:02.684 248514 DEBUG nova.network.neutron [req-bb382b98-377e-4864-8167-6c17b7bbe486 req-1d011190-e9eb-45b1-a746-147244094c31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Updated VIF entry in instance network info cache for port f069307e-1a47-4342-b244-d88f04ff512b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:56:02 np0005558241 nova_compute[248510]: 2025-12-13 08:56:02.684 248514 DEBUG nova.network.neutron [req-bb382b98-377e-4864-8167-6c17b7bbe486 req-1d011190-e9eb-45b1-a746-147244094c31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Updating instance_info_cache with network_info: [{"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:56:02 np0005558241 nova_compute[248510]: 2025-12-13 08:56:02.709 248514 DEBUG oslo_concurrency.lockutils [req-bb382b98-377e-4864-8167-6c17b7bbe486 req-1d011190-e9eb-45b1-a746-147244094c31 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-822fa9f6-0a5d-490e-89d7-446df19a068b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:56:02 np0005558241 nova_compute[248510]: 2025-12-13 08:56:02.835 248514 DEBUG oslo_concurrency.lockutils [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:56:02 np0005558241 nova_compute[248510]: 2025-12-13 08:56:02.835 248514 DEBUG oslo_concurrency.lockutils [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:56:02 np0005558241 nova_compute[248510]: 2025-12-13 08:56:02.835 248514 DEBUG nova.network.neutron [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.095 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616163.0952287, 822fa9f6-0a5d-490e-89d7-446df19a068b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.096 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] VM Started (Lifecycle Event)#033[00m
Dec 13 03:56:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Dec 13 03:56:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Dec 13 03:56:03 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.520 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.525 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616163.0953505, 822fa9f6-0a5d-490e-89d7-446df19a068b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.525 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.545 248514 DEBUG nova.compute.manager [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-deleted-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.545 248514 INFO nova.compute.manager [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Neutron deleted interface e7c59bd7-06ff-4220-ac42-ec02c9e22e2c; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.546 248514 DEBUG nova.network.neutron [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updating instance_info_cache with network_info: [{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.550 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.553 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.576 248514 DEBUG nova.storage.rbd_utils [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] cloning vms/75f348ef-4044-47a1-ba1b-f1b66513450c_disk@5af9316f0a95453cbbf8adf24d1ded6b to images/bc45ce83-2d30-4107-90ce-9a9307d84fab clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.607 248514 DEBUG nova.compute.manager [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-unplugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.607 248514 DEBUG oslo_concurrency.lockutils [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.608 248514 DEBUG oslo_concurrency.lockutils [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.608 248514 DEBUG oslo_concurrency.lockutils [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.608 248514 DEBUG nova.compute.manager [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] No waiting events found dispatching network-vif-unplugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.608 248514 WARNING nova.compute.manager [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received unexpected event network-vif-unplugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c for instance with vm_state active and task_state None.#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.608 248514 DEBUG nova.compute.manager [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-plugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.609 248514 DEBUG oslo_concurrency.lockutils [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.609 248514 DEBUG oslo_concurrency.lockutils [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.609 248514 DEBUG oslo_concurrency.lockutils [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.609 248514 DEBUG nova.compute.manager [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] No waiting events found dispatching network-vif-plugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.609 248514 WARNING nova.compute.manager [req-47f8b7d6-2213-4580-8df3-6634c370b541 req-faeb0985-2b52-47f8-b340-56d6948b842d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received unexpected event network-vif-plugged-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c for instance with vm_state active and task_state None.#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.611 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.612 248514 DEBUG nova.objects.instance [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lazy-loading 'system_metadata' on Instance uuid 5c900cfb-46bb-436b-a574-7985be3447da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.637 248514 DEBUG nova.objects.instance [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lazy-loading 'flavor' on Instance uuid 5c900cfb-46bb-436b-a574-7985be3447da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.659 248514 DEBUG nova.virt.libvirt.vif [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659365118',display_name='tempest-TestNetworkBasicOps-server-1659365118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659365118',id=117,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz16q3FD1p8k0tar2d1eIkC//skJH4xKj4kgjtgiTA4Jsb+bBhJVSUYYnxuUA+hH7IUJhmzn5ldV/32mNa3yep4fi6cHpPuzrb3SBRYiJiHxef1Ww8f4LbH3ZmQueGLKQ==',key_name='tempest-TestNetworkBasicOps-755840351',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:55:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-h08l4d8t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:55:27Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5c900cfb-46bb-436b-a574-7985be3447da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.659 248514 DEBUG nova.network.os_vif_util [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converting VIF {"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.660 248514 DEBUG nova.network.os_vif_util [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.663 248514 DEBUG nova.virt.libvirt.guest [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.666 248514 DEBUG nova.virt.libvirt.guest [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface>not found in domain: <domain type='kvm' id='144'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <name>instance-00000075</name>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <uuid>5c900cfb-46bb-436b-a574-7985be3447da</uuid>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:name>tempest-TestNetworkBasicOps-server-1659365118</nova:name>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:56:01</nova:creationTime>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:port uuid="a010c1a2-26e3-477b-9539-f12ad28801ca">
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:56:03 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <resource>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </resource>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <entry name='serial'>5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <entry name='uuid'>5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/5c900cfb-46bb-436b-a574-7985be3447da_disk' index='2'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/5c900cfb-46bb-436b-a574-7985be3447da_disk.config' index='1'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:c9:87:a3'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target dev='tapa010c1a2-26'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <source path='/dev/pts/1'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log' append='off'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      </target>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/1'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <source path='/dev/pts/1'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log' append='off'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </console>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c584,c742</label>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c584,c742</imagelabel>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:56:03 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:56:03 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.666 248514 DEBUG nova.virt.libvirt.guest [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.671 248514 DEBUG nova.virt.libvirt.guest [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1d:f1:bc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape7c59bd7-06"/></interface> not found in domain: <domain type='kvm' id='144'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <name>instance-00000075</name>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <uuid>5c900cfb-46bb-436b-a574-7985be3447da</uuid>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:name>tempest-TestNetworkBasicOps-server-1659365118</nova:name>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:56:01</nova:creationTime>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:port uuid="a010c1a2-26e3-477b-9539-f12ad28801ca">
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:56:03 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <resource>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </resource>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <entry name='serial'>5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <entry name='uuid'>5c900cfb-46bb-436b-a574-7985be3447da</entry>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/5c900cfb-46bb-436b-a574-7985be3447da_disk' index='2'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/5c900cfb-46bb-436b-a574-7985be3447da_disk.config' index='1'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </controller>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:c9:87:a3'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target dev='tapa010c1a2-26'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <source path='/dev/pts/1'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log' append='off'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      </target>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/1'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <source path='/dev/pts/1'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da/console.log' append='off'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </console>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </input>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c584,c742</label>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c584,c742</imagelabel>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 03:56:03 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:56:03 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.672 248514 WARNING nova.virt.libvirt.driver [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Detaching interface fa:16:3e:1d:f1:bc failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tape7c59bd7-06' not found.#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.673 248514 DEBUG nova.virt.libvirt.vif [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659365118',display_name='tempest-TestNetworkBasicOps-server-1659365118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659365118',id=117,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz16q3FD1p8k0tar2d1eIkC//skJH4xKj4kgjtgiTA4Jsb+bBhJVSUYYnxuUA+hH7IUJhmzn5ldV/32mNa3yep4fi6cHpPuzrb3SBRYiJiHxef1Ww8f4LbH3ZmQueGLKQ==',key_name='tempest-TestNetworkBasicOps-755840351',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:55:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-h08l4d8t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:55:27Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5c900cfb-46bb-436b-a574-7985be3447da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.673 248514 DEBUG nova.network.os_vif_util [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converting VIF {"id": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "address": "fa:16:3e:1d:f1:bc", "network": {"id": "96311132-fe8d-4c3e-acff-901d94244605", "bridge": "br-int", "label": "tempest-network-smoke--365157326", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7c59bd7-06", "ovs_interfaceid": "e7c59bd7-06ff-4220-ac42-ec02c9e22e2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.674 248514 DEBUG nova.network.os_vif_util [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.675 248514 DEBUG os_vif [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.677 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.677 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7c59bd7-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.678 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.680 248514 INFO os_vif [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f1:bc,bridge_name='br-int',has_traffic_filtering=True,id=e7c59bd7-06ff-4220-ac42-ec02c9e22e2c,network=Network(96311132-fe8d-4c3e-acff-901d94244605),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7c59bd7-06')#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.681 248514 DEBUG nova.virt.libvirt.guest [req-ed7639cb-7919-4e27-95f1-7281b946c9ac req-484a5b54-9e99-4a82-a112-635530b8433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:name>tempest-TestNetworkBasicOps-server-1659365118</nova:name>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:56:03</nova:creationTime>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    <nova:port uuid="a010c1a2-26e3-477b-9539-f12ad28801ca">
Dec 13 03:56:03 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:56:03 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:56:03 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:56:03 np0005558241 nova_compute[248510]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec 13 03:56:03 np0005558241 nova_compute[248510]: 2025-12-13 08:56:03.893 248514 DEBUG nova.storage.rbd_utils [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] flattening images/bc45ce83-2d30-4107-90ce-9a9307d84fab flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec 13 03:56:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2838: 321 pgs: 321 active+clean; 326 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 13 03:56:04 np0005558241 nova_compute[248510]: 2025-12-13 08:56:04.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:56:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:04Z|01159|binding|INFO|Releasing lport 47f45749-b232-4d0c-bf37-be042ea606c8 from this chassis (sb_readonly=0)
Dec 13 03:56:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:04Z|01160|binding|INFO|Releasing lport 091240c0-aa08-4e16-a096-0471c0ff1f24 from this chassis (sb_readonly=0)
Dec 13 03:56:04 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:04Z|01161|binding|INFO|Releasing lport 3d979ee9-5b95-4edf-8ffc-3de7e778c5ff from this chassis (sb_readonly=0)
Dec 13 03:56:04 np0005558241 nova_compute[248510]: 2025-12-13 08:56:04.861 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.198 248514 INFO nova.network.neutron [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Port e7c59bd7-06ff-4220-ac42-ec02c9e22e2c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.198 248514 DEBUG nova.network.neutron [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updating instance_info_cache with network_info: [{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.230 248514 DEBUG oslo_concurrency.lockutils [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:56:05 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.271 248514 DEBUG oslo_concurrency.lockutils [None req-71a0c7ff-5aeb-414d-8a49-9fd7284ab563 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "interface-5c900cfb-46bb-436b-a574-7985be3447da-e7c59bd7-06ff-4220-ac42-ec02c9e22e2c" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.720 248514 DEBUG nova.compute.manager [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received event network-vif-plugged-f069307e-1a47-4342-b244-d88f04ff512b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.721 248514 DEBUG oslo_concurrency.lockutils [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.722 248514 DEBUG oslo_concurrency.lockutils [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.723 248514 DEBUG oslo_concurrency.lockutils [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.723 248514 DEBUG nova.compute.manager [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Processing event network-vif-plugged-f069307e-1a47-4342-b244-d88f04ff512b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.724 248514 DEBUG nova.compute.manager [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received event network-vif-plugged-f069307e-1a47-4342-b244-d88f04ff512b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.724 248514 DEBUG oslo_concurrency.lockutils [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.725 248514 DEBUG oslo_concurrency.lockutils [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.725 248514 DEBUG oslo_concurrency.lockutils [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.726 248514 DEBUG nova.compute.manager [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] No waiting events found dispatching network-vif-plugged-f069307e-1a47-4342-b244-d88f04ff512b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.727 248514 WARNING nova.compute.manager [req-87ea1959-d1a4-4d36-b17f-313763971d3c req-04bfdc96-ec76-488d-ac6d-c98b5677eb32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received unexpected event network-vif-plugged-f069307e-1a47-4342-b244-d88f04ff512b for instance with vm_state building and task_state spawning.#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.728 248514 DEBUG nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.735 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616165.7348769, 822fa9f6-0a5d-490e-89d7-446df19a068b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.735 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.739 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.744 248514 INFO nova.virt.libvirt.driver [-] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Instance spawned successfully.#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.745 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.763 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.769 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.772 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.772 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.772 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.773 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.773 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.773 248514 DEBUG nova.virt.libvirt.driver [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.804 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.842 248514 INFO nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Took 12.35 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.843 248514 DEBUG nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.911 248514 INFO nova.compute.manager [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Took 13.49 seconds to build instance.#033[00m
Dec 13 03:56:05 np0005558241 nova_compute[248510]: 2025-12-13 08:56:05.929 248514 DEBUG oslo_concurrency.lockutils [None req-43988842-01b6-4c05-963d-05db929bf91f c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:06 np0005558241 nova_compute[248510]: 2025-12-13 08:56:06.044 248514 DEBUG nova.storage.rbd_utils [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] removing snapshot(5af9316f0a95453cbbf8adf24d1ded6b) on rbd image(75f348ef-4044-47a1-ba1b-f1b66513450c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec 13 03:56:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2839: 321 pgs: 321 active+clean; 326 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 13 03:56:06 np0005558241 nova_compute[248510]: 2025-12-13 08:56:06.454 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Dec 13 03:56:07 np0005558241 nova_compute[248510]: 2025-12-13 08:56:07.019 248514 DEBUG oslo_concurrency.lockutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:07 np0005558241 nova_compute[248510]: 2025-12-13 08:56:07.020 248514 DEBUG oslo_concurrency.lockutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:07 np0005558241 nova_compute[248510]: 2025-12-13 08:56:07.020 248514 DEBUG oslo_concurrency.lockutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:07 np0005558241 nova_compute[248510]: 2025-12-13 08:56:07.020 248514 DEBUG oslo_concurrency.lockutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:07 np0005558241 nova_compute[248510]: 2025-12-13 08:56:07.021 248514 DEBUG oslo_concurrency.lockutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:07 np0005558241 nova_compute[248510]: 2025-12-13 08:56:07.023 248514 INFO nova.compute.manager [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Terminating instance#033[00m
Dec 13 03:56:07 np0005558241 nova_compute[248510]: 2025-12-13 08:56:07.024 248514 DEBUG nova.compute.manager [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:56:07 np0005558241 nova_compute[248510]: 2025-12-13 08:56:07.627 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Dec 13 03:56:07 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Dec 13 03:56:07 np0005558241 kernel: tapa010c1a2-26 (unregistering): left promiscuous mode
Dec 13 03:56:07 np0005558241 NetworkManager[50376]: <info>  [1765616167.7979] device (tapa010c1a2-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:56:07 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:07Z|01162|binding|INFO|Releasing lport a010c1a2-26e3-477b-9539-f12ad28801ca from this chassis (sb_readonly=0)
Dec 13 03:56:07 np0005558241 nova_compute[248510]: 2025-12-13 08:56:07.810 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:07 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:07Z|01163|binding|INFO|Setting lport a010c1a2-26e3-477b-9539-f12ad28801ca down in Southbound
Dec 13 03:56:07 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:07Z|01164|binding|INFO|Removing iface tapa010c1a2-26 ovn-installed in OVS
Dec 13 03:56:07 np0005558241 nova_compute[248510]: 2025-12-13 08:56:07.818 248514 DEBUG nova.compute.manager [req-ce7a6f6f-9b7e-4900-acaa-40d060c70fb7 req-2b847519-6528-4e61-8fb0-8710a8f742ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-changed-a010c1a2-26e3-477b-9539-f12ad28801ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:07 np0005558241 nova_compute[248510]: 2025-12-13 08:56:07.818 248514 DEBUG nova.compute.manager [req-ce7a6f6f-9b7e-4900-acaa-40d060c70fb7 req-2b847519-6528-4e61-8fb0-8710a8f742ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Refreshing instance network info cache due to event network-changed-a010c1a2-26e3-477b-9539-f12ad28801ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:56:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:07.818 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:87:a3 10.100.0.8'], port_security=['fa:16:3e:c9:87:a3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '5c900cfb-46bb-436b-a574-7985be3447da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2d9e215-0c32-4abc-92a1-ad5f852b369d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd2e8c6a7-9a03-4a93-a9dd-d4be82f5297d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d20f24f-0d1e-4b3a-97e8-eb661209feb7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=a010c1a2-26e3-477b-9539-f12ad28801ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:56:07 np0005558241 nova_compute[248510]: 2025-12-13 08:56:07.819 248514 DEBUG oslo_concurrency.lockutils [req-ce7a6f6f-9b7e-4900-acaa-40d060c70fb7 req-2b847519-6528-4e61-8fb0-8710a8f742ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:56:07 np0005558241 nova_compute[248510]: 2025-12-13 08:56:07.819 248514 DEBUG oslo_concurrency.lockutils [req-ce7a6f6f-9b7e-4900-acaa-40d060c70fb7 req-2b847519-6528-4e61-8fb0-8710a8f742ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:56:07 np0005558241 nova_compute[248510]: 2025-12-13 08:56:07.819 248514 DEBUG nova.network.neutron [req-ce7a6f6f-9b7e-4900-acaa-40d060c70fb7 req-2b847519-6528-4e61-8fb0-8710a8f742ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Refreshing network info cache for port a010c1a2-26e3-477b-9539-f12ad28801ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:56:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:07.820 158419 INFO neutron.agent.ovn.metadata.agent [-] Port a010c1a2-26e3-477b-9539-f12ad28801ca in datapath b2d9e215-0c32-4abc-92a1-ad5f852b369d unbound from our chassis#033[00m
Dec 13 03:56:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:07.823 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2d9e215-0c32-4abc-92a1-ad5f852b369d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:56:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:07.824 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5770f2-439b-4c58-aa99-8c8d7f2a0f83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:07.825 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d namespace which is not needed anymore#033[00m
Dec 13 03:56:07 np0005558241 nova_compute[248510]: 2025-12-13 08:56:07.833 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:07 np0005558241 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000075.scope: Deactivated successfully.
Dec 13 03:56:07 np0005558241 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000075.scope: Consumed 14.477s CPU time.
Dec 13 03:56:07 np0005558241 systemd-machined[210538]: Machine qemu-144-instance-00000075 terminated.
Dec 13 03:56:08 np0005558241 neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d[364925]: [NOTICE]   (364929) : haproxy version is 2.8.14-c23fe91
Dec 13 03:56:08 np0005558241 neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d[364925]: [NOTICE]   (364929) : path to executable is /usr/sbin/haproxy
Dec 13 03:56:08 np0005558241 neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d[364925]: [WARNING]  (364929) : Exiting Master process...
Dec 13 03:56:08 np0005558241 neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d[364925]: [ALERT]    (364929) : Current worker (364931) exited with code 143 (Terminated)
Dec 13 03:56:08 np0005558241 neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d[364925]: [WARNING]  (364929) : All workers exited. Exiting... (0)
Dec 13 03:56:08 np0005558241 nova_compute[248510]: 2025-12-13 08:56:08.040 248514 DEBUG nova.storage.rbd_utils [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] creating snapshot(snap) on rbd image(bc45ce83-2d30-4107-90ce-9a9307d84fab) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:56:08 np0005558241 systemd[1]: libpod-d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6.scope: Deactivated successfully.
Dec 13 03:56:08 np0005558241 podman[366897]: 2025-12-13 08:56:08.054349547 +0000 UTC m=+0.105701381 container died d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:56:08 np0005558241 nova_compute[248510]: 2025-12-13 08:56:08.173 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:08 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6-userdata-shm.mount: Deactivated successfully.
Dec 13 03:56:08 np0005558241 systemd[1]: var-lib-containers-storage-overlay-761b232b1aa41b0d759478588d2b31a2caafbf1df47bdacf3a2927b25a18515d-merged.mount: Deactivated successfully.
Dec 13 03:56:08 np0005558241 nova_compute[248510]: 2025-12-13 08:56:08.187 248514 INFO nova.virt.libvirt.driver [-] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Instance destroyed successfully.#033[00m
Dec 13 03:56:08 np0005558241 nova_compute[248510]: 2025-12-13 08:56:08.188 248514 DEBUG nova.objects.instance [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid 5c900cfb-46bb-436b-a574-7985be3447da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:56:08 np0005558241 nova_compute[248510]: 2025-12-13 08:56:08.207 248514 DEBUG nova.virt.libvirt.vif [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1659365118',display_name='tempest-TestNetworkBasicOps-server-1659365118',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1659365118',id=117,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHz16q3FD1p8k0tar2d1eIkC//skJH4xKj4kgjtgiTA4Jsb+bBhJVSUYYnxuUA+hH7IUJhmzn5ldV/32mNa3yep4fi6cHpPuzrb3SBRYiJiHxef1Ww8f4LbH3ZmQueGLKQ==',key_name='tempest-TestNetworkBasicOps-755840351',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:55:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-h08l4d8t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:55:27Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5c900cfb-46bb-436b-a574-7985be3447da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:56:08 np0005558241 nova_compute[248510]: 2025-12-13 08:56:08.208 248514 DEBUG nova.network.os_vif_util [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:56:08 np0005558241 nova_compute[248510]: 2025-12-13 08:56:08.209 248514 DEBUG nova.network.os_vif_util [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c9:87:a3,bridge_name='br-int',has_traffic_filtering=True,id=a010c1a2-26e3-477b-9539-f12ad28801ca,network=Network(b2d9e215-0c32-4abc-92a1-ad5f852b369d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa010c1a2-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:56:08 np0005558241 nova_compute[248510]: 2025-12-13 08:56:08.210 248514 DEBUG os_vif [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:87:a3,bridge_name='br-int',has_traffic_filtering=True,id=a010c1a2-26e3-477b-9539-f12ad28801ca,network=Network(b2d9e215-0c32-4abc-92a1-ad5f852b369d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa010c1a2-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:56:08 np0005558241 nova_compute[248510]: 2025-12-13 08:56:08.212 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:08 np0005558241 nova_compute[248510]: 2025-12-13 08:56:08.212 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa010c1a2-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:08 np0005558241 nova_compute[248510]: 2025-12-13 08:56:08.251 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2841: 321 pgs: 321 active+clean; 364 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.1 MiB/s wr, 104 op/s
Dec 13 03:56:08 np0005558241 nova_compute[248510]: 2025-12-13 08:56:08.262 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:56:08 np0005558241 nova_compute[248510]: 2025-12-13 08:56:08.265 248514 INFO os_vif [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:87:a3,bridge_name='br-int',has_traffic_filtering=True,id=a010c1a2-26e3-477b-9539-f12ad28801ca,network=Network(b2d9e215-0c32-4abc-92a1-ad5f852b369d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa010c1a2-26')#033[00m
Dec 13 03:56:08 np0005558241 podman[366897]: 2025-12-13 08:56:08.503818736 +0000 UTC m=+0.555170580 container cleanup d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Dec 13 03:56:08 np0005558241 systemd[1]: libpod-conmon-d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6.scope: Deactivated successfully.
Dec 13 03:56:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Dec 13 03:56:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Dec 13 03:56:08 np0005558241 podman[366972]: 2025-12-13 08:56:08.995166096 +0000 UTC m=+0.461146853 container remove d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:56:09 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Dec 13 03:56:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:09.013 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eaef195d-1b6d-466e-a791-3e25036e2a30]: (4, ('Sat Dec 13 08:56:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d (d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6)\nd6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6\nSat Dec 13 08:56:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d (d6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6)\nd6a676ac07e5f6cc326fd61effbc35444097c72b426375d5339d9c483db193b6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:09.020 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c280c01a-3b54-4f66-8dea-eeec0b66884e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:09.024 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2d9e215-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.029 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:09 np0005558241 kernel: tapb2d9e215-00: left promiscuous mode
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.047 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:09.055 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1c5332e4-19cd-449e-aae8-cb9b72870d15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:09.074 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6a42deb9-907f-4fbd-a4a1-a45088d372e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:09.078 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f16493bc-97d0-42b3-b33d-e95a461a3f46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:09.107 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4538e68f-e204-46e5-a4f7-d0928718fcea]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870136, 'reachable_time': 29451, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366985, 'error': None, 'target': 'ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:09.111 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b2d9e215-0c32-4abc-92a1-ad5f852b369d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:56:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:09.111 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[68cb5833-ffdb-4f14-8f49-01a5a4b8d38c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:09 np0005558241 systemd[1]: run-netns-ovnmeta\x2db2d9e215\x2d0c32\x2d4abc\x2d92a1\x2dad5f852b369d.mount: Deactivated successfully.
Dec 13 03:56:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:56:09
Dec 13 03:56:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:56:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:56:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.mgr', 'backups', 'volumes']
Dec 13 03:56:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.509 248514 INFO nova.virt.libvirt.driver [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Deleting instance files /var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da_del#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.511 248514 INFO nova.virt.libvirt.driver [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Deletion of /var/lib/nova/instances/5c900cfb-46bb-436b-a574-7985be3447da_del complete#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.638 248514 INFO nova.compute.manager [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Took 2.61 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.639 248514 DEBUG oslo.service.loopingcall [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.640 248514 DEBUG nova.compute.manager [-] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.640 248514 DEBUG nova.network.neutron [-] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.814 248514 DEBUG nova.network.neutron [req-ce7a6f6f-9b7e-4900-acaa-40d060c70fb7 req-2b847519-6528-4e61-8fb0-8710a8f742ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updated VIF entry in instance network info cache for port a010c1a2-26e3-477b-9539-f12ad28801ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.816 248514 DEBUG nova.network.neutron [req-ce7a6f6f-9b7e-4900-acaa-40d060c70fb7 req-2b847519-6528-4e61-8fb0-8710a8f742ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updating instance_info_cache with network_info: [{"id": "a010c1a2-26e3-477b-9539-f12ad28801ca", "address": "fa:16:3e:c9:87:a3", "network": {"id": "b2d9e215-0c32-4abc-92a1-ad5f852b369d", "bridge": "br-int", "label": "tempest-network-smoke--2107685138", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa010c1a2-26", "ovs_interfaceid": "a010c1a2-26e3-477b-9539-f12ad28801ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.907 248514 DEBUG oslo_concurrency.lockutils [req-ce7a6f6f-9b7e-4900-acaa-40d060c70fb7 req-2b847519-6528-4e61-8fb0-8710a8f742ff 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-5c900cfb-46bb-436b-a574-7985be3447da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.951 248514 DEBUG nova.compute.manager [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-unplugged-a010c1a2-26e3-477b-9539-f12ad28801ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.952 248514 DEBUG oslo_concurrency.lockutils [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.953 248514 DEBUG oslo_concurrency.lockutils [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.953 248514 DEBUG oslo_concurrency.lockutils [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.953 248514 DEBUG nova.compute.manager [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] No waiting events found dispatching network-vif-unplugged-a010c1a2-26e3-477b-9539-f12ad28801ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.953 248514 DEBUG nova.compute.manager [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-unplugged-a010c1a2-26e3-477b-9539-f12ad28801ca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.954 248514 DEBUG nova.compute.manager [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-plugged-a010c1a2-26e3-477b-9539-f12ad28801ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.954 248514 DEBUG oslo_concurrency.lockutils [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5c900cfb-46bb-436b-a574-7985be3447da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.954 248514 DEBUG oslo_concurrency.lockutils [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.955 248514 DEBUG oslo_concurrency.lockutils [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.955 248514 DEBUG nova.compute.manager [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] No waiting events found dispatching network-vif-plugged-a010c1a2-26e3-477b-9539-f12ad28801ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:56:09 np0005558241 nova_compute[248510]: 2025-12-13 08:56:09.955 248514 WARNING nova.compute.manager [req-64f823f0-6c12-4e55-aaf6-58911a39e72f req-bd712225-b201-4a68-8f94-4c02e233d48d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received unexpected event network-vif-plugged-a010c1a2-26e3-477b-9539-f12ad28801ca for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:56:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:56:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:56:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:56:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:56:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:56:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:56:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2843: 321 pgs: 321 active+clean; 405 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 6.9 MiB/s wr, 266 op/s
Dec 13 03:56:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:56:10 np0005558241 nova_compute[248510]: 2025-12-13 08:56:10.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:56:10 np0005558241 nova_compute[248510]: 2025-12-13 08:56:10.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:56:10 np0005558241 nova_compute[248510]: 2025-12-13 08:56:10.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:56:10 np0005558241 nova_compute[248510]: 2025-12-13 08:56:10.809 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Dec 13 03:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:56:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:56:10 np0005558241 nova_compute[248510]: 2025-12-13 08:56:10.885 248514 INFO nova.virt.libvirt.driver [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Snapshot image upload complete#033[00m
Dec 13 03:56:10 np0005558241 nova_compute[248510]: 2025-12-13 08:56:10.886 248514 INFO nova.compute.manager [None req-10125d0e-c786-4172-b7ea-6411b4e04988 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Took 9.39 seconds to snapshot the instance on the hypervisor.#033[00m
Dec 13 03:56:11 np0005558241 nova_compute[248510]: 2025-12-13 08:56:11.038 248514 DEBUG nova.network.neutron [-] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:56:11 np0005558241 nova_compute[248510]: 2025-12-13 08:56:11.065 248514 INFO nova.compute.manager [-] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Took 1.43 seconds to deallocate network for instance.#033[00m
Dec 13 03:56:11 np0005558241 nova_compute[248510]: 2025-12-13 08:56:11.071 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:56:11 np0005558241 nova_compute[248510]: 2025-12-13 08:56:11.071 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:56:11 np0005558241 nova_compute[248510]: 2025-12-13 08:56:11.071 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:56:11 np0005558241 nova_compute[248510]: 2025-12-13 08:56:11.072 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2dacd79d-d668-430f-89e3-bd607a8298ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:56:11 np0005558241 nova_compute[248510]: 2025-12-13 08:56:11.202 248514 DEBUG oslo_concurrency.lockutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:11 np0005558241 nova_compute[248510]: 2025-12-13 08:56:11.204 248514 DEBUG oslo_concurrency.lockutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:11 np0005558241 nova_compute[248510]: 2025-12-13 08:56:11.441 248514 DEBUG oslo_concurrency.processutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:56:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:56:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159425532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:56:12 np0005558241 nova_compute[248510]: 2025-12-13 08:56:12.035 248514 DEBUG oslo_concurrency.processutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:56:12 np0005558241 nova_compute[248510]: 2025-12-13 08:56:12.043 248514 DEBUG nova.compute.provider_tree [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:56:12 np0005558241 nova_compute[248510]: 2025-12-13 08:56:12.068 248514 DEBUG nova.scheduler.client.report [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:56:12 np0005558241 nova_compute[248510]: 2025-12-13 08:56:12.075 248514 DEBUG nova.compute.manager [req-788509ca-8c78-4127-852e-9c48ba9f69d0 req-83acf6e9-b05f-48b5-9062-a687db55b599 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Received event network-vif-deleted-a010c1a2-26e3-477b-9539-f12ad28801ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:12 np0005558241 nova_compute[248510]: 2025-12-13 08:56:12.094 248514 DEBUG oslo_concurrency.lockutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:12 np0005558241 nova_compute[248510]: 2025-12-13 08:56:12.163 248514 INFO nova.scheduler.client.report [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance 5c900cfb-46bb-436b-a574-7985be3447da#033[00m
Dec 13 03:56:12 np0005558241 nova_compute[248510]: 2025-12-13 08:56:12.248 248514 DEBUG oslo_concurrency.lockutils [None req-cb879606-c14d-48fe-8020-54c63da1b002 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5c900cfb-46bb-436b-a574-7985be3447da" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2844: 321 pgs: 321 active+clean; 405 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 8.7 MiB/s rd, 5.8 MiB/s wr, 192 op/s
Dec 13 03:56:12 np0005558241 nova_compute[248510]: 2025-12-13 08:56:12.631 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:12 np0005558241 nova_compute[248510]: 2025-12-13 08:56:12.860 248514 DEBUG nova.compute.manager [req-5a6492a9-b4ca-4284-8fb1-232c2df1c26f req-823bbd0a-6da0-476d-a2e1-a11f53ade96d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received event network-changed-f069307e-1a47-4342-b244-d88f04ff512b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:12 np0005558241 nova_compute[248510]: 2025-12-13 08:56:12.861 248514 DEBUG nova.compute.manager [req-5a6492a9-b4ca-4284-8fb1-232c2df1c26f req-823bbd0a-6da0-476d-a2e1-a11f53ade96d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Refreshing instance network info cache due to event network-changed-f069307e-1a47-4342-b244-d88f04ff512b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:56:12 np0005558241 nova_compute[248510]: 2025-12-13 08:56:12.861 248514 DEBUG oslo_concurrency.lockutils [req-5a6492a9-b4ca-4284-8fb1-232c2df1c26f req-823bbd0a-6da0-476d-a2e1-a11f53ade96d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-822fa9f6-0a5d-490e-89d7-446df19a068b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:56:12 np0005558241 nova_compute[248510]: 2025-12-13 08:56:12.862 248514 DEBUG oslo_concurrency.lockutils [req-5a6492a9-b4ca-4284-8fb1-232c2df1c26f req-823bbd0a-6da0-476d-a2e1-a11f53ade96d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-822fa9f6-0a5d-490e-89d7-446df19a068b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:56:12 np0005558241 nova_compute[248510]: 2025-12-13 08:56:12.862 248514 DEBUG nova.network.neutron [req-5a6492a9-b4ca-4284-8fb1-232c2df1c26f req-823bbd0a-6da0-476d-a2e1-a11f53ade96d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Refreshing network info cache for port f069307e-1a47-4342-b244-d88f04ff512b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:56:13 np0005558241 nova_compute[248510]: 2025-12-13 08:56:13.284 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:13 np0005558241 nova_compute[248510]: 2025-12-13 08:56:13.691 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updating instance_info_cache with network_info: [{"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:56:13 np0005558241 nova_compute[248510]: 2025-12-13 08:56:13.720 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:56:13 np0005558241 nova_compute[248510]: 2025-12-13 08:56:13.720 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:56:13 np0005558241 nova_compute[248510]: 2025-12-13 08:56:13.721 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:56:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2845: 321 pgs: 321 active+clean; 326 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 5.8 MiB/s wr, 245 op/s
Dec 13 03:56:14 np0005558241 nova_compute[248510]: 2025-12-13 08:56:14.901 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:14.900 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:56:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:14.900 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:56:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:14.901 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:56:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1102281189' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:56:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:56:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1102281189' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:56:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:56:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Dec 13 03:56:15 np0005558241 nova_compute[248510]: 2025-12-13 08:56:15.664 248514 DEBUG nova.network.neutron [req-5a6492a9-b4ca-4284-8fb1-232c2df1c26f req-823bbd0a-6da0-476d-a2e1-a11f53ade96d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Updated VIF entry in instance network info cache for port f069307e-1a47-4342-b244-d88f04ff512b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:56:15 np0005558241 nova_compute[248510]: 2025-12-13 08:56:15.665 248514 DEBUG nova.network.neutron [req-5a6492a9-b4ca-4284-8fb1-232c2df1c26f req-823bbd0a-6da0-476d-a2e1-a11f53ade96d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Updating instance_info_cache with network_info: [{"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:56:15 np0005558241 nova_compute[248510]: 2025-12-13 08:56:15.688 248514 DEBUG oslo_concurrency.lockutils [req-5a6492a9-b4ca-4284-8fb1-232c2df1c26f req-823bbd0a-6da0-476d-a2e1-a11f53ade96d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-822fa9f6-0a5d-490e-89d7-446df19a068b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:56:15 np0005558241 nova_compute[248510]: 2025-12-13 08:56:15.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:56:15 np0005558241 nova_compute[248510]: 2025-12-13 08:56:15.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:56:15 np0005558241 nova_compute[248510]: 2025-12-13 08:56:15.815 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:15 np0005558241 nova_compute[248510]: 2025-12-13 08:56:15.816 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:15 np0005558241 nova_compute[248510]: 2025-12-13 08:56:15.816 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:15 np0005558241 nova_compute[248510]: 2025-12-13 08:56:15.816 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:56:15 np0005558241 nova_compute[248510]: 2025-12-13 08:56:15.816 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:56:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Dec 13 03:56:15 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Dec 13 03:56:16 np0005558241 nova_compute[248510]: 2025-12-13 08:56:16.208 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "4887eb43-1570-49a5-b20e-326af1e84a7b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:16 np0005558241 nova_compute[248510]: 2025-12-13 08:56:16.209 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:16 np0005558241 nova_compute[248510]: 2025-12-13 08:56:16.231 248514 DEBUG nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:56:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2847: 321 pgs: 321 active+clean; 326 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.8 MiB/s wr, 184 op/s
Dec 13 03:56:16 np0005558241 nova_compute[248510]: 2025-12-13 08:56:16.383 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:16 np0005558241 nova_compute[248510]: 2025-12-13 08:56:16.384 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:56:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3903390185' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:56:16 np0005558241 nova_compute[248510]: 2025-12-13 08:56:16.395 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:56:16 np0005558241 nova_compute[248510]: 2025-12-13 08:56:16.396 248514 INFO nova.compute.claims [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:56:16 np0005558241 nova_compute[248510]: 2025-12-13 08:56:16.414 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:56:16 np0005558241 nova_compute[248510]: 2025-12-13 08:56:16.745 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:56:16 np0005558241 nova_compute[248510]: 2025-12-13 08:56:16.745 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:56:16 np0005558241 nova_compute[248510]: 2025-12-13 08:56:16.754 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:56:16 np0005558241 nova_compute[248510]: 2025-12-13 08:56:16.754 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:56:16 np0005558241 nova_compute[248510]: 2025-12-13 08:56:16.760 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:56:16 np0005558241 nova_compute[248510]: 2025-12-13 08:56:16.761 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:56:16 np0005558241 nova_compute[248510]: 2025-12-13 08:56:16.893 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.036 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.039 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3021MB free_disk=59.87539869081229GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.040 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:17Z|01165|binding|INFO|Releasing lport 47f45749-b232-4d0c-bf37-be042ea606c8 from this chassis (sb_readonly=0)
Dec 13 03:56:17 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:17Z|01166|binding|INFO|Releasing lport 3d979ee9-5b95-4edf-8ffc-3de7e778c5ff from this chassis (sb_readonly=0)
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.410 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:56:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/417424684' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.489 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.495 248514 DEBUG nova.compute.provider_tree [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.516 248514 DEBUG nova.scheduler.client.report [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.555 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.556 248514 DEBUG nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.558 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.632 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.644 248514 DEBUG nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.645 248514 DEBUG nova.network.neutron [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.665 248514 INFO nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.682 248514 DEBUG nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.686 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 2dacd79d-d668-430f-89e3-bd607a8298ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.686 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 75f348ef-4044-47a1-ba1b-f1b66513450c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.687 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 822fa9f6-0a5d-490e-89d7-446df19a068b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.687 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 4887eb43-1570-49a5-b20e-326af1e84a7b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.687 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.687 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.799 248514 DEBUG nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.801 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.802 248514 INFO nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Creating image(s)#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.825 248514 DEBUG nova.storage.rbd_utils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 4887eb43-1570-49a5-b20e-326af1e84a7b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.850 248514 DEBUG nova.storage.rbd_utils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 4887eb43-1570-49a5-b20e-326af1e84a7b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.874 248514 DEBUG nova.storage.rbd_utils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 4887eb43-1570-49a5-b20e-326af1e84a7b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.878 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "d52579aebd0b024759c7e8234ab3d7cdd411a8c3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.879 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "d52579aebd0b024759c7e8234ab3d7cdd411a8c3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.885 248514 DEBUG nova.policy [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '81fb01d9d08845c3b626079ab726db7a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6c21c2eb2d0c4465ae562381f358fbd8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:56:17 np0005558241 nova_compute[248510]: 2025-12-13 08:56:17.889 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:56:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2848: 321 pgs: 321 active+clean; 326 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.4 MiB/s wr, 159 op/s
Dec 13 03:56:18 np0005558241 nova_compute[248510]: 2025-12-13 08:56:18.286 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:18 np0005558241 nova_compute[248510]: 2025-12-13 08:56:18.295 248514 DEBUG nova.virt.libvirt.imagebackend [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Image locations are: [{'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/bc45ce83-2d30-4107-90ce-9a9307d84fab/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/bc45ce83-2d30-4107-90ce-9a9307d84fab/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec 13 03:56:18 np0005558241 nova_compute[248510]: 2025-12-13 08:56:18.359 248514 DEBUG nova.virt.libvirt.imagebackend [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Selected location: {'url': 'rbd://18ee9de6-e00b-571b-ab9b-b7aab06174df/images/bc45ce83-2d30-4107-90ce-9a9307d84fab/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Dec 13 03:56:18 np0005558241 nova_compute[248510]: 2025-12-13 08:56:18.360 248514 DEBUG nova.storage.rbd_utils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] cloning images/bc45ce83-2d30-4107-90ce-9a9307d84fab@snap to None/4887eb43-1570-49a5-b20e-326af1e84a7b_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:56:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:56:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/726082217' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:56:18 np0005558241 nova_compute[248510]: 2025-12-13 08:56:18.523 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:56:18 np0005558241 nova_compute[248510]: 2025-12-13 08:56:18.529 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:56:18 np0005558241 nova_compute[248510]: 2025-12-13 08:56:18.560 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:56:18 np0005558241 nova_compute[248510]: 2025-12-13 08:56:18.594 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:56:18 np0005558241 nova_compute[248510]: 2025-12-13 08:56:18.594 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2849: 321 pgs: 321 active+clean; 331 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 673 KiB/s wr, 57 op/s
Dec 13 03:56:20 np0005558241 nova_compute[248510]: 2025-12-13 08:56:20.441 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "d52579aebd0b024759c7e8234ab3d7cdd411a8c3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:20 np0005558241 nova_compute[248510]: 2025-12-13 08:56:20.493 248514 DEBUG nova.network.neutron [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Successfully created port: 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:56:20 np0005558241 nova_compute[248510]: 2025-12-13 08:56:20.594 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:56:20 np0005558241 nova_compute[248510]: 2025-12-13 08:56:20.594 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:56:20 np0005558241 nova_compute[248510]: 2025-12-13 08:56:20.595 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:56:20 np0005558241 nova_compute[248510]: 2025-12-13 08:56:20.601 248514 DEBUG nova.objects.instance [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lazy-loading 'migration_context' on Instance uuid 4887eb43-1570-49a5-b20e-326af1e84a7b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:56:20 np0005558241 nova_compute[248510]: 2025-12-13 08:56:20.618 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:56:20 np0005558241 nova_compute[248510]: 2025-12-13 08:56:20.618 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Ensure instance console log exists: /var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:56:20 np0005558241 nova_compute[248510]: 2025-12-13 08:56:20.618 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:20 np0005558241 nova_compute[248510]: 2025-12-13 08:56:20.619 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:20 np0005558241 nova_compute[248510]: 2025-12-13 08:56:20.619 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019915082332499054 of space, bias 1.0, pg target 0.5974524699749716 quantized to 32 (current 32)
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014274388759359915 of space, bias 1.0, pg target 0.42823166278079744 quantized to 32 (current 32)
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.729684235412037e-07 of space, bias 4.0, pg target 0.0006875621082494445 quantized to 16 (current 32)
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:56:21 np0005558241 nova_compute[248510]: 2025-12-13 08:56:21.366 248514 DEBUG nova.network.neutron [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Successfully updated port: 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:56:21 np0005558241 nova_compute[248510]: 2025-12-13 08:56:21.398 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:56:21 np0005558241 nova_compute[248510]: 2025-12-13 08:56:21.399 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquired lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:56:21 np0005558241 nova_compute[248510]: 2025-12-13 08:56:21.399 248514 DEBUG nova.network.neutron [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:56:21 np0005558241 nova_compute[248510]: 2025-12-13 08:56:21.526 248514 DEBUG nova.compute.manager [req-e4204f93-aca9-4371-b890-c2df61a3c0dc req-d436f5c6-9c19-4c12-80b7-c757d9cd62d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received event network-changed-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:21 np0005558241 nova_compute[248510]: 2025-12-13 08:56:21.527 248514 DEBUG nova.compute.manager [req-e4204f93-aca9-4371-b890-c2df61a3c0dc req-d436f5c6-9c19-4c12-80b7-c757d9cd62d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Refreshing instance network info cache due to event network-changed-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:56:21 np0005558241 nova_compute[248510]: 2025-12-13 08:56:21.529 248514 DEBUG oslo_concurrency.lockutils [req-e4204f93-aca9-4371-b890-c2df61a3c0dc req-d436f5c6-9c19-4c12-80b7-c757d9cd62d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:56:21 np0005558241 nova_compute[248510]: 2025-12-13 08:56:21.631 248514 DEBUG nova.network.neutron [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:56:21 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:21Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dd:ef:3d 10.100.0.10
Dec 13 03:56:21 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:21Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dd:ef:3d 10.100.0.10
Dec 13 03:56:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2850: 321 pgs: 321 active+clean; 331 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 673 KiB/s wr, 57 op/s
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.633 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.927 248514 DEBUG nova.network.neutron [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Updating instance_info_cache with network_info: [{"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.971 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Releasing lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.971 248514 DEBUG nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Instance network_info: |[{"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.971 248514 DEBUG oslo_concurrency.lockutils [req-e4204f93-aca9-4371-b890-c2df61a3c0dc req-d436f5c6-9c19-4c12-80b7-c757d9cd62d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.972 248514 DEBUG nova.network.neutron [req-e4204f93-aca9-4371-b890-c2df61a3c0dc req-d436f5c6-9c19-4c12-80b7-c757d9cd62d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Refreshing network info cache for port 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.975 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Start _get_guest_xml network_info=[{"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:56:01Z,direct_url=<?>,disk_format='raw',id=bc45ce83-2d30-4107-90ce-9a9307d84fab,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-485800617',owner='6c21c2eb2d0c4465ae562381f358fbd8',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:56:11Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': 'bc45ce83-2d30-4107-90ce-9a9307d84fab'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.980 248514 WARNING nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.987 248514 DEBUG nova.virt.libvirt.host [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.988 248514 DEBUG nova.virt.libvirt.host [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.992 248514 DEBUG nova.virt.libvirt.host [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.993 248514 DEBUG nova.virt.libvirt.host [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.994 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.994 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T08:56:01Z,direct_url=<?>,disk_format='raw',id=bc45ce83-2d30-4107-90ce-9a9307d84fab,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-485800617',owner='6c21c2eb2d0c4465ae562381f358fbd8',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T08:56:11Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.995 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.996 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.997 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.997 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.998 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:56:22 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.999 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:56:23 np0005558241 nova_compute[248510]: 2025-12-13 08:56:22.999 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:56:23 np0005558241 nova_compute[248510]: 2025-12-13 08:56:23.000 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:56:23 np0005558241 nova_compute[248510]: 2025-12-13 08:56:23.000 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:56:23 np0005558241 nova_compute[248510]: 2025-12-13 08:56:23.001 248514 DEBUG nova.virt.hardware [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:56:23 np0005558241 nova_compute[248510]: 2025-12-13 08:56:23.009 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:56:23 np0005558241 nova_compute[248510]: 2025-12-13 08:56:23.081 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:23 np0005558241 nova_compute[248510]: 2025-12-13 08:56:23.177 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616168.0633242, 5c900cfb-46bb-436b-a574-7985be3447da => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:56:23 np0005558241 nova_compute[248510]: 2025-12-13 08:56:23.178 248514 INFO nova.compute.manager [-] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:56:23 np0005558241 nova_compute[248510]: 2025-12-13 08:56:23.251 248514 DEBUG nova.compute.manager [None req-258c2fee-b5ef-478e-9bd2-74489e4771bd - - - - - -] [instance: 5c900cfb-46bb-436b-a574-7985be3447da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:56:23 np0005558241 nova_compute[248510]: 2025-12-13 08:56:23.288 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:56:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/905031900' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:56:23 np0005558241 nova_compute[248510]: 2025-12-13 08:56:23.635 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:56:23 np0005558241 nova_compute[248510]: 2025-12-13 08:56:23.670 248514 DEBUG nova.storage.rbd_utils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 4887eb43-1570-49a5-b20e-326af1e84a7b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:56:23 np0005558241 nova_compute[248510]: 2025-12-13 08:56:23.676 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:56:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:56:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2148729958' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:56:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2851: 321 pgs: 321 active+clean; 358 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 377 KiB/s rd, 2.6 MiB/s wr, 111 op/s
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.279 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.281 248514 DEBUG nova.virt.libvirt.vif [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:56:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1968716864',display_name='tempest-TestSnapshotPattern-server-1968716864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1968716864',id=120,image_ref='bc45ce83-2d30-4107-90ce-9a9307d84fab',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMAc1KgPQ7YZO/wn247Z4w7R3iPp09OVvor/iF1+jUHxvgVaItQTtOvsdjy6HTDjQXAsMtHqR3YUaichzeZD01aIcPKJkcVch2R8490ZPNprDBfXeJ2j92c1nlW1ddLVg==',key_name='tempest-TestSnapshotPattern-2059242288',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c21c2eb2d0c4465ae562381f358fbd8',ramdisk_id='',reservation_id='r-ejrogks3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='75f348ef-4044-47a1-ba1b-f1b66513450c',image_min_disk='1',image_min_ram='0',image_owner_id='6c21c2eb2d0c4465ae562381f358fbd8',image_owner_project_name='tempest-TestSnapshotPattern-1494512648',image_owner_user_name='tempest-TestSnapshotPattern-1494512648-project-member',image_user_id='81fb01d9d08845c3b626079ab726db7a',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1494512648',owner_user_name='tempest-TestSnapshotPattern-1494512648-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:56:17Z,user_data=None,user_id='81fb01d9d08845c3b626079ab726db7a',uuid=4887eb43-1570-49a5-b20e-326af1e84a7b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.282 248514 DEBUG nova.network.os_vif_util [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converting VIF {"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.283 248514 DEBUG nova.network.os_vif_util [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:e5:67,bridge_name='br-int',has_traffic_filtering=True,id=0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b3a1c67-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.285 248514 DEBUG nova.objects.instance [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4887eb43-1570-49a5-b20e-326af1e84a7b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.330 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:56:24 np0005558241 nova_compute[248510]:  <uuid>4887eb43-1570-49a5-b20e-326af1e84a7b</uuid>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:  <name>instance-00000078</name>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestSnapshotPattern-server-1968716864</nova:name>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:56:22</nova:creationTime>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:56:24 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:        <nova:user uuid="81fb01d9d08845c3b626079ab726db7a">tempest-TestSnapshotPattern-1494512648-project-member</nova:user>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:        <nova:project uuid="6c21c2eb2d0c4465ae562381f358fbd8">tempest-TestSnapshotPattern-1494512648</nova:project>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="bc45ce83-2d30-4107-90ce-9a9307d84fab"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:        <nova:port uuid="0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a">
Dec 13 03:56:24 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <entry name="serial">4887eb43-1570-49a5-b20e-326af1e84a7b</entry>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <entry name="uuid">4887eb43-1570-49a5-b20e-326af1e84a7b</entry>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/4887eb43-1570-49a5-b20e-326af1e84a7b_disk">
Dec 13 03:56:24 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:56:24 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/4887eb43-1570-49a5-b20e-326af1e84a7b_disk.config">
Dec 13 03:56:24 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:56:24 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:da:e5:67"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <target dev="tap0b3a1c67-12"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b/console.log" append="off"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <input type="keyboard" bus="usb"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:56:24 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:56:24 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:56:24 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:56:24 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.330 248514 DEBUG nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Preparing to wait for external event network-vif-plugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.330 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.331 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.331 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.331 248514 DEBUG nova.virt.libvirt.vif [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:56:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1968716864',display_name='tempest-TestSnapshotPattern-server-1968716864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1968716864',id=120,image_ref='bc45ce83-2d30-4107-90ce-9a9307d84fab',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMAc1KgPQ7YZO/wn247Z4w7R3iPp09OVvor/iF1+jUHxvgVaItQTtOvsdjy6HTDjQXAsMtHqR3YUaichzeZD01aIcPKJkcVch2R8490ZPNprDBfXeJ2j92c1nlW1ddLVg==',key_name='tempest-TestSnapshotPattern-2059242288',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c21c2eb2d0c4465ae562381f358fbd8',ramdisk_id='',reservation_id='r-ejrogks3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='75f348ef-4044-47a1-ba1b-f1b66513450c',image_min_disk='1',image_min_ram='0',image_owner_id='6c21c2eb2d0c4465ae562381f358fbd8',image_owner_project_name='tempest-TestSnapshotPattern-1494512648',image_owner_user_name='tempest-TestSnapshotPattern-1494512648-project-member',image_user_id='81fb01d9d08845c3b626079ab726db7a',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1494512648',owner_user_name='tempest-TestSnapshotPattern-1494512648-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:56:17Z,user_data=None,user_id='81fb01d9d08845c3b626079ab726db7a',uu
id=4887eb43-1570-49a5-b20e-326af1e84a7b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.332 248514 DEBUG nova.network.os_vif_util [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converting VIF {"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.332 248514 DEBUG nova.network.os_vif_util [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:e5:67,bridge_name='br-int',has_traffic_filtering=True,id=0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b3a1c67-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.332 248514 DEBUG os_vif [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:e5:67,bridge_name='br-int',has_traffic_filtering=True,id=0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b3a1c67-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.333 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.333 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.334 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.335 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.336 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b3a1c67-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.336 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b3a1c67-12, col_values=(('external_ids', {'iface-id': '0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:e5:67', 'vm-uuid': '4887eb43-1570-49a5-b20e-326af1e84a7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:24 np0005558241 NetworkManager[50376]: <info>  [1765616184.3384] manager: (tap0b3a1c67-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/487)
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.339 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.341 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.343 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.343 248514 INFO os_vif [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:e5:67,bridge_name='br-int',has_traffic_filtering=True,id=0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b3a1c67-12')#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.692 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.693 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.694 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] No VIF found with MAC fa:16:3e:da:e5:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.695 248514 INFO nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Using config drive#033[00m
Dec 13 03:56:24 np0005558241 nova_compute[248510]: 2025-12-13 08:56:24.732 248514 DEBUG nova.storage.rbd_utils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 4887eb43-1570-49a5-b20e-326af1e84a7b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:56:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:56:25 np0005558241 nova_compute[248510]: 2025-12-13 08:56:25.945 248514 INFO nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Creating config drive at /var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b/disk.config#033[00m
Dec 13 03:56:25 np0005558241 nova_compute[248510]: 2025-12-13 08:56:25.953 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphoyps3ma execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:56:26 np0005558241 nova_compute[248510]: 2025-12-13 08:56:26.109 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphoyps3ma" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:56:26 np0005558241 nova_compute[248510]: 2025-12-13 08:56:26.133 248514 DEBUG nova.storage.rbd_utils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] rbd image 4887eb43-1570-49a5-b20e-326af1e84a7b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:56:26 np0005558241 nova_compute[248510]: 2025-12-13 08:56:26.138 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b/disk.config 4887eb43-1570-49a5-b20e-326af1e84a7b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:56:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2852: 321 pgs: 321 active+clean; 358 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.5 MiB/s wr, 108 op/s
Dec 13 03:56:26 np0005558241 nova_compute[248510]: 2025-12-13 08:56:26.296 248514 DEBUG oslo_concurrency.processutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b/disk.config 4887eb43-1570-49a5-b20e-326af1e84a7b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:56:26 np0005558241 nova_compute[248510]: 2025-12-13 08:56:26.297 248514 INFO nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Deleting local config drive /var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b/disk.config because it was imported into RBD.#033[00m
Dec 13 03:56:26 np0005558241 nova_compute[248510]: 2025-12-13 08:56:26.331 248514 DEBUG nova.network.neutron [req-e4204f93-aca9-4371-b890-c2df61a3c0dc req-d436f5c6-9c19-4c12-80b7-c757d9cd62d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Updated VIF entry in instance network info cache for port 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:56:26 np0005558241 nova_compute[248510]: 2025-12-13 08:56:26.332 248514 DEBUG nova.network.neutron [req-e4204f93-aca9-4371-b890-c2df61a3c0dc req-d436f5c6-9c19-4c12-80b7-c757d9cd62d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Updating instance_info_cache with network_info: [{"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:56:26 np0005558241 kernel: tap0b3a1c67-12: entered promiscuous mode
Dec 13 03:56:26 np0005558241 NetworkManager[50376]: <info>  [1765616186.3537] manager: (tap0b3a1c67-12): new Tun device (/org/freedesktop/NetworkManager/Devices/488)
Dec 13 03:56:26 np0005558241 nova_compute[248510]: 2025-12-13 08:56:26.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:26Z|01167|binding|INFO|Claiming lport 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a for this chassis.
Dec 13 03:56:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:26Z|01168|binding|INFO|0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a: Claiming fa:16:3e:da:e5:67 10.100.0.9
Dec 13 03:56:26 np0005558241 nova_compute[248510]: 2025-12-13 08:56:26.361 248514 DEBUG oslo_concurrency.lockutils [req-e4204f93-aca9-4371-b890-c2df61a3c0dc req-d436f5c6-9c19-4c12-80b7-c757d9cd62d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:56:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:26Z|01169|binding|INFO|Setting lport 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a ovn-installed in OVS
Dec 13 03:56:26 np0005558241 nova_compute[248510]: 2025-12-13 08:56:26.384 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:26 np0005558241 systemd-udevd[367388]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:56:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.392 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:e5:67 10.100.0.9'], port_security=['fa:16:3e:da:e5:67 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4887eb43-1570-49a5-b20e-326af1e84a7b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c21c2eb2d0c4465ae562381f358fbd8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e1e2d18-e674-4ee6-b553-a8676e40259f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c30b2fc-21fd-4778-91da-98384d5e05df, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:56:26 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:26Z|01170|binding|INFO|Setting lport 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a up in Southbound
Dec 13 03:56:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.393 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a in datapath d2ff4cff-54cc-40c6-a486-7e7532c2462b bound to our chassis#033[00m
Dec 13 03:56:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.396 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2ff4cff-54cc-40c6-a486-7e7532c2462b#033[00m
Dec 13 03:56:26 np0005558241 systemd-machined[210538]: New machine qemu-147-instance-00000078.
Dec 13 03:56:26 np0005558241 NetworkManager[50376]: <info>  [1765616186.4053] device (tap0b3a1c67-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:56:26 np0005558241 NetworkManager[50376]: <info>  [1765616186.4062] device (tap0b3a1c67-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:56:26 np0005558241 systemd[1]: Started Virtual Machine qemu-147-instance-00000078.
Dec 13 03:56:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.416 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a5bcb21a-15b1-4bfe-8b1b-a532e69a7e2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.455 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1b037a-12aa-45f9-812c-a4e66232b69a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.460 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fdfd3664-8c0e-4c16-90d1-a5eaf25b8172]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.493 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[119b898b-f2c0-4e27-8c26-3498152a773d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.513 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c7304c06-0430-4a0c-8a74-54dfde1140ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2ff4cff-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:85:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 871447, 'reachable_time': 40005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367403, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.528 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2798dcfb-7a9b-4447-93b6-8cc933f74c2c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd2ff4cff-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 871459, 'tstamp': 871459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367404, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd2ff4cff-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 871463, 'tstamp': 871463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367404, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.529 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2ff4cff-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:26 np0005558241 nova_compute[248510]: 2025-12-13 08:56:26.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:26 np0005558241 nova_compute[248510]: 2025-12-13 08:56:26.532 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.533 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2ff4cff-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.533 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:56:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.534 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2ff4cff-50, col_values=(('external_ids', {'iface-id': '47f45749-b232-4d0c-bf37-be042ea606c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:26.534 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:56:26 np0005558241 nova_compute[248510]: 2025-12-13 08:56:26.918 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616186.917615, 4887eb43-1570-49a5-b20e-326af1e84a7b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:56:26 np0005558241 nova_compute[248510]: 2025-12-13 08:56:26.919 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] VM Started (Lifecycle Event)#033[00m
Dec 13 03:56:26 np0005558241 nova_compute[248510]: 2025-12-13 08:56:26.957 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:56:26 np0005558241 nova_compute[248510]: 2025-12-13 08:56:26.962 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616186.9188564, 4887eb43-1570-49a5-b20e-326af1e84a7b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:56:26 np0005558241 nova_compute[248510]: 2025-12-13 08:56:26.963 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:56:26 np0005558241 nova_compute[248510]: 2025-12-13 08:56:26.996 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.001 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.028 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.343 248514 DEBUG nova.compute.manager [req-1e35f916-d105-45a9-b3c5-92762e33ccb8 req-599f9d48-2e0f-44f2-9fec-ea726948cc37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received event network-vif-plugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.343 248514 DEBUG oslo_concurrency.lockutils [req-1e35f916-d105-45a9-b3c5-92762e33ccb8 req-599f9d48-2e0f-44f2-9fec-ea726948cc37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.344 248514 DEBUG oslo_concurrency.lockutils [req-1e35f916-d105-45a9-b3c5-92762e33ccb8 req-599f9d48-2e0f-44f2-9fec-ea726948cc37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.344 248514 DEBUG oslo_concurrency.lockutils [req-1e35f916-d105-45a9-b3c5-92762e33ccb8 req-599f9d48-2e0f-44f2-9fec-ea726948cc37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.344 248514 DEBUG nova.compute.manager [req-1e35f916-d105-45a9-b3c5-92762e33ccb8 req-599f9d48-2e0f-44f2-9fec-ea726948cc37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Processing event network-vif-plugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.345 248514 DEBUG nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.348 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616187.3483145, 4887eb43-1570-49a5-b20e-326af1e84a7b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.349 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.351 248514 DEBUG nova.virt.libvirt.driver [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.354 248514 INFO nova.virt.libvirt.driver [-] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Instance spawned successfully.#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.355 248514 INFO nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Took 9.55 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.355 248514 DEBUG nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.396 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.399 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.435 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.453 248514 INFO nova.compute.manager [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Took 11.17 seconds to build instance.#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.475 248514 DEBUG oslo_concurrency.lockutils [None req-42ca4a52-c8d0-4ded-a79b-f101c8b472c1 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:27 np0005558241 nova_compute[248510]: 2025-12-13 08:56:27.636 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.215 248514 DEBUG oslo_concurrency.lockutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "822fa9f6-0a5d-490e-89d7-446df19a068b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.215 248514 DEBUG oslo_concurrency.lockutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.216 248514 DEBUG oslo_concurrency.lockutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.216 248514 DEBUG oslo_concurrency.lockutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.216 248514 DEBUG oslo_concurrency.lockutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.218 248514 INFO nova.compute.manager [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Terminating instance#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.218 248514 DEBUG nova.compute.manager [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:56:28 np0005558241 kernel: tapf069307e-1a (unregistering): left promiscuous mode
Dec 13 03:56:28 np0005558241 NetworkManager[50376]: <info>  [1765616188.2547] device (tapf069307e-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.264 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:28Z|01171|binding|INFO|Releasing lport f069307e-1a47-4342-b244-d88f04ff512b from this chassis (sb_readonly=0)
Dec 13 03:56:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:28Z|01172|binding|INFO|Setting lport f069307e-1a47-4342-b244-d88f04ff512b down in Southbound
Dec 13 03:56:28 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:28Z|01173|binding|INFO|Removing iface tapf069307e-1a ovn-installed in OVS
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.266 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2853: 321 pgs: 321 active+clean; 358 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 439 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Dec 13 03:56:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.271 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:ef:3d 10.100.0.10'], port_security=['fa:16:3e:dd:ef:3d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '822fa9f6-0a5d-490e-89d7-446df19a068b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'bb814d40-cd63-4d1c-97e3-f821733a618f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ef6794c-20c0-44ef-9932-95bf5c168e3e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=f069307e-1a47-4342-b244-d88f04ff512b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:56:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.272 158419 INFO neutron.agent.ovn.metadata.agent [-] Port f069307e-1a47-4342-b244-d88f04ff512b in datapath d62e4a11-9334-4dbd-978f-dcabebeb9f79 unbound from our chassis#033[00m
Dec 13 03:56:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.274 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d62e4a11-9334-4dbd-978f-dcabebeb9f79#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.282 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.291 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1e44e0da-c55f-4a1d-a616-ea227a1dd144]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.323 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[550809e0-9982-4161-b2c3-282629129388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.327 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[52293860-177f-43b7-9ba1-793e0a9cc80e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:28 np0005558241 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000077.scope: Deactivated successfully.
Dec 13 03:56:28 np0005558241 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000077.scope: Consumed 14.554s CPU time.
Dec 13 03:56:28 np0005558241 systemd-machined[210538]: Machine qemu-146-instance-00000077 terminated.
Dec 13 03:56:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.371 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fab671fc-fb99-4c76-a379-b3d4489c8bdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.394 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe8af72-4f04-40c7-a513-ef72875d2223]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd62e4a11-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:64:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 336], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869981, 'reachable_time': 20363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367456, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.411 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[509d4e6f-0e9c-4700-bcb1-a4787c38fcae]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd62e4a11-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 869993, 'tstamp': 869993}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367457, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd62e4a11-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 869996, 'tstamp': 869996}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367457, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.414 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd62e4a11-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.446 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.455 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.456 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd62e4a11-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.456 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:56:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.457 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd62e4a11-90, col_values=(('external_ids', {'iface-id': '3d979ee9-5b95-4edf-8ffc-3de7e778c5ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:28.457 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.464 248514 INFO nova.virt.libvirt.driver [-] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Instance destroyed successfully.#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.465 248514 DEBUG nova.objects.instance [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'resources' on Instance uuid 822fa9f6-0a5d-490e-89d7-446df19a068b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.480 248514 DEBUG nova.virt.libvirt.vif [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-695339999',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-gen-1-695339999',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ge',id=119,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHu0mZ4KdPQNIQDmStfb8oGr9IxxGZIxLFalN/4uGlZNxfEdmbl3D4ueCINh2ZhW4F33mK6Ev46I8YbLOH/wB4QDYG50l143kirQCN41tCbmF+vCIghQWMs2Kzj6YH+pg==',key_name='tempest-TestSecurityGroupsBasicOps-398328978',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:56:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-hzbxz9gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:56:05Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=822fa9f6-0a5d-490e-89d7-446df19a068b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.481 248514 DEBUG nova.network.os_vif_util [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "f069307e-1a47-4342-b244-d88f04ff512b", "address": "fa:16:3e:dd:ef:3d", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf069307e-1a", "ovs_interfaceid": "f069307e-1a47-4342-b244-d88f04ff512b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.482 248514 DEBUG nova.network.os_vif_util [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:ef:3d,bridge_name='br-int',has_traffic_filtering=True,id=f069307e-1a47-4342-b244-d88f04ff512b,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069307e-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.483 248514 DEBUG os_vif [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:ef:3d,bridge_name='br-int',has_traffic_filtering=True,id=f069307e-1a47-4342-b244-d88f04ff512b,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069307e-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.485 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.485 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf069307e-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.488 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.490 248514 INFO os_vif [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:ef:3d,bridge_name='br-int',has_traffic_filtering=True,id=f069307e-1a47-4342-b244-d88f04ff512b,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf069307e-1a')#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.726 248514 INFO nova.virt.libvirt.driver [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Deleting instance files /var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b_del#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.728 248514 INFO nova.virt.libvirt.driver [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Deletion of /var/lib/nova/instances/822fa9f6-0a5d-490e-89d7-446df19a068b_del complete#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.823 248514 INFO nova.compute.manager [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.823 248514 DEBUG oslo.service.loopingcall [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.824 248514 DEBUG nova.compute.manager [-] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:56:28 np0005558241 nova_compute[248510]: 2025-12-13 08:56:28.824 248514 DEBUG nova.network.neutron [-] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:56:29 np0005558241 nova_compute[248510]: 2025-12-13 08:56:29.352 248514 DEBUG nova.compute.manager [req-5fd19515-559d-492d-9764-73f71b971c4e req-cbc7afc5-7d44-431c-bdf1-d1db2116d4c7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received event network-vif-unplugged-f069307e-1a47-4342-b244-d88f04ff512b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:29 np0005558241 nova_compute[248510]: 2025-12-13 08:56:29.352 248514 DEBUG oslo_concurrency.lockutils [req-5fd19515-559d-492d-9764-73f71b971c4e req-cbc7afc5-7d44-431c-bdf1-d1db2116d4c7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:29 np0005558241 nova_compute[248510]: 2025-12-13 08:56:29.353 248514 DEBUG oslo_concurrency.lockutils [req-5fd19515-559d-492d-9764-73f71b971c4e req-cbc7afc5-7d44-431c-bdf1-d1db2116d4c7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:29 np0005558241 nova_compute[248510]: 2025-12-13 08:56:29.353 248514 DEBUG oslo_concurrency.lockutils [req-5fd19515-559d-492d-9764-73f71b971c4e req-cbc7afc5-7d44-431c-bdf1-d1db2116d4c7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:29 np0005558241 nova_compute[248510]: 2025-12-13 08:56:29.353 248514 DEBUG nova.compute.manager [req-5fd19515-559d-492d-9764-73f71b971c4e req-cbc7afc5-7d44-431c-bdf1-d1db2116d4c7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] No waiting events found dispatching network-vif-unplugged-f069307e-1a47-4342-b244-d88f04ff512b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:56:29 np0005558241 nova_compute[248510]: 2025-12-13 08:56:29.354 248514 DEBUG nova.compute.manager [req-5fd19515-559d-492d-9764-73f71b971c4e req-cbc7afc5-7d44-431c-bdf1-d1db2116d4c7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received event network-vif-unplugged-f069307e-1a47-4342-b244-d88f04ff512b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:56:29 np0005558241 nova_compute[248510]: 2025-12-13 08:56:29.958 248514 DEBUG nova.compute.manager [req-00d49d14-40ea-4fcb-8fb1-a4b4ae45a63e req-1690cf02-5e7e-4bfb-80a2-e1f0d320d741 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received event network-vif-plugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:29 np0005558241 nova_compute[248510]: 2025-12-13 08:56:29.959 248514 DEBUG oslo_concurrency.lockutils [req-00d49d14-40ea-4fcb-8fb1-a4b4ae45a63e req-1690cf02-5e7e-4bfb-80a2-e1f0d320d741 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:29 np0005558241 nova_compute[248510]: 2025-12-13 08:56:29.959 248514 DEBUG oslo_concurrency.lockutils [req-00d49d14-40ea-4fcb-8fb1-a4b4ae45a63e req-1690cf02-5e7e-4bfb-80a2-e1f0d320d741 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:29 np0005558241 nova_compute[248510]: 2025-12-13 08:56:29.959 248514 DEBUG oslo_concurrency.lockutils [req-00d49d14-40ea-4fcb-8fb1-a4b4ae45a63e req-1690cf02-5e7e-4bfb-80a2-e1f0d320d741 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:29 np0005558241 nova_compute[248510]: 2025-12-13 08:56:29.959 248514 DEBUG nova.compute.manager [req-00d49d14-40ea-4fcb-8fb1-a4b4ae45a63e req-1690cf02-5e7e-4bfb-80a2-e1f0d320d741 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] No waiting events found dispatching network-vif-plugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:56:29 np0005558241 nova_compute[248510]: 2025-12-13 08:56:29.960 248514 WARNING nova.compute.manager [req-00d49d14-40ea-4fcb-8fb1-a4b4ae45a63e req-1690cf02-5e7e-4bfb-80a2-e1f0d320d741 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received unexpected event network-vif-plugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a for instance with vm_state active and task_state None.#033[00m
Dec 13 03:56:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2854: 321 pgs: 321 active+clean; 313 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Dec 13 03:56:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:56:30 np0005558241 nova_compute[248510]: 2025-12-13 08:56:30.934 248514 DEBUG nova.network.neutron [-] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:56:30 np0005558241 nova_compute[248510]: 2025-12-13 08:56:30.980 248514 INFO nova.compute.manager [-] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Took 2.16 seconds to deallocate network for instance.#033[00m
Dec 13 03:56:31 np0005558241 nova_compute[248510]: 2025-12-13 08:56:31.033 248514 DEBUG oslo_concurrency.lockutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:31 np0005558241 nova_compute[248510]: 2025-12-13 08:56:31.034 248514 DEBUG oslo_concurrency.lockutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:31 np0005558241 nova_compute[248510]: 2025-12-13 08:56:31.189 248514 DEBUG oslo_concurrency.processutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:56:31 np0005558241 nova_compute[248510]: 2025-12-13 08:56:31.488 248514 DEBUG nova.compute.manager [req-a15b0d49-6551-42fc-8944-286091e8485e req-d5802c8b-8947-4d6e-81c5-0054cc74be54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received event network-vif-plugged-f069307e-1a47-4342-b244-d88f04ff512b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:31 np0005558241 nova_compute[248510]: 2025-12-13 08:56:31.489 248514 DEBUG oslo_concurrency.lockutils [req-a15b0d49-6551-42fc-8944-286091e8485e req-d5802c8b-8947-4d6e-81c5-0054cc74be54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:31 np0005558241 nova_compute[248510]: 2025-12-13 08:56:31.489 248514 DEBUG oslo_concurrency.lockutils [req-a15b0d49-6551-42fc-8944-286091e8485e req-d5802c8b-8947-4d6e-81c5-0054cc74be54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:31 np0005558241 nova_compute[248510]: 2025-12-13 08:56:31.490 248514 DEBUG oslo_concurrency.lockutils [req-a15b0d49-6551-42fc-8944-286091e8485e req-d5802c8b-8947-4d6e-81c5-0054cc74be54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:31 np0005558241 nova_compute[248510]: 2025-12-13 08:56:31.490 248514 DEBUG nova.compute.manager [req-a15b0d49-6551-42fc-8944-286091e8485e req-d5802c8b-8947-4d6e-81c5-0054cc74be54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] No waiting events found dispatching network-vif-plugged-f069307e-1a47-4342-b244-d88f04ff512b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:56:31 np0005558241 nova_compute[248510]: 2025-12-13 08:56:31.490 248514 WARNING nova.compute.manager [req-a15b0d49-6551-42fc-8944-286091e8485e req-d5802c8b-8947-4d6e-81c5-0054cc74be54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received unexpected event network-vif-plugged-f069307e-1a47-4342-b244-d88f04ff512b for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:56:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:56:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1256735896' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:56:31 np0005558241 nova_compute[248510]: 2025-12-13 08:56:31.774 248514 DEBUG oslo_concurrency.processutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:56:31 np0005558241 nova_compute[248510]: 2025-12-13 08:56:31.780 248514 DEBUG nova.compute.provider_tree [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:56:31 np0005558241 nova_compute[248510]: 2025-12-13 08:56:31.801 248514 DEBUG nova.scheduler.client.report [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:56:31 np0005558241 nova_compute[248510]: 2025-12-13 08:56:31.827 248514 DEBUG oslo_concurrency.lockutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:31 np0005558241 nova_compute[248510]: 2025-12-13 08:56:31.856 248514 INFO nova.scheduler.client.report [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Deleted allocations for instance 822fa9f6-0a5d-490e-89d7-446df19a068b#033[00m
Dec 13 03:56:31 np0005558241 nova_compute[248510]: 2025-12-13 08:56:31.943 248514 DEBUG oslo_concurrency.lockutils [None req-844d0251-b18c-4d6a-b21b-8aa3dea8019d c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "822fa9f6-0a5d-490e-89d7-446df19a068b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:31 np0005558241 podman[367510]: 2025-12-13 08:56:31.986702683 +0000 UTC m=+0.065175865 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 13 03:56:31 np0005558241 podman[367509]: 2025-12-13 08:56:31.993262198 +0000 UTC m=+0.078929810 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.vendor=CentOS)
Dec 13 03:56:32 np0005558241 podman[367508]: 2025-12-13 08:56:32.02488093 +0000 UTC m=+0.110753047 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 03:56:32 np0005558241 nova_compute[248510]: 2025-12-13 08:56:32.072 248514 DEBUG nova.compute.manager [req-2f39ce16-f544-4cb0-9798-0d09273ed305 req-0cefeaaa-030b-4713-a939-1546cac6b1a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Received event network-vif-deleted-f069307e-1a47-4342-b244-d88f04ff512b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2855: 321 pgs: 321 active+clean; 313 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 1001 KiB/s rd, 1.6 MiB/s wr, 124 op/s
Dec 13 03:56:32 np0005558241 nova_compute[248510]: 2025-12-13 08:56:32.638 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:32 np0005558241 nova_compute[248510]: 2025-12-13 08:56:32.886 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:33 np0005558241 nova_compute[248510]: 2025-12-13 08:56:33.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2856: 321 pgs: 321 active+clean; 279 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 187 op/s
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.365 248514 DEBUG oslo_concurrency.lockutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "2dacd79d-d668-430f-89e3-bd607a8298ba" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.365 248514 DEBUG oslo_concurrency.lockutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.366 248514 DEBUG oslo_concurrency.lockutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.366 248514 DEBUG oslo_concurrency.lockutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.367 248514 DEBUG oslo_concurrency.lockutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.368 248514 INFO nova.compute.manager [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Terminating instance#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.369 248514 DEBUG nova.compute.manager [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:56:34 np0005558241 kernel: tap59834f67-f8 (unregistering): left promiscuous mode
Dec 13 03:56:34 np0005558241 NetworkManager[50376]: <info>  [1765616194.4103] device (tap59834f67-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:56:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:34Z|01174|binding|INFO|Releasing lport 59834f67-f81d-41bf-9bec-95eea737178e from this chassis (sb_readonly=0)
Dec 13 03:56:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:34Z|01175|binding|INFO|Setting lport 59834f67-f81d-41bf-9bec-95eea737178e down in Southbound
Dec 13 03:56:34 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:34Z|01176|binding|INFO|Removing iface tap59834f67-f8 ovn-installed in OVS
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.417 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.419 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.424 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:dc:ee 10.100.0.7'], port_security=['fa:16:3e:fc:dc:ee 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2dacd79d-d668-430f-89e3-bd607a8298ba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1064539fdf2b494ea705dd5e74afcd3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '49b9fdf8-f095-49d6-8a3e-6b41045e0020 daf1c258-d9fc-43cc-a960-fdfffc57ef37', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ef6794c-20c0-44ef-9932-95bf5c168e3e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=59834f67-f81d-41bf-9bec-95eea737178e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:56:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.425 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 59834f67-f81d-41bf-9bec-95eea737178e in datapath d62e4a11-9334-4dbd-978f-dcabebeb9f79 unbound from our chassis#033[00m
Dec 13 03:56:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.427 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d62e4a11-9334-4dbd-978f-dcabebeb9f79, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:56:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.429 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb082c0-d088-47ae-8cb2-1daf66647c8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.430 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79 namespace which is not needed anymore#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.435 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.443 248514 DEBUG nova.compute.manager [req-42a54bea-ca33-4dac-9637-8ecc88a17d96 req-7ffedc30-f66c-45a2-a0e0-9c4df8dd4526 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received event network-changed-59834f67-f81d-41bf-9bec-95eea737178e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.444 248514 DEBUG nova.compute.manager [req-42a54bea-ca33-4dac-9637-8ecc88a17d96 req-7ffedc30-f66c-45a2-a0e0-9c4df8dd4526 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Refreshing instance network info cache due to event network-changed-59834f67-f81d-41bf-9bec-95eea737178e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.444 248514 DEBUG oslo_concurrency.lockutils [req-42a54bea-ca33-4dac-9637-8ecc88a17d96 req-7ffedc30-f66c-45a2-a0e0-9c4df8dd4526 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.444 248514 DEBUG oslo_concurrency.lockutils [req-42a54bea-ca33-4dac-9637-8ecc88a17d96 req-7ffedc30-f66c-45a2-a0e0-9c4df8dd4526 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.444 248514 DEBUG nova.network.neutron [req-42a54bea-ca33-4dac-9637-8ecc88a17d96 req-7ffedc30-f66c-45a2-a0e0-9c4df8dd4526 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Refreshing network info cache for port 59834f67-f81d-41bf-9bec-95eea737178e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:56:34 np0005558241 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000074.scope: Deactivated successfully.
Dec 13 03:56:34 np0005558241 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000074.scope: Consumed 15.634s CPU time.
Dec 13 03:56:34 np0005558241 systemd-machined[210538]: Machine qemu-143-instance-00000074 terminated.
Dec 13 03:56:34 np0005558241 neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79[364708]: [NOTICE]   (364730) : haproxy version is 2.8.14-c23fe91
Dec 13 03:56:34 np0005558241 neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79[364708]: [NOTICE]   (364730) : path to executable is /usr/sbin/haproxy
Dec 13 03:56:34 np0005558241 neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79[364708]: [WARNING]  (364730) : Exiting Master process...
Dec 13 03:56:34 np0005558241 neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79[364708]: [WARNING]  (364730) : Exiting Master process...
Dec 13 03:56:34 np0005558241 neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79[364708]: [ALERT]    (364730) : Current worker (364747) exited with code 143 (Terminated)
Dec 13 03:56:34 np0005558241 neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79[364708]: [WARNING]  (364730) : All workers exited. Exiting... (0)
Dec 13 03:56:34 np0005558241 systemd[1]: libpod-88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55.scope: Deactivated successfully.
Dec 13 03:56:34 np0005558241 podman[367595]: 2025-12-13 08:56:34.572423894 +0000 UTC m=+0.044785594 container died 88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:56:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55-userdata-shm.mount: Deactivated successfully.
Dec 13 03:56:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e03dea7b690d3db119f67c61ad462e71c1d2570616b62b014daca58c0b50d9d7-merged.mount: Deactivated successfully.
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.607 248514 INFO nova.virt.libvirt.driver [-] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Instance destroyed successfully.#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.609 248514 DEBUG nova.objects.instance [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lazy-loading 'resources' on Instance uuid 2dacd79d-d668-430f-89e3-bd607a8298ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:56:34 np0005558241 podman[367595]: 2025-12-13 08:56:34.619926674 +0000 UTC m=+0.092288374 container cleanup 88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 03:56:34 np0005558241 systemd[1]: libpod-conmon-88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55.scope: Deactivated successfully.
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.634 248514 DEBUG nova.virt.libvirt.vif [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-288019889',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1856512026-access_point-288019889',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1856512026-ac',id=116,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHu0mZ4KdPQNIQDmStfb8oGr9IxxGZIxLFalN/4uGlZNxfEdmbl3D4ueCINh2ZhW4F33mK6Ev46I8YbLOH/wB4QDYG50l143kirQCN41tCbmF+vCIghQWMs2Kzj6YH+pg==',key_name='tempest-TestSecurityGroupsBasicOps-398328978',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:55:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1064539fdf2b494ea705dd5e74afcd3b',ramdisk_id='',reservation_id='r-9i8zv62t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1856512026',owner_user_name='tempest-TestSecurityGroupsBasicOps-1856512026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:55:24Z,user_data=None,user_id='c377508dda354c0aa762d15d52aa130c',uuid=2dacd79d-d668-430f-89e3-bd607a8298ba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.635 248514 DEBUG nova.network.os_vif_util [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converting VIF {"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.636 248514 DEBUG nova.network.os_vif_util [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:dc:ee,bridge_name='br-int',has_traffic_filtering=True,id=59834f67-f81d-41bf-9bec-95eea737178e,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59834f67-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.636 248514 DEBUG os_vif [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:dc:ee,bridge_name='br-int',has_traffic_filtering=True,id=59834f67-f81d-41bf-9bec-95eea737178e,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59834f67-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.638 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.638 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59834f67-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.643 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.645 248514 INFO os_vif [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:dc:ee,bridge_name='br-int',has_traffic_filtering=True,id=59834f67-f81d-41bf-9bec-95eea737178e,network=Network(d62e4a11-9334-4dbd-978f-dcabebeb9f79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59834f67-f8')#033[00m
Dec 13 03:56:34 np0005558241 podman[367634]: 2025-12-13 08:56:34.700503525 +0000 UTC m=+0.056236461 container remove 88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 03:56:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.706 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[651e509e-7425-4da2-b191-f15b62deb68e]: (4, ('Sat Dec 13 08:56:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79 (88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55)\n88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55\nSat Dec 13 08:56:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79 (88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55)\n88dd9392da4c28b76d0912f70fe634e7ba41324412d036550e0ebc2e0c300b55\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.708 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a0644a04-28f5-45ee-8f16-5362a8cc465b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.709 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd62e4a11-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:34 np0005558241 kernel: tapd62e4a11-90: left promiscuous mode
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.712 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.728 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.730 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[42020c84-b825-47bf-860c-80c4eab66f3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.755 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c2dbe79a-ddd9-4f85-b94f-5cce3415ff1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.757 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[15e30005-a46b-4a76-9b80-eb54e9d9910c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.778 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[02c044ec-1e9d-4ac6-8528-db2a11449a95]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869972, 'reachable_time': 27229, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367667, 'error': None, 'target': 'ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:34 np0005558241 systemd[1]: run-netns-ovnmeta\x2dd62e4a11\x2d9334\x2d4dbd\x2d978f\x2ddcabebeb9f79.mount: Deactivated successfully.
Dec 13 03:56:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.784 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d62e4a11-9334-4dbd-978f-dcabebeb9f79 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:56:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:34.784 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[f4802944-b25e-4a27-b45a-1eaffc259bbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.947 248514 INFO nova.virt.libvirt.driver [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Deleting instance files /var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba_del#033[00m
Dec 13 03:56:34 np0005558241 nova_compute[248510]: 2025-12-13 08:56:34.948 248514 INFO nova.virt.libvirt.driver [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Deletion of /var/lib/nova/instances/2dacd79d-d668-430f-89e3-bd607a8298ba_del complete#033[00m
Dec 13 03:56:35 np0005558241 nova_compute[248510]: 2025-12-13 08:56:35.005 248514 INFO nova.compute.manager [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:56:35 np0005558241 nova_compute[248510]: 2025-12-13 08:56:35.006 248514 DEBUG oslo.service.loopingcall [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:56:35 np0005558241 nova_compute[248510]: 2025-12-13 08:56:35.007 248514 DEBUG nova.compute.manager [-] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:56:35 np0005558241 nova_compute[248510]: 2025-12-13 08:56:35.007 248514 DEBUG nova.network.neutron [-] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:56:35 np0005558241 nova_compute[248510]: 2025-12-13 08:56:35.476 248514 DEBUG nova.compute.manager [req-97d66a5c-5ec5-4b09-a3a1-b7d1e3faa124 req-032c9a81-5c5b-4d7b-ab71-a78088121e04 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received event network-changed-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:35 np0005558241 nova_compute[248510]: 2025-12-13 08:56:35.477 248514 DEBUG nova.compute.manager [req-97d66a5c-5ec5-4b09-a3a1-b7d1e3faa124 req-032c9a81-5c5b-4d7b-ab71-a78088121e04 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Refreshing instance network info cache due to event network-changed-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:56:35 np0005558241 nova_compute[248510]: 2025-12-13 08:56:35.477 248514 DEBUG oslo_concurrency.lockutils [req-97d66a5c-5ec5-4b09-a3a1-b7d1e3faa124 req-032c9a81-5c5b-4d7b-ab71-a78088121e04 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:56:35 np0005558241 nova_compute[248510]: 2025-12-13 08:56:35.477 248514 DEBUG oslo_concurrency.lockutils [req-97d66a5c-5ec5-4b09-a3a1-b7d1e3faa124 req-032c9a81-5c5b-4d7b-ab71-a78088121e04 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:56:35 np0005558241 nova_compute[248510]: 2025-12-13 08:56:35.478 248514 DEBUG nova.network.neutron [req-97d66a5c-5ec5-4b09-a3a1-b7d1e3faa124 req-032c9a81-5c5b-4d7b-ab71-a78088121e04 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Refreshing network info cache for port 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:56:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:56:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2857: 321 pgs: 321 active+clean; 279 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 107 op/s
Dec 13 03:56:36 np0005558241 nova_compute[248510]: 2025-12-13 08:56:36.600 248514 DEBUG nova.network.neutron [-] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:56:36 np0005558241 nova_compute[248510]: 2025-12-13 08:56:36.626 248514 INFO nova.compute.manager [-] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Took 1.62 seconds to deallocate network for instance.#033[00m
Dec 13 03:56:36 np0005558241 nova_compute[248510]: 2025-12-13 08:56:36.651 248514 DEBUG nova.compute.manager [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received event network-vif-unplugged-59834f67-f81d-41bf-9bec-95eea737178e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:36 np0005558241 nova_compute[248510]: 2025-12-13 08:56:36.651 248514 DEBUG oslo_concurrency.lockutils [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:36 np0005558241 nova_compute[248510]: 2025-12-13 08:56:36.652 248514 DEBUG oslo_concurrency.lockutils [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:36 np0005558241 nova_compute[248510]: 2025-12-13 08:56:36.652 248514 DEBUG oslo_concurrency.lockutils [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:36 np0005558241 nova_compute[248510]: 2025-12-13 08:56:36.652 248514 DEBUG nova.compute.manager [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] No waiting events found dispatching network-vif-unplugged-59834f67-f81d-41bf-9bec-95eea737178e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:56:36 np0005558241 nova_compute[248510]: 2025-12-13 08:56:36.653 248514 DEBUG nova.compute.manager [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received event network-vif-unplugged-59834f67-f81d-41bf-9bec-95eea737178e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:56:36 np0005558241 nova_compute[248510]: 2025-12-13 08:56:36.653 248514 DEBUG nova.compute.manager [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received event network-vif-plugged-59834f67-f81d-41bf-9bec-95eea737178e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:36 np0005558241 nova_compute[248510]: 2025-12-13 08:56:36.653 248514 DEBUG oslo_concurrency.lockutils [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:36 np0005558241 nova_compute[248510]: 2025-12-13 08:56:36.653 248514 DEBUG oslo_concurrency.lockutils [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:36 np0005558241 nova_compute[248510]: 2025-12-13 08:56:36.654 248514 DEBUG oslo_concurrency.lockutils [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:36 np0005558241 nova_compute[248510]: 2025-12-13 08:56:36.654 248514 DEBUG nova.compute.manager [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] No waiting events found dispatching network-vif-plugged-59834f67-f81d-41bf-9bec-95eea737178e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:56:36 np0005558241 nova_compute[248510]: 2025-12-13 08:56:36.654 248514 WARNING nova.compute.manager [req-1467ec97-9489-4813-850c-b04a78e93aaf req-7e6844ea-b527-49e1-a000-209ee2d24644 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received unexpected event network-vif-plugged-59834f67-f81d-41bf-9bec-95eea737178e for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:56:36 np0005558241 nova_compute[248510]: 2025-12-13 08:56:36.691 248514 DEBUG oslo_concurrency.lockutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:36 np0005558241 nova_compute[248510]: 2025-12-13 08:56:36.692 248514 DEBUG oslo_concurrency.lockutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:36 np0005558241 nova_compute[248510]: 2025-12-13 08:56:36.789 248514 DEBUG oslo_concurrency.processutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:56:37 np0005558241 nova_compute[248510]: 2025-12-13 08:56:37.347 248514 DEBUG nova.network.neutron [req-42a54bea-ca33-4dac-9637-8ecc88a17d96 req-7ffedc30-f66c-45a2-a0e0-9c4df8dd4526 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updated VIF entry in instance network info cache for port 59834f67-f81d-41bf-9bec-95eea737178e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:56:37 np0005558241 nova_compute[248510]: 2025-12-13 08:56:37.348 248514 DEBUG nova.network.neutron [req-42a54bea-ca33-4dac-9637-8ecc88a17d96 req-7ffedc30-f66c-45a2-a0e0-9c4df8dd4526 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Updating instance_info_cache with network_info: [{"id": "59834f67-f81d-41bf-9bec-95eea737178e", "address": "fa:16:3e:fc:dc:ee", "network": {"id": "d62e4a11-9334-4dbd-978f-dcabebeb9f79", "bridge": "br-int", "label": "tempest-network-smoke--1754851520", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1064539fdf2b494ea705dd5e74afcd3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59834f67-f8", "ovs_interfaceid": "59834f67-f81d-41bf-9bec-95eea737178e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:56:37 np0005558241 nova_compute[248510]: 2025-12-13 08:56:37.370 248514 DEBUG oslo_concurrency.lockutils [req-42a54bea-ca33-4dac-9637-8ecc88a17d96 req-7ffedc30-f66c-45a2-a0e0-9c4df8dd4526 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-2dacd79d-d668-430f-89e3-bd607a8298ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:56:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:56:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4040174884' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:56:37 np0005558241 nova_compute[248510]: 2025-12-13 08:56:37.504 248514 DEBUG oslo_concurrency.processutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.715s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:56:37 np0005558241 nova_compute[248510]: 2025-12-13 08:56:37.511 248514 DEBUG nova.compute.provider_tree [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:56:37 np0005558241 nova_compute[248510]: 2025-12-13 08:56:37.532 248514 DEBUG nova.scheduler.client.report [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:56:37 np0005558241 nova_compute[248510]: 2025-12-13 08:56:37.555 248514 DEBUG oslo_concurrency.lockutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:37 np0005558241 nova_compute[248510]: 2025-12-13 08:56:37.588 248514 INFO nova.scheduler.client.report [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Deleted allocations for instance 2dacd79d-d668-430f-89e3-bd607a8298ba#033[00m
Dec 13 03:56:37 np0005558241 nova_compute[248510]: 2025-12-13 08:56:37.639 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:37 np0005558241 nova_compute[248510]: 2025-12-13 08:56:37.680 248514 DEBUG oslo_concurrency.lockutils [None req-b66eac9f-213f-4e5f-a341-91596be6500b c377508dda354c0aa762d15d52aa130c 1064539fdf2b494ea705dd5e74afcd3b - - default default] Lock "2dacd79d-d668-430f-89e3-bd607a8298ba" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:38 np0005558241 nova_compute[248510]: 2025-12-13 08:56:38.115 248514 DEBUG nova.network.neutron [req-97d66a5c-5ec5-4b09-a3a1-b7d1e3faa124 req-032c9a81-5c5b-4d7b-ab71-a78088121e04 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Updated VIF entry in instance network info cache for port 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:56:38 np0005558241 nova_compute[248510]: 2025-12-13 08:56:38.116 248514 DEBUG nova.network.neutron [req-97d66a5c-5ec5-4b09-a3a1-b7d1e3faa124 req-032c9a81-5c5b-4d7b-ab71-a78088121e04 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Updating instance_info_cache with network_info: [{"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:56:38 np0005558241 nova_compute[248510]: 2025-12-13 08:56:38.141 248514 DEBUG oslo_concurrency.lockutils [req-97d66a5c-5ec5-4b09-a3a1-b7d1e3faa124 req-032c9a81-5c5b-4d7b-ab71-a78088121e04 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:56:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2858: 321 pgs: 321 active+clean; 254 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 109 op/s
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.406917) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616198407013, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2100, "num_deletes": 253, "total_data_size": 3514785, "memory_usage": 3569656, "flush_reason": "Manual Compaction"}
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616198433529, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 3416669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54562, "largest_seqno": 56661, "table_properties": {"data_size": 3407175, "index_size": 5923, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19871, "raw_average_key_size": 20, "raw_value_size": 3388037, "raw_average_value_size": 3482, "num_data_blocks": 261, "num_entries": 973, "num_filter_entries": 973, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765615990, "oldest_key_time": 1765615990, "file_creation_time": 1765616198, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 26683 microseconds, and 8788 cpu microseconds.
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.433594) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 3416669 bytes OK
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.433626) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.435984) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.436005) EVENT_LOG_v1 {"time_micros": 1765616198436000, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.436036) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 3505928, prev total WAL file size 3505928, number of live WAL files 2.
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.437259) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(3336KB)], [128(8984KB)]
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616198437385, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 12616999, "oldest_snapshot_seqno": -1}
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7865 keys, 10765031 bytes, temperature: kUnknown
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616198535727, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 10765031, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10712991, "index_size": 31255, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19717, "raw_key_size": 204557, "raw_average_key_size": 26, "raw_value_size": 10572908, "raw_average_value_size": 1344, "num_data_blocks": 1220, "num_entries": 7865, "num_filter_entries": 7865, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765616198, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.536289) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10765031 bytes
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.540263) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.0 rd, 109.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.8 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(6.8) write-amplify(3.2) OK, records in: 8387, records dropped: 522 output_compression: NoCompression
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.540310) EVENT_LOG_v1 {"time_micros": 1765616198540292, "job": 78, "event": "compaction_finished", "compaction_time_micros": 98603, "compaction_time_cpu_micros": 45573, "output_level": 6, "num_output_files": 1, "total_output_size": 10765031, "num_input_records": 8387, "num_output_records": 7865, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616198541122, "job": 78, "event": "table_file_deletion", "file_number": 130}
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616198542816, "job": 78, "event": "table_file_deletion", "file_number": 128}
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.437057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.542911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.542918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.542920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.542922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:56:38 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:56:38.542923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:56:38 np0005558241 nova_compute[248510]: 2025-12-13 08:56:38.750 248514 DEBUG nova.compute.manager [req-35aca36c-bcf0-486f-b844-c648bb40189e req-63675479-6bbe-411e-93eb-a0fc4de7572e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Received event network-vif-deleted-59834f67-f81d-41bf-9bec-95eea737178e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 03:56:38 np0005558241 nova_compute[248510]: 2025-12-13 08:56:38.751 248514 INFO nova.compute.manager [req-35aca36c-bcf0-486f-b844-c648bb40189e req-63675479-6bbe-411e-93eb-a0fc4de7572e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Neutron deleted interface 59834f67-f81d-41bf-9bec-95eea737178e; detaching it from the instance and deleting it from the info cache
Dec 13 03:56:38 np0005558241 nova_compute[248510]: 2025-12-13 08:56:38.751 248514 DEBUG nova.network.neutron [req-35aca36c-bcf0-486f-b844-c648bb40189e req-63675479-6bbe-411e-93eb-a0fc4de7572e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Dec 13 03:56:38 np0005558241 nova_compute[248510]: 2025-12-13 08:56:38.755 248514 DEBUG nova.compute.manager [req-35aca36c-bcf0-486f-b844-c648bb40189e req-63675479-6bbe-411e-93eb-a0fc4de7572e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Detach interface failed, port_id=59834f67-f81d-41bf-9bec-95eea737178e, reason: Instance 2dacd79d-d668-430f-89e3-bd607a8298ba could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 03:56:39 np0005558241 nova_compute[248510]: 2025-12-13 08:56:39.641 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:56:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:56:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:56:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:56:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:56:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:56:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:56:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2859: 321 pgs: 321 active+clean; 205 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 315 KiB/s wr, 143 op/s
Dec 13 03:56:40 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:40Z|00142|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.9
Dec 13 03:56:40 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:40Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:da:e5:67 10.100.0.9
Dec 13 03:56:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:56:42 np0005558241 nova_compute[248510]: 2025-12-13 08:56:41.999 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:56:42 np0005558241 nova_compute[248510]: 2025-12-13 08:56:42.000 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:56:42 np0005558241 nova_compute[248510]: 2025-12-13 08:56:42.037 248514 DEBUG nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:56:42 np0005558241 nova_compute[248510]: 2025-12-13 08:56:42.124 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:56:42 np0005558241 nova_compute[248510]: 2025-12-13 08:56:42.125 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:56:42 np0005558241 nova_compute[248510]: 2025-12-13 08:56:42.134 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 03:56:42 np0005558241 nova_compute[248510]: 2025-12-13 08:56:42.135 248514 INFO nova.compute.claims [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Claim successful on node compute-0.ctlplane.example.com
Dec 13 03:56:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2860: 321 pgs: 321 active+clean; 205 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 289 KiB/s wr, 110 op/s
Dec 13 03:56:42 np0005558241 nova_compute[248510]: 2025-12-13 08:56:42.333 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:56:42 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:42Z|01177|binding|INFO|Releasing lport 47f45749-b232-4d0c-bf37-be042ea606c8 from this chassis (sb_readonly=0)
Dec 13 03:56:42 np0005558241 nova_compute[248510]: 2025-12-13 08:56:42.581 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:56:42 np0005558241 nova_compute[248510]: 2025-12-13 08:56:42.642 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:56:42 np0005558241 nova_compute[248510]: 2025-12-13 08:56:42.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 03:56:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:56:42 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3922969887' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:56:42 np0005558241 nova_compute[248510]: 2025-12-13 08:56:42.919 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:56:42 np0005558241 nova_compute[248510]: 2025-12-13 08:56:42.925 248514 DEBUG nova.compute.provider_tree [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 03:56:42 np0005558241 nova_compute[248510]: 2025-12-13 08:56:42.945 248514 DEBUG nova.scheduler.client.report [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 03:56:42 np0005558241 nova_compute[248510]: 2025-12-13 08:56:42.993 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:56:42 np0005558241 nova_compute[248510]: 2025-12-13 08:56:42.994 248514 DEBUG nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.045 248514 DEBUG nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.046 248514 DEBUG nova.network.neutron [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.084 248514 INFO nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.117 248514 DEBUG nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.249 248514 DEBUG nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.251 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.251 248514 INFO nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Creating image(s)
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.272 248514 DEBUG nova.storage.rbd_utils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.292 248514 DEBUG nova.storage.rbd_utils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.311 248514 DEBUG nova.storage.rbd_utils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.314 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.386 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.387 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.388 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.389 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.416 248514 DEBUG nova.storage.rbd_utils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.419 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.470 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616188.4623442, 822fa9f6-0a5d-490e-89d7-446df19a068b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.471 248514 INFO nova.compute.manager [-] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] VM Stopped (Lifecycle Event)
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.495 248514 DEBUG nova.compute.manager [None req-ddd280ba-695e-4b8a-a320-f809ae1eb37f - - - - - -] [instance: 822fa9f6-0a5d-490e-89d7-446df19a068b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 03:56:43 np0005558241 nova_compute[248510]: 2025-12-13 08:56:43.699 248514 DEBUG nova.policy [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 03:56:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2861: 321 pgs: 321 active+clean; 214 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 502 KiB/s wr, 143 op/s
Dec 13 03:56:44 np0005558241 nova_compute[248510]: 2025-12-13 08:56:44.642 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:56:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:45Z|00144|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.9
Dec 13 03:56:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:45Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:da:e5:67 10.100.0.9
Dec 13 03:56:45 np0005558241 nova_compute[248510]: 2025-12-13 08:56:45.444 248514 DEBUG nova.network.neutron [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Successfully created port: 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 03:56:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:45Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:e5:67 10.100.0.9
Dec 13 03:56:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:45Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:e5:67 10.100.0.9
Dec 13 03:56:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 03:56:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 03:56:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:56:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:56:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:56:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:56:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:56:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:56:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:56:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:56:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:56:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:56:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:56:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:56:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:56:45 np0005558241 nova_compute[248510]: 2025-12-13 08:56:45.964 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:56:46 np0005558241 nova_compute[248510]: 2025-12-13 08:56:46.036 248514 DEBUG nova.storage.rbd_utils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:56:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2862: 321 pgs: 321 active+clean; 214 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 80 op/s
Dec 13 03:56:46 np0005558241 podman[368007]: 2025-12-13 08:56:46.335754091 +0000 UTC m=+0.086619073 container create 00353abd2c65910b14825cd48946f4f1529ca5e3fd40c3142e64b2de7a98414d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 03:56:46 np0005558241 podman[368007]: 2025-12-13 08:56:46.273239113 +0000 UTC m=+0.024104115 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:56:46 np0005558241 systemd[1]: Started libpod-conmon-00353abd2c65910b14825cd48946f4f1529ca5e3fd40c3142e64b2de7a98414d.scope.
Dec 13 03:56:46 np0005558241 nova_compute[248510]: 2025-12-13 08:56:46.377 248514 DEBUG nova.objects.instance [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:56:46 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:56:46 np0005558241 nova_compute[248510]: 2025-12-13 08:56:46.408 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:56:46 np0005558241 nova_compute[248510]: 2025-12-13 08:56:46.408 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Ensure instance console log exists: /var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:56:46 np0005558241 nova_compute[248510]: 2025-12-13 08:56:46.409 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:46 np0005558241 nova_compute[248510]: 2025-12-13 08:56:46.409 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:46 np0005558241 nova_compute[248510]: 2025-12-13 08:56:46.410 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:46 np0005558241 podman[368007]: 2025-12-13 08:56:46.420441534 +0000 UTC m=+0.171306536 container init 00353abd2c65910b14825cd48946f4f1529ca5e3fd40c3142e64b2de7a98414d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_lewin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 03:56:46 np0005558241 podman[368007]: 2025-12-13 08:56:46.426904266 +0000 UTC m=+0.177769248 container start 00353abd2c65910b14825cd48946f4f1529ca5e3fd40c3142e64b2de7a98414d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:56:46 np0005558241 elastic_lewin[368041]: 167 167
Dec 13 03:56:46 np0005558241 systemd[1]: libpod-00353abd2c65910b14825cd48946f4f1529ca5e3fd40c3142e64b2de7a98414d.scope: Deactivated successfully.
Dec 13 03:56:46 np0005558241 podman[368007]: 2025-12-13 08:56:46.454858177 +0000 UTC m=+0.205723189 container attach 00353abd2c65910b14825cd48946f4f1529ca5e3fd40c3142e64b2de7a98414d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_lewin, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:56:46 np0005558241 podman[368007]: 2025-12-13 08:56:46.456357804 +0000 UTC m=+0.207222826 container died 00353abd2c65910b14825cd48946f4f1529ca5e3fd40c3142e64b2de7a98414d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 03:56:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b02bdb34ab92482b536d9671858fb222b9f14ee24de846db1f46c1492104f69e-merged.mount: Deactivated successfully.
Dec 13 03:56:46 np0005558241 podman[368007]: 2025-12-13 08:56:46.498686416 +0000 UTC m=+0.249551398 container remove 00353abd2c65910b14825cd48946f4f1529ca5e3fd40c3142e64b2de7a98414d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_lewin, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 03:56:46 np0005558241 systemd[1]: libpod-conmon-00353abd2c65910b14825cd48946f4f1529ca5e3fd40c3142e64b2de7a98414d.scope: Deactivated successfully.
Dec 13 03:56:46 np0005558241 podman[368067]: 2025-12-13 08:56:46.676851593 +0000 UTC m=+0.044337213 container create b5da53d0343b3a5707266c62dfb4ce0d1e352a9a5dda21d4d98f321cc7fe28e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_bartik, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:56:46 np0005558241 systemd[1]: Started libpod-conmon-b5da53d0343b3a5707266c62dfb4ce0d1e352a9a5dda21d4d98f321cc7fe28e4.scope.
Dec 13 03:56:46 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:56:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8191ca3c02a556ddf5d36405a9836f5cf8ca4cf1c74bb168277a68a5ad3ac393/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8191ca3c02a556ddf5d36405a9836f5cf8ca4cf1c74bb168277a68a5ad3ac393/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8191ca3c02a556ddf5d36405a9836f5cf8ca4cf1c74bb168277a68a5ad3ac393/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8191ca3c02a556ddf5d36405a9836f5cf8ca4cf1c74bb168277a68a5ad3ac393/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8191ca3c02a556ddf5d36405a9836f5cf8ca4cf1c74bb168277a68a5ad3ac393/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:46 np0005558241 podman[368067]: 2025-12-13 08:56:46.654336858 +0000 UTC m=+0.021822498 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:56:46 np0005558241 podman[368067]: 2025-12-13 08:56:46.75770877 +0000 UTC m=+0.125194410 container init b5da53d0343b3a5707266c62dfb4ce0d1e352a9a5dda21d4d98f321cc7fe28e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_bartik, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 03:56:46 np0005558241 podman[368067]: 2025-12-13 08:56:46.768027359 +0000 UTC m=+0.135512979 container start b5da53d0343b3a5707266c62dfb4ce0d1e352a9a5dda21d4d98f321cc7fe28e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_bartik, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 03:56:46 np0005558241 podman[368067]: 2025-12-13 08:56:46.773168068 +0000 UTC m=+0.140653708 container attach b5da53d0343b3a5707266c62dfb4ce0d1e352a9a5dda21d4d98f321cc7fe28e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_bartik, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:56:46 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 03:56:46 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:56:46 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:56:46 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:56:47 np0005558241 intelligent_bartik[368083]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:56:47 np0005558241 intelligent_bartik[368083]: --> All data devices are unavailable
Dec 13 03:56:47 np0005558241 systemd[1]: libpod-b5da53d0343b3a5707266c62dfb4ce0d1e352a9a5dda21d4d98f321cc7fe28e4.scope: Deactivated successfully.
Dec 13 03:56:47 np0005558241 podman[368067]: 2025-12-13 08:56:47.329514007 +0000 UTC m=+0.696999627 container died b5da53d0343b3a5707266c62dfb4ce0d1e352a9a5dda21d4d98f321cc7fe28e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_bartik, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:56:47 np0005558241 nova_compute[248510]: 2025-12-13 08:56:47.337 248514 DEBUG nova.network.neutron [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Successfully updated port: 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:56:47 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8191ca3c02a556ddf5d36405a9836f5cf8ca4cf1c74bb168277a68a5ad3ac393-merged.mount: Deactivated successfully.
Dec 13 03:56:47 np0005558241 nova_compute[248510]: 2025-12-13 08:56:47.365 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:56:47 np0005558241 nova_compute[248510]: 2025-12-13 08:56:47.366 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:56:47 np0005558241 nova_compute[248510]: 2025-12-13 08:56:47.366 248514 DEBUG nova.network.neutron [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:56:47 np0005558241 podman[368067]: 2025-12-13 08:56:47.379412988 +0000 UTC m=+0.746898608 container remove b5da53d0343b3a5707266c62dfb4ce0d1e352a9a5dda21d4d98f321cc7fe28e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:56:47 np0005558241 systemd[1]: libpod-conmon-b5da53d0343b3a5707266c62dfb4ce0d1e352a9a5dda21d4d98f321cc7fe28e4.scope: Deactivated successfully.
Dec 13 03:56:47 np0005558241 nova_compute[248510]: 2025-12-13 08:56:47.570 248514 DEBUG nova.compute.manager [req-85989637-cb8e-4331-b528-4e644ba1e9e3 req-3fbf626b-5bff-4858-bd2f-98b304d79804 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received event network-changed-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:47 np0005558241 nova_compute[248510]: 2025-12-13 08:56:47.571 248514 DEBUG nova.compute.manager [req-85989637-cb8e-4331-b528-4e644ba1e9e3 req-3fbf626b-5bff-4858-bd2f-98b304d79804 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Refreshing instance network info cache due to event network-changed-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:56:47 np0005558241 nova_compute[248510]: 2025-12-13 08:56:47.571 248514 DEBUG oslo_concurrency.lockutils [req-85989637-cb8e-4331-b528-4e644ba1e9e3 req-3fbf626b-5bff-4858-bd2f-98b304d79804 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:56:47 np0005558241 nova_compute[248510]: 2025-12-13 08:56:47.644 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:47 np0005558241 nova_compute[248510]: 2025-12-13 08:56:47.675 248514 DEBUG nova.network.neutron [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:56:47 np0005558241 podman[368176]: 2025-12-13 08:56:47.885241241 +0000 UTC m=+0.035920122 container create 11f16c844f50cbcc64527174f25124bf92f575028ec7cfae15bc04b0ccf1367e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 03:56:47 np0005558241 systemd[1]: Started libpod-conmon-11f16c844f50cbcc64527174f25124bf92f575028ec7cfae15bc04b0ccf1367e.scope.
Dec 13 03:56:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:56:47 np0005558241 podman[368176]: 2025-12-13 08:56:47.963230666 +0000 UTC m=+0.113909587 container init 11f16c844f50cbcc64527174f25124bf92f575028ec7cfae15bc04b0ccf1367e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mclaren, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 03:56:47 np0005558241 podman[368176]: 2025-12-13 08:56:47.870369058 +0000 UTC m=+0.021047969 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:56:47 np0005558241 podman[368176]: 2025-12-13 08:56:47.969294848 +0000 UTC m=+0.119973729 container start 11f16c844f50cbcc64527174f25124bf92f575028ec7cfae15bc04b0ccf1367e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:56:47 np0005558241 podman[368176]: 2025-12-13 08:56:47.972871818 +0000 UTC m=+0.123550729 container attach 11f16c844f50cbcc64527174f25124bf92f575028ec7cfae15bc04b0ccf1367e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mclaren, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:56:47 np0005558241 sad_mclaren[368192]: 167 167
Dec 13 03:56:47 np0005558241 systemd[1]: libpod-11f16c844f50cbcc64527174f25124bf92f575028ec7cfae15bc04b0ccf1367e.scope: Deactivated successfully.
Dec 13 03:56:47 np0005558241 podman[368176]: 2025-12-13 08:56:47.975722289 +0000 UTC m=+0.126401180 container died 11f16c844f50cbcc64527174f25124bf92f575028ec7cfae15bc04b0ccf1367e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mclaren, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:56:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-54b2fe4435aa9e3d863c434d635b6c60aeadf48b5cee5157ae6eb06612bba167-merged.mount: Deactivated successfully.
Dec 13 03:56:48 np0005558241 podman[368176]: 2025-12-13 08:56:48.018595544 +0000 UTC m=+0.169274425 container remove 11f16c844f50cbcc64527174f25124bf92f575028ec7cfae15bc04b0ccf1367e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_mclaren, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 03:56:48 np0005558241 systemd[1]: libpod-conmon-11f16c844f50cbcc64527174f25124bf92f575028ec7cfae15bc04b0ccf1367e.scope: Deactivated successfully.
Dec 13 03:56:48 np0005558241 podman[368215]: 2025-12-13 08:56:48.172840222 +0000 UTC m=+0.038217920 container create 477d493666c334a77a9b3666e9a5cdb4261c66728c53f96a86b6a5b0b964b74d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 03:56:48 np0005558241 systemd[1]: Started libpod-conmon-477d493666c334a77a9b3666e9a5cdb4261c66728c53f96a86b6a5b0b964b74d.scope.
Dec 13 03:56:48 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:56:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72dc7a80cd486709c8030905bb11e9b4c963c9bfbb48931b2a3fb24af47c8f0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72dc7a80cd486709c8030905bb11e9b4c963c9bfbb48931b2a3fb24af47c8f0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72dc7a80cd486709c8030905bb11e9b4c963c9bfbb48931b2a3fb24af47c8f0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72dc7a80cd486709c8030905bb11e9b4c963c9bfbb48931b2a3fb24af47c8f0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:48 np0005558241 podman[368215]: 2025-12-13 08:56:48.156459351 +0000 UTC m=+0.021837059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:56:48 np0005558241 podman[368215]: 2025-12-13 08:56:48.260594872 +0000 UTC m=+0.125972560 container init 477d493666c334a77a9b3666e9a5cdb4261c66728c53f96a86b6a5b0b964b74d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:56:48 np0005558241 podman[368215]: 2025-12-13 08:56:48.268662594 +0000 UTC m=+0.134040282 container start 477d493666c334a77a9b3666e9a5cdb4261c66728c53f96a86b6a5b0b964b74d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_borg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:56:48 np0005558241 podman[368215]: 2025-12-13 08:56:48.273108426 +0000 UTC m=+0.138486134 container attach 477d493666c334a77a9b3666e9a5cdb4261c66728c53f96a86b6a5b0b964b74d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 03:56:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2863: 321 pgs: 321 active+clean; 241 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.9 MiB/s wr, 93 op/s
Dec 13 03:56:48 np0005558241 interesting_borg[368231]: {
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:    "0": [
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:        {
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "devices": [
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "/dev/loop3"
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            ],
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "lv_name": "ceph_lv0",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "lv_size": "21470642176",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "name": "ceph_lv0",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "tags": {
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.cluster_name": "ceph",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.crush_device_class": "",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.encrypted": "0",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.objectstore": "bluestore",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.osd_id": "0",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.type": "block",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.vdo": "0",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.with_tpm": "0"
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            },
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "type": "block",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "vg_name": "ceph_vg0"
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:        }
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:    ],
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:    "1": [
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:        {
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "devices": [
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "/dev/loop4"
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            ],
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "lv_name": "ceph_lv1",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "lv_size": "21470642176",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "name": "ceph_lv1",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "tags": {
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.cluster_name": "ceph",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.crush_device_class": "",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.encrypted": "0",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.objectstore": "bluestore",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.osd_id": "1",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.type": "block",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.vdo": "0",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.with_tpm": "0"
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            },
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "type": "block",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "vg_name": "ceph_vg1"
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:        }
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:    ],
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:    "2": [
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:        {
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "devices": [
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "/dev/loop5"
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            ],
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "lv_name": "ceph_lv2",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "lv_size": "21470642176",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "name": "ceph_lv2",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "tags": {
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.cluster_name": "ceph",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.crush_device_class": "",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.encrypted": "0",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.objectstore": "bluestore",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.osd_id": "2",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.type": "block",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.vdo": "0",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:                "ceph.with_tpm": "0"
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            },
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "type": "block",
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:            "vg_name": "ceph_vg2"
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:        }
Dec 13 03:56:48 np0005558241 interesting_borg[368231]:    ]
Dec 13 03:56:48 np0005558241 interesting_borg[368231]: }
Dec 13 03:56:48 np0005558241 systemd[1]: libpod-477d493666c334a77a9b3666e9a5cdb4261c66728c53f96a86b6a5b0b964b74d.scope: Deactivated successfully.
Dec 13 03:56:48 np0005558241 podman[368215]: 2025-12-13 08:56:48.587794386 +0000 UTC m=+0.453172074 container died 477d493666c334a77a9b3666e9a5cdb4261c66728c53f96a86b6a5b0b964b74d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_borg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:56:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-72dc7a80cd486709c8030905bb11e9b4c963c9bfbb48931b2a3fb24af47c8f0f-merged.mount: Deactivated successfully.
Dec 13 03:56:48 np0005558241 podman[368215]: 2025-12-13 08:56:48.643612975 +0000 UTC m=+0.508990663 container remove 477d493666c334a77a9b3666e9a5cdb4261c66728c53f96a86b6a5b0b964b74d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_borg, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 03:56:48 np0005558241 systemd[1]: libpod-conmon-477d493666c334a77a9b3666e9a5cdb4261c66728c53f96a86b6a5b0b964b74d.scope: Deactivated successfully.
Dec 13 03:56:49 np0005558241 podman[368312]: 2025-12-13 08:56:49.125713252 +0000 UTC m=+0.038127967 container create 74681dda8fedbed00458a9f1315f1584d92b8d0ab11e3bbc1cc0ed7f966e190a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_greider, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:56:49 np0005558241 systemd[1]: Started libpod-conmon-74681dda8fedbed00458a9f1315f1584d92b8d0ab11e3bbc1cc0ed7f966e190a.scope.
Dec 13 03:56:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:56:49 np0005558241 podman[368312]: 2025-12-13 08:56:49.20341085 +0000 UTC m=+0.115825565 container init 74681dda8fedbed00458a9f1315f1584d92b8d0ab11e3bbc1cc0ed7f966e190a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_greider, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:56:49 np0005558241 podman[368312]: 2025-12-13 08:56:49.110253804 +0000 UTC m=+0.022668529 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:56:49 np0005558241 podman[368312]: 2025-12-13 08:56:49.209370469 +0000 UTC m=+0.121785184 container start 74681dda8fedbed00458a9f1315f1584d92b8d0ab11e3bbc1cc0ed7f966e190a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_greider, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:56:49 np0005558241 podman[368312]: 2025-12-13 08:56:49.213124873 +0000 UTC m=+0.125539578 container attach 74681dda8fedbed00458a9f1315f1584d92b8d0ab11e3bbc1cc0ed7f966e190a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_greider, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:56:49 np0005558241 lucid_greider[368328]: 167 167
Dec 13 03:56:49 np0005558241 systemd[1]: libpod-74681dda8fedbed00458a9f1315f1584d92b8d0ab11e3bbc1cc0ed7f966e190a.scope: Deactivated successfully.
Dec 13 03:56:49 np0005558241 podman[368312]: 2025-12-13 08:56:49.214643501 +0000 UTC m=+0.127058216 container died 74681dda8fedbed00458a9f1315f1584d92b8d0ab11e3bbc1cc0ed7f966e190a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_greider, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:56:49 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1d41409324cfacffd7f66d59786a9bf61ea68546a13be6ea068a5584181898a7-merged.mount: Deactivated successfully.
Dec 13 03:56:49 np0005558241 podman[368312]: 2025-12-13 08:56:49.254200233 +0000 UTC m=+0.166614958 container remove 74681dda8fedbed00458a9f1315f1584d92b8d0ab11e3bbc1cc0ed7f966e190a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_greider, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 03:56:49 np0005558241 systemd[1]: libpod-conmon-74681dda8fedbed00458a9f1315f1584d92b8d0ab11e3bbc1cc0ed7f966e190a.scope: Deactivated successfully.
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.282 248514 DEBUG nova.network.neutron [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Updating instance_info_cache with network_info: [{"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.317 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.318 248514 DEBUG nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Instance network_info: |[{"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.318 248514 DEBUG oslo_concurrency.lockutils [req-85989637-cb8e-4331-b528-4e644ba1e9e3 req-3fbf626b-5bff-4858-bd2f-98b304d79804 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.318 248514 DEBUG nova.network.neutron [req-85989637-cb8e-4331-b528-4e644ba1e9e3 req-3fbf626b-5bff-4858-bd2f-98b304d79804 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Refreshing network info cache for port 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.322 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Start _get_guest_xml network_info=[{"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.328 248514 WARNING nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.334 248514 DEBUG nova.virt.libvirt.host [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.335 248514 DEBUG nova.virt.libvirt.host [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.340 248514 DEBUG nova.virt.libvirt.host [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.341 248514 DEBUG nova.virt.libvirt.host [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.341 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.341 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.342 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.342 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.342 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.342 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.342 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.343 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.343 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.343 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.343 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.343 248514 DEBUG nova.virt.hardware [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.346 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:56:49 np0005558241 podman[368352]: 2025-12-13 08:56:49.431272963 +0000 UTC m=+0.045025460 container create 1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 03:56:49 np0005558241 systemd[1]: Started libpod-conmon-1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163.scope.
Dec 13 03:56:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:56:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5825876dd74443d0a05940844218eb7750298ffe38b1434cf44a3beaca04a82c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5825876dd74443d0a05940844218eb7750298ffe38b1434cf44a3beaca04a82c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5825876dd74443d0a05940844218eb7750298ffe38b1434cf44a3beaca04a82c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5825876dd74443d0a05940844218eb7750298ffe38b1434cf44a3beaca04a82c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:49 np0005558241 podman[368352]: 2025-12-13 08:56:49.412193635 +0000 UTC m=+0.025946172 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:56:49 np0005558241 podman[368352]: 2025-12-13 08:56:49.512548781 +0000 UTC m=+0.126301298 container init 1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:56:49 np0005558241 podman[368352]: 2025-12-13 08:56:49.518935141 +0000 UTC m=+0.132687648 container start 1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:56:49 np0005558241 podman[368352]: 2025-12-13 08:56:49.522179722 +0000 UTC m=+0.135932249 container attach 1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.603 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616194.6016686, 2dacd79d-d668-430f-89e3-bd607a8298ba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.604 248514 INFO nova.compute.manager [-] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.674 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.684 248514 DEBUG nova.compute.manager [None req-dc5579ee-1a7d-4b69-b02c-edfb6158fea4 - - - - - -] [instance: 2dacd79d-d668-430f-89e3-bd607a8298ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:56:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:56:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3266313267' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.936 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.967 248514 DEBUG nova.storage.rbd_utils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:56:49 np0005558241 nova_compute[248510]: 2025-12-13 08:56:49.972 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:56:50 np0005558241 lvm[368505]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:56:50 np0005558241 lvm[368505]: VG ceph_vg0 finished
Dec 13 03:56:50 np0005558241 lvm[368508]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:56:50 np0005558241 lvm[368508]: VG ceph_vg1 finished
Dec 13 03:56:50 np0005558241 lvm[368510]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:56:50 np0005558241 lvm[368510]: VG ceph_vg2 finished
Dec 13 03:56:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2864: 321 pgs: 321 active+clean; 264 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 106 op/s
Dec 13 03:56:50 np0005558241 hopeful_goldstine[368371]: {}
Dec 13 03:56:50 np0005558241 systemd[1]: libpod-1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163.scope: Deactivated successfully.
Dec 13 03:56:50 np0005558241 systemd[1]: libpod-1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163.scope: Consumed 1.274s CPU time.
Dec 13 03:56:50 np0005558241 podman[368352]: 2025-12-13 08:56:50.340432258 +0000 UTC m=+0.954184765 container died 1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 03:56:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5825876dd74443d0a05940844218eb7750298ffe38b1434cf44a3beaca04a82c-merged.mount: Deactivated successfully.
Dec 13 03:56:50 np0005558241 podman[368352]: 2025-12-13 08:56:50.388907784 +0000 UTC m=+1.002660291 container remove 1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:56:50 np0005558241 systemd[1]: libpod-conmon-1b66be4eeb0eaefa655f17996f26e0e1505deb2c1d4daa65ed8a3506f33e2163.scope: Deactivated successfully.
Dec 13 03:56:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:56:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:56:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:56:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:56:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:56:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3860048685' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.537 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.539 248514 DEBUG nova.virt.libvirt.vif [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:56:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1758092435',display_name='tempest-TestNetworkBasicOps-server-1758092435',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1758092435',id=121,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDDhA9DlC0XXlTi33TP7442ZgmxgyTTZRyp/5+PtSzz/z4TT06lLY5cCNioPf17m6xj5p3Rza8zGSpra/Ou4pMBK7drw3VX1RTJrfYr/jaVe2RRgvmXLfZfYTeWegMxqwQ==',key_name='tempest-TestNetworkBasicOps-530003057',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-9yn1gbd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:56:43Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.540 248514 DEBUG nova.network.os_vif_util [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.541 248514 DEBUG nova.network.os_vif_util [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:a9:be,bridge_name='br-int',has_traffic_filtering=True,id=4013e964-3f6f-4aaa-af6d-20b9cb5c2a39,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4013e964-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.542 248514 DEBUG nova.objects.instance [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.649 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:56:50 np0005558241 nova_compute[248510]:  <uuid>1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d</uuid>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:  <name>instance-00000079</name>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkBasicOps-server-1758092435</nova:name>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:56:49</nova:creationTime>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:56:50 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:        <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:        <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:        <nova:port uuid="4013e964-3f6f-4aaa-af6d-20b9cb5c2a39">
Dec 13 03:56:50 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <entry name="serial">1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d</entry>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <entry name="uuid">1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d</entry>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk">
Dec 13 03:56:50 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:56:50 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk.config">
Dec 13 03:56:50 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:56:50 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:7e:a9:be"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <target dev="tap4013e964-3f"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d/console.log" append="off"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:56:50 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:56:50 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:56:50 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:56:50 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.650 248514 DEBUG nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Preparing to wait for external event network-vif-plugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.650 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.650 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.650 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.651 248514 DEBUG nova.virt.libvirt.vif [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:56:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1758092435',display_name='tempest-TestNetworkBasicOps-server-1758092435',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1758092435',id=121,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDDhA9DlC0XXlTi33TP7442ZgmxgyTTZRyp/5+PtSzz/z4TT06lLY5cCNioPf17m6xj5p3Rza8zGSpra/Ou4pMBK7drw3VX1RTJrfYr/jaVe2RRgvmXLfZfYTeWegMxqwQ==',key_name='tempest-TestNetworkBasicOps-530003057',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-9yn1gbd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:56:43Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.651 248514 DEBUG nova.network.os_vif_util [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.652 248514 DEBUG nova.network.os_vif_util [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:a9:be,bridge_name='br-int',has_traffic_filtering=True,id=4013e964-3f6f-4aaa-af6d-20b9cb5c2a39,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4013e964-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.652 248514 DEBUG os_vif [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:a9:be,bridge_name='br-int',has_traffic_filtering=True,id=4013e964-3f6f-4aaa-af6d-20b9cb5c2a39,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4013e964-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.653 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.653 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.654 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.657 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.658 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4013e964-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.658 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4013e964-3f, col_values=(('external_ids', {'iface-id': '4013e964-3f6f-4aaa-af6d-20b9cb5c2a39', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:a9:be', 'vm-uuid': '1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.660 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:50 np0005558241 NetworkManager[50376]: <info>  [1765616210.6608] manager: (tap4013e964-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/489)
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.662 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.667 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.668 248514 INFO os_vif [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:a9:be,bridge_name='br-int',has_traffic_filtering=True,id=4013e964-3f6f-4aaa-af6d-20b9cb5c2a39,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4013e964-3f')#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.774 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.775 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.775 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:7e:a9:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.776 248514 INFO nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Using config drive#033[00m
Dec 13 03:56:50 np0005558241 nova_compute[248510]: 2025-12-13 08:56:50.807 248514 DEBUG nova.storage.rbd_utils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:56:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:56:51 np0005558241 nova_compute[248510]: 2025-12-13 08:56:51.345 248514 INFO nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Creating config drive at /var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d/disk.config#033[00m
Dec 13 03:56:51 np0005558241 nova_compute[248510]: 2025-12-13 08:56:51.353 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzlqlzsdy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:56:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:56:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:56:51 np0005558241 nova_compute[248510]: 2025-12-13 08:56:51.514 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzlqlzsdy" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:56:51 np0005558241 nova_compute[248510]: 2025-12-13 08:56:51.536 248514 DEBUG nova.storage.rbd_utils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:56:51 np0005558241 nova_compute[248510]: 2025-12-13 08:56:51.539 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d/disk.config 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:56:51 np0005558241 nova_compute[248510]: 2025-12-13 08:56:51.684 248514 DEBUG oslo_concurrency.processutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d/disk.config 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:56:51 np0005558241 nova_compute[248510]: 2025-12-13 08:56:51.685 248514 INFO nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Deleting local config drive /var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d/disk.config because it was imported into RBD.#033[00m
Dec 13 03:56:51 np0005558241 kernel: tap4013e964-3f: entered promiscuous mode
Dec 13 03:56:51 np0005558241 NetworkManager[50376]: <info>  [1765616211.7339] manager: (tap4013e964-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/490)
Dec 13 03:56:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:51Z|01178|binding|INFO|Claiming lport 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 for this chassis.
Dec 13 03:56:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:51Z|01179|binding|INFO|4013e964-3f6f-4aaa-af6d-20b9cb5c2a39: Claiming fa:16:3e:7e:a9:be 10.100.0.9
Dec 13 03:56:51 np0005558241 systemd-udevd[368509]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:56:51 np0005558241 nova_compute[248510]: 2025-12-13 08:56:51.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:51 np0005558241 NetworkManager[50376]: <info>  [1765616211.7475] device (tap4013e964-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:56:51 np0005558241 NetworkManager[50376]: <info>  [1765616211.7482] device (tap4013e964-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:56:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:51Z|01180|binding|INFO|Setting lport 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 ovn-installed in OVS
Dec 13 03:56:51 np0005558241 nova_compute[248510]: 2025-12-13 08:56:51.757 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:51 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:51Z|01181|binding|INFO|Setting lport 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 up in Southbound
Dec 13 03:56:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.759 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:a9:be 10.100.0.9'], port_security=['fa:16:3e:7e:a9:be 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c78db00b-677b-4c8b-af80-5bb717876b41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca3b9929-a40c-461d-b2b9-49fd6af07fd3, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4013e964-3f6f-4aaa-af6d-20b9cb5c2a39) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:56:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.761 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 in datapath 793ba3c3-1004-4068-a89a-dc7b4c56fc43 bound to our chassis#033[00m
Dec 13 03:56:51 np0005558241 nova_compute[248510]: 2025-12-13 08:56:51.761 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.763 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 793ba3c3-1004-4068-a89a-dc7b4c56fc43#033[00m
Dec 13 03:56:51 np0005558241 systemd-machined[210538]: New machine qemu-148-instance-00000079.
Dec 13 03:56:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.775 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[437ed159-1423-4b38-b76a-e4024186cd6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.776 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap793ba3c3-11 in ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:56:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.777 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap793ba3c3-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:56:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.777 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d0220d3a-428b-4073-a469-cf31d8a784ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.778 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[55072e5f-c4c5-4df1-b3df-0a8cb7b10871]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.790 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[4f4a8107-25ee-4d0f-95e5-f7bbc6891c4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:51 np0005558241 systemd[1]: Started Virtual Machine qemu-148-instance-00000079.
Dec 13 03:56:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.813 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d33af3d8-6137-4f59-b0d7-a4154677b838]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.842 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[16231d44-811a-49ca-8884-1f696c40288e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.847 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4b77bf79-7de6-43d3-8b7f-eef95ce735f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:51 np0005558241 NetworkManager[50376]: <info>  [1765616211.8482] manager: (tap793ba3c3-10): new Veth device (/org/freedesktop/NetworkManager/Devices/491)
Dec 13 03:56:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.873 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca5e47c-3d9e-4fb3-8364-6bf4c0acb407]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.875 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4537659c-c71a-4e59-8103-132eb1f2f726]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:51 np0005558241 NetworkManager[50376]: <info>  [1765616211.8994] device (tap793ba3c3-10): carrier: link connected
Dec 13 03:56:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.904 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a704cbee-8874-4e7c-8384-fee7cd2e687d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.927 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca4457d-dc8e-4d40-8db2-f677edf277c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap793ba3c3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:ed:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 349], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878911, 'reachable_time': 42802, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368656, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.943 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e59d8ced-02cd-482d-a107-b9c121bf1c5d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6d:eddb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 878911, 'tstamp': 878911}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368657, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.962 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ddf56a15-48cc-4a56-bcd0-43086e35d30e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap793ba3c3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:ed:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 349], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878911, 'reachable_time': 42802, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368658, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:51 np0005558241 nova_compute[248510]: 2025-12-13 08:56:51.991 248514 DEBUG nova.network.neutron [req-85989637-cb8e-4331-b528-4e644ba1e9e3 req-3fbf626b-5bff-4858-bd2f-98b304d79804 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Updated VIF entry in instance network info cache for port 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:56:51 np0005558241 nova_compute[248510]: 2025-12-13 08:56:51.992 248514 DEBUG nova.network.neutron [req-85989637-cb8e-4331-b528-4e644ba1e9e3 req-3fbf626b-5bff-4858-bd2f-98b304d79804 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Updating instance_info_cache with network_info: [{"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:56:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:51.999 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef60de09-edd9-4a88-b237-d47229e9ce00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.042 248514 DEBUG oslo_concurrency.lockutils [req-85989637-cb8e-4331-b528-4e644ba1e9e3 req-3fbf626b-5bff-4858-bd2f-98b304d79804 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:52.061 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0add1cb0-19c7-4ae8-b399-d060195edaa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:52.062 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap793ba3c3-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:52.062 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:52.062 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap793ba3c3-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.064 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:52 np0005558241 NetworkManager[50376]: <info>  [1765616212.0650] manager: (tap793ba3c3-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/492)
Dec 13 03:56:52 np0005558241 kernel: tap793ba3c3-10: entered promiscuous mode
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:52.068 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap793ba3c3-10, col_values=(('external_ids', {'iface-id': 'e25ab14d-8bf6-4007-ae4c-085df43b875d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:56:52 np0005558241 ovn_controller[148476]: 2025-12-13T08:56:52Z|01182|binding|INFO|Releasing lport e25ab14d-8bf6-4007-ae4c-085df43b875d from this chassis (sb_readonly=0)
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.083 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:52.086 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/793ba3c3-1004-4068-a89a-dc7b4c56fc43.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/793ba3c3-1004-4068-a89a-dc7b4c56fc43.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:52.087 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef16ef10-5d26-473a-bc2e-b16c071529a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:52.088 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-793ba3c3-1004-4068-a89a-dc7b4c56fc43
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/793ba3c3-1004-4068-a89a-dc7b4c56fc43.pid.haproxy
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 793ba3c3-1004-4068-a89a-dc7b4c56fc43
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:56:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:52.089 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'env', 'PROCESS_TAG=haproxy-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/793ba3c3-1004-4068-a89a-dc7b4c56fc43.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.199 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616212.1988218, 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.199 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] VM Started (Lifecycle Event)#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.272 248514 DEBUG nova.compute.manager [req-0150302b-9299-4ae7-8443-32f4d910d049 req-dcc28e09-55be-41be-a8a9-cafe02804cbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received event network-vif-plugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.272 248514 DEBUG oslo_concurrency.lockutils [req-0150302b-9299-4ae7-8443-32f4d910d049 req-dcc28e09-55be-41be-a8a9-cafe02804cbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.273 248514 DEBUG oslo_concurrency.lockutils [req-0150302b-9299-4ae7-8443-32f4d910d049 req-dcc28e09-55be-41be-a8a9-cafe02804cbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.273 248514 DEBUG oslo_concurrency.lockutils [req-0150302b-9299-4ae7-8443-32f4d910d049 req-dcc28e09-55be-41be-a8a9-cafe02804cbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.273 248514 DEBUG nova.compute.manager [req-0150302b-9299-4ae7-8443-32f4d910d049 req-dcc28e09-55be-41be-a8a9-cafe02804cbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Processing event network-vif-plugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.274 248514 DEBUG nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:56:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2865: 321 pgs: 321 active+clean; 264 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 765 KiB/s rd, 2.0 MiB/s wr, 61 op/s
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.278 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.282 248514 INFO nova.virt.libvirt.driver [-] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Instance spawned successfully.#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.282 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.294 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.300 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.320 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.321 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.322 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.323 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.324 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.325 248514 DEBUG nova.virt.libvirt.driver [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.333 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.334 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616212.1990333, 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.334 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.378 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.382 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616212.277355, 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.383 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.420 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.424 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.429 248514 INFO nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Took 9.18 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.430 248514 DEBUG nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.445 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.515 248514 INFO nova.compute.manager [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Took 10.41 seconds to build instance.#033[00m
Dec 13 03:56:52 np0005558241 podman[368732]: 2025-12-13 08:56:52.530372325 +0000 UTC m=+0.060736874 container create 5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.567 248514 DEBUG oslo_concurrency.lockutils [None req-1cecfb7d-f821-4085-a93f-e4592d9007d7 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:52 np0005558241 systemd[1]: Started libpod-conmon-5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17.scope.
Dec 13 03:56:52 np0005558241 podman[368732]: 2025-12-13 08:56:52.498642849 +0000 UTC m=+0.029007428 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:56:52 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:56:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad8c437fd7f83a96f7ee83b44aeba75a23697638d72f15b7ac660f7308e1111/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:52 np0005558241 podman[368732]: 2025-12-13 08:56:52.630961477 +0000 UTC m=+0.161326056 container init 5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 03:56:52 np0005558241 podman[368732]: 2025-12-13 08:56:52.636987298 +0000 UTC m=+0.167351847 container start 5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:56:52 np0005558241 nova_compute[248510]: 2025-12-13 08:56:52.647 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:52 np0005558241 neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43[368747]: [NOTICE]   (368751) : New worker (368753) forked
Dec 13 03:56:52 np0005558241 neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43[368747]: [NOTICE]   (368751) : Loading success.
Dec 13 03:56:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2866: 321 pgs: 321 active+clean; 264 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Dec 13 03:56:54 np0005558241 nova_compute[248510]: 2025-12-13 08:56:54.392 248514 DEBUG nova.compute.manager [req-e006aa1d-69c4-4d8c-bd3c-73c8c0e535ba req-0cdcf6dd-00f0-4143-8397-bc33fade5d00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received event network-vif-plugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:54 np0005558241 nova_compute[248510]: 2025-12-13 08:56:54.392 248514 DEBUG oslo_concurrency.lockutils [req-e006aa1d-69c4-4d8c-bd3c-73c8c0e535ba req-0cdcf6dd-00f0-4143-8397-bc33fade5d00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:54 np0005558241 nova_compute[248510]: 2025-12-13 08:56:54.393 248514 DEBUG oslo_concurrency.lockutils [req-e006aa1d-69c4-4d8c-bd3c-73c8c0e535ba req-0cdcf6dd-00f0-4143-8397-bc33fade5d00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:54 np0005558241 nova_compute[248510]: 2025-12-13 08:56:54.394 248514 DEBUG oslo_concurrency.lockutils [req-e006aa1d-69c4-4d8c-bd3c-73c8c0e535ba req-0cdcf6dd-00f0-4143-8397-bc33fade5d00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:54 np0005558241 nova_compute[248510]: 2025-12-13 08:56:54.394 248514 DEBUG nova.compute.manager [req-e006aa1d-69c4-4d8c-bd3c-73c8c0e535ba req-0cdcf6dd-00f0-4143-8397-bc33fade5d00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] No waiting events found dispatching network-vif-plugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:56:54 np0005558241 nova_compute[248510]: 2025-12-13 08:56:54.395 248514 WARNING nova.compute.manager [req-e006aa1d-69c4-4d8c-bd3c-73c8c0e535ba req-0cdcf6dd-00f0-4143-8397-bc33fade5d00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received unexpected event network-vif-plugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:56:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:55.434 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:56:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:55.435 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:56:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:56:55.436 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:56:55 np0005558241 nova_compute[248510]: 2025-12-13 08:56:55.622 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:55 np0005558241 nova_compute[248510]: 2025-12-13 08:56:55.660 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:56:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2867: 321 pgs: 321 active+clean; 264 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Dec 13 03:56:57 np0005558241 nova_compute[248510]: 2025-12-13 08:56:57.651 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:56:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2868: 321 pgs: 321 active+clean; 264 MiB data, 922 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Dec 13 03:56:58 np0005558241 nova_compute[248510]: 2025-12-13 08:56:58.285 248514 DEBUG nova.compute.manager [req-0e75a6cf-600b-4dfb-8beb-20f7bb032926 req-26a02d7a-45da-4369-8f9d-c867cf6a8a0b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received event network-changed-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:56:58 np0005558241 nova_compute[248510]: 2025-12-13 08:56:58.286 248514 DEBUG nova.compute.manager [req-0e75a6cf-600b-4dfb-8beb-20f7bb032926 req-26a02d7a-45da-4369-8f9d-c867cf6a8a0b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Refreshing instance network info cache due to event network-changed-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:56:58 np0005558241 nova_compute[248510]: 2025-12-13 08:56:58.286 248514 DEBUG oslo_concurrency.lockutils [req-0e75a6cf-600b-4dfb-8beb-20f7bb032926 req-26a02d7a-45da-4369-8f9d-c867cf6a8a0b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:56:58 np0005558241 nova_compute[248510]: 2025-12-13 08:56:58.286 248514 DEBUG oslo_concurrency.lockutils [req-0e75a6cf-600b-4dfb-8beb-20f7bb032926 req-26a02d7a-45da-4369-8f9d-c867cf6a8a0b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:56:58 np0005558241 nova_compute[248510]: 2025-12-13 08:56:58.287 248514 DEBUG nova.network.neutron [req-0e75a6cf-600b-4dfb-8beb-20f7bb032926 req-26a02d7a-45da-4369-8f9d-c867cf6a8a0b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Refreshing network info cache for port 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:57:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2869: 321 pgs: 321 active+clean; 268 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 652 KiB/s wr, 94 op/s
Dec 13 03:57:00 np0005558241 nova_compute[248510]: 2025-12-13 08:57:00.664 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:57:01 np0005558241 nova_compute[248510]: 2025-12-13 08:57:01.001 248514 DEBUG nova.network.neutron [req-0e75a6cf-600b-4dfb-8beb-20f7bb032926 req-26a02d7a-45da-4369-8f9d-c867cf6a8a0b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Updated VIF entry in instance network info cache for port 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:57:01 np0005558241 nova_compute[248510]: 2025-12-13 08:57:01.002 248514 DEBUG nova.network.neutron [req-0e75a6cf-600b-4dfb-8beb-20f7bb032926 req-26a02d7a-45da-4369-8f9d-c867cf6a8a0b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Updating instance_info_cache with network_info: [{"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:57:01 np0005558241 nova_compute[248510]: 2025-12-13 08:57:01.052 248514 DEBUG oslo_concurrency.lockutils [req-0e75a6cf-600b-4dfb-8beb-20f7bb032926 req-26a02d7a-45da-4369-8f9d-c867cf6a8a0b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:57:01 np0005558241 nova_compute[248510]: 2025-12-13 08:57:01.645 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2870: 321 pgs: 321 active+clean; 268 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 208 KiB/s wr, 78 op/s
Dec 13 03:57:02 np0005558241 nova_compute[248510]: 2025-12-13 08:57:02.651 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:02 np0005558241 nova_compute[248510]: 2025-12-13 08:57:02.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:57:03 np0005558241 podman[368765]: 2025-12-13 08:57:03.072969334 +0000 UTC m=+0.152662607 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 03:57:03 np0005558241 podman[368766]: 2025-12-13 08:57:03.073367484 +0000 UTC m=+0.147471877 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:57:03 np0005558241 podman[368764]: 2025-12-13 08:57:03.108344392 +0000 UTC m=+0.187434050 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:57:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2871: 321 pgs: 321 active+clean; 278 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 97 op/s
Dec 13 03:57:05 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:05Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:a9:be 10.100.0.9
Dec 13 03:57:05 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:05Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:a9:be 10.100.0.9
Dec 13 03:57:05 np0005558241 nova_compute[248510]: 2025-12-13 08:57:05.666 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:57:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2872: 321 pgs: 321 active+clean; 278 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.1 MiB/s wr, 53 op/s
Dec 13 03:57:06 np0005558241 nova_compute[248510]: 2025-12-13 08:57:06.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:57:07 np0005558241 nova_compute[248510]: 2025-12-13 08:57:07.654 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2873: 321 pgs: 321 active+clean; 290 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 89 op/s
Dec 13 03:57:08 np0005558241 nova_compute[248510]: 2025-12-13 08:57:08.726 248514 DEBUG nova.compute.manager [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:57:08 np0005558241 nova_compute[248510]: 2025-12-13 08:57:08.790 248514 INFO nova.compute.manager [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] instance snapshotting#033[00m
Dec 13 03:57:09 np0005558241 nova_compute[248510]: 2025-12-13 08:57:09.153 248514 INFO nova.virt.libvirt.driver [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Beginning live snapshot process#033[00m
Dec 13 03:57:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:57:09
Dec 13 03:57:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:57:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:57:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'backups', 'default.rgw.control', 'default.rgw.log', 'vms', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta']
Dec 13 03:57:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:57:09 np0005558241 nova_compute[248510]: 2025-12-13 08:57:09.504 248514 DEBUG nova.storage.rbd_utils [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] creating snapshot(adb21fd68fa744dabbb257505c9debf2) on rbd image(4887eb43-1570-49a5-b20e-326af1e84a7b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:57:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:57:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:57:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:57:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:57:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:57:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:57:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2874: 321 pgs: 321 active+clean; 300 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 2.3 MiB/s wr, 72 op/s
Dec 13 03:57:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Dec 13 03:57:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Dec 13 03:57:10 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Dec 13 03:57:10 np0005558241 nova_compute[248510]: 2025-12-13 08:57:10.491 248514 DEBUG nova.storage.rbd_utils [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] cloning vms/4887eb43-1570-49a5-b20e-326af1e84a7b_disk@adb21fd68fa744dabbb257505c9debf2 to images/67e4474c-b70a-4aca-89b4-597c5be29fd3 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Dec 13 03:57:10 np0005558241 nova_compute[248510]: 2025-12-13 08:57:10.600 248514 DEBUG nova.storage.rbd_utils [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] flattening images/67e4474c-b70a-4aca-89b4-597c5be29fd3 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Dec 13 03:57:10 np0005558241 nova_compute[248510]: 2025-12-13 08:57:10.670 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:57:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:57:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:57:10 np0005558241 nova_compute[248510]: 2025-12-13 08:57:10.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:11 np0005558241 nova_compute[248510]: 2025-12-13 08:57:11.269 248514 DEBUG nova.storage.rbd_utils [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] removing snapshot(adb21fd68fa744dabbb257505c9debf2) on rbd image(4887eb43-1570-49a5-b20e-326af1e84a7b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Dec 13 03:57:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Dec 13 03:57:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Dec 13 03:57:11 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Dec 13 03:57:11 np0005558241 nova_compute[248510]: 2025-12-13 08:57:11.483 248514 DEBUG nova.storage.rbd_utils [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] creating snapshot(snap) on rbd image(67e4474c-b70a-4aca-89b4-597c5be29fd3) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Dec 13 03:57:11 np0005558241 nova_compute[248510]: 2025-12-13 08:57:11.562 248514 INFO nova.compute.manager [None req-7e39d1b3-ccdc-4dd0-bbc2-6a4e83da40e9 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Get console output#033[00m
Dec 13 03:57:11 np0005558241 nova_compute[248510]: 2025-12-13 08:57:11.571 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 03:57:11 np0005558241 nova_compute[248510]: 2025-12-13 08:57:11.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:57:11 np0005558241 nova_compute[248510]: 2025-12-13 08:57:11.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:57:12 np0005558241 nova_compute[248510]: 2025-12-13 08:57:12.058 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:57:12 np0005558241 nova_compute[248510]: 2025-12-13 08:57:12.059 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:57:12 np0005558241 nova_compute[248510]: 2025-12-13 08:57:12.059 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:57:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2877: 321 pgs: 321 active+clean; 300 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 1.9 MiB/s wr, 74 op/s
Dec 13 03:57:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Dec 13 03:57:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Dec 13 03:57:12 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Dec 13 03:57:12 np0005558241 nova_compute[248510]: 2025-12-13 08:57:12.658 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:14 np0005558241 nova_compute[248510]: 2025-12-13 08:57:14.053 248514 DEBUG nova.compute.manager [req-290873ad-1b0b-44f2-9f01-1db6fded09b0 req-51c7ba2a-38c3-4f87-b3fc-471e8600c217 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received event network-changed-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:57:14 np0005558241 nova_compute[248510]: 2025-12-13 08:57:14.054 248514 DEBUG nova.compute.manager [req-290873ad-1b0b-44f2-9f01-1db6fded09b0 req-51c7ba2a-38c3-4f87-b3fc-471e8600c217 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Refreshing instance network info cache due to event network-changed-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:57:14 np0005558241 nova_compute[248510]: 2025-12-13 08:57:14.055 248514 DEBUG oslo_concurrency.lockutils [req-290873ad-1b0b-44f2-9f01-1db6fded09b0 req-51c7ba2a-38c3-4f87-b3fc-471e8600c217 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:57:14 np0005558241 nova_compute[248510]: 2025-12-13 08:57:14.055 248514 DEBUG oslo_concurrency.lockutils [req-290873ad-1b0b-44f2-9f01-1db6fded09b0 req-51c7ba2a-38c3-4f87-b3fc-471e8600c217 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:57:14 np0005558241 nova_compute[248510]: 2025-12-13 08:57:14.056 248514 DEBUG nova.network.neutron [req-290873ad-1b0b-44f2-9f01-1db6fded09b0 req-51c7ba2a-38c3-4f87-b3fc-471e8600c217 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Refreshing network info cache for port 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:57:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2879: 321 pgs: 321 active+clean; 402 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 202 op/s
Dec 13 03:57:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:57:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2136173477' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:57:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:15.081 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:57:15 np0005558241 nova_compute[248510]: 2025-12-13 08:57:15.081 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:57:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2136173477' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:57:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:15.084 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:57:15 np0005558241 nova_compute[248510]: 2025-12-13 08:57:15.488 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updating instance_info_cache with network_info: [{"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:57:15 np0005558241 nova_compute[248510]: 2025-12-13 08:57:15.519 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:57:15 np0005558241 nova_compute[248510]: 2025-12-13 08:57:15.520 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:57:15 np0005558241 nova_compute[248510]: 2025-12-13 08:57:15.520 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:57:15 np0005558241 nova_compute[248510]: 2025-12-13 08:57:15.719 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:15 np0005558241 nova_compute[248510]: 2025-12-13 08:57:15.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:57:15 np0005558241 nova_compute[248510]: 2025-12-13 08:57:15.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:15 np0005558241 nova_compute[248510]: 2025-12-13 08:57:15.808 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:15 np0005558241 nova_compute[248510]: 2025-12-13 08:57:15.808 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:15 np0005558241 nova_compute[248510]: 2025-12-13 08:57:15.808 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:57:15 np0005558241 nova_compute[248510]: 2025-12-13 08:57:15.809 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:57:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:57:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Dec 13 03:57:15 np0005558241 nova_compute[248510]: 2025-12-13 08:57:15.860 248514 INFO nova.virt.libvirt.driver [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Snapshot image upload complete#033[00m
Dec 13 03:57:15 np0005558241 nova_compute[248510]: 2025-12-13 08:57:15.861 248514 INFO nova.compute.manager [None req-645990ce-fb6f-4f9e-a1b8-6f4273511c92 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Took 7.07 seconds to snapshot the instance on the hypervisor.#033[00m
Dec 13 03:57:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Dec 13 03:57:15 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Dec 13 03:57:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2881: 321 pgs: 321 active+clean; 402 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 15 MiB/s wr, 180 op/s
Dec 13 03:57:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:57:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3341596593' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.454 248514 DEBUG nova.network.neutron [req-290873ad-1b0b-44f2-9f01-1db6fded09b0 req-51c7ba2a-38c3-4f87-b3fc-471e8600c217 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Updated VIF entry in instance network info cache for port 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.455 248514 DEBUG nova.network.neutron [req-290873ad-1b0b-44f2-9f01-1db6fded09b0 req-51c7ba2a-38c3-4f87-b3fc-471e8600c217 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Updating instance_info_cache with network_info: [{"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.457 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.483 248514 DEBUG oslo_concurrency.lockutils [req-290873ad-1b0b-44f2-9f01-1db6fded09b0 req-51c7ba2a-38c3-4f87-b3fc-471e8600c217 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.551 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.551 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.557 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.558 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.562 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.562 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.721 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.723 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2960MB free_disk=59.88791993074119GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.723 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.724 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.817 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 75f348ef-4044-47a1-ba1b-f1b66513450c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.818 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 4887eb43-1570-49a5-b20e-326af1e84a7b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.818 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.819 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.819 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.837 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.856 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.857 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.876 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 13 03:57:16 np0005558241 nova_compute[248510]: 2025-12-13 08:57:16.897 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 13 03:57:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Dec 13 03:57:17 np0005558241 nova_compute[248510]: 2025-12-13 08:57:17.024 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:57:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:17.086 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Dec 13 03:57:17 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Dec 13 03:57:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:57:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1738394164' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:57:17 np0005558241 nova_compute[248510]: 2025-12-13 08:57:17.616 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:57:17 np0005558241 nova_compute[248510]: 2025-12-13 08:57:17.622 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:57:17 np0005558241 nova_compute[248510]: 2025-12-13 08:57:17.654 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:57:17 np0005558241 nova_compute[248510]: 2025-12-13 08:57:17.659 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:17 np0005558241 nova_compute[248510]: 2025-12-13 08:57:17.679 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:57:17 np0005558241 nova_compute[248510]: 2025-12-13 08:57:17.680 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2883: 321 pgs: 321 active+clean; 349 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 15 MiB/s wr, 233 op/s
Dec 13 03:57:18 np0005558241 nova_compute[248510]: 2025-12-13 08:57:18.660 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.232 248514 DEBUG nova.compute.manager [req-8f74a312-b8c9-46f8-97f9-15d2e2be5288 req-f42c9ee2-e3d4-4538-8aec-c34c490652c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received event network-changed-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.233 248514 DEBUG nova.compute.manager [req-8f74a312-b8c9-46f8-97f9-15d2e2be5288 req-f42c9ee2-e3d4-4538-8aec-c34c490652c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Refreshing instance network info cache due to event network-changed-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.233 248514 DEBUG oslo_concurrency.lockutils [req-8f74a312-b8c9-46f8-97f9-15d2e2be5288 req-f42c9ee2-e3d4-4538-8aec-c34c490652c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.233 248514 DEBUG oslo_concurrency.lockutils [req-8f74a312-b8c9-46f8-97f9-15d2e2be5288 req-f42c9ee2-e3d4-4538-8aec-c34c490652c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.234 248514 DEBUG nova.network.neutron [req-8f74a312-b8c9-46f8-97f9-15d2e2be5288 req-f42c9ee2-e3d4-4538-8aec-c34c490652c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Refreshing network info cache for port 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.348 248514 DEBUG oslo_concurrency.lockutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "4887eb43-1570-49a5-b20e-326af1e84a7b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.348 248514 DEBUG oslo_concurrency.lockutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.349 248514 DEBUG oslo_concurrency.lockutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.349 248514 DEBUG oslo_concurrency.lockutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.350 248514 DEBUG oslo_concurrency.lockutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.351 248514 INFO nova.compute.manager [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Terminating instance#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.352 248514 DEBUG nova.compute.manager [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:57:19 np0005558241 kernel: tap0b3a1c67-12 (unregistering): left promiscuous mode
Dec 13 03:57:19 np0005558241 NetworkManager[50376]: <info>  [1765616239.4059] device (tap0b3a1c67-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.415 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:19 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:19Z|01183|binding|INFO|Releasing lport 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a from this chassis (sb_readonly=0)
Dec 13 03:57:19 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:19Z|01184|binding|INFO|Setting lport 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a down in Southbound
Dec 13 03:57:19 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:19Z|01185|binding|INFO|Removing iface tap0b3a1c67-12 ovn-installed in OVS
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.419 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.427 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:e5:67 10.100.0.9'], port_security=['fa:16:3e:da:e5:67 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4887eb43-1570-49a5-b20e-326af1e84a7b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c21c2eb2d0c4465ae562381f358fbd8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e1e2d18-e674-4ee6-b553-a8676e40259f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c30b2fc-21fd-4778-91da-98384d5e05df, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:57:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.428 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a in datapath d2ff4cff-54cc-40c6-a486-7e7532c2462b unbound from our chassis#033[00m
Dec 13 03:57:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.430 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2ff4cff-54cc-40c6-a486-7e7532c2462b#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.433 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.449 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4d90637c-a2f1-4190-ad38-657991f08c6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:19 np0005558241 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000078.scope: Deactivated successfully.
Dec 13 03:57:19 np0005558241 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000078.scope: Consumed 15.438s CPU time.
Dec 13 03:57:19 np0005558241 systemd-machined[210538]: Machine qemu-147-instance-00000078 terminated.
Dec 13 03:57:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.478 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[458bc9d7-e730-45fe-b64a-038842b1089f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.481 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2e40f8eb-3ad5-4cd2-bec5-1a8ab9255bf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.507 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf04452-ec5c-4b45-9f97-3f0f28b22b80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.523 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cc273975-8cb6-487d-adc8-60f438bbbed5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2ff4cff-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:85:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 340], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 871447, 'reachable_time': 40005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369027, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.538 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed84153-0743-47ae-a4ad-dcb870d14c46]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd2ff4cff-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 871459, 'tstamp': 871459}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369028, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd2ff4cff-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 871463, 'tstamp': 871463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369028, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.539 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2ff4cff-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.545 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.545 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2ff4cff-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.545 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:57:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.546 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2ff4cff-50, col_values=(('external_ids', {'iface-id': '47f45749-b232-4d0c-bf37-be042ea606c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:19.546 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.593 248514 INFO nova.virt.libvirt.driver [-] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Instance destroyed successfully.#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.594 248514 DEBUG nova.objects.instance [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lazy-loading 'resources' on Instance uuid 4887eb43-1570-49a5-b20e-326af1e84a7b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.612 248514 DEBUG nova.virt.libvirt.vif [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:56:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1968716864',display_name='tempest-TestSnapshotPattern-server-1968716864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1968716864',id=120,image_ref='bc45ce83-2d30-4107-90ce-9a9307d84fab',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMAc1KgPQ7YZO/wn247Z4w7R3iPp09OVvor/iF1+jUHxvgVaItQTtOvsdjy6HTDjQXAsMtHqR3YUaichzeZD01aIcPKJkcVch2R8490ZPNprDBfXeJ2j92c1nlW1ddLVg==',key_name='tempest-TestSnapshotPattern-2059242288',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:56:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6c21c2eb2d0c4465ae562381f358fbd8',ramdisk_id='',reservation_id='r-ejrogks3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='75f348ef-4044-47a1-ba1b-f1b66513450c',image_min_disk='1',image_min_ram='0',image_owner_id='6c21c2eb2d0c4465ae562381f358fbd8',image_owner_project_name='tempest-TestSnapshotPattern-1494512648',image_owner_user_name='tempest-TestSnapshotPattern-1494512648-project-member',image_user_id='81fb01d9d08845c3b626079ab726db7a',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-1494512648',owner_user_name='tempest-TestSnapshotPattern-1494512648-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:57:15Z,user_data=None,user_id='81fb01d9d08845c3b626079ab726db7a',uuid=4887eb43-1570-49a5-b20e-326af1e84a7b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.612 248514 DEBUG nova.network.os_vif_util [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converting VIF {"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.613 248514 DEBUG nova.network.os_vif_util [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:e5:67,bridge_name='br-int',has_traffic_filtering=True,id=0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b3a1c67-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.613 248514 DEBUG os_vif [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:e5:67,bridge_name='br-int',has_traffic_filtering=True,id=0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b3a1c67-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.615 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.615 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b3a1c67-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.617 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.618 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.621 248514 INFO os_vif [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:e5:67,bridge_name='br-int',has_traffic_filtering=True,id=0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b3a1c67-12')#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.680 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.875 248514 INFO nova.virt.libvirt.driver [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Deleting instance files /var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b_del#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.876 248514 INFO nova.virt.libvirt.driver [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Deletion of /var/lib/nova/instances/4887eb43-1570-49a5-b20e-326af1e84a7b_del complete#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.955 248514 INFO nova.compute.manager [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.957 248514 DEBUG oslo.service.loopingcall [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.957 248514 DEBUG nova.compute.manager [-] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:57:19 np0005558241 nova_compute[248510]: 2025-12-13 08:57:19.958 248514 DEBUG nova.network.neutron [-] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:57:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2884: 321 pgs: 321 active+clean; 300 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 11 MiB/s wr, 186 op/s
Dec 13 03:57:20 np0005558241 nova_compute[248510]: 2025-12-13 08:57:20.326 248514 DEBUG nova.compute.manager [req-071329f1-1cb9-492c-804d-b652a7fceb07 req-0fb6cb96-587c-4dc2-b593-ce42deab695e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received event network-vif-unplugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:57:20 np0005558241 nova_compute[248510]: 2025-12-13 08:57:20.326 248514 DEBUG oslo_concurrency.lockutils [req-071329f1-1cb9-492c-804d-b652a7fceb07 req-0fb6cb96-587c-4dc2-b593-ce42deab695e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:20 np0005558241 nova_compute[248510]: 2025-12-13 08:57:20.327 248514 DEBUG oslo_concurrency.lockutils [req-071329f1-1cb9-492c-804d-b652a7fceb07 req-0fb6cb96-587c-4dc2-b593-ce42deab695e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:20 np0005558241 nova_compute[248510]: 2025-12-13 08:57:20.327 248514 DEBUG oslo_concurrency.lockutils [req-071329f1-1cb9-492c-804d-b652a7fceb07 req-0fb6cb96-587c-4dc2-b593-ce42deab695e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:20 np0005558241 nova_compute[248510]: 2025-12-13 08:57:20.327 248514 DEBUG nova.compute.manager [req-071329f1-1cb9-492c-804d-b652a7fceb07 req-0fb6cb96-587c-4dc2-b593-ce42deab695e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] No waiting events found dispatching network-vif-unplugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:57:20 np0005558241 nova_compute[248510]: 2025-12-13 08:57:20.327 248514 DEBUG nova.compute.manager [req-071329f1-1cb9-492c-804d-b652a7fceb07 req-0fb6cb96-587c-4dc2-b593-ce42deab695e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received event network-vif-unplugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:57:20 np0005558241 nova_compute[248510]: 2025-12-13 08:57:20.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:57:20 np0005558241 nova_compute[248510]: 2025-12-13 08:57:20.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:57:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:57:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Dec 13 03:57:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Dec 13 03:57:20 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Dec 13 03:57:21 np0005558241 nova_compute[248510]: 2025-12-13 08:57:21.004 248514 DEBUG nova.network.neutron [req-8f74a312-b8c9-46f8-97f9-15d2e2be5288 req-f42c9ee2-e3d4-4538-8aec-c34c490652c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Updated VIF entry in instance network info cache for port 0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:57:21 np0005558241 nova_compute[248510]: 2025-12-13 08:57:21.005 248514 DEBUG nova.network.neutron [req-8f74a312-b8c9-46f8-97f9-15d2e2be5288 req-f42c9ee2-e3d4-4538-8aec-c34c490652c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Updating instance_info_cache with network_info: [{"id": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "address": "fa:16:3e:da:e5:67", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b3a1c67-12", "ovs_interfaceid": "0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:57:21 np0005558241 nova_compute[248510]: 2025-12-13 08:57:21.032 248514 DEBUG oslo_concurrency.lockutils [req-8f74a312-b8c9-46f8-97f9-15d2e2be5288 req-f42c9ee2-e3d4-4538-8aec-c34c490652c6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-4887eb43-1570-49a5-b20e-326af1e84a7b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001673713189639438 of space, bias 1.0, pg target 0.5021139568918314 quantized to 32 (current 32)
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014274589188080123 of space, bias 1.0, pg target 0.42823767564240367 quantized to 32 (current 32)
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.734031256454457e-07 of space, bias 4.0, pg target 0.0006880837507745348 quantized to 16 (current 32)
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:57:21 np0005558241 nova_compute[248510]: 2025-12-13 08:57:21.501 248514 DEBUG nova.network.neutron [-] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:57:21 np0005558241 nova_compute[248510]: 2025-12-13 08:57:21.523 248514 INFO nova.compute.manager [-] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Took 1.57 seconds to deallocate network for instance.#033[00m
Dec 13 03:57:21 np0005558241 nova_compute[248510]: 2025-12-13 08:57:21.606 248514 DEBUG oslo_concurrency.lockutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:21 np0005558241 nova_compute[248510]: 2025-12-13 08:57:21.606 248514 DEBUG oslo_concurrency.lockutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2886: 321 pgs: 321 active+clean; 300 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 8.6 KiB/s wr, 62 op/s
Dec 13 03:57:22 np0005558241 nova_compute[248510]: 2025-12-13 08:57:22.453 248514 DEBUG nova.compute.manager [req-fa2c2a85-73dc-455c-8fea-06570740861f req-c15995e4-01a3-472d-9d1d-2fd017ba8be1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received event network-vif-plugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:57:22 np0005558241 nova_compute[248510]: 2025-12-13 08:57:22.454 248514 DEBUG oslo_concurrency.lockutils [req-fa2c2a85-73dc-455c-8fea-06570740861f req-c15995e4-01a3-472d-9d1d-2fd017ba8be1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:22 np0005558241 nova_compute[248510]: 2025-12-13 08:57:22.454 248514 DEBUG oslo_concurrency.lockutils [req-fa2c2a85-73dc-455c-8fea-06570740861f req-c15995e4-01a3-472d-9d1d-2fd017ba8be1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:22 np0005558241 nova_compute[248510]: 2025-12-13 08:57:22.454 248514 DEBUG oslo_concurrency.lockutils [req-fa2c2a85-73dc-455c-8fea-06570740861f req-c15995e4-01a3-472d-9d1d-2fd017ba8be1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:22 np0005558241 nova_compute[248510]: 2025-12-13 08:57:22.454 248514 DEBUG nova.compute.manager [req-fa2c2a85-73dc-455c-8fea-06570740861f req-c15995e4-01a3-472d-9d1d-2fd017ba8be1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] No waiting events found dispatching network-vif-plugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:57:22 np0005558241 nova_compute[248510]: 2025-12-13 08:57:22.455 248514 WARNING nova.compute.manager [req-fa2c2a85-73dc-455c-8fea-06570740861f req-c15995e4-01a3-472d-9d1d-2fd017ba8be1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received unexpected event network-vif-plugged-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:57:22 np0005558241 nova_compute[248510]: 2025-12-13 08:57:22.455 248514 DEBUG nova.compute.manager [req-fa2c2a85-73dc-455c-8fea-06570740861f req-c15995e4-01a3-472d-9d1d-2fd017ba8be1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Received event network-vif-deleted-0b3a1c67-12ca-481e-b2e3-3ebf1aaaf44a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:57:22 np0005558241 nova_compute[248510]: 2025-12-13 08:57:22.661 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:23 np0005558241 nova_compute[248510]: 2025-12-13 08:57:23.019 248514 DEBUG oslo_concurrency.processutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:57:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:57:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2819567143' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:57:23 np0005558241 nova_compute[248510]: 2025-12-13 08:57:23.857 248514 DEBUG oslo_concurrency.processutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.838s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:57:23 np0005558241 nova_compute[248510]: 2025-12-13 08:57:23.864 248514 DEBUG nova.compute.provider_tree [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:57:23 np0005558241 nova_compute[248510]: 2025-12-13 08:57:23.932 248514 DEBUG nova.scheduler.client.report [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:57:24 np0005558241 nova_compute[248510]: 2025-12-13 08:57:24.103 248514 DEBUG oslo_concurrency.lockutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:24 np0005558241 nova_compute[248510]: 2025-12-13 08:57:24.108 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:24 np0005558241 nova_compute[248510]: 2025-12-13 08:57:24.109 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:24 np0005558241 nova_compute[248510]: 2025-12-13 08:57:24.155 248514 DEBUG nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:57:24 np0005558241 nova_compute[248510]: 2025-12-13 08:57:24.161 248514 INFO nova.scheduler.client.report [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Deleted allocations for instance 4887eb43-1570-49a5-b20e-326af1e84a7b#033[00m
Dec 13 03:57:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2887: 321 pgs: 321 active+clean; 279 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 10 KiB/s wr, 90 op/s
Dec 13 03:57:24 np0005558241 nova_compute[248510]: 2025-12-13 08:57:24.497 248514 DEBUG oslo_concurrency.lockutils [None req-56818c09-569f-48ca-895a-648935051c30 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "4887eb43-1570-49a5-b20e-326af1e84a7b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:24 np0005558241 nova_compute[248510]: 2025-12-13 08:57:24.530 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:24 np0005558241 nova_compute[248510]: 2025-12-13 08:57:24.531 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:24 np0005558241 nova_compute[248510]: 2025-12-13 08:57:24.541 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:57:24 np0005558241 nova_compute[248510]: 2025-12-13 08:57:24.542 248514 INFO nova.compute.claims [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:57:24 np0005558241 nova_compute[248510]: 2025-12-13 08:57:24.653 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:24 np0005558241 nova_compute[248510]: 2025-12-13 08:57:24.708 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:57:24 np0005558241 nova_compute[248510]: 2025-12-13 08:57:24.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:57:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:57:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1554119751' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.275 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.282 248514 DEBUG nova.compute.provider_tree [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.305 248514 DEBUG nova.scheduler.client.report [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.341 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.342 248514 DEBUG nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.413 248514 DEBUG nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.414 248514 DEBUG nova.network.neutron [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.441 248514 INFO nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.620 248514 DEBUG nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.721 248514 DEBUG nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.723 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.723 248514 INFO nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Creating image(s)#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.744 248514 DEBUG nova.storage.rbd_utils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.766 248514 DEBUG nova.storage.rbd_utils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.790 248514 DEBUG nova.storage.rbd_utils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.794 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.872 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.873 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.874 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.874 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.897 248514 DEBUG nova.storage.rbd_utils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:57:25 np0005558241 nova_compute[248510]: 2025-12-13 08:57:25.900 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:57:26 np0005558241 nova_compute[248510]: 2025-12-13 08:57:26.016 248514 DEBUG nova.policy [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:57:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Dec 13 03:57:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Dec 13 03:57:26 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Dec 13 03:57:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2889: 321 pgs: 321 active+clean; 279 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 8.9 KiB/s wr, 46 op/s
Dec 13 03:57:27 np0005558241 nova_compute[248510]: 2025-12-13 08:57:27.664 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:27 np0005558241 nova_compute[248510]: 2025-12-13 08:57:27.934 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:57:28 np0005558241 nova_compute[248510]: 2025-12-13 08:57:28.000 248514 DEBUG nova.storage.rbd_utils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:57:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2890: 321 pgs: 321 active+clean; 298 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Dec 13 03:57:28 np0005558241 nova_compute[248510]: 2025-12-13 08:57:28.389 248514 DEBUG nova.objects.instance [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid 5a7142a0-6e82-418c-affe-88fd6beb2ad9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:57:28 np0005558241 nova_compute[248510]: 2025-12-13 08:57:28.423 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:57:28 np0005558241 nova_compute[248510]: 2025-12-13 08:57:28.424 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Ensure instance console log exists: /var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:57:28 np0005558241 nova_compute[248510]: 2025-12-13 08:57:28.424 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:28 np0005558241 nova_compute[248510]: 2025-12-13 08:57:28.425 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:28 np0005558241 nova_compute[248510]: 2025-12-13 08:57:28.425 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:28 np0005558241 nova_compute[248510]: 2025-12-13 08:57:28.771 248514 DEBUG nova.network.neutron [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Successfully created port: 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:57:29 np0005558241 nova_compute[248510]: 2025-12-13 08:57:29.655 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2891: 321 pgs: 321 active+clean; 285 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 2.3 MiB/s wr, 92 op/s
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.519 248514 DEBUG oslo_concurrency.lockutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "75f348ef-4044-47a1-ba1b-f1b66513450c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.519 248514 DEBUG oslo_concurrency.lockutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.519 248514 DEBUG oslo_concurrency.lockutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.520 248514 DEBUG oslo_concurrency.lockutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.520 248514 DEBUG oslo_concurrency.lockutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.521 248514 INFO nova.compute.manager [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Terminating instance#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.522 248514 DEBUG nova.compute.manager [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.524 248514 DEBUG nova.network.neutron [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Successfully updated port: 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.541 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-5a7142a0-6e82-418c-affe-88fd6beb2ad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.542 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-5a7142a0-6e82-418c-affe-88fd6beb2ad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.542 248514 DEBUG nova.network.neutron [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:57:30 np0005558241 kernel: tap63a84e8b-c1 (unregistering): left promiscuous mode
Dec 13 03:57:30 np0005558241 NetworkManager[50376]: <info>  [1765616250.5765] device (tap63a84e8b-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:57:30 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:30Z|01186|binding|INFO|Releasing lport 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed from this chassis (sb_readonly=0)
Dec 13 03:57:30 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:30Z|01187|binding|INFO|Setting lport 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed down in Southbound
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.586 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:30 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:30Z|01188|binding|INFO|Removing iface tap63a84e8b-c1 ovn-installed in OVS
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.588 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.599 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:e7:bf 10.100.0.3'], port_security=['fa:16:3e:7d:e7:bf 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '75f348ef-4044-47a1-ba1b-f1b66513450c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c21c2eb2d0c4465ae562381f358fbd8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e1e2d18-e674-4ee6-b553-a8676e40259f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c30b2fc-21fd-4778-91da-98384d5e05df, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:57:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.600 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed in datapath d2ff4cff-54cc-40c6-a486-7e7532c2462b unbound from our chassis#033[00m
Dec 13 03:57:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.601 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2ff4cff-54cc-40c6-a486-7e7532c2462b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:57:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.602 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[47f6db1d-7e87-4c94-a7ab-943594c11fa9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.603 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b namespace which is not needed anymore#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.603 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.623 248514 DEBUG nova.compute.manager [req-34c31287-afb5-4cdd-a6d6-79053f086f33 req-51e655ae-021e-4bc9-8e7f-dab52de6e8af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received event network-changed-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.623 248514 DEBUG nova.compute.manager [req-34c31287-afb5-4cdd-a6d6-79053f086f33 req-51e655ae-021e-4bc9-8e7f-dab52de6e8af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Refreshing instance network info cache due to event network-changed-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.624 248514 DEBUG oslo_concurrency.lockutils [req-34c31287-afb5-4cdd-a6d6-79053f086f33 req-51e655ae-021e-4bc9-8e7f-dab52de6e8af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.624 248514 DEBUG oslo_concurrency.lockutils [req-34c31287-afb5-4cdd-a6d6-79053f086f33 req-51e655ae-021e-4bc9-8e7f-dab52de6e8af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.624 248514 DEBUG nova.network.neutron [req-34c31287-afb5-4cdd-a6d6-79053f086f33 req-51e655ae-021e-4bc9-8e7f-dab52de6e8af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Refreshing network info cache for port 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:57:30 np0005558241 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000076.scope: Deactivated successfully.
Dec 13 03:57:30 np0005558241 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000076.scope: Consumed 17.089s CPU time.
Dec 13 03:57:30 np0005558241 systemd-machined[210538]: Machine qemu-145-instance-00000076 terminated.
Dec 13 03:57:30 np0005558241 neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b[365450]: [NOTICE]   (365455) : haproxy version is 2.8.14-c23fe91
Dec 13 03:57:30 np0005558241 neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b[365450]: [NOTICE]   (365455) : path to executable is /usr/sbin/haproxy
Dec 13 03:57:30 np0005558241 neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b[365450]: [WARNING]  (365455) : Exiting Master process...
Dec 13 03:57:30 np0005558241 neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b[365450]: [ALERT]    (365455) : Current worker (365457) exited with code 143 (Terminated)
Dec 13 03:57:30 np0005558241 neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b[365450]: [WARNING]  (365455) : All workers exited. Exiting... (0)
Dec 13 03:57:30 np0005558241 systemd[1]: libpod-5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e.scope: Deactivated successfully.
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.744 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.747 248514 DEBUG nova.network.neutron [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:57:30 np0005558241 podman[369298]: 2025-12-13 08:57:30.749730042 +0000 UTC m=+0.051444380 container died 5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.754 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.759 248514 INFO nova.virt.libvirt.driver [-] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Instance destroyed successfully.#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.760 248514 DEBUG nova.objects.instance [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lazy-loading 'resources' on Instance uuid 75f348ef-4044-47a1-ba1b-f1b66513450c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.780 248514 DEBUG nova.virt.libvirt.vif [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:55:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-608841865',display_name='tempest-TestSnapshotPattern-server-608841865',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-608841865',id=118,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOMAc1KgPQ7YZO/wn247Z4w7R3iPp09OVvor/iF1+jUHxvgVaItQTtOvsdjy6HTDjQXAsMtHqR3YUaichzeZD01aIcPKJkcVch2R8490ZPNprDBfXeJ2j92c1nlW1ddLVg==',key_name='tempest-TestSnapshotPattern-2059242288',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:55:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6c21c2eb2d0c4465ae562381f358fbd8',ramdisk_id='',reservation_id='r-dx9z8wv3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-1494512648',owner_user_name='tempest-TestSnapshotPattern-1494512648-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:56:10Z,user_data=None,user_id='81fb01d9d08845c3b626079ab726db7a',uuid=75f348ef-4044-47a1-ba1b-f1b66513450c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.781 248514 DEBUG nova.network.os_vif_util [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converting VIF {"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.782 248514 DEBUG nova.network.os_vif_util [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7d:e7:bf,bridge_name='br-int',has_traffic_filtering=True,id=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a84e8b-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.782 248514 DEBUG os_vif [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:e7:bf,bridge_name='br-int',has_traffic_filtering=True,id=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a84e8b-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.784 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.784 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63a84e8b-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.786 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.790 248514 INFO os_vif [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:e7:bf,bridge_name='br-int',has_traffic_filtering=True,id=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed,network=Network(d2ff4cff-54cc-40c6-a486-7e7532c2462b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63a84e8b-c1')#033[00m
Dec 13 03:57:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e-userdata-shm.mount: Deactivated successfully.
Dec 13 03:57:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay-fffb7936c063d506819829d9345338322d52c45114e0f00e692ced85a23dd5f4-merged.mount: Deactivated successfully.
Dec 13 03:57:30 np0005558241 podman[369298]: 2025-12-13 08:57:30.807877655 +0000 UTC m=+0.109591963 container cleanup 5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:57:30 np0005558241 systemd[1]: libpod-conmon-5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e.scope: Deactivated successfully.
Dec 13 03:57:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:57:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Dec 13 03:57:30 np0005558241 podman[369353]: 2025-12-13 08:57:30.881053006 +0000 UTC m=+0.049332628 container remove 5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:57:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Dec 13 03:57:30 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Dec 13 03:57:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.892 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[af702c3e-4b37-43dc-b556-9bda2d5d5d5c]: (4, ('Sat Dec 13 08:57:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b (5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e)\n5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e\nSat Dec 13 08:57:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b (5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e)\n5abb191a65c1c42004d0047f7c5ae5b2594cb12292bede1de338b89df2af073e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.894 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e288925a-0433-4556-909e-9e03d7492ccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.895 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2ff4cff-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.897 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:30 np0005558241 kernel: tapd2ff4cff-50: left promiscuous mode
Dec 13 03:57:30 np0005558241 nova_compute[248510]: 2025-12-13 08:57:30.913 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.916 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[516a6828-be0d-469d-b3d7-0dae56af2e00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.937 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[085413bd-0f7d-4dda-96ff-85a486b3a239]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.938 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9564dc0e-16e4-45f8-93a5-5906ceb09aa2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.958 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1e2c283a-9017-4e4c-aa48-1d6afd1847dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 871438, 'reachable_time': 37993, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369372, 'error': None, 'target': 'ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:30 np0005558241 systemd[1]: run-netns-ovnmeta\x2dd2ff4cff\x2d54cc\x2d40c6\x2da486\x2d7e7532c2462b.mount: Deactivated successfully.
Dec 13 03:57:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.962 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d2ff4cff-54cc-40c6-a486-7e7532c2462b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:57:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:30.962 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[f5988209-9b6d-4cf4-8664-11e9fff4e714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:31 np0005558241 nova_compute[248510]: 2025-12-13 08:57:31.075 248514 INFO nova.virt.libvirt.driver [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Deleting instance files /var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c_del#033[00m
Dec 13 03:57:31 np0005558241 nova_compute[248510]: 2025-12-13 08:57:31.076 248514 INFO nova.virt.libvirt.driver [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Deletion of /var/lib/nova/instances/75f348ef-4044-47a1-ba1b-f1b66513450c_del complete#033[00m
Dec 13 03:57:31 np0005558241 nova_compute[248510]: 2025-12-13 08:57:31.133 248514 INFO nova.compute.manager [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Took 0.61 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:57:31 np0005558241 nova_compute[248510]: 2025-12-13 08:57:31.133 248514 DEBUG oslo.service.loopingcall [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:57:31 np0005558241 nova_compute[248510]: 2025-12-13 08:57:31.134 248514 DEBUG nova.compute.manager [-] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:57:31 np0005558241 nova_compute[248510]: 2025-12-13 08:57:31.134 248514 DEBUG nova.network.neutron [-] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:57:31 np0005558241 nova_compute[248510]: 2025-12-13 08:57:31.738 248514 DEBUG nova.compute.manager [req-3fe769da-c1e8-475c-8787-5195d6301f28 req-9a48f7bc-a15f-4614-862a-d5ec54fb21bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received event network-vif-unplugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:57:31 np0005558241 nova_compute[248510]: 2025-12-13 08:57:31.738 248514 DEBUG oslo_concurrency.lockutils [req-3fe769da-c1e8-475c-8787-5195d6301f28 req-9a48f7bc-a15f-4614-862a-d5ec54fb21bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:31 np0005558241 nova_compute[248510]: 2025-12-13 08:57:31.739 248514 DEBUG oslo_concurrency.lockutils [req-3fe769da-c1e8-475c-8787-5195d6301f28 req-9a48f7bc-a15f-4614-862a-d5ec54fb21bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:31 np0005558241 nova_compute[248510]: 2025-12-13 08:57:31.740 248514 DEBUG oslo_concurrency.lockutils [req-3fe769da-c1e8-475c-8787-5195d6301f28 req-9a48f7bc-a15f-4614-862a-d5ec54fb21bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:31 np0005558241 nova_compute[248510]: 2025-12-13 08:57:31.740 248514 DEBUG nova.compute.manager [req-3fe769da-c1e8-475c-8787-5195d6301f28 req-9a48f7bc-a15f-4614-862a-d5ec54fb21bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] No waiting events found dispatching network-vif-unplugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:57:31 np0005558241 nova_compute[248510]: 2025-12-13 08:57:31.741 248514 DEBUG nova.compute.manager [req-3fe769da-c1e8-475c-8787-5195d6301f28 req-9a48f7bc-a15f-4614-862a-d5ec54fb21bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received event network-vif-unplugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.132 248514 DEBUG nova.network.neutron [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Updating instance_info_cache with network_info: [{"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.223 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-5a7142a0-6e82-418c-affe-88fd6beb2ad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.223 248514 DEBUG nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Instance network_info: |[{"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.225 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Start _get_guest_xml network_info=[{"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.229 248514 WARNING nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.234 248514 DEBUG nova.virt.libvirt.host [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.235 248514 DEBUG nova.virt.libvirt.host [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.238 248514 DEBUG nova.virt.libvirt.host [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.238 248514 DEBUG nova.virt.libvirt.host [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.239 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.239 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.240 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.240 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.240 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.240 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.241 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.241 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.241 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.241 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.242 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.242 248514 DEBUG nova.virt.hardware [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.246 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:57:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2893: 321 pgs: 321 active+clean; 285 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 2.7 MiB/s wr, 68 op/s
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.665 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.671 248514 DEBUG nova.network.neutron [-] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.724 248514 INFO nova.compute.manager [-] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Took 1.59 seconds to deallocate network for instance.#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.787 248514 DEBUG nova.compute.manager [req-af83e613-20c0-4134-a923-0f96764d103f req-d4215eda-498e-4775-b071-3ee1fbe8b670 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received event network-changed-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.788 248514 DEBUG nova.compute.manager [req-af83e613-20c0-4134-a923-0f96764d103f req-d4215eda-498e-4775-b071-3ee1fbe8b670 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Refreshing instance network info cache due to event network-changed-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.788 248514 DEBUG oslo_concurrency.lockutils [req-af83e613-20c0-4134-a923-0f96764d103f req-d4215eda-498e-4775-b071-3ee1fbe8b670 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-5a7142a0-6e82-418c-affe-88fd6beb2ad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.789 248514 DEBUG oslo_concurrency.lockutils [req-af83e613-20c0-4134-a923-0f96764d103f req-d4215eda-498e-4775-b071-3ee1fbe8b670 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-5a7142a0-6e82-418c-affe-88fd6beb2ad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.789 248514 DEBUG nova.network.neutron [req-af83e613-20c0-4134-a923-0f96764d103f req-d4215eda-498e-4775-b071-3ee1fbe8b670 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Refreshing network info cache for port 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.793 248514 DEBUG oslo_concurrency.lockutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.794 248514 DEBUG oslo_concurrency.lockutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:57:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2919445828' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.820 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.848 248514 DEBUG nova.storage.rbd_utils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.858 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.908 248514 DEBUG nova.network.neutron [req-34c31287-afb5-4cdd-a6d6-79053f086f33 req-51e655ae-021e-4bc9-8e7f-dab52de6e8af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updated VIF entry in instance network info cache for port 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:57:32 np0005558241 nova_compute[248510]: 2025-12-13 08:57:32.909 248514 DEBUG nova.network.neutron [req-34c31287-afb5-4cdd-a6d6-79053f086f33 req-51e655ae-021e-4bc9-8e7f-dab52de6e8af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Updating instance_info_cache with network_info: [{"id": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "address": "fa:16:3e:7d:e7:bf", "network": {"id": "d2ff4cff-54cc-40c6-a486-7e7532c2462b", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1454422872-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c21c2eb2d0c4465ae562381f358fbd8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63a84e8b-c1", "ovs_interfaceid": "63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.009 248514 DEBUG oslo_concurrency.processutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.188 248514 DEBUG oslo_concurrency.lockutils [req-34c31287-afb5-4cdd-a6d6-79053f086f33 req-51e655ae-021e-4bc9-8e7f-dab52de6e8af 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-75f348ef-4044-47a1-ba1b-f1b66513450c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.243 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquiring lock "393f815d-d124-4aea-98c0-126aed0744bd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.244 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.282 248514 DEBUG nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.350 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:57:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3503960443' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.503 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.505 248514 DEBUG nova.virt.libvirt.vif [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-182179822',display_name='tempest-TestNetworkBasicOps-server-182179822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-182179822',id=122,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIyOWHhK5PGqeOlzFwokovmTuf3HTihMwOQrzfuGYU+/TrdkTdWDTQvnoNZ7qiFrzCGlnIvswkbj8TaejN4nwLPFUx3mjtQULdplgXkj1ea+cO+RfMC1iM+NaDk/WgBTsg==',key_name='tempest-TestNetworkBasicOps-1885276769',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-ly8zv4h3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:57:25Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5a7142a0-6e82-418c-affe-88fd6beb2ad9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.506 248514 DEBUG nova.network.os_vif_util [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.507 248514 DEBUG nova.network.os_vif_util [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:3b:59,bridge_name='br-int',has_traffic_filtering=True,id=8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ee3e1ae-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.509 248514 DEBUG nova.objects.instance [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a7142a0-6e82-418c-affe-88fd6beb2ad9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.565 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:57:33 np0005558241 nova_compute[248510]:  <uuid>5a7142a0-6e82-418c-affe-88fd6beb2ad9</uuid>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:  <name>instance-0000007a</name>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkBasicOps-server-182179822</nova:name>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:57:32</nova:creationTime>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:57:33 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:        <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:        <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:        <nova:port uuid="8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2">
Dec 13 03:57:33 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <entry name="serial">5a7142a0-6e82-418c-affe-88fd6beb2ad9</entry>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <entry name="uuid">5a7142a0-6e82-418c-affe-88fd6beb2ad9</entry>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk">
Dec 13 03:57:33 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:57:33 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk.config">
Dec 13 03:57:33 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:57:33 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:ed:3b:59"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <target dev="tap8ee3e1ae-47"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9/console.log" append="off"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:57:33 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:57:33 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:57:33 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:57:33 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.566 248514 DEBUG nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Preparing to wait for external event network-vif-plugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.567 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.567 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.567 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.568 248514 DEBUG nova.virt.libvirt.vif [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-182179822',display_name='tempest-TestNetworkBasicOps-server-182179822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-182179822',id=122,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIyOWHhK5PGqeOlzFwokovmTuf3HTihMwOQrzfuGYU+/TrdkTdWDTQvnoNZ7qiFrzCGlnIvswkbj8TaejN4nwLPFUx3mjtQULdplgXkj1ea+cO+RfMC1iM+NaDk/WgBTsg==',key_name='tempest-TestNetworkBasicOps-1885276769',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-ly8zv4h3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:57:25Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5a7142a0-6e82-418c-affe-88fd6beb2ad9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.568 248514 DEBUG nova.network.os_vif_util [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.569 248514 DEBUG nova.network.os_vif_util [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:3b:59,bridge_name='br-int',has_traffic_filtering=True,id=8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ee3e1ae-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.569 248514 DEBUG os_vif [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:3b:59,bridge_name='br-int',has_traffic_filtering=True,id=8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ee3e1ae-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.570 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.570 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.571 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.574 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.574 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ee3e1ae-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.574 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ee3e1ae-47, col_values=(('external_ids', {'iface-id': '8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:3b:59', 'vm-uuid': '5a7142a0-6e82-418c-affe-88fd6beb2ad9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:33 np0005558241 NetworkManager[50376]: <info>  [1765616253.5893] manager: (tap8ee3e1ae-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/493)
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.590 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.593 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.594 248514 INFO os_vif [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:3b:59,bridge_name='br-int',has_traffic_filtering=True,id=8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ee3e1ae-47')#033[00m
Dec 13 03:57:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:57:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/243712752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.655 248514 DEBUG oslo_concurrency.processutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.664 248514 DEBUG nova.compute.provider_tree [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.675 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.676 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.676 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:ed:3b:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.677 248514 INFO nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Using config drive#033[00m
Dec 13 03:57:33 np0005558241 podman[369461]: 2025-12-13 08:57:33.689216043 +0000 UTC m=+0.060424660 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 03:57:33 np0005558241 podman[369460]: 2025-12-13 08:57:33.699401083 +0000 UTC m=+0.067932144 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.707 248514 DEBUG nova.storage.rbd_utils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.714 248514 DEBUG nova.scheduler.client.report [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:57:33 np0005558241 podman[369459]: 2025-12-13 08:57:33.735821644 +0000 UTC m=+0.104809116 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.796 248514 DEBUG oslo_concurrency.lockutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.798 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.804 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.804 248514 INFO nova.compute.claims [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.862 248514 INFO nova.scheduler.client.report [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Deleted allocations for instance 75f348ef-4044-47a1-ba1b-f1b66513450c#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.974 248514 DEBUG oslo_concurrency.lockutils [None req-11fa51ef-6a2b-406d-acca-e763e464f0b3 81fb01d9d08845c3b626079ab726db7a 6c21c2eb2d0c4465ae562381f358fbd8 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:33 np0005558241 nova_compute[248510]: 2025-12-13 08:57:33.988 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:57:34 np0005558241 nova_compute[248510]: 2025-12-13 08:57:34.141 248514 DEBUG nova.compute.manager [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received event network-vif-plugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:57:34 np0005558241 nova_compute[248510]: 2025-12-13 08:57:34.142 248514 DEBUG oslo_concurrency.lockutils [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:34 np0005558241 nova_compute[248510]: 2025-12-13 08:57:34.142 248514 DEBUG oslo_concurrency.lockutils [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:34 np0005558241 nova_compute[248510]: 2025-12-13 08:57:34.143 248514 DEBUG oslo_concurrency.lockutils [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "75f348ef-4044-47a1-ba1b-f1b66513450c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:34 np0005558241 nova_compute[248510]: 2025-12-13 08:57:34.143 248514 DEBUG nova.compute.manager [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] No waiting events found dispatching network-vif-plugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:57:34 np0005558241 nova_compute[248510]: 2025-12-13 08:57:34.143 248514 WARNING nova.compute.manager [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received unexpected event network-vif-plugged-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:57:34 np0005558241 nova_compute[248510]: 2025-12-13 08:57:34.144 248514 DEBUG nova.compute.manager [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Received event network-vif-deleted-63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:57:34 np0005558241 nova_compute[248510]: 2025-12-13 08:57:34.144 248514 INFO nova.compute.manager [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Neutron deleted interface 63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 03:57:34 np0005558241 nova_compute[248510]: 2025-12-13 08:57:34.144 248514 DEBUG nova.network.neutron [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Dec 13 03:57:34 np0005558241 nova_compute[248510]: 2025-12-13 08:57:34.148 248514 DEBUG nova.compute.manager [req-d69e73d1-10f4-4e9e-b1da-2d909144dec9 req-10f12a1b-05f9-4eff-bc11-0d0744c2069b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Detach interface failed, port_id=63a84e8b-c1b4-4e0d-b19a-54e6b7a96fed, reason: Instance 75f348ef-4044-47a1-ba1b-f1b66513450c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec 13 03:57:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2894: 321 pgs: 321 active+clean; 167 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 2.7 MiB/s wr, 116 op/s
Dec 13 03:57:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:57:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554201659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:57:34 np0005558241 nova_compute[248510]: 2025-12-13 08:57:34.570 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:57:34 np0005558241 nova_compute[248510]: 2025-12-13 08:57:34.578 248514 DEBUG nova.compute.provider_tree [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:57:34 np0005558241 nova_compute[248510]: 2025-12-13 08:57:34.590 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616239.5896945, 4887eb43-1570-49a5-b20e-326af1e84a7b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:57:34 np0005558241 nova_compute[248510]: 2025-12-13 08:57:34.591 248514 INFO nova.compute.manager [-] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:57:34 np0005558241 nova_compute[248510]: 2025-12-13 08:57:34.646 248514 DEBUG nova.scheduler.client.report [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:57:34 np0005558241 nova_compute[248510]: 2025-12-13 08:57:34.653 248514 DEBUG nova.compute.manager [None req-d4c0beb6-d755-4e5c-aa45-2cff72c0b851 - - - - - -] [instance: 4887eb43-1570-49a5-b20e-326af1e84a7b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:57:34 np0005558241 nova_compute[248510]: 2025-12-13 08:57:34.907 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:34 np0005558241 nova_compute[248510]: 2025-12-13 08:57:34.908 248514 DEBUG nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.158 248514 DEBUG nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.158 248514 DEBUG nova.network.neutron [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.167 248514 INFO nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Creating config drive at /var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9/disk.config#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.174 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcz02201c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.222 248514 INFO nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.248 248514 DEBUG nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.327 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcz02201c" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.359 248514 DEBUG nova.storage.rbd_utils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.365 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9/disk.config 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.416 248514 DEBUG nova.policy [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '095998dc2eb348e8a90c866d4106cd74', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '30f07149729142048436dbfbb8bf2742', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.419 248514 DEBUG nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.421 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.422 248514 INFO nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Creating image(s)#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.447 248514 DEBUG nova.storage.rbd_utils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] rbd image 393f815d-d124-4aea-98c0-126aed0744bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.475 248514 DEBUG nova.storage.rbd_utils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] rbd image 393f815d-d124-4aea-98c0-126aed0744bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.505 248514 DEBUG nova.storage.rbd_utils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] rbd image 393f815d-d124-4aea-98c0-126aed0744bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.510 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.552 248514 DEBUG oslo_concurrency.processutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9/disk.config 5a7142a0-6e82-418c-affe-88fd6beb2ad9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.553 248514 INFO nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Deleting local config drive /var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9/disk.config because it was imported into RBD.#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.592 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.592 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.593 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.593 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:35 np0005558241 kernel: tap8ee3e1ae-47: entered promiscuous mode
Dec 13 03:57:35 np0005558241 NetworkManager[50376]: <info>  [1765616255.6106] manager: (tap8ee3e1ae-47): new Tun device (/org/freedesktop/NetworkManager/Devices/494)
Dec 13 03:57:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:35Z|01189|binding|INFO|Claiming lport 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 for this chassis.
Dec 13 03:57:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:35Z|01190|binding|INFO|8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2: Claiming fa:16:3e:ed:3b:59 10.100.0.13
Dec 13 03:57:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:35Z|01191|binding|INFO|Setting lport 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 ovn-installed in OVS
Dec 13 03:57:35 np0005558241 systemd-udevd[369691]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.642 248514 DEBUG nova.storage.rbd_utils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] rbd image 393f815d-d124-4aea-98c0-126aed0744bd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:57:35 np0005558241 NetworkManager[50376]: <info>  [1765616255.6538] device (tap8ee3e1ae-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.653 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 393f815d-d124-4aea-98c0-126aed0744bd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:57:35 np0005558241 NetworkManager[50376]: <info>  [1765616255.6545] device (tap8ee3e1ae-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:57:35 np0005558241 systemd-machined[210538]: New machine qemu-149-instance-0000007a.
Dec 13 03:57:35 np0005558241 systemd[1]: Started Virtual Machine qemu-149-instance-0000007a.
Dec 13 03:57:35 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:35Z|01192|binding|INFO|Setting lport 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 up in Southbound
Dec 13 03:57:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.685 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:3b:59 10.100.0.13'], port_security=['fa:16:3e:ed:3b:59 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5a7142a0-6e82-418c-affe-88fd6beb2ad9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8944c356-661d-4684-a169-e2ad4b13e098', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca3b9929-a40c-461d-b2b9-49fd6af07fd3, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:57:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.686 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 in datapath 793ba3c3-1004-4068-a89a-dc7b4c56fc43 bound to our chassis#033[00m
Dec 13 03:57:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.688 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 793ba3c3-1004-4068-a89a-dc7b4c56fc43#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.703 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.708 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[50cec198-324e-4273-aba4-008c28e677b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.759 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7c7dfbd3-30e0-4f64-ad0b-44090d69e0a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.763 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5a70906e-6a33-439d-9924-42499fc910cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.797 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad28ca3-961f-4d21-af12-288fe4b32614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.823 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f00fd20-af9d-4459-9094-6e3ae1ee4838]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap793ba3c3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:ed:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 349], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878911, 'reachable_time': 42802, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369727, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.847 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c1759bbd-07e5-4822-a365-b5461c3dd4e1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap793ba3c3-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 878923, 'tstamp': 878923}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369728, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap793ba3c3-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 878927, 'tstamp': 878927}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369728, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.850 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap793ba3c3-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.865 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap793ba3c3-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.865 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:57:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.866 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap793ba3c3-10, col_values=(('external_ids', {'iface-id': 'e25ab14d-8bf6-4007-ae4c-085df43b875d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.867 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:35.871 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:57:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.920 248514 DEBUG nova.network.neutron [req-af83e613-20c0-4134-a923-0f96764d103f req-d4215eda-498e-4775-b071-3ee1fbe8b670 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Updated VIF entry in instance network info cache for port 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:57:35 np0005558241 nova_compute[248510]: 2025-12-13 08:57:35.920 248514 DEBUG nova.network.neutron [req-af83e613-20c0-4134-a923-0f96764d103f req-d4215eda-498e-4775-b071-3ee1fbe8b670 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Updating instance_info_cache with network_info: [{"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.009 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 393f815d-d124-4aea-98c0-126aed0744bd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.072 248514 DEBUG oslo_concurrency.lockutils [req-af83e613-20c0-4134-a923-0f96764d103f req-d4215eda-498e-4775-b071-3ee1fbe8b670 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-5a7142a0-6e82-418c-affe-88fd6beb2ad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.079 248514 DEBUG nova.storage.rbd_utils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] resizing rbd image 393f815d-d124-4aea-98c0-126aed0744bd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.113 248514 DEBUG nova.compute.manager [req-3ed6942e-a0fa-4c72-8e78-afc39468ba78 req-a3237614-b909-4f77-9069-ef0626cf8247 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received event network-vif-plugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.114 248514 DEBUG oslo_concurrency.lockutils [req-3ed6942e-a0fa-4c72-8e78-afc39468ba78 req-a3237614-b909-4f77-9069-ef0626cf8247 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.114 248514 DEBUG oslo_concurrency.lockutils [req-3ed6942e-a0fa-4c72-8e78-afc39468ba78 req-a3237614-b909-4f77-9069-ef0626cf8247 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.115 248514 DEBUG oslo_concurrency.lockutils [req-3ed6942e-a0fa-4c72-8e78-afc39468ba78 req-a3237614-b909-4f77-9069-ef0626cf8247 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.115 248514 DEBUG nova.compute.manager [req-3ed6942e-a0fa-4c72-8e78-afc39468ba78 req-a3237614-b909-4f77-9069-ef0626cf8247 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Processing event network-vif-plugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.159 248514 DEBUG nova.objects.instance [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lazy-loading 'migration_context' on Instance uuid 393f815d-d124-4aea-98c0-126aed0744bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.187 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.188 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Ensure instance console log exists: /var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.188 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.188 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.189 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2895: 321 pgs: 321 active+clean; 167 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.703 248514 DEBUG nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.704 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616256.70275, 5a7142a0-6e82-418c-affe-88fd6beb2ad9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.704 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] VM Started (Lifecycle Event)#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.707 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.710 248514 INFO nova.virt.libvirt.driver [-] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Instance spawned successfully.#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.711 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.731 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.738 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.744 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.745 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.746 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.746 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.746 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.747 248514 DEBUG nova.virt.libvirt.driver [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.769 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.769 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616256.703979, 5a7142a0-6e82-418c-affe-88fd6beb2ad9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.769 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.837 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.841 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616256.70753, 5a7142a0-6e82-418c-affe-88fd6beb2ad9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.841 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.880 248514 INFO nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Took 11.16 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.880 248514 DEBUG nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.889 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.892 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.945 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.973 248514 INFO nova.compute.manager [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Took 12.49 seconds to build instance.#033[00m
Dec 13 03:57:36 np0005558241 nova_compute[248510]: 2025-12-13 08:57:36.990 248514 DEBUG oslo_concurrency.lockutils [None req-e7df224d-29e0-48df-9357-9f68be615829 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:37 np0005558241 nova_compute[248510]: 2025-12-13 08:57:37.159 248514 DEBUG nova.network.neutron [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Successfully created port: 6b86528d-c1b7-4776-809f-1f8b37569b6f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:57:37 np0005558241 nova_compute[248510]: 2025-12-13 08:57:37.666 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2896: 321 pgs: 321 active+clean; 187 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 129 op/s
Dec 13 03:57:38 np0005558241 nova_compute[248510]: 2025-12-13 08:57:38.589 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:38 np0005558241 nova_compute[248510]: 2025-12-13 08:57:38.712 248514 DEBUG nova.compute.manager [req-3e898b5b-a9e0-4c2f-a250-a312e62bf120 req-7be942bf-9c95-475d-b106-48ef8e16f710 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received event network-vif-plugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:57:38 np0005558241 nova_compute[248510]: 2025-12-13 08:57:38.712 248514 DEBUG oslo_concurrency.lockutils [req-3e898b5b-a9e0-4c2f-a250-a312e62bf120 req-7be942bf-9c95-475d-b106-48ef8e16f710 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:38 np0005558241 nova_compute[248510]: 2025-12-13 08:57:38.712 248514 DEBUG oslo_concurrency.lockutils [req-3e898b5b-a9e0-4c2f-a250-a312e62bf120 req-7be942bf-9c95-475d-b106-48ef8e16f710 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:38 np0005558241 nova_compute[248510]: 2025-12-13 08:57:38.712 248514 DEBUG oslo_concurrency.lockutils [req-3e898b5b-a9e0-4c2f-a250-a312e62bf120 req-7be942bf-9c95-475d-b106-48ef8e16f710 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:38 np0005558241 nova_compute[248510]: 2025-12-13 08:57:38.713 248514 DEBUG nova.compute.manager [req-3e898b5b-a9e0-4c2f-a250-a312e62bf120 req-7be942bf-9c95-475d-b106-48ef8e16f710 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] No waiting events found dispatching network-vif-plugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:57:38 np0005558241 nova_compute[248510]: 2025-12-13 08:57:38.713 248514 WARNING nova.compute.manager [req-3e898b5b-a9e0-4c2f-a250-a312e62bf120 req-7be942bf-9c95-475d-b106-48ef8e16f710 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received unexpected event network-vif-plugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:57:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:57:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:57:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:57:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:57:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:57:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:57:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2897: 321 pgs: 321 active+clean; 213 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 160 op/s
Dec 13 03:57:40 np0005558241 nova_compute[248510]: 2025-12-13 08:57:40.391 248514 DEBUG nova.network.neutron [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Successfully updated port: 6b86528d-c1b7-4776-809f-1f8b37569b6f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:57:40 np0005558241 nova_compute[248510]: 2025-12-13 08:57:40.441 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquiring lock "refresh_cache-393f815d-d124-4aea-98c0-126aed0744bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:57:40 np0005558241 nova_compute[248510]: 2025-12-13 08:57:40.441 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquired lock "refresh_cache-393f815d-d124-4aea-98c0-126aed0744bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:57:40 np0005558241 nova_compute[248510]: 2025-12-13 08:57:40.441 248514 DEBUG nova.network.neutron [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:57:40 np0005558241 nova_compute[248510]: 2025-12-13 08:57:40.545 248514 DEBUG nova.compute.manager [req-53a9f497-019d-4149-be46-86ff521f86c6 req-e93c1a4d-dc34-4e00-bde9-cd7e1535ca4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received event network-changed-6b86528d-c1b7-4776-809f-1f8b37569b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:57:40 np0005558241 nova_compute[248510]: 2025-12-13 08:57:40.546 248514 DEBUG nova.compute.manager [req-53a9f497-019d-4149-be46-86ff521f86c6 req-e93c1a4d-dc34-4e00-bde9-cd7e1535ca4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Refreshing instance network info cache due to event network-changed-6b86528d-c1b7-4776-809f-1f8b37569b6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:57:40 np0005558241 nova_compute[248510]: 2025-12-13 08:57:40.546 248514 DEBUG oslo_concurrency.lockutils [req-53a9f497-019d-4149-be46-86ff521f86c6 req-e93c1a4d-dc34-4e00-bde9-cd7e1535ca4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-393f815d-d124-4aea-98c0-126aed0744bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:57:40 np0005558241 nova_compute[248510]: 2025-12-13 08:57:40.770 248514 DEBUG nova.network.neutron [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:57:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:57:41 np0005558241 nova_compute[248510]: 2025-12-13 08:57:41.351 248514 DEBUG nova.compute.manager [req-f752a7fe-0ac3-44cc-863d-d4466c887775 req-b0bfc868-3139-440c-9a33-84808adab505 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received event network-changed-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:57:41 np0005558241 nova_compute[248510]: 2025-12-13 08:57:41.351 248514 DEBUG nova.compute.manager [req-f752a7fe-0ac3-44cc-863d-d4466c887775 req-b0bfc868-3139-440c-9a33-84808adab505 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Refreshing instance network info cache due to event network-changed-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:57:41 np0005558241 nova_compute[248510]: 2025-12-13 08:57:41.352 248514 DEBUG oslo_concurrency.lockutils [req-f752a7fe-0ac3-44cc-863d-d4466c887775 req-b0bfc868-3139-440c-9a33-84808adab505 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-5a7142a0-6e82-418c-affe-88fd6beb2ad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:57:41 np0005558241 nova_compute[248510]: 2025-12-13 08:57:41.352 248514 DEBUG oslo_concurrency.lockutils [req-f752a7fe-0ac3-44cc-863d-d4466c887775 req-b0bfc868-3139-440c-9a33-84808adab505 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-5a7142a0-6e82-418c-affe-88fd6beb2ad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:57:41 np0005558241 nova_compute[248510]: 2025-12-13 08:57:41.352 248514 DEBUG nova.network.neutron [req-f752a7fe-0ac3-44cc-863d-d4466c887775 req-b0bfc868-3139-440c-9a33-84808adab505 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Refreshing network info cache for port 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:57:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2898: 321 pgs: 321 active+clean; 213 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 140 op/s
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.451 248514 DEBUG nova.network.neutron [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Updating instance_info_cache with network_info: [{"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.670 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.740 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Releasing lock "refresh_cache-393f815d-d124-4aea-98c0-126aed0744bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.741 248514 DEBUG nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Instance network_info: |[{"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.741 248514 DEBUG oslo_concurrency.lockutils [req-53a9f497-019d-4149-be46-86ff521f86c6 req-e93c1a4d-dc34-4e00-bde9-cd7e1535ca4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-393f815d-d124-4aea-98c0-126aed0744bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.741 248514 DEBUG nova.network.neutron [req-53a9f497-019d-4149-be46-86ff521f86c6 req-e93c1a4d-dc34-4e00-bde9-cd7e1535ca4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Refreshing network info cache for port 6b86528d-c1b7-4776-809f-1f8b37569b6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.745 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Start _get_guest_xml network_info=[{"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.750 248514 WARNING nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.757 248514 DEBUG nova.virt.libvirt.host [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.758 248514 DEBUG nova.virt.libvirt.host [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.763 248514 DEBUG nova.virt.libvirt.host [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.763 248514 DEBUG nova.virt.libvirt.host [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.763 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.763 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.764 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.764 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.764 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.764 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.764 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.765 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.765 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.765 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.765 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.765 248514 DEBUG nova.virt.hardware [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:57:42 np0005558241 nova_compute[248510]: 2025-12-13 08:57:42.768 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:57:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:57:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/547996344' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:57:43 np0005558241 nova_compute[248510]: 2025-12-13 08:57:43.375 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:57:43 np0005558241 nova_compute[248510]: 2025-12-13 08:57:43.408 248514 DEBUG nova.storage.rbd_utils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] rbd image 393f815d-d124-4aea-98c0-126aed0744bd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:57:43 np0005558241 nova_compute[248510]: 2025-12-13 08:57:43.413 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:57:43 np0005558241 nova_compute[248510]: 2025-12-13 08:57:43.592 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:43 np0005558241 nova_compute[248510]: 2025-12-13 08:57:43.864 248514 DEBUG nova.network.neutron [req-f752a7fe-0ac3-44cc-863d-d4466c887775 req-b0bfc868-3139-440c-9a33-84808adab505 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Updated VIF entry in instance network info cache for port 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:57:43 np0005558241 nova_compute[248510]: 2025-12-13 08:57:43.864 248514 DEBUG nova.network.neutron [req-f752a7fe-0ac3-44cc-863d-d4466c887775 req-b0bfc868-3139-440c-9a33-84808adab505 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Updating instance_info_cache with network_info: [{"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:57:43 np0005558241 nova_compute[248510]: 2025-12-13 08:57:43.894 248514 DEBUG oslo_concurrency.lockutils [req-f752a7fe-0ac3-44cc-863d-d4466c887775 req-b0bfc868-3139-440c-9a33-84808adab505 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-5a7142a0-6e82-418c-affe-88fd6beb2ad9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:57:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:57:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3321302842' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.019 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.021 248514 DEBUG nova.virt.libvirt.vif [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:57:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1358982584',display_name='tempest-TestServerBasicOps-server-1358982584',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1358982584',id=123,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLfROCcHYxf8uvMCUymzKleNDXyw2tmzRbGzC//Zjm3WjKvoC/StL1LDTb6SGUu3s96IMOyEDfcMq8YnCwGooxmPETH1rv9aB87uxpvFNJYtgkaVxxoxt7V/jUmpT5M6g==',key_name='tempest-TestServerBasicOps-1980425304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30f07149729142048436dbfbb8bf2742',ramdisk_id='',reservation_id='r-sw1yv1h4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1870679034',owner_user_name='tempest-TestServerBasicOps-1870679034-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:57:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='095998dc2eb348e8a90c866d4106cd74',uuid=393f815d-d124-4aea-98c0-126aed0744bd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.021 248514 DEBUG nova.network.os_vif_util [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Converting VIF {"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.022 248514 DEBUG nova.network.os_vif_util [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:d0:ff,bridge_name='br-int',has_traffic_filtering=True,id=6b86528d-c1b7-4776-809f-1f8b37569b6f,network=Network(b9992fbc-9a6f-4e82-84b9-be47eb5816aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86528d-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.024 248514 DEBUG nova.objects.instance [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lazy-loading 'pci_devices' on Instance uuid 393f815d-d124-4aea-98c0-126aed0744bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.116 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:57:44 np0005558241 nova_compute[248510]:  <uuid>393f815d-d124-4aea-98c0-126aed0744bd</uuid>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:  <name>instance-0000007b</name>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestServerBasicOps-server-1358982584</nova:name>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:57:42</nova:creationTime>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:57:44 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:        <nova:user uuid="095998dc2eb348e8a90c866d4106cd74">tempest-TestServerBasicOps-1870679034-project-member</nova:user>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:        <nova:project uuid="30f07149729142048436dbfbb8bf2742">tempest-TestServerBasicOps-1870679034</nova:project>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:        <nova:port uuid="6b86528d-c1b7-4776-809f-1f8b37569b6f">
Dec 13 03:57:44 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <entry name="serial">393f815d-d124-4aea-98c0-126aed0744bd</entry>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <entry name="uuid">393f815d-d124-4aea-98c0-126aed0744bd</entry>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/393f815d-d124-4aea-98c0-126aed0744bd_disk">
Dec 13 03:57:44 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:57:44 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/393f815d-d124-4aea-98c0-126aed0744bd_disk.config">
Dec 13 03:57:44 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:57:44 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:05:d0:ff"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <target dev="tap6b86528d-c1"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd/console.log" append="off"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:57:44 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:57:44 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:57:44 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:57:44 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.117 248514 DEBUG nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Preparing to wait for external event network-vif-plugged-6b86528d-c1b7-4776-809f-1f8b37569b6f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.118 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquiring lock "393f815d-d124-4aea-98c0-126aed0744bd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.118 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.118 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.119 248514 DEBUG nova.virt.libvirt.vif [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:57:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1358982584',display_name='tempest-TestServerBasicOps-server-1358982584',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1358982584',id=123,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLfROCcHYxf8uvMCUymzKleNDXyw2tmzRbGzC//Zjm3WjKvoC/StL1LDTb6SGUu3s96IMOyEDfcMq8YnCwGooxmPETH1rv9aB87uxpvFNJYtgkaVxxoxt7V/jUmpT5M6g==',key_name='tempest-TestServerBasicOps-1980425304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='30f07149729142048436dbfbb8bf2742',ramdisk_id='',reservation_id='r-sw1yv1h4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1870679034',owner_user_name='tempest-TestServerBasicOps-1870679034-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:57:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='095998dc2eb348e8a90c866d4106cd74',uuid=393f815d-d124-4aea-98c0-126aed0744bd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.119 248514 DEBUG nova.network.os_vif_util [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Converting VIF {"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.120 248514 DEBUG nova.network.os_vif_util [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:d0:ff,bridge_name='br-int',has_traffic_filtering=True,id=6b86528d-c1b7-4776-809f-1f8b37569b6f,network=Network(b9992fbc-9a6f-4e82-84b9-be47eb5816aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86528d-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.121 248514 DEBUG os_vif [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:d0:ff,bridge_name='br-int',has_traffic_filtering=True,id=6b86528d-c1b7-4776-809f-1f8b37569b6f,network=Network(b9992fbc-9a6f-4e82-84b9-be47eb5816aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86528d-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.121 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.122 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.122 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.125 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.125 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b86528d-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.126 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b86528d-c1, col_values=(('external_ids', {'iface-id': '6b86528d-c1b7-4776-809f-1f8b37569b6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:05:d0:ff', 'vm-uuid': '393f815d-d124-4aea-98c0-126aed0744bd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.128 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:44 np0005558241 NetworkManager[50376]: <info>  [1765616264.1295] manager: (tap6b86528d-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/495)
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.130 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.135 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.135 248514 INFO os_vif [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:d0:ff,bridge_name='br-int',has_traffic_filtering=True,id=6b86528d-c1b7-4776-809f-1f8b37569b6f,network=Network(b9992fbc-9a6f-4e82-84b9-be47eb5816aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86528d-c1')#033[00m
Dec 13 03:57:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2899: 321 pgs: 321 active+clean; 213 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Dec 13 03:57:44 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:44Z|01193|binding|INFO|Releasing lport e25ab14d-8bf6-4007-ae4c-085df43b875d from this chassis (sb_readonly=0)
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.349 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.350 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.350 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] No VIF found with MAC fa:16:3e:05:d0:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.351 248514 INFO nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Using config drive#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.376 248514 DEBUG nova.storage.rbd_utils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] rbd image 393f815d-d124-4aea-98c0-126aed0744bd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.405 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.758 248514 DEBUG nova.network.neutron [req-53a9f497-019d-4149-be46-86ff521f86c6 req-e93c1a4d-dc34-4e00-bde9-cd7e1535ca4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Updated VIF entry in instance network info cache for port 6b86528d-c1b7-4776-809f-1f8b37569b6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.759 248514 DEBUG nova.network.neutron [req-53a9f497-019d-4149-be46-86ff521f86c6 req-e93c1a4d-dc34-4e00-bde9-cd7e1535ca4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Updating instance_info_cache with network_info: [{"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:57:44 np0005558241 nova_compute[248510]: 2025-12-13 08:57:44.861 248514 DEBUG oslo_concurrency.lockutils [req-53a9f497-019d-4149-be46-86ff521f86c6 req-e93c1a4d-dc34-4e00-bde9-cd7e1535ca4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-393f815d-d124-4aea-98c0-126aed0744bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:57:45 np0005558241 nova_compute[248510]: 2025-12-13 08:57:45.070 248514 INFO nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Creating config drive at /var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd/disk.config#033[00m
Dec 13 03:57:45 np0005558241 nova_compute[248510]: 2025-12-13 08:57:45.077 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_ykuwpmu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:57:45 np0005558241 nova_compute[248510]: 2025-12-13 08:57:45.235 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_ykuwpmu" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:57:45 np0005558241 nova_compute[248510]: 2025-12-13 08:57:45.263 248514 DEBUG nova.storage.rbd_utils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] rbd image 393f815d-d124-4aea-98c0-126aed0744bd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:57:45 np0005558241 nova_compute[248510]: 2025-12-13 08:57:45.268 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd/disk.config 393f815d-d124-4aea-98c0-126aed0744bd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:57:45 np0005558241 nova_compute[248510]: 2025-12-13 08:57:45.665 248514 DEBUG oslo_concurrency.processutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd/disk.config 393f815d-d124-4aea-98c0-126aed0744bd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:57:45 np0005558241 nova_compute[248510]: 2025-12-13 08:57:45.666 248514 INFO nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Deleting local config drive /var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd/disk.config because it was imported into RBD.#033[00m
Dec 13 03:57:45 np0005558241 NetworkManager[50376]: <info>  [1765616265.7391] manager: (tap6b86528d-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/496)
Dec 13 03:57:45 np0005558241 kernel: tap6b86528d-c1: entered promiscuous mode
Dec 13 03:57:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:45Z|01194|binding|INFO|Claiming lport 6b86528d-c1b7-4776-809f-1f8b37569b6f for this chassis.
Dec 13 03:57:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:45Z|01195|binding|INFO|6b86528d-c1b7-4776-809f-1f8b37569b6f: Claiming fa:16:3e:05:d0:ff 10.100.0.11
Dec 13 03:57:45 np0005558241 nova_compute[248510]: 2025-12-13 08:57:45.744 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:45 np0005558241 nova_compute[248510]: 2025-12-13 08:57:45.757 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616250.7558303, 75f348ef-4044-47a1-ba1b-f1b66513450c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:57:45 np0005558241 nova_compute[248510]: 2025-12-13 08:57:45.757 248514 INFO nova.compute.manager [-] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:57:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:45Z|01196|binding|INFO|Setting lport 6b86528d-c1b7-4776-809f-1f8b37569b6f ovn-installed in OVS
Dec 13 03:57:45 np0005558241 nova_compute[248510]: 2025-12-13 08:57:45.765 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:45 np0005558241 systemd-udevd[369976]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:57:45 np0005558241 nova_compute[248510]: 2025-12-13 08:57:45.768 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:45 np0005558241 NetworkManager[50376]: <info>  [1765616265.7836] device (tap6b86528d-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:57:45 np0005558241 NetworkManager[50376]: <info>  [1765616265.7845] device (tap6b86528d-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:57:45 np0005558241 systemd-machined[210538]: New machine qemu-150-instance-0000007b.
Dec 13 03:57:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.797 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:d0:ff 10.100.0.11'], port_security=['fa:16:3e:05:d0:ff 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '393f815d-d124-4aea-98c0-126aed0744bd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30f07149729142048436dbfbb8bf2742', 'neutron:revision_number': '2', 'neutron:security_group_ids': '339be054-74c6-402b-b71a-0e37379fa825 59357d23-9217-43e7-99ef-d4349c0de8ef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77d611ab-66b1-4445-8d3d-978ee23b13d9, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6b86528d-c1b7-4776-809f-1f8b37569b6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:57:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.799 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6b86528d-c1b7-4776-809f-1f8b37569b6f in datapath b9992fbc-9a6f-4e82-84b9-be47eb5816aa bound to our chassis#033[00m
Dec 13 03:57:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:45Z|01197|binding|INFO|Setting lport 6b86528d-c1b7-4776-809f-1f8b37569b6f up in Southbound
Dec 13 03:57:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.801 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b9992fbc-9a6f-4e82-84b9-be47eb5816aa#033[00m
Dec 13 03:57:45 np0005558241 systemd[1]: Started Virtual Machine qemu-150-instance-0000007b.
Dec 13 03:57:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.823 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4873d8d3-3e4d-4582-87f9-c24f27a7514d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.824 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb9992fbc-91 in ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:57:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.827 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb9992fbc-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:57:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.827 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d3cdc183-ba5f-4762-a621-dd82bbedbf5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.828 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4b0ece53-afe9-4f31-9e03-53b669f0b885]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:45 np0005558241 nova_compute[248510]: 2025-12-13 08:57:45.828 248514 DEBUG nova.compute.manager [None req-23bba313-05ce-4fce-b9aa-7ffb333b8af7 - - - - - -] [instance: 75f348ef-4044-47a1-ba1b-f1b66513450c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:57:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.842 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[7aff8ab5-76b3-4dfc-bb8f-19b820ace7fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.860 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6baa6887-5a77-4eb6-a8b1-1fa74c828160]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:57:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.899 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[01e3c8cd-4a42-4eb8-95dd-6c2f4cf9762f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:45 np0005558241 NetworkManager[50376]: <info>  [1765616265.9103] manager: (tapb9992fbc-90): new Veth device (/org/freedesktop/NetworkManager/Devices/497)
Dec 13 03:57:45 np0005558241 systemd-udevd[369981]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:57:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.911 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[99ca4dcd-593c-4ae1-bfdb-679fe58569cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.964 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ec39f191-a42e-43c5-8dae-9fb7a5d1bf59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:45.968 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ba2ea67d-40ac-4970-a428-a1177b478bbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:46 np0005558241 NetworkManager[50376]: <info>  [1765616265.9998] device (tapb9992fbc-90): carrier: link connected
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.007 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7f4f13e8-ffac-4c0e-836f-a83d65c99d14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.031 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e755666a-7969-47cf-a53e-4789560b67ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9992fbc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:30:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 354], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884321, 'reachable_time': 37298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370012, 'error': None, 'target': 'ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.055 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5f30d4ff-7c11-4ed8-a989-4b04330572a9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9c:3026'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 884321, 'tstamp': 884321}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370013, 'error': None, 'target': 'ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.077 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf5e387-8a8a-4957-91a2-634f839a5a4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9992fbc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:30:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 354], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884321, 'reachable_time': 37298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370014, 'error': None, 'target': 'ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.114 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d27f23b8-d266-4850-8444-2c8cc80249a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:46 np0005558241 nova_compute[248510]: 2025-12-13 08:57:46.222 248514 DEBUG nova.compute.manager [req-19ad9ed2-3bba-4417-b256-0a0d33e1ac38 req-58e7c1af-fa81-403c-aef9-f5b284904a3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received event network-vif-plugged-6b86528d-c1b7-4776-809f-1f8b37569b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:57:46 np0005558241 nova_compute[248510]: 2025-12-13 08:57:46.223 248514 DEBUG oslo_concurrency.lockutils [req-19ad9ed2-3bba-4417-b256-0a0d33e1ac38 req-58e7c1af-fa81-403c-aef9-f5b284904a3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "393f815d-d124-4aea-98c0-126aed0744bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:46 np0005558241 nova_compute[248510]: 2025-12-13 08:57:46.223 248514 DEBUG oslo_concurrency.lockutils [req-19ad9ed2-3bba-4417-b256-0a0d33e1ac38 req-58e7c1af-fa81-403c-aef9-f5b284904a3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:46 np0005558241 nova_compute[248510]: 2025-12-13 08:57:46.223 248514 DEBUG oslo_concurrency.lockutils [req-19ad9ed2-3bba-4417-b256-0a0d33e1ac38 req-58e7c1af-fa81-403c-aef9-f5b284904a3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:46 np0005558241 nova_compute[248510]: 2025-12-13 08:57:46.223 248514 DEBUG nova.compute.manager [req-19ad9ed2-3bba-4417-b256-0a0d33e1ac38 req-58e7c1af-fa81-403c-aef9-f5b284904a3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Processing event network-vif-plugged-6b86528d-c1b7-4776-809f-1f8b37569b6f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.230 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b1636b00-5b40-45c9-be74-4a427f01a03a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.232 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9992fbc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.233 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.234 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9992fbc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:46 np0005558241 nova_compute[248510]: 2025-12-13 08:57:46.235 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:46 np0005558241 NetworkManager[50376]: <info>  [1765616266.2364] manager: (tapb9992fbc-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Dec 13 03:57:46 np0005558241 kernel: tapb9992fbc-90: entered promiscuous mode
Dec 13 03:57:46 np0005558241 nova_compute[248510]: 2025-12-13 08:57:46.239 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.241 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb9992fbc-90, col_values=(('external_ids', {'iface-id': 'a5f80d22-8b41-40d9-b8ff-ddee53f45af0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:46 np0005558241 nova_compute[248510]: 2025-12-13 08:57:46.242 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:46 np0005558241 nova_compute[248510]: 2025-12-13 08:57:46.243 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:46 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:46Z|01198|binding|INFO|Releasing lport a5f80d22-8b41-40d9-b8ff-ddee53f45af0 from this chassis (sb_readonly=0)
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.244 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b9992fbc-9a6f-4e82-84b9-be47eb5816aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b9992fbc-9a6f-4e82-84b9-be47eb5816aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.246 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eda56a37-1132-45ef-8486-d27188a6f348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.246 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-b9992fbc-9a6f-4e82-84b9-be47eb5816aa
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/b9992fbc-9a6f-4e82-84b9-be47eb5816aa.pid.haproxy
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID b9992fbc-9a6f-4e82-84b9-be47eb5816aa
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:57:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:46.249 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'env', 'PROCESS_TAG=haproxy-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b9992fbc-9a6f-4e82-84b9-be47eb5816aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:57:46 np0005558241 nova_compute[248510]: 2025-12-13 08:57:46.259 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2900: 321 pgs: 321 active+clean; 213 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 13 03:57:46 np0005558241 podman[370046]: 2025-12-13 08:57:46.686931246 +0000 UTC m=+0.061654490 container create 852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 03:57:46 np0005558241 systemd[1]: Started libpod-conmon-852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5.scope.
Dec 13 03:57:46 np0005558241 podman[370046]: 2025-12-13 08:57:46.653797985 +0000 UTC m=+0.028521299 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:57:46 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:57:46 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5948b445140de39f593e110d7d860347d27dfb64a06ca5afb4dbd7d1143ccfb3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:46 np0005558241 podman[370046]: 2025-12-13 08:57:46.7876212 +0000 UTC m=+0.162344484 container init 852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:57:46 np0005558241 podman[370046]: 2025-12-13 08:57:46.793017503 +0000 UTC m=+0.167740757 container start 852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:57:46 np0005558241 neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa[370060]: [NOTICE]   (370065) : New worker (370067) forked
Dec 13 03:57:46 np0005558241 neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa[370060]: [NOTICE]   (370065) : Loading success.
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.065 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616267.0644548, 393f815d-d124-4aea-98c0-126aed0744bd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.065 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] VM Started (Lifecycle Event)#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.068 248514 DEBUG nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.072 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.076 248514 INFO nova.virt.libvirt.driver [-] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Instance spawned successfully.#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.076 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.102 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.109 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.110 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.110 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.111 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.111 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.112 248514 DEBUG nova.virt.libvirt.driver [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.117 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.145 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.146 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616267.0647352, 393f815d-d124-4aea-98c0-126aed0744bd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.146 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.180 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.185 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616267.0710046, 393f815d-d124-4aea-98c0-126aed0744bd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.185 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.194 248514 INFO nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Took 11.77 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.195 248514 DEBUG nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.206 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.210 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.246 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.286 248514 INFO nova.compute.manager [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Took 13.95 seconds to build instance.#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.307 248514 DEBUG oslo_concurrency.lockutils [None req-0f9a5aff-fe89-4a69-8073-4cacc51bbf14 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:47 np0005558241 nova_compute[248510]: 2025-12-13 08:57:47.673 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:48 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Dec 13 03:57:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2901: 321 pgs: 321 active+clean; 217 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 113 op/s
Dec 13 03:57:48 np0005558241 nova_compute[248510]: 2025-12-13 08:57:48.680 248514 DEBUG nova.compute.manager [req-01728757-5cb4-4be1-ad7f-73300ade60a6 req-c9d40c10-d8e6-4a9b-a8c0-da91d3dd626a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received event network-vif-plugged-6b86528d-c1b7-4776-809f-1f8b37569b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:57:48 np0005558241 nova_compute[248510]: 2025-12-13 08:57:48.681 248514 DEBUG oslo_concurrency.lockutils [req-01728757-5cb4-4be1-ad7f-73300ade60a6 req-c9d40c10-d8e6-4a9b-a8c0-da91d3dd626a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "393f815d-d124-4aea-98c0-126aed0744bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:48 np0005558241 nova_compute[248510]: 2025-12-13 08:57:48.681 248514 DEBUG oslo_concurrency.lockutils [req-01728757-5cb4-4be1-ad7f-73300ade60a6 req-c9d40c10-d8e6-4a9b-a8c0-da91d3dd626a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:48 np0005558241 nova_compute[248510]: 2025-12-13 08:57:48.681 248514 DEBUG oslo_concurrency.lockutils [req-01728757-5cb4-4be1-ad7f-73300ade60a6 req-c9d40c10-d8e6-4a9b-a8c0-da91d3dd626a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:48 np0005558241 nova_compute[248510]: 2025-12-13 08:57:48.681 248514 DEBUG nova.compute.manager [req-01728757-5cb4-4be1-ad7f-73300ade60a6 req-c9d40c10-d8e6-4a9b-a8c0-da91d3dd626a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] No waiting events found dispatching network-vif-plugged-6b86528d-c1b7-4776-809f-1f8b37569b6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:57:48 np0005558241 nova_compute[248510]: 2025-12-13 08:57:48.681 248514 WARNING nova.compute.manager [req-01728757-5cb4-4be1-ad7f-73300ade60a6 req-c9d40c10-d8e6-4a9b-a8c0-da91d3dd626a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received unexpected event network-vif-plugged-6b86528d-c1b7-4776-809f-1f8b37569b6f for instance with vm_state active and task_state None.#033[00m
Dec 13 03:57:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:48Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ed:3b:59 10.100.0.13
Dec 13 03:57:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:48Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ed:3b:59 10.100.0.13
Dec 13 03:57:49 np0005558241 nova_compute[248510]: 2025-12-13 08:57:49.128 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:49 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:49Z|01199|binding|INFO|Releasing lport a5f80d22-8b41-40d9-b8ff-ddee53f45af0 from this chassis (sb_readonly=0)
Dec 13 03:57:49 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:49Z|01200|binding|INFO|Releasing lport e25ab14d-8bf6-4007-ae4c-085df43b875d from this chassis (sb_readonly=0)
Dec 13 03:57:49 np0005558241 nova_compute[248510]: 2025-12-13 08:57:49.309 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2902: 321 pgs: 321 active+clean; 227 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 134 op/s
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.895388) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616270895545, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 937, "num_deletes": 257, "total_data_size": 1287404, "memory_usage": 1312152, "flush_reason": "Manual Compaction"}
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616270908279, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 1265071, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56662, "largest_seqno": 57598, "table_properties": {"data_size": 1260333, "index_size": 2325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10511, "raw_average_key_size": 19, "raw_value_size": 1250656, "raw_average_value_size": 2346, "num_data_blocks": 103, "num_entries": 533, "num_filter_entries": 533, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765616199, "oldest_key_time": 1765616199, "file_creation_time": 1765616270, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 12949 microseconds, and 6433 cpu microseconds.
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.908352) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 1265071 bytes OK
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.908385) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.910674) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.910690) EVENT_LOG_v1 {"time_micros": 1765616270910685, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.910725) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 1282796, prev total WAL file size 1282796, number of live WAL files 2.
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.911445) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323631' seq:72057594037927935, type:22 .. '6C6F676D0032353132' seq:0, type:0; will stop at (end)
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(1235KB)], [131(10MB)]
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616270911547, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 12030102, "oldest_snapshot_seqno": -1}
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7868 keys, 11905093 bytes, temperature: kUnknown
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616270997504, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 11905093, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11851258, "index_size": 33050, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19717, "raw_key_size": 205662, "raw_average_key_size": 26, "raw_value_size": 11709368, "raw_average_value_size": 1488, "num_data_blocks": 1294, "num_entries": 7868, "num_filter_entries": 7868, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765616270, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:57:50 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:57:51 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.997770) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 11905093 bytes
Dec 13 03:57:51 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.999994) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.8 rd, 138.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 10.3 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(18.9) write-amplify(9.4) OK, records in: 8398, records dropped: 530 output_compression: NoCompression
Dec 13 03:57:51 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:51.000034) EVENT_LOG_v1 {"time_micros": 1765616271000017, "job": 80, "event": "compaction_finished", "compaction_time_micros": 86034, "compaction_time_cpu_micros": 29977, "output_level": 6, "num_output_files": 1, "total_output_size": 11905093, "num_input_records": 8398, "num_output_records": 7868, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:57:51 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:57:51 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616271000589, "job": 80, "event": "table_file_deletion", "file_number": 133}
Dec 13 03:57:51 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:57:51 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616271002924, "job": 80, "event": "table_file_deletion", "file_number": 131}
Dec 13 03:57:51 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:50.911360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:57:51 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:51.003037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:57:51 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:51.003041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:57:51 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:51.003043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:57:51 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:51.003045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:57:51 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:57:51.003048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:57:51 np0005558241 podman[370211]: 2025-12-13 08:57:51.15362989 +0000 UTC m=+0.071886081 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 03:57:51 np0005558241 podman[370211]: 2025-12-13 08:57:51.256225271 +0000 UTC m=+0.174481432 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:57:51 np0005558241 nova_compute[248510]: 2025-12-13 08:57:51.911 248514 DEBUG nova.compute.manager [req-2ad3acdc-d668-462a-9014-d9ce0cbe9711 req-8c15c748-1baa-4e17-9212-1ddb0d264263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received event network-changed-6b86528d-c1b7-4776-809f-1f8b37569b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:57:51 np0005558241 nova_compute[248510]: 2025-12-13 08:57:51.914 248514 DEBUG nova.compute.manager [req-2ad3acdc-d668-462a-9014-d9ce0cbe9711 req-8c15c748-1baa-4e17-9212-1ddb0d264263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Refreshing instance network info cache due to event network-changed-6b86528d-c1b7-4776-809f-1f8b37569b6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:57:51 np0005558241 nova_compute[248510]: 2025-12-13 08:57:51.915 248514 DEBUG oslo_concurrency.lockutils [req-2ad3acdc-d668-462a-9014-d9ce0cbe9711 req-8c15c748-1baa-4e17-9212-1ddb0d264263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-393f815d-d124-4aea-98c0-126aed0744bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:57:51 np0005558241 nova_compute[248510]: 2025-12-13 08:57:51.916 248514 DEBUG oslo_concurrency.lockutils [req-2ad3acdc-d668-462a-9014-d9ce0cbe9711 req-8c15c748-1baa-4e17-9212-1ddb0d264263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-393f815d-d124-4aea-98c0-126aed0744bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:57:51 np0005558241 nova_compute[248510]: 2025-12-13 08:57:51.916 248514 DEBUG nova.network.neutron [req-2ad3acdc-d668-462a-9014-d9ce0cbe9711 req-8c15c748-1baa-4e17-9212-1ddb0d264263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Refreshing network info cache for port 6b86528d-c1b7-4776-809f-1f8b37569b6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:57:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:57:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:57:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:57:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:57:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2903: 321 pgs: 321 active+clean; 227 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.4 MiB/s wr, 87 op/s
Dec 13 03:57:52 np0005558241 nova_compute[248510]: 2025-12-13 08:57:52.676 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:57:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:57:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:57:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:57:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:57:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:57:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:57:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:57:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:57:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:57:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:57:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:57:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:57:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:57:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:57:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:57:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:57:53 np0005558241 podman[370536]: 2025-12-13 08:57:53.284678742 +0000 UTC m=+0.042712066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:57:53 np0005558241 podman[370536]: 2025-12-13 08:57:53.434799587 +0000 UTC m=+0.192832911 container create e165eef6246619fcd6586ab9184e28d87e1cc4fc949783e7fe76e9ff471c2473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:57:53 np0005558241 systemd[1]: Started libpod-conmon-e165eef6246619fcd6586ab9184e28d87e1cc4fc949783e7fe76e9ff471c2473.scope.
Dec 13 03:57:53 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:57:53 np0005558241 podman[370536]: 2025-12-13 08:57:53.672156797 +0000 UTC m=+0.430190161 container init e165eef6246619fcd6586ab9184e28d87e1cc4fc949783e7fe76e9ff471c2473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 03:57:53 np0005558241 podman[370536]: 2025-12-13 08:57:53.683412092 +0000 UTC m=+0.441445446 container start e165eef6246619fcd6586ab9184e28d87e1cc4fc949783e7fe76e9ff471c2473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 03:57:53 np0005558241 pedantic_davinci[370552]: 167 167
Dec 13 03:57:53 np0005558241 systemd[1]: libpod-e165eef6246619fcd6586ab9184e28d87e1cc4fc949783e7fe76e9ff471c2473.scope: Deactivated successfully.
Dec 13 03:57:53 np0005558241 podman[370536]: 2025-12-13 08:57:53.736401619 +0000 UTC m=+0.494434923 container attach e165eef6246619fcd6586ab9184e28d87e1cc4fc949783e7fe76e9ff471c2473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec 13 03:57:53 np0005558241 podman[370536]: 2025-12-13 08:57:53.737676651 +0000 UTC m=+0.495709995 container died e165eef6246619fcd6586ab9184e28d87e1cc4fc949783e7fe76e9ff471c2473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 03:57:53 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f3f0bde0bdc992be0f244f608f9f74c2df85eb3a12a20106be3b62281c1d7dbe-merged.mount: Deactivated successfully.
Dec 13 03:57:54 np0005558241 podman[370536]: 2025-12-13 08:57:54.03300792 +0000 UTC m=+0.791041234 container remove e165eef6246619fcd6586ab9184e28d87e1cc4fc949783e7fe76e9ff471c2473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_davinci, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 03:57:54 np0005558241 systemd[1]: libpod-conmon-e165eef6246619fcd6586ab9184e28d87e1cc4fc949783e7fe76e9ff471c2473.scope: Deactivated successfully.
Dec 13 03:57:54 np0005558241 nova_compute[248510]: 2025-12-13 08:57:54.146 248514 DEBUG nova.network.neutron [req-2ad3acdc-d668-462a-9014-d9ce0cbe9711 req-8c15c748-1baa-4e17-9212-1ddb0d264263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Updated VIF entry in instance network info cache for port 6b86528d-c1b7-4776-809f-1f8b37569b6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:57:54 np0005558241 nova_compute[248510]: 2025-12-13 08:57:54.149 248514 DEBUG nova.network.neutron [req-2ad3acdc-d668-462a-9014-d9ce0cbe9711 req-8c15c748-1baa-4e17-9212-1ddb0d264263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Updating instance_info_cache with network_info: [{"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:57:54 np0005558241 nova_compute[248510]: 2025-12-13 08:57:54.152 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:54 np0005558241 nova_compute[248510]: 2025-12-13 08:57:54.275 248514 DEBUG oslo_concurrency.lockutils [req-2ad3acdc-d668-462a-9014-d9ce0cbe9711 req-8c15c748-1baa-4e17-9212-1ddb0d264263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-393f815d-d124-4aea-98c0-126aed0744bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:57:54 np0005558241 nova_compute[248510]: 2025-12-13 08:57:54.282 248514 INFO nova.compute.manager [None req-b384d5e4-3937-44c0-b52a-ccb10269a664 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Get console output#033[00m
Dec 13 03:57:54 np0005558241 nova_compute[248510]: 2025-12-13 08:57:54.290 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 03:57:54 np0005558241 podman[370577]: 2025-12-13 08:57:54.293340682 +0000 UTC m=+0.071807339 container create 77d6649b439c7279e0efa57bcc7f2f3d55ef8972e3e74fe85968d70971bdfb5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_newton, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:57:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2904: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Dec 13 03:57:54 np0005558241 systemd[1]: Started libpod-conmon-77d6649b439c7279e0efa57bcc7f2f3d55ef8972e3e74fe85968d70971bdfb5b.scope.
Dec 13 03:57:54 np0005558241 podman[370577]: 2025-12-13 08:57:54.251930198 +0000 UTC m=+0.030396905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:57:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:57:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc86f3c7521091a8e63422aa4806e5e22a753a36b8e7f645623a34f02806960/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc86f3c7521091a8e63422aa4806e5e22a753a36b8e7f645623a34f02806960/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc86f3c7521091a8e63422aa4806e5e22a753a36b8e7f645623a34f02806960/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc86f3c7521091a8e63422aa4806e5e22a753a36b8e7f645623a34f02806960/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc86f3c7521091a8e63422aa4806e5e22a753a36b8e7f645623a34f02806960/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:54 np0005558241 podman[370577]: 2025-12-13 08:57:54.590615839 +0000 UTC m=+0.369082536 container init 77d6649b439c7279e0efa57bcc7f2f3d55ef8972e3e74fe85968d70971bdfb5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_newton, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:57:54 np0005558241 podman[370577]: 2025-12-13 08:57:54.603920734 +0000 UTC m=+0.382387391 container start 77d6649b439c7279e0efa57bcc7f2f3d55ef8972e3e74fe85968d70971bdfb5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_newton, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:57:54 np0005558241 podman[370577]: 2025-12-13 08:57:54.607875121 +0000 UTC m=+0.386341838 container attach 77d6649b439c7279e0efa57bcc7f2f3d55ef8972e3e74fe85968d70971bdfb5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_newton, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 03:57:55 np0005558241 heuristic_newton[370594]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:57:55 np0005558241 heuristic_newton[370594]: --> All data devices are unavailable
Dec 13 03:57:55 np0005558241 systemd[1]: libpod-77d6649b439c7279e0efa57bcc7f2f3d55ef8972e3e74fe85968d70971bdfb5b.scope: Deactivated successfully.
Dec 13 03:57:55 np0005558241 podman[370577]: 2025-12-13 08:57:55.13034529 +0000 UTC m=+0.908811947 container died 77d6649b439c7279e0efa57bcc7f2f3d55ef8972e3e74fe85968d70971bdfb5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.199 248514 DEBUG oslo_concurrency.lockutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.200 248514 DEBUG oslo_concurrency.lockutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.200 248514 DEBUG oslo_concurrency.lockutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.202 248514 DEBUG oslo_concurrency.lockutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.203 248514 DEBUG oslo_concurrency.lockutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.204 248514 INFO nova.compute.manager [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Terminating instance#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.205 248514 DEBUG nova.compute.manager [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:57:55 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3fc86f3c7521091a8e63422aa4806e5e22a753a36b8e7f645623a34f02806960-merged.mount: Deactivated successfully.
Dec 13 03:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.435 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.436 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.437 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:55 np0005558241 podman[370577]: 2025-12-13 08:57:55.622311462 +0000 UTC m=+1.400778159 container remove 77d6649b439c7279e0efa57bcc7f2f3d55ef8972e3e74fe85968d70971bdfb5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 03:57:55 np0005558241 kernel: tap8ee3e1ae-47 (unregistering): left promiscuous mode
Dec 13 03:57:55 np0005558241 NetworkManager[50376]: <info>  [1765616275.6399] device (tap8ee3e1ae-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:57:55 np0005558241 systemd[1]: libpod-conmon-77d6649b439c7279e0efa57bcc7f2f3d55ef8972e3e74fe85968d70971bdfb5b.scope: Deactivated successfully.
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.710 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:55Z|01201|binding|INFO|Releasing lport 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 from this chassis (sb_readonly=0)
Dec 13 03:57:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:55Z|01202|binding|INFO|Setting lport 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 down in Southbound
Dec 13 03:57:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:55Z|01203|binding|INFO|Removing iface tap8ee3e1ae-47 ovn-installed in OVS
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.721 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:55 np0005558241 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Dec 13 03:57:55 np0005558241 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d0000007a.scope: Consumed 13.738s CPU time.
Dec 13 03:57:55 np0005558241 systemd-machined[210538]: Machine qemu-149-instance-0000007a terminated.
Dec 13 03:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.758 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:3b:59 10.100.0.13'], port_security=['fa:16:3e:ed:3b:59 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5a7142a0-6e82-418c-affe-88fd6beb2ad9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8944c356-661d-4684-a169-e2ad4b13e098', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca3b9929-a40c-461d-b2b9-49fd6af07fd3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.760 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 in datapath 793ba3c3-1004-4068-a89a-dc7b4c56fc43 unbound from our chassis#033[00m
Dec 13 03:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.763 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 793ba3c3-1004-4068-a89a-dc7b4c56fc43#033[00m
Dec 13 03:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.783 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f41b49-7a4f-47d8-958b-8076ca16382f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.820 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2d7ee9c5-5a11-43f0-90e1-3dd85591a04a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.824 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1c335d60-fa85-4f46-a189-2605e33eb351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.835 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.846 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.853 248514 INFO nova.virt.libvirt.driver [-] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Instance destroyed successfully.#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.854 248514 DEBUG nova.objects.instance [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid 5a7142a0-6e82-418c-affe-88fd6beb2ad9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.870 248514 DEBUG nova.virt.libvirt.vif [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:57:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-182179822',display_name='tempest-TestNetworkBasicOps-server-182179822',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-182179822',id=122,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIyOWHhK5PGqeOlzFwokovmTuf3HTihMwOQrzfuGYU+/TrdkTdWDTQvnoNZ7qiFrzCGlnIvswkbj8TaejN4nwLPFUx3mjtQULdplgXkj1ea+cO+RfMC1iM+NaDk/WgBTsg==',key_name='tempest-TestNetworkBasicOps-1885276769',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:57:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-ly8zv4h3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:57:36Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=5a7142a0-6e82-418c-affe-88fd6beb2ad9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.871 248514 DEBUG nova.network.os_vif_util [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "address": "fa:16:3e:ed:3b:59", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ee3e1ae-47", "ovs_interfaceid": "8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.872 248514 DEBUG nova.network.os_vif_util [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ed:3b:59,bridge_name='br-int',has_traffic_filtering=True,id=8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ee3e1ae-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.871 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[eb7b76c7-d90e-48c0-ba9a-bf736e835b29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.872 248514 DEBUG os_vif [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:3b:59,bridge_name='br-int',has_traffic_filtering=True,id=8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ee3e1ae-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.874 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.874 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ee3e1ae-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.876 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.886 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.890 248514 INFO os_vif [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:3b:59,bridge_name='br-int',has_traffic_filtering=True,id=8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ee3e1ae-47')#033[00m
Dec 13 03:57:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.908 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[75d85142-62c0-4ef0-9fca-425d9eb304a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap793ba3c3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:ed:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 349], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878911, 'reachable_time': 42802, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370699, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.936 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[71881209-143f-4cef-b0f9-b35f0b1f769f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap793ba3c3-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 878923, 'tstamp': 878923}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370716, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap793ba3c3-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 878927, 'tstamp': 878927}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370716, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.938 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap793ba3c3-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.940 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:55 np0005558241 nova_compute[248510]: 2025-12-13 08:57:55.941 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.942 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap793ba3c3-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.942 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.942 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap793ba3c3-10, col_values=(('external_ids', {'iface-id': 'e25ab14d-8bf6-4007-ae4c-085df43b875d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:57:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:57:55.943 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:57:56 np0005558241 podman[370732]: 2025-12-13 08:57:56.174201241 +0000 UTC m=+0.026438718 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:57:56 np0005558241 podman[370732]: 2025-12-13 08:57:56.296938165 +0000 UTC m=+0.149175642 container create b12acd4dbc357504ffa7d3e8679617f3de4efa5fbc14c8d7f694b56363ed2df4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:57:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2905: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Dec 13 03:57:56 np0005558241 nova_compute[248510]: 2025-12-13 08:57:56.338 248514 DEBUG nova.compute.manager [req-a72b99c4-d674-4f30-bced-7f461f864a2a req-a81a7e3b-6ec9-4f47-87ac-660ead63bf00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received event network-vif-unplugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:57:56 np0005558241 nova_compute[248510]: 2025-12-13 08:57:56.339 248514 DEBUG oslo_concurrency.lockutils [req-a72b99c4-d674-4f30-bced-7f461f864a2a req-a81a7e3b-6ec9-4f47-87ac-660ead63bf00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:56 np0005558241 nova_compute[248510]: 2025-12-13 08:57:56.339 248514 DEBUG oslo_concurrency.lockutils [req-a72b99c4-d674-4f30-bced-7f461f864a2a req-a81a7e3b-6ec9-4f47-87ac-660ead63bf00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:56 np0005558241 nova_compute[248510]: 2025-12-13 08:57:56.339 248514 DEBUG oslo_concurrency.lockutils [req-a72b99c4-d674-4f30-bced-7f461f864a2a req-a81a7e3b-6ec9-4f47-87ac-660ead63bf00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:56 np0005558241 nova_compute[248510]: 2025-12-13 08:57:56.339 248514 DEBUG nova.compute.manager [req-a72b99c4-d674-4f30-bced-7f461f864a2a req-a81a7e3b-6ec9-4f47-87ac-660ead63bf00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] No waiting events found dispatching network-vif-unplugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:57:56 np0005558241 nova_compute[248510]: 2025-12-13 08:57:56.339 248514 DEBUG nova.compute.manager [req-a72b99c4-d674-4f30-bced-7f461f864a2a req-a81a7e3b-6ec9-4f47-87ac-660ead63bf00 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received event network-vif-unplugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:57:56 np0005558241 systemd[1]: Started libpod-conmon-b12acd4dbc357504ffa7d3e8679617f3de4efa5fbc14c8d7f694b56363ed2df4.scope.
Dec 13 03:57:56 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:57:56 np0005558241 podman[370732]: 2025-12-13 08:57:56.777440386 +0000 UTC m=+0.629677903 container init b12acd4dbc357504ffa7d3e8679617f3de4efa5fbc14c8d7f694b56363ed2df4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_dubinsky, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:57:56 np0005558241 podman[370732]: 2025-12-13 08:57:56.790421604 +0000 UTC m=+0.642659101 container start b12acd4dbc357504ffa7d3e8679617f3de4efa5fbc14c8d7f694b56363ed2df4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_dubinsky, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:57:56 np0005558241 vigorous_dubinsky[370749]: 167 167
Dec 13 03:57:56 np0005558241 systemd[1]: libpod-b12acd4dbc357504ffa7d3e8679617f3de4efa5fbc14c8d7f694b56363ed2df4.scope: Deactivated successfully.
Dec 13 03:57:56 np0005558241 podman[370732]: 2025-12-13 08:57:56.793912219 +0000 UTC m=+0.646149736 container attach b12acd4dbc357504ffa7d3e8679617f3de4efa5fbc14c8d7f694b56363ed2df4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 03:57:56 np0005558241 podman[370732]: 2025-12-13 08:57:56.803592816 +0000 UTC m=+0.655830303 container died b12acd4dbc357504ffa7d3e8679617f3de4efa5fbc14c8d7f694b56363ed2df4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_dubinsky, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 03:57:56 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c287a890b2ad78dc3f75eeca2019211e4af85016e4db29219d9340510199a581-merged.mount: Deactivated successfully.
Dec 13 03:57:56 np0005558241 podman[370732]: 2025-12-13 08:57:56.848410693 +0000 UTC m=+0.700648170 container remove b12acd4dbc357504ffa7d3e8679617f3de4efa5fbc14c8d7f694b56363ed2df4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_dubinsky, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:57:56 np0005558241 systemd[1]: libpod-conmon-b12acd4dbc357504ffa7d3e8679617f3de4efa5fbc14c8d7f694b56363ed2df4.scope: Deactivated successfully.
Dec 13 03:57:56 np0005558241 nova_compute[248510]: 2025-12-13 08:57:56.971 248514 INFO nova.virt.libvirt.driver [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Deleting instance files /var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9_del
Dec 13 03:57:56 np0005558241 nova_compute[248510]: 2025-12-13 08:57:56.971 248514 INFO nova.virt.libvirt.driver [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Deletion of /var/lib/nova/instances/5a7142a0-6e82-418c-affe-88fd6beb2ad9_del complete
Dec 13 03:57:57 np0005558241 nova_compute[248510]: 2025-12-13 08:57:57.034 248514 INFO nova.compute.manager [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Took 1.83 seconds to destroy the instance on the hypervisor.
Dec 13 03:57:57 np0005558241 nova_compute[248510]: 2025-12-13 08:57:57.035 248514 DEBUG oslo.service.loopingcall [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 03:57:57 np0005558241 nova_compute[248510]: 2025-12-13 08:57:57.035 248514 DEBUG nova.compute.manager [-] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 03:57:57 np0005558241 nova_compute[248510]: 2025-12-13 08:57:57.035 248514 DEBUG nova.network.neutron [-] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 03:57:57 np0005558241 podman[370773]: 2025-12-13 08:57:57.059046169 +0000 UTC m=+0.049553414 container create 3637c2e8710a6c604fc124526ac7194c3618289aa089dbb7ab5a3020afcfe6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:57:57 np0005558241 systemd[1]: Started libpod-conmon-3637c2e8710a6c604fc124526ac7194c3618289aa089dbb7ab5a3020afcfe6c4.scope.
Dec 13 03:57:57 np0005558241 podman[370773]: 2025-12-13 08:57:57.037632085 +0000 UTC m=+0.028139330 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:57:57 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:57:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55c779bda6682a772f0dba6aa2f1b35c81f66b0e65968a49d5299b862caf679/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55c779bda6682a772f0dba6aa2f1b35c81f66b0e65968a49d5299b862caf679/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55c779bda6682a772f0dba6aa2f1b35c81f66b0e65968a49d5299b862caf679/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f55c779bda6682a772f0dba6aa2f1b35c81f66b0e65968a49d5299b862caf679/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:57 np0005558241 podman[370773]: 2025-12-13 08:57:57.158586686 +0000 UTC m=+0.149093911 container init 3637c2e8710a6c604fc124526ac7194c3618289aa089dbb7ab5a3020afcfe6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:57:57 np0005558241 podman[370773]: 2025-12-13 08:57:57.166594682 +0000 UTC m=+0.157101907 container start 3637c2e8710a6c604fc124526ac7194c3618289aa089dbb7ab5a3020afcfe6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_haibt, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 03:57:57 np0005558241 podman[370773]: 2025-12-13 08:57:57.16980176 +0000 UTC m=+0.160308985 container attach 3637c2e8710a6c604fc124526ac7194c3618289aa089dbb7ab5a3020afcfe6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_haibt, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]: {
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:    "0": [
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:        {
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "devices": [
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "/dev/loop3"
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            ],
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "lv_name": "ceph_lv0",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "lv_size": "21470642176",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "name": "ceph_lv0",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "tags": {
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.cluster_name": "ceph",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.crush_device_class": "",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.encrypted": "0",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.objectstore": "bluestore",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.osd_id": "0",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.type": "block",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.vdo": "0",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.with_tpm": "0"
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            },
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "type": "block",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "vg_name": "ceph_vg0"
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:        }
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:    ],
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:    "1": [
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:        {
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "devices": [
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "/dev/loop4"
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            ],
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "lv_name": "ceph_lv1",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "lv_size": "21470642176",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "name": "ceph_lv1",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "tags": {
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.cluster_name": "ceph",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.crush_device_class": "",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.encrypted": "0",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.objectstore": "bluestore",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.osd_id": "1",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.type": "block",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.vdo": "0",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.with_tpm": "0"
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            },
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "type": "block",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "vg_name": "ceph_vg1"
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:        }
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:    ],
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:    "2": [
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:        {
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "devices": [
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "/dev/loop5"
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            ],
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "lv_name": "ceph_lv2",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "lv_size": "21470642176",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "name": "ceph_lv2",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "tags": {
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.cluster_name": "ceph",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.crush_device_class": "",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.encrypted": "0",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.objectstore": "bluestore",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.osd_id": "2",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.type": "block",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.vdo": "0",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:                "ceph.with_tpm": "0"
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            },
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "type": "block",
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:            "vg_name": "ceph_vg2"
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:        }
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]:    ]
Dec 13 03:57:57 np0005558241 quirky_haibt[370790]: }
Dec 13 03:57:57 np0005558241 systemd[1]: libpod-3637c2e8710a6c604fc124526ac7194c3618289aa089dbb7ab5a3020afcfe6c4.scope: Deactivated successfully.
Dec 13 03:57:57 np0005558241 podman[370773]: 2025-12-13 08:57:57.502046473 +0000 UTC m=+0.492553698 container died 3637c2e8710a6c604fc124526ac7194c3618289aa089dbb7ab5a3020afcfe6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_haibt, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 03:57:57 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f55c779bda6682a772f0dba6aa2f1b35c81f66b0e65968a49d5299b862caf679-merged.mount: Deactivated successfully.
Dec 13 03:57:57 np0005558241 podman[370773]: 2025-12-13 08:57:57.550912299 +0000 UTC m=+0.541419524 container remove 3637c2e8710a6c604fc124526ac7194c3618289aa089dbb7ab5a3020afcfe6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:57:57 np0005558241 systemd[1]: libpod-conmon-3637c2e8710a6c604fc124526ac7194c3618289aa089dbb7ab5a3020afcfe6c4.scope: Deactivated successfully.
Dec 13 03:57:57 np0005558241 nova_compute[248510]: 2025-12-13 08:57:57.678 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:57:58 np0005558241 podman[370871]: 2025-12-13 08:57:58.060709968 +0000 UTC m=+0.040186515 container create 08ac046bca6e467ddf65b72f07d5546bab707f1a76af254edee768073d72787f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:57:58 np0005558241 systemd[1]: Started libpod-conmon-08ac046bca6e467ddf65b72f07d5546bab707f1a76af254edee768073d72787f.scope.
Dec 13 03:57:58 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:57:58 np0005558241 podman[370871]: 2025-12-13 08:57:58.127929483 +0000 UTC m=+0.107406060 container init 08ac046bca6e467ddf65b72f07d5546bab707f1a76af254edee768073d72787f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_feistel, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:57:58 np0005558241 podman[370871]: 2025-12-13 08:57:58.134402911 +0000 UTC m=+0.113879458 container start 08ac046bca6e467ddf65b72f07d5546bab707f1a76af254edee768073d72787f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_feistel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 03:57:58 np0005558241 podman[370871]: 2025-12-13 08:57:58.041392645 +0000 UTC m=+0.020869192 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:57:58 np0005558241 podman[370871]: 2025-12-13 08:57:58.13843709 +0000 UTC m=+0.117913657 container attach 08ac046bca6e467ddf65b72f07d5546bab707f1a76af254edee768073d72787f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:57:58 np0005558241 busy_feistel[370887]: 167 167
Dec 13 03:57:58 np0005558241 systemd[1]: libpod-08ac046bca6e467ddf65b72f07d5546bab707f1a76af254edee768073d72787f.scope: Deactivated successfully.
Dec 13 03:57:58 np0005558241 podman[370871]: 2025-12-13 08:57:58.140027399 +0000 UTC m=+0.119503946 container died 08ac046bca6e467ddf65b72f07d5546bab707f1a76af254edee768073d72787f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:57:58 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1a9ace429bab01ab8b65d1ec7a7dd7e79320851e21d4e8250e40099ff38a050a-merged.mount: Deactivated successfully.
Dec 13 03:57:58 np0005558241 podman[370871]: 2025-12-13 08:57:58.178953052 +0000 UTC m=+0.158429599 container remove 08ac046bca6e467ddf65b72f07d5546bab707f1a76af254edee768073d72787f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:57:58 np0005558241 systemd[1]: libpod-conmon-08ac046bca6e467ddf65b72f07d5546bab707f1a76af254edee768073d72787f.scope: Deactivated successfully.
Dec 13 03:57:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2906: 321 pgs: 321 active+clean; 213 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 149 op/s
Dec 13 03:57:58 np0005558241 podman[370911]: 2025-12-13 08:57:58.393443902 +0000 UTC m=+0.047566775 container create d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chebyshev, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 03:57:58 np0005558241 systemd[1]: Started libpod-conmon-d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450.scope.
Dec 13 03:57:58 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:57:58 np0005558241 podman[370911]: 2025-12-13 08:57:58.376265462 +0000 UTC m=+0.030388365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:57:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c68530751667ade0f76d13871f41d545b01703d0d80a4db7b4031352e671e84d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c68530751667ade0f76d13871f41d545b01703d0d80a4db7b4031352e671e84d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c68530751667ade0f76d13871f41d545b01703d0d80a4db7b4031352e671e84d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c68530751667ade0f76d13871f41d545b01703d0d80a4db7b4031352e671e84d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:58 np0005558241 podman[370911]: 2025-12-13 08:57:58.492902287 +0000 UTC m=+0.147025180 container init d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chebyshev, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:57:58 np0005558241 nova_compute[248510]: 2025-12-13 08:57:58.497 248514 DEBUG nova.compute.manager [req-88da77ac-52cb-4ca8-a3c1-347381dcb04a req-81fea4bd-654e-4bc7-9665-3ba9d8c484b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received event network-vif-plugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:57:58 np0005558241 nova_compute[248510]: 2025-12-13 08:57:58.500 248514 DEBUG oslo_concurrency.lockutils [req-88da77ac-52cb-4ca8-a3c1-347381dcb04a req-81fea4bd-654e-4bc7-9665-3ba9d8c484b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:57:58 np0005558241 nova_compute[248510]: 2025-12-13 08:57:58.500 248514 DEBUG oslo_concurrency.lockutils [req-88da77ac-52cb-4ca8-a3c1-347381dcb04a req-81fea4bd-654e-4bc7-9665-3ba9d8c484b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:57:58 np0005558241 nova_compute[248510]: 2025-12-13 08:57:58.500 248514 DEBUG oslo_concurrency.lockutils [req-88da77ac-52cb-4ca8-a3c1-347381dcb04a req-81fea4bd-654e-4bc7-9665-3ba9d8c484b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:57:58 np0005558241 nova_compute[248510]: 2025-12-13 08:57:58.500 248514 DEBUG nova.compute.manager [req-88da77ac-52cb-4ca8-a3c1-347381dcb04a req-81fea4bd-654e-4bc7-9665-3ba9d8c484b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] No waiting events found dispatching network-vif-plugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:57:58 np0005558241 nova_compute[248510]: 2025-12-13 08:57:58.501 248514 WARNING nova.compute.manager [req-88da77ac-52cb-4ca8-a3c1-347381dcb04a req-81fea4bd-654e-4bc7-9665-3ba9d8c484b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received unexpected event network-vif-plugged-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:57:58 np0005558241 podman[370911]: 2025-12-13 08:57:58.504009059 +0000 UTC m=+0.158131932 container start d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 03:57:58 np0005558241 podman[370911]: 2025-12-13 08:57:58.524408528 +0000 UTC m=+0.178531491 container attach d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chebyshev, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 03:57:59 np0005558241 lvm[371006]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:57:59 np0005558241 lvm[371005]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:57:59 np0005558241 lvm[371005]: VG ceph_vg0 finished
Dec 13 03:57:59 np0005558241 lvm[371006]: VG ceph_vg1 finished
Dec 13 03:57:59 np0005558241 lvm[371008]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:57:59 np0005558241 lvm[371008]: VG ceph_vg2 finished
Dec 13 03:57:59 np0005558241 vigilant_chebyshev[370927]: {}
Dec 13 03:57:59 np0005558241 systemd[1]: libpod-d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450.scope: Deactivated successfully.
Dec 13 03:57:59 np0005558241 systemd[1]: libpod-d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450.scope: Consumed 1.371s CPU time.
Dec 13 03:57:59 np0005558241 podman[370911]: 2025-12-13 08:57:59.386584982 +0000 UTC m=+1.040707855 container died d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chebyshev, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:57:59 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c68530751667ade0f76d13871f41d545b01703d0d80a4db7b4031352e671e84d-merged.mount: Deactivated successfully.
Dec 13 03:57:59 np0005558241 podman[370911]: 2025-12-13 08:57:59.444297135 +0000 UTC m=+1.098420008 container remove d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_chebyshev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 03:57:59 np0005558241 systemd[1]: libpod-conmon-d25a0406871c5dceb22541369bd83fa62f86548e04c0698db67bd39822a53450.scope: Deactivated successfully.
Dec 13 03:57:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:57:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:57:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:57:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:57:59 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:59Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:05:d0:ff 10.100.0.11
Dec 13 03:57:59 np0005558241 ovn_controller[148476]: 2025-12-13T08:57:59Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:05:d0:ff 10.100.0.11
Dec 13 03:58:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2907: 321 pgs: 321 active+clean; 182 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 185 op/s
Dec 13 03:58:00 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:58:00 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:58:00 np0005558241 nova_compute[248510]: 2025-12-13 08:58:00.509 248514 DEBUG nova.network.neutron [-] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:58:00 np0005558241 nova_compute[248510]: 2025-12-13 08:58:00.548 248514 INFO nova.compute.manager [-] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Took 3.51 seconds to deallocate network for instance.#033[00m
Dec 13 03:58:00 np0005558241 nova_compute[248510]: 2025-12-13 08:58:00.649 248514 DEBUG oslo_concurrency.lockutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:00 np0005558241 nova_compute[248510]: 2025-12-13 08:58:00.650 248514 DEBUG oslo_concurrency.lockutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:00 np0005558241 nova_compute[248510]: 2025-12-13 08:58:00.760 248514 DEBUG nova.compute.manager [req-82a5cedd-f32e-4625-aca5-621eea7ea5cb req-d2b300ff-3d96-42e6-aa6b-3f1760a83068 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Received event network-vif-deleted-8ee3e1ae-47fa-4e84-a2a1-a548d63c21e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:58:00 np0005558241 nova_compute[248510]: 2025-12-13 08:58:00.769 248514 DEBUG oslo_concurrency.processutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:58:00 np0005558241 nova_compute[248510]: 2025-12-13 08:58:00.880 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:58:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:58:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1193198550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:58:01 np0005558241 nova_compute[248510]: 2025-12-13 08:58:01.393 248514 DEBUG oslo_concurrency.processutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:58:01 np0005558241 nova_compute[248510]: 2025-12-13 08:58:01.402 248514 DEBUG nova.compute.provider_tree [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:58:01 np0005558241 nova_compute[248510]: 2025-12-13 08:58:01.424 248514 DEBUG nova.scheduler.client.report [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:58:01 np0005558241 nova_compute[248510]: 2025-12-13 08:58:01.454 248514 DEBUG oslo_concurrency.lockutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:01 np0005558241 nova_compute[248510]: 2025-12-13 08:58:01.486 248514 INFO nova.scheduler.client.report [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance 5a7142a0-6e82-418c-affe-88fd6beb2ad9#033[00m
Dec 13 03:58:01 np0005558241 nova_compute[248510]: 2025-12-13 08:58:01.570 248514 DEBUG oslo_concurrency.lockutils [None req-87a748d3-7e81-4dce-a0a4-96f5d1352476 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "5a7142a0-6e82-418c-affe-88fd6beb2ad9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2908: 321 pgs: 321 active+clean; 182 MiB data, 919 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 110 op/s
Dec 13 03:58:02 np0005558241 nova_compute[248510]: 2025-12-13 08:58:02.681 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.299 248514 DEBUG oslo_concurrency.lockutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.300 248514 DEBUG oslo_concurrency.lockutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.300 248514 DEBUG oslo_concurrency.lockutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.300 248514 DEBUG oslo_concurrency.lockutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.301 248514 DEBUG oslo_concurrency.lockutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.302 248514 INFO nova.compute.manager [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Terminating instance#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.303 248514 DEBUG nova.compute.manager [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:58:03 np0005558241 kernel: tap4013e964-3f (unregistering): left promiscuous mode
Dec 13 03:58:03 np0005558241 NetworkManager[50376]: <info>  [1765616283.3539] device (tap4013e964-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.361 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:03Z|01204|binding|INFO|Releasing lport 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 from this chassis (sb_readonly=0)
Dec 13 03:58:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:03Z|01205|binding|INFO|Setting lport 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 down in Southbound
Dec 13 03:58:03 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:03Z|01206|binding|INFO|Removing iface tap4013e964-3f ovn-installed in OVS
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.367 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.372 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:a9:be 10.100.0.9'], port_security=['fa:16:3e:7e:a9:be 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c78db00b-677b-4c8b-af80-5bb717876b41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca3b9929-a40c-461d-b2b9-49fd6af07fd3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=4013e964-3f6f-4aaa-af6d-20b9cb5c2a39) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:58:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.373 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 in datapath 793ba3c3-1004-4068-a89a-dc7b4c56fc43 unbound from our chassis#033[00m
Dec 13 03:58:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.375 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 793ba3c3-1004-4068-a89a-dc7b4c56fc43, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:58:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.377 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[829b0229-a1a1-4eeb-8d68-40615b02ba7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.379 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43 namespace which is not needed anymore#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.382 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:03 np0005558241 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000079.scope: Deactivated successfully.
Dec 13 03:58:03 np0005558241 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000079.scope: Consumed 14.994s CPU time.
Dec 13 03:58:03 np0005558241 systemd-machined[210538]: Machine qemu-148-instance-00000079 terminated.
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.525 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.545 248514 INFO nova.virt.libvirt.driver [-] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Instance destroyed successfully.#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.545 248514 DEBUG nova.objects.instance [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:58:03 np0005558241 neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43[368747]: [NOTICE]   (368751) : haproxy version is 2.8.14-c23fe91
Dec 13 03:58:03 np0005558241 neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43[368747]: [NOTICE]   (368751) : path to executable is /usr/sbin/haproxy
Dec 13 03:58:03 np0005558241 neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43[368747]: [WARNING]  (368751) : Exiting Master process...
Dec 13 03:58:03 np0005558241 neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43[368747]: [ALERT]    (368751) : Current worker (368753) exited with code 143 (Terminated)
Dec 13 03:58:03 np0005558241 neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43[368747]: [WARNING]  (368751) : All workers exited. Exiting... (0)
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.563 248514 DEBUG nova.virt.libvirt.vif [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:56:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1758092435',display_name='tempest-TestNetworkBasicOps-server-1758092435',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1758092435',id=121,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDDhA9DlC0XXlTi33TP7442ZgmxgyTTZRyp/5+PtSzz/z4TT06lLY5cCNioPf17m6xj5p3Rza8zGSpra/Ou4pMBK7drw3VX1RTJrfYr/jaVe2RRgvmXLfZfYTeWegMxqwQ==',key_name='tempest-TestNetworkBasicOps-530003057',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:56:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-9yn1gbd0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:56:52Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.563 248514 DEBUG nova.network.os_vif_util [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "address": "fa:16:3e:7e:a9:be", "network": {"id": "793ba3c3-1004-4068-a89a-dc7b4c56fc43", "bridge": "br-int", "label": "tempest-network-smoke--1074909412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4013e964-3f", "ovs_interfaceid": "4013e964-3f6f-4aaa-af6d-20b9cb5c2a39", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.566 248514 DEBUG nova.network.os_vif_util [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7e:a9:be,bridge_name='br-int',has_traffic_filtering=True,id=4013e964-3f6f-4aaa-af6d-20b9cb5c2a39,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4013e964-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:58:03 np0005558241 systemd[1]: libpod-5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17.scope: Deactivated successfully.
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.567 248514 DEBUG os_vif [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:a9:be,bridge_name='br-int',has_traffic_filtering=True,id=4013e964-3f6f-4aaa-af6d-20b9cb5c2a39,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4013e964-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.570 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.570 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4013e964-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:58:03 np0005558241 podman[371093]: 2025-12-13 08:58:03.571163071 +0000 UTC m=+0.058917813 container died 5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.573 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.576 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.579 248514 INFO os_vif [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:a9:be,bridge_name='br-int',has_traffic_filtering=True,id=4013e964-3f6f-4aaa-af6d-20b9cb5c2a39,network=Network(793ba3c3-1004-4068-a89a-dc7b4c56fc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4013e964-3f')#033[00m
Dec 13 03:58:03 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17-userdata-shm.mount: Deactivated successfully.
Dec 13 03:58:03 np0005558241 systemd[1]: var-lib-containers-storage-overlay-aad8c437fd7f83a96f7ee83b44aeba75a23697638d72f15b7ac660f7308e1111-merged.mount: Deactivated successfully.
Dec 13 03:58:03 np0005558241 podman[371093]: 2025-12-13 08:58:03.616649265 +0000 UTC m=+0.104404007 container cleanup 5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.625 248514 DEBUG nova.compute.manager [req-eec9fecb-9e52-42ad-a959-4f9108946e02 req-f7462493-4052-44ee-b07c-16327d58fbe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received event network-vif-unplugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.625 248514 DEBUG oslo_concurrency.lockutils [req-eec9fecb-9e52-42ad-a959-4f9108946e02 req-f7462493-4052-44ee-b07c-16327d58fbe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.626 248514 DEBUG oslo_concurrency.lockutils [req-eec9fecb-9e52-42ad-a959-4f9108946e02 req-f7462493-4052-44ee-b07c-16327d58fbe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.626 248514 DEBUG oslo_concurrency.lockutils [req-eec9fecb-9e52-42ad-a959-4f9108946e02 req-f7462493-4052-44ee-b07c-16327d58fbe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.626 248514 DEBUG nova.compute.manager [req-eec9fecb-9e52-42ad-a959-4f9108946e02 req-f7462493-4052-44ee-b07c-16327d58fbe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] No waiting events found dispatching network-vif-unplugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.627 248514 DEBUG nova.compute.manager [req-eec9fecb-9e52-42ad-a959-4f9108946e02 req-f7462493-4052-44ee-b07c-16327d58fbe3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received event network-vif-unplugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:58:03 np0005558241 systemd[1]: libpod-conmon-5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17.scope: Deactivated successfully.
Dec 13 03:58:03 np0005558241 podman[371149]: 2025-12-13 08:58:03.691145318 +0000 UTC m=+0.047459393 container remove 5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:58:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.697 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[55558e1d-1a5e-451d-be7f-aba67b3c1cf0]: (4, ('Sat Dec 13 08:58:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43 (5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17)\n5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17\nSat Dec 13 08:58:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43 (5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17)\n5ae7136b2aef389e61e4e109d086b2ba663f8641e1eb9a5955d19aecb4d0aa17\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.699 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e99657c5-3ec0-4cca-8316-ecef893d6fa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.701 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap793ba3c3-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.703 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:03 np0005558241 kernel: tap793ba3c3-10: left promiscuous mode
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.717 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.720 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cfee50c6-5ec3-4905-9e15-f13dd3810e94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.736 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3848fafd-fee8-4f84-9cbe-a2d68e9c889f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.738 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5e8c0f1c-93f7-4a53-b311-d2e4dbc45428]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.759 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cb517c52-4b7d-4730-8310-a01bbda7c382]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878905, 'reachable_time': 38688, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371172, 'error': None, 'target': 'ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:03 np0005558241 systemd[1]: run-netns-ovnmeta\x2d793ba3c3\x2d1004\x2d4068\x2da89a\x2ddc7b4c56fc43.mount: Deactivated successfully.
Dec 13 03:58:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.770 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-793ba3c3-1004-4068-a89a-dc7b4c56fc43 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:58:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:03.770 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[4db225dd-66e0-4e7b-bfd2-a5de3515714b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:58:03 np0005558241 podman[371162]: 2025-12-13 08:58:03.812097449 +0000 UTC m=+0.067453012 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 03:58:03 np0005558241 podman[371164]: 2025-12-13 08:58:03.83012201 +0000 UTC m=+0.083373912 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 03:58:03 np0005558241 podman[371184]: 2025-12-13 08:58:03.881133509 +0000 UTC m=+0.090096737 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202)
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.881 248514 INFO nova.virt.libvirt.driver [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Deleting instance files /var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_del#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.882 248514 INFO nova.virt.libvirt.driver [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Deletion of /var/lib/nova/instances/1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d_del complete#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.943 248514 INFO nova.compute.manager [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.943 248514 DEBUG oslo.service.loopingcall [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.944 248514 DEBUG nova.compute.manager [-] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:58:03 np0005558241 nova_compute[248510]: 2025-12-13 08:58:03.944 248514 DEBUG nova.network.neutron [-] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:58:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2909: 321 pgs: 321 active+clean; 160 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.8 MiB/s wr, 158 op/s
Dec 13 03:58:04 np0005558241 nova_compute[248510]: 2025-12-13 08:58:04.540 248514 DEBUG nova.network.neutron [-] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:58:04 np0005558241 nova_compute[248510]: 2025-12-13 08:58:04.557 248514 INFO nova.compute.manager [-] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Took 0.61 seconds to deallocate network for instance.#033[00m
Dec 13 03:58:04 np0005558241 nova_compute[248510]: 2025-12-13 08:58:04.603 248514 DEBUG oslo_concurrency.lockutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:04 np0005558241 nova_compute[248510]: 2025-12-13 08:58:04.604 248514 DEBUG oslo_concurrency.lockutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:04 np0005558241 nova_compute[248510]: 2025-12-13 08:58:04.677 248514 DEBUG oslo_concurrency.processutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:58:04 np0005558241 nova_compute[248510]: 2025-12-13 08:58:04.724 248514 DEBUG nova.compute.manager [req-2f622381-4451-4840-8aba-0bfd3b6bc2d9 req-d2a32941-7dbd-40e1-97e9-f89ba1ffe769 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received event network-vif-deleted-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:58:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:58:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1577693289' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:58:05 np0005558241 nova_compute[248510]: 2025-12-13 08:58:05.232 248514 DEBUG oslo_concurrency.processutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:58:05 np0005558241 nova_compute[248510]: 2025-12-13 08:58:05.240 248514 DEBUG nova.compute.provider_tree [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:58:05 np0005558241 nova_compute[248510]: 2025-12-13 08:58:05.269 248514 DEBUG nova.scheduler.client.report [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:58:05 np0005558241 nova_compute[248510]: 2025-12-13 08:58:05.296 248514 DEBUG oslo_concurrency.lockutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:05 np0005558241 nova_compute[248510]: 2025-12-13 08:58:05.408 248514 INFO nova.scheduler.client.report [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d#033[00m
Dec 13 03:58:05 np0005558241 nova_compute[248510]: 2025-12-13 08:58:05.498 248514 DEBUG oslo_concurrency.lockutils [None req-0811e689-4970-463c-832d-0081eb6874fe 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:05 np0005558241 nova_compute[248510]: 2025-12-13 08:58:05.840 248514 DEBUG nova.compute.manager [req-be67be78-33ae-4639-b554-6b78512cde6d req-dd279db2-9e20-4802-9701-0e88995a2941 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received event network-vif-plugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:58:05 np0005558241 nova_compute[248510]: 2025-12-13 08:58:05.840 248514 DEBUG oslo_concurrency.lockutils [req-be67be78-33ae-4639-b554-6b78512cde6d req-dd279db2-9e20-4802-9701-0e88995a2941 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:05 np0005558241 nova_compute[248510]: 2025-12-13 08:58:05.841 248514 DEBUG oslo_concurrency.lockutils [req-be67be78-33ae-4639-b554-6b78512cde6d req-dd279db2-9e20-4802-9701-0e88995a2941 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:05 np0005558241 nova_compute[248510]: 2025-12-13 08:58:05.841 248514 DEBUG oslo_concurrency.lockutils [req-be67be78-33ae-4639-b554-6b78512cde6d req-dd279db2-9e20-4802-9701-0e88995a2941 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:05 np0005558241 nova_compute[248510]: 2025-12-13 08:58:05.842 248514 DEBUG nova.compute.manager [req-be67be78-33ae-4639-b554-6b78512cde6d req-dd279db2-9e20-4802-9701-0e88995a2941 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] No waiting events found dispatching network-vif-plugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:58:05 np0005558241 nova_compute[248510]: 2025-12-13 08:58:05.842 248514 WARNING nova.compute.manager [req-be67be78-33ae-4639-b554-6b78512cde6d req-dd279db2-9e20-4802-9701-0e88995a2941 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Received unexpected event network-vif-plugged-4013e964-3f6f-4aaa-af6d-20b9cb5c2a39 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 03:58:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:58:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2910: 321 pgs: 321 active+clean; 160 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Dec 13 03:58:06 np0005558241 nova_compute[248510]: 2025-12-13 08:58:06.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:58:07 np0005558241 nova_compute[248510]: 2025-12-13 08:58:07.684 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2911: 321 pgs: 321 active+clean; 121 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Dec 13 03:58:08 np0005558241 nova_compute[248510]: 2025-12-13 08:58:08.574 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:58:09
Dec 13 03:58:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:58:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:58:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'backups', 'volumes', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta']
Dec 13 03:58:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:58:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:09.785 158745 DEBUG eventlet.wsgi.server [-] (158745) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec 13 03:58:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:09.787 158745 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Dec 13 03:58:09 np0005558241 ovn_metadata_agent[158414]: Accept: */*#015
Dec 13 03:58:09 np0005558241 ovn_metadata_agent[158414]: Connection: close#015
Dec 13 03:58:09 np0005558241 ovn_metadata_agent[158414]: Content-Type: text/plain#015
Dec 13 03:58:09 np0005558241 ovn_metadata_agent[158414]: Host: 169.254.169.254#015
Dec 13 03:58:09 np0005558241 ovn_metadata_agent[158414]: User-Agent: curl/7.84.0#015
Dec 13 03:58:09 np0005558241 ovn_metadata_agent[158414]: X-Forwarded-For: 10.100.0.11#015
Dec 13 03:58:09 np0005558241 ovn_metadata_agent[158414]: X-Ovn-Network-Id: b9992fbc-9a6f-4e82-84b9-be47eb5816aa __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec 13 03:58:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:58:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:58:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:58:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:58:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:58:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:58:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2912: 321 pgs: 321 active+clean; 121 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 357 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Dec 13 03:58:10 np0005558241 nova_compute[248510]: 2025-12-13 08:58:10.849 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616275.848521, 5a7142a0-6e82-418c-affe-88fd6beb2ad9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:58:10 np0005558241 nova_compute[248510]: 2025-12-13 08:58:10.850 248514 INFO nova.compute.manager [-] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:58:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:58:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:58:11 np0005558241 nova_compute[248510]: 2025-12-13 08:58:11.010 248514 DEBUG nova.compute.manager [None req-10080ed7-5648-46bd-ad9a-c49275b676e9 - - - - - -] [instance: 5a7142a0-6e82-418c-affe-88fd6beb2ad9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:58:11 np0005558241 nova_compute[248510]: 2025-12-13 08:58:11.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:58:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:11.911 158745 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec 13 03:58:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:11.911 158745 INFO eventlet.wsgi.server [-] 10.100.0.11,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 2.1242702#033[00m
Dec 13 03:58:11 np0005558241 haproxy-metadata-proxy-b9992fbc-9a6f-4e82-84b9-be47eb5816aa[370067]: 10.100.0.11:35396 [13/Dec/2025:08:58:09.783] listener listener/metadata 0/0/0/2127/2127 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Dec 13 03:58:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:12.027 158745 DEBUG eventlet.wsgi.server [-] (158745) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec 13 03:58:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:12.028 158745 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Dec 13 03:58:12 np0005558241 ovn_metadata_agent[158414]: Accept: */*#015
Dec 13 03:58:12 np0005558241 ovn_metadata_agent[158414]: Connection: close#015
Dec 13 03:58:12 np0005558241 ovn_metadata_agent[158414]: Content-Length: 100#015
Dec 13 03:58:12 np0005558241 ovn_metadata_agent[158414]: Content-Type: application/x-www-form-urlencoded#015
Dec 13 03:58:12 np0005558241 ovn_metadata_agent[158414]: Host: 169.254.169.254#015
Dec 13 03:58:12 np0005558241 ovn_metadata_agent[158414]: User-Agent: curl/7.84.0#015
Dec 13 03:58:12 np0005558241 ovn_metadata_agent[158414]: X-Forwarded-For: 10.100.0.11#015
Dec 13 03:58:12 np0005558241 ovn_metadata_agent[158414]: X-Ovn-Network-Id: b9992fbc-9a6f-4e82-84b9-be47eb5816aa#015
Dec 13 03:58:12 np0005558241 ovn_metadata_agent[158414]: #015
Dec 13 03:58:12 np0005558241 ovn_metadata_agent[158414]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec 13 03:58:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:12.256 158745 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec 13 03:58:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:12.256 158745 INFO eventlet.wsgi.server [-] 10.100.0.11,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2287345#033[00m
Dec 13 03:58:12 np0005558241 haproxy-metadata-proxy-b9992fbc-9a6f-4e82-84b9-be47eb5816aa[370067]: 10.100.0.11:60354 [13/Dec/2025:08:58:12.026] listener listener/metadata 0/0/0/230/230 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Dec 13 03:58:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2913: 321 pgs: 321 active+clean; 121 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 169 KiB/s rd, 678 KiB/s wr, 62 op/s
Dec 13 03:58:12 np0005558241 nova_compute[248510]: 2025-12-13 08:58:12.686 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:13 np0005558241 nova_compute[248510]: 2025-12-13 08:58:13.575 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:13 np0005558241 nova_compute[248510]: 2025-12-13 08:58:13.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:58:13 np0005558241 nova_compute[248510]: 2025-12-13 08:58:13.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:58:13 np0005558241 nova_compute[248510]: 2025-12-13 08:58:13.871 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.197 248514 DEBUG oslo_concurrency.lockutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquiring lock "393f815d-d124-4aea-98c0-126aed0744bd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.198 248514 DEBUG oslo_concurrency.lockutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.198 248514 DEBUG oslo_concurrency.lockutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquiring lock "393f815d-d124-4aea-98c0-126aed0744bd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.198 248514 DEBUG oslo_concurrency.lockutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.199 248514 DEBUG oslo_concurrency.lockutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.200 248514 INFO nova.compute.manager [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Terminating instance#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.201 248514 DEBUG nova.compute.manager [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:58:14 np0005558241 kernel: tap6b86528d-c1 (unregistering): left promiscuous mode
Dec 13 03:58:14 np0005558241 NetworkManager[50376]: <info>  [1765616294.2848] device (tap6b86528d-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:58:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:14Z|01207|binding|INFO|Releasing lport 6b86528d-c1b7-4776-809f-1f8b37569b6f from this chassis (sb_readonly=0)
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.290 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:14Z|01208|binding|INFO|Setting lport 6b86528d-c1b7-4776-809f-1f8b37569b6f down in Southbound
Dec 13 03:58:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:14Z|01209|binding|INFO|Removing iface tap6b86528d-c1 ovn-installed in OVS
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.293 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.310 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2914: 321 pgs: 321 active+clean; 121 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 176 KiB/s rd, 679 KiB/s wr, 64 op/s
Dec 13 03:58:14 np0005558241 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Dec 13 03:58:14 np0005558241 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d0000007b.scope: Consumed 14.534s CPU time.
Dec 13 03:58:14 np0005558241 systemd-machined[210538]: Machine qemu-150-instance-0000007b terminated.
Dec 13 03:58:14 np0005558241 kernel: tap6b86528d-c1: entered promiscuous mode
Dec 13 03:58:14 np0005558241 kernel: tap6b86528d-c1 (unregistering): left promiscuous mode
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:14Z|01210|if_status|INFO|Not updating pb chassis for 6b86528d-c1b7-4776-809f-1f8b37569b6f now as sb is readonly
Dec 13 03:58:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:14Z|01211|binding|INFO|Claiming lport 6b86528d-c1b7-4776-809f-1f8b37569b6f for this chassis.
Dec 13 03:58:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:14Z|01212|binding|INFO|6b86528d-c1b7-4776-809f-1f8b37569b6f: Claiming fa:16:3e:05:d0:ff 10.100.0.11
Dec 13 03:58:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:14.429 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:d0:ff 10.100.0.11'], port_security=['fa:16:3e:05:d0:ff 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '393f815d-d124-4aea-98c0-126aed0744bd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30f07149729142048436dbfbb8bf2742', 'neutron:revision_number': '4', 'neutron:security_group_ids': '339be054-74c6-402b-b71a-0e37379fa825 59357d23-9217-43e7-99ef-d4349c0de8ef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77d611ab-66b1-4445-8d3d-978ee23b13d9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6b86528d-c1b7-4776-809f-1f8b37569b6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:58:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:14.430 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6b86528d-c1b7-4776-809f-1f8b37569b6f in datapath b9992fbc-9a6f-4e82-84b9-be47eb5816aa unbound from our chassis#033[00m
Dec 13 03:58:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:14.431 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9992fbc-9a6f-4e82-84b9-be47eb5816aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:58:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:14.433 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[68f2e81d-f915-4c2b-b97b-38ab83206a45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:14.433 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa namespace which is not needed anymore#033[00m
Dec 13 03:58:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:14.441 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:d0:ff 10.100.0.11'], port_security=['fa:16:3e:05:d0:ff 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '393f815d-d124-4aea-98c0-126aed0744bd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30f07149729142048436dbfbb8bf2742', 'neutron:revision_number': '4', 'neutron:security_group_ids': '339be054-74c6-402b-b71a-0e37379fa825 59357d23-9217-43e7-99ef-d4349c0de8ef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77d611ab-66b1-4445-8d3d-978ee23b13d9, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6b86528d-c1b7-4776-809f-1f8b37569b6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.445 248514 INFO nova.virt.libvirt.driver [-] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Instance destroyed successfully.#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.446 248514 DEBUG nova.objects.instance [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lazy-loading 'resources' on Instance uuid 393f815d-d124-4aea-98c0-126aed0744bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:58:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:14Z|01213|binding|INFO|Setting lport 6b86528d-c1b7-4776-809f-1f8b37569b6f ovn-installed in OVS
Dec 13 03:58:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:14Z|01214|binding|INFO|Setting lport 6b86528d-c1b7-4776-809f-1f8b37569b6f up in Southbound
Dec 13 03:58:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:14Z|01215|binding|INFO|Releasing lport 6b86528d-c1b7-4776-809f-1f8b37569b6f from this chassis (sb_readonly=1)
Dec 13 03:58:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:14Z|01216|if_status|INFO|Dropped 2 log messages in last 769 seconds (most recently, 769 seconds ago) due to excessive rate
Dec 13 03:58:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:14Z|01217|if_status|INFO|Not setting lport 6b86528d-c1b7-4776-809f-1f8b37569b6f down as sb is readonly
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.450 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:14Z|01218|binding|INFO|Removing iface tap6b86528d-c1 ovn-installed in OVS
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.451 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.467 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:14Z|01219|binding|INFO|Releasing lport a5f80d22-8b41-40d9-b8ff-ddee53f45af0 from this chassis (sb_readonly=0)
Dec 13 03:58:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:14Z|01220|binding|INFO|Releasing lport 6b86528d-c1b7-4776-809f-1f8b37569b6f from this chassis (sb_readonly=0)
Dec 13 03:58:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:14Z|01221|binding|INFO|Setting lport 6b86528d-c1b7-4776-809f-1f8b37569b6f down in Southbound
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.513 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.515 248514 DEBUG nova.virt.libvirt.vif [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:57:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1358982584',display_name='tempest-TestServerBasicOps-server-1358982584',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1358982584',id=123,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOLfROCcHYxf8uvMCUymzKleNDXyw2tmzRbGzC//Zjm3WjKvoC/StL1LDTb6SGUu3s96IMOyEDfcMq8YnCwGooxmPETH1rv9aB87uxpvFNJYtgkaVxxoxt7V/jUmpT5M6g==',key_name='tempest-TestServerBasicOps-1980425304',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:57:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='30f07149729142048436dbfbb8bf2742',ramdisk_id='',reservation_id='r-sw1yv1h4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1870679034',owner_user_name='tempest-TestServerBasicOps-1870679034-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:58:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='095998dc2eb348e8a90c866d4106cd74',uuid=393f815d-d124-4aea-98c0-126aed0744bd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": 
"fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.516 248514 DEBUG nova.network.os_vif_util [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Converting VIF {"id": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "address": "fa:16:3e:05:d0:ff", "network": {"id": "b9992fbc-9a6f-4e82-84b9-be47eb5816aa", "bridge": "br-int", "label": "tempest-TestServerBasicOps-881072321-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "30f07149729142048436dbfbb8bf2742", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b86528d-c1", "ovs_interfaceid": "6b86528d-c1b7-4776-809f-1f8b37569b6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.517 248514 DEBUG nova.network.os_vif_util [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:05:d0:ff,bridge_name='br-int',has_traffic_filtering=True,id=6b86528d-c1b7-4776-809f-1f8b37569b6f,network=Network(b9992fbc-9a6f-4e82-84b9-be47eb5816aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86528d-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.518 248514 DEBUG os_vif [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:d0:ff,bridge_name='br-int',has_traffic_filtering=True,id=6b86528d-c1b7-4776-809f-1f8b37569b6f,network=Network(b9992fbc-9a6f-4e82-84b9-be47eb5816aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86528d-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:58:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:14.519 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:d0:ff 10.100.0.11'], port_security=['fa:16:3e:05:d0:ff 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '393f815d-d124-4aea-98c0-126aed0744bd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '30f07149729142048436dbfbb8bf2742', 'neutron:revision_number': '4', 'neutron:security_group_ids': '339be054-74c6-402b-b71a-0e37379fa825 59357d23-9217-43e7-99ef-d4349c0de8ef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=77d611ab-66b1-4445-8d3d-978ee23b13d9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=6b86528d-c1b7-4776-809f-1f8b37569b6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.520 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.520 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b86528d-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.524 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.709 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.711 248514 INFO os_vif [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:d0:ff,bridge_name='br-int',has_traffic_filtering=True,id=6b86528d-c1b7-4776-809f-1f8b37569b6f,network=Network(b9992fbc-9a6f-4e82-84b9-be47eb5816aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b86528d-c1')#033[00m
Dec 13 03:58:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:14Z|01222|binding|INFO|Releasing lport a5f80d22-8b41-40d9-b8ff-ddee53f45af0 from this chassis (sb_readonly=0)
Dec 13 03:58:14 np0005558241 nova_compute[248510]: 2025-12-13 08:58:14.732 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:14 np0005558241 neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa[370060]: [NOTICE]   (370065) : haproxy version is 2.8.14-c23fe91
Dec 13 03:58:14 np0005558241 neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa[370060]: [NOTICE]   (370065) : path to executable is /usr/sbin/haproxy
Dec 13 03:58:14 np0005558241 neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa[370060]: [WARNING]  (370065) : Exiting Master process...
Dec 13 03:58:14 np0005558241 neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa[370060]: [ALERT]    (370065) : Current worker (370067) exited with code 143 (Terminated)
Dec 13 03:58:14 np0005558241 neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa[370060]: [WARNING]  (370065) : All workers exited. Exiting... (0)
Dec 13 03:58:14 np0005558241 systemd[1]: libpod-852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5.scope: Deactivated successfully.
Dec 13 03:58:14 np0005558241 podman[371279]: 2025-12-13 08:58:14.770126784 +0000 UTC m=+0.225721755 container died 852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:58:14 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5-userdata-shm.mount: Deactivated successfully.
Dec 13 03:58:14 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5948b445140de39f593e110d7d860347d27dfb64a06ca5afb4dbd7d1143ccfb3-merged.mount: Deactivated successfully.
Dec 13 03:58:14 np0005558241 podman[371279]: 2025-12-13 08:58:14.928160432 +0000 UTC m=+0.383755403 container cleanup 852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:58:14 np0005558241 systemd[1]: libpod-conmon-852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5.scope: Deactivated successfully.
Dec 13 03:58:15 np0005558241 podman[371327]: 2025-12-13 08:58:15.039511327 +0000 UTC m=+0.090370313 container remove 852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:58:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.047 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[14b36df8-897a-4006-a189-ec43bebe176d]: (4, ('Sat Dec 13 08:58:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa (852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5)\n852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5\nSat Dec 13 08:58:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa (852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5)\n852c92359ee0d7a2330b9f2699a34ad868cc898b4b7e11597c645ce74e295dc5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.049 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c63bfa44-e7fc-4aab-8b0d-290e51261d0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.050 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9992fbc-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:58:15 np0005558241 nova_compute[248510]: 2025-12-13 08:58:15.052 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:15 np0005558241 kernel: tapb9992fbc-90: left promiscuous mode
Dec 13 03:58:15 np0005558241 nova_compute[248510]: 2025-12-13 08:58:15.081 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.086 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3309b089-0c7f-4e4b-9f8e-262a028f910a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:58:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1922456529' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:58:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:58:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1922456529' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:58:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.102 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9739a51f-6523-4eca-8442-2a035422f20a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.104 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6e584b66-194e-4353-8663-1d8ef78b7f29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.125 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f578d666-5372-4d87-b2b1-69b9a23a30b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884310, 'reachable_time': 41239, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371343, 'error': None, 'target': 'ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.128 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b9992fbc-9a6f-4e82-84b9-be47eb5816aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:58:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.129 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[817127e4-d372-405c-a1ca-2716a0f59509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.129 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6b86528d-c1b7-4776-809f-1f8b37569b6f in datapath b9992fbc-9a6f-4e82-84b9-be47eb5816aa unbound from our chassis#033[00m
Dec 13 03:58:15 np0005558241 systemd[1]: run-netns-ovnmeta\x2db9992fbc\x2d9a6f\x2d4e82\x2d84b9\x2dbe47eb5816aa.mount: Deactivated successfully.
Dec 13 03:58:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.131 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9992fbc-9a6f-4e82-84b9-be47eb5816aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:58:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.132 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5e71895c-8238-44b3-8598-9a0dfbcfb433]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.133 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 6b86528d-c1b7-4776-809f-1f8b37569b6f in datapath b9992fbc-9a6f-4e82-84b9-be47eb5816aa unbound from our chassis#033[00m
Dec 13 03:58:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.134 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9992fbc-9a6f-4e82-84b9-be47eb5816aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:58:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:15.135 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[30be4b7d-f89e-4124-8f93-2b9daa684ba6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:15 np0005558241 nova_compute[248510]: 2025-12-13 08:58:15.500 248514 DEBUG nova.compute.manager [req-f7d04e41-d6e4-4b57-80bb-9341f31f309b req-5331e619-fa72-4101-885a-897eaf83400d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received event network-vif-unplugged-6b86528d-c1b7-4776-809f-1f8b37569b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:58:15 np0005558241 nova_compute[248510]: 2025-12-13 08:58:15.500 248514 DEBUG oslo_concurrency.lockutils [req-f7d04e41-d6e4-4b57-80bb-9341f31f309b req-5331e619-fa72-4101-885a-897eaf83400d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "393f815d-d124-4aea-98c0-126aed0744bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:15 np0005558241 nova_compute[248510]: 2025-12-13 08:58:15.500 248514 DEBUG oslo_concurrency.lockutils [req-f7d04e41-d6e4-4b57-80bb-9341f31f309b req-5331e619-fa72-4101-885a-897eaf83400d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:15 np0005558241 nova_compute[248510]: 2025-12-13 08:58:15.501 248514 DEBUG oslo_concurrency.lockutils [req-f7d04e41-d6e4-4b57-80bb-9341f31f309b req-5331e619-fa72-4101-885a-897eaf83400d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:15 np0005558241 nova_compute[248510]: 2025-12-13 08:58:15.501 248514 DEBUG nova.compute.manager [req-f7d04e41-d6e4-4b57-80bb-9341f31f309b req-5331e619-fa72-4101-885a-897eaf83400d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] No waiting events found dispatching network-vif-unplugged-6b86528d-c1b7-4776-809f-1f8b37569b6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:58:15 np0005558241 nova_compute[248510]: 2025-12-13 08:58:15.501 248514 DEBUG nova.compute.manager [req-f7d04e41-d6e4-4b57-80bb-9341f31f309b req-5331e619-fa72-4101-885a-897eaf83400d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received event network-vif-unplugged-6b86528d-c1b7-4776-809f-1f8b37569b6f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:58:15 np0005558241 nova_compute[248510]: 2025-12-13 08:58:15.552 248514 INFO nova.virt.libvirt.driver [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Deleting instance files /var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd_del#033[00m
Dec 13 03:58:15 np0005558241 nova_compute[248510]: 2025-12-13 08:58:15.553 248514 INFO nova.virt.libvirt.driver [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Deletion of /var/lib/nova/instances/393f815d-d124-4aea-98c0-126aed0744bd_del complete#033[00m
Dec 13 03:58:15 np0005558241 nova_compute[248510]: 2025-12-13 08:58:15.639 248514 INFO nova.compute.manager [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Took 1.44 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:58:15 np0005558241 nova_compute[248510]: 2025-12-13 08:58:15.640 248514 DEBUG oslo.service.loopingcall [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:58:15 np0005558241 nova_compute[248510]: 2025-12-13 08:58:15.640 248514 DEBUG nova.compute.manager [-] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:58:15 np0005558241 nova_compute[248510]: 2025-12-13 08:58:15.640 248514 DEBUG nova.network.neutron [-] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:58:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:58:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2915: 321 pgs: 321 active+clean; 121 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 15 KiB/s wr, 16 op/s
Dec 13 03:58:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:17.001 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:58:17 np0005558241 nova_compute[248510]: 2025-12-13 08:58:17.002 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:17.002 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:58:17 np0005558241 nova_compute[248510]: 2025-12-13 08:58:17.600 248514 DEBUG nova.compute.manager [req-847a7178-ad08-42fb-9c19-4576685980f5 req-9f93f9af-14e2-49c0-b063-d421691c8555 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received event network-vif-plugged-6b86528d-c1b7-4776-809f-1f8b37569b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:58:17 np0005558241 nova_compute[248510]: 2025-12-13 08:58:17.600 248514 DEBUG oslo_concurrency.lockutils [req-847a7178-ad08-42fb-9c19-4576685980f5 req-9f93f9af-14e2-49c0-b063-d421691c8555 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "393f815d-d124-4aea-98c0-126aed0744bd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:17 np0005558241 nova_compute[248510]: 2025-12-13 08:58:17.601 248514 DEBUG oslo_concurrency.lockutils [req-847a7178-ad08-42fb-9c19-4576685980f5 req-9f93f9af-14e2-49c0-b063-d421691c8555 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:17 np0005558241 nova_compute[248510]: 2025-12-13 08:58:17.601 248514 DEBUG oslo_concurrency.lockutils [req-847a7178-ad08-42fb-9c19-4576685980f5 req-9f93f9af-14e2-49c0-b063-d421691c8555 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:17 np0005558241 nova_compute[248510]: 2025-12-13 08:58:17.601 248514 DEBUG nova.compute.manager [req-847a7178-ad08-42fb-9c19-4576685980f5 req-9f93f9af-14e2-49c0-b063-d421691c8555 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] No waiting events found dispatching network-vif-plugged-6b86528d-c1b7-4776-809f-1f8b37569b6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:58:17 np0005558241 nova_compute[248510]: 2025-12-13 08:58:17.602 248514 WARNING nova.compute.manager [req-847a7178-ad08-42fb-9c19-4576685980f5 req-9f93f9af-14e2-49c0-b063-d421691c8555 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received unexpected event network-vif-plugged-6b86528d-c1b7-4776-809f-1f8b37569b6f for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:58:17 np0005558241 nova_compute[248510]: 2025-12-13 08:58:17.687 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:17 np0005558241 nova_compute[248510]: 2025-12-13 08:58:17.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:58:17 np0005558241 nova_compute[248510]: 2025-12-13 08:58:17.803 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:17 np0005558241 nova_compute[248510]: 2025-12-13 08:58:17.803 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:17 np0005558241 nova_compute[248510]: 2025-12-13 08:58:17.803 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:17 np0005558241 nova_compute[248510]: 2025-12-13 08:58:17.803 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:58:17 np0005558241 nova_compute[248510]: 2025-12-13 08:58:17.804 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.048 248514 DEBUG nova.network.neutron [-] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.078 248514 INFO nova.compute.manager [-] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Took 2.44 seconds to deallocate network for instance.#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.147 248514 DEBUG oslo_concurrency.lockutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.148 248514 DEBUG oslo_concurrency.lockutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.168 248514 DEBUG nova.compute.manager [req-e43550be-e273-4bbb-8735-94666908e0c6 req-e0ae867d-232f-4375-9dcb-dc13f3232dd3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Received event network-vif-deleted-6b86528d-c1b7-4776-809f-1f8b37569b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.221 248514 DEBUG oslo_concurrency.processutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:58:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2916: 321 pgs: 321 active+clean; 91 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 16 KiB/s wr, 27 op/s
Dec 13 03:58:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:58:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3636236204' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.394 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.542 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616283.5401127, 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.542 248514 INFO nova.compute.manager [-] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.571 248514 DEBUG nova.compute.manager [None req-8f688a4a-e898-4753-94e7-3ea44117b378 - - - - - -] [instance: 1fc44e4f-7ae4-4b82-a6c1-6b3d79f6ad5d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.593 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.595 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3516MB free_disk=59.94185414817184GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.596 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:58:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1348939361' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.835 248514 DEBUG oslo_concurrency.processutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.840 248514 DEBUG nova.compute.provider_tree [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.869 248514 DEBUG nova.scheduler.client.report [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.900 248514 DEBUG oslo_concurrency.lockutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.903 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.933 248514 INFO nova.scheduler.client.report [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Deleted allocations for instance 393f815d-d124-4aea-98c0-126aed0744bd#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.973 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.973 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:58:18 np0005558241 nova_compute[248510]: 2025-12-13 08:58:18.991 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:58:19 np0005558241 nova_compute[248510]: 2025-12-13 08:58:19.033 248514 DEBUG oslo_concurrency.lockutils [None req-e4e941c5-f1ce-4295-8a88-2a777b099670 095998dc2eb348e8a90c866d4106cd74 30f07149729142048436dbfbb8bf2742 - - default default] Lock "393f815d-d124-4aea-98c0-126aed0744bd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:58:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/925900602' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:58:19 np0005558241 nova_compute[248510]: 2025-12-13 08:58:19.523 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:19 np0005558241 nova_compute[248510]: 2025-12-13 08:58:19.526 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:58:19 np0005558241 nova_compute[248510]: 2025-12-13 08:58:19.532 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:58:19 np0005558241 nova_compute[248510]: 2025-12-13 08:58:19.572 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:58:19 np0005558241 nova_compute[248510]: 2025-12-13 08:58:19.616 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:58:19 np0005558241 nova_compute[248510]: 2025-12-13 08:58:19.617 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2917: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 3.8 KiB/s wr, 30 op/s
Dec 13 03:58:20 np0005558241 nova_compute[248510]: 2025-12-13 08:58:20.619 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:58:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4147892310034525e-05 of space, bias 1.0, pg target 0.004244367693010357 quantized to 32 (current 32)
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697139695278017 of space, bias 1.0, pg target 0.20091419085834053 quantized to 32 (current 32)
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.734031256454457e-07 of space, bias 4.0, pg target 0.0006880837507745348 quantized to 16 (current 32)
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:58:21 np0005558241 nova_compute[248510]: 2025-12-13 08:58:21.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:58:21 np0005558241 nova_compute[248510]: 2025-12-13 08:58:21.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:58:21 np0005558241 nova_compute[248510]: 2025-12-13 08:58:21.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:58:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2918: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Dec 13 03:58:22 np0005558241 nova_compute[248510]: 2025-12-13 08:58:22.689 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:23.004 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:58:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2919: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Dec 13 03:58:24 np0005558241 nova_compute[248510]: 2025-12-13 08:58:24.574 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:58:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2920: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 03:58:26 np0005558241 nova_compute[248510]: 2025-12-13 08:58:26.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:58:27 np0005558241 nova_compute[248510]: 2025-12-13 08:58:27.693 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2921: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 03:58:29 np0005558241 nova_compute[248510]: 2025-12-13 08:58:29.443 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616294.4412525, 393f815d-d124-4aea-98c0-126aed0744bd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:58:29 np0005558241 nova_compute[248510]: 2025-12-13 08:58:29.444 248514 INFO nova.compute.manager [-] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] VM Stopped (Lifecycle Event)#033[00m
Dec 13 03:58:29 np0005558241 nova_compute[248510]: 2025-12-13 08:58:29.468 248514 DEBUG nova.compute.manager [None req-e8ccf8cb-bdc1-4f23-a48c-49dde0cc9dc2 - - - - - -] [instance: 393f815d-d124-4aea-98c0-126aed0744bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:58:29 np0005558241 nova_compute[248510]: 2025-12-13 08:58:29.577 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2922: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 680 B/s wr, 17 op/s
Dec 13 03:58:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:58:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2923: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:58:32 np0005558241 nova_compute[248510]: 2025-12-13 08:58:32.694 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:33 np0005558241 podman[371416]: 2025-12-13 08:58:33.95900448 +0000 UTC m=+0.052730451 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 13 03:58:33 np0005558241 podman[371415]: 2025-12-13 08:58:33.988170874 +0000 UTC m=+0.083152466 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Dec 13 03:58:33 np0005558241 podman[371417]: 2025-12-13 08:58:33.990795398 +0000 UTC m=+0.079281731 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:58:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2924: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:58:34 np0005558241 nova_compute[248510]: 2025-12-13 08:58:34.580 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:58:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2925: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:58:37 np0005558241 nova_compute[248510]: 2025-12-13 08:58:37.695 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2926: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:58:39 np0005558241 nova_compute[248510]: 2025-12-13 08:58:39.583 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:40 np0005558241 nova_compute[248510]: 2025-12-13 08:58:40.054 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:40 np0005558241 nova_compute[248510]: 2025-12-13 08:58:40.055 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:40 np0005558241 nova_compute[248510]: 2025-12-13 08:58:40.075 248514 DEBUG nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:58:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:58:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:58:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:58:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:58:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:58:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:58:40 np0005558241 nova_compute[248510]: 2025-12-13 08:58:40.162 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:40 np0005558241 nova_compute[248510]: 2025-12-13 08:58:40.163 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:40 np0005558241 nova_compute[248510]: 2025-12-13 08:58:40.172 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:58:40 np0005558241 nova_compute[248510]: 2025-12-13 08:58:40.173 248514 INFO nova.compute.claims [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:58:40 np0005558241 nova_compute[248510]: 2025-12-13 08:58:40.276 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:58:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2927: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:58:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:58:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4266125456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:58:40 np0005558241 nova_compute[248510]: 2025-12-13 08:58:40.857 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:58:40 np0005558241 nova_compute[248510]: 2025-12-13 08:58:40.864 248514 DEBUG nova.compute.provider_tree [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:58:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.318 248514 DEBUG nova.scheduler.client.report [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.361 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.362 248514 DEBUG nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.422 248514 DEBUG nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.423 248514 DEBUG nova.network.neutron [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.445 248514 INFO nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.468 248514 DEBUG nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.639 248514 DEBUG nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.641 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.642 248514 INFO nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Creating image(s)#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.674 248514 DEBUG nova.storage.rbd_utils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.699 248514 DEBUG nova.storage.rbd_utils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.731 248514 DEBUG nova.storage.rbd_utils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.736 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.825 248514 DEBUG nova.policy [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.829 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.829 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.830 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.831 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.856 248514 DEBUG nova.storage.rbd_utils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:58:41 np0005558241 nova_compute[248510]: 2025-12-13 08:58:41.860 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:58:42 np0005558241 nova_compute[248510]: 2025-12-13 08:58:42.180 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:58:42 np0005558241 nova_compute[248510]: 2025-12-13 08:58:42.243 248514 DEBUG nova.storage.rbd_utils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:58:42 np0005558241 nova_compute[248510]: 2025-12-13 08:58:42.321 248514 DEBUG nova.objects.instance [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:58:42 np0005558241 nova_compute[248510]: 2025-12-13 08:58:42.338 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:58:42 np0005558241 nova_compute[248510]: 2025-12-13 08:58:42.339 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Ensure instance console log exists: /var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:58:42 np0005558241 nova_compute[248510]: 2025-12-13 08:58:42.339 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:42 np0005558241 nova_compute[248510]: 2025-12-13 08:58:42.339 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:42 np0005558241 nova_compute[248510]: 2025-12-13 08:58:42.339 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2928: 321 pgs: 321 active+clean; 41 MiB data, 836 MiB used, 59 GiB / 60 GiB avail
Dec 13 03:58:42 np0005558241 nova_compute[248510]: 2025-12-13 08:58:42.697 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:42 np0005558241 nova_compute[248510]: 2025-12-13 08:58:42.871 248514 DEBUG nova.network.neutron [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Successfully created port: 3abb490c-6aad-47d4-8200-febd480ac7db _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:58:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2929: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:58:44 np0005558241 nova_compute[248510]: 2025-12-13 08:58:44.396 248514 DEBUG nova.network.neutron [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Successfully updated port: 3abb490c-6aad-47d4-8200-febd480ac7db _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:58:44 np0005558241 nova_compute[248510]: 2025-12-13 08:58:44.414 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:58:44 np0005558241 nova_compute[248510]: 2025-12-13 08:58:44.415 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:58:44 np0005558241 nova_compute[248510]: 2025-12-13 08:58:44.415 248514 DEBUG nova.network.neutron [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:58:44 np0005558241 nova_compute[248510]: 2025-12-13 08:58:44.517 248514 DEBUG nova.compute.manager [req-9cfc9fd9-6f4c-44e3-9fe9-b17d30ab2e85 req-c455dc64-c514-48c4-9ef0-cc7d3485067e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-changed-3abb490c-6aad-47d4-8200-febd480ac7db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:58:44 np0005558241 nova_compute[248510]: 2025-12-13 08:58:44.517 248514 DEBUG nova.compute.manager [req-9cfc9fd9-6f4c-44e3-9fe9-b17d30ab2e85 req-c455dc64-c514-48c4-9ef0-cc7d3485067e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing instance network info cache due to event network-changed-3abb490c-6aad-47d4-8200-febd480ac7db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:58:44 np0005558241 nova_compute[248510]: 2025-12-13 08:58:44.517 248514 DEBUG oslo_concurrency.lockutils [req-9cfc9fd9-6f4c-44e3-9fe9-b17d30ab2e85 req-c455dc64-c514-48c4-9ef0-cc7d3485067e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:58:44 np0005558241 nova_compute[248510]: 2025-12-13 08:58:44.641 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:45 np0005558241 nova_compute[248510]: 2025-12-13 08:58:45.037 248514 DEBUG nova.network.neutron [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:58:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:58:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2930: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.437 248514 DEBUG nova.network.neutron [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.462 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.462 248514 DEBUG nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Instance network_info: |[{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.463 248514 DEBUG oslo_concurrency.lockutils [req-9cfc9fd9-6f4c-44e3-9fe9-b17d30ab2e85 req-c455dc64-c514-48c4-9ef0-cc7d3485067e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.463 248514 DEBUG nova.network.neutron [req-9cfc9fd9-6f4c-44e3-9fe9-b17d30ab2e85 req-c455dc64-c514-48c4-9ef0-cc7d3485067e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing network info cache for port 3abb490c-6aad-47d4-8200-febd480ac7db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.466 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Start _get_guest_xml network_info=[{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.469 248514 WARNING nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.477 248514 DEBUG nova.virt.libvirt.host [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.478 248514 DEBUG nova.virt.libvirt.host [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.484 248514 DEBUG nova.virt.libvirt.host [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.484 248514 DEBUG nova.virt.libvirt.host [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.485 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.485 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.485 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.486 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.486 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.486 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.486 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.487 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.487 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.487 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.487 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.488 248514 DEBUG nova.virt.hardware [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.490 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:58:46 np0005558241 nova_compute[248510]: 2025-12-13 08:58:46.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:58:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:58:47 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3206972685' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.059 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.083 248514 DEBUG nova.storage.rbd_utils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.087 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:58:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:58:47 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2755890126' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.664 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.666 248514 DEBUG nova.virt.libvirt.vif [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1459774770',display_name='tempest-TestNetworkBasicOps-server-1459774770',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1459774770',id=124,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzdfnmkPECxEd+pmNHaznMbbUZ3fogbtIZFn1m4F0NrMdOCSnDuSrNqH1Qsxb1Ch2xV1rWjSGPlN7pyb3Hosb/a0AqdV4TvL3CfEX0F6WeXGg966JbzLuasErXM1xyHqw==',key_name='tempest-TestNetworkBasicOps-1902119774',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mq52krma',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:58:41Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=3a1e4b45-e8c6-41c4-8890-cb0775cb5786,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.667 248514 DEBUG nova.network.os_vif_util [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.668 248514 DEBUG nova.network.os_vif_util [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:9a:b3,bridge_name='br-int',has_traffic_filtering=True,id=3abb490c-6aad-47d4-8200-febd480ac7db,network=Network(a303ed13-8629-4259-965d-e42689484f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3abb490c-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.669 248514 DEBUG nova.objects.instance [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.691 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:58:47 np0005558241 nova_compute[248510]:  <uuid>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</uuid>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:  <name>instance-0000007c</name>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkBasicOps-server-1459774770</nova:name>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:58:46</nova:creationTime>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:58:47 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:        <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:        <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:        <nova:port uuid="3abb490c-6aad-47d4-8200-febd480ac7db">
Dec 13 03:58:47 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <entry name="serial">3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <entry name="uuid">3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk">
Dec 13 03:58:47 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:58:47 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config">
Dec 13 03:58:47 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:58:47 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:a8:9a:b3"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <target dev="tap3abb490c-6a"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log" append="off"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:58:47 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:58:47 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:58:47 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:58:47 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.692 248514 DEBUG nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Preparing to wait for external event network-vif-plugged-3abb490c-6aad-47d4-8200-febd480ac7db prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.693 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.693 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.693 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.694 248514 DEBUG nova.virt.libvirt.vif [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1459774770',display_name='tempest-TestNetworkBasicOps-server-1459774770',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1459774770',id=124,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzdfnmkPECxEd+pmNHaznMbbUZ3fogbtIZFn1m4F0NrMdOCSnDuSrNqH1Qsxb1Ch2xV1rWjSGPlN7pyb3Hosb/a0AqdV4TvL3CfEX0F6WeXGg966JbzLuasErXM1xyHqw==',key_name='tempest-TestNetworkBasicOps-1902119774',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mq52krma',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:58:41Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=3a1e4b45-e8c6-41c4-8890-cb0775cb5786,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.694 248514 DEBUG nova.network.os_vif_util [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.695 248514 DEBUG nova.network.os_vif_util [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:9a:b3,bridge_name='br-int',has_traffic_filtering=True,id=3abb490c-6aad-47d4-8200-febd480ac7db,network=Network(a303ed13-8629-4259-965d-e42689484f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3abb490c-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.695 248514 DEBUG os_vif [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:9a:b3,bridge_name='br-int',has_traffic_filtering=True,id=3abb490c-6aad-47d4-8200-febd480ac7db,network=Network(a303ed13-8629-4259-965d-e42689484f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3abb490c-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.696 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.696 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.696 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.699 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.700 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.700 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3abb490c-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.701 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3abb490c-6a, col_values=(('external_ids', {'iface-id': '3abb490c-6aad-47d4-8200-febd480ac7db', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:9a:b3', 'vm-uuid': '3a1e4b45-e8c6-41c4-8890-cb0775cb5786'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.702 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:47 np0005558241 NetworkManager[50376]: <info>  [1765616327.7037] manager: (tap3abb490c-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/499)
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.704 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.707 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.708 248514 INFO os_vif [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:9a:b3,bridge_name='br-int',has_traffic_filtering=True,id=3abb490c-6aad-47d4-8200-febd480ac7db,network=Network(a303ed13-8629-4259-965d-e42689484f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3abb490c-6a')#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.770 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.771 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.772 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:a8:9a:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.773 248514 INFO nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Using config drive#033[00m
Dec 13 03:58:47 np0005558241 nova_compute[248510]: 2025-12-13 08:58:47.799 248514 DEBUG nova.storage.rbd_utils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:58:48 np0005558241 nova_compute[248510]: 2025-12-13 08:58:48.256 248514 INFO nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Creating config drive at /var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/disk.config#033[00m
Dec 13 03:58:48 np0005558241 nova_compute[248510]: 2025-12-13 08:58:48.265 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxy7nph9e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:58:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2931: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:58:48 np0005558241 nova_compute[248510]: 2025-12-13 08:58:48.426 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxy7nph9e" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:58:48 np0005558241 nova_compute[248510]: 2025-12-13 08:58:48.459 248514 DEBUG nova.storage.rbd_utils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:58:48 np0005558241 nova_compute[248510]: 2025-12-13 08:58:48.464 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/disk.config 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:58:48 np0005558241 nova_compute[248510]: 2025-12-13 08:58:48.656 248514 DEBUG oslo_concurrency.processutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/disk.config 3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:58:48 np0005558241 nova_compute[248510]: 2025-12-13 08:58:48.657 248514 INFO nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Deleting local config drive /var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/disk.config because it was imported into RBD.#033[00m
Dec 13 03:58:48 np0005558241 kernel: tap3abb490c-6a: entered promiscuous mode
Dec 13 03:58:48 np0005558241 NetworkManager[50376]: <info>  [1765616328.7213] manager: (tap3abb490c-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/500)
Dec 13 03:58:48 np0005558241 nova_compute[248510]: 2025-12-13 08:58:48.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:48Z|01223|binding|INFO|Claiming lport 3abb490c-6aad-47d4-8200-febd480ac7db for this chassis.
Dec 13 03:58:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:48Z|01224|binding|INFO|3abb490c-6aad-47d4-8200-febd480ac7db: Claiming fa:16:3e:a8:9a:b3 10.100.0.12
Dec 13 03:58:48 np0005558241 nova_compute[248510]: 2025-12-13 08:58:48.723 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.735 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:9a:b3 10.100.0.12'], port_security=['fa:16:3e:a8:9a:b3 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3a1e4b45-e8c6-41c4-8890-cb0775cb5786', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a303ed13-8629-4259-965d-e42689484f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03b6e291-05b2-4f8e-8278-d579b0a4e692', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b539ac67-be97-4028-97af-147cf6ca090d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3abb490c-6aad-47d4-8200-febd480ac7db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:58:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.736 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3abb490c-6aad-47d4-8200-febd480ac7db in datapath a303ed13-8629-4259-965d-e42689484f38 bound to our chassis#033[00m
Dec 13 03:58:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.737 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a303ed13-8629-4259-965d-e42689484f38#033[00m
Dec 13 03:58:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.751 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0bddc89c-220c-4650-a11d-6488acb9d627]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:48 np0005558241 systemd-udevd[371802]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:58:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.752 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa303ed13-81 in ovnmeta-a303ed13-8629-4259-965d-e42689484f38 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:58:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.755 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa303ed13-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:58:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.756 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d828fb27-6f0a-40b4-a349-080d48ef1bea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.758 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e41584f7-3f1d-49e9-be21-765f0ea4b14e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:48 np0005558241 systemd-machined[210538]: New machine qemu-151-instance-0000007c.
Dec 13 03:58:48 np0005558241 NetworkManager[50376]: <info>  [1765616328.7680] device (tap3abb490c-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:58:48 np0005558241 NetworkManager[50376]: <info>  [1765616328.7689] device (tap3abb490c-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:58:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.776 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[546a19c0-67ae-495d-83fc-a65f4d7d4014]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:48 np0005558241 nova_compute[248510]: 2025-12-13 08:58:48.785 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:48 np0005558241 systemd[1]: Started Virtual Machine qemu-151-instance-0000007c.
Dec 13 03:58:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:48Z|01225|binding|INFO|Setting lport 3abb490c-6aad-47d4-8200-febd480ac7db ovn-installed in OVS
Dec 13 03:58:48 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:48Z|01226|binding|INFO|Setting lport 3abb490c-6aad-47d4-8200-febd480ac7db up in Southbound
Dec 13 03:58:48 np0005558241 nova_compute[248510]: 2025-12-13 08:58:48.792 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.803 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a09f2a-133c-479a-8e98-881ab509cfce]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.835 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8e586074-429a-4505-9525-0000065369a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.840 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d8167a85-6745-483f-8f7b-947a02b27fd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:48 np0005558241 NetworkManager[50376]: <info>  [1765616328.8412] manager: (tapa303ed13-80): new Veth device (/org/freedesktop/NetworkManager/Devices/501)
Dec 13 03:58:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.873 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[570460a3-fcd3-4a71-8044-18d26b2a36c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.877 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e12e2f5c-7163-481d-80bf-2106fbb400e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:48 np0005558241 NetworkManager[50376]: <info>  [1765616328.9024] device (tapa303ed13-80): carrier: link connected
Dec 13 03:58:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.915 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2e83f44b-b15e-45b6-b765-945b65d193a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.946 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[480c7518-80dc-4143-8985-b61e4ac04c24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa303ed13-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:86:13:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 359], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890611, 'reachable_time': 17646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371835, 'error': None, 'target': 'ovnmeta-a303ed13-8629-4259-965d-e42689484f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:48.973 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6bcae4e2-daa1-4b44-8d4f-ff8f961ed3d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe86:13ed'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 890611, 'tstamp': 890611}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371836, 'error': None, 'target': 'ovnmeta-a303ed13-8629-4259-965d-e42689484f38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.004 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa21fa4-087d-4982-8070-f45b45f50dff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa303ed13-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:86:13:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 359], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890611, 'reachable_time': 17646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371837, 'error': None, 'target': 'ovnmeta-a303ed13-8629-4259-965d-e42689484f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.050 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[88ec1044-1950-4fec-8ae5-af9f0d47021a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.070 248514 DEBUG nova.network.neutron [req-9cfc9fd9-6f4c-44e3-9fe9-b17d30ab2e85 req-c455dc64-c514-48c4-9ef0-cc7d3485067e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updated VIF entry in instance network info cache for port 3abb490c-6aad-47d4-8200-febd480ac7db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.072 248514 DEBUG nova.network.neutron [req-9cfc9fd9-6f4c-44e3-9fe9-b17d30ab2e85 req-c455dc64-c514-48c4-9ef0-cc7d3485067e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.095 248514 DEBUG oslo_concurrency.lockutils [req-9cfc9fd9-6f4c-44e3-9fe9-b17d30ab2e85 req-c455dc64-c514-48c4-9ef0-cc7d3485067e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.143 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a646bad2-5172-4ab4-b0aa-c0b41ceb300c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.145 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa303ed13-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.145 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.146 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa303ed13-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.148 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:49 np0005558241 kernel: tapa303ed13-80: entered promiscuous mode
Dec 13 03:58:49 np0005558241 NetworkManager[50376]: <info>  [1765616329.1503] manager: (tapa303ed13-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/502)
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.151 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa303ed13-80, col_values=(('external_ids', {'iface-id': '4a84c557-65ab-478a-95fb-44e9a95becd6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.152 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:49 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:49Z|01227|binding|INFO|Releasing lport 4a84c557-65ab-478a-95fb-44e9a95becd6 from this chassis (sb_readonly=0)
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.166 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.167 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a303ed13-8629-4259-965d-e42689484f38.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a303ed13-8629-4259-965d-e42689484f38.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.168 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5f2efb0d-fa07-4e30-b1a3-060496e7879b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.169 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-a303ed13-8629-4259-965d-e42689484f38
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/a303ed13-8629-4259-965d-e42689484f38.pid.haproxy
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID a303ed13-8629-4259-965d-e42689484f38
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:58:49 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:49.171 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a303ed13-8629-4259-965d-e42689484f38', 'env', 'PROCESS_TAG=haproxy-a303ed13-8629-4259-965d-e42689484f38', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a303ed13-8629-4259-965d-e42689484f38.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.212 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616329.2111478, 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.212 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] VM Started (Lifecycle Event)#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.232 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.236 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616329.2113965, 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.236 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.262 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.266 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.290 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.436 248514 DEBUG nova.compute.manager [req-c9e6fe68-14bc-480c-9ed6-6eb3b3e913bd req-9cdb2ec6-4fba-413e-b199-bfb1516f75de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-vif-plugged-3abb490c-6aad-47d4-8200-febd480ac7db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.437 248514 DEBUG oslo_concurrency.lockutils [req-c9e6fe68-14bc-480c-9ed6-6eb3b3e913bd req-9cdb2ec6-4fba-413e-b199-bfb1516f75de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.437 248514 DEBUG oslo_concurrency.lockutils [req-c9e6fe68-14bc-480c-9ed6-6eb3b3e913bd req-9cdb2ec6-4fba-413e-b199-bfb1516f75de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.438 248514 DEBUG oslo_concurrency.lockutils [req-c9e6fe68-14bc-480c-9ed6-6eb3b3e913bd req-9cdb2ec6-4fba-413e-b199-bfb1516f75de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.438 248514 DEBUG nova.compute.manager [req-c9e6fe68-14bc-480c-9ed6-6eb3b3e913bd req-9cdb2ec6-4fba-413e-b199-bfb1516f75de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Processing event network-vif-plugged-3abb490c-6aad-47d4-8200-febd480ac7db _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.439 248514 DEBUG nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.445 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616329.444818, 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.446 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.448 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.454 248514 INFO nova.virt.libvirt.driver [-] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Instance spawned successfully.#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.455 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.483 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.493 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.500 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.501 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.502 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.503 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.504 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.505 248514 DEBUG nova.virt.libvirt.driver [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.535 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:58:49 np0005558241 podman[371909]: 2025-12-13 08:58:49.583550141 +0000 UTC m=+0.048802346 container create 443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.584 248514 INFO nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Took 7.94 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.584 248514 DEBUG nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:58:49 np0005558241 systemd[1]: Started libpod-conmon-443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565.scope.
Dec 13 03:58:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:58:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e10e5c7fc25c7b540b3e7c168ca8f4252eb841eb00fc201c536aa1133b3617a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:58:49 np0005558241 podman[371909]: 2025-12-13 08:58:49.555499464 +0000 UTC m=+0.020751689 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.663 248514 INFO nova.compute.manager [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Took 9.53 seconds to build instance.#033[00m
Dec 13 03:58:49 np0005558241 podman[371909]: 2025-12-13 08:58:49.666239715 +0000 UTC m=+0.131491910 container init 443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 03:58:49 np0005558241 podman[371909]: 2025-12-13 08:58:49.675245515 +0000 UTC m=+0.140497710 container start 443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 03:58:49 np0005558241 nova_compute[248510]: 2025-12-13 08:58:49.695 248514 DEBUG oslo_concurrency.lockutils [None req-b79b3a59-d2c6-489b-9152-1b9b2f2fa955 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:49 np0005558241 neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38[371924]: [NOTICE]   (371928) : New worker (371930) forked
Dec 13 03:58:49 np0005558241 neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38[371924]: [NOTICE]   (371928) : Loading success.
Dec 13 03:58:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2932: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec 13 03:58:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:58:51 np0005558241 nova_compute[248510]: 2025-12-13 08:58:51.536 248514 DEBUG nova.compute.manager [req-0070644e-576d-4163-9208-0d6107e68cc5 req-199a5d21-37d0-4b07-9694-b9c2496e37b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-vif-plugged-3abb490c-6aad-47d4-8200-febd480ac7db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:58:51 np0005558241 nova_compute[248510]: 2025-12-13 08:58:51.536 248514 DEBUG oslo_concurrency.lockutils [req-0070644e-576d-4163-9208-0d6107e68cc5 req-199a5d21-37d0-4b07-9694-b9c2496e37b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:51 np0005558241 nova_compute[248510]: 2025-12-13 08:58:51.536 248514 DEBUG oslo_concurrency.lockutils [req-0070644e-576d-4163-9208-0d6107e68cc5 req-199a5d21-37d0-4b07-9694-b9c2496e37b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:51 np0005558241 nova_compute[248510]: 2025-12-13 08:58:51.537 248514 DEBUG oslo_concurrency.lockutils [req-0070644e-576d-4163-9208-0d6107e68cc5 req-199a5d21-37d0-4b07-9694-b9c2496e37b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:51 np0005558241 nova_compute[248510]: 2025-12-13 08:58:51.537 248514 DEBUG nova.compute.manager [req-0070644e-576d-4163-9208-0d6107e68cc5 req-199a5d21-37d0-4b07-9694-b9c2496e37b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] No waiting events found dispatching network-vif-plugged-3abb490c-6aad-47d4-8200-febd480ac7db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:58:51 np0005558241 nova_compute[248510]: 2025-12-13 08:58:51.537 248514 WARNING nova.compute.manager [req-0070644e-576d-4163-9208-0d6107e68cc5 req-199a5d21-37d0-4b07-9694-b9c2496e37b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received unexpected event network-vif-plugged-3abb490c-6aad-47d4-8200-febd480ac7db for instance with vm_state active and task_state None.#033[00m
Dec 13 03:58:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2933: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec 13 03:58:52 np0005558241 nova_compute[248510]: 2025-12-13 08:58:52.701 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2934: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 13 03:58:55 np0005558241 nova_compute[248510]: 2025-12-13 08:58:55.256 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:55 np0005558241 NetworkManager[50376]: <info>  [1765616335.2575] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/503)
Dec 13 03:58:55 np0005558241 NetworkManager[50376]: <info>  [1765616335.2589] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/504)
Dec 13 03:58:55 np0005558241 nova_compute[248510]: 2025-12-13 08:58:55.336 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:55 np0005558241 ovn_controller[148476]: 2025-12-13T08:58:55Z|01228|binding|INFO|Releasing lport 4a84c557-65ab-478a-95fb-44e9a95becd6 from this chassis (sb_readonly=0)
Dec 13 03:58:55 np0005558241 nova_compute[248510]: 2025-12-13 08:58:55.344 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:55.436 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:58:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:55.438 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:58:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:58:55.440 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:58:55 np0005558241 nova_compute[248510]: 2025-12-13 08:58:55.828 248514 DEBUG nova.compute.manager [req-d68e0728-5f4d-4c7a-b795-c45f81aab999 req-8f55a703-4ec1-48fe-b603-66cf4fbd52ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-changed-3abb490c-6aad-47d4-8200-febd480ac7db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:58:55 np0005558241 nova_compute[248510]: 2025-12-13 08:58:55.829 248514 DEBUG nova.compute.manager [req-d68e0728-5f4d-4c7a-b795-c45f81aab999 req-8f55a703-4ec1-48fe-b603-66cf4fbd52ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing instance network info cache due to event network-changed-3abb490c-6aad-47d4-8200-febd480ac7db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:58:55 np0005558241 nova_compute[248510]: 2025-12-13 08:58:55.829 248514 DEBUG oslo_concurrency.lockutils [req-d68e0728-5f4d-4c7a-b795-c45f81aab999 req-8f55a703-4ec1-48fe-b603-66cf4fbd52ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:58:55 np0005558241 nova_compute[248510]: 2025-12-13 08:58:55.829 248514 DEBUG oslo_concurrency.lockutils [req-d68e0728-5f4d-4c7a-b795-c45f81aab999 req-8f55a703-4ec1-48fe-b603-66cf4fbd52ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:58:55 np0005558241 nova_compute[248510]: 2025-12-13 08:58:55.830 248514 DEBUG nova.network.neutron [req-d68e0728-5f4d-4c7a-b795-c45f81aab999 req-8f55a703-4ec1-48fe-b603-66cf4fbd52ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing network info cache for port 3abb490c-6aad-47d4-8200-febd480ac7db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:58:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:58:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2935: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 03:58:57 np0005558241 nova_compute[248510]: 2025-12-13 08:58:57.704 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:58:58 np0005558241 nova_compute[248510]: 2025-12-13 08:58:58.124 248514 DEBUG nova.network.neutron [req-d68e0728-5f4d-4c7a-b795-c45f81aab999 req-8f55a703-4ec1-48fe-b603-66cf4fbd52ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updated VIF entry in instance network info cache for port 3abb490c-6aad-47d4-8200-febd480ac7db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 03:58:58 np0005558241 nova_compute[248510]: 2025-12-13 08:58:58.125 248514 DEBUG nova.network.neutron [req-d68e0728-5f4d-4c7a-b795-c45f81aab999 req-8f55a703-4ec1-48fe-b603-66cf4fbd52ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 03:58:58 np0005558241 nova_compute[248510]: 2025-12-13 08:58:58.152 248514 DEBUG oslo_concurrency.lockutils [req-d68e0728-5f4d-4c7a-b795-c45f81aab999 req-8f55a703-4ec1-48fe-b603-66cf4fbd52ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:58:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2936: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 03:58:59 np0005558241 nova_compute[248510]: 2025-12-13 08:58:59.526 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:59:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:59:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:59:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:59:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:59:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:59:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:59:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:59:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:59:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2937: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 03:59:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:59:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:59:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:59:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:59:00 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:59:00 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:59:00 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:59:00 np0005558241 podman[372083]: 2025-12-13 08:59:00.819408087 +0000 UTC m=+0.050640711 container create 22eb7c746b0e18bc7946e60dc3789ca5dceb807906e73866156181e8b2f19674 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:59:00 np0005558241 systemd[1]: Started libpod-conmon-22eb7c746b0e18bc7946e60dc3789ca5dceb807906e73866156181e8b2f19674.scope.
Dec 13 03:59:00 np0005558241 podman[372083]: 2025-12-13 08:59:00.796136247 +0000 UTC m=+0.027368901 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:59:00 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:59:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:59:00 np0005558241 podman[372083]: 2025-12-13 08:59:00.921232709 +0000 UTC m=+0.152465353 container init 22eb7c746b0e18bc7946e60dc3789ca5dceb807906e73866156181e8b2f19674 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_stonebraker, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:59:00 np0005558241 podman[372083]: 2025-12-13 08:59:00.928190719 +0000 UTC m=+0.159423343 container start 22eb7c746b0e18bc7946e60dc3789ca5dceb807906e73866156181e8b2f19674 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:59:00 np0005558241 brave_stonebraker[372099]: 167 167
Dec 13 03:59:00 np0005558241 systemd[1]: libpod-22eb7c746b0e18bc7946e60dc3789ca5dceb807906e73866156181e8b2f19674.scope: Deactivated successfully.
Dec 13 03:59:00 np0005558241 podman[372083]: 2025-12-13 08:59:00.937309743 +0000 UTC m=+0.168542407 container attach 22eb7c746b0e18bc7946e60dc3789ca5dceb807906e73866156181e8b2f19674 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 03:59:00 np0005558241 podman[372083]: 2025-12-13 08:59:00.938343438 +0000 UTC m=+0.169576072 container died 22eb7c746b0e18bc7946e60dc3789ca5dceb807906e73866156181e8b2f19674 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:59:00 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5e8c46a029027a47be49c4a701616fdb1758579ad943b8502e2822561f404a59-merged.mount: Deactivated successfully.
Dec 13 03:59:00 np0005558241 podman[372083]: 2025-12-13 08:59:00.995711372 +0000 UTC m=+0.226944006 container remove 22eb7c746b0e18bc7946e60dc3789ca5dceb807906e73866156181e8b2f19674 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:59:01 np0005558241 systemd[1]: libpod-conmon-22eb7c746b0e18bc7946e60dc3789ca5dceb807906e73866156181e8b2f19674.scope: Deactivated successfully.
Dec 13 03:59:01 np0005558241 podman[372123]: 2025-12-13 08:59:01.210609451 +0000 UTC m=+0.055054048 container create d89190f904a03b02c5c26bbb0997cedbae67610b111b8bb8e9ee666f2584e163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 03:59:01 np0005558241 systemd[1]: Started libpod-conmon-d89190f904a03b02c5c26bbb0997cedbae67610b111b8bb8e9ee666f2584e163.scope.
Dec 13 03:59:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:59:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e18790b133a0004cc896a0a0d5da8bd0910ddc49a74a90cf6a4725a04895d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e18790b133a0004cc896a0a0d5da8bd0910ddc49a74a90cf6a4725a04895d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e18790b133a0004cc896a0a0d5da8bd0910ddc49a74a90cf6a4725a04895d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e18790b133a0004cc896a0a0d5da8bd0910ddc49a74a90cf6a4725a04895d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e18790b133a0004cc896a0a0d5da8bd0910ddc49a74a90cf6a4725a04895d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:01 np0005558241 podman[372123]: 2025-12-13 08:59:01.187495096 +0000 UTC m=+0.031939733 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:59:01 np0005558241 podman[372123]: 2025-12-13 08:59:01.291592844 +0000 UTC m=+0.136037461 container init d89190f904a03b02c5c26bbb0997cedbae67610b111b8bb8e9ee666f2584e163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_albattani, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 03:59:01 np0005558241 podman[372123]: 2025-12-13 08:59:01.299063827 +0000 UTC m=+0.143508414 container start d89190f904a03b02c5c26bbb0997cedbae67610b111b8bb8e9ee666f2584e163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:59:01 np0005558241 podman[372123]: 2025-12-13 08:59:01.302009729 +0000 UTC m=+0.146454336 container attach d89190f904a03b02c5c26bbb0997cedbae67610b111b8bb8e9ee666f2584e163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_albattani, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 03:59:01 np0005558241 reverent_albattani[372138]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:59:01 np0005558241 reverent_albattani[372138]: --> All data devices are unavailable
Dec 13 03:59:01 np0005558241 systemd[1]: libpod-d89190f904a03b02c5c26bbb0997cedbae67610b111b8bb8e9ee666f2584e163.scope: Deactivated successfully.
Dec 13 03:59:01 np0005558241 podman[372123]: 2025-12-13 08:59:01.88218513 +0000 UTC m=+0.726629717 container died d89190f904a03b02c5c26bbb0997cedbae67610b111b8bb8e9ee666f2584e163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_albattani, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:59:01 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b8e18790b133a0004cc896a0a0d5da8bd0910ddc49a74a90cf6a4725a04895d7-merged.mount: Deactivated successfully.
Dec 13 03:59:01 np0005558241 podman[372123]: 2025-12-13 08:59:01.946001992 +0000 UTC m=+0.790446579 container remove d89190f904a03b02c5c26bbb0997cedbae67610b111b8bb8e9ee666f2584e163 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_albattani, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:59:01 np0005558241 systemd[1]: libpod-conmon-d89190f904a03b02c5c26bbb0997cedbae67610b111b8bb8e9ee666f2584e163.scope: Deactivated successfully.
Dec 13 03:59:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:02Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:9a:b3 10.100.0.12
Dec 13 03:59:02 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:02Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:9a:b3 10.100.0.12
Dec 13 03:59:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2938: 321 pgs: 321 active+clean; 88 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 69 op/s
Dec 13 03:59:02 np0005558241 podman[372231]: 2025-12-13 08:59:02.460925876 +0000 UTC m=+0.047184076 container create 48180ca59009783ba3d5418bc65a4d2c31d4f724c581afea55b33ff9863d5832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:59:02 np0005558241 systemd[1]: Started libpod-conmon-48180ca59009783ba3d5418bc65a4d2c31d4f724c581afea55b33ff9863d5832.scope.
Dec 13 03:59:02 np0005558241 podman[372231]: 2025-12-13 08:59:02.437618236 +0000 UTC m=+0.023876436 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:59:02 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:59:02 np0005558241 podman[372231]: 2025-12-13 08:59:02.562466792 +0000 UTC m=+0.148724982 container init 48180ca59009783ba3d5418bc65a4d2c31d4f724c581afea55b33ff9863d5832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:59:02 np0005558241 podman[372231]: 2025-12-13 08:59:02.5758904 +0000 UTC m=+0.162148560 container start 48180ca59009783ba3d5418bc65a4d2c31d4f724c581afea55b33ff9863d5832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 03:59:02 np0005558241 frosty_morse[372247]: 167 167
Dec 13 03:59:02 np0005558241 systemd[1]: libpod-48180ca59009783ba3d5418bc65a4d2c31d4f724c581afea55b33ff9863d5832.scope: Deactivated successfully.
Dec 13 03:59:02 np0005558241 podman[372231]: 2025-12-13 08:59:02.583964908 +0000 UTC m=+0.170223068 container attach 48180ca59009783ba3d5418bc65a4d2c31d4f724c581afea55b33ff9863d5832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_morse, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:59:02 np0005558241 podman[372231]: 2025-12-13 08:59:02.584398989 +0000 UTC m=+0.170657149 container died 48180ca59009783ba3d5418bc65a4d2c31d4f724c581afea55b33ff9863d5832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_morse, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec 13 03:59:02 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b87c14ed2798f45c63a7fe2d0007b8eadef639e10c99da9187a88d1267192eb9-merged.mount: Deactivated successfully.
Dec 13 03:59:02 np0005558241 podman[372231]: 2025-12-13 08:59:02.629908323 +0000 UTC m=+0.216166483 container remove 48180ca59009783ba3d5418bc65a4d2c31d4f724c581afea55b33ff9863d5832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_morse, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:59:02 np0005558241 systemd[1]: libpod-conmon-48180ca59009783ba3d5418bc65a4d2c31d4f724c581afea55b33ff9863d5832.scope: Deactivated successfully.
Dec 13 03:59:02 np0005558241 nova_compute[248510]: 2025-12-13 08:59:02.707 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:59:02 np0005558241 nova_compute[248510]: 2025-12-13 08:59:02.709 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 03:59:02 np0005558241 podman[372271]: 2025-12-13 08:59:02.804859465 +0000 UTC m=+0.048823746 container create eefe82e9de8312ea8ea7351241236195d45ed4c48c40f980e901701ae756a0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leakey, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 03:59:02 np0005558241 systemd[1]: Started libpod-conmon-eefe82e9de8312ea8ea7351241236195d45ed4c48c40f980e901701ae756a0e7.scope.
Dec 13 03:59:02 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:59:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ddf2167f01cc678bc5c1de60a3911305cac235c8f70855a08183743e7a9d179/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ddf2167f01cc678bc5c1de60a3911305cac235c8f70855a08183743e7a9d179/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ddf2167f01cc678bc5c1de60a3911305cac235c8f70855a08183743e7a9d179/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ddf2167f01cc678bc5c1de60a3911305cac235c8f70855a08183743e7a9d179/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:02 np0005558241 podman[372271]: 2025-12-13 08:59:02.783377509 +0000 UTC m=+0.027341820 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:59:02 np0005558241 podman[372271]: 2025-12-13 08:59:02.906719358 +0000 UTC m=+0.150683659 container init eefe82e9de8312ea8ea7351241236195d45ed4c48c40f980e901701ae756a0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leakey, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Dec 13 03:59:02 np0005558241 podman[372271]: 2025-12-13 08:59:02.913517145 +0000 UTC m=+0.157481426 container start eefe82e9de8312ea8ea7351241236195d45ed4c48c40f980e901701ae756a0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:59:02 np0005558241 podman[372271]: 2025-12-13 08:59:02.922831763 +0000 UTC m=+0.166796044 container attach eefe82e9de8312ea8ea7351241236195d45ed4c48c40f980e901701ae756a0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 03:59:03 np0005558241 nova_compute[248510]: 2025-12-13 08:59:03.185 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "48db77d9-f4d5-44dd-852e-aa10f98ace90" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:59:03 np0005558241 nova_compute[248510]: 2025-12-13 08:59:03.187 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:59:03 np0005558241 nova_compute[248510]: 2025-12-13 08:59:03.207 248514 DEBUG nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 03:59:03 np0005558241 focused_leakey[372287]: {
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:    "0": [
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:        {
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "devices": [
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "/dev/loop3"
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            ],
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "lv_name": "ceph_lv0",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "lv_size": "21470642176",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "name": "ceph_lv0",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "tags": {
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.cluster_name": "ceph",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.crush_device_class": "",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.encrypted": "0",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.objectstore": "bluestore",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.osd_id": "0",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.type": "block",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.vdo": "0",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.with_tpm": "0"
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            },
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "type": "block",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "vg_name": "ceph_vg0"
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:        }
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:    ],
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:    "1": [
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:        {
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "devices": [
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "/dev/loop4"
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            ],
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "lv_name": "ceph_lv1",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "lv_size": "21470642176",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "name": "ceph_lv1",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "tags": {
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.cluster_name": "ceph",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.crush_device_class": "",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.encrypted": "0",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.objectstore": "bluestore",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.osd_id": "1",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.type": "block",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.vdo": "0",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.with_tpm": "0"
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            },
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "type": "block",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "vg_name": "ceph_vg1"
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:        }
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:    ],
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:    "2": [
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:        {
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "devices": [
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "/dev/loop5"
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            ],
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "lv_name": "ceph_lv2",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "lv_size": "21470642176",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "name": "ceph_lv2",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "tags": {
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.cephx_lockbox_secret": "",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.cluster_name": "ceph",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.crush_device_class": "",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.encrypted": "0",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.objectstore": "bluestore",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.osd_id": "2",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.type": "block",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.vdo": "0",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:                "ceph.with_tpm": "0"
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            },
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "type": "block",
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:            "vg_name": "ceph_vg2"
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:        }
Dec 13 03:59:03 np0005558241 focused_leakey[372287]:    ]
Dec 13 03:59:03 np0005558241 focused_leakey[372287]: }
Dec 13 03:59:03 np0005558241 systemd[1]: libpod-eefe82e9de8312ea8ea7351241236195d45ed4c48c40f980e901701ae756a0e7.scope: Deactivated successfully.
Dec 13 03:59:03 np0005558241 podman[372271]: 2025-12-13 08:59:03.25976987 +0000 UTC m=+0.503734151 container died eefe82e9de8312ea8ea7351241236195d45ed4c48c40f980e901701ae756a0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leakey, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:59:03 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9ddf2167f01cc678bc5c1de60a3911305cac235c8f70855a08183743e7a9d179-merged.mount: Deactivated successfully.
Dec 13 03:59:03 np0005558241 podman[372271]: 2025-12-13 08:59:03.309145079 +0000 UTC m=+0.553109360 container remove eefe82e9de8312ea8ea7351241236195d45ed4c48c40f980e901701ae756a0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leakey, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:59:03 np0005558241 nova_compute[248510]: 2025-12-13 08:59:03.313 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:03 np0005558241 nova_compute[248510]: 2025-12-13 08:59:03.313 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:03 np0005558241 systemd[1]: libpod-conmon-eefe82e9de8312ea8ea7351241236195d45ed4c48c40f980e901701ae756a0e7.scope: Deactivated successfully.
Dec 13 03:59:03 np0005558241 nova_compute[248510]: 2025-12-13 08:59:03.325 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:59:03 np0005558241 nova_compute[248510]: 2025-12-13 08:59:03.325 248514 INFO nova.compute.claims [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:59:03 np0005558241 nova_compute[248510]: 2025-12-13 08:59:03.466 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:59:03 np0005558241 podman[372388]: 2025-12-13 08:59:03.818431915 +0000 UTC m=+0.056157486 container create e474b817ae474ff1818062f586dbc5b717735c50acf45e7b3200e58378145617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pasteur, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:59:03 np0005558241 systemd[1]: Started libpod-conmon-e474b817ae474ff1818062f586dbc5b717735c50acf45e7b3200e58378145617.scope.
Dec 13 03:59:03 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:59:03 np0005558241 podman[372388]: 2025-12-13 08:59:03.789429275 +0000 UTC m=+0.027154866 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:59:03 np0005558241 podman[372388]: 2025-12-13 08:59:03.906819759 +0000 UTC m=+0.144545360 container init e474b817ae474ff1818062f586dbc5b717735c50acf45e7b3200e58378145617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:59:03 np0005558241 podman[372388]: 2025-12-13 08:59:03.914864296 +0000 UTC m=+0.152589867 container start e474b817ae474ff1818062f586dbc5b717735c50acf45e7b3200e58378145617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:59:03 np0005558241 podman[372388]: 2025-12-13 08:59:03.920837372 +0000 UTC m=+0.158562983 container attach e474b817ae474ff1818062f586dbc5b717735c50acf45e7b3200e58378145617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pasteur, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:59:03 np0005558241 youthful_pasteur[372405]: 167 167
Dec 13 03:59:03 np0005558241 systemd[1]: libpod-e474b817ae474ff1818062f586dbc5b717735c50acf45e7b3200e58378145617.scope: Deactivated successfully.
Dec 13 03:59:03 np0005558241 podman[372388]: 2025-12-13 08:59:03.925140237 +0000 UTC m=+0.162865808 container died e474b817ae474ff1818062f586dbc5b717735c50acf45e7b3200e58378145617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pasteur, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec 13 03:59:03 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9e625005514a5cbe9380e7eca3a8fadddea3cf3d55c2345b35dae3a77b8c4bf0-merged.mount: Deactivated successfully.
Dec 13 03:59:03 np0005558241 podman[372388]: 2025-12-13 08:59:03.985895434 +0000 UTC m=+0.223620995 container remove e474b817ae474ff1818062f586dbc5b717735c50acf45e7b3200e58378145617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pasteur, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:59:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:59:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/244800968' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:59:04 np0005558241 systemd[1]: libpod-conmon-e474b817ae474ff1818062f586dbc5b717735c50acf45e7b3200e58378145617.scope: Deactivated successfully.
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.052 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.062 248514 DEBUG nova.compute.provider_tree [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.082 248514 DEBUG nova.scheduler.client.report [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.119 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.120 248514 DEBUG nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:59:04 np0005558241 podman[372428]: 2025-12-13 08:59:04.144172329 +0000 UTC m=+0.079446306 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 13 03:59:04 np0005558241 podman[372429]: 2025-12-13 08:59:04.156009128 +0000 UTC m=+0.091675895 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:59:04 np0005558241 podman[372426]: 2025-12-13 08:59:04.174908361 +0000 UTC m=+0.108952068 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.175 248514 DEBUG nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.175 248514 DEBUG nova.network.neutron [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:59:04 np0005558241 podman[372485]: 2025-12-13 08:59:04.19816087 +0000 UTC m=+0.053824888 container create d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ride, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.198 248514 INFO nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.218 248514 DEBUG nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:59:04 np0005558241 systemd[1]: Started libpod-conmon-d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1.scope.
Dec 13 03:59:04 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:59:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b58f4f7c30495239d1e4973d368a8696284684136a2bfd257c14be496ac67c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b58f4f7c30495239d1e4973d368a8696284684136a2bfd257c14be496ac67c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b58f4f7c30495239d1e4973d368a8696284684136a2bfd257c14be496ac67c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b58f4f7c30495239d1e4973d368a8696284684136a2bfd257c14be496ac67c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:04 np0005558241 podman[372485]: 2025-12-13 08:59:04.175995177 +0000 UTC m=+0.031659205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:59:04 np0005558241 podman[372485]: 2025-12-13 08:59:04.279660365 +0000 UTC m=+0.135324393 container init d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:59:04 np0005558241 podman[372485]: 2025-12-13 08:59:04.28639186 +0000 UTC m=+0.142055878 container start d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ride, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:59:04 np0005558241 podman[372485]: 2025-12-13 08:59:04.291928465 +0000 UTC m=+0.147592503 container attach d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ride, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.318 248514 DEBUG nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.319 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.320 248514 INFO nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Creating image(s)#033[00m
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.343 248514 DEBUG nova.storage.rbd_utils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.368 248514 DEBUG nova.storage.rbd_utils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:59:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2939: 321 pgs: 321 active+clean; 121 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.390 248514 DEBUG nova.storage.rbd_utils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.393 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.464 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.465 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.466 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.466 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.488 248514 DEBUG nova.storage.rbd_utils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.491 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.923 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:59:04 np0005558241 nova_compute[248510]: 2025-12-13 08:59:04.978 248514 DEBUG nova.storage.rbd_utils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] resizing rbd image 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:59:05 np0005558241 lvm[372732]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:59:05 np0005558241 lvm[372732]: VG ceph_vg0 finished
Dec 13 03:59:05 np0005558241 lvm[372735]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:59:05 np0005558241 lvm[372735]: VG ceph_vg1 finished
Dec 13 03:59:05 np0005558241 nova_compute[248510]: 2025-12-13 08:59:05.063 248514 DEBUG nova.objects.instance [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'migration_context' on Instance uuid 48db77d9-f4d5-44dd-852e-aa10f98ace90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:59:05 np0005558241 lvm[372755]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:59:05 np0005558241 lvm[372755]: VG ceph_vg2 finished
Dec 13 03:59:05 np0005558241 nova_compute[248510]: 2025-12-13 08:59:05.086 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:59:05 np0005558241 nova_compute[248510]: 2025-12-13 08:59:05.087 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Ensure instance console log exists: /var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:59:05 np0005558241 nova_compute[248510]: 2025-12-13 08:59:05.087 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:05 np0005558241 nova_compute[248510]: 2025-12-13 08:59:05.088 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:05 np0005558241 nova_compute[248510]: 2025-12-13 08:59:05.088 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:05 np0005558241 lvm[372756]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:59:05 np0005558241 lvm[372756]: VG ceph_vg1 finished
Dec 13 03:59:05 np0005558241 nova_compute[248510]: 2025-12-13 08:59:05.101 248514 DEBUG nova.policy [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a5fd410579fa429ba0f7f680590cd86a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '77d40177a4804671aa9c5da343bc2ed4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:59:05 np0005558241 eloquent_ride[372508]: {}
Dec 13 03:59:05 np0005558241 systemd[1]: libpod-d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1.scope: Deactivated successfully.
Dec 13 03:59:05 np0005558241 systemd[1]: libpod-d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1.scope: Consumed 1.386s CPU time.
Dec 13 03:59:05 np0005558241 podman[372485]: 2025-12-13 08:59:05.194919367 +0000 UTC m=+1.050583385 container died d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ride, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 03:59:05 np0005558241 systemd[1]: var-lib-containers-storage-overlay-16b58f4f7c30495239d1e4973d368a8696284684136a2bfd257c14be496ac67c-merged.mount: Deactivated successfully.
Dec 13 03:59:05 np0005558241 podman[372485]: 2025-12-13 08:59:05.247003282 +0000 UTC m=+1.102667320 container remove d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 03:59:05 np0005558241 systemd[1]: libpod-conmon-d67a213266e5b8d45e5867e7c1d46a8fc3a9949df49056ab918f61fb9ae7ead1.scope: Deactivated successfully.
Dec 13 03:59:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:59:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:59:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:59:05 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:59:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:59:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 03:59:05 np0005558241 nova_compute[248510]: 2025-12-13 08:59:05.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:59:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:59:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2940: 321 pgs: 321 active+clean; 121 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 03:59:06 np0005558241 nova_compute[248510]: 2025-12-13 08:59:06.661 248514 DEBUG nova.network.neutron [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Successfully created port: 558f49fb-1002-4c28-8ba2-ea32384811d7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:59:07 np0005558241 nova_compute[248510]: 2025-12-13 08:59:07.710 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:07 np0005558241 nova_compute[248510]: 2025-12-13 08:59:07.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:59:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2941: 321 pgs: 321 active+clean; 151 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 3.3 MiB/s wr, 66 op/s
Dec 13 03:59:09 np0005558241 nova_compute[248510]: 2025-12-13 08:59:09.191 248514 DEBUG nova.network.neutron [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Successfully updated port: 558f49fb-1002-4c28-8ba2-ea32384811d7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:59:09 np0005558241 nova_compute[248510]: 2025-12-13 08:59:09.306 248514 INFO nova.compute.manager [None req-3bc97e41-a435-403f-a4b6-a0c83f2d2d83 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Get console output#033[00m
Dec 13 03:59:09 np0005558241 nova_compute[248510]: 2025-12-13 08:59:09.313 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 03:59:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_08:59:09
Dec 13 03:59:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:59:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 03:59:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'images', '.rgw.root', 'vms', 'default.rgw.control', 'default.rgw.log', 'backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Dec 13 03:59:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:59:09 np0005558241 nova_compute[248510]: 2025-12-13 08:59:09.331 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:59:09 np0005558241 nova_compute[248510]: 2025-12-13 08:59:09.332 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquired lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:59:09 np0005558241 nova_compute[248510]: 2025-12-13 08:59:09.332 248514 DEBUG nova.network.neutron [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:59:09 np0005558241 nova_compute[248510]: 2025-12-13 08:59:09.488 248514 DEBUG nova.compute.manager [req-503d3591-c526-4bb1-b375-116f5fcab29a req-4eef0ec1-d364-466b-9f5c-8b9dd5c1d536 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received event network-changed-558f49fb-1002-4c28-8ba2-ea32384811d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:59:09 np0005558241 nova_compute[248510]: 2025-12-13 08:59:09.489 248514 DEBUG nova.compute.manager [req-503d3591-c526-4bb1-b375-116f5fcab29a req-4eef0ec1-d364-466b-9f5c-8b9dd5c1d536 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Refreshing instance network info cache due to event network-changed-558f49fb-1002-4c28-8ba2-ea32384811d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:59:09 np0005558241 nova_compute[248510]: 2025-12-13 08:59:09.490 248514 DEBUG oslo_concurrency.lockutils [req-503d3591-c526-4bb1-b375-116f5fcab29a req-4eef0ec1-d364-466b-9f5c-8b9dd5c1d536 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:59:10 np0005558241 nova_compute[248510]: 2025-12-13 08:59:10.042 248514 DEBUG nova.network.neutron [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:59:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:59:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:59:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:59:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:59:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:59:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:59:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2942: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Dec 13 03:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:59:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:59:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.444 248514 DEBUG nova.network.neutron [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Updating instance_info_cache with network_info: [{"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.473 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Releasing lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.474 248514 DEBUG nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Instance network_info: |[{"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.474 248514 DEBUG oslo_concurrency.lockutils [req-503d3591-c526-4bb1-b375-116f5fcab29a req-4eef0ec1-d364-466b-9f5c-8b9dd5c1d536 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.474 248514 DEBUG nova.network.neutron [req-503d3591-c526-4bb1-b375-116f5fcab29a req-4eef0ec1-d364-466b-9f5c-8b9dd5c1d536 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Refreshing network info cache for port 558f49fb-1002-4c28-8ba2-ea32384811d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.477 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Start _get_guest_xml network_info=[{"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.484 248514 WARNING nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.491 248514 DEBUG nova.virt.libvirt.host [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.492 248514 DEBUG nova.virt.libvirt.host [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.496 248514 DEBUG nova.virt.libvirt.host [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.496 248514 DEBUG nova.virt.libvirt.host [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.496 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.497 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.497 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.497 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.498 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.498 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.498 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.498 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.499 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.499 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.499 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.499 248514 DEBUG nova.virt.hardware [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:59:11 np0005558241 nova_compute[248510]: 2025-12-13 08:59:11.502 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:59:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:59:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/874647145' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.134 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.167 248514 DEBUG nova.storage.rbd_utils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.175 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:59:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2943: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.713 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:59:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3782659699' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.759 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.761 248514 DEBUG nova.virt.libvirt.vif [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:59:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1185653133',display_name='tempest-TestNetworkAdvancedServerOps-server-1185653133',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1185653133',id=125,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBErdAvAxyanLSk6zlk6UR7jIkt03qXx3MfH8+NhdcAyZSFQd+bfa57m4HcmVco5q1ZITwwE9Zaq6j0nsDhbNKnt90E6ScVaQ/3xa8beJWwD79hCpdjT8jJu+dOAQXgrxSQ==',key_name='tempest-TestNetworkAdvancedServerOps-1501689960',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-rf02ewgw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:59:04Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=48db77d9-f4d5-44dd-852e-aa10f98ace90,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.761 248514 DEBUG nova.network.os_vif_util [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.762 248514 DEBUG nova.network.os_vif_util [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:ad:5d,bridge_name='br-int',has_traffic_filtering=True,id=558f49fb-1002-4c28-8ba2-ea32384811d7,network=Network(4977afa6-1a1c-43aa-9d3f-7b6747f5eb37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558f49fb-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.763 248514 DEBUG nova.objects.instance [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 48db77d9-f4d5-44dd-852e-aa10f98ace90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.980 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:59:12 np0005558241 nova_compute[248510]:  <uuid>48db77d9-f4d5-44dd-852e-aa10f98ace90</uuid>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:  <name>instance-0000007d</name>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1185653133</nova:name>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:59:11</nova:creationTime>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:59:12 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:        <nova:user uuid="a5fd410579fa429ba0f7f680590cd86a">tempest-TestNetworkAdvancedServerOps-1969761839-project-member</nova:user>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:        <nova:project uuid="77d40177a4804671aa9c5da343bc2ed4">tempest-TestNetworkAdvancedServerOps-1969761839</nova:project>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:        <nova:port uuid="558f49fb-1002-4c28-8ba2-ea32384811d7">
Dec 13 03:59:12 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <entry name="serial">48db77d9-f4d5-44dd-852e-aa10f98ace90</entry>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <entry name="uuid">48db77d9-f4d5-44dd-852e-aa10f98ace90</entry>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/48db77d9-f4d5-44dd-852e-aa10f98ace90_disk">
Dec 13 03:59:12 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:59:12 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/48db77d9-f4d5-44dd-852e-aa10f98ace90_disk.config">
Dec 13 03:59:12 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:59:12 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:ba:ad:5d"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <target dev="tap558f49fb-10"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90/console.log" append="off"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:59:12 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:59:12 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:59:12 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:59:12 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.981 248514 DEBUG nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Preparing to wait for external event network-vif-plugged-558f49fb-1002-4c28-8ba2-ea32384811d7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.983 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.984 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.984 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.986 248514 DEBUG nova.virt.libvirt.vif [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:59:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1185653133',display_name='tempest-TestNetworkAdvancedServerOps-server-1185653133',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1185653133',id=125,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBErdAvAxyanLSk6zlk6UR7jIkt03qXx3MfH8+NhdcAyZSFQd+bfa57m4HcmVco5q1ZITwwE9Zaq6j0nsDhbNKnt90E6ScVaQ/3xa8beJWwD79hCpdjT8jJu+dOAQXgrxSQ==',key_name='tempest-TestNetworkAdvancedServerOps-1501689960',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-rf02ewgw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:59:04Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=48db77d9-f4d5-44dd-852e-aa10f98ace90,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.987 248514 DEBUG nova.network.os_vif_util [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.988 248514 DEBUG nova.network.os_vif_util [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:ad:5d,bridge_name='br-int',has_traffic_filtering=True,id=558f49fb-1002-4c28-8ba2-ea32384811d7,network=Network(4977afa6-1a1c-43aa-9d3f-7b6747f5eb37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558f49fb-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.988 248514 DEBUG os_vif [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:ad:5d,bridge_name='br-int',has_traffic_filtering=True,id=558f49fb-1002-4c28-8ba2-ea32384811d7,network=Network(4977afa6-1a1c-43aa-9d3f-7b6747f5eb37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558f49fb-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.989 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.990 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.991 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.996 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap558f49fb-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.997 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap558f49fb-10, col_values=(('external_ids', {'iface-id': '558f49fb-1002-4c28-8ba2-ea32384811d7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:ad:5d', 'vm-uuid': '48db77d9-f4d5-44dd-852e-aa10f98ace90'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:12 np0005558241 nova_compute[248510]: 2025-12-13 08:59:12.999 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:13 np0005558241 NetworkManager[50376]: <info>  [1765616353.0011] manager: (tap558f49fb-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/505)
Dec 13 03:59:13 np0005558241 nova_compute[248510]: 2025-12-13 08:59:13.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:59:13 np0005558241 nova_compute[248510]: 2025-12-13 08:59:13.009 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:13 np0005558241 nova_compute[248510]: 2025-12-13 08:59:13.010 248514 INFO os_vif [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:ad:5d,bridge_name='br-int',has_traffic_filtering=True,id=558f49fb-1002-4c28-8ba2-ea32384811d7,network=Network(4977afa6-1a1c-43aa-9d3f-7b6747f5eb37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558f49fb-10')#033[00m
Dec 13 03:59:13 np0005558241 nova_compute[248510]: 2025-12-13 08:59:13.535 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:59:13 np0005558241 nova_compute[248510]: 2025-12-13 08:59:13.535 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:59:13 np0005558241 nova_compute[248510]: 2025-12-13 08:59:13.535 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No VIF found with MAC fa:16:3e:ba:ad:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:59:13 np0005558241 nova_compute[248510]: 2025-12-13 08:59:13.536 248514 INFO nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Using config drive#033[00m
Dec 13 03:59:13 np0005558241 nova_compute[248510]: 2025-12-13 08:59:13.558 248514 DEBUG nova.storage.rbd_utils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:59:14 np0005558241 nova_compute[248510]: 2025-12-13 08:59:14.184 248514 DEBUG nova.network.neutron [req-503d3591-c526-4bb1-b375-116f5fcab29a req-4eef0ec1-d364-466b-9f5c-8b9dd5c1d536 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Updated VIF entry in instance network info cache for port 558f49fb-1002-4c28-8ba2-ea32384811d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:59:14 np0005558241 nova_compute[248510]: 2025-12-13 08:59:14.185 248514 DEBUG nova.network.neutron [req-503d3591-c526-4bb1-b375-116f5fcab29a req-4eef0ec1-d364-466b-9f5c-8b9dd5c1d536 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Updating instance_info_cache with network_info: [{"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:59:14 np0005558241 nova_compute[248510]: 2025-12-13 08:59:14.328 248514 INFO nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Creating config drive at /var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90/disk.config#033[00m
Dec 13 03:59:14 np0005558241 nova_compute[248510]: 2025-12-13 08:59:14.334 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr6quna_n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:59:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2944: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 402 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Dec 13 03:59:14 np0005558241 nova_compute[248510]: 2025-12-13 08:59:14.495 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr6quna_n" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:59:14 np0005558241 nova_compute[248510]: 2025-12-13 08:59:14.527 248514 DEBUG nova.storage.rbd_utils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:59:14 np0005558241 nova_compute[248510]: 2025-12-13 08:59:14.532 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90/disk.config 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:59:14 np0005558241 nova_compute[248510]: 2025-12-13 08:59:14.668 248514 DEBUG oslo_concurrency.lockutils [req-503d3591-c526-4bb1-b375-116f5fcab29a req-4eef0ec1-d364-466b-9f5c-8b9dd5c1d536 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:59:14 np0005558241 nova_compute[248510]: 2025-12-13 08:59:14.702 248514 DEBUG oslo_concurrency.processutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90/disk.config 48db77d9-f4d5-44dd-852e-aa10f98ace90_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:59:14 np0005558241 nova_compute[248510]: 2025-12-13 08:59:14.702 248514 INFO nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Deleting local config drive /var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90/disk.config because it was imported into RBD.#033[00m
Dec 13 03:59:14 np0005558241 nova_compute[248510]: 2025-12-13 08:59:14.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:59:14 np0005558241 nova_compute[248510]: 2025-12-13 08:59:14.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 03:59:14 np0005558241 kernel: tap558f49fb-10: entered promiscuous mode
Dec 13 03:59:14 np0005558241 nova_compute[248510]: 2025-12-13 08:59:14.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 03:59:14 np0005558241 NetworkManager[50376]: <info>  [1765616354.7759] manager: (tap558f49fb-10): new Tun device (/org/freedesktop/NetworkManager/Devices/506)
Dec 13 03:59:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:14Z|01229|binding|INFO|Claiming lport 558f49fb-1002-4c28-8ba2-ea32384811d7 for this chassis.
Dec 13 03:59:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:14Z|01230|binding|INFO|558f49fb-1002-4c28-8ba2-ea32384811d7: Claiming fa:16:3e:ba:ad:5d 10.100.0.5
Dec 13 03:59:14 np0005558241 nova_compute[248510]: 2025-12-13 08:59:14.779 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:14Z|01231|binding|INFO|Setting lport 558f49fb-1002-4c28-8ba2-ea32384811d7 ovn-installed in OVS
Dec 13 03:59:14 np0005558241 nova_compute[248510]: 2025-12-13 08:59:14.794 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:14 np0005558241 nova_compute[248510]: 2025-12-13 08:59:14.795 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:14 np0005558241 systemd-udevd[372930]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:59:14 np0005558241 systemd-machined[210538]: New machine qemu-152-instance-0000007d.
Dec 13 03:59:14 np0005558241 NetworkManager[50376]: <info>  [1765616354.8241] device (tap558f49fb-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:59:14 np0005558241 NetworkManager[50376]: <info>  [1765616354.8251] device (tap558f49fb-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:59:14 np0005558241 systemd[1]: Started Virtual Machine qemu-152-instance-0000007d.
Dec 13 03:59:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:14.957 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:ad:5d 10.100.0.5'], port_security=['fa:16:3e:ba:ad:5d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '48db77d9-f4d5-44dd-852e-aa10f98ace90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67032c28-66fa-4d75-99b8-37c3e61c4140', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af406687-c2e5-4e03-9415-af4130e7b9af, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=558f49fb-1002-4c28-8ba2-ea32384811d7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:59:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:14.959 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 558f49fb-1002-4c28-8ba2-ea32384811d7 in datapath 4977afa6-1a1c-43aa-9d3f-7b6747f5eb37 bound to our chassis#033[00m
Dec 13 03:59:14 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:14Z|01232|binding|INFO|Setting lport 558f49fb-1002-4c28-8ba2-ea32384811d7 up in Southbound
Dec 13 03:59:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:14.961 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4977afa6-1a1c-43aa-9d3f-7b6747f5eb37#033[00m
Dec 13 03:59:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:14.974 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c41f574f-600c-4831-85da-a71aa38e557d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:14.975 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4977afa6-11 in ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:59:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:14.977 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4977afa6-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:59:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:14.977 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[07c4480b-fce7-4035-9537-10e5bf2c24cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:14.978 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c027f0f4-2338-4774-893f-df2027ad3fbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:14.998 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[12083d34-6f81-4915-866d-552da9b33c56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.014 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3c7f6d5b-8328-4821-8267-16ce1bc3e18a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.044 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[95e35800-9c9a-4c6c-97b2-5dc23ce7e65d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:15 np0005558241 NetworkManager[50376]: <info>  [1765616355.0508] manager: (tap4977afa6-10): new Veth device (/org/freedesktop/NetworkManager/Devices/507)
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.052 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fe9f44d1-5a60-4903-9a71-0b6e11cde0ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.091 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5016dd5c-39ca-4ce4-9a26-806138e6f56a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.094 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[223685dc-49d7-43a6-970f-6eeeb385df18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:15 np0005558241 NetworkManager[50376]: <info>  [1765616355.1199] device (tap4977afa6-10): carrier: link connected
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.124 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd8094e-2990-4a99-a299-1a1de2862ecc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:15 np0005558241 nova_compute[248510]: 2025-12-13 08:59:15.140 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.141 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c9bde7-a46e-4e55-a436-3301740fd31f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4977afa6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:01:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 893233, 'reachable_time': 22249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372964, 'error': None, 'target': 'ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.161 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[42fb8274-1630-4693-bb06-f8c0ea37443f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe40:1f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 893233, 'tstamp': 893233}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372965, 'error': None, 'target': 'ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.180 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[95cc0e49-f936-4aa1-b6fa-e14c7923b88e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4977afa6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:01:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 893233, 'reachable_time': 22249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 372966, 'error': None, 'target': 'ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.218 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f99cc681-3ef4-4cc1-8249-a7b06c547767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.286 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1ec6c0bb-ed8d-46ce-884b-6d69c345a614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.288 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4977afa6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.288 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.288 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4977afa6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:15 np0005558241 kernel: tap4977afa6-10: entered promiscuous mode
Dec 13 03:59:15 np0005558241 nova_compute[248510]: 2025-12-13 08:59:15.330 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:15 np0005558241 NetworkManager[50376]: <info>  [1765616355.3312] manager: (tap4977afa6-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/508)
Dec 13 03:59:15 np0005558241 nova_compute[248510]: 2025-12-13 08:59:15.333 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.334 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4977afa6-10, col_values=(('external_ids', {'iface-id': '9bf5ac40-d652-492d-a8ac-2cb23dbb3344'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:15 np0005558241 nova_compute[248510]: 2025-12-13 08:59:15.335 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:15 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:15Z|01233|binding|INFO|Releasing lport 9bf5ac40-d652-492d-a8ac-2cb23dbb3344 from this chassis (sb_readonly=0)
Dec 13 03:59:15 np0005558241 nova_compute[248510]: 2025-12-13 08:59:15.350 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.351 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4977afa6-1a1c-43aa-9d3f-7b6747f5eb37.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4977afa6-1a1c-43aa-9d3f-7b6747f5eb37.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.352 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1972fc2e-2230-4c67-8991-28372a460b74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.353 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/4977afa6-1a1c-43aa-9d3f-7b6747f5eb37.pid.haproxy
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 4977afa6-1a1c-43aa-9d3f-7b6747f5eb37
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:59:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:15.354 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37', 'env', 'PROCESS_TAG=haproxy-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4977afa6-1a1c-43aa-9d3f-7b6747f5eb37.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1196940316' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1196940316' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 03:59:15 np0005558241 nova_compute[248510]: 2025-12-13 08:59:15.399 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616355.3981373, 48db77d9-f4d5-44dd-852e-aa10f98ace90 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:59:15 np0005558241 nova_compute[248510]: 2025-12-13 08:59:15.400 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] VM Started (Lifecycle Event)#033[00m
Dec 13 03:59:15 np0005558241 nova_compute[248510]: 2025-12-13 08:59:15.670 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:59:15 np0005558241 nova_compute[248510]: 2025-12-13 08:59:15.674 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616355.3987756, 48db77d9-f4d5-44dd-852e-aa10f98ace90 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:59:15 np0005558241 nova_compute[248510]: 2025-12-13 08:59:15.674 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:59:15 np0005558241 podman[373040]: 2025-12-13 08:59:15.760380977 +0000 UTC m=+0.049036872 container create cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 03:59:15 np0005558241 systemd[1]: Started libpod-conmon-cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c.scope.
Dec 13 03:59:15 np0005558241 podman[373040]: 2025-12-13 08:59:15.734802251 +0000 UTC m=+0.023458166 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:59:15 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:59:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86652b75459e4a25df06217dc80b414d5f5e77aaf98604cf5669cd3f9ba9632a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:15 np0005558241 podman[373040]: 2025-12-13 08:59:15.858121319 +0000 UTC m=+0.146777254 container init cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 03:59:15 np0005558241 podman[373040]: 2025-12-13 08:59:15.863082521 +0000 UTC m=+0.151738416 container start cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:59:15 np0005558241 neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37[373055]: [NOTICE]   (373059) : New worker (373061) forked
Dec 13 03:59:15 np0005558241 neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37[373055]: [NOTICE]   (373059) : Loading success.
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.913564) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616355913720, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 977, "num_deletes": 250, "total_data_size": 1443813, "memory_usage": 1463928, "flush_reason": "Manual Compaction"}
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616355924156, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 900113, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57599, "largest_seqno": 58575, "table_properties": {"data_size": 896233, "index_size": 1531, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10418, "raw_average_key_size": 20, "raw_value_size": 887864, "raw_average_value_size": 1768, "num_data_blocks": 70, "num_entries": 502, "num_filter_entries": 502, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765616271, "oldest_key_time": 1765616271, "file_creation_time": 1765616355, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 10642 microseconds, and 6027 cpu microseconds.
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.924215) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 900113 bytes OK
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.924238) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.925461) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.925484) EVENT_LOG_v1 {"time_micros": 1765616355925477, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.925506) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 1439124, prev total WAL file size 1439124, number of live WAL files 2.
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.926310) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323631' seq:72057594037927935, type:22 .. '6D6772737461740032353132' seq:0, type:0; will stop at (end)
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(879KB)], [134(11MB)]
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616355926368, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 12805206, "oldest_snapshot_seqno": -1}
Dec 13 03:59:15 np0005558241 nova_compute[248510]: 2025-12-13 08:59:15.958 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:59:15 np0005558241 nova_compute[248510]: 2025-12-13 08:59:15.962 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 7894 keys, 9838205 bytes, temperature: kUnknown
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616355986746, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 9838205, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9787888, "index_size": 29468, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19781, "raw_key_size": 206407, "raw_average_key_size": 26, "raw_value_size": 9649191, "raw_average_value_size": 1222, "num_data_blocks": 1148, "num_entries": 7894, "num_filter_entries": 7894, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765616355, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.987065) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 9838205 bytes
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.988401) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.6 rd, 162.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.4 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(25.2) write-amplify(10.9) OK, records in: 8370, records dropped: 476 output_compression: NoCompression
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.988442) EVENT_LOG_v1 {"time_micros": 1765616355988426, "job": 82, "event": "compaction_finished", "compaction_time_micros": 60506, "compaction_time_cpu_micros": 23523, "output_level": 6, "num_output_files": 1, "total_output_size": 9838205, "num_input_records": 8370, "num_output_records": 7894, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616355988984, "job": 82, "event": "table_file_deletion", "file_number": 136}
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616355992898, "job": 82, "event": "table_file_deletion", "file_number": 134}
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.926253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.992987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.992994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.992997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.993000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:59:15 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-08:59:15.993003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:59:15 np0005558241 nova_compute[248510]: 2025-12-13 08:59:15.998 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:59:16 np0005558241 nova_compute[248510]: 2025-12-13 08:59:16.051 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:59:16 np0005558241 nova_compute[248510]: 2025-12-13 08:59:16.052 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:59:16 np0005558241 nova_compute[248510]: 2025-12-13 08:59:16.053 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 03:59:16 np0005558241 nova_compute[248510]: 2025-12-13 08:59:16.053 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:59:16 np0005558241 nova_compute[248510]: 2025-12-13 08:59:16.279 248514 DEBUG oslo_concurrency.lockutils [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "interface-3a1e4b45-e8c6-41c4-8890-cb0775cb5786-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:16 np0005558241 nova_compute[248510]: 2025-12-13 08:59:16.280 248514 DEBUG oslo_concurrency.lockutils [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "interface-3a1e4b45-e8c6-41c4-8890-cb0775cb5786-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:16 np0005558241 nova_compute[248510]: 2025-12-13 08:59:16.281 248514 DEBUG nova.objects.instance [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'flavor' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:59:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2945: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 03:59:17 np0005558241 nova_compute[248510]: 2025-12-13 08:59:17.714 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:17.999 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.100 248514 DEBUG nova.objects.instance [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_requests' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.129 248514 DEBUG nova.network.neutron [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.264 248514 DEBUG nova.compute.manager [req-ecdf282b-1e26-4bbc-9cef-18f2cd53c121 req-3fe27cec-1ac9-403d-a38a-a30d4206f7bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received event network-vif-plugged-558f49fb-1002-4c28-8ba2-ea32384811d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.264 248514 DEBUG oslo_concurrency.lockutils [req-ecdf282b-1e26-4bbc-9cef-18f2cd53c121 req-3fe27cec-1ac9-403d-a38a-a30d4206f7bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.264 248514 DEBUG oslo_concurrency.lockutils [req-ecdf282b-1e26-4bbc-9cef-18f2cd53c121 req-3fe27cec-1ac9-403d-a38a-a30d4206f7bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.264 248514 DEBUG oslo_concurrency.lockutils [req-ecdf282b-1e26-4bbc-9cef-18f2cd53c121 req-3fe27cec-1ac9-403d-a38a-a30d4206f7bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.265 248514 DEBUG nova.compute.manager [req-ecdf282b-1e26-4bbc-9cef-18f2cd53c121 req-3fe27cec-1ac9-403d-a38a-a30d4206f7bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Processing event network-vif-plugged-558f49fb-1002-4c28-8ba2-ea32384811d7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.265 248514 DEBUG nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.271 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616358.2708058, 48db77d9-f4d5-44dd-852e-aa10f98ace90 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.271 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.273 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.277 248514 INFO nova.virt.libvirt.driver [-] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Instance spawned successfully.#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.278 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.297 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.303 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.307 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.307 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.308 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.308 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.308 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.309 248514 DEBUG nova.virt.libvirt.driver [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.336 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:59:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2946: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.386 248514 INFO nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Took 14.07 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.387 248514 DEBUG nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.470 248514 INFO nova.compute.manager [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Took 15.19 seconds to build instance.#033[00m
Dec 13 03:59:18 np0005558241 nova_compute[248510]: 2025-12-13 08:59:18.494 248514 DEBUG oslo_concurrency.lockutils [None req-e60b9d1d-d97f-4986-99c2-869c927e6c11 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:19 np0005558241 nova_compute[248510]: 2025-12-13 08:59:19.130 248514 DEBUG nova.policy [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:59:19 np0005558241 nova_compute[248510]: 2025-12-13 08:59:19.933 248514 DEBUG nova.network.neutron [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Successfully created port: a30b0da9-1ee1-4092-a86b-5fa66fe76492 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:59:20 np0005558241 nova_compute[248510]: 2025-12-13 08:59:20.164 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:59:20 np0005558241 nova_compute[248510]: 2025-12-13 08:59:20.220 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:59:20 np0005558241 nova_compute[248510]: 2025-12-13 08:59:20.220 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 03:59:20 np0005558241 nova_compute[248510]: 2025-12-13 08:59:20.220 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:59:20 np0005558241 nova_compute[248510]: 2025-12-13 08:59:20.221 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:59:20 np0005558241 nova_compute[248510]: 2025-12-13 08:59:20.221 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:59:20 np0005558241 nova_compute[248510]: 2025-12-13 08:59:20.260 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:20 np0005558241 nova_compute[248510]: 2025-12-13 08:59:20.261 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:20 np0005558241 nova_compute[248510]: 2025-12-13 08:59:20.261 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:20 np0005558241 nova_compute[248510]: 2025-12-13 08:59:20.261 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 03:59:20 np0005558241 nova_compute[248510]: 2025-12-13 08:59:20.261 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:59:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2947: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 627 KiB/s wr, 45 op/s
Dec 13 03:59:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:59:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/562676911' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:59:20 np0005558241 nova_compute[248510]: 2025-12-13 08:59:20.874 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:59:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:59:20 np0005558241 nova_compute[248510]: 2025-12-13 08:59:20.990 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:59:20 np0005558241 nova_compute[248510]: 2025-12-13 08:59:20.991 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:59:20 np0005558241 nova_compute[248510]: 2025-12-13 08:59:20.998 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:59:20 np0005558241 nova_compute[248510]: 2025-12-13 08:59:20.999 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 03:59:21 np0005558241 nova_compute[248510]: 2025-12-13 08:59:21.211 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:59:21 np0005558241 nova_compute[248510]: 2025-12-13 08:59:21.213 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3178MB free_disk=59.92096870671958GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 03:59:21 np0005558241 nova_compute[248510]: 2025-12-13 08:59:21.213 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:21 np0005558241 nova_compute[248510]: 2025-12-13 08:59:21.214 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011220948804967251 of space, bias 1.0, pg target 0.33662846414901754 quantized to 32 (current 32)
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697035056271496 of space, bias 1.0, pg target 0.20091105168814488 quantized to 32 (current 32)
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.729684235412037e-07 of space, bias 4.0, pg target 0.0006875621082494445 quantized to 16 (current 32)
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:59:21 np0005558241 nova_compute[248510]: 2025-12-13 08:59:21.533 248514 DEBUG nova.compute.manager [req-44d1463d-3ab7-4dcf-9fb0-082691f024fd req-89bb49fd-f99c-4f2a-9be7-a52a193c0398 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received event network-vif-plugged-558f49fb-1002-4c28-8ba2-ea32384811d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:59:21 np0005558241 nova_compute[248510]: 2025-12-13 08:59:21.534 248514 DEBUG oslo_concurrency.lockutils [req-44d1463d-3ab7-4dcf-9fb0-082691f024fd req-89bb49fd-f99c-4f2a-9be7-a52a193c0398 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:21 np0005558241 nova_compute[248510]: 2025-12-13 08:59:21.534 248514 DEBUG oslo_concurrency.lockutils [req-44d1463d-3ab7-4dcf-9fb0-082691f024fd req-89bb49fd-f99c-4f2a-9be7-a52a193c0398 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:21 np0005558241 nova_compute[248510]: 2025-12-13 08:59:21.535 248514 DEBUG oslo_concurrency.lockutils [req-44d1463d-3ab7-4dcf-9fb0-082691f024fd req-89bb49fd-f99c-4f2a-9be7-a52a193c0398 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:21 np0005558241 nova_compute[248510]: 2025-12-13 08:59:21.535 248514 DEBUG nova.compute.manager [req-44d1463d-3ab7-4dcf-9fb0-082691f024fd req-89bb49fd-f99c-4f2a-9be7-a52a193c0398 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] No waiting events found dispatching network-vif-plugged-558f49fb-1002-4c28-8ba2-ea32384811d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:59:21 np0005558241 nova_compute[248510]: 2025-12-13 08:59:21.536 248514 WARNING nova.compute.manager [req-44d1463d-3ab7-4dcf-9fb0-082691f024fd req-89bb49fd-f99c-4f2a-9be7-a52a193c0398 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received unexpected event network-vif-plugged-558f49fb-1002-4c28-8ba2-ea32384811d7 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:59:21 np0005558241 nova_compute[248510]: 2025-12-13 08:59:21.538 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:59:21 np0005558241 nova_compute[248510]: 2025-12-13 08:59:21.539 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 48db77d9-f4d5-44dd-852e-aa10f98ace90 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 03:59:21 np0005558241 nova_compute[248510]: 2025-12-13 08:59:21.539 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 03:59:21 np0005558241 nova_compute[248510]: 2025-12-13 08:59:21.539 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 03:59:21 np0005558241 nova_compute[248510]: 2025-12-13 08:59:21.625 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:59:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:21.761 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:59:21 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:21.762 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 03:59:21 np0005558241 nova_compute[248510]: 2025-12-13 08:59:21.763 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:59:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1045510146' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:59:22 np0005558241 nova_compute[248510]: 2025-12-13 08:59:22.207 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:59:22 np0005558241 nova_compute[248510]: 2025-12-13 08:59:22.213 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:59:22 np0005558241 nova_compute[248510]: 2025-12-13 08:59:22.233 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:59:22 np0005558241 nova_compute[248510]: 2025-12-13 08:59:22.274 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 03:59:22 np0005558241 nova_compute[248510]: 2025-12-13 08:59:22.275 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:22 np0005558241 nova_compute[248510]: 2025-12-13 08:59:22.328 248514 DEBUG nova.network.neutron [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Successfully updated port: a30b0da9-1ee1-4092-a86b-5fa66fe76492 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:59:22 np0005558241 nova_compute[248510]: 2025-12-13 08:59:22.344 248514 DEBUG oslo_concurrency.lockutils [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:59:22 np0005558241 nova_compute[248510]: 2025-12-13 08:59:22.344 248514 DEBUG oslo_concurrency.lockutils [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:59:22 np0005558241 nova_compute[248510]: 2025-12-13 08:59:22.345 248514 DEBUG nova.network.neutron [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:59:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2948: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 275 KiB/s rd, 17 KiB/s wr, 19 op/s
Dec 13 03:59:22 np0005558241 nova_compute[248510]: 2025-12-13 08:59:22.486 248514 DEBUG nova.compute.manager [req-6ec15a8e-f18f-49a5-ab36-f96ecf030ba0 req-128a51c3-3b62-46c8-9a70-70845040b42e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-changed-a30b0da9-1ee1-4092-a86b-5fa66fe76492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:59:22 np0005558241 nova_compute[248510]: 2025-12-13 08:59:22.488 248514 DEBUG nova.compute.manager [req-6ec15a8e-f18f-49a5-ab36-f96ecf030ba0 req-128a51c3-3b62-46c8-9a70-70845040b42e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing instance network info cache due to event network-changed-a30b0da9-1ee1-4092-a86b-5fa66fe76492. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:59:22 np0005558241 nova_compute[248510]: 2025-12-13 08:59:22.489 248514 DEBUG oslo_concurrency.lockutils [req-6ec15a8e-f18f-49a5-ab36-f96ecf030ba0 req-128a51c3-3b62-46c8-9a70-70845040b42e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:59:22 np0005558241 nova_compute[248510]: 2025-12-13 08:59:22.717 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:23 np0005558241 nova_compute[248510]: 2025-12-13 08:59:23.001 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2949: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 74 op/s
Dec 13 03:59:24 np0005558241 nova_compute[248510]: 2025-12-13 08:59:24.825 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:59:24 np0005558241 nova_compute[248510]: 2025-12-13 08:59:24.827 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:59:24 np0005558241 nova_compute[248510]: 2025-12-13 08:59:24.827 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.161 248514 DEBUG nova.compute.manager [req-1b484377-be0e-4c45-b670-3b81b77ae82d req-996388b6-d271-49c4-bebc-e56d63eca793 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received event network-changed-558f49fb-1002-4c28-8ba2-ea32384811d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.162 248514 DEBUG nova.compute.manager [req-1b484377-be0e-4c45-b670-3b81b77ae82d req-996388b6-d271-49c4-bebc-e56d63eca793 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Refreshing instance network info cache due to event network-changed-558f49fb-1002-4c28-8ba2-ea32384811d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.162 248514 DEBUG oslo_concurrency.lockutils [req-1b484377-be0e-4c45-b670-3b81b77ae82d req-996388b6-d271-49c4-bebc-e56d63eca793 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.163 248514 DEBUG oslo_concurrency.lockutils [req-1b484377-be0e-4c45-b670-3b81b77ae82d req-996388b6-d271-49c4-bebc-e56d63eca793 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.163 248514 DEBUG nova.network.neutron [req-1b484377-be0e-4c45-b670-3b81b77ae82d req-996388b6-d271-49c4-bebc-e56d63eca793 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Refreshing network info cache for port 558f49fb-1002-4c28-8ba2-ea32384811d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.397 248514 DEBUG nova.network.neutron [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.424 248514 DEBUG oslo_concurrency.lockutils [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.426 248514 DEBUG oslo_concurrency.lockutils [req-6ec15a8e-f18f-49a5-ab36-f96ecf030ba0 req-128a51c3-3b62-46c8-9a70-70845040b42e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.427 248514 DEBUG nova.network.neutron [req-6ec15a8e-f18f-49a5-ab36-f96ecf030ba0 req-128a51c3-3b62-46c8-9a70-70845040b42e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing network info cache for port a30b0da9-1ee1-4092-a86b-5fa66fe76492 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.431 248514 DEBUG nova.virt.libvirt.vif [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1459774770',display_name='tempest-TestNetworkBasicOps-server-1459774770',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1459774770',id=124,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzdfnmkPECxEd+pmNHaznMbbUZ3fogbtIZFn1m4F0NrMdOCSnDuSrNqH1Qsxb1Ch2xV1rWjSGPlN7pyb3Hosb/a0AqdV4TvL3CfEX0F6WeXGg966JbzLuasErXM1xyHqw==',key_name='tempest-TestNetworkBasicOps-1902119774',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:58:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mq52krma',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:58:49Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=3a1e4b45-e8c6-41c4-8890-cb0775cb5786,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.431 248514 DEBUG nova.network.os_vif_util [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.432 248514 DEBUG nova.network.os_vif_util [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.433 248514 DEBUG os_vif [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.434 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.435 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.435 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.439 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.439 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa30b0da9-1e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.440 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa30b0da9-1e, col_values=(('external_ids', {'iface-id': 'a30b0da9-1ee1-4092-a86b-5fa66fe76492', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:10:e4', 'vm-uuid': '3a1e4b45-e8c6-41c4-8890-cb0775cb5786'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:25 np0005558241 NetworkManager[50376]: <info>  [1765616365.4446] manager: (tapa30b0da9-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/509)
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.444 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.447 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.450 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.451 248514 INFO os_vif [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e')#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.453 248514 DEBUG nova.virt.libvirt.vif [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1459774770',display_name='tempest-TestNetworkBasicOps-server-1459774770',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1459774770',id=124,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzdfnmkPECxEd+pmNHaznMbbUZ3fogbtIZFn1m4F0NrMdOCSnDuSrNqH1Qsxb1Ch2xV1rWjSGPlN7pyb3Hosb/a0AqdV4TvL3CfEX0F6WeXGg966JbzLuasErXM1xyHqw==',key_name='tempest-TestNetworkBasicOps-1902119774',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:58:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mq52krma',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:58:49Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=3a1e4b45-e8c6-41c4-8890-cb0775cb5786,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.453 248514 DEBUG nova.network.os_vif_util [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.454 248514 DEBUG nova.network.os_vif_util [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.457 248514 DEBUG nova.virt.libvirt.guest [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] attach device xml: <interface type="ethernet">
Dec 13 03:59:25 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:65:10:e4"/>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:  <target dev="tapa30b0da9-1e"/>
Dec 13 03:59:25 np0005558241 nova_compute[248510]: </interface>
Dec 13 03:59:25 np0005558241 nova_compute[248510]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Dec 13 03:59:25 np0005558241 kernel: tapa30b0da9-1e: entered promiscuous mode
Dec 13 03:59:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:25Z|01234|binding|INFO|Claiming lport a30b0da9-1ee1-4092-a86b-5fa66fe76492 for this chassis.
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.471 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:25Z|01235|binding|INFO|a30b0da9-1ee1-4092-a86b-5fa66fe76492: Claiming fa:16:3e:65:10:e4 10.100.0.18
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.480 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:10:e4 10.100.0.18'], port_security=['fa:16:3e:65:10:e4 10.100.0.18'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': '3a1e4b45-e8c6-41c4-8890-cb0775cb5786', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16fae4da-5722-4e42-b101-44d9ef244421', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '213b92ea-ddd9-4749-8abe-382a90d446f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9deccf5-6c40-469a-b45c-630adf312a35, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=a30b0da9-1ee1-4092-a86b-5fa66fe76492) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.484 158419 INFO neutron.agent.ovn.metadata.agent [-] Port a30b0da9-1ee1-4092-a86b-5fa66fe76492 in datapath 16fae4da-5722-4e42-b101-44d9ef244421 bound to our chassis#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.486 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16fae4da-5722-4e42-b101-44d9ef244421#033[00m
Dec 13 03:59:25 np0005558241 NetworkManager[50376]: <info>  [1765616365.4892] manager: (tapa30b0da9-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/510)
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.499 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc4762a-468f-4e82-a1fa-d205f4e79c6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.500 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap16fae4da-51 in ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.502 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap16fae4da-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.502 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[edebb7af-5364-449c-ad27-d99024fd1039]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.503 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ce9431-cc00-4c2a-8b6a-587f37d8edf5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.516 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[757a04fa-80e2-420f-be79-ab8829ed7d88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.517 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:25Z|01236|binding|INFO|Setting lport a30b0da9-1ee1-4092-a86b-5fa66fe76492 ovn-installed in OVS
Dec 13 03:59:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:25Z|01237|binding|INFO|Setting lport a30b0da9-1ee1-4092-a86b-5fa66fe76492 up in Southbound
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.523 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.533 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a7a5abe9-ddb3-45d0-ab12-554cba14959e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:25 np0005558241 systemd-udevd[373126]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:59:25 np0005558241 NetworkManager[50376]: <info>  [1765616365.5570] device (tapa30b0da9-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:59:25 np0005558241 NetworkManager[50376]: <info>  [1765616365.5578] device (tapa30b0da9-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.566 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0d19c9c1-92ff-4dfd-9a8c-ba51c6ffbe93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:25 np0005558241 NetworkManager[50376]: <info>  [1765616365.5730] manager: (tap16fae4da-50): new Veth device (/org/freedesktop/NetworkManager/Devices/511)
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.572 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b6232eba-a4f2-4082-8c16-40f329fcef92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.604 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[05fdf637-171e-4b2a-ad7e-ded9ec28f7b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.608 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b685db-09c7-46fb-a026-fe9f40505cc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.624 248514 DEBUG nova.virt.libvirt.driver [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.624 248514 DEBUG nova.virt.libvirt.driver [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.624 248514 DEBUG nova.virt.libvirt.driver [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:a8:9a:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.625 248514 DEBUG nova.virt.libvirt.driver [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:65:10:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:59:25 np0005558241 NetworkManager[50376]: <info>  [1765616365.6336] device (tap16fae4da-50): carrier: link connected
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.641 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[780fcd4b-b856-4c3f-a310-9389455af041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.652 248514 DEBUG nova.virt.libvirt.guest [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:59:25 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:  <nova:name>tempest-TestNetworkBasicOps-server-1459774770</nova:name>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:59:25</nova:creationTime>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 03:59:25 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:    <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:    <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:    <nova:port uuid="3abb490c-6aad-47d4-8200-febd480ac7db">
Dec 13 03:59:25 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:    <nova:port uuid="a30b0da9-1ee1-4092-a86b-5fa66fe76492">
Dec 13 03:59:25 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 03:59:25 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 03:59:25 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 03:59:25 np0005558241 nova_compute[248510]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.659 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7746aaf6-cf85-4356-9403-782d559ad541]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16fae4da-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:9f:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 894284, 'reachable_time': 25312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373149, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.674 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[681f9295-c0a4-406f-877d-66df489f8b5b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb3:9f41'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 894284, 'tstamp': 894284}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373150, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.692 248514 DEBUG oslo_concurrency.lockutils [None req-2512d487-c089-4a8f-b3c1-3db01f92ce36 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "interface-3a1e4b45-e8c6-41c4-8890-cb0775cb5786-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.698 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1b772db0-50a6-4687-9e58-918f6f288ca1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16fae4da-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:9f:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 894284, 'reachable_time': 25312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373151, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.728 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c316c8-747d-4ce4-b86f-191b0bd71939]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.800 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4bdf4b70-471a-4acc-873b-43235225d331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.801 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16fae4da-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.802 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.802 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16fae4da-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.804 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:25 np0005558241 NetworkManager[50376]: <info>  [1765616365.8049] manager: (tap16fae4da-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/512)
Dec 13 03:59:25 np0005558241 kernel: tap16fae4da-50: entered promiscuous mode
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.807 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.807 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16fae4da-50, col_values=(('external_ids', {'iface-id': '7a0536ce-73f9-4aa4-8989-da712c91214d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.808 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:25 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:25Z|01238|binding|INFO|Releasing lport 7a0536ce-73f9-4aa4-8989-da712c91214d from this chassis (sb_readonly=0)
Dec 13 03:59:25 np0005558241 nova_compute[248510]: 2025-12-13 08:59:25.822 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.824 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/16fae4da-5722-4e42-b101-44d9ef244421.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/16fae4da-5722-4e42-b101-44d9ef244421.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.825 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[56adeef9-1663-42be-84a3-b0d50870b940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.825 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-16fae4da-5722-4e42-b101-44d9ef244421
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/16fae4da-5722-4e42-b101-44d9ef244421.pid.haproxy
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 16fae4da-5722-4e42-b101-44d9ef244421
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 03:59:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:25.827 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'env', 'PROCESS_TAG=haproxy-16fae4da-5722-4e42-b101-44d9ef244421', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/16fae4da-5722-4e42-b101-44d9ef244421.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 03:59:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:59:26 np0005558241 podman[373182]: 2025-12-13 08:59:26.229771471 +0000 UTC m=+0.055782926 container create 221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:59:26 np0005558241 systemd[1]: Started libpod-conmon-221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040.scope.
Dec 13 03:59:26 np0005558241 podman[373182]: 2025-12-13 08:59:26.203752105 +0000 UTC m=+0.029763610 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 03:59:26 np0005558241 systemd[1]: Started libcrun container.
Dec 13 03:59:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b75e401be1e6964868d0a53502eed4a247827032e021c62c59365aba03a75fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:26 np0005558241 podman[373182]: 2025-12-13 08:59:26.326918559 +0000 UTC m=+0.152930094 container init 221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 13 03:59:26 np0005558241 podman[373182]: 2025-12-13 08:59:26.337735184 +0000 UTC m=+0.163746669 container start 221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 03:59:26 np0005558241 neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421[373198]: [NOTICE]   (373202) : New worker (373204) forked
Dec 13 03:59:26 np0005558241 neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421[373198]: [NOTICE]   (373202) : Loading success.
Dec 13 03:59:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2950: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 13 03:59:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:26.765 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:26 np0005558241 nova_compute[248510]: 2025-12-13 08:59:26.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 03:59:27 np0005558241 nova_compute[248510]: 2025-12-13 08:59:27.288 248514 DEBUG nova.network.neutron [req-6ec15a8e-f18f-49a5-ab36-f96ecf030ba0 req-128a51c3-3b62-46c8-9a70-70845040b42e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updated VIF entry in instance network info cache for port a30b0da9-1ee1-4092-a86b-5fa66fe76492. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:59:27 np0005558241 nova_compute[248510]: 2025-12-13 08:59:27.289 248514 DEBUG nova.network.neutron [req-6ec15a8e-f18f-49a5-ab36-f96ecf030ba0 req-128a51c3-3b62-46c8-9a70-70845040b42e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:59:27 np0005558241 nova_compute[248510]: 2025-12-13 08:59:27.320 248514 DEBUG oslo_concurrency.lockutils [req-6ec15a8e-f18f-49a5-ab36-f96ecf030ba0 req-128a51c3-3b62-46c8-9a70-70845040b42e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:59:27 np0005558241 nova_compute[248510]: 2025-12-13 08:59:27.331 248514 DEBUG nova.compute.manager [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-vif-plugged-a30b0da9-1ee1-4092-a86b-5fa66fe76492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:59:27 np0005558241 nova_compute[248510]: 2025-12-13 08:59:27.331 248514 DEBUG oslo_concurrency.lockutils [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:27 np0005558241 nova_compute[248510]: 2025-12-13 08:59:27.331 248514 DEBUG oslo_concurrency.lockutils [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:27 np0005558241 nova_compute[248510]: 2025-12-13 08:59:27.332 248514 DEBUG oslo_concurrency.lockutils [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:27 np0005558241 nova_compute[248510]: 2025-12-13 08:59:27.332 248514 DEBUG nova.compute.manager [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] No waiting events found dispatching network-vif-plugged-a30b0da9-1ee1-4092-a86b-5fa66fe76492 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:59:27 np0005558241 nova_compute[248510]: 2025-12-13 08:59:27.332 248514 WARNING nova.compute.manager [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received unexpected event network-vif-plugged-a30b0da9-1ee1-4092-a86b-5fa66fe76492 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:59:27 np0005558241 nova_compute[248510]: 2025-12-13 08:59:27.332 248514 DEBUG nova.compute.manager [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-vif-plugged-a30b0da9-1ee1-4092-a86b-5fa66fe76492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:59:27 np0005558241 nova_compute[248510]: 2025-12-13 08:59:27.333 248514 DEBUG oslo_concurrency.lockutils [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:27 np0005558241 nova_compute[248510]: 2025-12-13 08:59:27.333 248514 DEBUG oslo_concurrency.lockutils [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:27 np0005558241 nova_compute[248510]: 2025-12-13 08:59:27.333 248514 DEBUG oslo_concurrency.lockutils [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:27 np0005558241 nova_compute[248510]: 2025-12-13 08:59:27.333 248514 DEBUG nova.compute.manager [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] No waiting events found dispatching network-vif-plugged-a30b0da9-1ee1-4092-a86b-5fa66fe76492 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:59:27 np0005558241 nova_compute[248510]: 2025-12-13 08:59:27.334 248514 WARNING nova.compute.manager [req-36892ddf-41fa-4aa6-90e5-323bcbe82625 req-79d25839-39b3-4be5-9659-c380f8e6ebab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received unexpected event network-vif-plugged-a30b0da9-1ee1-4092-a86b-5fa66fe76492 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:59:27 np0005558241 nova_compute[248510]: 2025-12-13 08:59:27.457 248514 DEBUG nova.network.neutron [req-1b484377-be0e-4c45-b670-3b81b77ae82d req-996388b6-d271-49c4-bebc-e56d63eca793 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Updated VIF entry in instance network info cache for port 558f49fb-1002-4c28-8ba2-ea32384811d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:59:27 np0005558241 nova_compute[248510]: 2025-12-13 08:59:27.458 248514 DEBUG nova.network.neutron [req-1b484377-be0e-4c45-b670-3b81b77ae82d req-996388b6-d271-49c4-bebc-e56d63eca793 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Updating instance_info_cache with network_info: [{"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:59:27 np0005558241 nova_compute[248510]: 2025-12-13 08:59:27.480 248514 DEBUG oslo_concurrency.lockutils [req-1b484377-be0e-4c45-b670-3b81b77ae82d req-996388b6-d271-49c4-bebc-e56d63eca793 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:59:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:27Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:10:e4 10.100.0.18
Dec 13 03:59:27 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:27Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:10:e4 10.100.0.18
Dec 13 03:59:27 np0005558241 nova_compute[248510]: 2025-12-13 08:59:27.719 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2951: 321 pgs: 321 active+clean; 167 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 13 03:59:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2952: 321 pgs: 321 active+clean; 172 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 381 KiB/s wr, 83 op/s
Dec 13 03:59:30 np0005558241 nova_compute[248510]: 2025-12-13 08:59:30.443 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:30 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:30Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ba:ad:5d 10.100.0.5
Dec 13 03:59:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:59:30 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:30Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:ad:5d 10.100.0.5
Dec 13 03:59:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2953: 321 pgs: 321 active+clean; 172 MiB data, 904 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 381 KiB/s wr, 64 op/s
Dec 13 03:59:32 np0005558241 nova_compute[248510]: 2025-12-13 08:59:32.722 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:33 np0005558241 nova_compute[248510]: 2025-12-13 08:59:33.323 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:33 np0005558241 nova_compute[248510]: 2025-12-13 08:59:33.323 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:33 np0005558241 nova_compute[248510]: 2025-12-13 08:59:33.350 248514 DEBUG nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 03:59:33 np0005558241 nova_compute[248510]: 2025-12-13 08:59:33.448 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:33 np0005558241 nova_compute[248510]: 2025-12-13 08:59:33.448 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:33 np0005558241 nova_compute[248510]: 2025-12-13 08:59:33.461 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 03:59:33 np0005558241 nova_compute[248510]: 2025-12-13 08:59:33.461 248514 INFO nova.compute.claims [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 03:59:33 np0005558241 nova_compute[248510]: 2025-12-13 08:59:33.616 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:59:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:59:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2178757918' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.213 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.219 248514 DEBUG nova.compute.provider_tree [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.237 248514 DEBUG nova.scheduler.client.report [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.259 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.260 248514 DEBUG nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.319 248514 DEBUG nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.319 248514 DEBUG nova.network.neutron [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.339 248514 INFO nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.369 248514 DEBUG nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 03:59:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2954: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.529 248514 DEBUG nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.530 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.530 248514 INFO nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Creating image(s)#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.556 248514 DEBUG nova.storage.rbd_utils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.583 248514 DEBUG nova.storage.rbd_utils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.608 248514 DEBUG nova.storage.rbd_utils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.613 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.704 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.705 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.705 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.706 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.727 248514 DEBUG nova.storage.rbd_utils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:59:34 np0005558241 nova_compute[248510]: 2025-12-13 08:59:34.731 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:59:34 np0005558241 podman[373332]: 2025-12-13 08:59:34.984822162 +0000 UTC m=+0.069143974 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:59:34 np0005558241 podman[373331]: 2025-12-13 08:59:34.998128568 +0000 UTC m=+0.085450903 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2)
Dec 13 03:59:35 np0005558241 podman[373330]: 2025-12-13 08:59:35.023728584 +0000 UTC m=+0.113234063 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:59:35 np0005558241 nova_compute[248510]: 2025-12-13 08:59:35.081 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:59:35 np0005558241 nova_compute[248510]: 2025-12-13 08:59:35.116 248514 DEBUG nova.policy [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 03:59:35 np0005558241 nova_compute[248510]: 2025-12-13 08:59:35.156 248514 DEBUG nova.storage.rbd_utils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 03:59:35 np0005558241 nova_compute[248510]: 2025-12-13 08:59:35.234 248514 DEBUG nova.objects.instance [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid decfc3d0-e424-4304-8b3c-51daa9bd0fb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:59:35 np0005558241 nova_compute[248510]: 2025-12-13 08:59:35.258 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 03:59:35 np0005558241 nova_compute[248510]: 2025-12-13 08:59:35.258 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Ensure instance console log exists: /var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 03:59:35 np0005558241 nova_compute[248510]: 2025-12-13 08:59:35.258 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:35 np0005558241 nova_compute[248510]: 2025-12-13 08:59:35.259 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:35 np0005558241 nova_compute[248510]: 2025-12-13 08:59:35.259 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:35 np0005558241 nova_compute[248510]: 2025-12-13 08:59:35.445 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:59:36 np0005558241 nova_compute[248510]: 2025-12-13 08:59:36.117 248514 INFO nova.compute.manager [None req-e800da01-0f44-4459-8172-b36ee7c61edd a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Get console output#033[00m
Dec 13 03:59:36 np0005558241 nova_compute[248510]: 2025-12-13 08:59:36.123 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 03:59:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2955: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 03:59:36 np0005558241 nova_compute[248510]: 2025-12-13 08:59:36.454 248514 INFO nova.compute.manager [None req-61a90f97-9a65-4f8f-8cc4-c8dab41913cd a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Pausing#033[00m
Dec 13 03:59:36 np0005558241 nova_compute[248510]: 2025-12-13 08:59:36.455 248514 DEBUG nova.objects.instance [None req-61a90f97-9a65-4f8f-8cc4-c8dab41913cd a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'flavor' on Instance uuid 48db77d9-f4d5-44dd-852e-aa10f98ace90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:59:36 np0005558241 nova_compute[248510]: 2025-12-13 08:59:36.505 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616376.5042372, 48db77d9-f4d5-44dd-852e-aa10f98ace90 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:59:36 np0005558241 nova_compute[248510]: 2025-12-13 08:59:36.506 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:59:36 np0005558241 nova_compute[248510]: 2025-12-13 08:59:36.510 248514 DEBUG nova.compute.manager [None req-61a90f97-9a65-4f8f-8cc4-c8dab41913cd a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:59:36 np0005558241 nova_compute[248510]: 2025-12-13 08:59:36.539 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:59:36 np0005558241 nova_compute[248510]: 2025-12-13 08:59:36.543 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:59:37 np0005558241 nova_compute[248510]: 2025-12-13 08:59:37.437 248514 DEBUG nova.network.neutron [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Successfully created port: d03635fc-13b4-44c2-baca-088d0efb07d9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 03:59:37 np0005558241 nova_compute[248510]: 2025-12-13 08:59:37.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:38 np0005558241 nova_compute[248510]: 2025-12-13 08:59:38.138 248514 DEBUG nova.network.neutron [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Successfully updated port: d03635fc-13b4-44c2-baca-088d0efb07d9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 03:59:38 np0005558241 nova_compute[248510]: 2025-12-13 08:59:38.167 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-decfc3d0-e424-4304-8b3c-51daa9bd0fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:59:38 np0005558241 nova_compute[248510]: 2025-12-13 08:59:38.167 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-decfc3d0-e424-4304-8b3c-51daa9bd0fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:59:38 np0005558241 nova_compute[248510]: 2025-12-13 08:59:38.167 248514 DEBUG nova.network.neutron [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 03:59:38 np0005558241 nova_compute[248510]: 2025-12-13 08:59:38.279 248514 DEBUG nova.compute.manager [req-72bb4fac-f3fb-4e4d-8413-a72738fb7bfa req-ed1011e6-3d00-4cbf-ad72-1ad1b8c75afc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Received event network-changed-d03635fc-13b4-44c2-baca-088d0efb07d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:59:38 np0005558241 nova_compute[248510]: 2025-12-13 08:59:38.280 248514 DEBUG nova.compute.manager [req-72bb4fac-f3fb-4e4d-8413-a72738fb7bfa req-ed1011e6-3d00-4cbf-ad72-1ad1b8c75afc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Refreshing instance network info cache due to event network-changed-d03635fc-13b4-44c2-baca-088d0efb07d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:59:38 np0005558241 nova_compute[248510]: 2025-12-13 08:59:38.280 248514 DEBUG oslo_concurrency.lockutils [req-72bb4fac-f3fb-4e4d-8413-a72738fb7bfa req-ed1011e6-3d00-4cbf-ad72-1ad1b8c75afc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-decfc3d0-e424-4304-8b3c-51daa9bd0fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:59:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2956: 321 pgs: 321 active+clean; 229 MiB data, 941 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 3.1 MiB/s wr, 79 op/s
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.074 248514 DEBUG nova.network.neutron [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.906 248514 DEBUG nova.network.neutron [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Updating instance_info_cache with network_info: [{"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.931 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-decfc3d0-e424-4304-8b3c-51daa9bd0fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.931 248514 DEBUG nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Instance network_info: |[{"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.932 248514 DEBUG oslo_concurrency.lockutils [req-72bb4fac-f3fb-4e4d-8413-a72738fb7bfa req-ed1011e6-3d00-4cbf-ad72-1ad1b8c75afc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-decfc3d0-e424-4304-8b3c-51daa9bd0fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.932 248514 DEBUG nova.network.neutron [req-72bb4fac-f3fb-4e4d-8413-a72738fb7bfa req-ed1011e6-3d00-4cbf-ad72-1ad1b8c75afc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Refreshing network info cache for port d03635fc-13b4-44c2-baca-088d0efb07d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.934 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Start _get_guest_xml network_info=[{"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.941 248514 WARNING nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.947 248514 DEBUG nova.virt.libvirt.host [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.947 248514 DEBUG nova.virt.libvirt.host [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.955 248514 DEBUG nova.virt.libvirt.host [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.956 248514 DEBUG nova.virt.libvirt.host [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.956 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.957 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.957 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.957 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.958 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.958 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.958 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.958 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.958 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.958 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.959 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.959 248514 DEBUG nova.virt.hardware [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 03:59:39 np0005558241 nova_compute[248510]: 2025-12-13 08:59:39.961 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:59:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:59:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:59:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:59:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:59:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:59:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:59:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2957: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 366 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Dec 13 03:59:40 np0005558241 nova_compute[248510]: 2025-12-13 08:59:40.449 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:59:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4178981125' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:59:40 np0005558241 nova_compute[248510]: 2025-12-13 08:59:40.584 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:59:40 np0005558241 nova_compute[248510]: 2025-12-13 08:59:40.606 248514 DEBUG nova.storage.rbd_utils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:59:40 np0005558241 nova_compute[248510]: 2025-12-13 08:59:40.609 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:59:40 np0005558241 nova_compute[248510]: 2025-12-13 08:59:40.671 248514 INFO nova.compute.manager [None req-d912cc48-9eed-4f45-8d32-9b0a43897626 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Get console output#033[00m
Dec 13 03:59:40 np0005558241 nova_compute[248510]: 2025-12-13 08:59:40.683 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 03:59:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:59:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:59:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1530065640' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.161 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.162 248514 DEBUG nova.virt.libvirt.vif [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:59:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-828624011',display_name='tempest-TestNetworkBasicOps-server-828624011',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-828624011',id=126,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbfdn0vI5uiRW56FmHbdPgrEcVctEQCQ0JrumFh8Rkx0TdD9XCLn6dyTpTncQ+yr075mpR5CmHJQEBcBXMTv9XR0fwQ9NFmvKHwa7Jo4OFodBgAjBRwq8dmNkZu2ZP1tQ==',key_name='tempest-TestNetworkBasicOps-1642652749',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-19q60cz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:59:34Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=decfc3d0-e424-4304-8b3c-51daa9bd0fb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.163 248514 DEBUG nova.network.os_vif_util [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.164 248514 DEBUG nova.network.os_vif_util [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:f2:de,bridge_name='br-int',has_traffic_filtering=True,id=d03635fc-13b4-44c2-baca-088d0efb07d9,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd03635fc-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.165 248514 DEBUG nova.objects.instance [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid decfc3d0-e424-4304-8b3c-51daa9bd0fb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.182 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] End _get_guest_xml xml=<domain type="kvm">
Dec 13 03:59:41 np0005558241 nova_compute[248510]:  <uuid>decfc3d0-e424-4304-8b3c-51daa9bd0fb6</uuid>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:  <name>instance-0000007e</name>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkBasicOps-server-828624011</nova:name>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 08:59:39</nova:creationTime>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 03:59:41 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:        <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:        <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:        <nova:port uuid="d03635fc-13b4-44c2-baca-088d0efb07d9">
Dec 13 03:59:41 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <system>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <entry name="serial">decfc3d0-e424-4304-8b3c-51daa9bd0fb6</entry>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <entry name="uuid">decfc3d0-e424-4304-8b3c-51daa9bd0fb6</entry>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    </system>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:  <os>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:  </os>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:  <features>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:  </features>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:  </clock>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:  <devices>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk">
Dec 13 03:59:41 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:59:41 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk.config">
Dec 13 03:59:41 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      </source>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 03:59:41 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      </auth>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    </disk>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:c0:f2:de"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <target dev="tapd03635fc-13"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    </interface>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6/console.log" append="off"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    </serial>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <video>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    </video>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    </rng>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 03:59:41 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 03:59:41 np0005558241 nova_compute[248510]:  </devices>
Dec 13 03:59:41 np0005558241 nova_compute[248510]: </domain>
Dec 13 03:59:41 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.184 248514 DEBUG nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Preparing to wait for external event network-vif-plugged-d03635fc-13b4-44c2-baca-088d0efb07d9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.185 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.185 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.185 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.186 248514 DEBUG nova.virt.libvirt.vif [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T08:59:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-828624011',display_name='tempest-TestNetworkBasicOps-server-828624011',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-828624011',id=126,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbfdn0vI5uiRW56FmHbdPgrEcVctEQCQ0JrumFh8Rkx0TdD9XCLn6dyTpTncQ+yr075mpR5CmHJQEBcBXMTv9XR0fwQ9NFmvKHwa7Jo4OFodBgAjBRwq8dmNkZu2ZP1tQ==',key_name='tempest-TestNetworkBasicOps-1642652749',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-19q60cz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T08:59:34Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=decfc3d0-e424-4304-8b3c-51daa9bd0fb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.186 248514 DEBUG nova.network.os_vif_util [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.187 248514 DEBUG nova.network.os_vif_util [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:f2:de,bridge_name='br-int',has_traffic_filtering=True,id=d03635fc-13b4-44c2-baca-088d0efb07d9,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd03635fc-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.187 248514 DEBUG os_vif [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:f2:de,bridge_name='br-int',has_traffic_filtering=True,id=d03635fc-13b4-44c2-baca-088d0efb07d9,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd03635fc-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.187 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.188 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.188 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.191 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.191 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd03635fc-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.191 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd03635fc-13, col_values=(('external_ids', {'iface-id': 'd03635fc-13b4-44c2-baca-088d0efb07d9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:f2:de', 'vm-uuid': 'decfc3d0-e424-4304-8b3c-51daa9bd0fb6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:41 np0005558241 NetworkManager[50376]: <info>  [1765616381.1939] manager: (tapd03635fc-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/513)
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.201 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.203 248514 INFO os_vif [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:f2:de,bridge_name='br-int',has_traffic_filtering=True,id=d03635fc-13b4-44c2-baca-088d0efb07d9,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd03635fc-13')#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.224 248514 INFO nova.compute.manager [None req-133157d0-81db-44a2-bcae-84bf4be13972 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Unpausing#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.226 248514 DEBUG nova.objects.instance [None req-133157d0-81db-44a2-bcae-84bf4be13972 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'flavor' on Instance uuid 48db77d9-f4d5-44dd-852e-aa10f98ace90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.263 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616381.2628398, 48db77d9-f4d5-44dd-852e-aa10f98ace90 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.263 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:59:41 np0005558241 virtqemud[248808]: argument unsupported: QEMU guest agent is not configured
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.270 248514 DEBUG nova.virt.libvirt.guest [None req-133157d0-81db-44a2-bcae-84bf4be13972 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.271 248514 DEBUG nova.compute.manager [None req-133157d0-81db-44a2-bcae-84bf4be13972 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.275 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.275 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.276 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:c0:f2:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.276 248514 INFO nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Using config drive#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.303 248514 DEBUG nova.storage.rbd_utils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.316 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.323 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:59:41 np0005558241 nova_compute[248510]: 2025-12-13 08:59:41.360 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Dec 13 03:59:42 np0005558241 nova_compute[248510]: 2025-12-13 08:59:42.272 248514 INFO nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Creating config drive at /var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6/disk.config#033[00m
Dec 13 03:59:42 np0005558241 nova_compute[248510]: 2025-12-13 08:59:42.277 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46uu5iw9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:59:42 np0005558241 nova_compute[248510]: 2025-12-13 08:59:42.380 248514 DEBUG nova.network.neutron [req-72bb4fac-f3fb-4e4d-8413-a72738fb7bfa req-ed1011e6-3d00-4cbf-ad72-1ad1b8c75afc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Updated VIF entry in instance network info cache for port d03635fc-13b4-44c2-baca-088d0efb07d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:59:42 np0005558241 nova_compute[248510]: 2025-12-13 08:59:42.381 248514 DEBUG nova.network.neutron [req-72bb4fac-f3fb-4e4d-8413-a72738fb7bfa req-ed1011e6-3d00-4cbf-ad72-1ad1b8c75afc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Updating instance_info_cache with network_info: [{"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:59:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2958: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.6 MiB/s wr, 82 op/s
Dec 13 03:59:42 np0005558241 nova_compute[248510]: 2025-12-13 08:59:42.401 248514 DEBUG oslo_concurrency.lockutils [req-72bb4fac-f3fb-4e4d-8413-a72738fb7bfa req-ed1011e6-3d00-4cbf-ad72-1ad1b8c75afc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-decfc3d0-e424-4304-8b3c-51daa9bd0fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:59:42 np0005558241 nova_compute[248510]: 2025-12-13 08:59:42.450 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp46uu5iw9" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:59:42 np0005558241 nova_compute[248510]: 2025-12-13 08:59:42.482 248514 DEBUG nova.storage.rbd_utils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 03:59:42 np0005558241 nova_compute[248510]: 2025-12-13 08:59:42.486 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6/disk.config decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:59:42 np0005558241 nova_compute[248510]: 2025-12-13 08:59:42.640 248514 DEBUG oslo_concurrency.processutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6/disk.config decfc3d0-e424-4304-8b3c-51daa9bd0fb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:59:42 np0005558241 nova_compute[248510]: 2025-12-13 08:59:42.641 248514 INFO nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Deleting local config drive /var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6/disk.config because it was imported into RBD.#033[00m
Dec 13 03:59:42 np0005558241 kernel: tapd03635fc-13: entered promiscuous mode
Dec 13 03:59:42 np0005558241 NetworkManager[50376]: <info>  [1765616382.7074] manager: (tapd03635fc-13): new Tun device (/org/freedesktop/NetworkManager/Devices/514)
Dec 13 03:59:42 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:42Z|01239|binding|INFO|Claiming lport d03635fc-13b4-44c2-baca-088d0efb07d9 for this chassis.
Dec 13 03:59:42 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:42Z|01240|binding|INFO|d03635fc-13b4-44c2-baca-088d0efb07d9: Claiming fa:16:3e:c0:f2:de 10.100.0.30
Dec 13 03:59:42 np0005558241 nova_compute[248510]: 2025-12-13 08:59:42.709 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.717 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:f2:de 10.100.0.30'], port_security=['fa:16:3e:c0:f2:de 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': 'decfc3d0-e424-4304-8b3c-51daa9bd0fb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16fae4da-5722-4e42-b101-44d9ef244421', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'af860b42-00a6-4e57-908f-190713e2b805', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9deccf5-6c40-469a-b45c-630adf312a35, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d03635fc-13b4-44c2-baca-088d0efb07d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:59:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.719 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d03635fc-13b4-44c2-baca-088d0efb07d9 in datapath 16fae4da-5722-4e42-b101-44d9ef244421 bound to our chassis#033[00m
Dec 13 03:59:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.720 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16fae4da-5722-4e42-b101-44d9ef244421#033[00m
Dec 13 03:59:42 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:42Z|01241|binding|INFO|Setting lport d03635fc-13b4-44c2-baca-088d0efb07d9 ovn-installed in OVS
Dec 13 03:59:42 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:42Z|01242|binding|INFO|Setting lport d03635fc-13b4-44c2-baca-088d0efb07d9 up in Southbound
Dec 13 03:59:42 np0005558241 nova_compute[248510]: 2025-12-13 08:59:42.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:42 np0005558241 nova_compute[248510]: 2025-12-13 08:59:42.730 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.745 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e6db272d-f85b-4490-80c8-b08e487f2b2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:42 np0005558241 systemd-udevd[373600]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:59:42 np0005558241 systemd-machined[210538]: New machine qemu-153-instance-0000007e.
Dec 13 03:59:42 np0005558241 systemd[1]: Started Virtual Machine qemu-153-instance-0000007e.
Dec 13 03:59:42 np0005558241 NetworkManager[50376]: <info>  [1765616382.7689] device (tapd03635fc-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:59:42 np0005558241 NetworkManager[50376]: <info>  [1765616382.7697] device (tapd03635fc-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 03:59:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.788 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[01c6b78e-4ae0-4642-882f-dba9dfc78a56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.793 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4ced4a09-2918-4555-a7fa-5c3025bedf34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.832 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1c94a9ba-d0ff-49f7-8b9d-2d9dd0816d10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.856 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cbaed246-bd99-4bed-8d14-c1736a54468d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16fae4da-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:9f:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 894284, 'reachable_time': 25312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373612, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.883 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[42a0b589-da14-4581-b060-aa16be6a887f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap16fae4da-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 894297, 'tstamp': 894297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373614, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap16fae4da-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 894300, 'tstamp': 894300}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373614, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.884 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16fae4da-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:42 np0005558241 nova_compute[248510]: 2025-12-13 08:59:42.886 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:42 np0005558241 nova_compute[248510]: 2025-12-13 08:59:42.888 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.888 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16fae4da-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.888 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:59:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.889 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16fae4da-50, col_values=(('external_ids', {'iface-id': '7a0536ce-73f9-4aa4-8989-da712c91214d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:42.889 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.119 248514 DEBUG nova.compute.manager [req-6cc5871f-f575-445e-aa43-867f2fcf7324 req-266c524e-6c2b-45e6-a543-f2e96dbebc03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Received event network-vif-plugged-d03635fc-13b4-44c2-baca-088d0efb07d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.120 248514 DEBUG oslo_concurrency.lockutils [req-6cc5871f-f575-445e-aa43-867f2fcf7324 req-266c524e-6c2b-45e6-a543-f2e96dbebc03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.120 248514 DEBUG oslo_concurrency.lockutils [req-6cc5871f-f575-445e-aa43-867f2fcf7324 req-266c524e-6c2b-45e6-a543-f2e96dbebc03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.121 248514 DEBUG oslo_concurrency.lockutils [req-6cc5871f-f575-445e-aa43-867f2fcf7324 req-266c524e-6c2b-45e6-a543-f2e96dbebc03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.121 248514 DEBUG nova.compute.manager [req-6cc5871f-f575-445e-aa43-867f2fcf7324 req-266c524e-6c2b-45e6-a543-f2e96dbebc03 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Processing event network-vif-plugged-d03635fc-13b4-44c2-baca-088d0efb07d9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.131 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616383.1313186, decfc3d0-e424-4304-8b3c-51daa9bd0fb6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.132 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] VM Started (Lifecycle Event)#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.134 248514 DEBUG nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.138 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.142 248514 INFO nova.virt.libvirt.driver [-] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Instance spawned successfully.#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.142 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.162 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.175 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.176 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.177 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.177 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.178 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.179 248514 DEBUG nova.virt.libvirt.driver [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.184 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.215 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.216 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616383.131516, decfc3d0-e424-4304-8b3c-51daa9bd0fb6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.216 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] VM Paused (Lifecycle Event)#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.250 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.254 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616383.1369686, decfc3d0-e424-4304-8b3c-51daa9bd0fb6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.255 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] VM Resumed (Lifecycle Event)#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.259 248514 INFO nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Took 8.73 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.260 248514 DEBUG nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.290 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.295 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.327 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.344 248514 INFO nova.compute.manager [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Took 9.92 seconds to build instance.#033[00m
Dec 13 03:59:43 np0005558241 nova_compute[248510]: 2025-12-13 08:59:43.365 248514 DEBUG oslo_concurrency.lockutils [None req-e4abe76a-1df0-4cdd-b946-a9891f3e568a 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2959: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Dec 13 03:59:44 np0005558241 nova_compute[248510]: 2025-12-13 08:59:44.453 248514 INFO nova.compute.manager [None req-0830799e-1a68-4b7c-884d-6ba13fb4e75b a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Get console output#033[00m
Dec 13 03:59:44 np0005558241 nova_compute[248510]: 2025-12-13 08:59:44.460 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.238 248514 DEBUG nova.compute.manager [req-b842a7f2-b3f8-410d-8086-441911ab79f5 req-9a257872-98e6-474f-8aae-f476d7ca5ce1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Received event network-vif-plugged-d03635fc-13b4-44c2-baca-088d0efb07d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.239 248514 DEBUG oslo_concurrency.lockutils [req-b842a7f2-b3f8-410d-8086-441911ab79f5 req-9a257872-98e6-474f-8aae-f476d7ca5ce1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.239 248514 DEBUG oslo_concurrency.lockutils [req-b842a7f2-b3f8-410d-8086-441911ab79f5 req-9a257872-98e6-474f-8aae-f476d7ca5ce1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.240 248514 DEBUG oslo_concurrency.lockutils [req-b842a7f2-b3f8-410d-8086-441911ab79f5 req-9a257872-98e6-474f-8aae-f476d7ca5ce1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.241 248514 DEBUG nova.compute.manager [req-b842a7f2-b3f8-410d-8086-441911ab79f5 req-9a257872-98e6-474f-8aae-f476d7ca5ce1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] No waiting events found dispatching network-vif-plugged-d03635fc-13b4-44c2-baca-088d0efb07d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.242 248514 WARNING nova.compute.manager [req-b842a7f2-b3f8-410d-8086-441911ab79f5 req-9a257872-98e6-474f-8aae-f476d7ca5ce1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Received unexpected event network-vif-plugged-d03635fc-13b4-44c2-baca-088d0efb07d9 for instance with vm_state active and task_state None.#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.660 248514 DEBUG nova.compute.manager [req-d2c7cf9f-a79b-4d5b-b857-182ccb24acb1 req-68c0e027-9e21-4f3d-982e-f6633a6d0b25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received event network-changed-558f49fb-1002-4c28-8ba2-ea32384811d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.661 248514 DEBUG nova.compute.manager [req-d2c7cf9f-a79b-4d5b-b857-182ccb24acb1 req-68c0e027-9e21-4f3d-982e-f6633a6d0b25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Refreshing instance network info cache due to event network-changed-558f49fb-1002-4c28-8ba2-ea32384811d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.661 248514 DEBUG oslo_concurrency.lockutils [req-d2c7cf9f-a79b-4d5b-b857-182ccb24acb1 req-68c0e027-9e21-4f3d-982e-f6633a6d0b25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.661 248514 DEBUG oslo_concurrency.lockutils [req-d2c7cf9f-a79b-4d5b-b857-182ccb24acb1 req-68c0e027-9e21-4f3d-982e-f6633a6d0b25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.662 248514 DEBUG nova.network.neutron [req-d2c7cf9f-a79b-4d5b-b857-182ccb24acb1 req-68c0e027-9e21-4f3d-982e-f6633a6d0b25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Refreshing network info cache for port 558f49fb-1002-4c28-8ba2-ea32384811d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.738 248514 DEBUG oslo_concurrency.lockutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "48db77d9-f4d5-44dd-852e-aa10f98ace90" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.739 248514 DEBUG oslo_concurrency.lockutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.739 248514 DEBUG oslo_concurrency.lockutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.739 248514 DEBUG oslo_concurrency.lockutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.739 248514 DEBUG oslo_concurrency.lockutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.740 248514 INFO nova.compute.manager [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Terminating instance#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.741 248514 DEBUG nova.compute.manager [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 03:59:45 np0005558241 kernel: tap558f49fb-10 (unregistering): left promiscuous mode
Dec 13 03:59:45 np0005558241 NetworkManager[50376]: <info>  [1765616385.7953] device (tap558f49fb-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 03:59:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:45Z|01243|binding|INFO|Releasing lport 558f49fb-1002-4c28-8ba2-ea32384811d7 from this chassis (sb_readonly=0)
Dec 13 03:59:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:45Z|01244|binding|INFO|Setting lport 558f49fb-1002-4c28-8ba2-ea32384811d7 down in Southbound
Dec 13 03:59:45 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:45Z|01245|binding|INFO|Removing iface tap558f49fb-10 ovn-installed in OVS
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.806 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:45.813 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:ad:5d 10.100.0.5'], port_security=['fa:16:3e:ba:ad:5d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '48db77d9-f4d5-44dd-852e-aa10f98ace90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67032c28-66fa-4d75-99b8-37c3e61c4140', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af406687-c2e5-4e03-9415-af4130e7b9af, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=558f49fb-1002-4c28-8ba2-ea32384811d7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 03:59:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:45.814 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 558f49fb-1002-4c28-8ba2-ea32384811d7 in datapath 4977afa6-1a1c-43aa-9d3f-7b6747f5eb37 unbound from our chassis#033[00m
Dec 13 03:59:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:45.816 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4977afa6-1a1c-43aa-9d3f-7b6747f5eb37, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 03:59:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:45.818 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3c44e1e3-c72e-4f3c-a575-e087f2620851]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:45.819 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37 namespace which is not needed anymore#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.832 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:45 np0005558241 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Dec 13 03:59:45 np0005558241 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d0000007d.scope: Consumed 12.858s CPU time.
Dec 13 03:59:45 np0005558241 systemd-machined[210538]: Machine qemu-152-instance-0000007d terminated.
Dec 13 03:59:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:59:45 np0005558241 neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37[373055]: [NOTICE]   (373059) : haproxy version is 2.8.14-c23fe91
Dec 13 03:59:45 np0005558241 neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37[373055]: [NOTICE]   (373059) : path to executable is /usr/sbin/haproxy
Dec 13 03:59:45 np0005558241 neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37[373055]: [WARNING]  (373059) : Exiting Master process...
Dec 13 03:59:45 np0005558241 neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37[373055]: [WARNING]  (373059) : Exiting Master process...
Dec 13 03:59:45 np0005558241 neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37[373055]: [ALERT]    (373059) : Current worker (373061) exited with code 143 (Terminated)
Dec 13 03:59:45 np0005558241 neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37[373055]: [WARNING]  (373059) : All workers exited. Exiting... (0)
Dec 13 03:59:45 np0005558241 systemd[1]: libpod-cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c.scope: Deactivated successfully.
Dec 13 03:59:45 np0005558241 conmon[373055]: conmon cd1426da206dbb959906 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c.scope/container/memory.events
Dec 13 03:59:45 np0005558241 podman[373680]: 2025-12-13 08:59:45.97033975 +0000 UTC m=+0.063853224 container died cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.976 248514 INFO nova.virt.libvirt.driver [-] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Instance destroyed successfully.#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.977 248514 DEBUG nova.objects.instance [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'resources' on Instance uuid 48db77d9-f4d5-44dd-852e-aa10f98ace90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.992 248514 DEBUG nova.virt.libvirt.vif [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:59:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1185653133',display_name='tempest-TestNetworkAdvancedServerOps-server-1185653133',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1185653133',id=125,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBErdAvAxyanLSk6zlk6UR7jIkt03qXx3MfH8+NhdcAyZSFQd+bfa57m4HcmVco5q1ZITwwE9Zaq6j0nsDhbNKnt90E6ScVaQ/3xa8beJWwD79hCpdjT8jJu+dOAQXgrxSQ==',key_name='tempest-TestNetworkAdvancedServerOps-1501689960',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:59:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-rf02ewgw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:59:41Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=48db77d9-f4d5-44dd-852e-aa10f98ace90,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.993 248514 DEBUG nova.network.os_vif_util [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.994 248514 DEBUG nova.network.os_vif_util [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:ad:5d,bridge_name='br-int',has_traffic_filtering=True,id=558f49fb-1002-4c28-8ba2-ea32384811d7,network=Network(4977afa6-1a1c-43aa-9d3f-7b6747f5eb37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558f49fb-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.994 248514 DEBUG os_vif [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:ad:5d,bridge_name='br-int',has_traffic_filtering=True,id=558f49fb-1002-4c28-8ba2-ea32384811d7,network=Network(4977afa6-1a1c-43aa-9d3f-7b6747f5eb37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558f49fb-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.996 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.996 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap558f49fb-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:45 np0005558241 nova_compute[248510]: 2025-12-13 08:59:45.997 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:46 np0005558241 nova_compute[248510]: 2025-12-13 08:59:46.001 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 03:59:46 np0005558241 nova_compute[248510]: 2025-12-13 08:59:46.002 248514 INFO os_vif [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:ad:5d,bridge_name='br-int',has_traffic_filtering=True,id=558f49fb-1002-4c28-8ba2-ea32384811d7,network=Network(4977afa6-1a1c-43aa-9d3f-7b6747f5eb37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap558f49fb-10')#033[00m
Dec 13 03:59:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c-userdata-shm.mount: Deactivated successfully.
Dec 13 03:59:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay-86652b75459e4a25df06217dc80b414d5f5e77aaf98604cf5669cd3f9ba9632a-merged.mount: Deactivated successfully.
Dec 13 03:59:46 np0005558241 podman[373680]: 2025-12-13 08:59:46.099964823 +0000 UTC m=+0.193478297 container cleanup cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 03:59:46 np0005558241 systemd[1]: libpod-conmon-cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c.scope: Deactivated successfully.
Dec 13 03:59:46 np0005558241 podman[373739]: 2025-12-13 08:59:46.263903626 +0000 UTC m=+0.144479527 container remove cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:59:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:46.271 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[46556867-67e1-450c-b1ae-11072364a159]: (4, ('Sat Dec 13 08:59:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37 (cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c)\ncd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c\nSat Dec 13 08:59:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37 (cd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c)\ncd1426da206dbb959906df6eb3c574effbda89c914536852c35eea1d6042820c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:46.273 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7fdb88eb-80f9-4143-9aaf-0cd97374688f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:46.275 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4977afa6-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 03:59:46 np0005558241 kernel: tap4977afa6-10: left promiscuous mode
Dec 13 03:59:46 np0005558241 nova_compute[248510]: 2025-12-13 08:59:46.277 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:46.283 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5feab448-5da2-45b1-bbf1-406f6e13193b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:46 np0005558241 nova_compute[248510]: 2025-12-13 08:59:46.296 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:46.300 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1587b85a-4d37-4000-82bf-a3cbf5217387]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:46.302 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db7abea4-51c1-4ae9-9af0-80806e0caaae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:46.326 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cb25327d-82cd-45b2-aa88-a74475c60a58]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 893225, 'reachable_time': 27619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373754, 'error': None, 'target': 'ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:46 np0005558241 systemd[1]: run-netns-ovnmeta\x2d4977afa6\x2d1a1c\x2d43aa\x2d9d3f\x2d7b6747f5eb37.mount: Deactivated successfully.
Dec 13 03:59:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:46.332 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4977afa6-1a1c-43aa-9d3f-7b6747f5eb37 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 03:59:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:46.332 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[9b919c30-5774-41c7-ae48-e3d721ee45f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 03:59:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2960: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Dec 13 03:59:46 np0005558241 nova_compute[248510]: 2025-12-13 08:59:46.805 248514 INFO nova.virt.libvirt.driver [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Deleting instance files /var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90_del#033[00m
Dec 13 03:59:46 np0005558241 nova_compute[248510]: 2025-12-13 08:59:46.806 248514 INFO nova.virt.libvirt.driver [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Deletion of /var/lib/nova/instances/48db77d9-f4d5-44dd-852e-aa10f98ace90_del complete#033[00m
Dec 13 03:59:46 np0005558241 nova_compute[248510]: 2025-12-13 08:59:46.870 248514 INFO nova.compute.manager [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Took 1.13 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 03:59:46 np0005558241 nova_compute[248510]: 2025-12-13 08:59:46.871 248514 DEBUG oslo.service.loopingcall [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 03:59:46 np0005558241 nova_compute[248510]: 2025-12-13 08:59:46.872 248514 DEBUG nova.compute.manager [-] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 03:59:46 np0005558241 nova_compute[248510]: 2025-12-13 08:59:46.872 248514 DEBUG nova.network.neutron [-] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.413 248514 DEBUG nova.compute.manager [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received event network-vif-unplugged-558f49fb-1002-4c28-8ba2-ea32384811d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.414 248514 DEBUG oslo_concurrency.lockutils [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.414 248514 DEBUG oslo_concurrency.lockutils [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.415 248514 DEBUG oslo_concurrency.lockutils [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.415 248514 DEBUG nova.compute.manager [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] No waiting events found dispatching network-vif-unplugged-558f49fb-1002-4c28-8ba2-ea32384811d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.416 248514 DEBUG nova.compute.manager [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received event network-vif-unplugged-558f49fb-1002-4c28-8ba2-ea32384811d7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.416 248514 DEBUG nova.compute.manager [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received event network-vif-plugged-558f49fb-1002-4c28-8ba2-ea32384811d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.417 248514 DEBUG oslo_concurrency.lockutils [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.417 248514 DEBUG oslo_concurrency.lockutils [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.418 248514 DEBUG oslo_concurrency.lockutils [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.418 248514 DEBUG nova.compute.manager [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] No waiting events found dispatching network-vif-plugged-558f49fb-1002-4c28-8ba2-ea32384811d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.419 248514 WARNING nova.compute.manager [req-97048ce2-3682-4d47-b418-e0775a8a3579 req-e61b3bf4-73b4-4994-9369-179a203756aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received unexpected event network-vif-plugged-558f49fb-1002-4c28-8ba2-ea32384811d7 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.549 248514 DEBUG nova.network.neutron [-] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.567 248514 INFO nova.compute.manager [-] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Took 0.69 seconds to deallocate network for instance.#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.619 248514 DEBUG oslo_concurrency.lockutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.620 248514 DEBUG oslo_concurrency.lockutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.692 248514 DEBUG nova.network.neutron [req-d2c7cf9f-a79b-4d5b-b857-182ccb24acb1 req-68c0e027-9e21-4f3d-982e-f6633a6d0b25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Updated VIF entry in instance network info cache for port 558f49fb-1002-4c28-8ba2-ea32384811d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.695 248514 DEBUG nova.network.neutron [req-d2c7cf9f-a79b-4d5b-b857-182ccb24acb1 req-68c0e027-9e21-4f3d-982e-f6633a6d0b25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Updating instance_info_cache with network_info: [{"id": "558f49fb-1002-4c28-8ba2-ea32384811d7", "address": "fa:16:3e:ba:ad:5d", "network": {"id": "4977afa6-1a1c-43aa-9d3f-7b6747f5eb37", "bridge": "br-int", "label": "tempest-network-smoke--299228636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap558f49fb-10", "ovs_interfaceid": "558f49fb-1002-4c28-8ba2-ea32384811d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.718 248514 DEBUG oslo_concurrency.lockutils [req-d2c7cf9f-a79b-4d5b-b857-182ccb24acb1 req-68c0e027-9e21-4f3d-982e-f6633a6d0b25 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-48db77d9-f4d5-44dd-852e-aa10f98ace90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.719 248514 DEBUG oslo_concurrency.processutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 03:59:47 np0005558241 nova_compute[248510]: 2025-12-13 08:59:47.759 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 03:59:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/58622239' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 03:59:48 np0005558241 nova_compute[248510]: 2025-12-13 08:59:48.259 248514 DEBUG oslo_concurrency.processutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 03:59:48 np0005558241 nova_compute[248510]: 2025-12-13 08:59:48.265 248514 DEBUG nova.compute.provider_tree [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 03:59:48 np0005558241 nova_compute[248510]: 2025-12-13 08:59:48.290 248514 DEBUG nova.scheduler.client.report [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 03:59:48 np0005558241 nova_compute[248510]: 2025-12-13 08:59:48.319 248514 DEBUG oslo_concurrency.lockutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:48 np0005558241 nova_compute[248510]: 2025-12-13 08:59:48.348 248514 INFO nova.scheduler.client.report [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Deleted allocations for instance 48db77d9-f4d5-44dd-852e-aa10f98ace90#033[00m
Dec 13 03:59:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2961: 321 pgs: 321 active+clean; 205 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Dec 13 03:59:48 np0005558241 nova_compute[248510]: 2025-12-13 08:59:48.411 248514 DEBUG oslo_concurrency.lockutils [None req-6a318a01-f40f-44e1-94c1-d965b834d737 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "48db77d9-f4d5-44dd-852e-aa10f98ace90" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:49 np0005558241 nova_compute[248510]: 2025-12-13 08:59:49.529 248514 DEBUG nova.compute.manager [req-e72a6be7-7ba7-43e4-9757-3d33135bc89f req-b1b9a7cb-7450-431d-8dc0-f5687139dfe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Received event network-vif-deleted-558f49fb-1002-4c28-8ba2-ea32384811d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 03:59:49 np0005558241 nova_compute[248510]: 2025-12-13 08:59:49.530 248514 INFO nova.compute.manager [req-e72a6be7-7ba7-43e4-9757-3d33135bc89f req-b1b9a7cb-7450-431d-8dc0-f5687139dfe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Neutron deleted interface 558f49fb-1002-4c28-8ba2-ea32384811d7; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 03:59:49 np0005558241 nova_compute[248510]: 2025-12-13 08:59:49.530 248514 DEBUG nova.network.neutron [req-e72a6be7-7ba7-43e4-9757-3d33135bc89f req-b1b9a7cb-7450-431d-8dc0-f5687139dfe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Dec 13 03:59:49 np0005558241 nova_compute[248510]: 2025-12-13 08:59:49.533 248514 DEBUG nova.compute.manager [req-e72a6be7-7ba7-43e4-9757-3d33135bc89f req-b1b9a7cb-7450-431d-8dc0-f5687139dfe6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Detach interface failed, port_id=558f49fb-1002-4c28-8ba2-ea32384811d7, reason: Instance 48db77d9-f4d5-44dd-852e-aa10f98ace90 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec 13 03:59:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2962: 321 pgs: 321 active+clean; 167 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 828 KiB/s wr, 115 op/s
Dec 13 03:59:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:59:50 np0005558241 nova_compute[248510]: 2025-12-13 08:59:50.998 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2963: 321 pgs: 321 active+clean; 167 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 102 op/s
Dec 13 03:59:52 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:52Z|01246|binding|INFO|Releasing lport 4a84c557-65ab-478a-95fb-44e9a95becd6 from this chassis (sb_readonly=0)
Dec 13 03:59:52 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:52Z|01247|binding|INFO|Releasing lport 7a0536ce-73f9-4aa4-8989-da712c91214d from this chassis (sb_readonly=0)
Dec 13 03:59:52 np0005558241 nova_compute[248510]: 2025-12-13 08:59:52.663 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:52 np0005558241 nova_compute[248510]: 2025-12-13 08:59:52.733 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2964: 321 pgs: 321 active+clean; 169 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 137 KiB/s wr, 107 op/s
Dec 13 03:59:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:54Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c0:f2:de 10.100.0.30
Dec 13 03:59:54 np0005558241 ovn_controller[148476]: 2025-12-13T08:59:54Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:f2:de 10.100.0.30
Dec 13 03:59:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:55.437 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 03:59:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:55.438 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 03:59:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 08:59:55.439 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 03:59:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 03:59:56 np0005558241 nova_compute[248510]: 2025-12-13 08:59:56.002 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2965: 321 pgs: 321 active+clean; 169 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 124 KiB/s wr, 88 op/s
Dec 13 03:59:57 np0005558241 nova_compute[248510]: 2025-12-13 08:59:57.736 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 03:59:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2966: 321 pgs: 321 active+clean; 176 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 135 op/s
Dec 13 04:00:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2967: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 866 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Dec 13 04:00:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:00:00 np0005558241 nova_compute[248510]: 2025-12-13 09:00:00.975 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616385.974272, 48db77d9-f4d5-44dd-852e-aa10f98ace90 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:00:00 np0005558241 nova_compute[248510]: 2025-12-13 09:00:00.975 248514 INFO nova.compute.manager [-] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:00:01 np0005558241 nova_compute[248510]: 2025-12-13 09:00:01.003 248514 DEBUG nova.compute.manager [None req-2d1d20df-757b-493a-aab7-5d84ef82a2a9 - - - - - -] [instance: 48db77d9-f4d5-44dd-852e-aa10f98ace90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:00:01 np0005558241 nova_compute[248510]: 2025-12-13 09:00:01.005 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:02 np0005558241 nova_compute[248510]: 2025-12-13 09:00:02.285 248514 DEBUG nova.compute.manager [req-6a1c0051-2029-4852-ab65-79b8b7984843 req-66857499-75f5-446a-b5e7-e6c9d4d78c89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-changed-a30b0da9-1ee1-4092-a86b-5fa66fe76492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:02 np0005558241 nova_compute[248510]: 2025-12-13 09:00:02.286 248514 DEBUG nova.compute.manager [req-6a1c0051-2029-4852-ab65-79b8b7984843 req-66857499-75f5-446a-b5e7-e6c9d4d78c89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing instance network info cache due to event network-changed-a30b0da9-1ee1-4092-a86b-5fa66fe76492. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:00:02 np0005558241 nova_compute[248510]: 2025-12-13 09:00:02.286 248514 DEBUG oslo_concurrency.lockutils [req-6a1c0051-2029-4852-ab65-79b8b7984843 req-66857499-75f5-446a-b5e7-e6c9d4d78c89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:00:02 np0005558241 nova_compute[248510]: 2025-12-13 09:00:02.286 248514 DEBUG oslo_concurrency.lockutils [req-6a1c0051-2029-4852-ab65-79b8b7984843 req-66857499-75f5-446a-b5e7-e6c9d4d78c89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:00:02 np0005558241 nova_compute[248510]: 2025-12-13 09:00:02.287 248514 DEBUG nova.network.neutron [req-6a1c0051-2029-4852-ab65-79b8b7984843 req-66857499-75f5-446a-b5e7-e6c9d4d78c89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing network info cache for port a30b0da9-1ee1-4092-a86b-5fa66fe76492 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:00:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2968: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 04:00:02 np0005558241 nova_compute[248510]: 2025-12-13 09:00:02.543 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:02 np0005558241 nova_compute[248510]: 2025-12-13 09:00:02.741 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:03 np0005558241 nova_compute[248510]: 2025-12-13 09:00:03.676 248514 DEBUG nova.network.neutron [req-6a1c0051-2029-4852-ab65-79b8b7984843 req-66857499-75f5-446a-b5e7-e6c9d4d78c89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updated VIF entry in instance network info cache for port a30b0da9-1ee1-4092-a86b-5fa66fe76492. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:00:03 np0005558241 nova_compute[248510]: 2025-12-13 09:00:03.677 248514 DEBUG nova.network.neutron [req-6a1c0051-2029-4852-ab65-79b8b7984843 req-66857499-75f5-446a-b5e7-e6c9d4d78c89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:00:03 np0005558241 nova_compute[248510]: 2025-12-13 09:00:03.709 248514 DEBUG oslo_concurrency.lockutils [req-6a1c0051-2029-4852-ab65-79b8b7984843 req-66857499-75f5-446a-b5e7-e6c9d4d78c89 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:00:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2969: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Dec 13 04:00:05 np0005558241 podman[373804]: 2025-12-13 09:00:05.628169649 +0000 UTC m=+0.056363860 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 04:00:05 np0005558241 podman[373803]: 2025-12-13 09:00:05.653166271 +0000 UTC m=+0.084347975 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 13 04:00:05 np0005558241 podman[373802]: 2025-12-13 09:00:05.655027236 +0000 UTC m=+0.088785333 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 04:00:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:00:06 np0005558241 nova_compute[248510]: 2025-12-13 09:00:06.007 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:00:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:00:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:00:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:00:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:00:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2970: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 2.0 MiB/s wr, 61 op/s
Dec 13 04:00:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:00:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:00:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:00:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:00:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:00:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:00:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:00:06 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:00:06 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:00:06 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:00:06 np0005558241 nova_compute[248510]: 2025-12-13 09:00:06.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:00:07 np0005558241 podman[373981]: 2025-12-13 09:00:07.015842646 +0000 UTC m=+0.065365701 container create 06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rosalind, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 04:00:07 np0005558241 systemd[1]: Started libpod-conmon-06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0.scope.
Dec 13 04:00:07 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:00:07 np0005558241 podman[373981]: 2025-12-13 09:00:06.988559458 +0000 UTC m=+0.038082583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:00:07 np0005558241 podman[373981]: 2025-12-13 09:00:07.097112176 +0000 UTC m=+0.146635221 container init 06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Dec 13 04:00:07 np0005558241 podman[373981]: 2025-12-13 09:00:07.1070974 +0000 UTC m=+0.156620425 container start 06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rosalind, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec 13 04:00:07 np0005558241 podman[373981]: 2025-12-13 09:00:07.111279052 +0000 UTC m=+0.160802077 container attach 06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:00:07 np0005558241 zen_rosalind[373998]: 167 167
Dec 13 04:00:07 np0005558241 systemd[1]: libpod-06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0.scope: Deactivated successfully.
Dec 13 04:00:07 np0005558241 conmon[373998]: conmon 06c919bf667da8204a5f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0.scope/container/memory.events
Dec 13 04:00:07 np0005558241 podman[373981]: 2025-12-13 09:00:07.115153867 +0000 UTC m=+0.164676892 container died 06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 04:00:07 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9c32c908094a803cd7e50696dd71697ae804e2303b7d316080bef7e24494458f-merged.mount: Deactivated successfully.
Dec 13 04:00:07 np0005558241 podman[373981]: 2025-12-13 09:00:07.154686295 +0000 UTC m=+0.204209320 container remove 06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:00:07 np0005558241 systemd[1]: libpod-conmon-06c919bf667da8204a5f36080cf48676094d8aadc400328262c1193b0a4fc2a0.scope: Deactivated successfully.
Dec 13 04:00:07 np0005558241 podman[374021]: 2025-12-13 09:00:07.34160976 +0000 UTC m=+0.043249849 container create 827cfda177be906c7eb63a8dc5efba48622f58c7de57e7f1376d5ae14e7fbb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_darwin, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:00:07 np0005558241 systemd[1]: Started libpod-conmon-827cfda177be906c7eb63a8dc5efba48622f58c7de57e7f1376d5ae14e7fbb3d.scope.
Dec 13 04:00:07 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:00:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0900cab2ab88ea0a4d27935edbd8534b67c2cd89a3bea8c0070a1e44fef970d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0900cab2ab88ea0a4d27935edbd8534b67c2cd89a3bea8c0070a1e44fef970d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0900cab2ab88ea0a4d27935edbd8534b67c2cd89a3bea8c0070a1e44fef970d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:07 np0005558241 podman[374021]: 2025-12-13 09:00:07.321504538 +0000 UTC m=+0.023144647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:00:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0900cab2ab88ea0a4d27935edbd8534b67c2cd89a3bea8c0070a1e44fef970d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0900cab2ab88ea0a4d27935edbd8534b67c2cd89a3bea8c0070a1e44fef970d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:07 np0005558241 podman[374021]: 2025-12-13 09:00:07.427957684 +0000 UTC m=+0.129597803 container init 827cfda177be906c7eb63a8dc5efba48622f58c7de57e7f1376d5ae14e7fbb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_darwin, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:00:07 np0005558241 podman[374021]: 2025-12-13 09:00:07.434805662 +0000 UTC m=+0.136445751 container start 827cfda177be906c7eb63a8dc5efba48622f58c7de57e7f1376d5ae14e7fbb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_darwin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:00:07 np0005558241 podman[374021]: 2025-12-13 09:00:07.438805469 +0000 UTC m=+0.140445588 container attach 827cfda177be906c7eb63a8dc5efba48622f58c7de57e7f1376d5ae14e7fbb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_darwin, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:00:07 np0005558241 nova_compute[248510]: 2025-12-13 09:00:07.742 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:07 np0005558241 nova_compute[248510]: 2025-12-13 09:00:07.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:00:07 np0005558241 zealous_darwin[374038]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:00:07 np0005558241 zealous_darwin[374038]: --> All data devices are unavailable
Dec 13 04:00:07 np0005558241 systemd[1]: libpod-827cfda177be906c7eb63a8dc5efba48622f58c7de57e7f1376d5ae14e7fbb3d.scope: Deactivated successfully.
Dec 13 04:00:07 np0005558241 podman[374021]: 2025-12-13 09:00:07.940701045 +0000 UTC m=+0.642341134 container died 827cfda177be906c7eb63a8dc5efba48622f58c7de57e7f1376d5ae14e7fbb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 04:00:07 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0900cab2ab88ea0a4d27935edbd8534b67c2cd89a3bea8c0070a1e44fef970d3-merged.mount: Deactivated successfully.
Dec 13 04:00:07 np0005558241 podman[374021]: 2025-12-13 09:00:07.980873198 +0000 UTC m=+0.682513287 container remove 827cfda177be906c7eb63a8dc5efba48622f58c7de57e7f1376d5ae14e7fbb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:00:08 np0005558241 systemd[1]: libpod-conmon-827cfda177be906c7eb63a8dc5efba48622f58c7de57e7f1376d5ae14e7fbb3d.scope: Deactivated successfully.
Dec 13 04:00:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2971: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 2.0 MiB/s wr, 61 op/s
Dec 13 04:00:08 np0005558241 podman[374131]: 2025-12-13 09:00:08.428822143 +0000 UTC m=+0.040790830 container create df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:00:08 np0005558241 systemd[1]: Started libpod-conmon-df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce.scope.
Dec 13 04:00:08 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:00:08 np0005558241 podman[374131]: 2025-12-13 09:00:08.41197724 +0000 UTC m=+0.023945957 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:00:08 np0005558241 podman[374131]: 2025-12-13 09:00:08.512366538 +0000 UTC m=+0.124335245 container init df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_elbakyan, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:00:08 np0005558241 podman[374131]: 2025-12-13 09:00:08.520469276 +0000 UTC m=+0.132437963 container start df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_elbakyan, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 04:00:08 np0005558241 kind_elbakyan[374148]: 167 167
Dec 13 04:00:08 np0005558241 podman[374131]: 2025-12-13 09:00:08.524382662 +0000 UTC m=+0.136351349 container attach df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_elbakyan, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 04:00:08 np0005558241 systemd[1]: libpod-df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce.scope: Deactivated successfully.
Dec 13 04:00:08 np0005558241 conmon[374148]: conmon df733168c78489603bc4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce.scope/container/memory.events
Dec 13 04:00:08 np0005558241 podman[374131]: 2025-12-13 09:00:08.526161155 +0000 UTC m=+0.138129842 container died df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_elbakyan, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Dec 13 04:00:08 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0fa6599a7aed2909e2ca23c7e79704d413955102d7f49b86c2916b7e5077c882-merged.mount: Deactivated successfully.
Dec 13 04:00:08 np0005558241 podman[374131]: 2025-12-13 09:00:08.565495018 +0000 UTC m=+0.177463705 container remove df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 04:00:08 np0005558241 systemd[1]: libpod-conmon-df733168c78489603bc4a96986486b203a3203f0aa10bb66d27ecdb92e338bce.scope: Deactivated successfully.
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.677 248514 DEBUG oslo_concurrency.lockutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.678 248514 DEBUG oslo_concurrency.lockutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.678 248514 DEBUG oslo_concurrency.lockutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.678 248514 DEBUG oslo_concurrency.lockutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.678 248514 DEBUG oslo_concurrency.lockutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.679 248514 INFO nova.compute.manager [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Terminating instance#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.680 248514 DEBUG nova.compute.manager [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:00:08 np0005558241 kernel: tapd03635fc-13 (unregistering): left promiscuous mode
Dec 13 04:00:08 np0005558241 NetworkManager[50376]: <info>  [1765616408.7253] device (tapd03635fc-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:00:08 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:08Z|01248|binding|INFO|Releasing lport d03635fc-13b4-44c2-baca-088d0efb07d9 from this chassis (sb_readonly=0)
Dec 13 04:00:08 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:08Z|01249|binding|INFO|Setting lport d03635fc-13b4-44c2-baca-088d0efb07d9 down in Southbound
Dec 13 04:00:08 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:08Z|01250|binding|INFO|Removing iface tapd03635fc-13 ovn-installed in OVS
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.743 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:f2:de 10.100.0.30'], port_security=['fa:16:3e:c0:f2:de 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': 'decfc3d0-e424-4304-8b3c-51daa9bd0fb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16fae4da-5722-4e42-b101-44d9ef244421', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'af860b42-00a6-4e57-908f-190713e2b805', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9deccf5-6c40-469a-b45c-630adf312a35, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=d03635fc-13b4-44c2-baca-088d0efb07d9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:00:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.744 158419 INFO neutron.agent.ovn.metadata.agent [-] Port d03635fc-13b4-44c2-baca-088d0efb07d9 in datapath 16fae4da-5722-4e42-b101-44d9ef244421 unbound from our chassis#033[00m
Dec 13 04:00:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.745 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16fae4da-5722-4e42-b101-44d9ef244421#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.773 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[054b66e0-5568-4e38-a1f3-ce4fa64d803b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:08 np0005558241 podman[374171]: 2025-12-13 09:00:08.782989662 +0000 UTC m=+0.058376310 container create cc4bd55380ee579a1dce86c6f3df27e27fa2930cded74f75fdf82de0f72c773a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pike, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.803 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.806 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[06da02d5-3117-4418-a6c5-aa49839a50fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:08 np0005558241 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Dec 13 04:00:08 np0005558241 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d0000007e.scope: Consumed 13.586s CPU time.
Dec 13 04:00:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.809 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6985264f-010e-4839-98b5-6abac3078bbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:08 np0005558241 systemd[1]: Started libpod-conmon-cc4bd55380ee579a1dce86c6f3df27e27fa2930cded74f75fdf82de0f72c773a.scope.
Dec 13 04:00:08 np0005558241 systemd-machined[210538]: Machine qemu-153-instance-0000007e terminated.
Dec 13 04:00:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.838 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b8da30c4-8027-4386-b108-5557a073f92c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.854 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b1663da3-95cf-456e-a099-f16610137a8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16fae4da-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:9f:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 790, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 790, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 363], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 894284, 'reachable_time': 40385, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 7, 'inoctets': 524, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 7, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 524, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 7, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374201, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:08 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:00:08 np0005558241 podman[374171]: 2025-12-13 09:00:08.76617616 +0000 UTC m=+0.041562828 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:00:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7edb21ed41d98a1c73e169f41a3615f19d70751a0c77cc7fe7510c2bc0eda2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7edb21ed41d98a1c73e169f41a3615f19d70751a0c77cc7fe7510c2bc0eda2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7edb21ed41d98a1c73e169f41a3615f19d70751a0c77cc7fe7510c2bc0eda2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7edb21ed41d98a1c73e169f41a3615f19d70751a0c77cc7fe7510c2bc0eda2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.872 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8727d35a-882f-4c88-af17-495d646c6e59]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap16fae4da-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 894297, 'tstamp': 894297}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374202, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap16fae4da-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 894300, 'tstamp': 894300}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374202, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.874 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16fae4da-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.876 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.882 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:08 np0005558241 podman[374171]: 2025-12-13 09:00:08.884247311 +0000 UTC m=+0.159633989 container init cc4bd55380ee579a1dce86c6f3df27e27fa2930cded74f75fdf82de0f72c773a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pike, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:00:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.884 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16fae4da-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.885 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:00:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.885 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16fae4da-50, col_values=(('external_ids', {'iface-id': '7a0536ce-73f9-4aa4-8989-da712c91214d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:08.885 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:00:08 np0005558241 podman[374171]: 2025-12-13 09:00:08.894162683 +0000 UTC m=+0.169549351 container start cc4bd55380ee579a1dce86c6f3df27e27fa2930cded74f75fdf82de0f72c773a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:00:08 np0005558241 podman[374171]: 2025-12-13 09:00:08.898688554 +0000 UTC m=+0.174075212 container attach cc4bd55380ee579a1dce86c6f3df27e27fa2930cded74f75fdf82de0f72c773a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pike, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.914 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.922 248514 INFO nova.virt.libvirt.driver [-] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Instance destroyed successfully.#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.923 248514 DEBUG nova.objects.instance [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid decfc3d0-e424-4304-8b3c-51daa9bd0fb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.936 248514 DEBUG nova.virt.libvirt.vif [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:59:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-828624011',display_name='tempest-TestNetworkBasicOps-server-828624011',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-828624011',id=126,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLbfdn0vI5uiRW56FmHbdPgrEcVctEQCQ0JrumFh8Rkx0TdD9XCLn6dyTpTncQ+yr075mpR5CmHJQEBcBXMTv9XR0fwQ9NFmvKHwa7Jo4OFodBgAjBRwq8dmNkZu2ZP1tQ==',key_name='tempest-TestNetworkBasicOps-1642652749',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:59:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-19q60cz3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:59:43Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=decfc3d0-e424-4304-8b3c-51daa9bd0fb6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.936 248514 DEBUG nova.network.os_vif_util [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "d03635fc-13b4-44c2-baca-088d0efb07d9", "address": "fa:16:3e:c0:f2:de", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd03635fc-13", "ovs_interfaceid": "d03635fc-13b4-44c2-baca-088d0efb07d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.937 248514 DEBUG nova.network.os_vif_util [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:f2:de,bridge_name='br-int',has_traffic_filtering=True,id=d03635fc-13b4-44c2-baca-088d0efb07d9,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd03635fc-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.938 248514 DEBUG os_vif [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:f2:de,bridge_name='br-int',has_traffic_filtering=True,id=d03635fc-13b4-44c2-baca-088d0efb07d9,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd03635fc-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.940 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.940 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd03635fc-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.943 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.944 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.947 248514 INFO os_vif [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:f2:de,bridge_name='br-int',has_traffic_filtering=True,id=d03635fc-13b4-44c2-baca-088d0efb07d9,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd03635fc-13')#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.967 248514 DEBUG nova.compute.manager [req-75983850-dffc-4aab-94b0-c2f740e2453f req-358b585c-690a-4cbf-95ae-47943bbfcfc3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Received event network-vif-unplugged-d03635fc-13b4-44c2-baca-088d0efb07d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.967 248514 DEBUG oslo_concurrency.lockutils [req-75983850-dffc-4aab-94b0-c2f740e2453f req-358b585c-690a-4cbf-95ae-47943bbfcfc3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.967 248514 DEBUG oslo_concurrency.lockutils [req-75983850-dffc-4aab-94b0-c2f740e2453f req-358b585c-690a-4cbf-95ae-47943bbfcfc3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.967 248514 DEBUG oslo_concurrency.lockutils [req-75983850-dffc-4aab-94b0-c2f740e2453f req-358b585c-690a-4cbf-95ae-47943bbfcfc3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.968 248514 DEBUG nova.compute.manager [req-75983850-dffc-4aab-94b0-c2f740e2453f req-358b585c-690a-4cbf-95ae-47943bbfcfc3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] No waiting events found dispatching network-vif-unplugged-d03635fc-13b4-44c2-baca-088d0efb07d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:00:08 np0005558241 nova_compute[248510]: 2025-12-13 09:00:08.968 248514 DEBUG nova.compute.manager [req-75983850-dffc-4aab-94b0-c2f740e2453f req-358b585c-690a-4cbf-95ae-47943bbfcfc3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Received event network-vif-unplugged-d03635fc-13b4-44c2-baca-088d0efb07d9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:00:09 np0005558241 gifted_pike[374198]: {
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:    "0": [
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:        {
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "devices": [
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "/dev/loop3"
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            ],
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "lv_name": "ceph_lv0",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "lv_size": "21470642176",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "name": "ceph_lv0",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "tags": {
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.cluster_name": "ceph",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.crush_device_class": "",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.encrypted": "0",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.objectstore": "bluestore",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.osd_id": "0",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.type": "block",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.vdo": "0",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.with_tpm": "0"
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            },
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "type": "block",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "vg_name": "ceph_vg0"
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:        }
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:    ],
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:    "1": [
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:        {
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "devices": [
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "/dev/loop4"
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            ],
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "lv_name": "ceph_lv1",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "lv_size": "21470642176",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "name": "ceph_lv1",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "tags": {
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.cluster_name": "ceph",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.crush_device_class": "",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.encrypted": "0",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.objectstore": "bluestore",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.osd_id": "1",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.type": "block",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.vdo": "0",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.with_tpm": "0"
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            },
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "type": "block",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "vg_name": "ceph_vg1"
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:        }
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:    ],
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:    "2": [
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:        {
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "devices": [
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "/dev/loop5"
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            ],
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "lv_name": "ceph_lv2",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "lv_size": "21470642176",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "name": "ceph_lv2",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "tags": {
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.cluster_name": "ceph",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.crush_device_class": "",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.encrypted": "0",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.objectstore": "bluestore",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.osd_id": "2",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.type": "block",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.vdo": "0",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:                "ceph.with_tpm": "0"
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            },
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "type": "block",
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:            "vg_name": "ceph_vg2"
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:        }
Dec 13 04:00:09 np0005558241 gifted_pike[374198]:    ]
Dec 13 04:00:09 np0005558241 gifted_pike[374198]: }
Dec 13 04:00:09 np0005558241 systemd[1]: libpod-cc4bd55380ee579a1dce86c6f3df27e27fa2930cded74f75fdf82de0f72c773a.scope: Deactivated successfully.
Dec 13 04:00:09 np0005558241 podman[374171]: 2025-12-13 09:00:09.213855338 +0000 UTC m=+0.489241996 container died cc4bd55380ee579a1dce86c6f3df27e27fa2930cded74f75fdf82de0f72c773a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pike, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:00:09 np0005558241 nova_compute[248510]: 2025-12-13 09:00:09.217 248514 INFO nova.virt.libvirt.driver [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Deleting instance files /var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6_del#033[00m
Dec 13 04:00:09 np0005558241 nova_compute[248510]: 2025-12-13 09:00:09.218 248514 INFO nova.virt.libvirt.driver [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Deletion of /var/lib/nova/instances/decfc3d0-e424-4304-8b3c-51daa9bd0fb6_del complete#033[00m
Dec 13 04:00:09 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ea7edb21ed41d98a1c73e169f41a3615f19d70751a0c77cc7fe7510c2bc0eda2-merged.mount: Deactivated successfully.
Dec 13 04:00:09 np0005558241 podman[374171]: 2025-12-13 09:00:09.25356445 +0000 UTC m=+0.528951178 container remove cc4bd55380ee579a1dce86c6f3df27e27fa2930cded74f75fdf82de0f72c773a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_pike, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:00:09 np0005558241 systemd[1]: libpod-conmon-cc4bd55380ee579a1dce86c6f3df27e27fa2930cded74f75fdf82de0f72c773a.scope: Deactivated successfully.
Dec 13 04:00:09 np0005558241 nova_compute[248510]: 2025-12-13 09:00:09.282 248514 INFO nova.compute.manager [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:00:09 np0005558241 nova_compute[248510]: 2025-12-13 09:00:09.283 248514 DEBUG oslo.service.loopingcall [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:00:09 np0005558241 nova_compute[248510]: 2025-12-13 09:00:09.285 248514 DEBUG nova.compute.manager [-] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:00:09 np0005558241 nova_compute[248510]: 2025-12-13 09:00:09.285 248514 DEBUG nova.network.neutron [-] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:00:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:00:09
Dec 13 04:00:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:00:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:00:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'vms', '.rgw.root', 'images', 'default.rgw.control', 'backups', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta']
Dec 13 04:00:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:00:09 np0005558241 podman[374315]: 2025-12-13 09:00:09.738956231 +0000 UTC m=+0.039380145 container create 7e998aba4c5f2f98c1063a1529269119f574c58c2c9b4ef300f3500b83651932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:00:09 np0005558241 systemd[1]: Started libpod-conmon-7e998aba4c5f2f98c1063a1529269119f574c58c2c9b4ef300f3500b83651932.scope.
Dec 13 04:00:09 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:00:09 np0005558241 podman[374315]: 2025-12-13 09:00:09.722398036 +0000 UTC m=+0.022821970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:00:09 np0005558241 podman[374315]: 2025-12-13 09:00:09.835477124 +0000 UTC m=+0.135901058 container init 7e998aba4c5f2f98c1063a1529269119f574c58c2c9b4ef300f3500b83651932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_burnell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:00:09 np0005558241 podman[374315]: 2025-12-13 09:00:09.842201368 +0000 UTC m=+0.142625282 container start 7e998aba4c5f2f98c1063a1529269119f574c58c2c9b4ef300f3500b83651932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 04:00:09 np0005558241 eloquent_burnell[374331]: 167 167
Dec 13 04:00:09 np0005558241 podman[374315]: 2025-12-13 09:00:09.847015726 +0000 UTC m=+0.147439690 container attach 7e998aba4c5f2f98c1063a1529269119f574c58c2c9b4ef300f3500b83651932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_burnell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:00:09 np0005558241 systemd[1]: libpod-7e998aba4c5f2f98c1063a1529269119f574c58c2c9b4ef300f3500b83651932.scope: Deactivated successfully.
Dec 13 04:00:09 np0005558241 podman[374315]: 2025-12-13 09:00:09.848600715 +0000 UTC m=+0.149024639 container died 7e998aba4c5f2f98c1063a1529269119f574c58c2c9b4ef300f3500b83651932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_burnell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 04:00:09 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c8062f44919e17d67c72a84397752dba4da91d79e71ab5b766e92f11cc6e7cb1-merged.mount: Deactivated successfully.
Dec 13 04:00:09 np0005558241 podman[374315]: 2025-12-13 09:00:09.896580159 +0000 UTC m=+0.197004073 container remove 7e998aba4c5f2f98c1063a1529269119f574c58c2c9b4ef300f3500b83651932 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 04:00:09 np0005558241 systemd[1]: libpod-conmon-7e998aba4c5f2f98c1063a1529269119f574c58c2c9b4ef300f3500b83651932.scope: Deactivated successfully.
Dec 13 04:00:10 np0005558241 podman[374354]: 2025-12-13 09:00:10.105281588 +0000 UTC m=+0.042671986 container create 700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Dec 13 04:00:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:00:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:00:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:00:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:00:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:00:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:00:10 np0005558241 systemd[1]: Started libpod-conmon-700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e.scope.
Dec 13 04:00:10 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:00:10 np0005558241 podman[374354]: 2025-12-13 09:00:10.087453941 +0000 UTC m=+0.024844359 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:00:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c011f0ede1dcabdb4b26a8b420bb3f0afad65fba1403bb4fc5ede51f885e921/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c011f0ede1dcabdb4b26a8b420bb3f0afad65fba1403bb4fc5ede51f885e921/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c011f0ede1dcabdb4b26a8b420bb3f0afad65fba1403bb4fc5ede51f885e921/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c011f0ede1dcabdb4b26a8b420bb3f0afad65fba1403bb4fc5ede51f885e921/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:10 np0005558241 podman[374354]: 2025-12-13 09:00:10.205688186 +0000 UTC m=+0.143078614 container init 700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bohr, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:00:10 np0005558241 podman[374354]: 2025-12-13 09:00:10.214801059 +0000 UTC m=+0.152191477 container start 700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:00:10 np0005558241 podman[374354]: 2025-12-13 09:00:10.218500989 +0000 UTC m=+0.155891407 container attach 700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bohr, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:00:10 np0005558241 nova_compute[248510]: 2025-12-13 09:00:10.384 248514 DEBUG nova.network.neutron [-] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:00:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2972: 321 pgs: 321 active+clean; 156 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1005 KiB/s wr, 28 op/s
Dec 13 04:00:10 np0005558241 nova_compute[248510]: 2025-12-13 09:00:10.411 248514 INFO nova.compute.manager [-] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Took 1.13 seconds to deallocate network for instance.#033[00m
Dec 13 04:00:10 np0005558241 nova_compute[248510]: 2025-12-13 09:00:10.482 248514 DEBUG oslo_concurrency.lockutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:10 np0005558241 nova_compute[248510]: 2025-12-13 09:00:10.482 248514 DEBUG oslo_concurrency.lockutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:10 np0005558241 nova_compute[248510]: 2025-12-13 09:00:10.567 248514 DEBUG oslo_concurrency.processutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:00:10 np0005558241 lvm[374467]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:00:10 np0005558241 lvm[374467]: VG ceph_vg0 finished
Dec 13 04:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:00:10 np0005558241 lvm[374469]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:00:10 np0005558241 lvm[374469]: VG ceph_vg1 finished
Dec 13 04:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:00:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:00:10 np0005558241 lvm[374471]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:00:10 np0005558241 lvm[374471]: VG ceph_vg2 finished
Dec 13 04:00:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:00:10 np0005558241 stoic_bohr[374371]: {}
Dec 13 04:00:11 np0005558241 systemd[1]: libpod-700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e.scope: Deactivated successfully.
Dec 13 04:00:11 np0005558241 systemd[1]: libpod-700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e.scope: Consumed 1.308s CPU time.
Dec 13 04:00:11 np0005558241 podman[374354]: 2025-12-13 09:00:11.0164126 +0000 UTC m=+0.953802998 container died 700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bohr, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:00:11 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6c011f0ede1dcabdb4b26a8b420bb3f0afad65fba1403bb4fc5ede51f885e921-merged.mount: Deactivated successfully.
Dec 13 04:00:11 np0005558241 nova_compute[248510]: 2025-12-13 09:00:11.066 248514 DEBUG nova.compute.manager [req-2c2c5291-e027-4e27-a02c-f8a331595828 req-4b24c530-ccb2-4124-ab64-92b249c08c1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Received event network-vif-plugged-d03635fc-13b4-44c2-baca-088d0efb07d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:11 np0005558241 nova_compute[248510]: 2025-12-13 09:00:11.066 248514 DEBUG oslo_concurrency.lockutils [req-2c2c5291-e027-4e27-a02c-f8a331595828 req-4b24c530-ccb2-4124-ab64-92b249c08c1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:11 np0005558241 nova_compute[248510]: 2025-12-13 09:00:11.066 248514 DEBUG oslo_concurrency.lockutils [req-2c2c5291-e027-4e27-a02c-f8a331595828 req-4b24c530-ccb2-4124-ab64-92b249c08c1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:11 np0005558241 nova_compute[248510]: 2025-12-13 09:00:11.067 248514 DEBUG oslo_concurrency.lockutils [req-2c2c5291-e027-4e27-a02c-f8a331595828 req-4b24c530-ccb2-4124-ab64-92b249c08c1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:11 np0005558241 nova_compute[248510]: 2025-12-13 09:00:11.067 248514 DEBUG nova.compute.manager [req-2c2c5291-e027-4e27-a02c-f8a331595828 req-4b24c530-ccb2-4124-ab64-92b249c08c1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] No waiting events found dispatching network-vif-plugged-d03635fc-13b4-44c2-baca-088d0efb07d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:00:11 np0005558241 nova_compute[248510]: 2025-12-13 09:00:11.067 248514 WARNING nova.compute.manager [req-2c2c5291-e027-4e27-a02c-f8a331595828 req-4b24c530-ccb2-4124-ab64-92b249c08c1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Received unexpected event network-vif-plugged-d03635fc-13b4-44c2-baca-088d0efb07d9 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:00:11 np0005558241 nova_compute[248510]: 2025-12-13 09:00:11.067 248514 DEBUG nova.compute.manager [req-2c2c5291-e027-4e27-a02c-f8a331595828 req-4b24c530-ccb2-4124-ab64-92b249c08c1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Received event network-vif-deleted-d03635fc-13b4-44c2-baca-088d0efb07d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:11 np0005558241 podman[374354]: 2025-12-13 09:00:11.081510594 +0000 UTC m=+1.018900992 container remove 700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 04:00:11 np0005558241 systemd[1]: libpod-conmon-700e8a1e605cc9de58981cdb20f8813ddfe1cde89c18521f5407a8b9bd5fa10e.scope: Deactivated successfully.
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/502021258' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:00:11 np0005558241 nova_compute[248510]: 2025-12-13 09:00:11.200 248514 DEBUG oslo_concurrency.processutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:00:11 np0005558241 nova_compute[248510]: 2025-12-13 09:00:11.207 248514 DEBUG nova.compute.provider_tree [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:00:11 np0005558241 nova_compute[248510]: 2025-12-13 09:00:11.228 248514 DEBUG nova.scheduler.client.report [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:00:11 np0005558241 nova_compute[248510]: 2025-12-13 09:00:11.257 248514 DEBUG oslo_concurrency.lockutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:11 np0005558241 nova_compute[248510]: 2025-12-13 09:00:11.287 248514 INFO nova.scheduler.client.report [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance decfc3d0-e424-4304-8b3c-51daa9bd0fb6#033[00m
Dec 13 04:00:11 np0005558241 nova_compute[248510]: 2025-12-13 09:00:11.395 248514 DEBUG oslo_concurrency.lockutils [None req-5eb682fd-ef80-48c3-aa6d-6f29333c0b94 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "decfc3d0-e424-4304-8b3c-51daa9bd0fb6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.495800) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616411495881, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 702, "num_deletes": 251, "total_data_size": 909139, "memory_usage": 921944, "flush_reason": "Manual Compaction"}
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616411505680, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 893611, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58576, "largest_seqno": 59277, "table_properties": {"data_size": 889917, "index_size": 1537, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8365, "raw_average_key_size": 19, "raw_value_size": 882549, "raw_average_value_size": 2052, "num_data_blocks": 68, "num_entries": 430, "num_filter_entries": 430, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765616356, "oldest_key_time": 1765616356, "file_creation_time": 1765616411, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 9917 microseconds, and 4614 cpu microseconds.
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.505730) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 893611 bytes OK
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.505751) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.507180) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.507194) EVENT_LOG_v1 {"time_micros": 1765616411507189, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.507213) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 905486, prev total WAL file size 905486, number of live WAL files 2.
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.507937) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(872KB)], [137(9607KB)]
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616411508030, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 10731816, "oldest_snapshot_seqno": -1}
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 7811 keys, 8926402 bytes, temperature: kUnknown
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616411574844, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 8926402, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8877650, "index_size": 28095, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19589, "raw_key_size": 205369, "raw_average_key_size": 26, "raw_value_size": 8741418, "raw_average_value_size": 1119, "num_data_blocks": 1081, "num_entries": 7811, "num_filter_entries": 7811, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765616411, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.575151) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 8926402 bytes
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.576929) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.4 rd, 133.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.4 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(22.0) write-amplify(10.0) OK, records in: 8324, records dropped: 513 output_compression: NoCompression
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.576951) EVENT_LOG_v1 {"time_micros": 1765616411576941, "job": 84, "event": "compaction_finished", "compaction_time_micros": 66894, "compaction_time_cpu_micros": 23673, "output_level": 6, "num_output_files": 1, "total_output_size": 8926402, "num_input_records": 8324, "num_output_records": 7811, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616411577270, "job": 84, "event": "table_file_deletion", "file_number": 139}
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616411579607, "job": 84, "event": "table_file_deletion", "file_number": 137}
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.507799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.579701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.579709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.579710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.579712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:00:11 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:00:11.579714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.085 248514 DEBUG oslo_concurrency.lockutils [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "interface-3a1e4b45-e8c6-41c4-8890-cb0775cb5786-a30b0da9-1ee1-4092-a86b-5fa66fe76492" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.086 248514 DEBUG oslo_concurrency.lockutils [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "interface-3a1e4b45-e8c6-41c4-8890-cb0775cb5786-a30b0da9-1ee1-4092-a86b-5fa66fe76492" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.106 248514 DEBUG nova.objects.instance [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'flavor' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.128 248514 DEBUG nova.virt.libvirt.vif [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1459774770',display_name='tempest-TestNetworkBasicOps-server-1459774770',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1459774770',id=124,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzdfnmkPECxEd+pmNHaznMbbUZ3fogbtIZFn1m4F0NrMdOCSnDuSrNqH1Qsxb1Ch2xV1rWjSGPlN7pyb3Hosb/a0AqdV4TvL3CfEX0F6WeXGg966JbzLuasErXM1xyHqw==',key_name='tempest-TestNetworkBasicOps-1902119774',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:58:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mq52krma',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:58:49Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=3a1e4b45-e8c6-41c4-8890-cb0775cb5786,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.129 248514 DEBUG nova.network.os_vif_util [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.130 248514 DEBUG nova.network.os_vif_util [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.135 248514 DEBUG nova.virt.libvirt.guest [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.138 248514 DEBUG nova.virt.libvirt.guest [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.143 248514 DEBUG nova.virt.libvirt.driver [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Attempting to detach device tapa30b0da9-1e from instance 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.143 248514 DEBUG nova.virt.libvirt.guest [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] detach device xml: <interface type="ethernet">
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:65:10:e4"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <target dev="tapa30b0da9-1e"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]: </interface>
Dec 13 04:00:12 np0005558241 nova_compute[248510]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.150 248514 DEBUG nova.virt.libvirt.guest [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.157 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.158 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.160 248514 DEBUG nova.virt.libvirt.guest [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface>not found in domain: <domain type='kvm' id='151'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <name>instance-0000007c</name>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <uuid>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</uuid>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:name>tempest-TestNetworkBasicOps-server-1459774770</nova:name>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:59:25</nova:creationTime>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:port uuid="3abb490c-6aad-47d4-8200-febd480ac7db">
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:port uuid="a30b0da9-1ee1-4092-a86b-5fa66fe76492">
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 04:00:12 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <resource>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </resource>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <entry name='serial'>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <entry name='uuid'>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk' index='2'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config' index='1'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:a8:9a:b3'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target dev='tap3abb490c-6a'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:65:10:e4'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target dev='tapa30b0da9-1e'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='net1'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log' append='off'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      </target>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/0'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log' append='off'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </console>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </input>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </input>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </input>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c131,c649</label>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c131,c649</imagelabel>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 04:00:12 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:00:12 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.160 248514 INFO nova.virt.libvirt.driver [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully detached device tapa30b0da9-1e from instance 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 from the persistent domain config.
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.160 248514 DEBUG nova.virt.libvirt.driver [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] (1/8): Attempting to detach device tapa30b0da9-1e with device alias net1 from instance 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.161 248514 DEBUG nova.virt.libvirt.guest [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] detach device xml: <interface type="ethernet">
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <mac address="fa:16:3e:65:10:e4"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <model type="virtio"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <mtu size="1442"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <target dev="tapa30b0da9-1e"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]: </interface>
Dec 13 04:00:12 np0005558241 nova_compute[248510]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.176 248514 DEBUG nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.250 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.251 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.259 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.259 248514 INFO nova.compute.claims [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:00:12 np0005558241 kernel: tapa30b0da9-1e (unregistering): left promiscuous mode
Dec 13 04:00:12 np0005558241 NetworkManager[50376]: <info>  [1765616412.2733] device (tapa30b0da9-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:00:12 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:12Z|01251|binding|INFO|Releasing lport a30b0da9-1ee1-4092-a86b-5fa66fe76492 from this chassis (sb_readonly=0)
Dec 13 04:00:12 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:12Z|01252|binding|INFO|Setting lport a30b0da9-1ee1-4092-a86b-5fa66fe76492 down in Southbound
Dec 13 04:00:12 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:12Z|01253|binding|INFO|Removing iface tapa30b0da9-1e ovn-installed in OVS
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.283 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.287 248514 DEBUG nova.virt.libvirt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Received event <DeviceRemovedEvent: 1765616412.2851186, 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.288 248514 DEBUG nova.virt.libvirt.driver [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Start waiting for the detach event from libvirt for device tapa30b0da9-1e with device alias net1 for instance 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.289 248514 DEBUG nova.virt.libvirt.guest [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.292 248514 DEBUG nova.virt.libvirt.guest [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface>not found in domain: <domain type='kvm' id='151'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <name>instance-0000007c</name>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <uuid>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</uuid>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:name>tempest-TestNetworkBasicOps-server-1459774770</nova:name>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 08:59:25</nova:creationTime>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:port uuid="3abb490c-6aad-47d4-8200-febd480ac7db">
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:port uuid="a30b0da9-1ee1-4092-a86b-5fa66fe76492">
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.18" ipVersion="4"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 04:00:12 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <resource>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </resource>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <entry name='serial'>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <entry name='uuid'>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk' index='2'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config' index='1'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:a8:9a:b3'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target dev='tap3abb490c-6a'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log' append='off'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      </target>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/0'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log' append='off'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </console>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </input>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </input>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </input>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c131,c649</label>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c131,c649</imagelabel>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 04:00:12 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:00:12 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.293 248514 INFO nova.virt.libvirt.driver [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully detached device tapa30b0da9-1e from instance 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 from the live domain config.
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.294 248514 DEBUG nova.virt.libvirt.vif [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1459774770',display_name='tempest-TestNetworkBasicOps-server-1459774770',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1459774770',id=124,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzdfnmkPECxEd+pmNHaznMbbUZ3fogbtIZFn1m4F0NrMdOCSnDuSrNqH1Qsxb1Ch2xV1rWjSGPlN7pyb3Hosb/a0AqdV4TvL3CfEX0F6WeXGg966JbzLuasErXM1xyHqw==',key_name='tempest-TestNetworkBasicOps-1902119774',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:58:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mq52krma',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:58:49Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=3a1e4b45-e8c6-41c4-8890-cb0775cb5786,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.294 248514 DEBUG nova.network.os_vif_util [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:00:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.296 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:10:e4 10.100.0.18', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': '3a1e4b45-e8c6-41c4-8890-cb0775cb5786', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16fae4da-5722-4e42-b101-44d9ef244421', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9deccf5-6c40-469a-b45c-630adf312a35, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=a30b0da9-1ee1-4092-a86b-5fa66fe76492) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.295 248514 DEBUG nova.network.os_vif_util [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.295 248514 DEBUG os_vif [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:00:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.297 158419 INFO neutron.agent.ovn.metadata.agent [-] Port a30b0da9-1ee1-4092-a86b-5fa66fe76492 in datapath 16fae4da-5722-4e42-b101-44d9ef244421 unbound from our chassis
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.299 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:00:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.299 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 16fae4da-5722-4e42-b101-44d9ef244421, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.300 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa30b0da9-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:00:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.300 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[225d2f35-5c80-4aad-9732-eb4369a37875]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:00:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.301 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421 namespace which is not needed anymore
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.302 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.303 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.307 248514 INFO os_vif [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e')
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.308 248514 DEBUG nova.virt.libvirt.guest [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:name>tempest-TestNetworkBasicOps-server-1459774770</nova:name>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 09:00:12</nova:creationTime>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    <nova:port uuid="3abb490c-6aad-47d4-8200-febd480ac7db">
Dec 13 04:00:12 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 04:00:12 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 04:00:12 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 04:00:12 np0005558241 nova_compute[248510]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec 13 04:00:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2973: 321 pgs: 321 active+clean; 156 MiB data, 929 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 22 KiB/s wr, 14 op/s
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.415 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:00:12 np0005558241 neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421[373198]: [NOTICE]   (373202) : haproxy version is 2.8.14-c23fe91
Dec 13 04:00:12 np0005558241 neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421[373198]: [NOTICE]   (373202) : path to executable is /usr/sbin/haproxy
Dec 13 04:00:12 np0005558241 neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421[373198]: [WARNING]  (373202) : Exiting Master process...
Dec 13 04:00:12 np0005558241 neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421[373198]: [WARNING]  (373202) : Exiting Master process...
Dec 13 04:00:12 np0005558241 neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421[373198]: [ALERT]    (373202) : Current worker (373204) exited with code 143 (Terminated)
Dec 13 04:00:12 np0005558241 neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421[373198]: [WARNING]  (373202) : All workers exited. Exiting... (0)
Dec 13 04:00:12 np0005558241 systemd[1]: libpod-221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040.scope: Deactivated successfully.
Dec 13 04:00:12 np0005558241 conmon[373198]: conmon 221fe09bf91523ff8393 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040.scope/container/memory.events
Dec 13 04:00:12 np0005558241 podman[374538]: 2025-12-13 09:00:12.451807555 +0000 UTC m=+0.052262810 container died 221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:00:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040-userdata-shm.mount: Deactivated successfully.
Dec 13 04:00:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7b75e401be1e6964868d0a53502eed4a247827032e021c62c59365aba03a75fa-merged.mount: Deactivated successfully.
Dec 13 04:00:12 np0005558241 podman[374538]: 2025-12-13 09:00:12.500296452 +0000 UTC m=+0.100751707 container cleanup 221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 04:00:12 np0005558241 systemd[1]: libpod-conmon-221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040.scope: Deactivated successfully.
Dec 13 04:00:12 np0005558241 podman[374565]: 2025-12-13 09:00:12.562512345 +0000 UTC m=+0.041483606 container remove 221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:00:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.571 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4faf5dbb-ba56-405a-b8cc-02e938670f78]: (4, ('Sat Dec 13 09:00:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421 (221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040)\n221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040\nSat Dec 13 09:00:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421 (221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040)\n221fe09bf91523ff8393c744b7a91a38f3c6f043bb11d287ead9e3d83753b040\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.573 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e22784-372f-471e-b4ca-6e18a14d53e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.573 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16fae4da-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:12 np0005558241 kernel: tap16fae4da-50: left promiscuous mode
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.578 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.597 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.599 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb24145-5a24-40ff-a5af-96b9787c8277]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.613 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[97fe68d7-616d-44ac-9d01-8b4b7fbf80a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.615 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a3693d1b-0b98-4898-9ba1-a8266320362e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.635 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78f5a568-0bac-4011-af87-3532fd0138e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 894277, 'reachable_time': 30742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374599, 'error': None, 'target': 'ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:12 np0005558241 systemd[1]: run-netns-ovnmeta\x2d16fae4da\x2d5722\x2d4e42\x2db101\x2d44d9ef244421.mount: Deactivated successfully.
Dec 13 04:00:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.639 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-16fae4da-5722-4e42-b101-44d9ef244421 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:00:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:12.639 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[507c4d7c-528e-4d3c-8303-925adf0d27f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.744 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.927 248514 DEBUG oslo_concurrency.lockutils [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.928 248514 DEBUG oslo_concurrency.lockutils [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.928 248514 DEBUG nova.network.neutron [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:00:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:00:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2768523932' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.985 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:00:12 np0005558241 nova_compute[248510]: 2025-12-13 09:00:12.993 248514 DEBUG nova.compute.provider_tree [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.011 248514 DEBUG nova.scheduler.client.report [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.033 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.034 248514 DEBUG nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.078 248514 DEBUG nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.079 248514 DEBUG nova.network.neutron [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.098 248514 INFO nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.126 248514 DEBUG nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.171 248514 DEBUG nova.compute.manager [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-vif-deleted-a30b0da9-1ee1-4092-a86b-5fa66fe76492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.172 248514 INFO nova.compute.manager [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Neutron deleted interface a30b0da9-1ee1-4092-a86b-5fa66fe76492; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.172 248514 DEBUG nova.network.neutron [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.214 248514 DEBUG nova.objects.instance [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lazy-loading 'system_metadata' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.258 248514 DEBUG nova.objects.instance [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lazy-loading 'flavor' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.277 248514 DEBUG nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.280 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.281 248514 INFO nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Creating image(s)#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.314 248514 DEBUG nova.storage.rbd_utils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.343 248514 DEBUG nova.storage.rbd_utils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.370 248514 DEBUG nova.storage.rbd_utils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.374 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.424 248514 DEBUG nova.virt.libvirt.vif [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1459774770',display_name='tempest-TestNetworkBasicOps-server-1459774770',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1459774770',id=124,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzdfnmkPECxEd+pmNHaznMbbUZ3fogbtIZFn1m4F0NrMdOCSnDuSrNqH1Qsxb1Ch2xV1rWjSGPlN7pyb3Hosb/a0AqdV4TvL3CfEX0F6WeXGg966JbzLuasErXM1xyHqw==',key_name='tempest-TestNetworkBasicOps-1902119774',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:58:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mq52krma',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:58:49Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=3a1e4b45-e8c6-41c4-8890-cb0775cb5786,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.424 248514 DEBUG nova.network.os_vif_util [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converting VIF {"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.425 248514 DEBUG nova.network.os_vif_util [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.429 248514 DEBUG nova.virt.libvirt.guest [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.434 248514 DEBUG nova.virt.libvirt.guest [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface>not found in domain: <domain type='kvm' id='151'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <name>instance-0000007c</name>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <uuid>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</uuid>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:name>tempest-TestNetworkBasicOps-server-1459774770</nova:name>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 09:00:12</nova:creationTime>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:port uuid="3abb490c-6aad-47d4-8200-febd480ac7db">
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 04:00:13 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <resource>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </resource>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <entry name='serial'>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <entry name='uuid'>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk' index='2'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config' index='1'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:a8:9a:b3'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target dev='tap3abb490c-6a'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log' append='off'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      </target>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/0'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log' append='off'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </console>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </input>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </input>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </input>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c131,c649</label>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c131,c649</imagelabel>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 04:00:13 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:00:13 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.435 248514 DEBUG nova.virt.libvirt.guest [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.439 248514 DEBUG nova.virt.libvirt.guest [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:65:10:e4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa30b0da9-1e"/></interface>not found in domain: <domain type='kvm' id='151'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <name>instance-0000007c</name>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <uuid>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</uuid>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:name>tempest-TestNetworkBasicOps-server-1459774770</nova:name>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 09:00:12</nova:creationTime>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:port uuid="3abb490c-6aad-47d4-8200-febd480ac7db">
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 04:00:13 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <memory unit='KiB'>131072</memory>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <currentMemory unit='KiB'>131072</currentMemory>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <vcpu placement='static'>1</vcpu>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <resource>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <partition>/machine</partition>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </resource>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <sysinfo type='smbios'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <entry name='manufacturer'>RDO</entry>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <entry name='product'>OpenStack Compute</entry>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <entry name='serial'>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <entry name='uuid'>3a1e4b45-e8c6-41c4-8890-cb0775cb5786</entry>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <entry name='family'>Virtual Machine</entry>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <boot dev='hd'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <smbios mode='sysinfo'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <vmcoreinfo state='on'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <cpu mode='custom' match='exact' check='full'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <model fallback='forbid'>EPYC-Rome</model>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <vendor>AMD</vendor>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='x2apic'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc-deadline'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='hypervisor'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='tsc_adjust'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='spec-ctrl'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='stibp'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='ssbd'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='cmp_legacy'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='overflow-recov'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='succor'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='ibrs'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='amd-ssbd'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='virt-ssbd'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='lbrv'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='tsc-scale'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='vmcb-clean'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='flushbyasid'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pause-filter'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='pfthreshold'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svme-addr-chk'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='lfence-always-serializing'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='xsaves'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='svm'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='require' name='topoext'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='npt'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <feature policy='disable' name='nrip-save'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <clock offset='utc'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <timer name='pit' tickpolicy='delay'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <timer name='rtc' tickpolicy='catchup'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <timer name='hpet' present='no'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <on_poweroff>destroy</on_poweroff>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <on_reboot>restart</on_reboot>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <on_crash>destroy</on_crash>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <disk type='network' device='disk'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk' index='2'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target dev='vda' bus='virtio'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='virtio-disk0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <disk type='network' device='cdrom'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <driver name='qemu' type='raw' cache='none'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <auth username='openstack'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:        <secret type='ceph' uuid='18ee9de6-e00b-571b-ab9b-b7aab06174df'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <source protocol='rbd' name='vms/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_disk.config' index='1'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:        <host name='192.168.122.100' port='6789'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target dev='sda' bus='sata'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <readonly/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='sata0-0-0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='0' model='pcie-root'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pcie.0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='1' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='1' port='0x10'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.1'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='2' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='2' port='0x11'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.2'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='3' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='3' port='0x12'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.3'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='4' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='4' port='0x13'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.4'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='5' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='5' port='0x14'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.5'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='6' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='6' port='0x15'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.6'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='7' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='7' port='0x16'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.7'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='8' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='8' port='0x17'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.8'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='9' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='9' port='0x18'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.9'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='10' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='10' port='0x19'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.10'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='11' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='11' port='0x1a'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.11'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='12' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='12' port='0x1b'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.12'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='13' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='13' port='0x1c'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.13'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='14' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='14' port='0x1d'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.14'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='15' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='15' port='0x1e'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.15'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='16' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='16' port='0x1f'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.16'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='17' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='17' port='0x20'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.17'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='18' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='18' port='0x21'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.18'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='19' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='19' port='0x22'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.19'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='20' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='20' port='0x23'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.20'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='21' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='21' port='0x24'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.21'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='22' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='22' port='0x25'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.22'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='23' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='23' port='0x26'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.23'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='24' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='24' port='0x27'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.24'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='25' model='pcie-root-port'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-root-port'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target chassis='25' port='0x28'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.25'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model name='pcie-pci-bridge'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='pci.26'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='usb' index='0' model='piix3-uhci'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='usb'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <controller type='sata' index='0'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='ide'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </controller>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <interface type='ethernet'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <mac address='fa:16:3e:a8:9a:b3'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target dev='tap3abb490c-6a'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model type='virtio'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <driver name='vhost' rx_queue_size='512'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <mtu size='1442'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='net0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <serial type='pty'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log' append='off'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target type='isa-serial' port='0'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:        <model name='isa-serial'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      </target>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <console type='pty' tty='/dev/pts/0'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <source path='/dev/pts/0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <log file='/var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786/console.log' append='off'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <target type='serial' port='0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='serial0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </console>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <input type='tablet' bus='usb'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='input0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='usb' bus='0' port='1'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </input>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <input type='mouse' bus='ps2'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='input1'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </input>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <input type='keyboard' bus='ps2'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='input2'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </input>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <listen type='address' address='::0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </graphics>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <audio id='1' type='none'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <model type='virtio' heads='1' primary='yes'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='video0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <watchdog model='itco' action='reset'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='watchdog0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </watchdog>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <memballoon model='virtio'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <stats period='10'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='balloon0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <rng model='virtio'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <backend model='random'>/dev/urandom</backend>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <alias name='rng0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <label>system_u:system_r:svirt_t:s0:c131,c649</label>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c131,c649</imagelabel>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <label>+107:+107</label>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <imagelabel>+107:+107</imagelabel>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </seclabel>
Dec 13 04:00:13 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:00:13 np0005558241 nova_compute[248510]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.440 248514 WARNING nova.virt.libvirt.driver [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Detaching interface fa:16:3e:65:10:e4 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapa30b0da9-1e' not found.#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.441 248514 DEBUG nova.virt.libvirt.vif [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1459774770',display_name='tempest-TestNetworkBasicOps-server-1459774770',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1459774770',id=124,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzdfnmkPECxEd+pmNHaznMbbUZ3fogbtIZFn1m4F0NrMdOCSnDuSrNqH1Qsxb1Ch2xV1rWjSGPlN7pyb3Hosb/a0AqdV4TvL3CfEX0F6WeXGg966JbzLuasErXM1xyHqw==',key_name='tempest-TestNetworkBasicOps-1902119774',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:58:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mq52krma',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:58:49Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=3a1e4b45-e8c6-41c4-8890-cb0775cb5786,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.441 248514 DEBUG nova.network.os_vif_util [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converting VIF {"id": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "address": "fa:16:3e:65:10:e4", "network": {"id": "16fae4da-5722-4e42-b101-44d9ef244421", "bridge": "br-int", "label": "tempest-network-smoke--1410128231", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.18", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa30b0da9-1e", "ovs_interfaceid": "a30b0da9-1ee1-4092-a86b-5fa66fe76492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.441 248514 DEBUG nova.network.os_vif_util [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.442 248514 DEBUG os_vif [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.444 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.444 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa30b0da9-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.444 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.447 248514 INFO os_vif [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:10:e4,bridge_name='br-int',has_traffic_filtering=True,id=a30b0da9-1ee1-4092-a86b-5fa66fe76492,network=Network(16fae4da-5722-4e42-b101-44d9ef244421),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa30b0da9-1e')#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.448 248514 DEBUG nova.virt.libvirt.guest [req-505e5161-cc86-4c3c-b4e6-07cec458b457 req-e5d9ccc9-3085-48b6-9709-49a0ff076ece 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:name>tempest-TestNetworkBasicOps-server-1459774770</nova:name>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:creationTime>2025-12-13 09:00:13</nova:creationTime>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:flavor name="m1.nano">
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:memory>128</nova:memory>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:disk>1</nova:disk>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:swap>0</nova:swap>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:vcpus>1</nova:vcpus>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </nova:flavor>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:owner>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </nova:owner>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  <nova:ports>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    <nova:port uuid="3abb490c-6aad-47d4-8200-febd480ac7db">
Dec 13 04:00:13 np0005558241 nova_compute[248510]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:    </nova:port>
Dec 13 04:00:13 np0005558241 nova_compute[248510]:  </nova:ports>
Dec 13 04:00:13 np0005558241 nova_compute[248510]: </nova:instance>
Dec 13 04:00:13 np0005558241 nova_compute[248510]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.473 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.474 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.475 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.475 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.501 248514 DEBUG nova.storage.rbd_utils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.507 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.802 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.874 248514 DEBUG nova.storage.rbd_utils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] resizing rbd image 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:00:13 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.968 248514 DEBUG nova.objects.instance [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'migration_context' on Instance uuid 9a653f54-8266-4f9c-ac5a-b4991534e9fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:00:14 np0005558241 nova_compute[248510]: 2025-12-13 09:00:13.999 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:00:14 np0005558241 nova_compute[248510]: 2025-12-13 09:00:14.000 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Ensure instance console log exists: /var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:00:14 np0005558241 nova_compute[248510]: 2025-12-13 09:00:14.000 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:14 np0005558241 nova_compute[248510]: 2025-12-13 09:00:14.001 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:14 np0005558241 nova_compute[248510]: 2025-12-13 09:00:14.001 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:14 np0005558241 nova_compute[248510]: 2025-12-13 09:00:14.146 248514 DEBUG nova.policy [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a5fd410579fa429ba0f7f680590cd86a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '77d40177a4804671aa9c5da343bc2ed4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:00:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2974: 321 pgs: 321 active+clean; 146 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Dec 13 04:00:14 np0005558241 nova_compute[248510]: 2025-12-13 09:00:14.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:00:14 np0005558241 nova_compute[248510]: 2025-12-13 09:00:14.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:00:14 np0005558241 nova_compute[248510]: 2025-12-13 09:00:14.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:00:14 np0005558241 nova_compute[248510]: 2025-12-13 09:00:14.794 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec 13 04:00:14 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:14Z|01254|binding|INFO|Releasing lport 4a84c557-65ab-478a-95fb-44e9a95becd6 from this chassis (sb_readonly=0)
Dec 13 04:00:14 np0005558241 nova_compute[248510]: 2025-12-13 09:00:14.895 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:15 np0005558241 nova_compute[248510]: 2025-12-13 09:00:15.018 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:00:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:00:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1728748001' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:00:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:00:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1728748001' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:00:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.156 248514 INFO nova.network.neutron [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Port a30b0da9-1ee1-4092-a86b-5fa66fe76492 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.157 248514 DEBUG nova.network.neutron [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.181 248514 DEBUG oslo_concurrency.lockutils [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.185 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.185 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.186 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.202 248514 DEBUG nova.network.neutron [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Successfully created port: da5ed241-e9aa-44e8-ac5a-7dcb30431922 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.223 248514 DEBUG oslo_concurrency.lockutils [None req-3d71ac81-e261-4a07-a3b7-0cb6cd3396e4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "interface-3a1e4b45-e8c6-41c4-8890-cb0775cb5786-a30b0da9-1ee1-4092-a86b-5fa66fe76492" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2975: 321 pgs: 321 active+clean; 146 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.4 MiB/s wr, 41 op/s
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.459 248514 DEBUG oslo_concurrency.lockutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.459 248514 DEBUG oslo_concurrency.lockutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.460 248514 DEBUG oslo_concurrency.lockutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.460 248514 DEBUG oslo_concurrency.lockutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.460 248514 DEBUG oslo_concurrency.lockutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.462 248514 INFO nova.compute.manager [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Terminating instance#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.463 248514 DEBUG nova.compute.manager [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:00:16 np0005558241 kernel: tap3abb490c-6a (unregistering): left promiscuous mode
Dec 13 04:00:16 np0005558241 NetworkManager[50376]: <info>  [1765616416.5063] device (tap3abb490c-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.513 248514 DEBUG nova.compute.manager [req-71235521-175a-4425-bfdf-bd21bac2c6a2 req-a60e6e6a-e9b6-47e0-afed-87caf95f21d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-changed-3abb490c-6aad-47d4-8200-febd480ac7db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.513 248514 DEBUG nova.compute.manager [req-71235521-175a-4425-bfdf-bd21bac2c6a2 req-a60e6e6a-e9b6-47e0-afed-87caf95f21d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing instance network info cache due to event network-changed-3abb490c-6aad-47d4-8200-febd480ac7db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.513 248514 DEBUG oslo_concurrency.lockutils [req-71235521-175a-4425-bfdf-bd21bac2c6a2 req-a60e6e6a-e9b6-47e0-afed-87caf95f21d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.514 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:16 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:16Z|01255|binding|INFO|Releasing lport 3abb490c-6aad-47d4-8200-febd480ac7db from this chassis (sb_readonly=0)
Dec 13 04:00:16 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:16Z|01256|binding|INFO|Setting lport 3abb490c-6aad-47d4-8200-febd480ac7db down in Southbound
Dec 13 04:00:16 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:16Z|01257|binding|INFO|Removing iface tap3abb490c-6a ovn-installed in OVS
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.516 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.521 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:9a:b3 10.100.0.12'], port_security=['fa:16:3e:a8:9a:b3 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3a1e4b45-e8c6-41c4-8890-cb0775cb5786', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a303ed13-8629-4259-965d-e42689484f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03b6e291-05b2-4f8e-8278-d579b0a4e692', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b539ac67-be97-4028-97af-147cf6ca090d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3abb490c-6aad-47d4-8200-febd480ac7db) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:00:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.523 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3abb490c-6aad-47d4-8200-febd480ac7db in datapath a303ed13-8629-4259-965d-e42689484f38 unbound from our chassis#033[00m
Dec 13 04:00:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.526 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a303ed13-8629-4259-965d-e42689484f38, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:00:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.528 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[39104070-ddbf-48c0-962f-bca7aca083ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.529 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a303ed13-8629-4259-965d-e42689484f38 namespace which is not needed anymore#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.533 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:16 np0005558241 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Dec 13 04:00:16 np0005558241 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d0000007c.scope: Consumed 16.373s CPU time.
Dec 13 04:00:16 np0005558241 systemd-machined[210538]: Machine qemu-151-instance-0000007c terminated.
Dec 13 04:00:16 np0005558241 neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38[371924]: [NOTICE]   (371928) : haproxy version is 2.8.14-c23fe91
Dec 13 04:00:16 np0005558241 neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38[371924]: [NOTICE]   (371928) : path to executable is /usr/sbin/haproxy
Dec 13 04:00:16 np0005558241 neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38[371924]: [WARNING]  (371928) : Exiting Master process...
Dec 13 04:00:16 np0005558241 neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38[371924]: [WARNING]  (371928) : Exiting Master process...
Dec 13 04:00:16 np0005558241 neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38[371924]: [ALERT]    (371928) : Current worker (371930) exited with code 143 (Terminated)
Dec 13 04:00:16 np0005558241 neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38[371924]: [WARNING]  (371928) : All workers exited. Exiting... (0)
Dec 13 04:00:16 np0005558241 systemd[1]: libpod-443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565.scope: Deactivated successfully.
Dec 13 04:00:16 np0005558241 podman[374793]: 2025-12-13 09:00:16.680521414 +0000 UTC m=+0.050613070 container died 443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 04:00:16 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565-userdata-shm.mount: Deactivated successfully.
Dec 13 04:00:16 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8e10e5c7fc25c7b540b3e7c168ca8f4252eb841eb00fc201c536aa1133b3617a-merged.mount: Deactivated successfully.
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.707 248514 INFO nova.virt.libvirt.driver [-] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Instance destroyed successfully.#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.712 248514 DEBUG nova.objects.instance [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:00:16 np0005558241 podman[374793]: 2025-12-13 09:00:16.721256741 +0000 UTC m=+0.091348387 container cleanup 443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.728 248514 DEBUG nova.virt.libvirt.vif [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T08:58:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1459774770',display_name='tempest-TestNetworkBasicOps-server-1459774770',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1459774770',id=124,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzdfnmkPECxEd+pmNHaznMbbUZ3fogbtIZFn1m4F0NrMdOCSnDuSrNqH1Qsxb1Ch2xV1rWjSGPlN7pyb3Hosb/a0AqdV4TvL3CfEX0F6WeXGg966JbzLuasErXM1xyHqw==',key_name='tempest-TestNetworkBasicOps-1902119774',keypairs=<?>,launch_index=0,launched_at=2025-12-13T08:58:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mq52krma',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T08:58:49Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=3a1e4b45-e8c6-41c4-8890-cb0775cb5786,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.729 248514 DEBUG nova.network.os_vif_util [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.730 248514 DEBUG nova.network.os_vif_util [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:9a:b3,bridge_name='br-int',has_traffic_filtering=True,id=3abb490c-6aad-47d4-8200-febd480ac7db,network=Network(a303ed13-8629-4259-965d-e42689484f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3abb490c-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.730 248514 DEBUG os_vif [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:9a:b3,bridge_name='br-int',has_traffic_filtering=True,id=3abb490c-6aad-47d4-8200-febd480ac7db,network=Network(a303ed13-8629-4259-965d-e42689484f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3abb490c-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.732 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.733 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3abb490c-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.734 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.737 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.740 248514 INFO os_vif [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:9a:b3,bridge_name='br-int',has_traffic_filtering=True,id=3abb490c-6aad-47d4-8200-febd480ac7db,network=Network(a303ed13-8629-4259-965d-e42689484f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3abb490c-6a')#033[00m
Dec 13 04:00:16 np0005558241 systemd[1]: libpod-conmon-443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565.scope: Deactivated successfully.
Dec 13 04:00:16 np0005558241 podman[374833]: 2025-12-13 09:00:16.813417897 +0000 UTC m=+0.066024597 container remove 443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:00:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.819 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0e07d013-6876-4ca1-85f3-09da3822a0e9]: (4, ('Sat Dec 13 09:00:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38 (443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565)\n443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565\nSat Dec 13 09:00:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a303ed13-8629-4259-965d-e42689484f38 (443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565)\n443dc5b1c2b11d26d90480ef7d5302c3f56a4631cbbd61a25cd9532d587fe565\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.821 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2b2b63-bafb-4204-b330-54ee26b4cce3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.823 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa303ed13-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.827 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:16 np0005558241 kernel: tapa303ed13-80: left promiscuous mode
Dec 13 04:00:16 np0005558241 nova_compute[248510]: 2025-12-13 09:00:16.840 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.844 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[df3cb205-286a-4d1c-a3e2-5921bb399489]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.861 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5de74720-3625-474a-98f6-56cb378fd328]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.863 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b52db8f7-6447-4646-8ca4-ad4f8a7c18a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.892 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8a716191-8b84-4ffe-a1f8-ee8cfd495c3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890604, 'reachable_time': 25043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374866, 'error': None, 'target': 'ovnmeta-a303ed13-8629-4259-965d-e42689484f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.897 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a303ed13-8629-4259-965d-e42689484f38 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:00:16 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:16.897 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[be06dc5d-a51e-43ff-9d7e-cc4165e1f8b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:16 np0005558241 systemd[1]: run-netns-ovnmeta\x2da303ed13\x2d8629\x2d4259\x2d965d\x2de42689484f38.mount: Deactivated successfully.
Dec 13 04:00:17 np0005558241 nova_compute[248510]: 2025-12-13 09:00:17.005 248514 INFO nova.virt.libvirt.driver [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Deleting instance files /var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_del#033[00m
Dec 13 04:00:17 np0005558241 nova_compute[248510]: 2025-12-13 09:00:17.007 248514 INFO nova.virt.libvirt.driver [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Deletion of /var/lib/nova/instances/3a1e4b45-e8c6-41c4-8890-cb0775cb5786_del complete#033[00m
Dec 13 04:00:17 np0005558241 nova_compute[248510]: 2025-12-13 09:00:17.092 248514 INFO nova.compute.manager [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Took 0.63 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:00:17 np0005558241 nova_compute[248510]: 2025-12-13 09:00:17.093 248514 DEBUG oslo.service.loopingcall [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:00:17 np0005558241 nova_compute[248510]: 2025-12-13 09:00:17.094 248514 DEBUG nova.compute.manager [-] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:00:17 np0005558241 nova_compute[248510]: 2025-12-13 09:00:17.094 248514 DEBUG nova.network.neutron [-] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:00:17 np0005558241 nova_compute[248510]: 2025-12-13 09:00:17.781 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.130 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [{"id": "3abb490c-6aad-47d4-8200-febd480ac7db", "address": "fa:16:3e:a8:9a:b3", "network": {"id": "a303ed13-8629-4259-965d-e42689484f38", "bridge": "br-int", "label": "tempest-network-smoke--290104559", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3abb490c-6a", "ovs_interfaceid": "3abb490c-6aad-47d4-8200-febd480ac7db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.153 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.153 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.153 248514 DEBUG oslo_concurrency.lockutils [req-71235521-175a-4425-bfdf-bd21bac2c6a2 req-a60e6e6a-e9b6-47e0-afed-87caf95f21d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.154 248514 DEBUG nova.network.neutron [req-71235521-175a-4425-bfdf-bd21bac2c6a2 req-a60e6e6a-e9b6-47e0-afed-87caf95f21d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Refreshing network info cache for port 3abb490c-6aad-47d4-8200-febd480ac7db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.155 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.179 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.180 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.180 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.181 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.181 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.238 248514 DEBUG nova.network.neutron [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Successfully updated port: da5ed241-e9aa-44e8-ac5a-7dcb30431922 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.261 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.261 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquired lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.262 248514 DEBUG nova.network.neutron [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.301 248514 DEBUG nova.network.neutron [-] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.311 248514 INFO nova.network.neutron [req-71235521-175a-4425-bfdf-bd21bac2c6a2 req-a60e6e6a-e9b6-47e0-afed-87caf95f21d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Port 3abb490c-6aad-47d4-8200-febd480ac7db from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.312 248514 DEBUG nova.network.neutron [req-71235521-175a-4425-bfdf-bd21bac2c6a2 req-a60e6e6a-e9b6-47e0-afed-87caf95f21d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.322 248514 INFO nova.compute.manager [-] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Took 1.23 seconds to deallocate network for instance.#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.328 248514 DEBUG oslo_concurrency.lockutils [req-71235521-175a-4425-bfdf-bd21bac2c6a2 req-a60e6e6a-e9b6-47e0-afed-87caf95f21d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-3a1e4b45-e8c6-41c4-8890-cb0775cb5786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.371 248514 DEBUG oslo_concurrency.lockutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.372 248514 DEBUG oslo_concurrency.lockutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2976: 321 pgs: 321 active+clean; 139 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.438 248514 DEBUG oslo_concurrency.processutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.622 248514 DEBUG nova.compute.manager [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-vif-unplugged-3abb490c-6aad-47d4-8200-febd480ac7db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.623 248514 DEBUG oslo_concurrency.lockutils [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.623 248514 DEBUG oslo_concurrency.lockutils [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.623 248514 DEBUG oslo_concurrency.lockutils [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.623 248514 DEBUG nova.compute.manager [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] No waiting events found dispatching network-vif-unplugged-3abb490c-6aad-47d4-8200-febd480ac7db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.623 248514 WARNING nova.compute.manager [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received unexpected event network-vif-unplugged-3abb490c-6aad-47d4-8200-febd480ac7db for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.623 248514 DEBUG nova.compute.manager [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-vif-plugged-3abb490c-6aad-47d4-8200-febd480ac7db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.624 248514 DEBUG oslo_concurrency.lockutils [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.624 248514 DEBUG oslo_concurrency.lockutils [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.624 248514 DEBUG oslo_concurrency.lockutils [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.624 248514 DEBUG nova.compute.manager [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] No waiting events found dispatching network-vif-plugged-3abb490c-6aad-47d4-8200-febd480ac7db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.624 248514 WARNING nova.compute.manager [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received unexpected event network-vif-plugged-3abb490c-6aad-47d4-8200-febd480ac7db for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.624 248514 DEBUG nova.compute.manager [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-changed-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.624 248514 DEBUG nova.compute.manager [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Refreshing instance network info cache due to event network-changed-da5ed241-e9aa-44e8-ac5a-7dcb30431922. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.625 248514 DEBUG oslo_concurrency.lockutils [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:00:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:00:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2468858762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.770 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.989 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.991 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3470MB free_disk=59.939925766550004GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:00:18 np0005558241 nova_compute[248510]: 2025-12-13 09:00:18.991 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:00:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2451361975' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:00:19 np0005558241 nova_compute[248510]: 2025-12-13 09:00:19.056 248514 DEBUG oslo_concurrency.processutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:00:19 np0005558241 nova_compute[248510]: 2025-12-13 09:00:19.065 248514 DEBUG nova.compute.provider_tree [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:00:19 np0005558241 nova_compute[248510]: 2025-12-13 09:00:19.087 248514 DEBUG nova.scheduler.client.report [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:00:19 np0005558241 nova_compute[248510]: 2025-12-13 09:00:19.097 248514 DEBUG nova.network.neutron [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:00:19 np0005558241 nova_compute[248510]: 2025-12-13 09:00:19.113 248514 DEBUG oslo_concurrency.lockutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:19 np0005558241 nova_compute[248510]: 2025-12-13 09:00:19.116 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:19 np0005558241 nova_compute[248510]: 2025-12-13 09:00:19.149 248514 INFO nova.scheduler.client.report [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance 3a1e4b45-e8c6-41c4-8890-cb0775cb5786#033[00m
Dec 13 04:00:19 np0005558241 nova_compute[248510]: 2025-12-13 09:00:19.217 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9a653f54-8266-4f9c-ac5a-b4991534e9fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:00:19 np0005558241 nova_compute[248510]: 2025-12-13 09:00:19.218 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:00:19 np0005558241 nova_compute[248510]: 2025-12-13 09:00:19.218 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:00:19 np0005558241 nova_compute[248510]: 2025-12-13 09:00:19.238 248514 DEBUG oslo_concurrency.lockutils [None req-0ce71538-fc22-4f02-b6d0-b747fb0cbc6e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "3a1e4b45-e8c6-41c4-8890-cb0775cb5786" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:19 np0005558241 nova_compute[248510]: 2025-12-13 09:00:19.268 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:00:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:00:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2868176633' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:00:19 np0005558241 nova_compute[248510]: 2025-12-13 09:00:19.837 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:00:19 np0005558241 nova_compute[248510]: 2025-12-13 09:00:19.842 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:00:19 np0005558241 nova_compute[248510]: 2025-12-13 09:00:19.858 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:00:19 np0005558241 nova_compute[248510]: 2025-12-13 09:00:19.883 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:00:19 np0005558241 nova_compute[248510]: 2025-12-13 09:00:19.883 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2977: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Dec 13 04:00:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.127 248514 DEBUG nova.network.neutron [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updating instance_info_cache with network_info: [{"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.157 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Releasing lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.157 248514 DEBUG nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Instance network_info: |[{"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.159 248514 DEBUG oslo_concurrency.lockutils [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.159 248514 DEBUG nova.network.neutron [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Refreshing network info cache for port da5ed241-e9aa-44e8-ac5a-7dcb30431922 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.168 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Start _get_guest_xml network_info=[{"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.175 248514 WARNING nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.180 248514 DEBUG nova.virt.libvirt.host [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.181 248514 DEBUG nova.virt.libvirt.host [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.190 248514 DEBUG nova.virt.libvirt.host [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.191 248514 DEBUG nova.virt.libvirt.host [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.192 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.192 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.192 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.193 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.193 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.193 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.194 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.194 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.194 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.195 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.195 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.195 248514 DEBUG nova.virt.hardware [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.200 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003601247509638841 of space, bias 1.0, pg target 0.10803742528916523 quantized to 32 (current 32)
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696810718935558 of space, bias 1.0, pg target 0.20090432156806673 quantized to 32 (current 32)
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.725337214369618e-07 of space, bias 4.0, pg target 0.0006870404657243541 quantized to 16 (current 32)
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:00:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1491095645' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.796 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.818 248514 DEBUG nova.storage.rbd_utils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:00:21 np0005558241 nova_compute[248510]: 2025-12-13 09:00:21.822 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:00:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:00:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1771744240' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.408 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:00:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2978: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.411 248514 DEBUG nova.virt.libvirt.vif [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:00:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1854403821',display_name='tempest-TestNetworkAdvancedServerOps-server-1854403821',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1854403821',id=127,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWMM/Lkrglzn6TUU8f1dhcCbz/Yw7nIlbXlIesplWBqBJXHooQUykZVdZMxvRqd3+5+420G2QZFN25AFW8hXICmnni45jcvyASiuhi4RAOuTlyQgzYOGb0LTvE5xWl7hA==',key_name='tempest-TestNetworkAdvancedServerOps-279390887',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-azzrjv4t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:00:13Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=9a653f54-8266-4f9c-ac5a-b4991534e9fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.412 248514 DEBUG nova.network.os_vif_util [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.413 248514 DEBUG nova.network.os_vif_util [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:05:2b,bridge_name='br-int',has_traffic_filtering=True,id=da5ed241-e9aa-44e8-ac5a-7dcb30431922,network=Network(b9b944cc-b37e-492b-abd7-b6bfb9227f0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda5ed241-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.415 248514 DEBUG nova.objects.instance [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9a653f54-8266-4f9c-ac5a-b4991534e9fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.442 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:00:22 np0005558241 nova_compute[248510]:  <uuid>9a653f54-8266-4f9c-ac5a-b4991534e9fb</uuid>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:  <name>instance-0000007f</name>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1854403821</nova:name>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:00:21</nova:creationTime>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:00:22 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:        <nova:user uuid="a5fd410579fa429ba0f7f680590cd86a">tempest-TestNetworkAdvancedServerOps-1969761839-project-member</nova:user>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:        <nova:project uuid="77d40177a4804671aa9c5da343bc2ed4">tempest-TestNetworkAdvancedServerOps-1969761839</nova:project>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:        <nova:port uuid="da5ed241-e9aa-44e8-ac5a-7dcb30431922">
Dec 13 04:00:22 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <entry name="serial">9a653f54-8266-4f9c-ac5a-b4991534e9fb</entry>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <entry name="uuid">9a653f54-8266-4f9c-ac5a-b4991534e9fb</entry>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk">
Dec 13 04:00:22 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:00:22 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk.config">
Dec 13 04:00:22 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:00:22 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:cc:05:2b"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <target dev="tapda5ed241-e9"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb/console.log" append="off"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:00:22 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:00:22 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:00:22 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:00:22 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.444 248514 DEBUG nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Preparing to wait for external event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.445 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.445 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.446 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.446 248514 DEBUG nova.virt.libvirt.vif [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:00:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1854403821',display_name='tempest-TestNetworkAdvancedServerOps-server-1854403821',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1854403821',id=127,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWMM/Lkrglzn6TUU8f1dhcCbz/Yw7nIlbXlIesplWBqBJXHooQUykZVdZMxvRqd3+5+420G2QZFN25AFW8hXICmnni45jcvyASiuhi4RAOuTlyQgzYOGb0LTvE5xWl7hA==',key_name='tempest-TestNetworkAdvancedServerOps-279390887',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-azzrjv4t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:00:13Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=9a653f54-8266-4f9c-ac5a-b4991534e9fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.447 248514 DEBUG nova.network.os_vif_util [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.448 248514 DEBUG nova.network.os_vif_util [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:05:2b,bridge_name='br-int',has_traffic_filtering=True,id=da5ed241-e9aa-44e8-ac5a-7dcb30431922,network=Network(b9b944cc-b37e-492b-abd7-b6bfb9227f0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda5ed241-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.448 248514 DEBUG os_vif [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:05:2b,bridge_name='br-int',has_traffic_filtering=True,id=da5ed241-e9aa-44e8-ac5a-7dcb30431922,network=Network(b9b944cc-b37e-492b-abd7-b6bfb9227f0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda5ed241-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.449 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.450 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.451 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.455 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.456 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda5ed241-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.456 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda5ed241-e9, col_values=(('external_ids', {'iface-id': 'da5ed241-e9aa-44e8-ac5a-7dcb30431922', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:05:2b', 'vm-uuid': '9a653f54-8266-4f9c-ac5a-b4991534e9fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:22 np0005558241 NetworkManager[50376]: <info>  [1765616422.4886] manager: (tapda5ed241-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/515)
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.492 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.493 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.494 248514 INFO os_vif [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:05:2b,bridge_name='br-int',has_traffic_filtering=True,id=da5ed241-e9aa-44e8-ac5a-7dcb30431922,network=Network(b9b944cc-b37e-492b-abd7-b6bfb9227f0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda5ed241-e9')#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.500 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.547 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.547 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.548 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No VIF found with MAC fa:16:3e:cc:05:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.548 248514 INFO nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Using config drive#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.575 248514 DEBUG nova.storage.rbd_utils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:00:22 np0005558241 nova_compute[248510]: 2025-12-13 09:00:22.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:23 np0005558241 nova_compute[248510]: 2025-12-13 09:00:23.151 248514 INFO nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Creating config drive at /var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb/disk.config#033[00m
Dec 13 04:00:23 np0005558241 nova_compute[248510]: 2025-12-13 09:00:23.156 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj7yy9s9q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:00:23 np0005558241 nova_compute[248510]: 2025-12-13 09:00:23.313 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj7yy9s9q" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:00:23 np0005558241 nova_compute[248510]: 2025-12-13 09:00:23.341 248514 DEBUG nova.storage.rbd_utils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:00:23 np0005558241 nova_compute[248510]: 2025-12-13 09:00:23.345 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb/disk.config 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:00:23 np0005558241 nova_compute[248510]: 2025-12-13 09:00:23.489 248514 DEBUG oslo_concurrency.processutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb/disk.config 9a653f54-8266-4f9c-ac5a-b4991534e9fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:00:23 np0005558241 nova_compute[248510]: 2025-12-13 09:00:23.490 248514 INFO nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Deleting local config drive /var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb/disk.config because it was imported into RBD.#033[00m
Dec 13 04:00:23 np0005558241 kernel: tapda5ed241-e9: entered promiscuous mode
Dec 13 04:00:23 np0005558241 NetworkManager[50376]: <info>  [1765616423.5413] manager: (tapda5ed241-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/516)
Dec 13 04:00:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:23Z|01258|binding|INFO|Claiming lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 for this chassis.
Dec 13 04:00:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:23Z|01259|binding|INFO|da5ed241-e9aa-44e8-ac5a-7dcb30431922: Claiming fa:16:3e:cc:05:2b 10.100.0.13
Dec 13 04:00:23 np0005558241 nova_compute[248510]: 2025-12-13 09:00:23.546 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:23Z|01260|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 ovn-installed in OVS
Dec 13 04:00:23 np0005558241 nova_compute[248510]: 2025-12-13 09:00:23.557 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:23Z|01261|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 up in Southbound
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.558 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:05:2b 10.100.0.13'], port_security=['fa:16:3e:cc:05:2b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9a653f54-8266-4f9c-ac5a-b4991534e9fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '60a3552e-c3a6-4944-a13e-2564e4138914', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8c1cdec-485c-4b47-86d6-9771ffb4d4c7, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=da5ed241-e9aa-44e8-ac5a-7dcb30431922) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:00:23 np0005558241 nova_compute[248510]: 2025-12-13 09:00:23.560 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.560 158419 INFO neutron.agent.ovn.metadata.agent [-] Port da5ed241-e9aa-44e8-ac5a-7dcb30431922 in datapath b9b944cc-b37e-492b-abd7-b6bfb9227f0f bound to our chassis#033[00m
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.561 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b9b944cc-b37e-492b-abd7-b6bfb9227f0f#033[00m
Dec 13 04:00:23 np0005558241 systemd-udevd[375069]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.574 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b353702e-6238-4b7a-92e0-663adb3424c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.575 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb9b944cc-b1 in ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.577 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb9b944cc-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.577 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[26e4221e-8d97-46d7-83c4-0fdd31840472]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.578 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11953a34-338d-46ec-9633-79f73ef1bcfe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:23 np0005558241 NetworkManager[50376]: <info>  [1765616423.5870] device (tapda5ed241-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:00:23 np0005558241 NetworkManager[50376]: <info>  [1765616423.5879] device (tapda5ed241-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.590 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[850cb3c3-24e0-46cf-8709-8343d313bd53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:23 np0005558241 systemd-machined[210538]: New machine qemu-154-instance-0000007f.
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.618 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[22d24a78-5774-4dd4-be01-9e37be35934c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:23 np0005558241 systemd[1]: Started Virtual Machine qemu-154-instance-0000007f.
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.653 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a2261f9b-27e9-48e4-b927-6f943b918463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:23 np0005558241 systemd-udevd[375072]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.658 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3fed49-900b-48e7-9cda-a3e3c2c90b9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:23 np0005558241 NetworkManager[50376]: <info>  [1765616423.6597] manager: (tapb9b944cc-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/517)
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.688 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8b6bffe1-7bb7-4b85-b25a-891bd1ad52f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.691 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b9debf-9c09-44a7-b011-ea4f0df4316f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:23 np0005558241 NetworkManager[50376]: <info>  [1765616423.7107] device (tapb9b944cc-b0): carrier: link connected
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.714 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[836b2036-5f86-432d-8aec-612941be90cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.729 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f82df206-181b-4fdc-916e-06a79553091f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9b944cc-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:4f:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 369], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 900092, 'reachable_time': 43983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375104, 'error': None, 'target': 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.742 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7481ef09-92a1-4e2b-885c-7ca4bcae416d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:4fac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 900092, 'tstamp': 900092}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375105, 'error': None, 'target': 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.758 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[37951df6-df8f-4ec2-be72-ff16e07b022e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9b944cc-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:4f:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 369], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 900092, 'reachable_time': 43983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 375106, 'error': None, 'target': 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:23 np0005558241 nova_compute[248510]: 2025-12-13 09:00:23.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.786 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7c1c80-8ae3-4065-98f3-1c340b49f461]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.844 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2f726e-c95b-4daa-84bf-75f7e7001e0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.845 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9b944cc-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.845 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.846 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9b944cc-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:23 np0005558241 nova_compute[248510]: 2025-12-13 09:00:23.847 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:23 np0005558241 NetworkManager[50376]: <info>  [1765616423.8480] manager: (tapb9b944cc-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/518)
Dec 13 04:00:23 np0005558241 kernel: tapb9b944cc-b0: entered promiscuous mode
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.850 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb9b944cc-b0, col_values=(('external_ids', {'iface-id': 'dacb29c7-c3ab-43a0-86c5-5add3f769729'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:23 np0005558241 nova_compute[248510]: 2025-12-13 09:00:23.853 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:23Z|01262|binding|INFO|Releasing lport dacb29c7-c3ab-43a0-86c5-5add3f769729 from this chassis (sb_readonly=0)
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.854 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b9b944cc-b37e-492b-abd7-b6bfb9227f0f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b9b944cc-b37e-492b-abd7-b6bfb9227f0f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.866 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[85e41e8e-a11c-41d6-b26a-342c8e99310e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.867 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-b9b944cc-b37e-492b-abd7-b6bfb9227f0f
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/b9b944cc-b37e-492b-abd7-b6bfb9227f0f.pid.haproxy
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID b9b944cc-b37e-492b-abd7-b6bfb9227f0f
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:00:23 np0005558241 nova_compute[248510]: 2025-12-13 09:00:23.868 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:23.868 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'env', 'PROCESS_TAG=haproxy-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b9b944cc-b37e-492b-abd7-b6bfb9227f0f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:00:23 np0005558241 nova_compute[248510]: 2025-12-13 09:00:23.920 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616408.919502, decfc3d0-e424-4304-8b3c-51daa9bd0fb6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:00:23 np0005558241 nova_compute[248510]: 2025-12-13 09:00:23.921 248514 INFO nova.compute.manager [-] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:00:23 np0005558241 nova_compute[248510]: 2025-12-13 09:00:23.947 248514 DEBUG nova.compute.manager [None req-a0d61e36-476e-469f-b342-12a0631d92a2 - - - - - -] [instance: decfc3d0-e424-4304-8b3c-51daa9bd0fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.049 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616424.0490115, 9a653f54-8266-4f9c-ac5a-b4991534e9fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.049 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] VM Started (Lifecycle Event)#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.082 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.088 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616424.0501661, 9a653f54-8266-4f9c-ac5a-b4991534e9fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.089 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.112 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.116 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.141 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.183 248514 DEBUG nova.network.neutron [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updated VIF entry in instance network info cache for port da5ed241-e9aa-44e8-ac5a-7dcb30431922. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.184 248514 DEBUG nova.network.neutron [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updating instance_info_cache with network_info: [{"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.209 248514 DEBUG oslo_concurrency.lockutils [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.210 248514 DEBUG nova.compute.manager [req-16da5e9a-82d7-4fce-9f74-1efbe5075a8b req-5f47bb0a-eeeb-4a5c-a901-babddf5996ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Received event network-vif-deleted-3abb490c-6aad-47d4-8200-febd480ac7db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:24 np0005558241 podman[375179]: 2025-12-13 09:00:24.264173013 +0000 UTC m=+0.052441465 container create 47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:00:24 np0005558241 systemd[1]: Started libpod-conmon-47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35.scope.
Dec 13 04:00:24 np0005558241 podman[375179]: 2025-12-13 09:00:24.237622393 +0000 UTC m=+0.025890875 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:00:24 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:00:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc161a8edde91d2349ff8deec297f008ca51bfac852b10dd03755608924ccb60/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.360 248514 DEBUG nova.compute.manager [req-0ed38729-67e3-4149-a446-fd3166aeb747 req-80ca50ca-2de3-4495-a27b-6aeea40ba40f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.361 248514 DEBUG oslo_concurrency.lockutils [req-0ed38729-67e3-4149-a446-fd3166aeb747 req-80ca50ca-2de3-4495-a27b-6aeea40ba40f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.362 248514 DEBUG oslo_concurrency.lockutils [req-0ed38729-67e3-4149-a446-fd3166aeb747 req-80ca50ca-2de3-4495-a27b-6aeea40ba40f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.362 248514 DEBUG oslo_concurrency.lockutils [req-0ed38729-67e3-4149-a446-fd3166aeb747 req-80ca50ca-2de3-4495-a27b-6aeea40ba40f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.363 248514 DEBUG nova.compute.manager [req-0ed38729-67e3-4149-a446-fd3166aeb747 req-80ca50ca-2de3-4495-a27b-6aeea40ba40f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Processing event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.364 248514 DEBUG nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:00:24 np0005558241 podman[375179]: 2025-12-13 09:00:24.364392736 +0000 UTC m=+0.152661268 container init 47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.369 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616424.3688927, 9a653f54-8266-4f9c-ac5a-b4991534e9fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.369 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:00:24 np0005558241 podman[375179]: 2025-12-13 09:00:24.370871474 +0000 UTC m=+0.159139966 container start 47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.371 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.375 248514 INFO nova.virt.libvirt.driver [-] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Instance spawned successfully.#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.375 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:00:24 np0005558241 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375194]: [NOTICE]   (375198) : New worker (375200) forked
Dec 13 04:00:24 np0005558241 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375194]: [NOTICE]   (375198) : Loading success.
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.403 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:00:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2979: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.410 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.413 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.414 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.414 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.415 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.415 248514 DEBUG nova.virt.libvirt.driver [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.420 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.480 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.511 248514 INFO nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Took 11.23 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.512 248514 DEBUG nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.578 248514 INFO nova.compute.manager [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Took 12.35 seconds to build instance.#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.596 248514 DEBUG oslo_concurrency.lockutils [None req-7c49ee4a-db48-4fd8-9293-ba882023ba83 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:00:24 np0005558241 nova_compute[248510]: 2025-12-13 09:00:24.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:00:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:25.367 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:00:25 np0005558241 nova_compute[248510]: 2025-12-13 09:00:25.368 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:25.369 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:00:25 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:25Z|01263|binding|INFO|Releasing lport dacb29c7-c3ab-43a0-86c5-5add3f769729 from this chassis (sb_readonly=0)
Dec 13 04:00:25 np0005558241 nova_compute[248510]: 2025-12-13 09:00:25.527 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:25 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:25Z|01264|binding|INFO|Releasing lport dacb29c7-c3ab-43a0-86c5-5add3f769729 from this chassis (sb_readonly=0)
Dec 13 04:00:25 np0005558241 nova_compute[248510]: 2025-12-13 09:00:25.611 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:00:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2980: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 370 KiB/s wr, 49 op/s
Dec 13 04:00:26 np0005558241 nova_compute[248510]: 2025-12-13 09:00:26.653 248514 DEBUG nova.compute.manager [req-ca894bfb-2f98-4fd7-bb0c-551397b3a452 req-81bafdb2-cdf7-474e-8d24-c61e822ec663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:26 np0005558241 nova_compute[248510]: 2025-12-13 09:00:26.654 248514 DEBUG oslo_concurrency.lockutils [req-ca894bfb-2f98-4fd7-bb0c-551397b3a452 req-81bafdb2-cdf7-474e-8d24-c61e822ec663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:26 np0005558241 nova_compute[248510]: 2025-12-13 09:00:26.654 248514 DEBUG oslo_concurrency.lockutils [req-ca894bfb-2f98-4fd7-bb0c-551397b3a452 req-81bafdb2-cdf7-474e-8d24-c61e822ec663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:26 np0005558241 nova_compute[248510]: 2025-12-13 09:00:26.654 248514 DEBUG oslo_concurrency.lockutils [req-ca894bfb-2f98-4fd7-bb0c-551397b3a452 req-81bafdb2-cdf7-474e-8d24-c61e822ec663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:26 np0005558241 nova_compute[248510]: 2025-12-13 09:00:26.655 248514 DEBUG nova.compute.manager [req-ca894bfb-2f98-4fd7-bb0c-551397b3a452 req-81bafdb2-cdf7-474e-8d24-c61e822ec663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] No waiting events found dispatching network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:00:26 np0005558241 nova_compute[248510]: 2025-12-13 09:00:26.655 248514 WARNING nova.compute.manager [req-ca894bfb-2f98-4fd7-bb0c-551397b3a452 req-81bafdb2-cdf7-474e-8d24-c61e822ec663 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received unexpected event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:00:27 np0005558241 nova_compute[248510]: 2025-12-13 09:00:27.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:27 np0005558241 nova_compute[248510]: 2025-12-13 09:00:27.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:00:27 np0005558241 nova_compute[248510]: 2025-12-13 09:00:27.785 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:28.373 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2981: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 463 KiB/s rd, 370 KiB/s wr, 66 op/s
Dec 13 04:00:29 np0005558241 NetworkManager[50376]: <info>  [1765616429.6773] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/519)
Dec 13 04:00:29 np0005558241 NetworkManager[50376]: <info>  [1765616429.6779] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/520)
Dec 13 04:00:29 np0005558241 nova_compute[248510]: 2025-12-13 09:00:29.676 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:29 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:29Z|01265|binding|INFO|Releasing lport dacb29c7-c3ab-43a0-86c5-5add3f769729 from this chassis (sb_readonly=0)
Dec 13 04:00:29 np0005558241 nova_compute[248510]: 2025-12-13 09:00:29.715 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:29 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:29Z|01266|binding|INFO|Releasing lport dacb29c7-c3ab-43a0-86c5-5add3f769729 from this chassis (sb_readonly=0)
Dec 13 04:00:29 np0005558241 nova_compute[248510]: 2025-12-13 09:00:29.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:30 np0005558241 nova_compute[248510]: 2025-12-13 09:00:30.183 248514 DEBUG nova.compute.manager [req-08efa94e-4055-4a6e-85fd-202107f1e38e req-6ccae219-e543-406f-ac62-19d3c0708cfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-changed-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:30 np0005558241 nova_compute[248510]: 2025-12-13 09:00:30.183 248514 DEBUG nova.compute.manager [req-08efa94e-4055-4a6e-85fd-202107f1e38e req-6ccae219-e543-406f-ac62-19d3c0708cfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Refreshing instance network info cache due to event network-changed-da5ed241-e9aa-44e8-ac5a-7dcb30431922. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:00:30 np0005558241 nova_compute[248510]: 2025-12-13 09:00:30.183 248514 DEBUG oslo_concurrency.lockutils [req-08efa94e-4055-4a6e-85fd-202107f1e38e req-6ccae219-e543-406f-ac62-19d3c0708cfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:00:30 np0005558241 nova_compute[248510]: 2025-12-13 09:00:30.184 248514 DEBUG oslo_concurrency.lockutils [req-08efa94e-4055-4a6e-85fd-202107f1e38e req-6ccae219-e543-406f-ac62-19d3c0708cfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:00:30 np0005558241 nova_compute[248510]: 2025-12-13 09:00:30.184 248514 DEBUG nova.network.neutron [req-08efa94e-4055-4a6e-85fd-202107f1e38e req-6ccae219-e543-406f-ac62-19d3c0708cfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Refreshing network info cache for port da5ed241-e9aa-44e8-ac5a-7dcb30431922 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:00:30 np0005558241 nova_compute[248510]: 2025-12-13 09:00:30.305 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:30 np0005558241 nova_compute[248510]: 2025-12-13 09:00:30.377 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:00:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2982: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 88 op/s
Dec 13 04:00:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:00:31 np0005558241 nova_compute[248510]: 2025-12-13 09:00:31.701 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616416.70063, 3a1e4b45-e8c6-41c4-8890-cb0775cb5786 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:00:31 np0005558241 nova_compute[248510]: 2025-12-13 09:00:31.702 248514 INFO nova.compute.manager [-] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:00:31 np0005558241 nova_compute[248510]: 2025-12-13 09:00:31.766 248514 DEBUG nova.compute.manager [None req-0b211dad-2e2c-4fc9-99a1-11eb44b9d6f9 - - - - - -] [instance: 3a1e4b45-e8c6-41c4-8890-cb0775cb5786] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:00:31 np0005558241 nova_compute[248510]: 2025-12-13 09:00:31.819 248514 DEBUG nova.network.neutron [req-08efa94e-4055-4a6e-85fd-202107f1e38e req-6ccae219-e543-406f-ac62-19d3c0708cfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updated VIF entry in instance network info cache for port da5ed241-e9aa-44e8-ac5a-7dcb30431922. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:00:31 np0005558241 nova_compute[248510]: 2025-12-13 09:00:31.820 248514 DEBUG nova.network.neutron [req-08efa94e-4055-4a6e-85fd-202107f1e38e req-6ccae219-e543-406f-ac62-19d3c0708cfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updating instance_info_cache with network_info: [{"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:00:31 np0005558241 nova_compute[248510]: 2025-12-13 09:00:31.856 248514 DEBUG oslo_concurrency.lockutils [req-08efa94e-4055-4a6e-85fd-202107f1e38e req-6ccae219-e543-406f-ac62-19d3c0708cfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:00:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2983: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:00:32 np0005558241 nova_compute[248510]: 2025-12-13 09:00:32.521 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:32 np0005558241 nova_compute[248510]: 2025-12-13 09:00:32.786 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2984: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:00:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:00:35 np0005558241 podman[375213]: 2025-12-13 09:00:35.964437988 +0000 UTC m=+0.056163456 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Dec 13 04:00:35 np0005558241 podman[375214]: 2025-12-13 09:00:35.985424361 +0000 UTC m=+0.076669077 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 04:00:35 np0005558241 podman[375212]: 2025-12-13 09:00:35.987962934 +0000 UTC m=+0.083385203 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Dec 13 04:00:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2985: 321 pgs: 321 active+clean; 88 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Dec 13 04:00:36 np0005558241 nova_compute[248510]: 2025-12-13 09:00:36.670 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:37 np0005558241 nova_compute[248510]: 2025-12-13 09:00:37.525 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:37 np0005558241 nova_compute[248510]: 2025-12-13 09:00:37.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:37 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:37Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cc:05:2b 10.100.0.13
Dec 13 04:00:37 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:37Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cc:05:2b 10.100.0.13
Dec 13 04:00:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2986: 321 pgs: 321 active+clean; 97 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 87 op/s
Dec 13 04:00:39 np0005558241 nova_compute[248510]: 2025-12-13 09:00:39.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:00:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:00:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:00:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:00:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:00:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:00:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:00:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2987: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 111 op/s
Dec 13 04:00:40 np0005558241 nova_compute[248510]: 2025-12-13 09:00:40.804 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:00:40 np0005558241 nova_compute[248510]: 2025-12-13 09:00:40.805 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 13 04:00:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:00:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2988: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:00:42 np0005558241 nova_compute[248510]: 2025-12-13 09:00:42.568 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:42 np0005558241 nova_compute[248510]: 2025-12-13 09:00:42.791 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:43 np0005558241 nova_compute[248510]: 2025-12-13 09:00:43.921 248514 INFO nova.compute.manager [None req-b3ce700e-229f-4457-a15f-806307a83b26 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Get console output#033[00m
Dec 13 04:00:43 np0005558241 nova_compute[248510]: 2025-12-13 09:00:43.927 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 04:00:44 np0005558241 nova_compute[248510]: 2025-12-13 09:00:44.268 248514 DEBUG oslo_concurrency.lockutils [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:44 np0005558241 nova_compute[248510]: 2025-12-13 09:00:44.268 248514 DEBUG oslo_concurrency.lockutils [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:44 np0005558241 nova_compute[248510]: 2025-12-13 09:00:44.269 248514 INFO nova.compute.manager [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Rebooting instance#033[00m
Dec 13 04:00:44 np0005558241 nova_compute[248510]: 2025-12-13 09:00:44.284 248514 DEBUG oslo_concurrency.lockutils [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:00:44 np0005558241 nova_compute[248510]: 2025-12-13 09:00:44.284 248514 DEBUG oslo_concurrency.lockutils [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquired lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:00:44 np0005558241 nova_compute[248510]: 2025-12-13 09:00:44.284 248514 DEBUG nova.network.neutron [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:00:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2989: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:00:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:00:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2990: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:00:47 np0005558241 nova_compute[248510]: 2025-12-13 09:00:47.570 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:47 np0005558241 nova_compute[248510]: 2025-12-13 09:00:47.792 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:00:47 np0005558241 nova_compute[248510]: 2025-12-13 09:00:47.796 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:47 np0005558241 nova_compute[248510]: 2025-12-13 09:00:47.824 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:00:47 np0005558241 nova_compute[248510]: 2025-12-13 09:00:47.824 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 13 04:00:47 np0005558241 nova_compute[248510]: 2025-12-13 09:00:47.854 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 13 04:00:47 np0005558241 nova_compute[248510]: 2025-12-13 09:00:47.863 248514 DEBUG nova.network.neutron [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updating instance_info_cache with network_info: [{"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:00:47 np0005558241 nova_compute[248510]: 2025-12-13 09:00:47.910 248514 DEBUG oslo_concurrency.lockutils [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Releasing lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:00:47 np0005558241 nova_compute[248510]: 2025-12-13 09:00:47.912 248514 DEBUG nova.compute.manager [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:00:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2991: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 04:00:48 np0005558241 nova_compute[248510]: 2025-12-13 09:00:48.588 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:48 np0005558241 nova_compute[248510]: 2025-12-13 09:00:48.589 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:48 np0005558241 nova_compute[248510]: 2025-12-13 09:00:48.614 248514 DEBUG nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:00:48 np0005558241 nova_compute[248510]: 2025-12-13 09:00:48.712 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:48 np0005558241 nova_compute[248510]: 2025-12-13 09:00:48.713 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:48 np0005558241 nova_compute[248510]: 2025-12-13 09:00:48.724 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:00:48 np0005558241 nova_compute[248510]: 2025-12-13 09:00:48.724 248514 INFO nova.compute.claims [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:00:48 np0005558241 nova_compute[248510]: 2025-12-13 09:00:48.908 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:00:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:00:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 12K writes, 59K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s#012Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1434 writes, 6976 keys, 1434 commit groups, 1.0 writes per commit group, ingest: 9.59 MB, 0.02 MB/s#012Interval WAL: 1434 writes, 1434 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     20.6      3.60              0.26        42    0.086       0      0       0.0       0.0#012  L6      1/0    8.51 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.7     79.6     67.0      5.21              1.07        41    0.127    259K    22K       0.0       0.0#012 Sum      1/0    8.51 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.7     47.1     48.0      8.81              1.33        83    0.106    259K    22K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.1    113.5    110.4      0.67              0.26        14    0.048     57K   3578       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     79.6     67.0      5.21              1.07        41    0.127    259K    22K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     20.6      3.59              0.26        41    0.088       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.072, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.41 GB write, 0.08 MB/s write, 0.40 GB read, 0.08 MB/s read, 8.8 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.13 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 304.00 MB usage: 48.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000431 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(3027,46.72 MB,15.3677%) FilterBlock(84,716.05 KB,0.230021%) IndexBlock(84,1.18 MB,0.387834%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec 13 04:00:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:00:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1870465599' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:00:49 np0005558241 nova_compute[248510]: 2025-12-13 09:00:49.570 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:00:49 np0005558241 nova_compute[248510]: 2025-12-13 09:00:49.578 248514 DEBUG nova.compute.provider_tree [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:00:49 np0005558241 nova_compute[248510]: 2025-12-13 09:00:49.607 248514 DEBUG nova.scheduler.client.report [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:00:49 np0005558241 nova_compute[248510]: 2025-12-13 09:00:49.636 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:49 np0005558241 nova_compute[248510]: 2025-12-13 09:00:49.638 248514 DEBUG nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:00:49 np0005558241 nova_compute[248510]: 2025-12-13 09:00:49.695 248514 DEBUG nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:00:49 np0005558241 nova_compute[248510]: 2025-12-13 09:00:49.696 248514 DEBUG nova.network.neutron [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:00:49 np0005558241 nova_compute[248510]: 2025-12-13 09:00:49.719 248514 INFO nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:00:49 np0005558241 nova_compute[248510]: 2025-12-13 09:00:49.756 248514 DEBUG nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:00:49 np0005558241 nova_compute[248510]: 2025-12-13 09:00:49.861 248514 DEBUG nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:00:49 np0005558241 nova_compute[248510]: 2025-12-13 09:00:49.864 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:00:49 np0005558241 nova_compute[248510]: 2025-12-13 09:00:49.865 248514 INFO nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Creating image(s)#033[00m
Dec 13 04:00:49 np0005558241 nova_compute[248510]: 2025-12-13 09:00:49.890 248514 DEBUG nova.storage.rbd_utils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:00:49 np0005558241 nova_compute[248510]: 2025-12-13 09:00:49.919 248514 DEBUG nova.storage.rbd_utils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:00:49 np0005558241 nova_compute[248510]: 2025-12-13 09:00:49.944 248514 DEBUG nova.storage.rbd_utils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:00:49 np0005558241 nova_compute[248510]: 2025-12-13 09:00:49.948 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.027 248514 DEBUG nova.policy [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.034 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.035 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.036 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.036 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.061 248514 DEBUG nova.storage.rbd_utils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.065 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:00:50 np0005558241 kernel: tapda5ed241-e9 (unregistering): left promiscuous mode
Dec 13 04:00:50 np0005558241 NetworkManager[50376]: <info>  [1765616450.2750] device (tapda5ed241-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:00:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:50Z|01267|binding|INFO|Releasing lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 from this chassis (sb_readonly=0)
Dec 13 04:00:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:50Z|01268|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 down in Southbound
Dec 13 04:00:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:50Z|01269|binding|INFO|Removing iface tapda5ed241-e9 ovn-installed in OVS
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.285 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.295 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:05:2b 10.100.0.13'], port_security=['fa:16:3e:cc:05:2b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9a653f54-8266-4f9c-ac5a-b4991534e9fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '60a3552e-c3a6-4944-a13e-2564e4138914', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8c1cdec-485c-4b47-86d6-9771ffb4d4c7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=da5ed241-e9aa-44e8-ac5a-7dcb30431922) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.296 158419 INFO neutron.agent.ovn.metadata.agent [-] Port da5ed241-e9aa-44e8-ac5a-7dcb30431922 in datapath b9b944cc-b37e-492b-abd7-b6bfb9227f0f unbound from our chassis#033[00m
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.298 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9b944cc-b37e-492b-abd7-b6bfb9227f0f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.299 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3849ecc8-3716-4903-be66-7ffaa78ee23f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.300 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f namespace which is not needed anymore#033[00m
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.304 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:50 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Dec 13 04:00:50 np0005558241 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Dec 13 04:00:50 np0005558241 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007f.scope: Consumed 13.500s CPU time.
Dec 13 04:00:50 np0005558241 systemd-machined[210538]: Machine qemu-154-instance-0000007f terminated.
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.371 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:00:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2992: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 145 KiB/s rd, 1.0 MiB/s wr, 45 op/s
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.453 248514 DEBUG nova.storage.rbd_utils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:00:50 np0005558241 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375194]: [NOTICE]   (375198) : haproxy version is 2.8.14-c23fe91
Dec 13 04:00:50 np0005558241 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375194]: [NOTICE]   (375198) : path to executable is /usr/sbin/haproxy
Dec 13 04:00:50 np0005558241 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375194]: [WARNING]  (375198) : Exiting Master process...
Dec 13 04:00:50 np0005558241 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375194]: [WARNING]  (375198) : Exiting Master process...
Dec 13 04:00:50 np0005558241 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375194]: [ALERT]    (375198) : Current worker (375200) exited with code 143 (Terminated)
Dec 13 04:00:50 np0005558241 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375194]: [WARNING]  (375198) : All workers exited. Exiting... (0)
Dec 13 04:00:50 np0005558241 systemd[1]: libpod-47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35.scope: Deactivated successfully.
Dec 13 04:00:50 np0005558241 podman[375429]: 2025-12-13 09:00:50.466168422 +0000 UTC m=+0.052592508 container died 47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 13 04:00:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35-userdata-shm.mount: Deactivated successfully.
Dec 13 04:00:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-dc161a8edde91d2349ff8deec297f008ca51bfac852b10dd03755608924ccb60-merged.mount: Deactivated successfully.
Dec 13 04:00:50 np0005558241 podman[375429]: 2025-12-13 09:00:50.513492501 +0000 UTC m=+0.099916557 container cleanup 47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:00:50 np0005558241 kernel: tapda5ed241-e9: entered promiscuous mode
Dec 13 04:00:50 np0005558241 NetworkManager[50376]: <info>  [1765616450.5172] manager: (tapda5ed241-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/521)
Dec 13 04:00:50 np0005558241 systemd-udevd[375394]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:00:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:50Z|01270|binding|INFO|Claiming lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 for this chassis.
Dec 13 04:00:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:50Z|01271|binding|INFO|da5ed241-e9aa-44e8-ac5a-7dcb30431922: Claiming fa:16:3e:cc:05:2b 10.100.0.13
Dec 13 04:00:50 np0005558241 kernel: tapda5ed241-e9 (unregistering): left promiscuous mode
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.524 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:50 np0005558241 systemd[1]: libpod-conmon-47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35.scope: Deactivated successfully.
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.529 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:05:2b 10.100.0.13'], port_security=['fa:16:3e:cc:05:2b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9a653f54-8266-4f9c-ac5a-b4991534e9fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '60a3552e-c3a6-4944-a13e-2564e4138914', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8c1cdec-485c-4b47-86d6-9771ffb4d4c7, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=da5ed241-e9aa-44e8-ac5a-7dcb30431922) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:00:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:50Z|01272|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 ovn-installed in OVS
Dec 13 04:00:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:50Z|01273|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 up in Southbound
Dec 13 04:00:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:50Z|01274|binding|INFO|Releasing lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 from this chassis (sb_readonly=1)
Dec 13 04:00:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:50Z|01275|binding|INFO|Removing iface tapda5ed241-e9 ovn-installed in OVS
Dec 13 04:00:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:50Z|01276|if_status|INFO|Dropped 6 log messages in last 156 seconds (most recently, 156 seconds ago) due to excessive rate
Dec 13 04:00:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:50Z|01277|if_status|INFO|Not setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 down as sb is readonly
Dec 13 04:00:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:50Z|01278|binding|INFO|Releasing lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 from this chassis (sb_readonly=0)
Dec 13 04:00:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:50Z|01279|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 down in Southbound
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.561 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:05:2b 10.100.0.13'], port_security=['fa:16:3e:cc:05:2b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9a653f54-8266-4f9c-ac5a-b4991534e9fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '60a3552e-c3a6-4944-a13e-2564e4138914', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8c1cdec-485c-4b47-86d6-9771ffb4d4c7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=da5ed241-e9aa-44e8-ac5a-7dcb30431922) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.566 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.573 248514 DEBUG nova.objects.instance [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid 046adf00-a09d-4e5c-89d5-e63a2ee02fc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:00:50 np0005558241 podman[375498]: 2025-12-13 09:00:50.587092822 +0000 UTC m=+0.042437969 container remove 47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.593 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e8181026-51bf-4280-91cb-9b7b7d4ed943]: (4, ('Sat Dec 13 09:00:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f (47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35)\n47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35\nSat Dec 13 09:00:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f (47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35)\n47b985b0aa3b2a78cf49c75cf889ca10ec82168e159527fae8e57251857e9a35\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.595 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6b985ebd-f4c6-43b9-b6c8-ba820e3b8584]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.595 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.595 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Ensure instance console log exists: /var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.596 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.596 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.596 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.596 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9b944cc-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.643 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:50 np0005558241 kernel: tapb9b944cc-b0: left promiscuous mode
Dec 13 04:00:50 np0005558241 nova_compute[248510]: 2025-12-13 09:00:50.668 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.671 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0a21b157-bc7f-485e-b5b4-80e92adebaea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.683 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6d544a2e-6820-40c8-9c21-474e5de0ad36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.684 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e0bc32d4-b9e9-477e-8ecf-3e8baebcc92c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.708 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9f290aad-405d-480e-be32-74c281067655]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 900086, 'reachable_time': 43880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375535, 'error': None, 'target': 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:50 np0005558241 systemd[1]: run-netns-ovnmeta\x2db9b944cc\x2db37e\x2d492b\x2dabd7\x2db6bfb9227f0f.mount: Deactivated successfully.
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.713 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.714 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[dbdac92f-3b8d-44bc-bd01-240951c37b24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.715 158419 INFO neutron.agent.ovn.metadata.agent [-] Port da5ed241-e9aa-44e8-ac5a-7dcb30431922 in datapath b9b944cc-b37e-492b-abd7-b6bfb9227f0f unbound from our chassis#033[00m
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.716 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9b944cc-b37e-492b-abd7-b6bfb9227f0f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.716 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[813a2be8-8ea1-40b7-994b-4da1a3c9b58e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.717 158419 INFO neutron.agent.ovn.metadata.agent [-] Port da5ed241-e9aa-44e8-ac5a-7dcb30431922 in datapath b9b944cc-b37e-492b-abd7-b6bfb9227f0f unbound from our chassis#033[00m
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.718 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9b944cc-b37e-492b-abd7-b6bfb9227f0f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:00:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:50.719 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[299d8218-cf7f-4760-847c-231fcc92ea67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:00:51 np0005558241 nova_compute[248510]: 2025-12-13 09:00:51.120 248514 INFO nova.virt.libvirt.driver [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Instance shutdown successfully.#033[00m
Dec 13 04:00:51 np0005558241 kernel: tapda5ed241-e9: entered promiscuous mode
Dec 13 04:00:51 np0005558241 NetworkManager[50376]: <info>  [1765616451.1759] manager: (tapda5ed241-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/522)
Dec 13 04:00:51 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:51Z|01280|binding|INFO|Claiming lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 for this chassis.
Dec 13 04:00:51 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:51Z|01281|binding|INFO|da5ed241-e9aa-44e8-ac5a-7dcb30431922: Claiming fa:16:3e:cc:05:2b 10.100.0.13
Dec 13 04:00:51 np0005558241 nova_compute[248510]: 2025-12-13 09:00:51.177 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.186 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:05:2b 10.100.0.13'], port_security=['fa:16:3e:cc:05:2b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9a653f54-8266-4f9c-ac5a-b4991534e9fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '60a3552e-c3a6-4944-a13e-2564e4138914', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8c1cdec-485c-4b47-86d6-9771ffb4d4c7, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=da5ed241-e9aa-44e8-ac5a-7dcb30431922) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.187 158419 INFO neutron.agent.ovn.metadata.agent [-] Port da5ed241-e9aa-44e8-ac5a-7dcb30431922 in datapath b9b944cc-b37e-492b-abd7-b6bfb9227f0f bound to our chassis#033[00m
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.189 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b9b944cc-b37e-492b-abd7-b6bfb9227f0f#033[00m
Dec 13 04:00:51 np0005558241 NetworkManager[50376]: <info>  [1765616451.1927] device (tapda5ed241-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:00:51 np0005558241 nova_compute[248510]: 2025-12-13 09:00:51.192 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:51 np0005558241 NetworkManager[50376]: <info>  [1765616451.1938] device (tapda5ed241-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:00:51 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:51Z|01282|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 ovn-installed in OVS
Dec 13 04:00:51 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:51Z|01283|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 up in Southbound
Dec 13 04:00:51 np0005558241 nova_compute[248510]: 2025-12-13 09:00:51.196 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:51 np0005558241 nova_compute[248510]: 2025-12-13 09:00:51.197 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:51 np0005558241 nova_compute[248510]: 2025-12-13 09:00:51.199 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.205 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ea946fc3-6612-4011-8757-08426861a6cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.206 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb9b944cc-b1 in ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.212 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb9b944cc-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.213 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6881cc27-ec6d-4bae-bb03-fd83c694588c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.214 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a749fe17-3429-4e39-b637-bb758461a1f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:51 np0005558241 systemd-machined[210538]: New machine qemu-155-instance-0000007f.
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.232 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[787f5bc5-b460-4426-a8f6-bfb3bbeb35ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:51 np0005558241 systemd[1]: Started Virtual Machine qemu-155-instance-0000007f.
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.250 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a984d784-5127-4f4d-ab64-f02c895a4dbd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.286 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8df436d4-ba8e-4726-9bf2-2555503a20a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.294 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[db86190a-e8f1-4682-acac-7a747ef71c68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:51 np0005558241 NetworkManager[50376]: <info>  [1765616451.2985] manager: (tapb9b944cc-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/523)
Dec 13 04:00:51 np0005558241 nova_compute[248510]: 2025-12-13 09:00:51.301 248514 DEBUG nova.compute.manager [req-88f18f91-9f99-4036-89d8-ee7abae2ab72 req-4df5f693-ce7c-4b0b-95d1-91544ffeab26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-unplugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:51 np0005558241 nova_compute[248510]: 2025-12-13 09:00:51.301 248514 DEBUG oslo_concurrency.lockutils [req-88f18f91-9f99-4036-89d8-ee7abae2ab72 req-4df5f693-ce7c-4b0b-95d1-91544ffeab26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:51 np0005558241 nova_compute[248510]: 2025-12-13 09:00:51.301 248514 DEBUG oslo_concurrency.lockutils [req-88f18f91-9f99-4036-89d8-ee7abae2ab72 req-4df5f693-ce7c-4b0b-95d1-91544ffeab26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:51 np0005558241 nova_compute[248510]: 2025-12-13 09:00:51.301 248514 DEBUG oslo_concurrency.lockutils [req-88f18f91-9f99-4036-89d8-ee7abae2ab72 req-4df5f693-ce7c-4b0b-95d1-91544ffeab26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:51 np0005558241 nova_compute[248510]: 2025-12-13 09:00:51.302 248514 DEBUG nova.compute.manager [req-88f18f91-9f99-4036-89d8-ee7abae2ab72 req-4df5f693-ce7c-4b0b-95d1-91544ffeab26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] No waiting events found dispatching network-vif-unplugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:00:51 np0005558241 nova_compute[248510]: 2025-12-13 09:00:51.302 248514 WARNING nova.compute.manager [req-88f18f91-9f99-4036-89d8-ee7abae2ab72 req-4df5f693-ce7c-4b0b-95d1-91544ffeab26 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received unexpected event network-vif-unplugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 for instance with vm_state active and task_state reboot_started.#033[00m
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.336 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e127793c-ca80-44cb-8ca0-2de3bad25ce4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.339 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[443d1d14-8fc6-46d1-b6df-235082c27f23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:51 np0005558241 NetworkManager[50376]: <info>  [1765616451.3651] device (tapb9b944cc-b0): carrier: link connected
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.372 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[052fd6f5-905a-446a-9f29-3b4760ecb52a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.389 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a430c3-4d31-49d4-b4be-a4731eb71dc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9b944cc-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:4f:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 902858, 'reachable_time': 15798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375580, 'error': None, 'target': 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.402 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8f791bda-19a7-47cc-bb90-6c440620d6a5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:4fac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 902858, 'tstamp': 902858}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375581, 'error': None, 'target': 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.415 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3721b877-1351-468c-aba7-5ba266940620]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9b944cc-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:4f:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 902858, 'reachable_time': 15798, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 375582, 'error': None, 'target': 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.452 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[853ba085-ae88-4b96-905a-ca8b4372f214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.535 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[39284548-fb04-4f78-b4ce-9752360a43a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.537 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9b944cc-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.537 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.538 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9b944cc-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:00:51 np0005558241 NetworkManager[50376]: <info>  [1765616451.5408] manager: (tapb9b944cc-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/524)
Dec 13 04:00:51 np0005558241 kernel: tapb9b944cc-b0: entered promiscuous mode
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.544 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb9b944cc-b0, col_values=(('external_ids', {'iface-id': 'dacb29c7-c3ab-43a0-86c5-5add3f769729'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:00:51 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:51Z|01284|binding|INFO|Releasing lport dacb29c7-c3ab-43a0-86c5-5add3f769729 from this chassis (sb_readonly=0)
Dec 13 04:00:51 np0005558241 nova_compute[248510]: 2025-12-13 09:00:51.554 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.566 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b9b944cc-b37e-492b-abd7-b6bfb9227f0f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b9b944cc-b37e-492b-abd7-b6bfb9227f0f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.568 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f1b02732-e941-465f-a075-7fce84d66f8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.569 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-b9b944cc-b37e-492b-abd7-b6bfb9227f0f
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/b9b944cc-b37e-492b-abd7-b6bfb9227f0f.pid.haproxy
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID b9b944cc-b37e-492b-abd7-b6bfb9227f0f
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:00:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:51.570 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'env', 'PROCESS_TAG=haproxy-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b9b944cc-b37e-492b-abd7-b6bfb9227f0f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:00:51 np0005558241 nova_compute[248510]: 2025-12-13 09:00:51.952 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 9a653f54-8266-4f9c-ac5a-b4991534e9fb due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 13 04:00:51 np0005558241 nova_compute[248510]: 2025-12-13 09:00:51.952 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616451.9514248, 9a653f54-8266-4f9c-ac5a-b4991534e9fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:00:51 np0005558241 nova_compute[248510]: 2025-12-13 09:00:51.953 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] VM Resumed (Lifecycle Event)
Dec 13 04:00:51 np0005558241 nova_compute[248510]: 2025-12-13 09:00:51.959 248514 INFO nova.virt.libvirt.driver [-] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Instance running successfully.
Dec 13 04:00:51 np0005558241 nova_compute[248510]: 2025-12-13 09:00:51.960 248514 INFO nova.virt.libvirt.driver [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Instance soft rebooted successfully.
Dec 13 04:00:51 np0005558241 nova_compute[248510]: 2025-12-13 09:00:51.961 248514 DEBUG nova.compute.manager [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:00:51 np0005558241 podman[375655]: 2025-12-13 09:00:51.973785826 +0000 UTC m=+0.050970209 container create 114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:00:52 np0005558241 nova_compute[248510]: 2025-12-13 09:00:52.007 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:00:52 np0005558241 nova_compute[248510]: 2025-12-13 09:00:52.014 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:00:52 np0005558241 systemd[1]: Started libpod-conmon-114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b.scope.
Dec 13 04:00:52 np0005558241 nova_compute[248510]: 2025-12-13 09:00:52.024 248514 DEBUG nova.network.neutron [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Successfully updated port: 3ba83e70-35d2-4947-85b4-64f21eae817c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:00:52 np0005558241 nova_compute[248510]: 2025-12-13 09:00:52.035 248514 DEBUG oslo_concurrency.lockutils [None req-2b1c1406-32ab-482e-a737-d4db3f402adf a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:00:52 np0005558241 nova_compute[248510]: 2025-12-13 09:00:52.037 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616451.952874, 9a653f54-8266-4f9c-ac5a-b4991534e9fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:00:52 np0005558241 nova_compute[248510]: 2025-12-13 09:00:52.037 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] VM Started (Lifecycle Event)
Dec 13 04:00:52 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:00:52 np0005558241 nova_compute[248510]: 2025-12-13 09:00:52.040 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-046adf00-a09d-4e5c-89d5-e63a2ee02fc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:00:52 np0005558241 podman[375655]: 2025-12-13 09:00:51.946898428 +0000 UTC m=+0.024082841 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:00:52 np0005558241 nova_compute[248510]: 2025-12-13 09:00:52.041 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-046adf00-a09d-4e5c-89d5-e63a2ee02fc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:00:52 np0005558241 nova_compute[248510]: 2025-12-13 09:00:52.041 248514 DEBUG nova.network.neutron [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:00:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26999b8fd1c742d22d53c509972f804cd2c1a3b692246b65cd69c5f1f676657/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:52 np0005558241 podman[375655]: 2025-12-13 09:00:52.05483851 +0000 UTC m=+0.132022903 container init 114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:00:52 np0005558241 podman[375655]: 2025-12-13 09:00:52.059576796 +0000 UTC m=+0.136761189 container start 114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:00:52 np0005558241 nova_compute[248510]: 2025-12-13 09:00:52.074 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:00:52 np0005558241 nova_compute[248510]: 2025-12-13 09:00:52.078 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:00:52 np0005558241 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375671]: [NOTICE]   (375675) : New worker (375677) forked
Dec 13 04:00:52 np0005558241 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375671]: [NOTICE]   (375675) : Loading success.
Dec 13 04:00:52 np0005558241 nova_compute[248510]: 2025-12-13 09:00:52.126 248514 DEBUG nova.compute.manager [req-01779446-d904-458f-86f9-f916133161da req-40f80c68-1694-4371-87ba-7580d160b864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Received event network-changed-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:00:52 np0005558241 nova_compute[248510]: 2025-12-13 09:00:52.127 248514 DEBUG nova.compute.manager [req-01779446-d904-458f-86f9-f916133161da req-40f80c68-1694-4371-87ba-7580d160b864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Refreshing instance network info cache due to event network-changed-3ba83e70-35d2-4947-85b4-64f21eae817c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:00:52 np0005558241 nova_compute[248510]: 2025-12-13 09:00:52.127 248514 DEBUG oslo_concurrency.lockutils [req-01779446-d904-458f-86f9-f916133161da req-40f80c68-1694-4371-87ba-7580d160b864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-046adf00-a09d-4e5c-89d5-e63a2ee02fc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:00:52 np0005558241 nova_compute[248510]: 2025-12-13 09:00:52.380 248514 DEBUG nova.network.neutron [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:00:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2993: 321 pgs: 321 active+clean; 121 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 32 KiB/s wr, 3 op/s
Dec 13 04:00:52 np0005558241 nova_compute[248510]: 2025-12-13 09:00:52.574 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:00:52 np0005558241 nova_compute[248510]: 2025-12-13 09:00:52.795 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.237 248514 DEBUG nova.network.neutron [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Updating instance_info_cache with network_info: [{"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.270 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-046adf00-a09d-4e5c-89d5-e63a2ee02fc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.271 248514 DEBUG nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Instance network_info: |[{"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.271 248514 DEBUG oslo_concurrency.lockutils [req-01779446-d904-458f-86f9-f916133161da req-40f80c68-1694-4371-87ba-7580d160b864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-046adf00-a09d-4e5c-89d5-e63a2ee02fc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.272 248514 DEBUG nova.network.neutron [req-01779446-d904-458f-86f9-f916133161da req-40f80c68-1694-4371-87ba-7580d160b864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Refreshing network info cache for port 3ba83e70-35d2-4947-85b4-64f21eae817c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.275 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Start _get_guest_xml network_info=[{"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.279 248514 WARNING nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.290 248514 DEBUG nova.virt.libvirt.host [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.291 248514 DEBUG nova.virt.libvirt.host [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.296 248514 DEBUG nova.virt.libvirt.host [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.297 248514 DEBUG nova.virt.libvirt.host [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.298 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.298 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.299 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.299 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.300 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.300 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.301 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.301 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.302 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.302 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.303 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.303 248514 DEBUG nova.virt.hardware [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.307 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.392 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.392 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.393 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.393 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.393 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] No waiting events found dispatching network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.394 248514 WARNING nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received unexpected event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.394 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.394 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.395 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.395 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.395 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] No waiting events found dispatching network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.395 248514 WARNING nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received unexpected event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.396 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.396 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.396 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.397 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.397 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] No waiting events found dispatching network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.397 248514 WARNING nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received unexpected event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.397 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-unplugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.398 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.398 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.398 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.399 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] No waiting events found dispatching network-vif-unplugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.399 248514 WARNING nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received unexpected event network-vif-unplugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.399 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.400 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.400 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.400 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.401 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] No waiting events found dispatching network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.401 248514 WARNING nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received unexpected event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.401 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.401 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.402 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.402 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.402 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] No waiting events found dispatching network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.402 248514 WARNING nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received unexpected event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.403 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.403 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.403 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.403 248514 DEBUG oslo_concurrency.lockutils [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.404 248514 DEBUG nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] No waiting events found dispatching network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.404 248514 WARNING nova.compute.manager [req-b761dd94-7e6c-4141-825b-7618a4c2ec52 req-9d410572-9bfa-41ff-ab26-b92c53ce7cab 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received unexpected event network-vif-plugged-da5ed241-e9aa-44e8-ac5a-7dcb30431922 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:00:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:00:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2355443234' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.839 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.871 248514 DEBUG nova.storage.rbd_utils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:00:53 np0005558241 nova_compute[248510]: 2025-12-13 09:00:53.876 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:00:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:00:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4196716338' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.403 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.406 248514 DEBUG nova.virt.libvirt.vif [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1430432318',display_name='tempest-TestNetworkBasicOps-server-1430432318',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1430432318',id=128,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLZwYzQivS6OmxYADpL/9VcE8wCVSuV+/vnYuB1qCFxjkNKqDVEv7izs9w56y2Ibc/1G1HwpaZexOYOvh1sTJGy09M2esErCHKSz7Mr00MXTFv2qZyMndAh7JN+dvFj7tg==',key_name='tempest-TestNetworkBasicOps-481585601',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-p1inmefq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:00:49Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=046adf00-a09d-4e5c-89d5-e63a2ee02fc1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.406 248514 DEBUG nova.network.os_vif_util [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.407 248514 DEBUG nova.network.os_vif_util [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.409 248514 DEBUG nova.objects.instance [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid 046adf00-a09d-4e5c-89d5-e63a2ee02fc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:00:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2994: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.436 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:00:54 np0005558241 nova_compute[248510]:  <uuid>046adf00-a09d-4e5c-89d5-e63a2ee02fc1</uuid>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:  <name>instance-00000080</name>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkBasicOps-server-1430432318</nova:name>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:00:53</nova:creationTime>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:00:54 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:        <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:        <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:        <nova:port uuid="3ba83e70-35d2-4947-85b4-64f21eae817c">
Dec 13 04:00:54 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <entry name="serial">046adf00-a09d-4e5c-89d5-e63a2ee02fc1</entry>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <entry name="uuid">046adf00-a09d-4e5c-89d5-e63a2ee02fc1</entry>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk">
Dec 13 04:00:54 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:00:54 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk.config">
Dec 13 04:00:54 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:00:54 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:1a:55:74"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <target dev="tap3ba83e70-35"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1/console.log" append="off"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:00:54 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:00:54 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:00:54 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:00:54 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.438 248514 DEBUG nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Preparing to wait for external event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.438 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.438 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.438 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.439 248514 DEBUG nova.virt.libvirt.vif [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1430432318',display_name='tempest-TestNetworkBasicOps-server-1430432318',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1430432318',id=128,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLZwYzQivS6OmxYADpL/9VcE8wCVSuV+/vnYuB1qCFxjkNKqDVEv7izs9w56y2Ibc/1G1HwpaZexOYOvh1sTJGy09M2esErCHKSz7Mr00MXTFv2qZyMndAh7JN+dvFj7tg==',key_name='tempest-TestNetworkBasicOps-481585601',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-p1inmefq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:00:49Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=046adf00-a09d-4e5c-89d5-e63a2ee02fc1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.439 248514 DEBUG nova.network.os_vif_util [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.440 248514 DEBUG nova.network.os_vif_util [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.440 248514 DEBUG os_vif [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.441 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.441 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.441 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.444 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.444 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ba83e70-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.445 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3ba83e70-35, col_values=(('external_ids', {'iface-id': '3ba83e70-35d2-4947-85b4-64f21eae817c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:55:74', 'vm-uuid': '046adf00-a09d-4e5c-89d5-e63a2ee02fc1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:54 np0005558241 NetworkManager[50376]: <info>  [1765616454.4484] manager: (tap3ba83e70-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/525)
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.447 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.452 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.456 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.458 248514 INFO os_vif [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35')#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.526 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.527 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.527 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:1a:55:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.528 248514 INFO nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Using config drive#033[00m
Dec 13 04:00:54 np0005558241 nova_compute[248510]: 2025-12-13 09:00:54.554 248514 DEBUG nova.storage.rbd_utils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:00:55 np0005558241 nova_compute[248510]: 2025-12-13 09:00:55.309 248514 INFO nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Creating config drive at /var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1/disk.config#033[00m
Dec 13 04:00:55 np0005558241 nova_compute[248510]: 2025-12-13 09:00:55.314 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplk1y0x9w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.438 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.439 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.439 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:55 np0005558241 nova_compute[248510]: 2025-12-13 09:00:55.470 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplk1y0x9w" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:00:55 np0005558241 nova_compute[248510]: 2025-12-13 09:00:55.502 248514 DEBUG nova.storage.rbd_utils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:00:55 np0005558241 nova_compute[248510]: 2025-12-13 09:00:55.507 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1/disk.config 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:00:55 np0005558241 nova_compute[248510]: 2025-12-13 09:00:55.669 248514 DEBUG oslo_concurrency.processutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1/disk.config 046adf00-a09d-4e5c-89d5-e63a2ee02fc1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:00:55 np0005558241 nova_compute[248510]: 2025-12-13 09:00:55.670 248514 INFO nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Deleting local config drive /var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1/disk.config because it was imported into RBD.#033[00m
Dec 13 04:00:55 np0005558241 kernel: tap3ba83e70-35: entered promiscuous mode
Dec 13 04:00:55 np0005558241 NetworkManager[50376]: <info>  [1765616455.7363] manager: (tap3ba83e70-35): new Tun device (/org/freedesktop/NetworkManager/Devices/526)
Dec 13 04:00:55 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:55Z|01285|binding|INFO|Claiming lport 3ba83e70-35d2-4947-85b4-64f21eae817c for this chassis.
Dec 13 04:00:55 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:55Z|01286|binding|INFO|3ba83e70-35d2-4947-85b4-64f21eae817c: Claiming fa:16:3e:1a:55:74 10.100.0.6
Dec 13 04:00:55 np0005558241 nova_compute[248510]: 2025-12-13 09:00:55.743 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.747 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:55:74 10.100.0.6'], port_security=['fa:16:3e:1a:55:74 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '046adf00-a09d-4e5c-89d5-e63a2ee02fc1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '213b92ea-ddd9-4749-8abe-382a90d446f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ea2504ae-fad7-498d-b82b-9d089e69588d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3ba83e70-35d2-4947-85b4-64f21eae817c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.748 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3ba83e70-35d2-4947-85b4-64f21eae817c in datapath 9da7b572-116c-40f0-9b1e-0183fa9d3f87 bound to our chassis#033[00m
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.750 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9da7b572-116c-40f0-9b1e-0183fa9d3f87#033[00m
Dec 13 04:00:55 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:55Z|01287|binding|INFO|Setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c ovn-installed in OVS
Dec 13 04:00:55 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:55Z|01288|binding|INFO|Setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c up in Southbound
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.761 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d8834319-cffe-43e1-b65c-1b068e42a9bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.762 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9da7b572-11 in ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:00:55 np0005558241 nova_compute[248510]: 2025-12-13 09:00:55.762 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.765 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9da7b572-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.765 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3d0f83ca-aee2-4954-b3e6-45d54857a4f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.767 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cd25e07c-4f5c-4137-89dc-1fea3746f59d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:55 np0005558241 systemd-udevd[375822]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:00:55 np0005558241 systemd-machined[210538]: New machine qemu-156-instance-00000080.
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.786 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[08648dcb-6f66-451a-addc-e81b0558ec98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:55 np0005558241 systemd[1]: Started Virtual Machine qemu-156-instance-00000080.
Dec 13 04:00:55 np0005558241 NetworkManager[50376]: <info>  [1765616455.7916] device (tap3ba83e70-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:00:55 np0005558241 NetworkManager[50376]: <info>  [1765616455.7925] device (tap3ba83e70-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.805 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f7b39f-67a5-44af-84c4-835c4ccf6bdb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.846 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7827c4a5-c124-47de-942e-d5dca4cf13e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.850 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[52173a04-87a9-418f-8a06-b852d1dcc8ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:55 np0005558241 NetworkManager[50376]: <info>  [1765616455.8518] manager: (tap9da7b572-10): new Veth device (/org/freedesktop/NetworkManager/Devices/527)
Dec 13 04:00:55 np0005558241 systemd-udevd[375827]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.889 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8164c346-9822-4b7c-bb55-9b315e2cc4c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.894 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d75e5f-99b6-42b8-9d63-1008c9397398]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:55 np0005558241 NetworkManager[50376]: <info>  [1765616455.9220] device (tap9da7b572-10): carrier: link connected
Dec 13 04:00:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.929 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[de0e13b7-2ca9-421b-be0d-02604da4f367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.954 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe3d5df-7161-491f-8358-f55cb3f120b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9da7b572-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:10:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903313, 'reachable_time': 32396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375855, 'error': None, 'target': 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.975 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5eb7c2b-589b-44bb-ac06-76229e640d5b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:101b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 903313, 'tstamp': 903313}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375856, 'error': None, 'target': 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:55.995 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e630b34f-8adb-426e-be17-d17536e47f08]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9da7b572-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:10:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 374], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903313, 'reachable_time': 32396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 375857, 'error': None, 'target': 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.037 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5d3477e7-32e1-41e2-a949-ff07d785be9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.109 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0b545de9-9b49-4cfc-94f2-944e035b6ee9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.111 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9da7b572-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.111 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.112 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9da7b572-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.113 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:56 np0005558241 NetworkManager[50376]: <info>  [1765616456.1141] manager: (tap9da7b572-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/528)
Dec 13 04:00:56 np0005558241 kernel: tap9da7b572-10: entered promiscuous mode
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.115 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9da7b572-10, col_values=(('external_ids', {'iface-id': '92adc7c4-81cb-40ce-bd1a-0498c06b06d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:00:56 np0005558241 ovn_controller[148476]: 2025-12-13T09:00:56Z|01289|binding|INFO|Releasing lport 92adc7c4-81cb-40ce-bd1a-0498c06b06d5 from this chassis (sb_readonly=0)
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.130 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.132 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9da7b572-116c-40f0-9b1e-0183fa9d3f87.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9da7b572-116c-40f0-9b1e-0183fa9d3f87.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.132 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1f164950-f2f0-4094-9bec-2b08741b8442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.134 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-9da7b572-116c-40f0-9b1e-0183fa9d3f87
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/9da7b572-116c-40f0-9b1e-0183fa9d3f87.pid.haproxy
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 9da7b572-116c-40f0-9b1e-0183fa9d3f87
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:00:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:00:56.134 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'env', 'PROCESS_TAG=haproxy-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9da7b572-116c-40f0-9b1e-0183fa9d3f87.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.326 248514 DEBUG nova.compute.manager [req-eb69779f-2765-48f4-b429-6aeed87212cc req-45fc4fcf-3ed3-4a63-a3fe-6971f6d8f99f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Received event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.334 248514 DEBUG oslo_concurrency.lockutils [req-eb69779f-2765-48f4-b429-6aeed87212cc req-45fc4fcf-3ed3-4a63-a3fe-6971f6d8f99f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.334 248514 DEBUG oslo_concurrency.lockutils [req-eb69779f-2765-48f4-b429-6aeed87212cc req-45fc4fcf-3ed3-4a63-a3fe-6971f6d8f99f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.335 248514 DEBUG oslo_concurrency.lockutils [req-eb69779f-2765-48f4-b429-6aeed87212cc req-45fc4fcf-3ed3-4a63-a3fe-6971f6d8f99f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.336 248514 DEBUG nova.compute.manager [req-eb69779f-2765-48f4-b429-6aeed87212cc req-45fc4fcf-3ed3-4a63-a3fe-6971f6d8f99f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Processing event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:00:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2995: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.515 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616456.514865, 046adf00-a09d-4e5c-89d5-e63a2ee02fc1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.517 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] VM Started (Lifecycle Event)#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.521 248514 DEBUG nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.526 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.530 248514 INFO nova.virt.libvirt.driver [-] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Instance spawned successfully.#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.532 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:00:56 np0005558241 podman[375927]: 2025-12-13 09:00:56.560522477 +0000 UTC m=+0.069508193 container create 69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.565 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.574 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.586 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.591 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.592 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.592 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.593 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.593 248514 DEBUG nova.virt.libvirt.driver [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.599 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.600 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616456.5152326, 046adf00-a09d-4e5c-89d5-e63a2ee02fc1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.600 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:00:56 np0005558241 podman[375927]: 2025-12-13 09:00:56.521945882 +0000 UTC m=+0.030931618 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:00:56 np0005558241 systemd[1]: Started libpod-conmon-69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56.scope.
Dec 13 04:00:56 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:00:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf7277745e8100040655f44041079f812c029e04aac3444da67e2a87f1405fc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.643 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.647 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616456.525831, 046adf00-a09d-4e5c-89d5-e63a2ee02fc1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.648 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:00:56 np0005558241 podman[375927]: 2025-12-13 09:00:56.653525313 +0000 UTC m=+0.162511049 container init 69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 04:00:56 np0005558241 podman[375927]: 2025-12-13 09:00:56.660872213 +0000 UTC m=+0.169857929 container start 69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.678 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.683 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.689 248514 INFO nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Took 6.83 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.689 248514 DEBUG nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:00:56 np0005558241 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[375943]: [NOTICE]   (375947) : New worker (375949) forked
Dec 13 04:00:56 np0005558241 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[375943]: [NOTICE]   (375947) : Loading success.
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.721 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.756 248514 INFO nova.compute.manager [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Took 8.07 seconds to build instance.#033[00m
Dec 13 04:00:56 np0005558241 nova_compute[248510]: 2025-12-13 09:00:56.773 248514 DEBUG oslo_concurrency.lockutils [None req-7bb695fe-d01d-4b4e-96c5-6db41abc5a78 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:57 np0005558241 nova_compute[248510]: 2025-12-13 09:00:57.055 248514 DEBUG nova.network.neutron [req-01779446-d904-458f-86f9-f916133161da req-40f80c68-1694-4371-87ba-7580d160b864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Updated VIF entry in instance network info cache for port 3ba83e70-35d2-4947-85b4-64f21eae817c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:00:57 np0005558241 nova_compute[248510]: 2025-12-13 09:00:57.056 248514 DEBUG nova.network.neutron [req-01779446-d904-458f-86f9-f916133161da req-40f80c68-1694-4371-87ba-7580d160b864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Updating instance_info_cache with network_info: [{"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:00:57 np0005558241 nova_compute[248510]: 2025-12-13 09:00:57.085 248514 DEBUG oslo_concurrency.lockutils [req-01779446-d904-458f-86f9-f916133161da req-40f80c68-1694-4371-87ba-7580d160b864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-046adf00-a09d-4e5c-89d5-e63a2ee02fc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:00:57 np0005558241 nova_compute[248510]: 2025-12-13 09:00:57.798 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:00:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2996: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Dec 13 04:00:58 np0005558241 nova_compute[248510]: 2025-12-13 09:00:58.626 248514 DEBUG nova.compute.manager [req-136d294c-fae8-485c-8445-ee8a197b49d7 req-5599166f-340d-4ff3-8600-602f5d62c3ae 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Received event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:00:58 np0005558241 nova_compute[248510]: 2025-12-13 09:00:58.627 248514 DEBUG oslo_concurrency.lockutils [req-136d294c-fae8-485c-8445-ee8a197b49d7 req-5599166f-340d-4ff3-8600-602f5d62c3ae 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:00:58 np0005558241 nova_compute[248510]: 2025-12-13 09:00:58.627 248514 DEBUG oslo_concurrency.lockutils [req-136d294c-fae8-485c-8445-ee8a197b49d7 req-5599166f-340d-4ff3-8600-602f5d62c3ae 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:00:58 np0005558241 nova_compute[248510]: 2025-12-13 09:00:58.627 248514 DEBUG oslo_concurrency.lockutils [req-136d294c-fae8-485c-8445-ee8a197b49d7 req-5599166f-340d-4ff3-8600-602f5d62c3ae 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:00:58 np0005558241 nova_compute[248510]: 2025-12-13 09:00:58.627 248514 DEBUG nova.compute.manager [req-136d294c-fae8-485c-8445-ee8a197b49d7 req-5599166f-340d-4ff3-8600-602f5d62c3ae 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] No waiting events found dispatching network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:00:58 np0005558241 nova_compute[248510]: 2025-12-13 09:00:58.628 248514 WARNING nova.compute.manager [req-136d294c-fae8-485c-8445-ee8a197b49d7 req-5599166f-340d-4ff3-8600-602f5d62c3ae 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Received unexpected event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c for instance with vm_state active and task_state None.#033[00m
Dec 13 04:00:59 np0005558241 nova_compute[248510]: 2025-12-13 09:00:59.448 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2997: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 171 op/s
Dec 13 04:01:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:01:01 np0005558241 nova_compute[248510]: 2025-12-13 09:01:01.255 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:01:01 np0005558241 nova_compute[248510]: 2025-12-13 09:01:01.403 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 9a653f54-8266-4f9c-ac5a-b4991534e9fb _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 13 04:01:01 np0005558241 nova_compute[248510]: 2025-12-13 09:01:01.403 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 046adf00-a09d-4e5c-89d5-e63a2ee02fc1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 13 04:01:01 np0005558241 nova_compute[248510]: 2025-12-13 09:01:01.404 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:01 np0005558241 nova_compute[248510]: 2025-12-13 09:01:01.404 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:01 np0005558241 nova_compute[248510]: 2025-12-13 09:01:01.405 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:01 np0005558241 nova_compute[248510]: 2025-12-13 09:01:01.405 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:01 np0005558241 nova_compute[248510]: 2025-12-13 09:01:01.463 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:01 np0005558241 nova_compute[248510]: 2025-12-13 09:01:01.464 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2998: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Dec 13 04:01:02 np0005558241 nova_compute[248510]: 2025-12-13 09:01:02.799 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:03 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:03Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cc:05:2b 10.100.0.13
Dec 13 04:01:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v2999: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 196 op/s
Dec 13 04:01:04 np0005558241 nova_compute[248510]: 2025-12-13 09:01:04.451 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:01:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3000: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 22 KiB/s wr, 100 op/s
Dec 13 04:01:06 np0005558241 nova_compute[248510]: 2025-12-13 09:01:06.508 248514 DEBUG nova.compute.manager [req-7af86fff-bae1-4118-89e1-fe34401529ac req-d473f8bf-ba33-419c-ae51-c951f6aaf5df 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Received event network-changed-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:01:06 np0005558241 nova_compute[248510]: 2025-12-13 09:01:06.509 248514 DEBUG nova.compute.manager [req-7af86fff-bae1-4118-89e1-fe34401529ac req-d473f8bf-ba33-419c-ae51-c951f6aaf5df 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Refreshing instance network info cache due to event network-changed-3ba83e70-35d2-4947-85b4-64f21eae817c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:01:06 np0005558241 nova_compute[248510]: 2025-12-13 09:01:06.509 248514 DEBUG oslo_concurrency.lockutils [req-7af86fff-bae1-4118-89e1-fe34401529ac req-d473f8bf-ba33-419c-ae51-c951f6aaf5df 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-046adf00-a09d-4e5c-89d5-e63a2ee02fc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:01:06 np0005558241 nova_compute[248510]: 2025-12-13 09:01:06.510 248514 DEBUG oslo_concurrency.lockutils [req-7af86fff-bae1-4118-89e1-fe34401529ac req-d473f8bf-ba33-419c-ae51-c951f6aaf5df 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-046adf00-a09d-4e5c-89d5-e63a2ee02fc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:01:06 np0005558241 nova_compute[248510]: 2025-12-13 09:01:06.510 248514 DEBUG nova.network.neutron [req-7af86fff-bae1-4118-89e1-fe34401529ac req-d473f8bf-ba33-419c-ae51-c951f6aaf5df 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Refreshing network info cache for port 3ba83e70-35d2-4947-85b4-64f21eae817c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:01:06 np0005558241 nova_compute[248510]: 2025-12-13 09:01:06.922 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:01:06 np0005558241 podman[375971]: 2025-12-13 09:01:06.983122946 +0000 UTC m=+0.067170245 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 13 04:01:06 np0005558241 podman[375970]: 2025-12-13 09:01:06.99022836 +0000 UTC m=+0.071917842 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=multipathd)
Dec 13 04:01:07 np0005558241 podman[375969]: 2025-12-13 09:01:07.020208184 +0000 UTC m=+0.106215841 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Dec 13 04:01:07 np0005558241 nova_compute[248510]: 2025-12-13 09:01:07.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:01:07 np0005558241 nova_compute[248510]: 2025-12-13 09:01:07.842 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3001: 321 pgs: 321 active+clean; 173 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 475 KiB/s wr, 123 op/s
Dec 13 04:01:08 np0005558241 nova_compute[248510]: 2025-12-13 09:01:08.714 248514 DEBUG oslo_concurrency.lockutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:08 np0005558241 nova_compute[248510]: 2025-12-13 09:01:08.715 248514 DEBUG oslo_concurrency.lockutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:08 np0005558241 nova_compute[248510]: 2025-12-13 09:01:08.716 248514 DEBUG oslo_concurrency.lockutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:08 np0005558241 nova_compute[248510]: 2025-12-13 09:01:08.716 248514 DEBUG oslo_concurrency.lockutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:08 np0005558241 nova_compute[248510]: 2025-12-13 09:01:08.716 248514 DEBUG oslo_concurrency.lockutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:08 np0005558241 nova_compute[248510]: 2025-12-13 09:01:08.717 248514 INFO nova.compute.manager [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Terminating instance#033[00m
Dec 13 04:01:08 np0005558241 nova_compute[248510]: 2025-12-13 09:01:08.718 248514 DEBUG nova.compute.manager [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:01:08 np0005558241 kernel: tap3ba83e70-35 (unregistering): left promiscuous mode
Dec 13 04:01:08 np0005558241 NetworkManager[50376]: <info>  [1765616468.7633] device (tap3ba83e70-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:01:08 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:08Z|01290|binding|INFO|Releasing lport 3ba83e70-35d2-4947-85b4-64f21eae817c from this chassis (sb_readonly=0)
Dec 13 04:01:08 np0005558241 nova_compute[248510]: 2025-12-13 09:01:08.774 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:08 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:08Z|01291|binding|INFO|Setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c down in Southbound
Dec 13 04:01:08 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:08Z|01292|binding|INFO|Removing iface tap3ba83e70-35 ovn-installed in OVS
Dec 13 04:01:08 np0005558241 nova_compute[248510]: 2025-12-13 09:01:08.776 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:08 np0005558241 nova_compute[248510]: 2025-12-13 09:01:08.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:08 np0005558241 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d00000080.scope: Deactivated successfully.
Dec 13 04:01:08 np0005558241 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d00000080.scope: Consumed 12.562s CPU time.
Dec 13 04:01:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:08.823 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:55:74 10.100.0.6'], port_security=['fa:16:3e:1a:55:74 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '046adf00-a09d-4e5c-89d5-e63a2ee02fc1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '213b92ea-ddd9-4749-8abe-382a90d446f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ea2504ae-fad7-498d-b82b-9d089e69588d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3ba83e70-35d2-4947-85b4-64f21eae817c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:01:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:08.824 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3ba83e70-35d2-4947-85b4-64f21eae817c in datapath 9da7b572-116c-40f0-9b1e-0183fa9d3f87 unbound from our chassis#033[00m
Dec 13 04:01:08 np0005558241 systemd-machined[210538]: Machine qemu-156-instance-00000080 terminated.
Dec 13 04:01:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:08.826 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9da7b572-116c-40f0-9b1e-0183fa9d3f87, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:01:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:08.827 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7c378d9c-e1a1-49a7-b382-0a9bfd8aa918]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:08.828 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 namespace which is not needed anymore#033[00m
Dec 13 04:01:08 np0005558241 kernel: tap3ba83e70-35: entered promiscuous mode
Dec 13 04:01:08 np0005558241 NetworkManager[50376]: <info>  [1765616468.9450] manager: (tap3ba83e70-35): new Tun device (/org/freedesktop/NetworkManager/Devices/529)
Dec 13 04:01:08 np0005558241 systemd-udevd[376034]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:01:08 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:08Z|01293|binding|INFO|Claiming lport 3ba83e70-35d2-4947-85b4-64f21eae817c for this chassis.
Dec 13 04:01:08 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:08Z|01294|binding|INFO|3ba83e70-35d2-4947-85b4-64f21eae817c: Claiming fa:16:3e:1a:55:74 10.100.0.6
Dec 13 04:01:08 np0005558241 nova_compute[248510]: 2025-12-13 09:01:08.992 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:08 np0005558241 kernel: tap3ba83e70-35 (unregistering): left promiscuous mode
Dec 13 04:01:08 np0005558241 virtnodedevd[248873]: ethtool ioctl error on tap3ba83e70-35: No such device
Dec 13 04:01:09 np0005558241 virtnodedevd[248873]: ethtool ioctl error on tap3ba83e70-35: No such device
Dec 13 04:01:09 np0005558241 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[375943]: [NOTICE]   (375947) : haproxy version is 2.8.14-c23fe91
Dec 13 04:01:09 np0005558241 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[375943]: [NOTICE]   (375947) : path to executable is /usr/sbin/haproxy
Dec 13 04:01:09 np0005558241 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[375943]: [WARNING]  (375947) : Exiting Master process...
Dec 13 04:01:09 np0005558241 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[375943]: [WARNING]  (375947) : Exiting Master process...
Dec 13 04:01:09 np0005558241 virtnodedevd[248873]: ethtool ioctl error on tap3ba83e70-35: No such device
Dec 13 04:01:09 np0005558241 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[375943]: [ALERT]    (375947) : Current worker (375949) exited with code 143 (Terminated)
Dec 13 04:01:09 np0005558241 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[375943]: [WARNING]  (375947) : All workers exited. Exiting... (0)
Dec 13 04:01:09 np0005558241 systemd[1]: libpod-69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56.scope: Deactivated successfully.
Dec 13 04:01:09 np0005558241 virtnodedevd[248873]: ethtool ioctl error on tap3ba83e70-35: No such device
Dec 13 04:01:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.020 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:55:74 10.100.0.6'], port_security=['fa:16:3e:1a:55:74 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '046adf00-a09d-4e5c-89d5-e63a2ee02fc1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '213b92ea-ddd9-4749-8abe-382a90d446f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ea2504ae-fad7-498d-b82b-9d089e69588d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3ba83e70-35d2-4947-85b4-64f21eae817c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.022 248514 INFO nova.virt.libvirt.driver [-] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Instance destroyed successfully.#033[00m
Dec 13 04:01:09 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:09Z|01295|binding|INFO|Setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c ovn-installed in OVS
Dec 13 04:01:09 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:09Z|01296|binding|INFO|Setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c up in Southbound
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.022 248514 DEBUG nova.objects.instance [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid 046adf00-a09d-4e5c-89d5-e63a2ee02fc1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:01:09 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:09Z|01297|binding|INFO|Releasing lport 3ba83e70-35d2-4947-85b4-64f21eae817c from this chassis (sb_readonly=1)
Dec 13 04:01:09 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:09Z|01298|if_status|INFO|Dropped 2 log messages in last 19 seconds (most recently, 19 seconds ago) due to excessive rate
Dec 13 04:01:09 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:09Z|01299|if_status|INFO|Not setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c down as sb is readonly
Dec 13 04:01:09 np0005558241 podman[376055]: 2025-12-13 09:01:09.024877653 +0000 UTC m=+0.096360089 container died 69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:01:09 np0005558241 virtnodedevd[248873]: ethtool ioctl error on tap3ba83e70-35: No such device
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.024 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:09 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:09Z|01300|binding|INFO|Removing iface tap3ba83e70-35 ovn-installed in OVS
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.027 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:09 np0005558241 virtnodedevd[248873]: ethtool ioctl error on tap3ba83e70-35: No such device
Dec 13 04:01:09 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:09Z|01301|binding|INFO|Releasing lport 3ba83e70-35d2-4947-85b4-64f21eae817c from this chassis (sb_readonly=0)
Dec 13 04:01:09 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:09Z|01302|binding|INFO|Setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c down in Southbound
Dec 13 04:01:09 np0005558241 virtnodedevd[248873]: ethtool ioctl error on tap3ba83e70-35: No such device
Dec 13 04:01:09 np0005558241 virtnodedevd[248873]: ethtool ioctl error on tap3ba83e70-35: No such device
Dec 13 04:01:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.041 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:55:74 10.100.0.6'], port_security=['fa:16:3e:1a:55:74 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '046adf00-a09d-4e5c-89d5-e63a2ee02fc1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '213b92ea-ddd9-4749-8abe-382a90d446f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ea2504ae-fad7-498d-b82b-9d089e69588d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3ba83e70-35d2-4947-85b4-64f21eae817c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.047 248514 DEBUG nova.virt.libvirt.vif [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:00:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1430432318',display_name='tempest-TestNetworkBasicOps-server-1430432318',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1430432318',id=128,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLZwYzQivS6OmxYADpL/9VcE8wCVSuV+/vnYuB1qCFxjkNKqDVEv7izs9w56y2Ibc/1G1HwpaZexOYOvh1sTJGy09M2esErCHKSz7Mr00MXTFv2qZyMndAh7JN+dvFj7tg==',key_name='tempest-TestNetworkBasicOps-481585601',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:00:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-p1inmefq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:00:56Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=046adf00-a09d-4e5c-89d5-e63a2ee02fc1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.048 248514 DEBUG nova.network.os_vif_util [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.048 248514 DEBUG nova.network.os_vif_util [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.049 248514 DEBUG os_vif [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.051 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.051 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ba83e70-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.053 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.054 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.056 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:09 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56-userdata-shm.mount: Deactivated successfully.
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.059 248514 INFO os_vif [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35')#033[00m
Dec 13 04:01:09 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7bf7277745e8100040655f44041079f812c029e04aac3444da67e2a87f1405fc-merged.mount: Deactivated successfully.
Dec 13 04:01:09 np0005558241 podman[376055]: 2025-12-13 09:01:09.082472053 +0000 UTC m=+0.153954479 container cleanup 69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 04:01:09 np0005558241 systemd[1]: libpod-conmon-69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56.scope: Deactivated successfully.
Dec 13 04:01:09 np0005558241 podman[376121]: 2025-12-13 09:01:09.159711314 +0000 UTC m=+0.053355397 container remove 69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 04:01:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.169 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aea20d79-21fd-46dc-84d8-1f31f1deb9ea]: (4, ('Sat Dec 13 09:01:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 (69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56)\n69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56\nSat Dec 13 09:01:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 (69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56)\n69f785e1a04f52a7d1b7260b93261f5e300574baf7b879a25a453e442942bf56\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.172 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ead3f9b3-9de7-476d-a9af-9b90f466977f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.173 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9da7b572-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.174 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:09 np0005558241 kernel: tap9da7b572-10: left promiscuous mode
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.191 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.194 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[60937b4e-e1de-481e-b78d-c896bc38ae84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.206 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5b0eef-1d06-4f06-b6b8-33c86cad8430]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.207 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[57fee616-3634-4792-8548-d275f421a673]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.225 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c96d1f-cc65-4921-853e-fcd3628e6bfd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903305, 'reachable_time': 30137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376136, 'error': None, 'target': 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:09 np0005558241 systemd[1]: run-netns-ovnmeta\x2d9da7b572\x2d116c\x2d40f0\x2d9b1e\x2d0183fa9d3f87.mount: Deactivated successfully.
Dec 13 04:01:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.228 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:01:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.228 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[1396ad81-7296-4dd9-a60f-bc5d97bd8131]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.230 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3ba83e70-35d2-4947-85b4-64f21eae817c in datapath 9da7b572-116c-40f0-9b1e-0183fa9d3f87 unbound from our chassis#033[00m
Dec 13 04:01:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.233 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9da7b572-116c-40f0-9b1e-0183fa9d3f87, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:01:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.234 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7688ae69-896a-4527-9f0c-09c84d9701ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.235 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3ba83e70-35d2-4947-85b4-64f21eae817c in datapath 9da7b572-116c-40f0-9b1e-0183fa9d3f87 unbound from our chassis#033[00m
Dec 13 04:01:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.237 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9da7b572-116c-40f0-9b1e-0183fa9d3f87, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:01:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:09.238 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb8e2fc-d08d-4131-a4ee-a9f67f2a82a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.328 248514 INFO nova.virt.libvirt.driver [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Deleting instance files /var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1_del#033[00m
Dec 13 04:01:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:01:09
Dec 13 04:01:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:01:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:01:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.control', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'images']
Dec 13 04:01:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.330 248514 INFO nova.virt.libvirt.driver [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Deletion of /var/lib/nova/instances/046adf00-a09d-4e5c-89d5-e63a2ee02fc1_del complete#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.428 248514 INFO nova.compute.manager [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.429 248514 DEBUG oslo.service.loopingcall [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.429 248514 DEBUG nova.compute.manager [-] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.430 248514 DEBUG nova.network.neutron [-] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.668 248514 DEBUG nova.compute.manager [req-b3a372a3-4847-470c-acae-4d7b5efa5a65 req-531b35a9-41fd-4275-bf4b-fb5d34aa0656 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Received event network-vif-unplugged-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.668 248514 DEBUG oslo_concurrency.lockutils [req-b3a372a3-4847-470c-acae-4d7b5efa5a65 req-531b35a9-41fd-4275-bf4b-fb5d34aa0656 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.669 248514 DEBUG oslo_concurrency.lockutils [req-b3a372a3-4847-470c-acae-4d7b5efa5a65 req-531b35a9-41fd-4275-bf4b-fb5d34aa0656 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.669 248514 DEBUG oslo_concurrency.lockutils [req-b3a372a3-4847-470c-acae-4d7b5efa5a65 req-531b35a9-41fd-4275-bf4b-fb5d34aa0656 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.670 248514 DEBUG nova.compute.manager [req-b3a372a3-4847-470c-acae-4d7b5efa5a65 req-531b35a9-41fd-4275-bf4b-fb5d34aa0656 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] No waiting events found dispatching network-vif-unplugged-3ba83e70-35d2-4947-85b4-64f21eae817c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:01:09 np0005558241 nova_compute[248510]: 2025-12-13 09:01:09.670 248514 DEBUG nova.compute.manager [req-b3a372a3-4847-470c-acae-4d7b5efa5a65 req-531b35a9-41fd-4275-bf4b-fb5d34aa0656 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Received event network-vif-unplugged-3ba83e70-35d2-4947-85b4-64f21eae817c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:01:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:01:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:01:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:01:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:01:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:01:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:01:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3002: 321 pgs: 321 active+clean; 178 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 135 op/s
Dec 13 04:01:10 np0005558241 nova_compute[248510]: 2025-12-13 09:01:10.726 248514 DEBUG nova.network.neutron [req-7af86fff-bae1-4118-89e1-fe34401529ac req-d473f8bf-ba33-419c-ae51-c951f6aaf5df 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Updated VIF entry in instance network info cache for port 3ba83e70-35d2-4947-85b4-64f21eae817c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:01:10 np0005558241 nova_compute[248510]: 2025-12-13 09:01:10.727 248514 DEBUG nova.network.neutron [req-7af86fff-bae1-4118-89e1-fe34401529ac req-d473f8bf-ba33-419c-ae51-c951f6aaf5df 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Updating instance_info_cache with network_info: [{"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:01:10 np0005558241 nova_compute[248510]: 2025-12-13 09:01:10.830 248514 INFO nova.compute.manager [None req-47fe4a6f-56ae-4101-89a1-d63f3ea6256f a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Get console output#033[00m
Dec 13 04:01:10 np0005558241 nova_compute[248510]: 2025-12-13 09:01:10.836 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 04:01:10 np0005558241 nova_compute[248510]: 2025-12-13 09:01:10.855 248514 DEBUG oslo_concurrency.lockutils [req-7af86fff-bae1-4118-89e1-fe34401529ac req-d473f8bf-ba33-419c-ae51-c951f6aaf5df 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-046adf00-a09d-4e5c-89d5-e63a2ee02fc1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:01:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:01:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:01:11 np0005558241 nova_compute[248510]: 2025-12-13 09:01:11.817 248514 DEBUG nova.compute.manager [req-951f4fd5-888a-49a2-8a3c-5d6e36ee8557 req-74d8b7c4-f068-4ab8-b481-3450b9c9b2a2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Received event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:01:11 np0005558241 nova_compute[248510]: 2025-12-13 09:01:11.818 248514 DEBUG oslo_concurrency.lockutils [req-951f4fd5-888a-49a2-8a3c-5d6e36ee8557 req-74d8b7c4-f068-4ab8-b481-3450b9c9b2a2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:11 np0005558241 nova_compute[248510]: 2025-12-13 09:01:11.818 248514 DEBUG oslo_concurrency.lockutils [req-951f4fd5-888a-49a2-8a3c-5d6e36ee8557 req-74d8b7c4-f068-4ab8-b481-3450b9c9b2a2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:11 np0005558241 nova_compute[248510]: 2025-12-13 09:01:11.818 248514 DEBUG oslo_concurrency.lockutils [req-951f4fd5-888a-49a2-8a3c-5d6e36ee8557 req-74d8b7c4-f068-4ab8-b481-3450b9c9b2a2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:11 np0005558241 nova_compute[248510]: 2025-12-13 09:01:11.818 248514 DEBUG nova.compute.manager [req-951f4fd5-888a-49a2-8a3c-5d6e36ee8557 req-74d8b7c4-f068-4ab8-b481-3450b9c9b2a2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] No waiting events found dispatching network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:01:11 np0005558241 nova_compute[248510]: 2025-12-13 09:01:11.819 248514 WARNING nova.compute.manager [req-951f4fd5-888a-49a2-8a3c-5d6e36ee8557 req-74d8b7c4-f068-4ab8-b481-3450b9c9b2a2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Received unexpected event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c for instance with vm_state active and task_state deleting.#033[00m
Dec 13 04:01:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:01:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:01:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:01:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:01:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:01:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:01:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:01:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:01:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:01:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:01:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:01:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:01:12 np0005558241 podman[376280]: 2025-12-13 09:01:12.433854547 +0000 UTC m=+0.040216556 container create c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_austin, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 04:01:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3003: 321 pgs: 321 active+clean; 178 MiB data, 950 MiB used, 59 GiB / 60 GiB avail; 740 KiB/s rd, 1.3 MiB/s wr, 84 op/s
Dec 13 04:01:12 np0005558241 systemd[1]: Started libpod-conmon-c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd.scope.
Dec 13 04:01:12 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:01:12 np0005558241 podman[376280]: 2025-12-13 09:01:12.41600308 +0000 UTC m=+0.022365119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:01:12 np0005558241 podman[376280]: 2025-12-13 09:01:12.529653702 +0000 UTC m=+0.136015731 container init c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:01:12 np0005558241 podman[376280]: 2025-12-13 09:01:12.536279924 +0000 UTC m=+0.142641933 container start c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_austin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:01:12 np0005558241 podman[376280]: 2025-12-13 09:01:12.540088957 +0000 UTC m=+0.146450966 container attach c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_austin, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:01:12 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:01:12 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:01:12 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:01:12 np0005558241 adoring_austin[376296]: 167 167
Dec 13 04:01:12 np0005558241 systemd[1]: libpod-c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd.scope: Deactivated successfully.
Dec 13 04:01:12 np0005558241 conmon[376296]: conmon c7f576896819f0e7f697 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd.scope/container/memory.events
Dec 13 04:01:12 np0005558241 podman[376280]: 2025-12-13 09:01:12.546928145 +0000 UTC m=+0.153290154 container died c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_austin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 04:01:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4d4fd43bf4e84ba7f548500a0d9ee8615c3f37e663e7896d74ccc8645ba70521-merged.mount: Deactivated successfully.
Dec 13 04:01:12 np0005558241 podman[376280]: 2025-12-13 09:01:12.598915107 +0000 UTC m=+0.205277126 container remove c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:01:12 np0005558241 systemd[1]: libpod-conmon-c7f576896819f0e7f697a0da7e792450b5baea471eda8304a6bb6efa01ae8fcd.scope: Deactivated successfully.
Dec 13 04:01:12 np0005558241 podman[376319]: 2025-12-13 09:01:12.832901475 +0000 UTC m=+0.061195889 container create 309cf51798b8b2461df40eb7b9a5ceeb21ceba494db21bdfd3c8097715182f59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_panini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:01:12 np0005558241 nova_compute[248510]: 2025-12-13 09:01:12.843 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:12 np0005558241 systemd[1]: Started libpod-conmon-309cf51798b8b2461df40eb7b9a5ceeb21ceba494db21bdfd3c8097715182f59.scope.
Dec 13 04:01:12 np0005558241 podman[376319]: 2025-12-13 09:01:12.80902804 +0000 UTC m=+0.037322444 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:01:12 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:01:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fc850e513057184874c56dc59eaf2981abd2738022408d686f69dff97694a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fc850e513057184874c56dc59eaf2981abd2738022408d686f69dff97694a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fc850e513057184874c56dc59eaf2981abd2738022408d686f69dff97694a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fc850e513057184874c56dc59eaf2981abd2738022408d686f69dff97694a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fc850e513057184874c56dc59eaf2981abd2738022408d686f69dff97694a5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:12 np0005558241 podman[376319]: 2025-12-13 09:01:12.939601937 +0000 UTC m=+0.167896381 container init 309cf51798b8b2461df40eb7b9a5ceeb21ceba494db21bdfd3c8097715182f59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_panini, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 04:01:12 np0005558241 podman[376319]: 2025-12-13 09:01:12.947019578 +0000 UTC m=+0.175313942 container start 309cf51798b8b2461df40eb7b9a5ceeb21ceba494db21bdfd3c8097715182f59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:01:12 np0005558241 podman[376319]: 2025-12-13 09:01:12.951298053 +0000 UTC m=+0.179592457 container attach 309cf51798b8b2461df40eb7b9a5ceeb21ceba494db21bdfd3c8097715182f59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_panini, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.250 248514 DEBUG nova.network.neutron [-] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.276 248514 INFO nova.compute.manager [-] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Took 3.85 seconds to deallocate network for instance.#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.408 248514 DEBUG oslo_concurrency.lockutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.408 248514 DEBUG oslo_concurrency.lockutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:13 np0005558241 hopeful_panini[376335]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:01:13 np0005558241 hopeful_panini[376335]: --> All data devices are unavailable
Dec 13 04:01:13 np0005558241 systemd[1]: libpod-309cf51798b8b2461df40eb7b9a5ceeb21ceba494db21bdfd3c8097715182f59.scope: Deactivated successfully.
Dec 13 04:01:13 np0005558241 podman[376319]: 2025-12-13 09:01:13.466572556 +0000 UTC m=+0.694866920 container died 309cf51798b8b2461df40eb7b9a5ceeb21ceba494db21bdfd3c8097715182f59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_panini, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:01:13 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b0fc850e513057184874c56dc59eaf2981abd2738022408d686f69dff97694a5-merged.mount: Deactivated successfully.
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.501 248514 DEBUG oslo_concurrency.processutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:01:13 np0005558241 podman[376319]: 2025-12-13 09:01:13.520559247 +0000 UTC m=+0.748853611 container remove 309cf51798b8b2461df40eb7b9a5ceeb21ceba494db21bdfd3c8097715182f59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_panini, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 04:01:13 np0005558241 systemd[1]: libpod-conmon-309cf51798b8b2461df40eb7b9a5ceeb21ceba494db21bdfd3c8097715182f59.scope: Deactivated successfully.
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.610 248514 DEBUG oslo_concurrency.lockutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.612 248514 DEBUG oslo_concurrency.lockutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.612 248514 DEBUG oslo_concurrency.lockutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.613 248514 DEBUG oslo_concurrency.lockutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.613 248514 DEBUG oslo_concurrency.lockutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.616 248514 INFO nova.compute.manager [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Terminating instance#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.618 248514 DEBUG nova.compute.manager [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:01:13 np0005558241 kernel: tapda5ed241-e9 (unregistering): left promiscuous mode
Dec 13 04:01:13 np0005558241 NetworkManager[50376]: <info>  [1765616473.6807] device (tapda5ed241-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:01:13 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:13Z|01303|binding|INFO|Releasing lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 from this chassis (sb_readonly=0)
Dec 13 04:01:13 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:13Z|01304|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 down in Southbound
Dec 13 04:01:13 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:13Z|01305|binding|INFO|Removing iface tapda5ed241-e9 ovn-installed in OVS
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.693 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.695 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.710 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:13 np0005558241 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Dec 13 04:01:13 np0005558241 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007f.scope: Consumed 13.021s CPU time.
Dec 13 04:01:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:13.729 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:05:2b 10.100.0.13'], port_security=['fa:16:3e:cc:05:2b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9a653f54-8266-4f9c-ac5a-b4991534e9fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '8', 'neutron:security_group_ids': '60a3552e-c3a6-4944-a13e-2564e4138914', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8c1cdec-485c-4b47-86d6-9771ffb4d4c7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=da5ed241-e9aa-44e8-ac5a-7dcb30431922) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:01:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:13.736 158419 INFO neutron.agent.ovn.metadata.agent [-] Port da5ed241-e9aa-44e8-ac5a-7dcb30431922 in datapath b9b944cc-b37e-492b-abd7-b6bfb9227f0f unbound from our chassis#033[00m
Dec 13 04:01:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:13.739 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9b944cc-b37e-492b-abd7-b6bfb9227f0f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:01:13 np0005558241 systemd-machined[210538]: Machine qemu-155-instance-0000007f terminated.
Dec 13 04:01:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:13.742 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[31c2cdf4-a51f-43c1-b0b6-1fc3f0e9cf26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:13.743 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f namespace which is not needed anymore#033[00m
Dec 13 04:01:13 np0005558241 kernel: tapda5ed241-e9: entered promiscuous mode
Dec 13 04:01:13 np0005558241 NetworkManager[50376]: <info>  [1765616473.8445] manager: (tapda5ed241-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/530)
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.846 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:13 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:13Z|01306|binding|INFO|Claiming lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 for this chassis.
Dec 13 04:01:13 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:13Z|01307|binding|INFO|da5ed241-e9aa-44e8-ac5a-7dcb30431922: Claiming fa:16:3e:cc:05:2b 10.100.0.13
Dec 13 04:01:13 np0005558241 kernel: tapda5ed241-e9 (unregistering): left promiscuous mode
Dec 13 04:01:13 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:13Z|01308|binding|INFO|Setting lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 ovn-installed in OVS
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.877 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.884 248514 INFO nova.virt.libvirt.driver [-] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Instance destroyed successfully.#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.885 248514 DEBUG nova.objects.instance [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'resources' on Instance uuid 9a653f54-8266-4f9c-ac5a-b4991534e9fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:01:13 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:13Z|01309|binding|INFO|Releasing lport da5ed241-e9aa-44e8-ac5a-7dcb30431922 from this chassis (sb_readonly=0)
Dec 13 04:01:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:13.888 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:05:2b 10.100.0.13'], port_security=['fa:16:3e:cc:05:2b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9a653f54-8266-4f9c-ac5a-b4991534e9fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '8', 'neutron:security_group_ids': '60a3552e-c3a6-4944-a13e-2564e4138914', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8c1cdec-485c-4b47-86d6-9771ffb4d4c7, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=da5ed241-e9aa-44e8-ac5a-7dcb30431922) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:01:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:13.896 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:05:2b 10.100.0.13'], port_security=['fa:16:3e:cc:05:2b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9a653f54-8266-4f9c-ac5a-b4991534e9fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '8', 'neutron:security_group_ids': '60a3552e-c3a6-4944-a13e-2564e4138914', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e8c1cdec-485c-4b47-86d6-9771ffb4d4c7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=da5ed241-e9aa-44e8-ac5a-7dcb30431922) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.902 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.907 248514 DEBUG nova.virt.libvirt.vif [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:00:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1854403821',display_name='tempest-TestNetworkAdvancedServerOps-server-1854403821',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1854403821',id=127,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWMM/Lkrglzn6TUU8f1dhcCbz/Yw7nIlbXlIesplWBqBJXHooQUykZVdZMxvRqd3+5+420G2QZFN25AFW8hXICmnni45jcvyASiuhi4RAOuTlyQgzYOGb0LTvE5xWl7hA==',key_name='tempest-TestNetworkAdvancedServerOps-279390887',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:00:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-azzrjv4t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:00:52Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=9a653f54-8266-4f9c-ac5a-b4991534e9fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.908 248514 DEBUG nova.network.os_vif_util [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.909 248514 DEBUG nova.network.os_vif_util [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cc:05:2b,bridge_name='br-int',has_traffic_filtering=True,id=da5ed241-e9aa-44e8-ac5a-7dcb30431922,network=Network(b9b944cc-b37e-492b-abd7-b6bfb9227f0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda5ed241-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.910 248514 DEBUG os_vif [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cc:05:2b,bridge_name='br-int',has_traffic_filtering=True,id=da5ed241-e9aa-44e8-ac5a-7dcb30431922,network=Network(b9b944cc-b37e-492b-abd7-b6bfb9227f0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda5ed241-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.913 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.913 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda5ed241-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:01:13 np0005558241 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375671]: [NOTICE]   (375675) : haproxy version is 2.8.14-c23fe91
Dec 13 04:01:13 np0005558241 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375671]: [NOTICE]   (375675) : path to executable is /usr/sbin/haproxy
Dec 13 04:01:13 np0005558241 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375671]: [ALERT]    (375675) : Current worker (375677) exited with code 143 (Terminated)
Dec 13 04:01:13 np0005558241 neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f[375671]: [WARNING]  (375675) : All workers exited. Exiting... (0)
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:13 np0005558241 systemd[1]: libpod-114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b.scope: Deactivated successfully.
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.920 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:01:13 np0005558241 nova_compute[248510]: 2025-12-13 09:01:13.922 248514 INFO os_vif [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cc:05:2b,bridge_name='br-int',has_traffic_filtering=True,id=da5ed241-e9aa-44e8-ac5a-7dcb30431922,network=Network(b9b944cc-b37e-492b-abd7-b6bfb9227f0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda5ed241-e9')#033[00m
Dec 13 04:01:13 np0005558241 podman[376462]: 2025-12-13 09:01:13.925784195 +0000 UTC m=+0.083450273 container died 114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:01:13 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b-userdata-shm.mount: Deactivated successfully.
Dec 13 04:01:13 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a26999b8fd1c742d22d53c509972f804cd2c1a3b692246b65cd69c5f1f676657-merged.mount: Deactivated successfully.
Dec 13 04:01:13 np0005558241 podman[376462]: 2025-12-13 09:01:13.9774698 +0000 UTC m=+0.135135878 container cleanup 114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 04:01:13 np0005558241 systemd[1]: libpod-conmon-114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b.scope: Deactivated successfully.
Dec 13 04:01:14 np0005558241 podman[376525]: 2025-12-13 09:01:14.042370959 +0000 UTC m=+0.042139992 container remove 114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 04:01:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.050 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a0690e0a-5253-4070-84dc-6b139eae5e95]: (4, ('Sat Dec 13 09:01:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f (114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b)\n114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b\nSat Dec 13 09:01:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f (114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b)\n114c29ad1718b0063043ef18994c5ca6a1a48711a2fa3ed4a1d52bfc79ce830b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.052 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11eca81f-c9e2-424e-804b-296fb9a6a58c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.053 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9b944cc-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:01:14 np0005558241 kernel: tapb9b944cc-b0: left promiscuous mode
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.055 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.059 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[58fe5ea5-5d0f-4bff-a0b5-ab6eb2782d79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:14 np0005558241 podman[376531]: 2025-12-13 09:01:14.06692294 +0000 UTC m=+0.054028683 container create 4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 04:01:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:01:14 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2366073542' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.076 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.086 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e9223cc7-3351-4e35-8c57-43647f582257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.088 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5d3c1152-71a0-45c2-aabf-47c7d4d682e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.101 248514 DEBUG oslo_concurrency.processutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:01:14 np0005558241 systemd[1]: Started libpod-conmon-4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856.scope.
Dec 13 04:01:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.111 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[402ff549-3e82-48b4-95d9-568772a3bbc7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 902849, 'reachable_time': 27256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376557, 'error': None, 'target': 'ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.112 248514 DEBUG nova.compute.provider_tree [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:01:14 np0005558241 systemd[1]: run-netns-ovnmeta\x2db9b944cc\x2db37e\x2d492b\x2dabd7\x2db6bfb9227f0f.mount: Deactivated successfully.
Dec 13 04:01:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.115 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b9b944cc-b37e-492b-abd7-b6bfb9227f0f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:01:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.115 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[7d8c589b-4ecf-473e-b191-041cb5a07adc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.117 158419 INFO neutron.agent.ovn.metadata.agent [-] Port da5ed241-e9aa-44e8-ac5a-7dcb30431922 in datapath b9b944cc-b37e-492b-abd7-b6bfb9227f0f unbound from our chassis#033[00m
Dec 13 04:01:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.118 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9b944cc-b37e-492b-abd7-b6bfb9227f0f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:01:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.119 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7b479a-44a7-4977-afc1-9a10fc17175a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.120 158419 INFO neutron.agent.ovn.metadata.agent [-] Port da5ed241-e9aa-44e8-ac5a-7dcb30431922 in datapath b9b944cc-b37e-492b-abd7-b6bfb9227f0f unbound from our chassis#033[00m
Dec 13 04:01:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.121 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9b944cc-b37e-492b-abd7-b6bfb9227f0f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:01:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:14.121 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[827823fb-6dae-4845-8271-8d5f0be27a47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:14 np0005558241 podman[376531]: 2025-12-13 09:01:14.044391298 +0000 UTC m=+0.031497061 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:01:14 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.154 248514 DEBUG nova.scheduler.client.report [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:01:14 np0005558241 podman[376531]: 2025-12-13 09:01:14.161047374 +0000 UTC m=+0.148153167 container init 4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Dec 13 04:01:14 np0005558241 podman[376531]: 2025-12-13 09:01:14.170442404 +0000 UTC m=+0.157548147 container start 4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:01:14 np0005558241 podman[376531]: 2025-12-13 09:01:14.17394697 +0000 UTC m=+0.161052723 container attach 4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 04:01:14 np0005558241 eager_bhaskara[376559]: 167 167
Dec 13 04:01:14 np0005558241 systemd[1]: libpod-4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856.scope: Deactivated successfully.
Dec 13 04:01:14 np0005558241 conmon[376559]: conmon 4c70788d47719f324d82 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856.scope/container/memory.events
Dec 13 04:01:14 np0005558241 podman[376531]: 2025-12-13 09:01:14.179362422 +0000 UTC m=+0.166468165 container died 4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.195 248514 DEBUG oslo_concurrency.lockutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:14 np0005558241 podman[376531]: 2025-12-13 09:01:14.226920227 +0000 UTC m=+0.214025970 container remove 4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 04:01:14 np0005558241 systemd[1]: libpod-conmon-4c70788d47719f324d825d3fb579938c0b185ca04aca1db38bfb3c185911b856.scope: Deactivated successfully.
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.269 248514 INFO nova.virt.libvirt.driver [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Deleting instance files /var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb_del#033[00m
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.270 248514 INFO nova.virt.libvirt.driver [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Deletion of /var/lib/nova/instances/9a653f54-8266-4f9c-ac5a-b4991534e9fb_del complete#033[00m
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.298 248514 INFO nova.scheduler.client.report [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance 046adf00-a09d-4e5c-89d5-e63a2ee02fc1#033[00m
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.412 248514 INFO nova.compute.manager [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.413 248514 DEBUG oslo.service.loopingcall [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.413 248514 DEBUG nova.compute.manager [-] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.413 248514 DEBUG nova.network.neutron [-] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:01:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3004: 321 pgs: 321 active+clean; 96 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 790 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Dec 13 04:01:14 np0005558241 podman[376584]: 2025-12-13 09:01:14.477660824 +0000 UTC m=+0.059280232 container create 8f7906da0f8097e0bbdf101fc5c24a4b0fe6a142becadaa40a49de2db4db7684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jepsen, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:01:14 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d0313522abdda27c9850ad1dd1483fa3c45131d585bf3500bfaf34b66cb1dbf6-merged.mount: Deactivated successfully.
Dec 13 04:01:14 np0005558241 systemd[1]: Started libpod-conmon-8f7906da0f8097e0bbdf101fc5c24a4b0fe6a142becadaa40a49de2db4db7684.scope.
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.532 248514 DEBUG nova.compute.manager [req-28f47656-4d8a-46a6-8098-d406aea81d56 req-578cf2b7-c94e-4a9c-ac78-f82b79ea456b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-changed-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.533 248514 DEBUG nova.compute.manager [req-28f47656-4d8a-46a6-8098-d406aea81d56 req-578cf2b7-c94e-4a9c-ac78-f82b79ea456b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Refreshing instance network info cache due to event network-changed-da5ed241-e9aa-44e8-ac5a-7dcb30431922. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.533 248514 DEBUG oslo_concurrency.lockutils [req-28f47656-4d8a-46a6-8098-d406aea81d56 req-578cf2b7-c94e-4a9c-ac78-f82b79ea456b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.533 248514 DEBUG oslo_concurrency.lockutils [req-28f47656-4d8a-46a6-8098-d406aea81d56 req-578cf2b7-c94e-4a9c-ac78-f82b79ea456b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.534 248514 DEBUG nova.network.neutron [req-28f47656-4d8a-46a6-8098-d406aea81d56 req-578cf2b7-c94e-4a9c-ac78-f82b79ea456b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Refreshing network info cache for port da5ed241-e9aa-44e8-ac5a-7dcb30431922 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.537 248514 DEBUG oslo_concurrency.lockutils [None req-c9a92d32-80a8-4af7-a91b-cc471ddb0376 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "046adf00-a09d-4e5c-89d5-e63a2ee02fc1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:14 np0005558241 podman[376584]: 2025-12-13 09:01:14.460575046 +0000 UTC m=+0.042194484 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:01:14 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:01:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126e3df4047e6b3d87f08391573faa59cf8f40cf18c325c64a7ae6fb5227dc09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126e3df4047e6b3d87f08391573faa59cf8f40cf18c325c64a7ae6fb5227dc09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126e3df4047e6b3d87f08391573faa59cf8f40cf18c325c64a7ae6fb5227dc09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126e3df4047e6b3d87f08391573faa59cf8f40cf18c325c64a7ae6fb5227dc09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:14 np0005558241 podman[376584]: 2025-12-13 09:01:14.578348369 +0000 UTC m=+0.159967827 container init 8f7906da0f8097e0bbdf101fc5c24a4b0fe6a142becadaa40a49de2db4db7684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Dec 13 04:01:14 np0005558241 podman[376584]: 2025-12-13 09:01:14.587593475 +0000 UTC m=+0.169212903 container start 8f7906da0f8097e0bbdf101fc5c24a4b0fe6a142becadaa40a49de2db4db7684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:01:14 np0005558241 podman[376584]: 2025-12-13 09:01:14.590854365 +0000 UTC m=+0.172473783 container attach 8f7906da0f8097e0bbdf101fc5c24a4b0fe6a142becadaa40a49de2db4db7684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jepsen, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.850 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.851 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:01:14 np0005558241 nova_compute[248510]: 2025-12-13 09:01:14.851 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]: {
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:    "0": [
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:        {
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "devices": [
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "/dev/loop3"
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            ],
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "lv_name": "ceph_lv0",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "lv_size": "21470642176",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "name": "ceph_lv0",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "tags": {
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.cluster_name": "ceph",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.crush_device_class": "",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.encrypted": "0",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.objectstore": "bluestore",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.osd_id": "0",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.type": "block",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.vdo": "0",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.with_tpm": "0"
Dec 13 04:01:14 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            },
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "type": "block",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "vg_name": "ceph_vg0"
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:        }
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:    ],
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:    "1": [
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:        {
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "devices": [
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "/dev/loop4"
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            ],
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "lv_name": "ceph_lv1",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "lv_size": "21470642176",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "name": "ceph_lv1",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "tags": {
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.cluster_name": "ceph",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.crush_device_class": "",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.encrypted": "0",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.objectstore": "bluestore",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.osd_id": "1",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.type": "block",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.vdo": "0",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.with_tpm": "0"
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            },
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "type": "block",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "vg_name": "ceph_vg1"
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:        }
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:    ],
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:    "2": [
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:        {
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "devices": [
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "/dev/loop5"
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            ],
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "lv_name": "ceph_lv2",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "lv_size": "21470642176",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "name": "ceph_lv2",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "tags": {
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.cluster_name": "ceph",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.crush_device_class": "",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.encrypted": "0",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.objectstore": "bluestore",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.osd_id": "2",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.type": "block",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.vdo": "0",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:                "ceph.with_tpm": "0"
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            },
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "type": "block",
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:            "vg_name": "ceph_vg2"
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:        }
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]:    ]
Dec 13 04:01:14 np0005558241 brave_jepsen[376600]: }
Dec 13 04:01:14 np0005558241 systemd[1]: libpod-8f7906da0f8097e0bbdf101fc5c24a4b0fe6a142becadaa40a49de2db4db7684.scope: Deactivated successfully.
Dec 13 04:01:14 np0005558241 podman[376610]: 2025-12-13 09:01:14.971290907 +0000 UTC m=+0.026673543 container died 8f7906da0f8097e0bbdf101fc5c24a4b0fe6a142becadaa40a49de2db4db7684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 04:01:14 np0005558241 systemd[1]: var-lib-containers-storage-overlay-126e3df4047e6b3d87f08391573faa59cf8f40cf18c325c64a7ae6fb5227dc09-merged.mount: Deactivated successfully.
Dec 13 04:01:15 np0005558241 podman[376610]: 2025-12-13 09:01:15.011407729 +0000 UTC m=+0.066790335 container remove 8f7906da0f8097e0bbdf101fc5c24a4b0fe6a142becadaa40a49de2db4db7684 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jepsen, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:01:15 np0005558241 systemd[1]: libpod-conmon-8f7906da0f8097e0bbdf101fc5c24a4b0fe6a142becadaa40a49de2db4db7684.scope: Deactivated successfully.
Dec 13 04:01:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:01:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3773702943' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:01:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:01:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3773702943' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:01:15 np0005558241 podman[376685]: 2025-12-13 09:01:15.50534943 +0000 UTC m=+0.046731585 container create 4a137154ccad922c305360e0c662db71421f6679230c50f508c06986bbfe43ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 04:01:15 np0005558241 systemd[1]: Started libpod-conmon-4a137154ccad922c305360e0c662db71421f6679230c50f508c06986bbfe43ae.scope.
Dec 13 04:01:15 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:01:15 np0005558241 podman[376685]: 2025-12-13 09:01:15.488813885 +0000 UTC m=+0.030196060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:01:15 np0005558241 podman[376685]: 2025-12-13 09:01:15.593050557 +0000 UTC m=+0.134432722 container init 4a137154ccad922c305360e0c662db71421f6679230c50f508c06986bbfe43ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:01:15 np0005558241 podman[376685]: 2025-12-13 09:01:15.598410208 +0000 UTC m=+0.139792353 container start 4a137154ccad922c305360e0c662db71421f6679230c50f508c06986bbfe43ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:01:15 np0005558241 podman[376685]: 2025-12-13 09:01:15.600854218 +0000 UTC m=+0.142236403 container attach 4a137154ccad922c305360e0c662db71421f6679230c50f508c06986bbfe43ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_curie, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 04:01:15 np0005558241 strange_curie[376702]: 167 167
Dec 13 04:01:15 np0005558241 systemd[1]: libpod-4a137154ccad922c305360e0c662db71421f6679230c50f508c06986bbfe43ae.scope: Deactivated successfully.
Dec 13 04:01:15 np0005558241 podman[376685]: 2025-12-13 09:01:15.602888968 +0000 UTC m=+0.144271123 container died 4a137154ccad922c305360e0c662db71421f6679230c50f508c06986bbfe43ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_curie, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 04:01:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay-db1f216c1beb84723ed828a15830f4296f6c7cff922e654687f68828c14bb6fe-merged.mount: Deactivated successfully.
Dec 13 04:01:15 np0005558241 podman[376685]: 2025-12-13 09:01:15.641699878 +0000 UTC m=+0.183082043 container remove 4a137154ccad922c305360e0c662db71421f6679230c50f508c06986bbfe43ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:01:15 np0005558241 systemd[1]: libpod-conmon-4a137154ccad922c305360e0c662db71421f6679230c50f508c06986bbfe43ae.scope: Deactivated successfully.
Dec 13 04:01:15 np0005558241 nova_compute[248510]: 2025-12-13 09:01:15.780 248514 DEBUG nova.network.neutron [-] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:01:15 np0005558241 nova_compute[248510]: 2025-12-13 09:01:15.868 248514 DEBUG nova.compute.manager [req-5ea56200-92c3-4a6f-8566-15a1fc0fd7fe req-9b98f308-c591-4812-8f21-42df3fbf2282 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Received event network-vif-deleted-da5ed241-e9aa-44e8-ac5a-7dcb30431922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:01:15 np0005558241 nova_compute[248510]: 2025-12-13 09:01:15.868 248514 INFO nova.compute.manager [req-5ea56200-92c3-4a6f-8566-15a1fc0fd7fe req-9b98f308-c591-4812-8f21-42df3fbf2282 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Neutron deleted interface da5ed241-e9aa-44e8-ac5a-7dcb30431922; detaching it from the instance and deleting it from the info cache
Dec 13 04:01:15 np0005558241 nova_compute[248510]: 2025-12-13 09:01:15.869 248514 DEBUG nova.network.neutron [req-5ea56200-92c3-4a6f-8566-15a1fc0fd7fe req-9b98f308-c591-4812-8f21-42df3fbf2282 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:01:15 np0005558241 podman[376724]: 2025-12-13 09:01:15.871258687 +0000 UTC m=+0.050249541 container create ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wright, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:01:15 np0005558241 nova_compute[248510]: 2025-12-13 09:01:15.888 248514 INFO nova.compute.manager [-] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Took 1.47 seconds to deallocate network for instance.
Dec 13 04:01:15 np0005558241 nova_compute[248510]: 2025-12-13 09:01:15.901 248514 DEBUG nova.compute.manager [req-5ea56200-92c3-4a6f-8566-15a1fc0fd7fe req-9b98f308-c591-4812-8f21-42df3fbf2282 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Detach interface failed, port_id=da5ed241-e9aa-44e8-ac5a-7dcb30431922, reason: Instance 9a653f54-8266-4f9c-ac5a-b4991534e9fb could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 04:01:15 np0005558241 systemd[1]: Started libpod-conmon-ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d.scope.
Dec 13 04:01:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:01:15 np0005558241 podman[376724]: 2025-12-13 09:01:15.846723346 +0000 UTC m=+0.025714310 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:01:15 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:01:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/981bd84b75a86856e8a79b4ba4946476093be4af5e948c36f32d3fab29df2fd0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/981bd84b75a86856e8a79b4ba4946476093be4af5e948c36f32d3fab29df2fd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/981bd84b75a86856e8a79b4ba4946476093be4af5e948c36f32d3fab29df2fd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/981bd84b75a86856e8a79b4ba4946476093be4af5e948c36f32d3fab29df2fd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:15 np0005558241 podman[376724]: 2025-12-13 09:01:15.979757163 +0000 UTC m=+0.158748017 container init ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wright, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 04:01:15 np0005558241 podman[376724]: 2025-12-13 09:01:15.988749573 +0000 UTC m=+0.167740427 container start ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wright, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:01:15 np0005558241 podman[376724]: 2025-12-13 09:01:15.992360151 +0000 UTC m=+0.171351005 container attach ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 04:01:16 np0005558241 nova_compute[248510]: 2025-12-13 09:01:16.033 248514 DEBUG oslo_concurrency.lockutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:01:16 np0005558241 nova_compute[248510]: 2025-12-13 09:01:16.034 248514 DEBUG oslo_concurrency.lockutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:01:16 np0005558241 nova_compute[248510]: 2025-12-13 09:01:16.079 248514 DEBUG oslo_concurrency.processutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:01:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3005: 321 pgs: 321 active+clean; 96 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Dec 13 04:01:16 np0005558241 lvm[376835]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:01:16 np0005558241 lvm[376835]: VG ceph_vg0 finished
Dec 13 04:01:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:01:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1330621943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:01:16 np0005558241 lvm[376837]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:01:16 np0005558241 lvm[376837]: VG ceph_vg1 finished
Dec 13 04:01:16 np0005558241 lvm[376840]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:01:16 np0005558241 lvm[376840]: VG ceph_vg0 finished
Dec 13 04:01:16 np0005558241 nova_compute[248510]: 2025-12-13 09:01:16.676 248514 DEBUG oslo_concurrency.processutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:01:16 np0005558241 nova_compute[248510]: 2025-12-13 09:01:16.683 248514 DEBUG nova.compute.provider_tree [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:01:16 np0005558241 lvm[376841]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:01:16 np0005558241 lvm[376841]: VG ceph_vg2 finished
Dec 13 04:01:16 np0005558241 nova_compute[248510]: 2025-12-13 09:01:16.704 248514 DEBUG nova.network.neutron [req-28f47656-4d8a-46a6-8098-d406aea81d56 req-578cf2b7-c94e-4a9c-ac78-f82b79ea456b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updated VIF entry in instance network info cache for port da5ed241-e9aa-44e8-ac5a-7dcb30431922. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:01:16 np0005558241 nova_compute[248510]: 2025-12-13 09:01:16.704 248514 DEBUG nova.network.neutron [req-28f47656-4d8a-46a6-8098-d406aea81d56 req-578cf2b7-c94e-4a9c-ac78-f82b79ea456b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Updating instance_info_cache with network_info: [{"id": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "address": "fa:16:3e:cc:05:2b", "network": {"id": "b9b944cc-b37e-492b-abd7-b6bfb9227f0f", "bridge": "br-int", "label": "tempest-network-smoke--1479453605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda5ed241-e9", "ovs_interfaceid": "da5ed241-e9aa-44e8-ac5a-7dcb30431922", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:01:16 np0005558241 nova_compute[248510]: 2025-12-13 09:01:16.709 248514 DEBUG nova.scheduler.client.report [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:01:16 np0005558241 nova_compute[248510]: 2025-12-13 09:01:16.777 248514 DEBUG oslo_concurrency.lockutils [req-28f47656-4d8a-46a6-8098-d406aea81d56 req-578cf2b7-c94e-4a9c-ac78-f82b79ea456b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9a653f54-8266-4f9c-ac5a-b4991534e9fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:01:16 np0005558241 pensive_wright[376740]: {}
Dec 13 04:01:16 np0005558241 systemd[1]: libpod-ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d.scope: Deactivated successfully.
Dec 13 04:01:16 np0005558241 systemd[1]: libpod-ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d.scope: Consumed 1.295s CPU time.
Dec 13 04:01:16 np0005558241 podman[376724]: 2025-12-13 09:01:16.805061675 +0000 UTC m=+0.984052549 container died ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wright, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 04:01:16 np0005558241 systemd[1]: var-lib-containers-storage-overlay-981bd84b75a86856e8a79b4ba4946476093be4af5e948c36f32d3fab29df2fd0-merged.mount: Deactivated successfully.
Dec 13 04:01:16 np0005558241 podman[376724]: 2025-12-13 09:01:16.849470052 +0000 UTC m=+1.028460906 container remove ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:01:16 np0005558241 systemd[1]: libpod-conmon-ed36aa341be68a85b1df950450e8aada3b6ab1e6cd385200922ef326a13d895d.scope: Deactivated successfully.
Dec 13 04:01:16 np0005558241 nova_compute[248510]: 2025-12-13 09:01:16.865 248514 DEBUG oslo_concurrency.lockutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:01:16 np0005558241 nova_compute[248510]: 2025-12-13 09:01:16.904 248514 INFO nova.scheduler.client.report [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Deleted allocations for instance 9a653f54-8266-4f9c-ac5a-b4991534e9fb#033[00m
Dec 13 04:01:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:01:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:01:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:01:16 np0005558241 nova_compute[248510]: 2025-12-13 09:01:16.997 248514 DEBUG oslo_concurrency.lockutils [None req-6d77ecc9-133f-404b-bbca-c30f61183225 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "9a653f54-8266-4f9c-ac5a-b4991534e9fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.385s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:17 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:01:17 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:01:17 np0005558241 nova_compute[248510]: 2025-12-13 09:01:17.845 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3006: 321 pgs: 321 active+clean; 64 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 552 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Dec 13 04:01:18 np0005558241 nova_compute[248510]: 2025-12-13 09:01:18.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:19 np0005558241 nova_compute[248510]: 2025-12-13 09:01:19.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:01:19 np0005558241 nova_compute[248510]: 2025-12-13 09:01:19.816 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:19 np0005558241 nova_compute[248510]: 2025-12-13 09:01:19.817 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:19 np0005558241 nova_compute[248510]: 2025-12-13 09:01:19.817 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:19 np0005558241 nova_compute[248510]: 2025-12-13 09:01:19.817 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:01:19 np0005558241 nova_compute[248510]: 2025-12-13 09:01:19.817 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:01:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:01:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3433851384' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:01:20 np0005558241 nova_compute[248510]: 2025-12-13 09:01:20.398 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:01:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3007: 321 pgs: 321 active+clean; 41 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 264 KiB/s rd, 1.6 MiB/s wr, 93 op/s
Dec 13 04:01:20 np0005558241 nova_compute[248510]: 2025-12-13 09:01:20.565 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:01:20 np0005558241 nova_compute[248510]: 2025-12-13 09:01:20.566 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3446MB free_disk=59.97294283658266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:01:20 np0005558241 nova_compute[248510]: 2025-12-13 09:01:20.567 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:20 np0005558241 nova_compute[248510]: 2025-12-13 09:01:20.567 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:20 np0005558241 nova_compute[248510]: 2025-12-13 09:01:20.645 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:01:20 np0005558241 nova_compute[248510]: 2025-12-13 09:01:20.646 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:01:20 np0005558241 nova_compute[248510]: 2025-12-13 09:01:20.666 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:01:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:01:21 np0005558241 nova_compute[248510]: 2025-12-13 09:01:21.052 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:21 np0005558241 nova_compute[248510]: 2025-12-13 09:01:21.176 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:01:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2760382793' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:01:21 np0005558241 nova_compute[248510]: 2025-12-13 09:01:21.231 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:01:21 np0005558241 nova_compute[248510]: 2025-12-13 09:01:21.238 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:01:21 np0005558241 nova_compute[248510]: 2025-12-13 09:01:21.257 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4227178868833227e-05 of space, bias 1.0, pg target 0.004268153660649968 quantized to 32 (current 32)
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000669669102060614 of space, bias 1.0, pg target 0.2009007306181842 quantized to 32 (current 32)
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.725337214369618e-07 of space, bias 4.0, pg target 0.0006870404657243541 quantized to 16 (current 32)
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:01:21 np0005558241 nova_compute[248510]: 2025-12-13 09:01:21.316 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:01:21 np0005558241 nova_compute[248510]: 2025-12-13 09:01:21.317 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3008: 321 pgs: 321 active+clean; 41 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 774 KiB/s wr, 58 op/s
Dec 13 04:01:22 np0005558241 nova_compute[248510]: 2025-12-13 09:01:22.847 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:23 np0005558241 nova_compute[248510]: 2025-12-13 09:01:23.317 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:01:23 np0005558241 nova_compute[248510]: 2025-12-13 09:01:23.920 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:24 np0005558241 nova_compute[248510]: 2025-12-13 09:01:24.019 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616469.0181606, 046adf00-a09d-4e5c-89d5-e63a2ee02fc1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:01:24 np0005558241 nova_compute[248510]: 2025-12-13 09:01:24.020 248514 INFO nova.compute.manager [-] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:01:24 np0005558241 nova_compute[248510]: 2025-12-13 09:01:24.066 248514 DEBUG nova.compute.manager [None req-d65f99fc-c130-4f1d-a1fd-4ab0c8dfbeb0 - - - - - -] [instance: 046adf00-a09d-4e5c-89d5-e63a2ee02fc1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:01:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3009: 321 pgs: 321 active+clean; 41 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 774 KiB/s wr, 58 op/s
Dec 13 04:01:24 np0005558241 nova_compute[248510]: 2025-12-13 09:01:24.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:01:24 np0005558241 nova_compute[248510]: 2025-12-13 09:01:24.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:01:25 np0005558241 nova_compute[248510]: 2025-12-13 09:01:25.288 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "9519b20c-c79b-41e3-922c-362e2d3a7ded" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:25 np0005558241 nova_compute[248510]: 2025-12-13 09:01:25.288 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:25 np0005558241 nova_compute[248510]: 2025-12-13 09:01:25.355 248514 DEBUG nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:01:25 np0005558241 nova_compute[248510]: 2025-12-13 09:01:25.485 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:25 np0005558241 nova_compute[248510]: 2025-12-13 09:01:25.486 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:25 np0005558241 nova_compute[248510]: 2025-12-13 09:01:25.527 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:01:25 np0005558241 nova_compute[248510]: 2025-12-13 09:01:25.528 248514 INFO nova.compute.claims [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:01:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:25.532 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:01:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:25.533 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:01:25 np0005558241 nova_compute[248510]: 2025-12-13 09:01:25.569 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:25 np0005558241 nova_compute[248510]: 2025-12-13 09:01:25.707 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:01:25 np0005558241 nova_compute[248510]: 2025-12-13 09:01:25.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:01:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:01:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:01:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3256721503' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.268 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.277 248514 DEBUG nova.compute.provider_tree [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.303 248514 DEBUG nova.scheduler.client.report [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.328 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.329 248514 DEBUG nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.402 248514 DEBUG nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.402 248514 DEBUG nova.network.neutron [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.427 248514 INFO nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:01:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3010: 321 pgs: 321 active+clean; 41 MiB data, 875 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 852 B/s wr, 22 op/s
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.460 248514 DEBUG nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.591 248514 DEBUG nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.592 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.593 248514 INFO nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Creating image(s)#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.619 248514 DEBUG nova.storage.rbd_utils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.647 248514 DEBUG nova.storage.rbd_utils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.675 248514 DEBUG nova.storage.rbd_utils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.680 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.796 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.797 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.798 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.798 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.819 248514 DEBUG nova.storage.rbd_utils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:01:26 np0005558241 nova_compute[248510]: 2025-12-13 09:01:26.823 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:01:27 np0005558241 nova_compute[248510]: 2025-12-13 09:01:27.045 248514 DEBUG nova.policy [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:01:27 np0005558241 nova_compute[248510]: 2025-12-13 09:01:27.139 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:01:27 np0005558241 nova_compute[248510]: 2025-12-13 09:01:27.196 248514 DEBUG nova.storage.rbd_utils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:01:27 np0005558241 nova_compute[248510]: 2025-12-13 09:01:27.267 248514 DEBUG nova.objects.instance [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid 9519b20c-c79b-41e3-922c-362e2d3a7ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:01:27 np0005558241 nova_compute[248510]: 2025-12-13 09:01:27.290 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:01:27 np0005558241 nova_compute[248510]: 2025-12-13 09:01:27.291 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Ensure instance console log exists: /var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:01:27 np0005558241 nova_compute[248510]: 2025-12-13 09:01:27.291 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:27 np0005558241 nova_compute[248510]: 2025-12-13 09:01:27.291 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:27 np0005558241 nova_compute[248510]: 2025-12-13 09:01:27.292 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:27.535 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:01:27 np0005558241 nova_compute[248510]: 2025-12-13 09:01:27.889 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3011: 321 pgs: 321 active+clean; 54 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 455 KiB/s wr, 34 op/s
Dec 13 04:01:28 np0005558241 nova_compute[248510]: 2025-12-13 09:01:28.611 248514 DEBUG nova.network.neutron [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Successfully updated port: 3ba83e70-35d2-4947-85b4-64f21eae817c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:01:28 np0005558241 nova_compute[248510]: 2025-12-13 09:01:28.655 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-9519b20c-c79b-41e3-922c-362e2d3a7ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:01:28 np0005558241 nova_compute[248510]: 2025-12-13 09:01:28.655 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-9519b20c-c79b-41e3-922c-362e2d3a7ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:01:28 np0005558241 nova_compute[248510]: 2025-12-13 09:01:28.655 248514 DEBUG nova.network.neutron [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:01:28 np0005558241 nova_compute[248510]: 2025-12-13 09:01:28.762 248514 DEBUG nova.compute.manager [req-c2921515-b5e4-490d-b57c-00577f1fabce req-834d6f90-374e-4221-ad9e-340559193c40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Received event network-changed-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:01:28 np0005558241 nova_compute[248510]: 2025-12-13 09:01:28.762 248514 DEBUG nova.compute.manager [req-c2921515-b5e4-490d-b57c-00577f1fabce req-834d6f90-374e-4221-ad9e-340559193c40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Refreshing instance network info cache due to event network-changed-3ba83e70-35d2-4947-85b4-64f21eae817c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:01:28 np0005558241 nova_compute[248510]: 2025-12-13 09:01:28.763 248514 DEBUG oslo_concurrency.lockutils [req-c2921515-b5e4-490d-b57c-00577f1fabce req-834d6f90-374e-4221-ad9e-340559193c40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9519b20c-c79b-41e3-922c-362e2d3a7ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:01:28 np0005558241 nova_compute[248510]: 2025-12-13 09:01:28.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:01:28 np0005558241 nova_compute[248510]: 2025-12-13 09:01:28.875 248514 DEBUG nova.network.neutron [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:01:28 np0005558241 nova_compute[248510]: 2025-12-13 09:01:28.878 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616473.8758311, 9a653f54-8266-4f9c-ac5a-b4991534e9fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:01:28 np0005558241 nova_compute[248510]: 2025-12-13 09:01:28.878 248514 INFO nova.compute.manager [-] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:01:28 np0005558241 nova_compute[248510]: 2025-12-13 09:01:28.902 248514 DEBUG nova.compute.manager [None req-22e23101-cacf-45f8-bd46-235994f90eef - - - - - -] [instance: 9a653f54-8266-4f9c-ac5a-b4991534e9fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:01:28 np0005558241 nova_compute[248510]: 2025-12-13 09:01:28.975 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.928 248514 DEBUG nova.network.neutron [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Updating instance_info_cache with network_info: [{"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.952 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-9519b20c-c79b-41e3-922c-362e2d3a7ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.953 248514 DEBUG nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Instance network_info: |[{"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.953 248514 DEBUG oslo_concurrency.lockutils [req-c2921515-b5e4-490d-b57c-00577f1fabce req-834d6f90-374e-4221-ad9e-340559193c40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9519b20c-c79b-41e3-922c-362e2d3a7ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.954 248514 DEBUG nova.network.neutron [req-c2921515-b5e4-490d-b57c-00577f1fabce req-834d6f90-374e-4221-ad9e-340559193c40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Refreshing network info cache for port 3ba83e70-35d2-4947-85b4-64f21eae817c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.957 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Start _get_guest_xml network_info=[{"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.961 248514 WARNING nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.966 248514 DEBUG nova.virt.libvirt.host [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.967 248514 DEBUG nova.virt.libvirt.host [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.971 248514 DEBUG nova.virt.libvirt.host [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.972 248514 DEBUG nova.virt.libvirt.host [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.972 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.972 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.973 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.973 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.973 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.974 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.974 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.974 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.974 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.975 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.975 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.975 248514 DEBUG nova.virt.hardware [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:01:29 np0005558241 nova_compute[248510]: 2025-12-13 09:01:29.978 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:01:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3012: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Dec 13 04:01:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:01:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/663929690' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:01:30 np0005558241 nova_compute[248510]: 2025-12-13 09:01:30.561 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:01:30 np0005558241 nova_compute[248510]: 2025-12-13 09:01:30.587 248514 DEBUG nova.storage.rbd_utils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:01:30 np0005558241 nova_compute[248510]: 2025-12-13 09:01:30.591 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:01:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:01:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:01:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/489710228' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.142 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.143 248514 DEBUG nova.virt.libvirt.vif [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-465153182',display_name='tempest-TestNetworkBasicOps-server-465153182',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-465153182',id=129,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIO+yWVJ497mtKMlg6/trFGMKEjgI30rNV/FBctamIJHtuqDBXodOJRCF+bmPrWOZ8OVFPBiR7SruaUfZabfP77iTSiPTfosnAjyuR+wA0QlVYwQySGN575QhOgAkbSV7g==',key_name='tempest-TestNetworkBasicOps-815903989',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mmqkoou9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:01:26Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=9519b20c-c79b-41e3-922c-362e2d3a7ded,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.143 248514 DEBUG nova.network.os_vif_util [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.144 248514 DEBUG nova.network.os_vif_util [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.145 248514 DEBUG nova.objects.instance [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid 9519b20c-c79b-41e3-922c-362e2d3a7ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.162 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:01:31 np0005558241 nova_compute[248510]:  <uuid>9519b20c-c79b-41e3-922c-362e2d3a7ded</uuid>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:  <name>instance-00000081</name>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkBasicOps-server-465153182</nova:name>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:01:29</nova:creationTime>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:01:31 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:        <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:        <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:        <nova:port uuid="3ba83e70-35d2-4947-85b4-64f21eae817c">
Dec 13 04:01:31 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <entry name="serial">9519b20c-c79b-41e3-922c-362e2d3a7ded</entry>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <entry name="uuid">9519b20c-c79b-41e3-922c-362e2d3a7ded</entry>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/9519b20c-c79b-41e3-922c-362e2d3a7ded_disk">
Dec 13 04:01:31 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:01:31 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/9519b20c-c79b-41e3-922c-362e2d3a7ded_disk.config">
Dec 13 04:01:31 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:01:31 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:1a:55:74"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <target dev="tap3ba83e70-35"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded/console.log" append="off"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:01:31 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:01:31 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:01:31 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:01:31 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.163 248514 DEBUG nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Preparing to wait for external event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.163 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.163 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.164 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.164 248514 DEBUG nova.virt.libvirt.vif [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-465153182',display_name='tempest-TestNetworkBasicOps-server-465153182',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-465153182',id=129,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIO+yWVJ497mtKMlg6/trFGMKEjgI30rNV/FBctamIJHtuqDBXodOJRCF+bmPrWOZ8OVFPBiR7SruaUfZabfP77iTSiPTfosnAjyuR+wA0QlVYwQySGN575QhOgAkbSV7g==',key_name='tempest-TestNetworkBasicOps-815903989',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mmqkoou9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:01:26Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=9519b20c-c79b-41e3-922c-362e2d3a7ded,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.164 248514 DEBUG nova.network.os_vif_util [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.165 248514 DEBUG nova.network.os_vif_util [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.165 248514 DEBUG os_vif [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.166 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.166 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.167 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.169 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.170 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ba83e70-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.170 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3ba83e70-35, col_values=(('external_ids', {'iface-id': '3ba83e70-35d2-4947-85b4-64f21eae817c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:55:74', 'vm-uuid': '9519b20c-c79b-41e3-922c-362e2d3a7ded'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:01:31 np0005558241 NetworkManager[50376]: <info>  [1765616491.1726] manager: (tap3ba83e70-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/531)
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.171 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.176 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.179 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.180 248514 INFO os_vif [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35')#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.242 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.242 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.243 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:1a:55:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.244 248514 INFO nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Using config drive#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.275 248514 DEBUG nova.storage.rbd_utils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.426 248514 DEBUG nova.network.neutron [req-c2921515-b5e4-490d-b57c-00577f1fabce req-834d6f90-374e-4221-ad9e-340559193c40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Updated VIF entry in instance network info cache for port 3ba83e70-35d2-4947-85b4-64f21eae817c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.427 248514 DEBUG nova.network.neutron [req-c2921515-b5e4-490d-b57c-00577f1fabce req-834d6f90-374e-4221-ad9e-340559193c40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Updating instance_info_cache with network_info: [{"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.447 248514 DEBUG oslo_concurrency.lockutils [req-c2921515-b5e4-490d-b57c-00577f1fabce req-834d6f90-374e-4221-ad9e-340559193c40 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9519b20c-c79b-41e3-922c-362e2d3a7ded" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.901 248514 INFO nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Creating config drive at /var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded/disk.config#033[00m
Dec 13 04:01:31 np0005558241 nova_compute[248510]: 2025-12-13 09:01:31.906 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq9lon17q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:01:32 np0005558241 nova_compute[248510]: 2025-12-13 09:01:32.062 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq9lon17q" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:01:32 np0005558241 nova_compute[248510]: 2025-12-13 09:01:32.088 248514 DEBUG nova.storage.rbd_utils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:01:32 np0005558241 nova_compute[248510]: 2025-12-13 09:01:32.092 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded/disk.config 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:01:32 np0005558241 nova_compute[248510]: 2025-12-13 09:01:32.242 248514 DEBUG oslo_concurrency.processutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded/disk.config 9519b20c-c79b-41e3-922c-362e2d3a7ded_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:01:32 np0005558241 nova_compute[248510]: 2025-12-13 09:01:32.243 248514 INFO nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Deleting local config drive /var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded/disk.config because it was imported into RBD.#033[00m
Dec 13 04:01:32 np0005558241 kernel: tap3ba83e70-35: entered promiscuous mode
Dec 13 04:01:32 np0005558241 NetworkManager[50376]: <info>  [1765616492.3018] manager: (tap3ba83e70-35): new Tun device (/org/freedesktop/NetworkManager/Devices/532)
Dec 13 04:01:32 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:32Z|01310|binding|INFO|Claiming lport 3ba83e70-35d2-4947-85b4-64f21eae817c for this chassis.
Dec 13 04:01:32 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:32Z|01311|binding|INFO|3ba83e70-35d2-4947-85b4-64f21eae817c: Claiming fa:16:3e:1a:55:74 10.100.0.6
Dec 13 04:01:32 np0005558241 nova_compute[248510]: 2025-12-13 09:01:32.301 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:32 np0005558241 nova_compute[248510]: 2025-12-13 09:01:32.306 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:32 np0005558241 nova_compute[248510]: 2025-12-13 09:01:32.309 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:32 np0005558241 nova_compute[248510]: 2025-12-13 09:01:32.311 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:32 np0005558241 NetworkManager[50376]: <info>  [1765616492.3125] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/533)
Dec 13 04:01:32 np0005558241 NetworkManager[50376]: <info>  [1765616492.3131] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/534)
Dec 13 04:01:32 np0005558241 systemd-machined[210538]: New machine qemu-157-instance-00000081.
Dec 13 04:01:32 np0005558241 systemd[1]: Started Virtual Machine qemu-157-instance-00000081.
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.385 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:55:74 10.100.0.6'], port_security=['fa:16:3e:1a:55:74 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9519b20c-c79b-41e3-922c-362e2d3a7ded', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '9', 'neutron:security_group_ids': '213b92ea-ddd9-4749-8abe-382a90d446f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ea2504ae-fad7-498d-b82b-9d089e69588d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3ba83e70-35d2-4947-85b4-64f21eae817c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.386 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3ba83e70-35d2-4947-85b4-64f21eae817c in datapath 9da7b572-116c-40f0-9b1e-0183fa9d3f87 bound to our chassis#033[00m
Dec 13 04:01:32 np0005558241 systemd-udevd[377253]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.388 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9da7b572-116c-40f0-9b1e-0183fa9d3f87#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.401 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aff01373-83cf-48ef-b038-b38ceb3962b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.402 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9da7b572-11 in ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:01:32 np0005558241 NetworkManager[50376]: <info>  [1765616492.4076] device (tap3ba83e70-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:01:32 np0005558241 NetworkManager[50376]: <info>  [1765616492.4082] device (tap3ba83e70-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.408 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9da7b572-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.409 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d8db9632-1d5d-41e1-8ac7-ae7115582101]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.410 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f5de73c5-3d15-43aa-93b5-d6cff210a698]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:32 np0005558241 nova_compute[248510]: 2025-12-13 09:01:32.413 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:32 np0005558241 nova_compute[248510]: 2025-12-13 09:01:32.423 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.427 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e02e6e-c75b-40fc-bcfd-537250d7c5dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:32 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:32Z|01312|binding|INFO|Setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c ovn-installed in OVS
Dec 13 04:01:32 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:32Z|01313|binding|INFO|Setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c up in Southbound
Dec 13 04:01:32 np0005558241 nova_compute[248510]: 2025-12-13 09:01:32.436 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.442 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11da7df9-f333-4352-982d-d7cf7f18b53d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3013: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.475 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f29cbf64-7347-4e44-b8d2-c36962e03df1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.483 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[71d69425-5ae8-48a5-97ec-803712571cfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:32 np0005558241 NetworkManager[50376]: <info>  [1765616492.4844] manager: (tap9da7b572-10): new Veth device (/org/freedesktop/NetworkManager/Devices/535)
Dec 13 04:01:32 np0005558241 systemd-udevd[377258]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.517 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[51e9f158-6f3b-440e-945c-e303512b3878]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.522 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[31efe345-5b05-4d20-b571-969693caba83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:32 np0005558241 NetworkManager[50376]: <info>  [1765616492.5446] device (tap9da7b572-10): carrier: link connected
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.550 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[38f8a3de-f4cb-4cf0-8e1c-d14636109f97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.573 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e691ec68-ae0a-4e26-a8c3-d7c7263a8986]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9da7b572-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:10:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 906976, 'reachable_time': 29183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377288, 'error': None, 'target': 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.590 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f6bbe9f7-29a1-4c0d-8487-dc57423a486d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:101b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 906976, 'tstamp': 906976}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377289, 'error': None, 'target': 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.607 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c21dea-3c2a-4292-8694-f78ff7bcd9a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9da7b572-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:10:1b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 3, 'rx_bytes': 180, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 3, 'rx_bytes': 180, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 906976, 'reachable_time': 29183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377290, 'error': None, 'target': 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.640 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[27ab6ac2-e5d1-403a-a4c0-20488adcd586]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.706 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d46ebb3f-6736-4a24-bf3d-84f1207186e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.708 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9da7b572-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.708 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.709 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9da7b572-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:01:32 np0005558241 NetworkManager[50376]: <info>  [1765616492.7113] manager: (tap9da7b572-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/536)
Dec 13 04:01:32 np0005558241 kernel: tap9da7b572-10: entered promiscuous mode
Dec 13 04:01:32 np0005558241 nova_compute[248510]: 2025-12-13 09:01:32.710 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.718 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9da7b572-10, col_values=(('external_ids', {'iface-id': '92adc7c4-81cb-40ce-bd1a-0498c06b06d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:01:32 np0005558241 nova_compute[248510]: 2025-12-13 09:01:32.717 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:32 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:32Z|01314|binding|INFO|Releasing lport 92adc7c4-81cb-40ce-bd1a-0498c06b06d5 from this chassis (sb_readonly=0)
Dec 13 04:01:32 np0005558241 nova_compute[248510]: 2025-12-13 09:01:32.719 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:32 np0005558241 nova_compute[248510]: 2025-12-13 09:01:32.736 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:32 np0005558241 nova_compute[248510]: 2025-12-13 09:01:32.737 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.737 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9da7b572-116c-40f0-9b1e-0183fa9d3f87.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9da7b572-116c-40f0-9b1e-0183fa9d3f87.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.738 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[978f7513-c1d2-4374-9887-6fb968a9a4b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.739 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-9da7b572-116c-40f0-9b1e-0183fa9d3f87
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/9da7b572-116c-40f0-9b1e-0183fa9d3f87.pid.haproxy
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 9da7b572-116c-40f0-9b1e-0183fa9d3f87
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:01:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:32.740 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'env', 'PROCESS_TAG=haproxy-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9da7b572-116c-40f0-9b1e-0183fa9d3f87.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:01:32 np0005558241 nova_compute[248510]: 2025-12-13 09:01:32.890 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.041 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616493.0402653, 9519b20c-c79b-41e3-922c-362e2d3a7ded => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.041 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] VM Started (Lifecycle Event)#033[00m
Dec 13 04:01:33 np0005558241 podman[377364]: 2025-12-13 09:01:33.109141557 +0000 UTC m=+0.050757173 container create 93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:01:33 np0005558241 systemd[1]: Started libpod-conmon-93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc.scope.
Dec 13 04:01:33 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:01:33 np0005558241 podman[377364]: 2025-12-13 09:01:33.082263229 +0000 UTC m=+0.023878865 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:01:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c098a60e057cfd60886eb2452e9a4fdba031c53e7157bf4406be4a772701ff1a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.199 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:01:33 np0005558241 podman[377364]: 2025-12-13 09:01:33.204570373 +0000 UTC m=+0.146186019 container init 93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.206 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616493.0404358, 9519b20c-c79b-41e3-922c-362e2d3a7ded => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.207 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:01:33 np0005558241 podman[377364]: 2025-12-13 09:01:33.21179042 +0000 UTC m=+0.153406026 container start 93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.236 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:01:33 np0005558241 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[377379]: [NOTICE]   (377383) : New worker (377385) forked
Dec 13 04:01:33 np0005558241 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[377379]: [NOTICE]   (377383) : Loading success.
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.241 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.281 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.463 248514 DEBUG nova.compute.manager [req-2a7487ce-8cf5-4a21-8452-30d42d6cabf2 req-08903223-594d-48d6-ad46-2c95f20bdab8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Received event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.463 248514 DEBUG oslo_concurrency.lockutils [req-2a7487ce-8cf5-4a21-8452-30d42d6cabf2 req-08903223-594d-48d6-ad46-2c95f20bdab8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.464 248514 DEBUG oslo_concurrency.lockutils [req-2a7487ce-8cf5-4a21-8452-30d42d6cabf2 req-08903223-594d-48d6-ad46-2c95f20bdab8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.464 248514 DEBUG oslo_concurrency.lockutils [req-2a7487ce-8cf5-4a21-8452-30d42d6cabf2 req-08903223-594d-48d6-ad46-2c95f20bdab8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.464 248514 DEBUG nova.compute.manager [req-2a7487ce-8cf5-4a21-8452-30d42d6cabf2 req-08903223-594d-48d6-ad46-2c95f20bdab8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Processing event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.465 248514 DEBUG nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.469 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616493.4691448, 9519b20c-c79b-41e3-922c-362e2d3a7ded => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.469 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.471 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.473 248514 INFO nova.virt.libvirt.driver [-] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Instance spawned successfully.#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.473 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.507 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.513 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.514 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.514 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.514 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.515 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.516 248514 DEBUG nova.virt.libvirt.driver [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.522 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.560 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.645 248514 INFO nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Took 7.05 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.647 248514 DEBUG nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.718 248514 INFO nova.compute.manager [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Took 8.27 seconds to build instance.#033[00m
Dec 13 04:01:33 np0005558241 nova_compute[248510]: 2025-12-13 09:01:33.748 248514 DEBUG oslo_concurrency.lockutils [None req-d6f6a26d-0be9-4894-aea8-62e7716cf7de 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3014: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Dec 13 04:01:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:01:35 np0005558241 nova_compute[248510]: 2025-12-13 09:01:35.994 248514 DEBUG nova.compute.manager [req-f8725001-33ce-47a1-b6d5-dae214371c7c req-5ec08d62-f6d1-4362-9efe-798ddbd890a4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Received event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:01:35 np0005558241 nova_compute[248510]: 2025-12-13 09:01:35.994 248514 DEBUG oslo_concurrency.lockutils [req-f8725001-33ce-47a1-b6d5-dae214371c7c req-5ec08d62-f6d1-4362-9efe-798ddbd890a4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:35 np0005558241 nova_compute[248510]: 2025-12-13 09:01:35.995 248514 DEBUG oslo_concurrency.lockutils [req-f8725001-33ce-47a1-b6d5-dae214371c7c req-5ec08d62-f6d1-4362-9efe-798ddbd890a4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:35 np0005558241 nova_compute[248510]: 2025-12-13 09:01:35.995 248514 DEBUG oslo_concurrency.lockutils [req-f8725001-33ce-47a1-b6d5-dae214371c7c req-5ec08d62-f6d1-4362-9efe-798ddbd890a4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:35 np0005558241 nova_compute[248510]: 2025-12-13 09:01:35.995 248514 DEBUG nova.compute.manager [req-f8725001-33ce-47a1-b6d5-dae214371c7c req-5ec08d62-f6d1-4362-9efe-798ddbd890a4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] No waiting events found dispatching network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:01:35 np0005558241 nova_compute[248510]: 2025-12-13 09:01:35.995 248514 WARNING nova.compute.manager [req-f8725001-33ce-47a1-b6d5-dae214371c7c req-5ec08d62-f6d1-4362-9efe-798ddbd890a4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Received unexpected event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c for instance with vm_state active and task_state None.#033[00m
Dec 13 04:01:36 np0005558241 nova_compute[248510]: 2025-12-13 09:01:36.174 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3015: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.036 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.307 248514 DEBUG oslo_concurrency.lockutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "9519b20c-c79b-41e3-922c-362e2d3a7ded" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.309 248514 DEBUG oslo_concurrency.lockutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.310 248514 DEBUG oslo_concurrency.lockutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.311 248514 DEBUG oslo_concurrency.lockutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.312 248514 DEBUG oslo_concurrency.lockutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.314 248514 INFO nova.compute.manager [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Terminating instance#033[00m
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.316 248514 DEBUG nova.compute.manager [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:01:37 np0005558241 kernel: tap3ba83e70-35 (unregistering): left promiscuous mode
Dec 13 04:01:37 np0005558241 NetworkManager[50376]: <info>  [1765616497.3723] device (tap3ba83e70-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:01:37 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:37Z|01315|binding|INFO|Releasing lport 3ba83e70-35d2-4947-85b4-64f21eae817c from this chassis (sb_readonly=0)
Dec 13 04:01:37 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:37Z|01316|binding|INFO|Setting lport 3ba83e70-35d2-4947-85b4-64f21eae817c down in Southbound
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.376 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:37 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:37Z|01317|binding|INFO|Removing iface tap3ba83e70-35 ovn-installed in OVS
Dec 13 04:01:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.387 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:55:74 10.100.0.6'], port_security=['fa:16:3e:1a:55:74 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9519b20c-c79b-41e3-922c-362e2d3a7ded', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1789697356', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '11', 'neutron:security_group_ids': '213b92ea-ddd9-4749-8abe-382a90d446f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ea2504ae-fad7-498d-b82b-9d089e69588d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3ba83e70-35d2-4947-85b4-64f21eae817c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.392 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.392 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3ba83e70-35d2-4947-85b4-64f21eae817c in datapath 9da7b572-116c-40f0-9b1e-0183fa9d3f87 unbound from our chassis#033[00m
Dec 13 04:01:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.395 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9da7b572-116c-40f0-9b1e-0183fa9d3f87, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:01:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.396 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78a7e7b1-46b1-493c-b162-ab6405892227]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.397 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 namespace which is not needed anymore#033[00m
Dec 13 04:01:37 np0005558241 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d00000081.scope: Deactivated successfully.
Dec 13 04:01:37 np0005558241 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d00000081.scope: Consumed 4.607s CPU time.
Dec 13 04:01:37 np0005558241 systemd-machined[210538]: Machine qemu-157-instance-00000081 terminated.
Dec 13 04:01:37 np0005558241 podman[377399]: 2025-12-13 09:01:37.471944838 +0000 UTC m=+0.063426514 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:01:37 np0005558241 podman[377398]: 2025-12-13 09:01:37.472523142 +0000 UTC m=+0.063793963 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 13 04:01:37 np0005558241 podman[377396]: 2025-12-13 09:01:37.501133982 +0000 UTC m=+0.093166411 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 04:01:37 np0005558241 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[377379]: [NOTICE]   (377383) : haproxy version is 2.8.14-c23fe91
Dec 13 04:01:37 np0005558241 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[377379]: [NOTICE]   (377383) : path to executable is /usr/sbin/haproxy
Dec 13 04:01:37 np0005558241 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[377379]: [WARNING]  (377383) : Exiting Master process...
Dec 13 04:01:37 np0005558241 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[377379]: [ALERT]    (377383) : Current worker (377385) exited with code 143 (Terminated)
Dec 13 04:01:37 np0005558241 neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87[377379]: [WARNING]  (377383) : All workers exited. Exiting... (0)
Dec 13 04:01:37 np0005558241 systemd[1]: libpod-93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc.scope: Deactivated successfully.
Dec 13 04:01:37 np0005558241 podman[377475]: 2025-12-13 09:01:37.5472185 +0000 UTC m=+0.050684361 container died 93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.555 248514 INFO nova.virt.libvirt.driver [-] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Instance destroyed successfully.#033[00m
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.556 248514 DEBUG nova.objects.instance [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid 9519b20c-c79b-41e3-922c-362e2d3a7ded obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:01:37 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c098a60e057cfd60886eb2452e9a4fdba031c53e7157bf4406be4a772701ff1a-merged.mount: Deactivated successfully.
Dec 13 04:01:37 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc-userdata-shm.mount: Deactivated successfully.
Dec 13 04:01:37 np0005558241 podman[377475]: 2025-12-13 09:01:37.607907996 +0000 UTC m=+0.111373847 container cleanup 93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Dec 13 04:01:37 np0005558241 systemd[1]: libpod-conmon-93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc.scope: Deactivated successfully.
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.672 248514 DEBUG nova.virt.libvirt.vif [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:01:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-465153182',display_name='tempest-TestNetworkBasicOps-server-465153182',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-465153182',id=129,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIO+yWVJ497mtKMlg6/trFGMKEjgI30rNV/FBctamIJHtuqDBXodOJRCF+bmPrWOZ8OVFPBiR7SruaUfZabfP77iTSiPTfosnAjyuR+wA0QlVYwQySGN575QhOgAkbSV7g==',key_name='tempest-TestNetworkBasicOps-815903989',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:01:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-mmqkoou9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:01:33Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=9519b20c-c79b-41e3-922c-362e2d3a7ded,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.673 248514 DEBUG nova.network.os_vif_util [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "3ba83e70-35d2-4947-85b4-64f21eae817c", "address": "fa:16:3e:1a:55:74", "network": {"id": "9da7b572-116c-40f0-9b1e-0183fa9d3f87", "bridge": "br-int", "label": "tempest-network-smoke--1348609630", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ba83e70-35", "ovs_interfaceid": "3ba83e70-35d2-4947-85b4-64f21eae817c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.674 248514 DEBUG nova.network.os_vif_util [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.674 248514 DEBUG os_vif [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.677 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.677 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ba83e70-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.678 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.680 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:37 np0005558241 podman[377520]: 2025-12-13 09:01:37.681584079 +0000 UTC m=+0.049953104 container remove 93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.682 248514 INFO os_vif [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:55:74,bridge_name='br-int',has_traffic_filtering=True,id=3ba83e70-35d2-4947-85b4-64f21eae817c,network=Network(9da7b572-116c-40f0-9b1e-0183fa9d3f87),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3ba83e70-35')#033[00m
Dec 13 04:01:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.687 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f198bf4c-4124-42a2-84ed-751d5120d5bc]: (4, ('Sat Dec 13 09:01:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 (93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc)\n93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc\nSat Dec 13 09:01:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 (93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc)\n93559f9ba3fed4caa1e9a5a8433f1752268274e0bbb1fc2911ea713a063f13cc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.688 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4c81c8e0-f010-415c-b6d6-56f91ca05bd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.689 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9da7b572-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:01:37 np0005558241 kernel: tap9da7b572-10: left promiscuous mode
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.699 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.707 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.709 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7a977a84-7784-4f46-99d2-7b5e7e08c0f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.723 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[237f1c25-80e8-47c8-b539-5212f97aa272]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.725 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d625cb-f445-4093-ae30-dfd04e17a2f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.747 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1a882ffa-6560-441e-a446-05330bfa64b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 906968, 'reachable_time': 44046, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377553, 'error': None, 'target': 'ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.750 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9da7b572-116c-40f0-9b1e-0183fa9d3f87 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:01:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:37.750 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b9cf5635-73fc-4298-b7d4-8018026ee5f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:37 np0005558241 systemd[1]: run-netns-ovnmeta\x2d9da7b572\x2d116c\x2d40f0\x2d9b1e\x2d0183fa9d3f87.mount: Deactivated successfully.
Dec 13 04:01:37 np0005558241 nova_compute[248510]: 2025-12-13 09:01:37.892 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:38 np0005558241 nova_compute[248510]: 2025-12-13 09:01:38.016 248514 INFO nova.virt.libvirt.driver [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Deleting instance files /var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded_del#033[00m
Dec 13 04:01:38 np0005558241 nova_compute[248510]: 2025-12-13 09:01:38.017 248514 INFO nova.virt.libvirt.driver [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Deletion of /var/lib/nova/instances/9519b20c-c79b-41e3-922c-362e2d3a7ded_del complete#033[00m
Dec 13 04:01:38 np0005558241 nova_compute[248510]: 2025-12-13 09:01:38.098 248514 INFO nova.compute.manager [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:01:38 np0005558241 nova_compute[248510]: 2025-12-13 09:01:38.099 248514 DEBUG oslo.service.loopingcall [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:01:38 np0005558241 nova_compute[248510]: 2025-12-13 09:01:38.099 248514 DEBUG nova.compute.manager [-] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:01:38 np0005558241 nova_compute[248510]: 2025-12-13 09:01:38.099 248514 DEBUG nova.network.neutron [-] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:01:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3016: 321 pgs: 321 active+clean; 74 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 788 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Dec 13 04:01:39 np0005558241 nova_compute[248510]: 2025-12-13 09:01:39.577 248514 DEBUG nova.compute.manager [req-98793bf3-6480-41e1-9cb7-d4ad8bec84da req-87f0a9a7-c488-4973-a645-342dd8a15520 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Received event network-vif-unplugged-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:01:39 np0005558241 nova_compute[248510]: 2025-12-13 09:01:39.577 248514 DEBUG oslo_concurrency.lockutils [req-98793bf3-6480-41e1-9cb7-d4ad8bec84da req-87f0a9a7-c488-4973-a645-342dd8a15520 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:39 np0005558241 nova_compute[248510]: 2025-12-13 09:01:39.577 248514 DEBUG oslo_concurrency.lockutils [req-98793bf3-6480-41e1-9cb7-d4ad8bec84da req-87f0a9a7-c488-4973-a645-342dd8a15520 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:39 np0005558241 nova_compute[248510]: 2025-12-13 09:01:39.578 248514 DEBUG oslo_concurrency.lockutils [req-98793bf3-6480-41e1-9cb7-d4ad8bec84da req-87f0a9a7-c488-4973-a645-342dd8a15520 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:39 np0005558241 nova_compute[248510]: 2025-12-13 09:01:39.578 248514 DEBUG nova.compute.manager [req-98793bf3-6480-41e1-9cb7-d4ad8bec84da req-87f0a9a7-c488-4973-a645-342dd8a15520 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] No waiting events found dispatching network-vif-unplugged-3ba83e70-35d2-4947-85b4-64f21eae817c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:01:39 np0005558241 nova_compute[248510]: 2025-12-13 09:01:39.578 248514 DEBUG nova.compute.manager [req-98793bf3-6480-41e1-9cb7-d4ad8bec84da req-87f0a9a7-c488-4973-a645-342dd8a15520 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Received event network-vif-unplugged-3ba83e70-35d2-4947-85b4-64f21eae817c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:01:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:01:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:01:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:01:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:01:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:01:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:01:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3017: 321 pgs: 321 active+clean; 41 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 115 op/s
Dec 13 04:01:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:01:41 np0005558241 nova_compute[248510]: 2025-12-13 09:01:41.255 248514 DEBUG nova.network.neutron [-] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:01:41 np0005558241 nova_compute[248510]: 2025-12-13 09:01:41.282 248514 INFO nova.compute.manager [-] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Took 3.18 seconds to deallocate network for instance.#033[00m
Dec 13 04:01:41 np0005558241 nova_compute[248510]: 2025-12-13 09:01:41.339 248514 DEBUG oslo_concurrency.lockutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:41 np0005558241 nova_compute[248510]: 2025-12-13 09:01:41.340 248514 DEBUG oslo_concurrency.lockutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:41 np0005558241 nova_compute[248510]: 2025-12-13 09:01:41.420 248514 DEBUG oslo_concurrency.processutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:01:41 np0005558241 nova_compute[248510]: 2025-12-13 09:01:41.500 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:41 np0005558241 nova_compute[248510]: 2025-12-13 09:01:41.501 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:41 np0005558241 nova_compute[248510]: 2025-12-13 09:01:41.532 248514 DEBUG nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:01:41 np0005558241 nova_compute[248510]: 2025-12-13 09:01:41.612 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:41 np0005558241 nova_compute[248510]: 2025-12-13 09:01:41.678 248514 DEBUG nova.compute.manager [req-50607e0a-ee52-446f-aa14-0d97d23bbd0f req-9e768243-5e1f-4884-800e-cd6f961f54e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Received event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:01:41 np0005558241 nova_compute[248510]: 2025-12-13 09:01:41.679 248514 DEBUG oslo_concurrency.lockutils [req-50607e0a-ee52-446f-aa14-0d97d23bbd0f req-9e768243-5e1f-4884-800e-cd6f961f54e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:41 np0005558241 nova_compute[248510]: 2025-12-13 09:01:41.679 248514 DEBUG oslo_concurrency.lockutils [req-50607e0a-ee52-446f-aa14-0d97d23bbd0f req-9e768243-5e1f-4884-800e-cd6f961f54e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:41 np0005558241 nova_compute[248510]: 2025-12-13 09:01:41.679 248514 DEBUG oslo_concurrency.lockutils [req-50607e0a-ee52-446f-aa14-0d97d23bbd0f req-9e768243-5e1f-4884-800e-cd6f961f54e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:41 np0005558241 nova_compute[248510]: 2025-12-13 09:01:41.680 248514 DEBUG nova.compute.manager [req-50607e0a-ee52-446f-aa14-0d97d23bbd0f req-9e768243-5e1f-4884-800e-cd6f961f54e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] No waiting events found dispatching network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:01:41 np0005558241 nova_compute[248510]: 2025-12-13 09:01:41.680 248514 WARNING nova.compute.manager [req-50607e0a-ee52-446f-aa14-0d97d23bbd0f req-9e768243-5e1f-4884-800e-cd6f961f54e6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Received unexpected event network-vif-plugged-3ba83e70-35d2-4947-85b4-64f21eae817c for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:01:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:01:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4234585360' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:01:42 np0005558241 nova_compute[248510]: 2025-12-13 09:01:42.019 248514 DEBUG oslo_concurrency.processutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:01:42 np0005558241 nova_compute[248510]: 2025-12-13 09:01:42.026 248514 DEBUG nova.compute.provider_tree [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:01:42 np0005558241 nova_compute[248510]: 2025-12-13 09:01:42.066 248514 DEBUG nova.scheduler.client.report [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:01:42 np0005558241 nova_compute[248510]: 2025-12-13 09:01:42.097 248514 DEBUG oslo_concurrency.lockutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:42 np0005558241 nova_compute[248510]: 2025-12-13 09:01:42.100 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:42 np0005558241 nova_compute[248510]: 2025-12-13 09:01:42.105 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:01:42 np0005558241 nova_compute[248510]: 2025-12-13 09:01:42.105 248514 INFO nova.compute.claims [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:01:42 np0005558241 nova_compute[248510]: 2025-12-13 09:01:42.129 248514 INFO nova.scheduler.client.report [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance 9519b20c-c79b-41e3-922c-362e2d3a7ded#033[00m
Dec 13 04:01:42 np0005558241 nova_compute[248510]: 2025-12-13 09:01:42.228 248514 DEBUG oslo_concurrency.lockutils [None req-dc253464-cef3-4526-bdb9-c882106efe19 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "9519b20c-c79b-41e3-922c-362e2d3a7ded" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:42 np0005558241 nova_compute[248510]: 2025-12-13 09:01:42.247 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:01:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3018: 321 pgs: 321 active+clean; 41 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Dec 13 04:01:42 np0005558241 nova_compute[248510]: 2025-12-13 09:01:42.680 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:01:42 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3217924076' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:01:42 np0005558241 nova_compute[248510]: 2025-12-13 09:01:42.925 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:42 np0005558241 nova_compute[248510]: 2025-12-13 09:01:42.945 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.698s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:01:42 np0005558241 nova_compute[248510]: 2025-12-13 09:01:42.952 248514 DEBUG nova.compute.provider_tree [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:01:42 np0005558241 nova_compute[248510]: 2025-12-13 09:01:42.971 248514 DEBUG nova.scheduler.client.report [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.005 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.006 248514 DEBUG nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.057 248514 DEBUG nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.057 248514 DEBUG nova.network.neutron [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.082 248514 INFO nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.111 248514 DEBUG nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.210 248514 DEBUG nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.211 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.212 248514 INFO nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Creating image(s)#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.234 248514 DEBUG nova.storage.rbd_utils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.261 248514 DEBUG nova.storage.rbd_utils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.283 248514 DEBUG nova.storage.rbd_utils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.287 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.377 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.378 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.379 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.379 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.399 248514 DEBUG nova.storage.rbd_utils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.403 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 18506534-de17-4e42-87c7-b2546619f4d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.440 248514 DEBUG nova.policy [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a5fd410579fa429ba0f7f680590cd86a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '77d40177a4804671aa9c5da343bc2ed4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.695 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 18506534-de17-4e42-87c7-b2546619f4d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:01:43 np0005558241 nova_compute[248510]: 2025-12-13 09:01:43.763 248514 DEBUG nova.storage.rbd_utils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] resizing rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:01:44 np0005558241 nova_compute[248510]: 2025-12-13 09:01:44.114 248514 DEBUG nova.objects.instance [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'migration_context' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:01:44 np0005558241 nova_compute[248510]: 2025-12-13 09:01:44.140 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:01:44 np0005558241 nova_compute[248510]: 2025-12-13 09:01:44.140 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Ensure instance console log exists: /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:01:44 np0005558241 nova_compute[248510]: 2025-12-13 09:01:44.141 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:44 np0005558241 nova_compute[248510]: 2025-12-13 09:01:44.141 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:44 np0005558241 nova_compute[248510]: 2025-12-13 09:01:44.141 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3019: 321 pgs: 321 active+clean; 76 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Dec 13 04:01:45 np0005558241 nova_compute[248510]: 2025-12-13 09:01:45.100 248514 DEBUG nova.network.neutron [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Successfully created port: 698673ab-115a-49aa-b3d5-68392d28aa81 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:01:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:01:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3020: 321 pgs: 321 active+clean; 76 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 96 op/s
Dec 13 04:01:46 np0005558241 nova_compute[248510]: 2025-12-13 09:01:46.900 248514 DEBUG nova.network.neutron [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Successfully updated port: 698673ab-115a-49aa-b3d5-68392d28aa81 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:01:46 np0005558241 nova_compute[248510]: 2025-12-13 09:01:46.917 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:01:46 np0005558241 nova_compute[248510]: 2025-12-13 09:01:46.918 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquired lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:01:46 np0005558241 nova_compute[248510]: 2025-12-13 09:01:46.918 248514 DEBUG nova.network.neutron [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:01:47 np0005558241 nova_compute[248510]: 2025-12-13 09:01:47.059 248514 DEBUG nova.compute.manager [req-0c2c31d1-2115-436c-98ef-95ca0de2eeb4 req-4aa78c61-1bd7-4f24-80e0-7d734e3f530d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-changed-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:01:47 np0005558241 nova_compute[248510]: 2025-12-13 09:01:47.059 248514 DEBUG nova.compute.manager [req-0c2c31d1-2115-436c-98ef-95ca0de2eeb4 req-4aa78c61-1bd7-4f24-80e0-7d734e3f530d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Refreshing instance network info cache due to event network-changed-698673ab-115a-49aa-b3d5-68392d28aa81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:01:47 np0005558241 nova_compute[248510]: 2025-12-13 09:01:47.059 248514 DEBUG oslo_concurrency.lockutils [req-0c2c31d1-2115-436c-98ef-95ca0de2eeb4 req-4aa78c61-1bd7-4f24-80e0-7d734e3f530d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:01:47 np0005558241 nova_compute[248510]: 2025-12-13 09:01:47.328 248514 DEBUG nova.network.neutron [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:01:47 np0005558241 nova_compute[248510]: 2025-12-13 09:01:47.683 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:47 np0005558241 nova_compute[248510]: 2025-12-13 09:01:47.927 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3021: 321 pgs: 321 active+clean; 88 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec 13 04:01:49 np0005558241 nova_compute[248510]: 2025-12-13 09:01:49.982 248514 DEBUG nova.network.neutron [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updating instance_info_cache with network_info: [{"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.017 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Releasing lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.018 248514 DEBUG nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance network_info: |[{"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.018 248514 DEBUG oslo_concurrency.lockutils [req-0c2c31d1-2115-436c-98ef-95ca0de2eeb4 req-4aa78c61-1bd7-4f24-80e0-7d734e3f530d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.018 248514 DEBUG nova.network.neutron [req-0c2c31d1-2115-436c-98ef-95ca0de2eeb4 req-4aa78c61-1bd7-4f24-80e0-7d734e3f530d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Refreshing network info cache for port 698673ab-115a-49aa-b3d5-68392d28aa81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.021 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Start _get_guest_xml network_info=[{"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.025 248514 WARNING nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.029 248514 DEBUG nova.virt.libvirt.host [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.029 248514 DEBUG nova.virt.libvirt.host [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.032 248514 DEBUG nova.virt.libvirt.host [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.033 248514 DEBUG nova.virt.libvirt.host [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.033 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.033 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.034 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.034 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.034 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.034 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.034 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.034 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.035 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.035 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.035 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.035 248514 DEBUG nova.virt.hardware [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.038 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:01:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3022: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.550 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:01:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3744851776' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.655 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.681 248514 DEBUG nova.storage.rbd_utils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.685 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:01:50 np0005558241 nova_compute[248510]: 2025-12-13 09:01:50.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:01:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:01:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/999546636' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.231 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.232 248514 DEBUG nova.virt.libvirt.vif [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:01:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1657427533',display_name='tempest-TestNetworkAdvancedServerOps-server-1657427533',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1657427533',id=130,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIu2Mio4iiKCPHCFWqnjQXxmeR9VuZ/sJMKRi7ThHNLpfALJX8ZDxWjNQlqcAjJCFa+dRlCd52p375xJuviq4QzkIt2R0aIT9v0cJCRdagDbQWGjbmiGhE29U66OFfUYVw==',key_name='tempest-TestNetworkAdvancedServerOps-2136416273',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a2gq0cdy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:01:43Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=18506534-de17-4e42-87c7-b2546619f4d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.233 248514 DEBUG nova.network.os_vif_util [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.234 248514 DEBUG nova.network.os_vif_util [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.235 248514 DEBUG nova.objects.instance [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.256 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:01:51 np0005558241 nova_compute[248510]:  <uuid>18506534-de17-4e42-87c7-b2546619f4d4</uuid>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:  <name>instance-00000082</name>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1657427533</nova:name>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:01:50</nova:creationTime>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:01:51 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:        <nova:user uuid="a5fd410579fa429ba0f7f680590cd86a">tempest-TestNetworkAdvancedServerOps-1969761839-project-member</nova:user>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:        <nova:project uuid="77d40177a4804671aa9c5da343bc2ed4">tempest-TestNetworkAdvancedServerOps-1969761839</nova:project>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:        <nova:port uuid="698673ab-115a-49aa-b3d5-68392d28aa81">
Dec 13 04:01:51 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <entry name="serial">18506534-de17-4e42-87c7-b2546619f4d4</entry>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <entry name="uuid">18506534-de17-4e42-87c7-b2546619f4d4</entry>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/18506534-de17-4e42-87c7-b2546619f4d4_disk">
Dec 13 04:01:51 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:01:51 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/18506534-de17-4e42-87c7-b2546619f4d4_disk.config">
Dec 13 04:01:51 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:01:51 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:7b:34:14"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <target dev="tap698673ab-11"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/console.log" append="off"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:01:51 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:01:51 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:01:51 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:01:51 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.257 248514 DEBUG nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Preparing to wait for external event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.258 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.259 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.259 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.260 248514 DEBUG nova.virt.libvirt.vif [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:01:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1657427533',display_name='tempest-TestNetworkAdvancedServerOps-server-1657427533',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1657427533',id=130,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIu2Mio4iiKCPHCFWqnjQXxmeR9VuZ/sJMKRi7ThHNLpfALJX8ZDxWjNQlqcAjJCFa+dRlCd52p375xJuviq4QzkIt2R0aIT9v0cJCRdagDbQWGjbmiGhE29U66OFfUYVw==',key_name='tempest-TestNetworkAdvancedServerOps-2136416273',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a2gq0cdy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:01:43Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=18506534-de17-4e42-87c7-b2546619f4d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.260 248514 DEBUG nova.network.os_vif_util [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.261 248514 DEBUG nova.network.os_vif_util [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.261 248514 DEBUG os_vif [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.262 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.262 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.262 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.268 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.268 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap698673ab-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.268 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap698673ab-11, col_values=(('external_ids', {'iface-id': '698673ab-115a-49aa-b3d5-68392d28aa81', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:34:14', 'vm-uuid': '18506534-de17-4e42-87c7-b2546619f4d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:01:51 np0005558241 NetworkManager[50376]: <info>  [1765616511.2710] manager: (tap698673ab-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/537)
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.270 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.274 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.276 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.276 248514 INFO os_vif [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11')#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.428 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.429 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.429 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No VIF found with MAC fa:16:3e:7b:34:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.430 248514 INFO nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Using config drive#033[00m
Dec 13 04:01:51 np0005558241 nova_compute[248510]: 2025-12-13 09:01:51.450 248514 DEBUG nova.storage.rbd_utils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:01:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3023: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:01:52 np0005558241 nova_compute[248510]: 2025-12-13 09:01:52.554 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616497.5534518, 9519b20c-c79b-41e3-922c-362e2d3a7ded => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:01:52 np0005558241 nova_compute[248510]: 2025-12-13 09:01:52.555 248514 INFO nova.compute.manager [-] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:01:52 np0005558241 nova_compute[248510]: 2025-12-13 09:01:52.592 248514 DEBUG nova.compute.manager [None req-8b54f665-2adc-43e9-a7a5-955266c3afbc - - - - - -] [instance: 9519b20c-c79b-41e3-922c-362e2d3a7ded] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:01:52 np0005558241 nova_compute[248510]: 2025-12-13 09:01:52.684 248514 INFO nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Creating config drive at /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config#033[00m
Dec 13 04:01:52 np0005558241 nova_compute[248510]: 2025-12-13 09:01:52.690 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsrmu0jpl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:01:52 np0005558241 nova_compute[248510]: 2025-12-13 09:01:52.846 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsrmu0jpl" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:01:52 np0005558241 nova_compute[248510]: 2025-12-13 09:01:52.879 248514 DEBUG nova.storage.rbd_utils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:01:52 np0005558241 nova_compute[248510]: 2025-12-13 09:01:52.885 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config 18506534-de17-4e42-87c7-b2546619f4d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:01:52 np0005558241 nova_compute[248510]: 2025-12-13 09:01:52.942 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.056 248514 DEBUG oslo_concurrency.processutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config 18506534-de17-4e42-87c7-b2546619f4d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.056 248514 INFO nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Deleting local config drive /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config because it was imported into RBD.#033[00m
Dec 13 04:01:53 np0005558241 NetworkManager[50376]: <info>  [1765616513.0985] manager: (tap698673ab-11): new Tun device (/org/freedesktop/NetworkManager/Devices/538)
Dec 13 04:01:53 np0005558241 kernel: tap698673ab-11: entered promiscuous mode
Dec 13 04:01:53 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:53Z|01318|binding|INFO|Claiming lport 698673ab-115a-49aa-b3d5-68392d28aa81 for this chassis.
Dec 13 04:01:53 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:53Z|01319|binding|INFO|698673ab-115a-49aa-b3d5-68392d28aa81: Claiming fa:16:3e:7b:34:14 10.100.0.5
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.101 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.103 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.106 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:53 np0005558241 systemd-udevd[377901]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:01:53 np0005558241 systemd-machined[210538]: New machine qemu-158-instance-00000082.
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.137 248514 DEBUG nova.network.neutron [req-0c2c31d1-2115-436c-98ef-95ca0de2eeb4 req-4aa78c61-1bd7-4f24-80e0-7d734e3f530d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updated VIF entry in instance network info cache for port 698673ab-115a-49aa-b3d5-68392d28aa81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.138 248514 DEBUG nova.network.neutron [req-0c2c31d1-2115-436c-98ef-95ca0de2eeb4 req-4aa78c61-1bd7-4f24-80e0-7d734e3f530d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updating instance_info_cache with network_info: [{"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:01:53 np0005558241 NetworkManager[50376]: <info>  [1765616513.1404] device (tap698673ab-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:01:53 np0005558241 NetworkManager[50376]: <info>  [1765616513.1414] device (tap698673ab-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.144 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:34:14 10.100.0.5'], port_security=['fa:16:3e:7b:34:14 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '18506534-de17-4e42-87c7-b2546619f4d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '250e0b26-9d3f-404b-8b3e-f79706be2a3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db45c681-c79f-432b-bd2c-bbf3bd1f2153, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=698673ab-115a-49aa-b3d5-68392d28aa81) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.145 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 698673ab-115a-49aa-b3d5-68392d28aa81 in datapath 44b4664e-9eef-4d04-bcdb-68869f16c46a bound to our chassis#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.146 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 44b4664e-9eef-4d04-bcdb-68869f16c46a#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.156 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b01b2f-b9c1-492a-8b62-9864d956db96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:53 np0005558241 systemd[1]: Started Virtual Machine qemu-158-instance-00000082.
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.157 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap44b4664e-91 in ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.159 248514 DEBUG oslo_concurrency.lockutils [req-0c2c31d1-2115-436c-98ef-95ca0de2eeb4 req-4aa78c61-1bd7-4f24-80e0-7d734e3f530d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.159 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap44b4664e-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.159 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78ad4c61-cc8d-4f7d-a149-1de7fba44148]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.160 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[625f125b-2d9e-4003-bc84-cbdc3d0bb13b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.205 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.205 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[919b03a1-e737-44c5-9100-441f6b72f1d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:53 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:53Z|01320|binding|INFO|Setting lport 698673ab-115a-49aa-b3d5-68392d28aa81 ovn-installed in OVS
Dec 13 04:01:53 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:53Z|01321|binding|INFO|Setting lport 698673ab-115a-49aa-b3d5-68392d28aa81 up in Southbound
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.208 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.218 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[000a6675-b77b-4805-95d9-1541b68f46f7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.249 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a62021fb-1fd8-42cf-bc71-5957b0ff73ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:53 np0005558241 NetworkManager[50376]: <info>  [1765616513.2553] manager: (tap44b4664e-90): new Veth device (/org/freedesktop/NetworkManager/Devices/539)
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.254 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2bdc0c-cac0-4292-b1c4-53523b1d3208]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.286 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a3e3cc-80cf-41d7-ab6b-e69ade501f74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.290 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ac756119-7934-4620-a672-5804f0cbc3cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:53 np0005558241 NetworkManager[50376]: <info>  [1765616513.3172] device (tap44b4664e-90): carrier: link connected
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.321 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3b1d1cfd-e326-47ab-b39a-22af5b545440]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.337 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ec315d9d-9822-4c05-ae98-5691f89752d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44b4664e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:4b:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 909053, 'reachable_time': 24631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377935, 'error': None, 'target': 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.356 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6f27b5f5-d4fc-46c3-8c39-a5cd26e9ef06]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe93:4b6f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 909053, 'tstamp': 909053}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377936, 'error': None, 'target': 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.378 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9878e9d7-a0b4-46a7-adf4-4dfbd4d0ff89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44b4664e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:4b:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 909053, 'reachable_time': 24631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377937, 'error': None, 'target': 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.412 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[98457172-0019-4795-a0e7-51dfe3490f5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.474 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a1492b18-fe58-49fc-a4fd-bcfd42cfbc98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.476 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44b4664e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.476 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.476 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44b4664e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:01:53 np0005558241 NetworkManager[50376]: <info>  [1765616513.4796] manager: (tap44b4664e-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/540)
Dec 13 04:01:53 np0005558241 kernel: tap44b4664e-90: entered promiscuous mode
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.478 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.481 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.486 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap44b4664e-90, col_values=(('external_ids', {'iface-id': '88fafa32-b4a3-42fd-afa1-df6ba2200ad6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.488 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:53 np0005558241 ovn_controller[148476]: 2025-12-13T09:01:53Z|01322|binding|INFO|Releasing lport 88fafa32-b4a3-42fd-afa1-df6ba2200ad6 from this chassis (sb_readonly=0)
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.492 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/44b4664e-9eef-4d04-bcdb-68869f16c46a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/44b4664e-9eef-4d04-bcdb-68869f16c46a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.493 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[087744ae-0bca-4786-b685-d08f8afcf358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.494 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-44b4664e-9eef-4d04-bcdb-68869f16c46a
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/44b4664e-9eef-4d04-bcdb-68869f16c46a.pid.haproxy
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 44b4664e-9eef-4d04-bcdb-68869f16c46a
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:01:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:53.495 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'env', 'PROCESS_TAG=haproxy-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/44b4664e-9eef-4d04-bcdb-68869f16c46a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.502 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.764 248514 DEBUG nova.compute.manager [req-71419847-fe6c-4830-aa0f-c6cdf5e2ce8b req-7bfa0ccf-ecd6-4879-81d8-4c6bc131420f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.764 248514 DEBUG oslo_concurrency.lockutils [req-71419847-fe6c-4830-aa0f-c6cdf5e2ce8b req-7bfa0ccf-ecd6-4879-81d8-4c6bc131420f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.765 248514 DEBUG oslo_concurrency.lockutils [req-71419847-fe6c-4830-aa0f-c6cdf5e2ce8b req-7bfa0ccf-ecd6-4879-81d8-4c6bc131420f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.765 248514 DEBUG oslo_concurrency.lockutils [req-71419847-fe6c-4830-aa0f-c6cdf5e2ce8b req-7bfa0ccf-ecd6-4879-81d8-4c6bc131420f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:53 np0005558241 nova_compute[248510]: 2025-12-13 09:01:53.765 248514 DEBUG nova.compute.manager [req-71419847-fe6c-4830-aa0f-c6cdf5e2ce8b req-7bfa0ccf-ecd6-4879-81d8-4c6bc131420f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Processing event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:01:53 np0005558241 podman[377969]: 2025-12-13 09:01:53.895241193 +0000 UTC m=+0.049366370 container create 611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:01:53 np0005558241 systemd[1]: Started libpod-conmon-611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e.scope.
Dec 13 04:01:53 np0005558241 podman[377969]: 2025-12-13 09:01:53.870578909 +0000 UTC m=+0.024704106 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:01:53 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:01:53 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11cf72505ed6c58f094c4c16689a309a1fb7eaf088d765885e56e37adbda2ee2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:53 np0005558241 podman[377969]: 2025-12-13 09:01:53.999740681 +0000 UTC m=+0.153865888 container init 611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:01:54 np0005558241 podman[377969]: 2025-12-13 09:01:54.009463079 +0000 UTC m=+0.163588256 container start 611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 13 04:01:54 np0005558241 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[377984]: [NOTICE]   (377988) : New worker (377990) forked
Dec 13 04:01:54 np0005558241 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[377984]: [NOTICE]   (377988) : Loading success.
Dec 13 04:01:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3024: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.490 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616514.4901824, 18506534-de17-4e42-87c7-b2546619f4d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.490 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] VM Started (Lifecycle Event)#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.493 248514 DEBUG nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.496 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.500 248514 INFO nova.virt.libvirt.driver [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance spawned successfully.#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.501 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.513 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.517 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.527 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.527 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.527 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.528 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.528 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.528 248514 DEBUG nova.virt.libvirt.driver [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.537 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.538 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616514.4941294, 18506534-de17-4e42-87c7-b2546619f4d4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.538 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.608 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.611 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616514.4966018, 18506534-de17-4e42-87c7-b2546619f4d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.611 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.643 248514 INFO nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Took 11.43 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.643 248514 DEBUG nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.656 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.660 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.695 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.774 248514 INFO nova.compute.manager [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Took 13.18 seconds to build instance.#033[00m
Dec 13 04:01:54 np0005558241 nova_compute[248510]: 2025-12-13 09:01:54.796 248514 DEBUG oslo_concurrency.lockutils [None req-bd65f8b6-04d1-4bb8-b859-32ccdbd52cb4 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:55.439 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:55.439 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:01:55.440 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:55 np0005558241 nova_compute[248510]: 2025-12-13 09:01:55.895 248514 DEBUG nova.compute.manager [req-e035bc5e-0669-453e-a444-2ecdc4e99db4 req-46426288-4c57-4676-a6d3-edba6d22508d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:01:55 np0005558241 nova_compute[248510]: 2025-12-13 09:01:55.895 248514 DEBUG oslo_concurrency.lockutils [req-e035bc5e-0669-453e-a444-2ecdc4e99db4 req-46426288-4c57-4676-a6d3-edba6d22508d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:01:55 np0005558241 nova_compute[248510]: 2025-12-13 09:01:55.895 248514 DEBUG oslo_concurrency.lockutils [req-e035bc5e-0669-453e-a444-2ecdc4e99db4 req-46426288-4c57-4676-a6d3-edba6d22508d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:01:55 np0005558241 nova_compute[248510]: 2025-12-13 09:01:55.896 248514 DEBUG oslo_concurrency.lockutils [req-e035bc5e-0669-453e-a444-2ecdc4e99db4 req-46426288-4c57-4676-a6d3-edba6d22508d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:01:55 np0005558241 nova_compute[248510]: 2025-12-13 09:01:55.896 248514 DEBUG nova.compute.manager [req-e035bc5e-0669-453e-a444-2ecdc4e99db4 req-46426288-4c57-4676-a6d3-edba6d22508d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] No waiting events found dispatching network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:01:55 np0005558241 nova_compute[248510]: 2025-12-13 09:01:55.896 248514 WARNING nova.compute.manager [req-e035bc5e-0669-453e-a444-2ecdc4e99db4 req-46426288-4c57-4676-a6d3-edba6d22508d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received unexpected event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:01:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:01:56 np0005558241 nova_compute[248510]: 2025-12-13 09:01:56.271 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3025: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 468 KiB/s wr, 17 op/s
Dec 13 04:01:57 np0005558241 nova_compute[248510]: 2025-12-13 09:01:57.933 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:01:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3026: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 493 KiB/s rd, 468 KiB/s wr, 34 op/s
Dec 13 04:02:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3027: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 13 04:02:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:02:01 np0005558241 nova_compute[248510]: 2025-12-13 09:02:01.275 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3028: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:02:02 np0005558241 NetworkManager[50376]: <info>  [1765616522.4952] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/541)
Dec 13 04:02:02 np0005558241 NetworkManager[50376]: <info>  [1765616522.4965] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Dec 13 04:02:02 np0005558241 nova_compute[248510]: 2025-12-13 09:02:02.494 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:02 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:02Z|01323|binding|INFO|Releasing lport 88fafa32-b4a3-42fd-afa1-df6ba2200ad6 from this chassis (sb_readonly=0)
Dec 13 04:02:02 np0005558241 nova_compute[248510]: 2025-12-13 09:02:02.554 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:02 np0005558241 nova_compute[248510]: 2025-12-13 09:02:02.559 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:02 np0005558241 nova_compute[248510]: 2025-12-13 09:02:02.936 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:03 np0005558241 nova_compute[248510]: 2025-12-13 09:02:03.248 248514 DEBUG nova.compute.manager [req-f8e14634-4284-4cb0-80e9-3f2122f7d369 req-4d7265fb-ee10-4118-9035-7687ae58ae9c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-changed-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:02:03 np0005558241 nova_compute[248510]: 2025-12-13 09:02:03.248 248514 DEBUG nova.compute.manager [req-f8e14634-4284-4cb0-80e9-3f2122f7d369 req-4d7265fb-ee10-4118-9035-7687ae58ae9c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Refreshing instance network info cache due to event network-changed-698673ab-115a-49aa-b3d5-68392d28aa81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:02:03 np0005558241 nova_compute[248510]: 2025-12-13 09:02:03.249 248514 DEBUG oslo_concurrency.lockutils [req-f8e14634-4284-4cb0-80e9-3f2122f7d369 req-4d7265fb-ee10-4118-9035-7687ae58ae9c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:02:03 np0005558241 nova_compute[248510]: 2025-12-13 09:02:03.249 248514 DEBUG oslo_concurrency.lockutils [req-f8e14634-4284-4cb0-80e9-3f2122f7d369 req-4d7265fb-ee10-4118-9035-7687ae58ae9c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:02:03 np0005558241 nova_compute[248510]: 2025-12-13 09:02:03.249 248514 DEBUG nova.network.neutron [req-f8e14634-4284-4cb0-80e9-3f2122f7d369 req-4d7265fb-ee10-4118-9035-7687ae58ae9c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Refreshing network info cache for port 698673ab-115a-49aa-b3d5-68392d28aa81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:02:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:02:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 40K writes, 159K keys, 40K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s#012Cumulative WAL: 40K writes, 14K syncs, 2.80 writes per sync, written: 0.16 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5560 writes, 22K keys, 5560 commit groups, 1.0 writes per commit group, ingest: 26.78 MB, 0.04 MB/s#012Interval WAL: 5560 writes, 2114 syncs, 2.63 writes per sync, written: 0.03 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:02:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3029: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:02:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:02:06 np0005558241 nova_compute[248510]: 2025-12-13 09:02:06.233 248514 DEBUG nova.network.neutron [req-f8e14634-4284-4cb0-80e9-3f2122f7d369 req-4d7265fb-ee10-4118-9035-7687ae58ae9c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updated VIF entry in instance network info cache for port 698673ab-115a-49aa-b3d5-68392d28aa81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:02:06 np0005558241 nova_compute[248510]: 2025-12-13 09:02:06.234 248514 DEBUG nova.network.neutron [req-f8e14634-4284-4cb0-80e9-3f2122f7d369 req-4d7265fb-ee10-4118-9035-7687ae58ae9c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updating instance_info_cache with network_info: [{"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:02:06 np0005558241 nova_compute[248510]: 2025-12-13 09:02:06.271 248514 DEBUG oslo_concurrency.lockutils [req-f8e14634-4284-4cb0-80e9-3f2122f7d369 req-4d7265fb-ee10-4118-9035-7687ae58ae9c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:02:06 np0005558241 nova_compute[248510]: 2025-12-13 09:02:06.278 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3030: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Dec 13 04:02:06 np0005558241 nova_compute[248510]: 2025-12-13 09:02:06.953 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:07 np0005558241 nova_compute[248510]: 2025-12-13 09:02:07.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:02:07 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:07Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7b:34:14 10.100.0.5
Dec 13 04:02:07 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:07Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:34:14 10.100.0.5
Dec 13 04:02:07 np0005558241 nova_compute[248510]: 2025-12-13 09:02:07.935 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:07 np0005558241 podman[378044]: 2025-12-13 09:02:07.980637388 +0000 UTC m=+0.060140173 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:02:07 np0005558241 podman[378043]: 2025-12-13 09:02:07.988645154 +0000 UTC m=+0.071485660 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 04:02:08 np0005558241 podman[378042]: 2025-12-13 09:02:08.008463929 +0000 UTC m=+0.095115699 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec 13 04:02:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3031: 321 pgs: 321 active+clean; 94 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 412 KiB/s wr, 85 op/s
Dec 13 04:02:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:02:09
Dec 13 04:02:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:02:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:02:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'backups', 'volumes', '.rgw.root', 'images', 'default.rgw.control']
Dec 13 04:02:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:02:09 np0005558241 nova_compute[248510]: 2025-12-13 09:02:09.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:02:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:02:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:02:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:02:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:02:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:02:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:02:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3032: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 114 op/s
Dec 13 04:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:02:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:02:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:02:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:02:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5401.8 total, 600.0 interval#012Cumulative writes: 41K writes, 159K keys, 41K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s#012Cumulative WAL: 41K writes, 15K syncs, 2.76 writes per sync, written: 0.15 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5762 writes, 24K keys, 5762 commit groups, 1.0 writes per commit group, ingest: 26.51 MB, 0.04 MB/s#012Interval WAL: 5763 writes, 2205 syncs, 2.61 writes per sync, written: 0.03 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:02:11 np0005558241 nova_compute[248510]: 2025-12-13 09:02:11.283 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3033: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:02:12 np0005558241 nova_compute[248510]: 2025-12-13 09:02:12.937 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:13 np0005558241 nova_compute[248510]: 2025-12-13 09:02:13.292 248514 INFO nova.compute.manager [None req-5f074d7a-0962-4f5f-9962-26f1c8d53407 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Get console output#033[00m
Dec 13 04:02:13 np0005558241 nova_compute[248510]: 2025-12-13 09:02:13.299 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 04:02:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3034: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:02:14 np0005558241 nova_compute[248510]: 2025-12-13 09:02:14.542 248514 INFO nova.compute.manager [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Rebuilding instance#033[00m
Dec 13 04:02:14 np0005558241 nova_compute[248510]: 2025-12-13 09:02:14.860 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:02:14 np0005558241 nova_compute[248510]: 2025-12-13 09:02:14.882 248514 DEBUG nova.compute.manager [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:02:14 np0005558241 nova_compute[248510]: 2025-12-13 09:02:14.953 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_requests' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:02:14 np0005558241 nova_compute[248510]: 2025-12-13 09:02:14.970 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:02:14 np0005558241 nova_compute[248510]: 2025-12-13 09:02:14.987 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'resources' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:02:15 np0005558241 nova_compute[248510]: 2025-12-13 09:02:15.002 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'migration_context' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:02:15 np0005558241 nova_compute[248510]: 2025-12-13 09:02:15.018 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec 13 04:02:15 np0005558241 nova_compute[248510]: 2025-12-13 09:02:15.022 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec 13 04:02:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:02:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1945107926' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:02:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:02:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1945107926' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:02:15 np0005558241 nova_compute[248510]: 2025-12-13 09:02:15.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:02:15 np0005558241 nova_compute[248510]: 2025-12-13 09:02:15.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:02:15 np0005558241 nova_compute[248510]: 2025-12-13 09:02:15.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:02:15 np0005558241 nova_compute[248510]: 2025-12-13 09:02:15.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:02:15 np0005558241 nova_compute[248510]: 2025-12-13 09:02:15.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:02:15 np0005558241 nova_compute[248510]: 2025-12-13 09:02:15.799 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 04:02:15 np0005558241 nova_compute[248510]: 2025-12-13 09:02:15.799 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:02:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:02:16 np0005558241 nova_compute[248510]: 2025-12-13 09:02:16.284 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3035: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:02:17 np0005558241 kernel: tap698673ab-11 (unregistering): left promiscuous mode
Dec 13 04:02:17 np0005558241 NetworkManager[50376]: <info>  [1765616537.2484] device (tap698673ab-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:02:17 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:17Z|01324|binding|INFO|Releasing lport 698673ab-115a-49aa-b3d5-68392d28aa81 from this chassis (sb_readonly=0)
Dec 13 04:02:17 np0005558241 nova_compute[248510]: 2025-12-13 09:02:17.258 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:17 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:17Z|01325|binding|INFO|Setting lport 698673ab-115a-49aa-b3d5-68392d28aa81 down in Southbound
Dec 13 04:02:17 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:17Z|01326|binding|INFO|Removing iface tap698673ab-11 ovn-installed in OVS
Dec 13 04:02:17 np0005558241 nova_compute[248510]: 2025-12-13 09:02:17.265 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:17 np0005558241 nova_compute[248510]: 2025-12-13 09:02:17.283 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.280 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:34:14 10.100.0.5'], port_security=['fa:16:3e:7b:34:14 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '18506534-de17-4e42-87c7-b2546619f4d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '250e0b26-9d3f-404b-8b3e-f79706be2a3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db45c681-c79f-432b-bd2c-bbf3bd1f2153, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=698673ab-115a-49aa-b3d5-68392d28aa81) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:02:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.282 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 698673ab-115a-49aa-b3d5-68392d28aa81 in datapath 44b4664e-9eef-4d04-bcdb-68869f16c46a unbound from our chassis#033[00m
Dec 13 04:02:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.284 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 44b4664e-9eef-4d04-bcdb-68869f16c46a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:02:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.285 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7b81d144-6d3b-4019-8eb5-19287d187e09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.286 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a namespace which is not needed anymore#033[00m
Dec 13 04:02:17 np0005558241 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d00000082.scope: Deactivated successfully.
Dec 13 04:02:17 np0005558241 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d00000082.scope: Consumed 13.720s CPU time.
Dec 13 04:02:17 np0005558241 systemd-machined[210538]: Machine qemu-158-instance-00000082 terminated.
Dec 13 04:02:17 np0005558241 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[377984]: [NOTICE]   (377988) : haproxy version is 2.8.14-c23fe91
Dec 13 04:02:17 np0005558241 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[377984]: [NOTICE]   (377988) : path to executable is /usr/sbin/haproxy
Dec 13 04:02:17 np0005558241 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[377984]: [WARNING]  (377988) : Exiting Master process...
Dec 13 04:02:17 np0005558241 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[377984]: [ALERT]    (377988) : Current worker (377990) exited with code 143 (Terminated)
Dec 13 04:02:17 np0005558241 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[377984]: [WARNING]  (377988) : All workers exited. Exiting... (0)
Dec 13 04:02:17 np0005558241 systemd[1]: libpod-611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e.scope: Deactivated successfully.
Dec 13 04:02:17 np0005558241 podman[378184]: 2025-12-13 09:02:17.451413899 +0000 UTC m=+0.052144647 container died 611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 04:02:17 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e-userdata-shm.mount: Deactivated successfully.
Dec 13 04:02:17 np0005558241 systemd[1]: var-lib-containers-storage-overlay-11cf72505ed6c58f094c4c16689a309a1fb7eaf088d765885e56e37adbda2ee2-merged.mount: Deactivated successfully.
Dec 13 04:02:17 np0005558241 podman[378184]: 2025-12-13 09:02:17.507304437 +0000 UTC m=+0.108035195 container cleanup 611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 04:02:17 np0005558241 systemd[1]: libpod-conmon-611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e.scope: Deactivated successfully.
Dec 13 04:02:17 np0005558241 nova_compute[248510]: 2025-12-13 09:02:17.566 248514 DEBUG nova.compute.manager [req-4bab1df3-a436-425a-8d13-7a51009d5d7b req-80318eda-4c44-44e5-a192-1242a3b0569c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-unplugged-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:02:17 np0005558241 nova_compute[248510]: 2025-12-13 09:02:17.568 248514 DEBUG oslo_concurrency.lockutils [req-4bab1df3-a436-425a-8d13-7a51009d5d7b req-80318eda-4c44-44e5-a192-1242a3b0569c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:17 np0005558241 nova_compute[248510]: 2025-12-13 09:02:17.568 248514 DEBUG oslo_concurrency.lockutils [req-4bab1df3-a436-425a-8d13-7a51009d5d7b req-80318eda-4c44-44e5-a192-1242a3b0569c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:17 np0005558241 nova_compute[248510]: 2025-12-13 09:02:17.569 248514 DEBUG oslo_concurrency.lockutils [req-4bab1df3-a436-425a-8d13-7a51009d5d7b req-80318eda-4c44-44e5-a192-1242a3b0569c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:17 np0005558241 nova_compute[248510]: 2025-12-13 09:02:17.569 248514 DEBUG nova.compute.manager [req-4bab1df3-a436-425a-8d13-7a51009d5d7b req-80318eda-4c44-44e5-a192-1242a3b0569c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] No waiting events found dispatching network-vif-unplugged-698673ab-115a-49aa-b3d5-68392d28aa81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:02:17 np0005558241 nova_compute[248510]: 2025-12-13 09:02:17.569 248514 WARNING nova.compute.manager [req-4bab1df3-a436-425a-8d13-7a51009d5d7b req-80318eda-4c44-44e5-a192-1242a3b0569c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received unexpected event network-vif-unplugged-698673ab-115a-49aa-b3d5-68392d28aa81 for instance with vm_state active and task_state rebuilding.#033[00m
Dec 13 04:02:17 np0005558241 podman[378229]: 2025-12-13 09:02:17.582289113 +0000 UTC m=+0.047380011 container remove 611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 04:02:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.589 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a1221d02-ad2a-4a5b-92de-5d9cf5e46db3]: (4, ('Sat Dec 13 09:02:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a (611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e)\n611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e\nSat Dec 13 09:02:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a (611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e)\n611de90b436138983237b1f4615f56043c986bfd6489db8a15ed4aaba6e9cc9e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.591 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[55cc62d5-37eb-4097-9775-8b2f212fe373]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.594 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44b4664e-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:02:17 np0005558241 kernel: tap44b4664e-90: left promiscuous mode
Dec 13 04:02:17 np0005558241 nova_compute[248510]: 2025-12-13 09:02:17.597 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:17 np0005558241 nova_compute[248510]: 2025-12-13 09:02:17.615 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.618 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1c30e471-95fa-46f9-9228-98e5f3c88167]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.636 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[46999000-f175-4ce6-be6c-27d73e4c14a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.637 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd5bbe1-df7b-4b1e-af8b-872f10ba9766]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.653 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1bc40f-e29b-4eb8-bf8c-3a29079aa546]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 909045, 'reachable_time': 19969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378251, 'error': None, 'target': 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:17 np0005558241 systemd[1]: run-netns-ovnmeta\x2d44b4664e\x2d9eef\x2d4d04\x2dbcdb\x2d68869f16c46a.mount: Deactivated successfully.
Dec 13 04:02:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.656 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:02:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:17.656 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b89f0be9-88d8-4d4c-bbd9-067829b0bd6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:17 np0005558241 nova_compute[248510]: 2025-12-13 09:02:17.780 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:17 np0005558241 nova_compute[248510]: 2025-12-13 09:02:17.781 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:02:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:02:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:02:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:02:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:02:17 np0005558241 nova_compute[248510]: 2025-12-13 09:02:17.807 248514 DEBUG nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:02:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:02:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:02:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:02:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:02:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:02:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:02:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:02:17 np0005558241 nova_compute[248510]: 2025-12-13 09:02:17.919 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:17 np0005558241 nova_compute[248510]: 2025-12-13 09:02:17.920 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:17 np0005558241 nova_compute[248510]: 2025-12-13 09:02:17.930 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:02:17 np0005558241 nova_compute[248510]: 2025-12-13 09:02:17.930 248514 INFO nova.compute.claims [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:02:17 np0005558241 nova_compute[248510]: 2025-12-13 09:02:17.939 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.029 248514 DEBUG nova.scheduler.client.report [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.042 248514 INFO nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance shutdown successfully after 3 seconds.#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.048 248514 INFO nova.virt.libvirt.driver [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance destroyed successfully.#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.053 248514 DEBUG nova.scheduler.client.report [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.054 248514 DEBUG nova.compute.provider_tree [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.062 248514 INFO nova.virt.libvirt.driver [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance destroyed successfully.#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.064 248514 DEBUG nova.virt.libvirt.vif [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:01:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1657427533',display_name='tempest-TestNetworkAdvancedServerOps-server-1657427533',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1657427533',id=130,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIu2Mio4iiKCPHCFWqnjQXxmeR9VuZ/sJMKRi7ThHNLpfALJX8ZDxWjNQlqcAjJCFa+dRlCd52p375xJuviq4QzkIt2R0aIT9v0cJCRdagDbQWGjbmiGhE29U66OFfUYVw==',key_name='tempest-TestNetworkAdvancedServerOps-2136416273',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:01:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a2gq0cdy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:02:13Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=18506534-de17-4e42-87c7-b2546619f4d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 
4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.064 248514 DEBUG nova.network.os_vif_util [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.066 248514 DEBUG nova.network.os_vif_util [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.066 248514 DEBUG os_vif [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.069 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.070 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap698673ab-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.075 248514 DEBUG nova.scheduler.client.report [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.079 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.084 248514 INFO os_vif [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11')#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.111 248514 DEBUG nova.scheduler.client.report [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.185 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:02:18 np0005558241 podman[378346]: 2025-12-13 09:02:18.293919461 +0000 UTC m=+0.057577890 container create 586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 04:02:18 np0005558241 systemd[1]: Started libpod-conmon-586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5.scope.
Dec 13 04:02:18 np0005558241 podman[378346]: 2025-12-13 09:02:18.269995976 +0000 UTC m=+0.033654535 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:02:18 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:02:18 np0005558241 podman[378346]: 2025-12-13 09:02:18.391349396 +0000 UTC m=+0.155007835 container init 586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 04:02:18 np0005558241 podman[378346]: 2025-12-13 09:02:18.401423263 +0000 UTC m=+0.165081702 container start 586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:02:18 np0005558241 podman[378346]: 2025-12-13 09:02:18.407994134 +0000 UTC m=+0.171652643 container attach 586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:02:18 np0005558241 amazing_heisenberg[378364]: 167 167
Dec 13 04:02:18 np0005558241 systemd[1]: libpod-586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5.scope: Deactivated successfully.
Dec 13 04:02:18 np0005558241 conmon[378364]: conmon 586e9d0d7dba7a9e08eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5.scope/container/memory.events
Dec 13 04:02:18 np0005558241 podman[378346]: 2025-12-13 09:02:18.41109998 +0000 UTC m=+0.174758399 container died 586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_heisenberg, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:02:18 np0005558241 systemd[1]: var-lib-containers-storage-overlay-012065122cb87c7c2551928616b1d933b0741faf2be759b62fa583448a7384ba-merged.mount: Deactivated successfully.
Dec 13 04:02:18 np0005558241 podman[378346]: 2025-12-13 09:02:18.464061676 +0000 UTC m=+0.227720095 container remove 586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:02:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3036: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.465 248514 INFO nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Deleting instance files /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4_del#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.468 248514 INFO nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Deletion of /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4_del complete#033[00m
Dec 13 04:02:18 np0005558241 systemd[1]: libpod-conmon-586e9d0d7dba7a9e08eba4819bff82b132778c985c608a3de3cedcceac15e5a5.scope: Deactivated successfully.
Dec 13 04:02:18 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:02:18 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:02:18 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:02:18 np0005558241 podman[378404]: 2025-12-13 09:02:18.679684394 +0000 UTC m=+0.047492714 container create e2da2e297b0135b6744a40d186ec62b3f8476d2ecf745b2231c7c136ce33e0bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jackson, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:02:18 np0005558241 systemd[1]: Started libpod-conmon-e2da2e297b0135b6744a40d186ec62b3f8476d2ecf745b2231c7c136ce33e0bb.scope.
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.727 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.728 248514 INFO nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Creating image(s)#033[00m
Dec 13 04:02:18 np0005558241 podman[378404]: 2025-12-13 09:02:18.660584206 +0000 UTC m=+0.028392516 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:02:18 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:02:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a586242ce0c4995ef5c110155c88cde6ff40350a580c02f9da9de9b9b017067c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a586242ce0c4995ef5c110155c88cde6ff40350a580c02f9da9de9b9b017067c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a586242ce0c4995ef5c110155c88cde6ff40350a580c02f9da9de9b9b017067c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a586242ce0c4995ef5c110155c88cde6ff40350a580c02f9da9de9b9b017067c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a586242ce0c4995ef5c110155c88cde6ff40350a580c02f9da9de9b9b017067c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.765 248514 DEBUG nova.storage.rbd_utils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:02:18 np0005558241 podman[378404]: 2025-12-13 09:02:18.780312247 +0000 UTC m=+0.148120587 container init e2da2e297b0135b6744a40d186ec62b3f8476d2ecf745b2231c7c136ce33e0bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jackson, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 04:02:18 np0005558241 podman[378404]: 2025-12-13 09:02:18.789473771 +0000 UTC m=+0.157282091 container start e2da2e297b0135b6744a40d186ec62b3f8476d2ecf745b2231c7c136ce33e0bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jackson, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:02:18 np0005558241 podman[378404]: 2025-12-13 09:02:18.793847418 +0000 UTC m=+0.161655758 container attach e2da2e297b0135b6744a40d186ec62b3f8476d2ecf745b2231c7c136ce33e0bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jackson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:02:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:02:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1729367253' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.818 248514 DEBUG nova.storage.rbd_utils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.851 248514 DEBUG nova.storage.rbd_utils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.855 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.909 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.724s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.920 248514 DEBUG nova.compute.provider_tree [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.940 248514 DEBUG nova.scheduler.client.report [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.951 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.952 248514 DEBUG oslo_concurrency.lockutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.953 248514 DEBUG oslo_concurrency.lockutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.953 248514 DEBUG oslo_concurrency.lockutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "6963c18baf0656ea2bcf2978da1e5544086b18bc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.982 248514 DEBUG nova.storage.rbd_utils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:02:18 np0005558241 nova_compute[248510]: 2025-12-13 09:02:18.989 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 18506534-de17-4e42-87c7-b2546619f4d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.049 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.050 248514 DEBUG nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.164 248514 DEBUG nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.165 248514 DEBUG nova.network.neutron [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.190 248514 INFO nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.212 248514 DEBUG nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:02:19 np0005558241 nifty_jackson[378422]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:02:19 np0005558241 nifty_jackson[378422]: --> All data devices are unavailable
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.300 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc 18506534-de17-4e42-87c7-b2546619f4d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:02:19 np0005558241 systemd[1]: libpod-e2da2e297b0135b6744a40d186ec62b3f8476d2ecf745b2231c7c136ce33e0bb.scope: Deactivated successfully.
Dec 13 04:02:19 np0005558241 podman[378404]: 2025-12-13 09:02:19.32135327 +0000 UTC m=+0.689161610 container died e2da2e297b0135b6744a40d186ec62b3f8476d2ecf745b2231c7c136ce33e0bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:02:19 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a586242ce0c4995ef5c110155c88cde6ff40350a580c02f9da9de9b9b017067c-merged.mount: Deactivated successfully.
Dec 13 04:02:19 np0005558241 podman[378404]: 2025-12-13 09:02:19.368747951 +0000 UTC m=+0.736556271 container remove e2da2e297b0135b6744a40d186ec62b3f8476d2ecf745b2231c7c136ce33e0bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_jackson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 04:02:19 np0005558241 systemd[1]: libpod-conmon-e2da2e297b0135b6744a40d186ec62b3f8476d2ecf745b2231c7c136ce33e0bb.scope: Deactivated successfully.
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.411 248514 DEBUG nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.413 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.414 248514 INFO nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Creating image(s)#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.443 248514 DEBUG nova.storage.rbd_utils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.471 248514 DEBUG nova.storage.rbd_utils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.492 248514 DEBUG nova.storage.rbd_utils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.499 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.554 248514 DEBUG nova.storage.rbd_utils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] resizing rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.592 248514 DEBUG nova.policy [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.599 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.600 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.601 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.601 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.631 248514 DEBUG nova.storage.rbd_utils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.635 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:02:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:02:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.2 total, 600.0 interval#012Cumulative writes: 32K writes, 132K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.02 MB/s#012Cumulative WAL: 32K writes, 11K syncs, 2.85 writes per sync, written: 0.13 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4125 writes, 18K keys, 4125 commit groups, 1.0 writes per commit group, ingest: 20.64 MB, 0.03 MB/s#012Interval WAL: 4125 writes, 1571 syncs, 2.63 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.689 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updating instance_info_cache with network_info: [{"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.696 248514 DEBUG nova.compute.manager [req-4b5c457e-7dbd-4cb8-9615-9a27db949cc6 req-293a71bc-c385-4d07-9828-56698e49a9e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.696 248514 DEBUG oslo_concurrency.lockutils [req-4b5c457e-7dbd-4cb8-9615-9a27db949cc6 req-293a71bc-c385-4d07-9828-56698e49a9e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.697 248514 DEBUG oslo_concurrency.lockutils [req-4b5c457e-7dbd-4cb8-9615-9a27db949cc6 req-293a71bc-c385-4d07-9828-56698e49a9e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.697 248514 DEBUG oslo_concurrency.lockutils [req-4b5c457e-7dbd-4cb8-9615-9a27db949cc6 req-293a71bc-c385-4d07-9828-56698e49a9e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.697 248514 DEBUG nova.compute.manager [req-4b5c457e-7dbd-4cb8-9615-9a27db949cc6 req-293a71bc-c385-4d07-9828-56698e49a9e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] No waiting events found dispatching network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.697 248514 WARNING nova.compute.manager [req-4b5c457e-7dbd-4cb8-9615-9a27db949cc6 req-293a71bc-c385-4d07-9828-56698e49a9e2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received unexpected event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.740 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.741 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.741 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.751 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.752 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Ensure instance console log exists: /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.752 248514 DEBUG oslo_concurrency.lockutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.752 248514 DEBUG oslo_concurrency.lockutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.753 248514 DEBUG oslo_concurrency.lockutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.755 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Start _get_guest_xml network_info=[{"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.762 248514 WARNING nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.772 248514 DEBUG nova.virt.libvirt.host [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.773 248514 DEBUG nova.virt.libvirt.host [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.776 248514 DEBUG nova.virt.libvirt.host [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.777 248514 DEBUG nova.virt.libvirt.host [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.777 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.778 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:52Z,direct_url=<?>,disk_format='qcow2',id=c10f1898-20b3-4bc9-8a36-2ee01b39c9ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:56Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.778 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.778 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.778 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.779 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.779 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.779 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.779 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.779 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.779 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.780 248514 DEBUG nova.virt.hardware [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.780 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.813 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:02:19 np0005558241 podman[378777]: 2025-12-13 09:02:19.904683069 +0000 UTC m=+0.071337157 container create e36ef07a80975f98e6324cba2e016989ff2466c44f7511f72118ccdc4bede63c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 04:02:19 np0005558241 nova_compute[248510]: 2025-12-13 09:02:19.934 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:02:19 np0005558241 systemd[1]: Started libpod-conmon-e36ef07a80975f98e6324cba2e016989ff2466c44f7511f72118ccdc4bede63c.scope.
Dec 13 04:02:19 np0005558241 podman[378777]: 2025-12-13 09:02:19.863242535 +0000 UTC m=+0.029896653 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:02:19 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:02:19 np0005558241 podman[378777]: 2025-12-13 09:02:19.984334899 +0000 UTC m=+0.150989007 container init e36ef07a80975f98e6324cba2e016989ff2466c44f7511f72118ccdc4bede63c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 04:02:19 np0005558241 podman[378777]: 2025-12-13 09:02:19.990469569 +0000 UTC m=+0.157123647 container start e36ef07a80975f98e6324cba2e016989ff2466c44f7511f72118ccdc4bede63c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 04:02:19 np0005558241 podman[378777]: 2025-12-13 09:02:19.9941889 +0000 UTC m=+0.160843018 container attach e36ef07a80975f98e6324cba2e016989ff2466c44f7511f72118ccdc4bede63c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 04:02:19 np0005558241 charming_noether[378811]: 167 167
Dec 13 04:02:19 np0005558241 systemd[1]: libpod-e36ef07a80975f98e6324cba2e016989ff2466c44f7511f72118ccdc4bede63c.scope: Deactivated successfully.
Dec 13 04:02:19 np0005558241 podman[378777]: 2025-12-13 09:02:19.995789259 +0000 UTC m=+0.162443347 container died e36ef07a80975f98e6324cba2e016989ff2466c44f7511f72118ccdc4bede63c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 04:02:20 np0005558241 nova_compute[248510]: 2025-12-13 09:02:20.001 248514 DEBUG nova.storage.rbd_utils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 04:02:20 np0005558241 systemd[1]: var-lib-containers-storage-overlay-dc5cc6e2fe46080668caa39c8995b8f0b177f117aa86276df49da973a1a87f72-merged.mount: Deactivated successfully.
Dec 13 04:02:20 np0005558241 podman[378777]: 2025-12-13 09:02:20.039387296 +0000 UTC m=+0.206041384 container remove e36ef07a80975f98e6324cba2e016989ff2466c44f7511f72118ccdc4bede63c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:02:20 np0005558241 systemd[1]: libpod-conmon-e36ef07a80975f98e6324cba2e016989ff2466c44f7511f72118ccdc4bede63c.scope: Deactivated successfully.
Dec 13 04:02:20 np0005558241 nova_compute[248510]: 2025-12-13 09:02:20.098 248514 DEBUG nova.objects.instance [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid d1949b92-7b16-4a9d-b033-d0a6df7a9f2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:02:20 np0005558241 nova_compute[248510]: 2025-12-13 09:02:20.117 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 04:02:20 np0005558241 nova_compute[248510]: 2025-12-13 09:02:20.117 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Ensure instance console log exists: /var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:02:20 np0005558241 nova_compute[248510]: 2025-12-13 09:02:20.118 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:02:20 np0005558241 nova_compute[248510]: 2025-12-13 09:02:20.118 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:02:20 np0005558241 nova_compute[248510]: 2025-12-13 09:02:20.118 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:02:20 np0005558241 podman[378908]: 2025-12-13 09:02:20.204626411 +0000 UTC m=+0.044016548 container create e0883d3ae6520e4647f39e273d85e2f91813b6c180b096c46970a5528b748969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khorana, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:02:20 np0005558241 systemd[1]: Started libpod-conmon-e0883d3ae6520e4647f39e273d85e2f91813b6c180b096c46970a5528b748969.scope.
Dec 13 04:02:20 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:02:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2b211186ee4f79a5b89c14c3937a4657910f7e09b617fce813b3e22c952b2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2b211186ee4f79a5b89c14c3937a4657910f7e09b617fce813b3e22c952b2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2b211186ee4f79a5b89c14c3937a4657910f7e09b617fce813b3e22c952b2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2b211186ee4f79a5b89c14c3937a4657910f7e09b617fce813b3e22c952b2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:20 np0005558241 podman[378908]: 2025-12-13 09:02:20.184031277 +0000 UTC m=+0.023421404 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:02:20 np0005558241 podman[378908]: 2025-12-13 09:02:20.298427017 +0000 UTC m=+0.137817154 container init e0883d3ae6520e4647f39e273d85e2f91813b6c180b096c46970a5528b748969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khorana, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:02:20 np0005558241 podman[378908]: 2025-12-13 09:02:20.315238179 +0000 UTC m=+0.154628296 container start e0883d3ae6520e4647f39e273d85e2f91813b6c180b096c46970a5528b748969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:02:20 np0005558241 podman[378908]: 2025-12-13 09:02:20.318669542 +0000 UTC m=+0.158059669 container attach e0883d3ae6520e4647f39e273d85e2f91813b6c180b096c46970a5528b748969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khorana, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:02:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:02:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3118914386' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:02:20 np0005558241 nova_compute[248510]: 2025-12-13 09:02:20.377 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:02:20 np0005558241 nova_compute[248510]: 2025-12-13 09:02:20.409 248514 DEBUG nova.storage.rbd_utils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:02:20 np0005558241 nova_compute[248510]: 2025-12-13 09:02:20.415 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:02:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3037: 321 pgs: 321 active+clean; 60 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Dec 13 04:02:20 np0005558241 focused_khorana[378924]: {
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:    "0": [
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:        {
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "devices": [
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "/dev/loop3"
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            ],
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "lv_name": "ceph_lv0",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "lv_size": "21470642176",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "name": "ceph_lv0",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "tags": {
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.cluster_name": "ceph",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.crush_device_class": "",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.encrypted": "0",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.objectstore": "bluestore",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.osd_id": "0",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.type": "block",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.vdo": "0",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.with_tpm": "0"
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            },
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "type": "block",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "vg_name": "ceph_vg0"
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:        }
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:    ],
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:    "1": [
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:        {
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "devices": [
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "/dev/loop4"
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            ],
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "lv_name": "ceph_lv1",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "lv_size": "21470642176",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "name": "ceph_lv1",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "tags": {
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.cluster_name": "ceph",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.crush_device_class": "",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.encrypted": "0",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.objectstore": "bluestore",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.osd_id": "1",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.type": "block",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.vdo": "0",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.with_tpm": "0"
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            },
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "type": "block",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "vg_name": "ceph_vg1"
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:        }
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:    ],
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:    "2": [
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:        {
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "devices": [
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "/dev/loop5"
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            ],
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "lv_name": "ceph_lv2",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "lv_size": "21470642176",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "name": "ceph_lv2",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "tags": {
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.cluster_name": "ceph",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.crush_device_class": "",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.encrypted": "0",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.objectstore": "bluestore",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.osd_id": "2",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.type": "block",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.vdo": "0",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:                "ceph.with_tpm": "0"
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            },
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "type": "block",
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:            "vg_name": "ceph_vg2"
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:        }
Dec 13 04:02:20 np0005558241 focused_khorana[378924]:    ]
Dec 13 04:02:20 np0005558241 focused_khorana[378924]: }
Dec 13 04:02:20 np0005558241 systemd[1]: libpod-e0883d3ae6520e4647f39e273d85e2f91813b6c180b096c46970a5528b748969.scope: Deactivated successfully.
Dec 13 04:02:20 np0005558241 podman[378908]: 2025-12-13 09:02:20.609571113 +0000 UTC m=+0.448961220 container died e0883d3ae6520e4647f39e273d85e2f91813b6c180b096c46970a5528b748969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 04:02:20 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6c2b211186ee4f79a5b89c14c3937a4657910f7e09b617fce813b3e22c952b2d-merged.mount: Deactivated successfully.
Dec 13 04:02:20 np0005558241 podman[378908]: 2025-12-13 09:02:20.665441701 +0000 UTC m=+0.504831818 container remove e0883d3ae6520e4647f39e273d85e2f91813b6c180b096c46970a5528b748969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:02:20 np0005558241 systemd[1]: libpod-conmon-e0883d3ae6520e4647f39e273d85e2f91813b6c180b096c46970a5528b748969.scope: Deactivated successfully.
Dec 13 04:02:20 np0005558241 nova_compute[248510]: 2025-12-13 09:02:20.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:02:20 np0005558241 nova_compute[248510]: 2025-12-13 09:02:20.800 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:20 np0005558241 nova_compute[248510]: 2025-12-13 09:02:20.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:20 np0005558241 nova_compute[248510]: 2025-12-13 09:02:20.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:20 np0005558241 nova_compute[248510]: 2025-12-13 09:02:20.802 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:02:20 np0005558241 nova_compute[248510]: 2025-12-13 09:02:20.802 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:02:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:02:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:02:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/76311328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.030 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.033 248514 DEBUG nova.virt.libvirt.vif [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T09:01:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1657427533',display_name='tempest-TestNetworkAdvancedServerOps-server-1657427533',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1657427533',id=130,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIu2Mio4iiKCPHCFWqnjQXxmeR9VuZ/sJMKRi7ThHNLpfALJX8ZDxWjNQlqcAjJCFa+dRlCd52p375xJuviq4QzkIt2R0aIT9v0cJCRdagDbQWGjbmiGhE29U66OFfUYVw==',key_name='tempest-TestNetworkAdvancedServerOps-2136416273',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:01:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a2gq0cdy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:02:18Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=18506534-de17-4e42-87c7-b2546619f4d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.034 248514 DEBUG nova.network.os_vif_util [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.035 248514 DEBUG nova.network.os_vif_util [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.041 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:02:21 np0005558241 nova_compute[248510]:  <uuid>18506534-de17-4e42-87c7-b2546619f4d4</uuid>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:  <name>instance-00000082</name>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1657427533</nova:name>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:02:19</nova:creationTime>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:02:21 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:        <nova:user uuid="a5fd410579fa429ba0f7f680590cd86a">tempest-TestNetworkAdvancedServerOps-1969761839-project-member</nova:user>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:        <nova:project uuid="77d40177a4804671aa9c5da343bc2ed4">tempest-TestNetworkAdvancedServerOps-1969761839</nova:project>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="c10f1898-20b3-4bc9-8a36-2ee01b39c9ce"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:        <nova:port uuid="698673ab-115a-49aa-b3d5-68392d28aa81">
Dec 13 04:02:21 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <entry name="serial">18506534-de17-4e42-87c7-b2546619f4d4</entry>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <entry name="uuid">18506534-de17-4e42-87c7-b2546619f4d4</entry>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/18506534-de17-4e42-87c7-b2546619f4d4_disk">
Dec 13 04:02:21 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:02:21 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/18506534-de17-4e42-87c7-b2546619f4d4_disk.config">
Dec 13 04:02:21 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:02:21 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:7b:34:14"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <target dev="tap698673ab-11"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/console.log" append="off"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:02:21 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:02:21 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:02:21 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:02:21 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.050 248514 DEBUG nova.virt.libvirt.vif [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T09:01:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1657427533',display_name='tempest-TestNetworkAdvancedServerOps-server-1657427533',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1657427533',id=130,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIu2Mio4iiKCPHCFWqnjQXxmeR9VuZ/sJMKRi7ThHNLpfALJX8ZDxWjNQlqcAjJCFa+dRlCd52p375xJuviq4QzkIt2R0aIT9v0cJCRdagDbQWGjbmiGhE29U66OFfUYVw==',key_name='tempest-TestNetworkAdvancedServerOps-2136416273',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:01:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a2gq0cdy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:02:18Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=18506534-de17-4e42-87c7-b2546619f4d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.051 248514 DEBUG nova.network.os_vif_util [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.052 248514 DEBUG nova.network.os_vif_util [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.053 248514 DEBUG os_vif [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.054 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.055 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.056 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.060 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.060 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap698673ab-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.061 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap698673ab-11, col_values=(('external_ids', {'iface-id': '698673ab-115a-49aa-b3d5-68392d28aa81', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:34:14', 'vm-uuid': '18506534-de17-4e42-87c7-b2546619f4d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.064 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:21 np0005558241 NetworkManager[50376]: <info>  [1765616541.0652] manager: (tap698673ab-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/543)
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.067 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.073 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.074 248514 INFO os_vif [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11')#033[00m
Dec 13 04:02:21 np0005558241 podman[379071]: 2025-12-13 09:02:21.167242814 +0000 UTC m=+0.050701572 container create 9b765fec933d9977af6d25a5cd242f46c256a7d3e8452049a172868d3e5ce54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 04:02:21 np0005558241 systemd[1]: Started libpod-conmon-9b765fec933d9977af6d25a5cd242f46c256a7d3e8452049a172868d3e5ce54f.scope.
Dec 13 04:02:21 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:02:21 np0005558241 podman[379071]: 2025-12-13 09:02:21.243275155 +0000 UTC m=+0.126733923 container init 9b765fec933d9977af6d25a5cd242f46c256a7d3e8452049a172868d3e5ce54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khayyam, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:02:21 np0005558241 podman[379071]: 2025-12-13 09:02:21.150471033 +0000 UTC m=+0.033929811 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:02:21 np0005558241 podman[379071]: 2025-12-13 09:02:21.253032724 +0000 UTC m=+0.136491482 container start 9b765fec933d9977af6d25a5cd242f46c256a7d3e8452049a172868d3e5ce54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khayyam, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:02:21 np0005558241 podman[379071]: 2025-12-13 09:02:21.257444522 +0000 UTC m=+0.140903300 container attach 9b765fec933d9977af6d25a5cd242f46c256a7d3e8452049a172868d3e5ce54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khayyam, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:02:21 np0005558241 beautiful_khayyam[379088]: 167 167
Dec 13 04:02:21 np0005558241 systemd[1]: libpod-9b765fec933d9977af6d25a5cd242f46c256a7d3e8452049a172868d3e5ce54f.scope: Deactivated successfully.
Dec 13 04:02:21 np0005558241 podman[379071]: 2025-12-13 09:02:21.260186159 +0000 UTC m=+0.143644927 container died 9b765fec933d9977af6d25a5cd242f46c256a7d3e8452049a172868d3e5ce54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khayyam, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0001823864093692832 of space, bias 1.0, pg target 0.05471592281078496 quantized to 32 (current 32)
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006698314011962477 of space, bias 1.0, pg target 0.2009494203588743 quantized to 32 (current 32)
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.725026712866588e-07 of space, bias 4.0, pg target 0.0006870032055439905 quantized to 16 (current 32)
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:02:21 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c3c8777fdae26bddf4684151dbd60a9b58d05630434eed191be2ddb1437c4c10-merged.mount: Deactivated successfully.
Dec 13 04:02:21 np0005558241 podman[379071]: 2025-12-13 09:02:21.303704604 +0000 UTC m=+0.187163362 container remove 9b765fec933d9977af6d25a5cd242f46c256a7d3e8452049a172868d3e5ce54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khayyam, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:02:21 np0005558241 systemd[1]: libpod-conmon-9b765fec933d9977af6d25a5cd242f46c256a7d3e8452049a172868d3e5ce54f.scope: Deactivated successfully.
Dec 13 04:02:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:02:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/479693080' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.433 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:02:21 np0005558241 podman[379113]: 2025-12-13 09:02:21.482557762 +0000 UTC m=+0.044838339 container create adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:02:21 np0005558241 systemd[1]: Started libpod-conmon-adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527.scope.
Dec 13 04:02:21 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:02:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f687ffa81366ff0ff052c299544c551f7aab40a8b8ad468601251c45494d6506/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f687ffa81366ff0ff052c299544c551f7aab40a8b8ad468601251c45494d6506/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f687ffa81366ff0ff052c299544c551f7aab40a8b8ad468601251c45494d6506/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f687ffa81366ff0ff052c299544c551f7aab40a8b8ad468601251c45494d6506/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:21 np0005558241 podman[379113]: 2025-12-13 09:02:21.464655634 +0000 UTC m=+0.026936231 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:02:21 np0005558241 podman[379113]: 2025-12-13 09:02:21.560970801 +0000 UTC m=+0.123251408 container init adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_blackwell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:02:21 np0005558241 podman[379113]: 2025-12-13 09:02:21.569215823 +0000 UTC m=+0.131496400 container start adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_blackwell, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:02:21 np0005558241 podman[379113]: 2025-12-13 09:02:21.572916494 +0000 UTC m=+0.135197071 container attach adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.598 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.599 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.600 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No VIF found with MAC fa:16:3e:7b:34:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.600 248514 INFO nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Using config drive#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.625 248514 DEBUG nova.storage.rbd_utils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.654 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.661 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.662 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.707 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'keypairs' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.837 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.839 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3512MB free_disk=59.97734020277858GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.839 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.840 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.912 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 18506534-de17-4e42-87c7-b2546619f4d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.912 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance d1949b92-7b16-4a9d-b033-d0a6df7a9f2f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.913 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.913 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:02:21 np0005558241 nova_compute[248510]: 2025-12-13 09:02:21.971 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:02:22 np0005558241 lvm[379246]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:02:22 np0005558241 lvm[379247]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:02:22 np0005558241 lvm[379246]: VG ceph_vg0 finished
Dec 13 04:02:22 np0005558241 lvm[379247]: VG ceph_vg1 finished
Dec 13 04:02:22 np0005558241 lvm[379249]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:02:22 np0005558241 lvm[379249]: VG ceph_vg2 finished
Dec 13 04:02:22 np0005558241 trusting_blackwell[379130]: {}
Dec 13 04:02:22 np0005558241 systemd[1]: libpod-adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527.scope: Deactivated successfully.
Dec 13 04:02:22 np0005558241 systemd[1]: libpod-adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527.scope: Consumed 1.303s CPU time.
Dec 13 04:02:22 np0005558241 podman[379113]: 2025-12-13 09:02:22.350211759 +0000 UTC m=+0.912492346 container died adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_blackwell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:02:22 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f687ffa81366ff0ff052c299544c551f7aab40a8b8ad468601251c45494d6506-merged.mount: Deactivated successfully.
Dec 13 04:02:22 np0005558241 podman[379113]: 2025-12-13 09:02:22.404485958 +0000 UTC m=+0.966766525 container remove adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_blackwell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 04:02:22 np0005558241 systemd[1]: libpod-conmon-adf9153c59dbaa81a8189aeafe13600ddc4e5588251855b1def81eb5bfba7527.scope: Deactivated successfully.
Dec 13 04:02:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:02:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:02:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:02:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:02:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3038: 321 pgs: 321 active+clean; 60 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 33 KiB/s wr, 21 op/s
Dec 13 04:02:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:02:22 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3919006410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:02:22 np0005558241 nova_compute[248510]: 2025-12-13 09:02:22.604 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:02:22 np0005558241 nova_compute[248510]: 2025-12-13 09:02:22.617 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:02:22 np0005558241 nova_compute[248510]: 2025-12-13 09:02:22.652 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:02:22 np0005558241 nova_compute[248510]: 2025-12-13 09:02:22.676 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:02:22 np0005558241 nova_compute[248510]: 2025-12-13 09:02:22.676 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:22 np0005558241 nova_compute[248510]: 2025-12-13 09:02:22.788 248514 DEBUG nova.network.neutron [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Successfully created port: bf1a0f02-8913-41ae-aa00-ab927d45e18a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:02:22 np0005558241 nova_compute[248510]: 2025-12-13 09:02:22.847 248514 INFO nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Creating config drive at /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config#033[00m
Dec 13 04:02:22 np0005558241 nova_compute[248510]: 2025-12-13 09:02:22.854 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwa2ds191 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:02:22 np0005558241 nova_compute[248510]: 2025-12-13 09:02:22.994 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:23 np0005558241 nova_compute[248510]: 2025-12-13 09:02:23.018 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwa2ds191" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:02:23 np0005558241 nova_compute[248510]: 2025-12-13 09:02:23.051 248514 DEBUG nova.storage.rbd_utils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 18506534-de17-4e42-87c7-b2546619f4d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:02:23 np0005558241 nova_compute[248510]: 2025-12-13 09:02:23.055 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config 18506534-de17-4e42-87c7-b2546619f4d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:02:23 np0005558241 nova_compute[248510]: 2025-12-13 09:02:23.232 248514 DEBUG oslo_concurrency.processutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config 18506534-de17-4e42-87c7-b2546619f4d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:02:23 np0005558241 nova_compute[248510]: 2025-12-13 09:02:23.234 248514 INFO nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Deleting local config drive /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4/disk.config because it was imported into RBD.#033[00m
Dec 13 04:02:23 np0005558241 kernel: tap698673ab-11: entered promiscuous mode
Dec 13 04:02:23 np0005558241 NetworkManager[50376]: <info>  [1765616543.3095] manager: (tap698673ab-11): new Tun device (/org/freedesktop/NetworkManager/Devices/544)
Dec 13 04:02:23 np0005558241 systemd-udevd[379245]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:02:23 np0005558241 nova_compute[248510]: 2025-12-13 09:02:23.310 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:23Z|01327|binding|INFO|Claiming lport 698673ab-115a-49aa-b3d5-68392d28aa81 for this chassis.
Dec 13 04:02:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:23Z|01328|binding|INFO|698673ab-115a-49aa-b3d5-68392d28aa81: Claiming fa:16:3e:7b:34:14 10.100.0.5
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.320 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:34:14 10.100.0.5'], port_security=['fa:16:3e:7b:34:14 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '18506534-de17-4e42-87c7-b2546619f4d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '5', 'neutron:security_group_ids': '250e0b26-9d3f-404b-8b3e-f79706be2a3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db45c681-c79f-432b-bd2c-bbf3bd1f2153, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=698673ab-115a-49aa-b3d5-68392d28aa81) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.322 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 698673ab-115a-49aa-b3d5-68392d28aa81 in datapath 44b4664e-9eef-4d04-bcdb-68869f16c46a bound to our chassis#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.323 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 44b4664e-9eef-4d04-bcdb-68869f16c46a#033[00m
Dec 13 04:02:23 np0005558241 NetworkManager[50376]: <info>  [1765616543.3274] device (tap698673ab-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:02:23 np0005558241 NetworkManager[50376]: <info>  [1765616543.3283] device (tap698673ab-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.340 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3c1dfd-22a5-4617-b700-9340895ece2a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.341 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap44b4664e-91 in ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:02:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:23Z|01329|binding|INFO|Setting lport 698673ab-115a-49aa-b3d5-68392d28aa81 up in Southbound
Dec 13 04:02:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:23Z|01330|binding|INFO|Setting lport 698673ab-115a-49aa-b3d5-68392d28aa81 ovn-installed in OVS
Dec 13 04:02:23 np0005558241 nova_compute[248510]: 2025-12-13 09:02:23.344 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.344 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap44b4664e-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.345 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[182b9115-571c-42e5-8602-1e8f4357e442]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.346 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[62f85eeb-2b28-4ce9-825e-09e22ea43159]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:23 np0005558241 nova_compute[248510]: 2025-12-13 09:02:23.349 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.367 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[a89701b1-39db-40ba-8f5a-8689aabd30d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:23 np0005558241 systemd-machined[210538]: New machine qemu-159-instance-00000082.
Dec 13 04:02:23 np0005558241 systemd[1]: Started Virtual Machine qemu-159-instance-00000082.
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.383 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d25093eb-50a4-4c58-b126-6a04f561845c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.426 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b8fb64c5-650a-4204-89d7-cc6cfd76bad5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.434 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7da829ac-c552-47c3-929f-8d5551dbb884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:23 np0005558241 NetworkManager[50376]: <info>  [1765616543.4384] manager: (tap44b4664e-90): new Veth device (/org/freedesktop/NetworkManager/Devices/545)
Dec 13 04:02:23 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:02:23 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:02:23 np0005558241 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.477 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f1cec57c-bba6-4880-9fcb-74accca554be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.480 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[43c03123-d1d4-4391-b5ed-26151f4fedc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:23 np0005558241 NetworkManager[50376]: <info>  [1765616543.5156] device (tap44b4664e-90): carrier: link connected
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.526 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[833e3c82-628d-463c-b75d-5955a8f8cc90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.555 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c175b0-0223-40c2-921c-53e99ec6a5f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44b4664e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:4b:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 384], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912073, 'reachable_time': 38150, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379374, 'error': None, 'target': 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.576 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd9efa0-950f-4201-92b3-3d13a35d79fb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe93:4b6f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 912073, 'tstamp': 912073}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379375, 'error': None, 'target': 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.602 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[823f69d1-b017-46bc-89b7-e57d75ac9866]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44b4664e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:4b:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 384], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912073, 'reachable_time': 38150, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379376, 'error': None, 'target': 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.650 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1f050553-fa02-4bfe-b248-b9bdada9aa02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:23 np0005558241 nova_compute[248510]: 2025-12-13 09:02:23.678 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.736 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6dc273-43c5-425f-9b6a-e6e66ee7d0e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.738 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44b4664e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.739 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.740 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44b4664e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:02:23 np0005558241 NetworkManager[50376]: <info>  [1765616543.7439] manager: (tap44b4664e-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/546)
Dec 13 04:02:23 np0005558241 kernel: tap44b4664e-90: entered promiscuous mode
Dec 13 04:02:23 np0005558241 nova_compute[248510]: 2025-12-13 09:02:23.743 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:23 np0005558241 nova_compute[248510]: 2025-12-13 09:02:23.747 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.750 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap44b4664e-90, col_values=(('external_ids', {'iface-id': '88fafa32-b4a3-42fd-afa1-df6ba2200ad6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:02:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:23Z|01331|binding|INFO|Releasing lport 88fafa32-b4a3-42fd-afa1-df6ba2200ad6 from this chassis (sb_readonly=0)
Dec 13 04:02:23 np0005558241 nova_compute[248510]: 2025-12-13 09:02:23.752 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:23 np0005558241 nova_compute[248510]: 2025-12-13 09:02:23.782 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.784 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/44b4664e-9eef-4d04-bcdb-68869f16c46a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/44b4664e-9eef-4d04-bcdb-68869f16c46a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.785 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[973cb458-f744-4d73-b9d8-34cd01b3d6f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.786 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-44b4664e-9eef-4d04-bcdb-68869f16c46a
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/44b4664e-9eef-4d04-bcdb-68869f16c46a.pid.haproxy
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 44b4664e-9eef-4d04-bcdb-68869f16c46a
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:02:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:23.787 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'env', 'PROCESS_TAG=haproxy-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/44b4664e-9eef-4d04-bcdb-68869f16c46a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:02:23 np0005558241 nova_compute[248510]: 2025-12-13 09:02:23.876 248514 DEBUG nova.compute.manager [req-d7adce1e-0f0a-45ec-a0e8-c4485dc84daa req-a02eab75-f4c2-449d-b392-a4e8ff5651d1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:02:23 np0005558241 nova_compute[248510]: 2025-12-13 09:02:23.885 248514 DEBUG oslo_concurrency.lockutils [req-d7adce1e-0f0a-45ec-a0e8-c4485dc84daa req-a02eab75-f4c2-449d-b392-a4e8ff5651d1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:23 np0005558241 nova_compute[248510]: 2025-12-13 09:02:23.890 248514 DEBUG oslo_concurrency.lockutils [req-d7adce1e-0f0a-45ec-a0e8-c4485dc84daa req-a02eab75-f4c2-449d-b392-a4e8ff5651d1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:23 np0005558241 nova_compute[248510]: 2025-12-13 09:02:23.891 248514 DEBUG oslo_concurrency.lockutils [req-d7adce1e-0f0a-45ec-a0e8-c4485dc84daa req-a02eab75-f4c2-449d-b392-a4e8ff5651d1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:23 np0005558241 nova_compute[248510]: 2025-12-13 09:02:23.891 248514 DEBUG nova.compute.manager [req-d7adce1e-0f0a-45ec-a0e8-c4485dc84daa req-a02eab75-f4c2-449d-b392-a4e8ff5651d1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] No waiting events found dispatching network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:02:23 np0005558241 nova_compute[248510]: 2025-12-13 09:02:23.891 248514 WARNING nova.compute.manager [req-d7adce1e-0f0a-45ec-a0e8-c4485dc84daa req-a02eab75-f4c2-449d-b392-a4e8ff5651d1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received unexpected event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.133 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 18506534-de17-4e42-87c7-b2546619f4d4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.135 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616544.1326709, 18506534-de17-4e42-87c7-b2546619f4d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.136 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.138 248514 DEBUG nova.compute.manager [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.139 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.143 248514 INFO nova.virt.libvirt.driver [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance spawned successfully.#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.144 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:02:24 np0005558241 podman[379449]: 2025-12-13 09:02:24.212610577 +0000 UTC m=+0.052308412 container create 77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.220 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.229 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.230 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.230 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.231 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.231 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.232 248514 DEBUG nova.virt.libvirt.driver [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.237 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:02:24 np0005558241 systemd[1]: Started libpod-conmon-77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066.scope.
Dec 13 04:02:24 np0005558241 podman[379449]: 2025-12-13 09:02:24.182464459 +0000 UTC m=+0.022162314 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.282 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.283 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616544.1345015, 18506534-de17-4e42-87c7-b2546619f4d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.283 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] VM Started (Lifecycle Event)#033[00m
Dec 13 04:02:24 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.307 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:02:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a218a85abcbb7ba749a23f029096ade87b462fcb9dae89324b66eda6c3873b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.314 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.320 248514 DEBUG nova.compute.manager [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:02:24 np0005558241 podman[379449]: 2025-12-13 09:02:24.325044469 +0000 UTC m=+0.164742324 container init 77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 13 04:02:24 np0005558241 podman[379449]: 2025-12-13 09:02:24.331094487 +0000 UTC m=+0.170792322 container start 77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:02:24 np0005558241 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[379464]: [NOTICE]   (379468) : New worker (379470) forked
Dec 13 04:02:24 np0005558241 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[379464]: [NOTICE]   (379468) : Loading success.
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.358 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.401 248514 DEBUG oslo_concurrency.lockutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.402 248514 DEBUG oslo_concurrency.lockutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.403 248514 DEBUG nova.objects.instance [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.444 248514 DEBUG nova.network.neutron [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Successfully updated port: bf1a0f02-8913-41ae-aa00-ab927d45e18a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.462 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.463 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.463 248514 DEBUG nova.network.neutron [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:02:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3039: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 3.6 MiB/s wr, 92 op/s
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.475 248514 DEBUG oslo_concurrency.lockutils [None req-0a28dc91-efe0-4096-8628-f2b10d5c4fc5 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.598 248514 DEBUG nova.network.neutron [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:02:24 np0005558241 nova_compute[248510]: 2025-12-13 09:02:24.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:02:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.006 248514 DEBUG nova.compute.manager [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.006 248514 DEBUG oslo_concurrency.lockutils [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.006 248514 DEBUG oslo_concurrency.lockutils [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.006 248514 DEBUG oslo_concurrency.lockutils [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.007 248514 DEBUG nova.compute.manager [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] No waiting events found dispatching network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.007 248514 WARNING nova.compute.manager [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received unexpected event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.007 248514 DEBUG nova.compute.manager [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received event network-changed-bf1a0f02-8913-41ae-aa00-ab927d45e18a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.007 248514 DEBUG nova.compute.manager [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Refreshing instance network info cache due to event network-changed-bf1a0f02-8913-41ae-aa00-ab927d45e18a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.007 248514 DEBUG oslo_concurrency.lockutils [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.050 248514 DEBUG nova.network.neutron [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Updating instance_info_cache with network_info: [{"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.065 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.087 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.087 248514 DEBUG nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Instance network_info: |[{"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.088 248514 DEBUG oslo_concurrency.lockutils [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.088 248514 DEBUG nova.network.neutron [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Refreshing network info cache for port bf1a0f02-8913-41ae-aa00-ab927d45e18a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.091 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Start _get_guest_xml network_info=[{"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.095 248514 WARNING nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.100 248514 DEBUG nova.virt.libvirt.host [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.101 248514 DEBUG nova.virt.libvirt.host [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.104 248514 DEBUG nova.virt.libvirt.host [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.104 248514 DEBUG nova.virt.libvirt.host [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.105 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.105 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.105 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.105 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.106 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.106 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.106 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.106 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.107 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.107 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.107 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.107 248514 DEBUG nova.virt.hardware [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.110 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:02:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3040: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 3.6 MiB/s wr, 92 op/s
Dec 13 04:02:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:02:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2428243478' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.711 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.734 248514 DEBUG nova.storage.rbd_utils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:02:26 np0005558241 nova_compute[248510]: 2025-12-13 09:02:26.744 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:02:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:02:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3184419748' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.292 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.294 248514 DEBUG nova.virt.libvirt.vif [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:02:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1513004833',display_name='tempest-TestNetworkBasicOps-server-1513004833',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1513004833',id=131,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI8dVZ4XrPNWO5yi7otU+kwc72UZys9k+bPmo3tsYqKE1ENZ9dHjFLyBJxupI7JFtVdj7CesXySUu0b7xutQxOWMSQgMIWptsgkbrjrKxEGVeJHg8i+xoVF0hfGKec7bVQ==',key_name='tempest-TestNetworkBasicOps-61860158',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-y8sla6xu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:02:19Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=d1949b92-7b16-4a9d-b033-d0a6df7a9f2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.294 248514 DEBUG nova.network.os_vif_util [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.295 248514 DEBUG nova.network.os_vif_util [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:41:b6,bridge_name='br-int',has_traffic_filtering=True,id=bf1a0f02-8913-41ae-aa00-ab927d45e18a,network=Network(6e40673f-327e-4eb6-92f2-18c595696258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1a0f02-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.296 248514 DEBUG nova.objects.instance [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid d1949b92-7b16-4a9d-b033-d0a6df7a9f2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.316 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:02:27 np0005558241 nova_compute[248510]:  <uuid>d1949b92-7b16-4a9d-b033-d0a6df7a9f2f</uuid>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:  <name>instance-00000083</name>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkBasicOps-server-1513004833</nova:name>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:02:26</nova:creationTime>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:02:27 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:        <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:        <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:        <nova:port uuid="bf1a0f02-8913-41ae-aa00-ab927d45e18a">
Dec 13 04:02:27 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <entry name="serial">d1949b92-7b16-4a9d-b033-d0a6df7a9f2f</entry>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <entry name="uuid">d1949b92-7b16-4a9d-b033-d0a6df7a9f2f</entry>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk">
Dec 13 04:02:27 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:02:27 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk.config">
Dec 13 04:02:27 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:02:27 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:eb:41:b6"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <target dev="tapbf1a0f02-89"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f/console.log" append="off"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:02:27 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:02:27 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:02:27 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:02:27 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.317 248514 DEBUG nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Preparing to wait for external event network-vif-plugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.317 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.318 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.318 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.319 248514 DEBUG nova.virt.libvirt.vif [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:02:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1513004833',display_name='tempest-TestNetworkBasicOps-server-1513004833',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1513004833',id=131,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI8dVZ4XrPNWO5yi7otU+kwc72UZys9k+bPmo3tsYqKE1ENZ9dHjFLyBJxupI7JFtVdj7CesXySUu0b7xutQxOWMSQgMIWptsgkbrjrKxEGVeJHg8i+xoVF0hfGKec7bVQ==',key_name='tempest-TestNetworkBasicOps-61860158',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-y8sla6xu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:02:19Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=d1949b92-7b16-4a9d-b033-d0a6df7a9f2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.319 248514 DEBUG nova.network.os_vif_util [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.319 248514 DEBUG nova.network.os_vif_util [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:41:b6,bridge_name='br-int',has_traffic_filtering=True,id=bf1a0f02-8913-41ae-aa00-ab927d45e18a,network=Network(6e40673f-327e-4eb6-92f2-18c595696258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1a0f02-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.320 248514 DEBUG os_vif [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:41:b6,bridge_name='br-int',has_traffic_filtering=True,id=bf1a0f02-8913-41ae-aa00-ab927d45e18a,network=Network(6e40673f-327e-4eb6-92f2-18c595696258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1a0f02-89') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.320 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.321 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.321 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.325 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.325 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf1a0f02-89, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.326 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf1a0f02-89, col_values=(('external_ids', {'iface-id': 'bf1a0f02-8913-41ae-aa00-ab927d45e18a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:41:b6', 'vm-uuid': 'd1949b92-7b16-4a9d-b033-d0a6df7a9f2f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.328 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:27 np0005558241 NetworkManager[50376]: <info>  [1765616547.3292] manager: (tapbf1a0f02-89): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/547)
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.329 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.335 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.336 248514 INFO os_vif [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:41:b6,bridge_name='br-int',has_traffic_filtering=True,id=bf1a0f02-8913-41ae-aa00-ab927d45e18a,network=Network(6e40673f-327e-4eb6-92f2-18c595696258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1a0f02-89')#033[00m
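[annotation] The plug sequence above is two idempotent ovsdbapp transactions: `AddBridgeCommand(may_exist=True)` on `br-int` (which is why the first txn "caused no change"), then `AddPortCommand` plus a `DbSetCommand` writing the Neutron port ID into the Interface's `external_ids` so ovn-controller can match and claim the port. A toy in-memory model of that `may_exist` behaviour (illustrative only; the method names echo the log, not the real ovsdbapp IDL API):

```python
# Toy in-memory OVSDB: bridges map to {port_name: external_ids}. Models only
# the may_exist semantics visible in the log, nothing of the real OVSDB IDL.
class FakeOvsdb:
    def __init__(self):
        self.bridges = {}

    def add_bridge(self, name, may_exist=True):
        if name in self.bridges:
            if not may_exist:
                raise ValueError(f"bridge {name} exists")
            return False  # "Transaction caused no change"
        self.bridges[name] = {}
        return True

    def add_port(self, bridge, port, may_exist=True):
        ports = self.bridges[bridge]
        if port in ports:
            if not may_exist:
                raise ValueError(f"port {port} exists")
            return False
        ports[port] = {}
        return True

    def db_set_interface(self, bridge, port, **external_ids):
        # Equivalent of DbSetCommand on table=Interface in the log.
        self.bridges[bridge][port].update(external_ids)
```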
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.422 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.423 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.423 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:eb:41:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.424 248514 INFO nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Using config drive#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.451 248514 DEBUG nova.storage.rbd_utils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.651 248514 DEBUG nova.network.neutron [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Updated VIF entry in instance network info cache for port bf1a0f02-8913-41ae-aa00-ab927d45e18a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.652 248514 DEBUG nova.network.neutron [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Updating instance_info_cache with network_info: [{"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.675 248514 DEBUG oslo_concurrency.lockutils [req-712ec2e7-a9e4-4075-b18e-254e9ddbc344 req-003e94bc-fc58-4ba5-9474-f2d2e5a96ef9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.773 248514 INFO nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Creating config drive at /var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f/disk.config#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.777 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkyw4vcfz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.820 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.930 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkyw4vcfz" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
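[annotation] The config drive is produced by shelling out to mkisofs exactly as the CMD line above shows: the metadata tree in a temp dir becomes an ISO 9660 image with Joliet (`-J`) and Rock Ridge (`-r`) extensions and volume label `config-2`, the label cloud-init probes for. A sketch that assembles the same argv (the helper function is hypothetical; the flags themselves are taken from the log):

```python
def config_drive_cmd(output_path: str, tmp_dir: str, publisher: str) -> list:
    # Flags mirror the mkisofs invocation in the log:
    #   -ldots/-allow-lowercase/-allow-multidot/-l relax ISO 9660 naming,
    #   -J (Joliet) and -r (Rock Ridge) preserve long/lowercase names,
    #   -V config-2 is the volume label the guest searches for.
    return [
        "/usr/bin/mkisofs",
        "-o", output_path,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", publisher,
        "-quiet", "-J", "-r",
        "-V", "config-2",
        tmp_dir,
    ]
```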
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.962 248514 DEBUG nova.storage.rbd_utils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:02:27 np0005558241 nova_compute[248510]: 2025-12-13 09:02:27.968 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f/disk.config d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.016 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.123 248514 DEBUG oslo_concurrency.processutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f/disk.config d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.124 248514 INFO nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Deleting local config drive /var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f/disk.config because it was imported into RBD.#033[00m
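[annotation] Because this host stores disks in Ceph, the config drive is not kept locally: the ISO is pushed into the `vms` pool as `<instance-uuid>_disk.config` via `rbd import --image-format=2`, then the local file is deleted, as the two lines above record. A sketch of that import-then-cleanup flow with the subprocess injected, so it stays runnable without Ceph (the function is hypothetical; only the behaviour mirrors the log):

```python
import os

def import_config_drive_to_rbd(local_path, run_rbd_import):
    """Push the local config drive into RBD, then drop the local copy.

    run_rbd_import is injected in this sketch; in the real driver it is the
    `rbd import --pool vms ... --image-format=2` subprocess from the log.
    """
    run_rbd_import(local_path)
    # "Deleting local config drive ... because it was imported into RBD."
    os.unlink(local_path)
```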
Dec 13 04:02:28 np0005558241 kernel: tapbf1a0f02-89: entered promiscuous mode
Dec 13 04:02:28 np0005558241 NetworkManager[50376]: <info>  [1765616548.1946] manager: (tapbf1a0f02-89): new Tun device (/org/freedesktop/NetworkManager/Devices/548)
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.197 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:28 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:28Z|01332|binding|INFO|Claiming lport bf1a0f02-8913-41ae-aa00-ab927d45e18a for this chassis.
Dec 13 04:02:28 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:28Z|01333|binding|INFO|bf1a0f02-8913-41ae-aa00-ab927d45e18a: Claiming fa:16:3e:eb:41:b6 10.100.0.7
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.208 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:41:b6 10.100.0.7'], port_security=['fa:16:3e:eb:41:b6 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd1949b92-7b16-4a9d-b033-d0a6df7a9f2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e40673f-327e-4eb6-92f2-18c595696258', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2f4c3d2c-26ca-48de-bf5d-37154a8db222', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=873a69d0-ac9b-484c-a75a-e746eafe7ffd, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=bf1a0f02-8913-41ae-aa00-ab927d45e18a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.210 158419 INFO neutron.agent.ovn.metadata.agent [-] Port bf1a0f02-8913-41ae-aa00-ab927d45e18a in datapath 6e40673f-327e-4eb6-92f2-18c595696258 bound to our chassis#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.212 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6e40673f-327e-4eb6-92f2-18c595696258#033[00m
Dec 13 04:02:28 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:28Z|01334|binding|INFO|Setting lport bf1a0f02-8913-41ae-aa00-ab927d45e18a ovn-installed in OVS
Dec 13 04:02:28 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:28Z|01335|binding|INFO|Setting lport bf1a0f02-8913-41ae-aa00-ab927d45e18a up in Southbound
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.224 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.227 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.230 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a73f679e-2994-443a-a90e-7e86f377fbe1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.232 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6e40673f-31 in ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
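[annotation] The metadata agent provisions the network by creating a veth pair: one end (`tap6e40673f-30`) stays in the root namespace, the other (`tap6e40673f-31`) moves into `ovnmeta-<network-id>`. The names in the log are consistent with truncating `"tap" + network_id` to fit the 15-character Linux interface-name limit and suffixing `0`/`1` for the two ends. This is a plausible reconstruction of the naming rule, not code quoted from Neutron:

```python
def veth_pair_names(network_id: str) -> tuple:
    # Linux interface names are capped at 15 chars (IFNAMSIZ - 1).
    # Truncate "tap" + network UUID to 13 chars and append 0 / 1:
    # the "0" end stays in the root namespace, the "1" end is moved
    # into the ovnmeta-<network-id> namespace (assumed rule; it
    # reproduces the tap6e40673f-30 / tap6e40673f-31 names in the log).
    base = ("tap" + network_id)[:13]
    return base + "0", base + "1"
```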
Dec 13 04:02:28 np0005558241 systemd-udevd[379612]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.233 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6e40673f-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.233 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8d8f85aa-6990-4411-ab6e-05efc7d1679b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.234 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[08181d00-8571-436f-a56f-1f41364217c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:28 np0005558241 NetworkManager[50376]: <info>  [1765616548.2500] device (tapbf1a0f02-89): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:02:28 np0005558241 NetworkManager[50376]: <info>  [1765616548.2515] device (tapbf1a0f02-89): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.254 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[922250e9-4039-4b28-b1c9-2a7a6294a143]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:28 np0005558241 systemd-machined[210538]: New machine qemu-160-instance-00000083.
Dec 13 04:02:28 np0005558241 systemd[1]: Started Virtual Machine qemu-160-instance-00000083.
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.273 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3139ec-2797-4d97-b97a-3819c32b903f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.323 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6cbc6eec-0e04-4584-9bb1-e953a36cc357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:28 np0005558241 NetworkManager[50376]: <info>  [1765616548.3334] manager: (tap6e40673f-30): new Veth device (/org/freedesktop/NetworkManager/Devices/549)
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.332 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6225cea4-c17e-48de-88c5-9001463c081a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3041: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 923 KiB/s rd, 3.6 MiB/s wr, 123 op/s
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.608 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[aeea98ca-8a4f-452c-a5e4-df92f4e08331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.614 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d4119f8b-29f8-4a80-ac56-e6f4a7981a7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:28 np0005558241 NetworkManager[50376]: <info>  [1765616548.6407] device (tap6e40673f-30): carrier: link connected
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.649 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[22f55905-ec93-4656-b765-e18cee849405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.666 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[081fc278-4cab-45d5-80e6-34c1daf0a795]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e40673f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:63:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912585, 'reachable_time': 34362, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379689, 'error': None, 'target': 'ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.672 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616548.6713371, d1949b92-7b16-4a9d-b033-d0a6df7a9f2f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.672 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] VM Started (Lifecycle Event)#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.683 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6356280c-50bc-4187-b6a5-3f5ef4774100]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe79:6367'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 912585, 'tstamp': 912585}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379690, 'error': None, 'target': 'ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.699 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[975ed36d-b6f1-412f-afe7-6bb0b95f8ebf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6e40673f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:63:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912585, 'reachable_time': 34362, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379691, 'error': None, 'target': 'ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.702 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.707 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616548.6746345, d1949b92-7b16-4a9d-b033-d0a6df7a9f2f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.707 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.733 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.738 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.741 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[529c3436-59e8-4a28-94c6-f2e3380cd857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.749 248514 DEBUG nova.compute.manager [req-4711cb82-af73-4bff-8df1-626e921e0436 req-a8ab2129-7d3b-49ff-975c-439de54528ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received event network-vif-plugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.750 248514 DEBUG oslo_concurrency.lockutils [req-4711cb82-af73-4bff-8df1-626e921e0436 req-a8ab2129-7d3b-49ff-975c-439de54528ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.750 248514 DEBUG oslo_concurrency.lockutils [req-4711cb82-af73-4bff-8df1-626e921e0436 req-a8ab2129-7d3b-49ff-975c-439de54528ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.750 248514 DEBUG oslo_concurrency.lockutils [req-4711cb82-af73-4bff-8df1-626e921e0436 req-a8ab2129-7d3b-49ff-975c-439de54528ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.751 248514 DEBUG nova.compute.manager [req-4711cb82-af73-4bff-8df1-626e921e0436 req-a8ab2129-7d3b-49ff-975c-439de54528ce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Processing event network-vif-plugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.751 248514 DEBUG nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.756 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.760 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.760 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616548.7544723, d1949b92-7b16-4a9d-b033-d0a6df7a9f2f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.761 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.765 248514 INFO nova.virt.libvirt.driver [-] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Instance spawned successfully.#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.765 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.783 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.790 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.795 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.795 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.796 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.796 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.796 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.797 248514 DEBUG nova.virt.libvirt.driver [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.827 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.832 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7acf1fbe-691d-4397-8299-73a65ac18968]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.833 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e40673f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.833 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.834 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e40673f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.835 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:28 np0005558241 NetworkManager[50376]: <info>  [1765616548.8362] manager: (tap6e40673f-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/550)
Dec 13 04:02:28 np0005558241 kernel: tap6e40673f-30: entered promiscuous mode
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.839 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.840 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6e40673f-30, col_values=(('external_ids', {'iface-id': 'b5b50ff0-6ee9-471e-bc65-c9c2b390db7a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.841 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:28 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:28Z|01336|binding|INFO|Releasing lport b5b50ff0-6ee9-471e-bc65-c9c2b390db7a from this chassis (sb_readonly=0)
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.860 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.861 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6e40673f-327e-4eb6-92f2-18c595696258.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6e40673f-327e-4eb6-92f2-18c595696258.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.863 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[659e7ba8-0232-4e49-8bcd-8bef60771756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.864 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-6e40673f-327e-4eb6-92f2-18c595696258
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/6e40673f-327e-4eb6-92f2-18c595696258.pid.haproxy
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 6e40673f-327e-4eb6-92f2-18c595696258
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:02:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:28.864 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258', 'env', 'PROCESS_TAG=haproxy-6e40673f-327e-4eb6-92f2-18c595696258', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6e40673f-327e-4eb6-92f2-18c595696258.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.870 248514 INFO nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Took 9.46 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.871 248514 DEBUG nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.950 248514 INFO nova.compute.manager [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Took 11.06 seconds to build instance.#033[00m
Dec 13 04:02:28 np0005558241 nova_compute[248510]: 2025-12-13 09:02:28.981 248514 DEBUG oslo_concurrency.lockutils [None req-cc0ea9ce-8ecc-4cac-b86e-45cba5301a45 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:29 np0005558241 podman[379723]: 2025-12-13 09:02:29.267237491 +0000 UTC m=+0.046306065 container create dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:02:29 np0005558241 systemd[1]: Started libpod-conmon-dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7.scope.
Dec 13 04:02:29 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:02:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b3b0cc85753c71ae0af5ca1116bf5693a0188d31c1be920354ca7251186e52/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:29 np0005558241 podman[379723]: 2025-12-13 09:02:29.241148592 +0000 UTC m=+0.020217166 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:02:29 np0005558241 podman[379723]: 2025-12-13 09:02:29.343921648 +0000 UTC m=+0.122990222 container init dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:02:29 np0005558241 podman[379723]: 2025-12-13 09:02:29.349816192 +0000 UTC m=+0.128884756 container start dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 04:02:29 np0005558241 neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258[379738]: [NOTICE]   (379742) : New worker (379744) forked
Dec 13 04:02:29 np0005558241 neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258[379738]: [NOTICE]   (379742) : Loading success.
Dec 13 04:02:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3042: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 162 op/s
Dec 13 04:02:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:02:31 np0005558241 nova_compute[248510]: 2025-12-13 09:02:31.232 248514 DEBUG nova.compute.manager [req-30630ad7-5b74-4c82-a71b-da21a5c712a6 req-55d75383-07bf-4fc5-8320-d0698861615a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received event network-vif-plugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:02:31 np0005558241 nova_compute[248510]: 2025-12-13 09:02:31.233 248514 DEBUG oslo_concurrency.lockutils [req-30630ad7-5b74-4c82-a71b-da21a5c712a6 req-55d75383-07bf-4fc5-8320-d0698861615a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:31 np0005558241 nova_compute[248510]: 2025-12-13 09:02:31.234 248514 DEBUG oslo_concurrency.lockutils [req-30630ad7-5b74-4c82-a71b-da21a5c712a6 req-55d75383-07bf-4fc5-8320-d0698861615a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:31 np0005558241 nova_compute[248510]: 2025-12-13 09:02:31.234 248514 DEBUG oslo_concurrency.lockutils [req-30630ad7-5b74-4c82-a71b-da21a5c712a6 req-55d75383-07bf-4fc5-8320-d0698861615a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:31 np0005558241 nova_compute[248510]: 2025-12-13 09:02:31.234 248514 DEBUG nova.compute.manager [req-30630ad7-5b74-4c82-a71b-da21a5c712a6 req-55d75383-07bf-4fc5-8320-d0698861615a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] No waiting events found dispatching network-vif-plugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:02:31 np0005558241 nova_compute[248510]: 2025-12-13 09:02:31.234 248514 WARNING nova.compute.manager [req-30630ad7-5b74-4c82-a71b-da21a5c712a6 req-55d75383-07bf-4fc5-8320-d0698861615a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received unexpected event network-vif-plugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a for instance with vm_state active and task_state None.#033[00m
Dec 13 04:02:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:31.722 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:02:31 np0005558241 nova_compute[248510]: 2025-12-13 09:02:31.723 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:31.724 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:02:32 np0005558241 nova_compute[248510]: 2025-12-13 09:02:32.328 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3043: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 141 op/s
Dec 13 04:02:32 np0005558241 nova_compute[248510]: 2025-12-13 09:02:32.861 248514 DEBUG nova.compute.manager [req-f0064ff1-fcef-4c89-9553-909cac5376b3 req-285ef6b0-936d-4bc8-bcde-9061c4cadb0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received event network-changed-bf1a0f02-8913-41ae-aa00-ab927d45e18a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:02:32 np0005558241 nova_compute[248510]: 2025-12-13 09:02:32.862 248514 DEBUG nova.compute.manager [req-f0064ff1-fcef-4c89-9553-909cac5376b3 req-285ef6b0-936d-4bc8-bcde-9061c4cadb0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Refreshing instance network info cache due to event network-changed-bf1a0f02-8913-41ae-aa00-ab927d45e18a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:02:32 np0005558241 nova_compute[248510]: 2025-12-13 09:02:32.863 248514 DEBUG oslo_concurrency.lockutils [req-f0064ff1-fcef-4c89-9553-909cac5376b3 req-285ef6b0-936d-4bc8-bcde-9061c4cadb0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:02:32 np0005558241 nova_compute[248510]: 2025-12-13 09:02:32.863 248514 DEBUG oslo_concurrency.lockutils [req-f0064ff1-fcef-4c89-9553-909cac5376b3 req-285ef6b0-936d-4bc8-bcde-9061c4cadb0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:02:32 np0005558241 nova_compute[248510]: 2025-12-13 09:02:32.863 248514 DEBUG nova.network.neutron [req-f0064ff1-fcef-4c89-9553-909cac5376b3 req-285ef6b0-936d-4bc8-bcde-9061c4cadb0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Refreshing network info cache for port bf1a0f02-8913-41ae-aa00-ab927d45e18a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:02:32 np0005558241 nova_compute[248510]: 2025-12-13 09:02:32.998 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:34 np0005558241 nova_compute[248510]: 2025-12-13 09:02:34.152 248514 DEBUG nova.network.neutron [req-f0064ff1-fcef-4c89-9553-909cac5376b3 req-285ef6b0-936d-4bc8-bcde-9061c4cadb0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Updated VIF entry in instance network info cache for port bf1a0f02-8913-41ae-aa00-ab927d45e18a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:02:34 np0005558241 nova_compute[248510]: 2025-12-13 09:02:34.154 248514 DEBUG nova.network.neutron [req-f0064ff1-fcef-4c89-9553-909cac5376b3 req-285ef6b0-936d-4bc8-bcde-9061c4cadb0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Updating instance_info_cache with network_info: [{"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:02:34 np0005558241 nova_compute[248510]: 2025-12-13 09:02:34.179 248514 DEBUG oslo_concurrency.lockutils [req-f0064ff1-fcef-4c89-9553-909cac5376b3 req-285ef6b0-936d-4bc8-bcde-9061c4cadb0c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:02:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3044: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 210 op/s
Dec 13 04:02:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:02:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3045: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 140 op/s
Dec 13 04:02:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:36.726 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:02:37 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:37Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7b:34:14 10.100.0.5
Dec 13 04:02:37 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:37Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:34:14 10.100.0.5
Dec 13 04:02:37 np0005558241 nova_compute[248510]: 2025-12-13 09:02:37.331 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:38 np0005558241 nova_compute[248510]: 2025-12-13 09:02:38.001 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3046: 321 pgs: 321 active+clean; 150 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 977 KiB/s wr, 171 op/s
Dec 13 04:02:38 np0005558241 podman[379755]: 2025-12-13 09:02:38.970771451 +0000 UTC m=+0.052442595 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:02:39 np0005558241 podman[379753]: 2025-12-13 09:02:39.003386059 +0000 UTC m=+0.093586401 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 04:02:39 np0005558241 podman[379754]: 2025-12-13 09:02:39.003669296 +0000 UTC m=+0.091467800 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 04:02:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:02:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:02:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:02:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:02:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:02:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:02:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3047: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 172 op/s
Dec 13 04:02:40 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:40Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:eb:41:b6 10.100.0.7
Dec 13 04:02:40 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:40Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:41:b6 10.100.0.7
Dec 13 04:02:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:02:42 np0005558241 nova_compute[248510]: 2025-12-13 09:02:42.335 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3048: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Dec 13 04:02:43 np0005558241 nova_compute[248510]: 2025-12-13 09:02:43.005 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:43 np0005558241 nova_compute[248510]: 2025-12-13 09:02:43.830 248514 INFO nova.compute.manager [None req-1ad34f63-d60a-4656-a94c-7f6ce32210bd a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Get console output#033[00m
Dec 13 04:02:43 np0005558241 nova_compute[248510]: 2025-12-13 09:02:43.838 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 04:02:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3049: 321 pgs: 321 active+clean; 200 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 197 op/s
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.235 248514 DEBUG nova.compute.manager [req-b38a07fb-d462-4a2d-9708-0dbde24e5bef req-ba596af2-50f8-45ac-8d6d-296218531322 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-changed-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.236 248514 DEBUG nova.compute.manager [req-b38a07fb-d462-4a2d-9708-0dbde24e5bef req-ba596af2-50f8-45ac-8d6d-296218531322 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Refreshing instance network info cache due to event network-changed-698673ab-115a-49aa-b3d5-68392d28aa81. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.236 248514 DEBUG oslo_concurrency.lockutils [req-b38a07fb-d462-4a2d-9708-0dbde24e5bef req-ba596af2-50f8-45ac-8d6d-296218531322 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.237 248514 DEBUG oslo_concurrency.lockutils [req-b38a07fb-d462-4a2d-9708-0dbde24e5bef req-ba596af2-50f8-45ac-8d6d-296218531322 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.237 248514 DEBUG nova.network.neutron [req-b38a07fb-d462-4a2d-9708-0dbde24e5bef req-ba596af2-50f8-45ac-8d6d-296218531322 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Refreshing network info cache for port 698673ab-115a-49aa-b3d5-68392d28aa81 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.324 248514 DEBUG oslo_concurrency.lockutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.324 248514 DEBUG oslo_concurrency.lockutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.325 248514 DEBUG oslo_concurrency.lockutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.325 248514 DEBUG oslo_concurrency.lockutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.325 248514 DEBUG oslo_concurrency.lockutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.327 248514 INFO nova.compute.manager [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Terminating instance#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.328 248514 DEBUG nova.compute.manager [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:02:45 np0005558241 kernel: tap698673ab-11 (unregistering): left promiscuous mode
Dec 13 04:02:45 np0005558241 NetworkManager[50376]: <info>  [1765616565.3849] device (tap698673ab-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:02:45 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:45Z|01337|binding|INFO|Releasing lport 698673ab-115a-49aa-b3d5-68392d28aa81 from this chassis (sb_readonly=0)
Dec 13 04:02:45 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:45Z|01338|binding|INFO|Setting lport 698673ab-115a-49aa-b3d5-68392d28aa81 down in Southbound
Dec 13 04:02:45 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:45Z|01339|binding|INFO|Removing iface tap698673ab-11 ovn-installed in OVS
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.397 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.399 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.410 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:34:14 10.100.0.5'], port_security=['fa:16:3e:7b:34:14 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '18506534-de17-4e42-87c7-b2546619f4d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '250e0b26-9d3f-404b-8b3e-f79706be2a3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db45c681-c79f-432b-bd2c-bbf3bd1f2153, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=698673ab-115a-49aa-b3d5-68392d28aa81) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.411 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 698673ab-115a-49aa-b3d5-68392d28aa81 in datapath 44b4664e-9eef-4d04-bcdb-68869f16c46a unbound from our chassis#033[00m
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.414 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 44b4664e-9eef-4d04-bcdb-68869f16c46a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.415 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[78a7687b-89cb-497c-8860-a4a41f1d20ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.416 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a namespace which is not needed anymore#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.435 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:45 np0005558241 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d00000082.scope: Deactivated successfully.
Dec 13 04:02:45 np0005558241 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d00000082.scope: Consumed 13.317s CPU time.
Dec 13 04:02:45 np0005558241 systemd-machined[210538]: Machine qemu-159-instance-00000082 terminated.
Dec 13 04:02:45 np0005558241 kernel: tap698673ab-11: entered promiscuous mode
Dec 13 04:02:45 np0005558241 systemd-udevd[379817]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:02:45 np0005558241 NetworkManager[50376]: <info>  [1765616565.5588] manager: (tap698673ab-11): new Tun device (/org/freedesktop/NetworkManager/Devices/551)
Dec 13 04:02:45 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:45Z|01340|binding|INFO|Claiming lport 698673ab-115a-49aa-b3d5-68392d28aa81 for this chassis.
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.558 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:45 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:45Z|01341|binding|INFO|698673ab-115a-49aa-b3d5-68392d28aa81: Claiming fa:16:3e:7b:34:14 10.100.0.5
Dec 13 04:02:45 np0005558241 kernel: tap698673ab-11 (unregistering): left promiscuous mode
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.572 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:34:14 10.100.0.5'], port_security=['fa:16:3e:7b:34:14 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '18506534-de17-4e42-87c7-b2546619f4d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '250e0b26-9d3f-404b-8b3e-f79706be2a3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db45c681-c79f-432b-bd2c-bbf3bd1f2153, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=698673ab-115a-49aa-b3d5-68392d28aa81) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.586 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:45 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:45Z|01342|binding|INFO|Releasing lport 698673ab-115a-49aa-b3d5-68392d28aa81 from this chassis (sb_readonly=0)
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.601 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:34:14 10.100.0.5'], port_security=['fa:16:3e:7b:34:14 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '18506534-de17-4e42-87c7-b2546619f4d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '250e0b26-9d3f-404b-8b3e-f79706be2a3d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db45c681-c79f-432b-bd2c-bbf3bd1f2153, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=698673ab-115a-49aa-b3d5-68392d28aa81) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.605 248514 INFO nova.virt.libvirt.driver [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Instance destroyed successfully.#033[00m
Dec 13 04:02:45 np0005558241 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[379464]: [NOTICE]   (379468) : haproxy version is 2.8.14-c23fe91
Dec 13 04:02:45 np0005558241 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[379464]: [NOTICE]   (379468) : path to executable is /usr/sbin/haproxy
Dec 13 04:02:45 np0005558241 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[379464]: [WARNING]  (379468) : Exiting Master process...
Dec 13 04:02:45 np0005558241 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[379464]: [WARNING]  (379468) : Exiting Master process...
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.606 248514 DEBUG nova.objects.instance [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'resources' on Instance uuid 18506534-de17-4e42-87c7-b2546619f4d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:02:45 np0005558241 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[379464]: [ALERT]    (379468) : Current worker (379470) exited with code 143 (Terminated)
Dec 13 04:02:45 np0005558241 neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a[379464]: [WARNING]  (379468) : All workers exited. Exiting... (0)
Dec 13 04:02:45 np0005558241 systemd[1]: libpod-77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066.scope: Deactivated successfully.
Dec 13 04:02:45 np0005558241 podman[379837]: 2025-12-13 09:02:45.617712661 +0000 UTC m=+0.076706559 container died 77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.644 248514 DEBUG nova.virt.libvirt.vif [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-13T09:01:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1657427533',display_name='tempest-TestNetworkAdvancedServerOps-server-1657427533',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1657427533',id=130,image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIu2Mio4iiKCPHCFWqnjQXxmeR9VuZ/sJMKRi7ThHNLpfALJX8ZDxWjNQlqcAjJCFa+dRlCd52p375xJuviq4QzkIt2R0aIT9v0cJCRdagDbQWGjbmiGhE29U66OFfUYVw==',key_name='tempest-TestNetworkAdvancedServerOps-2136416273',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:02:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a2gq0cdy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c10f1898-20b3-4bc9-8a36-2ee01b39c9ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:02:24Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=18506534-de17-4e42-87c7-b2546619f4d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.644 248514 DEBUG nova.network.os_vif_util [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.645 248514 DEBUG nova.network.os_vif_util [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.646 248514 DEBUG os_vif [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.647 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.648 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap698673ab-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:02:45 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066-userdata-shm.mount: Deactivated successfully.
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.649 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.651 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.652 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.655 248514 INFO os_vif [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:34:14,bridge_name='br-int',has_traffic_filtering=True,id=698673ab-115a-49aa-b3d5-68392d28aa81,network=Network(44b4664e-9eef-4d04-bcdb-68869f16c46a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap698673ab-11')#033[00m
Dec 13 04:02:45 np0005558241 systemd[1]: var-lib-containers-storage-overlay-92a218a85abcbb7ba749a23f029096ade87b462fcb9dae89324b66eda6c3873b-merged.mount: Deactivated successfully.
Dec 13 04:02:45 np0005558241 podman[379837]: 2025-12-13 09:02:45.673336462 +0000 UTC m=+0.132330330 container cleanup 77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:02:45 np0005558241 systemd[1]: libpod-conmon-77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066.scope: Deactivated successfully.
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.743 248514 DEBUG nova.compute.manager [req-6830f357-8fd5-4414-81fa-e8f53a178c5d req-f8a64ad5-8e67-418d-a45e-486b19d185de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-unplugged-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.744 248514 DEBUG oslo_concurrency.lockutils [req-6830f357-8fd5-4414-81fa-e8f53a178c5d req-f8a64ad5-8e67-418d-a45e-486b19d185de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.744 248514 DEBUG oslo_concurrency.lockutils [req-6830f357-8fd5-4414-81fa-e8f53a178c5d req-f8a64ad5-8e67-418d-a45e-486b19d185de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.745 248514 DEBUG oslo_concurrency.lockutils [req-6830f357-8fd5-4414-81fa-e8f53a178c5d req-f8a64ad5-8e67-418d-a45e-486b19d185de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.745 248514 DEBUG nova.compute.manager [req-6830f357-8fd5-4414-81fa-e8f53a178c5d req-f8a64ad5-8e67-418d-a45e-486b19d185de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] No waiting events found dispatching network-vif-unplugged-698673ab-115a-49aa-b3d5-68392d28aa81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.746 248514 DEBUG nova.compute.manager [req-6830f357-8fd5-4414-81fa-e8f53a178c5d req-f8a64ad5-8e67-418d-a45e-486b19d185de 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-unplugged-698673ab-115a-49aa-b3d5-68392d28aa81 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:02:45 np0005558241 podman[379884]: 2025-12-13 09:02:45.758591219 +0000 UTC m=+0.054936916 container remove 77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.764 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e8a3aff1-6683-4ce3-8c83-f6f29075642e]: (4, ('Sat Dec 13 09:02:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a (77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066)\n77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066\nSat Dec 13 09:02:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a (77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066)\n77560eb00b83b23aac136a5edd70f887e25fbd6be985313d05e81c73845a5066\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.766 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[682be346-63a5-4a38-82ed-1ca6e24613ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.767 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44b4664e-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.768 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:45 np0005558241 kernel: tap44b4664e-90: left promiscuous mode
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.782 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.785 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd9ad41-74ec-41e9-a64f-f4cc716a3b60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.804 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[858c9360-4391-4ca3-90f8-08b28656737d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.805 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[38d77d4b-eaa9-45bd-bf59-bb5c7e836936]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.823 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d59b1551-dffc-4d83-9502-4e70359320d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912063, 'reachable_time': 41989, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379902, 'error': None, 'target': 'ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:45 np0005558241 systemd[1]: run-netns-ovnmeta\x2d44b4664e\x2d9eef\x2d4d04\x2dbcdb\x2d68869f16c46a.mount: Deactivated successfully.
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.827 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-44b4664e-9eef-4d04-bcdb-68869f16c46a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.827 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[1d60d683-dfcf-40d5-82c1-4af2513d2554]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.828 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 698673ab-115a-49aa-b3d5-68392d28aa81 in datapath 44b4664e-9eef-4d04-bcdb-68869f16c46a unbound from our chassis#033[00m
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.830 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 44b4664e-9eef-4d04-bcdb-68869f16c46a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.831 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[255ccab5-b27f-43fe-b122-360e006eea79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.832 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 698673ab-115a-49aa-b3d5-68392d28aa81 in datapath 44b4664e-9eef-4d04-bcdb-68869f16c46a unbound from our chassis#033[00m
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.834 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 44b4664e-9eef-4d04-bcdb-68869f16c46a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:02:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:45.834 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[36e06051-46c8-4eda-8737-a7606deb892c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.882 248514 INFO nova.compute.manager [None req-69833bba-d288-46a3-ac4a-6d6d362d2fb9 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Get console output#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.888 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 04:02:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.947 248514 INFO nova.virt.libvirt.driver [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Deleting instance files /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4_del#033[00m
Dec 13 04:02:45 np0005558241 nova_compute[248510]: 2025-12-13 09:02:45.948 248514 INFO nova.virt.libvirt.driver [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Deletion of /var/lib/nova/instances/18506534-de17-4e42-87c7-b2546619f4d4_del complete#033[00m
Dec 13 04:02:46 np0005558241 nova_compute[248510]: 2025-12-13 09:02:46.021 248514 INFO nova.compute.manager [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:02:46 np0005558241 nova_compute[248510]: 2025-12-13 09:02:46.022 248514 DEBUG oslo.service.loopingcall [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:02:46 np0005558241 nova_compute[248510]: 2025-12-13 09:02:46.022 248514 DEBUG nova.compute.manager [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:02:46 np0005558241 nova_compute[248510]: 2025-12-13 09:02:46.022 248514 DEBUG nova.network.neutron [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:02:46 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:46Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:41:b6 10.100.0.7
Dec 13 04:02:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3050: 321 pgs: 321 active+clean; 200 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 653 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Dec 13 04:02:46 np0005558241 nova_compute[248510]: 2025-12-13 09:02:46.766 248514 DEBUG nova.network.neutron [req-b38a07fb-d462-4a2d-9708-0dbde24e5bef req-ba596af2-50f8-45ac-8d6d-296218531322 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updated VIF entry in instance network info cache for port 698673ab-115a-49aa-b3d5-68392d28aa81. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:02:46 np0005558241 nova_compute[248510]: 2025-12-13 09:02:46.767 248514 DEBUG nova.network.neutron [req-b38a07fb-d462-4a2d-9708-0dbde24e5bef req-ba596af2-50f8-45ac-8d6d-296218531322 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updating instance_info_cache with network_info: [{"id": "698673ab-115a-49aa-b3d5-68392d28aa81", "address": "fa:16:3e:7b:34:14", "network": {"id": "44b4664e-9eef-4d04-bcdb-68869f16c46a", "bridge": "br-int", "label": "tempest-network-smoke--1694573016", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap698673ab-11", "ovs_interfaceid": "698673ab-115a-49aa-b3d5-68392d28aa81", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:02:46 np0005558241 nova_compute[248510]: 2025-12-13 09:02:46.809 248514 DEBUG nova.network.neutron [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:02:46 np0005558241 nova_compute[248510]: 2025-12-13 09:02:46.811 248514 DEBUG oslo_concurrency.lockutils [req-b38a07fb-d462-4a2d-9708-0dbde24e5bef req-ba596af2-50f8-45ac-8d6d-296218531322 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-18506534-de17-4e42-87c7-b2546619f4d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:02:46 np0005558241 nova_compute[248510]: 2025-12-13 09:02:46.849 248514 INFO nova.compute.manager [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Took 0.83 seconds to deallocate network for instance.#033[00m
Dec 13 04:02:46 np0005558241 nova_compute[248510]: 2025-12-13 09:02:46.906 248514 DEBUG oslo_concurrency.lockutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:46 np0005558241 nova_compute[248510]: 2025-12-13 09:02:46.907 248514 DEBUG oslo_concurrency.lockutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:47 np0005558241 nova_compute[248510]: 2025-12-13 09:02:47.261 248514 DEBUG oslo_concurrency.processutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:02:47 np0005558241 nova_compute[248510]: 2025-12-13 09:02:47.334 248514 DEBUG nova.compute.manager [req-55e37121-68ae-467e-ad57-997e38d0d158 req-c667227d-80dd-48ed-bf06-c820fc3559e9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-deleted-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:02:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:02:47 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/44448204' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:02:47 np0005558241 nova_compute[248510]: 2025-12-13 09:02:47.767 248514 DEBUG oslo_concurrency.processutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:02:47 np0005558241 nova_compute[248510]: 2025-12-13 09:02:47.774 248514 DEBUG nova.compute.provider_tree [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:02:47 np0005558241 nova_compute[248510]: 2025-12-13 09:02:47.797 248514 DEBUG nova.scheduler.client.report [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:02:47 np0005558241 nova_compute[248510]: 2025-12-13 09:02:47.828 248514 DEBUG oslo_concurrency.lockutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:47 np0005558241 nova_compute[248510]: 2025-12-13 09:02:47.858 248514 DEBUG nova.compute.manager [req-7e7016f2-a0ce-4f70-9712-de1f8960dca0 req-8c2a2cda-b55d-4002-aff7-0bd5410f0d99 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:02:47 np0005558241 nova_compute[248510]: 2025-12-13 09:02:47.859 248514 DEBUG oslo_concurrency.lockutils [req-7e7016f2-a0ce-4f70-9712-de1f8960dca0 req-8c2a2cda-b55d-4002-aff7-0bd5410f0d99 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "18506534-de17-4e42-87c7-b2546619f4d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:47 np0005558241 nova_compute[248510]: 2025-12-13 09:02:47.859 248514 DEBUG oslo_concurrency.lockutils [req-7e7016f2-a0ce-4f70-9712-de1f8960dca0 req-8c2a2cda-b55d-4002-aff7-0bd5410f0d99 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:47 np0005558241 nova_compute[248510]: 2025-12-13 09:02:47.860 248514 DEBUG oslo_concurrency.lockutils [req-7e7016f2-a0ce-4f70-9712-de1f8960dca0 req-8c2a2cda-b55d-4002-aff7-0bd5410f0d99 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:47 np0005558241 nova_compute[248510]: 2025-12-13 09:02:47.860 248514 DEBUG nova.compute.manager [req-7e7016f2-a0ce-4f70-9712-de1f8960dca0 req-8c2a2cda-b55d-4002-aff7-0bd5410f0d99 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] No waiting events found dispatching network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:02:47 np0005558241 nova_compute[248510]: 2025-12-13 09:02:47.860 248514 WARNING nova.compute.manager [req-7e7016f2-a0ce-4f70-9712-de1f8960dca0 req-8c2a2cda-b55d-4002-aff7-0bd5410f0d99 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Received unexpected event network-vif-plugged-698673ab-115a-49aa-b3d5-68392d28aa81 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:02:47 np0005558241 nova_compute[248510]: 2025-12-13 09:02:47.862 248514 INFO nova.scheduler.client.report [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Deleted allocations for instance 18506534-de17-4e42-87c7-b2546619f4d4#033[00m
Dec 13 04:02:47 np0005558241 nova_compute[248510]: 2025-12-13 09:02:47.945 248514 DEBUG oslo_concurrency.lockutils [None req-36b60040-962f-499a-acd4-0c21ec9f49bb a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "18506534-de17-4e42-87c7-b2546619f4d4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:48 np0005558241 nova_compute[248510]: 2025-12-13 09:02:48.007 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3051: 321 pgs: 321 active+clean; 164 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 665 KiB/s rd, 4.3 MiB/s wr, 142 op/s
Dec 13 04:02:48 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:48Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:41:b6 10.100.0.7
Dec 13 04:02:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3052: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 503 KiB/s rd, 3.3 MiB/s wr, 126 op/s
Dec 13 04:02:50 np0005558241 nova_compute[248510]: 2025-12-13 09:02:50.651 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:50Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:41:b6 10.100.0.7
Dec 13 04:02:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.558 248514 DEBUG nova.compute.manager [req-9c6c5043-a3f9-4dbc-8cbe-69ee3e0509e2 req-ae75e922-bcf5-4a8a-8e09-83e586c32876 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received event network-changed-bf1a0f02-8913-41ae-aa00-ab927d45e18a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.558 248514 DEBUG nova.compute.manager [req-9c6c5043-a3f9-4dbc-8cbe-69ee3e0509e2 req-ae75e922-bcf5-4a8a-8e09-83e586c32876 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Refreshing instance network info cache due to event network-changed-bf1a0f02-8913-41ae-aa00-ab927d45e18a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.559 248514 DEBUG oslo_concurrency.lockutils [req-9c6c5043-a3f9-4dbc-8cbe-69ee3e0509e2 req-ae75e922-bcf5-4a8a-8e09-83e586c32876 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.559 248514 DEBUG oslo_concurrency.lockutils [req-9c6c5043-a3f9-4dbc-8cbe-69ee3e0509e2 req-ae75e922-bcf5-4a8a-8e09-83e586c32876 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.560 248514 DEBUG nova.network.neutron [req-9c6c5043-a3f9-4dbc-8cbe-69ee3e0509e2 req-ae75e922-bcf5-4a8a-8e09-83e586c32876 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Refreshing network info cache for port bf1a0f02-8913-41ae-aa00-ab927d45e18a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.664 248514 DEBUG oslo_concurrency.lockutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.665 248514 DEBUG oslo_concurrency.lockutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.666 248514 DEBUG oslo_concurrency.lockutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.666 248514 DEBUG oslo_concurrency.lockutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.667 248514 DEBUG oslo_concurrency.lockutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.669 248514 INFO nova.compute.manager [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Terminating instance#033[00m
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.671 248514 DEBUG nova.compute.manager [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:02:51 np0005558241 kernel: tapbf1a0f02-89 (unregistering): left promiscuous mode
Dec 13 04:02:51 np0005558241 NetworkManager[50376]: <info>  [1765616571.7300] device (tapbf1a0f02-89): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:02:51 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:51Z|01343|binding|INFO|Releasing lport bf1a0f02-8913-41ae-aa00-ab927d45e18a from this chassis (sb_readonly=0)
Dec 13 04:02:51 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:51Z|01344|binding|INFO|Setting lport bf1a0f02-8913-41ae-aa00-ab927d45e18a down in Southbound
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.739 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:51 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:51Z|01345|binding|INFO|Removing iface tapbf1a0f02-89 ovn-installed in OVS
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.742 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:51.747 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:41:b6 10.100.0.7'], port_security=['fa:16:3e:eb:41:b6 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd1949b92-7b16-4a9d-b033-d0a6df7a9f2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e40673f-327e-4eb6-92f2-18c595696258', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2f4c3d2c-26ca-48de-bf5d-37154a8db222', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=873a69d0-ac9b-484c-a75a-e746eafe7ffd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=bf1a0f02-8913-41ae-aa00-ab927d45e18a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:02:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:51.749 158419 INFO neutron.agent.ovn.metadata.agent [-] Port bf1a0f02-8913-41ae-aa00-ab927d45e18a in datapath 6e40673f-327e-4eb6-92f2-18c595696258 unbound from our chassis#033[00m
Dec 13 04:02:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:51.750 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6e40673f-327e-4eb6-92f2-18c595696258, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:02:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:51.751 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b8186250-0e5d-437e-8bf9-91cf3c1cc563]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:51.752 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258 namespace which is not needed anymore#033[00m
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.762 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:51 np0005558241 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000083.scope: Deactivated successfully.
Dec 13 04:02:51 np0005558241 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000083.scope: Consumed 13.500s CPU time.
Dec 13 04:02:51 np0005558241 systemd-machined[210538]: Machine qemu-160-instance-00000083 terminated.
Dec 13 04:02:51 np0005558241 neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258[379738]: [NOTICE]   (379742) : haproxy version is 2.8.14-c23fe91
Dec 13 04:02:51 np0005558241 neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258[379738]: [NOTICE]   (379742) : path to executable is /usr/sbin/haproxy
Dec 13 04:02:51 np0005558241 neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258[379738]: [WARNING]  (379742) : Exiting Master process...
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.915 248514 INFO nova.virt.libvirt.driver [-] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Instance destroyed successfully.#033[00m
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.916 248514 DEBUG nova.objects.instance [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid d1949b92-7b16-4a9d-b033-d0a6df7a9f2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:02:51 np0005558241 neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258[379738]: [ALERT]    (379742) : Current worker (379744) exited with code 143 (Terminated)
Dec 13 04:02:51 np0005558241 neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258[379738]: [WARNING]  (379742) : All workers exited. Exiting... (0)
Dec 13 04:02:51 np0005558241 systemd[1]: libpod-dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7.scope: Deactivated successfully.
Dec 13 04:02:51 np0005558241 podman[379951]: 2025-12-13 09:02:51.925601193 +0000 UTC m=+0.054890155 container died dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.947 248514 DEBUG nova.virt.libvirt.vif [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:02:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1513004833',display_name='tempest-TestNetworkBasicOps-server-1513004833',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1513004833',id=131,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI8dVZ4XrPNWO5yi7otU+kwc72UZys9k+bPmo3tsYqKE1ENZ9dHjFLyBJxupI7JFtVdj7CesXySUu0b7xutQxOWMSQgMIWptsgkbrjrKxEGVeJHg8i+xoVF0hfGKec7bVQ==',key_name='tempest-TestNetworkBasicOps-61860158',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:02:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-y8sla6xu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:02:28Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=d1949b92-7b16-4a9d-b033-d0a6df7a9f2f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.947 248514 DEBUG nova.network.os_vif_util [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.948 248514 DEBUG nova.network.os_vif_util [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:41:b6,bridge_name='br-int',has_traffic_filtering=True,id=bf1a0f02-8913-41ae-aa00-ab927d45e18a,network=Network(6e40673f-327e-4eb6-92f2-18c595696258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1a0f02-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:02:51 np0005558241 ovn_controller[148476]: 2025-12-13T09:02:51Z|01346|binding|INFO|Releasing lport b5b50ff0-6ee9-471e-bc65-c9c2b390db7a from this chassis (sb_readonly=0)
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.950 248514 DEBUG os_vif [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:41:b6,bridge_name='br-int',has_traffic_filtering=True,id=bf1a0f02-8913-41ae-aa00-ab927d45e18a,network=Network(6e40673f-327e-4eb6-92f2-18c595696258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1a0f02-89') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.952 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.952 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf1a0f02-89, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.953 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.955 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:02:51 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7-userdata-shm.mount: Deactivated successfully.
Dec 13 04:02:51 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f4b3b0cc85753c71ae0af5ca1116bf5693a0188d31c1be920354ca7251186e52-merged.mount: Deactivated successfully.
Dec 13 04:02:51 np0005558241 nova_compute[248510]: 2025-12-13 09:02:51.962 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:51 np0005558241 podman[379951]: 2025-12-13 09:02:51.977853192 +0000 UTC m=+0.107142154 container cleanup dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 13 04:02:51 np0005558241 systemd[1]: libpod-conmon-dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7.scope: Deactivated successfully.
Dec 13 04:02:52 np0005558241 podman[379991]: 2025-12-13 09:02:52.040397653 +0000 UTC m=+0.041384644 container remove dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 13 04:02:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:52.042 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f950f6c5-e966-4685-8abc-b135ed7e3047]: (4, ('Sat Dec 13 09:02:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258 (dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7)\ndec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7\nSat Dec 13 09:02:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258 (dec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7)\ndec033d11132ece716c8d488bf5012b22e0c2d70d313f52fd88acf75d0f75ec7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:52.043 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa71108-6dc7-4ba3-a222-de2e2b48dbd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:52.044 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e40673f-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:02:52 np0005558241 nova_compute[248510]: 2025-12-13 09:02:52.045 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:52 np0005558241 kernel: tap6e40673f-30: left promiscuous mode
Dec 13 04:02:52 np0005558241 nova_compute[248510]: 2025-12-13 09:02:52.092 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:52 np0005558241 nova_compute[248510]: 2025-12-13 09:02:52.094 248514 INFO os_vif [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:41:b6,bridge_name='br-int',has_traffic_filtering=True,id=bf1a0f02-8913-41ae-aa00-ab927d45e18a,network=Network(6e40673f-327e-4eb6-92f2-18c595696258),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf1a0f02-89')#033[00m
Dec 13 04:02:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:52.096 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1140aee1-7e13-489e-9629-54cb28687a69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:52 np0005558241 nova_compute[248510]: 2025-12-13 09:02:52.119 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:52.120 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[86138030-b8de-4d48-b2dd-32bf6faf11b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:52.121 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ab24dd2c-d18c-4cf2-9a92-0376ba302d6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:52 np0005558241 nova_compute[248510]: 2025-12-13 09:02:52.122 248514 DEBUG nova.compute.manager [req-6d2367da-3fdb-495b-966f-45dd8c87d1ad req-98682793-915d-43d4-8ffa-b1da7f5d7805 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received event network-vif-unplugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:02:52 np0005558241 nova_compute[248510]: 2025-12-13 09:02:52.122 248514 DEBUG oslo_concurrency.lockutils [req-6d2367da-3fdb-495b-966f-45dd8c87d1ad req-98682793-915d-43d4-8ffa-b1da7f5d7805 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:52 np0005558241 nova_compute[248510]: 2025-12-13 09:02:52.123 248514 DEBUG oslo_concurrency.lockutils [req-6d2367da-3fdb-495b-966f-45dd8c87d1ad req-98682793-915d-43d4-8ffa-b1da7f5d7805 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:52 np0005558241 nova_compute[248510]: 2025-12-13 09:02:52.123 248514 DEBUG oslo_concurrency.lockutils [req-6d2367da-3fdb-495b-966f-45dd8c87d1ad req-98682793-915d-43d4-8ffa-b1da7f5d7805 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:52 np0005558241 nova_compute[248510]: 2025-12-13 09:02:52.123 248514 DEBUG nova.compute.manager [req-6d2367da-3fdb-495b-966f-45dd8c87d1ad req-98682793-915d-43d4-8ffa-b1da7f5d7805 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] No waiting events found dispatching network-vif-unplugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:02:52 np0005558241 nova_compute[248510]: 2025-12-13 09:02:52.123 248514 DEBUG nova.compute.manager [req-6d2367da-3fdb-495b-966f-45dd8c87d1ad req-98682793-915d-43d4-8ffa-b1da7f5d7805 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received event network-vif-unplugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:02:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:52.140 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[13307e63-f449-41c4-8e30-4798c9c26cf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912553, 'reachable_time': 19694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380022, 'error': None, 'target': 'ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:52 np0005558241 systemd[1]: run-netns-ovnmeta\x2d6e40673f\x2d327e\x2d4eb6\x2d92f2\x2d18c595696258.mount: Deactivated successfully.
Dec 13 04:02:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:52.144 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6e40673f-327e-4eb6-92f2-18c595696258 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:02:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:52.145 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[704d3917-daec-475c-816c-0bf33401e9ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:02:52 np0005558241 nova_compute[248510]: 2025-12-13 09:02:52.352 248514 INFO nova.virt.libvirt.driver [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Deleting instance files /var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_del#033[00m
Dec 13 04:02:52 np0005558241 nova_compute[248510]: 2025-12-13 09:02:52.353 248514 INFO nova.virt.libvirt.driver [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Deletion of /var/lib/nova/instances/d1949b92-7b16-4a9d-b033-d0a6df7a9f2f_del complete#033[00m
Dec 13 04:02:52 np0005558241 nova_compute[248510]: 2025-12-13 09:02:52.436 248514 INFO nova.compute.manager [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:02:52 np0005558241 nova_compute[248510]: 2025-12-13 09:02:52.437 248514 DEBUG oslo.service.loopingcall [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:02:52 np0005558241 nova_compute[248510]: 2025-12-13 09:02:52.440 248514 DEBUG nova.compute.manager [-] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:02:52 np0005558241 nova_compute[248510]: 2025-12-13 09:02:52.440 248514 DEBUG nova.network.neutron [-] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:02:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3053: 321 pgs: 321 active+clean; 121 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 2.2 MiB/s wr, 92 op/s
Dec 13 04:02:52 np0005558241 nova_compute[248510]: 2025-12-13 09:02:52.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:02:53 np0005558241 nova_compute[248510]: 2025-12-13 09:02:53.009 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:53 np0005558241 nova_compute[248510]: 2025-12-13 09:02:53.349 248514 DEBUG nova.network.neutron [-] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:02:53 np0005558241 nova_compute[248510]: 2025-12-13 09:02:53.373 248514 INFO nova.compute.manager [-] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Took 0.93 seconds to deallocate network for instance.#033[00m
Dec 13 04:02:53 np0005558241 nova_compute[248510]: 2025-12-13 09:02:53.421 248514 DEBUG oslo_concurrency.lockutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:53 np0005558241 nova_compute[248510]: 2025-12-13 09:02:53.421 248514 DEBUG oslo_concurrency.lockutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:53 np0005558241 nova_compute[248510]: 2025-12-13 09:02:53.484 248514 DEBUG oslo_concurrency.processutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:02:53 np0005558241 nova_compute[248510]: 2025-12-13 09:02:53.540 248514 DEBUG nova.network.neutron [req-9c6c5043-a3f9-4dbc-8cbe-69ee3e0509e2 req-ae75e922-bcf5-4a8a-8e09-83e586c32876 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Updated VIF entry in instance network info cache for port bf1a0f02-8913-41ae-aa00-ab927d45e18a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:02:53 np0005558241 nova_compute[248510]: 2025-12-13 09:02:53.542 248514 DEBUG nova.network.neutron [req-9c6c5043-a3f9-4dbc-8cbe-69ee3e0509e2 req-ae75e922-bcf5-4a8a-8e09-83e586c32876 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Updating instance_info_cache with network_info: [{"id": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "address": "fa:16:3e:eb:41:b6", "network": {"id": "6e40673f-327e-4eb6-92f2-18c595696258", "bridge": "br-int", "label": "tempest-network-smoke--2106682485", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1a0f02-89", "ovs_interfaceid": "bf1a0f02-8913-41ae-aa00-ab927d45e18a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:02:53 np0005558241 nova_compute[248510]: 2025-12-13 09:02:53.564 248514 DEBUG oslo_concurrency.lockutils [req-9c6c5043-a3f9-4dbc-8cbe-69ee3e0509e2 req-ae75e922-bcf5-4a8a-8e09-83e586c32876 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:02:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:02:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3556270652' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:02:54 np0005558241 nova_compute[248510]: 2025-12-13 09:02:54.084 248514 DEBUG oslo_concurrency.processutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:02:54 np0005558241 nova_compute[248510]: 2025-12-13 09:02:54.090 248514 DEBUG nova.compute.provider_tree [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:02:54 np0005558241 nova_compute[248510]: 2025-12-13 09:02:54.115 248514 DEBUG nova.scheduler.client.report [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:02:54 np0005558241 nova_compute[248510]: 2025-12-13 09:02:54.136 248514 DEBUG oslo_concurrency.lockutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:54 np0005558241 nova_compute[248510]: 2025-12-13 09:02:54.185 248514 INFO nova.scheduler.client.report [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance d1949b92-7b16-4a9d-b033-d0a6df7a9f2f#033[00m
Dec 13 04:02:54 np0005558241 nova_compute[248510]: 2025-12-13 09:02:54.249 248514 DEBUG nova.compute.manager [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received event network-vif-plugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:02:54 np0005558241 nova_compute[248510]: 2025-12-13 09:02:54.249 248514 DEBUG oslo_concurrency.lockutils [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:54 np0005558241 nova_compute[248510]: 2025-12-13 09:02:54.249 248514 DEBUG oslo_concurrency.lockutils [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:54 np0005558241 nova_compute[248510]: 2025-12-13 09:02:54.250 248514 DEBUG oslo_concurrency.lockutils [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:54 np0005558241 nova_compute[248510]: 2025-12-13 09:02:54.250 248514 DEBUG nova.compute.manager [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] No waiting events found dispatching network-vif-plugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:02:54 np0005558241 nova_compute[248510]: 2025-12-13 09:02:54.250 248514 WARNING nova.compute.manager [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received unexpected event network-vif-plugged-bf1a0f02-8913-41ae-aa00-ab927d45e18a for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:02:54 np0005558241 nova_compute[248510]: 2025-12-13 09:02:54.250 248514 DEBUG nova.compute.manager [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Received event network-vif-deleted-bf1a0f02-8913-41ae-aa00-ab927d45e18a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:02:54 np0005558241 nova_compute[248510]: 2025-12-13 09:02:54.250 248514 INFO nova.compute.manager [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Neutron deleted interface bf1a0f02-8913-41ae-aa00-ab927d45e18a; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 04:02:54 np0005558241 nova_compute[248510]: 2025-12-13 09:02:54.251 248514 DEBUG nova.network.neutron [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:02:54 np0005558241 nova_compute[248510]: 2025-12-13 09:02:54.278 248514 DEBUG oslo_concurrency.lockutils [None req-e638f264-dc8a-48d3-8b16-571b1b7827ac 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "d1949b92-7b16-4a9d-b033-d0a6df7a9f2f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:54 np0005558241 nova_compute[248510]: 2025-12-13 09:02:54.285 248514 DEBUG nova.compute.manager [req-2ef959cb-944d-463c-9cea-b9e2291f68a5 req-5fb63a80-6ed8-419f-904e-e0fd609e6a7f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Detach interface failed, port_id=bf1a0f02-8913-41ae-aa00-ab927d45e18a, reason: Instance d1949b92-7b16-4a9d-b033-d0a6df7a9f2f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec 13 04:02:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3054: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 2.2 MiB/s wr, 120 op/s
Dec 13 04:02:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:55.439 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:02:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:55.440 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:02:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:02:55.441 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:02:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:02:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3055: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 16 KiB/s wr, 56 op/s
Dec 13 04:02:56 np0005558241 nova_compute[248510]: 2025-12-13 09:02:56.954 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:58 np0005558241 nova_compute[248510]: 2025-12-13 09:02:58.011 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:02:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3056: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 16 KiB/s wr, 56 op/s
Dec 13 04:03:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3057: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 42 op/s
Dec 13 04:03:00 np0005558241 nova_compute[248510]: 2025-12-13 09:03:00.603 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616565.6018267, 18506534-de17-4e42-87c7-b2546619f4d4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:03:00 np0005558241 nova_compute[248510]: 2025-12-13 09:03:00.604 248514 INFO nova.compute.manager [-] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:03:00 np0005558241 nova_compute[248510]: 2025-12-13 09:03:00.872 248514 DEBUG nova.compute.manager [None req-526d59e2-b10d-4183-b193-ddc31f3dd8a3 - - - - - -] [instance: 18506534-de17-4e42-87c7-b2546619f4d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:03:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:03:01 np0005558241 nova_compute[248510]: 2025-12-13 09:03:01.957 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3058: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 04:03:03 np0005558241 nova_compute[248510]: 2025-12-13 09:03:03.013 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3059: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 04:03:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:03:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3060: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:03:06 np0005558241 nova_compute[248510]: 2025-12-13 09:03:06.913 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616571.9108522, d1949b92-7b16-4a9d-b033-d0a6df7a9f2f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:03:06 np0005558241 nova_compute[248510]: 2025-12-13 09:03:06.913 248514 INFO nova.compute.manager [-] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:03:06 np0005558241 nova_compute[248510]: 2025-12-13 09:03:06.960 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:06 np0005558241 nova_compute[248510]: 2025-12-13 09:03:06.969 248514 DEBUG nova.compute.manager [None req-4defc14b-2124-4295-82f8-d32ddb08db5f - - - - - -] [instance: d1949b92-7b16-4a9d-b033-d0a6df7a9f2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:03:08 np0005558241 nova_compute[248510]: 2025-12-13 09:03:08.015 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3061: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:03:08 np0005558241 nova_compute[248510]: 2025-12-13 09:03:08.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:03:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:03:09
Dec 13 04:03:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:03:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:03:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', '.rgw.root', 'default.rgw.log', 'vms', 'images']
Dec 13 04:03:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:03:09 np0005558241 podman[380051]: 2025-12-13 09:03:09.991395162 +0000 UTC m=+0.066544428 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 13 04:03:10 np0005558241 podman[380050]: 2025-12-13 09:03:10.019526407 +0000 UTC m=+0.099198947 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd)
Dec 13 04:03:10 np0005558241 podman[380049]: 2025-12-13 09:03:10.025808534 +0000 UTC m=+0.104832367 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 04:03:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:03:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:03:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:03:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:03:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:03:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:03:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3062: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:03:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:03:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:03:11 np0005558241 nova_compute[248510]: 2025-12-13 09:03:11.241 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:11 np0005558241 nova_compute[248510]: 2025-12-13 09:03:11.242 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:11 np0005558241 nova_compute[248510]: 2025-12-13 09:03:11.273 248514 DEBUG nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:03:11 np0005558241 nova_compute[248510]: 2025-12-13 09:03:11.389 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:11 np0005558241 nova_compute[248510]: 2025-12-13 09:03:11.390 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:11 np0005558241 nova_compute[248510]: 2025-12-13 09:03:11.399 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:03:11 np0005558241 nova_compute[248510]: 2025-12-13 09:03:11.399 248514 INFO nova.compute.claims [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:03:11 np0005558241 nova_compute[248510]: 2025-12-13 09:03:11.544 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:11 np0005558241 nova_compute[248510]: 2025-12-13 09:03:11.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:03:11 np0005558241 nova_compute[248510]: 2025-12-13 09:03:11.962 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:03:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1867376723' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.135 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.142 248514 DEBUG nova.compute.provider_tree [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.164 248514 DEBUG nova.scheduler.client.report [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.193 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.194 248514 DEBUG nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.247 248514 DEBUG nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.247 248514 DEBUG nova.network.neutron [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.277 248514 INFO nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.300 248514 DEBUG nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.400 248514 DEBUG nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.402 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.403 248514 INFO nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Creating image(s)#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.437 248514 DEBUG nova.storage.rbd_utils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.468 248514 DEBUG nova.storage.rbd_utils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3063: 321 pgs: 321 active+clean; 41 MiB data, 872 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.496 248514 DEBUG nova.storage.rbd_utils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.500 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.572 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.573 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.574 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.574 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.598 248514 DEBUG nova.storage.rbd_utils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.602 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.772 248514 DEBUG nova.policy [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a5fd410579fa429ba0f7f680590cd86a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '77d40177a4804671aa9c5da343bc2ed4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.921 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:12 np0005558241 nova_compute[248510]: 2025-12-13 09:03:12.978 248514 DEBUG nova.storage.rbd_utils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] resizing rbd image f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:03:13 np0005558241 nova_compute[248510]: 2025-12-13 09:03:13.017 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:13 np0005558241 nova_compute[248510]: 2025-12-13 09:03:13.064 248514 DEBUG nova.objects.instance [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'migration_context' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:03:13 np0005558241 nova_compute[248510]: 2025-12-13 09:03:13.083 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:03:13 np0005558241 nova_compute[248510]: 2025-12-13 09:03:13.083 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Ensure instance console log exists: /var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:03:13 np0005558241 nova_compute[248510]: 2025-12-13 09:03:13.084 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:13 np0005558241 nova_compute[248510]: 2025-12-13 09:03:13.084 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:13 np0005558241 nova_compute[248510]: 2025-12-13 09:03:13.084 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3064: 321 pgs: 321 active+clean; 76 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 960 KiB/s wr, 19 op/s
Dec 13 04:03:15 np0005558241 nova_compute[248510]: 2025-12-13 09:03:15.070 248514 DEBUG nova.network.neutron [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Successfully created port: daddb809-b36f-4f29-bd15-459cfd21a812 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:03:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:03:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3032886555' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:03:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:03:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3032886555' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:03:15 np0005558241 nova_compute[248510]: 2025-12-13 09:03:15.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:03:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:03:16 np0005558241 nova_compute[248510]: 2025-12-13 09:03:16.062 248514 DEBUG nova.network.neutron [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Successfully updated port: daddb809-b36f-4f29-bd15-459cfd21a812 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:03:16 np0005558241 nova_compute[248510]: 2025-12-13 09:03:16.081 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:03:16 np0005558241 nova_compute[248510]: 2025-12-13 09:03:16.082 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquired lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:03:16 np0005558241 nova_compute[248510]: 2025-12-13 09:03:16.082 248514 DEBUG nova.network.neutron [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:03:16 np0005558241 nova_compute[248510]: 2025-12-13 09:03:16.198 248514 DEBUG nova.compute.manager [req-cd3608e6-ea42-4d97-bebd-448913eda12f req-96ae4310-43a1-492c-b1d6-1c8b027f910d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-changed-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:03:16 np0005558241 nova_compute[248510]: 2025-12-13 09:03:16.199 248514 DEBUG nova.compute.manager [req-cd3608e6-ea42-4d97-bebd-448913eda12f req-96ae4310-43a1-492c-b1d6-1c8b027f910d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Refreshing instance network info cache due to event network-changed-daddb809-b36f-4f29-bd15-459cfd21a812. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:03:16 np0005558241 nova_compute[248510]: 2025-12-13 09:03:16.199 248514 DEBUG oslo_concurrency.lockutils [req-cd3608e6-ea42-4d97-bebd-448913eda12f req-96ae4310-43a1-492c-b1d6-1c8b027f910d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:03:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3065: 321 pgs: 321 active+clean; 76 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 960 KiB/s wr, 19 op/s
Dec 13 04:03:16 np0005558241 nova_compute[248510]: 2025-12-13 09:03:16.717 248514 DEBUG nova.network.neutron [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:03:16 np0005558241 nova_compute[248510]: 2025-12-13 09:03:16.964 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.654 248514 DEBUG nova.network.neutron [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Updating instance_info_cache with network_info: [{"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.690 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Releasing lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.691 248514 DEBUG nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance network_info: |[{"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.692 248514 DEBUG oslo_concurrency.lockutils [req-cd3608e6-ea42-4d97-bebd-448913eda12f req-96ae4310-43a1-492c-b1d6-1c8b027f910d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.693 248514 DEBUG nova.network.neutron [req-cd3608e6-ea42-4d97-bebd-448913eda12f req-96ae4310-43a1-492c-b1d6-1c8b027f910d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Refreshing network info cache for port daddb809-b36f-4f29-bd15-459cfd21a812 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.698 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Start _get_guest_xml network_info=[{"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.706 248514 WARNING nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.713 248514 DEBUG nova.virt.libvirt.host [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.715 248514 DEBUG nova.virt.libvirt.host [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.727 248514 DEBUG nova.virt.libvirt.host [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.728 248514 DEBUG nova.virt.libvirt.host [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.729 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.730 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.731 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.731 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.731 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.732 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.732 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.733 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.733 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.734 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.734 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.735 248514 DEBUG nova.virt.hardware [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.740 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.804 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.805 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.806 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.841 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec 13 04:03:17 np0005558241 nova_compute[248510]: 2025-12-13 09:03:17.842 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.020 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:03:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3951604695' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.355 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.380 248514 DEBUG nova.storage.rbd_utils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.384 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3066: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:03:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:03:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1591836453' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.911 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.914 248514 DEBUG nova.virt.libvirt.vif [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:03:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-306376587',display_name='tempest-TestNetworkAdvancedServerOps-server-306376587',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-306376587',id=132,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHKEKbLZZE+aW59knHMs4q4nSClPgzNp8F0QXUH6k11KPbTMvxhfQvtv8ScRqzRXYIiwBhktHJd/maU265oWRgbsF6YV/H8WAVywICvzTkBF4CZcbkyPahF1JxO8nrngQ==',key_name='tempest-TestNetworkAdvancedServerOps-1547833332',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-4215zp0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:03:12Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=f5f29271-ff94-4d88-bc99-1cfc3e1128a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.915 248514 DEBUG nova.network.os_vif_util [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.917 248514 DEBUG nova.network.os_vif_util [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.919 248514 DEBUG nova.objects.instance [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_devices' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.947 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:03:18 np0005558241 nova_compute[248510]:  <uuid>f5f29271-ff94-4d88-bc99-1cfc3e1128a0</uuid>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:  <name>instance-00000084</name>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-306376587</nova:name>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:03:17</nova:creationTime>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:03:18 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:        <nova:user uuid="a5fd410579fa429ba0f7f680590cd86a">tempest-TestNetworkAdvancedServerOps-1969761839-project-member</nova:user>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:        <nova:project uuid="77d40177a4804671aa9c5da343bc2ed4">tempest-TestNetworkAdvancedServerOps-1969761839</nova:project>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:        <nova:port uuid="daddb809-b36f-4f29-bd15-459cfd21a812">
Dec 13 04:03:18 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <entry name="serial">f5f29271-ff94-4d88-bc99-1cfc3e1128a0</entry>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <entry name="uuid">f5f29271-ff94-4d88-bc99-1cfc3e1128a0</entry>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk">
Dec 13 04:03:18 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:03:18 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk.config">
Dec 13 04:03:18 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:03:18 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:b3:4e:45"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <target dev="tapdaddb809-b3"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0/console.log" append="off"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:03:18 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:03:18 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:03:18 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:03:18 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.949 248514 DEBUG nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Preparing to wait for external event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.950 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.950 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.951 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.951 248514 DEBUG nova.virt.libvirt.vif [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:03:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-306376587',display_name='tempest-TestNetworkAdvancedServerOps-server-306376587',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-306376587',id=132,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHKEKbLZZE+aW59knHMs4q4nSClPgzNp8F0QXUH6k11KPbTMvxhfQvtv8ScRqzRXYIiwBhktHJd/maU265oWRgbsF6YV/H8WAVywICvzTkBF4CZcbkyPahF1JxO8nrngQ==',key_name='tempest-TestNetworkAdvancedServerOps-1547833332',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-4215zp0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:03:12Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=f5f29271-ff94-4d88-bc99-1cfc3e1128a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.952 248514 DEBUG nova.network.os_vif_util [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.953 248514 DEBUG nova.network.os_vif_util [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.953 248514 DEBUG os_vif [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.954 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.954 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.955 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.958 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.959 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdaddb809-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.959 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdaddb809-b3, col_values=(('external_ids', {'iface-id': 'daddb809-b36f-4f29-bd15-459cfd21a812', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:4e:45', 'vm-uuid': 'f5f29271-ff94-4d88-bc99-1cfc3e1128a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:18 np0005558241 NetworkManager[50376]: <info>  [1765616598.9626] manager: (tapdaddb809-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/552)
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.969 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:18 np0005558241 nova_compute[248510]: 2025-12-13 09:03:18.971 248514 INFO os_vif [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3')#033[00m
Dec 13 04:03:19 np0005558241 nova_compute[248510]: 2025-12-13 09:03:19.040 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:03:19 np0005558241 nova_compute[248510]: 2025-12-13 09:03:19.040 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:03:19 np0005558241 nova_compute[248510]: 2025-12-13 09:03:19.040 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No VIF found with MAC fa:16:3e:b3:4e:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:03:19 np0005558241 nova_compute[248510]: 2025-12-13 09:03:19.041 248514 INFO nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Using config drive#033[00m
Dec 13 04:03:19 np0005558241 nova_compute[248510]: 2025-12-13 09:03:19.064 248514 DEBUG nova.storage.rbd_utils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:19 np0005558241 nova_compute[248510]: 2025-12-13 09:03:19.630 248514 DEBUG nova.network.neutron [req-cd3608e6-ea42-4d97-bebd-448913eda12f req-96ae4310-43a1-492c-b1d6-1c8b027f910d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Updated VIF entry in instance network info cache for port daddb809-b36f-4f29-bd15-459cfd21a812. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:03:19 np0005558241 nova_compute[248510]: 2025-12-13 09:03:19.631 248514 DEBUG nova.network.neutron [req-cd3608e6-ea42-4d97-bebd-448913eda12f req-96ae4310-43a1-492c-b1d6-1c8b027f910d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Updating instance_info_cache with network_info: [{"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:03:19 np0005558241 nova_compute[248510]: 2025-12-13 09:03:19.658 248514 DEBUG oslo_concurrency.lockutils [req-cd3608e6-ea42-4d97-bebd-448913eda12f req-96ae4310-43a1-492c-b1d6-1c8b027f910d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:03:19 np0005558241 nova_compute[248510]: 2025-12-13 09:03:19.753 248514 INFO nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Creating config drive at /var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0/disk.config#033[00m
Dec 13 04:03:19 np0005558241 nova_compute[248510]: 2025-12-13 09:03:19.764 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvov7r_9d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:19 np0005558241 nova_compute[248510]: 2025-12-13 09:03:19.928 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvov7r_9d" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:19 np0005558241 nova_compute[248510]: 2025-12-13 09:03:19.953 248514 DEBUG nova.storage.rbd_utils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:19 np0005558241 nova_compute[248510]: 2025-12-13 09:03:19.959 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0/disk.config f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.123 248514 DEBUG oslo_concurrency.processutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0/disk.config f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.125 248514 INFO nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Deleting local config drive /var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0/disk.config because it was imported into RBD.#033[00m
Dec 13 04:03:20 np0005558241 kernel: tapdaddb809-b3: entered promiscuous mode
Dec 13 04:03:20 np0005558241 NetworkManager[50376]: <info>  [1765616600.1964] manager: (tapdaddb809-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/553)
Dec 13 04:03:20 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:20Z|01347|binding|INFO|Claiming lport daddb809-b36f-4f29-bd15-459cfd21a812 for this chassis.
Dec 13 04:03:20 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:20Z|01348|binding|INFO|daddb809-b36f-4f29-bd15-459cfd21a812: Claiming fa:16:3e:b3:4e:45 10.100.0.11
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.197 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.202 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.205 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.213 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:4e:45 10.100.0.11'], port_security=['fa:16:3e:b3:4e:45 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f5f29271-ff94-4d88-bc99-1cfc3e1128a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdd30303-3917-438c-8b47-a12827c948d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '48e751cb-5eae-4702-97af-e0ce47d426b3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c018ca30-1abb-42b0-8adb-342f8b53fd8e, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=daddb809-b36f-4f29-bd15-459cfd21a812) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.215 158419 INFO neutron.agent.ovn.metadata.agent [-] Port daddb809-b36f-4f29-bd15-459cfd21a812 in datapath cdd30303-3917-438c-8b47-a12827c948d6 bound to our chassis#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.217 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cdd30303-3917-438c-8b47-a12827c948d6#033[00m
Dec 13 04:03:20 np0005558241 systemd-machined[210538]: New machine qemu-161-instance-00000084.
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.236 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0b13d932-e5ac-4808-b8b9-fc7c38e5beff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.237 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcdd30303-31 in ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.240 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcdd30303-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.240 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8029635d-0a97-4360-b7e3-a9b10525e04d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.241 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[64198d5f-66da-43c7-8d7b-b6669f2a650d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:20 np0005558241 systemd[1]: Started Virtual Machine qemu-161-instance-00000084.
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.265 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[112b766c-77cd-4289-bcac-a303c1b66eb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:20 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:20Z|01349|binding|INFO|Setting lport daddb809-b36f-4f29-bd15-459cfd21a812 ovn-installed in OVS
Dec 13 04:03:20 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:20Z|01350|binding|INFO|Setting lport daddb809-b36f-4f29-bd15-459cfd21a812 up in Southbound
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.269 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:20 np0005558241 systemd-udevd[380441]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.288 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b0180eeb-f81b-425f-b172-4ab838166c6a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:20 np0005558241 NetworkManager[50376]: <info>  [1765616600.3071] device (tapdaddb809-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:03:20 np0005558241 NetworkManager[50376]: <info>  [1765616600.3082] device (tapdaddb809-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.338 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[aeeefb6b-e215-4468-9aec-1cb63e34f2e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.343 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1c22aa4f-7f6d-4ba2-82df-ef7171b9873d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:20 np0005558241 systemd-udevd[380449]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:03:20 np0005558241 NetworkManager[50376]: <info>  [1765616600.3461] manager: (tapcdd30303-30): new Veth device (/org/freedesktop/NetworkManager/Devices/554)
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.392 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7f72b109-f4d9-441a-9bc9-67de2a9e47cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.396 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5b8ee0-c730-4f0b-81fd-0db73afe0a29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:20 np0005558241 NetworkManager[50376]: <info>  [1765616600.4293] device (tapcdd30303-30): carrier: link connected
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.441 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2b4497b5-2836-488c-86a7-a27540a0bb41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.468 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6b86cd16-fc71-47c6-9e53-edb1d3620731]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcdd30303-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:ce:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 917764, 'reachable_time': 37290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380471, 'error': None, 'target': 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.492 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bb287535-0f66-4a14-ab58-415bec5fd6e0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:ce22'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 917764, 'tstamp': 917764}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380472, 'error': None, 'target': 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3067: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.518 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d1326e74-2682-4350-80ae-d03286835fb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcdd30303-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:ce:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 917764, 'reachable_time': 37290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380473, 'error': None, 'target': 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.549 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.550 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.572 248514 DEBUG nova.compute.manager [req-4bc692e7-1b02-4bf8-bce0-cfe81d8d5b6b req-144a415e-e2b3-4d0d-a66b-768f19d1c562 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.573 248514 DEBUG oslo_concurrency.lockutils [req-4bc692e7-1b02-4bf8-bce0-cfe81d8d5b6b req-144a415e-e2b3-4d0d-a66b-768f19d1c562 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.573 248514 DEBUG oslo_concurrency.lockutils [req-4bc692e7-1b02-4bf8-bce0-cfe81d8d5b6b req-144a415e-e2b3-4d0d-a66b-768f19d1c562 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.574 248514 DEBUG oslo_concurrency.lockutils [req-4bc692e7-1b02-4bf8-bce0-cfe81d8d5b6b req-144a415e-e2b3-4d0d-a66b-768f19d1c562 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.576 248514 DEBUG nova.compute.manager [req-4bc692e7-1b02-4bf8-bce0-cfe81d8d5b6b req-144a415e-e2b3-4d0d-a66b-768f19d1c562 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Processing event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.578 248514 DEBUG nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.594 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[91f2e465-2b1c-4089-85cb-3fd5da4c9883]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.687 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.688 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.690 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c10e2bd8-7f44-45d0-84a4-9d889976bada]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.692 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdd30303-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.692 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.692 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcdd30303-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.697 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.698 248514 INFO nova.compute.claims [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:03:20 np0005558241 kernel: tapcdd30303-30: entered promiscuous mode
Dec 13 04:03:20 np0005558241 NetworkManager[50376]: <info>  [1765616600.7491] manager: (tapcdd30303-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/555)
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.748 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.751 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.751 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcdd30303-30, col_values=(('external_ids', {'iface-id': '2b4706f8-0b78-43d8-a3df-6327712b725d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:20 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:20Z|01351|binding|INFO|Releasing lport 2b4706f8-0b78-43d8-a3df-6327712b725d from this chassis (sb_readonly=0)
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.756 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.773 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cdd30303-3917-438c-8b47-a12827c948d6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cdd30303-3917-438c-8b47-a12827c948d6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.774 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[22fe39ab-e0f5-4619-9750-1bb856493769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.775 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-cdd30303-3917-438c-8b47-a12827c948d6
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/cdd30303-3917-438c-8b47-a12827c948d6.pid.haproxy
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID cdd30303-3917-438c-8b47-a12827c948d6
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:03:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:20.775 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'env', 'PROCESS_TAG=haproxy-cdd30303-3917-438c-8b47-a12827c948d6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cdd30303-3917-438c-8b47-a12827c948d6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.777 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.842 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.905 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616600.9044743, f5f29271-ff94-4d88-bc99-1cfc3e1128a0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.905 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] VM Started (Lifecycle Event)#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.908 248514 DEBUG nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.913 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.920 248514 INFO nova.virt.libvirt.driver [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance spawned successfully.#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.921 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.923 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.931 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.950 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.951 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.951 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.952 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.952 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.953 248514 DEBUG nova.virt.libvirt.driver [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:03:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.966 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.967 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616600.907748, f5f29271-ff94-4d88-bc99-1cfc3e1128a0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:03:20 np0005558241 nova_compute[248510]: 2025-12-13 09:03:20.968 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.007 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.021 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616600.9123144, f5f29271-ff94-4d88-bc99-1cfc3e1128a0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.022 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.026 248514 INFO nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Took 8.63 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.027 248514 DEBUG nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.056 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.062 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.090 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.111 248514 INFO nova.compute.manager [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Took 9.77 seconds to build instance.#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.130 248514 DEBUG oslo_concurrency.lockutils [None req-6ec172f1-6dc2-4a0f-81ff-e6bdb742c019 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:21 np0005558241 podman[380566]: 2025-12-13 09:03:21.210694836 +0000 UTC m=+0.056927797 container create 0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 04:03:21 np0005558241 systemd[1]: Started libpod-conmon-0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8.scope.
Dec 13 04:03:21 np0005558241 podman[380566]: 2025-12-13 09:03:21.18453091 +0000 UTC m=+0.030763871 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00036026279993213125 of space, bias 1.0, pg target 0.10807883997963938 quantized to 32 (current 32)
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000669848540879215 of space, bias 1.0, pg target 0.2009545622637645 quantized to 32 (current 32)
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724716211363558e-07 of space, bias 4.0, pg target 0.0006869659453636269 quantized to 16 (current 32)
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:21 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:03:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d734d895c82410d889b2d117e4e4de5869dfd023c73ed8e5614e3fc64a0d3650/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:21 np0005558241 podman[380566]: 2025-12-13 09:03:21.312594389 +0000 UTC m=+0.158827340 container init 0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:03:21 np0005558241 podman[380566]: 2025-12-13 09:03:21.320825265 +0000 UTC m=+0.167058216 container start 0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 04:03:21 np0005558241 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[380581]: [NOTICE]   (380585) : New worker (380587) forked
Dec 13 04:03:21 np0005558241 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[380581]: [NOTICE]   (380585) : Loading success.
Dec 13 04:03:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:03:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1434025' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.445 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.454 248514 DEBUG nova.compute.provider_tree [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.475 248514 DEBUG nova.scheduler.client.report [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.507 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.509 248514 DEBUG nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.565 248514 DEBUG nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.570 248514 DEBUG nova.network.neutron [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.595 248514 INFO nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.616 248514 DEBUG nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.738 248514 DEBUG nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.740 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.740 248514 INFO nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Creating image(s)#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.769 248514 DEBUG nova.storage.rbd_utils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.799 248514 DEBUG nova.storage.rbd_utils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.829 248514 DEBUG nova.storage.rbd_utils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.835 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.879 248514 DEBUG nova.policy [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.916 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.917 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.918 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.918 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.947 248514 DEBUG nova.storage.rbd_utils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:21 np0005558241 nova_compute[248510]: 2025-12-13 09:03:21.953 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3068: 321 pgs: 321 active+clean; 88 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.498 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.614 248514 DEBUG nova.storage.rbd_utils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.729 248514 DEBUG nova.network.neutron [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Successfully created port: 90d7401e-210b-4cbe-b93d-853787434352 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.743 248514 DEBUG nova.compute.manager [req-6a61fdbb-bdd9-4816-8ce1-1bed7fc7ad3a req-3e3c5dbe-4909-440e-bd81-4c41ef35ae73 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.744 248514 DEBUG oslo_concurrency.lockutils [req-6a61fdbb-bdd9-4816-8ce1-1bed7fc7ad3a req-3e3c5dbe-4909-440e-bd81-4c41ef35ae73 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.744 248514 DEBUG oslo_concurrency.lockutils [req-6a61fdbb-bdd9-4816-8ce1-1bed7fc7ad3a req-3e3c5dbe-4909-440e-bd81-4c41ef35ae73 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.745 248514 DEBUG oslo_concurrency.lockutils [req-6a61fdbb-bdd9-4816-8ce1-1bed7fc7ad3a req-3e3c5dbe-4909-440e-bd81-4c41ef35ae73 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.745 248514 DEBUG nova.compute.manager [req-6a61fdbb-bdd9-4816-8ce1-1bed7fc7ad3a req-3e3c5dbe-4909-440e-bd81-4c41ef35ae73 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] No waiting events found dispatching network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.746 248514 WARNING nova.compute.manager [req-6a61fdbb-bdd9-4816-8ce1-1bed7fc7ad3a req-3e3c5dbe-4909-440e-bd81-4c41ef35ae73 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received unexpected event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.794 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.798 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.805 248514 DEBUG nova.objects.instance [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.837 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.837 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.838 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.838 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.838 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.883 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.884 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Ensure instance console log exists: /var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.884 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.885 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:22 np0005558241 nova_compute[248510]: 2025-12-13 09:03:22.885 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.057 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:03:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/947359066' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.468 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.535 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.535 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.618 248514 DEBUG nova.network.neutron [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Successfully updated port: 90d7401e-210b-4cbe-b93d-853787434352 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.620 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:23 np0005558241 NetworkManager[50376]: <info>  [1765616603.6212] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/556)
Dec 13 04:03:23 np0005558241 NetworkManager[50376]: <info>  [1765616603.6223] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/557)
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.634 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.635 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.635 248514 DEBUG nova.network.neutron [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.705 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:23Z|01352|binding|INFO|Releasing lport 2b4706f8-0b78-43d8-a3df-6327712b725d from this chassis (sb_readonly=0)
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.713 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.731 248514 DEBUG nova.compute.manager [req-d0c5d8c4-a371-4ae1-bb24-53a4f3764747 req-b584681d-9236-45a2-a0ad-015cad2d00ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-changed-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.731 248514 DEBUG nova.compute.manager [req-d0c5d8c4-a371-4ae1-bb24-53a4f3764747 req-b584681d-9236-45a2-a0ad-015cad2d00ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing instance network info cache due to event network-changed-90d7401e-210b-4cbe-b93d-853787434352. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.731 248514 DEBUG oslo_concurrency.lockutils [req-d0c5d8c4-a371-4ae1-bb24-53a4f3764747 req-b584681d-9236-45a2-a0ad-015cad2d00ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.782 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.783 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3326MB free_disk=59.96666970383376GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.784 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.784 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.840 248514 DEBUG nova.network.neutron [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:03:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:03:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:03:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.871 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance f5f29271-ff94-4d88-bc99-1cfc3e1128a0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.871 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.872 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.872 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.943 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:23 np0005558241 nova_compute[248510]: 2025-12-13 09:03:23.997 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:24 np0005558241 podman[381018]: 2025-12-13 09:03:24.358719087 +0000 UTC m=+0.078675132 container create 099ba77fe91eee1b7c5421a175cfe3577a62696aabb14ad557df0b77751c8b18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:03:24 np0005558241 systemd[1]: Started libpod-conmon-099ba77fe91eee1b7c5421a175cfe3577a62696aabb14ad557df0b77751c8b18.scope.
Dec 13 04:03:24 np0005558241 podman[381018]: 2025-12-13 09:03:24.325654199 +0000 UTC m=+0.045610274 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:03:24 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:03:24 np0005558241 podman[381018]: 2025-12-13 09:03:24.48295191 +0000 UTC m=+0.202907985 container init 099ba77fe91eee1b7c5421a175cfe3577a62696aabb14ad557df0b77751c8b18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_moser, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 04:03:24 np0005558241 podman[381018]: 2025-12-13 09:03:24.49453208 +0000 UTC m=+0.214488125 container start 099ba77fe91eee1b7c5421a175cfe3577a62696aabb14ad557df0b77751c8b18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_moser, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 04:03:24 np0005558241 podman[381018]: 2025-12-13 09:03:24.500140371 +0000 UTC m=+0.220096416 container attach 099ba77fe91eee1b7c5421a175cfe3577a62696aabb14ad557df0b77751c8b18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:03:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3069: 321 pgs: 321 active+clean; 134 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 125 op/s
Dec 13 04:03:24 np0005558241 nostalgic_moser[381033]: 167 167
Dec 13 04:03:24 np0005558241 systemd[1]: libpod-099ba77fe91eee1b7c5421a175cfe3577a62696aabb14ad557df0b77751c8b18.scope: Deactivated successfully.
Dec 13 04:03:24 np0005558241 podman[381018]: 2025-12-13 09:03:24.505049424 +0000 UTC m=+0.225005459 container died 099ba77fe91eee1b7c5421a175cfe3577a62696aabb14ad557df0b77751c8b18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:03:24 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ba511b0ac0e4bd3bca49bf653e631c5739386e58244956da56dd2f791bfff990-merged.mount: Deactivated successfully.
Dec 13 04:03:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:03:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637367313' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:03:24 np0005558241 podman[381018]: 2025-12-13 09:03:24.573787126 +0000 UTC m=+0.293743171 container remove 099ba77fe91eee1b7c5421a175cfe3577a62696aabb14ad557df0b77751c8b18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_moser, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 04:03:24 np0005558241 nova_compute[248510]: 2025-12-13 09:03:24.577 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:24 np0005558241 nova_compute[248510]: 2025-12-13 09:03:24.589 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:03:24 np0005558241 systemd[1]: libpod-conmon-099ba77fe91eee1b7c5421a175cfe3577a62696aabb14ad557df0b77751c8b18.scope: Deactivated successfully.
Dec 13 04:03:24 np0005558241 nova_compute[248510]: 2025-12-13 09:03:24.632 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:03:24 np0005558241 nova_compute[248510]: 2025-12-13 09:03:24.687 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:03:24 np0005558241 nova_compute[248510]: 2025-12-13 09:03:24.688 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:24 np0005558241 podman[381062]: 2025-12-13 09:03:24.792852944 +0000 UTC m=+0.044224109 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.013 248514 DEBUG nova.compute.manager [req-7eec11ec-2b79-4c76-a1e0-1b749e7e8915 req-f875f422-857c-4ab3-a737-c52f03d8be2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-changed-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.013 248514 DEBUG nova.compute.manager [req-7eec11ec-2b79-4c76-a1e0-1b749e7e8915 req-f875f422-857c-4ab3-a737-c52f03d8be2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Refreshing instance network info cache due to event network-changed-daddb809-b36f-4f29-bd15-459cfd21a812. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.013 248514 DEBUG oslo_concurrency.lockutils [req-7eec11ec-2b79-4c76-a1e0-1b749e7e8915 req-f875f422-857c-4ab3-a737-c52f03d8be2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.014 248514 DEBUG oslo_concurrency.lockutils [req-7eec11ec-2b79-4c76-a1e0-1b749e7e8915 req-f875f422-857c-4ab3-a737-c52f03d8be2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.014 248514 DEBUG nova.network.neutron [req-7eec11ec-2b79-4c76-a1e0-1b749e7e8915 req-f875f422-857c-4ab3-a737-c52f03d8be2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Refreshing network info cache for port daddb809-b36f-4f29-bd15-459cfd21a812 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.158 248514 DEBUG nova.network.neutron [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updating instance_info_cache with network_info: [{"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.182 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.182 248514 DEBUG nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Instance network_info: |[{"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.184 248514 DEBUG oslo_concurrency.lockutils [req-d0c5d8c4-a371-4ae1-bb24-53a4f3764747 req-b584681d-9236-45a2-a0ad-015cad2d00ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.184 248514 DEBUG nova.network.neutron [req-d0c5d8c4-a371-4ae1-bb24-53a4f3764747 req-b584681d-9236-45a2-a0ad-015cad2d00ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing network info cache for port 90d7401e-210b-4cbe-b93d-853787434352 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.190 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Start _get_guest_xml network_info=[{"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:03:25 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:03:25 np0005558241 podman[381062]: 2025-12-13 09:03:25.312651498 +0000 UTC m=+0.564022583 container create 99935e653c81335bdd4764b5bb25102ece3bd60b06edcc7c4bc051af5d71414b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:03:25 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.319 248514 WARNING nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.337 248514 DEBUG nova.virt.libvirt.host [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.339 248514 DEBUG nova.virt.libvirt.host [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.346 248514 DEBUG nova.virt.libvirt.host [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.347 248514 DEBUG nova.virt.libvirt.host [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.348 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.348 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.349 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.349 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.350 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.350 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.350 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.351 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.351 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.352 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.352 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.352 248514 DEBUG nova.virt.hardware [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.358 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:25 np0005558241 systemd[1]: Started libpod-conmon-99935e653c81335bdd4764b5bb25102ece3bd60b06edcc7c4bc051af5d71414b.scope.
Dec 13 04:03:25 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:03:25 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d6c622031b2c89e7d273410c6446491c7c7bdac70252e68f377bcedaa6b81c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:25 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d6c622031b2c89e7d273410c6446491c7c7bdac70252e68f377bcedaa6b81c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:25 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d6c622031b2c89e7d273410c6446491c7c7bdac70252e68f377bcedaa6b81c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:25 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d6c622031b2c89e7d273410c6446491c7c7bdac70252e68f377bcedaa6b81c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:25 np0005558241 podman[381062]: 2025-12-13 09:03:25.456494122 +0000 UTC m=+0.707865207 container init 99935e653c81335bdd4764b5bb25102ece3bd60b06edcc7c4bc051af5d71414b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 04:03:25 np0005558241 podman[381062]: 2025-12-13 09:03:25.473000945 +0000 UTC m=+0.724372030 container start 99935e653c81335bdd4764b5bb25102ece3bd60b06edcc7c4bc051af5d71414b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:03:25 np0005558241 podman[381062]: 2025-12-13 09:03:25.47638401 +0000 UTC m=+0.727755115 container attach 99935e653c81335bdd4764b5bb25102ece3bd60b06edcc7c4bc051af5d71414b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_poitras, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:03:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:03:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:03:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3845884973' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:03:25 np0005558241 nova_compute[248510]: 2025-12-13 09:03:25.993 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.013 248514 DEBUG nova.storage.rbd_utils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.017 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]: [
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:    {
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:        "available": false,
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:        "being_replaced": false,
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:        "ceph_device_lvm": false,
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:        "lsm_data": {},
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:        "lvs": [],
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:        "path": "/dev/sr0",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:        "rejected_reasons": [
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "Insufficient space (<5GB)",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "Has a FileSystem"
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:        ],
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:        "sys_api": {
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "actuators": null,
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "device_nodes": [
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:                "sr0"
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            ],
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "devname": "sr0",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "human_readable_size": "482.00 KB",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "id_bus": "ata",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "model": "QEMU DVD-ROM",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "nr_requests": "2",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "parent": "/dev/sr0",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "partitions": {},
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "path": "/dev/sr0",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "removable": "1",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "rev": "2.5+",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "ro": "0",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "rotational": "1",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "sas_address": "",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "sas_device_handle": "",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "scheduler_mode": "mq-deadline",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "sectors": 0,
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "sectorsize": "2048",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "size": 493568.0,
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "support_discard": "2048",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "type": "disk",
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:            "vendor": "QEMU"
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:        }
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]:    }
Dec 13 04:03:26 np0005558241 compassionate_poitras[381079]: ]
Dec 13 04:03:26 np0005558241 systemd[1]: libpod-99935e653c81335bdd4764b5bb25102ece3bd60b06edcc7c4bc051af5d71414b.scope: Deactivated successfully.
Dec 13 04:03:26 np0005558241 podman[381884]: 2025-12-13 09:03:26.13698066 +0000 UTC m=+0.038086725 container died 99935e653c81335bdd4764b5bb25102ece3bd60b06edcc7c4bc051af5d71414b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:03:26 np0005558241 systemd[1]: var-lib-containers-storage-overlay-62d6c622031b2c89e7d273410c6446491c7c7bdac70252e68f377bcedaa6b81c-merged.mount: Deactivated successfully.
Dec 13 04:03:26 np0005558241 podman[381884]: 2025-12-13 09:03:26.18209752 +0000 UTC m=+0.083203565 container remove 99935e653c81335bdd4764b5bb25102ece3bd60b06edcc7c4bc051af5d71414b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_poitras, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 04:03:26 np0005558241 systemd[1]: libpod-conmon-99935e653c81335bdd4764b5bb25102ece3bd60b06edcc7c4bc051af5d71414b.scope: Deactivated successfully.
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:03:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3070: 321 pgs: 321 active+clean; 134 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 105 op/s
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:03:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2289559413' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.582 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.585 248514 DEBUG nova.virt.libvirt.vif [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:03:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1016404015',display_name='tempest-TestNetworkBasicOps-server-1016404015',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1016404015',id=133,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5E+jKld2VdKWtObDqeKBGvKmvekTSiiA9sjTJdF7hEcONw3irfWTSpIiwVY9k/7NWwXxkQuubngpznyfOsRWuRq1jkSBu1LNMt0g8LKC1KQLWo836n2hzpQ9ilyrcugQ==',key_name='tempest-TestNetworkBasicOps-149670649',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-i1dfr0ue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:03:21Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=19f43277-8c8d-48f2-9ec7-2b0d36a06e27,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.585 248514 DEBUG nova.network.os_vif_util [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.587 248514 DEBUG nova.network.os_vif_util [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:b4:6d,bridge_name='br-int',has_traffic_filtering=True,id=90d7401e-210b-4cbe-b93d-853787434352,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d7401e-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.589 248514 DEBUG nova.objects.instance [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.616 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:03:26 np0005558241 nova_compute[248510]:  <uuid>19f43277-8c8d-48f2-9ec7-2b0d36a06e27</uuid>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:  <name>instance-00000085</name>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkBasicOps-server-1016404015</nova:name>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:03:25</nova:creationTime>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:03:26 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:        <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:        <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:        <nova:port uuid="90d7401e-210b-4cbe-b93d-853787434352">
Dec 13 04:03:26 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <entry name="serial">19f43277-8c8d-48f2-9ec7-2b0d36a06e27</entry>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <entry name="uuid">19f43277-8c8d-48f2-9ec7-2b0d36a06e27</entry>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk">
Dec 13 04:03:26 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:03:26 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk.config">
Dec 13 04:03:26 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:03:26 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:69:b4:6d"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <target dev="tap90d7401e-21"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27/console.log" append="off"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:03:26 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:03:26 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:03:26 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:03:26 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.617 248514 DEBUG nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Preparing to wait for external event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.617 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.618 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.618 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.619 248514 DEBUG nova.virt.libvirt.vif [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:03:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1016404015',display_name='tempest-TestNetworkBasicOps-server-1016404015',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1016404015',id=133,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5E+jKld2VdKWtObDqeKBGvKmvekTSiiA9sjTJdF7hEcONw3irfWTSpIiwVY9k/7NWwXxkQuubngpznyfOsRWuRq1jkSBu1LNMt0g8LKC1KQLWo836n2hzpQ9ilyrcugQ==',key_name='tempest-TestNetworkBasicOps-149670649',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-i1dfr0ue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:03:21Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=19f43277-8c8d-48f2-9ec7-2b0d36a06e27,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.620 248514 DEBUG nova.network.os_vif_util [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.621 248514 DEBUG nova.network.os_vif_util [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:b4:6d,bridge_name='br-int',has_traffic_filtering=True,id=90d7401e-210b-4cbe-b93d-853787434352,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d7401e-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.622 248514 DEBUG os_vif [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:b4:6d,bridge_name='br-int',has_traffic_filtering=True,id=90d7401e-210b-4cbe-b93d-853787434352,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d7401e-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.623 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.624 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.625 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.631 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.631 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap90d7401e-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.632 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap90d7401e-21, col_values=(('external_ids', {'iface-id': '90d7401e-210b-4cbe-b93d-853787434352', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:b4:6d', 'vm-uuid': '19f43277-8c8d-48f2-9ec7-2b0d36a06e27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:26 np0005558241 NetworkManager[50376]: <info>  [1765616606.6366] manager: (tap90d7401e-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/558)
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.650 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.653 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.656 248514 INFO os_vif [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:b4:6d,bridge_name='br-int',has_traffic_filtering=True,id=90d7401e-210b-4cbe-b93d-853787434352,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d7401e-21')#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.662 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.662 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.713 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.714 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.714 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:69:b4:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.715 248514 INFO nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Using config drive#033[00m
Dec 13 04:03:26 np0005558241 podman[381982]: 2025-12-13 09:03:26.732962182 +0000 UTC m=+0.066164399 container create e18c394e46270d67172fa005085e8251e068f02c948c064d0d28cafd1954f039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_thompson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.765 248514 DEBUG nova.storage.rbd_utils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:26 np0005558241 systemd[1]: Started libpod-conmon-e18c394e46270d67172fa005085e8251e068f02c948c064d0d28cafd1954f039.scope.
Dec 13 04:03:26 np0005558241 podman[381982]: 2025-12-13 09:03:26.69736126 +0000 UTC m=+0.030563577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:03:26 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:03:26 np0005558241 podman[381982]: 2025-12-13 09:03:26.831581423 +0000 UTC m=+0.164783650 container init e18c394e46270d67172fa005085e8251e068f02c948c064d0d28cafd1954f039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_thompson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:03:26 np0005558241 podman[381982]: 2025-12-13 09:03:26.842014984 +0000 UTC m=+0.175217201 container start e18c394e46270d67172fa005085e8251e068f02c948c064d0d28cafd1954f039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 04:03:26 np0005558241 podman[381982]: 2025-12-13 09:03:26.845669096 +0000 UTC m=+0.178871323 container attach e18c394e46270d67172fa005085e8251e068f02c948c064d0d28cafd1954f039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_thompson, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 04:03:26 np0005558241 busy_thompson[382016]: 167 167
Dec 13 04:03:26 np0005558241 systemd[1]: libpod-e18c394e46270d67172fa005085e8251e068f02c948c064d0d28cafd1954f039.scope: Deactivated successfully.
Dec 13 04:03:26 np0005558241 podman[381982]: 2025-12-13 09:03:26.848410154 +0000 UTC m=+0.181612371 container died e18c394e46270d67172fa005085e8251e068f02c948c064d0d28cafd1954f039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_thompson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 04:03:26 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d750da6e8095c084b0289c5babe8864245de2e7d98ba91d06411a002f436c877-merged.mount: Deactivated successfully.
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.882 248514 DEBUG nova.network.neutron [req-7eec11ec-2b79-4c76-a1e0-1b749e7e8915 req-f875f422-857c-4ab3-a737-c52f03d8be2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Updated VIF entry in instance network info cache for port daddb809-b36f-4f29-bd15-459cfd21a812. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.883 248514 DEBUG nova.network.neutron [req-7eec11ec-2b79-4c76-a1e0-1b749e7e8915 req-f875f422-857c-4ab3-a737-c52f03d8be2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Updating instance_info_cache with network_info: [{"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:03:26 np0005558241 podman[381982]: 2025-12-13 09:03:26.889049243 +0000 UTC m=+0.222251490 container remove e18c394e46270d67172fa005085e8251e068f02c948c064d0d28cafd1954f039 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 04:03:26 np0005558241 nova_compute[248510]: 2025-12-13 09:03:26.908 248514 DEBUG oslo_concurrency.lockutils [req-7eec11ec-2b79-4c76-a1e0-1b749e7e8915 req-f875f422-857c-4ab3-a737-c52f03d8be2e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:03:26 np0005558241 systemd[1]: libpod-conmon-e18c394e46270d67172fa005085e8251e068f02c948c064d0d28cafd1954f039.scope: Deactivated successfully.
Dec 13 04:03:27 np0005558241 nova_compute[248510]: 2025-12-13 09:03:27.029 248514 DEBUG nova.network.neutron [req-d0c5d8c4-a371-4ae1-bb24-53a4f3764747 req-b584681d-9236-45a2-a0ad-015cad2d00ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updated VIF entry in instance network info cache for port 90d7401e-210b-4cbe-b93d-853787434352. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:03:27 np0005558241 nova_compute[248510]: 2025-12-13 09:03:27.030 248514 DEBUG nova.network.neutron [req-d0c5d8c4-a371-4ae1-bb24-53a4f3764747 req-b584681d-9236-45a2-a0ad-015cad2d00ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updating instance_info_cache with network_info: [{"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:03:27 np0005558241 nova_compute[248510]: 2025-12-13 09:03:27.054 248514 DEBUG oslo_concurrency.lockutils [req-d0c5d8c4-a371-4ae1-bb24-53a4f3764747 req-b584681d-9236-45a2-a0ad-015cad2d00ef 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:03:27 np0005558241 podman[382039]: 2025-12-13 09:03:27.14400377 +0000 UTC m=+0.055000989 container create 6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_chaplygin, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:03:27 np0005558241 nova_compute[248510]: 2025-12-13 09:03:27.175 248514 INFO nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Creating config drive at /var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27/disk.config#033[00m
Dec 13 04:03:27 np0005558241 systemd[1]: Started libpod-conmon-6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3.scope.
Dec 13 04:03:27 np0005558241 nova_compute[248510]: 2025-12-13 09:03:27.186 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu0u_9rke execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:27 np0005558241 podman[382039]: 2025-12-13 09:03:27.118926572 +0000 UTC m=+0.029923781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:03:27 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:03:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9a1ae178c29724a611fa06eb4a4e6aba666abadfd249f4605e32988aaeecab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9a1ae178c29724a611fa06eb4a4e6aba666abadfd249f4605e32988aaeecab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9a1ae178c29724a611fa06eb4a4e6aba666abadfd249f4605e32988aaeecab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9a1ae178c29724a611fa06eb4a4e6aba666abadfd249f4605e32988aaeecab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9a1ae178c29724a611fa06eb4a4e6aba666abadfd249f4605e32988aaeecab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:27 np0005558241 podman[382039]: 2025-12-13 09:03:27.252670933 +0000 UTC m=+0.163668142 container init 6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_chaplygin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 04:03:27 np0005558241 podman[382039]: 2025-12-13 09:03:27.264547201 +0000 UTC m=+0.175544380 container start 6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 04:03:27 np0005558241 podman[382039]: 2025-12-13 09:03:27.268234943 +0000 UTC m=+0.179232142 container attach 6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_chaplygin, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:03:27 np0005558241 nova_compute[248510]: 2025-12-13 09:03:27.348 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu0u_9rke" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:27 np0005558241 nova_compute[248510]: 2025-12-13 09:03:27.387 248514 DEBUG nova.storage.rbd_utils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:27 np0005558241 nova_compute[248510]: 2025-12-13 09:03:27.392 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27/disk.config 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:27 np0005558241 nova_compute[248510]: 2025-12-13 09:03:27.561 248514 DEBUG oslo_concurrency.processutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27/disk.config 19f43277-8c8d-48f2-9ec7-2b0d36a06e27_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:27 np0005558241 nova_compute[248510]: 2025-12-13 09:03:27.562 248514 INFO nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Deleting local config drive /var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27/disk.config because it was imported into RBD.#033[00m
Dec 13 04:03:27 np0005558241 kernel: tap90d7401e-21: entered promiscuous mode
Dec 13 04:03:27 np0005558241 NetworkManager[50376]: <info>  [1765616607.6145] manager: (tap90d7401e-21): new Tun device (/org/freedesktop/NetworkManager/Devices/559)
Dec 13 04:03:27 np0005558241 nova_compute[248510]: 2025-12-13 09:03:27.662 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:27 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:27Z|01353|binding|INFO|Claiming lport 90d7401e-210b-4cbe-b93d-853787434352 for this chassis.
Dec 13 04:03:27 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:27Z|01354|binding|INFO|90d7401e-210b-4cbe-b93d-853787434352: Claiming fa:16:3e:69:b4:6d 10.100.0.14
Dec 13 04:03:27 np0005558241 systemd-udevd[382117]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:03:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.668 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:b4:6d 10.100.0.14'], port_security=['fa:16:3e:69:b4:6d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '19f43277-8c8d-48f2-9ec7-2b0d36a06e27', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42e759fc-c2e4-4c38-92ea-ac51e44d350f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f130a7c-de13-4843-94c3-b6b246bfa796, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=90d7401e-210b-4cbe-b93d-853787434352) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:03:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.669 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 90d7401e-210b-4cbe-b93d-853787434352 in datapath c2c4278e-24ed-4454-ae7e-9f1ba7df516c bound to our chassis#033[00m
Dec 13 04:03:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.671 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c2c4278e-24ed-4454-ae7e-9f1ba7df516c#033[00m
Dec 13 04:03:27 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:27Z|01355|binding|INFO|Setting lport 90d7401e-210b-4cbe-b93d-853787434352 ovn-installed in OVS
Dec 13 04:03:27 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:27Z|01356|binding|INFO|Setting lport 90d7401e-210b-4cbe-b93d-853787434352 up in Southbound
Dec 13 04:03:27 np0005558241 nova_compute[248510]: 2025-12-13 09:03:27.677 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.686 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce65bf5-a28a-4d8c-98c1-1a5d094ac3d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.687 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc2c4278e-21 in ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:03:27 np0005558241 NetworkManager[50376]: <info>  [1765616607.6900] device (tap90d7401e-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:03:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.689 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc2c4278e-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:03:27 np0005558241 NetworkManager[50376]: <info>  [1765616607.6911] device (tap90d7401e-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:03:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.689 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[986fab41-13dc-4bde-98d9-25e78476554c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.690 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f47d972e-35df-44fe-b2a6-ccaafc06d3cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:27 np0005558241 systemd-machined[210538]: New machine qemu-162-instance-00000085.
Dec 13 04:03:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.702 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[993179a7-4b2a-4e5c-a779-54d67ec6536c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:27 np0005558241 systemd[1]: Started Virtual Machine qemu-162-instance-00000085.
Dec 13 04:03:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.721 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2162e4fd-c8db-4ab6-b5fc-b835e39b4ea2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.773 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[311278bb-05da-4d72-9cda-5fa8567a7c77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.782 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d09e80-271f-4a28-9fb8-ad5424b51966]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:27 np0005558241 NetworkManager[50376]: <info>  [1765616607.7828] manager: (tapc2c4278e-20): new Veth device (/org/freedesktop/NetworkManager/Devices/560)
Dec 13 04:03:27 np0005558241 great_chaplygin[382056]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:03:27 np0005558241 great_chaplygin[382056]: --> All data devices are unavailable
Dec 13 04:03:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.851 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[280d359c-14ed-4ce3-a669-b667d7be478d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.855 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ed94591e-2878-4323-a6a0-aa361dcb9b51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:27 np0005558241 systemd[1]: libpod-6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3.scope: Deactivated successfully.
Dec 13 04:03:27 np0005558241 conmon[382056]: conmon 6226b5a06f452c16c681 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3.scope/container/memory.events
Dec 13 04:03:27 np0005558241 podman[382039]: 2025-12-13 09:03:27.859505487 +0000 UTC m=+0.770502686 container died 6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_chaplygin, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:03:27 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ce9a1ae178c29724a611fa06eb4a4e6aba666abadfd249f4605e32988aaeecab-merged.mount: Deactivated successfully.
Dec 13 04:03:27 np0005558241 NetworkManager[50376]: <info>  [1765616607.8995] device (tapc2c4278e-20): carrier: link connected
Dec 13 04:03:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.905 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c4a0e9-36ab-4ee5-abf6-c550b49645d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:27 np0005558241 podman[382039]: 2025-12-13 09:03:27.910925605 +0000 UTC m=+0.821922784 container remove 6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_chaplygin, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 04:03:27 np0005558241 systemd[1]: libpod-conmon-6226b5a06f452c16c681e18e4b34f0ad4beae922ab885bbd5f11e4bee85f10b3.scope: Deactivated successfully.
Dec 13 04:03:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.927 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[55d208dc-df8f-4204-b1c6-729c5fea5787]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2c4278e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:a6:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 392], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 918511, 'reachable_time': 16376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382171, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.942 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc2e4cf-89ea-4c5d-a4cc-9e767ff2bd9b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:a650'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 918511, 'tstamp': 918511}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382172, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.958 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6206ba35-d117-41d7-bc69-c08f76edc884]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2c4278e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:a6:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 392], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 918511, 'reachable_time': 16376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382173, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:27.993 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0d48d4c0-d053-419a-97f3-3606362b9689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:28 np0005558241 nova_compute[248510]: 2025-12-13 09:03:28.059 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:28.081 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0045352b-7450-404f-b2a6-7a695ee65e73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:28.085 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2c4278e-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:28.085 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:28.086 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2c4278e-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:28 np0005558241 kernel: tapc2c4278e-20: entered promiscuous mode
Dec 13 04:03:28 np0005558241 nova_compute[248510]: 2025-12-13 09:03:28.088 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:28 np0005558241 NetworkManager[50376]: <info>  [1765616608.0898] manager: (tapc2c4278e-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/561)
Dec 13 04:03:28 np0005558241 nova_compute[248510]: 2025-12-13 09:03:28.091 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:28.095 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc2c4278e-20, col_values=(('external_ids', {'iface-id': 'fb6127ed-4b20-49d3-8950-376b6e1d5999'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:28 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:28Z|01357|binding|INFO|Releasing lport fb6127ed-4b20-49d3-8950-376b6e1d5999 from this chassis (sb_readonly=0)
Dec 13 04:03:28 np0005558241 nova_compute[248510]: 2025-12-13 09:03:28.097 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:28 np0005558241 nova_compute[248510]: 2025-12-13 09:03:28.098 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:28.099 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c2c4278e-24ed-4454-ae7e-9f1ba7df516c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c2c4278e-24ed-4454-ae7e-9f1ba7df516c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:28.100 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[27bf95f2-812f-4f83-8617-551a1df58c6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:28.101 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-c2c4278e-24ed-4454-ae7e-9f1ba7df516c
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/c2c4278e-24ed-4454-ae7e-9f1ba7df516c.pid.haproxy
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID c2c4278e-24ed-4454-ae7e-9f1ba7df516c
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:03:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:28.103 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'env', 'PROCESS_TAG=haproxy-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c2c4278e-24ed-4454-ae7e-9f1ba7df516c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:03:28 np0005558241 nova_compute[248510]: 2025-12-13 09:03:28.112 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:28 np0005558241 podman[382243]: 2025-12-13 09:03:28.378736836 +0000 UTC m=+0.044524806 container create 7eeacd20a1bd303a56ad664dfa83df97baa02a5e82afbfd61840fcb43b399438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:03:28 np0005558241 systemd[1]: Started libpod-conmon-7eeacd20a1bd303a56ad664dfa83df97baa02a5e82afbfd61840fcb43b399438.scope.
Dec 13 04:03:28 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:03:28 np0005558241 podman[382243]: 2025-12-13 09:03:28.358815877 +0000 UTC m=+0.024603847 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:03:28 np0005558241 podman[382243]: 2025-12-13 09:03:28.464507765 +0000 UTC m=+0.130295755 container init 7eeacd20a1bd303a56ad664dfa83df97baa02a5e82afbfd61840fcb43b399438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_grothendieck, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:03:28 np0005558241 podman[382243]: 2025-12-13 09:03:28.480101116 +0000 UTC m=+0.145889086 container start 7eeacd20a1bd303a56ad664dfa83df97baa02a5e82afbfd61840fcb43b399438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:03:28 np0005558241 podman[382243]: 2025-12-13 09:03:28.486039365 +0000 UTC m=+0.151827335 container attach 7eeacd20a1bd303a56ad664dfa83df97baa02a5e82afbfd61840fcb43b399438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 04:03:28 np0005558241 reverent_grothendieck[382293]: 167 167
Dec 13 04:03:28 np0005558241 systemd[1]: libpod-7eeacd20a1bd303a56ad664dfa83df97baa02a5e82afbfd61840fcb43b399438.scope: Deactivated successfully.
Dec 13 04:03:28 np0005558241 podman[382243]: 2025-12-13 09:03:28.488026074 +0000 UTC m=+0.153814044 container died 7eeacd20a1bd303a56ad664dfa83df97baa02a5e82afbfd61840fcb43b399438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_grothendieck, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 04:03:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3071: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 108 op/s
Dec 13 04:03:28 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7ca9977e371dd94af47d37ffd3dc583b9b073f830b05dabdce9b9c4700f89245-merged.mount: Deactivated successfully.
Dec 13 04:03:28 np0005558241 podman[382243]: 2025-12-13 09:03:28.532677183 +0000 UTC m=+0.198465153 container remove 7eeacd20a1bd303a56ad664dfa83df97baa02a5e82afbfd61840fcb43b399438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:03:28 np0005558241 systemd[1]: libpod-conmon-7eeacd20a1bd303a56ad664dfa83df97baa02a5e82afbfd61840fcb43b399438.scope: Deactivated successfully.
Dec 13 04:03:28 np0005558241 podman[382323]: 2025-12-13 09:03:28.556169282 +0000 UTC m=+0.066823286 container create 097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:03:28 np0005558241 nova_compute[248510]: 2025-12-13 09:03:28.563 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616608.5626156, 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:03:28 np0005558241 nova_compute[248510]: 2025-12-13 09:03:28.568 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] VM Started (Lifecycle Event)#033[00m
Dec 13 04:03:28 np0005558241 nova_compute[248510]: 2025-12-13 09:03:28.593 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:03:28 np0005558241 nova_compute[248510]: 2025-12-13 09:03:28.597 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616608.5627651, 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:03:28 np0005558241 nova_compute[248510]: 2025-12-13 09:03:28.597 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:03:28 np0005558241 systemd[1]: Started libpod-conmon-097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd.scope.
Dec 13 04:03:28 np0005558241 podman[382323]: 2025-12-13 09:03:28.518718703 +0000 UTC m=+0.029372737 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:03:28 np0005558241 nova_compute[248510]: 2025-12-13 09:03:28.620 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:03:28 np0005558241 nova_compute[248510]: 2025-12-13 09:03:28.634 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:03:28 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:03:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6230ed043b200e92eb147763a359dfae1b35945024fa55c5518b190e8cc8fc9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:28 np0005558241 nova_compute[248510]: 2025-12-13 09:03:28.655 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:03:28 np0005558241 podman[382323]: 2025-12-13 09:03:28.665641564 +0000 UTC m=+0.176295618 container init 097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 04:03:28 np0005558241 podman[382323]: 2025-12-13 09:03:28.670913427 +0000 UTC m=+0.181567461 container start 097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 04:03:28 np0005558241 neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c[382353]: [NOTICE]   (382364) : New worker (382377) forked
Dec 13 04:03:28 np0005558241 neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c[382353]: [NOTICE]   (382364) : Loading success.
Dec 13 04:03:28 np0005558241 podman[382361]: 2025-12-13 09:03:28.745098965 +0000 UTC m=+0.070647351 container create f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:03:28 np0005558241 nova_compute[248510]: 2025-12-13 09:03:28.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:03:28 np0005558241 nova_compute[248510]: 2025-12-13 09:03:28.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:03:28 np0005558241 podman[382361]: 2025-12-13 09:03:28.70699477 +0000 UTC m=+0.032543196 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:03:28 np0005558241 systemd[1]: Started libpod-conmon-f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348.scope.
Dec 13 04:03:28 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:03:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b713d3ae5171aeb4c77dd90220f207f254f737261cfa9be648806096b5d7d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b713d3ae5171aeb4c77dd90220f207f254f737261cfa9be648806096b5d7d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b713d3ae5171aeb4c77dd90220f207f254f737261cfa9be648806096b5d7d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b713d3ae5171aeb4c77dd90220f207f254f737261cfa9be648806096b5d7d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:28 np0005558241 podman[382361]: 2025-12-13 09:03:28.864148778 +0000 UTC m=+0.189697184 container init f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 04:03:28 np0005558241 podman[382361]: 2025-12-13 09:03:28.874324493 +0000 UTC m=+0.199872919 container start f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:03:28 np0005558241 podman[382361]: 2025-12-13 09:03:28.878708443 +0000 UTC m=+0.204256829 container attach f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]: {
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:    "0": [
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:        {
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "devices": [
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "/dev/loop3"
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            ],
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "lv_name": "ceph_lv0",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "lv_size": "21470642176",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "name": "ceph_lv0",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "tags": {
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.cluster_name": "ceph",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.crush_device_class": "",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.encrypted": "0",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.objectstore": "bluestore",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.osd_id": "0",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.type": "block",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.vdo": "0",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.with_tpm": "0"
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            },
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "type": "block",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "vg_name": "ceph_vg0"
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:        }
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:    ],
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:    "1": [
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:        {
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "devices": [
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "/dev/loop4"
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            ],
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "lv_name": "ceph_lv1",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "lv_size": "21470642176",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "name": "ceph_lv1",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "tags": {
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.cluster_name": "ceph",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.crush_device_class": "",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.encrypted": "0",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.objectstore": "bluestore",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.osd_id": "1",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.type": "block",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.vdo": "0",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.with_tpm": "0"
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            },
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "type": "block",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "vg_name": "ceph_vg1"
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:        }
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:    ],
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:    "2": [
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:        {
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "devices": [
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "/dev/loop5"
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            ],
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "lv_name": "ceph_lv2",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "lv_size": "21470642176",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "name": "ceph_lv2",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "tags": {
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.cluster_name": "ceph",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.crush_device_class": "",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.encrypted": "0",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.objectstore": "bluestore",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.osd_id": "2",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.type": "block",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.vdo": "0",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:                "ceph.with_tpm": "0"
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            },
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "type": "block",
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:            "vg_name": "ceph_vg2"
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:        }
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]:    ]
Dec 13 04:03:29 np0005558241 jovial_franklin[382390]: }
Dec 13 04:03:29 np0005558241 systemd[1]: libpod-f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348.scope: Deactivated successfully.
Dec 13 04:03:29 np0005558241 conmon[382390]: conmon f30cf2ea2209ea199fef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348.scope/container/memory.events
Dec 13 04:03:29 np0005558241 podman[382361]: 2025-12-13 09:03:29.214872195 +0000 UTC m=+0.540420641 container died f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_franklin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 04:03:29 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d9b713d3ae5171aeb4c77dd90220f207f254f737261cfa9be648806096b5d7d7-merged.mount: Deactivated successfully.
Dec 13 04:03:29 np0005558241 podman[382361]: 2025-12-13 09:03:29.262293773 +0000 UTC m=+0.587842149 container remove f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:03:29 np0005558241 systemd[1]: libpod-conmon-f30cf2ea2209ea199fef9a6364d1e6e5522bf927d657249a98178cdeafa51348.scope: Deactivated successfully.
Dec 13 04:03:29 np0005558241 nova_compute[248510]: 2025-12-13 09:03:29.667 248514 DEBUG nova.compute.manager [req-8298e340-4471-4323-82b2-f212657de814 req-8bf8d25e-5259-4e57-bcbc-8e186f131645 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:03:29 np0005558241 nova_compute[248510]: 2025-12-13 09:03:29.668 248514 DEBUG oslo_concurrency.lockutils [req-8298e340-4471-4323-82b2-f212657de814 req-8bf8d25e-5259-4e57-bcbc-8e186f131645 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:29 np0005558241 nova_compute[248510]: 2025-12-13 09:03:29.668 248514 DEBUG oslo_concurrency.lockutils [req-8298e340-4471-4323-82b2-f212657de814 req-8bf8d25e-5259-4e57-bcbc-8e186f131645 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:29 np0005558241 nova_compute[248510]: 2025-12-13 09:03:29.668 248514 DEBUG oslo_concurrency.lockutils [req-8298e340-4471-4323-82b2-f212657de814 req-8bf8d25e-5259-4e57-bcbc-8e186f131645 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:29 np0005558241 nova_compute[248510]: 2025-12-13 09:03:29.668 248514 DEBUG nova.compute.manager [req-8298e340-4471-4323-82b2-f212657de814 req-8bf8d25e-5259-4e57-bcbc-8e186f131645 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Processing event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:03:29 np0005558241 nova_compute[248510]: 2025-12-13 09:03:29.669 248514 DEBUG nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:03:29 np0005558241 nova_compute[248510]: 2025-12-13 09:03:29.674 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:03:29 np0005558241 nova_compute[248510]: 2025-12-13 09:03:29.678 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616609.6743977, 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:03:29 np0005558241 nova_compute[248510]: 2025-12-13 09:03:29.679 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:03:29 np0005558241 nova_compute[248510]: 2025-12-13 09:03:29.692 248514 INFO nova.virt.libvirt.driver [-] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Instance spawned successfully.#033[00m
Dec 13 04:03:29 np0005558241 nova_compute[248510]: 2025-12-13 09:03:29.693 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:03:29 np0005558241 nova_compute[248510]: 2025-12-13 09:03:29.704 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:03:29 np0005558241 nova_compute[248510]: 2025-12-13 09:03:29.714 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:03:29 np0005558241 nova_compute[248510]: 2025-12-13 09:03:29.720 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:03:29 np0005558241 nova_compute[248510]: 2025-12-13 09:03:29.720 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:03:29 np0005558241 nova_compute[248510]: 2025-12-13 09:03:29.721 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:03:29 np0005558241 nova_compute[248510]: 2025-12-13 09:03:29.722 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:03:29 np0005558241 nova_compute[248510]: 2025-12-13 09:03:29.722 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:03:29 np0005558241 nova_compute[248510]: 2025-12-13 09:03:29.723 248514 DEBUG nova.virt.libvirt.driver [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:03:29 np0005558241 podman[382474]: 2025-12-13 09:03:29.775552732 +0000 UTC m=+0.055052491 container create 6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_heyrovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:03:29 np0005558241 systemd[1]: Started libpod-conmon-6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238.scope.
Dec 13 04:03:29 np0005558241 podman[382474]: 2025-12-13 09:03:29.752795212 +0000 UTC m=+0.032295001 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:03:29 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:03:29 np0005558241 podman[382474]: 2025-12-13 09:03:29.879999119 +0000 UTC m=+0.159498908 container init 6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:03:29 np0005558241 podman[382474]: 2025-12-13 09:03:29.887748033 +0000 UTC m=+0.167247782 container start 6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_heyrovsky, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:03:29 np0005558241 podman[382474]: 2025-12-13 09:03:29.892508102 +0000 UTC m=+0.172007861 container attach 6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_heyrovsky, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:03:29 np0005558241 affectionate_heyrovsky[382490]: 167 167
Dec 13 04:03:29 np0005558241 systemd[1]: libpod-6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238.scope: Deactivated successfully.
Dec 13 04:03:29 np0005558241 conmon[382490]: conmon 6cacd9a1213363aa5b32 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238.scope/container/memory.events
Dec 13 04:03:29 np0005558241 podman[382474]: 2025-12-13 09:03:29.899429065 +0000 UTC m=+0.178928824 container died 6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:03:29 np0005558241 systemd[1]: var-lib-containers-storage-overlay-68dc095295ad6d7fc239b1dd7d7d480e9bbd23158f5794200bbd9b16e2b26fc7-merged.mount: Deactivated successfully.
Dec 13 04:03:29 np0005558241 podman[382474]: 2025-12-13 09:03:29.957517971 +0000 UTC m=+0.237017730 container remove 6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_heyrovsky, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:03:29 np0005558241 systemd[1]: libpod-conmon-6cacd9a1213363aa5b3226a1b517471c70a8ca2ed53053bbc87359181ccd2238.scope: Deactivated successfully.
Dec 13 04:03:30 np0005558241 nova_compute[248510]: 2025-12-13 09:03:30.015 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:03:30 np0005558241 nova_compute[248510]: 2025-12-13 09:03:30.061 248514 INFO nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Took 8.32 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:03:30 np0005558241 nova_compute[248510]: 2025-12-13 09:03:30.062 248514 DEBUG nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:03:30 np0005558241 podman[382512]: 2025-12-13 09:03:30.162843805 +0000 UTC m=+0.050333492 container create ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_fermat, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:03:30 np0005558241 systemd[1]: Started libpod-conmon-ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e.scope.
Dec 13 04:03:30 np0005558241 nova_compute[248510]: 2025-12-13 09:03:30.213 248514 INFO nova.compute.manager [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Took 9.55 seconds to build instance.#033[00m
Dec 13 04:03:30 np0005558241 podman[382512]: 2025-12-13 09:03:30.139443189 +0000 UTC m=+0.026932906 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:03:30 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:03:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0855d22b735a99016dde39240f2ff9ef98d062917575f69a8f5a6711f8a21ace/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0855d22b735a99016dde39240f2ff9ef98d062917575f69a8f5a6711f8a21ace/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0855d22b735a99016dde39240f2ff9ef98d062917575f69a8f5a6711f8a21ace/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0855d22b735a99016dde39240f2ff9ef98d062917575f69a8f5a6711f8a21ace/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:30 np0005558241 podman[382512]: 2025-12-13 09:03:30.267118898 +0000 UTC m=+0.154608585 container init ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 04:03:30 np0005558241 nova_compute[248510]: 2025-12-13 09:03:30.268 248514 DEBUG oslo_concurrency.lockutils [None req-563eeae4-57dd-49f6-81e7-2b021fda4ade 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:30 np0005558241 podman[382512]: 2025-12-13 09:03:30.275665892 +0000 UTC m=+0.163155549 container start ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:03:30 np0005558241 podman[382512]: 2025-12-13 09:03:30.279490508 +0000 UTC m=+0.166980225 container attach ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_fermat, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:03:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3072: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec 13 04:03:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:03:31 np0005558241 lvm[382608]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:03:31 np0005558241 lvm[382608]: VG ceph_vg0 finished
Dec 13 04:03:31 np0005558241 lvm[382609]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:03:31 np0005558241 lvm[382609]: VG ceph_vg1 finished
Dec 13 04:03:31 np0005558241 lvm[382610]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:03:31 np0005558241 lvm[382610]: VG ceph_vg2 finished
Dec 13 04:03:31 np0005558241 charming_fermat[382529]: {}
Dec 13 04:03:31 np0005558241 systemd[1]: libpod-ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e.scope: Deactivated successfully.
Dec 13 04:03:31 np0005558241 systemd[1]: libpod-ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e.scope: Consumed 1.543s CPU time.
Dec 13 04:03:31 np0005558241 podman[382512]: 2025-12-13 09:03:31.26304364 +0000 UTC m=+1.150533337 container died ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_fermat, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:03:31 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0855d22b735a99016dde39240f2ff9ef98d062917575f69a8f5a6711f8a21ace-merged.mount: Deactivated successfully.
Dec 13 04:03:31 np0005558241 podman[382512]: 2025-12-13 09:03:31.321036103 +0000 UTC m=+1.208525770 container remove ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_fermat, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 04:03:31 np0005558241 systemd[1]: libpod-conmon-ce742b935978f94d43b4adc026f0987eb389784e795d90ab8dbdf11726e7ef9e.scope: Deactivated successfully.
Dec 13 04:03:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:03:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:03:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:03:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:03:31 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:03:31 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:03:31 np0005558241 nova_compute[248510]: 2025-12-13 09:03:31.636 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:31 np0005558241 nova_compute[248510]: 2025-12-13 09:03:31.769 248514 DEBUG nova.compute.manager [req-38381ace-2e78-4951-898f-533375b7f457 req-bafb7411-1f08-429b-887c-ee0d5174e0b8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:03:31 np0005558241 nova_compute[248510]: 2025-12-13 09:03:31.771 248514 DEBUG oslo_concurrency.lockutils [req-38381ace-2e78-4951-898f-533375b7f457 req-bafb7411-1f08-429b-887c-ee0d5174e0b8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:31 np0005558241 nova_compute[248510]: 2025-12-13 09:03:31.771 248514 DEBUG oslo_concurrency.lockutils [req-38381ace-2e78-4951-898f-533375b7f457 req-bafb7411-1f08-429b-887c-ee0d5174e0b8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:31 np0005558241 nova_compute[248510]: 2025-12-13 09:03:31.772 248514 DEBUG oslo_concurrency.lockutils [req-38381ace-2e78-4951-898f-533375b7f457 req-bafb7411-1f08-429b-887c-ee0d5174e0b8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:31 np0005558241 nova_compute[248510]: 2025-12-13 09:03:31.772 248514 DEBUG nova.compute.manager [req-38381ace-2e78-4951-898f-533375b7f457 req-bafb7411-1f08-429b-887c-ee0d5174e0b8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] No waiting events found dispatching network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:03:31 np0005558241 nova_compute[248510]: 2025-12-13 09:03:31.773 248514 WARNING nova.compute.manager [req-38381ace-2e78-4951-898f-533375b7f457 req-bafb7411-1f08-429b-887c-ee0d5174e0b8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received unexpected event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:03:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3073: 321 pgs: 321 active+clean; 134 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec 13 04:03:33 np0005558241 nova_compute[248510]: 2025-12-13 09:03:33.061 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:34 np0005558241 nova_compute[248510]: 2025-12-13 09:03:34.277 248514 DEBUG nova.compute.manager [req-2fde0247-5b80-4c6c-995c-9278616b1e27 req-2e0cf79d-0004-4722-97be-df4a566be081 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-changed-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:03:34 np0005558241 nova_compute[248510]: 2025-12-13 09:03:34.277 248514 DEBUG nova.compute.manager [req-2fde0247-5b80-4c6c-995c-9278616b1e27 req-2e0cf79d-0004-4722-97be-df4a566be081 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing instance network info cache due to event network-changed-90d7401e-210b-4cbe-b93d-853787434352. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:03:34 np0005558241 nova_compute[248510]: 2025-12-13 09:03:34.277 248514 DEBUG oslo_concurrency.lockutils [req-2fde0247-5b80-4c6c-995c-9278616b1e27 req-2e0cf79d-0004-4722-97be-df4a566be081 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:03:34 np0005558241 nova_compute[248510]: 2025-12-13 09:03:34.278 248514 DEBUG oslo_concurrency.lockutils [req-2fde0247-5b80-4c6c-995c-9278616b1e27 req-2e0cf79d-0004-4722-97be-df4a566be081 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:03:34 np0005558241 nova_compute[248510]: 2025-12-13 09:03:34.278 248514 DEBUG nova.network.neutron [req-2fde0247-5b80-4c6c-995c-9278616b1e27 req-2e0cf79d-0004-4722-97be-df4a566be081 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing network info cache for port 90d7401e-210b-4cbe-b93d-853787434352 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:03:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3074: 321 pgs: 321 active+clean; 162 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 224 op/s
Dec 13 04:03:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:34Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:4e:45 10.100.0.11
Dec 13 04:03:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:34Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:4e:45 10.100.0.11
Dec 13 04:03:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:03:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3075: 321 pgs: 321 active+clean; 162 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Dec 13 04:03:36 np0005558241 nova_compute[248510]: 2025-12-13 09:03:36.641 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:36 np0005558241 nova_compute[248510]: 2025-12-13 09:03:36.717 248514 DEBUG nova.network.neutron [req-2fde0247-5b80-4c6c-995c-9278616b1e27 req-2e0cf79d-0004-4722-97be-df4a566be081 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updated VIF entry in instance network info cache for port 90d7401e-210b-4cbe-b93d-853787434352. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:03:36 np0005558241 nova_compute[248510]: 2025-12-13 09:03:36.718 248514 DEBUG nova.network.neutron [req-2fde0247-5b80-4c6c-995c-9278616b1e27 req-2e0cf79d-0004-4722-97be-df4a566be081 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updating instance_info_cache with network_info: [{"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:03:36 np0005558241 nova_compute[248510]: 2025-12-13 09:03:36.750 248514 DEBUG oslo_concurrency.lockutils [req-2fde0247-5b80-4c6c-995c-9278616b1e27 req-2e0cf79d-0004-4722-97be-df4a566be081 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:03:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:37.188 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:03:37 np0005558241 nova_compute[248510]: 2025-12-13 09:03:37.190 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:37.191 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:03:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:37.192 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:38 np0005558241 nova_compute[248510]: 2025-12-13 09:03:38.063 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3076: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Dec 13 04:03:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:03:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:03:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:03:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:03:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:03:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:03:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3077: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec 13 04:03:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:03:41 np0005558241 podman[382650]: 2025-12-13 09:03:41.01584819 +0000 UTC m=+0.095347820 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd)
Dec 13 04:03:41 np0005558241 podman[382651]: 2025-12-13 09:03:41.027275456 +0000 UTC m=+0.097260828 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 13 04:03:41 np0005558241 podman[382649]: 2025-12-13 09:03:41.100915931 +0000 UTC m=+0.176492373 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:03:41 np0005558241 nova_compute[248510]: 2025-12-13 09:03:41.643 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3078: 321 pgs: 321 active+clean; 167 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Dec 13 04:03:42 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:42Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:b4:6d 10.100.0.14
Dec 13 04:03:42 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:42Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:b4:6d 10.100.0.14
Dec 13 04:03:42 np0005558241 nova_compute[248510]: 2025-12-13 09:03:42.776 248514 INFO nova.compute.manager [None req-b721c43c-1d8d-4da5-ba7a-0e0d6c792c3b a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Get console output#033[00m
Dec 13 04:03:42 np0005558241 nova_compute[248510]: 2025-12-13 09:03:42.784 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 04:03:43 np0005558241 nova_compute[248510]: 2025-12-13 09:03:43.057 248514 DEBUG oslo_concurrency.lockutils [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:43 np0005558241 nova_compute[248510]: 2025-12-13 09:03:43.057 248514 DEBUG oslo_concurrency.lockutils [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:43 np0005558241 nova_compute[248510]: 2025-12-13 09:03:43.058 248514 DEBUG nova.compute.manager [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:03:43 np0005558241 nova_compute[248510]: 2025-12-13 09:03:43.062 248514 DEBUG nova.compute.manager [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Dec 13 04:03:43 np0005558241 nova_compute[248510]: 2025-12-13 09:03:43.063 248514 DEBUG nova.objects.instance [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'flavor' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:03:43 np0005558241 nova_compute[248510]: 2025-12-13 09:03:43.066 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:43 np0005558241 nova_compute[248510]: 2025-12-13 09:03:43.106 248514 DEBUG nova.virt.libvirt.driver [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Dec 13 04:03:44 np0005558241 nova_compute[248510]: 2025-12-13 09:03:44.060 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "c89b029c-146b-47ae-8961-0000d4e49e29" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:44 np0005558241 nova_compute[248510]: 2025-12-13 09:03:44.061 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:44 np0005558241 nova_compute[248510]: 2025-12-13 09:03:44.086 248514 DEBUG nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:03:44 np0005558241 nova_compute[248510]: 2025-12-13 09:03:44.170 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:44 np0005558241 nova_compute[248510]: 2025-12-13 09:03:44.171 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:44 np0005558241 nova_compute[248510]: 2025-12-13 09:03:44.180 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:03:44 np0005558241 nova_compute[248510]: 2025-12-13 09:03:44.180 248514 INFO nova.compute.claims [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:03:44 np0005558241 nova_compute[248510]: 2025-12-13 09:03:44.319 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3079: 321 pgs: 321 active+clean; 200 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 194 op/s
Dec 13 04:03:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:03:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3062633587' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:03:44 np0005558241 nova_compute[248510]: 2025-12-13 09:03:44.963 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:44 np0005558241 nova_compute[248510]: 2025-12-13 09:03:44.972 248514 DEBUG nova.compute.provider_tree [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.036 248514 DEBUG nova.scheduler.client.report [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.068 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.069 248514 DEBUG nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.125 248514 DEBUG nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.126 248514 DEBUG nova.network.neutron [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.158 248514 INFO nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.190 248514 DEBUG nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.289 248514 DEBUG nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.290 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.291 248514 INFO nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Creating image(s)#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.314 248514 DEBUG nova.storage.rbd_utils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image c89b029c-146b-47ae-8961-0000d4e49e29_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.348 248514 DEBUG nova.storage.rbd_utils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image c89b029c-146b-47ae-8961-0000d4e49e29_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.371 248514 DEBUG nova.storage.rbd_utils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image c89b029c-146b-47ae-8961-0000d4e49e29_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.376 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.455 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.456 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.457 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.457 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.483 248514 DEBUG nova.storage.rbd_utils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image c89b029c-146b-47ae-8961-0000d4e49e29_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.488 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c89b029c-146b-47ae-8961-0000d4e49e29_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:45 np0005558241 kernel: tapdaddb809-b3 (unregistering): left promiscuous mode
Dec 13 04:03:45 np0005558241 NetworkManager[50376]: <info>  [1765616625.5184] device (tapdaddb809-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:03:45 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:45Z|01358|binding|INFO|Releasing lport daddb809-b36f-4f29-bd15-459cfd21a812 from this chassis (sb_readonly=0)
Dec 13 04:03:45 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:45Z|01359|binding|INFO|Setting lport daddb809-b36f-4f29-bd15-459cfd21a812 down in Southbound
Dec 13 04:03:45 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:45Z|01360|binding|INFO|Removing iface tapdaddb809-b3 ovn-installed in OVS
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.534 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:45.537 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:4e:45 10.100.0.11'], port_security=['fa:16:3e:b3:4e:45 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f5f29271-ff94-4d88-bc99-1cfc3e1128a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdd30303-3917-438c-8b47-a12827c948d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '48e751cb-5eae-4702-97af-e0ce47d426b3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c018ca30-1abb-42b0-8adb-342f8b53fd8e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=daddb809-b36f-4f29-bd15-459cfd21a812) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:03:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:45.538 158419 INFO neutron.agent.ovn.metadata.agent [-] Port daddb809-b36f-4f29-bd15-459cfd21a812 in datapath cdd30303-3917-438c-8b47-a12827c948d6 unbound from our chassis#033[00m
Dec 13 04:03:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:45.540 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cdd30303-3917-438c-8b47-a12827c948d6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:03:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:45.542 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee0158f-7a08-49d2-b50c-1019f13a49a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:45.542 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 namespace which is not needed anymore#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.553 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:45 np0005558241 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000084.scope: Deactivated successfully.
Dec 13 04:03:45 np0005558241 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000084.scope: Consumed 13.428s CPU time.
Dec 13 04:03:45 np0005558241 systemd-machined[210538]: Machine qemu-161-instance-00000084 terminated.
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.741 248514 DEBUG nova.policy [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.762 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.768 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:45 np0005558241 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[380581]: [NOTICE]   (380585) : haproxy version is 2.8.14-c23fe91
Dec 13 04:03:45 np0005558241 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[380581]: [NOTICE]   (380585) : path to executable is /usr/sbin/haproxy
Dec 13 04:03:45 np0005558241 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[380581]: [WARNING]  (380585) : Exiting Master process...
Dec 13 04:03:45 np0005558241 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[380581]: [WARNING]  (380585) : Exiting Master process...
Dec 13 04:03:45 np0005558241 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[380581]: [ALERT]    (380585) : Current worker (380587) exited with code 143 (Terminated)
Dec 13 04:03:45 np0005558241 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[380581]: [WARNING]  (380585) : All workers exited. Exiting... (0)
Dec 13 04:03:45 np0005558241 systemd[1]: libpod-0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8.scope: Deactivated successfully.
Dec 13 04:03:45 np0005558241 podman[382846]: 2025-12-13 09:03:45.900985773 +0000 UTC m=+0.266448536 container died 0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.941 248514 DEBUG nova.compute.manager [req-fb30527f-2c65-4ffe-a700-9f77e379c7fc req-519018d5-9395-4f4e-84e9-3f04fe131e5d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-vif-unplugged-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.942 248514 DEBUG oslo_concurrency.lockutils [req-fb30527f-2c65-4ffe-a700-9f77e379c7fc req-519018d5-9395-4f4e-84e9-3f04fe131e5d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.942 248514 DEBUG oslo_concurrency.lockutils [req-fb30527f-2c65-4ffe-a700-9f77e379c7fc req-519018d5-9395-4f4e-84e9-3f04fe131e5d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.942 248514 DEBUG oslo_concurrency.lockutils [req-fb30527f-2c65-4ffe-a700-9f77e379c7fc req-519018d5-9395-4f4e-84e9-3f04fe131e5d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.943 248514 DEBUG nova.compute.manager [req-fb30527f-2c65-4ffe-a700-9f77e379c7fc req-519018d5-9395-4f4e-84e9-3f04fe131e5d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] No waiting events found dispatching network-vif-unplugged-daddb809-b36f-4f29-bd15-459cfd21a812 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:03:45 np0005558241 nova_compute[248510]: 2025-12-13 09:03:45.943 248514 WARNING nova.compute.manager [req-fb30527f-2c65-4ffe-a700-9f77e379c7fc req-519018d5-9395-4f4e-84e9-3f04fe131e5d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received unexpected event network-vif-unplugged-daddb809-b36f-4f29-bd15-459cfd21a812 for instance with vm_state active and task_state powering-off.#033[00m
Dec 13 04:03:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:03:46 np0005558241 nova_compute[248510]: 2025-12-13 09:03:46.142 248514 INFO nova.virt.libvirt.driver [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance shutdown successfully after 3 seconds.#033[00m
Dec 13 04:03:46 np0005558241 nova_compute[248510]: 2025-12-13 09:03:46.148 248514 INFO nova.virt.libvirt.driver [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance destroyed successfully.#033[00m
Dec 13 04:03:46 np0005558241 nova_compute[248510]: 2025-12-13 09:03:46.149 248514 DEBUG nova.objects.instance [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'numa_topology' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:03:46 np0005558241 nova_compute[248510]: 2025-12-13 09:03:46.180 248514 DEBUG nova.compute.manager [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:03:46 np0005558241 nova_compute[248510]: 2025-12-13 09:03:46.232 248514 DEBUG oslo_concurrency.lockutils [None req-41b262ee-880b-44e6-b2c8-ea0f3c46e752 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8-userdata-shm.mount: Deactivated successfully.
Dec 13 04:03:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d734d895c82410d889b2d117e4e4de5869dfd023c73ed8e5614e3fc64a0d3650-merged.mount: Deactivated successfully.
Dec 13 04:03:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3080: 321 pgs: 321 active+clean; 200 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 395 KiB/s rd, 2.2 MiB/s wr, 78 op/s
Dec 13 04:03:46 np0005558241 nova_compute[248510]: 2025-12-13 09:03:46.646 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:46 np0005558241 podman[382846]: 2025-12-13 09:03:46.680452733 +0000 UTC m=+1.045915546 container cleanup 0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:03:46 np0005558241 systemd[1]: libpod-conmon-0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8.scope: Deactivated successfully.
Dec 13 04:03:46 np0005558241 podman[382890]: 2025-12-13 09:03:46.853791886 +0000 UTC m=+0.132774918 container remove 0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:03:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:46.867 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[baf73bac-eb4a-4a1b-a817-5fc610c88eed]: (4, ('Sat Dec 13 09:03:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 (0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8)\n0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8\nSat Dec 13 09:03:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 (0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8)\n0ec21c770110030e0c904d1944fb464e8e0d3118002af2b22d166bfe125af4b8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:46.870 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b54ce545-2b2c-41ff-8b39-c49a946c90d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:46 np0005558241 nova_compute[248510]: 2025-12-13 09:03:46.870 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c89b029c-146b-47ae-8961-0000d4e49e29_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:46.871 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdd30303-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:46 np0005558241 kernel: tapcdd30303-30: left promiscuous mode
Dec 13 04:03:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:46.911 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[03b75d7d-0387-4b09-8d96-b3687ebdcfec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:46 np0005558241 nova_compute[248510]: 2025-12-13 09:03:46.912 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:46.928 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b0991a73-5988-431e-af80-c1cd54941b54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:46.930 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[830b537b-1c8d-4a40-8dec-b7e83685b765]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:46.955 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0abb3bb4-89c3-40b5-8343-8688e0b871b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 917754, 'reachable_time': 26809, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382933, 'error': None, 'target': 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:46 np0005558241 systemd[1]: run-netns-ovnmeta\x2dcdd30303\x2d3917\x2d438c\x2d8b47\x2da12827c948d6.mount: Deactivated successfully.
Dec 13 04:03:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:46.961 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:03:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:46.961 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[45c7e238-edf2-4d72-8fba-4d6b07691e55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:46 np0005558241 nova_compute[248510]: 2025-12-13 09:03:46.968 248514 DEBUG nova.storage.rbd_utils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image c89b029c-146b-47ae-8961-0000d4e49e29_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:03:47 np0005558241 nova_compute[248510]: 2025-12-13 09:03:47.069 248514 DEBUG nova.objects.instance [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid c89b029c-146b-47ae-8961-0000d4e49e29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:03:47 np0005558241 nova_compute[248510]: 2025-12-13 09:03:47.128 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:03:47 np0005558241 nova_compute[248510]: 2025-12-13 09:03:47.129 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Ensure instance console log exists: /var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:03:47 np0005558241 nova_compute[248510]: 2025-12-13 09:03:47.130 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:47 np0005558241 nova_compute[248510]: 2025-12-13 09:03:47.131 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:47 np0005558241 nova_compute[248510]: 2025-12-13 09:03:47.131 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.157851) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616627157958, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2065, "num_deletes": 251, "total_data_size": 3547995, "memory_usage": 3601632, "flush_reason": "Manual Compaction"}
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616627199358, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3438958, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59278, "largest_seqno": 61342, "table_properties": {"data_size": 3429635, "index_size": 5816, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19158, "raw_average_key_size": 20, "raw_value_size": 3411078, "raw_average_value_size": 3594, "num_data_blocks": 258, "num_entries": 949, "num_filter_entries": 949, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765616412, "oldest_key_time": 1765616412, "file_creation_time": 1765616627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 41706 microseconds, and 11773 cpu microseconds.
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.199561) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3438958 bytes OK
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.199642) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.209324) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.209351) EVENT_LOG_v1 {"time_micros": 1765616627209343, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.209373) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3539313, prev total WAL file size 3539313, number of live WAL files 2.
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.211286) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3358KB)], [140(8717KB)]
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616627211432, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 12365360, "oldest_snapshot_seqno": -1}
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8246 keys, 10588064 bytes, temperature: kUnknown
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616627309194, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 10588064, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10534897, "index_size": 31422, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20677, "raw_key_size": 215100, "raw_average_key_size": 26, "raw_value_size": 10389645, "raw_average_value_size": 1259, "num_data_blocks": 1219, "num_entries": 8246, "num_filter_entries": 8246, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765616627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.309687) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 10588064 bytes
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.312261) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.2 rd, 108.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.5 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 8760, records dropped: 514 output_compression: NoCompression
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.312289) EVENT_LOG_v1 {"time_micros": 1765616627312276, "job": 86, "event": "compaction_finished", "compaction_time_micros": 97973, "compaction_time_cpu_micros": 48138, "output_level": 6, "num_output_files": 1, "total_output_size": 10588064, "num_input_records": 8760, "num_output_records": 8246, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616627313807, "job": 86, "event": "table_file_deletion", "file_number": 142}
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616627317106, "job": 86, "event": "table_file_deletion", "file_number": 140}
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.211153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.317243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.317250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.317255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.317259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:03:47 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:03:47.317264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:03:47 np0005558241 nova_compute[248510]: 2025-12-13 09:03:47.661 248514 DEBUG nova.network.neutron [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Successfully created port: 8b1532d0-4269-4522-a1c5-961c9f9254dc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:03:48 np0005558241 nova_compute[248510]: 2025-12-13 09:03:48.042 248514 DEBUG nova.compute.manager [req-8dba17f3-1f72-480e-9a5f-ef02df26e9b0 req-85919c04-332c-40f4-a869-e5fcc72e66a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:03:48 np0005558241 nova_compute[248510]: 2025-12-13 09:03:48.043 248514 DEBUG oslo_concurrency.lockutils [req-8dba17f3-1f72-480e-9a5f-ef02df26e9b0 req-85919c04-332c-40f4-a869-e5fcc72e66a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:48 np0005558241 nova_compute[248510]: 2025-12-13 09:03:48.043 248514 DEBUG oslo_concurrency.lockutils [req-8dba17f3-1f72-480e-9a5f-ef02df26e9b0 req-85919c04-332c-40f4-a869-e5fcc72e66a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:48 np0005558241 nova_compute[248510]: 2025-12-13 09:03:48.043 248514 DEBUG oslo_concurrency.lockutils [req-8dba17f3-1f72-480e-9a5f-ef02df26e9b0 req-85919c04-332c-40f4-a869-e5fcc72e66a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:48 np0005558241 nova_compute[248510]: 2025-12-13 09:03:48.043 248514 DEBUG nova.compute.manager [req-8dba17f3-1f72-480e-9a5f-ef02df26e9b0 req-85919c04-332c-40f4-a869-e5fcc72e66a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] No waiting events found dispatching network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:03:48 np0005558241 nova_compute[248510]: 2025-12-13 09:03:48.044 248514 WARNING nova.compute.manager [req-8dba17f3-1f72-480e-9a5f-ef02df26e9b0 req-85919c04-332c-40f4-a869-e5fcc72e66a6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received unexpected event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 for instance with vm_state stopped and task_state None.#033[00m
Dec 13 04:03:48 np0005558241 nova_compute[248510]: 2025-12-13 09:03:48.067 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3081: 321 pgs: 321 active+clean; 228 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 3.4 MiB/s wr, 92 op/s
Dec 13 04:03:48 np0005558241 nova_compute[248510]: 2025-12-13 09:03:48.995 248514 DEBUG nova.network.neutron [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Successfully updated port: 8b1532d0-4269-4522-a1c5-961c9f9254dc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:03:49 np0005558241 nova_compute[248510]: 2025-12-13 09:03:49.014 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:03:49 np0005558241 nova_compute[248510]: 2025-12-13 09:03:49.015 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:03:49 np0005558241 nova_compute[248510]: 2025-12-13 09:03:49.015 248514 DEBUG nova.network.neutron [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:03:49 np0005558241 nova_compute[248510]: 2025-12-13 09:03:49.122 248514 DEBUG nova.compute.manager [req-2b24ee4e-ac09-4576-a490-0c0760d6af8a req-c5ac17a8-0722-4293-873c-367e1cfd3b37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received event network-changed-8b1532d0-4269-4522-a1c5-961c9f9254dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:03:49 np0005558241 nova_compute[248510]: 2025-12-13 09:03:49.123 248514 DEBUG nova.compute.manager [req-2b24ee4e-ac09-4576-a490-0c0760d6af8a req-c5ac17a8-0722-4293-873c-367e1cfd3b37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Refreshing instance network info cache due to event network-changed-8b1532d0-4269-4522-a1c5-961c9f9254dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:03:49 np0005558241 nova_compute[248510]: 2025-12-13 09:03:49.123 248514 DEBUG oslo_concurrency.lockutils [req-2b24ee4e-ac09-4576-a490-0c0760d6af8a req-c5ac17a8-0722-4293-873c-367e1cfd3b37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:03:49 np0005558241 nova_compute[248510]: 2025-12-13 09:03:49.235 248514 INFO nova.compute.manager [None req-29f6a72c-096a-4bcd-8a23-5d761f619b4c a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Get console output#033[00m
Dec 13 04:03:49 np0005558241 nova_compute[248510]: 2025-12-13 09:03:49.426 248514 DEBUG nova.objects.instance [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'flavor' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:03:49 np0005558241 nova_compute[248510]: 2025-12-13 09:03:49.450 248514 DEBUG oslo_concurrency.lockutils [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:03:49 np0005558241 nova_compute[248510]: 2025-12-13 09:03:49.450 248514 DEBUG oslo_concurrency.lockutils [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquired lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:03:49 np0005558241 nova_compute[248510]: 2025-12-13 09:03:49.450 248514 DEBUG nova.network.neutron [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:03:49 np0005558241 nova_compute[248510]: 2025-12-13 09:03:49.450 248514 DEBUG nova.objects.instance [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'info_cache' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:03:49 np0005558241 nova_compute[248510]: 2025-12-13 09:03:49.534 248514 DEBUG nova.network.neutron [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:03:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3082: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.903 248514 DEBUG nova.network.neutron [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Updating instance_info_cache with network_info: [{"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.935 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.936 248514 DEBUG nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Instance network_info: |[{"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.936 248514 DEBUG oslo_concurrency.lockutils [req-2b24ee4e-ac09-4576-a490-0c0760d6af8a req-c5ac17a8-0722-4293-873c-367e1cfd3b37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.937 248514 DEBUG nova.network.neutron [req-2b24ee4e-ac09-4576-a490-0c0760d6af8a req-c5ac17a8-0722-4293-873c-367e1cfd3b37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Refreshing network info cache for port 8b1532d0-4269-4522-a1c5-961c9f9254dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.942 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Start _get_guest_xml network_info=[{"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.948 248514 WARNING nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:03:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.970 248514 DEBUG nova.virt.libvirt.host [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.972 248514 DEBUG nova.virt.libvirt.host [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.976 248514 DEBUG nova.virt.libvirt.host [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.977 248514 DEBUG nova.virt.libvirt.host [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.977 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.977 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.978 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.978 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.978 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.979 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.979 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.979 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.979 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.980 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.980 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.980 248514 DEBUG nova.virt.hardware [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:03:50 np0005558241 nova_compute[248510]: 2025-12-13 09:03:50.983 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:03:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/138571391' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.556 248514 DEBUG nova.network.neutron [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Updating instance_info_cache with network_info: [{"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.563 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.586 248514 DEBUG nova.storage.rbd_utils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image c89b029c-146b-47ae-8961-0000d4e49e29_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.590 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.635 248514 DEBUG oslo_concurrency.lockutils [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Releasing lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.648 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.665 248514 INFO nova.virt.libvirt.driver [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance destroyed successfully.#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.666 248514 DEBUG nova.objects.instance [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'numa_topology' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.683 248514 DEBUG nova.objects.instance [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'resources' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.697 248514 DEBUG nova.virt.libvirt.vif [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:03:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-306376587',display_name='tempest-TestNetworkAdvancedServerOps-server-306376587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-306376587',id=132,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHKEKbLZZE+aW59knHMs4q4nSClPgzNp8F0QXUH6k11KPbTMvxhfQvtv8ScRqzRXYIiwBhktHJd/maU265oWRgbsF6YV/H8WAVywICvzTkBF4CZcbkyPahF1JxO8nrngQ==',key_name='tempest-TestNetworkAdvancedServerOps-1547833332',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:03:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-4215zp0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:03:46Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=f5f29271-ff94-4d88-bc99-1cfc3e1128a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.697 248514 DEBUG nova.network.os_vif_util [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.699 248514 DEBUG nova.network.os_vif_util [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.699 248514 DEBUG os_vif [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.701 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.702 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdaddb809-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.704 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.708 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.710 248514 INFO os_vif [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3')#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.719 248514 DEBUG nova.virt.libvirt.driver [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Start _get_guest_xml network_info=[{"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.725 248514 WARNING nova.virt.libvirt.driver [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.728 248514 DEBUG nova.virt.libvirt.host [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.729 248514 DEBUG nova.virt.libvirt.host [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.731 248514 DEBUG nova.virt.libvirt.host [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.732 248514 DEBUG nova.virt.libvirt.host [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.732 248514 DEBUG nova.virt.libvirt.driver [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.733 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.733 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.734 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.734 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.734 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.734 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.735 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.735 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.735 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.736 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.736 248514 DEBUG nova.virt.hardware [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.736 248514 DEBUG nova.objects.instance [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'vcpu_model' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:03:51 np0005558241 nova_compute[248510]: 2025-12-13 09:03:51.762 248514 DEBUG oslo_concurrency.processutils [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:03:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3396109086' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.172 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.174 248514 DEBUG nova.virt.libvirt.vif [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:03:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1708858245',display_name='tempest-TestNetworkBasicOps-server-1708858245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1708858245',id=134,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL++NwsCfuXrqu4UC7qgUOg89a/J8JDHWy/KO/6t34DIcODsr1DE3qwPt8YEbfOTv/XMYKYNpUizYYeq5l6dOm5t1OYnPo2H2KGHxdPaguJpqsjNgEOg4dU6arkPu8gPzA==',key_name='tempest-TestNetworkBasicOps-1668505721',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-gfm2ze7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:03:45Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=c89b029c-146b-47ae-8961-0000d4e49e29,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.174 248514 DEBUG nova.network.os_vif_util [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.175 248514 DEBUG nova.network.os_vif_util [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b5:f3,bridge_name='br-int',has_traffic_filtering=True,id=8b1532d0-4269-4522-a1c5-961c9f9254dc,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b1532d0-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.177 248514 DEBUG nova.objects.instance [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid c89b029c-146b-47ae-8961-0000d4e49e29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.196 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <uuid>c89b029c-146b-47ae-8961-0000d4e49e29</uuid>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <name>instance-00000086</name>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkBasicOps-server-1708858245</nova:name>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:03:50</nova:creationTime>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <nova:port uuid="8b1532d0-4269-4522-a1c5-961c9f9254dc">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <entry name="serial">c89b029c-146b-47ae-8961-0000d4e49e29</entry>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <entry name="uuid">c89b029c-146b-47ae-8961-0000d4e49e29</entry>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/c89b029c-146b-47ae-8961-0000d4e49e29_disk">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/c89b029c-146b-47ae-8961-0000d4e49e29_disk.config">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:b8:b5:f3"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <target dev="tap8b1532d0-42"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29/console.log" append="off"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:03:52 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:03:52 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.197 248514 DEBUG nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Preparing to wait for external event network-vif-plugged-8b1532d0-4269-4522-a1c5-961c9f9254dc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.197 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.197 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.198 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.198 248514 DEBUG nova.virt.libvirt.vif [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:03:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1708858245',display_name='tempest-TestNetworkBasicOps-server-1708858245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1708858245',id=134,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL++NwsCfuXrqu4UC7qgUOg89a/J8JDHWy/KO/6t34DIcODsr1DE3qwPt8YEbfOTv/XMYKYNpUizYYeq5l6dOm5t1OYnPo2H2KGHxdPaguJpqsjNgEOg4dU6arkPu8gPzA==',key_name='tempest-TestNetworkBasicOps-1668505721',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-gfm2ze7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:03:45Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=c89b029c-146b-47ae-8961-0000d4e49e29,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.198 248514 DEBUG nova.network.os_vif_util [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.199 248514 DEBUG nova.network.os_vif_util [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b5:f3,bridge_name='br-int',has_traffic_filtering=True,id=8b1532d0-4269-4522-a1c5-961c9f9254dc,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b1532d0-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.199 248514 DEBUG os_vif [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b5:f3,bridge_name='br-int',has_traffic_filtering=True,id=8b1532d0-4269-4522-a1c5-961c9f9254dc,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b1532d0-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.200 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.201 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.201 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.203 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.203 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b1532d0-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.203 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8b1532d0-42, col_values=(('external_ids', {'iface-id': '8b1532d0-4269-4522-a1c5-961c9f9254dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:b5:f3', 'vm-uuid': 'c89b029c-146b-47ae-8961-0000d4e49e29'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.205 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:52 np0005558241 NetworkManager[50376]: <info>  [1765616632.2073] manager: (tap8b1532d0-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/562)
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.210 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.212 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.213 248514 INFO os_vif [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:b5:f3,bridge_name='br-int',has_traffic_filtering=True,id=8b1532d0-4269-4522-a1c5-961c9f9254dc,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b1532d0-42')#033[00m
Dec 13 04:03:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:03:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/731843873' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.297 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.298 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.298 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:b8:b5:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.298 248514 INFO nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Using config drive#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.324 248514 DEBUG nova.storage.rbd_utils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image c89b029c-146b-47ae-8961-0000d4e49e29_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.329 248514 DEBUG oslo_concurrency.processutils [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.370 248514 DEBUG oslo_concurrency.processutils [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3083: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 94 op/s
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.775 248514 DEBUG nova.network.neutron [req-2b24ee4e-ac09-4576-a490-0c0760d6af8a req-c5ac17a8-0722-4293-873c-367e1cfd3b37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Updated VIF entry in instance network info cache for port 8b1532d0-4269-4522-a1c5-961c9f9254dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.776 248514 DEBUG nova.network.neutron [req-2b24ee4e-ac09-4576-a490-0c0760d6af8a req-c5ac17a8-0722-4293-873c-367e1cfd3b37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Updating instance_info_cache with network_info: [{"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.794 248514 DEBUG oslo_concurrency.lockutils [req-2b24ee4e-ac09-4576-a490-0c0760d6af8a req-c5ac17a8-0722-4293-873c-367e1cfd3b37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:03:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:03:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3654709939' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.939 248514 DEBUG oslo_concurrency.processutils [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.941 248514 DEBUG nova.virt.libvirt.vif [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:03:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-306376587',display_name='tempest-TestNetworkAdvancedServerOps-server-306376587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-306376587',id=132,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHKEKbLZZE+aW59knHMs4q4nSClPgzNp8F0QXUH6k11KPbTMvxhfQvtv8ScRqzRXYIiwBhktHJd/maU265oWRgbsF6YV/H8WAVywICvzTkBF4CZcbkyPahF1JxO8nrngQ==',key_name='tempest-TestNetworkAdvancedServerOps-1547833332',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:03:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-4215zp0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:03:46Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=f5f29271-ff94-4d88-bc99-1cfc3e1128a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.942 248514 DEBUG nova.network.os_vif_util [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.943 248514 DEBUG nova.network.os_vif_util [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.944 248514 DEBUG nova.objects.instance [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_devices' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.971 248514 DEBUG nova.virt.libvirt.driver [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <uuid>f5f29271-ff94-4d88-bc99-1cfc3e1128a0</uuid>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <name>instance-00000084</name>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-306376587</nova:name>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:03:51</nova:creationTime>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <nova:user uuid="a5fd410579fa429ba0f7f680590cd86a">tempest-TestNetworkAdvancedServerOps-1969761839-project-member</nova:user>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <nova:project uuid="77d40177a4804671aa9c5da343bc2ed4">tempest-TestNetworkAdvancedServerOps-1969761839</nova:project>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <nova:port uuid="daddb809-b36f-4f29-bd15-459cfd21a812">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <entry name="serial">f5f29271-ff94-4d88-bc99-1cfc3e1128a0</entry>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <entry name="uuid">f5f29271-ff94-4d88-bc99-1cfc3e1128a0</entry>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/f5f29271-ff94-4d88-bc99-1cfc3e1128a0_disk.config">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:b3:4e:45"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <target dev="tapdaddb809-b3"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0/console.log" append="off"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <input type="keyboard" bus="usb"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:03:52 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:03:52 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:03:52 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:03:52 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.974 248514 DEBUG nova.virt.libvirt.driver [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.974 248514 DEBUG nova.virt.libvirt.driver [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.975 248514 DEBUG nova.virt.libvirt.vif [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:03:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-306376587',display_name='tempest-TestNetworkAdvancedServerOps-server-306376587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-306376587',id=132,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHKEKbLZZE+aW59knHMs4q4nSClPgzNp8F0QXUH6k11KPbTMvxhfQvtv8ScRqzRXYIiwBhktHJd/maU265oWRgbsF6YV/H8WAVywICvzTkBF4CZcbkyPahF1JxO8nrngQ==',key_name='tempest-TestNetworkAdvancedServerOps-1547833332',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:03:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-4215zp0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:03:46Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=f5f29271-ff94-4d88-bc99-1cfc3e1128a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.977 248514 DEBUG nova.network.os_vif_util [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.977 248514 DEBUG nova.network.os_vif_util [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.978 248514 DEBUG os_vif [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.981 248514 INFO nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Creating config drive at /var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29/disk.config#033[00m
Dec 13 04:03:52 np0005558241 nova_compute[248510]: 2025-12-13 09:03:52.986 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2iw8k2q3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.030 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.032 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.033 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.037 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.037 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdaddb809-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.038 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdaddb809-b3, col_values=(('external_ids', {'iface-id': 'daddb809-b36f-4f29-bd15-459cfd21a812', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:4e:45', 'vm-uuid': 'f5f29271-ff94-4d88-bc99-1cfc3e1128a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:53 np0005558241 NetworkManager[50376]: <info>  [1765616633.0410] manager: (tapdaddb809-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/563)
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.042 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.050 248514 INFO os_vif [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3')#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.070 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:53 np0005558241 kernel: tapdaddb809-b3: entered promiscuous mode
Dec 13 04:03:53 np0005558241 NetworkManager[50376]: <info>  [1765616633.1312] manager: (tapdaddb809-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/564)
Dec 13 04:03:53 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:53Z|01361|binding|INFO|Claiming lport daddb809-b36f-4f29-bd15-459cfd21a812 for this chassis.
Dec 13 04:03:53 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:53Z|01362|binding|INFO|daddb809-b36f-4f29-bd15-459cfd21a812: Claiming fa:16:3e:b3:4e:45 10.100.0.11
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.134 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.138 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2iw8k2q3" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.150 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:4e:45 10.100.0.11'], port_security=['fa:16:3e:b3:4e:45 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f5f29271-ff94-4d88-bc99-1cfc3e1128a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdd30303-3917-438c-8b47-a12827c948d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '5', 'neutron:security_group_ids': '48e751cb-5eae-4702-97af-e0ce47d426b3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c018ca30-1abb-42b0-8adb-342f8b53fd8e, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=daddb809-b36f-4f29-bd15-459cfd21a812) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.153 158419 INFO neutron.agent.ovn.metadata.agent [-] Port daddb809-b36f-4f29-bd15-459cfd21a812 in datapath cdd30303-3917-438c-8b47-a12827c948d6 bound to our chassis#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.156 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cdd30303-3917-438c-8b47-a12827c948d6#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.170 248514 DEBUG nova.storage.rbd_utils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image c89b029c-146b-47ae-8961-0000d4e49e29_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:03:53 np0005558241 systemd-udevd[383162]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.174 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29/disk.config c89b029c-146b-47ae-8961-0000d4e49e29_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:03:53 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:53Z|01363|binding|INFO|Setting lport daddb809-b36f-4f29-bd15-459cfd21a812 ovn-installed in OVS
Dec 13 04:03:53 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:53Z|01364|binding|INFO|Setting lport daddb809-b36f-4f29-bd15-459cfd21a812 up in Southbound
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.176 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7e4a8ffd-93e6-4080-a700-e8c6d9779e56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.178 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcdd30303-31 in ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.186 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcdd30303-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.186 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7dbdec58-14c2-4f4b-b8f3-c9cc6c3dee47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.189 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb254ef-6365-4321-bf5d-93018d820fd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:53 np0005558241 NetworkManager[50376]: <info>  [1765616633.1938] device (tapdaddb809-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:03:53 np0005558241 NetworkManager[50376]: <info>  [1765616633.1951] device (tapdaddb809-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:03:53 np0005558241 systemd-machined[210538]: New machine qemu-163-instance-00000084.
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.206 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ea7b9c-9afe-4fee-b6cb-a54969e4b691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:53 np0005558241 systemd[1]: Started Virtual Machine qemu-163-instance-00000084.
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.226 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c1dc515e-1f0d-4ea2-8f71-761da7e3f49b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.231 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.264 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b7dda8e9-e7d2-491d-b4d7-ef1c3d1ab32c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:53 np0005558241 NetworkManager[50376]: <info>  [1765616633.2727] manager: (tapcdd30303-30): new Veth device (/org/freedesktop/NetworkManager/Devices/565)
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.271 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[30224f6d-512d-46e3-97c1-d92d49430c6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.319 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8e754267-217d-4099-b2df-26e07b60fe1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.323 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e998fe6a-ceab-4648-abc2-6c85e11ca88f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:53 np0005558241 NetworkManager[50376]: <info>  [1765616633.3564] device (tapcdd30303-30): carrier: link connected
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.365 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[22ecb394-054f-45c5-87ec-106b70f7a194]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.387 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[438c14d4-6ddb-4653-9190-c5070a139d92]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcdd30303-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:ce:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 395], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 921057, 'reachable_time': 22161, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383220, 'error': None, 'target': 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.404 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f51fba81-ca8e-4aae-9ae9-c3fa9e5f0632]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:ce22'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 921057, 'tstamp': 921057}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383221, 'error': None, 'target': 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.414 248514 DEBUG oslo_concurrency.processutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29/disk.config c89b029c-146b-47ae-8961-0000d4e49e29_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.415 248514 INFO nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Deleting local config drive /var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29/disk.config because it was imported into RBD.#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.424 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ea8e706f-016b-4874-8d68-c17613ae9d20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcdd30303-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:ce:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 395], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 921057, 'reachable_time': 22161, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 383222, 'error': None, 'target': 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.461 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8336f0-d16d-4e31-91d4-587f7a16222f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:53 np0005558241 kernel: tap8b1532d0-42: entered promiscuous mode
Dec 13 04:03:53 np0005558241 NetworkManager[50376]: <info>  [1765616633.4715] manager: (tap8b1532d0-42): new Tun device (/org/freedesktop/NetworkManager/Devices/566)
Dec 13 04:03:53 np0005558241 systemd-udevd[383202]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:03:53 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:53Z|01365|binding|INFO|Claiming lport 8b1532d0-4269-4522-a1c5-961c9f9254dc for this chassis.
Dec 13 04:03:53 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:53Z|01366|binding|INFO|8b1532d0-4269-4522-a1c5-961c9f9254dc: Claiming fa:16:3e:b8:b5:f3 10.100.0.9
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.475 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.483 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:b5:f3 10.100.0.9'], port_security=['fa:16:3e:b8:b5:f3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c89b029c-146b-47ae-8961-0000d4e49e29', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82626aa1-467a-40c3-a9a5-93921d70f234', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f130a7c-de13-4843-94c3-b6b246bfa796, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=8b1532d0-4269-4522-a1c5-961c9f9254dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:03:53 np0005558241 NetworkManager[50376]: <info>  [1765616633.4936] device (tap8b1532d0-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:03:53 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:53Z|01367|binding|INFO|Setting lport 8b1532d0-4269-4522-a1c5-961c9f9254dc ovn-installed in OVS
Dec 13 04:03:53 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:53Z|01368|binding|INFO|Setting lport 8b1532d0-4269-4522-a1c5-961c9f9254dc up in Southbound
Dec 13 04:03:53 np0005558241 NetworkManager[50376]: <info>  [1765616633.4942] device (tap8b1532d0-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.495 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.498 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:53 np0005558241 systemd-machined[210538]: New machine qemu-164-instance-00000086.
Dec 13 04:03:53 np0005558241 systemd[1]: Started Virtual Machine qemu-164-instance-00000086.
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.580 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[94f8a8a8-d708-42d1-8345-52a52f57a734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.583 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdd30303-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.583 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.583 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcdd30303-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:53 np0005558241 NetworkManager[50376]: <info>  [1765616633.5867] manager: (tapcdd30303-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/567)
Dec 13 04:03:53 np0005558241 kernel: tapcdd30303-30: entered promiscuous mode
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.588 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.595 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcdd30303-30, col_values=(('external_ids', {'iface-id': '2b4706f8-0b78-43d8-a3df-6327712b725d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.597 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:53 np0005558241 ovn_controller[148476]: 2025-12-13T09:03:53Z|01369|binding|INFO|Releasing lport 2b4706f8-0b78-43d8-a3df-6327712b725d from this chassis (sb_readonly=0)
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.601 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cdd30303-3917-438c-8b47-a12827c948d6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cdd30303-3917-438c-8b47-a12827c948d6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.602 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7f4a34f6-6dc3-4b62-bc8e-0b268b15e5b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.604 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-cdd30303-3917-438c-8b47-a12827c948d6
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/cdd30303-3917-438c-8b47-a12827c948d6.pid.haproxy
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID cdd30303-3917-438c-8b47-a12827c948d6
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:03:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:53.604 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'env', 'PROCESS_TAG=haproxy-cdd30303-3917-438c-8b47-a12827c948d6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cdd30303-3917-438c-8b47-a12827c948d6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.612 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.817 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for f5f29271-ff94-4d88-bc99-1cfc3e1128a0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.818 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616633.8162923, f5f29271-ff94-4d88-bc99-1cfc3e1128a0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.818 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.821 248514 DEBUG nova.compute.manager [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.827 248514 INFO nova.virt.libvirt.driver [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance rebooted successfully.#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.827 248514 DEBUG nova.compute.manager [None req-a7ce9bef-d0c1-48d3-835c-7a47c5a7fab7 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.870 248514 DEBUG nova.compute.manager [req-a92df022-5b09-48c2-a156-14e9015b87be req-404bf8fc-6b5a-4f91-ae53-d6080fecc864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.870 248514 DEBUG oslo_concurrency.lockutils [req-a92df022-5b09-48c2-a156-14e9015b87be req-404bf8fc-6b5a-4f91-ae53-d6080fecc864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.870 248514 DEBUG oslo_concurrency.lockutils [req-a92df022-5b09-48c2-a156-14e9015b87be req-404bf8fc-6b5a-4f91-ae53-d6080fecc864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.870 248514 DEBUG oslo_concurrency.lockutils [req-a92df022-5b09-48c2-a156-14e9015b87be req-404bf8fc-6b5a-4f91-ae53-d6080fecc864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.871 248514 DEBUG nova.compute.manager [req-a92df022-5b09-48c2-a156-14e9015b87be req-404bf8fc-6b5a-4f91-ae53-d6080fecc864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] No waiting events found dispatching network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.871 248514 WARNING nova.compute.manager [req-a92df022-5b09-48c2-a156-14e9015b87be req-404bf8fc-6b5a-4f91-ae53-d6080fecc864 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received unexpected event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 for instance with vm_state stopped and task_state powering-on.#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.872 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.880 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.919 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616633.8167195, f5f29271-ff94-4d88-bc99-1cfc3e1128a0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.919 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] VM Started (Lifecycle Event)#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.942 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:03:53 np0005558241 nova_compute[248510]: 2025-12-13 09:03:53.946 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:03:54 np0005558241 podman[383333]: 2025-12-13 09:03:54.010095783 +0000 UTC m=+0.031372917 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:03:54 np0005558241 nova_compute[248510]: 2025-12-13 09:03:54.125 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616634.1247852, c89b029c-146b-47ae-8961-0000d4e49e29 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:03:54 np0005558241 nova_compute[248510]: 2025-12-13 09:03:54.126 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] VM Started (Lifecycle Event)#033[00m
Dec 13 04:03:54 np0005558241 nova_compute[248510]: 2025-12-13 09:03:54.156 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:03:54 np0005558241 nova_compute[248510]: 2025-12-13 09:03:54.161 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616634.12489, c89b029c-146b-47ae-8961-0000d4e49e29 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:03:54 np0005558241 nova_compute[248510]: 2025-12-13 09:03:54.162 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:03:54 np0005558241 nova_compute[248510]: 2025-12-13 09:03:54.191 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:03:54 np0005558241 nova_compute[248510]: 2025-12-13 09:03:54.196 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:03:54 np0005558241 nova_compute[248510]: 2025-12-13 09:03:54.226 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:03:54 np0005558241 podman[383333]: 2025-12-13 09:03:54.300846628 +0000 UTC m=+0.322123782 container create 231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 04:03:54 np0005558241 systemd[1]: Started libpod-conmon-231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7.scope.
Dec 13 04:03:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:03:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4488aafd38a05ea281a308b9d734b5be686496064f1968b2b681c3b5260af583/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3084: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 3.9 MiB/s wr, 108 op/s
Dec 13 04:03:54 np0005558241 podman[383333]: 2025-12-13 09:03:54.751741654 +0000 UTC m=+0.773018828 container init 231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 04:03:54 np0005558241 podman[383333]: 2025-12-13 09:03:54.760099984 +0000 UTC m=+0.781377128 container start 231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:03:54 np0005558241 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[383371]: [NOTICE]   (383375) : New worker (383377) forked
Dec 13 04:03:54 np0005558241 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[383371]: [NOTICE]   (383375) : Loading success.
Dec 13 04:03:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.222 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 8b1532d0-4269-4522-a1c5-961c9f9254dc in datapath c2c4278e-24ed-4454-ae7e-9f1ba7df516c unbound from our chassis#033[00m
Dec 13 04:03:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.224 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c2c4278e-24ed-4454-ae7e-9f1ba7df516c#033[00m
Dec 13 04:03:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.248 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7da951c3-3a9e-4c9a-b1f8-8746ad58b5f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.285 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[02a9e977-3634-46a2-9444-66305c7da9d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.290 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[21ce9214-644d-4f0a-b00f-bb51e0c2fe03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.322 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[76f92dc9-4026-4e73-915f-9885839600c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.340 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[95c48d7b-2d9f-429e-820b-769c44cddcb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2c4278e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:a6:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 392], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 918511, 'reachable_time': 16376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383391, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.361 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[14308fc9-55b1-4bcf-8a08-af1158889432]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc2c4278e-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 918524, 'tstamp': 918524}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383392, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc2c4278e-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 918528, 'tstamp': 918528}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383392, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:03:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.363 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2c4278e-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:55 np0005558241 nova_compute[248510]: 2025-12-13 09:03:55.365 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.366 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2c4278e-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.366 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:03:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.367 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc2c4278e-20, col_values=(('external_ids', {'iface-id': 'fb6127ed-4b20-49d3-8950-376b6e1d5999'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:03:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.367 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:03:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.440 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.440 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:03:55.441 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.094 248514 DEBUG nova.compute.manager [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.095 248514 DEBUG oslo_concurrency.lockutils [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.095 248514 DEBUG oslo_concurrency.lockutils [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.096 248514 DEBUG oslo_concurrency.lockutils [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.096 248514 DEBUG nova.compute.manager [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] No waiting events found dispatching network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.096 248514 WARNING nova.compute.manager [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received unexpected event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.096 248514 DEBUG nova.compute.manager [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received event network-vif-plugged-8b1532d0-4269-4522-a1c5-961c9f9254dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.097 248514 DEBUG oslo_concurrency.lockutils [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.097 248514 DEBUG oslo_concurrency.lockutils [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.097 248514 DEBUG oslo_concurrency.lockutils [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.098 248514 DEBUG nova.compute.manager [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Processing event network-vif-plugged-8b1532d0-4269-4522-a1c5-961c9f9254dc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.098 248514 DEBUG nova.compute.manager [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received event network-vif-plugged-8b1532d0-4269-4522-a1c5-961c9f9254dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.098 248514 DEBUG oslo_concurrency.lockutils [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.098 248514 DEBUG oslo_concurrency.lockutils [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.099 248514 DEBUG oslo_concurrency.lockutils [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.099 248514 DEBUG nova.compute.manager [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] No waiting events found dispatching network-vif-plugged-8b1532d0-4269-4522-a1c5-961c9f9254dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.099 248514 WARNING nova.compute.manager [req-f8dcc0ef-5490-4181-99ac-b6c9a73d533a req-7f6213c8-3107-4fde-9cdc-3c17af45678e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received unexpected event network-vif-plugged-8b1532d0-4269-4522-a1c5-961c9f9254dc for instance with vm_state building and task_state spawning.#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.100 248514 DEBUG nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.107 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.107 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616636.1067662, c89b029c-146b-47ae-8961-0000d4e49e29 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.107 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.113 248514 INFO nova.virt.libvirt.driver [-] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Instance spawned successfully.#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.113 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.131 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.138 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.142 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.143 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.143 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.144 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.144 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.145 248514 DEBUG nova.virt.libvirt.driver [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.161 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.260 248514 INFO nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Took 10.97 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.260 248514 DEBUG nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.351 248514 INFO nova.compute.manager [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Took 12.21 seconds to build instance.#033[00m
Dec 13 04:03:56 np0005558241 nova_compute[248510]: 2025-12-13 09:03:56.369 248514 DEBUG oslo_concurrency.lockutils [None req-fefaae1b-e549-497b-ae67-0d79bfb80541 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:03:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3085: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Dec 13 04:03:58 np0005558241 nova_compute[248510]: 2025-12-13 09:03:58.041 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:58 np0005558241 nova_compute[248510]: 2025-12-13 09:03:58.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:03:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3086: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Dec 13 04:04:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3087: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 639 KiB/s wr, 164 op/s
Dec 13 04:04:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:04:01 np0005558241 nova_compute[248510]: 2025-12-13 09:04:01.459 248514 DEBUG nova.compute.manager [req-97ee0e1c-c2b6-4e76-a6be-87ad3305b716 req-787ca27f-d818-4bd8-b6a1-53a7a5aaa5a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received event network-changed-8b1532d0-4269-4522-a1c5-961c9f9254dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:01 np0005558241 nova_compute[248510]: 2025-12-13 09:04:01.459 248514 DEBUG nova.compute.manager [req-97ee0e1c-c2b6-4e76-a6be-87ad3305b716 req-787ca27f-d818-4bd8-b6a1-53a7a5aaa5a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Refreshing instance network info cache due to event network-changed-8b1532d0-4269-4522-a1c5-961c9f9254dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:04:01 np0005558241 nova_compute[248510]: 2025-12-13 09:04:01.460 248514 DEBUG oslo_concurrency.lockutils [req-97ee0e1c-c2b6-4e76-a6be-87ad3305b716 req-787ca27f-d818-4bd8-b6a1-53a7a5aaa5a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:04:01 np0005558241 nova_compute[248510]: 2025-12-13 09:04:01.460 248514 DEBUG oslo_concurrency.lockutils [req-97ee0e1c-c2b6-4e76-a6be-87ad3305b716 req-787ca27f-d818-4bd8-b6a1-53a7a5aaa5a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:04:01 np0005558241 nova_compute[248510]: 2025-12-13 09:04:01.460 248514 DEBUG nova.network.neutron [req-97ee0e1c-c2b6-4e76-a6be-87ad3305b716 req-787ca27f-d818-4bd8-b6a1-53a7a5aaa5a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Refreshing network info cache for port 8b1532d0-4269-4522-a1c5-961c9f9254dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:04:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3088: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 24 KiB/s wr, 149 op/s
Dec 13 04:04:03 np0005558241 nova_compute[248510]: 2025-12-13 09:04:03.043 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:03 np0005558241 nova_compute[248510]: 2025-12-13 09:04:03.073 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:03 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #50. Immutable memtables: 0.
Dec 13 04:04:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3089: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 25 KiB/s wr, 201 op/s
Dec 13 04:04:04 np0005558241 nova_compute[248510]: 2025-12-13 09:04:04.993 248514 DEBUG nova.network.neutron [req-97ee0e1c-c2b6-4e76-a6be-87ad3305b716 req-787ca27f-d818-4bd8-b6a1-53a7a5aaa5a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Updated VIF entry in instance network info cache for port 8b1532d0-4269-4522-a1c5-961c9f9254dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:04:04 np0005558241 nova_compute[248510]: 2025-12-13 09:04:04.994 248514 DEBUG nova.network.neutron [req-97ee0e1c-c2b6-4e76-a6be-87ad3305b716 req-787ca27f-d818-4bd8-b6a1-53a7a5aaa5a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Updating instance_info_cache with network_info: [{"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:04:05 np0005558241 nova_compute[248510]: 2025-12-13 09:04:05.020 248514 DEBUG oslo_concurrency.lockutils [req-97ee0e1c-c2b6-4e76-a6be-87ad3305b716 req-787ca27f-d818-4bd8-b6a1-53a7a5aaa5a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:04:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:04:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3090: 321 pgs: 321 active+clean; 246 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 14 KiB/s wr, 187 op/s
Dec 13 04:04:06 np0005558241 ovn_controller[148476]: 2025-12-13T09:04:06Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:4e:45 10.100.0.11
Dec 13 04:04:08 np0005558241 nova_compute[248510]: 2025-12-13 09:04:08.046 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:08 np0005558241 nova_compute[248510]: 2025-12-13 09:04:08.075 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3091: 321 pgs: 321 active+clean; 246 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 24 KiB/s wr, 202 op/s
Dec 13 04:04:08 np0005558241 nova_compute[248510]: 2025-12-13 09:04:08.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:04:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:04:09
Dec 13 04:04:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:04:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:04:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'images', '.mgr', 'vms', 'volumes', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log']
Dec 13 04:04:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:04:09 np0005558241 ovn_controller[148476]: 2025-12-13T09:04:09Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:b5:f3 10.100.0.9
Dec 13 04:04:09 np0005558241 ovn_controller[148476]: 2025-12-13T09:04:09Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:b5:f3 10.100.0.9
Dec 13 04:04:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:04:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:04:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:04:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:04:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:04:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:04:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3092: 321 pgs: 321 active+clean; 259 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1001 KiB/s wr, 183 op/s
Dec 13 04:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:04:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:04:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:04:12 np0005558241 podman[383395]: 2025-12-13 09:04:12.000413129 +0000 UTC m=+0.082256742 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:04:12 np0005558241 podman[383396]: 2025-12-13 09:04:12.010936772 +0000 UTC m=+0.079444801 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:04:12 np0005558241 podman[383394]: 2025-12-13 09:04:12.054920324 +0000 UTC m=+0.128353667 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 04:04:12 np0005558241 nova_compute[248510]: 2025-12-13 09:04:12.175 248514 INFO nova.compute.manager [None req-597a6f41-f0ad-418c-8166-00ef38848157 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Get console output#033[00m
Dec 13 04:04:12 np0005558241 nova_compute[248510]: 2025-12-13 09:04:12.186 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 04:04:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3093: 321 pgs: 321 active+clean; 259 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 728 KiB/s rd, 1000 KiB/s wr, 119 op/s
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.078 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.408 248514 DEBUG nova.compute.manager [req-be658d65-8d80-4de1-b040-44f2c5b9c58c req-35130b6a-ba94-4bf4-b6b1-291b1ad6465d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-changed-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.409 248514 DEBUG nova.compute.manager [req-be658d65-8d80-4de1-b040-44f2c5b9c58c req-35130b6a-ba94-4bf4-b6b1-291b1ad6465d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Refreshing instance network info cache due to event network-changed-daddb809-b36f-4f29-bd15-459cfd21a812. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.409 248514 DEBUG oslo_concurrency.lockutils [req-be658d65-8d80-4de1-b040-44f2c5b9c58c req-35130b6a-ba94-4bf4-b6b1-291b1ad6465d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.410 248514 DEBUG oslo_concurrency.lockutils [req-be658d65-8d80-4de1-b040-44f2c5b9c58c req-35130b6a-ba94-4bf4-b6b1-291b1ad6465d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.410 248514 DEBUG nova.network.neutron [req-be658d65-8d80-4de1-b040-44f2c5b9c58c req-35130b6a-ba94-4bf4-b6b1-291b1ad6465d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Refreshing network info cache for port daddb809-b36f-4f29-bd15-459cfd21a812 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.499 248514 DEBUG oslo_concurrency.lockutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.499 248514 DEBUG oslo_concurrency.lockutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.499 248514 DEBUG oslo_concurrency.lockutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.500 248514 DEBUG oslo_concurrency.lockutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.500 248514 DEBUG oslo_concurrency.lockutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.501 248514 INFO nova.compute.manager [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Terminating instance#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.502 248514 DEBUG nova.compute.manager [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:04:13 np0005558241 kernel: tapdaddb809-b3 (unregistering): left promiscuous mode
Dec 13 04:04:13 np0005558241 NetworkManager[50376]: <info>  [1765616653.5601] device (tapdaddb809-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:04:13 np0005558241 ovn_controller[148476]: 2025-12-13T09:04:13Z|01370|binding|INFO|Releasing lport daddb809-b36f-4f29-bd15-459cfd21a812 from this chassis (sb_readonly=0)
Dec 13 04:04:13 np0005558241 ovn_controller[148476]: 2025-12-13T09:04:13Z|01371|binding|INFO|Setting lport daddb809-b36f-4f29-bd15-459cfd21a812 down in Southbound
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.594 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:13 np0005558241 ovn_controller[148476]: 2025-12-13T09:04:13Z|01372|binding|INFO|Removing iface tapdaddb809-b3 ovn-installed in OVS
Dec 13 04:04:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:13.604 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:4e:45 10.100.0.11'], port_security=['fa:16:3e:b3:4e:45 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f5f29271-ff94-4d88-bc99-1cfc3e1128a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdd30303-3917-438c-8b47-a12827c948d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '48e751cb-5eae-4702-97af-e0ce47d426b3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c018ca30-1abb-42b0-8adb-342f8b53fd8e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=daddb809-b36f-4f29-bd15-459cfd21a812) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:04:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:13.606 158419 INFO neutron.agent.ovn.metadata.agent [-] Port daddb809-b36f-4f29-bd15-459cfd21a812 in datapath cdd30303-3917-438c-8b47-a12827c948d6 unbound from our chassis#033[00m
Dec 13 04:04:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:13.607 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cdd30303-3917-438c-8b47-a12827c948d6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:04:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:13.608 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[82aca5ad-23f8-4ab7-8ce2-66acdbf46ded]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:13.609 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 namespace which is not needed anymore#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.618 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:13 np0005558241 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000084.scope: Deactivated successfully.
Dec 13 04:04:13 np0005558241 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000084.scope: Consumed 13.649s CPU time.
Dec 13 04:04:13 np0005558241 systemd-machined[210538]: Machine qemu-163-instance-00000084 terminated.
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.742 248514 INFO nova.virt.libvirt.driver [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Instance destroyed successfully.#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.742 248514 DEBUG nova.objects.instance [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'resources' on Instance uuid f5f29271-ff94-4d88-bc99-1cfc3e1128a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.761 248514 DEBUG nova.virt.libvirt.vif [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:03:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-306376587',display_name='tempest-TestNetworkAdvancedServerOps-server-306376587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-306376587',id=132,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDHKEKbLZZE+aW59knHMs4q4nSClPgzNp8F0QXUH6k11KPbTMvxhfQvtv8ScRqzRXYIiwBhktHJd/maU265oWRgbsF6YV/H8WAVywICvzTkBF4CZcbkyPahF1JxO8nrngQ==',key_name='tempest-TestNetworkAdvancedServerOps-1547833332',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:03:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-4215zp0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:03:53Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=f5f29271-ff94-4d88-bc99-1cfc3e1128a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.762 248514 DEBUG nova.network.os_vif_util [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.763 248514 DEBUG nova.network.os_vif_util [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.763 248514 DEBUG os_vif [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.766 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.767 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdaddb809-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.771 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.772 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:04:13 np0005558241 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[383371]: [NOTICE]   (383375) : haproxy version is 2.8.14-c23fe91
Dec 13 04:04:13 np0005558241 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[383371]: [NOTICE]   (383375) : path to executable is /usr/sbin/haproxy
Dec 13 04:04:13 np0005558241 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[383371]: [WARNING]  (383375) : Exiting Master process...
Dec 13 04:04:13 np0005558241 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[383371]: [ALERT]    (383375) : Current worker (383377) exited with code 143 (Terminated)
Dec 13 04:04:13 np0005558241 neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6[383371]: [WARNING]  (383375) : All workers exited. Exiting... (0)
Dec 13 04:04:13 np0005558241 nova_compute[248510]: 2025-12-13 09:04:13.776 248514 INFO os_vif [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:4e:45,bridge_name='br-int',has_traffic_filtering=True,id=daddb809-b36f-4f29-bd15-459cfd21a812,network=Network(cdd30303-3917-438c-8b47-a12827c948d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdaddb809-b3')#033[00m
Dec 13 04:04:13 np0005558241 systemd[1]: libpod-231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7.scope: Deactivated successfully.
Dec 13 04:04:13 np0005558241 podman[383477]: 2025-12-13 09:04:13.785859931 +0000 UTC m=+0.073197265 container died 231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:04:13 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7-userdata-shm.mount: Deactivated successfully.
Dec 13 04:04:13 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4488aafd38a05ea281a308b9d734b5be686496064f1968b2b681c3b5260af583-merged.mount: Deactivated successfully.
Dec 13 04:04:14 np0005558241 podman[383477]: 2025-12-13 09:04:14.232707747 +0000 UTC m=+0.520045071 container cleanup 231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:04:14 np0005558241 systemd[1]: libpod-conmon-231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7.scope: Deactivated successfully.
Dec 13 04:04:14 np0005558241 podman[383533]: 2025-12-13 09:04:14.343755079 +0000 UTC m=+0.072214780 container remove 231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 04:04:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:14.350 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a538a8f1-a2f5-4dab-a68b-dc4a52339101]: (4, ('Sat Dec 13 09:04:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 (231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7)\n231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7\nSat Dec 13 09:04:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 (231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7)\n231c7f855d63fd529688d0da5ba6781fd3b666beb462563be788c1501e8168c7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:14.352 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[400dcc4c-9b1a-4321-a96b-dce8f09f5602]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:14.353 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdd30303-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:04:14 np0005558241 nova_compute[248510]: 2025-12-13 09:04:14.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:14 np0005558241 kernel: tapcdd30303-30: left promiscuous mode
Dec 13 04:04:14 np0005558241 nova_compute[248510]: 2025-12-13 09:04:14.368 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:14.373 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9148c643-51e5-4880-8bd9-74606d7a5159]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:14.391 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[03666ed8-a7f6-44b3-8902-58491e4822de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:14.393 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9dec077a-3f23-4b80-81e4-1550b19e04d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:14.409 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eb286c48-945e-48f7-9197-eea3a5e03e82]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 921047, 'reachable_time': 30164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383549, 'error': None, 'target': 'ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:14.412 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cdd30303-3917-438c-8b47-a12827c948d6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:04:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:14.412 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[73ef057d-496d-464a-bfa2-572dc3081823]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:14 np0005558241 systemd[1]: run-netns-ovnmeta\x2dcdd30303\x2d3917\x2d438c\x2d8b47\x2da12827c948d6.mount: Deactivated successfully.
Dec 13 04:04:14 np0005558241 nova_compute[248510]: 2025-12-13 09:04:14.463 248514 INFO nova.virt.libvirt.driver [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Deleting instance files /var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0_del#033[00m
Dec 13 04:04:14 np0005558241 nova_compute[248510]: 2025-12-13 09:04:14.464 248514 INFO nova.virt.libvirt.driver [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Deletion of /var/lib/nova/instances/f5f29271-ff94-4d88-bc99-1cfc3e1128a0_del complete#033[00m
Dec 13 04:04:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3094: 321 pgs: 321 active+clean; 281 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 943 KiB/s rd, 2.2 MiB/s wr, 168 op/s
Dec 13 04:04:14 np0005558241 nova_compute[248510]: 2025-12-13 09:04:14.528 248514 INFO nova.compute.manager [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Took 1.03 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:04:14 np0005558241 nova_compute[248510]: 2025-12-13 09:04:14.529 248514 DEBUG oslo.service.loopingcall [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:04:14 np0005558241 nova_compute[248510]: 2025-12-13 09:04:14.529 248514 DEBUG nova.compute.manager [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:04:14 np0005558241 nova_compute[248510]: 2025-12-13 09:04:14.529 248514 DEBUG nova.network.neutron [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:04:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:04:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3900729166' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:04:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:04:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3900729166' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.320 248514 DEBUG nova.network.neutron [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.343 248514 INFO nova.compute.manager [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Took 0.81 seconds to deallocate network for instance.#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.398 248514 DEBUG oslo_concurrency.lockutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.399 248514 DEBUG oslo_concurrency.lockutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.467 248514 DEBUG nova.network.neutron [req-be658d65-8d80-4de1-b040-44f2c5b9c58c req-35130b6a-ba94-4bf4-b6b1-291b1ad6465d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Updated VIF entry in instance network info cache for port daddb809-b36f-4f29-bd15-459cfd21a812. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.468 248514 DEBUG nova.network.neutron [req-be658d65-8d80-4de1-b040-44f2c5b9c58c req-35130b6a-ba94-4bf4-b6b1-291b1ad6465d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Updating instance_info_cache with network_info: [{"id": "daddb809-b36f-4f29-bd15-459cfd21a812", "address": "fa:16:3e:b3:4e:45", "network": {"id": "cdd30303-3917-438c-8b47-a12827c948d6", "bridge": "br-int", "label": "tempest-network-smoke--1046193350", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdaddb809-b3", "ovs_interfaceid": "daddb809-b36f-4f29-bd15-459cfd21a812", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.506 248514 DEBUG nova.compute.manager [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-vif-unplugged-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.507 248514 DEBUG oslo_concurrency.lockutils [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.507 248514 DEBUG oslo_concurrency.lockutils [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.507 248514 DEBUG oslo_concurrency.lockutils [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.507 248514 DEBUG nova.compute.manager [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] No waiting events found dispatching network-vif-unplugged-daddb809-b36f-4f29-bd15-459cfd21a812 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.508 248514 WARNING nova.compute.manager [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received unexpected event network-vif-unplugged-daddb809-b36f-4f29-bd15-459cfd21a812 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.508 248514 DEBUG nova.compute.manager [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.508 248514 DEBUG oslo_concurrency.lockutils [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.508 248514 DEBUG oslo_concurrency.lockutils [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.508 248514 DEBUG oslo_concurrency.lockutils [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.509 248514 DEBUG nova.compute.manager [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] No waiting events found dispatching network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.509 248514 WARNING nova.compute.manager [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received unexpected event network-vif-plugged-daddb809-b36f-4f29-bd15-459cfd21a812 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.509 248514 DEBUG nova.compute.manager [req-c1577c60-d569-48de-87ac-8fdbe8ff3d4e req-905f999f-2834-426c-8e2b-a0951e321de5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Received event network-vif-deleted-daddb809-b36f-4f29-bd15-459cfd21a812 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.511 248514 DEBUG oslo_concurrency.lockutils [req-be658d65-8d80-4de1-b040-44f2c5b9c58c req-35130b6a-ba94-4bf4-b6b1-291b1ad6465d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-f5f29271-ff94-4d88-bc99-1cfc3e1128a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:04:15 np0005558241 nova_compute[248510]: 2025-12-13 09:04:15.519 248514 DEBUG oslo_concurrency.processutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:04:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:04:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714183722' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:04:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:04:16 np0005558241 nova_compute[248510]: 2025-12-13 09:04:16.085 248514 DEBUG oslo_concurrency.processutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:04:16 np0005558241 nova_compute[248510]: 2025-12-13 09:04:16.091 248514 DEBUG nova.compute.provider_tree [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:04:16 np0005558241 nova_compute[248510]: 2025-12-13 09:04:16.118 248514 DEBUG nova.scheduler.client.report [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:04:16 np0005558241 nova_compute[248510]: 2025-12-13 09:04:16.147 248514 DEBUG oslo_concurrency.lockutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:16 np0005558241 nova_compute[248510]: 2025-12-13 09:04:16.171 248514 INFO nova.scheduler.client.report [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Deleted allocations for instance f5f29271-ff94-4d88-bc99-1cfc3e1128a0#033[00m
Dec 13 04:04:16 np0005558241 nova_compute[248510]: 2025-12-13 09:04:16.260 248514 DEBUG oslo_concurrency.lockutils [None req-859bbce1-3786-4cc9-8376-13fb8a608142 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "f5f29271-ff94-4d88-bc99-1cfc3e1128a0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3095: 321 pgs: 321 active+clean; 281 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 912 KiB/s rd, 2.2 MiB/s wr, 116 op/s
Dec 13 04:04:16 np0005558241 nova_compute[248510]: 2025-12-13 09:04:16.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:04:17 np0005558241 nova_compute[248510]: 2025-12-13 09:04:17.565 248514 INFO nova.compute.manager [None req-8747b0a7-9805-4359-ad8d-4df7dff2a0cb 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Get console output#033[00m
Dec 13 04:04:17 np0005558241 nova_compute[248510]: 2025-12-13 09:04:17.572 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 04:04:17 np0005558241 nova_compute[248510]: 2025-12-13 09:04:17.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:04:17 np0005558241 nova_compute[248510]: 2025-12-13 09:04:17.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:04:17 np0005558241 nova_compute[248510]: 2025-12-13 09:04:17.774 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:04:17 np0005558241 nova_compute[248510]: 2025-12-13 09:04:17.833 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:17.834 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:04:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:17.835 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:04:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:17.836 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:04:18 np0005558241 nova_compute[248510]: 2025-12-13 09:04:18.081 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3096: 321 pgs: 321 active+clean; 259 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 923 KiB/s rd, 2.2 MiB/s wr, 133 op/s
Dec 13 04:04:18 np0005558241 nova_compute[248510]: 2025-12-13 09:04:18.770 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:18 np0005558241 nova_compute[248510]: 2025-12-13 09:04:18.830 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:04:18 np0005558241 nova_compute[248510]: 2025-12-13 09:04:18.830 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:04:18 np0005558241 nova_compute[248510]: 2025-12-13 09:04:18.830 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 04:04:18 np0005558241 nova_compute[248510]: 2025-12-13 09:04:18.831 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:04:19 np0005558241 nova_compute[248510]: 2025-12-13 09:04:19.809 248514 DEBUG nova.compute.manager [req-29e4dd52-4797-4ed6-a2b7-2dc40929b9a4 req-1ad91b84-a536-4baa-8e42-9fd559ec56ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-changed-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:19 np0005558241 nova_compute[248510]: 2025-12-13 09:04:19.810 248514 DEBUG nova.compute.manager [req-29e4dd52-4797-4ed6-a2b7-2dc40929b9a4 req-1ad91b84-a536-4baa-8e42-9fd559ec56ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing instance network info cache due to event network-changed-90d7401e-210b-4cbe-b93d-853787434352. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:04:19 np0005558241 nova_compute[248510]: 2025-12-13 09:04:19.810 248514 DEBUG oslo_concurrency.lockutils [req-29e4dd52-4797-4ed6-a2b7-2dc40929b9a4 req-1ad91b84-a536-4baa-8e42-9fd559ec56ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:04:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3097: 321 pgs: 321 active+clean; 200 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 797 KiB/s rd, 2.2 MiB/s wr, 127 op/s
Dec 13 04:04:20 np0005558241 nova_compute[248510]: 2025-12-13 09:04:20.969 248514 INFO nova.compute.manager [None req-de57d6f8-7376-4ff4-a1b2-cf0f391ac3b4 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Get console output#033[00m
Dec 13 04:04:20 np0005558241 nova_compute[248510]: 2025-12-13 09:04:20.978 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 04:04:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015325918245449068 of space, bias 1.0, pg target 0.45977754736347204 quantized to 32 (current 32)
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006698288550839229 of space, bias 1.0, pg target 0.20094865652517685 quantized to 32 (current 32)
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:04:21 np0005558241 nova_compute[248510]: 2025-12-13 09:04:21.757 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updating instance_info_cache with network_info: [{"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:04:21 np0005558241 nova_compute[248510]: 2025-12-13 09:04:21.788 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:04:21 np0005558241 nova_compute[248510]: 2025-12-13 09:04:21.788 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 04:04:21 np0005558241 nova_compute[248510]: 2025-12-13 09:04:21.789 248514 DEBUG oslo_concurrency.lockutils [req-29e4dd52-4797-4ed6-a2b7-2dc40929b9a4 req-1ad91b84-a536-4baa-8e42-9fd559ec56ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:04:21 np0005558241 nova_compute[248510]: 2025-12-13 09:04:21.790 248514 DEBUG nova.network.neutron [req-29e4dd52-4797-4ed6-a2b7-2dc40929b9a4 req-1ad91b84-a536-4baa-8e42-9fd559ec56ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing network info cache for port 90d7401e-210b-4cbe-b93d-853787434352 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:04:21 np0005558241 nova_compute[248510]: 2025-12-13 09:04:21.919 248514 DEBUG nova.compute.manager [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-unplugged-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:21 np0005558241 nova_compute[248510]: 2025-12-13 09:04:21.920 248514 DEBUG oslo_concurrency.lockutils [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:21 np0005558241 nova_compute[248510]: 2025-12-13 09:04:21.921 248514 DEBUG oslo_concurrency.lockutils [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:21 np0005558241 nova_compute[248510]: 2025-12-13 09:04:21.921 248514 DEBUG oslo_concurrency.lockutils [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:21 np0005558241 nova_compute[248510]: 2025-12-13 09:04:21.922 248514 DEBUG nova.compute.manager [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] No waiting events found dispatching network-vif-unplugged-90d7401e-210b-4cbe-b93d-853787434352 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:04:21 np0005558241 nova_compute[248510]: 2025-12-13 09:04:21.922 248514 WARNING nova.compute.manager [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received unexpected event network-vif-unplugged-90d7401e-210b-4cbe-b93d-853787434352 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:04:21 np0005558241 nova_compute[248510]: 2025-12-13 09:04:21.923 248514 DEBUG nova.compute.manager [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:21 np0005558241 nova_compute[248510]: 2025-12-13 09:04:21.924 248514 DEBUG oslo_concurrency.lockutils [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:21 np0005558241 nova_compute[248510]: 2025-12-13 09:04:21.924 248514 DEBUG oslo_concurrency.lockutils [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:21 np0005558241 nova_compute[248510]: 2025-12-13 09:04:21.925 248514 DEBUG oslo_concurrency.lockutils [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:21 np0005558241 nova_compute[248510]: 2025-12-13 09:04:21.925 248514 DEBUG nova.compute.manager [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] No waiting events found dispatching network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:04:21 np0005558241 nova_compute[248510]: 2025-12-13 09:04:21.926 248514 WARNING nova.compute.manager [req-e20b3e3c-4bb5-4f69-9891-ec3ca3106bca req-ae1be9a5-4c1b-4285-92cc-699937973a4b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received unexpected event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:04:21 np0005558241 ovn_controller[148476]: 2025-12-13T09:04:21Z|01373|binding|INFO|Releasing lport fb6127ed-4b20-49d3-8950-376b6e1d5999 from this chassis (sb_readonly=0)
Dec 13 04:04:22 np0005558241 nova_compute[248510]: 2025-12-13 09:04:22.011 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3098: 321 pgs: 321 active+clean; 200 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 237 KiB/s rd, 1.2 MiB/s wr, 74 op/s
Dec 13 04:04:22 np0005558241 nova_compute[248510]: 2025-12-13 09:04:22.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:04:22 np0005558241 nova_compute[248510]: 2025-12-13 09:04:22.853 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:22 np0005558241 nova_compute[248510]: 2025-12-13 09:04:22.853 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:22 np0005558241 nova_compute[248510]: 2025-12-13 09:04:22.853 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:22 np0005558241 nova_compute[248510]: 2025-12-13 09:04:22.853 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:04:22 np0005558241 nova_compute[248510]: 2025-12-13 09:04:22.854 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:04:23 np0005558241 nova_compute[248510]: 2025-12-13 09:04:23.084 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:04:23 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/826702923' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:04:23 np0005558241 nova_compute[248510]: 2025-12-13 09:04:23.442 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:04:23 np0005558241 nova_compute[248510]: 2025-12-13 09:04:23.544 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:04:23 np0005558241 nova_compute[248510]: 2025-12-13 09:04:23.544 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:04:23 np0005558241 nova_compute[248510]: 2025-12-13 09:04:23.549 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:04:23 np0005558241 nova_compute[248510]: 2025-12-13 09:04:23.549 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:04:23 np0005558241 nova_compute[248510]: 2025-12-13 09:04:23.766 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:04:23 np0005558241 nova_compute[248510]: 2025-12-13 09:04:23.768 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3090MB free_disk=59.89634370058775GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:04:23 np0005558241 nova_compute[248510]: 2025-12-13 09:04:23.768 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:23 np0005558241 nova_compute[248510]: 2025-12-13 09:04:23.769 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:23 np0005558241 nova_compute[248510]: 2025-12-13 09:04:23.772 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:23 np0005558241 nova_compute[248510]: 2025-12-13 09:04:23.890 248514 INFO nova.compute.manager [None req-ca0ae6b8-ab90-47fe-97bb-bd4ff05d4505 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Get console output#033[00m
Dec 13 04:04:23 np0005558241 nova_compute[248510]: 2025-12-13 09:04:23.893 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:04:23 np0005558241 nova_compute[248510]: 2025-12-13 09:04:23.894 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance c89b029c-146b-47ae-8961-0000d4e49e29 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:04:23 np0005558241 nova_compute[248510]: 2025-12-13 09:04:23.894 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:04:23 np0005558241 nova_compute[248510]: 2025-12-13 09:04:23.894 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:04:23 np0005558241 nova_compute[248510]: 2025-12-13 09:04:23.904 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 04:04:23 np0005558241 nova_compute[248510]: 2025-12-13 09:04:23.970 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:04:24 np0005558241 nova_compute[248510]: 2025-12-13 09:04:24.491 248514 DEBUG nova.compute.manager [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-changed-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:24 np0005558241 nova_compute[248510]: 2025-12-13 09:04:24.492 248514 DEBUG nova.compute.manager [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing instance network info cache due to event network-changed-90d7401e-210b-4cbe-b93d-853787434352. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:04:24 np0005558241 nova_compute[248510]: 2025-12-13 09:04:24.493 248514 DEBUG oslo_concurrency.lockutils [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:04:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3099: 321 pgs: 321 active+clean; 200 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 237 KiB/s rd, 1.2 MiB/s wr, 74 op/s
Dec 13 04:04:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:04:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4153118151' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:04:24 np0005558241 nova_compute[248510]: 2025-12-13 09:04:24.568 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:04:24 np0005558241 nova_compute[248510]: 2025-12-13 09:04:24.576 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:04:24 np0005558241 nova_compute[248510]: 2025-12-13 09:04:24.608 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:04:24 np0005558241 nova_compute[248510]: 2025-12-13 09:04:24.653 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:04:24 np0005558241 nova_compute[248510]: 2025-12-13 09:04:24.653 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:25 np0005558241 nova_compute[248510]: 2025-12-13 09:04:25.620 248514 DEBUG nova.network.neutron [req-29e4dd52-4797-4ed6-a2b7-2dc40929b9a4 req-1ad91b84-a536-4baa-8e42-9fd559ec56ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updated VIF entry in instance network info cache for port 90d7401e-210b-4cbe-b93d-853787434352. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:04:25 np0005558241 nova_compute[248510]: 2025-12-13 09:04:25.621 248514 DEBUG nova.network.neutron [req-29e4dd52-4797-4ed6-a2b7-2dc40929b9a4 req-1ad91b84-a536-4baa-8e42-9fd559ec56ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updating instance_info_cache with network_info: [{"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:04:25 np0005558241 nova_compute[248510]: 2025-12-13 09:04:25.653 248514 DEBUG oslo_concurrency.lockutils [req-29e4dd52-4797-4ed6-a2b7-2dc40929b9a4 req-1ad91b84-a536-4baa-8e42-9fd559ec56ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:04:25 np0005558241 nova_compute[248510]: 2025-12-13 09:04:25.654 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:04:25 np0005558241 nova_compute[248510]: 2025-12-13 09:04:25.654 248514 DEBUG oslo_concurrency.lockutils [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:04:25 np0005558241 nova_compute[248510]: 2025-12-13 09:04:25.654 248514 DEBUG nova.network.neutron [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing network info cache for port 90d7401e-210b-4cbe-b93d-853787434352 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:04:25 np0005558241 nova_compute[248510]: 2025-12-13 09:04:25.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:04:25 np0005558241 nova_compute[248510]: 2025-12-13 09:04:25.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.031 248514 DEBUG oslo_concurrency.lockutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "c89b029c-146b-47ae-8961-0000d4e49e29" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.032 248514 DEBUG oslo_concurrency.lockutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.032 248514 DEBUG oslo_concurrency.lockutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.033 248514 DEBUG oslo_concurrency.lockutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.033 248514 DEBUG oslo_concurrency.lockutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.034 248514 INFO nova.compute.manager [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Terminating instance#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.035 248514 DEBUG nova.compute.manager [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:04:26 np0005558241 kernel: tap8b1532d0-42 (unregistering): left promiscuous mode
Dec 13 04:04:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:04:26 np0005558241 NetworkManager[50376]: <info>  [1765616666.0885] device (tap8b1532d0-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:04:26 np0005558241 ovn_controller[148476]: 2025-12-13T09:04:26Z|01374|binding|INFO|Releasing lport 8b1532d0-4269-4522-a1c5-961c9f9254dc from this chassis (sb_readonly=0)
Dec 13 04:04:26 np0005558241 ovn_controller[148476]: 2025-12-13T09:04:26Z|01375|binding|INFO|Setting lport 8b1532d0-4269-4522-a1c5-961c9f9254dc down in Southbound
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.097 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:26 np0005558241 ovn_controller[148476]: 2025-12-13T09:04:26Z|01376|binding|INFO|Removing iface tap8b1532d0-42 ovn-installed in OVS
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.100 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.108 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:b5:f3 10.100.0.9'], port_security=['fa:16:3e:b8:b5:f3 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c89b029c-146b-47ae-8961-0000d4e49e29', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82626aa1-467a-40c3-a9a5-93921d70f234', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f130a7c-de13-4843-94c3-b6b246bfa796, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=8b1532d0-4269-4522-a1c5-961c9f9254dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:04:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.111 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 8b1532d0-4269-4522-a1c5-961c9f9254dc in datapath c2c4278e-24ed-4454-ae7e-9f1ba7df516c unbound from our chassis#033[00m
Dec 13 04:04:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.113 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c2c4278e-24ed-4454-ae7e-9f1ba7df516c#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.124 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.136 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[601e77f6-aa87-4dcf-b30f-9f4aa79f68bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:26 np0005558241 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000086.scope: Deactivated successfully.
Dec 13 04:04:26 np0005558241 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000086.scope: Consumed 14.875s CPU time.
Dec 13 04:04:26 np0005558241 systemd-machined[210538]: Machine qemu-164-instance-00000086 terminated.
Dec 13 04:04:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.173 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2a0ccd9b-698a-4ec1-8436-069c71ce461c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.176 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ad5dfe2e-3de4-4331-9ec6-c473c3867cab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.209 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[495a8c8c-ac76-4409-b344-78dc0001cca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.226 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[300b83c0-72d1-4741-8192-95a27b0a159b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2c4278e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:a6:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 392], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 918511, 'reachable_time': 16376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383629, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.245 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe51239-e276-4b79-bef3-f47c2da5ce32]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc2c4278e-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 918524, 'tstamp': 918524}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383630, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc2c4278e-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 918528, 'tstamp': 918528}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383630, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.247 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2c4278e-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.249 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.257 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.257 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2c4278e-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:04:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.258 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:04:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.258 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc2c4278e-20, col_values=(('external_ids', {'iface-id': 'fb6127ed-4b20-49d3-8950-376b6e1d5999'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:04:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:26.259 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.284 248514 INFO nova.virt.libvirt.driver [-] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Instance destroyed successfully.#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.285 248514 DEBUG nova.objects.instance [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid c89b029c-146b-47ae-8961-0000d4e49e29 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.309 248514 DEBUG nova.virt.libvirt.vif [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:03:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1708858245',display_name='tempest-TestNetworkBasicOps-server-1708858245',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1708858245',id=134,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL++NwsCfuXrqu4UC7qgUOg89a/J8JDHWy/KO/6t34DIcODsr1DE3qwPt8YEbfOTv/XMYKYNpUizYYeq5l6dOm5t1OYnPo2H2KGHxdPaguJpqsjNgEOg4dU6arkPu8gPzA==',key_name='tempest-TestNetworkBasicOps-1668505721',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:03:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-gfm2ze7v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:03:56Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=c89b029c-146b-47ae-8961-0000d4e49e29,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.310 248514 DEBUG nova.network.os_vif_util [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.312 248514 DEBUG nova.network.os_vif_util [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:b5:f3,bridge_name='br-int',has_traffic_filtering=True,id=8b1532d0-4269-4522-a1c5-961c9f9254dc,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b1532d0-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.313 248514 DEBUG os_vif [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:b5:f3,bridge_name='br-int',has_traffic_filtering=True,id=8b1532d0-4269-4522-a1c5-961c9f9254dc,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b1532d0-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.316 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.317 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b1532d0-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.319 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.323 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.326 248514 INFO os_vif [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:b5:f3,bridge_name='br-int',has_traffic_filtering=True,id=8b1532d0-4269-4522-a1c5-961c9f9254dc,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b1532d0-42')#033[00m
Dec 13 04:04:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3100: 321 pgs: 321 active+clean; 200 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 18 KiB/s wr, 26 op/s
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.667 248514 DEBUG nova.compute.manager [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.668 248514 DEBUG oslo_concurrency.lockutils [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.669 248514 DEBUG oslo_concurrency.lockutils [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.669 248514 DEBUG oslo_concurrency.lockutils [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.669 248514 DEBUG nova.compute.manager [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] No waiting events found dispatching network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.670 248514 WARNING nova.compute.manager [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received unexpected event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.670 248514 DEBUG nova.compute.manager [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received event network-changed-8b1532d0-4269-4522-a1c5-961c9f9254dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.671 248514 DEBUG nova.compute.manager [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Refreshing instance network info cache due to event network-changed-8b1532d0-4269-4522-a1c5-961c9f9254dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.671 248514 DEBUG oslo_concurrency.lockutils [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.671 248514 DEBUG oslo_concurrency.lockutils [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.672 248514 DEBUG nova.network.neutron [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Refreshing network info cache for port 8b1532d0-4269-4522-a1c5-961c9f9254dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.680 248514 INFO nova.virt.libvirt.driver [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Deleting instance files /var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29_del#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.681 248514 INFO nova.virt.libvirt.driver [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Deletion of /var/lib/nova/instances/c89b029c-146b-47ae-8961-0000d4e49e29_del complete#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.733 248514 INFO nova.compute.manager [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.733 248514 DEBUG oslo.service.loopingcall [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.734 248514 DEBUG nova.compute.manager [-] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:04:26 np0005558241 nova_compute[248510]: 2025-12-13 09:04:26.734 248514 DEBUG nova.network.neutron [-] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.087 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.096 248514 DEBUG nova.network.neutron [-] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.127 248514 INFO nova.compute.manager [-] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Took 1.39 seconds to deallocate network for instance.#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.235 248514 DEBUG oslo_concurrency.lockutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.235 248514 DEBUG oslo_concurrency.lockutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.239 248514 DEBUG nova.compute.manager [req-fb20ba8d-e454-4c17-9138-1b84b39aed13 req-7b058af0-54ea-4be1-a921-66b2931f47d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received event network-vif-deleted-8b1532d0-4269-4522-a1c5-961c9f9254dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.301 248514 DEBUG oslo_concurrency.processutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:04:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3101: 321 pgs: 321 active+clean; 159 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 19 KiB/s wr, 35 op/s
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.740 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616653.7396152, f5f29271-ff94-4d88-bc99-1cfc3e1128a0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.741 248514 INFO nova.compute.manager [-] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.774 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.777 248514 DEBUG nova.compute.manager [req-16ce5582-c206-4880-9f9e-c56542779066 req-2cbe9e03-3d2e-4a23-b1e7-625f30cc0376 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received event network-vif-plugged-8b1532d0-4269-4522-a1c5-961c9f9254dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.777 248514 DEBUG oslo_concurrency.lockutils [req-16ce5582-c206-4880-9f9e-c56542779066 req-2cbe9e03-3d2e-4a23-b1e7-625f30cc0376 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.777 248514 DEBUG oslo_concurrency.lockutils [req-16ce5582-c206-4880-9f9e-c56542779066 req-2cbe9e03-3d2e-4a23-b1e7-625f30cc0376 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.777 248514 DEBUG oslo_concurrency.lockutils [req-16ce5582-c206-4880-9f9e-c56542779066 req-2cbe9e03-3d2e-4a23-b1e7-625f30cc0376 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.778 248514 DEBUG nova.compute.manager [req-16ce5582-c206-4880-9f9e-c56542779066 req-2cbe9e03-3d2e-4a23-b1e7-625f30cc0376 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] No waiting events found dispatching network-vif-plugged-8b1532d0-4269-4522-a1c5-961c9f9254dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.778 248514 WARNING nova.compute.manager [req-16ce5582-c206-4880-9f9e-c56542779066 req-2cbe9e03-3d2e-4a23-b1e7-625f30cc0376 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received unexpected event network-vif-plugged-8b1532d0-4269-4522-a1c5-961c9f9254dc for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.780 248514 DEBUG nova.compute.manager [None req-a07515fe-850a-494f-ac21-4edea3e7ed48 - - - - - -] [instance: f5f29271-ff94-4d88-bc99-1cfc3e1128a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:04:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:04:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2753694766' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.870 248514 DEBUG oslo_concurrency.processutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.878 248514 DEBUG nova.compute.provider_tree [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.899 248514 DEBUG nova.scheduler.client.report [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.953 248514 DEBUG oslo_concurrency.lockutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.962 248514 DEBUG nova.network.neutron [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updated VIF entry in instance network info cache for port 90d7401e-210b-4cbe-b93d-853787434352. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.963 248514 DEBUG nova.network.neutron [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updating instance_info_cache with network_info: [{"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.998 248514 DEBUG oslo_concurrency.lockutils [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.998 248514 DEBUG nova.compute.manager [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:28 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.999 248514 DEBUG oslo_concurrency.lockutils [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:29 np0005558241 nova_compute[248510]: 2025-12-13 09:04:28.999 248514 DEBUG oslo_concurrency.lockutils [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:29 np0005558241 nova_compute[248510]: 2025-12-13 09:04:29.000 248514 DEBUG oslo_concurrency.lockutils [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:29 np0005558241 nova_compute[248510]: 2025-12-13 09:04:29.000 248514 DEBUG nova.compute.manager [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] No waiting events found dispatching network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:04:29 np0005558241 nova_compute[248510]: 2025-12-13 09:04:29.001 248514 WARNING nova.compute.manager [req-c15652b7-d85c-4182-b96c-7f57993bd93c req-570d1680-a672-43e2-8637-6fc58c978404 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received unexpected event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:04:29 np0005558241 nova_compute[248510]: 2025-12-13 09:04:29.030 248514 INFO nova.scheduler.client.report [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance c89b029c-146b-47ae-8961-0000d4e49e29#033[00m
Dec 13 04:04:29 np0005558241 nova_compute[248510]: 2025-12-13 09:04:29.120 248514 DEBUG oslo_concurrency.lockutils [None req-1f411c36-dbed-4ff1-980a-46139469664e 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:29 np0005558241 nova_compute[248510]: 2025-12-13 09:04:29.330 248514 DEBUG nova.network.neutron [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Updated VIF entry in instance network info cache for port 8b1532d0-4269-4522-a1c5-961c9f9254dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:04:29 np0005558241 nova_compute[248510]: 2025-12-13 09:04:29.331 248514 DEBUG nova.network.neutron [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Updating instance_info_cache with network_info: [{"id": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "address": "fa:16:3e:b8:b5:f3", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b1532d0-42", "ovs_interfaceid": "8b1532d0-4269-4522-a1c5-961c9f9254dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:04:29 np0005558241 nova_compute[248510]: 2025-12-13 09:04:29.359 248514 DEBUG oslo_concurrency.lockutils [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c89b029c-146b-47ae-8961-0000d4e49e29" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:04:29 np0005558241 nova_compute[248510]: 2025-12-13 09:04:29.360 248514 DEBUG nova.compute.manager [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received event network-vif-unplugged-8b1532d0-4269-4522-a1c5-961c9f9254dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:29 np0005558241 nova_compute[248510]: 2025-12-13 09:04:29.360 248514 DEBUG oslo_concurrency.lockutils [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:29 np0005558241 nova_compute[248510]: 2025-12-13 09:04:29.361 248514 DEBUG oslo_concurrency.lockutils [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:29 np0005558241 nova_compute[248510]: 2025-12-13 09:04:29.361 248514 DEBUG oslo_concurrency.lockutils [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c89b029c-146b-47ae-8961-0000d4e49e29-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:29 np0005558241 nova_compute[248510]: 2025-12-13 09:04:29.362 248514 DEBUG nova.compute.manager [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] No waiting events found dispatching network-vif-unplugged-8b1532d0-4269-4522-a1c5-961c9f9254dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:04:29 np0005558241 nova_compute[248510]: 2025-12-13 09:04:29.362 248514 DEBUG nova.compute.manager [req-50aad065-fdfa-411e-a197-eab8ae8dde79 req-b63e2c31-dac5-435e-8c0a-13d53d436ad7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Received event network-vif-unplugged-8b1532d0-4269-4522-a1c5-961c9f9254dc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.503 248514 DEBUG nova.compute.manager [req-5d85ae59-c497-47cb-a796-8bdbf5f600ea req-35dadf80-f08d-4558-8540-f4dc73877c63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-changed-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.505 248514 DEBUG nova.compute.manager [req-5d85ae59-c497-47cb-a796-8bdbf5f600ea req-35dadf80-f08d-4558-8540-f4dc73877c63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing instance network info cache due to event network-changed-90d7401e-210b-4cbe-b93d-853787434352. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.505 248514 DEBUG oslo_concurrency.lockutils [req-5d85ae59-c497-47cb-a796-8bdbf5f600ea req-35dadf80-f08d-4558-8540-f4dc73877c63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.506 248514 DEBUG oslo_concurrency.lockutils [req-5d85ae59-c497-47cb-a796-8bdbf5f600ea req-35dadf80-f08d-4558-8540-f4dc73877c63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.506 248514 DEBUG nova.network.neutron [req-5d85ae59-c497-47cb-a796-8bdbf5f600ea req-35dadf80-f08d-4558-8540-f4dc73877c63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Refreshing network info cache for port 90d7401e-210b-4cbe-b93d-853787434352 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:04:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3102: 321 pgs: 321 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 17 KiB/s wr, 37 op/s
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.622 248514 DEBUG oslo_concurrency.lockutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.623 248514 DEBUG oslo_concurrency.lockutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.623 248514 DEBUG oslo_concurrency.lockutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.623 248514 DEBUG oslo_concurrency.lockutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.624 248514 DEBUG oslo_concurrency.lockutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.625 248514 INFO nova.compute.manager [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Terminating instance#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.627 248514 DEBUG nova.compute.manager [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:04:30 np0005558241 kernel: tap90d7401e-21 (unregistering): left promiscuous mode
Dec 13 04:04:30 np0005558241 NetworkManager[50376]: <info>  [1765616670.6726] device (tap90d7401e-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:04:30 np0005558241 ovn_controller[148476]: 2025-12-13T09:04:30Z|01377|binding|INFO|Releasing lport 90d7401e-210b-4cbe-b93d-853787434352 from this chassis (sb_readonly=0)
Dec 13 04:04:30 np0005558241 ovn_controller[148476]: 2025-12-13T09:04:30Z|01378|binding|INFO|Setting lport 90d7401e-210b-4cbe-b93d-853787434352 down in Southbound
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.681 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:30 np0005558241 ovn_controller[148476]: 2025-12-13T09:04:30Z|01379|binding|INFO|Removing iface tap90d7401e-21 ovn-installed in OVS
Dec 13 04:04:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:30.690 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:b4:6d 10.100.0.14'], port_security=['fa:16:3e:69:b4:6d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '19f43277-8c8d-48f2-9ec7-2b0d36a06e27', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '8', 'neutron:security_group_ids': '42e759fc-c2e4-4c38-92ea-ac51e44d350f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1f130a7c-de13-4843-94c3-b6b246bfa796, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=90d7401e-210b-4cbe-b93d-853787434352) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:04:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:30.691 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 90d7401e-210b-4cbe-b93d-853787434352 in datapath c2c4278e-24ed-4454-ae7e-9f1ba7df516c unbound from our chassis#033[00m
Dec 13 04:04:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:30.693 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c2c4278e-24ed-4454-ae7e-9f1ba7df516c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:04:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:30.694 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[68606cd1-a6be-421a-9a32-070f5bd3c1ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:30.694 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c namespace which is not needed anymore#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.717 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:30 np0005558241 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000085.scope: Deactivated successfully.
Dec 13 04:04:30 np0005558241 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000085.scope: Consumed 15.165s CPU time.
Dec 13 04:04:30 np0005558241 systemd-machined[210538]: Machine qemu-162-instance-00000085 terminated.
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.854 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.861 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.872 248514 INFO nova.virt.libvirt.driver [-] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Instance destroyed successfully.#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.872 248514 DEBUG nova.objects.instance [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:04:30 np0005558241 neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c[382353]: [NOTICE]   (382364) : haproxy version is 2.8.14-c23fe91
Dec 13 04:04:30 np0005558241 neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c[382353]: [NOTICE]   (382364) : path to executable is /usr/sbin/haproxy
Dec 13 04:04:30 np0005558241 neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c[382353]: [WARNING]  (382364) : Exiting Master process...
Dec 13 04:04:30 np0005558241 neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c[382353]: [WARNING]  (382364) : Exiting Master process...
Dec 13 04:04:30 np0005558241 neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c[382353]: [ALERT]    (382364) : Current worker (382377) exited with code 143 (Terminated)
Dec 13 04:04:30 np0005558241 neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c[382353]: [WARNING]  (382364) : All workers exited. Exiting... (0)
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.895 248514 DEBUG nova.virt.libvirt.vif [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:03:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1016404015',display_name='tempest-TestNetworkBasicOps-server-1016404015',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1016404015',id=133,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ5E+jKld2VdKWtObDqeKBGvKmvekTSiiA9sjTJdF7hEcONw3irfWTSpIiwVY9k/7NWwXxkQuubngpznyfOsRWuRq1jkSBu1LNMt0g8LKC1KQLWo836n2hzpQ9ilyrcugQ==',key_name='tempest-TestNetworkBasicOps-149670649',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:03:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-i1dfr0ue',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:03:30Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=19f43277-8c8d-48f2-9ec7-2b0d36a06e27,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.895 248514 DEBUG nova.network.os_vif_util [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.896 248514 DEBUG nova.network.os_vif_util [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:b4:6d,bridge_name='br-int',has_traffic_filtering=True,id=90d7401e-210b-4cbe-b93d-853787434352,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d7401e-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.896 248514 DEBUG os_vif [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:b4:6d,bridge_name='br-int',has_traffic_filtering=True,id=90d7401e-210b-4cbe-b93d-853787434352,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d7401e-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:04:30 np0005558241 systemd[1]: libpod-097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd.scope: Deactivated successfully.
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.898 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.898 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap90d7401e-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:04:30 np0005558241 podman[383709]: 2025-12-13 09:04:30.901111141 +0000 UTC m=+0.072645811 container died 097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.902 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:04:30 np0005558241 nova_compute[248510]: 2025-12-13 09:04:30.905 248514 INFO os_vif [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:b4:6d,bridge_name='br-int',has_traffic_filtering=True,id=90d7401e-210b-4cbe-b93d-853787434352,network=Network(c2c4278e-24ed-4454-ae7e-9f1ba7df516c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap90d7401e-21')#033[00m
Dec 13 04:04:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd-userdata-shm.mount: Deactivated successfully.
Dec 13 04:04:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e6230ed043b200e92eb147763a359dfae1b35945024fa55c5518b190e8cc8fc9-merged.mount: Deactivated successfully.
Dec 13 04:04:30 np0005558241 podman[383709]: 2025-12-13 09:04:30.946655823 +0000 UTC m=+0.118190493 container cleanup 097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:04:30 np0005558241 systemd[1]: libpod-conmon-097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd.scope: Deactivated successfully.
Dec 13 04:04:31 np0005558241 podman[383765]: 2025-12-13 09:04:31.020626316 +0000 UTC m=+0.049145703 container remove 097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 04:04:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:31.027 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb8a9f5-b63c-41d3-839b-5db99ed2e0f9]: (4, ('Sat Dec 13 09:04:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c (097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd)\n097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd\nSat Dec 13 09:04:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c (097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd)\n097b88a516774f15ea69fa6d20d7139362978491a2835071ed834f4d602627bd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:31.030 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2026a5c6-a2c2-4a28-9f82-ddd56038a7e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:31.033 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2c4278e-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:04:31 np0005558241 kernel: tapc2c4278e-20: left promiscuous mode
Dec 13 04:04:31 np0005558241 nova_compute[248510]: 2025-12-13 09:04:31.035 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:31 np0005558241 nova_compute[248510]: 2025-12-13 09:04:31.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:31.051 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cbbc86a5-b506-49cd-955b-aa075c6964ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:31.077 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0c25cb0d-4bd0-4c24-945b-7849fcec603f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:31.078 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[85a1157c-3926-4b39-ac68-3a983f6dfe18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:04:31 np0005558241 nova_compute[248510]: 2025-12-13 09:04:31.092 248514 DEBUG nova.compute.manager [req-df6708d7-d5b7-4fbd-9077-a2dfb8a2683a req-86da4124-26c3-4a93-b419-5a75584ba962 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-unplugged-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:31 np0005558241 nova_compute[248510]: 2025-12-13 09:04:31.093 248514 DEBUG oslo_concurrency.lockutils [req-df6708d7-d5b7-4fbd-9077-a2dfb8a2683a req-86da4124-26c3-4a93-b419-5a75584ba962 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:31 np0005558241 nova_compute[248510]: 2025-12-13 09:04:31.093 248514 DEBUG oslo_concurrency.lockutils [req-df6708d7-d5b7-4fbd-9077-a2dfb8a2683a req-86da4124-26c3-4a93-b419-5a75584ba962 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:31 np0005558241 nova_compute[248510]: 2025-12-13 09:04:31.093 248514 DEBUG oslo_concurrency.lockutils [req-df6708d7-d5b7-4fbd-9077-a2dfb8a2683a req-86da4124-26c3-4a93-b419-5a75584ba962 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:31 np0005558241 nova_compute[248510]: 2025-12-13 09:04:31.094 248514 DEBUG nova.compute.manager [req-df6708d7-d5b7-4fbd-9077-a2dfb8a2683a req-86da4124-26c3-4a93-b419-5a75584ba962 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] No waiting events found dispatching network-vif-unplugged-90d7401e-210b-4cbe-b93d-853787434352 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:04:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:31.093 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b0dce131-1acd-450f-8571-6db91d4a6f41]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 918498, 'reachable_time': 20111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383781, 'error': None, 'target': 'ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:31 np0005558241 nova_compute[248510]: 2025-12-13 09:04:31.094 248514 DEBUG nova.compute.manager [req-df6708d7-d5b7-4fbd-9077-a2dfb8a2683a req-86da4124-26c3-4a93-b419-5a75584ba962 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-unplugged-90d7401e-210b-4cbe-b93d-853787434352 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:04:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:31.096 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c2c4278e-24ed-4454-ae7e-9f1ba7df516c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:04:31 np0005558241 systemd[1]: run-netns-ovnmeta\x2dc2c4278e\x2d24ed\x2d4454\x2dae7e\x2d9f1ba7df516c.mount: Deactivated successfully.
Dec 13 04:04:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:31.096 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd57a8c-99f5-4d99-8b35-0ff05392a722]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:31 np0005558241 nova_compute[248510]: 2025-12-13 09:04:31.220 248514 INFO nova.virt.libvirt.driver [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Deleting instance files /var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27_del#033[00m
Dec 13 04:04:31 np0005558241 nova_compute[248510]: 2025-12-13 09:04:31.221 248514 INFO nova.virt.libvirt.driver [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Deletion of /var/lib/nova/instances/19f43277-8c8d-48f2-9ec7-2b0d36a06e27_del complete#033[00m
Dec 13 04:04:31 np0005558241 nova_compute[248510]: 2025-12-13 09:04:31.293 248514 INFO nova.compute.manager [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:04:31 np0005558241 nova_compute[248510]: 2025-12-13 09:04:31.294 248514 DEBUG oslo.service.loopingcall [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:04:31 np0005558241 nova_compute[248510]: 2025-12-13 09:04:31.296 248514 DEBUG nova.compute.manager [-] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:04:31 np0005558241 nova_compute[248510]: 2025-12-13 09:04:31.296 248514 DEBUG nova.network.neutron [-] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:04:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:04:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:04:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:04:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:04:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:04:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:04:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:04:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:04:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:04:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:04:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:04:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:04:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3103: 321 pgs: 321 active+clean; 121 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Dec 13 04:04:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:04:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:04:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:04:32 np0005558241 nova_compute[248510]: 2025-12-13 09:04:32.703 248514 DEBUG nova.network.neutron [req-5d85ae59-c497-47cb-a796-8bdbf5f600ea req-35dadf80-f08d-4558-8540-f4dc73877c63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updated VIF entry in instance network info cache for port 90d7401e-210b-4cbe-b93d-853787434352. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:04:32 np0005558241 nova_compute[248510]: 2025-12-13 09:04:32.703 248514 DEBUG nova.network.neutron [req-5d85ae59-c497-47cb-a796-8bdbf5f600ea req-35dadf80-f08d-4558-8540-f4dc73877c63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updating instance_info_cache with network_info: [{"id": "90d7401e-210b-4cbe-b93d-853787434352", "address": "fa:16:3e:69:b4:6d", "network": {"id": "c2c4278e-24ed-4454-ae7e-9f1ba7df516c", "bridge": "br-int", "label": "tempest-network-smoke--284393362", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap90d7401e-21", "ovs_interfaceid": "90d7401e-210b-4cbe-b93d-853787434352", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:04:32 np0005558241 podman[383922]: 2025-12-13 09:04:32.729267376 +0000 UTC m=+0.043557923 container create b00535a528771d8c6213996ce39515d8033782f63ab92b5e86c0e3f214fbaf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 04:04:32 np0005558241 nova_compute[248510]: 2025-12-13 09:04:32.733 248514 DEBUG oslo_concurrency.lockutils [req-5d85ae59-c497-47cb-a796-8bdbf5f600ea req-35dadf80-f08d-4558-8540-f4dc73877c63 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-19f43277-8c8d-48f2-9ec7-2b0d36a06e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:04:32 np0005558241 systemd[1]: Started libpod-conmon-b00535a528771d8c6213996ce39515d8033782f63ab92b5e86c0e3f214fbaf95.scope.
Dec 13 04:04:32 np0005558241 nova_compute[248510]: 2025-12-13 09:04:32.790 248514 DEBUG nova.network.neutron [-] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:04:32 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:04:32 np0005558241 nova_compute[248510]: 2025-12-13 09:04:32.806 248514 INFO nova.compute.manager [-] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Took 1.51 seconds to deallocate network for instance.#033[00m
Dec 13 04:04:32 np0005558241 podman[383922]: 2025-12-13 09:04:32.710237189 +0000 UTC m=+0.024527746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:04:32 np0005558241 podman[383922]: 2025-12-13 09:04:32.814476941 +0000 UTC m=+0.128767518 container init b00535a528771d8c6213996ce39515d8033782f63ab92b5e86c0e3f214fbaf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wing, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec 13 04:04:32 np0005558241 podman[383922]: 2025-12-13 09:04:32.823828035 +0000 UTC m=+0.138118592 container start b00535a528771d8c6213996ce39515d8033782f63ab92b5e86c0e3f214fbaf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 04:04:32 np0005558241 podman[383922]: 2025-12-13 09:04:32.827431405 +0000 UTC m=+0.141721952 container attach b00535a528771d8c6213996ce39515d8033782f63ab92b5e86c0e3f214fbaf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wing, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:04:32 np0005558241 pedantic_wing[383938]: 167 167
Dec 13 04:04:32 np0005558241 systemd[1]: libpod-b00535a528771d8c6213996ce39515d8033782f63ab92b5e86c0e3f214fbaf95.scope: Deactivated successfully.
Dec 13 04:04:32 np0005558241 podman[383922]: 2025-12-13 09:04:32.833394525 +0000 UTC m=+0.147685072 container died b00535a528771d8c6213996ce39515d8033782f63ab92b5e86c0e3f214fbaf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:04:32 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7be5c1772ccec0ce56345119b1f1e6a1efb309150b0da4c514c735297deb6461-merged.mount: Deactivated successfully.
Dec 13 04:04:32 np0005558241 nova_compute[248510]: 2025-12-13 09:04:32.867 248514 DEBUG oslo_concurrency.lockutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:32 np0005558241 nova_compute[248510]: 2025-12-13 09:04:32.868 248514 DEBUG oslo_concurrency.lockutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:32 np0005558241 podman[383922]: 2025-12-13 09:04:32.882188587 +0000 UTC m=+0.196479134 container remove b00535a528771d8c6213996ce39515d8033782f63ab92b5e86c0e3f214fbaf95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_wing, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 04:04:32 np0005558241 systemd[1]: libpod-conmon-b00535a528771d8c6213996ce39515d8033782f63ab92b5e86c0e3f214fbaf95.scope: Deactivated successfully.
Dec 13 04:04:32 np0005558241 nova_compute[248510]: 2025-12-13 09:04:32.930 248514 DEBUG oslo_concurrency.processutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:04:33 np0005558241 podman[383962]: 2025-12-13 09:04:33.080142467 +0000 UTC m=+0.051471871 container create a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 04:04:33 np0005558241 nova_compute[248510]: 2025-12-13 09:04:33.090 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:33 np0005558241 systemd[1]: Started libpod-conmon-a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e.scope.
Dec 13 04:04:33 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:04:33 np0005558241 podman[383962]: 2025-12-13 09:04:33.058296039 +0000 UTC m=+0.029625473 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:04:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8facbec6b1c2035a4fdfb7237ffe8e0e572ac290034949d8388d3c922a73c74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8facbec6b1c2035a4fdfb7237ffe8e0e572ac290034949d8388d3c922a73c74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8facbec6b1c2035a4fdfb7237ffe8e0e572ac290034949d8388d3c922a73c74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8facbec6b1c2035a4fdfb7237ffe8e0e572ac290034949d8388d3c922a73c74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8facbec6b1c2035a4fdfb7237ffe8e0e572ac290034949d8388d3c922a73c74/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:33 np0005558241 nova_compute[248510]: 2025-12-13 09:04:33.183 248514 DEBUG nova.compute.manager [req-a3cc7cb9-9067-4bc0-b503-70db0dad445b req-1885712b-c9df-4250-b43d-e59c05a5b471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:33 np0005558241 nova_compute[248510]: 2025-12-13 09:04:33.184 248514 DEBUG oslo_concurrency.lockutils [req-a3cc7cb9-9067-4bc0-b503-70db0dad445b req-1885712b-c9df-4250-b43d-e59c05a5b471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:33 np0005558241 nova_compute[248510]: 2025-12-13 09:04:33.184 248514 DEBUG oslo_concurrency.lockutils [req-a3cc7cb9-9067-4bc0-b503-70db0dad445b req-1885712b-c9df-4250-b43d-e59c05a5b471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:33 np0005558241 nova_compute[248510]: 2025-12-13 09:04:33.185 248514 DEBUG oslo_concurrency.lockutils [req-a3cc7cb9-9067-4bc0-b503-70db0dad445b req-1885712b-c9df-4250-b43d-e59c05a5b471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:33 np0005558241 nova_compute[248510]: 2025-12-13 09:04:33.185 248514 DEBUG nova.compute.manager [req-a3cc7cb9-9067-4bc0-b503-70db0dad445b req-1885712b-c9df-4250-b43d-e59c05a5b471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] No waiting events found dispatching network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:04:33 np0005558241 nova_compute[248510]: 2025-12-13 09:04:33.186 248514 WARNING nova.compute.manager [req-a3cc7cb9-9067-4bc0-b503-70db0dad445b req-1885712b-c9df-4250-b43d-e59c05a5b471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received unexpected event network-vif-plugged-90d7401e-210b-4cbe-b93d-853787434352 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:04:33 np0005558241 nova_compute[248510]: 2025-12-13 09:04:33.186 248514 DEBUG nova.compute.manager [req-a3cc7cb9-9067-4bc0-b503-70db0dad445b req-1885712b-c9df-4250-b43d-e59c05a5b471 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Received event network-vif-deleted-90d7401e-210b-4cbe-b93d-853787434352 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:33 np0005558241 podman[383962]: 2025-12-13 09:04:33.198176344 +0000 UTC m=+0.169505748 container init a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:04:33 np0005558241 podman[383962]: 2025-12-13 09:04:33.205294362 +0000 UTC m=+0.176623746 container start a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 04:04:33 np0005558241 podman[383962]: 2025-12-13 09:04:33.208650466 +0000 UTC m=+0.179979850 container attach a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_bardeen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 04:04:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:04:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3295483595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:04:33 np0005558241 nova_compute[248510]: 2025-12-13 09:04:33.487 248514 DEBUG oslo_concurrency.processutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:04:33 np0005558241 nova_compute[248510]: 2025-12-13 09:04:33.494 248514 DEBUG nova.compute.provider_tree [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:04:33 np0005558241 nova_compute[248510]: 2025-12-13 09:04:33.498 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:33 np0005558241 nova_compute[248510]: 2025-12-13 09:04:33.518 248514 DEBUG nova.scheduler.client.report [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:04:33 np0005558241 nova_compute[248510]: 2025-12-13 09:04:33.541 248514 DEBUG oslo_concurrency.lockutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:33 np0005558241 nova_compute[248510]: 2025-12-13 09:04:33.593 248514 INFO nova.scheduler.client.report [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance 19f43277-8c8d-48f2-9ec7-2b0d36a06e27#033[00m
Dec 13 04:04:33 np0005558241 nova_compute[248510]: 2025-12-13 09:04:33.679 248514 DEBUG oslo_concurrency.lockutils [None req-512b3a9c-fe53-4b21-bff1-cc7a1911a995 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "19f43277-8c8d-48f2-9ec7-2b0d36a06e27" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:33 np0005558241 pedantic_bardeen[383997]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:04:33 np0005558241 pedantic_bardeen[383997]: --> All data devices are unavailable
Dec 13 04:04:33 np0005558241 systemd[1]: libpod-a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e.scope: Deactivated successfully.
Dec 13 04:04:33 np0005558241 conmon[383997]: conmon a49377d4ab703814b3e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e.scope/container/memory.events
Dec 13 04:04:33 np0005558241 podman[383962]: 2025-12-13 09:04:33.792420763 +0000 UTC m=+0.763750177 container died a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_bardeen, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 04:04:33 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b8facbec6b1c2035a4fdfb7237ffe8e0e572ac290034949d8388d3c922a73c74-merged.mount: Deactivated successfully.
Dec 13 04:04:33 np0005558241 podman[383962]: 2025-12-13 09:04:33.847478312 +0000 UTC m=+0.818807696 container remove a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_bardeen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 04:04:33 np0005558241 systemd[1]: libpod-conmon-a49377d4ab703814b3e27032ca48c61fba0332ef49fbb686227c344ac7f2ed8e.scope: Deactivated successfully.
Dec 13 04:04:34 np0005558241 podman[384095]: 2025-12-13 09:04:34.372618298 +0000 UTC m=+0.062120717 container create 1b4a64153726d11b6476965584a67454bfe90658b5e8526316e6ddc5d43a22d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:04:34 np0005558241 systemd[1]: Started libpod-conmon-1b4a64153726d11b6476965584a67454bfe90658b5e8526316e6ddc5d43a22d3.scope.
Dec 13 04:04:34 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:04:34 np0005558241 podman[384095]: 2025-12-13 09:04:34.35512257 +0000 UTC m=+0.044624989 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:04:34 np0005558241 podman[384095]: 2025-12-13 09:04:34.449284989 +0000 UTC m=+0.138787438 container init 1b4a64153726d11b6476965584a67454bfe90658b5e8526316e6ddc5d43a22d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_darwin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:04:34 np0005558241 podman[384095]: 2025-12-13 09:04:34.461456224 +0000 UTC m=+0.150958633 container start 1b4a64153726d11b6476965584a67454bfe90658b5e8526316e6ddc5d43a22d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 04:04:34 np0005558241 condescending_darwin[384111]: 167 167
Dec 13 04:04:34 np0005558241 podman[384095]: 2025-12-13 09:04:34.465947956 +0000 UTC m=+0.155450455 container attach 1b4a64153726d11b6476965584a67454bfe90658b5e8526316e6ddc5d43a22d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_darwin, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 04:04:34 np0005558241 systemd[1]: libpod-1b4a64153726d11b6476965584a67454bfe90658b5e8526316e6ddc5d43a22d3.scope: Deactivated successfully.
Dec 13 04:04:34 np0005558241 podman[384095]: 2025-12-13 09:04:34.467412613 +0000 UTC m=+0.156915032 container died 1b4a64153726d11b6476965584a67454bfe90658b5e8526316e6ddc5d43a22d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_darwin, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:04:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1cc7266df2d5c8dba92b9785e109499e5f16c73045492728919e810bda8c1e3d-merged.mount: Deactivated successfully.
Dec 13 04:04:34 np0005558241 podman[384095]: 2025-12-13 09:04:34.515293043 +0000 UTC m=+0.204795442 container remove 1b4a64153726d11b6476965584a67454bfe90658b5e8526316e6ddc5d43a22d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_darwin, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:04:34 np0005558241 systemd[1]: libpod-conmon-1b4a64153726d11b6476965584a67454bfe90658b5e8526316e6ddc5d43a22d3.scope: Deactivated successfully.
Dec 13 04:04:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3104: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 8.1 KiB/s wr, 56 op/s
Dec 13 04:04:34 np0005558241 podman[384135]: 2025-12-13 09:04:34.734775042 +0000 UTC m=+0.068984740 container create f9b1d0b3a044d5915ef8b264dbe13d4b2d75b1cf0446c195b75098782526cc05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:04:34 np0005558241 systemd[1]: Started libpod-conmon-f9b1d0b3a044d5915ef8b264dbe13d4b2d75b1cf0446c195b75098782526cc05.scope.
Dec 13 04:04:34 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:04:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63677bedcbae4c2b6ef6c06c52f725dde0aff8d0dfcf606accdabe46a0ea19a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63677bedcbae4c2b6ef6c06c52f725dde0aff8d0dfcf606accdabe46a0ea19a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63677bedcbae4c2b6ef6c06c52f725dde0aff8d0dfcf606accdabe46a0ea19a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:34 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63677bedcbae4c2b6ef6c06c52f725dde0aff8d0dfcf606accdabe46a0ea19a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:34 np0005558241 podman[384135]: 2025-12-13 09:04:34.707050877 +0000 UTC m=+0.041260635 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:04:34 np0005558241 podman[384135]: 2025-12-13 09:04:34.818982352 +0000 UTC m=+0.153192080 container init f9b1d0b3a044d5915ef8b264dbe13d4b2d75b1cf0446c195b75098782526cc05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_ganguly, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 04:04:34 np0005558241 podman[384135]: 2025-12-13 09:04:34.833370822 +0000 UTC m=+0.167580520 container start f9b1d0b3a044d5915ef8b264dbe13d4b2d75b1cf0446c195b75098782526cc05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_ganguly, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:04:34 np0005558241 podman[384135]: 2025-12-13 09:04:34.838315436 +0000 UTC m=+0.172525184 container attach f9b1d0b3a044d5915ef8b264dbe13d4b2d75b1cf0446c195b75098782526cc05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_ganguly, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]: {
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:    "0": [
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:        {
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "devices": [
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "/dev/loop3"
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            ],
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "lv_name": "ceph_lv0",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "lv_size": "21470642176",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "name": "ceph_lv0",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "tags": {
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.cluster_name": "ceph",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.crush_device_class": "",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.encrypted": "0",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.objectstore": "bluestore",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.osd_id": "0",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.type": "block",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.vdo": "0",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.with_tpm": "0"
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            },
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "type": "block",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "vg_name": "ceph_vg0"
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:        }
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:    ],
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:    "1": [
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:        {
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "devices": [
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "/dev/loop4"
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            ],
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "lv_name": "ceph_lv1",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "lv_size": "21470642176",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "name": "ceph_lv1",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "tags": {
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.cluster_name": "ceph",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.crush_device_class": "",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.encrypted": "0",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.objectstore": "bluestore",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.osd_id": "1",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.type": "block",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.vdo": "0",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.with_tpm": "0"
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            },
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "type": "block",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "vg_name": "ceph_vg1"
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:        }
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:    ],
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:    "2": [
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:        {
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "devices": [
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "/dev/loop5"
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            ],
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "lv_name": "ceph_lv2",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "lv_size": "21470642176",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "name": "ceph_lv2",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "tags": {
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.cluster_name": "ceph",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.crush_device_class": "",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.encrypted": "0",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.objectstore": "bluestore",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.osd_id": "2",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.type": "block",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.vdo": "0",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:                "ceph.with_tpm": "0"
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            },
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "type": "block",
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:            "vg_name": "ceph_vg2"
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:        }
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]:    ]
Dec 13 04:04:35 np0005558241 naughty_ganguly[384152]: }
Dec 13 04:04:35 np0005558241 systemd[1]: libpod-f9b1d0b3a044d5915ef8b264dbe13d4b2d75b1cf0446c195b75098782526cc05.scope: Deactivated successfully.
Dec 13 04:04:35 np0005558241 podman[384135]: 2025-12-13 09:04:35.137139953 +0000 UTC m=+0.471349661 container died f9b1d0b3a044d5915ef8b264dbe13d4b2d75b1cf0446c195b75098782526cc05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:04:35 np0005558241 systemd[1]: var-lib-containers-storage-overlay-63677bedcbae4c2b6ef6c06c52f725dde0aff8d0dfcf606accdabe46a0ea19a4-merged.mount: Deactivated successfully.
Dec 13 04:04:35 np0005558241 podman[384135]: 2025-12-13 09:04:35.191566976 +0000 UTC m=+0.525776644 container remove f9b1d0b3a044d5915ef8b264dbe13d4b2d75b1cf0446c195b75098782526cc05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_ganguly, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:04:35 np0005558241 systemd[1]: libpod-conmon-f9b1d0b3a044d5915ef8b264dbe13d4b2d75b1cf0446c195b75098782526cc05.scope: Deactivated successfully.
Dec 13 04:04:35 np0005558241 podman[384236]: 2025-12-13 09:04:35.762877531 +0000 UTC m=+0.054450346 container create e7047bd7687f4506a8ef06e105d251c4eb31efcda8719df9b6cdac8c5885ce43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 04:04:35 np0005558241 systemd[1]: Started libpod-conmon-e7047bd7687f4506a8ef06e105d251c4eb31efcda8719df9b6cdac8c5885ce43.scope.
Dec 13 04:04:35 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:04:35 np0005558241 podman[384236]: 2025-12-13 09:04:35.744145961 +0000 UTC m=+0.035718816 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:04:35 np0005558241 podman[384236]: 2025-12-13 09:04:35.853919792 +0000 UTC m=+0.145492667 container init e7047bd7687f4506a8ef06e105d251c4eb31efcda8719df9b6cdac8c5885ce43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:04:35 np0005558241 podman[384236]: 2025-12-13 09:04:35.861300496 +0000 UTC m=+0.152873321 container start e7047bd7687f4506a8ef06e105d251c4eb31efcda8719df9b6cdac8c5885ce43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 04:04:35 np0005558241 podman[384236]: 2025-12-13 09:04:35.864678041 +0000 UTC m=+0.156250876 container attach e7047bd7687f4506a8ef06e105d251c4eb31efcda8719df9b6cdac8c5885ce43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:04:35 np0005558241 nostalgic_jennings[384252]: 167 167
Dec 13 04:04:35 np0005558241 systemd[1]: libpod-e7047bd7687f4506a8ef06e105d251c4eb31efcda8719df9b6cdac8c5885ce43.scope: Deactivated successfully.
Dec 13 04:04:35 np0005558241 podman[384236]: 2025-12-13 09:04:35.87064098 +0000 UTC m=+0.162213795 container died e7047bd7687f4506a8ef06e105d251c4eb31efcda8719df9b6cdac8c5885ce43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:04:35 np0005558241 systemd[1]: var-lib-containers-storage-overlay-17b7d72f077d8f001193fab4cad49204f1df912da607d97c738c6e8fd71b541d-merged.mount: Deactivated successfully.
Dec 13 04:04:35 np0005558241 nova_compute[248510]: 2025-12-13 09:04:35.930 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:04:35 np0005558241 podman[384236]: 2025-12-13 09:04:35.947062485 +0000 UTC m=+0.238635310 container remove e7047bd7687f4506a8ef06e105d251c4eb31efcda8719df9b6cdac8c5885ce43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:04:35 np0005558241 systemd[1]: libpod-conmon-e7047bd7687f4506a8ef06e105d251c4eb31efcda8719df9b6cdac8c5885ce43.scope: Deactivated successfully.
Dec 13 04:04:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:04:36 np0005558241 podman[384276]: 2025-12-13 09:04:36.155927088 +0000 UTC m=+0.062079086 container create aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carson, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:04:36 np0005558241 systemd[1]: Started libpod-conmon-aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3.scope.
Dec 13 04:04:36 np0005558241 podman[384276]: 2025-12-13 09:04:36.124384238 +0000 UTC m=+0.030536316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:04:36 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:04:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac3542f9026911fe042c7e8a179cf2dc0bba49f901356ec0348318d8fdebe70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac3542f9026911fe042c7e8a179cf2dc0bba49f901356ec0348318d8fdebe70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac3542f9026911fe042c7e8a179cf2dc0bba49f901356ec0348318d8fdebe70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac3542f9026911fe042c7e8a179cf2dc0bba49f901356ec0348318d8fdebe70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:36 np0005558241 podman[384276]: 2025-12-13 09:04:36.26335331 +0000 UTC m=+0.169505298 container init aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:04:36 np0005558241 podman[384276]: 2025-12-13 09:04:36.279590847 +0000 UTC m=+0.185742795 container start aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 04:04:36 np0005558241 podman[384276]: 2025-12-13 09:04:36.283235518 +0000 UTC m=+0.189387556 container attach aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 04:04:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3105: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Dec 13 04:04:37 np0005558241 lvm[384368]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:04:37 np0005558241 lvm[384368]: VG ceph_vg0 finished
Dec 13 04:04:37 np0005558241 lvm[384371]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:04:37 np0005558241 lvm[384371]: VG ceph_vg1 finished
Dec 13 04:04:37 np0005558241 lvm[384373]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:04:37 np0005558241 lvm[384373]: VG ceph_vg2 finished
Dec 13 04:04:37 np0005558241 silly_carson[384292]: {}
Dec 13 04:04:37 np0005558241 systemd[1]: libpod-aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3.scope: Deactivated successfully.
Dec 13 04:04:37 np0005558241 systemd[1]: libpod-aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3.scope: Consumed 1.573s CPU time.
Dec 13 04:04:37 np0005558241 podman[384276]: 2025-12-13 09:04:37.250317518 +0000 UTC m=+1.156469516 container died aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 04:04:37 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5ac3542f9026911fe042c7e8a179cf2dc0bba49f901356ec0348318d8fdebe70-merged.mount: Deactivated successfully.
Dec 13 04:04:37 np0005558241 podman[384276]: 2025-12-13 09:04:37.295804267 +0000 UTC m=+1.201956255 container remove aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:04:37 np0005558241 systemd[1]: libpod-conmon-aac35f27a67f34959fdcdf2d42e1fffe7e1a68d06d9235772cd08f3777d86db3.scope: Deactivated successfully.
Dec 13 04:04:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:04:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:04:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:04:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:04:38 np0005558241 nova_compute[248510]: 2025-12-13 09:04:38.092 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:04:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:04:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3106: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Dec 13 04:04:39 np0005558241 nova_compute[248510]: 2025-12-13 09:04:39.681 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:39 np0005558241 nova_compute[248510]: 2025-12-13 09:04:39.797 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:04:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:04:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:04:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:04:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:04:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:04:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3107: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.2 KiB/s wr, 46 op/s
Dec 13 04:04:40 np0005558241 nova_compute[248510]: 2025-12-13 09:04:40.933 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:04:41 np0005558241 nova_compute[248510]: 2025-12-13 09:04:41.280 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616666.2778819, c89b029c-146b-47ae-8961-0000d4e49e29 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:04:41 np0005558241 nova_compute[248510]: 2025-12-13 09:04:41.281 248514 INFO nova.compute.manager [-] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:04:41 np0005558241 nova_compute[248510]: 2025-12-13 09:04:41.316 248514 DEBUG nova.compute.manager [None req-66ad3494-31f1-46c9-9c37-a35870d4fb05 - - - - - -] [instance: c89b029c-146b-47ae-8961-0000d4e49e29] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:04:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3108: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 13 04:04:43 np0005558241 podman[384418]: 2025-12-13 09:04:43.010705231 +0000 UTC m=+0.080704173 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 13 04:04:43 np0005558241 podman[384419]: 2025-12-13 09:04:43.026884196 +0000 UTC m=+0.103579306 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 04:04:43 np0005558241 podman[384417]: 2025-12-13 09:04:43.039480562 +0000 UTC m=+0.116587752 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 04:04:43 np0005558241 nova_compute[248510]: 2025-12-13 09:04:43.095 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3109: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 30 op/s
Dec 13 04:04:45 np0005558241 nova_compute[248510]: 2025-12-13 09:04:45.870 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616670.869777, 19f43277-8c8d-48f2-9ec7-2b0d36a06e27 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:04:45 np0005558241 nova_compute[248510]: 2025-12-13 09:04:45.870 248514 INFO nova.compute.manager [-] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:04:45 np0005558241 nova_compute[248510]: 2025-12-13 09:04:45.901 248514 DEBUG nova.compute.manager [None req-ff76660d-c0c7-477b-b79d-d1524f2f2f8c - - - - - -] [instance: 19f43277-8c8d-48f2-9ec7-2b0d36a06e27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:04:45 np0005558241 nova_compute[248510]: 2025-12-13 09:04:45.936 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:04:46 np0005558241 nova_compute[248510]: 2025-12-13 09:04:46.505 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:46 np0005558241 nova_compute[248510]: 2025-12-13 09:04:46.506 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:46 np0005558241 nova_compute[248510]: 2025-12-13 09:04:46.525 248514 DEBUG nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:04:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3110: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec 13 04:04:46 np0005558241 nova_compute[248510]: 2025-12-13 09:04:46.912 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:46 np0005558241 nova_compute[248510]: 2025-12-13 09:04:46.913 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:46 np0005558241 nova_compute[248510]: 2025-12-13 09:04:46.925 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:04:46 np0005558241 nova_compute[248510]: 2025-12-13 09:04:46.926 248514 INFO nova.compute.claims [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:04:47 np0005558241 nova_compute[248510]: 2025-12-13 09:04:47.072 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:04:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:04:47 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/402231214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:04:47 np0005558241 nova_compute[248510]: 2025-12-13 09:04:47.681 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:04:47 np0005558241 nova_compute[248510]: 2025-12-13 09:04:47.689 248514 DEBUG nova.compute.provider_tree [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:04:47 np0005558241 nova_compute[248510]: 2025-12-13 09:04:47.724 248514 DEBUG nova.scheduler.client.report [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:04:47 np0005558241 nova_compute[248510]: 2025-12-13 09:04:47.757 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:47 np0005558241 nova_compute[248510]: 2025-12-13 09:04:47.758 248514 DEBUG nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:04:47 np0005558241 nova_compute[248510]: 2025-12-13 09:04:47.838 248514 DEBUG nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:04:47 np0005558241 nova_compute[248510]: 2025-12-13 09:04:47.839 248514 DEBUG nova.network.neutron [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:04:47 np0005558241 nova_compute[248510]: 2025-12-13 09:04:47.893 248514 INFO nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:04:47 np0005558241 nova_compute[248510]: 2025-12-13 09:04:47.912 248514 DEBUG nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.045 248514 DEBUG nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.048 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.049 248514 INFO nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Creating image(s)#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.088 248514 DEBUG nova.storage.rbd_utils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.126 248514 DEBUG nova.storage.rbd_utils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.164 248514 DEBUG nova.storage.rbd_utils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.168 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.217 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.271 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.272 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.273 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.274 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.309 248514 DEBUG nova.storage.rbd_utils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.314 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:04:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3111: 321 pgs: 321 active+clean; 41 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 3.9 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.570 248514 DEBUG nova.policy [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a5fd410579fa429ba0f7f680590cd86a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '77d40177a4804671aa9c5da343bc2ed4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.672 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.762 248514 DEBUG nova.storage.rbd_utils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] resizing rbd image 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.868 248514 DEBUG nova.objects.instance [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'migration_context' on Instance uuid 17c3814d-c11f-4032-a891-4cbdf3f7c065 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.891 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.892 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Ensure instance console log exists: /var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.892 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.893 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:48 np0005558241 nova_compute[248510]: 2025-12-13 09:04:48.893 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3112: 321 pgs: 321 active+clean; 67 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 929 KiB/s wr, 30 op/s
Dec 13 04:04:50 np0005558241 nova_compute[248510]: 2025-12-13 09:04:50.749 248514 DEBUG nova.network.neutron [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Successfully created port: 21744784-e35a-46d5-ac8c-8f9783a0a387 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:04:50 np0005558241 nova_compute[248510]: 2025-12-13 09:04:50.939 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:04:52 np0005558241 nova_compute[248510]: 2025-12-13 09:04:52.288 248514 DEBUG nova.network.neutron [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Successfully updated port: 21744784-e35a-46d5-ac8c-8f9783a0a387 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:04:52 np0005558241 nova_compute[248510]: 2025-12-13 09:04:52.318 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:04:52 np0005558241 nova_compute[248510]: 2025-12-13 09:04:52.319 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquired lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:04:52 np0005558241 nova_compute[248510]: 2025-12-13 09:04:52.319 248514 DEBUG nova.network.neutron [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:04:52 np0005558241 nova_compute[248510]: 2025-12-13 09:04:52.447 248514 DEBUG nova.compute.manager [req-3f205869-44c6-4ebe-930d-1715479146d7 req-dba1d915-0f3f-4470-a73d-ce833266b049 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-changed-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:52 np0005558241 nova_compute[248510]: 2025-12-13 09:04:52.447 248514 DEBUG nova.compute.manager [req-3f205869-44c6-4ebe-930d-1715479146d7 req-dba1d915-0f3f-4470-a73d-ce833266b049 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Refreshing instance network info cache due to event network-changed-21744784-e35a-46d5-ac8c-8f9783a0a387. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:04:52 np0005558241 nova_compute[248510]: 2025-12-13 09:04:52.448 248514 DEBUG oslo_concurrency.lockutils [req-3f205869-44c6-4ebe-930d-1715479146d7 req-dba1d915-0f3f-4470-a73d-ce833266b049 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:04:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3113: 321 pgs: 321 active+clean; 67 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 929 KiB/s wr, 30 op/s
Dec 13 04:04:52 np0005558241 nova_compute[248510]: 2025-12-13 09:04:52.768 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:04:52 np0005558241 nova_compute[248510]: 2025-12-13 09:04:52.928 248514 DEBUG nova.network.neutron [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:04:53 np0005558241 nova_compute[248510]: 2025-12-13 09:04:53.101 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.362 248514 DEBUG nova.network.neutron [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updating instance_info_cache with network_info: [{"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.399 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Releasing lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.400 248514 DEBUG nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Instance network_info: |[{"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.400 248514 DEBUG oslo_concurrency.lockutils [req-3f205869-44c6-4ebe-930d-1715479146d7 req-dba1d915-0f3f-4470-a73d-ce833266b049 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.401 248514 DEBUG nova.network.neutron [req-3f205869-44c6-4ebe-930d-1715479146d7 req-dba1d915-0f3f-4470-a73d-ce833266b049 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Refreshing network info cache for port 21744784-e35a-46d5-ac8c-8f9783a0a387 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.405 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Start _get_guest_xml network_info=[{"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.412 248514 WARNING nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.417 248514 DEBUG nova.virt.libvirt.host [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.418 248514 DEBUG nova.virt.libvirt.host [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.429 248514 DEBUG nova.virt.libvirt.host [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.430 248514 DEBUG nova.virt.libvirt.host [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.430 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.431 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.432 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.432 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.433 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.433 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.433 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.434 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.434 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.435 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.435 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.436 248514 DEBUG nova.virt.hardware [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:04:54 np0005558241 nova_compute[248510]: 2025-12-13 09:04:54.441 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:04:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3114: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Dec 13 04:04:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:04:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3894288681' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.042 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.077 248514 DEBUG nova.storage.rbd_utils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.083 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:04:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:55.441 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:55.442 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:55.442 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:04:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2830231446' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.692 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.694 248514 DEBUG nova.virt.libvirt.vif [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:04:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1148057715',display_name='tempest-TestNetworkAdvancedServerOps-server-1148057715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1148057715',id=135,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEs/ai9hGvppAaJZucTj3XKRO2Hw6GseMJO/eDqjojdlq1TGbhXoM0CcjiSd48S7VRrLpZa2ASSqmIp8ZUA6nKNWf+HEZDknZ93ek9rtzJ3ZoLckx8X8JxIdR5xCKRsULQ==',key_name='tempest-TestNetworkAdvancedServerOps-326216115',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a5ac50ap',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:04:47Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=17c3814d-c11f-4032-a891-4cbdf3f7c065,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.695 248514 DEBUG nova.network.os_vif_util [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.696 248514 DEBUG nova.network.os_vif_util [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.698 248514 DEBUG nova.objects.instance [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 17c3814d-c11f-4032-a891-4cbdf3f7c065 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.715 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:04:55 np0005558241 nova_compute[248510]:  <uuid>17c3814d-c11f-4032-a891-4cbdf3f7c065</uuid>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:  <name>instance-00000087</name>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1148057715</nova:name>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:04:54</nova:creationTime>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:04:55 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:        <nova:user uuid="a5fd410579fa429ba0f7f680590cd86a">tempest-TestNetworkAdvancedServerOps-1969761839-project-member</nova:user>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:        <nova:project uuid="77d40177a4804671aa9c5da343bc2ed4">tempest-TestNetworkAdvancedServerOps-1969761839</nova:project>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:        <nova:port uuid="21744784-e35a-46d5-ac8c-8f9783a0a387">
Dec 13 04:04:55 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <entry name="serial">17c3814d-c11f-4032-a891-4cbdf3f7c065</entry>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <entry name="uuid">17c3814d-c11f-4032-a891-4cbdf3f7c065</entry>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/17c3814d-c11f-4032-a891-4cbdf3f7c065_disk">
Dec 13 04:04:55 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:04:55 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/17c3814d-c11f-4032-a891-4cbdf3f7c065_disk.config">
Dec 13 04:04:55 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:04:55 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:0c:86:07"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <target dev="tap21744784-e3"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065/console.log" append="off"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:04:55 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:04:55 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:04:55 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:04:55 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.717 248514 DEBUG nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Preparing to wait for external event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.717 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.717 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.717 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.718 248514 DEBUG nova.virt.libvirt.vif [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:04:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1148057715',display_name='tempest-TestNetworkAdvancedServerOps-server-1148057715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1148057715',id=135,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEs/ai9hGvppAaJZucTj3XKRO2Hw6GseMJO/eDqjojdlq1TGbhXoM0CcjiSd48S7VRrLpZa2ASSqmIp8ZUA6nKNWf+HEZDknZ93ek9rtzJ3ZoLckx8X8JxIdR5xCKRsULQ==',key_name='tempest-TestNetworkAdvancedServerOps-326216115',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a5ac50ap',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:04:47Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=17c3814d-c11f-4032-a891-4cbdf3f7c065,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.719 248514 DEBUG nova.network.os_vif_util [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.719 248514 DEBUG nova.network.os_vif_util [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.720 248514 DEBUG os_vif [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.721 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.721 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.725 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21744784-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.726 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21744784-e3, col_values=(('external_ids', {'iface-id': '21744784-e35a-46d5-ac8c-8f9783a0a387', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:86:07', 'vm-uuid': '17c3814d-c11f-4032-a891-4cbdf3f7c065'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.728 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:55 np0005558241 NetworkManager[50376]: <info>  [1765616695.7303] manager: (tap21744784-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/568)
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.730 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.739 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.741 248514 INFO os_vif [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3')#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.813 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.814 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.814 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] No VIF found with MAC fa:16:3e:0c:86:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.815 248514 INFO nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Using config drive#033[00m
Dec 13 04:04:55 np0005558241 nova_compute[248510]: 2025-12-13 09:04:55.843 248514 DEBUG nova.storage.rbd_utils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:04:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:04:56 np0005558241 nova_compute[248510]: 2025-12-13 09:04:56.287 248514 INFO nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Creating config drive at /var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065/disk.config#033[00m
Dec 13 04:04:56 np0005558241 nova_compute[248510]: 2025-12-13 09:04:56.297 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnw6v8ok3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:04:56 np0005558241 nova_compute[248510]: 2025-12-13 09:04:56.458 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnw6v8ok3" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:04:56 np0005558241 nova_compute[248510]: 2025-12-13 09:04:56.497 248514 DEBUG nova.storage.rbd_utils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] rbd image 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:04:56 np0005558241 nova_compute[248510]: 2025-12-13 09:04:56.504 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065/disk.config 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:04:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3115: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Dec 13 04:04:56 np0005558241 nova_compute[248510]: 2025-12-13 09:04:56.711 248514 DEBUG oslo_concurrency.processutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065/disk.config 17c3814d-c11f-4032-a891-4cbdf3f7c065_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:04:56 np0005558241 nova_compute[248510]: 2025-12-13 09:04:56.713 248514 INFO nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Deleting local config drive /var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065/disk.config because it was imported into RBD.#033[00m
Dec 13 04:04:56 np0005558241 kernel: tap21744784-e3: entered promiscuous mode
Dec 13 04:04:56 np0005558241 NetworkManager[50376]: <info>  [1765616696.7966] manager: (tap21744784-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/569)
Dec 13 04:04:56 np0005558241 ovn_controller[148476]: 2025-12-13T09:04:56Z|01380|binding|INFO|Claiming lport 21744784-e35a-46d5-ac8c-8f9783a0a387 for this chassis.
Dec 13 04:04:56 np0005558241 ovn_controller[148476]: 2025-12-13T09:04:56Z|01381|binding|INFO|21744784-e35a-46d5-ac8c-8f9783a0a387: Claiming fa:16:3e:0c:86:07 10.100.0.13
Dec 13 04:04:56 np0005558241 nova_compute[248510]: 2025-12-13 09:04:56.797 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:56 np0005558241 nova_compute[248510]: 2025-12-13 09:04:56.804 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:56 np0005558241 nova_compute[248510]: 2025-12-13 09:04:56.807 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.819 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:86:07 10.100.0.13'], port_security=['fa:16:3e:0c:86:07 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '17c3814d-c11f-4032-a891-4cbdf3f7c065', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '639365a4-e1d2-4459-8bf9-30c8c4273027', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e18eae52-141b-40a9-9ed0-d52b1399a23b, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=21744784-e35a-46d5-ac8c-8f9783a0a387) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:04:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.821 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 21744784-e35a-46d5-ac8c-8f9783a0a387 in datapath 084e5836-e0e4-4328-8e03-dfcdcd227a7c bound to our chassis#033[00m
Dec 13 04:04:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.824 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 084e5836-e0e4-4328-8e03-dfcdcd227a7c#033[00m
Dec 13 04:04:56 np0005558241 systemd-udevd[384803]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:04:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.846 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6f0646-19e4-4f86-bbaf-aca2147c43d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.847 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap084e5836-e1 in ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:04:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.849 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap084e5836-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:04:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.849 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7591c125-5bbe-416a-921f-7c64c254ea6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.850 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7c9217-c725-433a-8fd1-efd8ed9c186d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:56 np0005558241 systemd-machined[210538]: New machine qemu-165-instance-00000087.
Dec 13 04:04:56 np0005558241 NetworkManager[50376]: <info>  [1765616696.8687] device (tap21744784-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:04:56 np0005558241 NetworkManager[50376]: <info>  [1765616696.8694] device (tap21744784-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:04:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.870 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[48a853f6-5443-4153-a487-a7a87ab5e819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:56 np0005558241 systemd[1]: Started Virtual Machine qemu-165-instance-00000087.
Dec 13 04:04:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.902 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[018cd9fd-1632-4381-b792-dd0282bf3aa2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:56 np0005558241 nova_compute[248510]: 2025-12-13 09:04:56.903 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:56 np0005558241 ovn_controller[148476]: 2025-12-13T09:04:56Z|01382|binding|INFO|Setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 ovn-installed in OVS
Dec 13 04:04:56 np0005558241 ovn_controller[148476]: 2025-12-13T09:04:56Z|01383|binding|INFO|Setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 up in Southbound
Dec 13 04:04:56 np0005558241 nova_compute[248510]: 2025-12-13 09:04:56.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:56 np0005558241 nova_compute[248510]: 2025-12-13 09:04:56.934 248514 DEBUG nova.network.neutron [req-3f205869-44c6-4ebe-930d-1715479146d7 req-dba1d915-0f3f-4470-a73d-ce833266b049 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updated VIF entry in instance network info cache for port 21744784-e35a-46d5-ac8c-8f9783a0a387. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:04:56 np0005558241 nova_compute[248510]: 2025-12-13 09:04:56.934 248514 DEBUG nova.network.neutron [req-3f205869-44c6-4ebe-930d-1715479146d7 req-dba1d915-0f3f-4470-a73d-ce833266b049 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updating instance_info_cache with network_info: [{"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:04:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.940 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7c5e26e2-58bf-44d3-9401-6ca6e90e37d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:56.946 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9b44cb4b-67f3-4c49-b06f-36db4f06d273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:56 np0005558241 NetworkManager[50376]: <info>  [1765616696.9477] manager: (tap084e5836-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/570)
Dec 13 04:04:56 np0005558241 nova_compute[248510]: 2025-12-13 09:04:56.956 248514 DEBUG oslo_concurrency.lockutils [req-3f205869-44c6-4ebe-930d-1715479146d7 req-dba1d915-0f3f-4470-a73d-ce833266b049 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.004 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[dbacf862-9c44-4547-a607-44d257768ca2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.008 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f6c44d74-09bd-491c-83b1-61e6611c9622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:57 np0005558241 NetworkManager[50376]: <info>  [1765616697.0459] device (tap084e5836-e0): carrier: link connected
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.055 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ca0b8f2f-7ad0-41cf-94a8-46829da2edc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.081 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b56384b5-9253-4c36-aed3-a5c085135ae3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap084e5836-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:f7:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 927426, 'reachable_time': 39915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384836, 'error': None, 'target': 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.107 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[62f10731-53ea-46c2-8060-73b0ac62b704]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee7:f759'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 927426, 'tstamp': 927426}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384837, 'error': None, 'target': 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.136 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f06f28e-b682-496a-b3f9-530f82af2ac9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap084e5836-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:f7:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 927426, 'reachable_time': 39915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384851, 'error': None, 'target': 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.181 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[01d74e7b-7cc7-4fe8-a94b-25d84642390e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.259 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9bfa2a-9c52-461b-968a-018f763aff4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.262 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap084e5836-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.263 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:04:57 np0005558241 nova_compute[248510]: 2025-12-13 09:04:57.264 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616697.2632244, 17c3814d-c11f-4032-a891-4cbdf3f7c065 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.264 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap084e5836-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:04:57 np0005558241 nova_compute[248510]: 2025-12-13 09:04:57.265 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] VM Started (Lifecycle Event)#033[00m
Dec 13 04:04:57 np0005558241 NetworkManager[50376]: <info>  [1765616697.2670] manager: (tap084e5836-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/571)
Dec 13 04:04:57 np0005558241 kernel: tap084e5836-e0: entered promiscuous mode
Dec 13 04:04:57 np0005558241 nova_compute[248510]: 2025-12-13 09:04:57.268 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.269 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap084e5836-e0, col_values=(('external_ids', {'iface-id': '2953b79f-9235-4cc1-ad54-d75b960374dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:04:57 np0005558241 ovn_controller[148476]: 2025-12-13T09:04:57Z|01384|binding|INFO|Releasing lport 2953b79f-9235-4cc1-ad54-d75b960374dc from this chassis (sb_readonly=0)
Dec 13 04:04:57 np0005558241 nova_compute[248510]: 2025-12-13 09:04:57.292 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.293 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/084e5836-e0e4-4328-8e03-dfcdcd227a7c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/084e5836-e0e4-4328-8e03-dfcdcd227a7c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.294 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[27128b2e-1b0b-408b-962e-1a1c1c5755f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.295 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-084e5836-e0e4-4328-8e03-dfcdcd227a7c
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/084e5836-e0e4-4328-8e03-dfcdcd227a7c.pid.haproxy
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 084e5836-e0e4-4328-8e03-dfcdcd227a7c
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:04:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:04:57.297 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'env', 'PROCESS_TAG=haproxy-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/084e5836-e0e4-4328-8e03-dfcdcd227a7c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:04:57 np0005558241 nova_compute[248510]: 2025-12-13 09:04:57.298 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:04:57 np0005558241 nova_compute[248510]: 2025-12-13 09:04:57.304 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616697.263509, 17c3814d-c11f-4032-a891-4cbdf3f7c065 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:04:57 np0005558241 nova_compute[248510]: 2025-12-13 09:04:57.305 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:04:57 np0005558241 nova_compute[248510]: 2025-12-13 09:04:57.329 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:04:57 np0005558241 nova_compute[248510]: 2025-12-13 09:04:57.335 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:04:57 np0005558241 nova_compute[248510]: 2025-12-13 09:04:57.372 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:04:57 np0005558241 podman[384912]: 2025-12-13 09:04:57.795155248 +0000 UTC m=+0.087627896 container create cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 04:04:57 np0005558241 systemd[1]: Started libpod-conmon-cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc.scope.
Dec 13 04:04:57 np0005558241 podman[384912]: 2025-12-13 09:04:57.752425338 +0000 UTC m=+0.044898046 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:04:57 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:04:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab414d9614e7f964aa77c46b42174cdefbb9b6ee0ab6eaef22a54e3e3265f7e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:57 np0005558241 podman[384912]: 2025-12-13 09:04:57.880133237 +0000 UTC m=+0.172605905 container init cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:04:57 np0005558241 podman[384912]: 2025-12-13 09:04:57.88622223 +0000 UTC m=+0.178694878 container start cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 13 04:04:57 np0005558241 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[384927]: [NOTICE]   (384931) : New worker (384933) forked
Dec 13 04:04:57 np0005558241 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[384927]: [NOTICE]   (384931) : Loading success.
Dec 13 04:04:58 np0005558241 nova_compute[248510]: 2025-12-13 09:04:58.128 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:04:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3116: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.234 248514 DEBUG nova.compute.manager [req-8ed20ec5-b41a-44ea-ac2c-034b560b2f68 req-a809e081-1bee-45d6-aefc-cfcdf38aa845 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.235 248514 DEBUG oslo_concurrency.lockutils [req-8ed20ec5-b41a-44ea-ac2c-034b560b2f68 req-a809e081-1bee-45d6-aefc-cfcdf38aa845 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.235 248514 DEBUG oslo_concurrency.lockutils [req-8ed20ec5-b41a-44ea-ac2c-034b560b2f68 req-a809e081-1bee-45d6-aefc-cfcdf38aa845 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.236 248514 DEBUG oslo_concurrency.lockutils [req-8ed20ec5-b41a-44ea-ac2c-034b560b2f68 req-a809e081-1bee-45d6-aefc-cfcdf38aa845 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.236 248514 DEBUG nova.compute.manager [req-8ed20ec5-b41a-44ea-ac2c-034b560b2f68 req-a809e081-1bee-45d6-aefc-cfcdf38aa845 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Processing event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.237 248514 DEBUG nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.244 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616699.2442088, 17c3814d-c11f-4032-a891-4cbdf3f7c065 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.245 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.248 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.253 248514 INFO nova.virt.libvirt.driver [-] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Instance spawned successfully.#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.254 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.276 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.283 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.293 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.294 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.294 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.295 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.296 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.297 248514 DEBUG nova.virt.libvirt.driver [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.310 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.364 248514 INFO nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Took 11.32 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.364 248514 DEBUG nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.440 248514 INFO nova.compute.manager [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Took 12.58 seconds to build instance.#033[00m
Dec 13 04:04:59 np0005558241 nova_compute[248510]: 2025-12-13 09:04:59.465 248514 DEBUG oslo_concurrency.lockutils [None req-240b7fe1-b1d1-4fba-8970-2d63a263c261 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3117: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Dec 13 04:05:00 np0005558241 nova_compute[248510]: 2025-12-13 09:05:00.729 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.102770) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616701102824, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 851, "num_deletes": 256, "total_data_size": 1183193, "memory_usage": 1205024, "flush_reason": "Manual Compaction"}
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616701112095, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 1162689, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61343, "largest_seqno": 62193, "table_properties": {"data_size": 1158358, "index_size": 1982, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9371, "raw_average_key_size": 19, "raw_value_size": 1149747, "raw_average_value_size": 2346, "num_data_blocks": 88, "num_entries": 490, "num_filter_entries": 490, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765616628, "oldest_key_time": 1765616628, "file_creation_time": 1765616701, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 9384 microseconds, and 3915 cpu microseconds.
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.112148) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 1162689 bytes OK
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.112170) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.114052) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.114084) EVENT_LOG_v1 {"time_micros": 1765616701114063, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.114104) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 1178972, prev total WAL file size 1178972, number of live WAL files 2.
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.114539) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353131' seq:72057594037927935, type:22 .. '6C6F676D0032373633' seq:0, type:0; will stop at (end)
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(1135KB)], [143(10MB)]
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616701114598, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 11750753, "oldest_snapshot_seqno": -1}
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 8213 keys, 11635831 bytes, temperature: kUnknown
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616701193312, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 11635831, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11581317, "index_size": 32897, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20549, "raw_key_size": 215351, "raw_average_key_size": 26, "raw_value_size": 11435016, "raw_average_value_size": 1392, "num_data_blocks": 1281, "num_entries": 8213, "num_filter_entries": 8213, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765616701, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.193621) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 11635831 bytes
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.195482) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.1 rd, 147.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.1 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(20.1) write-amplify(10.0) OK, records in: 8736, records dropped: 523 output_compression: NoCompression
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.195499) EVENT_LOG_v1 {"time_micros": 1765616701195490, "job": 88, "event": "compaction_finished", "compaction_time_micros": 78807, "compaction_time_cpu_micros": 30971, "output_level": 6, "num_output_files": 1, "total_output_size": 11635831, "num_input_records": 8736, "num_output_records": 8213, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616701195821, "job": 88, "event": "table_file_deletion", "file_number": 145}
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616701198406, "job": 88, "event": "table_file_deletion", "file_number": 143}
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.114490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.198446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.198451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.198453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.198456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:05:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:05:01.198459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:05:01 np0005558241 nova_compute[248510]: 2025-12-13 09:05:01.224 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "bb154aa9-7029-4193-8d00-38e8e552a382" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:01 np0005558241 nova_compute[248510]: 2025-12-13 09:05:01.225 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:01 np0005558241 nova_compute[248510]: 2025-12-13 09:05:01.251 248514 DEBUG nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:05:01 np0005558241 nova_compute[248510]: 2025-12-13 09:05:01.346 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:01 np0005558241 nova_compute[248510]: 2025-12-13 09:05:01.347 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:01 np0005558241 nova_compute[248510]: 2025-12-13 09:05:01.355 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:05:01 np0005558241 nova_compute[248510]: 2025-12-13 09:05:01.356 248514 INFO nova.compute.claims [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:05:01 np0005558241 nova_compute[248510]: 2025-12-13 09:05:01.366 248514 DEBUG nova.compute.manager [req-d8f916ad-55cd-4e57-947d-45fed406f884 req-eaeb3e89-98fc-4b5c-b357-a2b9a4e49f3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:01 np0005558241 nova_compute[248510]: 2025-12-13 09:05:01.367 248514 DEBUG oslo_concurrency.lockutils [req-d8f916ad-55cd-4e57-947d-45fed406f884 req-eaeb3e89-98fc-4b5c-b357-a2b9a4e49f3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:01 np0005558241 nova_compute[248510]: 2025-12-13 09:05:01.367 248514 DEBUG oslo_concurrency.lockutils [req-d8f916ad-55cd-4e57-947d-45fed406f884 req-eaeb3e89-98fc-4b5c-b357-a2b9a4e49f3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:01 np0005558241 nova_compute[248510]: 2025-12-13 09:05:01.368 248514 DEBUG oslo_concurrency.lockutils [req-d8f916ad-55cd-4e57-947d-45fed406f884 req-eaeb3e89-98fc-4b5c-b357-a2b9a4e49f3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:01 np0005558241 nova_compute[248510]: 2025-12-13 09:05:01.368 248514 DEBUG nova.compute.manager [req-d8f916ad-55cd-4e57-947d-45fed406f884 req-eaeb3e89-98fc-4b5c-b357-a2b9a4e49f3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:05:01 np0005558241 nova_compute[248510]: 2025-12-13 09:05:01.369 248514 WARNING nova.compute.manager [req-d8f916ad-55cd-4e57-947d-45fed406f884 req-eaeb3e89-98fc-4b5c-b357-a2b9a4e49f3a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:05:01 np0005558241 nova_compute[248510]: 2025-12-13 09:05:01.506 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:05:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:05:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1055904252' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.082 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.091 248514 DEBUG nova.compute.provider_tree [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.116 248514 DEBUG nova.scheduler.client.report [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.149 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.150 248514 DEBUG nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.208 248514 DEBUG nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.209 248514 DEBUG nova.network.neutron [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.238 248514 INFO nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.267 248514 DEBUG nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.396 248514 DEBUG nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.398 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.399 248514 INFO nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Creating image(s)#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.437 248514 DEBUG nova.storage.rbd_utils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image bb154aa9-7029-4193-8d00-38e8e552a382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.475 248514 DEBUG nova.storage.rbd_utils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image bb154aa9-7029-4193-8d00-38e8e552a382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.504 248514 DEBUG nova.storage.rbd_utils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image bb154aa9-7029-4193-8d00-38e8e552a382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.508 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:05:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3118: 321 pgs: 321 active+clean; 88 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 898 KiB/s wr, 22 op/s
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.555 248514 DEBUG nova.policy [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ffebdfc45534ad4b00f1b6b71f95318', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd062d46c41254f178699723648bac64d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.602 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.603 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.604 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.604 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.634 248514 DEBUG nova.storage.rbd_utils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image bb154aa9-7029-4193-8d00-38e8e552a382_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.642 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 bb154aa9-7029-4193-8d00-38e8e552a382_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:05:02 np0005558241 nova_compute[248510]: 2025-12-13 09:05:02.948 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 bb154aa9-7029-4193-8d00-38e8e552a382_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:05:03 np0005558241 nova_compute[248510]: 2025-12-13 09:05:03.034 248514 DEBUG nova.storage.rbd_utils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] resizing rbd image bb154aa9-7029-4193-8d00-38e8e552a382_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:05:03 np0005558241 NetworkManager[50376]: <info>  [1765616703.1436] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/572)
Dec 13 04:05:03 np0005558241 NetworkManager[50376]: <info>  [1765616703.1443] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/573)
Dec 13 04:05:03 np0005558241 nova_compute[248510]: 2025-12-13 09:05:03.147 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:03 np0005558241 nova_compute[248510]: 2025-12-13 09:05:03.165 248514 DEBUG nova.objects.instance [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'migration_context' on Instance uuid bb154aa9-7029-4193-8d00-38e8e552a382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:05:03 np0005558241 nova_compute[248510]: 2025-12-13 09:05:03.184 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:05:03 np0005558241 nova_compute[248510]: 2025-12-13 09:05:03.184 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Ensure instance console log exists: /var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:05:03 np0005558241 nova_compute[248510]: 2025-12-13 09:05:03.185 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:03 np0005558241 nova_compute[248510]: 2025-12-13 09:05:03.186 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:03 np0005558241 nova_compute[248510]: 2025-12-13 09:05:03.186 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:03 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:03Z|01385|binding|INFO|Releasing lport 2953b79f-9235-4cc1-ad54-d75b960374dc from this chassis (sb_readonly=0)
Dec 13 04:05:03 np0005558241 nova_compute[248510]: 2025-12-13 09:05:03.254 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:03 np0005558241 nova_compute[248510]: 2025-12-13 09:05:03.265 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:03 np0005558241 nova_compute[248510]: 2025-12-13 09:05:03.450 248514 DEBUG nova.network.neutron [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Successfully created port: f16b53b1-eab6-4458-976e-5cb112c77ef8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:05:04 np0005558241 nova_compute[248510]: 2025-12-13 09:05:04.526 248514 DEBUG nova.compute.manager [req-b917f177-6de2-4942-b867-5e0d61726a68 req-119161ee-fe53-4ba5-837e-674a666db547 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-changed-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:04 np0005558241 nova_compute[248510]: 2025-12-13 09:05:04.527 248514 DEBUG nova.compute.manager [req-b917f177-6de2-4942-b867-5e0d61726a68 req-119161ee-fe53-4ba5-837e-674a666db547 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Refreshing instance network info cache due to event network-changed-21744784-e35a-46d5-ac8c-8f9783a0a387. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:05:04 np0005558241 nova_compute[248510]: 2025-12-13 09:05:04.527 248514 DEBUG oslo_concurrency.lockutils [req-b917f177-6de2-4942-b867-5e0d61726a68 req-119161ee-fe53-4ba5-837e-674a666db547 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:05:04 np0005558241 nova_compute[248510]: 2025-12-13 09:05:04.528 248514 DEBUG oslo_concurrency.lockutils [req-b917f177-6de2-4942-b867-5e0d61726a68 req-119161ee-fe53-4ba5-837e-674a666db547 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:05:04 np0005558241 nova_compute[248510]: 2025-12-13 09:05:04.528 248514 DEBUG nova.network.neutron [req-b917f177-6de2-4942-b867-5e0d61726a68 req-119161ee-fe53-4ba5-837e-674a666db547 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Refreshing network info cache for port 21744784-e35a-46d5-ac8c-8f9783a0a387 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:05:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3119: 321 pgs: 321 active+clean; 125 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 96 op/s
Dec 13 04:05:05 np0005558241 nova_compute[248510]: 2025-12-13 09:05:05.344 248514 DEBUG nova.network.neutron [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Successfully updated port: f16b53b1-eab6-4458-976e-5cb112c77ef8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:05:05 np0005558241 nova_compute[248510]: 2025-12-13 09:05:05.365 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:05:05 np0005558241 nova_compute[248510]: 2025-12-13 09:05:05.366 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquired lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:05:05 np0005558241 nova_compute[248510]: 2025-12-13 09:05:05.366 248514 DEBUG nova.network.neutron [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:05:05 np0005558241 nova_compute[248510]: 2025-12-13 09:05:05.466 248514 DEBUG nova.compute.manager [req-b98587d4-8d65-45cb-ad9c-79ce2c8abe7a req-f6e09665-5933-4295-a0f9-dab8c8c30b0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received event network-changed-f16b53b1-eab6-4458-976e-5cb112c77ef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:05 np0005558241 nova_compute[248510]: 2025-12-13 09:05:05.466 248514 DEBUG nova.compute.manager [req-b98587d4-8d65-45cb-ad9c-79ce2c8abe7a req-f6e09665-5933-4295-a0f9-dab8c8c30b0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Refreshing instance network info cache due to event network-changed-f16b53b1-eab6-4458-976e-5cb112c77ef8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:05:05 np0005558241 nova_compute[248510]: 2025-12-13 09:05:05.467 248514 DEBUG oslo_concurrency.lockutils [req-b98587d4-8d65-45cb-ad9c-79ce2c8abe7a req-f6e09665-5933-4295-a0f9-dab8c8c30b0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:05:05 np0005558241 nova_compute[248510]: 2025-12-13 09:05:05.733 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:05 np0005558241 nova_compute[248510]: 2025-12-13 09:05:05.859 248514 DEBUG nova.network.neutron [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:05:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:05:06 np0005558241 nova_compute[248510]: 2025-12-13 09:05:06.277 248514 DEBUG nova.network.neutron [req-b917f177-6de2-4942-b867-5e0d61726a68 req-119161ee-fe53-4ba5-837e-674a666db547 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updated VIF entry in instance network info cache for port 21744784-e35a-46d5-ac8c-8f9783a0a387. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:05:06 np0005558241 nova_compute[248510]: 2025-12-13 09:05:06.277 248514 DEBUG nova.network.neutron [req-b917f177-6de2-4942-b867-5e0d61726a68 req-119161ee-fe53-4ba5-837e-674a666db547 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updating instance_info_cache with network_info: [{"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:05:06 np0005558241 nova_compute[248510]: 2025-12-13 09:05:06.302 248514 DEBUG oslo_concurrency.lockutils [req-b917f177-6de2-4942-b867-5e0d61726a68 req-119161ee-fe53-4ba5-837e-674a666db547 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:05:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3120: 321 pgs: 321 active+clean; 125 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 84 op/s
Dec 13 04:05:06 np0005558241 nova_compute[248510]: 2025-12-13 09:05:06.996 248514 DEBUG nova.network.neutron [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Updating instance_info_cache with network_info: [{"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.023 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Releasing lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.024 248514 DEBUG nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Instance network_info: |[{"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.024 248514 DEBUG oslo_concurrency.lockutils [req-b98587d4-8d65-45cb-ad9c-79ce2c8abe7a req-f6e09665-5933-4295-a0f9-dab8c8c30b0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.024 248514 DEBUG nova.network.neutron [req-b98587d4-8d65-45cb-ad9c-79ce2c8abe7a req-f6e09665-5933-4295-a0f9-dab8c8c30b0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Refreshing network info cache for port f16b53b1-eab6-4458-976e-5cb112c77ef8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.027 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Start _get_guest_xml network_info=[{"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.033 248514 WARNING nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.038 248514 DEBUG nova.virt.libvirt.host [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.039 248514 DEBUG nova.virt.libvirt.host [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.047 248514 DEBUG nova.virt.libvirt.host [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.048 248514 DEBUG nova.virt.libvirt.host [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.048 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.049 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.049 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.049 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.050 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.050 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.050 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.050 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.051 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.051 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.051 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.052 248514 DEBUG nova.virt.hardware [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.054 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:05:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:05:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3949943202' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.643 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.668 248514 DEBUG nova.storage.rbd_utils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image bb154aa9-7029-4193-8d00-38e8e552a382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:05:07 np0005558241 nova_compute[248510]: 2025-12-13 09:05:07.673 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.177 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:05:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2108130498' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.295 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.297 248514 DEBUG nova.virt.libvirt.vif [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:04:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-908300967',display_name='tempest-TestNetworkBasicOps-server-908300967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-908300967',id=136,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFBmqAnv0oD+weSJxDWEwIjgt5HjSBoUiVoV5CSnaOj2JsKkOoULuMXZQ1EHyUAr0FnFFBRSfsXFNYjXLHRq9tDCS1GQZDyMWxvCAc8xSVkqfYg8qd+Wbjc8cJKV8CheTw==',key_name='tempest-TestNetworkBasicOps-403784450',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-8elo0a7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:05:02Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=bb154aa9-7029-4193-8d00-38e8e552a382,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.298 248514 DEBUG nova.network.os_vif_util [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.299 248514 DEBUG nova.network.os_vif_util [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:ba:c6,bridge_name='br-int',has_traffic_filtering=True,id=f16b53b1-eab6-4458-976e-5cb112c77ef8,network=Network(065ccb44-e305-43f2-ab7a-728a65062da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf16b53b1-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.300 248514 DEBUG nova.objects.instance [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'pci_devices' on Instance uuid bb154aa9-7029-4193-8d00-38e8e552a382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.323 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:05:08 np0005558241 nova_compute[248510]:  <uuid>bb154aa9-7029-4193-8d00-38e8e552a382</uuid>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:  <name>instance-00000088</name>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestNetworkBasicOps-server-908300967</nova:name>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:05:07</nova:creationTime>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:05:08 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:        <nova:user uuid="0ffebdfc45534ad4b00f1b6b71f95318">tempest-TestNetworkBasicOps-1693340069-project-member</nova:user>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:        <nova:project uuid="d062d46c41254f178699723648bac64d">tempest-TestNetworkBasicOps-1693340069</nova:project>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:        <nova:port uuid="f16b53b1-eab6-4458-976e-5cb112c77ef8">
Dec 13 04:05:08 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <entry name="serial">bb154aa9-7029-4193-8d00-38e8e552a382</entry>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <entry name="uuid">bb154aa9-7029-4193-8d00-38e8e552a382</entry>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/bb154aa9-7029-4193-8d00-38e8e552a382_disk">
Dec 13 04:05:08 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:05:08 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/bb154aa9-7029-4193-8d00-38e8e552a382_disk.config">
Dec 13 04:05:08 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:05:08 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:68:ba:c6"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <target dev="tapf16b53b1-ea"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382/console.log" append="off"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:05:08 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:05:08 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:05:08 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:05:08 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.328 248514 DEBUG nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Preparing to wait for external event network-vif-plugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.329 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.329 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.329 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.330 248514 DEBUG nova.virt.libvirt.vif [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:04:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-908300967',display_name='tempest-TestNetworkBasicOps-server-908300967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-908300967',id=136,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFBmqAnv0oD+weSJxDWEwIjgt5HjSBoUiVoV5CSnaOj2JsKkOoULuMXZQ1EHyUAr0FnFFBRSfsXFNYjXLHRq9tDCS1GQZDyMWxvCAc8xSVkqfYg8qd+Wbjc8cJKV8CheTw==',key_name='tempest-TestNetworkBasicOps-403784450',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-8elo0a7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:05:02Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=bb154aa9-7029-4193-8d00-38e8e552a382,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.330 248514 DEBUG nova.network.os_vif_util [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.331 248514 DEBUG nova.network.os_vif_util [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:ba:c6,bridge_name='br-int',has_traffic_filtering=True,id=f16b53b1-eab6-4458-976e-5cb112c77ef8,network=Network(065ccb44-e305-43f2-ab7a-728a65062da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf16b53b1-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.331 248514 DEBUG os_vif [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:ba:c6,bridge_name='br-int',has_traffic_filtering=True,id=f16b53b1-eab6-4458-976e-5cb112c77ef8,network=Network(065ccb44-e305-43f2-ab7a-728a65062da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf16b53b1-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.331 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.332 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.332 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.335 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.335 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf16b53b1-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.336 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf16b53b1-ea, col_values=(('external_ids', {'iface-id': 'f16b53b1-eab6-4458-976e-5cb112c77ef8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:ba:c6', 'vm-uuid': 'bb154aa9-7029-4193-8d00-38e8e552a382'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.337 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:08 np0005558241 NetworkManager[50376]: <info>  [1765616708.3387] manager: (tapf16b53b1-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/574)
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.340 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.347 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.347 248514 INFO os_vif [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:ba:c6,bridge_name='br-int',has_traffic_filtering=True,id=f16b53b1-eab6-4458-976e-5cb112c77ef8,network=Network(065ccb44-e305-43f2-ab7a-728a65062da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf16b53b1-ea')#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.352 248514 DEBUG nova.network.neutron [req-b98587d4-8d65-45cb-ad9c-79ce2c8abe7a req-f6e09665-5933-4295-a0f9-dab8c8c30b0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Updated VIF entry in instance network info cache for port f16b53b1-eab6-4458-976e-5cb112c77ef8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.352 248514 DEBUG nova.network.neutron [req-b98587d4-8d65-45cb-ad9c-79ce2c8abe7a req-f6e09665-5933-4295-a0f9-dab8c8c30b0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Updating instance_info_cache with network_info: [{"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.371 248514 DEBUG oslo_concurrency.lockutils [req-b98587d4-8d65-45cb-ad9c-79ce2c8abe7a req-f6e09665-5933-4295-a0f9-dab8c8c30b0e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.413 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.414 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.414 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] No VIF found with MAC fa:16:3e:68:ba:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.414 248514 INFO nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Using config drive#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.436 248514 DEBUG nova.storage.rbd_utils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image bb154aa9-7029-4193-8d00-38e8e552a382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:05:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3121: 321 pgs: 321 active+clean; 134 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.868 248514 INFO nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Creating config drive at /var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382/disk.config#033[00m
Dec 13 04:05:08 np0005558241 nova_compute[248510]: 2025-12-13 09:05:08.872 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9edbuxxs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:05:09 np0005558241 nova_compute[248510]: 2025-12-13 09:05:09.042 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9edbuxxs" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:05:09 np0005558241 nova_compute[248510]: 2025-12-13 09:05:09.072 248514 DEBUG nova.storage.rbd_utils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] rbd image bb154aa9-7029-4193-8d00-38e8e552a382_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:05:09 np0005558241 nova_compute[248510]: 2025-12-13 09:05:09.076 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382/disk.config bb154aa9-7029-4193-8d00-38e8e552a382_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:05:09 np0005558241 nova_compute[248510]: 2025-12-13 09:05:09.251 248514 DEBUG oslo_concurrency.processutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382/disk.config bb154aa9-7029-4193-8d00-38e8e552a382_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:05:09 np0005558241 nova_compute[248510]: 2025-12-13 09:05:09.252 248514 INFO nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Deleting local config drive /var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382/disk.config because it was imported into RBD.#033[00m
Dec 13 04:05:09 np0005558241 kernel: tapf16b53b1-ea: entered promiscuous mode
Dec 13 04:05:09 np0005558241 NetworkManager[50376]: <info>  [1765616709.3240] manager: (tapf16b53b1-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/575)
Dec 13 04:05:09 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:09Z|01386|binding|INFO|Claiming lport f16b53b1-eab6-4458-976e-5cb112c77ef8 for this chassis.
Dec 13 04:05:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:05:09
Dec 13 04:05:09 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:09Z|01387|binding|INFO|f16b53b1-eab6-4458-976e-5cb112c77ef8: Claiming fa:16:3e:68:ba:c6 10.100.0.7
Dec 13 04:05:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:05:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:05:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'vms', 'volumes', '.rgw.root', 'default.rgw.control', 'default.rgw.meta']
Dec 13 04:05:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:05:09 np0005558241 nova_compute[248510]: 2025-12-13 09:05:09.370 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.382 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:ba:c6 10.100.0.7'], port_security=['fa:16:3e:68:ba:c6 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bb154aa9-7029-4193-8d00-38e8e552a382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-065ccb44-e305-43f2-ab7a-728a65062da2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20aff6be-cf34-4af0-9288-26483b58fb2e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a94dc2d-ac12-4530-9899-bb1ddf6ed47b, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=f16b53b1-eab6-4458-976e-5cb112c77ef8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.385 158419 INFO neutron.agent.ovn.metadata.agent [-] Port f16b53b1-eab6-4458-976e-5cb112c77ef8 in datapath 065ccb44-e305-43f2-ab7a-728a65062da2 bound to our chassis#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.389 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 065ccb44-e305-43f2-ab7a-728a65062da2#033[00m
Dec 13 04:05:09 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:09Z|01388|binding|INFO|Setting lport f16b53b1-eab6-4458-976e-5cb112c77ef8 ovn-installed in OVS
Dec 13 04:05:09 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:09Z|01389|binding|INFO|Setting lport f16b53b1-eab6-4458-976e-5cb112c77ef8 up in Southbound
Dec 13 04:05:09 np0005558241 nova_compute[248510]: 2025-12-13 09:05:09.393 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.401 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bda51c0c-258b-4a70-8416-cc70667e9835]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.402 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap065ccb44-e1 in ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.404 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap065ccb44-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.405 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7274ef0c-128e-431c-98fc-b5106f2c537e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.406 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[40b5c442-2829-4f52-9934-a42bef337880]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:09 np0005558241 systemd-udevd[385269]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:05:09 np0005558241 systemd-machined[210538]: New machine qemu-166-instance-00000088.
Dec 13 04:05:09 np0005558241 NetworkManager[50376]: <info>  [1765616709.4257] device (tapf16b53b1-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:05:09 np0005558241 systemd[1]: Started Virtual Machine qemu-166-instance-00000088.
Dec 13 04:05:09 np0005558241 NetworkManager[50376]: <info>  [1765616709.4270] device (tapf16b53b1-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.426 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[586af8d0-dc48-4f1c-9f37-a6abaf262302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.444 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[56e5c614-7ff4-4947-9d83-61e0e8b44482]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.482 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d7bb55-9b60-46b8-9afa-4a31fb935705]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.490 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2c20b6-7f98-4ee4-81be-bc76e1020e31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:09 np0005558241 NetworkManager[50376]: <info>  [1765616709.4918] manager: (tap065ccb44-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/576)
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.538 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b06be451-9cdc-40f3-a42d-d7da7ce732b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.542 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[79d976b2-f9b3-4f77-aa07-d57bb37a5012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:09 np0005558241 NetworkManager[50376]: <info>  [1765616709.5689] device (tap065ccb44-e0): carrier: link connected
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.578 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[910fe7e5-f513-4a9b-8a75-301a8b36ffef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:09 np0005558241 nova_compute[248510]: 2025-12-13 09:05:09.597 248514 DEBUG nova.compute.manager [req-c3841a69-eebd-4876-94a2-66017fb0b764 req-4a036084-aa32-4887-b3a5-96d55ec88457 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received event network-vif-plugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:09 np0005558241 nova_compute[248510]: 2025-12-13 09:05:09.598 248514 DEBUG oslo_concurrency.lockutils [req-c3841a69-eebd-4876-94a2-66017fb0b764 req-4a036084-aa32-4887-b3a5-96d55ec88457 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:09 np0005558241 nova_compute[248510]: 2025-12-13 09:05:09.599 248514 DEBUG oslo_concurrency.lockutils [req-c3841a69-eebd-4876-94a2-66017fb0b764 req-4a036084-aa32-4887-b3a5-96d55ec88457 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:09 np0005558241 nova_compute[248510]: 2025-12-13 09:05:09.600 248514 DEBUG oslo_concurrency.lockutils [req-c3841a69-eebd-4876-94a2-66017fb0b764 req-4a036084-aa32-4887-b3a5-96d55ec88457 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:09 np0005558241 nova_compute[248510]: 2025-12-13 09:05:09.600 248514 DEBUG nova.compute.manager [req-c3841a69-eebd-4876-94a2-66017fb0b764 req-4a036084-aa32-4887-b3a5-96d55ec88457 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Processing event network-vif-plugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.618 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[94111080-b269-46e4-871b-b120ea8f95dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap065ccb44-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:8b:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 928678, 'reachable_time': 36100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385301, 'error': None, 'target': 'ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.638 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a8276bc8-75c6-45ff-aa4a-4be38ab441c1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe56:8b71'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 928678, 'tstamp': 928678}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385302, 'error': None, 'target': 'ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.668 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e12b0a9a-0a70-482e-8802-308f5013c957]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap065ccb44-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:8b:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 928678, 'reachable_time': 36100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 385303, 'error': None, 'target': 'ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.705 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e41fff1c-2ddc-495c-b498-17b18f621424]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.765 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a9fdd7ce-2465-45af-a9b3-857f3f39c308]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.766 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap065ccb44-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.767 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.767 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap065ccb44-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:05:09 np0005558241 nova_compute[248510]: 2025-12-13 09:05:09.769 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:09 np0005558241 NetworkManager[50376]: <info>  [1765616709.7700] manager: (tap065ccb44-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/577)
Dec 13 04:05:09 np0005558241 kernel: tap065ccb44-e0: entered promiscuous mode
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.772 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap065ccb44-e0, col_values=(('external_ids', {'iface-id': '648ca59e-493f-4103-aa42-8d4449fe6f9d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:05:09 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:09Z|01390|binding|INFO|Releasing lport 648ca59e-493f-4103-aa42-8d4449fe6f9d from this chassis (sb_readonly=0)
Dec 13 04:05:09 np0005558241 nova_compute[248510]: 2025-12-13 09:05:09.797 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.797 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/065ccb44-e305-43f2-ab7a-728a65062da2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/065ccb44-e305-43f2-ab7a-728a65062da2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.799 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9eab321c-a833-445a-9912-1304179d8987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.800 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-065ccb44-e305-43f2-ab7a-728a65062da2
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/065ccb44-e305-43f2-ab7a-728a65062da2.pid.haproxy
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 065ccb44-e305-43f2-ab7a-728a65062da2
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:05:09 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:09.801 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2', 'env', 'PROCESS_TAG=haproxy-065ccb44-e305-43f2-ab7a-728a65062da2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/065ccb44-e305-43f2-ab7a-728a65062da2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:05:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:05:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:05:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:05:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:05:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:05:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:05:10 np0005558241 podman[385335]: 2025-12-13 09:05:10.22863371 +0000 UTC m=+0.052350543 container create d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 13 04:05:10 np0005558241 systemd[1]: Started libpod-conmon-d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec.scope.
Dec 13 04:05:10 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:05:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa95f491df7395f5ed1e1bdb4690670b794dc054243ab1a1438a6d32051c9c18/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:10 np0005558241 podman[385335]: 2025-12-13 09:05:10.205639724 +0000 UTC m=+0.029356577 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:05:10 np0005558241 podman[385335]: 2025-12-13 09:05:10.311742222 +0000 UTC m=+0.135459055 container init d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:05:10 np0005558241 podman[385335]: 2025-12-13 09:05:10.32124482 +0000 UTC m=+0.144961653 container start d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:05:10 np0005558241 neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2[385351]: [NOTICE]   (385373) : New worker (385389) forked
Dec 13 04:05:10 np0005558241 neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2[385351]: [NOTICE]   (385373) : Loading success.
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.469 248514 DEBUG nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.470 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616710.468951, bb154aa9-7029-4193-8d00-38e8e552a382 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.470 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] VM Started (Lifecycle Event)#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.473 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.476 248514 INFO nova.virt.libvirt.driver [-] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Instance spawned successfully.#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.476 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.502 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.515 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.520 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.521 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.521 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.522 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.522 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.522 248514 DEBUG nova.virt.libvirt.driver [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:05:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3122: 321 pgs: 321 active+clean; 134 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.555 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.556 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616710.4699361, bb154aa9-7029-4193-8d00-38e8e552a382 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.556 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.591 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.595 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616710.4724905, bb154aa9-7029-4193-8d00-38e8e552a382 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.595 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.612 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.615 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.619 248514 INFO nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Took 8.22 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.620 248514 DEBUG nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.658 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.742 248514 INFO nova.compute.manager [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Took 9.43 seconds to build instance.#033[00m
Dec 13 04:05:10 np0005558241 nova_compute[248510]: 2025-12-13 09:05:10.774 248514 DEBUG oslo_concurrency.lockutils [None req-f51483e2-2b3f-480c-8d8d-db29986d7ca3 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:05:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:05:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:05:11 np0005558241 nova_compute[248510]: 2025-12-13 09:05:11.728 248514 DEBUG nova.compute.manager [req-229a6fa0-b1ba-44dc-a1ad-ff620a800696 req-2ae3a704-96ff-4f83-a51a-bb29f8ab88fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received event network-vif-plugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:11 np0005558241 nova_compute[248510]: 2025-12-13 09:05:11.728 248514 DEBUG oslo_concurrency.lockutils [req-229a6fa0-b1ba-44dc-a1ad-ff620a800696 req-2ae3a704-96ff-4f83-a51a-bb29f8ab88fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:11 np0005558241 nova_compute[248510]: 2025-12-13 09:05:11.729 248514 DEBUG oslo_concurrency.lockutils [req-229a6fa0-b1ba-44dc-a1ad-ff620a800696 req-2ae3a704-96ff-4f83-a51a-bb29f8ab88fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:11 np0005558241 nova_compute[248510]: 2025-12-13 09:05:11.729 248514 DEBUG oslo_concurrency.lockutils [req-229a6fa0-b1ba-44dc-a1ad-ff620a800696 req-2ae3a704-96ff-4f83-a51a-bb29f8ab88fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:11 np0005558241 nova_compute[248510]: 2025-12-13 09:05:11.729 248514 DEBUG nova.compute.manager [req-229a6fa0-b1ba-44dc-a1ad-ff620a800696 req-2ae3a704-96ff-4f83-a51a-bb29f8ab88fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] No waiting events found dispatching network-vif-plugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:05:11 np0005558241 nova_compute[248510]: 2025-12-13 09:05:11.729 248514 WARNING nova.compute.manager [req-229a6fa0-b1ba-44dc-a1ad-ff620a800696 req-2ae3a704-96ff-4f83-a51a-bb29f8ab88fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received unexpected event network-vif-plugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:05:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:11Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0c:86:07 10.100.0.13
Dec 13 04:05:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:11Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0c:86:07 10.100.0.13
Dec 13 04:05:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3123: 321 pgs: 321 active+clean; 134 MiB data, 932 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Dec 13 04:05:13 np0005558241 nova_compute[248510]: 2025-12-13 09:05:13.178 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:13 np0005558241 nova_compute[248510]: 2025-12-13 09:05:13.338 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:13 np0005558241 nova_compute[248510]: 2025-12-13 09:05:13.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:05:14 np0005558241 podman[385408]: 2025-12-13 09:05:14.013194349 +0000 UTC m=+0.101676788 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:05:14 np0005558241 podman[385409]: 2025-12-13 09:05:14.028734999 +0000 UTC m=+0.098889459 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:05:14 np0005558241 podman[385415]: 2025-12-13 09:05:14.038859552 +0000 UTC m=+0.103970516 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 13 04:05:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3124: 321 pgs: 321 active+clean; 167 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 228 op/s
Dec 13 04:05:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:05:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/402574808' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:05:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:05:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/402574808' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:05:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:05:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3125: 321 pgs: 321 active+clean; 167 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 154 op/s
Dec 13 04:05:17 np0005558241 nova_compute[248510]: 2025-12-13 09:05:17.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:05:17 np0005558241 nova_compute[248510]: 2025-12-13 09:05:17.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:05:17 np0005558241 nova_compute[248510]: 2025-12-13 09:05:17.798 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:05:17 np0005558241 nova_compute[248510]: 2025-12-13 09:05:17.798 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:05:17 np0005558241 nova_compute[248510]: 2025-12-13 09:05:17.877 248514 INFO nova.compute.manager [None req-c45334f2-af22-4605-b023-637047f21221 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Get console output#033[00m
Dec 13 04:05:17 np0005558241 nova_compute[248510]: 2025-12-13 09:05:17.885 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 04:05:18 np0005558241 nova_compute[248510]: 2025-12-13 09:05:18.050 248514 DEBUG nova.compute.manager [req-704af718-850a-40a6-99b7-a53c3f410810 req-0ec51a2f-bcad-485f-9d59-3079b7fc344e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received event network-changed-f16b53b1-eab6-4458-976e-5cb112c77ef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:18 np0005558241 nova_compute[248510]: 2025-12-13 09:05:18.051 248514 DEBUG nova.compute.manager [req-704af718-850a-40a6-99b7-a53c3f410810 req-0ec51a2f-bcad-485f-9d59-3079b7fc344e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Refreshing instance network info cache due to event network-changed-f16b53b1-eab6-4458-976e-5cb112c77ef8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:05:18 np0005558241 nova_compute[248510]: 2025-12-13 09:05:18.051 248514 DEBUG oslo_concurrency.lockutils [req-704af718-850a-40a6-99b7-a53c3f410810 req-0ec51a2f-bcad-485f-9d59-3079b7fc344e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:05:18 np0005558241 nova_compute[248510]: 2025-12-13 09:05:18.051 248514 DEBUG oslo_concurrency.lockutils [req-704af718-850a-40a6-99b7-a53c3f410810 req-0ec51a2f-bcad-485f-9d59-3079b7fc344e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:05:18 np0005558241 nova_compute[248510]: 2025-12-13 09:05:18.051 248514 DEBUG nova.network.neutron [req-704af718-850a-40a6-99b7-a53c3f410810 req-0ec51a2f-bcad-485f-9d59-3079b7fc344e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Refreshing network info cache for port f16b53b1-eab6-4458-976e-5cb112c77ef8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:05:18 np0005558241 nova_compute[248510]: 2025-12-13 09:05:18.185 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:18 np0005558241 nova_compute[248510]: 2025-12-13 09:05:18.262 248514 DEBUG nova.objects.instance [None req-9aa5701a-b47b-483e-b99e-78c50f8e4e21 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 17c3814d-c11f-4032-a891-4cbdf3f7c065 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:05:18 np0005558241 nova_compute[248510]: 2025-12-13 09:05:18.298 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616718.2978554, 17c3814d-c11f-4032-a891-4cbdf3f7c065 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:05:18 np0005558241 nova_compute[248510]: 2025-12-13 09:05:18.298 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:05:18 np0005558241 nova_compute[248510]: 2025-12-13 09:05:18.326 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:05:18 np0005558241 nova_compute[248510]: 2025-12-13 09:05:18.333 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:05:18 np0005558241 nova_compute[248510]: 2025-12-13 09:05:18.340 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:18 np0005558241 nova_compute[248510]: 2025-12-13 09:05:18.360 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Dec 13 04:05:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3126: 321 pgs: 321 active+clean; 167 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 154 op/s
Dec 13 04:05:19 np0005558241 kernel: tap21744784-e3 (unregistering): left promiscuous mode
Dec 13 04:05:19 np0005558241 NetworkManager[50376]: <info>  [1765616719.1190] device (tap21744784-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:05:19 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:19Z|01391|binding|INFO|Releasing lport 21744784-e35a-46d5-ac8c-8f9783a0a387 from this chassis (sb_readonly=0)
Dec 13 04:05:19 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:19Z|01392|binding|INFO|Setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 down in Southbound
Dec 13 04:05:19 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:19Z|01393|binding|INFO|Removing iface tap21744784-e3 ovn-installed in OVS
Dec 13 04:05:19 np0005558241 nova_compute[248510]: 2025-12-13 09:05:19.130 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.140 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:86:07 10.100.0.13'], port_security=['fa:16:3e:0c:86:07 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '17c3814d-c11f-4032-a891-4cbdf3f7c065', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '639365a4-e1d2-4459-8bf9-30c8c4273027', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e18eae52-141b-40a9-9ed0-d52b1399a23b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=21744784-e35a-46d5-ac8c-8f9783a0a387) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.142 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 21744784-e35a-46d5-ac8c-8f9783a0a387 in datapath 084e5836-e0e4-4328-8e03-dfcdcd227a7c unbound from our chassis#033[00m
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.143 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 084e5836-e0e4-4328-8e03-dfcdcd227a7c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.145 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[decbe2b1-ae78-4e66-8d5d-da426d532f44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.148 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c namespace which is not needed anymore#033[00m
Dec 13 04:05:19 np0005558241 nova_compute[248510]: 2025-12-13 09:05:19.153 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:19 np0005558241 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000087.scope: Deactivated successfully.
Dec 13 04:05:19 np0005558241 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000087.scope: Consumed 12.506s CPU time.
Dec 13 04:05:19 np0005558241 systemd-machined[210538]: Machine qemu-165-instance-00000087 terminated.
Dec 13 04:05:19 np0005558241 kernel: tap21744784-e3: entered promiscuous mode
Dec 13 04:05:19 np0005558241 NetworkManager[50376]: <info>  [1765616719.3112] manager: (tap21744784-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/578)
Dec 13 04:05:19 np0005558241 nova_compute[248510]: 2025-12-13 09:05:19.313 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:19 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:19Z|01394|binding|INFO|Claiming lport 21744784-e35a-46d5-ac8c-8f9783a0a387 for this chassis.
Dec 13 04:05:19 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:19Z|01395|binding|INFO|21744784-e35a-46d5-ac8c-8f9783a0a387: Claiming fa:16:3e:0c:86:07 10.100.0.13
Dec 13 04:05:19 np0005558241 kernel: tap21744784-e3 (unregistering): left promiscuous mode
Dec 13 04:05:19 np0005558241 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[384927]: [NOTICE]   (384931) : haproxy version is 2.8.14-c23fe91
Dec 13 04:05:19 np0005558241 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[384927]: [NOTICE]   (384931) : path to executable is /usr/sbin/haproxy
Dec 13 04:05:19 np0005558241 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[384927]: [WARNING]  (384931) : Exiting Master process...
Dec 13 04:05:19 np0005558241 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[384927]: [WARNING]  (384931) : Exiting Master process...
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.336 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:86:07 10.100.0.13'], port_security=['fa:16:3e:0c:86:07 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '17c3814d-c11f-4032-a891-4cbdf3f7c065', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '639365a4-e1d2-4459-8bf9-30c8c4273027', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e18eae52-141b-40a9-9ed0-d52b1399a23b, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=21744784-e35a-46d5-ac8c-8f9783a0a387) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:05:19 np0005558241 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[384927]: [ALERT]    (384931) : Current worker (384933) exited with code 143 (Terminated)
Dec 13 04:05:19 np0005558241 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[384927]: [WARNING]  (384931) : All workers exited. Exiting... (0)
Dec 13 04:05:19 np0005558241 systemd[1]: libpod-cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc.scope: Deactivated successfully.
Dec 13 04:05:19 np0005558241 conmon[384927]: conmon cf1b23cdfc4dbbf79f4e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc.scope/container/memory.events
Dec 13 04:05:19 np0005558241 nova_compute[248510]: 2025-12-13 09:05:19.350 248514 DEBUG nova.compute.manager [None req-9aa5701a-b47b-483e-b99e-78c50f8e4e21 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:05:19 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:19Z|01396|binding|INFO|Setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 ovn-installed in OVS
Dec 13 04:05:19 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:19Z|01397|binding|INFO|Setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 up in Southbound
Dec 13 04:05:19 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:19Z|01398|binding|INFO|Releasing lport 21744784-e35a-46d5-ac8c-8f9783a0a387 from this chassis (sb_readonly=1)
Dec 13 04:05:19 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:19Z|01399|if_status|INFO|Dropped 4 log messages in last 250 seconds (most recently, 245 seconds ago) due to excessive rate
Dec 13 04:05:19 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:19Z|01400|if_status|INFO|Not setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 down as sb is readonly
Dec 13 04:05:19 np0005558241 nova_compute[248510]: 2025-12-13 09:05:19.353 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:19 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:19Z|01401|binding|INFO|Removing iface tap21744784-e3 ovn-installed in OVS
Dec 13 04:05:19 np0005558241 nova_compute[248510]: 2025-12-13 09:05:19.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:19 np0005558241 podman[385496]: 2025-12-13 09:05:19.357545759 +0000 UTC m=+0.083032622 container died cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:05:19 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:19Z|01402|binding|INFO|Releasing lport 21744784-e35a-46d5-ac8c-8f9783a0a387 from this chassis (sb_readonly=0)
Dec 13 04:05:19 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:19Z|01403|binding|INFO|Setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 down in Southbound
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.366 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:86:07 10.100.0.13'], port_security=['fa:16:3e:0c:86:07 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '17c3814d-c11f-4032-a891-4cbdf3f7c065', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '639365a4-e1d2-4459-8bf9-30c8c4273027', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e18eae52-141b-40a9-9ed0-d52b1399a23b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=21744784-e35a-46d5-ac8c-8f9783a0a387) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:05:19 np0005558241 nova_compute[248510]: 2025-12-13 09:05:19.381 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:19 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc-userdata-shm.mount: Deactivated successfully.
Dec 13 04:05:19 np0005558241 systemd[1]: var-lib-containers-storage-overlay-bab414d9614e7f964aa77c46b42174cdefbb9b6ee0ab6eaef22a54e3e3265f7e-merged.mount: Deactivated successfully.
Dec 13 04:05:19 np0005558241 podman[385496]: 2025-12-13 09:05:19.423988594 +0000 UTC m=+0.149475477 container cleanup cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 04:05:19 np0005558241 systemd[1]: libpod-conmon-cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc.scope: Deactivated successfully.
Dec 13 04:05:19 np0005558241 podman[385533]: 2025-12-13 09:05:19.529816885 +0000 UTC m=+0.058033305 container remove cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.539 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce93cf9-d24c-49c9-aaa9-1db5c65286ee]: (4, ('Sat Dec 13 09:05:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c (cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc)\ncf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc\nSat Dec 13 09:05:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c (cf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc)\ncf1b23cdfc4dbbf79f4ef35a484596b4638b37a827ba43871e50e106b2374dbc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.543 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9521fb9d-8ad0-4749-bd5a-b36c0b282899]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.546 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap084e5836-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:05:19 np0005558241 nova_compute[248510]: 2025-12-13 09:05:19.548 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:19 np0005558241 kernel: tap084e5836-e0: left promiscuous mode
Dec 13 04:05:19 np0005558241 nova_compute[248510]: 2025-12-13 09:05:19.585 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.589 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6e667e-eddf-4679-9d20-42324d241d07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.612 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8562cef7-ce90-4b44-b92f-d09fda9174d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.614 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b2e41c-2594-4633-8b12-98f8d12e4503]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.644 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f1d92c1a-b035-4037-aff6-e40778a77df1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 927414, 'reachable_time': 17075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385552, 'error': None, 'target': 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.648 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.648 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[edbb5dec-904e-4aea-a543-406de346a4cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.649 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 21744784-e35a-46d5-ac8c-8f9783a0a387 in datapath 084e5836-e0e4-4328-8e03-dfcdcd227a7c unbound from our chassis#033[00m
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.651 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 084e5836-e0e4-4328-8e03-dfcdcd227a7c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:05:19 np0005558241 systemd[1]: run-netns-ovnmeta\x2d084e5836\x2de0e4\x2d4328\x2d8e03\x2ddfcdcd227a7c.mount: Deactivated successfully.
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.654 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[292df783-09ad-46ec-9dd8-ed563b45d881]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.655 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 21744784-e35a-46d5-ac8c-8f9783a0a387 in datapath 084e5836-e0e4-4328-8e03-dfcdcd227a7c unbound from our chassis#033[00m
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.658 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 084e5836-e0e4-4328-8e03-dfcdcd227a7c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:05:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:19.659 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e7cdff-f545-4917-9fe8-e07959b5d203]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3127: 321 pgs: 321 active+clean; 167 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.802 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-unplugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.802 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.803 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.803 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.803 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-unplugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.804 248514 WARNING nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-unplugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state suspended and task_state None.#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.804 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.804 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.805 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.805 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.805 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.806 248514 WARNING nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state suspended and task_state None.#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.806 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.806 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.807 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.807 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.807 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.808 248514 WARNING nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state suspended and task_state None.#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.808 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.808 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.809 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.809 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.809 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.810 248514 WARNING nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state suspended and task_state None.#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.810 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-unplugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.810 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.811 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.811 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.812 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-unplugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.812 248514 WARNING nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-unplugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state suspended and task_state None.#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.812 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.813 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.813 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.813 248514 DEBUG oslo_concurrency.lockutils [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.814 248514 DEBUG nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.814 248514 WARNING nova.compute.manager [req-68da337a-ab28-4aeb-9d93-55915d494abd req-fb37758b-8586-452c-82bd-48a37878a79b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state suspended and task_state None.#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.936 248514 DEBUG nova.network.neutron [req-704af718-850a-40a6-99b7-a53c3f410810 req-0ec51a2f-bcad-485f-9d59-3079b7fc344e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Updated VIF entry in instance network info cache for port f16b53b1-eab6-4458-976e-5cb112c77ef8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:05:20 np0005558241 nova_compute[248510]: 2025-12-13 09:05:20.937 248514 DEBUG nova.network.neutron [req-704af718-850a-40a6-99b7-a53c3f410810 req-0ec51a2f-bcad-485f-9d59-3079b7fc344e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Updating instance_info_cache with network_info: [{"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:05:21 np0005558241 nova_compute[248510]: 2025-12-13 09:05:21.005 248514 DEBUG oslo_concurrency.lockutils [req-704af718-850a-40a6-99b7-a53c3f410810 req-0ec51a2f-bcad-485f-9d59-3079b7fc344e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:05:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:05:21 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 7.
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011216364250275013 of space, bias 1.0, pg target 0.3364909275082504 quantized to 32 (current 32)
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006698180496316175 of space, bias 1.0, pg target 0.20094541488948525 quantized to 32 (current 32)
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:05:22 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:22Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:68:ba:c6 10.100.0.7
Dec 13 04:05:22 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:22Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:ba:c6 10.100.0.7
Dec 13 04:05:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3128: 321 pgs: 321 active+clean; 167 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Dec 13 04:05:22 np0005558241 nova_compute[248510]: 2025-12-13 09:05:22.757 248514 INFO nova.compute.manager [None req-d7086825-e1fe-4a5f-afeb-cdd070d9e891 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Get console output#033[00m
Dec 13 04:05:23 np0005558241 nova_compute[248510]: 2025-12-13 09:05:23.036 248514 INFO nova.compute.manager [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Resuming#033[00m
Dec 13 04:05:23 np0005558241 nova_compute[248510]: 2025-12-13 09:05:23.038 248514 DEBUG nova.objects.instance [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'flavor' on Instance uuid 17c3814d-c11f-4032-a891-4cbdf3f7c065 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:05:23 np0005558241 nova_compute[248510]: 2025-12-13 09:05:23.085 248514 DEBUG oslo_concurrency.lockutils [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:05:23 np0005558241 nova_compute[248510]: 2025-12-13 09:05:23.086 248514 DEBUG oslo_concurrency.lockutils [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquired lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:05:23 np0005558241 nova_compute[248510]: 2025-12-13 09:05:23.086 248514 DEBUG nova.network.neutron [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:05:23 np0005558241 nova_compute[248510]: 2025-12-13 09:05:23.184 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:23 np0005558241 nova_compute[248510]: 2025-12-13 09:05:23.342 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:23 np0005558241 nova_compute[248510]: 2025-12-13 09:05:23.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:05:23 np0005558241 nova_compute[248510]: 2025-12-13 09:05:23.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:05:23 np0005558241 nova_compute[248510]: 2025-12-13 09:05:23.984 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:23 np0005558241 nova_compute[248510]: 2025-12-13 09:05:23.985 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:23 np0005558241 nova_compute[248510]: 2025-12-13 09:05:23.985 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:23 np0005558241 nova_compute[248510]: 2025-12-13 09:05:23.986 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:05:23 np0005558241 nova_compute[248510]: 2025-12-13 09:05:23.987 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:05:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3129: 321 pgs: 321 active+clean; 200 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 197 op/s
Dec 13 04:05:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:05:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2738187052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.386 248514 DEBUG nova.network.neutron [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updating instance_info_cache with network_info: [{"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.394 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.414 248514 DEBUG oslo_concurrency.lockutils [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Releasing lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.423 248514 DEBUG nova.virt.libvirt.vif [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:04:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1148057715',display_name='tempest-TestNetworkAdvancedServerOps-server-1148057715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1148057715',id=135,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEs/ai9hGvppAaJZucTj3XKRO2Hw6GseMJO/eDqjojdlq1TGbhXoM0CcjiSd48S7VRrLpZa2ASSqmIp8ZUA6nKNWf+HEZDknZ93ek9rtzJ3ZoLckx8X8JxIdR5xCKRsULQ==',key_name='tempest-TestNetworkAdvancedServerOps-326216115',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:04:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a5ac50ap',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:05:19Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=17c3814d-c11f-4032-a891-4cbdf3f7c065,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.424 248514 DEBUG nova.network.os_vif_util [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.425 248514 DEBUG nova.network.os_vif_util [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.425 248514 DEBUG os_vif [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.426 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.427 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.431 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.431 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21744784-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.431 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21744784-e3, col_values=(('external_ids', {'iface-id': '21744784-e35a-46d5-ac8c-8f9783a0a387', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:86:07', 'vm-uuid': '17c3814d-c11f-4032-a891-4cbdf3f7c065'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.432 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.432 248514 INFO os_vif [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3')#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.455 248514 DEBUG nova.objects.instance [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'numa_topology' on Instance uuid 17c3814d-c11f-4032-a891-4cbdf3f7c065 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.511 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.512 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:05:25 np0005558241 kernel: tap21744784-e3: entered promiscuous mode
Dec 13 04:05:25 np0005558241 NetworkManager[50376]: <info>  [1765616725.5433] manager: (tap21744784-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/579)
Dec 13 04:05:25 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:25Z|01404|binding|INFO|Claiming lport 21744784-e35a-46d5-ac8c-8f9783a0a387 for this chassis.
Dec 13 04:05:25 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:25Z|01405|binding|INFO|21744784-e35a-46d5-ac8c-8f9783a0a387: Claiming fa:16:3e:0c:86:07 10.100.0.13
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.581 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:25 np0005558241 systemd-udevd[385589]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:05:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.590 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:86:07 10.100.0.13'], port_security=['fa:16:3e:0c:86:07 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '17c3814d-c11f-4032-a891-4cbdf3f7c065', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '7', 'neutron:security_group_ids': '639365a4-e1d2-4459-8bf9-30c8c4273027', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e18eae52-141b-40a9-9ed0-d52b1399a23b, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=21744784-e35a-46d5-ac8c-8f9783a0a387) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:05:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.591 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 21744784-e35a-46d5-ac8c-8f9783a0a387 in datapath 084e5836-e0e4-4328-8e03-dfcdcd227a7c bound to our chassis#033[00m
Dec 13 04:05:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.593 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 084e5836-e0e4-4328-8e03-dfcdcd227a7c#033[00m
Dec 13 04:05:25 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:25Z|01406|binding|INFO|Setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 ovn-installed in OVS
Dec 13 04:05:25 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:25Z|01407|binding|INFO|Setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 up in Southbound
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.596 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.598 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:25 np0005558241 NetworkManager[50376]: <info>  [1765616725.6054] device (tap21744784-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:05:25 np0005558241 NetworkManager[50376]: <info>  [1765616725.6063] device (tap21744784-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:05:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.613 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2f48d4cd-3f18-4c6f-9654-e09accfabe28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.614 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap084e5836-e1 in ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:05:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.616 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap084e5836-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:05:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.616 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[51392abc-e0fd-4a70-95bc-5f565e5f4b27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.617 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0db26429-b28e-42d1-9d81-2557a80d9dae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:25 np0005558241 systemd-machined[210538]: New machine qemu-167-instance-00000087.
Dec 13 04:05:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.630 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[0deef4d9-0a73-42ba-b081-bfd40f3f988a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:25 np0005558241 systemd[1]: Started Virtual Machine qemu-167-instance-00000087.
Dec 13 04:05:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.657 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f93b17f5-7460-4cc1-bb5a-6eed784c9700]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.707 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[dfbc5921-a058-4c47-9361-ccffa41274dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:25 np0005558241 systemd-udevd[385594]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:05:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.715 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[07dda38f-998b-41ed-aabb-74c919da2805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:25 np0005558241 NetworkManager[50376]: <info>  [1765616725.7182] manager: (tap084e5836-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/580)
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.726 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.727 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:05:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.762 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b442f224-d338-49a3-8fa2-48eb4d4b9519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.765 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3a79a48d-1ff7-431c-a1a1-df32af41221a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:25 np0005558241 NetworkManager[50376]: <info>  [1765616725.8105] device (tap084e5836-e0): carrier: link connected
Dec 13 04:05:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.819 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[dce8dda9-e698-4b8d-b8fc-a0495dad21a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.854 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bd47de09-9df8-4b67-86f6-19adb8f44377]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap084e5836-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:f7:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 406], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 930302, 'reachable_time': 27305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385625, 'error': None, 'target': 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.877 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6a12d3bb-e877-44be-9006-058e4fa00bb1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee7:f759'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 930302, 'tstamp': 930302}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385626, 'error': None, 'target': 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.898 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1508edf2-610f-47d2-9591-7c46ebec4598]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap084e5836-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:f7:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 406], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 930302, 'reachable_time': 27305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 385627, 'error': None, 'target': 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:25.945 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2c95a459-a143-4130-af43-1e0d1096484e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.962 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.963 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3283MB free_disk=59.920996208675206GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.963 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:25 np0005558241 nova_compute[248510]: 2025-12-13 09:05:25.963 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:26.036 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3dcb7526-cf9c-4444-8d0e-57193d5f4ace]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:26.038 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap084e5836-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:26.039 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:26.039 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap084e5836-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:05:26 np0005558241 kernel: tap084e5836-e0: entered promiscuous mode
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.041 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:26 np0005558241 NetworkManager[50376]: <info>  [1765616726.0425] manager: (tap084e5836-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/581)
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.043 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:26.046 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap084e5836-e0, col_values=(('external_ids', {'iface-id': '2953b79f-9235-4cc1-ad54-d75b960374dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.047 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:26 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:26Z|01408|binding|INFO|Releasing lport 2953b79f-9235-4cc1-ad54-d75b960374dc from this chassis (sb_readonly=0)
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:26.048 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/084e5836-e0e4-4328-8e03-dfcdcd227a7c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/084e5836-e0e4-4328-8e03-dfcdcd227a7c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:26.049 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9a8eee-14e8-49c8-ae5e-e0b45b6472e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:26.050 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-084e5836-e0e4-4328-8e03-dfcdcd227a7c
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/084e5836-e0e4-4328-8e03-dfcdcd227a7c.pid.haproxy
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 084e5836-e0e4-4328-8e03-dfcdcd227a7c
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:05:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:26.050 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'env', 'PROCESS_TAG=haproxy-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/084e5836-e0e4-4328-8e03-dfcdcd227a7c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.061 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.114 248514 DEBUG nova.compute.manager [req-8008d6b4-1628-41d6-bde9-8eb60d2a5fec req-95f0075e-6c0d-460e-bb6f-83f274ac04ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.115 248514 DEBUG oslo_concurrency.lockutils [req-8008d6b4-1628-41d6-bde9-8eb60d2a5fec req-95f0075e-6c0d-460e-bb6f-83f274ac04ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.115 248514 DEBUG oslo_concurrency.lockutils [req-8008d6b4-1628-41d6-bde9-8eb60d2a5fec req-95f0075e-6c0d-460e-bb6f-83f274ac04ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.116 248514 DEBUG oslo_concurrency.lockutils [req-8008d6b4-1628-41d6-bde9-8eb60d2a5fec req-95f0075e-6c0d-460e-bb6f-83f274ac04ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.116 248514 DEBUG nova.compute.manager [req-8008d6b4-1628-41d6-bde9-8eb60d2a5fec req-95f0075e-6c0d-460e-bb6f-83f274ac04ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.116 248514 WARNING nova.compute.manager [req-8008d6b4-1628-41d6-bde9-8eb60d2a5fec req-95f0075e-6c0d-460e-bb6f-83f274ac04ba 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state suspended and task_state resuming.#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.181 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 17c3814d-c11f-4032-a891-4cbdf3f7c065 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.182 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance bb154aa9-7029-4193-8d00-38e8e552a382 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.182 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.182 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.283 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.378 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for 17c3814d-c11f-4032-a891-4cbdf3f7c065 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.379 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616726.377207, 17c3814d-c11f-4032-a891-4cbdf3f7c065 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.379 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] VM Started (Lifecycle Event)#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.403 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.422 248514 DEBUG nova.compute.manager [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.422 248514 DEBUG nova.objects.instance [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 17c3814d-c11f-4032-a891-4cbdf3f7c065 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.426 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.444 248514 INFO nova.virt.libvirt.driver [-] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Instance running successfully.#033[00m
Dec 13 04:05:26 np0005558241 virtqemud[248808]: argument unsupported: QEMU guest agent is not configured
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.447 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.447 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616726.382789, 17c3814d-c11f-4032-a891-4cbdf3f7c065 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.448 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.450 248514 DEBUG nova.virt.libvirt.guest [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.450 248514 DEBUG nova.compute.manager [None req-1a956a36-0307-439b-ae3e-be4a9320abbe a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.496 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.500 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:05:26 np0005558241 podman[385698]: 2025-12-13 09:05:26.518556903 +0000 UTC m=+0.071066352 container create 142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.527 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Dec 13 04:05:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3130: 321 pgs: 321 active+clean; 200 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 13 04:05:26 np0005558241 podman[385698]: 2025-12-13 09:05:26.484883179 +0000 UTC m=+0.037392638 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:05:26 np0005558241 systemd[1]: Started libpod-conmon-142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f.scope.
Dec 13 04:05:26 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:05:26 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f7d273eeceb0bf4fd1ef9b86958ede76e0fc0f84087e86c40344f9a7eed60d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:26 np0005558241 podman[385698]: 2025-12-13 09:05:26.646196621 +0000 UTC m=+0.198706080 container init 142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:05:26 np0005558241 podman[385698]: 2025-12-13 09:05:26.652263693 +0000 UTC m=+0.204773122 container start 142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:05:26 np0005558241 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[385732]: [NOTICE]   (385736) : New worker (385738) forked
Dec 13 04:05:26 np0005558241 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[385732]: [NOTICE]   (385736) : Loading success.
Dec 13 04:05:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:05:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2079382082' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.894 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.903 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.921 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.954 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:05:26 np0005558241 nova_compute[248510]: 2025-12-13 09:05:26.954 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.991s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:28 np0005558241 nova_compute[248510]: 2025-12-13 09:05:28.066 248514 INFO nova.compute.manager [None req-6562f15c-1dc1-494a-8016-dac0ef581a6b a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Get console output#033[00m
Dec 13 04:05:28 np0005558241 nova_compute[248510]: 2025-12-13 09:05:28.076 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 04:05:28 np0005558241 nova_compute[248510]: 2025-12-13 09:05:28.187 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:28 np0005558241 nova_compute[248510]: 2025-12-13 09:05:28.237 248514 DEBUG nova.compute.manager [req-11d08ff0-9bbc-4681-ae3c-235070faa142 req-fe5a983a-a708-4599-a752-3f284e91216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:28 np0005558241 nova_compute[248510]: 2025-12-13 09:05:28.238 248514 DEBUG oslo_concurrency.lockutils [req-11d08ff0-9bbc-4681-ae3c-235070faa142 req-fe5a983a-a708-4599-a752-3f284e91216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:28 np0005558241 nova_compute[248510]: 2025-12-13 09:05:28.238 248514 DEBUG oslo_concurrency.lockutils [req-11d08ff0-9bbc-4681-ae3c-235070faa142 req-fe5a983a-a708-4599-a752-3f284e91216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:28 np0005558241 nova_compute[248510]: 2025-12-13 09:05:28.238 248514 DEBUG oslo_concurrency.lockutils [req-11d08ff0-9bbc-4681-ae3c-235070faa142 req-fe5a983a-a708-4599-a752-3f284e91216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:28 np0005558241 nova_compute[248510]: 2025-12-13 09:05:28.238 248514 DEBUG nova.compute.manager [req-11d08ff0-9bbc-4681-ae3c-235070faa142 req-fe5a983a-a708-4599-a752-3f284e91216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:05:28 np0005558241 nova_compute[248510]: 2025-12-13 09:05:28.238 248514 WARNING nova.compute.manager [req-11d08ff0-9bbc-4681-ae3c-235070faa142 req-fe5a983a-a708-4599-a752-3f284e91216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:05:28 np0005558241 nova_compute[248510]: 2025-12-13 09:05:28.344 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3131: 321 pgs: 321 active+clean; 200 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 198 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:05:28 np0005558241 nova_compute[248510]: 2025-12-13 09:05:28.930 248514 INFO nova.compute.manager [None req-3551c2a8-eb1c-411c-9534-f4415f34581d 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Get console output#033[00m
Dec 13 04:05:28 np0005558241 nova_compute[248510]: 2025-12-13 09:05:28.936 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 04:05:29 np0005558241 nova_compute[248510]: 2025-12-13 09:05:29.540 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:29.540 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:05:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:29.543 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:05:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:29.545 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:05:29 np0005558241 nova_compute[248510]: 2025-12-13 09:05:29.887 248514 DEBUG nova.compute.manager [req-18f35139-d86c-47cc-82da-c8609cb15bbc req-06814d84-3234-4700-bcc3-8212eabcc47d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-changed-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:29 np0005558241 nova_compute[248510]: 2025-12-13 09:05:29.888 248514 DEBUG nova.compute.manager [req-18f35139-d86c-47cc-82da-c8609cb15bbc req-06814d84-3234-4700-bcc3-8212eabcc47d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Refreshing instance network info cache due to event network-changed-21744784-e35a-46d5-ac8c-8f9783a0a387. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:05:29 np0005558241 nova_compute[248510]: 2025-12-13 09:05:29.888 248514 DEBUG oslo_concurrency.lockutils [req-18f35139-d86c-47cc-82da-c8609cb15bbc req-06814d84-3234-4700-bcc3-8212eabcc47d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:05:29 np0005558241 nova_compute[248510]: 2025-12-13 09:05:29.889 248514 DEBUG oslo_concurrency.lockutils [req-18f35139-d86c-47cc-82da-c8609cb15bbc req-06814d84-3234-4700-bcc3-8212eabcc47d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:05:29 np0005558241 nova_compute[248510]: 2025-12-13 09:05:29.889 248514 DEBUG nova.network.neutron [req-18f35139-d86c-47cc-82da-c8609cb15bbc req-06814d84-3234-4700-bcc3-8212eabcc47d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Refreshing network info cache for port 21744784-e35a-46d5-ac8c-8f9783a0a387 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:05:29 np0005558241 nova_compute[248510]: 2025-12-13 09:05:29.971 248514 DEBUG oslo_concurrency.lockutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:29 np0005558241 nova_compute[248510]: 2025-12-13 09:05:29.971 248514 DEBUG oslo_concurrency.lockutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:29 np0005558241 nova_compute[248510]: 2025-12-13 09:05:29.972 248514 DEBUG oslo_concurrency.lockutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:29 np0005558241 nova_compute[248510]: 2025-12-13 09:05:29.972 248514 DEBUG oslo_concurrency.lockutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:29 np0005558241 nova_compute[248510]: 2025-12-13 09:05:29.972 248514 DEBUG oslo_concurrency.lockutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:29 np0005558241 nova_compute[248510]: 2025-12-13 09:05:29.974 248514 INFO nova.compute.manager [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Terminating instance#033[00m
Dec 13 04:05:29 np0005558241 nova_compute[248510]: 2025-12-13 09:05:29.976 248514 DEBUG nova.compute.manager [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:05:30 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:30Z|01409|binding|INFO|Releasing lport 648ca59e-493f-4103-aa42-8d4449fe6f9d from this chassis (sb_readonly=0)
Dec 13 04:05:30 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:30Z|01410|binding|INFO|Releasing lport 2953b79f-9235-4cc1-ad54-d75b960374dc from this chassis (sb_readonly=0)
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.016 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:30 np0005558241 kernel: tap21744784-e3 (unregistering): left promiscuous mode
Dec 13 04:05:30 np0005558241 NetworkManager[50376]: <info>  [1765616730.0418] device (tap21744784-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.106 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:30 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:30Z|01411|binding|INFO|Releasing lport 648ca59e-493f-4103-aa42-8d4449fe6f9d from this chassis (sb_readonly=0)
Dec 13 04:05:30 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:30Z|01412|binding|INFO|Releasing lport 2953b79f-9235-4cc1-ad54-d75b960374dc from this chassis (sb_readonly=0)
Dec 13 04:05:30 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:30Z|01413|binding|INFO|Releasing lport 21744784-e35a-46d5-ac8c-8f9783a0a387 from this chassis (sb_readonly=0)
Dec 13 04:05:30 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:30Z|01414|binding|INFO|Removing iface tap21744784-e3 ovn-installed in OVS
Dec 13 04:05:30 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:30Z|01415|binding|INFO|Setting lport 21744784-e35a-46d5-ac8c-8f9783a0a387 down in Southbound
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.111 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.117 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:86:07 10.100.0.13'], port_security=['fa:16:3e:0c:86:07 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '17c3814d-c11f-4032-a891-4cbdf3f7c065', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '77d40177a4804671aa9c5da343bc2ed4', 'neutron:revision_number': '8', 'neutron:security_group_ids': '639365a4-e1d2-4459-8bf9-30c8c4273027', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e18eae52-141b-40a9-9ed0-d52b1399a23b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=21744784-e35a-46d5-ac8c-8f9783a0a387) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.117 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.120 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 21744784-e35a-46d5-ac8c-8f9783a0a387 in datapath 084e5836-e0e4-4328-8e03-dfcdcd227a7c unbound from our chassis#033[00m
Dec 13 04:05:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.123 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 084e5836-e0e4-4328-8e03-dfcdcd227a7c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:05:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.124 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f7cd1086-f7da-4682-b301-791ccc286462]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.125 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c namespace which is not needed anymore#033[00m
Dec 13 04:05:30 np0005558241 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000087.scope: Deactivated successfully.
Dec 13 04:05:30 np0005558241 systemd-machined[210538]: Machine qemu-167-instance-00000087 terminated.
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.171 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.218 248514 INFO nova.virt.libvirt.driver [-] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Instance destroyed successfully.#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.218 248514 DEBUG nova.objects.instance [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lazy-loading 'resources' on Instance uuid 17c3814d-c11f-4032-a891-4cbdf3f7c065 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.240 248514 DEBUG nova.virt.libvirt.vif [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:04:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1148057715',display_name='tempest-TestNetworkAdvancedServerOps-server-1148057715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1148057715',id=135,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEs/ai9hGvppAaJZucTj3XKRO2Hw6GseMJO/eDqjojdlq1TGbhXoM0CcjiSd48S7VRrLpZa2ASSqmIp8ZUA6nKNWf+HEZDknZ93ek9rtzJ3ZoLckx8X8JxIdR5xCKRsULQ==',key_name='tempest-TestNetworkAdvancedServerOps-326216115',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:04:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='77d40177a4804671aa9c5da343bc2ed4',ramdisk_id='',reservation_id='r-a5ac50ap',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1969761839',owner_user_name='tempest-TestNetworkAdvancedServerOps-1969761839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:05:26Z,user_data=None,user_id='a5fd410579fa429ba0f7f680590cd86a',uuid=17c3814d-c11f-4032-a891-4cbdf3f7c065,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.240 248514 DEBUG nova.network.os_vif_util [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converting VIF {"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.241 248514 DEBUG nova.network.os_vif_util [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.241 248514 DEBUG os_vif [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.243 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.244 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21744784-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.245 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.247 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.253 248514 INFO os_vif [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:86:07,bridge_name='br-int',has_traffic_filtering=True,id=21744784-e35a-46d5-ac8c-8f9783a0a387,network=Network(084e5836-e0e4-4328-8e03-dfcdcd227a7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21744784-e3')#033[00m
Dec 13 04:05:30 np0005558241 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[385732]: [NOTICE]   (385736) : haproxy version is 2.8.14-c23fe91
Dec 13 04:05:30 np0005558241 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[385732]: [NOTICE]   (385736) : path to executable is /usr/sbin/haproxy
Dec 13 04:05:30 np0005558241 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[385732]: [WARNING]  (385736) : Exiting Master process...
Dec 13 04:05:30 np0005558241 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[385732]: [WARNING]  (385736) : Exiting Master process...
Dec 13 04:05:30 np0005558241 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[385732]: [ALERT]    (385736) : Current worker (385738) exited with code 143 (Terminated)
Dec 13 04:05:30 np0005558241 neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c[385732]: [WARNING]  (385736) : All workers exited. Exiting... (0)
Dec 13 04:05:30 np0005558241 systemd[1]: libpod-142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f.scope: Deactivated successfully.
Dec 13 04:05:30 np0005558241 conmon[385732]: conmon 142a55ec433536e54c1d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f.scope/container/memory.events
Dec 13 04:05:30 np0005558241 podman[385784]: 2025-12-13 09:05:30.327998596 +0000 UTC m=+0.063062441 container died 142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 04:05:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f-userdata-shm.mount: Deactivated successfully.
Dec 13 04:05:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d7f7d273eeceb0bf4fd1ef9b86958ede76e0fc0f84087e86c40344f9a7eed60d-merged.mount: Deactivated successfully.
Dec 13 04:05:30 np0005558241 podman[385784]: 2025-12-13 09:05:30.377202019 +0000 UTC m=+0.112265834 container cleanup 142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 04:05:30 np0005558241 systemd[1]: libpod-conmon-142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f.scope: Deactivated successfully.
Dec 13 04:05:30 np0005558241 podman[385828]: 2025-12-13 09:05:30.473894431 +0000 UTC m=+0.066607420 container remove 142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 04:05:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.487 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b3cbaa-e2ae-4703-97ac-6d756b210205]: (4, ('Sat Dec 13 09:05:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c (142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f)\n142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f\nSat Dec 13 09:05:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c (142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f)\n142a55ec433536e54c1d48c70170bddbc9f47622c68c25d17ff619fae1778b0f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.490 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[86ce0825-5da9-4182-b484-872592d37b9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.491 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap084e5836-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.493 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:30 np0005558241 kernel: tap084e5836-e0: left promiscuous mode
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.514 248514 DEBUG nova.compute.manager [req-2b818b83-5e26-4cd8-afbb-01b9e692a656 req-47fe7c4a-0e53-4287-b095-42a036f52263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-unplugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.514 248514 DEBUG oslo_concurrency.lockutils [req-2b818b83-5e26-4cd8-afbb-01b9e692a656 req-47fe7c4a-0e53-4287-b095-42a036f52263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.515 248514 DEBUG oslo_concurrency.lockutils [req-2b818b83-5e26-4cd8-afbb-01b9e692a656 req-47fe7c4a-0e53-4287-b095-42a036f52263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.515 248514 DEBUG oslo_concurrency.lockutils [req-2b818b83-5e26-4cd8-afbb-01b9e692a656 req-47fe7c4a-0e53-4287-b095-42a036f52263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.516 248514 DEBUG nova.compute.manager [req-2b818b83-5e26-4cd8-afbb-01b9e692a656 req-47fe7c4a-0e53-4287-b095-42a036f52263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-unplugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.516 248514 DEBUG nova.compute.manager [req-2b818b83-5e26-4cd8-afbb-01b9e692a656 req-47fe7c4a-0e53-4287-b095-42a036f52263 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-unplugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.522 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.523 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.526 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7bffc04a-309e-41df-98cc-57cce2339fe2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.542 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[936d57b9-33a5-4f5f-97d4-925acf0fc0e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.544 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1a3904-02ad-4b1d-afa3-d87450ef8b8e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3132: 321 pgs: 321 active+clean; 200 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 198 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Dec 13 04:05:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.561 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[27d9deb1-c690-4a0b-9bcc-eafc3785dbf4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 930291, 'reachable_time': 25674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385844, 'error': None, 'target': 'ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:30 np0005558241 systemd[1]: run-netns-ovnmeta\x2d084e5836\x2de0e4\x2d4328\x2d8e03\x2ddfcdcd227a7c.mount: Deactivated successfully.
Dec 13 04:05:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.564 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-084e5836-e0e4-4328-8e03-dfcdcd227a7c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:05:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:30.564 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b0474edf-6966-4477-9388-b9bbffa40463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.585 248514 INFO nova.virt.libvirt.driver [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Deleting instance files /var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065_del#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.586 248514 INFO nova.virt.libvirt.driver [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Deletion of /var/lib/nova/instances/17c3814d-c11f-4032-a891-4cbdf3f7c065_del complete#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.717 248514 INFO nova.compute.manager [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.718 248514 DEBUG oslo.service.loopingcall [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.719 248514 DEBUG nova.compute.manager [-] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.720 248514 DEBUG nova.network.neutron [-] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.955 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.956 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.956 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:05:30 np0005558241 nova_compute[248510]: 2025-12-13 09:05:30.956 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:05:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:05:31 np0005558241 nova_compute[248510]: 2025-12-13 09:05:31.337 248514 INFO nova.compute.manager [None req-02c63348-5dc6-4357-b15e-6e1a951214e9 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Get console output#033[00m
Dec 13 04:05:31 np0005558241 nova_compute[248510]: 2025-12-13 09:05:31.344 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 04:05:31 np0005558241 nova_compute[248510]: 2025-12-13 09:05:31.804 248514 DEBUG nova.network.neutron [-] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:05:31 np0005558241 nova_compute[248510]: 2025-12-13 09:05:31.828 248514 INFO nova.compute.manager [-] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Took 1.11 seconds to deallocate network for instance.#033[00m
Dec 13 04:05:31 np0005558241 nova_compute[248510]: 2025-12-13 09:05:31.890 248514 DEBUG oslo_concurrency.lockutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:31 np0005558241 nova_compute[248510]: 2025-12-13 09:05:31.891 248514 DEBUG oslo_concurrency.lockutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:31.999 248514 DEBUG oslo_concurrency.processutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:05:32 np0005558241 NetworkManager[50376]: <info>  [1765616732.0975] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/582)
Dec 13 04:05:32 np0005558241 NetworkManager[50376]: <info>  [1765616732.0985] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/583)
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.113 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.251 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:32 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:32Z|01416|binding|INFO|Releasing lport 648ca59e-493f-4103-aa42-8d4449fe6f9d from this chassis (sb_readonly=0)
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.260 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.417 248514 DEBUG nova.network.neutron [req-18f35139-d86c-47cc-82da-c8609cb15bbc req-06814d84-3234-4700-bcc3-8212eabcc47d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updated VIF entry in instance network info cache for port 21744784-e35a-46d5-ac8c-8f9783a0a387. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.418 248514 DEBUG nova.network.neutron [req-18f35139-d86c-47cc-82da-c8609cb15bbc req-06814d84-3234-4700-bcc3-8212eabcc47d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updating instance_info_cache with network_info: [{"id": "21744784-e35a-46d5-ac8c-8f9783a0a387", "address": "fa:16:3e:0c:86:07", "network": {"id": "084e5836-e0e4-4328-8e03-dfcdcd227a7c", "bridge": "br-int", "label": "tempest-network-smoke--2095345748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "77d40177a4804671aa9c5da343bc2ed4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21744784-e3", "ovs_interfaceid": "21744784-e35a-46d5-ac8c-8f9783a0a387", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.460 248514 DEBUG oslo_concurrency.lockutils [req-18f35139-d86c-47cc-82da-c8609cb15bbc req-06814d84-3234-4700-bcc3-8212eabcc47d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-17c3814d-c11f-4032-a891-4cbdf3f7c065" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:05:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:05:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4155451256' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:05:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3133: 321 pgs: 321 active+clean; 200 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 198 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.581 248514 DEBUG oslo_concurrency.processutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.591 248514 DEBUG nova.compute.provider_tree [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.619 248514 DEBUG nova.scheduler.client.report [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.631 248514 INFO nova.compute.manager [None req-2134f62a-d5d5-45b5-9211-4f404b91cee5 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Get console output#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.638 310683 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.647 248514 DEBUG nova.compute.manager [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.648 248514 DEBUG oslo_concurrency.lockutils [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.648 248514 DEBUG oslo_concurrency.lockutils [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.649 248514 DEBUG oslo_concurrency.lockutils [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.649 248514 DEBUG nova.compute.manager [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] No waiting events found dispatching network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.649 248514 WARNING nova.compute.manager [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received unexpected event network-vif-plugged-21744784-e35a-46d5-ac8c-8f9783a0a387 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.649 248514 DEBUG nova.compute.manager [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Received event network-vif-deleted-21744784-e35a-46d5-ac8c-8f9783a0a387 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.650 248514 INFO nova.compute.manager [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Neutron deleted interface 21744784-e35a-46d5-ac8c-8f9783a0a387; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.650 248514 DEBUG nova.network.neutron [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.654 248514 DEBUG oslo_concurrency.lockutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.681 248514 INFO nova.scheduler.client.report [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Deleted allocations for instance 17c3814d-c11f-4032-a891-4cbdf3f7c065#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.688 248514 DEBUG nova.compute.manager [req-cdc8b6c2-fecf-4770-bf83-d2a6d498aa2d req-35540a8a-65fe-48be-8887-ddac430d6782 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Detach interface failed, port_id=21744784-e35a-46d5-ac8c-8f9783a0a387, reason: Instance 17c3814d-c11f-4032-a891-4cbdf3f7c065 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec 13 04:05:32 np0005558241 nova_compute[248510]: 2025-12-13 09:05:32.775 248514 DEBUG oslo_concurrency.lockutils [None req-d2f46bdd-f41e-442d-961d-6fb4a43d7cd1 a5fd410579fa429ba0f7f680590cd86a 77d40177a4804671aa9c5da343bc2ed4 - - default default] Lock "17c3814d-c11f-4032-a891-4cbdf3f7c065" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:33 np0005558241 nova_compute[248510]: 2025-12-13 09:05:33.190 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.357 248514 DEBUG nova.compute.manager [req-79184bbd-7ad3-4912-920d-02acd5046711 req-408f5c3b-9950-4896-8af2-ed569e3bab28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received event network-changed-f16b53b1-eab6-4458-976e-5cb112c77ef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.358 248514 DEBUG nova.compute.manager [req-79184bbd-7ad3-4912-920d-02acd5046711 req-408f5c3b-9950-4896-8af2-ed569e3bab28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Refreshing instance network info cache due to event network-changed-f16b53b1-eab6-4458-976e-5cb112c77ef8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.359 248514 DEBUG oslo_concurrency.lockutils [req-79184bbd-7ad3-4912-920d-02acd5046711 req-408f5c3b-9950-4896-8af2-ed569e3bab28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.359 248514 DEBUG oslo_concurrency.lockutils [req-79184bbd-7ad3-4912-920d-02acd5046711 req-408f5c3b-9950-4896-8af2-ed569e3bab28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.360 248514 DEBUG nova.network.neutron [req-79184bbd-7ad3-4912-920d-02acd5046711 req-408f5c3b-9950-4896-8af2-ed569e3bab28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Refreshing network info cache for port f16b53b1-eab6-4458-976e-5cb112c77ef8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.428 248514 DEBUG oslo_concurrency.lockutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "bb154aa9-7029-4193-8d00-38e8e552a382" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.429 248514 DEBUG oslo_concurrency.lockutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.429 248514 DEBUG oslo_concurrency.lockutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.430 248514 DEBUG oslo_concurrency.lockutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.431 248514 DEBUG oslo_concurrency.lockutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.433 248514 INFO nova.compute.manager [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Terminating instance#033[00m
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.435 248514 DEBUG nova.compute.manager [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:05:34 np0005558241 kernel: tapf16b53b1-ea (unregistering): left promiscuous mode
Dec 13 04:05:34 np0005558241 NetworkManager[50376]: <info>  [1765616734.5057] device (tapf16b53b1-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.514 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:34Z|01417|binding|INFO|Releasing lport f16b53b1-eab6-4458-976e-5cb112c77ef8 from this chassis (sb_readonly=0)
Dec 13 04:05:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:34Z|01418|binding|INFO|Setting lport f16b53b1-eab6-4458-976e-5cb112c77ef8 down in Southbound
Dec 13 04:05:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:05:34Z|01419|binding|INFO|Removing iface tapf16b53b1-ea ovn-installed in OVS
Dec 13 04:05:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.523 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:ba:c6 10.100.0.7'], port_security=['fa:16:3e:68:ba:c6 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bb154aa9-7029-4193-8d00-38e8e552a382', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-065ccb44-e305-43f2-ab7a-728a65062da2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd062d46c41254f178699723648bac64d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20aff6be-cf34-4af0-9288-26483b58fb2e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a94dc2d-ac12-4530-9899-bb1ddf6ed47b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=f16b53b1-eab6-4458-976e-5cb112c77ef8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:05:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.525 158419 INFO neutron.agent.ovn.metadata.agent [-] Port f16b53b1-eab6-4458-976e-5cb112c77ef8 in datapath 065ccb44-e305-43f2-ab7a-728a65062da2 unbound from our chassis
Dec 13 04:05:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.528 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 065ccb44-e305-43f2-ab7a-728a65062da2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:05:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.529 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7537cc5a-1b1c-4b79-8b42-0f2eca4f7666]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:05:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.529 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2 namespace which is not needed anymore
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.538 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:05:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3134: 321 pgs: 321 active+clean; 121 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 217 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Dec 13 04:05:34 np0005558241 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000088.scope: Deactivated successfully.
Dec 13 04:05:34 np0005558241 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000088.scope: Consumed 13.514s CPU time.
Dec 13 04:05:34 np0005558241 systemd-machined[210538]: Machine qemu-166-instance-00000088 terminated.
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.667 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.677 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.686 248514 INFO nova.virt.libvirt.driver [-] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Instance destroyed successfully.
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.687 248514 DEBUG nova.objects.instance [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lazy-loading 'resources' on Instance uuid bb154aa9-7029-4193-8d00-38e8e552a382 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.711 248514 DEBUG nova.virt.libvirt.vif [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:04:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-908300967',display_name='tempest-TestNetworkBasicOps-server-908300967',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-908300967',id=136,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFBmqAnv0oD+weSJxDWEwIjgt5HjSBoUiVoV5CSnaOj2JsKkOoULuMXZQ1EHyUAr0FnFFBRSfsXFNYjXLHRq9tDCS1GQZDyMWxvCAc8xSVkqfYg8qd+Wbjc8cJKV8CheTw==',key_name='tempest-TestNetworkBasicOps-403784450',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:05:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d062d46c41254f178699723648bac64d',ramdisk_id='',reservation_id='r-8elo0a7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1693340069',owner_user_name='tempest-TestNetworkBasicOps-1693340069-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:05:10Z,user_data=None,user_id='0ffebdfc45534ad4b00f1b6b71f95318',uuid=bb154aa9-7029-4193-8d00-38e8e552a382,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.712 248514 DEBUG nova.network.os_vif_util [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converting VIF {"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.713 248514 DEBUG nova.network.os_vif_util [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:68:ba:c6,bridge_name='br-int',has_traffic_filtering=True,id=f16b53b1-eab6-4458-976e-5cb112c77ef8,network=Network(065ccb44-e305-43f2-ab7a-728a65062da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf16b53b1-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.714 248514 DEBUG os_vif [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:ba:c6,bridge_name='br-int',has_traffic_filtering=True,id=f16b53b1-eab6-4458-976e-5cb112c77ef8,network=Network(065ccb44-e305-43f2-ab7a-728a65062da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf16b53b1-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.717 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:05:34 np0005558241 neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2[385351]: [NOTICE]   (385373) : haproxy version is 2.8.14-c23fe91
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.718 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf16b53b1-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:05:34 np0005558241 neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2[385351]: [NOTICE]   (385373) : path to executable is /usr/sbin/haproxy
Dec 13 04:05:34 np0005558241 neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2[385351]: [WARNING]  (385373) : Exiting Master process...
Dec 13 04:05:34 np0005558241 neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2[385351]: [ALERT]    (385373) : Current worker (385389) exited with code 143 (Terminated)
Dec 13 04:05:34 np0005558241 neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2[385351]: [WARNING]  (385373) : All workers exited. Exiting... (0)
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.723 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:05:34 np0005558241 systemd[1]: libpod-d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec.scope: Deactivated successfully.
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.729 248514 INFO os_vif [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:ba:c6,bridge_name='br-int',has_traffic_filtering=True,id=f16b53b1-eab6-4458-976e-5cb112c77ef8,network=Network(065ccb44-e305-43f2-ab7a-728a65062da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf16b53b1-ea')
Dec 13 04:05:34 np0005558241 podman[385892]: 2025-12-13 09:05:34.730811546 +0000 UTC m=+0.059166963 container died d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:05:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec-userdata-shm.mount: Deactivated successfully.
Dec 13 04:05:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-aa95f491df7395f5ed1e1bdb4690670b794dc054243ab1a1438a6d32051c9c18-merged.mount: Deactivated successfully.
Dec 13 04:05:34 np0005558241 podman[385892]: 2025-12-13 09:05:34.773935676 +0000 UTC m=+0.102291073 container cleanup d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 04:05:34 np0005558241 systemd[1]: libpod-conmon-d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec.scope: Deactivated successfully.
Dec 13 04:05:34 np0005558241 podman[385948]: 2025-12-13 09:05:34.842228907 +0000 UTC m=+0.040401173 container remove d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:05:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.852 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[595a132d-9eae-462e-ac19-5a480f938b04]: (4, ('Sat Dec 13 09:05:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2 (d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec)\nd33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec\nSat Dec 13 09:05:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2 (d33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec)\nd33b0fdf10aaf529c8c49de10a2dfde4570a5da9fdfa769f6fdccc33546b32ec\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:05:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.854 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[76b6dbaa-025b-4261-bd92-10d773188767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:05:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.856 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap065ccb44-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.858 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:05:34 np0005558241 kernel: tap065ccb44-e0: left promiscuous mode
Dec 13 04:05:34 np0005558241 nova_compute[248510]: 2025-12-13 09:05:34.884 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:05:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.888 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0e282bbf-2159-4c68-9c0b-6d49c3196c86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:05:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.913 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e49c647a-36c4-4e69-a992-7531210f6ca1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:05:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.916 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[80c3f9cc-c785-4db0-b763-8c1d845d6096]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:05:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.944 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[73498bb5-3342-495a-947f-eb6b9be01d0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 928669, 'reachable_time': 15688, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385966, 'error': None, 'target': 'ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:05:34 np0005558241 systemd[1]: run-netns-ovnmeta\x2d065ccb44\x2de305\x2d43f2\x2dab7a\x2d728a65062da2.mount: Deactivated successfully.
Dec 13 04:05:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.949 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-065ccb44-e305-43f2-ab7a-728a65062da2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:05:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:34.950 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[758e6551-aaa3-485d-830a-db93cdd39361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:05:35 np0005558241 nova_compute[248510]: 2025-12-13 09:05:35.029 248514 INFO nova.virt.libvirt.driver [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Deleting instance files /var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382_del
Dec 13 04:05:35 np0005558241 nova_compute[248510]: 2025-12-13 09:05:35.030 248514 INFO nova.virt.libvirt.driver [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Deletion of /var/lib/nova/instances/bb154aa9-7029-4193-8d00-38e8e552a382_del complete
Dec 13 04:05:35 np0005558241 nova_compute[248510]: 2025-12-13 09:05:35.111 248514 DEBUG nova.compute.manager [req-153bf626-b10e-408d-b1dd-0901bc332f40 req-5aeb937e-6cd2-4b09-a5a5-6cc9471b83be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received event network-vif-unplugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:05:35 np0005558241 nova_compute[248510]: 2025-12-13 09:05:35.112 248514 DEBUG oslo_concurrency.lockutils [req-153bf626-b10e-408d-b1dd-0901bc332f40 req-5aeb937e-6cd2-4b09-a5a5-6cc9471b83be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:05:35 np0005558241 nova_compute[248510]: 2025-12-13 09:05:35.113 248514 DEBUG oslo_concurrency.lockutils [req-153bf626-b10e-408d-b1dd-0901bc332f40 req-5aeb937e-6cd2-4b09-a5a5-6cc9471b83be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:05:35 np0005558241 nova_compute[248510]: 2025-12-13 09:05:35.113 248514 DEBUG oslo_concurrency.lockutils [req-153bf626-b10e-408d-b1dd-0901bc332f40 req-5aeb937e-6cd2-4b09-a5a5-6cc9471b83be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:05:35 np0005558241 nova_compute[248510]: 2025-12-13 09:05:35.114 248514 DEBUG nova.compute.manager [req-153bf626-b10e-408d-b1dd-0901bc332f40 req-5aeb937e-6cd2-4b09-a5a5-6cc9471b83be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] No waiting events found dispatching network-vif-unplugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:05:35 np0005558241 nova_compute[248510]: 2025-12-13 09:05:35.114 248514 DEBUG nova.compute.manager [req-153bf626-b10e-408d-b1dd-0901bc332f40 req-5aeb937e-6cd2-4b09-a5a5-6cc9471b83be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received event network-vif-unplugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:05:35 np0005558241 nova_compute[248510]: 2025-12-13 09:05:35.120 248514 INFO nova.compute.manager [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Took 0.69 seconds to destroy the instance on the hypervisor.
Dec 13 04:05:35 np0005558241 nova_compute[248510]: 2025-12-13 09:05:35.121 248514 DEBUG oslo.service.loopingcall [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:05:35 np0005558241 nova_compute[248510]: 2025-12-13 09:05:35.122 248514 DEBUG nova.compute.manager [-] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:05:35 np0005558241 nova_compute[248510]: 2025-12-13 09:05:35.122 248514 DEBUG nova.network.neutron [-] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:05:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:05:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3135: 321 pgs: 321 active+clean; 121 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 16 KiB/s wr, 32 op/s
Dec 13 04:05:37 np0005558241 nova_compute[248510]: 2025-12-13 09:05:37.267 248514 DEBUG nova.compute.manager [req-e7a81b1e-9c45-4fff-8356-c5a88abf3bb7 req-f5db8f50-c845-499a-a1d4-2cd5834eeb37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received event network-vif-plugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:05:37 np0005558241 nova_compute[248510]: 2025-12-13 09:05:37.267 248514 DEBUG oslo_concurrency.lockutils [req-e7a81b1e-9c45-4fff-8356-c5a88abf3bb7 req-f5db8f50-c845-499a-a1d4-2cd5834eeb37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:05:37 np0005558241 nova_compute[248510]: 2025-12-13 09:05:37.268 248514 DEBUG oslo_concurrency.lockutils [req-e7a81b1e-9c45-4fff-8356-c5a88abf3bb7 req-f5db8f50-c845-499a-a1d4-2cd5834eeb37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:05:37 np0005558241 nova_compute[248510]: 2025-12-13 09:05:37.268 248514 DEBUG oslo_concurrency.lockutils [req-e7a81b1e-9c45-4fff-8356-c5a88abf3bb7 req-f5db8f50-c845-499a-a1d4-2cd5834eeb37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:05:37 np0005558241 nova_compute[248510]: 2025-12-13 09:05:37.268 248514 DEBUG nova.compute.manager [req-e7a81b1e-9c45-4fff-8356-c5a88abf3bb7 req-f5db8f50-c845-499a-a1d4-2cd5834eeb37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] No waiting events found dispatching network-vif-plugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:05:37 np0005558241 nova_compute[248510]: 2025-12-13 09:05:37.268 248514 WARNING nova.compute.manager [req-e7a81b1e-9c45-4fff-8356-c5a88abf3bb7 req-f5db8f50-c845-499a-a1d4-2cd5834eeb37 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received unexpected event network-vif-plugged-f16b53b1-eab6-4458-976e-5cb112c77ef8 for instance with vm_state active and task_state deleting.
Dec 13 04:05:37 np0005558241 nova_compute[248510]: 2025-12-13 09:05:37.623 248514 DEBUG nova.network.neutron [req-79184bbd-7ad3-4912-920d-02acd5046711 req-408f5c3b-9950-4896-8af2-ed569e3bab28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Updated VIF entry in instance network info cache for port f16b53b1-eab6-4458-976e-5cb112c77ef8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:05:37 np0005558241 nova_compute[248510]: 2025-12-13 09:05:37.623 248514 DEBUG nova.network.neutron [req-79184bbd-7ad3-4912-920d-02acd5046711 req-408f5c3b-9950-4896-8af2-ed569e3bab28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Updating instance_info_cache with network_info: [{"id": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "address": "fa:16:3e:68:ba:c6", "network": {"id": "065ccb44-e305-43f2-ab7a-728a65062da2", "bridge": "br-int", "label": "tempest-network-smoke--649019991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d062d46c41254f178699723648bac64d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf16b53b1-ea", "ovs_interfaceid": "f16b53b1-eab6-4458-976e-5cb112c77ef8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:05:37 np0005558241 nova_compute[248510]: 2025-12-13 09:05:37.661 248514 DEBUG oslo_concurrency.lockutils [req-79184bbd-7ad3-4912-920d-02acd5046711 req-408f5c3b-9950-4896-8af2-ed569e3bab28 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-bb154aa9-7029-4193-8d00-38e8e552a382" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:05:37 np0005558241 nova_compute[248510]: 2025-12-13 09:05:37.662 248514 DEBUG nova.network.neutron [-] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:05:37 np0005558241 nova_compute[248510]: 2025-12-13 09:05:37.695 248514 INFO nova.compute.manager [-] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Took 2.57 seconds to deallocate network for instance.#033[00m
Dec 13 04:05:37 np0005558241 nova_compute[248510]: 2025-12-13 09:05:37.773 248514 DEBUG oslo_concurrency.lockutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:37 np0005558241 nova_compute[248510]: 2025-12-13 09:05:37.774 248514 DEBUG oslo_concurrency.lockutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:37 np0005558241 nova_compute[248510]: 2025-12-13 09:05:37.860 248514 DEBUG oslo_concurrency.processutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:05:38 np0005558241 nova_compute[248510]: 2025-12-13 09:05:38.193 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:05:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:05:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:05:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:05:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:05:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:05:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:05:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:05:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:05:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:05:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:05:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:05:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:05:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2157535386' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:05:38 np0005558241 nova_compute[248510]: 2025-12-13 09:05:38.481 248514 DEBUG oslo_concurrency.processutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:05:38 np0005558241 nova_compute[248510]: 2025-12-13 09:05:38.488 248514 DEBUG nova.compute.provider_tree [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:05:38 np0005558241 nova_compute[248510]: 2025-12-13 09:05:38.512 248514 DEBUG nova.scheduler.client.report [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:05:38 np0005558241 nova_compute[248510]: 2025-12-13 09:05:38.554 248514 DEBUG oslo_concurrency.lockutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3136: 321 pgs: 321 active+clean; 96 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 17 KiB/s wr, 46 op/s
Dec 13 04:05:38 np0005558241 nova_compute[248510]: 2025-12-13 09:05:38.594 248514 INFO nova.scheduler.client.report [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Deleted allocations for instance bb154aa9-7029-4193-8d00-38e8e552a382#033[00m
Dec 13 04:05:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:05:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:05:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:05:38 np0005558241 nova_compute[248510]: 2025-12-13 09:05:38.665 248514 DEBUG oslo_concurrency.lockutils [None req-dba2d858-634d-4802-bdd7-eef19fb96a92 0ffebdfc45534ad4b00f1b6b71f95318 d062d46c41254f178699723648bac64d - - default default] Lock "bb154aa9-7029-4193-8d00-38e8e552a382" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:38 np0005558241 podman[386133]: 2025-12-13 09:05:38.732626757 +0000 UTC m=+0.051080380 container create b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_sanderson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec 13 04:05:38 np0005558241 systemd[1]: Started libpod-conmon-b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c.scope.
Dec 13 04:05:38 np0005558241 podman[386133]: 2025-12-13 09:05:38.714347729 +0000 UTC m=+0.032801382 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:05:38 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:05:38 np0005558241 podman[386133]: 2025-12-13 09:05:38.828856898 +0000 UTC m=+0.147310561 container init b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:05:38 np0005558241 podman[386133]: 2025-12-13 09:05:38.836157461 +0000 UTC m=+0.154611084 container start b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:05:38 np0005558241 podman[386133]: 2025-12-13 09:05:38.839577597 +0000 UTC m=+0.158031250 container attach b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_sanderson, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 04:05:38 np0005558241 nervous_sanderson[386149]: 167 167
Dec 13 04:05:38 np0005558241 systemd[1]: libpod-b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c.scope: Deactivated successfully.
Dec 13 04:05:38 np0005558241 conmon[386149]: conmon b684faf8edd3bc3f9357 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c.scope/container/memory.events
Dec 13 04:05:38 np0005558241 podman[386133]: 2025-12-13 09:05:38.845767742 +0000 UTC m=+0.164221385 container died b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec 13 04:05:38 np0005558241 nova_compute[248510]: 2025-12-13 09:05:38.857 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:38 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3f18622e545ddf9e436cff2f3af6da1ab78b413af5e1fa0edab60b3a2a0893bc-merged.mount: Deactivated successfully.
Dec 13 04:05:38 np0005558241 podman[386133]: 2025-12-13 09:05:38.90273942 +0000 UTC m=+0.221193053 container remove b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:05:38 np0005558241 systemd[1]: libpod-conmon-b684faf8edd3bc3f93577a36e20303f219abf41a742e0c909cd59cb64651e00c.scope: Deactivated successfully.
Dec 13 04:05:38 np0005558241 nova_compute[248510]: 2025-12-13 09:05:38.994 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:39 np0005558241 podman[386174]: 2025-12-13 09:05:39.101736905 +0000 UTC m=+0.053025389 container create e18b5832ff5c8fc70c3db371c967ce2650bba722e89235b7fffc37610ce2850a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kalam, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 04:05:39 np0005558241 systemd[1]: Started libpod-conmon-e18b5832ff5c8fc70c3db371c967ce2650bba722e89235b7fffc37610ce2850a.scope.
Dec 13 04:05:39 np0005558241 podman[386174]: 2025-12-13 09:05:39.073358624 +0000 UTC m=+0.024647158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:05:39 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:05:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39c7ddc192f5fdb2b83494b4eec958e89301aa46bc88ecc9282ee43e6f5c358b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39c7ddc192f5fdb2b83494b4eec958e89301aa46bc88ecc9282ee43e6f5c358b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39c7ddc192f5fdb2b83494b4eec958e89301aa46bc88ecc9282ee43e6f5c358b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39c7ddc192f5fdb2b83494b4eec958e89301aa46bc88ecc9282ee43e6f5c358b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39c7ddc192f5fdb2b83494b4eec958e89301aa46bc88ecc9282ee43e6f5c358b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:39 np0005558241 podman[386174]: 2025-12-13 09:05:39.215044584 +0000 UTC m=+0.166333098 container init e18b5832ff5c8fc70c3db371c967ce2650bba722e89235b7fffc37610ce2850a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kalam, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Dec 13 04:05:39 np0005558241 podman[386174]: 2025-12-13 09:05:39.23045468 +0000 UTC m=+0.181743154 container start e18b5832ff5c8fc70c3db371c967ce2650bba722e89235b7fffc37610ce2850a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 04:05:39 np0005558241 podman[386174]: 2025-12-13 09:05:39.234905102 +0000 UTC m=+0.186193586 container attach e18b5832ff5c8fc70c3db371c967ce2650bba722e89235b7fffc37610ce2850a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kalam, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:05:39 np0005558241 nova_compute[248510]: 2025-12-13 09:05:39.372 248514 DEBUG nova.compute.manager [req-e3195a83-23a1-4003-97e5-4ad4df0ed27f req-9e8fbb6f-78a2-4fc4-af7e-d439f8f9fdea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Received event network-vif-deleted-f16b53b1-eab6-4458-976e-5cb112c77ef8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:05:39 np0005558241 nova_compute[248510]: 2025-12-13 09:05:39.721 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:39 np0005558241 pensive_kalam[386191]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:05:39 np0005558241 pensive_kalam[386191]: --> All data devices are unavailable
Dec 13 04:05:39 np0005558241 systemd[1]: libpod-e18b5832ff5c8fc70c3db371c967ce2650bba722e89235b7fffc37610ce2850a.scope: Deactivated successfully.
Dec 13 04:05:39 np0005558241 podman[386174]: 2025-12-13 09:05:39.871609544 +0000 UTC m=+0.822898028 container died e18b5832ff5c8fc70c3db371c967ce2650bba722e89235b7fffc37610ce2850a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 04:05:39 np0005558241 systemd[1]: var-lib-containers-storage-overlay-39c7ddc192f5fdb2b83494b4eec958e89301aa46bc88ecc9282ee43e6f5c358b-merged.mount: Deactivated successfully.
Dec 13 04:05:39 np0005558241 podman[386174]: 2025-12-13 09:05:39.919783561 +0000 UTC m=+0.871071995 container remove e18b5832ff5c8fc70c3db371c967ce2650bba722e89235b7fffc37610ce2850a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kalam, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:05:39 np0005558241 systemd[1]: libpod-conmon-e18b5832ff5c8fc70c3db371c967ce2650bba722e89235b7fffc37610ce2850a.scope: Deactivated successfully.
Dec 13 04:05:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:05:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:05:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:05:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:05:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:05:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:05:40 np0005558241 podman[386285]: 2025-12-13 09:05:40.429586374 +0000 UTC m=+0.048172878 container create 34dac6f569aefaecd3c2d102aed8e585ce99ce85b9d586a30935c955c0de8528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_mirzakhani, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:05:40 np0005558241 systemd[1]: Started libpod-conmon-34dac6f569aefaecd3c2d102aed8e585ce99ce85b9d586a30935c955c0de8528.scope.
Dec 13 04:05:40 np0005558241 podman[386285]: 2025-12-13 09:05:40.405610703 +0000 UTC m=+0.024197227 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:05:40 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:05:40 np0005558241 podman[386285]: 2025-12-13 09:05:40.52041862 +0000 UTC m=+0.139005114 container init 34dac6f569aefaecd3c2d102aed8e585ce99ce85b9d586a30935c955c0de8528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_mirzakhani, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:05:40 np0005558241 podman[386285]: 2025-12-13 09:05:40.530726188 +0000 UTC m=+0.149312662 container start 34dac6f569aefaecd3c2d102aed8e585ce99ce85b9d586a30935c955c0de8528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 04:05:40 np0005558241 podman[386285]: 2025-12-13 09:05:40.534681597 +0000 UTC m=+0.153268121 container attach 34dac6f569aefaecd3c2d102aed8e585ce99ce85b9d586a30935c955c0de8528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 04:05:40 np0005558241 sleepy_mirzakhani[386301]: 167 167
Dec 13 04:05:40 np0005558241 systemd[1]: libpod-34dac6f569aefaecd3c2d102aed8e585ce99ce85b9d586a30935c955c0de8528.scope: Deactivated successfully.
Dec 13 04:05:40 np0005558241 podman[386285]: 2025-12-13 09:05:40.539445346 +0000 UTC m=+0.158031830 container died 34dac6f569aefaecd3c2d102aed8e585ce99ce85b9d586a30935c955c0de8528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_mirzakhani, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 04:05:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3137: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Dec 13 04:05:40 np0005558241 systemd[1]: var-lib-containers-storage-overlay-219308bc69710a4998d304356995be37ac9bdf4f47761e2d3b61aaffdd59d358-merged.mount: Deactivated successfully.
Dec 13 04:05:40 np0005558241 podman[386285]: 2025-12-13 09:05:40.582965537 +0000 UTC m=+0.201552011 container remove 34dac6f569aefaecd3c2d102aed8e585ce99ce85b9d586a30935c955c0de8528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_mirzakhani, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 04:05:40 np0005558241 systemd[1]: libpod-conmon-34dac6f569aefaecd3c2d102aed8e585ce99ce85b9d586a30935c955c0de8528.scope: Deactivated successfully.
Dec 13 04:05:40 np0005558241 nova_compute[248510]: 2025-12-13 09:05:40.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:05:40 np0005558241 podman[386325]: 2025-12-13 09:05:40.813667687 +0000 UTC m=+0.061964613 container create 2e8ab18c24124f7b07dd8d1ebd0a832751cbfa5dedd6208eb48feef48f34a5f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:05:40 np0005558241 systemd[1]: Started libpod-conmon-2e8ab18c24124f7b07dd8d1ebd0a832751cbfa5dedd6208eb48feef48f34a5f1.scope.
Dec 13 04:05:40 np0005558241 podman[386325]: 2025-12-13 09:05:40.788764603 +0000 UTC m=+0.037061529 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:05:40 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:05:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2f0c4c656bd9697c7ad013a10342b3999c90e80a3a7cfe75afffd3c420083c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2f0c4c656bd9697c7ad013a10342b3999c90e80a3a7cfe75afffd3c420083c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2f0c4c656bd9697c7ad013a10342b3999c90e80a3a7cfe75afffd3c420083c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:40 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2f0c4c656bd9697c7ad013a10342b3999c90e80a3a7cfe75afffd3c420083c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:40 np0005558241 podman[386325]: 2025-12-13 09:05:40.928409722 +0000 UTC m=+0.176706648 container init 2e8ab18c24124f7b07dd8d1ebd0a832751cbfa5dedd6208eb48feef48f34a5f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:05:40 np0005558241 podman[386325]: 2025-12-13 09:05:40.93673543 +0000 UTC m=+0.185032336 container start 2e8ab18c24124f7b07dd8d1ebd0a832751cbfa5dedd6208eb48feef48f34a5f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:05:40 np0005558241 podman[386325]: 2025-12-13 09:05:40.939729055 +0000 UTC m=+0.188025981 container attach 2e8ab18c24124f7b07dd8d1ebd0a832751cbfa5dedd6208eb48feef48f34a5f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:05:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]: {
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:    "0": [
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:        {
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "devices": [
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "/dev/loop3"
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            ],
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "lv_name": "ceph_lv0",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "lv_size": "21470642176",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "name": "ceph_lv0",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "tags": {
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.cluster_name": "ceph",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.crush_device_class": "",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.encrypted": "0",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.objectstore": "bluestore",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.osd_id": "0",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.type": "block",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.vdo": "0",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.with_tpm": "0"
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            },
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "type": "block",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "vg_name": "ceph_vg0"
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:        }
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:    ],
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:    "1": [
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:        {
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "devices": [
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "/dev/loop4"
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            ],
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "lv_name": "ceph_lv1",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "lv_size": "21470642176",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "name": "ceph_lv1",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "tags": {
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.cluster_name": "ceph",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.crush_device_class": "",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.encrypted": "0",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.objectstore": "bluestore",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.osd_id": "1",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.type": "block",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.vdo": "0",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.with_tpm": "0"
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            },
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "type": "block",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "vg_name": "ceph_vg1"
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:        }
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:    ],
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:    "2": [
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:        {
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "devices": [
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "/dev/loop5"
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            ],
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "lv_name": "ceph_lv2",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "lv_size": "21470642176",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "name": "ceph_lv2",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "tags": {
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.cluster_name": "ceph",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.crush_device_class": "",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.encrypted": "0",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.objectstore": "bluestore",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.osd_id": "2",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.type": "block",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.vdo": "0",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:                "ceph.with_tpm": "0"
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            },
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "type": "block",
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:            "vg_name": "ceph_vg2"
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:        }
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]:    ]
Dec 13 04:05:41 np0005558241 vigilant_cartwright[386342]: }
Dec 13 04:05:41 np0005558241 systemd[1]: libpod-2e8ab18c24124f7b07dd8d1ebd0a832751cbfa5dedd6208eb48feef48f34a5f1.scope: Deactivated successfully.
Dec 13 04:05:41 np0005558241 podman[386325]: 2025-12-13 09:05:41.298266998 +0000 UTC m=+0.546563964 container died 2e8ab18c24124f7b07dd8d1ebd0a832751cbfa5dedd6208eb48feef48f34a5f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 04:05:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c2f0c4c656bd9697c7ad013a10342b3999c90e80a3a7cfe75afffd3c420083c5-merged.mount: Deactivated successfully.
Dec 13 04:05:41 np0005558241 podman[386325]: 2025-12-13 09:05:41.362911668 +0000 UTC m=+0.611208614 container remove 2e8ab18c24124f7b07dd8d1ebd0a832751cbfa5dedd6208eb48feef48f34a5f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 04:05:41 np0005558241 systemd[1]: libpod-conmon-2e8ab18c24124f7b07dd8d1ebd0a832751cbfa5dedd6208eb48feef48f34a5f1.scope: Deactivated successfully.
Dec 13 04:05:41 np0005558241 podman[386425]: 2025-12-13 09:05:41.937452963 +0000 UTC m=+0.049962333 container create 5d986e7596a5a78ddd09821554fbae5bb3d09e32907332f47001a4f15acd9066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_galois, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 04:05:41 np0005558241 systemd[1]: Started libpod-conmon-5d986e7596a5a78ddd09821554fbae5bb3d09e32907332f47001a4f15acd9066.scope.
Dec 13 04:05:42 np0005558241 podman[386425]: 2025-12-13 09:05:41.916945809 +0000 UTC m=+0.029455209 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:05:42 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:05:42 np0005558241 podman[386425]: 2025-12-13 09:05:42.049774076 +0000 UTC m=+0.162283546 container init 5d986e7596a5a78ddd09821554fbae5bb3d09e32907332f47001a4f15acd9066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_galois, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:05:42 np0005558241 podman[386425]: 2025-12-13 09:05:42.058494445 +0000 UTC m=+0.171003815 container start 5d986e7596a5a78ddd09821554fbae5bb3d09e32907332f47001a4f15acd9066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_galois, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 04:05:42 np0005558241 podman[386425]: 2025-12-13 09:05:42.061333866 +0000 UTC m=+0.173843236 container attach 5d986e7596a5a78ddd09821554fbae5bb3d09e32907332f47001a4f15acd9066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_galois, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:05:42 np0005558241 amazing_galois[386442]: 167 167
Dec 13 04:05:42 np0005558241 systemd[1]: libpod-5d986e7596a5a78ddd09821554fbae5bb3d09e32907332f47001a4f15acd9066.scope: Deactivated successfully.
Dec 13 04:05:42 np0005558241 podman[386425]: 2025-12-13 09:05:42.066749691 +0000 UTC m=+0.179259061 container died 5d986e7596a5a78ddd09821554fbae5bb3d09e32907332f47001a4f15acd9066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_galois, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:05:42 np0005558241 systemd[1]: var-lib-containers-storage-overlay-35105134f3f8e3628a339fda27c2b7ff28025c8563f35f65321e581e6b06d4f7-merged.mount: Deactivated successfully.
Dec 13 04:05:42 np0005558241 podman[386425]: 2025-12-13 09:05:42.10820911 +0000 UTC m=+0.220718520 container remove 5d986e7596a5a78ddd09821554fbae5bb3d09e32907332f47001a4f15acd9066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_galois, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:05:42 np0005558241 systemd[1]: libpod-conmon-5d986e7596a5a78ddd09821554fbae5bb3d09e32907332f47001a4f15acd9066.scope: Deactivated successfully.
Dec 13 04:05:42 np0005558241 podman[386466]: 2025-12-13 09:05:42.309389401 +0000 UTC m=+0.046840765 container create fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hypatia, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:05:42 np0005558241 systemd[1]: Started libpod-conmon-fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4.scope.
Dec 13 04:05:42 np0005558241 podman[386466]: 2025-12-13 09:05:42.290469857 +0000 UTC m=+0.027921241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:05:42 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:05:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505599508af53d7ebe2f8fc66a04835704930967290375167963e91e0be1c654/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505599508af53d7ebe2f8fc66a04835704930967290375167963e91e0be1c654/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505599508af53d7ebe2f8fc66a04835704930967290375167963e91e0be1c654/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/505599508af53d7ebe2f8fc66a04835704930967290375167963e91e0be1c654/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:42 np0005558241 podman[386466]: 2025-12-13 09:05:42.421032788 +0000 UTC m=+0.158484232 container init fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hypatia, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:05:42 np0005558241 podman[386466]: 2025-12-13 09:05:42.432888785 +0000 UTC m=+0.170340179 container start fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:05:42 np0005558241 podman[386466]: 2025-12-13 09:05:42.437307356 +0000 UTC m=+0.174758750 container attach fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hypatia, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:05:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3138: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 5.7 KiB/s wr, 55 op/s
Dec 13 04:05:43 np0005558241 nova_compute[248510]: 2025-12-13 09:05:43.195 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:43 np0005558241 lvm[386559]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:05:43 np0005558241 lvm[386559]: VG ceph_vg0 finished
Dec 13 04:05:43 np0005558241 lvm[386562]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:05:43 np0005558241 lvm[386562]: VG ceph_vg1 finished
Dec 13 04:05:43 np0005558241 lvm[386563]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:05:43 np0005558241 lvm[386563]: VG ceph_vg2 finished
Dec 13 04:05:43 np0005558241 fervent_hypatia[386482]: {}
Dec 13 04:05:43 np0005558241 systemd[1]: libpod-fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4.scope: Deactivated successfully.
Dec 13 04:05:43 np0005558241 systemd[1]: libpod-fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4.scope: Consumed 1.518s CPU time.
Dec 13 04:05:43 np0005558241 podman[386466]: 2025-12-13 09:05:43.320833242 +0000 UTC m=+1.058284616 container died fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hypatia, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:05:43 np0005558241 systemd[1]: var-lib-containers-storage-overlay-505599508af53d7ebe2f8fc66a04835704930967290375167963e91e0be1c654-merged.mount: Deactivated successfully.
Dec 13 04:05:43 np0005558241 podman[386466]: 2025-12-13 09:05:43.377783499 +0000 UTC m=+1.115234883 container remove fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hypatia, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:05:43 np0005558241 systemd[1]: libpod-conmon-fb2460ae90730ab9f8e1ed1d86db1b85076321e04834374b2c732f1a5d4110b4.scope: Deactivated successfully.
Dec 13 04:05:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:05:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:05:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:05:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:05:43 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:05:43 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:05:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3139: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 5.7 KiB/s wr, 55 op/s
Dec 13 04:05:44 np0005558241 nova_compute[248510]: 2025-12-13 09:05:44.723 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:44 np0005558241 podman[386606]: 2025-12-13 09:05:44.968987746 +0000 UTC m=+0.062635801 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 13 04:05:44 np0005558241 podman[386605]: 2025-12-13 09:05:44.979587921 +0000 UTC m=+0.073101392 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:05:45 np0005558241 podman[386604]: 2025-12-13 09:05:45.029175114 +0000 UTC m=+0.122483150 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 13 04:05:45 np0005558241 nova_compute[248510]: 2025-12-13 09:05:45.216 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616730.214843, 17c3814d-c11f-4032-a891-4cbdf3f7c065 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:05:45 np0005558241 nova_compute[248510]: 2025-12-13 09:05:45.216 248514 INFO nova.compute.manager [-] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:05:45 np0005558241 nova_compute[248510]: 2025-12-13 09:05:45.251 248514 DEBUG nova.compute.manager [None req-67200722-9277-4b38-84ea-b836ae35dadc - - - - - -] [instance: 17c3814d-c11f-4032-a891-4cbdf3f7c065] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:05:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:05:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3140: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 04:05:48 np0005558241 nova_compute[248510]: 2025-12-13 09:05:48.197 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3141: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 04:05:49 np0005558241 nova_compute[248510]: 2025-12-13 09:05:49.033 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:05:49 np0005558241 nova_compute[248510]: 2025-12-13 09:05:49.033 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 13 04:05:49 np0005558241 nova_compute[248510]: 2025-12-13 09:05:49.686 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616734.6855795, bb154aa9-7029-4193-8d00-38e8e552a382 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:05:49 np0005558241 nova_compute[248510]: 2025-12-13 09:05:49.687 248514 INFO nova.compute.manager [-] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:05:49 np0005558241 nova_compute[248510]: 2025-12-13 09:05:49.725 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:49 np0005558241 nova_compute[248510]: 2025-12-13 09:05:49.932 248514 DEBUG nova.compute.manager [None req-ce1c4621-9888-4dd9-a6fa-8039dc53993d - - - - - -] [instance: bb154aa9-7029-4193-8d00-38e8e552a382] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:05:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3142: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 341 B/s wr, 14 op/s
Dec 13 04:05:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:05:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3143: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:05:53 np0005558241 nova_compute[248510]: 2025-12-13 09:05:53.200 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3144: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:05:54 np0005558241 nova_compute[248510]: 2025-12-13 09:05:54.727 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:55.442 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:05:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:55.443 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:05:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:05:55.443 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:05:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:05:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3145: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:05:58 np0005558241 nova_compute[248510]: 2025-12-13 09:05:58.214 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:05:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3146: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:05:59 np0005558241 nova_compute[248510]: 2025-12-13 09:05:59.729 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3147: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:06:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:06:01 np0005558241 nova_compute[248510]: 2025-12-13 09:06:01.786 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:06:01 np0005558241 nova_compute[248510]: 2025-12-13 09:06:01.787 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 13 04:06:01 np0005558241 nova_compute[248510]: 2025-12-13 09:06:01.805 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 13 04:06:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3148: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:06:03 np0005558241 nova_compute[248510]: 2025-12-13 09:06:03.217 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3149: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:06:04 np0005558241 nova_compute[248510]: 2025-12-13 09:06:04.731 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:06:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3150: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:06:08 np0005558241 nova_compute[248510]: 2025-12-13 09:06:08.222 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3151: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:06:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:06:09
Dec 13 04:06:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:06:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:06:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', '.mgr', '.rgw.root', 'images']
Dec 13 04:06:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:06:09 np0005558241 nova_compute[248510]: 2025-12-13 09:06:09.733 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:09 np0005558241 nova_compute[248510]: 2025-12-13 09:06:09.791 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:06:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:06:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:06:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:06:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:06:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:06:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:06:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3152: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:06:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:06:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:06:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:11.887 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:27:74 10.100.0.2 2001:db8::f816:3eff:fe17:2774'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe17:2774/64', 'neutron:device_id': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4134c529-684a-4aee-a450-f026f71bff55', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e5b5848-41ef-422d-b206-10cd26f67092, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=966dba06-5969-4f23-922d-d3fb06b4a741) old=Port_Binding(mac=['fa:16:3e:17:27:74 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4134c529-684a-4aee-a450-f026f71bff55', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:06:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:11.889 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 966dba06-5969-4f23-922d-d3fb06b4a741 in datapath 4134c529-684a-4aee-a450-f026f71bff55 updated#033[00m
Dec 13 04:06:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:11.892 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4134c529-684a-4aee-a450-f026f71bff55, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:06:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:11.893 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[36146ca6-e610-4b8f-8e2d-65e3a9c01c42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:06:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3153: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:06:13 np0005558241 nova_compute[248510]: 2025-12-13 09:06:13.261 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3154: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:06:14 np0005558241 nova_compute[248510]: 2025-12-13 09:06:14.736 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:06:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3398534661' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:06:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:06:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3398534661' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:06:15 np0005558241 nova_compute[248510]: 2025-12-13 09:06:15.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:06:16 np0005558241 podman[386670]: 2025-12-13 09:06:16.027901482 +0000 UTC m=+0.105202067 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 04:06:16 np0005558241 podman[386671]: 2025-12-13 09:06:16.044372127 +0000 UTC m=+0.123060128 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 04:06:16 np0005558241 podman[386669]: 2025-12-13 09:06:16.096174994 +0000 UTC m=+0.173564231 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 04:06:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:06:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3155: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:06:18 np0005558241 nova_compute[248510]: 2025-12-13 09:06:18.295 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3156: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:06:18 np0005558241 nova_compute[248510]: 2025-12-13 09:06:18.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:06:18 np0005558241 nova_compute[248510]: 2025-12-13 09:06:18.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:06:18 np0005558241 nova_compute[248510]: 2025-12-13 09:06:18.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:06:18 np0005558241 nova_compute[248510]: 2025-12-13 09:06:18.795 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:06:18 np0005558241 nova_compute[248510]: 2025-12-13 09:06:18.796 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:06:19 np0005558241 nova_compute[248510]: 2025-12-13 09:06:19.738 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:20 np0005558241 nova_compute[248510]: 2025-12-13 09:06:20.036 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "e407f205-43a8-423e-a1cb-dc7f58ccced2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:06:20 np0005558241 nova_compute[248510]: 2025-12-13 09:06:20.037 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:06:20 np0005558241 nova_compute[248510]: 2025-12-13 09:06:20.061 248514 DEBUG nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:06:20 np0005558241 nova_compute[248510]: 2025-12-13 09:06:20.141 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:06:20 np0005558241 nova_compute[248510]: 2025-12-13 09:06:20.142 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:06:20 np0005558241 nova_compute[248510]: 2025-12-13 09:06:20.153 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:06:20 np0005558241 nova_compute[248510]: 2025-12-13 09:06:20.154 248514 INFO nova.compute.claims [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:06:20 np0005558241 nova_compute[248510]: 2025-12-13 09:06:20.276 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:06:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3157: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:06:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:06:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2373886190' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:06:20 np0005558241 nova_compute[248510]: 2025-12-13 09:06:20.880 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:06:20 np0005558241 nova_compute[248510]: 2025-12-13 09:06:20.886 248514 DEBUG nova.compute.provider_tree [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:06:20 np0005558241 nova_compute[248510]: 2025-12-13 09:06:20.920 248514 DEBUG nova.scheduler.client.report [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:06:20 np0005558241 nova_compute[248510]: 2025-12-13 09:06:20.961 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:06:20 np0005558241 nova_compute[248510]: 2025-12-13 09:06:20.962 248514 DEBUG nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.019 248514 DEBUG nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.019 248514 DEBUG nova.network.neutron [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.048 248514 INFO nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.074 248514 DEBUG nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:06:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.177 248514 DEBUG nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.179 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.180 248514 INFO nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Creating image(s)#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.216 248514 DEBUG nova.storage.rbd_utils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image e407f205-43a8-423e-a1cb-dc7f58ccced2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.252 248514 DEBUG nova.storage.rbd_utils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image e407f205-43a8-423e-a1cb-dc7f58ccced2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.286 248514 DEBUG nova.storage.rbd_utils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image e407f205-43a8-423e-a1cb-dc7f58ccced2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.292 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.355 248514 DEBUG nova.policy [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4444623071405115e-05 of space, bias 1.0, pg target 0.004333386921421534 quantized to 32 (current 32)
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006698148359410611 of space, bias 1.0, pg target 0.20094445078231832 quantized to 32 (current 32)
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.408 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.409 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.410 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.411 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.447 248514 DEBUG nova.storage.rbd_utils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image e407f205-43a8-423e-a1cb-dc7f58ccced2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.453 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 e407f205-43a8-423e-a1cb-dc7f58ccced2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.801 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 e407f205-43a8-423e-a1cb-dc7f58ccced2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.875 248514 DEBUG nova.storage.rbd_utils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image e407f205-43a8-423e-a1cb-dc7f58ccced2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.972 248514 DEBUG nova.objects.instance [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid e407f205-43a8-423e-a1cb-dc7f58ccced2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.990 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.990 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Ensure instance console log exists: /var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.991 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.991 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:06:21 np0005558241 nova_compute[248510]: 2025-12-13 09:06:21.992 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:06:22 np0005558241 nova_compute[248510]: 2025-12-13 09:06:22.453 248514 DEBUG nova.network.neutron [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Successfully created port: cccd2f42-71c6-4464-b04e-1c3e885a4378 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:06:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3158: 321 pgs: 321 active+clean; 41 MiB data, 907 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:06:23 np0005558241 nova_compute[248510]: 2025-12-13 09:06:23.236 248514 DEBUG nova.network.neutron [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Successfully updated port: cccd2f42-71c6-4464-b04e-1c3e885a4378 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:06:23 np0005558241 nova_compute[248510]: 2025-12-13 09:06:23.254 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:06:23 np0005558241 nova_compute[248510]: 2025-12-13 09:06:23.255 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:06:23 np0005558241 nova_compute[248510]: 2025-12-13 09:06:23.255 248514 DEBUG nova.network.neutron [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:06:23 np0005558241 nova_compute[248510]: 2025-12-13 09:06:23.330 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:23 np0005558241 nova_compute[248510]: 2025-12-13 09:06:23.365 248514 DEBUG nova.compute.manager [req-26742b66-28b5-4120-898a-484c25764af3 req-774e58b3-7920-4c25-b765-be56ce608f55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received event network-changed-cccd2f42-71c6-4464-b04e-1c3e885a4378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:06:23 np0005558241 nova_compute[248510]: 2025-12-13 09:06:23.366 248514 DEBUG nova.compute.manager [req-26742b66-28b5-4120-898a-484c25764af3 req-774e58b3-7920-4c25-b765-be56ce608f55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Refreshing instance network info cache due to event network-changed-cccd2f42-71c6-4464-b04e-1c3e885a4378. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:06:23 np0005558241 nova_compute[248510]: 2025-12-13 09:06:23.366 248514 DEBUG oslo_concurrency.lockutils [req-26742b66-28b5-4120-898a-484c25764af3 req-774e58b3-7920-4c25-b765-be56ce608f55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:06:23 np0005558241 nova_compute[248510]: 2025-12-13 09:06:23.423 248514 DEBUG nova.network.neutron [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:06:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3159: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:06:24 np0005558241 nova_compute[248510]: 2025-12-13 09:06:24.740 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:24 np0005558241 nova_compute[248510]: 2025-12-13 09:06:24.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:06:24 np0005558241 nova_compute[248510]: 2025-12-13 09:06:24.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:06:24 np0005558241 nova_compute[248510]: 2025-12-13 09:06:24.806 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:06:24 np0005558241 nova_compute[248510]: 2025-12-13 09:06:24.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:06:24 np0005558241 nova_compute[248510]: 2025-12-13 09:06:24.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:06:24 np0005558241 nova_compute[248510]: 2025-12-13 09:06:24.808 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:06:24 np0005558241 nova_compute[248510]: 2025-12-13 09:06:24.808 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:06:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:06:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3350107981' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:06:25 np0005558241 nova_compute[248510]: 2025-12-13 09:06:25.538 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.730s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:06:25 np0005558241 nova_compute[248510]: 2025-12-13 09:06:25.731 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:06:25 np0005558241 nova_compute[248510]: 2025-12-13 09:06:25.733 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3433MB free_disk=59.96666217781603GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:06:25 np0005558241 nova_compute[248510]: 2025-12-13 09:06:25.734 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:06:25 np0005558241 nova_compute[248510]: 2025-12-13 09:06:25.734 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:06:25 np0005558241 nova_compute[248510]: 2025-12-13 09:06:25.846 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance e407f205-43a8-423e-a1cb-dc7f58ccced2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:06:25 np0005558241 nova_compute[248510]: 2025-12-13 09:06:25.847 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:06:25 np0005558241 nova_compute[248510]: 2025-12-13 09:06:25.847 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:06:25 np0005558241 nova_compute[248510]: 2025-12-13 09:06:25.914 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:06:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:06:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:06:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3341216508' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:06:26 np0005558241 nova_compute[248510]: 2025-12-13 09:06:26.491 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:06:26 np0005558241 nova_compute[248510]: 2025-12-13 09:06:26.499 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:06:26 np0005558241 nova_compute[248510]: 2025-12-13 09:06:26.522 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:06:26 np0005558241 nova_compute[248510]: 2025-12-13 09:06:26.553 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:06:26 np0005558241 nova_compute[248510]: 2025-12-13 09:06:26.554 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:06:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3160: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.415 248514 DEBUG nova.network.neutron [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updating instance_info_cache with network_info: [{"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.444 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.445 248514 DEBUG nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Instance network_info: |[{"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.446 248514 DEBUG oslo_concurrency.lockutils [req-26742b66-28b5-4120-898a-484c25764af3 req-774e58b3-7920-4c25-b765-be56ce608f55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.447 248514 DEBUG nova.network.neutron [req-26742b66-28b5-4120-898a-484c25764af3 req-774e58b3-7920-4c25-b765-be56ce608f55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Refreshing network info cache for port cccd2f42-71c6-4464-b04e-1c3e885a4378 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.454 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Start _get_guest_xml network_info=[{"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.462 248514 WARNING nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.470 248514 DEBUG nova.virt.libvirt.host [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.471 248514 DEBUG nova.virt.libvirt.host [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.484 248514 DEBUG nova.virt.libvirt.host [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.486 248514 DEBUG nova.virt.libvirt.host [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.487 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.487 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.488 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.488 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.489 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.489 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.490 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.491 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.491 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.492 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.492 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.493 248514 DEBUG nova.virt.hardware [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:06:27 np0005558241 nova_compute[248510]: 2025-12-13 09:06:27.500 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:06:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:06:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2434418499' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.050 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.070 248514 DEBUG nova.storage.rbd_utils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image e407f205-43a8-423e-a1cb-dc7f58ccced2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.073 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.333 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3161: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:06:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:06:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3741688218' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.645 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.647 248514 DEBUG nova.virt.libvirt.vif [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:06:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-906384999',display_name='tempest-TestGettingAddress-server-906384999',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-906384999',id=137,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMkENvafv8/WsEwcyvUB7dPhR5LIszSoSQPrlQUsJiVP5nV5qd8d4ARbreQ6b5yyI9G9/iw/kPcl2vcitjw0oexy5ltUfx15pAU69dzZBbn6bv/Uzo+ktqtZuKuE1Ehyg==',key_name='tempest-TestGettingAddress-384614847',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-0ozf85u1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:06:21Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=e407f205-43a8-423e-a1cb-dc7f58ccced2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.648 248514 DEBUG nova.network.os_vif_util [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.650 248514 DEBUG nova.network.os_vif_util [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:df:a0,bridge_name='br-int',has_traffic_filtering=True,id=cccd2f42-71c6-4464-b04e-1c3e885a4378,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcccd2f42-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.651 248514 DEBUG nova.objects.instance [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid e407f205-43a8-423e-a1cb-dc7f58ccced2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.674 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:06:28 np0005558241 nova_compute[248510]:  <uuid>e407f205-43a8-423e-a1cb-dc7f58ccced2</uuid>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:  <name>instance-00000089</name>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestGettingAddress-server-906384999</nova:name>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:06:27</nova:creationTime>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:06:28 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:        <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:        <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:        <nova:port uuid="cccd2f42-71c6-4464-b04e-1c3e885a4378">
Dec 13 04:06:28 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fec6:dfa0" ipVersion="6"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <entry name="serial">e407f205-43a8-423e-a1cb-dc7f58ccced2</entry>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <entry name="uuid">e407f205-43a8-423e-a1cb-dc7f58ccced2</entry>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/e407f205-43a8-423e-a1cb-dc7f58ccced2_disk">
Dec 13 04:06:28 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:06:28 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/e407f205-43a8-423e-a1cb-dc7f58ccced2_disk.config">
Dec 13 04:06:28 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:06:28 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:c6:df:a0"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <target dev="tapcccd2f42-71"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2/console.log" append="off"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:06:28 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:06:28 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:06:28 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:06:28 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.676 248514 DEBUG nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Preparing to wait for external event network-vif-plugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.677 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.677 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.678 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.679 248514 DEBUG nova.virt.libvirt.vif [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:06:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-906384999',display_name='tempest-TestGettingAddress-server-906384999',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-906384999',id=137,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMkENvafv8/WsEwcyvUB7dPhR5LIszSoSQPrlQUsJiVP5nV5qd8d4ARbreQ6b5yyI9G9/iw/kPcl2vcitjw0oexy5ltUfx15pAU69dzZBbn6bv/Uzo+ktqtZuKuE1Ehyg==',key_name='tempest-TestGettingAddress-384614847',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-0ozf85u1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:06:21Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=e407f205-43a8-423e-a1cb-dc7f58ccced2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.680 248514 DEBUG nova.network.os_vif_util [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.681 248514 DEBUG nova.network.os_vif_util [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:df:a0,bridge_name='br-int',has_traffic_filtering=True,id=cccd2f42-71c6-4464-b04e-1c3e885a4378,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcccd2f42-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.682 248514 DEBUG os_vif [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:df:a0,bridge_name='br-int',has_traffic_filtering=True,id=cccd2f42-71c6-4464-b04e-1c3e885a4378,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcccd2f42-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.682 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.683 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.684 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.687 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.688 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcccd2f42-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.689 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcccd2f42-71, col_values=(('external_ids', {'iface-id': 'cccd2f42-71c6-4464-b04e-1c3e885a4378', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:df:a0', 'vm-uuid': 'e407f205-43a8-423e-a1cb-dc7f58ccced2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.691 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:28 np0005558241 NetworkManager[50376]: <info>  [1765616788.6925] manager: (tapcccd2f42-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/584)
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.694 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.698 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.700 248514 INFO os_vif [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:df:a0,bridge_name='br-int',has_traffic_filtering=True,id=cccd2f42-71c6-4464-b04e-1c3e885a4378,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcccd2f42-71')#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.763 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.764 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.764 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:c6:df:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.765 248514 INFO nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Using config drive#033[00m
Dec 13 04:06:28 np0005558241 nova_compute[248510]: 2025-12-13 09:06:28.798 248514 DEBUG nova.storage.rbd_utils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image e407f205-43a8-423e-a1cb-dc7f58ccced2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:06:29 np0005558241 nova_compute[248510]: 2025-12-13 09:06:29.217 248514 INFO nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Creating config drive at /var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2/disk.config#033[00m
Dec 13 04:06:29 np0005558241 nova_compute[248510]: 2025-12-13 09:06:29.226 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzeohr92u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:06:29 np0005558241 nova_compute[248510]: 2025-12-13 09:06:29.376 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzeohr92u" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:06:29 np0005558241 nova_compute[248510]: 2025-12-13 09:06:29.403 248514 DEBUG nova.storage.rbd_utils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image e407f205-43a8-423e-a1cb-dc7f58ccced2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:06:29 np0005558241 nova_compute[248510]: 2025-12-13 09:06:29.407 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2/disk.config e407f205-43a8-423e-a1cb-dc7f58ccced2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:06:29 np0005558241 nova_compute[248510]: 2025-12-13 09:06:29.553 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:06:29 np0005558241 nova_compute[248510]: 2025-12-13 09:06:29.554 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:06:29 np0005558241 nova_compute[248510]: 2025-12-13 09:06:29.555 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:06:29 np0005558241 nova_compute[248510]: 2025-12-13 09:06:29.588 248514 DEBUG oslo_concurrency.processutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2/disk.config e407f205-43a8-423e-a1cb-dc7f58ccced2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:06:29 np0005558241 nova_compute[248510]: 2025-12-13 09:06:29.589 248514 INFO nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Deleting local config drive /var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2/disk.config because it was imported into RBD.#033[00m
Dec 13 04:06:29 np0005558241 kernel: tapcccd2f42-71: entered promiscuous mode
Dec 13 04:06:29 np0005558241 NetworkManager[50376]: <info>  [1765616789.6688] manager: (tapcccd2f42-71): new Tun device (/org/freedesktop/NetworkManager/Devices/585)
Dec 13 04:06:29 np0005558241 ovn_controller[148476]: 2025-12-13T09:06:29Z|01420|binding|INFO|Claiming lport cccd2f42-71c6-4464-b04e-1c3e885a4378 for this chassis.
Dec 13 04:06:29 np0005558241 ovn_controller[148476]: 2025-12-13T09:06:29Z|01421|binding|INFO|cccd2f42-71c6-4464-b04e-1c3e885a4378: Claiming fa:16:3e:c6:df:a0 10.100.0.3 2001:db8::f816:3eff:fec6:dfa0
Dec 13 04:06:29 np0005558241 nova_compute[248510]: 2025-12-13 09:06:29.669 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:29 np0005558241 nova_compute[248510]: 2025-12-13 09:06:29.676 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.698 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:df:a0 10.100.0.3 2001:db8::f816:3eff:fec6:dfa0'], port_security=['fa:16:3e:c6:df:a0 10.100.0.3 2001:db8::f816:3eff:fec6:dfa0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fec6:dfa0/64', 'neutron:device_id': 'e407f205-43a8-423e-a1cb-dc7f58ccced2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4134c529-684a-4aee-a450-f026f71bff55', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f3b9c4f-25eb-4912-b05c-6ea015f84c28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e5b5848-41ef-422d-b206-10cd26f67092, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=cccd2f42-71c6-4464-b04e-1c3e885a4378) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:06:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.701 158419 INFO neutron.agent.ovn.metadata.agent [-] Port cccd2f42-71c6-4464-b04e-1c3e885a4378 in datapath 4134c529-684a-4aee-a450-f026f71bff55 bound to our chassis#033[00m
Dec 13 04:06:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.704 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4134c529-684a-4aee-a450-f026f71bff55#033[00m
Dec 13 04:06:29 np0005558241 systemd-machined[210538]: New machine qemu-168-instance-00000089.
Dec 13 04:06:29 np0005558241 systemd-udevd[387097]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:06:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.719 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fa4dfa25-da41-4c1a-abb1-322e7ba7c079]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:06:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.720 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4134c529-61 in ovnmeta-4134c529-684a-4aee-a450-f026f71bff55 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:06:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.721 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4134c529-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:06:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.721 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f0fe0f7-b963-4a4c-b82f-880de2b1539b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:06:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.722 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cc35db83-9d2e-4886-9131-18f80b818e14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:06:29 np0005558241 NetworkManager[50376]: <info>  [1765616789.7327] device (tapcccd2f42-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:06:29 np0005558241 NetworkManager[50376]: <info>  [1765616789.7350] device (tapcccd2f42-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:06:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.737 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[254fbc2d-51d3-4657-814f-93e5ad8dd6a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:06:29 np0005558241 systemd[1]: Started Virtual Machine qemu-168-instance-00000089.
Dec 13 04:06:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.762 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[01bac608-9967-4310-af8a-9d7f30e91733]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:06:29 np0005558241 nova_compute[248510]: 2025-12-13 09:06:29.765 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:29 np0005558241 ovn_controller[148476]: 2025-12-13T09:06:29Z|01422|binding|INFO|Setting lport cccd2f42-71c6-4464-b04e-1c3e885a4378 ovn-installed in OVS
Dec 13 04:06:29 np0005558241 ovn_controller[148476]: 2025-12-13T09:06:29Z|01423|binding|INFO|Setting lport cccd2f42-71c6-4464-b04e-1c3e885a4378 up in Southbound
Dec 13 04:06:29 np0005558241 nova_compute[248510]: 2025-12-13 09:06:29.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:06:29 np0005558241 nova_compute[248510]: 2025-12-13 09:06:29.772 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.802 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f55a1a14-70a3-4acf-bbd2-10d8a5e99bc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:06:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.806 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b96583d7-d478-480e-a1b4-2bc8c4041a9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:06:29 np0005558241 systemd-udevd[387100]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:06:29 np0005558241 NetworkManager[50376]: <info>  [1765616789.8079] manager: (tap4134c529-60): new Veth device (/org/freedesktop/NetworkManager/Devices/586)
Dec 13 04:06:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.845 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[30477490-2afa-4eb8-b8f3-efda5c436cd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:06:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.849 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[71154260-fb0f-4d49-afc7-ccc723335808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:06:29 np0005558241 NetworkManager[50376]: <info>  [1765616789.8797] device (tap4134c529-60): carrier: link connected
Dec 13 04:06:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.890 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0a7fca-f632-4248-9490-dd0f09cebb40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:06:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.912 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0f3201-1b96-43ce-b6c3-33f4dfebe371]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4134c529-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:27:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 936709, 'reachable_time': 43689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387129, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:06:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.938 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ff6e4d16-43c3-47f9-a5c5-5173e127e291]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe17:2774'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 936709, 'tstamp': 936709}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387130, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:06:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:29.961 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[286017f1-b5ad-4975-9f06-25503c1e0f05]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4134c529-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:27:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 936709, 'reachable_time': 43689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 387131, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.005 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c690e52b-0aa9-40d8-9760-1b39f2f8ec00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.078 248514 DEBUG nova.compute.manager [req-e4ac3301-2114-4ba4-b23d-853052e09c0a req-977721ca-07cc-4ac8-89e3-e0290985dd4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received event network-vif-plugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.078 248514 DEBUG oslo_concurrency.lockutils [req-e4ac3301-2114-4ba4-b23d-853052e09c0a req-977721ca-07cc-4ac8-89e3-e0290985dd4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.078 248514 DEBUG oslo_concurrency.lockutils [req-e4ac3301-2114-4ba4-b23d-853052e09c0a req-977721ca-07cc-4ac8-89e3-e0290985dd4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.079 248514 DEBUG oslo_concurrency.lockutils [req-e4ac3301-2114-4ba4-b23d-853052e09c0a req-977721ca-07cc-4ac8-89e3-e0290985dd4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.079 248514 DEBUG nova.compute.manager [req-e4ac3301-2114-4ba4-b23d-853052e09c0a req-977721ca-07cc-4ac8-89e3-e0290985dd4a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Processing event network-vif-plugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.082 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f5629999-1240-431c-abe8-4e7f2a689c72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.084 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4134c529-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.084 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.085 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4134c529-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:06:30 np0005558241 NetworkManager[50376]: <info>  [1765616790.0871] manager: (tap4134c529-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/587)
Dec 13 04:06:30 np0005558241 kernel: tap4134c529-60: entered promiscuous mode
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.089 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4134c529-60, col_values=(('external_ids', {'iface-id': '966dba06-5969-4f23-922d-d3fb06b4a741'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:06:30 np0005558241 ovn_controller[148476]: 2025-12-13T09:06:30Z|01424|binding|INFO|Releasing lport 966dba06-5969-4f23-922d-d3fb06b4a741 from this chassis (sb_readonly=0)
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.103 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.106 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.107 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4134c529-684a-4aee-a450-f026f71bff55.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4134c529-684a-4aee-a450-f026f71bff55.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.108 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9f69d870-91e8-4453-9175-1ff9cd486fe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.108 248514 DEBUG nova.network.neutron [req-26742b66-28b5-4120-898a-484c25764af3 req-774e58b3-7920-4c25-b765-be56ce608f55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updated VIF entry in instance network info cache for port cccd2f42-71c6-4464-b04e-1c3e885a4378. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.109 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-4134c529-684a-4aee-a450-f026f71bff55
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/4134c529-684a-4aee-a450-f026f71bff55.pid.haproxy
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 4134c529-684a-4aee-a450-f026f71bff55
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.109 248514 DEBUG nova.network.neutron [req-26742b66-28b5-4120-898a-484c25764af3 req-774e58b3-7920-4c25-b765-be56ce608f55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updating instance_info_cache with network_info: [{"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:06:30 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:30.110 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'env', 'PROCESS_TAG=haproxy-4134c529-684a-4aee-a450-f026f71bff55', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4134c529-684a-4aee-a450-f026f71bff55.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.129 248514 DEBUG oslo_concurrency.lockutils [req-26742b66-28b5-4120-898a-484c25764af3 req-774e58b3-7920-4c25-b765-be56ce608f55 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.393 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616790.3923938, e407f205-43a8-423e-a1cb-dc7f58ccced2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.394 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] VM Started (Lifecycle Event)#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.397 248514 DEBUG nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.402 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.407 248514 INFO nova.virt.libvirt.driver [-] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Instance spawned successfully.#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.407 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.419 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.424 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.442 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.443 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.444 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.444 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.445 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.446 248514 DEBUG nova.virt.libvirt.driver [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.478 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.479 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616790.3939173, e407f205-43a8-423e-a1cb-dc7f58ccced2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.479 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.518 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.523 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616790.4006698, e407f205-43a8-423e-a1cb-dc7f58ccced2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.524 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.538 248514 INFO nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Took 9.36 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.539 248514 DEBUG nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.583 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.586 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:06:30 np0005558241 podman[387204]: 2025-12-13 09:06:30.492403089 +0000 UTC m=+0.033992478 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:06:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3162: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.643 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.916 248514 INFO nova.compute.manager [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Took 10.80 seconds to build instance.#033[00m
Dec 13 04:06:30 np0005558241 nova_compute[248510]: 2025-12-13 09:06:30.952 248514 DEBUG oslo_concurrency.lockutils [None req-ae482dda-f75e-4cb5-abe8-b018d2697aa4 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:06:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:06:31 np0005558241 podman[387204]: 2025-12-13 09:06:31.34363423 +0000 UTC m=+0.885223599 container create b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:06:31 np0005558241 systemd[1]: Started libpod-conmon-b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e.scope.
Dec 13 04:06:31 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:06:31 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6629d0d2946616c16ab6364a7afe8e36321cd71c229fca3a1701a5a9a8e1eeb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:31 np0005558241 podman[387204]: 2025-12-13 09:06:31.502877931 +0000 UTC m=+1.044467320 container init b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 04:06:31 np0005558241 podman[387204]: 2025-12-13 09:06:31.513925646 +0000 UTC m=+1.055514995 container start b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 04:06:31 np0005558241 neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55[387220]: [NOTICE]   (387224) : New worker (387226) forked
Dec 13 04:06:31 np0005558241 neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55[387220]: [NOTICE]   (387224) : Loading success.
Dec 13 04:06:32 np0005558241 nova_compute[248510]: 2025-12-13 09:06:32.207 248514 DEBUG nova.compute.manager [req-e2acfe12-b42a-4e2b-a232-f9e6c564728e req-de834545-522c-48f7-816e-2de9ce214e87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received event network-vif-plugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:06:32 np0005558241 nova_compute[248510]: 2025-12-13 09:06:32.208 248514 DEBUG oslo_concurrency.lockutils [req-e2acfe12-b42a-4e2b-a232-f9e6c564728e req-de834545-522c-48f7-816e-2de9ce214e87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:06:32 np0005558241 nova_compute[248510]: 2025-12-13 09:06:32.208 248514 DEBUG oslo_concurrency.lockutils [req-e2acfe12-b42a-4e2b-a232-f9e6c564728e req-de834545-522c-48f7-816e-2de9ce214e87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:06:32 np0005558241 nova_compute[248510]: 2025-12-13 09:06:32.209 248514 DEBUG oslo_concurrency.lockutils [req-e2acfe12-b42a-4e2b-a232-f9e6c564728e req-de834545-522c-48f7-816e-2de9ce214e87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:06:32 np0005558241 nova_compute[248510]: 2025-12-13 09:06:32.209 248514 DEBUG nova.compute.manager [req-e2acfe12-b42a-4e2b-a232-f9e6c564728e req-de834545-522c-48f7-816e-2de9ce214e87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] No waiting events found dispatching network-vif-plugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:06:32 np0005558241 nova_compute[248510]: 2025-12-13 09:06:32.210 248514 WARNING nova.compute.manager [req-e2acfe12-b42a-4e2b-a232-f9e6c564728e req-de834545-522c-48f7-816e-2de9ce214e87 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received unexpected event network-vif-plugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:06:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3163: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:06:33 np0005558241 nova_compute[248510]: 2025-12-13 09:06:33.335 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:33 np0005558241 nova_compute[248510]: 2025-12-13 09:06:33.691 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3164: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 13 04:06:35 np0005558241 ovn_controller[148476]: 2025-12-13T09:06:35Z|01425|binding|INFO|Releasing lport 966dba06-5969-4f23-922d-d3fb06b4a741 from this chassis (sb_readonly=0)
Dec 13 04:06:35 np0005558241 NetworkManager[50376]: <info>  [1765616795.4696] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/588)
Dec 13 04:06:35 np0005558241 NetworkManager[50376]: <info>  [1765616795.4706] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/589)
Dec 13 04:06:35 np0005558241 nova_compute[248510]: 2025-12-13 09:06:35.474 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:35 np0005558241 nova_compute[248510]: 2025-12-13 09:06:35.529 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:35 np0005558241 ovn_controller[148476]: 2025-12-13T09:06:35Z|01426|binding|INFO|Releasing lport 966dba06-5969-4f23-922d-d3fb06b4a741 from this chassis (sb_readonly=0)
Dec 13 04:06:35 np0005558241 nova_compute[248510]: 2025-12-13 09:06:35.536 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:06:36 np0005558241 nova_compute[248510]: 2025-12-13 09:06:36.595 248514 DEBUG nova.compute.manager [req-6befca67-c1dd-4dbb-bc00-cba37ef299a6 req-68e4405a-70e7-49c0-93ed-9bef38ef55f2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received event network-changed-cccd2f42-71c6-4464-b04e-1c3e885a4378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:06:36 np0005558241 nova_compute[248510]: 2025-12-13 09:06:36.596 248514 DEBUG nova.compute.manager [req-6befca67-c1dd-4dbb-bc00-cba37ef299a6 req-68e4405a-70e7-49c0-93ed-9bef38ef55f2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Refreshing instance network info cache due to event network-changed-cccd2f42-71c6-4464-b04e-1c3e885a4378. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:06:36 np0005558241 nova_compute[248510]: 2025-12-13 09:06:36.596 248514 DEBUG oslo_concurrency.lockutils [req-6befca67-c1dd-4dbb-bc00-cba37ef299a6 req-68e4405a-70e7-49c0-93ed-9bef38ef55f2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:06:36 np0005558241 nova_compute[248510]: 2025-12-13 09:06:36.596 248514 DEBUG oslo_concurrency.lockutils [req-6befca67-c1dd-4dbb-bc00-cba37ef299a6 req-68e4405a-70e7-49c0-93ed-9bef38ef55f2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:06:36 np0005558241 nova_compute[248510]: 2025-12-13 09:06:36.596 248514 DEBUG nova.network.neutron [req-6befca67-c1dd-4dbb-bc00-cba37ef299a6 req-68e4405a-70e7-49c0-93ed-9bef38ef55f2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Refreshing network info cache for port cccd2f42-71c6-4464-b04e-1c3e885a4378 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:06:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3165: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:06:38 np0005558241 nova_compute[248510]: 2025-12-13 09:06:38.391 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3166: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:06:38 np0005558241 nova_compute[248510]: 2025-12-13 09:06:38.599 248514 DEBUG nova.network.neutron [req-6befca67-c1dd-4dbb-bc00-cba37ef299a6 req-68e4405a-70e7-49c0-93ed-9bef38ef55f2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updated VIF entry in instance network info cache for port cccd2f42-71c6-4464-b04e-1c3e885a4378. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:06:38 np0005558241 nova_compute[248510]: 2025-12-13 09:06:38.600 248514 DEBUG nova.network.neutron [req-6befca67-c1dd-4dbb-bc00-cba37ef299a6 req-68e4405a-70e7-49c0-93ed-9bef38ef55f2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updating instance_info_cache with network_info: [{"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:06:38 np0005558241 nova_compute[248510]: 2025-12-13 09:06:38.693 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:39 np0005558241 nova_compute[248510]: 2025-12-13 09:06:39.036 248514 DEBUG oslo_concurrency.lockutils [req-6befca67-c1dd-4dbb-bc00-cba37ef299a6 req-68e4405a-70e7-49c0-93ed-9bef38ef55f2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:06:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:06:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:06:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:06:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:06:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:06:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:06:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3167: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:06:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:06:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3168: 321 pgs: 321 active+clean; 88 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:06:42 np0005558241 ovn_controller[148476]: 2025-12-13T09:06:42Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:df:a0 10.100.0.3
Dec 13 04:06:42 np0005558241 ovn_controller[148476]: 2025-12-13T09:06:42Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:df:a0 10.100.0.3
Dec 13 04:06:43 np0005558241 nova_compute[248510]: 2025-12-13 09:06:43.394 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:43 np0005558241 nova_compute[248510]: 2025-12-13 09:06:43.696 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:06:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:06:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:06:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:06:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3169: 321 pgs: 321 active+clean; 120 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Dec 13 04:06:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:06:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:06:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:06:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:06:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:06:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:06:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:06:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:06:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:06:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:06:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:06:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:06:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:06:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:06:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:06:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:06:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:06:45 np0005558241 podman[387453]: 2025-12-13 09:06:45.47849927 +0000 UTC m=+0.069221738 container create 3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_turing, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 04:06:45 np0005558241 systemd[1]: Started libpod-conmon-3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324.scope.
Dec 13 04:06:45 np0005558241 podman[387453]: 2025-12-13 09:06:45.444482672 +0000 UTC m=+0.035205190 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:06:45 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:06:45 np0005558241 podman[387453]: 2025-12-13 09:06:45.585652556 +0000 UTC m=+0.176375074 container init 3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_turing, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:06:45 np0005558241 podman[387453]: 2025-12-13 09:06:45.600696714 +0000 UTC m=+0.191419142 container start 3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 04:06:45 np0005558241 podman[387453]: 2025-12-13 09:06:45.604952794 +0000 UTC m=+0.195675322 container attach 3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 04:06:45 np0005558241 admiring_turing[387469]: 167 167
Dec 13 04:06:45 np0005558241 systemd[1]: libpod-3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324.scope: Deactivated successfully.
Dec 13 04:06:45 np0005558241 conmon[387469]: conmon 3f37a697286e682251ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324.scope/container/memory.events
Dec 13 04:06:45 np0005558241 podman[387453]: 2025-12-13 09:06:45.612897319 +0000 UTC m=+0.203619777 container died 3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 04:06:45 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d75def86a616fdcd3f51e8f810ec8d56e3ba98a236fa91b198b5f5b6b564d53f-merged.mount: Deactivated successfully.
Dec 13 04:06:45 np0005558241 podman[387453]: 2025-12-13 09:06:45.664439979 +0000 UTC m=+0.255162437 container remove 3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:06:45 np0005558241 systemd[1]: libpod-conmon-3f37a697286e682251ee4c6c969104c9fd5ff472319972bc22cdee2ce0a65324.scope: Deactivated successfully.
Dec 13 04:06:45 np0005558241 podman[387492]: 2025-12-13 09:06:45.847247318 +0000 UTC m=+0.041895182 container create 1d1e0e1f4f96e8944b0b88d911f8ea7ab2e0afe92a1fa870e56c606f570bfe4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chebyshev, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:06:45 np0005558241 systemd[1]: Started libpod-conmon-1d1e0e1f4f96e8944b0b88d911f8ea7ab2e0afe92a1fa870e56c606f570bfe4f.scope.
Dec 13 04:06:45 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:06:45 np0005558241 podman[387492]: 2025-12-13 09:06:45.829303715 +0000 UTC m=+0.023951609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:06:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bd0f794143dd3f23eeb5ab5f016b0ea1dcc8b5aa262454176daad63a51da4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bd0f794143dd3f23eeb5ab5f016b0ea1dcc8b5aa262454176daad63a51da4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bd0f794143dd3f23eeb5ab5f016b0ea1dcc8b5aa262454176daad63a51da4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bd0f794143dd3f23eeb5ab5f016b0ea1dcc8b5aa262454176daad63a51da4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bd0f794143dd3f23eeb5ab5f016b0ea1dcc8b5aa262454176daad63a51da4d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:45 np0005558241 podman[387492]: 2025-12-13 09:06:45.940021913 +0000 UTC m=+0.134669837 container init 1d1e0e1f4f96e8944b0b88d911f8ea7ab2e0afe92a1fa870e56c606f570bfe4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chebyshev, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:06:45 np0005558241 podman[387492]: 2025-12-13 09:06:45.946867189 +0000 UTC m=+0.141515093 container start 1d1e0e1f4f96e8944b0b88d911f8ea7ab2e0afe92a1fa870e56c606f570bfe4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chebyshev, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:06:45 np0005558241 podman[387492]: 2025-12-13 09:06:45.950978585 +0000 UTC m=+0.145626459 container attach 1d1e0e1f4f96e8944b0b88d911f8ea7ab2e0afe92a1fa870e56c606f570bfe4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 04:06:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:06:46 np0005558241 romantic_chebyshev[387508]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:06:46 np0005558241 romantic_chebyshev[387508]: --> All data devices are unavailable
Dec 13 04:06:46 np0005558241 systemd[1]: libpod-1d1e0e1f4f96e8944b0b88d911f8ea7ab2e0afe92a1fa870e56c606f570bfe4f.scope: Deactivated successfully.
Dec 13 04:06:46 np0005558241 podman[387492]: 2025-12-13 09:06:46.543620072 +0000 UTC m=+0.738268006 container died 1d1e0e1f4f96e8944b0b88d911f8ea7ab2e0afe92a1fa870e56c606f570bfe4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:06:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay-71bd0f794143dd3f23eeb5ab5f016b0ea1dcc8b5aa262454176daad63a51da4d-merged.mount: Deactivated successfully.
Dec 13 04:06:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3170: 321 pgs: 321 active+clean; 120 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 04:06:46 np0005558241 podman[387492]: 2025-12-13 09:06:46.604889563 +0000 UTC m=+0.799537477 container remove 1d1e0e1f4f96e8944b0b88d911f8ea7ab2e0afe92a1fa870e56c606f570bfe4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:06:46 np0005558241 systemd[1]: libpod-conmon-1d1e0e1f4f96e8944b0b88d911f8ea7ab2e0afe92a1fa870e56c606f570bfe4f.scope: Deactivated successfully.
Dec 13 04:06:46 np0005558241 podman[387540]: 2025-12-13 09:06:46.674001137 +0000 UTC m=+0.080668933 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 13 04:06:46 np0005558241 podman[387538]: 2025-12-13 09:06:46.721029851 +0000 UTC m=+0.125670345 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible)
Dec 13 04:06:46 np0005558241 podman[387529]: 2025-12-13 09:06:46.740885463 +0000 UTC m=+0.149066798 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 04:06:47 np0005558241 podman[387664]: 2025-12-13 09:06:47.123279873 +0000 UTC m=+0.043248987 container create c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_curie, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 04:06:47 np0005558241 systemd[1]: Started libpod-conmon-c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b.scope.
Dec 13 04:06:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:06:47 np0005558241 podman[387664]: 2025-12-13 09:06:47.101365128 +0000 UTC m=+0.021334282 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:06:47 np0005558241 podman[387664]: 2025-12-13 09:06:47.220165064 +0000 UTC m=+0.140134208 container init c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_curie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:06:47 np0005558241 podman[387664]: 2025-12-13 09:06:47.233345524 +0000 UTC m=+0.153314678 container start c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 04:06:47 np0005558241 podman[387664]: 2025-12-13 09:06:47.237358638 +0000 UTC m=+0.157327772 container attach c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_curie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:06:47 np0005558241 charming_curie[387680]: 167 167
Dec 13 04:06:47 np0005558241 systemd[1]: libpod-c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b.scope: Deactivated successfully.
Dec 13 04:06:47 np0005558241 conmon[387680]: conmon c8fe316ffef2c63c9292 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b.scope/container/memory.events
Dec 13 04:06:47 np0005558241 podman[387664]: 2025-12-13 09:06:47.243277891 +0000 UTC m=+0.163247025 container died c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:06:47 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3313704034a8a0eb4a76198f395417933d7eb710bab3c2e5795deb01a06e0b69-merged.mount: Deactivated successfully.
Dec 13 04:06:47 np0005558241 podman[387664]: 2025-12-13 09:06:47.297414588 +0000 UTC m=+0.217383732 container remove c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:06:47 np0005558241 systemd[1]: libpod-conmon-c8fe316ffef2c63c9292948b8c7182dd6aa0b1a03774bcf26fe1c4f2eac0506b.scope: Deactivated successfully.
Dec 13 04:06:47 np0005558241 podman[387704]: 2025-12-13 09:06:47.560884639 +0000 UTC m=+0.065687177 container create 4ad296374d89fd52e47650cc1b5ef5ba20131ff11b77766d1861ed0a9e8181b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 04:06:47 np0005558241 systemd[1]: Started libpod-conmon-4ad296374d89fd52e47650cc1b5ef5ba20131ff11b77766d1861ed0a9e8181b1.scope.
Dec 13 04:06:47 np0005558241 podman[387704]: 2025-12-13 09:06:47.538359887 +0000 UTC m=+0.043162495 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:06:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:06:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85a5cffdd0e14db57361ee66b6f1db103fbd134fdaa1c6867f44cdd1e61e02d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85a5cffdd0e14db57361ee66b6f1db103fbd134fdaa1c6867f44cdd1e61e02d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85a5cffdd0e14db57361ee66b6f1db103fbd134fdaa1c6867f44cdd1e61e02d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85a5cffdd0e14db57361ee66b6f1db103fbd134fdaa1c6867f44cdd1e61e02d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:47 np0005558241 podman[387704]: 2025-12-13 09:06:47.669148273 +0000 UTC m=+0.173950841 container init 4ad296374d89fd52e47650cc1b5ef5ba20131ff11b77766d1861ed0a9e8181b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:06:47 np0005558241 podman[387704]: 2025-12-13 09:06:47.687575039 +0000 UTC m=+0.192377607 container start 4ad296374d89fd52e47650cc1b5ef5ba20131ff11b77766d1861ed0a9e8181b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_moser, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:06:47 np0005558241 podman[387704]: 2025-12-13 09:06:47.69186618 +0000 UTC m=+0.196668748 container attach 4ad296374d89fd52e47650cc1b5ef5ba20131ff11b77766d1861ed0a9e8181b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_moser, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 04:06:47 np0005558241 sharp_moser[387720]: {
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:    "0": [
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:        {
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "devices": [
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "/dev/loop3"
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            ],
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "lv_name": "ceph_lv0",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "lv_size": "21470642176",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "name": "ceph_lv0",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "tags": {
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.cluster_name": "ceph",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.crush_device_class": "",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.encrypted": "0",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.objectstore": "bluestore",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.osd_id": "0",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.type": "block",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.vdo": "0",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.with_tpm": "0"
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            },
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "type": "block",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "vg_name": "ceph_vg0"
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:        }
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:    ],
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:    "1": [
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:        {
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "devices": [
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "/dev/loop4"
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            ],
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "lv_name": "ceph_lv1",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "lv_size": "21470642176",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "name": "ceph_lv1",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "tags": {
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.cluster_name": "ceph",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.crush_device_class": "",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.encrypted": "0",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.objectstore": "bluestore",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.osd_id": "1",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.type": "block",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.vdo": "0",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.with_tpm": "0"
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            },
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "type": "block",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "vg_name": "ceph_vg1"
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:        }
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:    ],
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:    "2": [
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:        {
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "devices": [
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "/dev/loop5"
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            ],
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "lv_name": "ceph_lv2",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "lv_size": "21470642176",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "name": "ceph_lv2",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "tags": {
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.cluster_name": "ceph",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.crush_device_class": "",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.encrypted": "0",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.objectstore": "bluestore",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.osd_id": "2",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.type": "block",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.vdo": "0",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:                "ceph.with_tpm": "0"
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            },
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "type": "block",
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:            "vg_name": "ceph_vg2"
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:        }
Dec 13 04:06:47 np0005558241 sharp_moser[387720]:    ]
Dec 13 04:06:47 np0005558241 sharp_moser[387720]: }
Dec 13 04:06:48 np0005558241 systemd[1]: libpod-4ad296374d89fd52e47650cc1b5ef5ba20131ff11b77766d1861ed0a9e8181b1.scope: Deactivated successfully.
Dec 13 04:06:48 np0005558241 podman[387729]: 2025-12-13 09:06:48.080546742 +0000 UTC m=+0.040211259 container died 4ad296374d89fd52e47650cc1b5ef5ba20131ff11b77766d1861ed0a9e8181b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_moser, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:06:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d85a5cffdd0e14db57361ee66b6f1db103fbd134fdaa1c6867f44cdd1e61e02d-merged.mount: Deactivated successfully.
Dec 13 04:06:48 np0005558241 podman[387729]: 2025-12-13 09:06:48.131312502 +0000 UTC m=+0.090976979 container remove 4ad296374d89fd52e47650cc1b5ef5ba20131ff11b77766d1861ed0a9e8181b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Dec 13 04:06:48 np0005558241 systemd[1]: libpod-conmon-4ad296374d89fd52e47650cc1b5ef5ba20131ff11b77766d1861ed0a9e8181b1.scope: Deactivated successfully.
Dec 13 04:06:48 np0005558241 nova_compute[248510]: 2025-12-13 09:06:48.395 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3171: 321 pgs: 321 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:06:48 np0005558241 nova_compute[248510]: 2025-12-13 09:06:48.699 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:48 np0005558241 podman[387806]: 2025-12-13 09:06:48.700818802 +0000 UTC m=+0.051836649 container create 2bb9aa0e4d1a222758f4b647209743c987f88f2a1aa7604028fec57e01244103 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_napier, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 04:06:48 np0005558241 systemd[1]: Started libpod-conmon-2bb9aa0e4d1a222758f4b647209743c987f88f2a1aa7604028fec57e01244103.scope.
Dec 13 04:06:48 np0005558241 podman[387806]: 2025-12-13 09:06:48.674131513 +0000 UTC m=+0.025149410 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:06:48 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:06:48 np0005558241 podman[387806]: 2025-12-13 09:06:48.807378813 +0000 UTC m=+0.158396690 container init 2bb9aa0e4d1a222758f4b647209743c987f88f2a1aa7604028fec57e01244103 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_napier, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:06:48 np0005558241 podman[387806]: 2025-12-13 09:06:48.820747348 +0000 UTC m=+0.171765215 container start 2bb9aa0e4d1a222758f4b647209743c987f88f2a1aa7604028fec57e01244103 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 04:06:48 np0005558241 podman[387806]: 2025-12-13 09:06:48.825144211 +0000 UTC m=+0.176162078 container attach 2bb9aa0e4d1a222758f4b647209743c987f88f2a1aa7604028fec57e01244103 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 04:06:48 np0005558241 kind_napier[387823]: 167 167
Dec 13 04:06:48 np0005558241 systemd[1]: libpod-2bb9aa0e4d1a222758f4b647209743c987f88f2a1aa7604028fec57e01244103.scope: Deactivated successfully.
Dec 13 04:06:48 np0005558241 podman[387806]: 2025-12-13 09:06:48.82818315 +0000 UTC m=+0.179201037 container died 2bb9aa0e4d1a222758f4b647209743c987f88f2a1aa7604028fec57e01244103 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:06:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-17f06fc154fb8a1e4dadf4278479a35067ecb219b4b18a980dcbe50d57a66cda-merged.mount: Deactivated successfully.
Dec 13 04:06:48 np0005558241 podman[387806]: 2025-12-13 09:06:48.877091952 +0000 UTC m=+0.228109819 container remove 2bb9aa0e4d1a222758f4b647209743c987f88f2a1aa7604028fec57e01244103 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_napier, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:06:48 np0005558241 systemd[1]: libpod-conmon-2bb9aa0e4d1a222758f4b647209743c987f88f2a1aa7604028fec57e01244103.scope: Deactivated successfully.
Dec 13 04:06:49 np0005558241 podman[387847]: 2025-12-13 09:06:49.103536227 +0000 UTC m=+0.049614392 container create 51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_blackburn, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 04:06:49 np0005558241 systemd[1]: Started libpod-conmon-51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89.scope.
Dec 13 04:06:49 np0005558241 podman[387847]: 2025-12-13 09:06:49.083267024 +0000 UTC m=+0.029345209 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:06:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:06:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75112c473006ccd3e7768a968dc207ea3b7d026b9edf04e34c05b161de791f97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75112c473006ccd3e7768a968dc207ea3b7d026b9edf04e34c05b161de791f97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75112c473006ccd3e7768a968dc207ea3b7d026b9edf04e34c05b161de791f97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75112c473006ccd3e7768a968dc207ea3b7d026b9edf04e34c05b161de791f97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:49 np0005558241 podman[387847]: 2025-12-13 09:06:49.208166238 +0000 UTC m=+0.154244393 container init 51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:06:49 np0005558241 podman[387847]: 2025-12-13 09:06:49.219898331 +0000 UTC m=+0.165976496 container start 51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_blackburn, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:06:49 np0005558241 podman[387847]: 2025-12-13 09:06:49.223717569 +0000 UTC m=+0.169795754 container attach 51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 04:06:49 np0005558241 lvm[387943]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:06:49 np0005558241 lvm[387943]: VG ceph_vg1 finished
Dec 13 04:06:49 np0005558241 lvm[387942]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:06:49 np0005558241 lvm[387942]: VG ceph_vg0 finished
Dec 13 04:06:49 np0005558241 lvm[387945]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:06:49 np0005558241 lvm[387945]: VG ceph_vg2 finished
Dec 13 04:06:50 np0005558241 agitated_blackburn[387864]: {}
Dec 13 04:06:50 np0005558241 systemd[1]: libpod-51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89.scope: Deactivated successfully.
Dec 13 04:06:50 np0005558241 podman[387847]: 2025-12-13 09:06:50.087451452 +0000 UTC m=+1.033529617 container died 51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 04:06:50 np0005558241 systemd[1]: libpod-51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89.scope: Consumed 1.429s CPU time.
Dec 13 04:06:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-75112c473006ccd3e7768a968dc207ea3b7d026b9edf04e34c05b161de791f97-merged.mount: Deactivated successfully.
Dec 13 04:06:50 np0005558241 podman[387847]: 2025-12-13 09:06:50.141344493 +0000 UTC m=+1.087422668 container remove 51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_blackburn, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec 13 04:06:50 np0005558241 systemd[1]: libpod-conmon-51260c01678c792753a3b44dc73aea4836510a95febef7d31fef37d50551ff89.scope: Deactivated successfully.
Dec 13 04:06:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:06:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:06:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:06:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:06:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3172: 321 pgs: 321 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:06:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:06:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:06:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:06:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3173: 321 pgs: 321 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:06:53 np0005558241 nova_compute[248510]: 2025-12-13 09:06:53.397 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:53 np0005558241 nova_compute[248510]: 2025-12-13 09:06:53.700 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:53 np0005558241 nova_compute[248510]: 2025-12-13 09:06:53.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:06:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3174: 321 pgs: 321 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 04:06:55 np0005558241 nova_compute[248510]: 2025-12-13 09:06:55.072 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "9fc8d93d-978b-4964-b6ca-86b96580e92f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:06:55 np0005558241 nova_compute[248510]: 2025-12-13 09:06:55.073 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:06:55 np0005558241 nova_compute[248510]: 2025-12-13 09:06:55.109 248514 DEBUG nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:06:55 np0005558241 nova_compute[248510]: 2025-12-13 09:06:55.225 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:06:55 np0005558241 nova_compute[248510]: 2025-12-13 09:06:55.226 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:06:55 np0005558241 nova_compute[248510]: 2025-12-13 09:06:55.236 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:06:55 np0005558241 nova_compute[248510]: 2025-12-13 09:06:55.237 248514 INFO nova.compute.claims [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:06:55 np0005558241 nova_compute[248510]: 2025-12-13 09:06:55.380 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:06:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:55.443 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:06:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:55.444 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:06:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:55.445 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:06:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:06:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2022079524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:06:55 np0005558241 nova_compute[248510]: 2025-12-13 09:06:55.939 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:06:55 np0005558241 nova_compute[248510]: 2025-12-13 09:06:55.949 248514 DEBUG nova.compute.provider_tree [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:06:55 np0005558241 nova_compute[248510]: 2025-12-13 09:06:55.982 248514 DEBUG nova.scheduler.client.report [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.012 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.013 248514 DEBUG nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.080 248514 DEBUG nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.081 248514 DEBUG nova.network.neutron [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.117 248514 INFO nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:06:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.152 248514 DEBUG nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.265 248514 DEBUG nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.267 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.268 248514 INFO nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Creating image(s)#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.298 248514 DEBUG nova.storage.rbd_utils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.327 248514 DEBUG nova.storage.rbd_utils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.350 248514 DEBUG nova.storage.rbd_utils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.358 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.401 248514 DEBUG nova.policy [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.437 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.438 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.439 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.439 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.465 248514 DEBUG nova.storage.rbd_utils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.469 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:06:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3175: 321 pgs: 321 active+clean; 121 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 4.7 KiB/s rd, 14 KiB/s wr, 1 op/s
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.786 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.886 248514 DEBUG nova.storage.rbd_utils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:06:56 np0005558241 nova_compute[248510]: 2025-12-13 09:06:56.990 248514 DEBUG nova.objects.instance [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 9fc8d93d-978b-4964-b6ca-86b96580e92f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:06:57 np0005558241 nova_compute[248510]: 2025-12-13 09:06:57.007 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:06:57 np0005558241 nova_compute[248510]: 2025-12-13 09:06:57.008 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Ensure instance console log exists: /var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:06:57 np0005558241 nova_compute[248510]: 2025-12-13 09:06:57.008 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:06:57 np0005558241 nova_compute[248510]: 2025-12-13 09:06:57.008 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:06:57 np0005558241 nova_compute[248510]: 2025-12-13 09:06:57.009 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:06:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:57.694 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:06:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:57.695 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:06:57 np0005558241 nova_compute[248510]: 2025-12-13 09:06:57.695 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:57 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:06:57.696 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:06:57 np0005558241 nova_compute[248510]: 2025-12-13 09:06:57.851 248514 DEBUG nova.network.neutron [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Successfully created port: 1e0dc858-5ac1-402f-aa44-2ae372f17d6f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:06:58 np0005558241 nova_compute[248510]: 2025-12-13 09:06:58.398 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3176: 321 pgs: 321 active+clean; 135 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 562 KiB/s wr, 13 op/s
Dec 13 04:06:58 np0005558241 nova_compute[248510]: 2025-12-13 09:06:58.702 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:06:58 np0005558241 nova_compute[248510]: 2025-12-13 09:06:58.758 248514 DEBUG nova.network.neutron [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Successfully updated port: 1e0dc858-5ac1-402f-aa44-2ae372f17d6f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:06:58 np0005558241 nova_compute[248510]: 2025-12-13 09:06:58.779 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:06:58 np0005558241 nova_compute[248510]: 2025-12-13 09:06:58.780 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:06:58 np0005558241 nova_compute[248510]: 2025-12-13 09:06:58.780 248514 DEBUG nova.network.neutron [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:06:58 np0005558241 nova_compute[248510]: 2025-12-13 09:06:58.877 248514 DEBUG nova.compute.manager [req-3a0e3ab1-1d90-44c3-87c3-650d4637931c req-2f8a59c8-c191-400c-994a-2116298835fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received event network-changed-1e0dc858-5ac1-402f-aa44-2ae372f17d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:06:58 np0005558241 nova_compute[248510]: 2025-12-13 09:06:58.878 248514 DEBUG nova.compute.manager [req-3a0e3ab1-1d90-44c3-87c3-650d4637931c req-2f8a59c8-c191-400c-994a-2116298835fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Refreshing instance network info cache due to event network-changed-1e0dc858-5ac1-402f-aa44-2ae372f17d6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:06:58 np0005558241 nova_compute[248510]: 2025-12-13 09:06:58.878 248514 DEBUG oslo_concurrency.lockutils [req-3a0e3ab1-1d90-44c3-87c3-650d4637931c req-2f8a59c8-c191-400c-994a-2116298835fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:06:58 np0005558241 nova_compute[248510]: 2025-12-13 09:06:58.971 248514 DEBUG nova.network.neutron [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:07:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3177: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.094 248514 DEBUG nova.network.neutron [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Updating instance_info_cache with network_info: [{"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.118 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.118 248514 DEBUG nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Instance network_info: |[{"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.119 248514 DEBUG oslo_concurrency.lockutils [req-3a0e3ab1-1d90-44c3-87c3-650d4637931c req-2f8a59c8-c191-400c-994a-2116298835fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.119 248514 DEBUG nova.network.neutron [req-3a0e3ab1-1d90-44c3-87c3-650d4637931c req-2f8a59c8-c191-400c-994a-2116298835fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Refreshing network info cache for port 1e0dc858-5ac1-402f-aa44-2ae372f17d6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.123 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Start _get_guest_xml network_info=[{"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:07:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.128 248514 WARNING nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.136 248514 DEBUG nova.virt.libvirt.host [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.137 248514 DEBUG nova.virt.libvirt.host [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.147 248514 DEBUG nova.virt.libvirt.host [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.147 248514 DEBUG nova.virt.libvirt.host [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.148 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.148 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.149 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.149 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.149 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.149 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.149 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.150 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.150 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.150 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.151 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.151 248514 DEBUG nova.virt.hardware [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.154 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:07:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:07:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2351611217' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.754 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.789 248514 DEBUG nova.storage.rbd_utils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:07:01 np0005558241 nova_compute[248510]: 2025-12-13 09:07:01.795 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:07:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:07:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2136825159' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.404 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.406 248514 DEBUG nova.virt.libvirt.vif [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:06:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-487738396',display_name='tempest-TestGettingAddress-server-487738396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-487738396',id=138,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMkENvafv8/WsEwcyvUB7dPhR5LIszSoSQPrlQUsJiVP5nV5qd8d4ARbreQ6b5yyI9G9/iw/kPcl2vcitjw0oexy5ltUfx15pAU69dzZBbn6bv/Uzo+ktqtZuKuE1Ehyg==',key_name='tempest-TestGettingAddress-384614847',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-4nz1f07d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:06:56Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=9fc8d93d-978b-4964-b6ca-86b96580e92f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.407 248514 DEBUG nova.network.os_vif_util [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.408 248514 DEBUG nova.network.os_vif_util [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:8c:61,bridge_name='br-int',has_traffic_filtering=True,id=1e0dc858-5ac1-402f-aa44-2ae372f17d6f,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e0dc858-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.409 248514 DEBUG nova.objects.instance [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9fc8d93d-978b-4964-b6ca-86b96580e92f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.439 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:07:02 np0005558241 nova_compute[248510]:  <uuid>9fc8d93d-978b-4964-b6ca-86b96580e92f</uuid>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:  <name>instance-0000008a</name>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestGettingAddress-server-487738396</nova:name>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:07:01</nova:creationTime>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:07:02 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:        <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:        <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:        <nova:port uuid="1e0dc858-5ac1-402f-aa44-2ae372f17d6f">
Dec 13 04:07:02 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feda:8c61" ipVersion="6"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <entry name="serial">9fc8d93d-978b-4964-b6ca-86b96580e92f</entry>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <entry name="uuid">9fc8d93d-978b-4964-b6ca-86b96580e92f</entry>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/9fc8d93d-978b-4964-b6ca-86b96580e92f_disk">
Dec 13 04:07:02 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:07:02 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/9fc8d93d-978b-4964-b6ca-86b96580e92f_disk.config">
Dec 13 04:07:02 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:07:02 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:da:8c:61"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <target dev="tap1e0dc858-5a"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f/console.log" append="off"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:07:02 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:07:02 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:07:02 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:07:02 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.441 248514 DEBUG nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Preparing to wait for external event network-vif-plugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.442 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.443 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.443 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.445 248514 DEBUG nova.virt.libvirt.vif [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:06:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-487738396',display_name='tempest-TestGettingAddress-server-487738396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-487738396',id=138,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMkENvafv8/WsEwcyvUB7dPhR5LIszSoSQPrlQUsJiVP5nV5qd8d4ARbreQ6b5yyI9G9/iw/kPcl2vcitjw0oexy5ltUfx15pAU69dzZBbn6bv/Uzo+ktqtZuKuE1Ehyg==',key_name='tempest-TestGettingAddress-384614847',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-4nz1f07d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:06:56Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=9fc8d93d-978b-4964-b6ca-86b96580e92f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.445 248514 DEBUG nova.network.os_vif_util [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.447 248514 DEBUG nova.network.os_vif_util [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:8c:61,bridge_name='br-int',has_traffic_filtering=True,id=1e0dc858-5ac1-402f-aa44-2ae372f17d6f,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e0dc858-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.447 248514 DEBUG os_vif [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:8c:61,bridge_name='br-int',has_traffic_filtering=True,id=1e0dc858-5ac1-402f-aa44-2ae372f17d6f,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e0dc858-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.448 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.449 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.450 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.454 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.455 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e0dc858-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.456 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1e0dc858-5a, col_values=(('external_ids', {'iface-id': '1e0dc858-5ac1-402f-aa44-2ae372f17d6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:8c:61', 'vm-uuid': '9fc8d93d-978b-4964-b6ca-86b96580e92f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:07:02 np0005558241 NetworkManager[50376]: <info>  [1765616822.4599] manager: (tap1e0dc858-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/590)
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.458 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.464 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.469 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.470 248514 INFO os_vif [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:8c:61,bridge_name='br-int',has_traffic_filtering=True,id=1e0dc858-5ac1-402f-aa44-2ae372f17d6f,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e0dc858-5a')#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.554 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.555 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.555 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:da:8c:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.556 248514 INFO nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Using config drive#033[00m
Dec 13 04:07:02 np0005558241 nova_compute[248510]: 2025-12-13 09:07:02.589 248514 DEBUG nova.storage.rbd_utils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:07:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3178: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:07:03 np0005558241 nova_compute[248510]: 2025-12-13 09:07:03.203 248514 INFO nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Creating config drive at /var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f/disk.config#033[00m
Dec 13 04:07:03 np0005558241 nova_compute[248510]: 2025-12-13 09:07:03.213 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxytrjt5m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:07:03 np0005558241 nova_compute[248510]: 2025-12-13 09:07:03.394 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxytrjt5m" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:07:03 np0005558241 nova_compute[248510]: 2025-12-13 09:07:03.440 248514 DEBUG nova.storage.rbd_utils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:07:03 np0005558241 nova_compute[248510]: 2025-12-13 09:07:03.446 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f/disk.config 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:07:03 np0005558241 nova_compute[248510]: 2025-12-13 09:07:03.494 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:03 np0005558241 nova_compute[248510]: 2025-12-13 09:07:03.648 248514 DEBUG oslo_concurrency.processutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f/disk.config 9fc8d93d-978b-4964-b6ca-86b96580e92f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:07:03 np0005558241 nova_compute[248510]: 2025-12-13 09:07:03.649 248514 INFO nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Deleting local config drive /var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f/disk.config because it was imported into RBD.#033[00m
Dec 13 04:07:03 np0005558241 kernel: tap1e0dc858-5a: entered promiscuous mode
Dec 13 04:07:03 np0005558241 NetworkManager[50376]: <info>  [1765616823.7324] manager: (tap1e0dc858-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/591)
Dec 13 04:07:03 np0005558241 ovn_controller[148476]: 2025-12-13T09:07:03Z|01427|binding|INFO|Claiming lport 1e0dc858-5ac1-402f-aa44-2ae372f17d6f for this chassis.
Dec 13 04:07:03 np0005558241 ovn_controller[148476]: 2025-12-13T09:07:03Z|01428|binding|INFO|1e0dc858-5ac1-402f-aa44-2ae372f17d6f: Claiming fa:16:3e:da:8c:61 10.100.0.4 2001:db8::f816:3eff:feda:8c61
Dec 13 04:07:03 np0005558241 nova_compute[248510]: 2025-12-13 09:07:03.733 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.744 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:8c:61 10.100.0.4 2001:db8::f816:3eff:feda:8c61'], port_security=['fa:16:3e:da:8c:61 10.100.0.4 2001:db8::f816:3eff:feda:8c61'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:feda:8c61/64', 'neutron:device_id': '9fc8d93d-978b-4964-b6ca-86b96580e92f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4134c529-684a-4aee-a450-f026f71bff55', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f3b9c4f-25eb-4912-b05c-6ea015f84c28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e5b5848-41ef-422d-b206-10cd26f67092, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1e0dc858-5ac1-402f-aa44-2ae372f17d6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:07:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.747 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1e0dc858-5ac1-402f-aa44-2ae372f17d6f in datapath 4134c529-684a-4aee-a450-f026f71bff55 bound to our chassis#033[00m
Dec 13 04:07:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.749 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4134c529-684a-4aee-a450-f026f71bff55#033[00m
Dec 13 04:07:03 np0005558241 ovn_controller[148476]: 2025-12-13T09:07:03Z|01429|binding|INFO|Setting lport 1e0dc858-5ac1-402f-aa44-2ae372f17d6f up in Southbound
Dec 13 04:07:03 np0005558241 ovn_controller[148476]: 2025-12-13T09:07:03Z|01430|binding|INFO|Setting lport 1e0dc858-5ac1-402f-aa44-2ae372f17d6f ovn-installed in OVS
Dec 13 04:07:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.770 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fa043859-8e53-47a2-9291-08c9b9d1ebee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:03 np0005558241 nova_compute[248510]: 2025-12-13 09:07:03.771 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:03 np0005558241 nova_compute[248510]: 2025-12-13 09:07:03.778 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:03 np0005558241 systemd-machined[210538]: New machine qemu-169-instance-0000008a.
Dec 13 04:07:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.805 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[38d1d25a-58bc-4a57-940f-7d43fe6765f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:03 np0005558241 systemd[1]: Started Virtual Machine qemu-169-instance-0000008a.
Dec 13 04:07:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.809 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[661e2b41-de22-41c1-b9ad-c91dd9fd23d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:03 np0005558241 systemd-udevd[388313]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:07:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.842 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f96c1f-7fd4-4df7-96c8-6736c3aff105]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:03 np0005558241 NetworkManager[50376]: <info>  [1765616823.8467] device (tap1e0dc858-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:07:03 np0005558241 NetworkManager[50376]: <info>  [1765616823.8486] device (tap1e0dc858-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:07:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.875 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[da233a2c-616f-4873-a1a3-7933d1d4bc49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4134c529-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:27:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 6, 'rx_bytes': 1586, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 6, 'rx_bytes': 1586, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 936709, 'reachable_time': 43689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388319, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.905 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f39ee43b-25f3-4af2-846f-54f8690ae583]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4134c529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 936725, 'tstamp': 936725}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388323, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4134c529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 936729, 'tstamp': 936729}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388323, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.907 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4134c529-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:07:03 np0005558241 nova_compute[248510]: 2025-12-13 09:07:03.909 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.910 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4134c529-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:07:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.911 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:07:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.911 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4134c529-60, col_values=(('external_ids', {'iface-id': '966dba06-5969-4f23-922d-d3fb06b4a741'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:07:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:03.911 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.174 248514 DEBUG nova.compute.manager [req-dc07fd6f-fca9-4225-86cb-d0f3e559cdfb req-34ef2e50-3f9f-433a-8ac5-cdf7382e68d9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received event network-vif-plugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.174 248514 DEBUG oslo_concurrency.lockutils [req-dc07fd6f-fca9-4225-86cb-d0f3e559cdfb req-34ef2e50-3f9f-433a-8ac5-cdf7382e68d9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.174 248514 DEBUG oslo_concurrency.lockutils [req-dc07fd6f-fca9-4225-86cb-d0f3e559cdfb req-34ef2e50-3f9f-433a-8ac5-cdf7382e68d9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.175 248514 DEBUG oslo_concurrency.lockutils [req-dc07fd6f-fca9-4225-86cb-d0f3e559cdfb req-34ef2e50-3f9f-433a-8ac5-cdf7382e68d9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.175 248514 DEBUG nova.compute.manager [req-dc07fd6f-fca9-4225-86cb-d0f3e559cdfb req-34ef2e50-3f9f-433a-8ac5-cdf7382e68d9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Processing event network-vif-plugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.255 248514 DEBUG nova.network.neutron [req-3a0e3ab1-1d90-44c3-87c3-650d4637931c req-2f8a59c8-c191-400c-994a-2116298835fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Updated VIF entry in instance network info cache for port 1e0dc858-5ac1-402f-aa44-2ae372f17d6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.256 248514 DEBUG nova.network.neutron [req-3a0e3ab1-1d90-44c3-87c3-650d4637931c req-2f8a59c8-c191-400c-994a-2116298835fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Updating instance_info_cache with network_info: [{"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.276 248514 DEBUG oslo_concurrency.lockutils [req-3a0e3ab1-1d90-44c3-87c3-650d4637931c req-2f8a59c8-c191-400c-994a-2116298835fb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.371 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616824.3709996, 9fc8d93d-978b-4964-b6ca-86b96580e92f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.372 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] VM Started (Lifecycle Event)#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.376 248514 DEBUG nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.381 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.387 248514 INFO nova.virt.libvirt.driver [-] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Instance spawned successfully.#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.388 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.395 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.400 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.417 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.418 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.419 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.420 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.421 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.422 248514 DEBUG nova.virt.libvirt.driver [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.428 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.429 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616824.3712752, 9fc8d93d-978b-4964-b6ca-86b96580e92f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.430 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.459 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.463 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616824.3798156, 9fc8d93d-978b-4964-b6ca-86b96580e92f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.464 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.485 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.495 248514 INFO nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Took 8.23 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.495 248514 DEBUG nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.496 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.533 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.584 248514 INFO nova.compute.manager [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Took 9.39 seconds to build instance.#033[00m
Dec 13 04:07:04 np0005558241 nova_compute[248510]: 2025-12-13 09:07:04.604 248514 DEBUG oslo_concurrency.lockutils [None req-10d6c591-8dd6-44dd-91f7-fd488144913e 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:07:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3179: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 13 04:07:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:07:06 np0005558241 nova_compute[248510]: 2025-12-13 09:07:06.281 248514 DEBUG nova.compute.manager [req-217868cd-7125-42c1-aadf-9fce1ab91a93 req-e72f9509-51c8-44e1-9707-4b8536edf298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received event network-vif-plugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:07:06 np0005558241 nova_compute[248510]: 2025-12-13 09:07:06.282 248514 DEBUG oslo_concurrency.lockutils [req-217868cd-7125-42c1-aadf-9fce1ab91a93 req-e72f9509-51c8-44e1-9707-4b8536edf298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:07:06 np0005558241 nova_compute[248510]: 2025-12-13 09:07:06.282 248514 DEBUG oslo_concurrency.lockutils [req-217868cd-7125-42c1-aadf-9fce1ab91a93 req-e72f9509-51c8-44e1-9707-4b8536edf298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:07:06 np0005558241 nova_compute[248510]: 2025-12-13 09:07:06.283 248514 DEBUG oslo_concurrency.lockutils [req-217868cd-7125-42c1-aadf-9fce1ab91a93 req-e72f9509-51c8-44e1-9707-4b8536edf298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:07:06 np0005558241 nova_compute[248510]: 2025-12-13 09:07:06.283 248514 DEBUG nova.compute.manager [req-217868cd-7125-42c1-aadf-9fce1ab91a93 req-e72f9509-51c8-44e1-9707-4b8536edf298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] No waiting events found dispatching network-vif-plugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:07:06 np0005558241 nova_compute[248510]: 2025-12-13 09:07:06.284 248514 WARNING nova.compute.manager [req-217868cd-7125-42c1-aadf-9fce1ab91a93 req-e72f9509-51c8-44e1-9707-4b8536edf298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received unexpected event network-vif-plugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f for instance with vm_state active and task_state None.#033[00m
Dec 13 04:07:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3180: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec 13 04:07:07 np0005558241 nova_compute[248510]: 2025-12-13 09:07:07.460 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:08 np0005558241 nova_compute[248510]: 2025-12-13 09:07:08.403 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3181: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 595 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Dec 13 04:07:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:07:09
Dec 13 04:07:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:07:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:07:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'backups', 'images', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control']
Dec 13 04:07:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:07:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:07:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:07:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:07:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:07:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:07:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:07:10 np0005558241 nova_compute[248510]: 2025-12-13 09:07:10.253 248514 DEBUG nova.compute.manager [req-b3d854c3-462b-4514-8f05-7f61ef4d8aa4 req-dc9b0df6-1d13-4cb1-a3d4-c3b8f2cb3e67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received event network-changed-1e0dc858-5ac1-402f-aa44-2ae372f17d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:07:10 np0005558241 nova_compute[248510]: 2025-12-13 09:07:10.254 248514 DEBUG nova.compute.manager [req-b3d854c3-462b-4514-8f05-7f61ef4d8aa4 req-dc9b0df6-1d13-4cb1-a3d4-c3b8f2cb3e67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Refreshing instance network info cache due to event network-changed-1e0dc858-5ac1-402f-aa44-2ae372f17d6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:07:10 np0005558241 nova_compute[248510]: 2025-12-13 09:07:10.254 248514 DEBUG oslo_concurrency.lockutils [req-b3d854c3-462b-4514-8f05-7f61ef4d8aa4 req-dc9b0df6-1d13-4cb1-a3d4-c3b8f2cb3e67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:07:10 np0005558241 nova_compute[248510]: 2025-12-13 09:07:10.255 248514 DEBUG oslo_concurrency.lockutils [req-b3d854c3-462b-4514-8f05-7f61ef4d8aa4 req-dc9b0df6-1d13-4cb1-a3d4-c3b8f2cb3e67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:07:10 np0005558241 nova_compute[248510]: 2025-12-13 09:07:10.255 248514 DEBUG nova.network.neutron [req-b3d854c3-462b-4514-8f05-7f61ef4d8aa4 req-dc9b0df6-1d13-4cb1-a3d4-c3b8f2cb3e67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Refreshing network info cache for port 1e0dc858-5ac1-402f-aa44-2ae372f17d6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:07:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3182: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 89 op/s
Dec 13 04:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:07:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:07:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:07:11 np0005558241 nova_compute[248510]: 2025-12-13 09:07:11.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:07:12 np0005558241 nova_compute[248510]: 2025-12-13 09:07:12.000 248514 DEBUG nova.network.neutron [req-b3d854c3-462b-4514-8f05-7f61ef4d8aa4 req-dc9b0df6-1d13-4cb1-a3d4-c3b8f2cb3e67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Updated VIF entry in instance network info cache for port 1e0dc858-5ac1-402f-aa44-2ae372f17d6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:07:12 np0005558241 nova_compute[248510]: 2025-12-13 09:07:12.001 248514 DEBUG nova.network.neutron [req-b3d854c3-462b-4514-8f05-7f61ef4d8aa4 req-dc9b0df6-1d13-4cb1-a3d4-c3b8f2cb3e67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Updating instance_info_cache with network_info: [{"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:07:12 np0005558241 nova_compute[248510]: 2025-12-13 09:07:12.309 248514 DEBUG oslo_concurrency.lockutils [req-b3d854c3-462b-4514-8f05-7f61ef4d8aa4 req-dc9b0df6-1d13-4cb1-a3d4-c3b8f2cb3e67 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:07:12 np0005558241 nova_compute[248510]: 2025-12-13 09:07:12.462 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3183: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:07:13 np0005558241 nova_compute[248510]: 2025-12-13 09:07:13.406 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3184: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 81 op/s
Dec 13 04:07:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:07:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1819600709' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:07:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:07:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1819600709' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:07:15 np0005558241 nova_compute[248510]: 2025-12-13 09:07:15.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:07:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:07:16 np0005558241 ovn_controller[148476]: 2025-12-13T09:07:16Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:8c:61 10.100.0.4
Dec 13 04:07:16 np0005558241 ovn_controller[148476]: 2025-12-13T09:07:16Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:8c:61 10.100.0.4
Dec 13 04:07:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3185: 321 pgs: 321 active+clean; 167 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.2 KiB/s wr, 77 op/s
Dec 13 04:07:17 np0005558241 podman[388370]: 2025-12-13 09:07:17.027356565 +0000 UTC m=+0.095511316 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 13 04:07:17 np0005558241 podman[388369]: 2025-12-13 09:07:17.030024484 +0000 UTC m=+0.109175599 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd)
Dec 13 04:07:17 np0005558241 podman[388368]: 2025-12-13 09:07:17.083437432 +0000 UTC m=+0.161315244 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:07:17 np0005558241 nova_compute[248510]: 2025-12-13 09:07:17.464 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:18 np0005558241 nova_compute[248510]: 2025-12-13 09:07:18.409 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3186: 321 pgs: 321 active+clean; 177 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 99 op/s
Dec 13 04:07:19 np0005558241 nova_compute[248510]: 2025-12-13 09:07:19.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:07:19 np0005558241 nova_compute[248510]: 2025-12-13 09:07:19.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:07:19 np0005558241 nova_compute[248510]: 2025-12-13 09:07:19.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:07:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3187: 321 pgs: 321 active+clean; 200 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Dec 13 04:07:20 np0005558241 nova_compute[248510]: 2025-12-13 09:07:20.987 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:07:20 np0005558241 nova_compute[248510]: 2025-12-13 09:07:20.988 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:07:20 np0005558241 nova_compute[248510]: 2025-12-13 09:07:20.988 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 04:07:20 np0005558241 nova_compute[248510]: 2025-12-13 09:07:20.988 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid e407f205-43a8-423e-a1cb-dc7f58ccced2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:07:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015304081295743975 of space, bias 1.0, pg target 0.45912243887231924 quantized to 32 (current 32)
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006698104578698684 of space, bias 1.0, pg target 0.2009431373609605 quantized to 32 (current 32)
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:07:22 np0005558241 nova_compute[248510]: 2025-12-13 09:07:22.466 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3188: 321 pgs: 321 active+clean; 200 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec 13 04:07:23 np0005558241 nova_compute[248510]: 2025-12-13 09:07:23.450 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:23 np0005558241 nova_compute[248510]: 2025-12-13 09:07:23.995 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updating instance_info_cache with network_info: [{"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:07:24 np0005558241 nova_compute[248510]: 2025-12-13 09:07:24.022 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:07:24 np0005558241 nova_compute[248510]: 2025-12-13 09:07:24.023 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 04:07:24 np0005558241 nova_compute[248510]: 2025-12-13 09:07:24.024 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:07:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3189: 321 pgs: 321 active+clean; 200 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 04:07:24 np0005558241 nova_compute[248510]: 2025-12-13 09:07:24.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:07:25 np0005558241 nova_compute[248510]: 2025-12-13 09:07:25.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:07:25 np0005558241 nova_compute[248510]: 2025-12-13 09:07:25.803 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:07:25 np0005558241 nova_compute[248510]: 2025-12-13 09:07:25.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:07:25 np0005558241 nova_compute[248510]: 2025-12-13 09:07:25.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:07:25 np0005558241 nova_compute[248510]: 2025-12-13 09:07:25.804 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:07:25 np0005558241 nova_compute[248510]: 2025-12-13 09:07:25.805 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:07:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:07:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:07:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1201304418' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:07:26 np0005558241 nova_compute[248510]: 2025-12-13 09:07:26.470 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:07:26 np0005558241 nova_compute[248510]: 2025-12-13 09:07:26.565 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:07:26 np0005558241 nova_compute[248510]: 2025-12-13 09:07:26.566 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:07:26 np0005558241 nova_compute[248510]: 2025-12-13 09:07:26.573 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:07:26 np0005558241 nova_compute[248510]: 2025-12-13 09:07:26.573 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:07:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3190: 321 pgs: 321 active+clean; 200 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 278 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Dec 13 04:07:26 np0005558241 nova_compute[248510]: 2025-12-13 09:07:26.795 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:07:26 np0005558241 nova_compute[248510]: 2025-12-13 09:07:26.798 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3075MB free_disk=59.89634881168604GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:07:26 np0005558241 nova_compute[248510]: 2025-12-13 09:07:26.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:07:26 np0005558241 nova_compute[248510]: 2025-12-13 09:07:26.799 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:07:26 np0005558241 nova_compute[248510]: 2025-12-13 09:07:26.884 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance e407f205-43a8-423e-a1cb-dc7f58ccced2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:07:26 np0005558241 nova_compute[248510]: 2025-12-13 09:07:26.884 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 9fc8d93d-978b-4964-b6ca-86b96580e92f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:07:26 np0005558241 nova_compute[248510]: 2025-12-13 09:07:26.884 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:07:26 np0005558241 nova_compute[248510]: 2025-12-13 09:07:26.885 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:07:26 np0005558241 nova_compute[248510]: 2025-12-13 09:07:26.902 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 13 04:07:26 np0005558241 nova_compute[248510]: 2025-12-13 09:07:26.923 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 13 04:07:26 np0005558241 nova_compute[248510]: 2025-12-13 09:07:26.923 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 04:07:26 np0005558241 nova_compute[248510]: 2025-12-13 09:07:26.938 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 13 04:07:26 np0005558241 nova_compute[248510]: 2025-12-13 09:07:26.962 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 13 04:07:27 np0005558241 nova_compute[248510]: 2025-12-13 09:07:27.054 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:07:27 np0005558241 nova_compute[248510]: 2025-12-13 09:07:27.468 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.522894) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616847522937, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1457, "num_deletes": 251, "total_data_size": 2370995, "memory_usage": 2406000, "flush_reason": "Manual Compaction"}
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616847537246, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 2306066, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62194, "largest_seqno": 63650, "table_properties": {"data_size": 2299336, "index_size": 3864, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14237, "raw_average_key_size": 19, "raw_value_size": 2285799, "raw_average_value_size": 3201, "num_data_blocks": 172, "num_entries": 714, "num_filter_entries": 714, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765616701, "oldest_key_time": 1765616701, "file_creation_time": 1765616847, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 14504 microseconds, and 6816 cpu microseconds.
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.537392) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 2306066 bytes OK
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.537450) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.538899) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.538917) EVENT_LOG_v1 {"time_micros": 1765616847538912, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.538940) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 2364577, prev total WAL file size 2364577, number of live WAL files 2.
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.540004) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(2252KB)], [146(11MB)]
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616847540099, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 13941897, "oldest_snapshot_seqno": -1}
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8413 keys, 12098620 bytes, temperature: kUnknown
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616847634527, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 12098620, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12042400, "index_size": 34091, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21061, "raw_key_size": 220161, "raw_average_key_size": 26, "raw_value_size": 11892291, "raw_average_value_size": 1413, "num_data_blocks": 1326, "num_entries": 8413, "num_filter_entries": 8413, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765616847, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.634870) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 12098620 bytes
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.638348) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.4 rd, 127.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 11.1 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(11.3) write-amplify(5.2) OK, records in: 8927, records dropped: 514 output_compression: NoCompression
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.638411) EVENT_LOG_v1 {"time_micros": 1765616847638386, "job": 90, "event": "compaction_finished", "compaction_time_micros": 94581, "compaction_time_cpu_micros": 37108, "output_level": 6, "num_output_files": 1, "total_output_size": 12098620, "num_input_records": 8927, "num_output_records": 8413, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616847639502, "job": 90, "event": "table_file_deletion", "file_number": 148}
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616847643120, "job": 90, "event": "table_file_deletion", "file_number": 146}
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.539947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.643235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.643240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.643242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.643245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:27.643247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:07:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2763754987' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:07:27 np0005558241 nova_compute[248510]: 2025-12-13 09:07:27.737 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.683s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:07:27 np0005558241 nova_compute[248510]: 2025-12-13 09:07:27.746 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.037 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.066 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.067 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.116 248514 DEBUG nova.compute.manager [req-a8778dd6-8519-40fc-86a8-e2f7b4a89d78 req-44283d53-9a82-4cc1-9b31-b482cd566baa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received event network-changed-1e0dc858-5ac1-402f-aa44-2ae372f17d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.117 248514 DEBUG nova.compute.manager [req-a8778dd6-8519-40fc-86a8-e2f7b4a89d78 req-44283d53-9a82-4cc1-9b31-b482cd566baa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Refreshing instance network info cache due to event network-changed-1e0dc858-5ac1-402f-aa44-2ae372f17d6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.118 248514 DEBUG oslo_concurrency.lockutils [req-a8778dd6-8519-40fc-86a8-e2f7b4a89d78 req-44283d53-9a82-4cc1-9b31-b482cd566baa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.118 248514 DEBUG oslo_concurrency.lockutils [req-a8778dd6-8519-40fc-86a8-e2f7b4a89d78 req-44283d53-9a82-4cc1-9b31-b482cd566baa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.119 248514 DEBUG nova.network.neutron [req-a8778dd6-8519-40fc-86a8-e2f7b4a89d78 req-44283d53-9a82-4cc1-9b31-b482cd566baa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Refreshing network info cache for port 1e0dc858-5ac1-402f-aa44-2ae372f17d6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.245 248514 DEBUG oslo_concurrency.lockutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "9fc8d93d-978b-4964-b6ca-86b96580e92f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.246 248514 DEBUG oslo_concurrency.lockutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.246 248514 DEBUG oslo_concurrency.lockutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.247 248514 DEBUG oslo_concurrency.lockutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.247 248514 DEBUG oslo_concurrency.lockutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.250 248514 INFO nova.compute.manager [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Terminating instance#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.253 248514 DEBUG nova.compute.manager [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:07:28 np0005558241 kernel: tap1e0dc858-5a (unregistering): left promiscuous mode
Dec 13 04:07:28 np0005558241 NetworkManager[50376]: <info>  [1765616848.3028] device (tap1e0dc858-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:07:28 np0005558241 ovn_controller[148476]: 2025-12-13T09:07:28Z|01431|binding|INFO|Releasing lport 1e0dc858-5ac1-402f-aa44-2ae372f17d6f from this chassis (sb_readonly=0)
Dec 13 04:07:28 np0005558241 ovn_controller[148476]: 2025-12-13T09:07:28Z|01432|binding|INFO|Setting lport 1e0dc858-5ac1-402f-aa44-2ae372f17d6f down in Southbound
Dec 13 04:07:28 np0005558241 ovn_controller[148476]: 2025-12-13T09:07:28Z|01433|binding|INFO|Removing iface tap1e0dc858-5a ovn-installed in OVS
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.315 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.324 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:8c:61 10.100.0.4 2001:db8::f816:3eff:feda:8c61'], port_security=['fa:16:3e:da:8c:61 10.100.0.4 2001:db8::f816:3eff:feda:8c61'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:feda:8c61/64', 'neutron:device_id': '9fc8d93d-978b-4964-b6ca-86b96580e92f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4134c529-684a-4aee-a450-f026f71bff55', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0f3b9c4f-25eb-4912-b05c-6ea015f84c28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e5b5848-41ef-422d-b206-10cd26f67092, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1e0dc858-5ac1-402f-aa44-2ae372f17d6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:07:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.326 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1e0dc858-5ac1-402f-aa44-2ae372f17d6f in datapath 4134c529-684a-4aee-a450-f026f71bff55 unbound from our chassis#033[00m
Dec 13 04:07:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.328 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4134c529-684a-4aee-a450-f026f71bff55#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.343 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.357 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3041bbc1-4d9a-4df4-811f-482af257c65d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:28 np0005558241 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Dec 13 04:07:28 np0005558241 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d0000008a.scope: Consumed 12.846s CPU time.
Dec 13 04:07:28 np0005558241 systemd-machined[210538]: Machine qemu-169-instance-0000008a terminated.
Dec 13 04:07:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.405 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3442a46a-23e3-4499-9b67-3fbef7643c34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.410 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2597d2ee-1db2-4298-b9d8-57f2548d2c55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.457 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb5b160-9a74-4740-a63a-0cc797923538]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.494 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.515 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.517 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7b7f1ead-070c-4038-ae4f-16ad172fe75a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4134c529-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:17:27:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 8, 'rx_bytes': 2640, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 8, 'rx_bytes': 2640, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 410], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 936709, 'reachable_time': 43689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388487, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.520 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.529 248514 INFO nova.virt.libvirt.driver [-] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Instance destroyed successfully.#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.529 248514 DEBUG nova.objects.instance [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 9fc8d93d-978b-4964-b6ca-86b96580e92f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:07:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.537 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6f08aa9d-f939-4f04-a26f-33a097937005]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4134c529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 936725, 'tstamp': 936725}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388494, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4134c529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 936729, 'tstamp': 936729}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388494, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.539 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4134c529-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.546 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4134c529-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:07:28 np0005558241 nova_compute[248510]: 2025-12-13 09:07:28.546 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.546 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:07:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.547 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4134c529-60, col_values=(('external_ids', {'iface-id': '966dba06-5969-4f23-922d-d3fb06b4a741'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:07:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:28.547 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:07:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3191: 321 pgs: 321 active+clean; 200 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 279 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Dec 13 04:07:28 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.000 248514 DEBUG nova.virt.libvirt.vif [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:06:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-487738396',display_name='tempest-TestGettingAddress-server-487738396',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-487738396',id=138,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMkENvafv8/WsEwcyvUB7dPhR5LIszSoSQPrlQUsJiVP5nV5qd8d4ARbreQ6b5yyI9G9/iw/kPcl2vcitjw0oexy5ltUfx15pAU69dzZBbn6bv/Uzo+ktqtZuKuE1Ehyg==',key_name='tempest-TestGettingAddress-384614847',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:07:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-4nz1f07d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:07:04Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=9fc8d93d-978b-4964-b6ca-86b96580e92f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.001 248514 DEBUG nova.network.os_vif_util [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.003 248514 DEBUG nova.network.os_vif_util [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:8c:61,bridge_name='br-int',has_traffic_filtering=True,id=1e0dc858-5ac1-402f-aa44-2ae372f17d6f,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e0dc858-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.004 248514 DEBUG os_vif [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:8c:61,bridge_name='br-int',has_traffic_filtering=True,id=1e0dc858-5ac1-402f-aa44-2ae372f17d6f,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e0dc858-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.009 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.009 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e0dc858-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.015 248514 DEBUG nova.compute.manager [req-6fc5847b-f9a0-4201-8210-a3d8737fa374 req-c5e5c027-3d2c-4e7b-b0c7-32d54f2c29ee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received event network-vif-unplugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.016 248514 DEBUG oslo_concurrency.lockutils [req-6fc5847b-f9a0-4201-8210-a3d8737fa374 req-c5e5c027-3d2c-4e7b-b0c7-32d54f2c29ee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.016 248514 DEBUG oslo_concurrency.lockutils [req-6fc5847b-f9a0-4201-8210-a3d8737fa374 req-c5e5c027-3d2c-4e7b-b0c7-32d54f2c29ee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.017 248514 DEBUG oslo_concurrency.lockutils [req-6fc5847b-f9a0-4201-8210-a3d8737fa374 req-c5e5c027-3d2c-4e7b-b0c7-32d54f2c29ee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.017 248514 DEBUG nova.compute.manager [req-6fc5847b-f9a0-4201-8210-a3d8737fa374 req-c5e5c027-3d2c-4e7b-b0c7-32d54f2c29ee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] No waiting events found dispatching network-vif-unplugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.017 248514 DEBUG nova.compute.manager [req-6fc5847b-f9a0-4201-8210-a3d8737fa374 req-c5e5c027-3d2c-4e7b-b0c7-32d54f2c29ee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received event network-vif-unplugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.019 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.021 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.024 248514 INFO os_vif [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:8c:61,bridge_name='br-int',has_traffic_filtering=True,id=1e0dc858-5ac1-402f-aa44-2ae372f17d6f,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e0dc858-5a')#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.364 248514 INFO nova.virt.libvirt.driver [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Deleting instance files /var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f_del#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.365 248514 INFO nova.virt.libvirt.driver [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Deletion of /var/lib/nova/instances/9fc8d93d-978b-4964-b6ca-86b96580e92f_del complete#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.424 248514 INFO nova.compute.manager [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Took 1.17 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.425 248514 DEBUG oslo.service.loopingcall [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.425 248514 DEBUG nova.compute.manager [-] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:07:29 np0005558241 nova_compute[248510]: 2025-12-13 09:07:29.425 248514 DEBUG nova.network.neutron [-] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:07:30 np0005558241 nova_compute[248510]: 2025-12-13 09:07:30.069 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:07:30 np0005558241 nova_compute[248510]: 2025-12-13 09:07:30.070 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:07:30 np0005558241 nova_compute[248510]: 2025-12-13 09:07:30.232 248514 DEBUG nova.network.neutron [req-a8778dd6-8519-40fc-86a8-e2f7b4a89d78 req-44283d53-9a82-4cc1-9b31-b482cd566baa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Updated VIF entry in instance network info cache for port 1e0dc858-5ac1-402f-aa44-2ae372f17d6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:07:30 np0005558241 nova_compute[248510]: 2025-12-13 09:07:30.233 248514 DEBUG nova.network.neutron [req-a8778dd6-8519-40fc-86a8-e2f7b4a89d78 req-44283d53-9a82-4cc1-9b31-b482cd566baa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Updating instance_info_cache with network_info: [{"id": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "address": "fa:16:3e:da:8c:61", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feda:8c61", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e0dc858-5a", "ovs_interfaceid": "1e0dc858-5ac1-402f-aa44-2ae372f17d6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:07:30 np0005558241 nova_compute[248510]: 2025-12-13 09:07:30.260 248514 DEBUG oslo_concurrency.lockutils [req-a8778dd6-8519-40fc-86a8-e2f7b4a89d78 req-44283d53-9a82-4cc1-9b31-b482cd566baa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-9fc8d93d-978b-4964-b6ca-86b96580e92f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:07:30 np0005558241 nova_compute[248510]: 2025-12-13 09:07:30.348 248514 DEBUG nova.network.neutron [-] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:07:30 np0005558241 nova_compute[248510]: 2025-12-13 09:07:30.366 248514 INFO nova.compute.manager [-] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Took 0.94 seconds to deallocate network for instance.#033[00m
Dec 13 04:07:30 np0005558241 nova_compute[248510]: 2025-12-13 09:07:30.431 248514 DEBUG nova.compute.manager [req-83d3e230-36be-447d-b144-ff07d5e400f0 req-aaabe7b0-9e75-4480-9c28-1b5a504e0018 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received event network-vif-deleted-1e0dc858-5ac1-402f-aa44-2ae372f17d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:07:30 np0005558241 nova_compute[248510]: 2025-12-13 09:07:30.434 248514 DEBUG oslo_concurrency.lockutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:07:30 np0005558241 nova_compute[248510]: 2025-12-13 09:07:30.435 248514 DEBUG oslo_concurrency.lockutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:07:30 np0005558241 nova_compute[248510]: 2025-12-13 09:07:30.503 248514 DEBUG oslo_concurrency.processutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:07:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3192: 321 pgs: 321 active+clean; 166 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 1.1 MiB/s wr, 44 op/s
Dec 13 04:07:30 np0005558241 nova_compute[248510]: 2025-12-13 09:07:30.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:07:31 np0005558241 nova_compute[248510]: 2025-12-13 09:07:31.122 248514 DEBUG nova.compute.manager [req-060cf8b9-9b20-4829-b1e0-aba2928634dd req-624fc934-ef6f-44a6-ae32-e3238f71d8f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received event network-vif-plugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:07:31 np0005558241 nova_compute[248510]: 2025-12-13 09:07:31.123 248514 DEBUG oslo_concurrency.lockutils [req-060cf8b9-9b20-4829-b1e0-aba2928634dd req-624fc934-ef6f-44a6-ae32-e3238f71d8f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:07:31 np0005558241 nova_compute[248510]: 2025-12-13 09:07:31.123 248514 DEBUG oslo_concurrency.lockutils [req-060cf8b9-9b20-4829-b1e0-aba2928634dd req-624fc934-ef6f-44a6-ae32-e3238f71d8f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:07:31 np0005558241 nova_compute[248510]: 2025-12-13 09:07:31.123 248514 DEBUG oslo_concurrency.lockutils [req-060cf8b9-9b20-4829-b1e0-aba2928634dd req-624fc934-ef6f-44a6-ae32-e3238f71d8f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:07:31 np0005558241 nova_compute[248510]: 2025-12-13 09:07:31.124 248514 DEBUG nova.compute.manager [req-060cf8b9-9b20-4829-b1e0-aba2928634dd req-624fc934-ef6f-44a6-ae32-e3238f71d8f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] No waiting events found dispatching network-vif-plugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:07:31 np0005558241 nova_compute[248510]: 2025-12-13 09:07:31.124 248514 WARNING nova.compute.manager [req-060cf8b9-9b20-4829-b1e0-aba2928634dd req-624fc934-ef6f-44a6-ae32-e3238f71d8f7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Received unexpected event network-vif-plugged-1e0dc858-5ac1-402f-aa44-2ae372f17d6f for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:07:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:07:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:07:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3796815517' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:07:31 np0005558241 nova_compute[248510]: 2025-12-13 09:07:31.248 248514 DEBUG oslo_concurrency.processutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.745s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:07:31 np0005558241 nova_compute[248510]: 2025-12-13 09:07:31.259 248514 DEBUG nova.compute.provider_tree [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:07:31 np0005558241 nova_compute[248510]: 2025-12-13 09:07:31.279 248514 DEBUG nova.scheduler.client.report [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:07:31 np0005558241 nova_compute[248510]: 2025-12-13 09:07:31.304 248514 DEBUG oslo_concurrency.lockutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:07:31 np0005558241 nova_compute[248510]: 2025-12-13 09:07:31.340 248514 INFO nova.scheduler.client.report [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 9fc8d93d-978b-4964-b6ca-86b96580e92f#033[00m
Dec 13 04:07:31 np0005558241 nova_compute[248510]: 2025-12-13 09:07:31.409 248514 DEBUG oslo_concurrency.lockutils [None req-c3721167-f21b-4f38-9c2f-e95653b7d345 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "9fc8d93d-978b-4964-b6ca-86b96580e92f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:07:31 np0005558241 nova_compute[248510]: 2025-12-13 09:07:31.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.597 248514 DEBUG nova.compute.manager [req-59b3646a-c723-48cc-b70b-948ff78b2bb1 req-30cf0ebd-2a31-4a59-ab19-8c98fa64ca1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received event network-changed-cccd2f42-71c6-4464-b04e-1c3e885a4378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.597 248514 DEBUG nova.compute.manager [req-59b3646a-c723-48cc-b70b-948ff78b2bb1 req-30cf0ebd-2a31-4a59-ab19-8c98fa64ca1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Refreshing instance network info cache due to event network-changed-cccd2f42-71c6-4464-b04e-1c3e885a4378. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.598 248514 DEBUG oslo_concurrency.lockutils [req-59b3646a-c723-48cc-b70b-948ff78b2bb1 req-30cf0ebd-2a31-4a59-ab19-8c98fa64ca1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.598 248514 DEBUG oslo_concurrency.lockutils [req-59b3646a-c723-48cc-b70b-948ff78b2bb1 req-30cf0ebd-2a31-4a59-ab19-8c98fa64ca1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.599 248514 DEBUG nova.network.neutron [req-59b3646a-c723-48cc-b70b-948ff78b2bb1 req-30cf0ebd-2a31-4a59-ab19-8c98fa64ca1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Refreshing network info cache for port cccd2f42-71c6-4464-b04e-1c3e885a4378 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:07:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3193: 321 pgs: 321 active+clean; 166 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 24 KiB/s wr, 12 op/s
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.676 248514 DEBUG oslo_concurrency.lockutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "e407f205-43a8-423e-a1cb-dc7f58ccced2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.677 248514 DEBUG oslo_concurrency.lockutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.679 248514 DEBUG oslo_concurrency.lockutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.680 248514 DEBUG oslo_concurrency.lockutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.680 248514 DEBUG oslo_concurrency.lockutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.682 248514 INFO nova.compute.manager [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Terminating instance#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.685 248514 DEBUG nova.compute.manager [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:07:32 np0005558241 kernel: tapcccd2f42-71 (unregistering): left promiscuous mode
Dec 13 04:07:32 np0005558241 NetworkManager[50376]: <info>  [1765616852.7512] device (tapcccd2f42-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:07:32 np0005558241 ovn_controller[148476]: 2025-12-13T09:07:32Z|01434|binding|INFO|Releasing lport cccd2f42-71c6-4464-b04e-1c3e885a4378 from this chassis (sb_readonly=0)
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.757 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:32 np0005558241 ovn_controller[148476]: 2025-12-13T09:07:32Z|01435|binding|INFO|Setting lport cccd2f42-71c6-4464-b04e-1c3e885a4378 down in Southbound
Dec 13 04:07:32 np0005558241 ovn_controller[148476]: 2025-12-13T09:07:32Z|01436|binding|INFO|Removing iface tapcccd2f42-71 ovn-installed in OVS
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.791 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:32 np0005558241 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000089.scope: Deactivated successfully.
Dec 13 04:07:32 np0005558241 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000089.scope: Consumed 15.478s CPU time.
Dec 13 04:07:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:32.843 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:df:a0 10.100.0.3 2001:db8::f816:3eff:fec6:dfa0'], port_security=['fa:16:3e:c6:df:a0 10.100.0.3 2001:db8::f816:3eff:fec6:dfa0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fec6:dfa0/64', 'neutron:device_id': 'e407f205-43a8-423e-a1cb-dc7f58ccced2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4134c529-684a-4aee-a450-f026f71bff55', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0f3b9c4f-25eb-4912-b05c-6ea015f84c28', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e5b5848-41ef-422d-b206-10cd26f67092, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=cccd2f42-71c6-4464-b04e-1c3e885a4378) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:07:32 np0005558241 systemd-machined[210538]: Machine qemu-168-instance-00000089 terminated.
Dec 13 04:07:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:32.846 158419 INFO neutron.agent.ovn.metadata.agent [-] Port cccd2f42-71c6-4464-b04e-1c3e885a4378 in datapath 4134c529-684a-4aee-a450-f026f71bff55 unbound from our chassis#033[00m
Dec 13 04:07:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:32.851 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4134c529-684a-4aee-a450-f026f71bff55, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:07:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:32.852 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff56cce-b35f-435a-a92c-5e0afc79e2e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:32.853 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4134c529-684a-4aee-a450-f026f71bff55 namespace which is not needed anymore#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.940 248514 INFO nova.virt.libvirt.driver [-] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Instance destroyed successfully.#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.941 248514 DEBUG nova.objects.instance [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid e407f205-43a8-423e-a1cb-dc7f58ccced2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.964 248514 DEBUG nova.virt.libvirt.vif [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:06:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-906384999',display_name='tempest-TestGettingAddress-server-906384999',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-906384999',id=137,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMkENvafv8/WsEwcyvUB7dPhR5LIszSoSQPrlQUsJiVP5nV5qd8d4ARbreQ6b5yyI9G9/iw/kPcl2vcitjw0oexy5ltUfx15pAU69dzZBbn6bv/Uzo+ktqtZuKuE1Ehyg==',key_name='tempest-TestGettingAddress-384614847',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:06:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-0ozf85u1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:06:30Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=e407f205-43a8-423e-a1cb-dc7f58ccced2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.965 248514 DEBUG nova.network.os_vif_util [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.967 248514 DEBUG nova.network.os_vif_util [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c6:df:a0,bridge_name='br-int',has_traffic_filtering=True,id=cccd2f42-71c6-4464-b04e-1c3e885a4378,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcccd2f42-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.968 248514 DEBUG os_vif [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:df:a0,bridge_name='br-int',has_traffic_filtering=True,id=cccd2f42-71c6-4464-b04e-1c3e885a4378,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcccd2f42-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.972 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.973 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcccd2f42-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.976 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.978 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:07:32 np0005558241 nova_compute[248510]: 2025-12-13 09:07:32.981 248514 INFO os_vif [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:df:a0,bridge_name='br-int',has_traffic_filtering=True,id=cccd2f42-71c6-4464-b04e-1c3e885a4378,network=Network(4134c529-684a-4aee-a450-f026f71bff55),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcccd2f42-71')#033[00m
Dec 13 04:07:33 np0005558241 neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55[387220]: [NOTICE]   (387224) : haproxy version is 2.8.14-c23fe91
Dec 13 04:07:33 np0005558241 neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55[387220]: [NOTICE]   (387224) : path to executable is /usr/sbin/haproxy
Dec 13 04:07:33 np0005558241 neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55[387220]: [WARNING]  (387224) : Exiting Master process...
Dec 13 04:07:33 np0005558241 neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55[387220]: [WARNING]  (387224) : Exiting Master process...
Dec 13 04:07:33 np0005558241 neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55[387220]: [ALERT]    (387224) : Current worker (387226) exited with code 143 (Terminated)
Dec 13 04:07:33 np0005558241 neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55[387220]: [WARNING]  (387224) : All workers exited. Exiting... (0)
Dec 13 04:07:33 np0005558241 systemd[1]: libpod-b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e.scope: Deactivated successfully.
Dec 13 04:07:33 np0005558241 podman[388577]: 2025-12-13 09:07:33.073466331 +0000 UTC m=+0.080048977 container died b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:07:33 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e-userdata-shm.mount: Deactivated successfully.
Dec 13 04:07:33 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a6629d0d2946616c16ab6364a7afe8e36321cd71c229fca3a1701a5a9a8e1eeb-merged.mount: Deactivated successfully.
Dec 13 04:07:33 np0005558241 podman[388577]: 2025-12-13 09:07:33.140522362 +0000 UTC m=+0.147105038 container cleanup b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 04:07:33 np0005558241 systemd[1]: libpod-conmon-b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e.scope: Deactivated successfully.
Dec 13 04:07:33 np0005558241 podman[388628]: 2025-12-13 09:07:33.230658609 +0000 UTC m=+0.053073601 container remove b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:07:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:33.241 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[afc661d5-19eb-4dea-88f9-9189ebdfa26f]: (4, ('Sat Dec 13 09:07:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55 (b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e)\nb0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e\nSat Dec 13 09:07:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4134c529-684a-4aee-a450-f026f71bff55 (b0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e)\nb0508db188518d4c986326dee928997ecc48230dfabddc66c741394740ac614e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:33.243 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a9080573-07cd-42ce-946c-f1e576798697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:33.245 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4134c529-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:07:33 np0005558241 nova_compute[248510]: 2025-12-13 09:07:33.247 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:33 np0005558241 kernel: tap4134c529-60: left promiscuous mode
Dec 13 04:07:33 np0005558241 nova_compute[248510]: 2025-12-13 09:07:33.275 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:33.278 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[826255ba-3a07-41ba-b88b-dc368ea3ba4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:33.295 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6df9ad91-3b8f-4e92-bd07-525f3312a539]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:33 np0005558241 nova_compute[248510]: 2025-12-13 09:07:33.297 248514 INFO nova.virt.libvirt.driver [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Deleting instance files /var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2_del#033[00m
Dec 13 04:07:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:33.297 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[899e4006-8dd9-4088-b630-9ced67ebf61b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:33 np0005558241 nova_compute[248510]: 2025-12-13 09:07:33.297 248514 INFO nova.virt.libvirt.driver [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Deletion of /var/lib/nova/instances/e407f205-43a8-423e-a1cb-dc7f58ccced2_del complete#033[00m
Dec 13 04:07:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:33.316 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b26158-7f2f-4d85-8744-5b7aee6e3513]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 936700, 'reachable_time': 28385, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388643, 'error': None, 'target': 'ovnmeta-4134c529-684a-4aee-a450-f026f71bff55', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:33.321 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4134c529-684a-4aee-a450-f026f71bff55 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:07:33 np0005558241 systemd[1]: run-netns-ovnmeta\x2d4134c529\x2d684a\x2d4aee\x2da450\x2df026f71bff55.mount: Deactivated successfully.
Dec 13 04:07:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:33.322 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[d755843d-05b3-4856-9f38-99847323e4a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:07:33 np0005558241 nova_compute[248510]: 2025-12-13 09:07:33.472 248514 INFO nova.compute.manager [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:07:33 np0005558241 nova_compute[248510]: 2025-12-13 09:07:33.473 248514 DEBUG oslo.service.loopingcall [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:07:33 np0005558241 nova_compute[248510]: 2025-12-13 09:07:33.473 248514 DEBUG nova.compute.manager [-] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:07:33 np0005558241 nova_compute[248510]: 2025-12-13 09:07:33.473 248514 DEBUG nova.network.neutron [-] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:07:33 np0005558241 nova_compute[248510]: 2025-12-13 09:07:33.496 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3194: 321 pgs: 321 active+clean; 69 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 28 KiB/s wr, 37 op/s
Dec 13 04:07:34 np0005558241 nova_compute[248510]: 2025-12-13 09:07:34.834 248514 DEBUG nova.compute.manager [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received event network-vif-unplugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:07:34 np0005558241 nova_compute[248510]: 2025-12-13 09:07:34.835 248514 DEBUG oslo_concurrency.lockutils [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:07:34 np0005558241 nova_compute[248510]: 2025-12-13 09:07:34.835 248514 DEBUG oslo_concurrency.lockutils [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:07:34 np0005558241 nova_compute[248510]: 2025-12-13 09:07:34.835 248514 DEBUG oslo_concurrency.lockutils [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:07:34 np0005558241 nova_compute[248510]: 2025-12-13 09:07:34.836 248514 DEBUG nova.compute.manager [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] No waiting events found dispatching network-vif-unplugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:07:34 np0005558241 nova_compute[248510]: 2025-12-13 09:07:34.836 248514 DEBUG nova.compute.manager [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received event network-vif-unplugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:07:34 np0005558241 nova_compute[248510]: 2025-12-13 09:07:34.836 248514 DEBUG nova.compute.manager [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received event network-vif-plugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:07:34 np0005558241 nova_compute[248510]: 2025-12-13 09:07:34.837 248514 DEBUG oslo_concurrency.lockutils [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:07:34 np0005558241 nova_compute[248510]: 2025-12-13 09:07:34.837 248514 DEBUG oslo_concurrency.lockutils [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:07:34 np0005558241 nova_compute[248510]: 2025-12-13 09:07:34.838 248514 DEBUG oslo_concurrency.lockutils [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:07:34 np0005558241 nova_compute[248510]: 2025-12-13 09:07:34.838 248514 DEBUG nova.compute.manager [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] No waiting events found dispatching network-vif-plugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:07:34 np0005558241 nova_compute[248510]: 2025-12-13 09:07:34.838 248514 WARNING nova.compute.manager [req-33ef95c2-37c0-4d5a-a8f0-8b06913639d8 req-063ccdce-d8b6-4ea7-9e2e-615bcab7cc72 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received unexpected event network-vif-plugged-cccd2f42-71c6-4464-b04e-1c3e885a4378 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 04:07:35 np0005558241 nova_compute[248510]: 2025-12-13 09:07:35.108 248514 DEBUG nova.network.neutron [-] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:07:35 np0005558241 nova_compute[248510]: 2025-12-13 09:07:35.134 248514 INFO nova.compute.manager [-] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Took 1.66 seconds to deallocate network for instance.#033[00m
Dec 13 04:07:35 np0005558241 nova_compute[248510]: 2025-12-13 09:07:35.240 248514 DEBUG oslo_concurrency.lockutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:07:35 np0005558241 nova_compute[248510]: 2025-12-13 09:07:35.240 248514 DEBUG oslo_concurrency.lockutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:07:35 np0005558241 nova_compute[248510]: 2025-12-13 09:07:35.287 248514 DEBUG oslo_concurrency.processutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:07:35 np0005558241 nova_compute[248510]: 2025-12-13 09:07:35.378 248514 DEBUG nova.compute.manager [req-98b56e73-beb3-4a37-9ccf-e91fe2bb215a req-ce866b64-0eee-4411-9314-f2f57f46165f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Received event network-vif-deleted-cccd2f42-71c6-4464-b04e-1c3e885a4378 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:07:35 np0005558241 nova_compute[248510]: 2025-12-13 09:07:35.688 248514 DEBUG nova.network.neutron [req-59b3646a-c723-48cc-b70b-948ff78b2bb1 req-30cf0ebd-2a31-4a59-ab19-8c98fa64ca1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updated VIF entry in instance network info cache for port cccd2f42-71c6-4464-b04e-1c3e885a4378. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:07:35 np0005558241 nova_compute[248510]: 2025-12-13 09:07:35.689 248514 DEBUG nova.network.neutron [req-59b3646a-c723-48cc-b70b-948ff78b2bb1 req-30cf0ebd-2a31-4a59-ab19-8c98fa64ca1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Updating instance_info_cache with network_info: [{"id": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "address": "fa:16:3e:c6:df:a0", "network": {"id": "4134c529-684a-4aee-a450-f026f71bff55", "bridge": "br-int", "label": "tempest-network-smoke--1002600570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec6:dfa0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcccd2f42-71", "ovs_interfaceid": "cccd2f42-71c6-4464-b04e-1c3e885a4378", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:07:35 np0005558241 nova_compute[248510]: 2025-12-13 09:07:35.711 248514 DEBUG oslo_concurrency.lockutils [req-59b3646a-c723-48cc-b70b-948ff78b2bb1 req-30cf0ebd-2a31-4a59-ab19-8c98fa64ca1a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-e407f205-43a8-423e-a1cb-dc7f58ccced2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:07:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:07:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2344881787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:07:35 np0005558241 nova_compute[248510]: 2025-12-13 09:07:35.902 248514 DEBUG oslo_concurrency.processutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:07:35 np0005558241 nova_compute[248510]: 2025-12-13 09:07:35.908 248514 DEBUG nova.compute.provider_tree [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:07:35 np0005558241 nova_compute[248510]: 2025-12-13 09:07:35.925 248514 DEBUG nova.scheduler.client.report [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:07:35 np0005558241 nova_compute[248510]: 2025-12-13 09:07:35.947 248514 DEBUG oslo_concurrency.lockutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:07:35 np0005558241 nova_compute[248510]: 2025-12-13 09:07:35.974 248514 INFO nova.scheduler.client.report [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance e407f205-43a8-423e-a1cb-dc7f58ccced2#033[00m
Dec 13 04:07:36 np0005558241 nova_compute[248510]: 2025-12-13 09:07:36.058 248514 DEBUG oslo_concurrency.lockutils [None req-b579c3d8-3eb0-4862-856c-9a97a8f2f6ea 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "e407f205-43a8-423e-a1cb-dc7f58ccced2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.141519) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616856141559, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 332, "num_deletes": 250, "total_data_size": 137266, "memory_usage": 143264, "flush_reason": "Manual Compaction"}
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616856144845, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 134585, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63651, "largest_seqno": 63982, "table_properties": {"data_size": 132513, "index_size": 235, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5797, "raw_average_key_size": 20, "raw_value_size": 128431, "raw_average_value_size": 447, "num_data_blocks": 11, "num_entries": 287, "num_filter_entries": 287, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765616848, "oldest_key_time": 1765616848, "file_creation_time": 1765616856, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 3376 microseconds, and 1593 cpu microseconds.
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.144890) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 134585 bytes OK
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.144915) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.146362) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.146380) EVENT_LOG_v1 {"time_micros": 1765616856146374, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.146402) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 134980, prev total WAL file size 134980, number of live WAL files 2.
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.146767) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353131' seq:72057594037927935, type:22 .. '6D6772737461740032373632' seq:0, type:0; will stop at (end)
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(131KB)], [149(11MB)]
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616856146810, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 12233205, "oldest_snapshot_seqno": -1}
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8193 keys, 8785127 bytes, temperature: kUnknown
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616856209168, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 8785127, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8735161, "index_size": 28349, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20549, "raw_key_size": 215773, "raw_average_key_size": 26, "raw_value_size": 8593707, "raw_average_value_size": 1048, "num_data_blocks": 1086, "num_entries": 8193, "num_filter_entries": 8193, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765616856, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.209557) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 8785127 bytes
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.211017) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.7 rd, 140.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.5 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(156.2) write-amplify(65.3) OK, records in: 8700, records dropped: 507 output_compression: NoCompression
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.211037) EVENT_LOG_v1 {"time_micros": 1765616856211027, "job": 92, "event": "compaction_finished", "compaction_time_micros": 62495, "compaction_time_cpu_micros": 31445, "output_level": 6, "num_output_files": 1, "total_output_size": 8785127, "num_input_records": 8700, "num_output_records": 8193, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616856211230, "job": 92, "event": "table_file_deletion", "file_number": 151}
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765616856213818, "job": 92, "event": "table_file_deletion", "file_number": 149}
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.146709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.213934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.213944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.213947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.213949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:07:36 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:07:36.213952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:07:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3195: 321 pgs: 321 active+clean; 69 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 17 KiB/s wr, 37 op/s
Dec 13 04:07:37 np0005558241 nova_compute[248510]: 2025-12-13 09:07:37.977 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:38 np0005558241 nova_compute[248510]: 2025-12-13 09:07:38.498 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3196: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 18 KiB/s wr, 58 op/s
Dec 13 04:07:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:07:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:07:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:07:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:07:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:07:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:07:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3197: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 17 KiB/s wr, 57 op/s
Dec 13 04:07:41 np0005558241 nova_compute[248510]: 2025-12-13 09:07:41.106 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:07:41 np0005558241 nova_compute[248510]: 2025-12-13 09:07:41.178 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3198: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.7 KiB/s wr, 45 op/s
Dec 13 04:07:42 np0005558241 nova_compute[248510]: 2025-12-13 09:07:42.979 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:43 np0005558241 nova_compute[248510]: 2025-12-13 09:07:43.527 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616848.5262506, 9fc8d93d-978b-4964-b6ca-86b96580e92f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:07:43 np0005558241 nova_compute[248510]: 2025-12-13 09:07:43.528 248514 INFO nova.compute.manager [-] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:07:43 np0005558241 nova_compute[248510]: 2025-12-13 09:07:43.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:43 np0005558241 nova_compute[248510]: 2025-12-13 09:07:43.557 248514 DEBUG nova.compute.manager [None req-4560b96e-d278-4200-85af-097f7dcaaa57 - - - - - -] [instance: 9fc8d93d-978b-4964-b6ca-86b96580e92f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:07:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3199: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.7 KiB/s wr, 45 op/s
Dec 13 04:07:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:07:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3200: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 852 B/s wr, 20 op/s
Dec 13 04:07:47 np0005558241 nova_compute[248510]: 2025-12-13 09:07:47.936 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616852.9349854, e407f205-43a8-423e-a1cb-dc7f58ccced2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:07:47 np0005558241 nova_compute[248510]: 2025-12-13 09:07:47.937 248514 INFO nova.compute.manager [-] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:07:47 np0005558241 nova_compute[248510]: 2025-12-13 09:07:47.968 248514 DEBUG nova.compute.manager [None req-d6406821-3854-4722-b152-e94a830fef12 - - - - - -] [instance: e407f205-43a8-423e-a1cb-dc7f58ccced2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:07:47 np0005558241 nova_compute[248510]: 2025-12-13 09:07:47.981 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:47 np0005558241 podman[388669]: 2025-12-13 09:07:47.986865853 +0000 UTC m=+0.064061454 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:07:48 np0005558241 podman[388668]: 2025-12-13 09:07:48.007917327 +0000 UTC m=+0.086356650 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:07:48 np0005558241 podman[388667]: 2025-12-13 09:07:48.022449782 +0000 UTC m=+0.109722433 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:07:48 np0005558241 nova_compute[248510]: 2025-12-13 09:07:48.543 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3201: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 852 B/s wr, 20 op/s
Dec 13 04:07:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3202: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:07:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:07:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 04:07:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 04:07:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:07:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:07:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:07:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:07:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:07:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:07:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:07:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:07:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:07:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:07:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:07:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:07:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 04:07:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:07:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:07:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:07:51 np0005558241 podman[388874]: 2025-12-13 09:07:51.72465311 +0000 UTC m=+0.073462407 container create 298e8a877335ab00e29a5a448c2c492c342a1c0ecade18ca4bd6990b61750f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 04:07:51 np0005558241 systemd[1]: Started libpod-conmon-298e8a877335ab00e29a5a448c2c492c342a1c0ecade18ca4bd6990b61750f9b.scope.
Dec 13 04:07:51 np0005558241 podman[388874]: 2025-12-13 09:07:51.690420056 +0000 UTC m=+0.039229403 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:07:51 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:07:51 np0005558241 podman[388874]: 2025-12-13 09:07:51.843024885 +0000 UTC m=+0.191834212 container init 298e8a877335ab00e29a5a448c2c492c342a1c0ecade18ca4bd6990b61750f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 04:07:51 np0005558241 podman[388874]: 2025-12-13 09:07:51.855468357 +0000 UTC m=+0.204277604 container start 298e8a877335ab00e29a5a448c2c492c342a1c0ecade18ca4bd6990b61750f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shaw, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 04:07:51 np0005558241 podman[388874]: 2025-12-13 09:07:51.85946406 +0000 UTC m=+0.208273377 container attach 298e8a877335ab00e29a5a448c2c492c342a1c0ecade18ca4bd6990b61750f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shaw, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 04:07:51 np0005558241 elated_shaw[388891]: 167 167
Dec 13 04:07:51 np0005558241 systemd[1]: libpod-298e8a877335ab00e29a5a448c2c492c342a1c0ecade18ca4bd6990b61750f9b.scope: Deactivated successfully.
Dec 13 04:07:51 np0005558241 podman[388874]: 2025-12-13 09:07:51.865394423 +0000 UTC m=+0.214203670 container died 298e8a877335ab00e29a5a448c2c492c342a1c0ecade18ca4bd6990b61750f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shaw, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 04:07:51 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f3b4300bd6ecbd447897c20e829212ba39085bc670b5c11204b816f6745600c9-merged.mount: Deactivated successfully.
Dec 13 04:07:51 np0005558241 podman[388874]: 2025-12-13 09:07:51.916900992 +0000 UTC m=+0.265710229 container remove 298e8a877335ab00e29a5a448c2c492c342a1c0ecade18ca4bd6990b61750f9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shaw, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:07:51 np0005558241 systemd[1]: libpod-conmon-298e8a877335ab00e29a5a448c2c492c342a1c0ecade18ca4bd6990b61750f9b.scope: Deactivated successfully.
Dec 13 04:07:52 np0005558241 podman[388914]: 2025-12-13 09:07:52.152937795 +0000 UTC m=+0.074011862 container create 2a10ff67660e931c1ea35967852f52ebd6b1edfc62a9de097fd98ee037415cd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mirzakhani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 04:07:52 np0005558241 systemd[1]: Started libpod-conmon-2a10ff67660e931c1ea35967852f52ebd6b1edfc62a9de097fd98ee037415cd5.scope.
Dec 13 04:07:52 np0005558241 podman[388914]: 2025-12-13 09:07:52.120685692 +0000 UTC m=+0.041759809 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:07:52 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:07:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b36233687ba3bb1d77736cc88271be4c833182dca35a519751682e1d414bb87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b36233687ba3bb1d77736cc88271be4c833182dca35a519751682e1d414bb87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b36233687ba3bb1d77736cc88271be4c833182dca35a519751682e1d414bb87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b36233687ba3bb1d77736cc88271be4c833182dca35a519751682e1d414bb87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b36233687ba3bb1d77736cc88271be4c833182dca35a519751682e1d414bb87/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:52 np0005558241 podman[388914]: 2025-12-13 09:07:52.27557713 +0000 UTC m=+0.196651177 container init 2a10ff67660e931c1ea35967852f52ebd6b1edfc62a9de097fd98ee037415cd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 04:07:52 np0005558241 podman[388914]: 2025-12-13 09:07:52.289683694 +0000 UTC m=+0.210757771 container start 2a10ff67660e931c1ea35967852f52ebd6b1edfc62a9de097fd98ee037415cd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mirzakhani, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:07:52 np0005558241 podman[388914]: 2025-12-13 09:07:52.29532516 +0000 UTC m=+0.216399187 container attach 2a10ff67660e931c1ea35967852f52ebd6b1edfc62a9de097fd98ee037415cd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mirzakhani, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:07:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3203: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:07:52 np0005558241 gifted_mirzakhani[388930]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:07:52 np0005558241 gifted_mirzakhani[388930]: --> All data devices are unavailable
Dec 13 04:07:52 np0005558241 systemd[1]: libpod-2a10ff67660e931c1ea35967852f52ebd6b1edfc62a9de097fd98ee037415cd5.scope: Deactivated successfully.
Dec 13 04:07:52 np0005558241 podman[388950]: 2025-12-13 09:07:52.95239863 +0000 UTC m=+0.024600386 container died 2a10ff67660e931c1ea35967852f52ebd6b1edfc62a9de097fd98ee037415cd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 04:07:52 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9b36233687ba3bb1d77736cc88271be4c833182dca35a519751682e1d414bb87-merged.mount: Deactivated successfully.
Dec 13 04:07:52 np0005558241 nova_compute[248510]: 2025-12-13 09:07:52.983 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:53 np0005558241 podman[388950]: 2025-12-13 09:07:53.003042407 +0000 UTC m=+0.075244143 container remove 2a10ff67660e931c1ea35967852f52ebd6b1edfc62a9de097fd98ee037415cd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mirzakhani, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 13 04:07:53 np0005558241 systemd[1]: libpod-conmon-2a10ff67660e931c1ea35967852f52ebd6b1edfc62a9de097fd98ee037415cd5.scope: Deactivated successfully.
Dec 13 04:07:53 np0005558241 nova_compute[248510]: 2025-12-13 09:07:53.545 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:53 np0005558241 podman[389027]: 2025-12-13 09:07:53.56800759 +0000 UTC m=+0.081493215 container create 2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_banzai, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 04:07:53 np0005558241 podman[389027]: 2025-12-13 09:07:53.513863802 +0000 UTC m=+0.027349517 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:07:53 np0005558241 systemd[1]: Started libpod-conmon-2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197.scope.
Dec 13 04:07:53 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:07:53 np0005558241 podman[389027]: 2025-12-13 09:07:53.649498963 +0000 UTC m=+0.162984628 container init 2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_banzai, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 04:07:53 np0005558241 podman[389027]: 2025-12-13 09:07:53.657607172 +0000 UTC m=+0.171092797 container start 2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_banzai, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 04:07:53 np0005558241 podman[389027]: 2025-12-13 09:07:53.661409331 +0000 UTC m=+0.174894996 container attach 2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_banzai, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:07:53 np0005558241 stupefied_banzai[389041]: 167 167
Dec 13 04:07:53 np0005558241 systemd[1]: libpod-2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197.scope: Deactivated successfully.
Dec 13 04:07:53 np0005558241 conmon[389041]: conmon 2f52703391785696d364 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197.scope/container/memory.events
Dec 13 04:07:53 np0005558241 podman[389027]: 2025-12-13 09:07:53.664651284 +0000 UTC m=+0.178136959 container died 2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 04:07:53 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f359c19036d19c4ebe312800fb98e526208674dd3c91999878eba204c000b099-merged.mount: Deactivated successfully.
Dec 13 04:07:53 np0005558241 podman[389027]: 2025-12-13 09:07:53.703500527 +0000 UTC m=+0.216986152 container remove 2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 04:07:53 np0005558241 systemd[1]: libpod-conmon-2f52703391785696d364e4ccae74cb6340dc96bdb1fe5aa6d53ecf426d130197.scope: Deactivated successfully.
Dec 13 04:07:53 np0005558241 podman[389064]: 2025-12-13 09:07:53.903763906 +0000 UTC m=+0.047062556 container create b7050499d6bd42677ce925c4ef2ef2fd199e76f79d7dc94eee69c1d55be6009e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_galileo, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:07:53 np0005558241 systemd[1]: Started libpod-conmon-b7050499d6bd42677ce925c4ef2ef2fd199e76f79d7dc94eee69c1d55be6009e.scope.
Dec 13 04:07:53 np0005558241 podman[389064]: 2025-12-13 09:07:53.886695856 +0000 UTC m=+0.029994526 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:07:53 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:07:53 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/472f33ce33c21099c69f0d97f235b8d44c7e4d2080da443c940619ac0eec4250/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:53 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/472f33ce33c21099c69f0d97f235b8d44c7e4d2080da443c940619ac0eec4250/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:53 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/472f33ce33c21099c69f0d97f235b8d44c7e4d2080da443c940619ac0eec4250/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:53 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/472f33ce33c21099c69f0d97f235b8d44c7e4d2080da443c940619ac0eec4250/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:54 np0005558241 podman[389064]: 2025-12-13 09:07:54.00772301 +0000 UTC m=+0.151021740 container init b7050499d6bd42677ce925c4ef2ef2fd199e76f79d7dc94eee69c1d55be6009e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_galileo, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:07:54 np0005558241 podman[389064]: 2025-12-13 09:07:54.022714067 +0000 UTC m=+0.166012717 container start b7050499d6bd42677ce925c4ef2ef2fd199e76f79d7dc94eee69c1d55be6009e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 04:07:54 np0005558241 podman[389064]: 2025-12-13 09:07:54.026249318 +0000 UTC m=+0.169548068 container attach b7050499d6bd42677ce925c4ef2ef2fd199e76f79d7dc94eee69c1d55be6009e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]: {
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:    "0": [
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:        {
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "devices": [
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "/dev/loop3"
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            ],
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "lv_name": "ceph_lv0",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "lv_size": "21470642176",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "name": "ceph_lv0",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "tags": {
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.cluster_name": "ceph",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.crush_device_class": "",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.encrypted": "0",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.objectstore": "bluestore",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.osd_id": "0",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.type": "block",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.vdo": "0",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.with_tpm": "0"
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            },
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "type": "block",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "vg_name": "ceph_vg0"
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:        }
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:    ],
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:    "1": [
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:        {
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "devices": [
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "/dev/loop4"
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            ],
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "lv_name": "ceph_lv1",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "lv_size": "21470642176",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "name": "ceph_lv1",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "tags": {
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.cluster_name": "ceph",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.crush_device_class": "",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.encrypted": "0",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.objectstore": "bluestore",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.osd_id": "1",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.type": "block",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.vdo": "0",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.with_tpm": "0"
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            },
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "type": "block",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "vg_name": "ceph_vg1"
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:        }
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:    ],
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:    "2": [
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:        {
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "devices": [
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "/dev/loop5"
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            ],
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "lv_name": "ceph_lv2",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "lv_size": "21470642176",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "name": "ceph_lv2",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "tags": {
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.cluster_name": "ceph",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.crush_device_class": "",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.encrypted": "0",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.objectstore": "bluestore",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.osd_id": "2",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.type": "block",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.vdo": "0",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:                "ceph.with_tpm": "0"
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            },
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "type": "block",
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:            "vg_name": "ceph_vg2"
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:        }
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]:    ]
Dec 13 04:07:54 np0005558241 recursing_galileo[389080]: }
Dec 13 04:07:54 np0005558241 systemd[1]: libpod-b7050499d6bd42677ce925c4ef2ef2fd199e76f79d7dc94eee69c1d55be6009e.scope: Deactivated successfully.
Dec 13 04:07:54 np0005558241 podman[389064]: 2025-12-13 09:07:54.39981225 +0000 UTC m=+0.543110950 container died b7050499d6bd42677ce925c4ef2ef2fd199e76f79d7dc94eee69c1d55be6009e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_galileo, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:07:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3204: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:07:54 np0005558241 systemd[1]: var-lib-containers-storage-overlay-472f33ce33c21099c69f0d97f235b8d44c7e4d2080da443c940619ac0eec4250-merged.mount: Deactivated successfully.
Dec 13 04:07:55 np0005558241 podman[389064]: 2025-12-13 09:07:55.013937331 +0000 UTC m=+1.157236011 container remove b7050499d6bd42677ce925c4ef2ef2fd199e76f79d7dc94eee69c1d55be6009e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 04:07:55 np0005558241 systemd[1]: libpod-conmon-b7050499d6bd42677ce925c4ef2ef2fd199e76f79d7dc94eee69c1d55be6009e.scope: Deactivated successfully.
Dec 13 04:07:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:55.444 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:07:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:55.445 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:07:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:55.446 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:07:55 np0005558241 podman[389166]: 2025-12-13 09:07:55.537924516 +0000 UTC m=+0.031862334 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:07:55 np0005558241 podman[389166]: 2025-12-13 09:07:55.635667159 +0000 UTC m=+0.129604937 container create 753907b51cfd2432d0ea17226fcc237edb929cd8a6a3b0c0b7bb867a188f8bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 04:07:55 np0005558241 systemd[1]: Started libpod-conmon-753907b51cfd2432d0ea17226fcc237edb929cd8a6a3b0c0b7bb867a188f8bdc.scope.
Dec 13 04:07:55 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:07:55 np0005558241 podman[389166]: 2025-12-13 09:07:55.840815284 +0000 UTC m=+0.334753092 container init 753907b51cfd2432d0ea17226fcc237edb929cd8a6a3b0c0b7bb867a188f8bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:07:55 np0005558241 podman[389166]: 2025-12-13 09:07:55.849225971 +0000 UTC m=+0.343163779 container start 753907b51cfd2432d0ea17226fcc237edb929cd8a6a3b0c0b7bb867a188f8bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:07:55 np0005558241 podman[389166]: 2025-12-13 09:07:55.853333217 +0000 UTC m=+0.347271025 container attach 753907b51cfd2432d0ea17226fcc237edb929cd8a6a3b0c0b7bb867a188f8bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 04:07:55 np0005558241 nifty_keldysh[389181]: 167 167
Dec 13 04:07:55 np0005558241 systemd[1]: libpod-753907b51cfd2432d0ea17226fcc237edb929cd8a6a3b0c0b7bb867a188f8bdc.scope: Deactivated successfully.
Dec 13 04:07:55 np0005558241 podman[389166]: 2025-12-13 09:07:55.858258744 +0000 UTC m=+0.352196622 container died 753907b51cfd2432d0ea17226fcc237edb929cd8a6a3b0c0b7bb867a188f8bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 04:07:55 np0005558241 systemd[1]: var-lib-containers-storage-overlay-884cfa3c3ec925f5b51a76f27625f6709d0c5522feb549e0807b2af8bed045fe-merged.mount: Deactivated successfully.
Dec 13 04:07:55 np0005558241 podman[389166]: 2025-12-13 09:07:55.903661266 +0000 UTC m=+0.397599074 container remove 753907b51cfd2432d0ea17226fcc237edb929cd8a6a3b0c0b7bb867a188f8bdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:07:55 np0005558241 systemd[1]: libpod-conmon-753907b51cfd2432d0ea17226fcc237edb929cd8a6a3b0c0b7bb867a188f8bdc.scope: Deactivated successfully.
Dec 13 04:07:56 np0005558241 podman[389206]: 2025-12-13 09:07:56.109872439 +0000 UTC m=+0.050916385 container create 2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 04:07:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:07:56 np0005558241 systemd[1]: Started libpod-conmon-2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547.scope.
Dec 13 04:07:56 np0005558241 podman[389206]: 2025-12-13 09:07:56.087893181 +0000 UTC m=+0.028937127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:07:56 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:07:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492c6c9bf90c4251496a30accd14eab32b64dea9332748ca282f93c6e41ee999/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492c6c9bf90c4251496a30accd14eab32b64dea9332748ca282f93c6e41ee999/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492c6c9bf90c4251496a30accd14eab32b64dea9332748ca282f93c6e41ee999/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492c6c9bf90c4251496a30accd14eab32b64dea9332748ca282f93c6e41ee999/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:56 np0005558241 podman[389206]: 2025-12-13 09:07:56.22500028 +0000 UTC m=+0.166044316 container init 2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:07:56 np0005558241 podman[389206]: 2025-12-13 09:07:56.237040511 +0000 UTC m=+0.178084427 container start 2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_darwin, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:07:56 np0005558241 podman[389206]: 2025-12-13 09:07:56.240693335 +0000 UTC m=+0.181737301 container attach 2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_darwin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:07:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3205: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:07:57 np0005558241 lvm[389300]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:07:57 np0005558241 lvm[389300]: VG ceph_vg0 finished
Dec 13 04:07:57 np0005558241 lvm[389301]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:07:57 np0005558241 lvm[389301]: VG ceph_vg1 finished
Dec 13 04:07:57 np0005558241 lvm[389303]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:07:57 np0005558241 lvm[389303]: VG ceph_vg2 finished
Dec 13 04:07:57 np0005558241 amazing_darwin[389222]: {}
Dec 13 04:07:57 np0005558241 systemd[1]: libpod-2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547.scope: Deactivated successfully.
Dec 13 04:07:57 np0005558241 systemd[1]: libpod-2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547.scope: Consumed 1.595s CPU time.
Dec 13 04:07:57 np0005558241 podman[389206]: 2025-12-13 09:07:57.194127735 +0000 UTC m=+1.135171661 container died 2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_darwin, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:07:57 np0005558241 systemd[1]: var-lib-containers-storage-overlay-492c6c9bf90c4251496a30accd14eab32b64dea9332748ca282f93c6e41ee999-merged.mount: Deactivated successfully.
Dec 13 04:07:57 np0005558241 podman[389206]: 2025-12-13 09:07:57.238832629 +0000 UTC m=+1.179876555 container remove 2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_darwin, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 04:07:57 np0005558241 systemd[1]: libpod-conmon-2cd53fdce9950febf89458d6a537e3c23fb894d536508de567b0d1edb6665547.scope: Deactivated successfully.
Dec 13 04:07:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:07:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:07:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:07:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:07:57 np0005558241 nova_compute[248510]: 2025-12-13 09:07:57.988 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:58 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:07:58 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:07:58 np0005558241 nova_compute[248510]: 2025-12-13 09:07:58.548 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3206: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:07:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:59.718 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:07:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:59.719 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:07:59 np0005558241 nova_compute[248510]: 2025-12-13 09:07:59.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:07:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:07:59.721 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:08:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3207: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:08:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3208: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:08:03 np0005558241 nova_compute[248510]: 2025-12-13 09:08:03.023 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:03 np0005558241 nova_compute[248510]: 2025-12-13 09:08:03.550 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3209: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:08:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3210: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:08:07 np0005558241 nova_compute[248510]: 2025-12-13 09:08:07.534 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:08:07 np0005558241 nova_compute[248510]: 2025-12-13 09:08:07.535 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:08:07 np0005558241 nova_compute[248510]: 2025-12-13 09:08:07.556 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:08:07 np0005558241 nova_compute[248510]: 2025-12-13 09:08:07.705 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:08:07 np0005558241 nova_compute[248510]: 2025-12-13 09:08:07.706 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:08:07 np0005558241 nova_compute[248510]: 2025-12-13 09:08:07.721 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:08:07 np0005558241 nova_compute[248510]: 2025-12-13 09:08:07.722 248514 INFO nova.compute.claims [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:08:07 np0005558241 nova_compute[248510]: 2025-12-13 09:08:07.949 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:08:08 np0005558241 nova_compute[248510]: 2025-12-13 09:08:08.068 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:08:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3638317404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:08:08 np0005558241 nova_compute[248510]: 2025-12-13 09:08:08.555 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:08 np0005558241 nova_compute[248510]: 2025-12-13 09:08:08.577 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:08:08 np0005558241 nova_compute[248510]: 2025-12-13 09:08:08.586 248514 DEBUG nova.compute.provider_tree [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:08:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3211: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:08:08 np0005558241 nova_compute[248510]: 2025-12-13 09:08:08.667 248514 DEBUG nova.scheduler.client.report [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:08:08 np0005558241 nova_compute[248510]: 2025-12-13 09:08:08.701 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:08:08 np0005558241 nova_compute[248510]: 2025-12-13 09:08:08.703 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:08:08 np0005558241 nova_compute[248510]: 2025-12-13 09:08:08.762 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:08:08 np0005558241 nova_compute[248510]: 2025-12-13 09:08:08.763 248514 DEBUG nova.network.neutron [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:08:08 np0005558241 nova_compute[248510]: 2025-12-13 09:08:08.795 248514 INFO nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:08:08 np0005558241 nova_compute[248510]: 2025-12-13 09:08:08.824 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:08:08 np0005558241 nova_compute[248510]: 2025-12-13 09:08:08.932 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:08:08 np0005558241 nova_compute[248510]: 2025-12-13 09:08:08.934 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:08:08 np0005558241 nova_compute[248510]: 2025-12-13 09:08:08.934 248514 INFO nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Creating image(s)#033[00m
Dec 13 04:08:08 np0005558241 nova_compute[248510]: 2025-12-13 09:08:08.961 248514 DEBUG nova.storage.rbd_utils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 54c767d1-14e1-4a29-be59-440d4e412c4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:08:08 np0005558241 nova_compute[248510]: 2025-12-13 09:08:08.992 248514 DEBUG nova.storage.rbd_utils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 54c767d1-14e1-4a29-be59-440d4e412c4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:08:09 np0005558241 nova_compute[248510]: 2025-12-13 09:08:09.023 248514 DEBUG nova.storage.rbd_utils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 54c767d1-14e1-4a29-be59-440d4e412c4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:08:09 np0005558241 nova_compute[248510]: 2025-12-13 09:08:09.028 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:08:09 np0005558241 nova_compute[248510]: 2025-12-13 09:08:09.135 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:08:09 np0005558241 nova_compute[248510]: 2025-12-13 09:08:09.136 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:08:09 np0005558241 nova_compute[248510]: 2025-12-13 09:08:09.136 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:08:09 np0005558241 nova_compute[248510]: 2025-12-13 09:08:09.137 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:08:09 np0005558241 nova_compute[248510]: 2025-12-13 09:08:09.164 248514 DEBUG nova.storage.rbd_utils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 54c767d1-14e1-4a29-be59-440d4e412c4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:08:09 np0005558241 nova_compute[248510]: 2025-12-13 09:08:09.167 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 54c767d1-14e1-4a29-be59-440d4e412c4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:08:09 np0005558241 nova_compute[248510]: 2025-12-13 09:08:09.231 248514 DEBUG nova.policy [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:08:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:08:09
Dec 13 04:08:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:08:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:08:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'volumes', 'backups', '.rgw.root', 'images', 'cephfs.cephfs.meta']
Dec 13 04:08:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:08:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:08:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:08:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:08:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:08:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:08:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:08:10 np0005558241 nova_compute[248510]: 2025-12-13 09:08:10.185 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 54c767d1-14e1-4a29-be59-440d4e412c4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:08:10 np0005558241 nova_compute[248510]: 2025-12-13 09:08:10.226 248514 DEBUG nova.network.neutron [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Successfully created port: 820e6a4a-a776-40a1-a7c0-d34cd3d2543c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:08:10 np0005558241 nova_compute[248510]: 2025-12-13 09:08:10.278 248514 DEBUG nova.storage.rbd_utils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 54c767d1-14e1-4a29-be59-440d4e412c4e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:08:10 np0005558241 nova_compute[248510]: 2025-12-13 09:08:10.382 248514 DEBUG nova.objects.instance [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 54c767d1-14e1-4a29-be59-440d4e412c4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:08:10 np0005558241 nova_compute[248510]: 2025-12-13 09:08:10.406 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:08:10 np0005558241 nova_compute[248510]: 2025-12-13 09:08:10.407 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Ensure instance console log exists: /var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:08:10 np0005558241 nova_compute[248510]: 2025-12-13 09:08:10.407 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:08:10 np0005558241 nova_compute[248510]: 2025-12-13 09:08:10.407 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:08:10 np0005558241 nova_compute[248510]: 2025-12-13 09:08:10.408 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:08:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3212: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:08:10 np0005558241 nova_compute[248510]: 2025-12-13 09:08:10.821 248514 DEBUG nova.network.neutron [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Successfully created port: 06da1719-402b-4c36-b4b8-dcc95ed8b65c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:08:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:08:10 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:08:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:08:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:08:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:08:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:08:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:08:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:08:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:08:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:08:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:11 np0005558241 nova_compute[248510]: 2025-12-13 09:08:11.709 248514 DEBUG nova.network.neutron [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Successfully updated port: 820e6a4a-a776-40a1-a7c0-d34cd3d2543c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:08:11 np0005558241 nova_compute[248510]: 2025-12-13 09:08:11.990 248514 DEBUG nova.compute.manager [req-9a1f4020-10b7-4ff9-9117-af1328fb90e1 req-75342f1d-4b46-410f-a579-8cdc414d9e6d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-changed-820e6a4a-a776-40a1-a7c0-d34cd3d2543c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:08:11 np0005558241 nova_compute[248510]: 2025-12-13 09:08:11.990 248514 DEBUG nova.compute.manager [req-9a1f4020-10b7-4ff9-9117-af1328fb90e1 req-75342f1d-4b46-410f-a579-8cdc414d9e6d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Refreshing instance network info cache due to event network-changed-820e6a4a-a776-40a1-a7c0-d34cd3d2543c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:08:11 np0005558241 nova_compute[248510]: 2025-12-13 09:08:11.991 248514 DEBUG oslo_concurrency.lockutils [req-9a1f4020-10b7-4ff9-9117-af1328fb90e1 req-75342f1d-4b46-410f-a579-8cdc414d9e6d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:08:11 np0005558241 nova_compute[248510]: 2025-12-13 09:08:11.991 248514 DEBUG oslo_concurrency.lockutils [req-9a1f4020-10b7-4ff9-9117-af1328fb90e1 req-75342f1d-4b46-410f-a579-8cdc414d9e6d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:08:11 np0005558241 nova_compute[248510]: 2025-12-13 09:08:11.991 248514 DEBUG nova.network.neutron [req-9a1f4020-10b7-4ff9-9117-af1328fb90e1 req-75342f1d-4b46-410f-a579-8cdc414d9e6d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Refreshing network info cache for port 820e6a4a-a776-40a1-a7c0-d34cd3d2543c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:08:12 np0005558241 nova_compute[248510]: 2025-12-13 09:08:12.184 248514 DEBUG nova.network.neutron [req-9a1f4020-10b7-4ff9-9117-af1328fb90e1 req-75342f1d-4b46-410f-a579-8cdc414d9e6d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:08:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3213: 321 pgs: 321 active+clean; 41 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:08:12 np0005558241 nova_compute[248510]: 2025-12-13 09:08:12.664 248514 DEBUG nova.network.neutron [req-9a1f4020-10b7-4ff9-9117-af1328fb90e1 req-75342f1d-4b46-410f-a579-8cdc414d9e6d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:08:13 np0005558241 nova_compute[248510]: 2025-12-13 09:08:13.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:13 np0005558241 nova_compute[248510]: 2025-12-13 09:08:13.178 248514 DEBUG oslo_concurrency.lockutils [req-9a1f4020-10b7-4ff9-9117-af1328fb90e1 req-75342f1d-4b46-410f-a579-8cdc414d9e6d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:08:13 np0005558241 nova_compute[248510]: 2025-12-13 09:08:13.556 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:13 np0005558241 nova_compute[248510]: 2025-12-13 09:08:13.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:08:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3214: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:08:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:08:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2884898755' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:08:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:08:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2884898755' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:08:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:16 np0005558241 nova_compute[248510]: 2025-12-13 09:08:16.283 248514 DEBUG nova.network.neutron [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Successfully updated port: 06da1719-402b-4c36-b4b8-dcc95ed8b65c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:08:16 np0005558241 nova_compute[248510]: 2025-12-13 09:08:16.298 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:08:16 np0005558241 nova_compute[248510]: 2025-12-13 09:08:16.299 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:08:16 np0005558241 nova_compute[248510]: 2025-12-13 09:08:16.299 248514 DEBUG nova.network.neutron [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:08:16 np0005558241 nova_compute[248510]: 2025-12-13 09:08:16.382 248514 DEBUG nova.compute.manager [req-7573c044-02ef-40a2-9ab5-6336262960fc req-30f3189f-9cf3-4444-88d0-6094f6d78eb6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-changed-06da1719-402b-4c36-b4b8-dcc95ed8b65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:08:16 np0005558241 nova_compute[248510]: 2025-12-13 09:08:16.383 248514 DEBUG nova.compute.manager [req-7573c044-02ef-40a2-9ab5-6336262960fc req-30f3189f-9cf3-4444-88d0-6094f6d78eb6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Refreshing instance network info cache due to event network-changed-06da1719-402b-4c36-b4b8-dcc95ed8b65c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:08:16 np0005558241 nova_compute[248510]: 2025-12-13 09:08:16.383 248514 DEBUG oslo_concurrency.lockutils [req-7573c044-02ef-40a2-9ab5-6336262960fc req-30f3189f-9cf3-4444-88d0-6094f6d78eb6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:08:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3215: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:08:16 np0005558241 nova_compute[248510]: 2025-12-13 09:08:16.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:08:17 np0005558241 nova_compute[248510]: 2025-12-13 09:08:17.272 248514 DEBUG nova.network.neutron [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:08:18 np0005558241 nova_compute[248510]: 2025-12-13 09:08:18.074 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:18 np0005558241 nova_compute[248510]: 2025-12-13 09:08:18.558 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3216: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:08:18 np0005558241 podman[389538]: 2025-12-13 09:08:18.993910876 +0000 UTC m=+0.071940518 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 04:08:19 np0005558241 podman[389537]: 2025-12-13 09:08:19.02816623 +0000 UTC m=+0.114722602 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:08:19 np0005558241 podman[389536]: 2025-12-13 09:08:19.033999681 +0000 UTC m=+0.117459723 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller)
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.327 248514 DEBUG nova.network.neutron [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updating instance_info_cache with network_info: [{"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.355 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.356 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Instance network_info: |[{"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.356 248514 DEBUG oslo_concurrency.lockutils [req-7573c044-02ef-40a2-9ab5-6336262960fc req-30f3189f-9cf3-4444-88d0-6094f6d78eb6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.357 248514 DEBUG nova.network.neutron [req-7573c044-02ef-40a2-9ab5-6336262960fc req-30f3189f-9cf3-4444-88d0-6094f6d78eb6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Refreshing network info cache for port 06da1719-402b-4c36-b4b8-dcc95ed8b65c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.361 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Start _get_guest_xml network_info=[{"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.367 248514 WARNING nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.375 248514 DEBUG nova.virt.libvirt.host [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.376 248514 DEBUG nova.virt.libvirt.host [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.382 248514 DEBUG nova.virt.libvirt.host [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.383 248514 DEBUG nova.virt.libvirt.host [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.383 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.384 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.384 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.385 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.385 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.385 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.385 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.386 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.386 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.386 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.387 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.387 248514 DEBUG nova.virt.hardware [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.390 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:08:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3217: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:08:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:08:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2203756744' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.952 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.975 248514 DEBUG nova.storage.rbd_utils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 54c767d1-14e1-4a29-be59-440d4e412c4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:08:20 np0005558241 nova_compute[248510]: 2025-12-13 09:08:20.979 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:08:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000360499293401914 of space, bias 1.0, pg target 0.1081497880205742 quantized to 32 (current 32)
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697557475050345 of space, bias 1.0, pg target 0.20092672425151034 quantized to 32 (current 32)
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:08:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:08:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1964944478' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.549 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.551 248514 DEBUG nova.virt.libvirt.vif [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:08:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-698361226',display_name='tempest-TestGettingAddress-server-698361226',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-698361226',id=139,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-xmk45p0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:08:08Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=54c767d1-14e1-4a29-be59-440d4e412c4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.551 248514 DEBUG nova.network.os_vif_util [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.552 248514 DEBUG nova.network.os_vif_util [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:71:ee,bridge_name='br-int',has_traffic_filtering=True,id=820e6a4a-a776-40a1-a7c0-d34cd3d2543c,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap820e6a4a-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.553 248514 DEBUG nova.virt.libvirt.vif [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:08:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-698361226',display_name='tempest-TestGettingAddress-server-698361226',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-698361226',id=139,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-xmk45p0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:08:08Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=54c767d1-14e1-4a29-be59-440d4e412c4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.554 248514 DEBUG nova.network.os_vif_util [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.554 248514 DEBUG nova.network.os_vif_util [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:d9:03,bridge_name='br-int',has_traffic_filtering=True,id=06da1719-402b-4c36-b4b8-dcc95ed8b65c,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06da1719-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.555 248514 DEBUG nova.objects.instance [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 54c767d1-14e1-4a29-be59-440d4e412c4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.580 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:08:21 np0005558241 nova_compute[248510]:  <uuid>54c767d1-14e1-4a29-be59-440d4e412c4e</uuid>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:  <name>instance-0000008b</name>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestGettingAddress-server-698361226</nova:name>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:08:20</nova:creationTime>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:08:21 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:        <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:        <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:        <nova:port uuid="820e6a4a-a776-40a1-a7c0-d34cd3d2543c">
Dec 13 04:08:21 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:        <nova:port uuid="06da1719-402b-4c36-b4b8-dcc95ed8b65c">
Dec 13 04:08:21 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fef0:d903" ipVersion="6"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <entry name="serial">54c767d1-14e1-4a29-be59-440d4e412c4e</entry>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <entry name="uuid">54c767d1-14e1-4a29-be59-440d4e412c4e</entry>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/54c767d1-14e1-4a29-be59-440d4e412c4e_disk">
Dec 13 04:08:21 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:08:21 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/54c767d1-14e1-4a29-be59-440d4e412c4e_disk.config">
Dec 13 04:08:21 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:08:21 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:bc:71:ee"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <target dev="tap820e6a4a-a7"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:f0:d9:03"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <target dev="tap06da1719-40"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e/console.log" append="off"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:08:21 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:08:21 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:08:21 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:08:21 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.582 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Preparing to wait for external event network-vif-plugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.582 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.583 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.583 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.583 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Preparing to wait for external event network-vif-plugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.583 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.584 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.584 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.585 248514 DEBUG nova.virt.libvirt.vif [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:08:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-698361226',display_name='tempest-TestGettingAddress-server-698361226',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-698361226',id=139,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-xmk45p0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:08:08Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=54c767d1-14e1-4a29-be59-440d4e412c4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.585 248514 DEBUG nova.network.os_vif_util [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.586 248514 DEBUG nova.network.os_vif_util [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:71:ee,bridge_name='br-int',has_traffic_filtering=True,id=820e6a4a-a776-40a1-a7c0-d34cd3d2543c,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap820e6a4a-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.586 248514 DEBUG os_vif [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:71:ee,bridge_name='br-int',has_traffic_filtering=True,id=820e6a4a-a776-40a1-a7c0-d34cd3d2543c,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap820e6a4a-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.587 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.588 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.588 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.592 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.593 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap820e6a4a-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.593 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap820e6a4a-a7, col_values=(('external_ids', {'iface-id': '820e6a4a-a776-40a1-a7c0-d34cd3d2543c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:71:ee', 'vm-uuid': '54c767d1-14e1-4a29-be59-440d4e412c4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.595 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:21 np0005558241 NetworkManager[50376]: <info>  [1765616901.5966] manager: (tap820e6a4a-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/592)
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.598 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.603 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.604 248514 INFO os_vif [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:71:ee,bridge_name='br-int',has_traffic_filtering=True,id=820e6a4a-a776-40a1-a7c0-d34cd3d2543c,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap820e6a4a-a7')#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.605 248514 DEBUG nova.virt.libvirt.vif [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:08:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-698361226',display_name='tempest-TestGettingAddress-server-698361226',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-698361226',id=139,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-xmk45p0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:08:08Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=54c767d1-14e1-4a29-be59-440d4e412c4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.605 248514 DEBUG nova.network.os_vif_util [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.606 248514 DEBUG nova.network.os_vif_util [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:d9:03,bridge_name='br-int',has_traffic_filtering=True,id=06da1719-402b-4c36-b4b8-dcc95ed8b65c,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06da1719-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.606 248514 DEBUG os_vif [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:d9:03,bridge_name='br-int',has_traffic_filtering=True,id=06da1719-402b-4c36-b4b8-dcc95ed8b65c,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06da1719-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.607 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.607 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.607 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.610 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.610 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06da1719-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.611 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap06da1719-40, col_values=(('external_ids', {'iface-id': '06da1719-402b-4c36-b4b8-dcc95ed8b65c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:d9:03', 'vm-uuid': '54c767d1-14e1-4a29-be59-440d4e412c4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.612 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:21 np0005558241 NetworkManager[50376]: <info>  [1765616901.6134] manager: (tap06da1719-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/593)
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.615 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.620 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.621 248514 INFO os_vif [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:d9:03,bridge_name='br-int',has_traffic_filtering=True,id=06da1719-402b-4c36-b4b8-dcc95ed8b65c,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06da1719-40')#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.859 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.860 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.883 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.884 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.885 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:bc:71:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.885 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:f0:d9:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.886 248514 INFO nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Using config drive#033[00m
Dec 13 04:08:21 np0005558241 nova_compute[248510]: 2025-12-13 09:08:21.919 248514 DEBUG nova.storage.rbd_utils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 54c767d1-14e1-4a29-be59-440d4e412c4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:08:22 np0005558241 nova_compute[248510]: 2025-12-13 09:08:22.503 248514 INFO nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Creating config drive at /var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e/disk.config#033[00m
Dec 13 04:08:22 np0005558241 nova_compute[248510]: 2025-12-13 09:08:22.510 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphul1qh03 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:08:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3218: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:08:22 np0005558241 nova_compute[248510]: 2025-12-13 09:08:22.671 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphul1qh03" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:08:22 np0005558241 nova_compute[248510]: 2025-12-13 09:08:22.712 248514 DEBUG nova.storage.rbd_utils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 54c767d1-14e1-4a29-be59-440d4e412c4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:08:22 np0005558241 nova_compute[248510]: 2025-12-13 09:08:22.718 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e/disk.config 54c767d1-14e1-4a29-be59-440d4e412c4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:08:22 np0005558241 nova_compute[248510]: 2025-12-13 09:08:22.774 248514 DEBUG nova.network.neutron [req-7573c044-02ef-40a2-9ab5-6336262960fc req-30f3189f-9cf3-4444-88d0-6094f6d78eb6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updated VIF entry in instance network info cache for port 06da1719-402b-4c36-b4b8-dcc95ed8b65c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:08:22 np0005558241 nova_compute[248510]: 2025-12-13 09:08:22.776 248514 DEBUG nova.network.neutron [req-7573c044-02ef-40a2-9ab5-6336262960fc req-30f3189f-9cf3-4444-88d0-6094f6d78eb6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updating instance_info_cache with network_info: [{"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:08:22 np0005558241 nova_compute[248510]: 2025-12-13 09:08:22.967 248514 DEBUG oslo_concurrency.processutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e/disk.config 54c767d1-14e1-4a29-be59-440d4e412c4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:08:22 np0005558241 nova_compute[248510]: 2025-12-13 09:08:22.968 248514 INFO nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Deleting local config drive /var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e/disk.config because it was imported into RBD.#033[00m
Dec 13 04:08:23 np0005558241 NetworkManager[50376]: <info>  [1765616903.0302] manager: (tap820e6a4a-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/594)
Dec 13 04:08:23 np0005558241 kernel: tap820e6a4a-a7: entered promiscuous mode
Dec 13 04:08:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:08:23Z|01437|binding|INFO|Claiming lport 820e6a4a-a776-40a1-a7c0-d34cd3d2543c for this chassis.
Dec 13 04:08:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:08:23Z|01438|binding|INFO|820e6a4a-a776-40a1-a7c0-d34cd3d2543c: Claiming fa:16:3e:bc:71:ee 10.100.0.12
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.034 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:23 np0005558241 NetworkManager[50376]: <info>  [1765616903.0468] manager: (tap06da1719-40): new Tun device (/org/freedesktop/NetworkManager/Devices/595)
Dec 13 04:08:23 np0005558241 systemd-udevd[389743]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:08:23 np0005558241 systemd-udevd[389742]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:08:23 np0005558241 NetworkManager[50376]: <info>  [1765616903.0775] device (tap820e6a4a-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:08:23 np0005558241 NetworkManager[50376]: <info>  [1765616903.0791] device (tap820e6a4a-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:08:23 np0005558241 systemd-machined[210538]: New machine qemu-170-instance-0000008b.
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.114 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:71:ee 10.100.0.12'], port_security=['fa:16:3e:bc:71:ee 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '54c767d1-14e1-4a29-be59-440d4e412c4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee151f33-1853-4ff9-89d9-85c7418f7618', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=642d5a3d-2cd7-49a8-8481-4caf7441de52, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=820e6a4a-a776-40a1-a7c0-d34cd3d2543c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.115 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 820e6a4a-a776-40a1-a7c0-d34cd3d2543c in datapath 31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5 bound to our chassis#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.117 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5#033[00m
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.125 248514 DEBUG oslo_concurrency.lockutils [req-7573c044-02ef-40a2-9ab5-6336262960fc req-30f3189f-9cf3-4444-88d0-6094f6d78eb6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.135 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bc4930ec-6a22-4c72-b123-d47f6a415893]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.136 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap31e0a1ad-e1 in ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.138 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap31e0a1ad-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.138 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff198ce-9db2-479b-b8f5-a3f05f93a78f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.140 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4f94e9d9-7008-4595-a69d-326ff693786b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:08:23 np0005558241 kernel: tap06da1719-40: entered promiscuous mode
Dec 13 04:08:23 np0005558241 NetworkManager[50376]: <info>  [1765616903.1493] device (tap06da1719-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.151 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:23 np0005558241 NetworkManager[50376]: <info>  [1765616903.1531] device (tap06da1719-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:08:23 np0005558241 systemd[1]: Started Virtual Machine qemu-170-instance-0000008b.
Dec 13 04:08:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:08:23Z|01439|binding|INFO|Claiming lport 06da1719-402b-4c36-b4b8-dcc95ed8b65c for this chassis.
Dec 13 04:08:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:08:23Z|01440|binding|INFO|06da1719-402b-4c36-b4b8-dcc95ed8b65c: Claiming fa:16:3e:f0:d9:03 2001:db8::f816:3eff:fef0:d903
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.156 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[727c448e-fd95-4e13-9f80-b5febf9f35f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.160 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:d9:03 2001:db8::f816:3eff:fef0:d903'], port_security=['fa:16:3e:f0:d9:03 2001:db8::f816:3eff:fef0:d903'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef0:d903/64', 'neutron:device_id': '54c767d1-14e1-4a29-be59-440d4e412c4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee151f33-1853-4ff9-89d9-85c7418f7618', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d30eddc7-04d5-44b8-966c-6e93f6525e9a, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=06da1719-402b-4c36-b4b8-dcc95ed8b65c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:08:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:08:23Z|01441|binding|INFO|Setting lport 820e6a4a-a776-40a1-a7c0-d34cd3d2543c ovn-installed in OVS
Dec 13 04:08:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:08:23Z|01442|binding|INFO|Setting lport 820e6a4a-a776-40a1-a7c0-d34cd3d2543c up in Southbound
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.162 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:08:23Z|01443|binding|INFO|Setting lport 06da1719-402b-4c36-b4b8-dcc95ed8b65c ovn-installed in OVS
Dec 13 04:08:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:08:23Z|01444|binding|INFO|Setting lport 06da1719-402b-4c36-b4b8-dcc95ed8b65c up in Southbound
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.174 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.174 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cd869dfa-93dc-4b7a-a273-e1355f8b6401]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.222 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[84c8c15f-bf7a-4af5-9cdc-bc44f3f4eda0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:08:23 np0005558241 NetworkManager[50376]: <info>  [1765616903.2317] manager: (tap31e0a1ad-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/596)
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.232 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[932d5809-ced8-473a-9da8-2d7df3500b82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.274 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f62f88bd-5666-413e-b126-5240fc697560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.278 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2835c75c-c582-49e1-bb6d-08e8215fa557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:08:23 np0005558241 NetworkManager[50376]: <info>  [1765616903.3033] device (tap31e0a1ad-e0): carrier: link connected
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.312 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d3152b-83a4-4809-93d0-9a840e8238c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.336 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[632e8994-a3c8-4f22-a71e-edba565a0d40]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31e0a1ad-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:46:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 416], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948051, 'reachable_time': 26277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389779, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.358 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[47938127-0d77-4e88-9646-c3541ff9e81b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:46da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 948051, 'tstamp': 948051}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389780, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.387 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ba026b76-350e-4678-ad29-72b3ca3bf597]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31e0a1ad-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:46:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 416], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948051, 'reachable_time': 26277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 389781, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.433 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[766eee82-14b1-4c77-b5e6-2ce7b14c4360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.489 248514 DEBUG nova.compute.manager [req-a9712ea8-5707-4f57-910c-e8eccff0af5e req-6968c384-bd55-4e8b-80ad-076d769a8db9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-plugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.489 248514 DEBUG oslo_concurrency.lockutils [req-a9712ea8-5707-4f57-910c-e8eccff0af5e req-6968c384-bd55-4e8b-80ad-076d769a8db9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.489 248514 DEBUG oslo_concurrency.lockutils [req-a9712ea8-5707-4f57-910c-e8eccff0af5e req-6968c384-bd55-4e8b-80ad-076d769a8db9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.490 248514 DEBUG oslo_concurrency.lockutils [req-a9712ea8-5707-4f57-910c-e8eccff0af5e req-6968c384-bd55-4e8b-80ad-076d769a8db9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.490 248514 DEBUG nova.compute.manager [req-a9712ea8-5707-4f57-910c-e8eccff0af5e req-6968c384-bd55-4e8b-80ad-076d769a8db9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Processing event network-vif-plugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.498 248514 DEBUG nova.compute.manager [req-96be31fd-e68d-456a-b6f4-ec2b7daf7442 req-3c79cdec-c42f-4734-b7eb-dcad34fc8637 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-plugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.498 248514 DEBUG oslo_concurrency.lockutils [req-96be31fd-e68d-456a-b6f4-ec2b7daf7442 req-3c79cdec-c42f-4734-b7eb-dcad34fc8637 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.498 248514 DEBUG oslo_concurrency.lockutils [req-96be31fd-e68d-456a-b6f4-ec2b7daf7442 req-3c79cdec-c42f-4734-b7eb-dcad34fc8637 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.498 248514 DEBUG oslo_concurrency.lockutils [req-96be31fd-e68d-456a-b6f4-ec2b7daf7442 req-3c79cdec-c42f-4734-b7eb-dcad34fc8637 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.498 248514 DEBUG nova.compute.manager [req-96be31fd-e68d-456a-b6f4-ec2b7daf7442 req-3c79cdec-c42f-4734-b7eb-dcad34fc8637 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Processing event network-vif-plugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.521 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d0baaa94-16c2-44e5-baea-438b33a619cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.523 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31e0a1ad-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.523 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.523 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31e0a1ad-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.525 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:23 np0005558241 NetworkManager[50376]: <info>  [1765616903.5262] manager: (tap31e0a1ad-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/597)
Dec 13 04:08:23 np0005558241 kernel: tap31e0a1ad-e0: entered promiscuous mode
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.532 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31e0a1ad-e0, col_values=(('external_ids', {'iface-id': 'bbfdb82a-c1cb-4d3d-b44d-9475a3177d38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.533 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:08:23Z|01445|binding|INFO|Releasing lport bbfdb82a-c1cb-4d3d-b44d-9475a3177d38 from this chassis (sb_readonly=0)
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.536 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.537 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb751b8-5007-483a-8adf-1a58210362d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.537 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5.pid.haproxy
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:08:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:23.538 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'env', 'PROCESS_TAG=haproxy-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.545 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.558 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:23 np0005558241 podman[389849]: 2025-12-13 09:08:23.925888495 +0000 UTC m=+0.053117002 container create 63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 04:08:23 np0005558241 systemd[1]: Started libpod-conmon-63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922.scope.
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.988 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.990 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616903.9879081, 54c767d1-14e1-4a29-be59-440d4e412c4e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.990 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] VM Started (Lifecycle Event)#033[00m
Dec 13 04:08:23 np0005558241 podman[389849]: 2025-12-13 09:08:23.899413862 +0000 UTC m=+0.026642389 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:08:23 np0005558241 nova_compute[248510]: 2025-12-13 09:08:23.999 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.004 248514 INFO nova.virt.libvirt.driver [-] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Instance spawned successfully.#033[00m
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.004 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:08:24 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:08:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d43e0b8694a8e8a523da23bc5772976ab296f6f8c8d08138f08d92ba5a318180/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.021 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.027 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.030 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.030 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.031 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.031 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.031 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.032 248514 DEBUG nova.virt.libvirt.driver [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:08:24 np0005558241 podman[389849]: 2025-12-13 09:08:24.038244236 +0000 UTC m=+0.165472763 container init 63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:08:24 np0005558241 podman[389849]: 2025-12-13 09:08:24.0446151 +0000 UTC m=+0.171843607 container start 63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.067 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.067 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616903.9891014, 54c767d1-14e1-4a29-be59-440d4e412c4e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.068 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] VM Paused (Lifecycle Event)
Dec 13 04:08:24 np0005558241 neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5[389870]: [NOTICE]   (389874) : New worker (389876) forked
Dec 13 04:08:24 np0005558241 neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5[389870]: [NOTICE]   (389874) : Loading success.
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.102 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 06da1719-402b-4c36-b4b8-dcc95ed8b65c in datapath 9cd5db3b-c54c-46c9-b59b-e477b2784420 unbound from our chassis
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.104 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9cd5db3b-c54c-46c9-b59b-e477b2784420
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.108 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.111 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616903.992605, 54c767d1-14e1-4a29-be59-440d4e412c4e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.111 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] VM Resumed (Lifecycle Event)
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.114 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[68003e99-5bcd-4153-a4d7-a6e32b57f3d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.115 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9cd5db3b-c1 in ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.118 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9cd5db3b-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.118 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[69b68b38-0a72-4df3-8470-da6be77a5ef6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.120 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9e45ac6b-83e1-4909-a168-1dd5bcee5a55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.124 248514 INFO nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Took 15.19 seconds to spawn the instance on the hypervisor.
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.125 248514 DEBUG nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.130 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.134 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[0719bcb0-c164-4a72-a7bc-7b0f9118c1c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.143 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.149 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[34b4e4f5-cafb-4cdf-8812-24680111283b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.177 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.180 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0575ff-339f-4bba-84e2-79c60b7a6b90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.187 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[48aed80b-5759-44e2-9386-645259423abf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:24 np0005558241 NetworkManager[50376]: <info>  [1765616904.1888] manager: (tap9cd5db3b-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/598)
Dec 13 04:08:24 np0005558241 systemd-udevd[389765]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.227 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae02a4f-3c23-46be-8ce1-6b3aa4530676]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.228 248514 INFO nova.compute.manager [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Took 16.62 seconds to build instance.
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.230 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc41e16-fa84-4588-b4aa-a2091a8ae99f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:24 np0005558241 NetworkManager[50376]: <info>  [1765616904.2606] device (tap9cd5db3b-c0): carrier: link connected
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.269 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[ea735268-9aa0-4f2d-9564-a42e1a4d1e39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.299 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[89aae542-77a5-455b-8c60-1c898db2977c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9cd5db3b-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:ad:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948147, 'reachable_time': 31724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389895, 'error': None, 'target': 'ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.317 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ccd03d-e61a-4ac6-bf9d-f595c75cc0c6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe90:ade0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 948147, 'tstamp': 948147}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389896, 'error': None, 'target': 'ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.344 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[57b81455-8ab0-4181-8d46-fb284f282901]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9cd5db3b-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:ad:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948147, 'reachable_time': 31724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 389897, 'error': None, 'target': 'ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.382 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f260c5-c504-4d62-bc5d-35467ca01b4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.389 248514 DEBUG oslo_concurrency.lockutils [None req-d8b43aeb-171c-447e-af0c-be9ee9c84783 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.422 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[539cce2b-4b98-4102-83f4-91fea9c5e6e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.424 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cd5db3b-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.425 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.426 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9cd5db3b-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:08:24 np0005558241 kernel: tap9cd5db3b-c0: entered promiscuous mode
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.428 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:24 np0005558241 NetworkManager[50376]: <info>  [1765616904.4297] manager: (tap9cd5db3b-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/599)
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.430 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.462 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9cd5db3b-c0, col_values=(('external_ids', {'iface-id': '26b2c676-4744-4ec7-b362-11535b0ba025'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.464 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.465 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.468 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9cd5db3b-c54c-46c9-b59b-e477b2784420.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9cd5db3b-c54c-46c9-b59b-e477b2784420.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.469 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f42713-2c2e-47b2-b66d-3c8c74030eb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.470 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-9cd5db3b-c54c-46c9-b59b-e477b2784420
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/9cd5db3b-c54c-46c9-b59b-e477b2784420.pid.haproxy
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 9cd5db3b-c54c-46c9-b59b-e477b2784420
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:08:24 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:24.471 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'env', 'PROCESS_TAG=haproxy-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9cd5db3b-c54c-46c9-b59b-e477b2784420.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:08:24 np0005558241 ovn_controller[148476]: 2025-12-13T09:08:24Z|01446|binding|INFO|Releasing lport 26b2c676-4744-4ec7-b362-11535b0ba025 from this chassis (sb_readonly=0)
Dec 13 04:08:24 np0005558241 nova_compute[248510]: 2025-12-13 09:08:24.498 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3219: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 13 04:08:24 np0005558241 podman[389927]: 2025-12-13 09:08:24.967669705 +0000 UTC m=+0.071454245 container create a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:08:25 np0005558241 systemd[1]: Started libpod-conmon-a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b.scope.
Dec 13 04:08:25 np0005558241 podman[389927]: 2025-12-13 09:08:24.941155581 +0000 UTC m=+0.044940141 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:08:25 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:08:25 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053d728be1d4b2966613ad5805ec267c950bc3159538ec562a41c9a868f9efff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:08:25 np0005558241 podman[389927]: 2025-12-13 09:08:25.060264935 +0000 UTC m=+0.164049475 container init a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:08:25 np0005558241 podman[389927]: 2025-12-13 09:08:25.07243281 +0000 UTC m=+0.176217320 container start a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:08:25 np0005558241 neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420[389942]: [NOTICE]   (389946) : New worker (389948) forked
Dec 13 04:08:25 np0005558241 neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420[389942]: [NOTICE]   (389946) : Loading success.
Dec 13 04:08:25 np0005558241 nova_compute[248510]: 2025-12-13 09:08:25.839 248514 DEBUG nova.compute.manager [req-3c4709d6-4a5c-4035-b4aa-e896c6096d6f req-a37a78ff-15dc-4775-93eb-0c72eadd043b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-plugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:08:25 np0005558241 nova_compute[248510]: 2025-12-13 09:08:25.839 248514 DEBUG oslo_concurrency.lockutils [req-3c4709d6-4a5c-4035-b4aa-e896c6096d6f req-a37a78ff-15dc-4775-93eb-0c72eadd043b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:08:25 np0005558241 nova_compute[248510]: 2025-12-13 09:08:25.839 248514 DEBUG oslo_concurrency.lockutils [req-3c4709d6-4a5c-4035-b4aa-e896c6096d6f req-a37a78ff-15dc-4775-93eb-0c72eadd043b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:08:25 np0005558241 nova_compute[248510]: 2025-12-13 09:08:25.840 248514 DEBUG oslo_concurrency.lockutils [req-3c4709d6-4a5c-4035-b4aa-e896c6096d6f req-a37a78ff-15dc-4775-93eb-0c72eadd043b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:08:25 np0005558241 nova_compute[248510]: 2025-12-13 09:08:25.840 248514 DEBUG nova.compute.manager [req-3c4709d6-4a5c-4035-b4aa-e896c6096d6f req-a37a78ff-15dc-4775-93eb-0c72eadd043b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] No waiting events found dispatching network-vif-plugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:08:25 np0005558241 nova_compute[248510]: 2025-12-13 09:08:25.841 248514 WARNING nova.compute.manager [req-3c4709d6-4a5c-4035-b4aa-e896c6096d6f req-a37a78ff-15dc-4775-93eb-0c72eadd043b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received unexpected event network-vif-plugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c for instance with vm_state active and task_state None.#033[00m
Dec 13 04:08:25 np0005558241 nova_compute[248510]: 2025-12-13 09:08:25.964 248514 DEBUG nova.compute.manager [req-135481f3-62f8-4df3-a20f-8d54b54bd6a7 req-c66c85ec-a4a0-459f-a81b-07095d53e34b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-plugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:08:25 np0005558241 nova_compute[248510]: 2025-12-13 09:08:25.964 248514 DEBUG oslo_concurrency.lockutils [req-135481f3-62f8-4df3-a20f-8d54b54bd6a7 req-c66c85ec-a4a0-459f-a81b-07095d53e34b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:08:25 np0005558241 nova_compute[248510]: 2025-12-13 09:08:25.965 248514 DEBUG oslo_concurrency.lockutils [req-135481f3-62f8-4df3-a20f-8d54b54bd6a7 req-c66c85ec-a4a0-459f-a81b-07095d53e34b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:08:25 np0005558241 nova_compute[248510]: 2025-12-13 09:08:25.965 248514 DEBUG oslo_concurrency.lockutils [req-135481f3-62f8-4df3-a20f-8d54b54bd6a7 req-c66c85ec-a4a0-459f-a81b-07095d53e34b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:08:25 np0005558241 nova_compute[248510]: 2025-12-13 09:08:25.965 248514 DEBUG nova.compute.manager [req-135481f3-62f8-4df3-a20f-8d54b54bd6a7 req-c66c85ec-a4a0-459f-a81b-07095d53e34b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] No waiting events found dispatching network-vif-plugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:08:25 np0005558241 nova_compute[248510]: 2025-12-13 09:08:25.965 248514 WARNING nova.compute.manager [req-135481f3-62f8-4df3-a20f-8d54b54bd6a7 req-c66c85ec-a4a0-459f-a81b-07095d53e34b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received unexpected event network-vif-plugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c for instance with vm_state active and task_state None.#033[00m
Dec 13 04:08:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:26 np0005558241 nova_compute[248510]: 2025-12-13 09:08:26.615 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3220: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 3.5 KiB/s rd, 12 KiB/s wr, 5 op/s
Dec 13 04:08:26 np0005558241 nova_compute[248510]: 2025-12-13 09:08:26.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:08:26 np0005558241 nova_compute[248510]: 2025-12-13 09:08:26.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:08:26 np0005558241 nova_compute[248510]: 2025-12-13 09:08:26.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:08:26 np0005558241 nova_compute[248510]: 2025-12-13 09:08:26.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:08:26 np0005558241 nova_compute[248510]: 2025-12-13 09:08:26.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:08:26 np0005558241 nova_compute[248510]: 2025-12-13 09:08:26.810 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:08:26 np0005558241 nova_compute[248510]: 2025-12-13 09:08:26.811 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:08:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:08:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2721834458' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:08:27 np0005558241 nova_compute[248510]: 2025-12-13 09:08:27.362 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:08:27 np0005558241 nova_compute[248510]: 2025-12-13 09:08:27.447 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:08:27 np0005558241 nova_compute[248510]: 2025-12-13 09:08:27.447 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:08:27 np0005558241 nova_compute[248510]: 2025-12-13 09:08:27.666 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:08:27 np0005558241 nova_compute[248510]: 2025-12-13 09:08:27.668 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3293MB free_disk=59.966510431841016GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:08:27 np0005558241 nova_compute[248510]: 2025-12-13 09:08:27.668 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:08:27 np0005558241 nova_compute[248510]: 2025-12-13 09:08:27.668 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:08:27 np0005558241 nova_compute[248510]: 2025-12-13 09:08:27.858 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 54c767d1-14e1-4a29-be59-440d4e412c4e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:08:27 np0005558241 nova_compute[248510]: 2025-12-13 09:08:27.859 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:08:27 np0005558241 nova_compute[248510]: 2025-12-13 09:08:27.859 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:08:27 np0005558241 nova_compute[248510]: 2025-12-13 09:08:27.922 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:08:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:08:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2326658025' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:08:28 np0005558241 nova_compute[248510]: 2025-12-13 09:08:28.477 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:08:28 np0005558241 nova_compute[248510]: 2025-12-13 09:08:28.483 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:08:28 np0005558241 nova_compute[248510]: 2025-12-13 09:08:28.531 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:08:28 np0005558241 nova_compute[248510]: 2025-12-13 09:08:28.561 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3221: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 708 KiB/s rd, 12 KiB/s wr, 33 op/s
Dec 13 04:08:28 np0005558241 nova_compute[248510]: 2025-12-13 09:08:28.679 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:08:28 np0005558241 nova_compute[248510]: 2025-12-13 09:08:28.679 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:08:29 np0005558241 NetworkManager[50376]: <info>  [1765616909.6511] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/600)
Dec 13 04:08:29 np0005558241 NetworkManager[50376]: <info>  [1765616909.6520] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/601)
Dec 13 04:08:29 np0005558241 nova_compute[248510]: 2025-12-13 09:08:29.650 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:29 np0005558241 ovn_controller[148476]: 2025-12-13T09:08:29Z|01447|binding|INFO|Releasing lport bbfdb82a-c1cb-4d3d-b44d-9475a3177d38 from this chassis (sb_readonly=0)
Dec 13 04:08:29 np0005558241 ovn_controller[148476]: 2025-12-13T09:08:29Z|01448|binding|INFO|Releasing lport 26b2c676-4744-4ec7-b362-11535b0ba025 from this chassis (sb_readonly=0)
Dec 13 04:08:29 np0005558241 nova_compute[248510]: 2025-12-13 09:08:29.687 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:29 np0005558241 ovn_controller[148476]: 2025-12-13T09:08:29Z|01449|binding|INFO|Releasing lport bbfdb82a-c1cb-4d3d-b44d-9475a3177d38 from this chassis (sb_readonly=0)
Dec 13 04:08:29 np0005558241 ovn_controller[148476]: 2025-12-13T09:08:29Z|01450|binding|INFO|Releasing lport 26b2c676-4744-4ec7-b362-11535b0ba025 from this chassis (sb_readonly=0)
Dec 13 04:08:29 np0005558241 nova_compute[248510]: 2025-12-13 09:08:29.694 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:29 np0005558241 nova_compute[248510]: 2025-12-13 09:08:29.952 248514 DEBUG nova.compute.manager [req-ea3ae052-a49e-4dc3-8ba3-e2c77a0783ac req-4c542d85-830c-48c8-84f4-70b8c12f3944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-changed-820e6a4a-a776-40a1-a7c0-d34cd3d2543c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:08:29 np0005558241 nova_compute[248510]: 2025-12-13 09:08:29.953 248514 DEBUG nova.compute.manager [req-ea3ae052-a49e-4dc3-8ba3-e2c77a0783ac req-4c542d85-830c-48c8-84f4-70b8c12f3944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Refreshing instance network info cache due to event network-changed-820e6a4a-a776-40a1-a7c0-d34cd3d2543c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:08:29 np0005558241 nova_compute[248510]: 2025-12-13 09:08:29.954 248514 DEBUG oslo_concurrency.lockutils [req-ea3ae052-a49e-4dc3-8ba3-e2c77a0783ac req-4c542d85-830c-48c8-84f4-70b8c12f3944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:08:29 np0005558241 nova_compute[248510]: 2025-12-13 09:08:29.954 248514 DEBUG oslo_concurrency.lockutils [req-ea3ae052-a49e-4dc3-8ba3-e2c77a0783ac req-4c542d85-830c-48c8-84f4-70b8c12f3944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:08:29 np0005558241 nova_compute[248510]: 2025-12-13 09:08:29.954 248514 DEBUG nova.network.neutron [req-ea3ae052-a49e-4dc3-8ba3-e2c77a0783ac req-4c542d85-830c-48c8-84f4-70b8c12f3944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Refreshing network info cache for port 820e6a4a-a776-40a1-a7c0-d34cd3d2543c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:08:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3222: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:08:30 np0005558241 nova_compute[248510]: 2025-12-13 09:08:30.679 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:08:30 np0005558241 nova_compute[248510]: 2025-12-13 09:08:30.680 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:08:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:31 np0005558241 nova_compute[248510]: 2025-12-13 09:08:31.618 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:31 np0005558241 nova_compute[248510]: 2025-12-13 09:08:31.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:08:32 np0005558241 nova_compute[248510]: 2025-12-13 09:08:32.484 248514 DEBUG nova.network.neutron [req-ea3ae052-a49e-4dc3-8ba3-e2c77a0783ac req-4c542d85-830c-48c8-84f4-70b8c12f3944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updated VIF entry in instance network info cache for port 820e6a4a-a776-40a1-a7c0-d34cd3d2543c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:08:32 np0005558241 nova_compute[248510]: 2025-12-13 09:08:32.485 248514 DEBUG nova.network.neutron [req-ea3ae052-a49e-4dc3-8ba3-e2c77a0783ac req-4c542d85-830c-48c8-84f4-70b8c12f3944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updating instance_info_cache with network_info: [{"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:08:32 np0005558241 nova_compute[248510]: 2025-12-13 09:08:32.516 248514 DEBUG oslo_concurrency.lockutils [req-ea3ae052-a49e-4dc3-8ba3-e2c77a0783ac req-4c542d85-830c-48c8-84f4-70b8c12f3944 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:08:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3223: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:08:33 np0005558241 nova_compute[248510]: 2025-12-13 09:08:33.566 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:33 np0005558241 nova_compute[248510]: 2025-12-13 09:08:33.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:08:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3224: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:08:36 np0005558241 ovn_controller[148476]: 2025-12-13T09:08:36Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bc:71:ee 10.100.0.12
Dec 13 04:08:36 np0005558241 ovn_controller[148476]: 2025-12-13T09:08:36Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bc:71:ee 10.100.0.12
Dec 13 04:08:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:36 np0005558241 nova_compute[248510]: 2025-12-13 09:08:36.622 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3225: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 68 op/s
Dec 13 04:08:38 np0005558241 nova_compute[248510]: 2025-12-13 09:08:38.569 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3226: 321 pgs: 321 active+clean; 94 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 372 KiB/s wr, 83 op/s
Dec 13 04:08:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:08:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:08:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:08:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:08:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:08:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:08:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3227: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 103 op/s
Dec 13 04:08:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:41 np0005558241 nova_compute[248510]: 2025-12-13 09:08:41.625 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3228: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:08:43 np0005558241 nova_compute[248510]: 2025-12-13 09:08:43.571 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3229: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:08:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:46 np0005558241 nova_compute[248510]: 2025-12-13 09:08:46.660 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3230: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:08:48 np0005558241 nova_compute[248510]: 2025-12-13 09:08:48.574 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3231: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:08:49 np0005558241 podman[390004]: 2025-12-13 09:08:49.996614336 +0000 UTC m=+0.083840685 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 04:08:50 np0005558241 podman[390005]: 2025-12-13 09:08:50.003711749 +0000 UTC m=+0.072221075 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:08:50 np0005558241 podman[390003]: 2025-12-13 09:08:50.031973199 +0000 UTC m=+0.119104266 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 04:08:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3232: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 249 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Dec 13 04:08:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:51 np0005558241 nova_compute[248510]: 2025-12-13 09:08:51.694 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3233: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Dec 13 04:08:52 np0005558241 nova_compute[248510]: 2025-12-13 09:08:52.774 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:08:52 np0005558241 nova_compute[248510]: 2025-12-13 09:08:52.775 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:08:52 np0005558241 nova_compute[248510]: 2025-12-13 09:08:52.869 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:08:53 np0005558241 nova_compute[248510]: 2025-12-13 09:08:53.215 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:08:53 np0005558241 nova_compute[248510]: 2025-12-13 09:08:53.215 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:08:53 np0005558241 nova_compute[248510]: 2025-12-13 09:08:53.224 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:08:53 np0005558241 nova_compute[248510]: 2025-12-13 09:08:53.225 248514 INFO nova.compute.claims [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:08:53 np0005558241 nova_compute[248510]: 2025-12-13 09:08:53.577 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:53 np0005558241 nova_compute[248510]: 2025-12-13 09:08:53.689 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:08:53 np0005558241 nova_compute[248510]: 2025-12-13 09:08:53.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:08:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:08:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3190108441' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:08:54 np0005558241 nova_compute[248510]: 2025-12-13 09:08:54.221 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:08:54 np0005558241 nova_compute[248510]: 2025-12-13 09:08:54.231 248514 DEBUG nova.compute.provider_tree [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:08:54 np0005558241 nova_compute[248510]: 2025-12-13 09:08:54.518 248514 DEBUG nova.scheduler.client.report [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:08:54 np0005558241 nova_compute[248510]: 2025-12-13 09:08:54.589 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:08:54 np0005558241 nova_compute[248510]: 2025-12-13 09:08:54.591 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:08:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3234: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Dec 13 04:08:54 np0005558241 nova_compute[248510]: 2025-12-13 09:08:54.815 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:08:54 np0005558241 nova_compute[248510]: 2025-12-13 09:08:54.815 248514 DEBUG nova.network.neutron [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:08:55 np0005558241 nova_compute[248510]: 2025-12-13 09:08:55.224 248514 INFO nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:08:55 np0005558241 nova_compute[248510]: 2025-12-13 09:08:55.262 248514 DEBUG nova.policy [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:08:55 np0005558241 nova_compute[248510]: 2025-12-13 09:08:55.438 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:08:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:55.446 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:08:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:55.447 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:08:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:08:55.449 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:08:55 np0005558241 nova_compute[248510]: 2025-12-13 09:08:55.972 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:08:55 np0005558241 nova_compute[248510]: 2025-12-13 09:08:55.974 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:08:55 np0005558241 nova_compute[248510]: 2025-12-13 09:08:55.975 248514 INFO nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Creating image(s)#033[00m
Dec 13 04:08:56 np0005558241 nova_compute[248510]: 2025-12-13 09:08:56.008 248514 DEBUG nova.storage.rbd_utils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image bdbb2140-812f-4913-a83e-7dadd1968cde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:08:56 np0005558241 nova_compute[248510]: 2025-12-13 09:08:56.051 248514 DEBUG nova.storage.rbd_utils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image bdbb2140-812f-4913-a83e-7dadd1968cde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:08:56 np0005558241 nova_compute[248510]: 2025-12-13 09:08:56.079 248514 DEBUG nova.storage.rbd_utils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image bdbb2140-812f-4913-a83e-7dadd1968cde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:08:56 np0005558241 nova_compute[248510]: 2025-12-13 09:08:56.084 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:08:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:56 np0005558241 nova_compute[248510]: 2025-12-13 09:08:56.176 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:08:56 np0005558241 nova_compute[248510]: 2025-12-13 09:08:56.177 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:08:56 np0005558241 nova_compute[248510]: 2025-12-13 09:08:56.178 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:08:56 np0005558241 nova_compute[248510]: 2025-12-13 09:08:56.178 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:08:56 np0005558241 nova_compute[248510]: 2025-12-13 09:08:56.205 248514 DEBUG nova.storage.rbd_utils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image bdbb2140-812f-4913-a83e-7dadd1968cde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:08:56 np0005558241 nova_compute[248510]: 2025-12-13 09:08:56.210 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 bdbb2140-812f-4913-a83e-7dadd1968cde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:08:56 np0005558241 nova_compute[248510]: 2025-12-13 09:08:56.559 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 bdbb2140-812f-4913-a83e-7dadd1968cde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:08:56 np0005558241 nova_compute[248510]: 2025-12-13 09:08:56.645 248514 DEBUG nova.storage.rbd_utils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image bdbb2140-812f-4913-a83e-7dadd1968cde_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:08:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3235: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Dec 13 04:08:56 np0005558241 nova_compute[248510]: 2025-12-13 09:08:56.698 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:08:56 np0005558241 nova_compute[248510]: 2025-12-13 09:08:56.746 248514 DEBUG nova.objects.instance [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid bdbb2140-812f-4913-a83e-7dadd1968cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:08:56 np0005558241 nova_compute[248510]: 2025-12-13 09:08:56.985 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 04:08:56 np0005558241 nova_compute[248510]: 2025-12-13 09:08:56.985 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Ensure instance console log exists: /var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:08:56 np0005558241 nova_compute[248510]: 2025-12-13 09:08:56.986 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:08:56 np0005558241 nova_compute[248510]: 2025-12-13 09:08:56.986 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:08:56 np0005558241 nova_compute[248510]: 2025-12-13 09:08:56.987 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:08:57 np0005558241 nova_compute[248510]: 2025-12-13 09:08:57.717 248514 DEBUG nova.network.neutron [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Successfully created port: c509d630-f81b-402a-bae1-eae9e8fca8ca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:08:58 np0005558241 podman[390350]: 2025-12-13 09:08:58.118752069 +0000 UTC m=+0.094572812 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:08:58 np0005558241 podman[390350]: 2025-12-13 09:08:58.224476428 +0000 UTC m=+0.200297171 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 04:08:58 np0005558241 nova_compute[248510]: 2025-12-13 09:08:58.580 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3236: 321 pgs: 321 active+clean; 140 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 727 KiB/s wr, 13 op/s
Dec 13 04:08:59 np0005558241 nova_compute[248510]: 2025-12-13 09:08:59.101 248514 DEBUG nova.network.neutron [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Successfully created port: 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:08:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:08:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:08:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:08:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:09:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:00.156 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:09:00 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:00.158 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:09:00 np0005558241 nova_compute[248510]: 2025-12-13 09:09:00.158 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:09:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:09:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:09:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:09:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:09:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:09:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:09:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:09:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:09:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:09:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:09:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:09:00 np0005558241 nova_compute[248510]: 2025-12-13 09:09:00.238 248514 DEBUG nova.network.neutron [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Successfully updated port: c509d630-f81b-402a-bae1-eae9e8fca8ca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:09:00 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:09:00 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:09:00 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:09:00 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:09:00 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:09:00 np0005558241 nova_compute[248510]: 2025-12-13 09:09:00.357 248514 DEBUG nova.compute.manager [req-3f198f31-366c-4d26-b8fc-415461919142 req-cbd6f26a-537b-495c-a7bd-e288b7bca008 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-changed-c509d630-f81b-402a-bae1-eae9e8fca8ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:09:00 np0005558241 nova_compute[248510]: 2025-12-13 09:09:00.358 248514 DEBUG nova.compute.manager [req-3f198f31-366c-4d26-b8fc-415461919142 req-cbd6f26a-537b-495c-a7bd-e288b7bca008 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Refreshing instance network info cache due to event network-changed-c509d630-f81b-402a-bae1-eae9e8fca8ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:09:00 np0005558241 nova_compute[248510]: 2025-12-13 09:09:00.358 248514 DEBUG oslo_concurrency.lockutils [req-3f198f31-366c-4d26-b8fc-415461919142 req-cbd6f26a-537b-495c-a7bd-e288b7bca008 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:09:00 np0005558241 nova_compute[248510]: 2025-12-13 09:09:00.358 248514 DEBUG oslo_concurrency.lockutils [req-3f198f31-366c-4d26-b8fc-415461919142 req-cbd6f26a-537b-495c-a7bd-e288b7bca008 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:09:00 np0005558241 nova_compute[248510]: 2025-12-13 09:09:00.359 248514 DEBUG nova.network.neutron [req-3f198f31-366c-4d26-b8fc-415461919142 req-cbd6f26a-537b-495c-a7bd-e288b7bca008 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Refreshing network info cache for port c509d630-f81b-402a-bae1-eae9e8fca8ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:09:00 np0005558241 podman[390681]: 2025-12-13 09:09:00.623842938 +0000 UTC m=+0.045853364 container create 5207cc0003c80fca79ef91ad6d2450277e9b56583e7711485a490b5a19d26429 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:09:00 np0005558241 systemd[1]: Started libpod-conmon-5207cc0003c80fca79ef91ad6d2450277e9b56583e7711485a490b5a19d26429.scope.
Dec 13 04:09:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3237: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:09:00 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:09:00 np0005558241 podman[390681]: 2025-12-13 09:09:00.604629612 +0000 UTC m=+0.026640068 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:09:00 np0005558241 podman[390681]: 2025-12-13 09:09:00.715444243 +0000 UTC m=+0.137454689 container init 5207cc0003c80fca79ef91ad6d2450277e9b56583e7711485a490b5a19d26429 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_johnson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:09:00 np0005558241 podman[390681]: 2025-12-13 09:09:00.725719488 +0000 UTC m=+0.147729924 container start 5207cc0003c80fca79ef91ad6d2450277e9b56583e7711485a490b5a19d26429 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:09:00 np0005558241 podman[390681]: 2025-12-13 09:09:00.72927943 +0000 UTC m=+0.151289886 container attach 5207cc0003c80fca79ef91ad6d2450277e9b56583e7711485a490b5a19d26429 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:09:00 np0005558241 laughing_johnson[390697]: 167 167
Dec 13 04:09:00 np0005558241 systemd[1]: libpod-5207cc0003c80fca79ef91ad6d2450277e9b56583e7711485a490b5a19d26429.scope: Deactivated successfully.
Dec 13 04:09:00 np0005558241 podman[390681]: 2025-12-13 09:09:00.734411362 +0000 UTC m=+0.156421798 container died 5207cc0003c80fca79ef91ad6d2450277e9b56583e7711485a490b5a19d26429 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_johnson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:09:00 np0005558241 systemd[1]: var-lib-containers-storage-overlay-016b61c2bb8803811178b21a8f60a111a8c55574ef3242aa8049e68d845e1214-merged.mount: Deactivated successfully.
Dec 13 04:09:00 np0005558241 podman[390681]: 2025-12-13 09:09:00.777935166 +0000 UTC m=+0.199945592 container remove 5207cc0003c80fca79ef91ad6d2450277e9b56583e7711485a490b5a19d26429 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:09:00 np0005558241 systemd[1]: libpod-conmon-5207cc0003c80fca79ef91ad6d2450277e9b56583e7711485a490b5a19d26429.scope: Deactivated successfully.
Dec 13 04:09:00 np0005558241 podman[390721]: 2025-12-13 09:09:00.974715725 +0000 UTC m=+0.051951032 container create b9049e819078a5758b29837e190937bb29064c412840f7c23bd3aff5540e20bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 04:09:01 np0005558241 systemd[1]: Started libpod-conmon-b9049e819078a5758b29837e190937bb29064c412840f7c23bd3aff5540e20bc.scope.
Dec 13 04:09:01 np0005558241 podman[390721]: 2025-12-13 09:09:00.950339366 +0000 UTC m=+0.027574693 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:09:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:09:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ba5376a4d804508007d3db31beac434e560514e2bc55aecb9ded101201a33a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:09:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ba5376a4d804508007d3db31beac434e560514e2bc55aecb9ded101201a33a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:09:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ba5376a4d804508007d3db31beac434e560514e2bc55aecb9ded101201a33a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:09:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ba5376a4d804508007d3db31beac434e560514e2bc55aecb9ded101201a33a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:09:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ba5376a4d804508007d3db31beac434e560514e2bc55aecb9ded101201a33a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:09:01 np0005558241 podman[390721]: 2025-12-13 09:09:01.079214892 +0000 UTC m=+0.156450289 container init b9049e819078a5758b29837e190937bb29064c412840f7c23bd3aff5540e20bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lalande, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:09:01 np0005558241 podman[390721]: 2025-12-13 09:09:01.091191291 +0000 UTC m=+0.168426598 container start b9049e819078a5758b29837e190937bb29064c412840f7c23bd3aff5540e20bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lalande, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:09:01 np0005558241 podman[390721]: 2025-12-13 09:09:01.109462653 +0000 UTC m=+0.186698030 container attach b9049e819078a5758b29837e190937bb29064c412840f7c23bd3aff5540e20bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lalande, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:09:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:01 np0005558241 nova_compute[248510]: 2025-12-13 09:09:01.389 248514 DEBUG nova.network.neutron [req-3f198f31-366c-4d26-b8fc-415461919142 req-cbd6f26a-537b-495c-a7bd-e288b7bca008 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:09:01 np0005558241 sweet_lalande[390737]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:09:01 np0005558241 sweet_lalande[390737]: --> All data devices are unavailable
Dec 13 04:09:01 np0005558241 nova_compute[248510]: 2025-12-13 09:09:01.702 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:01 np0005558241 systemd[1]: libpod-b9049e819078a5758b29837e190937bb29064c412840f7c23bd3aff5540e20bc.scope: Deactivated successfully.
Dec 13 04:09:01 np0005558241 podman[390721]: 2025-12-13 09:09:01.721686415 +0000 UTC m=+0.798921782 container died b9049e819078a5758b29837e190937bb29064c412840f7c23bd3aff5540e20bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 04:09:01 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d8ba5376a4d804508007d3db31beac434e560514e2bc55aecb9ded101201a33a-merged.mount: Deactivated successfully.
Dec 13 04:09:01 np0005558241 podman[390721]: 2025-12-13 09:09:01.788049088 +0000 UTC m=+0.865284435 container remove b9049e819078a5758b29837e190937bb29064c412840f7c23bd3aff5540e20bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:09:01 np0005558241 systemd[1]: libpod-conmon-b9049e819078a5758b29837e190937bb29064c412840f7c23bd3aff5540e20bc.scope: Deactivated successfully.
Dec 13 04:09:02 np0005558241 podman[390831]: 2025-12-13 09:09:02.280889219 +0000 UTC m=+0.050164865 container create d051743a61a9823c2807d1f3b460e89e83181e1e3bdd142680a7223306c50f45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:09:02 np0005558241 systemd[1]: Started libpod-conmon-d051743a61a9823c2807d1f3b460e89e83181e1e3bdd142680a7223306c50f45.scope.
Dec 13 04:09:02 np0005558241 podman[390831]: 2025-12-13 09:09:02.259038905 +0000 UTC m=+0.028314541 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:09:02 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:09:02 np0005558241 podman[390831]: 2025-12-13 09:09:02.392557002 +0000 UTC m=+0.161832698 container init d051743a61a9823c2807d1f3b460e89e83181e1e3bdd142680a7223306c50f45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 04:09:02 np0005558241 podman[390831]: 2025-12-13 09:09:02.404929321 +0000 UTC m=+0.174204937 container start d051743a61a9823c2807d1f3b460e89e83181e1e3bdd142680a7223306c50f45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_leavitt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:09:02 np0005558241 podman[390831]: 2025-12-13 09:09:02.409194891 +0000 UTC m=+0.178470607 container attach d051743a61a9823c2807d1f3b460e89e83181e1e3bdd142680a7223306c50f45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 04:09:02 np0005558241 sweet_leavitt[390847]: 167 167
Dec 13 04:09:02 np0005558241 systemd[1]: libpod-d051743a61a9823c2807d1f3b460e89e83181e1e3bdd142680a7223306c50f45.scope: Deactivated successfully.
Dec 13 04:09:02 np0005558241 podman[390831]: 2025-12-13 09:09:02.413300257 +0000 UTC m=+0.182575873 container died d051743a61a9823c2807d1f3b460e89e83181e1e3bdd142680a7223306c50f45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:09:02 np0005558241 systemd[1]: var-lib-containers-storage-overlay-33f25ebf3ff7e87d2f5ad41b6d34921e45cbf00fef98ed10c808e486ca75a7aa-merged.mount: Deactivated successfully.
Dec 13 04:09:02 np0005558241 podman[390831]: 2025-12-13 09:09:02.459990662 +0000 UTC m=+0.229266308 container remove d051743a61a9823c2807d1f3b460e89e83181e1e3bdd142680a7223306c50f45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_leavitt, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 04:09:02 np0005558241 systemd[1]: libpod-conmon-d051743a61a9823c2807d1f3b460e89e83181e1e3bdd142680a7223306c50f45.scope: Deactivated successfully.
Dec 13 04:09:02 np0005558241 nova_compute[248510]: 2025-12-13 09:09:02.667 248514 DEBUG nova.network.neutron [req-3f198f31-366c-4d26-b8fc-415461919142 req-cbd6f26a-537b-495c-a7bd-e288b7bca008 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:09:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3238: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:09:02 np0005558241 podman[390870]: 2025-12-13 09:09:02.755629012 +0000 UTC m=+0.067482583 container create 141bd244b368f9d07bff5cbc318cfc605e2e4f3656835176ffbb995cc2b2477d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 04:09:02 np0005558241 nova_compute[248510]: 2025-12-13 09:09:02.785 248514 DEBUG oslo_concurrency.lockutils [req-3f198f31-366c-4d26-b8fc-415461919142 req-cbd6f26a-537b-495c-a7bd-e288b7bca008 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:09:02 np0005558241 systemd[1]: Started libpod-conmon-141bd244b368f9d07bff5cbc318cfc605e2e4f3656835176ffbb995cc2b2477d.scope.
Dec 13 04:09:02 np0005558241 podman[390870]: 2025-12-13 09:09:02.734599839 +0000 UTC m=+0.046453440 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:09:02 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:09:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877a3524b7f9e4e14faa74ab94757888d5e4a55e256d523c652c2afdc92fdec0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:09:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877a3524b7f9e4e14faa74ab94757888d5e4a55e256d523c652c2afdc92fdec0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:09:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877a3524b7f9e4e14faa74ab94757888d5e4a55e256d523c652c2afdc92fdec0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:09:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877a3524b7f9e4e14faa74ab94757888d5e4a55e256d523c652c2afdc92fdec0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:09:02 np0005558241 podman[390870]: 2025-12-13 09:09:02.87255709 +0000 UTC m=+0.184410691 container init 141bd244b368f9d07bff5cbc318cfc605e2e4f3656835176ffbb995cc2b2477d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cannon, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:09:02 np0005558241 podman[390870]: 2025-12-13 09:09:02.890692308 +0000 UTC m=+0.202545879 container start 141bd244b368f9d07bff5cbc318cfc605e2e4f3656835176ffbb995cc2b2477d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:09:02 np0005558241 podman[390870]: 2025-12-13 09:09:02.893905641 +0000 UTC m=+0.205759212 container attach 141bd244b368f9d07bff5cbc318cfc605e2e4f3656835176ffbb995cc2b2477d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]: {
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:    "0": [
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:        {
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "devices": [
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "/dev/loop3"
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            ],
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "lv_name": "ceph_lv0",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "lv_size": "21470642176",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "name": "ceph_lv0",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "tags": {
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.cluster_name": "ceph",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.crush_device_class": "",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.encrypted": "0",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.objectstore": "bluestore",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.osd_id": "0",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.type": "block",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.vdo": "0",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.with_tpm": "0"
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            },
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "type": "block",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "vg_name": "ceph_vg0"
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:        }
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:    ],
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:    "1": [
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:        {
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "devices": [
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "/dev/loop4"
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            ],
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "lv_name": "ceph_lv1",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "lv_size": "21470642176",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "name": "ceph_lv1",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "tags": {
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.cluster_name": "ceph",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.crush_device_class": "",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.encrypted": "0",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.objectstore": "bluestore",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.osd_id": "1",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.type": "block",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.vdo": "0",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.with_tpm": "0"
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            },
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "type": "block",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "vg_name": "ceph_vg1"
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:        }
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:    ],
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:    "2": [
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:        {
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "devices": [
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "/dev/loop5"
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            ],
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "lv_name": "ceph_lv2",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "lv_size": "21470642176",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "name": "ceph_lv2",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "tags": {
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.cluster_name": "ceph",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.crush_device_class": "",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.encrypted": "0",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.objectstore": "bluestore",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.osd_id": "2",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.type": "block",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.vdo": "0",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:                "ceph.with_tpm": "0"
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            },
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "type": "block",
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:            "vg_name": "ceph_vg2"
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:        }
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]:    ]
Dec 13 04:09:03 np0005558241 zealous_cannon[390886]: }
Dec 13 04:09:03 np0005558241 nova_compute[248510]: 2025-12-13 09:09:03.242 248514 DEBUG nova.network.neutron [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Successfully updated port: 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:09:03 np0005558241 systemd[1]: libpod-141bd244b368f9d07bff5cbc318cfc605e2e4f3656835176ffbb995cc2b2477d.scope: Deactivated successfully.
Dec 13 04:09:03 np0005558241 podman[390870]: 2025-12-13 09:09:03.252282482 +0000 UTC m=+0.564136133 container died 141bd244b368f9d07bff5cbc318cfc605e2e4f3656835176ffbb995cc2b2477d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Dec 13 04:09:03 np0005558241 nova_compute[248510]: 2025-12-13 09:09:03.297 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:09:03 np0005558241 nova_compute[248510]: 2025-12-13 09:09:03.297 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:09:03 np0005558241 nova_compute[248510]: 2025-12-13 09:09:03.298 248514 DEBUG nova.network.neutron [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:09:03 np0005558241 systemd[1]: var-lib-containers-storage-overlay-877a3524b7f9e4e14faa74ab94757888d5e4a55e256d523c652c2afdc92fdec0-merged.mount: Deactivated successfully.
Dec 13 04:09:03 np0005558241 nova_compute[248510]: 2025-12-13 09:09:03.442 248514 DEBUG nova.compute.manager [req-7cc93416-649d-4032-91f5-4a9116d2e4f4 req-9cb4d5b9-df85-459d-9b77-e9d7bea0c2ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-changed-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:03 np0005558241 nova_compute[248510]: 2025-12-13 09:09:03.442 248514 DEBUG nova.compute.manager [req-7cc93416-649d-4032-91f5-4a9116d2e4f4 req-9cb4d5b9-df85-459d-9b77-e9d7bea0c2ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Refreshing instance network info cache due to event network-changed-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:09:03 np0005558241 nova_compute[248510]: 2025-12-13 09:09:03.443 248514 DEBUG oslo_concurrency.lockutils [req-7cc93416-649d-4032-91f5-4a9116d2e4f4 req-9cb4d5b9-df85-459d-9b77-e9d7bea0c2ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:09:03 np0005558241 nova_compute[248510]: 2025-12-13 09:09:03.514 248514 DEBUG nova.network.neutron [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:09:03 np0005558241 podman[390870]: 2025-12-13 09:09:03.521273954 +0000 UTC m=+0.833127555 container remove 141bd244b368f9d07bff5cbc318cfc605e2e4f3656835176ffbb995cc2b2477d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cannon, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Dec 13 04:09:03 np0005558241 systemd[1]: libpod-conmon-141bd244b368f9d07bff5cbc318cfc605e2e4f3656835176ffbb995cc2b2477d.scope: Deactivated successfully.
Dec 13 04:09:03 np0005558241 nova_compute[248510]: 2025-12-13 09:09:03.582 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:04 np0005558241 podman[390969]: 2025-12-13 09:09:04.054263562 +0000 UTC m=+0.044654484 container create cd03aadef8f1037ac528b1a3a4fb67645614caf6d11945cbfd72f35616d02198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:09:04 np0005558241 systemd[1]: Started libpod-conmon-cd03aadef8f1037ac528b1a3a4fb67645614caf6d11945cbfd72f35616d02198.scope.
Dec 13 04:09:04 np0005558241 podman[390969]: 2025-12-13 09:09:04.032624563 +0000 UTC m=+0.023015505 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:09:04 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:09:04 np0005558241 podman[390969]: 2025-12-13 09:09:04.144979193 +0000 UTC m=+0.135370095 container init cd03aadef8f1037ac528b1a3a4fb67645614caf6d11945cbfd72f35616d02198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_allen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 04:09:04 np0005558241 podman[390969]: 2025-12-13 09:09:04.151765718 +0000 UTC m=+0.142156620 container start cd03aadef8f1037ac528b1a3a4fb67645614caf6d11945cbfd72f35616d02198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_allen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 04:09:04 np0005558241 podman[390969]: 2025-12-13 09:09:04.155386442 +0000 UTC m=+0.145777374 container attach cd03aadef8f1037ac528b1a3a4fb67645614caf6d11945cbfd72f35616d02198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_allen, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 04:09:04 np0005558241 bold_allen[390986]: 167 167
Dec 13 04:09:04 np0005558241 systemd[1]: libpod-cd03aadef8f1037ac528b1a3a4fb67645614caf6d11945cbfd72f35616d02198.scope: Deactivated successfully.
Dec 13 04:09:04 np0005558241 podman[390969]: 2025-12-13 09:09:04.160722669 +0000 UTC m=+0.151113571 container died cd03aadef8f1037ac528b1a3a4fb67645614caf6d11945cbfd72f35616d02198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_allen, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Dec 13 04:09:04 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3aa0b990a37fe48990895f550196c947e2356cbd8a87d4b9ae6d4805cb29b9bc-merged.mount: Deactivated successfully.
Dec 13 04:09:04 np0005558241 podman[390969]: 2025-12-13 09:09:04.198111865 +0000 UTC m=+0.188502787 container remove cd03aadef8f1037ac528b1a3a4fb67645614caf6d11945cbfd72f35616d02198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_allen, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:09:04 np0005558241 systemd[1]: libpod-conmon-cd03aadef8f1037ac528b1a3a4fb67645614caf6d11945cbfd72f35616d02198.scope: Deactivated successfully.
Dec 13 04:09:04 np0005558241 podman[391009]: 2025-12-13 09:09:04.386550798 +0000 UTC m=+0.030327843 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:09:04 np0005558241 podman[391009]: 2025-12-13 09:09:04.665246932 +0000 UTC m=+0.309023967 container create d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nobel, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:09:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3239: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:09:04 np0005558241 systemd[1]: Started libpod-conmon-d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e.scope.
Dec 13 04:09:04 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:09:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2070e95509077b2e8e39f8d13a9d819a3861a26f847a3ec18da37927c010d190/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:09:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2070e95509077b2e8e39f8d13a9d819a3861a26f847a3ec18da37927c010d190/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:09:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2070e95509077b2e8e39f8d13a9d819a3861a26f847a3ec18da37927c010d190/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:09:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2070e95509077b2e8e39f8d13a9d819a3861a26f847a3ec18da37927c010d190/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:09:05 np0005558241 podman[391009]: 2025-12-13 09:09:05.058532143 +0000 UTC m=+0.702309218 container init d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:09:05 np0005558241 podman[391009]: 2025-12-13 09:09:05.069669081 +0000 UTC m=+0.713446116 container start d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:09:05 np0005558241 podman[391009]: 2025-12-13 09:09:05.327694171 +0000 UTC m=+0.971471186 container attach d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nobel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:09:05 np0005558241 lvm[391104]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:09:05 np0005558241 lvm[391104]: VG ceph_vg0 finished
Dec 13 04:09:05 np0005558241 lvm[391105]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:09:05 np0005558241 lvm[391105]: VG ceph_vg1 finished
Dec 13 04:09:05 np0005558241 lvm[391107]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:09:05 np0005558241 lvm[391107]: VG ceph_vg2 finished
Dec 13 04:09:05 np0005558241 adoring_nobel[391026]: {}
Dec 13 04:09:06 np0005558241 systemd[1]: libpod-d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e.scope: Deactivated successfully.
Dec 13 04:09:06 np0005558241 systemd[1]: libpod-d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e.scope: Consumed 1.485s CPU time.
Dec 13 04:09:06 np0005558241 podman[391009]: 2025-12-13 09:09:06.010456665 +0000 UTC m=+1.654233730 container died d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nobel, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:09:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:06 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2070e95509077b2e8e39f8d13a9d819a3861a26f847a3ec18da37927c010d190-merged.mount: Deactivated successfully.
Dec 13 04:09:06 np0005558241 podman[391009]: 2025-12-13 09:09:06.188988512 +0000 UTC m=+1.832765497 container remove d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 04:09:06 np0005558241 systemd[1]: libpod-conmon-d098e4cc27d696ad285cb7ef20323428d4c506ab0f55713656cf769bf6e0407e.scope: Deactivated successfully.
Dec 13 04:09:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:09:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:09:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:09:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:09:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3240: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:09:06 np0005558241 nova_compute[248510]: 2025-12-13 09:09:06.705 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:07 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:09:07 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.405 248514 DEBUG nova.network.neutron [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Updating instance_info_cache with network_info: [{"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.470 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.471 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Instance network_info: |[{"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.471 248514 DEBUG oslo_concurrency.lockutils [req-7cc93416-649d-4032-91f5-4a9116d2e4f4 req-9cb4d5b9-df85-459d-9b77-e9d7bea0c2ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.471 248514 DEBUG nova.network.neutron [req-7cc93416-649d-4032-91f5-4a9116d2e4f4 req-9cb4d5b9-df85-459d-9b77-e9d7bea0c2ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Refreshing network info cache for port 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.476 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Start _get_guest_xml network_info=[{"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.483 248514 WARNING nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.495 248514 DEBUG nova.virt.libvirt.host [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.496 248514 DEBUG nova.virt.libvirt.host [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.500 248514 DEBUG nova.virt.libvirt.host [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.501 248514 DEBUG nova.virt.libvirt.host [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.501 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.501 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.502 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.502 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.502 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.502 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.503 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.503 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.503 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.503 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.504 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.504 248514 DEBUG nova.virt.hardware [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:09:07 np0005558241 nova_compute[248510]: 2025-12-13 09:09:07.507 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:09:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:09:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2325214439' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:09:08 np0005558241 nova_compute[248510]: 2025-12-13 09:09:08.125 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:09:08 np0005558241 nova_compute[248510]: 2025-12-13 09:09:08.159 248514 DEBUG nova.storage.rbd_utils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image bdbb2140-812f-4913-a83e-7dadd1968cde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:09:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:08.161 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:08 np0005558241 nova_compute[248510]: 2025-12-13 09:09:08.164 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:09:08 np0005558241 nova_compute[248510]: 2025-12-13 09:09:08.585 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3241: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:09:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:09:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/392839541' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:09:08 np0005558241 nova_compute[248510]: 2025-12-13 09:09:08.769 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:09:08 np0005558241 nova_compute[248510]: 2025-12-13 09:09:08.772 248514 DEBUG nova.virt.libvirt.vif [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:08:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1296664967',display_name='tempest-TestGettingAddress-server-1296664967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1296664967',id=140,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-76ioq0wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:08:55Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=bdbb2140-812f-4913-a83e-7dadd1968cde,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:09:08 np0005558241 nova_compute[248510]: 2025-12-13 09:09:08.773 248514 DEBUG nova.network.os_vif_util [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:09:08 np0005558241 nova_compute[248510]: 2025-12-13 09:09:08.775 248514 DEBUG nova.network.os_vif_util [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=c509d630-f81b-402a-bae1-eae9e8fca8ca,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc509d630-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:09:08 np0005558241 nova_compute[248510]: 2025-12-13 09:09:08.776 248514 DEBUG nova.virt.libvirt.vif [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:08:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1296664967',display_name='tempest-TestGettingAddress-server-1296664967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1296664967',id=140,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-76ioq0wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:08:55Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=bdbb2140-812f-4913-a83e-7dadd1968cde,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:09:08 np0005558241 nova_compute[248510]: 2025-12-13 09:09:08.776 248514 DEBUG nova.network.os_vif_util [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:09:08 np0005558241 nova_compute[248510]: 2025-12-13 09:09:08.778 248514 DEBUG nova.network.os_vif_util [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:c7:9d,bridge_name='br-int',has_traffic_filtering=True,id=0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a76fd5a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:09:08 np0005558241 nova_compute[248510]: 2025-12-13 09:09:08.780 248514 DEBUG nova.objects.instance [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid bdbb2140-812f-4913-a83e-7dadd1968cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.034 248514 DEBUG nova.network.neutron [req-7cc93416-649d-4032-91f5-4a9116d2e4f4 req-9cb4d5b9-df85-459d-9b77-e9d7bea0c2ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Updated VIF entry in instance network info cache for port 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.035 248514 DEBUG nova.network.neutron [req-7cc93416-649d-4032-91f5-4a9116d2e4f4 req-9cb4d5b9-df85-459d-9b77-e9d7bea0c2ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Updating instance_info_cache with network_info: [{"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:09:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:09:09
Dec 13 04:09:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:09:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:09:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'images', 'backups']
Dec 13 04:09:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.531 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:09:09 np0005558241 nova_compute[248510]:  <uuid>bdbb2140-812f-4913-a83e-7dadd1968cde</uuid>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:  <name>instance-0000008c</name>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestGettingAddress-server-1296664967</nova:name>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:09:07</nova:creationTime>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:09:09 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:        <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:        <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:        <nova:port uuid="c509d630-f81b-402a-bae1-eae9e8fca8ca">
Dec 13 04:09:09 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:        <nova:port uuid="0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2">
Dec 13 04:09:09 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe95:c79d" ipVersion="6"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <entry name="serial">bdbb2140-812f-4913-a83e-7dadd1968cde</entry>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <entry name="uuid">bdbb2140-812f-4913-a83e-7dadd1968cde</entry>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/bdbb2140-812f-4913-a83e-7dadd1968cde_disk">
Dec 13 04:09:09 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:09:09 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/bdbb2140-812f-4913-a83e-7dadd1968cde_disk.config">
Dec 13 04:09:09 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:09:09 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:6d:6c:8b"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <target dev="tapc509d630-f8"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:95:c7:9d"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <target dev="tap0a76fd5a-5a"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde/console.log" append="off"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:09:09 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:09:09 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:09:09 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:09:09 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.532 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Preparing to wait for external event network-vif-plugged-c509d630-f81b-402a-bae1-eae9e8fca8ca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.533 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.533 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.534 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.534 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Preparing to wait for external event network-vif-plugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.534 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.535 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.535 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.536 248514 DEBUG nova.virt.libvirt.vif [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:08:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1296664967',display_name='tempest-TestGettingAddress-server-1296664967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1296664967',id=140,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-76ioq0wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:08:55Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=bdbb2140-812f-4913-a83e-7dadd1968cde,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.537 248514 DEBUG nova.network.os_vif_util [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.538 248514 DEBUG nova.network.os_vif_util [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=c509d630-f81b-402a-bae1-eae9e8fca8ca,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc509d630-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.538 248514 DEBUG os_vif [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=c509d630-f81b-402a-bae1-eae9e8fca8ca,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc509d630-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.539 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.540 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.541 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.546 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.546 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc509d630-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.547 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc509d630-f8, col_values=(('external_ids', {'iface-id': 'c509d630-f81b-402a-bae1-eae9e8fca8ca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:6c:8b', 'vm-uuid': 'bdbb2140-812f-4913-a83e-7dadd1968cde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:09 np0005558241 NetworkManager[50376]: <info>  [1765616949.5988] manager: (tapc509d630-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/602)
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.599 248514 DEBUG oslo_concurrency.lockutils [req-7cc93416-649d-4032-91f5-4a9116d2e4f4 req-9cb4d5b9-df85-459d-9b77-e9d7bea0c2ca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.601 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.604 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.608 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.610 248514 INFO os_vif [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=c509d630-f81b-402a-bae1-eae9e8fca8ca,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc509d630-f8')#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.611 248514 DEBUG nova.virt.libvirt.vif [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:08:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1296664967',display_name='tempest-TestGettingAddress-server-1296664967',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1296664967',id=140,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-76ioq0wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:08:55Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=bdbb2140-812f-4913-a83e-7dadd1968cde,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.612 248514 DEBUG nova.network.os_vif_util [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.613 248514 DEBUG nova.network.os_vif_util [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:c7:9d,bridge_name='br-int',has_traffic_filtering=True,id=0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a76fd5a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.613 248514 DEBUG os_vif [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:c7:9d,bridge_name='br-int',has_traffic_filtering=True,id=0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a76fd5a-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.614 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.614 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.615 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.618 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.618 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a76fd5a-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.619 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0a76fd5a-5a, col_values=(('external_ids', {'iface-id': '0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:c7:9d', 'vm-uuid': 'bdbb2140-812f-4913-a83e-7dadd1968cde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.620 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:09 np0005558241 NetworkManager[50376]: <info>  [1765616949.6220] manager: (tap0a76fd5a-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/603)
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.624 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.630 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:09 np0005558241 nova_compute[248510]: 2025-12-13 09:09:09.631 248514 INFO os_vif [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:c7:9d,bridge_name='br-int',has_traffic_filtering=True,id=0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a76fd5a-5a')#033[00m
Dec 13 04:09:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:09:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:09:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:09:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:09:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:09:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:09:10 np0005558241 nova_compute[248510]: 2025-12-13 09:09:10.272 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:09:10 np0005558241 nova_compute[248510]: 2025-12-13 09:09:10.272 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:09:10 np0005558241 nova_compute[248510]: 2025-12-13 09:09:10.272 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:6d:6c:8b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:09:10 np0005558241 nova_compute[248510]: 2025-12-13 09:09:10.272 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:95:c7:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:09:10 np0005558241 nova_compute[248510]: 2025-12-13 09:09:10.273 248514 INFO nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Using config drive#033[00m
Dec 13 04:09:10 np0005558241 nova_compute[248510]: 2025-12-13 09:09:10.299 248514 DEBUG nova.storage.rbd_utils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image bdbb2140-812f-4913-a83e-7dadd1968cde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:09:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3242: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 7.6 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Dec 13 04:09:10 np0005558241 nova_compute[248510]: 2025-12-13 09:09:10.994 248514 INFO nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Creating config drive at /var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde/disk.config#033[00m
Dec 13 04:09:11 np0005558241 nova_compute[248510]: 2025-12-13 09:09:11.000 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd6dvpwjl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:09:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:09:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:09:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:09:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:09:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:09:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:09:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:09:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:09:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:09:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:09:11 np0005558241 nova_compute[248510]: 2025-12-13 09:09:11.158 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd6dvpwjl" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:09:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:11 np0005558241 nova_compute[248510]: 2025-12-13 09:09:11.194 248514 DEBUG nova.storage.rbd_utils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image bdbb2140-812f-4913-a83e-7dadd1968cde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:09:11 np0005558241 nova_compute[248510]: 2025-12-13 09:09:11.199 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde/disk.config bdbb2140-812f-4913-a83e-7dadd1968cde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:09:11 np0005558241 nova_compute[248510]: 2025-12-13 09:09:11.354 248514 DEBUG oslo_concurrency.processutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde/disk.config bdbb2140-812f-4913-a83e-7dadd1968cde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:09:11 np0005558241 nova_compute[248510]: 2025-12-13 09:09:11.356 248514 INFO nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Deleting local config drive /var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde/disk.config because it was imported into RBD.#033[00m
Dec 13 04:09:11 np0005558241 NetworkManager[50376]: <info>  [1765616951.4339] manager: (tapc509d630-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/604)
Dec 13 04:09:11 np0005558241 kernel: tapc509d630-f8: entered promiscuous mode
Dec 13 04:09:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:11Z|01451|binding|INFO|Claiming lport c509d630-f81b-402a-bae1-eae9e8fca8ca for this chassis.
Dec 13 04:09:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:11Z|01452|binding|INFO|c509d630-f81b-402a-bae1-eae9e8fca8ca: Claiming fa:16:3e:6d:6c:8b 10.100.0.13
Dec 13 04:09:11 np0005558241 nova_compute[248510]: 2025-12-13 09:09:11.440 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:11 np0005558241 NetworkManager[50376]: <info>  [1765616951.4674] manager: (tap0a76fd5a-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/605)
Dec 13 04:09:11 np0005558241 kernel: tap0a76fd5a-5a: entered promiscuous mode
Dec 13 04:09:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:11Z|01453|binding|INFO|Setting lport c509d630-f81b-402a-bae1-eae9e8fca8ca ovn-installed in OVS
Dec 13 04:09:11 np0005558241 nova_compute[248510]: 2025-12-13 09:09:11.477 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:11 np0005558241 systemd-udevd[391285]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:09:11 np0005558241 systemd-udevd[391284]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:09:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:11Z|01454|if_status|INFO|Not updating pb chassis for 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 now as sb is readonly
Dec 13 04:09:11 np0005558241 NetworkManager[50376]: <info>  [1765616951.4955] device (tap0a76fd5a-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:09:11 np0005558241 NetworkManager[50376]: <info>  [1765616951.4963] device (tap0a76fd5a-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:09:11 np0005558241 NetworkManager[50376]: <info>  [1765616951.4977] device (tapc509d630-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:09:11 np0005558241 NetworkManager[50376]: <info>  [1765616951.4983] device (tapc509d630-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:09:11 np0005558241 nova_compute[248510]: 2025-12-13 09:09:11.506 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:11 np0005558241 systemd-machined[210538]: New machine qemu-171-instance-0000008c.
Dec 13 04:09:11 np0005558241 systemd[1]: Started Virtual Machine qemu-171-instance-0000008c.
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.770 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:6c:8b 10.100.0.13'], port_security=['fa:16:3e:6d:6c:8b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bdbb2140-812f-4913-a83e-7dadd1968cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee151f33-1853-4ff9-89d9-85c7418f7618', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=642d5a3d-2cd7-49a8-8481-4caf7441de52, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c509d630-f81b-402a-bae1-eae9e8fca8ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.772 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c509d630-f81b-402a-bae1-eae9e8fca8ca in datapath 31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5 bound to our chassis#033[00m
Dec 13 04:09:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:11Z|01455|binding|INFO|Claiming lport 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 for this chassis.
Dec 13 04:09:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:11Z|01456|binding|INFO|0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2: Claiming fa:16:3e:95:c7:9d 2001:db8::f816:3eff:fe95:c79d
Dec 13 04:09:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:11Z|01457|binding|INFO|Setting lport c509d630-f81b-402a-bae1-eae9e8fca8ca up in Southbound
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.773 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5#033[00m
Dec 13 04:09:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:11Z|01458|binding|INFO|Setting lport 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 ovn-installed in OVS
Dec 13 04:09:11 np0005558241 nova_compute[248510]: 2025-12-13 09:09:11.776 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.785 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:c7:9d 2001:db8::f816:3eff:fe95:c79d'], port_security=['fa:16:3e:95:c7:9d 2001:db8::f816:3eff:fe95:c79d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe95:c79d/64', 'neutron:device_id': 'bdbb2140-812f-4913-a83e-7dadd1968cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee151f33-1853-4ff9-89d9-85c7418f7618', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d30eddc7-04d5-44b8-966c-6e93f6525e9a, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:09:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:11Z|01459|binding|INFO|Setting lport 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 up in Southbound
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.791 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ef61a464-206d-40a2-a8f4-0b37a301a3e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.840 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c5353a-f69e-4365-acf1-5867621f4e3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.844 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7ffdf96b-baef-4cb7-8c10-d202770168b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.879 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bc61cdb6-b24d-4912-aa43-4b424dd553f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.904 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd349a2f-97e3-4740-9eb5-2773263c1931]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31e0a1ad-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:46:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 416], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948051, 'reachable_time': 26277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391302, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.923 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[01ce2b50-4d41-4a47-8a64-82e94f75f4f5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap31e0a1ad-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 948068, 'tstamp': 948068}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391303, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap31e0a1ad-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 948073, 'tstamp': 948073}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391303, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.926 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31e0a1ad-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:11 np0005558241 nova_compute[248510]: 2025-12-13 09:09:11.928 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:11 np0005558241 nova_compute[248510]: 2025-12-13 09:09:11.928 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.929 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31e0a1ad-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.930 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.930 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31e0a1ad-e0, col_values=(('external_ids', {'iface-id': 'bbfdb82a-c1cb-4d3d-b44d-9475a3177d38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.931 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.933 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 in datapath 9cd5db3b-c54c-46c9-b59b-e477b2784420 unbound from our chassis#033[00m
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.935 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9cd5db3b-c54c-46c9-b59b-e477b2784420#033[00m
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.951 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b681e49e-1e0b-43d2-b8cc-d58e2f662a96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.992 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e6538b6e-8b0f-4d6b-b2d4-2abc2b0d5db1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:11.996 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[607831b4-611a-4d66-acf0-a3fdbc3dab80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:12.033 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d51eaa1d-25fe-4556-9755-627470b79760]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:12.058 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e4b04283-288f-42a1-9ffd-6ff33e15a928]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9cd5db3b-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:ad:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948147, 'reachable_time': 31724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391351, 'error': None, 'target': 'ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:12.075 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[06889cb0-a69f-47fe-baf1-b295fa54fd4a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9cd5db3b-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 948163, 'tstamp': 948163}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391353, 'error': None, 'target': 'ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:12.076 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cd5db3b-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:12 np0005558241 nova_compute[248510]: 2025-12-13 09:09:12.078 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:12 np0005558241 nova_compute[248510]: 2025-12-13 09:09:12.079 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:12.079 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9cd5db3b-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:12.080 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:09:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:12.080 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9cd5db3b-c0, col_values=(('external_ids', {'iface-id': '26b2c676-4744-4ec7-b362-11535b0ba025'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:12.080 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:09:12 np0005558241 nova_compute[248510]: 2025-12-13 09:09:12.095 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616952.0948906, bdbb2140-812f-4913-a83e-7dadd1968cde => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:09:12 np0005558241 nova_compute[248510]: 2025-12-13 09:09:12.095 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] VM Started (Lifecycle Event)#033[00m
Dec 13 04:09:12 np0005558241 nova_compute[248510]: 2025-12-13 09:09:12.152 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:09:12 np0005558241 nova_compute[248510]: 2025-12-13 09:09:12.157 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616952.0951285, bdbb2140-812f-4913-a83e-7dadd1968cde => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:09:12 np0005558241 nova_compute[248510]: 2025-12-13 09:09:12.157 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:09:12 np0005558241 nova_compute[248510]: 2025-12-13 09:09:12.210 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:09:12 np0005558241 nova_compute[248510]: 2025-12-13 09:09:12.214 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:09:12 np0005558241 nova_compute[248510]: 2025-12-13 09:09:12.235 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:09:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3243: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Dec 13 04:09:12 np0005558241 nova_compute[248510]: 2025-12-13 09:09:12.841 248514 DEBUG nova.compute.manager [req-91783b90-3f58-412d-959e-09c0be3aa3ff req-36601082-e1f7-43eb-868d-c5faea67216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-plugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:12 np0005558241 nova_compute[248510]: 2025-12-13 09:09:12.842 248514 DEBUG oslo_concurrency.lockutils [req-91783b90-3f58-412d-959e-09c0be3aa3ff req-36601082-e1f7-43eb-868d-c5faea67216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:12 np0005558241 nova_compute[248510]: 2025-12-13 09:09:12.842 248514 DEBUG oslo_concurrency.lockutils [req-91783b90-3f58-412d-959e-09c0be3aa3ff req-36601082-e1f7-43eb-868d-c5faea67216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:12 np0005558241 nova_compute[248510]: 2025-12-13 09:09:12.842 248514 DEBUG oslo_concurrency.lockutils [req-91783b90-3f58-412d-959e-09c0be3aa3ff req-36601082-e1f7-43eb-868d-c5faea67216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:12 np0005558241 nova_compute[248510]: 2025-12-13 09:09:12.842 248514 DEBUG nova.compute.manager [req-91783b90-3f58-412d-959e-09c0be3aa3ff req-36601082-e1f7-43eb-868d-c5faea67216a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Processing event network-vif-plugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:09:13 np0005558241 nova_compute[248510]: 2025-12-13 09:09:13.587 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:13 np0005558241 nova_compute[248510]: 2025-12-13 09:09:13.764 248514 DEBUG nova.compute.manager [req-87953135-a4e8-4ce8-bcf3-3ba205d334a7 req-6cb394a5-40e4-4ee6-a349-c6289bd5be1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-plugged-c509d630-f81b-402a-bae1-eae9e8fca8ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:13 np0005558241 nova_compute[248510]: 2025-12-13 09:09:13.764 248514 DEBUG oslo_concurrency.lockutils [req-87953135-a4e8-4ce8-bcf3-3ba205d334a7 req-6cb394a5-40e4-4ee6-a349-c6289bd5be1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:13 np0005558241 nova_compute[248510]: 2025-12-13 09:09:13.765 248514 DEBUG oslo_concurrency.lockutils [req-87953135-a4e8-4ce8-bcf3-3ba205d334a7 req-6cb394a5-40e4-4ee6-a349-c6289bd5be1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:13 np0005558241 nova_compute[248510]: 2025-12-13 09:09:13.765 248514 DEBUG oslo_concurrency.lockutils [req-87953135-a4e8-4ce8-bcf3-3ba205d334a7 req-6cb394a5-40e4-4ee6-a349-c6289bd5be1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:13 np0005558241 nova_compute[248510]: 2025-12-13 09:09:13.765 248514 DEBUG nova.compute.manager [req-87953135-a4e8-4ce8-bcf3-3ba205d334a7 req-6cb394a5-40e4-4ee6-a349-c6289bd5be1d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Processing event network-vif-plugged-c509d630-f81b-402a-bae1-eae9e8fca8ca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:09:13 np0005558241 nova_compute[248510]: 2025-12-13 09:09:13.766 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:09:13 np0005558241 nova_compute[248510]: 2025-12-13 09:09:13.768 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765616953.768806, bdbb2140-812f-4913-a83e-7dadd1968cde => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:09:13 np0005558241 nova_compute[248510]: 2025-12-13 09:09:13.769 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:09:13 np0005558241 nova_compute[248510]: 2025-12-13 09:09:13.771 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:09:13 np0005558241 nova_compute[248510]: 2025-12-13 09:09:13.774 248514 INFO nova.virt.libvirt.driver [-] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Instance spawned successfully.#033[00m
Dec 13 04:09:13 np0005558241 nova_compute[248510]: 2025-12-13 09:09:13.775 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:09:14 np0005558241 nova_compute[248510]: 2025-12-13 09:09:14.052 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:09:14 np0005558241 nova_compute[248510]: 2025-12-13 09:09:14.059 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:09:14 np0005558241 nova_compute[248510]: 2025-12-13 09:09:14.064 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:09:14 np0005558241 nova_compute[248510]: 2025-12-13 09:09:14.064 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:09:14 np0005558241 nova_compute[248510]: 2025-12-13 09:09:14.065 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:09:14 np0005558241 nova_compute[248510]: 2025-12-13 09:09:14.065 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:09:14 np0005558241 nova_compute[248510]: 2025-12-13 09:09:14.066 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:09:14 np0005558241 nova_compute[248510]: 2025-12-13 09:09:14.066 248514 DEBUG nova.virt.libvirt.driver [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:09:14 np0005558241 nova_compute[248510]: 2025-12-13 09:09:14.433 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:09:14 np0005558241 nova_compute[248510]: 2025-12-13 09:09:14.477 248514 INFO nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Took 18.50 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:09:14 np0005558241 nova_compute[248510]: 2025-12-13 09:09:14.478 248514 DEBUG nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:09:14 np0005558241 nova_compute[248510]: 2025-12-13 09:09:14.621 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3244: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 15 KiB/s wr, 10 op/s
Dec 13 04:09:14 np0005558241 nova_compute[248510]: 2025-12-13 09:09:14.740 248514 INFO nova.compute.manager [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Took 21.57 seconds to build instance.#033[00m
Dec 13 04:09:14 np0005558241 nova_compute[248510]: 2025-12-13 09:09:14.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:09:14 np0005558241 nova_compute[248510]: 2025-12-13 09:09:14.846 248514 DEBUG oslo_concurrency.lockutils [None req-59bfde24-aad1-453e-919b-d4ca535da918 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:15 np0005558241 nova_compute[248510]: 2025-12-13 09:09:15.051 248514 DEBUG nova.compute.manager [req-6111481b-bc75-471d-bff5-9fe15a326606 req-da3472c6-a61e-4668-9664-5dbd035a7adc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-plugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:15 np0005558241 nova_compute[248510]: 2025-12-13 09:09:15.051 248514 DEBUG oslo_concurrency.lockutils [req-6111481b-bc75-471d-bff5-9fe15a326606 req-da3472c6-a61e-4668-9664-5dbd035a7adc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:15 np0005558241 nova_compute[248510]: 2025-12-13 09:09:15.052 248514 DEBUG oslo_concurrency.lockutils [req-6111481b-bc75-471d-bff5-9fe15a326606 req-da3472c6-a61e-4668-9664-5dbd035a7adc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:15 np0005558241 nova_compute[248510]: 2025-12-13 09:09:15.052 248514 DEBUG oslo_concurrency.lockutils [req-6111481b-bc75-471d-bff5-9fe15a326606 req-da3472c6-a61e-4668-9664-5dbd035a7adc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:15 np0005558241 nova_compute[248510]: 2025-12-13 09:09:15.052 248514 DEBUG nova.compute.manager [req-6111481b-bc75-471d-bff5-9fe15a326606 req-da3472c6-a61e-4668-9664-5dbd035a7adc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] No waiting events found dispatching network-vif-plugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:09:15 np0005558241 nova_compute[248510]: 2025-12-13 09:09:15.053 248514 WARNING nova.compute.manager [req-6111481b-bc75-471d-bff5-9fe15a326606 req-da3472c6-a61e-4668-9664-5dbd035a7adc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received unexpected event network-vif-plugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:09:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:09:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4249586351' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:09:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4249586351' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:16 np0005558241 nova_compute[248510]: 2025-12-13 09:09:16.051 248514 DEBUG nova.compute.manager [req-10fd1db4-f79c-4cb7-bd1f-7d163d97e134 req-0f0e426e-15b6-4922-9d55-38370bd45f76 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-plugged-c509d630-f81b-402a-bae1-eae9e8fca8ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:16 np0005558241 nova_compute[248510]: 2025-12-13 09:09:16.052 248514 DEBUG oslo_concurrency.lockutils [req-10fd1db4-f79c-4cb7-bd1f-7d163d97e134 req-0f0e426e-15b6-4922-9d55-38370bd45f76 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:16 np0005558241 nova_compute[248510]: 2025-12-13 09:09:16.052 248514 DEBUG oslo_concurrency.lockutils [req-10fd1db4-f79c-4cb7-bd1f-7d163d97e134 req-0f0e426e-15b6-4922-9d55-38370bd45f76 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:16 np0005558241 nova_compute[248510]: 2025-12-13 09:09:16.053 248514 DEBUG oslo_concurrency.lockutils [req-10fd1db4-f79c-4cb7-bd1f-7d163d97e134 req-0f0e426e-15b6-4922-9d55-38370bd45f76 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:16 np0005558241 nova_compute[248510]: 2025-12-13 09:09:16.053 248514 DEBUG nova.compute.manager [req-10fd1db4-f79c-4cb7-bd1f-7d163d97e134 req-0f0e426e-15b6-4922-9d55-38370bd45f76 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] No waiting events found dispatching network-vif-plugged-c509d630-f81b-402a-bae1-eae9e8fca8ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:09:16 np0005558241 nova_compute[248510]: 2025-12-13 09:09:16.053 248514 WARNING nova.compute.manager [req-10fd1db4-f79c-4cb7-bd1f-7d163d97e134 req-0f0e426e-15b6-4922-9d55-38370bd45f76 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received unexpected event network-vif-plugged-c509d630-f81b-402a-bae1-eae9e8fca8ca for instance with vm_state active and task_state None.#033[00m
Dec 13 04:09:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3245: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 15 KiB/s wr, 10 op/s
Dec 13 04:09:16 np0005558241 nova_compute[248510]: 2025-12-13 09:09:16.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:09:18 np0005558241 nova_compute[248510]: 2025-12-13 09:09:18.589 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3246: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 728 KiB/s rd, 15 KiB/s wr, 34 op/s
Dec 13 04:09:19 np0005558241 nova_compute[248510]: 2025-12-13 09:09:19.624 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3247: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Dec 13 04:09:20 np0005558241 podman[391357]: 2025-12-13 09:09:20.994058163 +0000 UTC m=+0.064686371 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 13 04:09:21 np0005558241 podman[391356]: 2025-12-13 09:09:21.013506975 +0000 UTC m=+0.084889392 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd)
Dec 13 04:09:21 np0005558241 podman[391355]: 2025-12-13 09:09:21.036317164 +0000 UTC m=+0.117418832 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 04:09:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011222335815181286 of space, bias 1.0, pg target 0.33667007445543856 quantized to 32 (current 32)
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697484041444878 of space, bias 1.0, pg target 0.20092452124334634 quantized to 32 (current 32)
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:09:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3248: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:09:22 np0005558241 nova_compute[248510]: 2025-12-13 09:09:22.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:09:23 np0005558241 nova_compute[248510]: 2025-12-13 09:09:23.590 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:23 np0005558241 nova_compute[248510]: 2025-12-13 09:09:23.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:09:23 np0005558241 nova_compute[248510]: 2025-12-13 09:09:23.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:09:23 np0005558241 nova_compute[248510]: 2025-12-13 09:09:23.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:09:24 np0005558241 nova_compute[248510]: 2025-12-13 09:09:24.655 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3249: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 13 04:09:25 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:25Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6d:6c:8b 10.100.0.13
Dec 13 04:09:25 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:25Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6d:6c:8b 10.100.0.13
Dec 13 04:09:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3250: 321 pgs: 321 active+clean; 167 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 64 op/s
Dec 13 04:09:27 np0005558241 nova_compute[248510]: 2025-12-13 09:09:27.354 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:09:27 np0005558241 nova_compute[248510]: 2025-12-13 09:09:27.354 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:09:27 np0005558241 nova_compute[248510]: 2025-12-13 09:09:27.355 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 04:09:27 np0005558241 nova_compute[248510]: 2025-12-13 09:09:27.355 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54c767d1-14e1-4a29-be59-440d4e412c4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:09:28 np0005558241 nova_compute[248510]: 2025-12-13 09:09:28.593 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3251: 321 pgs: 321 active+clean; 175 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 528 KiB/s wr, 85 op/s
Dec 13 04:09:29 np0005558241 nova_compute[248510]: 2025-12-13 09:09:29.709 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:30 np0005558241 nova_compute[248510]: 2025-12-13 09:09:30.499 248514 DEBUG nova.compute.manager [req-dff155f0-abc0-4f23-80b7-35b5bc07c976 req-c6aaaa75-3f17-426c-91eb-cae2d66fc23d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-changed-c509d630-f81b-402a-bae1-eae9e8fca8ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:30 np0005558241 nova_compute[248510]: 2025-12-13 09:09:30.500 248514 DEBUG nova.compute.manager [req-dff155f0-abc0-4f23-80b7-35b5bc07c976 req-c6aaaa75-3f17-426c-91eb-cae2d66fc23d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Refreshing instance network info cache due to event network-changed-c509d630-f81b-402a-bae1-eae9e8fca8ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:09:30 np0005558241 nova_compute[248510]: 2025-12-13 09:09:30.500 248514 DEBUG oslo_concurrency.lockutils [req-dff155f0-abc0-4f23-80b7-35b5bc07c976 req-c6aaaa75-3f17-426c-91eb-cae2d66fc23d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:09:30 np0005558241 nova_compute[248510]: 2025-12-13 09:09:30.501 248514 DEBUG oslo_concurrency.lockutils [req-dff155f0-abc0-4f23-80b7-35b5bc07c976 req-c6aaaa75-3f17-426c-91eb-cae2d66fc23d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:09:30 np0005558241 nova_compute[248510]: 2025-12-13 09:09:30.501 248514 DEBUG nova.network.neutron [req-dff155f0-abc0-4f23-80b7-35b5bc07c976 req-c6aaaa75-3f17-426c-91eb-cae2d66fc23d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Refreshing network info cache for port c509d630-f81b-402a-bae1-eae9e8fca8ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:09:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3252: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 105 op/s
Dec 13 04:09:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:32 np0005558241 nova_compute[248510]: 2025-12-13 09:09:32.136 248514 DEBUG nova.network.neutron [req-dff155f0-abc0-4f23-80b7-35b5bc07c976 req-c6aaaa75-3f17-426c-91eb-cae2d66fc23d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Updated VIF entry in instance network info cache for port c509d630-f81b-402a-bae1-eae9e8fca8ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:09:32 np0005558241 nova_compute[248510]: 2025-12-13 09:09:32.136 248514 DEBUG nova.network.neutron [req-dff155f0-abc0-4f23-80b7-35b5bc07c976 req-c6aaaa75-3f17-426c-91eb-cae2d66fc23d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Updating instance_info_cache with network_info: [{"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:09:32 np0005558241 nova_compute[248510]: 2025-12-13 09:09:32.290 248514 DEBUG oslo_concurrency.lockutils [req-dff155f0-abc0-4f23-80b7-35b5bc07c976 req-c6aaaa75-3f17-426c-91eb-cae2d66fc23d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:09:32 np0005558241 nova_compute[248510]: 2025-12-13 09:09:32.684 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updating instance_info_cache with network_info: [{"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:09:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3253: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 04:09:32 np0005558241 nova_compute[248510]: 2025-12-13 09:09:32.731 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:09:32 np0005558241 nova_compute[248510]: 2025-12-13 09:09:32.731 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 04:09:32 np0005558241 nova_compute[248510]: 2025-12-13 09:09:32.732 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:09:32 np0005558241 nova_compute[248510]: 2025-12-13 09:09:32.732 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:09:32 np0005558241 nova_compute[248510]: 2025-12-13 09:09:32.732 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:09:32 np0005558241 nova_compute[248510]: 2025-12-13 09:09:32.732 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:09:32 np0005558241 nova_compute[248510]: 2025-12-13 09:09:32.795 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:32 np0005558241 nova_compute[248510]: 2025-12-13 09:09:32.796 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:32 np0005558241 nova_compute[248510]: 2025-12-13 09:09:32.796 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:32 np0005558241 nova_compute[248510]: 2025-12-13 09:09:32.796 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:09:32 np0005558241 nova_compute[248510]: 2025-12-13 09:09:32.796 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:09:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:09:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1412450346' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:09:33 np0005558241 nova_compute[248510]: 2025-12-13 09:09:33.396 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:09:33 np0005558241 nova_compute[248510]: 2025-12-13 09:09:33.589 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:09:33 np0005558241 nova_compute[248510]: 2025-12-13 09:09:33.589 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:09:33 np0005558241 nova_compute[248510]: 2025-12-13 09:09:33.595 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:09:33 np0005558241 nova_compute[248510]: 2025-12-13 09:09:33.596 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:09:33 np0005558241 nova_compute[248510]: 2025-12-13 09:09:33.596 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:33 np0005558241 nova_compute[248510]: 2025-12-13 09:09:33.856 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:09:33 np0005558241 nova_compute[248510]: 2025-12-13 09:09:33.857 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3113MB free_disk=59.89646621514112GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:09:33 np0005558241 nova_compute[248510]: 2025-12-13 09:09:33.857 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:33 np0005558241 nova_compute[248510]: 2025-12-13 09:09:33.857 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:33 np0005558241 nova_compute[248510]: 2025-12-13 09:09:33.993 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 54c767d1-14e1-4a29-be59-440d4e412c4e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:09:33 np0005558241 nova_compute[248510]: 2025-12-13 09:09:33.993 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance bdbb2140-812f-4913-a83e-7dadd1968cde actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:09:33 np0005558241 nova_compute[248510]: 2025-12-13 09:09:33.993 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:09:33 np0005558241 nova_compute[248510]: 2025-12-13 09:09:33.994 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.076 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.375 248514 DEBUG nova.compute.manager [req-9f6a6727-dc7d-4114-b844-096be54529a8 req-4815eb42-9f8c-48be-8b63-0cab44ce7441 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-changed-c509d630-f81b-402a-bae1-eae9e8fca8ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.376 248514 DEBUG nova.compute.manager [req-9f6a6727-dc7d-4114-b844-096be54529a8 req-4815eb42-9f8c-48be-8b63-0cab44ce7441 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Refreshing instance network info cache due to event network-changed-c509d630-f81b-402a-bae1-eae9e8fca8ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.376 248514 DEBUG oslo_concurrency.lockutils [req-9f6a6727-dc7d-4114-b844-096be54529a8 req-4815eb42-9f8c-48be-8b63-0cab44ce7441 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.376 248514 DEBUG oslo_concurrency.lockutils [req-9f6a6727-dc7d-4114-b844-096be54529a8 req-4815eb42-9f8c-48be-8b63-0cab44ce7441 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.376 248514 DEBUG nova.network.neutron [req-9f6a6727-dc7d-4114-b844-096be54529a8 req-4815eb42-9f8c-48be-8b63-0cab44ce7441 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Refreshing network info cache for port c509d630-f81b-402a-bae1-eae9e8fca8ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.463 248514 DEBUG oslo_concurrency.lockutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.463 248514 DEBUG oslo_concurrency.lockutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.464 248514 DEBUG oslo_concurrency.lockutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.464 248514 DEBUG oslo_concurrency.lockutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.464 248514 DEBUG oslo_concurrency.lockutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.466 248514 INFO nova.compute.manager [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Terminating instance#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.467 248514 DEBUG nova.compute.manager [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:09:34 np0005558241 kernel: tapc509d630-f8 (unregistering): left promiscuous mode
Dec 13 04:09:34 np0005558241 NetworkManager[50376]: <info>  [1765616974.5277] device (tapc509d630-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:09:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:34Z|01460|binding|INFO|Releasing lport c509d630-f81b-402a-bae1-eae9e8fca8ca from this chassis (sb_readonly=0)
Dec 13 04:09:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:34Z|01461|binding|INFO|Setting lport c509d630-f81b-402a-bae1-eae9e8fca8ca down in Southbound
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.565 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:34Z|01462|binding|INFO|Removing iface tapc509d630-f8 ovn-installed in OVS
Dec 13 04:09:34 np0005558241 kernel: tap0a76fd5a-5a (unregistering): left promiscuous mode
Dec 13 04:09:34 np0005558241 NetworkManager[50376]: <info>  [1765616974.5836] device (tap0a76fd5a-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.588 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:34Z|01463|binding|INFO|Releasing lport 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 from this chassis (sb_readonly=1)
Dec 13 04:09:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:34Z|01464|binding|INFO|Removing iface tap0a76fd5a-5a ovn-installed in OVS
Dec 13 04:09:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:34Z|01465|if_status|INFO|Dropped 2 log messages in last 255 seconds (most recently, 255 seconds ago) due to excessive rate
Dec 13 04:09:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:34Z|01466|if_status|INFO|Not setting lport 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 down as sb is readonly
Dec 13 04:09:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:34Z|01467|binding|INFO|Setting lport 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 down in Southbound
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.598 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.605 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:6c:8b 10.100.0.13'], port_security=['fa:16:3e:6d:6c:8b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bdbb2140-812f-4913-a83e-7dadd1968cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ee151f33-1853-4ff9-89d9-85c7418f7618', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=642d5a3d-2cd7-49a8-8481-4caf7441de52, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=c509d630-f81b-402a-bae1-eae9e8fca8ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.609 158419 INFO neutron.agent.ovn.metadata.agent [-] Port c509d630-f81b-402a-bae1-eae9e8fca8ca in datapath 31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5 unbound from our chassis#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.619 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.618 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.638 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b2178f4d-60ab-452f-84d7-8115943a1870]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:34 np0005558241 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Dec 13 04:09:34 np0005558241 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008c.scope: Consumed 13.552s CPU time.
Dec 13 04:09:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:09:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/795097538' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:09:34 np0005558241 systemd-machined[210538]: Machine qemu-171-instance-0000008c terminated.
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.677 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.685 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[955d6f82-f6c5-4a9b-9df6-1733b3f73b73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.688 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:09:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3254: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 405 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.691 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a8558d6b-d0ab-4f23-93b0-bb8258dce154]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:34 np0005558241 NetworkManager[50376]: <info>  [1765616974.7022] manager: (tap0a76fd5a-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/606)
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.711 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.717 248514 INFO nova.virt.libvirt.driver [-] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Instance destroyed successfully.#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.717 248514 DEBUG nova.objects.instance [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid bdbb2140-812f-4913-a83e-7dadd1968cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.727 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3b31adad-4edd-48fc-b47c-be3e05aab511]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.743 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bc8859f1-0dce-45a4-a745-40515034f7af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31e0a1ad-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:46:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 416], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948051, 'reachable_time': 26277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391502, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.768 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[36c71d3c-a8a6-4978-9a9e-4a127c1e0c48]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap31e0a1ad-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 948068, 'tstamp': 948068}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391503, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap31e0a1ad-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 948073, 'tstamp': 948073}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391503, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.771 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31e0a1ad-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.772 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.784 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31e0a1ad-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.784 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.784 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31e0a1ad-e0, col_values=(('external_ids', {'iface-id': 'bbfdb82a-c1cb-4d3d-b44d-9475a3177d38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.785 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.873 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:c7:9d 2001:db8::f816:3eff:fe95:c79d'], port_security=['fa:16:3e:95:c7:9d 2001:db8::f816:3eff:fe95:c79d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe95:c79d/64', 'neutron:device_id': 'bdbb2140-812f-4913-a83e-7dadd1968cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ee151f33-1853-4ff9-89d9-85c7418f7618', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d30eddc7-04d5-44b8-966c-6e93f6525e9a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.874 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 in datapath 9cd5db3b-c54c-46c9-b59b-e477b2784420 unbound from our chassis#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.875 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9cd5db3b-c54c-46c9-b59b-e477b2784420#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.890 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.894 248514 DEBUG nova.virt.libvirt.vif [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:08:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1296664967',display_name='tempest-TestGettingAddress-server-1296664967',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1296664967',id=140,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:09:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-76ioq0wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:09:14Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=bdbb2140-812f-4913-a83e-7dadd1968cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.895 248514 DEBUG nova.network.os_vif_util [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.895 248514 DEBUG nova.network.os_vif_util [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=c509d630-f81b-402a-bae1-eae9e8fca8ca,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc509d630-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.896 248514 DEBUG os_vif [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=c509d630-f81b-402a-bae1-eae9e8fca8ca,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc509d630-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.897 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.898 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc509d630-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.898 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[39f435b7-2b27-4a01-8f00-2ae8c2dfa4ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.899 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.902 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.906 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.909 248514 INFO os_vif [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=c509d630-f81b-402a-bae1-eae9e8fca8ca,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc509d630-f8')#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.910 248514 DEBUG nova.virt.libvirt.vif [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:08:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1296664967',display_name='tempest-TestGettingAddress-server-1296664967',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1296664967',id=140,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:09:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-76ioq0wj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:09:14Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=bdbb2140-812f-4913-a83e-7dadd1968cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.910 248514 DEBUG nova.network.os_vif_util [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.911 248514 DEBUG nova.network.os_vif_util [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:c7:9d,bridge_name='br-int',has_traffic_filtering=True,id=0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a76fd5a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.911 248514 DEBUG os_vif [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:c7:9d,bridge_name='br-int',has_traffic_filtering=True,id=0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a76fd5a-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.912 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.913 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a76fd5a-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.914 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.918 248514 INFO os_vif [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:c7:9d,bridge_name='br-int',has_traffic_filtering=True,id=0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a76fd5a-5a')#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.941 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.941 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.950 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[399f0dbb-4b29-4f3f-8583-cd83b5c3a710]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.954 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f82ffa15-dc09-41d0-a842-ba3ee4531d42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.980 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:09:34 np0005558241 nova_compute[248510]: 2025-12-13 09:09:34.981 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:09:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:34.997 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b7fd9ba3-2792-49ed-9334-b94532f6925d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:35.019 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd72d488-fb70-4eaf-991a-6d5e4ed104b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9cd5db3b-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:ad:e0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948147, 'reachable_time': 31724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391530, 'error': None, 'target': 'ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:35.044 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f47e808-2622-4176-902a-998f32098fc4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9cd5db3b-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 948163, 'tstamp': 948163}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391531, 'error': None, 'target': 'ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:35.046 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cd5db3b-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:35 np0005558241 nova_compute[248510]: 2025-12-13 09:09:35.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:35 np0005558241 nova_compute[248510]: 2025-12-13 09:09:35.049 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:35.050 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9cd5db3b-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:35.050 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:09:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:35.051 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9cd5db3b-c0, col_values=(('external_ids', {'iface-id': '26b2c676-4744-4ec7-b362-11535b0ba025'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:35.051 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:09:35 np0005558241 nova_compute[248510]: 2025-12-13 09:09:35.214 248514 INFO nova.virt.libvirt.driver [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Deleting instance files /var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde_del#033[00m
Dec 13 04:09:35 np0005558241 nova_compute[248510]: 2025-12-13 09:09:35.215 248514 INFO nova.virt.libvirt.driver [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Deletion of /var/lib/nova/instances/bdbb2140-812f-4913-a83e-7dadd1968cde_del complete#033[00m
Dec 13 04:09:35 np0005558241 nova_compute[248510]: 2025-12-13 09:09:35.250 248514 DEBUG nova.compute.manager [req-d4933263-15d7-44fb-8bb0-8038cdf02f97 req-2246fb0f-bd14-4f48-9636-4384418537da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-unplugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:35 np0005558241 nova_compute[248510]: 2025-12-13 09:09:35.250 248514 DEBUG oslo_concurrency.lockutils [req-d4933263-15d7-44fb-8bb0-8038cdf02f97 req-2246fb0f-bd14-4f48-9636-4384418537da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:35 np0005558241 nova_compute[248510]: 2025-12-13 09:09:35.251 248514 DEBUG oslo_concurrency.lockutils [req-d4933263-15d7-44fb-8bb0-8038cdf02f97 req-2246fb0f-bd14-4f48-9636-4384418537da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:35 np0005558241 nova_compute[248510]: 2025-12-13 09:09:35.251 248514 DEBUG oslo_concurrency.lockutils [req-d4933263-15d7-44fb-8bb0-8038cdf02f97 req-2246fb0f-bd14-4f48-9636-4384418537da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:35 np0005558241 nova_compute[248510]: 2025-12-13 09:09:35.251 248514 DEBUG nova.compute.manager [req-d4933263-15d7-44fb-8bb0-8038cdf02f97 req-2246fb0f-bd14-4f48-9636-4384418537da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] No waiting events found dispatching network-vif-unplugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:09:35 np0005558241 nova_compute[248510]: 2025-12-13 09:09:35.251 248514 DEBUG nova.compute.manager [req-d4933263-15d7-44fb-8bb0-8038cdf02f97 req-2246fb0f-bd14-4f48-9636-4384418537da 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-unplugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:09:35 np0005558241 nova_compute[248510]: 2025-12-13 09:09:35.359 248514 INFO nova.compute.manager [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:09:35 np0005558241 nova_compute[248510]: 2025-12-13 09:09:35.359 248514 DEBUG oslo.service.loopingcall [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:09:35 np0005558241 nova_compute[248510]: 2025-12-13 09:09:35.360 248514 DEBUG nova.compute.manager [-] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:09:35 np0005558241 nova_compute[248510]: 2025-12-13 09:09:35.360 248514 DEBUG nova.network.neutron [-] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:09:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:36 np0005558241 nova_compute[248510]: 2025-12-13 09:09:36.484 248514 DEBUG nova.compute.manager [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-unplugged-c509d630-f81b-402a-bae1-eae9e8fca8ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:36 np0005558241 nova_compute[248510]: 2025-12-13 09:09:36.484 248514 DEBUG oslo_concurrency.lockutils [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:36 np0005558241 nova_compute[248510]: 2025-12-13 09:09:36.484 248514 DEBUG oslo_concurrency.lockutils [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:36 np0005558241 nova_compute[248510]: 2025-12-13 09:09:36.485 248514 DEBUG oslo_concurrency.lockutils [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:36 np0005558241 nova_compute[248510]: 2025-12-13 09:09:36.485 248514 DEBUG nova.compute.manager [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] No waiting events found dispatching network-vif-unplugged-c509d630-f81b-402a-bae1-eae9e8fca8ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:09:36 np0005558241 nova_compute[248510]: 2025-12-13 09:09:36.485 248514 DEBUG nova.compute.manager [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-unplugged-c509d630-f81b-402a-bae1-eae9e8fca8ca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:09:36 np0005558241 nova_compute[248510]: 2025-12-13 09:09:36.485 248514 DEBUG nova.compute.manager [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-plugged-c509d630-f81b-402a-bae1-eae9e8fca8ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:36 np0005558241 nova_compute[248510]: 2025-12-13 09:09:36.485 248514 DEBUG oslo_concurrency.lockutils [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:36 np0005558241 nova_compute[248510]: 2025-12-13 09:09:36.486 248514 DEBUG oslo_concurrency.lockutils [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:36 np0005558241 nova_compute[248510]: 2025-12-13 09:09:36.486 248514 DEBUG oslo_concurrency.lockutils [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:36 np0005558241 nova_compute[248510]: 2025-12-13 09:09:36.486 248514 DEBUG nova.compute.manager [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] No waiting events found dispatching network-vif-plugged-c509d630-f81b-402a-bae1-eae9e8fca8ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:09:36 np0005558241 nova_compute[248510]: 2025-12-13 09:09:36.486 248514 WARNING nova.compute.manager [req-8f53d0b8-7d6d-4885-a90d-7da98b144202 req-7fbdb505-4e59-41da-ab3c-0e737bbc0c60 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received unexpected event network-vif-plugged-c509d630-f81b-402a-bae1-eae9e8fca8ca for instance with vm_state active and task_state deleting.#033[00m
Dec 13 04:09:36 np0005558241 nova_compute[248510]: 2025-12-13 09:09:36.656 248514 DEBUG nova.network.neutron [req-9f6a6727-dc7d-4114-b844-096be54529a8 req-4815eb42-9f8c-48be-8b63-0cab44ce7441 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Updated VIF entry in instance network info cache for port c509d630-f81b-402a-bae1-eae9e8fca8ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:09:36 np0005558241 nova_compute[248510]: 2025-12-13 09:09:36.656 248514 DEBUG nova.network.neutron [req-9f6a6727-dc7d-4114-b844-096be54529a8 req-4815eb42-9f8c-48be-8b63-0cab44ce7441 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Updating instance_info_cache with network_info: [{"id": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "address": "fa:16:3e:6d:6c:8b", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc509d630-f8", "ovs_interfaceid": "c509d630-f81b-402a-bae1-eae9e8fca8ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "address": "fa:16:3e:95:c7:9d", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:c79d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a76fd5a-5a", "ovs_interfaceid": "0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:09:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3255: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 13 04:09:36 np0005558241 nova_compute[248510]: 2025-12-13 09:09:36.690 248514 DEBUG oslo_concurrency.lockutils [req-9f6a6727-dc7d-4114-b844-096be54529a8 req-4815eb42-9f8c-48be-8b63-0cab44ce7441 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-bdbb2140-812f-4913-a83e-7dadd1968cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:09:36 np0005558241 nova_compute[248510]: 2025-12-13 09:09:36.918 248514 DEBUG nova.network.neutron [-] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:09:36 np0005558241 nova_compute[248510]: 2025-12-13 09:09:36.936 248514 INFO nova.compute.manager [-] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Took 1.58 seconds to deallocate network for instance.#033[00m
Dec 13 04:09:36 np0005558241 nova_compute[248510]: 2025-12-13 09:09:36.982 248514 DEBUG oslo_concurrency.lockutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:36 np0005558241 nova_compute[248510]: 2025-12-13 09:09:36.982 248514 DEBUG oslo_concurrency.lockutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:37 np0005558241 nova_compute[248510]: 2025-12-13 09:09:37.061 248514 DEBUG oslo_concurrency.processutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:09:37 np0005558241 nova_compute[248510]: 2025-12-13 09:09:37.378 248514 DEBUG nova.compute.manager [req-b85f9d30-dab7-4912-818d-7070795b2031 req-e020cbbb-2b9f-470e-85c8-a5cb8ecc0d54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-plugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:37 np0005558241 nova_compute[248510]: 2025-12-13 09:09:37.379 248514 DEBUG oslo_concurrency.lockutils [req-b85f9d30-dab7-4912-818d-7070795b2031 req-e020cbbb-2b9f-470e-85c8-a5cb8ecc0d54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:37 np0005558241 nova_compute[248510]: 2025-12-13 09:09:37.379 248514 DEBUG oslo_concurrency.lockutils [req-b85f9d30-dab7-4912-818d-7070795b2031 req-e020cbbb-2b9f-470e-85c8-a5cb8ecc0d54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:37 np0005558241 nova_compute[248510]: 2025-12-13 09:09:37.380 248514 DEBUG oslo_concurrency.lockutils [req-b85f9d30-dab7-4912-818d-7070795b2031 req-e020cbbb-2b9f-470e-85c8-a5cb8ecc0d54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:37 np0005558241 nova_compute[248510]: 2025-12-13 09:09:37.381 248514 DEBUG nova.compute.manager [req-b85f9d30-dab7-4912-818d-7070795b2031 req-e020cbbb-2b9f-470e-85c8-a5cb8ecc0d54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] No waiting events found dispatching network-vif-plugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:09:37 np0005558241 nova_compute[248510]: 2025-12-13 09:09:37.381 248514 WARNING nova.compute.manager [req-b85f9d30-dab7-4912-818d-7070795b2031 req-e020cbbb-2b9f-470e-85c8-a5cb8ecc0d54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received unexpected event network-vif-plugged-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:09:37 np0005558241 nova_compute[248510]: 2025-12-13 09:09:37.382 248514 DEBUG nova.compute.manager [req-b85f9d30-dab7-4912-818d-7070795b2031 req-e020cbbb-2b9f-470e-85c8-a5cb8ecc0d54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-deleted-0a76fd5a-5ab6-4cf9-9746-c59ac1cdbfa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:37 np0005558241 nova_compute[248510]: 2025-12-13 09:09:37.382 248514 DEBUG nova.compute.manager [req-b85f9d30-dab7-4912-818d-7070795b2031 req-e020cbbb-2b9f-470e-85c8-a5cb8ecc0d54 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Received event network-vif-deleted-c509d630-f81b-402a-bae1-eae9e8fca8ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:09:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2860343970' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:09:37 np0005558241 nova_compute[248510]: 2025-12-13 09:09:37.692 248514 DEBUG oslo_concurrency.processutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:09:37 np0005558241 nova_compute[248510]: 2025-12-13 09:09:37.697 248514 DEBUG nova.compute.provider_tree [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:09:38 np0005558241 nova_compute[248510]: 2025-12-13 09:09:38.106 248514 DEBUG nova.scheduler.client.report [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:09:38 np0005558241 nova_compute[248510]: 2025-12-13 09:09:38.131 248514 DEBUG oslo_concurrency.lockutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:38 np0005558241 nova_compute[248510]: 2025-12-13 09:09:38.166 248514 INFO nova.scheduler.client.report [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance bdbb2140-812f-4913-a83e-7dadd1968cde#033[00m
Dec 13 04:09:38 np0005558241 nova_compute[248510]: 2025-12-13 09:09:38.274 248514 DEBUG oslo_concurrency.lockutils [None req-8b3679e8-02c7-4256-ae95-f428fe4fb97f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "bdbb2140-812f-4913-a83e-7dadd1968cde" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:38 np0005558241 nova_compute[248510]: 2025-12-13 09:09:38.598 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3256: 321 pgs: 321 active+clean; 172 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 410 KiB/s rd, 2.2 MiB/s wr, 78 op/s
Dec 13 04:09:39 np0005558241 nova_compute[248510]: 2025-12-13 09:09:39.969 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:09:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:09:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:09:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:09:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:09:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:09:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3257: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 268 KiB/s rd, 1.6 MiB/s wr, 75 op/s
Dec 13 04:09:40 np0005558241 nova_compute[248510]: 2025-12-13 09:09:40.887 248514 DEBUG nova.compute.manager [req-596e7ef5-aedf-40d9-8931-e7011ca5a0e6 req-bcc1e174-cb3d-48cf-8cd2-84240926fe32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-changed-820e6a4a-a776-40a1-a7c0-d34cd3d2543c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:40 np0005558241 nova_compute[248510]: 2025-12-13 09:09:40.887 248514 DEBUG nova.compute.manager [req-596e7ef5-aedf-40d9-8931-e7011ca5a0e6 req-bcc1e174-cb3d-48cf-8cd2-84240926fe32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Refreshing instance network info cache due to event network-changed-820e6a4a-a776-40a1-a7c0-d34cd3d2543c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:09:40 np0005558241 nova_compute[248510]: 2025-12-13 09:09:40.888 248514 DEBUG oslo_concurrency.lockutils [req-596e7ef5-aedf-40d9-8931-e7011ca5a0e6 req-bcc1e174-cb3d-48cf-8cd2-84240926fe32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:09:40 np0005558241 nova_compute[248510]: 2025-12-13 09:09:40.888 248514 DEBUG oslo_concurrency.lockutils [req-596e7ef5-aedf-40d9-8931-e7011ca5a0e6 req-bcc1e174-cb3d-48cf-8cd2-84240926fe32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:09:40 np0005558241 nova_compute[248510]: 2025-12-13 09:09:40.888 248514 DEBUG nova.network.neutron [req-596e7ef5-aedf-40d9-8931-e7011ca5a0e6 req-bcc1e174-cb3d-48cf-8cd2-84240926fe32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Refreshing network info cache for port 820e6a4a-a776-40a1-a7c0-d34cd3d2543c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.013 248514 DEBUG oslo_concurrency.lockutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.014 248514 DEBUG oslo_concurrency.lockutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.015 248514 DEBUG oslo_concurrency.lockutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.015 248514 DEBUG oslo_concurrency.lockutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.016 248514 DEBUG oslo_concurrency.lockutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.018 248514 INFO nova.compute.manager [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Terminating instance#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.020 248514 DEBUG nova.compute.manager [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:09:41 np0005558241 kernel: tap820e6a4a-a7 (unregistering): left promiscuous mode
Dec 13 04:09:41 np0005558241 NetworkManager[50376]: <info>  [1765616981.0879] device (tap820e6a4a-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:09:41 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:41Z|01468|binding|INFO|Releasing lport 820e6a4a-a776-40a1-a7c0-d34cd3d2543c from this chassis (sb_readonly=0)
Dec 13 04:09:41 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:41Z|01469|binding|INFO|Setting lport 820e6a4a-a776-40a1-a7c0-d34cd3d2543c down in Southbound
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.111 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:41 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:41Z|01470|binding|INFO|Removing iface tap820e6a4a-a7 ovn-installed in OVS
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.114 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.125 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:71:ee 10.100.0.12'], port_security=['fa:16:3e:bc:71:ee 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '54c767d1-14e1-4a29-be59-440d4e412c4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ee151f33-1853-4ff9-89d9-85c7418f7618', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=642d5a3d-2cd7-49a8-8481-4caf7441de52, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=820e6a4a-a776-40a1-a7c0-d34cd3d2543c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.126 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 820e6a4a-a776-40a1-a7c0-d34cd3d2543c in datapath 31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5 unbound from our chassis#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.129 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.130 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4534f8-14f5-4caa-9213-9155bc913bae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.131 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5 namespace which is not needed anymore#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.133 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:41 np0005558241 kernel: tap06da1719-40 (unregistering): left promiscuous mode
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.144 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:41 np0005558241 NetworkManager[50376]: <info>  [1765616981.1467] device (tap06da1719-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:09:41 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:41Z|01471|binding|INFO|Releasing lport 06da1719-402b-4c36-b4b8-dcc95ed8b65c from this chassis (sb_readonly=0)
Dec 13 04:09:41 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:41Z|01472|binding|INFO|Setting lport 06da1719-402b-4c36-b4b8-dcc95ed8b65c down in Southbound
Dec 13 04:09:41 np0005558241 ovn_controller[148476]: 2025-12-13T09:09:41Z|01473|binding|INFO|Removing iface tap06da1719-40 ovn-installed in OVS
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.158 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.170 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:d9:03 2001:db8::f816:3eff:fef0:d903'], port_security=['fa:16:3e:f0:d9:03 2001:db8::f816:3eff:fef0:d903'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef0:d903/64', 'neutron:device_id': '54c767d1-14e1-4a29-be59-440d4e412c4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ee151f33-1853-4ff9-89d9-85c7418f7618', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d30eddc7-04d5-44b8-966c-6e93f6525e9a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=06da1719-402b-4c36-b4b8-dcc95ed8b65c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.188 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:41 np0005558241 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Dec 13 04:09:41 np0005558241 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008b.scope: Consumed 16.793s CPU time.
Dec 13 04:09:41 np0005558241 systemd-machined[210538]: Machine qemu-170-instance-0000008b terminated.
Dec 13 04:09:41 np0005558241 NetworkManager[50376]: <info>  [1765616981.2482] manager: (tap820e6a4a-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/607)
Dec 13 04:09:41 np0005558241 NetworkManager[50376]: <info>  [1765616981.2608] manager: (tap06da1719-40): new Tun device (/org/freedesktop/NetworkManager/Devices/608)
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.279 248514 INFO nova.virt.libvirt.driver [-] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Instance destroyed successfully.#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.280 248514 DEBUG nova.objects.instance [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 54c767d1-14e1-4a29-be59-440d4e412c4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.309 248514 DEBUG nova.virt.libvirt.vif [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:08:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-698361226',display_name='tempest-TestGettingAddress-server-698361226',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-698361226',id=139,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:08:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-xmk45p0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:08:24Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=54c767d1-14e1-4a29-be59-440d4e412c4e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.310 248514 DEBUG nova.network.os_vif_util [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.311 248514 DEBUG nova.network.os_vif_util [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bc:71:ee,bridge_name='br-int',has_traffic_filtering=True,id=820e6a4a-a776-40a1-a7c0-d34cd3d2543c,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap820e6a4a-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.312 248514 DEBUG os_vif [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:71:ee,bridge_name='br-int',has_traffic_filtering=True,id=820e6a4a-a776-40a1-a7c0-d34cd3d2543c,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap820e6a4a-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.313 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.314 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap820e6a4a-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.315 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.318 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:09:41 np0005558241 neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5[389870]: [NOTICE]   (389874) : haproxy version is 2.8.14-c23fe91
Dec 13 04:09:41 np0005558241 neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5[389870]: [NOTICE]   (389874) : path to executable is /usr/sbin/haproxy
Dec 13 04:09:41 np0005558241 neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5[389870]: [WARNING]  (389874) : Exiting Master process...
Dec 13 04:09:41 np0005558241 neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5[389870]: [WARNING]  (389874) : Exiting Master process...
Dec 13 04:09:41 np0005558241 neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5[389870]: [ALERT]    (389874) : Current worker (389876) exited with code 143 (Terminated)
Dec 13 04:09:41 np0005558241 neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5[389870]: [WARNING]  (389874) : All workers exited. Exiting... (0)
Dec 13 04:09:41 np0005558241 systemd[1]: libpod-63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922.scope: Deactivated successfully.
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.323 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.326 248514 INFO os_vif [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:71:ee,bridge_name='br-int',has_traffic_filtering=True,id=820e6a4a-a776-40a1-a7c0-d34cd3d2543c,network=Network(31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap820e6a4a-a7')#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.327 248514 DEBUG nova.virt.libvirt.vif [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:08:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-698361226',display_name='tempest-TestGettingAddress-server-698361226',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-698361226',id=139,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP6Uq0vmfwZTdjsxeE7C48EQCxdlvTBScg+4UXjPpp2TgJPMZaI58DoC24Jp5/sConPRYTxTcmNFmKlopjvwcFs1kkxjZ3YQomXEBTk2O3gWW85lSZMC73IyeNF5pJUO2Q==',key_name='tempest-TestGettingAddress-312952467',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:08:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-xmk45p0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:08:24Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=54c767d1-14e1-4a29-be59-440d4e412c4e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.328 248514 DEBUG nova.network.os_vif_util [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.328 248514 DEBUG nova.network.os_vif_util [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f0:d9:03,bridge_name='br-int',has_traffic_filtering=True,id=06da1719-402b-4c36-b4b8-dcc95ed8b65c,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06da1719-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.329 248514 DEBUG os_vif [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:d9:03,bridge_name='br-int',has_traffic_filtering=True,id=06da1719-402b-4c36-b4b8-dcc95ed8b65c,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06da1719-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:09:41 np0005558241 podman[391587]: 2025-12-13 09:09:41.330154572 +0000 UTC m=+0.057880635 container died 63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.333 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.334 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06da1719-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.337 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.340 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.343 248514 INFO os_vif [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:d9:03,bridge_name='br-int',has_traffic_filtering=True,id=06da1719-402b-4c36-b4b8-dcc95ed8b65c,network=Network(9cd5db3b-c54c-46c9-b59b-e477b2784420),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06da1719-40')#033[00m
Dec 13 04:09:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922-userdata-shm.mount: Deactivated successfully.
Dec 13 04:09:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d43e0b8694a8e8a523da23bc5772976ab296f6f8c8d08138f08d92ba5a318180-merged.mount: Deactivated successfully.
Dec 13 04:09:41 np0005558241 podman[391587]: 2025-12-13 09:09:41.375373339 +0000 UTC m=+0.103099312 container cleanup 63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:09:41 np0005558241 systemd[1]: libpod-conmon-63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922.scope: Deactivated successfully.
Dec 13 04:09:41 np0005558241 podman[391648]: 2025-12-13 09:09:41.459418279 +0000 UTC m=+0.056896520 container remove 63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.466 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1b72e7d3-d3ad-4e38-804c-170c0e1846b2]: (4, ('Sat Dec 13 09:09:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5 (63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922)\n63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922\nSat Dec 13 09:09:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5 (63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922)\n63c1b3ed0a35353adbbef45056552a96af721d677ac7e0275f46832b7769e922\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.468 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cc00df96-e34c-4872-bf87-af6ea6544260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.470 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31e0a1ad-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.471 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:41 np0005558241 kernel: tap31e0a1ad-e0: left promiscuous mode
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.486 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.492 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f536fc3a-6c17-4f92-8c5f-d7d0a8f2ec97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.507 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a0144054-6d63-45ff-aa87-f832fec4818e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.508 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[79dac72c-a41c-4782-8537-fd0a46858cfe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.536 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[15644d06-5430-4bf3-93b9-6102cd86c9b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948042, 'reachable_time': 32120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391666, 'error': None, 'target': 'ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:41 np0005558241 systemd[1]: run-netns-ovnmeta\x2d31e0a1ad\x2dee13\x2d4de2\x2dbfd5\x2d7dfcf6ce6ab5.mount: Deactivated successfully.
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.545 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.546 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c6e3c0-8ea5-45bd-8546-67f9cfa84111]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.547 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 06da1719-402b-4c36-b4b8-dcc95ed8b65c in datapath 9cd5db3b-c54c-46c9-b59b-e477b2784420 unbound from our chassis#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.550 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9cd5db3b-c54c-46c9-b59b-e477b2784420, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.551 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3b0edf5d-bcc7-46ea-a35a-5063f05da192]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.553 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420 namespace which is not needed anymore#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.667 248514 INFO nova.virt.libvirt.driver [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Deleting instance files /var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e_del#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.668 248514 INFO nova.virt.libvirt.driver [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Deletion of /var/lib/nova/instances/54c767d1-14e1-4a29-be59-440d4e412c4e_del complete#033[00m
Dec 13 04:09:41 np0005558241 neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420[389942]: [NOTICE]   (389946) : haproxy version is 2.8.14-c23fe91
Dec 13 04:09:41 np0005558241 neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420[389942]: [NOTICE]   (389946) : path to executable is /usr/sbin/haproxy
Dec 13 04:09:41 np0005558241 neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420[389942]: [WARNING]  (389946) : Exiting Master process...
Dec 13 04:09:41 np0005558241 neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420[389942]: [ALERT]    (389946) : Current worker (389948) exited with code 143 (Terminated)
Dec 13 04:09:41 np0005558241 neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420[389942]: [WARNING]  (389946) : All workers exited. Exiting... (0)
Dec 13 04:09:41 np0005558241 systemd[1]: libpod-a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b.scope: Deactivated successfully.
Dec 13 04:09:41 np0005558241 podman[391683]: 2025-12-13 09:09:41.710674744 +0000 UTC m=+0.054972310 container died a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:09:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b-userdata-shm.mount: Deactivated successfully.
Dec 13 04:09:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay-053d728be1d4b2966613ad5805ec267c950bc3159538ec562a41c9a868f9efff-merged.mount: Deactivated successfully.
Dec 13 04:09:41 np0005558241 podman[391683]: 2025-12-13 09:09:41.758625912 +0000 UTC m=+0.102923518 container cleanup a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:09:41 np0005558241 systemd[1]: libpod-conmon-a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b.scope: Deactivated successfully.
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.799 248514 DEBUG nova.compute.manager [req-2047f10f-12b2-4ac8-936c-d9698b315eac req-0bacb4f2-e6c8-4d16-9bec-591f6d67f679 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-unplugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.799 248514 DEBUG oslo_concurrency.lockutils [req-2047f10f-12b2-4ac8-936c-d9698b315eac req-0bacb4f2-e6c8-4d16-9bec-591f6d67f679 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.800 248514 DEBUG oslo_concurrency.lockutils [req-2047f10f-12b2-4ac8-936c-d9698b315eac req-0bacb4f2-e6c8-4d16-9bec-591f6d67f679 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.801 248514 DEBUG oslo_concurrency.lockutils [req-2047f10f-12b2-4ac8-936c-d9698b315eac req-0bacb4f2-e6c8-4d16-9bec-591f6d67f679 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.801 248514 DEBUG nova.compute.manager [req-2047f10f-12b2-4ac8-936c-d9698b315eac req-0bacb4f2-e6c8-4d16-9bec-591f6d67f679 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] No waiting events found dispatching network-vif-unplugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.802 248514 DEBUG nova.compute.manager [req-2047f10f-12b2-4ac8-936c-d9698b315eac req-0bacb4f2-e6c8-4d16-9bec-591f6d67f679 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-unplugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:09:41 np0005558241 podman[391712]: 2025-12-13 09:09:41.839114098 +0000 UTC m=+0.056190410 container remove a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.845 248514 INFO nova.compute.manager [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.846 248514 DEBUG oslo.service.loopingcall [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.846 248514 DEBUG nova.compute.manager [-] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.847 248514 DEBUG nova.network.neutron [-] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.849 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a376c5-574e-41b0-8c58-bd678ac90030]: (4, ('Sat Dec 13 09:09:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420 (a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b)\na55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b\nSat Dec 13 09:09:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420 (a55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b)\na55e562d4f82f4e37a91249bb7d8db80b6b1db333cf4b77ad7ac0e6926378e7b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.851 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc3cab2-2717-4124-be28-d2445b659fd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.853 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9cd5db3b-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:09:41 np0005558241 kernel: tap9cd5db3b-c0: left promiscuous mode
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.855 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:41 np0005558241 nova_compute[248510]: 2025-12-13 09:09:41.871 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.874 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9d2fbf-fb6b-4523-a219-7de9abde7d36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.886 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[15d6957a-c08d-4fd0-b314-1ae39213f76b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.888 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3dbf5399-8d8e-4001-95ed-959cb3106c9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.912 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[57b83c01-9fda-4a27-8955-ba71eb9f94bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 948139, 'reachable_time': 28482, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391725, 'error': None, 'target': 'ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.914 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9cd5db3b-c54c-46c9-b59b-e477b2784420 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:09:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:41.915 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[55152dfc-00bf-4c72-ab15-61b87fb0e592]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:09:42 np0005558241 systemd[1]: run-netns-ovnmeta\x2d9cd5db3b\x2dc54c\x2d46c9\x2db59b\x2de477b2784420.mount: Deactivated successfully.
Dec 13 04:09:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3258: 321 pgs: 321 active+clean; 121 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 18 KiB/s wr, 30 op/s
Dec 13 04:09:42 np0005558241 nova_compute[248510]: 2025-12-13 09:09:42.869 248514 DEBUG nova.network.neutron [req-596e7ef5-aedf-40d9-8931-e7011ca5a0e6 req-bcc1e174-cb3d-48cf-8cd2-84240926fe32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updated VIF entry in instance network info cache for port 820e6a4a-a776-40a1-a7c0-d34cd3d2543c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:09:42 np0005558241 nova_compute[248510]: 2025-12-13 09:09:42.870 248514 DEBUG nova.network.neutron [req-596e7ef5-aedf-40d9-8931-e7011ca5a0e6 req-bcc1e174-cb3d-48cf-8cd2-84240926fe32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updating instance_info_cache with network_info: [{"id": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "address": "fa:16:3e:bc:71:ee", "network": {"id": "31e0a1ad-ee13-4de2-bfd5-7dfcf6ce6ab5", "bridge": "br-int", "label": "tempest-network-smoke--67265481", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap820e6a4a-a7", "ovs_interfaceid": "820e6a4a-a776-40a1-a7c0-d34cd3d2543c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "address": "fa:16:3e:f0:d9:03", "network": {"id": "9cd5db3b-c54c-46c9-b59b-e477b2784420", "bridge": "br-int", "label": "tempest-network-smoke--846366094", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef0:d903", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06da1719-40", "ovs_interfaceid": "06da1719-402b-4c36-b4b8-dcc95ed8b65c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.292 248514 DEBUG nova.compute.manager [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-unplugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.293 248514 DEBUG oslo_concurrency.lockutils [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.294 248514 DEBUG oslo_concurrency.lockutils [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.295 248514 DEBUG oslo_concurrency.lockutils [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.295 248514 DEBUG nova.compute.manager [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] No waiting events found dispatching network-vif-unplugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.296 248514 DEBUG nova.compute.manager [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-unplugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.296 248514 DEBUG nova.compute.manager [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-plugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.297 248514 DEBUG oslo_concurrency.lockutils [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.297 248514 DEBUG oslo_concurrency.lockutils [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.297 248514 DEBUG oslo_concurrency.lockutils [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.298 248514 DEBUG nova.compute.manager [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] No waiting events found dispatching network-vif-plugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.298 248514 WARNING nova.compute.manager [req-9a51d401-fe02-4674-85b8-d0d59e85da0f req-ecace8e6-31c8-4620-94b7-e93902c4b5a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received unexpected event network-vif-plugged-06da1719-402b-4c36-b4b8-dcc95ed8b65c for instance with vm_state active and task_state deleting.#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.304 248514 DEBUG oslo_concurrency.lockutils [req-596e7ef5-aedf-40d9-8931-e7011ca5a0e6 req-bcc1e174-cb3d-48cf-8cd2-84240926fe32 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-54c767d1-14e1-4a29-be59-440d4e412c4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.601 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.752 248514 DEBUG nova.network.neutron [-] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.769 248514 INFO nova.compute.manager [-] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Took 1.92 seconds to deallocate network for instance.#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.825 248514 DEBUG oslo_concurrency.lockutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.826 248514 DEBUG oslo_concurrency.lockutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.893 248514 DEBUG oslo_concurrency.processutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.966 248514 DEBUG nova.compute.manager [req-3e98894a-6422-4343-8c96-ddeb9dab8ed8 req-174cd8de-6fda-417c-9e9c-bf9f115920a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-plugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.967 248514 DEBUG oslo_concurrency.lockutils [req-3e98894a-6422-4343-8c96-ddeb9dab8ed8 req-174cd8de-6fda-417c-9e9c-bf9f115920a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.968 248514 DEBUG oslo_concurrency.lockutils [req-3e98894a-6422-4343-8c96-ddeb9dab8ed8 req-174cd8de-6fda-417c-9e9c-bf9f115920a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.969 248514 DEBUG oslo_concurrency.lockutils [req-3e98894a-6422-4343-8c96-ddeb9dab8ed8 req-174cd8de-6fda-417c-9e9c-bf9f115920a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.969 248514 DEBUG nova.compute.manager [req-3e98894a-6422-4343-8c96-ddeb9dab8ed8 req-174cd8de-6fda-417c-9e9c-bf9f115920a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] No waiting events found dispatching network-vif-plugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.970 248514 WARNING nova.compute.manager [req-3e98894a-6422-4343-8c96-ddeb9dab8ed8 req-174cd8de-6fda-417c-9e9c-bf9f115920a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received unexpected event network-vif-plugged-820e6a4a-a776-40a1-a7c0-d34cd3d2543c for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.970 248514 DEBUG nova.compute.manager [req-3e98894a-6422-4343-8c96-ddeb9dab8ed8 req-174cd8de-6fda-417c-9e9c-bf9f115920a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-deleted-06da1719-402b-4c36-b4b8-dcc95ed8b65c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:43 np0005558241 nova_compute[248510]: 2025-12-13 09:09:43.970 248514 DEBUG nova.compute.manager [req-3e98894a-6422-4343-8c96-ddeb9dab8ed8 req-174cd8de-6fda-417c-9e9c-bf9f115920a9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Received event network-vif-deleted-820e6a4a-a776-40a1-a7c0-d34cd3d2543c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:09:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:09:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1882559847' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:09:44 np0005558241 nova_compute[248510]: 2025-12-13 09:09:44.473 248514 DEBUG oslo_concurrency.processutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:09:44 np0005558241 nova_compute[248510]: 2025-12-13 09:09:44.481 248514 DEBUG nova.compute.provider_tree [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:09:44 np0005558241 nova_compute[248510]: 2025-12-13 09:09:44.499 248514 DEBUG nova.scheduler.client.report [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:09:44 np0005558241 nova_compute[248510]: 2025-12-13 09:09:44.518 248514 DEBUG oslo_concurrency.lockutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:44 np0005558241 nova_compute[248510]: 2025-12-13 09:09:44.550 248514 INFO nova.scheduler.client.report [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 54c767d1-14e1-4a29-be59-440d4e412c4e#033[00m
Dec 13 04:09:44 np0005558241 nova_compute[248510]: 2025-12-13 09:09:44.636 248514 DEBUG oslo_concurrency.lockutils [None req-d952e46e-536f-445f-b63f-c0d873c5faa6 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "54c767d1-14e1-4a29-be59-440d4e412c4e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3259: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 19 KiB/s wr, 58 op/s
Dec 13 04:09:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:46 np0005558241 nova_compute[248510]: 2025-12-13 09:09:46.338 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3260: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 55 op/s
Dec 13 04:09:48 np0005558241 nova_compute[248510]: 2025-12-13 09:09:48.603 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3261: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 55 op/s
Dec 13 04:09:49 np0005558241 nova_compute[248510]: 2025-12-13 09:09:49.715 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616974.7133727, bdbb2140-812f-4913-a83e-7dadd1968cde => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:09:49 np0005558241 nova_compute[248510]: 2025-12-13 09:09:49.716 248514 INFO nova.compute.manager [-] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:09:49 np0005558241 nova_compute[248510]: 2025-12-13 09:09:49.784 248514 DEBUG nova.compute.manager [None req-f8c90c71-2344-4fa7-a4d5-b8972c14a79c - - - - - -] [instance: bdbb2140-812f-4913-a83e-7dadd1968cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:09:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3262: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.2 KiB/s wr, 45 op/s
Dec 13 04:09:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:51 np0005558241 nova_compute[248510]: 2025-12-13 09:09:51.341 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:52 np0005558241 podman[391749]: 2025-12-13 09:09:52.012875752 +0000 UTC m=+0.090089566 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 04:09:52 np0005558241 podman[391750]: 2025-12-13 09:09:52.021222667 +0000 UTC m=+0.096453170 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 04:09:52 np0005558241 podman[391748]: 2025-12-13 09:09:52.049300242 +0000 UTC m=+0.130037287 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Dec 13 04:09:52 np0005558241 nova_compute[248510]: 2025-12-13 09:09:52.602 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:52 np0005558241 nova_compute[248510]: 2025-12-13 09:09:52.698 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3263: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 04:09:53 np0005558241 nova_compute[248510]: 2025-12-13 09:09:53.605 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3264: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 04:09:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:55.447 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:09:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:55.448 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:09:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:09:55.448 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:09:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:56 np0005558241 nova_compute[248510]: 2025-12-13 09:09:56.277 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765616981.2758005, 54c767d1-14e1-4a29-be59-440d4e412c4e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:09:56 np0005558241 nova_compute[248510]: 2025-12-13 09:09:56.277 248514 INFO nova.compute.manager [-] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:09:56 np0005558241 nova_compute[248510]: 2025-12-13 09:09:56.343 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:56 np0005558241 nova_compute[248510]: 2025-12-13 09:09:56.456 248514 DEBUG nova.compute.manager [None req-3596b85f-378c-4ecb-9081-ae3e1f4cf9e4 - - - - - -] [instance: 54c767d1-14e1-4a29-be59-440d4e412c4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:09:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3265: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:09:58 np0005558241 nova_compute[248510]: 2025-12-13 09:09:58.607 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:09:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3266: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3267: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:01 np0005558241 nova_compute[248510]: 2025-12-13 09:10:01.346 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:01 np0005558241 nova_compute[248510]: 2025-12-13 09:10:01.522 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:01.521 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:10:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:01.523 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:10:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3268: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:03 np0005558241 nova_compute[248510]: 2025-12-13 09:10:03.609 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3269: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:06 np0005558241 nova_compute[248510]: 2025-12-13 09:10:06.349 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:06.525 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:10:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3270: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:10:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:10:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:10:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:10:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:10:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:10:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:10:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:10:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:10:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:10:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:10:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:10:07 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:10:07 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:10:07 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:10:07 np0005558241 podman[391958]: 2025-12-13 09:10:07.608228596 +0000 UTC m=+0.049595801 container create 7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wright, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 04:10:07 np0005558241 systemd[1]: Started libpod-conmon-7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86.scope.
Dec 13 04:10:07 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:10:07 np0005558241 podman[391958]: 2025-12-13 09:10:07.590201941 +0000 UTC m=+0.031569166 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:10:07 np0005558241 podman[391958]: 2025-12-13 09:10:07.692179123 +0000 UTC m=+0.133546348 container init 7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:10:07 np0005558241 podman[391958]: 2025-12-13 09:10:07.702993882 +0000 UTC m=+0.144361087 container start 7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:10:07 np0005558241 podman[391958]: 2025-12-13 09:10:07.705808674 +0000 UTC m=+0.147175899 container attach 7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wright, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 04:10:07 np0005558241 interesting_wright[391974]: 167 167
Dec 13 04:10:07 np0005558241 systemd[1]: libpod-7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86.scope: Deactivated successfully.
Dec 13 04:10:07 np0005558241 conmon[391974]: conmon 7687d44c78c21b074d12 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86.scope/container/memory.events
Dec 13 04:10:07 np0005558241 podman[391979]: 2025-12-13 09:10:07.766430869 +0000 UTC m=+0.036892433 container died 7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:10:07 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2438b38e79e1082e02ae20d52d114520b29e4bbc42570d42e97d4747c217cd78-merged.mount: Deactivated successfully.
Dec 13 04:10:07 np0005558241 podman[391979]: 2025-12-13 09:10:07.80792685 +0000 UTC m=+0.078388414 container remove 7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_wright, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:10:07 np0005558241 systemd[1]: libpod-conmon-7687d44c78c21b074d122998c6167773be9cb185ddea813f558a3e41cdd9ec86.scope: Deactivated successfully.
Dec 13 04:10:08 np0005558241 podman[392001]: 2025-12-13 09:10:08.076646136 +0000 UTC m=+0.083735232 container create a5e41ec8da900063f315a38c0869e3146fc1898d5df0bb684bde7fe5207c605c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meninsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 04:10:08 np0005558241 podman[392001]: 2025-12-13 09:10:08.036066379 +0000 UTC m=+0.043155515 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:10:08 np0005558241 systemd[1]: Started libpod-conmon-a5e41ec8da900063f315a38c0869e3146fc1898d5df0bb684bde7fe5207c605c.scope.
Dec 13 04:10:08 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:10:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a7cc45ee8d5936514c48863f325469925b39fa06cafeb2c5250a121aa6b376/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a7cc45ee8d5936514c48863f325469925b39fa06cafeb2c5250a121aa6b376/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a7cc45ee8d5936514c48863f325469925b39fa06cafeb2c5250a121aa6b376/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a7cc45ee8d5936514c48863f325469925b39fa06cafeb2c5250a121aa6b376/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a7cc45ee8d5936514c48863f325469925b39fa06cafeb2c5250a121aa6b376/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:08 np0005558241 podman[392001]: 2025-12-13 09:10:08.205106102 +0000 UTC m=+0.212195248 container init a5e41ec8da900063f315a38c0869e3146fc1898d5df0bb684bde7fe5207c605c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meninsky, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:10:08 np0005558241 podman[392001]: 2025-12-13 09:10:08.220881259 +0000 UTC m=+0.227970355 container start a5e41ec8da900063f315a38c0869e3146fc1898d5df0bb684bde7fe5207c605c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meninsky, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:10:08 np0005558241 podman[392001]: 2025-12-13 09:10:08.22594939 +0000 UTC m=+0.233038526 container attach a5e41ec8da900063f315a38c0869e3146fc1898d5df0bb684bde7fe5207c605c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:10:08 np0005558241 nova_compute[248510]: 2025-12-13 09:10:08.612 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3271: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:08 np0005558241 heuristic_meninsky[392018]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:10:08 np0005558241 heuristic_meninsky[392018]: --> All data devices are unavailable
Dec 13 04:10:08 np0005558241 systemd[1]: libpod-a5e41ec8da900063f315a38c0869e3146fc1898d5df0bb684bde7fe5207c605c.scope: Deactivated successfully.
Dec 13 04:10:08 np0005558241 podman[392001]: 2025-12-13 09:10:08.820989099 +0000 UTC m=+0.828078165 container died a5e41ec8da900063f315a38c0869e3146fc1898d5df0bb684bde7fe5207c605c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 04:10:08 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b8a7cc45ee8d5936514c48863f325469925b39fa06cafeb2c5250a121aa6b376-merged.mount: Deactivated successfully.
Dec 13 04:10:08 np0005558241 podman[392001]: 2025-12-13 09:10:08.886164941 +0000 UTC m=+0.893254007 container remove a5e41ec8da900063f315a38c0869e3146fc1898d5df0bb684bde7fe5207c605c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meninsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:10:08 np0005558241 systemd[1]: libpod-conmon-a5e41ec8da900063f315a38c0869e3146fc1898d5df0bb684bde7fe5207c605c.scope: Deactivated successfully.
Dec 13 04:10:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:10:09
Dec 13 04:10:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:10:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:10:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'volumes', '.mgr', 'backups', 'vms', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data']
Dec 13 04:10:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:10:09 np0005558241 podman[392112]: 2025-12-13 09:10:09.442577183 +0000 UTC m=+0.057554976 container create 45dbff727ddc8f08bbd1a4f6bf3d5548247af3ce5544ee960c0d8765c1b027af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_merkle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:10:09 np0005558241 systemd[1]: Started libpod-conmon-45dbff727ddc8f08bbd1a4f6bf3d5548247af3ce5544ee960c0d8765c1b027af.scope.
Dec 13 04:10:09 np0005558241 podman[392112]: 2025-12-13 09:10:09.413138153 +0000 UTC m=+0.028115966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:10:09 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:10:09 np0005558241 podman[392112]: 2025-12-13 09:10:09.525669988 +0000 UTC m=+0.140647821 container init 45dbff727ddc8f08bbd1a4f6bf3d5548247af3ce5544ee960c0d8765c1b027af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_merkle, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:10:09 np0005558241 podman[392112]: 2025-12-13 09:10:09.536032746 +0000 UTC m=+0.151010519 container start 45dbff727ddc8f08bbd1a4f6bf3d5548247af3ce5544ee960c0d8765c1b027af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_merkle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 04:10:09 np0005558241 podman[392112]: 2025-12-13 09:10:09.540264135 +0000 UTC m=+0.155242008 container attach 45dbff727ddc8f08bbd1a4f6bf3d5548247af3ce5544ee960c0d8765c1b027af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:10:09 np0005558241 bold_merkle[392128]: 167 167
Dec 13 04:10:09 np0005558241 podman[392112]: 2025-12-13 09:10:09.543819346 +0000 UTC m=+0.158797119 container died 45dbff727ddc8f08bbd1a4f6bf3d5548247af3ce5544ee960c0d8765c1b027af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_merkle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 04:10:09 np0005558241 systemd[1]: libpod-45dbff727ddc8f08bbd1a4f6bf3d5548247af3ce5544ee960c0d8765c1b027af.scope: Deactivated successfully.
Dec 13 04:10:09 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e076ae78eb0801c32f47c73d81c1090f129e28b89fa1b6c2dc5e91e66bcacd23-merged.mount: Deactivated successfully.
Dec 13 04:10:09 np0005558241 podman[392112]: 2025-12-13 09:10:09.578504822 +0000 UTC m=+0.193482595 container remove 45dbff727ddc8f08bbd1a4f6bf3d5548247af3ce5544ee960c0d8765c1b027af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Dec 13 04:10:09 np0005558241 systemd[1]: libpod-conmon-45dbff727ddc8f08bbd1a4f6bf3d5548247af3ce5544ee960c0d8765c1b027af.scope: Deactivated successfully.
Dec 13 04:10:09 np0005558241 podman[392151]: 2025-12-13 09:10:09.784392406 +0000 UTC m=+0.066021435 container create eaa53c33612f5e3b68a0a4d212942716190b803704be5cb33710c9191a25060e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_hamilton, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:10:09 np0005558241 systemd[1]: Started libpod-conmon-eaa53c33612f5e3b68a0a4d212942716190b803704be5cb33710c9191a25060e.scope.
Dec 13 04:10:09 np0005558241 podman[392151]: 2025-12-13 09:10:09.754859744 +0000 UTC m=+0.036488863 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:10:09 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:10:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c809a2494321630fa3627dc74d887061763fde57f6c9d53093528075d0263076/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c809a2494321630fa3627dc74d887061763fde57f6c9d53093528075d0263076/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c809a2494321630fa3627dc74d887061763fde57f6c9d53093528075d0263076/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c809a2494321630fa3627dc74d887061763fde57f6c9d53093528075d0263076/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:09 np0005558241 podman[392151]: 2025-12-13 09:10:09.893898503 +0000 UTC m=+0.175527562 container init eaa53c33612f5e3b68a0a4d212942716190b803704be5cb33710c9191a25060e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 04:10:09 np0005558241 podman[392151]: 2025-12-13 09:10:09.902282829 +0000 UTC m=+0.183911858 container start eaa53c33612f5e3b68a0a4d212942716190b803704be5cb33710c9191a25060e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_hamilton, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:10:09 np0005558241 podman[392151]: 2025-12-13 09:10:09.905995965 +0000 UTC m=+0.187625004 container attach eaa53c33612f5e3b68a0a4d212942716190b803704be5cb33710c9191a25060e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_hamilton, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:10:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:10:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:10:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:10:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:10:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:10:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]: {
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:    "0": [
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:        {
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "devices": [
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "/dev/loop3"
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            ],
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "lv_name": "ceph_lv0",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "lv_size": "21470642176",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "name": "ceph_lv0",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "tags": {
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.cluster_name": "ceph",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.crush_device_class": "",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.encrypted": "0",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.objectstore": "bluestore",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.osd_id": "0",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.type": "block",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.vdo": "0",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.with_tpm": "0"
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            },
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "type": "block",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "vg_name": "ceph_vg0"
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:        }
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:    ],
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:    "1": [
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:        {
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "devices": [
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "/dev/loop4"
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            ],
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "lv_name": "ceph_lv1",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "lv_size": "21470642176",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "name": "ceph_lv1",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "tags": {
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.cluster_name": "ceph",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.crush_device_class": "",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.encrypted": "0",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.objectstore": "bluestore",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.osd_id": "1",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.type": "block",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.vdo": "0",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.with_tpm": "0"
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            },
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "type": "block",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "vg_name": "ceph_vg1"
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:        }
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:    ],
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:    "2": [
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:        {
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "devices": [
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "/dev/loop5"
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            ],
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "lv_name": "ceph_lv2",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "lv_size": "21470642176",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "name": "ceph_lv2",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "tags": {
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.cluster_name": "ceph",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.crush_device_class": "",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.encrypted": "0",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.objectstore": "bluestore",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.osd_id": "2",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.type": "block",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.vdo": "0",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:                "ceph.with_tpm": "0"
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            },
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "type": "block",
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:            "vg_name": "ceph_vg2"
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:        }
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]:    ]
Dec 13 04:10:10 np0005558241 sleepy_hamilton[392168]: }
Dec 13 04:10:10 np0005558241 systemd[1]: libpod-eaa53c33612f5e3b68a0a4d212942716190b803704be5cb33710c9191a25060e.scope: Deactivated successfully.
Dec 13 04:10:10 np0005558241 podman[392151]: 2025-12-13 09:10:10.230527962 +0000 UTC m=+0.512157061 container died eaa53c33612f5e3b68a0a4d212942716190b803704be5cb33710c9191a25060e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_hamilton, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:10:10 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c809a2494321630fa3627dc74d887061763fde57f6c9d53093528075d0263076-merged.mount: Deactivated successfully.
Dec 13 04:10:10 np0005558241 podman[392151]: 2025-12-13 09:10:10.603698773 +0000 UTC m=+0.885327802 container remove eaa53c33612f5e3b68a0a4d212942716190b803704be5cb33710c9191a25060e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 04:10:10 np0005558241 systemd[1]: libpod-conmon-eaa53c33612f5e3b68a0a4d212942716190b803704be5cb33710c9191a25060e.scope: Deactivated successfully.
Dec 13 04:10:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3272: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:10:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:10:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:10:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:10:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:10:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:10:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:10:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:10:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:10:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:10:11 np0005558241 podman[392251]: 2025-12-13 09:10:11.13862333 +0000 UTC m=+0.053678107 container create 8b9c7ac05c28c219b0a35ffc4f318ce9c29dd13d09ef6cbd23cd6e7fb34cdd43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 04:10:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:11 np0005558241 systemd[1]: Started libpod-conmon-8b9c7ac05c28c219b0a35ffc4f318ce9c29dd13d09ef6cbd23cd6e7fb34cdd43.scope.
Dec 13 04:10:11 np0005558241 podman[392251]: 2025-12-13 09:10:11.116573581 +0000 UTC m=+0.031628348 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:10:11 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:10:11 np0005558241 podman[392251]: 2025-12-13 09:10:11.231940739 +0000 UTC m=+0.146995506 container init 8b9c7ac05c28c219b0a35ffc4f318ce9c29dd13d09ef6cbd23cd6e7fb34cdd43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_perlman, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:10:11 np0005558241 podman[392251]: 2025-12-13 09:10:11.238866107 +0000 UTC m=+0.153920844 container start 8b9c7ac05c28c219b0a35ffc4f318ce9c29dd13d09ef6cbd23cd6e7fb34cdd43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:10:11 np0005558241 podman[392251]: 2025-12-13 09:10:11.242432879 +0000 UTC m=+0.157487636 container attach 8b9c7ac05c28c219b0a35ffc4f318ce9c29dd13d09ef6cbd23cd6e7fb34cdd43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_perlman, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:10:11 np0005558241 modest_perlman[392267]: 167 167
Dec 13 04:10:11 np0005558241 systemd[1]: libpod-8b9c7ac05c28c219b0a35ffc4f318ce9c29dd13d09ef6cbd23cd6e7fb34cdd43.scope: Deactivated successfully.
Dec 13 04:10:11 np0005558241 podman[392251]: 2025-12-13 09:10:11.248194758 +0000 UTC m=+0.163249505 container died 8b9c7ac05c28c219b0a35ffc4f318ce9c29dd13d09ef6cbd23cd6e7fb34cdd43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_perlman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:10:11 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0a3e667c1813d05700829b1d0ad80b3983779356ceb2c73e7f3ebf80627c9f4b-merged.mount: Deactivated successfully.
Dec 13 04:10:11 np0005558241 podman[392251]: 2025-12-13 09:10:11.288866658 +0000 UTC m=+0.203921425 container remove 8b9c7ac05c28c219b0a35ffc4f318ce9c29dd13d09ef6cbd23cd6e7fb34cdd43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Dec 13 04:10:11 np0005558241 systemd[1]: libpod-conmon-8b9c7ac05c28c219b0a35ffc4f318ce9c29dd13d09ef6cbd23cd6e7fb34cdd43.scope: Deactivated successfully.
Dec 13 04:10:11 np0005558241 nova_compute[248510]: 2025-12-13 09:10:11.352 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:11 np0005558241 podman[392290]: 2025-12-13 09:10:11.476279775 +0000 UTC m=+0.052407453 container create 27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 04:10:11 np0005558241 systemd[1]: Started libpod-conmon-27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9.scope.
Dec 13 04:10:11 np0005558241 podman[392290]: 2025-12-13 09:10:11.453106147 +0000 UTC m=+0.029233915 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:10:11 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:10:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f540b0c40ffdf8c15dddb81c27466e98da397545dd7baba565a3f1316e153d10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f540b0c40ffdf8c15dddb81c27466e98da397545dd7baba565a3f1316e153d10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f540b0c40ffdf8c15dddb81c27466e98da397545dd7baba565a3f1316e153d10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:11 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f540b0c40ffdf8c15dddb81c27466e98da397545dd7baba565a3f1316e153d10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:11 np0005558241 podman[392290]: 2025-12-13 09:10:11.575569838 +0000 UTC m=+0.151697516 container init 27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sanderson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:10:11 np0005558241 podman[392290]: 2025-12-13 09:10:11.585198467 +0000 UTC m=+0.161326135 container start 27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 04:10:11 np0005558241 podman[392290]: 2025-12-13 09:10:11.587754933 +0000 UTC m=+0.163882611 container attach 27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:10:12 np0005558241 lvm[392386]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:10:12 np0005558241 lvm[392386]: VG ceph_vg1 finished
Dec 13 04:10:12 np0005558241 lvm[392385]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:10:12 np0005558241 lvm[392385]: VG ceph_vg0 finished
Dec 13 04:10:12 np0005558241 lvm[392388]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:10:12 np0005558241 lvm[392388]: VG ceph_vg2 finished
Dec 13 04:10:12 np0005558241 competent_sanderson[392307]: {}
Dec 13 04:10:12 np0005558241 systemd[1]: libpod-27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9.scope: Deactivated successfully.
Dec 13 04:10:12 np0005558241 systemd[1]: libpod-27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9.scope: Consumed 1.497s CPU time.
Dec 13 04:10:12 np0005558241 podman[392290]: 2025-12-13 09:10:12.468514387 +0000 UTC m=+1.044642135 container died 27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:10:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f540b0c40ffdf8c15dddb81c27466e98da397545dd7baba565a3f1316e153d10-merged.mount: Deactivated successfully.
Dec 13 04:10:12 np0005558241 podman[392290]: 2025-12-13 09:10:12.524177753 +0000 UTC m=+1.100305461 container remove 27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 04:10:12 np0005558241 systemd[1]: libpod-conmon-27f646c66f394312d322df6e8dbc82e47265330d3518df019131429fe4fc87d9.scope: Deactivated successfully.
Dec 13 04:10:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:10:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:10:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:10:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:10:12 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:10:12 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:10:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3273: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:13 np0005558241 nova_compute[248510]: 2025-12-13 09:10:13.615 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3274: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4121122571' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4121122571' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:16 np0005558241 nova_compute[248510]: 2025-12-13 09:10:16.356 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3275: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:16 np0005558241 nova_compute[248510]: 2025-12-13 09:10:16.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:10:16 np0005558241 nova_compute[248510]: 2025-12-13 09:10:16.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:10:18 np0005558241 nova_compute[248510]: 2025-12-13 09:10:18.616 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3276: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3277: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:21 np0005558241 nova_compute[248510]: 2025-12-13 09:10:21.358 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4593073840003745e-05 of space, bias 1.0, pg target 0.004377922152001124 quantized to 32 (current 32)
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697401448045072 of space, bias 1.0, pg target 0.20092204344135217 quantized to 32 (current 32)
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:10:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3278: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:22 np0005558241 podman[392432]: 2025-12-13 09:10:22.992184196 +0000 UTC m=+0.069759432 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 04:10:23 np0005558241 podman[392431]: 2025-12-13 09:10:23.006705511 +0000 UTC m=+0.082976173 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 04:10:23 np0005558241 podman[392430]: 2025-12-13 09:10:23.038629245 +0000 UTC m=+0.114903637 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:10:23 np0005558241 nova_compute[248510]: 2025-12-13 09:10:23.618 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3279: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:24 np0005558241 nova_compute[248510]: 2025-12-13 09:10:24.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:10:24 np0005558241 nova_compute[248510]: 2025-12-13 09:10:24.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:10:24 np0005558241 nova_compute[248510]: 2025-12-13 09:10:24.910 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:10:24 np0005558241 nova_compute[248510]: 2025-12-13 09:10:24.910 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:10:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:26 np0005558241 nova_compute[248510]: 2025-12-13 09:10:26.362 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3280: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:26.854 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:a3:20 2001:db8:0:1:f816:3eff:fe49:a320 2001:db8::f816:3eff:fe49:a320'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe49:a320/64 2001:db8::f816:3eff:fe49:a320/64', 'neutron:device_id': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28a1c52d-6dd3-4202-a082-66c542a5a384, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c9cb1492-8274-4506-acb8-65660baeb5cf) old=Port_Binding(mac=['fa:16:3e:49:a3:20 2001:db8::f816:3eff:fe49:a320'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe49:a320/64', 'neutron:device_id': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:10:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:26.855 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c9cb1492-8274-4506-acb8-65660baeb5cf in datapath 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c updated#033[00m
Dec 13 04:10:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:26.857 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:10:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:26.859 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e44f8100-422d-404f-b49d-cedcd23f8810]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:28 np0005558241 nova_compute[248510]: 2025-12-13 09:10:28.622 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3281: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:28 np0005558241 nova_compute[248510]: 2025-12-13 09:10:28.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:10:28 np0005558241 nova_compute[248510]: 2025-12-13 09:10:28.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:10:28 np0005558241 nova_compute[248510]: 2025-12-13 09:10:28.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:10:28 np0005558241 nova_compute[248510]: 2025-12-13 09:10:28.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:10:28 np0005558241 nova_compute[248510]: 2025-12-13 09:10:28.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:10:28 np0005558241 nova_compute[248510]: 2025-12-13 09:10:28.810 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:10:28 np0005558241 nova_compute[248510]: 2025-12-13 09:10:28.811 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:10:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:10:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/835442701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:10:29 np0005558241 nova_compute[248510]: 2025-12-13 09:10:29.427 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:10:29 np0005558241 nova_compute[248510]: 2025-12-13 09:10:29.671 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:10:29 np0005558241 nova_compute[248510]: 2025-12-13 09:10:29.673 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3499MB free_disk=59.987405836582184GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:10:29 np0005558241 nova_compute[248510]: 2025-12-13 09:10:29.674 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:10:29 np0005558241 nova_compute[248510]: 2025-12-13 09:10:29.674 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:10:29 np0005558241 nova_compute[248510]: 2025-12-13 09:10:29.756 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:10:29 np0005558241 nova_compute[248510]: 2025-12-13 09:10:29.757 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:10:29 np0005558241 nova_compute[248510]: 2025-12-13 09:10:29.776 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:10:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:10:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/360535993' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:10:30 np0005558241 nova_compute[248510]: 2025-12-13 09:10:30.329 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:10:30 np0005558241 nova_compute[248510]: 2025-12-13 09:10:30.336 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:10:30 np0005558241 nova_compute[248510]: 2025-12-13 09:10:30.356 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:10:30 np0005558241 nova_compute[248510]: 2025-12-13 09:10:30.383 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:10:30 np0005558241 nova_compute[248510]: 2025-12-13 09:10:30.383 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:10:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3282: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:31 np0005558241 nova_compute[248510]: 2025-12-13 09:10:31.364 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:31 np0005558241 nova_compute[248510]: 2025-12-13 09:10:31.384 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:10:31 np0005558241 nova_compute[248510]: 2025-12-13 09:10:31.384 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:10:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3283: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:32 np0005558241 nova_compute[248510]: 2025-12-13 09:10:32.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:10:33 np0005558241 nova_compute[248510]: 2025-12-13 09:10:33.624 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3284: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:35 np0005558241 nova_compute[248510]: 2025-12-13 09:10:35.255 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:10:35 np0005558241 nova_compute[248510]: 2025-12-13 09:10:35.256 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:10:35 np0005558241 nova_compute[248510]: 2025-12-13 09:10:35.283 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:10:35 np0005558241 nova_compute[248510]: 2025-12-13 09:10:35.391 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:10:35 np0005558241 nova_compute[248510]: 2025-12-13 09:10:35.392 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:10:35 np0005558241 nova_compute[248510]: 2025-12-13 09:10:35.399 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:10:35 np0005558241 nova_compute[248510]: 2025-12-13 09:10:35.399 248514 INFO nova.compute.claims [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:10:35 np0005558241 nova_compute[248510]: 2025-12-13 09:10:35.514 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:10:35 np0005558241 nova_compute[248510]: 2025-12-13 09:10:35.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:10:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:10:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1015392230' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.062 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.068 248514 DEBUG nova.compute.provider_tree [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.084 248514 DEBUG nova.scheduler.client.report [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.105 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.106 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.155 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.156 248514 DEBUG nova.network.neutron [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:10:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.182 248514 INFO nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.208 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.308 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.310 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.310 248514 INFO nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Creating image(s)
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.334 248514 DEBUG nova.storage.rbd_utils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.358 248514 DEBUG nova.storage.rbd_utils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.383 248514 DEBUG nova.storage.rbd_utils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.387 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.431 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.473 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.474 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.474 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.475 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.497 248514 DEBUG nova.storage.rbd_utils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.501 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:10:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3285: 321 pgs: 321 active+clean; 41 MiB data, 945 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.741 248514 DEBUG nova.policy [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.807 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:10:36 np0005558241 nova_compute[248510]: 2025-12-13 09:10:36.890 248514 DEBUG nova.storage.rbd_utils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 04:10:37 np0005558241 nova_compute[248510]: 2025-12-13 09:10:37.007 248514 DEBUG nova.objects.instance [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:10:37 np0005558241 nova_compute[248510]: 2025-12-13 09:10:37.029 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 04:10:37 np0005558241 nova_compute[248510]: 2025-12-13 09:10:37.030 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Ensure instance console log exists: /var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:10:37 np0005558241 nova_compute[248510]: 2025-12-13 09:10:37.031 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:10:37 np0005558241 nova_compute[248510]: 2025-12-13 09:10:37.031 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:10:37 np0005558241 nova_compute[248510]: 2025-12-13 09:10:37.032 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:10:37 np0005558241 nova_compute[248510]: 2025-12-13 09:10:37.747 248514 DEBUG nova.network.neutron [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Successfully created port: b6bde1c8-9710-4155-9619-7040cbbca806 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:10:38 np0005558241 nova_compute[248510]: 2025-12-13 09:10:38.295 248514 DEBUG nova.network.neutron [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Successfully created port: 445f61a4-1352-4145-8b2b-f9f87f5435a7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:10:38 np0005558241 nova_compute[248510]: 2025-12-13 09:10:38.669 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3286: 321 pgs: 321 active+clean; 53 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 670 KiB/s wr, 11 op/s
Dec 13 04:10:39 np0005558241 nova_compute[248510]: 2025-12-13 09:10:39.065 248514 DEBUG nova.network.neutron [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Successfully updated port: b6bde1c8-9710-4155-9619-7040cbbca806 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:10:39 np0005558241 nova_compute[248510]: 2025-12-13 09:10:39.501 248514 DEBUG nova.compute.manager [req-9280c1c7-d610-429d-a61a-9a645f39e615 req-8296d93a-e176-4c40-bc08-5e4bb6a47270 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-changed-b6bde1c8-9710-4155-9619-7040cbbca806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:10:39 np0005558241 nova_compute[248510]: 2025-12-13 09:10:39.502 248514 DEBUG nova.compute.manager [req-9280c1c7-d610-429d-a61a-9a645f39e615 req-8296d93a-e176-4c40-bc08-5e4bb6a47270 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Refreshing instance network info cache due to event network-changed-b6bde1c8-9710-4155-9619-7040cbbca806. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:10:39 np0005558241 nova_compute[248510]: 2025-12-13 09:10:39.502 248514 DEBUG oslo_concurrency.lockutils [req-9280c1c7-d610-429d-a61a-9a645f39e615 req-8296d93a-e176-4c40-bc08-5e4bb6a47270 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:10:39 np0005558241 nova_compute[248510]: 2025-12-13 09:10:39.502 248514 DEBUG oslo_concurrency.lockutils [req-9280c1c7-d610-429d-a61a-9a645f39e615 req-8296d93a-e176-4c40-bc08-5e4bb6a47270 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:10:39 np0005558241 nova_compute[248510]: 2025-12-13 09:10:39.503 248514 DEBUG nova.network.neutron [req-9280c1c7-d610-429d-a61a-9a645f39e615 req-8296d93a-e176-4c40-bc08-5e4bb6a47270 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Refreshing network info cache for port b6bde1c8-9710-4155-9619-7040cbbca806 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:10:39 np0005558241 nova_compute[248510]: 2025-12-13 09:10:39.691 248514 DEBUG nova.network.neutron [req-9280c1c7-d610-429d-a61a-9a645f39e615 req-8296d93a-e176-4c40-bc08-5e4bb6a47270 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:10:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:10:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:10:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:10:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:10:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:10:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:10:40 np0005558241 nova_compute[248510]: 2025-12-13 09:10:40.707 248514 DEBUG nova.network.neutron [req-9280c1c7-d610-429d-a61a-9a645f39e615 req-8296d93a-e176-4c40-bc08-5e4bb6a47270 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:10:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3287: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:10:40 np0005558241 nova_compute[248510]: 2025-12-13 09:10:40.770 248514 DEBUG oslo_concurrency.lockutils [req-9280c1c7-d610-429d-a61a-9a645f39e615 req-8296d93a-e176-4c40-bc08-5e4bb6a47270 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:10:41 np0005558241 nova_compute[248510]: 2025-12-13 09:10:41.008 248514 DEBUG nova.network.neutron [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Successfully updated port: 445f61a4-1352-4145-8b2b-f9f87f5435a7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:10:41 np0005558241 nova_compute[248510]: 2025-12-13 09:10:41.025 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:10:41 np0005558241 nova_compute[248510]: 2025-12-13 09:10:41.026 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:10:41 np0005558241 nova_compute[248510]: 2025-12-13 09:10:41.026 248514 DEBUG nova.network.neutron [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:10:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:41 np0005558241 nova_compute[248510]: 2025-12-13 09:10:41.406 248514 DEBUG nova.network.neutron [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:10:41 np0005558241 nova_compute[248510]: 2025-12-13 09:10:41.435 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:41 np0005558241 nova_compute[248510]: 2025-12-13 09:10:41.576 248514 DEBUG nova.compute.manager [req-73078329-5e9c-418b-a884-e66799bf5967 req-8d42b115-45b5-4ce7-8300-d523e7d10bc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-changed-445f61a4-1352-4145-8b2b-f9f87f5435a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:10:41 np0005558241 nova_compute[248510]: 2025-12-13 09:10:41.577 248514 DEBUG nova.compute.manager [req-73078329-5e9c-418b-a884-e66799bf5967 req-8d42b115-45b5-4ce7-8300-d523e7d10bc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Refreshing instance network info cache due to event network-changed-445f61a4-1352-4145-8b2b-f9f87f5435a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:10:41 np0005558241 nova_compute[248510]: 2025-12-13 09:10:41.577 248514 DEBUG oslo_concurrency.lockutils [req-73078329-5e9c-418b-a884-e66799bf5967 req-8d42b115-45b5-4ce7-8300-d523e7d10bc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:10:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3288: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:10:43 np0005558241 nova_compute[248510]: 2025-12-13 09:10:43.711 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3289: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:10:44 np0005558241 nova_compute[248510]: 2025-12-13 09:10:44.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:10:45 np0005558241 ovn_controller[148476]: 2025-12-13T09:10:45Z|01474|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec 13 04:10:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:46 np0005558241 nova_compute[248510]: 2025-12-13 09:10:46.438 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3290: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.551 248514 DEBUG nova.network.neutron [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updating instance_info_cache with network_info: [{"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": 
"2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.579 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.580 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Instance network_info: |[{"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.582 248514 DEBUG oslo_concurrency.lockutils [req-73078329-5e9c-418b-a884-e66799bf5967 req-8d42b115-45b5-4ce7-8300-d523e7d10bc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.582 248514 DEBUG nova.network.neutron [req-73078329-5e9c-418b-a884-e66799bf5967 req-8d42b115-45b5-4ce7-8300-d523e7d10bc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Refreshing network info cache for port 445f61a4-1352-4145-8b2b-f9f87f5435a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.587 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Start _get_guest_xml network_info=[{"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.592 248514 WARNING nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.602 248514 DEBUG nova.virt.libvirt.host [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.603 248514 DEBUG nova.virt.libvirt.host [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.612 248514 DEBUG nova.virt.libvirt.host [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.612 248514 DEBUG nova.virt.libvirt.host [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.613 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.613 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.613 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.614 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.614 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.614 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.614 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.614 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.615 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.615 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.615 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.615 248514 DEBUG nova.virt.hardware [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:10:47 np0005558241 nova_compute[248510]: 2025-12-13 09:10:47.618 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:10:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:10:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3339093545' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.269 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.302 248514 DEBUG nova.storage.rbd_utils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.307 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.714 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3291: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:10:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:10:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3375848692' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.842 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.844 248514 DEBUG nova.virt.libvirt.vif [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:10:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-230241554',display_name='tempest-TestGettingAddress-server-230241554',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-230241554',id=141,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-60o8t9sd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:10:36Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.844 248514 DEBUG nova.network.os_vif_util [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.845 248514 DEBUG nova.network.os_vif_util [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:9b:72,bridge_name='br-int',has_traffic_filtering=True,id=b6bde1c8-9710-4155-9619-7040cbbca806,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6bde1c8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.846 248514 DEBUG nova.virt.libvirt.vif [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:10:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-230241554',display_name='tempest-TestGettingAddress-server-230241554',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-230241554',id=141,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-60o8t9sd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:10:36Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.846 248514 DEBUG nova.network.os_vif_util [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.847 248514 DEBUG nova.network.os_vif_util [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:7d:02,bridge_name='br-int',has_traffic_filtering=True,id=445f61a4-1352-4145-8b2b-f9f87f5435a7,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap445f61a4-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.848 248514 DEBUG nova.objects.instance [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.876 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:10:48 np0005558241 nova_compute[248510]:  <uuid>8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8</uuid>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:  <name>instance-0000008d</name>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestGettingAddress-server-230241554</nova:name>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:10:47</nova:creationTime>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:10:48 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:        <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:        <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:        <nova:port uuid="b6bde1c8-9710-4155-9619-7040cbbca806">
Dec 13 04:10:48 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:        <nova:port uuid="445f61a4-1352-4145-8b2b-f9f87f5435a7">
Dec 13 04:10:48 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe66:7d02" ipVersion="6"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe66:7d02" ipVersion="6"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <entry name="serial">8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8</entry>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <entry name="uuid">8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8</entry>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk">
Dec 13 04:10:48 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:10:48 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk.config">
Dec 13 04:10:48 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:10:48 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:54:9b:72"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <target dev="tapb6bde1c8-97"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:66:7d:02"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <target dev="tap445f61a4-13"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8/console.log" append="off"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:10:48 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:10:48 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:10:48 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:10:48 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.877 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Preparing to wait for external event network-vif-plugged-b6bde1c8-9710-4155-9619-7040cbbca806 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.878 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.879 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.879 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.880 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Preparing to wait for external event network-vif-plugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.880 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.880 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.880 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.881 248514 DEBUG nova.virt.libvirt.vif [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:10:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-230241554',display_name='tempest-TestGettingAddress-server-230241554',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-230241554',id=141,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-60o8t9sd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:10:36Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.881 248514 DEBUG nova.network.os_vif_util [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.882 248514 DEBUG nova.network.os_vif_util [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:9b:72,bridge_name='br-int',has_traffic_filtering=True,id=b6bde1c8-9710-4155-9619-7040cbbca806,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6bde1c8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.883 248514 DEBUG os_vif [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:9b:72,bridge_name='br-int',has_traffic_filtering=True,id=b6bde1c8-9710-4155-9619-7040cbbca806,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6bde1c8-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.883 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.884 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.884 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.889 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.889 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6bde1c8-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.889 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb6bde1c8-97, col_values=(('external_ids', {'iface-id': 'b6bde1c8-9710-4155-9619-7040cbbca806', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:9b:72', 'vm-uuid': '8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.892 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.894 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:10:48 np0005558241 NetworkManager[50376]: <info>  [1765617048.8943] manager: (tapb6bde1c8-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/609)
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.903 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.905 248514 INFO os_vif [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:9b:72,bridge_name='br-int',has_traffic_filtering=True,id=b6bde1c8-9710-4155-9619-7040cbbca806,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6bde1c8-97')#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.906 248514 DEBUG nova.virt.libvirt.vif [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:10:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-230241554',display_name='tempest-TestGettingAddress-server-230241554',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-230241554',id=141,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-60o8t9sd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:10:36Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.906 248514 DEBUG nova.network.os_vif_util [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.907 248514 DEBUG nova.network.os_vif_util [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:7d:02,bridge_name='br-int',has_traffic_filtering=True,id=445f61a4-1352-4145-8b2b-f9f87f5435a7,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap445f61a4-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.907 248514 DEBUG os_vif [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:7d:02,bridge_name='br-int',has_traffic_filtering=True,id=445f61a4-1352-4145-8b2b-f9f87f5435a7,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap445f61a4-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.908 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.908 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.911 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.911 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap445f61a4-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.912 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap445f61a4-13, col_values=(('external_ids', {'iface-id': '445f61a4-1352-4145-8b2b-f9f87f5435a7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:7d:02', 'vm-uuid': '8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.914 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:48 np0005558241 NetworkManager[50376]: <info>  [1765617048.9152] manager: (tap445f61a4-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/610)
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.926 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.928 248514 INFO os_vif [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:7d:02,bridge_name='br-int',has_traffic_filtering=True,id=445f61a4-1352-4145-8b2b-f9f87f5435a7,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap445f61a4-13')#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.995 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.996 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.996 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:54:9b:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.996 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:66:7d:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:10:48 np0005558241 nova_compute[248510]: 2025-12-13 09:10:48.997 248514 INFO nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Using config drive#033[00m
Dec 13 04:10:49 np0005558241 nova_compute[248510]: 2025-12-13 09:10:49.021 248514 DEBUG nova.storage.rbd_utils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:10:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:10:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.0 total, 600.0 interval
Cumulative writes: 14K writes, 65K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1386 writes, 5988 keys, 1386 commit groups, 1.0 writes per commit group, ingest: 9.31 MB, 0.02 MB/s
Interval WAL: 1385 writes, 1385 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     22.1      3.67              0.28        46    0.080       0      0       0.0       0.0
  L6      1/0    8.38 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.8     83.4     70.3      5.54              1.22        45    0.123    294K    24K       0.0       0.0
 Sum      1/0    8.38 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.8     50.2     51.1      9.21              1.50        91    0.101    294K    24K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.1    119.1    118.7      0.40              0.17         8    0.050     35K   2058       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     83.4     70.3      5.54              1.22        45    0.123    294K    24K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     22.1      3.66              0.28        45    0.081       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.0 total, 600.0 interval
Flush(GB): cumulative 0.079, interval 0.007
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.46 GB write, 0.08 MB/s write, 0.45 GB read, 0.08 MB/s read, 9.2 seconds
Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 304.00 MB usage: 53.58 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000892 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3328,51.44 MB,16.9221%) FilterBlock(92,819.05 KB,0.263109%) IndexBlock(92,1.33 MB,0.438223%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Dec 13 04:10:49 np0005558241 nova_compute[248510]: 2025-12-13 09:10:49.763 248514 INFO nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Creating config drive at /var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8/disk.config#033[00m
Dec 13 04:10:49 np0005558241 nova_compute[248510]: 2025-12-13 09:10:49.769 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpka2_pk1_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:10:49 np0005558241 nova_compute[248510]: 2025-12-13 09:10:49.928 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpka2_pk1_" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:10:49 np0005558241 nova_compute[248510]: 2025-12-13 09:10:49.969 248514 DEBUG nova.storage.rbd_utils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:10:49 np0005558241 nova_compute[248510]: 2025-12-13 09:10:49.975 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8/disk.config 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:10:50 np0005558241 nova_compute[248510]: 2025-12-13 09:10:50.313 248514 DEBUG oslo_concurrency.processutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8/disk.config 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:10:50 np0005558241 nova_compute[248510]: 2025-12-13 09:10:50.315 248514 INFO nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Deleting local config drive /var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8/disk.config because it was imported into RBD.#033[00m
Dec 13 04:10:50 np0005558241 kernel: tapb6bde1c8-97: entered promiscuous mode
Dec 13 04:10:50 np0005558241 NetworkManager[50376]: <info>  [1765617050.4036] manager: (tapb6bde1c8-97): new Tun device (/org/freedesktop/NetworkManager/Devices/611)
Dec 13 04:10:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:10:50Z|01475|binding|INFO|Claiming lport b6bde1c8-9710-4155-9619-7040cbbca806 for this chassis.
Dec 13 04:10:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:10:50Z|01476|binding|INFO|b6bde1c8-9710-4155-9619-7040cbbca806: Claiming fa:16:3e:54:9b:72 10.100.0.6
Dec 13 04:10:50 np0005558241 nova_compute[248510]: 2025-12-13 09:10:50.411 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:50 np0005558241 NetworkManager[50376]: <info>  [1765617050.4227] manager: (tap445f61a4-13): new Tun device (/org/freedesktop/NetworkManager/Devices/612)
Dec 13 04:10:50 np0005558241 kernel: tap445f61a4-13: entered promiscuous mode
Dec 13 04:10:50 np0005558241 nova_compute[248510]: 2025-12-13 09:10:50.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:50 np0005558241 nova_compute[248510]: 2025-12-13 09:10:50.429 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:10:50Z|01477|if_status|INFO|Dropped 5 log messages in last 99 seconds (most recently, 99 seconds ago) due to excessive rate
Dec 13 04:10:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:10:50Z|01478|if_status|INFO|Not updating pb chassis for 445f61a4-1352-4145-8b2b-f9f87f5435a7 now as sb is readonly
Dec 13 04:10:50 np0005558241 systemd-udevd[392868]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:10:50 np0005558241 systemd-udevd[392869]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:10:50 np0005558241 NetworkManager[50376]: <info>  [1765617050.4626] device (tapb6bde1c8-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:10:50 np0005558241 NetworkManager[50376]: <info>  [1765617050.4640] device (tap445f61a4-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:10:50 np0005558241 NetworkManager[50376]: <info>  [1765617050.4661] device (tapb6bde1c8-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:10:50 np0005558241 NetworkManager[50376]: <info>  [1765617050.4673] device (tap445f61a4-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:10:50 np0005558241 systemd-machined[210538]: New machine qemu-172-instance-0000008d.
Dec 13 04:10:50 np0005558241 systemd[1]: Started Virtual Machine qemu-172-instance-0000008d.
Dec 13 04:10:50 np0005558241 nova_compute[248510]: 2025-12-13 09:10:50.506 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:50 np0005558241 nova_compute[248510]: 2025-12-13 09:10:50.515 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:10:50Z|01479|binding|INFO|Setting lport b6bde1c8-9710-4155-9619-7040cbbca806 ovn-installed in OVS
Dec 13 04:10:50 np0005558241 nova_compute[248510]: 2025-12-13 09:10:50.526 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:10:50Z|01480|binding|INFO|Claiming lport 445f61a4-1352-4145-8b2b-f9f87f5435a7 for this chassis.
Dec 13 04:10:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:10:50Z|01481|binding|INFO|445f61a4-1352-4145-8b2b-f9f87f5435a7: Claiming fa:16:3e:66:7d:02 2001:db8:0:1:f816:3eff:fe66:7d02 2001:db8::f816:3eff:fe66:7d02
Dec 13 04:10:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:10:50Z|01482|binding|INFO|Setting lport b6bde1c8-9710-4155-9619-7040cbbca806 up in Southbound
Dec 13 04:10:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:10:50Z|01483|binding|INFO|Setting lport 445f61a4-1352-4145-8b2b-f9f87f5435a7 ovn-installed in OVS
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.638 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:9b:72 10.100.0.6'], port_security=['fa:16:3e:54:9b:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40eecf4d-448d-4e47-a0ea-dbd30bf32f62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44250a44-17ff-41cb-ae95-e575bff91a4a, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b6bde1c8-9710-4155-9619-7040cbbca806) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.640 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b6bde1c8-9710-4155-9619-7040cbbca806 in datapath a7065d5a-edce-4470-a56d-ab529d56aa3c bound to our chassis#033[00m
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.641 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7065d5a-edce-4470-a56d-ab529d56aa3c#033[00m
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.655 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[df6dcce1-d257-4ad6-bccf-dd2280df8137]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.656 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa7065d5a-e1 in ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.659 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa7065d5a-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.659 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[76c398bf-f4a6-432a-b9a0-5285f4b35b01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.660 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cf630294-e100-44a1-aad3-0f401a4c46d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.676 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd61417-d16e-470f-baf3-9f2f0ba4b86d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:50 np0005558241 nova_compute[248510]: 2025-12-13 09:10:50.683 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.698 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bff5b301-2004-4da7-9741-c715fe09fefd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.704 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:7d:02 2001:db8:0:1:f816:3eff:fe66:7d02 2001:db8::f816:3eff:fe66:7d02'], port_security=['fa:16:3e:66:7d:02 2001:db8:0:1:f816:3eff:fe66:7d02 2001:db8::f816:3eff:fe66:7d02'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe66:7d02/64 2001:db8::f816:3eff:fe66:7d02/64', 'neutron:device_id': '8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40eecf4d-448d-4e47-a0ea-dbd30bf32f62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28a1c52d-6dd3-4202-a082-66c542a5a384, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=445f61a4-1352-4145-8b2b-f9f87f5435a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:10:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:10:50Z|01484|binding|INFO|Setting lport 445f61a4-1352-4145-8b2b-f9f87f5435a7 up in Southbound
Dec 13 04:10:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3292: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.1 MiB/s wr, 15 op/s
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.735 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1e10e5a4-1c98-47fc-8363-b0e3ce0066d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:50 np0005558241 systemd-udevd[392875]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.741 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a1358229-dc93-4300-b076-74c4892b1216]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:50 np0005558241 NetworkManager[50376]: <info>  [1765617050.7426] manager: (tapa7065d5a-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/613)
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.783 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d34b82bc-3a62-49c8-bf2d-d4f81653db02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.787 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[84644085-cbc4-4cd3-9a2c-8163058010ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:50 np0005558241 NetworkManager[50376]: <info>  [1765617050.8146] device (tapa7065d5a-e0): carrier: link connected
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.821 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5e501c-e191-4ef5-b1dc-575392643a31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.839 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3b01ad4f-85e2-4780-930f-a8376952c9ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7065d5a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:e7:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 426], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962803, 'reachable_time': 25861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392923, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.859 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0d7de223-2ec9-4e06-b177-dab75ef1ecf3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:e795'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 962803, 'tstamp': 962803}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392924, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.878 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[871d6a3f-f19f-4cc6-ac17-b7d415d8a793]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7065d5a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:e7:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 426], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962803, 'reachable_time': 25861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 392925, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.915 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aabc3f77-f366-49d1-a727-ad5bb80b33cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.994 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f29471fa-d291-43ab-8c31-8836e8ef95a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.996 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7065d5a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.996 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:10:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:50.996 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7065d5a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:10:50 np0005558241 NetworkManager[50376]: <info>  [1765617050.9987] manager: (tapa7065d5a-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/614)
Dec 13 04:10:50 np0005558241 kernel: tapa7065d5a-e0: entered promiscuous mode
Dec 13 04:10:50 np0005558241 nova_compute[248510]: 2025-12-13 09:10:50.998 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:51.002 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7065d5a-e0, col_values=(('external_ids', {'iface-id': 'b83346dd-17c7-4f89-a540-efae6916310e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:10:51 np0005558241 ovn_controller[148476]: 2025-12-13T09:10:51Z|01485|binding|INFO|Releasing lport b83346dd-17c7-4f89-a540-efae6916310e from this chassis (sb_readonly=0)
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.003 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:51.005 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a7065d5a-edce-4470-a56d-ab529d56aa3c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a7065d5a-edce-4470-a56d-ab529d56aa3c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:51.006 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d99794-4f71-4fc1-9895-cc2622eff509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:51.007 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-a7065d5a-edce-4470-a56d-ab529d56aa3c
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/a7065d5a-edce-4470-a56d-ab529d56aa3c.pid.haproxy
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID a7065d5a-edce-4470-a56d-ab529d56aa3c
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:10:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:51.007 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'env', 'PROCESS_TAG=haproxy-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a7065d5a-edce-4470-a56d-ab529d56aa3c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.014 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.044 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617051.0440233, 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.045 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] VM Started (Lifecycle Event)#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.076 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.080 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617051.0441818, 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.080 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.102 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.105 248514 DEBUG nova.compute.manager [req-b93d3ded-e1db-4351-a084-6faf3e800698 req-3a4c03d2-c5ca-4fae-a58c-60a6a20c5fb7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-plugged-b6bde1c8-9710-4155-9619-7040cbbca806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.105 248514 DEBUG oslo_concurrency.lockutils [req-b93d3ded-e1db-4351-a084-6faf3e800698 req-3a4c03d2-c5ca-4fae-a58c-60a6a20c5fb7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.105 248514 DEBUG oslo_concurrency.lockutils [req-b93d3ded-e1db-4351-a084-6faf3e800698 req-3a4c03d2-c5ca-4fae-a58c-60a6a20c5fb7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.106 248514 DEBUG oslo_concurrency.lockutils [req-b93d3ded-e1db-4351-a084-6faf3e800698 req-3a4c03d2-c5ca-4fae-a58c-60a6a20c5fb7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.106 248514 DEBUG nova.compute.manager [req-b93d3ded-e1db-4351-a084-6faf3e800698 req-3a4c03d2-c5ca-4fae-a58c-60a6a20c5fb7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Processing event network-vif-plugged-b6bde1c8-9710-4155-9619-7040cbbca806 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.108 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.128 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:10:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.429 248514 DEBUG nova.compute.manager [req-9588d7a0-7d7c-4584-b701-1b7074f6841c req-2be158ea-0a8e-4bbf-8fe0-ba730bd06298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-plugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.429 248514 DEBUG oslo_concurrency.lockutils [req-9588d7a0-7d7c-4584-b701-1b7074f6841c req-2be158ea-0a8e-4bbf-8fe0-ba730bd06298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.429 248514 DEBUG oslo_concurrency.lockutils [req-9588d7a0-7d7c-4584-b701-1b7074f6841c req-2be158ea-0a8e-4bbf-8fe0-ba730bd06298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.430 248514 DEBUG oslo_concurrency.lockutils [req-9588d7a0-7d7c-4584-b701-1b7074f6841c req-2be158ea-0a8e-4bbf-8fe0-ba730bd06298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.430 248514 DEBUG nova.compute.manager [req-9588d7a0-7d7c-4584-b701-1b7074f6841c req-2be158ea-0a8e-4bbf-8fe0-ba730bd06298 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Processing event network-vif-plugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.430 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.434 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617051.43434, 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.434 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.436 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.440 248514 INFO nova.virt.libvirt.driver [-] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Instance spawned successfully.#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.440 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.464 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.470 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.473 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.474 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.474 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.474 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.475 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.475 248514 DEBUG nova.virt.libvirt.driver [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:10:51 np0005558241 podman[392982]: 2025-12-13 09:10:51.391557753 +0000 UTC m=+0.022623525 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.531 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.547 248514 INFO nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Took 15.24 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.548 248514 DEBUG nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.663 248514 INFO nova.compute.manager [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Took 16.31 seconds to build instance.#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.681 248514 DEBUG oslo_concurrency.lockutils [None req-26cbf089-1fb5-41bd-9c6a-49e419d3c8c3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:10:51 np0005558241 podman[392982]: 2025-12-13 09:10:51.741793543 +0000 UTC m=+0.372859305 container create b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.858 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:10:51 np0005558241 nova_compute[248510]: 2025-12-13 09:10:51.858 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 13 04:10:51 np0005558241 systemd[1]: Started libpod-conmon-b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c.scope.
Dec 13 04:10:51 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:10:51 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d83ba7e95364800c67b1ef1b047ddcfb67b0d58972ad0e7ad048a84a30676ad2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:52 np0005558241 nova_compute[248510]: 2025-12-13 09:10:52.026 248514 DEBUG nova.network.neutron [req-73078329-5e9c-418b-a884-e66799bf5967 req-8d42b115-45b5-4ce7-8300-d523e7d10bc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updated VIF entry in instance network info cache for port 445f61a4-1352-4145-8b2b-f9f87f5435a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:10:52 np0005558241 nova_compute[248510]: 2025-12-13 09:10:52.027 248514 DEBUG nova.network.neutron [req-73078329-5e9c-418b-a884-e66799bf5967 req-8d42b115-45b5-4ce7-8300-d523e7d10bc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updating instance_info_cache with network_info: [{"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": 
[], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:10:52 np0005558241 nova_compute[248510]: 2025-12-13 09:10:52.047 248514 DEBUG oslo_concurrency.lockutils [req-73078329-5e9c-418b-a884-e66799bf5967 req-8d42b115-45b5-4ce7-8300-d523e7d10bc2 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:10:52 np0005558241 podman[392982]: 2025-12-13 09:10:52.071983236 +0000 UTC m=+0.703049048 container init b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 04:10:52 np0005558241 podman[392982]: 2025-12-13 09:10:52.085626038 +0000 UTC m=+0.716691800 container start b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 13 04:10:52 np0005558241 neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c[392998]: [NOTICE]   (393002) : New worker (393004) forked
Dec 13 04:10:52 np0005558241 neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c[392998]: [NOTICE]   (393002) : Loading success.
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.293 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 445f61a4-1352-4145-8b2b-f9f87f5435a7 in datapath 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c unbound from our chassis#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.294 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.310 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fe64b9c5-2be8-4317-a216-a4e864fc3bf4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.310 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7661cc7b-f1 in ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.312 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7661cc7b-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.312 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[75b4b4dd-6f76-4971-89a4-0d5b803c5413]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.313 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4e3a279b-0de1-46db-a943-93632e7c6d93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.324 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a6a2c6-8204-4985-b0f4-f5e68ad174d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.346 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b1795afd-e461-42b9-b488-d0cdf5c0ebd1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.391 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[59e65133-3b22-45a1-9c92-01121697a177]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:52 np0005558241 NetworkManager[50376]: <info>  [1765617052.4009] manager: (tap7661cc7b-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/615)
Dec 13 04:10:52 np0005558241 systemd-udevd[392900]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.400 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7c600024-3c20-4a51-8fc7-6a4cfc51fcd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.437 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2aaa0b5d-9cf9-4883-adaf-c97621ca3593]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.443 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[989cd4b9-bb68-451c-9e72-8219c8d5f974]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:52 np0005558241 NetworkManager[50376]: <info>  [1765617052.4696] device (tap7661cc7b-f0): carrier: link connected
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.476 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b21abdfe-0906-4ec5-b91a-786626e045b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.501 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[63cacdcd-3ade-481b-b20d-de725ff2f7ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7661cc7b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:a3:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 427], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962968, 'reachable_time': 41750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393023, 'error': None, 'target': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.518 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[342bbd66-753c-4306-a6c3-526bbb04b82a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe49:a320'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 962968, 'tstamp': 962968}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393024, 'error': None, 'target': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.537 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f23984c7-57ab-48fd-9f8e-26378d88c3a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7661cc7b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:a3:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 427], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962968, 'reachable_time': 41750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 393025, 'error': None, 'target': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.572 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e1961034-838d-4a15-9fe5-a479d77475bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.611 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[26077b22-25dd-473e-b384-723f730ca9bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.614 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7661cc7b-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.614 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.615 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7661cc7b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:10:52 np0005558241 nova_compute[248510]: 2025-12-13 09:10:52.617 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:52 np0005558241 NetworkManager[50376]: <info>  [1765617052.6183] manager: (tap7661cc7b-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/616)
Dec 13 04:10:52 np0005558241 kernel: tap7661cc7b-f0: entered promiscuous mode
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.624 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7661cc7b-f0, col_values=(('external_ids', {'iface-id': 'c9cb1492-8274-4506-acb8-65660baeb5cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:10:52 np0005558241 ovn_controller[148476]: 2025-12-13T09:10:52Z|01486|binding|INFO|Releasing lport c9cb1492-8274-4506-acb8-65660baeb5cf from this chassis (sb_readonly=0)
Dec 13 04:10:52 np0005558241 nova_compute[248510]: 2025-12-13 09:10:52.625 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.627 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7661cc7b-fcf7-41b6-b117-ca8fede7ad4c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7661cc7b-fcf7-41b6-b117-ca8fede7ad4c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.628 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7223f012-76ad-4f84-9531-1326b2e6e4ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.629 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/7661cc7b-fcf7-41b6-b117-ca8fede7ad4c.pid.haproxy
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:10:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:52.630 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'env', 'PROCESS_TAG=haproxy-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7661cc7b-fcf7-41b6-b117-ca8fede7ad4c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:10:52 np0005558241 nova_compute[248510]: 2025-12-13 09:10:52.642 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3293: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:10:53 np0005558241 podman[393056]: 2025-12-13 09:10:53.020723255 +0000 UTC m=+0.066410806 container create bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:10:53 np0005558241 podman[393056]: 2025-12-13 09:10:52.978882164 +0000 UTC m=+0.024569776 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:10:53 np0005558241 systemd[1]: Started libpod-conmon-bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411.scope.
Dec 13 04:10:53 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:10:53 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f32a15c927b95fe7b352ffa0d269ecc44713de0e4d769f1bd0a4e926820f3a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:53 np0005558241 podman[393056]: 2025-12-13 09:10:53.189895182 +0000 UTC m=+0.235582723 container init bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 13 04:10:53 np0005558241 podman[393056]: 2025-12-13 09:10:53.197285182 +0000 UTC m=+0.242972693 container start bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:10:53 np0005558241 podman[393069]: 2025-12-13 09:10:53.210914514 +0000 UTC m=+0.139230655 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, tcib_managed=true)
Dec 13 04:10:53 np0005558241 podman[393070]: 2025-12-13 09:10:53.211592351 +0000 UTC m=+0.145029534 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 13 04:10:53 np0005558241 neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c[393088]: [NOTICE]   (393125) : New worker (393132) forked
Dec 13 04:10:53 np0005558241 neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c[393088]: [NOTICE]   (393125) : Loading success.
Dec 13 04:10:53 np0005558241 podman[393083]: 2025-12-13 09:10:53.24835502 +0000 UTC m=+0.153039131 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:10:53 np0005558241 nova_compute[248510]: 2025-12-13 09:10:53.567 248514 DEBUG nova.compute.manager [req-55dc2bca-d490-461b-95b9-9b63a1b2243b req-277b6e81-5634-4426-97ae-4e94000dff35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-plugged-b6bde1c8-9710-4155-9619-7040cbbca806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:10:53 np0005558241 nova_compute[248510]: 2025-12-13 09:10:53.567 248514 DEBUG oslo_concurrency.lockutils [req-55dc2bca-d490-461b-95b9-9b63a1b2243b req-277b6e81-5634-4426-97ae-4e94000dff35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:10:53 np0005558241 nova_compute[248510]: 2025-12-13 09:10:53.568 248514 DEBUG oslo_concurrency.lockutils [req-55dc2bca-d490-461b-95b9-9b63a1b2243b req-277b6e81-5634-4426-97ae-4e94000dff35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:10:53 np0005558241 nova_compute[248510]: 2025-12-13 09:10:53.568 248514 DEBUG oslo_concurrency.lockutils [req-55dc2bca-d490-461b-95b9-9b63a1b2243b req-277b6e81-5634-4426-97ae-4e94000dff35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:10:53 np0005558241 nova_compute[248510]: 2025-12-13 09:10:53.569 248514 DEBUG nova.compute.manager [req-55dc2bca-d490-461b-95b9-9b63a1b2243b req-277b6e81-5634-4426-97ae-4e94000dff35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] No waiting events found dispatching network-vif-plugged-b6bde1c8-9710-4155-9619-7040cbbca806 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:10:53 np0005558241 nova_compute[248510]: 2025-12-13 09:10:53.569 248514 WARNING nova.compute.manager [req-55dc2bca-d490-461b-95b9-9b63a1b2243b req-277b6e81-5634-4426-97ae-4e94000dff35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received unexpected event network-vif-plugged-b6bde1c8-9710-4155-9619-7040cbbca806 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:10:53 np0005558241 nova_compute[248510]: 2025-12-13 09:10:53.573 248514 DEBUG nova.compute.manager [req-076931d4-b9f4-4b7c-81af-d288a058f633 req-88c3be8c-4e7f-48c4-94a0-ba7e64c3dcf8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-plugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:10:53 np0005558241 nova_compute[248510]: 2025-12-13 09:10:53.573 248514 DEBUG oslo_concurrency.lockutils [req-076931d4-b9f4-4b7c-81af-d288a058f633 req-88c3be8c-4e7f-48c4-94a0-ba7e64c3dcf8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:10:53 np0005558241 nova_compute[248510]: 2025-12-13 09:10:53.574 248514 DEBUG oslo_concurrency.lockutils [req-076931d4-b9f4-4b7c-81af-d288a058f633 req-88c3be8c-4e7f-48c4-94a0-ba7e64c3dcf8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:10:53 np0005558241 nova_compute[248510]: 2025-12-13 09:10:53.574 248514 DEBUG oslo_concurrency.lockutils [req-076931d4-b9f4-4b7c-81af-d288a058f633 req-88c3be8c-4e7f-48c4-94a0-ba7e64c3dcf8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:10:53 np0005558241 nova_compute[248510]: 2025-12-13 09:10:53.575 248514 DEBUG nova.compute.manager [req-076931d4-b9f4-4b7c-81af-d288a058f633 req-88c3be8c-4e7f-48c4-94a0-ba7e64c3dcf8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] No waiting events found dispatching network-vif-plugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:10:53 np0005558241 nova_compute[248510]: 2025-12-13 09:10:53.575 248514 WARNING nova.compute.manager [req-076931d4-b9f4-4b7c-81af-d288a058f633 req-88c3be8c-4e7f-48c4-94a0-ba7e64c3dcf8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received unexpected event network-vif-plugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:10:53 np0005558241 nova_compute[248510]: 2025-12-13 09:10:53.715 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:53 np0005558241 nova_compute[248510]: 2025-12-13 09:10:53.921 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3294: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:10:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:55.448 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:10:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:55.449 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:10:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:10:55.449 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:10:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3295: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:10:56 np0005558241 NetworkManager[50376]: <info>  [1765617056.7431] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/617)
Dec 13 04:10:56 np0005558241 nova_compute[248510]: 2025-12-13 09:10:56.742 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:56 np0005558241 NetworkManager[50376]: <info>  [1765617056.7449] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/618)
Dec 13 04:10:56 np0005558241 ovn_controller[148476]: 2025-12-13T09:10:56Z|01487|binding|INFO|Releasing lport c9cb1492-8274-4506-acb8-65660baeb5cf from this chassis (sb_readonly=0)
Dec 13 04:10:56 np0005558241 ovn_controller[148476]: 2025-12-13T09:10:56Z|01488|binding|INFO|Releasing lport b83346dd-17c7-4f89-a540-efae6916310e from this chassis (sb_readonly=0)
Dec 13 04:10:56 np0005558241 ovn_controller[148476]: 2025-12-13T09:10:56Z|01489|binding|INFO|Releasing lport c9cb1492-8274-4506-acb8-65660baeb5cf from this chassis (sb_readonly=0)
Dec 13 04:10:56 np0005558241 ovn_controller[148476]: 2025-12-13T09:10:56Z|01490|binding|INFO|Releasing lport b83346dd-17c7-4f89-a540-efae6916310e from this chassis (sb_readonly=0)
Dec 13 04:10:56 np0005558241 nova_compute[248510]: 2025-12-13 09:10:56.802 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:56 np0005558241 nova_compute[248510]: 2025-12-13 09:10:56.811 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:57 np0005558241 nova_compute[248510]: 2025-12-13 09:10:57.708 248514 DEBUG nova.compute.manager [req-6c32e77b-19b6-4876-be46-8c6a1f623d82 req-79ff8a44-50e5-4408-925f-d1d6d1f83be7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-changed-b6bde1c8-9710-4155-9619-7040cbbca806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:10:57 np0005558241 nova_compute[248510]: 2025-12-13 09:10:57.709 248514 DEBUG nova.compute.manager [req-6c32e77b-19b6-4876-be46-8c6a1f623d82 req-79ff8a44-50e5-4408-925f-d1d6d1f83be7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Refreshing instance network info cache due to event network-changed-b6bde1c8-9710-4155-9619-7040cbbca806. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:10:57 np0005558241 nova_compute[248510]: 2025-12-13 09:10:57.709 248514 DEBUG oslo_concurrency.lockutils [req-6c32e77b-19b6-4876-be46-8c6a1f623d82 req-79ff8a44-50e5-4408-925f-d1d6d1f83be7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:10:57 np0005558241 nova_compute[248510]: 2025-12-13 09:10:57.710 248514 DEBUG oslo_concurrency.lockutils [req-6c32e77b-19b6-4876-be46-8c6a1f623d82 req-79ff8a44-50e5-4408-925f-d1d6d1f83be7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:10:57 np0005558241 nova_compute[248510]: 2025-12-13 09:10:57.710 248514 DEBUG nova.network.neutron [req-6c32e77b-19b6-4876-be46-8c6a1f623d82 req-79ff8a44-50e5-4408-925f-d1d6d1f83be7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Refreshing network info cache for port b6bde1c8-9710-4155-9619-7040cbbca806 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:10:57 np0005558241 nova_compute[248510]: 2025-12-13 09:10:57.782 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:10:58 np0005558241 nova_compute[248510]: 2025-12-13 09:10:58.718 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3296: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:10:58 np0005558241 nova_compute[248510]: 2025-12-13 09:10:58.923 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:10:59 np0005558241 nova_compute[248510]: 2025-12-13 09:10:59.310 248514 DEBUG nova.network.neutron [req-6c32e77b-19b6-4876-be46-8c6a1f623d82 req-79ff8a44-50e5-4408-925f-d1d6d1f83be7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updated VIF entry in instance network info cache for port b6bde1c8-9710-4155-9619-7040cbbca806. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:10:59 np0005558241 nova_compute[248510]: 2025-12-13 09:10:59.310 248514 DEBUG nova.network.neutron [req-6c32e77b-19b6-4876-be46-8c6a1f623d82 req-79ff8a44-50e5-4408-925f-d1d6d1f83be7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updating instance_info_cache with network_info: [{"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:10:59 np0005558241 nova_compute[248510]: 2025-12-13 09:10:59.345 248514 DEBUG oslo_concurrency.lockutils [req-6c32e77b-19b6-4876-be46-8c6a1f623d82 req-79ff8a44-50e5-4408-925f-d1d6d1f83be7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:11:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3297: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:11:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3298: 321 pgs: 321 active+clean; 88 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:11:03 np0005558241 nova_compute[248510]: 2025-12-13 09:11:03.719 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:03 np0005558241 nova_compute[248510]: 2025-12-13 09:11:03.927 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3299: 321 pgs: 321 active+clean; 109 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 117 op/s
Dec 13 04:11:04 np0005558241 ovn_controller[148476]: 2025-12-13T09:11:04Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:9b:72 10.100.0.6
Dec 13 04:11:04 np0005558241 ovn_controller[148476]: 2025-12-13T09:11:04Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:9b:72 10.100.0.6
Dec 13 04:11:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3300: 321 pgs: 321 active+clean; 109 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 191 KiB/s rd, 2.0 MiB/s wr, 43 op/s
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:08.634393) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617068634449, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1963, "num_deletes": 251, "total_data_size": 3407520, "memory_usage": 3455552, "flush_reason": "Manual Compaction"}
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Dec 13 04:11:08 np0005558241 nova_compute[248510]: 2025-12-13 09:11:08.722 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3301: 321 pgs: 321 active+clean; 116 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 299 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617068779878, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 3312728, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63983, "largest_seqno": 65945, "table_properties": {"data_size": 3303776, "index_size": 5573, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18032, "raw_average_key_size": 20, "raw_value_size": 3286051, "raw_average_value_size": 3655, "num_data_blocks": 247, "num_entries": 899, "num_filter_entries": 899, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765616857, "oldest_key_time": 1765616857, "file_creation_time": 1765617068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 145775 microseconds, and 12559 cpu microseconds.
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:08.780116) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 3312728 bytes OK
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:08.780230) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:08.783700) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:08.783714) EVENT_LOG_v1 {"time_micros": 1765617068783709, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:08.783731) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 3399264, prev total WAL file size 3399264, number of live WAL files 2.
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:08.784835) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(3235KB)], [152(8579KB)]
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617068784955, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 12097855, "oldest_snapshot_seqno": -1}
Dec 13 04:11:08 np0005558241 nova_compute[248510]: 2025-12-13 09:11:08.929 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8578 keys, 10304293 bytes, temperature: kUnknown
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617068976568, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 10304293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10250280, "index_size": 31436, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21509, "raw_key_size": 224378, "raw_average_key_size": 26, "raw_value_size": 10100606, "raw_average_value_size": 1177, "num_data_blocks": 1213, "num_entries": 8578, "num_filter_entries": 8578, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765617068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:11:08 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:11:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:08.976862) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 10304293 bytes
Dec 13 04:11:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:09.023157) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.1 rd, 53.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.4 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 9092, records dropped: 514 output_compression: NoCompression
Dec 13 04:11:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:09.023204) EVENT_LOG_v1 {"time_micros": 1765617069023186, "job": 94, "event": "compaction_finished", "compaction_time_micros": 191704, "compaction_time_cpu_micros": 47728, "output_level": 6, "num_output_files": 1, "total_output_size": 10304293, "num_input_records": 9092, "num_output_records": 8578, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:11:09 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:11:09 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617069024828, "job": 94, "event": "table_file_deletion", "file_number": 154}
Dec 13 04:11:09 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:11:09 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617069027978, "job": 94, "event": "table_file_deletion", "file_number": 152}
Dec 13 04:11:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:08.784690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:11:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:09.028108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:11:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:09.028117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:11:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:09.028121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:11:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:09.028124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:11:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:11:09.028127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:11:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:11:09
Dec 13 04:11:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:11:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:11:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'backups', 'images', '.mgr', '.rgw.root']
Dec 13 04:11:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:11:09 np0005558241 nova_compute[248510]: 2025-12-13 09:11:09.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:11:09 np0005558241 nova_compute[248510]: 2025-12-13 09:11:09.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 13 04:11:09 np0005558241 nova_compute[248510]: 2025-12-13 09:11:09.981 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 13 04:11:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:11:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:11:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:11:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:11:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:11:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:11:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3302: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 04:11:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:11:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:11:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:11:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:11:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:11:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:11:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:11:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:11:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:11:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:11:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3303: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 04:11:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:11:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:11:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:11:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:11:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:11:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:11:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:11:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:11:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:11:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:11:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:11:13 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:11:13 np0005558241 nova_compute[248510]: 2025-12-13 09:11:13.757 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:13 np0005558241 nova_compute[248510]: 2025-12-13 09:11:13.931 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:11:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:11:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:11:14 np0005558241 podman[393295]: 2025-12-13 09:11:14.08346692 +0000 UTC m=+0.050670939 container create b3a5dc3324a6aa35e02954e28068765ba02f4eb5d53f0df463c1be8eb87d49e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_khorana, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:11:14 np0005558241 systemd[1]: Started libpod-conmon-b3a5dc3324a6aa35e02954e28068765ba02f4eb5d53f0df463c1be8eb87d49e2.scope.
Dec 13 04:11:14 np0005558241 podman[393295]: 2025-12-13 09:11:14.053045424 +0000 UTC m=+0.020249514 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:11:14 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:11:14 np0005558241 podman[393295]: 2025-12-13 09:11:14.174796687 +0000 UTC m=+0.142000716 container init b3a5dc3324a6aa35e02954e28068765ba02f4eb5d53f0df463c1be8eb87d49e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_khorana, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:11:14 np0005558241 podman[393295]: 2025-12-13 09:11:14.185574585 +0000 UTC m=+0.152778574 container start b3a5dc3324a6aa35e02954e28068765ba02f4eb5d53f0df463c1be8eb87d49e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_khorana, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:11:14 np0005558241 podman[393295]: 2025-12-13 09:11:14.189321412 +0000 UTC m=+0.156525421 container attach b3a5dc3324a6aa35e02954e28068765ba02f4eb5d53f0df463c1be8eb87d49e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_khorana, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:11:14 np0005558241 sad_khorana[393311]: 167 167
Dec 13 04:11:14 np0005558241 systemd[1]: libpod-b3a5dc3324a6aa35e02954e28068765ba02f4eb5d53f0df463c1be8eb87d49e2.scope: Deactivated successfully.
Dec 13 04:11:14 np0005558241 podman[393295]: 2025-12-13 09:11:14.19582432 +0000 UTC m=+0.163028359 container died b3a5dc3324a6aa35e02954e28068765ba02f4eb5d53f0df463c1be8eb87d49e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:11:14 np0005558241 systemd[1]: var-lib-containers-storage-overlay-29b378f1bd454c92c416782d7ffd6d03df425f5b41178497c1d620d5571bf43c-merged.mount: Deactivated successfully.
Dec 13 04:11:14 np0005558241 podman[393295]: 2025-12-13 09:11:14.252665767 +0000 UTC m=+0.219869756 container remove b3a5dc3324a6aa35e02954e28068765ba02f4eb5d53f0df463c1be8eb87d49e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_khorana, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 04:11:14 np0005558241 systemd[1]: libpod-conmon-b3a5dc3324a6aa35e02954e28068765ba02f4eb5d53f0df463c1be8eb87d49e2.scope: Deactivated successfully.
Dec 13 04:11:14 np0005558241 podman[393334]: 2025-12-13 09:11:14.496807389 +0000 UTC m=+0.049939590 container create 62c0771568663dfcb292d9a583579397e3ebd78aec8689652d29ea863aa5ce17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 04:11:14 np0005558241 systemd[1]: Started libpod-conmon-62c0771568663dfcb292d9a583579397e3ebd78aec8689652d29ea863aa5ce17.scope.
Dec 13 04:11:14 np0005558241 podman[393334]: 2025-12-13 09:11:14.475502239 +0000 UTC m=+0.028634480 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:11:14 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:11:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47a22f3e5d42138916a8158f9bea72e084f44bb8b9dae01822bd36b2c2fd336/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47a22f3e5d42138916a8158f9bea72e084f44bb8b9dae01822bd36b2c2fd336/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47a22f3e5d42138916a8158f9bea72e084f44bb8b9dae01822bd36b2c2fd336/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47a22f3e5d42138916a8158f9bea72e084f44bb8b9dae01822bd36b2c2fd336/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:14 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47a22f3e5d42138916a8158f9bea72e084f44bb8b9dae01822bd36b2c2fd336/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:14 np0005558241 podman[393334]: 2025-12-13 09:11:14.727832892 +0000 UTC m=+0.280965123 container init 62c0771568663dfcb292d9a583579397e3ebd78aec8689652d29ea863aa5ce17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:11:14 np0005558241 podman[393334]: 2025-12-13 09:11:14.739708928 +0000 UTC m=+0.292841129 container start 62c0771568663dfcb292d9a583579397e3ebd78aec8689652d29ea863aa5ce17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:11:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3304: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:11:14 np0005558241 podman[393334]: 2025-12-13 09:11:14.781918988 +0000 UTC m=+0.335051229 container attach 62c0771568663dfcb292d9a583579397e3ebd78aec8689652d29ea863aa5ce17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:11:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:11:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4044870584' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:11:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4044870584' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:15 np0005558241 musing_feynman[393351]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:11:15 np0005558241 musing_feynman[393351]: --> All data devices are unavailable
Dec 13 04:11:15 np0005558241 systemd[1]: libpod-62c0771568663dfcb292d9a583579397e3ebd78aec8689652d29ea863aa5ce17.scope: Deactivated successfully.
Dec 13 04:11:15 np0005558241 podman[393334]: 2025-12-13 09:11:15.354937887 +0000 UTC m=+0.908070128 container died 62c0771568663dfcb292d9a583579397e3ebd78aec8689652d29ea863aa5ce17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 04:11:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d47a22f3e5d42138916a8158f9bea72e084f44bb8b9dae01822bd36b2c2fd336-merged.mount: Deactivated successfully.
Dec 13 04:11:15 np0005558241 podman[393334]: 2025-12-13 09:11:15.417149573 +0000 UTC m=+0.970281764 container remove 62c0771568663dfcb292d9a583579397e3ebd78aec8689652d29ea863aa5ce17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:11:15 np0005558241 systemd[1]: libpod-conmon-62c0771568663dfcb292d9a583579397e3ebd78aec8689652d29ea863aa5ce17.scope: Deactivated successfully.
Dec 13 04:11:15 np0005558241 podman[393444]: 2025-12-13 09:11:15.922820055 +0000 UTC m=+0.040444775 container create 24382545cf89d9b6fcd54df41952d273aa057641f50df7671255f7679206829f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:11:15 np0005558241 systemd[1]: Started libpod-conmon-24382545cf89d9b6fcd54df41952d273aa057641f50df7671255f7679206829f.scope.
Dec 13 04:11:15 np0005558241 podman[393444]: 2025-12-13 09:11:15.904626336 +0000 UTC m=+0.022251086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:11:16 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:11:16 np0005558241 podman[393444]: 2025-12-13 09:11:16.023059093 +0000 UTC m=+0.140683793 container init 24382545cf89d9b6fcd54df41952d273aa057641f50df7671255f7679206829f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mirzakhani, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:11:16 np0005558241 podman[393444]: 2025-12-13 09:11:16.02875229 +0000 UTC m=+0.146376990 container start 24382545cf89d9b6fcd54df41952d273aa057641f50df7671255f7679206829f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mirzakhani, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:11:16 np0005558241 thirsty_mirzakhani[393460]: 167 167
Dec 13 04:11:16 np0005558241 podman[393444]: 2025-12-13 09:11:16.032057725 +0000 UTC m=+0.149682425 container attach 24382545cf89d9b6fcd54df41952d273aa057641f50df7671255f7679206829f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mirzakhani, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:11:16 np0005558241 systemd[1]: libpod-24382545cf89d9b6fcd54df41952d273aa057641f50df7671255f7679206829f.scope: Deactivated successfully.
Dec 13 04:11:16 np0005558241 podman[393444]: 2025-12-13 09:11:16.032723652 +0000 UTC m=+0.150348352 container died 24382545cf89d9b6fcd54df41952d273aa057641f50df7671255f7679206829f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:11:16 np0005558241 systemd[1]: var-lib-containers-storage-overlay-90a6c0f478a41092c61225cf1b999d299a400f775480c20392903f124cefa74f-merged.mount: Deactivated successfully.
Dec 13 04:11:16 np0005558241 podman[393444]: 2025-12-13 09:11:16.081650335 +0000 UTC m=+0.199275035 container remove 24382545cf89d9b6fcd54df41952d273aa057641f50df7671255f7679206829f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:11:16 np0005558241 systemd[1]: libpod-conmon-24382545cf89d9b6fcd54df41952d273aa057641f50df7671255f7679206829f.scope: Deactivated successfully.
Dec 13 04:11:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:16 np0005558241 podman[393483]: 2025-12-13 09:11:16.255625406 +0000 UTC m=+0.048124144 container create 53fffc89be58822869d59293da6dc0da4a81a3335c760551fb3477d4cc3d4faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_driscoll, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 04:11:16 np0005558241 systemd[1]: Started libpod-conmon-53fffc89be58822869d59293da6dc0da4a81a3335c760551fb3477d4cc3d4faa.scope.
Dec 13 04:11:16 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:11:16 np0005558241 podman[393483]: 2025-12-13 09:11:16.235932747 +0000 UTC m=+0.028431505 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:11:16 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78482cc95833915a4f3a2d32d37a39c9b9c0e9e22ded98de72d9b13ca8d20a17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:16 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78482cc95833915a4f3a2d32d37a39c9b9c0e9e22ded98de72d9b13ca8d20a17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:16 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78482cc95833915a4f3a2d32d37a39c9b9c0e9e22ded98de72d9b13ca8d20a17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:16 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78482cc95833915a4f3a2d32d37a39c9b9c0e9e22ded98de72d9b13ca8d20a17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:16 np0005558241 podman[393483]: 2025-12-13 09:11:16.351646044 +0000 UTC m=+0.144144812 container init 53fffc89be58822869d59293da6dc0da4a81a3335c760551fb3477d4cc3d4faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 04:11:16 np0005558241 podman[393483]: 2025-12-13 09:11:16.358228274 +0000 UTC m=+0.150727032 container start 53fffc89be58822869d59293da6dc0da4a81a3335c760551fb3477d4cc3d4faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:11:16 np0005558241 podman[393483]: 2025-12-13 09:11:16.361495368 +0000 UTC m=+0.153994086 container attach 53fffc89be58822869d59293da6dc0da4a81a3335c760551fb3477d4cc3d4faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_driscoll, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]: {
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:    "0": [
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:        {
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "devices": [
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "/dev/loop3"
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            ],
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "lv_name": "ceph_lv0",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "lv_size": "21470642176",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "name": "ceph_lv0",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "tags": {
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.cluster_name": "ceph",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.crush_device_class": "",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.encrypted": "0",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.objectstore": "bluestore",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.osd_id": "0",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.type": "block",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.vdo": "0",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.with_tpm": "0"
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            },
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "type": "block",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "vg_name": "ceph_vg0"
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:        }
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:    ],
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:    "1": [
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:        {
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "devices": [
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "/dev/loop4"
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            ],
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "lv_name": "ceph_lv1",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "lv_size": "21470642176",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "name": "ceph_lv1",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "tags": {
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.cluster_name": "ceph",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.crush_device_class": "",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.encrypted": "0",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.objectstore": "bluestore",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.osd_id": "1",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.type": "block",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.vdo": "0",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.with_tpm": "0"
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            },
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "type": "block",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "vg_name": "ceph_vg1"
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:        }
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:    ],
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:    "2": [
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:        {
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "devices": [
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "/dev/loop5"
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            ],
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "lv_name": "ceph_lv2",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "lv_size": "21470642176",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "name": "ceph_lv2",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "tags": {
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.cluster_name": "ceph",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.crush_device_class": "",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.encrypted": "0",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.objectstore": "bluestore",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.osd_id": "2",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.type": "block",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.vdo": "0",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:                "ceph.with_tpm": "0"
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            },
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "type": "block",
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:            "vg_name": "ceph_vg2"
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:        }
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]:    ]
Dec 13 04:11:16 np0005558241 tender_driscoll[393502]: }
Dec 13 04:11:16 np0005558241 systemd[1]: libpod-53fffc89be58822869d59293da6dc0da4a81a3335c760551fb3477d4cc3d4faa.scope: Deactivated successfully.
Dec 13 04:11:16 np0005558241 podman[393483]: 2025-12-13 09:11:16.729444726 +0000 UTC m=+0.521943474 container died 53fffc89be58822869d59293da6dc0da4a81a3335c760551fb3477d4cc3d4faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_driscoll, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:11:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3305: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 107 KiB/s wr, 19 op/s
Dec 13 04:11:16 np0005558241 systemd[1]: var-lib-containers-storage-overlay-78482cc95833915a4f3a2d32d37a39c9b9c0e9e22ded98de72d9b13ca8d20a17-merged.mount: Deactivated successfully.
Dec 13 04:11:16 np0005558241 podman[393483]: 2025-12-13 09:11:16.788951002 +0000 UTC m=+0.581449730 container remove 53fffc89be58822869d59293da6dc0da4a81a3335c760551fb3477d4cc3d4faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_driscoll, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:11:16 np0005558241 systemd[1]: libpod-conmon-53fffc89be58822869d59293da6dc0da4a81a3335c760551fb3477d4cc3d4faa.scope: Deactivated successfully.
Dec 13 04:11:16 np0005558241 nova_compute[248510]: 2025-12-13 09:11:16.975 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:11:17 np0005558241 podman[393585]: 2025-12-13 09:11:17.281722601 +0000 UTC m=+0.042103848 container create d1408b28844e628f705e8b7f09e5d4d1d438b353fe2c74634fb3ab4e6ad3bced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_satoshi, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 04:11:17 np0005558241 systemd[1]: Started libpod-conmon-d1408b28844e628f705e8b7f09e5d4d1d438b353fe2c74634fb3ab4e6ad3bced.scope.
Dec 13 04:11:17 np0005558241 podman[393585]: 2025-12-13 09:11:17.261779996 +0000 UTC m=+0.022161283 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:11:17 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:11:17 np0005558241 podman[393585]: 2025-12-13 09:11:17.388563989 +0000 UTC m=+0.148945316 container init d1408b28844e628f705e8b7f09e5d4d1d438b353fe2c74634fb3ab4e6ad3bced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_satoshi, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 04:11:17 np0005558241 podman[393585]: 2025-12-13 09:11:17.395960099 +0000 UTC m=+0.156341336 container start d1408b28844e628f705e8b7f09e5d4d1d438b353fe2c74634fb3ab4e6ad3bced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_satoshi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:11:17 np0005558241 podman[393585]: 2025-12-13 09:11:17.39945819 +0000 UTC m=+0.159839457 container attach d1408b28844e628f705e8b7f09e5d4d1d438b353fe2c74634fb3ab4e6ad3bced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_satoshi, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:11:17 np0005558241 laughing_satoshi[393602]: 167 167
Dec 13 04:11:17 np0005558241 systemd[1]: libpod-d1408b28844e628f705e8b7f09e5d4d1d438b353fe2c74634fb3ab4e6ad3bced.scope: Deactivated successfully.
Dec 13 04:11:17 np0005558241 podman[393585]: 2025-12-13 09:11:17.401619956 +0000 UTC m=+0.162001233 container died d1408b28844e628f705e8b7f09e5d4d1d438b353fe2c74634fb3ab4e6ad3bced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_satoshi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 04:11:17 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d11c7ed76ba4296c320b7cc36c6fa16d2ad2049192036b833b55b09f55a640cb-merged.mount: Deactivated successfully.
Dec 13 04:11:17 np0005558241 podman[393585]: 2025-12-13 09:11:17.456635216 +0000 UTC m=+0.217016493 container remove d1408b28844e628f705e8b7f09e5d4d1d438b353fe2c74634fb3ab4e6ad3bced (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_satoshi, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 04:11:17 np0005558241 systemd[1]: libpod-conmon-d1408b28844e628f705e8b7f09e5d4d1d438b353fe2c74634fb3ab4e6ad3bced.scope: Deactivated successfully.
Dec 13 04:11:17 np0005558241 podman[393625]: 2025-12-13 09:11:17.7179434 +0000 UTC m=+0.073032316 container create bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 04:11:17 np0005558241 systemd[1]: Started libpod-conmon-bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1.scope.
Dec 13 04:11:17 np0005558241 podman[393625]: 2025-12-13 09:11:17.690732498 +0000 UTC m=+0.045821484 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:11:17 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:11:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5229feeec5708b75d0794c4ea51426f325d50f30a7c799be574a188fa58d230/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5229feeec5708b75d0794c4ea51426f325d50f30a7c799be574a188fa58d230/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5229feeec5708b75d0794c4ea51426f325d50f30a7c799be574a188fa58d230/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:17 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5229feeec5708b75d0794c4ea51426f325d50f30a7c799be574a188fa58d230/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:17 np0005558241 podman[393625]: 2025-12-13 09:11:17.81982839 +0000 UTC m=+0.174917316 container init bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 04:11:17 np0005558241 podman[393625]: 2025-12-13 09:11:17.834400616 +0000 UTC m=+0.189489552 container start bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_yonath, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:11:17 np0005558241 podman[393625]: 2025-12-13 09:11:17.839932809 +0000 UTC m=+0.195021745 container attach bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_yonath, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:11:18 np0005558241 lvm[393720]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:11:18 np0005558241 lvm[393720]: VG ceph_vg0 finished
Dec 13 04:11:18 np0005558241 lvm[393721]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:11:18 np0005558241 lvm[393721]: VG ceph_vg1 finished
Dec 13 04:11:18 np0005558241 lvm[393723]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:11:18 np0005558241 lvm[393723]: VG ceph_vg2 finished
Dec 13 04:11:18 np0005558241 lvm[393724]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:11:18 np0005558241 lvm[393724]: VG ceph_vg0 finished
Dec 13 04:11:18 np0005558241 nice_yonath[393642]: {}
Dec 13 04:11:18 np0005558241 systemd[1]: libpod-bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1.scope: Deactivated successfully.
Dec 13 04:11:18 np0005558241 systemd[1]: libpod-bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1.scope: Consumed 1.391s CPU time.
Dec 13 04:11:18 np0005558241 podman[393625]: 2025-12-13 09:11:18.743928722 +0000 UTC m=+1.099017668 container died bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:11:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3306: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 107 KiB/s wr, 19 op/s
Dec 13 04:11:18 np0005558241 nova_compute[248510]: 2025-12-13 09:11:18.760 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:18 np0005558241 nova_compute[248510]: 2025-12-13 09:11:18.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:11:18 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b5229feeec5708b75d0794c4ea51426f325d50f30a7c799be574a188fa58d230-merged.mount: Deactivated successfully.
Dec 13 04:11:18 np0005558241 podman[393625]: 2025-12-13 09:11:18.804030603 +0000 UTC m=+1.159119519 container remove bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_yonath, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 04:11:18 np0005558241 systemd[1]: libpod-conmon-bdae40daa189007b78f1d3c3bd09f63ecb1164e3e880b6e632a368431fe3aaa1.scope: Deactivated successfully.
Dec 13 04:11:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:11:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:11:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:11:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:11:18 np0005558241 nova_compute[248510]: 2025-12-13 09:11:18.933 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:11:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:11:19 np0005558241 nova_compute[248510]: 2025-12-13 09:11:19.940 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:11:19 np0005558241 nova_compute[248510]: 2025-12-13 09:11:19.941 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:11:19 np0005558241 nova_compute[248510]: 2025-12-13 09:11:19.965 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:11:20 np0005558241 nova_compute[248510]: 2025-12-13 09:11:20.094 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:11:20 np0005558241 nova_compute[248510]: 2025-12-13 09:11:20.095 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:11:20 np0005558241 nova_compute[248510]: 2025-12-13 09:11:20.102 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:11:20 np0005558241 nova_compute[248510]: 2025-12-13 09:11:20.103 248514 INFO nova.compute.claims [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:11:20 np0005558241 nova_compute[248510]: 2025-12-13 09:11:20.311 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:11:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3307: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 100 KiB/s wr, 8 op/s
Dec 13 04:11:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:11:20 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2212060461' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:11:20 np0005558241 nova_compute[248510]: 2025-12-13 09:11:20.881 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:11:20 np0005558241 nova_compute[248510]: 2025-12-13 09:11:20.888 248514 DEBUG nova.compute.provider_tree [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:11:20 np0005558241 nova_compute[248510]: 2025-12-13 09:11:20.908 248514 DEBUG nova.scheduler.client.report [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:11:20 np0005558241 nova_compute[248510]: 2025-12-13 09:11:20.941 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:11:20 np0005558241 nova_compute[248510]: 2025-12-13 09:11:20.942 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:11:21 np0005558241 nova_compute[248510]: 2025-12-13 09:11:21.003 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:11:21 np0005558241 nova_compute[248510]: 2025-12-13 09:11:21.004 248514 DEBUG nova.network.neutron [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:11:21 np0005558241 nova_compute[248510]: 2025-12-13 09:11:21.147 248514 INFO nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:11:21 np0005558241 nova_compute[248510]: 2025-12-13 09:11:21.183 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:11:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:21 np0005558241 nova_compute[248510]: 2025-12-13 09:11:21.301 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:11:21 np0005558241 nova_compute[248510]: 2025-12-13 09:11:21.303 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:11:21 np0005558241 nova_compute[248510]: 2025-12-13 09:11:21.303 248514 INFO nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Creating image(s)#033[00m
Dec 13 04:11:21 np0005558241 nova_compute[248510]: 2025-12-13 09:11:21.343 248514 DEBUG nova.storage.rbd_utils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:11:21 np0005558241 nova_compute[248510]: 2025-12-13 09:11:21.381 248514 DEBUG nova.storage.rbd_utils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:11:21 np0005558241 nova_compute[248510]: 2025-12-13 09:11:21.417 248514 DEBUG nova.storage.rbd_utils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:11:21 np0005558241 nova_compute[248510]: 2025-12-13 09:11:21.422 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007738589215823556 of space, bias 1.0, pg target 0.23215767647470667 quantized to 32 (current 32)
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697360151345169 of space, bias 1.0, pg target 0.20092080454035507 quantized to 32 (current 32)
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:11:21 np0005558241 nova_compute[248510]: 2025-12-13 09:11:21.488 248514 DEBUG nova.policy [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:11:21 np0005558241 nova_compute[248510]: 2025-12-13 09:11:21.534 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:11:21 np0005558241 nova_compute[248510]: 2025-12-13 09:11:21.536 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:11:21 np0005558241 nova_compute[248510]: 2025-12-13 09:11:21.537 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:11:21 np0005558241 nova_compute[248510]: 2025-12-13 09:11:21.537 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:11:21 np0005558241 nova_compute[248510]: 2025-12-13 09:11:21.567 248514 DEBUG nova.storage.rbd_utils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:11:21 np0005558241 nova_compute[248510]: 2025-12-13 09:11:21.576 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:11:22 np0005558241 nova_compute[248510]: 2025-12-13 09:11:22.023 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:11:22 np0005558241 nova_compute[248510]: 2025-12-13 09:11:22.096 248514 DEBUG nova.storage.rbd_utils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:11:22 np0005558241 nova_compute[248510]: 2025-12-13 09:11:22.177 248514 DEBUG nova.objects.instance [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 0d2b3fb1-4256-4dd2-bbba-188212ddd10e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:11:22 np0005558241 nova_compute[248510]: 2025-12-13 09:11:22.203 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:11:22 np0005558241 nova_compute[248510]: 2025-12-13 09:11:22.203 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Ensure instance console log exists: /var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:11:22 np0005558241 nova_compute[248510]: 2025-12-13 09:11:22.204 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:11:22 np0005558241 nova_compute[248510]: 2025-12-13 09:11:22.204 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:11:22 np0005558241 nova_compute[248510]: 2025-12-13 09:11:22.205 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:11:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3308: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Dec 13 04:11:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:22.839 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:11:22 np0005558241 nova_compute[248510]: 2025-12-13 09:11:22.839 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:22 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:22.840 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:11:23 np0005558241 nova_compute[248510]: 2025-12-13 09:11:23.070 248514 DEBUG nova.network.neutron [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Successfully created port: 10f9c496-a049-40c4-a29b-fe4279219d91 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:11:23 np0005558241 nova_compute[248510]: 2025-12-13 09:11:23.764 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:23 np0005558241 nova_compute[248510]: 2025-12-13 09:11:23.935 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:24 np0005558241 podman[393954]: 2025-12-13 09:11:24.001346601 +0000 UTC m=+0.075353195 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 04:11:24 np0005558241 podman[393953]: 2025-12-13 09:11:24.001447494 +0000 UTC m=+0.083588338 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 04:11:24 np0005558241 podman[393952]: 2025-12-13 09:11:24.02145232 +0000 UTC m=+0.103874382 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 13 04:11:24 np0005558241 nova_compute[248510]: 2025-12-13 09:11:24.225 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:11:24 np0005558241 nova_compute[248510]: 2025-12-13 09:11:24.276 248514 WARNING nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.#033[00m
Dec 13 04:11:24 np0005558241 nova_compute[248510]: 2025-12-13 09:11:24.276 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 13 04:11:24 np0005558241 nova_compute[248510]: 2025-12-13 09:11:24.276 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Triggering sync for uuid 0d2b3fb1-4256-4dd2-bbba-188212ddd10e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 13 04:11:24 np0005558241 nova_compute[248510]: 2025-12-13 09:11:24.277 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:11:24 np0005558241 nova_compute[248510]: 2025-12-13 09:11:24.278 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:11:24 np0005558241 nova_compute[248510]: 2025-12-13 09:11:24.278 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:11:24 np0005558241 nova_compute[248510]: 2025-12-13 09:11:24.309 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:11:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3309: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:11:24 np0005558241 nova_compute[248510]: 2025-12-13 09:11:24.826 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:11:24 np0005558241 nova_compute[248510]: 2025-12-13 09:11:24.826 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:11:24 np0005558241 nova_compute[248510]: 2025-12-13 09:11:24.826 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:11:24 np0005558241 nova_compute[248510]: 2025-12-13 09:11:24.856 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec 13 04:11:24 np0005558241 nova_compute[248510]: 2025-12-13 09:11:24.961 248514 DEBUG nova.network.neutron [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Successfully created port: 47c08f82-5e0f-4d58-bc8a-5c049f6846fd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:11:25 np0005558241 nova_compute[248510]: 2025-12-13 09:11:25.095 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:11:25 np0005558241 nova_compute[248510]: 2025-12-13 09:11:25.096 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:11:25 np0005558241 nova_compute[248510]: 2025-12-13 09:11:25.096 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 04:11:25 np0005558241 nova_compute[248510]: 2025-12-13 09:11:25.096 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:11:26 np0005558241 nova_compute[248510]: 2025-12-13 09:11:26.153 248514 DEBUG nova.network.neutron [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Successfully updated port: 10f9c496-a049-40c4-a29b-fe4279219d91 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:11:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:26 np0005558241 nova_compute[248510]: 2025-12-13 09:11:26.342 248514 DEBUG nova.compute.manager [req-d8ca84c2-088e-49a6-8663-27f9993d6b6f req-76c66f92-02cd-40d1-a9c5-be58bc2fa812 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-changed-10f9c496-a049-40c4-a29b-fe4279219d91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:11:26 np0005558241 nova_compute[248510]: 2025-12-13 09:11:26.342 248514 DEBUG nova.compute.manager [req-d8ca84c2-088e-49a6-8663-27f9993d6b6f req-76c66f92-02cd-40d1-a9c5-be58bc2fa812 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Refreshing instance network info cache due to event network-changed-10f9c496-a049-40c4-a29b-fe4279219d91. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:11:26 np0005558241 nova_compute[248510]: 2025-12-13 09:11:26.343 248514 DEBUG oslo_concurrency.lockutils [req-d8ca84c2-088e-49a6-8663-27f9993d6b6f req-76c66f92-02cd-40d1-a9c5-be58bc2fa812 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:11:26 np0005558241 nova_compute[248510]: 2025-12-13 09:11:26.343 248514 DEBUG oslo_concurrency.lockutils [req-d8ca84c2-088e-49a6-8663-27f9993d6b6f req-76c66f92-02cd-40d1-a9c5-be58bc2fa812 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:11:26 np0005558241 nova_compute[248510]: 2025-12-13 09:11:26.343 248514 DEBUG nova.network.neutron [req-d8ca84c2-088e-49a6-8663-27f9993d6b6f req-76c66f92-02cd-40d1-a9c5-be58bc2fa812 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Refreshing network info cache for port 10f9c496-a049-40c4-a29b-fe4279219d91 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:11:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3310: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:11:26 np0005558241 nova_compute[248510]: 2025-12-13 09:11:26.768 248514 DEBUG nova.network.neutron [req-d8ca84c2-088e-49a6-8663-27f9993d6b6f req-76c66f92-02cd-40d1-a9c5-be58bc2fa812 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:11:27 np0005558241 nova_compute[248510]: 2025-12-13 09:11:27.162 248514 DEBUG nova.network.neutron [req-d8ca84c2-088e-49a6-8663-27f9993d6b6f req-76c66f92-02cd-40d1-a9c5-be58bc2fa812 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:11:27 np0005558241 nova_compute[248510]: 2025-12-13 09:11:27.194 248514 DEBUG oslo_concurrency.lockutils [req-d8ca84c2-088e-49a6-8663-27f9993d6b6f req-76c66f92-02cd-40d1-a9c5-be58bc2fa812 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:11:27 np0005558241 nova_compute[248510]: 2025-12-13 09:11:27.226 248514 DEBUG nova.network.neutron [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Successfully updated port: 47c08f82-5e0f-4d58-bc8a-5c049f6846fd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:11:27 np0005558241 nova_compute[248510]: 2025-12-13 09:11:27.247 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:11:27 np0005558241 nova_compute[248510]: 2025-12-13 09:11:27.247 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:11:27 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 04:11:27 np0005558241 nova_compute[248510]: 2025-12-13 09:11:27.248 248514 DEBUG nova.network.neutron [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:11:27 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 04:11:27 np0005558241 nova_compute[248510]: 2025-12-13 09:11:27.493 248514 DEBUG nova.network.neutron [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:11:28 np0005558241 nova_compute[248510]: 2025-12-13 09:11:28.257 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updating instance_info_cache with network_info: [{"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:11:28 np0005558241 nova_compute[248510]: 2025-12-13 09:11:28.295 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:11:28 np0005558241 nova_compute[248510]: 2025-12-13 09:11:28.296 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 04:11:28 np0005558241 nova_compute[248510]: 2025-12-13 09:11:28.300 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:11:28 np0005558241 nova_compute[248510]: 2025-12-13 09:11:28.457 248514 DEBUG nova.compute.manager [req-3762d896-d550-4b2a-8b7d-c0ac3e4b05a3 req-acb043e2-85e3-49a4-b593-c0bb6ac9c9bb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-changed-47c08f82-5e0f-4d58-bc8a-5c049f6846fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:11:28 np0005558241 nova_compute[248510]: 2025-12-13 09:11:28.458 248514 DEBUG nova.compute.manager [req-3762d896-d550-4b2a-8b7d-c0ac3e4b05a3 req-acb043e2-85e3-49a4-b593-c0bb6ac9c9bb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Refreshing instance network info cache due to event network-changed-47c08f82-5e0f-4d58-bc8a-5c049f6846fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:11:28 np0005558241 nova_compute[248510]: 2025-12-13 09:11:28.459 248514 DEBUG oslo_concurrency.lockutils [req-3762d896-d550-4b2a-8b7d-c0ac3e4b05a3 req-acb043e2-85e3-49a4-b593-c0bb6ac9c9bb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:11:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3311: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:11:28 np0005558241 nova_compute[248510]: 2025-12-13 09:11:28.765 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:28 np0005558241 nova_compute[248510]: 2025-12-13 09:11:28.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:11:28 np0005558241 nova_compute[248510]: 2025-12-13 09:11:28.831 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:11:28 np0005558241 nova_compute[248510]: 2025-12-13 09:11:28.832 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:11:28 np0005558241 nova_compute[248510]: 2025-12-13 09:11:28.833 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:11:28 np0005558241 nova_compute[248510]: 2025-12-13 09:11:28.833 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:11:28 np0005558241 nova_compute[248510]: 2025-12-13 09:11:28.834 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:11:28 np0005558241 nova_compute[248510]: 2025-12-13 09:11:28.937 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:11:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2637127471' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.393 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.484 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.485 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.711 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.712 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3299MB free_disk=59.921106703579426GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.712 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.712 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.807 248514 DEBUG nova.network.neutron [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updating instance_info_cache with network_info: [{"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.829 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.829 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 0d2b3fb1-4256-4dd2-bbba-188212ddd10e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.830 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.830 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:11:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:29.843 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.862 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.863 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Instance network_info: |[{"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.864 248514 DEBUG oslo_concurrency.lockutils [req-3762d896-d550-4b2a-8b7d-c0ac3e4b05a3 req-acb043e2-85e3-49a4-b593-c0bb6ac9c9bb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.864 248514 DEBUG nova.network.neutron [req-3762d896-d550-4b2a-8b7d-c0ac3e4b05a3 req-acb043e2-85e3-49a4-b593-c0bb6ac9c9bb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Refreshing network info cache for port 47c08f82-5e0f-4d58-bc8a-5c049f6846fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.871 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Start _get_guest_xml network_info=[{"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.884 248514 WARNING nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.890 248514 DEBUG nova.virt.libvirt.host [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.891 248514 DEBUG nova.virt.libvirt.host [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.904 248514 DEBUG nova.virt.libvirt.host [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.905 248514 DEBUG nova.virt.libvirt.host [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.906 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.906 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.907 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.908 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.908 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.908 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.909 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.909 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.910 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.910 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.911 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.911 248514 DEBUG nova.virt.hardware [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.916 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:11:29 np0005558241 nova_compute[248510]: 2025-12-13 09:11:29.973 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:11:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:11:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/516835256' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:11:30 np0005558241 nova_compute[248510]: 2025-12-13 09:11:30.597 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:11:30 np0005558241 nova_compute[248510]: 2025-12-13 09:11:30.603 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:11:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:11:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1536569844' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:11:30 np0005558241 nova_compute[248510]: 2025-12-13 09:11:30.633 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.717s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:11:30 np0005558241 nova_compute[248510]: 2025-12-13 09:11:30.658 248514 DEBUG nova.storage.rbd_utils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:11:30 np0005558241 nova_compute[248510]: 2025-12-13 09:11:30.663 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:11:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3312: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:11:30 np0005558241 nova_compute[248510]: 2025-12-13 09:11:30.812 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:11:30 np0005558241 nova_compute[248510]: 2025-12-13 09:11:30.843 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:11:30 np0005558241 nova_compute[248510]: 2025-12-13 09:11:30.843 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:11:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:11:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2965586814' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.242 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.245 248514 DEBUG nova.virt.libvirt.vif [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1965828306',display_name='tempest-TestGettingAddress-server-1965828306',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1965828306',id=142,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-8ma5x5dk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:11:21Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=0d2b3fb1-4256-4dd2-bbba-188212ddd10e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.245 248514 DEBUG nova.network.os_vif_util [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.247 248514 DEBUG nova.network.os_vif_util [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:00:ab,bridge_name='br-int',has_traffic_filtering=True,id=10f9c496-a049-40c4-a29b-fe4279219d91,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f9c496-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.249 248514 DEBUG nova.virt.libvirt.vif [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1965828306',display_name='tempest-TestGettingAddress-server-1965828306',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1965828306',id=142,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-8ma5x5dk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:11:21Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=0d2b3fb1-4256-4dd2-bbba-188212ddd10e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.249 248514 DEBUG nova.network.os_vif_util [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.250 248514 DEBUG nova.network.os_vif_util [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:7e:4a,bridge_name='br-int',has_traffic_filtering=True,id=47c08f82-5e0f-4d58-bc8a-5c049f6846fd,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47c08f82-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.252 248514 DEBUG nova.objects.instance [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0d2b3fb1-4256-4dd2-bbba-188212ddd10e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.330 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:11:31 np0005558241 nova_compute[248510]:  <uuid>0d2b3fb1-4256-4dd2-bbba-188212ddd10e</uuid>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:  <name>instance-0000008e</name>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestGettingAddress-server-1965828306</nova:name>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:11:29</nova:creationTime>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:11:31 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:        <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:        <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:        <nova:port uuid="10f9c496-a049-40c4-a29b-fe4279219d91">
Dec 13 04:11:31 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:        <nova:port uuid="47c08f82-5e0f-4d58-bc8a-5c049f6846fd">
Dec 13 04:11:31 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe39:7e4a" ipVersion="6"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe39:7e4a" ipVersion="6"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <entry name="serial">0d2b3fb1-4256-4dd2-bbba-188212ddd10e</entry>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <entry name="uuid">0d2b3fb1-4256-4dd2-bbba-188212ddd10e</entry>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk">
Dec 13 04:11:31 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:11:31 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk.config">
Dec 13 04:11:31 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:11:31 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:5f:00:ab"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <target dev="tap10f9c496-a0"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:39:7e:4a"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <target dev="tap47c08f82-5e"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e/console.log" append="off"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:11:31 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:11:31 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:11:31 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:11:31 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.332 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Preparing to wait for external event network-vif-plugged-10f9c496-a049-40c4-a29b-fe4279219d91 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.332 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.333 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.333 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.334 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Preparing to wait for external event network-vif-plugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.334 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.335 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.335 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.337 248514 DEBUG nova.virt.libvirt.vif [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1965828306',display_name='tempest-TestGettingAddress-server-1965828306',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1965828306',id=142,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-8ma5x5dk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:11:21Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=0d2b3fb1-4256-4dd2-bbba-188212ddd10e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.337 248514 DEBUG nova.network.os_vif_util [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.338 248514 DEBUG nova.network.os_vif_util [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:00:ab,bridge_name='br-int',has_traffic_filtering=True,id=10f9c496-a049-40c4-a29b-fe4279219d91,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f9c496-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.340 248514 DEBUG os_vif [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:00:ab,bridge_name='br-int',has_traffic_filtering=True,id=10f9c496-a049-40c4-a29b-fe4279219d91,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f9c496-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.341 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.343 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.343 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.354 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.355 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10f9c496-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.356 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap10f9c496-a0, col_values=(('external_ids', {'iface-id': '10f9c496-a049-40c4-a29b-fe4279219d91', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:00:ab', 'vm-uuid': '0d2b3fb1-4256-4dd2-bbba-188212ddd10e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.359 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:31 np0005558241 NetworkManager[50376]: <info>  [1765617091.3606] manager: (tap10f9c496-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/619)
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.362 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.369 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.370 248514 INFO os_vif [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:00:ab,bridge_name='br-int',has_traffic_filtering=True,id=10f9c496-a049-40c4-a29b-fe4279219d91,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f9c496-a0')#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.372 248514 DEBUG nova.virt.libvirt.vif [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1965828306',display_name='tempest-TestGettingAddress-server-1965828306',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1965828306',id=142,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-8ma5x5dk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:11:21Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=0d2b3fb1-4256-4dd2-bbba-188212ddd10e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.373 248514 DEBUG nova.network.os_vif_util [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.374 248514 DEBUG nova.network.os_vif_util [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:7e:4a,bridge_name='br-int',has_traffic_filtering=True,id=47c08f82-5e0f-4d58-bc8a-5c049f6846fd,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47c08f82-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.375 248514 DEBUG os_vif [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7e:4a,bridge_name='br-int',has_traffic_filtering=True,id=47c08f82-5e0f-4d58-bc8a-5c049f6846fd,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47c08f82-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.376 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.376 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.377 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.380 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.380 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47c08f82-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.381 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap47c08f82-5e, col_values=(('external_ids', {'iface-id': '47c08f82-5e0f-4d58-bc8a-5c049f6846fd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:7e:4a', 'vm-uuid': '0d2b3fb1-4256-4dd2-bbba-188212ddd10e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.382 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:31 np0005558241 NetworkManager[50376]: <info>  [1765617091.3835] manager: (tap47c08f82-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/620)
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.385 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.393 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.394 248514 INFO os_vif [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7e:4a,bridge_name='br-int',has_traffic_filtering=True,id=47c08f82-5e0f-4d58-bc8a-5c049f6846fd,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47c08f82-5e')#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.517 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.518 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.518 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:5f:00:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.519 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:39:7e:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.520 248514 INFO nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Using config drive#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.549 248514 DEBUG nova.storage.rbd_utils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.846 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.846 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:11:31 np0005558241 nova_compute[248510]: 2025-12-13 09:11:31.847 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:11:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3313: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:11:32 np0005558241 nova_compute[248510]: 2025-12-13 09:11:32.942 248514 INFO nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Creating config drive at /var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e/disk.config#033[00m
Dec 13 04:11:32 np0005558241 nova_compute[248510]: 2025-12-13 09:11:32.951 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn0qcoyc6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:11:33 np0005558241 nova_compute[248510]: 2025-12-13 09:11:33.009 248514 DEBUG nova.network.neutron [req-3762d896-d550-4b2a-8b7d-c0ac3e4b05a3 req-acb043e2-85e3-49a4-b593-c0bb6ac9c9bb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updated VIF entry in instance network info cache for port 47c08f82-5e0f-4d58-bc8a-5c049f6846fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:11:33 np0005558241 nova_compute[248510]: 2025-12-13 09:11:33.011 248514 DEBUG nova.network.neutron [req-3762d896-d550-4b2a-8b7d-c0ac3e4b05a3 req-acb043e2-85e3-49a4-b593-c0bb6ac9c9bb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updating instance_info_cache with network_info: [{"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": 
[], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:11:33 np0005558241 nova_compute[248510]: 2025-12-13 09:11:33.036 248514 DEBUG oslo_concurrency.lockutils [req-3762d896-d550-4b2a-8b7d-c0ac3e4b05a3 req-acb043e2-85e3-49a4-b593-c0bb6ac9c9bb 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:11:33 np0005558241 nova_compute[248510]: 2025-12-13 09:11:33.118 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn0qcoyc6" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:11:33 np0005558241 nova_compute[248510]: 2025-12-13 09:11:33.160 248514 DEBUG nova.storage.rbd_utils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:11:33 np0005558241 nova_compute[248510]: 2025-12-13 09:11:33.165 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e/disk.config 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:11:33 np0005558241 nova_compute[248510]: 2025-12-13 09:11:33.577 248514 DEBUG oslo_concurrency.processutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e/disk.config 0d2b3fb1-4256-4dd2-bbba-188212ddd10e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:11:33 np0005558241 nova_compute[248510]: 2025-12-13 09:11:33.578 248514 INFO nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Deleting local config drive /var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e/disk.config because it was imported into RBD.#033[00m
Dec 13 04:11:33 np0005558241 NetworkManager[50376]: <info>  [1765617093.6539] manager: (tap10f9c496-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/621)
Dec 13 04:11:33 np0005558241 kernel: tap10f9c496-a0: entered promiscuous mode
Dec 13 04:11:33 np0005558241 nova_compute[248510]: 2025-12-13 09:11:33.658 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:33 np0005558241 ovn_controller[148476]: 2025-12-13T09:11:33Z|01491|binding|INFO|Claiming lport 10f9c496-a049-40c4-a29b-fe4279219d91 for this chassis.
Dec 13 04:11:33 np0005558241 ovn_controller[148476]: 2025-12-13T09:11:33Z|01492|binding|INFO|10f9c496-a049-40c4-a29b-fe4279219d91: Claiming fa:16:3e:5f:00:ab 10.100.0.11
Dec 13 04:11:33 np0005558241 ovn_controller[148476]: 2025-12-13T09:11:33Z|01493|binding|INFO|Setting lport 10f9c496-a049-40c4-a29b-fe4279219d91 ovn-installed in OVS
Dec 13 04:11:33 np0005558241 kernel: tap47c08f82-5e: entered promiscuous mode
Dec 13 04:11:33 np0005558241 NetworkManager[50376]: <info>  [1765617093.6846] manager: (tap47c08f82-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/622)
Dec 13 04:11:33 np0005558241 ovn_controller[148476]: 2025-12-13T09:11:33Z|01494|if_status|INFO|Dropped 12 log messages in last 43 seconds (most recently, 43 seconds ago) due to excessive rate
Dec 13 04:11:33 np0005558241 ovn_controller[148476]: 2025-12-13T09:11:33Z|01495|if_status|INFO|Not updating pb chassis for 47c08f82-5e0f-4d58-bc8a-5c049f6846fd now as sb is readonly
Dec 13 04:11:33 np0005558241 nova_compute[248510]: 2025-12-13 09:11:33.685 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:33 np0005558241 nova_compute[248510]: 2025-12-13 09:11:33.687 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:33 np0005558241 systemd-udevd[394199]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:11:33 np0005558241 systemd-udevd[394201]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:11:33 np0005558241 NetworkManager[50376]: <info>  [1765617093.7007] device (tap10f9c496-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:11:33 np0005558241 NetworkManager[50376]: <info>  [1765617093.7016] device (tap10f9c496-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:11:33 np0005558241 NetworkManager[50376]: <info>  [1765617093.7021] device (tap47c08f82-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:11:33 np0005558241 NetworkManager[50376]: <info>  [1765617093.7028] device (tap47c08f82-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:11:33 np0005558241 nova_compute[248510]: 2025-12-13 09:11:33.705 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:33 np0005558241 nova_compute[248510]: 2025-12-13 09:11:33.707 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:33 np0005558241 systemd-machined[210538]: New machine qemu-173-instance-0000008e.
Dec 13 04:11:33 np0005558241 systemd[1]: Started Virtual Machine qemu-173-instance-0000008e.
Dec 13 04:11:33 np0005558241 nova_compute[248510]: 2025-12-13 09:11:33.768 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.073 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:00:ab 10.100.0.11'], port_security=['fa:16:3e:5f:00:ab 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0d2b3fb1-4256-4dd2-bbba-188212ddd10e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40eecf4d-448d-4e47-a0ea-dbd30bf32f62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44250a44-17ff-41cb-ae95-e575bff91a4a, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=10f9c496-a049-40c4-a29b-fe4279219d91) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.074 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 10f9c496-a049-40c4-a29b-fe4279219d91 in datapath a7065d5a-edce-4470-a56d-ab529d56aa3c bound to our chassis#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.075 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7065d5a-edce-4470-a56d-ab529d56aa3c#033[00m
Dec 13 04:11:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:11:34Z|01496|binding|INFO|Claiming lport 47c08f82-5e0f-4d58-bc8a-5c049f6846fd for this chassis.
Dec 13 04:11:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:11:34Z|01497|binding|INFO|47c08f82-5e0f-4d58-bc8a-5c049f6846fd: Claiming fa:16:3e:39:7e:4a 2001:db8:0:1:f816:3eff:fe39:7e4a 2001:db8::f816:3eff:fe39:7e4a
Dec 13 04:11:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:11:34Z|01498|binding|INFO|Setting lport 10f9c496-a049-40c4-a29b-fe4279219d91 up in Southbound
Dec 13 04:11:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:11:34Z|01499|binding|INFO|Setting lport 47c08f82-5e0f-4d58-bc8a-5c049f6846fd ovn-installed in OVS
Dec 13 04:11:34 np0005558241 nova_compute[248510]: 2025-12-13 09:11:34.078 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.101 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8b51fa-ef14-4484-91e9-6bf41c49f773]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.147 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2def0345-5218-45ff-a93d-69710f8263fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.151 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8836a3-a881-49bc-ba7f-ce5905407fe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.192 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a957350f-fee7-4b7c-b0b4-da414781b471]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.219 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8ebf96-033e-48c2-9e2a-75620c7101a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7065d5a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:e7:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 426], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962803, 'reachable_time': 25861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394218, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.242 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[64cfdff7-142b-46a1-a396-c8ea5b3544d5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa7065d5a-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 962816, 'tstamp': 962816}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394219, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa7065d5a-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 962820, 'tstamp': 962820}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394219, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.245 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7065d5a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:11:34 np0005558241 nova_compute[248510]: 2025-12-13 09:11:34.246 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:34 np0005558241 nova_compute[248510]: 2025-12-13 09:11:34.248 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.249 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7065d5a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.249 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.249 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7065d5a-e0, col_values=(('external_ids', {'iface-id': 'b83346dd-17c7-4f89-a540-efae6916310e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.250 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.297 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:7e:4a 2001:db8:0:1:f816:3eff:fe39:7e4a 2001:db8::f816:3eff:fe39:7e4a'], port_security=['fa:16:3e:39:7e:4a 2001:db8:0:1:f816:3eff:fe39:7e4a 2001:db8::f816:3eff:fe39:7e4a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe39:7e4a/64 2001:db8::f816:3eff:fe39:7e4a/64', 'neutron:device_id': '0d2b3fb1-4256-4dd2-bbba-188212ddd10e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40eecf4d-448d-4e47-a0ea-dbd30bf32f62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28a1c52d-6dd3-4202-a082-66c542a5a384, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=47c08f82-5e0f-4d58-bc8a-5c049f6846fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.298 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 47c08f82-5e0f-4d58-bc8a-5c049f6846fd in datapath 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c unbound from our chassis#033[00m
Dec 13 04:11:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:11:34Z|01500|binding|INFO|Setting lport 47c08f82-5e0f-4d58-bc8a-5c049f6846fd up in Southbound
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.300 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.329 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[26a62669-c33a-49b6-8149-870a9e82c21e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.369 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a763b7c9-acd4-4bfa-9d95-6add1ada7fb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.372 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b9295965-6624-447e-b305-cc6c15c4d40f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.411 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[733930f5-2226-425e-8eae-0b6e46cd8337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.438 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c9d47b-7929-4cec-9854-589e798c104a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7661cc7b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:a3:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 427], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962968, 'reachable_time': 41750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394225, 'error': None, 'target': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.461 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe47072-3aa4-42e0-91b4-a2db75967565]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7661cc7b-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 962982, 'tstamp': 962982}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394226, 'error': None, 'target': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.463 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7661cc7b-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:11:34 np0005558241 nova_compute[248510]: 2025-12-13 09:11:34.464 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:34 np0005558241 nova_compute[248510]: 2025-12-13 09:11:34.466 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.466 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7661cc7b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.466 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.466 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7661cc7b-f0, col_values=(('external_ids', {'iface-id': 'c9cb1492-8274-4506-acb8-65660baeb5cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:11:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:34.467 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:11:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3314: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 13 04:11:34 np0005558241 nova_compute[248510]: 2025-12-13 09:11:34.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:11:34 np0005558241 nova_compute[248510]: 2025-12-13 09:11:34.924 248514 DEBUG nova.compute.manager [req-8b02b0c4-6861-49c2-b6e4-989118e05dfa req-fe9d80bc-8c63-4be1-9f85-5d8f4eee19b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-plugged-10f9c496-a049-40c4-a29b-fe4279219d91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:11:34 np0005558241 nova_compute[248510]: 2025-12-13 09:11:34.925 248514 DEBUG oslo_concurrency.lockutils [req-8b02b0c4-6861-49c2-b6e4-989118e05dfa req-fe9d80bc-8c63-4be1-9f85-5d8f4eee19b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:11:34 np0005558241 nova_compute[248510]: 2025-12-13 09:11:34.925 248514 DEBUG oslo_concurrency.lockutils [req-8b02b0c4-6861-49c2-b6e4-989118e05dfa req-fe9d80bc-8c63-4be1-9f85-5d8f4eee19b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:11:34 np0005558241 nova_compute[248510]: 2025-12-13 09:11:34.926 248514 DEBUG oslo_concurrency.lockutils [req-8b02b0c4-6861-49c2-b6e4-989118e05dfa req-fe9d80bc-8c63-4be1-9f85-5d8f4eee19b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:11:34 np0005558241 nova_compute[248510]: 2025-12-13 09:11:34.926 248514 DEBUG nova.compute.manager [req-8b02b0c4-6861-49c2-b6e4-989118e05dfa req-fe9d80bc-8c63-4be1-9f85-5d8f4eee19b1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Processing event network-vif-plugged-10f9c496-a049-40c4-a29b-fe4279219d91 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:11:34 np0005558241 nova_compute[248510]: 2025-12-13 09:11:34.969 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617094.9688623, 0d2b3fb1-4256-4dd2-bbba-188212ddd10e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:11:34 np0005558241 nova_compute[248510]: 2025-12-13 09:11:34.970 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] VM Started (Lifecycle Event)#033[00m
Dec 13 04:11:35 np0005558241 nova_compute[248510]: 2025-12-13 09:11:35.272 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:11:35 np0005558241 nova_compute[248510]: 2025-12-13 09:11:35.277 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617094.969139, 0d2b3fb1-4256-4dd2-bbba-188212ddd10e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:11:35 np0005558241 nova_compute[248510]: 2025-12-13 09:11:35.278 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:11:35 np0005558241 nova_compute[248510]: 2025-12-13 09:11:35.304 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:11:35 np0005558241 nova_compute[248510]: 2025-12-13 09:11:35.308 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:11:35 np0005558241 nova_compute[248510]: 2025-12-13 09:11:35.353 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:11:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:36 np0005558241 nova_compute[248510]: 2025-12-13 09:11:36.384 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3315: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 511 B/s wr, 4 op/s
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.008 248514 DEBUG nova.compute.manager [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-plugged-10f9c496-a049-40c4-a29b-fe4279219d91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.008 248514 DEBUG oslo_concurrency.lockutils [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.009 248514 DEBUG oslo_concurrency.lockutils [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.009 248514 DEBUG oslo_concurrency.lockutils [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.009 248514 DEBUG nova.compute.manager [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] No event matching network-vif-plugged-10f9c496-a049-40c4-a29b-fe4279219d91 in dict_keys([('network-vif-plugged', '47c08f82-5e0f-4d58-bc8a-5c049f6846fd')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.009 248514 WARNING nova.compute.manager [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received unexpected event network-vif-plugged-10f9c496-a049-40c4-a29b-fe4279219d91 for instance with vm_state building and task_state spawning.#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.009 248514 DEBUG nova.compute.manager [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-plugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.009 248514 DEBUG oslo_concurrency.lockutils [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.010 248514 DEBUG oslo_concurrency.lockutils [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.010 248514 DEBUG oslo_concurrency.lockutils [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.010 248514 DEBUG nova.compute.manager [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Processing event network-vif-plugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.010 248514 DEBUG nova.compute.manager [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-plugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.010 248514 DEBUG oslo_concurrency.lockutils [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.010 248514 DEBUG oslo_concurrency.lockutils [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.011 248514 DEBUG oslo_concurrency.lockutils [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.011 248514 DEBUG nova.compute.manager [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] No waiting events found dispatching network-vif-plugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.011 248514 WARNING nova.compute.manager [req-af157ab8-34df-4240-9e2d-90bc441a83db req-112873f7-2f8c-494e-9e1a-5830d377a446 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received unexpected event network-vif-plugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd for instance with vm_state building and task_state spawning.#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.011 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.015 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617097.0153348, 0d2b3fb1-4256-4dd2-bbba-188212ddd10e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.015 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.017 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.019 248514 INFO nova.virt.libvirt.driver [-] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Instance spawned successfully.#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.019 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.043 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.048 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.051 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.051 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.051 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.052 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.052 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.052 248514 DEBUG nova.virt.libvirt.driver [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.196 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.207 248514 INFO nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Took 15.91 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.208 248514 DEBUG nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.279 248514 INFO nova.compute.manager [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Took 17.21 seconds to build instance.#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.302 248514 DEBUG oslo_concurrency.lockutils [None req-0a74db8c-67cc-4a7a-adfb-f9ea70657aa1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.361s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.302 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 13.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.303 248514 INFO nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.303 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:11:37 np0005558241 nova_compute[248510]: 2025-12-13 09:11:37.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:11:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3316: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 616 KiB/s rd, 14 KiB/s wr, 27 op/s
Dec 13 04:11:38 np0005558241 nova_compute[248510]: 2025-12-13 09:11:38.771 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:11:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:11:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:11:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:11:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:11:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:11:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3317: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 15 KiB/s wr, 67 op/s
Dec 13 04:11:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:41 np0005558241 nova_compute[248510]: 2025-12-13 09:11:41.387 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:41 np0005558241 nova_compute[248510]: 2025-12-13 09:11:41.863 248514 DEBUG nova.compute.manager [req-54bcd0c8-afb7-4a89-ae28-6abac30d6b22 req-d28927ed-1c3e-4af2-9c1b-be94d139e5db 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-changed-10f9c496-a049-40c4-a29b-fe4279219d91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:11:41 np0005558241 nova_compute[248510]: 2025-12-13 09:11:41.864 248514 DEBUG nova.compute.manager [req-54bcd0c8-afb7-4a89-ae28-6abac30d6b22 req-d28927ed-1c3e-4af2-9c1b-be94d139e5db 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Refreshing instance network info cache due to event network-changed-10f9c496-a049-40c4-a29b-fe4279219d91. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:11:41 np0005558241 nova_compute[248510]: 2025-12-13 09:11:41.865 248514 DEBUG oslo_concurrency.lockutils [req-54bcd0c8-afb7-4a89-ae28-6abac30d6b22 req-d28927ed-1c3e-4af2-9c1b-be94d139e5db 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:11:41 np0005558241 nova_compute[248510]: 2025-12-13 09:11:41.866 248514 DEBUG oslo_concurrency.lockutils [req-54bcd0c8-afb7-4a89-ae28-6abac30d6b22 req-d28927ed-1c3e-4af2-9c1b-be94d139e5db 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:11:41 np0005558241 nova_compute[248510]: 2025-12-13 09:11:41.866 248514 DEBUG nova.network.neutron [req-54bcd0c8-afb7-4a89-ae28-6abac30d6b22 req-d28927ed-1c3e-4af2-9c1b-be94d139e5db 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Refreshing network info cache for port 10f9c496-a049-40c4-a29b-fe4279219d91 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:11:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3318: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 15 KiB/s wr, 67 op/s
Dec 13 04:11:43 np0005558241 nova_compute[248510]: 2025-12-13 09:11:43.226 248514 DEBUG nova.network.neutron [req-54bcd0c8-afb7-4a89-ae28-6abac30d6b22 req-d28927ed-1c3e-4af2-9c1b-be94d139e5db 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updated VIF entry in instance network info cache for port 10f9c496-a049-40c4-a29b-fe4279219d91. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:11:43 np0005558241 nova_compute[248510]: 2025-12-13 09:11:43.227 248514 DEBUG nova.network.neutron [req-54bcd0c8-afb7-4a89-ae28-6abac30d6b22 req-d28927ed-1c3e-4af2-9c1b-be94d139e5db 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updating instance_info_cache with network_info: [{"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:11:43 np0005558241 nova_compute[248510]: 2025-12-13 09:11:43.261 248514 DEBUG oslo_concurrency.lockutils [req-54bcd0c8-afb7-4a89-ae28-6abac30d6b22 req-d28927ed-1c3e-4af2-9c1b-be94d139e5db 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:11:43 np0005558241 nova_compute[248510]: 2025-12-13 09:11:43.773 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3319: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 04:11:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:46 np0005558241 nova_compute[248510]: 2025-12-13 09:11:46.390 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3320: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 69 op/s
Dec 13 04:11:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3321: 321 pgs: 321 active+clean; 171 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 573 KiB/s wr, 76 op/s
Dec 13 04:11:48 np0005558241 nova_compute[248510]: 2025-12-13 09:11:48.785 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:11:50Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5f:00:ab 10.100.0.11
Dec 13 04:11:50 np0005558241 ovn_controller[148476]: 2025-12-13T09:11:50Z|00196|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5f:00:ab 10.100.0.11
Dec 13 04:11:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3322: 321 pgs: 321 active+clean; 184 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.7 MiB/s wr, 67 op/s
Dec 13 04:11:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:51 np0005558241 nova_compute[248510]: 2025-12-13 09:11:51.393 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3323: 321 pgs: 321 active+clean; 184 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Dec 13 04:11:53 np0005558241 nova_compute[248510]: 2025-12-13 09:11:53.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3324: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 517 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Dec 13 04:11:55 np0005558241 podman[394273]: 2025-12-13 09:11:55.009094163 +0000 UTC m=+0.075071549 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:11:55 np0005558241 podman[394274]: 2025-12-13 09:11:55.021209936 +0000 UTC m=+0.091839232 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:11:55 np0005558241 podman[394272]: 2025-12-13 09:11:55.065018636 +0000 UTC m=+0.133154588 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 04:11:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:55.449 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:11:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:55.450 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:11:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:11:55.451 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:11:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:56 np0005558241 nova_compute[248510]: 2025-12-13 09:11:56.396 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:11:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3325: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:11:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3326: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 04:11:58 np0005558241 nova_compute[248510]: 2025-12-13 09:11:58.791 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3327: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 314 KiB/s rd, 1.6 MiB/s wr, 57 op/s
Dec 13 04:12:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:01 np0005558241 nova_compute[248510]: 2025-12-13 09:12:01.398 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3328: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 297 KiB/s rd, 506 KiB/s wr, 43 op/s
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.186 248514 DEBUG nova.compute.manager [req-51b89bc4-68e7-49b7-bedb-85e8c962999d req-a4a7c250-b8a3-4bc4-9c76-200bb6dcda75 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-changed-10f9c496-a049-40c4-a29b-fe4279219d91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.187 248514 DEBUG nova.compute.manager [req-51b89bc4-68e7-49b7-bedb-85e8c962999d req-a4a7c250-b8a3-4bc4-9c76-200bb6dcda75 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Refreshing instance network info cache due to event network-changed-10f9c496-a049-40c4-a29b-fe4279219d91. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.187 248514 DEBUG oslo_concurrency.lockutils [req-51b89bc4-68e7-49b7-bedb-85e8c962999d req-a4a7c250-b8a3-4bc4-9c76-200bb6dcda75 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.188 248514 DEBUG oslo_concurrency.lockutils [req-51b89bc4-68e7-49b7-bedb-85e8c962999d req-a4a7c250-b8a3-4bc4-9c76-200bb6dcda75 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.188 248514 DEBUG nova.network.neutron [req-51b89bc4-68e7-49b7-bedb-85e8c962999d req-a4a7c250-b8a3-4bc4-9c76-200bb6dcda75 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Refreshing network info cache for port 10f9c496-a049-40c4-a29b-fe4279219d91 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.285 248514 DEBUG oslo_concurrency.lockutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.285 248514 DEBUG oslo_concurrency.lockutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.286 248514 DEBUG oslo_concurrency.lockutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.286 248514 DEBUG oslo_concurrency.lockutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.286 248514 DEBUG oslo_concurrency.lockutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.287 248514 INFO nova.compute.manager [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Terminating instance#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.289 248514 DEBUG nova.compute.manager [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:12:03 np0005558241 kernel: tap10f9c496-a0 (unregistering): left promiscuous mode
Dec 13 04:12:03 np0005558241 NetworkManager[50376]: <info>  [1765617123.3423] device (tap10f9c496-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:12:03 np0005558241 ovn_controller[148476]: 2025-12-13T09:12:03Z|01501|binding|INFO|Releasing lport 10f9c496-a049-40c4-a29b-fe4279219d91 from this chassis (sb_readonly=0)
Dec 13 04:12:03 np0005558241 ovn_controller[148476]: 2025-12-13T09:12:03Z|01502|binding|INFO|Setting lport 10f9c496-a049-40c4-a29b-fe4279219d91 down in Southbound
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.351 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:03 np0005558241 ovn_controller[148476]: 2025-12-13T09:12:03Z|01503|binding|INFO|Removing iface tap10f9c496-a0 ovn-installed in OVS
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.359 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:00:ab 10.100.0.11'], port_security=['fa:16:3e:5f:00:ab 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0d2b3fb1-4256-4dd2-bbba-188212ddd10e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40eecf4d-448d-4e47-a0ea-dbd30bf32f62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44250a44-17ff-41cb-ae95-e575bff91a4a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=10f9c496-a049-40c4-a29b-fe4279219d91) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.361 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 10f9c496-a049-40c4-a29b-fe4279219d91 in datapath a7065d5a-edce-4470-a56d-ab529d56aa3c unbound from our chassis#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.362 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7065d5a-edce-4470-a56d-ab529d56aa3c#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.367 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.377 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[36be4744-650b-40da-855e-0bb564f7ca08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:03 np0005558241 kernel: tap47c08f82-5e (unregistering): left promiscuous mode
Dec 13 04:12:03 np0005558241 NetworkManager[50376]: <info>  [1765617123.3828] device (tap47c08f82-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:12:03 np0005558241 ovn_controller[148476]: 2025-12-13T09:12:03Z|01504|binding|INFO|Releasing lport 47c08f82-5e0f-4d58-bc8a-5c049f6846fd from this chassis (sb_readonly=0)
Dec 13 04:12:03 np0005558241 ovn_controller[148476]: 2025-12-13T09:12:03Z|01505|binding|INFO|Setting lport 47c08f82-5e0f-4d58-bc8a-5c049f6846fd down in Southbound
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.392 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:03 np0005558241 ovn_controller[148476]: 2025-12-13T09:12:03Z|01506|binding|INFO|Removing iface tap47c08f82-5e ovn-installed in OVS
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.394 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.401 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:7e:4a 2001:db8:0:1:f816:3eff:fe39:7e4a 2001:db8::f816:3eff:fe39:7e4a'], port_security=['fa:16:3e:39:7e:4a 2001:db8:0:1:f816:3eff:fe39:7e4a 2001:db8::f816:3eff:fe39:7e4a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe39:7e4a/64 2001:db8::f816:3eff:fe39:7e4a/64', 'neutron:device_id': '0d2b3fb1-4256-4dd2-bbba-188212ddd10e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40eecf4d-448d-4e47-a0ea-dbd30bf32f62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28a1c52d-6dd3-4202-a082-66c542a5a384, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=47c08f82-5e0f-4d58-bc8a-5c049f6846fd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.406 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.416 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e35cf1cb-5cb8-49d9-a778-1ae25111c078]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.420 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[65d25e56-6a46-4573-b723-a21e2e388557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.451 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8f2122d3-43a5-4d92-8c79-04c2030555ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:03 np0005558241 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Dec 13 04:12:03 np0005558241 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008e.scope: Consumed 14.705s CPU time.
Dec 13 04:12:03 np0005558241 systemd-machined[210538]: Machine qemu-173-instance-0000008e terminated.
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.472 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b688fef9-a902-49fa-8126-bad169ab3dc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7065d5a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:e7:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 426], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962803, 'reachable_time': 25861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394351, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.491 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ab087fe7-a339-4add-9096-a7bf07d1768c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa7065d5a-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 962816, 'tstamp': 962816}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394352, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa7065d5a-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 962820, 'tstamp': 962820}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394352, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.493 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7065d5a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.495 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.502 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.503 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7065d5a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.503 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.504 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7065d5a-e0, col_values=(('external_ids', {'iface-id': 'b83346dd-17c7-4f89-a540-efae6916310e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.504 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.505 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 47c08f82-5e0f-4d58-bc8a-5c049f6846fd in datapath 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c unbound from our chassis#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.506 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.525 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[80b8940b-0aed-4fcf-b5b2-cdc16672d769]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:03 np0005558241 NetworkManager[50376]: <info>  [1765617123.5295] manager: (tap47c08f82-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/623)
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.543 248514 INFO nova.virt.libvirt.driver [-] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Instance destroyed successfully.#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.544 248514 DEBUG nova.objects.instance [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 0d2b3fb1-4256-4dd2-bbba-188212ddd10e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.563 248514 DEBUG nova.virt.libvirt.vif [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1965828306',display_name='tempest-TestGettingAddress-server-1965828306',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1965828306',id=142,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:11:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-8ma5x5dk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:11:37Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=0d2b3fb1-4256-4dd2-bbba-188212ddd10e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.564 248514 DEBUG nova.network.os_vif_util [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.565 248514 DEBUG nova.network.os_vif_util [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5f:00:ab,bridge_name='br-int',has_traffic_filtering=True,id=10f9c496-a049-40c4-a29b-fe4279219d91,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f9c496-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.566 248514 DEBUG os_vif [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:00:ab,bridge_name='br-int',has_traffic_filtering=True,id=10f9c496-a049-40c4-a29b-fe4279219d91,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f9c496-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.568 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.569 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10f9c496-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.570 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.572 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.575 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.577 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0d835e00-4e68-43a1-9ec1-bb34b96e99d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.578 248514 INFO os_vif [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:00:ab,bridge_name='br-int',has_traffic_filtering=True,id=10f9c496-a049-40c4-a29b-fe4279219d91,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10f9c496-a0')#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.580 248514 DEBUG nova.virt.libvirt.vif [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1965828306',display_name='tempest-TestGettingAddress-server-1965828306',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1965828306',id=142,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:11:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-8ma5x5dk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:11:37Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=0d2b3fb1-4256-4dd2-bbba-188212ddd10e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.580 248514 DEBUG nova.network.os_vif_util [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.580 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[41d22265-e327-4a68-a1f7-3d4254bc41d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.581 248514 DEBUG nova.network.os_vif_util [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:7e:4a,bridge_name='br-int',has_traffic_filtering=True,id=47c08f82-5e0f-4d58-bc8a-5c049f6846fd,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47c08f82-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.581 248514 DEBUG os_vif [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7e:4a,bridge_name='br-int',has_traffic_filtering=True,id=47c08f82-5e0f-4d58-bc8a-5c049f6846fd,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47c08f82-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.583 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.583 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47c08f82-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.585 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.586 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.588 248514 INFO os_vif [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:7e:4a,bridge_name='br-int',has_traffic_filtering=True,id=47c08f82-5e0f-4d58-bc8a-5c049f6846fd,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47c08f82-5e')#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.618 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2620b1-55c8-4406-9aa9-eaf9d9d1f2b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.630 248514 DEBUG nova.compute.manager [req-afc8e004-1390-4842-a708-7b72d836224f req-5f0930f2-04aa-4057-a1d6-be82341cdb81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-unplugged-10f9c496-a049-40c4-a29b-fe4279219d91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.631 248514 DEBUG oslo_concurrency.lockutils [req-afc8e004-1390-4842-a708-7b72d836224f req-5f0930f2-04aa-4057-a1d6-be82341cdb81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.631 248514 DEBUG oslo_concurrency.lockutils [req-afc8e004-1390-4842-a708-7b72d836224f req-5f0930f2-04aa-4057-a1d6-be82341cdb81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.632 248514 DEBUG oslo_concurrency.lockutils [req-afc8e004-1390-4842-a708-7b72d836224f req-5f0930f2-04aa-4057-a1d6-be82341cdb81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.632 248514 DEBUG nova.compute.manager [req-afc8e004-1390-4842-a708-7b72d836224f req-5f0930f2-04aa-4057-a1d6-be82341cdb81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] No waiting events found dispatching network-vif-unplugged-10f9c496-a049-40c4-a29b-fe4279219d91 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.633 248514 DEBUG nova.compute.manager [req-afc8e004-1390-4842-a708-7b72d836224f req-5f0930f2-04aa-4057-a1d6-be82341cdb81 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-unplugged-10f9c496-a049-40c4-a29b-fe4279219d91 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.641 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cda5bc02-c815-4921-81a0-8739de99d87f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7661cc7b-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:49:a3:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 427], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962968, 'reachable_time': 41750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394402, 'error': None, 'target': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.663 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[56ed7812-f866-4fd5-baea-caedfae55edb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7661cc7b-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 962982, 'tstamp': 962982}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394403, 'error': None, 'target': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.665 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7661cc7b-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.668 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.668 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7661cc7b-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.668 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.669 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7661cc7b-f0, col_values=(('external_ids', {'iface-id': 'c9cb1492-8274-4506-acb8-65660baeb5cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:12:03 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:03.669 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:12:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:12:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 43K writes, 172K keys, 43K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 43K writes, 15K syncs, 2.77 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3460 writes, 13K keys, 3460 commit groups, 1.0 writes per commit group, ingest: 16.23 MB, 0.03 MB/s#012Interval WAL: 3459 writes, 1381 syncs, 2.50 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fffe9af8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Dec 13 04:12:03 np0005558241 nova_compute[248510]: 2025-12-13 09:12:03.794 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:04 np0005558241 nova_compute[248510]: 2025-12-13 09:12:04.667 248514 DEBUG nova.network.neutron [req-51b89bc4-68e7-49b7-bedb-85e8c962999d req-a4a7c250-b8a3-4bc4-9c76-200bb6dcda75 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updated VIF entry in instance network info cache for port 10f9c496-a049-40c4-a29b-fe4279219d91. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:12:04 np0005558241 nova_compute[248510]: 2025-12-13 09:12:04.668 248514 DEBUG nova.network.neutron [req-51b89bc4-68e7-49b7-bedb-85e8c962999d req-a4a7c250-b8a3-4bc4-9c76-200bb6dcda75 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updating instance_info_cache with network_info: [{"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "address": "fa:16:3e:39:7e:4a", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": 
[], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:7e4a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47c08f82-5e", "ovs_interfaceid": "47c08f82-5e0f-4d58-bc8a-5c049f6846fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:12:04 np0005558241 nova_compute[248510]: 2025-12-13 09:12:04.707 248514 DEBUG oslo_concurrency.lockutils [req-51b89bc4-68e7-49b7-bedb-85e8c962999d req-a4a7c250-b8a3-4bc4-9c76-200bb6dcda75 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-0d2b3fb1-4256-4dd2-bbba-188212ddd10e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:12:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3329: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 310 KiB/s rd, 515 KiB/s wr, 46 op/s
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.706 248514 DEBUG nova.compute.manager [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-plugged-10f9c496-a049-40c4-a29b-fe4279219d91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.707 248514 DEBUG oslo_concurrency.lockutils [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.707 248514 DEBUG oslo_concurrency.lockutils [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.708 248514 DEBUG oslo_concurrency.lockutils [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.709 248514 DEBUG nova.compute.manager [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] No waiting events found dispatching network-vif-plugged-10f9c496-a049-40c4-a29b-fe4279219d91 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.709 248514 WARNING nova.compute.manager [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received unexpected event network-vif-plugged-10f9c496-a049-40c4-a29b-fe4279219d91 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.710 248514 DEBUG nova.compute.manager [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-unplugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.710 248514 DEBUG oslo_concurrency.lockutils [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.711 248514 DEBUG oslo_concurrency.lockutils [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.711 248514 DEBUG oslo_concurrency.lockutils [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.714 248514 DEBUG nova.compute.manager [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] No waiting events found dispatching network-vif-unplugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.714 248514 DEBUG nova.compute.manager [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-unplugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.714 248514 DEBUG nova.compute.manager [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-plugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.714 248514 DEBUG oslo_concurrency.lockutils [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.715 248514 DEBUG oslo_concurrency.lockutils [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.715 248514 DEBUG oslo_concurrency.lockutils [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.715 248514 DEBUG nova.compute.manager [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] No waiting events found dispatching network-vif-plugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.715 248514 WARNING nova.compute.manager [req-234bc3b0-d6f3-4c2a-9ab3-97ce63ca367b req-104929d2-cac7-4ef6-b8f8-2480ebaaf2bf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received unexpected event network-vif-plugged-47c08f82-5e0f-4d58-bc8a-5c049f6846fd for instance with vm_state active and task_state deleting.#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.798 248514 INFO nova.virt.libvirt.driver [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Deleting instance files /var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e_del#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.799 248514 INFO nova.virt.libvirt.driver [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Deletion of /var/lib/nova/instances/0d2b3fb1-4256-4dd2-bbba-188212ddd10e_del complete#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.867 248514 INFO nova.compute.manager [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Took 2.58 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.868 248514 DEBUG oslo.service.loopingcall [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.868 248514 DEBUG nova.compute.manager [-] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:12:05 np0005558241 nova_compute[248510]: 2025-12-13 09:12:05.869 248514 DEBUG nova.network.neutron [-] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:12:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3330: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 22 KiB/s wr, 3 op/s
Dec 13 04:12:06 np0005558241 nova_compute[248510]: 2025-12-13 09:12:06.854 248514 DEBUG nova.compute.manager [req-018b02b7-441b-44ac-96f3-ccf3eeea436a req-c0663206-9d64-4617-9ea7-75b393359dce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-deleted-47c08f82-5e0f-4d58-bc8a-5c049f6846fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:12:06 np0005558241 nova_compute[248510]: 2025-12-13 09:12:06.855 248514 INFO nova.compute.manager [req-018b02b7-441b-44ac-96f3-ccf3eeea436a req-c0663206-9d64-4617-9ea7-75b393359dce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Neutron deleted interface 47c08f82-5e0f-4d58-bc8a-5c049f6846fd; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 04:12:06 np0005558241 nova_compute[248510]: 2025-12-13 09:12:06.856 248514 DEBUG nova.network.neutron [req-018b02b7-441b-44ac-96f3-ccf3eeea436a req-c0663206-9d64-4617-9ea7-75b393359dce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updating instance_info_cache with network_info: [{"id": "10f9c496-a049-40c4-a29b-fe4279219d91", "address": "fa:16:3e:5f:00:ab", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10f9c496-a0", "ovs_interfaceid": "10f9c496-a049-40c4-a29b-fe4279219d91", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:12:06 np0005558241 nova_compute[248510]: 2025-12-13 09:12:06.888 248514 DEBUG nova.compute.manager [req-018b02b7-441b-44ac-96f3-ccf3eeea436a req-c0663206-9d64-4617-9ea7-75b393359dce 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Detach interface failed, port_id=47c08f82-5e0f-4d58-bc8a-5c049f6846fd, reason: Instance 0d2b3fb1-4256-4dd2-bbba-188212ddd10e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec 13 04:12:07 np0005558241 nova_compute[248510]: 2025-12-13 09:12:07.171 248514 DEBUG nova.network.neutron [-] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:12:07 np0005558241 nova_compute[248510]: 2025-12-13 09:12:07.195 248514 INFO nova.compute.manager [-] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Took 1.33 seconds to deallocate network for instance.#033[00m
Dec 13 04:12:07 np0005558241 nova_compute[248510]: 2025-12-13 09:12:07.237 248514 DEBUG oslo_concurrency.lockutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:12:07 np0005558241 nova_compute[248510]: 2025-12-13 09:12:07.238 248514 DEBUG oslo_concurrency.lockutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:12:07 np0005558241 nova_compute[248510]: 2025-12-13 09:12:07.323 248514 DEBUG oslo_concurrency.processutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:12:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:12:07 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3118093410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:12:07 np0005558241 nova_compute[248510]: 2025-12-13 09:12:07.910 248514 DEBUG oslo_concurrency.processutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:12:07 np0005558241 nova_compute[248510]: 2025-12-13 09:12:07.921 248514 DEBUG nova.compute.provider_tree [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:12:07 np0005558241 nova_compute[248510]: 2025-12-13 09:12:07.943 248514 DEBUG nova.scheduler.client.report [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:12:07 np0005558241 nova_compute[248510]: 2025-12-13 09:12:07.971 248514 DEBUG oslo_concurrency.lockutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:12:08 np0005558241 nova_compute[248510]: 2025-12-13 09:12:07.999 248514 INFO nova.scheduler.client.report [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 0d2b3fb1-4256-4dd2-bbba-188212ddd10e#033[00m
Dec 13 04:12:08 np0005558241 nova_compute[248510]: 2025-12-13 09:12:08.073 248514 DEBUG oslo_concurrency.lockutils [None req-eef0475e-be96-4cdc-98c6-2e4fb898ed71 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "0d2b3fb1-4256-4dd2-bbba-188212ddd10e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:12:08 np0005558241 nova_compute[248510]: 2025-12-13 09:12:08.586 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3331: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 23 KiB/s wr, 14 op/s
Dec 13 04:12:08 np0005558241 nova_compute[248510]: 2025-12-13 09:12:08.795 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:08 np0005558241 nova_compute[248510]: 2025-12-13 09:12:08.987 248514 DEBUG nova.compute.manager [req-914c688f-f70c-4a3d-bc37-67a9e7759f03 req-d68eccbe-098b-4b5d-a785-6363c8c0e86c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Received event network-vif-deleted-10f9c496-a049-40c4-a29b-fe4279219d91 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:12:09 np0005558241 nova_compute[248510]: 2025-12-13 09:12:09.377 248514 DEBUG oslo_concurrency.lockutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:12:09 np0005558241 nova_compute[248510]: 2025-12-13 09:12:09.378 248514 DEBUG oslo_concurrency.lockutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:12:09 np0005558241 nova_compute[248510]: 2025-12-13 09:12:09.378 248514 DEBUG oslo_concurrency.lockutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:12:09 np0005558241 nova_compute[248510]: 2025-12-13 09:12:09.378 248514 DEBUG oslo_concurrency.lockutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:12:09 np0005558241 nova_compute[248510]: 2025-12-13 09:12:09.379 248514 DEBUG oslo_concurrency.lockutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:12:09 np0005558241 nova_compute[248510]: 2025-12-13 09:12:09.380 248514 INFO nova.compute.manager [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Terminating instance#033[00m
Dec 13 04:12:09 np0005558241 nova_compute[248510]: 2025-12-13 09:12:09.381 248514 DEBUG nova.compute.manager [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:12:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:12:09
Dec 13 04:12:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:12:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:12:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'vms']
Dec 13 04:12:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:12:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:12:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:12:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:12:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:12:10 np0005558241 kernel: tapb6bde1c8-97 (unregistering): left promiscuous mode
Dec 13 04:12:10 np0005558241 NetworkManager[50376]: <info>  [1765617130.1580] device (tapb6bde1c8-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:12:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:12:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:12:10 np0005558241 ovn_controller[148476]: 2025-12-13T09:12:10Z|01507|binding|INFO|Releasing lport b6bde1c8-9710-4155-9619-7040cbbca806 from this chassis (sb_readonly=0)
Dec 13 04:12:10 np0005558241 ovn_controller[148476]: 2025-12-13T09:12:10Z|01508|binding|INFO|Setting lport b6bde1c8-9710-4155-9619-7040cbbca806 down in Southbound
Dec 13 04:12:10 np0005558241 ovn_controller[148476]: 2025-12-13T09:12:10Z|01509|binding|INFO|Removing iface tapb6bde1c8-97 ovn-installed in OVS
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.225 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:10 np0005558241 kernel: tap445f61a4-13 (unregistering): left promiscuous mode
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.229 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:10 np0005558241 NetworkManager[50376]: <info>  [1765617130.2675] device (tap445f61a4-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.283 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:10 np0005558241 ovn_controller[148476]: 2025-12-13T09:12:10Z|01510|binding|INFO|Releasing lport 445f61a4-1352-4145-8b2b-f9f87f5435a7 from this chassis (sb_readonly=1)
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.289 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:10 np0005558241 ovn_controller[148476]: 2025-12-13T09:12:10Z|01511|binding|INFO|Removing iface tap445f61a4-13 ovn-installed in OVS
Dec 13 04:12:10 np0005558241 ovn_controller[148476]: 2025-12-13T09:12:10Z|01512|if_status|INFO|Dropped 1 log messages in last 156 seconds (most recently, 156 seconds ago) due to excessive rate
Dec 13 04:12:10 np0005558241 ovn_controller[148476]: 2025-12-13T09:12:10Z|01513|if_status|INFO|Not setting lport 445f61a4-1352-4145-8b2b-f9f87f5435a7 down as sb is readonly
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.291 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.313 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:10 np0005558241 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Dec 13 04:12:10 np0005558241 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008d.scope: Consumed 16.184s CPU time.
Dec 13 04:12:10 np0005558241 systemd-machined[210538]: Machine qemu-172-instance-0000008d terminated.
Dec 13 04:12:10 np0005558241 NetworkManager[50376]: <info>  [1765617130.4089] manager: (tapb6bde1c8-97): new Tun device (/org/freedesktop/NetworkManager/Devices/624)
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.414 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:10 np0005558241 NetworkManager[50376]: <info>  [1765617130.4223] manager: (tap445f61a4-13): new Tun device (/org/freedesktop/NetworkManager/Devices/625)
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.427 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.441 248514 INFO nova.virt.libvirt.driver [-] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Instance destroyed successfully.#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.442 248514 DEBUG nova.objects.instance [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:12:10 np0005558241 ovn_controller[148476]: 2025-12-13T09:12:10Z|01514|binding|INFO|Setting lport 445f61a4-1352-4145-8b2b-f9f87f5435a7 down in Southbound
Dec 13 04:12:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:10.605 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:9b:72 10.100.0.6'], port_security=['fa:16:3e:54:9b:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40eecf4d-448d-4e47-a0ea-dbd30bf32f62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44250a44-17ff-41cb-ae95-e575bff91a4a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=b6bde1c8-9710-4155-9619-7040cbbca806) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:12:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:10.608 158419 INFO neutron.agent.ovn.metadata.agent [-] Port b6bde1c8-9710-4155-9619-7040cbbca806 in datapath a7065d5a-edce-4470-a56d-ab529d56aa3c unbound from our chassis#033[00m
Dec 13 04:12:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:10.611 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a7065d5a-edce-4470-a56d-ab529d56aa3c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:12:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:10.613 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[92415575-0112-4249-b3de-5a8f417ed2f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:10.614 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c namespace which is not needed anymore#033[00m
Dec 13 04:12:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3332: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 11 KiB/s wr, 29 op/s
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.886 248514 DEBUG nova.virt.libvirt.vif [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:10:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-230241554',display_name='tempest-TestGettingAddress-server-230241554',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-230241554',id=141,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:10:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-60o8t9sd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:10:51Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.888 248514 DEBUG nova.network.os_vif_util [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.890 248514 DEBUG nova.network.os_vif_util [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:9b:72,bridge_name='br-int',has_traffic_filtering=True,id=b6bde1c8-9710-4155-9619-7040cbbca806,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6bde1c8-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.892 248514 DEBUG os_vif [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:9b:72,bridge_name='br-int',has_traffic_filtering=True,id=b6bde1c8-9710-4155-9619-7040cbbca806,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6bde1c8-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.894 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.895 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6bde1c8-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.898 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.900 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.911 248514 INFO os_vif [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:9b:72,bridge_name='br-int',has_traffic_filtering=True,id=b6bde1c8-9710-4155-9619-7040cbbca806,network=Network(a7065d5a-edce-4470-a56d-ab529d56aa3c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6bde1c8-97')#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.912 248514 DEBUG nova.virt.libvirt.vif [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:10:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-230241554',display_name='tempest-TestGettingAddress-server-230241554',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-230241554',id=141,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBATQPC4t+wxvDwwCxa8urAHuZrrym7mZmOJbATKpLlSx8f1HBavg2DxJwQVq6Xh11Cjg2XZMdNmmV7i4/mA4v2we9ssDJcPYmqBlF45/hhRWDCHkL+g8/bTQlH8L6jaUSw==',key_name='tempest-TestGettingAddress-545174561',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:10:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-60o8t9sd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:10:51Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.913 248514 DEBUG nova.network.os_vif_util [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.914 248514 DEBUG nova.network.os_vif_util [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:7d:02,bridge_name='br-int',has_traffic_filtering=True,id=445f61a4-1352-4145-8b2b-f9f87f5435a7,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap445f61a4-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.914 248514 DEBUG os_vif [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:7d:02,bridge_name='br-int',has_traffic_filtering=True,id=445f61a4-1352-4145-8b2b-f9f87f5435a7,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap445f61a4-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.917 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap445f61a4-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.919 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.921 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:10 np0005558241 nova_compute[248510]: 2025-12-13 09:12:10.924 248514 INFO os_vif [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:7d:02,bridge_name='br-int',has_traffic_filtering=True,id=445f61a4-1352-4145-8b2b-f9f87f5435a7,network=Network(7661cc7b-fcf7-41b6-b117-ca8fede7ad4c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap445f61a4-13')#033[00m
Dec 13 04:12:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:12:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:11.025 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:7d:02 2001:db8:0:1:f816:3eff:fe66:7d02 2001:db8::f816:3eff:fe66:7d02'], port_security=['fa:16:3e:66:7d:02 2001:db8:0:1:f816:3eff:fe66:7d02 2001:db8::f816:3eff:fe66:7d02'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe66:7d02/64 2001:db8::f816:3eff:fe66:7d02/64', 'neutron:device_id': '8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40eecf4d-448d-4e47-a0ea-dbd30bf32f62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28a1c52d-6dd3-4202-a082-66c542a5a384, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=445f61a4-1352-4145-8b2b-f9f87f5435a7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:12:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:12:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:12:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:12:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:12:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:12:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:12:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:12:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:12:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:12:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:12:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6001.8 total, 600.0 interval#012Cumulative writes: 45K writes, 175K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 45K writes, 16K syncs, 2.74 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3990 writes, 15K keys, 3990 commit groups, 1.0 writes per commit group, ingest: 19.11 MB, 0.03 MB/s#012Interval WAL: 3990 writes, 1573 syncs, 2.54 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6001.8 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.7 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6001.8 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6001.8 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Dec 13 04:12:11 np0005558241 neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c[392998]: [NOTICE]   (393002) : haproxy version is 2.8.14-c23fe91
Dec 13 04:12:11 np0005558241 neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c[392998]: [NOTICE]   (393002) : path to executable is /usr/sbin/haproxy
Dec 13 04:12:11 np0005558241 neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c[392998]: [WARNING]  (393002) : Exiting Master process...
Dec 13 04:12:11 np0005558241 neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c[392998]: [ALERT]    (393002) : Current worker (393004) exited with code 143 (Terminated)
Dec 13 04:12:11 np0005558241 neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c[392998]: [WARNING]  (393002) : All workers exited. Exiting... (0)
Dec 13 04:12:11 np0005558241 systemd[1]: libpod-b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c.scope: Deactivated successfully.
Dec 13 04:12:11 np0005558241 podman[394473]: 2025-12-13 09:12:11.213764942 +0000 UTC m=+0.445173237 container died b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 04:12:11 np0005558241 nova_compute[248510]: 2025-12-13 09:12:11.572 248514 DEBUG nova.compute.manager [req-4827d222-9e4e-45bf-b797-c948f64877ac req-cfc73564-711d-4d46-8a26-7e826170f413 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-changed-b6bde1c8-9710-4155-9619-7040cbbca806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:12:11 np0005558241 nova_compute[248510]: 2025-12-13 09:12:11.573 248514 DEBUG nova.compute.manager [req-4827d222-9e4e-45bf-b797-c948f64877ac req-cfc73564-711d-4d46-8a26-7e826170f413 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Refreshing instance network info cache due to event network-changed-b6bde1c8-9710-4155-9619-7040cbbca806. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:12:11 np0005558241 nova_compute[248510]: 2025-12-13 09:12:11.573 248514 DEBUG oslo_concurrency.lockutils [req-4827d222-9e4e-45bf-b797-c948f64877ac req-cfc73564-711d-4d46-8a26-7e826170f413 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:12:11 np0005558241 nova_compute[248510]: 2025-12-13 09:12:11.574 248514 DEBUG oslo_concurrency.lockutils [req-4827d222-9e4e-45bf-b797-c948f64877ac req-cfc73564-711d-4d46-8a26-7e826170f413 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:12:11 np0005558241 nova_compute[248510]: 2025-12-13 09:12:11.574 248514 DEBUG nova.network.neutron [req-4827d222-9e4e-45bf-b797-c948f64877ac req-cfc73564-711d-4d46-8a26-7e826170f413 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Refreshing network info cache for port b6bde1c8-9710-4155-9619-7040cbbca806 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:12:11 np0005558241 nova_compute[248510]: 2025-12-13 09:12:11.757 248514 DEBUG nova.compute.manager [req-89060cce-d2c8-4641-805f-da428bd0e0ce req-818f211f-c36b-45f9-b244-c3aacbe29b17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-unplugged-b6bde1c8-9710-4155-9619-7040cbbca806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:12:11 np0005558241 nova_compute[248510]: 2025-12-13 09:12:11.758 248514 DEBUG oslo_concurrency.lockutils [req-89060cce-d2c8-4641-805f-da428bd0e0ce req-818f211f-c36b-45f9-b244-c3aacbe29b17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:12:11 np0005558241 nova_compute[248510]: 2025-12-13 09:12:11.758 248514 DEBUG oslo_concurrency.lockutils [req-89060cce-d2c8-4641-805f-da428bd0e0ce req-818f211f-c36b-45f9-b244-c3aacbe29b17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:12:11 np0005558241 nova_compute[248510]: 2025-12-13 09:12:11.759 248514 DEBUG oslo_concurrency.lockutils [req-89060cce-d2c8-4641-805f-da428bd0e0ce req-818f211f-c36b-45f9-b244-c3aacbe29b17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:12:11 np0005558241 nova_compute[248510]: 2025-12-13 09:12:11.759 248514 DEBUG nova.compute.manager [req-89060cce-d2c8-4641-805f-da428bd0e0ce req-818f211f-c36b-45f9-b244-c3aacbe29b17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] No waiting events found dispatching network-vif-unplugged-b6bde1c8-9710-4155-9619-7040cbbca806 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:12:11 np0005558241 nova_compute[248510]: 2025-12-13 09:12:11.760 248514 DEBUG nova.compute.manager [req-89060cce-d2c8-4641-805f-da428bd0e0ce req-818f211f-c36b-45f9-b244-c3aacbe29b17 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-unplugged-b6bde1c8-9710-4155-9619-7040cbbca806 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:12:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3333: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 11 KiB/s wr, 29 op/s
Dec 13 04:12:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d83ba7e95364800c67b1ef1b047ddcfb67b0d58972ad0e7ad048a84a30676ad2-merged.mount: Deactivated successfully.
Dec 13 04:12:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c-userdata-shm.mount: Deactivated successfully.
Dec 13 04:12:13 np0005558241 nova_compute[248510]: 2025-12-13 09:12:13.766 248514 DEBUG nova.compute.manager [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-unplugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:12:13 np0005558241 nova_compute[248510]: 2025-12-13 09:12:13.767 248514 DEBUG oslo_concurrency.lockutils [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:12:13 np0005558241 nova_compute[248510]: 2025-12-13 09:12:13.767 248514 DEBUG oslo_concurrency.lockutils [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:12:13 np0005558241 nova_compute[248510]: 2025-12-13 09:12:13.768 248514 DEBUG oslo_concurrency.lockutils [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:12:13 np0005558241 nova_compute[248510]: 2025-12-13 09:12:13.768 248514 DEBUG nova.compute.manager [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] No waiting events found dispatching network-vif-unplugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:12:13 np0005558241 nova_compute[248510]: 2025-12-13 09:12:13.768 248514 DEBUG nova.compute.manager [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-unplugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:12:13 np0005558241 nova_compute[248510]: 2025-12-13 09:12:13.769 248514 DEBUG nova.compute.manager [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-plugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:12:13 np0005558241 nova_compute[248510]: 2025-12-13 09:12:13.769 248514 DEBUG oslo_concurrency.lockutils [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:12:13 np0005558241 nova_compute[248510]: 2025-12-13 09:12:13.769 248514 DEBUG oslo_concurrency.lockutils [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:12:13 np0005558241 nova_compute[248510]: 2025-12-13 09:12:13.770 248514 DEBUG oslo_concurrency.lockutils [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:12:13 np0005558241 nova_compute[248510]: 2025-12-13 09:12:13.770 248514 DEBUG nova.compute.manager [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] No waiting events found dispatching network-vif-plugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:12:13 np0005558241 nova_compute[248510]: 2025-12-13 09:12:13.770 248514 WARNING nova.compute.manager [req-6ddc4b6e-ef35-4660-a37d-d9b9f616784e req-ceb70637-a196-49ca-a943-b3e7983472aa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received unexpected event network-vif-plugged-445f61a4-1352-4145-8b2b-f9f87f5435a7 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 04:12:13 np0005558241 nova_compute[248510]: 2025-12-13 09:12:13.798 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:13 np0005558241 podman[394473]: 2025-12-13 09:12:13.938665012 +0000 UTC m=+3.170073307 container cleanup b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:12:13 np0005558241 nova_compute[248510]: 2025-12-13 09:12:13.945 248514 DEBUG nova.compute.manager [req-85e66e3f-a92d-41af-b5b2-4bfad7560ead req-258db07f-ba80-45ff-87ea-1d60eabee468 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-plugged-b6bde1c8-9710-4155-9619-7040cbbca806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:12:13 np0005558241 nova_compute[248510]: 2025-12-13 09:12:13.945 248514 DEBUG oslo_concurrency.lockutils [req-85e66e3f-a92d-41af-b5b2-4bfad7560ead req-258db07f-ba80-45ff-87ea-1d60eabee468 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:12:13 np0005558241 nova_compute[248510]: 2025-12-13 09:12:13.946 248514 DEBUG oslo_concurrency.lockutils [req-85e66e3f-a92d-41af-b5b2-4bfad7560ead req-258db07f-ba80-45ff-87ea-1d60eabee468 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:12:13 np0005558241 nova_compute[248510]: 2025-12-13 09:12:13.946 248514 DEBUG oslo_concurrency.lockutils [req-85e66e3f-a92d-41af-b5b2-4bfad7560ead req-258db07f-ba80-45ff-87ea-1d60eabee468 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:12:13 np0005558241 nova_compute[248510]: 2025-12-13 09:12:13.946 248514 DEBUG nova.compute.manager [req-85e66e3f-a92d-41af-b5b2-4bfad7560ead req-258db07f-ba80-45ff-87ea-1d60eabee468 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] No waiting events found dispatching network-vif-plugged-b6bde1c8-9710-4155-9619-7040cbbca806 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:12:13 np0005558241 nova_compute[248510]: 2025-12-13 09:12:13.947 248514 WARNING nova.compute.manager [req-85e66e3f-a92d-41af-b5b2-4bfad7560ead req-258db07f-ba80-45ff-87ea-1d60eabee468 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received unexpected event network-vif-plugged-b6bde1c8-9710-4155-9619-7040cbbca806 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 04:12:13 np0005558241 systemd[1]: libpod-conmon-b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c.scope: Deactivated successfully.
Dec 13 04:12:14 np0005558241 nova_compute[248510]: 2025-12-13 09:12:14.135 248514 DEBUG nova.network.neutron [req-4827d222-9e4e-45bf-b797-c948f64877ac req-cfc73564-711d-4d46-8a26-7e826170f413 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updated VIF entry in instance network info cache for port b6bde1c8-9710-4155-9619-7040cbbca806. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:12:14 np0005558241 nova_compute[248510]: 2025-12-13 09:12:14.136 248514 DEBUG nova.network.neutron [req-4827d222-9e4e-45bf-b797-c948f64877ac req-cfc73564-711d-4d46-8a26-7e826170f413 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updating instance_info_cache with network_info: [{"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "address": "fa:16:3e:66:7d:02", "network": {"id": "7661cc7b-fcf7-41b6-b117-ca8fede7ad4c", "bridge": "br-int", "label": "tempest-network-smoke--1007713288", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe66:7d02", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap445f61a4-13", "ovs_interfaceid": "445f61a4-1352-4145-8b2b-f9f87f5435a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:12:14 np0005558241 nova_compute[248510]: 2025-12-13 09:12:14.169 248514 DEBUG oslo_concurrency.lockutils [req-4827d222-9e4e-45bf-b797-c948f64877ac req-cfc73564-711d-4d46-8a26-7e826170f413 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:12:14 np0005558241 podman[394523]: 2025-12-13 09:12:14.686410886 +0000 UTC m=+0.706827279 container remove b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:12:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.697 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f46d537b-c24c-49c7-8cf2-34814f9d24fd]: (4, ('Sat Dec 13 09:12:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c (b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c)\nb9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c\nSat Dec 13 09:12:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c (b9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c)\nb9e425db167a09e48ca553673d5a3d6f84e13d967902f169709bab3ee2633f7c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.700 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[281b6c24-15bb-426b-9a5b-2813df9d2c44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.701 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7065d5a-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:12:14 np0005558241 nova_compute[248510]: 2025-12-13 09:12:14.704 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:14 np0005558241 kernel: tapa7065d5a-e0: left promiscuous mode
Dec 13 04:12:14 np0005558241 nova_compute[248510]: 2025-12-13 09:12:14.724 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.728 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[386760d0-107e-4a49-abf9-9a2db7279103]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.746 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c7d38b87-5aca-4694-81b5-e7704fca8c51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.748 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[de19b97d-cfad-4cf7-8ac1-c5c054bcfa66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.766 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[982eb3d1-93f1-410d-998a-1f99f5900451]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962794, 'reachable_time': 15662, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394537, 'error': None, 'target': 'ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.770 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a7065d5a-edce-4470-a56d-ab529d56aa3c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:12:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.771 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[7ae606ff-e1bc-4064-9c70-23d44ca667b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.772 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 445f61a4-1352-4145-8b2b-f9f87f5435a7 in datapath 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c unbound from our chassis#033[00m
Dec 13 04:12:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3334: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 11 KiB/s wr, 35 op/s
Dec 13 04:12:14 np0005558241 systemd[1]: run-netns-ovnmeta\x2da7065d5a\x2dedce\x2d4470\x2da56d\x2dab529d56aa3c.mount: Deactivated successfully.
Dec 13 04:12:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.775 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7661cc7b-fcf7-41b6-b117-ca8fede7ad4c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:12:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.776 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8bcce4e7-3bcb-48af-ade7-6bf5bd8f65d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:14 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:14.777 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c namespace which is not needed anymore#033[00m
Dec 13 04:12:14 np0005558241 neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c[393088]: [NOTICE]   (393125) : haproxy version is 2.8.14-c23fe91
Dec 13 04:12:14 np0005558241 neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c[393088]: [NOTICE]   (393125) : path to executable is /usr/sbin/haproxy
Dec 13 04:12:14 np0005558241 neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c[393088]: [WARNING]  (393125) : Exiting Master process...
Dec 13 04:12:14 np0005558241 neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c[393088]: [ALERT]    (393125) : Current worker (393132) exited with code 143 (Terminated)
Dec 13 04:12:14 np0005558241 neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c[393088]: [WARNING]  (393125) : All workers exited. Exiting... (0)
Dec 13 04:12:14 np0005558241 systemd[1]: libpod-bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411.scope: Deactivated successfully.
Dec 13 04:12:14 np0005558241 podman[394559]: 2025-12-13 09:12:14.969136897 +0000 UTC m=+0.086741377 container died bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 04:12:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:12:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3293831454' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:12:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:12:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3293831454' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:12:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411-userdata-shm.mount: Deactivated successfully.
Dec 13 04:12:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay-20f32a15c927b95fe7b352ffa0d269ecc44713de0e4d769f1bd0a4e926820f3a-merged.mount: Deactivated successfully.
Dec 13 04:12:15 np0005558241 podman[394559]: 2025-12-13 09:12:15.240729169 +0000 UTC m=+0.358333639 container cleanup bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 04:12:15 np0005558241 systemd[1]: libpod-conmon-bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411.scope: Deactivated successfully.
Dec 13 04:12:15 np0005558241 podman[394590]: 2025-12-13 09:12:15.534868286 +0000 UTC m=+0.258481364 container remove bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:12:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:15.547 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e8c45e-7096-4840-8caf-781f4fabbadb]: (4, ('Sat Dec 13 09:12:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c (bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411)\nbba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411\nSat Dec 13 09:12:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c (bba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411)\nbba5048265e780202d81d3a291fe15617ad2a356714ed548a476b6a5c32bf411\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:15.549 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b1396e19-d6c6-4dd7-820a-42af51d3d6a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:15.550 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7661cc7b-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:12:15 np0005558241 nova_compute[248510]: 2025-12-13 09:12:15.553 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:15 np0005558241 kernel: tap7661cc7b-f0: left promiscuous mode
Dec 13 04:12:15 np0005558241 nova_compute[248510]: 2025-12-13 09:12:15.569 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:15.573 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bd8fe364-169d-41ed-be93-bc739554430d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:15.588 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ad30806a-68af-4b39-bddb-b3343bcbd2b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:15.590 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[290defd4-93bb-4ac4-96e9-fc0582fc887c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:15.612 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[839da859-7d21-4f93-9ed6-58b9ebba202c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 962959, 'reachable_time': 36125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394605, 'error': None, 'target': 'ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:15.615 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7661cc7b-fcf7-41b6-b117-ca8fede7ad4c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:12:15 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:15.616 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[78aee92c-d891-4fdf-bac4-7848dfaa20b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:15 np0005558241 systemd[1]: run-netns-ovnmeta\x2d7661cc7b\x2dfcf7\x2d41b6\x2db117\x2dca8fede7ad4c.mount: Deactivated successfully.
Dec 13 04:12:15 np0005558241 nova_compute[248510]: 2025-12-13 09:12:15.840 248514 INFO nova.virt.libvirt.driver [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Deleting instance files /var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_del#033[00m
Dec 13 04:12:15 np0005558241 nova_compute[248510]: 2025-12-13 09:12:15.841 248514 INFO nova.virt.libvirt.driver [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Deletion of /var/lib/nova/instances/8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8_del complete#033[00m
Dec 13 04:12:15 np0005558241 nova_compute[248510]: 2025-12-13 09:12:15.919 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:16 np0005558241 nova_compute[248510]: 2025-12-13 09:12:16.758 248514 INFO nova.compute.manager [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Took 7.38 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:12:16 np0005558241 nova_compute[248510]: 2025-12-13 09:12:16.759 248514 DEBUG oslo.service.loopingcall [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:12:16 np0005558241 nova_compute[248510]: 2025-12-13 09:12:16.759 248514 DEBUG nova.compute.manager [-] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:12:16 np0005558241 nova_compute[248510]: 2025-12-13 09:12:16.759 248514 DEBUG nova.network.neutron [-] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:12:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3335: 321 pgs: 321 active+clean; 121 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.5 KiB/s wr, 33 op/s
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.513914) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617137514013, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 808, "num_deletes": 257, "total_data_size": 1119077, "memory_usage": 1145024, "flush_reason": "Manual Compaction"}
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617137536169, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 1099957, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65946, "largest_seqno": 66753, "table_properties": {"data_size": 1095810, "index_size": 1862, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9006, "raw_average_key_size": 19, "raw_value_size": 1087530, "raw_average_value_size": 2299, "num_data_blocks": 83, "num_entries": 473, "num_filter_entries": 473, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765617069, "oldest_key_time": 1765617069, "file_creation_time": 1765617137, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 22298 microseconds, and 7455 cpu microseconds.
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.536219) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 1099957 bytes OK
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.536244) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.538294) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.538316) EVENT_LOG_v1 {"time_micros": 1765617137538309, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.538336) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 1115012, prev total WAL file size 1115012, number of live WAL files 2.
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.538997) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373632' seq:72057594037927935, type:22 .. '6C6F676D0033303135' seq:0, type:0; will stop at (end)
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(1074KB)], [155(10062KB)]
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617137539129, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 11404250, "oldest_snapshot_seqno": -1}
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8526 keys, 11283682 bytes, temperature: kUnknown
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617137768365, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 11283682, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11228494, "index_size": 32739, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21381, "raw_key_size": 224230, "raw_average_key_size": 26, "raw_value_size": 11078212, "raw_average_value_size": 1299, "num_data_blocks": 1267, "num_entries": 8526, "num_filter_entries": 8526, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765617137, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.768875) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 11283682 bytes
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.790478) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 49.7 rd, 49.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 9.8 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(20.6) write-amplify(10.3) OK, records in: 9051, records dropped: 525 output_compression: NoCompression
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.790525) EVENT_LOG_v1 {"time_micros": 1765617137790506, "job": 96, "event": "compaction_finished", "compaction_time_micros": 229469, "compaction_time_cpu_micros": 48788, "output_level": 6, "num_output_files": 1, "total_output_size": 11283682, "num_input_records": 9051, "num_output_records": 8526, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617137791538, "job": 96, "event": "table_file_deletion", "file_number": 157}
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617137793703, "job": 96, "event": "table_file_deletion", "file_number": 155}
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.538837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.793928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.793938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.793942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.793945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:12:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:12:17.793948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:12:17 np0005558241 nova_compute[248510]: 2025-12-13 09:12:17.904 248514 DEBUG nova.compute.manager [req-86dbae2b-e019-4a96-bbdb-430335b44f4d req-b57e1029-202d-4c04-94f2-0fedf823fb3f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-deleted-445f61a4-1352-4145-8b2b-f9f87f5435a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:12:17 np0005558241 nova_compute[248510]: 2025-12-13 09:12:17.905 248514 INFO nova.compute.manager [req-86dbae2b-e019-4a96-bbdb-430335b44f4d req-b57e1029-202d-4c04-94f2-0fedf823fb3f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Neutron deleted interface 445f61a4-1352-4145-8b2b-f9f87f5435a7; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 04:12:17 np0005558241 nova_compute[248510]: 2025-12-13 09:12:17.905 248514 DEBUG nova.network.neutron [req-86dbae2b-e019-4a96-bbdb-430335b44f4d req-b57e1029-202d-4c04-94f2-0fedf823fb3f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updating instance_info_cache with network_info: [{"id": "b6bde1c8-9710-4155-9619-7040cbbca806", "address": "fa:16:3e:54:9b:72", "network": {"id": "a7065d5a-edce-4470-a56d-ab529d56aa3c", "bridge": "br-int", "label": "tempest-network-smoke--37709476", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6bde1c8-97", "ovs_interfaceid": "b6bde1c8-9710-4155-9619-7040cbbca806", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:12:17 np0005558241 nova_compute[248510]: 2025-12-13 09:12:17.950 248514 DEBUG nova.compute.manager [req-86dbae2b-e019-4a96-bbdb-430335b44f4d req-b57e1029-202d-4c04-94f2-0fedf823fb3f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Detach interface failed, port_id=445f61a4-1352-4145-8b2b-f9f87f5435a7, reason: Instance 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec 13 04:12:18 np0005558241 nova_compute[248510]: 2025-12-13 09:12:18.541 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617123.539736, 0d2b3fb1-4256-4dd2-bbba-188212ddd10e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:12:18 np0005558241 nova_compute[248510]: 2025-12-13 09:12:18.542 248514 INFO nova.compute.manager [-] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:12:18 np0005558241 nova_compute[248510]: 2025-12-13 09:12:18.565 248514 DEBUG nova.compute.manager [None req-83d0274f-295e-4818-ada7-f2a9d3d040bd - - - - - -] [instance: 0d2b3fb1-4256-4dd2-bbba-188212ddd10e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:12:18 np0005558241 nova_compute[248510]: 2025-12-13 09:12:18.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:12:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3336: 321 pgs: 321 active+clean; 94 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.3 KiB/s wr, 38 op/s
Dec 13 04:12:18 np0005558241 nova_compute[248510]: 2025-12-13 09:12:18.800 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:19 np0005558241 nova_compute[248510]: 2025-12-13 09:12:19.069 248514 DEBUG nova.network.neutron [-] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:12:19 np0005558241 nova_compute[248510]: 2025-12-13 09:12:19.097 248514 INFO nova.compute.manager [-] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Took 2.34 seconds to deallocate network for instance.#033[00m
Dec 13 04:12:19 np0005558241 nova_compute[248510]: 2025-12-13 09:12:19.153 248514 DEBUG oslo_concurrency.lockutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:12:19 np0005558241 nova_compute[248510]: 2025-12-13 09:12:19.154 248514 DEBUG oslo_concurrency.lockutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:12:19 np0005558241 nova_compute[248510]: 2025-12-13 09:12:19.237 248514 DEBUG oslo_concurrency.processutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:12:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:12:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.2 total, 600.0 interval#012Cumulative writes: 35K writes, 143K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.83 writes per sync, written: 0.14 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2470 writes, 10K keys, 2470 commit groups, 1.0 writes per commit group, ingest: 13.34 MB, 0.02 MB/s#012Interval WAL: 2470 writes, 940 syncs, 2.63 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 mem
Dec 13 04:12:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:12:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:12:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:12:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:12:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:12:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:12:19 np0005558241 nova_compute[248510]: 2025-12-13 09:12:19.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:12:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:12:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:12:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:12:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:12:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:12:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:12:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:12:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1210104534' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:12:19 np0005558241 nova_compute[248510]: 2025-12-13 09:12:19.897 248514 DEBUG oslo_concurrency.processutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.660s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:12:19 np0005558241 nova_compute[248510]: 2025-12-13 09:12:19.905 248514 DEBUG nova.compute.provider_tree [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:12:19 np0005558241 nova_compute[248510]: 2025-12-13 09:12:19.930 248514 DEBUG nova.scheduler.client.report [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:12:19 np0005558241 nova_compute[248510]: 2025-12-13 09:12:19.957 248514 DEBUG oslo_concurrency.lockutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:12:19 np0005558241 nova_compute[248510]: 2025-12-13 09:12:19.990 248514 INFO nova.scheduler.client.report [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8#033[00m
Dec 13 04:12:19 np0005558241 nova_compute[248510]: 2025-12-13 09:12:19.998 248514 DEBUG nova.compute.manager [req-2bd4862b-6c59-4eb6-ac26-d5f7440eb2ed req-a4d83b53-addf-471a-b4bd-a784d4304014 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Received event network-vif-deleted-b6bde1c8-9710-4155-9619-7040cbbca806 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:12:20 np0005558241 nova_compute[248510]: 2025-12-13 09:12:20.066 248514 DEBUG oslo_concurrency.lockutils [None req-0987db3c-16ef-4e72-985d-8205b86ea0ae 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:12:20 np0005558241 podman[394772]: 2025-12-13 09:12:20.274898886 +0000 UTC m=+0.050582489 container create b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hofstadter, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 04:12:20 np0005558241 systemd[1]: Started libpod-conmon-b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71.scope.
Dec 13 04:12:20 np0005558241 podman[394772]: 2025-12-13 09:12:20.251303365 +0000 UTC m=+0.026987038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:12:20 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:12:20 np0005558241 podman[394772]: 2025-12-13 09:12:20.387195373 +0000 UTC m=+0.162879086 container init b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hofstadter, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 04:12:20 np0005558241 podman[394772]: 2025-12-13 09:12:20.39746076 +0000 UTC m=+0.173144353 container start b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 04:12:20 np0005558241 podman[394772]: 2025-12-13 09:12:20.402531758 +0000 UTC m=+0.178215401 container attach b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hofstadter, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:12:20 np0005558241 elegant_hofstadter[394789]: 167 167
Dec 13 04:12:20 np0005558241 systemd[1]: libpod-b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71.scope: Deactivated successfully.
Dec 13 04:12:20 np0005558241 conmon[394789]: conmon b333856ad8653ff0899b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71.scope/container/memory.events
Dec 13 04:12:20 np0005558241 podman[394772]: 2025-12-13 09:12:20.408254101 +0000 UTC m=+0.183937734 container died b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hofstadter, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:12:20 np0005558241 systemd[1]: var-lib-containers-storage-overlay-05745bca6dd78adba63758da1c048b1176c90dabadea03b75a42adcb058abba0-merged.mount: Deactivated successfully.
Dec 13 04:12:20 np0005558241 podman[394772]: 2025-12-13 09:12:20.463083076 +0000 UTC m=+0.238766679 container remove b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hofstadter, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:12:20 np0005558241 systemd[1]: libpod-conmon-b333856ad8653ff0899b7efc22486a9d000f698bfea7bc582d2b6a7a460c6e71.scope: Deactivated successfully.
Dec 13 04:12:20 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:12:20 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:12:20 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:12:20 np0005558241 podman[394813]: 2025-12-13 09:12:20.68972661 +0000 UTC m=+0.069073583 container create 7ea5435e82982f0d4da5976d3ffbc06f5f9ea8ca2767ae8cb7ce876d5f88355b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:12:20 np0005558241 systemd[1]: Started libpod-conmon-7ea5435e82982f0d4da5976d3ffbc06f5f9ea8ca2767ae8cb7ce876d5f88355b.scope.
Dec 13 04:12:20 np0005558241 podman[394813]: 2025-12-13 09:12:20.658927568 +0000 UTC m=+0.038274591 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:12:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3337: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 KiB/s wr, 43 op/s
Dec 13 04:12:20 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:12:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986bf0c42fade488771d2153ba16833702e136e2e9d62a7d58c255f686e6f2c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986bf0c42fade488771d2153ba16833702e136e2e9d62a7d58c255f686e6f2c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986bf0c42fade488771d2153ba16833702e136e2e9d62a7d58c255f686e6f2c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986bf0c42fade488771d2153ba16833702e136e2e9d62a7d58c255f686e6f2c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986bf0c42fade488771d2153ba16833702e136e2e9d62a7d58c255f686e6f2c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:20 np0005558241 podman[394813]: 2025-12-13 09:12:20.799292018 +0000 UTC m=+0.178639001 container init 7ea5435e82982f0d4da5976d3ffbc06f5f9ea8ca2767ae8cb7ce876d5f88355b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:12:20 np0005558241 podman[394813]: 2025-12-13 09:12:20.810985872 +0000 UTC m=+0.190332815 container start 7ea5435e82982f0d4da5976d3ffbc06f5f9ea8ca2767ae8cb7ce876d5f88355b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 04:12:20 np0005558241 podman[394813]: 2025-12-13 09:12:20.815205178 +0000 UTC m=+0.194552111 container attach 7ea5435e82982f0d4da5976d3ffbc06f5f9ea8ca2767ae8cb7ce876d5f88355b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:12:20 np0005558241 nova_compute[248510]: 2025-12-13 09:12:20.921 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:21 np0005558241 sweet_goldwasser[394828]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:12:21 np0005558241 sweet_goldwasser[394828]: --> All data devices are unavailable
Dec 13 04:12:21 np0005558241 systemd[1]: libpod-7ea5435e82982f0d4da5976d3ffbc06f5f9ea8ca2767ae8cb7ce876d5f88355b.scope: Deactivated successfully.
Dec 13 04:12:21 np0005558241 podman[394813]: 2025-12-13 09:12:21.394519348 +0000 UTC m=+0.773866301 container died 7ea5435e82982f0d4da5976d3ffbc06f5f9ea8ca2767ae8cb7ce876d5f88355b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4640145867863087e-05 of space, bias 1.0, pg target 0.004392043760358926 quantized to 32 (current 32)
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697318854645266 of space, bias 1.0, pg target 0.20091956563935798 quantized to 32 (current 32)
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:12:22 np0005558241 systemd[1]: var-lib-containers-storage-overlay-986bf0c42fade488771d2153ba16833702e136e2e9d62a7d58c255f686e6f2c6-merged.mount: Deactivated successfully.
Dec 13 04:12:22 np0005558241 podman[394813]: 2025-12-13 09:12:22.132138318 +0000 UTC m=+1.511485291 container remove 7ea5435e82982f0d4da5976d3ffbc06f5f9ea8ca2767ae8cb7ce876d5f88355b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goldwasser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:12:22 np0005558241 systemd[1]: libpod-conmon-7ea5435e82982f0d4da5976d3ffbc06f5f9ea8ca2767ae8cb7ce876d5f88355b.scope: Deactivated successfully.
Dec 13 04:12:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:22 np0005558241 podman[394924]: 2025-12-13 09:12:22.666770547 +0000 UTC m=+0.040944398 container create c387fa890878724897c74adb13d3e9ff74fb0eb19eb20d57c18cbce247025971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_franklin, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:12:22 np0005558241 systemd[1]: Started libpod-conmon-c387fa890878724897c74adb13d3e9ff74fb0eb19eb20d57c18cbce247025971.scope.
Dec 13 04:12:22 np0005558241 podman[394924]: 2025-12-13 09:12:22.6473477 +0000 UTC m=+0.021521551 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:12:22 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:12:22 np0005558241 podman[394924]: 2025-12-13 09:12:22.772309174 +0000 UTC m=+0.146483015 container init c387fa890878724897c74adb13d3e9ff74fb0eb19eb20d57c18cbce247025971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_franklin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:12:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3338: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 04:12:22 np0005558241 podman[394924]: 2025-12-13 09:12:22.780684124 +0000 UTC m=+0.154857965 container start c387fa890878724897c74adb13d3e9ff74fb0eb19eb20d57c18cbce247025971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_franklin, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 04:12:22 np0005558241 podman[394924]: 2025-12-13 09:12:22.785464104 +0000 UTC m=+0.159637915 container attach c387fa890878724897c74adb13d3e9ff74fb0eb19eb20d57c18cbce247025971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:12:22 np0005558241 strange_franklin[394940]: 167 167
Dec 13 04:12:22 np0005558241 systemd[1]: libpod-c387fa890878724897c74adb13d3e9ff74fb0eb19eb20d57c18cbce247025971.scope: Deactivated successfully.
Dec 13 04:12:22 np0005558241 podman[394924]: 2025-12-13 09:12:22.789132566 +0000 UTC m=+0.163306377 container died c387fa890878724897c74adb13d3e9ff74fb0eb19eb20d57c18cbce247025971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_franklin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:12:22 np0005558241 systemd[1]: var-lib-containers-storage-overlay-fc861060424efba21eca2bf2672320a3fa51ec5bb13f7e32b2c06f2e049cf11d-merged.mount: Deactivated successfully.
Dec 13 04:12:22 np0005558241 podman[394924]: 2025-12-13 09:12:22.833261263 +0000 UTC m=+0.207435074 container remove c387fa890878724897c74adb13d3e9ff74fb0eb19eb20d57c18cbce247025971 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_franklin, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:12:22 np0005558241 systemd[1]: libpod-conmon-c387fa890878724897c74adb13d3e9ff74fb0eb19eb20d57c18cbce247025971.scope: Deactivated successfully.
Dec 13 04:12:23 np0005558241 podman[394962]: 2025-12-13 09:12:23.036297374 +0000 UTC m=+0.046819475 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:12:23 np0005558241 podman[394962]: 2025-12-13 09:12:23.151156335 +0000 UTC m=+0.161678416 container create eb107c0ff0ceb4ebf7fffc6c7bf70d5f02504a012f54c83348636a5fd7aaab48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:12:23 np0005558241 nova_compute[248510]: 2025-12-13 09:12:23.176 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:23.176 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:12:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:23.178 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:12:23 np0005558241 systemd[1]: Started libpod-conmon-eb107c0ff0ceb4ebf7fffc6c7bf70d5f02504a012f54c83348636a5fd7aaab48.scope.
Dec 13 04:12:23 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:12:23 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5e8858449f2ca17f8bde6b8a8756793c37fb4eaa3d4beddd2b82007f87a03c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:23 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5e8858449f2ca17f8bde6b8a8756793c37fb4eaa3d4beddd2b82007f87a03c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:23 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5e8858449f2ca17f8bde6b8a8756793c37fb4eaa3d4beddd2b82007f87a03c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:23 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5e8858449f2ca17f8bde6b8a8756793c37fb4eaa3d4beddd2b82007f87a03c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:23 np0005558241 podman[394962]: 2025-12-13 09:12:23.255636465 +0000 UTC m=+0.266158596 container init eb107c0ff0ceb4ebf7fffc6c7bf70d5f02504a012f54c83348636a5fd7aaab48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:12:23 np0005558241 podman[394962]: 2025-12-13 09:12:23.26741395 +0000 UTC m=+0.277936031 container start eb107c0ff0ceb4ebf7fffc6c7bf70d5f02504a012f54c83348636a5fd7aaab48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jang, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:12:23 np0005558241 podman[394962]: 2025-12-13 09:12:23.272514658 +0000 UTC m=+0.283036799 container attach eb107c0ff0ceb4ebf7fffc6c7bf70d5f02504a012f54c83348636a5fd7aaab48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jang, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:12:23 np0005558241 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 04:12:23 np0005558241 musing_jang[394978]: {
Dec 13 04:12:23 np0005558241 musing_jang[394978]:    "0": [
Dec 13 04:12:23 np0005558241 musing_jang[394978]:        {
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "devices": [
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "/dev/loop3"
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            ],
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "lv_name": "ceph_lv0",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "lv_size": "21470642176",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "name": "ceph_lv0",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "tags": {
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.cluster_name": "ceph",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.crush_device_class": "",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.encrypted": "0",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.objectstore": "bluestore",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.osd_id": "0",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.type": "block",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.vdo": "0",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.with_tpm": "0"
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            },
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "type": "block",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "vg_name": "ceph_vg0"
Dec 13 04:12:23 np0005558241 musing_jang[394978]:        }
Dec 13 04:12:23 np0005558241 musing_jang[394978]:    ],
Dec 13 04:12:23 np0005558241 musing_jang[394978]:    "1": [
Dec 13 04:12:23 np0005558241 musing_jang[394978]:        {
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "devices": [
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "/dev/loop4"
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            ],
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "lv_name": "ceph_lv1",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "lv_size": "21470642176",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "name": "ceph_lv1",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "tags": {
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.cluster_name": "ceph",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.crush_device_class": "",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.encrypted": "0",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.objectstore": "bluestore",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.osd_id": "1",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.type": "block",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.vdo": "0",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.with_tpm": "0"
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            },
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "type": "block",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "vg_name": "ceph_vg1"
Dec 13 04:12:23 np0005558241 musing_jang[394978]:        }
Dec 13 04:12:23 np0005558241 musing_jang[394978]:    ],
Dec 13 04:12:23 np0005558241 musing_jang[394978]:    "2": [
Dec 13 04:12:23 np0005558241 musing_jang[394978]:        {
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "devices": [
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "/dev/loop5"
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            ],
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "lv_name": "ceph_lv2",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "lv_size": "21470642176",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "name": "ceph_lv2",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "tags": {
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.cluster_name": "ceph",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.crush_device_class": "",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.encrypted": "0",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.objectstore": "bluestore",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.osd_id": "2",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.type": "block",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.vdo": "0",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:                "ceph.with_tpm": "0"
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            },
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "type": "block",
Dec 13 04:12:23 np0005558241 musing_jang[394978]:            "vg_name": "ceph_vg2"
Dec 13 04:12:23 np0005558241 musing_jang[394978]:        }
Dec 13 04:12:23 np0005558241 musing_jang[394978]:    ]
Dec 13 04:12:23 np0005558241 musing_jang[394978]: }
Dec 13 04:12:23 np0005558241 systemd[1]: libpod-eb107c0ff0ceb4ebf7fffc6c7bf70d5f02504a012f54c83348636a5fd7aaab48.scope: Deactivated successfully.
Dec 13 04:12:23 np0005558241 podman[394962]: 2025-12-13 09:12:23.609775847 +0000 UTC m=+0.620297898 container died eb107c0ff0ceb4ebf7fffc6c7bf70d5f02504a012f54c83348636a5fd7aaab48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jang, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 04:12:23 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ec5e8858449f2ca17f8bde6b8a8756793c37fb4eaa3d4beddd2b82007f87a03c-merged.mount: Deactivated successfully.
Dec 13 04:12:23 np0005558241 podman[394962]: 2025-12-13 09:12:23.659251908 +0000 UTC m=+0.669773959 container remove eb107c0ff0ceb4ebf7fffc6c7bf70d5f02504a012f54c83348636a5fd7aaab48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_jang, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:12:23 np0005558241 systemd[1]: libpod-conmon-eb107c0ff0ceb4ebf7fffc6c7bf70d5f02504a012f54c83348636a5fd7aaab48.scope: Deactivated successfully.
Dec 13 04:12:23 np0005558241 nova_compute[248510]: 2025-12-13 09:12:23.802 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:24 np0005558241 podman[395064]: 2025-12-13 09:12:24.148851077 +0000 UTC m=+0.042111417 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:12:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3339: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 04:12:25 np0005558241 nova_compute[248510]: 2025-12-13 09:12:25.170 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:12:25 np0005558241 nova_compute[248510]: 2025-12-13 09:12:25.171 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:12:25 np0005558241 nova_compute[248510]: 2025-12-13 09:12:25.172 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:12:25 np0005558241 nova_compute[248510]: 2025-12-13 09:12:25.190 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:12:25 np0005558241 podman[395064]: 2025-12-13 09:12:25.413356001 +0000 UTC m=+1.306616351 container create cece95eb4f799f21d26a32933f5ab61617443e8602d92d2ef4f3f23724389a07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:12:25 np0005558241 nova_compute[248510]: 2025-12-13 09:12:25.440 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617130.4387507, 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:12:25 np0005558241 nova_compute[248510]: 2025-12-13 09:12:25.441 248514 INFO nova.compute.manager [-] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:12:25 np0005558241 nova_compute[248510]: 2025-12-13 09:12:25.746 248514 DEBUG nova.compute.manager [None req-05bbff23-3e4c-42b6-b80c-b95265d70dd0 - - - - - -] [instance: 8d7ab35e-cbd2-465d-9b3a-5ce6eec81bb8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:12:25 np0005558241 nova_compute[248510]: 2025-12-13 09:12:25.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:12:25 np0005558241 nova_compute[248510]: 2025-12-13 09:12:25.928 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:25 np0005558241 systemd[1]: Started libpod-conmon-cece95eb4f799f21d26a32933f5ab61617443e8602d92d2ef4f3f23724389a07.scope.
Dec 13 04:12:25 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:12:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:26.181 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:12:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3340: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Dec 13 04:12:27 np0005558241 podman[395064]: 2025-12-13 09:12:27.22987177 +0000 UTC m=+3.123132140 container init cece95eb4f799f21d26a32933f5ab61617443e8602d92d2ef4f3f23724389a07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keldysh, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:12:27 np0005558241 podman[395064]: 2025-12-13 09:12:27.241383459 +0000 UTC m=+3.134643799 container start cece95eb4f799f21d26a32933f5ab61617443e8602d92d2ef4f3f23724389a07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:12:27 np0005558241 serene_keldysh[395110]: 167 167
Dec 13 04:12:27 np0005558241 systemd[1]: libpod-cece95eb4f799f21d26a32933f5ab61617443e8602d92d2ef4f3f23724389a07.scope: Deactivated successfully.
Dec 13 04:12:27 np0005558241 podman[395064]: 2025-12-13 09:12:27.47509761 +0000 UTC m=+3.368357930 container attach cece95eb4f799f21d26a32933f5ab61617443e8602d92d2ef4f3f23724389a07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keldysh, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 04:12:27 np0005558241 podman[395064]: 2025-12-13 09:12:27.477819648 +0000 UTC m=+3.371079968 container died cece95eb4f799f21d26a32933f5ab61617443e8602d92d2ef4f3f23724389a07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keldysh, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 04:12:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:27 np0005558241 systemd[1]: var-lib-containers-storage-overlay-37beb4cb1dbebfda2390f74992059a35e3bfb0b6d9346c7845776239a8b77b16-merged.mount: Deactivated successfully.
Dec 13 04:12:28 np0005558241 podman[395064]: 2025-12-13 09:12:28.754503917 +0000 UTC m=+4.647764247 container remove cece95eb4f799f21d26a32933f5ab61617443e8602d92d2ef4f3f23724389a07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:12:28 np0005558241 nova_compute[248510]: 2025-12-13 09:12:28.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:12:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3341: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Dec 13 04:12:28 np0005558241 nova_compute[248510]: 2025-12-13 09:12:28.804 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:28 np0005558241 systemd[1]: libpod-conmon-cece95eb4f799f21d26a32933f5ab61617443e8602d92d2ef4f3f23724389a07.scope: Deactivated successfully.
Dec 13 04:12:28 np0005558241 nova_compute[248510]: 2025-12-13 09:12:28.832 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:12:28 np0005558241 nova_compute[248510]: 2025-12-13 09:12:28.833 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:12:28 np0005558241 nova_compute[248510]: 2025-12-13 09:12:28.834 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:12:28 np0005558241 nova_compute[248510]: 2025-12-13 09:12:28.834 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:12:28 np0005558241 nova_compute[248510]: 2025-12-13 09:12:28.835 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:12:28 np0005558241 podman[395079]: 2025-12-13 09:12:28.857726887 +0000 UTC m=+3.404615050 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true)
Dec 13 04:12:28 np0005558241 podman[395080]: 2025-12-13 09:12:28.877442421 +0000 UTC m=+3.413698327 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:12:28 np0005558241 podman[395078]: 2025-12-13 09:12:28.897269448 +0000 UTC m=+3.444133460 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 04:12:29 np0005558241 podman[395169]: 2025-12-13 09:12:28.981419399 +0000 UTC m=+0.038789184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:12:29 np0005558241 podman[395169]: 2025-12-13 09:12:29.108627129 +0000 UTC m=+0.165996914 container create d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:12:29 np0005558241 systemd[1]: Started libpod-conmon-d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05.scope.
Dec 13 04:12:29 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:12:29 np0005558241 nova_compute[248510]: 2025-12-13 09:12:29.204 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2baa47ef64a1bb761cc75ab8ee0284c77f8553dc9ac0edc202be465a9a7a152f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2baa47ef64a1bb761cc75ab8ee0284c77f8553dc9ac0edc202be465a9a7a152f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2baa47ef64a1bb761cc75ab8ee0284c77f8553dc9ac0edc202be465a9a7a152f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2baa47ef64a1bb761cc75ab8ee0284c77f8553dc9ac0edc202be465a9a7a152f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:29 np0005558241 podman[395169]: 2025-12-13 09:12:29.269811192 +0000 UTC m=+0.327180937 container init d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kapitsa, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:12:29 np0005558241 podman[395169]: 2025-12-13 09:12:29.279476464 +0000 UTC m=+0.336846199 container start d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 04:12:29 np0005558241 nova_compute[248510]: 2025-12-13 09:12:29.308 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:29 np0005558241 podman[395169]: 2025-12-13 09:12:29.326189676 +0000 UTC m=+0.383559421 container attach d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:12:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:12:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/832433437' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:12:29 np0005558241 nova_compute[248510]: 2025-12-13 09:12:29.430 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:12:29 np0005558241 nova_compute[248510]: 2025-12-13 09:12:29.608 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:12:29 np0005558241 nova_compute[248510]: 2025-12-13 09:12:29.610 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3477MB free_disk=59.98740301281214GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:12:29 np0005558241 nova_compute[248510]: 2025-12-13 09:12:29.610 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:12:29 np0005558241 nova_compute[248510]: 2025-12-13 09:12:29.611 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:12:29 np0005558241 nova_compute[248510]: 2025-12-13 09:12:29.693 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:12:29 np0005558241 nova_compute[248510]: 2025-12-13 09:12:29.695 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:12:29 np0005558241 nova_compute[248510]: 2025-12-13 09:12:29.711 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 13 04:12:29 np0005558241 nova_compute[248510]: 2025-12-13 09:12:29.736 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 13 04:12:29 np0005558241 nova_compute[248510]: 2025-12-13 09:12:29.737 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 04:12:29 np0005558241 nova_compute[248510]: 2025-12-13 09:12:29.769 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 13 04:12:29 np0005558241 nova_compute[248510]: 2025-12-13 09:12:29.797 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 13 04:12:29 np0005558241 nova_compute[248510]: 2025-12-13 09:12:29.817 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:12:29 np0005558241 lvm[395307]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:12:29 np0005558241 lvm[395306]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:12:29 np0005558241 lvm[395306]: VG ceph_vg0 finished
Dec 13 04:12:29 np0005558241 lvm[395307]: VG ceph_vg1 finished
Dec 13 04:12:29 np0005558241 lvm[395309]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:12:29 np0005558241 lvm[395309]: VG ceph_vg2 finished
Dec 13 04:12:30 np0005558241 amazing_kapitsa[395204]: {}
Dec 13 04:12:30 np0005558241 systemd[1]: libpod-d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05.scope: Deactivated successfully.
Dec 13 04:12:30 np0005558241 systemd[1]: libpod-d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05.scope: Consumed 1.323s CPU time.
Dec 13 04:12:30 np0005558241 podman[395169]: 2025-12-13 09:12:30.085386686 +0000 UTC m=+1.142756471 container died d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Dec 13 04:12:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2baa47ef64a1bb761cc75ab8ee0284c77f8553dc9ac0edc202be465a9a7a152f-merged.mount: Deactivated successfully.
Dec 13 04:12:30 np0005558241 podman[395169]: 2025-12-13 09:12:30.447236171 +0000 UTC m=+1.504605926 container remove d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kapitsa, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:12:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:12:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1461985336' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:12:30 np0005558241 nova_compute[248510]: 2025-12-13 09:12:30.503 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.686s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:12:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:12:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:12:30 np0005558241 nova_compute[248510]: 2025-12-13 09:12:30.513 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:12:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:12:30 np0005558241 nova_compute[248510]: 2025-12-13 09:12:30.535 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:12:30 np0005558241 systemd[1]: libpod-conmon-d81f61a5f87c7b0b2844d44438754e45adee45f26769b6267eca8f3020b15f05.scope: Deactivated successfully.
Dec 13 04:12:30 np0005558241 nova_compute[248510]: 2025-12-13 09:12:30.559 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:12:30 np0005558241 nova_compute[248510]: 2025-12-13 09:12:30.560 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:12:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:12:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3342: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 341 B/s wr, 16 op/s
Dec 13 04:12:30 np0005558241 nova_compute[248510]: 2025-12-13 09:12:30.932 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:31 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:12:31 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:12:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3343: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:12:33 np0005558241 nova_compute[248510]: 2025-12-13 09:12:33.560 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:12:33 np0005558241 nova_compute[248510]: 2025-12-13 09:12:33.560 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:12:33 np0005558241 nova_compute[248510]: 2025-12-13 09:12:33.561 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:12:33 np0005558241 nova_compute[248510]: 2025-12-13 09:12:33.808 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3344: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:12:35 np0005558241 nova_compute[248510]: 2025-12-13 09:12:35.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:12:35 np0005558241 nova_compute[248510]: 2025-12-13 09:12:35.935 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3345: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:12:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3346: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:12:38 np0005558241 nova_compute[248510]: 2025-12-13 09:12:38.809 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:39 np0005558241 nova_compute[248510]: 2025-12-13 09:12:39.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:12:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:12:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:12:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:12:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:12:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:12:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:12:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3347: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:12:40 np0005558241 nova_compute[248510]: 2025-12-13 09:12:40.937 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3348: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:12:43 np0005558241 nova_compute[248510]: 2025-12-13 09:12:43.812 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3349: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:12:45 np0005558241 nova_compute[248510]: 2025-12-13 09:12:45.938 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3350: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:12:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3351: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:12:48 np0005558241 nova_compute[248510]: 2025-12-13 09:12:48.813 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3352: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:12:50 np0005558241 nova_compute[248510]: 2025-12-13 09:12:50.940 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:52.585 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:6b:85 2001:db8:0:1:f816:3eff:feb7:6b85 2001:db8::f816:3eff:feb7:6b85'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feb7:6b85/64 2001:db8::f816:3eff:feb7:6b85/64', 'neutron:device_id': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d42d98d-b09c-47fa-98dd-69f99f24cca5, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a155cc1b-19d3-4430-9d83-b19e30ef1a95) old=Port_Binding(mac=['fa:16:3e:b7:6b:85 2001:db8::f816:3eff:feb7:6b85'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb7:6b85/64', 'neutron:device_id': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:12:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:52.586 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a155cc1b-19d3-4430-9d83-b19e30ef1a95 in datapath ff80f4de-0e76-47f2-b06e-6d3900b63130 updated#033[00m
Dec 13 04:12:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:52.587 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ff80f4de-0e76-47f2-b06e-6d3900b63130, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:12:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:52.588 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4d0eb68a-6f42-40ee-b6d8-91b8a16020f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:12:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3353: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:12:53 np0005558241 nova_compute[248510]: 2025-12-13 09:12:53.816 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3354: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:12:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:55.450 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:12:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:55.451 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:12:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:12:55.451 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:12:55 np0005558241 nova_compute[248510]: 2025-12-13 09:12:55.942 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3355: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:12:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3356: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:12:58 np0005558241 nova_compute[248510]: 2025-12-13 09:12:58.869 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:12:58 np0005558241 podman[395353]: 2025-12-13 09:12:58.999331391 +0000 UTC m=+0.073522575 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 04:12:59 np0005558241 podman[395351]: 2025-12-13 09:12:59.010421649 +0000 UTC m=+0.086375117 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 04:12:59 np0005558241 podman[395352]: 2025-12-13 09:12:59.049447088 +0000 UTC m=+0.126469593 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:13:00 np0005558241 nova_compute[248510]: 2025-12-13 09:13:00.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:13:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3357: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:13:00 np0005558241 nova_compute[248510]: 2025-12-13 09:13:00.944 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:01 np0005558241 nova_compute[248510]: 2025-12-13 09:13:01.947 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:01 np0005558241 nova_compute[248510]: 2025-12-13 09:13:01.947 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:01 np0005558241 nova_compute[248510]: 2025-12-13 09:13:01.966 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:13:02 np0005558241 nova_compute[248510]: 2025-12-13 09:13:02.058 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:02 np0005558241 nova_compute[248510]: 2025-12-13 09:13:02.060 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:02 np0005558241 nova_compute[248510]: 2025-12-13 09:13:02.070 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:13:02 np0005558241 nova_compute[248510]: 2025-12-13 09:13:02.070 248514 INFO nova.compute.claims [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:13:02 np0005558241 nova_compute[248510]: 2025-12-13 09:13:02.176 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:13:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:13:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3104872258' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:13:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3358: 321 pgs: 321 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:13:02 np0005558241 nova_compute[248510]: 2025-12-13 09:13:02.797 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:13:02 np0005558241 nova_compute[248510]: 2025-12-13 09:13:02.805 248514 DEBUG nova.compute.provider_tree [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:13:02 np0005558241 nova_compute[248510]: 2025-12-13 09:13:02.939 248514 DEBUG nova.scheduler.client.report [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:13:02 np0005558241 nova_compute[248510]: 2025-12-13 09:13:02.968 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:02 np0005558241 nova_compute[248510]: 2025-12-13 09:13:02.969 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:13:03 np0005558241 nova_compute[248510]: 2025-12-13 09:13:03.114 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:13:03 np0005558241 nova_compute[248510]: 2025-12-13 09:13:03.115 248514 DEBUG nova.network.neutron [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:13:03 np0005558241 nova_compute[248510]: 2025-12-13 09:13:03.141 248514 INFO nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:13:03 np0005558241 nova_compute[248510]: 2025-12-13 09:13:03.480 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:13:03 np0005558241 nova_compute[248510]: 2025-12-13 09:13:03.588 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:13:03 np0005558241 nova_compute[248510]: 2025-12-13 09:13:03.589 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:13:03 np0005558241 nova_compute[248510]: 2025-12-13 09:13:03.590 248514 INFO nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Creating image(s)#033[00m
Dec 13 04:13:03 np0005558241 nova_compute[248510]: 2025-12-13 09:13:03.619 248514 DEBUG nova.storage.rbd_utils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:13:03 np0005558241 nova_compute[248510]: 2025-12-13 09:13:03.648 248514 DEBUG nova.storage.rbd_utils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:13:03 np0005558241 nova_compute[248510]: 2025-12-13 09:13:03.674 248514 DEBUG nova.storage.rbd_utils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:13:03 np0005558241 nova_compute[248510]: 2025-12-13 09:13:03.678 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:13:03 np0005558241 nova_compute[248510]: 2025-12-13 09:13:03.776 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:13:03 np0005558241 nova_compute[248510]: 2025-12-13 09:13:03.777 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:03 np0005558241 nova_compute[248510]: 2025-12-13 09:13:03.778 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:03 np0005558241 nova_compute[248510]: 2025-12-13 09:13:03.778 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:03 np0005558241 nova_compute[248510]: 2025-12-13 09:13:03.806 248514 DEBUG nova.storage.rbd_utils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:13:03 np0005558241 nova_compute[248510]: 2025-12-13 09:13:03.813 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:13:03 np0005558241 nova_compute[248510]: 2025-12-13 09:13:03.856 248514 DEBUG nova.policy [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:13:03 np0005558241 nova_compute[248510]: 2025-12-13 09:13:03.872 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:04 np0005558241 nova_compute[248510]: 2025-12-13 09:13:04.141 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:13:04 np0005558241 nova_compute[248510]: 2025-12-13 09:13:04.217 248514 DEBUG nova.storage.rbd_utils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:13:04 np0005558241 nova_compute[248510]: 2025-12-13 09:13:04.315 248514 DEBUG nova.objects.instance [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 707e75d9-f6d2-413d-a727-c3ecbfea90c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:13:04 np0005558241 nova_compute[248510]: 2025-12-13 09:13:04.490 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:13:04 np0005558241 nova_compute[248510]: 2025-12-13 09:13:04.491 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Ensure instance console log exists: /var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:13:04 np0005558241 nova_compute[248510]: 2025-12-13 09:13:04.492 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:04 np0005558241 nova_compute[248510]: 2025-12-13 09:13:04.492 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:04 np0005558241 nova_compute[248510]: 2025-12-13 09:13:04.492 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3359: 321 pgs: 321 active+clean; 76 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Dec 13 04:13:05 np0005558241 nova_compute[248510]: 2025-12-13 09:13:05.946 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:06 np0005558241 nova_compute[248510]: 2025-12-13 09:13:06.316 248514 DEBUG nova.network.neutron [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Successfully created port: 37c91936-0589-4bd3-9413-3af7db3e8feb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:13:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3360: 321 pgs: 321 active+clean; 76 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Dec 13 04:13:07 np0005558241 nova_compute[248510]: 2025-12-13 09:13:07.395 248514 DEBUG nova.network.neutron [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Successfully created port: 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:13:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3361: 321 pgs: 321 active+clean; 88 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Dec 13 04:13:08 np0005558241 nova_compute[248510]: 2025-12-13 09:13:08.874 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:13:09
Dec 13 04:13:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:13:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:13:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'images', 'cephfs.cephfs.data', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'default.rgw.meta', '.rgw.root']
Dec 13 04:13:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:13:09 np0005558241 nova_compute[248510]: 2025-12-13 09:13:09.922 248514 DEBUG nova.network.neutron [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Successfully updated port: 37c91936-0589-4bd3-9413-3af7db3e8feb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:13:10 np0005558241 nova_compute[248510]: 2025-12-13 09:13:10.023 248514 DEBUG nova.compute.manager [req-08e3e53b-1337-48d3-95c8-3614f1b34679 req-d860811b-3084-401e-8bc8-a1131a15a18c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-changed-37c91936-0589-4bd3-9413-3af7db3e8feb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:13:10 np0005558241 nova_compute[248510]: 2025-12-13 09:13:10.024 248514 DEBUG nova.compute.manager [req-08e3e53b-1337-48d3-95c8-3614f1b34679 req-d860811b-3084-401e-8bc8-a1131a15a18c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Refreshing instance network info cache due to event network-changed-37c91936-0589-4bd3-9413-3af7db3e8feb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:13:10 np0005558241 nova_compute[248510]: 2025-12-13 09:13:10.025 248514 DEBUG oslo_concurrency.lockutils [req-08e3e53b-1337-48d3-95c8-3614f1b34679 req-d860811b-3084-401e-8bc8-a1131a15a18c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:13:10 np0005558241 nova_compute[248510]: 2025-12-13 09:13:10.025 248514 DEBUG oslo_concurrency.lockutils [req-08e3e53b-1337-48d3-95c8-3614f1b34679 req-d860811b-3084-401e-8bc8-a1131a15a18c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:13:10 np0005558241 nova_compute[248510]: 2025-12-13 09:13:10.025 248514 DEBUG nova.network.neutron [req-08e3e53b-1337-48d3-95c8-3614f1b34679 req-d860811b-3084-401e-8bc8-a1131a15a18c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Refreshing network info cache for port 37c91936-0589-4bd3-9413-3af7db3e8feb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:13:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:13:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:13:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:13:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:13:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:13:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:13:10 np0005558241 nova_compute[248510]: 2025-12-13 09:13:10.254 248514 DEBUG nova.network.neutron [req-08e3e53b-1337-48d3-95c8-3614f1b34679 req-d860811b-3084-401e-8bc8-a1131a15a18c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:13:10 np0005558241 nova_compute[248510]: 2025-12-13 09:13:10.708 248514 DEBUG nova.network.neutron [req-08e3e53b-1337-48d3-95c8-3614f1b34679 req-d860811b-3084-401e-8bc8-a1131a15a18c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:13:10 np0005558241 nova_compute[248510]: 2025-12-13 09:13:10.728 248514 DEBUG oslo_concurrency.lockutils [req-08e3e53b-1337-48d3-95c8-3614f1b34679 req-d860811b-3084-401e-8bc8-a1131a15a18c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:13:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3362: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:13:10 np0005558241 nova_compute[248510]: 2025-12-13 09:13:10.948 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:13:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:13:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:13:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:13:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:13:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:13:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:13:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:13:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:13:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:13:11 np0005558241 nova_compute[248510]: 2025-12-13 09:13:11.530 248514 DEBUG nova.network.neutron [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Successfully updated port: 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:13:11 np0005558241 nova_compute[248510]: 2025-12-13 09:13:11.555 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:13:11 np0005558241 nova_compute[248510]: 2025-12-13 09:13:11.556 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:13:11 np0005558241 nova_compute[248510]: 2025-12-13 09:13:11.556 248514 DEBUG nova.network.neutron [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:13:11 np0005558241 nova_compute[248510]: 2025-12-13 09:13:11.767 248514 DEBUG nova.network.neutron [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:13:12 np0005558241 nova_compute[248510]: 2025-12-13 09:13:12.117 248514 DEBUG nova.compute.manager [req-ddde1f14-e481-442b-bc11-f7c805bba891 req-7eadcf28-be55-49e0-abd9-87e649f978fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-changed-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:13:12 np0005558241 nova_compute[248510]: 2025-12-13 09:13:12.117 248514 DEBUG nova.compute.manager [req-ddde1f14-e481-442b-bc11-f7c805bba891 req-7eadcf28-be55-49e0-abd9-87e649f978fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Refreshing instance network info cache due to event network-changed-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:13:12 np0005558241 nova_compute[248510]: 2025-12-13 09:13:12.117 248514 DEBUG oslo_concurrency.lockutils [req-ddde1f14-e481-442b-bc11-f7c805bba891 req-7eadcf28-be55-49e0-abd9-87e649f978fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:13:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3363: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:13:13 np0005558241 nova_compute[248510]: 2025-12-13 09:13:13.879 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3364: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:13:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:13:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3713716095' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:13:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3713716095' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:15 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:15Z|01515|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Dec 13 04:13:15 np0005558241 nova_compute[248510]: 2025-12-13 09:13:15.944 248514 DEBUG nova.network.neutron [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updating instance_info_cache with network_info: [{"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": 
"2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:13:15 np0005558241 nova_compute[248510]: 2025-12-13 09:13:15.950 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:15 np0005558241 nova_compute[248510]: 2025-12-13 09:13:15.983 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:13:15 np0005558241 nova_compute[248510]: 2025-12-13 09:13:15.984 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Instance network_info: |[{"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:13:15 np0005558241 nova_compute[248510]: 2025-12-13 09:13:15.984 248514 DEBUG oslo_concurrency.lockutils [req-ddde1f14-e481-442b-bc11-f7c805bba891 req-7eadcf28-be55-49e0-abd9-87e649f978fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:13:15 np0005558241 nova_compute[248510]: 2025-12-13 09:13:15.984 248514 DEBUG nova.network.neutron [req-ddde1f14-e481-442b-bc11-f7c805bba891 req-7eadcf28-be55-49e0-abd9-87e649f978fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Refreshing network info cache for port 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:13:15 np0005558241 nova_compute[248510]: 2025-12-13 09:13:15.989 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Start _get_guest_xml network_info=[{"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:13:15 np0005558241 nova_compute[248510]: 2025-12-13 09:13:15.996 248514 WARNING nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.011 248514 DEBUG nova.virt.libvirt.host [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.013 248514 DEBUG nova.virt.libvirt.host [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.019 248514 DEBUG nova.virt.libvirt.host [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.020 248514 DEBUG nova.virt.libvirt.host [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.021 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.021 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.022 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.022 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.022 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.023 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.023 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.023 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.023 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.024 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.024 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.024 248514 DEBUG nova.virt.hardware [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.028 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:13:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:13:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3352414849' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.610 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.651 248514 DEBUG nova.storage.rbd_utils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:13:16 np0005558241 nova_compute[248510]: 2025-12-13 09:13:16.657 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:13:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3365: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 395 KiB/s wr, 2 op/s
Dec 13 04:13:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:13:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1271516082' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.227 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.229 248514 DEBUG nova.virt.libvirt.vif [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:13:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-132103940',display_name='tempest-TestGettingAddress-server-132103940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-132103940',id=143,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-eqjt2kv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:13:03Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=707e75d9-f6d2-413d-a727-c3ecbfea90c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.229 248514 DEBUG nova.network.os_vif_util [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.230 248514 DEBUG nova.network.os_vif_util [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:6c:85,bridge_name='br-int',has_traffic_filtering=True,id=37c91936-0589-4bd3-9413-3af7db3e8feb,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37c91936-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.231 248514 DEBUG nova.virt.libvirt.vif [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:13:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-132103940',display_name='tempest-TestGettingAddress-server-132103940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-132103940',id=143,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-eqjt2kv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:13:03Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=707e75d9-f6d2-413d-a727-c3ecbfea90c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.232 248514 DEBUG nova.network.os_vif_util [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.232 248514 DEBUG nova.network.os_vif_util [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:da:84,bridge_name='br-int',has_traffic_filtering=True,id=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9538b93c-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.234 248514 DEBUG nova.objects.instance [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 707e75d9-f6d2-413d-a727-c3ecbfea90c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.259 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:13:17 np0005558241 nova_compute[248510]:  <uuid>707e75d9-f6d2-413d-a727-c3ecbfea90c1</uuid>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:  <name>instance-0000008f</name>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestGettingAddress-server-132103940</nova:name>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:13:15</nova:creationTime>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:13:17 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:        <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:        <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:        <nova:port uuid="37c91936-0589-4bd3-9413-3af7db3e8feb">
Dec 13 04:13:17 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:        <nova:port uuid="9538b93c-6b21-4b5f-b6d3-89fd1b840f5d">
Dec 13 04:13:17 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe39:da84" ipVersion="6"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe39:da84" ipVersion="6"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <entry name="serial">707e75d9-f6d2-413d-a727-c3ecbfea90c1</entry>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <entry name="uuid">707e75d9-f6d2-413d-a727-c3ecbfea90c1</entry>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk">
Dec 13 04:13:17 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:13:17 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk.config">
Dec 13 04:13:17 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:13:17 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:77:6c:85"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <target dev="tap37c91936-05"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:39:da:84"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <target dev="tap9538b93c-6b"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1/console.log" append="off"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:13:17 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:13:17 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:13:17 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:13:17 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.261 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Preparing to wait for external event network-vif-plugged-37c91936-0589-4bd3-9413-3af7db3e8feb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.261 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.261 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.262 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.262 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Preparing to wait for external event network-vif-plugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.262 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.263 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.263 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.264 248514 DEBUG nova.virt.libvirt.vif [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:13:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-132103940',display_name='tempest-TestGettingAddress-server-132103940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-132103940',id=143,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-eqjt2kv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:13:03Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=707e75d9-f6d2-413d-a727-c3ecbfea90c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.264 248514 DEBUG nova.network.os_vif_util [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.265 248514 DEBUG nova.network.os_vif_util [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:6c:85,bridge_name='br-int',has_traffic_filtering=True,id=37c91936-0589-4bd3-9413-3af7db3e8feb,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37c91936-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.265 248514 DEBUG os_vif [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:6c:85,bridge_name='br-int',has_traffic_filtering=True,id=37c91936-0589-4bd3-9413-3af7db3e8feb,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37c91936-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.266 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.266 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.267 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.270 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.270 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap37c91936-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.271 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap37c91936-05, col_values=(('external_ids', {'iface-id': '37c91936-0589-4bd3-9413-3af7db3e8feb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:6c:85', 'vm-uuid': '707e75d9-f6d2-413d-a727-c3ecbfea90c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.273 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:17 np0005558241 NetworkManager[50376]: <info>  [1765617197.2748] manager: (tap37c91936-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/626)
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.275 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.284 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.286 248514 INFO os_vif [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:6c:85,bridge_name='br-int',has_traffic_filtering=True,id=37c91936-0589-4bd3-9413-3af7db3e8feb,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37c91936-05')#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.287 248514 DEBUG nova.virt.libvirt.vif [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:13:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-132103940',display_name='tempest-TestGettingAddress-server-132103940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-132103940',id=143,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-eqjt2kv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:13:03Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=707e75d9-f6d2-413d-a727-c3ecbfea90c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.288 248514 DEBUG nova.network.os_vif_util [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.289 248514 DEBUG nova.network.os_vif_util [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:da:84,bridge_name='br-int',has_traffic_filtering=True,id=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9538b93c-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.290 248514 DEBUG os_vif [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:da:84,bridge_name='br-int',has_traffic_filtering=True,id=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9538b93c-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.291 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.292 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.292 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.295 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.296 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9538b93c-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.297 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9538b93c-6b, col_values=(('external_ids', {'iface-id': '9538b93c-6b21-4b5f-b6d3-89fd1b840f5d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:da:84', 'vm-uuid': '707e75d9-f6d2-413d-a727-c3ecbfea90c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.299 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:17 np0005558241 NetworkManager[50376]: <info>  [1765617197.3004] manager: (tap9538b93c-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/627)
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.302 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.307 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.308 248514 INFO os_vif [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:da:84,bridge_name='br-int',has_traffic_filtering=True,id=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9538b93c-6b')#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.370 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.370 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.371 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:77:6c:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.371 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:39:da:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.372 248514 INFO nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Using config drive#033[00m
Dec 13 04:13:17 np0005558241 nova_compute[248510]: 2025-12-13 09:13:17.397 248514 DEBUG nova.storage.rbd_utils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:13:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:18 np0005558241 nova_compute[248510]: 2025-12-13 09:13:18.102 248514 INFO nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Creating config drive at /var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1/disk.config#033[00m
Dec 13 04:13:18 np0005558241 nova_compute[248510]: 2025-12-13 09:13:18.107 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpui2vk4ei execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:13:18 np0005558241 nova_compute[248510]: 2025-12-13 09:13:18.282 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpui2vk4ei" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:13:18 np0005558241 nova_compute[248510]: 2025-12-13 09:13:18.313 248514 DEBUG nova.storage.rbd_utils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:13:18 np0005558241 nova_compute[248510]: 2025-12-13 09:13:18.317 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1/disk.config 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:13:18 np0005558241 nova_compute[248510]: 2025-12-13 09:13:18.478 248514 DEBUG oslo_concurrency.processutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1/disk.config 707e75d9-f6d2-413d-a727-c3ecbfea90c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:13:18 np0005558241 nova_compute[248510]: 2025-12-13 09:13:18.479 248514 INFO nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Deleting local config drive /var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1/disk.config because it was imported into RBD.#033[00m
Dec 13 04:13:18 np0005558241 NetworkManager[50376]: <info>  [1765617198.5564] manager: (tap37c91936-05): new Tun device (/org/freedesktop/NetworkManager/Devices/628)
Dec 13 04:13:18 np0005558241 kernel: tap37c91936-05: entered promiscuous mode
Dec 13 04:13:18 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:18Z|01516|binding|INFO|Claiming lport 37c91936-0589-4bd3-9413-3af7db3e8feb for this chassis.
Dec 13 04:13:18 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:18Z|01517|binding|INFO|37c91936-0589-4bd3-9413-3af7db3e8feb: Claiming fa:16:3e:77:6c:85 10.100.0.11
Dec 13 04:13:18 np0005558241 nova_compute[248510]: 2025-12-13 09:13:18.560 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:18 np0005558241 nova_compute[248510]: 2025-12-13 09:13:18.567 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:18 np0005558241 kernel: tap9538b93c-6b: entered promiscuous mode
Dec 13 04:13:18 np0005558241 NetworkManager[50376]: <info>  [1765617198.5745] manager: (tap9538b93c-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/629)
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.581 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:6c:85 10.100.0.11'], port_security=['fa:16:3e:77:6c:85 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '707e75d9-f6d2-413d-a727-c3ecbfea90c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a658b7a7-58f0-4691-acef-c04fab4ff88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9e8c352-1166-4841-b5aa-79bfa0acca5d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=37c91936-0589-4bd3-9413-3af7db3e8feb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.582 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 37c91936-0589-4bd3-9413-3af7db3e8feb in datapath ef1a3009-9f3e-4c4a-9ee4-04bec3653434 bound to our chassis#033[00m
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.584 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef1a3009-9f3e-4c4a-9ee4-04bec3653434#033[00m
Dec 13 04:13:18 np0005558241 systemd-udevd[395743]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:13:18 np0005558241 systemd-udevd[395742]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.598 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0f2b6ab0-0a96-4b5e-b73e-bb75b28ebcc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.600 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapef1a3009-91 in ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.602 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapef1a3009-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.603 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5748dc73-435f-4ee8-bf88-f46bce7dc4fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.603 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8ecdf364-0254-4f63-9cc6-25fbc3180f0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:18 np0005558241 NetworkManager[50376]: <info>  [1765617198.6134] device (tap9538b93c-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:13:18 np0005558241 NetworkManager[50376]: <info>  [1765617198.6145] device (tap9538b93c-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:13:18 np0005558241 NetworkManager[50376]: <info>  [1765617198.6151] device (tap37c91936-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:13:18 np0005558241 NetworkManager[50376]: <info>  [1765617198.6159] device (tap37c91936-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:13:18 np0005558241 systemd-machined[210538]: New machine qemu-174-instance-0000008f.
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.623 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[10ed4b20-a038-413d-82d2-74411d4ec27d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.654 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2ae638-3d1c-47cd-8336-2a8fe19dbf58]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:18 np0005558241 systemd[1]: Started Virtual Machine qemu-174-instance-0000008f.
Dec 13 04:13:18 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:18Z|01518|binding|INFO|Claiming lport 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d for this chassis.
Dec 13 04:13:18 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:18Z|01519|binding|INFO|9538b93c-6b21-4b5f-b6d3-89fd1b840f5d: Claiming fa:16:3e:39:da:84 2001:db8:0:1:f816:3eff:fe39:da84 2001:db8::f816:3eff:fe39:da84
Dec 13 04:13:18 np0005558241 nova_compute[248510]: 2025-12-13 09:13:18.666 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:18 np0005558241 nova_compute[248510]: 2025-12-13 09:13:18.669 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:18 np0005558241 nova_compute[248510]: 2025-12-13 09:13:18.677 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:18 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:18Z|01520|binding|INFO|Setting lport 37c91936-0589-4bd3-9413-3af7db3e8feb ovn-installed in OVS
Dec 13 04:13:18 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:18Z|01521|binding|INFO|Setting lport 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d ovn-installed in OVS
Dec 13 04:13:18 np0005558241 nova_compute[248510]: 2025-12-13 09:13:18.689 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.703 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[82bd100d-4996-4fdf-b78d-c3cf55c7b8be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:18 np0005558241 NetworkManager[50376]: <info>  [1765617198.7122] manager: (tapef1a3009-90): new Veth device (/org/freedesktop/NetworkManager/Devices/630)
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.712 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[57bdad6b-8220-4201-ba4e-ea1985c51c62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.755 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b80a570a-e672-43b4-8484-bfa7e0770ba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.760 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[83f172e2-024c-4ef8-95db-2a6396c27e40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:18 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:18Z|01522|binding|INFO|Setting lport 37c91936-0589-4bd3-9413-3af7db3e8feb up in Southbound
Dec 13 04:13:18 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:18Z|01523|binding|INFO|Setting lport 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d up in Southbound
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.772 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:da:84 2001:db8:0:1:f816:3eff:fe39:da84 2001:db8::f816:3eff:fe39:da84'], port_security=['fa:16:3e:39:da:84 2001:db8:0:1:f816:3eff:fe39:da84 2001:db8::f816:3eff:fe39:da84'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe39:da84/64 2001:db8::f816:3eff:fe39:da84/64', 'neutron:device_id': '707e75d9-f6d2-413d-a727-c3ecbfea90c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a658b7a7-58f0-4691-acef-c04fab4ff88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d42d98d-b09c-47fa-98dd-69f99f24cca5, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:13:18 np0005558241 NetworkManager[50376]: <info>  [1765617198.7884] device (tapef1a3009-90): carrier: link connected
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.796 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[936e3dea-b394-45f7-bbf1-8da6c457ed2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3366: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 395 KiB/s wr, 2 op/s
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.821 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[217fed17-4ac5-4b2d-9fe1-9948b78d28ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef1a3009-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:51:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 436], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977600, 'reachable_time': 27921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395779, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.837 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6391f351-1aef-4f0d-94d1-863ea72b9813]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec4:5195'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977600, 'tstamp': 977600}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395780, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:18 np0005558241 nova_compute[248510]: 2025-12-13 09:13:18.836 248514 DEBUG nova.network.neutron [req-ddde1f14-e481-442b-bc11-f7c805bba891 req-7eadcf28-be55-49e0-abd9-87e649f978fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updated VIF entry in instance network info cache for port 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:13:18 np0005558241 nova_compute[248510]: 2025-12-13 09:13:18.836 248514 DEBUG nova.network.neutron [req-ddde1f14-e481-442b-bc11-f7c805bba891 req-7eadcf28-be55-49e0-abd9-87e649f978fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updating instance_info_cache with network_info: [{"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.859 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[114b6d44-5150-4548-9728-a62455b32eb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef1a3009-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:51:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 436], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977600, 'reachable_time': 27921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 395781, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:18 np0005558241 nova_compute[248510]: 2025-12-13 09:13:18.882 248514 DEBUG oslo_concurrency.lockutils [req-ddde1f14-e481-442b-bc11-f7c805bba891 req-7eadcf28-be55-49e0-abd9-87e649f978fc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:13:18 np0005558241 nova_compute[248510]: 2025-12-13 09:13:18.883 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.891 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[35468b07-ac0c-40ac-a888-e049db1bcd5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.976 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1df77a08-13d2-4421-9817-0048e1420ad6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.978 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef1a3009-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.978 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.980 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef1a3009-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:18 np0005558241 kernel: tapef1a3009-90: entered promiscuous mode
Dec 13 04:13:18 np0005558241 NetworkManager[50376]: <info>  [1765617198.9834] manager: (tapef1a3009-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/631)
Dec 13 04:13:18 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:18.985 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef1a3009-90, col_values=(('external_ids', {'iface-id': '365ef2b2-09ce-4255-947a-81f3896d22ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:18 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:18Z|01524|binding|INFO|Releasing lport 365ef2b2-09ce-4255-947a-81f3896d22ee from this chassis (sb_readonly=0)
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.000 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.006 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ef1a3009-9f3e-4c4a-9ee4-04bec3653434.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ef1a3009-9f3e-4c4a-9ee4-04bec3653434.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.007 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.008 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[06d73351-3270-480d-89eb-c79f7891a462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.009 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-ef1a3009-9f3e-4c4a-9ee4-04bec3653434
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/ef1a3009-9f3e-4c4a-9ee4-04bec3653434.pid.haproxy
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID ef1a3009-9f3e-4c4a-9ee4-04bec3653434
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.011 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'env', 'PROCESS_TAG=haproxy-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ef1a3009-9f3e-4c4a-9ee4-04bec3653434.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.039 248514 DEBUG nova.compute.manager [req-f3c62ee2-b6c9-4c57-94d0-1cd5ca179ba0 req-71873ca2-aa8c-4876-86db-bfee15d5ec80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-plugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.042 248514 DEBUG oslo_concurrency.lockutils [req-f3c62ee2-b6c9-4c57-94d0-1cd5ca179ba0 req-71873ca2-aa8c-4876-86db-bfee15d5ec80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.042 248514 DEBUG oslo_concurrency.lockutils [req-f3c62ee2-b6c9-4c57-94d0-1cd5ca179ba0 req-71873ca2-aa8c-4876-86db-bfee15d5ec80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.043 248514 DEBUG oslo_concurrency.lockutils [req-f3c62ee2-b6c9-4c57-94d0-1cd5ca179ba0 req-71873ca2-aa8c-4876-86db-bfee15d5ec80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.043 248514 DEBUG nova.compute.manager [req-f3c62ee2-b6c9-4c57-94d0-1cd5ca179ba0 req-71873ca2-aa8c-4876-86db-bfee15d5ec80 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Processing event network-vif-plugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.063 248514 DEBUG nova.compute.manager [req-676a9607-14a5-4ad3-81ba-583870e8554c req-f9c371af-2d84-4d3e-b600-d18593cc375f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-plugged-37c91936-0589-4bd3-9413-3af7db3e8feb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.064 248514 DEBUG oslo_concurrency.lockutils [req-676a9607-14a5-4ad3-81ba-583870e8554c req-f9c371af-2d84-4d3e-b600-d18593cc375f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.064 248514 DEBUG oslo_concurrency.lockutils [req-676a9607-14a5-4ad3-81ba-583870e8554c req-f9c371af-2d84-4d3e-b600-d18593cc375f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.064 248514 DEBUG oslo_concurrency.lockutils [req-676a9607-14a5-4ad3-81ba-583870e8554c req-f9c371af-2d84-4d3e-b600-d18593cc375f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.065 248514 DEBUG nova.compute.manager [req-676a9607-14a5-4ad3-81ba-583870e8554c req-f9c371af-2d84-4d3e-b600-d18593cc375f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Processing event network-vif-plugged-37c91936-0589-4bd3-9413-3af7db3e8feb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.284 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617199.2840216, 707e75d9-f6d2-413d-a727-c3ecbfea90c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.285 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] VM Started (Lifecycle Event)#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.287 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.292 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.296 248514 INFO nova.virt.libvirt.driver [-] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Instance spawned successfully.#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.296 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.329 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.336 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.337 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.338 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.338 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.339 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.339 248514 DEBUG nova.virt.libvirt.driver [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.344 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.393 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.394 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617199.2841537, 707e75d9-f6d2-413d-a727-c3ecbfea90c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.394 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.422 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.426 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617199.2901063, 707e75d9-f6d2-413d-a727-c3ecbfea90c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.426 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.438 248514 INFO nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Took 15.85 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.438 248514 DEBUG nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:13:19 np0005558241 podman[395856]: 2025-12-13 09:13:19.438979722 +0000 UTC m=+0.049061702 container create 1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.447 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.455 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.483 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:13:19 np0005558241 systemd[1]: Started libpod-conmon-1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a.scope.
Dec 13 04:13:19 np0005558241 podman[395856]: 2025-12-13 09:13:19.411836291 +0000 UTC m=+0.021918311 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:13:19 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.521 248514 INFO nova.compute.manager [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Took 17.50 seconds to build instance.#033[00m
Dec 13 04:13:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e219d71c2457e91e96bcf457fd2d2a6b9c400e89e423017a2889e15b6fb339/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.543 248514 DEBUG oslo_concurrency.lockutils [None req-b5b589dd-70c0-4085-8d84-93a1fa60ee7d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:19 np0005558241 podman[395856]: 2025-12-13 09:13:19.552729055 +0000 UTC m=+0.162811035 container init 1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:13:19 np0005558241 podman[395856]: 2025-12-13 09:13:19.557895594 +0000 UTC m=+0.167977574 container start 1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:13:19 np0005558241 neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434[395872]: [NOTICE]   (395876) : New worker (395878) forked
Dec 13 04:13:19 np0005558241 neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434[395872]: [NOTICE]   (395876) : Loading success.
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.627 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d in datapath ff80f4de-0e76-47f2-b06e-6d3900b63130 unbound from our chassis#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.631 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ff80f4de-0e76-47f2-b06e-6d3900b63130#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.645 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[963669cf-5ab6-414b-9793-b253a3a19de9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.646 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapff80f4de-01 in ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.649 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapff80f4de-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.649 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3160cbd2-9be3-4709-9e8b-543a44f11b3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.651 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[996bdab5-03ba-4686-8ad4-3e43811ca534]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.664 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[39326667-9ab6-4ea9-8274-2881d23105e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.687 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c7837abc-941f-4cbb-a07e-f3c99b95e242]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.711 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf1eb83-81ca-471f-b880-c8f4a517fbf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.725 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[07f54147-df29-460e-88c8-32465d8dc5f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:19 np0005558241 NetworkManager[50376]: <info>  [1765617199.7273] manager: (tapff80f4de-00): new Veth device (/org/freedesktop/NetworkManager/Devices/632)
Dec 13 04:13:19 np0005558241 systemd-udevd[395774]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:13:19 np0005558241 nova_compute[248510]: 2025-12-13 09:13:19.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.774 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5d2828-ee22-4be2-8ae4-786033385f5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.779 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3856c9de-8aca-4900-a35f-b51458846b37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:19 np0005558241 NetworkManager[50376]: <info>  [1765617199.8177] device (tapff80f4de-00): carrier: link connected
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.825 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[264724d5-fedd-4344-8b26-386b1c353287]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.846 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cb973c39-48d9-4178-a145-975584af65bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff80f4de-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:6b:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977703, 'reachable_time': 20848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395899, 'error': None, 'target': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.876 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c7166d0f-851f-4616-aa2d-cb7a4a175b5e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:6b85'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977703, 'tstamp': 977703}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395900, 'error': None, 'target': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.905 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[55020b81-1694-4c36-915a-60e886d0e6b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff80f4de-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:6b:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977703, 'reachable_time': 20848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 395901, 'error': None, 'target': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.947 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[32356e02-c725-4415-b405-6cc852a18b8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:19.998 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3dc1dea0-b8d1-4ff6-8d1f-ea693df0435a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:20.000 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff80f4de-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:20.001 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:20.002 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff80f4de-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:20 np0005558241 nova_compute[248510]: 2025-12-13 09:13:20.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:20 np0005558241 NetworkManager[50376]: <info>  [1765617200.0046] manager: (tapff80f4de-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/633)
Dec 13 04:13:20 np0005558241 kernel: tapff80f4de-00: entered promiscuous mode
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:20.010 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapff80f4de-00, col_values=(('external_ids', {'iface-id': 'a155cc1b-19d3-4430-9d83-b19e30ef1a95'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:20 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:20Z|01525|binding|INFO|Releasing lport a155cc1b-19d3-4430-9d83-b19e30ef1a95 from this chassis (sb_readonly=0)
Dec 13 04:13:20 np0005558241 nova_compute[248510]: 2025-12-13 09:13:20.011 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:20.015 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ff80f4de-0e76-47f2-b06e-6d3900b63130.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ff80f4de-0e76-47f2-b06e-6d3900b63130.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:20.016 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a6428577-3a7c-4e05-b8b5-9e1c00cfd580]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:20.017 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-ff80f4de-0e76-47f2-b06e-6d3900b63130
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/ff80f4de-0e76-47f2-b06e-6d3900b63130.pid.haproxy
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID ff80f4de-0e76-47f2-b06e-6d3900b63130
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:13:20 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:20.018 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'env', 'PROCESS_TAG=haproxy-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ff80f4de-0e76-47f2-b06e-6d3900b63130.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:13:20 np0005558241 nova_compute[248510]: 2025-12-13 09:13:20.026 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:20 np0005558241 podman[395932]: 2025-12-13 09:13:20.45927251 +0000 UTC m=+0.064460478 container create 8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 04:13:20 np0005558241 systemd[1]: Started libpod-conmon-8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6.scope.
Dec 13 04:13:20 np0005558241 podman[395932]: 2025-12-13 09:13:20.428140039 +0000 UTC m=+0.033328037 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:13:20 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:13:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5335169775ac2f2f0d9de9ac62c16a1b6366679c57b207e1bc023d06103d75f6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:20 np0005558241 podman[395932]: 2025-12-13 09:13:20.552612261 +0000 UTC m=+0.157800249 container init 8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Dec 13 04:13:20 np0005558241 podman[395932]: 2025-12-13 09:13:20.561495514 +0000 UTC m=+0.166683492 container start 8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:13:20 np0005558241 neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130[395948]: [NOTICE]   (395952) : New worker (395954) forked
Dec 13 04:13:20 np0005558241 neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130[395948]: [NOTICE]   (395952) : Loading success.
Dec 13 04:13:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3367: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 3.5 KiB/s rd, 12 KiB/s wr, 4 op/s
Dec 13 04:13:21 np0005558241 nova_compute[248510]: 2025-12-13 09:13:21.153 248514 DEBUG nova.compute.manager [req-fe8ce97c-4f33-4bb1-ac22-163f5caed679 req-cc32b4aa-5e00-44c3-84b2-516b1913097e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-plugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:13:21 np0005558241 nova_compute[248510]: 2025-12-13 09:13:21.154 248514 DEBUG oslo_concurrency.lockutils [req-fe8ce97c-4f33-4bb1-ac22-163f5caed679 req-cc32b4aa-5e00-44c3-84b2-516b1913097e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:21 np0005558241 nova_compute[248510]: 2025-12-13 09:13:21.154 248514 DEBUG oslo_concurrency.lockutils [req-fe8ce97c-4f33-4bb1-ac22-163f5caed679 req-cc32b4aa-5e00-44c3-84b2-516b1913097e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:21 np0005558241 nova_compute[248510]: 2025-12-13 09:13:21.154 248514 DEBUG oslo_concurrency.lockutils [req-fe8ce97c-4f33-4bb1-ac22-163f5caed679 req-cc32b4aa-5e00-44c3-84b2-516b1913097e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:21 np0005558241 nova_compute[248510]: 2025-12-13 09:13:21.154 248514 DEBUG nova.compute.manager [req-fe8ce97c-4f33-4bb1-ac22-163f5caed679 req-cc32b4aa-5e00-44c3-84b2-516b1913097e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] No waiting events found dispatching network-vif-plugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:13:21 np0005558241 nova_compute[248510]: 2025-12-13 09:13:21.155 248514 WARNING nova.compute.manager [req-fe8ce97c-4f33-4bb1-ac22-163f5caed679 req-cc32b4aa-5e00-44c3-84b2-516b1913097e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received unexpected event network-vif-plugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d for instance with vm_state active and task_state None.#033[00m
Dec 13 04:13:21 np0005558241 nova_compute[248510]: 2025-12-13 09:13:21.240 248514 DEBUG nova.compute.manager [req-89989770-38d0-462e-a0f0-42aba80ca6d5 req-24321a0b-18f0-44a4-a183-c739ada7cab4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-plugged-37c91936-0589-4bd3-9413-3af7db3e8feb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:13:21 np0005558241 nova_compute[248510]: 2025-12-13 09:13:21.241 248514 DEBUG oslo_concurrency.lockutils [req-89989770-38d0-462e-a0f0-42aba80ca6d5 req-24321a0b-18f0-44a4-a183-c739ada7cab4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:21 np0005558241 nova_compute[248510]: 2025-12-13 09:13:21.241 248514 DEBUG oslo_concurrency.lockutils [req-89989770-38d0-462e-a0f0-42aba80ca6d5 req-24321a0b-18f0-44a4-a183-c739ada7cab4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:21 np0005558241 nova_compute[248510]: 2025-12-13 09:13:21.241 248514 DEBUG oslo_concurrency.lockutils [req-89989770-38d0-462e-a0f0-42aba80ca6d5 req-24321a0b-18f0-44a4-a183-c739ada7cab4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:21 np0005558241 nova_compute[248510]: 2025-12-13 09:13:21.241 248514 DEBUG nova.compute.manager [req-89989770-38d0-462e-a0f0-42aba80ca6d5 req-24321a0b-18f0-44a4-a183-c739ada7cab4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] No waiting events found dispatching network-vif-plugged-37c91936-0589-4bd3-9413-3af7db3e8feb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:13:21 np0005558241 nova_compute[248510]: 2025-12-13 09:13:21.242 248514 WARNING nova.compute.manager [req-89989770-38d0-462e-a0f0-42aba80ca6d5 req-24321a0b-18f0-44a4-a183-c739ada7cab4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received unexpected event network-vif-plugged-37c91936-0589-4bd3-9413-3af7db3e8feb for instance with vm_state active and task_state None.#033[00m
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003629306754213902 of space, bias 1.0, pg target 0.10887920262641705 quantized to 32 (current 32)
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697318854645266 of space, bias 1.0, pg target 0.20091956563935798 quantized to 32 (current 32)
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:13:22 np0005558241 nova_compute[248510]: 2025-12-13 09:13:22.300 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3368: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 12 KiB/s wr, 3 op/s
Dec 13 04:13:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:23.245 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:13:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:23.248 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:13:23 np0005558241 nova_compute[248510]: 2025-12-13 09:13:23.251 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:23 np0005558241 nova_compute[248510]: 2025-12-13 09:13:23.889 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:23Z|01526|binding|INFO|Releasing lport 365ef2b2-09ce-4255-947a-81f3896d22ee from this chassis (sb_readonly=0)
Dec 13 04:13:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:23Z|01527|binding|INFO|Releasing lport a155cc1b-19d3-4430-9d83-b19e30ef1a95 from this chassis (sb_readonly=0)
Dec 13 04:13:23 np0005558241 NetworkManager[50376]: <info>  [1765617203.8914] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/634)
Dec 13 04:13:23 np0005558241 NetworkManager[50376]: <info>  [1765617203.8922] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/635)
Dec 13 04:13:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:23Z|01528|binding|INFO|Releasing lport 365ef2b2-09ce-4255-947a-81f3896d22ee from this chassis (sb_readonly=0)
Dec 13 04:13:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:23Z|01529|binding|INFO|Releasing lport a155cc1b-19d3-4430-9d83-b19e30ef1a95 from this chassis (sb_readonly=0)
Dec 13 04:13:23 np0005558241 nova_compute[248510]: 2025-12-13 09:13:23.927 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:23 np0005558241 nova_compute[248510]: 2025-12-13 09:13:23.934 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:24 np0005558241 nova_compute[248510]: 2025-12-13 09:13:24.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:13:24 np0005558241 nova_compute[248510]: 2025-12-13 09:13:24.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:13:24 np0005558241 nova_compute[248510]: 2025-12-13 09:13:24.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:13:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3369: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:13:25 np0005558241 nova_compute[248510]: 2025-12-13 09:13:25.026 248514 DEBUG nova.compute.manager [req-af933f49-3963-48d9-8b37-4e756d36aea6 req-5d08c5c9-9727-4e60-849a-fb5ace16ba4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-changed-37c91936-0589-4bd3-9413-3af7db3e8feb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:13:25 np0005558241 nova_compute[248510]: 2025-12-13 09:13:25.027 248514 DEBUG nova.compute.manager [req-af933f49-3963-48d9-8b37-4e756d36aea6 req-5d08c5c9-9727-4e60-849a-fb5ace16ba4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Refreshing instance network info cache due to event network-changed-37c91936-0589-4bd3-9413-3af7db3e8feb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:13:25 np0005558241 nova_compute[248510]: 2025-12-13 09:13:25.028 248514 DEBUG oslo_concurrency.lockutils [req-af933f49-3963-48d9-8b37-4e756d36aea6 req-5d08c5c9-9727-4e60-849a-fb5ace16ba4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:13:25 np0005558241 nova_compute[248510]: 2025-12-13 09:13:25.029 248514 DEBUG oslo_concurrency.lockutils [req-af933f49-3963-48d9-8b37-4e756d36aea6 req-5d08c5c9-9727-4e60-849a-fb5ace16ba4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:13:25 np0005558241 nova_compute[248510]: 2025-12-13 09:13:25.029 248514 DEBUG nova.network.neutron [req-af933f49-3963-48d9-8b37-4e756d36aea6 req-5d08c5c9-9727-4e60-849a-fb5ace16ba4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Refreshing network info cache for port 37c91936-0589-4bd3-9413-3af7db3e8feb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:13:25 np0005558241 nova_compute[248510]: 2025-12-13 09:13:25.096 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:13:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3370: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:13:27 np0005558241 nova_compute[248510]: 2025-12-13 09:13:27.176 248514 DEBUG nova.network.neutron [req-af933f49-3963-48d9-8b37-4e756d36aea6 req-5d08c5c9-9727-4e60-849a-fb5ace16ba4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updated VIF entry in instance network info cache for port 37c91936-0589-4bd3-9413-3af7db3e8feb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:13:27 np0005558241 nova_compute[248510]: 2025-12-13 09:13:27.177 248514 DEBUG nova.network.neutron [req-af933f49-3963-48d9-8b37-4e756d36aea6 req-5d08c5c9-9727-4e60-849a-fb5ace16ba4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updating instance_info_cache with network_info: [{"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:13:27 np0005558241 nova_compute[248510]: 2025-12-13 09:13:27.252 248514 DEBUG oslo_concurrency.lockutils [req-af933f49-3963-48d9-8b37-4e756d36aea6 req-5d08c5c9-9727-4e60-849a-fb5ace16ba4f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:13:27 np0005558241 nova_compute[248510]: 2025-12-13 09:13:27.252 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:13:27 np0005558241 nova_compute[248510]: 2025-12-13 09:13:27.253 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 04:13:27 np0005558241 nova_compute[248510]: 2025-12-13 09:13:27.253 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 707e75d9-f6d2-413d-a727-c3ecbfea90c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:13:27 np0005558241 nova_compute[248510]: 2025-12-13 09:13:27.305 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3371: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:13:28 np0005558241 nova_compute[248510]: 2025-12-13 09:13:28.891 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:29 np0005558241 nova_compute[248510]: 2025-12-13 09:13:29.691 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updating instance_info_cache with network_info: [{"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": 
"2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:13:29 np0005558241 nova_compute[248510]: 2025-12-13 09:13:29.712 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:13:29 np0005558241 nova_compute[248510]: 2025-12-13 09:13:29.713 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 04:13:29 np0005558241 nova_compute[248510]: 2025-12-13 09:13:29.713 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:13:29 np0005558241 nova_compute[248510]: 2025-12-13 09:13:29.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:13:29 np0005558241 nova_compute[248510]: 2025-12-13 09:13:29.831 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:29 np0005558241 nova_compute[248510]: 2025-12-13 09:13:29.832 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:29 np0005558241 nova_compute[248510]: 2025-12-13 09:13:29.833 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:29 np0005558241 nova_compute[248510]: 2025-12-13 09:13:29.833 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:13:29 np0005558241 nova_compute[248510]: 2025-12-13 09:13:29.834 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:13:29 np0005558241 podman[395967]: 2025-12-13 09:13:29.986481274 +0000 UTC m=+0.062214501 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:13:29 np0005558241 podman[395966]: 2025-12-13 09:13:29.998001433 +0000 UTC m=+0.083003413 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:13:30 np0005558241 podman[395965]: 2025-12-13 09:13:30.036394026 +0000 UTC m=+0.125292193 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 13 04:13:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:13:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3114344371' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:13:30 np0005558241 nova_compute[248510]: 2025-12-13 09:13:30.514 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:13:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3372: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:13:30 np0005558241 nova_compute[248510]: 2025-12-13 09:13:30.847 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:13:30 np0005558241 nova_compute[248510]: 2025-12-13 09:13:30.849 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:13:31 np0005558241 nova_compute[248510]: 2025-12-13 09:13:31.105 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:13:31 np0005558241 nova_compute[248510]: 2025-12-13 09:13:31.106 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3322MB free_disk=59.96650728955865GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:13:31 np0005558241 nova_compute[248510]: 2025-12-13 09:13:31.106 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:31 np0005558241 nova_compute[248510]: 2025-12-13 09:13:31.107 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:31.252 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:31 np0005558241 nova_compute[248510]: 2025-12-13 09:13:31.270 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 707e75d9-f6d2-413d-a727-c3ecbfea90c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:13:31 np0005558241 nova_compute[248510]: 2025-12-13 09:13:31.271 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:13:31 np0005558241 nova_compute[248510]: 2025-12-13 09:13:31.271 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:13:31 np0005558241 nova_compute[248510]: 2025-12-13 09:13:31.521 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:13:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:13:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:13:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:13:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:13:31 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Dec 13 04:13:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:13:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:13:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:13:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:13:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:13:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:13:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:13:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:13:32 np0005558241 nova_compute[248510]: 2025-12-13 09:13:32.308 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:13:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3500816159' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:13:32 np0005558241 nova_compute[248510]: 2025-12-13 09:13:32.371 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.851s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:13:32 np0005558241 nova_compute[248510]: 2025-12-13 09:13:32.383 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:13:32 np0005558241 podman[396212]: 2025-12-13 09:13:32.475725445 +0000 UTC m=+0.095887656 container create a902f91b2e87757ab4fb3937598bec72277862af1ada55abde00b175b20cde54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 04:13:32 np0005558241 podman[396212]: 2025-12-13 09:13:32.40698285 +0000 UTC m=+0.027145061 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:13:32 np0005558241 systemd[1]: Started libpod-conmon-a902f91b2e87757ab4fb3937598bec72277862af1ada55abde00b175b20cde54.scope.
Dec 13 04:13:32 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:13:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:32 np0005558241 podman[396212]: 2025-12-13 09:13:32.737783767 +0000 UTC m=+0.357946028 container init a902f91b2e87757ab4fb3937598bec72277862af1ada55abde00b175b20cde54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_almeida, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:13:32 np0005558241 podman[396212]: 2025-12-13 09:13:32.748858915 +0000 UTC m=+0.369021096 container start a902f91b2e87757ab4fb3937598bec72277862af1ada55abde00b175b20cde54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 04:13:32 np0005558241 gifted_almeida[396228]: 167 167
Dec 13 04:13:32 np0005558241 systemd[1]: libpod-a902f91b2e87757ab4fb3937598bec72277862af1ada55abde00b175b20cde54.scope: Deactivated successfully.
Dec 13 04:13:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:13:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:13:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:13:32 np0005558241 podman[396212]: 2025-12-13 09:13:32.801111096 +0000 UTC m=+0.421273317 container attach a902f91b2e87757ab4fb3937598bec72277862af1ada55abde00b175b20cde54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:13:32 np0005558241 podman[396212]: 2025-12-13 09:13:32.803491965 +0000 UTC m=+0.423654146 container died a902f91b2e87757ab4fb3937598bec72277862af1ada55abde00b175b20cde54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_almeida, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 04:13:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3373: 321 pgs: 321 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 69 op/s
Dec 13 04:13:33 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4f3d708c4143ea8749c738a2eae980460585ea2052cea3cda4137f8caaa12548-merged.mount: Deactivated successfully.
Dec 13 04:13:33 np0005558241 podman[396212]: 2025-12-13 09:13:33.151488073 +0000 UTC m=+0.771650244 container remove a902f91b2e87757ab4fb3937598bec72277862af1ada55abde00b175b20cde54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_almeida, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:13:33 np0005558241 nova_compute[248510]: 2025-12-13 09:13:33.163 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:13:33 np0005558241 systemd[1]: libpod-conmon-a902f91b2e87757ab4fb3937598bec72277862af1ada55abde00b175b20cde54.scope: Deactivated successfully.
Dec 13 04:13:33 np0005558241 nova_compute[248510]: 2025-12-13 09:13:33.299 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:13:33 np0005558241 nova_compute[248510]: 2025-12-13 09:13:33.300 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:33 np0005558241 podman[396251]: 2025-12-13 09:13:33.389752019 +0000 UTC m=+0.065089094 container create 30d4cd4eeefd9dd150a67603109cdfcfe98ab0c7495891671e2508135e3961a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_mestorf, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 04:13:33 np0005558241 podman[396251]: 2025-12-13 09:13:33.348617147 +0000 UTC m=+0.023954242 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:13:33 np0005558241 systemd[1]: Started libpod-conmon-30d4cd4eeefd9dd150a67603109cdfcfe98ab0c7495891671e2508135e3961a1.scope.
Dec 13 04:13:33 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:13:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6572ed8c130784f2a6016a5fe67a98c6968dbe0809a7ee378e6a63918e1fb50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6572ed8c130784f2a6016a5fe67a98c6968dbe0809a7ee378e6a63918e1fb50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6572ed8c130784f2a6016a5fe67a98c6968dbe0809a7ee378e6a63918e1fb50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6572ed8c130784f2a6016a5fe67a98c6968dbe0809a7ee378e6a63918e1fb50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6572ed8c130784f2a6016a5fe67a98c6968dbe0809a7ee378e6a63918e1fb50/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:33 np0005558241 podman[396251]: 2025-12-13 09:13:33.571907117 +0000 UTC m=+0.247244212 container init 30d4cd4eeefd9dd150a67603109cdfcfe98ab0c7495891671e2508135e3961a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:13:33 np0005558241 podman[396251]: 2025-12-13 09:13:33.581765495 +0000 UTC m=+0.257102570 container start 30d4cd4eeefd9dd150a67603109cdfcfe98ab0c7495891671e2508135e3961a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 04:13:33 np0005558241 podman[396251]: 2025-12-13 09:13:33.608455854 +0000 UTC m=+0.283792969 container attach 30d4cd4eeefd9dd150a67603109cdfcfe98ab0c7495891671e2508135e3961a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_mestorf, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 04:13:33 np0005558241 nova_compute[248510]: 2025-12-13 09:13:33.891 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:34 np0005558241 gracious_mestorf[396268]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:13:34 np0005558241 gracious_mestorf[396268]: --> All data devices are unavailable
Dec 13 04:13:34 np0005558241 systemd[1]: libpod-30d4cd4eeefd9dd150a67603109cdfcfe98ab0c7495891671e2508135e3961a1.scope: Deactivated successfully.
Dec 13 04:13:34 np0005558241 podman[396251]: 2025-12-13 09:13:34.08635 +0000 UTC m=+0.761687075 container died 30d4cd4eeefd9dd150a67603109cdfcfe98ab0c7495891671e2508135e3961a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_mestorf, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 04:13:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:34Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:77:6c:85 10.100.0.11
Dec 13 04:13:34 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:34Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:77:6c:85 10.100.0.11
Dec 13 04:13:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c6572ed8c130784f2a6016a5fe67a98c6968dbe0809a7ee378e6a63918e1fb50-merged.mount: Deactivated successfully.
Dec 13 04:13:34 np0005558241 podman[396251]: 2025-12-13 09:13:34.591759915 +0000 UTC m=+1.267097030 container remove 30d4cd4eeefd9dd150a67603109cdfcfe98ab0c7495891671e2508135e3961a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 04:13:34 np0005558241 systemd[1]: libpod-conmon-30d4cd4eeefd9dd150a67603109cdfcfe98ab0c7495891671e2508135e3961a1.scope: Deactivated successfully.
Dec 13 04:13:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3374: 321 pgs: 321 active+clean; 113 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 118 op/s
Dec 13 04:13:35 np0005558241 podman[396363]: 2025-12-13 09:13:35.14700083 +0000 UTC m=+0.024053334 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:13:35 np0005558241 podman[396363]: 2025-12-13 09:13:35.259684237 +0000 UTC m=+0.136736731 container create 33397d1f6e3b5d278fa72890c96712daa4d5f1afcc227c74fcb9f5c47ca3e73c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:13:35 np0005558241 nova_compute[248510]: 2025-12-13 09:13:35.301 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:13:35 np0005558241 nova_compute[248510]: 2025-12-13 09:13:35.302 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:13:35 np0005558241 nova_compute[248510]: 2025-12-13 09:13:35.302 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:13:35 np0005558241 systemd[1]: Started libpod-conmon-33397d1f6e3b5d278fa72890c96712daa4d5f1afcc227c74fcb9f5c47ca3e73c.scope.
Dec 13 04:13:35 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:13:35 np0005558241 podman[396363]: 2025-12-13 09:13:35.804346177 +0000 UTC m=+0.681398671 container init 33397d1f6e3b5d278fa72890c96712daa4d5f1afcc227c74fcb9f5c47ca3e73c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:13:35 np0005558241 podman[396363]: 2025-12-13 09:13:35.814241806 +0000 UTC m=+0.691294280 container start 33397d1f6e3b5d278fa72890c96712daa4d5f1afcc227c74fcb9f5c47ca3e73c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lehmann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 04:13:35 np0005558241 thirsty_lehmann[396380]: 167 167
Dec 13 04:13:35 np0005558241 systemd[1]: libpod-33397d1f6e3b5d278fa72890c96712daa4d5f1afcc227c74fcb9f5c47ca3e73c.scope: Deactivated successfully.
Dec 13 04:13:35 np0005558241 podman[396363]: 2025-12-13 09:13:35.993963203 +0000 UTC m=+0.871015697 container attach 33397d1f6e3b5d278fa72890c96712daa4d5f1afcc227c74fcb9f5c47ca3e73c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 04:13:35 np0005558241 podman[396363]: 2025-12-13 09:13:35.994768243 +0000 UTC m=+0.871820717 container died 33397d1f6e3b5d278fa72890c96712daa4d5f1afcc227c74fcb9f5c47ca3e73c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lehmann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 04:13:36 np0005558241 systemd[1]: var-lib-containers-storage-overlay-03053423c19a82c35f949015a07e58b0d63e30b2610544e8bc341c28286565a9-merged.mount: Deactivated successfully.
Dec 13 04:13:36 np0005558241 podman[396363]: 2025-12-13 09:13:36.333714544 +0000 UTC m=+1.210767018 container remove 33397d1f6e3b5d278fa72890c96712daa4d5f1afcc227c74fcb9f5c47ca3e73c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 04:13:36 np0005558241 systemd[1]: libpod-conmon-33397d1f6e3b5d278fa72890c96712daa4d5f1afcc227c74fcb9f5c47ca3e73c.scope: Deactivated successfully.
Dec 13 04:13:36 np0005558241 podman[396404]: 2025-12-13 09:13:36.616710552 +0000 UTC m=+0.103827555 container create f268addf8016aed2bc33d984ec320354d6adf2c2fc5403b645230d91004d4296 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shaw, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:13:36 np0005558241 podman[396404]: 2025-12-13 09:13:36.558628325 +0000 UTC m=+0.045745358 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:13:36 np0005558241 systemd[1]: Started libpod-conmon-f268addf8016aed2bc33d984ec320354d6adf2c2fc5403b645230d91004d4296.scope.
Dec 13 04:13:36 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:13:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feecaeddac6f9357948fd3cbbaf562ed900471dd01bddc588c38c30ad6c2fa87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feecaeddac6f9357948fd3cbbaf562ed900471dd01bddc588c38c30ad6c2fa87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feecaeddac6f9357948fd3cbbaf562ed900471dd01bddc588c38c30ad6c2fa87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feecaeddac6f9357948fd3cbbaf562ed900471dd01bddc588c38c30ad6c2fa87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:36 np0005558241 nova_compute[248510]: 2025-12-13 09:13:36.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:13:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3375: 321 pgs: 321 active+clean; 113 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 242 KiB/s rd, 2.0 MiB/s wr, 48 op/s
Dec 13 04:13:36 np0005558241 podman[396404]: 2025-12-13 09:13:36.842995027 +0000 UTC m=+0.330112020 container init f268addf8016aed2bc33d984ec320354d6adf2c2fc5403b645230d91004d4296 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shaw, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 04:13:36 np0005558241 podman[396404]: 2025-12-13 09:13:36.85468354 +0000 UTC m=+0.341800533 container start f268addf8016aed2bc33d984ec320354d6adf2c2fc5403b645230d91004d4296 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 04:13:36 np0005558241 podman[396404]: 2025-12-13 09:13:36.932773259 +0000 UTC m=+0.419890272 container attach f268addf8016aed2bc33d984ec320354d6adf2c2fc5403b645230d91004d4296 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 04:13:37 np0005558241 zen_shaw[396420]: {
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:    "0": [
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:        {
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "devices": [
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "/dev/loop3"
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            ],
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "lv_name": "ceph_lv0",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "lv_size": "21470642176",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "name": "ceph_lv0",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "tags": {
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.cluster_name": "ceph",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.crush_device_class": "",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.encrypted": "0",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.objectstore": "bluestore",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.osd_id": "0",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.type": "block",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.vdo": "0",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.with_tpm": "0"
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            },
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "type": "block",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "vg_name": "ceph_vg0"
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:        }
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:    ],
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:    "1": [
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:        {
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "devices": [
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "/dev/loop4"
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            ],
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "lv_name": "ceph_lv1",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "lv_size": "21470642176",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "name": "ceph_lv1",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "tags": {
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.cluster_name": "ceph",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.crush_device_class": "",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.encrypted": "0",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.objectstore": "bluestore",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.osd_id": "1",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.type": "block",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.vdo": "0",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.with_tpm": "0"
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            },
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "type": "block",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "vg_name": "ceph_vg1"
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:        }
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:    ],
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:    "2": [
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:        {
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "devices": [
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "/dev/loop5"
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            ],
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "lv_name": "ceph_lv2",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "lv_size": "21470642176",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "name": "ceph_lv2",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "tags": {
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.cluster_name": "ceph",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.crush_device_class": "",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.encrypted": "0",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.objectstore": "bluestore",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.osd_id": "2",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.type": "block",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.vdo": "0",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:                "ceph.with_tpm": "0"
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            },
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "type": "block",
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:            "vg_name": "ceph_vg2"
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:        }
Dec 13 04:13:37 np0005558241 zen_shaw[396420]:    ]
Dec 13 04:13:37 np0005558241 zen_shaw[396420]: }
Dec 13 04:13:37 np0005558241 systemd[1]: libpod-f268addf8016aed2bc33d984ec320354d6adf2c2fc5403b645230d91004d4296.scope: Deactivated successfully.
Dec 13 04:13:37 np0005558241 podman[396404]: 2025-12-13 09:13:37.159959196 +0000 UTC m=+0.647076189 container died f268addf8016aed2bc33d984ec320354d6adf2c2fc5403b645230d91004d4296 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shaw, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:13:37 np0005558241 nova_compute[248510]: 2025-12-13 09:13:37.315 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:37 np0005558241 systemd[1]: var-lib-containers-storage-overlay-feecaeddac6f9357948fd3cbbaf562ed900471dd01bddc588c38c30ad6c2fa87-merged.mount: Deactivated successfully.
Dec 13 04:13:37 np0005558241 podman[396404]: 2025-12-13 09:13:37.54605736 +0000 UTC m=+1.033174363 container remove f268addf8016aed2bc33d984ec320354d6adf2c2fc5403b645230d91004d4296 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_shaw, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:13:37 np0005558241 systemd[1]: libpod-conmon-f268addf8016aed2bc33d984ec320354d6adf2c2fc5403b645230d91004d4296.scope: Deactivated successfully.
Dec 13 04:13:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:38 np0005558241 podman[396504]: 2025-12-13 09:13:38.0835609 +0000 UTC m=+0.105052095 container create 522793af3e6e8cbbb8c6e8e4ca8c1705a8f5eba47aa890217996057a389b16d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:13:38 np0005558241 podman[396504]: 2025-12-13 09:13:38.004406006 +0000 UTC m=+0.025897231 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:13:38 np0005558241 systemd[1]: Started libpod-conmon-522793af3e6e8cbbb8c6e8e4ca8c1705a8f5eba47aa890217996057a389b16d1.scope.
Dec 13 04:13:38 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:13:38 np0005558241 podman[396504]: 2025-12-13 09:13:38.20756941 +0000 UTC m=+0.229060635 container init 522793af3e6e8cbbb8c6e8e4ca8c1705a8f5eba47aa890217996057a389b16d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sutherland, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 04:13:38 np0005558241 podman[396504]: 2025-12-13 09:13:38.214155145 +0000 UTC m=+0.235646340 container start 522793af3e6e8cbbb8c6e8e4ca8c1705a8f5eba47aa890217996057a389b16d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:13:38 np0005558241 elastic_sutherland[396521]: 167 167
Dec 13 04:13:38 np0005558241 systemd[1]: libpod-522793af3e6e8cbbb8c6e8e4ca8c1705a8f5eba47aa890217996057a389b16d1.scope: Deactivated successfully.
Dec 13 04:13:38 np0005558241 podman[396504]: 2025-12-13 09:13:38.217954891 +0000 UTC m=+0.239446106 container attach 522793af3e6e8cbbb8c6e8e4ca8c1705a8f5eba47aa890217996057a389b16d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sutherland, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:13:38 np0005558241 podman[396504]: 2025-12-13 09:13:38.218370411 +0000 UTC m=+0.239861606 container died 522793af3e6e8cbbb8c6e8e4ca8c1705a8f5eba47aa890217996057a389b16d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 04:13:38 np0005558241 systemd[1]: var-lib-containers-storage-overlay-dd8371c3e500d2619489dd22ba125346f1ec4429767e4d34b6ac371465194dbc-merged.mount: Deactivated successfully.
Dec 13 04:13:38 np0005558241 podman[396504]: 2025-12-13 09:13:38.258819106 +0000 UTC m=+0.280310301 container remove 522793af3e6e8cbbb8c6e8e4ca8c1705a8f5eba47aa890217996057a389b16d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sutherland, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:13:38 np0005558241 systemd[1]: libpod-conmon-522793af3e6e8cbbb8c6e8e4ca8c1705a8f5eba47aa890217996057a389b16d1.scope: Deactivated successfully.
Dec 13 04:13:38 np0005558241 podman[396543]: 2025-12-13 09:13:38.474875634 +0000 UTC m=+0.081431513 container create 0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 04:13:38 np0005558241 podman[396543]: 2025-12-13 09:13:38.419126176 +0000 UTC m=+0.025681905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:13:38 np0005558241 systemd[1]: Started libpod-conmon-0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767.scope.
Dec 13 04:13:38 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:13:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a663a84d3dd12222edecee6819ca0c463a0cf74e6705ad8b15ccc402e3a48542/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a663a84d3dd12222edecee6819ca0c463a0cf74e6705ad8b15ccc402e3a48542/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a663a84d3dd12222edecee6819ca0c463a0cf74e6705ad8b15ccc402e3a48542/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:38 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a663a84d3dd12222edecee6819ca0c463a0cf74e6705ad8b15ccc402e3a48542/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:38 np0005558241 podman[396543]: 2025-12-13 09:13:38.612816474 +0000 UTC m=+0.219372153 container init 0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 04:13:38 np0005558241 podman[396543]: 2025-12-13 09:13:38.626370754 +0000 UTC m=+0.232926443 container start 0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:13:38 np0005558241 podman[396543]: 2025-12-13 09:13:38.637530564 +0000 UTC m=+0.244086213 container attach 0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mclean, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 04:13:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3376: 321 pgs: 321 active+clean; 117 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 247 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Dec 13 04:13:38 np0005558241 nova_compute[248510]: 2025-12-13 09:13:38.894 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:39 np0005558241 lvm[396638]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:13:39 np0005558241 lvm[396639]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:13:39 np0005558241 lvm[396638]: VG ceph_vg0 finished
Dec 13 04:13:39 np0005558241 lvm[396639]: VG ceph_vg1 finished
Dec 13 04:13:39 np0005558241 lvm[396641]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:13:39 np0005558241 lvm[396641]: VG ceph_vg2 finished
Dec 13 04:13:39 np0005558241 stoic_mclean[396560]: {}
Dec 13 04:13:39 np0005558241 systemd[1]: libpod-0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767.scope: Deactivated successfully.
Dec 13 04:13:39 np0005558241 podman[396543]: 2025-12-13 09:13:39.493613315 +0000 UTC m=+1.100168984 container died 0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mclean, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 04:13:39 np0005558241 systemd[1]: libpod-0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767.scope: Consumed 1.461s CPU time.
Dec 13 04:13:39 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a663a84d3dd12222edecee6819ca0c463a0cf74e6705ad8b15ccc402e3a48542-merged.mount: Deactivated successfully.
Dec 13 04:13:39 np0005558241 podman[396543]: 2025-12-13 09:13:39.54166342 +0000 UTC m=+1.148219069 container remove 0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:13:39 np0005558241 systemd[1]: libpod-conmon-0873a767a4e9fdacbbae97b3996739a1e0e7a8b6eb2a4182fc878b66fdddc767.scope: Deactivated successfully.
Dec 13 04:13:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:13:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:13:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:13:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:13:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:13:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:13:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:13:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:13:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:13:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:13:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:13:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:13:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3377: 321 pgs: 321 active+clean; 121 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:13:41 np0005558241 nova_compute[248510]: 2025-12-13 09:13:41.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:13:42 np0005558241 nova_compute[248510]: 2025-12-13 09:13:42.319 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3378: 321 pgs: 321 active+clean; 121 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:13:43 np0005558241 nova_compute[248510]: 2025-12-13 09:13:43.896 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:44 np0005558241 nova_compute[248510]: 2025-12-13 09:13:44.401 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:44 np0005558241 nova_compute[248510]: 2025-12-13 09:13:44.401 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:44 np0005558241 nova_compute[248510]: 2025-12-13 09:13:44.424 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:13:44 np0005558241 nova_compute[248510]: 2025-12-13 09:13:44.511 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:44 np0005558241 nova_compute[248510]: 2025-12-13 09:13:44.511 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:44 np0005558241 nova_compute[248510]: 2025-12-13 09:13:44.524 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:13:44 np0005558241 nova_compute[248510]: 2025-12-13 09:13:44.524 248514 INFO nova.compute.claims [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:13:44 np0005558241 nova_compute[248510]: 2025-12-13 09:13:44.677 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:13:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3379: 321 pgs: 321 active+clean; 121 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 04:13:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:13:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1788640244' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:13:45 np0005558241 nova_compute[248510]: 2025-12-13 09:13:45.241 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:13:45 np0005558241 nova_compute[248510]: 2025-12-13 09:13:45.251 248514 DEBUG nova.compute.provider_tree [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:13:45 np0005558241 nova_compute[248510]: 2025-12-13 09:13:45.411 248514 DEBUG nova.scheduler.client.report [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:13:45 np0005558241 nova_compute[248510]: 2025-12-13 09:13:45.437 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:45 np0005558241 nova_compute[248510]: 2025-12-13 09:13:45.438 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:13:45 np0005558241 nova_compute[248510]: 2025-12-13 09:13:45.503 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:13:45 np0005558241 nova_compute[248510]: 2025-12-13 09:13:45.503 248514 DEBUG nova.network.neutron [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:13:45 np0005558241 nova_compute[248510]: 2025-12-13 09:13:45.536 248514 INFO nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:13:45 np0005558241 nova_compute[248510]: 2025-12-13 09:13:45.555 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:13:45 np0005558241 nova_compute[248510]: 2025-12-13 09:13:45.881 248514 DEBUG nova.policy [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:13:45 np0005558241 nova_compute[248510]: 2025-12-13 09:13:45.993 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:13:45 np0005558241 nova_compute[248510]: 2025-12-13 09:13:45.995 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:13:45 np0005558241 nova_compute[248510]: 2025-12-13 09:13:45.996 248514 INFO nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Creating image(s)#033[00m
Dec 13 04:13:46 np0005558241 nova_compute[248510]: 2025-12-13 09:13:46.044 248514 DEBUG nova.storage.rbd_utils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:13:46 np0005558241 nova_compute[248510]: 2025-12-13 09:13:46.076 248514 DEBUG nova.storage.rbd_utils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:13:46 np0005558241 nova_compute[248510]: 2025-12-13 09:13:46.285 248514 DEBUG nova.storage.rbd_utils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:13:46 np0005558241 nova_compute[248510]: 2025-12-13 09:13:46.289 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:13:46 np0005558241 nova_compute[248510]: 2025-12-13 09:13:46.375 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:13:46 np0005558241 nova_compute[248510]: 2025-12-13 09:13:46.376 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:46 np0005558241 nova_compute[248510]: 2025-12-13 09:13:46.377 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:46 np0005558241 nova_compute[248510]: 2025-12-13 09:13:46.378 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:46 np0005558241 nova_compute[248510]: 2025-12-13 09:13:46.412 248514 DEBUG nova.storage.rbd_utils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:13:46 np0005558241 nova_compute[248510]: 2025-12-13 09:13:46.418 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:13:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3380: 321 pgs: 321 active+clean; 121 MiB data, 1024 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 103 KiB/s wr, 15 op/s
Dec 13 04:13:47 np0005558241 nova_compute[248510]: 2025-12-13 09:13:47.075 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:13:47 np0005558241 nova_compute[248510]: 2025-12-13 09:13:47.152 248514 DEBUG nova.storage.rbd_utils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:13:47 np0005558241 nova_compute[248510]: 2025-12-13 09:13:47.239 248514 DEBUG nova.objects.instance [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:13:47 np0005558241 nova_compute[248510]: 2025-12-13 09:13:47.315 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:13:47 np0005558241 nova_compute[248510]: 2025-12-13 09:13:47.315 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Ensure instance console log exists: /var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:13:47 np0005558241 nova_compute[248510]: 2025-12-13 09:13:47.316 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:47 np0005558241 nova_compute[248510]: 2025-12-13 09:13:47.316 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:47 np0005558241 nova_compute[248510]: 2025-12-13 09:13:47.316 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:47 np0005558241 nova_compute[248510]: 2025-12-13 09:13:47.322 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:48 np0005558241 nova_compute[248510]: 2025-12-13 09:13:48.010 248514 DEBUG nova.network.neutron [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Successfully created port: fc1f4300-f39f-49b1-8d88-be328fb018fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:13:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3381: 321 pgs: 321 active+clean; 132 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 481 KiB/s wr, 27 op/s
Dec 13 04:13:48 np0005558241 nova_compute[248510]: 2025-12-13 09:13:48.899 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:48 np0005558241 nova_compute[248510]: 2025-12-13 09:13:48.934 248514 DEBUG nova.network.neutron [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Successfully created port: e86a788a-8070-426b-8b30-20117bfe712f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:13:49 np0005558241 nova_compute[248510]: 2025-12-13 09:13:49.664 248514 DEBUG nova.network.neutron [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Successfully updated port: fc1f4300-f39f-49b1-8d88-be328fb018fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:13:49 np0005558241 nova_compute[248510]: 2025-12-13 09:13:49.757 248514 DEBUG nova.compute.manager [req-92fd9e6b-933e-40fe-acea-03002fc10b07 req-d9e73b7e-05bc-49f2-920b-8ca052d857be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-changed-fc1f4300-f39f-49b1-8d88-be328fb018fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:13:49 np0005558241 nova_compute[248510]: 2025-12-13 09:13:49.757 248514 DEBUG nova.compute.manager [req-92fd9e6b-933e-40fe-acea-03002fc10b07 req-d9e73b7e-05bc-49f2-920b-8ca052d857be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Refreshing instance network info cache due to event network-changed-fc1f4300-f39f-49b1-8d88-be328fb018fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:13:49 np0005558241 nova_compute[248510]: 2025-12-13 09:13:49.758 248514 DEBUG oslo_concurrency.lockutils [req-92fd9e6b-933e-40fe-acea-03002fc10b07 req-d9e73b7e-05bc-49f2-920b-8ca052d857be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:13:49 np0005558241 nova_compute[248510]: 2025-12-13 09:13:49.758 248514 DEBUG oslo_concurrency.lockutils [req-92fd9e6b-933e-40fe-acea-03002fc10b07 req-d9e73b7e-05bc-49f2-920b-8ca052d857be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:13:49 np0005558241 nova_compute[248510]: 2025-12-13 09:13:49.759 248514 DEBUG nova.network.neutron [req-92fd9e6b-933e-40fe-acea-03002fc10b07 req-d9e73b7e-05bc-49f2-920b-8ca052d857be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Refreshing network info cache for port fc1f4300-f39f-49b1-8d88-be328fb018fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:13:49 np0005558241 nova_compute[248510]: 2025-12-13 09:13:49.949 248514 DEBUG nova.network.neutron [req-92fd9e6b-933e-40fe-acea-03002fc10b07 req-d9e73b7e-05bc-49f2-920b-8ca052d857be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:13:50 np0005558241 nova_compute[248510]: 2025-12-13 09:13:50.277 248514 DEBUG nova.network.neutron [req-92fd9e6b-933e-40fe-acea-03002fc10b07 req-d9e73b7e-05bc-49f2-920b-8ca052d857be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:13:50 np0005558241 nova_compute[248510]: 2025-12-13 09:13:50.303 248514 DEBUG oslo_concurrency.lockutils [req-92fd9e6b-933e-40fe-acea-03002fc10b07 req-d9e73b7e-05bc-49f2-920b-8ca052d857be 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:13:50 np0005558241 nova_compute[248510]: 2025-12-13 09:13:50.501 248514 DEBUG nova.network.neutron [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Successfully updated port: e86a788a-8070-426b-8b30-20117bfe712f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:13:50 np0005558241 nova_compute[248510]: 2025-12-13 09:13:50.532 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:13:50 np0005558241 nova_compute[248510]: 2025-12-13 09:13:50.532 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:13:50 np0005558241 nova_compute[248510]: 2025-12-13 09:13:50.532 248514 DEBUG nova.network.neutron [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:13:50 np0005558241 nova_compute[248510]: 2025-12-13 09:13:50.679 248514 DEBUG nova.network.neutron [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:13:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3382: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 1.9 MiB/s wr, 40 op/s
Dec 13 04:13:51 np0005558241 nova_compute[248510]: 2025-12-13 09:13:51.885 248514 DEBUG nova.compute.manager [req-3ef9e8a1-8092-4e9d-9bb2-cc0521667955 req-3f3c9b43-098c-495c-9b63-2096323e2fc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-changed-e86a788a-8070-426b-8b30-20117bfe712f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:13:51 np0005558241 nova_compute[248510]: 2025-12-13 09:13:51.886 248514 DEBUG nova.compute.manager [req-3ef9e8a1-8092-4e9d-9bb2-cc0521667955 req-3f3c9b43-098c-495c-9b63-2096323e2fc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Refreshing instance network info cache due to event network-changed-e86a788a-8070-426b-8b30-20117bfe712f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:13:51 np0005558241 nova_compute[248510]: 2025-12-13 09:13:51.886 248514 DEBUG oslo_concurrency.lockutils [req-3ef9e8a1-8092-4e9d-9bb2-cc0521667955 req-3f3c9b43-098c-495c-9b63-2096323e2fc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:13:52 np0005558241 nova_compute[248510]: 2025-12-13 09:13:52.326 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3383: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.484 248514 DEBUG nova.network.neutron [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updating instance_info_cache with network_info: [{"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.513 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.513 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Instance network_info: |[{"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.514 248514 DEBUG oslo_concurrency.lockutils [req-3ef9e8a1-8092-4e9d-9bb2-cc0521667955 req-3f3c9b43-098c-495c-9b63-2096323e2fc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.514 248514 DEBUG nova.network.neutron [req-3ef9e8a1-8092-4e9d-9bb2-cc0521667955 req-3f3c9b43-098c-495c-9b63-2096323e2fc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Refreshing network info cache for port e86a788a-8070-426b-8b30-20117bfe712f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.519 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Start _get_guest_xml network_info=[{"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.526 248514 WARNING nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.532 248514 DEBUG nova.virt.libvirt.host [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.533 248514 DEBUG nova.virt.libvirt.host [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.544 248514 DEBUG nova.virt.libvirt.host [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.545 248514 DEBUG nova.virt.libvirt.host [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.545 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.546 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.546 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.546 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.546 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.547 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.547 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.547 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.547 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.548 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.548 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.548 248514 DEBUG nova.virt.hardware [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.551 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:13:53 np0005558241 nova_compute[248510]: 2025-12-13 09:13:53.901 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:13:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1150298232' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.098 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.121 248514 DEBUG nova.storage.rbd_utils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.126 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:13:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:13:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1527906899' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.693 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.697 248514 DEBUG nova.virt.libvirt.vif [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1859978450',display_name='tempest-TestGettingAddress-server-1859978450',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1859978450',id=144,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-0ju51941',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:13:45Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.699 248514 DEBUG nova.network.os_vif_util [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.700 248514 DEBUG nova.network.os_vif_util [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:f2:dc,bridge_name='br-int',has_traffic_filtering=True,id=fc1f4300-f39f-49b1-8d88-be328fb018fe,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc1f4300-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.702 248514 DEBUG nova.virt.libvirt.vif [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1859978450',display_name='tempest-TestGettingAddress-server-1859978450',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1859978450',id=144,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-0ju51941',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:13:45Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.703 248514 DEBUG nova.network.os_vif_util [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.704 248514 DEBUG nova.network.os_vif_util [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:0d:c7,bridge_name='br-int',has_traffic_filtering=True,id=e86a788a-8070-426b-8b30-20117bfe712f,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape86a788a-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.706 248514 DEBUG nova.objects.instance [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.735 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:13:54 np0005558241 nova_compute[248510]:  <uuid>ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4</uuid>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:  <name>instance-00000090</name>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestGettingAddress-server-1859978450</nova:name>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:13:53</nova:creationTime>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:13:54 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:        <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:        <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:        <nova:port uuid="fc1f4300-f39f-49b1-8d88-be328fb018fe">
Dec 13 04:13:54 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:        <nova:port uuid="e86a788a-8070-426b-8b30-20117bfe712f">
Dec 13 04:13:54 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feb1:dc7" ipVersion="6"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feb1:dc7" ipVersion="6"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <entry name="serial">ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4</entry>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <entry name="uuid">ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4</entry>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk">
Dec 13 04:13:54 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:13:54 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk.config">
Dec 13 04:13:54 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:13:54 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:dc:f2:dc"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <target dev="tapfc1f4300-f3"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:b1:0d:c7"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <target dev="tape86a788a-80"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4/console.log" append="off"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:13:54 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:13:54 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:13:54 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:13:54 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.737 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Preparing to wait for external event network-vif-plugged-fc1f4300-f39f-49b1-8d88-be328fb018fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.738 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.738 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.738 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.739 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Preparing to wait for external event network-vif-plugged-e86a788a-8070-426b-8b30-20117bfe712f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.739 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.739 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.739 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.741 248514 DEBUG nova.virt.libvirt.vif [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1859978450',display_name='tempest-TestGettingAddress-server-1859978450',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1859978450',id=144,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-0ju51941',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:13:45Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.741 248514 DEBUG nova.network.os_vif_util [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.742 248514 DEBUG nova.network.os_vif_util [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:f2:dc,bridge_name='br-int',has_traffic_filtering=True,id=fc1f4300-f39f-49b1-8d88-be328fb018fe,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc1f4300-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.743 248514 DEBUG os_vif [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:f2:dc,bridge_name='br-int',has_traffic_filtering=True,id=fc1f4300-f39f-49b1-8d88-be328fb018fe,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc1f4300-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.743 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.744 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.745 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.749 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.750 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc1f4300-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.750 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc1f4300-f3, col_values=(('external_ids', {'iface-id': 'fc1f4300-f39f-49b1-8d88-be328fb018fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:f2:dc', 'vm-uuid': 'ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:54 np0005558241 NetworkManager[50376]: <info>  [1765617234.7535] manager: (tapfc1f4300-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/636)
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.753 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.756 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.761 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.763 248514 INFO os_vif [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:f2:dc,bridge_name='br-int',has_traffic_filtering=True,id=fc1f4300-f39f-49b1-8d88-be328fb018fe,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc1f4300-f3')#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.763 248514 DEBUG nova.virt.libvirt.vif [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1859978450',display_name='tempest-TestGettingAddress-server-1859978450',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1859978450',id=144,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-0ju51941',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:13:45Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.764 248514 DEBUG nova.network.os_vif_util [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.765 248514 DEBUG nova.network.os_vif_util [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:0d:c7,bridge_name='br-int',has_traffic_filtering=True,id=e86a788a-8070-426b-8b30-20117bfe712f,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape86a788a-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.765 248514 DEBUG os_vif [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:0d:c7,bridge_name='br-int',has_traffic_filtering=True,id=e86a788a-8070-426b-8b30-20117bfe712f,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape86a788a-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.766 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.766 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.766 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.769 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.770 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape86a788a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.771 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape86a788a-80, col_values=(('external_ids', {'iface-id': 'e86a788a-8070-426b-8b30-20117bfe712f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b1:0d:c7', 'vm-uuid': 'ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.772 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:54 np0005558241 NetworkManager[50376]: <info>  [1765617234.7732] manager: (tape86a788a-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/637)
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.774 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.778 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.779 248514 INFO os_vif [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:0d:c7,bridge_name='br-int',has_traffic_filtering=True,id=e86a788a-8070-426b-8b30-20117bfe712f,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape86a788a-80')#033[00m
Dec 13 04:13:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3384: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.853 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.853 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.854 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:dc:f2:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.854 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:b1:0d:c7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.855 248514 INFO nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Using config drive#033[00m
Dec 13 04:13:54 np0005558241 nova_compute[248510]: 2025-12-13 09:13:54.883 248514 DEBUG nova.storage.rbd_utils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:13:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:55.450 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:55.451 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:55.452 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:56 np0005558241 nova_compute[248510]: 2025-12-13 09:13:56.062 248514 INFO nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Creating config drive at /var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4/disk.config#033[00m
Dec 13 04:13:56 np0005558241 nova_compute[248510]: 2025-12-13 09:13:56.067 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyrj_42al execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:13:56 np0005558241 nova_compute[248510]: 2025-12-13 09:13:56.220 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyrj_42al" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:13:56 np0005558241 nova_compute[248510]: 2025-12-13 09:13:56.250 248514 DEBUG nova.storage.rbd_utils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:13:56 np0005558241 nova_compute[248510]: 2025-12-13 09:13:56.253 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4/disk.config ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:13:56 np0005558241 nova_compute[248510]: 2025-12-13 09:13:56.398 248514 DEBUG oslo_concurrency.processutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4/disk.config ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:13:56 np0005558241 nova_compute[248510]: 2025-12-13 09:13:56.399 248514 INFO nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Deleting local config drive /var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4/disk.config because it was imported into RBD.#033[00m
Dec 13 04:13:56 np0005558241 NetworkManager[50376]: <info>  [1765617236.4655] manager: (tapfc1f4300-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/638)
Dec 13 04:13:56 np0005558241 kernel: tapfc1f4300-f3: entered promiscuous mode
Dec 13 04:13:56 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:56Z|01530|binding|INFO|Claiming lport fc1f4300-f39f-49b1-8d88-be328fb018fe for this chassis.
Dec 13 04:13:56 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:56Z|01531|binding|INFO|fc1f4300-f39f-49b1-8d88-be328fb018fe: Claiming fa:16:3e:dc:f2:dc 10.100.0.9
Dec 13 04:13:56 np0005558241 nova_compute[248510]: 2025-12-13 09:13:56.516 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.526 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:f2:dc 10.100.0.9'], port_security=['fa:16:3e:dc:f2:dc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a658b7a7-58f0-4691-acef-c04fab4ff88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9e8c352-1166-4841-b5aa-79bfa0acca5d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=fc1f4300-f39f-49b1-8d88-be328fb018fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:13:56 np0005558241 NetworkManager[50376]: <info>  [1765617236.5280] manager: (tape86a788a-80): new Tun device (/org/freedesktop/NetworkManager/Devices/639)
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.527 158419 INFO neutron.agent.ovn.metadata.agent [-] Port fc1f4300-f39f-49b1-8d88-be328fb018fe in datapath ef1a3009-9f3e-4c4a-9ee4-04bec3653434 bound to our chassis#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.530 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef1a3009-9f3e-4c4a-9ee4-04bec3653434#033[00m
Dec 13 04:13:56 np0005558241 kernel: tape86a788a-80: entered promiscuous mode
Dec 13 04:13:56 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:56Z|01532|binding|INFO|Setting lport fc1f4300-f39f-49b1-8d88-be328fb018fe ovn-installed in OVS
Dec 13 04:13:56 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:56Z|01533|binding|INFO|Setting lport fc1f4300-f39f-49b1-8d88-be328fb018fe up in Southbound
Dec 13 04:13:56 np0005558241 nova_compute[248510]: 2025-12-13 09:13:56.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:56 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:56Z|01534|if_status|INFO|Dropped 5 log messages in last 143 seconds (most recently, 143 seconds ago) due to excessive rate
Dec 13 04:13:56 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:56Z|01535|if_status|INFO|Not updating pb chassis for e86a788a-8070-426b-8b30-20117bfe712f now as sb is readonly
Dec 13 04:13:56 np0005558241 nova_compute[248510]: 2025-12-13 09:13:56.544 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:56 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:56Z|01536|binding|INFO|Claiming lport e86a788a-8070-426b-8b30-20117bfe712f for this chassis.
Dec 13 04:13:56 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:56Z|01537|binding|INFO|e86a788a-8070-426b-8b30-20117bfe712f: Claiming fa:16:3e:b1:0d:c7 2001:db8:0:1:f816:3eff:feb1:dc7 2001:db8::f816:3eff:feb1:dc7
Dec 13 04:13:56 np0005558241 systemd-udevd[397010]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:13:56 np0005558241 systemd-udevd[397011]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.555 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:0d:c7 2001:db8:0:1:f816:3eff:feb1:dc7 2001:db8::f816:3eff:feb1:dc7'], port_security=['fa:16:3e:b1:0d:c7 2001:db8:0:1:f816:3eff:feb1:dc7 2001:db8::f816:3eff:feb1:dc7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feb1:dc7/64 2001:db8::f816:3eff:feb1:dc7/64', 'neutron:device_id': 'ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a658b7a7-58f0-4691-acef-c04fab4ff88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d42d98d-b09c-47fa-98dd-69f99f24cca5, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=e86a788a-8070-426b-8b30-20117bfe712f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.554 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed613a1-495a-4b02-8979-d57e59c41d54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:56 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:56Z|01538|binding|INFO|Setting lport e86a788a-8070-426b-8b30-20117bfe712f ovn-installed in OVS
Dec 13 04:13:56 np0005558241 ovn_controller[148476]: 2025-12-13T09:13:56Z|01539|binding|INFO|Setting lport e86a788a-8070-426b-8b30-20117bfe712f up in Southbound
Dec 13 04:13:56 np0005558241 nova_compute[248510]: 2025-12-13 09:13:56.560 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:56 np0005558241 NetworkManager[50376]: <info>  [1765617236.5650] device (tape86a788a-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:13:56 np0005558241 NetworkManager[50376]: <info>  [1765617236.5662] device (tape86a788a-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:13:56 np0005558241 NetworkManager[50376]: <info>  [1765617236.5668] device (tapfc1f4300-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:13:56 np0005558241 NetworkManager[50376]: <info>  [1765617236.5674] device (tapfc1f4300-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:13:56 np0005558241 systemd-machined[210538]: New machine qemu-175-instance-00000090.
Dec 13 04:13:56 np0005558241 systemd[1]: Started Virtual Machine qemu-175-instance-00000090.
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.597 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a8447734-c857-48f1-ac19-0fa2267a9354]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.601 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[cb407d11-f1b7-48ef-95aa-c4e78bca0718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.642 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[538dfb14-6550-4573-bd43-7a6dcab91569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.659 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[21f530ac-f11a-4ac8-b09b-fe2559d57f5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef1a3009-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:51:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 436], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977600, 'reachable_time': 27921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397025, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.686 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4dca82-215d-4e92-a4c5-410c707a0716]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapef1a3009-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977613, 'tstamp': 977613}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397027, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapef1a3009-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977618, 'tstamp': 977618}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397027, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.688 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef1a3009-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:56 np0005558241 nova_compute[248510]: 2025-12-13 09:13:56.690 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:56 np0005558241 nova_compute[248510]: 2025-12-13 09:13:56.691 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.694 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef1a3009-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.694 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.695 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef1a3009-90, col_values=(('external_ids', {'iface-id': '365ef2b2-09ce-4255-947a-81f3896d22ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.695 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.697 158419 INFO neutron.agent.ovn.metadata.agent [-] Port e86a788a-8070-426b-8b30-20117bfe712f in datapath ff80f4de-0e76-47f2-b06e-6d3900b63130 unbound from our chassis#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.699 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ff80f4de-0e76-47f2-b06e-6d3900b63130#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.721 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a504d476-558f-4043-a845-dea8a4348612]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.756 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8178bfa8-9ce8-488c-bbca-3aabbebc8ff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.760 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[097636bc-f784-4a2f-a52b-115716199f14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.796 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d61c512a-911f-425f-92f6-885a5c86d86b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.819 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3e7fe4-285f-4bef-ac2a-044e996fd23e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff80f4de-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:6b:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977703, 'reachable_time': 20848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397049, 'error': None, 'target': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3385: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.836 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c1bc63ea-271f-40a7-985b-2575179b9d0d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapff80f4de-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977720, 'tstamp': 977720}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397060, 'error': None, 'target': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.837 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff80f4de-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:56 np0005558241 nova_compute[248510]: 2025-12-13 09:13:56.839 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:56 np0005558241 nova_compute[248510]: 2025-12-13 09:13:56.840 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.840 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff80f4de-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.840 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.841 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapff80f4de-00, col_values=(('external_ids', {'iface-id': 'a155cc1b-19d3-4430-9d83-b19e30ef1a95'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:13:56 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:13:56.841 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:13:56 np0005558241 nova_compute[248510]: 2025-12-13 09:13:56.992 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617236.9916697, ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:13:56 np0005558241 nova_compute[248510]: 2025-12-13 09:13:56.992 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] VM Started (Lifecycle Event)#033[00m
Dec 13 04:13:57 np0005558241 nova_compute[248510]: 2025-12-13 09:13:57.014 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:13:57 np0005558241 nova_compute[248510]: 2025-12-13 09:13:57.019 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617236.9917843, ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:13:57 np0005558241 nova_compute[248510]: 2025-12-13 09:13:57.019 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:13:57 np0005558241 nova_compute[248510]: 2025-12-13 09:13:57.039 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:13:57 np0005558241 nova_compute[248510]: 2025-12-13 09:13:57.043 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:13:57 np0005558241 nova_compute[248510]: 2025-12-13 09:13:57.062 248514 DEBUG nova.compute.manager [req-289fbb1f-79c3-4b5f-9822-e4e35364cf7f req-2a1448a4-e83c-44fc-88d1-ed515ee8a8e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-plugged-fc1f4300-f39f-49b1-8d88-be328fb018fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:13:57 np0005558241 nova_compute[248510]: 2025-12-13 09:13:57.064 248514 DEBUG oslo_concurrency.lockutils [req-289fbb1f-79c3-4b5f-9822-e4e35364cf7f req-2a1448a4-e83c-44fc-88d1-ed515ee8a8e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:57 np0005558241 nova_compute[248510]: 2025-12-13 09:13:57.065 248514 DEBUG oslo_concurrency.lockutils [req-289fbb1f-79c3-4b5f-9822-e4e35364cf7f req-2a1448a4-e83c-44fc-88d1-ed515ee8a8e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:57 np0005558241 nova_compute[248510]: 2025-12-13 09:13:57.065 248514 DEBUG oslo_concurrency.lockutils [req-289fbb1f-79c3-4b5f-9822-e4e35364cf7f req-2a1448a4-e83c-44fc-88d1-ed515ee8a8e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:57 np0005558241 nova_compute[248510]: 2025-12-13 09:13:57.066 248514 DEBUG nova.compute.manager [req-289fbb1f-79c3-4b5f-9822-e4e35364cf7f req-2a1448a4-e83c-44fc-88d1-ed515ee8a8e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Processing event network-vif-plugged-fc1f4300-f39f-49b1-8d88-be328fb018fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:13:57 np0005558241 nova_compute[248510]: 2025-12-13 09:13:57.069 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:13:57 np0005558241 nova_compute[248510]: 2025-12-13 09:13:57.196 248514 DEBUG nova.network.neutron [req-3ef9e8a1-8092-4e9d-9bb2-cc0521667955 req-3f3c9b43-098c-495c-9b63-2096323e2fc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updated VIF entry in instance network info cache for port e86a788a-8070-426b-8b30-20117bfe712f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:13:57 np0005558241 nova_compute[248510]: 2025-12-13 09:13:57.197 248514 DEBUG nova.network.neutron [req-3ef9e8a1-8092-4e9d-9bb2-cc0521667955 req-3f3c9b43-098c-495c-9b63-2096323e2fc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updating instance_info_cache with network_info: [{"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:13:57 np0005558241 nova_compute[248510]: 2025-12-13 09:13:57.218 248514 DEBUG oslo_concurrency.lockutils [req-3ef9e8a1-8092-4e9d-9bb2-cc0521667955 req-3f3c9b43-098c-495c-9b63-2096323e2fc8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:13:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3386: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:13:58 np0005558241 nova_compute[248510]: 2025-12-13 09:13:58.904 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.164 248514 DEBUG nova.compute.manager [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-plugged-fc1f4300-f39f-49b1-8d88-be328fb018fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.165 248514 DEBUG oslo_concurrency.lockutils [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.165 248514 DEBUG oslo_concurrency.lockutils [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.166 248514 DEBUG oslo_concurrency.lockutils [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.166 248514 DEBUG nova.compute.manager [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] No event matching network-vif-plugged-fc1f4300-f39f-49b1-8d88-be328fb018fe in dict_keys([('network-vif-plugged', 'e86a788a-8070-426b-8b30-20117bfe712f')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.166 248514 WARNING nova.compute.manager [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received unexpected event network-vif-plugged-fc1f4300-f39f-49b1-8d88-be328fb018fe for instance with vm_state building and task_state spawning.#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.167 248514 DEBUG nova.compute.manager [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-plugged-e86a788a-8070-426b-8b30-20117bfe712f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.167 248514 DEBUG oslo_concurrency.lockutils [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.168 248514 DEBUG oslo_concurrency.lockutils [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.168 248514 DEBUG oslo_concurrency.lockutils [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.169 248514 DEBUG nova.compute.manager [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Processing event network-vif-plugged-e86a788a-8070-426b-8b30-20117bfe712f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.169 248514 DEBUG nova.compute.manager [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-plugged-e86a788a-8070-426b-8b30-20117bfe712f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.169 248514 DEBUG oslo_concurrency.lockutils [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.170 248514 DEBUG oslo_concurrency.lockutils [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.170 248514 DEBUG oslo_concurrency.lockutils [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.171 248514 DEBUG nova.compute.manager [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] No waiting events found dispatching network-vif-plugged-e86a788a-8070-426b-8b30-20117bfe712f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.172 248514 WARNING nova.compute.manager [req-121354ed-d87b-4e30-b39d-db9db4e29cfa req-2ab26f8b-56da-4cdb-be2d-dfe7ee7891f6 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received unexpected event network-vif-plugged-e86a788a-8070-426b-8b30-20117bfe712f for instance with vm_state building and task_state spawning.#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.174 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.179 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617239.1788175, ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.180 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.185 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.190 248514 INFO nova.virt.libvirt.driver [-] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Instance spawned successfully.#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.191 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.209 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.217 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.223 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.224 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.225 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.225 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.225 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.226 248514 DEBUG nova.virt.libvirt.driver [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.238 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.291 248514 INFO nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Took 13.30 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.292 248514 DEBUG nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.357 248514 INFO nova.compute.manager [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Took 14.88 seconds to build instance.#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.378 248514 DEBUG oslo_concurrency.lockutils [None req-bdb0d4f9-e559-4bd7-a3b6-0ac8efb2ff05 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:13:59 np0005558241 nova_compute[248510]: 2025-12-13 09:13:59.773 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3387: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 MiB/s wr, 30 op/s
Dec 13 04:14:01 np0005558241 podman[397081]: 2025-12-13 09:14:01.002806617 +0000 UTC m=+0.070923300 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:14:01 np0005558241 podman[397080]: 2025-12-13 09:14:01.012257864 +0000 UTC m=+0.082390277 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Dec 13 04:14:01 np0005558241 podman[397079]: 2025-12-13 09:14:01.069236513 +0000 UTC m=+0.145681815 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:14:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3388: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 13 KiB/s wr, 14 op/s
Dec 13 04:14:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Dec 13 04:14:03 np0005558241 nova_compute[248510]: 2025-12-13 09:14:03.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:04 np0005558241 nova_compute[248510]: 2025-12-13 09:14:04.777 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3389: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 104 op/s
Dec 13 04:14:05 np0005558241 nova_compute[248510]: 2025-12-13 09:14:05.472 248514 DEBUG nova.compute.manager [req-9e2f9c87-5494-48f9-9088-425681042cde req-9c5fb91a-7bbf-4314-ae52-0795505f1368 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-changed-fc1f4300-f39f-49b1-8d88-be328fb018fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:14:05 np0005558241 nova_compute[248510]: 2025-12-13 09:14:05.473 248514 DEBUG nova.compute.manager [req-9e2f9c87-5494-48f9-9088-425681042cde req-9c5fb91a-7bbf-4314-ae52-0795505f1368 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Refreshing instance network info cache due to event network-changed-fc1f4300-f39f-49b1-8d88-be328fb018fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:14:05 np0005558241 nova_compute[248510]: 2025-12-13 09:14:05.473 248514 DEBUG oslo_concurrency.lockutils [req-9e2f9c87-5494-48f9-9088-425681042cde req-9c5fb91a-7bbf-4314-ae52-0795505f1368 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:14:05 np0005558241 nova_compute[248510]: 2025-12-13 09:14:05.473 248514 DEBUG oslo_concurrency.lockutils [req-9e2f9c87-5494-48f9-9088-425681042cde req-9c5fb91a-7bbf-4314-ae52-0795505f1368 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:14:05 np0005558241 nova_compute[248510]: 2025-12-13 09:14:05.474 248514 DEBUG nova.network.neutron [req-9e2f9c87-5494-48f9-9088-425681042cde req-9c5fb91a-7bbf-4314-ae52-0795505f1368 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Refreshing network info cache for port fc1f4300-f39f-49b1-8d88-be328fb018fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:14:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3390: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 104 op/s
Dec 13 04:14:07 np0005558241 nova_compute[248510]: 2025-12-13 09:14:07.039 248514 DEBUG nova.network.neutron [req-9e2f9c87-5494-48f9-9088-425681042cde req-9c5fb91a-7bbf-4314-ae52-0795505f1368 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updated VIF entry in instance network info cache for port fc1f4300-f39f-49b1-8d88-be328fb018fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:14:07 np0005558241 nova_compute[248510]: 2025-12-13 09:14:07.040 248514 DEBUG nova.network.neutron [req-9e2f9c87-5494-48f9-9088-425681042cde req-9c5fb91a-7bbf-4314-ae52-0795505f1368 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updating instance_info_cache with network_info: [{"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:14:07 np0005558241 nova_compute[248510]: 2025-12-13 09:14:07.067 248514 DEBUG oslo_concurrency.lockutils [req-9e2f9c87-5494-48f9-9088-425681042cde req-9c5fb91a-7bbf-4314-ae52-0795505f1368 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:14:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3391: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 120 op/s
Dec 13 04:14:08 np0005558241 nova_compute[248510]: 2025-12-13 09:14:08.909 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:14:09
Dec 13 04:14:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:14:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:14:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', '.rgw.root', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.meta']
Dec 13 04:14:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:14:09 np0005558241 nova_compute[248510]: 2025-12-13 09:14:09.779 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:14:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:14:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:14:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:14:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:14:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:14:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3392: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 134 op/s
Dec 13 04:14:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:14:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:14:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:14:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:14:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:14:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:14:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:14:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:14:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:14:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:14:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:14:11Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dc:f2:dc 10.100.0.9
Dec 13 04:14:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:14:11Z|00200|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dc:f2:dc 10.100.0.9
Dec 13 04:14:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3393: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 KiB/s wr, 120 op/s
Dec 13 04:14:13 np0005558241 nova_compute[248510]: 2025-12-13 09:14:13.912 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:14 np0005558241 nova_compute[248510]: 2025-12-13 09:14:14.783 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3394: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 185 op/s
Dec 13 04:14:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:14:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2577487572' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:14:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:14:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2577487572' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:14:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3395: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 411 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Dec 13 04:14:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3396: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 411 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Dec 13 04:14:18 np0005558241 nova_compute[248510]: 2025-12-13 09:14:18.914 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:19 np0005558241 nova_compute[248510]: 2025-12-13 09:14:19.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:14:19 np0005558241 nova_compute[248510]: 2025-12-13 09:14:19.786 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:20 np0005558241 nova_compute[248510]: 2025-12-13 09:14:20.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:14:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3397: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015327102653432375 of space, bias 1.0, pg target 0.45981307960297124 quantized to 32 (current 32)
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697272279419812 of space, bias 1.0, pg target 0.20091816838259435 quantized to 32 (current 32)
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:14:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3398: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 04:14:23 np0005558241 nova_compute[248510]: 2025-12-13 09:14:23.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:24 np0005558241 nova_compute[248510]: 2025-12-13 09:14:24.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3399: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 04:14:25 np0005558241 nova_compute[248510]: 2025-12-13 09:14:25.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:14:25 np0005558241 nova_compute[248510]: 2025-12-13 09:14:25.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:14:25 np0005558241 nova_compute[248510]: 2025-12-13 09:14:25.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:14:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:25.856 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:14:25 np0005558241 nova_compute[248510]: 2025-12-13 09:14:25.857 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:25.858 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:14:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:25.859 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.198 248514 DEBUG nova.compute.manager [req-1fd6cf21-dce4-421f-b8fa-df16695d4f8e req-320fa931-8c2a-4a6b-96ec-bae69523da22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-changed-fc1f4300-f39f-49b1-8d88-be328fb018fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.199 248514 DEBUG nova.compute.manager [req-1fd6cf21-dce4-421f-b8fa-df16695d4f8e req-320fa931-8c2a-4a6b-96ec-bae69523da22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Refreshing instance network info cache due to event network-changed-fc1f4300-f39f-49b1-8d88-be328fb018fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.199 248514 DEBUG oslo_concurrency.lockutils [req-1fd6cf21-dce4-421f-b8fa-df16695d4f8e req-320fa931-8c2a-4a6b-96ec-bae69523da22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.199 248514 DEBUG oslo_concurrency.lockutils [req-1fd6cf21-dce4-421f-b8fa-df16695d4f8e req-320fa931-8c2a-4a6b-96ec-bae69523da22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.200 248514 DEBUG nova.network.neutron [req-1fd6cf21-dce4-421f-b8fa-df16695d4f8e req-320fa931-8c2a-4a6b-96ec-bae69523da22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Refreshing network info cache for port fc1f4300-f39f-49b1-8d88-be328fb018fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.297 248514 DEBUG oslo_concurrency.lockutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.298 248514 DEBUG oslo_concurrency.lockutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.299 248514 DEBUG oslo_concurrency.lockutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.299 248514 DEBUG oslo_concurrency.lockutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.299 248514 DEBUG oslo_concurrency.lockutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.300 248514 INFO nova.compute.manager [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Terminating instance#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.301 248514 DEBUG nova.compute.manager [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:14:26 np0005558241 kernel: tapfc1f4300-f3 (unregistering): left promiscuous mode
Dec 13 04:14:26 np0005558241 NetworkManager[50376]: <info>  [1765617266.3544] device (tapfc1f4300-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.354 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.355 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.355 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.356 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 707e75d9-f6d2-413d-a727-c3ecbfea90c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.365 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:26 np0005558241 ovn_controller[148476]: 2025-12-13T09:14:26Z|01540|binding|INFO|Releasing lport fc1f4300-f39f-49b1-8d88-be328fb018fe from this chassis (sb_readonly=0)
Dec 13 04:14:26 np0005558241 ovn_controller[148476]: 2025-12-13T09:14:26Z|01541|binding|INFO|Setting lport fc1f4300-f39f-49b1-8d88-be328fb018fe down in Southbound
Dec 13 04:14:26 np0005558241 ovn_controller[148476]: 2025-12-13T09:14:26Z|01542|binding|INFO|Removing iface tapfc1f4300-f3 ovn-installed in OVS
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.368 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.375 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:f2:dc 10.100.0.9'], port_security=['fa:16:3e:dc:f2:dc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a658b7a7-58f0-4691-acef-c04fab4ff88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9e8c352-1166-4841-b5aa-79bfa0acca5d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=fc1f4300-f39f-49b1-8d88-be328fb018fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.377 158419 INFO neutron.agent.ovn.metadata.agent [-] Port fc1f4300-f39f-49b1-8d88-be328fb018fe in datapath ef1a3009-9f3e-4c4a-9ee4-04bec3653434 unbound from our chassis#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.380 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef1a3009-9f3e-4c4a-9ee4-04bec3653434#033[00m
Dec 13 04:14:26 np0005558241 kernel: tape86a788a-80 (unregistering): left promiscuous mode
Dec 13 04:14:26 np0005558241 NetworkManager[50376]: <info>  [1765617266.3873] device (tape86a788a-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.389 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.396 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c45843-05f3-43c8-9879-9c4d6e4a1c67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:26 np0005558241 ovn_controller[148476]: 2025-12-13T09:14:26Z|01543|binding|INFO|Releasing lport e86a788a-8070-426b-8b30-20117bfe712f from this chassis (sb_readonly=0)
Dec 13 04:14:26 np0005558241 ovn_controller[148476]: 2025-12-13T09:14:26Z|01544|binding|INFO|Setting lport e86a788a-8070-426b-8b30-20117bfe712f down in Southbound
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.400 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:26 np0005558241 ovn_controller[148476]: 2025-12-13T09:14:26Z|01545|binding|INFO|Removing iface tape86a788a-80 ovn-installed in OVS
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.402 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.408 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:0d:c7 2001:db8:0:1:f816:3eff:feb1:dc7 2001:db8::f816:3eff:feb1:dc7'], port_security=['fa:16:3e:b1:0d:c7 2001:db8:0:1:f816:3eff:feb1:dc7 2001:db8::f816:3eff:feb1:dc7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feb1:dc7/64 2001:db8::f816:3eff:feb1:dc7/64', 'neutron:device_id': 'ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a658b7a7-58f0-4691-acef-c04fab4ff88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d42d98d-b09c-47fa-98dd-69f99f24cca5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=e86a788a-8070-426b-8b30-20117bfe712f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.417 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.441 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d58259-eaf9-4ec3-8a72-f629bcb07a81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.444 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bf066882-4c5c-4d3a-b439-352df9212621]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:26 np0005558241 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d00000090.scope: Deactivated successfully.
Dec 13 04:14:26 np0005558241 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d00000090.scope: Consumed 13.340s CPU time.
Dec 13 04:14:26 np0005558241 systemd-machined[210538]: Machine qemu-175-instance-00000090 terminated.
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.484 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0f568865-d44b-42ba-9dd1-8340c1c687d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.503 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[50fc0df2-efb3-439b-afe0-430139a32570]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef1a3009-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c4:51:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 436], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977600, 'reachable_time': 27921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397165, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.525 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4b261254-b014-4ad1-a0e9-2882d77b0fc7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapef1a3009-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977613, 'tstamp': 977613}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397166, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapef1a3009-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977618, 'tstamp': 977618}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397166, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.528 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef1a3009-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.552 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:26 np0005558241 NetworkManager[50376]: <info>  [1765617266.5570] manager: (tape86a788a-80): new Tun device (/org/freedesktop/NetworkManager/Devices/640)
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.561 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.561 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef1a3009-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.562 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.562 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef1a3009-90, col_values=(('external_ids', {'iface-id': '365ef2b2-09ce-4255-947a-81f3896d22ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.563 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.564 158419 INFO neutron.agent.ovn.metadata.agent [-] Port e86a788a-8070-426b-8b30-20117bfe712f in datapath ff80f4de-0e76-47f2-b06e-6d3900b63130 unbound from our chassis#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.565 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ff80f4de-0e76-47f2-b06e-6d3900b63130#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.571 248514 INFO nova.virt.libvirt.driver [-] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Instance destroyed successfully.#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.572 248514 DEBUG nova.objects.instance [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.580 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[117bdafc-1383-4fca-9950-b20890375a9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.596 248514 DEBUG nova.virt.libvirt.vif [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1859978450',display_name='tempest-TestGettingAddress-server-1859978450',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1859978450',id=144,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:13:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-0ju51941',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:13:59Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.597 248514 DEBUG nova.network.os_vif_util [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.598 248514 DEBUG nova.network.os_vif_util [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dc:f2:dc,bridge_name='br-int',has_traffic_filtering=True,id=fc1f4300-f39f-49b1-8d88-be328fb018fe,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc1f4300-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.598 248514 DEBUG os_vif [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:f2:dc,bridge_name='br-int',has_traffic_filtering=True,id=fc1f4300-f39f-49b1-8d88-be328fb018fe,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc1f4300-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.599 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.600 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc1f4300-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.601 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.603 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.605 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.608 248514 INFO os_vif [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:f2:dc,bridge_name='br-int',has_traffic_filtering=True,id=fc1f4300-f39f-49b1-8d88-be328fb018fe,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc1f4300-f3')#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.609 248514 DEBUG nova.virt.libvirt.vif [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:13:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1859978450',display_name='tempest-TestGettingAddress-server-1859978450',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1859978450',id=144,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:13:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-0ju51941',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:13:59Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.609 248514 DEBUG nova.network.os_vif_util [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.610 248514 DEBUG nova.network.os_vif_util [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:0d:c7,bridge_name='br-int',has_traffic_filtering=True,id=e86a788a-8070-426b-8b30-20117bfe712f,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape86a788a-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.610 248514 DEBUG os_vif [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:0d:c7,bridge_name='br-int',has_traffic_filtering=True,id=e86a788a-8070-426b-8b30-20117bfe712f,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape86a788a-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.611 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.612 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape86a788a-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.613 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.614 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.615 248514 INFO os_vif [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:0d:c7,bridge_name='br-int',has_traffic_filtering=True,id=e86a788a-8070-426b-8b30-20117bfe712f,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape86a788a-80')#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.618 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[70521fda-b164-4879-83d0-ff68d18f59b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.620 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[fc939233-6f33-4a19-add5-4bdb04764ed7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.655 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[a78f0251-e9ee-4759-9884-a814b67cc937]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.674 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b9e5af-feec-457b-a034-42086006f2d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff80f4de-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:6b:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977703, 'reachable_time': 20848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397215, 'error': None, 'target': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.691 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5ecac775-3b63-4c23-af70-ac0dd234fac3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapff80f4de-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977720, 'tstamp': 977720}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397216, 'error': None, 'target': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.693 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff80f4de-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.694 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.696 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.697 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff80f4de-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.697 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.697 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapff80f4de-00, col_values=(('external_ids', {'iface-id': 'a155cc1b-19d3-4430-9d83-b19e30ef1a95'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:14:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:26.698 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.711 248514 DEBUG nova.compute.manager [req-f05f3d7c-5e2f-4e03-812e-c897970d5766 req-866e9901-3412-4a2c-82c3-2780e415dee4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-unplugged-fc1f4300-f39f-49b1-8d88-be328fb018fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.711 248514 DEBUG oslo_concurrency.lockutils [req-f05f3d7c-5e2f-4e03-812e-c897970d5766 req-866e9901-3412-4a2c-82c3-2780e415dee4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.712 248514 DEBUG oslo_concurrency.lockutils [req-f05f3d7c-5e2f-4e03-812e-c897970d5766 req-866e9901-3412-4a2c-82c3-2780e415dee4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.712 248514 DEBUG oslo_concurrency.lockutils [req-f05f3d7c-5e2f-4e03-812e-c897970d5766 req-866e9901-3412-4a2c-82c3-2780e415dee4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.712 248514 DEBUG nova.compute.manager [req-f05f3d7c-5e2f-4e03-812e-c897970d5766 req-866e9901-3412-4a2c-82c3-2780e415dee4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] No waiting events found dispatching network-vif-unplugged-fc1f4300-f39f-49b1-8d88-be328fb018fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.712 248514 DEBUG nova.compute.manager [req-f05f3d7c-5e2f-4e03-812e-c897970d5766 req-866e9901-3412-4a2c-82c3-2780e415dee4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-unplugged-fc1f4300-f39f-49b1-8d88-be328fb018fe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:14:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3400: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s rd, 12 KiB/s wr, 1 op/s
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.864 248514 INFO nova.virt.libvirt.driver [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Deleting instance files /var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_del#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.865 248514 INFO nova.virt.libvirt.driver [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Deletion of /var/lib/nova/instances/ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4_del complete#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.947 248514 INFO nova.compute.manager [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.948 248514 DEBUG oslo.service.loopingcall [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.948 248514 DEBUG nova.compute.manager [-] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:14:26 np0005558241 nova_compute[248510]: 2025-12-13 09:14:26.948 248514 DEBUG nova.network.neutron [-] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:14:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.309 248514 DEBUG nova.network.neutron [-] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.313 248514 DEBUG nova.compute.manager [req-98668b05-d762-45c2-be55-84515e6e2468 req-037ba3b4-f797-48a3-9210-879329df36a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-deleted-e86a788a-8070-426b-8b30-20117bfe712f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.313 248514 INFO nova.compute.manager [req-98668b05-d762-45c2-be55-84515e6e2468 req-037ba3b4-f797-48a3-9210-879329df36a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Neutron deleted interface e86a788a-8070-426b-8b30-20117bfe712f; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.313 248514 DEBUG nova.network.neutron [req-98668b05-d762-45c2-be55-84515e6e2468 req-037ba3b4-f797-48a3-9210-879329df36a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updating instance_info_cache with network_info: [{"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.321 248514 DEBUG nova.network.neutron [req-1fd6cf21-dce4-421f-b8fa-df16695d4f8e req-320fa931-8c2a-4a6b-96ec-bae69523da22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updated VIF entry in instance network info cache for port fc1f4300-f39f-49b1-8d88-be328fb018fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.321 248514 DEBUG nova.network.neutron [req-1fd6cf21-dce4-421f-b8fa-df16695d4f8e req-320fa931-8c2a-4a6b-96ec-bae69523da22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Updating instance_info_cache with network_info: [{"id": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "address": "fa:16:3e:dc:f2:dc", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc1f4300-f3", "ovs_interfaceid": "fc1f4300-f39f-49b1-8d88-be328fb018fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e86a788a-8070-426b-8b30-20117bfe712f", "address": "fa:16:3e:b1:0d:c7", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb1:dc7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape86a788a-80", "ovs_interfaceid": "e86a788a-8070-426b-8b30-20117bfe712f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.346 248514 INFO nova.compute.manager [-] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Took 1.40 seconds to deallocate network for instance.#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.347 248514 DEBUG oslo_concurrency.lockutils [req-1fd6cf21-dce4-421f-b8fa-df16695d4f8e req-320fa931-8c2a-4a6b-96ec-bae69523da22 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.352 248514 DEBUG nova.compute.manager [req-98668b05-d762-45c2-be55-84515e6e2468 req-037ba3b4-f797-48a3-9210-879329df36a1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Detach interface failed, port_id=e86a788a-8070-426b-8b30-20117bfe712f, reason: Instance ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.395 248514 DEBUG oslo_concurrency.lockutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.395 248514 DEBUG oslo_concurrency.lockutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.487 248514 DEBUG oslo_concurrency.processutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.798 248514 DEBUG nova.compute.manager [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-plugged-fc1f4300-f39f-49b1-8d88-be328fb018fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.799 248514 DEBUG oslo_concurrency.lockutils [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.799 248514 DEBUG oslo_concurrency.lockutils [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.799 248514 DEBUG oslo_concurrency.lockutils [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.799 248514 DEBUG nova.compute.manager [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] No waiting events found dispatching network-vif-plugged-fc1f4300-f39f-49b1-8d88-be328fb018fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.799 248514 WARNING nova.compute.manager [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received unexpected event network-vif-plugged-fc1f4300-f39f-49b1-8d88-be328fb018fe for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.800 248514 DEBUG nova.compute.manager [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-unplugged-e86a788a-8070-426b-8b30-20117bfe712f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.800 248514 DEBUG oslo_concurrency.lockutils [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.800 248514 DEBUG oslo_concurrency.lockutils [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.800 248514 DEBUG oslo_concurrency.lockutils [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.800 248514 DEBUG nova.compute.manager [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] No waiting events found dispatching network-vif-unplugged-e86a788a-8070-426b-8b30-20117bfe712f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.800 248514 WARNING nova.compute.manager [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received unexpected event network-vif-unplugged-e86a788a-8070-426b-8b30-20117bfe712f for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.800 248514 DEBUG nova.compute.manager [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-plugged-e86a788a-8070-426b-8b30-20117bfe712f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.801 248514 DEBUG oslo_concurrency.lockutils [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.801 248514 DEBUG oslo_concurrency.lockutils [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.801 248514 DEBUG oslo_concurrency.lockutils [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.801 248514 DEBUG nova.compute.manager [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] No waiting events found dispatching network-vif-plugged-e86a788a-8070-426b-8b30-20117bfe712f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.801 248514 WARNING nova.compute.manager [req-58a484ec-e85f-40f3-b278-5aba4e4ba5e1 req-ec76ccdf-6291-4e78-b0be-2c92c3508705 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received unexpected event network-vif-plugged-e86a788a-8070-426b-8b30-20117bfe712f for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:14:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3401: 321 pgs: 321 active+clean; 177 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 18 KiB/s wr, 7 op/s
Dec 13 04:14:28 np0005558241 nova_compute[248510]: 2025-12-13 09:14:28.960 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:14:29 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/329259479' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:29 np0005558241 nova_compute[248510]: 2025-12-13 09:14:29.047 248514 DEBUG oslo_concurrency.processutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:14:29 np0005558241 nova_compute[248510]: 2025-12-13 09:14:29.056 248514 DEBUG nova.compute.provider_tree [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:14:29 np0005558241 nova_compute[248510]: 2025-12-13 09:14:29.075 248514 DEBUG nova.scheduler.client.report [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:14:29 np0005558241 nova_compute[248510]: 2025-12-13 09:14:29.105 248514 DEBUG oslo_concurrency.lockutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:14:29 np0005558241 nova_compute[248510]: 2025-12-13 09:14:29.181 248514 INFO nova.scheduler.client.report [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4#033[00m
Dec 13 04:14:29 np0005558241 nova_compute[248510]: 2025-12-13 09:14:29.283 248514 DEBUG oslo_concurrency.lockutils [None req-523c258f-f384-41bd-a93f-522841bb5e14 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:14:30 np0005558241 nova_compute[248510]: 2025-12-13 09:14:30.426 248514 DEBUG nova.compute.manager [req-e771d645-979a-41f0-9cbc-2126227d3c06 req-71f0af84-a910-4112-b17d-405fc66ecb57 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Received event network-vif-deleted-fc1f4300-f39f-49b1-8d88-be328fb018fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:14:30 np0005558241 nova_compute[248510]: 2025-12-13 09:14:30.427 248514 INFO nova.compute.manager [req-e771d645-979a-41f0-9cbc-2126227d3c06 req-71f0af84-a910-4112-b17d-405fc66ecb57 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Neutron deleted interface fc1f4300-f39f-49b1-8d88-be328fb018fe; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 04:14:30 np0005558241 nova_compute[248510]: 2025-12-13 09:14:30.427 248514 DEBUG nova.network.neutron [req-e771d645-979a-41f0-9cbc-2126227d3c06 req-71f0af84-a910-4112-b17d-405fc66ecb57 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Dec 13 04:14:30 np0005558241 nova_compute[248510]: 2025-12-13 09:14:30.431 248514 DEBUG nova.compute.manager [req-e771d645-979a-41f0-9cbc-2126227d3c06 req-71f0af84-a910-4112-b17d-405fc66ecb57 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Detach interface failed, port_id=fc1f4300-f39f-49b1-8d88-be328fb018fe, reason: Instance ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec 13 04:14:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3402: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 29 op/s
Dec 13 04:14:30 np0005558241 nova_compute[248510]: 2025-12-13 09:14:30.879 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updating instance_info_cache with network_info: [{"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:14:30 np0005558241 nova_compute[248510]: 2025-12-13 09:14:30.903 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:14:30 np0005558241 nova_compute[248510]: 2025-12-13 09:14:30.904 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 04:14:30 np0005558241 nova_compute[248510]: 2025-12-13 09:14:30.904 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:14:30 np0005558241 nova_compute[248510]: 2025-12-13 09:14:30.905 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:14:30 np0005558241 nova_compute[248510]: 2025-12-13 09:14:30.932 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:14:30 np0005558241 nova_compute[248510]: 2025-12-13 09:14:30.933 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:14:30 np0005558241 nova_compute[248510]: 2025-12-13 09:14:30.933 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:14:30 np0005558241 nova_compute[248510]: 2025-12-13 09:14:30.934 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:14:30 np0005558241 nova_compute[248510]: 2025-12-13 09:14:30.934 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.408 248514 DEBUG oslo_concurrency.lockutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.410 248514 DEBUG oslo_concurrency.lockutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.410 248514 DEBUG oslo_concurrency.lockutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.411 248514 DEBUG oslo_concurrency.lockutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.411 248514 DEBUG oslo_concurrency.lockutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.414 248514 INFO nova.compute.manager [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Terminating instance#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.415 248514 DEBUG nova.compute.manager [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:14:31 np0005558241 kernel: tap37c91936-05 (unregistering): left promiscuous mode
Dec 13 04:14:31 np0005558241 NetworkManager[50376]: <info>  [1765617271.4726] device (tap37c91936-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:14:31 np0005558241 ovn_controller[148476]: 2025-12-13T09:14:31Z|01546|binding|INFO|Releasing lport 37c91936-0589-4bd3-9413-3af7db3e8feb from this chassis (sb_readonly=0)
Dec 13 04:14:31 np0005558241 ovn_controller[148476]: 2025-12-13T09:14:31Z|01547|binding|INFO|Setting lport 37c91936-0589-4bd3-9413-3af7db3e8feb down in Southbound
Dec 13 04:14:31 np0005558241 ovn_controller[148476]: 2025-12-13T09:14:31Z|01548|binding|INFO|Removing iface tap37c91936-05 ovn-installed in OVS
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.479 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.495 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:6c:85 10.100.0.11'], port_security=['fa:16:3e:77:6c:85 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '707e75d9-f6d2-413d-a727-c3ecbfea90c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a658b7a7-58f0-4691-acef-c04fab4ff88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9e8c352-1166-4841-b5aa-79bfa0acca5d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=37c91936-0589-4bd3-9413-3af7db3e8feb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:14:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.503 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 37c91936-0589-4bd3-9413-3af7db3e8feb in datapath ef1a3009-9f3e-4c4a-9ee4-04bec3653434 unbound from our chassis#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.503 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.505 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ef1a3009-9f3e-4c4a-9ee4-04bec3653434, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:14:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.505 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[afcb63c0-dc0f-4db0-98f2-833a284ed583]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.509 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434 namespace which is not needed anymore#033[00m
Dec 13 04:14:31 np0005558241 kernel: tap9538b93c-6b (unregistering): left promiscuous mode
Dec 13 04:14:31 np0005558241 NetworkManager[50376]: <info>  [1765617271.5205] device (tap9538b93c-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.526 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:31 np0005558241 ovn_controller[148476]: 2025-12-13T09:14:31Z|01549|binding|INFO|Releasing lport 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d from this chassis (sb_readonly=0)
Dec 13 04:14:31 np0005558241 ovn_controller[148476]: 2025-12-13T09:14:31Z|01550|binding|INFO|Setting lport 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d down in Southbound
Dec 13 04:14:31 np0005558241 ovn_controller[148476]: 2025-12-13T09:14:31Z|01551|binding|INFO|Removing iface tap9538b93c-6b ovn-installed in OVS
Dec 13 04:14:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:14:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1005340049' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.549 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:da:84 2001:db8:0:1:f816:3eff:fe39:da84 2001:db8::f816:3eff:fe39:da84'], port_security=['fa:16:3e:39:da:84 2001:db8:0:1:f816:3eff:fe39:da84 2001:db8::f816:3eff:fe39:da84'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe39:da84/64 2001:db8::f816:3eff:fe39:da84/64', 'neutron:device_id': '707e75d9-f6d2-413d-a727-c3ecbfea90c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a658b7a7-58f0-4691-acef-c04fab4ff88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d42d98d-b09c-47fa-98dd-69f99f24cca5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.563 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.582 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:14:31 np0005558241 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Dec 13 04:14:31 np0005558241 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008f.scope: Consumed 16.538s CPU time.
Dec 13 04:14:31 np0005558241 systemd-machined[210538]: Machine qemu-174-instance-0000008f terminated.
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.613 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:31 np0005558241 podman[397264]: 2025-12-13 09:14:31.627056205 +0000 UTC m=+0.102015110 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:14:31 np0005558241 podman[397263]: 2025-12-13 09:14:31.633373983 +0000 UTC m=+0.106992964 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=multipathd)
Dec 13 04:14:31 np0005558241 podman[397262]: 2025-12-13 09:14:31.665778136 +0000 UTC m=+0.131382146 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.671 248514 INFO nova.virt.libvirt.driver [-] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Instance destroyed successfully.#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.672 248514 DEBUG nova.objects.instance [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 707e75d9-f6d2-413d-a727-c3ecbfea90c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:14:31 np0005558241 neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434[395872]: [NOTICE]   (395876) : haproxy version is 2.8.14-c23fe91
Dec 13 04:14:31 np0005558241 neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434[395872]: [NOTICE]   (395876) : path to executable is /usr/sbin/haproxy
Dec 13 04:14:31 np0005558241 neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434[395872]: [WARNING]  (395876) : Exiting Master process...
Dec 13 04:14:31 np0005558241 neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434[395872]: [WARNING]  (395876) : Exiting Master process...
Dec 13 04:14:31 np0005558241 neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434[395872]: [ALERT]    (395876) : Current worker (395878) exited with code 143 (Terminated)
Dec 13 04:14:31 np0005558241 neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434[395872]: [WARNING]  (395876) : All workers exited. Exiting... (0)
Dec 13 04:14:31 np0005558241 systemd[1]: libpod-1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a.scope: Deactivated successfully.
Dec 13 04:14:31 np0005558241 podman[397341]: 2025-12-13 09:14:31.689932901 +0000 UTC m=+0.060949779 container died 1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.690 248514 DEBUG nova.virt.libvirt.vif [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:13:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-132103940',display_name='tempest-TestGettingAddress-server-132103940',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-132103940',id=143,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:13:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-eqjt2kv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:13:19Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=707e75d9-f6d2-413d-a727-c3ecbfea90c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.691 248514 DEBUG nova.network.os_vif_util [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.692 248514 DEBUG nova.network.os_vif_util [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:77:6c:85,bridge_name='br-int',has_traffic_filtering=True,id=37c91936-0589-4bd3-9413-3af7db3e8feb,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37c91936-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.693 248514 DEBUG os_vif [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:6c:85,bridge_name='br-int',has_traffic_filtering=True,id=37c91936-0589-4bd3-9413-3af7db3e8feb,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37c91936-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.694 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.695 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap37c91936-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.698 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.699 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.702 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.705 248514 INFO os_vif [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:6c:85,bridge_name='br-int',has_traffic_filtering=True,id=37c91936-0589-4bd3-9413-3af7db3e8feb,network=Network(ef1a3009-9f3e-4c4a-9ee4-04bec3653434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap37c91936-05')#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.706 248514 DEBUG nova.virt.libvirt.vif [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:13:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-132103940',display_name='tempest-TestGettingAddress-server-132103940',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-132103940',id=143,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNusrR77VFhjOTJVbPys9m2PSq1whPvKRBHi84Ifn36fxHgTKgkJAwXMfp8A756xSXzv0U+bOAZ/T3OB0dzGHYLzrQoiSkR4Eq3jGRgpFHUM+JK7sFeCL7/AFvYu4sBJJg==',key_name='tempest-TestGettingAddress-1024966808',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:13:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-eqjt2kv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:13:19Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=707e75d9-f6d2-413d-a727-c3ecbfea90c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.707 248514 DEBUG nova.network.os_vif_util [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.708 248514 DEBUG nova.network.os_vif_util [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:da:84,bridge_name='br-int',has_traffic_filtering=True,id=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9538b93c-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.708 248514 DEBUG os_vif [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:da:84,bridge_name='br-int',has_traffic_filtering=True,id=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9538b93c-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.712 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.712 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9538b93c-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.714 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.715 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.717 248514 INFO os_vif [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:da:84,bridge_name='br-int',has_traffic_filtering=True,id=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d,network=Network(ff80f4de-0e76-47f2-b06e-6d3900b63130),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9538b93c-6b')#033[00m
Dec 13 04:14:31 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a-userdata-shm.mount: Deactivated successfully.
Dec 13 04:14:31 np0005558241 systemd[1]: var-lib-containers-storage-overlay-95e219d71c2457e91e96bcf457fd2d2a6b9c400e89e423017a2889e15b6fb339-merged.mount: Deactivated successfully.
Dec 13 04:14:31 np0005558241 podman[397341]: 2025-12-13 09:14:31.735383201 +0000 UTC m=+0.106400079 container cleanup 1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:14:31 np0005558241 systemd[1]: libpod-conmon-1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a.scope: Deactivated successfully.
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.771 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.772 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.809 248514 DEBUG nova.compute.manager [req-6a98f265-69a2-4d41-97e7-0473dc6a3653 req-d9faeb2b-c55c-42e1-a827-a8e49227164c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-unplugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.809 248514 DEBUG oslo_concurrency.lockutils [req-6a98f265-69a2-4d41-97e7-0473dc6a3653 req-d9faeb2b-c55c-42e1-a827-a8e49227164c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.810 248514 DEBUG oslo_concurrency.lockutils [req-6a98f265-69a2-4d41-97e7-0473dc6a3653 req-d9faeb2b-c55c-42e1-a827-a8e49227164c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.810 248514 DEBUG oslo_concurrency.lockutils [req-6a98f265-69a2-4d41-97e7-0473dc6a3653 req-d9faeb2b-c55c-42e1-a827-a8e49227164c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.810 248514 DEBUG nova.compute.manager [req-6a98f265-69a2-4d41-97e7-0473dc6a3653 req-d9faeb2b-c55c-42e1-a827-a8e49227164c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] No waiting events found dispatching network-vif-unplugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.810 248514 DEBUG nova.compute.manager [req-6a98f265-69a2-4d41-97e7-0473dc6a3653 req-d9faeb2b-c55c-42e1-a827-a8e49227164c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-unplugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:14:31 np0005558241 podman[397418]: 2025-12-13 09:14:31.814862824 +0000 UTC m=+0.055440102 container remove 1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:14:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.822 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c58bddde-0ab0-4e0f-a1f9-3122d027cbd1]: (4, ('Sat Dec 13 09:14:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434 (1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a)\n1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a\nSat Dec 13 09:14:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434 (1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a)\n1a49ad955aed1881838eb2853fd8d06901d4864921fde9bd79938f405acd467a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.825 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5d004b-4c70-4ae3-99e0-69df8533c7c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.826 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef1a3009-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.829 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:31 np0005558241 kernel: tapef1a3009-90: left promiscuous mode
Dec 13 04:14:31 np0005558241 nova_compute[248510]: 2025-12-13 09:14:31.843 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.847 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2c0f36a0-8499-49ea-b2a7-2b9f67f7582d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.866 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7b55d66a-44ee-4243-97fe-5835514d6e3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.868 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5c60fc68-5761-45ed-822c-40f027197170]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.884 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c187e78b-e96d-454d-b44e-93364e1db578]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977591, 'reachable_time': 19949, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397435, 'error': None, 'target': 'ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:31 np0005558241 systemd[1]: run-netns-ovnmeta\x2def1a3009\x2d9f3e\x2d4c4a\x2d9ee4\x2d04bec3653434.mount: Deactivated successfully.
Dec 13 04:14:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.891 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ef1a3009-9f3e-4c4a-9ee4-04bec3653434 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:14:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.892 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b99435a1-94e7-43ef-b0bd-e1214e8023fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.892 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d in datapath ff80f4de-0e76-47f2-b06e-6d3900b63130 unbound from our chassis#033[00m
Dec 13 04:14:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.894 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ff80f4de-0e76-47f2-b06e-6d3900b63130, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:14:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.894 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b90ed9a5-be50-49a8-bca0-3682eebc5f66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:31.895 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130 namespace which is not needed anymore#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.001 248514 INFO nova.virt.libvirt.driver [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Deleting instance files /var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1_del#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.002 248514 INFO nova.virt.libvirt.driver [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Deletion of /var/lib/nova/instances/707e75d9-f6d2-413d-a727-c3ecbfea90c1_del complete#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.008 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.009 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3439MB free_disk=59.941806520335376GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.009 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.010 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:14:32 np0005558241 neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130[395948]: [NOTICE]   (395952) : haproxy version is 2.8.14-c23fe91
Dec 13 04:14:32 np0005558241 neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130[395948]: [NOTICE]   (395952) : path to executable is /usr/sbin/haproxy
Dec 13 04:14:32 np0005558241 neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130[395948]: [WARNING]  (395952) : Exiting Master process...
Dec 13 04:14:32 np0005558241 neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130[395948]: [ALERT]    (395952) : Current worker (395954) exited with code 143 (Terminated)
Dec 13 04:14:32 np0005558241 neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130[395948]: [WARNING]  (395952) : All workers exited. Exiting... (0)
Dec 13 04:14:32 np0005558241 systemd[1]: libpod-8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6.scope: Deactivated successfully.
Dec 13 04:14:32 np0005558241 podman[397454]: 2025-12-13 09:14:32.042253967 +0000 UTC m=+0.043422680 container died 8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:14:32 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6-userdata-shm.mount: Deactivated successfully.
Dec 13 04:14:32 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5335169775ac2f2f0d9de9ac62c16a1b6366679c57b207e1bc023d06103d75f6-merged.mount: Deactivated successfully.
Dec 13 04:14:32 np0005558241 podman[397454]: 2025-12-13 09:14:32.077684795 +0000 UTC m=+0.078853498 container cleanup 8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:14:32 np0005558241 systemd[1]: libpod-conmon-8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6.scope: Deactivated successfully.
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.101 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 707e75d9-f6d2-413d-a727-c3ecbfea90c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.101 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.102 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.109 248514 INFO nova.compute.manager [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.109 248514 DEBUG oslo.service.loopingcall [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.109 248514 DEBUG nova.compute.manager [-] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.110 248514 DEBUG nova.network.neutron [-] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:14:32 np0005558241 podman[397483]: 2025-12-13 09:14:32.141833464 +0000 UTC m=+0.043045490 container remove 8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:14:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:32.147 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c2cc4361-31a0-4656-bc52-3580d41f3058]: (4, ('Sat Dec 13 09:14:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130 (8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6)\n8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6\nSat Dec 13 09:14:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130 (8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6)\n8dd542db5554e50eb23d8746cf37bfb94f9aefd98848d93dde134cb7741cf3d6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.149 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:14:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:32.149 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[23f73562-33dd-4aa7-a21c-89d50d9f98e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:32.150 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff80f4de-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:14:32 np0005558241 kernel: tapff80f4de-00: left promiscuous mode
Dec 13 04:14:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:32.169 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[50f9bd48-a50a-49ce-b4f4-b301c0a1e376]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:32.183 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0183e1-1867-4472-959f-832ff0adfa4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:32.185 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[40aaed85-b496-4d3f-86ea-49d922368f02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.193 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:32.204 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[191d745b-a977-403e-a406-a140b8cb9eda]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977692, 'reachable_time': 25316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397499, 'error': None, 'target': 'ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:32.206 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ff80f4de-0e76-47f2-b06e-6d3900b63130 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:14:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:32.206 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f2ee75-989d-41d8-94ab-08545691d75a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.508 248514 DEBUG nova.compute.manager [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-changed-37c91936-0589-4bd3-9413-3af7db3e8feb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.509 248514 DEBUG nova.compute.manager [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Refreshing instance network info cache due to event network-changed-37c91936-0589-4bd3-9413-3af7db3e8feb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.510 248514 DEBUG oslo_concurrency.lockutils [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.510 248514 DEBUG oslo_concurrency.lockutils [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.510 248514 DEBUG nova.network.neutron [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Refreshing network info cache for port 37c91936-0589-4bd3-9413-3af7db3e8feb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:14:32 np0005558241 systemd[1]: run-netns-ovnmeta\x2dff80f4de\x2d0e76\x2d47f2\x2db06e\x2d6d3900b63130.mount: Deactivated successfully.
Dec 13 04:14:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:14:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2111490107' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.745 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.751 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.774 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.800 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:14:32 np0005558241 nova_compute[248510]: 2025-12-13 09:14:32.800 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:14:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3403: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 11 KiB/s wr, 29 op/s
Dec 13 04:14:33 np0005558241 nova_compute[248510]: 2025-12-13 09:14:33.667 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:14:33 np0005558241 nova_compute[248510]: 2025-12-13 09:14:33.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:14:33 np0005558241 nova_compute[248510]: 2025-12-13 09:14:33.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:14:33 np0005558241 nova_compute[248510]: 2025-12-13 09:14:33.963 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:33 np0005558241 nova_compute[248510]: 2025-12-13 09:14:33.995 248514 DEBUG nova.compute.manager [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-plugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:14:33 np0005558241 nova_compute[248510]: 2025-12-13 09:14:33.996 248514 DEBUG oslo_concurrency.lockutils [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:14:33 np0005558241 nova_compute[248510]: 2025-12-13 09:14:33.997 248514 DEBUG oslo_concurrency.lockutils [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:14:33 np0005558241 nova_compute[248510]: 2025-12-13 09:14:33.997 248514 DEBUG oslo_concurrency.lockutils [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:14:33 np0005558241 nova_compute[248510]: 2025-12-13 09:14:33.997 248514 DEBUG nova.compute.manager [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] No waiting events found dispatching network-vif-plugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:14:33 np0005558241 nova_compute[248510]: 2025-12-13 09:14:33.998 248514 WARNING nova.compute.manager [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received unexpected event network-vif-plugged-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d for instance with vm_state active and task_state deleting.#033[00m
Dec 13 04:14:33 np0005558241 nova_compute[248510]: 2025-12-13 09:14:33.998 248514 DEBUG nova.compute.manager [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-deleted-37c91936-0589-4bd3-9413-3af7db3e8feb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:14:33 np0005558241 nova_compute[248510]: 2025-12-13 09:14:33.998 248514 INFO nova.compute.manager [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Neutron deleted interface 37c91936-0589-4bd3-9413-3af7db3e8feb; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 04:14:33 np0005558241 nova_compute[248510]: 2025-12-13 09:14:33.999 248514 DEBUG nova.network.neutron [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updating instance_info_cache with network_info: [{"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.111 248514 DEBUG nova.compute.manager [req-901ef58e-a03e-4c98-81b8-11ed8f4d3316 req-2e316127-8906-4f47-bc36-c1afb109c362 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Detach interface failed, port_id=37c91936-0589-4bd3-9413-3af7db3e8feb, reason: Instance 707e75d9-f6d2-413d-a727-c3ecbfea90c1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.297 248514 DEBUG nova.network.neutron [-] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.315 248514 INFO nova.compute.manager [-] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Took 2.21 seconds to deallocate network for instance.#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.619 248514 DEBUG oslo_concurrency.lockutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.620 248514 DEBUG oslo_concurrency.lockutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.691 248514 DEBUG oslo_concurrency.processutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:14:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3404: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 12 KiB/s wr, 57 op/s
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.959 248514 DEBUG nova.network.neutron [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updated VIF entry in instance network info cache for port 37c91936-0589-4bd3-9413-3af7db3e8feb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.961 248514 DEBUG nova.network.neutron [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Updating instance_info_cache with network_info: [{"id": "37c91936-0589-4bd3-9413-3af7db3e8feb", "address": "fa:16:3e:77:6c:85", "network": {"id": "ef1a3009-9f3e-4c4a-9ee4-04bec3653434", "bridge": "br-int", "label": "tempest-network-smoke--1677192880", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap37c91936-05", "ovs_interfaceid": "37c91936-0589-4bd3-9413-3af7db3e8feb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "address": "fa:16:3e:39:da:84", "network": {"id": "ff80f4de-0e76-47f2-b06e-6d3900b63130", "bridge": "br-int", "label": "tempest-network-smoke--346360645", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe39:da84", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9538b93c-6b", "ovs_interfaceid": "9538b93c-6b21-4b5f-b6d3-89fd1b840f5d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.982 248514 DEBUG oslo_concurrency.lockutils [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-707e75d9-f6d2-413d-a727-c3ecbfea90c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.983 248514 DEBUG nova.compute.manager [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-unplugged-37c91936-0589-4bd3-9413-3af7db3e8feb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.983 248514 DEBUG oslo_concurrency.lockutils [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.984 248514 DEBUG oslo_concurrency.lockutils [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.984 248514 DEBUG oslo_concurrency.lockutils [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.984 248514 DEBUG nova.compute.manager [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] No waiting events found dispatching network-vif-unplugged-37c91936-0589-4bd3-9413-3af7db3e8feb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.985 248514 DEBUG nova.compute.manager [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-unplugged-37c91936-0589-4bd3-9413-3af7db3e8feb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.985 248514 DEBUG nova.compute.manager [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-plugged-37c91936-0589-4bd3-9413-3af7db3e8feb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.985 248514 DEBUG oslo_concurrency.lockutils [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.986 248514 DEBUG oslo_concurrency.lockutils [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.986 248514 DEBUG oslo_concurrency.lockutils [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.986 248514 DEBUG nova.compute.manager [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] No waiting events found dispatching network-vif-plugged-37c91936-0589-4bd3-9413-3af7db3e8feb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:14:34 np0005558241 nova_compute[248510]: 2025-12-13 09:14:34.987 248514 WARNING nova.compute.manager [req-9c367ff5-773d-4509-adde-61c839a75b97 req-864d5c48-7311-4e1e-82d7-db583826d8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received unexpected event network-vif-plugged-37c91936-0589-4bd3-9413-3af7db3e8feb for instance with vm_state active and task_state deleting.#033[00m
Dec 13 04:14:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:14:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/83806067' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:35 np0005558241 nova_compute[248510]: 2025-12-13 09:14:35.290 248514 DEBUG oslo_concurrency.processutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:14:35 np0005558241 nova_compute[248510]: 2025-12-13 09:14:35.298 248514 DEBUG nova.compute.provider_tree [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:14:35 np0005558241 nova_compute[248510]: 2025-12-13 09:14:35.321 248514 DEBUG nova.scheduler.client.report [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:14:35 np0005558241 nova_compute[248510]: 2025-12-13 09:14:35.360 248514 DEBUG oslo_concurrency.lockutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:14:35 np0005558241 nova_compute[248510]: 2025-12-13 09:14:35.408 248514 INFO nova.scheduler.client.report [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 707e75d9-f6d2-413d-a727-c3ecbfea90c1#033[00m
Dec 13 04:14:35 np0005558241 nova_compute[248510]: 2025-12-13 09:14:35.479 248514 DEBUG oslo_concurrency.lockutils [None req-0c8005a8-e4c5-48f3-94c7-7fe4bb9f396d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "707e75d9-f6d2-413d-a727-c3ecbfea90c1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:14:36 np0005558241 nova_compute[248510]: 2025-12-13 09:14:36.188 248514 DEBUG nova.compute.manager [req-1f944e3a-19e8-476c-a440-5a5d13870c4a req-da96ce95-e8fb-4ddd-9347-9d7f9b36f889 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Received event network-vif-deleted-9538b93c-6b21-4b5f-b6d3-89fd1b840f5d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:14:36 np0005558241 nova_compute[248510]: 2025-12-13 09:14:36.188 248514 INFO nova.compute.manager [req-1f944e3a-19e8-476c-a440-5a5d13870c4a req-da96ce95-e8fb-4ddd-9347-9d7f9b36f889 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Neutron deleted interface 9538b93c-6b21-4b5f-b6d3-89fd1b840f5d; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 04:14:36 np0005558241 nova_compute[248510]: 2025-12-13 09:14:36.189 248514 DEBUG nova.network.neutron [req-1f944e3a-19e8-476c-a440-5a5d13870c4a req-da96ce95-e8fb-4ddd-9347-9d7f9b36f889 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Dec 13 04:14:36 np0005558241 nova_compute[248510]: 2025-12-13 09:14:36.191 248514 DEBUG nova.compute.manager [req-1f944e3a-19e8-476c-a440-5a5d13870c4a req-da96ce95-e8fb-4ddd-9347-9d7f9b36f889 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Detach interface failed, port_id=9538b93c-6b21-4b5f-b6d3-89fd1b840f5d, reason: Instance 707e75d9-f6d2-413d-a727-c3ecbfea90c1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec 13 04:14:36 np0005558241 nova_compute[248510]: 2025-12-13 09:14:36.715 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:36 np0005558241 nova_compute[248510]: 2025-12-13 09:14:36.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:14:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3405: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 55 op/s
Dec 13 04:14:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3406: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 55 op/s
Dec 13 04:14:38 np0005558241 nova_compute[248510]: 2025-12-13 09:14:38.965 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:14:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:14:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:14:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:14:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:14:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:14:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:14:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:14:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:14:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:14:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:14:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:14:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:14:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:14:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:14:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:14:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:14:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:14:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:14:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:14:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:14:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3407: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 6.8 KiB/s wr, 50 op/s
Dec 13 04:14:40 np0005558241 podman[397685]: 2025-12-13 09:14:40.992721366 +0000 UTC m=+0.098072760 container create fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_taussig, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 04:14:41 np0005558241 podman[397685]: 2025-12-13 09:14:40.919442828 +0000 UTC m=+0.024794242 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:14:41 np0005558241 systemd[1]: Started libpod-conmon-fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9.scope.
Dec 13 04:14:41 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:14:41 np0005558241 podman[397685]: 2025-12-13 09:14:41.100820067 +0000 UTC m=+0.206171481 container init fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_taussig, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 04:14:41 np0005558241 podman[397685]: 2025-12-13 09:14:41.112414968 +0000 UTC m=+0.217766362 container start fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 04:14:41 np0005558241 podman[397685]: 2025-12-13 09:14:41.116607293 +0000 UTC m=+0.221958737 container attach fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 04:14:41 np0005558241 heuristic_taussig[397701]: 167 167
Dec 13 04:14:41 np0005558241 systemd[1]: libpod-fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9.scope: Deactivated successfully.
Dec 13 04:14:41 np0005558241 conmon[397701]: conmon fdf047bfa4400cf388bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9.scope/container/memory.events
Dec 13 04:14:41 np0005558241 podman[397685]: 2025-12-13 09:14:41.120972543 +0000 UTC m=+0.226323937 container died fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 04:14:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay-82a85eee03b34e73ba7fa06479a74d9ec30f67b644d570cdc37ba5052f4f04a8-merged.mount: Deactivated successfully.
Dec 13 04:14:41 np0005558241 podman[397685]: 2025-12-13 09:14:41.29309573 +0000 UTC m=+0.398447134 container remove fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_taussig, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:14:41 np0005558241 systemd[1]: libpod-conmon-fdf047bfa4400cf388bde2b2c67af4863fac5909862f58459e775f028df48ab9.scope: Deactivated successfully.
Dec 13 04:14:41 np0005558241 podman[397726]: 2025-12-13 09:14:41.461691358 +0000 UTC m=+0.051670577 container create dbfb6b6e3e76a7f7478b4a183d511f20da84119c407fb005d04d1ce26a7563a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 04:14:41 np0005558241 systemd[1]: Started libpod-conmon-dbfb6b6e3e76a7f7478b4a183d511f20da84119c407fb005d04d1ce26a7563a8.scope.
Dec 13 04:14:41 np0005558241 podman[397726]: 2025-12-13 09:14:41.437714247 +0000 UTC m=+0.027693466 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:14:41 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:14:41 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb9612867d51e4107b8a3cc06dda5c430f447cd1a4a6b7f5fd0f6e40de3d85e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:41 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb9612867d51e4107b8a3cc06dda5c430f447cd1a4a6b7f5fd0f6e40de3d85e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:41 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb9612867d51e4107b8a3cc06dda5c430f447cd1a4a6b7f5fd0f6e40de3d85e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:41 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb9612867d51e4107b8a3cc06dda5c430f447cd1a4a6b7f5fd0f6e40de3d85e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:41 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb9612867d51e4107b8a3cc06dda5c430f447cd1a4a6b7f5fd0f6e40de3d85e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:41 np0005558241 podman[397726]: 2025-12-13 09:14:41.572581559 +0000 UTC m=+0.162560768 container init dbfb6b6e3e76a7f7478b4a183d511f20da84119c407fb005d04d1ce26a7563a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_swanson, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Dec 13 04:14:41 np0005558241 nova_compute[248510]: 2025-12-13 09:14:41.575 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617266.569178, ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:14:41 np0005558241 nova_compute[248510]: 2025-12-13 09:14:41.576 248514 INFO nova.compute.manager [-] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:14:41 np0005558241 podman[397726]: 2025-12-13 09:14:41.591985676 +0000 UTC m=+0.181964865 container start dbfb6b6e3e76a7f7478b4a183d511f20da84119c407fb005d04d1ce26a7563a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 04:14:41 np0005558241 podman[397726]: 2025-12-13 09:14:41.595932525 +0000 UTC m=+0.185911714 container attach dbfb6b6e3e76a7f7478b4a183d511f20da84119c407fb005d04d1ce26a7563a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_swanson, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 04:14:41 np0005558241 nova_compute[248510]: 2025-12-13 09:14:41.604 248514 DEBUG nova.compute.manager [None req-2ffe027d-610e-48d9-99c6-501324836918 - - - - - -] [instance: ea0bebb4-5bb7-4caf-8a88-cdfa8feab5a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:14:41 np0005558241 nova_compute[248510]: 2025-12-13 09:14:41.717 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:42 np0005558241 strange_swanson[397742]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:14:42 np0005558241 strange_swanson[397742]: --> All data devices are unavailable
Dec 13 04:14:42 np0005558241 systemd[1]: libpod-dbfb6b6e3e76a7f7478b4a183d511f20da84119c407fb005d04d1ce26a7563a8.scope: Deactivated successfully.
Dec 13 04:14:42 np0005558241 podman[397762]: 2025-12-13 09:14:42.198624001 +0000 UTC m=+0.051007631 container died dbfb6b6e3e76a7f7478b4a183d511f20da84119c407fb005d04d1ce26a7563a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 04:14:42 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4eb9612867d51e4107b8a3cc06dda5c430f447cd1a4a6b7f5fd0f6e40de3d85e-merged.mount: Deactivated successfully.
Dec 13 04:14:42 np0005558241 podman[397762]: 2025-12-13 09:14:42.252353488 +0000 UTC m=+0.104737078 container remove dbfb6b6e3e76a7f7478b4a183d511f20da84119c407fb005d04d1ce26a7563a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 04:14:42 np0005558241 systemd[1]: libpod-conmon-dbfb6b6e3e76a7f7478b4a183d511f20da84119c407fb005d04d1ce26a7563a8.scope: Deactivated successfully.
Dec 13 04:14:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:42 np0005558241 podman[397839]: 2025-12-13 09:14:42.747887525 +0000 UTC m=+0.058843846 container create ed516165c835b6d47f37c5d8e22f5b6e8b5523fc3da7db9c700955c7871aaae1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 04:14:42 np0005558241 systemd[1]: Started libpod-conmon-ed516165c835b6d47f37c5d8e22f5b6e8b5523fc3da7db9c700955c7871aaae1.scope.
Dec 13 04:14:42 np0005558241 podman[397839]: 2025-12-13 09:14:42.71699114 +0000 UTC m=+0.027947521 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:14:42 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:14:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3408: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 04:14:42 np0005558241 podman[397839]: 2025-12-13 09:14:42.857023193 +0000 UTC m=+0.167979544 container init ed516165c835b6d47f37c5d8e22f5b6e8b5523fc3da7db9c700955c7871aaae1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 04:14:42 np0005558241 podman[397839]: 2025-12-13 09:14:42.865780892 +0000 UTC m=+0.176737213 container start ed516165c835b6d47f37c5d8e22f5b6e8b5523fc3da7db9c700955c7871aaae1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 04:14:42 np0005558241 podman[397839]: 2025-12-13 09:14:42.869842934 +0000 UTC m=+0.180799255 container attach ed516165c835b6d47f37c5d8e22f5b6e8b5523fc3da7db9c700955c7871aaae1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:14:42 np0005558241 pensive_golick[397856]: 167 167
Dec 13 04:14:42 np0005558241 systemd[1]: libpod-ed516165c835b6d47f37c5d8e22f5b6e8b5523fc3da7db9c700955c7871aaae1.scope: Deactivated successfully.
Dec 13 04:14:42 np0005558241 podman[397839]: 2025-12-13 09:14:42.872818269 +0000 UTC m=+0.183774600 container died ed516165c835b6d47f37c5d8e22f5b6e8b5523fc3da7db9c700955c7871aaae1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:14:42 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d6e433f08f3d8145ee0386b597303e263bf14dd2da9a5f2469b0cf1ab966dc53-merged.mount: Deactivated successfully.
Dec 13 04:14:42 np0005558241 podman[397839]: 2025-12-13 09:14:42.915033167 +0000 UTC m=+0.225989488 container remove ed516165c835b6d47f37c5d8e22f5b6e8b5523fc3da7db9c700955c7871aaae1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 04:14:42 np0005558241 systemd[1]: libpod-conmon-ed516165c835b6d47f37c5d8e22f5b6e8b5523fc3da7db9c700955c7871aaae1.scope: Deactivated successfully.
Dec 13 04:14:43 np0005558241 podman[397880]: 2025-12-13 09:14:43.122017889 +0000 UTC m=+0.051889563 container create 735e2b51ce5470cac5536658a818a5a60ddbd1da48ff24480abb464d2e61ab70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_gagarin, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:14:43 np0005558241 systemd[1]: Started libpod-conmon-735e2b51ce5470cac5536658a818a5a60ddbd1da48ff24480abb464d2e61ab70.scope.
Dec 13 04:14:43 np0005558241 podman[397880]: 2025-12-13 09:14:43.098984701 +0000 UTC m=+0.028856395 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:14:43 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:14:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c5da30275fdac41162d0d74046818b77a0745a34ee8404ba84f49896c442b12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c5da30275fdac41162d0d74046818b77a0745a34ee8404ba84f49896c442b12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c5da30275fdac41162d0d74046818b77a0745a34ee8404ba84f49896c442b12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c5da30275fdac41162d0d74046818b77a0745a34ee8404ba84f49896c442b12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:43 np0005558241 podman[397880]: 2025-12-13 09:14:43.217047882 +0000 UTC m=+0.146919576 container init 735e2b51ce5470cac5536658a818a5a60ddbd1da48ff24480abb464d2e61ab70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_gagarin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:14:43 np0005558241 podman[397880]: 2025-12-13 09:14:43.225762071 +0000 UTC m=+0.155633765 container start 735e2b51ce5470cac5536658a818a5a60ddbd1da48ff24480abb464d2e61ab70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_gagarin, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 04:14:43 np0005558241 podman[397880]: 2025-12-13 09:14:43.230014867 +0000 UTC m=+0.159886561 container attach 735e2b51ce5470cac5536658a818a5a60ddbd1da48ff24480abb464d2e61ab70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]: {
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:    "0": [
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:        {
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "devices": [
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "/dev/loop3"
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            ],
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "lv_name": "ceph_lv0",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "lv_size": "21470642176",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "name": "ceph_lv0",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "tags": {
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.cluster_name": "ceph",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.crush_device_class": "",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.encrypted": "0",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.objectstore": "bluestore",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.osd_id": "0",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.type": "block",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.vdo": "0",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.with_tpm": "0"
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            },
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "type": "block",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "vg_name": "ceph_vg0"
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:        }
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:    ],
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:    "1": [
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:        {
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "devices": [
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "/dev/loop4"
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            ],
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "lv_name": "ceph_lv1",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "lv_size": "21470642176",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "name": "ceph_lv1",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "tags": {
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.cluster_name": "ceph",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.crush_device_class": "",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.encrypted": "0",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.objectstore": "bluestore",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.osd_id": "1",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.type": "block",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.vdo": "0",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.with_tpm": "0"
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            },
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "type": "block",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "vg_name": "ceph_vg1"
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:        }
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:    ],
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:    "2": [
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:        {
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "devices": [
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "/dev/loop5"
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            ],
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "lv_name": "ceph_lv2",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "lv_size": "21470642176",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "name": "ceph_lv2",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "tags": {
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.cluster_name": "ceph",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.crush_device_class": "",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.encrypted": "0",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.objectstore": "bluestore",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.osd_id": "2",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.type": "block",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.vdo": "0",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:                "ceph.with_tpm": "0"
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            },
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "type": "block",
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:            "vg_name": "ceph_vg2"
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:        }
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]:    ]
Dec 13 04:14:43 np0005558241 eloquent_gagarin[397897]: }
Dec 13 04:14:43 np0005558241 systemd[1]: libpod-735e2b51ce5470cac5536658a818a5a60ddbd1da48ff24480abb464d2e61ab70.scope: Deactivated successfully.
Dec 13 04:14:43 np0005558241 podman[397880]: 2025-12-13 09:14:43.563381688 +0000 UTC m=+0.493253362 container died 735e2b51ce5470cac5536658a818a5a60ddbd1da48ff24480abb464d2e61ab70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_gagarin, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 04:14:43 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4c5da30275fdac41162d0d74046818b77a0745a34ee8404ba84f49896c442b12-merged.mount: Deactivated successfully.
Dec 13 04:14:43 np0005558241 podman[397880]: 2025-12-13 09:14:43.614534791 +0000 UTC m=+0.544406485 container remove 735e2b51ce5470cac5536658a818a5a60ddbd1da48ff24480abb464d2e61ab70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec 13 04:14:43 np0005558241 systemd[1]: libpod-conmon-735e2b51ce5470cac5536658a818a5a60ddbd1da48ff24480abb464d2e61ab70.scope: Deactivated successfully.
Dec 13 04:14:43 np0005558241 nova_compute[248510]: 2025-12-13 09:14:43.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:14:43 np0005558241 nova_compute[248510]: 2025-12-13 09:14:43.968 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:44 np0005558241 podman[397981]: 2025-12-13 09:14:44.106399727 +0000 UTC m=+0.056733603 container create 848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_torvalds, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 04:14:44 np0005558241 systemd[1]: Started libpod-conmon-848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34.scope.
Dec 13 04:14:44 np0005558241 podman[397981]: 2025-12-13 09:14:44.081093353 +0000 UTC m=+0.031427259 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:14:44 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:14:44 np0005558241 podman[397981]: 2025-12-13 09:14:44.214656903 +0000 UTC m=+0.164990789 container init 848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 04:14:44 np0005558241 podman[397981]: 2025-12-13 09:14:44.228421278 +0000 UTC m=+0.178755134 container start 848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_torvalds, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:14:44 np0005558241 podman[397981]: 2025-12-13 09:14:44.232532191 +0000 UTC m=+0.182866147 container attach 848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 04:14:44 np0005558241 strange_torvalds[397998]: 167 167
Dec 13 04:14:44 np0005558241 systemd[1]: libpod-848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34.scope: Deactivated successfully.
Dec 13 04:14:44 np0005558241 conmon[397998]: conmon 848847b13eb16cd50181 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34.scope/container/memory.events
Dec 13 04:14:44 np0005558241 podman[397981]: 2025-12-13 09:14:44.237321481 +0000 UTC m=+0.187655377 container died 848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 04:14:44 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6e9e0a42229df10f2d6689cc8f3658afbbe6a8b8fb5da65db42c7778a72074e7-merged.mount: Deactivated successfully.
Dec 13 04:14:44 np0005558241 podman[397981]: 2025-12-13 09:14:44.297117471 +0000 UTC m=+0.247451337 container remove 848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 04:14:44 np0005558241 systemd[1]: libpod-conmon-848847b13eb16cd50181f34992ae7a069923b532abc770ec4727bd1fd5b95e34.scope: Deactivated successfully.
Dec 13 04:14:44 np0005558241 podman[398021]: 2025-12-13 09:14:44.467110324 +0000 UTC m=+0.051673557 container create b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:14:44 np0005558241 systemd[1]: Started libpod-conmon-b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb.scope.
Dec 13 04:14:44 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:14:44 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/190acd4d6a63f3cd42f7b7accc0f255f75c0307a3af1ffe0bf68bf69f48be904/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:44 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/190acd4d6a63f3cd42f7b7accc0f255f75c0307a3af1ffe0bf68bf69f48be904/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:44 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/190acd4d6a63f3cd42f7b7accc0f255f75c0307a3af1ffe0bf68bf69f48be904/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:44 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/190acd4d6a63f3cd42f7b7accc0f255f75c0307a3af1ffe0bf68bf69f48be904/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:44 np0005558241 podman[398021]: 2025-12-13 09:14:44.447742938 +0000 UTC m=+0.032306161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:14:44 np0005558241 podman[398021]: 2025-12-13 09:14:44.555970523 +0000 UTC m=+0.140533786 container init b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_banach, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:14:44 np0005558241 podman[398021]: 2025-12-13 09:14:44.564917697 +0000 UTC m=+0.149480920 container start b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_banach, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 04:14:44 np0005558241 podman[398021]: 2025-12-13 09:14:44.568985699 +0000 UTC m=+0.153549012 container attach b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:14:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3409: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 04:14:45 np0005558241 lvm[398116]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:14:45 np0005558241 lvm[398116]: VG ceph_vg0 finished
Dec 13 04:14:45 np0005558241 lvm[398117]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:14:45 np0005558241 lvm[398117]: VG ceph_vg1 finished
Dec 13 04:14:45 np0005558241 lvm[398119]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:14:45 np0005558241 lvm[398119]: VG ceph_vg2 finished
Dec 13 04:14:45 np0005558241 tender_banach[398038]: {}
Dec 13 04:14:45 np0005558241 systemd[1]: libpod-b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb.scope: Deactivated successfully.
Dec 13 04:14:45 np0005558241 systemd[1]: libpod-b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb.scope: Consumed 1.423s CPU time.
Dec 13 04:14:45 np0005558241 podman[398021]: 2025-12-13 09:14:45.427912911 +0000 UTC m=+1.012476174 container died b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:14:45 np0005558241 systemd[1]: var-lib-containers-storage-overlay-190acd4d6a63f3cd42f7b7accc0f255f75c0307a3af1ffe0bf68bf69f48be904-merged.mount: Deactivated successfully.
Dec 13 04:14:45 np0005558241 podman[398021]: 2025-12-13 09:14:45.493981439 +0000 UTC m=+1.078544702 container remove b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:14:45 np0005558241 systemd[1]: libpod-conmon-b3233ee53c2c7143c925487294f138b630b2c371c421fe671eb92af0f39b8afb.scope: Deactivated successfully.
Dec 13 04:14:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:14:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:14:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:14:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:14:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:14:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:14:46 np0005558241 nova_compute[248510]: 2025-12-13 09:14:46.666 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617271.6649587, 707e75d9-f6d2-413d-a727-c3ecbfea90c1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:14:46 np0005558241 nova_compute[248510]: 2025-12-13 09:14:46.668 248514 INFO nova.compute.manager [-] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:14:46 np0005558241 nova_compute[248510]: 2025-12-13 09:14:46.703 248514 DEBUG nova.compute.manager [None req-606fc9f8-7447-4a40-a3af-cc74e0026038 - - - - - -] [instance: 707e75d9-f6d2-413d-a727-c3ecbfea90c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:14:46 np0005558241 nova_compute[248510]: 2025-12-13 09:14:46.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3410: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:14:47 np0005558241 nova_compute[248510]: 2025-12-13 09:14:47.056 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:47 np0005558241 nova_compute[248510]: 2025-12-13 09:14:47.152 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3411: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:14:48 np0005558241 nova_compute[248510]: 2025-12-13 09:14:48.987 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3412: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:14:51 np0005558241 nova_compute[248510]: 2025-12-13 09:14:51.723 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3413: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:14:53 np0005558241 nova_compute[248510]: 2025-12-13 09:14:53.989 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.033426) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617294033535, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1507, "num_deletes": 251, "total_data_size": 2521833, "memory_usage": 2565040, "flush_reason": "Manual Compaction"}
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617294051836, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 2454808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66754, "largest_seqno": 68260, "table_properties": {"data_size": 2447735, "index_size": 4143, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14685, "raw_average_key_size": 20, "raw_value_size": 2433650, "raw_average_value_size": 3315, "num_data_blocks": 185, "num_entries": 734, "num_filter_entries": 734, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765617138, "oldest_key_time": 1765617138, "file_creation_time": 1765617294, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 18472 microseconds, and 7421 cpu microseconds.
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.051908) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 2454808 bytes OK
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.051941) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.053478) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.053495) EVENT_LOG_v1 {"time_micros": 1765617294053490, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.053523) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 2515209, prev total WAL file size 2515209, number of live WAL files 2.
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.054574) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(2397KB)], [158(10MB)]
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617294054718, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 13738490, "oldest_snapshot_seqno": -1}
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8746 keys, 11940151 bytes, temperature: kUnknown
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617294153131, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 11940151, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11882873, "index_size": 34317, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21893, "raw_key_size": 229488, "raw_average_key_size": 26, "raw_value_size": 11728109, "raw_average_value_size": 1340, "num_data_blocks": 1329, "num_entries": 8746, "num_filter_entries": 8746, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765617294, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.153588) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11940151 bytes
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.156280) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.4 rd, 121.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.8 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(10.5) write-amplify(4.9) OK, records in: 9260, records dropped: 514 output_compression: NoCompression
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.156302) EVENT_LOG_v1 {"time_micros": 1765617294156291, "job": 98, "event": "compaction_finished", "compaction_time_micros": 98551, "compaction_time_cpu_micros": 40445, "output_level": 6, "num_output_files": 1, "total_output_size": 11940151, "num_input_records": 9260, "num_output_records": 8746, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617294157012, "job": 98, "event": "table_file_deletion", "file_number": 160}
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617294159712, "job": 98, "event": "table_file_deletion", "file_number": 158}
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.054456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.159835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.159843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.159845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.159847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:14:54 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:14:54.159849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:14:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3414: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:14:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:55.451 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:14:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:55.452 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:14:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:14:55.452 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:14:56 np0005558241 nova_compute[248510]: 2025-12-13 09:14:56.726 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:14:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3415: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:14:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3416: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:14:58 np0005558241 nova_compute[248510]: 2025-12-13 09:14:58.991 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3417: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:15:01 np0005558241 nova_compute[248510]: 2025-12-13 09:15:01.728 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:02 np0005558241 podman[398161]: 2025-12-13 09:15:02.001244729 +0000 UTC m=+0.084764942 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec 13 04:15:02 np0005558241 podman[398162]: 2025-12-13 09:15:02.009965287 +0000 UTC m=+0.085709867 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:15:02 np0005558241 podman[398160]: 2025-12-13 09:15:02.050009824 +0000 UTC m=+0.130046031 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 04:15:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3418: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:15:03 np0005558241 nova_compute[248510]: 2025-12-13 09:15:03.994 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3419: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:15:05 np0005558241 nova_compute[248510]: 2025-12-13 09:15:05.768 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:15:06 np0005558241 nova_compute[248510]: 2025-12-13 09:15:06.731 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3420: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:15:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3421: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:15:08 np0005558241 nova_compute[248510]: 2025-12-13 09:15:08.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:15:09
Dec 13 04:15:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:15:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:15:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'volumes', 'vms', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control']
Dec 13 04:15:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:15:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:15:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:15:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:15:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:15:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:15:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:15:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3422: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:15:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:15:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:15:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:15:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:15:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:15:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:15:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:15:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:15:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:15:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:15:11 np0005558241 nova_compute[248510]: 2025-12-13 09:15:11.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3423: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:15:13 np0005558241 nova_compute[248510]: 2025-12-13 09:15:13.997 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3424: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:15:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:15:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3564067138' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:15:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:15:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3564067138' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:15:16 np0005558241 nova_compute[248510]: 2025-12-13 09:15:16.738 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3425: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:15:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3426: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:15:19 np0005558241 nova_compute[248510]: 2025-12-13 09:15:19.000 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:19 np0005558241 nova_compute[248510]: 2025-12-13 09:15:19.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:15:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3427: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4722195890038757e-05 of space, bias 1.0, pg target 0.004416658767011627 quantized to 32 (current 32)
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697272279419812 of space, bias 1.0, pg target 0.20091816838259435 quantized to 32 (current 32)
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:15:21 np0005558241 nova_compute[248510]: 2025-12-13 09:15:21.741 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:22 np0005558241 nova_compute[248510]: 2025-12-13 09:15:22.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:15:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3428: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:15:24 np0005558241 nova_compute[248510]: 2025-12-13 09:15:24.002 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3429: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:15:25 np0005558241 nova_compute[248510]: 2025-12-13 09:15:25.622 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:15:25 np0005558241 nova_compute[248510]: 2025-12-13 09:15:25.623 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:15:25 np0005558241 nova_compute[248510]: 2025-12-13 09:15:25.648 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:15:25 np0005558241 nova_compute[248510]: 2025-12-13 09:15:25.762 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:15:25 np0005558241 nova_compute[248510]: 2025-12-13 09:15:25.763 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:15:25 np0005558241 nova_compute[248510]: 2025-12-13 09:15:25.790 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:15:25 np0005558241 nova_compute[248510]: 2025-12-13 09:15:25.791 248514 INFO nova.compute.claims [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:15:25 np0005558241 nova_compute[248510]: 2025-12-13 09:15:25.917 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:15:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:15:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/576176643' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.551 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.558 248514 DEBUG nova.compute.provider_tree [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.582 248514 DEBUG nova.scheduler.client.report [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.609 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.610 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.659 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.660 248514 DEBUG nova.network.neutron [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.686 248514 INFO nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.708 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.745 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.809 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.826 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.827 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.827 248514 INFO nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Creating image(s)#033[00m
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.854 248514 DEBUG nova.storage.rbd_utils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:15:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3430: 321 pgs: 321 active+clean; 41 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.882 248514 DEBUG nova.storage.rbd_utils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.910 248514 DEBUG nova.storage.rbd_utils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:15:26 np0005558241 nova_compute[248510]: 2025-12-13 09:15:26.916 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:15:27 np0005558241 nova_compute[248510]: 2025-12-13 09:15:27.014 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:15:27 np0005558241 nova_compute[248510]: 2025-12-13 09:15:27.015 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:15:27 np0005558241 nova_compute[248510]: 2025-12-13 09:15:27.017 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:15:27 np0005558241 nova_compute[248510]: 2025-12-13 09:15:27.017 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:15:27 np0005558241 nova_compute[248510]: 2025-12-13 09:15:27.041 248514 DEBUG nova.storage.rbd_utils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:15:27 np0005558241 nova_compute[248510]: 2025-12-13 09:15:27.045 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:15:27 np0005558241 nova_compute[248510]: 2025-12-13 09:15:27.394 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:15:27 np0005558241 nova_compute[248510]: 2025-12-13 09:15:27.466 248514 DEBUG nova.storage.rbd_utils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:15:27 np0005558241 nova_compute[248510]: 2025-12-13 09:15:27.553 248514 DEBUG nova.objects.instance [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 44149dbe-362b-4930-a63b-d04c9a3b3b4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:15:27 np0005558241 nova_compute[248510]: 2025-12-13 09:15:27.684 248514 DEBUG nova.policy [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:15:27 np0005558241 nova_compute[248510]: 2025-12-13 09:15:27.700 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:15:27 np0005558241 nova_compute[248510]: 2025-12-13 09:15:27.701 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Ensure instance console log exists: /var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:15:27 np0005558241 nova_compute[248510]: 2025-12-13 09:15:27.701 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:15:27 np0005558241 nova_compute[248510]: 2025-12-13 09:15:27.702 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:15:27 np0005558241 nova_compute[248510]: 2025-12-13 09:15:27.702 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:15:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3431: 321 pgs: 321 active+clean; 62 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 1002 KiB/s wr, 1 op/s
Dec 13 04:15:29 np0005558241 nova_compute[248510]: 2025-12-13 09:15:29.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:30 np0005558241 nova_compute[248510]: 2025-12-13 09:15:30.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:15:30 np0005558241 nova_compute[248510]: 2025-12-13 09:15:30.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:15:30 np0005558241 nova_compute[248510]: 2025-12-13 09:15:30.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:15:30 np0005558241 nova_compute[248510]: 2025-12-13 09:15:30.801 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:15:30 np0005558241 nova_compute[248510]: 2025-12-13 09:15:30.802 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:15:30 np0005558241 nova_compute[248510]: 2025-12-13 09:15:30.802 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:15:30 np0005558241 nova_compute[248510]: 2025-12-13 09:15:30.802 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:15:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3432: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:15:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:15:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3990506696' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:15:31 np0005558241 nova_compute[248510]: 2025-12-13 09:15:31.352 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:15:31 np0005558241 nova_compute[248510]: 2025-12-13 09:15:31.543 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:15:31 np0005558241 nova_compute[248510]: 2025-12-13 09:15:31.544 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3487MB free_disk=59.96664601098746GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:15:31 np0005558241 nova_compute[248510]: 2025-12-13 09:15:31.544 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:15:31 np0005558241 nova_compute[248510]: 2025-12-13 09:15:31.545 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:15:31 np0005558241 nova_compute[248510]: 2025-12-13 09:15:31.632 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 44149dbe-362b-4930-a63b-d04c9a3b3b4c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:15:31 np0005558241 nova_compute[248510]: 2025-12-13 09:15:31.632 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:15:31 np0005558241 nova_compute[248510]: 2025-12-13 09:15:31.632 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:15:31 np0005558241 nova_compute[248510]: 2025-12-13 09:15:31.688 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:15:31 np0005558241 nova_compute[248510]: 2025-12-13 09:15:31.748 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:31 np0005558241 nova_compute[248510]: 2025-12-13 09:15:31.952 248514 DEBUG nova.network.neutron [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Successfully created port: 66abeeb9-b4e2-4901-9437-be8cd001222f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:15:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:15:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3627510535' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:15:32 np0005558241 nova_compute[248510]: 2025-12-13 09:15:32.231 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:15:32 np0005558241 nova_compute[248510]: 2025-12-13 09:15:32.238 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:15:32 np0005558241 nova_compute[248510]: 2025-12-13 09:15:32.257 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:15:32 np0005558241 nova_compute[248510]: 2025-12-13 09:15:32.287 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:15:32 np0005558241 nova_compute[248510]: 2025-12-13 09:15:32.288 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:15:32 np0005558241 nova_compute[248510]: 2025-12-13 09:15:32.583 248514 DEBUG nova.network.neutron [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Successfully created port: 3b47d34a-6968-411d-8f9d-38a835c0fa77 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:15:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3433: 321 pgs: 321 active+clean; 88 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:15:33 np0005558241 podman[398459]: 2025-12-13 09:15:33.018437094 +0000 UTC m=+0.091767037 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 13 04:15:33 np0005558241 podman[398460]: 2025-12-13 09:15:33.020864995 +0000 UTC m=+0.090400184 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Dec 13 04:15:33 np0005558241 podman[398458]: 2025-12-13 09:15:33.093545216 +0000 UTC m=+0.170794477 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 04:15:34 np0005558241 nova_compute[248510]: 2025-12-13 09:15:34.007 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:34 np0005558241 nova_compute[248510]: 2025-12-13 09:15:34.043 248514 DEBUG nova.network.neutron [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Successfully updated port: 66abeeb9-b4e2-4901-9437-be8cd001222f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:15:34 np0005558241 nova_compute[248510]: 2025-12-13 09:15:34.216 248514 DEBUG nova.compute.manager [req-782084dd-9972-4439-b6a9-09418aa0fc11 req-7f4a77bf-98b7-462f-8ef9-e9b06228c46b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-changed-66abeeb9-b4e2-4901-9437-be8cd001222f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:15:34 np0005558241 nova_compute[248510]: 2025-12-13 09:15:34.217 248514 DEBUG nova.compute.manager [req-782084dd-9972-4439-b6a9-09418aa0fc11 req-7f4a77bf-98b7-462f-8ef9-e9b06228c46b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Refreshing instance network info cache due to event network-changed-66abeeb9-b4e2-4901-9437-be8cd001222f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:15:34 np0005558241 nova_compute[248510]: 2025-12-13 09:15:34.218 248514 DEBUG oslo_concurrency.lockutils [req-782084dd-9972-4439-b6a9-09418aa0fc11 req-7f4a77bf-98b7-462f-8ef9-e9b06228c46b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:15:34 np0005558241 nova_compute[248510]: 2025-12-13 09:15:34.218 248514 DEBUG oslo_concurrency.lockutils [req-782084dd-9972-4439-b6a9-09418aa0fc11 req-7f4a77bf-98b7-462f-8ef9-e9b06228c46b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:15:34 np0005558241 nova_compute[248510]: 2025-12-13 09:15:34.218 248514 DEBUG nova.network.neutron [req-782084dd-9972-4439-b6a9-09418aa0fc11 req-7f4a77bf-98b7-462f-8ef9-e9b06228c46b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Refreshing network info cache for port 66abeeb9-b4e2-4901-9437-be8cd001222f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:15:34 np0005558241 nova_compute[248510]: 2025-12-13 09:15:34.288 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:15:34 np0005558241 nova_compute[248510]: 2025-12-13 09:15:34.289 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:15:34 np0005558241 nova_compute[248510]: 2025-12-13 09:15:34.289 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:15:34 np0005558241 nova_compute[248510]: 2025-12-13 09:15:34.477 248514 DEBUG nova.network.neutron [req-782084dd-9972-4439-b6a9-09418aa0fc11 req-7f4a77bf-98b7-462f-8ef9-e9b06228c46b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:15:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3434: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:15:35 np0005558241 nova_compute[248510]: 2025-12-13 09:15:35.018 248514 DEBUG nova.network.neutron [req-782084dd-9972-4439-b6a9-09418aa0fc11 req-7f4a77bf-98b7-462f-8ef9-e9b06228c46b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:15:35 np0005558241 nova_compute[248510]: 2025-12-13 09:15:35.058 248514 DEBUG oslo_concurrency.lockutils [req-782084dd-9972-4439-b6a9-09418aa0fc11 req-7f4a77bf-98b7-462f-8ef9-e9b06228c46b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:15:36 np0005558241 nova_compute[248510]: 2025-12-13 09:15:36.017 248514 DEBUG nova.network.neutron [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Successfully updated port: 3b47d34a-6968-411d-8f9d-38a835c0fa77 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:15:36 np0005558241 nova_compute[248510]: 2025-12-13 09:15:36.137 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:15:36 np0005558241 nova_compute[248510]: 2025-12-13 09:15:36.138 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:15:36 np0005558241 nova_compute[248510]: 2025-12-13 09:15:36.139 248514 DEBUG nova.network.neutron [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:15:36 np0005558241 nova_compute[248510]: 2025-12-13 09:15:36.270 248514 DEBUG nova.network.neutron [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:15:36 np0005558241 nova_compute[248510]: 2025-12-13 09:15:36.319 248514 DEBUG nova.compute.manager [req-9e85b135-0181-4b79-85bb-42bad8c09288 req-da2ef161-4f3b-454a-aa4a-4168ec649059 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-changed-3b47d34a-6968-411d-8f9d-38a835c0fa77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:15:36 np0005558241 nova_compute[248510]: 2025-12-13 09:15:36.319 248514 DEBUG nova.compute.manager [req-9e85b135-0181-4b79-85bb-42bad8c09288 req-da2ef161-4f3b-454a-aa4a-4168ec649059 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Refreshing instance network info cache due to event network-changed-3b47d34a-6968-411d-8f9d-38a835c0fa77. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:15:36 np0005558241 nova_compute[248510]: 2025-12-13 09:15:36.319 248514 DEBUG oslo_concurrency.lockutils [req-9e85b135-0181-4b79-85bb-42bad8c09288 req-da2ef161-4f3b-454a-aa4a-4168ec649059 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:15:36 np0005558241 nova_compute[248510]: 2025-12-13 09:15:36.750 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:36 np0005558241 nova_compute[248510]: 2025-12-13 09:15:36.774 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:15:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3435: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:15:37 np0005558241 ovn_controller[148476]: 2025-12-13T09:15:37Z|01552|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec 13 04:15:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3436: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:15:38 np0005558241 nova_compute[248510]: 2025-12-13 09:15:38.973 248514 DEBUG nova.network.neutron [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updating instance_info_cache with network_info: [{"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.009 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.756 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.757 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Instance network_info: |[{"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.758 248514 DEBUG oslo_concurrency.lockutils [req-9e85b135-0181-4b79-85bb-42bad8c09288 req-da2ef161-4f3b-454a-aa4a-4168ec649059 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.759 248514 DEBUG nova.network.neutron [req-9e85b135-0181-4b79-85bb-42bad8c09288 req-da2ef161-4f3b-454a-aa4a-4168ec649059 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Refreshing network info cache for port 3b47d34a-6968-411d-8f9d-38a835c0fa77 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.763 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Start _get_guest_xml network_info=[{"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.769 248514 WARNING nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.777 248514 DEBUG nova.virt.libvirt.host [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.778 248514 DEBUG nova.virt.libvirt.host [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.781 248514 DEBUG nova.virt.libvirt.host [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.782 248514 DEBUG nova.virt.libvirt.host [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.782 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.782 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.783 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.783 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.783 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.783 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.784 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.784 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.784 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.784 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.785 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.785 248514 DEBUG nova.virt.hardware [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:15:39 np0005558241 nova_compute[248510]: 2025-12-13 09:15:39.788 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:15:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:15:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:15:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:15:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:15:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:15:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:15:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:15:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4016951900' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:40 np0005558241 nova_compute[248510]: 2025-12-13 09:15:40.380 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:15:40 np0005558241 nova_compute[248510]: 2025-12-13 09:15:40.410 248514 DEBUG nova.storage.rbd_utils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:15:40 np0005558241 nova_compute[248510]: 2025-12-13 09:15:40.415 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:15:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3437: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 812 KiB/s wr, 25 op/s
Dec 13 04:15:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:15:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3175098924' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:40 np0005558241 nova_compute[248510]: 2025-12-13 09:15:40.966 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:15:40 np0005558241 nova_compute[248510]: 2025-12-13 09:15:40.969 248514 DEBUG nova.virt.libvirt.vif [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:15:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1513449558',display_name='tempest-TestGettingAddress-server-1513449558',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1513449558',id=145,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-g1atpgf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:15:26Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=44149dbe-362b-4930-a63b-d04c9a3b3b4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:15:40 np0005558241 nova_compute[248510]: 2025-12-13 09:15:40.969 248514 DEBUG nova.network.os_vif_util [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:15:40 np0005558241 nova_compute[248510]: 2025-12-13 09:15:40.970 248514 DEBUG nova.network.os_vif_util [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:09:e2,bridge_name='br-int',has_traffic_filtering=True,id=66abeeb9-b4e2-4901-9437-be8cd001222f,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66abeeb9-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:15:40 np0005558241 nova_compute[248510]: 2025-12-13 09:15:40.971 248514 DEBUG nova.virt.libvirt.vif [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:15:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1513449558',display_name='tempest-TestGettingAddress-server-1513449558',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1513449558',id=145,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-g1atpgf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:15:26Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=44149dbe-362b-4930-a63b-d04c9a3b3b4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:15:40 np0005558241 nova_compute[248510]: 2025-12-13 09:15:40.971 248514 DEBUG nova.network.os_vif_util [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:15:40 np0005558241 nova_compute[248510]: 2025-12-13 09:15:40.972 248514 DEBUG nova.network.os_vif_util [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:e9:f0,bridge_name='br-int',has_traffic_filtering=True,id=3b47d34a-6968-411d-8f9d-38a835c0fa77,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b47d34a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:15:40 np0005558241 nova_compute[248510]: 2025-12-13 09:15:40.973 248514 DEBUG nova.objects.instance [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 44149dbe-362b-4930-a63b-d04c9a3b3b4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.615 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:15:41 np0005558241 nova_compute[248510]:  <uuid>44149dbe-362b-4930-a63b-d04c9a3b3b4c</uuid>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:  <name>instance-00000091</name>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestGettingAddress-server-1513449558</nova:name>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:15:39</nova:creationTime>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:15:41 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:        <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:        <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:        <nova:port uuid="66abeeb9-b4e2-4901-9437-be8cd001222f">
Dec 13 04:15:41 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:        <nova:port uuid="3b47d34a-6968-411d-8f9d-38a835c0fa77">
Dec 13 04:15:41 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe1a:e9f0" ipVersion="6"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <entry name="serial">44149dbe-362b-4930-a63b-d04c9a3b3b4c</entry>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <entry name="uuid">44149dbe-362b-4930-a63b-d04c9a3b3b4c</entry>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk">
Dec 13 04:15:41 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:15:41 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk.config">
Dec 13 04:15:41 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:15:41 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:08:09:e2"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <target dev="tap66abeeb9-b4"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:1a:e9:f0"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <target dev="tap3b47d34a-69"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c/console.log" append="off"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:15:41 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:15:41 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:15:41 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:15:41 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.617 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Preparing to wait for external event network-vif-plugged-66abeeb9-b4e2-4901-9437-be8cd001222f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.618 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.618 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.618 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.619 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Preparing to wait for external event network-vif-plugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.619 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.619 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.619 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.620 248514 DEBUG nova.virt.libvirt.vif [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:15:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1513449558',display_name='tempest-TestGettingAddress-server-1513449558',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1513449558',id=145,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-g1atpgf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:15:26Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=44149dbe-362b-4930-a63b-d04c9a3b3b4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.620 248514 DEBUG nova.network.os_vif_util [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.621 248514 DEBUG nova.network.os_vif_util [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:09:e2,bridge_name='br-int',has_traffic_filtering=True,id=66abeeb9-b4e2-4901-9437-be8cd001222f,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66abeeb9-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.621 248514 DEBUG os_vif [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:09:e2,bridge_name='br-int',has_traffic_filtering=True,id=66abeeb9-b4e2-4901-9437-be8cd001222f,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66abeeb9-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.622 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.622 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.623 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.627 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.628 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66abeeb9-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.629 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap66abeeb9-b4, col_values=(('external_ids', {'iface-id': '66abeeb9-b4e2-4901-9437-be8cd001222f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:09:e2', 'vm-uuid': '44149dbe-362b-4930-a63b-d04c9a3b3b4c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.632 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:41 np0005558241 NetworkManager[50376]: <info>  [1765617341.6328] manager: (tap66abeeb9-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/641)
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.635 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.642 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.644 248514 INFO os_vif [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:09:e2,bridge_name='br-int',has_traffic_filtering=True,id=66abeeb9-b4e2-4901-9437-be8cd001222f,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66abeeb9-b4')#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.645 248514 DEBUG nova.virt.libvirt.vif [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:15:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1513449558',display_name='tempest-TestGettingAddress-server-1513449558',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1513449558',id=145,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-g1atpgf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:15:26Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=44149dbe-362b-4930-a63b-d04c9a3b3b4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.646 248514 DEBUG nova.network.os_vif_util [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.647 248514 DEBUG nova.network.os_vif_util [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:e9:f0,bridge_name='br-int',has_traffic_filtering=True,id=3b47d34a-6968-411d-8f9d-38a835c0fa77,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b47d34a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.648 248514 DEBUG os_vif [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:e9:f0,bridge_name='br-int',has_traffic_filtering=True,id=3b47d34a-6968-411d-8f9d-38a835c0fa77,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b47d34a-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.649 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.649 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.650 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.652 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.653 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b47d34a-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.653 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b47d34a-69, col_values=(('external_ids', {'iface-id': '3b47d34a-6968-411d-8f9d-38a835c0fa77', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:e9:f0', 'vm-uuid': '44149dbe-362b-4930-a63b-d04c9a3b3b4c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.655 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:41 np0005558241 NetworkManager[50376]: <info>  [1765617341.6570] manager: (tap3b47d34a-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/642)
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.658 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.667 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:41 np0005558241 nova_compute[248510]: 2025-12-13 09:15:41.669 248514 INFO os_vif [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:e9:f0,bridge_name='br-int',has_traffic_filtering=True,id=3b47d34a-6968-411d-8f9d-38a835c0fa77,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b47d34a-69')#033[00m
Dec 13 04:15:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3438: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:15:42 np0005558241 nova_compute[248510]: 2025-12-13 09:15:42.961 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:15:42 np0005558241 nova_compute[248510]: 2025-12-13 09:15:42.961 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:15:42 np0005558241 nova_compute[248510]: 2025-12-13 09:15:42.962 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:08:09:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:15:42 np0005558241 nova_compute[248510]: 2025-12-13 09:15:42.962 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:1a:e9:f0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:15:42 np0005558241 nova_compute[248510]: 2025-12-13 09:15:42.963 248514 INFO nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Using config drive#033[00m
Dec 13 04:15:42 np0005558241 nova_compute[248510]: 2025-12-13 09:15:42.995 248514 DEBUG nova.storage.rbd_utils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:15:43 np0005558241 nova_compute[248510]: 2025-12-13 09:15:43.509 248514 INFO nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Creating config drive at /var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c/disk.config#033[00m
Dec 13 04:15:43 np0005558241 nova_compute[248510]: 2025-12-13 09:15:43.520 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq4jvc5cf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:15:43 np0005558241 nova_compute[248510]: 2025-12-13 09:15:43.586 248514 DEBUG nova.network.neutron [req-9e85b135-0181-4b79-85bb-42bad8c09288 req-da2ef161-4f3b-454a-aa4a-4168ec649059 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updated VIF entry in instance network info cache for port 3b47d34a-6968-411d-8f9d-38a835c0fa77. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:15:43 np0005558241 nova_compute[248510]: 2025-12-13 09:15:43.588 248514 DEBUG nova.network.neutron [req-9e85b135-0181-4b79-85bb-42bad8c09288 req-da2ef161-4f3b-454a-aa4a-4168ec649059 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updating instance_info_cache with network_info: [{"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:15:43 np0005558241 nova_compute[248510]: 2025-12-13 09:15:43.637 248514 DEBUG oslo_concurrency.lockutils [req-9e85b135-0181-4b79-85bb-42bad8c09288 req-da2ef161-4f3b-454a-aa4a-4168ec649059 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:15:43 np0005558241 nova_compute[248510]: 2025-12-13 09:15:43.695 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq4jvc5cf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:15:43 np0005558241 nova_compute[248510]: 2025-12-13 09:15:43.731 248514 DEBUG nova.storage.rbd_utils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:15:43 np0005558241 nova_compute[248510]: 2025-12-13 09:15:43.735 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c/disk.config 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:15:43 np0005558241 nova_compute[248510]: 2025-12-13 09:15:43.886 248514 DEBUG oslo_concurrency.processutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c/disk.config 44149dbe-362b-4930-a63b-d04c9a3b3b4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:15:43 np0005558241 nova_compute[248510]: 2025-12-13 09:15:43.888 248514 INFO nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Deleting local config drive /var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c/disk.config because it was imported into RBD.#033[00m
Dec 13 04:15:43 np0005558241 NetworkManager[50376]: <info>  [1765617343.9590] manager: (tap66abeeb9-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/643)
Dec 13 04:15:43 np0005558241 kernel: tap66abeeb9-b4: entered promiscuous mode
Dec 13 04:15:43 np0005558241 nova_compute[248510]: 2025-12-13 09:15:43.962 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:43 np0005558241 ovn_controller[148476]: 2025-12-13T09:15:43Z|01553|binding|INFO|Claiming lport 66abeeb9-b4e2-4901-9437-be8cd001222f for this chassis.
Dec 13 04:15:43 np0005558241 ovn_controller[148476]: 2025-12-13T09:15:43Z|01554|binding|INFO|66abeeb9-b4e2-4901-9437-be8cd001222f: Claiming fa:16:3e:08:09:e2 10.100.0.7
Dec 13 04:15:43 np0005558241 kernel: tap3b47d34a-69: entered promiscuous mode
Dec 13 04:15:43 np0005558241 nova_compute[248510]: 2025-12-13 09:15:43.980 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:43 np0005558241 NetworkManager[50376]: <info>  [1765617343.9816] manager: (tap3b47d34a-69): new Tun device (/org/freedesktop/NetworkManager/Devices/644)
Dec 13 04:15:43 np0005558241 ovn_controller[148476]: 2025-12-13T09:15:43Z|01555|if_status|INFO|Dropped 2 log messages in last 108 seconds (most recently, 108 seconds ago) due to excessive rate
Dec 13 04:15:43 np0005558241 ovn_controller[148476]: 2025-12-13T09:15:43Z|01556|if_status|INFO|Not updating pb chassis for 3b47d34a-6968-411d-8f9d-38a835c0fa77 now as sb is readonly
Dec 13 04:15:43 np0005558241 ovn_controller[148476]: 2025-12-13T09:15:43Z|01557|binding|INFO|Claiming lport 3b47d34a-6968-411d-8f9d-38a835c0fa77 for this chassis.
Dec 13 04:15:43 np0005558241 ovn_controller[148476]: 2025-12-13T09:15:43Z|01558|binding|INFO|3b47d34a-6968-411d-8f9d-38a835c0fa77: Claiming fa:16:3e:1a:e9:f0 2001:db8::f816:3eff:fe1a:e9f0
Dec 13 04:15:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:43.989 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:09:e2 10.100.0.7'], port_security=['fa:16:3e:08:09:e2 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '44149dbe-362b-4930-a63b-d04c9a3b3b4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3fc9ec4-4452-4225-b100-75f3859e091a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ec73538a-d215-468f-b84e-56f8b040d8a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74bfbdc0-d183-4405-bf5b-7ce9dc4c0882, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=66abeeb9-b4e2-4901-9437-be8cd001222f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:15:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:43.993 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 66abeeb9-b4e2-4901-9437-be8cd001222f in datapath d3fc9ec4-4452-4225-b100-75f3859e091a bound to our chassis#033[00m
Dec 13 04:15:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:43.995 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:e9:f0 2001:db8::f816:3eff:fe1a:e9f0'], port_security=['fa:16:3e:1a:e9:f0 2001:db8::f816:3eff:fe1a:e9f0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe1a:e9f0/64', 'neutron:device_id': '44149dbe-362b-4930-a63b-d04c9a3b3b4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ec73538a-d215-468f-b84e-56f8b040d8a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=637bfd4d-c545-47be-afdb-625fd0ecaffb, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3b47d34a-6968-411d-8f9d-38a835c0fa77) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:15:43 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:43.999 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d3fc9ec4-4452-4225-b100-75f3859e091a#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.017 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dc6cd296-0ba4-4058-af46-871d71ea9e6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.019 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd3fc9ec4-41 in ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.021 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd3fc9ec4-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:15:44 np0005558241 systemd-udevd[398660]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:15:44 np0005558241 systemd-udevd[398661]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.021 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f9818e5a-c0a2-4b4d-9ac4-411383ea4ae8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.022 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[10c375dd-0a27-4f5c-a60f-8f7309987f00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:44 np0005558241 systemd-machined[210538]: New machine qemu-176-instance-00000091.
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.040 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[c9086443-d4cc-4c57-9431-06049b61445c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:44 np0005558241 NetworkManager[50376]: <info>  [1765617344.0423] device (tap66abeeb9-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:15:44 np0005558241 NetworkManager[50376]: <info>  [1765617344.0435] device (tap66abeeb9-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:15:44 np0005558241 NetworkManager[50376]: <info>  [1765617344.0443] device (tap3b47d34a-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:15:44 np0005558241 NetworkManager[50376]: <info>  [1765617344.0453] device (tap3b47d34a-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:15:44 np0005558241 systemd[1]: Started Virtual Machine qemu-176-instance-00000091.
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.065 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.066 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ddfb2b0d-d987-4331-873f-20716b0db226]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.068 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:44 np0005558241 ovn_controller[148476]: 2025-12-13T09:15:44Z|01559|binding|INFO|Setting lport 66abeeb9-b4e2-4901-9437-be8cd001222f ovn-installed in OVS
Dec 13 04:15:44 np0005558241 ovn_controller[148476]: 2025-12-13T09:15:44Z|01560|binding|INFO|Setting lport 66abeeb9-b4e2-4901-9437-be8cd001222f up in Southbound
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.076 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:44 np0005558241 ovn_controller[148476]: 2025-12-13T09:15:44Z|01561|binding|INFO|Setting lport 3b47d34a-6968-411d-8f9d-38a835c0fa77 ovn-installed in OVS
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.087 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.102 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c133e0-02db-4489-8fa1-a6da1c6f7fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:44 np0005558241 NetworkManager[50376]: <info>  [1765617344.1091] manager: (tapd3fc9ec4-40): new Veth device (/org/freedesktop/NetworkManager/Devices/645)
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.108 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[89fd99d5-46cb-40d9-931a-ec6ba9a47dfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:44 np0005558241 ovn_controller[148476]: 2025-12-13T09:15:44Z|01562|binding|INFO|Setting lport 3b47d34a-6968-411d-8f9d-38a835c0fa77 up in Southbound
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.143 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[214a853a-9b85-48cf-b8fd-c1a119cea5c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.146 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[bbaf9a95-42b0-466b-b0c4-f79c6893464c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:44 np0005558241 NetworkManager[50376]: <info>  [1765617344.1704] device (tapd3fc9ec4-40): carrier: link connected
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.178 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e97c620c-0de3-413f-9913-2eb3341c193d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.197 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ac64a9df-21b5-4d78-93b8-39df7b61329e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3fc9ec4-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:82:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 446], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992138, 'reachable_time': 20078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398694, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.221 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b2cb8c3e-18a8-43dd-a236-12e7137d62d1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febc:82c8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 992138, 'tstamp': 992138}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398695, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.247 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[57032f95-44cf-4464-b847-b46c3b55840a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3fc9ec4-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:82:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 446], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992138, 'reachable_time': 20078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 398696, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.297 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[753daf97-5e5f-40ef-9d5c-b7ed8073bdce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.378 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[6497679a-14ff-4875-8a73-54e30ee487ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.381 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3fc9ec4-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.381 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.382 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3fc9ec4-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.384 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:44 np0005558241 NetworkManager[50376]: <info>  [1765617344.3849] manager: (tapd3fc9ec4-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/646)
Dec 13 04:15:44 np0005558241 kernel: tapd3fc9ec4-40: entered promiscuous mode
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.386 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd3fc9ec4-40, col_values=(('external_ids', {'iface-id': 'f954b813-27c7-46fa-b00a-23e0bb5d820d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.387 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:44 np0005558241 ovn_controller[148476]: 2025-12-13T09:15:44Z|01563|binding|INFO|Releasing lport f954b813-27c7-46fa-b00a-23e0bb5d820d from this chassis (sb_readonly=0)
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.404 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.405 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d3fc9ec4-4452-4225-b100-75f3859e091a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d3fc9ec4-4452-4225-b100-75f3859e091a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.407 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[62a9b8a6-4658-40ca-bab3-f22f8fce182c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.407 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-d3fc9ec4-4452-4225-b100-75f3859e091a
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/d3fc9ec4-4452-4225-b100-75f3859e091a.pid.haproxy
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID d3fc9ec4-4452-4225-b100-75f3859e091a
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.409 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'env', 'PROCESS_TAG=haproxy-d3fc9ec4-4452-4225-b100-75f3859e091a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d3fc9ec4-4452-4225-b100-75f3859e091a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.667 248514 DEBUG nova.compute.manager [req-daac2a19-454a-425a-9505-dc7130eae2a5 req-01fc90bd-5f2a-4ba1-95a4-1e8e70e356b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-plugged-66abeeb9-b4e2-4901-9437-be8cd001222f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.668 248514 DEBUG oslo_concurrency.lockutils [req-daac2a19-454a-425a-9505-dc7130eae2a5 req-01fc90bd-5f2a-4ba1-95a4-1e8e70e356b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.668 248514 DEBUG oslo_concurrency.lockutils [req-daac2a19-454a-425a-9505-dc7130eae2a5 req-01fc90bd-5f2a-4ba1-95a4-1e8e70e356b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.668 248514 DEBUG oslo_concurrency.lockutils [req-daac2a19-454a-425a-9505-dc7130eae2a5 req-01fc90bd-5f2a-4ba1-95a4-1e8e70e356b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.669 248514 DEBUG nova.compute.manager [req-daac2a19-454a-425a-9505-dc7130eae2a5 req-01fc90bd-5f2a-4ba1-95a4-1e8e70e356b4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Processing event network-vif-plugged-66abeeb9-b4e2-4901-9437-be8cd001222f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.688 248514 DEBUG nova.compute.manager [req-cf83ad62-100d-47e7-81fb-2a0879a68b95 req-d1a96109-15b2-42fd-8f59-a67d4bad6d8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-plugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.688 248514 DEBUG oslo_concurrency.lockutils [req-cf83ad62-100d-47e7-81fb-2a0879a68b95 req-d1a96109-15b2-42fd-8f59-a67d4bad6d8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.689 248514 DEBUG oslo_concurrency.lockutils [req-cf83ad62-100d-47e7-81fb-2a0879a68b95 req-d1a96109-15b2-42fd-8f59-a67d4bad6d8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.689 248514 DEBUG oslo_concurrency.lockutils [req-cf83ad62-100d-47e7-81fb-2a0879a68b95 req-d1a96109-15b2-42fd-8f59-a67d4bad6d8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.690 248514 DEBUG nova.compute.manager [req-cf83ad62-100d-47e7-81fb-2a0879a68b95 req-d1a96109-15b2-42fd-8f59-a67d4bad6d8e 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Processing event network-vif-plugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.690 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617344.6895018, 44149dbe-362b-4930-a63b-d04c9a3b3b4c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.690 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] VM Started (Lifecycle Event)#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.693 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.696 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.701 248514 INFO nova.virt.libvirt.driver [-] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Instance spawned successfully.#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.701 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.724 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.731 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.736 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.736 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.736 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.737 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.737 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.737 248514 DEBUG nova.virt.libvirt.driver [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.770 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.771 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617344.689662, 44149dbe-362b-4930-a63b-d04c9a3b3b4c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.771 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:15:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:44.808 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.808 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.810 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.813 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617344.6956758, 44149dbe-362b-4930-a63b-d04c9a3b3b4c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.813 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.824 248514 INFO nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Took 18.00 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.825 248514 DEBUG nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.854 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.856 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:15:44 np0005558241 podman[398771]: 2025-12-13 09:15:44.87925182 +0000 UTC m=+0.093676215 container create 6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:15:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3439: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 5 op/s
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.889 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.903 248514 INFO nova.compute.manager [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Took 19.18 seconds to build instance.#033[00m
Dec 13 04:15:44 np0005558241 podman[398771]: 2025-12-13 09:15:44.807972834 +0000 UTC m=+0.022397259 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:15:44 np0005558241 nova_compute[248510]: 2025-12-13 09:15:44.919 248514 DEBUG oslo_concurrency.lockutils [None req-c9e49410-966f-4040-b1e1-9d86312d37a2 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:15:44 np0005558241 systemd[1]: Started libpod-conmon-6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb.scope.
Dec 13 04:15:44 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:15:44 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72ad838abac5f9443ea2e98c9a253d9a9467a978cbeac3dbb207a420c8706cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:44 np0005558241 podman[398771]: 2025-12-13 09:15:44.993692741 +0000 UTC m=+0.208117156 container init 6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 04:15:45 np0005558241 podman[398771]: 2025-12-13 09:15:45.000561642 +0000 UTC m=+0.214986057 container start 6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:15:45 np0005558241 neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a[398786]: [NOTICE]   (398790) : New worker (398792) forked
Dec 13 04:15:45 np0005558241 neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a[398786]: [NOTICE]   (398790) : Loading success.
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.070 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3b47d34a-6968-411d-8f9d-38a835c0fa77 in datapath 9f757ab8-2a8f-4771-8492-2bcf521016bb unbound from our chassis#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.072 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f757ab8-2a8f-4771-8492-2bcf521016bb#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.085 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[726093cb-7575-43c2-822c-45bd5fa920b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.087 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f757ab8-21 in ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.089 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f757ab8-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.089 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d6c8bed7-ec4f-49a4-9a6d-79dd45b37dc6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.090 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5814855e-61e5-450f-8973-0c3fada5a30e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.102 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[df8e8183-545a-4ea2-a98a-8e0f6c0c53c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.118 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f234cdc5-6f4b-4ee8-92c2-1b97968115a5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.155 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[555bce7b-629c-4f8f-8159-d3c467b35c41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.161 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c227b4c0-4f8e-47b4-bf21-1130f1eb6279]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:45 np0005558241 NetworkManager[50376]: <info>  [1765617345.1631] manager: (tap9f757ab8-20): new Veth device (/org/freedesktop/NetworkManager/Devices/647)
Dec 13 04:15:45 np0005558241 systemd-udevd[398690]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.218 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[509606e9-c9e8-4730-a9ea-f49fb270c046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.224 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6562d251-d3b2-48a6-991b-e2a998ae26a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:45 np0005558241 NetworkManager[50376]: <info>  [1765617345.2554] device (tap9f757ab8-20): carrier: link connected
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.261 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[eae2a628-ad93-4785-bd30-8766338daaa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.279 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[02b99b13-bfa1-4ded-a5fc-288ff8c59d77]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f757ab8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c4:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992247, 'reachable_time': 37493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398811, 'error': None, 'target': 'ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.298 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[53a8d9ed-6180-4c8e-aa17-8158bae88596]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:c4fa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 992247, 'tstamp': 992247}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398812, 'error': None, 'target': 'ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.320 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fa367df1-e34b-49d4-ab94-6da42bce2bbe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f757ab8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c4:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992247, 'reachable_time': 37493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 398813, 'error': None, 'target': 'ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.357 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[251ddd37-91c8-4af1-b802-6dd566937f3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.404 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2154ef2c-0019-42b8-b7e8-ea6cbd4838d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.408 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f757ab8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.409 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.409 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f757ab8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:15:45 np0005558241 nova_compute[248510]: 2025-12-13 09:15:45.411 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:45 np0005558241 kernel: tap9f757ab8-20: entered promiscuous mode
Dec 13 04:15:45 np0005558241 NetworkManager[50376]: <info>  [1765617345.4125] manager: (tap9f757ab8-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/648)
Dec 13 04:15:45 np0005558241 nova_compute[248510]: 2025-12-13 09:15:45.414 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.417 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f757ab8-20, col_values=(('external_ids', {'iface-id': '8c81cc53-1ad8-47d5-9def-f8710a05a423'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:15:45 np0005558241 nova_compute[248510]: 2025-12-13 09:15:45.418 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:45 np0005558241 ovn_controller[148476]: 2025-12-13T09:15:45Z|01564|binding|INFO|Releasing lport 8c81cc53-1ad8-47d5-9def-f8710a05a423 from this chassis (sb_readonly=0)
Dec 13 04:15:45 np0005558241 nova_compute[248510]: 2025-12-13 09:15:45.433 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.435 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f757ab8-2a8f-4771-8492-2bcf521016bb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f757ab8-2a8f-4771-8492-2bcf521016bb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.436 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f706cf-f5cf-421c-a235-9db849e3b715]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.437 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-9f757ab8-2a8f-4771-8492-2bcf521016bb
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/9f757ab8-2a8f-4771-8492-2bcf521016bb.pid.haproxy
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID 9f757ab8-2a8f-4771-8492-2bcf521016bb
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:15:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:45.439 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'env', 'PROCESS_TAG=haproxy-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f757ab8-2a8f-4771-8492-2bcf521016bb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:15:45 np0005558241 nova_compute[248510]: 2025-12-13 09:15:45.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:15:45 np0005558241 podman[398891]: 2025-12-13 09:15:45.875027462 +0000 UTC m=+0.059225837 container create 2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:15:45 np0005558241 systemd[1]: Started libpod-conmon-2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a.scope.
Dec 13 04:15:45 np0005558241 podman[398891]: 2025-12-13 09:15:45.84125683 +0000 UTC m=+0.025455225 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:15:45 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:15:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f319c184b06470198864244c31cd681dddd4dfb8c67f2ea0cdfe3b222afde25a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:45 np0005558241 podman[398891]: 2025-12-13 09:15:45.959319392 +0000 UTC m=+0.143517777 container init 2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:15:45 np0005558241 podman[398891]: 2025-12-13 09:15:45.965629359 +0000 UTC m=+0.149827744 container start 2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 04:15:45 np0005558241 neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb[398907]: [NOTICE]   (398911) : New worker (398913) forked
Dec 13 04:15:45 np0005558241 neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb[398907]: [NOTICE]   (398911) : Loading success.
Dec 13 04:15:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:46.033 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:15:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:15:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:15:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:15:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:15:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:15:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:15:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:15:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:15:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:15:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:15:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:15:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:15:46 np0005558241 nova_compute[248510]: 2025-12-13 09:15:46.656 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:46 np0005558241 nova_compute[248510]: 2025-12-13 09:15:46.803 248514 DEBUG nova.compute.manager [req-fbfc73b6-d644-449f-86c3-896020825ce0 req-856e9641-286a-4b66-89d5-ad7cd8907310 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-plugged-66abeeb9-b4e2-4901-9437-be8cd001222f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:15:46 np0005558241 nova_compute[248510]: 2025-12-13 09:15:46.804 248514 DEBUG oslo_concurrency.lockutils [req-fbfc73b6-d644-449f-86c3-896020825ce0 req-856e9641-286a-4b66-89d5-ad7cd8907310 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:15:46 np0005558241 nova_compute[248510]: 2025-12-13 09:15:46.804 248514 DEBUG oslo_concurrency.lockutils [req-fbfc73b6-d644-449f-86c3-896020825ce0 req-856e9641-286a-4b66-89d5-ad7cd8907310 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:15:46 np0005558241 nova_compute[248510]: 2025-12-13 09:15:46.804 248514 DEBUG oslo_concurrency.lockutils [req-fbfc73b6-d644-449f-86c3-896020825ce0 req-856e9641-286a-4b66-89d5-ad7cd8907310 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:15:46 np0005558241 nova_compute[248510]: 2025-12-13 09:15:46.804 248514 DEBUG nova.compute.manager [req-fbfc73b6-d644-449f-86c3-896020825ce0 req-856e9641-286a-4b66-89d5-ad7cd8907310 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] No waiting events found dispatching network-vif-plugged-66abeeb9-b4e2-4901-9437-be8cd001222f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:15:46 np0005558241 nova_compute[248510]: 2025-12-13 09:15:46.805 248514 WARNING nova.compute.manager [req-fbfc73b6-d644-449f-86c3-896020825ce0 req-856e9641-286a-4b66-89d5-ad7cd8907310 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received unexpected event network-vif-plugged-66abeeb9-b4e2-4901-9437-be8cd001222f for instance with vm_state active and task_state None.#033[00m
Dec 13 04:15:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3440: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 5 op/s
Dec 13 04:15:46 np0005558241 nova_compute[248510]: 2025-12-13 09:15:46.957 248514 DEBUG nova.compute.manager [req-51b28370-993b-4fa0-8771-913c4a63aa07 req-3013be1c-4bff-4d2c-a7c1-bb27f5c1f9a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-plugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:15:46 np0005558241 nova_compute[248510]: 2025-12-13 09:15:46.957 248514 DEBUG oslo_concurrency.lockutils [req-51b28370-993b-4fa0-8771-913c4a63aa07 req-3013be1c-4bff-4d2c-a7c1-bb27f5c1f9a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:15:46 np0005558241 nova_compute[248510]: 2025-12-13 09:15:46.958 248514 DEBUG oslo_concurrency.lockutils [req-51b28370-993b-4fa0-8771-913c4a63aa07 req-3013be1c-4bff-4d2c-a7c1-bb27f5c1f9a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:15:46 np0005558241 nova_compute[248510]: 2025-12-13 09:15:46.959 248514 DEBUG oslo_concurrency.lockutils [req-51b28370-993b-4fa0-8771-913c4a63aa07 req-3013be1c-4bff-4d2c-a7c1-bb27f5c1f9a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:15:46 np0005558241 nova_compute[248510]: 2025-12-13 09:15:46.959 248514 DEBUG nova.compute.manager [req-51b28370-993b-4fa0-8771-913c4a63aa07 req-3013be1c-4bff-4d2c-a7c1-bb27f5c1f9a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] No waiting events found dispatching network-vif-plugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:15:46 np0005558241 nova_compute[248510]: 2025-12-13 09:15:46.960 248514 WARNING nova.compute.manager [req-51b28370-993b-4fa0-8771-913c4a63aa07 req-3013be1c-4bff-4d2c-a7c1-bb27f5c1f9a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received unexpected event network-vif-plugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:15:47 np0005558241 podman[399016]: 2025-12-13 09:15:47.021419716 +0000 UTC m=+0.079303317 container create 1bd1358e84113115b8c5f05c35e1ce282f46a2745e1db8a1c2161dfbd0e8e1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:15:47 np0005558241 podman[399016]: 2025-12-13 09:15:46.983755937 +0000 UTC m=+0.041639548 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:15:47 np0005558241 systemd[1]: Started libpod-conmon-1bd1358e84113115b8c5f05c35e1ce282f46a2745e1db8a1c2161dfbd0e8e1a2.scope.
Dec 13 04:15:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:15:47 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:15:47 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:15:47 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:15:47 np0005558241 podman[399016]: 2025-12-13 09:15:47.385816476 +0000 UTC m=+0.443700147 container init 1bd1358e84113115b8c5f05c35e1ce282f46a2745e1db8a1c2161dfbd0e8e1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_chatelet, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:15:47 np0005558241 podman[399016]: 2025-12-13 09:15:47.401200069 +0000 UTC m=+0.459083690 container start 1bd1358e84113115b8c5f05c35e1ce282f46a2745e1db8a1c2161dfbd0e8e1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 04:15:47 np0005558241 strange_chatelet[399032]: 167 167
Dec 13 04:15:47 np0005558241 systemd[1]: libpod-1bd1358e84113115b8c5f05c35e1ce282f46a2745e1db8a1c2161dfbd0e8e1a2.scope: Deactivated successfully.
Dec 13 04:15:47 np0005558241 podman[399016]: 2025-12-13 09:15:47.413758592 +0000 UTC m=+0.471642263 container attach 1bd1358e84113115b8c5f05c35e1ce282f46a2745e1db8a1c2161dfbd0e8e1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_chatelet, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:15:47 np0005558241 podman[399016]: 2025-12-13 09:15:47.414704185 +0000 UTC m=+0.472587806 container died 1bd1358e84113115b8c5f05c35e1ce282f46a2745e1db8a1c2161dfbd0e8e1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_chatelet, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:15:47 np0005558241 systemd[1]: var-lib-containers-storage-overlay-74057b586b1a3c924a34bb7263d0dd78f5b14ff80544de0d85793a0b0e68633c-merged.mount: Deactivated successfully.
Dec 13 04:15:47 np0005558241 podman[399016]: 2025-12-13 09:15:47.469843499 +0000 UTC m=+0.527727080 container remove 1bd1358e84113115b8c5f05c35e1ce282f46a2745e1db8a1c2161dfbd0e8e1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 04:15:47 np0005558241 systemd[1]: libpod-conmon-1bd1358e84113115b8c5f05c35e1ce282f46a2745e1db8a1c2161dfbd0e8e1a2.scope: Deactivated successfully.
Dec 13 04:15:47 np0005558241 podman[399056]: 2025-12-13 09:15:47.665179387 +0000 UTC m=+0.042284115 container create d4cb0f0bb8c69fb18bc38e3d2a7cf831c0873987370bed8811109f179f1c0828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_davinci, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:15:47 np0005558241 systemd[1]: Started libpod-conmon-d4cb0f0bb8c69fb18bc38e3d2a7cf831c0873987370bed8811109f179f1c0828.scope.
Dec 13 04:15:47 np0005558241 podman[399056]: 2025-12-13 09:15:47.649170128 +0000 UTC m=+0.026274876 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:15:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:15:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bca42ce734c6ed5a3216f77b2af6bbe9b60163d5312d2a9ebeea3eaca3b67af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bca42ce734c6ed5a3216f77b2af6bbe9b60163d5312d2a9ebeea3eaca3b67af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bca42ce734c6ed5a3216f77b2af6bbe9b60163d5312d2a9ebeea3eaca3b67af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bca42ce734c6ed5a3216f77b2af6bbe9b60163d5312d2a9ebeea3eaca3b67af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bca42ce734c6ed5a3216f77b2af6bbe9b60163d5312d2a9ebeea3eaca3b67af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:47 np0005558241 podman[399056]: 2025-12-13 09:15:47.782559471 +0000 UTC m=+0.159664229 container init d4cb0f0bb8c69fb18bc38e3d2a7cf831c0873987370bed8811109f179f1c0828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 04:15:47 np0005558241 podman[399056]: 2025-12-13 09:15:47.796917769 +0000 UTC m=+0.174022497 container start d4cb0f0bb8c69fb18bc38e3d2a7cf831c0873987370bed8811109f179f1c0828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 04:15:47 np0005558241 podman[399056]: 2025-12-13 09:15:47.807031581 +0000 UTC m=+0.184136339 container attach d4cb0f0bb8c69fb18bc38e3d2a7cf831c0873987370bed8811109f179f1c0828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:15:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:48 np0005558241 sweet_davinci[399073]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:15:48 np0005558241 sweet_davinci[399073]: --> All data devices are unavailable
Dec 13 04:15:48 np0005558241 systemd[1]: libpod-d4cb0f0bb8c69fb18bc38e3d2a7cf831c0873987370bed8811109f179f1c0828.scope: Deactivated successfully.
Dec 13 04:15:48 np0005558241 podman[399056]: 2025-12-13 09:15:48.381932546 +0000 UTC m=+0.759037274 container died d4cb0f0bb8c69fb18bc38e3d2a7cf831c0873987370bed8811109f179f1c0828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_davinci, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:15:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8bca42ce734c6ed5a3216f77b2af6bbe9b60163d5312d2a9ebeea3eaca3b67af-merged.mount: Deactivated successfully.
Dec 13 04:15:48 np0005558241 podman[399056]: 2025-12-13 09:15:48.447446119 +0000 UTC m=+0.824550847 container remove d4cb0f0bb8c69fb18bc38e3d2a7cf831c0873987370bed8811109f179f1c0828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:15:48 np0005558241 systemd[1]: libpod-conmon-d4cb0f0bb8c69fb18bc38e3d2a7cf831c0873987370bed8811109f179f1c0828.scope: Deactivated successfully.
Dec 13 04:15:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3441: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 44 op/s
Dec 13 04:15:48 np0005558241 podman[399170]: 2025-12-13 09:15:48.95880901 +0000 UTC m=+0.049994236 container create 5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_boyd, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 04:15:48 np0005558241 systemd[1]: Started libpod-conmon-5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3.scope.
Dec 13 04:15:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:15:49 np0005558241 podman[399170]: 2025-12-13 09:15:48.939218912 +0000 UTC m=+0.030404128 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:15:49 np0005558241 podman[399170]: 2025-12-13 09:15:49.052854954 +0000 UTC m=+0.144040200 container init 5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_boyd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 04:15:49 np0005558241 podman[399170]: 2025-12-13 09:15:49.064705019 +0000 UTC m=+0.155890255 container start 5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:15:49 np0005558241 podman[399170]: 2025-12-13 09:15:49.068914824 +0000 UTC m=+0.160100070 container attach 5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_boyd, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:15:49 np0005558241 festive_boyd[399186]: 167 167
Dec 13 04:15:49 np0005558241 systemd[1]: libpod-5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3.scope: Deactivated successfully.
Dec 13 04:15:49 np0005558241 conmon[399186]: conmon 5c29feabcaf12f6a2140 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3.scope/container/memory.events
Dec 13 04:15:49 np0005558241 podman[399170]: 2025-12-13 09:15:49.107064604 +0000 UTC m=+0.198249800 container died 5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_boyd, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 04:15:49 np0005558241 nova_compute[248510]: 2025-12-13 09:15:49.108 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:49 np0005558241 systemd[1]: var-lib-containers-storage-overlay-abbcfe0964e2c6444d1fbe8a0afe41d9c79f0667730f1d583c204f858b609767-merged.mount: Deactivated successfully.
Dec 13 04:15:49 np0005558241 podman[399170]: 2025-12-13 09:15:49.162170128 +0000 UTC m=+0.253355334 container remove 5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 04:15:49 np0005558241 systemd[1]: libpod-conmon-5c29feabcaf12f6a21406a5cfb721922e7a81dd6ea713185ac849840cf63ffc3.scope: Deactivated successfully.
Dec 13 04:15:49 np0005558241 podman[399209]: 2025-12-13 09:15:49.436539514 +0000 UTC m=+0.070593240 container create 3d770b81eb18172bbb8737982a2c173ef038bd149acb5c5d2b9ca21635a525ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:15:49 np0005558241 podman[399209]: 2025-12-13 09:15:49.403331447 +0000 UTC m=+0.037385223 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:15:49 np0005558241 systemd[1]: Started libpod-conmon-3d770b81eb18172bbb8737982a2c173ef038bd149acb5c5d2b9ca21635a525ab.scope.
Dec 13 04:15:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:15:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a42790e8ec5733dbcf5e7bb893eb4e5ff5a1eafd180fda7cd5ff7c13b052d05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a42790e8ec5733dbcf5e7bb893eb4e5ff5a1eafd180fda7cd5ff7c13b052d05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a42790e8ec5733dbcf5e7bb893eb4e5ff5a1eafd180fda7cd5ff7c13b052d05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a42790e8ec5733dbcf5e7bb893eb4e5ff5a1eafd180fda7cd5ff7c13b052d05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:49 np0005558241 podman[399209]: 2025-12-13 09:15:49.570719278 +0000 UTC m=+0.204772964 container init 3d770b81eb18172bbb8737982a2c173ef038bd149acb5c5d2b9ca21635a525ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:15:49 np0005558241 podman[399209]: 2025-12-13 09:15:49.5800336 +0000 UTC m=+0.214087296 container start 3d770b81eb18172bbb8737982a2c173ef038bd149acb5c5d2b9ca21635a525ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 04:15:49 np0005558241 podman[399209]: 2025-12-13 09:15:49.656220148 +0000 UTC m=+0.290273864 container attach 3d770b81eb18172bbb8737982a2c173ef038bd149acb5c5d2b9ca21635a525ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_curie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]: {
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:    "0": [
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:        {
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "devices": [
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "/dev/loop3"
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            ],
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "lv_name": "ceph_lv0",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "lv_size": "21470642176",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "name": "ceph_lv0",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "tags": {
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.cluster_name": "ceph",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.crush_device_class": "",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.encrypted": "0",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.objectstore": "bluestore",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.osd_id": "0",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.type": "block",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.vdo": "0",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.with_tpm": "0"
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            },
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "type": "block",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "vg_name": "ceph_vg0"
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:        }
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:    ],
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:    "1": [
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:        {
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "devices": [
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "/dev/loop4"
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            ],
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "lv_name": "ceph_lv1",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "lv_size": "21470642176",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "name": "ceph_lv1",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "tags": {
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.cluster_name": "ceph",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.crush_device_class": "",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.encrypted": "0",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.objectstore": "bluestore",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.osd_id": "1",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.type": "block",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.vdo": "0",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.with_tpm": "0"
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            },
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "type": "block",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "vg_name": "ceph_vg1"
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:        }
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:    ],
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:    "2": [
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:        {
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "devices": [
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "/dev/loop5"
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            ],
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "lv_name": "ceph_lv2",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "lv_size": "21470642176",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "name": "ceph_lv2",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "tags": {
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.cluster_name": "ceph",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.crush_device_class": "",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.encrypted": "0",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.objectstore": "bluestore",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.osd_id": "2",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.type": "block",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.vdo": "0",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:                "ceph.with_tpm": "0"
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            },
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "type": "block",
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:            "vg_name": "ceph_vg2"
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:        }
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]:    ]
Dec 13 04:15:49 np0005558241 quizzical_curie[399225]: }
Dec 13 04:15:49 np0005558241 systemd[1]: libpod-3d770b81eb18172bbb8737982a2c173ef038bd149acb5c5d2b9ca21635a525ab.scope: Deactivated successfully.
Dec 13 04:15:49 np0005558241 podman[399209]: 2025-12-13 09:15:49.892224978 +0000 UTC m=+0.526278694 container died 3d770b81eb18172bbb8737982a2c173ef038bd149acb5c5d2b9ca21635a525ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_curie, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:15:49 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1a42790e8ec5733dbcf5e7bb893eb4e5ff5a1eafd180fda7cd5ff7c13b052d05-merged.mount: Deactivated successfully.
Dec 13 04:15:49 np0005558241 podman[399209]: 2025-12-13 09:15:49.955608258 +0000 UTC m=+0.589661944 container remove 3d770b81eb18172bbb8737982a2c173ef038bd149acb5c5d2b9ca21635a525ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_curie, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:15:49 np0005558241 systemd[1]: libpod-conmon-3d770b81eb18172bbb8737982a2c173ef038bd149acb5c5d2b9ca21635a525ab.scope: Deactivated successfully.
Dec 13 04:15:50 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:50.040 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:15:50 np0005558241 podman[399307]: 2025-12-13 09:15:50.493202603 +0000 UTC m=+0.056626972 container create 5433a39c7f0dd7fe39e334180af9fc323a49274a8730e3b35c16e2fa31f91f1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_swirles, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:15:50 np0005558241 systemd[1]: Started libpod-conmon-5433a39c7f0dd7fe39e334180af9fc323a49274a8730e3b35c16e2fa31f91f1b.scope.
Dec 13 04:15:50 np0005558241 podman[399307]: 2025-12-13 09:15:50.464264982 +0000 UTC m=+0.027689411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:15:50 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:15:50 np0005558241 podman[399307]: 2025-12-13 09:15:50.606401843 +0000 UTC m=+0.169826232 container init 5433a39c7f0dd7fe39e334180af9fc323a49274a8730e3b35c16e2fa31f91f1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 04:15:50 np0005558241 podman[399307]: 2025-12-13 09:15:50.616153466 +0000 UTC m=+0.179577815 container start 5433a39c7f0dd7fe39e334180af9fc323a49274a8730e3b35c16e2fa31f91f1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:15:50 np0005558241 boring_swirles[399323]: 167 167
Dec 13 04:15:50 np0005558241 podman[399307]: 2025-12-13 09:15:50.623477218 +0000 UTC m=+0.186901577 container attach 5433a39c7f0dd7fe39e334180af9fc323a49274a8730e3b35c16e2fa31f91f1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_swirles, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:15:50 np0005558241 systemd[1]: libpod-5433a39c7f0dd7fe39e334180af9fc323a49274a8730e3b35c16e2fa31f91f1b.scope: Deactivated successfully.
Dec 13 04:15:50 np0005558241 podman[399307]: 2025-12-13 09:15:50.624713529 +0000 UTC m=+0.188137898 container died 5433a39c7f0dd7fe39e334180af9fc323a49274a8730e3b35c16e2fa31f91f1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_swirles, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:15:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c85ddf19f596662a01d7fbc58fb15bea37b58720d198e143415268d59e6db68b-merged.mount: Deactivated successfully.
Dec 13 04:15:50 np0005558241 podman[399307]: 2025-12-13 09:15:50.679870393 +0000 UTC m=+0.243294772 container remove 5433a39c7f0dd7fe39e334180af9fc323a49274a8730e3b35c16e2fa31f91f1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 04:15:50 np0005558241 systemd[1]: libpod-conmon-5433a39c7f0dd7fe39e334180af9fc323a49274a8730e3b35c16e2fa31f91f1b.scope: Deactivated successfully.
Dec 13 04:15:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3442: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:15:50 np0005558241 podman[399347]: 2025-12-13 09:15:50.90606172 +0000 UTC m=+0.050718775 container create ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_golick, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:15:50 np0005558241 systemd[1]: Started libpod-conmon-ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347.scope.
Dec 13 04:15:50 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:15:50 np0005558241 podman[399347]: 2025-12-13 09:15:50.881849406 +0000 UTC m=+0.026506491 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:15:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735fd7be7d6db211db5e8b3d262f648fad1e84b9c6f74ee743f0dbd1b5d802e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735fd7be7d6db211db5e8b3d262f648fad1e84b9c6f74ee743f0dbd1b5d802e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735fd7be7d6db211db5e8b3d262f648fad1e84b9c6f74ee743f0dbd1b5d802e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735fd7be7d6db211db5e8b3d262f648fad1e84b9c6f74ee743f0dbd1b5d802e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:51 np0005558241 podman[399347]: 2025-12-13 09:15:51.015895976 +0000 UTC m=+0.160553051 container init ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_golick, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 04:15:51 np0005558241 podman[399347]: 2025-12-13 09:15:51.025354492 +0000 UTC m=+0.170011547 container start ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 04:15:51 np0005558241 podman[399347]: 2025-12-13 09:15:51.02927971 +0000 UTC m=+0.173936765 container attach ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_golick, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 04:15:51 np0005558241 nova_compute[248510]: 2025-12-13 09:15:51.658 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:51 np0005558241 lvm[399441]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:15:51 np0005558241 lvm[399442]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:15:51 np0005558241 lvm[399441]: VG ceph_vg0 finished
Dec 13 04:15:51 np0005558241 lvm[399442]: VG ceph_vg1 finished
Dec 13 04:15:51 np0005558241 lvm[399444]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:15:51 np0005558241 lvm[399444]: VG ceph_vg2 finished
Dec 13 04:15:51 np0005558241 ovn_controller[148476]: 2025-12-13T09:15:51Z|01565|binding|INFO|Releasing lport 8c81cc53-1ad8-47d5-9def-f8710a05a423 from this chassis (sb_readonly=0)
Dec 13 04:15:51 np0005558241 ovn_controller[148476]: 2025-12-13T09:15:51Z|01566|binding|INFO|Releasing lport f954b813-27c7-46fa-b00a-23e0bb5d820d from this chassis (sb_readonly=0)
Dec 13 04:15:51 np0005558241 NetworkManager[50376]: <info>  [1765617351.8408] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/649)
Dec 13 04:15:51 np0005558241 nova_compute[248510]: 2025-12-13 09:15:51.839 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:51 np0005558241 NetworkManager[50376]: <info>  [1765617351.8424] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/650)
Dec 13 04:15:51 np0005558241 ovn_controller[148476]: 2025-12-13T09:15:51Z|01567|binding|INFO|Releasing lport 8c81cc53-1ad8-47d5-9def-f8710a05a423 from this chassis (sb_readonly=0)
Dec 13 04:15:51 np0005558241 ovn_controller[148476]: 2025-12-13T09:15:51Z|01568|binding|INFO|Releasing lport f954b813-27c7-46fa-b00a-23e0bb5d820d from this chassis (sb_readonly=0)
Dec 13 04:15:51 np0005558241 nova_compute[248510]: 2025-12-13 09:15:51.888 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:51 np0005558241 nova_compute[248510]: 2025-12-13 09:15:51.896 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:51 np0005558241 nostalgic_golick[399363]: {}
Dec 13 04:15:51 np0005558241 systemd[1]: libpod-ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347.scope: Deactivated successfully.
Dec 13 04:15:51 np0005558241 podman[399347]: 2025-12-13 09:15:51.962900403 +0000 UTC m=+1.107557458 container died ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:15:51 np0005558241 systemd[1]: libpod-ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347.scope: Consumed 1.494s CPU time.
Dec 13 04:15:52 np0005558241 systemd[1]: var-lib-containers-storage-overlay-735fd7be7d6db211db5e8b3d262f648fad1e84b9c6f74ee743f0dbd1b5d802e5-merged.mount: Deactivated successfully.
Dec 13 04:15:52 np0005558241 podman[399347]: 2025-12-13 09:15:52.028941939 +0000 UTC m=+1.173598994 container remove ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 04:15:52 np0005558241 systemd[1]: libpod-conmon-ac8cbf67c1da3047bb0eba1b478b77858a89220780e73971e34839adb19f9347.scope: Deactivated successfully.
Dec 13 04:15:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:15:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:15:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:15:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:15:52 np0005558241 nova_compute[248510]: 2025-12-13 09:15:52.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:15:52 np0005558241 nova_compute[248510]: 2025-12-13 09:15:52.774 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 13 04:15:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3443: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:15:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:15:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:15:53 np0005558241 nova_compute[248510]: 2025-12-13 09:15:53.700 248514 DEBUG nova.compute.manager [req-5d64f298-5ce7-4e83-b2de-8a24550851a5 req-195e1da9-3ed9-4034-b9d2-d0b52824fc46 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-changed-66abeeb9-b4e2-4901-9437-be8cd001222f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:15:53 np0005558241 nova_compute[248510]: 2025-12-13 09:15:53.701 248514 DEBUG nova.compute.manager [req-5d64f298-5ce7-4e83-b2de-8a24550851a5 req-195e1da9-3ed9-4034-b9d2-d0b52824fc46 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Refreshing instance network info cache due to event network-changed-66abeeb9-b4e2-4901-9437-be8cd001222f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:15:53 np0005558241 nova_compute[248510]: 2025-12-13 09:15:53.701 248514 DEBUG oslo_concurrency.lockutils [req-5d64f298-5ce7-4e83-b2de-8a24550851a5 req-195e1da9-3ed9-4034-b9d2-d0b52824fc46 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:15:53 np0005558241 nova_compute[248510]: 2025-12-13 09:15:53.702 248514 DEBUG oslo_concurrency.lockutils [req-5d64f298-5ce7-4e83-b2de-8a24550851a5 req-195e1da9-3ed9-4034-b9d2-d0b52824fc46 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:15:53 np0005558241 nova_compute[248510]: 2025-12-13 09:15:53.702 248514 DEBUG nova.network.neutron [req-5d64f298-5ce7-4e83-b2de-8a24550851a5 req-195e1da9-3ed9-4034-b9d2-d0b52824fc46 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Refreshing network info cache for port 66abeeb9-b4e2-4901-9437-be8cd001222f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:15:54 np0005558241 nova_compute[248510]: 2025-12-13 09:15:54.108 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3444: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:15:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:55.452 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:15:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:55.453 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:15:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:15:55.455 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:15:56 np0005558241 nova_compute[248510]: 2025-12-13 09:15:56.662 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:15:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3445: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 68 op/s
Dec 13 04:15:57 np0005558241 nova_compute[248510]: 2025-12-13 09:15:57.683 248514 DEBUG nova.network.neutron [req-5d64f298-5ce7-4e83-b2de-8a24550851a5 req-195e1da9-3ed9-4034-b9d2-d0b52824fc46 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updated VIF entry in instance network info cache for port 66abeeb9-b4e2-4901-9437-be8cd001222f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:15:57 np0005558241 nova_compute[248510]: 2025-12-13 09:15:57.684 248514 DEBUG nova.network.neutron [req-5d64f298-5ce7-4e83-b2de-8a24550851a5 req-195e1da9-3ed9-4034-b9d2-d0b52824fc46 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updating instance_info_cache with network_info: [{"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:15:57 np0005558241 nova_compute[248510]: 2025-12-13 09:15:57.708 248514 DEBUG oslo_concurrency.lockutils [req-5d64f298-5ce7-4e83-b2de-8a24550851a5 req-195e1da9-3ed9-4034-b9d2-d0b52824fc46 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:15:57 np0005558241 nova_compute[248510]: 2025-12-13 09:15:57.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:15:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.071471) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617358071546, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 765, "num_deletes": 251, "total_data_size": 1027448, "memory_usage": 1041872, "flush_reason": "Manual Compaction"}
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617358089424, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 651224, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68261, "largest_seqno": 69025, "table_properties": {"data_size": 647948, "index_size": 1119, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8809, "raw_average_key_size": 20, "raw_value_size": 640984, "raw_average_value_size": 1501, "num_data_blocks": 50, "num_entries": 427, "num_filter_entries": 427, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765617295, "oldest_key_time": 1765617295, "file_creation_time": 1765617358, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 17997 microseconds, and 4615 cpu microseconds.
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.089474) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 651224 bytes OK
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.089497) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.098838) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.098898) EVENT_LOG_v1 {"time_micros": 1765617358098885, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.098935) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1023570, prev total WAL file size 1023570, number of live WAL files 2.
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.099872) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373631' seq:72057594037927935, type:22 .. '6D6772737461740033303133' seq:0, type:0; will stop at (end)
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(635KB)], [161(11MB)]
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617358099950, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12591375, "oldest_snapshot_seqno": -1}
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 8685 keys, 9475647 bytes, temperature: kUnknown
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617358198690, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 9475647, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9422809, "index_size": 30005, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 228384, "raw_average_key_size": 26, "raw_value_size": 9273167, "raw_average_value_size": 1067, "num_data_blocks": 1152, "num_entries": 8685, "num_filter_entries": 8685, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765617358, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.198944) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 9475647 bytes
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.201247) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.4 rd, 95.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 11.4 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(33.9) write-amplify(14.6) OK, records in: 9173, records dropped: 488 output_compression: NoCompression
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.201264) EVENT_LOG_v1 {"time_micros": 1765617358201256, "job": 100, "event": "compaction_finished", "compaction_time_micros": 98807, "compaction_time_cpu_micros": 45732, "output_level": 6, "num_output_files": 1, "total_output_size": 9475647, "num_input_records": 9173, "num_output_records": 8685, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617358201512, "job": 100, "event": "table_file_deletion", "file_number": 163}
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617358203459, "job": 100, "event": "table_file_deletion", "file_number": 161}
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.099808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.203496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.203500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.203644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.203653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:15:58 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:15:58.203655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:15:58 np0005558241 ovn_controller[148476]: 2025-12-13T09:15:58Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:08:09:e2 10.100.0.7
Dec 13 04:15:58 np0005558241 ovn_controller[148476]: 2025-12-13T09:15:58Z|00202|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:08:09:e2 10.100.0.7
Dec 13 04:15:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3446: 321 pgs: 321 active+clean; 92 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 693 KiB/s wr, 86 op/s
Dec 13 04:15:59 np0005558241 nova_compute[248510]: 2025-12-13 09:15:59.110 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3447: 321 pgs: 321 active+clean; 119 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 84 op/s
Dec 13 04:16:01 np0005558241 nova_compute[248510]: 2025-12-13 09:16:01.665 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3448: 321 pgs: 321 active+clean; 119 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Dec 13 04:16:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:04 np0005558241 podman[399487]: 2025-12-13 09:16:04.011353295 +0000 UTC m=+0.084472126 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 13 04:16:04 np0005558241 podman[399486]: 2025-12-13 09:16:04.041296281 +0000 UTC m=+0.123020646 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:16:04 np0005558241 podman[399485]: 2025-12-13 09:16:04.055447034 +0000 UTC m=+0.135755124 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:16:04 np0005558241 nova_compute[248510]: 2025-12-13 09:16:04.113 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3449: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:16:06 np0005558241 nova_compute[248510]: 2025-12-13 09:16:06.667 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3450: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:16:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3451: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:16:09 np0005558241 nova_compute[248510]: 2025-12-13 09:16:09.115 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:16:09
Dec 13 04:16:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:16:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:16:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.rgw.root', 'vms', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.meta', 'images']
Dec 13 04:16:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:16:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:16:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:16:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:16:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:16:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:16:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:16:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3452: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 147 KiB/s rd, 1.5 MiB/s wr, 45 op/s
Dec 13 04:16:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:16:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:16:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:16:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:16:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:16:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:16:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:16:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:16:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:16:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:16:11 np0005558241 nova_compute[248510]: 2025-12-13 09:16:11.671 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3453: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 43 KiB/s wr, 8 op/s
Dec 13 04:16:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:14 np0005558241 nova_compute[248510]: 2025-12-13 09:16:14.118 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:14 np0005558241 nova_compute[248510]: 2025-12-13 09:16:14.239 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:16:14 np0005558241 nova_compute[248510]: 2025-12-13 09:16:14.240 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:16:14 np0005558241 nova_compute[248510]: 2025-12-13 09:16:14.268 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:16:14 np0005558241 nova_compute[248510]: 2025-12-13 09:16:14.376 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:16:14 np0005558241 nova_compute[248510]: 2025-12-13 09:16:14.376 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:16:14 np0005558241 nova_compute[248510]: 2025-12-13 09:16:14.389 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:16:14 np0005558241 nova_compute[248510]: 2025-12-13 09:16:14.389 248514 INFO nova.compute.claims [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:16:14 np0005558241 nova_compute[248510]: 2025-12-13 09:16:14.552 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:16:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3454: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 45 KiB/s wr, 8 op/s
Dec 13 04:16:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:16:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2220735548' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.124 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.133 248514 DEBUG nova.compute.provider_tree [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:16:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:16:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1793862639' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:16:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1793862639' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.159 248514 DEBUG nova.scheduler.client.report [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.193 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.195 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.252 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.253 248514 DEBUG nova.network.neutron [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.282 248514 INFO nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.307 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.403 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.405 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.405 248514 INFO nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Creating image(s)#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.439 248514 DEBUG nova.storage.rbd_utils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.479 248514 DEBUG nova.storage.rbd_utils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.518 248514 DEBUG nova.storage.rbd_utils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.525 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.575 248514 DEBUG nova.policy [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.610 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.611 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.612 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.612 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.643 248514 DEBUG nova.storage.rbd_utils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:16:15 np0005558241 nova_compute[248510]: 2025-12-13 09:16:15.648 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:16:16 np0005558241 nova_compute[248510]: 2025-12-13 09:16:16.003 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:16:16 np0005558241 nova_compute[248510]: 2025-12-13 09:16:16.076 248514 DEBUG nova.storage.rbd_utils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:16:16 np0005558241 nova_compute[248510]: 2025-12-13 09:16:16.204 248514 DEBUG nova.objects.instance [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid c6e4d841-78ee-4a00-87ca-a6c2d542a9b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:16:16 np0005558241 nova_compute[248510]: 2025-12-13 09:16:16.227 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:16:16 np0005558241 nova_compute[248510]: 2025-12-13 09:16:16.227 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Ensure instance console log exists: /var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:16:16 np0005558241 nova_compute[248510]: 2025-12-13 09:16:16.228 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:16:16 np0005558241 nova_compute[248510]: 2025-12-13 09:16:16.229 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:16:16 np0005558241 nova_compute[248510]: 2025-12-13 09:16:16.229 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:16:16 np0005558241 nova_compute[248510]: 2025-12-13 09:16:16.673 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:16 np0005558241 nova_compute[248510]: 2025-12-13 09:16:16.791 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:16:16 np0005558241 nova_compute[248510]: 2025-12-13 09:16:16.792 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 13 04:16:16 np0005558241 nova_compute[248510]: 2025-12-13 09:16:16.816 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 13 04:16:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3455: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Dec 13 04:16:17 np0005558241 nova_compute[248510]: 2025-12-13 09:16:17.673 248514 DEBUG nova.network.neutron [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Successfully created port: fb0ef518-cd2c-4c22-8d93-009f10641109 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:16:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:18 np0005558241 nova_compute[248510]: 2025-12-13 09:16:18.623 248514 DEBUG nova.network.neutron [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Successfully created port: 398cfb7c-bd58-4bb7-8778-e485fd13a934 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:16:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3456: 321 pgs: 321 active+clean; 131 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 161 KiB/s wr, 2 op/s
Dec 13 04:16:19 np0005558241 nova_compute[248510]: 2025-12-13 09:16:19.119 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:19 np0005558241 nova_compute[248510]: 2025-12-13 09:16:19.797 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:16:20 np0005558241 nova_compute[248510]: 2025-12-13 09:16:20.295 248514 DEBUG nova.network.neutron [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Successfully updated port: fb0ef518-cd2c-4c22-8d93-009f10641109 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:16:20 np0005558241 nova_compute[248510]: 2025-12-13 09:16:20.477 248514 DEBUG nova.compute.manager [req-6b77f9d8-6beb-4ecc-b587-3bfb1cdd89f2 req-51cd8e03-e82d-456f-a35a-58e954590ad9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-changed-fb0ef518-cd2c-4c22-8d93-009f10641109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:16:20 np0005558241 nova_compute[248510]: 2025-12-13 09:16:20.477 248514 DEBUG nova.compute.manager [req-6b77f9d8-6beb-4ecc-b587-3bfb1cdd89f2 req-51cd8e03-e82d-456f-a35a-58e954590ad9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Refreshing instance network info cache due to event network-changed-fb0ef518-cd2c-4c22-8d93-009f10641109. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:16:20 np0005558241 nova_compute[248510]: 2025-12-13 09:16:20.477 248514 DEBUG oslo_concurrency.lockutils [req-6b77f9d8-6beb-4ecc-b587-3bfb1cdd89f2 req-51cd8e03-e82d-456f-a35a-58e954590ad9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:16:20 np0005558241 nova_compute[248510]: 2025-12-13 09:16:20.477 248514 DEBUG oslo_concurrency.lockutils [req-6b77f9d8-6beb-4ecc-b587-3bfb1cdd89f2 req-51cd8e03-e82d-456f-a35a-58e954590ad9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:16:20 np0005558241 nova_compute[248510]: 2025-12-13 09:16:20.477 248514 DEBUG nova.network.neutron [req-6b77f9d8-6beb-4ecc-b587-3bfb1cdd89f2 req-51cd8e03-e82d-456f-a35a-58e954590ad9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Refreshing network info cache for port fb0ef518-cd2c-4c22-8d93-009f10641109 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:16:20 np0005558241 nova_compute[248510]: 2025-12-13 09:16:20.728 248514 DEBUG nova.network.neutron [req-6b77f9d8-6beb-4ecc-b587-3bfb1cdd89f2 req-51cd8e03-e82d-456f-a35a-58e954590ad9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:16:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3457: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:16:21 np0005558241 nova_compute[248510]: 2025-12-13 09:16:21.364 248514 DEBUG nova.network.neutron [req-6b77f9d8-6beb-4ecc-b587-3bfb1cdd89f2 req-51cd8e03-e82d-456f-a35a-58e954590ad9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:16:21 np0005558241 nova_compute[248510]: 2025-12-13 09:16:21.383 248514 DEBUG oslo_concurrency.lockutils [req-6b77f9d8-6beb-4ecc-b587-3bfb1cdd89f2 req-51cd8e03-e82d-456f-a35a-58e954590ad9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:16:21 np0005558241 nova_compute[248510]: 2025-12-13 09:16:21.419 248514 DEBUG nova.network.neutron [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Successfully updated port: 398cfb7c-bd58-4bb7-8778-e485fd13a934 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:16:21 np0005558241 nova_compute[248510]: 2025-12-13 09:16:21.445 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:16:21 np0005558241 nova_compute[248510]: 2025-12-13 09:16:21.446 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:16:21 np0005558241 nova_compute[248510]: 2025-12-13 09:16:21.446 248514 DEBUG nova.network.neutron [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011199236055863373 of space, bias 1.0, pg target 0.3359770816759012 quantized to 32 (current 32)
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697230982719909 of space, bias 1.0, pg target 0.20091692948159728 quantized to 32 (current 32)
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:16:21 np0005558241 nova_compute[248510]: 2025-12-13 09:16:21.654 248514 DEBUG nova.network.neutron [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:16:21 np0005558241 nova_compute[248510]: 2025-12-13 09:16:21.675 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:22 np0005558241 nova_compute[248510]: 2025-12-13 09:16:22.574 248514 DEBUG nova.compute.manager [req-f369350e-dc4a-479e-a73f-2939075a554d req-e6e9231d-bf75-43fd-8bff-625f38d72a9a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-changed-398cfb7c-bd58-4bb7-8778-e485fd13a934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:16:22 np0005558241 nova_compute[248510]: 2025-12-13 09:16:22.575 248514 DEBUG nova.compute.manager [req-f369350e-dc4a-479e-a73f-2939075a554d req-e6e9231d-bf75-43fd-8bff-625f38d72a9a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Refreshing instance network info cache due to event network-changed-398cfb7c-bd58-4bb7-8778-e485fd13a934. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:16:22 np0005558241 nova_compute[248510]: 2025-12-13 09:16:22.575 248514 DEBUG oslo_concurrency.lockutils [req-f369350e-dc4a-479e-a73f-2939075a554d req-e6e9231d-bf75-43fd-8bff-625f38d72a9a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:16:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3458: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:16:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.612 248514 DEBUG nova.network.neutron [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updating instance_info_cache with network_info: [{"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.638 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.639 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Instance network_info: |[{"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.640 248514 DEBUG oslo_concurrency.lockutils [req-f369350e-dc4a-479e-a73f-2939075a554d req-e6e9231d-bf75-43fd-8bff-625f38d72a9a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.641 248514 DEBUG nova.network.neutron [req-f369350e-dc4a-479e-a73f-2939075a554d req-e6e9231d-bf75-43fd-8bff-625f38d72a9a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Refreshing network info cache for port 398cfb7c-bd58-4bb7-8778-e485fd13a934 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.647 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Start _get_guest_xml network_info=[{"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.655 248514 WARNING nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.664 248514 DEBUG nova.virt.libvirt.host [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.666 248514 DEBUG nova.virt.libvirt.host [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.677 248514 DEBUG nova.virt.libvirt.host [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.678 248514 DEBUG nova.virt.libvirt.host [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.678 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.679 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.680 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.680 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.681 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.681 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.682 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.682 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.683 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.683 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.684 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.684 248514 DEBUG nova.virt.hardware [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:16:23 np0005558241 nova_compute[248510]: 2025-12-13 09:16:23.690 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:16:24 np0005558241 nova_compute[248510]: 2025-12-13 09:16:24.121 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:16:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3415804229' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:16:24 np0005558241 nova_compute[248510]: 2025-12-13 09:16:24.395 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.705s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:16:24 np0005558241 nova_compute[248510]: 2025-12-13 09:16:24.430 248514 DEBUG nova.storage.rbd_utils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:16:24 np0005558241 nova_compute[248510]: 2025-12-13 09:16:24.435 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:16:24 np0005558241 nova_compute[248510]: 2025-12-13 09:16:24.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:16:24 np0005558241 nova_compute[248510]: 2025-12-13 09:16:24.903 248514 DEBUG nova.network.neutron [req-f369350e-dc4a-479e-a73f-2939075a554d req-e6e9231d-bf75-43fd-8bff-625f38d72a9a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updated VIF entry in instance network info cache for port 398cfb7c-bd58-4bb7-8778-e485fd13a934. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:16:24 np0005558241 nova_compute[248510]: 2025-12-13 09:16:24.904 248514 DEBUG nova.network.neutron [req-f369350e-dc4a-479e-a73f-2939075a554d req-e6e9231d-bf75-43fd-8bff-625f38d72a9a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updating instance_info_cache with network_info: [{"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:16:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3459: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:16:24 np0005558241 nova_compute[248510]: 2025-12-13 09:16:24.943 248514 DEBUG oslo_concurrency.lockutils [req-f369350e-dc4a-479e-a73f-2939075a554d req-e6e9231d-bf75-43fd-8bff-625f38d72a9a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:16:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:16:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3258990419' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.078 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.080 248514 DEBUG nova.virt.libvirt.vif [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:16:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2101964116',display_name='tempest-TestGettingAddress-server-2101964116',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2101964116',id=146,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-mpwdn9yq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:16:15Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=c6e4d841-78ee-4a00-87ca-a6c2d542a9b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.080 248514 DEBUG nova.network.os_vif_util [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.082 248514 DEBUG nova.network.os_vif_util [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:ef:0d,bridge_name='br-int',has_traffic_filtering=True,id=fb0ef518-cd2c-4c22-8d93-009f10641109,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb0ef518-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.083 248514 DEBUG nova.virt.libvirt.vif [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:16:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2101964116',display_name='tempest-TestGettingAddress-server-2101964116',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2101964116',id=146,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-mpwdn9yq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:16:15Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=c6e4d841-78ee-4a00-87ca-a6c2d542a9b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.083 248514 DEBUG nova.network.os_vif_util [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.084 248514 DEBUG nova.network.os_vif_util [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:e2:71,bridge_name='br-int',has_traffic_filtering=True,id=398cfb7c-bd58-4bb7-8778-e485fd13a934,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap398cfb7c-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.085 248514 DEBUG nova.objects.instance [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid c6e4d841-78ee-4a00-87ca-a6c2d542a9b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.108 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:16:25 np0005558241 nova_compute[248510]:  <uuid>c6e4d841-78ee-4a00-87ca-a6c2d542a9b7</uuid>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:  <name>instance-00000092</name>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestGettingAddress-server-2101964116</nova:name>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:16:23</nova:creationTime>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:16:25 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:        <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:        <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:        <nova:port uuid="fb0ef518-cd2c-4c22-8d93-009f10641109">
Dec 13 04:16:25 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:        <nova:port uuid="398cfb7c-bd58-4bb7-8778-e485fd13a934">
Dec 13 04:16:25 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fee3:e271" ipVersion="6"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <entry name="serial">c6e4d841-78ee-4a00-87ca-a6c2d542a9b7</entry>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <entry name="uuid">c6e4d841-78ee-4a00-87ca-a6c2d542a9b7</entry>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk">
Dec 13 04:16:25 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:16:25 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk.config">
Dec 13 04:16:25 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:16:25 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:43:ef:0d"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <target dev="tapfb0ef518-cd"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:e3:e2:71"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <target dev="tap398cfb7c-bd"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7/console.log" append="off"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:16:25 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:16:25 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:16:25 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:16:25 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.110 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Preparing to wait for external event network-vif-plugged-fb0ef518-cd2c-4c22-8d93-009f10641109 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.111 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.111 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.111 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.112 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Preparing to wait for external event network-vif-plugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.112 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.112 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.112 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.113 248514 DEBUG nova.virt.libvirt.vif [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:16:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2101964116',display_name='tempest-TestGettingAddress-server-2101964116',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2101964116',id=146,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-mpwdn9yq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:16:15Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=c6e4d841-78ee-4a00-87ca-a6c2d542a9b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.113 248514 DEBUG nova.network.os_vif_util [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.114 248514 DEBUG nova.network.os_vif_util [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:ef:0d,bridge_name='br-int',has_traffic_filtering=True,id=fb0ef518-cd2c-4c22-8d93-009f10641109,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb0ef518-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.114 248514 DEBUG os_vif [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:ef:0d,bridge_name='br-int',has_traffic_filtering=True,id=fb0ef518-cd2c-4c22-8d93-009f10641109,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb0ef518-cd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.115 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.115 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.116 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.120 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.120 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb0ef518-cd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.121 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfb0ef518-cd, col_values=(('external_ids', {'iface-id': 'fb0ef518-cd2c-4c22-8d93-009f10641109', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:ef:0d', 'vm-uuid': 'c6e4d841-78ee-4a00-87ca-a6c2d542a9b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.122 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:25 np0005558241 NetworkManager[50376]: <info>  [1765617385.1235] manager: (tapfb0ef518-cd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/651)
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.125 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.133 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.134 248514 INFO os_vif [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:ef:0d,bridge_name='br-int',has_traffic_filtering=True,id=fb0ef518-cd2c-4c22-8d93-009f10641109,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb0ef518-cd')#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.135 248514 DEBUG nova.virt.libvirt.vif [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:16:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2101964116',display_name='tempest-TestGettingAddress-server-2101964116',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2101964116',id=146,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-mpwdn9yq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:16:15Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=c6e4d841-78ee-4a00-87ca-a6c2d542a9b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.135 248514 DEBUG nova.network.os_vif_util [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.136 248514 DEBUG nova.network.os_vif_util [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:e2:71,bridge_name='br-int',has_traffic_filtering=True,id=398cfb7c-bd58-4bb7-8778-e485fd13a934,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap398cfb7c-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.136 248514 DEBUG os_vif [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:e2:71,bridge_name='br-int',has_traffic_filtering=True,id=398cfb7c-bd58-4bb7-8778-e485fd13a934,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap398cfb7c-bd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.137 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.137 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.137 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.140 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.140 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap398cfb7c-bd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.140 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap398cfb7c-bd, col_values=(('external_ids', {'iface-id': '398cfb7c-bd58-4bb7-8778-e485fd13a934', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:e2:71', 'vm-uuid': 'c6e4d841-78ee-4a00-87ca-a6c2d542a9b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.142 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:25 np0005558241 NetworkManager[50376]: <info>  [1765617385.1426] manager: (tap398cfb7c-bd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/652)
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.144 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.151 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.152 248514 INFO os_vif [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:e2:71,bridge_name='br-int',has_traffic_filtering=True,id=398cfb7c-bd58-4bb7-8778-e485fd13a934,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap398cfb7c-bd')#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.217 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.218 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.218 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:43:ef:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.218 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:e3:e2:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.218 248514 INFO nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Using config drive#033[00m
Dec 13 04:16:25 np0005558241 nova_compute[248510]: 2025-12-13 09:16:25.243 248514 DEBUG nova.storage.rbd_utils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:16:26 np0005558241 nova_compute[248510]: 2025-12-13 09:16:26.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:16:26 np0005558241 nova_compute[248510]: 2025-12-13 09:16:26.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:16:26 np0005558241 nova_compute[248510]: 2025-12-13 09:16:26.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:16:26 np0005558241 nova_compute[248510]: 2025-12-13 09:16:26.813 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec 13 04:16:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3460: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:16:27 np0005558241 nova_compute[248510]: 2025-12-13 09:16:27.244 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:16:27 np0005558241 nova_compute[248510]: 2025-12-13 09:16:27.244 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:16:27 np0005558241 nova_compute[248510]: 2025-12-13 09:16:27.244 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 04:16:27 np0005558241 nova_compute[248510]: 2025-12-13 09:16:27.245 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 44149dbe-362b-4930-a63b-d04c9a3b3b4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:16:27 np0005558241 nova_compute[248510]: 2025-12-13 09:16:27.302 248514 INFO nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Creating config drive at /var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7/disk.config#033[00m
Dec 13 04:16:27 np0005558241 nova_compute[248510]: 2025-12-13 09:16:27.311 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9m_lobec execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:16:27 np0005558241 nova_compute[248510]: 2025-12-13 09:16:27.478 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9m_lobec" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:16:27 np0005558241 nova_compute[248510]: 2025-12-13 09:16:27.511 248514 DEBUG nova.storage.rbd_utils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:16:27 np0005558241 nova_compute[248510]: 2025-12-13 09:16:27.519 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7/disk.config c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:16:27 np0005558241 nova_compute[248510]: 2025-12-13 09:16:27.726 248514 DEBUG oslo_concurrency.processutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7/disk.config c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:16:27 np0005558241 nova_compute[248510]: 2025-12-13 09:16:27.727 248514 INFO nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Deleting local config drive /var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7/disk.config because it was imported into RBD.#033[00m
Dec 13 04:16:27 np0005558241 kernel: tapfb0ef518-cd: entered promiscuous mode
Dec 13 04:16:27 np0005558241 NetworkManager[50376]: <info>  [1765617387.7994] manager: (tapfb0ef518-cd): new Tun device (/org/freedesktop/NetworkManager/Devices/653)
Dec 13 04:16:27 np0005558241 ovn_controller[148476]: 2025-12-13T09:16:27Z|01569|binding|INFO|Claiming lport fb0ef518-cd2c-4c22-8d93-009f10641109 for this chassis.
Dec 13 04:16:27 np0005558241 nova_compute[248510]: 2025-12-13 09:16:27.804 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:27 np0005558241 ovn_controller[148476]: 2025-12-13T09:16:27Z|01570|binding|INFO|fb0ef518-cd2c-4c22-8d93-009f10641109: Claiming fa:16:3e:43:ef:0d 10.100.0.6
Dec 13 04:16:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.815 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:ef:0d 10.100.0.6'], port_security=['fa:16:3e:43:ef:0d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c6e4d841-78ee-4a00-87ca-a6c2d542a9b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3fc9ec4-4452-4225-b100-75f3859e091a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ec73538a-d215-468f-b84e-56f8b040d8a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74bfbdc0-d183-4405-bf5b-7ce9dc4c0882, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=fb0ef518-cd2c-4c22-8d93-009f10641109) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:16:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.818 158419 INFO neutron.agent.ovn.metadata.agent [-] Port fb0ef518-cd2c-4c22-8d93-009f10641109 in datapath d3fc9ec4-4452-4225-b100-75f3859e091a bound to our chassis#033[00m
Dec 13 04:16:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.820 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d3fc9ec4-4452-4225-b100-75f3859e091a#033[00m
Dec 13 04:16:27 np0005558241 NetworkManager[50376]: <info>  [1765617387.8217] manager: (tap398cfb7c-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/654)
Dec 13 04:16:27 np0005558241 kernel: tap398cfb7c-bd: entered promiscuous mode
Dec 13 04:16:27 np0005558241 ovn_controller[148476]: 2025-12-13T09:16:27Z|01571|binding|INFO|Setting lport fb0ef518-cd2c-4c22-8d93-009f10641109 ovn-installed in OVS
Dec 13 04:16:27 np0005558241 ovn_controller[148476]: 2025-12-13T09:16:27Z|01572|binding|INFO|Setting lport fb0ef518-cd2c-4c22-8d93-009f10641109 up in Southbound
Dec 13 04:16:27 np0005558241 ovn_controller[148476]: 2025-12-13T09:16:27Z|01573|if_status|INFO|Not updating pb chassis for 398cfb7c-bd58-4bb7-8778-e485fd13a934 now as sb is readonly
Dec 13 04:16:27 np0005558241 ovn_controller[148476]: 2025-12-13T09:16:27Z|01574|binding|INFO|Claiming lport 398cfb7c-bd58-4bb7-8778-e485fd13a934 for this chassis.
Dec 13 04:16:27 np0005558241 ovn_controller[148476]: 2025-12-13T09:16:27Z|01575|binding|INFO|398cfb7c-bd58-4bb7-8778-e485fd13a934: Claiming fa:16:3e:e3:e2:71 2001:db8::f816:3eff:fee3:e271
Dec 13 04:16:27 np0005558241 nova_compute[248510]: 2025-12-13 09:16:27.834 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:27 np0005558241 ovn_controller[148476]: 2025-12-13T09:16:27Z|01576|binding|INFO|Setting lport 398cfb7c-bd58-4bb7-8778-e485fd13a934 ovn-installed in OVS
Dec 13 04:16:27 np0005558241 nova_compute[248510]: 2025-12-13 09:16:27.842 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:27 np0005558241 systemd-udevd[399882]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:16:27 np0005558241 systemd-udevd[399881]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:16:27 np0005558241 ovn_controller[148476]: 2025-12-13T09:16:27Z|01577|binding|INFO|Setting lport 398cfb7c-bd58-4bb7-8778-e485fd13a934 up in Southbound
Dec 13 04:16:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.851 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:e2:71 2001:db8::f816:3eff:fee3:e271'], port_security=['fa:16:3e:e3:e2:71 2001:db8::f816:3eff:fee3:e271'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee3:e271/64', 'neutron:device_id': 'c6e4d841-78ee-4a00-87ca-a6c2d542a9b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ec73538a-d215-468f-b84e-56f8b040d8a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=637bfd4d-c545-47be-afdb-625fd0ecaffb, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=398cfb7c-bd58-4bb7-8778-e485fd13a934) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:16:27 np0005558241 NetworkManager[50376]: <info>  [1765617387.8675] device (tapfb0ef518-cd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:16:27 np0005558241 NetworkManager[50376]: <info>  [1765617387.8683] device (tapfb0ef518-cd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:16:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.862 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[aa3a94be-526e-4960-a423-d976daba31c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:27 np0005558241 NetworkManager[50376]: <info>  [1765617387.8688] device (tap398cfb7c-bd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:16:27 np0005558241 NetworkManager[50376]: <info>  [1765617387.8693] device (tap398cfb7c-bd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:16:27 np0005558241 systemd-machined[210538]: New machine qemu-177-instance-00000092.
Dec 13 04:16:27 np0005558241 systemd[1]: Started Virtual Machine qemu-177-instance-00000092.
Dec 13 04:16:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.901 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f5ce95-1783-4542-8985-8047459b9404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.905 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[625c4715-bd97-48ea-bc38-4a459d7dbf2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.943 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[99554286-60fe-4009-98cd-f61cfae63ebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.963 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[16fb628c-02a0-4e7d-b604-65a8871e9650]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3fc9ec4-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:82:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 446], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992138, 'reachable_time': 20078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399898, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.986 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1f3ac5-cee0-4291-b155-0db17be60a52]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd3fc9ec4-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 992154, 'tstamp': 992154}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399899, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd3fc9ec4-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 992158, 'tstamp': 992158}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399899, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.988 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3fc9ec4-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:27 np0005558241 nova_compute[248510]: 2025-12-13 09:16:27.990 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.991 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3fc9ec4-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.991 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:16:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.992 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd3fc9ec4-40, col_values=(('external_ids', {'iface-id': 'f954b813-27c7-46fa-b00a-23e0bb5d820d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.992 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:16:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.994 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 398cfb7c-bd58-4bb7-8778-e485fd13a934 in datapath 9f757ab8-2a8f-4771-8492-2bcf521016bb unbound from our chassis#033[00m
Dec 13 04:16:27 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:27.995 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f757ab8-2a8f-4771-8492-2bcf521016bb#033[00m
Dec 13 04:16:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.011 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[054e14b0-5bc7-43d8-b2a9-d38694180f76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.043 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[f81e6e7e-9df4-4a85-9b44-0445f58aa14a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.048 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b2abfadc-8411-4b04-bca7-3560ce5431e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.077 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[572d8da6-4fdd-457d-b736-ffc9827e2c6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.096 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5009a761-0004-476b-b294-feb2ff16f32b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f757ab8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c4:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992247, 'reachable_time': 37493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399906, 'error': None, 'target': 'ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.116 248514 DEBUG nova.compute.manager [req-1dc39f39-a12e-4186-a822-c65ff9f99b75 req-4a8cd31b-3814-49b0-8d5b-ca2948a9f584 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-plugged-fb0ef518-cd2c-4c22-8d93-009f10641109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.116 248514 DEBUG oslo_concurrency.lockutils [req-1dc39f39-a12e-4186-a822-c65ff9f99b75 req-4a8cd31b-3814-49b0-8d5b-ca2948a9f584 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.116 248514 DEBUG oslo_concurrency.lockutils [req-1dc39f39-a12e-4186-a822-c65ff9f99b75 req-4a8cd31b-3814-49b0-8d5b-ca2948a9f584 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.117 248514 DEBUG oslo_concurrency.lockutils [req-1dc39f39-a12e-4186-a822-c65ff9f99b75 req-4a8cd31b-3814-49b0-8d5b-ca2948a9f584 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.117 248514 DEBUG nova.compute.manager [req-1dc39f39-a12e-4186-a822-c65ff9f99b75 req-4a8cd31b-3814-49b0-8d5b-ca2948a9f584 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Processing event network-vif-plugged-fb0ef518-cd2c-4c22-8d93-009f10641109 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:16:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.123 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[87d3402d-c9a2-47b2-abdd-e368ff733dbf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9f757ab8-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 992261, 'tstamp': 992261}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399907, 'error': None, 'target': 'ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.125 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f757ab8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.126 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.127 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.128 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f757ab8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.128 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:16:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.128 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f757ab8-20, col_values=(('external_ids', {'iface-id': '8c81cc53-1ad8-47d5-9def-f8710a05a423'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.129 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.164 248514 DEBUG nova.compute.manager [req-aeedce0b-39c0-4492-969f-f45f94a4af97 req-106b2533-249d-4adb-a785-3294a4f9e9d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-plugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.165 248514 DEBUG oslo_concurrency.lockutils [req-aeedce0b-39c0-4492-969f-f45f94a4af97 req-106b2533-249d-4adb-a785-3294a4f9e9d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.165 248514 DEBUG oslo_concurrency.lockutils [req-aeedce0b-39c0-4492-969f-f45f94a4af97 req-106b2533-249d-4adb-a785-3294a4f9e9d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.166 248514 DEBUG oslo_concurrency.lockutils [req-aeedce0b-39c0-4492-969f-f45f94a4af97 req-106b2533-249d-4adb-a785-3294a4f9e9d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.166 248514 DEBUG nova.compute.manager [req-aeedce0b-39c0-4492-969f-f45f94a4af97 req-106b2533-249d-4adb-a785-3294a4f9e9d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Processing event network-vif-plugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.321 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.321 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:16:28 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:28.323 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.493 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617388.4931283, c6e4d841-78ee-4a00-87ca-a6c2d542a9b7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.494 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] VM Started (Lifecycle Event)#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.497 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.501 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.504 248514 INFO nova.virt.libvirt.driver [-] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Instance spawned successfully.#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.504 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.519 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.526 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.531 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.531 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.532 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.532 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.532 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.533 248514 DEBUG nova.virt.libvirt.driver [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.561 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.562 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617388.4943027, c6e4d841-78ee-4a00-87ca-a6c2d542a9b7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.562 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.591 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.596 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617388.5000918, c6e4d841-78ee-4a00-87ca-a6c2d542a9b7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.596 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.609 248514 INFO nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Took 13.20 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.609 248514 DEBUG nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.621 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.624 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.649 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.689 248514 INFO nova.compute.manager [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Took 14.35 seconds to build instance.#033[00m
Dec 13 04:16:28 np0005558241 nova_compute[248510]: 2025-12-13 09:16:28.705 248514 DEBUG oslo_concurrency.lockutils [None req-3ebc901a-7961-4caa-b87c-5664a57fa21a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:16:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3461: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 13 04:16:29 np0005558241 nova_compute[248510]: 2025-12-13 09:16:29.123 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:30 np0005558241 nova_compute[248510]: 2025-12-13 09:16:30.193 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:30 np0005558241 nova_compute[248510]: 2025-12-13 09:16:30.224 248514 DEBUG nova.compute.manager [req-22334cfa-7c05-429a-ac87-d4d7c90799dc req-442cf451-7eb3-49a3-9b30-78ac99494e7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-plugged-fb0ef518-cd2c-4c22-8d93-009f10641109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:16:30 np0005558241 nova_compute[248510]: 2025-12-13 09:16:30.224 248514 DEBUG oslo_concurrency.lockutils [req-22334cfa-7c05-429a-ac87-d4d7c90799dc req-442cf451-7eb3-49a3-9b30-78ac99494e7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:16:30 np0005558241 nova_compute[248510]: 2025-12-13 09:16:30.225 248514 DEBUG oslo_concurrency.lockutils [req-22334cfa-7c05-429a-ac87-d4d7c90799dc req-442cf451-7eb3-49a3-9b30-78ac99494e7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:16:30 np0005558241 nova_compute[248510]: 2025-12-13 09:16:30.225 248514 DEBUG oslo_concurrency.lockutils [req-22334cfa-7c05-429a-ac87-d4d7c90799dc req-442cf451-7eb3-49a3-9b30-78ac99494e7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:16:30 np0005558241 nova_compute[248510]: 2025-12-13 09:16:30.225 248514 DEBUG nova.compute.manager [req-22334cfa-7c05-429a-ac87-d4d7c90799dc req-442cf451-7eb3-49a3-9b30-78ac99494e7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] No waiting events found dispatching network-vif-plugged-fb0ef518-cd2c-4c22-8d93-009f10641109 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:16:30 np0005558241 nova_compute[248510]: 2025-12-13 09:16:30.225 248514 WARNING nova.compute.manager [req-22334cfa-7c05-429a-ac87-d4d7c90799dc req-442cf451-7eb3-49a3-9b30-78ac99494e7d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received unexpected event network-vif-plugged-fb0ef518-cd2c-4c22-8d93-009f10641109 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:16:30 np0005558241 nova_compute[248510]: 2025-12-13 09:16:30.259 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updating instance_info_cache with network_info: [{"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:16:30 np0005558241 nova_compute[248510]: 2025-12-13 09:16:30.306 248514 DEBUG nova.compute.manager [req-c4c51b53-93e8-4014-bad0-77025b7bc80c req-de23f7c0-d8a5-4985-91b3-35a08e706efa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-plugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:16:30 np0005558241 nova_compute[248510]: 2025-12-13 09:16:30.306 248514 DEBUG oslo_concurrency.lockutils [req-c4c51b53-93e8-4014-bad0-77025b7bc80c req-de23f7c0-d8a5-4985-91b3-35a08e706efa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:16:30 np0005558241 nova_compute[248510]: 2025-12-13 09:16:30.307 248514 DEBUG oslo_concurrency.lockutils [req-c4c51b53-93e8-4014-bad0-77025b7bc80c req-de23f7c0-d8a5-4985-91b3-35a08e706efa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:16:30 np0005558241 nova_compute[248510]: 2025-12-13 09:16:30.307 248514 DEBUG oslo_concurrency.lockutils [req-c4c51b53-93e8-4014-bad0-77025b7bc80c req-de23f7c0-d8a5-4985-91b3-35a08e706efa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:16:30 np0005558241 nova_compute[248510]: 2025-12-13 09:16:30.307 248514 DEBUG nova.compute.manager [req-c4c51b53-93e8-4014-bad0-77025b7bc80c req-de23f7c0-d8a5-4985-91b3-35a08e706efa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] No waiting events found dispatching network-vif-plugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:16:30 np0005558241 nova_compute[248510]: 2025-12-13 09:16:30.307 248514 WARNING nova.compute.manager [req-c4c51b53-93e8-4014-bad0-77025b7bc80c req-de23f7c0-d8a5-4985-91b3-35a08e706efa 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received unexpected event network-vif-plugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:16:30 np0005558241 nova_compute[248510]: 2025-12-13 09:16:30.312 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:16:30 np0005558241 nova_compute[248510]: 2025-12-13 09:16:30.312 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 04:16:30 np0005558241 nova_compute[248510]: 2025-12-13 09:16:30.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:16:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3462: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 72 op/s
Dec 13 04:16:32 np0005558241 nova_compute[248510]: 2025-12-13 09:16:32.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:16:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3463: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 14 KiB/s wr, 47 op/s
Dec 13 04:16:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:33 np0005558241 nova_compute[248510]: 2025-12-13 09:16:33.131 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:16:33 np0005558241 nova_compute[248510]: 2025-12-13 09:16:33.131 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:16:33 np0005558241 nova_compute[248510]: 2025-12-13 09:16:33.131 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:16:33 np0005558241 nova_compute[248510]: 2025-12-13 09:16:33.132 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:16:33 np0005558241 nova_compute[248510]: 2025-12-13 09:16:33.132 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:16:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:33.326 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:16:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/408043946' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:16:33 np0005558241 nova_compute[248510]: 2025-12-13 09:16:33.744 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:16:33 np0005558241 nova_compute[248510]: 2025-12-13 09:16:33.879 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:16:33 np0005558241 nova_compute[248510]: 2025-12-13 09:16:33.881 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:16:33 np0005558241 nova_compute[248510]: 2025-12-13 09:16:33.888 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:16:33 np0005558241 nova_compute[248510]: 2025-12-13 09:16:33.889 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.112 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.114 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3095MB free_disk=59.92095153965056GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.115 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.115 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.125 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.206 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 44149dbe-362b-4930-a63b-d04c9a3b3b4c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.207 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance c6e4d841-78ee-4a00-87ca-a6c2d542a9b7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.207 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.207 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.283 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.825 248514 DEBUG nova.compute.manager [req-dc891abb-3c5d-48af-bc52-5e4d65de8f13 req-61602894-94ec-45b9-99f1-b66a9275802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-changed-fb0ef518-cd2c-4c22-8d93-009f10641109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.825 248514 DEBUG nova.compute.manager [req-dc891abb-3c5d-48af-bc52-5e4d65de8f13 req-61602894-94ec-45b9-99f1-b66a9275802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Refreshing instance network info cache due to event network-changed-fb0ef518-cd2c-4c22-8d93-009f10641109. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.826 248514 DEBUG oslo_concurrency.lockutils [req-dc891abb-3c5d-48af-bc52-5e4d65de8f13 req-61602894-94ec-45b9-99f1-b66a9275802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.826 248514 DEBUG oslo_concurrency.lockutils [req-dc891abb-3c5d-48af-bc52-5e4d65de8f13 req-61602894-94ec-45b9-99f1-b66a9275802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.827 248514 DEBUG nova.network.neutron [req-dc891abb-3c5d-48af-bc52-5e4d65de8f13 req-61602894-94ec-45b9-99f1-b66a9275802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Refreshing network info cache for port fb0ef518-cd2c-4c22-8d93-009f10641109 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:16:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:16:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2378584537' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.887 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.893 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:16:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3464: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.914 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.947 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:16:34 np0005558241 nova_compute[248510]: 2025-12-13 09:16:34.948 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:16:34 np0005558241 podman[399998]: 2025-12-13 09:16:34.983772975 +0000 UTC m=+0.066136059 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 13 04:16:34 np0005558241 podman[399997]: 2025-12-13 09:16:34.985814306 +0000 UTC m=+0.070773735 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:16:35 np0005558241 podman[399996]: 2025-12-13 09:16:35.019747161 +0000 UTC m=+0.104702380 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:16:35 np0005558241 nova_compute[248510]: 2025-12-13 09:16:35.195 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:35 np0005558241 nova_compute[248510]: 2025-12-13 09:16:35.946 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:16:35 np0005558241 nova_compute[248510]: 2025-12-13 09:16:35.946 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:16:35 np0005558241 nova_compute[248510]: 2025-12-13 09:16:35.946 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:16:36 np0005558241 nova_compute[248510]: 2025-12-13 09:16:36.088 248514 DEBUG nova.network.neutron [req-dc891abb-3c5d-48af-bc52-5e4d65de8f13 req-61602894-94ec-45b9-99f1-b66a9275802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updated VIF entry in instance network info cache for port fb0ef518-cd2c-4c22-8d93-009f10641109. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:16:36 np0005558241 nova_compute[248510]: 2025-12-13 09:16:36.088 248514 DEBUG nova.network.neutron [req-dc891abb-3c5d-48af-bc52-5e4d65de8f13 req-61602894-94ec-45b9-99f1-b66a9275802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updating instance_info_cache with network_info: [{"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:16:36 np0005558241 nova_compute[248510]: 2025-12-13 09:16:36.129 248514 DEBUG oslo_concurrency.lockutils [req-dc891abb-3c5d-48af-bc52-5e4d65de8f13 req-61602894-94ec-45b9-99f1-b66a9275802d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:16:36 np0005558241 nova_compute[248510]: 2025-12-13 09:16:36.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:16:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3465: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 04:16:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3466: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 04:16:39 np0005558241 nova_compute[248510]: 2025-12-13 09:16:39.128 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:16:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:16:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:16:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:16:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:16:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:16:40 np0005558241 nova_compute[248510]: 2025-12-13 09:16:40.198 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3467: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 70 op/s
Dec 13 04:16:41 np0005558241 ovn_controller[148476]: 2025-12-13T09:16:41Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:43:ef:0d 10.100.0.6
Dec 13 04:16:41 np0005558241 ovn_controller[148476]: 2025-12-13T09:16:41Z|00204|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:ef:0d 10.100.0.6
Dec 13 04:16:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3468: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 733 KiB/s rd, 1.3 KiB/s wr, 27 op/s
Dec 13 04:16:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:44 np0005558241 nova_compute[248510]: 2025-12-13 09:16:44.130 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3469: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 91 op/s
Dec 13 04:16:45 np0005558241 nova_compute[248510]: 2025-12-13 09:16:45.200 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3470: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 04:16:47 np0005558241 nova_compute[248510]: 2025-12-13 09:16:47.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:16:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3471: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 04:16:49 np0005558241 nova_compute[248510]: 2025-12-13 09:16:49.134 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:50 np0005558241 nova_compute[248510]: 2025-12-13 09:16:50.204 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3472: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 04:16:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:16:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3473: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 04:16:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:16:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:16:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:16:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:16:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:16:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:16:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:16:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:16:54 np0005558241 nova_compute[248510]: 2025-12-13 09:16:54.138 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:54 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:16:54 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:16:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:16:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:16:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:16:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:16:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:16:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:16:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:16:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3474: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 04:16:55 np0005558241 nova_compute[248510]: 2025-12-13 09:16:55.206 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:55 np0005558241 podman[400271]: 2025-12-13 09:16:55.235672881 +0000 UTC m=+0.050226633 container create 83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_agnesi, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 04:16:55 np0005558241 systemd[1]: Started libpod-conmon-83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0.scope.
Dec 13 04:16:55 np0005558241 podman[400271]: 2025-12-13 09:16:55.215312973 +0000 UTC m=+0.029866755 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:16:55 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:16:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:55.453 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:16:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:55.453 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:16:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:55.455 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:16:55 np0005558241 podman[400271]: 2025-12-13 09:16:55.575664072 +0000 UTC m=+0.390217844 container init 83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_agnesi, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 04:16:55 np0005558241 podman[400271]: 2025-12-13 09:16:55.585527698 +0000 UTC m=+0.400081440 container start 83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_agnesi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 04:16:55 np0005558241 elated_agnesi[400287]: 167 167
Dec 13 04:16:55 np0005558241 systemd[1]: libpod-83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0.scope: Deactivated successfully.
Dec 13 04:16:55 np0005558241 conmon[400287]: conmon 83fed1b9815e4cef10a0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0.scope/container/memory.events
Dec 13 04:16:55 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:16:55 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:16:55 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:16:55 np0005558241 podman[400271]: 2025-12-13 09:16:55.661910781 +0000 UTC m=+0.476464583 container attach 83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_agnesi, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:16:55 np0005558241 podman[400271]: 2025-12-13 09:16:55.66386739 +0000 UTC m=+0.478421162 container died 83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_agnesi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:16:55 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5788fe39492688df29275c2f179edc036cdd1f33fd32b9b2f8b2e304a8a9ba01-merged.mount: Deactivated successfully.
Dec 13 04:16:55 np0005558241 podman[400271]: 2025-12-13 09:16:55.739913565 +0000 UTC m=+0.554467327 container remove 83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_agnesi, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 04:16:55 np0005558241 systemd[1]: libpod-conmon-83fed1b9815e4cef10a0c0db6cd72146b48f2e6420971cf67791b928de0e48e0.scope: Deactivated successfully.
Dec 13 04:16:56 np0005558241 podman[400313]: 2025-12-13 09:16:55.941908608 +0000 UTC m=+0.030939232 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:16:56 np0005558241 podman[400313]: 2025-12-13 09:16:56.309017576 +0000 UTC m=+0.398048210 container create 94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_goldstine, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:16:56 np0005558241 systemd[1]: Started libpod-conmon-94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353.scope.
Dec 13 04:16:56 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:16:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767b43a3966adcc4a4455ae237869c103847be2250faafa70f39f2ac72d6dde5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:16:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767b43a3966adcc4a4455ae237869c103847be2250faafa70f39f2ac72d6dde5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:16:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767b43a3966adcc4a4455ae237869c103847be2250faafa70f39f2ac72d6dde5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:16:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767b43a3966adcc4a4455ae237869c103847be2250faafa70f39f2ac72d6dde5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:16:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767b43a3966adcc4a4455ae237869c103847be2250faafa70f39f2ac72d6dde5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:16:56 np0005558241 podman[400313]: 2025-12-13 09:16:56.785723674 +0000 UTC m=+0.874754268 container init 94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_goldstine, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 04:16:56 np0005558241 podman[400313]: 2025-12-13 09:16:56.792990735 +0000 UTC m=+0.882021329 container start 94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_goldstine, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:16:56 np0005558241 podman[400313]: 2025-12-13 09:16:56.862198419 +0000 UTC m=+0.951229013 container attach 94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:16:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3475: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 12 KiB/s wr, 2 op/s
Dec 13 04:16:57 np0005558241 thirsty_goldstine[400330]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:16:57 np0005558241 thirsty_goldstine[400330]: --> All data devices are unavailable
Dec 13 04:16:57 np0005558241 systemd[1]: libpod-94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353.scope: Deactivated successfully.
Dec 13 04:16:57 np0005558241 conmon[400330]: conmon 94b5e8882c2f4e22b494 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353.scope/container/memory.events
Dec 13 04:16:57 np0005558241 podman[400313]: 2025-12-13 09:16:57.324819137 +0000 UTC m=+1.413849731 container died 94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:16:57 np0005558241 systemd[1]: var-lib-containers-storage-overlay-767b43a3966adcc4a4455ae237869c103847be2250faafa70f39f2ac72d6dde5-merged.mount: Deactivated successfully.
Dec 13 04:16:57 np0005558241 podman[400313]: 2025-12-13 09:16:57.396307148 +0000 UTC m=+1.485337742 container remove 94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_goldstine, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:16:57 np0005558241 systemd[1]: libpod-conmon-94b5e8882c2f4e22b494ba9fdc7aee6e68de4c59dda14954fba9ace60164e353.scope: Deactivated successfully.
Dec 13 04:16:57 np0005558241 podman[400423]: 2025-12-13 09:16:57.864188966 +0000 UTC m=+0.032769727 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:16:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:58 np0005558241 nova_compute[248510]: 2025-12-13 09:16:58.102 248514 DEBUG nova.compute.manager [req-9f8db408-9243-4a52-85b5-b048763de2fc req-906647eb-fe12-4390-b02d-16683eb6217f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-changed-fb0ef518-cd2c-4c22-8d93-009f10641109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:16:58 np0005558241 nova_compute[248510]: 2025-12-13 09:16:58.104 248514 DEBUG nova.compute.manager [req-9f8db408-9243-4a52-85b5-b048763de2fc req-906647eb-fe12-4390-b02d-16683eb6217f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Refreshing instance network info cache due to event network-changed-fb0ef518-cd2c-4c22-8d93-009f10641109. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:16:58 np0005558241 nova_compute[248510]: 2025-12-13 09:16:58.104 248514 DEBUG oslo_concurrency.lockutils [req-9f8db408-9243-4a52-85b5-b048763de2fc req-906647eb-fe12-4390-b02d-16683eb6217f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:16:58 np0005558241 nova_compute[248510]: 2025-12-13 09:16:58.105 248514 DEBUG oslo_concurrency.lockutils [req-9f8db408-9243-4a52-85b5-b048763de2fc req-906647eb-fe12-4390-b02d-16683eb6217f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:16:58 np0005558241 nova_compute[248510]: 2025-12-13 09:16:58.105 248514 DEBUG nova.network.neutron [req-9f8db408-9243-4a52-85b5-b048763de2fc req-906647eb-fe12-4390-b02d-16683eb6217f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Refreshing network info cache for port fb0ef518-cd2c-4c22-8d93-009f10641109 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:16:58 np0005558241 podman[400423]: 2025-12-13 09:16:58.164594432 +0000 UTC m=+0.333175103 container create 4ee8d3fc8ee1f3ae710c62f43c771db919c0ebe834483d8429d9627a3c0245b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 04:16:58 np0005558241 nova_compute[248510]: 2025-12-13 09:16:58.185 248514 DEBUG oslo_concurrency.lockutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:16:58 np0005558241 nova_compute[248510]: 2025-12-13 09:16:58.186 248514 DEBUG oslo_concurrency.lockutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:16:58 np0005558241 nova_compute[248510]: 2025-12-13 09:16:58.186 248514 DEBUG oslo_concurrency.lockutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:16:58 np0005558241 nova_compute[248510]: 2025-12-13 09:16:58.187 248514 DEBUG oslo_concurrency.lockutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:16:58 np0005558241 nova_compute[248510]: 2025-12-13 09:16:58.187 248514 DEBUG oslo_concurrency.lockutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:16:58 np0005558241 nova_compute[248510]: 2025-12-13 09:16:58.190 248514 INFO nova.compute.manager [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Terminating instance#033[00m
Dec 13 04:16:58 np0005558241 nova_compute[248510]: 2025-12-13 09:16:58.192 248514 DEBUG nova.compute.manager [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:16:58 np0005558241 systemd[1]: Started libpod-conmon-4ee8d3fc8ee1f3ae710c62f43c771db919c0ebe834483d8429d9627a3c0245b3.scope.
Dec 13 04:16:58 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:16:58 np0005558241 podman[400423]: 2025-12-13 09:16:58.914545857 +0000 UTC m=+1.083126618 container init 4ee8d3fc8ee1f3ae710c62f43c771db919c0ebe834483d8429d9627a3c0245b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:16:58 np0005558241 podman[400423]: 2025-12-13 09:16:58.925780417 +0000 UTC m=+1.094361118 container start 4ee8d3fc8ee1f3ae710c62f43c771db919c0ebe834483d8429d9627a3c0245b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 04:16:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3476: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 12 KiB/s wr, 2 op/s
Dec 13 04:16:58 np0005558241 nice_chatelet[400440]: 167 167
Dec 13 04:16:58 np0005558241 systemd[1]: libpod-4ee8d3fc8ee1f3ae710c62f43c771db919c0ebe834483d8429d9627a3c0245b3.scope: Deactivated successfully.
Dec 13 04:16:58 np0005558241 kernel: tapfb0ef518-cd (unregistering): left promiscuous mode
Dec 13 04:16:58 np0005558241 NetworkManager[50376]: <info>  [1765617418.9686] device (tapfb0ef518-cd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:16:58 np0005558241 ovn_controller[148476]: 2025-12-13T09:16:58Z|01578|binding|INFO|Releasing lport fb0ef518-cd2c-4c22-8d93-009f10641109 from this chassis (sb_readonly=0)
Dec 13 04:16:58 np0005558241 ovn_controller[148476]: 2025-12-13T09:16:58Z|01579|binding|INFO|Setting lport fb0ef518-cd2c-4c22-8d93-009f10641109 down in Southbound
Dec 13 04:16:58 np0005558241 ovn_controller[148476]: 2025-12-13T09:16:58Z|01580|binding|INFO|Removing iface tapfb0ef518-cd ovn-installed in OVS
Dec 13 04:16:58 np0005558241 nova_compute[248510]: 2025-12-13 09:16:58.984 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:58 np0005558241 nova_compute[248510]: 2025-12-13 09:16:58.986 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:58.992 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:ef:0d 10.100.0.6'], port_security=['fa:16:3e:43:ef:0d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c6e4d841-78ee-4a00-87ca-a6c2d542a9b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3fc9ec4-4452-4225-b100-75f3859e091a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ec73538a-d215-468f-b84e-56f8b040d8a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74bfbdc0-d183-4405-bf5b-7ce9dc4c0882, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=fb0ef518-cd2c-4c22-8d93-009f10641109) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:16:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:58.994 158419 INFO neutron.agent.ovn.metadata.agent [-] Port fb0ef518-cd2c-4c22-8d93-009f10641109 in datapath d3fc9ec4-4452-4225-b100-75f3859e091a unbound from our chassis#033[00m
Dec 13 04:16:58 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:58.995 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d3fc9ec4-4452-4225-b100-75f3859e091a#033[00m
Dec 13 04:16:58 np0005558241 kernel: tap398cfb7c-bd (unregistering): left promiscuous mode
Dec 13 04:16:59 np0005558241 NetworkManager[50376]: <info>  [1765617419.0038] device (tap398cfb7c-bd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.006 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:59 np0005558241 ovn_controller[148476]: 2025-12-13T09:16:59Z|01581|binding|INFO|Releasing lport 398cfb7c-bd58-4bb7-8778-e485fd13a934 from this chassis (sb_readonly=0)
Dec 13 04:16:59 np0005558241 ovn_controller[148476]: 2025-12-13T09:16:59Z|01582|binding|INFO|Setting lport 398cfb7c-bd58-4bb7-8778-e485fd13a934 down in Southbound
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.022 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:59 np0005558241 ovn_controller[148476]: 2025-12-13T09:16:59Z|01583|binding|INFO|Removing iface tap398cfb7c-bd ovn-installed in OVS
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.021 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3177051f-4a5e-41a1-a652-285132ee4853]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.026 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.032 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:e2:71 2001:db8::f816:3eff:fee3:e271'], port_security=['fa:16:3e:e3:e2:71 2001:db8::f816:3eff:fee3:e271'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee3:e271/64', 'neutron:device_id': 'c6e4d841-78ee-4a00-87ca-a6c2d542a9b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ec73538a-d215-468f-b84e-56f8b040d8a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=637bfd4d-c545-47be-afdb-625fd0ecaffb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=398cfb7c-bd58-4bb7-8778-e485fd13a934) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.040 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.065 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[befce846-63e3-42bf-bc0f-6bd54eaaa57b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.070 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[d973e70d-567d-4293-9330-89614817b1af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:59 np0005558241 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000092.scope: Deactivated successfully.
Dec 13 04:16:59 np0005558241 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d00000092.scope: Consumed 14.185s CPU time.
Dec 13 04:16:59 np0005558241 systemd-machined[210538]: Machine qemu-177-instance-00000092 terminated.
Dec 13 04:16:59 np0005558241 podman[400423]: 2025-12-13 09:16:59.099210349 +0000 UTC m=+1.267791020 container attach 4ee8d3fc8ee1f3ae710c62f43c771db919c0ebe834483d8429d9627a3c0245b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:16:59 np0005558241 podman[400423]: 2025-12-13 09:16:59.101401193 +0000 UTC m=+1.269981864 container died 4ee8d3fc8ee1f3ae710c62f43c771db919c0ebe834483d8429d9627a3c0245b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.105 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[e1cce48b-9f2e-4ef6-a3dc-63321028c7d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.125 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8f81719e-3f5f-442d-9b81-00d768ba6a2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3fc9ec4-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:82:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 446], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992138, 'reachable_time': 20078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400474, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.141 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.147 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[359fd760-f86a-41b0-9b0c-ca484450c472]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd3fc9ec4-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 992154, 'tstamp': 992154}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400475, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd3fc9ec4-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 992158, 'tstamp': 992158}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400475, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.149 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3fc9ec4-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.152 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.158 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.159 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3fc9ec4-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.160 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.161 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd3fc9ec4-40, col_values=(('external_ids', {'iface-id': 'f954b813-27c7-46fa-b00a-23e0bb5d820d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.162 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.165 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 398cfb7c-bd58-4bb7-8778-e485fd13a934 in datapath 9f757ab8-2a8f-4771-8492-2bcf521016bb unbound from our chassis#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.168 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f757ab8-2a8f-4771-8492-2bcf521016bb#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.192 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[26d6bc2a-57c6-4a95-a370-b02584144aca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.222 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[790f2d21-2613-403b-9ee8-b84c46e16245]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.228 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd0c323-68fc-4e15-a638-ece561b426ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.234 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:59 np0005558241 NetworkManager[50376]: <info>  [1765617419.2391] manager: (tap398cfb7c-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/655)
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.244 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.260 248514 INFO nova.virt.libvirt.driver [-] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Instance destroyed successfully.#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.261 248514 DEBUG nova.objects.instance [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid c6e4d841-78ee-4a00-87ca-a6c2d542a9b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.267 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[466c69dd-f362-4403-b682-ba011c443253]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.285 248514 DEBUG nova.virt.libvirt.vif [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:16:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2101964116',display_name='tempest-TestGettingAddress-server-2101964116',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2101964116',id=146,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:16:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-mpwdn9yq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:16:28Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=c6e4d841-78ee-4a00-87ca-a6c2d542a9b7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.286 248514 DEBUG nova.network.os_vif_util [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.287 248514 DEBUG nova.network.os_vif_util [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:43:ef:0d,bridge_name='br-int',has_traffic_filtering=True,id=fb0ef518-cd2c-4c22-8d93-009f10641109,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb0ef518-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.286 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea4b3b5-b975-4e40-a719-00fc3a576b4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f757ab8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:c4:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 447], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992247, 'reachable_time': 37493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400504, 'error': None, 'target': 'ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.288 248514 DEBUG os_vif [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:ef:0d,bridge_name='br-int',has_traffic_filtering=True,id=fb0ef518-cd2c-4c22-8d93-009f10641109,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb0ef518-cd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.292 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.292 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb0ef518-cd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.297 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.299 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.304 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.306 248514 INFO os_vif [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:ef:0d,bridge_name='br-int',has_traffic_filtering=True,id=fb0ef518-cd2c-4c22-8d93-009f10641109,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb0ef518-cd')#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.307 248514 DEBUG nova.virt.libvirt.vif [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:16:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2101964116',display_name='tempest-TestGettingAddress-server-2101964116',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2101964116',id=146,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:16:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-mpwdn9yq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:16:28Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=c6e4d841-78ee-4a00-87ca-a6c2d542a9b7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.307 248514 DEBUG nova.network.os_vif_util [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.308 248514 DEBUG nova.network.os_vif_util [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:e2:71,bridge_name='br-int',has_traffic_filtering=True,id=398cfb7c-bd58-4bb7-8778-e485fd13a934,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap398cfb7c-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.308 248514 DEBUG os_vif [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:e2:71,bridge_name='br-int',has_traffic_filtering=True,id=398cfb7c-bd58-4bb7-8778-e485fd13a934,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap398cfb7c-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.309 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0165461d-4e5c-400d-9cbf-b13b3efcad85]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9f757ab8-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 992261, 'tstamp': 992261}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400505, 'error': None, 'target': 'ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.310 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.310 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap398cfb7c-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.311 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f757ab8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.311 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.313 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.314 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:16:59 np0005558241 nova_compute[248510]: 2025-12-13 09:16:59.315 248514 INFO os_vif [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:e2:71,bridge_name='br-int',has_traffic_filtering=True,id=398cfb7c-bd58-4bb7-8778-e485fd13a934,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap398cfb7c-bd')#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.316 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f757ab8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.317 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.317 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f757ab8-20, col_values=(('external_ids', {'iface-id': '8c81cc53-1ad8-47d5-9def-f8710a05a423'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:16:59 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:16:59.318 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:17:00 np0005558241 nova_compute[248510]: 2025-12-13 09:17:00.047 248514 DEBUG nova.compute.manager [req-cce9b55c-5a66-4e41-a695-caeb0fceb9ec req-63204bf7-1cdf-423c-87b5-8aaa78c6163b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-unplugged-fb0ef518-cd2c-4c22-8d93-009f10641109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:17:00 np0005558241 nova_compute[248510]: 2025-12-13 09:17:00.048 248514 DEBUG oslo_concurrency.lockutils [req-cce9b55c-5a66-4e41-a695-caeb0fceb9ec req-63204bf7-1cdf-423c-87b5-8aaa78c6163b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:00 np0005558241 nova_compute[248510]: 2025-12-13 09:17:00.048 248514 DEBUG oslo_concurrency.lockutils [req-cce9b55c-5a66-4e41-a695-caeb0fceb9ec req-63204bf7-1cdf-423c-87b5-8aaa78c6163b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:00 np0005558241 nova_compute[248510]: 2025-12-13 09:17:00.049 248514 DEBUG oslo_concurrency.lockutils [req-cce9b55c-5a66-4e41-a695-caeb0fceb9ec req-63204bf7-1cdf-423c-87b5-8aaa78c6163b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:00 np0005558241 nova_compute[248510]: 2025-12-13 09:17:00.049 248514 DEBUG nova.compute.manager [req-cce9b55c-5a66-4e41-a695-caeb0fceb9ec req-63204bf7-1cdf-423c-87b5-8aaa78c6163b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] No waiting events found dispatching network-vif-unplugged-fb0ef518-cd2c-4c22-8d93-009f10641109 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:17:00 np0005558241 nova_compute[248510]: 2025-12-13 09:17:00.049 248514 DEBUG nova.compute.manager [req-cce9b55c-5a66-4e41-a695-caeb0fceb9ec req-63204bf7-1cdf-423c-87b5-8aaa78c6163b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-unplugged-fb0ef518-cd2c-4c22-8d93-009f10641109 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:17:00 np0005558241 nova_compute[248510]: 2025-12-13 09:17:00.232 248514 DEBUG nova.compute.manager [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-unplugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:17:00 np0005558241 nova_compute[248510]: 2025-12-13 09:17:00.233 248514 DEBUG oslo_concurrency.lockutils [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:00 np0005558241 nova_compute[248510]: 2025-12-13 09:17:00.233 248514 DEBUG oslo_concurrency.lockutils [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:00 np0005558241 nova_compute[248510]: 2025-12-13 09:17:00.234 248514 DEBUG oslo_concurrency.lockutils [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:00 np0005558241 nova_compute[248510]: 2025-12-13 09:17:00.234 248514 DEBUG nova.compute.manager [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] No waiting events found dispatching network-vif-unplugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:17:00 np0005558241 nova_compute[248510]: 2025-12-13 09:17:00.235 248514 DEBUG nova.compute.manager [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-unplugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:17:00 np0005558241 nova_compute[248510]: 2025-12-13 09:17:00.235 248514 DEBUG nova.compute.manager [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-plugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:17:00 np0005558241 nova_compute[248510]: 2025-12-13 09:17:00.236 248514 DEBUG oslo_concurrency.lockutils [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:00 np0005558241 nova_compute[248510]: 2025-12-13 09:17:00.236 248514 DEBUG oslo_concurrency.lockutils [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:00 np0005558241 nova_compute[248510]: 2025-12-13 09:17:00.237 248514 DEBUG oslo_concurrency.lockutils [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:00 np0005558241 nova_compute[248510]: 2025-12-13 09:17:00.237 248514 DEBUG nova.compute.manager [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] No waiting events found dispatching network-vif-plugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:17:00 np0005558241 nova_compute[248510]: 2025-12-13 09:17:00.238 248514 WARNING nova.compute.manager [req-7a1e874a-0dfa-455e-9056-ddd702ff5642 req-3ccb7a64-a218-4c70-b490-02055f1ee507 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received unexpected event network-vif-plugged-398cfb7c-bd58-4bb7-8778-e485fd13a934 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 04:17:00 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3d15a022f82b9e20854443099096a553386da9089999dd92ae994af7f378ef6d-merged.mount: Deactivated successfully.
Dec 13 04:17:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3477: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 21 KiB/s wr, 3 op/s
Dec 13 04:17:01 np0005558241 podman[400423]: 2025-12-13 09:17:01.001707113 +0000 UTC m=+3.170287774 container remove 4ee8d3fc8ee1f3ae710c62f43c771db919c0ebe834483d8429d9627a3c0245b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 04:17:01 np0005558241 systemd[1]: libpod-conmon-4ee8d3fc8ee1f3ae710c62f43c771db919c0ebe834483d8429d9627a3c0245b3.scope: Deactivated successfully.
Dec 13 04:17:01 np0005558241 nova_compute[248510]: 2025-12-13 09:17:01.231 248514 INFO nova.virt.libvirt.driver [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Deleting instance files /var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_del#033[00m
Dec 13 04:17:01 np0005558241 nova_compute[248510]: 2025-12-13 09:17:01.232 248514 INFO nova.virt.libvirt.driver [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Deletion of /var/lib/nova/instances/c6e4d841-78ee-4a00-87ca-a6c2d542a9b7_del complete#033[00m
Dec 13 04:17:01 np0005558241 podman[400534]: 2025-12-13 09:17:01.252661576 +0000 UTC m=+0.058465047 container create 79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ritchie, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:17:01 np0005558241 systemd[1]: Started libpod-conmon-79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0.scope.
Dec 13 04:17:01 np0005558241 podman[400534]: 2025-12-13 09:17:01.230087404 +0000 UTC m=+0.035890905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:17:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:17:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddb6dce944d5bb2dc09a36f1252c062b698b280d505a3d6dac7780bee09eeaf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddb6dce944d5bb2dc09a36f1252c062b698b280d505a3d6dac7780bee09eeaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddb6dce944d5bb2dc09a36f1252c062b698b280d505a3d6dac7780bee09eeaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddb6dce944d5bb2dc09a36f1252c062b698b280d505a3d6dac7780bee09eeaf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:01 np0005558241 podman[400534]: 2025-12-13 09:17:01.360503063 +0000 UTC m=+0.166306564 container init 79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 04:17:01 np0005558241 podman[400534]: 2025-12-13 09:17:01.369362784 +0000 UTC m=+0.175166255 container start 79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ritchie, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 04:17:01 np0005558241 podman[400534]: 2025-12-13 09:17:01.374155074 +0000 UTC m=+0.179958545 container attach 79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:17:01 np0005558241 nova_compute[248510]: 2025-12-13 09:17:01.559 248514 INFO nova.compute.manager [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Took 3.37 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:17:01 np0005558241 nova_compute[248510]: 2025-12-13 09:17:01.561 248514 DEBUG oslo.service.loopingcall [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:17:01 np0005558241 nova_compute[248510]: 2025-12-13 09:17:01.561 248514 DEBUG nova.compute.manager [-] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:17:01 np0005558241 nova_compute[248510]: 2025-12-13 09:17:01.561 248514 DEBUG nova.network.neutron [-] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]: {
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:    "0": [
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:        {
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "devices": [
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "/dev/loop3"
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            ],
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "lv_name": "ceph_lv0",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "lv_size": "21470642176",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "name": "ceph_lv0",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "tags": {
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.cluster_name": "ceph",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.crush_device_class": "",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.encrypted": "0",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.objectstore": "bluestore",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.osd_id": "0",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.type": "block",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.vdo": "0",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.with_tpm": "0"
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            },
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "type": "block",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "vg_name": "ceph_vg0"
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:        }
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:    ],
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:    "1": [
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:        {
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "devices": [
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "/dev/loop4"
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            ],
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "lv_name": "ceph_lv1",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "lv_size": "21470642176",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "name": "ceph_lv1",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "tags": {
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.cluster_name": "ceph",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.crush_device_class": "",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.encrypted": "0",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.objectstore": "bluestore",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.osd_id": "1",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.type": "block",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.vdo": "0",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.with_tpm": "0"
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            },
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "type": "block",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "vg_name": "ceph_vg1"
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:        }
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:    ],
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:    "2": [
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:        {
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "devices": [
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "/dev/loop5"
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            ],
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "lv_name": "ceph_lv2",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "lv_size": "21470642176",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "name": "ceph_lv2",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "tags": {
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.cluster_name": "ceph",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.crush_device_class": "",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.encrypted": "0",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.objectstore": "bluestore",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.osd_id": "2",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.type": "block",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.vdo": "0",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:                "ceph.with_tpm": "0"
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            },
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "type": "block",
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:            "vg_name": "ceph_vg2"
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:        }
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]:    ]
Dec 13 04:17:01 np0005558241 mystifying_ritchie[400550]: }
Dec 13 04:17:01 np0005558241 systemd[1]: libpod-79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0.scope: Deactivated successfully.
Dec 13 04:17:01 np0005558241 conmon[400550]: conmon 79f0a44fa66771032e39 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0.scope/container/memory.events
Dec 13 04:17:01 np0005558241 podman[400534]: 2025-12-13 09:17:01.728728939 +0000 UTC m=+0.534532400 container died 79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:17:01 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1ddb6dce944d5bb2dc09a36f1252c062b698b280d505a3d6dac7780bee09eeaf-merged.mount: Deactivated successfully.
Dec 13 04:17:01 np0005558241 podman[400534]: 2025-12-13 09:17:01.774240703 +0000 UTC m=+0.580044184 container remove 79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ritchie, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:17:01 np0005558241 systemd[1]: libpod-conmon-79f0a44fa66771032e39a045a95ce7865d524d8ae69a78938811e99fd48525f0.scope: Deactivated successfully.
Dec 13 04:17:01 np0005558241 nova_compute[248510]: 2025-12-13 09:17:01.822 248514 DEBUG nova.network.neutron [req-9f8db408-9243-4a52-85b5-b048763de2fc req-906647eb-fe12-4390-b02d-16683eb6217f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updated VIF entry in instance network info cache for port fb0ef518-cd2c-4c22-8d93-009f10641109. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:17:01 np0005558241 nova_compute[248510]: 2025-12-13 09:17:01.823 248514 DEBUG nova.network.neutron [req-9f8db408-9243-4a52-85b5-b048763de2fc req-906647eb-fe12-4390-b02d-16683eb6217f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updating instance_info_cache with network_info: [{"id": "fb0ef518-cd2c-4c22-8d93-009f10641109", "address": "fa:16:3e:43:ef:0d", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb0ef518-cd", "ovs_interfaceid": "fb0ef518-cd2c-4c22-8d93-009f10641109", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:17:01 np0005558241 nova_compute[248510]: 2025-12-13 09:17:01.850 248514 DEBUG oslo_concurrency.lockutils [req-9f8db408-9243-4a52-85b5-b048763de2fc req-906647eb-fe12-4390-b02d-16683eb6217f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:17:02 np0005558241 nova_compute[248510]: 2025-12-13 09:17:02.154 248514 DEBUG nova.compute.manager [req-69ed8f3c-b417-4e4e-b04d-7d7d1480a546 req-2e9bdd2e-876d-4ead-ad0a-be7d42d5433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-plugged-fb0ef518-cd2c-4c22-8d93-009f10641109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:17:02 np0005558241 nova_compute[248510]: 2025-12-13 09:17:02.154 248514 DEBUG oslo_concurrency.lockutils [req-69ed8f3c-b417-4e4e-b04d-7d7d1480a546 req-2e9bdd2e-876d-4ead-ad0a-be7d42d5433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:02 np0005558241 nova_compute[248510]: 2025-12-13 09:17:02.154 248514 DEBUG oslo_concurrency.lockutils [req-69ed8f3c-b417-4e4e-b04d-7d7d1480a546 req-2e9bdd2e-876d-4ead-ad0a-be7d42d5433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:02 np0005558241 nova_compute[248510]: 2025-12-13 09:17:02.155 248514 DEBUG oslo_concurrency.lockutils [req-69ed8f3c-b417-4e4e-b04d-7d7d1480a546 req-2e9bdd2e-876d-4ead-ad0a-be7d42d5433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:02 np0005558241 nova_compute[248510]: 2025-12-13 09:17:02.155 248514 DEBUG nova.compute.manager [req-69ed8f3c-b417-4e4e-b04d-7d7d1480a546 req-2e9bdd2e-876d-4ead-ad0a-be7d42d5433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] No waiting events found dispatching network-vif-plugged-fb0ef518-cd2c-4c22-8d93-009f10641109 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:17:02 np0005558241 nova_compute[248510]: 2025-12-13 09:17:02.155 248514 WARNING nova.compute.manager [req-69ed8f3c-b417-4e4e-b04d-7d7d1480a546 req-2e9bdd2e-876d-4ead-ad0a-be7d42d5433d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received unexpected event network-vif-plugged-fb0ef518-cd2c-4c22-8d93-009f10641109 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 04:17:02 np0005558241 podman[400634]: 2025-12-13 09:17:02.270693192 +0000 UTC m=+0.046101680 container create 97933e3bb12caf42cc8677b7a400d256d360429da053e94028c169d3cb77e923 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 04:17:02 np0005558241 systemd[1]: Started libpod-conmon-97933e3bb12caf42cc8677b7a400d256d360429da053e94028c169d3cb77e923.scope.
Dec 13 04:17:02 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:17:02 np0005558241 podman[400634]: 2025-12-13 09:17:02.25135581 +0000 UTC m=+0.026764318 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:17:02 np0005558241 podman[400634]: 2025-12-13 09:17:02.362174691 +0000 UTC m=+0.137583189 container init 97933e3bb12caf42cc8677b7a400d256d360429da053e94028c169d3cb77e923 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:17:02 np0005558241 podman[400634]: 2025-12-13 09:17:02.371247598 +0000 UTC m=+0.146656076 container start 97933e3bb12caf42cc8677b7a400d256d360429da053e94028c169d3cb77e923 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wing, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 13 04:17:02 np0005558241 podman[400634]: 2025-12-13 09:17:02.376019386 +0000 UTC m=+0.151427894 container attach 97933e3bb12caf42cc8677b7a400d256d360429da053e94028c169d3cb77e923 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wing, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:17:02 np0005558241 practical_wing[400650]: 167 167
Dec 13 04:17:02 np0005558241 podman[400634]: 2025-12-13 09:17:02.378649042 +0000 UTC m=+0.154057530 container died 97933e3bb12caf42cc8677b7a400d256d360429da053e94028c169d3cb77e923 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wing, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:17:02 np0005558241 systemd[1]: libpod-97933e3bb12caf42cc8677b7a400d256d360429da053e94028c169d3cb77e923.scope: Deactivated successfully.
Dec 13 04:17:02 np0005558241 systemd[1]: var-lib-containers-storage-overlay-919d115e9804a4cd973bef2adb61562c7305dea26db8248e2dfcc274eb712789-merged.mount: Deactivated successfully.
Dec 13 04:17:02 np0005558241 podman[400634]: 2025-12-13 09:17:02.420847263 +0000 UTC m=+0.196255741 container remove 97933e3bb12caf42cc8677b7a400d256d360429da053e94028c169d3cb77e923 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 04:17:02 np0005558241 systemd[1]: libpod-conmon-97933e3bb12caf42cc8677b7a400d256d360429da053e94028c169d3cb77e923.scope: Deactivated successfully.
Dec 13 04:17:02 np0005558241 nova_compute[248510]: 2025-12-13 09:17:02.462 248514 DEBUG nova.compute.manager [req-bfb4e517-8a5d-4202-81fb-0e0fbfb185bd req-723979a7-8b5a-4c9a-a32e-80210c81c920 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-deleted-fb0ef518-cd2c-4c22-8d93-009f10641109 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:17:02 np0005558241 nova_compute[248510]: 2025-12-13 09:17:02.463 248514 INFO nova.compute.manager [req-bfb4e517-8a5d-4202-81fb-0e0fbfb185bd req-723979a7-8b5a-4c9a-a32e-80210c81c920 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Neutron deleted interface fb0ef518-cd2c-4c22-8d93-009f10641109; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 04:17:02 np0005558241 nova_compute[248510]: 2025-12-13 09:17:02.463 248514 DEBUG nova.network.neutron [req-bfb4e517-8a5d-4202-81fb-0e0fbfb185bd req-723979a7-8b5a-4c9a-a32e-80210c81c920 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updating instance_info_cache with network_info: [{"id": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "address": "fa:16:3e:e3:e2:71", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee3:e271", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap398cfb7c-bd", "ovs_interfaceid": "398cfb7c-bd58-4bb7-8778-e485fd13a934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:17:02 np0005558241 nova_compute[248510]: 2025-12-13 09:17:02.486 248514 DEBUG nova.compute.manager [req-bfb4e517-8a5d-4202-81fb-0e0fbfb185bd req-723979a7-8b5a-4c9a-a32e-80210c81c920 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Detach interface failed, port_id=fb0ef518-cd2c-4c22-8d93-009f10641109, reason: Instance c6e4d841-78ee-4a00-87ca-a6c2d542a9b7 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec 13 04:17:02 np0005558241 podman[400673]: 2025-12-13 09:17:02.645578363 +0000 UTC m=+0.068658482 container create a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:17:02 np0005558241 systemd[1]: Started libpod-conmon-a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf.scope.
Dec 13 04:17:02 np0005558241 podman[400673]: 2025-12-13 09:17:02.612802716 +0000 UTC m=+0.035882845 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:17:02 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:17:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d43b064a60bd18064d068fec2cbccc166d8ed6bf0678c5d0ff977af7b76bf07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d43b064a60bd18064d068fec2cbccc166d8ed6bf0678c5d0ff977af7b76bf07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d43b064a60bd18064d068fec2cbccc166d8ed6bf0678c5d0ff977af7b76bf07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:02 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d43b064a60bd18064d068fec2cbccc166d8ed6bf0678c5d0ff977af7b76bf07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:02 np0005558241 podman[400673]: 2025-12-13 09:17:02.772179018 +0000 UTC m=+0.195259227 container init a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 04:17:02 np0005558241 podman[400673]: 2025-12-13 09:17:02.785856649 +0000 UTC m=+0.208936758 container start a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:17:02 np0005558241 podman[400673]: 2025-12-13 09:17:02.789811057 +0000 UTC m=+0.212891166 container attach a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:17:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3478: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 9.4 KiB/s wr, 2 op/s
Dec 13 04:17:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:03 np0005558241 nova_compute[248510]: 2025-12-13 09:17:03.153 248514 DEBUG nova.network.neutron [-] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:17:03 np0005558241 nova_compute[248510]: 2025-12-13 09:17:03.171 248514 INFO nova.compute.manager [-] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Took 1.61 seconds to deallocate network for instance.#033[00m
Dec 13 04:17:03 np0005558241 nova_compute[248510]: 2025-12-13 09:17:03.220 248514 DEBUG oslo_concurrency.lockutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:03 np0005558241 nova_compute[248510]: 2025-12-13 09:17:03.221 248514 DEBUG oslo_concurrency.lockutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:03 np0005558241 nova_compute[248510]: 2025-12-13 09:17:03.301 248514 DEBUG oslo_concurrency.processutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:17:03 np0005558241 lvm[400788]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:17:03 np0005558241 lvm[400789]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:17:03 np0005558241 lvm[400789]: VG ceph_vg1 finished
Dec 13 04:17:03 np0005558241 lvm[400788]: VG ceph_vg0 finished
Dec 13 04:17:03 np0005558241 lvm[400791]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:17:03 np0005558241 lvm[400791]: VG ceph_vg2 finished
Dec 13 04:17:03 np0005558241 recursing_thompson[400690]: {}
Dec 13 04:17:03 np0005558241 systemd[1]: libpod-a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf.scope: Deactivated successfully.
Dec 13 04:17:03 np0005558241 systemd[1]: libpod-a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf.scope: Consumed 1.382s CPU time.
Dec 13 04:17:03 np0005558241 podman[400673]: 2025-12-13 09:17:03.700589321 +0000 UTC m=+1.123669430 container died a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 04:17:03 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6d43b064a60bd18064d068fec2cbccc166d8ed6bf0678c5d0ff977af7b76bf07-merged.mount: Deactivated successfully.
Dec 13 04:17:03 np0005558241 podman[400673]: 2025-12-13 09:17:03.778210875 +0000 UTC m=+1.201290984 container remove a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:17:03 np0005558241 systemd[1]: libpod-conmon-a0cb8945492a5fb0fe79127da1e659c341ef46bcceaf9c269a979a38a6f35bbf.scope: Deactivated successfully.
Dec 13 04:17:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:17:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:17:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:17:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:17:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:17:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1721760534' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:17:03 np0005558241 nova_compute[248510]: 2025-12-13 09:17:03.925 248514 DEBUG oslo_concurrency.processutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:17:03 np0005558241 nova_compute[248510]: 2025-12-13 09:17:03.935 248514 DEBUG nova.compute.provider_tree [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:17:03 np0005558241 nova_compute[248510]: 2025-12-13 09:17:03.958 248514 DEBUG nova.scheduler.client.report [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:17:03 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:17:03 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:17:03 np0005558241 nova_compute[248510]: 2025-12-13 09:17:03.990 248514 DEBUG oslo_concurrency.lockutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:04 np0005558241 nova_compute[248510]: 2025-12-13 09:17:04.013 248514 INFO nova.scheduler.client.report [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance c6e4d841-78ee-4a00-87ca-a6c2d542a9b7#033[00m
Dec 13 04:17:04 np0005558241 nova_compute[248510]: 2025-12-13 09:17:04.096 248514 DEBUG oslo_concurrency.lockutils [None req-d77167b4-84c8-490f-a56f-284b7705b0f0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "c6e4d841-78ee-4a00-87ca-a6c2d542a9b7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:04 np0005558241 nova_compute[248510]: 2025-12-13 09:17:04.143 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:04 np0005558241 nova_compute[248510]: 2025-12-13 09:17:04.312 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:04 np0005558241 nova_compute[248510]: 2025-12-13 09:17:04.565 248514 DEBUG nova.compute.manager [req-769ee749-bef1-4440-93b9-9d6673b476bf req-cb743aad-0776-4a34-a91e-acfd08d82789 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Received event network-vif-deleted-398cfb7c-bd58-4bb7-8778-e485fd13a934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:17:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3479: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 11 KiB/s wr, 30 op/s
Dec 13 04:17:05 np0005558241 nova_compute[248510]: 2025-12-13 09:17:05.853 248514 DEBUG oslo_concurrency.lockutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:05 np0005558241 nova_compute[248510]: 2025-12-13 09:17:05.854 248514 DEBUG oslo_concurrency.lockutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:05 np0005558241 nova_compute[248510]: 2025-12-13 09:17:05.854 248514 DEBUG oslo_concurrency.lockutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:05 np0005558241 nova_compute[248510]: 2025-12-13 09:17:05.854 248514 DEBUG oslo_concurrency.lockutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:05 np0005558241 nova_compute[248510]: 2025-12-13 09:17:05.854 248514 DEBUG oslo_concurrency.lockutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:05 np0005558241 nova_compute[248510]: 2025-12-13 09:17:05.855 248514 INFO nova.compute.manager [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Terminating instance#033[00m
Dec 13 04:17:05 np0005558241 nova_compute[248510]: 2025-12-13 09:17:05.856 248514 DEBUG nova.compute.manager [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:17:05 np0005558241 kernel: tap66abeeb9-b4 (unregistering): left promiscuous mode
Dec 13 04:17:05 np0005558241 NetworkManager[50376]: <info>  [1765617425.9090] device (tap66abeeb9-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:17:05 np0005558241 ovn_controller[148476]: 2025-12-13T09:17:05Z|01584|binding|INFO|Releasing lport 66abeeb9-b4e2-4901-9437-be8cd001222f from this chassis (sb_readonly=0)
Dec 13 04:17:05 np0005558241 nova_compute[248510]: 2025-12-13 09:17:05.919 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:05 np0005558241 ovn_controller[148476]: 2025-12-13T09:17:05Z|01585|binding|INFO|Setting lport 66abeeb9-b4e2-4901-9437-be8cd001222f down in Southbound
Dec 13 04:17:05 np0005558241 ovn_controller[148476]: 2025-12-13T09:17:05Z|01586|binding|INFO|Removing iface tap66abeeb9-b4 ovn-installed in OVS
Dec 13 04:17:05 np0005558241 nova_compute[248510]: 2025-12-13 09:17:05.922 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:05 np0005558241 kernel: tap3b47d34a-69 (unregistering): left promiscuous mode
Dec 13 04:17:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:05.932 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:09:e2 10.100.0.7'], port_security=['fa:16:3e:08:09:e2 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '44149dbe-362b-4930-a63b-d04c9a3b3b4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3fc9ec4-4452-4225-b100-75f3859e091a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ec73538a-d215-468f-b84e-56f8b040d8a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74bfbdc0-d183-4405-bf5b-7ce9dc4c0882, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=66abeeb9-b4e2-4901-9437-be8cd001222f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:17:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:05.933 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 66abeeb9-b4e2-4901-9437-be8cd001222f in datapath d3fc9ec4-4452-4225-b100-75f3859e091a unbound from our chassis#033[00m
Dec 13 04:17:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:05.935 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d3fc9ec4-4452-4225-b100-75f3859e091a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:17:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:05.937 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[24b1147a-474a-4eb8-bdb0-cdbc089e9a4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:05.937 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a namespace which is not needed anymore#033[00m
Dec 13 04:17:05 np0005558241 NetworkManager[50376]: <info>  [1765617425.9411] device (tap3b47d34a-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:17:05 np0005558241 nova_compute[248510]: 2025-12-13 09:17:05.943 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:05 np0005558241 nova_compute[248510]: 2025-12-13 09:17:05.949 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:05 np0005558241 ovn_controller[148476]: 2025-12-13T09:17:05Z|01587|binding|INFO|Releasing lport 3b47d34a-6968-411d-8f9d-38a835c0fa77 from this chassis (sb_readonly=0)
Dec 13 04:17:05 np0005558241 ovn_controller[148476]: 2025-12-13T09:17:05Z|01588|binding|INFO|Setting lport 3b47d34a-6968-411d-8f9d-38a835c0fa77 down in Southbound
Dec 13 04:17:05 np0005558241 ovn_controller[148476]: 2025-12-13T09:17:05Z|01589|binding|INFO|Removing iface tap3b47d34a-69 ovn-installed in OVS
Dec 13 04:17:05 np0005558241 nova_compute[248510]: 2025-12-13 09:17:05.952 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:05 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:05.961 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:e9:f0 2001:db8::f816:3eff:fe1a:e9f0'], port_security=['fa:16:3e:1a:e9:f0 2001:db8::f816:3eff:fe1a:e9f0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe1a:e9f0/64', 'neutron:device_id': '44149dbe-362b-4930-a63b-d04c9a3b3b4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ec73538a-d215-468f-b84e-56f8b040d8a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=637bfd4d-c545-47be-afdb-625fd0ecaffb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3b47d34a-6968-411d-8f9d-38a835c0fa77) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:17:05 np0005558241 nova_compute[248510]: 2025-12-13 09:17:05.972 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:05 np0005558241 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d00000091.scope: Deactivated successfully.
Dec 13 04:17:05 np0005558241 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d00000091.scope: Consumed 16.873s CPU time.
Dec 13 04:17:06 np0005558241 systemd-machined[210538]: Machine qemu-176-instance-00000091 terminated.
Dec 13 04:17:06 np0005558241 podman[400836]: 2025-12-13 09:17:06.003877822 +0000 UTC m=+0.079685597 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Dec 13 04:17:06 np0005558241 podman[400835]: 2025-12-13 09:17:06.030983547 +0000 UTC m=+0.114968395 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 04:17:06 np0005558241 podman[400834]: 2025-12-13 09:17:06.031312656 +0000 UTC m=+0.115211652 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 13 04:17:06 np0005558241 neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a[398786]: [NOTICE]   (398790) : haproxy version is 2.8.14-c23fe91
Dec 13 04:17:06 np0005558241 neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a[398786]: [NOTICE]   (398790) : path to executable is /usr/sbin/haproxy
Dec 13 04:17:06 np0005558241 neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a[398786]: [WARNING]  (398790) : Exiting Master process...
Dec 13 04:17:06 np0005558241 neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a[398786]: [ALERT]    (398790) : Current worker (398792) exited with code 143 (Terminated)
Dec 13 04:17:06 np0005558241 NetworkManager[50376]: <info>  [1765617426.0875] manager: (tap66abeeb9-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/656)
Dec 13 04:17:06 np0005558241 neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a[398786]: [WARNING]  (398790) : All workers exited. Exiting... (0)
Dec 13 04:17:06 np0005558241 systemd[1]: libpod-6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb.scope: Deactivated successfully.
Dec 13 04:17:06 np0005558241 conmon[398786]: conmon 6ea39ac5e945181bf086 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb.scope/container/memory.events
Dec 13 04:17:06 np0005558241 podman[400923]: 2025-12-13 09:17:06.093262549 +0000 UTC m=+0.054554300 container died 6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.118 248514 INFO nova.virt.libvirt.driver [-] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Instance destroyed successfully.#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.118 248514 DEBUG nova.objects.instance [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 44149dbe-362b-4930-a63b-d04c9a3b3b4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:17:06 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb-userdata-shm.mount: Deactivated successfully.
Dec 13 04:17:06 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a72ad838abac5f9443ea2e98c9a253d9a9467a978cbeac3dbb207a420c8706cd-merged.mount: Deactivated successfully.
Dec 13 04:17:06 np0005558241 podman[400923]: 2025-12-13 09:17:06.137907802 +0000 UTC m=+0.099199553 container cleanup 6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Dec 13 04:17:06 np0005558241 systemd[1]: libpod-conmon-6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb.scope: Deactivated successfully.
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.147 248514 DEBUG nova.virt.libvirt.vif [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:15:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1513449558',display_name='tempest-TestGettingAddress-server-1513449558',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1513449558',id=145,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:15:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-g1atpgf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:15:44Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=44149dbe-362b-4930-a63b-d04c9a3b3b4c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.147 248514 DEBUG nova.network.os_vif_util [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.148 248514 DEBUG nova.network.os_vif_util [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:08:09:e2,bridge_name='br-int',has_traffic_filtering=True,id=66abeeb9-b4e2-4901-9437-be8cd001222f,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66abeeb9-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.148 248514 DEBUG os_vif [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:09:e2,bridge_name='br-int',has_traffic_filtering=True,id=66abeeb9-b4e2-4901-9437-be8cd001222f,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66abeeb9-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.151 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.151 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66abeeb9-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.153 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.154 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.158 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.160 248514 INFO os_vif [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:09:e2,bridge_name='br-int',has_traffic_filtering=True,id=66abeeb9-b4e2-4901-9437-be8cd001222f,network=Network(d3fc9ec4-4452-4225-b100-75f3859e091a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66abeeb9-b4')#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.160 248514 DEBUG nova.virt.libvirt.vif [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:15:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1513449558',display_name='tempest-TestGettingAddress-server-1513449558',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1513449558',id=145,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0CsSgLr5ZjJcSh6LYbew8AI0jJ884LFXjloVH5z7SJYJQLdvnybHgq9sNX0DSYX8wwQQDPgm616hnaWVFHNOxEZJwilUmmCUtgdNSAG6C7FF+z7aXVgfL9F768MWQjaA==',key_name='tempest-TestGettingAddress-25224908',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:15:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-g1atpgf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:15:44Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=44149dbe-362b-4930-a63b-d04c9a3b3b4c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.160 248514 DEBUG nova.network.os_vif_util [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.161 248514 DEBUG nova.network.os_vif_util [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:e9:f0,bridge_name='br-int',has_traffic_filtering=True,id=3b47d34a-6968-411d-8f9d-38a835c0fa77,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b47d34a-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.161 248514 DEBUG os_vif [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:e9:f0,bridge_name='br-int',has_traffic_filtering=True,id=3b47d34a-6968-411d-8f9d-38a835c0fa77,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b47d34a-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.162 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.162 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b47d34a-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.164 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.165 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.168 248514 INFO os_vif [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:e9:f0,bridge_name='br-int',has_traffic_filtering=True,id=3b47d34a-6968-411d-8f9d-38a835c0fa77,network=Network(9f757ab8-2a8f-4771-8492-2bcf521016bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b47d34a-69')#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.191 248514 DEBUG nova.compute.manager [req-c05b5021-b382-4a8b-901f-7f6f8aed249f req-a925e0e0-260a-4f2b-9632-4fcff79b20a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-unplugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.191 248514 DEBUG oslo_concurrency.lockutils [req-c05b5021-b382-4a8b-901f-7f6f8aed249f req-a925e0e0-260a-4f2b-9632-4fcff79b20a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.192 248514 DEBUG oslo_concurrency.lockutils [req-c05b5021-b382-4a8b-901f-7f6f8aed249f req-a925e0e0-260a-4f2b-9632-4fcff79b20a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.192 248514 DEBUG oslo_concurrency.lockutils [req-c05b5021-b382-4a8b-901f-7f6f8aed249f req-a925e0e0-260a-4f2b-9632-4fcff79b20a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.193 248514 DEBUG nova.compute.manager [req-c05b5021-b382-4a8b-901f-7f6f8aed249f req-a925e0e0-260a-4f2b-9632-4fcff79b20a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] No waiting events found dispatching network-vif-unplugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.193 248514 DEBUG nova.compute.manager [req-c05b5021-b382-4a8b-901f-7f6f8aed249f req-a925e0e0-260a-4f2b-9632-4fcff79b20a5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-unplugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:17:06 np0005558241 podman[400968]: 2025-12-13 09:17:06.758518306 +0000 UTC m=+0.592540376 container remove 6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 04:17:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.766 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a0fabb-65f1-497b-b59f-3e0500924da0]: (4, ('Sat Dec 13 09:17:06 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a (6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb)\n6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb\nSat Dec 13 09:17:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a (6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb)\n6ea39ac5e945181bf086fda76d865e4e23f9c6671631c66c86208e580e539bfb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.768 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[64275cd7-7ab0-429d-bdb9-caa9d68d1840]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.769 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3fc9ec4-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.771 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:06 np0005558241 kernel: tapd3fc9ec4-40: left promiscuous mode
Dec 13 04:17:06 np0005558241 nova_compute[248510]: 2025-12-13 09:17:06.786 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.790 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[06544b0c-9e8c-42c2-a030-05b6615f1671]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.808 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9f3d1455-16cd-4ff3-b563-f61499b0a54e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.810 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[168137d6-c5f1-41db-ad40-94f50daf192e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.829 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c6aba7ee-9643-42e5-8f76-f95e9218a08a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992131, 'reachable_time': 17859, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400998, 'error': None, 'target': 'ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:06 np0005558241 systemd[1]: run-netns-ovnmeta\x2dd3fc9ec4\x2d4452\x2d4225\x2db100\x2d75f3859e091a.mount: Deactivated successfully.
Dec 13 04:17:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.832 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d3fc9ec4-4452-4225-b100-75f3859e091a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:17:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.833 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd767e2-1b16-4942-922c-b43e8dad6f3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.837 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3b47d34a-6968-411d-8f9d-38a835c0fa77 in datapath 9f757ab8-2a8f-4771-8492-2bcf521016bb unbound from our chassis#033[00m
Dec 13 04:17:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.838 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f757ab8-2a8f-4771-8492-2bcf521016bb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:17:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.839 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e78af46d-dea1-41b2-942d-346bbe8ccb44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:06 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:06.840 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb namespace which is not needed anymore#033[00m
Dec 13 04:17:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3480: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 28 op/s
Dec 13 04:17:07 np0005558241 neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb[398907]: [NOTICE]   (398911) : haproxy version is 2.8.14-c23fe91
Dec 13 04:17:07 np0005558241 neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb[398907]: [NOTICE]   (398911) : path to executable is /usr/sbin/haproxy
Dec 13 04:17:07 np0005558241 neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb[398907]: [WARNING]  (398911) : Exiting Master process...
Dec 13 04:17:07 np0005558241 neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb[398907]: [WARNING]  (398911) : Exiting Master process...
Dec 13 04:17:07 np0005558241 neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb[398907]: [ALERT]    (398911) : Current worker (398913) exited with code 143 (Terminated)
Dec 13 04:17:07 np0005558241 neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb[398907]: [WARNING]  (398911) : All workers exited. Exiting... (0)
Dec 13 04:17:07 np0005558241 systemd[1]: libpod-2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a.scope: Deactivated successfully.
Dec 13 04:17:07 np0005558241 podman[401020]: 2025-12-13 09:17:07.368524185 +0000 UTC m=+0.434745693 container died 2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:17:07 np0005558241 nova_compute[248510]: 2025-12-13 09:17:07.661 248514 DEBUG nova.compute.manager [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-changed-66abeeb9-b4e2-4901-9437-be8cd001222f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:17:07 np0005558241 nova_compute[248510]: 2025-12-13 09:17:07.662 248514 DEBUG nova.compute.manager [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Refreshing instance network info cache due to event network-changed-66abeeb9-b4e2-4901-9437-be8cd001222f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:17:07 np0005558241 nova_compute[248510]: 2025-12-13 09:17:07.662 248514 DEBUG oslo_concurrency.lockutils [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:17:07 np0005558241 nova_compute[248510]: 2025-12-13 09:17:07.662 248514 DEBUG oslo_concurrency.lockutils [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:17:07 np0005558241 nova_compute[248510]: 2025-12-13 09:17:07.663 248514 DEBUG nova.network.neutron [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Refreshing network info cache for port 66abeeb9-b4e2-4901-9437-be8cd001222f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:17:07 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a-userdata-shm.mount: Deactivated successfully.
Dec 13 04:17:07 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f319c184b06470198864244c31cd681dddd4dfb8c67f2ea0cdfe3b222afde25a-merged.mount: Deactivated successfully.
Dec 13 04:17:07 np0005558241 nova_compute[248510]: 2025-12-13 09:17:07.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:17:07 np0005558241 podman[401020]: 2025-12-13 09:17:07.767549688 +0000 UTC m=+0.833771196 container cleanup 2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:17:07 np0005558241 nova_compute[248510]: 2025-12-13 09:17:07.789 248514 INFO nova.virt.libvirt.driver [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Deleting instance files /var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c_del#033[00m
Dec 13 04:17:07 np0005558241 nova_compute[248510]: 2025-12-13 09:17:07.790 248514 INFO nova.virt.libvirt.driver [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Deletion of /var/lib/nova/instances/44149dbe-362b-4930-a63b-d04c9a3b3b4c_del complete#033[00m
Dec 13 04:17:07 np0005558241 podman[401048]: 2025-12-13 09:17:07.854650908 +0000 UTC m=+0.060067637 container remove 2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:17:07 np0005558241 systemd[1]: libpod-conmon-2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a.scope: Deactivated successfully.
Dec 13 04:17:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:07.863 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1225ddc2-0867-44b0-848c-587757fe8c25]: (4, ('Sat Dec 13 09:17:06 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb (2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a)\n2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a\nSat Dec 13 09:17:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb (2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a)\n2aa0e8cda27cadc3fa8a3f2d5bc2e13d011efdf691404a5a5a709b761573499a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:07.865 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c5ccbb-914e-4ed4-a0d8-99ed5673b186]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:07.865 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f757ab8-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:17:07 np0005558241 nova_compute[248510]: 2025-12-13 09:17:07.867 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:07 np0005558241 kernel: tap9f757ab8-20: left promiscuous mode
Dec 13 04:17:07 np0005558241 nova_compute[248510]: 2025-12-13 09:17:07.880 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:07.883 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8c137f63-fed7-421a-825d-cef48160cf82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:07.910 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f581315a-506f-4d69-ac93-211a28d96de1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:07.911 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1152716e-c388-4a2b-993d-bddf0cfbcc78]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:07.927 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[555ce82c-11e3-4fd1-98de-fb0e27711894]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 992236, 'reachable_time': 15176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401064, 'error': None, 'target': 'ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:07.929 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f757ab8-2a8f-4771-8492-2bcf521016bb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:17:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:07.930 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[041dfa20-865a-43d0-8d20-fff8bc3374a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:07 np0005558241 systemd[1]: run-netns-ovnmeta\x2d9f757ab8\x2d2a8f\x2d4771\x2d8492\x2d2bcf521016bb.mount: Deactivated successfully.
Dec 13 04:17:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:08 np0005558241 nova_compute[248510]: 2025-12-13 09:17:08.348 248514 DEBUG nova.compute.manager [req-d74193db-3ff2-459f-9e56-331ff08f5edd req-dbb07bb4-b717-4245-831d-aba66404614b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-plugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:17:08 np0005558241 nova_compute[248510]: 2025-12-13 09:17:08.349 248514 DEBUG oslo_concurrency.lockutils [req-d74193db-3ff2-459f-9e56-331ff08f5edd req-dbb07bb4-b717-4245-831d-aba66404614b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:08 np0005558241 nova_compute[248510]: 2025-12-13 09:17:08.349 248514 DEBUG oslo_concurrency.lockutils [req-d74193db-3ff2-459f-9e56-331ff08f5edd req-dbb07bb4-b717-4245-831d-aba66404614b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:08 np0005558241 nova_compute[248510]: 2025-12-13 09:17:08.349 248514 DEBUG oslo_concurrency.lockutils [req-d74193db-3ff2-459f-9e56-331ff08f5edd req-dbb07bb4-b717-4245-831d-aba66404614b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:08 np0005558241 nova_compute[248510]: 2025-12-13 09:17:08.349 248514 DEBUG nova.compute.manager [req-d74193db-3ff2-459f-9e56-331ff08f5edd req-dbb07bb4-b717-4245-831d-aba66404614b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] No waiting events found dispatching network-vif-plugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:17:08 np0005558241 nova_compute[248510]: 2025-12-13 09:17:08.350 248514 WARNING nova.compute.manager [req-d74193db-3ff2-459f-9e56-331ff08f5edd req-dbb07bb4-b717-4245-831d-aba66404614b 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received unexpected event network-vif-plugged-3b47d34a-6968-411d-8f9d-38a835c0fa77 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 04:17:08 np0005558241 nova_compute[248510]: 2025-12-13 09:17:08.354 248514 INFO nova.compute.manager [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Took 2.50 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:17:08 np0005558241 nova_compute[248510]: 2025-12-13 09:17:08.354 248514 DEBUG oslo.service.loopingcall [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:17:08 np0005558241 nova_compute[248510]: 2025-12-13 09:17:08.354 248514 DEBUG nova.compute.manager [-] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:17:08 np0005558241 nova_compute[248510]: 2025-12-13 09:17:08.355 248514 DEBUG nova.network.neutron [-] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:17:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3481: 321 pgs: 321 active+clean; 93 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 33 op/s
Dec 13 04:17:09 np0005558241 nova_compute[248510]: 2025-12-13 09:17:09.145 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:17:09
Dec 13 04:17:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:17:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:17:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['images', 'default.rgw.control', 'volumes', 'default.rgw.log', 'vms', '.mgr', 'backups', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Dec 13 04:17:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:17:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:17:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:17:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:17:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:17:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:17:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:17:10 np0005558241 nova_compute[248510]: 2025-12-13 09:17:10.450 248514 DEBUG nova.compute.manager [req-e5f77e51-fe6d-4373-89a3-5ec40d4c7f1a req-6c1478e6-b055-473c-a445-bb9b14c017d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-deleted-3b47d34a-6968-411d-8f9d-38a835c0fa77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:17:10 np0005558241 nova_compute[248510]: 2025-12-13 09:17:10.450 248514 INFO nova.compute.manager [req-e5f77e51-fe6d-4373-89a3-5ec40d4c7f1a req-6c1478e6-b055-473c-a445-bb9b14c017d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Neutron deleted interface 3b47d34a-6968-411d-8f9d-38a835c0fa77; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 04:17:10 np0005558241 nova_compute[248510]: 2025-12-13 09:17:10.451 248514 DEBUG nova.network.neutron [req-e5f77e51-fe6d-4373-89a3-5ec40d4c7f1a req-6c1478e6-b055-473c-a445-bb9b14c017d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updating instance_info_cache with network_info: [{"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:17:10 np0005558241 nova_compute[248510]: 2025-12-13 09:17:10.486 248514 DEBUG nova.network.neutron [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updated VIF entry in instance network info cache for port 66abeeb9-b4e2-4901-9437-be8cd001222f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:17:10 np0005558241 nova_compute[248510]: 2025-12-13 09:17:10.486 248514 DEBUG nova.network.neutron [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updating instance_info_cache with network_info: [{"id": "66abeeb9-b4e2-4901-9437-be8cd001222f", "address": "fa:16:3e:08:09:e2", "network": {"id": "d3fc9ec4-4452-4225-b100-75f3859e091a", "bridge": "br-int", "label": "tempest-network-smoke--676558098", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66abeeb9-b4", "ovs_interfaceid": "66abeeb9-b4e2-4901-9437-be8cd001222f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "address": "fa:16:3e:1a:e9:f0", "network": {"id": "9f757ab8-2a8f-4771-8492-2bcf521016bb", "bridge": "br-int", "label": "tempest-network-smoke--1350359736", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:e9f0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b47d34a-69", "ovs_interfaceid": "3b47d34a-6968-411d-8f9d-38a835c0fa77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:17:10 np0005558241 nova_compute[248510]: 2025-12-13 09:17:10.497 248514 DEBUG nova.compute.manager [req-e5f77e51-fe6d-4373-89a3-5ec40d4c7f1a req-6c1478e6-b055-473c-a445-bb9b14c017d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Detach interface failed, port_id=3b47d34a-6968-411d-8f9d-38a835c0fa77, reason: Instance 44149dbe-362b-4930-a63b-d04c9a3b3b4c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec 13 04:17:10 np0005558241 nova_compute[248510]: 2025-12-13 09:17:10.513 248514 DEBUG oslo_concurrency.lockutils [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-44149dbe-362b-4930-a63b-d04c9a3b3b4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:17:10 np0005558241 nova_compute[248510]: 2025-12-13 09:17:10.513 248514 DEBUG nova.compute.manager [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-unplugged-66abeeb9-b4e2-4901-9437-be8cd001222f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:17:10 np0005558241 nova_compute[248510]: 2025-12-13 09:17:10.514 248514 DEBUG oslo_concurrency.lockutils [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:10 np0005558241 nova_compute[248510]: 2025-12-13 09:17:10.514 248514 DEBUG oslo_concurrency.lockutils [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:10 np0005558241 nova_compute[248510]: 2025-12-13 09:17:10.514 248514 DEBUG oslo_concurrency.lockutils [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:10 np0005558241 nova_compute[248510]: 2025-12-13 09:17:10.514 248514 DEBUG nova.compute.manager [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] No waiting events found dispatching network-vif-unplugged-66abeeb9-b4e2-4901-9437-be8cd001222f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:17:10 np0005558241 nova_compute[248510]: 2025-12-13 09:17:10.515 248514 DEBUG nova.compute.manager [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-unplugged-66abeeb9-b4e2-4901-9437-be8cd001222f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:17:10 np0005558241 nova_compute[248510]: 2025-12-13 09:17:10.515 248514 DEBUG nova.compute.manager [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-plugged-66abeeb9-b4e2-4901-9437-be8cd001222f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:17:10 np0005558241 nova_compute[248510]: 2025-12-13 09:17:10.515 248514 DEBUG oslo_concurrency.lockutils [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:10 np0005558241 nova_compute[248510]: 2025-12-13 09:17:10.515 248514 DEBUG oslo_concurrency.lockutils [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:10 np0005558241 nova_compute[248510]: 2025-12-13 09:17:10.516 248514 DEBUG oslo_concurrency.lockutils [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:10 np0005558241 nova_compute[248510]: 2025-12-13 09:17:10.516 248514 DEBUG nova.compute.manager [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] No waiting events found dispatching network-vif-plugged-66abeeb9-b4e2-4901-9437-be8cd001222f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:17:10 np0005558241 nova_compute[248510]: 2025-12-13 09:17:10.516 248514 WARNING nova.compute.manager [req-d44df0b4-2b7d-4e8e-9f2e-5630733f309a req-33a4aa3e-7d51-49d2-b0c3-22e8b7288ab7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received unexpected event network-vif-plugged-66abeeb9-b4e2-4901-9437-be8cd001222f for instance with vm_state active and task_state deleting.#033[00m
Dec 13 04:17:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3482: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 55 op/s
Dec 13 04:17:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:17:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:17:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:17:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:17:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:17:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:17:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:17:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:17:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:17:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:17:11 np0005558241 nova_compute[248510]: 2025-12-13 09:17:11.166 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:11 np0005558241 nova_compute[248510]: 2025-12-13 09:17:11.322 248514 DEBUG nova.network.neutron [-] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:17:11 np0005558241 nova_compute[248510]: 2025-12-13 09:17:11.345 248514 INFO nova.compute.manager [-] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Took 2.99 seconds to deallocate network for instance.#033[00m
Dec 13 04:17:11 np0005558241 nova_compute[248510]: 2025-12-13 09:17:11.426 248514 DEBUG oslo_concurrency.lockutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:11 np0005558241 nova_compute[248510]: 2025-12-13 09:17:11.426 248514 DEBUG oslo_concurrency.lockutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:11 np0005558241 nova_compute[248510]: 2025-12-13 09:17:11.488 248514 DEBUG oslo_concurrency.processutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:17:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:17:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1068009244' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:17:12 np0005558241 nova_compute[248510]: 2025-12-13 09:17:12.057 248514 DEBUG oslo_concurrency.processutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:17:12 np0005558241 nova_compute[248510]: 2025-12-13 09:17:12.067 248514 DEBUG nova.compute.provider_tree [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:17:12 np0005558241 nova_compute[248510]: 2025-12-13 09:17:12.089 248514 DEBUG nova.scheduler.client.report [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:17:12 np0005558241 nova_compute[248510]: 2025-12-13 09:17:12.120 248514 DEBUG oslo_concurrency.lockutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:12 np0005558241 nova_compute[248510]: 2025-12-13 09:17:12.155 248514 INFO nova.scheduler.client.report [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 44149dbe-362b-4930-a63b-d04c9a3b3b4c#033[00m
Dec 13 04:17:12 np0005558241 nova_compute[248510]: 2025-12-13 09:17:12.255 248514 DEBUG oslo_concurrency.lockutils [None req-3df24c93-a31a-4c62-abd9-18bdd88ac98a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "44149dbe-362b-4930-a63b-d04c9a3b3b4c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:12 np0005558241 nova_compute[248510]: 2025-12-13 09:17:12.536 248514 DEBUG nova.compute.manager [req-b15739ed-1aad-4a82-86a6-260ba658af48 req-f7a32e79-0bb4-4d48-ae0b-13ea6be6d08d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Received event network-vif-deleted-66abeeb9-b4e2-4901-9437-be8cd001222f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:17:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3483: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 54 op/s
Dec 13 04:17:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:14 np0005558241 nova_compute[248510]: 2025-12-13 09:17:14.147 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:14 np0005558241 nova_compute[248510]: 2025-12-13 09:17:14.258 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617419.2561684, c6e4d841-78ee-4a00-87ca-a6c2d542a9b7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:17:14 np0005558241 nova_compute[248510]: 2025-12-13 09:17:14.258 248514 INFO nova.compute.manager [-] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:17:14 np0005558241 nova_compute[248510]: 2025-12-13 09:17:14.288 248514 DEBUG nova.compute.manager [None req-44bd7916-3c46-4f0b-bccb-cda8b78173ad - - - - - -] [instance: c6e4d841-78ee-4a00-87ca-a6c2d542a9b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:17:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3484: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 54 op/s
Dec 13 04:17:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:17:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1324666648' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:17:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:17:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1324666648' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:17:16 np0005558241 nova_compute[248510]: 2025-12-13 09:17:16.169 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3485: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 04:17:17 np0005558241 nova_compute[248510]: 2025-12-13 09:17:17.000 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:17 np0005558241 nova_compute[248510]: 2025-12-13 09:17:17.094 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3486: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 04:17:19 np0005558241 nova_compute[248510]: 2025-12-13 09:17:19.149 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3487: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 341 B/s wr, 22 op/s
Dec 13 04:17:21 np0005558241 nova_compute[248510]: 2025-12-13 09:17:21.115 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617426.114808, 44149dbe-362b-4930-a63b-d04c9a3b3b4c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:17:21 np0005558241 nova_compute[248510]: 2025-12-13 09:17:21.116 248514 INFO nova.compute.manager [-] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:17:21 np0005558241 nova_compute[248510]: 2025-12-13 09:17:21.135 248514 DEBUG nova.compute.manager [None req-bfd1b427-5ecb-442c-842c-8adade80865c - - - - - -] [instance: 44149dbe-362b-4930-a63b-d04c9a3b3b4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:17:21 np0005558241 nova_compute[248510]: 2025-12-13 09:17:21.174 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4732504539939352e-05 of space, bias 1.0, pg target 0.004419751361981806 quantized to 32 (current 32)
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697116252414539 of space, bias 1.0, pg target 0.20091348757243618 quantized to 32 (current 32)
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:17:21 np0005558241 nova_compute[248510]: 2025-12-13 09:17:21.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:17:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3488: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:17:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:24 np0005558241 nova_compute[248510]: 2025-12-13 09:17:24.152 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:24 np0005558241 nova_compute[248510]: 2025-12-13 09:17:24.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:17:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3489: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:17:26 np0005558241 nova_compute[248510]: 2025-12-13 09:17:26.177 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3490: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:17:27 np0005558241 nova_compute[248510]: 2025-12-13 09:17:27.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:17:27 np0005558241 nova_compute[248510]: 2025-12-13 09:17:27.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:17:27 np0005558241 nova_compute[248510]: 2025-12-13 09:17:27.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:17:27 np0005558241 nova_compute[248510]: 2025-12-13 09:17:27.883 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:17:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3491: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:17:29 np0005558241 nova_compute[248510]: 2025-12-13 09:17:29.153 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:29.635 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:17:29 np0005558241 nova_compute[248510]: 2025-12-13 09:17:29.636 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:29.636 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:17:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3492: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:17:31 np0005558241 nova_compute[248510]: 2025-12-13 09:17:31.226 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:31 np0005558241 nova_compute[248510]: 2025-12-13 09:17:31.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:17:32 np0005558241 nova_compute[248510]: 2025-12-13 09:17:32.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:17:32 np0005558241 nova_compute[248510]: 2025-12-13 09:17:32.811 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:32 np0005558241 nova_compute[248510]: 2025-12-13 09:17:32.812 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:32 np0005558241 nova_compute[248510]: 2025-12-13 09:17:32.812 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:32 np0005558241 nova_compute[248510]: 2025-12-13 09:17:32.812 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:17:32 np0005558241 nova_compute[248510]: 2025-12-13 09:17:32.812 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:17:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3493: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:17:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:33.016 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:3e:92 10.100.0.2 2001:db8::f816:3eff:fe95:3e92'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe95:3e92/64', 'neutron:device_id': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f555052b-c852-42e8-9f82-1988cdbda87d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=311a1d17-b8d0-425a-a0e2-260061ce5d5d) old=Port_Binding(mac=['fa:16:3e:95:3e:92 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:17:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:33.018 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 311a1d17-b8d0-425a-a0e2-260061ce5d5d in datapath e5815775-e5d0-4b72-a008-efd9f04c6ee4 updated#033[00m
Dec 13 04:17:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:33.021 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5815775-e5d0-4b72-a008-efd9f04c6ee4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:17:33 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:33.022 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ff960c10-5f08-4f26-9e7a-0e5208415f50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:17:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4268610521' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:17:33 np0005558241 nova_compute[248510]: 2025-12-13 09:17:33.345 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:17:33 np0005558241 nova_compute[248510]: 2025-12-13 09:17:33.534 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:17:33 np0005558241 nova_compute[248510]: 2025-12-13 09:17:33.536 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3468MB free_disk=59.98739747237414GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:17:33 np0005558241 nova_compute[248510]: 2025-12-13 09:17:33.537 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:33 np0005558241 nova_compute[248510]: 2025-12-13 09:17:33.537 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:33 np0005558241 nova_compute[248510]: 2025-12-13 09:17:33.660 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:17:33 np0005558241 nova_compute[248510]: 2025-12-13 09:17:33.661 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:17:33 np0005558241 nova_compute[248510]: 2025-12-13 09:17:33.682 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 13 04:17:33 np0005558241 nova_compute[248510]: 2025-12-13 09:17:33.713 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 13 04:17:33 np0005558241 nova_compute[248510]: 2025-12-13 09:17:33.714 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 04:17:33 np0005558241 nova_compute[248510]: 2025-12-13 09:17:33.735 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 13 04:17:33 np0005558241 nova_compute[248510]: 2025-12-13 09:17:33.771 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 13 04:17:33 np0005558241 nova_compute[248510]: 2025-12-13 09:17:33.793 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:17:34 np0005558241 nova_compute[248510]: 2025-12-13 09:17:34.155 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:17:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/55505373' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:17:34 np0005558241 nova_compute[248510]: 2025-12-13 09:17:34.431 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:17:34 np0005558241 nova_compute[248510]: 2025-12-13 09:17:34.438 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:17:34 np0005558241 nova_compute[248510]: 2025-12-13 09:17:34.457 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:17:34 np0005558241 nova_compute[248510]: 2025-12-13 09:17:34.482 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:17:34 np0005558241 nova_compute[248510]: 2025-12-13 09:17:34.483 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:34 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:34.639 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:17:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3494: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:17:36 np0005558241 nova_compute[248510]: 2025-12-13 09:17:36.229 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3495: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:17:36 np0005558241 podman[401136]: 2025-12-13 09:17:36.981670727 +0000 UTC m=+0.055377541 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 13 04:17:36 np0005558241 podman[401135]: 2025-12-13 09:17:36.989035471 +0000 UTC m=+0.068241272 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec 13 04:17:37 np0005558241 podman[401134]: 2025-12-13 09:17:37.022698229 +0000 UTC m=+0.104798962 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec 13 04:17:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:37.048 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:3e:92 10.100.0.2 2001:db8:0:1:f816:3eff:fe95:3e92 2001:db8::f816:3eff:fe95:3e92'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe95:3e92/64 2001:db8::f816:3eff:fe95:3e92/64', 'neutron:device_id': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f555052b-c852-42e8-9f82-1988cdbda87d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=311a1d17-b8d0-425a-a0e2-260061ce5d5d) old=Port_Binding(mac=['fa:16:3e:95:3e:92 10.100.0.2 2001:db8::f816:3eff:fe95:3e92'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe95:3e92/64', 'neutron:device_id': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:17:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:37.049 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 311a1d17-b8d0-425a-a0e2-260061ce5d5d in datapath e5815775-e5d0-4b72-a008-efd9f04c6ee4 updated#033[00m
Dec 13 04:17:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:37.051 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5815775-e5d0-4b72-a008-efd9f04c6ee4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:17:37 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:37.052 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0663c8a5-3f09-4e46-a107-d324ee8730bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:37 np0005558241 nova_compute[248510]: 2025-12-13 09:17:37.483 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:17:37 np0005558241 nova_compute[248510]: 2025-12-13 09:17:37.484 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:17:37 np0005558241 nova_compute[248510]: 2025-12-13 09:17:37.484 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:17:37 np0005558241 nova_compute[248510]: 2025-12-13 09:17:37.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:17:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3496: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:17:39 np0005558241 nova_compute[248510]: 2025-12-13 09:17:39.157 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:17:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:17:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:17:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:17:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:17:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:17:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3497: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:17:41 np0005558241 nova_compute[248510]: 2025-12-13 09:17:41.234 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:42 np0005558241 nova_compute[248510]: 2025-12-13 09:17:42.122 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7aadb3d0-64f2-4531-9896-93b087cdea5c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:42 np0005558241 nova_compute[248510]: 2025-12-13 09:17:42.122 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:42 np0005558241 nova_compute[248510]: 2025-12-13 09:17:42.142 248514 DEBUG nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:17:42 np0005558241 nova_compute[248510]: 2025-12-13 09:17:42.234 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:42 np0005558241 nova_compute[248510]: 2025-12-13 09:17:42.235 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:42 np0005558241 nova_compute[248510]: 2025-12-13 09:17:42.243 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:17:42 np0005558241 nova_compute[248510]: 2025-12-13 09:17:42.243 248514 INFO nova.compute.claims [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:17:42 np0005558241 nova_compute[248510]: 2025-12-13 09:17:42.357 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:17:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3498: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:17:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:17:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2738697607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.041 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.685s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.050 248514 DEBUG nova.compute.provider_tree [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.082 248514 DEBUG nova.scheduler.client.report [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.113 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.114 248514 DEBUG nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.164 248514 DEBUG nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.165 248514 DEBUG nova.network.neutron [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.193 248514 INFO nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.215 248514 DEBUG nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.324 248514 DEBUG nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.325 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.326 248514 INFO nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Creating image(s)#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.425 248514 DEBUG nova.storage.rbd_utils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.450 248514 DEBUG nova.storage.rbd_utils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.479 248514 DEBUG nova.storage.rbd_utils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.483 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.545 248514 DEBUG nova.policy [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.591 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.592 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.593 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.593 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.617 248514 DEBUG nova.storage.rbd_utils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:17:43 np0005558241 nova_compute[248510]: 2025-12-13 09:17:43.621 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:17:44 np0005558241 nova_compute[248510]: 2025-12-13 09:17:44.212 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:44 np0005558241 nova_compute[248510]: 2025-12-13 09:17:44.804 248514 DEBUG nova.network.neutron [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Successfully created port: 1271974a-de11-42b5-87b8-470c51840315 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:17:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3499: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 13 04:17:46 np0005558241 nova_compute[248510]: 2025-12-13 09:17:46.053 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:17:46 np0005558241 nova_compute[248510]: 2025-12-13 09:17:46.146 248514 DEBUG nova.storage.rbd_utils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:17:46 np0005558241 nova_compute[248510]: 2025-12-13 09:17:46.315 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:46 np0005558241 nova_compute[248510]: 2025-12-13 09:17:46.318 248514 DEBUG nova.network.neutron [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Successfully updated port: 1271974a-de11-42b5-87b8-470c51840315 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:17:46 np0005558241 nova_compute[248510]: 2025-12-13 09:17:46.342 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:17:46 np0005558241 nova_compute[248510]: 2025-12-13 09:17:46.343 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:17:46 np0005558241 nova_compute[248510]: 2025-12-13 09:17:46.343 248514 DEBUG nova.network.neutron [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:17:46 np0005558241 nova_compute[248510]: 2025-12-13 09:17:46.407 248514 DEBUG nova.compute.manager [req-2f0c6255-9c31-4076-8105-0ca7c2b30000 req-41536d76-3eaa-4215-9526-ebc3e97caef3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received event network-changed-1271974a-de11-42b5-87b8-470c51840315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:17:46 np0005558241 nova_compute[248510]: 2025-12-13 09:17:46.407 248514 DEBUG nova.compute.manager [req-2f0c6255-9c31-4076-8105-0ca7c2b30000 req-41536d76-3eaa-4215-9526-ebc3e97caef3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Refreshing instance network info cache due to event network-changed-1271974a-de11-42b5-87b8-470c51840315. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:17:46 np0005558241 nova_compute[248510]: 2025-12-13 09:17:46.407 248514 DEBUG oslo_concurrency.lockutils [req-2f0c6255-9c31-4076-8105-0ca7c2b30000 req-41536d76-3eaa-4215-9526-ebc3e97caef3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:17:46 np0005558241 nova_compute[248510]: 2025-12-13 09:17:46.468 248514 DEBUG nova.objects.instance [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 7aadb3d0-64f2-4531-9896-93b087cdea5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:17:46 np0005558241 nova_compute[248510]: 2025-12-13 09:17:46.487 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:17:46 np0005558241 nova_compute[248510]: 2025-12-13 09:17:46.488 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Ensure instance console log exists: /var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:17:46 np0005558241 nova_compute[248510]: 2025-12-13 09:17:46.488 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:46 np0005558241 nova_compute[248510]: 2025-12-13 09:17:46.489 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:46 np0005558241 nova_compute[248510]: 2025-12-13 09:17:46.489 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:46 np0005558241 nova_compute[248510]: 2025-12-13 09:17:46.519 248514 DEBUG nova.network.neutron [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:17:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3500: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 13 04:17:47 np0005558241 nova_compute[248510]: 2025-12-13 09:17:47.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:17:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.009 248514 DEBUG nova.network.neutron [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updating instance_info_cache with network_info: [{"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.032 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.034 248514 DEBUG nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Instance network_info: |[{"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.035 248514 DEBUG oslo_concurrency.lockutils [req-2f0c6255-9c31-4076-8105-0ca7c2b30000 req-41536d76-3eaa-4215-9526-ebc3e97caef3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.035 248514 DEBUG nova.network.neutron [req-2f0c6255-9c31-4076-8105-0ca7c2b30000 req-41536d76-3eaa-4215-9526-ebc3e97caef3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Refreshing network info cache for port 1271974a-de11-42b5-87b8-470c51840315 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.038 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Start _get_guest_xml network_info=[{"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.042 248514 WARNING nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.047 248514 DEBUG nova.virt.libvirt.host [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.048 248514 DEBUG nova.virt.libvirt.host [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.053 248514 DEBUG nova.virt.libvirt.host [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.054 248514 DEBUG nova.virt.libvirt.host [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.054 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.055 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.055 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.055 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.056 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.056 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.056 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.056 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.057 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.057 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.057 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.058 248514 DEBUG nova.virt.hardware [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.060 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:17:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:17:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2311718874' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.634 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.663 248514 DEBUG nova.storage.rbd_utils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:17:48 np0005558241 nova_compute[248510]: 2025-12-13 09:17:48.668 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:17:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3501: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.214 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:17:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3964636419' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.255 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.257 248514 DEBUG nova.virt.libvirt.vif [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:17:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1903797139',display_name='tempest-TestGettingAddress-server-1903797139',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1903797139',id=147,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKfVIjC+c2E48WoFz2Ud3i73TUL/pY6+s2/66GaU+lrLkwx0f9N1LcjeXJdOzvtjVb2jNhFM65SYiWkjK3RJ3eDcO6vcs74xDf/lQ4jF6SyHOGQ3syJuqY4MMzp7LfTqAw==',key_name='tempest-TestGettingAddress-444069759',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-zvq7q3d0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:17:43Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=7aadb3d0-64f2-4531-9896-93b087cdea5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.258 248514 DEBUG nova.network.os_vif_util [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.259 248514 DEBUG nova.network.os_vif_util [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:f6:8c,bridge_name='br-int',has_traffic_filtering=True,id=1271974a-de11-42b5-87b8-470c51840315,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1271974a-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.260 248514 DEBUG nova.objects.instance [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7aadb3d0-64f2-4531-9896-93b087cdea5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.280 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:17:49 np0005558241 nova_compute[248510]:  <uuid>7aadb3d0-64f2-4531-9896-93b087cdea5c</uuid>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:  <name>instance-00000093</name>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestGettingAddress-server-1903797139</nova:name>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:17:48</nova:creationTime>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:17:49 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:        <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:        <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:        <nova:port uuid="1271974a-de11-42b5-87b8-470c51840315">
Dec 13 04:17:49 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fed5:f68c" ipVersion="6"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fed5:f68c" ipVersion="6"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <entry name="serial">7aadb3d0-64f2-4531-9896-93b087cdea5c</entry>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <entry name="uuid">7aadb3d0-64f2-4531-9896-93b087cdea5c</entry>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/7aadb3d0-64f2-4531-9896-93b087cdea5c_disk">
Dec 13 04:17:49 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:17:49 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/7aadb3d0-64f2-4531-9896-93b087cdea5c_disk.config">
Dec 13 04:17:49 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:17:49 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:d5:f6:8c"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <target dev="tap1271974a-de"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c/console.log" append="off"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:17:49 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:17:49 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:17:49 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:17:49 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.282 248514 DEBUG nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Preparing to wait for external event network-vif-plugged-1271974a-de11-42b5-87b8-470c51840315 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.282 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.282 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.283 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.283 248514 DEBUG nova.virt.libvirt.vif [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:17:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1903797139',display_name='tempest-TestGettingAddress-server-1903797139',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1903797139',id=147,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKfVIjC+c2E48WoFz2Ud3i73TUL/pY6+s2/66GaU+lrLkwx0f9N1LcjeXJdOzvtjVb2jNhFM65SYiWkjK3RJ3eDcO6vcs74xDf/lQ4jF6SyHOGQ3syJuqY4MMzp7LfTqAw==',key_name='tempest-TestGettingAddress-444069759',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-zvq7q3d0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:17:43Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=7aadb3d0-64f2-4531-9896-93b087cdea5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.284 248514 DEBUG nova.network.os_vif_util [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.284 248514 DEBUG nova.network.os_vif_util [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:f6:8c,bridge_name='br-int',has_traffic_filtering=True,id=1271974a-de11-42b5-87b8-470c51840315,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1271974a-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.285 248514 DEBUG os_vif [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:f6:8c,bridge_name='br-int',has_traffic_filtering=True,id=1271974a-de11-42b5-87b8-470c51840315,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1271974a-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.285 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.286 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.286 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.290 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.291 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1271974a-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.292 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1271974a-de, col_values=(('external_ids', {'iface-id': '1271974a-de11-42b5-87b8-470c51840315', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:f6:8c', 'vm-uuid': '7aadb3d0-64f2-4531-9896-93b087cdea5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.294 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:49 np0005558241 NetworkManager[50376]: <info>  [1765617469.2959] manager: (tap1271974a-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/657)
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.297 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.301 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.302 248514 INFO os_vif [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:f6:8c,bridge_name='br-int',has_traffic_filtering=True,id=1271974a-de11-42b5-87b8-470c51840315,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1271974a-de')#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.572 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.573 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.573 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:d5:f6:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.575 248514 INFO nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Using config drive#033[00m
Dec 13 04:17:49 np0005558241 nova_compute[248510]: 2025-12-13 09:17:49.595 248514 DEBUG nova.storage.rbd_utils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:17:50 np0005558241 nova_compute[248510]: 2025-12-13 09:17:50.157 248514 INFO nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Creating config drive at /var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c/disk.config#033[00m
Dec 13 04:17:50 np0005558241 nova_compute[248510]: 2025-12-13 09:17:50.163 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmgp4ody4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:17:50 np0005558241 nova_compute[248510]: 2025-12-13 09:17:50.320 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmgp4ody4" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:17:50 np0005558241 nova_compute[248510]: 2025-12-13 09:17:50.352 248514 DEBUG nova.storage.rbd_utils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:17:50 np0005558241 nova_compute[248510]: 2025-12-13 09:17:50.358 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c/disk.config 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:17:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3502: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:17:51 np0005558241 nova_compute[248510]: 2025-12-13 09:17:51.397 248514 DEBUG oslo_concurrency.processutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c/disk.config 7aadb3d0-64f2-4531-9896-93b087cdea5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:17:51 np0005558241 nova_compute[248510]: 2025-12-13 09:17:51.399 248514 INFO nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Deleting local config drive /var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c/disk.config because it was imported into RBD.#033[00m
Dec 13 04:17:51 np0005558241 kernel: tap1271974a-de: entered promiscuous mode
Dec 13 04:17:51 np0005558241 NetworkManager[50376]: <info>  [1765617471.4566] manager: (tap1271974a-de): new Tun device (/org/freedesktop/NetworkManager/Devices/658)
Dec 13 04:17:51 np0005558241 nova_compute[248510]: 2025-12-13 09:17:51.458 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:51 np0005558241 ovn_controller[148476]: 2025-12-13T09:17:51Z|01590|binding|INFO|Claiming lport 1271974a-de11-42b5-87b8-470c51840315 for this chassis.
Dec 13 04:17:51 np0005558241 ovn_controller[148476]: 2025-12-13T09:17:51Z|01591|binding|INFO|1271974a-de11-42b5-87b8-470c51840315: Claiming fa:16:3e:d5:f6:8c 10.100.0.6 2001:db8:0:1:f816:3eff:fed5:f68c 2001:db8::f816:3eff:fed5:f68c
Dec 13 04:17:51 np0005558241 nova_compute[248510]: 2025-12-13 09:17:51.461 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:51 np0005558241 nova_compute[248510]: 2025-12-13 09:17:51.468 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:51 np0005558241 systemd-machined[210538]: New machine qemu-178-instance-00000093.
Dec 13 04:17:51 np0005558241 systemd-udevd[401522]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:17:51 np0005558241 systemd[1]: Started Virtual Machine qemu-178-instance-00000093.
Dec 13 04:17:51 np0005558241 nova_compute[248510]: 2025-12-13 09:17:51.528 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:51 np0005558241 ovn_controller[148476]: 2025-12-13T09:17:51Z|01592|binding|INFO|Setting lport 1271974a-de11-42b5-87b8-470c51840315 ovn-installed in OVS
Dec 13 04:17:51 np0005558241 nova_compute[248510]: 2025-12-13 09:17:51.535 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:51 np0005558241 NetworkManager[50376]: <info>  [1765617471.5433] device (tap1271974a-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:17:51 np0005558241 NetworkManager[50376]: <info>  [1765617471.5441] device (tap1271974a-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:17:51 np0005558241 ovn_controller[148476]: 2025-12-13T09:17:51Z|01593|binding|INFO|Setting lport 1271974a-de11-42b5-87b8-470c51840315 up in Southbound
Dec 13 04:17:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.661 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:f6:8c 10.100.0.6 2001:db8:0:1:f816:3eff:fed5:f68c 2001:db8::f816:3eff:fed5:f68c'], port_security=['fa:16:3e:d5:f6:8c 10.100.0.6 2001:db8:0:1:f816:3eff:fed5:f68c 2001:db8::f816:3eff:fed5:f68c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:fed5:f68c/64 2001:db8::f816:3eff:fed5:f68c/64', 'neutron:device_id': '7aadb3d0-64f2-4531-9896-93b087cdea5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '747095ec-05b6-4395-bc25-989b7630b3c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f555052b-c852-42e8-9f82-1988cdbda87d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1271974a-de11-42b5-87b8-470c51840315) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:17:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.662 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1271974a-de11-42b5-87b8-470c51840315 in datapath e5815775-e5d0-4b72-a008-efd9f04c6ee4 bound to our chassis#033[00m
Dec 13 04:17:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.664 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e5815775-e5d0-4b72-a008-efd9f04c6ee4#033[00m
Dec 13 04:17:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.677 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fe98fc33-dfb6-475b-bfd2-e8a757dd8e0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.678 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape5815775-e1 in ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:17:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.680 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape5815775-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:17:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.680 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7abd16b8-48c9-4413-8b53-4fe51344f6bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.681 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[7b04420f-303e-4a40-99ef-717a63e6efbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.706 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[c61e8dfc-e0c4-4336-8d65-505888902133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.742 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf7e39d-b983-4ae8-872b-799a64d55719]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.779 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf5f173-5a39-4e96-ab9c-578bd9afb9e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.788 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[01c2f587-5e71-4a88-95fb-11f70aa46c82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:51 np0005558241 NetworkManager[50376]: <info>  [1765617471.7903] manager: (tape5815775-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/659)
Dec 13 04:17:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.824 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9ebf83dc-a4e1-4db0-b3d6-2d4cd3e88c0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.828 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[529e5748-629c-409c-b86d-6a7b6fef19a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:51 np0005558241 NetworkManager[50376]: <info>  [1765617471.8550] device (tape5815775-e0): carrier: link connected
Dec 13 04:17:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.864 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[04652054-3a5b-45ec-8acd-ca7b4f413331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.892 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3b83472a-6cbe-4ef3-bf08-8395eb3aca54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5815775-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:3e:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1004907, 'reachable_time': 36718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401555, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.915 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[85de9519-15b4-4b00-9e88-4f5eedafa305]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:3e92'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1004907, 'tstamp': 1004907}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401556, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:51 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.936 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e53000f5-5bcf-4797-8c04-56b21bc176ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5815775-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:3e:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1004907, 'reachable_time': 36718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 401557, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:51 np0005558241 nova_compute[248510]: 2025-12-13 09:17:51.953 248514 DEBUG nova.network.neutron [req-2f0c6255-9c31-4076-8105-0ca7c2b30000 req-41536d76-3eaa-4215-9526-ebc3e97caef3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updated VIF entry in instance network info cache for port 1271974a-de11-42b5-87b8-470c51840315. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:17:51 np0005558241 nova_compute[248510]: 2025-12-13 09:17:51.954 248514 DEBUG nova.network.neutron [req-2f0c6255-9c31-4076-8105-0ca7c2b30000 req-41536d76-3eaa-4215-9526-ebc3e97caef3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updating instance_info_cache with network_info: [{"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:17:51 np0005558241 nova_compute[248510]: 2025-12-13 09:17:51.976 248514 DEBUG oslo_concurrency.lockutils [req-2f0c6255-9c31-4076-8105-0ca7c2b30000 req-41536d76-3eaa-4215-9526-ebc3e97caef3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:51.999 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[86a3b989-5d11-4724-afbd-dc8bbda4b0fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:52.088 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5e613d36-18ed-4636-954a-44286ab459e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:52.091 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5815775-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:52.092 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:52.093 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5815775-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:17:52 np0005558241 NetworkManager[50376]: <info>  [1765617472.0974] manager: (tape5815775-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/660)
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.096 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:52 np0005558241 kernel: tape5815775-e0: entered promiscuous mode
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.100 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:52.101 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape5815775-e0, col_values=(('external_ids', {'iface-id': '311a1d17-b8d0-425a-a0e2-260061ce5d5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.103 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:52 np0005558241 ovn_controller[148476]: 2025-12-13T09:17:52Z|01594|binding|INFO|Releasing lport 311a1d17-b8d0-425a-a0e2-260061ce5d5d from this chassis (sb_readonly=0)
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.130 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:52.131 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e5815775-e5d0-4b72-a008-efd9f04c6ee4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e5815775-e5d0-4b72-a008-efd9f04c6ee4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:52.133 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ca11b9-e363-411e-995e-df57f33e63fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:52.135 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-e5815775-e5d0-4b72-a008-efd9f04c6ee4
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/e5815775-e5d0-4b72-a008-efd9f04c6ee4.pid.haproxy
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID e5815775-e5d0-4b72-a008-efd9f04c6ee4
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:17:52 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:52.138 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'env', 'PROCESS_TAG=haproxy-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e5815775-e5d0-4b72-a008-efd9f04c6ee4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.377 248514 DEBUG nova.compute.manager [req-54b2b11b-527e-491a-bce5-14b789755b44 req-5f67564d-e831-4ee5-96c9-7c84104f4043 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received event network-vif-plugged-1271974a-de11-42b5-87b8-470c51840315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.378 248514 DEBUG oslo_concurrency.lockutils [req-54b2b11b-527e-491a-bce5-14b789755b44 req-5f67564d-e831-4ee5-96c9-7c84104f4043 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.378 248514 DEBUG oslo_concurrency.lockutils [req-54b2b11b-527e-491a-bce5-14b789755b44 req-5f67564d-e831-4ee5-96c9-7c84104f4043 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.378 248514 DEBUG oslo_concurrency.lockutils [req-54b2b11b-527e-491a-bce5-14b789755b44 req-5f67564d-e831-4ee5-96c9-7c84104f4043 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.379 248514 DEBUG nova.compute.manager [req-54b2b11b-527e-491a-bce5-14b789755b44 req-5f67564d-e831-4ee5-96c9-7c84104f4043 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Processing event network-vif-plugged-1271974a-de11-42b5-87b8-470c51840315 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:17:52 np0005558241 podman[401607]: 2025-12-13 09:17:52.567833947 +0000 UTC m=+0.029465405 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.671 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617472.6704443, 7aadb3d0-64f2-4531-9896-93b087cdea5c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.671 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] VM Started (Lifecycle Event)#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.675 248514 DEBUG nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.679 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.683 248514 INFO nova.virt.libvirt.driver [-] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Instance spawned successfully.#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.684 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.693 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.697 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.720 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.721 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617472.6705973, 7aadb3d0-64f2-4531-9896-93b087cdea5c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.721 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.728 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.728 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.729 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.730 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.731 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.731 248514 DEBUG nova.virt.libvirt.driver [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.810 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.815 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617472.6790617, 7aadb3d0-64f2-4531-9896-93b087cdea5c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:17:52 np0005558241 nova_compute[248510]: 2025-12-13 09:17:52.815 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:17:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3503: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:17:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:53 np0005558241 nova_compute[248510]: 2025-12-13 09:17:53.002 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:17:53 np0005558241 nova_compute[248510]: 2025-12-13 09:17:53.007 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:17:53 np0005558241 nova_compute[248510]: 2025-12-13 09:17:53.016 248514 INFO nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Took 9.69 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:17:53 np0005558241 nova_compute[248510]: 2025-12-13 09:17:53.017 248514 DEBUG nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:17:53 np0005558241 nova_compute[248510]: 2025-12-13 09:17:53.061 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:17:53 np0005558241 podman[401607]: 2025-12-13 09:17:53.077926577 +0000 UTC m=+0.539558025 container create 5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 13 04:17:53 np0005558241 systemd[1]: Started libpod-conmon-5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763.scope.
Dec 13 04:17:53 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:17:53 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cab876830acab31fce5dbc2e972338fc2e9949b13d6a5f72f360a2dbded2beb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:53 np0005558241 podman[401607]: 2025-12-13 09:17:53.557608189 +0000 UTC m=+1.019239617 container init 5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 04:17:53 np0005558241 podman[401607]: 2025-12-13 09:17:53.565004464 +0000 UTC m=+1.026635892 container start 5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 04:17:53 np0005558241 nova_compute[248510]: 2025-12-13 09:17:53.584 248514 INFO nova.compute.manager [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Took 11.38 seconds to build instance.#033[00m
Dec 13 04:17:53 np0005558241 neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4[401646]: [NOTICE]   (401650) : New worker (401652) forked
Dec 13 04:17:53 np0005558241 neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4[401646]: [NOTICE]   (401650) : Loading success.
Dec 13 04:17:53 np0005558241 nova_compute[248510]: 2025-12-13 09:17:53.623 248514 DEBUG oslo_concurrency.lockutils [None req-4db3f62c-ac64-4fc0-836e-1bc5887ee3e3 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.500s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:54 np0005558241 nova_compute[248510]: 2025-12-13 09:17:54.216 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:54 np0005558241 nova_compute[248510]: 2025-12-13 09:17:54.345 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:54 np0005558241 nova_compute[248510]: 2025-12-13 09:17:54.462 248514 DEBUG nova.compute.manager [req-451a75ef-806c-428b-8582-a5e9651b3ca2 req-2add6f74-36c2-4500-910a-4029ec3c9dfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received event network-vif-plugged-1271974a-de11-42b5-87b8-470c51840315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:17:54 np0005558241 nova_compute[248510]: 2025-12-13 09:17:54.463 248514 DEBUG oslo_concurrency.lockutils [req-451a75ef-806c-428b-8582-a5e9651b3ca2 req-2add6f74-36c2-4500-910a-4029ec3c9dfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:54 np0005558241 nova_compute[248510]: 2025-12-13 09:17:54.464 248514 DEBUG oslo_concurrency.lockutils [req-451a75ef-806c-428b-8582-a5e9651b3ca2 req-2add6f74-36c2-4500-910a-4029ec3c9dfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:54 np0005558241 nova_compute[248510]: 2025-12-13 09:17:54.464 248514 DEBUG oslo_concurrency.lockutils [req-451a75ef-806c-428b-8582-a5e9651b3ca2 req-2add6f74-36c2-4500-910a-4029ec3c9dfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:54 np0005558241 nova_compute[248510]: 2025-12-13 09:17:54.465 248514 DEBUG nova.compute.manager [req-451a75ef-806c-428b-8582-a5e9651b3ca2 req-2add6f74-36c2-4500-910a-4029ec3c9dfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] No waiting events found dispatching network-vif-plugged-1271974a-de11-42b5-87b8-470c51840315 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:17:54 np0005558241 nova_compute[248510]: 2025-12-13 09:17:54.465 248514 WARNING nova.compute.manager [req-451a75ef-806c-428b-8582-a5e9651b3ca2 req-2add6f74-36c2-4500-910a-4029ec3c9dfc 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received unexpected event network-vif-plugged-1271974a-de11-42b5-87b8-470c51840315 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:17:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3504: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 428 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Dec 13 04:17:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:55.454 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:17:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:55.455 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:17:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:17:55.456 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:17:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3505: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 428 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Dec 13 04:17:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:58 np0005558241 ovn_controller[148476]: 2025-12-13T09:17:58Z|01595|binding|INFO|Releasing lport 311a1d17-b8d0-425a-a0e2-260061ce5d5d from this chassis (sb_readonly=0)
Dec 13 04:17:58 np0005558241 nova_compute[248510]: 2025-12-13 09:17:58.809 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:58 np0005558241 NetworkManager[50376]: <info>  [1765617478.8147] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/661)
Dec 13 04:17:58 np0005558241 NetworkManager[50376]: <info>  [1765617478.8164] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/662)
Dec 13 04:17:58 np0005558241 ovn_controller[148476]: 2025-12-13T09:17:58Z|01596|binding|INFO|Releasing lport 311a1d17-b8d0-425a-a0e2-260061ce5d5d from this chassis (sb_readonly=0)
Dec 13 04:17:58 np0005558241 nova_compute[248510]: 2025-12-13 09:17:58.846 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:58 np0005558241 nova_compute[248510]: 2025-12-13 09:17:58.855 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3506: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 13 04:17:59 np0005558241 nova_compute[248510]: 2025-12-13 09:17:59.220 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:59 np0005558241 nova_compute[248510]: 2025-12-13 09:17:59.347 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:17:59 np0005558241 nova_compute[248510]: 2025-12-13 09:17:59.695 248514 DEBUG nova.compute.manager [req-ccba7cfc-8bf9-4617-8812-cae666e694df req-74f44f0a-abb4-4a68-ba7a-6fb8d81bbb35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received event network-changed-1271974a-de11-42b5-87b8-470c51840315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:17:59 np0005558241 nova_compute[248510]: 2025-12-13 09:17:59.697 248514 DEBUG nova.compute.manager [req-ccba7cfc-8bf9-4617-8812-cae666e694df req-74f44f0a-abb4-4a68-ba7a-6fb8d81bbb35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Refreshing instance network info cache due to event network-changed-1271974a-de11-42b5-87b8-470c51840315. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:17:59 np0005558241 nova_compute[248510]: 2025-12-13 09:17:59.698 248514 DEBUG oslo_concurrency.lockutils [req-ccba7cfc-8bf9-4617-8812-cae666e694df req-74f44f0a-abb4-4a68-ba7a-6fb8d81bbb35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:17:59 np0005558241 nova_compute[248510]: 2025-12-13 09:17:59.698 248514 DEBUG oslo_concurrency.lockutils [req-ccba7cfc-8bf9-4617-8812-cae666e694df req-74f44f0a-abb4-4a68-ba7a-6fb8d81bbb35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:17:59 np0005558241 nova_compute[248510]: 2025-12-13 09:17:59.699 248514 DEBUG nova.network.neutron [req-ccba7cfc-8bf9-4617-8812-cae666e694df req-74f44f0a-abb4-4a68-ba7a-6fb8d81bbb35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Refreshing network info cache for port 1271974a-de11-42b5-87b8-470c51840315 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:18:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3507: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:18:02 np0005558241 nova_compute[248510]: 2025-12-13 09:18:02.470 248514 DEBUG nova.network.neutron [req-ccba7cfc-8bf9-4617-8812-cae666e694df req-74f44f0a-abb4-4a68-ba7a-6fb8d81bbb35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updated VIF entry in instance network info cache for port 1271974a-de11-42b5-87b8-470c51840315. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:18:02 np0005558241 nova_compute[248510]: 2025-12-13 09:18:02.471 248514 DEBUG nova.network.neutron [req-ccba7cfc-8bf9-4617-8812-cae666e694df req-74f44f0a-abb4-4a68-ba7a-6fb8d81bbb35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updating instance_info_cache with network_info: [{"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:18:02 np0005558241 nova_compute[248510]: 2025-12-13 09:18:02.514 248514 DEBUG oslo_concurrency.lockutils [req-ccba7cfc-8bf9-4617-8812-cae666e694df req-74f44f0a-abb4-4a68-ba7a-6fb8d81bbb35 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:18:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3508: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:18:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:04 np0005558241 nova_compute[248510]: 2025-12-13 09:18:04.224 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:04 np0005558241 nova_compute[248510]: 2025-12-13 09:18:04.350 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 04:18:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 04:18:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:18:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:18:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:18:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:18:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:18:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:18:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:18:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:18:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:18:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:18:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:18:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:18:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3509: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:18:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 04:18:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:18:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:18:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:18:05 np0005558241 podman[401802]: 2025-12-13 09:18:05.13124471 +0000 UTC m=+0.044099790 container create ec315071640a12820bd27f3202384e957409515db5e0f89193acc55016bf3a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_gould, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 04:18:05 np0005558241 systemd[1]: Started libpod-conmon-ec315071640a12820bd27f3202384e957409515db5e0f89193acc55016bf3a7e.scope.
Dec 13 04:18:05 np0005558241 podman[401802]: 2025-12-13 09:18:05.109720324 +0000 UTC m=+0.022575444 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:18:05 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:18:05 np0005558241 podman[401802]: 2025-12-13 09:18:05.231784095 +0000 UTC m=+0.144639215 container init ec315071640a12820bd27f3202384e957409515db5e0f89193acc55016bf3a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:18:05 np0005558241 podman[401802]: 2025-12-13 09:18:05.24120374 +0000 UTC m=+0.154058830 container start ec315071640a12820bd27f3202384e957409515db5e0f89193acc55016bf3a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_gould, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 04:18:05 np0005558241 podman[401802]: 2025-12-13 09:18:05.245158819 +0000 UTC m=+0.158013939 container attach ec315071640a12820bd27f3202384e957409515db5e0f89193acc55016bf3a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:18:05 np0005558241 hungry_gould[401819]: 167 167
Dec 13 04:18:05 np0005558241 systemd[1]: libpod-ec315071640a12820bd27f3202384e957409515db5e0f89193acc55016bf3a7e.scope: Deactivated successfully.
Dec 13 04:18:05 np0005558241 podman[401802]: 2025-12-13 09:18:05.249303732 +0000 UTC m=+0.162158842 container died ec315071640a12820bd27f3202384e957409515db5e0f89193acc55016bf3a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:18:05 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f361aa35f1edb18d587d2e7898f6cc3951fd7a67a0e7bcc48218740df0cbff22-merged.mount: Deactivated successfully.
Dec 13 04:18:05 np0005558241 podman[401802]: 2025-12-13 09:18:05.313477621 +0000 UTC m=+0.226332711 container remove ec315071640a12820bd27f3202384e957409515db5e0f89193acc55016bf3a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:18:05 np0005558241 systemd[1]: libpod-conmon-ec315071640a12820bd27f3202384e957409515db5e0f89193acc55016bf3a7e.scope: Deactivated successfully.
Dec 13 04:18:05 np0005558241 podman[401843]: 2025-12-13 09:18:05.524663743 +0000 UTC m=+0.046337896 container create 8cab83c1797ee8a6e5e2979c1ae9564db1025a449fc6912e9d3e125186cbecb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_beaver, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 04:18:05 np0005558241 systemd[1]: Started libpod-conmon-8cab83c1797ee8a6e5e2979c1ae9564db1025a449fc6912e9d3e125186cbecb2.scope.
Dec 13 04:18:05 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:18:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4eec641f2d5e9cba6f7d39f9c1946ce6285ad70f9db18c14504af3420ab27a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4eec641f2d5e9cba6f7d39f9c1946ce6285ad70f9db18c14504af3420ab27a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4eec641f2d5e9cba6f7d39f9c1946ce6285ad70f9db18c14504af3420ab27a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4eec641f2d5e9cba6f7d39f9c1946ce6285ad70f9db18c14504af3420ab27a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4eec641f2d5e9cba6f7d39f9c1946ce6285ad70f9db18c14504af3420ab27a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:05 np0005558241 podman[401843]: 2025-12-13 09:18:05.508815338 +0000 UTC m=+0.030489521 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:18:05 np0005558241 podman[401843]: 2025-12-13 09:18:05.608918882 +0000 UTC m=+0.130593055 container init 8cab83c1797ee8a6e5e2979c1ae9564db1025a449fc6912e9d3e125186cbecb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_beaver, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:18:05 np0005558241 podman[401843]: 2025-12-13 09:18:05.614529882 +0000 UTC m=+0.136204035 container start 8cab83c1797ee8a6e5e2979c1ae9564db1025a449fc6912e9d3e125186cbecb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:18:05 np0005558241 podman[401843]: 2025-12-13 09:18:05.61845302 +0000 UTC m=+0.140127213 container attach 8cab83c1797ee8a6e5e2979c1ae9564db1025a449fc6912e9d3e125186cbecb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_beaver, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Dec 13 04:18:05 np0005558241 ovn_controller[148476]: 2025-12-13T09:18:05Z|00205|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:f6:8c 10.100.0.6
Dec 13 04:18:05 np0005558241 ovn_controller[148476]: 2025-12-13T09:18:05Z|00206|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:f6:8c 10.100.0.6
Dec 13 04:18:06 np0005558241 musing_beaver[401859]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:18:06 np0005558241 musing_beaver[401859]: --> All data devices are unavailable
Dec 13 04:18:06 np0005558241 systemd[1]: libpod-8cab83c1797ee8a6e5e2979c1ae9564db1025a449fc6912e9d3e125186cbecb2.scope: Deactivated successfully.
Dec 13 04:18:06 np0005558241 podman[401843]: 2025-12-13 09:18:06.118287234 +0000 UTC m=+0.639961417 container died 8cab83c1797ee8a6e5e2979c1ae9564db1025a449fc6912e9d3e125186cbecb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_beaver, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 04:18:06 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0d4eec641f2d5e9cba6f7d39f9c1946ce6285ad70f9db18c14504af3420ab27a-merged.mount: Deactivated successfully.
Dec 13 04:18:06 np0005558241 podman[401843]: 2025-12-13 09:18:06.181718635 +0000 UTC m=+0.703392828 container remove 8cab83c1797ee8a6e5e2979c1ae9564db1025a449fc6912e9d3e125186cbecb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 04:18:06 np0005558241 systemd[1]: libpod-conmon-8cab83c1797ee8a6e5e2979c1ae9564db1025a449fc6912e9d3e125186cbecb2.scope: Deactivated successfully.
Dec 13 04:18:06 np0005558241 podman[401953]: 2025-12-13 09:18:06.71934525 +0000 UTC m=+0.052403956 container create c87adc2d277be53ea0804ec0b7df2b741143a769129f744a10f3df2c5e61bebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 04:18:06 np0005558241 systemd[1]: Started libpod-conmon-c87adc2d277be53ea0804ec0b7df2b741143a769129f744a10f3df2c5e61bebe.scope.
Dec 13 04:18:06 np0005558241 podman[401953]: 2025-12-13 09:18:06.699234399 +0000 UTC m=+0.032293135 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:18:06 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:18:06 np0005558241 podman[401953]: 2025-12-13 09:18:06.814145262 +0000 UTC m=+0.147203968 container init c87adc2d277be53ea0804ec0b7df2b741143a769129f744a10f3df2c5e61bebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:18:06 np0005558241 podman[401953]: 2025-12-13 09:18:06.822041149 +0000 UTC m=+0.155099845 container start c87adc2d277be53ea0804ec0b7df2b741143a769129f744a10f3df2c5e61bebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 04:18:06 np0005558241 podman[401953]: 2025-12-13 09:18:06.8249049 +0000 UTC m=+0.157963686 container attach c87adc2d277be53ea0804ec0b7df2b741143a769129f744a10f3df2c5e61bebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:18:06 np0005558241 zealous_wing[401969]: 167 167
Dec 13 04:18:06 np0005558241 systemd[1]: libpod-c87adc2d277be53ea0804ec0b7df2b741143a769129f744a10f3df2c5e61bebe.scope: Deactivated successfully.
Dec 13 04:18:06 np0005558241 podman[401953]: 2025-12-13 09:18:06.828862809 +0000 UTC m=+0.161921505 container died c87adc2d277be53ea0804ec0b7df2b741143a769129f744a10f3df2c5e61bebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 04:18:06 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e1262a75e45ef039ec507822985d5528fcf7be03208f895049da71bc72467a3f-merged.mount: Deactivated successfully.
Dec 13 04:18:06 np0005558241 podman[401953]: 2025-12-13 09:18:06.872828875 +0000 UTC m=+0.205887611 container remove c87adc2d277be53ea0804ec0b7df2b741143a769129f744a10f3df2c5e61bebe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wing, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:18:06 np0005558241 systemd[1]: libpod-conmon-c87adc2d277be53ea0804ec0b7df2b741143a769129f744a10f3df2c5e61bebe.scope: Deactivated successfully.
Dec 13 04:18:06 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3510: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 51 op/s
Dec 13 04:18:07 np0005558241 podman[401992]: 2025-12-13 09:18:07.100855376 +0000 UTC m=+0.065262097 container create dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_newton, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 04:18:07 np0005558241 systemd[1]: Started libpod-conmon-dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65.scope.
Dec 13 04:18:07 np0005558241 podman[401992]: 2025-12-13 09:18:07.071394412 +0000 UTC m=+0.035801203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:18:07 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:18:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c168b3edd34b1d2bb22895e137d39247a199d6c62d15fe94954d2ae54f630b37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c168b3edd34b1d2bb22895e137d39247a199d6c62d15fe94954d2ae54f630b37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c168b3edd34b1d2bb22895e137d39247a199d6c62d15fe94954d2ae54f630b37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c168b3edd34b1d2bb22895e137d39247a199d6c62d15fe94954d2ae54f630b37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:07 np0005558241 podman[401992]: 2025-12-13 09:18:07.21859095 +0000 UTC m=+0.182997691 container init dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_newton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:18:07 np0005558241 podman[401992]: 2025-12-13 09:18:07.226952438 +0000 UTC m=+0.191359189 container start dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_newton, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:18:07 np0005558241 podman[401992]: 2025-12-13 09:18:07.231476521 +0000 UTC m=+0.195883282 container attach dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 04:18:07 np0005558241 podman[402010]: 2025-12-13 09:18:07.253867469 +0000 UTC m=+0.091044570 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:18:07 np0005558241 podman[402011]: 2025-12-13 09:18:07.286754608 +0000 UTC m=+0.111948220 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 04:18:07 np0005558241 podman[402006]: 2025-12-13 09:18:07.300022269 +0000 UTC m=+0.138228635 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 04:18:07 np0005558241 recursing_newton[402022]: {
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:    "0": [
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:        {
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "devices": [
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "/dev/loop3"
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            ],
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "lv_name": "ceph_lv0",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "lv_size": "21470642176",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "name": "ceph_lv0",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "tags": {
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.cluster_name": "ceph",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.crush_device_class": "",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.encrypted": "0",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.objectstore": "bluestore",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.osd_id": "0",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.type": "block",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.vdo": "0",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.with_tpm": "0"
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            },
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "type": "block",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "vg_name": "ceph_vg0"
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:        }
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:    ],
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:    "1": [
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:        {
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "devices": [
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "/dev/loop4"
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            ],
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "lv_name": "ceph_lv1",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "lv_size": "21470642176",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "name": "ceph_lv1",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "tags": {
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.cluster_name": "ceph",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.crush_device_class": "",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.encrypted": "0",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.objectstore": "bluestore",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.osd_id": "1",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.type": "block",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.vdo": "0",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.with_tpm": "0"
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            },
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "type": "block",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "vg_name": "ceph_vg1"
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:        }
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:    ],
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:    "2": [
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:        {
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "devices": [
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "/dev/loop5"
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            ],
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "lv_name": "ceph_lv2",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "lv_size": "21470642176",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "name": "ceph_lv2",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "tags": {
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.cluster_name": "ceph",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.crush_device_class": "",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.encrypted": "0",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.objectstore": "bluestore",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.osd_id": "2",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.type": "block",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.vdo": "0",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:                "ceph.with_tpm": "0"
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            },
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "type": "block",
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:            "vg_name": "ceph_vg2"
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:        }
Dec 13 04:18:07 np0005558241 recursing_newton[402022]:    ]
Dec 13 04:18:07 np0005558241 recursing_newton[402022]: }
Dec 13 04:18:07 np0005558241 systemd[1]: libpod-dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65.scope: Deactivated successfully.
Dec 13 04:18:07 np0005558241 conmon[402022]: conmon dd25b3b20459392efed4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65.scope/container/memory.events
Dec 13 04:18:07 np0005558241 podman[401992]: 2025-12-13 09:18:07.555852654 +0000 UTC m=+0.520259405 container died dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:18:07 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c168b3edd34b1d2bb22895e137d39247a199d6c62d15fe94954d2ae54f630b37-merged.mount: Deactivated successfully.
Dec 13 04:18:07 np0005558241 podman[401992]: 2025-12-13 09:18:07.623975131 +0000 UTC m=+0.588381832 container remove dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_newton, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 04:18:07 np0005558241 systemd[1]: libpod-conmon-dd25b3b20459392efed468b59dd1f60d1ee6ed641836ac633e8e9e5ef2b46b65.scope: Deactivated successfully.
Dec 13 04:18:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:08 np0005558241 podman[402153]: 2025-12-13 09:18:08.189605135 +0000 UTC m=+0.045047803 container create 03d1a6f02c379a8e2bcbd263edd32b85a7e3af46cf685ea6cf976d156abf305b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 13 04:18:08 np0005558241 systemd[1]: Started libpod-conmon-03d1a6f02c379a8e2bcbd263edd32b85a7e3af46cf685ea6cf976d156abf305b.scope.
Dec 13 04:18:08 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:18:08 np0005558241 podman[402153]: 2025-12-13 09:18:08.168462328 +0000 UTC m=+0.023904986 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:18:08 np0005558241 podman[402153]: 2025-12-13 09:18:08.281529066 +0000 UTC m=+0.136971744 container init 03d1a6f02c379a8e2bcbd263edd32b85a7e3af46cf685ea6cf976d156abf305b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 04:18:08 np0005558241 podman[402153]: 2025-12-13 09:18:08.289478384 +0000 UTC m=+0.144921022 container start 03d1a6f02c379a8e2bcbd263edd32b85a7e3af46cf685ea6cf976d156abf305b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 04:18:08 np0005558241 podman[402153]: 2025-12-13 09:18:08.292986551 +0000 UTC m=+0.148429209 container attach 03d1a6f02c379a8e2bcbd263edd32b85a7e3af46cf685ea6cf976d156abf305b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lovelace, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:18:08 np0005558241 vigorous_lovelace[402170]: 167 167
Dec 13 04:18:08 np0005558241 systemd[1]: libpod-03d1a6f02c379a8e2bcbd263edd32b85a7e3af46cf685ea6cf976d156abf305b.scope: Deactivated successfully.
Dec 13 04:18:08 np0005558241 podman[402153]: 2025-12-13 09:18:08.298845677 +0000 UTC m=+0.154288315 container died 03d1a6f02c379a8e2bcbd263edd32b85a7e3af46cf685ea6cf976d156abf305b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lovelace, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:18:08 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e1fbbc82bde121c026aaa9bd85ced7f8dc58de0daad75e7ebcfbb920c4925748-merged.mount: Deactivated successfully.
Dec 13 04:18:08 np0005558241 podman[402153]: 2025-12-13 09:18:08.341385997 +0000 UTC m=+0.196828645 container remove 03d1a6f02c379a8e2bcbd263edd32b85a7e3af46cf685ea6cf976d156abf305b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lovelace, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:18:08 np0005558241 systemd[1]: libpod-conmon-03d1a6f02c379a8e2bcbd263edd32b85a7e3af46cf685ea6cf976d156abf305b.scope: Deactivated successfully.
Dec 13 04:18:08 np0005558241 podman[402192]: 2025-12-13 09:18:08.533863663 +0000 UTC m=+0.043371562 container create 5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_napier, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:18:08 np0005558241 systemd[1]: Started libpod-conmon-5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da.scope.
Dec 13 04:18:08 np0005558241 podman[402192]: 2025-12-13 09:18:08.514332286 +0000 UTC m=+0.023840195 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:18:08 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:18:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2867ee08693af5e3779c34bc4194809944b77b29757bbb414bc3a6f73d507f07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2867ee08693af5e3779c34bc4194809944b77b29757bbb414bc3a6f73d507f07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2867ee08693af5e3779c34bc4194809944b77b29757bbb414bc3a6f73d507f07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:08 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2867ee08693af5e3779c34bc4194809944b77b29757bbb414bc3a6f73d507f07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:08 np0005558241 podman[402192]: 2025-12-13 09:18:08.636938451 +0000 UTC m=+0.146446370 container init 5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Dec 13 04:18:08 np0005558241 podman[402192]: 2025-12-13 09:18:08.646141531 +0000 UTC m=+0.155649430 container start 5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:18:08 np0005558241 podman[402192]: 2025-12-13 09:18:08.649284669 +0000 UTC m=+0.158792568 container attach 5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_napier, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 04:18:08 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3511: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Dec 13 04:18:09 np0005558241 nova_compute[248510]: 2025-12-13 09:18:09.227 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:09 np0005558241 nova_compute[248510]: 2025-12-13 09:18:09.352 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:09 np0005558241 lvm[402288]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:18:09 np0005558241 lvm[402288]: VG ceph_vg2 finished
Dec 13 04:18:09 np0005558241 lvm[402289]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:18:09 np0005558241 lvm[402289]: VG ceph_vg1 finished
Dec 13 04:18:09 np0005558241 lvm[402285]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:18:09 np0005558241 lvm[402285]: VG ceph_vg0 finished
Dec 13 04:18:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:18:09
Dec 13 04:18:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:18:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:18:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'backups', '.mgr', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.meta']
Dec 13 04:18:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:18:09 np0005558241 admiring_napier[402208]: {}
Dec 13 04:18:09 np0005558241 systemd[1]: libpod-5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da.scope: Deactivated successfully.
Dec 13 04:18:09 np0005558241 systemd[1]: libpod-5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da.scope: Consumed 1.516s CPU time.
Dec 13 04:18:09 np0005558241 podman[402192]: 2025-12-13 09:18:09.535238045 +0000 UTC m=+1.044745964 container died 5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_napier, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 04:18:09 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2867ee08693af5e3779c34bc4194809944b77b29757bbb414bc3a6f73d507f07-merged.mount: Deactivated successfully.
Dec 13 04:18:09 np0005558241 podman[402192]: 2025-12-13 09:18:09.587455436 +0000 UTC m=+1.096963335 container remove 5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:18:09 np0005558241 systemd[1]: libpod-conmon-5913c5488b3785818b010b238a6f7c6f6a58ac49dac97e41bed1efd689b6d1da.scope: Deactivated successfully.
Dec 13 04:18:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:18:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:18:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:18:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:18:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:18:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:18:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:18:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:18:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:18:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:18:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:18:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:18:10 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3512: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 04:18:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:18:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:18:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:18:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:18:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:18:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:18:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:18:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:18:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:18:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:18:12 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3513: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 04:18:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:14 np0005558241 nova_compute[248510]: 2025-12-13 09:18:14.229 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:14 np0005558241 nova_compute[248510]: 2025-12-13 09:18:14.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:14 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3514: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 04:18:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:18:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1696508029' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:18:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:18:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1696508029' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:18:16 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3515: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 04:18:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:18 np0005558241 nova_compute[248510]: 2025-12-13 09:18:18.419 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:18:18 np0005558241 nova_compute[248510]: 2025-12-13 09:18:18.420 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:18:18 np0005558241 nova_compute[248510]: 2025-12-13 09:18:18.445 248514 DEBUG nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:18:18 np0005558241 nova_compute[248510]: 2025-12-13 09:18:18.527 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:18:18 np0005558241 nova_compute[248510]: 2025-12-13 09:18:18.528 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:18:18 np0005558241 nova_compute[248510]: 2025-12-13 09:18:18.543 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:18:18 np0005558241 nova_compute[248510]: 2025-12-13 09:18:18.544 248514 INFO nova.compute.claims [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:18:18 np0005558241 nova_compute[248510]: 2025-12-13 09:18:18.711 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:18:18 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3516: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.232 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:18:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1236208476' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.345 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.354 248514 DEBUG nova.compute.provider_tree [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.358 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.378 248514 DEBUG nova.scheduler.client.report [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.408 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.410 248514 DEBUG nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.457 248514 DEBUG nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.458 248514 DEBUG nova.network.neutron [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.476 248514 INFO nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.493 248514 DEBUG nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.597 248514 DEBUG nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.599 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.599 248514 INFO nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Creating image(s)#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.625 248514 DEBUG nova.storage.rbd_utils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.655 248514 DEBUG nova.storage.rbd_utils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.683 248514 DEBUG nova.storage.rbd_utils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.688 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.741 248514 DEBUG nova.policy [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.787 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.788 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.789 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.789 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.820 248514 DEBUG nova.storage.rbd_utils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:18:19 np0005558241 nova_compute[248510]: 2025-12-13 09:18:19.825 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:18:20 np0005558241 nova_compute[248510]: 2025-12-13 09:18:20.191 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:18:20 np0005558241 nova_compute[248510]: 2025-12-13 09:18:20.299 248514 DEBUG nova.storage.rbd_utils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:18:20 np0005558241 nova_compute[248510]: 2025-12-13 09:18:20.384 248514 DEBUG nova.objects.instance [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid db3ef8e8-1a8a-42cc-a5ed-d3e401098f07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:18:20 np0005558241 nova_compute[248510]: 2025-12-13 09:18:20.404 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:18:20 np0005558241 nova_compute[248510]: 2025-12-13 09:18:20.405 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Ensure instance console log exists: /var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:18:20 np0005558241 nova_compute[248510]: 2025-12-13 09:18:20.406 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:18:20 np0005558241 nova_compute[248510]: 2025-12-13 09:18:20.406 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:18:20 np0005558241 nova_compute[248510]: 2025-12-13 09:18:20.406 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:18:20 np0005558241 nova_compute[248510]: 2025-12-13 09:18:20.763 248514 DEBUG nova.network.neutron [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Successfully created port: 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:18:20 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3517: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007742746986199878 of space, bias 1.0, pg target 0.23228240958599636 quantized to 32 (current 32)
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697116252414539 of space, bias 1.0, pg target 0.20091348757243618 quantized to 32 (current 32)
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.724405709860528e-07 of space, bias 4.0, pg target 0.0006869286851832634 quantized to 16 (current 32)
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:18:21 np0005558241 nova_compute[248510]: 2025-12-13 09:18:21.596 248514 DEBUG nova.network.neutron [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Successfully updated port: 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:18:21 np0005558241 nova_compute[248510]: 2025-12-13 09:18:21.620 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:18:21 np0005558241 nova_compute[248510]: 2025-12-13 09:18:21.621 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:18:21 np0005558241 nova_compute[248510]: 2025-12-13 09:18:21.621 248514 DEBUG nova.network.neutron [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:18:21 np0005558241 nova_compute[248510]: 2025-12-13 09:18:21.743 248514 DEBUG nova.compute.manager [req-a2986e36-05a6-477e-bb03-87e51a5e6b5c req-54f2b1c7-2779-4a2b-8a92-83d2d892e020 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received event network-changed-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:18:21 np0005558241 nova_compute[248510]: 2025-12-13 09:18:21.744 248514 DEBUG nova.compute.manager [req-a2986e36-05a6-477e-bb03-87e51a5e6b5c req-54f2b1c7-2779-4a2b-8a92-83d2d892e020 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Refreshing instance network info cache due to event network-changed-051f9c2f-0627-4ddc-b79c-5b2542f0efa7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:18:21 np0005558241 nova_compute[248510]: 2025-12-13 09:18:21.744 248514 DEBUG oslo_concurrency.lockutils [req-a2986e36-05a6-477e-bb03-87e51a5e6b5c req-54f2b1c7-2779-4a2b-8a92-83d2d892e020 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:18:21 np0005558241 nova_compute[248510]: 2025-12-13 09:18:21.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:18:21 np0005558241 nova_compute[248510]: 2025-12-13 09:18:21.796 248514 DEBUG nova.network.neutron [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:18:22 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3518: 321 pgs: 321 active+clean; 143 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 1.0 MiB/s wr, 3 op/s
Dec 13 04:18:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:24 np0005558241 nova_compute[248510]: 2025-12-13 09:18:24.233 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:24 np0005558241 nova_compute[248510]: 2025-12-13 09:18:24.360 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:24 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3519: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.502 248514 DEBUG nova.network.neutron [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Updating instance_info_cache with network_info: [{"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.574 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.574 248514 DEBUG nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Instance network_info: |[{"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.576 248514 DEBUG oslo_concurrency.lockutils [req-a2986e36-05a6-477e-bb03-87e51a5e6b5c req-54f2b1c7-2779-4a2b-8a92-83d2d892e020 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.576 248514 DEBUG nova.network.neutron [req-a2986e36-05a6-477e-bb03-87e51a5e6b5c req-54f2b1c7-2779-4a2b-8a92-83d2d892e020 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Refreshing network info cache for port 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.582 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Start _get_guest_xml network_info=[{"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.591 248514 WARNING nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.606 248514 DEBUG nova.virt.libvirt.host [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.607 248514 DEBUG nova.virt.libvirt.host [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.612 248514 DEBUG nova.virt.libvirt.host [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.613 248514 DEBUG nova.virt.libvirt.host [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.614 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.614 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.615 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.616 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.616 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.617 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.617 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.618 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.618 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.619 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.619 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.619 248514 DEBUG nova.virt.hardware [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.625 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:18:26 np0005558241 nova_compute[248510]: 2025-12-13 09:18:26.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:18:26 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3520: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:18:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:18:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4286539412' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.235 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.266 248514 DEBUG nova.storage.rbd_utils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.271 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.774 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.774 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.836 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec 13 04:18:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:18:27 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/252489878' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.887 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.889 248514 DEBUG nova.virt.libvirt.vif [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:18:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1366374546',display_name='tempest-TestGettingAddress-server-1366374546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1366374546',id=148,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKfVIjC+c2E48WoFz2Ud3i73TUL/pY6+s2/66GaU+lrLkwx0f9N1LcjeXJdOzvtjVb2jNhFM65SYiWkjK3RJ3eDcO6vcs74xDf/lQ4jF6SyHOGQ3syJuqY4MMzp7LfTqAw==',key_name='tempest-TestGettingAddress-444069759',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-3gsokvio',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:18:19Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=db3ef8e8-1a8a-42cc-a5ed-d3e401098f07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.889 248514 DEBUG nova.network.os_vif_util [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.890 248514 DEBUG nova.network.os_vif_util [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:fd:37,bridge_name='br-int',has_traffic_filtering=True,id=051f9c2f-0627-4ddc-b79c-5b2542f0efa7,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap051f9c2f-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.892 248514 DEBUG nova.objects.instance [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid db3ef8e8-1a8a-42cc-a5ed-d3e401098f07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.919 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:18:27 np0005558241 nova_compute[248510]:  <uuid>db3ef8e8-1a8a-42cc-a5ed-d3e401098f07</uuid>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:  <name>instance-00000094</name>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestGettingAddress-server-1366374546</nova:name>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:18:26</nova:creationTime>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:18:27 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:        <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:        <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:        <nova:port uuid="051f9c2f-0627-4ddc-b79c-5b2542f0efa7">
Dec 13 04:18:27 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe5c:fd37" ipVersion="6"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe5c:fd37" ipVersion="6"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <entry name="serial">db3ef8e8-1a8a-42cc-a5ed-d3e401098f07</entry>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <entry name="uuid">db3ef8e8-1a8a-42cc-a5ed-d3e401098f07</entry>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk">
Dec 13 04:18:27 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:18:27 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk.config">
Dec 13 04:18:27 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:18:27 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:5c:fd:37"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <target dev="tap051f9c2f-06"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07/console.log" append="off"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:18:27 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:18:27 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:18:27 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:18:27 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.921 248514 DEBUG nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Preparing to wait for external event network-vif-plugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.922 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.922 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.922 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.923 248514 DEBUG nova.virt.libvirt.vif [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:18:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1366374546',display_name='tempest-TestGettingAddress-server-1366374546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1366374546',id=148,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKfVIjC+c2E48WoFz2Ud3i73TUL/pY6+s2/66GaU+lrLkwx0f9N1LcjeXJdOzvtjVb2jNhFM65SYiWkjK3RJ3eDcO6vcs74xDf/lQ4jF6SyHOGQ3syJuqY4MMzp7LfTqAw==',key_name='tempest-TestGettingAddress-444069759',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-3gsokvio',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:18:19Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=db3ef8e8-1a8a-42cc-a5ed-d3e401098f07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.923 248514 DEBUG nova.network.os_vif_util [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.924 248514 DEBUG nova.network.os_vif_util [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:fd:37,bridge_name='br-int',has_traffic_filtering=True,id=051f9c2f-0627-4ddc-b79c-5b2542f0efa7,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap051f9c2f-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.925 248514 DEBUG os_vif [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:fd:37,bridge_name='br-int',has_traffic_filtering=True,id=051f9c2f-0627-4ddc-b79c-5b2542f0efa7,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap051f9c2f-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.925 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.926 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.926 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.931 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.932 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap051f9c2f-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.932 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap051f9c2f-06, col_values=(('external_ids', {'iface-id': '051f9c2f-0627-4ddc-b79c-5b2542f0efa7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5c:fd:37', 'vm-uuid': 'db3ef8e8-1a8a-42cc-a5ed-d3e401098f07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.934 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:27 np0005558241 NetworkManager[50376]: <info>  [1765617507.9357] manager: (tap051f9c2f-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/663)
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.936 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.948 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:27 np0005558241 nova_compute[248510]: 2025-12-13 09:18:27.950 248514 INFO os_vif [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:fd:37,bridge_name='br-int',has_traffic_filtering=True,id=051f9c2f-0627-4ddc-b79c-5b2542f0efa7,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap051f9c2f-06')#033[00m
Dec 13 04:18:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:28 np0005558241 nova_compute[248510]: 2025-12-13 09:18:28.153 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:18:28 np0005558241 nova_compute[248510]: 2025-12-13 09:18:28.153 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:18:28 np0005558241 nova_compute[248510]: 2025-12-13 09:18:28.154 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:5c:fd:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:18:28 np0005558241 nova_compute[248510]: 2025-12-13 09:18:28.155 248514 INFO nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Using config drive#033[00m
Dec 13 04:18:28 np0005558241 nova_compute[248510]: 2025-12-13 09:18:28.191 248514 DEBUG nova.storage.rbd_utils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:18:28 np0005558241 nova_compute[248510]: 2025-12-13 09:18:28.310 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:18:28 np0005558241 nova_compute[248510]: 2025-12-13 09:18:28.310 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:18:28 np0005558241 nova_compute[248510]: 2025-12-13 09:18:28.311 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 04:18:28 np0005558241 nova_compute[248510]: 2025-12-13 09:18:28.311 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7aadb3d0-64f2-4531-9896-93b087cdea5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:18:28 np0005558241 nova_compute[248510]: 2025-12-13 09:18:28.578 248514 INFO nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Creating config drive at /var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07/disk.config#033[00m
Dec 13 04:18:28 np0005558241 nova_compute[248510]: 2025-12-13 09:18:28.584 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdyfxvsn9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:18:28 np0005558241 nova_compute[248510]: 2025-12-13 09:18:28.740 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdyfxvsn9" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:18:28 np0005558241 nova_compute[248510]: 2025-12-13 09:18:28.771 248514 DEBUG nova.storage.rbd_utils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:18:28 np0005558241 nova_compute[248510]: 2025-12-13 09:18:28.775 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07/disk.config db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:18:28 np0005558241 nova_compute[248510]: 2025-12-13 09:18:28.946 248514 DEBUG oslo_concurrency.processutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07/disk.config db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:18:28 np0005558241 nova_compute[248510]: 2025-12-13 09:18:28.947 248514 INFO nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Deleting local config drive /var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07/disk.config because it was imported into RBD.#033[00m
Dec 13 04:18:28 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3521: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:18:29 np0005558241 kernel: tap051f9c2f-06: entered promiscuous mode
Dec 13 04:18:29 np0005558241 NetworkManager[50376]: <info>  [1765617509.0241] manager: (tap051f9c2f-06): new Tun device (/org/freedesktop/NetworkManager/Devices/664)
Dec 13 04:18:29 np0005558241 ovn_controller[148476]: 2025-12-13T09:18:29Z|01597|binding|INFO|Claiming lport 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 for this chassis.
Dec 13 04:18:29 np0005558241 ovn_controller[148476]: 2025-12-13T09:18:29Z|01598|binding|INFO|051f9c2f-0627-4ddc-b79c-5b2542f0efa7: Claiming fa:16:3e:5c:fd:37 10.100.0.13 2001:db8:0:1:f816:3eff:fe5c:fd37 2001:db8::f816:3eff:fe5c:fd37
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.026 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:29 np0005558241 ovn_controller[148476]: 2025-12-13T09:18:29Z|01599|binding|INFO|Setting lport 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 ovn-installed in OVS
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.051 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:29 np0005558241 systemd-udevd[402652]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:18:29 np0005558241 systemd-machined[210538]: New machine qemu-179-instance-00000094.
Dec 13 04:18:29 np0005558241 NetworkManager[50376]: <info>  [1765617509.0819] device (tap051f9c2f-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:18:29 np0005558241 NetworkManager[50376]: <info>  [1765617509.0826] device (tap051f9c2f-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:18:29 np0005558241 systemd[1]: Started Virtual Machine qemu-179-instance-00000094.
Dec 13 04:18:29 np0005558241 ovn_controller[148476]: 2025-12-13T09:18:29Z|01600|binding|INFO|Setting lport 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 up in Southbound
Dec 13 04:18:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.134 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:fd:37 10.100.0.13 2001:db8:0:1:f816:3eff:fe5c:fd37 2001:db8::f816:3eff:fe5c:fd37'], port_security=['fa:16:3e:5c:fd:37 10.100.0.13 2001:db8:0:1:f816:3eff:fe5c:fd37 2001:db8::f816:3eff:fe5c:fd37'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8:0:1:f816:3eff:fe5c:fd37/64 2001:db8::f816:3eff:fe5c:fd37/64', 'neutron:device_id': 'db3ef8e8-1a8a-42cc-a5ed-d3e401098f07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '747095ec-05b6-4395-bc25-989b7630b3c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f555052b-c852-42e8-9f82-1988cdbda87d, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=051f9c2f-0627-4ddc-b79c-5b2542f0efa7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:18:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.137 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 in datapath e5815775-e5d0-4b72-a008-efd9f04c6ee4 bound to our chassis#033[00m
Dec 13 04:18:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.140 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e5815775-e5d0-4b72-a008-efd9f04c6ee4#033[00m
Dec 13 04:18:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.162 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3f106e3b-6490-4393-80b5-e142136a442c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:18:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.206 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[7f366397-1921-4470-9786-d125e5fcac81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:18:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.211 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[52a7733e-9c5a-48fc-85b0-ce9d776bc16a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.234 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.259 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[13888107-9c42-4cab-b715-cbbeccdb62ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:18:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.284 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef04ddd-1730-432c-b1cf-3fcc093eba60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5815775-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:3e:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1004907, 'reachable_time': 36718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402668, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:18:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.307 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[194fa4ad-f656-474c-a059-0d26def8eca4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape5815775-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1004925, 'tstamp': 1004925}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402669, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape5815775-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1004929, 'tstamp': 1004929}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402669, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:18:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.309 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5815775-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.311 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.312 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.313 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5815775-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:18:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.313 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:18:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.314 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape5815775-e0, col_values=(('external_ids', {'iface-id': '311a1d17-b8d0-425a-a0e2-260061ce5d5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:18:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.315 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.459 248514 DEBUG nova.compute.manager [req-058893b3-6624-4038-8ecc-2df1497e6d9b req-78be10e3-b73b-428b-bebe-96b9ff671f70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received event network-vif-plugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.460 248514 DEBUG oslo_concurrency.lockutils [req-058893b3-6624-4038-8ecc-2df1497e6d9b req-78be10e3-b73b-428b-bebe-96b9ff671f70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.460 248514 DEBUG oslo_concurrency.lockutils [req-058893b3-6624-4038-8ecc-2df1497e6d9b req-78be10e3-b73b-428b-bebe-96b9ff671f70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.460 248514 DEBUG oslo_concurrency.lockutils [req-058893b3-6624-4038-8ecc-2df1497e6d9b req-78be10e3-b73b-428b-bebe-96b9ff671f70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.461 248514 DEBUG nova.compute.manager [req-058893b3-6624-4038-8ecc-2df1497e6d9b req-78be10e3-b73b-428b-bebe-96b9ff671f70 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Processing event network-vif-plugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.643 248514 DEBUG nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.644 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617509.642649, db3ef8e8-1a8a-42cc-a5ed-d3e401098f07 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.644 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] VM Started (Lifecycle Event)#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.646 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.649 248514 INFO nova.virt.libvirt.driver [-] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Instance spawned successfully.#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.650 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.755 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.774 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.779 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.780 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.781 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.781 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.782 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.783 248514 DEBUG nova.virt.libvirt.driver [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.826 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.827 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617509.6458447, db3ef8e8-1a8a-42cc-a5ed-d3e401098f07 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.827 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:18:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.898 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.899 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:29 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:29.900 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.908 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.913 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617509.6460905, db3ef8e8-1a8a-42cc-a5ed-d3e401098f07 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.913 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.921 248514 INFO nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Took 10.32 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:18:29 np0005558241 nova_compute[248510]: 2025-12-13 09:18:29.922 248514 DEBUG nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:18:30 np0005558241 nova_compute[248510]: 2025-12-13 09:18:30.029 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:18:30 np0005558241 nova_compute[248510]: 2025-12-13 09:18:30.033 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:18:30 np0005558241 nova_compute[248510]: 2025-12-13 09:18:30.068 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:18:30 np0005558241 nova_compute[248510]: 2025-12-13 09:18:30.092 248514 INFO nova.compute.manager [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Took 11.60 seconds to build instance.#033[00m
Dec 13 04:18:30 np0005558241 nova_compute[248510]: 2025-12-13 09:18:30.115 248514 DEBUG oslo_concurrency.lockutils [None req-d213e80a-c346-49c7-a9ec-663714a6b7a8 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:18:30 np0005558241 nova_compute[248510]: 2025-12-13 09:18:30.708 248514 DEBUG nova.network.neutron [req-a2986e36-05a6-477e-bb03-87e51a5e6b5c req-54f2b1c7-2779-4a2b-8a92-83d2d892e020 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Updated VIF entry in instance network info cache for port 051f9c2f-0627-4ddc-b79c-5b2542f0efa7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:18:30 np0005558241 nova_compute[248510]: 2025-12-13 09:18:30.709 248514 DEBUG nova.network.neutron [req-a2986e36-05a6-477e-bb03-87e51a5e6b5c req-54f2b1c7-2779-4a2b-8a92-83d2d892e020 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Updating instance_info_cache with network_info: [{"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:18:30 np0005558241 nova_compute[248510]: 2025-12-13 09:18:30.729 248514 DEBUG oslo_concurrency.lockutils [req-a2986e36-05a6-477e-bb03-87e51a5e6b5c req-54f2b1c7-2779-4a2b-8a92-83d2d892e020 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:18:30 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3522: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:18:31 np0005558241 nova_compute[248510]: 2025-12-13 09:18:31.448 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updating instance_info_cache with network_info: [{"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:18:31 np0005558241 nova_compute[248510]: 2025-12-13 09:18:31.468 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:18:31 np0005558241 nova_compute[248510]: 2025-12-13 09:18:31.469 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 04:18:31 np0005558241 nova_compute[248510]: 2025-12-13 09:18:31.548 248514 DEBUG nova.compute.manager [req-1559859c-5c34-488d-9092-edf4468b2639 req-f6f1408a-f5e7-45cb-9010-6297f1bac533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received event network-vif-plugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:18:31 np0005558241 nova_compute[248510]: 2025-12-13 09:18:31.549 248514 DEBUG oslo_concurrency.lockutils [req-1559859c-5c34-488d-9092-edf4468b2639 req-f6f1408a-f5e7-45cb-9010-6297f1bac533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:18:31 np0005558241 nova_compute[248510]: 2025-12-13 09:18:31.549 248514 DEBUG oslo_concurrency.lockutils [req-1559859c-5c34-488d-9092-edf4468b2639 req-f6f1408a-f5e7-45cb-9010-6297f1bac533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:18:31 np0005558241 nova_compute[248510]: 2025-12-13 09:18:31.549 248514 DEBUG oslo_concurrency.lockutils [req-1559859c-5c34-488d-9092-edf4468b2639 req-f6f1408a-f5e7-45cb-9010-6297f1bac533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:18:31 np0005558241 nova_compute[248510]: 2025-12-13 09:18:31.550 248514 DEBUG nova.compute.manager [req-1559859c-5c34-488d-9092-edf4468b2639 req-f6f1408a-f5e7-45cb-9010-6297f1bac533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] No waiting events found dispatching network-vif-plugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:18:31 np0005558241 nova_compute[248510]: 2025-12-13 09:18:31.550 248514 WARNING nova.compute.manager [req-1559859c-5c34-488d-9092-edf4468b2639 req-f6f1408a-f5e7-45cb-9010-6297f1bac533 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received unexpected event network-vif-plugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:18:32 np0005558241 nova_compute[248510]: 2025-12-13 09:18:32.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:18:32 np0005558241 nova_compute[248510]: 2025-12-13 09:18:32.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:18:32 np0005558241 nova_compute[248510]: 2025-12-13 09:18:32.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:18:32 np0005558241 nova_compute[248510]: 2025-12-13 09:18:32.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:18:32 np0005558241 nova_compute[248510]: 2025-12-13 09:18:32.799 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:18:32 np0005558241 nova_compute[248510]: 2025-12-13 09:18:32.799 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:18:32 np0005558241 nova_compute[248510]: 2025-12-13 09:18:32.935 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:32 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3523: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 13 04:18:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:18:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2056976498' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:18:33 np0005558241 nova_compute[248510]: 2025-12-13 09:18:33.398 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:18:33 np0005558241 nova_compute[248510]: 2025-12-13 09:18:33.491 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:18:33 np0005558241 nova_compute[248510]: 2025-12-13 09:18:33.492 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:18:33 np0005558241 nova_compute[248510]: 2025-12-13 09:18:33.497 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:18:33 np0005558241 nova_compute[248510]: 2025-12-13 09:18:33.497 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:18:33 np0005558241 nova_compute[248510]: 2025-12-13 09:18:33.660 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:18:33 np0005558241 nova_compute[248510]: 2025-12-13 09:18:33.661 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3126MB free_disk=59.92107794713229GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:18:33 np0005558241 nova_compute[248510]: 2025-12-13 09:18:33.662 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:18:33 np0005558241 nova_compute[248510]: 2025-12-13 09:18:33.662 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:18:33 np0005558241 nova_compute[248510]: 2025-12-13 09:18:33.855 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 7aadb3d0-64f2-4531-9896-93b087cdea5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:18:33 np0005558241 nova_compute[248510]: 2025-12-13 09:18:33.855 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance db3ef8e8-1a8a-42cc-a5ed-d3e401098f07 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:18:33 np0005558241 nova_compute[248510]: 2025-12-13 09:18:33.856 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:18:33 np0005558241 nova_compute[248510]: 2025-12-13 09:18:33.856 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:18:33 np0005558241 nova_compute[248510]: 2025-12-13 09:18:33.996 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:18:34 np0005558241 nova_compute[248510]: 2025-12-13 09:18:34.055 248514 DEBUG nova.compute.manager [req-ead14148-8b38-4d8c-80ef-e6b7bbdf32b4 req-bb77324c-596b-482b-bdf2-135c371eb106 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received event network-changed-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:18:34 np0005558241 nova_compute[248510]: 2025-12-13 09:18:34.056 248514 DEBUG nova.compute.manager [req-ead14148-8b38-4d8c-80ef-e6b7bbdf32b4 req-bb77324c-596b-482b-bdf2-135c371eb106 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Refreshing instance network info cache due to event network-changed-051f9c2f-0627-4ddc-b79c-5b2542f0efa7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:18:34 np0005558241 nova_compute[248510]: 2025-12-13 09:18:34.056 248514 DEBUG oslo_concurrency.lockutils [req-ead14148-8b38-4d8c-80ef-e6b7bbdf32b4 req-bb77324c-596b-482b-bdf2-135c371eb106 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:18:34 np0005558241 nova_compute[248510]: 2025-12-13 09:18:34.057 248514 DEBUG oslo_concurrency.lockutils [req-ead14148-8b38-4d8c-80ef-e6b7bbdf32b4 req-bb77324c-596b-482b-bdf2-135c371eb106 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:18:34 np0005558241 nova_compute[248510]: 2025-12-13 09:18:34.057 248514 DEBUG nova.network.neutron [req-ead14148-8b38-4d8c-80ef-e6b7bbdf32b4 req-bb77324c-596b-482b-bdf2-135c371eb106 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Refreshing network info cache for port 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:18:34 np0005558241 nova_compute[248510]: 2025-12-13 09:18:34.238 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:18:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3427994080' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:18:34 np0005558241 nova_compute[248510]: 2025-12-13 09:18:34.590 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:18:34 np0005558241 nova_compute[248510]: 2025-12-13 09:18:34.598 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:18:34 np0005558241 nova_compute[248510]: 2025-12-13 09:18:34.628 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:18:34 np0005558241 nova_compute[248510]: 2025-12-13 09:18:34.661 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:18:34 np0005558241 nova_compute[248510]: 2025-12-13 09:18:34.661 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:18:34 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3524: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 799 KiB/s wr, 97 op/s
Dec 13 04:18:35 np0005558241 nova_compute[248510]: 2025-12-13 09:18:35.661 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:18:36 np0005558241 nova_compute[248510]: 2025-12-13 09:18:36.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:18:36 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3525: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:18:37 np0005558241 nova_compute[248510]: 2025-12-13 09:18:37.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:18:37 np0005558241 nova_compute[248510]: 2025-12-13 09:18:37.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:18:37 np0005558241 nova_compute[248510]: 2025-12-13 09:18:37.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:18:37 np0005558241 nova_compute[248510]: 2025-12-13 09:18:37.939 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:37 np0005558241 podman[402759]: 2025-12-13 09:18:37.998145995 +0000 UTC m=+0.075885972 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 13 04:18:38 np0005558241 podman[402758]: 2025-12-13 09:18:38.032146702 +0000 UTC m=+0.109628093 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 04:18:38 np0005558241 podman[402757]: 2025-12-13 09:18:38.056425697 +0000 UTC m=+0.128933504 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 04:18:38 np0005558241 nova_compute[248510]: 2025-12-13 09:18:38.543 248514 DEBUG nova.network.neutron [req-ead14148-8b38-4d8c-80ef-e6b7bbdf32b4 req-bb77324c-596b-482b-bdf2-135c371eb106 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Updated VIF entry in instance network info cache for port 051f9c2f-0627-4ddc-b79c-5b2542f0efa7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:18:38 np0005558241 nova_compute[248510]: 2025-12-13 09:18:38.544 248514 DEBUG nova.network.neutron [req-ead14148-8b38-4d8c-80ef-e6b7bbdf32b4 req-bb77324c-596b-482b-bdf2-135c371eb106 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Updating instance_info_cache with network_info: [{"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:18:38 np0005558241 nova_compute[248510]: 2025-12-13 09:18:38.786 248514 DEBUG oslo_concurrency.lockutils [req-ead14148-8b38-4d8c-80ef-e6b7bbdf32b4 req-bb77324c-596b-482b-bdf2-135c371eb106 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:18:38 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3526: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 04:18:39 np0005558241 nova_compute[248510]: 2025-12-13 09:18:39.238 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:39.905 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.057962) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617520058036, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1541, "num_deletes": 251, "total_data_size": 2613330, "memory_usage": 2656688, "flush_reason": "Manual Compaction"}
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617520077983, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 2544917, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69026, "largest_seqno": 70566, "table_properties": {"data_size": 2537639, "index_size": 4284, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14932, "raw_average_key_size": 20, "raw_value_size": 2523261, "raw_average_value_size": 3382, "num_data_blocks": 191, "num_entries": 746, "num_filter_entries": 746, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765617358, "oldest_key_time": 1765617358, "file_creation_time": 1765617520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 20066 microseconds, and 6905 cpu microseconds.
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.078034) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 2544917 bytes OK
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.078058) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.080198) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.080214) EVENT_LOG_v1 {"time_micros": 1765617520080209, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.080231) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 2606588, prev total WAL file size 2606588, number of live WAL files 2.
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.081147) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(2485KB)], [164(9253KB)]
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617520081292, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 12020564, "oldest_snapshot_seqno": -1}
Dec 13 04:18:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:18:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:18:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:18:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8917 keys, 10227131 bytes, temperature: kUnknown
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617520192629, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 10227131, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10171956, "index_size": 31766, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22341, "raw_key_size": 233889, "raw_average_key_size": 26, "raw_value_size": 10017447, "raw_average_value_size": 1123, "num_data_blocks": 1221, "num_entries": 8917, "num_filter_entries": 8917, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765617520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:18:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:18:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.192921) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 10227131 bytes
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.196529) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.9 rd, 91.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 9.0 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(8.7) write-amplify(4.0) OK, records in: 9431, records dropped: 514 output_compression: NoCompression
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.196553) EVENT_LOG_v1 {"time_micros": 1765617520196542, "job": 102, "event": "compaction_finished", "compaction_time_micros": 111443, "compaction_time_cpu_micros": 32727, "output_level": 6, "num_output_files": 1, "total_output_size": 10227131, "num_input_records": 9431, "num_output_records": 8917, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617520197043, "job": 102, "event": "table_file_deletion", "file_number": 166}
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617520198854, "job": 102, "event": "table_file_deletion", "file_number": 164}
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.080969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.198960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.198966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.198967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.198969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:18:40 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:18:40.198970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:18:40 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3527: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 04:18:42 np0005558241 ovn_controller[148476]: 2025-12-13T09:18:42Z|00207|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5c:fd:37 10.100.0.13
Dec 13 04:18:42 np0005558241 ovn_controller[148476]: 2025-12-13T09:18:42Z|00208|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5c:fd:37 10.100.0.13
Dec 13 04:18:42 np0005558241 nova_compute[248510]: 2025-12-13 09:18:42.947 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:42 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3528: 321 pgs: 321 active+clean; 175 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 761 KiB/s wr, 91 op/s
Dec 13 04:18:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:44 np0005558241 nova_compute[248510]: 2025-12-13 09:18:44.241 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:44 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3529: 321 pgs: 321 active+clean; 197 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 90 op/s
Dec 13 04:18:46 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3530: 321 pgs: 321 active+clean; 197 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Dec 13 04:18:47 np0005558241 nova_compute[248510]: 2025-12-13 09:18:47.950 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:48 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3531: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:18:49 np0005558241 nova_compute[248510]: 2025-12-13 09:18:49.242 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:49 np0005558241 nova_compute[248510]: 2025-12-13 09:18:49.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:18:50 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3532: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 04:18:52 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3533: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 04:18:52 np0005558241 nova_compute[248510]: 2025-12-13 09:18:52.992 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.084 248514 DEBUG nova.compute.manager [req-75aa3360-f14b-4762-93d7-4af095a9acf2 req-eae9d070-6eb1-410e-a362-21a9c979b343 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received event network-changed-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.085 248514 DEBUG nova.compute.manager [req-75aa3360-f14b-4762-93d7-4af095a9acf2 req-eae9d070-6eb1-410e-a362-21a9c979b343 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Refreshing instance network info cache due to event network-changed-051f9c2f-0627-4ddc-b79c-5b2542f0efa7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.085 248514 DEBUG oslo_concurrency.lockutils [req-75aa3360-f14b-4762-93d7-4af095a9acf2 req-eae9d070-6eb1-410e-a362-21a9c979b343 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.085 248514 DEBUG oslo_concurrency.lockutils [req-75aa3360-f14b-4762-93d7-4af095a9acf2 req-eae9d070-6eb1-410e-a362-21a9c979b343 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.085 248514 DEBUG nova.network.neutron [req-75aa3360-f14b-4762-93d7-4af095a9acf2 req-eae9d070-6eb1-410e-a362-21a9c979b343 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Refreshing network info cache for port 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.245 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.265 248514 DEBUG oslo_concurrency.lockutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.266 248514 DEBUG oslo_concurrency.lockutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.266 248514 DEBUG oslo_concurrency.lockutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.266 248514 DEBUG oslo_concurrency.lockutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.266 248514 DEBUG oslo_concurrency.lockutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.268 248514 INFO nova.compute.manager [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Terminating instance#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.268 248514 DEBUG nova.compute.manager [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:18:54 np0005558241 kernel: tap051f9c2f-06 (unregistering): left promiscuous mode
Dec 13 04:18:54 np0005558241 NetworkManager[50376]: <info>  [1765617534.3251] device (tap051f9c2f-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:18:54 np0005558241 ovn_controller[148476]: 2025-12-13T09:18:54Z|01601|binding|INFO|Releasing lport 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 from this chassis (sb_readonly=0)
Dec 13 04:18:54 np0005558241 ovn_controller[148476]: 2025-12-13T09:18:54Z|01602|binding|INFO|Setting lport 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 down in Southbound
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.337 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:54 np0005558241 ovn_controller[148476]: 2025-12-13T09:18:54Z|01603|binding|INFO|Removing iface tap051f9c2f-06 ovn-installed in OVS
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.342 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.369 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:54 np0005558241 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000094.scope: Deactivated successfully.
Dec 13 04:18:54 np0005558241 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000094.scope: Consumed 14.828s CPU time.
Dec 13 04:18:54 np0005558241 systemd-machined[210538]: Machine qemu-179-instance-00000094 terminated.
Dec 13 04:18:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.477 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:fd:37 10.100.0.13 2001:db8:0:1:f816:3eff:fe5c:fd37 2001:db8::f816:3eff:fe5c:fd37'], port_security=['fa:16:3e:5c:fd:37 10.100.0.13 2001:db8:0:1:f816:3eff:fe5c:fd37 2001:db8::f816:3eff:fe5c:fd37'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8:0:1:f816:3eff:fe5c:fd37/64 2001:db8::f816:3eff:fe5c:fd37/64', 'neutron:device_id': 'db3ef8e8-1a8a-42cc-a5ed-d3e401098f07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '747095ec-05b6-4395-bc25-989b7630b3c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f555052b-c852-42e8-9f82-1988cdbda87d, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=051f9c2f-0627-4ddc-b79c-5b2542f0efa7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:18:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.478 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 051f9c2f-0627-4ddc-b79c-5b2542f0efa7 in datapath e5815775-e5d0-4b72-a008-efd9f04c6ee4 unbound from our chassis#033[00m
Dec 13 04:18:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.480 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e5815775-e5d0-4b72-a008-efd9f04c6ee4#033[00m
Dec 13 04:18:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.501 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4b3ec5-da19-4143-a26d-3a114d03a536]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.514 248514 INFO nova.virt.libvirt.driver [-] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Instance destroyed successfully.#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.515 248514 DEBUG nova.objects.instance [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid db3ef8e8-1a8a-42cc-a5ed-d3e401098f07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.533 248514 DEBUG nova.virt.libvirt.vif [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:18:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1366374546',display_name='tempest-TestGettingAddress-server-1366374546',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1366374546',id=148,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKfVIjC+c2E48WoFz2Ud3i73TUL/pY6+s2/66GaU+lrLkwx0f9N1LcjeXJdOzvtjVb2jNhFM65SYiWkjK3RJ3eDcO6vcs74xDf/lQ4jF6SyHOGQ3syJuqY4MMzp7LfTqAw==',key_name='tempest-TestGettingAddress-444069759',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:18:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-3gsokvio',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:18:30Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=db3ef8e8-1a8a-42cc-a5ed-d3e401098f07,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.535 248514 DEBUG nova.network.os_vif_util [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.536 248514 DEBUG nova.network.os_vif_util [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5c:fd:37,bridge_name='br-int',has_traffic_filtering=True,id=051f9c2f-0627-4ddc-b79c-5b2542f0efa7,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap051f9c2f-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.537 248514 DEBUG os_vif [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:fd:37,bridge_name='br-int',has_traffic_filtering=True,id=051f9c2f-0627-4ddc-b79c-5b2542f0efa7,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap051f9c2f-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.539 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.539 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap051f9c2f-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.543 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.547 248514 INFO os_vif [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:fd:37,bridge_name='br-int',has_traffic_filtering=True,id=051f9c2f-0627-4ddc-b79c-5b2542f0efa7,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap051f9c2f-06')#033[00m
Dec 13 04:18:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.557 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0f731302-28e7-4774-9266-679e1e147ce3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:18:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.562 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[df634efe-6157-4dd5-bb6b-ae53e6a63ccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:18:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.616 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[8293dbda-fa8b-466a-a82d-6077b7580e42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:18:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.642 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2a286de2-7944-46b5-8d55-0f8062953afd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5815775-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:3e:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1004907, 'reachable_time': 36718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402855, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:18:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.667 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f4a611-e9de-4ee5-abb0-424b7a5da9e1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape5815775-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1004925, 'tstamp': 1004925}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402856, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape5815775-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1004929, 'tstamp': 1004929}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402856, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:18:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.670 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5815775-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:18:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.673 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5815775-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:18:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.674 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:18:54 np0005558241 nova_compute[248510]: 2025-12-13 09:18:54.673 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.674 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape5815775-e0, col_values=(('external_ids', {'iface-id': '311a1d17-b8d0-425a-a0e2-260061ce5d5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:18:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:54.675 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:18:54 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3534: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 1.4 MiB/s wr, 48 op/s
Dec 13 04:18:55 np0005558241 nova_compute[248510]: 2025-12-13 09:18:55.208 248514 INFO nova.virt.libvirt.driver [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Deleting instance files /var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_del#033[00m
Dec 13 04:18:55 np0005558241 nova_compute[248510]: 2025-12-13 09:18:55.210 248514 INFO nova.virt.libvirt.driver [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Deletion of /var/lib/nova/instances/db3ef8e8-1a8a-42cc-a5ed-d3e401098f07_del complete#033[00m
Dec 13 04:18:55 np0005558241 nova_compute[248510]: 2025-12-13 09:18:55.402 248514 INFO nova.compute.manager [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Took 1.13 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:18:55 np0005558241 nova_compute[248510]: 2025-12-13 09:18:55.403 248514 DEBUG oslo.service.loopingcall [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:18:55 np0005558241 nova_compute[248510]: 2025-12-13 09:18:55.404 248514 DEBUG nova.compute.manager [-] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:18:55 np0005558241 nova_compute[248510]: 2025-12-13 09:18:55.405 248514 DEBUG nova.network.neutron [-] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:18:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:55.455 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:18:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:55.456 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:18:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:18:55.456 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:18:56 np0005558241 nova_compute[248510]: 2025-12-13 09:18:56.608 248514 DEBUG nova.compute.manager [req-a6c49155-494c-4fab-b3c7-86e3a56eb6d1 req-55aa8e09-3229-461b-be30-89beb2c9afbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received event network-vif-unplugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:18:56 np0005558241 nova_compute[248510]: 2025-12-13 09:18:56.609 248514 DEBUG oslo_concurrency.lockutils [req-a6c49155-494c-4fab-b3c7-86e3a56eb6d1 req-55aa8e09-3229-461b-be30-89beb2c9afbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:18:56 np0005558241 nova_compute[248510]: 2025-12-13 09:18:56.610 248514 DEBUG oslo_concurrency.lockutils [req-a6c49155-494c-4fab-b3c7-86e3a56eb6d1 req-55aa8e09-3229-461b-be30-89beb2c9afbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:18:56 np0005558241 nova_compute[248510]: 2025-12-13 09:18:56.610 248514 DEBUG oslo_concurrency.lockutils [req-a6c49155-494c-4fab-b3c7-86e3a56eb6d1 req-55aa8e09-3229-461b-be30-89beb2c9afbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:18:56 np0005558241 nova_compute[248510]: 2025-12-13 09:18:56.611 248514 DEBUG nova.compute.manager [req-a6c49155-494c-4fab-b3c7-86e3a56eb6d1 req-55aa8e09-3229-461b-be30-89beb2c9afbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] No waiting events found dispatching network-vif-unplugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:18:56 np0005558241 nova_compute[248510]: 2025-12-13 09:18:56.611 248514 DEBUG nova.compute.manager [req-a6c49155-494c-4fab-b3c7-86e3a56eb6d1 req-55aa8e09-3229-461b-be30-89beb2c9afbf 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received event network-vif-unplugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:18:56 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3535: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 66 KiB/s wr, 12 op/s
Dec 13 04:18:57 np0005558241 nova_compute[248510]: 2025-12-13 09:18:57.794 248514 DEBUG nova.network.neutron [-] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:18:57 np0005558241 nova_compute[248510]: 2025-12-13 09:18:57.919 248514 INFO nova.compute.manager [-] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Took 2.51 seconds to deallocate network for instance.#033[00m
Dec 13 04:18:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:58 np0005558241 nova_compute[248510]: 2025-12-13 09:18:58.182 248514 DEBUG oslo_concurrency.lockutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:18:58 np0005558241 nova_compute[248510]: 2025-12-13 09:18:58.183 248514 DEBUG oslo_concurrency.lockutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:18:58 np0005558241 nova_compute[248510]: 2025-12-13 09:18:58.276 248514 DEBUG oslo_concurrency.processutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:18:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:18:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2373072729' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:18:58 np0005558241 nova_compute[248510]: 2025-12-13 09:18:58.930 248514 DEBUG oslo_concurrency.processutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.654s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:18:58 np0005558241 nova_compute[248510]: 2025-12-13 09:18:58.939 248514 DEBUG nova.compute.provider_tree [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:18:58 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3536: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 70 KiB/s wr, 40 op/s
Dec 13 04:18:59 np0005558241 nova_compute[248510]: 2025-12-13 09:18:59.247 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:59 np0005558241 nova_compute[248510]: 2025-12-13 09:18:59.287 248514 DEBUG nova.compute.manager [req-4f19bfc3-2343-475b-b3f4-42abd4505213 req-d2de10c0-e5a8-4c6e-9b3d-e3176d38db8a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received event network-vif-plugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:18:59 np0005558241 nova_compute[248510]: 2025-12-13 09:18:59.288 248514 DEBUG oslo_concurrency.lockutils [req-4f19bfc3-2343-475b-b3f4-42abd4505213 req-d2de10c0-e5a8-4c6e-9b3d-e3176d38db8a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:18:59 np0005558241 nova_compute[248510]: 2025-12-13 09:18:59.289 248514 DEBUG oslo_concurrency.lockutils [req-4f19bfc3-2343-475b-b3f4-42abd4505213 req-d2de10c0-e5a8-4c6e-9b3d-e3176d38db8a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:18:59 np0005558241 nova_compute[248510]: 2025-12-13 09:18:59.289 248514 DEBUG oslo_concurrency.lockutils [req-4f19bfc3-2343-475b-b3f4-42abd4505213 req-d2de10c0-e5a8-4c6e-9b3d-e3176d38db8a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:18:59 np0005558241 nova_compute[248510]: 2025-12-13 09:18:59.290 248514 DEBUG nova.compute.manager [req-4f19bfc3-2343-475b-b3f4-42abd4505213 req-d2de10c0-e5a8-4c6e-9b3d-e3176d38db8a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] No waiting events found dispatching network-vif-plugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:18:59 np0005558241 nova_compute[248510]: 2025-12-13 09:18:59.290 248514 WARNING nova.compute.manager [req-4f19bfc3-2343-475b-b3f4-42abd4505213 req-d2de10c0-e5a8-4c6e-9b3d-e3176d38db8a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received unexpected event network-vif-plugged-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:18:59 np0005558241 nova_compute[248510]: 2025-12-13 09:18:59.291 248514 DEBUG nova.compute.manager [req-4f19bfc3-2343-475b-b3f4-42abd4505213 req-d2de10c0-e5a8-4c6e-9b3d-e3176d38db8a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Received event network-vif-deleted-051f9c2f-0627-4ddc-b79c-5b2542f0efa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:18:59 np0005558241 nova_compute[248510]: 2025-12-13 09:18:59.300 248514 DEBUG nova.scheduler.client.report [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:18:59 np0005558241 nova_compute[248510]: 2025-12-13 09:18:59.425 248514 DEBUG oslo_concurrency.lockutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:18:59 np0005558241 nova_compute[248510]: 2025-12-13 09:18:59.474 248514 INFO nova.scheduler.client.report [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance db3ef8e8-1a8a-42cc-a5ed-d3e401098f07#033[00m
Dec 13 04:18:59 np0005558241 nova_compute[248510]: 2025-12-13 09:18:59.497 248514 DEBUG nova.network.neutron [req-75aa3360-f14b-4762-93d7-4af095a9acf2 req-eae9d070-6eb1-410e-a362-21a9c979b343 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Updated VIF entry in instance network info cache for port 051f9c2f-0627-4ddc-b79c-5b2542f0efa7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:18:59 np0005558241 nova_compute[248510]: 2025-12-13 09:18:59.497 248514 DEBUG nova.network.neutron [req-75aa3360-f14b-4762-93d7-4af095a9acf2 req-eae9d070-6eb1-410e-a362-21a9c979b343 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Updating instance_info_cache with network_info: [{"id": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "address": "fa:16:3e:5c:fd:37", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:fd37", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap051f9c2f-06", "ovs_interfaceid": "051f9c2f-0627-4ddc-b79c-5b2542f0efa7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:18:59 np0005558241 nova_compute[248510]: 2025-12-13 09:18:59.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:18:59 np0005558241 nova_compute[248510]: 2025-12-13 09:18:59.818 248514 DEBUG oslo_concurrency.lockutils [req-75aa3360-f14b-4762-93d7-4af095a9acf2 req-eae9d070-6eb1-410e-a362-21a9c979b343 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:18:59 np0005558241 nova_compute[248510]: 2025-12-13 09:18:59.859 248514 DEBUG oslo_concurrency.lockutils [None req-62524a39-7fee-415c-8099-576162089a5d 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "db3ef8e8-1a8a-42cc-a5ed-d3e401098f07" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:19:00 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3537: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 30 op/s
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.258 248514 DEBUG nova.compute.manager [req-6fed17b3-e1f1-4bfc-9ea2-9dd0a1bbc007 req-98182f55-0dbb-4609-8f26-c02f233b8c41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received event network-changed-1271974a-de11-42b5-87b8-470c51840315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.258 248514 DEBUG nova.compute.manager [req-6fed17b3-e1f1-4bfc-9ea2-9dd0a1bbc007 req-98182f55-0dbb-4609-8f26-c02f233b8c41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Refreshing instance network info cache due to event network-changed-1271974a-de11-42b5-87b8-470c51840315. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.259 248514 DEBUG oslo_concurrency.lockutils [req-6fed17b3-e1f1-4bfc-9ea2-9dd0a1bbc007 req-98182f55-0dbb-4609-8f26-c02f233b8c41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.259 248514 DEBUG oslo_concurrency.lockutils [req-6fed17b3-e1f1-4bfc-9ea2-9dd0a1bbc007 req-98182f55-0dbb-4609-8f26-c02f233b8c41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.259 248514 DEBUG nova.network.neutron [req-6fed17b3-e1f1-4bfc-9ea2-9dd0a1bbc007 req-98182f55-0dbb-4609-8f26-c02f233b8c41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Refreshing network info cache for port 1271974a-de11-42b5-87b8-470c51840315 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.326 248514 DEBUG oslo_concurrency.lockutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7aadb3d0-64f2-4531-9896-93b087cdea5c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.326 248514 DEBUG oslo_concurrency.lockutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.327 248514 DEBUG oslo_concurrency.lockutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.327 248514 DEBUG oslo_concurrency.lockutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.327 248514 DEBUG oslo_concurrency.lockutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.328 248514 INFO nova.compute.manager [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Terminating instance#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.330 248514 DEBUG nova.compute.manager [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:19:02 np0005558241 kernel: tap1271974a-de (unregistering): left promiscuous mode
Dec 13 04:19:02 np0005558241 NetworkManager[50376]: <info>  [1765617542.3821] device (tap1271974a-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:19:02 np0005558241 ovn_controller[148476]: 2025-12-13T09:19:02Z|01604|binding|INFO|Releasing lport 1271974a-de11-42b5-87b8-470c51840315 from this chassis (sb_readonly=0)
Dec 13 04:19:02 np0005558241 ovn_controller[148476]: 2025-12-13T09:19:02Z|01605|binding|INFO|Setting lport 1271974a-de11-42b5-87b8-470c51840315 down in Southbound
Dec 13 04:19:02 np0005558241 ovn_controller[148476]: 2025-12-13T09:19:02Z|01606|binding|INFO|Removing iface tap1271974a-de ovn-installed in OVS
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.429 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.439 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:f6:8c 10.100.0.6 2001:db8:0:1:f816:3eff:fed5:f68c 2001:db8::f816:3eff:fed5:f68c'], port_security=['fa:16:3e:d5:f6:8c 10.100.0.6 2001:db8:0:1:f816:3eff:fed5:f68c 2001:db8::f816:3eff:fed5:f68c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28 2001:db8:0:1:f816:3eff:fed5:f68c/64 2001:db8::f816:3eff:fed5:f68c/64', 'neutron:device_id': '7aadb3d0-64f2-4531-9896-93b087cdea5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '747095ec-05b6-4395-bc25-989b7630b3c9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f555052b-c852-42e8-9f82-1988cdbda87d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1271974a-de11-42b5-87b8-470c51840315) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:19:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.440 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1271974a-de11-42b5-87b8-470c51840315 in datapath e5815775-e5d0-4b72-a008-efd9f04c6ee4 unbound from our chassis#033[00m
Dec 13 04:19:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.442 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5815775-e5d0-4b72-a008-efd9f04c6ee4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:19:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.443 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2af1db-293e-494e-9fdd-167cbdfd0474]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:19:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.444 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4 namespace which is not needed anymore#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.450 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:02 np0005558241 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000093.scope: Deactivated successfully.
Dec 13 04:19:02 np0005558241 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000093.scope: Consumed 16.383s CPU time.
Dec 13 04:19:02 np0005558241 systemd-machined[210538]: Machine qemu-178-instance-00000093 terminated.
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.571 248514 INFO nova.virt.libvirt.driver [-] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Instance destroyed successfully.#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.572 248514 DEBUG nova.objects.instance [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 7aadb3d0-64f2-4531-9896-93b087cdea5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:19:02 np0005558241 neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4[401646]: [NOTICE]   (401650) : haproxy version is 2.8.14-c23fe91
Dec 13 04:19:02 np0005558241 neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4[401646]: [NOTICE]   (401650) : path to executable is /usr/sbin/haproxy
Dec 13 04:19:02 np0005558241 neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4[401646]: [WARNING]  (401650) : Exiting Master process...
Dec 13 04:19:02 np0005558241 neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4[401646]: [ALERT]    (401650) : Current worker (401652) exited with code 143 (Terminated)
Dec 13 04:19:02 np0005558241 neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4[401646]: [WARNING]  (401650) : All workers exited. Exiting... (0)
Dec 13 04:19:02 np0005558241 systemd[1]: libpod-5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763.scope: Deactivated successfully.
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.598 248514 DEBUG nova.virt.libvirt.vif [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:17:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1903797139',display_name='tempest-TestGettingAddress-server-1903797139',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1903797139',id=147,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKfVIjC+c2E48WoFz2Ud3i73TUL/pY6+s2/66GaU+lrLkwx0f9N1LcjeXJdOzvtjVb2jNhFM65SYiWkjK3RJ3eDcO6vcs74xDf/lQ4jF6SyHOGQ3syJuqY4MMzp7LfTqAw==',key_name='tempest-TestGettingAddress-444069759',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:17:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-zvq7q3d0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:17:53Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=7aadb3d0-64f2-4531-9896-93b087cdea5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.598 248514 DEBUG nova.network.os_vif_util [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.600 248514 DEBUG nova.network.os_vif_util [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d5:f6:8c,bridge_name='br-int',has_traffic_filtering=True,id=1271974a-de11-42b5-87b8-470c51840315,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1271974a-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.600 248514 DEBUG os_vif [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:f6:8c,bridge_name='br-int',has_traffic_filtering=True,id=1271974a-de11-42b5-87b8-470c51840315,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1271974a-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:19:02 np0005558241 podman[402904]: 2025-12-13 09:19:02.602508242 +0000 UTC m=+0.051887823 container died 5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.602 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.603 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1271974a-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.604 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.606 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.608 248514 INFO os_vif [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:f6:8c,bridge_name='br-int',has_traffic_filtering=True,id=1271974a-de11-42b5-87b8-470c51840315,network=Network(e5815775-e5d0-4b72-a008-efd9f04c6ee4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1271974a-de')#033[00m
Dec 13 04:19:02 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763-userdata-shm.mount: Deactivated successfully.
Dec 13 04:19:02 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7cab876830acab31fce5dbc2e972338fc2e9949b13d6a5f72f360a2dbded2beb-merged.mount: Deactivated successfully.
Dec 13 04:19:02 np0005558241 podman[402904]: 2025-12-13 09:19:02.649125804 +0000 UTC m=+0.098505425 container cleanup 5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:19:02 np0005558241 systemd[1]: libpod-conmon-5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763.scope: Deactivated successfully.
Dec 13 04:19:02 np0005558241 podman[402959]: 2025-12-13 09:19:02.77098042 +0000 UTC m=+0.079803689 container remove 5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 04:19:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.778 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[e43fdf81-7406-4f47-a4c0-a17f4dcac17a]: (4, ('Sat Dec 13 09:19:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4 (5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763)\n5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763\nSat Dec 13 09:19:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4 (5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763)\n5adb9377d70fd9c3b1cd85c938e8e26cfac66ab9e8cc2c82ce5ecbd94389b763\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:19:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.781 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[64e9522f-9cdf-4b0d-a5b2-d772560c5eea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:19:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.781 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5815775-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:19:02 np0005558241 kernel: tape5815775-e0: left promiscuous mode
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.784 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.796 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.799 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a090df-06e6-4537-b897-8f382441aee3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:19:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.813 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[17be6174-9311-4e45-9dbf-2a5682111381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:19:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.815 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[2c67a109-ed5c-4191-9a19-69a328c8985b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:19:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.832 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3b581ddd-7565-44f2-b880-2137f731a772]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1004898, 'reachable_time': 25872, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402975, 'error': None, 'target': 'ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:19:02 np0005558241 systemd[1]: run-netns-ovnmeta\x2de5815775\x2de5d0\x2d4b72\x2da008\x2defd9f04c6ee4.mount: Deactivated successfully.
Dec 13 04:19:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.836 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e5815775-e5d0-4b72-a008-efd9f04c6ee4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:19:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:02.837 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[4c877d2a-9acf-477a-b833-b0f24ed587e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.909 248514 INFO nova.virt.libvirt.driver [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Deleting instance files /var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c_del#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.911 248514 INFO nova.virt.libvirt.driver [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Deletion of /var/lib/nova/instances/7aadb3d0-64f2-4531-9896-93b087cdea5c_del complete#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.986 248514 INFO nova.compute.manager [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.986 248514 DEBUG oslo.service.loopingcall [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.987 248514 DEBUG nova.compute.manager [-] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:19:02 np0005558241 nova_compute[248510]: 2025-12-13 09:19:02.987 248514 DEBUG nova.network.neutron [-] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:19:02 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3538: 321 pgs: 321 active+clean; 94 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 13 KiB/s wr, 35 op/s
Dec 13 04:19:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:03 np0005558241 nova_compute[248510]: 2025-12-13 09:19:03.829 248514 DEBUG nova.network.neutron [-] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:19:03 np0005558241 nova_compute[248510]: 2025-12-13 09:19:03.856 248514 INFO nova.compute.manager [-] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Took 0.87 seconds to deallocate network for instance.#033[00m
Dec 13 04:19:03 np0005558241 nova_compute[248510]: 2025-12-13 09:19:03.939 248514 DEBUG oslo_concurrency.lockutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:19:03 np0005558241 nova_compute[248510]: 2025-12-13 09:19:03.940 248514 DEBUG oslo_concurrency.lockutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:19:03 np0005558241 nova_compute[248510]: 2025-12-13 09:19:03.948 248514 DEBUG nova.compute.manager [req-8c64a81e-85bb-4c99-93c7-4ca94429135b req-e094ef57-43d9-4f65-82e9-760a78fcca65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received event network-vif-deleted-1271974a-de11-42b5-87b8-470c51840315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:19:03 np0005558241 nova_compute[248510]: 2025-12-13 09:19:03.985 248514 DEBUG oslo_concurrency.processutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:19:04 np0005558241 nova_compute[248510]: 2025-12-13 09:19:04.250 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:04 np0005558241 nova_compute[248510]: 2025-12-13 09:19:04.369 248514 DEBUG nova.compute.manager [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received event network-vif-unplugged-1271974a-de11-42b5-87b8-470c51840315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:19:04 np0005558241 nova_compute[248510]: 2025-12-13 09:19:04.370 248514 DEBUG oslo_concurrency.lockutils [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:19:04 np0005558241 nova_compute[248510]: 2025-12-13 09:19:04.371 248514 DEBUG oslo_concurrency.lockutils [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:19:04 np0005558241 nova_compute[248510]: 2025-12-13 09:19:04.372 248514 DEBUG oslo_concurrency.lockutils [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:19:04 np0005558241 nova_compute[248510]: 2025-12-13 09:19:04.372 248514 DEBUG nova.compute.manager [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] No waiting events found dispatching network-vif-unplugged-1271974a-de11-42b5-87b8-470c51840315 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:19:04 np0005558241 nova_compute[248510]: 2025-12-13 09:19:04.373 248514 WARNING nova.compute.manager [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received unexpected event network-vif-unplugged-1271974a-de11-42b5-87b8-470c51840315 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:19:04 np0005558241 nova_compute[248510]: 2025-12-13 09:19:04.374 248514 DEBUG nova.compute.manager [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received event network-vif-plugged-1271974a-de11-42b5-87b8-470c51840315 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:19:04 np0005558241 nova_compute[248510]: 2025-12-13 09:19:04.375 248514 DEBUG oslo_concurrency.lockutils [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:19:04 np0005558241 nova_compute[248510]: 2025-12-13 09:19:04.375 248514 DEBUG oslo_concurrency.lockutils [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:19:04 np0005558241 nova_compute[248510]: 2025-12-13 09:19:04.376 248514 DEBUG oslo_concurrency.lockutils [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:19:04 np0005558241 nova_compute[248510]: 2025-12-13 09:19:04.377 248514 DEBUG nova.compute.manager [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] No waiting events found dispatching network-vif-plugged-1271974a-de11-42b5-87b8-470c51840315 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:19:04 np0005558241 nova_compute[248510]: 2025-12-13 09:19:04.377 248514 WARNING nova.compute.manager [req-3b1acc77-fa74-4c14-8723-8065898f3512 req-279e38ef-f3de-4eda-a4a9-5215a185c7f4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Received unexpected event network-vif-plugged-1271974a-de11-42b5-87b8-470c51840315 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:19:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:19:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3046399464' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:19:04 np0005558241 nova_compute[248510]: 2025-12-13 09:19:04.589 248514 DEBUG oslo_concurrency.processutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:19:04 np0005558241 nova_compute[248510]: 2025-12-13 09:19:04.595 248514 DEBUG nova.compute.provider_tree [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:19:04 np0005558241 nova_compute[248510]: 2025-12-13 09:19:04.620 248514 DEBUG nova.scheduler.client.report [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:19:04 np0005558241 nova_compute[248510]: 2025-12-13 09:19:04.657 248514 DEBUG oslo_concurrency.lockutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:19:04 np0005558241 nova_compute[248510]: 2025-12-13 09:19:04.692 248514 INFO nova.scheduler.client.report [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 7aadb3d0-64f2-4531-9896-93b087cdea5c#033[00m
Dec 13 04:19:04 np0005558241 nova_compute[248510]: 2025-12-13 09:19:04.767 248514 DEBUG oslo_concurrency.lockutils [None req-8bec0d42-a6fe-4a46-8768-eb1a471c83a1 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7aadb3d0-64f2-4531-9896-93b087cdea5c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:19:04 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3539: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 5.2 KiB/s wr, 55 op/s
Dec 13 04:19:06 np0005558241 nova_compute[248510]: 2025-12-13 09:19:06.313 248514 DEBUG nova.network.neutron [req-6fed17b3-e1f1-4bfc-9ea2-9dd0a1bbc007 req-98182f55-0dbb-4609-8f26-c02f233b8c41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updated VIF entry in instance network info cache for port 1271974a-de11-42b5-87b8-470c51840315. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:19:06 np0005558241 nova_compute[248510]: 2025-12-13 09:19:06.313 248514 DEBUG nova.network.neutron [req-6fed17b3-e1f1-4bfc-9ea2-9dd0a1bbc007 req-98182f55-0dbb-4609-8f26-c02f233b8c41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Updating instance_info_cache with network_info: [{"id": "1271974a-de11-42b5-87b8-470c51840315", "address": "fa:16:3e:d5:f6:8c", "network": {"id": "e5815775-e5d0-4b72-a008-efd9f04c6ee4", "bridge": "br-int", "label": "tempest-network-smoke--1361908117", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed5:f68c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1271974a-de", "ovs_interfaceid": "1271974a-de11-42b5-87b8-470c51840315", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:19:06 np0005558241 nova_compute[248510]: 2025-12-13 09:19:06.341 248514 DEBUG oslo_concurrency.lockutils [req-6fed17b3-e1f1-4bfc-9ea2-9dd0a1bbc007 req-98182f55-0dbb-4609-8f26-c02f233b8c41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7aadb3d0-64f2-4531-9896-93b087cdea5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:19:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3540: 321 pgs: 321 active+clean; 41 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 4.4 KiB/s wr, 54 op/s
Dec 13 04:19:07 np0005558241 nova_compute[248510]: 2025-12-13 09:19:07.606 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:07 np0005558241 nova_compute[248510]: 2025-12-13 09:19:07.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:19:07 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:08 np0005558241 podman[403000]: 2025-12-13 09:19:08.977641012 +0000 UTC m=+0.056153530 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:19:08 np0005558241 podman[402999]: 2025-12-13 09:19:08.989412085 +0000 UTC m=+0.074008235 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 04:19:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3541: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 4.7 KiB/s wr, 55 op/s
Dec 13 04:19:09 np0005558241 podman[402998]: 2025-12-13 09:19:09.044567439 +0000 UTC m=+0.126012760 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true)
Dec 13 04:19:09 np0005558241 nova_compute[248510]: 2025-12-13 09:19:09.275 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:19:09
Dec 13 04:19:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:19:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:19:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['images', 'volumes', 'vms', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta']
Dec 13 04:19:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:19:09 np0005558241 nova_compute[248510]: 2025-12-13 09:19:09.511 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617534.5099504, db3ef8e8-1a8a-42cc-a5ed-d3e401098f07 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:19:09 np0005558241 nova_compute[248510]: 2025-12-13 09:19:09.512 248514 INFO nova.compute.manager [-] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:19:09 np0005558241 nova_compute[248510]: 2025-12-13 09:19:09.544 248514 DEBUG nova.compute.manager [None req-b80e8ac5-f84c-4138-b40a-4764013f1218 - - - - - -] [instance: db3ef8e8-1a8a-42cc-a5ed-d3e401098f07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:19:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:19:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:19:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:19:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:19:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:19:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:19:10 np0005558241 podman[403148]: 2025-12-13 09:19:10.394308711 +0000 UTC m=+0.070947728 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:19:10 np0005558241 podman[403148]: 2025-12-13 09:19:10.54757754 +0000 UTC m=+0.224216547 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 04:19:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3542: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 04:19:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:19:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:19:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:19:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:19:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:19:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:19:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:19:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:19:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:19:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:19:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:19:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:19:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:19:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:19:12 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:19:12 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:19:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:19:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:19:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:19:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:19:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:19:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:19:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:19:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:19:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:19:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:19:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:19:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:19:12 np0005558241 auditd[696]: Audit daemon rotating log files
Dec 13 04:19:12 np0005558241 nova_compute[248510]: 2025-12-13 09:19:12.609 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:12 np0005558241 podman[403474]: 2025-12-13 09:19:12.775296518 +0000 UTC m=+0.033027354 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:19:12 np0005558241 podman[403474]: 2025-12-13 09:19:12.913737328 +0000 UTC m=+0.171468084 container create aeedd3f1d389c6f7171d4de1452c1cf2a5e87c62534443c677844a379a4efd4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 04:19:12 np0005558241 systemd[1]: Started libpod-conmon-aeedd3f1d389c6f7171d4de1452c1cf2a5e87c62534443c677844a379a4efd4a.scope.
Dec 13 04:19:12 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:19:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3543: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 04:19:13 np0005558241 podman[403474]: 2025-12-13 09:19:13.01336932 +0000 UTC m=+0.271100146 container init aeedd3f1d389c6f7171d4de1452c1cf2a5e87c62534443c677844a379a4efd4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 04:19:13 np0005558241 podman[403474]: 2025-12-13 09:19:13.023843111 +0000 UTC m=+0.281573857 container start aeedd3f1d389c6f7171d4de1452c1cf2a5e87c62534443c677844a379a4efd4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 04:19:13 np0005558241 podman[403474]: 2025-12-13 09:19:13.027642486 +0000 UTC m=+0.285373322 container attach aeedd3f1d389c6f7171d4de1452c1cf2a5e87c62534443c677844a379a4efd4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 04:19:13 np0005558241 brave_chatterjee[403490]: 167 167
Dec 13 04:19:13 np0005558241 systemd[1]: libpod-aeedd3f1d389c6f7171d4de1452c1cf2a5e87c62534443c677844a379a4efd4a.scope: Deactivated successfully.
Dec 13 04:19:13 np0005558241 podman[403474]: 2025-12-13 09:19:13.030961129 +0000 UTC m=+0.288691915 container died aeedd3f1d389c6f7171d4de1452c1cf2a5e87c62534443c677844a379a4efd4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:19:13 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a5f74a7255a0b9d3170abaf185392807ab88c9709bd84f32f407f2c2d4725a38-merged.mount: Deactivated successfully.
Dec 13 04:19:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:19:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:19:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:19:13 np0005558241 podman[403474]: 2025-12-13 09:19:13.085373875 +0000 UTC m=+0.343104651 container remove aeedd3f1d389c6f7171d4de1452c1cf2a5e87c62534443c677844a379a4efd4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:19:13 np0005558241 systemd[1]: libpod-conmon-aeedd3f1d389c6f7171d4de1452c1cf2a5e87c62534443c677844a379a4efd4a.scope: Deactivated successfully.
Dec 13 04:19:13 np0005558241 podman[403513]: 2025-12-13 09:19:13.309883419 +0000 UTC m=+0.053613787 container create 2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 04:19:13 np0005558241 systemd[1]: Started libpod-conmon-2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88.scope.
Dec 13 04:19:13 np0005558241 podman[403513]: 2025-12-13 09:19:13.287134212 +0000 UTC m=+0.030864580 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:19:13 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:19:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e2e98e310c0cda839731ffa49d842b0f948405adb44459f24aeb9680f1ab7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:19:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e2e98e310c0cda839731ffa49d842b0f948405adb44459f24aeb9680f1ab7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:19:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e2e98e310c0cda839731ffa49d842b0f948405adb44459f24aeb9680f1ab7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:19:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e2e98e310c0cda839731ffa49d842b0f948405adb44459f24aeb9680f1ab7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:19:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e2e98e310c0cda839731ffa49d842b0f948405adb44459f24aeb9680f1ab7b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:19:13 np0005558241 podman[403513]: 2025-12-13 09:19:13.421244134 +0000 UTC m=+0.164974582 container init 2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 04:19:13 np0005558241 podman[403513]: 2025-12-13 09:19:13.435340155 +0000 UTC m=+0.179070543 container start 2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 04:19:13 np0005558241 podman[403513]: 2025-12-13 09:19:13.443732984 +0000 UTC m=+0.187463432 container attach 2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 04:19:14 np0005558241 practical_knuth[403530]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:19:14 np0005558241 practical_knuth[403530]: --> All data devices are unavailable
Dec 13 04:19:14 np0005558241 systemd[1]: libpod-2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88.scope: Deactivated successfully.
Dec 13 04:19:14 np0005558241 conmon[403530]: conmon 2e57cc744d3430914349 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88.scope/container/memory.events
Dec 13 04:19:14 np0005558241 podman[403513]: 2025-12-13 09:19:14.088032538 +0000 UTC m=+0.831762926 container died 2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 04:19:14 np0005558241 systemd[1]: var-lib-containers-storage-overlay-83e2e98e310c0cda839731ffa49d842b0f948405adb44459f24aeb9680f1ab7b-merged.mount: Deactivated successfully.
Dec 13 04:19:14 np0005558241 podman[403513]: 2025-12-13 09:19:14.145979402 +0000 UTC m=+0.889709760 container remove 2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_knuth, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 04:19:14 np0005558241 systemd[1]: libpod-conmon-2e57cc744d343091434933c78b5d90d739dac5c63245d1f7d76c2ee39547ac88.scope: Deactivated successfully.
Dec 13 04:19:14 np0005558241 nova_compute[248510]: 2025-12-13 09:19:14.277 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:14 np0005558241 podman[403622]: 2025-12-13 09:19:14.70256117 +0000 UTC m=+0.067274378 container create d70d8bb3b09fd3b7e1a53120c8c14e689b4f3f0ac9071343266b41177e1490ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:19:14 np0005558241 systemd[1]: Started libpod-conmon-d70d8bb3b09fd3b7e1a53120c8c14e689b4f3f0ac9071343266b41177e1490ba.scope.
Dec 13 04:19:14 np0005558241 podman[403622]: 2025-12-13 09:19:14.680701855 +0000 UTC m=+0.045415083 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:19:14 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:19:14 np0005558241 podman[403622]: 2025-12-13 09:19:14.795047214 +0000 UTC m=+0.159760422 container init d70d8bb3b09fd3b7e1a53120c8c14e689b4f3f0ac9071343266b41177e1490ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shtern, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:19:14 np0005558241 podman[403622]: 2025-12-13 09:19:14.803236238 +0000 UTC m=+0.167949426 container start d70d8bb3b09fd3b7e1a53120c8c14e689b4f3f0ac9071343266b41177e1490ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:19:14 np0005558241 podman[403622]: 2025-12-13 09:19:14.806441278 +0000 UTC m=+0.171154466 container attach d70d8bb3b09fd3b7e1a53120c8c14e689b4f3f0ac9071343266b41177e1490ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shtern, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:19:14 np0005558241 unruffled_shtern[403639]: 167 167
Dec 13 04:19:14 np0005558241 systemd[1]: libpod-d70d8bb3b09fd3b7e1a53120c8c14e689b4f3f0ac9071343266b41177e1490ba.scope: Deactivated successfully.
Dec 13 04:19:14 np0005558241 podman[403622]: 2025-12-13 09:19:14.812456258 +0000 UTC m=+0.177169456 container died d70d8bb3b09fd3b7e1a53120c8c14e689b4f3f0ac9071343266b41177e1490ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shtern, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:19:14 np0005558241 systemd[1]: var-lib-containers-storage-overlay-82905772f25c4c34acd5ab19c3cfe0c0f4b3a435b097d06c694f75d682c538e9-merged.mount: Deactivated successfully.
Dec 13 04:19:14 np0005558241 podman[403622]: 2025-12-13 09:19:14.856223169 +0000 UTC m=+0.220936357 container remove d70d8bb3b09fd3b7e1a53120c8c14e689b4f3f0ac9071343266b41177e1490ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shtern, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 04:19:14 np0005558241 systemd[1]: libpod-conmon-d70d8bb3b09fd3b7e1a53120c8c14e689b4f3f0ac9071343266b41177e1490ba.scope: Deactivated successfully.
Dec 13 04:19:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3544: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 597 B/s wr, 23 op/s
Dec 13 04:19:15 np0005558241 podman[403664]: 2025-12-13 09:19:15.040454649 +0000 UTC m=+0.046837408 container create 54fc8fb980fe8aeb341775ba7515d3ce0dfa926443c01e33889a2b341a15eb21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:19:15 np0005558241 systemd[1]: Started libpod-conmon-54fc8fb980fe8aeb341775ba7515d3ce0dfa926443c01e33889a2b341a15eb21.scope.
Dec 13 04:19:15 np0005558241 podman[403664]: 2025-12-13 09:19:15.021524177 +0000 UTC m=+0.027906956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:19:15 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:19:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:19:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3034764914' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:19:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3034764914' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8dfaf7bd55f4b1a84f3242188a16293501aaf353f217cee679f68ae880bf7df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:19:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8dfaf7bd55f4b1a84f3242188a16293501aaf353f217cee679f68ae880bf7df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:19:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8dfaf7bd55f4b1a84f3242188a16293501aaf353f217cee679f68ae880bf7df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:19:15 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8dfaf7bd55f4b1a84f3242188a16293501aaf353f217cee679f68ae880bf7df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:19:15 np0005558241 podman[403664]: 2025-12-13 09:19:15.13841816 +0000 UTC m=+0.144800949 container init 54fc8fb980fe8aeb341775ba7515d3ce0dfa926443c01e33889a2b341a15eb21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:19:15 np0005558241 podman[403664]: 2025-12-13 09:19:15.144302357 +0000 UTC m=+0.150685116 container start 54fc8fb980fe8aeb341775ba7515d3ce0dfa926443c01e33889a2b341a15eb21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 04:19:15 np0005558241 podman[403664]: 2025-12-13 09:19:15.14724909 +0000 UTC m=+0.153631849 container attach 54fc8fb980fe8aeb341775ba7515d3ce0dfa926443c01e33889a2b341a15eb21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]: {
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:    "0": [
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:        {
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "devices": [
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "/dev/loop3"
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            ],
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "lv_name": "ceph_lv0",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "lv_size": "21470642176",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "name": "ceph_lv0",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "tags": {
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.cluster_name": "ceph",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.crush_device_class": "",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.encrypted": "0",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.objectstore": "bluestore",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.osd_id": "0",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.type": "block",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.vdo": "0",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.with_tpm": "0"
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            },
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "type": "block",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "vg_name": "ceph_vg0"
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:        }
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:    ],
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:    "1": [
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:        {
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "devices": [
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "/dev/loop4"
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            ],
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "lv_name": "ceph_lv1",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "lv_size": "21470642176",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "name": "ceph_lv1",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "tags": {
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.cluster_name": "ceph",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.crush_device_class": "",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.encrypted": "0",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.objectstore": "bluestore",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.osd_id": "1",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.type": "block",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.vdo": "0",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.with_tpm": "0"
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            },
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "type": "block",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "vg_name": "ceph_vg1"
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:        }
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:    ],
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:    "2": [
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:        {
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "devices": [
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "/dev/loop5"
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            ],
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "lv_name": "ceph_lv2",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "lv_size": "21470642176",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "name": "ceph_lv2",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "tags": {
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.cluster_name": "ceph",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.crush_device_class": "",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.encrypted": "0",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.objectstore": "bluestore",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.osd_id": "2",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.type": "block",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.vdo": "0",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:                "ceph.with_tpm": "0"
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            },
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "type": "block",
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:            "vg_name": "ceph_vg2"
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:        }
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]:    ]
Dec 13 04:19:15 np0005558241 amazing_jepsen[403680]: }
Dec 13 04:19:15 np0005558241 systemd[1]: libpod-54fc8fb980fe8aeb341775ba7515d3ce0dfa926443c01e33889a2b341a15eb21.scope: Deactivated successfully.
Dec 13 04:19:15 np0005558241 podman[403664]: 2025-12-13 09:19:15.466750561 +0000 UTC m=+0.473133330 container died 54fc8fb980fe8aeb341775ba7515d3ce0dfa926443c01e33889a2b341a15eb21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:19:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f8dfaf7bd55f4b1a84f3242188a16293501aaf353f217cee679f68ae880bf7df-merged.mount: Deactivated successfully.
Dec 13 04:19:15 np0005558241 podman[403664]: 2025-12-13 09:19:15.509735252 +0000 UTC m=+0.516118031 container remove 54fc8fb980fe8aeb341775ba7515d3ce0dfa926443c01e33889a2b341a15eb21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 04:19:15 np0005558241 systemd[1]: libpod-conmon-54fc8fb980fe8aeb341775ba7515d3ce0dfa926443c01e33889a2b341a15eb21.scope: Deactivated successfully.
Dec 13 04:19:16 np0005558241 podman[403762]: 2025-12-13 09:19:16.063286485 +0000 UTC m=+0.048337925 container create 187325c7ffb9e76e52e104b8bfdaee874945d31e810e24d5a65d3d4e5f33059a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_tharp, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:19:16 np0005558241 systemd[1]: Started libpod-conmon-187325c7ffb9e76e52e104b8bfdaee874945d31e810e24d5a65d3d4e5f33059a.scope.
Dec 13 04:19:16 np0005558241 podman[403762]: 2025-12-13 09:19:16.042560619 +0000 UTC m=+0.027612059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:19:16 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:19:16 np0005558241 podman[403762]: 2025-12-13 09:19:16.173061601 +0000 UTC m=+0.158113061 container init 187325c7ffb9e76e52e104b8bfdaee874945d31e810e24d5a65d3d4e5f33059a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:19:16 np0005558241 podman[403762]: 2025-12-13 09:19:16.187554092 +0000 UTC m=+0.172605502 container start 187325c7ffb9e76e52e104b8bfdaee874945d31e810e24d5a65d3d4e5f33059a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 04:19:16 np0005558241 podman[403762]: 2025-12-13 09:19:16.191386567 +0000 UTC m=+0.176438037 container attach 187325c7ffb9e76e52e104b8bfdaee874945d31e810e24d5a65d3d4e5f33059a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_tharp, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 04:19:16 np0005558241 exciting_tharp[403779]: 167 167
Dec 13 04:19:16 np0005558241 systemd[1]: libpod-187325c7ffb9e76e52e104b8bfdaee874945d31e810e24d5a65d3d4e5f33059a.scope: Deactivated successfully.
Dec 13 04:19:16 np0005558241 podman[403762]: 2025-12-13 09:19:16.194902975 +0000 UTC m=+0.179954425 container died 187325c7ffb9e76e52e104b8bfdaee874945d31e810e24d5a65d3d4e5f33059a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_tharp, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Dec 13 04:19:16 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c5ac879590d5ab5159b845e7b560dafc34f85de39f394cc4891d43dcf3de2521-merged.mount: Deactivated successfully.
Dec 13 04:19:16 np0005558241 podman[403762]: 2025-12-13 09:19:16.237227969 +0000 UTC m=+0.222279379 container remove 187325c7ffb9e76e52e104b8bfdaee874945d31e810e24d5a65d3d4e5f33059a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:19:16 np0005558241 systemd[1]: libpod-conmon-187325c7ffb9e76e52e104b8bfdaee874945d31e810e24d5a65d3d4e5f33059a.scope: Deactivated successfully.
Dec 13 04:19:16 np0005558241 podman[403802]: 2025-12-13 09:19:16.434741301 +0000 UTC m=+0.047713240 container create 4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_germain, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 04:19:16 np0005558241 systemd[1]: Started libpod-conmon-4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5.scope.
Dec 13 04:19:16 np0005558241 podman[403802]: 2025-12-13 09:19:16.41464385 +0000 UTC m=+0.027615819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:19:16 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:19:16 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c8377d036e0d9dbb1446c442e9b70b7a43933dec10b5916cd279392735b8ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:19:16 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c8377d036e0d9dbb1446c442e9b70b7a43933dec10b5916cd279392735b8ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:19:16 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c8377d036e0d9dbb1446c442e9b70b7a43933dec10b5916cd279392735b8ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:19:16 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c8377d036e0d9dbb1446c442e9b70b7a43933dec10b5916cd279392735b8ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:19:16 np0005558241 podman[403802]: 2025-12-13 09:19:16.55027778 +0000 UTC m=+0.163249719 container init 4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_germain, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 04:19:16 np0005558241 podman[403802]: 2025-12-13 09:19:16.560047943 +0000 UTC m=+0.173019882 container start 4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_germain, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 04:19:16 np0005558241 podman[403802]: 2025-12-13 09:19:16.563954261 +0000 UTC m=+0.176926210 container attach 4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:19:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3545: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 596 B/s rd, 255 B/s wr, 1 op/s
Dec 13 04:19:17 np0005558241 lvm[403897]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:19:17 np0005558241 lvm[403897]: VG ceph_vg0 finished
Dec 13 04:19:17 np0005558241 lvm[403898]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:19:17 np0005558241 lvm[403898]: VG ceph_vg1 finished
Dec 13 04:19:17 np0005558241 lvm[403900]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:19:17 np0005558241 lvm[403900]: VG ceph_vg2 finished
Dec 13 04:19:17 np0005558241 elated_germain[403818]: {}
Dec 13 04:19:17 np0005558241 systemd[1]: libpod-4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5.scope: Deactivated successfully.
Dec 13 04:19:17 np0005558241 podman[403802]: 2025-12-13 09:19:17.509939562 +0000 UTC m=+1.122911501 container died 4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_germain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 04:19:17 np0005558241 systemd[1]: libpod-4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5.scope: Consumed 1.576s CPU time.
Dec 13 04:19:17 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d1c8377d036e0d9dbb1446c442e9b70b7a43933dec10b5916cd279392735b8ce-merged.mount: Deactivated successfully.
Dec 13 04:19:17 np0005558241 podman[403802]: 2025-12-13 09:19:17.565328642 +0000 UTC m=+1.178300591 container remove 4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:19:17 np0005558241 nova_compute[248510]: 2025-12-13 09:19:17.570 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617542.5681205, 7aadb3d0-64f2-4531-9896-93b087cdea5c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:19:17 np0005558241 nova_compute[248510]: 2025-12-13 09:19:17.572 248514 INFO nova.compute.manager [-] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:19:17 np0005558241 systemd[1]: libpod-conmon-4eb2cedc1108ee18935ed4f5866ebd347894ddf525d7b88dd64c33ecd57439a5.scope: Deactivated successfully.
Dec 13 04:19:17 np0005558241 nova_compute[248510]: 2025-12-13 09:19:17.598 248514 DEBUG nova.compute.manager [None req-c9d65d3c-5327-498f-9461-1ff528634278 - - - - - -] [instance: 7aadb3d0-64f2-4531-9896-93b087cdea5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:19:17 np0005558241 nova_compute[248510]: 2025-12-13 09:19:17.613 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:19:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:19:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:19:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:19:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:18 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:19:18 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:19:18 np0005558241 nova_compute[248510]: 2025-12-13 09:19:18.572 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:18 np0005558241 nova_compute[248510]: 2025-12-13 09:19:18.683 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3546: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 1 op/s
Dec 13 04:19:19 np0005558241 nova_compute[248510]: 2025-12-13 09:19:19.322 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3547: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.4798206657980494e-05 of space, bias 1.0, pg target 0.004439461997394148 quantized to 32 (current 32)
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697110352885981 of space, bias 1.0, pg target 0.20091331058657944 quantized to 32 (current 32)
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.723939957605983e-07 of space, bias 4.0, pg target 0.0006868727949127179 quantized to 16 (current 32)
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:19:22 np0005558241 nova_compute[248510]: 2025-12-13 09:19:22.616 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3548: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:19:23 np0005558241 nova_compute[248510]: 2025-12-13 09:19:23.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:19:24 np0005558241 nova_compute[248510]: 2025-12-13 09:19:24.367 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3549: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:19:26 np0005558241 nova_compute[248510]: 2025-12-13 09:19:26.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:19:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3550: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:19:27 np0005558241 nova_compute[248510]: 2025-12-13 09:19:27.619 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3551: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:19:29 np0005558241 nova_compute[248510]: 2025-12-13 09:19:29.369 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:29 np0005558241 nova_compute[248510]: 2025-12-13 09:19:29.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:19:29 np0005558241 nova_compute[248510]: 2025-12-13 09:19:29.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:19:29 np0005558241 nova_compute[248510]: 2025-12-13 09:19:29.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:19:29 np0005558241 nova_compute[248510]: 2025-12-13 09:19:29.820 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:19:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3552: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:19:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:31.869 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:19:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:31.870 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:19:31 np0005558241 nova_compute[248510]: 2025-12-13 09:19:31.870 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:32 np0005558241 nova_compute[248510]: 2025-12-13 09:19:32.622 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3553: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:19:34 np0005558241 nova_compute[248510]: 2025-12-13 09:19:34.372 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:34 np0005558241 nova_compute[248510]: 2025-12-13 09:19:34.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:19:34 np0005558241 nova_compute[248510]: 2025-12-13 09:19:34.962 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:19:34 np0005558241 nova_compute[248510]: 2025-12-13 09:19:34.962 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:19:34 np0005558241 nova_compute[248510]: 2025-12-13 09:19:34.963 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:19:34 np0005558241 nova_compute[248510]: 2025-12-13 09:19:34.963 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:19:34 np0005558241 nova_compute[248510]: 2025-12-13 09:19:34.963 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:19:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3554: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:19:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:19:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/164859578' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:19:35 np0005558241 nova_compute[248510]: 2025-12-13 09:19:35.714 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.751s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:19:35 np0005558241 nova_compute[248510]: 2025-12-13 09:19:35.897 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:19:35 np0005558241 nova_compute[248510]: 2025-12-13 09:19:35.898 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3491MB free_disk=59.987393531017005GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:19:35 np0005558241 nova_compute[248510]: 2025-12-13 09:19:35.898 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:19:35 np0005558241 nova_compute[248510]: 2025-12-13 09:19:35.899 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:19:35 np0005558241 nova_compute[248510]: 2025-12-13 09:19:35.977 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:19:35 np0005558241 nova_compute[248510]: 2025-12-13 09:19:35.977 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:19:36 np0005558241 nova_compute[248510]: 2025-12-13 09:19:36.005 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:19:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:19:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/621910952' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:19:36 np0005558241 nova_compute[248510]: 2025-12-13 09:19:36.681 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.676s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:19:36 np0005558241 nova_compute[248510]: 2025-12-13 09:19:36.689 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:19:36 np0005558241 nova_compute[248510]: 2025-12-13 09:19:36.712 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:19:36 np0005558241 nova_compute[248510]: 2025-12-13 09:19:36.751 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:19:36 np0005558241 nova_compute[248510]: 2025-12-13 09:19:36.752 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:19:36 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:36.872 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:19:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3555: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:19:37 np0005558241 nova_compute[248510]: 2025-12-13 09:19:37.625 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:37 np0005558241 nova_compute[248510]: 2025-12-13 09:19:37.753 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:19:37 np0005558241 nova_compute[248510]: 2025-12-13 09:19:37.753 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:19:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:38 np0005558241 nova_compute[248510]: 2025-12-13 09:19:38.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:19:38 np0005558241 nova_compute[248510]: 2025-12-13 09:19:38.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:19:38 np0005558241 nova_compute[248510]: 2025-12-13 09:19:38.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:19:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3556: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:19:39 np0005558241 nova_compute[248510]: 2025-12-13 09:19:39.374 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:39 np0005558241 podman[403990]: 2025-12-13 09:19:39.983145897 +0000 UTC m=+0.065322699 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:19:39 np0005558241 podman[403989]: 2025-12-13 09:19:39.988030358 +0000 UTC m=+0.072304792 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd)
Dec 13 04:19:40 np0005558241 podman[403988]: 2025-12-13 09:19:40.01658942 +0000 UTC m=+0.101030089 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202)
Dec 13 04:19:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:19:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:19:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:19:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:19:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:19:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:19:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:40.283 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:85:21 10.100.0.2 2001:db8::f816:3eff:fe5f:8521'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe5f:8521/64', 'neutron:device_id': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60522aa7-9184-4e18-b84a-db845e6d2800, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=0c3d5666-7453-4eb7-8973-bef187574418) old=Port_Binding(mac=['fa:16:3e:5f:85:21 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:19:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:40.285 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 0c3d5666-7453-4eb7-8973-bef187574418 in datapath e41a163e-7597-4a26-aa5d-b894f952cca4 updated#033[00m
Dec 13 04:19:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:40.288 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e41a163e-7597-4a26-aa5d-b894f952cca4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:19:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:40.289 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f5247e23-f5cd-44f5-a515-09628d9185da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:19:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3557: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:19:42 np0005558241 nova_compute[248510]: 2025-12-13 09:19:42.629 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3558: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:19:44 np0005558241 nova_compute[248510]: 2025-12-13 09:19:44.377 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3559: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:19:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:46.179 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:85:21 10.100.0.2 2001:db8:0:1:f816:3eff:fe5f:8521 2001:db8::f816:3eff:fe5f:8521'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe5f:8521/64 2001:db8::f816:3eff:fe5f:8521/64', 'neutron:device_id': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60522aa7-9184-4e18-b84a-db845e6d2800, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=0c3d5666-7453-4eb7-8973-bef187574418) old=Port_Binding(mac=['fa:16:3e:5f:85:21 10.100.0.2 2001:db8::f816:3eff:fe5f:8521'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe5f:8521/64', 'neutron:device_id': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:19:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:46.180 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 0c3d5666-7453-4eb7-8973-bef187574418 in datapath e41a163e-7597-4a26-aa5d-b894f952cca4 updated#033[00m
Dec 13 04:19:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:46.181 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e41a163e-7597-4a26-aa5d-b894f952cca4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:19:46 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:46.182 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1204dfc2-1ac1-4a65-9aff-94b70c05c2ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:19:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3560: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:19:47 np0005558241 nova_compute[248510]: 2025-12-13 09:19:47.633 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3561: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Dec 13 04:19:49 np0005558241 nova_compute[248510]: 2025-12-13 09:19:49.380 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3562: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 04:19:51 np0005558241 nova_compute[248510]: 2025-12-13 09:19:51.675 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:19:51 np0005558241 nova_compute[248510]: 2025-12-13 09:19:51.675 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:19:51 np0005558241 nova_compute[248510]: 2025-12-13 09:19:51.695 248514 DEBUG nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:19:51 np0005558241 nova_compute[248510]: 2025-12-13 09:19:51.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:19:51 np0005558241 nova_compute[248510]: 2025-12-13 09:19:51.785 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:19:51 np0005558241 nova_compute[248510]: 2025-12-13 09:19:51.786 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:19:51 np0005558241 nova_compute[248510]: 2025-12-13 09:19:51.794 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:19:51 np0005558241 nova_compute[248510]: 2025-12-13 09:19:51.795 248514 INFO nova.compute.claims [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:19:51 np0005558241 nova_compute[248510]: 2025-12-13 09:19:51.960 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:19:52 np0005558241 nova_compute[248510]: 2025-12-13 09:19:52.636 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:19:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4276888792' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:19:52 np0005558241 nova_compute[248510]: 2025-12-13 09:19:52.697 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.737s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:19:52 np0005558241 nova_compute[248510]: 2025-12-13 09:19:52.707 248514 DEBUG nova.compute.provider_tree [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:19:52 np0005558241 nova_compute[248510]: 2025-12-13 09:19:52.741 248514 DEBUG nova.scheduler.client.report [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:19:52 np0005558241 nova_compute[248510]: 2025-12-13 09:19:52.778 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:19:52 np0005558241 nova_compute[248510]: 2025-12-13 09:19:52.779 248514 DEBUG nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:19:52 np0005558241 nova_compute[248510]: 2025-12-13 09:19:52.831 248514 DEBUG nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:19:52 np0005558241 nova_compute[248510]: 2025-12-13 09:19:52.832 248514 DEBUG nova.network.neutron [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:19:52 np0005558241 nova_compute[248510]: 2025-12-13 09:19:52.863 248514 INFO nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:19:52 np0005558241 nova_compute[248510]: 2025-12-13 09:19:52.927 248514 DEBUG nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3563: 321 pgs: 321 active+clean; 41 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 04:19:53 np0005558241 nova_compute[248510]: 2025-12-13 09:19:53.036 248514 DEBUG nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:19:53 np0005558241 nova_compute[248510]: 2025-12-13 09:19:53.038 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:19:53 np0005558241 nova_compute[248510]: 2025-12-13 09:19:53.038 248514 INFO nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Creating image(s)#033[00m
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.078330) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617593078458, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 847, "num_deletes": 257, "total_data_size": 1191091, "memory_usage": 1218104, "flush_reason": "Manual Compaction"}
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Dec 13 04:19:53 np0005558241 nova_compute[248510]: 2025-12-13 09:19:53.093 248514 DEBUG nova.storage.rbd_utils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617593108472, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 1159516, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70567, "largest_seqno": 71413, "table_properties": {"data_size": 1155293, "index_size": 1938, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9217, "raw_average_key_size": 19, "raw_value_size": 1146772, "raw_average_value_size": 2364, "num_data_blocks": 87, "num_entries": 485, "num_filter_entries": 485, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765617521, "oldest_key_time": 1765617521, "file_creation_time": 1765617593, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 30185 microseconds, and 5112 cpu microseconds.
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.108530) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 1159516 bytes OK
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.108559) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.110253) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.110274) EVENT_LOG_v1 {"time_micros": 1765617593110267, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.110300) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 1186897, prev total WAL file size 1186897, number of live WAL files 2.
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.110958) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303134' seq:72057594037927935, type:22 .. '6C6F676D0033323637' seq:0, type:0; will stop at (end)
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(1132KB)], [167(9987KB)]
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617593111009, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 11386647, "oldest_snapshot_seqno": -1}
Dec 13 04:19:53 np0005558241 nova_compute[248510]: 2025-12-13 09:19:53.125 248514 DEBUG nova.storage.rbd_utils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:19:53 np0005558241 nova_compute[248510]: 2025-12-13 09:19:53.155 248514 DEBUG nova.storage.rbd_utils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:19:53 np0005558241 nova_compute[248510]: 2025-12-13 09:19:53.160 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 8877 keys, 11269453 bytes, temperature: kUnknown
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617593205464, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 11269453, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11212994, "index_size": 33178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 233952, "raw_average_key_size": 26, "raw_value_size": 11057586, "raw_average_value_size": 1245, "num_data_blocks": 1281, "num_entries": 8877, "num_filter_entries": 8877, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765617593, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.205865) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 11269453 bytes
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.207893) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.4 rd, 119.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.8 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(19.5) write-amplify(9.7) OK, records in: 9402, records dropped: 525 output_compression: NoCompression
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.207915) EVENT_LOG_v1 {"time_micros": 1765617593207905, "job": 104, "event": "compaction_finished", "compaction_time_micros": 94572, "compaction_time_cpu_micros": 27646, "output_level": 6, "num_output_files": 1, "total_output_size": 11269453, "num_input_records": 9402, "num_output_records": 8877, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617593212734, "job": 104, "event": "table_file_deletion", "file_number": 169}
Dec 13 04:19:53 np0005558241 nova_compute[248510]: 2025-12-13 09:19:53.213 248514 DEBUG nova.policy [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617593216812, "job": 104, "event": "table_file_deletion", "file_number": 167}
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.110849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.216968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.216973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.216975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.216977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:19:53 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:19:53.216979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:19:53 np0005558241 nova_compute[248510]: 2025-12-13 09:19:53.256 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:19:53 np0005558241 nova_compute[248510]: 2025-12-13 09:19:53.257 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:19:53 np0005558241 nova_compute[248510]: 2025-12-13 09:19:53.258 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:19:53 np0005558241 nova_compute[248510]: 2025-12-13 09:19:53.258 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:19:53 np0005558241 nova_compute[248510]: 2025-12-13 09:19:53.287 248514 DEBUG nova.storage.rbd_utils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:19:53 np0005558241 nova_compute[248510]: 2025-12-13 09:19:53.292 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:19:54 np0005558241 nova_compute[248510]: 2025-12-13 09:19:54.382 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3564: 321 pgs: 321 active+clean; 52 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 8.4 KiB/s rd, 77 KiB/s wr, 17 op/s
Dec 13 04:19:55 np0005558241 nova_compute[248510]: 2025-12-13 09:19:55.071 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.778s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:19:55 np0005558241 nova_compute[248510]: 2025-12-13 09:19:55.146 248514 DEBUG nova.storage.rbd_utils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:19:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:55.456 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:19:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:55.456 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:19:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:19:55.457 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:19:55 np0005558241 nova_compute[248510]: 2025-12-13 09:19:55.459 248514 DEBUG nova.network.neutron [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Successfully created port: 76d94bc0-cb6b-4bd0-8366-5158fdaece5b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:19:55 np0005558241 nova_compute[248510]: 2025-12-13 09:19:55.726 248514 DEBUG nova.objects.instance [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 387ffe9e-b998-43aa-bb42-e87639c1ba6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:19:55 np0005558241 nova_compute[248510]: 2025-12-13 09:19:55.751 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:19:55 np0005558241 nova_compute[248510]: 2025-12-13 09:19:55.752 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Ensure instance console log exists: /var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:19:55 np0005558241 nova_compute[248510]: 2025-12-13 09:19:55.753 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:19:55 np0005558241 nova_compute[248510]: 2025-12-13 09:19:55.754 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:19:55 np0005558241 nova_compute[248510]: 2025-12-13 09:19:55.754 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:19:56 np0005558241 nova_compute[248510]: 2025-12-13 09:19:56.352 248514 DEBUG nova.network.neutron [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Successfully updated port: 76d94bc0-cb6b-4bd0-8366-5158fdaece5b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:19:56 np0005558241 nova_compute[248510]: 2025-12-13 09:19:56.384 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:19:56 np0005558241 nova_compute[248510]: 2025-12-13 09:19:56.385 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:19:56 np0005558241 nova_compute[248510]: 2025-12-13 09:19:56.385 248514 DEBUG nova.network.neutron [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:19:56 np0005558241 nova_compute[248510]: 2025-12-13 09:19:56.465 248514 DEBUG nova.compute.manager [req-00f9f8f8-b928-4aa4-b1a9-9d3e8efb0167 req-3d85e716-7011-4043-84f8-b4928596cd41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received event network-changed-76d94bc0-cb6b-4bd0-8366-5158fdaece5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:19:56 np0005558241 nova_compute[248510]: 2025-12-13 09:19:56.466 248514 DEBUG nova.compute.manager [req-00f9f8f8-b928-4aa4-b1a9-9d3e8efb0167 req-3d85e716-7011-4043-84f8-b4928596cd41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Refreshing instance network info cache due to event network-changed-76d94bc0-cb6b-4bd0-8366-5158fdaece5b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:19:56 np0005558241 nova_compute[248510]: 2025-12-13 09:19:56.466 248514 DEBUG oslo_concurrency.lockutils [req-00f9f8f8-b928-4aa4-b1a9-9d3e8efb0167 req-3d85e716-7011-4043-84f8-b4928596cd41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:19:56 np0005558241 nova_compute[248510]: 2025-12-13 09:19:56.596 248514 DEBUG nova.network.neutron [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:19:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3565: 321 pgs: 321 active+clean; 52 MiB data, 999 MiB used, 59 GiB / 60 GiB avail; 8.4 KiB/s rd, 77 KiB/s wr, 17 op/s
Dec 13 04:19:57 np0005558241 nova_compute[248510]: 2025-12-13 09:19:57.639 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3566: 321 pgs: 321 active+clean; 80 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.5 MiB/s wr, 42 op/s
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.383 248514 DEBUG nova.network.neutron [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updating instance_info_cache with network_info: [{"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.404 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.421 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.421 248514 DEBUG nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Instance network_info: |[{"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.421 248514 DEBUG oslo_concurrency.lockutils [req-00f9f8f8-b928-4aa4-b1a9-9d3e8efb0167 req-3d85e716-7011-4043-84f8-b4928596cd41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.422 248514 DEBUG nova.network.neutron [req-00f9f8f8-b928-4aa4-b1a9-9d3e8efb0167 req-3d85e716-7011-4043-84f8-b4928596cd41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Refreshing network info cache for port 76d94bc0-cb6b-4bd0-8366-5158fdaece5b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.424 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Start _get_guest_xml network_info=[{"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.430 248514 WARNING nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.436 248514 DEBUG nova.virt.libvirt.host [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.437 248514 DEBUG nova.virt.libvirt.host [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.442 248514 DEBUG nova.virt.libvirt.host [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.443 248514 DEBUG nova.virt.libvirt.host [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.445 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.446 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.446 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.447 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.447 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.447 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.447 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.447 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.448 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.448 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.448 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.448 248514 DEBUG nova.virt.hardware [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:19:59 np0005558241 nova_compute[248510]: 2025-12-13 09:19:59.451 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:20:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:20:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4068317904' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.037 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.066 248514 DEBUG nova.storage.rbd_utils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.072 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:20:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:20:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2705364153' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.674 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.677 248514 DEBUG nova.virt.libvirt.vif [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:19:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-569396822',display_name='tempest-TestGettingAddress-server-569396822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-569396822',id=149,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPIpcx/5vuboJVwrV0nq+WfHP6t5LA2OndBCH/1ll7d/PpOHljwzW/JjtFSE1pOt+PUX/l8C8ALgzEgIgSQ3b5WDhZcogX0HQPVvqXPOMGr4RM3k5j3wZaHphWzCWA2h6Q==',key_name='tempest-TestGettingAddress-483066452',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-r3ss7vci',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:19:52Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=387ffe9e-b998-43aa-bb42-e87639c1ba6a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.677 248514 DEBUG nova.network.os_vif_util [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.679 248514 DEBUG nova.network.os_vif_util [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=76d94bc0-cb6b-4bd0-8366-5158fdaece5b,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d94bc0-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.680 248514 DEBUG nova.objects.instance [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 387ffe9e-b998-43aa-bb42-e87639c1ba6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.707 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:20:00 np0005558241 nova_compute[248510]:  <uuid>387ffe9e-b998-43aa-bb42-e87639c1ba6a</uuid>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:  <name>instance-00000095</name>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestGettingAddress-server-569396822</nova:name>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:19:59</nova:creationTime>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:20:00 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:        <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:        <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:        <nova:port uuid="76d94bc0-cb6b-4bd0-8366-5158fdaece5b">
Dec 13 04:20:00 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe30:6b81" ipVersion="6"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe30:6b81" ipVersion="6"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <entry name="serial">387ffe9e-b998-43aa-bb42-e87639c1ba6a</entry>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <entry name="uuid">387ffe9e-b998-43aa-bb42-e87639c1ba6a</entry>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk">
Dec 13 04:20:00 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:20:00 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk.config">
Dec 13 04:20:00 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:20:00 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:30:6b:81"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <target dev="tap76d94bc0-cb"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a/console.log" append="off"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:20:00 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:20:00 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:20:00 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:20:00 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.709 248514 DEBUG nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Preparing to wait for external event network-vif-plugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.710 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.711 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.711 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.712 248514 DEBUG nova.virt.libvirt.vif [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:19:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-569396822',display_name='tempest-TestGettingAddress-server-569396822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-569396822',id=149,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPIpcx/5vuboJVwrV0nq+WfHP6t5LA2OndBCH/1ll7d/PpOHljwzW/JjtFSE1pOt+PUX/l8C8ALgzEgIgSQ3b5WDhZcogX0HQPVvqXPOMGr4RM3k5j3wZaHphWzCWA2h6Q==',key_name='tempest-TestGettingAddress-483066452',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-r3ss7vci',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:19:52Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=387ffe9e-b998-43aa-bb42-e87639c1ba6a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.713 248514 DEBUG nova.network.os_vif_util [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.714 248514 DEBUG nova.network.os_vif_util [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=76d94bc0-cb6b-4bd0-8366-5158fdaece5b,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d94bc0-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.715 248514 DEBUG os_vif [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=76d94bc0-cb6b-4bd0-8366-5158fdaece5b,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d94bc0-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.717 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.718 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.718 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.723 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.723 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76d94bc0-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.724 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap76d94bc0-cb, col_values=(('external_ids', {'iface-id': '76d94bc0-cb6b-4bd0-8366-5158fdaece5b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:6b:81', 'vm-uuid': '387ffe9e-b998-43aa-bb42-e87639c1ba6a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.726 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:00 np0005558241 NetworkManager[50376]: <info>  [1765617600.7278] manager: (tap76d94bc0-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/665)
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.730 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.735 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.737 248514 INFO os_vif [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=76d94bc0-cb6b-4bd0-8366-5158fdaece5b,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d94bc0-cb')#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.806 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.807 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.808 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:30:6b:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.809 248514 INFO nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Using config drive#033[00m
Dec 13 04:20:00 np0005558241 nova_compute[248510]: 2025-12-13 09:20:00.838 248514 DEBUG nova.storage.rbd_utils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:20:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3567: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 13 04:20:01 np0005558241 nova_compute[248510]: 2025-12-13 09:20:01.340 248514 INFO nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Creating config drive at /var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a/disk.config#033[00m
Dec 13 04:20:01 np0005558241 nova_compute[248510]: 2025-12-13 09:20:01.345 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp26o15cx6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:20:01 np0005558241 nova_compute[248510]: 2025-12-13 09:20:01.489 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp26o15cx6" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:20:01 np0005558241 nova_compute[248510]: 2025-12-13 09:20:01.522 248514 DEBUG nova.storage.rbd_utils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:20:01 np0005558241 nova_compute[248510]: 2025-12-13 09:20:01.529 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a/disk.config 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:20:01 np0005558241 nova_compute[248510]: 2025-12-13 09:20:01.697 248514 DEBUG oslo_concurrency.processutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a/disk.config 387ffe9e-b998-43aa-bb42-e87639c1ba6a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:20:01 np0005558241 nova_compute[248510]: 2025-12-13 09:20:01.698 248514 INFO nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Deleting local config drive /var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a/disk.config because it was imported into RBD.#033[00m
Dec 13 04:20:01 np0005558241 kernel: tap76d94bc0-cb: entered promiscuous mode
Dec 13 04:20:01 np0005558241 NetworkManager[50376]: <info>  [1765617601.7586] manager: (tap76d94bc0-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/666)
Dec 13 04:20:01 np0005558241 ovn_controller[148476]: 2025-12-13T09:20:01Z|01607|binding|INFO|Claiming lport 76d94bc0-cb6b-4bd0-8366-5158fdaece5b for this chassis.
Dec 13 04:20:01 np0005558241 ovn_controller[148476]: 2025-12-13T09:20:01Z|01608|binding|INFO|76d94bc0-cb6b-4bd0-8366-5158fdaece5b: Claiming fa:16:3e:30:6b:81 10.100.0.11 2001:db8:0:1:f816:3eff:fe30:6b81 2001:db8::f816:3eff:fe30:6b81
Dec 13 04:20:01 np0005558241 nova_compute[248510]: 2025-12-13 09:20:01.759 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:01 np0005558241 nova_compute[248510]: 2025-12-13 09:20:01.766 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:01 np0005558241 nova_compute[248510]: 2025-12-13 09:20:01.770 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:01 np0005558241 nova_compute[248510]: 2025-12-13 09:20:01.777 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:01 np0005558241 systemd-udevd[404373]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:20:01 np0005558241 systemd-machined[210538]: New machine qemu-180-instance-00000095.
Dec 13 04:20:01 np0005558241 NetworkManager[50376]: <info>  [1765617601.8103] device (tap76d94bc0-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:20:01 np0005558241 NetworkManager[50376]: <info>  [1765617601.8115] device (tap76d94bc0-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:20:01 np0005558241 nova_compute[248510]: 2025-12-13 09:20:01.816 248514 DEBUG nova.network.neutron [req-00f9f8f8-b928-4aa4-b1a9-9d3e8efb0167 req-3d85e716-7011-4043-84f8-b4928596cd41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updated VIF entry in instance network info cache for port 76d94bc0-cb6b-4bd0-8366-5158fdaece5b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:20:01 np0005558241 systemd[1]: Started Virtual Machine qemu-180-instance-00000095.
Dec 13 04:20:01 np0005558241 nova_compute[248510]: 2025-12-13 09:20:01.817 248514 DEBUG nova.network.neutron [req-00f9f8f8-b928-4aa4-b1a9-9d3e8efb0167 req-3d85e716-7011-4043-84f8-b4928596cd41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updating instance_info_cache with network_info: [{"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:20:01 np0005558241 ovn_controller[148476]: 2025-12-13T09:20:01Z|01609|binding|INFO|Setting lport 76d94bc0-cb6b-4bd0-8366-5158fdaece5b ovn-installed in OVS
Dec 13 04:20:01 np0005558241 nova_compute[248510]: 2025-12-13 09:20:01.835 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.171 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617602.170666, 387ffe9e-b998-43aa-bb42-e87639c1ba6a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.171 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] VM Started (Lifecycle Event)#033[00m
Dec 13 04:20:02 np0005558241 ovn_controller[148476]: 2025-12-13T09:20:02Z|01610|binding|INFO|Setting lport 76d94bc0-cb6b-4bd0-8366-5158fdaece5b up in Southbound
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.224 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:6b:81 10.100.0.11 2001:db8:0:1:f816:3eff:fe30:6b81 2001:db8::f816:3eff:fe30:6b81'], port_security=['fa:16:3e:30:6b:81 10.100.0.11 2001:db8:0:1:f816:3eff:fe30:6b81 2001:db8::f816:3eff:fe30:6b81'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8:0:1:f816:3eff:fe30:6b81/64 2001:db8::f816:3eff:fe30:6b81/64', 'neutron:device_id': '387ffe9e-b998-43aa-bb42-e87639c1ba6a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7518f8bf-76d2-4439-8293-d0d6a2979cbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60522aa7-9184-4e18-b84a-db845e6d2800, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=76d94bc0-cb6b-4bd0-8366-5158fdaece5b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.228 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 76d94bc0-cb6b-4bd0-8366-5158fdaece5b in datapath e41a163e-7597-4a26-aa5d-b894f952cca4 bound to our chassis#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.232 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e41a163e-7597-4a26-aa5d-b894f952cca4#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.248 248514 DEBUG oslo_concurrency.lockutils [req-00f9f8f8-b928-4aa4-b1a9-9d3e8efb0167 req-3d85e716-7011-4043-84f8-b4928596cd41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.249 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.253 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef77f01-69c1-46c8-beba-b183b68cc819]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.254 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape41a163e-71 in ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.254 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617602.1753275, 387ffe9e-b998-43aa-bb42-e87639c1ba6a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.255 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.257 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape41a163e-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.257 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0a0fe4-3e03-496e-b568-4f8b6d8f6f63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.259 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[11669d67-72ef-4b7f-ab75-1770884769c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.276 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[765015bd-aaa2-41a2-8197-f6caa0aba718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.286 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.292 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.295 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[25bbbd6f-326e-4728-a19f-fda8cbadad5a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.316 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.339 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[0be82de0-5848-4531-8faa-e39a1f6b44a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:02 np0005558241 systemd-udevd[404376]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:20:02 np0005558241 NetworkManager[50376]: <info>  [1765617602.3506] manager: (tape41a163e-70): new Veth device (/org/freedesktop/NetworkManager/Devices/667)
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.349 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[f471fe34-32b3-42cd-873b-9e08f0122262]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.393 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[b07ecb14-420c-4457-9774-817c51e082cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.398 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[cef4dbc8-4fc8-464b-834d-8a3d46b24dba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:02 np0005558241 NetworkManager[50376]: <info>  [1765617602.4313] device (tape41a163e-70): carrier: link connected
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.440 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[eecfff2e-0cf8-4bc7-987e-78d448849aee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.462 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5b93f2f2-4135-4eec-bfb3-e20638559d47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape41a163e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:85:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1017964, 'reachable_time': 24214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404449, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.488 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1c23a1d8-f61b-4ff9-9780-0e0290e889d0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:8521'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1017964, 'tstamp': 1017964}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404450, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.517 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[21b78a5d-fc50-4c75-80e9-42fe839a0d7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape41a163e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:85:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1017964, 'reachable_time': 24214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 404451, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.560 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[fc219224-0537-4132-801f-44402e0d9fbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.641 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1f6eae7e-6d8e-47dd-9d1a-fdb129734a1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.643 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape41a163e-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.643 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.643 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape41a163e-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.645 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:02 np0005558241 kernel: tape41a163e-70: entered promiscuous mode
Dec 13 04:20:02 np0005558241 NetworkManager[50376]: <info>  [1765617602.6475] manager: (tape41a163e-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/668)
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.649 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape41a163e-70, col_values=(('external_ids', {'iface-id': '0c3d5666-7453-4eb7-8973-bef187574418'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:20:02 np0005558241 ovn_controller[148476]: 2025-12-13T09:20:02Z|01611|binding|INFO|Releasing lport 0c3d5666-7453-4eb7-8973-bef187574418 from this chassis (sb_readonly=0)
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.663 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.665 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e41a163e-7597-4a26-aa5d-b894f952cca4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e41a163e-7597-4a26-aa5d-b894f952cca4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.666 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[52cfe3bd-a5a1-4414-9bfd-9f8e961aa192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.667 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-e41a163e-7597-4a26-aa5d-b894f952cca4
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/e41a163e-7597-4a26-aa5d-b894f952cca4.pid.haproxy
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID e41a163e-7597-4a26-aa5d-b894f952cca4
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:20:02 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:02.667 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'env', 'PROCESS_TAG=haproxy-e41a163e-7597-4a26-aa5d-b894f952cca4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e41a163e-7597-4a26-aa5d-b894f952cca4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.959 248514 DEBUG nova.compute.manager [req-22a2aa77-ae76-4ce5-b351-3fa1fddb51ee req-866fc08e-ce65-40d7-a1ca-8ecc6c624f65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received event network-vif-plugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.960 248514 DEBUG oslo_concurrency.lockutils [req-22a2aa77-ae76-4ce5-b351-3fa1fddb51ee req-866fc08e-ce65-40d7-a1ca-8ecc6c624f65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.960 248514 DEBUG oslo_concurrency.lockutils [req-22a2aa77-ae76-4ce5-b351-3fa1fddb51ee req-866fc08e-ce65-40d7-a1ca-8ecc6c624f65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.961 248514 DEBUG oslo_concurrency.lockutils [req-22a2aa77-ae76-4ce5-b351-3fa1fddb51ee req-866fc08e-ce65-40d7-a1ca-8ecc6c624f65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.961 248514 DEBUG nova.compute.manager [req-22a2aa77-ae76-4ce5-b351-3fa1fddb51ee req-866fc08e-ce65-40d7-a1ca-8ecc6c624f65 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Processing event network-vif-plugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.961 248514 DEBUG nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.965 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617602.9654477, 387ffe9e-b998-43aa-bb42-e87639c1ba6a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.965 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.967 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.971 248514 INFO nova.virt.libvirt.driver [-] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Instance spawned successfully.#033[00m
Dec 13 04:20:02 np0005558241 nova_compute[248510]: 2025-12-13 09:20:02.971 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:20:03 np0005558241 nova_compute[248510]: 2025-12-13 09:20:03.001 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:20:03 np0005558241 nova_compute[248510]: 2025-12-13 09:20:03.008 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:20:03 np0005558241 nova_compute[248510]: 2025-12-13 09:20:03.009 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:20:03 np0005558241 nova_compute[248510]: 2025-12-13 09:20:03.009 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:20:03 np0005558241 nova_compute[248510]: 2025-12-13 09:20:03.010 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:20:03 np0005558241 nova_compute[248510]: 2025-12-13 09:20:03.010 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:20:03 np0005558241 nova_compute[248510]: 2025-12-13 09:20:03.011 248514 DEBUG nova.virt.libvirt.driver [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:20:03 np0005558241 nova_compute[248510]: 2025-12-13 09:20:03.017 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:20:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3568: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 13 04:20:03 np0005558241 nova_compute[248510]: 2025-12-13 09:20:03.052 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:20:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:03 np0005558241 nova_compute[248510]: 2025-12-13 09:20:03.077 248514 INFO nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Took 10.04 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:20:03 np0005558241 nova_compute[248510]: 2025-12-13 09:20:03.077 248514 DEBUG nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:20:03 np0005558241 podman[404483]: 2025-12-13 09:20:03.092512805 +0000 UTC m=+0.056823327 container create d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:20:03 np0005558241 systemd[1]: Started libpod-conmon-d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c.scope.
Dec 13 04:20:03 np0005558241 nova_compute[248510]: 2025-12-13 09:20:03.151 248514 INFO nova.compute.manager [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Took 11.41 seconds to build instance.#033[00m
Dec 13 04:20:03 np0005558241 podman[404483]: 2025-12-13 09:20:03.059858401 +0000 UTC m=+0.024168933 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:20:03 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:20:03 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fcb212c1c75939221274e39c5248b874df50042250d2b2667a9a1d38806218d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:03 np0005558241 nova_compute[248510]: 2025-12-13 09:20:03.176 248514 DEBUG oslo_concurrency.lockutils [None req-fb44f009-ac32-4334-bef9-e02bf12a7aa0 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:20:03 np0005558241 podman[404483]: 2025-12-13 09:20:03.382850399 +0000 UTC m=+0.347160951 container init d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 04:20:03 np0005558241 podman[404483]: 2025-12-13 09:20:03.388532231 +0000 UTC m=+0.352842753 container start d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 04:20:03 np0005558241 neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4[404498]: [NOTICE]   (404502) : New worker (404504) forked
Dec 13 04:20:03 np0005558241 neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4[404498]: [NOTICE]   (404502) : Loading success.
Dec 13 04:20:04 np0005558241 nova_compute[248510]: 2025-12-13 09:20:04.406 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3569: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 470 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Dec 13 04:20:05 np0005558241 nova_compute[248510]: 2025-12-13 09:20:05.090 248514 DEBUG nova.compute.manager [req-6ff05117-db1d-4281-a4a1-265b418d5cc0 req-d1c1a5d6-23d4-4c28-97b7-6514d5565d50 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received event network-vif-plugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:20:05 np0005558241 nova_compute[248510]: 2025-12-13 09:20:05.091 248514 DEBUG oslo_concurrency.lockutils [req-6ff05117-db1d-4281-a4a1-265b418d5cc0 req-d1c1a5d6-23d4-4c28-97b7-6514d5565d50 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:20:05 np0005558241 nova_compute[248510]: 2025-12-13 09:20:05.091 248514 DEBUG oslo_concurrency.lockutils [req-6ff05117-db1d-4281-a4a1-265b418d5cc0 req-d1c1a5d6-23d4-4c28-97b7-6514d5565d50 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:20:05 np0005558241 nova_compute[248510]: 2025-12-13 09:20:05.091 248514 DEBUG oslo_concurrency.lockutils [req-6ff05117-db1d-4281-a4a1-265b418d5cc0 req-d1c1a5d6-23d4-4c28-97b7-6514d5565d50 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:20:05 np0005558241 nova_compute[248510]: 2025-12-13 09:20:05.092 248514 DEBUG nova.compute.manager [req-6ff05117-db1d-4281-a4a1-265b418d5cc0 req-d1c1a5d6-23d4-4c28-97b7-6514d5565d50 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] No waiting events found dispatching network-vif-plugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:20:05 np0005558241 nova_compute[248510]: 2025-12-13 09:20:05.092 248514 WARNING nova.compute.manager [req-6ff05117-db1d-4281-a4a1-265b418d5cc0 req-d1c1a5d6-23d4-4c28-97b7-6514d5565d50 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received unexpected event network-vif-plugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b for instance with vm_state active and task_state None.#033[00m
Dec 13 04:20:05 np0005558241 nova_compute[248510]: 2025-12-13 09:20:05.727 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3570: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 470 KiB/s rd, 1.7 MiB/s wr, 51 op/s
Dec 13 04:20:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3571: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 99 op/s
Dec 13 04:20:09 np0005558241 nova_compute[248510]: 2025-12-13 09:20:09.409 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:20:09
Dec 13 04:20:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:20:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:20:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['vms', '.mgr', 'images', 'default.rgw.meta', 'volumes', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups']
Dec 13 04:20:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:20:10 np0005558241 ovn_controller[148476]: 2025-12-13T09:20:10Z|01612|binding|INFO|Releasing lport 0c3d5666-7453-4eb7-8973-bef187574418 from this chassis (sb_readonly=0)
Dec 13 04:20:10 np0005558241 NetworkManager[50376]: <info>  [1765617610.0038] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/669)
Dec 13 04:20:10 np0005558241 NetworkManager[50376]: <info>  [1765617610.0044] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/670)
Dec 13 04:20:10 np0005558241 nova_compute[248510]: 2025-12-13 09:20:10.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:10 np0005558241 nova_compute[248510]: 2025-12-13 09:20:10.054 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:10 np0005558241 ovn_controller[148476]: 2025-12-13T09:20:10Z|01613|binding|INFO|Releasing lport 0c3d5666-7453-4eb7-8973-bef187574418 from this chassis (sb_readonly=0)
Dec 13 04:20:10 np0005558241 nova_compute[248510]: 2025-12-13 09:20:10.065 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:20:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:20:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:20:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:20:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:20:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:10.302 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:20:10 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:10.303 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:20:10 np0005558241 nova_compute[248510]: 2025-12-13 09:20:10.303 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:10 np0005558241 nova_compute[248510]: 2025-12-13 09:20:10.383 248514 DEBUG nova.compute.manager [req-2ce43244-e3a2-461b-803f-ee5f64f5d2c0 req-780324a9-1019-4f4e-8e2c-fbe58e865ffd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received event network-changed-76d94bc0-cb6b-4bd0-8366-5158fdaece5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:20:10 np0005558241 nova_compute[248510]: 2025-12-13 09:20:10.384 248514 DEBUG nova.compute.manager [req-2ce43244-e3a2-461b-803f-ee5f64f5d2c0 req-780324a9-1019-4f4e-8e2c-fbe58e865ffd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Refreshing instance network info cache due to event network-changed-76d94bc0-cb6b-4bd0-8366-5158fdaece5b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:20:10 np0005558241 nova_compute[248510]: 2025-12-13 09:20:10.384 248514 DEBUG oslo_concurrency.lockutils [req-2ce43244-e3a2-461b-803f-ee5f64f5d2c0 req-780324a9-1019-4f4e-8e2c-fbe58e865ffd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:20:10 np0005558241 nova_compute[248510]: 2025-12-13 09:20:10.384 248514 DEBUG oslo_concurrency.lockutils [req-2ce43244-e3a2-461b-803f-ee5f64f5d2c0 req-780324a9-1019-4f4e-8e2c-fbe58e865ffd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:20:10 np0005558241 nova_compute[248510]: 2025-12-13 09:20:10.385 248514 DEBUG nova.network.neutron [req-2ce43244-e3a2-461b-803f-ee5f64f5d2c0 req-780324a9-1019-4f4e-8e2c-fbe58e865ffd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Refreshing network info cache for port 76d94bc0-cb6b-4bd0-8366-5158fdaece5b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:20:10 np0005558241 nova_compute[248510]: 2025-12-13 09:20:10.729 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:11 np0005558241 podman[404515]: 2025-12-13 09:20:10.999870992 +0000 UTC m=+0.083160863 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 13 04:20:11 np0005558241 podman[404516]: 2025-12-13 09:20:11.007717898 +0000 UTC m=+0.091111481 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:20:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3572: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 312 KiB/s wr, 74 op/s
Dec 13 04:20:11 np0005558241 podman[404514]: 2025-12-13 09:20:11.042262939 +0000 UTC m=+0.122746170 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 04:20:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:20:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:20:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:20:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:20:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:20:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:20:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:20:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:20:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:20:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:20:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3573: 321 pgs: 321 active+clean; 88 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:20:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:14 np0005558241 nova_compute[248510]: 2025-12-13 09:20:14.000 248514 DEBUG nova.network.neutron [req-2ce43244-e3a2-461b-803f-ee5f64f5d2c0 req-780324a9-1019-4f4e-8e2c-fbe58e865ffd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updated VIF entry in instance network info cache for port 76d94bc0-cb6b-4bd0-8366-5158fdaece5b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:20:14 np0005558241 nova_compute[248510]: 2025-12-13 09:20:14.001 248514 DEBUG nova.network.neutron [req-2ce43244-e3a2-461b-803f-ee5f64f5d2c0 req-780324a9-1019-4f4e-8e2c-fbe58e865ffd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updating instance_info_cache with network_info: [{"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:20:14 np0005558241 nova_compute[248510]: 2025-12-13 09:20:14.036 248514 DEBUG oslo_concurrency.lockutils [req-2ce43244-e3a2-461b-803f-ee5f64f5d2c0 req-780324a9-1019-4f4e-8e2c-fbe58e865ffd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:20:14 np0005558241 nova_compute[248510]: 2025-12-13 09:20:14.442 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3574: 321 pgs: 321 active+clean; 96 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 849 KiB/s wr, 77 op/s
Dec 13 04:20:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:20:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2840370517' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:20:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2840370517' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:15 np0005558241 nova_compute[248510]: 2025-12-13 09:20:15.732 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:15 np0005558241 ovn_controller[148476]: 2025-12-13T09:20:15Z|00209|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:6b:81 10.100.0.11
Dec 13 04:20:15 np0005558241 ovn_controller[148476]: 2025-12-13T09:20:15Z|00210|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:6b:81 10.100.0.11
Dec 13 04:20:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3575: 321 pgs: 321 active+clean; 96 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 848 KiB/s wr, 56 op/s
Dec 13 04:20:17 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:17.306 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:20:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:20:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:20:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:20:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:20:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:20:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:20:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:20:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:20:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:20:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:20:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:20:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:20:18 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:20:18 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:20:18 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:20:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3576: 321 pgs: 321 active+clean; 119 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Dec 13 04:20:19 np0005558241 podman[404723]: 2025-12-13 09:20:19.125657572 +0000 UTC m=+0.059915624 container create 838deac209c979bdb3791a7b88f83c4e2318897f27cd224cfb975dcb0972d8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 04:20:19 np0005558241 systemd[1]: Started libpod-conmon-838deac209c979bdb3791a7b88f83c4e2318897f27cd224cfb975dcb0972d8bd.scope.
Dec 13 04:20:19 np0005558241 podman[404723]: 2025-12-13 09:20:19.102912725 +0000 UTC m=+0.037170807 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:20:19 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:20:19 np0005558241 podman[404723]: 2025-12-13 09:20:19.229555861 +0000 UTC m=+0.163813963 container init 838deac209c979bdb3791a7b88f83c4e2318897f27cd224cfb975dcb0972d8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:20:19 np0005558241 podman[404723]: 2025-12-13 09:20:19.244151144 +0000 UTC m=+0.178409206 container start 838deac209c979bdb3791a7b88f83c4e2318897f27cd224cfb975dcb0972d8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:20:19 np0005558241 podman[404723]: 2025-12-13 09:20:19.248943884 +0000 UTC m=+0.183201976 container attach 838deac209c979bdb3791a7b88f83c4e2318897f27cd224cfb975dcb0972d8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:20:19 np0005558241 dazzling_hodgkin[404739]: 167 167
Dec 13 04:20:19 np0005558241 systemd[1]: libpod-838deac209c979bdb3791a7b88f83c4e2318897f27cd224cfb975dcb0972d8bd.scope: Deactivated successfully.
Dec 13 04:20:19 np0005558241 podman[404723]: 2025-12-13 09:20:19.256178914 +0000 UTC m=+0.190437006 container died 838deac209c979bdb3791a7b88f83c4e2318897f27cd224cfb975dcb0972d8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:20:19 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2143cfb170410ee97406c607ab72f9c08fb210bcd8ca5efdebfda551077eca17-merged.mount: Deactivated successfully.
Dec 13 04:20:19 np0005558241 podman[404723]: 2025-12-13 09:20:19.327899431 +0000 UTC m=+0.262157513 container remove 838deac209c979bdb3791a7b88f83c4e2318897f27cd224cfb975dcb0972d8bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 04:20:19 np0005558241 systemd[1]: libpod-conmon-838deac209c979bdb3791a7b88f83c4e2318897f27cd224cfb975dcb0972d8bd.scope: Deactivated successfully.
Dec 13 04:20:19 np0005558241 nova_compute[248510]: 2025-12-13 09:20:19.445 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:19 np0005558241 podman[404764]: 2025-12-13 09:20:19.554406055 +0000 UTC m=+0.052101109 container create 1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:20:19 np0005558241 systemd[1]: Started libpod-conmon-1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a.scope.
Dec 13 04:20:19 np0005558241 podman[404764]: 2025-12-13 09:20:19.530868289 +0000 UTC m=+0.028563393 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:20:19 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:20:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c55f23ae9a095029945a0ed340b6f14c44b78c81dcefc6a328b7ed3244d9a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c55f23ae9a095029945a0ed340b6f14c44b78c81dcefc6a328b7ed3244d9a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c55f23ae9a095029945a0ed340b6f14c44b78c81dcefc6a328b7ed3244d9a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c55f23ae9a095029945a0ed340b6f14c44b78c81dcefc6a328b7ed3244d9a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:19 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c55f23ae9a095029945a0ed340b6f14c44b78c81dcefc6a328b7ed3244d9a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:19 np0005558241 podman[404764]: 2025-12-13 09:20:19.659884673 +0000 UTC m=+0.157579757 container init 1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sammet, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:20:19 np0005558241 podman[404764]: 2025-12-13 09:20:19.671721918 +0000 UTC m=+0.169416972 container start 1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 04:20:19 np0005558241 podman[404764]: 2025-12-13 09:20:19.67620469 +0000 UTC m=+0.173899744 container attach 1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sammet, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:20:20 np0005558241 angry_sammet[404781]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:20:20 np0005558241 angry_sammet[404781]: --> All data devices are unavailable
Dec 13 04:20:20 np0005558241 systemd[1]: libpod-1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a.scope: Deactivated successfully.
Dec 13 04:20:20 np0005558241 conmon[404781]: conmon 1acd90fbaa8ae3a68115 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a.scope/container/memory.events
Dec 13 04:20:20 np0005558241 podman[404801]: 2025-12-13 09:20:20.308667249 +0000 UTC m=+0.030794068 container died 1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sammet, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 04:20:20 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d5c55f23ae9a095029945a0ed340b6f14c44b78c81dcefc6a328b7ed3244d9a8-merged.mount: Deactivated successfully.
Dec 13 04:20:20 np0005558241 podman[404801]: 2025-12-13 09:20:20.359511496 +0000 UTC m=+0.081638295 container remove 1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sammet, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:20:20 np0005558241 systemd[1]: libpod-conmon-1acd90fbaa8ae3a6811581da5a8a5c2af04be54d67c43ce537d54e7a37cabe6a.scope: Deactivated successfully.
Dec 13 04:20:20 np0005558241 nova_compute[248510]: 2025-12-13 09:20:20.736 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:20 np0005558241 podman[404879]: 2025-12-13 09:20:20.970594143 +0000 UTC m=+0.049418033 container create 16f5453e65198e2d634011174f423821593423a3d0d61dfcbb88c1d6d3eeb539 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:20:21 np0005558241 systemd[1]: Started libpod-conmon-16f5453e65198e2d634011174f423821593423a3d0d61dfcbb88c1d6d3eeb539.scope.
Dec 13 04:20:21 np0005558241 podman[404879]: 2025-12-13 09:20:20.944151524 +0000 UTC m=+0.022975434 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3577: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:20:21 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:20:21 np0005558241 podman[404879]: 2025-12-13 09:20:21.083850685 +0000 UTC m=+0.162674605 container init 16f5453e65198e2d634011174f423821593423a3d0d61dfcbb88c1d6d3eeb539 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_austin, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:20:21 np0005558241 podman[404879]: 2025-12-13 09:20:21.090668344 +0000 UTC m=+0.169492234 container start 16f5453e65198e2d634011174f423821593423a3d0d61dfcbb88c1d6d3eeb539 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_austin, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:20:21 np0005558241 podman[404879]: 2025-12-13 09:20:21.094450389 +0000 UTC m=+0.173274299 container attach 16f5453e65198e2d634011174f423821593423a3d0d61dfcbb88c1d6d3eeb539 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_austin, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:20:21 np0005558241 magical_austin[404895]: 167 167
Dec 13 04:20:21 np0005558241 systemd[1]: libpod-16f5453e65198e2d634011174f423821593423a3d0d61dfcbb88c1d6d3eeb539.scope: Deactivated successfully.
Dec 13 04:20:21 np0005558241 podman[404879]: 2025-12-13 09:20:21.099034973 +0000 UTC m=+0.177858863 container died 16f5453e65198e2d634011174f423821593423a3d0d61dfcbb88c1d6d3eeb539 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_austin, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:20:21 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8e501d0296170ffebea6cd9505ad55761893c17f9a86a74d70af4b5231abdcad-merged.mount: Deactivated successfully.
Dec 13 04:20:21 np0005558241 podman[404879]: 2025-12-13 09:20:21.134300372 +0000 UTC m=+0.213124262 container remove 16f5453e65198e2d634011174f423821593423a3d0d61dfcbb88c1d6d3eeb539 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_austin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:20:21 np0005558241 systemd[1]: libpod-conmon-16f5453e65198e2d634011174f423821593423a3d0d61dfcbb88c1d6d3eeb539.scope: Deactivated successfully.
Dec 13 04:20:21 np0005558241 podman[404920]: 2025-12-13 09:20:21.341427763 +0000 UTC m=+0.069640757 container create 8114c1c8ccab47e803aa74691e2eec141b70ab9e81e015db34711e36c69e94a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:20:21 np0005558241 systemd[1]: Started libpod-conmon-8114c1c8ccab47e803aa74691e2eec141b70ab9e81e015db34711e36c69e94a3.scope.
Dec 13 04:20:21 np0005558241 podman[404920]: 2025-12-13 09:20:21.318709057 +0000 UTC m=+0.046922091 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:20:21 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:20:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b03ae189e4ba6036ffa77eca1475c5b49f30b18bd9b9f1b7549f91bdd9787f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b03ae189e4ba6036ffa77eca1475c5b49f30b18bd9b9f1b7549f91bdd9787f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b03ae189e4ba6036ffa77eca1475c5b49f30b18bd9b9f1b7549f91bdd9787f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b03ae189e4ba6036ffa77eca1475c5b49f30b18bd9b9f1b7549f91bdd9787f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:21 np0005558241 podman[404920]: 2025-12-13 09:20:21.458628763 +0000 UTC m=+0.186841807 container init 8114c1c8ccab47e803aa74691e2eec141b70ab9e81e015db34711e36c69e94a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:20:21 np0005558241 podman[404920]: 2025-12-13 09:20:21.467850903 +0000 UTC m=+0.196063947 container start 8114c1c8ccab47e803aa74691e2eec141b70ab9e81e015db34711e36c69e94a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:20:21 np0005558241 podman[404920]: 2025-12-13 09:20:21.472597511 +0000 UTC m=+0.200810555 container attach 8114c1c8ccab47e803aa74691e2eec141b70ab9e81e015db34711e36c69e94a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007714119368623521 of space, bias 1.0, pg target 0.2314235810587056 quantized to 32 (current 32)
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697110352885981 of space, bias 1.0, pg target 0.20091331058657944 quantized to 32 (current 32)
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.723939957605983e-07 of space, bias 4.0, pg target 0.0006868727949127179 quantized to 16 (current 32)
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:20:21 np0005558241 youthful_golick[404936]: {
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:    "0": [
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:        {
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "devices": [
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "/dev/loop3"
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            ],
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "lv_name": "ceph_lv0",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "lv_size": "21470642176",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "name": "ceph_lv0",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "tags": {
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.cluster_name": "ceph",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.crush_device_class": "",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.encrypted": "0",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.objectstore": "bluestore",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.osd_id": "0",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.type": "block",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.vdo": "0",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.with_tpm": "0"
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            },
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "type": "block",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "vg_name": "ceph_vg0"
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:        }
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:    ],
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:    "1": [
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:        {
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "devices": [
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "/dev/loop4"
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            ],
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "lv_name": "ceph_lv1",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "lv_size": "21470642176",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "name": "ceph_lv1",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "tags": {
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.cluster_name": "ceph",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.crush_device_class": "",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.encrypted": "0",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.objectstore": "bluestore",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.osd_id": "1",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.type": "block",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.vdo": "0",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.with_tpm": "0"
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            },
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "type": "block",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "vg_name": "ceph_vg1"
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:        }
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:    ],
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:    "2": [
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:        {
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "devices": [
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "/dev/loop5"
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            ],
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "lv_name": "ceph_lv2",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "lv_size": "21470642176",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "name": "ceph_lv2",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "tags": {
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.cluster_name": "ceph",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.crush_device_class": "",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.encrypted": "0",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.objectstore": "bluestore",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.osd_id": "2",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.type": "block",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.vdo": "0",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:                "ceph.with_tpm": "0"
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            },
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "type": "block",
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:            "vg_name": "ceph_vg2"
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:        }
Dec 13 04:20:21 np0005558241 youthful_golick[404936]:    ]
Dec 13 04:20:21 np0005558241 youthful_golick[404936]: }
Dec 13 04:20:21 np0005558241 systemd[1]: libpod-8114c1c8ccab47e803aa74691e2eec141b70ab9e81e015db34711e36c69e94a3.scope: Deactivated successfully.
Dec 13 04:20:21 np0005558241 podman[404920]: 2025-12-13 09:20:21.817504065 +0000 UTC m=+0.545717099 container died 8114c1c8ccab47e803aa74691e2eec141b70ab9e81e015db34711e36c69e94a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 04:20:21 np0005558241 systemd[1]: var-lib-containers-storage-overlay-08b03ae189e4ba6036ffa77eca1475c5b49f30b18bd9b9f1b7549f91bdd9787f-merged.mount: Deactivated successfully.
Dec 13 04:20:21 np0005558241 podman[404920]: 2025-12-13 09:20:21.885445608 +0000 UTC m=+0.613658612 container remove 8114c1c8ccab47e803aa74691e2eec141b70ab9e81e015db34711e36c69e94a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 04:20:21 np0005558241 systemd[1]: libpod-conmon-8114c1c8ccab47e803aa74691e2eec141b70ab9e81e015db34711e36c69e94a3.scope: Deactivated successfully.
Dec 13 04:20:22 np0005558241 podman[405019]: 2025-12-13 09:20:22.446220161 +0000 UTC m=+0.042596652 container create a4a9ecdaf2e9e84e5d264c4bb3877098a6b302d5846517f4652a94c0c93648af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:20:22 np0005558241 systemd[1]: Started libpod-conmon-a4a9ecdaf2e9e84e5d264c4bb3877098a6b302d5846517f4652a94c0c93648af.scope.
Dec 13 04:20:22 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:20:22 np0005558241 podman[405019]: 2025-12-13 09:20:22.428477179 +0000 UTC m=+0.024853700 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:20:22 np0005558241 podman[405019]: 2025-12-13 09:20:22.536553101 +0000 UTC m=+0.132929642 container init a4a9ecdaf2e9e84e5d264c4bb3877098a6b302d5846517f4652a94c0c93648af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:20:22 np0005558241 podman[405019]: 2025-12-13 09:20:22.547597216 +0000 UTC m=+0.143973717 container start a4a9ecdaf2e9e84e5d264c4bb3877098a6b302d5846517f4652a94c0c93648af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:20:22 np0005558241 podman[405019]: 2025-12-13 09:20:22.550821147 +0000 UTC m=+0.147197678 container attach a4a9ecdaf2e9e84e5d264c4bb3877098a6b302d5846517f4652a94c0c93648af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:20:22 np0005558241 musing_kilby[405036]: 167 167
Dec 13 04:20:22 np0005558241 systemd[1]: libpod-a4a9ecdaf2e9e84e5d264c4bb3877098a6b302d5846517f4652a94c0c93648af.scope: Deactivated successfully.
Dec 13 04:20:22 np0005558241 podman[405019]: 2025-12-13 09:20:22.555515644 +0000 UTC m=+0.151892125 container died a4a9ecdaf2e9e84e5d264c4bb3877098a6b302d5846517f4652a94c0c93648af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:20:22 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a36a236b973814a06452819059709aff9c22b7c8e10712e110a52d5cb1df24f2-merged.mount: Deactivated successfully.
Dec 13 04:20:22 np0005558241 podman[405019]: 2025-12-13 09:20:22.593635523 +0000 UTC m=+0.190012014 container remove a4a9ecdaf2e9e84e5d264c4bb3877098a6b302d5846517f4652a94c0c93648af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kilby, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:20:22 np0005558241 systemd[1]: libpod-conmon-a4a9ecdaf2e9e84e5d264c4bb3877098a6b302d5846517f4652a94c0c93648af.scope: Deactivated successfully.
Dec 13 04:20:22 np0005558241 podman[405059]: 2025-12-13 09:20:22.844180486 +0000 UTC m=+0.081421059 container create b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_kalam, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 04:20:22 np0005558241 systemd[1]: Started libpod-conmon-b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa.scope.
Dec 13 04:20:22 np0005558241 podman[405059]: 2025-12-13 09:20:22.808358954 +0000 UTC m=+0.045599577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:20:22 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:20:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d5889dd3d894feee07807786169675f3c4021f63710e688c3c9d4459173da6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d5889dd3d894feee07807786169675f3c4021f63710e688c3c9d4459173da6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d5889dd3d894feee07807786169675f3c4021f63710e688c3c9d4459173da6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:22 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d5889dd3d894feee07807786169675f3c4021f63710e688c3c9d4459173da6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:22 np0005558241 podman[405059]: 2025-12-13 09:20:22.939655925 +0000 UTC m=+0.176896528 container init b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_kalam, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 04:20:22 np0005558241 podman[405059]: 2025-12-13 09:20:22.955293085 +0000 UTC m=+0.192533668 container start b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_kalam, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 04:20:22 np0005558241 podman[405059]: 2025-12-13 09:20:22.959298335 +0000 UTC m=+0.196538938 container attach b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 04:20:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3578: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:20:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:23 np0005558241 lvm[405153]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:20:23 np0005558241 lvm[405153]: VG ceph_vg0 finished
Dec 13 04:20:23 np0005558241 lvm[405154]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:20:23 np0005558241 lvm[405154]: VG ceph_vg1 finished
Dec 13 04:20:23 np0005558241 lvm[405156]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:20:23 np0005558241 lvm[405156]: VG ceph_vg2 finished
Dec 13 04:20:23 np0005558241 lvm[405158]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:20:23 np0005558241 lvm[405158]: VG ceph_vg2 finished
Dec 13 04:20:23 np0005558241 practical_kalam[405075]: {}
Dec 13 04:20:23 np0005558241 nova_compute[248510]: 2025-12-13 09:20:23.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:20:23 np0005558241 systemd[1]: libpod-b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa.scope: Deactivated successfully.
Dec 13 04:20:23 np0005558241 podman[405059]: 2025-12-13 09:20:23.79666819 +0000 UTC m=+1.033908813 container died b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_kalam, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:20:23 np0005558241 systemd[1]: libpod-b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa.scope: Consumed 1.362s CPU time.
Dec 13 04:20:23 np0005558241 systemd[1]: var-lib-containers-storage-overlay-73d5889dd3d894feee07807786169675f3c4021f63710e688c3c9d4459173da6-merged.mount: Deactivated successfully.
Dec 13 04:20:23 np0005558241 podman[405059]: 2025-12-13 09:20:23.85405629 +0000 UTC m=+1.091296873 container remove b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 04:20:23 np0005558241 systemd[1]: libpod-conmon-b33ff1b14a5ac957ec9e43a7e28a183dea1fe66299002501a701550455c817fa.scope: Deactivated successfully.
Dec 13 04:20:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:20:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:20:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:20:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:20:24 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:20:24 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:20:24 np0005558241 nova_compute[248510]: 2025-12-13 09:20:24.498 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3579: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:20:25 np0005558241 nova_compute[248510]: 2025-12-13 09:20:25.784 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3580: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 1.3 MiB/s wr, 55 op/s
Dec 13 04:20:27 np0005558241 nova_compute[248510]: 2025-12-13 09:20:27.768 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:20:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3581: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 304 KiB/s rd, 1.3 MiB/s wr, 55 op/s
Dec 13 04:20:29 np0005558241 nova_compute[248510]: 2025-12-13 09:20:29.504 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:29 np0005558241 nova_compute[248510]: 2025-12-13 09:20:29.665 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:20:29 np0005558241 nova_compute[248510]: 2025-12-13 09:20:29.666 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:20:29 np0005558241 nova_compute[248510]: 2025-12-13 09:20:29.689 248514 DEBUG nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:20:29 np0005558241 nova_compute[248510]: 2025-12-13 09:20:29.792 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:20:29 np0005558241 nova_compute[248510]: 2025-12-13 09:20:29.793 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:20:29 np0005558241 nova_compute[248510]: 2025-12-13 09:20:29.807 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:20:29 np0005558241 nova_compute[248510]: 2025-12-13 09:20:29.808 248514 INFO nova.compute.claims [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:20:29 np0005558241 nova_compute[248510]: 2025-12-13 09:20:29.950 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:20:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:20:30 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/32728649' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:20:30 np0005558241 nova_compute[248510]: 2025-12-13 09:20:30.544 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:20:30 np0005558241 nova_compute[248510]: 2025-12-13 09:20:30.552 248514 DEBUG nova.compute.provider_tree [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:20:30 np0005558241 nova_compute[248510]: 2025-12-13 09:20:30.580 248514 DEBUG nova.scheduler.client.report [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:20:30 np0005558241 nova_compute[248510]: 2025-12-13 09:20:30.626 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:20:30 np0005558241 nova_compute[248510]: 2025-12-13 09:20:30.628 248514 DEBUG nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:20:30 np0005558241 nova_compute[248510]: 2025-12-13 09:20:30.688 248514 DEBUG nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:20:30 np0005558241 nova_compute[248510]: 2025-12-13 09:20:30.689 248514 DEBUG nova.network.neutron [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:20:30 np0005558241 nova_compute[248510]: 2025-12-13 09:20:30.788 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:30 np0005558241 nova_compute[248510]: 2025-12-13 09:20:30.929 248514 INFO nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:20:30 np0005558241 nova_compute[248510]: 2025-12-13 09:20:30.951 248514 DEBUG nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:20:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3582: 321 pgs: 321 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 61 KiB/s wr, 4 op/s
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.067 248514 DEBUG nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.069 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.070 248514 INFO nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Creating image(s)#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.103 248514 DEBUG nova.storage.rbd_utils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.135 248514 DEBUG nova.storage.rbd_utils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.159 248514 DEBUG nova.storage.rbd_utils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.164 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.276 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.277 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.278 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.278 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.304 248514 DEBUG nova.storage.rbd_utils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.310 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.560 248514 DEBUG nova.policy [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.669 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.737 248514 DEBUG nova.storage.rbd_utils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.812 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.822 248514 DEBUG nova.objects.instance [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 776d7b4e-dc82-47f6-b1bb-53188ad804b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.837 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.837 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Ensure instance console log exists: /var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.838 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.838 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:20:31 np0005558241 nova_compute[248510]: 2025-12-13 09:20:31.838 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:20:32 np0005558241 nova_compute[248510]: 2025-12-13 09:20:32.366 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:20:32 np0005558241 nova_compute[248510]: 2025-12-13 09:20:32.366 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:20:32 np0005558241 nova_compute[248510]: 2025-12-13 09:20:32.367 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 04:20:32 np0005558241 nova_compute[248510]: 2025-12-13 09:20:32.367 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 387ffe9e-b998-43aa-bb42-e87639c1ba6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:20:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3583: 321 pgs: 321 active+clean; 139 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 418 KiB/s wr, 12 op/s
Dec 13 04:20:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:33 np0005558241 nova_compute[248510]: 2025-12-13 09:20:33.344 248514 DEBUG nova.network.neutron [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Successfully created port: 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:20:34 np0005558241 nova_compute[248510]: 2025-12-13 09:20:34.506 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:34 np0005558241 nova_compute[248510]: 2025-12-13 09:20:34.838 248514 DEBUG nova.network.neutron [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Successfully updated port: 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:20:34 np0005558241 nova_compute[248510]: 2025-12-13 09:20:34.856 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:20:34 np0005558241 nova_compute[248510]: 2025-12-13 09:20:34.857 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:20:34 np0005558241 nova_compute[248510]: 2025-12-13 09:20:34.857 248514 DEBUG nova.network.neutron [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:20:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3584: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:20:35 np0005558241 nova_compute[248510]: 2025-12-13 09:20:35.284 248514 DEBUG nova.compute.manager [req-66c09d69-328a-49a1-9129-f5765bcd4210 req-2ab96aeb-deb3-4109-9938-b3c22e58954d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received event network-changed-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:20:35 np0005558241 nova_compute[248510]: 2025-12-13 09:20:35.285 248514 DEBUG nova.compute.manager [req-66c09d69-328a-49a1-9129-f5765bcd4210 req-2ab96aeb-deb3-4109-9938-b3c22e58954d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Refreshing instance network info cache due to event network-changed-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:20:35 np0005558241 nova_compute[248510]: 2025-12-13 09:20:35.285 248514 DEBUG oslo_concurrency.lockutils [req-66c09d69-328a-49a1-9129-f5765bcd4210 req-2ab96aeb-deb3-4109-9938-b3c22e58954d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:20:35 np0005558241 nova_compute[248510]: 2025-12-13 09:20:35.385 248514 DEBUG nova.network.neutron [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:20:35 np0005558241 nova_compute[248510]: 2025-12-13 09:20:35.421 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updating instance_info_cache with network_info: [{"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:20:35 np0005558241 nova_compute[248510]: 2025-12-13 09:20:35.441 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:20:35 np0005558241 nova_compute[248510]: 2025-12-13 09:20:35.441 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 04:20:35 np0005558241 nova_compute[248510]: 2025-12-13 09:20:35.791 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:36 np0005558241 nova_compute[248510]: 2025-12-13 09:20:36.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:20:36 np0005558241 nova_compute[248510]: 2025-12-13 09:20:36.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:20:36 np0005558241 nova_compute[248510]: 2025-12-13 09:20:36.820 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:20:36 np0005558241 nova_compute[248510]: 2025-12-13 09:20:36.821 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:20:36 np0005558241 nova_compute[248510]: 2025-12-13 09:20:36.821 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:20:36 np0005558241 nova_compute[248510]: 2025-12-13 09:20:36.822 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:20:36 np0005558241 nova_compute[248510]: 2025-12-13 09:20:36.822 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:20:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3585: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:20:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:20:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2259731585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.421 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.516 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.517 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.749 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.751 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3219MB free_disk=59.92107870336622GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.751 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.752 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.754 248514 DEBUG nova.network.neutron [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Updating instance_info_cache with network_info: [{"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.792 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.792 248514 DEBUG nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Instance network_info: |[{"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": 
true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.793 248514 DEBUG oslo_concurrency.lockutils [req-66c09d69-328a-49a1-9129-f5765bcd4210 req-2ab96aeb-deb3-4109-9938-b3c22e58954d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.793 248514 DEBUG nova.network.neutron [req-66c09d69-328a-49a1-9129-f5765bcd4210 req-2ab96aeb-deb3-4109-9938-b3c22e58954d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Refreshing network info cache for port 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.797 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Start _get_guest_xml network_info=[{"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.802 248514 WARNING nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.810 248514 DEBUG nova.virt.libvirt.host [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.810 248514 DEBUG nova.virt.libvirt.host [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.819 248514 DEBUG nova.virt.libvirt.host [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.819 248514 DEBUG nova.virt.libvirt.host [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.820 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.820 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.821 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.821 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.821 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.822 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.822 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.822 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.823 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.823 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.823 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.823 248514 DEBUG nova.virt.hardware [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.827 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.935 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 387ffe9e-b998-43aa-bb42-e87639c1ba6a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.936 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 776d7b4e-dc82-47f6-b1bb-53188ad804b6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.937 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:20:37 np0005558241 nova_compute[248510]: 2025-12-13 09:20:37.937 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:20:38 np0005558241 nova_compute[248510]: 2025-12-13 09:20:38.006 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:20:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:20:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/952357148' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:20:38 np0005558241 nova_compute[248510]: 2025-12-13 09:20:38.456 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:20:38 np0005558241 nova_compute[248510]: 2025-12-13 09:20:38.488 248514 DEBUG nova.storage.rbd_utils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:20:38 np0005558241 nova_compute[248510]: 2025-12-13 09:20:38.494 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:20:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3586: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:20:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:20:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2966682541' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:20:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:20:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2894919964' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.113 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.118 248514 DEBUG nova.virt.libvirt.vif [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:20:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-272771977',display_name='tempest-TestGettingAddress-server-272771977',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-272771977',id=150,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPIpcx/5vuboJVwrV0nq+WfHP6t5LA2OndBCH/1ll7d/PpOHljwzW/JjtFSE1pOt+PUX/l8C8ALgzEgIgSQ3b5WDhZcogX0HQPVvqXPOMGr4RM3k5j3wZaHphWzCWA2h6Q==',key_name='tempest-TestGettingAddress-483066452',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-nwca5oxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:20:30Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=776d7b4e-dc82-47f6-b1bb-53188ad804b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.120 248514 DEBUG nova.network.os_vif_util [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.122 248514 DEBUG nova.network.os_vif_util [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:0a:19,bridge_name='br-int',has_traffic_filtering=True,id=3e8a3ef7-51fa-466a-b6ae-c5e00a04d987,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e8a3ef7-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.124 248514 DEBUG nova.objects.instance [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 776d7b4e-dc82-47f6-b1bb-53188ad804b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.127 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.136 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.150 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:20:39 np0005558241 nova_compute[248510]:  <uuid>776d7b4e-dc82-47f6-b1bb-53188ad804b6</uuid>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:  <name>instance-00000096</name>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestGettingAddress-server-272771977</nova:name>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:20:37</nova:creationTime>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:20:39 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:        <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:        <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:        <nova:port uuid="3e8a3ef7-51fa-466a-b6ae-c5e00a04d987">
Dec 13 04:20:39 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feae:a19" ipVersion="6"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feae:a19" ipVersion="6"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <entry name="serial">776d7b4e-dc82-47f6-b1bb-53188ad804b6</entry>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <entry name="uuid">776d7b4e-dc82-47f6-b1bb-53188ad804b6</entry>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk">
Dec 13 04:20:39 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:20:39 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk.config">
Dec 13 04:20:39 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:20:39 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:ae:0a:19"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <target dev="tap3e8a3ef7-51"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6/console.log" append="off"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:20:39 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:20:39 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:20:39 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:20:39 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.152 248514 DEBUG nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Preparing to wait for external event network-vif-plugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.153 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.153 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.153 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.154 248514 DEBUG nova.virt.libvirt.vif [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:20:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-272771977',display_name='tempest-TestGettingAddress-server-272771977',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-272771977',id=150,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPIpcx/5vuboJVwrV0nq+WfHP6t5LA2OndBCH/1ll7d/PpOHljwzW/JjtFSE1pOt+PUX/l8C8ALgzEgIgSQ3b5WDhZcogX0HQPVvqXPOMGr4RM3k5j3wZaHphWzCWA2h6Q==',key_name='tempest-TestGettingAddress-483066452',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-nwca5oxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:20:30Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=776d7b4e-dc82-47f6-b1bb-53188ad804b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.155 248514 DEBUG nova.network.os_vif_util [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.156 248514 DEBUG nova.network.os_vif_util [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:0a:19,bridge_name='br-int',has_traffic_filtering=True,id=3e8a3ef7-51fa-466a-b6ae-c5e00a04d987,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e8a3ef7-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.156 248514 DEBUG os_vif [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:0a:19,bridge_name='br-int',has_traffic_filtering=True,id=3e8a3ef7-51fa-466a-b6ae-c5e00a04d987,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e8a3ef7-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.157 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.158 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.158 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.161 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.166 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.167 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e8a3ef7-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.167 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3e8a3ef7-51, col_values=(('external_ids', {'iface-id': '3e8a3ef7-51fa-466a-b6ae-c5e00a04d987', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:0a:19', 'vm-uuid': '776d7b4e-dc82-47f6-b1bb-53188ad804b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.169 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.171 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:20:39 np0005558241 NetworkManager[50376]: <info>  [1765617639.1716] manager: (tap3e8a3ef7-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/671)
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.183 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.184 248514 INFO os_vif [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:0a:19,bridge_name='br-int',has_traffic_filtering=True,id=3e8a3ef7-51fa-466a-b6ae-c5e00a04d987,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e8a3ef7-51')#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.187 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.187 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.539 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.585 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.586 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.586 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:ae:0a:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.587 248514 INFO nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Using config drive#033[00m
Dec 13 04:20:39 np0005558241 nova_compute[248510]: 2025-12-13 09:20:39.630 248514 DEBUG nova.storage.rbd_utils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:20:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:20:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:20:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:20:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:20:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:20:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:20:40 np0005558241 nova_compute[248510]: 2025-12-13 09:20:40.589 248514 INFO nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Creating config drive at /var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6/disk.config#033[00m
Dec 13 04:20:40 np0005558241 nova_compute[248510]: 2025-12-13 09:20:40.597 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_8ygcv5e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:20:40 np0005558241 nova_compute[248510]: 2025-12-13 09:20:40.638 248514 DEBUG nova.network.neutron [req-66c09d69-328a-49a1-9129-f5765bcd4210 req-2ab96aeb-deb3-4109-9938-b3c22e58954d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Updated VIF entry in instance network info cache for port 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:20:40 np0005558241 nova_compute[248510]: 2025-12-13 09:20:40.640 248514 DEBUG nova.network.neutron [req-66c09d69-328a-49a1-9129-f5765bcd4210 req-2ab96aeb-deb3-4109-9938-b3c22e58954d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Updating instance_info_cache with network_info: [{"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:20:40 np0005558241 nova_compute[248510]: 2025-12-13 09:20:40.662 248514 DEBUG oslo_concurrency.lockutils [req-66c09d69-328a-49a1-9129-f5765bcd4210 req-2ab96aeb-deb3-4109-9938-b3c22e58954d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:20:40 np0005558241 nova_compute[248510]: 2025-12-13 09:20:40.743 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_8ygcv5e" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:20:40 np0005558241 nova_compute[248510]: 2025-12-13 09:20:40.789 248514 DEBUG nova.storage.rbd_utils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:20:40 np0005558241 nova_compute[248510]: 2025-12-13 09:20:40.794 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6/disk.config 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:20:40 np0005558241 nova_compute[248510]: 2025-12-13 09:20:40.965 248514 DEBUG oslo_concurrency.processutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6/disk.config 776d7b4e-dc82-47f6-b1bb-53188ad804b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:20:40 np0005558241 nova_compute[248510]: 2025-12-13 09:20:40.966 248514 INFO nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Deleting local config drive /var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6/disk.config because it was imported into RBD.#033[00m
Dec 13 04:20:41 np0005558241 kernel: tap3e8a3ef7-51: entered promiscuous mode
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.036 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:41 np0005558241 NetworkManager[50376]: <info>  [1765617641.0375] manager: (tap3e8a3ef7-51): new Tun device (/org/freedesktop/NetworkManager/Devices/672)
Dec 13 04:20:41 np0005558241 ovn_controller[148476]: 2025-12-13T09:20:41Z|01614|binding|INFO|Claiming lport 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 for this chassis.
Dec 13 04:20:41 np0005558241 ovn_controller[148476]: 2025-12-13T09:20:41Z|01615|binding|INFO|3e8a3ef7-51fa-466a-b6ae-c5e00a04d987: Claiming fa:16:3e:ae:0a:19 10.100.0.14 2001:db8:0:1:f816:3eff:feae:a19 2001:db8::f816:3eff:feae:a19
Dec 13 04:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.046 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:0a:19 10.100.0.14 2001:db8:0:1:f816:3eff:feae:a19 2001:db8::f816:3eff:feae:a19'], port_security=['fa:16:3e:ae:0a:19 10.100.0.14 2001:db8:0:1:f816:3eff:feae:a19 2001:db8::f816:3eff:feae:a19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:feae:a19/64 2001:db8::f816:3eff:feae:a19/64', 'neutron:device_id': '776d7b4e-dc82-47f6-b1bb-53188ad804b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7518f8bf-76d2-4439-8293-d0d6a2979cbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60522aa7-9184-4e18-b84a-db845e6d2800, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3e8a3ef7-51fa-466a-b6ae-c5e00a04d987) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.048 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 in datapath e41a163e-7597-4a26-aa5d-b894f952cca4 bound to our chassis#033[00m
Dec 13 04:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.050 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e41a163e-7597-4a26-aa5d-b894f952cca4#033[00m
Dec 13 04:20:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3587: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:20:41 np0005558241 ovn_controller[148476]: 2025-12-13T09:20:41Z|01616|binding|INFO|Setting lport 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 ovn-installed in OVS
Dec 13 04:20:41 np0005558241 ovn_controller[148476]: 2025-12-13T09:20:41Z|01617|binding|INFO|Setting lport 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 up in Southbound
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.054 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.076 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c7872e92-486c-4889-8dca-c190807e674f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:41 np0005558241 systemd-udevd[405590]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:20:41 np0005558241 systemd-machined[210538]: New machine qemu-181-instance-00000096.
Dec 13 04:20:41 np0005558241 systemd[1]: Started Virtual Machine qemu-181-instance-00000096.
Dec 13 04:20:41 np0005558241 NetworkManager[50376]: <info>  [1765617641.1150] device (tap3e8a3ef7-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:20:41 np0005558241 NetworkManager[50376]: <info>  [1765617641.1161] device (tap3e8a3ef7-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.120 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[546eec4a-311e-4728-ba29-44fc256d0d82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.123 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[67ac630d-6378-4cc2-b7ed-1aeff95fe249]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:41 np0005558241 podman[405562]: 2025-12-13 09:20:41.150032924 +0000 UTC m=+0.075288837 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.154 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[acb6f206-9a0b-47a8-a2cf-8116f591d048]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:41 np0005558241 podman[405560]: 2025-12-13 09:20:41.155910821 +0000 UTC m=+0.080919908 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.176 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d11a7368-a2c0-41c6-b588-0932ee2e22fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape41a163e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:85:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1017964, 'reachable_time': 28358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405631, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.186 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.187 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.187 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.187 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.194 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5abdf3a9-745b-486e-8e2d-c16f1a1e9b2b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape41a163e-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1017980, 'tstamp': 1017980}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405639, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape41a163e-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1017984, 'tstamp': 1017984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405639, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.196 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape41a163e-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.198 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.199 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape41a163e-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.199 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.199 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape41a163e-70, col_values=(('external_ids', {'iface-id': '0c3d5666-7453-4eb7-8973-bef187574418'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:20:41 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:41.200 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:20:41 np0005558241 podman[405563]: 2025-12-13 09:20:41.212661695 +0000 UTC m=+0.133836686 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.470 248514 DEBUG nova.compute.manager [req-b2379fa7-5b3d-4eb8-b03e-546910ceb47d req-ebb2cff3-809e-4f9d-a726-1fc4cfc838e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received event network-vif-plugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.472 248514 DEBUG oslo_concurrency.lockutils [req-b2379fa7-5b3d-4eb8-b03e-546910ceb47d req-ebb2cff3-809e-4f9d-a726-1fc4cfc838e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.472 248514 DEBUG oslo_concurrency.lockutils [req-b2379fa7-5b3d-4eb8-b03e-546910ceb47d req-ebb2cff3-809e-4f9d-a726-1fc4cfc838e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.473 248514 DEBUG oslo_concurrency.lockutils [req-b2379fa7-5b3d-4eb8-b03e-546910ceb47d req-ebb2cff3-809e-4f9d-a726-1fc4cfc838e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.473 248514 DEBUG nova.compute.manager [req-b2379fa7-5b3d-4eb8-b03e-546910ceb47d req-ebb2cff3-809e-4f9d-a726-1fc4cfc838e4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Processing event network-vif-plugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.670 248514 DEBUG nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.672 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617641.6702237, 776d7b4e-dc82-47f6-b1bb-53188ad804b6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.672 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] VM Started (Lifecycle Event)#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.676 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.680 248514 INFO nova.virt.libvirt.driver [-] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Instance spawned successfully.#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.680 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.704 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.710 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.714 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.714 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.715 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.715 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.715 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.716 248514 DEBUG nova.virt.libvirt.driver [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.743 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.744 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617641.6717417, 776d7b4e-dc82-47f6-b1bb-53188ad804b6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.745 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.782 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.788 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617641.6759708, 776d7b4e-dc82-47f6-b1bb-53188ad804b6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.788 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.796 248514 INFO nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Took 10.73 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.796 248514 DEBUG nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.808 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.812 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.844 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.871 248514 INFO nova.compute.manager [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Took 12.12 seconds to build instance.#033[00m
Dec 13 04:20:41 np0005558241 nova_compute[248510]: 2025-12-13 09:20:41.902 248514 DEBUG oslo_concurrency.lockutils [None req-7f96917f-dff0-4b79-b88e-e3dca66fb26f 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:20:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3588: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Dec 13 04:20:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:43 np0005558241 nova_compute[248510]: 2025-12-13 09:20:43.576 248514 DEBUG nova.compute.manager [req-64883493-2ea6-43d5-b08e-b26573f17bc8 req-ce06c684-7670-4b4c-92f4-158024a16652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received event network-vif-plugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:20:43 np0005558241 nova_compute[248510]: 2025-12-13 09:20:43.576 248514 DEBUG oslo_concurrency.lockutils [req-64883493-2ea6-43d5-b08e-b26573f17bc8 req-ce06c684-7670-4b4c-92f4-158024a16652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:20:43 np0005558241 nova_compute[248510]: 2025-12-13 09:20:43.576 248514 DEBUG oslo_concurrency.lockutils [req-64883493-2ea6-43d5-b08e-b26573f17bc8 req-ce06c684-7670-4b4c-92f4-158024a16652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:20:43 np0005558241 nova_compute[248510]: 2025-12-13 09:20:43.576 248514 DEBUG oslo_concurrency.lockutils [req-64883493-2ea6-43d5-b08e-b26573f17bc8 req-ce06c684-7670-4b4c-92f4-158024a16652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:20:43 np0005558241 nova_compute[248510]: 2025-12-13 09:20:43.577 248514 DEBUG nova.compute.manager [req-64883493-2ea6-43d5-b08e-b26573f17bc8 req-ce06c684-7670-4b4c-92f4-158024a16652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] No waiting events found dispatching network-vif-plugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:20:43 np0005558241 nova_compute[248510]: 2025-12-13 09:20:43.577 248514 WARNING nova.compute.manager [req-64883493-2ea6-43d5-b08e-b26573f17bc8 req-ce06c684-7670-4b4c-92f4-158024a16652 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received unexpected event network-vif-plugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:20:44 np0005558241 nova_compute[248510]: 2025-12-13 09:20:44.170 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:44 np0005558241 nova_compute[248510]: 2025-12-13 09:20:44.540 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3589: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 88 op/s
Dec 13 04:20:45 np0005558241 nova_compute[248510]: 2025-12-13 09:20:45.963 248514 DEBUG nova.compute.manager [req-69e9ea01-b0dd-49a7-a3b6-6ee7474e00da req-ffea6fb2-f506-41c6-a136-1ba5d32734d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received event network-changed-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:20:45 np0005558241 nova_compute[248510]: 2025-12-13 09:20:45.964 248514 DEBUG nova.compute.manager [req-69e9ea01-b0dd-49a7-a3b6-6ee7474e00da req-ffea6fb2-f506-41c6-a136-1ba5d32734d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Refreshing instance network info cache due to event network-changed-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:20:45 np0005558241 nova_compute[248510]: 2025-12-13 09:20:45.964 248514 DEBUG oslo_concurrency.lockutils [req-69e9ea01-b0dd-49a7-a3b6-6ee7474e00da req-ffea6fb2-f506-41c6-a136-1ba5d32734d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:20:45 np0005558241 nova_compute[248510]: 2025-12-13 09:20:45.964 248514 DEBUG oslo_concurrency.lockutils [req-69e9ea01-b0dd-49a7-a3b6-6ee7474e00da req-ffea6fb2-f506-41c6-a136-1ba5d32734d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:20:45 np0005558241 nova_compute[248510]: 2025-12-13 09:20:45.964 248514 DEBUG nova.network.neutron [req-69e9ea01-b0dd-49a7-a3b6-6ee7474e00da req-ffea6fb2-f506-41c6-a136-1ba5d32734d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Refreshing network info cache for port 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:20:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3590: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:20:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:48 np0005558241 nova_compute[248510]: 2025-12-13 09:20:48.444 248514 DEBUG nova.network.neutron [req-69e9ea01-b0dd-49a7-a3b6-6ee7474e00da req-ffea6fb2-f506-41c6-a136-1ba5d32734d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Updated VIF entry in instance network info cache for port 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:20:48 np0005558241 nova_compute[248510]: 2025-12-13 09:20:48.446 248514 DEBUG nova.network.neutron [req-69e9ea01-b0dd-49a7-a3b6-6ee7474e00da req-ffea6fb2-f506-41c6-a136-1ba5d32734d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Updating instance_info_cache with network_info: [{"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:20:48 np0005558241 nova_compute[248510]: 2025-12-13 09:20:48.473 248514 DEBUG oslo_concurrency.lockutils [req-69e9ea01-b0dd-49a7-a3b6-6ee7474e00da req-ffea6fb2-f506-41c6-a136-1ba5d32734d5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:20:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3591: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 13 04:20:49 np0005558241 nova_compute[248510]: 2025-12-13 09:20:49.172 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:20:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6600.0 total, 600.0 interval
Cumulative writes: 15K writes, 71K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1342 writes, 6337 keys, 1342 commit groups, 1.0 writes per commit group, ingest: 9.21 MB, 0.02 MB/s
Interval WAL: 1343 writes, 1343 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     23.4      3.92              0.32        52    0.075       0      0       0.0       0.0
  L6      1/0   10.75 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.9     83.6     70.9      6.37              1.46        51    0.125    350K    27K       0.0       0.0
 Sum      1/0   10.75 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.9     51.7     52.8     10.29              1.79       103    0.100    350K    27K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.7     64.7     66.9      1.08              0.29        12    0.090     55K   3080       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     83.6     70.9      6.37              1.46        51    0.125    350K    27K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     23.4      3.92              0.32        51    0.077       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6600.0 total, 600.0 interval
Flush(GB): cumulative 0.089, interval 0.010
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.53 GB write, 0.08 MB/s write, 0.52 GB read, 0.08 MB/s read, 10.3 seconds
Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 1.1 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 304.00 MB usage: 61.01 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.00065 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3770,58.51 MB,19.2459%) FilterBlock(104,973.55 KB,0.31274%) IndexBlock(104,1.56 MB,0.511807%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Dec 13 04:20:49 np0005558241 nova_compute[248510]: 2025-12-13 09:20:49.587 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3592: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 13 04:20:51 np0005558241 nova_compute[248510]: 2025-12-13 09:20:51.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:20:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3593: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Dec 13 04:20:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:54 np0005558241 nova_compute[248510]: 2025-12-13 09:20:54.174 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:54 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Dec 13 04:20:54 np0005558241 nova_compute[248510]: 2025-12-13 09:20:54.589 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:54 np0005558241 nova_compute[248510]: 2025-12-13 09:20:54.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:20:54 np0005558241 nova_compute[248510]: 2025-12-13 09:20:54.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 13 04:20:54 np0005558241 ovn_controller[148476]: 2025-12-13T09:20:54Z|00211|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ae:0a:19 10.100.0.14
Dec 13 04:20:54 np0005558241 ovn_controller[148476]: 2025-12-13T09:20:54Z|00212|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ae:0a:19 10.100.0.14
Dec 13 04:20:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3594: 321 pgs: 321 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.4 MiB/s wr, 83 op/s
Dec 13 04:20:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:55.457 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:20:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:55.458 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:20:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:20:55.459 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:20:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3595: 321 pgs: 321 active+clean; 188 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 152 KiB/s rd, 1.4 MiB/s wr, 25 op/s
Dec 13 04:20:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3596: 321 pgs: 321 active+clean; 199 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 04:20:59 np0005558241 nova_compute[248510]: 2025-12-13 09:20:59.177 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:20:59 np0005558241 nova_compute[248510]: 2025-12-13 09:20:59.591 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3597: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:21:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3598: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:21:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:03 np0005558241 nova_compute[248510]: 2025-12-13 09:21:03.868 248514 DEBUG nova.compute.manager [req-4346c1d7-c7bf-40c9-8317-941068d68fa0 req-cd8f57fb-51cb-43d9-855d-43fdabdb80ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received event network-changed-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:21:03 np0005558241 nova_compute[248510]: 2025-12-13 09:21:03.869 248514 DEBUG nova.compute.manager [req-4346c1d7-c7bf-40c9-8317-941068d68fa0 req-cd8f57fb-51cb-43d9-855d-43fdabdb80ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Refreshing instance network info cache due to event network-changed-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:21:03 np0005558241 nova_compute[248510]: 2025-12-13 09:21:03.869 248514 DEBUG oslo_concurrency.lockutils [req-4346c1d7-c7bf-40c9-8317-941068d68fa0 req-cd8f57fb-51cb-43d9-855d-43fdabdb80ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:21:03 np0005558241 nova_compute[248510]: 2025-12-13 09:21:03.870 248514 DEBUG oslo_concurrency.lockutils [req-4346c1d7-c7bf-40c9-8317-941068d68fa0 req-cd8f57fb-51cb-43d9-855d-43fdabdb80ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:21:03 np0005558241 nova_compute[248510]: 2025-12-13 09:21:03.870 248514 DEBUG nova.network.neutron [req-4346c1d7-c7bf-40c9-8317-941068d68fa0 req-cd8f57fb-51cb-43d9-855d-43fdabdb80ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Refreshing network info cache for port 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:21:03 np0005558241 nova_compute[248510]: 2025-12-13 09:21:03.950 248514 DEBUG oslo_concurrency.lockutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:03 np0005558241 nova_compute[248510]: 2025-12-13 09:21:03.951 248514 DEBUG oslo_concurrency.lockutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:03 np0005558241 nova_compute[248510]: 2025-12-13 09:21:03.951 248514 DEBUG oslo_concurrency.lockutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:03 np0005558241 nova_compute[248510]: 2025-12-13 09:21:03.952 248514 DEBUG oslo_concurrency.lockutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:03 np0005558241 nova_compute[248510]: 2025-12-13 09:21:03.953 248514 DEBUG oslo_concurrency.lockutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:03 np0005558241 nova_compute[248510]: 2025-12-13 09:21:03.956 248514 INFO nova.compute.manager [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Terminating instance#033[00m
Dec 13 04:21:03 np0005558241 nova_compute[248510]: 2025-12-13 09:21:03.957 248514 DEBUG nova.compute.manager [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:21:04 np0005558241 kernel: tap3e8a3ef7-51 (unregistering): left promiscuous mode
Dec 13 04:21:04 np0005558241 NetworkManager[50376]: <info>  [1765617664.0154] device (tap3e8a3ef7-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:21:04 np0005558241 ovn_controller[148476]: 2025-12-13T09:21:04Z|01618|binding|INFO|Releasing lport 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 from this chassis (sb_readonly=0)
Dec 13 04:21:04 np0005558241 ovn_controller[148476]: 2025-12-13T09:21:04Z|01619|binding|INFO|Setting lport 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 down in Southbound
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.030 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:04 np0005558241 ovn_controller[148476]: 2025-12-13T09:21:04Z|01620|binding|INFO|Removing iface tap3e8a3ef7-51 ovn-installed in OVS
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.032 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.041 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:0a:19 10.100.0.14 2001:db8:0:1:f816:3eff:feae:a19 2001:db8::f816:3eff:feae:a19'], port_security=['fa:16:3e:ae:0a:19 10.100.0.14 2001:db8:0:1:f816:3eff:feae:a19 2001:db8::f816:3eff:feae:a19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:feae:a19/64 2001:db8::f816:3eff:feae:a19/64', 'neutron:device_id': '776d7b4e-dc82-47f6-b1bb-53188ad804b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7518f8bf-76d2-4439-8293-d0d6a2979cbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60522aa7-9184-4e18-b84a-db845e6d2800, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=3e8a3ef7-51fa-466a-b6ae-c5e00a04d987) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:21:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.042 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 in datapath e41a163e-7597-4a26-aa5d-b894f952cca4 unbound from our chassis#033[00m
Dec 13 04:21:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.045 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e41a163e-7597-4a26-aa5d-b894f952cca4#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.045 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.066 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5d95fec9-1674-438d-9239-a33a9de33302]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:04 np0005558241 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000096.scope: Deactivated successfully.
Dec 13 04:21:04 np0005558241 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000096.scope: Consumed 13.626s CPU time.
Dec 13 04:21:04 np0005558241 systemd-machined[210538]: Machine qemu-181-instance-00000096 terminated.
Dec 13 04:21:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.104 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[cbaae919-a882-41e9-b262-a767235ad5ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.108 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[1851258e-5969-4e42-9ae7-c4b030519588]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.146 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[02ee4b7e-738a-4d12-a9ef-3e6ca89b3f36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.168 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[79f233a3-7313-4de3-8ea6-ebd41f5ff239]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape41a163e-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:85:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1017964, 'reachable_time': 28358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405695, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.179 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.191 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[84795b8f-c4a3-48ce-88d3-5ebdb71675e1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape41a163e-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1017980, 'tstamp': 1017980}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405697, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape41a163e-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1017984, 'tstamp': 1017984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405697, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.194 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape41a163e-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.197 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.201 248514 INFO nova.virt.libvirt.driver [-] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Instance destroyed successfully.#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.201 248514 DEBUG nova.objects.instance [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 776d7b4e-dc82-47f6-b1bb-53188ad804b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.210 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.211 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape41a163e-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:21:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.212 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:21:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.212 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape41a163e-70, col_values=(('external_ids', {'iface-id': '0c3d5666-7453-4eb7-8973-bef187574418'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:21:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:04.213 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.220 248514 DEBUG nova.virt.libvirt.vif [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:20:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-272771977',display_name='tempest-TestGettingAddress-server-272771977',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-272771977',id=150,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPIpcx/5vuboJVwrV0nq+WfHP6t5LA2OndBCH/1ll7d/PpOHljwzW/JjtFSE1pOt+PUX/l8C8ALgzEgIgSQ3b5WDhZcogX0HQPVvqXPOMGr4RM3k5j3wZaHphWzCWA2h6Q==',key_name='tempest-TestGettingAddress-483066452',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:20:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-nwca5oxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:20:41Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=776d7b4e-dc82-47f6-b1bb-53188ad804b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.221 248514 DEBUG nova.network.os_vif_util [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.222 248514 DEBUG nova.network.os_vif_util [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:0a:19,bridge_name='br-int',has_traffic_filtering=True,id=3e8a3ef7-51fa-466a-b6ae-c5e00a04d987,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e8a3ef7-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.223 248514 DEBUG os_vif [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:0a:19,bridge_name='br-int',has_traffic_filtering=True,id=3e8a3ef7-51fa-466a-b6ae-c5e00a04d987,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e8a3ef7-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.226 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.227 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e8a3ef7-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.231 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.235 248514 INFO os_vif [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:0a:19,bridge_name='br-int',has_traffic_filtering=True,id=3e8a3ef7-51fa-466a-b6ae-c5e00a04d987,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e8a3ef7-51')#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.557 248514 INFO nova.virt.libvirt.driver [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Deleting instance files /var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6_del#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.558 248514 INFO nova.virt.libvirt.driver [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Deletion of /var/lib/nova/instances/776d7b4e-dc82-47f6-b1bb-53188ad804b6_del complete#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.642 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.666 248514 INFO nova.compute.manager [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.667 248514 DEBUG oslo.service.loopingcall [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.668 248514 DEBUG nova.compute.manager [-] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.669 248514 DEBUG nova.network.neutron [-] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:21:04 np0005558241 nova_compute[248510]: 2025-12-13 09:21:04.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:21:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3599: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 04:21:05 np0005558241 nova_compute[248510]: 2025-12-13 09:21:05.812 248514 DEBUG nova.network.neutron [-] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:21:05 np0005558241 nova_compute[248510]: 2025-12-13 09:21:05.833 248514 INFO nova.compute.manager [-] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Took 1.16 seconds to deallocate network for instance.#033[00m
Dec 13 04:21:05 np0005558241 nova_compute[248510]: 2025-12-13 09:21:05.900 248514 DEBUG oslo_concurrency.lockutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:05 np0005558241 nova_compute[248510]: 2025-12-13 09:21:05.901 248514 DEBUG oslo_concurrency.lockutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:05 np0005558241 nova_compute[248510]: 2025-12-13 09:21:05.994 248514 DEBUG nova.compute.manager [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received event network-vif-unplugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:21:05 np0005558241 nova_compute[248510]: 2025-12-13 09:21:05.995 248514 DEBUG oslo_concurrency.lockutils [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:05 np0005558241 nova_compute[248510]: 2025-12-13 09:21:05.996 248514 DEBUG oslo_concurrency.lockutils [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:05 np0005558241 nova_compute[248510]: 2025-12-13 09:21:05.996 248514 DEBUG oslo_concurrency.lockutils [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:05 np0005558241 nova_compute[248510]: 2025-12-13 09:21:05.997 248514 DEBUG nova.compute.manager [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] No waiting events found dispatching network-vif-unplugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:21:05 np0005558241 nova_compute[248510]: 2025-12-13 09:21:05.998 248514 WARNING nova.compute.manager [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received unexpected event network-vif-unplugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:21:05 np0005558241 nova_compute[248510]: 2025-12-13 09:21:05.998 248514 DEBUG nova.compute.manager [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received event network-vif-plugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:21:05 np0005558241 nova_compute[248510]: 2025-12-13 09:21:05.999 248514 DEBUG oslo_concurrency.lockutils [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:06 np0005558241 nova_compute[248510]: 2025-12-13 09:21:05.999 248514 DEBUG oslo_concurrency.lockutils [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:06 np0005558241 nova_compute[248510]: 2025-12-13 09:21:06.000 248514 DEBUG oslo_concurrency.lockutils [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:06 np0005558241 nova_compute[248510]: 2025-12-13 09:21:06.000 248514 DEBUG nova.compute.manager [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] No waiting events found dispatching network-vif-plugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:21:06 np0005558241 nova_compute[248510]: 2025-12-13 09:21:06.001 248514 WARNING nova.compute.manager [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received unexpected event network-vif-plugged-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:21:06 np0005558241 nova_compute[248510]: 2025-12-13 09:21:06.001 248514 DEBUG nova.compute.manager [req-1a1afcd9-d975-438c-bf12-11677dde5889 req-01c26472-6a50-493f-a4b2-c228b281b9ed 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Received event network-vif-deleted-3e8a3ef7-51fa-466a-b6ae-c5e00a04d987 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:21:06 np0005558241 nova_compute[248510]: 2025-12-13 09:21:06.004 248514 DEBUG oslo_concurrency.processutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:21:06 np0005558241 nova_compute[248510]: 2025-12-13 09:21:06.090 248514 DEBUG nova.network.neutron [req-4346c1d7-c7bf-40c9-8317-941068d68fa0 req-cd8f57fb-51cb-43d9-855d-43fdabdb80ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Updated VIF entry in instance network info cache for port 3e8a3ef7-51fa-466a-b6ae-c5e00a04d987. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:21:06 np0005558241 nova_compute[248510]: 2025-12-13 09:21:06.091 248514 DEBUG nova.network.neutron [req-4346c1d7-c7bf-40c9-8317-941068d68fa0 req-cd8f57fb-51cb-43d9-855d-43fdabdb80ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Updating instance_info_cache with network_info: [{"id": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "address": "fa:16:3e:ae:0a:19", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feae:a19", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e8a3ef7-51", "ovs_interfaceid": "3e8a3ef7-51fa-466a-b6ae-c5e00a04d987", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:21:06 np0005558241 nova_compute[248510]: 2025-12-13 09:21:06.119 248514 DEBUG oslo_concurrency.lockutils [req-4346c1d7-c7bf-40c9-8317-941068d68fa0 req-cd8f57fb-51cb-43d9-855d-43fdabdb80ad 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-776d7b4e-dc82-47f6-b1bb-53188ad804b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:21:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:21:06 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/902950929' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:21:06 np0005558241 nova_compute[248510]: 2025-12-13 09:21:06.586 248514 DEBUG oslo_concurrency.processutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:21:06 np0005558241 nova_compute[248510]: 2025-12-13 09:21:06.593 248514 DEBUG nova.compute.provider_tree [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:21:06 np0005558241 nova_compute[248510]: 2025-12-13 09:21:06.615 248514 DEBUG nova.scheduler.client.report [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:21:06 np0005558241 nova_compute[248510]: 2025-12-13 09:21:06.646 248514 DEBUG oslo_concurrency.lockutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:06 np0005558241 nova_compute[248510]: 2025-12-13 09:21:06.675 248514 INFO nova.scheduler.client.report [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 776d7b4e-dc82-47f6-b1bb-53188ad804b6#033[00m
Dec 13 04:21:06 np0005558241 nova_compute[248510]: 2025-12-13 09:21:06.780 248514 DEBUG oslo_concurrency.lockutils [None req-c30a6abf-e7e4-4154-8c9d-fe790c24cbf5 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "776d7b4e-dc82-47f6-b1bb-53188ad804b6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3600: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 764 KiB/s wr, 40 op/s
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.786 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.787 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.787 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.788 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.788 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.789 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.789 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.812 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.832 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.833 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Image id 0ed20320-9c25-4108-ad76-64b3cb3500ce yields fingerprint 7e19890462cb757da298333dcef0801755c35301 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.833 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] image 0ed20320-9c25-4108-ad76-64b3cb3500ce at (/var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301): checking#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.834 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] image 0ed20320-9c25-4108-ad76-64b3cb3500ce at (/var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.837 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.837 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] 387ffe9e-b998-43aa-bb42-e87639c1ba6a is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.837 248514 WARNING nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Unknown base file: /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.838 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Active base files: /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.838 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Removable base files: /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.838 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/6963c18baf0656ea2bcf2978da1e5544086b18bc#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.838 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.839 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.839 248514 DEBUG nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.839 248514 INFO nova.virt.libvirt.imagecache [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.938 248514 DEBUG nova.compute.manager [req-dae94160-9378-4b1c-8b28-2cec80e11432 req-5f388227-1fcd-4abe-ab02-570a10f548bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received event network-changed-76d94bc0-cb6b-4bd0-8366-5158fdaece5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.939 248514 DEBUG nova.compute.manager [req-dae94160-9378-4b1c-8b28-2cec80e11432 req-5f388227-1fcd-4abe-ab02-570a10f548bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Refreshing instance network info cache due to event network-changed-76d94bc0-cb6b-4bd0-8366-5158fdaece5b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.939 248514 DEBUG oslo_concurrency.lockutils [req-dae94160-9378-4b1c-8b28-2cec80e11432 req-5f388227-1fcd-4abe-ab02-570a10f548bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.939 248514 DEBUG oslo_concurrency.lockutils [req-dae94160-9378-4b1c-8b28-2cec80e11432 req-5f388227-1fcd-4abe-ab02-570a10f548bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:21:07 np0005558241 nova_compute[248510]: 2025-12-13 09:21:07.939 248514 DEBUG nova.network.neutron [req-dae94160-9378-4b1c-8b28-2cec80e11432 req-5f388227-1fcd-4abe-ab02-570a10f548bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Refreshing network info cache for port 76d94bc0-cb6b-4bd0-8366-5158fdaece5b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.011 248514 DEBUG oslo_concurrency.lockutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.012 248514 DEBUG oslo_concurrency.lockutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.012 248514 DEBUG oslo_concurrency.lockutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.013 248514 DEBUG oslo_concurrency.lockutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.013 248514 DEBUG oslo_concurrency.lockutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.014 248514 INFO nova.compute.manager [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Terminating instance#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.016 248514 DEBUG nova.compute.manager [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:21:08 np0005558241 kernel: tap76d94bc0-cb (unregistering): left promiscuous mode
Dec 13 04:21:08 np0005558241 NetworkManager[50376]: <info>  [1765617668.0768] device (tap76d94bc0-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:21:08 np0005558241 ovn_controller[148476]: 2025-12-13T09:21:08Z|01621|binding|INFO|Releasing lport 76d94bc0-cb6b-4bd0-8366-5158fdaece5b from this chassis (sb_readonly=0)
Dec 13 04:21:08 np0005558241 ovn_controller[148476]: 2025-12-13T09:21:08Z|01622|binding|INFO|Setting lport 76d94bc0-cb6b-4bd0-8366-5158fdaece5b down in Southbound
Dec 13 04:21:08 np0005558241 ovn_controller[148476]: 2025-12-13T09:21:08Z|01623|binding|INFO|Removing iface tap76d94bc0-cb ovn-installed in OVS
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.089 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.092 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:6b:81 10.100.0.11 2001:db8:0:1:f816:3eff:fe30:6b81 2001:db8::f816:3eff:fe30:6b81'], port_security=['fa:16:3e:30:6b:81 10.100.0.11 2001:db8:0:1:f816:3eff:fe30:6b81 2001:db8::f816:3eff:fe30:6b81'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8:0:1:f816:3eff:fe30:6b81/64 2001:db8::f816:3eff:fe30:6b81/64', 'neutron:device_id': '387ffe9e-b998-43aa-bb42-e87639c1ba6a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e41a163e-7597-4a26-aa5d-b894f952cca4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7518f8bf-76d2-4439-8293-d0d6a2979cbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60522aa7-9184-4e18-b84a-db845e6d2800, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=76d94bc0-cb6b-4bd0-8366-5158fdaece5b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:21:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.094 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 76d94bc0-cb6b-4bd0-8366-5158fdaece5b in datapath e41a163e-7597-4a26-aa5d-b894f952cca4 unbound from our chassis#033[00m
Dec 13 04:21:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.097 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e41a163e-7597-4a26-aa5d-b894f952cca4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:21:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.098 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[28e63c90-d01c-40d8-b881-c15c31431893]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.100 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4 namespace which is not needed anymore#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.109 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:08 np0005558241 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000095.scope: Deactivated successfully.
Dec 13 04:21:08 np0005558241 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000095.scope: Consumed 15.403s CPU time.
Dec 13 04:21:08 np0005558241 systemd-machined[210538]: Machine qemu-180-instance-00000095 terminated.
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.259 248514 INFO nova.virt.libvirt.driver [-] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Instance destroyed successfully.#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.260 248514 DEBUG nova.objects.instance [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 387ffe9e-b998-43aa-bb42-e87639c1ba6a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.278 248514 DEBUG nova.virt.libvirt.vif [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:19:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-569396822',display_name='tempest-TestGettingAddress-server-569396822',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-569396822',id=149,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPIpcx/5vuboJVwrV0nq+WfHP6t5LA2OndBCH/1ll7d/PpOHljwzW/JjtFSE1pOt+PUX/l8C8ALgzEgIgSQ3b5WDhZcogX0HQPVvqXPOMGr4RM3k5j3wZaHphWzCWA2h6Q==',key_name='tempest-TestGettingAddress-483066452',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:20:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-r3ss7vci',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:20:03Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=387ffe9e-b998-43aa-bb42-e87639c1ba6a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.278 248514 DEBUG nova.network.os_vif_util [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.279 248514 DEBUG nova.network.os_vif_util [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=76d94bc0-cb6b-4bd0-8366-5158fdaece5b,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d94bc0-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.280 248514 DEBUG os_vif [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=76d94bc0-cb6b-4bd0-8366-5158fdaece5b,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d94bc0-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.282 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.282 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76d94bc0-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.307 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.309 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:08 np0005558241 neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4[404498]: [NOTICE]   (404502) : haproxy version is 2.8.14-c23fe91
Dec 13 04:21:08 np0005558241 neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4[404498]: [NOTICE]   (404502) : path to executable is /usr/sbin/haproxy
Dec 13 04:21:08 np0005558241 neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4[404498]: [WARNING]  (404502) : Exiting Master process...
Dec 13 04:21:08 np0005558241 neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4[404498]: [WARNING]  (404502) : Exiting Master process...
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.312 248514 INFO os_vif [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:6b:81,bridge_name='br-int',has_traffic_filtering=True,id=76d94bc0-cb6b-4bd0-8366-5158fdaece5b,network=Network(e41a163e-7597-4a26-aa5d-b894f952cca4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d94bc0-cb')#033[00m
Dec 13 04:21:08 np0005558241 neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4[404498]: [ALERT]    (404502) : Current worker (404504) exited with code 143 (Terminated)
Dec 13 04:21:08 np0005558241 neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4[404498]: [WARNING]  (404502) : All workers exited. Exiting... (0)
Dec 13 04:21:08 np0005558241 systemd[1]: libpod-d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c.scope: Deactivated successfully.
Dec 13 04:21:08 np0005558241 podman[405775]: 2025-12-13 09:21:08.321262111 +0000 UTC m=+0.087498931 container died d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 04:21:08 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c-userdata-shm.mount: Deactivated successfully.
Dec 13 04:21:08 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2fcb212c1c75939221274e39c5248b874df50042250d2b2667a9a1d38806218d-merged.mount: Deactivated successfully.
Dec 13 04:21:08 np0005558241 podman[405775]: 2025-12-13 09:21:08.369552514 +0000 UTC m=+0.135789334 container cleanup d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 04:21:08 np0005558241 systemd[1]: libpod-conmon-d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c.scope: Deactivated successfully.
Dec 13 04:21:08 np0005558241 podman[405833]: 2025-12-13 09:21:08.462123271 +0000 UTC m=+0.061231847 container remove d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:21:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.470 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b7a874-18f6-4713-8209-99a898bf4ced]: (4, ('Sat Dec 13 09:21:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4 (d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c)\nd73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c\nSat Dec 13 09:21:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4 (d73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c)\nd73c129eb87be3442336bd8ea1e4b092956ca01cef3b3433a685aab43705e78c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.473 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bf655df2-146b-43aa-8832-7b807c8585ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.474 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape41a163e-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.476 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:08 np0005558241 kernel: tape41a163e-70: left promiscuous mode
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.490 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.492 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cd16346b-4b91-4b26-bcf1-408f85ec749f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.509 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[1e53fbde-e9a2-4f82-93cc-3103236d6163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.511 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[82b47ace-cc35-4d2f-8fb3-8207d89641c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.527 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ad94e5-7a65-4bd0-abec-04512fda14a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1017954, 'reachable_time': 20185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405849, 'error': None, 'target': 'ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:08 np0005558241 systemd[1]: run-netns-ovnmeta\x2de41a163e\x2d7597\x2d4a26\x2daa5d\x2db894f952cca4.mount: Deactivated successfully.
Dec 13 04:21:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.531 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e41a163e-7597-4a26-aa5d-b894f952cca4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:21:08 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:08.531 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[2517588a-e44c-47d1-a120-7bc523f1f6b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.615 248514 INFO nova.virt.libvirt.driver [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Deleting instance files /var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a_del#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.616 248514 INFO nova.virt.libvirt.driver [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Deletion of /var/lib/nova/instances/387ffe9e-b998-43aa-bb42-e87639c1ba6a_del complete#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.687 248514 INFO nova.compute.manager [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.687 248514 DEBUG oslo.service.loopingcall [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.687 248514 DEBUG nova.compute.manager [-] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:21:08 np0005558241 nova_compute[248510]: 2025-12-13 09:21:08.688 248514 DEBUG nova.network.neutron [-] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:21:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3601: 321 pgs: 321 active+clean; 136 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 204 KiB/s rd, 766 KiB/s wr, 70 op/s
Dec 13 04:21:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:21:09
Dec 13 04:21:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:21:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:21:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.log', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'images']
Dec 13 04:21:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:21:09 np0005558241 nova_compute[248510]: 2025-12-13 09:21:09.474 248514 DEBUG nova.network.neutron [-] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:21:09 np0005558241 nova_compute[248510]: 2025-12-13 09:21:09.499 248514 INFO nova.compute.manager [-] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Took 0.81 seconds to deallocate network for instance.#033[00m
Dec 13 04:21:09 np0005558241 nova_compute[248510]: 2025-12-13 09:21:09.563 248514 DEBUG oslo_concurrency.lockutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:09 np0005558241 nova_compute[248510]: 2025-12-13 09:21:09.564 248514 DEBUG oslo_concurrency.lockutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:09 np0005558241 nova_compute[248510]: 2025-12-13 09:21:09.573 248514 DEBUG nova.compute.manager [req-ecc7fbd4-505f-4b72-8efa-e431aaa1f05c req-7a099754-7e46-4889-8b84-c974b07c384d 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received event network-vif-deleted-76d94bc0-cb6b-4bd0-8366-5158fdaece5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:21:09 np0005558241 nova_compute[248510]: 2025-12-13 09:21:09.619 248514 DEBUG oslo_concurrency.processutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:21:09 np0005558241 nova_compute[248510]: 2025-12-13 09:21:09.668 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:09 np0005558241 nova_compute[248510]: 2025-12-13 09:21:09.919 248514 DEBUG nova.network.neutron [req-dae94160-9378-4b1c-8b28-2cec80e11432 req-5f388227-1fcd-4abe-ab02-570a10f548bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updated VIF entry in instance network info cache for port 76d94bc0-cb6b-4bd0-8366-5158fdaece5b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:21:09 np0005558241 nova_compute[248510]: 2025-12-13 09:21:09.920 248514 DEBUG nova.network.neutron [req-dae94160-9378-4b1c-8b28-2cec80e11432 req-5f388227-1fcd-4abe-ab02-570a10f548bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Updating instance_info_cache with network_info: [{"id": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "address": "fa:16:3e:30:6b:81", "network": {"id": "e41a163e-7597-4a26-aa5d-b894f952cca4", "bridge": "br-int", "label": "tempest-network-smoke--1213061193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe30:6b81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d94bc0-cb", "ovs_interfaceid": "76d94bc0-cb6b-4bd0-8366-5158fdaece5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:21:09 np0005558241 nova_compute[248510]: 2025-12-13 09:21:09.943 248514 DEBUG oslo_concurrency.lockutils [req-dae94160-9378-4b1c-8b28-2cec80e11432 req-5f388227-1fcd-4abe-ab02-570a10f548bd 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-387ffe9e-b998-43aa-bb42-e87639c1ba6a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:21:10 np0005558241 nova_compute[248510]: 2025-12-13 09:21:10.046 248514 DEBUG nova.compute.manager [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received event network-vif-unplugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:21:10 np0005558241 nova_compute[248510]: 2025-12-13 09:21:10.047 248514 DEBUG oslo_concurrency.lockutils [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:10 np0005558241 nova_compute[248510]: 2025-12-13 09:21:10.048 248514 DEBUG oslo_concurrency.lockutils [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:10 np0005558241 nova_compute[248510]: 2025-12-13 09:21:10.048 248514 DEBUG oslo_concurrency.lockutils [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:10 np0005558241 nova_compute[248510]: 2025-12-13 09:21:10.048 248514 DEBUG nova.compute.manager [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] No waiting events found dispatching network-vif-unplugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:21:10 np0005558241 nova_compute[248510]: 2025-12-13 09:21:10.049 248514 WARNING nova.compute.manager [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received unexpected event network-vif-unplugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:21:10 np0005558241 nova_compute[248510]: 2025-12-13 09:21:10.049 248514 DEBUG nova.compute.manager [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received event network-vif-plugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:21:10 np0005558241 nova_compute[248510]: 2025-12-13 09:21:10.049 248514 DEBUG oslo_concurrency.lockutils [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:10 np0005558241 nova_compute[248510]: 2025-12-13 09:21:10.050 248514 DEBUG oslo_concurrency.lockutils [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:10 np0005558241 nova_compute[248510]: 2025-12-13 09:21:10.050 248514 DEBUG oslo_concurrency.lockutils [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:10 np0005558241 nova_compute[248510]: 2025-12-13 09:21:10.050 248514 DEBUG nova.compute.manager [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] No waiting events found dispatching network-vif-plugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:21:10 np0005558241 nova_compute[248510]: 2025-12-13 09:21:10.051 248514 WARNING nova.compute.manager [req-fa131358-21e4-4ddd-af56-48ffe96679d4 req-365ddf97-6547-41b6-b4de-d1f774fe1ba3 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Received unexpected event network-vif-plugged-76d94bc0-cb6b-4bd0-8366-5158fdaece5b for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:21:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:21:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:21:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:21:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:21:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:21:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:21:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:21:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/182366854' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:21:10 np0005558241 nova_compute[248510]: 2025-12-13 09:21:10.245 248514 DEBUG oslo_concurrency.processutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:21:10 np0005558241 nova_compute[248510]: 2025-12-13 09:21:10.255 248514 DEBUG nova.compute.provider_tree [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:21:10 np0005558241 nova_compute[248510]: 2025-12-13 09:21:10.285 248514 DEBUG nova.scheduler.client.report [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:21:10 np0005558241 nova_compute[248510]: 2025-12-13 09:21:10.315 248514 DEBUG oslo_concurrency.lockutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:10 np0005558241 nova_compute[248510]: 2025-12-13 09:21:10.358 248514 INFO nova.scheduler.client.report [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 387ffe9e-b998-43aa-bb42-e87639c1ba6a#033[00m
Dec 13 04:21:10 np0005558241 nova_compute[248510]: 2025-12-13 09:21:10.431 248514 DEBUG oslo_concurrency.lockutils [None req-d40bbed2-7721-45a0-9244-5daa08d3db82 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "387ffe9e-b998-43aa-bb42-e87639c1ba6a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:10 np0005558241 nova_compute[248510]: 2025-12-13 09:21:10.819 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:21:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3602: 321 pgs: 321 active+clean; 79 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 26 KiB/s wr, 46 op/s
Dec 13 04:21:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:21:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:21:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:21:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:21:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:21:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:21:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:21:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:21:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:21:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:21:12 np0005558241 podman[405873]: 2025-12-13 09:21:12.013984353 +0000 UTC m=+0.082594189 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec 13 04:21:12 np0005558241 podman[405874]: 2025-12-13 09:21:12.02749345 +0000 UTC m=+0.095794008 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 04:21:12 np0005558241 podman[405872]: 2025-12-13 09:21:12.042618556 +0000 UTC m=+0.114387411 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Dec 13 04:21:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3603: 321 pgs: 321 active+clean; 46 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 26 KiB/s wr, 56 op/s
Dec 13 04:21:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:13 np0005558241 nova_compute[248510]: 2025-12-13 09:21:13.309 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:13.536 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:21:13 np0005558241 nova_compute[248510]: 2025-12-13 09:21:13.537 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:13 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:13.538 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:21:14 np0005558241 nova_compute[248510]: 2025-12-13 09:21:14.645 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3604: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 24 KiB/s wr, 58 op/s
Dec 13 04:21:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:21:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/452393289' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:21:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:21:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/452393289' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:21:16 np0005558241 nova_compute[248510]: 2025-12-13 09:21:16.840 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:16 np0005558241 nova_compute[248510]: 2025-12-13 09:21:16.958 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3605: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Dec 13 04:21:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:18 np0005558241 nova_compute[248510]: 2025-12-13 09:21:18.312 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3606: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Dec 13 04:21:19 np0005558241 nova_compute[248510]: 2025-12-13 09:21:19.201 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617664.199182, 776d7b4e-dc82-47f6-b1bb-53188ad804b6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:21:19 np0005558241 nova_compute[248510]: 2025-12-13 09:21:19.201 248514 INFO nova.compute.manager [-] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:21:19 np0005558241 nova_compute[248510]: 2025-12-13 09:21:19.229 248514 DEBUG nova.compute.manager [None req-d2aa4f23-4e27-4eef-abe2-6ffd2fc1c42a - - - - - -] [instance: 776d7b4e-dc82-47f6-b1bb-53188ad804b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:21:19 np0005558241 nova_compute[248510]: 2025-12-13 09:21:19.647 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3607: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 3.3 KiB/s wr, 25 op/s
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.497708657387606e-05 of space, bias 1.0, pg target 0.004493125972162818 quantized to 32 (current 32)
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696862883188067 of space, bias 1.0, pg target 0.200905886495642 quantized to 32 (current 32)
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.723939957605983e-07 of space, bias 4.0, pg target 0.0006868727949127179 quantized to 16 (current 32)
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:21:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3608: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 682 B/s wr, 12 op/s
Dec 13 04:21:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:23 np0005558241 nova_compute[248510]: 2025-12-13 09:21:23.255 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617668.25371, 387ffe9e-b998-43aa-bb42-e87639c1ba6a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:21:23 np0005558241 nova_compute[248510]: 2025-12-13 09:21:23.256 248514 INFO nova.compute.manager [-] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] VM Stopped (Lifecycle Event)
Dec 13 04:21:23 np0005558241 nova_compute[248510]: 2025-12-13 09:21:23.283 248514 DEBUG nova.compute.manager [None req-e54e4e89-532d-43c1-a452-6b12d8a36f8c - - - - - -] [instance: 387ffe9e-b998-43aa-bb42-e87639c1ba6a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:21:23 np0005558241 nova_compute[248510]: 2025-12-13 09:21:23.315 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:23.541 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:21:24 np0005558241 nova_compute[248510]: 2025-12-13 09:21:24.649 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3609: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Dec 13 04:21:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:21:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:21:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:21:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:21:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:21:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:21:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:21:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:21:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:21:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:21:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:21:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:21:25 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:21:25 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:21:25 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:21:25 np0005558241 nova_compute[248510]: 2025-12-13 09:21:25.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:21:25 np0005558241 podman[406082]: 2025-12-13 09:21:25.935307711 +0000 UTC m=+0.108605007 container create 94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_meitner, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Dec 13 04:21:25 np0005558241 podman[406082]: 2025-12-13 09:21:25.85339856 +0000 UTC m=+0.026695886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:21:26 np0005558241 systemd[1]: Started libpod-conmon-94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322.scope.
Dec 13 04:21:26 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:21:26 np0005558241 podman[406082]: 2025-12-13 09:21:26.181596308 +0000 UTC m=+0.354893624 container init 94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_meitner, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:21:26 np0005558241 podman[406082]: 2025-12-13 09:21:26.189108636 +0000 UTC m=+0.362405932 container start 94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_meitner, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 04:21:26 np0005558241 eager_meitner[406098]: 167 167
Dec 13 04:21:26 np0005558241 systemd[1]: libpod-94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322.scope: Deactivated successfully.
Dec 13 04:21:26 np0005558241 conmon[406098]: conmon 94876c858e2883d6b07e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322.scope/container/memory.events
Dec 13 04:21:26 np0005558241 podman[406082]: 2025-12-13 09:21:26.301461985 +0000 UTC m=+0.474759321 container attach 94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 04:21:26 np0005558241 podman[406082]: 2025-12-13 09:21:26.30205355 +0000 UTC m=+0.475350886 container died 94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_meitner, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 04:21:26 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6159560793f0aedec36f604f1b7529852b4d0a50373f6bcf585bffead3e1f8a2-merged.mount: Deactivated successfully.
Dec 13 04:21:26 np0005558241 podman[406082]: 2025-12-13 09:21:26.612184007 +0000 UTC m=+0.785481313 container remove 94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 04:21:26 np0005558241 systemd[1]: libpod-conmon-94876c858e2883d6b07efef81269a64ebfbf1eb610b905934f3d040092ec7322.scope: Deactivated successfully.
Dec 13 04:21:26 np0005558241 podman[406120]: 2025-12-13 09:21:26.800245283 +0000 UTC m=+0.043032713 container create 1b858cc7eba5e8e049695af91a7321a5f6455030afe2d762ac42262895d77d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_goldberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 04:21:26 np0005558241 podman[406120]: 2025-12-13 09:21:26.78168395 +0000 UTC m=+0.024471400 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:21:27 np0005558241 systemd[1]: Started libpod-conmon-1b858cc7eba5e8e049695af91a7321a5f6455030afe2d762ac42262895d77d79.scope.
Dec 13 04:21:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3610: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:21:27 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:21:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a8f9c440e039a231c973d2e121d5cac44035549957d4983cab371de0163695f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a8f9c440e039a231c973d2e121d5cac44035549957d4983cab371de0163695f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a8f9c440e039a231c973d2e121d5cac44035549957d4983cab371de0163695f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a8f9c440e039a231c973d2e121d5cac44035549957d4983cab371de0163695f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:27 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a8f9c440e039a231c973d2e121d5cac44035549957d4983cab371de0163695f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:27 np0005558241 podman[406120]: 2025-12-13 09:21:27.109627941 +0000 UTC m=+0.352415391 container init 1b858cc7eba5e8e049695af91a7321a5f6455030afe2d762ac42262895d77d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:21:27 np0005558241 podman[406120]: 2025-12-13 09:21:27.117950059 +0000 UTC m=+0.360737489 container start 1b858cc7eba5e8e049695af91a7321a5f6455030afe2d762ac42262895d77d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:21:27 np0005558241 podman[406120]: 2025-12-13 09:21:27.121374674 +0000 UTC m=+0.364162104 container attach 1b858cc7eba5e8e049695af91a7321a5f6455030afe2d762ac42262895d77d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec 13 04:21:27 np0005558241 laughing_goldberg[406137]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:21:27 np0005558241 laughing_goldberg[406137]: --> All data devices are unavailable
Dec 13 04:21:27 np0005558241 systemd[1]: libpod-1b858cc7eba5e8e049695af91a7321a5f6455030afe2d762ac42262895d77d79.scope: Deactivated successfully.
Dec 13 04:21:27 np0005558241 podman[406120]: 2025-12-13 09:21:27.644227242 +0000 UTC m=+0.887014712 container died 1b858cc7eba5e8e049695af91a7321a5f6455030afe2d762ac42262895d77d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_goldberg, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:21:27 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1a8f9c440e039a231c973d2e121d5cac44035549957d4983cab371de0163695f-merged.mount: Deactivated successfully.
Dec 13 04:21:27 np0005558241 nova_compute[248510]: 2025-12-13 09:21:27.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:21:27 np0005558241 nova_compute[248510]: 2025-12-13 09:21:27.775 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 04:21:27 np0005558241 nova_compute[248510]: 2025-12-13 09:21:27.807 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 04:21:27 np0005558241 podman[406120]: 2025-12-13 09:21:27.840622405 +0000 UTC m=+1.083409845 container remove 1b858cc7eba5e8e049695af91a7321a5f6455030afe2d762ac42262895d77d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 04:21:27 np0005558241 systemd[1]: libpod-conmon-1b858cc7eba5e8e049695af91a7321a5f6455030afe2d762ac42262895d77d79.scope: Deactivated successfully.
Dec 13 04:21:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:28 np0005558241 nova_compute[248510]: 2025-12-13 09:21:28.317 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:28 np0005558241 podman[406233]: 2025-12-13 09:21:28.331405534 +0000 UTC m=+0.046054508 container create 3b0407ff67f6920dab97a796f80fcc17dabf1b8be12f5390cb626320b4ea0ff7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_knuth, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:21:28 np0005558241 podman[406233]: 2025-12-13 09:21:28.311596641 +0000 UTC m=+0.026245625 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:21:28 np0005558241 systemd[1]: Started libpod-conmon-3b0407ff67f6920dab97a796f80fcc17dabf1b8be12f5390cb626320b4ea0ff7.scope.
Dec 13 04:21:28 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:21:28 np0005558241 podman[406233]: 2025-12-13 09:21:28.711976978 +0000 UTC m=+0.426625962 container init 3b0407ff67f6920dab97a796f80fcc17dabf1b8be12f5390cb626320b4ea0ff7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:21:28 np0005558241 podman[406233]: 2025-12-13 09:21:28.719749591 +0000 UTC m=+0.434398565 container start 3b0407ff67f6920dab97a796f80fcc17dabf1b8be12f5390cb626320b4ea0ff7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_knuth, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:21:28 np0005558241 sweet_knuth[406249]: 167 167
Dec 13 04:21:28 np0005558241 systemd[1]: libpod-3b0407ff67f6920dab97a796f80fcc17dabf1b8be12f5390cb626320b4ea0ff7.scope: Deactivated successfully.
Dec 13 04:21:28 np0005558241 podman[406233]: 2025-12-13 09:21:28.73616085 +0000 UTC m=+0.450809844 container attach 3b0407ff67f6920dab97a796f80fcc17dabf1b8be12f5390cb626320b4ea0ff7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Dec 13 04:21:28 np0005558241 podman[406233]: 2025-12-13 09:21:28.738364675 +0000 UTC m=+0.453013679 container died 3b0407ff67f6920dab97a796f80fcc17dabf1b8be12f5390cb626320b4ea0ff7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_knuth, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:21:28 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3f9af6ca7bb067ef107378ecd0a6a06c346514356b600e890f64242678a9cefe-merged.mount: Deactivated successfully.
Dec 13 04:21:28 np0005558241 podman[406233]: 2025-12-13 09:21:28.780362602 +0000 UTC m=+0.495011576 container remove 3b0407ff67f6920dab97a796f80fcc17dabf1b8be12f5390cb626320b4ea0ff7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:21:28 np0005558241 systemd[1]: libpod-conmon-3b0407ff67f6920dab97a796f80fcc17dabf1b8be12f5390cb626320b4ea0ff7.scope: Deactivated successfully.
Dec 13 04:21:28 np0005558241 nova_compute[248510]: 2025-12-13 09:21:28.801 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:21:28 np0005558241 podman[406274]: 2025-12-13 09:21:28.959996857 +0000 UTC m=+0.042992492 container create 91562648a8b33b0a0c267d84a34f47af58d29bf1365c0d9276d943e12706d757 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:21:29 np0005558241 systemd[1]: Started libpod-conmon-91562648a8b33b0a0c267d84a34f47af58d29bf1365c0d9276d943e12706d757.scope.
Dec 13 04:21:29 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:21:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1f8ff01510d98d1c8fb5492a0d10be8eef77418742d2c2b2bd1b55aebbe0a33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1f8ff01510d98d1c8fb5492a0d10be8eef77418742d2c2b2bd1b55aebbe0a33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1f8ff01510d98d1c8fb5492a0d10be8eef77418742d2c2b2bd1b55aebbe0a33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1f8ff01510d98d1c8fb5492a0d10be8eef77418742d2c2b2bd1b55aebbe0a33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:29 np0005558241 podman[406274]: 2025-12-13 09:21:28.942227254 +0000 UTC m=+0.025222919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:21:29 np0005558241 podman[406274]: 2025-12-13 09:21:29.044746669 +0000 UTC m=+0.127742344 container init 91562648a8b33b0a0c267d84a34f47af58d29bf1365c0d9276d943e12706d757 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 04:21:29 np0005558241 podman[406274]: 2025-12-13 09:21:29.053588889 +0000 UTC m=+0.136584524 container start 91562648a8b33b0a0c267d84a34f47af58d29bf1365c0d9276d943e12706d757 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:21:29 np0005558241 podman[406274]: 2025-12-13 09:21:29.05683241 +0000 UTC m=+0.139828065 container attach 91562648a8b33b0a0c267d84a34f47af58d29bf1365c0d9276d943e12706d757 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bassi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:21:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3611: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]: {
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:    "0": [
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:        {
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "devices": [
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "/dev/loop3"
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            ],
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "lv_name": "ceph_lv0",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "lv_size": "21470642176",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "name": "ceph_lv0",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "tags": {
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.cluster_name": "ceph",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.crush_device_class": "",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.encrypted": "0",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.objectstore": "bluestore",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.osd_id": "0",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.type": "block",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.vdo": "0",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.with_tpm": "0"
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            },
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "type": "block",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "vg_name": "ceph_vg0"
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:        }
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:    ],
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:    "1": [
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:        {
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "devices": [
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "/dev/loop4"
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            ],
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "lv_name": "ceph_lv1",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "lv_size": "21470642176",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "name": "ceph_lv1",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "tags": {
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.cluster_name": "ceph",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.crush_device_class": "",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.encrypted": "0",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.objectstore": "bluestore",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.osd_id": "1",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.type": "block",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.vdo": "0",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.with_tpm": "0"
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            },
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "type": "block",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "vg_name": "ceph_vg1"
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:        }
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:    ],
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:    "2": [
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:        {
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "devices": [
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "/dev/loop5"
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            ],
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "lv_name": "ceph_lv2",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "lv_size": "21470642176",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "name": "ceph_lv2",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "tags": {
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.cluster_name": "ceph",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.crush_device_class": "",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.encrypted": "0",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.objectstore": "bluestore",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.osd_id": "2",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.type": "block",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.vdo": "0",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:                "ceph.with_tpm": "0"
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            },
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "type": "block",
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:            "vg_name": "ceph_vg2"
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:        }
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]:    ]
Dec 13 04:21:29 np0005558241 flamboyant_bassi[406290]: }
Dec 13 04:21:29 np0005558241 systemd[1]: libpod-91562648a8b33b0a0c267d84a34f47af58d29bf1365c0d9276d943e12706d757.scope: Deactivated successfully.
Dec 13 04:21:29 np0005558241 podman[406274]: 2025-12-13 09:21:29.407124958 +0000 UTC m=+0.490120623 container died 91562648a8b33b0a0c267d84a34f47af58d29bf1365c0d9276d943e12706d757 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 04:21:29 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a1f8ff01510d98d1c8fb5492a0d10be8eef77418742d2c2b2bd1b55aebbe0a33-merged.mount: Deactivated successfully.
Dec 13 04:21:29 np0005558241 podman[406274]: 2025-12-13 09:21:29.455809091 +0000 UTC m=+0.538804726 container remove 91562648a8b33b0a0c267d84a34f47af58d29bf1365c0d9276d943e12706d757 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_bassi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec 13 04:21:29 np0005558241 systemd[1]: libpod-conmon-91562648a8b33b0a0c267d84a34f47af58d29bf1365c0d9276d943e12706d757.scope: Deactivated successfully.
Dec 13 04:21:29 np0005558241 nova_compute[248510]: 2025-12-13 09:21:29.652 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:30 np0005558241 podman[406374]: 2025-12-13 09:21:30.006706328 +0000 UTC m=+0.057146455 container create ca04b9aa727fe5bfadf64afc1defd135aed1b7339d42c5e84132a7492e8ddab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:21:30 np0005558241 systemd[1]: Started libpod-conmon-ca04b9aa727fe5bfadf64afc1defd135aed1b7339d42c5e84132a7492e8ddab3.scope.
Dec 13 04:21:30 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:21:30 np0005558241 podman[406374]: 2025-12-13 09:21:29.98914079 +0000 UTC m=+0.039580937 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:21:30 np0005558241 podman[406374]: 2025-12-13 09:21:30.090771853 +0000 UTC m=+0.141212000 container init ca04b9aa727fe5bfadf64afc1defd135aed1b7339d42c5e84132a7492e8ddab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ptolemy, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:21:30 np0005558241 podman[406374]: 2025-12-13 09:21:30.097239874 +0000 UTC m=+0.147680011 container start ca04b9aa727fe5bfadf64afc1defd135aed1b7339d42c5e84132a7492e8ddab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 04:21:30 np0005558241 podman[406374]: 2025-12-13 09:21:30.101776427 +0000 UTC m=+0.152216564 container attach ca04b9aa727fe5bfadf64afc1defd135aed1b7339d42c5e84132a7492e8ddab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ptolemy, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 04:21:30 np0005558241 dreamy_ptolemy[406390]: 167 167
Dec 13 04:21:30 np0005558241 systemd[1]: libpod-ca04b9aa727fe5bfadf64afc1defd135aed1b7339d42c5e84132a7492e8ddab3.scope: Deactivated successfully.
Dec 13 04:21:30 np0005558241 podman[406374]: 2025-12-13 09:21:30.104806872 +0000 UTC m=+0.155246999 container died ca04b9aa727fe5bfadf64afc1defd135aed1b7339d42c5e84132a7492e8ddab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ptolemy, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:21:30 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8979391525bde997567861d586ab3fbc40b97ee3b952f62b85a50f699e3124d9-merged.mount: Deactivated successfully.
Dec 13 04:21:30 np0005558241 podman[406374]: 2025-12-13 09:21:30.140519742 +0000 UTC m=+0.190959869 container remove ca04b9aa727fe5bfadf64afc1defd135aed1b7339d42c5e84132a7492e8ddab3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ptolemy, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 04:21:30 np0005558241 systemd[1]: libpod-conmon-ca04b9aa727fe5bfadf64afc1defd135aed1b7339d42c5e84132a7492e8ddab3.scope: Deactivated successfully.
Dec 13 04:21:30 np0005558241 nova_compute[248510]: 2025-12-13 09:21:30.224 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:21:30 np0005558241 podman[406414]: 2025-12-13 09:21:30.324277381 +0000 UTC m=+0.051051743 container create 7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_knuth, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:21:30 np0005558241 systemd[1]: Started libpod-conmon-7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0.scope.
Dec 13 04:21:30 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:21:30 np0005558241 podman[406414]: 2025-12-13 09:21:30.302427517 +0000 UTC m=+0.029201909 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:21:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebaed8f01766ed33c5f0f568ce8f0d6374ecd1c18c1c3b3327c54d1256e5014f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebaed8f01766ed33c5f0f568ce8f0d6374ecd1c18c1c3b3327c54d1256e5014f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebaed8f01766ed33c5f0f568ce8f0d6374ecd1c18c1c3b3327c54d1256e5014f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:30 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebaed8f01766ed33c5f0f568ce8f0d6374ecd1c18c1c3b3327c54d1256e5014f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:30 np0005558241 podman[406414]: 2025-12-13 09:21:30.417228987 +0000 UTC m=+0.144003369 container init 7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_knuth, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 04:21:30 np0005558241 podman[406414]: 2025-12-13 09:21:30.425479833 +0000 UTC m=+0.152254195 container start 7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_knuth, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 04:21:30 np0005558241 podman[406414]: 2025-12-13 09:21:30.429293918 +0000 UTC m=+0.156068290 container attach 7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_knuth, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:21:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3612: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:21:31 np0005558241 lvm[406508]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:21:31 np0005558241 lvm[406509]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:21:31 np0005558241 lvm[406509]: VG ceph_vg0 finished
Dec 13 04:21:31 np0005558241 lvm[406508]: VG ceph_vg1 finished
Dec 13 04:21:31 np0005558241 lvm[406511]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:21:31 np0005558241 lvm[406511]: VG ceph_vg2 finished
Dec 13 04:21:31 np0005558241 nostalgic_knuth[406430]: {}
Dec 13 04:21:31 np0005558241 systemd[1]: libpod-7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0.scope: Deactivated successfully.
Dec 13 04:21:31 np0005558241 systemd[1]: libpod-7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0.scope: Consumed 1.524s CPU time.
Dec 13 04:21:31 np0005558241 podman[406414]: 2025-12-13 09:21:31.354783997 +0000 UTC m=+1.081558359 container died 7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 04:21:31 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ebaed8f01766ed33c5f0f568ce8f0d6374ecd1c18c1c3b3327c54d1256e5014f-merged.mount: Deactivated successfully.
Dec 13 04:21:31 np0005558241 podman[406414]: 2025-12-13 09:21:31.401379558 +0000 UTC m=+1.128153920 container remove 7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:21:31 np0005558241 systemd[1]: libpod-conmon-7db03e344954204d8c58266a9e416edf12df450c0cea19db52782fae672c23e0.scope: Deactivated successfully.
Dec 13 04:21:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:21:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:21:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:21:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:21:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:21:32 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:21:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3613: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:21:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:33 np0005558241 nova_compute[248510]: 2025-12-13 09:21:33.321 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:33 np0005558241 nova_compute[248510]: 2025-12-13 09:21:33.799 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:21:33 np0005558241 nova_compute[248510]: 2025-12-13 09:21:33.800 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:21:33 np0005558241 nova_compute[248510]: 2025-12-13 09:21:33.801 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:21:34 np0005558241 nova_compute[248510]: 2025-12-13 09:21:34.284 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:21:34 np0005558241 nova_compute[248510]: 2025-12-13 09:21:34.654 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3614: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:21:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:35.903 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:66:81 10.100.0.2 2001:db8::f816:3eff:fe2c:6681'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe2c:6681/64', 'neutron:device_id': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=681ece70-9463-4a56-a445-6b0b679ff248, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f3bff06d-212e-4afb-99d4-011e9e890967) old=Port_Binding(mac=['fa:16:3e:2c:66:81 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:21:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:35.904 158419 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f3bff06d-212e-4afb-99d4-011e9e890967 in datapath e4b9bd04-ada7-4867-9918-3cd5d21d273d updated#033[00m
Dec 13 04:21:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:35.905 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4b9bd04-ada7-4867-9918-3cd5d21d273d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:21:35 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:35.907 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd765596-1e39-4cc9-aa67-ab874f3f2a27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:36 np0005558241 nova_compute[248510]: 2025-12-13 09:21:36.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:21:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3615: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:21:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:38 np0005558241 nova_compute[248510]: 2025-12-13 09:21:38.365 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:38 np0005558241 nova_compute[248510]: 2025-12-13 09:21:38.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:21:38 np0005558241 nova_compute[248510]: 2025-12-13 09:21:38.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:21:38 np0005558241 nova_compute[248510]: 2025-12-13 09:21:38.813 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:38 np0005558241 nova_compute[248510]: 2025-12-13 09:21:38.814 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:38 np0005558241 nova_compute[248510]: 2025-12-13 09:21:38.814 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:38 np0005558241 nova_compute[248510]: 2025-12-13 09:21:38.814 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:21:38 np0005558241 nova_compute[248510]: 2025-12-13 09:21:38.815 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:21:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3616: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:21:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:21:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2287222542' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:21:39 np0005558241 nova_compute[248510]: 2025-12-13 09:21:39.423 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:21:39 np0005558241 nova_compute[248510]: 2025-12-13 09:21:39.642 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:21:39 np0005558241 nova_compute[248510]: 2025-12-13 09:21:39.643 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3459MB free_disk=59.9873828003183GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:21:39 np0005558241 nova_compute[248510]: 2025-12-13 09:21:39.644 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:39 np0005558241 nova_compute[248510]: 2025-12-13 09:21:39.644 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:39 np0005558241 nova_compute[248510]: 2025-12-13 09:21:39.655 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:39 np0005558241 nova_compute[248510]: 2025-12-13 09:21:39.975 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:21:39 np0005558241 nova_compute[248510]: 2025-12-13 09:21:39.976 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:21:39 np0005558241 nova_compute[248510]: 2025-12-13 09:21:39.999 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:21:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:21:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:21:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:21:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:21:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:21:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:21:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:21:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1614901868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:21:40 np0005558241 nova_compute[248510]: 2025-12-13 09:21:40.588 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:21:40 np0005558241 nova_compute[248510]: 2025-12-13 09:21:40.595 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:21:40 np0005558241 nova_compute[248510]: 2025-12-13 09:21:40.618 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:21:40 np0005558241 nova_compute[248510]: 2025-12-13 09:21:40.638 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:21:40 np0005558241 nova_compute[248510]: 2025-12-13 09:21:40.639 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3617: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:21:41 np0005558241 nova_compute[248510]: 2025-12-13 09:21:41.639 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:21:41 np0005558241 nova_compute[248510]: 2025-12-13 09:21:41.640 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:21:41 np0005558241 nova_compute[248510]: 2025-12-13 09:21:41.640 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:21:42 np0005558241 podman[406598]: 2025-12-13 09:21:42.978151418 +0000 UTC m=+0.062236532 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:21:42 np0005558241 podman[406597]: 2025-12-13 09:21:42.986028464 +0000 UTC m=+0.070938938 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_id=multipathd)
Dec 13 04:21:43 np0005558241 podman[406596]: 2025-12-13 09:21:43.011908539 +0000 UTC m=+0.096818963 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 04:21:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3618: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:21:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:43 np0005558241 nova_compute[248510]: 2025-12-13 09:21:43.367 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:43 np0005558241 nova_compute[248510]: 2025-12-13 09:21:43.477 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "1437fc03-8f31-440f-8928-2fe388a22bbe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:43 np0005558241 nova_compute[248510]: 2025-12-13 09:21:43.477 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:43 np0005558241 nova_compute[248510]: 2025-12-13 09:21:43.498 248514 DEBUG nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:21:43 np0005558241 nova_compute[248510]: 2025-12-13 09:21:43.591 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:43 np0005558241 nova_compute[248510]: 2025-12-13 09:21:43.592 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:43 np0005558241 nova_compute[248510]: 2025-12-13 09:21:43.600 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:21:43 np0005558241 nova_compute[248510]: 2025-12-13 09:21:43.601 248514 INFO nova.compute.claims [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:21:43 np0005558241 nova_compute[248510]: 2025-12-13 09:21:43.713 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:21:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:21:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3820773070' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.310 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.321 248514 DEBUG nova.compute.provider_tree [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.475 248514 DEBUG nova.scheduler.client.report [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.514 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.516 248514 DEBUG nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.583 248514 DEBUG nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.584 248514 DEBUG nova.network.neutron [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.615 248514 INFO nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.637 248514 DEBUG nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.655 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.736 248514 DEBUG nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.737 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.738 248514 INFO nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Creating image(s)#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.773 248514 DEBUG nova.storage.rbd_utils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 1437fc03-8f31-440f-8928-2fe388a22bbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.813 248514 DEBUG nova.storage.rbd_utils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 1437fc03-8f31-440f-8928-2fe388a22bbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.855 248514 DEBUG nova.storage.rbd_utils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 1437fc03-8f31-440f-8928-2fe388a22bbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.862 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.921 248514 DEBUG nova.policy [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.960 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.961 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.962 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.963 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.993 248514 DEBUG nova.storage.rbd_utils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 1437fc03-8f31-440f-8928-2fe388a22bbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:21:44 np0005558241 nova_compute[248510]: 2025-12-13 09:21:44.999 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1437fc03-8f31-440f-8928-2fe388a22bbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:21:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3619: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:21:45 np0005558241 nova_compute[248510]: 2025-12-13 09:21:45.337 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 1437fc03-8f31-440f-8928-2fe388a22bbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:21:45 np0005558241 nova_compute[248510]: 2025-12-13 09:21:45.411 248514 DEBUG nova.storage.rbd_utils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 1437fc03-8f31-440f-8928-2fe388a22bbe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:21:45 np0005558241 nova_compute[248510]: 2025-12-13 09:21:45.497 248514 DEBUG nova.objects.instance [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 1437fc03-8f31-440f-8928-2fe388a22bbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:21:45 np0005558241 nova_compute[248510]: 2025-12-13 09:21:45.517 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:21:45 np0005558241 nova_compute[248510]: 2025-12-13 09:21:45.517 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Ensure instance console log exists: /var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:21:45 np0005558241 nova_compute[248510]: 2025-12-13 09:21:45.518 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:45 np0005558241 nova_compute[248510]: 2025-12-13 09:21:45.518 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:45 np0005558241 nova_compute[248510]: 2025-12-13 09:21:45.519 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3620: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:21:47 np0005558241 nova_compute[248510]: 2025-12-13 09:21:47.522 248514 DEBUG nova.network.neutron [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Successfully created port: 2777ba55-72e3-4334-96ae-48077ed6a8d5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:21:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:48 np0005558241 nova_compute[248510]: 2025-12-13 09:21:48.365 248514 DEBUG nova.network.neutron [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Successfully updated port: 2777ba55-72e3-4334-96ae-48077ed6a8d5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:21:48 np0005558241 nova_compute[248510]: 2025-12-13 09:21:48.370 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:48 np0005558241 nova_compute[248510]: 2025-12-13 09:21:48.384 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:21:48 np0005558241 nova_compute[248510]: 2025-12-13 09:21:48.384 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:21:48 np0005558241 nova_compute[248510]: 2025-12-13 09:21:48.384 248514 DEBUG nova.network.neutron [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:21:48 np0005558241 nova_compute[248510]: 2025-12-13 09:21:48.506 248514 DEBUG nova.compute.manager [req-1de5e8cd-1c3c-4448-bde7-cdb4cdbcfedb req-e180fa14-f4c1-4f49-830d-e0816979ebc4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received event network-changed-2777ba55-72e3-4334-96ae-48077ed6a8d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:21:48 np0005558241 nova_compute[248510]: 2025-12-13 09:21:48.506 248514 DEBUG nova.compute.manager [req-1de5e8cd-1c3c-4448-bde7-cdb4cdbcfedb req-e180fa14-f4c1-4f49-830d-e0816979ebc4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Refreshing instance network info cache due to event network-changed-2777ba55-72e3-4334-96ae-48077ed6a8d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:21:48 np0005558241 nova_compute[248510]: 2025-12-13 09:21:48.506 248514 DEBUG oslo_concurrency.lockutils [req-1de5e8cd-1c3c-4448-bde7-cdb4cdbcfedb req-e180fa14-f4c1-4f49-830d-e0816979ebc4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:21:48 np0005558241 nova_compute[248510]: 2025-12-13 09:21:48.597 248514 DEBUG nova.network.neutron [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:21:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3621: 321 pgs: 321 active+clean; 82 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 1.7 MiB/s wr, 2 op/s
Dec 13 04:21:49 np0005558241 nova_compute[248510]: 2025-12-13 09:21:49.657 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3622: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.548 248514 DEBUG nova.network.neutron [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updating instance_info_cache with network_info: [{"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.573 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.573 248514 DEBUG nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Instance network_info: |[{"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.574 248514 DEBUG oslo_concurrency.lockutils [req-1de5e8cd-1c3c-4448-bde7-cdb4cdbcfedb req-e180fa14-f4c1-4f49-830d-e0816979ebc4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.574 248514 DEBUG nova.network.neutron [req-1de5e8cd-1c3c-4448-bde7-cdb4cdbcfedb req-e180fa14-f4c1-4f49-830d-e0816979ebc4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Refreshing network info cache for port 2777ba55-72e3-4334-96ae-48077ed6a8d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.577 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Start _get_guest_xml network_info=[{"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.582 248514 WARNING nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.590 248514 DEBUG nova.virt.libvirt.host [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.590 248514 DEBUG nova.virt.libvirt.host [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.599 248514 DEBUG nova.virt.libvirt.host [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.600 248514 DEBUG nova.virt.libvirt.host [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.601 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.601 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.602 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.602 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.602 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.602 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.603 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.603 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.603 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.604 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.604 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.604 248514 DEBUG nova.virt.hardware [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:21:51 np0005558241 nova_compute[248510]: 2025-12-13 09:21:51.608 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:21:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:21:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2117008094' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.188 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.308 248514 DEBUG nova.storage.rbd_utils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 1437fc03-8f31-440f-8928-2fe388a22bbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.314 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:21:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:21:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4110810512' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.893 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.895 248514 DEBUG nova.virt.libvirt.vif [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:21:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1012009154',display_name='tempest-TestGettingAddress-server-1012009154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1012009154',id=151,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPtRBq+IYkmbvmIJh/QIOTOUFFBD8bcvC1qbRw2zjdn4nFUb754ilWe6cE2YmyI9CIp/UsSOGtzaJ1FH6v8ozWcGolUeM9n52RqDlU5UB2iwMcylMROMdWHIoAhEPM94oA==',key_name='tempest-TestGettingAddress-343990024',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-4xgqh80f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:21:44Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=1437fc03-8f31-440f-8928-2fe388a22bbe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.896 248514 DEBUG nova.network.os_vif_util [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.898 248514 DEBUG nova.network.os_vif_util [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:51,bridge_name='br-int',has_traffic_filtering=True,id=2777ba55-72e3-4334-96ae-48077ed6a8d5,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2777ba55-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.900 248514 DEBUG nova.objects.instance [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1437fc03-8f31-440f-8928-2fe388a22bbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.924 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:21:52 np0005558241 nova_compute[248510]:  <uuid>1437fc03-8f31-440f-8928-2fe388a22bbe</uuid>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:  <name>instance-00000097</name>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestGettingAddress-server-1012009154</nova:name>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:21:51</nova:creationTime>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:21:52 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:        <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:        <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:        <nova:port uuid="2777ba55-72e3-4334-96ae-48077ed6a8d5">
Dec 13 04:21:52 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe4e:1751" ipVersion="6"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <entry name="serial">1437fc03-8f31-440f-8928-2fe388a22bbe</entry>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <entry name="uuid">1437fc03-8f31-440f-8928-2fe388a22bbe</entry>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/1437fc03-8f31-440f-8928-2fe388a22bbe_disk">
Dec 13 04:21:52 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:21:52 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/1437fc03-8f31-440f-8928-2fe388a22bbe_disk.config">
Dec 13 04:21:52 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:21:52 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:4e:17:51"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <target dev="tap2777ba55-72"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe/console.log" append="off"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:21:52 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:21:52 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:21:52 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:21:52 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.926 248514 DEBUG nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Preparing to wait for external event network-vif-plugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.926 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.926 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.927 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.928 248514 DEBUG nova.virt.libvirt.vif [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:21:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1012009154',display_name='tempest-TestGettingAddress-server-1012009154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1012009154',id=151,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPtRBq+IYkmbvmIJh/QIOTOUFFBD8bcvC1qbRw2zjdn4nFUb754ilWe6cE2YmyI9CIp/UsSOGtzaJ1FH6v8ozWcGolUeM9n52RqDlU5UB2iwMcylMROMdWHIoAhEPM94oA==',key_name='tempest-TestGettingAddress-343990024',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-4xgqh80f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:21:44Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=1437fc03-8f31-440f-8928-2fe388a22bbe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.928 248514 DEBUG nova.network.os_vif_util [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.929 248514 DEBUG nova.network.os_vif_util [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:51,bridge_name='br-int',has_traffic_filtering=True,id=2777ba55-72e3-4334-96ae-48077ed6a8d5,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2777ba55-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.929 248514 DEBUG os_vif [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:51,bridge_name='br-int',has_traffic_filtering=True,id=2777ba55-72e3-4334-96ae-48077ed6a8d5,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2777ba55-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.930 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.930 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.931 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.935 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.935 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2777ba55-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.936 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2777ba55-72, col_values=(('external_ids', {'iface-id': '2777ba55-72e3-4334-96ae-48077ed6a8d5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4e:17:51', 'vm-uuid': '1437fc03-8f31-440f-8928-2fe388a22bbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.938 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:52 np0005558241 NetworkManager[50376]: <info>  [1765617712.9397] manager: (tap2777ba55-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/673)
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.940 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.951 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:52 np0005558241 nova_compute[248510]: 2025-12-13 09:21:52.953 248514 INFO os_vif [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:17:51,bridge_name='br-int',has_traffic_filtering=True,id=2777ba55-72e3-4334-96ae-48077ed6a8d5,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2777ba55-72')#033[00m
Dec 13 04:21:53 np0005558241 nova_compute[248510]: 2025-12-13 09:21:53.013 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:21:53 np0005558241 nova_compute[248510]: 2025-12-13 09:21:53.013 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:21:53 np0005558241 nova_compute[248510]: 2025-12-13 09:21:53.014 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:4e:17:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:21:53 np0005558241 nova_compute[248510]: 2025-12-13 09:21:53.014 248514 INFO nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Using config drive#033[00m
Dec 13 04:21:53 np0005558241 nova_compute[248510]: 2025-12-13 09:21:53.040 248514 DEBUG nova.storage.rbd_utils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 1437fc03-8f31-440f-8928-2fe388a22bbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:21:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3623: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:21:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:53 np0005558241 nova_compute[248510]: 2025-12-13 09:21:53.352 248514 DEBUG nova.network.neutron [req-1de5e8cd-1c3c-4448-bde7-cdb4cdbcfedb req-e180fa14-f4c1-4f49-830d-e0816979ebc4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updated VIF entry in instance network info cache for port 2777ba55-72e3-4334-96ae-48077ed6a8d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:21:53 np0005558241 nova_compute[248510]: 2025-12-13 09:21:53.353 248514 DEBUG nova.network.neutron [req-1de5e8cd-1c3c-4448-bde7-cdb4cdbcfedb req-e180fa14-f4c1-4f49-830d-e0816979ebc4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updating instance_info_cache with network_info: [{"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:21:53 np0005558241 nova_compute[248510]: 2025-12-13 09:21:53.370 248514 DEBUG oslo_concurrency.lockutils [req-1de5e8cd-1c3c-4448-bde7-cdb4cdbcfedb req-e180fa14-f4c1-4f49-830d-e0816979ebc4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:21:53 np0005558241 nova_compute[248510]: 2025-12-13 09:21:53.457 248514 INFO nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Creating config drive at /var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe/disk.config#033[00m
Dec 13 04:21:53 np0005558241 nova_compute[248510]: 2025-12-13 09:21:53.463 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn78rasmo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:21:53 np0005558241 nova_compute[248510]: 2025-12-13 09:21:53.633 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn78rasmo" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:21:53 np0005558241 nova_compute[248510]: 2025-12-13 09:21:53.665 248514 DEBUG nova.storage.rbd_utils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 1437fc03-8f31-440f-8928-2fe388a22bbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:21:53 np0005558241 nova_compute[248510]: 2025-12-13 09:21:53.670 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe/disk.config 1437fc03-8f31-440f-8928-2fe388a22bbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:21:53 np0005558241 nova_compute[248510]: 2025-12-13 09:21:53.888 248514 DEBUG oslo_concurrency.processutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe/disk.config 1437fc03-8f31-440f-8928-2fe388a22bbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:21:53 np0005558241 nova_compute[248510]: 2025-12-13 09:21:53.889 248514 INFO nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Deleting local config drive /var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe/disk.config because it was imported into RBD.#033[00m
Dec 13 04:21:53 np0005558241 kernel: tap2777ba55-72: entered promiscuous mode
Dec 13 04:21:53 np0005558241 NetworkManager[50376]: <info>  [1765617713.9424] manager: (tap2777ba55-72): new Tun device (/org/freedesktop/NetworkManager/Devices/674)
Dec 13 04:21:53 np0005558241 ovn_controller[148476]: 2025-12-13T09:21:53Z|01624|binding|INFO|Claiming lport 2777ba55-72e3-4334-96ae-48077ed6a8d5 for this chassis.
Dec 13 04:21:53 np0005558241 ovn_controller[148476]: 2025-12-13T09:21:53Z|01625|binding|INFO|2777ba55-72e3-4334-96ae-48077ed6a8d5: Claiming fa:16:3e:4e:17:51 10.100.0.14 2001:db8::f816:3eff:fe4e:1751
Dec 13 04:21:53 np0005558241 nova_compute[248510]: 2025-12-13 09:21:53.944 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:53 np0005558241 nova_compute[248510]: 2025-12-13 09:21:53.947 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:53 np0005558241 nova_compute[248510]: 2025-12-13 09:21:53.951 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:53.960 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:17:51 10.100.0.14 2001:db8::f816:3eff:fe4e:1751'], port_security=['fa:16:3e:4e:17:51 10.100.0.14 2001:db8::f816:3eff:fe4e:1751'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8::f816:3eff:fe4e:1751/64', 'neutron:device_id': '1437fc03-8f31-440f-8928-2fe388a22bbe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd4a89b2-79c1-411e-b3a6-9349b772360d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=681ece70-9463-4a56-a445-6b0b679ff248, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2777ba55-72e3-4334-96ae-48077ed6a8d5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:21:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:53.962 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2777ba55-72e3-4334-96ae-48077ed6a8d5 in datapath e4b9bd04-ada7-4867-9918-3cd5d21d273d bound to our chassis#033[00m
Dec 13 04:21:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:53.965 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4b9bd04-ada7-4867-9918-3cd5d21d273d#033[00m
Dec 13 04:21:53 np0005558241 systemd-udevd[406985]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:21:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:53.977 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[54f406cd-54da-4eff-bf06-e26cfdc96aac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:53.978 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape4b9bd04-a1 in ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec 13 04:21:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:53.981 260774 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape4b9bd04-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec 13 04:21:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:53.981 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0bfe14-25e4-4472-bd1d-8c611dbb1e55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:53 np0005558241 systemd-machined[210538]: New machine qemu-182-instance-00000097.
Dec 13 04:21:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:53.982 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[20f75c83-dc99-4074-9763-6bc02fd2b13c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:53 np0005558241 NetworkManager[50376]: <info>  [1765617713.9906] device (tap2777ba55-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:21:53 np0005558241 NetworkManager[50376]: <info>  [1765617713.9927] device (tap2777ba55-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:21:53 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:53.997 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[280c73b7-3a11-473a-8d98-a81cbec967a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.007 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:54 np0005558241 ovn_controller[148476]: 2025-12-13T09:21:54Z|01626|binding|INFO|Setting lport 2777ba55-72e3-4334-96ae-48077ed6a8d5 ovn-installed in OVS
Dec 13 04:21:54 np0005558241 ovn_controller[148476]: 2025-12-13T09:21:54Z|01627|binding|INFO|Setting lport 2777ba55-72e3-4334-96ae-48077ed6a8d5 up in Southbound
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.012 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:54 np0005558241 systemd[1]: Started Virtual Machine qemu-182-instance-00000097.
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.021 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[5c703d88-4a9d-4fcb-876b-0b5ad12f123b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.051 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[60579ae0-21e4-4c53-8c34-67d8af1299ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.057 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a88783f0-6e10-4a61-b1cb-e7c03118bfb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:54 np0005558241 NetworkManager[50376]: <info>  [1765617714.0592] manager: (tape4b9bd04-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/675)
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.113 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[39ec972a-38a5-4b22-9aec-4475784e6458]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.117 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[424e1ad5-ede1-43c3-b04b-3ff46941e8af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:54 np0005558241 NetworkManager[50376]: <info>  [1765617714.1534] device (tape4b9bd04-a0): carrier: link connected
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.165 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[95409b57-36e5-4a95-9a10-a80166792222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.192 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f4473a-a7e3-4cae-b813-f0032b56a193]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4b9bd04-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:66:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 465], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1029136, 'reachable_time': 27101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407019, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.214 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c539c68f-bd25-4875-aeea-89fe66f295b4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2c:6681'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1029136, 'tstamp': 1029136}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407020, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.236 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9ea036-3df0-41f9-83c4-dab42c4921bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4b9bd04-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:66:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 465], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1029136, 'reachable_time': 27101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 407021, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.250 248514 DEBUG nova.compute.manager [req-5b6a93c4-441e-41d1-9bc3-1c79d7254c13 req-571fe7b4-3623-48b3-8948-5690f3ad2bb0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received event network-vif-plugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.251 248514 DEBUG oslo_concurrency.lockutils [req-5b6a93c4-441e-41d1-9bc3-1c79d7254c13 req-571fe7b4-3623-48b3-8948-5690f3ad2bb0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.252 248514 DEBUG oslo_concurrency.lockutils [req-5b6a93c4-441e-41d1-9bc3-1c79d7254c13 req-571fe7b4-3623-48b3-8948-5690f3ad2bb0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.253 248514 DEBUG oslo_concurrency.lockutils [req-5b6a93c4-441e-41d1-9bc3-1c79d7254c13 req-571fe7b4-3623-48b3-8948-5690f3ad2bb0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.253 248514 DEBUG nova.compute.manager [req-5b6a93c4-441e-41d1-9bc3-1c79d7254c13 req-571fe7b4-3623-48b3-8948-5690f3ad2bb0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Processing event network-vif-plugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.279 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[d6474da8-f241-4fbe-ba1c-3f6fb7ad84bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.376 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce920d5-4ab4-40de-9657-edcc54e2f496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.378 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4b9bd04-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.379 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.380 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4b9bd04-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.382 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:54 np0005558241 NetworkManager[50376]: <info>  [1765617714.3836] manager: (tape4b9bd04-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/676)
Dec 13 04:21:54 np0005558241 kernel: tape4b9bd04-a0: entered promiscuous mode
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.386 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4b9bd04-a0, col_values=(('external_ids', {'iface-id': 'f3bff06d-212e-4afb-99d4-011e9e890967'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:21:54 np0005558241 ovn_controller[148476]: 2025-12-13T09:21:54Z|01628|binding|INFO|Releasing lport f3bff06d-212e-4afb-99d4-011e9e890967 from this chassis (sb_readonly=0)
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.390 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.416 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.423 158419 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e4b9bd04-ada7-4867-9918-3cd5d21d273d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e4b9bd04-ada7-4867-9918-3cd5d21d273d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.425 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[43b78ceb-d789-4d2d-8c3a-89fb1f4ab482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.426 158419 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: global
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    log         /dev/log local0 debug
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    log-tag     haproxy-metadata-proxy-e4b9bd04-ada7-4867-9918-3cd5d21d273d
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    user        root
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    group       root
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    maxconn     1024
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    pidfile     /var/lib/neutron/external/pids/e4b9bd04-ada7-4867-9918-3cd5d21d273d.pid.haproxy
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    daemon
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: defaults
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    log global
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    mode http
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    option httplog
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    option dontlognull
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    option http-server-close
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    option forwardfor
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    retries                 3
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    timeout http-request    30s
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    timeout connect         30s
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    timeout client          32s
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    timeout server          32s
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    timeout http-keep-alive 30s
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: listen listener
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    bind 169.254.169.254:80
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]:    http-request add-header X-OVN-Network-ID e4b9bd04-ada7-4867-9918-3cd5d21d273d
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 13 04:21:54 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:54.427 158419 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'env', 'PROCESS_TAG=haproxy-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e4b9bd04-ada7-4867-9918-3cd5d21d273d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.652 248514 DEBUG nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.655 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617714.650619, 1437fc03-8f31-440f-8928-2fe388a22bbe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.656 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] VM Started (Lifecycle Event)#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.710 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.713 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.715 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.719 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.722 248514 INFO nova.virt.libvirt.driver [-] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Instance spawned successfully.#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.723 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.747 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.748 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617714.6507325, 1437fc03-8f31-440f-8928-2fe388a22bbe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.748 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.755 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.756 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.756 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.756 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.757 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.757 248514 DEBUG nova.virt.libvirt.driver [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.789 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.793 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617714.7114062, 1437fc03-8f31-440f-8928-2fe388a22bbe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.793 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.817 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.822 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.832 248514 INFO nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Took 10.10 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.832 248514 DEBUG nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.844 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:21:54 np0005558241 podman[407092]: 2025-12-13 09:21:54.881419402 +0000 UTC m=+0.065703108 container create 5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.895 248514 INFO nova.compute.manager [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Took 11.33 seconds to build instance.#033[00m
Dec 13 04:21:54 np0005558241 nova_compute[248510]: 2025-12-13 09:21:54.916 248514 DEBUG oslo_concurrency.lockutils [None req-e7d3b71a-edd7-41e3-bb27-f7030bf17f83 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:54 np0005558241 systemd[1]: Started libpod-conmon-5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8.scope.
Dec 13 04:21:54 np0005558241 podman[407092]: 2025-12-13 09:21:54.851811824 +0000 UTC m=+0.036095550 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2
Dec 13 04:21:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:21:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd99150eceb1e5ac1a3f0f2d3168cf6b269a5c94e89b47366e7abdf1ad0507ca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:54 np0005558241 podman[407092]: 2025-12-13 09:21:54.983395913 +0000 UTC m=+0.167679639 container init 5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:21:54 np0005558241 podman[407092]: 2025-12-13 09:21:54.990383807 +0000 UTC m=+0.174667533 container start 5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:21:55 np0005558241 neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d[407107]: [NOTICE]   (407112) : New worker (407114) forked
Dec 13 04:21:55 np0005558241 neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d[407107]: [NOTICE]   (407112) : Loading success.
Dec 13 04:21:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3624: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:21:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:55.458 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:55.459 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:21:55.460 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:56 np0005558241 nova_compute[248510]: 2025-12-13 09:21:56.341 248514 DEBUG nova.compute.manager [req-59c4d379-8c91-4a75-a206-23ec5f9a7655 req-aa80cafd-3507-4180-9dfb-7b102109a913 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received event network-vif-plugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:21:56 np0005558241 nova_compute[248510]: 2025-12-13 09:21:56.342 248514 DEBUG oslo_concurrency.lockutils [req-59c4d379-8c91-4a75-a206-23ec5f9a7655 req-aa80cafd-3507-4180-9dfb-7b102109a913 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:21:56 np0005558241 nova_compute[248510]: 2025-12-13 09:21:56.342 248514 DEBUG oslo_concurrency.lockutils [req-59c4d379-8c91-4a75-a206-23ec5f9a7655 req-aa80cafd-3507-4180-9dfb-7b102109a913 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:21:56 np0005558241 nova_compute[248510]: 2025-12-13 09:21:56.343 248514 DEBUG oslo_concurrency.lockutils [req-59c4d379-8c91-4a75-a206-23ec5f9a7655 req-aa80cafd-3507-4180-9dfb-7b102109a913 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:21:56 np0005558241 nova_compute[248510]: 2025-12-13 09:21:56.343 248514 DEBUG nova.compute.manager [req-59c4d379-8c91-4a75-a206-23ec5f9a7655 req-aa80cafd-3507-4180-9dfb-7b102109a913 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] No waiting events found dispatching network-vif-plugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:21:56 np0005558241 nova_compute[248510]: 2025-12-13 09:21:56.343 248514 WARNING nova.compute.manager [req-59c4d379-8c91-4a75-a206-23ec5f9a7655 req-aa80cafd-3507-4180-9dfb-7b102109a913 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received unexpected event network-vif-plugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:21:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3625: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:21:57 np0005558241 nova_compute[248510]: 2025-12-13 09:21:57.939 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:58 np0005558241 ovn_controller[148476]: 2025-12-13T09:21:58Z|01629|binding|INFO|Releasing lport f3bff06d-212e-4afb-99d4-011e9e890967 from this chassis (sb_readonly=0)
Dec 13 04:21:58 np0005558241 nova_compute[248510]: 2025-12-13 09:21:58.973 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:58 np0005558241 NetworkManager[50376]: <info>  [1765617718.9793] manager: (patch-br-int-to-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/677)
Dec 13 04:21:58 np0005558241 NetworkManager[50376]: <info>  [1765617718.9807] manager: (patch-provnet-b2be3bf6-b660-4cae-a6b4-9b8cda96f0cb-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/678)
Dec 13 04:21:59 np0005558241 ovn_controller[148476]: 2025-12-13T09:21:59Z|01630|binding|INFO|Releasing lport f3bff06d-212e-4afb-99d4-011e9e890967 from this chassis (sb_readonly=0)
Dec 13 04:21:59 np0005558241 nova_compute[248510]: 2025-12-13 09:21:59.019 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:59 np0005558241 nova_compute[248510]: 2025-12-13 09:21:59.030 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:21:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3626: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Dec 13 04:21:59 np0005558241 nova_compute[248510]: 2025-12-13 09:21:59.445 248514 DEBUG nova.compute.manager [req-48b363df-4b42-4775-b788-6011ab7a1097 req-dd2ef04b-9b28-49a0-8fa6-7760ad6dded8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received event network-changed-2777ba55-72e3-4334-96ae-48077ed6a8d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:21:59 np0005558241 nova_compute[248510]: 2025-12-13 09:21:59.446 248514 DEBUG nova.compute.manager [req-48b363df-4b42-4775-b788-6011ab7a1097 req-dd2ef04b-9b28-49a0-8fa6-7760ad6dded8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Refreshing instance network info cache due to event network-changed-2777ba55-72e3-4334-96ae-48077ed6a8d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:21:59 np0005558241 nova_compute[248510]: 2025-12-13 09:21:59.446 248514 DEBUG oslo_concurrency.lockutils [req-48b363df-4b42-4775-b788-6011ab7a1097 req-dd2ef04b-9b28-49a0-8fa6-7760ad6dded8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:21:59 np0005558241 nova_compute[248510]: 2025-12-13 09:21:59.447 248514 DEBUG oslo_concurrency.lockutils [req-48b363df-4b42-4775-b788-6011ab7a1097 req-dd2ef04b-9b28-49a0-8fa6-7760ad6dded8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:21:59 np0005558241 nova_compute[248510]: 2025-12-13 09:21:59.447 248514 DEBUG nova.network.neutron [req-48b363df-4b42-4775-b788-6011ab7a1097 req-dd2ef04b-9b28-49a0-8fa6-7760ad6dded8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Refreshing network info cache for port 2777ba55-72e3-4334-96ae-48077ed6a8d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:21:59 np0005558241 nova_compute[248510]: 2025-12-13 09:21:59.715 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:01 np0005558241 nova_compute[248510]: 2025-12-13 09:22:01.056 248514 DEBUG nova.network.neutron [req-48b363df-4b42-4775-b788-6011ab7a1097 req-dd2ef04b-9b28-49a0-8fa6-7760ad6dded8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updated VIF entry in instance network info cache for port 2777ba55-72e3-4334-96ae-48077ed6a8d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:22:01 np0005558241 nova_compute[248510]: 2025-12-13 09:22:01.057 248514 DEBUG nova.network.neutron [req-48b363df-4b42-4775-b788-6011ab7a1097 req-dd2ef04b-9b28-49a0-8fa6-7760ad6dded8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updating instance_info_cache with network_info: [{"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:22:01 np0005558241 nova_compute[248510]: 2025-12-13 09:22:01.091 248514 DEBUG oslo_concurrency.lockutils [req-48b363df-4b42-4775-b788-6011ab7a1097 req-dd2ef04b-9b28-49a0-8fa6-7760ad6dded8 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:22:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3627: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 131 KiB/s wr, 98 op/s
Dec 13 04:22:02 np0005558241 nova_compute[248510]: 2025-12-13 09:22:02.942 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3628: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:22:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:22:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 46K writes, 183K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 46K writes, 16K syncs, 2.76 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2584 writes, 10K keys, 2584 commit groups, 1.0 writes per commit group, ingest: 11.17 MB, 0.02 MB/s#012Interval WAL: 2585 writes, 1016 syncs, 2.54 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:22:04 np0005558241 nova_compute[248510]: 2025-12-13 09:22:04.716 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3629: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:22:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3630: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:22:07 np0005558241 nova_compute[248510]: 2025-12-13 09:22:07.945 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3631: 321 pgs: 321 active+clean; 92 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 828 KiB/s wr, 84 op/s
Dec 13 04:22:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:22:09
Dec 13 04:22:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:22:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:22:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'volumes']
Dec 13 04:22:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:22:09 np0005558241 nova_compute[248510]: 2025-12-13 09:22:09.718 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:10 np0005558241 ovn_controller[148476]: 2025-12-13T09:22:10Z|00213|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4e:17:51 10.100.0.14
Dec 13 04:22:10 np0005558241 ovn_controller[148476]: 2025-12-13T09:22:10Z|00214|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4e:17:51 10.100.0.14
Dec 13 04:22:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:22:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:22:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:22:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:22:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:22:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:22:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:22:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6601.8 total, 600.0 interval#012Cumulative writes: 47K writes, 185K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 47K writes, 17K syncs, 2.73 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2279 writes, 9430 keys, 2279 commit groups, 1.0 writes per commit group, ingest: 11.32 MB, 0.02 MB/s#012Interval WAL: 2279 writes, 877 syncs, 2.60 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:22:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3632: 321 pgs: 321 active+clean; 96 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 134 KiB/s rd, 1.1 MiB/s wr, 28 op/s
Dec 13 04:22:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:22:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:22:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:22:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:22:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:22:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:22:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:22:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:22:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:22:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:22:12 np0005558241 nova_compute[248510]: 2025-12-13 09:22:12.948 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3633: 321 pgs: 321 active+clean; 105 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 185 KiB/s rd, 1.4 MiB/s wr, 39 op/s
Dec 13 04:22:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:13 np0005558241 podman[407127]: 2025-12-13 09:22:13.979280263 +0000 UTC m=+0.060307564 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:22:14 np0005558241 podman[407125]: 2025-12-13 09:22:14.001416775 +0000 UTC m=+0.090517487 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Dec 13 04:22:14 np0005558241 podman[407126]: 2025-12-13 09:22:14.009767143 +0000 UTC m=+0.093827919 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:22:14 np0005558241 nova_compute[248510]: 2025-12-13 09:22:14.720 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3634: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 04:22:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:22:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/751250222' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:22:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:22:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/751250222' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:22:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3635: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 04:22:17 np0005558241 nova_compute[248510]: 2025-12-13 09:22:17.950 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3636: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 13 04:22:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:22:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.2 total, 600.0 interval#012Cumulative writes: 37K writes, 150K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s#012Cumulative WAL: 37K writes, 13K syncs, 2.82 writes per sync, written: 0.15 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1838 writes, 7373 keys, 1838 commit groups, 1.0 writes per commit group, ingest: 8.70 MB, 0.01 MB/s#012Interval WAL: 1838 writes, 718 syncs, 2.56 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:22:19 np0005558241 nova_compute[248510]: 2025-12-13 09:22:19.762 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3637: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 302 KiB/s rd, 1.3 MiB/s wr, 54 op/s
Dec 13 04:22:21 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Dec 13 04:22:21 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:21.497792) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:22:21 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Dec 13 04:22:21 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617741497898, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1457, "num_deletes": 251, "total_data_size": 2381224, "memory_usage": 2418400, "flush_reason": "Manual Compaction"}
Dec 13 04:22:21 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Dec 13 04:22:21 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617741516856, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 2327064, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71414, "largest_seqno": 72870, "table_properties": {"data_size": 2320303, "index_size": 3895, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14214, "raw_average_key_size": 19, "raw_value_size": 2306742, "raw_average_value_size": 3235, "num_data_blocks": 174, "num_entries": 713, "num_filter_entries": 713, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765617593, "oldest_key_time": 1765617593, "file_creation_time": 1765617741, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:22:21 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 19104 microseconds, and 6969 cpu microseconds.
Dec 13 04:22:21 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:22:21 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:21.516913) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 2327064 bytes OK
Dec 13 04:22:21 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:21.516936) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Dec 13 04:22:21 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:21.520054) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Dec 13 04:22:21 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:21.520088) EVENT_LOG_v1 {"time_micros": 1765617741520084, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:22:21 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:21.520110) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:22:21 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2374806, prev total WAL file size 2374806, number of live WAL files 2.
Dec 13 04:22:21 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:22:21 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:21.520840) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Dec 13 04:22:21 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:22:21 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(2272KB)], [170(10MB)]
Dec 13 04:22:21 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617741520920, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 13596517, "oldest_snapshot_seqno": -1}
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007738823178706089 of space, bias 1.0, pg target 0.23216469536118267 quantized to 32 (current 32)
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696852326136964 of space, bias 1.0, pg target 0.20090556978410892 quantized to 32 (current 32)
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.723939957605983e-07 of space, bias 4.0, pg target 0.0006868727949127179 quantized to 16 (current 32)
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:22:22 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 9076 keys, 11754775 bytes, temperature: kUnknown
Dec 13 04:22:22 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617742022704, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 11754775, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11696528, "index_size": 34454, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 238739, "raw_average_key_size": 26, "raw_value_size": 11537172, "raw_average_value_size": 1271, "num_data_blocks": 1329, "num_entries": 9076, "num_filter_entries": 9076, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765617741, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:22:22 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:22:22 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:22.023029) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 11754775 bytes
Dec 13 04:22:22 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:22.103522) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 27.1 rd, 23.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 10.7 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(10.9) write-amplify(5.1) OK, records in: 9590, records dropped: 514 output_compression: NoCompression
Dec 13 04:22:22 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:22.103575) EVENT_LOG_v1 {"time_micros": 1765617742103553, "job": 106, "event": "compaction_finished", "compaction_time_micros": 501873, "compaction_time_cpu_micros": 35191, "output_level": 6, "num_output_files": 1, "total_output_size": 11754775, "num_input_records": 9590, "num_output_records": 9076, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:22:22 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:22:22 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617742104730, "job": 106, "event": "table_file_deletion", "file_number": 172}
Dec 13 04:22:22 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:22:22 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617742109133, "job": 106, "event": "table_file_deletion", "file_number": 170}
Dec 13 04:22:22 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:21.520732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:22:22 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:22.109244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:22:22 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:22.109252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:22:22 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:22.109256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:22:22 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:22.109259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:22:22 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:22:22.109262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:22:22 np0005558241 nova_compute[248510]: 2025-12-13 09:22:22.953 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3638: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 277 KiB/s rd, 1.0 MiB/s wr, 44 op/s
Dec 13 04:22:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:23 np0005558241 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 04:22:24 np0005558241 nova_compute[248510]: 2025-12-13 09:22:24.763 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3639: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 136 KiB/s rd, 804 KiB/s wr, 26 op/s
Dec 13 04:22:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3640: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Dec 13 04:22:27 np0005558241 nova_compute[248510]: 2025-12-13 09:22:27.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:22:27 np0005558241 nova_compute[248510]: 2025-12-13 09:22:27.956 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:28 np0005558241 ovn_controller[148476]: 2025-12-13T09:22:28Z|01631|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Dec 13 04:22:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3641: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Dec 13 04:22:29 np0005558241 nova_compute[248510]: 2025-12-13 09:22:29.764 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:30 np0005558241 nova_compute[248510]: 2025-12-13 09:22:30.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:22:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3642: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Dec 13 04:22:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:22:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:22:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:22:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:22:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:22:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:22:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:22:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:22:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:22:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:22:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:22:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:22:32 np0005558241 podman[407330]: 2025-12-13 09:22:32.866204679 +0000 UTC m=+0.045963096 container create 734d9603363ec4754cc00311a9f9f5708cc1e914a3807850dc784527d21d09d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_feistel, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:22:32 np0005558241 systemd[1]: Started libpod-conmon-734d9603363ec4754cc00311a9f9f5708cc1e914a3807850dc784527d21d09d3.scope.
Dec 13 04:22:32 np0005558241 podman[407330]: 2025-12-13 09:22:32.847189935 +0000 UTC m=+0.026948332 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:22:32 np0005558241 nova_compute[248510]: 2025-12-13 09:22:32.959 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:32 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:22:32 np0005558241 podman[407330]: 2025-12-13 09:22:32.99268013 +0000 UTC m=+0.172438527 container init 734d9603363ec4754cc00311a9f9f5708cc1e914a3807850dc784527d21d09d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_feistel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 04:22:33 np0005558241 podman[407330]: 2025-12-13 09:22:33.011447928 +0000 UTC m=+0.191206305 container start 734d9603363ec4754cc00311a9f9f5708cc1e914a3807850dc784527d21d09d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 04:22:33 np0005558241 podman[407330]: 2025-12-13 09:22:33.016958775 +0000 UTC m=+0.196717152 container attach 734d9603363ec4754cc00311a9f9f5708cc1e914a3807850dc784527d21d09d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_feistel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 04:22:33 np0005558241 determined_feistel[407346]: 167 167
Dec 13 04:22:33 np0005558241 systemd[1]: libpod-734d9603363ec4754cc00311a9f9f5708cc1e914a3807850dc784527d21d09d3.scope: Deactivated successfully.
Dec 13 04:22:33 np0005558241 podman[407330]: 2025-12-13 09:22:33.020006291 +0000 UTC m=+0.199764688 container died 734d9603363ec4754cc00311a9f9f5708cc1e914a3807850dc784527d21d09d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:22:33 np0005558241 systemd[1]: var-lib-containers-storage-overlay-41bc9a6af237de3cc9692ee091f973a65e1f9ad6087bf9e57a552d7a5fac2faa-merged.mount: Deactivated successfully.
Dec 13 04:22:33 np0005558241 podman[407330]: 2025-12-13 09:22:33.081469163 +0000 UTC m=+0.261227580 container remove 734d9603363ec4754cc00311a9f9f5708cc1e914a3807850dc784527d21d09d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_feistel, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:22:33 np0005558241 systemd[1]: libpod-conmon-734d9603363ec4754cc00311a9f9f5708cc1e914a3807850dc784527d21d09d3.scope: Deactivated successfully.
Dec 13 04:22:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3643: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 13 04:22:33 np0005558241 podman[407369]: 2025-12-13 09:22:33.253600852 +0000 UTC m=+0.033956947 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:22:33 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:22:33 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:22:33 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:22:33 np0005558241 podman[407369]: 2025-12-13 09:22:33.365631913 +0000 UTC m=+0.145987948 container create a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_raman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:22:33 np0005558241 systemd[1]: Started libpod-conmon-a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad.scope.
Dec 13 04:22:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:33 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:22:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28455591d463506f4fed3fb7a30303a5c3f3ca5ab3ede909450d32674d884d73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28455591d463506f4fed3fb7a30303a5c3f3ca5ab3ede909450d32674d884d73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28455591d463506f4fed3fb7a30303a5c3f3ca5ab3ede909450d32674d884d73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28455591d463506f4fed3fb7a30303a5c3f3ca5ab3ede909450d32674d884d73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:33 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28455591d463506f4fed3fb7a30303a5c3f3ca5ab3ede909450d32674d884d73/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:33 np0005558241 podman[407369]: 2025-12-13 09:22:33.483585923 +0000 UTC m=+0.263941988 container init a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_raman, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:22:33 np0005558241 podman[407369]: 2025-12-13 09:22:33.495001797 +0000 UTC m=+0.275357832 container start a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 04:22:33 np0005558241 podman[407369]: 2025-12-13 09:22:33.500254758 +0000 UTC m=+0.280610833 container attach a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:22:34 np0005558241 amazing_raman[407386]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:22:34 np0005558241 amazing_raman[407386]: --> All data devices are unavailable
Dec 13 04:22:34 np0005558241 systemd[1]: libpod-a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad.scope: Deactivated successfully.
Dec 13 04:22:34 np0005558241 conmon[407386]: conmon a84ab27a0f80ce2329a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad.scope/container/memory.events
Dec 13 04:22:34 np0005558241 podman[407369]: 2025-12-13 09:22:34.137790484 +0000 UTC m=+0.918146519 container died a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_raman, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 04:22:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-28455591d463506f4fed3fb7a30303a5c3f3ca5ab3ede909450d32674d884d73-merged.mount: Deactivated successfully.
Dec 13 04:22:34 np0005558241 podman[407369]: 2025-12-13 09:22:34.189847751 +0000 UTC m=+0.970203836 container remove a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_raman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:22:34 np0005558241 systemd[1]: libpod-conmon-a84ab27a0f80ce2329a600b027515b53c5a6e04c6d066cbd79aaaaa710e4dbad.scope: Deactivated successfully.
Dec 13 04:22:34 np0005558241 podman[407478]: 2025-12-13 09:22:34.717868988 +0000 UTC m=+0.047284530 container create 3121eb4a442df9b282c1ec4054265df348f690b8a99f6517b05228e7c9476e4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Dec 13 04:22:34 np0005558241 systemd[1]: Started libpod-conmon-3121eb4a442df9b282c1ec4054265df348f690b8a99f6517b05228e7c9476e4b.scope.
Dec 13 04:22:34 np0005558241 nova_compute[248510]: 2025-12-13 09:22:34.767 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:34 np0005558241 podman[407478]: 2025-12-13 09:22:34.698839363 +0000 UTC m=+0.028254915 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:22:34 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:22:34 np0005558241 podman[407478]: 2025-12-13 09:22:34.823537021 +0000 UTC m=+0.152952613 container init 3121eb4a442df9b282c1ec4054265df348f690b8a99f6517b05228e7c9476e4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 04:22:34 np0005558241 podman[407478]: 2025-12-13 09:22:34.831789436 +0000 UTC m=+0.161204948 container start 3121eb4a442df9b282c1ec4054265df348f690b8a99f6517b05228e7c9476e4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_khayyam, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 04:22:34 np0005558241 podman[407478]: 2025-12-13 09:22:34.836295648 +0000 UTC m=+0.165711220 container attach 3121eb4a442df9b282c1ec4054265df348f690b8a99f6517b05228e7c9476e4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:22:34 np0005558241 sleepy_khayyam[407494]: 167 167
Dec 13 04:22:34 np0005558241 systemd[1]: libpod-3121eb4a442df9b282c1ec4054265df348f690b8a99f6517b05228e7c9476e4b.scope: Deactivated successfully.
Dec 13 04:22:34 np0005558241 podman[407478]: 2025-12-13 09:22:34.840142294 +0000 UTC m=+0.169557836 container died 3121eb4a442df9b282c1ec4054265df348f690b8a99f6517b05228e7c9476e4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_khayyam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 04:22:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8fc349560b66e4d3dd28cd7f2106bd4109e1bd5051afdfa816233a4440af3c60-merged.mount: Deactivated successfully.
Dec 13 04:22:34 np0005558241 podman[407478]: 2025-12-13 09:22:34.883553216 +0000 UTC m=+0.212968768 container remove 3121eb4a442df9b282c1ec4054265df348f690b8a99f6517b05228e7c9476e4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:22:34 np0005558241 systemd[1]: libpod-conmon-3121eb4a442df9b282c1ec4054265df348f690b8a99f6517b05228e7c9476e4b.scope: Deactivated successfully.
Dec 13 04:22:35 np0005558241 podman[407519]: 2025-12-13 09:22:35.106692675 +0000 UTC m=+0.060788326 container create 1613d7c34f22fb0a938f10e8b8ea23d6e8e8c7ea8b3954d9293752bdf709751d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:22:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3644: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 13 04:22:35 np0005558241 systemd[1]: Started libpod-conmon-1613d7c34f22fb0a938f10e8b8ea23d6e8e8c7ea8b3954d9293752bdf709751d.scope.
Dec 13 04:22:35 np0005558241 podman[407519]: 2025-12-13 09:22:35.075874827 +0000 UTC m=+0.029970548 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:22:35 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:22:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f8d0bdb212c71d90ba9bc79050e18b276566ba9bbb7057d21b7ce540fb9f36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f8d0bdb212c71d90ba9bc79050e18b276566ba9bbb7057d21b7ce540fb9f36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f8d0bdb212c71d90ba9bc79050e18b276566ba9bbb7057d21b7ce540fb9f36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:35 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f8d0bdb212c71d90ba9bc79050e18b276566ba9bbb7057d21b7ce540fb9f36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:35 np0005558241 podman[407519]: 2025-12-13 09:22:35.203778294 +0000 UTC m=+0.157873935 container init 1613d7c34f22fb0a938f10e8b8ea23d6e8e8c7ea8b3954d9293752bdf709751d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:22:35 np0005558241 podman[407519]: 2025-12-13 09:22:35.21687168 +0000 UTC m=+0.170967321 container start 1613d7c34f22fb0a938f10e8b8ea23d6e8e8c7ea8b3954d9293752bdf709751d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:22:35 np0005558241 podman[407519]: 2025-12-13 09:22:35.220729596 +0000 UTC m=+0.174825247 container attach 1613d7c34f22fb0a938f10e8b8ea23d6e8e8c7ea8b3954d9293752bdf709751d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_babbage, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:22:35 np0005558241 happy_babbage[407537]: {
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:    "0": [
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:        {
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "devices": [
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "/dev/loop3"
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            ],
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "lv_name": "ceph_lv0",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "lv_size": "21470642176",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "name": "ceph_lv0",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "tags": {
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.cluster_name": "ceph",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.crush_device_class": "",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.encrypted": "0",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.objectstore": "bluestore",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.osd_id": "0",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.type": "block",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.vdo": "0",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.with_tpm": "0"
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            },
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "type": "block",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "vg_name": "ceph_vg0"
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:        }
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:    ],
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:    "1": [
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:        {
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "devices": [
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "/dev/loop4"
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            ],
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "lv_name": "ceph_lv1",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "lv_size": "21470642176",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "name": "ceph_lv1",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "tags": {
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.cluster_name": "ceph",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.crush_device_class": "",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.encrypted": "0",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.objectstore": "bluestore",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.osd_id": "1",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.type": "block",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.vdo": "0",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.with_tpm": "0"
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            },
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "type": "block",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "vg_name": "ceph_vg1"
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:        }
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:    ],
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:    "2": [
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:        {
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "devices": [
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "/dev/loop5"
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            ],
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "lv_name": "ceph_lv2",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "lv_size": "21470642176",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "name": "ceph_lv2",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "tags": {
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.cluster_name": "ceph",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.crush_device_class": "",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.encrypted": "0",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.objectstore": "bluestore",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.osd_id": "2",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.type": "block",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.vdo": "0",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:                "ceph.with_tpm": "0"
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            },
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "type": "block",
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:            "vg_name": "ceph_vg2"
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:        }
Dec 13 04:22:35 np0005558241 happy_babbage[407537]:    ]
Dec 13 04:22:35 np0005558241 happy_babbage[407537]: }
Dec 13 04:22:35 np0005558241 systemd[1]: libpod-1613d7c34f22fb0a938f10e8b8ea23d6e8e8c7ea8b3954d9293752bdf709751d.scope: Deactivated successfully.
Dec 13 04:22:35 np0005558241 podman[407519]: 2025-12-13 09:22:35.562781889 +0000 UTC m=+0.516877580 container died 1613d7c34f22fb0a938f10e8b8ea23d6e8e8c7ea8b3954d9293752bdf709751d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Dec 13 04:22:35 np0005558241 systemd[1]: var-lib-containers-storage-overlay-26f8d0bdb212c71d90ba9bc79050e18b276566ba9bbb7057d21b7ce540fb9f36-merged.mount: Deactivated successfully.
Dec 13 04:22:35 np0005558241 podman[407519]: 2025-12-13 09:22:35.611553715 +0000 UTC m=+0.565649356 container remove 1613d7c34f22fb0a938f10e8b8ea23d6e8e8c7ea8b3954d9293752bdf709751d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:22:35 np0005558241 systemd[1]: libpod-conmon-1613d7c34f22fb0a938f10e8b8ea23d6e8e8c7ea8b3954d9293752bdf709751d.scope: Deactivated successfully.
Dec 13 04:22:35 np0005558241 nova_compute[248510]: 2025-12-13 09:22:35.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:22:35 np0005558241 nova_compute[248510]: 2025-12-13 09:22:35.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:22:35 np0005558241 nova_compute[248510]: 2025-12-13 09:22:35.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:22:36 np0005558241 podman[407621]: 2025-12-13 09:22:36.154974995 +0000 UTC m=+0.058139489 container create e586740ba9372327034d7288754df198a3a239aed4e7698891cc0552d6d2c966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_diffie, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:22:36 np0005558241 systemd[1]: Started libpod-conmon-e586740ba9372327034d7288754df198a3a239aed4e7698891cc0552d6d2c966.scope.
Dec 13 04:22:36 np0005558241 podman[407621]: 2025-12-13 09:22:36.125115421 +0000 UTC m=+0.028279955 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:22:36 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:22:36 np0005558241 podman[407621]: 2025-12-13 09:22:36.251917641 +0000 UTC m=+0.155082095 container init e586740ba9372327034d7288754df198a3a239aed4e7698891cc0552d6d2c966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_diffie, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:22:36 np0005558241 podman[407621]: 2025-12-13 09:22:36.259472629 +0000 UTC m=+0.162637083 container start e586740ba9372327034d7288754df198a3a239aed4e7698891cc0552d6d2c966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 04:22:36 np0005558241 podman[407621]: 2025-12-13 09:22:36.263014607 +0000 UTC m=+0.166179081 container attach e586740ba9372327034d7288754df198a3a239aed4e7698891cc0552d6d2c966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_diffie, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 04:22:36 np0005558241 unruffled_diffie[407637]: 167 167
Dec 13 04:22:36 np0005558241 systemd[1]: libpod-e586740ba9372327034d7288754df198a3a239aed4e7698891cc0552d6d2c966.scope: Deactivated successfully.
Dec 13 04:22:36 np0005558241 podman[407621]: 2025-12-13 09:22:36.267172461 +0000 UTC m=+0.170336925 container died e586740ba9372327034d7288754df198a3a239aed4e7698891cc0552d6d2c966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_diffie, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:22:36 np0005558241 systemd[1]: var-lib-containers-storage-overlay-726c4799ddc752383ab68694b782979c5a50b1a68a9254de88c03a40375051fa-merged.mount: Deactivated successfully.
Dec 13 04:22:36 np0005558241 podman[407621]: 2025-12-13 09:22:36.311364242 +0000 UTC m=+0.214528706 container remove e586740ba9372327034d7288754df198a3a239aed4e7698891cc0552d6d2c966 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_diffie, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:22:36 np0005558241 systemd[1]: libpod-conmon-e586740ba9372327034d7288754df198a3a239aed4e7698891cc0552d6d2c966.scope: Deactivated successfully.
Dec 13 04:22:36 np0005558241 nova_compute[248510]: 2025-12-13 09:22:36.461 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:22:36 np0005558241 nova_compute[248510]: 2025-12-13 09:22:36.461 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquired lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:22:36 np0005558241 nova_compute[248510]: 2025-12-13 09:22:36.461 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 13 04:22:36 np0005558241 nova_compute[248510]: 2025-12-13 09:22:36.461 248514 DEBUG nova.objects.instance [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1437fc03-8f31-440f-8928-2fe388a22bbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:22:36 np0005558241 podman[407660]: 2025-12-13 09:22:36.559502605 +0000 UTC m=+0.064058397 container create ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meninsky, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 04:22:36 np0005558241 systemd[1]: Started libpod-conmon-ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92.scope.
Dec 13 04:22:36 np0005558241 podman[407660]: 2025-12-13 09:22:36.533962909 +0000 UTC m=+0.038518751 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:22:36 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:22:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30c184ed9048e2ea0f8192a1cb5b7231307a053db43e3bfbbd86f085f994137a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30c184ed9048e2ea0f8192a1cb5b7231307a053db43e3bfbbd86f085f994137a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30c184ed9048e2ea0f8192a1cb5b7231307a053db43e3bfbbd86f085f994137a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30c184ed9048e2ea0f8192a1cb5b7231307a053db43e3bfbbd86f085f994137a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:36 np0005558241 podman[407660]: 2025-12-13 09:22:36.653285512 +0000 UTC m=+0.157841344 container init ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meninsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Dec 13 04:22:36 np0005558241 podman[407660]: 2025-12-13 09:22:36.665671961 +0000 UTC m=+0.170227753 container start ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meninsky, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:22:36 np0005558241 podman[407660]: 2025-12-13 09:22:36.669445725 +0000 UTC m=+0.174001517 container attach ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:22:36 np0005558241 nova_compute[248510]: 2025-12-13 09:22:36.898 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:22:36 np0005558241 nova_compute[248510]: 2025-12-13 09:22:36.900 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:22:36 np0005558241 nova_compute[248510]: 2025-12-13 09:22:36.919 248514 DEBUG nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:22:37 np0005558241 nova_compute[248510]: 2025-12-13 09:22:37.007 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:22:37 np0005558241 nova_compute[248510]: 2025-12-13 09:22:37.008 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:22:37 np0005558241 nova_compute[248510]: 2025-12-13 09:22:37.017 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:22:37 np0005558241 nova_compute[248510]: 2025-12-13 09:22:37.017 248514 INFO nova.compute.claims [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:22:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3645: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 13 04:22:37 np0005558241 nova_compute[248510]: 2025-12-13 09:22:37.122 248514 DEBUG nova.scheduler.client.report [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 13 04:22:37 np0005558241 nova_compute[248510]: 2025-12-13 09:22:37.154 248514 DEBUG nova.scheduler.client.report [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 13 04:22:37 np0005558241 nova_compute[248510]: 2025-12-13 09:22:37.154 248514 DEBUG nova.compute.provider_tree [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 04:22:37 np0005558241 nova_compute[248510]: 2025-12-13 09:22:37.177 248514 DEBUG nova.scheduler.client.report [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 13 04:22:37 np0005558241 nova_compute[248510]: 2025-12-13 09:22:37.210 248514 DEBUG nova.scheduler.client.report [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 13 04:22:37 np0005558241 nova_compute[248510]: 2025-12-13 09:22:37.266 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:22:37 np0005558241 lvm[407757]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:22:37 np0005558241 lvm[407757]: VG ceph_vg1 finished
Dec 13 04:22:37 np0005558241 lvm[407760]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:22:37 np0005558241 lvm[407760]: VG ceph_vg2 finished
Dec 13 04:22:37 np0005558241 lvm[407756]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:22:37 np0005558241 lvm[407756]: VG ceph_vg0 finished
Dec 13 04:22:37 np0005558241 gracious_meninsky[407677]: {}
Dec 13 04:22:37 np0005558241 systemd[1]: libpod-ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92.scope: Deactivated successfully.
Dec 13 04:22:37 np0005558241 systemd[1]: libpod-ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92.scope: Consumed 1.433s CPU time.
Dec 13 04:22:37 np0005558241 podman[407660]: 2025-12-13 09:22:37.565216645 +0000 UTC m=+1.069772437 container died ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:22:37 np0005558241 systemd[1]: var-lib-containers-storage-overlay-30c184ed9048e2ea0f8192a1cb5b7231307a053db43e3bfbbd86f085f994137a-merged.mount: Deactivated successfully.
Dec 13 04:22:37 np0005558241 podman[407660]: 2025-12-13 09:22:37.608416701 +0000 UTC m=+1.112972483 container remove ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_meninsky, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 04:22:37 np0005558241 systemd[1]: libpod-conmon-ac2de08c09388f4dab27131525bb228a32d01618a9b70f49a26025a2552bdb92.scope: Deactivated successfully.
Dec 13 04:22:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:22:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:22:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:22:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:22:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:22:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4232831819' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:22:37 np0005558241 nova_compute[248510]: 2025-12-13 09:22:37.866 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:22:37 np0005558241 nova_compute[248510]: 2025-12-13 09:22:37.876 248514 DEBUG nova.compute.provider_tree [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:22:37 np0005558241 nova_compute[248510]: 2025-12-13 09:22:37.900 248514 DEBUG nova.scheduler.client.report [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:22:37 np0005558241 nova_compute[248510]: 2025-12-13 09:22:37.933 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:22:37 np0005558241 nova_compute[248510]: 2025-12-13 09:22:37.934 248514 DEBUG nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:22:37 np0005558241 nova_compute[248510]: 2025-12-13 09:22:37.966 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:37 np0005558241 nova_compute[248510]: 2025-12-13 09:22:37.988 248514 DEBUG nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:22:37 np0005558241 nova_compute[248510]: 2025-12-13 09:22:37.989 248514 DEBUG nova.network.neutron [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.011 248514 INFO nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.030 248514 DEBUG nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.126 248514 DEBUG nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.127 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.128 248514 INFO nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Creating image(s)#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.150 248514 DEBUG nova.storage.rbd_utils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.176 248514 DEBUG nova.storage.rbd_utils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.203 248514 DEBUG nova.storage.rbd_utils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.208 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.310 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.312 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.312 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.313 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.338 248514 DEBUG nova.storage.rbd_utils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.344 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:22:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:22:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:22:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.533 248514 DEBUG nova.network.neutron [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updating instance_info_cache with network_info: [{"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.539 248514 DEBUG nova.policy [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '147b09b7bd134651ba75bd6b135b270d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dce54a0218054f5380cc72fa81c26b43', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.556 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Releasing lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.557 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.558 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.712 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.798 248514 DEBUG nova.storage.rbd_utils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] resizing rbd image 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.911 248514 DEBUG nova.objects.instance [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'migration_context' on Instance uuid 7f12ff6b-a944-402c-9e58-ee4338d7eca4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.929 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.930 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Ensure instance console log exists: /var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.931 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.932 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:22:38 np0005558241 nova_compute[248510]: 2025-12-13 09:22:38.932 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:22:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3646: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 170 B/s wr, 2 op/s
Dec 13 04:22:39 np0005558241 nova_compute[248510]: 2025-12-13 09:22:39.769 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:39 np0005558241 nova_compute[248510]: 2025-12-13 09:22:39.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:22:39 np0005558241 nova_compute[248510]: 2025-12-13 09:22:39.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:22:39 np0005558241 nova_compute[248510]: 2025-12-13 09:22:39.816 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:22:39 np0005558241 nova_compute[248510]: 2025-12-13 09:22:39.817 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:22:39 np0005558241 nova_compute[248510]: 2025-12-13 09:22:39.817 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:22:39 np0005558241 nova_compute[248510]: 2025-12-13 09:22:39.817 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:22:39 np0005558241 nova_compute[248510]: 2025-12-13 09:22:39.818 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:22:40 np0005558241 nova_compute[248510]: 2025-12-13 09:22:40.019 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:40.020 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:22:40 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:40.021 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:22:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:22:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:22:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:22:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:22:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:22:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:22:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:22:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2624575856' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:22:40 np0005558241 nova_compute[248510]: 2025-12-13 09:22:40.455 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:22:40 np0005558241 nova_compute[248510]: 2025-12-13 09:22:40.535 248514 DEBUG nova.network.neutron [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Successfully created port: f4743e20-bc79-4ca1-87e3-c94183b9a23f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:22:40 np0005558241 nova_compute[248510]: 2025-12-13 09:22:40.554 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:22:40 np0005558241 nova_compute[248510]: 2025-12-13 09:22:40.554 248514 DEBUG nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 13 04:22:40 np0005558241 nova_compute[248510]: 2025-12-13 09:22:40.709 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:22:40 np0005558241 nova_compute[248510]: 2025-12-13 09:22:40.710 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3187MB free_disk=59.94185793865472GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:22:40 np0005558241 nova_compute[248510]: 2025-12-13 09:22:40.710 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:22:40 np0005558241 nova_compute[248510]: 2025-12-13 09:22:40.710 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:22:40 np0005558241 nova_compute[248510]: 2025-12-13 09:22:40.785 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 1437fc03-8f31-440f-8928-2fe388a22bbe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:22:40 np0005558241 nova_compute[248510]: 2025-12-13 09:22:40.786 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Instance 7f12ff6b-a944-402c-9e58-ee4338d7eca4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 13 04:22:40 np0005558241 nova_compute[248510]: 2025-12-13 09:22:40.786 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:22:40 np0005558241 nova_compute[248510]: 2025-12-13 09:22:40.786 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:22:40 np0005558241 nova_compute[248510]: 2025-12-13 09:22:40.846 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:22:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3647: 321 pgs: 321 active+clean; 130 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 503 KiB/s wr, 16 op/s
Dec 13 04:22:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:22:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/196965802' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:22:41 np0005558241 nova_compute[248510]: 2025-12-13 09:22:41.451 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:22:41 np0005558241 nova_compute[248510]: 2025-12-13 09:22:41.460 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:22:41 np0005558241 nova_compute[248510]: 2025-12-13 09:22:41.485 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:22:41 np0005558241 nova_compute[248510]: 2025-12-13 09:22:41.514 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:22:41 np0005558241 nova_compute[248510]: 2025-12-13 09:22:41.515 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:22:41 np0005558241 nova_compute[248510]: 2025-12-13 09:22:41.841 248514 DEBUG nova.network.neutron [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Successfully updated port: f4743e20-bc79-4ca1-87e3-c94183b9a23f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:22:41 np0005558241 nova_compute[248510]: 2025-12-13 09:22:41.869 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:22:41 np0005558241 nova_compute[248510]: 2025-12-13 09:22:41.870 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquired lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:22:41 np0005558241 nova_compute[248510]: 2025-12-13 09:22:41.870 248514 DEBUG nova.network.neutron [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:22:41 np0005558241 nova_compute[248510]: 2025-12-13 09:22:41.918 248514 DEBUG nova.compute.manager [req-886ee067-a544-402b-a43f-936ad03d3120 req-db72bc5c-c39e-4c73-be57-26cb201b921f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received event network-changed-f4743e20-bc79-4ca1-87e3-c94183b9a23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:22:41 np0005558241 nova_compute[248510]: 2025-12-13 09:22:41.918 248514 DEBUG nova.compute.manager [req-886ee067-a544-402b-a43f-936ad03d3120 req-db72bc5c-c39e-4c73-be57-26cb201b921f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Refreshing instance network info cache due to event network-changed-f4743e20-bc79-4ca1-87e3-c94183b9a23f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:22:41 np0005558241 nova_compute[248510]: 2025-12-13 09:22:41.919 248514 DEBUG oslo_concurrency.lockutils [req-886ee067-a544-402b-a43f-936ad03d3120 req-db72bc5c-c39e-4c73-be57-26cb201b921f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:22:42 np0005558241 nova_compute[248510]: 2025-12-13 09:22:42.025 248514 DEBUG nova.network.neutron [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:22:42 np0005558241 nova_compute[248510]: 2025-12-13 09:22:42.516 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:22:42 np0005558241 nova_compute[248510]: 2025-12-13 09:22:42.516 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:22:42 np0005558241 nova_compute[248510]: 2025-12-13 09:22:42.517 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:22:42 np0005558241 nova_compute[248510]: 2025-12-13 09:22:42.970 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3648: 321 pgs: 321 active+clean; 155 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.2 MiB/s wr, 18 op/s
Dec 13 04:22:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.472 248514 DEBUG nova.network.neutron [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Updating instance_info_cache with network_info: [{"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.494 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Releasing lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.495 248514 DEBUG nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Instance network_info: |[{"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.496 248514 DEBUG oslo_concurrency.lockutils [req-886ee067-a544-402b-a43f-936ad03d3120 req-db72bc5c-c39e-4c73-be57-26cb201b921f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.497 248514 DEBUG nova.network.neutron [req-886ee067-a544-402b-a43f-936ad03d3120 req-db72bc5c-c39e-4c73-be57-26cb201b921f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Refreshing network info cache for port f4743e20-bc79-4ca1-87e3-c94183b9a23f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.505 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Start _get_guest_xml network_info=[{"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.513 248514 WARNING nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.527 248514 DEBUG nova.virt.libvirt.host [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.528 248514 DEBUG nova.virt.libvirt.host [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.531 248514 DEBUG nova.virt.libvirt.host [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.532 248514 DEBUG nova.virt.libvirt.host [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.533 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.533 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.534 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.534 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.534 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.535 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.535 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.535 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.536 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.536 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.536 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.537 248514 DEBUG nova.virt.hardware [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:22:43 np0005558241 nova_compute[248510]: 2025-12-13 09:22:43.541 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:22:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:22:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1575609293' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.134 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.164 248514 DEBUG nova.storage.rbd_utils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.171 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:22:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:22:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3230898467' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.771 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.783 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.785 248514 DEBUG nova.virt.libvirt.vif [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:22:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1374339452',display_name='tempest-TestGettingAddress-server-1374339452',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1374339452',id=152,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPtRBq+IYkmbvmIJh/QIOTOUFFBD8bcvC1qbRw2zjdn4nFUb754ilWe6cE2YmyI9CIp/UsSOGtzaJ1FH6v8ozWcGolUeM9n52RqDlU5UB2iwMcylMROMdWHIoAhEPM94oA==',key_name='tempest-TestGettingAddress-343990024',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-330yrx20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:22:38Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=7f12ff6b-a944-402c-9e58-ee4338d7eca4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.785 248514 DEBUG nova.network.os_vif_util [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.786 248514 DEBUG nova.network.os_vif_util [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:f9,bridge_name='br-int',has_traffic_filtering=True,id=f4743e20-bc79-4ca1-87e3-c94183b9a23f,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4743e20-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.787 248514 DEBUG nova.objects.instance [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7f12ff6b-a944-402c-9e58-ee4338d7eca4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.806 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:22:44 np0005558241 nova_compute[248510]:  <uuid>7f12ff6b-a944-402c-9e58-ee4338d7eca4</uuid>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:  <name>instance-00000098</name>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestGettingAddress-server-1374339452</nova:name>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:22:43</nova:creationTime>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:22:44 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:        <nova:user uuid="147b09b7bd134651ba75bd6b135b270d">tempest-TestGettingAddress-76263460-project-member</nova:user>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:        <nova:project uuid="dce54a0218054f5380cc72fa81c26b43">tempest-TestGettingAddress-76263460</nova:project>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:        <nova:port uuid="f4743e20-bc79-4ca1-87e3-c94183b9a23f">
Dec 13 04:22:44 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:febd:a8f9" ipVersion="6"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <entry name="serial">7f12ff6b-a944-402c-9e58-ee4338d7eca4</entry>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <entry name="uuid">7f12ff6b-a944-402c-9e58-ee4338d7eca4</entry>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk">
Dec 13 04:22:44 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:22:44 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk.config">
Dec 13 04:22:44 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:22:44 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:bd:a8:f9"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <target dev="tapf4743e20-bc"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4/console.log" append="off"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:22:44 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:22:44 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:22:44 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:22:44 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.807 248514 DEBUG nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Preparing to wait for external event network-vif-plugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.808 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.808 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.808 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.809 248514 DEBUG nova.virt.libvirt.vif [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:22:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1374339452',display_name='tempest-TestGettingAddress-server-1374339452',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1374339452',id=152,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPtRBq+IYkmbvmIJh/QIOTOUFFBD8bcvC1qbRw2zjdn4nFUb754ilWe6cE2YmyI9CIp/UsSOGtzaJ1FH6v8ozWcGolUeM9n52RqDlU5UB2iwMcylMROMdWHIoAhEPM94oA==',key_name='tempest-TestGettingAddress-343990024',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-330yrx20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:22:38Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=7f12ff6b-a944-402c-9e58-ee4338d7eca4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.810 248514 DEBUG nova.network.os_vif_util [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.811 248514 DEBUG nova.network.os_vif_util [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:f9,bridge_name='br-int',has_traffic_filtering=True,id=f4743e20-bc79-4ca1-87e3-c94183b9a23f,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4743e20-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.811 248514 DEBUG os_vif [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:f9,bridge_name='br-int',has_traffic_filtering=True,id=f4743e20-bc79-4ca1-87e3-c94183b9a23f,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4743e20-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.812 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.813 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.813 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.818 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.818 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4743e20-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.819 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf4743e20-bc, col_values=(('external_ids', {'iface-id': 'f4743e20-bc79-4ca1-87e3-c94183b9a23f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:a8:f9', 'vm-uuid': '7f12ff6b-a944-402c-9e58-ee4338d7eca4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:22:44 np0005558241 NetworkManager[50376]: <info>  [1765617764.8228] manager: (tapf4743e20-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/679)
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.822 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.826 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.832 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.833 248514 INFO os_vif [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:f9,bridge_name='br-int',has_traffic_filtering=True,id=f4743e20-bc79-4ca1-87e3-c94183b9a23f,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4743e20-bc')#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.891 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.892 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.893 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] No VIF found with MAC fa:16:3e:bd:a8:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.894 248514 INFO nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Using config drive#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.928 248514 DEBUG nova.storage.rbd_utils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.935 248514 DEBUG nova.network.neutron [req-886ee067-a544-402b-a43f-936ad03d3120 req-db72bc5c-c39e-4c73-be57-26cb201b921f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Updated VIF entry in instance network info cache for port f4743e20-bc79-4ca1-87e3-c94183b9a23f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.936 248514 DEBUG nova.network.neutron [req-886ee067-a544-402b-a43f-936ad03d3120 req-db72bc5c-c39e-4c73-be57-26cb201b921f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Updating instance_info_cache with network_info: [{"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:22:44 np0005558241 nova_compute[248510]: 2025-12-13 09:22:44.967 248514 DEBUG oslo_concurrency.lockutils [req-886ee067-a544-402b-a43f-936ad03d3120 req-db72bc5c-c39e-4c73-be57-26cb201b921f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:22:44 np0005558241 podman[408095]: 2025-12-13 09:22:44.982003029 +0000 UTC m=+0.060847187 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 04:22:44 np0005558241 podman[408094]: 2025-12-13 09:22:44.994129851 +0000 UTC m=+0.072249461 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:22:45 np0005558241 podman[408093]: 2025-12-13 09:22:45.036940888 +0000 UTC m=+0.115014157 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:22:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3649: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:22:45 np0005558241 nova_compute[248510]: 2025-12-13 09:22:45.265 248514 INFO nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Creating config drive at /var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4/disk.config#033[00m
Dec 13 04:22:45 np0005558241 nova_compute[248510]: 2025-12-13 09:22:45.274 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi1c9uy30 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:22:45 np0005558241 nova_compute[248510]: 2025-12-13 09:22:45.431 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi1c9uy30" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:22:45 np0005558241 nova_compute[248510]: 2025-12-13 09:22:45.482 248514 DEBUG nova.storage.rbd_utils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] rbd image 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:22:45 np0005558241 nova_compute[248510]: 2025-12-13 09:22:45.488 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4/disk.config 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:22:45 np0005558241 nova_compute[248510]: 2025-12-13 09:22:45.671 248514 DEBUG oslo_concurrency.processutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4/disk.config 7f12ff6b-a944-402c-9e58-ee4338d7eca4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:22:45 np0005558241 nova_compute[248510]: 2025-12-13 09:22:45.672 248514 INFO nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Deleting local config drive /var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4/disk.config because it was imported into RBD.#033[00m
Dec 13 04:22:45 np0005558241 kernel: tapf4743e20-bc: entered promiscuous mode
Dec 13 04:22:45 np0005558241 NetworkManager[50376]: <info>  [1765617765.7511] manager: (tapf4743e20-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/680)
Dec 13 04:22:45 np0005558241 ovn_controller[148476]: 2025-12-13T09:22:45Z|01632|binding|INFO|Claiming lport f4743e20-bc79-4ca1-87e3-c94183b9a23f for this chassis.
Dec 13 04:22:45 np0005558241 ovn_controller[148476]: 2025-12-13T09:22:45Z|01633|binding|INFO|f4743e20-bc79-4ca1-87e3-c94183b9a23f: Claiming fa:16:3e:bd:a8:f9 10.100.0.13 2001:db8::f816:3eff:febd:a8f9
Dec 13 04:22:45 np0005558241 nova_compute[248510]: 2025-12-13 09:22:45.753 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.761 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:a8:f9 10.100.0.13 2001:db8::f816:3eff:febd:a8f9'], port_security=['fa:16:3e:bd:a8:f9 10.100.0.13 2001:db8::f816:3eff:febd:a8f9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:febd:a8f9/64', 'neutron:device_id': '7f12ff6b-a944-402c-9e58-ee4338d7eca4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd4a89b2-79c1-411e-b3a6-9349b772360d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=681ece70-9463-4a56-a445-6b0b679ff248, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=f4743e20-bc79-4ca1-87e3-c94183b9a23f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:22:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.763 158419 INFO neutron.agent.ovn.metadata.agent [-] Port f4743e20-bc79-4ca1-87e3-c94183b9a23f in datapath e4b9bd04-ada7-4867-9918-3cd5d21d273d bound to our chassis#033[00m
Dec 13 04:22:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.765 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4b9bd04-ada7-4867-9918-3cd5d21d273d#033[00m
Dec 13 04:22:45 np0005558241 ovn_controller[148476]: 2025-12-13T09:22:45Z|01634|binding|INFO|Setting lport f4743e20-bc79-4ca1-87e3-c94183b9a23f ovn-installed in OVS
Dec 13 04:22:45 np0005558241 ovn_controller[148476]: 2025-12-13T09:22:45Z|01635|binding|INFO|Setting lport f4743e20-bc79-4ca1-87e3-c94183b9a23f up in Southbound
Dec 13 04:22:45 np0005558241 nova_compute[248510]: 2025-12-13 09:22:45.770 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.784 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[92c020c0-c109-4373-93cd-d6badd13f111]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:22:45 np0005558241 systemd-udevd[408225]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:22:45 np0005558241 systemd-machined[210538]: New machine qemu-183-instance-00000098.
Dec 13 04:22:45 np0005558241 NetworkManager[50376]: <info>  [1765617765.8104] device (tapf4743e20-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:22:45 np0005558241 NetworkManager[50376]: <info>  [1765617765.8112] device (tapf4743e20-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:22:45 np0005558241 systemd[1]: Started Virtual Machine qemu-183-instance-00000098.
Dec 13 04:22:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.824 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[6d819270-bd26-4bb7-adb3-23f58edb66aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:22:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.827 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[3f7284f0-32a8-45ad-b6f3-93ed239c5dcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:22:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.859 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[2a405386-1fce-47df-aa12-89bd0372735c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:22:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.880 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[4698f4a6-142e-4900-adb2-552cfcd58746]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4b9bd04-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:66:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 465], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1029136, 'reachable_time': 27101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408238, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:22:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.899 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[b5781a5f-5cee-4947-89f7-4b87ec4af849]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape4b9bd04-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1029153, 'tstamp': 1029153}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408240, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape4b9bd04-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1029158, 'tstamp': 1029158}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408240, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:22:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.901 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4b9bd04-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:22:45 np0005558241 nova_compute[248510]: 2025-12-13 09:22:45.903 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:45 np0005558241 nova_compute[248510]: 2025-12-13 09:22:45.906 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.906 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4b9bd04-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:22:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.907 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:22:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.907 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4b9bd04-a0, col_values=(('external_ids', {'iface-id': 'f3bff06d-212e-4afb-99d4-011e9e890967'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:22:45 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:45.908 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.218 248514 DEBUG nova.compute.manager [req-1f6ec00a-5c75-4257-b26a-efa61839da92 req-a099a50a-1e38-4dc7-a81b-8ac9858f4585 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received event network-vif-plugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.219 248514 DEBUG oslo_concurrency.lockutils [req-1f6ec00a-5c75-4257-b26a-efa61839da92 req-a099a50a-1e38-4dc7-a81b-8ac9858f4585 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.220 248514 DEBUG oslo_concurrency.lockutils [req-1f6ec00a-5c75-4257-b26a-efa61839da92 req-a099a50a-1e38-4dc7-a81b-8ac9858f4585 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.220 248514 DEBUG oslo_concurrency.lockutils [req-1f6ec00a-5c75-4257-b26a-efa61839da92 req-a099a50a-1e38-4dc7-a81b-8ac9858f4585 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.220 248514 DEBUG nova.compute.manager [req-1f6ec00a-5c75-4257-b26a-efa61839da92 req-a099a50a-1e38-4dc7-a81b-8ac9858f4585 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Processing event network-vif-plugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.645 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617766.6445847, 7f12ff6b-a944-402c-9e58-ee4338d7eca4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.646 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] VM Started (Lifecycle Event)#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.652 248514 DEBUG nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.670 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.677 248514 INFO nova.virt.libvirt.driver [-] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Instance spawned successfully.#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.677 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.719 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.731 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.733 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.734 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.734 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.735 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.736 248514 DEBUG nova.virt.libvirt.driver [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.745 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.788 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.789 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617766.6499858, 7f12ff6b-a944-402c-9e58-ee4338d7eca4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.789 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.839 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.845 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617766.656597, 7f12ff6b-a944-402c-9e58-ee4338d7eca4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.845 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.868 248514 INFO nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Took 8.74 seconds to spawn the instance on the hypervisor.#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.869 248514 DEBUG nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.909 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.916 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.958 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec 13 04:22:46 np0005558241 nova_compute[248510]: 2025-12-13 09:22:46.987 248514 INFO nova.compute.manager [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Took 10.01 seconds to build instance.#033[00m
Dec 13 04:22:47 np0005558241 nova_compute[248510]: 2025-12-13 09:22:47.006 248514 DEBUG oslo_concurrency.lockutils [None req-a4dbf2f4-2cb2-4a34-b2af-dd2c6147a36a 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:22:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:47.025 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:22:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3650: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:22:48 np0005558241 nova_compute[248510]: 2025-12-13 09:22:48.320 248514 DEBUG nova.compute.manager [req-eb3efb06-cc0e-4d8d-93cb-ee1913ee3426 req-6dfd71a8-9672-4223-97af-716242ecbf36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received event network-vif-plugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:22:48 np0005558241 nova_compute[248510]: 2025-12-13 09:22:48.321 248514 DEBUG oslo_concurrency.lockutils [req-eb3efb06-cc0e-4d8d-93cb-ee1913ee3426 req-6dfd71a8-9672-4223-97af-716242ecbf36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:22:48 np0005558241 nova_compute[248510]: 2025-12-13 09:22:48.321 248514 DEBUG oslo_concurrency.lockutils [req-eb3efb06-cc0e-4d8d-93cb-ee1913ee3426 req-6dfd71a8-9672-4223-97af-716242ecbf36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:22:48 np0005558241 nova_compute[248510]: 2025-12-13 09:22:48.322 248514 DEBUG oslo_concurrency.lockutils [req-eb3efb06-cc0e-4d8d-93cb-ee1913ee3426 req-6dfd71a8-9672-4223-97af-716242ecbf36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:22:48 np0005558241 nova_compute[248510]: 2025-12-13 09:22:48.322 248514 DEBUG nova.compute.manager [req-eb3efb06-cc0e-4d8d-93cb-ee1913ee3426 req-6dfd71a8-9672-4223-97af-716242ecbf36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] No waiting events found dispatching network-vif-plugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:22:48 np0005558241 nova_compute[248510]: 2025-12-13 09:22:48.323 248514 WARNING nova.compute.manager [req-eb3efb06-cc0e-4d8d-93cb-ee1913ee3426 req-6dfd71a8-9672-4223-97af-716242ecbf36 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received unexpected event network-vif-plugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f for instance with vm_state active and task_state None.#033[00m
Dec 13 04:22:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3651: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 80 op/s
Dec 13 04:22:49 np0005558241 nova_compute[248510]: 2025-12-13 09:22:49.774 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:49 np0005558241 nova_compute[248510]: 2025-12-13 09:22:49.821 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3652: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Dec 13 04:22:52 np0005558241 nova_compute[248510]: 2025-12-13 09:22:52.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:22:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3653: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 85 op/s
Dec 13 04:22:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:53 np0005558241 nova_compute[248510]: 2025-12-13 09:22:53.433 248514 DEBUG nova.compute.manager [req-ffdecd17-6063-426f-aaa6-386e2cd30322 req-d92a68b5-8e54-4087-977f-db446fc8b4e1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received event network-changed-f4743e20-bc79-4ca1-87e3-c94183b9a23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:22:53 np0005558241 nova_compute[248510]: 2025-12-13 09:22:53.433 248514 DEBUG nova.compute.manager [req-ffdecd17-6063-426f-aaa6-386e2cd30322 req-d92a68b5-8e54-4087-977f-db446fc8b4e1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Refreshing instance network info cache due to event network-changed-f4743e20-bc79-4ca1-87e3-c94183b9a23f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:22:53 np0005558241 nova_compute[248510]: 2025-12-13 09:22:53.433 248514 DEBUG oslo_concurrency.lockutils [req-ffdecd17-6063-426f-aaa6-386e2cd30322 req-d92a68b5-8e54-4087-977f-db446fc8b4e1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:22:53 np0005558241 nova_compute[248510]: 2025-12-13 09:22:53.434 248514 DEBUG oslo_concurrency.lockutils [req-ffdecd17-6063-426f-aaa6-386e2cd30322 req-d92a68b5-8e54-4087-977f-db446fc8b4e1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:22:53 np0005558241 nova_compute[248510]: 2025-12-13 09:22:53.434 248514 DEBUG nova.network.neutron [req-ffdecd17-6063-426f-aaa6-386e2cd30322 req-d92a68b5-8e54-4087-977f-db446fc8b4e1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Refreshing network info cache for port f4743e20-bc79-4ca1-87e3-c94183b9a23f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:22:54 np0005558241 nova_compute[248510]: 2025-12-13 09:22:54.777 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:54 np0005558241 nova_compute[248510]: 2025-12-13 09:22:54.823 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:22:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3654: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 569 KiB/s wr, 83 op/s
Dec 13 04:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:55.458 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:55.459 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:22:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:22:55.460 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:22:56 np0005558241 nova_compute[248510]: 2025-12-13 09:22:56.507 248514 DEBUG nova.network.neutron [req-ffdecd17-6063-426f-aaa6-386e2cd30322 req-d92a68b5-8e54-4087-977f-db446fc8b4e1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Updated VIF entry in instance network info cache for port f4743e20-bc79-4ca1-87e3-c94183b9a23f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:22:56 np0005558241 nova_compute[248510]: 2025-12-13 09:22:56.509 248514 DEBUG nova.network.neutron [req-ffdecd17-6063-426f-aaa6-386e2cd30322 req-d92a68b5-8e54-4087-977f-db446fc8b4e1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Updating instance_info_cache with network_info: [{"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:22:56 np0005558241 nova_compute[248510]: 2025-12-13 09:22:56.545 248514 DEBUG oslo_concurrency.lockutils [req-ffdecd17-6063-426f-aaa6-386e2cd30322 req-d92a68b5-8e54-4087-977f-db446fc8b4e1 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:22:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3655: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:22:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3656: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 76 op/s
Dec 13 04:22:59 np0005558241 nova_compute[248510]: 2025-12-13 09:22:59.824 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:00 np0005558241 ovn_controller[148476]: 2025-12-13T09:23:00Z|00215|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bd:a8:f9 10.100.0.13
Dec 13 04:23:00 np0005558241 ovn_controller[148476]: 2025-12-13T09:23:00Z|00216|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bd:a8:f9 10.100.0.13
Dec 13 04:23:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3657: 321 pgs: 321 active+clean; 173 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 662 KiB/s rd, 351 KiB/s wr, 35 op/s
Dec 13 04:23:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3658: 321 pgs: 321 active+clean; 180 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 1.1 MiB/s wr, 45 op/s
Dec 13 04:23:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:04 np0005558241 nova_compute[248510]: 2025-12-13 09:23:04.825 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3659: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 04:23:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3660: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 04:23:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3661: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 13 04:23:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:23:09
Dec 13 04:23:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:23:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:23:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', '.mgr', 'vms', 'cephfs.cephfs.meta', 'images', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'default.rgw.control']
Dec 13 04:23:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:23:09 np0005558241 nova_compute[248510]: 2025-12-13 09:23:09.828 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:23:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:23:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:23:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:23:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:23:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:23:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:23:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:23:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:23:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:23:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:23:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:23:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:23:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:23:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:23:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:23:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3662: 321 pgs: 321 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 320 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.660 248514 DEBUG nova.compute.manager [req-9c05005d-a548-4d62-8bb5-2fd9705ff8ca req-c0d7ad65-da35-4bbf-983a-d3385e941261 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received event network-changed-f4743e20-bc79-4ca1-87e3-c94183b9a23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.661 248514 DEBUG nova.compute.manager [req-9c05005d-a548-4d62-8bb5-2fd9705ff8ca req-c0d7ad65-da35-4bbf-983a-d3385e941261 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Refreshing instance network info cache due to event network-changed-f4743e20-bc79-4ca1-87e3-c94183b9a23f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.661 248514 DEBUG oslo_concurrency.lockutils [req-9c05005d-a548-4d62-8bb5-2fd9705ff8ca req-c0d7ad65-da35-4bbf-983a-d3385e941261 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.662 248514 DEBUG oslo_concurrency.lockutils [req-9c05005d-a548-4d62-8bb5-2fd9705ff8ca req-c0d7ad65-da35-4bbf-983a-d3385e941261 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.662 248514 DEBUG nova.network.neutron [req-9c05005d-a548-4d62-8bb5-2fd9705ff8ca req-c0d7ad65-da35-4bbf-983a-d3385e941261 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Refreshing network info cache for port f4743e20-bc79-4ca1-87e3-c94183b9a23f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.725 248514 DEBUG oslo_concurrency.lockutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.726 248514 DEBUG oslo_concurrency.lockutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.726 248514 DEBUG oslo_concurrency.lockutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.727 248514 DEBUG oslo_concurrency.lockutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.728 248514 DEBUG oslo_concurrency.lockutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.729 248514 INFO nova.compute.manager [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Terminating instance#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.730 248514 DEBUG nova.compute.manager [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:23:11 np0005558241 kernel: tapf4743e20-bc (unregistering): left promiscuous mode
Dec 13 04:23:11 np0005558241 NetworkManager[50376]: <info>  [1765617791.7954] device (tapf4743e20-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:23:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:23:11Z|01636|binding|INFO|Releasing lport f4743e20-bc79-4ca1-87e3-c94183b9a23f from this chassis (sb_readonly=0)
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.806 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:23:11Z|01637|binding|INFO|Setting lport f4743e20-bc79-4ca1-87e3-c94183b9a23f down in Southbound
Dec 13 04:23:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:23:11Z|01638|binding|INFO|Removing iface tapf4743e20-bc ovn-installed in OVS
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.811 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:11.827 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:a8:f9 10.100.0.13 2001:db8::f816:3eff:febd:a8f9'], port_security=['fa:16:3e:bd:a8:f9 10.100.0.13 2001:db8::f816:3eff:febd:a8f9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:febd:a8f9/64', 'neutron:device_id': '7f12ff6b-a944-402c-9e58-ee4338d7eca4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd4a89b2-79c1-411e-b3a6-9349b772360d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=681ece70-9463-4a56-a445-6b0b679ff248, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=f4743e20-bc79-4ca1-87e3-c94183b9a23f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:23:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:11.829 158419 INFO neutron.agent.ovn.metadata.agent [-] Port f4743e20-bc79-4ca1-87e3-c94183b9a23f in datapath e4b9bd04-ada7-4867-9918-3cd5d21d273d unbound from our chassis#033[00m
Dec 13 04:23:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:11.831 158419 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4b9bd04-ada7-4867-9918-3cd5d21d273d#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.834 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:11 np0005558241 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000098.scope: Deactivated successfully.
Dec 13 04:23:11 np0005558241 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000098.scope: Consumed 14.723s CPU time.
Dec 13 04:23:11 np0005558241 systemd-machined[210538]: Machine qemu-183-instance-00000098 terminated.
Dec 13 04:23:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:11.873 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc34f33-239c-494f-8a17-8947e5111a12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:23:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:11.922 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[c7461754-3087-43d1-aadd-4988ecd9a054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:23:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:11.926 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[5c406b85-f2b5-4158-8679-5066886a395c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:23:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:11.969 260916 DEBUG oslo.privsep.daemon [-] privsep: reply[9d5e7f90-cae4-46e1-9a9f-831ac5a7dfa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.973 248514 INFO nova.virt.libvirt.driver [-] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Instance destroyed successfully.#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.973 248514 DEBUG nova.objects.instance [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 7f12ff6b-a944-402c-9e58-ee4338d7eca4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.990 248514 DEBUG nova.virt.libvirt.vif [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:22:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1374339452',display_name='tempest-TestGettingAddress-server-1374339452',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1374339452',id=152,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPtRBq+IYkmbvmIJh/QIOTOUFFBD8bcvC1qbRw2zjdn4nFUb754ilWe6cE2YmyI9CIp/UsSOGtzaJ1FH6v8ozWcGolUeM9n52RqDlU5UB2iwMcylMROMdWHIoAhEPM94oA==',key_name='tempest-TestGettingAddress-343990024',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:22:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-330yrx20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:22:46Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=7f12ff6b-a944-402c-9e58-ee4338d7eca4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.990 248514 DEBUG nova.network.os_vif_util [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:23:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:11.990 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1c06f3-b08f-4286-8aab-5b08a2af948f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4b9bd04-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:66:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 465], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1029136, 'reachable_time': 27101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408305, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.991 248514 DEBUG nova.network.os_vif_util [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bd:a8:f9,bridge_name='br-int',has_traffic_filtering=True,id=f4743e20-bc79-4ca1-87e3-c94183b9a23f,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4743e20-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.993 248514 DEBUG os_vif [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:a8:f9,bridge_name='br-int',has_traffic_filtering=True,id=f4743e20-bc79-4ca1-87e3-c94183b9a23f,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4743e20-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.995 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4743e20-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.997 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:11 np0005558241 nova_compute[248510]: 2025-12-13 09:23:11.999 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:12 np0005558241 nova_compute[248510]: 2025-12-13 09:23:12.002 248514 INFO os_vif [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:a8:f9,bridge_name='br-int',has_traffic_filtering=True,id=f4743e20-bc79-4ca1-87e3-c94183b9a23f,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4743e20-bc')#033[00m
Dec 13 04:23:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:12.014 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[cc71fe9a-10c9-45dc-9b36-beafbe0f9c59]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape4b9bd04-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1029153, 'tstamp': 1029153}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408307, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape4b9bd04-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1029158, 'tstamp': 1029158}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408307, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:23:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:12.016 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4b9bd04-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:23:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:12.020 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4b9bd04-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:23:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:12.020 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:23:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:12.021 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4b9bd04-a0, col_values=(('external_ids', {'iface-id': 'f3bff06d-212e-4afb-99d4-011e9e890967'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:23:12 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:12.021 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:23:12 np0005558241 nova_compute[248510]: 2025-12-13 09:23:12.023 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:12 np0005558241 nova_compute[248510]: 2025-12-13 09:23:12.325 248514 INFO nova.virt.libvirt.driver [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Deleting instance files /var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4_del#033[00m
Dec 13 04:23:12 np0005558241 nova_compute[248510]: 2025-12-13 09:23:12.326 248514 INFO nova.virt.libvirt.driver [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Deletion of /var/lib/nova/instances/7f12ff6b-a944-402c-9e58-ee4338d7eca4_del complete#033[00m
Dec 13 04:23:12 np0005558241 nova_compute[248510]: 2025-12-13 09:23:12.393 248514 INFO nova.compute.manager [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:23:12 np0005558241 nova_compute[248510]: 2025-12-13 09:23:12.394 248514 DEBUG oslo.service.loopingcall [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:23:12 np0005558241 nova_compute[248510]: 2025-12-13 09:23:12.394 248514 DEBUG nova.compute.manager [-] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:23:12 np0005558241 nova_compute[248510]: 2025-12-13 09:23:12.394 248514 DEBUG nova.network.neutron [-] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:23:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3663: 321 pgs: 321 active+clean; 168 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 13 04:23:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:13 np0005558241 nova_compute[248510]: 2025-12-13 09:23:13.742 248514 DEBUG nova.compute.manager [req-8a118b22-8b9f-4ae7-93be-e15a8c6f5e1d req-30fc612c-daac-43a8-9fee-428df901ba41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received event network-vif-unplugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:23:13 np0005558241 nova_compute[248510]: 2025-12-13 09:23:13.742 248514 DEBUG oslo_concurrency.lockutils [req-8a118b22-8b9f-4ae7-93be-e15a8c6f5e1d req-30fc612c-daac-43a8-9fee-428df901ba41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:23:13 np0005558241 nova_compute[248510]: 2025-12-13 09:23:13.742 248514 DEBUG oslo_concurrency.lockutils [req-8a118b22-8b9f-4ae7-93be-e15a8c6f5e1d req-30fc612c-daac-43a8-9fee-428df901ba41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:23:13 np0005558241 nova_compute[248510]: 2025-12-13 09:23:13.743 248514 DEBUG oslo_concurrency.lockutils [req-8a118b22-8b9f-4ae7-93be-e15a8c6f5e1d req-30fc612c-daac-43a8-9fee-428df901ba41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:23:13 np0005558241 nova_compute[248510]: 2025-12-13 09:23:13.743 248514 DEBUG nova.compute.manager [req-8a118b22-8b9f-4ae7-93be-e15a8c6f5e1d req-30fc612c-daac-43a8-9fee-428df901ba41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] No waiting events found dispatching network-vif-unplugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:23:13 np0005558241 nova_compute[248510]: 2025-12-13 09:23:13.743 248514 DEBUG nova.compute.manager [req-8a118b22-8b9f-4ae7-93be-e15a8c6f5e1d req-30fc612c-daac-43a8-9fee-428df901ba41 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received event network-vif-unplugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:23:14 np0005558241 nova_compute[248510]: 2025-12-13 09:23:14.887 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3664: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 1.1 MiB/s wr, 52 op/s
Dec 13 04:23:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:23:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/903405779' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:23:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:23:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/903405779' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:23:15 np0005558241 nova_compute[248510]: 2025-12-13 09:23:15.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:23:15 np0005558241 nova_compute[248510]: 2025-12-13 09:23:15.812 248514 DEBUG nova.network.neutron [-] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:23:15 np0005558241 nova_compute[248510]: 2025-12-13 09:23:15.830 248514 INFO nova.compute.manager [-] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Took 3.44 seconds to deallocate network for instance.#033[00m
Dec 13 04:23:15 np0005558241 nova_compute[248510]: 2025-12-13 09:23:15.856 248514 DEBUG nova.compute.manager [req-a42a6d54-c951-4c73-b740-9c483ed0dc8e req-c16d3bb3-2c00-445f-895c-aecb8fa1ddee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received event network-vif-plugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:23:15 np0005558241 nova_compute[248510]: 2025-12-13 09:23:15.857 248514 DEBUG oslo_concurrency.lockutils [req-a42a6d54-c951-4c73-b740-9c483ed0dc8e req-c16d3bb3-2c00-445f-895c-aecb8fa1ddee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:23:15 np0005558241 nova_compute[248510]: 2025-12-13 09:23:15.857 248514 DEBUG oslo_concurrency.lockutils [req-a42a6d54-c951-4c73-b740-9c483ed0dc8e req-c16d3bb3-2c00-445f-895c-aecb8fa1ddee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:23:15 np0005558241 nova_compute[248510]: 2025-12-13 09:23:15.857 248514 DEBUG oslo_concurrency.lockutils [req-a42a6d54-c951-4c73-b740-9c483ed0dc8e req-c16d3bb3-2c00-445f-895c-aecb8fa1ddee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:23:15 np0005558241 nova_compute[248510]: 2025-12-13 09:23:15.857 248514 DEBUG nova.compute.manager [req-a42a6d54-c951-4c73-b740-9c483ed0dc8e req-c16d3bb3-2c00-445f-895c-aecb8fa1ddee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] No waiting events found dispatching network-vif-plugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:23:15 np0005558241 nova_compute[248510]: 2025-12-13 09:23:15.858 248514 WARNING nova.compute.manager [req-a42a6d54-c951-4c73-b740-9c483ed0dc8e req-c16d3bb3-2c00-445f-895c-aecb8fa1ddee 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received unexpected event network-vif-plugged-f4743e20-bc79-4ca1-87e3-c94183b9a23f for instance with vm_state active and task_state deleting.#033[00m
Dec 13 04:23:15 np0005558241 nova_compute[248510]: 2025-12-13 09:23:15.877 248514 DEBUG oslo_concurrency.lockutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:23:15 np0005558241 nova_compute[248510]: 2025-12-13 09:23:15.877 248514 DEBUG oslo_concurrency.lockutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:23:15 np0005558241 nova_compute[248510]: 2025-12-13 09:23:15.976 248514 DEBUG oslo_concurrency.processutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:23:15 np0005558241 podman[408328]: 2025-12-13 09:23:15.985461652 +0000 UTC m=+0.073017850 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 04:23:15 np0005558241 podman[408329]: 2025-12-13 09:23:15.998195739 +0000 UTC m=+0.072745333 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:23:16 np0005558241 podman[408327]: 2025-12-13 09:23:16.030207857 +0000 UTC m=+0.117916129 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 04:23:16 np0005558241 nova_compute[248510]: 2025-12-13 09:23:16.230 248514 DEBUG nova.network.neutron [req-9c05005d-a548-4d62-8bb5-2fd9705ff8ca req-c0d7ad65-da35-4bbf-983a-d3385e941261 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Updated VIF entry in instance network info cache for port f4743e20-bc79-4ca1-87e3-c94183b9a23f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:23:16 np0005558241 nova_compute[248510]: 2025-12-13 09:23:16.231 248514 DEBUG nova.network.neutron [req-9c05005d-a548-4d62-8bb5-2fd9705ff8ca req-c0d7ad65-da35-4bbf-983a-d3385e941261 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Updating instance_info_cache with network_info: [{"id": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "address": "fa:16:3e:bd:a8:f9", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febd:a8f9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4743e20-bc", "ovs_interfaceid": "f4743e20-bc79-4ca1-87e3-c94183b9a23f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:23:16 np0005558241 nova_compute[248510]: 2025-12-13 09:23:16.254 248514 DEBUG oslo_concurrency.lockutils [req-9c05005d-a548-4d62-8bb5-2fd9705ff8ca req-c0d7ad65-da35-4bbf-983a-d3385e941261 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-7f12ff6b-a944-402c-9e58-ee4338d7eca4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:23:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:23:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2135396273' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:23:16 np0005558241 nova_compute[248510]: 2025-12-13 09:23:16.538 248514 DEBUG oslo_concurrency.processutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:23:16 np0005558241 nova_compute[248510]: 2025-12-13 09:23:16.545 248514 DEBUG nova.compute.provider_tree [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:23:16 np0005558241 nova_compute[248510]: 2025-12-13 09:23:16.565 248514 DEBUG nova.scheduler.client.report [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:23:16 np0005558241 nova_compute[248510]: 2025-12-13 09:23:16.997 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:17 np0005558241 nova_compute[248510]: 2025-12-13 09:23:17.020 248514 DEBUG oslo_concurrency.lockutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:23:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3665: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 29 op/s
Dec 13 04:23:17 np0005558241 nova_compute[248510]: 2025-12-13 09:23:17.379 248514 INFO nova.scheduler.client.report [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 7f12ff6b-a944-402c-9e58-ee4338d7eca4#033[00m
Dec 13 04:23:17 np0005558241 nova_compute[248510]: 2025-12-13 09:23:17.742 248514 DEBUG oslo_concurrency.lockutils [None req-c228a76b-08a5-40bd-bd3a-8cb039e45774 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "7f12ff6b-a944-402c-9e58-ee4338d7eca4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:23:17 np0005558241 nova_compute[248510]: 2025-12-13 09:23:17.978 248514 DEBUG nova.compute.manager [req-b5dc4951-b301-4325-8cee-9b13f7f41584 req-54f81004-7aa3-44ee-b611-37b9517de27c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Received event network-vif-deleted-f4743e20-bc79-4ca1-87e3-c94183b9a23f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:23:17 np0005558241 nova_compute[248510]: 2025-12-13 09:23:17.979 248514 INFO nova.compute.manager [req-b5dc4951-b301-4325-8cee-9b13f7f41584 req-54f81004-7aa3-44ee-b611-37b9517de27c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Neutron deleted interface f4743e20-bc79-4ca1-87e3-c94183b9a23f; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 04:23:17 np0005558241 nova_compute[248510]: 2025-12-13 09:23:17.979 248514 DEBUG nova.network.neutron [req-b5dc4951-b301-4325-8cee-9b13f7f41584 req-54f81004-7aa3-44ee-b611-37b9517de27c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Dec 13 04:23:17 np0005558241 nova_compute[248510]: 2025-12-13 09:23:17.981 248514 DEBUG nova.compute.manager [req-b5dc4951-b301-4325-8cee-9b13f7f41584 req-54f81004-7aa3-44ee-b611-37b9517de27c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Detach interface failed, port_id=f4743e20-bc79-4ca1-87e3-c94183b9a23f, reason: Instance 7f12ff6b-a944-402c-9e58-ee4338d7eca4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec 13 04:23:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.091 248514 DEBUG oslo_concurrency.lockutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "1437fc03-8f31-440f-8928-2fe388a22bbe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.092 248514 DEBUG oslo_concurrency.lockutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.093 248514 DEBUG oslo_concurrency.lockutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.094 248514 DEBUG oslo_concurrency.lockutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.094 248514 DEBUG oslo_concurrency.lockutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.096 248514 INFO nova.compute.manager [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Terminating instance#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.098 248514 DEBUG nova.compute.manager [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:23:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3666: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 13 KiB/s wr, 29 op/s
Dec 13 04:23:19 np0005558241 kernel: tap2777ba55-72 (unregistering): left promiscuous mode
Dec 13 04:23:19 np0005558241 NetworkManager[50376]: <info>  [1765617799.1471] device (tap2777ba55-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:23:19 np0005558241 ovn_controller[148476]: 2025-12-13T09:23:19Z|01639|binding|INFO|Releasing lport 2777ba55-72e3-4334-96ae-48077ed6a8d5 from this chassis (sb_readonly=0)
Dec 13 04:23:19 np0005558241 ovn_controller[148476]: 2025-12-13T09:23:19Z|01640|binding|INFO|Setting lport 2777ba55-72e3-4334-96ae-48077ed6a8d5 down in Southbound
Dec 13 04:23:19 np0005558241 ovn_controller[148476]: 2025-12-13T09:23:19Z|01641|binding|INFO|Removing iface tap2777ba55-72 ovn-installed in OVS
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.157 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.166 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:17:51 10.100.0.14 2001:db8::f816:3eff:fe4e:1751'], port_security=['fa:16:3e:4e:17:51 10.100.0.14 2001:db8::f816:3eff:fe4e:1751'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8::f816:3eff:fe4e:1751/64', 'neutron:device_id': '1437fc03-8f31-440f-8928-2fe388a22bbe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dce54a0218054f5380cc72fa81c26b43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd4a89b2-79c1-411e-b3a6-9349b772360d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=681ece70-9463-4a56-a445-6b0b679ff248, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=2777ba55-72e3-4334-96ae-48077ed6a8d5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:23:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.167 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 2777ba55-72e3-4334-96ae-48077ed6a8d5 in datapath e4b9bd04-ada7-4867-9918-3cd5d21d273d unbound from our chassis#033[00m
Dec 13 04:23:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.169 158419 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4b9bd04-ada7-4867-9918-3cd5d21d273d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 13 04:23:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.170 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[c58bfc89-8f6e-4051-9558-0fd1a1bfd6b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:23:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.171 158419 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d namespace which is not needed anymore#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.195 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:19 np0005558241 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000097.scope: Deactivated successfully.
Dec 13 04:23:19 np0005558241 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000097.scope: Consumed 16.905s CPU time.
Dec 13 04:23:19 np0005558241 systemd-machined[210538]: Machine qemu-182-instance-00000097 terminated.
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.344 248514 INFO nova.virt.libvirt.driver [-] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Instance destroyed successfully.#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.346 248514 DEBUG nova.objects.instance [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lazy-loading 'resources' on Instance uuid 1437fc03-8f31-440f-8928-2fe388a22bbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:23:19 np0005558241 neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d[407107]: [NOTICE]   (407112) : haproxy version is 2.8.14-c23fe91
Dec 13 04:23:19 np0005558241 neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d[407107]: [NOTICE]   (407112) : path to executable is /usr/sbin/haproxy
Dec 13 04:23:19 np0005558241 neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d[407107]: [WARNING]  (407112) : Exiting Master process...
Dec 13 04:23:19 np0005558241 neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d[407107]: [WARNING]  (407112) : Exiting Master process...
Dec 13 04:23:19 np0005558241 neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d[407107]: [ALERT]    (407112) : Current worker (407114) exited with code 143 (Terminated)
Dec 13 04:23:19 np0005558241 neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d[407107]: [WARNING]  (407112) : All workers exited. Exiting... (0)
Dec 13 04:23:19 np0005558241 systemd[1]: libpod-5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8.scope: Deactivated successfully.
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.366 248514 DEBUG nova.virt.libvirt.vif [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:21:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1012009154',display_name='tempest-TestGettingAddress-server-1012009154',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1012009154',id=151,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPtRBq+IYkmbvmIJh/QIOTOUFFBD8bcvC1qbRw2zjdn4nFUb754ilWe6cE2YmyI9CIp/UsSOGtzaJ1FH6v8ozWcGolUeM9n52RqDlU5UB2iwMcylMROMdWHIoAhEPM94oA==',key_name='tempest-TestGettingAddress-343990024',keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:21:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dce54a0218054f5380cc72fa81c26b43',ramdisk_id='',reservation_id='r-4xgqh80f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-76263460',owner_user_name='tempest-TestGettingAddress-76263460-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:21:54Z,user_data=None,user_id='147b09b7bd134651ba75bd6b135b270d',uuid=1437fc03-8f31-440f-8928-2fe388a22bbe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:23:19 np0005558241 podman[408436]: 2025-12-13 09:23:19.367430701 +0000 UTC m=+0.072831376 container died 5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.367 248514 DEBUG nova.network.os_vif_util [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converting VIF {"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.368 248514 DEBUG nova.network.os_vif_util [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4e:17:51,bridge_name='br-int',has_traffic_filtering=True,id=2777ba55-72e3-4334-96ae-48077ed6a8d5,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2777ba55-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.369 248514 DEBUG os_vif [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:17:51,bridge_name='br-int',has_traffic_filtering=True,id=2777ba55-72e3-4334-96ae-48077ed6a8d5,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2777ba55-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.373 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.373 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2777ba55-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.375 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.376 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.379 248514 INFO os_vif [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:17:51,bridge_name='br-int',has_traffic_filtering=True,id=2777ba55-72e3-4334-96ae-48077ed6a8d5,network=Network(e4b9bd04-ada7-4867-9918-3cd5d21d273d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2777ba55-72')#033[00m
Dec 13 04:23:19 np0005558241 systemd[1]: var-lib-containers-storage-overlay-bd99150eceb1e5ac1a3f0f2d3168cf6b269a5c94e89b47366e7abdf1ad0507ca-merged.mount: Deactivated successfully.
Dec 13 04:23:19 np0005558241 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8-userdata-shm.mount: Deactivated successfully.
Dec 13 04:23:19 np0005558241 podman[408436]: 2025-12-13 09:23:19.428037791 +0000 UTC m=+0.133438416 container cleanup 5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:23:19 np0005558241 systemd[1]: libpod-conmon-5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8.scope: Deactivated successfully.
Dec 13 04:23:19 np0005558241 podman[408497]: 2025-12-13 09:23:19.535258622 +0000 UTC m=+0.064012886 container remove 5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:23:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.542 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[eee7399f-882b-41e9-8f34-cc28d1cce3b3]: (4, ('Sat Dec 13 09:23:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d (5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8)\n5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8\nSat Dec 13 09:23:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d (5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8)\n5e1665bfcf7a577109408fc6fed0a0991be88ed6efdbb548703c6fc8481e9ff8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:23:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.546 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[0ab2db02-4a7a-4adb-a653-f1beb5bae508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:23:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.548 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4b9bd04-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.550 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:19 np0005558241 kernel: tape4b9bd04-a0: left promiscuous mode
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.570 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.573 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a23a3a08-6749-48e0-91cb-f1d7adfb4590]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:23:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.588 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[674fd9e6-5787-4a88-9829-873637021180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:23:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.589 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[bc383750-bc66-45a3-a760-72844afaf973]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:23:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.608 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c68055-a588-4924-b736-1742f24cbbfb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1029125, 'reachable_time': 17057, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408513, 'error': None, 'target': 'ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:23:19 np0005558241 systemd[1]: run-netns-ovnmeta\x2de4b9bd04\x2dada7\x2d4867\x2d9918\x2d3cd5d21d273d.mount: Deactivated successfully.
Dec 13 04:23:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.612 158790 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e4b9bd04-ada7-4867-9918-3cd5d21d273d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec 13 04:23:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:19.613 158790 DEBUG oslo.privsep.daemon [-] privsep: reply[363c6e2a-7e13-4aac-849a-27768c598cfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.686 248514 INFO nova.virt.libvirt.driver [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Deleting instance files /var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe_del#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.687 248514 INFO nova.virt.libvirt.driver [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Deletion of /var/lib/nova/instances/1437fc03-8f31-440f-8928-2fe388a22bbe_del complete#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.761 248514 INFO nova.compute.manager [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.762 248514 DEBUG oslo.service.loopingcall [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.762 248514 DEBUG nova.compute.manager [-] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.763 248514 DEBUG nova.network.neutron [-] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:23:19 np0005558241 nova_compute[248510]: 2025-12-13 09:23:19.891 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:20 np0005558241 nova_compute[248510]: 2025-12-13 09:23:20.075 248514 DEBUG nova.compute.manager [req-616068c8-3c4e-4ba1-a0ba-d03f20905ffa req-5a278e7e-dc55-4d46-970a-6c2ce79f602c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received event network-changed-2777ba55-72e3-4334-96ae-48077ed6a8d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:23:20 np0005558241 nova_compute[248510]: 2025-12-13 09:23:20.075 248514 DEBUG nova.compute.manager [req-616068c8-3c4e-4ba1-a0ba-d03f20905ffa req-5a278e7e-dc55-4d46-970a-6c2ce79f602c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Refreshing instance network info cache due to event network-changed-2777ba55-72e3-4334-96ae-48077ed6a8d5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:23:20 np0005558241 nova_compute[248510]: 2025-12-13 09:23:20.076 248514 DEBUG oslo_concurrency.lockutils [req-616068c8-3c4e-4ba1-a0ba-d03f20905ffa req-5a278e7e-dc55-4d46-970a-6c2ce79f602c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:23:20 np0005558241 nova_compute[248510]: 2025-12-13 09:23:20.076 248514 DEBUG oslo_concurrency.lockutils [req-616068c8-3c4e-4ba1-a0ba-d03f20905ffa req-5a278e7e-dc55-4d46-970a-6c2ce79f602c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:23:20 np0005558241 nova_compute[248510]: 2025-12-13 09:23:20.076 248514 DEBUG nova.network.neutron [req-616068c8-3c4e-4ba1-a0ba-d03f20905ffa req-5a278e7e-dc55-4d46-970a-6c2ce79f602c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Refreshing network info cache for port 2777ba55-72e3-4334-96ae-48077ed6a8d5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:23:20 np0005558241 nova_compute[248510]: 2025-12-13 09:23:20.157 248514 DEBUG nova.compute.manager [req-5f6a5681-1087-4fad-b0ea-00885036706a req-f2fa81e4-a06f-4d7c-bd4f-14749e77a6f9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received event network-vif-unplugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:23:20 np0005558241 nova_compute[248510]: 2025-12-13 09:23:20.158 248514 DEBUG oslo_concurrency.lockutils [req-5f6a5681-1087-4fad-b0ea-00885036706a req-f2fa81e4-a06f-4d7c-bd4f-14749e77a6f9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:23:20 np0005558241 nova_compute[248510]: 2025-12-13 09:23:20.158 248514 DEBUG oslo_concurrency.lockutils [req-5f6a5681-1087-4fad-b0ea-00885036706a req-f2fa81e4-a06f-4d7c-bd4f-14749e77a6f9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:23:20 np0005558241 nova_compute[248510]: 2025-12-13 09:23:20.159 248514 DEBUG oslo_concurrency.lockutils [req-5f6a5681-1087-4fad-b0ea-00885036706a req-f2fa81e4-a06f-4d7c-bd4f-14749e77a6f9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:23:20 np0005558241 nova_compute[248510]: 2025-12-13 09:23:20.159 248514 DEBUG nova.compute.manager [req-5f6a5681-1087-4fad-b0ea-00885036706a req-f2fa81e4-a06f-4d7c-bd4f-14749e77a6f9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] No waiting events found dispatching network-vif-unplugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:23:20 np0005558241 nova_compute[248510]: 2025-12-13 09:23:20.160 248514 DEBUG nova.compute.manager [req-5f6a5681-1087-4fad-b0ea-00885036706a req-f2fa81e4-a06f-4d7c-bd4f-14749e77a6f9 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received event network-vif-unplugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:23:20 np0005558241 nova_compute[248510]: 2025-12-13 09:23:20.565 248514 DEBUG nova.network.neutron [-] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:23:20 np0005558241 nova_compute[248510]: 2025-12-13 09:23:20.584 248514 INFO nova.compute.manager [-] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Took 0.82 seconds to deallocate network for instance.#033[00m
Dec 13 04:23:20 np0005558241 nova_compute[248510]: 2025-12-13 09:23:20.628 248514 DEBUG oslo_concurrency.lockutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:23:20 np0005558241 nova_compute[248510]: 2025-12-13 09:23:20.628 248514 DEBUG oslo_concurrency.lockutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:23:20 np0005558241 nova_compute[248510]: 2025-12-13 09:23:20.671 248514 DEBUG oslo_concurrency.processutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3667: 321 pgs: 321 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.6 KiB/s wr, 29 op/s
Dec 13 04:23:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:23:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3610069559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:23:21 np0005558241 nova_compute[248510]: 2025-12-13 09:23:21.292 248514 DEBUG oslo_concurrency.processutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:23:21 np0005558241 nova_compute[248510]: 2025-12-13 09:23:21.301 248514 DEBUG nova.compute.provider_tree [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:23:21 np0005558241 nova_compute[248510]: 2025-12-13 09:23:21.319 248514 DEBUG nova.scheduler.client.report [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:23:21 np0005558241 nova_compute[248510]: 2025-12-13 09:23:21.344 248514 DEBUG oslo_concurrency.lockutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:23:21 np0005558241 nova_compute[248510]: 2025-12-13 09:23:21.369 248514 INFO nova.scheduler.client.report [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Deleted allocations for instance 1437fc03-8f31-440f-8928-2fe388a22bbe#033[00m
Dec 13 04:23:21 np0005558241 nova_compute[248510]: 2025-12-13 09:23:21.438 248514 DEBUG oslo_concurrency.lockutils [None req-e249df11-0a2e-4213-8910-ad08ce210bbd 147b09b7bd134651ba75bd6b135b270d dce54a0218054f5380cc72fa81c26b43 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007748083575532455 of space, bias 1.0, pg target 0.23244250726597362 quantized to 32 (current 32)
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696737595831594 of space, bias 1.0, pg target 0.20090212787494782 quantized to 32 (current 32)
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.723939957605983e-07 of space, bias 4.0, pg target 0.0006868727949127179 quantized to 16 (current 32)
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:23:22 np0005558241 nova_compute[248510]: 2025-12-13 09:23:22.172 248514 DEBUG nova.compute.manager [req-010775ff-71af-4ac8-8d5e-d9f134885f02 req-316e30ae-a828-4a32-9c24-98d9587b7fc4 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received event network-vif-deleted-2777ba55-72e3-4334-96ae-48077ed6a8d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:23:22 np0005558241 nova_compute[248510]: 2025-12-13 09:23:22.245 248514 DEBUG nova.compute.manager [req-2779d8b0-6941-453d-bf58-32647385d28b req-16c20014-a79f-44eb-859e-6eb4d5efd8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received event network-vif-plugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:23:22 np0005558241 nova_compute[248510]: 2025-12-13 09:23:22.246 248514 DEBUG oslo_concurrency.lockutils [req-2779d8b0-6941-453d-bf58-32647385d28b req-16c20014-a79f-44eb-859e-6eb4d5efd8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:23:22 np0005558241 nova_compute[248510]: 2025-12-13 09:23:22.246 248514 DEBUG oslo_concurrency.lockutils [req-2779d8b0-6941-453d-bf58-32647385d28b req-16c20014-a79f-44eb-859e-6eb4d5efd8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:23:22 np0005558241 nova_compute[248510]: 2025-12-13 09:23:22.247 248514 DEBUG oslo_concurrency.lockutils [req-2779d8b0-6941-453d-bf58-32647385d28b req-16c20014-a79f-44eb-859e-6eb4d5efd8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "1437fc03-8f31-440f-8928-2fe388a22bbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:23:22 np0005558241 nova_compute[248510]: 2025-12-13 09:23:22.247 248514 DEBUG nova.compute.manager [req-2779d8b0-6941-453d-bf58-32647385d28b req-16c20014-a79f-44eb-859e-6eb4d5efd8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] No waiting events found dispatching network-vif-plugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:23:22 np0005558241 nova_compute[248510]: 2025-12-13 09:23:22.247 248514 WARNING nova.compute.manager [req-2779d8b0-6941-453d-bf58-32647385d28b req-16c20014-a79f-44eb-859e-6eb4d5efd8d0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Received unexpected event network-vif-plugged-2777ba55-72e3-4334-96ae-48077ed6a8d5 for instance with vm_state deleted and task_state None.#033[00m
Dec 13 04:23:22 np0005558241 nova_compute[248510]: 2025-12-13 09:23:22.489 248514 DEBUG nova.network.neutron [req-616068c8-3c4e-4ba1-a0ba-d03f20905ffa req-5a278e7e-dc55-4d46-970a-6c2ce79f602c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updated VIF entry in instance network info cache for port 2777ba55-72e3-4334-96ae-48077ed6a8d5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec 13 04:23:22 np0005558241 nova_compute[248510]: 2025-12-13 09:23:22.490 248514 DEBUG nova.network.neutron [req-616068c8-3c4e-4ba1-a0ba-d03f20905ffa req-5a278e7e-dc55-4d46-970a-6c2ce79f602c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Updating instance_info_cache with network_info: [{"id": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "address": "fa:16:3e:4e:17:51", "network": {"id": "e4b9bd04-ada7-4867-9918-3cd5d21d273d", "bridge": "br-int", "label": "tempest-network-smoke--1791741515", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe4e:1751", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "dce54a0218054f5380cc72fa81c26b43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2777ba55-72", "ovs_interfaceid": "2777ba55-72e3-4334-96ae-48077ed6a8d5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:23:22 np0005558241 nova_compute[248510]: 2025-12-13 09:23:22.513 248514 DEBUG oslo_concurrency.lockutils [req-616068c8-3c4e-4ba1-a0ba-d03f20905ffa req-5a278e7e-dc55-4d46-970a-6c2ce79f602c 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-1437fc03-8f31-440f-8928-2fe388a22bbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:23:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3668: 321 pgs: 321 active+clean; 89 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.4 KiB/s wr, 31 op/s
Dec 13 04:23:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:24 np0005558241 nova_compute[248510]: 2025-12-13 09:23:24.375 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:24 np0005558241 nova_compute[248510]: 2025-12-13 09:23:24.893 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3669: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 4.6 KiB/s wr, 45 op/s
Dec 13 04:23:26 np0005558241 nova_compute[248510]: 2025-12-13 09:23:26.971 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617791.9701722, 7f12ff6b-a944-402c-9e58-ee4338d7eca4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:23:26 np0005558241 nova_compute[248510]: 2025-12-13 09:23:26.972 248514 INFO nova.compute.manager [-] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:23:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3670: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Dec 13 04:23:27 np0005558241 nova_compute[248510]: 2025-12-13 09:23:27.227 248514 DEBUG nova.compute.manager [None req-28b321c2-fd0b-4d99-97fe-f0f5c10b1cd0 - - - - - -] [instance: 7f12ff6b-a944-402c-9e58-ee4338d7eca4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:23:27 np0005558241 nova_compute[248510]: 2025-12-13 09:23:27.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:23:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3671: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Dec 13 04:23:29 np0005558241 nova_compute[248510]: 2025-12-13 09:23:29.378 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:29 np0005558241 nova_compute[248510]: 2025-12-13 09:23:29.576 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:29 np0005558241 nova_compute[248510]: 2025-12-13 09:23:29.687 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:29 np0005558241 nova_compute[248510]: 2025-12-13 09:23:29.895 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:30 np0005558241 nova_compute[248510]: 2025-12-13 09:23:30.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:23:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3672: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 27 op/s
Dec 13 04:23:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3673: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 13 04:23:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:34 np0005558241 nova_compute[248510]: 2025-12-13 09:23:34.342 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617799.3405201, 1437fc03-8f31-440f-8928-2fe388a22bbe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:23:34 np0005558241 nova_compute[248510]: 2025-12-13 09:23:34.343 248514 INFO nova.compute.manager [-] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:23:34 np0005558241 nova_compute[248510]: 2025-12-13 09:23:34.377 248514 DEBUG nova.compute.manager [None req-ea8bc6c1-f54d-4455-af97-c4ac5d635111 - - - - - -] [instance: 1437fc03-8f31-440f-8928-2fe388a22bbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:23:34 np0005558241 nova_compute[248510]: 2025-12-13 09:23:34.379 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:34 np0005558241 nova_compute[248510]: 2025-12-13 09:23:34.897 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3674: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 682 B/s wr, 24 op/s
Dec 13 04:23:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3675: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:23:37 np0005558241 nova_compute[248510]: 2025-12-13 09:23:37.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:23:37 np0005558241 nova_compute[248510]: 2025-12-13 09:23:37.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:23:37 np0005558241 nova_compute[248510]: 2025-12-13 09:23:37.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:23:37 np0005558241 nova_compute[248510]: 2025-12-13 09:23:37.876 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:23:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:23:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:23:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:23:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:23:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:23:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:23:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:23:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:23:38 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:23:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:23:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:23:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:23:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:23:38 np0005558241 nova_compute[248510]: 2025-12-13 09:23:38.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:23:39 np0005558241 podman[408682]: 2025-12-13 09:23:39.070862715 +0000 UTC m=+0.059712228 container create f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 04:23:39 np0005558241 systemd[1]: Started libpod-conmon-f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1.scope.
Dec 13 04:23:39 np0005558241 podman[408682]: 2025-12-13 09:23:39.039637362 +0000 UTC m=+0.028486935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:23:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3676: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:23:39 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:23:39 np0005558241 podman[408682]: 2025-12-13 09:23:39.174443782 +0000 UTC m=+0.163293275 container init f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 04:23:39 np0005558241 podman[408682]: 2025-12-13 09:23:39.182539455 +0000 UTC m=+0.171388928 container start f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_zhukovsky, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 04:23:39 np0005558241 podman[408682]: 2025-12-13 09:23:39.186422202 +0000 UTC m=+0.175271785 container attach f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 04:23:39 np0005558241 beautiful_zhukovsky[408698]: 167 167
Dec 13 04:23:39 np0005558241 systemd[1]: libpod-f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1.scope: Deactivated successfully.
Dec 13 04:23:39 np0005558241 conmon[408698]: conmon f1b698a6877809036522 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1.scope/container/memory.events
Dec 13 04:23:39 np0005558241 podman[408682]: 2025-12-13 09:23:39.190778321 +0000 UTC m=+0.179627784 container died f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:23:39 np0005558241 systemd[1]: var-lib-containers-storage-overlay-80bc05bd92f119a25415ebb602c748c2aa02b5b99bc62b0dea26955bda07548e-merged.mount: Deactivated successfully.
Dec 13 04:23:39 np0005558241 podman[408682]: 2025-12-13 09:23:39.225324427 +0000 UTC m=+0.214173900 container remove f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_zhukovsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 04:23:39 np0005558241 systemd[1]: libpod-conmon-f1b698a68778090365220ed1b844d374ecf411623b553ace3e81d0206cbc28e1.scope: Deactivated successfully.
Dec 13 04:23:39 np0005558241 nova_compute[248510]: 2025-12-13 09:23:39.381 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:39 np0005558241 podman[408721]: 2025-12-13 09:23:39.414225741 +0000 UTC m=+0.053579743 container create 75934a98c2d89f859d4fdfb6df1c075b765977f4ac124b74bf30bebbf47c0694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_mccarthy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Dec 13 04:23:39 np0005558241 systemd[1]: Started libpod-conmon-75934a98c2d89f859d4fdfb6df1c075b765977f4ac124b74bf30bebbf47c0694.scope.
Dec 13 04:23:39 np0005558241 podman[408721]: 2025-12-13 09:23:39.3918424 +0000 UTC m=+0.031196492 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:23:39 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:23:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b1c99ba7960d6fd4f288179a158c40c0ac013319bb872026c4fc7a3d7e348fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b1c99ba7960d6fd4f288179a158c40c0ac013319bb872026c4fc7a3d7e348fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b1c99ba7960d6fd4f288179a158c40c0ac013319bb872026c4fc7a3d7e348fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b1c99ba7960d6fd4f288179a158c40c0ac013319bb872026c4fc7a3d7e348fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:39 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b1c99ba7960d6fd4f288179a158c40c0ac013319bb872026c4fc7a3d7e348fe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:39 np0005558241 podman[408721]: 2025-12-13 09:23:39.518147796 +0000 UTC m=+0.157501848 container init 75934a98c2d89f859d4fdfb6df1c075b765977f4ac124b74bf30bebbf47c0694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 04:23:39 np0005558241 podman[408721]: 2025-12-13 09:23:39.527151172 +0000 UTC m=+0.166505184 container start 75934a98c2d89f859d4fdfb6df1c075b765977f4ac124b74bf30bebbf47c0694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:23:39 np0005558241 podman[408721]: 2025-12-13 09:23:39.53146452 +0000 UTC m=+0.170818522 container attach 75934a98c2d89f859d4fdfb6df1c075b765977f4ac124b74bf30bebbf47c0694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:23:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:23:39 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:23:39 np0005558241 nova_compute[248510]: 2025-12-13 09:23:39.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:23:39 np0005558241 nova_compute[248510]: 2025-12-13 09:23:39.900 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:39 np0005558241 nova_compute[248510]: 2025-12-13 09:23:39.959 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:23:39 np0005558241 nova_compute[248510]: 2025-12-13 09:23:39.960 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:23:39 np0005558241 nova_compute[248510]: 2025-12-13 09:23:39.960 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:23:39 np0005558241 nova_compute[248510]: 2025-12-13 09:23:39.960 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:23:39 np0005558241 nova_compute[248510]: 2025-12-13 09:23:39.961 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:23:40 np0005558241 nice_mccarthy[408738]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:23:40 np0005558241 nice_mccarthy[408738]: --> All data devices are unavailable
Dec 13 04:23:40 np0005558241 systemd[1]: libpod-75934a98c2d89f859d4fdfb6df1c075b765977f4ac124b74bf30bebbf47c0694.scope: Deactivated successfully.
Dec 13 04:23:40 np0005558241 podman[408721]: 2025-12-13 09:23:40.143725908 +0000 UTC m=+0.783079920 container died 75934a98c2d89f859d4fdfb6df1c075b765977f4ac124b74bf30bebbf47c0694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:23:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:23:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:23:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:23:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:23:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:23:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:23:40 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1b1c99ba7960d6fd4f288179a158c40c0ac013319bb872026c4fc7a3d7e348fe-merged.mount: Deactivated successfully.
Dec 13 04:23:40 np0005558241 podman[408721]: 2025-12-13 09:23:40.242030842 +0000 UTC m=+0.881384844 container remove 75934a98c2d89f859d4fdfb6df1c075b765977f4ac124b74bf30bebbf47c0694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_mccarthy, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:23:40 np0005558241 systemd[1]: libpod-conmon-75934a98c2d89f859d4fdfb6df1c075b765977f4ac124b74bf30bebbf47c0694.scope: Deactivated successfully.
Dec 13 04:23:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:23:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3377932943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:23:40 np0005558241 nova_compute[248510]: 2025-12-13 09:23:40.638 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.677s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:23:40 np0005558241 podman[408855]: 2025-12-13 09:23:40.793558747 +0000 UTC m=+0.057800700 container create ff4adea350113a8fe09d7986b8baeeab7b0852184370862486eae1b481886ce4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:23:40 np0005558241 systemd[1]: Started libpod-conmon-ff4adea350113a8fe09d7986b8baeeab7b0852184370862486eae1b481886ce4.scope.
Dec 13 04:23:40 np0005558241 podman[408855]: 2025-12-13 09:23:40.768194491 +0000 UTC m=+0.032436464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:23:40 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:23:40 np0005558241 podman[408855]: 2025-12-13 09:23:40.8938207 +0000 UTC m=+0.158062673 container init ff4adea350113a8fe09d7986b8baeeab7b0852184370862486eae1b481886ce4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hamilton, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:23:40 np0005558241 nova_compute[248510]: 2025-12-13 09:23:40.897 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:23:40 np0005558241 nova_compute[248510]: 2025-12-13 09:23:40.898 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3444MB free_disk=59.98738182429224GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:23:40 np0005558241 nova_compute[248510]: 2025-12-13 09:23:40.899 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:23:40 np0005558241 nova_compute[248510]: 2025-12-13 09:23:40.899 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:23:40 np0005558241 podman[408855]: 2025-12-13 09:23:40.902743634 +0000 UTC m=+0.166985587 container start ff4adea350113a8fe09d7986b8baeeab7b0852184370862486eae1b481886ce4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hamilton, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 04:23:40 np0005558241 podman[408855]: 2025-12-13 09:23:40.906704603 +0000 UTC m=+0.170946556 container attach ff4adea350113a8fe09d7986b8baeeab7b0852184370862486eae1b481886ce4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hamilton, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:23:40 np0005558241 quizzical_hamilton[408871]: 167 167
Dec 13 04:23:40 np0005558241 systemd[1]: libpod-ff4adea350113a8fe09d7986b8baeeab7b0852184370862486eae1b481886ce4.scope: Deactivated successfully.
Dec 13 04:23:40 np0005558241 podman[408855]: 2025-12-13 09:23:40.909481303 +0000 UTC m=+0.173723256 container died ff4adea350113a8fe09d7986b8baeeab7b0852184370862486eae1b481886ce4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hamilton, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:23:40 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d45d5d7b0cfda718cbb2a3d12f28251ce67c4d59ead35f12647cc755ef082fbc-merged.mount: Deactivated successfully.
Dec 13 04:23:40 np0005558241 podman[408855]: 2025-12-13 09:23:40.955803774 +0000 UTC m=+0.220045727 container remove ff4adea350113a8fe09d7986b8baeeab7b0852184370862486eae1b481886ce4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hamilton, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:23:40 np0005558241 systemd[1]: libpod-conmon-ff4adea350113a8fe09d7986b8baeeab7b0852184370862486eae1b481886ce4.scope: Deactivated successfully.
Dec 13 04:23:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3677: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:23:41 np0005558241 podman[408896]: 2025-12-13 09:23:41.153165641 +0000 UTC m=+0.051160023 container create e4d36d38356dd8e9f569e9082f0832a7de28b02738018552408670c2de9e95e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_cartwright, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:23:41 np0005558241 systemd[1]: Started libpod-conmon-e4d36d38356dd8e9f569e9082f0832a7de28b02738018552408670c2de9e95e1.scope.
Dec 13 04:23:41 np0005558241 nova_compute[248510]: 2025-12-13 09:23:41.203 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:23:41 np0005558241 nova_compute[248510]: 2025-12-13 09:23:41.204 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:23:41 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:23:41 np0005558241 podman[408896]: 2025-12-13 09:23:41.133369745 +0000 UTC m=+0.031364157 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:23:41 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d32c3b279e82e4400e4388a7eb05493171966370ed1df77eef17f1cb18648ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:41 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d32c3b279e82e4400e4388a7eb05493171966370ed1df77eef17f1cb18648ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:41 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d32c3b279e82e4400e4388a7eb05493171966370ed1df77eef17f1cb18648ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:41 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d32c3b279e82e4400e4388a7eb05493171966370ed1df77eef17f1cb18648ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:41 np0005558241 podman[408896]: 2025-12-13 09:23:41.245179578 +0000 UTC m=+0.143173970 container init e4d36d38356dd8e9f569e9082f0832a7de28b02738018552408670c2de9e95e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_cartwright, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:23:41 np0005558241 podman[408896]: 2025-12-13 09:23:41.2564359 +0000 UTC m=+0.154430282 container start e4d36d38356dd8e9f569e9082f0832a7de28b02738018552408670c2de9e95e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_cartwright, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 04:23:41 np0005558241 podman[408896]: 2025-12-13 09:23:41.260151983 +0000 UTC m=+0.158146365 container attach e4d36d38356dd8e9f569e9082f0832a7de28b02738018552408670c2de9e95e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_cartwright, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:23:41 np0005558241 nova_compute[248510]: 2025-12-13 09:23:41.297 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]: {
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:    "0": [
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:        {
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "devices": [
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "/dev/loop3"
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            ],
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "lv_name": "ceph_lv0",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "lv_size": "21470642176",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "name": "ceph_lv0",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "tags": {
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.cluster_name": "ceph",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.crush_device_class": "",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.encrypted": "0",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.objectstore": "bluestore",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.osd_id": "0",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.type": "block",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.vdo": "0",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.with_tpm": "0"
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            },
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "type": "block",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "vg_name": "ceph_vg0"
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:        }
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:    ],
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:    "1": [
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:        {
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "devices": [
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "/dev/loop4"
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            ],
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "lv_name": "ceph_lv1",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "lv_size": "21470642176",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "name": "ceph_lv1",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "tags": {
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.cluster_name": "ceph",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.crush_device_class": "",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.encrypted": "0",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.objectstore": "bluestore",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.osd_id": "1",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.type": "block",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.vdo": "0",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.with_tpm": "0"
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            },
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "type": "block",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "vg_name": "ceph_vg1"
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:        }
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:    ],
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:    "2": [
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:        {
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "devices": [
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "/dev/loop5"
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            ],
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "lv_name": "ceph_lv2",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "lv_size": "21470642176",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "name": "ceph_lv2",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "tags": {
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.cluster_name": "ceph",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.crush_device_class": "",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.encrypted": "0",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.objectstore": "bluestore",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.osd_id": "2",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.type": "block",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.vdo": "0",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:                "ceph.with_tpm": "0"
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            },
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "type": "block",
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:            "vg_name": "ceph_vg2"
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:        }
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]:    ]
Dec 13 04:23:41 np0005558241 sad_cartwright[408913]: }
Dec 13 04:23:41 np0005558241 systemd[1]: libpod-e4d36d38356dd8e9f569e9082f0832a7de28b02738018552408670c2de9e95e1.scope: Deactivated successfully.
Dec 13 04:23:41 np0005558241 podman[408942]: 2025-12-13 09:23:41.670565581 +0000 UTC m=+0.029663955 container died e4d36d38356dd8e9f569e9082f0832a7de28b02738018552408670c2de9e95e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_cartwright, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:23:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1d32c3b279e82e4400e4388a7eb05493171966370ed1df77eef17f1cb18648ff-merged.mount: Deactivated successfully.
Dec 13 04:23:41 np0005558241 podman[408942]: 2025-12-13 09:23:41.734279048 +0000 UTC m=+0.093377372 container remove e4d36d38356dd8e9f569e9082f0832a7de28b02738018552408670c2de9e95e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 04:23:41 np0005558241 systemd[1]: libpod-conmon-e4d36d38356dd8e9f569e9082f0832a7de28b02738018552408670c2de9e95e1.scope: Deactivated successfully.
Dec 13 04:23:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:23:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/480714251' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:23:41 np0005558241 nova_compute[248510]: 2025-12-13 09:23:41.928 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:23:41 np0005558241 nova_compute[248510]: 2025-12-13 09:23:41.935 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:23:41 np0005558241 nova_compute[248510]: 2025-12-13 09:23:41.961 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:23:41 np0005558241 nova_compute[248510]: 2025-12-13 09:23:41.994 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:23:41 np0005558241 nova_compute[248510]: 2025-12-13 09:23:41.994 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:23:42 np0005558241 podman[409021]: 2025-12-13 09:23:42.253701758 +0000 UTC m=+0.053570884 container create 73f9fde3ce7d4b2324a2a7073a47eba3f913dc9fbb8d03c3c1492830b6e87841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:23:42 np0005558241 systemd[1]: Started libpod-conmon-73f9fde3ce7d4b2324a2a7073a47eba3f913dc9fbb8d03c3c1492830b6e87841.scope.
Dec 13 04:23:42 np0005558241 podman[409021]: 2025-12-13 09:23:42.22585424 +0000 UTC m=+0.025723356 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:23:42 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:23:42 np0005558241 podman[409021]: 2025-12-13 09:23:42.36388218 +0000 UTC m=+0.163751326 container init 73f9fde3ce7d4b2324a2a7073a47eba3f913dc9fbb8d03c3c1492830b6e87841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pare, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:23:42 np0005558241 podman[409021]: 2025-12-13 09:23:42.373177893 +0000 UTC m=+0.173047039 container start 73f9fde3ce7d4b2324a2a7073a47eba3f913dc9fbb8d03c3c1492830b6e87841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 04:23:42 np0005558241 podman[409021]: 2025-12-13 09:23:42.377396749 +0000 UTC m=+0.177265885 container attach 73f9fde3ce7d4b2324a2a7073a47eba3f913dc9fbb8d03c3c1492830b6e87841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:23:42 np0005558241 dreamy_pare[409038]: 167 167
Dec 13 04:23:42 np0005558241 systemd[1]: libpod-73f9fde3ce7d4b2324a2a7073a47eba3f913dc9fbb8d03c3c1492830b6e87841.scope: Deactivated successfully.
Dec 13 04:23:42 np0005558241 podman[409021]: 2025-12-13 09:23:42.380513677 +0000 UTC m=+0.180382783 container died 73f9fde3ce7d4b2324a2a7073a47eba3f913dc9fbb8d03c3c1492830b6e87841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 04:23:42 np0005558241 systemd[1]: var-lib-containers-storage-overlay-22666dd1c6be481a20543995b1e8b1e2d5533e10914a1cc52e74a11fa645ab6a-merged.mount: Deactivated successfully.
Dec 13 04:23:42 np0005558241 podman[409021]: 2025-12-13 09:23:42.427769891 +0000 UTC m=+0.227638987 container remove 73f9fde3ce7d4b2324a2a7073a47eba3f913dc9fbb8d03c3c1492830b6e87841 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_pare, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 04:23:42 np0005558241 systemd[1]: libpod-conmon-73f9fde3ce7d4b2324a2a7073a47eba3f913dc9fbb8d03c3c1492830b6e87841.scope: Deactivated successfully.
Dec 13 04:23:42 np0005558241 podman[409063]: 2025-12-13 09:23:42.61159792 +0000 UTC m=+0.056042636 container create 98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:23:42 np0005558241 systemd[1]: Started libpod-conmon-98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762.scope.
Dec 13 04:23:42 np0005558241 podman[409063]: 2025-12-13 09:23:42.586585983 +0000 UTC m=+0.031030689 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:23:42 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:23:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0592f82bf25a3f8e2d3c5ab4152fb3782103c74e4dca33a2418004e36d6d0540/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0592f82bf25a3f8e2d3c5ab4152fb3782103c74e4dca33a2418004e36d6d0540/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0592f82bf25a3f8e2d3c5ab4152fb3782103c74e4dca33a2418004e36d6d0540/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:42 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0592f82bf25a3f8e2d3c5ab4152fb3782103c74e4dca33a2418004e36d6d0540/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:42 np0005558241 podman[409063]: 2025-12-13 09:23:42.72449743 +0000 UTC m=+0.168942156 container init 98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_shamir, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:23:42 np0005558241 podman[409063]: 2025-12-13 09:23:42.733219809 +0000 UTC m=+0.177664515 container start 98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_shamir, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:23:42 np0005558241 podman[409063]: 2025-12-13 09:23:42.736834799 +0000 UTC m=+0.181279635 container attach 98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_shamir, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 04:23:42 np0005558241 nova_compute[248510]: 2025-12-13 09:23:42.994 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:23:42 np0005558241 nova_compute[248510]: 2025-12-13 09:23:42.996 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:23:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3678: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:23:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:43 np0005558241 lvm[409155]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:23:43 np0005558241 lvm[409155]: VG ceph_vg0 finished
Dec 13 04:23:43 np0005558241 lvm[409156]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:23:43 np0005558241 lvm[409156]: VG ceph_vg1 finished
Dec 13 04:23:43 np0005558241 lvm[409158]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:23:43 np0005558241 lvm[409158]: VG ceph_vg2 finished
Dec 13 04:23:43 np0005558241 lvm[409160]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:23:43 np0005558241 lvm[409160]: VG ceph_vg2 finished
Dec 13 04:23:43 np0005558241 charming_shamir[409077]: {}
Dec 13 04:23:43 np0005558241 systemd[1]: libpod-98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762.scope: Deactivated successfully.
Dec 13 04:23:43 np0005558241 podman[409063]: 2025-12-13 09:23:43.660234915 +0000 UTC m=+1.104679611 container died 98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_shamir, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:23:43 np0005558241 systemd[1]: libpod-98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762.scope: Consumed 1.498s CPU time.
Dec 13 04:23:43 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0592f82bf25a3f8e2d3c5ab4152fb3782103c74e4dca33a2418004e36d6d0540-merged.mount: Deactivated successfully.
Dec 13 04:23:43 np0005558241 podman[409063]: 2025-12-13 09:23:43.725912502 +0000 UTC m=+1.170357198 container remove 98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:23:43 np0005558241 systemd[1]: libpod-conmon-98be402251303e8d567ed6dfc06d66b384acb600b5148bb72693418a6b0ea762.scope: Deactivated successfully.
Dec 13 04:23:43 np0005558241 nova_compute[248510]: 2025-12-13 09:23:43.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:23:43 np0005558241 nova_compute[248510]: 2025-12-13 09:23:43.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:23:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:23:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:23:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:23:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:23:44 np0005558241 nova_compute[248510]: 2025-12-13 09:23:44.392 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:44.619 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:23:44 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:44.620 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:23:44 np0005558241 nova_compute[248510]: 2025-12-13 09:23:44.621 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:44 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:23:44 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:23:44 np0005558241 nova_compute[248510]: 2025-12-13 09:23:44.902 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3679: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:23:46 np0005558241 podman[409202]: 2025-12-13 09:23:46.991121331 +0000 UTC m=+0.067347680 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 13 04:23:46 np0005558241 podman[409201]: 2025-12-13 09:23:46.991151631 +0000 UTC m=+0.076371455 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec 13 04:23:47 np0005558241 podman[409200]: 2025-12-13 09:23:47.022298232 +0000 UTC m=+0.110765817 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 04:23:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3680: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:23:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:48 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:48.624 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:23:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3681: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:23:49 np0005558241 nova_compute[248510]: 2025-12-13 09:23:49.396 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:49 np0005558241 nova_compute[248510]: 2025-12-13 09:23:49.954 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3682: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:23:51 np0005558241 nova_compute[248510]: 2025-12-13 09:23:51.390 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:23:51 np0005558241 nova_compute[248510]: 2025-12-13 09:23:51.391 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:23:51 np0005558241 nova_compute[248510]: 2025-12-13 09:23:51.412 248514 DEBUG nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:23:51 np0005558241 nova_compute[248510]: 2025-12-13 09:23:51.514 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:23:51 np0005558241 nova_compute[248510]: 2025-12-13 09:23:51.514 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:23:51 np0005558241 nova_compute[248510]: 2025-12-13 09:23:51.525 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:23:51 np0005558241 nova_compute[248510]: 2025-12-13 09:23:51.526 248514 INFO nova.compute.claims [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:23:51 np0005558241 nova_compute[248510]: 2025-12-13 09:23:51.789 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:23:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:23:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1720196126' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.368 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.378 248514 DEBUG nova.compute.provider_tree [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.400 248514 DEBUG nova.scheduler.client.report [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.425 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.426 248514 DEBUG nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.478 248514 DEBUG nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.478 248514 DEBUG nova.network.neutron [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.512 248514 INFO nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.529 248514 DEBUG nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.619 248514 DEBUG nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.621 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.622 248514 INFO nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Creating image(s)#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.653 248514 DEBUG nova.storage.rbd_utils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] rbd image df092140-61e7-46dc-a59e-317f6b309e77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.685 248514 DEBUG nova.storage.rbd_utils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] rbd image df092140-61e7-46dc-a59e-317f6b309e77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.713 248514 DEBUG nova.storage.rbd_utils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] rbd image df092140-61e7-46dc-a59e-317f6b309e77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.717 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.807 248514 DEBUG nova.policy [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '895926e785ed4e44b6b55b32f5fa263c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca7e519f414149a7bdf699bbd9b4a3e9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.814 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.816 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.817 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.818 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.855 248514 DEBUG nova.storage.rbd_utils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] rbd image df092140-61e7-46dc-a59e-317f6b309e77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:23:52 np0005558241 nova_compute[248510]: 2025-12-13 09:23:52.859 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 df092140-61e7-46dc-a59e-317f6b309e77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:23:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3683: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:23:53 np0005558241 nova_compute[248510]: 2025-12-13 09:23:53.259 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 df092140-61e7-46dc-a59e-317f6b309e77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:23:53 np0005558241 nova_compute[248510]: 2025-12-13 09:23:53.329 248514 DEBUG nova.storage.rbd_utils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] resizing rbd image df092140-61e7-46dc-a59e-317f6b309e77_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:23:53 np0005558241 nova_compute[248510]: 2025-12-13 09:23:53.415 248514 DEBUG nova.objects.instance [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'migration_context' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:23:53 np0005558241 nova_compute[248510]: 2025-12-13 09:23:53.441 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:23:53 np0005558241 nova_compute[248510]: 2025-12-13 09:23:53.442 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Ensure instance console log exists: /var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:23:53 np0005558241 nova_compute[248510]: 2025-12-13 09:23:53.443 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:23:53 np0005558241 nova_compute[248510]: 2025-12-13 09:23:53.444 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:23:53 np0005558241 nova_compute[248510]: 2025-12-13 09:23:53.444 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:23:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:53 np0005558241 nova_compute[248510]: 2025-12-13 09:23:53.890 248514 DEBUG nova.network.neutron [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Successfully created port: 1d60d1ee-c619-439a-a2d3-0ab5e5872411 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec 13 04:23:54 np0005558241 nova_compute[248510]: 2025-12-13 09:23:54.401 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:54 np0005558241 nova_compute[248510]: 2025-12-13 09:23:54.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:23:54 np0005558241 nova_compute[248510]: 2025-12-13 09:23:54.957 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3684: 321 pgs: 321 active+clean; 73 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 24 op/s
Dec 13 04:23:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:55.459 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:23:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:55.460 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:23:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:23:55.460 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:23:56 np0005558241 nova_compute[248510]: 2025-12-13 09:23:56.779 248514 DEBUG nova.network.neutron [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Successfully updated port: 1d60d1ee-c619-439a-a2d3-0ab5e5872411 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec 13 04:23:56 np0005558241 nova_compute[248510]: 2025-12-13 09:23:56.811 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:23:56 np0005558241 nova_compute[248510]: 2025-12-13 09:23:56.811 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquired lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:23:56 np0005558241 nova_compute[248510]: 2025-12-13 09:23:56.812 248514 DEBUG nova.network.neutron [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:23:56 np0005558241 nova_compute[248510]: 2025-12-13 09:23:56.926 248514 DEBUG nova.compute.manager [req-64ec85b4-f56c-4eea-8c93-683f60f91eff req-7b62a839-7f9d-4cab-9bff-d4f58995a1c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-changed-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:23:56 np0005558241 nova_compute[248510]: 2025-12-13 09:23:56.927 248514 DEBUG nova.compute.manager [req-64ec85b4-f56c-4eea-8c93-683f60f91eff req-7b62a839-7f9d-4cab-9bff-d4f58995a1c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Refreshing instance network info cache due to event network-changed-1d60d1ee-c619-439a-a2d3-0ab5e5872411. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec 13 04:23:56 np0005558241 nova_compute[248510]: 2025-12-13 09:23:56.927 248514 DEBUG oslo_concurrency.lockutils [req-64ec85b4-f56c-4eea-8c93-683f60f91eff req-7b62a839-7f9d-4cab-9bff-d4f58995a1c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:23:56 np0005558241 nova_compute[248510]: 2025-12-13 09:23:56.992 248514 DEBUG nova.network.neutron [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec 13 04:23:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3685: 321 pgs: 321 active+clean; 73 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 24 op/s
Dec 13 04:23:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.831 248514 DEBUG nova.network.neutron [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Updating instance_info_cache with network_info: [{"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.865 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Releasing lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.865 248514 DEBUG nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Instance network_info: |[{"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.867 248514 DEBUG oslo_concurrency.lockutils [req-64ec85b4-f56c-4eea-8c93-683f60f91eff req-7b62a839-7f9d-4cab-9bff-d4f58995a1c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquired lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.867 248514 DEBUG nova.network.neutron [req-64ec85b4-f56c-4eea-8c93-683f60f91eff req-7b62a839-7f9d-4cab-9bff-d4f58995a1c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Refreshing network info cache for port 1d60d1ee-c619-439a-a2d3-0ab5e5872411 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.872 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Start _get_guest_xml network_info=[{"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.880 248514 WARNING nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.885 248514 DEBUG nova.virt.libvirt.host [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.886 248514 DEBUG nova.virt.libvirt.host [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.894 248514 DEBUG nova.virt.libvirt.host [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.894 248514 DEBUG nova.virt.libvirt.host [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.895 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.895 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.896 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.896 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.896 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.896 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.897 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.897 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.897 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.897 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.898 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.898 248514 DEBUG nova.virt.hardware [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:23:58 np0005558241 nova_compute[248510]: 2025-12-13 09:23:58.901 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:23:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3686: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:23:59 np0005558241 nova_compute[248510]: 2025-12-13 09:23:59.457 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:23:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:23:59 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/9542623' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:23:59 np0005558241 nova_compute[248510]: 2025-12-13 09:23:59.523 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:23:59 np0005558241 nova_compute[248510]: 2025-12-13 09:23:59.549 248514 DEBUG nova.storage.rbd_utils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] rbd image df092140-61e7-46dc-a59e-317f6b309e77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:23:59 np0005558241 nova_compute[248510]: 2025-12-13 09:23:59.556 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:23:59 np0005558241 nova_compute[248510]: 2025-12-13 09:23:59.959 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:24:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1563599860' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.135 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.137 248514 DEBUG nova.virt.libvirt.vif [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:23:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-175634348',display_name='tempest-TestServerAdvancedOps-server-175634348',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-175634348',id=153,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7e519f414149a7bdf699bbd9b4a3e9',ramdisk_id='',reservation_id='r-e7ub8c4u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1044369445',owner_user_name='tempest-TestServerAdvancedOps-1044369445-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:23:52Z,user_data=None,user_id='895926e785ed4e44b6b55b32f5fa263c',uuid=df092140-61e7-46dc-a59e-317f6b309e77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.137 248514 DEBUG nova.network.os_vif_util [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converting VIF {"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.138 248514 DEBUG nova.network.os_vif_util [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.139 248514 DEBUG nova.objects.instance [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.193 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:24:00 np0005558241 nova_compute[248510]:  <uuid>df092140-61e7-46dc-a59e-317f6b309e77</uuid>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:  <name>instance-00000099</name>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <nova:name>tempest-TestServerAdvancedOps-server-175634348</nova:name>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:23:58</nova:creationTime>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:24:00 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:        <nova:user uuid="895926e785ed4e44b6b55b32f5fa263c">tempest-TestServerAdvancedOps-1044369445-project-member</nova:user>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:        <nova:project uuid="ca7e519f414149a7bdf699bbd9b4a3e9">tempest-TestServerAdvancedOps-1044369445</nova:project>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <nova:ports>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:        <nova:port uuid="1d60d1ee-c619-439a-a2d3-0ab5e5872411">
Dec 13 04:24:00 np0005558241 nova_compute[248510]:          <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:        </nova:port>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      </nova:ports>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <entry name="serial">df092140-61e7-46dc-a59e-317f6b309e77</entry>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <entry name="uuid">df092140-61e7-46dc-a59e-317f6b309e77</entry>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/df092140-61e7-46dc-a59e-317f6b309e77_disk">
Dec 13 04:24:00 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:24:00 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/df092140-61e7-46dc-a59e-317f6b309e77_disk.config">
Dec 13 04:24:00 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:24:00 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <interface type="ethernet">
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <mac address="fa:16:3e:77:21:8c"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <mtu size="1442"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <target dev="tap1d60d1ee-c6"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    </interface>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77/console.log" append="off"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:24:00 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:24:00 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:24:00 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:24:00 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.194 248514 DEBUG nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Preparing to wait for external event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.195 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.195 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.195 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.196 248514 DEBUG nova.virt.libvirt.vif [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T09:23:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-175634348',display_name='tempest-TestServerAdvancedOps-server-175634348',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-175634348',id=153,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7e519f414149a7bdf699bbd9b4a3e9',ramdisk_id='',reservation_id='r-e7ub8c4u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1044369445',owner_user_name='tempest-TestServerAdvancedOps-1044369445-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T09:23:52Z,user_data=None,user_id='895926e785ed4e44b6b55b32f5fa263c',uuid=df092140-61e7-46dc-a59e-317f6b309e77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.196 248514 DEBUG nova.network.os_vif_util [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converting VIF {"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.197 248514 DEBUG nova.network.os_vif_util [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.197 248514 DEBUG os_vif [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.198 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.198 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.199 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.203 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.204 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d60d1ee-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.205 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d60d1ee-c6, col_values=(('external_ids', {'iface-id': '1d60d1ee-c619-439a-a2d3-0ab5e5872411', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:21:8c', 'vm-uuid': 'df092140-61e7-46dc-a59e-317f6b309e77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.206 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:00 np0005558241 NetworkManager[50376]: <info>  [1765617840.2085] manager: (tap1d60d1ee-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/681)
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.212 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.219 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.221 248514 INFO os_vif [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6')
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.283 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.283 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.284 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] No VIF found with MAC fa:16:3e:77:21:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.284 248514 INFO nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Using config drive
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.313 248514 DEBUG nova.storage.rbd_utils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] rbd image df092140-61e7-46dc-a59e-317f6b309e77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.926 248514 DEBUG nova.network.neutron [req-64ec85b4-f56c-4eea-8c93-683f60f91eff req-7b62a839-7f9d-4cab-9bff-d4f58995a1c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Updated VIF entry in instance network info cache for port 1d60d1ee-c619-439a-a2d3-0ab5e5872411. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.927 248514 DEBUG nova.network.neutron [req-64ec85b4-f56c-4eea-8c93-683f60f91eff req-7b62a839-7f9d-4cab-9bff-d4f58995a1c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Updating instance_info_cache with network_info: [{"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:24:00 np0005558241 nova_compute[248510]: 2025-12-13 09:24:00.995 248514 DEBUG oslo_concurrency.lockutils [req-64ec85b4-f56c-4eea-8c93-683f60f91eff req-7b62a839-7f9d-4cab-9bff-d4f58995a1c5 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Releasing lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:24:01 np0005558241 nova_compute[248510]: 2025-12-13 09:24:01.064 248514 INFO nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Creating config drive at /var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77/disk.config
Dec 13 04:24:01 np0005558241 nova_compute[248510]: 2025-12-13 09:24:01.069 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdjchlz_z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:24:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3687: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 13 04:24:01 np0005558241 nova_compute[248510]: 2025-12-13 09:24:01.235 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdjchlz_z" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:24:01 np0005558241 nova_compute[248510]: 2025-12-13 09:24:01.265 248514 DEBUG nova.storage.rbd_utils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] rbd image df092140-61e7-46dc-a59e-317f6b309e77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:24:01 np0005558241 nova_compute[248510]: 2025-12-13 09:24:01.270 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77/disk.config df092140-61e7-46dc-a59e-317f6b309e77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:24:01 np0005558241 nova_compute[248510]: 2025-12-13 09:24:01.440 248514 DEBUG oslo_concurrency.processutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77/disk.config df092140-61e7-46dc-a59e-317f6b309e77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:24:01 np0005558241 nova_compute[248510]: 2025-12-13 09:24:01.441 248514 INFO nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Deleting local config drive /var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77/disk.config because it was imported into RBD.
Dec 13 04:24:01 np0005558241 kernel: tap1d60d1ee-c6: entered promiscuous mode
Dec 13 04:24:01 np0005558241 NetworkManager[50376]: <info>  [1765617841.5106] manager: (tap1d60d1ee-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/682)
Dec 13 04:24:01 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:01Z|01642|binding|INFO|Claiming lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 for this chassis.
Dec 13 04:24:01 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:01Z|01643|binding|INFO|1d60d1ee-c619-439a-a2d3-0ab5e5872411: Claiming fa:16:3e:77:21:8c 10.100.0.2
Dec 13 04:24:01 np0005558241 nova_compute[248510]: 2025-12-13 09:24:01.510 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:01.521 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:21:8c 10.100.0.2'], port_security=['fa:16:3e:77:21:8c 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'df092140-61e7-46dc-a59e-317f6b309e77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db32bc67-75e0-4356-ad47-39387ebf5d89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7e519f414149a7bdf699bbd9b4a3e9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b021c13b-f68e-4f70-afcb-8e734f176d38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f99370b0-f9e5-4b19-b3ce-10f6a23d7122, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1d60d1ee-c619-439a-a2d3-0ab5e5872411) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:24:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:01.522 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1d60d1ee-c619-439a-a2d3-0ab5e5872411 in datapath db32bc67-75e0-4356-ad47-39387ebf5d89 bound to our chassis
Dec 13 04:24:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:01.523 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network db32bc67-75e0-4356-ad47-39387ebf5d89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 04:24:01 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:01.524 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[877ae67d-99ab-4bd3-8bec-fef9fc573feb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:01 np0005558241 systemd-machined[210538]: New machine qemu-184-instance-00000099.
Dec 13 04:24:01 np0005558241 systemd-udevd[409588]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:24:01 np0005558241 nova_compute[248510]: 2025-12-13 09:24:01.550 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:01 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:01Z|01644|binding|INFO|Setting lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 ovn-installed in OVS
Dec 13 04:24:01 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:01Z|01645|binding|INFO|Setting lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 up in Southbound
Dec 13 04:24:01 np0005558241 nova_compute[248510]: 2025-12-13 09:24:01.553 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:01 np0005558241 systemd[1]: Started Virtual Machine qemu-184-instance-00000099.
Dec 13 04:24:01 np0005558241 NetworkManager[50376]: <info>  [1765617841.5633] device (tap1d60d1ee-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:24:01 np0005558241 NetworkManager[50376]: <info>  [1765617841.5643] device (tap1d60d1ee-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.061 248514 DEBUG nova.compute.manager [req-87ca5fe7-4f97-4cdc-abcb-066387290052 req-b3f8e49d-f512-4563-b506-8e048dc89dca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.063 248514 DEBUG oslo_concurrency.lockutils [req-87ca5fe7-4f97-4cdc-abcb-066387290052 req-b3f8e49d-f512-4563-b506-8e048dc89dca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.063 248514 DEBUG oslo_concurrency.lockutils [req-87ca5fe7-4f97-4cdc-abcb-066387290052 req-b3f8e49d-f512-4563-b506-8e048dc89dca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.063 248514 DEBUG oslo_concurrency.lockutils [req-87ca5fe7-4f97-4cdc-abcb-066387290052 req-b3f8e49d-f512-4563-b506-8e048dc89dca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.063 248514 DEBUG nova.compute.manager [req-87ca5fe7-4f97-4cdc-abcb-066387290052 req-b3f8e49d-f512-4563-b506-8e048dc89dca 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Processing event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.723 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617842.7225373, df092140-61e7-46dc-a59e-317f6b309e77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.724 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Started (Lifecycle Event)
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.727 248514 DEBUG nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.731 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.735 248514 INFO nova.virt.libvirt.driver [-] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Instance spawned successfully.
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.735 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.750 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.755 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.770 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.770 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.771 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.771 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.772 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.772 248514 DEBUG nova.virt.libvirt.driver [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.779 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.780 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617842.7238724, df092140-61e7-46dc-a59e-317f6b309e77 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.780 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Paused (Lifecycle Event)
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.819 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.824 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617842.7296407, df092140-61e7-46dc-a59e-317f6b309e77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.824 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Resumed (Lifecycle Event)
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.854 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.861 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.865 248514 INFO nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Took 10.25 seconds to spawn the instance on the hypervisor.
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.866 248514 DEBUG nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.900 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:24:02 np0005558241 nova_compute[248510]: 2025-12-13 09:24:02.948 248514 INFO nova.compute.manager [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Took 11.47 seconds to build instance.
Dec 13 04:24:03 np0005558241 nova_compute[248510]: 2025-12-13 09:24:03.002 248514 DEBUG oslo_concurrency.lockutils [None req-84db07e7-1a08-49e6-b8cb-7370a936e145 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:24:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3688: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 13 04:24:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:04 np0005558241 nova_compute[248510]: 2025-12-13 09:24:04.173 248514 DEBUG nova.compute.manager [req-648adc3f-3f70-4dbe-b6f0-d83f74c0b2ef req-586fdc99-bb3f-42b7-b603-4b2dc0c90aac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:24:04 np0005558241 nova_compute[248510]: 2025-12-13 09:24:04.174 248514 DEBUG oslo_concurrency.lockutils [req-648adc3f-3f70-4dbe-b6f0-d83f74c0b2ef req-586fdc99-bb3f-42b7-b603-4b2dc0c90aac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:24:04 np0005558241 nova_compute[248510]: 2025-12-13 09:24:04.174 248514 DEBUG oslo_concurrency.lockutils [req-648adc3f-3f70-4dbe-b6f0-d83f74c0b2ef req-586fdc99-bb3f-42b7-b603-4b2dc0c90aac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:24:04 np0005558241 nova_compute[248510]: 2025-12-13 09:24:04.174 248514 DEBUG oslo_concurrency.lockutils [req-648adc3f-3f70-4dbe-b6f0-d83f74c0b2ef req-586fdc99-bb3f-42b7-b603-4b2dc0c90aac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:24:04 np0005558241 nova_compute[248510]: 2025-12-13 09:24:04.174 248514 DEBUG nova.compute.manager [req-648adc3f-3f70-4dbe-b6f0-d83f74c0b2ef req-586fdc99-bb3f-42b7-b603-4b2dc0c90aac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:24:04 np0005558241 nova_compute[248510]: 2025-12-13 09:24:04.174 248514 WARNING nova.compute.manager [req-648adc3f-3f70-4dbe-b6f0-d83f74c0b2ef req-586fdc99-bb3f-42b7-b603-4b2dc0c90aac 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state active and task_state None.
Dec 13 04:24:04 np0005558241 nova_compute[248510]: 2025-12-13 09:24:04.995 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3689: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 472 KiB/s rd, 1.8 MiB/s wr, 93 op/s
Dec 13 04:24:05 np0005558241 nova_compute[248510]: 2025-12-13 09:24:05.207 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:06 np0005558241 nova_compute[248510]: 2025-12-13 09:24:06.244 248514 DEBUG nova.objects.instance [None req-16ec372b-22e5-4dc7-805f-14a2d81a49e3 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:24:06 np0005558241 nova_compute[248510]: 2025-12-13 09:24:06.275 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617846.2748225, df092140-61e7-46dc-a59e-317f6b309e77 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:24:06 np0005558241 nova_compute[248510]: 2025-12-13 09:24:06.275 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Paused (Lifecycle Event)
Dec 13 04:24:06 np0005558241 nova_compute[248510]: 2025-12-13 09:24:06.478 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:24:06 np0005558241 nova_compute[248510]: 2025-12-13 09:24:06.484 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:24:06 np0005558241 nova_compute[248510]: 2025-12-13 09:24:06.585 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] During sync_power_state the instance has a pending task (suspending). Skip.
Dec 13 04:24:06 np0005558241 kernel: tap1d60d1ee-c6 (unregistering): left promiscuous mode
Dec 13 04:24:06 np0005558241 NetworkManager[50376]: <info>  [1765617846.9833] device (tap1d60d1ee-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:24:06 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:06Z|01646|binding|INFO|Releasing lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 from this chassis (sb_readonly=0)
Dec 13 04:24:06 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:06Z|01647|binding|INFO|Setting lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 down in Southbound
Dec 13 04:24:06 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:06Z|01648|binding|INFO|Removing iface tap1d60d1ee-c6 ovn-installed in OVS
Dec 13 04:24:06 np0005558241 nova_compute[248510]: 2025-12-13 09:24:06.992 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:07 np0005558241 nova_compute[248510]: 2025-12-13 09:24:07.013 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:07.046 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:21:8c 10.100.0.2'], port_security=['fa:16:3e:77:21:8c 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'df092140-61e7-46dc-a59e-317f6b309e77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db32bc67-75e0-4356-ad47-39387ebf5d89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7e519f414149a7bdf699bbd9b4a3e9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b021c13b-f68e-4f70-afcb-8e734f176d38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f99370b0-f9e5-4b19-b3ce-10f6a23d7122, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1d60d1ee-c619-439a-a2d3-0ab5e5872411) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:24:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:07.047 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1d60d1ee-c619-439a-a2d3-0ab5e5872411 in datapath db32bc67-75e0-4356-ad47-39387ebf5d89 unbound from our chassis
Dec 13 04:24:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:07.048 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network db32bc67-75e0-4356-ad47-39387ebf5d89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 13 04:24:07 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:07.050 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[93ad7dc5-ebf0-4446-bdf9-9322585453fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:07 np0005558241 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000099.scope: Deactivated successfully.
Dec 13 04:24:07 np0005558241 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000099.scope: Consumed 4.876s CPU time.
Dec 13 04:24:07 np0005558241 systemd-machined[210538]: Machine qemu-184-instance-00000099 terminated.
Dec 13 04:24:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3690: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 749 KiB/s wr, 68 op/s
Dec 13 04:24:07 np0005558241 nova_compute[248510]: 2025-12-13 09:24:07.169 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:07 np0005558241 nova_compute[248510]: 2025-12-13 09:24:07.174 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:07 np0005558241 nova_compute[248510]: 2025-12-13 09:24:07.187 248514 DEBUG nova.compute.manager [None req-16ec372b-22e5-4dc7-805f-14a2d81a49e3 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:24:07 np0005558241 nova_compute[248510]: 2025-12-13 09:24:07.275 248514 DEBUG nova.compute.manager [req-43b016a4-02de-4686-8d51-1bc008f1e630 req-c4835f33-0db5-4bf6-b4a1-5e15a6c0d0b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-unplugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:24:07 np0005558241 nova_compute[248510]: 2025-12-13 09:24:07.275 248514 DEBUG oslo_concurrency.lockutils [req-43b016a4-02de-4686-8d51-1bc008f1e630 req-c4835f33-0db5-4bf6-b4a1-5e15a6c0d0b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:24:07 np0005558241 nova_compute[248510]: 2025-12-13 09:24:07.275 248514 DEBUG oslo_concurrency.lockutils [req-43b016a4-02de-4686-8d51-1bc008f1e630 req-c4835f33-0db5-4bf6-b4a1-5e15a6c0d0b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:24:07 np0005558241 nova_compute[248510]: 2025-12-13 09:24:07.276 248514 DEBUG oslo_concurrency.lockutils [req-43b016a4-02de-4686-8d51-1bc008f1e630 req-c4835f33-0db5-4bf6-b4a1-5e15a6c0d0b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:24:07 np0005558241 nova_compute[248510]: 2025-12-13 09:24:07.276 248514 DEBUG nova.compute.manager [req-43b016a4-02de-4686-8d51-1bc008f1e630 req-c4835f33-0db5-4bf6-b4a1-5e15a6c0d0b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-unplugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:24:07 np0005558241 nova_compute[248510]: 2025-12-13 09:24:07.276 248514 WARNING nova.compute.manager [req-43b016a4-02de-4686-8d51-1bc008f1e630 req-c4835f33-0db5-4bf6-b4a1-5e15a6c0d0b0 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-unplugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state suspended and task_state None.#033[00m
Dec 13 04:24:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3691: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 749 KiB/s wr, 135 op/s
Dec 13 04:24:09 np0005558241 nova_compute[248510]: 2025-12-13 09:24:09.243 248514 INFO nova.compute.manager [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Resuming#033[00m
Dec 13 04:24:09 np0005558241 nova_compute[248510]: 2025-12-13 09:24:09.244 248514 DEBUG nova.objects.instance [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'flavor' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:24:09 np0005558241 nova_compute[248510]: 2025-12-13 09:24:09.287 248514 DEBUG oslo_concurrency.lockutils [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:24:09 np0005558241 nova_compute[248510]: 2025-12-13 09:24:09.287 248514 DEBUG oslo_concurrency.lockutils [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquired lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:24:09 np0005558241 nova_compute[248510]: 2025-12-13 09:24:09.288 248514 DEBUG nova.network.neutron [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:24:09 np0005558241 nova_compute[248510]: 2025-12-13 09:24:09.361 248514 DEBUG nova.compute.manager [req-bc6ba099-8d25-408e-8c9d-6737a00340d6 req-6bfec2ab-9227-4d38-a051-ecd5a640487f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:24:09 np0005558241 nova_compute[248510]: 2025-12-13 09:24:09.362 248514 DEBUG oslo_concurrency.lockutils [req-bc6ba099-8d25-408e-8c9d-6737a00340d6 req-6bfec2ab-9227-4d38-a051-ecd5a640487f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:24:09 np0005558241 nova_compute[248510]: 2025-12-13 09:24:09.363 248514 DEBUG oslo_concurrency.lockutils [req-bc6ba099-8d25-408e-8c9d-6737a00340d6 req-6bfec2ab-9227-4d38-a051-ecd5a640487f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:24:09 np0005558241 nova_compute[248510]: 2025-12-13 09:24:09.363 248514 DEBUG oslo_concurrency.lockutils [req-bc6ba099-8d25-408e-8c9d-6737a00340d6 req-6bfec2ab-9227-4d38-a051-ecd5a640487f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:24:09 np0005558241 nova_compute[248510]: 2025-12-13 09:24:09.363 248514 DEBUG nova.compute.manager [req-bc6ba099-8d25-408e-8c9d-6737a00340d6 req-6bfec2ab-9227-4d38-a051-ecd5a640487f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:24:09 np0005558241 nova_compute[248510]: 2025-12-13 09:24:09.364 248514 WARNING nova.compute.manager [req-bc6ba099-8d25-408e-8c9d-6737a00340d6 req-6bfec2ab-9227-4d38-a051-ecd5a640487f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state suspended and task_state resuming.#033[00m
Dec 13 04:24:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:24:09
Dec 13 04:24:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:24:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:24:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'vms', 'volumes', '.rgw.root']
Dec 13 04:24:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:24:09 np0005558241 nova_compute[248510]: 2025-12-13 09:24:09.998 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:24:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:24:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:24:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:24:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:24:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:24:10 np0005558241 nova_compute[248510]: 2025-12-13 09:24:10.209 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:24:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:24:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:24:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:24:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:24:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:24:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:24:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:24:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:24:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:24:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3692: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 133 op/s
Dec 13 04:24:11 np0005558241 nova_compute[248510]: 2025-12-13 09:24:11.554 248514 DEBUG nova.network.neutron [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Updating instance_info_cache with network_info: [{"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:24:11 np0005558241 nova_compute[248510]: 2025-12-13 09:24:11.575 248514 DEBUG oslo_concurrency.lockutils [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Releasing lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:24:11 np0005558241 nova_compute[248510]: 2025-12-13 09:24:11.585 248514 DEBUG nova.virt.libvirt.vif [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:23:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-175634348',display_name='tempest-TestServerAdvancedOps-server-175634348',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-175634348',id=153,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:24:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ca7e519f414149a7bdf699bbd9b4a3e9',ramdisk_id='',reservation_id='r-e7ub8c4u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1044369445',owner_user_name='tempest-TestServerAdvancedOps-1044369445-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:24:07Z,user_data=None,user_id='895926e785ed4e44b6b55b32f5fa263c',uuid=df092140-61e7-46dc-a59e-317f6b309e77,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:24:11 np0005558241 nova_compute[248510]: 2025-12-13 09:24:11.586 248514 DEBUG nova.network.os_vif_util [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converting VIF {"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:24:11 np0005558241 nova_compute[248510]: 2025-12-13 09:24:11.588 248514 DEBUG nova.network.os_vif_util [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:24:11 np0005558241 nova_compute[248510]: 2025-12-13 09:24:11.589 248514 DEBUG os_vif [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:24:11 np0005558241 nova_compute[248510]: 2025-12-13 09:24:11.590 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:11 np0005558241 nova_compute[248510]: 2025-12-13 09:24:11.591 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:24:11 np0005558241 nova_compute[248510]: 2025-12-13 09:24:11.592 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:24:11 np0005558241 nova_compute[248510]: 2025-12-13 09:24:11.597 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:11 np0005558241 nova_compute[248510]: 2025-12-13 09:24:11.598 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d60d1ee-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:24:11 np0005558241 nova_compute[248510]: 2025-12-13 09:24:11.598 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d60d1ee-c6, col_values=(('external_ids', {'iface-id': '1d60d1ee-c619-439a-a2d3-0ab5e5872411', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:21:8c', 'vm-uuid': 'df092140-61e7-46dc-a59e-317f6b309e77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:24:11 np0005558241 nova_compute[248510]: 2025-12-13 09:24:11.598 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:24:11 np0005558241 nova_compute[248510]: 2025-12-13 09:24:11.599 248514 INFO os_vif [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6')#033[00m
Dec 13 04:24:11 np0005558241 nova_compute[248510]: 2025-12-13 09:24:11.638 248514 DEBUG nova.objects.instance [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'numa_topology' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:24:11 np0005558241 kernel: tap1d60d1ee-c6: entered promiscuous mode
Dec 13 04:24:11 np0005558241 NetworkManager[50376]: <info>  [1765617851.7382] manager: (tap1d60d1ee-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/683)
Dec 13 04:24:11 np0005558241 nova_compute[248510]: 2025-12-13 09:24:11.795 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:11Z|01649|binding|INFO|Claiming lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 for this chassis.
Dec 13 04:24:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:11Z|01650|binding|INFO|1d60d1ee-c619-439a-a2d3-0ab5e5872411: Claiming fa:16:3e:77:21:8c 10.100.0.2
Dec 13 04:24:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:11.803 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:21:8c 10.100.0.2'], port_security=['fa:16:3e:77:21:8c 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'df092140-61e7-46dc-a59e-317f6b309e77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db32bc67-75e0-4356-ad47-39387ebf5d89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7e519f414149a7bdf699bbd9b4a3e9', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b021c13b-f68e-4f70-afcb-8e734f176d38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f99370b0-f9e5-4b19-b3ce-10f6a23d7122, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1d60d1ee-c619-439a-a2d3-0ab5e5872411) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:24:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:11.804 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1d60d1ee-c619-439a-a2d3-0ab5e5872411 in datapath db32bc67-75e0-4356-ad47-39387ebf5d89 bound to our chassis#033[00m
Dec 13 04:24:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:11.805 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network db32bc67-75e0-4356-ad47-39387ebf5d89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 13 04:24:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:11.806 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7327aa-53b1-4888-b09a-6eccf65ab3c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:24:11 np0005558241 nova_compute[248510]: 2025-12-13 09:24:11.811 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:11Z|01651|binding|INFO|Setting lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 ovn-installed in OVS
Dec 13 04:24:11 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:11Z|01652|binding|INFO|Setting lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 up in Southbound
Dec 13 04:24:11 np0005558241 systemd-udevd[409672]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:24:11 np0005558241 systemd-machined[210538]: New machine qemu-185-instance-00000099.
Dec 13 04:24:11 np0005558241 NetworkManager[50376]: <info>  [1765617851.8372] device (tap1d60d1ee-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:24:11 np0005558241 NetworkManager[50376]: <info>  [1765617851.8384] device (tap1d60d1ee-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:24:11 np0005558241 systemd[1]: Started Virtual Machine qemu-185-instance-00000099.
Dec 13 04:24:12 np0005558241 nova_compute[248510]: 2025-12-13 09:24:12.035 248514 DEBUG nova.compute.manager [req-e511b1a6-b444-4c78-b81f-03dd2c2f6076 req-372bcbff-34e8-4091-9a21-7404c830f785 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:24:12 np0005558241 nova_compute[248510]: 2025-12-13 09:24:12.036 248514 DEBUG oslo_concurrency.lockutils [req-e511b1a6-b444-4c78-b81f-03dd2c2f6076 req-372bcbff-34e8-4091-9a21-7404c830f785 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:24:12 np0005558241 nova_compute[248510]: 2025-12-13 09:24:12.036 248514 DEBUG oslo_concurrency.lockutils [req-e511b1a6-b444-4c78-b81f-03dd2c2f6076 req-372bcbff-34e8-4091-9a21-7404c830f785 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:24:12 np0005558241 nova_compute[248510]: 2025-12-13 09:24:12.036 248514 DEBUG oslo_concurrency.lockutils [req-e511b1a6-b444-4c78-b81f-03dd2c2f6076 req-372bcbff-34e8-4091-9a21-7404c830f785 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:24:12 np0005558241 nova_compute[248510]: 2025-12-13 09:24:12.036 248514 DEBUG nova.compute.manager [req-e511b1a6-b444-4c78-b81f-03dd2c2f6076 req-372bcbff-34e8-4091-9a21-7404c830f785 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:24:12 np0005558241 nova_compute[248510]: 2025-12-13 09:24:12.036 248514 WARNING nova.compute.manager [req-e511b1a6-b444-4c78-b81f-03dd2c2f6076 req-372bcbff-34e8-4091-9a21-7404c830f785 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state suspended and task_state resuming.#033[00m
Dec 13 04:24:13 np0005558241 nova_compute[248510]: 2025-12-13 09:24:13.051 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for df092140-61e7-46dc-a59e-317f6b309e77 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 04:24:13 np0005558241 nova_compute[248510]: 2025-12-13 09:24:13.052 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617853.050872, df092140-61e7-46dc-a59e-317f6b309e77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:24:13 np0005558241 nova_compute[248510]: 2025-12-13 09:24:13.052 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Started (Lifecycle Event)#033[00m
Dec 13 04:24:13 np0005558241 nova_compute[248510]: 2025-12-13 09:24:13.078 248514 DEBUG nova.compute.manager [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:24:13 np0005558241 nova_compute[248510]: 2025-12-13 09:24:13.078 248514 DEBUG nova.objects.instance [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:24:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3693: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 131 op/s
Dec 13 04:24:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:13 np0005558241 nova_compute[248510]: 2025-12-13 09:24:13.523 248514 INFO nova.virt.libvirt.driver [-] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Instance running successfully.#033[00m
Dec 13 04:24:13 np0005558241 virtqemud[248808]: argument unsupported: QEMU guest agent is not configured
Dec 13 04:24:13 np0005558241 nova_compute[248510]: 2025-12-13 09:24:13.527 248514 DEBUG nova.virt.libvirt.guest [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Dec 13 04:24:13 np0005558241 nova_compute[248510]: 2025-12-13 09:24:13.528 248514 DEBUG nova.compute.manager [None req-afc6432b-581d-470a-a5a6-b28e64cccb3f 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:24:13 np0005558241 nova_compute[248510]: 2025-12-13 09:24:13.540 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:24:13 np0005558241 nova_compute[248510]: 2025-12-13 09:24:13.543 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:24:13 np0005558241 nova_compute[248510]: 2025-12-13 09:24:13.808 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617853.0567622, df092140-61e7-46dc-a59e-317f6b309e77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:24:13 np0005558241 nova_compute[248510]: 2025-12-13 09:24:13.809 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:24:13 np0005558241 nova_compute[248510]: 2025-12-13 09:24:13.914 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:24:13 np0005558241 nova_compute[248510]: 2025-12-13 09:24:13.919 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:24:14 np0005558241 nova_compute[248510]: 2025-12-13 09:24:14.175 248514 DEBUG nova.compute.manager [req-6e1400e1-c8e2-4693-b25d-b51cb8f9a8db req-7fff9f1d-8b3d-48c5-9269-fe11345ca317 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:24:14 np0005558241 nova_compute[248510]: 2025-12-13 09:24:14.176 248514 DEBUG oslo_concurrency.lockutils [req-6e1400e1-c8e2-4693-b25d-b51cb8f9a8db req-7fff9f1d-8b3d-48c5-9269-fe11345ca317 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:24:14 np0005558241 nova_compute[248510]: 2025-12-13 09:24:14.176 248514 DEBUG oslo_concurrency.lockutils [req-6e1400e1-c8e2-4693-b25d-b51cb8f9a8db req-7fff9f1d-8b3d-48c5-9269-fe11345ca317 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:24:14 np0005558241 nova_compute[248510]: 2025-12-13 09:24:14.177 248514 DEBUG oslo_concurrency.lockutils [req-6e1400e1-c8e2-4693-b25d-b51cb8f9a8db req-7fff9f1d-8b3d-48c5-9269-fe11345ca317 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:24:14 np0005558241 nova_compute[248510]: 2025-12-13 09:24:14.177 248514 DEBUG nova.compute.manager [req-6e1400e1-c8e2-4693-b25d-b51cb8f9a8db req-7fff9f1d-8b3d-48c5-9269-fe11345ca317 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:24:14 np0005558241 nova_compute[248510]: 2025-12-13 09:24:14.178 248514 WARNING nova.compute.manager [req-6e1400e1-c8e2-4693-b25d-b51cb8f9a8db req-7fff9f1d-8b3d-48c5-9269-fe11345ca317 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:24:15 np0005558241 nova_compute[248510]: 2025-12-13 09:24:15.000 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:24:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3855798119' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:24:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3855798119' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3694: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 132 op/s
Dec 13 04:24:15 np0005558241 nova_compute[248510]: 2025-12-13 09:24:15.253 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3695: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 0 B/s wr, 71 op/s
Dec 13 04:24:17 np0005558241 podman[409726]: 2025-12-13 09:24:17.985618203 +0000 UTC m=+0.066674242 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Dec 13 04:24:18 np0005558241 podman[409725]: 2025-12-13 09:24:18.018366604 +0000 UTC m=+0.103835384 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 04:24:18 np0005558241 podman[409724]: 2025-12-13 09:24:18.052548161 +0000 UTC m=+0.134030291 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 13 04:24:18 np0005558241 nova_compute[248510]: 2025-12-13 09:24:18.357 248514 DEBUG nova.objects.instance [None req-862950b7-a47d-42d3-9b87-195003c43e44 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:24:18 np0005558241 nova_compute[248510]: 2025-12-13 09:24:18.381 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617858.3806083, df092140-61e7-46dc-a59e-317f6b309e77 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:24:18 np0005558241 nova_compute[248510]: 2025-12-13 09:24:18.381 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Paused (Lifecycle Event)#033[00m
Dec 13 04:24:18 np0005558241 nova_compute[248510]: 2025-12-13 09:24:18.403 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:24:18 np0005558241 nova_compute[248510]: 2025-12-13 09:24:18.408 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:24:18 np0005558241 nova_compute[248510]: 2025-12-13 09:24:18.432 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:18.481454) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617858481565, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1209, "num_deletes": 250, "total_data_size": 1868254, "memory_usage": 1897128, "flush_reason": "Manual Compaction"}
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617858491714, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 1107138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72871, "largest_seqno": 74079, "table_properties": {"data_size": 1102717, "index_size": 1880, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11750, "raw_average_key_size": 20, "raw_value_size": 1093121, "raw_average_value_size": 1924, "num_data_blocks": 85, "num_entries": 568, "num_filter_entries": 568, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765617743, "oldest_key_time": 1765617743, "file_creation_time": 1765617858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 10308 microseconds, and 4189 cpu microseconds.
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:18.491777) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 1107138 bytes OK
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:18.491810) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:18.493699) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:18.493713) EVENT_LOG_v1 {"time_micros": 1765617858493709, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:18.493737) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 1862757, prev total WAL file size 1862757, number of live WAL files 2.
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:18.494524) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303132' seq:72057594037927935, type:22 .. '6D6772737461740033323633' seq:0, type:0; will stop at (end)
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(1081KB)], [173(11MB)]
Dec 13 04:24:18 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617858494591, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 12861913, "oldest_snapshot_seqno": -1}
Dec 13 04:24:19 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 9184 keys, 10116410 bytes, temperature: kUnknown
Dec 13 04:24:19 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617859006211, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 10116410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10060722, "index_size": 31639, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 241136, "raw_average_key_size": 26, "raw_value_size": 9902867, "raw_average_value_size": 1078, "num_data_blocks": 1215, "num_entries": 9184, "num_filter_entries": 9184, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765617858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:24:19 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:24:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3696: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 0 B/s wr, 71 op/s
Dec 13 04:24:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:19.006641) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10116410 bytes
Dec 13 04:24:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:19.244916) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 25.1 rd, 19.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 11.2 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(20.8) write-amplify(9.1) OK, records in: 9644, records dropped: 460 output_compression: NoCompression
Dec 13 04:24:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:19.244969) EVENT_LOG_v1 {"time_micros": 1765617859244948, "job": 108, "event": "compaction_finished", "compaction_time_micros": 511745, "compaction_time_cpu_micros": 27454, "output_level": 6, "num_output_files": 1, "total_output_size": 10116410, "num_input_records": 9644, "num_output_records": 9184, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:24:19 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:24:19 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617859245603, "job": 108, "event": "table_file_deletion", "file_number": 175}
Dec 13 04:24:19 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:24:19 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617859249638, "job": 108, "event": "table_file_deletion", "file_number": 173}
Dec 13 04:24:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:18.494445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:24:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:19.249809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:24:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:19.249826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:24:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:19.249830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:24:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:19.249835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:24:19 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:24:19.249840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:24:19 np0005558241 kernel: tap1d60d1ee-c6 (unregistering): left promiscuous mode
Dec 13 04:24:19 np0005558241 NetworkManager[50376]: <info>  [1765617859.2884] device (tap1d60d1ee-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:24:19 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:19Z|01653|binding|INFO|Releasing lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 from this chassis (sb_readonly=0)
Dec 13 04:24:19 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:19Z|01654|binding|INFO|Setting lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 down in Southbound
Dec 13 04:24:19 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:19Z|01655|binding|INFO|Removing iface tap1d60d1ee-c6 ovn-installed in OVS
Dec 13 04:24:19 np0005558241 nova_compute[248510]: 2025-12-13 09:24:19.297 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:19 np0005558241 nova_compute[248510]: 2025-12-13 09:24:19.300 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:19 np0005558241 nova_compute[248510]: 2025-12-13 09:24:19.328 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:19 np0005558241 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000099.scope: Deactivated successfully.
Dec 13 04:24:19 np0005558241 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000099.scope: Consumed 6.637s CPU time.
Dec 13 04:24:19 np0005558241 systemd-machined[210538]: Machine qemu-185-instance-00000099 terminated.
Dec 13 04:24:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:19.410 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:21:8c 10.100.0.2'], port_security=['fa:16:3e:77:21:8c 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'df092140-61e7-46dc-a59e-317f6b309e77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db32bc67-75e0-4356-ad47-39387ebf5d89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7e519f414149a7bdf699bbd9b4a3e9', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b021c13b-f68e-4f70-afcb-8e734f176d38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f99370b0-f9e5-4b19-b3ce-10f6a23d7122, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1d60d1ee-c619-439a-a2d3-0ab5e5872411) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:24:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:19.411 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1d60d1ee-c619-439a-a2d3-0ab5e5872411 in datapath db32bc67-75e0-4356-ad47-39387ebf5d89 unbound from our chassis#033[00m
Dec 13 04:24:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:19.412 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network db32bc67-75e0-4356-ad47-39387ebf5d89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 13 04:24:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:19.413 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[214367a9-6b07-4de3-a3ae-c38061125c74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:24:19 np0005558241 nova_compute[248510]: 2025-12-13 09:24:19.496 248514 DEBUG nova.compute.manager [None req-862950b7-a47d-42d3-9b87-195003c43e44 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:24:19 np0005558241 nova_compute[248510]: 2025-12-13 09:24:19.996 248514 DEBUG nova.compute.manager [req-4f391dd7-762e-4e2f-b13c-43363825afd1 req-e2c64a8c-a8e0-4b93-a905-136938af881f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-unplugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:24:19 np0005558241 nova_compute[248510]: 2025-12-13 09:24:19.997 248514 DEBUG oslo_concurrency.lockutils [req-4f391dd7-762e-4e2f-b13c-43363825afd1 req-e2c64a8c-a8e0-4b93-a905-136938af881f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:24:19 np0005558241 nova_compute[248510]: 2025-12-13 09:24:19.997 248514 DEBUG oslo_concurrency.lockutils [req-4f391dd7-762e-4e2f-b13c-43363825afd1 req-e2c64a8c-a8e0-4b93-a905-136938af881f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:24:19 np0005558241 nova_compute[248510]: 2025-12-13 09:24:19.997 248514 DEBUG oslo_concurrency.lockutils [req-4f391dd7-762e-4e2f-b13c-43363825afd1 req-e2c64a8c-a8e0-4b93-a905-136938af881f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:24:19 np0005558241 nova_compute[248510]: 2025-12-13 09:24:19.998 248514 DEBUG nova.compute.manager [req-4f391dd7-762e-4e2f-b13c-43363825afd1 req-e2c64a8c-a8e0-4b93-a905-136938af881f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-unplugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:24:19 np0005558241 nova_compute[248510]: 2025-12-13 09:24:19.998 248514 WARNING nova.compute.manager [req-4f391dd7-762e-4e2f-b13c-43363825afd1 req-e2c64a8c-a8e0-4b93-a905-136938af881f 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-unplugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state suspended and task_state None.#033[00m
Dec 13 04:24:20 np0005558241 nova_compute[248510]: 2025-12-13 09:24:20.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:20 np0005558241 nova_compute[248510]: 2025-12-13 09:24:20.255 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:20 np0005558241 nova_compute[248510]: 2025-12-13 09:24:20.631 248514 INFO nova.compute.manager [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Resuming#033[00m
Dec 13 04:24:20 np0005558241 nova_compute[248510]: 2025-12-13 09:24:20.632 248514 DEBUG nova.objects.instance [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'flavor' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:24:20 np0005558241 nova_compute[248510]: 2025-12-13 09:24:20.820 248514 DEBUG oslo_concurrency.lockutils [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 13 04:24:20 np0005558241 nova_compute[248510]: 2025-12-13 09:24:20.821 248514 DEBUG oslo_concurrency.lockutils [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquired lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 13 04:24:20 np0005558241 nova_compute[248510]: 2025-12-13 09:24:20.821 248514 DEBUG nova.network.neutron [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3697: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00036334424789835097 of space, bias 1.0, pg target 0.1090032743695053 quantized to 32 (current 32)
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696740079843619 of space, bias 1.0, pg target 0.20090220239530857 quantized to 32 (current 32)
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.719592936563563e-07 of space, bias 4.0, pg target 0.0006863511523876276 quantized to 16 (current 32)
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:24:22 np0005558241 nova_compute[248510]: 2025-12-13 09:24:22.083 248514 DEBUG nova.compute.manager [req-fa4d9b73-ca16-48e5-b28c-67b94f5a3d5f req-9fc832c0-4098-4a36-ad9b-e73ce13473d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:24:22 np0005558241 nova_compute[248510]: 2025-12-13 09:24:22.084 248514 DEBUG oslo_concurrency.lockutils [req-fa4d9b73-ca16-48e5-b28c-67b94f5a3d5f req-9fc832c0-4098-4a36-ad9b-e73ce13473d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:24:22 np0005558241 nova_compute[248510]: 2025-12-13 09:24:22.084 248514 DEBUG oslo_concurrency.lockutils [req-fa4d9b73-ca16-48e5-b28c-67b94f5a3d5f req-9fc832c0-4098-4a36-ad9b-e73ce13473d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:24:22 np0005558241 nova_compute[248510]: 2025-12-13 09:24:22.084 248514 DEBUG oslo_concurrency.lockutils [req-fa4d9b73-ca16-48e5-b28c-67b94f5a3d5f req-9fc832c0-4098-4a36-ad9b-e73ce13473d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:24:22 np0005558241 nova_compute[248510]: 2025-12-13 09:24:22.084 248514 DEBUG nova.compute.manager [req-fa4d9b73-ca16-48e5-b28c-67b94f5a3d5f req-9fc832c0-4098-4a36-ad9b-e73ce13473d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:24:22 np0005558241 nova_compute[248510]: 2025-12-13 09:24:22.085 248514 WARNING nova.compute.manager [req-fa4d9b73-ca16-48e5-b28c-67b94f5a3d5f req-9fc832c0-4098-4a36-ad9b-e73ce13473d7 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state suspended and task_state resuming.#033[00m
Dec 13 04:24:23 np0005558241 nova_compute[248510]: 2025-12-13 09:24:23.154 248514 DEBUG nova.network.neutron [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Updating instance_info_cache with network_info: [{"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:24:23 np0005558241 nova_compute[248510]: 2025-12-13 09:24:23.175 248514 DEBUG oslo_concurrency.lockutils [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Releasing lock "refresh_cache-df092140-61e7-46dc-a59e-317f6b309e77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 13 04:24:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3698: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Dec 13 04:24:23 np0005558241 nova_compute[248510]: 2025-12-13 09:24:23.181 248514 DEBUG nova.virt.libvirt.vif [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:23:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-175634348',display_name='tempest-TestServerAdvancedOps-server-175634348',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-175634348',id=153,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:24:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ca7e519f414149a7bdf699bbd9b4a3e9',ramdisk_id='',reservation_id='r-e7ub8c4u',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1044369445',owner_user_name='tempest-TestServerAdvancedOps-1044369445-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:24:19Z,user_data=None,user_id='895926e785ed4e44b6b55b32f5fa263c',uuid=df092140-61e7-46dc-a59e-317f6b309e77,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec 13 04:24:23 np0005558241 nova_compute[248510]: 2025-12-13 09:24:23.181 248514 DEBUG nova.network.os_vif_util [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converting VIF {"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:24:23 np0005558241 nova_compute[248510]: 2025-12-13 09:24:23.182 248514 DEBUG nova.network.os_vif_util [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:24:23 np0005558241 nova_compute[248510]: 2025-12-13 09:24:23.183 248514 DEBUG os_vif [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec 13 04:24:23 np0005558241 nova_compute[248510]: 2025-12-13 09:24:23.183 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:23 np0005558241 nova_compute[248510]: 2025-12-13 09:24:23.183 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:24:23 np0005558241 nova_compute[248510]: 2025-12-13 09:24:23.184 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:24:23 np0005558241 nova_compute[248510]: 2025-12-13 09:24:23.187 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:23 np0005558241 nova_compute[248510]: 2025-12-13 09:24:23.187 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d60d1ee-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:24:23 np0005558241 nova_compute[248510]: 2025-12-13 09:24:23.187 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d60d1ee-c6, col_values=(('external_ids', {'iface-id': '1d60d1ee-c619-439a-a2d3-0ab5e5872411', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:21:8c', 'vm-uuid': 'df092140-61e7-46dc-a59e-317f6b309e77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:24:23 np0005558241 nova_compute[248510]: 2025-12-13 09:24:23.188 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 13 04:24:23 np0005558241 nova_compute[248510]: 2025-12-13 09:24:23.188 248514 INFO os_vif [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6')#033[00m
Dec 13 04:24:23 np0005558241 nova_compute[248510]: 2025-12-13 09:24:23.210 248514 DEBUG nova.objects.instance [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'numa_topology' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:24:23 np0005558241 kernel: tap1d60d1ee-c6: entered promiscuous mode
Dec 13 04:24:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:23Z|01656|binding|INFO|Claiming lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 for this chassis.
Dec 13 04:24:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:23Z|01657|binding|INFO|1d60d1ee-c619-439a-a2d3-0ab5e5872411: Claiming fa:16:3e:77:21:8c 10.100.0.2
Dec 13 04:24:23 np0005558241 nova_compute[248510]: 2025-12-13 09:24:23.283 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:23 np0005558241 NetworkManager[50376]: <info>  [1765617863.2847] manager: (tap1d60d1ee-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/684)
Dec 13 04:24:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:23.291 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:21:8c 10.100.0.2'], port_security=['fa:16:3e:77:21:8c 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'df092140-61e7-46dc-a59e-317f6b309e77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db32bc67-75e0-4356-ad47-39387ebf5d89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7e519f414149a7bdf699bbd9b4a3e9', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'b021c13b-f68e-4f70-afcb-8e734f176d38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f99370b0-f9e5-4b19-b3ce-10f6a23d7122, chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1d60d1ee-c619-439a-a2d3-0ab5e5872411) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:24:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:23.292 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1d60d1ee-c619-439a-a2d3-0ab5e5872411 in datapath db32bc67-75e0-4356-ad47-39387ebf5d89 bound to our chassis#033[00m
Dec 13 04:24:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:23.293 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network db32bc67-75e0-4356-ad47-39387ebf5d89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 13 04:24:23 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:23.294 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[97d3e248-e51d-49be-ae2f-f7504651f73f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:24:23 np0005558241 nova_compute[248510]: 2025-12-13 09:24:23.297 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:23Z|01658|binding|INFO|Setting lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 ovn-installed in OVS
Dec 13 04:24:23 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:23Z|01659|binding|INFO|Setting lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 up in Southbound
Dec 13 04:24:23 np0005558241 nova_compute[248510]: 2025-12-13 09:24:23.299 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:23 np0005558241 systemd-udevd[409823]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:24:23 np0005558241 systemd-machined[210538]: New machine qemu-186-instance-00000099.
Dec 13 04:24:23 np0005558241 NetworkManager[50376]: <info>  [1765617863.3308] device (tap1d60d1ee-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:24:23 np0005558241 NetworkManager[50376]: <info>  [1765617863.3315] device (tap1d60d1ee-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:24:23 np0005558241 systemd[1]: Started Virtual Machine qemu-186-instance-00000099.
Dec 13 04:24:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.061 248514 DEBUG nova.virt.libvirt.host [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Removed pending event for df092140-61e7-46dc-a59e-317f6b309e77 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.063 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617864.0612276, df092140-61e7-46dc-a59e-317f6b309e77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.064 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Started (Lifecycle Event)#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.082 248514 DEBUG nova.compute.manager [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.083 248514 DEBUG nova.objects.instance [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'pci_devices' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.096 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.101 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.108 248514 INFO nova.virt.libvirt.driver [-] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Instance running successfully.#033[00m
Dec 13 04:24:24 np0005558241 virtqemud[248808]: argument unsupported: QEMU guest agent is not configured
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.110 248514 DEBUG nova.virt.libvirt.guest [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.111 248514 DEBUG nova.compute.manager [None req-ffd91c66-19eb-4799-be49-9b748ca14e1b 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.153 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.154 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617864.0641851, df092140-61e7-46dc-a59e-317f6b309e77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.154 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Resumed (Lifecycle Event)#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.182 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.186 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.211 248514 DEBUG nova.compute.manager [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.212 248514 DEBUG oslo_concurrency.lockutils [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.212 248514 DEBUG oslo_concurrency.lockutils [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.212 248514 DEBUG oslo_concurrency.lockutils [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.212 248514 DEBUG nova.compute.manager [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.213 248514 WARNING nova.compute.manager [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.213 248514 DEBUG nova.compute.manager [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.213 248514 DEBUG oslo_concurrency.lockutils [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.214 248514 DEBUG oslo_concurrency.lockutils [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.214 248514 DEBUG oslo_concurrency.lockutils [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.214 248514 DEBUG nova.compute.manager [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:24:24 np0005558241 nova_compute[248510]: 2025-12-13 09:24:24.214 248514 WARNING nova.compute.manager [req-b68975be-a48b-46ec-9f76-79ae5333e67f req-6f748b3e-44d9-4741-acbd-57d5025fa3ea 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state active and task_state None.#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.007 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3699: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 7 op/s
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.257 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.275 248514 DEBUG oslo_concurrency.lockutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.276 248514 DEBUG oslo_concurrency.lockutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.277 248514 DEBUG oslo_concurrency.lockutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.277 248514 DEBUG oslo_concurrency.lockutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.277 248514 DEBUG oslo_concurrency.lockutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.279 248514 INFO nova.compute.manager [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Terminating instance#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.280 248514 DEBUG nova.compute.manager [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec 13 04:24:25 np0005558241 kernel: tap1d60d1ee-c6 (unregistering): left promiscuous mode
Dec 13 04:24:25 np0005558241 NetworkManager[50376]: <info>  [1765617865.3221] device (tap1d60d1ee-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:24:25 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:25Z|01660|binding|INFO|Releasing lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 from this chassis (sb_readonly=0)
Dec 13 04:24:25 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:25Z|01661|binding|INFO|Setting lport 1d60d1ee-c619-439a-a2d3-0ab5e5872411 down in Southbound
Dec 13 04:24:25 np0005558241 ovn_controller[148476]: 2025-12-13T09:24:25Z|01662|binding|INFO|Removing iface tap1d60d1ee-c6 ovn-installed in OVS
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.331 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.333 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:25.339 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:21:8c 10.100.0.2'], port_security=['fa:16:3e:77:21:8c 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'df092140-61e7-46dc-a59e-317f6b309e77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db32bc67-75e0-4356-ad47-39387ebf5d89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7e519f414149a7bdf699bbd9b4a3e9', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b021c13b-f68e-4f70-afcb-8e734f176d38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f99370b0-f9e5-4b19-b3ce-10f6a23d7122, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>], logical_port=1d60d1ee-c619-439a-a2d3-0ab5e5872411) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f56686c2d30>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:25.340 158419 INFO neutron.agent.ovn.metadata.agent [-] Port 1d60d1ee-c619-439a-a2d3-0ab5e5872411 in datapath db32bc67-75e0-4356-ad47-39387ebf5d89 unbound from our chassis#033[00m
Dec 13 04:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:25.342 158419 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network db32bc67-75e0-4356-ad47-39387ebf5d89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 13 04:24:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:25.343 260774 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee87b47-2afe-4b16-9fcc-a1c9d303a70d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.348 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:25 np0005558241 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000099.scope: Deactivated successfully.
Dec 13 04:24:25 np0005558241 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000099.scope: Consumed 1.887s CPU time.
Dec 13 04:24:25 np0005558241 systemd-machined[210538]: Machine qemu-186-instance-00000099 terminated.
Dec 13 04:24:25 np0005558241 NetworkManager[50376]: <info>  [1765617865.5020] manager: (tap1d60d1ee-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/685)
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.517 248514 INFO nova.virt.libvirt.driver [-] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Instance destroyed successfully.#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.519 248514 DEBUG nova.objects.instance [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lazy-loading 'resources' on Instance uuid df092140-61e7-46dc-a59e-317f6b309e77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.573 248514 DEBUG nova.virt.libvirt.vif [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T09:23:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-175634348',display_name='tempest-TestServerAdvancedOps-server-175634348',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-175634348',id=153,image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T09:24:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca7e519f414149a7bdf699bbd9b4a3e9',ramdisk_id='',reservation_id='r-e7ub8c4u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0ed20320-9c25-4108-ad76-64b3cb3500ce',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-1044369445',owner_user_name='tempest-TestServerAdvancedOps-1044369445-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T09:24:24Z,user_data=None,user_id='895926e785ed4e44b6b55b32f5fa263c',uuid=df092140-61e7-46dc-a59e-317f6b309e77,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.573 248514 DEBUG nova.network.os_vif_util [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converting VIF {"id": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "address": "fa:16:3e:77:21:8c", "network": {"id": "db32bc67-75e0-4356-ad47-39387ebf5d89", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1237077832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "ca7e519f414149a7bdf699bbd9b4a3e9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d60d1ee-c6", "ovs_interfaceid": "1d60d1ee-c619-439a-a2d3-0ab5e5872411", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.574 248514 DEBUG nova.network.os_vif_util [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.574 248514 DEBUG os_vif [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.576 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.576 248514 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d60d1ee-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.619 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.621 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.623 248514 INFO os_vif [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:21:8c,bridge_name='br-int',has_traffic_filtering=True,id=1d60d1ee-c619-439a-a2d3-0ab5e5872411,network=Network(db32bc67-75e0-4356-ad47-39387ebf5d89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d60d1ee-c6')#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.959 248514 INFO nova.virt.libvirt.driver [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Deleting instance files /var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77_del#033[00m
Dec 13 04:24:25 np0005558241 nova_compute[248510]: 2025-12-13 09:24:25.961 248514 INFO nova.virt.libvirt.driver [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Deletion of /var/lib/nova/instances/df092140-61e7-46dc-a59e-317f6b309e77_del complete#033[00m
Dec 13 04:24:26 np0005558241 nova_compute[248510]: 2025-12-13 09:24:26.033 248514 INFO nova.compute.manager [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Dec 13 04:24:26 np0005558241 nova_compute[248510]: 2025-12-13 09:24:26.034 248514 DEBUG oslo.service.loopingcall [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec 13 04:24:26 np0005558241 nova_compute[248510]: 2025-12-13 09:24:26.034 248514 DEBUG nova.compute.manager [-] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec 13 04:24:26 np0005558241 nova_compute[248510]: 2025-12-13 09:24:26.034 248514 DEBUG nova.network.neutron [-] [instance: df092140-61e7-46dc-a59e-317f6b309e77] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec 13 04:24:26 np0005558241 nova_compute[248510]: 2025-12-13 09:24:26.313 248514 DEBUG nova.compute.manager [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-unplugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:24:26 np0005558241 nova_compute[248510]: 2025-12-13 09:24:26.313 248514 DEBUG oslo_concurrency.lockutils [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:24:26 np0005558241 nova_compute[248510]: 2025-12-13 09:24:26.314 248514 DEBUG oslo_concurrency.lockutils [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:24:26 np0005558241 nova_compute[248510]: 2025-12-13 09:24:26.315 248514 DEBUG oslo_concurrency.lockutils [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:24:26 np0005558241 nova_compute[248510]: 2025-12-13 09:24:26.315 248514 DEBUG nova.compute.manager [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-unplugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:24:26 np0005558241 nova_compute[248510]: 2025-12-13 09:24:26.316 248514 DEBUG nova.compute.manager [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-unplugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec 13 04:24:26 np0005558241 nova_compute[248510]: 2025-12-13 09:24:26.316 248514 DEBUG nova.compute.manager [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:24:26 np0005558241 nova_compute[248510]: 2025-12-13 09:24:26.317 248514 DEBUG oslo_concurrency.lockutils [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Acquiring lock "df092140-61e7-46dc-a59e-317f6b309e77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:24:26 np0005558241 nova_compute[248510]: 2025-12-13 09:24:26.318 248514 DEBUG oslo_concurrency.lockutils [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:24:26 np0005558241 nova_compute[248510]: 2025-12-13 09:24:26.318 248514 DEBUG oslo_concurrency.lockutils [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:24:26 np0005558241 nova_compute[248510]: 2025-12-13 09:24:26.319 248514 DEBUG nova.compute.manager [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] No waiting events found dispatching network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec 13 04:24:26 np0005558241 nova_compute[248510]: 2025-12-13 09:24:26.319 248514 WARNING nova.compute.manager [req-a0bc53e2-4776-45bb-a429-d4b3062c159f req-edb6dad8-fd80-4509-b528-3413d3be1529 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received unexpected event network-vif-plugged-1d60d1ee-c619-439a-a2d3-0ab5e5872411 for instance with vm_state active and task_state deleting.#033[00m
Dec 13 04:24:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:26.926 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:24:26 np0005558241 nova_compute[248510]: 2025-12-13 09:24:26.927 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:26 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:26.928 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:24:26 np0005558241 nova_compute[248510]: 2025-12-13 09:24:26.963 248514 DEBUG nova.network.neutron [-] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:24:27 np0005558241 nova_compute[248510]: 2025-12-13 09:24:27.083 248514 DEBUG nova.compute.manager [req-489b79f1-75a6-4766-b700-ed8a8cf21932 req-e0ceab04-3097-4fa5-8777-05703033fe0a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Received event network-vif-deleted-1d60d1ee-c619-439a-a2d3-0ab5e5872411 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec 13 04:24:27 np0005558241 nova_compute[248510]: 2025-12-13 09:24:27.084 248514 INFO nova.compute.manager [req-489b79f1-75a6-4766-b700-ed8a8cf21932 req-e0ceab04-3097-4fa5-8777-05703033fe0a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Neutron deleted interface 1d60d1ee-c619-439a-a2d3-0ab5e5872411; detaching it from the instance and deleting it from the info cache#033[00m
Dec 13 04:24:27 np0005558241 nova_compute[248510]: 2025-12-13 09:24:27.084 248514 DEBUG nova.network.neutron [req-489b79f1-75a6-4766-b700-ed8a8cf21932 req-e0ceab04-3097-4fa5-8777-05703033fe0a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 13 04:24:27 np0005558241 nova_compute[248510]: 2025-12-13 09:24:27.172 248514 INFO nova.compute.manager [-] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Took 1.14 seconds to deallocate network for instance.#033[00m
Dec 13 04:24:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3700: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 2 op/s
Dec 13 04:24:27 np0005558241 nova_compute[248510]: 2025-12-13 09:24:27.193 248514 DEBUG nova.compute.manager [req-489b79f1-75a6-4766-b700-ed8a8cf21932 req-e0ceab04-3097-4fa5-8777-05703033fe0a 613961304a564d58b76f826404fa8f75 05e416ab53b743379c92b90b665b2251 - - default default] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Detach interface failed, port_id=1d60d1ee-c619-439a-a2d3-0ab5e5872411, reason: Instance df092140-61e7-46dc-a59e-317f6b309e77 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Dec 13 04:24:27 np0005558241 nova_compute[248510]: 2025-12-13 09:24:27.464 248514 DEBUG oslo_concurrency.lockutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:24:27 np0005558241 nova_compute[248510]: 2025-12-13 09:24:27.464 248514 DEBUG oslo_concurrency.lockutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:24:27 np0005558241 nova_compute[248510]: 2025-12-13 09:24:27.523 248514 DEBUG oslo_concurrency.processutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:24:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:24:28 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481576824' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:24:28 np0005558241 nova_compute[248510]: 2025-12-13 09:24:28.111 248514 DEBUG oslo_concurrency.processutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:24:28 np0005558241 nova_compute[248510]: 2025-12-13 09:24:28.121 248514 DEBUG nova.compute.provider_tree [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:24:28 np0005558241 nova_compute[248510]: 2025-12-13 09:24:28.212 248514 DEBUG nova.scheduler.client.report [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:24:28 np0005558241 nova_compute[248510]: 2025-12-13 09:24:28.266 248514 DEBUG oslo_concurrency.lockutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:24:28 np0005558241 nova_compute[248510]: 2025-12-13 09:24:28.404 248514 INFO nova.scheduler.client.report [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Deleted allocations for instance df092140-61e7-46dc-a59e-317f6b309e77#033[00m
Dec 13 04:24:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:28 np0005558241 nova_compute[248510]: 2025-12-13 09:24:28.775 248514 DEBUG oslo_concurrency.lockutils [None req-2f3e552b-b2ef-46c1-ae88-97d35af8cd1a 895926e785ed4e44b6b55b32f5fa263c ca7e519f414149a7bdf699bbd9b4a3e9 - - default default] Lock "df092140-61e7-46dc-a59e-317f6b309e77" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:24:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3701: 321 pgs: 321 active+clean; 47 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 22 op/s
Dec 13 04:24:29 np0005558241 nova_compute[248510]: 2025-12-13 09:24:29.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:24:30 np0005558241 nova_compute[248510]: 2025-12-13 09:24:30.010 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:30 np0005558241 nova_compute[248510]: 2025-12-13 09:24:30.619 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3702: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Dec 13 04:24:31 np0005558241 nova_compute[248510]: 2025-12-13 09:24:31.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:24:31 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:31.931 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:24:32 np0005558241 nova_compute[248510]: 2025-12-13 09:24:32.031 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3703: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Dec 13 04:24:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:35 np0005558241 nova_compute[248510]: 2025-12-13 09:24:35.012 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3704: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Dec 13 04:24:35 np0005558241 nova_compute[248510]: 2025-12-13 09:24:35.622 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3705: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 13 04:24:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3706: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 13 04:24:39 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 04:24:39 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 04:24:39 np0005558241 nova_compute[248510]: 2025-12-13 09:24:39.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:24:39 np0005558241 nova_compute[248510]: 2025-12-13 09:24:39.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:24:39 np0005558241 nova_compute[248510]: 2025-12-13 09:24:39.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:24:39 np0005558241 nova_compute[248510]: 2025-12-13 09:24:39.793 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:24:39 np0005558241 nova_compute[248510]: 2025-12-13 09:24:39.794 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:24:40 np0005558241 nova_compute[248510]: 2025-12-13 09:24:40.015 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:24:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:24:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:24:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:24:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:24:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:24:40 np0005558241 nova_compute[248510]: 2025-12-13 09:24:40.514 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617865.5128546, df092140-61e7-46dc-a59e-317f6b309e77 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:24:40 np0005558241 nova_compute[248510]: 2025-12-13 09:24:40.515 248514 INFO nova.compute.manager [-] [instance: df092140-61e7-46dc-a59e-317f6b309e77] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:24:40 np0005558241 nova_compute[248510]: 2025-12-13 09:24:40.539 248514 DEBUG nova.compute.manager [None req-2a383a6a-7510-4cc7-87eb-8be714742e60 - - - - - -] [instance: df092140-61e7-46dc-a59e-317f6b309e77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:24:40 np0005558241 nova_compute[248510]: 2025-12-13 09:24:40.676 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:40 np0005558241 nova_compute[248510]: 2025-12-13 09:24:40.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:24:40 np0005558241 nova_compute[248510]: 2025-12-13 09:24:40.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:24:40 np0005558241 nova_compute[248510]: 2025-12-13 09:24:40.813 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:24:40 np0005558241 nova_compute[248510]: 2025-12-13 09:24:40.813 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:24:40 np0005558241 nova_compute[248510]: 2025-12-13 09:24:40.814 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:24:40 np0005558241 nova_compute[248510]: 2025-12-13 09:24:40.814 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:24:40 np0005558241 nova_compute[248510]: 2025-12-13 09:24:40.814 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:24:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3707: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.1 KiB/s rd, 0 B/s wr, 8 op/s
Dec 13 04:24:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:24:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3194415615' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:24:41 np0005558241 nova_compute[248510]: 2025-12-13 09:24:41.465 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:24:41 np0005558241 nova_compute[248510]: 2025-12-13 09:24:41.709 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:24:41 np0005558241 nova_compute[248510]: 2025-12-13 09:24:41.711 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3388MB free_disk=59.98738075513393GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:24:41 np0005558241 nova_compute[248510]: 2025-12-13 09:24:41.711 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:24:41 np0005558241 nova_compute[248510]: 2025-12-13 09:24:41.712 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:24:41 np0005558241 nova_compute[248510]: 2025-12-13 09:24:41.795 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:24:41 np0005558241 nova_compute[248510]: 2025-12-13 09:24:41.796 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:24:41 np0005558241 nova_compute[248510]: 2025-12-13 09:24:41.818 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:24:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:24:42 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2524129072' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:24:42 np0005558241 nova_compute[248510]: 2025-12-13 09:24:42.422 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:24:42 np0005558241 nova_compute[248510]: 2025-12-13 09:24:42.430 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:24:42 np0005558241 nova_compute[248510]: 2025-12-13 09:24:42.589 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:24:42 np0005558241 nova_compute[248510]: 2025-12-13 09:24:42.713 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:24:42 np0005558241 nova_compute[248510]: 2025-12-13 09:24:42.713 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:24:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3708: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:24:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:43 np0005558241 nova_compute[248510]: 2025-12-13 09:24:43.714 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:24:43 np0005558241 nova_compute[248510]: 2025-12-13 09:24:43.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:24:43 np0005558241 nova_compute[248510]: 2025-12-13 09:24:43.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:24:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:24:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:24:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:24:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:24:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:24:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:24:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:24:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:24:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:24:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:24:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:24:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:24:45 np0005558241 nova_compute[248510]: 2025-12-13 09:24:45.065 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3709: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:24:45 np0005558241 podman[410122]: 2025-12-13 09:24:45.231466745 +0000 UTC m=+0.055403130 container create 5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_khayyam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:24:45 np0005558241 systemd[1]: Started libpod-conmon-5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc.scope.
Dec 13 04:24:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:24:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:24:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:24:45 np0005558241 podman[410122]: 2025-12-13 09:24:45.203666678 +0000 UTC m=+0.027603113 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:24:45 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:24:45 np0005558241 podman[410122]: 2025-12-13 09:24:45.345899133 +0000 UTC m=+0.169835538 container init 5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_khayyam, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 04:24:45 np0005558241 podman[410122]: 2025-12-13 09:24:45.357545065 +0000 UTC m=+0.181481450 container start 5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 04:24:45 np0005558241 podman[410122]: 2025-12-13 09:24:45.361055693 +0000 UTC m=+0.184992078 container attach 5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:24:45 np0005558241 inspiring_khayyam[410139]: 167 167
Dec 13 04:24:45 np0005558241 systemd[1]: libpod-5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc.scope: Deactivated successfully.
Dec 13 04:24:45 np0005558241 conmon[410139]: conmon 5d383b80a8d19b304a45 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc.scope/container/memory.events
Dec 13 04:24:45 np0005558241 podman[410122]: 2025-12-13 09:24:45.3657156 +0000 UTC m=+0.189652015 container died 5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:24:45 np0005558241 systemd[1]: var-lib-containers-storage-overlay-85b4e18c13b58dd0822820b68d4b62e66a89d07e60589abb2d2760fa31f89d49-merged.mount: Deactivated successfully.
Dec 13 04:24:45 np0005558241 podman[410122]: 2025-12-13 09:24:45.410292357 +0000 UTC m=+0.234228742 container remove 5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 04:24:45 np0005558241 systemd[1]: libpod-conmon-5d383b80a8d19b304a45aaab5d48d38d5d482683e2d60fb37467965fd78bb2fc.scope: Deactivated successfully.
Dec 13 04:24:45 np0005558241 podman[410164]: 2025-12-13 09:24:45.599442209 +0000 UTC m=+0.056515798 container create 154e028630cdc5a30a7418e193a747006f24644912e8d311058b3200dd0a6d60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 04:24:45 np0005558241 systemd[1]: Started libpod-conmon-154e028630cdc5a30a7418e193a747006f24644912e8d311058b3200dd0a6d60.scope.
Dec 13 04:24:45 np0005558241 podman[410164]: 2025-12-13 09:24:45.572640147 +0000 UTC m=+0.029713826 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:24:45 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:24:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739d138ff6afd1715da7945481a53d713afab0115e4a581a7e4863cbbb0ffb55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739d138ff6afd1715da7945481a53d713afab0115e4a581a7e4863cbbb0ffb55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739d138ff6afd1715da7945481a53d713afab0115e4a581a7e4863cbbb0ffb55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739d138ff6afd1715da7945481a53d713afab0115e4a581a7e4863cbbb0ffb55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:45 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739d138ff6afd1715da7945481a53d713afab0115e4a581a7e4863cbbb0ffb55/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:45 np0005558241 nova_compute[248510]: 2025-12-13 09:24:45.680 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:45 np0005558241 podman[410164]: 2025-12-13 09:24:45.69005851 +0000 UTC m=+0.147132189 container init 154e028630cdc5a30a7418e193a747006f24644912e8d311058b3200dd0a6d60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:24:45 np0005558241 podman[410164]: 2025-12-13 09:24:45.710248286 +0000 UTC m=+0.167321915 container start 154e028630cdc5a30a7418e193a747006f24644912e8d311058b3200dd0a6d60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:24:45 np0005558241 podman[410164]: 2025-12-13 09:24:45.714822861 +0000 UTC m=+0.171896490 container attach 154e028630cdc5a30a7418e193a747006f24644912e8d311058b3200dd0a6d60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Dec 13 04:24:46 np0005558241 nervous_morse[410180]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:24:46 np0005558241 nervous_morse[410180]: --> All data devices are unavailable
Dec 13 04:24:46 np0005558241 systemd[1]: libpod-154e028630cdc5a30a7418e193a747006f24644912e8d311058b3200dd0a6d60.scope: Deactivated successfully.
Dec 13 04:24:46 np0005558241 podman[410164]: 2025-12-13 09:24:46.341715175 +0000 UTC m=+0.798788774 container died 154e028630cdc5a30a7418e193a747006f24644912e8d311058b3200dd0a6d60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:24:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay-739d138ff6afd1715da7945481a53d713afab0115e4a581a7e4863cbbb0ffb55-merged.mount: Deactivated successfully.
Dec 13 04:24:46 np0005558241 podman[410164]: 2025-12-13 09:24:46.402806767 +0000 UTC m=+0.859880396 container remove 154e028630cdc5a30a7418e193a747006f24644912e8d311058b3200dd0a6d60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 04:24:46 np0005558241 systemd[1]: libpod-conmon-154e028630cdc5a30a7418e193a747006f24644912e8d311058b3200dd0a6d60.scope: Deactivated successfully.
Dec 13 04:24:47 np0005558241 podman[410274]: 2025-12-13 09:24:47.042479271 +0000 UTC m=+0.053357758 container create 6c5085d5678df4c4556b6a10dcf67d6e4f2732f700032b701964d49877ddda75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:24:47 np0005558241 systemd[1]: Started libpod-conmon-6c5085d5678df4c4556b6a10dcf67d6e4f2732f700032b701964d49877ddda75.scope.
Dec 13 04:24:47 np0005558241 podman[410274]: 2025-12-13 09:24:47.020056919 +0000 UTC m=+0.030935396 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:24:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:24:47 np0005558241 podman[410274]: 2025-12-13 09:24:47.150026677 +0000 UTC m=+0.160905164 container init 6c5085d5678df4c4556b6a10dcf67d6e4f2732f700032b701964d49877ddda75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:24:47 np0005558241 podman[410274]: 2025-12-13 09:24:47.156754386 +0000 UTC m=+0.167632853 container start 6c5085d5678df4c4556b6a10dcf67d6e4f2732f700032b701964d49877ddda75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:24:47 np0005558241 podman[410274]: 2025-12-13 09:24:47.161230278 +0000 UTC m=+0.172108845 container attach 6c5085d5678df4c4556b6a10dcf67d6e4f2732f700032b701964d49877ddda75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:24:47 np0005558241 affectionate_lovelace[410290]: 167 167
Dec 13 04:24:47 np0005558241 systemd[1]: libpod-6c5085d5678df4c4556b6a10dcf67d6e4f2732f700032b701964d49877ddda75.scope: Deactivated successfully.
Dec 13 04:24:47 np0005558241 podman[410274]: 2025-12-13 09:24:47.164400948 +0000 UTC m=+0.175279415 container died 6c5085d5678df4c4556b6a10dcf67d6e4f2732f700032b701964d49877ddda75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 04:24:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3710: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:24:47 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f34934a0d2432ae41ee7e621fc0a5729f14e91d2f04788359b98dca23e3f9586-merged.mount: Deactivated successfully.
Dec 13 04:24:47 np0005558241 podman[410274]: 2025-12-13 09:24:47.249326636 +0000 UTC m=+0.260205093 container remove 6c5085d5678df4c4556b6a10dcf67d6e4f2732f700032b701964d49877ddda75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 04:24:47 np0005558241 systemd[1]: libpod-conmon-6c5085d5678df4c4556b6a10dcf67d6e4f2732f700032b701964d49877ddda75.scope: Deactivated successfully.
Dec 13 04:24:47 np0005558241 podman[410313]: 2025-12-13 09:24:47.46654816 +0000 UTC m=+0.054944378 container create c0a39db674876fcc2e172aa2c18b3647b874b61f632f93be212b50ec521e73e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:24:47 np0005558241 systemd[1]: Started libpod-conmon-c0a39db674876fcc2e172aa2c18b3647b874b61f632f93be212b50ec521e73e1.scope.
Dec 13 04:24:47 np0005558241 podman[410313]: 2025-12-13 09:24:47.442338874 +0000 UTC m=+0.030735082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:24:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:24:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28474c748e1730dec57d03fd857c8ffd0b80f89264f122f17c99fb6da15dc93c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28474c748e1730dec57d03fd857c8ffd0b80f89264f122f17c99fb6da15dc93c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28474c748e1730dec57d03fd857c8ffd0b80f89264f122f17c99fb6da15dc93c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28474c748e1730dec57d03fd857c8ffd0b80f89264f122f17c99fb6da15dc93c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:47 np0005558241 podman[410313]: 2025-12-13 09:24:47.581297297 +0000 UTC m=+0.169693505 container init c0a39db674876fcc2e172aa2c18b3647b874b61f632f93be212b50ec521e73e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 04:24:47 np0005558241 podman[410313]: 2025-12-13 09:24:47.593812201 +0000 UTC m=+0.182208379 container start c0a39db674876fcc2e172aa2c18b3647b874b61f632f93be212b50ec521e73e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:24:47 np0005558241 podman[410313]: 2025-12-13 09:24:47.59737789 +0000 UTC m=+0.185774288 container attach c0a39db674876fcc2e172aa2c18b3647b874b61f632f93be212b50ec521e73e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]: {
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:    "0": [
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:        {
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "devices": [
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "/dev/loop3"
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            ],
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "lv_name": "ceph_lv0",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "lv_size": "21470642176",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "name": "ceph_lv0",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "tags": {
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.cluster_name": "ceph",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.crush_device_class": "",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.encrypted": "0",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.objectstore": "bluestore",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.osd_id": "0",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.type": "block",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.vdo": "0",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.with_tpm": "0"
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            },
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "type": "block",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "vg_name": "ceph_vg0"
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:        }
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:    ],
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:    "1": [
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:        {
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "devices": [
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "/dev/loop4"
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            ],
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "lv_name": "ceph_lv1",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "lv_size": "21470642176",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "name": "ceph_lv1",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "tags": {
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.cluster_name": "ceph",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.crush_device_class": "",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.encrypted": "0",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.objectstore": "bluestore",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.osd_id": "1",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.type": "block",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.vdo": "0",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.with_tpm": "0"
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            },
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "type": "block",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "vg_name": "ceph_vg1"
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:        }
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:    ],
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:    "2": [
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:        {
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "devices": [
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "/dev/loop5"
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            ],
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "lv_name": "ceph_lv2",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "lv_size": "21470642176",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "name": "ceph_lv2",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "tags": {
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.cluster_name": "ceph",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.crush_device_class": "",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.encrypted": "0",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.objectstore": "bluestore",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.osd_id": "2",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.type": "block",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.vdo": "0",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:                "ceph.with_tpm": "0"
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            },
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "type": "block",
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:            "vg_name": "ceph_vg2"
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:        }
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]:    ]
Dec 13 04:24:47 np0005558241 jovial_poincare[410330]: }
Dec 13 04:24:47 np0005558241 systemd[1]: libpod-c0a39db674876fcc2e172aa2c18b3647b874b61f632f93be212b50ec521e73e1.scope: Deactivated successfully.
Dec 13 04:24:47 np0005558241 podman[410313]: 2025-12-13 09:24:47.939146317 +0000 UTC m=+0.527542525 container died c0a39db674876fcc2e172aa2c18b3647b874b61f632f93be212b50ec521e73e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 04:24:47 np0005558241 systemd[1]: var-lib-containers-storage-overlay-28474c748e1730dec57d03fd857c8ffd0b80f89264f122f17c99fb6da15dc93c-merged.mount: Deactivated successfully.
Dec 13 04:24:48 np0005558241 podman[410313]: 2025-12-13 09:24:48.004434214 +0000 UTC m=+0.592830402 container remove c0a39db674876fcc2e172aa2c18b3647b874b61f632f93be212b50ec521e73e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:24:48 np0005558241 systemd[1]: libpod-conmon-c0a39db674876fcc2e172aa2c18b3647b874b61f632f93be212b50ec521e73e1.scope: Deactivated successfully.
Dec 13 04:24:48 np0005558241 podman[410352]: 2025-12-13 09:24:48.13993318 +0000 UTC m=+0.089156386 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:24:48 np0005558241 podman[410354]: 2025-12-13 09:24:48.161691036 +0000 UTC m=+0.098535901 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:24:48 np0005558241 podman[410413]: 2025-12-13 09:24:48.284146195 +0000 UTC m=+0.109383533 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:24:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:48 np0005558241 podman[410479]: 2025-12-13 09:24:48.589754826 +0000 UTC m=+0.045894902 container create c1221e056f12efae4fb2bddc276a7afd105924572d98e28e1b5af63a5e588843 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 04:24:48 np0005558241 systemd[1]: Started libpod-conmon-c1221e056f12efae4fb2bddc276a7afd105924572d98e28e1b5af63a5e588843.scope.
Dec 13 04:24:48 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:24:48 np0005558241 podman[410479]: 2025-12-13 09:24:48.57197065 +0000 UTC m=+0.028110746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:24:48 np0005558241 podman[410479]: 2025-12-13 09:24:48.682031979 +0000 UTC m=+0.138172065 container init c1221e056f12efae4fb2bddc276a7afd105924572d98e28e1b5af63a5e588843 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bhabha, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 04:24:48 np0005558241 podman[410479]: 2025-12-13 09:24:48.694380658 +0000 UTC m=+0.150520775 container start c1221e056f12efae4fb2bddc276a7afd105924572d98e28e1b5af63a5e588843 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bhabha, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 04:24:48 np0005558241 podman[410479]: 2025-12-13 09:24:48.698737558 +0000 UTC m=+0.154877644 container attach c1221e056f12efae4fb2bddc276a7afd105924572d98e28e1b5af63a5e588843 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:24:48 np0005558241 frosty_bhabha[410495]: 167 167
Dec 13 04:24:48 np0005558241 systemd[1]: libpod-c1221e056f12efae4fb2bddc276a7afd105924572d98e28e1b5af63a5e588843.scope: Deactivated successfully.
Dec 13 04:24:48 np0005558241 podman[410479]: 2025-12-13 09:24:48.700682586 +0000 UTC m=+0.156822662 container died c1221e056f12efae4fb2bddc276a7afd105924572d98e28e1b5af63a5e588843 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 04:24:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f54cd06cce4bd26b185bb60227e0cde33d272b1afa3ec1970596a2fe03e9026b-merged.mount: Deactivated successfully.
Dec 13 04:24:48 np0005558241 podman[410479]: 2025-12-13 09:24:48.750237289 +0000 UTC m=+0.206377365 container remove c1221e056f12efae4fb2bddc276a7afd105924572d98e28e1b5af63a5e588843 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_bhabha, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 04:24:48 np0005558241 systemd[1]: libpod-conmon-c1221e056f12efae4fb2bddc276a7afd105924572d98e28e1b5af63a5e588843.scope: Deactivated successfully.
Dec 13 04:24:48 np0005558241 podman[410518]: 2025-12-13 09:24:48.976255154 +0000 UTC m=+0.062535728 container create af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:24:49 np0005558241 systemd[1]: Started libpod-conmon-af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a.scope.
Dec 13 04:24:49 np0005558241 podman[410518]: 2025-12-13 09:24:48.951774031 +0000 UTC m=+0.038054605 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:24:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:24:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd38c0494318b62a04576405d015538a186f3ca34251e322b2c1b928232562e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd38c0494318b62a04576405d015538a186f3ca34251e322b2c1b928232562e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd38c0494318b62a04576405d015538a186f3ca34251e322b2c1b928232562e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd38c0494318b62a04576405d015538a186f3ca34251e322b2c1b928232562e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:49 np0005558241 podman[410518]: 2025-12-13 09:24:49.089954764 +0000 UTC m=+0.176235378 container init af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_jemison, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:24:49 np0005558241 podman[410518]: 2025-12-13 09:24:49.102124209 +0000 UTC m=+0.188404763 container start af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_jemison, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:24:49 np0005558241 podman[410518]: 2025-12-13 09:24:49.105603147 +0000 UTC m=+0.191883781 container attach af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:24:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3711: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:24:49 np0005558241 lvm[410612]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:24:49 np0005558241 lvm[410612]: VG ceph_vg0 finished
Dec 13 04:24:49 np0005558241 lvm[410613]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:24:49 np0005558241 lvm[410613]: VG ceph_vg1 finished
Dec 13 04:24:49 np0005558241 lvm[410615]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:24:49 np0005558241 lvm[410615]: VG ceph_vg2 finished
Dec 13 04:24:50 np0005558241 goofy_jemison[410534]: {}
Dec 13 04:24:50 np0005558241 systemd[1]: libpod-af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a.scope: Deactivated successfully.
Dec 13 04:24:50 np0005558241 systemd[1]: libpod-af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a.scope: Consumed 1.614s CPU time.
Dec 13 04:24:50 np0005558241 podman[410518]: 2025-12-13 09:24:50.059390355 +0000 UTC m=+1.145670909 container died af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 04:24:50 np0005558241 nova_compute[248510]: 2025-12-13 09:24:50.065 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7cd38c0494318b62a04576405d015538a186f3ca34251e322b2c1b928232562e-merged.mount: Deactivated successfully.
Dec 13 04:24:50 np0005558241 podman[410518]: 2025-12-13 09:24:50.11145812 +0000 UTC m=+1.197738684 container remove af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_jemison, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:24:50 np0005558241 systemd[1]: libpod-conmon-af70325c4743de8b11871b9ad97bbcc629bfd6c2a4bcfe3ab531f153dc73cd6a.scope: Deactivated successfully.
Dec 13 04:24:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:24:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:24:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:24:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:24:50 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:24:50 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:24:50 np0005558241 nova_compute[248510]: 2025-12-13 09:24:50.682 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3712: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:24:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3713: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:24:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:55 np0005558241 nova_compute[248510]: 2025-12-13 09:24:55.068 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3714: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:24:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:55.460 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:24:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:55.461 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:24:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:24:55.461 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:24:55 np0005558241 nova_compute[248510]: 2025-12-13 09:24:55.684 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:24:56 np0005558241 nova_compute[248510]: 2025-12-13 09:24:56.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:24:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3715: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:24:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3716: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:25:00 np0005558241 nova_compute[248510]: 2025-12-13 09:25:00.071 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:25:00 np0005558241 nova_compute[248510]: 2025-12-13 09:25:00.709 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:25:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3717: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:25:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3718: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:25:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:05 np0005558241 nova_compute[248510]: 2025-12-13 09:25:05.073 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:25:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3719: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:25:05 np0005558241 nova_compute[248510]: 2025-12-13 09:25:05.712 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:25:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3720: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:25:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3721: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:25:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:25:09
Dec 13 04:25:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:25:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:25:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.meta', '.mgr', 'default.rgw.log']
Dec 13 04:25:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:25:10 np0005558241 nova_compute[248510]: 2025-12-13 09:25:10.075 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:25:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:25:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:25:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:25:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:25:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:25:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:25:10 np0005558241 nova_compute[248510]: 2025-12-13 09:25:10.715 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:25:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:25:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:25:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:25:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:25:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:25:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:25:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:25:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:25:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:25:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:25:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3722: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:25:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3723: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:25:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:15 np0005558241 nova_compute[248510]: 2025-12-13 09:25:15.078 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:25:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:25:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1468542223' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:25:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1468542223' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3724: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:25:15 np0005558241 nova_compute[248510]: 2025-12-13 09:25:15.754 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:25:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3725: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:25:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:18 np0005558241 podman[410657]: 2025-12-13 09:25:18.985412023 +0000 UTC m=+0.063290528 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:25:18 np0005558241 podman[410656]: 2025-12-13 09:25:18.990304105 +0000 UTC m=+0.072123569 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=multipathd, container_name=multipathd)
Dec 13 04:25:19 np0005558241 podman[410655]: 2025-12-13 09:25:19.074371262 +0000 UTC m=+0.156494283 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:25:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3726: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:25:20 np0005558241 nova_compute[248510]: 2025-12-13 09:25:20.080 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:25:20 np0005558241 nova_compute[248510]: 2025-12-13 09:25:20.666 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Acquiring lock "958d2cbf-5c68-4aa8-88b5-117c871f8959" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:25:20 np0005558241 nova_compute[248510]: 2025-12-13 09:25:20.667 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "958d2cbf-5c68-4aa8-88b5-117c871f8959" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:25:20 np0005558241 nova_compute[248510]: 2025-12-13 09:25:20.684 248514 DEBUG nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec 13 04:25:20 np0005558241 nova_compute[248510]: 2025-12-13 09:25:20.757 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:25:20 np0005558241 nova_compute[248510]: 2025-12-13 09:25:20.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:25:20 np0005558241 nova_compute[248510]: 2025-12-13 09:25:20.776 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:25:20 np0005558241 nova_compute[248510]: 2025-12-13 09:25:20.777 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:25:20 np0005558241 nova_compute[248510]: 2025-12-13 09:25:20.791 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec 13 04:25:20 np0005558241 nova_compute[248510]: 2025-12-13 09:25:20.792 248514 INFO nova.compute.claims [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec 13 04:25:21 np0005558241 nova_compute[248510]: 2025-12-13 09:25:21.186 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3727: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.501117963890875e-05 of space, bias 1.0, pg target 0.004503353891672625 quantized to 32 (current 32)
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696679221549024 of space, bias 1.0, pg target 0.20090037664647073 quantized to 32 (current 32)
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.715245915521144e-07 of space, bias 4.0, pg target 0.0006858295098625373 quantized to 16 (current 32)
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:25:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:25:21 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/653473999' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:25:21 np0005558241 nova_compute[248510]: 2025-12-13 09:25:21.781 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:25:21 np0005558241 nova_compute[248510]: 2025-12-13 09:25:21.789 248514 DEBUG nova.compute.provider_tree [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:25:21 np0005558241 nova_compute[248510]: 2025-12-13 09:25:21.970 248514 DEBUG nova.scheduler.client.report [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:25:22 np0005558241 ovn_controller[148476]: 2025-12-13T09:25:22Z|01663|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec 13 04:25:22 np0005558241 nova_compute[248510]: 2025-12-13 09:25:22.537 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:25:22 np0005558241 nova_compute[248510]: 2025-12-13 09:25:22.538 248514 DEBUG nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec 13 04:25:22 np0005558241 nova_compute[248510]: 2025-12-13 09:25:22.683 248514 DEBUG nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec 13 04:25:22 np0005558241 nova_compute[248510]: 2025-12-13 09:25:22.684 248514 DEBUG nova.network.neutron [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec 13 04:25:22 np0005558241 nova_compute[248510]: 2025-12-13 09:25:22.708 248514 INFO nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec 13 04:25:22 np0005558241 nova_compute[248510]: 2025-12-13 09:25:22.740 248514 DEBUG nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.065 248514 DEBUG nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.067 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.067 248514 INFO nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Creating image(s)#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.094 248514 DEBUG nova.storage.rbd_utils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] rbd image 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.120 248514 DEBUG nova.storage.rbd_utils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] rbd image 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.144 248514 DEBUG nova.storage.rbd_utils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] rbd image 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.149 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:25:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3728: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.238 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.239 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Acquiring lock "7e19890462cb757da298333dcef0801755c35301" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.240 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.240 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "7e19890462cb757da298333dcef0801755c35301" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.268 248514 DEBUG nova.storage.rbd_utils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] rbd image 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.273 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.329 248514 DEBUG nova.network.neutron [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.330 248514 DEBUG nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec 13 04:25:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.655 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/7e19890462cb757da298333dcef0801755c35301 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.383s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.724 248514 DEBUG nova.storage.rbd_utils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] resizing rbd image 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.806 248514 DEBUG nova.objects.instance [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lazy-loading 'migration_context' on Instance uuid 958d2cbf-5c68-4aa8-88b5-117c871f8959 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.984 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.985 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Ensure instance console log exists: /var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.985 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.986 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.986 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.987 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_name': '/dev/vda', 'size': 0, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '0ed20320-9c25-4108-ad76-64b3cb3500ce'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.992 248514 WARNING nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.997 248514 DEBUG nova.virt.libvirt.host [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec 13 04:25:23 np0005558241 nova_compute[248510]: 2025-12-13 09:25:23.998 248514 DEBUG nova.virt.libvirt.host [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec 13 04:25:24 np0005558241 nova_compute[248510]: 2025-12-13 09:25:24.000 248514 DEBUG nova.virt.libvirt.host [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec 13 04:25:24 np0005558241 nova_compute[248510]: 2025-12-13 09:25:24.001 248514 DEBUG nova.virt.libvirt.host [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec 13 04:25:24 np0005558241 nova_compute[248510]: 2025-12-13 09:25:24.001 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 13 04:25:24 np0005558241 nova_compute[248510]: 2025-12-13 09:25:24.001 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T08:10:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1fa1f001-2bfe-4953-a34a-60242b726cb1',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T08:10:50Z,direct_url=<?>,disk_format='qcow2',id=0ed20320-9c25-4108-ad76-64b3cb3500ce,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='904214aea8584a899ae1854d2bfa2f56',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T08:10:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec 13 04:25:24 np0005558241 nova_compute[248510]: 2025-12-13 09:25:24.002 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec 13 04:25:24 np0005558241 nova_compute[248510]: 2025-12-13 09:25:24.002 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec 13 04:25:24 np0005558241 nova_compute[248510]: 2025-12-13 09:25:24.002 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec 13 04:25:24 np0005558241 nova_compute[248510]: 2025-12-13 09:25:24.002 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec 13 04:25:24 np0005558241 nova_compute[248510]: 2025-12-13 09:25:24.002 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec 13 04:25:24 np0005558241 nova_compute[248510]: 2025-12-13 09:25:24.002 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec 13 04:25:24 np0005558241 nova_compute[248510]: 2025-12-13 09:25:24.003 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec 13 04:25:24 np0005558241 nova_compute[248510]: 2025-12-13 09:25:24.003 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec 13 04:25:24 np0005558241 nova_compute[248510]: 2025-12-13 09:25:24.003 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec 13 04:25:24 np0005558241 nova_compute[248510]: 2025-12-13 09:25:24.003 248514 DEBUG nova.virt.hardware [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec 13 04:25:24 np0005558241 nova_compute[248510]: 2025-12-13 09:25:24.006 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:25:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:25:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/734257284' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:25:24 np0005558241 nova_compute[248510]: 2025-12-13 09:25:24.580 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:25:24 np0005558241 nova_compute[248510]: 2025-12-13 09:25:24.609 248514 DEBUG nova.storage.rbd_utils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] rbd image 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec 13 04:25:24 np0005558241 nova_compute[248510]: 2025-12-13 09:25:24.614 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:25:25 np0005558241 nova_compute[248510]: 2025-12-13 09:25:25.081 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:25:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:25:25 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280297851' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:25:25 np0005558241 nova_compute[248510]: 2025-12-13 09:25:25.189 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:25:25 np0005558241 nova_compute[248510]: 2025-12-13 09:25:25.191 248514 DEBUG nova.objects.instance [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 958d2cbf-5c68-4aa8-88b5-117c871f8959 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 13 04:25:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3729: 321 pgs: 321 active+clean; 73 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 1.3 MiB/s wr, 13 op/s
Dec 13 04:25:25 np0005558241 nova_compute[248510]: 2025-12-13 09:25:25.304 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:25:25 np0005558241 nova_compute[248510]:  <uuid>958d2cbf-5c68-4aa8-88b5-117c871f8959</uuid>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:  <name>instance-0000009a</name>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:  <memory>131072</memory>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:  <vcpu>1</vcpu>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:  <metadata>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <nova:name>tempest-AggregatesAdminTestJSON-server-1018707817</nova:name>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <nova:creationTime>2025-12-13 09:25:23</nova:creationTime>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <nova:flavor name="m1.nano">
Dec 13 04:25:25 np0005558241 nova_compute[248510]:        <nova:memory>128</nova:memory>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:        <nova:disk>1</nova:disk>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:        <nova:swap>0</nova:swap>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:        <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:        <nova:vcpus>1</nova:vcpus>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      </nova:flavor>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <nova:owner>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:        <nova:user uuid="075d71b5d40a478690478a7e4dcb68bc">tempest-AggregatesAdminTestJSON-1874549376-project-member</nova:user>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:        <nova:project uuid="77bc8076d5d1433c8b0b9fc3dc0286e2">tempest-AggregatesAdminTestJSON-1874549376</nova:project>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      </nova:owner>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <nova:root type="image" uuid="0ed20320-9c25-4108-ad76-64b3cb3500ce"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <nova:ports/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    </nova:instance>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:  </metadata>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:  <sysinfo type="smbios">
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <system>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <entry name="manufacturer">RDO</entry>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <entry name="product">OpenStack Compute</entry>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <entry name="serial">958d2cbf-5c68-4aa8-88b5-117c871f8959</entry>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <entry name="uuid">958d2cbf-5c68-4aa8-88b5-117c871f8959</entry>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <entry name="family">Virtual Machine</entry>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    </system>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:  </sysinfo>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:  <os>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <boot dev="hd"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <smbios mode="sysinfo"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:  </os>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:  <features>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <acpi/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <apic/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <vmcoreinfo/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:  </features>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:  <clock offset="utc">
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <timer name="pit" tickpolicy="delay"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <timer name="hpet" present="no"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:  </clock>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:  <cpu mode="host-model" match="exact">
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:  </cpu>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:  <devices>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <disk type="network" device="disk">
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/958d2cbf-5c68-4aa8-88b5-117c871f8959_disk">
Dec 13 04:25:25 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:25:25 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <target dev="vda" bus="virtio"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <disk type="network" device="cdrom">
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <driver type="raw" cache="none"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <source protocol="rbd" name="vms/958d2cbf-5c68-4aa8-88b5-117c871f8959_disk.config">
Dec 13 04:25:25 np0005558241 nova_compute[248510]:        <host name="192.168.122.100" port="6789"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      </source>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <auth username="openstack">
Dec 13 04:25:25 np0005558241 nova_compute[248510]:        <secret type="ceph" uuid="18ee9de6-e00b-571b-ab9b-b7aab06174df"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      </auth>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <target dev="sda" bus="sata"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    </disk>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <serial type="pty">
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <log file="/var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959/console.log" append="off"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    </serial>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <video>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <model type="virtio"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    </video>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <input type="tablet" bus="usb"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <rng model="virtio">
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <backend model="random">/dev/urandom</backend>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    </rng>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="pci" model="pcie-root-port"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <controller type="usb" index="0"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    <memballoon model="virtio">
Dec 13 04:25:25 np0005558241 nova_compute[248510]:      <stats period="10"/>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:    </memballoon>
Dec 13 04:25:25 np0005558241 nova_compute[248510]:  </devices>
Dec 13 04:25:25 np0005558241 nova_compute[248510]: </domain>
Dec 13 04:25:25 np0005558241 nova_compute[248510]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec 13 04:25:25 np0005558241 nova_compute[248510]: 2025-12-13 09:25:25.580 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec 13 04:25:25 np0005558241 nova_compute[248510]: 2025-12-13 09:25:25.581 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:25:25 np0005558241 nova_compute[248510]: 2025-12-13 09:25:25.582 248514 INFO nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Using config drive
Dec 13 04:25:25 np0005558241 nova_compute[248510]: 2025-12-13 09:25:25.685 248514 DEBUG nova.storage.rbd_utils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] rbd image 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:25:25 np0005558241 nova_compute[248510]: 2025-12-13 09:25:25.797 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:25 np0005558241 nova_compute[248510]: 2025-12-13 09:25:25.950 248514 INFO nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Creating config drive at /var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959/disk.config
Dec 13 04:25:25 np0005558241 nova_compute[248510]: 2025-12-13 09:25:25.959 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp402zzc4j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:25:26 np0005558241 nova_compute[248510]: 2025-12-13 09:25:26.119 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp402zzc4j" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:25:26 np0005558241 nova_compute[248510]: 2025-12-13 09:25:26.155 248514 DEBUG nova.storage.rbd_utils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] rbd image 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:25:26 np0005558241 nova_compute[248510]: 2025-12-13 09:25:26.161 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959/disk.config 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:25:26 np0005558241 nova_compute[248510]: 2025-12-13 09:25:26.469 248514 DEBUG oslo_concurrency.processutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959/disk.config 958d2cbf-5c68-4aa8-88b5-117c871f8959_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:25:26 np0005558241 nova_compute[248510]: 2025-12-13 09:25:26.471 248514 INFO nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Deleting local config drive /var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959/disk.config because it was imported into RBD.
Dec 13 04:25:26 np0005558241 systemd-machined[210538]: New machine qemu-187-instance-0000009a.
Dec 13 04:25:26 np0005558241 systemd[1]: Started Virtual Machine qemu-187-instance-0000009a.
Dec 13 04:25:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3730: 321 pgs: 321 active+clean; 73 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 1.3 MiB/s wr, 13 op/s
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.640 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617927.6396694, 958d2cbf-5c68-4aa8-88b5-117c871f8959 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.641 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] VM Resumed (Lifecycle Event)
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.645 248514 DEBUG nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.646 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.651 248514 INFO nova.virt.libvirt.driver [-] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Instance spawned successfully.
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.652 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.670 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.677 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.683 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.684 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.684 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.685 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.685 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.686 248514 DEBUG nova.virt.libvirt.driver [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.712 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.713 248514 DEBUG nova.virt.driver [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] Emitting event <LifecycleEvent: 1765617927.6415715, 958d2cbf-5c68-4aa8-88b5-117c871f8959 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.713 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] VM Started (Lifecycle Event)
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.877 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.882 248514 DEBUG nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.907 248514 INFO nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Took 4.84 seconds to spawn the instance on the hypervisor.
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.908 248514 DEBUG nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:25:27 np0005558241 nova_compute[248510]: 2025-12-13 09:25:27.909 248514 INFO nova.compute.manager [None req-0c6f84b2-bbb4-47d6-84ce-361bc52a36d4 - - - - - -] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:25:28 np0005558241 nova_compute[248510]: 2025-12-13 09:25:28.047 248514 INFO nova.compute.manager [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Took 7.32 seconds to build instance.
Dec 13 04:25:28 np0005558241 nova_compute[248510]: 2025-12-13 09:25:28.089 248514 DEBUG oslo_concurrency.lockutils [None req-f780cff6-65e4-4eb2-a489-b41aee0b876a 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "958d2cbf-5c68-4aa8-88b5-117c871f8959" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.422s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:25:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3731: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 806 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Dec 13 04:25:29 np0005558241 nova_compute[248510]: 2025-12-13 09:25:29.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:25:30 np0005558241 nova_compute[248510]: 2025-12-13 09:25:30.084 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:30 np0005558241 nova_compute[248510]: 2025-12-13 09:25:30.433 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Acquiring lock "958d2cbf-5c68-4aa8-88b5-117c871f8959" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:25:30 np0005558241 nova_compute[248510]: 2025-12-13 09:25:30.433 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "958d2cbf-5c68-4aa8-88b5-117c871f8959" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:25:30 np0005558241 nova_compute[248510]: 2025-12-13 09:25:30.434 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Acquiring lock "958d2cbf-5c68-4aa8-88b5-117c871f8959-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:25:30 np0005558241 nova_compute[248510]: 2025-12-13 09:25:30.434 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "958d2cbf-5c68-4aa8-88b5-117c871f8959-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:25:30 np0005558241 nova_compute[248510]: 2025-12-13 09:25:30.434 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "958d2cbf-5c68-4aa8-88b5-117c871f8959-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:25:30 np0005558241 nova_compute[248510]: 2025-12-13 09:25:30.435 248514 INFO nova.compute.manager [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Terminating instance
Dec 13 04:25:30 np0005558241 nova_compute[248510]: 2025-12-13 09:25:30.436 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Acquiring lock "refresh_cache-958d2cbf-5c68-4aa8-88b5-117c871f8959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:25:30 np0005558241 nova_compute[248510]: 2025-12-13 09:25:30.436 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Acquired lock "refresh_cache-958d2cbf-5c68-4aa8-88b5-117c871f8959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:25:30 np0005558241 nova_compute[248510]: 2025-12-13 09:25:30.436 248514 DEBUG nova.network.neutron [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:25:30 np0005558241 nova_compute[248510]: 2025-12-13 09:25:30.799 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:30 np0005558241 nova_compute[248510]: 2025-12-13 09:25:30.967 248514 DEBUG nova.network.neutron [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:25:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3732: 321 pgs: 321 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Dec 13 04:25:31 np0005558241 nova_compute[248510]: 2025-12-13 09:25:31.296 248514 DEBUG nova.network.neutron [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:25:31 np0005558241 nova_compute[248510]: 2025-12-13 09:25:31.313 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Releasing lock "refresh_cache-958d2cbf-5c68-4aa8-88b5-117c871f8959" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:25:31 np0005558241 nova_compute[248510]: 2025-12-13 09:25:31.314 248514 DEBUG nova.compute.manager [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:25:31 np0005558241 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Dec 13 04:25:31 np0005558241 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d0000009a.scope: Consumed 4.447s CPU time.
Dec 13 04:25:31 np0005558241 systemd-machined[210538]: Machine qemu-187-instance-0000009a terminated.
Dec 13 04:25:31 np0005558241 nova_compute[248510]: 2025-12-13 09:25:31.748 248514 INFO nova.virt.libvirt.driver [-] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Instance destroyed successfully.
Dec 13 04:25:31 np0005558241 nova_compute[248510]: 2025-12-13 09:25:31.749 248514 DEBUG nova.objects.instance [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lazy-loading 'resources' on Instance uuid 958d2cbf-5c68-4aa8-88b5-117c871f8959 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:25:31 np0005558241 nova_compute[248510]: 2025-12-13 09:25:31.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:25:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3733: 321 pgs: 321 active+clean; 85 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Dec 13 04:25:33 np0005558241 nova_compute[248510]: 2025-12-13 09:25:33.282 248514 INFO nova.virt.libvirt.driver [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Deleting instance files /var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959_del
Dec 13 04:25:33 np0005558241 nova_compute[248510]: 2025-12-13 09:25:33.283 248514 INFO nova.virt.libvirt.driver [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Deletion of /var/lib/nova/instances/958d2cbf-5c68-4aa8-88b5-117c871f8959_del complete
Dec 13 04:25:33 np0005558241 nova_compute[248510]: 2025-12-13 09:25:33.340 248514 INFO nova.compute.manager [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Took 2.03 seconds to destroy the instance on the hypervisor.
Dec 13 04:25:33 np0005558241 nova_compute[248510]: 2025-12-13 09:25:33.341 248514 DEBUG oslo.service.loopingcall [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:25:33 np0005558241 nova_compute[248510]: 2025-12-13 09:25:33.341 248514 DEBUG nova.compute.manager [-] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:25:33 np0005558241 nova_compute[248510]: 2025-12-13 09:25:33.341 248514 DEBUG nova.network.neutron [-] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:25:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:33 np0005558241 nova_compute[248510]: 2025-12-13 09:25:33.955 248514 DEBUG nova.network.neutron [-] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:25:33 np0005558241 nova_compute[248510]: 2025-12-13 09:25:33.972 248514 DEBUG nova.network.neutron [-] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:25:34 np0005558241 nova_compute[248510]: 2025-12-13 09:25:34.005 248514 INFO nova.compute.manager [-] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Took 0.66 seconds to deallocate network for instance.
Dec 13 04:25:34 np0005558241 nova_compute[248510]: 2025-12-13 09:25:34.066 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:25:34 np0005558241 nova_compute[248510]: 2025-12-13 09:25:34.067 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:25:34 np0005558241 nova_compute[248510]: 2025-12-13 09:25:34.352 248514 DEBUG oslo_concurrency.processutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:25:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:25:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3803142436' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:25:34 np0005558241 nova_compute[248510]: 2025-12-13 09:25:34.972 248514 DEBUG oslo_concurrency.processutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:25:34 np0005558241 nova_compute[248510]: 2025-12-13 09:25:34.981 248514 DEBUG nova.compute.provider_tree [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:25:35 np0005558241 nova_compute[248510]: 2025-12-13 09:25:35.007 248514 DEBUG nova.scheduler.client.report [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:25:35 np0005558241 nova_compute[248510]: 2025-12-13 09:25:35.035 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:25:35 np0005558241 nova_compute[248510]: 2025-12-13 09:25:35.064 248514 INFO nova.scheduler.client.report [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Deleted allocations for instance 958d2cbf-5c68-4aa8-88b5-117c871f8959
Dec 13 04:25:35 np0005558241 nova_compute[248510]: 2025-12-13 09:25:35.085 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:35 np0005558241 nova_compute[248510]: 2025-12-13 09:25:35.161 248514 DEBUG oslo_concurrency.lockutils [None req-f943bdf1-098b-4a37-9dd7-823ffe961bfa 075d71b5d40a478690478a7e4dcb68bc 77bc8076d5d1433c8b0b9fc3dc0286e2 - - default default] Lock "958d2cbf-5c68-4aa8-88b5-117c871f8959" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:25:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3734: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Dec 13 04:25:35 np0005558241 nova_compute[248510]: 2025-12-13 09:25:35.803 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3735: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 489 KiB/s wr, 113 op/s
Dec 13 04:25:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3736: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 489 KiB/s wr, 114 op/s
Dec 13 04:25:39 np0005558241 nova_compute[248510]: 2025-12-13 09:25:39.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:25:39 np0005558241 nova_compute[248510]: 2025-12-13 09:25:39.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:25:39 np0005558241 nova_compute[248510]: 2025-12-13 09:25:39.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:25:39 np0005558241 nova_compute[248510]: 2025-12-13 09:25:39.808 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:25:40 np0005558241 nova_compute[248510]: 2025-12-13 09:25:40.088 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:25:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:25:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:25:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:25:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:25:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:25:40 np0005558241 nova_compute[248510]: 2025-12-13 09:25:40.806 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3737: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 13 KiB/s wr, 65 op/s
Dec 13 04:25:41 np0005558241 nova_compute[248510]: 2025-12-13 09:25:41.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:25:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:25:42.133 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:25:42 np0005558241 nova_compute[248510]: 2025-12-13 09:25:42.134 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:25:42 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:25:42.135 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:25:42 np0005558241 nova_compute[248510]: 2025-12-13 09:25:42.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:25:42 np0005558241 nova_compute[248510]: 2025-12-13 09:25:42.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:25:42 np0005558241 nova_compute[248510]: 2025-12-13 09:25:42.797 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:25:42 np0005558241 nova_compute[248510]: 2025-12-13 09:25:42.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:25:42 np0005558241 nova_compute[248510]: 2025-12-13 09:25:42.798 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:25:42 np0005558241 nova_compute[248510]: 2025-12-13 09:25:42.799 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:25:42 np0005558241 nova_compute[248510]: 2025-12-13 09:25:42.799 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:25:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3738: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 1.2 KiB/s wr, 38 op/s
Dec 13 04:25:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:25:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2783843062' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:25:43 np0005558241 nova_compute[248510]: 2025-12-13 09:25:43.375 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:25:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:43 np0005558241 nova_compute[248510]: 2025-12-13 09:25:43.559 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:25:43 np0005558241 nova_compute[248510]: 2025-12-13 09:25:43.560 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3437MB free_disk=59.98737745825201GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:25:43 np0005558241 nova_compute[248510]: 2025-12-13 09:25:43.561 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:25:43 np0005558241 nova_compute[248510]: 2025-12-13 09:25:43.561 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:25:43 np0005558241 nova_compute[248510]: 2025-12-13 09:25:43.641 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:25:43 np0005558241 nova_compute[248510]: 2025-12-13 09:25:43.642 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:25:43 np0005558241 nova_compute[248510]: 2025-12-13 09:25:43.659 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:25:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:25:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2025571377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:25:44 np0005558241 nova_compute[248510]: 2025-12-13 09:25:44.234 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:25:44 np0005558241 nova_compute[248510]: 2025-12-13 09:25:44.244 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:25:44 np0005558241 nova_compute[248510]: 2025-12-13 09:25:44.283 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:25:44 np0005558241 nova_compute[248510]: 2025-12-13 09:25:44.331 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:25:44 np0005558241 nova_compute[248510]: 2025-12-13 09:25:44.331 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:25:45 np0005558241 nova_compute[248510]: 2025-12-13 09:25:45.090 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:25:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3739: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 938 B/s wr, 22 op/s
Dec 13 04:25:45 np0005558241 nova_compute[248510]: 2025-12-13 09:25:45.333 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:25:45 np0005558241 nova_compute[248510]: 2025-12-13 09:25:45.334 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:25:45 np0005558241 nova_compute[248510]: 2025-12-13 09:25:45.334 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:25:45 np0005558241 nova_compute[248510]: 2025-12-13 09:25:45.809 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:25:46 np0005558241 nova_compute[248510]: 2025-12-13 09:25:46.745 248514 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765617931.7435424, 958d2cbf-5c68-4aa8-88b5-117c871f8959 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec 13 04:25:46 np0005558241 nova_compute[248510]: 2025-12-13 09:25:46.745 248514 INFO nova.compute.manager [-] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] VM Stopped (Lifecycle Event)#033[00m
Dec 13 04:25:46 np0005558241 nova_compute[248510]: 2025-12-13 09:25:46.777 248514 DEBUG nova.compute.manager [None req-3b6d0744-5dbf-4d2f-a044-329a0a30429c - - - - - -] [instance: 958d2cbf-5c68-4aa8-88b5-117c871f8959] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec 13 04:25:47 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:25:47.137 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:25:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3740: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Dec 13 04:25:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3741: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Dec 13 04:25:50 np0005558241 podman[411170]: 2025-12-13 09:25:49.999558974 +0000 UTC m=+0.081754131 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 04:25:50 np0005558241 podman[411171]: 2025-12-13 09:25:50.019116934 +0000 UTC m=+0.092485870 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 04:25:50 np0005558241 podman[411169]: 2025-12-13 09:25:50.033014672 +0000 UTC m=+0.111452845 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:25:50 np0005558241 nova_compute[248510]: 2025-12-13 09:25:50.092 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:25:50 np0005558241 nova_compute[248510]: 2025-12-13 09:25:50.811 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:25:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:25:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:25:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:25:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:25:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:25:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:25:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:25:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:25:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:25:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:25:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:25:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:25:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3742: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:25:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:25:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:25:51 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:25:51 np0005558241 podman[411376]: 2025-12-13 09:25:51.535650369 +0000 UTC m=+0.045338408 container create fbafd7b4f349dc628f5267072c68cda3f92e9657fa46dbf9df4b101af6379173 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mayer, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 04:25:51 np0005558241 systemd[1]: Started libpod-conmon-fbafd7b4f349dc628f5267072c68cda3f92e9657fa46dbf9df4b101af6379173.scope.
Dec 13 04:25:51 np0005558241 podman[411376]: 2025-12-13 09:25:51.516517729 +0000 UTC m=+0.026205788 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:25:51 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:25:51 np0005558241 podman[411376]: 2025-12-13 09:25:51.64063384 +0000 UTC m=+0.150321899 container init fbafd7b4f349dc628f5267072c68cda3f92e9657fa46dbf9df4b101af6379173 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mayer, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 04:25:51 np0005558241 podman[411376]: 2025-12-13 09:25:51.649267427 +0000 UTC m=+0.158955466 container start fbafd7b4f349dc628f5267072c68cda3f92e9657fa46dbf9df4b101af6379173 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mayer, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:25:51 np0005558241 podman[411376]: 2025-12-13 09:25:51.652711273 +0000 UTC m=+0.162399332 container attach fbafd7b4f349dc628f5267072c68cda3f92e9657fa46dbf9df4b101af6379173 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mayer, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 04:25:51 np0005558241 thirsty_mayer[411392]: 167 167
Dec 13 04:25:51 np0005558241 systemd[1]: libpod-fbafd7b4f349dc628f5267072c68cda3f92e9657fa46dbf9df4b101af6379173.scope: Deactivated successfully.
Dec 13 04:25:51 np0005558241 podman[411376]: 2025-12-13 09:25:51.655914983 +0000 UTC m=+0.165603052 container died fbafd7b4f349dc628f5267072c68cda3f92e9657fa46dbf9df4b101af6379173 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mayer, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 04:25:51 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a2dbb4d6ccdceca01990aa13f198529ad3cb278d150389150f4bbfabc60f837f-merged.mount: Deactivated successfully.
Dec 13 04:25:51 np0005558241 podman[411376]: 2025-12-13 09:25:51.701527447 +0000 UTC m=+0.211215486 container remove fbafd7b4f349dc628f5267072c68cda3f92e9657fa46dbf9df4b101af6379173 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 04:25:51 np0005558241 systemd[1]: libpod-conmon-fbafd7b4f349dc628f5267072c68cda3f92e9657fa46dbf9df4b101af6379173.scope: Deactivated successfully.
Dec 13 04:25:51 np0005558241 podman[411416]: 2025-12-13 09:25:51.940637039 +0000 UTC m=+0.059186214 container create 50984c74919c55ad6bc481efa3951864a1bea0cb63a1c20428cb681d28d78831 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:25:51 np0005558241 systemd[1]: Started libpod-conmon-50984c74919c55ad6bc481efa3951864a1bea0cb63a1c20428cb681d28d78831.scope.
Dec 13 04:25:52 np0005558241 podman[411416]: 2025-12-13 09:25:51.917848918 +0000 UTC m=+0.036398153 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:25:52 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:25:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d4e2c183cabcdf76539998cce31943987075f3140f1cea395657f6adbcf154/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d4e2c183cabcdf76539998cce31943987075f3140f1cea395657f6adbcf154/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d4e2c183cabcdf76539998cce31943987075f3140f1cea395657f6adbcf154/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d4e2c183cabcdf76539998cce31943987075f3140f1cea395657f6adbcf154/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d4e2c183cabcdf76539998cce31943987075f3140f1cea395657f6adbcf154/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:52 np0005558241 podman[411416]: 2025-12-13 09:25:52.045115088 +0000 UTC m=+0.163664293 container init 50984c74919c55ad6bc481efa3951864a1bea0cb63a1c20428cb681d28d78831 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:25:52 np0005558241 podman[411416]: 2025-12-13 09:25:52.055233832 +0000 UTC m=+0.173783007 container start 50984c74919c55ad6bc481efa3951864a1bea0cb63a1c20428cb681d28d78831 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shaw, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:25:52 np0005558241 podman[411416]: 2025-12-13 09:25:52.059271653 +0000 UTC m=+0.177820828 container attach 50984c74919c55ad6bc481efa3951864a1bea0cb63a1c20428cb681d28d78831 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 04:25:52 np0005558241 crazy_shaw[411433]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:25:52 np0005558241 crazy_shaw[411433]: --> All data devices are unavailable
Dec 13 04:25:52 np0005558241 systemd[1]: libpod-50984c74919c55ad6bc481efa3951864a1bea0cb63a1c20428cb681d28d78831.scope: Deactivated successfully.
Dec 13 04:25:52 np0005558241 podman[411416]: 2025-12-13 09:25:52.65635456 +0000 UTC m=+0.774903735 container died 50984c74919c55ad6bc481efa3951864a1bea0cb63a1c20428cb681d28d78831 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shaw, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:25:52 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e4d4e2c183cabcdf76539998cce31943987075f3140f1cea395657f6adbcf154-merged.mount: Deactivated successfully.
Dec 13 04:25:52 np0005558241 podman[411416]: 2025-12-13 09:25:52.716655802 +0000 UTC m=+0.835204977 container remove 50984c74919c55ad6bc481efa3951864a1bea0cb63a1c20428cb681d28d78831 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_shaw, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 04:25:52 np0005558241 systemd[1]: libpod-conmon-50984c74919c55ad6bc481efa3951864a1bea0cb63a1c20428cb681d28d78831.scope: Deactivated successfully.
Dec 13 04:25:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3743: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:25:53 np0005558241 podman[411527]: 2025-12-13 09:25:53.263996292 +0000 UTC m=+0.064144829 container create e57ba0e0e2958861771953b879bc60a11f5bf9ba10d9c248ec53e21e4f7271b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 04:25:53 np0005558241 systemd[1]: Started libpod-conmon-e57ba0e0e2958861771953b879bc60a11f5bf9ba10d9c248ec53e21e4f7271b7.scope.
Dec 13 04:25:53 np0005558241 podman[411527]: 2025-12-13 09:25:53.237371585 +0000 UTC m=+0.037520182 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:25:53 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:25:53 np0005558241 podman[411527]: 2025-12-13 09:25:53.360780398 +0000 UTC m=+0.160928935 container init e57ba0e0e2958861771953b879bc60a11f5bf9ba10d9c248ec53e21e4f7271b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_murdock, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:25:53 np0005558241 podman[411527]: 2025-12-13 09:25:53.367672981 +0000 UTC m=+0.167821498 container start e57ba0e0e2958861771953b879bc60a11f5bf9ba10d9c248ec53e21e4f7271b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 04:25:53 np0005558241 podman[411527]: 2025-12-13 09:25:53.371321032 +0000 UTC m=+0.171469549 container attach e57ba0e0e2958861771953b879bc60a11f5bf9ba10d9c248ec53e21e4f7271b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_murdock, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 04:25:53 np0005558241 lucid_murdock[411543]: 167 167
Dec 13 04:25:53 np0005558241 systemd[1]: libpod-e57ba0e0e2958861771953b879bc60a11f5bf9ba10d9c248ec53e21e4f7271b7.scope: Deactivated successfully.
Dec 13 04:25:53 np0005558241 podman[411527]: 2025-12-13 09:25:53.376698227 +0000 UTC m=+0.176846734 container died e57ba0e0e2958861771953b879bc60a11f5bf9ba10d9c248ec53e21e4f7271b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 04:25:53 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a3478e4047d20cbb5e9647f8a84f895551596915a0f5511e3b56ad7a4174eb38-merged.mount: Deactivated successfully.
Dec 13 04:25:53 np0005558241 podman[411527]: 2025-12-13 09:25:53.415557811 +0000 UTC m=+0.215706318 container remove e57ba0e0e2958861771953b879bc60a11f5bf9ba10d9c248ec53e21e4f7271b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:25:53 np0005558241 systemd[1]: libpod-conmon-e57ba0e0e2958861771953b879bc60a11f5bf9ba10d9c248ec53e21e4f7271b7.scope: Deactivated successfully.
Dec 13 04:25:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:53 np0005558241 podman[411565]: 2025-12-13 09:25:53.60422225 +0000 UTC m=+0.058093687 container create 97ffc47f6275859f1efcaaa40480d249f06ab17d24b3792c7e79a4d9adab7799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:25:53 np0005558241 systemd[1]: Started libpod-conmon-97ffc47f6275859f1efcaaa40480d249f06ab17d24b3792c7e79a4d9adab7799.scope.
Dec 13 04:25:53 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:25:53 np0005558241 podman[411565]: 2025-12-13 09:25:53.574991428 +0000 UTC m=+0.028862955 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:25:53 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dcb850714c89b750f59c68808535eb555e87be946ba6d98186091955920a419/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:53 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dcb850714c89b750f59c68808535eb555e87be946ba6d98186091955920a419/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:53 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dcb850714c89b750f59c68808535eb555e87be946ba6d98186091955920a419/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:53 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dcb850714c89b750f59c68808535eb555e87be946ba6d98186091955920a419/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:53 np0005558241 podman[411565]: 2025-12-13 09:25:53.692259797 +0000 UTC m=+0.146131244 container init 97ffc47f6275859f1efcaaa40480d249f06ab17d24b3792c7e79a4d9adab7799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 04:25:53 np0005558241 podman[411565]: 2025-12-13 09:25:53.704223187 +0000 UTC m=+0.158094634 container start 97ffc47f6275859f1efcaaa40480d249f06ab17d24b3792c7e79a4d9adab7799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:25:53 np0005558241 podman[411565]: 2025-12-13 09:25:53.708517265 +0000 UTC m=+0.162388692 container attach 97ffc47f6275859f1efcaaa40480d249f06ab17d24b3792c7e79a4d9adab7799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 04:25:54 np0005558241 brave_villani[411581]: {
Dec 13 04:25:54 np0005558241 brave_villani[411581]:    "0": [
Dec 13 04:25:54 np0005558241 brave_villani[411581]:        {
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "devices": [
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "/dev/loop3"
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            ],
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "lv_name": "ceph_lv0",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "lv_size": "21470642176",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "name": "ceph_lv0",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "tags": {
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.cluster_name": "ceph",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.crush_device_class": "",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.encrypted": "0",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.objectstore": "bluestore",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.osd_id": "0",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.type": "block",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.vdo": "0",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.with_tpm": "0"
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            },
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "type": "block",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "vg_name": "ceph_vg0"
Dec 13 04:25:54 np0005558241 brave_villani[411581]:        }
Dec 13 04:25:54 np0005558241 brave_villani[411581]:    ],
Dec 13 04:25:54 np0005558241 brave_villani[411581]:    "1": [
Dec 13 04:25:54 np0005558241 brave_villani[411581]:        {
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "devices": [
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "/dev/loop4"
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            ],
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "lv_name": "ceph_lv1",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "lv_size": "21470642176",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "name": "ceph_lv1",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "tags": {
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.cluster_name": "ceph",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.crush_device_class": "",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.encrypted": "0",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.objectstore": "bluestore",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.osd_id": "1",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.type": "block",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.vdo": "0",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.with_tpm": "0"
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            },
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "type": "block",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "vg_name": "ceph_vg1"
Dec 13 04:25:54 np0005558241 brave_villani[411581]:        }
Dec 13 04:25:54 np0005558241 brave_villani[411581]:    ],
Dec 13 04:25:54 np0005558241 brave_villani[411581]:    "2": [
Dec 13 04:25:54 np0005558241 brave_villani[411581]:        {
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "devices": [
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "/dev/loop5"
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            ],
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "lv_name": "ceph_lv2",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "lv_size": "21470642176",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "name": "ceph_lv2",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "tags": {
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.cluster_name": "ceph",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.crush_device_class": "",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.encrypted": "0",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.objectstore": "bluestore",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.osd_id": "2",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.type": "block",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.vdo": "0",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:                "ceph.with_tpm": "0"
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            },
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "type": "block",
Dec 13 04:25:54 np0005558241 brave_villani[411581]:            "vg_name": "ceph_vg2"
Dec 13 04:25:54 np0005558241 brave_villani[411581]:        }
Dec 13 04:25:54 np0005558241 brave_villani[411581]:    ]
Dec 13 04:25:54 np0005558241 brave_villani[411581]: }
Dec 13 04:25:54 np0005558241 systemd[1]: libpod-97ffc47f6275859f1efcaaa40480d249f06ab17d24b3792c7e79a4d9adab7799.scope: Deactivated successfully.
Dec 13 04:25:54 np0005558241 podman[411565]: 2025-12-13 09:25:54.082670023 +0000 UTC m=+0.536541450 container died 97ffc47f6275859f1efcaaa40480d249f06ab17d24b3792c7e79a4d9adab7799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:25:54 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9dcb850714c89b750f59c68808535eb555e87be946ba6d98186091955920a419-merged.mount: Deactivated successfully.
Dec 13 04:25:54 np0005558241 podman[411565]: 2025-12-13 09:25:54.138833391 +0000 UTC m=+0.592704828 container remove 97ffc47f6275859f1efcaaa40480d249f06ab17d24b3792c7e79a4d9adab7799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:25:54 np0005558241 systemd[1]: libpod-conmon-97ffc47f6275859f1efcaaa40480d249f06ab17d24b3792c7e79a4d9adab7799.scope: Deactivated successfully.
Dec 13 04:25:54 np0005558241 podman[411667]: 2025-12-13 09:25:54.735689663 +0000 UTC m=+0.073350900 container create 7a7aefcc74ef060c243252936ec42e4b4cb598f19bf6994aabd48b9cf2abde1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wright, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:25:54 np0005558241 systemd[1]: Started libpod-conmon-7a7aefcc74ef060c243252936ec42e4b4cb598f19bf6994aabd48b9cf2abde1e.scope.
Dec 13 04:25:54 np0005558241 podman[411667]: 2025-12-13 09:25:54.706210424 +0000 UTC m=+0.043871721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:25:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:25:54 np0005558241 podman[411667]: 2025-12-13 09:25:54.834615732 +0000 UTC m=+0.172277029 container init 7a7aefcc74ef060c243252936ec42e4b4cb598f19bf6994aabd48b9cf2abde1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wright, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:25:54 np0005558241 podman[411667]: 2025-12-13 09:25:54.849049274 +0000 UTC m=+0.186710501 container start 7a7aefcc74ef060c243252936ec42e4b4cb598f19bf6994aabd48b9cf2abde1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wright, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:25:54 np0005558241 podman[411667]: 2025-12-13 09:25:54.852676715 +0000 UTC m=+0.190337932 container attach 7a7aefcc74ef060c243252936ec42e4b4cb598f19bf6994aabd48b9cf2abde1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wright, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:25:54 np0005558241 happy_wright[411683]: 167 167
Dec 13 04:25:54 np0005558241 systemd[1]: libpod-7a7aefcc74ef060c243252936ec42e4b4cb598f19bf6994aabd48b9cf2abde1e.scope: Deactivated successfully.
Dec 13 04:25:54 np0005558241 podman[411667]: 2025-12-13 09:25:54.858799439 +0000 UTC m=+0.196460656 container died 7a7aefcc74ef060c243252936ec42e4b4cb598f19bf6994aabd48b9cf2abde1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wright, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:25:54 np0005558241 systemd[1]: var-lib-containers-storage-overlay-88956d8d9841c168bed1cca6c6a029cf1365c8060bebdd223dcab78c95697cae-merged.mount: Deactivated successfully.
Dec 13 04:25:54 np0005558241 podman[411667]: 2025-12-13 09:25:54.901452138 +0000 UTC m=+0.239113345 container remove 7a7aefcc74ef060c243252936ec42e4b4cb598f19bf6994aabd48b9cf2abde1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:25:54 np0005558241 systemd[1]: libpod-conmon-7a7aefcc74ef060c243252936ec42e4b4cb598f19bf6994aabd48b9cf2abde1e.scope: Deactivated successfully.
Dec 13 04:25:55 np0005558241 podman[411706]: 2025-12-13 09:25:55.103477942 +0000 UTC m=+0.044970898 container create 78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_gagarin, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 04:25:55 np0005558241 nova_compute[248510]: 2025-12-13 09:25:55.140 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:25:55 np0005558241 systemd[1]: Started libpod-conmon-78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a.scope.
Dec 13 04:25:55 np0005558241 podman[411706]: 2025-12-13 09:25:55.084430505 +0000 UTC m=+0.025923461 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:25:55 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:25:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f2b9d9fdf6f6b19ab9b9f85cacd3e87aedf3d3c0b61d508bec58892e1f505f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f2b9d9fdf6f6b19ab9b9f85cacd3e87aedf3d3c0b61d508bec58892e1f505f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f2b9d9fdf6f6b19ab9b9f85cacd3e87aedf3d3c0b61d508bec58892e1f505f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f2b9d9fdf6f6b19ab9b9f85cacd3e87aedf3d3c0b61d508bec58892e1f505f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:55 np0005558241 podman[411706]: 2025-12-13 09:25:55.216761292 +0000 UTC m=+0.158254278 container init 78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_gagarin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:25:55 np0005558241 podman[411706]: 2025-12-13 09:25:55.224341872 +0000 UTC m=+0.165834828 container start 78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_gagarin, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:25:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3744: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:25:55 np0005558241 podman[411706]: 2025-12-13 09:25:55.231772458 +0000 UTC m=+0.173265414 container attach 78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:25:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:25:55.461 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:25:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:25:55.463 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:25:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:25:55.463 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:25:55 np0005558241 nova_compute[248510]: 2025-12-13 09:25:55.814 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:25:56 np0005558241 lvm[411801]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:25:56 np0005558241 lvm[411799]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:25:56 np0005558241 lvm[411799]: VG ceph_vg0 finished
Dec 13 04:25:56 np0005558241 lvm[411801]: VG ceph_vg1 finished
Dec 13 04:25:56 np0005558241 lvm[411803]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:25:56 np0005558241 lvm[411803]: VG ceph_vg2 finished
Dec 13 04:25:56 np0005558241 sharp_gagarin[411722]: {}
Dec 13 04:25:56 np0005558241 systemd[1]: libpod-78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a.scope: Deactivated successfully.
Dec 13 04:25:56 np0005558241 systemd[1]: libpod-78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a.scope: Consumed 1.584s CPU time.
Dec 13 04:25:56 np0005558241 podman[411706]: 2025-12-13 09:25:56.142207489 +0000 UTC m=+1.083700445 container died 78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 04:25:56 np0005558241 systemd[1]: var-lib-containers-storage-overlay-27f2b9d9fdf6f6b19ab9b9f85cacd3e87aedf3d3c0b61d508bec58892e1f505f-merged.mount: Deactivated successfully.
Dec 13 04:25:56 np0005558241 podman[411706]: 2025-12-13 09:25:56.212100021 +0000 UTC m=+1.153592977 container remove 78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_gagarin, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 04:25:56 np0005558241 systemd[1]: libpod-conmon-78fa967fae455853707c7a573ec33fe62b320b57f360a2918aeb16224d60070a.scope: Deactivated successfully.
Dec 13 04:25:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:25:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:25:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:25:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:25:56 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:25:56 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:25:56 np0005558241 nova_compute[248510]: 2025-12-13 09:25:56.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:25:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3745: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:25:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3746: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:00 np0005558241 nova_compute[248510]: 2025-12-13 09:26:00.143 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:00 np0005558241 nova_compute[248510]: 2025-12-13 09:26:00.818 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3747: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.339437) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617961339472, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 1092, "num_deletes": 251, "total_data_size": 1682058, "memory_usage": 1701088, "flush_reason": "Manual Compaction"}
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617961353852, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 1641299, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74080, "largest_seqno": 75171, "table_properties": {"data_size": 1635982, "index_size": 2776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11377, "raw_average_key_size": 19, "raw_value_size": 1625349, "raw_average_value_size": 2821, "num_data_blocks": 124, "num_entries": 576, "num_filter_entries": 576, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765617859, "oldest_key_time": 1765617859, "file_creation_time": 1765617961, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 14456 microseconds, and 6463 cpu microseconds.
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.353890) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 1641299 bytes OK
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.353911) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.355764) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.355777) EVENT_LOG_v1 {"time_micros": 1765617961355773, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.355797) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 1676971, prev total WAL file size 1676971, number of live WAL files 2.
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.356542) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(1602KB)], [176(9879KB)]
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617961356570, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 11757709, "oldest_snapshot_seqno": -1}
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 9246 keys, 9916046 bytes, temperature: kUnknown
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617961419851, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 9916046, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9860075, "index_size": 31730, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23173, "raw_key_size": 243086, "raw_average_key_size": 26, "raw_value_size": 9701178, "raw_average_value_size": 1049, "num_data_blocks": 1213, "num_entries": 9246, "num_filter_entries": 9246, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765617961, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.420130) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 9916046 bytes
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.421434) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.5 rd, 156.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 9.6 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(13.2) write-amplify(6.0) OK, records in: 9760, records dropped: 514 output_compression: NoCompression
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.421449) EVENT_LOG_v1 {"time_micros": 1765617961421442, "job": 110, "event": "compaction_finished", "compaction_time_micros": 63367, "compaction_time_cpu_micros": 27207, "output_level": 6, "num_output_files": 1, "total_output_size": 9916046, "num_input_records": 9760, "num_output_records": 9246, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617961421785, "job": 110, "event": "table_file_deletion", "file_number": 178}
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765617961423387, "job": 110, "event": "table_file_deletion", "file_number": 176}
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.356492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.423441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.423446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.423448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.423451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:26:01 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:26:01.423453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:26:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3748: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:04 np0005558241 nova_compute[248510]: 2025-12-13 09:26:04.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:26:04 np0005558241 nova_compute[248510]: 2025-12-13 09:26:04.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 13 04:26:05 np0005558241 nova_compute[248510]: 2025-12-13 09:26:05.144 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3749: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:05 np0005558241 nova_compute[248510]: 2025-12-13 09:26:05.821 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3750: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:07 np0005558241 nova_compute[248510]: 2025-12-13 09:26:07.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:26:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3751: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:26:09
Dec 13 04:26:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:26:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:26:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'default.rgw.control', 'volumes', 'vms', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'images']
Dec 13 04:26:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:26:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:26:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:26:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:26:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:26:10 np0005558241 nova_compute[248510]: 2025-12-13 09:26:10.195 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:26:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:26:10 np0005558241 nova_compute[248510]: 2025-12-13 09:26:10.824 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:26:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:26:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:26:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:26:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:26:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:26:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:26:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:26:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:26:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:26:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3752: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3753: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:26:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3580943426' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:26:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3580943426' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3754: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:15 np0005558241 nova_compute[248510]: 2025-12-13 09:26:15.240 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:15 np0005558241 nova_compute[248510]: 2025-12-13 09:26:15.827 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3755: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3756: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:20 np0005558241 nova_compute[248510]: 2025-12-13 09:26:20.241 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:20 np0005558241 nova_compute[248510]: 2025-12-13 09:26:20.829 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:20 np0005558241 podman[411844]: 2025-12-13 09:26:20.980021831 +0000 UTC m=+0.061861462 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 04:26:20 np0005558241 podman[411843]: 2025-12-13 09:26:20.990412262 +0000 UTC m=+0.074912990 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:26:21 np0005558241 podman[411842]: 2025-12-13 09:26:21.048520278 +0000 UTC m=+0.132893512 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3757: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5066138404945055e-05 of space, bias 1.0, pg target 0.004519841521483516 quantized to 32 (current 32)
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006696679221549024 of space, bias 1.0, pg target 0.20090037664647073 quantized to 32 (current 32)
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.715245915521144e-07 of space, bias 4.0, pg target 0.0006858295098625373 quantized to 16 (current 32)
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:26:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3758: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3759: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:25 np0005558241 nova_compute[248510]: 2025-12-13 09:26:25.244 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:25 np0005558241 nova_compute[248510]: 2025-12-13 09:26:25.831 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3760: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3761: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:29 np0005558241 nova_compute[248510]: 2025-12-13 09:26:29.794 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:26:30 np0005558241 nova_compute[248510]: 2025-12-13 09:26:30.246 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:30 np0005558241 nova_compute[248510]: 2025-12-13 09:26:30.834 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3762: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:32 np0005558241 nova_compute[248510]: 2025-12-13 09:26:32.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:26:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3763: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:35 np0005558241 nova_compute[248510]: 2025-12-13 09:26:35.249 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3764: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:35 np0005558241 nova_compute[248510]: 2025-12-13 09:26:35.837 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3765: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3766: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:26:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:26:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:26:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:26:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:26:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:26:40 np0005558241 nova_compute[248510]: 2025-12-13 09:26:40.297 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:40 np0005558241 nova_compute[248510]: 2025-12-13 09:26:40.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:26:40 np0005558241 nova_compute[248510]: 2025-12-13 09:26:40.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:26:40 np0005558241 nova_compute[248510]: 2025-12-13 09:26:40.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:26:40 np0005558241 nova_compute[248510]: 2025-12-13 09:26:40.814 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:26:40 np0005558241 nova_compute[248510]: 2025-12-13 09:26:40.840 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3767: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:41 np0005558241 nova_compute[248510]: 2025-12-13 09:26:41.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:26:41 np0005558241 nova_compute[248510]: 2025-12-13 09:26:41.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:26:41 np0005558241 nova_compute[248510]: 2025-12-13 09:26:41.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 13 04:26:41 np0005558241 nova_compute[248510]: 2025-12-13 09:26:41.809 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 13 04:26:42 np0005558241 nova_compute[248510]: 2025-12-13 09:26:42.810 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:26:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3768: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 13 04:26:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Dec 13 04:26:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Dec 13 04:26:43 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Dec 13 04:26:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:44 np0005558241 nova_compute[248510]: 2025-12-13 09:26:44.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:26:44 np0005558241 nova_compute[248510]: 2025-12-13 09:26:44.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:26:44 np0005558241 nova_compute[248510]: 2025-12-13 09:26:44.803 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:26:44 np0005558241 nova_compute[248510]: 2025-12-13 09:26:44.803 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:26:44 np0005558241 nova_compute[248510]: 2025-12-13 09:26:44.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:26:44 np0005558241 nova_compute[248510]: 2025-12-13 09:26:44.804 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:26:44 np0005558241 nova_compute[248510]: 2025-12-13 09:26:44.804 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:26:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3770: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 614 B/s wr, 12 op/s
Dec 13 04:26:45 np0005558241 nova_compute[248510]: 2025-12-13 09:26:45.298 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:26:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/664761174' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:26:45 np0005558241 nova_compute[248510]: 2025-12-13 09:26:45.358 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:26:45 np0005558241 nova_compute[248510]: 2025-12-13 09:26:45.547 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:26:45 np0005558241 nova_compute[248510]: 2025-12-13 09:26:45.548 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3475MB free_disk=59.98737745825201GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:26:45 np0005558241 nova_compute[248510]: 2025-12-13 09:26:45.549 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:26:45 np0005558241 nova_compute[248510]: 2025-12-13 09:26:45.549 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:26:45 np0005558241 nova_compute[248510]: 2025-12-13 09:26:45.618 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:26:45 np0005558241 nova_compute[248510]: 2025-12-13 09:26:45.619 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:26:45 np0005558241 nova_compute[248510]: 2025-12-13 09:26:45.642 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:26:45 np0005558241 nova_compute[248510]: 2025-12-13 09:26:45.841 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:26:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1736735145' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:26:46 np0005558241 nova_compute[248510]: 2025-12-13 09:26:46.194 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:26:46 np0005558241 nova_compute[248510]: 2025-12-13 09:26:46.201 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:26:46 np0005558241 nova_compute[248510]: 2025-12-13 09:26:46.224 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:26:46 np0005558241 nova_compute[248510]: 2025-12-13 09:26:46.228 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:26:46 np0005558241 nova_compute[248510]: 2025-12-13 09:26:46.228 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:26:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3771: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.6 KiB/s rd, 614 B/s wr, 12 op/s
Dec 13 04:26:48 np0005558241 nova_compute[248510]: 2025-12-13 09:26:48.230 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:26:48 np0005558241 nova_compute[248510]: 2025-12-13 09:26:48.231 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:26:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3772: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 13 04:26:50 np0005558241 nova_compute[248510]: 2025-12-13 09:26:50.300 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:50 np0005558241 nova_compute[248510]: 2025-12-13 09:26:50.846 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3773: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 13 04:26:52 np0005558241 podman[411950]: 2025-12-13 09:26:52.01598819 +0000 UTC m=+0.085853783 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:26:52 np0005558241 podman[411949]: 2025-12-13 09:26:52.018915954 +0000 UTC m=+0.091977007 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:26:52 np0005558241 podman[411948]: 2025-12-13 09:26:52.026946115 +0000 UTC m=+0.103644849 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:26:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3774: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 13 04:26:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Dec 13 04:26:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Dec 13 04:26:54 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Dec 13 04:26:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3776: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.6 KiB/s rd, 818 B/s wr, 12 op/s
Dec 13 04:26:55 np0005558241 nova_compute[248510]: 2025-12-13 09:26:55.303 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:26:55.463 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:26:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:26:55.463 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:26:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:26:55.463 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:26:55 np0005558241 nova_compute[248510]: 2025-12-13 09:26:55.848 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:26:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:26:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:26:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:26:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:26:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3777: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.6 KiB/s rd, 818 B/s wr, 12 op/s
Dec 13 04:26:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:26:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:26:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:26:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:26:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:26:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:26:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:26:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:26:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:26:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:26:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:26:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:26:57 np0005558241 nova_compute[248510]: 2025-12-13 09:26:57.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:26:57 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:26:57 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:26:57 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:26:57 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:26:57 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:26:58 np0005558241 podman[412223]: 2025-12-13 09:26:58.015424175 +0000 UTC m=+0.045039810 container create 59fba2cd108a396500e12067d9efa95b8144b20e5e1c28c0497382144c4a6381 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_shirley, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec 13 04:26:58 np0005558241 systemd[1]: Started libpod-conmon-59fba2cd108a396500e12067d9efa95b8144b20e5e1c28c0497382144c4a6381.scope.
Dec 13 04:26:58 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:26:58 np0005558241 podman[412223]: 2025-12-13 09:26:57.996922661 +0000 UTC m=+0.026538326 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:26:58 np0005558241 podman[412223]: 2025-12-13 09:26:58.10660191 +0000 UTC m=+0.136217625 container init 59fba2cd108a396500e12067d9efa95b8144b20e5e1c28c0497382144c4a6381 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_shirley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:26:58 np0005558241 podman[412223]: 2025-12-13 09:26:58.120223772 +0000 UTC m=+0.149839407 container start 59fba2cd108a396500e12067d9efa95b8144b20e5e1c28c0497382144c4a6381 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:26:58 np0005558241 podman[412223]: 2025-12-13 09:26:58.125121885 +0000 UTC m=+0.154737540 container attach 59fba2cd108a396500e12067d9efa95b8144b20e5e1c28c0497382144c4a6381 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 04:26:58 np0005558241 interesting_shirley[412240]: 167 167
Dec 13 04:26:58 np0005558241 systemd[1]: libpod-59fba2cd108a396500e12067d9efa95b8144b20e5e1c28c0497382144c4a6381.scope: Deactivated successfully.
Dec 13 04:26:58 np0005558241 podman[412223]: 2025-12-13 09:26:58.131995837 +0000 UTC m=+0.161611472 container died 59fba2cd108a396500e12067d9efa95b8144b20e5e1c28c0497382144c4a6381 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_shirley, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:26:58 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ca4842a20d35c399b7b92c15707803d81c57314ea9f219116971d5e02296bab0-merged.mount: Deactivated successfully.
Dec 13 04:26:58 np0005558241 podman[412223]: 2025-12-13 09:26:58.187315514 +0000 UTC m=+0.216931149 container remove 59fba2cd108a396500e12067d9efa95b8144b20e5e1c28c0497382144c4a6381 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_shirley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 04:26:58 np0005558241 systemd[1]: libpod-conmon-59fba2cd108a396500e12067d9efa95b8144b20e5e1c28c0497382144c4a6381.scope: Deactivated successfully.
Dec 13 04:26:58 np0005558241 podman[412266]: 2025-12-13 09:26:58.405725868 +0000 UTC m=+0.059471761 container create 08a268054f7523a0cf0af75e88397e32f8db61f35436bca69c3dba1827e32568 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:26:58 np0005558241 systemd[1]: Started libpod-conmon-08a268054f7523a0cf0af75e88397e32f8db61f35436bca69c3dba1827e32568.scope.
Dec 13 04:26:58 np0005558241 podman[412266]: 2025-12-13 09:26:58.377167253 +0000 UTC m=+0.030913226 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:26:58 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:26:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0337e6097758149f8499fa2827bcbba50d5d02c87a0c2fa70f5f9f0c49943c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0337e6097758149f8499fa2827bcbba50d5d02c87a0c2fa70f5f9f0c49943c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0337e6097758149f8499fa2827bcbba50d5d02c87a0c2fa70f5f9f0c49943c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0337e6097758149f8499fa2827bcbba50d5d02c87a0c2fa70f5f9f0c49943c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0337e6097758149f8499fa2827bcbba50d5d02c87a0c2fa70f5f9f0c49943c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:58 np0005558241 podman[412266]: 2025-12-13 09:26:58.51271812 +0000 UTC m=+0.166464033 container init 08a268054f7523a0cf0af75e88397e32f8db61f35436bca69c3dba1827e32568 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:26:58 np0005558241 podman[412266]: 2025-12-13 09:26:58.520307441 +0000 UTC m=+0.174053324 container start 08a268054f7523a0cf0af75e88397e32f8db61f35436bca69c3dba1827e32568 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 04:26:58 np0005558241 podman[412266]: 2025-12-13 09:26:58.524749862 +0000 UTC m=+0.178495735 container attach 08a268054f7523a0cf0af75e88397e32f8db61f35436bca69c3dba1827e32568 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 04:26:58 np0005558241 strange_lamarr[412283]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:26:58 np0005558241 strange_lamarr[412283]: --> All data devices are unavailable
Dec 13 04:26:59 np0005558241 systemd[1]: libpod-08a268054f7523a0cf0af75e88397e32f8db61f35436bca69c3dba1827e32568.scope: Deactivated successfully.
Dec 13 04:26:59 np0005558241 podman[412266]: 2025-12-13 09:26:59.002025636 +0000 UTC m=+0.655771539 container died 08a268054f7523a0cf0af75e88397e32f8db61f35436bca69c3dba1827e32568 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 04:26:59 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e0337e6097758149f8499fa2827bcbba50d5d02c87a0c2fa70f5f9f0c49943c0-merged.mount: Deactivated successfully.
Dec 13 04:26:59 np0005558241 podman[412266]: 2025-12-13 09:26:59.051825184 +0000 UTC m=+0.705571067 container remove 08a268054f7523a0cf0af75e88397e32f8db61f35436bca69c3dba1827e32568 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:26:59 np0005558241 systemd[1]: libpod-conmon-08a268054f7523a0cf0af75e88397e32f8db61f35436bca69c3dba1827e32568.scope: Deactivated successfully.
Dec 13 04:26:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3778: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:26:59 np0005558241 podman[412376]: 2025-12-13 09:26:59.559230353 +0000 UTC m=+0.045559673 container create b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:26:59 np0005558241 systemd[1]: Started libpod-conmon-b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657.scope.
Dec 13 04:26:59 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:26:59 np0005558241 podman[412376]: 2025-12-13 09:26:59.538714469 +0000 UTC m=+0.025043819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:26:59 np0005558241 podman[412376]: 2025-12-13 09:26:59.651185048 +0000 UTC m=+0.137514468 container init b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 04:26:59 np0005558241 podman[412376]: 2025-12-13 09:26:59.659082006 +0000 UTC m=+0.145411326 container start b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:26:59 np0005558241 podman[412376]: 2025-12-13 09:26:59.663182289 +0000 UTC m=+0.149511609 container attach b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hopper, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:26:59 np0005558241 systemd[1]: libpod-b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657.scope: Deactivated successfully.
Dec 13 04:26:59 np0005558241 upbeat_hopper[412392]: 167 167
Dec 13 04:26:59 np0005558241 conmon[412392]: conmon b00ca2e1f6c8c764efd9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657.scope/container/memory.events
Dec 13 04:26:59 np0005558241 podman[412376]: 2025-12-13 09:26:59.666353989 +0000 UTC m=+0.152683319 container died b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hopper, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:26:59 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3d7c5ce483511536244761534d6305b73ff3e277e3480001313346be9a51530c-merged.mount: Deactivated successfully.
Dec 13 04:26:59 np0005558241 podman[412376]: 2025-12-13 09:26:59.706375242 +0000 UTC m=+0.192704562 container remove b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hopper, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 04:26:59 np0005558241 systemd[1]: libpod-conmon-b00ca2e1f6c8c764efd911dee02cce09f04ca85ed0d0f44b79ccb11941c14657.scope: Deactivated successfully.
Dec 13 04:26:59 np0005558241 podman[412416]: 2025-12-13 09:26:59.857494709 +0000 UTC m=+0.038232539 container create d6d8b7e9e7b06613c05a5cce50d9dd9dd7d87e6aded08ee5a88ccf2b475c99d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mendel, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Dec 13 04:26:59 np0005558241 systemd[1]: Started libpod-conmon-d6d8b7e9e7b06613c05a5cce50d9dd9dd7d87e6aded08ee5a88ccf2b475c99d3.scope.
Dec 13 04:26:59 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:26:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9cb296990ef03f60ee99ba5352894ae6bd1138cc7167c157cb0f814544cf76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9cb296990ef03f60ee99ba5352894ae6bd1138cc7167c157cb0f814544cf76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9cb296990ef03f60ee99ba5352894ae6bd1138cc7167c157cb0f814544cf76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:59 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9cb296990ef03f60ee99ba5352894ae6bd1138cc7167c157cb0f814544cf76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:59 np0005558241 podman[412416]: 2025-12-13 09:26:59.840692488 +0000 UTC m=+0.021430338 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:26:59 np0005558241 podman[412416]: 2025-12-13 09:26:59.939879734 +0000 UTC m=+0.120617594 container init d6d8b7e9e7b06613c05a5cce50d9dd9dd7d87e6aded08ee5a88ccf2b475c99d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 04:26:59 np0005558241 podman[412416]: 2025-12-13 09:26:59.947028603 +0000 UTC m=+0.127766433 container start d6d8b7e9e7b06613c05a5cce50d9dd9dd7d87e6aded08ee5a88ccf2b475c99d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mendel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Dec 13 04:26:59 np0005558241 podman[412416]: 2025-12-13 09:26:59.95047333 +0000 UTC m=+0.131211190 container attach d6d8b7e9e7b06613c05a5cce50d9dd9dd7d87e6aded08ee5a88ccf2b475c99d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]: {
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:    "0": [
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:        {
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "devices": [
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "/dev/loop3"
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            ],
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "lv_name": "ceph_lv0",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "lv_size": "21470642176",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "name": "ceph_lv0",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "tags": {
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.cluster_name": "ceph",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.crush_device_class": "",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.encrypted": "0",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.objectstore": "bluestore",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.osd_id": "0",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.type": "block",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.vdo": "0",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.with_tpm": "0"
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            },
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "type": "block",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "vg_name": "ceph_vg0"
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:        }
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:    ],
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:    "1": [
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:        {
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "devices": [
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "/dev/loop4"
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            ],
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "lv_name": "ceph_lv1",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "lv_size": "21470642176",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "name": "ceph_lv1",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "tags": {
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.cluster_name": "ceph",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.crush_device_class": "",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.encrypted": "0",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.objectstore": "bluestore",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.osd_id": "1",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.type": "block",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.vdo": "0",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.with_tpm": "0"
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            },
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "type": "block",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "vg_name": "ceph_vg1"
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:        }
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:    ],
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:    "2": [
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:        {
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "devices": [
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "/dev/loop5"
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            ],
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "lv_name": "ceph_lv2",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "lv_size": "21470642176",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "name": "ceph_lv2",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "tags": {
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.cluster_name": "ceph",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.crush_device_class": "",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.encrypted": "0",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.objectstore": "bluestore",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.osd_id": "2",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.type": "block",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.vdo": "0",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:                "ceph.with_tpm": "0"
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            },
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "type": "block",
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:            "vg_name": "ceph_vg2"
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:        }
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]:    ]
Dec 13 04:27:00 np0005558241 romantic_mendel[412433]: }
Dec 13 04:27:00 np0005558241 systemd[1]: libpod-d6d8b7e9e7b06613c05a5cce50d9dd9dd7d87e6aded08ee5a88ccf2b475c99d3.scope: Deactivated successfully.
Dec 13 04:27:00 np0005558241 podman[412416]: 2025-12-13 09:27:00.271134598 +0000 UTC m=+0.451872438 container died d6d8b7e9e7b06613c05a5cce50d9dd9dd7d87e6aded08ee5a88ccf2b475c99d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mendel, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 04:27:00 np0005558241 systemd[1]: var-lib-containers-storage-overlay-de9cb296990ef03f60ee99ba5352894ae6bd1138cc7167c157cb0f814544cf76-merged.mount: Deactivated successfully.
Dec 13 04:27:00 np0005558241 nova_compute[248510]: 2025-12-13 09:27:00.303 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:00 np0005558241 podman[412416]: 2025-12-13 09:27:00.314343311 +0000 UTC m=+0.495081171 container remove d6d8b7e9e7b06613c05a5cce50d9dd9dd7d87e6aded08ee5a88ccf2b475c99d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:27:00 np0005558241 systemd[1]: libpod-conmon-d6d8b7e9e7b06613c05a5cce50d9dd9dd7d87e6aded08ee5a88ccf2b475c99d3.scope: Deactivated successfully.
Dec 13 04:27:00 np0005558241 podman[412514]: 2025-12-13 09:27:00.768588057 +0000 UTC m=+0.039596353 container create 869d2667048a2df54cf9f677d27f85d235bfd0f1408fe309664282218f2da2a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:27:00 np0005558241 systemd[1]: Started libpod-conmon-869d2667048a2df54cf9f677d27f85d235bfd0f1408fe309664282218f2da2a6.scope.
Dec 13 04:27:00 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:27:00 np0005558241 podman[412514]: 2025-12-13 09:27:00.749775356 +0000 UTC m=+0.020783672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:27:00 np0005558241 nova_compute[248510]: 2025-12-13 09:27:00.851 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:00 np0005558241 podman[412514]: 2025-12-13 09:27:00.853988198 +0000 UTC m=+0.124996494 container init 869d2667048a2df54cf9f677d27f85d235bfd0f1408fe309664282218f2da2a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_aryabhata, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 04:27:00 np0005558241 podman[412514]: 2025-12-13 09:27:00.861437235 +0000 UTC m=+0.132445531 container start 869d2667048a2df54cf9f677d27f85d235bfd0f1408fe309664282218f2da2a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_aryabhata, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 04:27:00 np0005558241 podman[412514]: 2025-12-13 09:27:00.865971899 +0000 UTC m=+0.136980215 container attach 869d2667048a2df54cf9f677d27f85d235bfd0f1408fe309664282218f2da2a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:27:00 np0005558241 suspicious_aryabhata[412530]: 167 167
Dec 13 04:27:00 np0005558241 systemd[1]: libpod-869d2667048a2df54cf9f677d27f85d235bfd0f1408fe309664282218f2da2a6.scope: Deactivated successfully.
Dec 13 04:27:00 np0005558241 podman[412514]: 2025-12-13 09:27:00.867157808 +0000 UTC m=+0.138166134 container died 869d2667048a2df54cf9f677d27f85d235bfd0f1408fe309664282218f2da2a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_aryabhata, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 04:27:00 np0005558241 systemd[1]: var-lib-containers-storage-overlay-cb3aa2a8cb09b9115ada6fec002b12b9fd14bebaa2c0a158be65852e71c4ab69-merged.mount: Deactivated successfully.
Dec 13 04:27:00 np0005558241 podman[412514]: 2025-12-13 09:27:00.911953401 +0000 UTC m=+0.182961697 container remove 869d2667048a2df54cf9f677d27f85d235bfd0f1408fe309664282218f2da2a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_aryabhata, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:27:00 np0005558241 systemd[1]: libpod-conmon-869d2667048a2df54cf9f677d27f85d235bfd0f1408fe309664282218f2da2a6.scope: Deactivated successfully.
Dec 13 04:27:01 np0005558241 podman[412553]: 2025-12-13 09:27:01.081192724 +0000 UTC m=+0.044209329 container create 5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:27:01 np0005558241 systemd[1]: Started libpod-conmon-5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d.scope.
Dec 13 04:27:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:27:01 np0005558241 podman[412553]: 2025-12-13 09:27:01.060799413 +0000 UTC m=+0.023816038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:27:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30e6414c72e85f9049109e7658633dbe482a643e52ed6488f961e53b436b6056/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:27:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30e6414c72e85f9049109e7658633dbe482a643e52ed6488f961e53b436b6056/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:27:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30e6414c72e85f9049109e7658633dbe482a643e52ed6488f961e53b436b6056/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:27:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30e6414c72e85f9049109e7658633dbe482a643e52ed6488f961e53b436b6056/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:27:01 np0005558241 podman[412553]: 2025-12-13 09:27:01.177780695 +0000 UTC m=+0.140797300 container init 5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 04:27:01 np0005558241 podman[412553]: 2025-12-13 09:27:01.188856013 +0000 UTC m=+0.151872618 container start 5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_johnson, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 04:27:01 np0005558241 podman[412553]: 2025-12-13 09:27:01.192954416 +0000 UTC m=+0.155971041 container attach 5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_johnson, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:27:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3779: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:01 np0005558241 lvm[412647]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:27:01 np0005558241 lvm[412648]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:27:01 np0005558241 lvm[412648]: VG ceph_vg1 finished
Dec 13 04:27:01 np0005558241 lvm[412647]: VG ceph_vg0 finished
Dec 13 04:27:01 np0005558241 lvm[412650]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:27:01 np0005558241 lvm[412650]: VG ceph_vg2 finished
Dec 13 04:27:02 np0005558241 stupefied_johnson[412569]: {}
Dec 13 04:27:02 np0005558241 systemd[1]: libpod-5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d.scope: Deactivated successfully.
Dec 13 04:27:02 np0005558241 systemd[1]: libpod-5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d.scope: Consumed 1.528s CPU time.
Dec 13 04:27:02 np0005558241 podman[412553]: 2025-12-13 09:27:02.123905252 +0000 UTC m=+1.086921857 container died 5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 04:27:02 np0005558241 systemd[1]: var-lib-containers-storage-overlay-30e6414c72e85f9049109e7658633dbe482a643e52ed6488f961e53b436b6056-merged.mount: Deactivated successfully.
Dec 13 04:27:02 np0005558241 podman[412553]: 2025-12-13 09:27:02.175803793 +0000 UTC m=+1.138820398 container remove 5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_johnson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:27:02 np0005558241 systemd[1]: libpod-conmon-5d17d12c514b55615f519efbd695d8a59f7c7acdc142a9540136556f83c2a29d.scope: Deactivated successfully.
Dec 13 04:27:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:27:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:27:02 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:27:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:27:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:27:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:27:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3780: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3781: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:05 np0005558241 nova_compute[248510]: 2025-12-13 09:27:05.306 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:05 np0005558241 nova_compute[248510]: 2025-12-13 09:27:05.855 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3782: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.201976) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618029202056, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 814, "num_deletes": 255, "total_data_size": 1129572, "memory_usage": 1157352, "flush_reason": "Manual Compaction"}
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618029214402, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 1099593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75172, "largest_seqno": 75985, "table_properties": {"data_size": 1095425, "index_size": 1883, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9176, "raw_average_key_size": 19, "raw_value_size": 1086943, "raw_average_value_size": 2278, "num_data_blocks": 84, "num_entries": 477, "num_filter_entries": 477, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765617962, "oldest_key_time": 1765617962, "file_creation_time": 1765618029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 12553 microseconds, and 7369 cpu microseconds.
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.214528) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 1099593 bytes OK
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.214556) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.216563) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.216617) EVENT_LOG_v1 {"time_micros": 1765618029216609, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.216645) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 1125491, prev total WAL file size 1125491, number of live WAL files 2.
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.217584) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323636' seq:72057594037927935, type:22 .. '6C6F676D0033353137' seq:0, type:0; will stop at (end)
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(1073KB)], [179(9683KB)]
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618029217655, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 11015639, "oldest_snapshot_seqno": -1}
Dec 13 04:27:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3783: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 9198 keys, 10907740 bytes, temperature: kUnknown
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618029318870, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 10907740, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10850418, "index_size": 33209, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23045, "raw_key_size": 243018, "raw_average_key_size": 26, "raw_value_size": 10690644, "raw_average_value_size": 1162, "num_data_blocks": 1275, "num_entries": 9198, "num_filter_entries": 9198, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765618029, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.319411) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 10907740 bytes
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.321495) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.7 rd, 107.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 9.5 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(19.9) write-amplify(9.9) OK, records in: 9723, records dropped: 525 output_compression: NoCompression
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.321543) EVENT_LOG_v1 {"time_micros": 1765618029321523, "job": 112, "event": "compaction_finished", "compaction_time_micros": 101333, "compaction_time_cpu_micros": 47172, "output_level": 6, "num_output_files": 1, "total_output_size": 10907740, "num_input_records": 9723, "num_output_records": 9198, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618029321966, "job": 112, "event": "table_file_deletion", "file_number": 181}
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618029323824, "job": 112, "event": "table_file_deletion", "file_number": 179}
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.217451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.323907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.323916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.323920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.323924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:27:09 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:27:09.323928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:27:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:27:09
Dec 13 04:27:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:27:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:27:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'backups', 'volumes']
Dec 13 04:27:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:27:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:27:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:27:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:27:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:27:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:27:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:27:10 np0005558241 nova_compute[248510]: 2025-12-13 09:27:10.309 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:10 np0005558241 nova_compute[248510]: 2025-12-13 09:27:10.858 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:27:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:27:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:27:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:27:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:27:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:27:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:27:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:27:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:27:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:27:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3784: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3785: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:27:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3442094109' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:27:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:27:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3442094109' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:27:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3786: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:15 np0005558241 nova_compute[248510]: 2025-12-13 09:27:15.310 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:15 np0005558241 nova_compute[248510]: 2025-12-13 09:27:15.859 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3787: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3788: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:20 np0005558241 nova_compute[248510]: 2025-12-13 09:27:20.312 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:20 np0005558241 nova_compute[248510]: 2025-12-13 09:27:20.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:27:20 np0005558241 nova_compute[248510]: 2025-12-13 09:27:20.862 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3789: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.514783135039224e-05 of space, bias 1.0, pg target 0.004544349405117672 quantized to 32 (current 32)
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000669348866335464 of space, bias 1.0, pg target 0.2008046599006392 quantized to 32 (current 32)
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.717419426042354e-07 of space, bias 4.0, pg target 0.0006860903311250824 quantized to 16 (current 32)
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:27:23 np0005558241 podman[412694]: 2025-12-13 09:27:23.003521008 +0000 UTC m=+0.072391095 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 04:27:23 np0005558241 podman[412693]: 2025-12-13 09:27:23.011865787 +0000 UTC m=+0.087817392 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:27:23 np0005558241 podman[412692]: 2025-12-13 09:27:23.060592549 +0000 UTC m=+0.141585680 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:27:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3790: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3791: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:25 np0005558241 nova_compute[248510]: 2025-12-13 09:27:25.313 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:25 np0005558241 nova_compute[248510]: 2025-12-13 09:27:25.866 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:26 np0005558241 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 13 04:27:26 np0005558241 systemd[1]: virtsecretd.service: Consumed 1.352s CPU time.
Dec 13 04:27:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3792: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3793: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:29 np0005558241 nova_compute[248510]: 2025-12-13 09:27:29.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:27:30 np0005558241 nova_compute[248510]: 2025-12-13 09:27:30.313 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:30 np0005558241 nova_compute[248510]: 2025-12-13 09:27:30.869 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3794: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:32 np0005558241 nova_compute[248510]: 2025-12-13 09:27:32.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:27:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3795: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3796: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:35 np0005558241 nova_compute[248510]: 2025-12-13 09:27:35.315 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:35 np0005558241 nova_compute[248510]: 2025-12-13 09:27:35.871 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3797: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3798: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:27:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:27:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:27:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:27:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:27:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:27:40 np0005558241 nova_compute[248510]: 2025-12-13 09:27:40.319 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:40 np0005558241 nova_compute[248510]: 2025-12-13 09:27:40.873 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3799: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:42 np0005558241 nova_compute[248510]: 2025-12-13 09:27:42.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:27:42 np0005558241 nova_compute[248510]: 2025-12-13 09:27:42.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:27:42 np0005558241 nova_compute[248510]: 2025-12-13 09:27:42.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:27:42 np0005558241 nova_compute[248510]: 2025-12-13 09:27:42.789 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:27:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3800: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:43 np0005558241 nova_compute[248510]: 2025-12-13 09:27:43.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:27:43 np0005558241 nova_compute[248510]: 2025-12-13 09:27:43.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:27:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3801: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:45 np0005558241 nova_compute[248510]: 2025-12-13 09:27:45.322 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:45 np0005558241 nova_compute[248510]: 2025-12-13 09:27:45.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:27:45 np0005558241 nova_compute[248510]: 2025-12-13 09:27:45.875 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:46 np0005558241 nova_compute[248510]: 2025-12-13 09:27:46.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:27:46 np0005558241 nova_compute[248510]: 2025-12-13 09:27:46.882 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:27:46 np0005558241 nova_compute[248510]: 2025-12-13 09:27:46.883 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:27:46 np0005558241 nova_compute[248510]: 2025-12-13 09:27:46.883 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:27:46 np0005558241 nova_compute[248510]: 2025-12-13 09:27:46.883 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:27:46 np0005558241 nova_compute[248510]: 2025-12-13 09:27:46.884 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:27:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3802: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:27:47 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/204429211' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:27:47 np0005558241 nova_compute[248510]: 2025-12-13 09:27:47.554 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:27:47 np0005558241 nova_compute[248510]: 2025-12-13 09:27:47.775 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:27:47 np0005558241 nova_compute[248510]: 2025-12-13 09:27:47.777 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3510MB free_disk=59.987372557632625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:27:47 np0005558241 nova_compute[248510]: 2025-12-13 09:27:47.777 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:27:47 np0005558241 nova_compute[248510]: 2025-12-13 09:27:47.777 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:27:47 np0005558241 nova_compute[248510]: 2025-12-13 09:27:47.902 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:27:47 np0005558241 nova_compute[248510]: 2025-12-13 09:27:47.903 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:27:47 np0005558241 nova_compute[248510]: 2025-12-13 09:27:47.925 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 13 04:27:47 np0005558241 nova_compute[248510]: 2025-12-13 09:27:47.951 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 13 04:27:47 np0005558241 nova_compute[248510]: 2025-12-13 09:27:47.951 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 04:27:47 np0005558241 nova_compute[248510]: 2025-12-13 09:27:47.969 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 13 04:27:47 np0005558241 nova_compute[248510]: 2025-12-13 09:27:47.995 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 13 04:27:48 np0005558241 nova_compute[248510]: 2025-12-13 09:27:48.011 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:27:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:27:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1591096340' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:27:48 np0005558241 nova_compute[248510]: 2025-12-13 09:27:48.618 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:27:48 np0005558241 nova_compute[248510]: 2025-12-13 09:27:48.628 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:27:48 np0005558241 nova_compute[248510]: 2025-12-13 09:27:48.665 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:27:48 np0005558241 nova_compute[248510]: 2025-12-13 09:27:48.668 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:27:48 np0005558241 nova_compute[248510]: 2025-12-13 09:27:48.669 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:27:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3803: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:50 np0005558241 nova_compute[248510]: 2025-12-13 09:27:50.324 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:50 np0005558241 nova_compute[248510]: 2025-12-13 09:27:50.669 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:27:50 np0005558241 nova_compute[248510]: 2025-12-13 09:27:50.670 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:27:50 np0005558241 nova_compute[248510]: 2025-12-13 09:27:50.878 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3804: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3805: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:54 np0005558241 podman[412801]: 2025-12-13 09:27:54.020965339 +0000 UTC m=+0.100554941 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251202)
Dec 13 04:27:54 np0005558241 podman[412802]: 2025-12-13 09:27:54.026686103 +0000 UTC m=+0.090944781 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:27:54 np0005558241 podman[412800]: 2025-12-13 09:27:54.072917842 +0000 UTC m=+0.150013302 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 13 04:27:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3806: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:55 np0005558241 nova_compute[248510]: 2025-12-13 09:27:55.326 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:27:55.464 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:27:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:27:55.464 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:27:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:27:55.464 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:27:55 np0005558241 nova_compute[248510]: 2025-12-13 09:27:55.884 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:27:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3807: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:57 np0005558241 nova_compute[248510]: 2025-12-13 09:27:57.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:27:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3808: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:00 np0005558241 nova_compute[248510]: 2025-12-13 09:28:00.329 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:00 np0005558241 nova_compute[248510]: 2025-12-13 09:28:00.886 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3809: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:28:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:28:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:28:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:28:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:28:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:28:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:28:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:28:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:28:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:28:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:28:03 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:28:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3810: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:03 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:28:03 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:28:03 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:28:03 np0005558241 podman[413006]: 2025-12-13 09:28:03.774781808 +0000 UTC m=+0.104955982 container create 7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 04:28:03 np0005558241 podman[413006]: 2025-12-13 09:28:03.693329557 +0000 UTC m=+0.023503751 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:28:03 np0005558241 systemd[1]: Started libpod-conmon-7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa.scope.
Dec 13 04:28:03 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:28:03 np0005558241 podman[413006]: 2025-12-13 09:28:03.944278847 +0000 UTC m=+0.274453021 container init 7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 04:28:03 np0005558241 podman[413006]: 2025-12-13 09:28:03.959102529 +0000 UTC m=+0.289276703 container start 7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sanderson, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:28:03 np0005558241 podman[413006]: 2025-12-13 09:28:03.963018197 +0000 UTC m=+0.293192391 container attach 7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sanderson, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Dec 13 04:28:03 np0005558241 sleepy_sanderson[413022]: 167 167
Dec 13 04:28:03 np0005558241 systemd[1]: libpod-7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa.scope: Deactivated successfully.
Dec 13 04:28:03 np0005558241 conmon[413022]: conmon 7f6aea2e27b4ef5414ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa.scope/container/memory.events
Dec 13 04:28:03 np0005558241 podman[413006]: 2025-12-13 09:28:03.970455243 +0000 UTC m=+0.300629447 container died 7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sanderson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 04:28:04 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c0fc4b8c7570c1dc097dde53aeccc838b0e35efc3d0a01fc6ff905421f55ef9b-merged.mount: Deactivated successfully.
Dec 13 04:28:04 np0005558241 podman[413006]: 2025-12-13 09:28:04.02295889 +0000 UTC m=+0.353133064 container remove 7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:28:04 np0005558241 systemd[1]: libpod-conmon-7f6aea2e27b4ef5414ee800c86d83fefe59031f8727d7ec20e43ba8dd898d3aa.scope: Deactivated successfully.
Dec 13 04:28:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:04 np0005558241 podman[413047]: 2025-12-13 09:28:04.289388507 +0000 UTC m=+0.075982055 container create 1bf4c7efa6b73b11cc58c8c7221d57c01c454dee7a6d59f501bbd9ca67a468a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 04:28:04 np0005558241 systemd[1]: Started libpod-conmon-1bf4c7efa6b73b11cc58c8c7221d57c01c454dee7a6d59f501bbd9ca67a468a3.scope.
Dec 13 04:28:04 np0005558241 podman[413047]: 2025-12-13 09:28:04.260131704 +0000 UTC m=+0.046725322 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:28:04 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:28:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3feeb214adeed1d61fdc947c6108ffe7c691c0b5cf8f755b7742ae229589ad7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3feeb214adeed1d61fdc947c6108ffe7c691c0b5cf8f755b7742ae229589ad7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3feeb214adeed1d61fdc947c6108ffe7c691c0b5cf8f755b7742ae229589ad7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3feeb214adeed1d61fdc947c6108ffe7c691c0b5cf8f755b7742ae229589ad7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3feeb214adeed1d61fdc947c6108ffe7c691c0b5cf8f755b7742ae229589ad7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:04 np0005558241 podman[413047]: 2025-12-13 09:28:04.39482261 +0000 UTC m=+0.181416168 container init 1bf4c7efa6b73b11cc58c8c7221d57c01c454dee7a6d59f501bbd9ca67a468a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 04:28:04 np0005558241 podman[413047]: 2025-12-13 09:28:04.402244276 +0000 UTC m=+0.188837794 container start 1bf4c7efa6b73b11cc58c8c7221d57c01c454dee7a6d59f501bbd9ca67a468a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 04:28:04 np0005558241 podman[413047]: 2025-12-13 09:28:04.40757657 +0000 UTC m=+0.194170098 container attach 1bf4c7efa6b73b11cc58c8c7221d57c01c454dee7a6d59f501bbd9ca67a468a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 04:28:04 np0005558241 clever_kalam[413064]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:28:04 np0005558241 clever_kalam[413064]: --> All data devices are unavailable
Dec 13 04:28:04 np0005558241 systemd[1]: libpod-1bf4c7efa6b73b11cc58c8c7221d57c01c454dee7a6d59f501bbd9ca67a468a3.scope: Deactivated successfully.
Dec 13 04:28:04 np0005558241 podman[413047]: 2025-12-13 09:28:04.977916756 +0000 UTC m=+0.764510294 container died 1bf4c7efa6b73b11cc58c8c7221d57c01c454dee7a6d59f501bbd9ca67a468a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 04:28:05 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d3feeb214adeed1d61fdc947c6108ffe7c691c0b5cf8f755b7742ae229589ad7-merged.mount: Deactivated successfully.
Dec 13 04:28:05 np0005558241 podman[413047]: 2025-12-13 09:28:05.024293449 +0000 UTC m=+0.810886967 container remove 1bf4c7efa6b73b11cc58c8c7221d57c01c454dee7a6d59f501bbd9ca67a468a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:28:05 np0005558241 systemd[1]: libpod-conmon-1bf4c7efa6b73b11cc58c8c7221d57c01c454dee7a6d59f501bbd9ca67a468a3.scope: Deactivated successfully.
Dec 13 04:28:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3811: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:05 np0005558241 nova_compute[248510]: 2025-12-13 09:28:05.331 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:05 np0005558241 podman[413158]: 2025-12-13 09:28:05.575056315 +0000 UTC m=+0.049256946 container create a56bbd3da8af9b601f78083651c0efb8d8cf6e0feefc6621108ee7e9a1198e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 04:28:05 np0005558241 systemd[1]: Started libpod-conmon-a56bbd3da8af9b601f78083651c0efb8d8cf6e0feefc6621108ee7e9a1198e19.scope.
Dec 13 04:28:05 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:28:05 np0005558241 podman[413158]: 2025-12-13 09:28:05.554859929 +0000 UTC m=+0.029060600 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:28:05 np0005558241 podman[413158]: 2025-12-13 09:28:05.664642201 +0000 UTC m=+0.138842832 container init a56bbd3da8af9b601f78083651c0efb8d8cf6e0feefc6621108ee7e9a1198e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:28:05 np0005558241 podman[413158]: 2025-12-13 09:28:05.677302668 +0000 UTC m=+0.151503299 container start a56bbd3da8af9b601f78083651c0efb8d8cf6e0feefc6621108ee7e9a1198e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_ellis, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:28:05 np0005558241 podman[413158]: 2025-12-13 09:28:05.680810806 +0000 UTC m=+0.155011437 container attach a56bbd3da8af9b601f78083651c0efb8d8cf6e0feefc6621108ee7e9a1198e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 04:28:05 np0005558241 flamboyant_ellis[413174]: 167 167
Dec 13 04:28:05 np0005558241 systemd[1]: libpod-a56bbd3da8af9b601f78083651c0efb8d8cf6e0feefc6621108ee7e9a1198e19.scope: Deactivated successfully.
Dec 13 04:28:05 np0005558241 podman[413158]: 2025-12-13 09:28:05.683573565 +0000 UTC m=+0.157774206 container died a56bbd3da8af9b601f78083651c0efb8d8cf6e0feefc6621108ee7e9a1198e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_ellis, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 04:28:05 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9dfc63aa3119a456a913d1e634939e2330edb69b4b96ed03d9d66692ef29822e-merged.mount: Deactivated successfully.
Dec 13 04:28:05 np0005558241 podman[413158]: 2025-12-13 09:28:05.728802649 +0000 UTC m=+0.203003270 container remove a56bbd3da8af9b601f78083651c0efb8d8cf6e0feefc6621108ee7e9a1198e19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_ellis, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 04:28:05 np0005558241 systemd[1]: libpod-conmon-a56bbd3da8af9b601f78083651c0efb8d8cf6e0feefc6621108ee7e9a1198e19.scope: Deactivated successfully.
Dec 13 04:28:05 np0005558241 nova_compute[248510]: 2025-12-13 09:28:05.888 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:05 np0005558241 podman[413196]: 2025-12-13 09:28:05.95902763 +0000 UTC m=+0.066893088 container create 27deb457074b931289b95e3ae253432083b642f170626e1d9ac2868b006f3de9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_yalow, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 04:28:06 np0005558241 systemd[1]: Started libpod-conmon-27deb457074b931289b95e3ae253432083b642f170626e1d9ac2868b006f3de9.scope.
Dec 13 04:28:06 np0005558241 podman[413196]: 2025-12-13 09:28:05.92673875 +0000 UTC m=+0.034604278 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:28:06 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:28:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07fd5159299597621813c04cfe54451520492df7d94a31b7ed895445a775cc41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07fd5159299597621813c04cfe54451520492df7d94a31b7ed895445a775cc41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07fd5159299597621813c04cfe54451520492df7d94a31b7ed895445a775cc41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07fd5159299597621813c04cfe54451520492df7d94a31b7ed895445a775cc41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:06 np0005558241 podman[413196]: 2025-12-13 09:28:06.07195116 +0000 UTC m=+0.179816648 container init 27deb457074b931289b95e3ae253432083b642f170626e1d9ac2868b006f3de9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_yalow, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:28:06 np0005558241 podman[413196]: 2025-12-13 09:28:06.079178412 +0000 UTC m=+0.187043860 container start 27deb457074b931289b95e3ae253432083b642f170626e1d9ac2868b006f3de9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:28:06 np0005558241 podman[413196]: 2025-12-13 09:28:06.082648799 +0000 UTC m=+0.190514237 container attach 27deb457074b931289b95e3ae253432083b642f170626e1d9ac2868b006f3de9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:28:06 np0005558241 charming_yalow[413213]: {
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:    "0": [
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:        {
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "devices": [
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "/dev/loop3"
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            ],
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "lv_name": "ceph_lv0",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "lv_size": "21470642176",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "name": "ceph_lv0",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "tags": {
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.cluster_name": "ceph",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.crush_device_class": "",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.encrypted": "0",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.objectstore": "bluestore",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.osd_id": "0",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.type": "block",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.vdo": "0",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.with_tpm": "0"
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            },
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "type": "block",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "vg_name": "ceph_vg0"
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:        }
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:    ],
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:    "1": [
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:        {
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "devices": [
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "/dev/loop4"
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            ],
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "lv_name": "ceph_lv1",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "lv_size": "21470642176",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "name": "ceph_lv1",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "tags": {
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.cluster_name": "ceph",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.crush_device_class": "",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.encrypted": "0",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.objectstore": "bluestore",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.osd_id": "1",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.type": "block",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.vdo": "0",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.with_tpm": "0"
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            },
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "type": "block",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "vg_name": "ceph_vg1"
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:        }
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:    ],
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:    "2": [
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:        {
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "devices": [
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "/dev/loop5"
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            ],
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "lv_name": "ceph_lv2",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "lv_size": "21470642176",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "name": "ceph_lv2",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "tags": {
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.cluster_name": "ceph",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.crush_device_class": "",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.encrypted": "0",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.objectstore": "bluestore",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.osd_id": "2",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.type": "block",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.vdo": "0",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:                "ceph.with_tpm": "0"
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            },
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "type": "block",
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:            "vg_name": "ceph_vg2"
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:        }
Dec 13 04:28:06 np0005558241 charming_yalow[413213]:    ]
Dec 13 04:28:06 np0005558241 charming_yalow[413213]: }
Dec 13 04:28:06 np0005558241 systemd[1]: libpod-27deb457074b931289b95e3ae253432083b642f170626e1d9ac2868b006f3de9.scope: Deactivated successfully.
Dec 13 04:28:06 np0005558241 podman[413196]: 2025-12-13 09:28:06.422755234 +0000 UTC m=+0.530620722 container died 27deb457074b931289b95e3ae253432083b642f170626e1d9ac2868b006f3de9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:28:06 np0005558241 systemd[1]: var-lib-containers-storage-overlay-07fd5159299597621813c04cfe54451520492df7d94a31b7ed895445a775cc41-merged.mount: Deactivated successfully.
Dec 13 04:28:06 np0005558241 podman[413196]: 2025-12-13 09:28:06.480519862 +0000 UTC m=+0.588385310 container remove 27deb457074b931289b95e3ae253432083b642f170626e1d9ac2868b006f3de9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:28:06 np0005558241 systemd[1]: libpod-conmon-27deb457074b931289b95e3ae253432083b642f170626e1d9ac2868b006f3de9.scope: Deactivated successfully.
Dec 13 04:28:07 np0005558241 podman[413296]: 2025-12-13 09:28:07.015187955 +0000 UTC m=+0.032572538 container create aae96e69d149487e2e960a5b9b04a8dfc0dfc08e88a073cee1700fe35740f8eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_euler, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:28:07 np0005558241 systemd[1]: Started libpod-conmon-aae96e69d149487e2e960a5b9b04a8dfc0dfc08e88a073cee1700fe35740f8eb.scope.
Dec 13 04:28:07 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:28:07 np0005558241 podman[413296]: 2025-12-13 09:28:07.001595204 +0000 UTC m=+0.018979807 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:28:07 np0005558241 podman[413296]: 2025-12-13 09:28:07.103533929 +0000 UTC m=+0.120918532 container init aae96e69d149487e2e960a5b9b04a8dfc0dfc08e88a073cee1700fe35740f8eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_euler, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:28:07 np0005558241 podman[413296]: 2025-12-13 09:28:07.110060963 +0000 UTC m=+0.127445576 container start aae96e69d149487e2e960a5b9b04a8dfc0dfc08e88a073cee1700fe35740f8eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_euler, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:28:07 np0005558241 podman[413296]: 2025-12-13 09:28:07.114421642 +0000 UTC m=+0.131806245 container attach aae96e69d149487e2e960a5b9b04a8dfc0dfc08e88a073cee1700fe35740f8eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_euler, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 13 04:28:07 np0005558241 amazing_euler[413313]: 167 167
Dec 13 04:28:07 np0005558241 systemd[1]: libpod-aae96e69d149487e2e960a5b9b04a8dfc0dfc08e88a073cee1700fe35740f8eb.scope: Deactivated successfully.
Dec 13 04:28:07 np0005558241 podman[413296]: 2025-12-13 09:28:07.115752715 +0000 UTC m=+0.133137328 container died aae96e69d149487e2e960a5b9b04a8dfc0dfc08e88a073cee1700fe35740f8eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_euler, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 04:28:07 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a267a8ec68d3da8edfb8dd5113e980739e67b2c19638abf10bb51a5fe15197a0-merged.mount: Deactivated successfully.
Dec 13 04:28:07 np0005558241 podman[413296]: 2025-12-13 09:28:07.16339967 +0000 UTC m=+0.180784273 container remove aae96e69d149487e2e960a5b9b04a8dfc0dfc08e88a073cee1700fe35740f8eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 04:28:07 np0005558241 systemd[1]: libpod-conmon-aae96e69d149487e2e960a5b9b04a8dfc0dfc08e88a073cee1700fe35740f8eb.scope: Deactivated successfully.
Dec 13 04:28:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3812: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:07 np0005558241 podman[413336]: 2025-12-13 09:28:07.38760858 +0000 UTC m=+0.074315334 container create 1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_keller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:28:07 np0005558241 systemd[1]: Started libpod-conmon-1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb.scope.
Dec 13 04:28:07 np0005558241 podman[413336]: 2025-12-13 09:28:07.35650488 +0000 UTC m=+0.043211704 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:28:07 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:28:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e97f8470747ad17c0f2fce9cc65e513c97a123ae5f8f51c0d36b54ff9dd153/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e97f8470747ad17c0f2fce9cc65e513c97a123ae5f8f51c0d36b54ff9dd153/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e97f8470747ad17c0f2fce9cc65e513c97a123ae5f8f51c0d36b54ff9dd153/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e97f8470747ad17c0f2fce9cc65e513c97a123ae5f8f51c0d36b54ff9dd153/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:07 np0005558241 podman[413336]: 2025-12-13 09:28:07.489214077 +0000 UTC m=+0.175920821 container init 1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_keller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 04:28:07 np0005558241 podman[413336]: 2025-12-13 09:28:07.495999077 +0000 UTC m=+0.182705841 container start 1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 04:28:07 np0005558241 podman[413336]: 2025-12-13 09:28:07.50090091 +0000 UTC m=+0.187607664 container attach 1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_keller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:28:08 np0005558241 lvm[413433]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:28:08 np0005558241 lvm[413432]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:28:08 np0005558241 lvm[413432]: VG ceph_vg1 finished
Dec 13 04:28:08 np0005558241 lvm[413433]: VG ceph_vg0 finished
Dec 13 04:28:08 np0005558241 lvm[413435]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:28:08 np0005558241 lvm[413435]: VG ceph_vg2 finished
Dec 13 04:28:08 np0005558241 lvm[413437]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:28:08 np0005558241 lvm[413437]: VG ceph_vg2 finished
Dec 13 04:28:08 np0005558241 lvm[413439]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:28:08 np0005558241 lvm[413439]: VG ceph_vg2 finished
Dec 13 04:28:08 np0005558241 relaxed_keller[413353]: {}
Dec 13 04:28:08 np0005558241 systemd[1]: libpod-1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb.scope: Deactivated successfully.
Dec 13 04:28:08 np0005558241 systemd[1]: libpod-1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb.scope: Consumed 1.491s CPU time.
Dec 13 04:28:08 np0005558241 podman[413336]: 2025-12-13 09:28:08.409873554 +0000 UTC m=+1.096580308 container died 1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_keller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 04:28:08 np0005558241 systemd[1]: var-lib-containers-storage-overlay-63e97f8470747ad17c0f2fce9cc65e513c97a123ae5f8f51c0d36b54ff9dd153-merged.mount: Deactivated successfully.
Dec 13 04:28:08 np0005558241 podman[413336]: 2025-12-13 09:28:08.464295708 +0000 UTC m=+1.151002432 container remove 1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_keller, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 04:28:08 np0005558241 systemd[1]: libpod-conmon-1258faff7c95cf5b4835ac4912fc8981777dc6e6c4d75b34275f7d801225eacb.scope: Deactivated successfully.
Dec 13 04:28:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:28:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:28:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:28:08 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:28:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3813: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:28:09
Dec 13 04:28:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:28:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:28:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.log', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'backups', '.mgr', 'default.rgw.control', '.rgw.root', 'vms']
Dec 13 04:28:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:28:09 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:28:09 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:28:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:28:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:28:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:28:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:28:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:28:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:28:10 np0005558241 nova_compute[248510]: 2025-12-13 09:28:10.334 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:10 np0005558241 nova_compute[248510]: 2025-12-13 09:28:10.935 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:28:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:28:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:28:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:28:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:28:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:28:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:28:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:28:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:28:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:28:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3814: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3815: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:28:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2617986221' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:28:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:28:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2617986221' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:28:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3816: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:15 np0005558241 nova_compute[248510]: 2025-12-13 09:28:15.335 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:15 np0005558241 nova_compute[248510]: 2025-12-13 09:28:15.982 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3817: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3818: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:20 np0005558241 nova_compute[248510]: 2025-12-13 09:28:20.338 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:21 np0005558241 nova_compute[248510]: 2025-12-13 09:28:21.040 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3819: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.514783135039224e-05 of space, bias 1.0, pg target 0.004544349405117672 quantized to 32 (current 32)
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000669348866335464 of space, bias 1.0, pg target 0.2008046599006392 quantized to 32 (current 32)
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.717419426042354e-07 of space, bias 4.0, pg target 0.0006860903311250824 quantized to 16 (current 32)
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:28:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3820: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:24 np0005558241 podman[413476]: 2025-12-13 09:28:24.987038978 +0000 UTC m=+0.067451522 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Dec 13 04:28:24 np0005558241 podman[413477]: 2025-12-13 09:28:24.996102995 +0000 UTC m=+0.070482178 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:28:25 np0005558241 podman[413475]: 2025-12-13 09:28:25.020043925 +0000 UTC m=+0.102917971 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 13 04:28:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3821: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:25 np0005558241 nova_compute[248510]: 2025-12-13 09:28:25.341 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:26 np0005558241 nova_compute[248510]: 2025-12-13 09:28:26.043 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3822: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3823: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:29 np0005558241 nova_compute[248510]: 2025-12-13 09:28:29.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:28:30 np0005558241 nova_compute[248510]: 2025-12-13 09:28:30.342 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:31 np0005558241 nova_compute[248510]: 2025-12-13 09:28:31.046 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3824: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3825: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:34 np0005558241 nova_compute[248510]: 2025-12-13 09:28:34.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:28:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3826: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:35 np0005558241 nova_compute[248510]: 2025-12-13 09:28:35.344 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:36 np0005558241 nova_compute[248510]: 2025-12-13 09:28:36.048 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3827: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3828: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:28:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:28:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:28:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:28:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:28:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:28:40 np0005558241 nova_compute[248510]: 2025-12-13 09:28:40.347 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:41 np0005558241 nova_compute[248510]: 2025-12-13 09:28:41.052 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3829: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:42 np0005558241 nova_compute[248510]: 2025-12-13 09:28:42.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:28:42 np0005558241 nova_compute[248510]: 2025-12-13 09:28:42.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:28:42 np0005558241 nova_compute[248510]: 2025-12-13 09:28:42.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:28:42 np0005558241 nova_compute[248510]: 2025-12-13 09:28:42.792 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:28:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3830: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 1 op/s
Dec 13 04:28:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3831: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Dec 13 04:28:45 np0005558241 nova_compute[248510]: 2025-12-13 09:28:45.350 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:45 np0005558241 nova_compute[248510]: 2025-12-13 09:28:45.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:28:45 np0005558241 nova_compute[248510]: 2025-12-13 09:28:45.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:28:46 np0005558241 nova_compute[248510]: 2025-12-13 09:28:46.054 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3832: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Dec 13 04:28:47 np0005558241 nova_compute[248510]: 2025-12-13 09:28:47.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:28:47 np0005558241 nova_compute[248510]: 2025-12-13 09:28:47.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:28:47 np0005558241 nova_compute[248510]: 2025-12-13 09:28:47.838 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:28:47 np0005558241 nova_compute[248510]: 2025-12-13 09:28:47.838 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:28:47 np0005558241 nova_compute[248510]: 2025-12-13 09:28:47.839 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:28:47 np0005558241 nova_compute[248510]: 2025-12-13 09:28:47.839 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:28:47 np0005558241 nova_compute[248510]: 2025-12-13 09:28:47.839 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:28:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:28:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1559057708' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:28:48 np0005558241 nova_compute[248510]: 2025-12-13 09:28:48.408 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:28:48 np0005558241 nova_compute[248510]: 2025-12-13 09:28:48.634 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:28:48 np0005558241 nova_compute[248510]: 2025-12-13 09:28:48.635 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3466MB free_disk=59.987372557632625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:28:48 np0005558241 nova_compute[248510]: 2025-12-13 09:28:48.635 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:28:48 np0005558241 nova_compute[248510]: 2025-12-13 09:28:48.636 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:28:48 np0005558241 nova_compute[248510]: 2025-12-13 09:28:48.720 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:28:48 np0005558241 nova_compute[248510]: 2025-12-13 09:28:48.721 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:28:48 np0005558241 nova_compute[248510]: 2025-12-13 09:28:48.891 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:28:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3833: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Dec 13 04:28:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:28:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1708456428' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:28:49 np0005558241 nova_compute[248510]: 2025-12-13 09:28:49.555 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:28:49 np0005558241 nova_compute[248510]: 2025-12-13 09:28:49.564 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:28:49 np0005558241 nova_compute[248510]: 2025-12-13 09:28:49.640 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:28:49 np0005558241 nova_compute[248510]: 2025-12-13 09:28:49.643 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:28:49 np0005558241 nova_compute[248510]: 2025-12-13 09:28:49.643 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:28:50 np0005558241 nova_compute[248510]: 2025-12-13 09:28:50.351 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:51 np0005558241 nova_compute[248510]: 2025-12-13 09:28:51.057 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3834: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Dec 13 04:28:51 np0005558241 nova_compute[248510]: 2025-12-13 09:28:51.643 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:28:51 np0005558241 nova_compute[248510]: 2025-12-13 09:28:51.644 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:28:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Dec 13 04:28:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Dec 13 04:28:52 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Dec 13 04:28:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3836: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 614 B/s wr, 12 op/s
Dec 13 04:28:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3837: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 04:28:55 np0005558241 nova_compute[248510]: 2025-12-13 09:28:55.354 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:28:55.464 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:28:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:28:55.465 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:28:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:28:55.465 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:28:55 np0005558241 podman[413583]: 2025-12-13 09:28:55.98151458 +0000 UTC m=+0.063151864 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 13 04:28:55 np0005558241 podman[413584]: 2025-12-13 09:28:55.991392477 +0000 UTC m=+0.065328508 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 04:28:56 np0005558241 podman[413582]: 2025-12-13 09:28:56.018859476 +0000 UTC m=+0.102864970 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:28:56 np0005558241 nova_compute[248510]: 2025-12-13 09:28:56.059 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:28:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Dec 13 04:28:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Dec 13 04:28:56 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Dec 13 04:28:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3839: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Dec 13 04:28:58 np0005558241 nova_compute[248510]: 2025-12-13 09:28:58.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:28:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Dec 13 04:28:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Dec 13 04:28:59 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Dec 13 04:28:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3841: 321 pgs: 321 active+clean; 8.5 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.5 KiB/s wr, 39 op/s
Dec 13 04:29:00 np0005558241 nova_compute[248510]: 2025-12-13 09:29:00.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:01 np0005558241 nova_compute[248510]: 2025-12-13 09:29:01.061 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3842: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.9 KiB/s wr, 55 op/s
Dec 13 04:29:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3843: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Dec 13 04:29:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Dec 13 04:29:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Dec 13 04:29:04 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Dec 13 04:29:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3845: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Dec 13 04:29:05 np0005558241 nova_compute[248510]: 2025-12-13 09:29:05.358 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:06 np0005558241 nova_compute[248510]: 2025-12-13 09:29:06.064 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3846: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.4 KiB/s wr, 28 op/s
Dec 13 04:29:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3847: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Dec 13 04:29:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Dec 13 04:29:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Dec 13 04:29:09 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Dec 13 04:29:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:29:09
Dec 13 04:29:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:29:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:29:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.rgw.root', 'images', 'default.rgw.log', 'default.rgw.meta', 'backups', '.mgr']
Dec 13 04:29:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:29:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 04:29:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 04:29:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:29:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:29:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:29:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:29:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:29:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:29:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:29:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:29:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:29:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:29:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:29:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:29:10 np0005558241 podman[413790]: 2025-12-13 09:29:10.118726362 +0000 UTC m=+0.109962047 container create 48bcaade3615bec332b297051c2716d05f08f65aec85297b0ae498a273811e88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 04:29:10 np0005558241 podman[413790]: 2025-12-13 09:29:10.044314997 +0000 UTC m=+0.035550722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:29:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:29:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:29:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:29:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:29:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:29:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:29:10 np0005558241 systemd[1]: Started libpod-conmon-48bcaade3615bec332b297051c2716d05f08f65aec85297b0ae498a273811e88.scope.
Dec 13 04:29:10 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:29:10 np0005558241 podman[413790]: 2025-12-13 09:29:10.296285853 +0000 UTC m=+0.287521558 container init 48bcaade3615bec332b297051c2716d05f08f65aec85297b0ae498a273811e88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 04:29:10 np0005558241 podman[413790]: 2025-12-13 09:29:10.309470314 +0000 UTC m=+0.300705999 container start 48bcaade3615bec332b297051c2716d05f08f65aec85297b0ae498a273811e88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hawking, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:29:10 np0005558241 podman[413790]: 2025-12-13 09:29:10.313655729 +0000 UTC m=+0.304891414 container attach 48bcaade3615bec332b297051c2716d05f08f65aec85297b0ae498a273811e88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Dec 13 04:29:10 np0005558241 pedantic_hawking[413806]: 167 167
Dec 13 04:29:10 np0005558241 systemd[1]: libpod-48bcaade3615bec332b297051c2716d05f08f65aec85297b0ae498a273811e88.scope: Deactivated successfully.
Dec 13 04:29:10 np0005558241 podman[413790]: 2025-12-13 09:29:10.317957896 +0000 UTC m=+0.309193551 container died 48bcaade3615bec332b297051c2716d05f08f65aec85297b0ae498a273811e88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:29:10 np0005558241 systemd[1]: var-lib-containers-storage-overlay-155228ff6f7e6894ef4bce14f7a2eb572d8c5cd730f506c86523edbb5b479de4-merged.mount: Deactivated successfully.
Dec 13 04:29:10 np0005558241 nova_compute[248510]: 2025-12-13 09:29:10.361 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:10 np0005558241 podman[413790]: 2025-12-13 09:29:10.36635729 +0000 UTC m=+0.357592935 container remove 48bcaade3615bec332b297051c2716d05f08f65aec85297b0ae498a273811e88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hawking, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:29:10 np0005558241 systemd[1]: libpod-conmon-48bcaade3615bec332b297051c2716d05f08f65aec85297b0ae498a273811e88.scope: Deactivated successfully.
Dec 13 04:29:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 04:29:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:29:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:29:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:29:10 np0005558241 podman[413830]: 2025-12-13 09:29:10.655327103 +0000 UTC m=+0.111402123 container create 601e6921d31ed04ad3d890cee6bda7bc5b3565aaf80bbcff9c925c9fd44f630d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_boyd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 04:29:10 np0005558241 podman[413830]: 2025-12-13 09:29:10.588630011 +0000 UTC m=+0.044705051 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:29:10 np0005558241 systemd[1]: Started libpod-conmon-601e6921d31ed04ad3d890cee6bda7bc5b3565aaf80bbcff9c925c9fd44f630d.scope.
Dec 13 04:29:10 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:29:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5e56add483aaddf47e502c479092a6172d09ae90702ee25e65eadfdf924bbc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5e56add483aaddf47e502c479092a6172d09ae90702ee25e65eadfdf924bbc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5e56add483aaddf47e502c479092a6172d09ae90702ee25e65eadfdf924bbc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5e56add483aaddf47e502c479092a6172d09ae90702ee25e65eadfdf924bbc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:10 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5e56add483aaddf47e502c479092a6172d09ae90702ee25e65eadfdf924bbc5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:10 np0005558241 podman[413830]: 2025-12-13 09:29:10.747127784 +0000 UTC m=+0.203202824 container init 601e6921d31ed04ad3d890cee6bda7bc5b3565aaf80bbcff9c925c9fd44f630d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 04:29:10 np0005558241 podman[413830]: 2025-12-13 09:29:10.760734675 +0000 UTC m=+0.216809735 container start 601e6921d31ed04ad3d890cee6bda7bc5b3565aaf80bbcff9c925c9fd44f630d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_boyd, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:29:10 np0005558241 podman[413830]: 2025-12-13 09:29:10.76530327 +0000 UTC m=+0.221378310 container attach 601e6921d31ed04ad3d890cee6bda7bc5b3565aaf80bbcff9c925c9fd44f630d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_boyd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:29:11 np0005558241 nova_compute[248510]: 2025-12-13 09:29:11.065 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:29:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:29:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:29:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:29:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:29:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:29:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:29:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:29:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:29:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:29:11 np0005558241 pedantic_boyd[413846]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:29:11 np0005558241 pedantic_boyd[413846]: --> All data devices are unavailable
Dec 13 04:29:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3849: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 383 B/s rd, 383 B/s wr, 1 op/s
Dec 13 04:29:11 np0005558241 systemd[1]: libpod-601e6921d31ed04ad3d890cee6bda7bc5b3565aaf80bbcff9c925c9fd44f630d.scope: Deactivated successfully.
Dec 13 04:29:11 np0005558241 podman[413830]: 2025-12-13 09:29:11.364230853 +0000 UTC m=+0.820305913 container died 601e6921d31ed04ad3d890cee6bda7bc5b3565aaf80bbcff9c925c9fd44f630d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_boyd, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 04:29:11 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a5e56add483aaddf47e502c479092a6172d09ae90702ee25e65eadfdf924bbc5-merged.mount: Deactivated successfully.
Dec 13 04:29:11 np0005558241 podman[413830]: 2025-12-13 09:29:11.425671583 +0000 UTC m=+0.881746633 container remove 601e6921d31ed04ad3d890cee6bda7bc5b3565aaf80bbcff9c925c9fd44f630d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_boyd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:29:11 np0005558241 systemd[1]: libpod-conmon-601e6921d31ed04ad3d890cee6bda7bc5b3565aaf80bbcff9c925c9fd44f630d.scope: Deactivated successfully.
Dec 13 04:29:11 np0005558241 podman[413941]: 2025-12-13 09:29:11.962357356 +0000 UTC m=+0.059057241 container create 43275cdb8a12dbf9ccc309517748755408b3c70866dc74cae3611411ea92d0c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lamarr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Dec 13 04:29:12 np0005558241 systemd[1]: Started libpod-conmon-43275cdb8a12dbf9ccc309517748755408b3c70866dc74cae3611411ea92d0c0.scope.
Dec 13 04:29:12 np0005558241 podman[413941]: 2025-12-13 09:29:11.933990475 +0000 UTC m=+0.030690380 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:29:12 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:29:12 np0005558241 podman[413941]: 2025-12-13 09:29:12.051673915 +0000 UTC m=+0.148373780 container init 43275cdb8a12dbf9ccc309517748755408b3c70866dc74cae3611411ea92d0c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lamarr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 04:29:12 np0005558241 podman[413941]: 2025-12-13 09:29:12.05905882 +0000 UTC m=+0.155758675 container start 43275cdb8a12dbf9ccc309517748755408b3c70866dc74cae3611411ea92d0c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 04:29:12 np0005558241 podman[413941]: 2025-12-13 09:29:12.063185434 +0000 UTC m=+0.159885309 container attach 43275cdb8a12dbf9ccc309517748755408b3c70866dc74cae3611411ea92d0c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 04:29:12 np0005558241 dreamy_lamarr[413958]: 167 167
Dec 13 04:29:12 np0005558241 systemd[1]: libpod-43275cdb8a12dbf9ccc309517748755408b3c70866dc74cae3611411ea92d0c0.scope: Deactivated successfully.
Dec 13 04:29:12 np0005558241 podman[413941]: 2025-12-13 09:29:12.067271956 +0000 UTC m=+0.163971801 container died 43275cdb8a12dbf9ccc309517748755408b3c70866dc74cae3611411ea92d0c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lamarr, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 04:29:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay-d6b2d06491f59edc243fa816f39e64fc58a12cf0070e148bc6134a90b0886b3c-merged.mount: Deactivated successfully.
Dec 13 04:29:12 np0005558241 podman[413941]: 2025-12-13 09:29:12.118739656 +0000 UTC m=+0.215439511 container remove 43275cdb8a12dbf9ccc309517748755408b3c70866dc74cae3611411ea92d0c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 04:29:12 np0005558241 systemd[1]: libpod-conmon-43275cdb8a12dbf9ccc309517748755408b3c70866dc74cae3611411ea92d0c0.scope: Deactivated successfully.
Dec 13 04:29:12 np0005558241 podman[413981]: 2025-12-13 09:29:12.301264991 +0000 UTC m=+0.057623836 container create 5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_almeida, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 04:29:12 np0005558241 systemd[1]: Started libpod-conmon-5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b.scope.
Dec 13 04:29:12 np0005558241 podman[413981]: 2025-12-13 09:29:12.27970184 +0000 UTC m=+0.036060455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:29:12 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:29:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d342465ad965cc551a6b6c18ab2a54e640fad1596afbc610afd225439d879d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d342465ad965cc551a6b6c18ab2a54e640fad1596afbc610afd225439d879d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d342465ad965cc551a6b6c18ab2a54e640fad1596afbc610afd225439d879d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d342465ad965cc551a6b6c18ab2a54e640fad1596afbc610afd225439d879d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:12 np0005558241 podman[413981]: 2025-12-13 09:29:12.407258208 +0000 UTC m=+0.163616823 container init 5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 04:29:12 np0005558241 podman[413981]: 2025-12-13 09:29:12.422807187 +0000 UTC m=+0.179165782 container start 5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_almeida, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 04:29:12 np0005558241 podman[413981]: 2025-12-13 09:29:12.427045624 +0000 UTC m=+0.183404279 container attach 5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_almeida, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]: {
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:    "0": [
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:        {
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "devices": [
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "/dev/loop3"
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            ],
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "lv_name": "ceph_lv0",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "lv_size": "21470642176",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "name": "ceph_lv0",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "tags": {
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.cluster_name": "ceph",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.crush_device_class": "",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.encrypted": "0",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.objectstore": "bluestore",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.osd_id": "0",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.type": "block",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.vdo": "0",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.with_tpm": "0"
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            },
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "type": "block",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "vg_name": "ceph_vg0"
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:        }
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:    ],
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:    "1": [
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:        {
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "devices": [
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "/dev/loop4"
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            ],
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "lv_name": "ceph_lv1",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "lv_size": "21470642176",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "name": "ceph_lv1",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "tags": {
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.cluster_name": "ceph",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.crush_device_class": "",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.encrypted": "0",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.objectstore": "bluestore",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.osd_id": "1",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.type": "block",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.vdo": "0",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.with_tpm": "0"
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            },
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "type": "block",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "vg_name": "ceph_vg1"
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:        }
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:    ],
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:    "2": [
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:        {
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "devices": [
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "/dev/loop5"
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            ],
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "lv_name": "ceph_lv2",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "lv_size": "21470642176",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "name": "ceph_lv2",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "tags": {
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.cluster_name": "ceph",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.crush_device_class": "",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.encrypted": "0",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.objectstore": "bluestore",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.osd_id": "2",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.type": "block",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.vdo": "0",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:                "ceph.with_tpm": "0"
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            },
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "type": "block",
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:            "vg_name": "ceph_vg2"
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:        }
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]:    ]
Dec 13 04:29:12 np0005558241 relaxed_almeida[413998]: }
Dec 13 04:29:12 np0005558241 systemd[1]: libpod-5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b.scope: Deactivated successfully.
Dec 13 04:29:12 np0005558241 conmon[413998]: conmon 5b8b8b7e80b51eb8f2ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b.scope/container/memory.events
Dec 13 04:29:12 np0005558241 podman[413981]: 2025-12-13 09:29:12.796366271 +0000 UTC m=+0.552724876 container died 5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 04:29:12 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b3d342465ad965cc551a6b6c18ab2a54e640fad1596afbc610afd225439d879d-merged.mount: Deactivated successfully.
Dec 13 04:29:12 np0005558241 podman[413981]: 2025-12-13 09:29:12.848422076 +0000 UTC m=+0.604780681 container remove 5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_almeida, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 04:29:12 np0005558241 systemd[1]: libpod-conmon-5b8b8b7e80b51eb8f2ee59fd214bd3c3b50508c1b2f94ef0f172184e9de0981b.scope: Deactivated successfully.
Dec 13 04:29:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3850: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 9.9 KiB/s rd, 1011 B/s wr, 13 op/s
Dec 13 04:29:13 np0005558241 podman[414081]: 2025-12-13 09:29:13.4078967 +0000 UTC m=+0.044951167 container create 903cfe5e9c8cfa2b2fde6bdb3cd0ae16c6b977f7706ad7428a903071ccf87723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:29:13 np0005558241 systemd[1]: Started libpod-conmon-903cfe5e9c8cfa2b2fde6bdb3cd0ae16c6b977f7706ad7428a903071ccf87723.scope.
Dec 13 04:29:13 np0005558241 podman[414081]: 2025-12-13 09:29:13.389982921 +0000 UTC m=+0.027037408 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:29:13 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:29:13 np0005558241 podman[414081]: 2025-12-13 09:29:13.504482332 +0000 UTC m=+0.141536829 container init 903cfe5e9c8cfa2b2fde6bdb3cd0ae16c6b977f7706ad7428a903071ccf87723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:29:13 np0005558241 podman[414081]: 2025-12-13 09:29:13.513383655 +0000 UTC m=+0.150438122 container start 903cfe5e9c8cfa2b2fde6bdb3cd0ae16c6b977f7706ad7428a903071ccf87723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 04:29:13 np0005558241 podman[414081]: 2025-12-13 09:29:13.517292783 +0000 UTC m=+0.154347250 container attach 903cfe5e9c8cfa2b2fde6bdb3cd0ae16c6b977f7706ad7428a903071ccf87723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 04:29:13 np0005558241 stoic_varahamihira[414095]: 167 167
Dec 13 04:29:13 np0005558241 systemd[1]: libpod-903cfe5e9c8cfa2b2fde6bdb3cd0ae16c6b977f7706ad7428a903071ccf87723.scope: Deactivated successfully.
Dec 13 04:29:13 np0005558241 podman[414081]: 2025-12-13 09:29:13.520831621 +0000 UTC m=+0.157886088 container died 903cfe5e9c8cfa2b2fde6bdb3cd0ae16c6b977f7706ad7428a903071ccf87723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:29:13 np0005558241 systemd[1]: var-lib-containers-storage-overlay-2868d44251965fd730497215f4f3ab0f2705bacf1df0835a51387a550204da82-merged.mount: Deactivated successfully.
Dec 13 04:29:13 np0005558241 podman[414081]: 2025-12-13 09:29:13.565734857 +0000 UTC m=+0.202789364 container remove 903cfe5e9c8cfa2b2fde6bdb3cd0ae16c6b977f7706ad7428a903071ccf87723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 04:29:13 np0005558241 systemd[1]: libpod-conmon-903cfe5e9c8cfa2b2fde6bdb3cd0ae16c6b977f7706ad7428a903071ccf87723.scope: Deactivated successfully.
Dec 13 04:29:13 np0005558241 podman[414121]: 2025-12-13 09:29:13.78767913 +0000 UTC m=+0.069225286 container create 6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_visvesvaraya, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Dec 13 04:29:13 np0005558241 systemd[1]: Started libpod-conmon-6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24.scope.
Dec 13 04:29:13 np0005558241 podman[414121]: 2025-12-13 09:29:13.754025157 +0000 UTC m=+0.035571403 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:29:13 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:29:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4380abdf5dd2a6ed6903be09a23264dd6f21ce36678bf06cb607345d9c442552/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4380abdf5dd2a6ed6903be09a23264dd6f21ce36678bf06cb607345d9c442552/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4380abdf5dd2a6ed6903be09a23264dd6f21ce36678bf06cb607345d9c442552/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:13 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4380abdf5dd2a6ed6903be09a23264dd6f21ce36678bf06cb607345d9c442552/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:13 np0005558241 podman[414121]: 2025-12-13 09:29:13.909949596 +0000 UTC m=+0.191495782 container init 6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:29:13 np0005558241 podman[414121]: 2025-12-13 09:29:13.919161076 +0000 UTC m=+0.200707232 container start 6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 04:29:13 np0005558241 podman[414121]: 2025-12-13 09:29:13.923129976 +0000 UTC m=+0.204676162 container attach 6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:29:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:14 np0005558241 lvm[414215]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:29:14 np0005558241 lvm[414216]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:29:14 np0005558241 lvm[414216]: VG ceph_vg1 finished
Dec 13 04:29:14 np0005558241 lvm[414215]: VG ceph_vg0 finished
Dec 13 04:29:14 np0005558241 lvm[414218]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:29:14 np0005558241 lvm[414218]: VG ceph_vg2 finished
Dec 13 04:29:14 np0005558241 silly_visvesvaraya[414137]: {}
Dec 13 04:29:14 np0005558241 systemd[1]: libpod-6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24.scope: Deactivated successfully.
Dec 13 04:29:14 np0005558241 podman[414121]: 2025-12-13 09:29:14.879262263 +0000 UTC m=+1.160808409 container died 6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_visvesvaraya, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:29:14 np0005558241 systemd[1]: libpod-6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24.scope: Consumed 1.565s CPU time.
Dec 13 04:29:14 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4380abdf5dd2a6ed6903be09a23264dd6f21ce36678bf06cb607345d9c442552-merged.mount: Deactivated successfully.
Dec 13 04:29:14 np0005558241 podman[414121]: 2025-12-13 09:29:14.941254757 +0000 UTC m=+1.222800903 container remove 6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 04:29:14 np0005558241 systemd[1]: libpod-conmon-6658ce345f7398f8c9c56c6b5bf749a80f0194c0ba20e9e59bd59c21c22d4a24.scope: Deactivated successfully.
Dec 13 04:29:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:29:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:29:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:29:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:29:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:29:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4288270553' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:29:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:29:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4288270553' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:29:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3851: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 13 04:29:15 np0005558241 nova_compute[248510]: 2025-12-13 09:29:15.363 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:16 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:29:16 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:29:16 np0005558241 nova_compute[248510]: 2025-12-13 09:29:16.068 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3852: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 13 04:29:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3853: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 13 04:29:20 np0005558241 nova_compute[248510]: 2025-12-13 09:29:20.364 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:21 np0005558241 nova_compute[248510]: 2025-12-13 09:29:21.070 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3854: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5195834882760674e-05 of space, bias 1.0, pg target 0.004558750464828202 quantized to 32 (current 32)
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.151001443556761e-06 of space, bias 1.0, pg target 0.0012453004330670282 quantized to 32 (current 32)
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.642278062309101e-07 of space, bias 4.0, pg target 0.0006770733674770921 quantized to 16 (current 32)
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:29:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3855: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 8.4 KiB/s rd, 1.1 KiB/s wr, 11 op/s
Dec 13 04:29:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:24 np0005558241 nova_compute[248510]: 2025-12-13 09:29:24.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:29:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3856: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 2 op/s
Dec 13 04:29:25 np0005558241 nova_compute[248510]: 2025-12-13 09:29:25.366 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:26 np0005558241 nova_compute[248510]: 2025-12-13 09:29:26.073 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:27 np0005558241 podman[414261]: 2025-12-13 09:29:27.024147227 +0000 UTC m=+0.101049044 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 04:29:27 np0005558241 podman[414259]: 2025-12-13 09:29:27.056826946 +0000 UTC m=+0.140037691 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:29:27 np0005558241 podman[414260]: 2025-12-13 09:29:27.057257037 +0000 UTC m=+0.140416041 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Dec 13 04:29:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3857: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:29:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3858: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:29:30 np0005558241 nova_compute[248510]: 2025-12-13 09:29:30.369 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:30 np0005558241 nova_compute[248510]: 2025-12-13 09:29:30.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:29:31 np0005558241 nova_compute[248510]: 2025-12-13 09:29:31.076 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3859: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:29:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3860: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:29:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3861: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:29:35 np0005558241 nova_compute[248510]: 2025-12-13 09:29:35.371 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:35 np0005558241 nova_compute[248510]: 2025-12-13 09:29:35.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:29:36 np0005558241 nova_compute[248510]: 2025-12-13 09:29:36.078 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3862: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:29:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3863: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:29:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:29:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:29:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:29:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:29:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:29:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:29:40 np0005558241 nova_compute[248510]: 2025-12-13 09:29:40.380 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:41 np0005558241 nova_compute[248510]: 2025-12-13 09:29:41.081 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3864: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:29:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3865: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.656934) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618183657176, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1518, "num_deletes": 253, "total_data_size": 2528944, "memory_usage": 2562600, "flush_reason": "Manual Compaction"}
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618183675975, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 2472697, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75986, "largest_seqno": 77503, "table_properties": {"data_size": 2465528, "index_size": 4239, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14823, "raw_average_key_size": 20, "raw_value_size": 2451142, "raw_average_value_size": 3330, "num_data_blocks": 189, "num_entries": 736, "num_filter_entries": 736, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765618030, "oldest_key_time": 1765618030, "file_creation_time": 1765618183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 19106 microseconds, and 6674 cpu microseconds.
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.676035) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 2472697 bytes OK
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.676107) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.678259) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.678330) EVENT_LOG_v1 {"time_micros": 1765618183678317, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.678366) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2522301, prev total WAL file size 2522301, number of live WAL files 2.
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.679917) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(2414KB)], [182(10MB)]
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618183680010, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 13380437, "oldest_snapshot_seqno": -1}
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 9412 keys, 11572786 bytes, temperature: kUnknown
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618183755893, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 11572786, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11513228, "index_size": 34933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23557, "raw_key_size": 248118, "raw_average_key_size": 26, "raw_value_size": 11348815, "raw_average_value_size": 1205, "num_data_blocks": 1343, "num_entries": 9412, "num_filter_entries": 9412, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765618183, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.756255) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 11572786 bytes
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.757720) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.1 rd, 152.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 10.4 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(10.1) write-amplify(4.7) OK, records in: 9934, records dropped: 522 output_compression: NoCompression
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.757740) EVENT_LOG_v1 {"time_micros": 1765618183757730, "job": 114, "event": "compaction_finished", "compaction_time_micros": 75969, "compaction_time_cpu_micros": 34249, "output_level": 6, "num_output_files": 1, "total_output_size": 11572786, "num_input_records": 9934, "num_output_records": 9412, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618183758332, "job": 114, "event": "table_file_deletion", "file_number": 184}
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618183760248, "job": 114, "event": "table_file_deletion", "file_number": 182}
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.679775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.760403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.760412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.760414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.760416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:29:43 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:29:43.760419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:29:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:44 np0005558241 nova_compute[248510]: 2025-12-13 09:29:44.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:29:44 np0005558241 nova_compute[248510]: 2025-12-13 09:29:44.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:29:44 np0005558241 nova_compute[248510]: 2025-12-13 09:29:44.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:29:44 np0005558241 nova_compute[248510]: 2025-12-13 09:29:44.788 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:29:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3866: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:29:45 np0005558241 nova_compute[248510]: 2025-12-13 09:29:45.375 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:46 np0005558241 nova_compute[248510]: 2025-12-13 09:29:46.083 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:46 np0005558241 nova_compute[248510]: 2025-12-13 09:29:46.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:29:46 np0005558241 nova_compute[248510]: 2025-12-13 09:29:46.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:29:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3867: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:29:48 np0005558241 nova_compute[248510]: 2025-12-13 09:29:48.770 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:29:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3868: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:29:49 np0005558241 nova_compute[248510]: 2025-12-13 09:29:49.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:29:49 np0005558241 nova_compute[248510]: 2025-12-13 09:29:49.803 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:29:49 np0005558241 nova_compute[248510]: 2025-12-13 09:29:49.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:29:49 np0005558241 nova_compute[248510]: 2025-12-13 09:29:49.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:29:49 np0005558241 nova_compute[248510]: 2025-12-13 09:29:49.804 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:29:49 np0005558241 nova_compute[248510]: 2025-12-13 09:29:49.804 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:29:50 np0005558241 nova_compute[248510]: 2025-12-13 09:29:50.377 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:29:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/726806841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:29:50 np0005558241 nova_compute[248510]: 2025-12-13 09:29:50.434 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:29:50 np0005558241 nova_compute[248510]: 2025-12-13 09:29:50.592 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:29:50 np0005558241 nova_compute[248510]: 2025-12-13 09:29:50.593 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3495MB free_disk=59.987369677983224GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:29:50 np0005558241 nova_compute[248510]: 2025-12-13 09:29:50.594 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:29:50 np0005558241 nova_compute[248510]: 2025-12-13 09:29:50.594 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:29:50 np0005558241 nova_compute[248510]: 2025-12-13 09:29:50.932 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:29:50 np0005558241 nova_compute[248510]: 2025-12-13 09:29:50.932 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:29:50 np0005558241 nova_compute[248510]: 2025-12-13 09:29:50.966 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:29:51 np0005558241 nova_compute[248510]: 2025-12-13 09:29:51.086 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3869: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:29:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:29:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2985874201' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:29:51 np0005558241 nova_compute[248510]: 2025-12-13 09:29:51.614 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:29:51 np0005558241 nova_compute[248510]: 2025-12-13 09:29:51.620 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:29:51 np0005558241 nova_compute[248510]: 2025-12-13 09:29:51.805 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:29:51 np0005558241 nova_compute[248510]: 2025-12-13 09:29:51.807 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:29:51 np0005558241 nova_compute[248510]: 2025-12-13 09:29:51.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:29:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3870: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:29:53 np0005558241 nova_compute[248510]: 2025-12-13 09:29:53.809 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:29:53 np0005558241 nova_compute[248510]: 2025-12-13 09:29:53.810 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:29:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3871: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:29:55 np0005558241 nova_compute[248510]: 2025-12-13 09:29:55.380 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:29:55.466 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:29:55.466 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:29:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:29:55.467 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:29:56 np0005558241 nova_compute[248510]: 2025-12-13 09:29:56.088 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:29:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3872: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:29:57 np0005558241 podman[414370]: 2025-12-13 09:29:57.98087071 +0000 UTC m=+0.057863831 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 04:29:57 np0005558241 podman[414366]: 2025-12-13 09:29:57.993329962 +0000 UTC m=+0.076895238 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:29:58 np0005558241 podman[414365]: 2025-12-13 09:29:58.004648086 +0000 UTC m=+0.094223763 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 04:29:58 np0005558241 nova_compute[248510]: 2025-12-13 09:29:58.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:29:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3873: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:00 np0005558241 nova_compute[248510]: 2025-12-13 09:30:00.382 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:01 np0005558241 nova_compute[248510]: 2025-12-13 09:30:01.091 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3874: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3875: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3876: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:05 np0005558241 nova_compute[248510]: 2025-12-13 09:30:05.386 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:06 np0005558241 nova_compute[248510]: 2025-12-13 09:30:06.094 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3877: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3878: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:30:09
Dec 13 04:30:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:30:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:30:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'vms']
Dec 13 04:30:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:30:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:30:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:30:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:30:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:30:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:30:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:30:10 np0005558241 nova_compute[248510]: 2025-12-13 09:30:10.387 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:11 np0005558241 nova_compute[248510]: 2025-12-13 09:30:11.096 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:30:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:30:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:30:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:30:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:30:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:30:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:30:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:30:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:30:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:30:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3879: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3880: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:30:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3977364793' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:30:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:30:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3977364793' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:30:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3881: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:15 np0005558241 nova_compute[248510]: 2025-12-13 09:30:15.404 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:15 np0005558241 podman[414524]: 2025-12-13 09:30:15.691625632 +0000 UTC m=+0.067375190 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:30:15 np0005558241 podman[414524]: 2025-12-13 09:30:15.784545401 +0000 UTC m=+0.160294969 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:30:16 np0005558241 nova_compute[248510]: 2025-12-13 09:30:16.098 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:30:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:30:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:30:16 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:30:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3882: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:30:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:30:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:30:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:30:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:30:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:30:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:30:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:30:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:30:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:30:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:30:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:30:17 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:30:17 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:30:17 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:30:17 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:30:17 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:30:18 np0005558241 podman[414852]: 2025-12-13 09:30:17.981259865 +0000 UTC m=+0.035791678 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:30:18 np0005558241 podman[414852]: 2025-12-13 09:30:18.09875537 +0000 UTC m=+0.153287183 container create b0c0b16ed9cd101eb221c736f483b1c090bb0e6ca86b97119c7734c597550ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 04:30:18 np0005558241 systemd[1]: Started libpod-conmon-b0c0b16ed9cd101eb221c736f483b1c090bb0e6ca86b97119c7734c597550ee9.scope.
Dec 13 04:30:18 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:30:18 np0005558241 podman[414852]: 2025-12-13 09:30:18.199007783 +0000 UTC m=+0.253539626 container init b0c0b16ed9cd101eb221c736f483b1c090bb0e6ca86b97119c7734c597550ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:30:18 np0005558241 podman[414852]: 2025-12-13 09:30:18.209391184 +0000 UTC m=+0.263922967 container start b0c0b16ed9cd101eb221c736f483b1c090bb0e6ca86b97119c7734c597550ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:30:18 np0005558241 podman[414852]: 2025-12-13 09:30:18.212664126 +0000 UTC m=+0.267195959 container attach b0c0b16ed9cd101eb221c736f483b1c090bb0e6ca86b97119c7734c597550ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elbakyan, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:30:18 np0005558241 condescending_elbakyan[414868]: 167 167
Dec 13 04:30:18 np0005558241 systemd[1]: libpod-b0c0b16ed9cd101eb221c736f483b1c090bb0e6ca86b97119c7734c597550ee9.scope: Deactivated successfully.
Dec 13 04:30:18 np0005558241 podman[414852]: 2025-12-13 09:30:18.220120042 +0000 UTC m=+0.274651825 container died b0c0b16ed9cd101eb221c736f483b1c090bb0e6ca86b97119c7734c597550ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elbakyan, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 04:30:18 np0005558241 systemd[1]: var-lib-containers-storage-overlay-55edef3dc718a03f03a0bc075238af2d4ce87ff8dfa81c21f44c97f062108788-merged.mount: Deactivated successfully.
Dec 13 04:30:18 np0005558241 podman[414852]: 2025-12-13 09:30:18.270108396 +0000 UTC m=+0.324640169 container remove b0c0b16ed9cd101eb221c736f483b1c090bb0e6ca86b97119c7734c597550ee9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elbakyan, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 04:30:18 np0005558241 systemd[1]: libpod-conmon-b0c0b16ed9cd101eb221c736f483b1c090bb0e6ca86b97119c7734c597550ee9.scope: Deactivated successfully.
Dec 13 04:30:18 np0005558241 podman[414890]: 2025-12-13 09:30:18.428386133 +0000 UTC m=+0.024589497 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:30:18 np0005558241 podman[414890]: 2025-12-13 09:30:18.577724127 +0000 UTC m=+0.173927471 container create dde7ad3c6436a2281cf42564cce641664fb89e4475a9feb4b81351612e39fd51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_meninsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:30:18 np0005558241 systemd[1]: Started libpod-conmon-dde7ad3c6436a2281cf42564cce641664fb89e4475a9feb4b81351612e39fd51.scope.
Dec 13 04:30:18 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:30:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35cd021074f428ca1805eab8be73fa4ca0637e9dc17115347701e8f462c8ff8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35cd021074f428ca1805eab8be73fa4ca0637e9dc17115347701e8f462c8ff8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35cd021074f428ca1805eab8be73fa4ca0637e9dc17115347701e8f462c8ff8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35cd021074f428ca1805eab8be73fa4ca0637e9dc17115347701e8f462c8ff8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:18 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35cd021074f428ca1805eab8be73fa4ca0637e9dc17115347701e8f462c8ff8d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:19 np0005558241 podman[414890]: 2025-12-13 09:30:19.068826557 +0000 UTC m=+0.665029931 container init dde7ad3c6436a2281cf42564cce641664fb89e4475a9feb4b81351612e39fd51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_meninsky, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 04:30:19 np0005558241 podman[414890]: 2025-12-13 09:30:19.078268154 +0000 UTC m=+0.674471498 container start dde7ad3c6436a2281cf42564cce641664fb89e4475a9feb4b81351612e39fd51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:30:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:19 np0005558241 podman[414890]: 2025-12-13 09:30:19.316134146 +0000 UTC m=+0.912337490 container attach dde7ad3c6436a2281cf42564cce641664fb89e4475a9feb4b81351612e39fd51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_meninsky, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:30:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3883: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:19 np0005558241 modest_meninsky[414907]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:30:19 np0005558241 modest_meninsky[414907]: --> All data devices are unavailable
Dec 13 04:30:19 np0005558241 systemd[1]: libpod-dde7ad3c6436a2281cf42564cce641664fb89e4475a9feb4b81351612e39fd51.scope: Deactivated successfully.
Dec 13 04:30:19 np0005558241 podman[414890]: 2025-12-13 09:30:19.63902712 +0000 UTC m=+1.235230514 container died dde7ad3c6436a2281cf42564cce641664fb89e4475a9feb4b81351612e39fd51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_meninsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:30:20 np0005558241 nova_compute[248510]: 2025-12-13 09:30:20.406 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:20 np0005558241 systemd[1]: var-lib-containers-storage-overlay-35cd021074f428ca1805eab8be73fa4ca0637e9dc17115347701e8f462c8ff8d-merged.mount: Deactivated successfully.
Dec 13 04:30:21 np0005558241 nova_compute[248510]: 2025-12-13 09:30:21.101 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:21 np0005558241 podman[414890]: 2025-12-13 09:30:21.175208236 +0000 UTC m=+2.771411580 container remove dde7ad3c6436a2281cf42564cce641664fb89e4475a9feb4b81351612e39fd51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_meninsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:30:21 np0005558241 systemd[1]: libpod-conmon-dde7ad3c6436a2281cf42564cce641664fb89e4475a9feb4b81351612e39fd51.scope: Deactivated successfully.
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3884: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5195834882760674e-05 of space, bias 1.0, pg target 0.004558750464828202 quantized to 32 (current 32)
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.151001443556761e-06 of space, bias 1.0, pg target 0.0012453004330670282 quantized to 32 (current 32)
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.642278062309101e-07 of space, bias 4.0, pg target 0.0006770733674770921 quantized to 16 (current 32)
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:30:21 np0005558241 podman[415000]: 2025-12-13 09:30:21.671685402 +0000 UTC m=+0.028775952 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:30:22 np0005558241 podman[415000]: 2025-12-13 09:30:22.018629809 +0000 UTC m=+0.375720299 container create 5f56169df19cd74db545cb94576c917d3d4190dd7b46d0b923e725274dd6a39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_galileo, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:30:22 np0005558241 systemd[1]: Started libpod-conmon-5f56169df19cd74db545cb94576c917d3d4190dd7b46d0b923e725274dd6a39a.scope.
Dec 13 04:30:22 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:30:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3885: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:24 np0005558241 podman[415000]: 2025-12-13 09:30:24.564814772 +0000 UTC m=+2.921905242 container init 5f56169df19cd74db545cb94576c917d3d4190dd7b46d0b923e725274dd6a39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_galileo, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:30:24 np0005558241 podman[415000]: 2025-12-13 09:30:24.578325581 +0000 UTC m=+2.935416071 container start 5f56169df19cd74db545cb94576c917d3d4190dd7b46d0b923e725274dd6a39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 04:30:24 np0005558241 zen_galileo[415016]: 167 167
Dec 13 04:30:24 np0005558241 systemd[1]: libpod-5f56169df19cd74db545cb94576c917d3d4190dd7b46d0b923e725274dd6a39a.scope: Deactivated successfully.
Dec 13 04:30:25 np0005558241 podman[415000]: 2025-12-13 09:30:25.354563919 +0000 UTC m=+3.711654389 container attach 5f56169df19cd74db545cb94576c917d3d4190dd7b46d0b923e725274dd6a39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_galileo, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:30:25 np0005558241 podman[415000]: 2025-12-13 09:30:25.355446461 +0000 UTC m=+3.712536921 container died 5f56169df19cd74db545cb94576c917d3d4190dd7b46d0b923e725274dd6a39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 04:30:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3886: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:25 np0005558241 nova_compute[248510]: 2025-12-13 09:30:25.407 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:26 np0005558241 nova_compute[248510]: 2025-12-13 09:30:26.103 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:27 np0005558241 systemd[1]: var-lib-containers-storage-overlay-aacc17f652493c1933223ea4888585b4294ab071b8d8c18e6646088b6aa7729a-merged.mount: Deactivated successfully.
Dec 13 04:30:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3887: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:28 np0005558241 podman[415000]: 2025-12-13 09:30:28.468262728 +0000 UTC m=+6.825353168 container remove 5f56169df19cd74db545cb94576c917d3d4190dd7b46d0b923e725274dd6a39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:30:28 np0005558241 systemd[1]: libpod-conmon-5f56169df19cd74db545cb94576c917d3d4190dd7b46d0b923e725274dd6a39a.scope: Deactivated successfully.
Dec 13 04:30:28 np0005558241 podman[415038]: 2025-12-13 09:30:28.605694843 +0000 UTC m=+0.070932669 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Dec 13 04:30:28 np0005558241 podman[415037]: 2025-12-13 09:30:28.607109789 +0000 UTC m=+0.073360220 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:30:28 np0005558241 podman[415035]: 2025-12-13 09:30:28.669363389 +0000 UTC m=+0.135807425 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 13 04:30:28 np0005558241 podman[415098]: 2025-12-13 09:30:28.665298727 +0000 UTC m=+0.030581267 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:30:29 np0005558241 podman[415098]: 2025-12-13 09:30:29.075447468 +0000 UTC m=+0.440729978 container create 981a6238936d785888e2dd080aca7ab41d57192859e5ef16f69184f89e0eb262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:30:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3888: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:29 np0005558241 systemd[1]: Started libpod-conmon-981a6238936d785888e2dd080aca7ab41d57192859e5ef16f69184f89e0eb262.scope.
Dec 13 04:30:29 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:30:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f336b9d5b8d2576e96b02920bddfa33fc7d663aa52862868f56bb1d75b71517/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f336b9d5b8d2576e96b02920bddfa33fc7d663aa52862868f56bb1d75b71517/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f336b9d5b8d2576e96b02920bddfa33fc7d663aa52862868f56bb1d75b71517/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:29 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f336b9d5b8d2576e96b02920bddfa33fc7d663aa52862868f56bb1d75b71517/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:30 np0005558241 podman[415098]: 2025-12-13 09:30:30.261383427 +0000 UTC m=+1.626666017 container init 981a6238936d785888e2dd080aca7ab41d57192859e5ef16f69184f89e0eb262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Dec 13 04:30:30 np0005558241 podman[415098]: 2025-12-13 09:30:30.27109464 +0000 UTC m=+1.636377170 container start 981a6238936d785888e2dd080aca7ab41d57192859e5ef16f69184f89e0eb262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:30:30 np0005558241 nova_compute[248510]: 2025-12-13 09:30:30.410 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]: {
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:    "0": [
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:        {
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "devices": [
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "/dev/loop3"
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            ],
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "lv_name": "ceph_lv0",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "lv_size": "21470642176",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "name": "ceph_lv0",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "tags": {
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.cluster_name": "ceph",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.crush_device_class": "",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.encrypted": "0",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.objectstore": "bluestore",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.osd_id": "0",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.type": "block",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.vdo": "0",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.with_tpm": "0"
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            },
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "type": "block",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "vg_name": "ceph_vg0"
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:        }
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:    ],
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:    "1": [
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:        {
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "devices": [
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "/dev/loop4"
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            ],
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "lv_name": "ceph_lv1",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "lv_size": "21470642176",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "name": "ceph_lv1",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "tags": {
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.cluster_name": "ceph",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.crush_device_class": "",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.encrypted": "0",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.objectstore": "bluestore",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.osd_id": "1",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.type": "block",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.vdo": "0",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.with_tpm": "0"
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            },
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "type": "block",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "vg_name": "ceph_vg1"
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:        }
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:    ],
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:    "2": [
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:        {
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "devices": [
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "/dev/loop5"
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            ],
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "lv_name": "ceph_lv2",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "lv_size": "21470642176",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "name": "ceph_lv2",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "tags": {
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.cluster_name": "ceph",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.crush_device_class": "",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.encrypted": "0",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.objectstore": "bluestore",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.osd_id": "2",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.type": "block",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.vdo": "0",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:                "ceph.with_tpm": "0"
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            },
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "type": "block",
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:            "vg_name": "ceph_vg2"
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:        }
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]:    ]
Dec 13 04:30:30 np0005558241 laughing_mclaren[415124]: }
Dec 13 04:30:30 np0005558241 systemd[1]: libpod-981a6238936d785888e2dd080aca7ab41d57192859e5ef16f69184f89e0eb262.scope: Deactivated successfully.
Dec 13 04:30:30 np0005558241 podman[415098]: 2025-12-13 09:30:30.955031905 +0000 UTC m=+2.320314505 container attach 981a6238936d785888e2dd080aca7ab41d57192859e5ef16f69184f89e0eb262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:30:30 np0005558241 podman[415098]: 2025-12-13 09:30:30.956763668 +0000 UTC m=+2.322046208 container died 981a6238936d785888e2dd080aca7ab41d57192859e5ef16f69184f89e0eb262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 04:30:31 np0005558241 nova_compute[248510]: 2025-12-13 09:30:31.107 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3889: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:31 np0005558241 nova_compute[248510]: 2025-12-13 09:30:31.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:30:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3890: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:34 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4f336b9d5b8d2576e96b02920bddfa33fc7d663aa52862868f56bb1d75b71517-merged.mount: Deactivated successfully.
Dec 13 04:30:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3891: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:35 np0005558241 nova_compute[248510]: 2025-12-13 09:30:35.412 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:35 np0005558241 nova_compute[248510]: 2025-12-13 09:30:35.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:30:36 np0005558241 nova_compute[248510]: 2025-12-13 09:30:36.111 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3892: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:38 np0005558241 podman[415098]: 2025-12-13 09:30:38.143430645 +0000 UTC m=+9.508713195 container remove 981a6238936d785888e2dd080aca7ab41d57192859e5ef16f69184f89e0eb262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:30:38 np0005558241 systemd[1]: libpod-conmon-981a6238936d785888e2dd080aca7ab41d57192859e5ef16f69184f89e0eb262.scope: Deactivated successfully.
Dec 13 04:30:38 np0005558241 podman[415209]: 2025-12-13 09:30:38.689854732 +0000 UTC m=+0.027684165 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:30:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3893: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:39 np0005558241 podman[415209]: 2025-12-13 09:30:39.628951082 +0000 UTC m=+0.966780505 container create 726dd8af3ded5fca4cfdd0d057a0a742450e435295edaa98285217925359d5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_northcutt, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 04:30:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:40 np0005558241 systemd[1]: Started libpod-conmon-726dd8af3ded5fca4cfdd0d057a0a742450e435295edaa98285217925359d5ba.scope.
Dec 13 04:30:40 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:30:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:30:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:30:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:30:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:30:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:30:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:30:40 np0005558241 nova_compute[248510]: 2025-12-13 09:30:40.415 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:40 np0005558241 podman[415209]: 2025-12-13 09:30:40.538457221 +0000 UTC m=+1.876286634 container init 726dd8af3ded5fca4cfdd0d057a0a742450e435295edaa98285217925359d5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_northcutt, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Dec 13 04:30:40 np0005558241 podman[415209]: 2025-12-13 09:30:40.551097118 +0000 UTC m=+1.888926501 container start 726dd8af3ded5fca4cfdd0d057a0a742450e435295edaa98285217925359d5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:30:40 np0005558241 determined_northcutt[415225]: 167 167
Dec 13 04:30:40 np0005558241 systemd[1]: libpod-726dd8af3ded5fca4cfdd0d057a0a742450e435295edaa98285217925359d5ba.scope: Deactivated successfully.
Dec 13 04:30:41 np0005558241 podman[415209]: 2025-12-13 09:30:41.041671785 +0000 UTC m=+2.379501208 container attach 726dd8af3ded5fca4cfdd0d057a0a742450e435295edaa98285217925359d5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 04:30:41 np0005558241 podman[415209]: 2025-12-13 09:30:41.043332607 +0000 UTC m=+2.381162040 container died 726dd8af3ded5fca4cfdd0d057a0a742450e435295edaa98285217925359d5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 04:30:41 np0005558241 nova_compute[248510]: 2025-12-13 09:30:41.114 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5ef48f9b5d1e57e657cc2714904214f1eb81723ad1fef815b1d978848832244d-merged.mount: Deactivated successfully.
Dec 13 04:30:41 np0005558241 nova_compute[248510]: 2025-12-13 09:30:41.378 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:30:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3894: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:42 np0005558241 podman[415209]: 2025-12-13 09:30:42.469185797 +0000 UTC m=+3.807015210 container remove 726dd8af3ded5fca4cfdd0d057a0a742450e435295edaa98285217925359d5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Dec 13 04:30:42 np0005558241 systemd[1]: libpod-conmon-726dd8af3ded5fca4cfdd0d057a0a742450e435295edaa98285217925359d5ba.scope: Deactivated successfully.
Dec 13 04:30:42 np0005558241 podman[415250]: 2025-12-13 09:30:42.727664076 +0000 UTC m=+0.049369548 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:30:42 np0005558241 podman[415250]: 2025-12-13 09:30:42.953798275 +0000 UTC m=+0.275503717 container create b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True)
Dec 13 04:30:43 np0005558241 systemd[1]: Started libpod-conmon-b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc.scope.
Dec 13 04:30:43 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:30:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/699ad42857095961b9a6674e09eff0491055f00513c559e14cdaacb34675f4d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/699ad42857095961b9a6674e09eff0491055f00513c559e14cdaacb34675f4d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/699ad42857095961b9a6674e09eff0491055f00513c559e14cdaacb34675f4d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/699ad42857095961b9a6674e09eff0491055f00513c559e14cdaacb34675f4d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3895: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:43 np0005558241 podman[415250]: 2025-12-13 09:30:43.490248792 +0000 UTC m=+0.811954304 container init b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 04:30:43 np0005558241 podman[415250]: 2025-12-13 09:30:43.496671303 +0000 UTC m=+0.818376735 container start b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:30:43 np0005558241 podman[415250]: 2025-12-13 09:30:43.78617087 +0000 UTC m=+1.107876322 container attach b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 04:30:44 np0005558241 lvm[415344]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:30:44 np0005558241 lvm[415344]: VG ceph_vg1 finished
Dec 13 04:30:44 np0005558241 lvm[415345]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:30:44 np0005558241 lvm[415345]: VG ceph_vg0 finished
Dec 13 04:30:44 np0005558241 lvm[415347]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:30:44 np0005558241 lvm[415347]: VG ceph_vg2 finished
Dec 13 04:30:44 np0005558241 agitated_mendeleev[415266]: {}
Dec 13 04:30:44 np0005558241 systemd[1]: libpod-b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc.scope: Deactivated successfully.
Dec 13 04:30:44 np0005558241 systemd[1]: libpod-b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc.scope: Consumed 1.503s CPU time.
Dec 13 04:30:44 np0005558241 podman[415350]: 2025-12-13 09:30:44.431170378 +0000 UTC m=+0.034183928 container died b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:30:44 np0005558241 nova_compute[248510]: 2025-12-13 09:30:44.774 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:30:44 np0005558241 nova_compute[248510]: 2025-12-13 09:30:44.777 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:30:44 np0005558241 nova_compute[248510]: 2025-12-13 09:30:44.777 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:30:44 np0005558241 nova_compute[248510]: 2025-12-13 09:30:44.802 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:30:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:44 np0005558241 systemd[1]: var-lib-containers-storage-overlay-699ad42857095961b9a6674e09eff0491055f00513c559e14cdaacb34675f4d9-merged.mount: Deactivated successfully.
Dec 13 04:30:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3896: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:45 np0005558241 nova_compute[248510]: 2025-12-13 09:30:45.417 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:45 np0005558241 podman[415350]: 2025-12-13 09:30:45.434105559 +0000 UTC m=+1.037119109 container remove b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mendeleev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:30:45 np0005558241 systemd[1]: libpod-conmon-b61cee87bcc68211af966d2f1e1bd675ed31742fe643b69cb7722e40017f0fbc.scope: Deactivated successfully.
Dec 13 04:30:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:30:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:30:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:30:45 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:30:46 np0005558241 nova_compute[248510]: 2025-12-13 09:30:46.116 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:46 np0005558241 nova_compute[248510]: 2025-12-13 09:30:46.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:30:46 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:30:46 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:30:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3897: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:48 np0005558241 nova_compute[248510]: 2025-12-13 09:30:48.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:30:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:30:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7200.0 total, 600.0 interval
Cumulative writes: 17K writes, 77K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s
Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1351 writes, 6107 keys, 1351 commit groups, 1.0 writes per commit group, ingest: 9.18 MB, 0.02 MB/s
Interval WAL: 1351 writes, 1351 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     25.0      4.00              0.36        57    0.070       0      0       0.0       0.0
  L6      1/0   11.04 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.0     77.7     66.0      7.62              1.63        56    0.136    398K    29K       0.0       0.0
 Sum      1/0   11.04 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.0     51.0     51.9     11.62              1.99       113    0.103    398K    29K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.3     44.9     45.1      1.33              0.20        10    0.133     48K   2535       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0     77.7     66.0      7.62              1.63        56    0.136    398K    29K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     25.0      3.99              0.36        56    0.071       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 7200.0 total, 600.0 interval
Flush(GB): cumulative 0.097, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.59 GB write, 0.08 MB/s write, 0.58 GB read, 0.08 MB/s read, 11.6 seconds
Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.3 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 304.00 MB usage: 68.32 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000898 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(4211,65.50 MB,21.545%) FilterBlock(114,1.08 MB,0.354039%) IndexBlock(114,1.75 MB,0.576064%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Dec 13 04:30:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3898: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:49 np0005558241 nova_compute[248510]: 2025-12-13 09:30:49.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:30:49 np0005558241 nova_compute[248510]: 2025-12-13 09:30:49.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:30:49 np0005558241 nova_compute[248510]: 2025-12-13 09:30:49.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:30:49 np0005558241 nova_compute[248510]: 2025-12-13 09:30:49.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:30:49 np0005558241 nova_compute[248510]: 2025-12-13 09:30:49.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:30:49 np0005558241 nova_compute[248510]: 2025-12-13 09:30:49.810 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:30:49 np0005558241 nova_compute[248510]: 2025-12-13 09:30:49.810 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:30:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:30:50 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2487398291' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:30:50 np0005558241 nova_compute[248510]: 2025-12-13 09:30:50.421 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:50 np0005558241 nova_compute[248510]: 2025-12-13 09:30:50.438 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:30:50 np0005558241 nova_compute[248510]: 2025-12-13 09:30:50.610 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:30:50 np0005558241 nova_compute[248510]: 2025-12-13 09:30:50.611 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3469MB free_disk=59.987369677983224GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:30:50 np0005558241 nova_compute[248510]: 2025-12-13 09:30:50.612 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:30:50 np0005558241 nova_compute[248510]: 2025-12-13 09:30:50.612 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:30:50 np0005558241 nova_compute[248510]: 2025-12-13 09:30:50.681 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:30:50 np0005558241 nova_compute[248510]: 2025-12-13 09:30:50.682 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:30:50 np0005558241 nova_compute[248510]: 2025-12-13 09:30:50.759 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:30:51 np0005558241 nova_compute[248510]: 2025-12-13 09:30:51.118 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:30:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1372129658' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:30:51 np0005558241 nova_compute[248510]: 2025-12-13 09:30:51.332 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:30:51 np0005558241 nova_compute[248510]: 2025-12-13 09:30:51.338 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:30:51 np0005558241 nova_compute[248510]: 2025-12-13 09:30:51.362 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:30:51 np0005558241 nova_compute[248510]: 2025-12-13 09:30:51.363 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:30:51 np0005558241 nova_compute[248510]: 2025-12-13 09:30:51.363 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:30:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3899: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3900: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:55 np0005558241 nova_compute[248510]: 2025-12-13 09:30:55.363 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:30:55 np0005558241 nova_compute[248510]: 2025-12-13 09:30:55.364 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:30:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3901: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:55 np0005558241 nova_compute[248510]: 2025-12-13 09:30:55.423 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:30:55.466 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:30:55.467 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:30:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:30:55.467 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:30:56 np0005558241 nova_compute[248510]: 2025-12-13 09:30:56.122 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:30:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3902: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:58 np0005558241 podman[415436]: 2025-12-13 09:30:58.993901019 +0000 UTC m=+0.078880459 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:30:59 np0005558241 podman[415437]: 2025-12-13 09:30:59.016139016 +0000 UTC m=+0.096593782 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:30:59 np0005558241 podman[415435]: 2025-12-13 09:30:59.047849201 +0000 UTC m=+0.134759269 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 04:30:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Dec 13 04:30:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Dec 13 04:30:59 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Dec 13 04:30:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3904: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 102 B/s wr, 0 op/s
Dec 13 04:31:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:00 np0005558241 nova_compute[248510]: 2025-12-13 09:31:00.424 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:00 np0005558241 nova_compute[248510]: 2025-12-13 09:31:00.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:31:01 np0005558241 nova_compute[248510]: 2025-12-13 09:31:01.124 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Dec 13 04:31:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Dec 13 04:31:01 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Dec 13 04:31:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3906: 321 pgs: 321 active+clean; 463 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 383 B/s rd, 255 B/s wr, 1 op/s
Dec 13 04:31:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3907: 321 pgs: 321 active+clean; 16 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 2.0 MiB/s wr, 24 op/s
Dec 13 04:31:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3908: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec 13 04:31:05 np0005558241 nova_compute[248510]: 2025-12-13 09:31:05.426 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:06 np0005558241 nova_compute[248510]: 2025-12-13 09:31:06.127 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3909: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 5.0 MiB/s wr, 46 op/s
Dec 13 04:31:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3910: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 37 op/s
Dec 13 04:31:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:31:09
Dec 13 04:31:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:31:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:31:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.meta', 'volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.log']
Dec 13 04:31:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:31:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:31:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:31:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:31:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:31:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:31:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:31:10 np0005558241 nova_compute[248510]: 2025-12-13 09:31:10.428 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:11 np0005558241 nova_compute[248510]: 2025-12-13 09:31:11.129 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:31:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:31:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:31:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:31:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:31:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:31:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:31:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:31:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:31:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:31:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3911: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 37 op/s
Dec 13 04:31:12 np0005558241 nova_compute[248510]: 2025-12-13 09:31:12.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:31:12 np0005558241 nova_compute[248510]: 2025-12-13 09:31:12.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 13 04:31:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3912: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 31 op/s
Dec 13 04:31:13 np0005558241 nova_compute[248510]: 2025-12-13 09:31:13.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:31:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:31:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2635968025' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:31:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:31:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2635968025' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:31:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3913: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 2.1 MiB/s wr, 15 op/s
Dec 13 04:31:15 np0005558241 nova_compute[248510]: 2025-12-13 09:31:15.472 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:16 np0005558241 nova_compute[248510]: 2025-12-13 09:31:16.131 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3914: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:31:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3915: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:31:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:20 np0005558241 nova_compute[248510]: 2025-12-13 09:31:20.475 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:21 np0005558241 nova_compute[248510]: 2025-12-13 09:31:21.134 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3916: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5217104235718227e-05 of space, bias 1.0, pg target 0.004565131270715468 quantized to 32 (current 32)
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006700045834095627 of space, bias 1.0, pg target 0.20100137502286883 quantized to 32 (current 32)
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.64414107132728e-07 of space, bias 4.0, pg target 0.0006772969285592736 quantized to 16 (current 32)
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:31:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3917: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:31:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:31:25.305 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:31:25 np0005558241 nova_compute[248510]: 2025-12-13 09:31:25.305 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:25 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:31:25.305 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:31:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3918: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:31:25 np0005558241 nova_compute[248510]: 2025-12-13 09:31:25.477 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:26 np0005558241 nova_compute[248510]: 2025-12-13 09:31:26.137 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3919: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:31:28 np0005558241 nova_compute[248510]: 2025-12-13 09:31:28.843 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:31:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3920: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:31:29 np0005558241 podman[415498]: 2025-12-13 09:31:29.977119976 +0000 UTC m=+0.060522818 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:31:29 np0005558241 podman[415499]: 2025-12-13 09:31:29.998033821 +0000 UTC m=+0.070009146 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:31:30 np0005558241 podman[415497]: 2025-12-13 09:31:30.012059132 +0000 UTC m=+0.100026118 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:31:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:30 np0005558241 nova_compute[248510]: 2025-12-13 09:31:30.479 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:31 np0005558241 nova_compute[248510]: 2025-12-13 09:31:31.140 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3921: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:31:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:31:32.308 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:31:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3922: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:31:33 np0005558241 nova_compute[248510]: 2025-12-13 09:31:33.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:31:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3923: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:31:35 np0005558241 nova_compute[248510]: 2025-12-13 09:31:35.481 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:35 np0005558241 nova_compute[248510]: 2025-12-13 09:31:35.765 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:31:36 np0005558241 nova_compute[248510]: 2025-12-13 09:31:36.142 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3924: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:31:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3925: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.4 KiB/s rd, 170 B/s wr, 5 op/s
Dec 13 04:31:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Dec 13 04:31:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:31:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:31:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:31:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:31:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:31:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:31:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Dec 13 04:31:40 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Dec 13 04:31:40 np0005558241 nova_compute[248510]: 2025-12-13 09:31:40.484 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:41 np0005558241 nova_compute[248510]: 2025-12-13 09:31:41.145 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3927: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.4 KiB/s rd, 204 B/s wr, 6 op/s
Dec 13 04:31:41 np0005558241 nova_compute[248510]: 2025-12-13 09:31:41.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:31:41 np0005558241 nova_compute[248510]: 2025-12-13 09:31:41.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 13 04:31:41 np0005558241 nova_compute[248510]: 2025-12-13 09:31:41.794 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 13 04:31:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3928: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 613 B/s wr, 18 op/s
Dec 13 04:31:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3929: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 13 04:31:45 np0005558241 nova_compute[248510]: 2025-12-13 09:31:45.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:45 np0005558241 nova_compute[248510]: 2025-12-13 09:31:45.794 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:31:45 np0005558241 nova_compute[248510]: 2025-12-13 09:31:45.795 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:31:45 np0005558241 nova_compute[248510]: 2025-12-13 09:31:45.795 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:31:45 np0005558241 nova_compute[248510]: 2025-12-13 09:31:45.816 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:31:46 np0005558241 nova_compute[248510]: 2025-12-13 09:31:46.148 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:31:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:31:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:31:46 np0005558241 nova_compute[248510]: 2025-12-13 09:31:46.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:31:47 np0005558241 podman[415706]: 2025-12-13 09:31:47.169981524 +0000 UTC m=+0.057128673 container create 00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:31:47 np0005558241 systemd[1]: Started libpod-conmon-00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e.scope.
Dec 13 04:31:47 np0005558241 podman[415706]: 2025-12-13 09:31:47.144758112 +0000 UTC m=+0.031905341 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:31:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:31:47 np0005558241 podman[415706]: 2025-12-13 09:31:47.279281974 +0000 UTC m=+0.166429153 container init 00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_faraday, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:31:47 np0005558241 podman[415706]: 2025-12-13 09:31:47.293160482 +0000 UTC m=+0.180307671 container start 00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 04:31:47 np0005558241 podman[415706]: 2025-12-13 09:31:47.297721236 +0000 UTC m=+0.184868415 container attach 00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 04:31:47 np0005558241 competent_faraday[415722]: 167 167
Dec 13 04:31:47 np0005558241 systemd[1]: libpod-00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e.scope: Deactivated successfully.
Dec 13 04:31:47 np0005558241 conmon[415722]: conmon 00285e0473e7f64ac435 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e.scope/container/memory.events
Dec 13 04:31:47 np0005558241 podman[415706]: 2025-12-13 09:31:47.304132907 +0000 UTC m=+0.191280096 container died 00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:31:47 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ace8d9907bf831be6c850b46cdb4bdf83267df6422453da9b873941a3bc26c10-merged.mount: Deactivated successfully.
Dec 13 04:31:47 np0005558241 podman[415706]: 2025-12-13 09:31:47.358699165 +0000 UTC m=+0.245846314 container remove 00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:31:47 np0005558241 systemd[1]: libpod-conmon-00285e0473e7f64ac43572ea990a6de3c5fa9d20c86a31665c3cc3569640737e.scope: Deactivated successfully.
Dec 13 04:31:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3930: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 13 04:31:47 np0005558241 podman[415745]: 2025-12-13 09:31:47.518709606 +0000 UTC m=+0.043818340 container create 9df622f7e2e030ba4e2c6ed607bc4f1b2cb095fec7852bfc6f5245bad57fe8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:31:47 np0005558241 systemd[1]: Started libpod-conmon-9df622f7e2e030ba4e2c6ed607bc4f1b2cb095fec7852bfc6f5245bad57fe8df.scope.
Dec 13 04:31:47 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:31:47 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:31:47 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:31:47 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:31:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a042a4874be703f5990d90214da5fdf1e931dde095a29c2204f33f4aa8abfdd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a042a4874be703f5990d90214da5fdf1e931dde095a29c2204f33f4aa8abfdd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a042a4874be703f5990d90214da5fdf1e931dde095a29c2204f33f4aa8abfdd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a042a4874be703f5990d90214da5fdf1e931dde095a29c2204f33f4aa8abfdd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:47 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a042a4874be703f5990d90214da5fdf1e931dde095a29c2204f33f4aa8abfdd4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:47 np0005558241 podman[415745]: 2025-12-13 09:31:47.501964976 +0000 UTC m=+0.027073730 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:31:47 np0005558241 podman[415745]: 2025-12-13 09:31:47.603932002 +0000 UTC m=+0.129040746 container init 9df622f7e2e030ba4e2c6ed607bc4f1b2cb095fec7852bfc6f5245bad57fe8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 04:31:47 np0005558241 podman[415745]: 2025-12-13 09:31:47.619832691 +0000 UTC m=+0.144941435 container start 9df622f7e2e030ba4e2c6ed607bc4f1b2cb095fec7852bfc6f5245bad57fe8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_leavitt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec 13 04:31:47 np0005558241 podman[415745]: 2025-12-13 09:31:47.625239656 +0000 UTC m=+0.150348390 container attach 9df622f7e2e030ba4e2c6ed607bc4f1b2cb095fec7852bfc6f5245bad57fe8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:31:48 np0005558241 eager_leavitt[415762]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:31:48 np0005558241 eager_leavitt[415762]: --> All data devices are unavailable
Dec 13 04:31:48 np0005558241 systemd[1]: libpod-9df622f7e2e030ba4e2c6ed607bc4f1b2cb095fec7852bfc6f5245bad57fe8df.scope: Deactivated successfully.
Dec 13 04:31:48 np0005558241 podman[415745]: 2025-12-13 09:31:48.105555566 +0000 UTC m=+0.630664300 container died 9df622f7e2e030ba4e2c6ed607bc4f1b2cb095fec7852bfc6f5245bad57fe8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 04:31:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a042a4874be703f5990d90214da5fdf1e931dde095a29c2204f33f4aa8abfdd4-merged.mount: Deactivated successfully.
Dec 13 04:31:48 np0005558241 podman[415745]: 2025-12-13 09:31:48.177051478 +0000 UTC m=+0.702160222 container remove 9df622f7e2e030ba4e2c6ed607bc4f1b2cb095fec7852bfc6f5245bad57fe8df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_leavitt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 04:31:48 np0005558241 systemd[1]: libpod-conmon-9df622f7e2e030ba4e2c6ed607bc4f1b2cb095fec7852bfc6f5245bad57fe8df.scope: Deactivated successfully.
Dec 13 04:31:48 np0005558241 podman[415861]: 2025-12-13 09:31:48.748886723 +0000 UTC m=+0.068123169 container create b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_blackwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:31:48 np0005558241 systemd[1]: Started libpod-conmon-b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0.scope.
Dec 13 04:31:48 np0005558241 podman[415861]: 2025-12-13 09:31:48.726934812 +0000 UTC m=+0.046171298 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:31:48 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:31:48 np0005558241 podman[415861]: 2025-12-13 09:31:48.849766151 +0000 UTC m=+0.169002607 container init b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:31:48 np0005558241 podman[415861]: 2025-12-13 09:31:48.856602723 +0000 UTC m=+0.175839169 container start b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:31:48 np0005558241 podman[415861]: 2025-12-13 09:31:48.860010478 +0000 UTC m=+0.179246924 container attach b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_blackwell, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:31:48 np0005558241 interesting_blackwell[415877]: 167 167
Dec 13 04:31:48 np0005558241 systemd[1]: libpod-b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0.scope: Deactivated successfully.
Dec 13 04:31:48 np0005558241 conmon[415877]: conmon b9dada57b6b656f871f7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0.scope/container/memory.events
Dec 13 04:31:48 np0005558241 podman[415861]: 2025-12-13 09:31:48.866282555 +0000 UTC m=+0.185519051 container died b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_blackwell, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:31:48 np0005558241 systemd[1]: var-lib-containers-storage-overlay-c1f2793a815c206edc09f6814ea15fac1b1ee91c5ef2115d38b9714c8b867430-merged.mount: Deactivated successfully.
Dec 13 04:31:48 np0005558241 podman[415861]: 2025-12-13 09:31:48.917698274 +0000 UTC m=+0.236934720 container remove b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_blackwell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 04:31:48 np0005558241 systemd[1]: libpod-conmon-b9dada57b6b656f871f71e73b1465bc5aa270be37f359a2be00aeb98cc98d5c0.scope: Deactivated successfully.
Dec 13 04:31:49 np0005558241 podman[415900]: 2025-12-13 09:31:49.105791799 +0000 UTC m=+0.048139948 container create 7f4c4701dddb8b67a124ed541d763ea0c6e79264b3ab98d212a29dabf6f12072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Dec 13 04:31:49 np0005558241 systemd[1]: Started libpod-conmon-7f4c4701dddb8b67a124ed541d763ea0c6e79264b3ab98d212a29dabf6f12072.scope.
Dec 13 04:31:49 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:31:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63acc9db54e4e948ac316dbc8bbfadb1e6f25a59a5afb4c593014e464c87c532/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63acc9db54e4e948ac316dbc8bbfadb1e6f25a59a5afb4c593014e464c87c532/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63acc9db54e4e948ac316dbc8bbfadb1e6f25a59a5afb4c593014e464c87c532/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:49 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63acc9db54e4e948ac316dbc8bbfadb1e6f25a59a5afb4c593014e464c87c532/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:49 np0005558241 podman[415900]: 2025-12-13 09:31:49.084660219 +0000 UTC m=+0.027008388 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:31:49 np0005558241 podman[415900]: 2025-12-13 09:31:49.184984944 +0000 UTC m=+0.127333093 container init 7f4c4701dddb8b67a124ed541d763ea0c6e79264b3ab98d212a29dabf6f12072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_newton, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:31:49 np0005558241 podman[415900]: 2025-12-13 09:31:49.192038431 +0000 UTC m=+0.134386600 container start 7f4c4701dddb8b67a124ed541d763ea0c6e79264b3ab98d212a29dabf6f12072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_newton, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 04:31:49 np0005558241 podman[415900]: 2025-12-13 09:31:49.196323678 +0000 UTC m=+0.138671837 container attach 7f4c4701dddb8b67a124ed541d763ea0c6e79264b3ab98d212a29dabf6f12072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 04:31:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3931: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Dec 13 04:31:49 np0005558241 goofy_newton[415917]: {
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:    "0": [
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:        {
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "devices": [
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "/dev/loop3"
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            ],
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "lv_name": "ceph_lv0",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "lv_size": "21470642176",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "name": "ceph_lv0",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "tags": {
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.cluster_name": "ceph",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.crush_device_class": "",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.encrypted": "0",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.objectstore": "bluestore",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.osd_id": "0",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.type": "block",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.vdo": "0",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.with_tpm": "0"
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            },
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "type": "block",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "vg_name": "ceph_vg0"
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:        }
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:    ],
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:    "1": [
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:        {
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "devices": [
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "/dev/loop4"
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            ],
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "lv_name": "ceph_lv1",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "lv_size": "21470642176",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "name": "ceph_lv1",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "tags": {
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.cluster_name": "ceph",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.crush_device_class": "",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.encrypted": "0",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.objectstore": "bluestore",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.osd_id": "1",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.type": "block",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.vdo": "0",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.with_tpm": "0"
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            },
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "type": "block",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "vg_name": "ceph_vg1"
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:        }
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:    ],
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:    "2": [
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:        {
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "devices": [
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "/dev/loop5"
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            ],
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "lv_name": "ceph_lv2",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "lv_size": "21470642176",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "name": "ceph_lv2",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "tags": {
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.cluster_name": "ceph",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.crush_device_class": "",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.encrypted": "0",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.objectstore": "bluestore",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.osd_id": "2",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.type": "block",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.vdo": "0",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:                "ceph.with_tpm": "0"
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            },
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "type": "block",
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:            "vg_name": "ceph_vg2"
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:        }
Dec 13 04:31:49 np0005558241 goofy_newton[415917]:    ]
Dec 13 04:31:49 np0005558241 goofy_newton[415917]: }
Dec 13 04:31:49 np0005558241 systemd[1]: libpod-7f4c4701dddb8b67a124ed541d763ea0c6e79264b3ab98d212a29dabf6f12072.scope: Deactivated successfully.
Dec 13 04:31:49 np0005558241 podman[415900]: 2025-12-13 09:31:49.520755011 +0000 UTC m=+0.463103180 container died 7f4c4701dddb8b67a124ed541d763ea0c6e79264b3ab98d212a29dabf6f12072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_newton, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:31:49 np0005558241 systemd[1]: var-lib-containers-storage-overlay-63acc9db54e4e948ac316dbc8bbfadb1e6f25a59a5afb4c593014e464c87c532-merged.mount: Deactivated successfully.
Dec 13 04:31:49 np0005558241 podman[415900]: 2025-12-13 09:31:49.564337423 +0000 UTC m=+0.506685572 container remove 7f4c4701dddb8b67a124ed541d763ea0c6e79264b3ab98d212a29dabf6f12072 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 04:31:49 np0005558241 systemd[1]: libpod-conmon-7f4c4701dddb8b67a124ed541d763ea0c6e79264b3ab98d212a29dabf6f12072.scope: Deactivated successfully.
Dec 13 04:31:49 np0005558241 nova_compute[248510]: 2025-12-13 09:31:49.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:31:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Dec 13 04:31:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Dec 13 04:31:50 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Dec 13 04:31:50 np0005558241 podman[416001]: 2025-12-13 09:31:50.052969061 +0000 UTC m=+0.046717792 container create c3ccdb7744229f01132d11225a73969e4ca7f824386d18dab7b5b676d707e5e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_margulis, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 04:31:50 np0005558241 systemd[1]: Started libpod-conmon-c3ccdb7744229f01132d11225a73969e4ca7f824386d18dab7b5b676d707e5e7.scope.
Dec 13 04:31:50 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:31:50 np0005558241 podman[416001]: 2025-12-13 09:31:50.032497848 +0000 UTC m=+0.026246599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:31:50 np0005558241 podman[416001]: 2025-12-13 09:31:50.139653244 +0000 UTC m=+0.133401995 container init c3ccdb7744229f01132d11225a73969e4ca7f824386d18dab7b5b676d707e5e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_margulis, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 04:31:50 np0005558241 podman[416001]: 2025-12-13 09:31:50.147824339 +0000 UTC m=+0.141573070 container start c3ccdb7744229f01132d11225a73969e4ca7f824386d18dab7b5b676d707e5e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_margulis, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:31:50 np0005558241 podman[416001]: 2025-12-13 09:31:50.151834419 +0000 UTC m=+0.145583170 container attach c3ccdb7744229f01132d11225a73969e4ca7f824386d18dab7b5b676d707e5e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_margulis, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:31:50 np0005558241 zen_margulis[416017]: 167 167
Dec 13 04:31:50 np0005558241 systemd[1]: libpod-c3ccdb7744229f01132d11225a73969e4ca7f824386d18dab7b5b676d707e5e7.scope: Deactivated successfully.
Dec 13 04:31:50 np0005558241 podman[416001]: 2025-12-13 09:31:50.154997448 +0000 UTC m=+0.148746179 container died c3ccdb7744229f01132d11225a73969e4ca7f824386d18dab7b5b676d707e5e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_margulis, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:31:50 np0005558241 systemd[1]: var-lib-containers-storage-overlay-929246e319ae01d6df15c0d2f2c3e9327add32e62d1956fd8de0bef96078b390-merged.mount: Deactivated successfully.
Dec 13 04:31:50 np0005558241 podman[416001]: 2025-12-13 09:31:50.196337325 +0000 UTC m=+0.190086056 container remove c3ccdb7744229f01132d11225a73969e4ca7f824386d18dab7b5b676d707e5e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_margulis, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:31:50 np0005558241 systemd[1]: libpod-conmon-c3ccdb7744229f01132d11225a73969e4ca7f824386d18dab7b5b676d707e5e7.scope: Deactivated successfully.
Dec 13 04:31:50 np0005558241 podman[416041]: 2025-12-13 09:31:50.363172627 +0000 UTC m=+0.048566559 container create 40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_williamson, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:31:50 np0005558241 systemd[1]: Started libpod-conmon-40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2.scope.
Dec 13 04:31:50 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:31:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f3ca0cda28b7438adfe67d61272f287b24db14568d45578723af6a0f63a6f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f3ca0cda28b7438adfe67d61272f287b24db14568d45578723af6a0f63a6f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f3ca0cda28b7438adfe67d61272f287b24db14568d45578723af6a0f63a6f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:50 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66f3ca0cda28b7438adfe67d61272f287b24db14568d45578723af6a0f63a6f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:50 np0005558241 podman[416041]: 2025-12-13 09:31:50.344694334 +0000 UTC m=+0.030088286 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:31:50 np0005558241 podman[416041]: 2025-12-13 09:31:50.441784657 +0000 UTC m=+0.127178609 container init 40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_williamson, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:31:50 np0005558241 podman[416041]: 2025-12-13 09:31:50.449619964 +0000 UTC m=+0.135013896 container start 40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_williamson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:31:50 np0005558241 podman[416041]: 2025-12-13 09:31:50.452954927 +0000 UTC m=+0.138348879 container attach 40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_williamson, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 04:31:50 np0005558241 nova_compute[248510]: 2025-12-13 09:31:50.487 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:50 np0005558241 nova_compute[248510]: 2025-12-13 09:31:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:31:50 np0005558241 nova_compute[248510]: 2025-12-13 09:31:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:31:50 np0005558241 nova_compute[248510]: 2025-12-13 09:31:50.802 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:31:50 np0005558241 nova_compute[248510]: 2025-12-13 09:31:50.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:31:50 np0005558241 nova_compute[248510]: 2025-12-13 09:31:50.804 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:31:50 np0005558241 nova_compute[248510]: 2025-12-13 09:31:50.804 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:31:50 np0005558241 nova_compute[248510]: 2025-12-13 09:31:50.805 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:31:51 np0005558241 nova_compute[248510]: 2025-12-13 09:31:51.150 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:51 np0005558241 lvm[416156]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:31:51 np0005558241 lvm[416157]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:31:51 np0005558241 lvm[416157]: VG ceph_vg1 finished
Dec 13 04:31:51 np0005558241 lvm[416156]: VG ceph_vg0 finished
Dec 13 04:31:51 np0005558241 lvm[416159]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:31:51 np0005558241 lvm[416159]: VG ceph_vg2 finished
Dec 13 04:31:51 np0005558241 pedantic_williamson[416058]: {}
Dec 13 04:31:51 np0005558241 systemd[1]: libpod-40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2.scope: Deactivated successfully.
Dec 13 04:31:51 np0005558241 podman[416041]: 2025-12-13 09:31:51.348447475 +0000 UTC m=+1.033841427 container died 40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_williamson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 04:31:51 np0005558241 systemd[1]: libpod-40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2.scope: Consumed 1.511s CPU time.
Dec 13 04:31:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:31:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2474496401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:31:51 np0005558241 nova_compute[248510]: 2025-12-13 09:31:51.384 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:31:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3933: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Dec 13 04:31:51 np0005558241 nova_compute[248510]: 2025-12-13 09:31:51.541 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:31:51 np0005558241 nova_compute[248510]: 2025-12-13 09:31:51.542 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3394MB free_disk=59.98736613895744GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:31:51 np0005558241 nova_compute[248510]: 2025-12-13 09:31:51.542 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:31:51 np0005558241 nova_compute[248510]: 2025-12-13 09:31:51.542 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:31:51 np0005558241 nova_compute[248510]: 2025-12-13 09:31:51.613 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:31:51 np0005558241 nova_compute[248510]: 2025-12-13 09:31:51.614 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:31:51 np0005558241 nova_compute[248510]: 2025-12-13 09:31:51.636 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:31:51 np0005558241 systemd[1]: var-lib-containers-storage-overlay-66f3ca0cda28b7438adfe67d61272f287b24db14568d45578723af6a0f63a6f2-merged.mount: Deactivated successfully.
Dec 13 04:31:51 np0005558241 podman[416041]: 2025-12-13 09:31:51.935984782 +0000 UTC m=+1.621378714 container remove 40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_williamson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 04:31:51 np0005558241 systemd[1]: libpod-conmon-40eb001d12115b152f96e5617033ff8639d06f23d02406eed294cf5db2c0dbd2.scope: Deactivated successfully.
Dec 13 04:31:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:31:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:31:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:31:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:31:52 np0005558241 systemd[1]: Starting dnf makecache...
Dec 13 04:31:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:31:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/7845145' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:31:52 np0005558241 nova_compute[248510]: 2025-12-13 09:31:52.342 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.706s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:31:52 np0005558241 nova_compute[248510]: 2025-12-13 09:31:52.357 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:31:52 np0005558241 nova_compute[248510]: 2025-12-13 09:31:52.377 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:31:52 np0005558241 nova_compute[248510]: 2025-12-13 09:31:52.379 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:31:52 np0005558241 nova_compute[248510]: 2025-12-13 09:31:52.379 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:31:52 np0005558241 dnf[416221]: Metadata cache refreshed recently.
Dec 13 04:31:52 np0005558241 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 13 04:31:52 np0005558241 systemd[1]: Finished dnf makecache.
Dec 13 04:31:52 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:31:52 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:31:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3934: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 818 B/s wr, 6 op/s
Dec 13 04:31:54 np0005558241 nova_compute[248510]: 2025-12-13 09:31:54.833 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:31:54 np0005558241 nova_compute[248510]: 2025-12-13 09:31:54.856 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:31:54 np0005558241 nova_compute[248510]: 2025-12-13 09:31:54.856 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:31:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3935: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:31:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:31:55.468 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:31:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:31:55.468 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:31:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:31:55.468 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:31:55 np0005558241 nova_compute[248510]: 2025-12-13 09:31:55.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:56 np0005558241 nova_compute[248510]: 2025-12-13 09:31:56.153 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:31:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3936: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:31:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3937: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:00 np0005558241 nova_compute[248510]: 2025-12-13 09:32:00.492 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:00 np0005558241 podman[416228]: 2025-12-13 09:32:00.990587122 +0000 UTC m=+0.074073788 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 04:32:01 np0005558241 podman[416227]: 2025-12-13 09:32:01.002525121 +0000 UTC m=+0.084897479 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Dec 13 04:32:01 np0005558241 podman[416226]: 2025-12-13 09:32:01.031932428 +0000 UTC m=+0.114928612 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec 13 04:32:01 np0005558241 nova_compute[248510]: 2025-12-13 09:32:01.197 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3938: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:02 np0005558241 nova_compute[248510]: 2025-12-13 09:32:02.797 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:32:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3939: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:32:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 47K writes, 187K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s#012Cumulative WAL: 47K writes, 17K syncs, 2.75 writes per sync, written: 0.19 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1267 writes, 4441 keys, 1267 commit groups, 1.0 writes per commit group, ingest: 3.93 MB, 0.01 MB/s#012Interval WAL: 1267 writes, 514 syncs, 2.46 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:32:03 np0005558241 ceph-osd[87041]: bluestore.MempoolThread fragmentation_score=0.004811 took=0.000062s
Dec 13 04:32:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3940: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:05 np0005558241 nova_compute[248510]: 2025-12-13 09:32:05.494 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:06 np0005558241 nova_compute[248510]: 2025-12-13 09:32:06.200 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3941: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3942: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:32:09
Dec 13 04:32:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:32:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:32:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'vms', 'default.rgw.control', 'volumes', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'images', 'cephfs.cephfs.meta']
Dec 13 04:32:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:32:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:32:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:32:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:32:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:32:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:32:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:32:10 np0005558241 nova_compute[248510]: 2025-12-13 09:32:10.498 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:32:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7201.8 total, 600.0 interval#012Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 48K writes, 17K syncs, 2.72 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 997 writes, 3370 keys, 997 commit groups, 1.0 writes per commit group, ingest: 2.81 MB, 0.00 MB/s#012Interval WAL: 997 writes, 422 syncs, 2.36 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:32:11 np0005558241 ceph-osd[88086]: bluestore.MempoolThread fragmentation_score=0.004687 took=0.000062s
Dec 13 04:32:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:32:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:32:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:32:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:32:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:32:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:32:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:32:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:32:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:32:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:32:11 np0005558241 nova_compute[248510]: 2025-12-13 09:32:11.202 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3943: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3944: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:32:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/345319997' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:32:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:32:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/345319997' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:32:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3945: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:15 np0005558241 nova_compute[248510]: 2025-12-13 09:32:15.499 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:16 np0005558241 nova_compute[248510]: 2025-12-13 09:32:16.204 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3946: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3947: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:32:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.2 total, 600.0 interval#012Cumulative writes: 38K writes, 153K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s#012Cumulative WAL: 38K writes, 13K syncs, 2.80 writes per sync, written: 0.15 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 912 writes, 2666 keys, 912 commit groups, 1.0 writes per commit group, ingest: 1.84 MB, 0.00 MB/s#012Interval WAL: 912 writes, 399 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:32:19 np0005558241 ceph-osd[89221]: bluestore.MempoolThread fragmentation_score=0.004241 took=0.000088s
Dec 13 04:32:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:20 np0005558241 nova_compute[248510]: 2025-12-13 09:32:20.499 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:21 np0005558241 nova_compute[248510]: 2025-12-13 09:32:21.206 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3948: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5261754351853937e-05 of space, bias 1.0, pg target 0.004578526305556181 quantized to 32 (current 32)
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697878223102975 of space, bias 1.0, pg target 0.20093634669308924 quantized to 32 (current 32)
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.64600408034546e-07 of space, bias 4.0, pg target 0.0006775204896414552 quantized to 16 (current 32)
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:32:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3949: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:23 np0005558241 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 04:32:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3950: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:25 np0005558241 nova_compute[248510]: 2025-12-13 09:32:25.500 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:26 np0005558241 nova_compute[248510]: 2025-12-13 09:32:26.209 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3951: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3952: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:30 np0005558241 nova_compute[248510]: 2025-12-13 09:32:30.503 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:31 np0005558241 nova_compute[248510]: 2025-12-13 09:32:31.212 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3953: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:31 np0005558241 podman[416288]: 2025-12-13 09:32:31.989984698 +0000 UTC m=+0.059811773 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:32:31 np0005558241 podman[416287]: 2025-12-13 09:32:31.989814704 +0000 UTC m=+0.069492056 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 04:32:32 np0005558241 podman[416286]: 2025-12-13 09:32:32.02150547 +0000 UTC m=+0.098234268 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:32:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3954: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3955: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:35 np0005558241 nova_compute[248510]: 2025-12-13 09:32:35.504 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:35 np0005558241 nova_compute[248510]: 2025-12-13 09:32:35.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:32:36 np0005558241 nova_compute[248510]: 2025-12-13 09:32:36.214 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3956: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:37 np0005558241 nova_compute[248510]: 2025-12-13 09:32:37.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:32:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3957: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:32:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:32:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:32:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:32:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:32:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:32:40 np0005558241 nova_compute[248510]: 2025-12-13 09:32:40.506 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:41 np0005558241 nova_compute[248510]: 2025-12-13 09:32:41.217 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3958: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3959: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.043035) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618365043084, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 1675, "num_deletes": 252, "total_data_size": 2861938, "memory_usage": 2902568, "flush_reason": "Manual Compaction"}
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618365059281, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 1659236, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77504, "largest_seqno": 79178, "table_properties": {"data_size": 1653574, "index_size": 2800, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14877, "raw_average_key_size": 20, "raw_value_size": 1640868, "raw_average_value_size": 2298, "num_data_blocks": 128, "num_entries": 714, "num_filter_entries": 714, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765618185, "oldest_key_time": 1765618185, "file_creation_time": 1765618365, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 16386 microseconds, and 6589 cpu microseconds.
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.059407) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 1659236 bytes OK
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.059449) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.062502) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.062537) EVENT_LOG_v1 {"time_micros": 1765618365062527, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.062571) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 2854713, prev total WAL file size 2854713, number of live WAL files 2.
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.064631) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323632' seq:72057594037927935, type:22 .. '6D6772737461740033353134' seq:0, type:0; will stop at (end)
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(1620KB)], [185(11MB)]
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618365064697, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 13232022, "oldest_snapshot_seqno": -1}
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 9685 keys, 10887440 bytes, temperature: kUnknown
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618365181629, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 10887440, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10828288, "index_size": 33823, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24261, "raw_key_size": 253927, "raw_average_key_size": 26, "raw_value_size": 10661319, "raw_average_value_size": 1100, "num_data_blocks": 1303, "num_entries": 9685, "num_filter_entries": 9685, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765618365, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.182009) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 10887440 bytes
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.184058) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.1 rd, 93.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 11.0 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(14.5) write-amplify(6.6) OK, records in: 10126, records dropped: 441 output_compression: NoCompression
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.184100) EVENT_LOG_v1 {"time_micros": 1765618365184086, "job": 116, "event": "compaction_finished", "compaction_time_micros": 117045, "compaction_time_cpu_micros": 44341, "output_level": 6, "num_output_files": 1, "total_output_size": 10887440, "num_input_records": 10126, "num_output_records": 9685, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618365184525, "job": 116, "event": "table_file_deletion", "file_number": 187}
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618365186799, "job": 116, "event": "table_file_deletion", "file_number": 185}
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.064460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.186939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.186946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.186948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.186950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:32:45 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:32:45.186952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:32:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3960: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:45 np0005558241 nova_compute[248510]: 2025-12-13 09:32:45.508 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:46 np0005558241 nova_compute[248510]: 2025-12-13 09:32:46.222 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:46 np0005558241 nova_compute[248510]: 2025-12-13 09:32:46.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:32:46 np0005558241 nova_compute[248510]: 2025-12-13 09:32:46.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:32:46 np0005558241 nova_compute[248510]: 2025-12-13 09:32:46.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:32:46 np0005558241 nova_compute[248510]: 2025-12-13 09:32:46.799 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:32:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3961: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:48 np0005558241 nova_compute[248510]: 2025-12-13 09:32:48.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:32:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3962: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:49 np0005558241 nova_compute[248510]: 2025-12-13 09:32:49.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:32:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:50 np0005558241 nova_compute[248510]: 2025-12-13 09:32:50.510 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:50 np0005558241 nova_compute[248510]: 2025-12-13 09:32:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:32:50 np0005558241 nova_compute[248510]: 2025-12-13 09:32:50.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:32:50 np0005558241 nova_compute[248510]: 2025-12-13 09:32:50.818 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:32:50 np0005558241 nova_compute[248510]: 2025-12-13 09:32:50.818 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:32:50 np0005558241 nova_compute[248510]: 2025-12-13 09:32:50.818 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:32:50 np0005558241 nova_compute[248510]: 2025-12-13 09:32:50.818 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:32:50 np0005558241 nova_compute[248510]: 2025-12-13 09:32:50.819 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:32:51 np0005558241 nova_compute[248510]: 2025-12-13 09:32:51.228 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:32:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/410393467' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:32:51 np0005558241 nova_compute[248510]: 2025-12-13 09:32:51.370 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:32:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3963: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:51 np0005558241 nova_compute[248510]: 2025-12-13 09:32:51.571 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:32:51 np0005558241 nova_compute[248510]: 2025-12-13 09:32:51.572 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3501MB free_disk=59.98736572358757GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:32:51 np0005558241 nova_compute[248510]: 2025-12-13 09:32:51.572 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:32:51 np0005558241 nova_compute[248510]: 2025-12-13 09:32:51.572 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:32:51 np0005558241 nova_compute[248510]: 2025-12-13 09:32:51.745 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:32:51 np0005558241 nova_compute[248510]: 2025-12-13 09:32:51.745 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:32:51 np0005558241 nova_compute[248510]: 2025-12-13 09:32:51.764 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec 13 04:32:51 np0005558241 nova_compute[248510]: 2025-12-13 09:32:51.795 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec 13 04:32:51 np0005558241 nova_compute[248510]: 2025-12-13 09:32:51.795 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec 13 04:32:51 np0005558241 nova_compute[248510]: 2025-12-13 09:32:51.811 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec 13 04:32:51 np0005558241 nova_compute[248510]: 2025-12-13 09:32:51.833 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 13 04:32:51 np0005558241 nova_compute[248510]: 2025-12-13 09:32:51.858 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:32:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:32:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3770924869' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:32:52 np0005558241 nova_compute[248510]: 2025-12-13 09:32:52.420 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:32:52 np0005558241 nova_compute[248510]: 2025-12-13 09:32:52.428 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:32:52 np0005558241 nova_compute[248510]: 2025-12-13 09:32:52.452 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:32:52 np0005558241 nova_compute[248510]: 2025-12-13 09:32:52.455 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:32:52 np0005558241 nova_compute[248510]: 2025-12-13 09:32:52.455 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:32:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:32:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:32:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:32:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:32:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:32:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:32:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:32:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:32:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:32:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:32:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:32:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:32:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3964: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:53 np0005558241 podman[416540]: 2025-12-13 09:32:53.53971193 +0000 UTC m=+0.060392598 container create 4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 04:32:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:32:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:32:53 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:32:53 np0005558241 systemd[1]: Started libpod-conmon-4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed.scope.
Dec 13 04:32:53 np0005558241 podman[416540]: 2025-12-13 09:32:53.517384049 +0000 UTC m=+0.038064727 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:32:53 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:32:53 np0005558241 podman[416540]: 2025-12-13 09:32:53.646557103 +0000 UTC m=+0.167237781 container init 4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:32:53 np0005558241 podman[416540]: 2025-12-13 09:32:53.658155665 +0000 UTC m=+0.178836343 container start 4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 04:32:53 np0005558241 podman[416540]: 2025-12-13 09:32:53.662195716 +0000 UTC m=+0.182876494 container attach 4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:32:53 np0005558241 naughty_golick[416556]: 167 167
Dec 13 04:32:53 np0005558241 systemd[1]: libpod-4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed.scope: Deactivated successfully.
Dec 13 04:32:53 np0005558241 conmon[416556]: conmon 4dd7eae55a5a336ddd31 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed.scope/container/memory.events
Dec 13 04:32:53 np0005558241 podman[416540]: 2025-12-13 09:32:53.671327065 +0000 UTC m=+0.192007733 container died 4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:32:53 np0005558241 systemd[1]: var-lib-containers-storage-overlay-244a785459fddbaf6c6e792d9287624937b7f89cb27e1db2e4f98102ad6d8960-merged.mount: Deactivated successfully.
Dec 13 04:32:53 np0005558241 podman[416540]: 2025-12-13 09:32:53.740437011 +0000 UTC m=+0.261117679 container remove 4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_golick, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Dec 13 04:32:53 np0005558241 systemd[1]: libpod-conmon-4dd7eae55a5a336ddd3146cefd5753c5722e018c898b71069dbff83d81555aed.scope: Deactivated successfully.
Dec 13 04:32:54 np0005558241 podman[416579]: 2025-12-13 09:32:54.049153724 +0000 UTC m=+0.111608924 container create 594dfab6f4795ddb52ea0497f738b3ef357ad2ee073cbc52d43a603159616ebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_feynman, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:32:54 np0005558241 podman[416579]: 2025-12-13 09:32:53.984363697 +0000 UTC m=+0.046818907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:32:54 np0005558241 systemd[1]: Started libpod-conmon-594dfab6f4795ddb52ea0497f738b3ef357ad2ee073cbc52d43a603159616ebf.scope.
Dec 13 04:32:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:32:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2a9b6536adb5f72d0aeceaa8e0e245edd98fa7512c61d9af238d8c52819af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2a9b6536adb5f72d0aeceaa8e0e245edd98fa7512c61d9af238d8c52819af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2a9b6536adb5f72d0aeceaa8e0e245edd98fa7512c61d9af238d8c52819af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2a9b6536adb5f72d0aeceaa8e0e245edd98fa7512c61d9af238d8c52819af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:54 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e2a9b6536adb5f72d0aeceaa8e0e245edd98fa7512c61d9af238d8c52819af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:54 np0005558241 podman[416579]: 2025-12-13 09:32:54.175418734 +0000 UTC m=+0.237874044 container init 594dfab6f4795ddb52ea0497f738b3ef357ad2ee073cbc52d43a603159616ebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:32:54 np0005558241 podman[416579]: 2025-12-13 09:32:54.190755119 +0000 UTC m=+0.253210349 container start 594dfab6f4795ddb52ea0497f738b3ef357ad2ee073cbc52d43a603159616ebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:32:54 np0005558241 podman[416579]: 2025-12-13 09:32:54.195712264 +0000 UTC m=+0.258167564 container attach 594dfab6f4795ddb52ea0497f738b3ef357ad2ee073cbc52d43a603159616ebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_feynman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:32:54 np0005558241 priceless_feynman[416595]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:32:54 np0005558241 priceless_feynman[416595]: --> All data devices are unavailable
Dec 13 04:32:54 np0005558241 systemd[1]: libpod-594dfab6f4795ddb52ea0497f738b3ef357ad2ee073cbc52d43a603159616ebf.scope: Deactivated successfully.
Dec 13 04:32:54 np0005558241 podman[416579]: 2025-12-13 09:32:54.853245997 +0000 UTC m=+0.915701307 container died 594dfab6f4795ddb52ea0497f738b3ef357ad2ee073cbc52d43a603159616ebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:32:54 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f5e2a9b6536adb5f72d0aeceaa8e0e245edd98fa7512c61d9af238d8c52819af-merged.mount: Deactivated successfully.
Dec 13 04:32:54 np0005558241 podman[416579]: 2025-12-13 09:32:54.917770808 +0000 UTC m=+0.980226028 container remove 594dfab6f4795ddb52ea0497f738b3ef357ad2ee073cbc52d43a603159616ebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_feynman, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:32:54 np0005558241 systemd[1]: libpod-conmon-594dfab6f4795ddb52ea0497f738b3ef357ad2ee073cbc52d43a603159616ebf.scope: Deactivated successfully.
Dec 13 04:32:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:55 np0005558241 nova_compute[248510]: 2025-12-13 09:32:55.455 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:32:55 np0005558241 nova_compute[248510]: 2025-12-13 09:32:55.456 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:32:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:32:55.469 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:32:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:32:55.470 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:32:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:32:55.470 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:32:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3965: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:55 np0005558241 nova_compute[248510]: 2025-12-13 09:32:55.512 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:55 np0005558241 podman[416690]: 2025-12-13 09:32:55.558241143 +0000 UTC m=+0.072671796 container create 457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_allen, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 04:32:55 np0005558241 systemd[1]: Started libpod-conmon-457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c.scope.
Dec 13 04:32:55 np0005558241 podman[416690]: 2025-12-13 09:32:55.530159098 +0000 UTC m=+0.044589791 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:32:55 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:32:55 np0005558241 podman[416690]: 2025-12-13 09:32:55.680126314 +0000 UTC m=+0.194556927 container init 457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 04:32:55 np0005558241 podman[416690]: 2025-12-13 09:32:55.691737826 +0000 UTC m=+0.206168469 container start 457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:32:55 np0005558241 podman[416690]: 2025-12-13 09:32:55.696345801 +0000 UTC m=+0.210776414 container attach 457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:32:55 np0005558241 upbeat_allen[416706]: 167 167
Dec 13 04:32:55 np0005558241 systemd[1]: libpod-457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c.scope: Deactivated successfully.
Dec 13 04:32:55 np0005558241 conmon[416706]: conmon 457e8ecf178508449e6a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c.scope/container/memory.events
Dec 13 04:32:55 np0005558241 podman[416690]: 2025-12-13 09:32:55.702491276 +0000 UTC m=+0.216921899 container died 457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:32:55 np0005558241 systemd[1]: var-lib-containers-storage-overlay-efd0b6dc5f0ca08176122418a644ee45986ce092a135e4ab7265eeb5eac6dc37-merged.mount: Deactivated successfully.
Dec 13 04:32:55 np0005558241 podman[416690]: 2025-12-13 09:32:55.759450256 +0000 UTC m=+0.273880879 container remove 457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_allen, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 04:32:55 np0005558241 systemd[1]: libpod-conmon-457e8ecf178508449e6a7e1647cd2252da80a73633d266ab7d36b4d6c813191c.scope: Deactivated successfully.
Dec 13 04:32:55 np0005558241 podman[416730]: 2025-12-13 09:32:55.97858661 +0000 UTC m=+0.057392253 container create d7e43646d79bdab271fc71bf6e7c317975a289cc7c3982414997760b2612fee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 04:32:56 np0005558241 systemd[1]: Started libpod-conmon-d7e43646d79bdab271fc71bf6e7c317975a289cc7c3982414997760b2612fee5.scope.
Dec 13 04:32:56 np0005558241 podman[416730]: 2025-12-13 09:32:55.951458108 +0000 UTC m=+0.030263841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:32:56 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:32:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26c111444c79507091e1585730c11be0969d63e4f17deaf91b587deb98d17c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26c111444c79507091e1585730c11be0969d63e4f17deaf91b587deb98d17c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26c111444c79507091e1585730c11be0969d63e4f17deaf91b587deb98d17c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:56 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26c111444c79507091e1585730c11be0969d63e4f17deaf91b587deb98d17c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:56 np0005558241 podman[416730]: 2025-12-13 09:32:56.096141692 +0000 UTC m=+0.174947375 container init d7e43646d79bdab271fc71bf6e7c317975a289cc7c3982414997760b2612fee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 04:32:56 np0005558241 podman[416730]: 2025-12-13 09:32:56.109889547 +0000 UTC m=+0.188695180 container start d7e43646d79bdab271fc71bf6e7c317975a289cc7c3982414997760b2612fee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 04:32:56 np0005558241 podman[416730]: 2025-12-13 09:32:56.113666402 +0000 UTC m=+0.192472135 container attach d7e43646d79bdab271fc71bf6e7c317975a289cc7c3982414997760b2612fee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:32:56 np0005558241 nova_compute[248510]: 2025-12-13 09:32:56.231 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]: {
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:    "0": [
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:        {
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "devices": [
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "/dev/loop3"
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            ],
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "lv_name": "ceph_lv0",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "lv_size": "21470642176",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "name": "ceph_lv0",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "tags": {
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.cluster_name": "ceph",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.crush_device_class": "",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.encrypted": "0",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.objectstore": "bluestore",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.osd_id": "0",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.type": "block",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.vdo": "0",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.with_tpm": "0"
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            },
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "type": "block",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "vg_name": "ceph_vg0"
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:        }
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:    ],
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:    "1": [
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:        {
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "devices": [
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "/dev/loop4"
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            ],
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "lv_name": "ceph_lv1",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "lv_size": "21470642176",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "name": "ceph_lv1",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "tags": {
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.cluster_name": "ceph",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.crush_device_class": "",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.encrypted": "0",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.objectstore": "bluestore",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.osd_id": "1",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.type": "block",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.vdo": "0",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.with_tpm": "0"
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            },
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "type": "block",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "vg_name": "ceph_vg1"
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:        }
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:    ],
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:    "2": [
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:        {
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "devices": [
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "/dev/loop5"
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            ],
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "lv_name": "ceph_lv2",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "lv_size": "21470642176",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "name": "ceph_lv2",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "tags": {
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.cluster_name": "ceph",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.crush_device_class": "",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.encrypted": "0",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.objectstore": "bluestore",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.osd_id": "2",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.type": "block",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.vdo": "0",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:                "ceph.with_tpm": "0"
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            },
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "type": "block",
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:            "vg_name": "ceph_vg2"
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:        }
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]:    ]
Dec 13 04:32:56 np0005558241 cranky_yonath[416746]: }
Dec 13 04:32:56 np0005558241 systemd[1]: libpod-d7e43646d79bdab271fc71bf6e7c317975a289cc7c3982414997760b2612fee5.scope: Deactivated successfully.
Dec 13 04:32:56 np0005558241 podman[416730]: 2025-12-13 09:32:56.457931958 +0000 UTC m=+0.536737621 container died d7e43646d79bdab271fc71bf6e7c317975a289cc7c3982414997760b2612fee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 04:32:56 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a26c111444c79507091e1585730c11be0969d63e4f17deaf91b587deb98d17c7-merged.mount: Deactivated successfully.
Dec 13 04:32:56 np0005558241 podman[416730]: 2025-12-13 09:32:56.519119855 +0000 UTC m=+0.597925538 container remove d7e43646d79bdab271fc71bf6e7c317975a289cc7c3982414997760b2612fee5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 04:32:56 np0005558241 systemd[1]: libpod-conmon-d7e43646d79bdab271fc71bf6e7c317975a289cc7c3982414997760b2612fee5.scope: Deactivated successfully.
Dec 13 04:32:57 np0005558241 podman[416830]: 2025-12-13 09:32:57.067808404 +0000 UTC m=+0.044598100 container create 803653f8f182a3dfb434dd25d306711fe528c2ec1b074284a8e9845cdceb3840 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gould, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:32:57 np0005558241 systemd[1]: Started libpod-conmon-803653f8f182a3dfb434dd25d306711fe528c2ec1b074284a8e9845cdceb3840.scope.
Dec 13 04:32:57 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:32:57 np0005558241 podman[416830]: 2025-12-13 09:32:57.046708445 +0000 UTC m=+0.023498171 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:32:57 np0005558241 podman[416830]: 2025-12-13 09:32:57.149463445 +0000 UTC m=+0.126253171 container init 803653f8f182a3dfb434dd25d306711fe528c2ec1b074284a8e9845cdceb3840 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gould, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:32:57 np0005558241 podman[416830]: 2025-12-13 09:32:57.155473546 +0000 UTC m=+0.132263242 container start 803653f8f182a3dfb434dd25d306711fe528c2ec1b074284a8e9845cdceb3840 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gould, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 04:32:57 np0005558241 podman[416830]: 2025-12-13 09:32:57.161016265 +0000 UTC m=+0.137806001 container attach 803653f8f182a3dfb434dd25d306711fe528c2ec1b074284a8e9845cdceb3840 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gould, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:32:57 np0005558241 nice_gould[416846]: 167 167
Dec 13 04:32:57 np0005558241 systemd[1]: libpod-803653f8f182a3dfb434dd25d306711fe528c2ec1b074284a8e9845cdceb3840.scope: Deactivated successfully.
Dec 13 04:32:57 np0005558241 podman[416830]: 2025-12-13 09:32:57.163660082 +0000 UTC m=+0.140449788 container died 803653f8f182a3dfb434dd25d306711fe528c2ec1b074284a8e9845cdceb3840 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gould, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 04:32:57 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e8b01fc53cd650760661d9fbf214dcdd1fbeb5f63d5c573759802bb15da13fd2-merged.mount: Deactivated successfully.
Dec 13 04:32:57 np0005558241 podman[416830]: 2025-12-13 09:32:57.211607826 +0000 UTC m=+0.188397512 container remove 803653f8f182a3dfb434dd25d306711fe528c2ec1b074284a8e9845cdceb3840 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 04:32:57 np0005558241 systemd[1]: libpod-conmon-803653f8f182a3dfb434dd25d306711fe528c2ec1b074284a8e9845cdceb3840.scope: Deactivated successfully.
Dec 13 04:32:57 np0005558241 podman[416870]: 2025-12-13 09:32:57.408221084 +0000 UTC m=+0.054831198 container create d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_cori, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 04:32:57 np0005558241 podman[416870]: 2025-12-13 09:32:57.380464897 +0000 UTC m=+0.027075071 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:32:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3966: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:57 np0005558241 systemd[1]: Started libpod-conmon-d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce.scope.
Dec 13 04:32:57 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:32:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2d15583f473e8384c04ca7564974d8cfeb8e909fc3a6697b5730ec40aed63fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2d15583f473e8384c04ca7564974d8cfeb8e909fc3a6697b5730ec40aed63fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2d15583f473e8384c04ca7564974d8cfeb8e909fc3a6697b5730ec40aed63fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:57 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2d15583f473e8384c04ca7564974d8cfeb8e909fc3a6697b5730ec40aed63fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:57 np0005558241 podman[416870]: 2025-12-13 09:32:57.709164452 +0000 UTC m=+0.355774546 container init d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 04:32:57 np0005558241 podman[416870]: 2025-12-13 09:32:57.723489672 +0000 UTC m=+0.370099756 container start d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_cori, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:32:57 np0005558241 podman[416870]: 2025-12-13 09:32:57.796675639 +0000 UTC m=+0.443285813 container attach d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:32:58 np0005558241 lvm[416968]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:32:58 np0005558241 lvm[416968]: VG ceph_vg2 finished
Dec 13 04:32:58 np0005558241 lvm[416967]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:32:58 np0005558241 lvm[416967]: VG ceph_vg1 finished
Dec 13 04:32:58 np0005558241 lvm[416966]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:32:58 np0005558241 lvm[416966]: VG ceph_vg0 finished
Dec 13 04:32:58 np0005558241 nice_cori[416887]: {}
Dec 13 04:32:58 np0005558241 systemd[1]: libpod-d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce.scope: Deactivated successfully.
Dec 13 04:32:58 np0005558241 podman[416870]: 2025-12-13 09:32:58.656450132 +0000 UTC m=+1.303060226 container died d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 04:32:58 np0005558241 systemd[1]: libpod-d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce.scope: Consumed 1.519s CPU time.
Dec 13 04:32:58 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e2d15583f473e8384c04ca7564974d8cfeb8e909fc3a6697b5730ec40aed63fd-merged.mount: Deactivated successfully.
Dec 13 04:32:58 np0005558241 podman[416870]: 2025-12-13 09:32:58.711879924 +0000 UTC m=+1.358490048 container remove d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_cori, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 04:32:58 np0005558241 systemd[1]: libpod-conmon-d5653e0ecdca5eb93007b4e9613121c911b1837848b6db96ba425568d8ae98ce.scope: Deactivated successfully.
Dec 13 04:32:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:32:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:32:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:32:58 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:32:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3967: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:59 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:32:59 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:33:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:00 np0005558241 nova_compute[248510]: 2025-12-13 09:33:00.516 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:33:01 np0005558241 nova_compute[248510]: 2025-12-13 09:33:01.235 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:33:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3968: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:02 np0005558241 nova_compute[248510]: 2025-12-13 09:33:02.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:33:03 np0005558241 podman[417011]: 2025-12-13 09:33:03.034064591 +0000 UTC m=+0.113507911 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:33:03 np0005558241 podman[417010]: 2025-12-13 09:33:03.043053857 +0000 UTC m=+0.122049826 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Dec 13 04:33:03 np0005558241 podman[417009]: 2025-12-13 09:33:03.056502215 +0000 UTC m=+0.135100734 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:33:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3969: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3970: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:05 np0005558241 nova_compute[248510]: 2025-12-13 09:33:05.518 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:33:06 np0005558241 nova_compute[248510]: 2025-12-13 09:33:06.239 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:33:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3971: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3972: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:33:09
Dec 13 04:33:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:33:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:33:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'images', 'default.rgw.meta', 'vms', '.mgr', 'cephfs.cephfs.data']
Dec 13 04:33:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:33:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:33:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:33:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:33:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:33:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:33:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:33:10 np0005558241 nova_compute[248510]: 2025-12-13 09:33:10.520 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:33:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:33:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:33:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:33:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:33:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:33:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:33:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:33:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:33:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:33:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:33:11 np0005558241 nova_compute[248510]: 2025-12-13 09:33:11.244 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3973: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3974: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:33:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/793515846' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:33:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:33:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/793515846' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:33:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3975: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:15 np0005558241 nova_compute[248510]: 2025-12-13 09:33:15.522 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:16 np0005558241 nova_compute[248510]: 2025-12-13 09:33:16.246 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3976: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3977: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:20 np0005558241 nova_compute[248510]: 2025-12-13 09:33:20.524 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:21 np0005558241 nova_compute[248510]: 2025-12-13 09:33:21.249 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3978: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5261754351853937e-05 of space, bias 1.0, pg target 0.004578526305556181 quantized to 32 (current 32)
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697878223102975 of space, bias 1.0, pg target 0.20093634669308924 quantized to 32 (current 32)
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.64600408034546e-07 of space, bias 4.0, pg target 0.0006775204896414552 quantized to 16 (current 32)
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:33:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3979: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3980: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:25 np0005558241 nova_compute[248510]: 2025-12-13 09:33:25.526 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:26 np0005558241 nova_compute[248510]: 2025-12-13 09:33:26.252 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3981: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3982: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:30 np0005558241 nova_compute[248510]: 2025-12-13 09:33:30.531 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:31 np0005558241 nova_compute[248510]: 2025-12-13 09:33:31.303 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3983: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.588553) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618412588839, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 643, "num_deletes": 251, "total_data_size": 782998, "memory_usage": 793992, "flush_reason": "Manual Compaction"}
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618412598162, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 769668, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79179, "largest_seqno": 79821, "table_properties": {"data_size": 766242, "index_size": 1333, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7876, "raw_average_key_size": 19, "raw_value_size": 759362, "raw_average_value_size": 1865, "num_data_blocks": 59, "num_entries": 407, "num_filter_entries": 407, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765618366, "oldest_key_time": 1765618366, "file_creation_time": 1765618412, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 9462 microseconds, and 4175 cpu microseconds.
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.598213) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 769668 bytes OK
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.598237) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.599773) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.599789) EVENT_LOG_v1 {"time_micros": 1765618412599784, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.599815) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 779566, prev total WAL file size 779566, number of live WAL files 2.
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.600497) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(751KB)], [188(10MB)]
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618412600565, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 11657108, "oldest_snapshot_seqno": -1}
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 9579 keys, 9860761 bytes, temperature: kUnknown
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618412687549, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 9860761, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9803198, "index_size": 32490, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24005, "raw_key_size": 252400, "raw_average_key_size": 26, "raw_value_size": 9638911, "raw_average_value_size": 1006, "num_data_blocks": 1239, "num_entries": 9579, "num_filter_entries": 9579, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765618412, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.687912) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 9860761 bytes
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.689697) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.9 rd, 113.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 10.4 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(28.0) write-amplify(12.8) OK, records in: 10092, records dropped: 513 output_compression: NoCompression
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.689729) EVENT_LOG_v1 {"time_micros": 1765618412689714, "job": 118, "event": "compaction_finished", "compaction_time_micros": 87090, "compaction_time_cpu_micros": 49372, "output_level": 6, "num_output_files": 1, "total_output_size": 9860761, "num_input_records": 10092, "num_output_records": 9579, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618412690158, "job": 118, "event": "table_file_deletion", "file_number": 190}
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618412694501, "job": 118, "event": "table_file_deletion", "file_number": 188}
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.600338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.694567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.694576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.694581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.694585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:33:32 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:33:32.694589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:33:32 np0005558241 nova_compute[248510]: 2025-12-13 09:33:32.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:33:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3984: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:34 np0005558241 podman[417071]: 2025-12-13 09:33:34.015296765 +0000 UTC m=+0.082262397 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 13 04:33:34 np0005558241 podman[417070]: 2025-12-13 09:33:34.025025069 +0000 UTC m=+0.095334865 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Dec 13 04:33:34 np0005558241 podman[417069]: 2025-12-13 09:33:34.047508394 +0000 UTC m=+0.126407706 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:33:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3985: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:35 np0005558241 nova_compute[248510]: 2025-12-13 09:33:35.533 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:36 np0005558241 nova_compute[248510]: 2025-12-13 09:33:36.305 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3986: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:37 np0005558241 nova_compute[248510]: 2025-12-13 09:33:37.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:33:38 np0005558241 nova_compute[248510]: 2025-12-13 09:33:38.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:33:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3987: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:33:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:33:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:33:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:33:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:33:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:33:40 np0005558241 nova_compute[248510]: 2025-12-13 09:33:40.535 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:41 np0005558241 nova_compute[248510]: 2025-12-13 09:33:41.308 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3988: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3989: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3990: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:45 np0005558241 nova_compute[248510]: 2025-12-13 09:33:45.536 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:46 np0005558241 nova_compute[248510]: 2025-12-13 09:33:46.338 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3991: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:48 np0005558241 nova_compute[248510]: 2025-12-13 09:33:48.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:33:48 np0005558241 nova_compute[248510]: 2025-12-13 09:33:48.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:33:48 np0005558241 nova_compute[248510]: 2025-12-13 09:33:48.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:33:48 np0005558241 nova_compute[248510]: 2025-12-13 09:33:48.796 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:33:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3992: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:50 np0005558241 nova_compute[248510]: 2025-12-13 09:33:50.538 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:33:50 np0005558241 nova_compute[248510]: 2025-12-13 09:33:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:33:50 np0005558241 nova_compute[248510]: 2025-12-13 09:33:50.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:33:50 np0005558241 nova_compute[248510]: 2025-12-13 09:33:50.827 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:33:50 np0005558241 nova_compute[248510]: 2025-12-13 09:33:50.828 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:33:50 np0005558241 nova_compute[248510]: 2025-12-13 09:33:50.828 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:33:50 np0005558241 nova_compute[248510]: 2025-12-13 09:33:50.828 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:33:50 np0005558241 nova_compute[248510]: 2025-12-13 09:33:50.829 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:33:51 np0005558241 nova_compute[248510]: 2025-12-13 09:33:51.341 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:33:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:33:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4244520359' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:33:51 np0005558241 nova_compute[248510]: 2025-12-13 09:33:51.433 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:33:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3993: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:51 np0005558241 nova_compute[248510]: 2025-12-13 09:33:51.637 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:33:51 np0005558241 nova_compute[248510]: 2025-12-13 09:33:51.638 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3508MB free_disk=59.98736572358757GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:33:51 np0005558241 nova_compute[248510]: 2025-12-13 09:33:51.638 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:33:51 np0005558241 nova_compute[248510]: 2025-12-13 09:33:51.638 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:33:51 np0005558241 nova_compute[248510]: 2025-12-13 09:33:51.948 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:33:51 np0005558241 nova_compute[248510]: 2025-12-13 09:33:51.948 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:33:51 np0005558241 nova_compute[248510]: 2025-12-13 09:33:51.981 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:33:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:33:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3546281179' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:33:52 np0005558241 nova_compute[248510]: 2025-12-13 09:33:52.557 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:33:52 np0005558241 nova_compute[248510]: 2025-12-13 09:33:52.568 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:33:52 np0005558241 nova_compute[248510]: 2025-12-13 09:33:52.677 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:33:52 np0005558241 nova_compute[248510]: 2025-12-13 09:33:52.678 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:33:52 np0005558241 nova_compute[248510]: 2025-12-13 09:33:52.679 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:33:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3994: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:53 np0005558241 nova_compute[248510]: 2025-12-13 09:33:53.679 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:33:53 np0005558241 nova_compute[248510]: 2025-12-13 09:33:53.680 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:33:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:33:55.470 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:33:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:33:55.471 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:33:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:33:55.471 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:33:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3995: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:55 np0005558241 nova_compute[248510]: 2025-12-13 09:33:55.539 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:33:55 np0005558241 nova_compute[248510]: 2025-12-13 09:33:55.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:33:55 np0005558241 nova_compute[248510]: 2025-12-13 09:33:55.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:33:56 np0005558241 nova_compute[248510]: 2025-12-13 09:33:56.344 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:33:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3996: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:33:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3997: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Dec 13 04:34:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:34:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:34:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:34:00 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:34:00 np0005558241 nova_compute[248510]: 2025-12-13 09:34:00.540 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:00 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:34:00 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:34:00 np0005558241 podman[417388]: 2025-12-13 09:34:00.691176851 +0000 UTC m=+0.049785011 container create c7f984dc96b4449fd13bd7fa38452e05d823cbc4d37122e5a3a9615e751fd655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_torvalds, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 04:34:00 np0005558241 systemd[1]: Started libpod-conmon-c7f984dc96b4449fd13bd7fa38452e05d823cbc4d37122e5a3a9615e751fd655.scope.
Dec 13 04:34:00 np0005558241 podman[417388]: 2025-12-13 09:34:00.669380564 +0000 UTC m=+0.027988774 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:34:00 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:34:00 np0005558241 podman[417388]: 2025-12-13 09:34:00.785847989 +0000 UTC m=+0.144456199 container init c7f984dc96b4449fd13bd7fa38452e05d823cbc4d37122e5a3a9615e751fd655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_torvalds, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:34:00 np0005558241 podman[417388]: 2025-12-13 09:34:00.79466266 +0000 UTC m=+0.153270830 container start c7f984dc96b4449fd13bd7fa38452e05d823cbc4d37122e5a3a9615e751fd655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_torvalds, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:34:00 np0005558241 podman[417388]: 2025-12-13 09:34:00.799466091 +0000 UTC m=+0.158074301 container attach c7f984dc96b4449fd13bd7fa38452e05d823cbc4d37122e5a3a9615e751fd655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Dec 13 04:34:00 np0005558241 mystifying_torvalds[417405]: 167 167
Dec 13 04:34:00 np0005558241 systemd[1]: libpod-c7f984dc96b4449fd13bd7fa38452e05d823cbc4d37122e5a3a9615e751fd655.scope: Deactivated successfully.
Dec 13 04:34:00 np0005558241 podman[417388]: 2025-12-13 09:34:00.803134753 +0000 UTC m=+0.161742913 container died c7f984dc96b4449fd13bd7fa38452e05d823cbc4d37122e5a3a9615e751fd655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_torvalds, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:34:00 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4b035f882825022b984752c0b91c75130ce05187a3051f641c66cf4309c813f8-merged.mount: Deactivated successfully.
Dec 13 04:34:00 np0005558241 podman[417388]: 2025-12-13 09:34:00.852519843 +0000 UTC m=+0.211128003 container remove c7f984dc96b4449fd13bd7fa38452e05d823cbc4d37122e5a3a9615e751fd655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_torvalds, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:34:00 np0005558241 systemd[1]: libpod-conmon-c7f984dc96b4449fd13bd7fa38452e05d823cbc4d37122e5a3a9615e751fd655.scope: Deactivated successfully.
Dec 13 04:34:01 np0005558241 podman[417429]: 2025-12-13 09:34:01.044170746 +0000 UTC m=+0.045050592 container create 555e53d5066e9a0ede66fe38588e14eefa341258de4bcc4a769b1776453a71b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_villani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:34:01 np0005558241 systemd[1]: Started libpod-conmon-555e53d5066e9a0ede66fe38588e14eefa341258de4bcc4a769b1776453a71b1.scope.
Dec 13 04:34:01 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:34:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22397d0f9473bbc42ae649e94ba3f6aac8d2a5e26e4471c87cabdf764b838021/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22397d0f9473bbc42ae649e94ba3f6aac8d2a5e26e4471c87cabdf764b838021/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22397d0f9473bbc42ae649e94ba3f6aac8d2a5e26e4471c87cabdf764b838021/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:01 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22397d0f9473bbc42ae649e94ba3f6aac8d2a5e26e4471c87cabdf764b838021/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:01 np0005558241 podman[417429]: 2025-12-13 09:34:01.025426106 +0000 UTC m=+0.026305992 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:34:01 np0005558241 podman[417429]: 2025-12-13 09:34:01.134084174 +0000 UTC m=+0.134964080 container init 555e53d5066e9a0ede66fe38588e14eefa341258de4bcc4a769b1776453a71b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_villani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:34:01 np0005558241 podman[417429]: 2025-12-13 09:34:01.144080745 +0000 UTC m=+0.144960611 container start 555e53d5066e9a0ede66fe38588e14eefa341258de4bcc4a769b1776453a71b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_villani, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Dec 13 04:34:01 np0005558241 podman[417429]: 2025-12-13 09:34:01.147688866 +0000 UTC m=+0.148568782 container attach 555e53d5066e9a0ede66fe38588e14eefa341258de4bcc4a769b1776453a71b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_villani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:34:01 np0005558241 nova_compute[248510]: 2025-12-13 09:34:01.347 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3998: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
Dec 13 04:34:01 np0005558241 youthful_villani[417446]: [
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:    {
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:        "available": false,
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:        "being_replaced": false,
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:        "ceph_device_lvm": false,
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:        "lsm_data": {},
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:        "lvs": [],
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:        "path": "/dev/sr0",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:        "rejected_reasons": [
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "Has a FileSystem",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "Insufficient space (<5GB)"
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:        ],
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:        "sys_api": {
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "actuators": null,
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "device_nodes": [
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:                "sr0"
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            ],
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "devname": "sr0",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "human_readable_size": "482.00 KB",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "id_bus": "ata",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "model": "QEMU DVD-ROM",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "nr_requests": "2",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "parent": "/dev/sr0",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "partitions": {},
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "path": "/dev/sr0",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "removable": "1",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "rev": "2.5+",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "ro": "0",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "rotational": "1",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "sas_address": "",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "sas_device_handle": "",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "scheduler_mode": "mq-deadline",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "sectors": 0,
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "sectorsize": "2048",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "size": 493568.0,
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "support_discard": "2048",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "type": "disk",
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:            "vendor": "QEMU"
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:        }
Dec 13 04:34:01 np0005558241 youthful_villani[417446]:    }
Dec 13 04:34:01 np0005558241 youthful_villani[417446]: ]
Dec 13 04:34:01 np0005558241 systemd[1]: libpod-555e53d5066e9a0ede66fe38588e14eefa341258de4bcc4a769b1776453a71b1.scope: Deactivated successfully.
Dec 13 04:34:01 np0005558241 podman[417429]: 2025-12-13 09:34:01.769906403 +0000 UTC m=+0.770786269 container died 555e53d5066e9a0ede66fe38588e14eefa341258de4bcc4a769b1776453a71b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_villani, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:34:01 np0005558241 systemd[1]: var-lib-containers-storage-overlay-22397d0f9473bbc42ae649e94ba3f6aac8d2a5e26e4471c87cabdf764b838021-merged.mount: Deactivated successfully.
Dec 13 04:34:01 np0005558241 podman[417429]: 2025-12-13 09:34:01.841513211 +0000 UTC m=+0.842393067 container remove 555e53d5066e9a0ede66fe38588e14eefa341258de4bcc4a769b1776453a71b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 13 04:34:01 np0005558241 systemd[1]: libpod-conmon-555e53d5066e9a0ede66fe38588e14eefa341258de4bcc4a769b1776453a71b1.scope: Deactivated successfully.
Dec 13 04:34:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:34:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:34:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:34:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:34:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:34:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:34:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:34:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:34:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:34:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:34:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:34:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:34:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:34:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:34:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:34:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:34:02 np0005558241 systemd-logind[787]: New session 55 of user zuul.
Dec 13 04:34:02 np0005558241 systemd[1]: Started Session 55 of User zuul.
Dec 13 04:34:02 np0005558241 podman[418258]: 2025-12-13 09:34:02.416224514 +0000 UTC m=+0.039901014 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:34:02 np0005558241 podman[418258]: 2025-12-13 09:34:02.637103961 +0000 UTC m=+0.260780451 container create 7a60f7c0baebf95c8a9a0e2ce5b02f13641cea4308a5a4ca5ce81e5918b57dc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 04:34:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:34:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:34:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:34:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:34:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:34:02 np0005558241 systemd[1]: Started libpod-conmon-7a60f7c0baebf95c8a9a0e2ce5b02f13641cea4308a5a4ca5ce81e5918b57dc3.scope.
Dec 13 04:34:02 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:34:02 np0005558241 podman[418258]: 2025-12-13 09:34:02.847390792 +0000 UTC m=+0.471067282 container init 7a60f7c0baebf95c8a9a0e2ce5b02f13641cea4308a5a4ca5ce81e5918b57dc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_galois, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 04:34:02 np0005558241 podman[418258]: 2025-12-13 09:34:02.86084009 +0000 UTC m=+0.484516580 container start 7a60f7c0baebf95c8a9a0e2ce5b02f13641cea4308a5a4ca5ce81e5918b57dc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 04:34:02 np0005558241 quizzical_galois[418322]: 167 167
Dec 13 04:34:02 np0005558241 systemd[1]: libpod-7a60f7c0baebf95c8a9a0e2ce5b02f13641cea4308a5a4ca5ce81e5918b57dc3.scope: Deactivated successfully.
Dec 13 04:34:03 np0005558241 podman[418258]: 2025-12-13 09:34:03.266008765 +0000 UTC m=+0.889685245 container attach 7a60f7c0baebf95c8a9a0e2ce5b02f13641cea4308a5a4ca5ce81e5918b57dc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_galois, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 04:34:03 np0005558241 podman[418258]: 2025-12-13 09:34:03.267441891 +0000 UTC m=+0.891118351 container died 7a60f7c0baebf95c8a9a0e2ce5b02f13641cea4308a5a4ca5ce81e5918b57dc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 04:34:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v3999: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Dec 13 04:34:03 np0005558241 systemd[1]: var-lib-containers-storage-overlay-de67c133a911fbf56954307f821a15b9caf014fb0193efe35a2357ae1a32f4f8-merged.mount: Deactivated successfully.
Dec 13 04:34:04 np0005558241 podman[418258]: 2025-12-13 09:34:04.421859773 +0000 UTC m=+2.045536253 container remove 7a60f7c0baebf95c8a9a0e2ce5b02f13641cea4308a5a4ca5ce81e5918b57dc3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 04:34:04 np0005558241 systemd[1]: libpod-conmon-7a60f7c0baebf95c8a9a0e2ce5b02f13641cea4308a5a4ca5ce81e5918b57dc3.scope: Deactivated successfully.
Dec 13 04:34:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:34:04.511 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:34:04 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:34:04.513 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:34:04 np0005558241 nova_compute[248510]: 2025-12-13 09:34:04.512 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:04 np0005558241 podman[418342]: 2025-12-13 09:34:04.609179537 +0000 UTC m=+0.084714378 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:34:04 np0005558241 podman[418343]: 2025-12-13 09:34:04.613147477 +0000 UTC m=+0.083919549 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 04:34:04 np0005558241 podman[418377]: 2025-12-13 09:34:04.641050108 +0000 UTC m=+0.051706180 container create d6c08220425227939d274e2a3f88eb4d0a95beca06f55fbf61312e4df427a6c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_driscoll, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:34:04 np0005558241 podman[418341]: 2025-12-13 09:34:04.64752279 +0000 UTC m=+0.116508787 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:34:04 np0005558241 systemd[1]: Started libpod-conmon-d6c08220425227939d274e2a3f88eb4d0a95beca06f55fbf61312e4df427a6c8.scope.
Dec 13 04:34:04 np0005558241 podman[418377]: 2025-12-13 09:34:04.618535812 +0000 UTC m=+0.029191914 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:34:04 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:34:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1987d1eedece97744a888d12f4b8335bfdfddc58d0a36a6d806decf7a8ee3654/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1987d1eedece97744a888d12f4b8335bfdfddc58d0a36a6d806decf7a8ee3654/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1987d1eedece97744a888d12f4b8335bfdfddc58d0a36a6d806decf7a8ee3654/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1987d1eedece97744a888d12f4b8335bfdfddc58d0a36a6d806decf7a8ee3654/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:04 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1987d1eedece97744a888d12f4b8335bfdfddc58d0a36a6d806decf7a8ee3654/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:04 np0005558241 podman[418377]: 2025-12-13 09:34:04.754002024 +0000 UTC m=+0.164658116 container init d6c08220425227939d274e2a3f88eb4d0a95beca06f55fbf61312e4df427a6c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_driscoll, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 04:34:04 np0005558241 podman[418377]: 2025-12-13 09:34:04.764052877 +0000 UTC m=+0.174708949 container start d6c08220425227939d274e2a3f88eb4d0a95beca06f55fbf61312e4df427a6c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:34:04 np0005558241 podman[418377]: 2025-12-13 09:34:04.768140079 +0000 UTC m=+0.178796151 container attach d6c08220425227939d274e2a3f88eb4d0a95beca06f55fbf61312e4df427a6c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_driscoll, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:34:04 np0005558241 nova_compute[248510]: 2025-12-13 09:34:04.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:34:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:05 np0005558241 keen_driscoll[418427]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:34:05 np0005558241 keen_driscoll[418427]: --> All data devices are unavailable
Dec 13 04:34:05 np0005558241 systemd[1]: libpod-d6c08220425227939d274e2a3f88eb4d0a95beca06f55fbf61312e4df427a6c8.scope: Deactivated successfully.
Dec 13 04:34:05 np0005558241 podman[418377]: 2025-12-13 09:34:05.300686624 +0000 UTC m=+0.711342696 container died d6c08220425227939d274e2a3f88eb4d0a95beca06f55fbf61312e4df427a6c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:34:05 np0005558241 systemd[1]: var-lib-containers-storage-overlay-1987d1eedece97744a888d12f4b8335bfdfddc58d0a36a6d806decf7a8ee3654-merged.mount: Deactivated successfully.
Dec 13 04:34:05 np0005558241 podman[418377]: 2025-12-13 09:34:05.346612487 +0000 UTC m=+0.757268559 container remove d6c08220425227939d274e2a3f88eb4d0a95beca06f55fbf61312e4df427a6c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_driscoll, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:34:05 np0005558241 systemd[1]: libpod-conmon-d6c08220425227939d274e2a3f88eb4d0a95beca06f55fbf61312e4df427a6c8.scope: Deactivated successfully.
Dec 13 04:34:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4000: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Dec 13 04:34:05 np0005558241 nova_compute[248510]: 2025-12-13 09:34:05.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:05 np0005558241 podman[418702]: 2025-12-13 09:34:05.893122332 +0000 UTC m=+0.049017492 container create 12fb424d32288347bc2d492a59cb4a4c6c0a2f9bfb56b96f1ddcf1ee944fa200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:34:05 np0005558241 systemd[1]: Started libpod-conmon-12fb424d32288347bc2d492a59cb4a4c6c0a2f9bfb56b96f1ddcf1ee944fa200.scope.
Dec 13 04:34:05 np0005558241 podman[418702]: 2025-12-13 09:34:05.872702429 +0000 UTC m=+0.028597609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:34:05 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:34:05 np0005558241 podman[418702]: 2025-12-13 09:34:05.986191929 +0000 UTC m=+0.142087119 container init 12fb424d32288347bc2d492a59cb4a4c6c0a2f9bfb56b96f1ddcf1ee944fa200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:34:05 np0005558241 podman[418702]: 2025-12-13 09:34:05.993518363 +0000 UTC m=+0.149413553 container start 12fb424d32288347bc2d492a59cb4a4c6c0a2f9bfb56b96f1ddcf1ee944fa200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Dec 13 04:34:05 np0005558241 podman[418702]: 2025-12-13 09:34:05.997815871 +0000 UTC m=+0.153711061 container attach 12fb424d32288347bc2d492a59cb4a4c6c0a2f9bfb56b96f1ddcf1ee944fa200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 04:34:06 np0005558241 funny_gates[418718]: 167 167
Dec 13 04:34:06 np0005558241 systemd[1]: libpod-12fb424d32288347bc2d492a59cb4a4c6c0a2f9bfb56b96f1ddcf1ee944fa200.scope: Deactivated successfully.
Dec 13 04:34:06 np0005558241 podman[418702]: 2025-12-13 09:34:06.001708919 +0000 UTC m=+0.157604079 container died 12fb424d32288347bc2d492a59cb4a4c6c0a2f9bfb56b96f1ddcf1ee944fa200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:34:06 np0005558241 systemd[1]: var-lib-containers-storage-overlay-b8325494977d5b59c8fa5c4cfd63d4975cc9e9ad1e819a2977c78b13ac812761-merged.mount: Deactivated successfully.
Dec 13 04:34:06 np0005558241 podman[418702]: 2025-12-13 09:34:06.055656044 +0000 UTC m=+0.211551204 container remove 12fb424d32288347bc2d492a59cb4a4c6c0a2f9bfb56b96f1ddcf1ee944fa200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 04:34:06 np0005558241 systemd[1]: libpod-conmon-12fb424d32288347bc2d492a59cb4a4c6c0a2f9bfb56b96f1ddcf1ee944fa200.scope: Deactivated successfully.
Dec 13 04:34:06 np0005558241 podman[418742]: 2025-12-13 09:34:06.268284704 +0000 UTC m=+0.052467619 container create 3ca8fadd4afc78e295470d4d03725c49104bb623ccfcf088b468ffc79b345663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 04:34:06 np0005558241 systemd[1]: Started libpod-conmon-3ca8fadd4afc78e295470d4d03725c49104bb623ccfcf088b468ffc79b345663.scope.
Dec 13 04:34:06 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:34:06 np0005558241 podman[418742]: 2025-12-13 09:34:06.245400169 +0000 UTC m=+0.029583134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:34:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55fe42313bc0677862f764fb537da779b5eb9550a692c8cb467ae8a3e600cbc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55fe42313bc0677862f764fb537da779b5eb9550a692c8cb467ae8a3e600cbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55fe42313bc0677862f764fb537da779b5eb9550a692c8cb467ae8a3e600cbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55fe42313bc0677862f764fb537da779b5eb9550a692c8cb467ae8a3e600cbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:06 np0005558241 podman[418742]: 2025-12-13 09:34:06.353669458 +0000 UTC m=+0.137852363 container init 3ca8fadd4afc78e295470d4d03725c49104bb623ccfcf088b468ffc79b345663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 04:34:06 np0005558241 nova_compute[248510]: 2025-12-13 09:34:06.352 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:06 np0005558241 podman[418742]: 2025-12-13 09:34:06.362133471 +0000 UTC m=+0.146316376 container start 3ca8fadd4afc78e295470d4d03725c49104bb623ccfcf088b468ffc79b345663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:34:06 np0005558241 podman[418742]: 2025-12-13 09:34:06.366734916 +0000 UTC m=+0.150917821 container attach 3ca8fadd4afc78e295470d4d03725c49104bb623ccfcf088b468ffc79b345663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 04:34:06 np0005558241 bold_wright[418758]: {
Dec 13 04:34:06 np0005558241 bold_wright[418758]:    "0": [
Dec 13 04:34:06 np0005558241 bold_wright[418758]:        {
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "devices": [
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "/dev/loop3"
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            ],
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "lv_name": "ceph_lv0",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "lv_size": "21470642176",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "name": "ceph_lv0",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "tags": {
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.cluster_name": "ceph",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.crush_device_class": "",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.encrypted": "0",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.objectstore": "bluestore",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.osd_id": "0",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.type": "block",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.vdo": "0",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.with_tpm": "0"
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            },
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "type": "block",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "vg_name": "ceph_vg0"
Dec 13 04:34:06 np0005558241 bold_wright[418758]:        }
Dec 13 04:34:06 np0005558241 bold_wright[418758]:    ],
Dec 13 04:34:06 np0005558241 bold_wright[418758]:    "1": [
Dec 13 04:34:06 np0005558241 bold_wright[418758]:        {
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "devices": [
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "/dev/loop4"
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            ],
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "lv_name": "ceph_lv1",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "lv_size": "21470642176",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "name": "ceph_lv1",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "tags": {
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.cluster_name": "ceph",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.crush_device_class": "",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.encrypted": "0",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.objectstore": "bluestore",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.osd_id": "1",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.type": "block",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.vdo": "0",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.with_tpm": "0"
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            },
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "type": "block",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "vg_name": "ceph_vg1"
Dec 13 04:34:06 np0005558241 bold_wright[418758]:        }
Dec 13 04:34:06 np0005558241 bold_wright[418758]:    ],
Dec 13 04:34:06 np0005558241 bold_wright[418758]:    "2": [
Dec 13 04:34:06 np0005558241 bold_wright[418758]:        {
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "devices": [
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "/dev/loop5"
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            ],
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "lv_name": "ceph_lv2",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "lv_size": "21470642176",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "name": "ceph_lv2",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "tags": {
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.cluster_name": "ceph",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.crush_device_class": "",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.encrypted": "0",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.objectstore": "bluestore",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.osd_id": "2",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.type": "block",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.vdo": "0",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:                "ceph.with_tpm": "0"
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            },
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "type": "block",
Dec 13 04:34:06 np0005558241 bold_wright[418758]:            "vg_name": "ceph_vg2"
Dec 13 04:34:06 np0005558241 bold_wright[418758]:        }
Dec 13 04:34:06 np0005558241 bold_wright[418758]:    ]
Dec 13 04:34:06 np0005558241 bold_wright[418758]: }
Dec 13 04:34:06 np0005558241 systemd[1]: libpod-3ca8fadd4afc78e295470d4d03725c49104bb623ccfcf088b468ffc79b345663.scope: Deactivated successfully.
Dec 13 04:34:06 np0005558241 podman[418742]: 2025-12-13 09:34:06.725349043 +0000 UTC m=+0.509531948 container died 3ca8fadd4afc78e295470d4d03725c49104bb623ccfcf088b468ffc79b345663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 04:34:06 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a55fe42313bc0677862f764fb537da779b5eb9550a692c8cb467ae8a3e600cbc-merged.mount: Deactivated successfully.
Dec 13 04:34:06 np0005558241 podman[418742]: 2025-12-13 09:34:06.776946318 +0000 UTC m=+0.561129233 container remove 3ca8fadd4afc78e295470d4d03725c49104bb623ccfcf088b468ffc79b345663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 04:34:06 np0005558241 systemd[1]: libpod-conmon-3ca8fadd4afc78e295470d4d03725c49104bb623ccfcf088b468ffc79b345663.scope: Deactivated successfully.
Dec 13 04:34:07 np0005558241 podman[418842]: 2025-12-13 09:34:07.306527058 +0000 UTC m=+0.046740454 container create 24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Dec 13 04:34:07 np0005558241 systemd[1]: Started libpod-conmon-24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b.scope.
Dec 13 04:34:07 np0005558241 podman[418842]: 2025-12-13 09:34:07.285237134 +0000 UTC m=+0.025450580 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:34:07 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:34:07 np0005558241 podman[418842]: 2025-12-13 09:34:07.410396557 +0000 UTC m=+0.150609973 container init 24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_carver, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:34:07 np0005558241 podman[418842]: 2025-12-13 09:34:07.422153912 +0000 UTC m=+0.162367308 container start 24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 04:34:07 np0005558241 podman[418842]: 2025-12-13 09:34:07.425827665 +0000 UTC m=+0.166041061 container attach 24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:34:07 np0005558241 cranky_carver[418858]: 167 167
Dec 13 04:34:07 np0005558241 systemd[1]: libpod-24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b.scope: Deactivated successfully.
Dec 13 04:34:07 np0005558241 conmon[418858]: conmon 24f4a84d5faff117b719 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b.scope/container/memory.events
Dec 13 04:34:07 np0005558241 podman[418842]: 2025-12-13 09:34:07.433757764 +0000 UTC m=+0.173971180 container died 24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:34:07 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9dd966305c25bf1fdcf1d73fb9508f94632d5fcfb49a12ca81b6b007b105c731-merged.mount: Deactivated successfully.
Dec 13 04:34:07 np0005558241 podman[418842]: 2025-12-13 09:34:07.490908749 +0000 UTC m=+0.231122155 container remove 24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_carver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 04:34:07 np0005558241 systemd[1]: libpod-conmon-24f4a84d5faff117b719500a70501449c4ae1dbfc38766f90b9eee143509520b.scope: Deactivated successfully.
Dec 13 04:34:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4001: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Dec 13 04:34:07 np0005558241 podman[418882]: 2025-12-13 09:34:07.700930194 +0000 UTC m=+0.046294084 container create d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_moser, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec 13 04:34:07 np0005558241 systemd[1]: Started libpod-conmon-d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6.scope.
Dec 13 04:34:07 np0005558241 podman[418882]: 2025-12-13 09:34:07.681402053 +0000 UTC m=+0.026765993 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:34:07 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:34:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6780dd2cb495a69b72f5c644dceffba76a7d12a008931b11fb2b7e1f83e7c07d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6780dd2cb495a69b72f5c644dceffba76a7d12a008931b11fb2b7e1f83e7c07d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6780dd2cb495a69b72f5c644dceffba76a7d12a008931b11fb2b7e1f83e7c07d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6780dd2cb495a69b72f5c644dceffba76a7d12a008931b11fb2b7e1f83e7c07d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:07 np0005558241 podman[418882]: 2025-12-13 09:34:07.803196672 +0000 UTC m=+0.148560652 container init d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 04:34:07 np0005558241 podman[418882]: 2025-12-13 09:34:07.810791033 +0000 UTC m=+0.156154963 container start d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_moser, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 04:34:07 np0005558241 podman[418882]: 2025-12-13 09:34:07.814795533 +0000 UTC m=+0.160159443 container attach d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_moser, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:34:08 np0005558241 systemd[1]: session-55.scope: Deactivated successfully.
Dec 13 04:34:08 np0005558241 systemd-logind[787]: Session 55 logged out. Waiting for processes to exit.
Dec 13 04:34:08 np0005558241 systemd-logind[787]: Removed session 55.
Dec 13 04:34:08 np0005558241 lvm[419002]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:34:08 np0005558241 lvm[419002]: VG ceph_vg1 finished
Dec 13 04:34:08 np0005558241 lvm[419001]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:34:08 np0005558241 lvm[419001]: VG ceph_vg0 finished
Dec 13 04:34:08 np0005558241 lvm[419004]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:34:08 np0005558241 lvm[419004]: VG ceph_vg2 finished
Dec 13 04:34:08 np0005558241 magical_moser[418899]: {}
Dec 13 04:34:08 np0005558241 systemd[1]: libpod-d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6.scope: Deactivated successfully.
Dec 13 04:34:08 np0005558241 systemd[1]: libpod-d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6.scope: Consumed 1.501s CPU time.
Dec 13 04:34:08 np0005558241 podman[418882]: 2025-12-13 09:34:08.690943707 +0000 UTC m=+1.036307607 container died d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 04:34:09 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6780dd2cb495a69b72f5c644dceffba76a7d12a008931b11fb2b7e1f83e7c07d-merged.mount: Deactivated successfully.
Dec 13 04:34:09 np0005558241 podman[418882]: 2025-12-13 09:34:09.044308082 +0000 UTC m=+1.389672012 container remove d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_moser, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 04:34:09 np0005558241 systemd[1]: libpod-conmon-d84e113e28becdebda2eea5d262b5f8838f3ddb8620bf5f35a0fab1ca12306f6.scope: Deactivated successfully.
Dec 13 04:34:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:34:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:34:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:34:09 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:34:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4002: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 04:34:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:34:09
Dec 13 04:34:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:34:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:34:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'default.rgw.log', 'backups', 'volumes', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images']
Dec 13 04:34:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:34:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:34:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:34:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:34:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:34:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:34:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:34:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:34:10 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:34:10 np0005558241 nova_compute[248510]: 2025-12-13 09:34:10.541 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:34:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:34:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:34:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:34:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:34:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:34:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:34:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:34:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:34:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:34:11 np0005558241 nova_compute[248510]: 2025-12-13 09:34:11.355 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:34:11.515 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:34:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4003: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 0 B/s wr, 63 op/s
Dec 13 04:34:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4004: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Dec 13 04:34:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:34:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2564468377' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:34:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:34:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2564468377' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:34:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4005: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Dec 13 04:34:15 np0005558241 nova_compute[248510]: 2025-12-13 09:34:15.544 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:16 np0005558241 nova_compute[248510]: 2025-12-13 09:34:16.357 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4006: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec 13 04:34:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4007: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec 13 04:34:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:20 np0005558241 nova_compute[248510]: 2025-12-13 09:34:20.546 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:21 np0005558241 nova_compute[248510]: 2025-12-13 09:34:21.359 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4008: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5261754351853937e-05 of space, bias 1.0, pg target 0.004578526305556181 quantized to 32 (current 32)
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697878223102975 of space, bias 1.0, pg target 0.20093634669308924 quantized to 32 (current 32)
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.64600408034546e-07 of space, bias 4.0, pg target 0.0006775204896414552 quantized to 16 (current 32)
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:34:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4009: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4010: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:25 np0005558241 nova_compute[248510]: 2025-12-13 09:34:25.549 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:26 np0005558241 nova_compute[248510]: 2025-12-13 09:34:26.391 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4011: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4012: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:30 np0005558241 nova_compute[248510]: 2025-12-13 09:34:30.549 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:31 np0005558241 nova_compute[248510]: 2025-12-13 09:34:31.440 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4013: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4014: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:35 np0005558241 podman[419047]: 2025-12-13 09:34:35.025698096 +0000 UTC m=+0.098270529 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible)
Dec 13 04:34:35 np0005558241 podman[419048]: 2025-12-13 09:34:35.025755057 +0000 UTC m=+0.094110994 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 13 04:34:35 np0005558241 podman[419046]: 2025-12-13 09:34:35.06291242 +0000 UTC m=+0.135112944 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 04:34:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4015: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:35 np0005558241 nova_compute[248510]: 2025-12-13 09:34:35.552 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:36 np0005558241 nova_compute[248510]: 2025-12-13 09:34:36.443 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4016: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:38 np0005558241 nova_compute[248510]: 2025-12-13 09:34:38.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:34:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4017: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:34:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:34:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:34:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:34:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:34:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:34:40 np0005558241 nova_compute[248510]: 2025-12-13 09:34:40.554 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:40 np0005558241 nova_compute[248510]: 2025-12-13 09:34:40.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:34:41 np0005558241 nova_compute[248510]: 2025-12-13 09:34:41.482 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4018: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4019: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4020: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:45 np0005558241 nova_compute[248510]: 2025-12-13 09:34:45.555 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:46 np0005558241 nova_compute[248510]: 2025-12-13 09:34:46.486 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4021: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4022: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Dec 13 04:34:49 np0005558241 nova_compute[248510]: 2025-12-13 09:34:49.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:34:49 np0005558241 nova_compute[248510]: 2025-12-13 09:34:49.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:34:49 np0005558241 nova_compute[248510]: 2025-12-13 09:34:49.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:34:50 np0005558241 nova_compute[248510]: 2025-12-13 09:34:50.399 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:34:50 np0005558241 nova_compute[248510]: 2025-12-13 09:34:50.577 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:50 np0005558241 nova_compute[248510]: 2025-12-13 09:34:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:34:50 np0005558241 nova_compute[248510]: 2025-12-13 09:34:50.885 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:34:50 np0005558241 nova_compute[248510]: 2025-12-13 09:34:50.886 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:34:50 np0005558241 nova_compute[248510]: 2025-12-13 09:34:50.886 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:34:50 np0005558241 nova_compute[248510]: 2025-12-13 09:34:50.886 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:34:50 np0005558241 nova_compute[248510]: 2025-12-13 09:34:50.887 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:34:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:51 np0005558241 nova_compute[248510]: 2025-12-13 09:34:51.489 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4023: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Dec 13 04:34:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:34:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1835873693' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:34:51 np0005558241 nova_compute[248510]: 2025-12-13 09:34:51.703 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.816s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:34:51 np0005558241 nova_compute[248510]: 2025-12-13 09:34:51.992 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:34:51 np0005558241 nova_compute[248510]: 2025-12-13 09:34:51.995 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3487MB free_disk=59.98736572358757GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:34:51 np0005558241 nova_compute[248510]: 2025-12-13 09:34:51.996 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:34:51 np0005558241 nova_compute[248510]: 2025-12-13 09:34:51.996 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:34:52 np0005558241 nova_compute[248510]: 2025-12-13 09:34:52.115 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:34:52 np0005558241 nova_compute[248510]: 2025-12-13 09:34:52.116 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:34:52 np0005558241 nova_compute[248510]: 2025-12-13 09:34:52.140 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:34:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4024: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 04:34:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:34:55.471 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:34:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:34:55.472 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:34:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:34:55.472 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:34:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4025: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 04:34:55 np0005558241 nova_compute[248510]: 2025-12-13 09:34:55.580 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:56 np0005558241 nova_compute[248510]: 2025-12-13 09:34:56.492 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:34:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4026: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 04:34:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:34:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3026049802' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:34:57 np0005558241 nova_compute[248510]: 2025-12-13 09:34:57.767 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:34:57 np0005558241 nova_compute[248510]: 2025-12-13 09:34:57.776 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:34:58 np0005558241 nova_compute[248510]: 2025-12-13 09:34:58.487 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:34:58 np0005558241 nova_compute[248510]: 2025-12-13 09:34:58.490 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:34:58 np0005558241 nova_compute[248510]: 2025-12-13 09:34:58.491 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:34:59 np0005558241 nova_compute[248510]: 2025-12-13 09:34:59.492 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:34:59 np0005558241 nova_compute[248510]: 2025-12-13 09:34:59.493 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:34:59 np0005558241 nova_compute[248510]: 2025-12-13 09:34:59.493 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:34:59 np0005558241 nova_compute[248510]: 2025-12-13 09:34:59.494 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:34:59 np0005558241 nova_compute[248510]: 2025-12-13 09:34:59.494 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:34:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4027: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 04:35:00 np0005558241 nova_compute[248510]: 2025-12-13 09:35:00.617 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:01 np0005558241 nova_compute[248510]: 2025-12-13 09:35:01.495 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4028: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 04:35:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4029: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 04:35:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4030: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 04:35:05 np0005558241 nova_compute[248510]: 2025-12-13 09:35:05.620 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:05 np0005558241 podman[419152]: 2025-12-13 09:35:05.976033715 +0000 UTC m=+0.063525766 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 04:35:06 np0005558241 podman[419153]: 2025-12-13 09:35:06.014914041 +0000 UTC m=+0.088627606 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 13 04:35:06 np0005558241 podman[419151]: 2025-12-13 09:35:06.014945902 +0000 UTC m=+0.096604467 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:35:06 np0005558241 nova_compute[248510]: 2025-12-13 09:35:06.497 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:06 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:06 np0005558241 nova_compute[248510]: 2025-12-13 09:35:06.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:35:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4031: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 0 B/s wr, 3 op/s
Dec 13 04:35:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:35:09
Dec 13 04:35:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:35:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:35:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', '.mgr']
Dec 13 04:35:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:35:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4032: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 6 op/s
Dec 13 04:35:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:35:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:35:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:35:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:35:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:35:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:35:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:35:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:35:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:35:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:35:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:35:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:35:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:35:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:35:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:35:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:35:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:35:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:35:10 np0005558241 nova_compute[248510]: 2025-12-13 09:35:10.623 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:11 np0005558241 podman[419356]: 2025-12-13 09:35:11.014117962 +0000 UTC m=+0.038350674 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:35:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:35:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:35:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:35:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:35:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:35:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:35:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:35:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:35:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:35:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:35:11 np0005558241 nova_compute[248510]: 2025-12-13 09:35:11.527 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4033: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 04:35:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:12 np0005558241 podman[419356]: 2025-12-13 09:35:12.014909896 +0000 UTC m=+1.039142548 container create e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:35:12 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:35:12 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:35:12 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:35:13 np0005558241 systemd[1]: Started libpod-conmon-e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c.scope.
Dec 13 04:35:13 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:35:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4034: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 04:35:14 np0005558241 podman[419356]: 2025-12-13 09:35:14.05578149 +0000 UTC m=+3.080014212 container init e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:35:14 np0005558241 podman[419356]: 2025-12-13 09:35:14.068473469 +0000 UTC m=+3.092706141 container start e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_curie, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 04:35:14 np0005558241 condescending_curie[419373]: 167 167
Dec 13 04:35:14 np0005558241 systemd[1]: libpod-e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c.scope: Deactivated successfully.
Dec 13 04:35:14 np0005558241 conmon[419373]: conmon e52d8576cf4d35e992e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c.scope/container/memory.events
Dec 13 04:35:14 np0005558241 podman[419356]: 2025-12-13 09:35:14.591954685 +0000 UTC m=+3.616187377 container attach e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_curie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 04:35:14 np0005558241 podman[419356]: 2025-12-13 09:35:14.592671173 +0000 UTC m=+3.616903885 container died e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 04:35:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4035: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 04:35:15 np0005558241 nova_compute[248510]: 2025-12-13 09:35:15.626 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:35:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2986954795' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:35:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:35:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2986954795' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:35:16 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4112d5cfafc40d6df4d8a69dcfd7c066158305bedc1433fc296039a83f055f3b-merged.mount: Deactivated successfully.
Dec 13 04:35:16 np0005558241 nova_compute[248510]: 2025-12-13 09:35:16.571 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4036: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 04:35:17 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Dec 13 04:35:17 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:17.591026) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:35:17 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Dec 13 04:35:17 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618517591152, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1032, "num_deletes": 262, "total_data_size": 1579712, "memory_usage": 1609664, "flush_reason": "Manual Compaction"}
Dec 13 04:35:17 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Dec 13 04:35:18 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618518262164, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 1530896, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79822, "largest_seqno": 80853, "table_properties": {"data_size": 1525789, "index_size": 2566, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 10911, "raw_average_key_size": 19, "raw_value_size": 1515525, "raw_average_value_size": 2687, "num_data_blocks": 115, "num_entries": 564, "num_filter_entries": 564, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765618413, "oldest_key_time": 1765618413, "file_creation_time": 1765618517, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:35:18 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 671218 microseconds, and 8955 cpu microseconds.
Dec 13 04:35:18 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:35:18 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:18.262240) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 1530896 bytes OK
Dec 13 04:35:18 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:18.262271) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Dec 13 04:35:18 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:18.482286) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Dec 13 04:35:18 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:18.482328) EVENT_LOG_v1 {"time_micros": 1765618518482316, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:35:18 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:18.482359) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:35:18 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 1574770, prev total WAL file size 1603089, number of live WAL files 2.
Dec 13 04:35:18 np0005558241 podman[419356]: 2025-12-13 09:35:18.482346979 +0000 UTC m=+7.506579611 container remove e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_curie, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 04:35:18 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:35:18 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:18.483932) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353136' seq:72057594037927935, type:22 .. '6C6F676D0033373734' seq:0, type:0; will stop at (end)
Dec 13 04:35:18 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:35:18 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(1495KB)], [191(9629KB)]
Dec 13 04:35:18 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618518483977, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 11391657, "oldest_snapshot_seqno": -1}
Dec 13 04:35:18 np0005558241 systemd[1]: libpod-conmon-e52d8576cf4d35e992e423c28068fd03f055d7e12413ba05b1e738cc82508a4c.scope: Deactivated successfully.
Dec 13 04:35:18 np0005558241 podman[419398]: 2025-12-13 09:35:18.681033199 +0000 UTC m=+0.038921179 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:35:18 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 9607 keys, 11277108 bytes, temperature: kUnknown
Dec 13 04:35:18 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618518956184, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 11277108, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11217374, "index_size": 34597, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24069, "raw_key_size": 253963, "raw_average_key_size": 26, "raw_value_size": 11050657, "raw_average_value_size": 1150, "num_data_blocks": 1328, "num_entries": 9607, "num_filter_entries": 9607, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765618518, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:35:18 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:35:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4037: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 7 op/s
Dec 13 04:35:20 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:18.956563) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 11277108 bytes
Dec 13 04:35:20 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:20.100981) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 24.1 rd, 23.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(14.8) write-amplify(7.4) OK, records in: 10143, records dropped: 536 output_compression: NoCompression
Dec 13 04:35:20 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:20.101047) EVENT_LOG_v1 {"time_micros": 1765618520101018, "job": 120, "event": "compaction_finished", "compaction_time_micros": 472315, "compaction_time_cpu_micros": 50245, "output_level": 6, "num_output_files": 1, "total_output_size": 11277108, "num_input_records": 10143, "num_output_records": 9607, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:35:20 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:35:20 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618520101918, "job": 120, "event": "table_file_deletion", "file_number": 193}
Dec 13 04:35:20 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:35:20 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618520105346, "job": 120, "event": "table_file_deletion", "file_number": 191}
Dec 13 04:35:20 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:18.483859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:35:20 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:20.105399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:35:20 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:20.105407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:35:20 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:20.105410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:35:20 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:20.105414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:35:20 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:35:20.105417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:35:20 np0005558241 podman[419398]: 2025-12-13 09:35:20.330301559 +0000 UTC m=+1.688189459 container create f6c6ab04c7bc3a359de6c221ba38d512e35710965fe6865bd18d812f73e3f0a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 04:35:20 np0005558241 nova_compute[248510]: 2025-12-13 09:35:20.629 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:21 np0005558241 systemd[1]: Started libpod-conmon-f6c6ab04c7bc3a359de6c221ba38d512e35710965fe6865bd18d812f73e3f0a9.scope.
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4038: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Dec 13 04:35:21 np0005558241 nova_compute[248510]: 2025-12-13 09:35:21.573 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:21 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:35:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6474256d1f73c1e9489f5a91a0d9ff588d68c13d01980a636fe9605d753af308/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6474256d1f73c1e9489f5a91a0d9ff588d68c13d01980a636fe9605d753af308/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6474256d1f73c1e9489f5a91a0d9ff588d68c13d01980a636fe9605d753af308/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6474256d1f73c1e9489f5a91a0d9ff588d68c13d01980a636fe9605d753af308/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:21 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6474256d1f73c1e9489f5a91a0d9ff588d68c13d01980a636fe9605d753af308/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5261754351853937e-05 of space, bias 1.0, pg target 0.004578526305556181 quantized to 32 (current 32)
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697878223102975 of space, bias 1.0, pg target 0.20093634669308924 quantized to 32 (current 32)
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.64600408034546e-07 of space, bias 4.0, pg target 0.0006775204896414552 quantized to 16 (current 32)
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:35:22 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:22 np0005558241 podman[419398]: 2025-12-13 09:35:22.453968622 +0000 UTC m=+3.811856612 container init f6c6ab04c7bc3a359de6c221ba38d512e35710965fe6865bd18d812f73e3f0a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:35:22 np0005558241 podman[419398]: 2025-12-13 09:35:22.469234315 +0000 UTC m=+3.827122225 container start f6c6ab04c7bc3a359de6c221ba38d512e35710965fe6865bd18d812f73e3f0a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:35:23 np0005558241 romantic_banzai[419414]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:35:23 np0005558241 romantic_banzai[419414]: --> All data devices are unavailable
Dec 13 04:35:23 np0005558241 systemd[1]: libpod-f6c6ab04c7bc3a359de6c221ba38d512e35710965fe6865bd18d812f73e3f0a9.scope: Deactivated successfully.
Dec 13 04:35:23 np0005558241 podman[419398]: 2025-12-13 09:35:23.466884031 +0000 UTC m=+4.824771951 container attach f6c6ab04c7bc3a359de6c221ba38d512e35710965fe6865bd18d812f73e3f0a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:35:23 np0005558241 podman[419398]: 2025-12-13 09:35:23.470865921 +0000 UTC m=+4.828753841 container died f6c6ab04c7bc3a359de6c221ba38d512e35710965fe6865bd18d812f73e3f0a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 04:35:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4039: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 0 B/s wr, 3 op/s
Dec 13 04:35:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4040: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Dec 13 04:35:25 np0005558241 nova_compute[248510]: 2025-12-13 09:35:25.630 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:25 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6474256d1f73c1e9489f5a91a0d9ff588d68c13d01980a636fe9605d753af308-merged.mount: Deactivated successfully.
Dec 13 04:35:26 np0005558241 nova_compute[248510]: 2025-12-13 09:35:26.577 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4041: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:28 np0005558241 podman[419398]: 2025-12-13 09:35:28.560516733 +0000 UTC m=+9.918404673 container remove f6c6ab04c7bc3a359de6c221ba38d512e35710965fe6865bd18d812f73e3f0a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:35:28 np0005558241 systemd[1]: libpod-conmon-f6c6ab04c7bc3a359de6c221ba38d512e35710965fe6865bd18d812f73e3f0a9.scope: Deactivated successfully.
Dec 13 04:35:29 np0005558241 podman[419510]: 2025-12-13 09:35:29.099650523 +0000 UTC m=+0.028724743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:35:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4042: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:29 np0005558241 podman[419510]: 2025-12-13 09:35:29.647465631 +0000 UTC m=+0.576539741 container create 7718cc07dc4b19d4cb0ba6704b9e6e173220ebd33f732957baecfa3202b00af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_archimedes, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 04:35:30 np0005558241 systemd[1]: Started libpod-conmon-7718cc07dc4b19d4cb0ba6704b9e6e173220ebd33f732957baecfa3202b00af5.scope.
Dec 13 04:35:30 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:35:30 np0005558241 nova_compute[248510]: 2025-12-13 09:35:30.633 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:30 np0005558241 podman[419510]: 2025-12-13 09:35:30.758290008 +0000 UTC m=+1.687364148 container init 7718cc07dc4b19d4cb0ba6704b9e6e173220ebd33f732957baecfa3202b00af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:35:30 np0005558241 podman[419510]: 2025-12-13 09:35:30.770794113 +0000 UTC m=+1.699868263 container start 7718cc07dc4b19d4cb0ba6704b9e6e173220ebd33f732957baecfa3202b00af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 04:35:30 np0005558241 busy_archimedes[419526]: 167 167
Dec 13 04:35:30 np0005558241 systemd[1]: libpod-7718cc07dc4b19d4cb0ba6704b9e6e173220ebd33f732957baecfa3202b00af5.scope: Deactivated successfully.
Dec 13 04:35:31 np0005558241 podman[419510]: 2025-12-13 09:35:31.122341081 +0000 UTC m=+2.051415211 container attach 7718cc07dc4b19d4cb0ba6704b9e6e173220ebd33f732957baecfa3202b00af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_archimedes, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 04:35:31 np0005558241 podman[419510]: 2025-12-13 09:35:31.123539591 +0000 UTC m=+2.052613711 container died 7718cc07dc4b19d4cb0ba6704b9e6e173220ebd33f732957baecfa3202b00af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 04:35:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4043: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:31 np0005558241 nova_compute[248510]: 2025-12-13 09:35:31.617 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:35:32.843 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:35:32 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:35:32.844 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:35:32 np0005558241 nova_compute[248510]: 2025-12-13 09:35:32.878 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:32 np0005558241 systemd-logind[787]: New session 56 of user zuul.
Dec 13 04:35:32 np0005558241 systemd[1]: Started Session 56 of User zuul.
Dec 13 04:35:33 np0005558241 systemd[1]: var-lib-containers-storage-overlay-303ed7bbf5942a236c8dba9fd284ae12c57d67151f2c87b3a862a40856a559ab-merged.mount: Deactivated successfully.
Dec 13 04:35:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4044: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:34 np0005558241 podman[419510]: 2025-12-13 09:35:34.029479662 +0000 UTC m=+4.958553822 container remove 7718cc07dc4b19d4cb0ba6704b9e6e173220ebd33f732957baecfa3202b00af5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 04:35:34 np0005558241 systemd[1]: libpod-conmon-7718cc07dc4b19d4cb0ba6704b9e6e173220ebd33f732957baecfa3202b00af5.scope: Deactivated successfully.
Dec 13 04:35:34 np0005558241 podman[419723]: 2025-12-13 09:35:34.285799659 +0000 UTC m=+0.038041976 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:35:34 np0005558241 nova_compute[248510]: 2025-12-13 09:35:34.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:35:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4045: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:35 np0005558241 nova_compute[248510]: 2025-12-13 09:35:35.635 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:35 np0005558241 podman[419723]: 2025-12-13 09:35:35.692926766 +0000 UTC m=+1.445169033 container create 555c57db903cfad0cc9573ef49b10b37d1abaa3d9d1c184e9cb7c0486fcd1b6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:35:36 np0005558241 systemd[1]: Started libpod-conmon-555c57db903cfad0cc9573ef49b10b37d1abaa3d9d1c184e9cb7c0486fcd1b6a.scope.
Dec 13 04:35:36 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:35:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fe11723b3397c4765311c6c611e42d1705d8a7b23f406004479bf7c9b73793e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fe11723b3397c4765311c6c611e42d1705d8a7b23f406004479bf7c9b73793e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fe11723b3397c4765311c6c611e42d1705d8a7b23f406004479bf7c9b73793e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:36 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fe11723b3397c4765311c6c611e42d1705d8a7b23f406004479bf7c9b73793e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:36 np0005558241 nova_compute[248510]: 2025-12-13 09:35:36.645 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:37 np0005558241 podman[419723]: 2025-12-13 09:35:37.053601869 +0000 UTC m=+2.805844116 container init 555c57db903cfad0cc9573ef49b10b37d1abaa3d9d1c184e9cb7c0486fcd1b6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_colden, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:35:37 np0005558241 podman[419723]: 2025-12-13 09:35:37.072201907 +0000 UTC m=+2.824444174 container start 555c57db903cfad0cc9573ef49b10b37d1abaa3d9d1c184e9cb7c0486fcd1b6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_colden, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]: {
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:    "0": [
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:        {
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "devices": [
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "/dev/loop3"
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            ],
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "lv_name": "ceph_lv0",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "lv_size": "21470642176",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "name": "ceph_lv0",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "tags": {
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.cluster_name": "ceph",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.crush_device_class": "",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.encrypted": "0",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.objectstore": "bluestore",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.osd_id": "0",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.type": "block",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.vdo": "0",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.with_tpm": "0"
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            },
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "type": "block",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "vg_name": "ceph_vg0"
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:        }
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:    ],
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:    "1": [
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:        {
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "devices": [
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "/dev/loop4"
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            ],
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "lv_name": "ceph_lv1",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "lv_size": "21470642176",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "name": "ceph_lv1",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "tags": {
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.cluster_name": "ceph",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.crush_device_class": "",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.encrypted": "0",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.objectstore": "bluestore",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.osd_id": "1",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.type": "block",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.vdo": "0",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.with_tpm": "0"
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            },
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "type": "block",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "vg_name": "ceph_vg1"
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:        }
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:    ],
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:    "2": [
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:        {
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "devices": [
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "/dev/loop5"
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            ],
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "lv_name": "ceph_lv2",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "lv_size": "21470642176",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "name": "ceph_lv2",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "tags": {
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.cluster_name": "ceph",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.crush_device_class": "",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.encrypted": "0",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.objectstore": "bluestore",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.osd_id": "2",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.type": "block",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.vdo": "0",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:                "ceph.with_tpm": "0"
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            },
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "type": "block",
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:            "vg_name": "ceph_vg2"
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:        }
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]:    ]
Dec 13 04:35:37 np0005558241 mystifying_colden[419739]: }
Dec 13 04:35:37 np0005558241 systemd[1]: libpod-555c57db903cfad0cc9573ef49b10b37d1abaa3d9d1c184e9cb7c0486fcd1b6a.scope: Deactivated successfully.
Dec 13 04:35:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4046: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:37 np0005558241 podman[419723]: 2025-12-13 09:35:37.615964653 +0000 UTC m=+3.368206930 container attach 555c57db903cfad0cc9573ef49b10b37d1abaa3d9d1c184e9cb7c0486fcd1b6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_colden, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:35:37 np0005558241 podman[419723]: 2025-12-13 09:35:37.618182009 +0000 UTC m=+3.370424276 container died 555c57db903cfad0cc9573ef49b10b37d1abaa3d9d1c184e9cb7c0486fcd1b6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_colden, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 04:35:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:39 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0fe11723b3397c4765311c6c611e42d1705d8a7b23f406004479bf7c9b73793e-merged.mount: Deactivated successfully.
Dec 13 04:35:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4047: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:39 np0005558241 podman[419723]: 2025-12-13 09:35:39.730936119 +0000 UTC m=+5.483178356 container remove 555c57db903cfad0cc9573ef49b10b37d1abaa3d9d1c184e9cb7c0486fcd1b6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:35:39 np0005558241 systemd[1]: libpod-conmon-555c57db903cfad0cc9573ef49b10b37d1abaa3d9d1c184e9cb7c0486fcd1b6a.scope: Deactivated successfully.
Dec 13 04:35:39 np0005558241 nova_compute[248510]: 2025-12-13 09:35:39.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:35:39 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:35:39.847 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:35:39 np0005558241 podman[419744]: 2025-12-13 09:35:39.855056816 +0000 UTC m=+3.507835568 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 04:35:39 np0005558241 podman[419742]: 2025-12-13 09:35:39.855586679 +0000 UTC m=+3.516977897 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 13 04:35:39 np0005558241 podman[419741]: 2025-12-13 09:35:39.912513819 +0000 UTC m=+3.568856570 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec 13 04:35:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:35:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:35:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:35:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:35:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:35:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:35:40 np0005558241 podman[419885]: 2025-12-13 09:35:40.27768963 +0000 UTC m=+0.035082022 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:35:40 np0005558241 systemd-logind[787]: New session 57 of user zuul.
Dec 13 04:35:40 np0005558241 systemd[1]: Started Session 57 of User zuul.
Dec 13 04:35:40 np0005558241 podman[419885]: 2025-12-13 09:35:40.599440651 +0000 UTC m=+0.356832943 container create 07187b5b16bf51f509ef303a7e097c47dcf6fecafbc92b249f1fccf4dd27ea68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:35:40 np0005558241 nova_compute[248510]: 2025-12-13 09:35:40.637 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:40 np0005558241 systemd[1]: Started libpod-conmon-07187b5b16bf51f509ef303a7e097c47dcf6fecafbc92b249f1fccf4dd27ea68.scope.
Dec 13 04:35:40 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:35:40 np0005558241 podman[419885]: 2025-12-13 09:35:40.918229916 +0000 UTC m=+0.675622228 container init 07187b5b16bf51f509ef303a7e097c47dcf6fecafbc92b249f1fccf4dd27ea68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:35:40 np0005558241 podman[419885]: 2025-12-13 09:35:40.92953951 +0000 UTC m=+0.686931802 container start 07187b5b16bf51f509ef303a7e097c47dcf6fecafbc92b249f1fccf4dd27ea68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 04:35:40 np0005558241 jovial_golick[419979]: 167 167
Dec 13 04:35:40 np0005558241 systemd[1]: libpod-07187b5b16bf51f509ef303a7e097c47dcf6fecafbc92b249f1fccf4dd27ea68.scope: Deactivated successfully.
Dec 13 04:35:41 np0005558241 podman[419885]: 2025-12-13 09:35:41.172101572 +0000 UTC m=+0.929493864 container attach 07187b5b16bf51f509ef303a7e097c47dcf6fecafbc92b249f1fccf4dd27ea68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:35:41 np0005558241 podman[419885]: 2025-12-13 09:35:41.173980979 +0000 UTC m=+0.931373271 container died 07187b5b16bf51f509ef303a7e097c47dcf6fecafbc92b249f1fccf4dd27ea68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:35:41 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3f17ec75e62fc75ac6ad749da7c4b61bc858fd5c433d0dd71280f4b60d9e9ed2-merged.mount: Deactivated successfully.
Dec 13 04:35:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4048: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:41 np0005558241 nova_compute[248510]: 2025-12-13 09:35:41.650 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:42 np0005558241 podman[419885]: 2025-12-13 09:35:42.348048285 +0000 UTC m=+2.105440577 container remove 07187b5b16bf51f509ef303a7e097c47dcf6fecafbc92b249f1fccf4dd27ea68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:35:42 np0005558241 systemd[1]: libpod-conmon-07187b5b16bf51f509ef303a7e097c47dcf6fecafbc92b249f1fccf4dd27ea68.scope: Deactivated successfully.
Dec 13 04:35:42 np0005558241 podman[420035]: 2025-12-13 09:35:42.525524142 +0000 UTC m=+0.037584425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:35:42 np0005558241 nova_compute[248510]: 2025-12-13 09:35:42.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:35:43 np0005558241 podman[420035]: 2025-12-13 09:35:43.000766317 +0000 UTC m=+0.512826580 container create 74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_williams, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:35:43 np0005558241 systemd[1]: Started libpod-conmon-74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e.scope.
Dec 13 04:35:43 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:35:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b5815bf93afab729f4194569f345475b8b7dacaf90a945873686b6b289c3c07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b5815bf93afab729f4194569f345475b8b7dacaf90a945873686b6b289c3c07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b5815bf93afab729f4194569f345475b8b7dacaf90a945873686b6b289c3c07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:43 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b5815bf93afab729f4194569f345475b8b7dacaf90a945873686b6b289c3c07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4049: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:43 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:43 np0005558241 podman[420035]: 2025-12-13 09:35:43.914683679 +0000 UTC m=+1.426743952 container init 74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_williams, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Dec 13 04:35:43 np0005558241 podman[420035]: 2025-12-13 09:35:43.922927777 +0000 UTC m=+1.434988040 container start 74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_williams, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:35:44 np0005558241 systemd[1]: Reloading.
Dec 13 04:35:44 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 04:35:44 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 04:35:45 np0005558241 flamboyant_williams[420069]: {}
Dec 13 04:35:45 np0005558241 podman[420035]: 2025-12-13 09:35:45.134760091 +0000 UTC m=+2.646820344 container attach 74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 04:35:45 np0005558241 podman[420035]: 2025-12-13 09:35:45.159771769 +0000 UTC m=+2.671832062 container died 74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_williams, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:35:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4050: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:45 np0005558241 systemd[1]: libpod-74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e.scope: Deactivated successfully.
Dec 13 04:35:45 np0005558241 systemd[1]: libpod-74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e.scope: Consumed 1.409s CPU time.
Dec 13 04:35:45 np0005558241 nova_compute[248510]: 2025-12-13 09:35:45.638 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:45 np0005558241 lvm[420202]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:35:45 np0005558241 lvm[420202]: VG ceph_vg0 finished
Dec 13 04:35:45 np0005558241 lvm[420207]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:35:45 np0005558241 lvm[420207]: VG ceph_vg1 finished
Dec 13 04:35:45 np0005558241 lvm[420208]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:35:45 np0005558241 lvm[420208]: VG ceph_vg2 finished
Dec 13 04:35:45 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8b5815bf93afab729f4194569f345475b8b7dacaf90a945873686b6b289c3c07-merged.mount: Deactivated successfully.
Dec 13 04:35:45 np0005558241 systemd[1]: Reloading.
Dec 13 04:35:46 np0005558241 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 04:35:46 np0005558241 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 04:35:46 np0005558241 podman[420035]: 2025-12-13 09:35:46.277341985 +0000 UTC m=+3.789402248 container remove 74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_williams, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:35:46 np0005558241 systemd[1]: libpod-conmon-74a51eeea122383ccabf96cebe2f6a1f76eb46738abdcd269496dad0b8f4226e.scope: Deactivated successfully.
Dec 13 04:35:46 np0005558241 systemd[1]: Starting Podman API Socket...
Dec 13 04:35:46 np0005558241 systemd[1]: Listening on Podman API Socket.
Dec 13 04:35:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:35:46 np0005558241 dbus-broker-launch[768]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Dec 13 04:35:46 np0005558241 systemd[1]: podman.socket: Deactivated successfully.
Dec 13 04:35:46 np0005558241 systemd[1]: Closed Podman API Socket.
Dec 13 04:35:46 np0005558241 systemd[1]: Stopping Podman API Socket...
Dec 13 04:35:46 np0005558241 systemd[1]: Starting Podman API Socket...
Dec 13 04:35:46 np0005558241 systemd[1]: Listening on Podman API Socket.
Dec 13 04:35:46 np0005558241 nova_compute[248510]: 2025-12-13 09:35:46.664 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:46 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:35:46 np0005558241 systemd-logind[787]: New session 58 of user zuul.
Dec 13 04:35:46 np0005558241 systemd[1]: Started Session 58 of User zuul.
Dec 13 04:35:46 np0005558241 systemd[1]: Starting Podman API Service...
Dec 13 04:35:46 np0005558241 systemd[1]: Started Podman API Service.
Dec 13 04:35:46 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:35:46 np0005558241 podman[420271]: time="2025-12-13T09:35:46Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec 13 04:35:46 np0005558241 podman[420271]: time="2025-12-13T09:35:46Z" level=info msg="Setting parallel job count to 25"
Dec 13 04:35:46 np0005558241 podman[420271]: time="2025-12-13T09:35:46Z" level=info msg="Using sqlite as database backend"
Dec 13 04:35:46 np0005558241 podman[420271]: time="2025-12-13T09:35:46Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec 13 04:35:46 np0005558241 podman[420271]: time="2025-12-13T09:35:46Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec 13 04:35:46 np0005558241 podman[420271]: time="2025-12-13T09:35:46Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Dec 13 04:35:46 np0005558241 podman[420271]: @ - - [13/Dec/2025:09:35:46 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Dec 13 04:35:46 np0005558241 podman[420271]: @ - - [13/Dec/2025:09:35:46 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 25040 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Dec 13 04:35:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4051: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:47 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:35:48 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:35:48 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:35:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4052: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:50 np0005558241 nova_compute[248510]: 2025-12-13 09:35:50.641 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:50 np0005558241 nova_compute[248510]: 2025-12-13 09:35:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:35:50 np0005558241 nova_compute[248510]: 2025-12-13 09:35:50.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:35:50 np0005558241 nova_compute[248510]: 2025-12-13 09:35:50.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:35:50 np0005558241 nova_compute[248510]: 2025-12-13 09:35:50.808 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:35:50 np0005558241 nova_compute[248510]: 2025-12-13 09:35:50.809 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:35:50 np0005558241 nova_compute[248510]: 2025-12-13 09:35:50.859 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:35:50 np0005558241 nova_compute[248510]: 2025-12-13 09:35:50.859 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:35:50 np0005558241 nova_compute[248510]: 2025-12-13 09:35:50.859 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:35:50 np0005558241 nova_compute[248510]: 2025-12-13 09:35:50.860 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:35:50 np0005558241 nova_compute[248510]: 2025-12-13 09:35:50.860 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:35:51 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:35:51 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3350389499' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:35:51 np0005558241 nova_compute[248510]: 2025-12-13 09:35:51.483 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:35:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4053: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:51 np0005558241 nova_compute[248510]: 2025-12-13 09:35:51.667 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:51 np0005558241 nova_compute[248510]: 2025-12-13 09:35:51.676 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:35:51 np0005558241 nova_compute[248510]: 2025-12-13 09:35:51.677 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3453MB free_disk=59.98736572358757GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:35:51 np0005558241 nova_compute[248510]: 2025-12-13 09:35:51.677 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:35:51 np0005558241 nova_compute[248510]: 2025-12-13 09:35:51.677 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:35:51 np0005558241 nova_compute[248510]: 2025-12-13 09:35:51.804 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:35:51 np0005558241 nova_compute[248510]: 2025-12-13 09:35:51.804 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:35:52 np0005558241 nova_compute[248510]: 2025-12-13 09:35:52.003 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:35:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:35:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2736780615' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:35:52 np0005558241 nova_compute[248510]: 2025-12-13 09:35:52.582 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:35:52 np0005558241 nova_compute[248510]: 2025-12-13 09:35:52.589 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:35:52 np0005558241 nova_compute[248510]: 2025-12-13 09:35:52.642 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:35:52 np0005558241 nova_compute[248510]: 2025-12-13 09:35:52.644 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:35:52 np0005558241 nova_compute[248510]: 2025-12-13 09:35:52.644 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:35:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4054: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:53 np0005558241 nova_compute[248510]: 2025-12-13 09:35:53.607 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:35:53 np0005558241 nova_compute[248510]: 2025-12-13 09:35:53.608 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:35:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:35:55.471 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:35:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:35:55.472 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:35:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:35:55.472 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:35:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4055: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:55 np0005558241 nova_compute[248510]: 2025-12-13 09:35:55.642 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:55 np0005558241 nova_compute[248510]: 2025-12-13 09:35:55.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:35:56 np0005558241 nova_compute[248510]: 2025-12-13 09:35:56.671 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:35:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4056: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:57 np0005558241 nova_compute[248510]: 2025-12-13 09:35:57.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:35:57 np0005558241 nova_compute[248510]: 2025-12-13 09:35:57.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:35:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4057: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:00 np0005558241 nova_compute[248510]: 2025-12-13 09:36:00.644 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4058: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:01 np0005558241 nova_compute[248510]: 2025-12-13 09:36:01.674 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:01 np0005558241 podman[420271]: time="2025-12-13T09:36:01Z" level=info msg="Received shutdown.Stop(), terminating!" PID=420271
Dec 13 04:36:01 np0005558241 systemd[1]: podman.service: Deactivated successfully.
Dec 13 04:36:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4059: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:36:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4060: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:05 np0005558241 nova_compute[248510]: 2025-12-13 09:36:05.705 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:06 np0005558241 nova_compute[248510]: 2025-12-13 09:36:06.677 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:06 np0005558241 nova_compute[248510]: 2025-12-13 09:36:06.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:36:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4061: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:36:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:36:09
Dec 13 04:36:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:36:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:36:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.meta', '.mgr', 'images', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'vms']
Dec 13 04:36:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:36:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4062: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:09 np0005558241 podman[420355]: 2025-12-13 09:36:09.986331974 +0000 UTC m=+0.075994650 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:36:10 np0005558241 podman[420354]: 2025-12-13 09:36:10.015274931 +0000 UTC m=+0.104494966 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:36:10 np0005558241 podman[420393]: 2025-12-13 09:36:10.109567599 +0000 UTC m=+0.094657488 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:36:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:36:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:36:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:36:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:36:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:36:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:36:10 np0005558241 nova_compute[248510]: 2025-12-13 09:36:10.707 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:36:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:36:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:36:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:36:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:36:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:36:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:36:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:36:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:36:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:36:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4063: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:36:11.679 158419 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:79:43', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ba:92:66:49:01:1b'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 13 04:36:11 np0005558241 nova_compute[248510]: 2025-12-13 09:36:11.679 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:11 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:36:11.681 158419 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 13 04:36:12 np0005558241 systemd[1]: session-56.scope: Deactivated successfully.
Dec 13 04:36:12 np0005558241 systemd-logind[787]: Session 56 logged out. Waiting for processes to exit.
Dec 13 04:36:12 np0005558241 systemd-logind[787]: Removed session 56.
Dec 13 04:36:12 np0005558241 systemd[1]: session-57.scope: Deactivated successfully.
Dec 13 04:36:12 np0005558241 systemd[1]: session-57.scope: Consumed 1.395s CPU time.
Dec 13 04:36:12 np0005558241 systemd-logind[787]: Session 57 logged out. Waiting for processes to exit.
Dec 13 04:36:12 np0005558241 systemd-logind[787]: Removed session 57.
Dec 13 04:36:13 np0005558241 systemd[1]: session-58.scope: Deactivated successfully.
Dec 13 04:36:13 np0005558241 systemd-logind[787]: Session 58 logged out. Waiting for processes to exit.
Dec 13 04:36:13 np0005558241 systemd-logind[787]: Removed session 58.
Dec 13 04:36:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4064: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:36:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:36:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/770163602' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:36:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:36:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/770163602' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:36:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4065: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:15 np0005558241 nova_compute[248510]: 2025-12-13 09:36:15.711 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:16 np0005558241 nova_compute[248510]: 2025-12-13 09:36:16.682 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4066: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:36:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4067: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:19 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:36:19.685 158419 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=0c490016-f399-44ae-a985-d9ff6e29d8d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 13 04:36:19 np0005558241 nova_compute[248510]: 2025-12-13 09:36:19.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:36:19 np0005558241 nova_compute[248510]: 2025-12-13 09:36:19.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 13 04:36:20 np0005558241 nova_compute[248510]: 2025-12-13 09:36:20.714 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4068: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:21 np0005558241 nova_compute[248510]: 2025-12-13 09:36:21.685 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5261754351853937e-05 of space, bias 1.0, pg target 0.004578526305556181 quantized to 32 (current 32)
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697878223102975 of space, bias 1.0, pg target 0.20093634669308924 quantized to 32 (current 32)
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.64600408034546e-07 of space, bias 4.0, pg target 0.0006775204896414552 quantized to 16 (current 32)
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:36:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:36:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4069: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:36:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4070: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:25 np0005558241 nova_compute[248510]: 2025-12-13 09:36:25.716 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:25 np0005558241 nova_compute[248510]: 2025-12-13 09:36:25.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:36:26 np0005558241 nova_compute[248510]: 2025-12-13 09:36:26.688 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4071: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:36:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4072: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:30 np0005558241 nova_compute[248510]: 2025-12-13 09:36:30.718 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4073: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:31 np0005558241 nova_compute[248510]: 2025-12-13 09:36:31.691 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4074: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:36:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4075: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:35 np0005558241 nova_compute[248510]: 2025-12-13 09:36:35.719 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:36 np0005558241 nova_compute[248510]: 2025-12-13 09:36:36.694 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4076: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:36:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4077: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:39 np0005558241 nova_compute[248510]: 2025-12-13 09:36:39.788 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:36:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:36:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:36:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:36:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:36:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:36:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:36:40 np0005558241 nova_compute[248510]: 2025-12-13 09:36:40.721 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:40 np0005558241 podman[420472]: 2025-12-13 09:36:40.995087136 +0000 UTC m=+0.072549472 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 13 04:36:41 np0005558241 podman[420471]: 2025-12-13 09:36:41.001814566 +0000 UTC m=+0.084304789 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 13 04:36:41 np0005558241 podman[420470]: 2025-12-13 09:36:41.057538155 +0000 UTC m=+0.135601016 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:36:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4078: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:41 np0005558241 nova_compute[248510]: 2025-12-13 09:36:41.696 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4079: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:36:44 np0005558241 nova_compute[248510]: 2025-12-13 09:36:44.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:36:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4080: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:45 np0005558241 nova_compute[248510]: 2025-12-13 09:36:45.761 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:46 np0005558241 nova_compute[248510]: 2025-12-13 09:36:46.698 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4081: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:36:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:36:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:36:48 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:36:48 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:36:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:36:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:36:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:36:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:36:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:36:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:36:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:36:49 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:36:49 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:36:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4082: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:49 np0005558241 podman[420677]: 2025-12-13 09:36:49.668759539 +0000 UTC m=+0.022110777 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:36:50 np0005558241 podman[420677]: 2025-12-13 09:36:50.164868778 +0000 UTC m=+0.518220016 container create 5f657c88933d4306056491f9d42142f250a996e322e0cd0966dfc1630977ba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 04:36:50 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:36:50 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:36:50 np0005558241 systemd[1]: Started libpod-conmon-5f657c88933d4306056491f9d42142f250a996e322e0cd0966dfc1630977ba63.scope.
Dec 13 04:36:50 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:36:50 np0005558241 nova_compute[248510]: 2025-12-13 09:36:50.763 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:50 np0005558241 nova_compute[248510]: 2025-12-13 09:36:50.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:36:50 np0005558241 nova_compute[248510]: 2025-12-13 09:36:50.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:36:50 np0005558241 nova_compute[248510]: 2025-12-13 09:36:50.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:36:50 np0005558241 nova_compute[248510]: 2025-12-13 09:36:50.793 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:36:51 np0005558241 podman[420677]: 2025-12-13 09:36:51.049288219 +0000 UTC m=+1.402639447 container init 5f657c88933d4306056491f9d42142f250a996e322e0cd0966dfc1630977ba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hamilton, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 04:36:51 np0005558241 podman[420677]: 2025-12-13 09:36:51.057269779 +0000 UTC m=+1.410620997 container start 5f657c88933d4306056491f9d42142f250a996e322e0cd0966dfc1630977ba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 04:36:51 np0005558241 xenodochial_hamilton[420693]: 167 167
Dec 13 04:36:51 np0005558241 systemd[1]: libpod-5f657c88933d4306056491f9d42142f250a996e322e0cd0966dfc1630977ba63.scope: Deactivated successfully.
Dec 13 04:36:51 np0005558241 podman[420677]: 2025-12-13 09:36:51.291593174 +0000 UTC m=+1.644944452 container attach 5f657c88933d4306056491f9d42142f250a996e322e0cd0966dfc1630977ba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hamilton, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:36:51 np0005558241 podman[420677]: 2025-12-13 09:36:51.293115762 +0000 UTC m=+1.646466980 container died 5f657c88933d4306056491f9d42142f250a996e322e0cd0966dfc1630977ba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:36:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4083: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:51 np0005558241 nova_compute[248510]: 2025-12-13 09:36:51.700 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:36:51 np0005558241 nova_compute[248510]: 2025-12-13 09:36:51.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:36:51 np0005558241 nova_compute[248510]: 2025-12-13 09:36:51.815 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:36:51 np0005558241 nova_compute[248510]: 2025-12-13 09:36:51.816 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:36:51 np0005558241 nova_compute[248510]: 2025-12-13 09:36:51.817 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:36:51 np0005558241 nova_compute[248510]: 2025-12-13 09:36:51.817 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:36:51 np0005558241 nova_compute[248510]: 2025-12-13 09:36:51.818 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:36:51 np0005558241 systemd[1]: var-lib-containers-storage-overlay-6ae9d182c58776983a0943aef46782920f653b5ec12d1addd446f51b8bb3878b-merged.mount: Deactivated successfully.
Dec 13 04:36:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:36:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/840352053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:36:52 np0005558241 nova_compute[248510]: 2025-12-13 09:36:52.507 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.688s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:36:52 np0005558241 nova_compute[248510]: 2025-12-13 09:36:52.678 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:36:52 np0005558241 nova_compute[248510]: 2025-12-13 09:36:52.680 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3459MB free_disk=59.98736572358757GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:36:52 np0005558241 nova_compute[248510]: 2025-12-13 09:36:52.680 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:36:52 np0005558241 nova_compute[248510]: 2025-12-13 09:36:52.680 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:36:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4084: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:36:54 np0005558241 podman[420677]: 2025-12-13 09:36:54.468743946 +0000 UTC m=+4.822095154 container remove 5f657c88933d4306056491f9d42142f250a996e322e0cd0966dfc1630977ba63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:36:54 np0005558241 systemd[1]: libpod-conmon-5f657c88933d4306056491f9d42142f250a996e322e0cd0966dfc1630977ba63.scope: Deactivated successfully.
Dec 13 04:36:54 np0005558241 nova_compute[248510]: 2025-12-13 09:36:54.570 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:36:54 np0005558241 nova_compute[248510]: 2025-12-13 09:36:54.571 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:36:54 np0005558241 nova_compute[248510]: 2025-12-13 09:36:54.600 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:36:54 np0005558241 podman[420740]: 2025-12-13 09:36:54.646807098 +0000 UTC m=+0.034049426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:36:54 np0005558241 podman[420740]: 2025-12-13 09:36:54.871809449 +0000 UTC m=+0.259051807 container create 00e7b100cc4e6c02c10fbe452f7bca065fd7b048dc4d4df9b9984cca3c0f7c2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 04:36:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:36:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3246048544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:36:55 np0005558241 nova_compute[248510]: 2025-12-13 09:36:55.141 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:36:55 np0005558241 nova_compute[248510]: 2025-12-13 09:36:55.147 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:36:55 np0005558241 nova_compute[248510]: 2025-12-13 09:36:55.191 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:36:55 np0005558241 nova_compute[248510]: 2025-12-13 09:36:55.193 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:36:55 np0005558241 nova_compute[248510]: 2025-12-13 09:36:55.193 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:36:55 np0005558241 nova_compute[248510]: 2025-12-13 09:36:55.193 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:36:55 np0005558241 nova_compute[248510]: 2025-12-13 09:36:55.194 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 13 04:36:55 np0005558241 nova_compute[248510]: 2025-12-13 09:36:55.223 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 13 04:36:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:36:55.472 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:36:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:36:55.472 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:36:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:36:55.472 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:36:55 np0005558241 systemd[1]: Started libpod-conmon-00e7b100cc4e6c02c10fbe452f7bca065fd7b048dc4d4df9b9984cca3c0f7c2f.scope.
Dec 13 04:36:55 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:36:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd630312589d63a904072563c83ba91da0b9184f94f498291a3764f1b0ec04b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:36:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd630312589d63a904072563c83ba91da0b9184f94f498291a3764f1b0ec04b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:36:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd630312589d63a904072563c83ba91da0b9184f94f498291a3764f1b0ec04b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:36:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd630312589d63a904072563c83ba91da0b9184f94f498291a3764f1b0ec04b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:36:55 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd630312589d63a904072563c83ba91da0b9184f94f498291a3764f1b0ec04b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:36:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4085: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:55 np0005558241 nova_compute[248510]: 2025-12-13 09:36:55.765 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:36:55 np0005558241 podman[420740]: 2025-12-13 09:36:55.769932545 +0000 UTC m=+1.157174903 container init 00e7b100cc4e6c02c10fbe452f7bca065fd7b048dc4d4df9b9984cca3c0f7c2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wright, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:36:55 np0005558241 podman[420740]: 2025-12-13 09:36:55.780750806 +0000 UTC m=+1.167993124 container start 00e7b100cc4e6c02c10fbe452f7bca065fd7b048dc4d4df9b9984cca3c0f7c2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wright, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 04:36:55 np0005558241 podman[420740]: 2025-12-13 09:36:55.856046367 +0000 UTC m=+1.243288675 container attach 00e7b100cc4e6c02c10fbe452f7bca065fd7b048dc4d4df9b9984cca3c0f7c2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wright, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 04:36:56 np0005558241 nova_compute[248510]: 2025-12-13 09:36:56.223 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:36:56 np0005558241 nova_compute[248510]: 2025-12-13 09:36:56.224 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:36:56 np0005558241 quizzical_wright[420778]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:36:56 np0005558241 quizzical_wright[420778]: --> All data devices are unavailable
Dec 13 04:36:56 np0005558241 systemd[1]: libpod-00e7b100cc4e6c02c10fbe452f7bca065fd7b048dc4d4df9b9984cca3c0f7c2f.scope: Deactivated successfully.
Dec 13 04:36:56 np0005558241 podman[420740]: 2025-12-13 09:36:56.308969472 +0000 UTC m=+1.696211800 container died 00e7b100cc4e6c02c10fbe452f7bca065fd7b048dc4d4df9b9984cca3c0f7c2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wright, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 04:36:56 np0005558241 nova_compute[248510]: 2025-12-13 09:36:56.704 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:36:56 np0005558241 nova_compute[248510]: 2025-12-13 09:36:56.775 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:36:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4086: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:57 np0005558241 systemd[1]: var-lib-containers-storage-overlay-edd630312589d63a904072563c83ba91da0b9184f94f498291a3764f1b0ec04b-merged.mount: Deactivated successfully.
Dec 13 04:36:58 np0005558241 nova_compute[248510]: 2025-12-13 09:36:58.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:36:58 np0005558241 nova_compute[248510]: 2025-12-13 09:36:58.771 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:36:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4087: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:00 np0005558241 podman[420740]: 2025-12-13 09:37:00.517426287 +0000 UTC m=+5.904668605 container remove 00e7b100cc4e6c02c10fbe452f7bca065fd7b048dc4d4df9b9984cca3c0f7c2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wright, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:37:00 np0005558241 systemd[1]: libpod-conmon-00e7b100cc4e6c02c10fbe452f7bca065fd7b048dc4d4df9b9984cca3c0f7c2f.scope: Deactivated successfully.
Dec 13 04:37:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:37:00 np0005558241 nova_compute[248510]: 2025-12-13 09:37:00.767 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:37:01 np0005558241 podman[420873]: 2025-12-13 09:37:01.059255814 +0000 UTC m=+0.050588011 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:37:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4088: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:01 np0005558241 nova_compute[248510]: 2025-12-13 09:37:01.707 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:37:01 np0005558241 podman[420873]: 2025-12-13 09:37:01.721285729 +0000 UTC m=+0.712617936 container create 1f477a74f2e6f962a1243b4be5750993daadfeedcbda5c2ce0118c7c67a400d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 04:37:02 np0005558241 systemd[1]: Started libpod-conmon-1f477a74f2e6f962a1243b4be5750993daadfeedcbda5c2ce0118c7c67a400d0.scope.
Dec 13 04:37:02 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:37:02 np0005558241 podman[420873]: 2025-12-13 09:37:02.700706757 +0000 UTC m=+1.692038944 container init 1f477a74f2e6f962a1243b4be5750993daadfeedcbda5c2ce0118c7c67a400d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_euler, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:37:02 np0005558241 podman[420873]: 2025-12-13 09:37:02.714855082 +0000 UTC m=+1.706187289 container start 1f477a74f2e6f962a1243b4be5750993daadfeedcbda5c2ce0118c7c67a400d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 04:37:02 np0005558241 modest_euler[420889]: 167 167
Dec 13 04:37:02 np0005558241 systemd[1]: libpod-1f477a74f2e6f962a1243b4be5750993daadfeedcbda5c2ce0118c7c67a400d0.scope: Deactivated successfully.
Dec 13 04:37:03 np0005558241 podman[420873]: 2025-12-13 09:37:03.139717392 +0000 UTC m=+2.131049609 container attach 1f477a74f2e6f962a1243b4be5750993daadfeedcbda5c2ce0118c7c67a400d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_euler, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 04:37:03 np0005558241 podman[420873]: 2025-12-13 09:37:03.141483997 +0000 UTC m=+2.132816184 container died 1f477a74f2e6f962a1243b4be5750993daadfeedcbda5c2ce0118c7c67a400d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_euler, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 04:37:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4089: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:04 np0005558241 systemd[1]: var-lib-containers-storage-overlay-59b11ec34ec2eb0a043c10a6c1ae272e0e33499e46977869a251437bf28e51a0-merged.mount: Deactivated successfully.
Dec 13 04:37:05 np0005558241 podman[420873]: 2025-12-13 09:37:05.489657388 +0000 UTC m=+4.480989575 container remove 1f477a74f2e6f962a1243b4be5750993daadfeedcbda5c2ce0118c7c67a400d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_euler, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:37:05 np0005558241 systemd[1]: libpod-conmon-1f477a74f2e6f962a1243b4be5750993daadfeedcbda5c2ce0118c7c67a400d0.scope: Deactivated successfully.
Dec 13 04:37:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4090: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:37:05 np0005558241 podman[420913]: 2025-12-13 09:37:05.643896692 +0000 UTC m=+0.024957758 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:37:05 np0005558241 nova_compute[248510]: 2025-12-13 09:37:05.770 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:37:06 np0005558241 podman[420913]: 2025-12-13 09:37:06.348062057 +0000 UTC m=+0.729123133 container create 4aee9d609cd90dd28423f4d62f7c4f07feb26fc62f1d9629da3adf9868d2e0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kare, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 04:37:06 np0005558241 systemd[1]: Started libpod-conmon-4aee9d609cd90dd28423f4d62f7c4f07feb26fc62f1d9629da3adf9868d2e0d8.scope.
Dec 13 04:37:06 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:37:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3efa91cf9d364bc306fd53b35b000a5f1327e01f23d37f6954e84d19f2bd0156/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:37:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3efa91cf9d364bc306fd53b35b000a5f1327e01f23d37f6954e84d19f2bd0156/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:37:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3efa91cf9d364bc306fd53b35b000a5f1327e01f23d37f6954e84d19f2bd0156/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:37:06 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3efa91cf9d364bc306fd53b35b000a5f1327e01f23d37f6954e84d19f2bd0156/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:37:06 np0005558241 nova_compute[248510]: 2025-12-13 09:37:06.710 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:37:06 np0005558241 nova_compute[248510]: 2025-12-13 09:37:06.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:37:06 np0005558241 podman[420913]: 2025-12-13 09:37:06.935871589 +0000 UTC m=+1.316932675 container init 4aee9d609cd90dd28423f4d62f7c4f07feb26fc62f1d9629da3adf9868d2e0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 04:37:06 np0005558241 podman[420913]: 2025-12-13 09:37:06.945664885 +0000 UTC m=+1.326725941 container start 4aee9d609cd90dd28423f4d62f7c4f07feb26fc62f1d9629da3adf9868d2e0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 04:37:07 np0005558241 determined_kare[420930]: {
Dec 13 04:37:07 np0005558241 determined_kare[420930]:    "0": [
Dec 13 04:37:07 np0005558241 determined_kare[420930]:        {
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "devices": [
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "/dev/loop3"
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            ],
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "lv_name": "ceph_lv0",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "lv_size": "21470642176",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "name": "ceph_lv0",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "tags": {
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.cluster_name": "ceph",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.crush_device_class": "",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.encrypted": "0",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.objectstore": "bluestore",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.osd_id": "0",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.type": "block",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.vdo": "0",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.with_tpm": "0"
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            },
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "type": "block",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "vg_name": "ceph_vg0"
Dec 13 04:37:07 np0005558241 determined_kare[420930]:        }
Dec 13 04:37:07 np0005558241 determined_kare[420930]:    ],
Dec 13 04:37:07 np0005558241 determined_kare[420930]:    "1": [
Dec 13 04:37:07 np0005558241 determined_kare[420930]:        {
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "devices": [
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "/dev/loop4"
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            ],
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "lv_name": "ceph_lv1",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "lv_size": "21470642176",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "name": "ceph_lv1",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "tags": {
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.cluster_name": "ceph",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.crush_device_class": "",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.encrypted": "0",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.objectstore": "bluestore",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.osd_id": "1",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.type": "block",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.vdo": "0",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.with_tpm": "0"
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            },
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "type": "block",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "vg_name": "ceph_vg1"
Dec 13 04:37:07 np0005558241 determined_kare[420930]:        }
Dec 13 04:37:07 np0005558241 determined_kare[420930]:    ],
Dec 13 04:37:07 np0005558241 determined_kare[420930]:    "2": [
Dec 13 04:37:07 np0005558241 determined_kare[420930]:        {
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "devices": [
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "/dev/loop5"
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            ],
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "lv_name": "ceph_lv2",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "lv_size": "21470642176",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "name": "ceph_lv2",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "tags": {
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.cluster_name": "ceph",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.crush_device_class": "",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.encrypted": "0",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.objectstore": "bluestore",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.osd_id": "2",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.type": "block",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.vdo": "0",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:                "ceph.with_tpm": "0"
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            },
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "type": "block",
Dec 13 04:37:07 np0005558241 determined_kare[420930]:            "vg_name": "ceph_vg2"
Dec 13 04:37:07 np0005558241 determined_kare[420930]:        }
Dec 13 04:37:07 np0005558241 determined_kare[420930]:    ]
Dec 13 04:37:07 np0005558241 determined_kare[420930]: }
Dec 13 04:37:07 np0005558241 systemd[1]: libpod-4aee9d609cd90dd28423f4d62f7c4f07feb26fc62f1d9629da3adf9868d2e0d8.scope: Deactivated successfully.
Dec 13 04:37:07 np0005558241 podman[420913]: 2025-12-13 09:37:07.453345835 +0000 UTC m=+1.834406931 container attach 4aee9d609cd90dd28423f4d62f7c4f07feb26fc62f1d9629da3adf9868d2e0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kare, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 04:37:07 np0005558241 podman[420913]: 2025-12-13 09:37:07.45473325 +0000 UTC m=+1.835794316 container died 4aee9d609cd90dd28423f4d62f7c4f07feb26fc62f1d9629da3adf9868d2e0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kare, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:37:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4091: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:08 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3efa91cf9d364bc306fd53b35b000a5f1327e01f23d37f6954e84d19f2bd0156-merged.mount: Deactivated successfully.
Dec 13 04:37:09 np0005558241 podman[420913]: 2025-12-13 09:37:09.058760193 +0000 UTC m=+3.439821259 container remove 4aee9d609cd90dd28423f4d62f7c4f07feb26fc62f1d9629da3adf9868d2e0d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 04:37:09 np0005558241 systemd[1]: libpod-conmon-4aee9d609cd90dd28423f4d62f7c4f07feb26fc62f1d9629da3adf9868d2e0d8.scope: Deactivated successfully.
Dec 13 04:37:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:37:09
Dec 13 04:37:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:37:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:37:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'volumes', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta']
Dec 13 04:37:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:37:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4092: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:09 np0005558241 podman[421013]: 2025-12-13 09:37:09.549908017 +0000 UTC m=+0.023066930 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:37:09 np0005558241 podman[421013]: 2025-12-13 09:37:09.696751365 +0000 UTC m=+0.169910248 container create 81cd167a60653b6870221e67a82fa2c7e980c2c9d9c7cec5ad6e17619297e3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:37:10 np0005558241 systemd[1]: Started libpod-conmon-81cd167a60653b6870221e67a82fa2c7e980c2c9d9c7cec5ad6e17619297e3a7.scope.
Dec 13 04:37:10 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:37:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:37:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:37:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:37:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:37:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:37:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:37:10 np0005558241 podman[421013]: 2025-12-13 09:37:10.241133127 +0000 UTC m=+0.714292060 container init 81cd167a60653b6870221e67a82fa2c7e980c2c9d9c7cec5ad6e17619297e3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:37:10 np0005558241 podman[421013]: 2025-12-13 09:37:10.252727308 +0000 UTC m=+0.725886231 container start 81cd167a60653b6870221e67a82fa2c7e980c2c9d9c7cec5ad6e17619297e3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_noyce, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 04:37:10 np0005558241 focused_noyce[421030]: 167 167
Dec 13 04:37:10 np0005558241 systemd[1]: libpod-81cd167a60653b6870221e67a82fa2c7e980c2c9d9c7cec5ad6e17619297e3a7.scope: Deactivated successfully.
Dec 13 04:37:10 np0005558241 podman[421013]: 2025-12-13 09:37:10.734917828 +0000 UTC m=+1.208076731 container attach 81cd167a60653b6870221e67a82fa2c7e980c2c9d9c7cec5ad6e17619297e3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_noyce, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:37:10 np0005558241 podman[421013]: 2025-12-13 09:37:10.737180695 +0000 UTC m=+1.210339668 container died 81cd167a60653b6870221e67a82fa2c7e980c2c9d9c7cec5ad6e17619297e3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 04:37:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:37:10 np0005558241 nova_compute[248510]: 2025-12-13 09:37:10.771 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:37:11 np0005558241 systemd[1]: var-lib-containers-storage-overlay-40660fe34a546203efd4000d6d2cfdf16e469071e2dfe88ba0c785fa1de35cbe-merged.mount: Deactivated successfully.
Dec 13 04:37:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:37:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:37:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:37:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:37:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:37:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:37:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:37:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:37:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:37:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:37:11 np0005558241 podman[421013]: 2025-12-13 09:37:11.305724514 +0000 UTC m=+1.778883427 container remove 81cd167a60653b6870221e67a82fa2c7e980c2c9d9c7cec5ad6e17619297e3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 04:37:11 np0005558241 systemd[1]: libpod-conmon-81cd167a60653b6870221e67a82fa2c7e980c2c9d9c7cec5ad6e17619297e3a7.scope: Deactivated successfully.
Dec 13 04:37:11 np0005558241 podman[421051]: 2025-12-13 09:37:11.405014257 +0000 UTC m=+0.263912259 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 13 04:37:11 np0005558241 podman[421050]: 2025-12-13 09:37:11.411470159 +0000 UTC m=+0.275981692 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, managed_by=edpm_ansible)
Dec 13 04:37:11 np0005558241 podman[421049]: 2025-12-13 09:37:11.443406201 +0000 UTC m=+0.308720814 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:37:11 np0005558241 podman[421115]: 2025-12-13 09:37:11.527292268 +0000 UTC m=+0.030270661 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:37:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4093: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:11 np0005558241 nova_compute[248510]: 2025-12-13 09:37:11.712 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:37:12 np0005558241 podman[421115]: 2025-12-13 09:37:12.143498772 +0000 UTC m=+0.646477135 container create 00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_banach, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:37:12 np0005558241 systemd[1]: Started libpod-conmon-00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d.scope.
Dec 13 04:37:12 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:37:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee970df4c9b30006f1ee9cfa83ed6e578119e09ed35a9fe197b1559b3d79f659/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:37:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee970df4c9b30006f1ee9cfa83ed6e578119e09ed35a9fe197b1559b3d79f659/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:37:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee970df4c9b30006f1ee9cfa83ed6e578119e09ed35a9fe197b1559b3d79f659/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:37:12 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee970df4c9b30006f1ee9cfa83ed6e578119e09ed35a9fe197b1559b3d79f659/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:37:13 np0005558241 podman[421115]: 2025-12-13 09:37:13.118317954 +0000 UTC m=+1.621296357 container init 00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 04:37:13 np0005558241 podman[421115]: 2025-12-13 09:37:13.132027789 +0000 UTC m=+1.635006182 container start 00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_banach, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 04:37:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4094: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:13 np0005558241 podman[421115]: 2025-12-13 09:37:13.891432061 +0000 UTC m=+2.394410504 container attach 00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:37:13 np0005558241 lvm[421210]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:37:13 np0005558241 lvm[421211]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:37:13 np0005558241 lvm[421210]: VG ceph_vg0 finished
Dec 13 04:37:13 np0005558241 lvm[421211]: VG ceph_vg1 finished
Dec 13 04:37:13 np0005558241 lvm[421213]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:37:13 np0005558241 lvm[421213]: VG ceph_vg2 finished
Dec 13 04:37:14 np0005558241 lvm[421214]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:37:14 np0005558241 lvm[421214]: VG ceph_vg2 finished
Dec 13 04:37:14 np0005558241 jovial_banach[421132]: {}
Dec 13 04:37:14 np0005558241 lvm[421217]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:37:14 np0005558241 lvm[421217]: VG ceph_vg2 finished
Dec 13 04:37:14 np0005558241 systemd[1]: libpod-00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d.scope: Deactivated successfully.
Dec 13 04:37:14 np0005558241 podman[421115]: 2025-12-13 09:37:14.110987675 +0000 UTC m=+2.613966028 container died 00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_banach, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 04:37:14 np0005558241 systemd[1]: libpod-00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d.scope: Consumed 1.618s CPU time.
Dec 13 04:37:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:37:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1998377532' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:37:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:37:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1998377532' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:37:15 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ee970df4c9b30006f1ee9cfa83ed6e578119e09ed35a9fe197b1559b3d79f659-merged.mount: Deactivated successfully.
Dec 13 04:37:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4095: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:15 np0005558241 podman[421115]: 2025-12-13 09:37:15.715264624 +0000 UTC m=+4.218242987 container remove 00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_banach, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Dec 13 04:37:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:37:15 np0005558241 nova_compute[248510]: 2025-12-13 09:37:15.773 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:37:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:37:15 np0005558241 systemd[1]: libpod-conmon-00ea60b608fc82c96ff380b4b8c7341951ce603ce69fbb8a58217266908e3f5d.scope: Deactivated successfully.
Dec 13 04:37:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:37:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:37:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:37:16 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:37:16 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:37:16 np0005558241 nova_compute[248510]: 2025-12-13 09:37:16.715 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:37:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4096: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4097: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:20 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:37:20 np0005558241 nova_compute[248510]: 2025-12-13 09:37:20.775 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4098: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:21 np0005558241 nova_compute[248510]: 2025-12-13 09:37:21.781 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5261754351853937e-05 of space, bias 1.0, pg target 0.004578526305556181 quantized to 32 (current 32)
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697878223102975 of space, bias 1.0, pg target 0.20093634669308924 quantized to 32 (current 32)
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.64600408034546e-07 of space, bias 4.0, pg target 0.0006775204896414552 quantized to 16 (current 32)
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:37:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:37:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4099: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4100: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:25 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:37:25 np0005558241 nova_compute[248510]: 2025-12-13 09:37:25.777 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:37:26 np0005558241 nova_compute[248510]: 2025-12-13 09:37:26.784 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:37:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4101: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4102: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:30 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:37:30 np0005558241 nova_compute[248510]: 2025-12-13 09:37:30.779 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:37:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4103: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:31 np0005558241 nova_compute[248510]: 2025-12-13 09:37:31.786 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:37:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4104: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4105: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:35 np0005558241 nova_compute[248510]: 2025-12-13 09:37:35.768 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:37:35 np0005558241 nova_compute[248510]: 2025-12-13 09:37:35.781 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:37:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:37:36 np0005558241 nova_compute[248510]: 2025-12-13 09:37:36.789 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:37:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4106: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.701333) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618657701391, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 1288, "num_deletes": 251, "total_data_size": 2169617, "memory_usage": 2204736, "flush_reason": "Manual Compaction"}
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618657719331, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 2120720, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80854, "largest_seqno": 82141, "table_properties": {"data_size": 2114457, "index_size": 3525, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12736, "raw_average_key_size": 19, "raw_value_size": 2102129, "raw_average_value_size": 3269, "num_data_blocks": 158, "num_entries": 643, "num_filter_entries": 643, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765618518, "oldest_key_time": 1765618518, "file_creation_time": 1765618657, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 18093 microseconds, and 6566 cpu microseconds.
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.719419) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 2120720 bytes OK
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.719462) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.722746) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.722781) EVENT_LOG_v1 {"time_micros": 1765618657722770, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.722814) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 2163830, prev total WAL file size 2163830, number of live WAL files 2.
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.724169) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(2071KB)], [194(10MB)]
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618657724244, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 13397828, "oldest_snapshot_seqno": -1}
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 9736 keys, 11547202 bytes, temperature: kUnknown
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618657838444, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 11547202, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11486370, "index_size": 35375, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24389, "raw_key_size": 257272, "raw_average_key_size": 26, "raw_value_size": 11316988, "raw_average_value_size": 1162, "num_data_blocks": 1355, "num_entries": 9736, "num_filter_entries": 9736, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765618657, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.838888) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 11547202 bytes
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.841473) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.2 rd, 101.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 10.8 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(11.8) write-amplify(5.4) OK, records in: 10250, records dropped: 514 output_compression: NoCompression
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.841497) EVENT_LOG_v1 {"time_micros": 1765618657841485, "job": 122, "event": "compaction_finished", "compaction_time_micros": 114333, "compaction_time_cpu_micros": 38319, "output_level": 6, "num_output_files": 1, "total_output_size": 11547202, "num_input_records": 10250, "num_output_records": 9736, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618657842032, "job": 122, "event": "table_file_deletion", "file_number": 196}
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618657844462, "job": 122, "event": "table_file_deletion", "file_number": 194}
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.724044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.844545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.844550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.844552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.844554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:37:37 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:37:37.844555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:37:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4107: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:37:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:37:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:37:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:37:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:37:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:37:40 np0005558241 nova_compute[248510]: 2025-12-13 09:37:40.784 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:37:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:37:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4108: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:41 np0005558241 nova_compute[248510]: 2025-12-13 09:37:41.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:37:41 np0005558241 nova_compute[248510]: 2025-12-13 09:37:41.792 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:37:41 np0005558241 podman[421258]: 2025-12-13 09:37:41.994616295 +0000 UTC m=+0.074215555 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 04:37:42 np0005558241 podman[421259]: 2025-12-13 09:37:42.017745716 +0000 UTC m=+0.085304194 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 04:37:42 np0005558241 podman[421257]: 2025-12-13 09:37:42.053288108 +0000 UTC m=+0.135828982 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:37:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4109: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4110: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:45 np0005558241 nova_compute[248510]: 2025-12-13 09:37:45.787 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:37:45 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:37:46 np0005558241 nova_compute[248510]: 2025-12-13 09:37:46.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:37:46 np0005558241 nova_compute[248510]: 2025-12-13 09:37:46.796 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:37:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4111: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4112: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:50 np0005558241 nova_compute[248510]: 2025-12-13 09:37:50.808 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:37:50 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:37:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4113: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:51 np0005558241 nova_compute[248510]: 2025-12-13 09:37:51.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:37:51 np0005558241 nova_compute[248510]: 2025-12-13 09:37:51.798 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:37:51 np0005558241 nova_compute[248510]: 2025-12-13 09:37:51.806 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:37:51 np0005558241 nova_compute[248510]: 2025-12-13 09:37:51.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:37:51 np0005558241 nova_compute[248510]: 2025-12-13 09:37:51.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:37:51 np0005558241 nova_compute[248510]: 2025-12-13 09:37:51.807 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:37:51 np0005558241 nova_compute[248510]: 2025-12-13 09:37:51.808 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:37:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:37:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1677801049' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:37:52 np0005558241 nova_compute[248510]: 2025-12-13 09:37:52.407 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:37:52 np0005558241 nova_compute[248510]: 2025-12-13 09:37:52.607 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:37:52 np0005558241 nova_compute[248510]: 2025-12-13 09:37:52.609 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3509MB free_disk=59.98736572358757GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:37:52 np0005558241 nova_compute[248510]: 2025-12-13 09:37:52.609 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:37:52 np0005558241 nova_compute[248510]: 2025-12-13 09:37:52.609 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:37:52 np0005558241 nova_compute[248510]: 2025-12-13 09:37:52.876 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:37:52 np0005558241 nova_compute[248510]: 2025-12-13 09:37:52.876 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:37:52 np0005558241 nova_compute[248510]: 2025-12-13 09:37:52.908 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing inventories for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 04:37:52 np0005558241 nova_compute[248510]: 2025-12-13 09:37:52.943 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating ProviderTree inventory for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 04:37:52 np0005558241 nova_compute[248510]: 2025-12-13 09:37:52.944 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Updating inventory in ProviderTree for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 04:37:52 np0005558241 nova_compute[248510]: 2025-12-13 09:37:52.980 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing aggregate associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 04:37:53 np0005558241 nova_compute[248510]: 2025-12-13 09:37:53.027 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Refreshing trait associations for resource provider f581abbc-7737-4dd5-b64e-9a4263571fb3, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE4A,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SVM,HW_CPU_X86_F16C,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec 13 04:37:53 np0005558241 nova_compute[248510]: 2025-12-13 09:37:53.049 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:37:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4114: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:37:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/699968712' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:37:53 np0005558241 nova_compute[248510]: 2025-12-13 09:37:53.759 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.710s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:37:53 np0005558241 nova_compute[248510]: 2025-12-13 09:37:53.765 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:37:53 np0005558241 nova_compute[248510]: 2025-12-13 09:37:53.800 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:37:53 np0005558241 nova_compute[248510]: 2025-12-13 09:37:53.802 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:37:53 np0005558241 nova_compute[248510]: 2025-12-13 09:37:53.802 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:37:54 np0005558241 nova_compute[248510]: 2025-12-13 09:37:54.803 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:37:54 np0005558241 nova_compute[248510]: 2025-12-13 09:37:54.804 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:37:54 np0005558241 nova_compute[248510]: 2025-12-13 09:37:54.804 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:37:54 np0005558241 nova_compute[248510]: 2025-12-13 09:37:54.872 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:37:54 np0005558241 nova_compute[248510]: 2025-12-13 09:37:54.873 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:37:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:37:55.473 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:37:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:37:55.473 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:37:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:37:55.473 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:37:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4115: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:55 np0005558241 nova_compute[248510]: 2025-12-13 09:37:55.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:37:55 np0005558241 nova_compute[248510]: 2025-12-13 09:37:55.810 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:37:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:37:56 np0005558241 nova_compute[248510]: 2025-12-13 09:37:56.801 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:37:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4116: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:37:58 np0005558241 nova_compute[248510]: 2025-12-13 09:37:58.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:37:58 np0005558241 nova_compute[248510]: 2025-12-13 09:37:58.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:37:58 np0005558241 nova_compute[248510]: 2025-12-13 09:37:58.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:37:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4117: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:00 np0005558241 nova_compute[248510]: 2025-12-13 09:38:00.812 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:38:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:38:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4118: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:01 np0005558241 nova_compute[248510]: 2025-12-13 09:38:01.803 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:38:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4119: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4120: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:05 np0005558241 nova_compute[248510]: 2025-12-13 09:38:05.814 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:38:05 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:38:06 np0005558241 nova_compute[248510]: 2025-12-13 09:38:06.807 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:38:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4121: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:07 np0005558241 nova_compute[248510]: 2025-12-13 09:38:07.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:38:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:38:09
Dec 13 04:38:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:38:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:38:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'backups', '.mgr', 'volumes', 'vms', 'default.rgw.log', '.rgw.root', 'default.rgw.control']
Dec 13 04:38:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:38:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4122: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:38:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:38:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:38:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:38:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:38:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:38:10 np0005558241 nova_compute[248510]: 2025-12-13 09:38:10.817 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:38:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:38:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:38:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:38:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:38:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:38:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:38:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:38:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:38:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:38:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:38:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:38:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4123: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:11 np0005558241 nova_compute[248510]: 2025-12-13 09:38:11.808 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:38:12 np0005558241 podman[421365]: 2025-12-13 09:38:12.983293297 +0000 UTC m=+0.059376862 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 13 04:38:12 np0005558241 podman[421364]: 2025-12-13 09:38:12.997108034 +0000 UTC m=+0.075534088 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:38:13 np0005558241 podman[421363]: 2025-12-13 09:38:13.042043613 +0000 UTC m=+0.129942395 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller)
Dec 13 04:38:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4124: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4125: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:15 np0005558241 nova_compute[248510]: 2025-12-13 09:38:15.819 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:38:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:38:16 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:38:16 np0005558241 nova_compute[248510]: 2025-12-13 09:38:16.811 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:38:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:38:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:38:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4126: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:38:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:38:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:38:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:38:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:38:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:38:18 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:38:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:38:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:38:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:38:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:38:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:38:19 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:38:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:38:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:38:19 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:38:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4127: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:19 np0005558241 podman[421639]: 2025-12-13 09:38:19.780974165 +0000 UTC m=+0.029045161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:38:20 np0005558241 podman[421639]: 2025-12-13 09:38:20.152120704 +0000 UTC m=+0.400191700 container create b83d1baa53b630858666fbc277c2f6c6450b5a68e4fb4484ff6109402c59fe41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:38:20 np0005558241 systemd[1]: Started libpod-conmon-b83d1baa53b630858666fbc277c2f6c6450b5a68e4fb4484ff6109402c59fe41.scope.
Dec 13 04:38:20 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:38:20 np0005558241 nova_compute[248510]: 2025-12-13 09:38:20.822 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:38:21 np0005558241 podman[421639]: 2025-12-13 09:38:21.101636261 +0000 UTC m=+1.349707247 container init b83d1baa53b630858666fbc277c2f6c6450b5a68e4fb4484ff6109402c59fe41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:38:21 np0005558241 podman[421639]: 2025-12-13 09:38:21.110361081 +0000 UTC m=+1.358432057 container start b83d1baa53b630858666fbc277c2f6c6450b5a68e4fb4484ff6109402c59fe41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 04:38:21 np0005558241 nice_noyce[421655]: 167 167
Dec 13 04:38:21 np0005558241 systemd[1]: libpod-b83d1baa53b630858666fbc277c2f6c6450b5a68e4fb4484ff6109402c59fe41.scope: Deactivated successfully.
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4128: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:21 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:38:21 np0005558241 podman[421639]: 2025-12-13 09:38:21.702691076 +0000 UTC m=+1.950762162 container attach b83d1baa53b630858666fbc277c2f6c6450b5a68e4fb4484ff6109402c59fe41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 04:38:21 np0005558241 podman[421639]: 2025-12-13 09:38:21.703746193 +0000 UTC m=+1.951817209 container died b83d1baa53b630858666fbc277c2f6c6450b5a68e4fb4484ff6109402c59fe41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 04:38:21 np0005558241 nova_compute[248510]: 2025-12-13 09:38:21.813 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:38:21 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:38:21 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5261754351853937e-05 of space, bias 1.0, pg target 0.004578526305556181 quantized to 32 (current 32)
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697878223102975 of space, bias 1.0, pg target 0.20093634669308924 quantized to 32 (current 32)
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.64600408034546e-07 of space, bias 4.0, pg target 0.0006775204896414552 quantized to 16 (current 32)
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:38:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:38:23 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ce5a59a6ad9ca0f7f4511989f9f842a1635c25a5c917bdc579940ced1048b717-merged.mount: Deactivated successfully.
Dec 13 04:38:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4129: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4130: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:25 np0005558241 podman[421639]: 2025-12-13 09:38:25.759269173 +0000 UTC m=+6.007340159 container remove b83d1baa53b630858666fbc277c2f6c6450b5a68e4fb4484ff6109402c59fe41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_noyce, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 04:38:25 np0005558241 systemd[1]: libpod-conmon-b83d1baa53b630858666fbc277c2f6c6450b5a68e4fb4484ff6109402c59fe41.scope: Deactivated successfully.
Dec 13 04:38:25 np0005558241 nova_compute[248510]: 2025-12-13 09:38:25.824 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:38:26 np0005558241 podman[421680]: 2025-12-13 09:38:25.913354463 +0000 UTC m=+0.024544427 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:38:26 np0005558241 nova_compute[248510]: 2025-12-13 09:38:26.817 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:38:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4131: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:27 np0005558241 podman[421680]: 2025-12-13 09:38:27.667399684 +0000 UTC m=+1.778589608 container create 5dec053d2a381bae27b4af14b325a2b46d0ef7757f42c94ce5d0a4385aaabaf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:38:27 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:38:28 np0005558241 systemd[1]: Started libpod-conmon-5dec053d2a381bae27b4af14b325a2b46d0ef7757f42c94ce5d0a4385aaabaf3.scope.
Dec 13 04:38:28 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:38:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e26ed56c2a18797fd2d42416f9f8f3a54edc542fa730f57d9eaf5078156df1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:38:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e26ed56c2a18797fd2d42416f9f8f3a54edc542fa730f57d9eaf5078156df1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:38:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e26ed56c2a18797fd2d42416f9f8f3a54edc542fa730f57d9eaf5078156df1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:38:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e26ed56c2a18797fd2d42416f9f8f3a54edc542fa730f57d9eaf5078156df1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:38:28 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2e26ed56c2a18797fd2d42416f9f8f3a54edc542fa730f57d9eaf5078156df1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:38:28 np0005558241 podman[421680]: 2025-12-13 09:38:28.361057545 +0000 UTC m=+2.472247489 container init 5dec053d2a381bae27b4af14b325a2b46d0ef7757f42c94ce5d0a4385aaabaf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:38:28 np0005558241 podman[421680]: 2025-12-13 09:38:28.376427211 +0000 UTC m=+2.487617135 container start 5dec053d2a381bae27b4af14b325a2b46d0ef7757f42c94ce5d0a4385aaabaf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:38:28 np0005558241 podman[421680]: 2025-12-13 09:38:28.717947108 +0000 UTC m=+2.829137132 container attach 5dec053d2a381bae27b4af14b325a2b46d0ef7757f42c94ce5d0a4385aaabaf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 04:38:28 np0005558241 stoic_yalow[421697]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:38:28 np0005558241 stoic_yalow[421697]: --> All data devices are unavailable
Dec 13 04:38:28 np0005558241 systemd[1]: libpod-5dec053d2a381bae27b4af14b325a2b46d0ef7757f42c94ce5d0a4385aaabaf3.scope: Deactivated successfully.
Dec 13 04:38:28 np0005558241 podman[421680]: 2025-12-13 09:38:28.906500514 +0000 UTC m=+3.017690468 container died 5dec053d2a381bae27b4af14b325a2b46d0ef7757f42c94ce5d0a4385aaabaf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 04:38:29 np0005558241 systemd[1]: var-lib-containers-storage-overlay-e2e26ed56c2a18797fd2d42416f9f8f3a54edc542fa730f57d9eaf5078156df1-merged.mount: Deactivated successfully.
Dec 13 04:38:29 np0005558241 podman[421680]: 2025-12-13 09:38:29.397252579 +0000 UTC m=+3.508442543 container remove 5dec053d2a381bae27b4af14b325a2b46d0ef7757f42c94ce5d0a4385aaabaf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:38:29 np0005558241 systemd[1]: libpod-conmon-5dec053d2a381bae27b4af14b325a2b46d0ef7757f42c94ce5d0a4385aaabaf3.scope: Deactivated successfully.
Dec 13 04:38:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4132: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:29 np0005558241 podman[421794]: 2025-12-13 09:38:29.863102489 +0000 UTC m=+0.022871236 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:38:29 np0005558241 podman[421794]: 2025-12-13 09:38:29.993658347 +0000 UTC m=+0.153427074 container create 5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hypatia, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:38:30 np0005558241 systemd[1]: Started libpod-conmon-5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912.scope.
Dec 13 04:38:30 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:38:30 np0005558241 podman[421794]: 2025-12-13 09:38:30.463143878 +0000 UTC m=+0.622912625 container init 5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hypatia, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:38:30 np0005558241 podman[421794]: 2025-12-13 09:38:30.474576265 +0000 UTC m=+0.634345002 container start 5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hypatia, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 04:38:30 np0005558241 funny_hypatia[421810]: 167 167
Dec 13 04:38:30 np0005558241 systemd[1]: libpod-5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912.scope: Deactivated successfully.
Dec 13 04:38:30 np0005558241 conmon[421810]: conmon 5a17aed4bed951f115bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912.scope/container/memory.events
Dec 13 04:38:30 np0005558241 nova_compute[248510]: 2025-12-13 09:38:30.825 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:38:30 np0005558241 podman[421794]: 2025-12-13 09:38:30.969822162 +0000 UTC m=+1.129590979 container attach 5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hypatia, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:38:30 np0005558241 podman[421794]: 2025-12-13 09:38:30.971763161 +0000 UTC m=+1.131531918 container died 5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hypatia, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 04:38:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4133: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:31 np0005558241 systemd[1]: var-lib-containers-storage-overlay-ee80ad20d2672d8503d677f91f38ddfc9a50fac2f0462784a3ad70e6275c9a79-merged.mount: Deactivated successfully.
Dec 13 04:38:31 np0005558241 nova_compute[248510]: 2025-12-13 09:38:31.820 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:38:32 np0005558241 podman[421794]: 2025-12-13 09:38:32.0559855 +0000 UTC m=+2.215754227 container remove 5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:38:32 np0005558241 systemd[1]: libpod-conmon-5a17aed4bed951f115bbe5f5a139f7d72b8324b71fa977147cffed4b371d2912.scope: Deactivated successfully.
Dec 13 04:38:32 np0005558241 podman[421834]: 2025-12-13 09:38:32.267292317 +0000 UTC m=+0.059907615 container create 720fbb60e23abe30d029e798879bbe4e316a1df7bc9f1d98a773ab495c1de4a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 04:38:32 np0005558241 podman[421834]: 2025-12-13 09:38:32.232614216 +0000 UTC m=+0.025229514 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:38:32 np0005558241 systemd[1]: Started libpod-conmon-720fbb60e23abe30d029e798879bbe4e316a1df7bc9f1d98a773ab495c1de4a7.scope.
Dec 13 04:38:32 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:38:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bdc50c036390b1cb4df7b6aca946c0a4be4965d76679b122f7a9512faa608cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:38:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bdc50c036390b1cb4df7b6aca946c0a4be4965d76679b122f7a9512faa608cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:38:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bdc50c036390b1cb4df7b6aca946c0a4be4965d76679b122f7a9512faa608cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:38:32 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bdc50c036390b1cb4df7b6aca946c0a4be4965d76679b122f7a9512faa608cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:38:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:38:32 np0005558241 podman[421834]: 2025-12-13 09:38:32.763616072 +0000 UTC m=+0.556231470 container init 720fbb60e23abe30d029e798879bbe4e316a1df7bc9f1d98a773ab495c1de4a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:38:32 np0005558241 podman[421834]: 2025-12-13 09:38:32.776105996 +0000 UTC m=+0.568721284 container start 720fbb60e23abe30d029e798879bbe4e316a1df7bc9f1d98a773ab495c1de4a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:38:32 np0005558241 podman[421834]: 2025-12-13 09:38:32.781921412 +0000 UTC m=+0.574536730 container attach 720fbb60e23abe30d029e798879bbe4e316a1df7bc9f1d98a773ab495c1de4a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_davinci, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:38:33 np0005558241 happy_davinci[421850]: {
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:    "0": [
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:        {
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "devices": [
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "/dev/loop3"
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            ],
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "lv_name": "ceph_lv0",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "lv_size": "21470642176",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "name": "ceph_lv0",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "tags": {
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.cluster_name": "ceph",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.crush_device_class": "",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.encrypted": "0",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.objectstore": "bluestore",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.osd_id": "0",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.type": "block",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.vdo": "0",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.with_tpm": "0"
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            },
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "type": "block",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "vg_name": "ceph_vg0"
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:        }
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:    ],
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:    "1": [
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:        {
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "devices": [
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "/dev/loop4"
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            ],
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "lv_name": "ceph_lv1",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "lv_size": "21470642176",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "name": "ceph_lv1",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "tags": {
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.cluster_name": "ceph",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.crush_device_class": "",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.encrypted": "0",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.objectstore": "bluestore",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.osd_id": "1",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.type": "block",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.vdo": "0",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.with_tpm": "0"
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            },
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "type": "block",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "vg_name": "ceph_vg1"
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:        }
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:    ],
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:    "2": [
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:        {
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "devices": [
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "/dev/loop5"
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            ],
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "lv_name": "ceph_lv2",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "lv_size": "21470642176",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "name": "ceph_lv2",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "tags": {
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.cluster_name": "ceph",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.crush_device_class": "",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.encrypted": "0",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.objectstore": "bluestore",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.osd_id": "2",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.type": "block",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.vdo": "0",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:                "ceph.with_tpm": "0"
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            },
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "type": "block",
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:            "vg_name": "ceph_vg2"
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:        }
Dec 13 04:38:33 np0005558241 happy_davinci[421850]:    ]
Dec 13 04:38:33 np0005558241 happy_davinci[421850]: }
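The JSON emitted by the `happy_davinci` container above has the shape produced by `ceph-volume lvm list --format json`: top-level keys are OSD ids, each mapping to a list of logical-volume records with `devices`, `lv_path`, and a `tags` dict. A minimal sketch of turning that output into an OSD-to-device map (the sample data below is a trimmed copy of the structure shown in the log; the helper name `osd_device_map` is hypothetical):

```python
import json

# Trimmed sample mirroring the `ceph-volume lvm list --format json` output
# logged above: "<osd_id>": [ { lv record, ... } ].
sample = json.loads("""
{
  "0": [{
    "devices": ["/dev/loop3"],
    "lv_path": "/dev/ceph_vg0/ceph_lv0",
    "tags": {"ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
             "ceph.type": "block"},
    "type": "block"
  }]
}
""")

def osd_device_map(listing):
    """Map OSD id -> (backing devices, LV path) for block-type LVs."""
    result = {}
    for osd_id, lvs in listing.items():
        for lv in lvs:
            if lv.get("type") == "block":
                result[int(osd_id)] = (lv["devices"], lv["lv_path"])
    return result

print(osd_device_map(sample))
```

Against the full listing in the log this would yield three entries, one per OSD (`/dev/loop3`, `/dev/loop4`, `/dev/loop5`).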
Dec 13 04:38:33 np0005558241 systemd[1]: libpod-720fbb60e23abe30d029e798879bbe4e316a1df7bc9f1d98a773ab495c1de4a7.scope: Deactivated successfully.
Dec 13 04:38:33 np0005558241 podman[421834]: 2025-12-13 09:38:33.132455676 +0000 UTC m=+0.925070964 container died 720fbb60e23abe30d029e798879bbe4e316a1df7bc9f1d98a773ab495c1de4a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_davinci, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 04:38:33 np0005558241 systemd[1]: var-lib-containers-storage-overlay-5bdc50c036390b1cb4df7b6aca946c0a4be4965d76679b122f7a9512faa608cf-merged.mount: Deactivated successfully.
Dec 13 04:38:33 np0005558241 podman[421834]: 2025-12-13 09:38:33.338273454 +0000 UTC m=+1.130888742 container remove 720fbb60e23abe30d029e798879bbe4e316a1df7bc9f1d98a773ab495c1de4a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_davinci, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:38:33 np0005558241 systemd[1]: libpod-conmon-720fbb60e23abe30d029e798879bbe4e316a1df7bc9f1d98a773ab495c1de4a7.scope: Deactivated successfully.
Dec 13 04:38:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4134: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:33 np0005558241 podman[421934]: 2025-12-13 09:38:33.858772066 +0000 UTC m=+0.027317317 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:38:33 np0005558241 podman[421934]: 2025-12-13 09:38:33.959736772 +0000 UTC m=+0.128281983 container create 1df67f4aa4b8e62cd1105c4efc23266736c02522fbb38d0edd91f4910ec44aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_borg, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:38:34 np0005558241 systemd[1]: Started libpod-conmon-1df67f4aa4b8e62cd1105c4efc23266736c02522fbb38d0edd91f4910ec44aa5.scope.
Dec 13 04:38:34 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:38:35 np0005558241 podman[421934]: 2025-12-13 09:38:35.076350894 +0000 UTC m=+1.244896195 container init 1df67f4aa4b8e62cd1105c4efc23266736c02522fbb38d0edd91f4910ec44aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_borg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 04:38:35 np0005558241 podman[421934]: 2025-12-13 09:38:35.085687129 +0000 UTC m=+1.254232380 container start 1df67f4aa4b8e62cd1105c4efc23266736c02522fbb38d0edd91f4910ec44aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_borg, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:38:35 np0005558241 practical_borg[421950]: 167 167
Dec 13 04:38:35 np0005558241 systemd[1]: libpod-1df67f4aa4b8e62cd1105c4efc23266736c02522fbb38d0edd91f4910ec44aa5.scope: Deactivated successfully.
Dec 13 04:38:35 np0005558241 podman[421934]: 2025-12-13 09:38:35.350682395 +0000 UTC m=+1.519227696 container attach 1df67f4aa4b8e62cd1105c4efc23266736c02522fbb38d0edd91f4910ec44aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_borg, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:38:35 np0005558241 podman[421934]: 2025-12-13 09:38:35.351893665 +0000 UTC m=+1.520438906 container died 1df67f4aa4b8e62cd1105c4efc23266736c02522fbb38d0edd91f4910ec44aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_borg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:38:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4135: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:35 np0005558241 nova_compute[248510]: 2025-12-13 09:38:35.827 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:38:36 np0005558241 systemd[1]: var-lib-containers-storage-overlay-dd7f260fec64a94d7ae2dcd02714098879ae068000f4e109dcc21b9b3efa6877-merged.mount: Deactivated successfully.
Dec 13 04:38:36 np0005558241 nova_compute[248510]: 2025-12-13 09:38:36.822 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:38:37 np0005558241 podman[421934]: 2025-12-13 09:38:37.062741252 +0000 UTC m=+3.231286463 container remove 1df67f4aa4b8e62cd1105c4efc23266736c02522fbb38d0edd91f4910ec44aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_borg, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 04:38:37 np0005558241 systemd[1]: libpod-conmon-1df67f4aa4b8e62cd1105c4efc23266736c02522fbb38d0edd91f4910ec44aa5.scope: Deactivated successfully.
Dec 13 04:38:37 np0005558241 podman[421975]: 2025-12-13 09:38:37.243724917 +0000 UTC m=+0.032453616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:38:37 np0005558241 podman[421975]: 2025-12-13 09:38:37.376515152 +0000 UTC m=+0.165243751 container create 34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_davinci, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 04:38:37 np0005558241 systemd[1]: Started libpod-conmon-34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b.scope.
Dec 13 04:38:37 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:38:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c72a9b67fa717e223ed297f87380d83730ef315fe795a0bbc41c51f4ff1ceb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:38:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c72a9b67fa717e223ed297f87380d83730ef315fe795a0bbc41c51f4ff1ceb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:38:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c72a9b67fa717e223ed297f87380d83730ef315fe795a0bbc41c51f4ff1ceb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:38:37 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c72a9b67fa717e223ed297f87380d83730ef315fe795a0bbc41c51f4ff1ceb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:38:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4136: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:37 np0005558241 podman[421975]: 2025-12-13 09:38:37.80053947 +0000 UTC m=+0.589268089 container init 34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:38:37 np0005558241 podman[421975]: 2025-12-13 09:38:37.807879495 +0000 UTC m=+0.596608094 container start 34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:38:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:38:38 np0005558241 podman[421975]: 2025-12-13 09:38:38.327560416 +0000 UTC m=+1.116289035 container attach 34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_davinci, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 04:38:38 np0005558241 lvm[422069]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:38:38 np0005558241 lvm[422070]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:38:38 np0005558241 lvm[422069]: VG ceph_vg0 finished
Dec 13 04:38:38 np0005558241 lvm[422070]: VG ceph_vg1 finished
Dec 13 04:38:38 np0005558241 lvm[422072]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:38:38 np0005558241 lvm[422072]: VG ceph_vg2 finished
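The `lvm[…]` lines above are LVM event-activation messages: each VG is first reported complete ("PV … online, VG … is complete.") and then finished activating ("VG … finished"). A small sketch, assuming journal lines of exactly that wording, that cross-checks the two message kinds per VG (the function name `vg_activation_status` is hypothetical):

```python
import re

# Patterns matching the two LVM autoactivation messages seen in the journal.
ONLINE = re.compile(r"PV (\S+) online, VG (\S+) is complete\.")
FINISHED = re.compile(r"VG (\S+) finished")

def vg_activation_status(lines):
    """Return {vg_name: (pv, finished?)} for every VG reported complete."""
    online, finished = {}, set()
    for line in lines:
        if (m := ONLINE.search(line)):
            online[m.group(2)] = m.group(1)
        elif (m := FINISHED.search(line)):
            finished.add(m.group(1))
    return {vg: (pv, vg in finished) for vg, pv in online.items()}

journal = [
    "lvm[422069]: PV /dev/loop3 online, VG ceph_vg0 is complete.",
    "lvm[422069]: VG ceph_vg0 finished",
]
print(vg_activation_status(journal))
```

A VG that appears in the result with `False` in the second slot came online but never logged "finished", which would point at a stalled activation.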
Dec 13 04:38:38 np0005558241 charming_davinci[421991]: {}
Dec 13 04:38:38 np0005558241 systemd[1]: libpod-34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b.scope: Deactivated successfully.
Dec 13 04:38:38 np0005558241 podman[421975]: 2025-12-13 09:38:38.749926744 +0000 UTC m=+1.538655343 container died 34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 04:38:38 np0005558241 systemd[1]: libpod-34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b.scope: Consumed 1.459s CPU time.
Dec 13 04:38:39 np0005558241 systemd[1]: var-lib-containers-storage-overlay-4c72a9b67fa717e223ed297f87380d83730ef315fe795a0bbc41c51f4ff1ceb3-merged.mount: Deactivated successfully.
Dec 13 04:38:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4137: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:39 np0005558241 podman[421975]: 2025-12-13 09:38:39.857852368 +0000 UTC m=+2.646580987 container remove 34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:38:39 np0005558241 systemd[1]: libpod-conmon-34b0e7e9d36a1d2f96c4ed499fb58687e8b6164ea787eb134e953a80d8bd074b.scope: Deactivated successfully.
Dec 13 04:38:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:38:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:38:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:38:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:38:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:38:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:38:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:38:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:38:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:38:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:38:40 np0005558241 nova_compute[248510]: 2025-12-13 09:38:40.829 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:38:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:38:40 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:38:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4138: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:41 np0005558241 nova_compute[248510]: 2025-12-13 09:38:41.824 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:38:42 np0005558241 nova_compute[248510]: 2025-12-13 09:38:42.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:38:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:38:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4139: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:43 np0005558241 podman[422118]: 2025-12-13 09:38:43.988482865 +0000 UTC m=+0.062832009 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:38:44 np0005558241 podman[422117]: 2025-12-13 09:38:44.021410032 +0000 UTC m=+0.102918126 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3)
Dec 13 04:38:44 np0005558241 podman[422116]: 2025-12-13 09:38:44.027905625 +0000 UTC m=+0.109759848 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 13 04:38:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4140: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:45 np0005558241 nova_compute[248510]: 2025-12-13 09:38:45.832 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:38:46 np0005558241 nova_compute[248510]: 2025-12-13 09:38:46.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:38:46 np0005558241 nova_compute[248510]: 2025-12-13 09:38:46.827 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:38:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4141: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:47 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:38:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4142: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:50 np0005558241 nova_compute[248510]: 2025-12-13 09:38:50.834 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:38:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4143: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:51 np0005558241 nova_compute[248510]: 2025-12-13 09:38:51.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:38:51 np0005558241 nova_compute[248510]: 2025-12-13 09:38:51.830 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:38:51 np0005558241 nova_compute[248510]: 2025-12-13 09:38:51.974 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:38:51 np0005558241 nova_compute[248510]: 2025-12-13 09:38:51.975 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:38:51 np0005558241 nova_compute[248510]: 2025-12-13 09:38:51.975 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:38:51 np0005558241 nova_compute[248510]: 2025-12-13 09:38:51.975 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:38:51 np0005558241 nova_compute[248510]: 2025-12-13 09:38:51.975 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:38:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:38:52 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/187745040' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:38:52 np0005558241 nova_compute[248510]: 2025-12-13 09:38:52.516 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:38:52 np0005558241 nova_compute[248510]: 2025-12-13 09:38:52.713 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:38:52 np0005558241 nova_compute[248510]: 2025-12-13 09:38:52.714 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3486MB free_disk=59.98736572358757GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:38:52 np0005558241 nova_compute[248510]: 2025-12-13 09:38:52.714 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:38:52 np0005558241 nova_compute[248510]: 2025-12-13 09:38:52.715 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:38:52 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:38:53 np0005558241 nova_compute[248510]: 2025-12-13 09:38:53.570 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:38:53 np0005558241 nova_compute[248510]: 2025-12-13 09:38:53.571 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:38:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4144: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:53 np0005558241 nova_compute[248510]: 2025-12-13 09:38:53.748 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:38:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:38:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3312532257' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:38:54 np0005558241 nova_compute[248510]: 2025-12-13 09:38:54.302 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:38:54 np0005558241 nova_compute[248510]: 2025-12-13 09:38:54.309 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:38:54 np0005558241 nova_compute[248510]: 2025-12-13 09:38:54.401 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:38:54 np0005558241 nova_compute[248510]: 2025-12-13 09:38:54.403 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:38:54 np0005558241 nova_compute[248510]: 2025-12-13 09:38:54.403 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:38:55 np0005558241 nova_compute[248510]: 2025-12-13 09:38:55.404 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:38:55 np0005558241 nova_compute[248510]: 2025-12-13 09:38:55.405 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:38:55 np0005558241 nova_compute[248510]: 2025-12-13 09:38:55.405 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:38:55 np0005558241 nova_compute[248510]: 2025-12-13 09:38:55.424 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:38:55 np0005558241 nova_compute[248510]: 2025-12-13 09:38:55.425 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:38:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:38:55.474 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:38:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:38:55.474 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:38:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:38:55.475 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:38:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4145: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:55 np0005558241 nova_compute[248510]: 2025-12-13 09:38:55.836 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:38:56 np0005558241 nova_compute[248510]: 2025-12-13 09:38:56.833 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:38:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4146: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:57 np0005558241 nova_compute[248510]: 2025-12-13 09:38:57.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:38:58 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:38:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4147: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:38:59 np0005558241 nova_compute[248510]: 2025-12-13 09:38:59.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:38:59 np0005558241 nova_compute[248510]: 2025-12-13 09:38:59.772 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:39:00 np0005558241 nova_compute[248510]: 2025-12-13 09:39:00.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:39:00 np0005558241 nova_compute[248510]: 2025-12-13 09:39:00.840 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4148: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:39:01 np0005558241 nova_compute[248510]: 2025-12-13 09:39:01.836 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:03 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:39:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4149: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:39:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4150: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:39:05 np0005558241 nova_compute[248510]: 2025-12-13 09:39:05.842 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:06 np0005558241 nova_compute[248510]: 2025-12-13 09:39:06.839 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4151: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:39:08 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:39:08 np0005558241 nova_compute[248510]: 2025-12-13 09:39:08.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:39:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:39:09
Dec 13 04:39:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:39:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:39:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.control', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'volumes']
Dec 13 04:39:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:39:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4152: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:39:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:39:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:39:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:39:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:39:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:39:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:39:10 np0005558241 nova_compute[248510]: 2025-12-13 09:39:10.843 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:39:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:39:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:39:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:39:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:39:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:39:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:39:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:39:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:39:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:39:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4153: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:39:11 np0005558241 nova_compute[248510]: 2025-12-13 09:39:11.842 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4154: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:39:13 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:39:14 np0005558241 podman[422224]: 2025-12-13 09:39:14.99853844 +0000 UTC m=+0.082904923 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec 13 04:39:15 np0005558241 podman[422225]: 2025-12-13 09:39:15.009682539 +0000 UTC m=+0.082198265 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:39:15 np0005558241 podman[422223]: 2025-12-13 09:39:15.055008098 +0000 UTC m=+0.134426257 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Dec 13 04:39:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4155: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:39:15 np0005558241 nova_compute[248510]: 2025-12-13 09:39:15.846 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:16 np0005558241 nova_compute[248510]: 2025-12-13 09:39:16.845 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4156: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Dec 13 04:39:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:39:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3553484223' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:39:17 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:39:17 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3553484223' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:39:18 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:39:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4157: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 13 04:39:20 np0005558241 nova_compute[248510]: 2025-12-13 09:39:20.847 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4158: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 13 04:39:21 np0005558241 nova_compute[248510]: 2025-12-13 09:39:21.848 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5261754351853937e-05 of space, bias 1.0, pg target 0.004578526305556181 quantized to 32 (current 32)
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006697878223102975 of space, bias 1.0, pg target 0.20093634669308924 quantized to 32 (current 32)
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.64600408034546e-07 of space, bias 4.0, pg target 0.0006775204896414552 quantized to 16 (current 32)
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:39:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:39:23 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Dec 13 04:39:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4159: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 511 B/s wr, 7 op/s
Dec 13 04:39:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Dec 13 04:39:24 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Dec 13 04:39:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4161: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Dec 13 04:39:25 np0005558241 nova_compute[248510]: 2025-12-13 09:39:25.851 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:26 np0005558241 nova_compute[248510]: 2025-12-13 09:39:26.851 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4162: 321 pgs: 321 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Dec 13 04:39:28 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:39:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4163: 321 pgs: 321 active+clean; 29 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1023 B/s wr, 21 op/s
Dec 13 04:39:30 np0005558241 nova_compute[248510]: 2025-12-13 09:39:30.852 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4164: 321 pgs: 321 active+clean; 29 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1023 B/s wr, 21 op/s
Dec 13 04:39:31 np0005558241 nova_compute[248510]: 2025-12-13 09:39:31.854 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4165: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Dec 13 04:39:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:39:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Dec 13 04:39:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Dec 13 04:39:34 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Dec 13 04:39:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4167: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Dec 13 04:39:35 np0005558241 nova_compute[248510]: 2025-12-13 09:39:35.766 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:39:35 np0005558241 nova_compute[248510]: 2025-12-13 09:39:35.853 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Dec 13 04:39:36 np0005558241 nova_compute[248510]: 2025-12-13 09:39:36.856 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Dec 13 04:39:37 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Dec 13 04:39:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4169: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 1.1 KiB/s wr, 13 op/s
Dec 13 04:39:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:39:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4170: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.4 KiB/s wr, 20 op/s
Dec 13 04:39:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:39:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:39:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:39:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:39:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:39:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:39:40 np0005558241 nova_compute[248510]: 2025-12-13 09:39:40.855 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4171: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 895 B/s wr, 15 op/s
Dec 13 04:39:41 np0005558241 nova_compute[248510]: 2025-12-13 09:39:41.858 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 04:39:42 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 04:39:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:39:42 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:39:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:39:42 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:39:42 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:39:43 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 04:39:43 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:39:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4172: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 5.9 KiB/s rd, 573 B/s wr, 8 op/s
Dec 13 04:39:43 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:39:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:39:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:39:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:39:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:39:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:39:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:39:44 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:39:44 np0005558241 podman[422429]: 2025-12-13 09:39:44.584050001 +0000 UTC m=+0.027109062 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:39:44 np0005558241 nova_compute[248510]: 2025-12-13 09:39:44.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:39:45 np0005558241 podman[422429]: 2025-12-13 09:39:45.033382466 +0000 UTC m=+0.476441507 container create 4933a2a349fa613fb1f359bd437c5adf0eebff5527a22cb9733371d31ec9892c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wu, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 04:39:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:39:45 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:39:45 np0005558241 systemd[1]: Started libpod-conmon-4933a2a349fa613fb1f359bd437c5adf0eebff5527a22cb9733371d31ec9892c.scope.
Dec 13 04:39:45 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:39:45 np0005558241 podman[422429]: 2025-12-13 09:39:45.252598611 +0000 UTC m=+0.695657712 container init 4933a2a349fa613fb1f359bd437c5adf0eebff5527a22cb9733371d31ec9892c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wu, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:39:45 np0005558241 podman[422448]: 2025-12-13 09:39:45.252545419 +0000 UTC m=+0.159790094 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 04:39:45 np0005558241 podman[422447]: 2025-12-13 09:39:45.254280513 +0000 UTC m=+0.169116378 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 13 04:39:45 np0005558241 podman[422429]: 2025-12-13 09:39:45.264263994 +0000 UTC m=+0.707322995 container start 4933a2a349fa613fb1f359bd437c5adf0eebff5527a22cb9733371d31ec9892c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 04:39:45 np0005558241 sad_wu[422459]: 167 167
Dec 13 04:39:45 np0005558241 systemd[1]: libpod-4933a2a349fa613fb1f359bd437c5adf0eebff5527a22cb9733371d31ec9892c.scope: Deactivated successfully.
Dec 13 04:39:45 np0005558241 podman[422429]: 2025-12-13 09:39:45.292212416 +0000 UTC m=+0.735271447 container attach 4933a2a349fa613fb1f359bd437c5adf0eebff5527a22cb9733371d31ec9892c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:39:45 np0005558241 podman[422429]: 2025-12-13 09:39:45.295013646 +0000 UTC m=+0.738072657 container died 4933a2a349fa613fb1f359bd437c5adf0eebff5527a22cb9733371d31ec9892c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wu, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:39:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4173: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 511 B/s wr, 14 op/s
Dec 13 04:39:45 np0005558241 nova_compute[248510]: 2025-12-13 09:39:45.856 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:46 np0005558241 systemd[1]: var-lib-containers-storage-overlay-8eebeefe9736551223a5499ba0eeabf30e3d250e06b566294030cd69ac51be07-merged.mount: Deactivated successfully.
Dec 13 04:39:46 np0005558241 nova_compute[248510]: 2025-12-13 09:39:46.861 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:47 np0005558241 podman[422429]: 2025-12-13 09:39:47.198833517 +0000 UTC m=+2.641892528 container remove 4933a2a349fa613fb1f359bd437c5adf0eebff5527a22cb9733371d31ec9892c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wu, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 04:39:47 np0005558241 systemd[1]: libpod-conmon-4933a2a349fa613fb1f359bd437c5adf0eebff5527a22cb9733371d31ec9892c.scope: Deactivated successfully.
Dec 13 04:39:47 np0005558241 podman[422443]: 2025-12-13 09:39:47.299735531 +0000 UTC m=+2.214511515 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 13 04:39:47 np0005558241 podman[422535]: 2025-12-13 09:39:47.385192687 +0000 UTC m=+0.032076156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:39:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4174: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 481 B/s wr, 13 op/s
Dec 13 04:39:47 np0005558241 nova_compute[248510]: 2025-12-13 09:39:47.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:39:47 np0005558241 podman[422535]: 2025-12-13 09:39:47.857464188 +0000 UTC m=+0.504347637 container create 0eebbcc9f0a75fc6e539d01c55d3c3fb47d6c7f90f2c54b344779111f236c39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_clarke, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 04:39:48 np0005558241 systemd[1]: Started libpod-conmon-0eebbcc9f0a75fc6e539d01c55d3c3fb47d6c7f90f2c54b344779111f236c39a.scope.
Dec 13 04:39:48 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:39:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19b93c0ed958b2f29505e8e5d5cb8f1de2d0d1eebdb728c51a12866ccd972be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:39:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19b93c0ed958b2f29505e8e5d5cb8f1de2d0d1eebdb728c51a12866ccd972be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:39:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19b93c0ed958b2f29505e8e5d5cb8f1de2d0d1eebdb728c51a12866ccd972be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:39:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19b93c0ed958b2f29505e8e5d5cb8f1de2d0d1eebdb728c51a12866ccd972be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:39:48 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19b93c0ed958b2f29505e8e5d5cb8f1de2d0d1eebdb728c51a12866ccd972be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:39:48 np0005558241 podman[422535]: 2025-12-13 09:39:48.629419665 +0000 UTC m=+1.276303154 container init 0eebbcc9f0a75fc6e539d01c55d3c3fb47d6c7f90f2c54b344779111f236c39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_clarke, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 04:39:48 np0005558241 podman[422535]: 2025-12-13 09:39:48.638346799 +0000 UTC m=+1.285230248 container start 0eebbcc9f0a75fc6e539d01c55d3c3fb47d6c7f90f2c54b344779111f236c39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_clarke, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 04:39:48 np0005558241 podman[422535]: 2025-12-13 09:39:48.723879027 +0000 UTC m=+1.370762476 container attach 0eebbcc9f0a75fc6e539d01c55d3c3fb47d6c7f90f2c54b344779111f236c39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_clarke, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 04:39:49 np0005558241 frosty_clarke[422552]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:39:49 np0005558241 frosty_clarke[422552]: --> All data devices are unavailable
Dec 13 04:39:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:39:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Dec 13 04:39:49 np0005558241 systemd[1]: libpod-0eebbcc9f0a75fc6e539d01c55d3c3fb47d6c7f90f2c54b344779111f236c39a.scope: Deactivated successfully.
Dec 13 04:39:49 np0005558241 podman[422535]: 2025-12-13 09:39:49.1735414 +0000 UTC m=+1.820424849 container died 0eebbcc9f0a75fc6e539d01c55d3c3fb47d6c7f90f2c54b344779111f236c39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_clarke, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:39:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Dec 13 04:39:49 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Dec 13 04:39:49 np0005558241 systemd[1]: var-lib-containers-storage-overlay-f19b93c0ed958b2f29505e8e5d5cb8f1de2d0d1eebdb728c51a12866ccd972be-merged.mount: Deactivated successfully.
Dec 13 04:39:49 np0005558241 podman[422535]: 2025-12-13 09:39:49.599990009 +0000 UTC m=+2.246873458 container remove 0eebbcc9f0a75fc6e539d01c55d3c3fb47d6c7f90f2c54b344779111f236c39a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:39:49 np0005558241 systemd[1]: libpod-conmon-0eebbcc9f0a75fc6e539d01c55d3c3fb47d6c7f90f2c54b344779111f236c39a.scope: Deactivated successfully.
Dec 13 04:39:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4176: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 716 B/s wr, 12 op/s
Dec 13 04:39:50 np0005558241 podman[422650]: 2025-12-13 09:39:50.072922647 +0000 UTC m=+0.023528392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:39:50 np0005558241 podman[422650]: 2025-12-13 09:39:50.209344343 +0000 UTC m=+0.159950078 container create 9529d06560bf6e861f0826810c3f992a45395fee0c523a7278a200edf067f3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:39:50 np0005558241 systemd[1]: Started libpod-conmon-9529d06560bf6e861f0826810c3f992a45395fee0c523a7278a200edf067f3b6.scope.
Dec 13 04:39:50 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:39:50 np0005558241 nova_compute[248510]: 2025-12-13 09:39:50.859 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:51 np0005558241 podman[422650]: 2025-12-13 09:39:51.085290202 +0000 UTC m=+1.035895957 container init 9529d06560bf6e861f0826810c3f992a45395fee0c523a7278a200edf067f3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 04:39:51 np0005558241 podman[422650]: 2025-12-13 09:39:51.094543314 +0000 UTC m=+1.045149039 container start 9529d06560bf6e861f0826810c3f992a45395fee0c523a7278a200edf067f3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 04:39:51 np0005558241 unruffled_sammet[422666]: 167 167
Dec 13 04:39:51 np0005558241 systemd[1]: libpod-9529d06560bf6e861f0826810c3f992a45395fee0c523a7278a200edf067f3b6.scope: Deactivated successfully.
Dec 13 04:39:51 np0005558241 podman[422650]: 2025-12-13 09:39:51.37825437 +0000 UTC m=+1.328860205 container attach 9529d06560bf6e861f0826810c3f992a45395fee0c523a7278a200edf067f3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:39:51 np0005558241 podman[422650]: 2025-12-13 09:39:51.378815994 +0000 UTC m=+1.329421759 container died 9529d06560bf6e861f0826810c3f992a45395fee0c523a7278a200edf067f3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 04:39:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4177: 321 pgs: 321 active+clean; 462 KiB data, 989 MiB used, 59 GiB / 60 GiB avail; 8.5 KiB/s rd, 716 B/s wr, 12 op/s
Dec 13 04:39:51 np0005558241 systemd[1]: var-lib-containers-storage-overlay-9fc9d003412f46eb2dac1ddf4a6616c3c95e34818f92328f3dbacd555b9d00dd-merged.mount: Deactivated successfully.
Dec 13 04:39:51 np0005558241 podman[422650]: 2025-12-13 09:39:51.793941098 +0000 UTC m=+1.744546823 container remove 9529d06560bf6e861f0826810c3f992a45395fee0c523a7278a200edf067f3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:39:51 np0005558241 systemd[1]: libpod-conmon-9529d06560bf6e861f0826810c3f992a45395fee0c523a7278a200edf067f3b6.scope: Deactivated successfully.
Dec 13 04:39:51 np0005558241 nova_compute[248510]: 2025-12-13 09:39:51.863 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:52 np0005558241 podman[422690]: 2025-12-13 09:39:51.945393012 +0000 UTC m=+0.027578784 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:39:52 np0005558241 podman[422690]: 2025-12-13 09:39:52.126398478 +0000 UTC m=+0.208584230 container create 85de82e56f13f973cd01b561d384428ab20ba48878c70542804f5a08084fba2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 04:39:52 np0005558241 systemd[1]: Started libpod-conmon-85de82e56f13f973cd01b561d384428ab20ba48878c70542804f5a08084fba2d.scope.
Dec 13 04:39:52 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:39:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a40757cd05582fd64dd02ff31b2d93cb5fbe275e2167cf1496c565b109acc44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:39:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a40757cd05582fd64dd02ff31b2d93cb5fbe275e2167cf1496c565b109acc44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:39:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a40757cd05582fd64dd02ff31b2d93cb5fbe275e2167cf1496c565b109acc44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:39:52 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a40757cd05582fd64dd02ff31b2d93cb5fbe275e2167cf1496c565b109acc44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:39:52 np0005558241 podman[422690]: 2025-12-13 09:39:52.403251351 +0000 UTC m=+0.485437133 container init 85de82e56f13f973cd01b561d384428ab20ba48878c70542804f5a08084fba2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_davinci, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 04:39:52 np0005558241 podman[422690]: 2025-12-13 09:39:52.416796701 +0000 UTC m=+0.498982463 container start 85de82e56f13f973cd01b561d384428ab20ba48878c70542804f5a08084fba2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec 13 04:39:52 np0005558241 podman[422690]: 2025-12-13 09:39:52.425034568 +0000 UTC m=+0.507220340 container attach 85de82e56f13f973cd01b561d384428ab20ba48878c70542804f5a08084fba2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_davinci, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]: {
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:    "0": [
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:        {
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "devices": [
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "/dev/loop3"
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            ],
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "lv_name": "ceph_lv0",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "lv_size": "21470642176",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "name": "ceph_lv0",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "tags": {
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.cluster_name": "ceph",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.crush_device_class": "",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.encrypted": "0",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.objectstore": "bluestore",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.osd_id": "0",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.type": "block",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.vdo": "0",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.with_tpm": "0"
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            },
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "type": "block",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "vg_name": "ceph_vg0"
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:        }
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:    ],
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:    "1": [
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:        {
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "devices": [
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "/dev/loop4"
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            ],
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "lv_name": "ceph_lv1",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "lv_size": "21470642176",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "name": "ceph_lv1",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "tags": {
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.cluster_name": "ceph",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.crush_device_class": "",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.encrypted": "0",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.objectstore": "bluestore",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.osd_id": "1",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.type": "block",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.vdo": "0",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.with_tpm": "0"
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            },
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "type": "block",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "vg_name": "ceph_vg1"
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:        }
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:    ],
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:    "2": [
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:        {
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "devices": [
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "/dev/loop5"
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            ],
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "lv_name": "ceph_lv2",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "lv_size": "21470642176",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "name": "ceph_lv2",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "tags": {
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.cluster_name": "ceph",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.crush_device_class": "",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.encrypted": "0",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.objectstore": "bluestore",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.osd_id": "2",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.type": "block",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.vdo": "0",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:                "ceph.with_tpm": "0"
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            },
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "type": "block",
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:            "vg_name": "ceph_vg2"
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:        }
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]:    ]
Dec 13 04:39:52 np0005558241 hopeful_davinci[422706]: }
Dec 13 04:39:52 np0005558241 systemd[1]: libpod-85de82e56f13f973cd01b561d384428ab20ba48878c70542804f5a08084fba2d.scope: Deactivated successfully.
Dec 13 04:39:52 np0005558241 podman[422690]: 2025-12-13 09:39:52.767361045 +0000 UTC m=+0.849546827 container died 85de82e56f13f973cd01b561d384428ab20ba48878c70542804f5a08084fba2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:39:52 np0005558241 nova_compute[248510]: 2025-12-13 09:39:52.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:39:52 np0005558241 nova_compute[248510]: 2025-12-13 09:39:52.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:39:52 np0005558241 nova_compute[248510]: 2025-12-13 09:39:52.806 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:39:52 np0005558241 nova_compute[248510]: 2025-12-13 09:39:52.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:39:52 np0005558241 nova_compute[248510]: 2025-12-13 09:39:52.807 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:39:52 np0005558241 nova_compute[248510]: 2025-12-13 09:39:52.807 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:39:52 np0005558241 nova_compute[248510]: 2025-12-13 09:39:52.808 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:39:53 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0a40757cd05582fd64dd02ff31b2d93cb5fbe275e2167cf1496c565b109acc44-merged.mount: Deactivated successfully.
Dec 13 04:39:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Dec 13 04:39:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Dec 13 04:39:53 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Dec 13 04:39:53 np0005558241 podman[422690]: 2025-12-13 09:39:53.259552235 +0000 UTC m=+1.341737987 container remove 85de82e56f13f973cd01b561d384428ab20ba48878c70542804f5a08084fba2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 04:39:53 np0005558241 systemd[1]: libpod-conmon-85de82e56f13f973cd01b561d384428ab20ba48878c70542804f5a08084fba2d.scope: Deactivated successfully.
Dec 13 04:39:53 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:39:53 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2342472209' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:39:53 np0005558241 nova_compute[248510]: 2025-12-13 09:39:53.484 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.676s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:39:53 np0005558241 nova_compute[248510]: 2025-12-13 09:39:53.667 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:39:53 np0005558241 nova_compute[248510]: 2025-12-13 09:39:53.668 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3442MB free_disk=59.987360855564475GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:39:53 np0005558241 nova_compute[248510]: 2025-12-13 09:39:53.669 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:39:53 np0005558241 nova_compute[248510]: 2025-12-13 09:39:53.669 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:39:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4179: 321 pgs: 321 active+clean; 21 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.6 MiB/s wr, 13 op/s
Dec 13 04:39:53 np0005558241 nova_compute[248510]: 2025-12-13 09:39:53.746 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:39:53 np0005558241 nova_compute[248510]: 2025-12-13 09:39:53.747 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:39:53 np0005558241 nova_compute[248510]: 2025-12-13 09:39:53.771 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:39:53 np0005558241 podman[422813]: 2025-12-13 09:39:53.761578283 +0000 UTC m=+0.021278885 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:39:54 np0005558241 podman[422813]: 2025-12-13 09:39:54.10347024 +0000 UTC m=+0.363170842 container create 05207c6d95d8f0c92818ec6f124364e2b63b2fefa9ca309e4f89e1eb252162cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 04:39:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:39:54 np0005558241 systemd[1]: Started libpod-conmon-05207c6d95d8f0c92818ec6f124364e2b63b2fefa9ca309e4f89e1eb252162cc.scope.
Dec 13 04:39:54 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:39:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:39:54 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/192107434' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:39:54 np0005558241 nova_compute[248510]: 2025-12-13 09:39:54.707 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.936s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:39:54 np0005558241 nova_compute[248510]: 2025-12-13 09:39:54.714 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:39:54 np0005558241 nova_compute[248510]: 2025-12-13 09:39:54.764 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:39:54 np0005558241 nova_compute[248510]: 2025-12-13 09:39:54.766 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:39:54 np0005558241 nova_compute[248510]: 2025-12-13 09:39:54.767 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:39:54 np0005558241 podman[422813]: 2025-12-13 09:39:54.857814655 +0000 UTC m=+1.117515257 container init 05207c6d95d8f0c92818ec6f124364e2b63b2fefa9ca309e4f89e1eb252162cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 04:39:54 np0005558241 podman[422813]: 2025-12-13 09:39:54.869114939 +0000 UTC m=+1.128815511 container start 05207c6d95d8f0c92818ec6f124364e2b63b2fefa9ca309e4f89e1eb252162cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:39:54 np0005558241 romantic_mccarthy[422850]: 167 167
Dec 13 04:39:54 np0005558241 systemd[1]: libpod-05207c6d95d8f0c92818ec6f124364e2b63b2fefa9ca309e4f89e1eb252162cc.scope: Deactivated successfully.
Dec 13 04:39:55 np0005558241 podman[422813]: 2025-12-13 09:39:55.253492012 +0000 UTC m=+1.513192584 container attach 05207c6d95d8f0c92818ec6f124364e2b63b2fefa9ca309e4f89e1eb252162cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mccarthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec 13 04:39:55 np0005558241 podman[422813]: 2025-12-13 09:39:55.254328653 +0000 UTC m=+1.514029245 container died 05207c6d95d8f0c92818ec6f124364e2b63b2fefa9ca309e4f89e1eb252162cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mccarthy, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:39:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:39:55.474 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:39:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:39:55.475 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:39:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:39:55.476 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:39:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4180: 321 pgs: 321 active+clean; 21 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 2.6 MiB/s wr, 14 op/s
Dec 13 04:39:55 np0005558241 nova_compute[248510]: 2025-12-13 09:39:55.861 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:56 np0005558241 nova_compute[248510]: 2025-12-13 09:39:56.867 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:39:57 np0005558241 systemd[1]: var-lib-containers-storage-overlay-441f40f441a3302c8d591bb719402c5386e3dc8aeee74400bcd48ef8d60def11-merged.mount: Deactivated successfully.
Dec 13 04:39:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4181: 321 pgs: 321 active+clean; 21 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 5.9 KiB/s rd, 2.5 MiB/s wr, 10 op/s
Dec 13 04:39:57 np0005558241 podman[422813]: 2025-12-13 09:39:57.782445843 +0000 UTC m=+4.042146455 container remove 05207c6d95d8f0c92818ec6f124364e2b63b2fefa9ca309e4f89e1eb252162cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 04:39:57 np0005558241 systemd[1]: libpod-conmon-05207c6d95d8f0c92818ec6f124364e2b63b2fefa9ca309e4f89e1eb252162cc.scope: Deactivated successfully.
Dec 13 04:39:58 np0005558241 podman[422875]: 2025-12-13 09:39:57.965439529 +0000 UTC m=+0.027102392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:39:58 np0005558241 podman[422875]: 2025-12-13 09:39:58.305303684 +0000 UTC m=+0.366966497 container create 81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 04:39:58 np0005558241 systemd[1]: Started libpod-conmon-81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208.scope.
Dec 13 04:39:58 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:39:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b5f21673c537729ab808fa0a22d4a6cf3da350725b7f55647914abe3ae9853/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:39:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b5f21673c537729ab808fa0a22d4a6cf3da350725b7f55647914abe3ae9853/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:39:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b5f21673c537729ab808fa0a22d4a6cf3da350725b7f55647914abe3ae9853/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:39:58 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b5f21673c537729ab808fa0a22d4a6cf3da350725b7f55647914abe3ae9853/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:39:58 np0005558241 nova_compute[248510]: 2025-12-13 09:39:58.768 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:39:58 np0005558241 nova_compute[248510]: 2025-12-13 09:39:58.769 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:39:58 np0005558241 nova_compute[248510]: 2025-12-13 09:39:58.769 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:39:58 np0005558241 nova_compute[248510]: 2025-12-13 09:39:58.795 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:39:58 np0005558241 nova_compute[248510]: 2025-12-13 09:39:58.795 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:39:58 np0005558241 podman[422875]: 2025-12-13 09:39:58.80681298 +0000 UTC m=+0.868475803 container init 81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 04:39:58 np0005558241 podman[422875]: 2025-12-13 09:39:58.817468507 +0000 UTC m=+0.879131310 container start 81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:39:58 np0005558241 podman[422875]: 2025-12-13 09:39:58.948844407 +0000 UTC m=+1.010507230 container attach 81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 04:39:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:39:59 np0005558241 lvm[422969]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:39:59 np0005558241 lvm[422969]: VG ceph_vg0 finished
Dec 13 04:39:59 np0005558241 lvm[422970]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:39:59 np0005558241 lvm[422970]: VG ceph_vg1 finished
Dec 13 04:39:59 np0005558241 lvm[422972]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:39:59 np0005558241 lvm[422972]: VG ceph_vg2 finished
Dec 13 04:39:59 np0005558241 inspiring_mendeleev[422891]: {}
Dec 13 04:39:59 np0005558241 systemd[1]: libpod-81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208.scope: Deactivated successfully.
Dec 13 04:39:59 np0005558241 systemd[1]: libpod-81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208.scope: Consumed 1.309s CPU time.
Dec 13 04:39:59 np0005558241 podman[422875]: 2025-12-13 09:39:59.616379711 +0000 UTC m=+1.678042514 container died 81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 04:39:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4182: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Dec 13 04:39:59 np0005558241 systemd[1]: var-lib-containers-storage-overlay-11b5f21673c537729ab808fa0a22d4a6cf3da350725b7f55647914abe3ae9853-merged.mount: Deactivated successfully.
Dec 13 04:40:00 np0005558241 podman[422875]: 2025-12-13 09:40:00.662129932 +0000 UTC m=+2.723792775 container remove 81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 04:40:00 np0005558241 systemd[1]: libpod-conmon-81e76e7e6953f8d097d3c563e5f84c6789b940b36e19b80201fd651b67349208.scope: Deactivated successfully.
Dec 13 04:40:00 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:40:00 np0005558241 nova_compute[248510]: 2025-12-13 09:40:00.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:40:00 np0005558241 nova_compute[248510]: 2025-12-13 09:40:00.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:40:00 np0005558241 nova_compute[248510]: 2025-12-13 09:40:00.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:40:00 np0005558241 nova_compute[248510]: 2025-12-13 09:40:00.863 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:01 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:40:01 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:40:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4183: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Dec 13 04:40:01 np0005558241 nova_compute[248510]: 2025-12-13 09:40:01.870 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:02 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:40:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:40:02 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:40:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4184: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 783 KiB/s wr, 8 op/s
Dec 13 04:40:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:40:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4185: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.6 KiB/s rd, 426 B/s wr, 3 op/s
Dec 13 04:40:05 np0005558241 nova_compute[248510]: 2025-12-13 09:40:05.865 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:06 np0005558241 nova_compute[248510]: 2025-12-13 09:40:06.873 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4186: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Dec 13 04:40:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:40:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:40:09
Dec 13 04:40:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:40:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:40:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'images', 'vms', 'backups', 'default.rgw.control', '.rgw.root', '.mgr', 'default.rgw.log']
Dec 13 04:40:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:40:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4187: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Dec 13 04:40:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:40:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:40:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:40:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:40:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:40:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:40:10 np0005558241 nova_compute[248510]: 2025-12-13 09:40:10.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:40:10 np0005558241 nova_compute[248510]: 2025-12-13 09:40:10.866 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:40:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:40:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:40:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:40:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:40:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:40:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:40:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:40:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:40:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:40:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4188: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:11 np0005558241 nova_compute[248510]: 2025-12-13 09:40:11.876 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4189: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:40:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:40:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1500980230' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:40:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:40:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1500980230' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:40:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4190: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:15 np0005558241 nova_compute[248510]: 2025-12-13 09:40:15.868 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:15 np0005558241 podman[423014]: 2025-12-13 09:40:15.988482215 +0000 UTC m=+0.075342313 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 04:40:16 np0005558241 podman[423013]: 2025-12-13 09:40:16.049513998 +0000 UTC m=+0.136540490 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:40:16 np0005558241 nova_compute[248510]: 2025-12-13 09:40:16.879 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4191: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:17 np0005558241 podman[423055]: 2025-12-13 09:40:17.987657613 +0000 UTC m=+0.071082065 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 13 04:40:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:40:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4192: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:20 np0005558241 nova_compute[248510]: 2025-12-13 09:40:20.870 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4193: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:21 np0005558241 nova_compute[248510]: 2025-12-13 09:40:21.882 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.537814584026472e-05 of space, bias 1.0, pg target 0.004613443752079416 quantized to 32 (current 32)
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0003367012783039856 of space, bias 1.0, pg target 0.10101038349119568 quantized to 32 (current 32)
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.966907383726934e-07 of space, bias 4.0, pg target 0.0007160288860472322 quantized to 16 (current 32)
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:40:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:40:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4194: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:40:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4195: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:25 np0005558241 nova_compute[248510]: 2025-12-13 09:40:25.872 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:26 np0005558241 nova_compute[248510]: 2025-12-13 09:40:26.885 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4196: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:40:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4197: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:30 np0005558241 nova_compute[248510]: 2025-12-13 09:40:30.874 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4198: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:31 np0005558241 nova_compute[248510]: 2025-12-13 09:40:31.888 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4199: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:40:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4200: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:35 np0005558241 nova_compute[248510]: 2025-12-13 09:40:35.876 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:36 np0005558241 nova_compute[248510]: 2025-12-13 09:40:36.891 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4201: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:40:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4202: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:40:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:40:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:40:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:40:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:40:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:40:40 np0005558241 nova_compute[248510]: 2025-12-13 09:40:40.877 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4203: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:41 np0005558241 nova_compute[248510]: 2025-12-13 09:40:41.894 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4204: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:40:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4205: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:45 np0005558241 nova_compute[248510]: 2025-12-13 09:40:45.880 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:46 np0005558241 nova_compute[248510]: 2025-12-13 09:40:46.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:40:46 np0005558241 nova_compute[248510]: 2025-12-13 09:40:46.896 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:40:46 np0005558241 podman[423076]: 2025-12-13 09:40:46.960891248 +0000 UTC m=+0.053409878 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:40:46 np0005558241 podman[423075]: 2025-12-13 09:40:46.989101738 +0000 UTC m=+0.083164666 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202)
Dec 13 04:40:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4206: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:47 np0005558241 nova_compute[248510]: 2025-12-13 09:40:47.767 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:40:49 np0005558241 podman[423117]: 2025-12-13 09:40:49.004792624 +0000 UTC m=+0.094671123 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:40:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:40:49 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7800.0 total, 600.0 interval
Cumulative writes: 18K writes, 83K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.02 MB/s
Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.12 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1274 writes, 5603 keys, 1274 commit groups, 1.0 writes per commit group, ingest: 8.86 MB, 0.01 MB/s
Interval WAL: 1273 writes, 1273 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     22.4      4.71              0.38        61    0.077       0      0       0.0       0.0
  L6      1/0   11.01 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.2     76.0     64.8      8.41              1.81        60    0.140    439K    31K       0.0       0.0
 Sum      1/0   11.01 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.2     48.7     49.6     13.13              2.20       121    0.108    439K    31K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   8.2     31.5     31.4      1.51              0.21         8    0.188     40K   2004       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0     76.0     64.8      8.41              1.81        60    0.140    439K    31K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     22.4      4.71              0.38        60    0.078       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.0      0.01              0.00         1    0.006       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 7800.0 total, 600.0 interval
Flush(GB): cumulative 0.103, interval 0.006
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.64 GB write, 0.08 MB/s write, 0.62 GB read, 0.08 MB/s read, 13.1 seconds
Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 1.5 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x564515ad18d0#2 capacity: 304.00 MB usage: 73.37 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000626 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(4520,70.28 MB,23.1198%) FilterBlock(122,1.18 MB,0.387006%) IndexBlock(122,1.91 MB,0.628587%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Dec 13 04:40:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:40:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4207: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:50 np0005558241 nova_compute[248510]: 2025-12-13 09:40:50.895 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:40:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4208: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:51 np0005558241 nova_compute[248510]: 2025-12-13 09:40:51.900 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:40:52 np0005558241 nova_compute[248510]: 2025-12-13 09:40:52.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:40:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4209: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:40:54 np0005558241 nova_compute[248510]: 2025-12-13 09:40:54.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:40:54 np0005558241 nova_compute[248510]: 2025-12-13 09:40:54.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:40:54 np0005558241 nova_compute[248510]: 2025-12-13 09:40:54.809 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:40:54 np0005558241 nova_compute[248510]: 2025-12-13 09:40:54.810 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:40:54 np0005558241 nova_compute[248510]: 2025-12-13 09:40:54.810 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:40:54 np0005558241 nova_compute[248510]: 2025-12-13 09:40:54.810 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:40:55 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:40:55 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3423949893' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:40:55 np0005558241 nova_compute[248510]: 2025-12-13 09:40:55.385 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:40:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:40:55.475 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:40:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:40:55.475 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:40:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:40:55.476 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:40:55 np0005558241 nova_compute[248510]: 2025-12-13 09:40:55.578 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:40:55 np0005558241 nova_compute[248510]: 2025-12-13 09:40:55.579 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3487MB free_disk=59.98735874146223GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:40:55 np0005558241 nova_compute[248510]: 2025-12-13 09:40:55.580 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:40:55 np0005558241 nova_compute[248510]: 2025-12-13 09:40:55.580 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:40:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4210: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:55 np0005558241 nova_compute[248510]: 2025-12-13 09:40:55.896 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:40:56 np0005558241 nova_compute[248510]: 2025-12-13 09:40:56.156 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:40:56 np0005558241 nova_compute[248510]: 2025-12-13 09:40:56.156 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:40:56 np0005558241 nova_compute[248510]: 2025-12-13 09:40:56.175 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:40:56 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:40:56 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1690645903' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:40:56 np0005558241 nova_compute[248510]: 2025-12-13 09:40:56.725 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:40:56 np0005558241 nova_compute[248510]: 2025-12-13 09:40:56.734 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:40:56 np0005558241 nova_compute[248510]: 2025-12-13 09:40:56.753 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:40:56 np0005558241 nova_compute[248510]: 2025-12-13 09:40:56.755 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:40:56 np0005558241 nova_compute[248510]: 2025-12-13 09:40:56.756 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:40:56 np0005558241 nova_compute[248510]: 2025-12-13 09:40:56.903 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:40:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4211: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:58 np0005558241 nova_compute[248510]: 2025-12-13 09:40:58.757 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:40:58 np0005558241 nova_compute[248510]: 2025-12-13 09:40:58.757 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:40:58 np0005558241 nova_compute[248510]: 2025-12-13 09:40:58.758 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:40:58 np0005558241 nova_compute[248510]: 2025-12-13 09:40:58.788 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:40:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:40:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4212: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:40:59 np0005558241 nova_compute[248510]: 2025-12-13 09:40:59.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:40:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Dec 13 04:40:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Dec 13 04:40:59 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Dec 13 04:41:00 np0005558241 nova_compute[248510]: 2025-12-13 09:41:00.898 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:41:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4214: 321 pgs: 321 active+clean; 21 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:01 np0005558241 nova_compute[248510]: 2025-12-13 09:41:01.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:41:01 np0005558241 nova_compute[248510]: 2025-12-13 09:41:01.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:41:02 np0005558241 nova_compute[248510]: 2025-12-13 09:41:02.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:41:02 np0005558241 nova_compute[248510]: 2025-12-13 09:41:02.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:41:02 np0005558241 podman[423275]: 2025-12-13 09:41:02.957967621 +0000 UTC m=+0.138507292 container exec 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:41:03 np0005558241 podman[423275]: 2025-12-13 09:41:03.089713053 +0000 UTC m=+0.270252714 container exec_died 63e25f0ba27109eef7514e42f7f42224a14fd9dd278a846de4f7802e571c7ebd (image=quay.io/ceph/ceph:v20, name=ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mon-compute-0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 04:41:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4215: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 13 04:41:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:41:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:41:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:41:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:41:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:41:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:41:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:41:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:41:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:41:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:41:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:41:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:41:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:41:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:41:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:41:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:41:04 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:41:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:41:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:41:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:41:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:41:05 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:41:05 np0005558241 podman[423606]: 2025-12-13 09:41:05.194785688 +0000 UTC m=+0.042551688 container create 6e4e3f08538b704fd79f3dc0cb5a16d62669fab3301e5194c4c43d2a1035c16d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bohr, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:41:05 np0005558241 systemd[1]: Started libpod-conmon-6e4e3f08538b704fd79f3dc0cb5a16d62669fab3301e5194c4c43d2a1035c16d.scope.
Dec 13 04:41:05 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:41:05 np0005558241 podman[423606]: 2025-12-13 09:41:05.175210412 +0000 UTC m=+0.022976442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:41:05 np0005558241 podman[423606]: 2025-12-13 09:41:05.27257565 +0000 UTC m=+0.120341690 container init 6e4e3f08538b704fd79f3dc0cb5a16d62669fab3301e5194c4c43d2a1035c16d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bohr, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 04:41:05 np0005558241 podman[423606]: 2025-12-13 09:41:05.277728308 +0000 UTC m=+0.125494318 container start 6e4e3f08538b704fd79f3dc0cb5a16d62669fab3301e5194c4c43d2a1035c16d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:41:05 np0005558241 peaceful_bohr[423622]: 167 167
Dec 13 04:41:05 np0005558241 systemd[1]: libpod-6e4e3f08538b704fd79f3dc0cb5a16d62669fab3301e5194c4c43d2a1035c16d.scope: Deactivated successfully.
Dec 13 04:41:05 np0005558241 podman[423606]: 2025-12-13 09:41:05.294775711 +0000 UTC m=+0.142541751 container attach 6e4e3f08538b704fd79f3dc0cb5a16d62669fab3301e5194c4c43d2a1035c16d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 04:41:05 np0005558241 podman[423606]: 2025-12-13 09:41:05.295878009 +0000 UTC m=+0.143644009 container died 6e4e3f08538b704fd79f3dc0cb5a16d62669fab3301e5194c4c43d2a1035c16d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bohr, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:41:05 np0005558241 systemd[1]: var-lib-containers-storage-overlay-3e8d855cdb43757368984e7241ff5fb45ade656de37a7d84a177ee0a7c2026ad-merged.mount: Deactivated successfully.
Dec 13 04:41:05 np0005558241 podman[423606]: 2025-12-13 09:41:05.341218995 +0000 UTC m=+0.188984995 container remove 6e4e3f08538b704fd79f3dc0cb5a16d62669fab3301e5194c4c43d2a1035c16d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:41:05 np0005558241 systemd[1]: libpod-conmon-6e4e3f08538b704fd79f3dc0cb5a16d62669fab3301e5194c4c43d2a1035c16d.scope: Deactivated successfully.
Dec 13 04:41:05 np0005558241 podman[423646]: 2025-12-13 09:41:05.500514351 +0000 UTC m=+0.044782063 container create 427afb1649cefe1f93479f85c3295c9ac5a4868d58842138d0424ff54ca42ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:41:05 np0005558241 systemd[1]: Started libpod-conmon-427afb1649cefe1f93479f85c3295c9ac5a4868d58842138d0424ff54ca42ac8.scope.
Dec 13 04:41:05 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:41:05 np0005558241 podman[423646]: 2025-12-13 09:41:05.483054678 +0000 UTC m=+0.027322410 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:41:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97670aec2a46754cc310c07b6fed053849783622524d83394648b6bc4c2ac4b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:41:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97670aec2a46754cc310c07b6fed053849783622524d83394648b6bc4c2ac4b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:41:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97670aec2a46754cc310c07b6fed053849783622524d83394648b6bc4c2ac4b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:41:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97670aec2a46754cc310c07b6fed053849783622524d83394648b6bc4c2ac4b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:41:05 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97670aec2a46754cc310c07b6fed053849783622524d83394648b6bc4c2ac4b3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:41:05 np0005558241 podman[423646]: 2025-12-13 09:41:05.618061271 +0000 UTC m=+0.162329073 container init 427afb1649cefe1f93479f85c3295c9ac5a4868d58842138d0424ff54ca42ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_ganguly, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:41:05 np0005558241 podman[423646]: 2025-12-13 09:41:05.63370355 +0000 UTC m=+0.177971262 container start 427afb1649cefe1f93479f85c3295c9ac5a4868d58842138d0424ff54ca42ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:41:05 np0005558241 podman[423646]: 2025-12-13 09:41:05.63813356 +0000 UTC m=+0.182401312 container attach 427afb1649cefe1f93479f85c3295c9ac5a4868d58842138d0424ff54ca42ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_ganguly, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec 13 04:41:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4216: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 04:41:05 np0005558241 nova_compute[248510]: 2025-12-13 09:41:05.901 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:06 np0005558241 wonderful_ganguly[423663]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:41:06 np0005558241 wonderful_ganguly[423663]: --> All data devices are unavailable
Dec 13 04:41:06 np0005558241 systemd[1]: libpod-427afb1649cefe1f93479f85c3295c9ac5a4868d58842138d0424ff54ca42ac8.scope: Deactivated successfully.
Dec 13 04:41:06 np0005558241 podman[423646]: 2025-12-13 09:41:06.137692098 +0000 UTC m=+0.681959810 container died 427afb1649cefe1f93479f85c3295c9ac5a4868d58842138d0424ff54ca42ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:41:06 np0005558241 systemd[1]: var-lib-containers-storage-overlay-97670aec2a46754cc310c07b6fed053849783622524d83394648b6bc4c2ac4b3-merged.mount: Deactivated successfully.
Dec 13 04:41:06 np0005558241 podman[423646]: 2025-12-13 09:41:06.18005677 +0000 UTC m=+0.724324482 container remove 427afb1649cefe1f93479f85c3295c9ac5a4868d58842138d0424ff54ca42ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_ganguly, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:41:06 np0005558241 systemd[1]: libpod-conmon-427afb1649cefe1f93479f85c3295c9ac5a4868d58842138d0424ff54ca42ac8.scope: Deactivated successfully.
Dec 13 04:41:06 np0005558241 podman[423757]: 2025-12-13 09:41:06.651040048 +0000 UTC m=+0.023851203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:41:06 np0005558241 nova_compute[248510]: 2025-12-13 09:41:06.910 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:07 np0005558241 podman[423757]: 2025-12-13 09:41:07.328556106 +0000 UTC m=+0.701367261 container create dbd81b60edc7025e0afd71e1e3de33b2826c5cec8f0063b41c471eb581564abf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 04:41:07 np0005558241 systemd[1]: Started libpod-conmon-dbd81b60edc7025e0afd71e1e3de33b2826c5cec8f0063b41c471eb581564abf.scope.
Dec 13 04:41:07 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:41:07 np0005558241 podman[423757]: 2025-12-13 09:41:07.626630449 +0000 UTC m=+0.999441594 container init dbd81b60edc7025e0afd71e1e3de33b2826c5cec8f0063b41c471eb581564abf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mcclintock, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 04:41:07 np0005558241 podman[423757]: 2025-12-13 09:41:07.634429323 +0000 UTC m=+1.007240468 container start dbd81b60edc7025e0afd71e1e3de33b2826c5cec8f0063b41c471eb581564abf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 04:41:07 np0005558241 podman[423757]: 2025-12-13 09:41:07.637803777 +0000 UTC m=+1.010614942 container attach dbd81b60edc7025e0afd71e1e3de33b2826c5cec8f0063b41c471eb581564abf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mcclintock, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:41:07 np0005558241 compassionate_mcclintock[423773]: 167 167
Dec 13 04:41:07 np0005558241 systemd[1]: libpod-dbd81b60edc7025e0afd71e1e3de33b2826c5cec8f0063b41c471eb581564abf.scope: Deactivated successfully.
Dec 13 04:41:07 np0005558241 podman[423757]: 2025-12-13 09:41:07.640701869 +0000 UTC m=+1.013513014 container died dbd81b60edc7025e0afd71e1e3de33b2826c5cec8f0063b41c471eb581564abf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 04:41:07 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a53f8d82d53188a9ed09f214196a0d81c8f5f7710d4508ad905fd8bee27c2ebb-merged.mount: Deactivated successfully.
Dec 13 04:41:07 np0005558241 podman[423757]: 2025-12-13 09:41:07.684038015 +0000 UTC m=+1.056849200 container remove dbd81b60edc7025e0afd71e1e3de33b2826c5cec8f0063b41c471eb581564abf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mcclintock, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 04:41:07 np0005558241 systemd[1]: libpod-conmon-dbd81b60edc7025e0afd71e1e3de33b2826c5cec8f0063b41c471eb581564abf.scope: Deactivated successfully.
Dec 13 04:41:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4217: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 04:41:07 np0005558241 podman[423796]: 2025-12-13 09:41:07.906539382 +0000 UTC m=+0.061510769 container create 90a8eb64a1a151d7dc4a88fd933bdba8a8a96bb28a8335e8b39f0f95741531a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:41:07 np0005558241 systemd[1]: Started libpod-conmon-90a8eb64a1a151d7dc4a88fd933bdba8a8a96bb28a8335e8b39f0f95741531a9.scope.
Dec 13 04:41:07 np0005558241 podman[423796]: 2025-12-13 09:41:07.873610224 +0000 UTC m=+0.028581701 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:41:07 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:41:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a46c8f940ba887e54058fc54e5a4bd84e676c080ad41133446882d0a0dbe103/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:41:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a46c8f940ba887e54058fc54e5a4bd84e676c080ad41133446882d0a0dbe103/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:41:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a46c8f940ba887e54058fc54e5a4bd84e676c080ad41133446882d0a0dbe103/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:41:07 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a46c8f940ba887e54058fc54e5a4bd84e676c080ad41133446882d0a0dbe103/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:41:08 np0005558241 podman[423796]: 2025-12-13 09:41:08.002764162 +0000 UTC m=+0.157735559 container init 90a8eb64a1a151d7dc4a88fd933bdba8a8a96bb28a8335e8b39f0f95741531a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_panini, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 04:41:08 np0005558241 podman[423796]: 2025-12-13 09:41:08.010794171 +0000 UTC m=+0.165765558 container start 90a8eb64a1a151d7dc4a88fd933bdba8a8a96bb28a8335e8b39f0f95741531a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:41:08 np0005558241 podman[423796]: 2025-12-13 09:41:08.015059027 +0000 UTC m=+0.170030454 container attach 90a8eb64a1a151d7dc4a88fd933bdba8a8a96bb28a8335e8b39f0f95741531a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_panini, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 04:41:08 np0005558241 eager_panini[423812]: {
Dec 13 04:41:08 np0005558241 eager_panini[423812]:    "0": [
Dec 13 04:41:08 np0005558241 eager_panini[423812]:        {
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "devices": [
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "/dev/loop3"
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            ],
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "lv_name": "ceph_lv0",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "lv_size": "21470642176",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "name": "ceph_lv0",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "tags": {
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.cluster_name": "ceph",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.crush_device_class": "",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.encrypted": "0",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.objectstore": "bluestore",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.osd_id": "0",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.type": "block",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.vdo": "0",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.with_tpm": "0"
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            },
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "type": "block",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "vg_name": "ceph_vg0"
Dec 13 04:41:08 np0005558241 eager_panini[423812]:        }
Dec 13 04:41:08 np0005558241 eager_panini[423812]:    ],
Dec 13 04:41:08 np0005558241 eager_panini[423812]:    "1": [
Dec 13 04:41:08 np0005558241 eager_panini[423812]:        {
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "devices": [
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "/dev/loop4"
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            ],
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "lv_name": "ceph_lv1",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "lv_size": "21470642176",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "name": "ceph_lv1",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "tags": {
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.cluster_name": "ceph",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.crush_device_class": "",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.encrypted": "0",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.objectstore": "bluestore",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.osd_id": "1",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.type": "block",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.vdo": "0",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.with_tpm": "0"
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            },
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "type": "block",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "vg_name": "ceph_vg1"
Dec 13 04:41:08 np0005558241 eager_panini[423812]:        }
Dec 13 04:41:08 np0005558241 eager_panini[423812]:    ],
Dec 13 04:41:08 np0005558241 eager_panini[423812]:    "2": [
Dec 13 04:41:08 np0005558241 eager_panini[423812]:        {
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "devices": [
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "/dev/loop5"
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            ],
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "lv_name": "ceph_lv2",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "lv_size": "21470642176",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "name": "ceph_lv2",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "tags": {
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.cluster_name": "ceph",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.crush_device_class": "",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.encrypted": "0",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.objectstore": "bluestore",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.osd_id": "2",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.type": "block",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.vdo": "0",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:                "ceph.with_tpm": "0"
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            },
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "type": "block",
Dec 13 04:41:08 np0005558241 eager_panini[423812]:            "vg_name": "ceph_vg2"
Dec 13 04:41:08 np0005558241 eager_panini[423812]:        }
Dec 13 04:41:08 np0005558241 eager_panini[423812]:    ]
Dec 13 04:41:08 np0005558241 eager_panini[423812]: }
Dec 13 04:41:08 np0005558241 systemd[1]: libpod-90a8eb64a1a151d7dc4a88fd933bdba8a8a96bb28a8335e8b39f0f95741531a9.scope: Deactivated successfully.
Dec 13 04:41:08 np0005558241 podman[423796]: 2025-12-13 09:41:08.352120358 +0000 UTC m=+0.507091765 container died 90a8eb64a1a151d7dc4a88fd933bdba8a8a96bb28a8335e8b39f0f95741531a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_panini, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:41:08 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0a46c8f940ba887e54058fc54e5a4bd84e676c080ad41133446882d0a0dbe103-merged.mount: Deactivated successfully.
Dec 13 04:41:08 np0005558241 podman[423796]: 2025-12-13 09:41:08.395241379 +0000 UTC m=+0.550212766 container remove 90a8eb64a1a151d7dc4a88fd933bdba8a8a96bb28a8335e8b39f0f95741531a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_panini, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:41:08 np0005558241 systemd[1]: libpod-conmon-90a8eb64a1a151d7dc4a88fd933bdba8a8a96bb28a8335e8b39f0f95741531a9.scope: Deactivated successfully.
Dec 13 04:41:08 np0005558241 podman[423893]: 2025-12-13 09:41:08.914451035 +0000 UTC m=+0.045519081 container create ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_banach, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:41:08 np0005558241 systemd[1]: Started libpod-conmon-ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7.scope.
Dec 13 04:41:08 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:41:08 np0005558241 podman[423893]: 2025-12-13 09:41:08.892528011 +0000 UTC m=+0.023596107 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:41:09 np0005558241 podman[423893]: 2025-12-13 09:41:09.004339738 +0000 UTC m=+0.135407804 container init ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_banach, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 04:41:09 np0005558241 podman[423893]: 2025-12-13 09:41:09.014224733 +0000 UTC m=+0.145292769 container start ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 04:41:09 np0005558241 podman[423893]: 2025-12-13 09:41:09.01810828 +0000 UTC m=+0.149176316 container attach ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_banach, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:41:09 np0005558241 systemd[1]: libpod-ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7.scope: Deactivated successfully.
Dec 13 04:41:09 np0005558241 lucid_banach[423910]: 167 167
Dec 13 04:41:09 np0005558241 conmon[423910]: conmon ac60e454ea985ea618c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7.scope/container/memory.events
Dec 13 04:41:09 np0005558241 podman[423893]: 2025-12-13 09:41:09.022255813 +0000 UTC m=+0.153323879 container died ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_banach, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:41:09 np0005558241 systemd[1]: var-lib-containers-storage-overlay-86d898c0a3e0b0bc9e2e787d710a8a834a90a35b4c5ab7d287abac840c8c5402-merged.mount: Deactivated successfully.
Dec 13 04:41:09 np0005558241 podman[423893]: 2025-12-13 09:41:09.074573672 +0000 UTC m=+0.205641708 container remove ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_banach, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:41:09 np0005558241 systemd[1]: libpod-conmon-ac60e454ea985ea618c909cf572f0dbc28b25cbce5a52f3b22ca86a86e9bc0e7.scope: Deactivated successfully.
Dec 13 04:41:09 np0005558241 podman[423932]: 2025-12-13 09:41:09.275350829 +0000 UTC m=+0.048552697 container create f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_nobel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:41:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:41:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Dec 13 04:41:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Dec 13 04:41:09 np0005558241 ceph-mon[76537]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Dec 13 04:41:09 np0005558241 systemd[1]: Started libpod-conmon-f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236.scope.
Dec 13 04:41:09 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:41:09 np0005558241 podman[423932]: 2025-12-13 09:41:09.255485226 +0000 UTC m=+0.028687074 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:41:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a01ac19fe98a9964a012998fc998858200d53990ed0269c60f876104e9fc36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:41:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a01ac19fe98a9964a012998fc998858200d53990ed0269c60f876104e9fc36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:41:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a01ac19fe98a9964a012998fc998858200d53990ed0269c60f876104e9fc36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:41:09 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a01ac19fe98a9964a012998fc998858200d53990ed0269c60f876104e9fc36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:41:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:41:09
Dec 13 04:41:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:41:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:41:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'volumes', '.mgr', '.rgw.root', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups']
Dec 13 04:41:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:41:09 np0005558241 podman[423932]: 2025-12-13 09:41:09.66468412 +0000 UTC m=+0.437885988 container init f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_nobel, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:41:09 np0005558241 podman[423932]: 2025-12-13 09:41:09.672175666 +0000 UTC m=+0.445377514 container start f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_nobel, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:41:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4219: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 04:41:09 np0005558241 podman[423932]: 2025-12-13 09:41:09.923341574 +0000 UTC m=+0.696543452 container attach f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:41:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:41:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:41:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:41:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:41:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:41:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:41:10 np0005558241 lvm[424031]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:41:10 np0005558241 lvm[424031]: VG ceph_vg2 finished
Dec 13 04:41:10 np0005558241 lvm[424026]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:41:10 np0005558241 lvm[424026]: VG ceph_vg0 finished
Dec 13 04:41:10 np0005558241 lvm[424028]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:41:10 np0005558241 lvm[424028]: VG ceph_vg1 finished
Dec 13 04:41:10 np0005558241 elastic_nobel[423949]: {}
Dec 13 04:41:10 np0005558241 systemd[1]: libpod-f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236.scope: Deactivated successfully.
Dec 13 04:41:10 np0005558241 systemd[1]: libpod-f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236.scope: Consumed 1.465s CPU time.
Dec 13 04:41:10 np0005558241 podman[423932]: 2025-12-13 09:41:10.607256221 +0000 UTC m=+1.380458109 container died f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_nobel, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 04:41:10 np0005558241 systemd[1]: var-lib-containers-storage-overlay-a2a01ac19fe98a9964a012998fc998858200d53990ed0269c60f876104e9fc36-merged.mount: Deactivated successfully.
Dec 13 04:41:10 np0005558241 podman[423932]: 2025-12-13 09:41:10.649185212 +0000 UTC m=+1.422387060 container remove f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 04:41:10 np0005558241 systemd[1]: libpod-conmon-f85d1a18dda481e4130d6c9ab71b402126f8edaf3483179a5afe041412d7d236.scope: Deactivated successfully.
Dec 13 04:41:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:41:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:41:10 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:41:10 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:41:10 np0005558241 nova_compute[248510]: 2025-12-13 09:41:10.905 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:41:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:41:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:41:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:41:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:41:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:41:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:41:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:41:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:41:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:41:11 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:41:11 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:41:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4220: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 13 04:41:11 np0005558241 nova_compute[248510]: 2025-12-13 09:41:11.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:41:11 np0005558241 nova_compute[248510]: 2025-12-13 09:41:11.913 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4221: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Dec 13 04:41:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:41:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:41:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3045476058' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:41:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:41:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3045476058' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:41:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4222: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:15 np0005558241 nova_compute[248510]: 2025-12-13 09:41:15.907 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:16 np0005558241 nova_compute[248510]: 2025-12-13 09:41:16.957 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4223: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:18 np0005558241 podman[424072]: 2025-12-13 09:41:17.999587379 +0000 UTC m=+0.088033597 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:41:18 np0005558241 podman[424071]: 2025-12-13 09:41:18.037283525 +0000 UTC m=+0.126258737 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:41:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:41:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4224: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:20 np0005558241 podman[424116]: 2025-12-13 09:41:20.001796599 +0000 UTC m=+0.090490138 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 04:41:20 np0005558241 nova_compute[248510]: 2025-12-13 09:41:20.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:41:20 np0005558241 nova_compute[248510]: 2025-12-13 09:41:20.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 13 04:41:20 np0005558241 nova_compute[248510]: 2025-12-13 09:41:20.908 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4225: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5435278116822236e-05 of space, bias 1.0, pg target 0.004630583435046671 quantized to 32 (current 32)
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.826108195861351e-06 of space, bias 1.0, pg target 0.0011478324587584053 quantized to 32 (current 32)
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.968770392745114e-07 of space, bias 4.0, pg target 0.0007162524471294137 quantized to 16 (current 32)
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:41:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:41:21 np0005558241 nova_compute[248510]: 2025-12-13 09:41:21.960 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4226: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:41:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4227: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:25 np0005558241 nova_compute[248510]: 2025-12-13 09:41:25.910 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:27 np0005558241 nova_compute[248510]: 2025-12-13 09:41:27.001 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4228: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:28 np0005558241 nova_compute[248510]: 2025-12-13 09:41:28.773 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #198. Immutable memtables: 0.
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.590691) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 198
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618889590774, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2077, "num_deletes": 253, "total_data_size": 3669057, "memory_usage": 3714720, "flush_reason": "Manual Compaction"}
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #199: started
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618889608723, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 199, "file_size": 2126493, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82142, "largest_seqno": 84218, "table_properties": {"data_size": 2119686, "index_size": 3620, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17702, "raw_average_key_size": 20, "raw_value_size": 2104405, "raw_average_value_size": 2493, "num_data_blocks": 165, "num_entries": 844, "num_filter_entries": 844, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765618657, "oldest_key_time": 1765618657, "file_creation_time": 1765618889, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 199, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 18088 microseconds, and 5881 cpu microseconds.
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.608790) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #199: 2126493 bytes OK
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.608819) [db/memtable_list.cc:519] [default] Level-0 commit table #199 started
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.611568) [db/memtable_list.cc:722] [default] Level-0 commit table #199: memtable #1 done
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.611593) EVENT_LOG_v1 {"time_micros": 1765618889611585, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.611621) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 3660301, prev total WAL file size 3660301, number of live WAL files 2.
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000195.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.613313) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353133' seq:72057594037927935, type:22 .. '6D6772737461740033373634' seq:0, type:0; will stop at (end)
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [199(2076KB)], [197(11MB)]
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618889613381, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [199], "files_L6": [197], "score": -1, "input_data_size": 13673695, "oldest_snapshot_seqno": -1}
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #200: 10164 keys, 11645808 bytes, temperature: kUnknown
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618889707819, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 200, "file_size": 11645808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11583645, "index_size": 35618, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25477, "raw_key_size": 266365, "raw_average_key_size": 26, "raw_value_size": 11408266, "raw_average_value_size": 1122, "num_data_blocks": 1372, "num_entries": 10164, "num_filter_entries": 10164, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765618889, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.708295) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 11645808 bytes
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.711605) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.6 rd, 123.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.0 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(11.9) write-amplify(5.5) OK, records in: 10580, records dropped: 416 output_compression: NoCompression
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.711636) EVENT_LOG_v1 {"time_micros": 1765618889711621, "job": 124, "event": "compaction_finished", "compaction_time_micros": 94578, "compaction_time_cpu_micros": 30022, "output_level": 6, "num_output_files": 1, "total_output_size": 11645808, "num_input_records": 10580, "num_output_records": 10164, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000199.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618889712598, "job": 124, "event": "table_file_deletion", "file_number": 199}
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618889716818, "job": 124, "event": "table_file_deletion", "file_number": 197}
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.613121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.717015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.717026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.717028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.717030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:41:29 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:29.717033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:41:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4229: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:30 np0005558241 nova_compute[248510]: 2025-12-13 09:41:30.914 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #201. Immutable memtables: 0.
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.610111) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 201
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618891610160, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 270, "num_deletes": 251, "total_data_size": 42860, "memory_usage": 49048, "flush_reason": "Manual Compaction"}
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #202: started
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618891613106, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 202, "file_size": 42752, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84219, "largest_seqno": 84488, "table_properties": {"data_size": 40888, "index_size": 92, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4821, "raw_average_key_size": 18, "raw_value_size": 37294, "raw_average_value_size": 141, "num_data_blocks": 4, "num_entries": 264, "num_filter_entries": 264, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765618890, "oldest_key_time": 1765618890, "file_creation_time": 1765618891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 202, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 3034 microseconds, and 882 cpu microseconds.
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.613144) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #202: 42752 bytes OK
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.613161) [db/memtable_list.cc:519] [default] Level-0 commit table #202 started
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.614419) [db/memtable_list.cc:722] [default] Level-0 commit table #202: memtable #1 done
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.614432) EVENT_LOG_v1 {"time_micros": 1765618891614428, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.614451) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 40791, prev total WAL file size 40791, number of live WAL files 2.
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000198.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.614840) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [202(41KB)], [200(11MB)]
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618891614929, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [202], "files_L6": [200], "score": -1, "input_data_size": 11688560, "oldest_snapshot_seqno": -1}
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #203: 9919 keys, 9856043 bytes, temperature: kUnknown
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618891698380, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 203, "file_size": 9856043, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9797209, "index_size": 32929, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24837, "raw_key_size": 261967, "raw_average_key_size": 26, "raw_value_size": 9627727, "raw_average_value_size": 970, "num_data_blocks": 1250, "num_entries": 9919, "num_filter_entries": 9919, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765611045, "oldest_key_time": 0, "file_creation_time": 1765618891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7ffae10-18d5-4cdc-93fc-8955429ef551", "db_session_id": "DFSB2OF23V678EVCNB4J", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.698688) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 9856043 bytes
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.699834) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.9 rd, 118.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 11.1 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(503.9) write-amplify(230.5) OK, records in: 10428, records dropped: 509 output_compression: NoCompression
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.699852) EVENT_LOG_v1 {"time_micros": 1765618891699844, "job": 126, "event": "compaction_finished", "compaction_time_micros": 83553, "compaction_time_cpu_micros": 30026, "output_level": 6, "num_output_files": 1, "total_output_size": 9856043, "num_input_records": 10428, "num_output_records": 9919, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000202.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618891699983, "job": 126, "event": "table_file_deletion", "file_number": 202}
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765618891702295, "job": 126, "event": "table_file_deletion", "file_number": 200}
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.614707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.702327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.702332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.702333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.702335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:41:31 np0005558241 ceph-mon[76537]: rocksdb: (Original Log Time 2025/12/13-09:41:31.702337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:41:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4230: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:32 np0005558241 nova_compute[248510]: 2025-12-13 09:41:32.004 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4231: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:41:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4232: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:35 np0005558241 nova_compute[248510]: 2025-12-13 09:41:35.916 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:36 np0005558241 nova_compute[248510]: 2025-12-13 09:41:36.781 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:41:37 np0005558241 nova_compute[248510]: 2025-12-13 09:41:37.008 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4233: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:41:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4234: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:41:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:41:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:41:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:41:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:41:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:41:40 np0005558241 nova_compute[248510]: 2025-12-13 09:41:40.917 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4235: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:42 np0005558241 nova_compute[248510]: 2025-12-13 09:41:42.011 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:43 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4236: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:44 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:41:45 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4237: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:45 np0005558241 nova_compute[248510]: 2025-12-13 09:41:45.919 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:47 np0005558241 nova_compute[248510]: 2025-12-13 09:41:47.013 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:47 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4238: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:47 np0005558241 nova_compute[248510]: 2025-12-13 09:41:47.825 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:41:48 np0005558241 nova_compute[248510]: 2025-12-13 09:41:48.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:41:48 np0005558241 podman[424138]: 2025-12-13 09:41:48.97019027 +0000 UTC m=+0.055876909 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:41:49 np0005558241 podman[424137]: 2025-12-13 09:41:49.001970559 +0000 UTC m=+0.090406696 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Dec 13 04:41:49 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:41:49 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4239: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:50 np0005558241 nova_compute[248510]: 2025-12-13 09:41:50.921 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:50 np0005558241 podman[424181]: 2025-12-13 09:41:50.96833744 +0000 UTC m=+0.058812002 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd)
Dec 13 04:41:51 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4240: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:52 np0005558241 nova_compute[248510]: 2025-12-13 09:41:52.017 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:52 np0005558241 nova_compute[248510]: 2025-12-13 09:41:52.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:41:53 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4241: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:54 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:41:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:41:55.476 158419 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:41:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:41:55.477 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:41:55 np0005558241 ovn_metadata_agent[158414]: 2025-12-13 09:41:55.477 158419 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:41:55 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4242: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:55 np0005558241 nova_compute[248510]: 2025-12-13 09:41:55.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:41:55 np0005558241 nova_compute[248510]: 2025-12-13 09:41:55.924 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:56 np0005558241 nova_compute[248510]: 2025-12-13 09:41:56.503 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:41:56 np0005558241 nova_compute[248510]: 2025-12-13 09:41:56.503 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:41:56 np0005558241 nova_compute[248510]: 2025-12-13 09:41:56.504 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:41:56 np0005558241 nova_compute[248510]: 2025-12-13 09:41:56.504 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 13 04:41:56 np0005558241 nova_compute[248510]: 2025-12-13 09:41:56.504 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:41:57 np0005558241 nova_compute[248510]: 2025-12-13 09:41:57.019 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:41:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:41:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2082699147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:41:57 np0005558241 nova_compute[248510]: 2025-12-13 09:41:57.062 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:41:57 np0005558241 nova_compute[248510]: 2025-12-13 09:41:57.217 248514 WARNING nova.virt.libvirt.driver [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 13 04:41:57 np0005558241 nova_compute[248510]: 2025-12-13 09:41:57.218 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3514MB free_disk=59.987355314195156GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 13 04:41:57 np0005558241 nova_compute[248510]: 2025-12-13 09:41:57.219 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 13 04:41:57 np0005558241 nova_compute[248510]: 2025-12-13 09:41:57.219 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 13 04:41:57 np0005558241 nova_compute[248510]: 2025-12-13 09:41:57.326 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 13 04:41:57 np0005558241 nova_compute[248510]: 2025-12-13 09:41:57.327 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 13 04:41:57 np0005558241 nova_compute[248510]: 2025-12-13 09:41:57.360 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 13 04:41:57 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4243: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:57 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:41:57 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2640706481' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:41:57 np0005558241 nova_compute[248510]: 2025-12-13 09:41:57.954 248514 DEBUG oslo_concurrency.processutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 13 04:41:57 np0005558241 nova_compute[248510]: 2025-12-13 09:41:57.962 248514 DEBUG nova.compute.provider_tree [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed in ProviderTree for provider: f581abbc-7737-4dd5-b64e-9a4263571fb3 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 13 04:41:57 np0005558241 nova_compute[248510]: 2025-12-13 09:41:57.994 248514 DEBUG nova.scheduler.client.report [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Inventory has not changed for provider f581abbc-7737-4dd5-b64e-9a4263571fb3 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 13 04:41:57 np0005558241 nova_compute[248510]: 2025-12-13 09:41:57.997 248514 DEBUG nova.compute.resource_tracker [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 13 04:41:57 np0005558241 nova_compute[248510]: 2025-12-13 09:41:57.998 248514 DEBUG oslo_concurrency.lockutils [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 13 04:41:59 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:41:59 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4244: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:41:59 np0005558241 nova_compute[248510]: 2025-12-13 09:41:59.998 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:42:00 np0005558241 nova_compute[248510]: 2025-12-13 09:41:59.999 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 13 04:42:00 np0005558241 nova_compute[248510]: 2025-12-13 09:41:59.999 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 13 04:42:00 np0005558241 nova_compute[248510]: 2025-12-13 09:42:00.925 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:42:01 np0005558241 nova_compute[248510]: 2025-12-13 09:42:01.540 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec 13 04:42:01 np0005558241 nova_compute[248510]: 2025-12-13 09:42:01.771 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:42:01 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4245: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:02 np0005558241 nova_compute[248510]: 2025-12-13 09:42:02.022 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:42:03 np0005558241 nova_compute[248510]: 2025-12-13 09:42:03.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:42:03 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4246: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:42:03 np0005558241 ceph-osd[87041]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s#012Cumulative WAL: 48K writes, 17K syncs, 2.74 writes per sync, written: 0.19 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 496 writes, 1133 keys, 496 commit groups, 1.0 writes per commit group, ingest: 0.52 MB, 0.00 MB/s#012Interval WAL: 496 writes, 229 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:42:04 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:42:04 np0005558241 nova_compute[248510]: 2025-12-13 09:42:04.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:42:04 np0005558241 nova_compute[248510]: 2025-12-13 09:42:04.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 13 04:42:05 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4247: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:05 np0005558241 nova_compute[248510]: 2025-12-13 09:42:05.930 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:42:07 np0005558241 nova_compute[248510]: 2025-12-13 09:42:07.070 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:42:07 np0005558241 nova_compute[248510]: 2025-12-13 09:42:07.772 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:42:07 np0005558241 nova_compute[248510]: 2025-12-13 09:42:07.773 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 13 04:42:07 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4248: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:08 np0005558241 nova_compute[248510]: 2025-12-13 09:42:08.261 248514 DEBUG nova.compute.manager [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 13 04:42:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Optimize plan auto_2025-12-13_09:42:09
Dec 13 04:42:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:42:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] do_upmap
Dec 13 04:42:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.control', '.rgw.root', '.mgr', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'default.rgw.meta']
Dec 13 04:42:09 np0005558241 ceph-mgr[76830]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:42:09 np0005558241 nova_compute[248510]: 2025-12-13 09:42:09.714 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:42:09 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4249: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:09 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:42:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:42:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:42:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:42:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:42:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:42:10 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:42:10 np0005558241 nova_compute[248510]: 2025-12-13 09:42:10.932 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:42:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:42:11 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7801.8 total, 600.0 interval#012Cumulative writes: 49K writes, 190K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.02 MB/s#012Cumulative WAL: 49K writes, 18K syncs, 2.72 writes per sync, written: 0.18 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 525 writes, 1278 keys, 525 commit groups, 1.0 writes per commit group, ingest: 0.50 MB, 0.00 MB/s#012Interval WAL: 525 writes, 234 syncs, 2.24 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:42:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:42:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:42:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:42:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:42:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:42:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:42:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:42:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:42:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:42:11 np0005558241 ceph-mgr[76830]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:42:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:42:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:42:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:42:11 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:42:11 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:42:11 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4250: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:12 np0005558241 nova_compute[248510]: 2025-12-13 09:42:12.073 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:42:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:42:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:42:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:42:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:42:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:42:12 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:42:12 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:42:12 np0005558241 podman[424392]: 2025-12-13 09:42:12.579869011 +0000 UTC m=+0.040587929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:42:12 np0005558241 nova_compute[248510]: 2025-12-13 09:42:12.801 248514 DEBUG oslo_service.periodic_task [None req-c7c7d884-2a1f-4b13-b70c-8a4826c0880f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 13 04:42:13 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4251: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:42:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:42:13 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:42:13 np0005558241 podman[424392]: 2025-12-13 09:42:13.988277492 +0000 UTC m=+1.448996420 container create a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclaren, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:42:14 np0005558241 systemd[1]: Started libpod-conmon-a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a.scope.
Dec 13 04:42:14 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:42:14 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:42:14 np0005558241 podman[424392]: 2025-12-13 09:42:14.809786367 +0000 UTC m=+2.270505315 container init a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclaren, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:42:14 np0005558241 podman[424392]: 2025-12-13 09:42:14.822336878 +0000 UTC m=+2.283055826 container start a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:42:14 np0005558241 systemd[1]: libpod-a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a.scope: Deactivated successfully.
Dec 13 04:42:14 np0005558241 quirky_mclaren[424409]: 167 167
Dec 13 04:42:14 np0005558241 conmon[424409]: conmon a5de0316ede0394b02bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a.scope/container/memory.events
Dec 13 04:42:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:42:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1186689776' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:42:15 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:42:15 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1186689776' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:42:15 np0005558241 podman[424392]: 2025-12-13 09:42:15.420872795 +0000 UTC m=+2.881591713 container attach a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:42:15 np0005558241 podman[424392]: 2025-12-13 09:42:15.422533906 +0000 UTC m=+2.883252844 container died a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclaren, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 04:42:15 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4252: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:15 np0005558241 nova_compute[248510]: 2025-12-13 09:42:15.934 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:42:16 np0005558241 systemd-logind[787]: New session 59 of user zuul.
Dec 13 04:42:16 np0005558241 systemd[1]: Started Session 59 of User zuul.
Dec 13 04:42:17 np0005558241 nova_compute[248510]: 2025-12-13 09:42:17.077 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:42:17 np0005558241 systemd[1]: var-lib-containers-storage-overlay-7717a5d7a392ea09ba24430527defec3cca24a5a21236568ae7d3edd14675062-merged.mount: Deactivated successfully.
Dec 13 04:42:17 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4253: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:19 np0005558241 podman[424392]: 2025-12-13 09:42:19.137038666 +0000 UTC m=+6.597757604 container remove a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 04:42:19 np0005558241 systemd[1]: libpod-conmon-a5de0316ede0394b02bf90393b007647ce69b472208fbf99d4eb2eb8712e532a.scope: Deactivated successfully.
Dec 13 04:42:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:42:19 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.2 total, 600.0 interval#012Cumulative writes: 38K writes, 154K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s#012Cumulative WAL: 38K writes, 13K syncs, 2.80 writes per sync, written: 0.15 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 443 writes, 954 keys, 443 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s#012Interval WAL: 443 writes, 192 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:42:19 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4254: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:19 np0005558241 podman[424502]: 2025-12-13 09:42:19.701565186 +0000 UTC m=+0.389933735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:42:19 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:42:20 np0005558241 podman[424502]: 2025-12-13 09:42:20.280350272 +0000 UTC m=+0.968718731 container create 6ef1dcc74535acba1d6137ef0c1f12df33158107f6fc6c76dcb2a0cbf71cbeac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_elgamal, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 04:42:20 np0005558241 podman[424498]: 2025-12-13 09:42:20.413114169 +0000 UTC m=+1.116534072 container health_status bb20bb621ee557043ca155cd703dee5082b864cbc998d48ffbac86b1217077f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, 
config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:42:20 np0005558241 podman[424494]: 2025-12-13 09:42:20.428778579 +0000 UTC m=+1.140011296 container health_status 2ebc2799303fe01b8f31560b726a9953e77f57cb89259fe7932365d1ecc715e1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Dec 13 04:42:20 np0005558241 systemd[1]: Started libpod-conmon-6ef1dcc74535acba1d6137ef0c1f12df33158107f6fc6c76dcb2a0cbf71cbeac.scope.
Dec 13 04:42:20 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:42:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcf2b9857f9081ec73c67c9c8a295dd3ad8b8531624a1a3de561bc3330f00e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:42:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcf2b9857f9081ec73c67c9c8a295dd3ad8b8531624a1a3de561bc3330f00e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:42:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcf2b9857f9081ec73c67c9c8a295dd3ad8b8531624a1a3de561bc3330f00e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:42:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcf2b9857f9081ec73c67c9c8a295dd3ad8b8531624a1a3de561bc3330f00e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:42:20 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dcf2b9857f9081ec73c67c9c8a295dd3ad8b8531624a1a3de561bc3330f00e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:42:20 np0005558241 podman[424502]: 2025-12-13 09:42:20.72675129 +0000 UTC m=+1.415119769 container init 6ef1dcc74535acba1d6137ef0c1f12df33158107f6fc6c76dcb2a0cbf71cbeac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 04:42:20 np0005558241 podman[424502]: 2025-12-13 09:42:20.735720562 +0000 UTC m=+1.424089011 container start 6ef1dcc74535acba1d6137ef0c1f12df33158107f6fc6c76dcb2a0cbf71cbeac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:42:20 np0005558241 podman[424502]: 2025-12-13 09:42:20.922968633 +0000 UTC m=+1.611337092 container attach 6ef1dcc74535acba1d6137ef0c1f12df33158107f6fc6c76dcb2a0cbf71cbeac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 04:42:20 np0005558241 nova_compute[248510]: 2025-12-13 09:42:20.936 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:42:21 np0005558241 xenodochial_elgamal[424572]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:42:21 np0005558241 xenodochial_elgamal[424572]: --> All data devices are unavailable
Dec 13 04:42:21 np0005558241 systemd[1]: libpod-6ef1dcc74535acba1d6137ef0c1f12df33158107f6fc6c76dcb2a0cbf71cbeac.scope: Deactivated successfully.
Dec 13 04:42:21 np0005558241 podman[424502]: 2025-12-13 09:42:21.329239354 +0000 UTC m=+2.017607823 container died 6ef1dcc74535acba1d6137ef0c1f12df33158107f6fc6c76dcb2a0cbf71cbeac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4255: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5435278116822236e-05 of space, bias 1.0, pg target 0.004630583435046671 quantized to 32 (current 32)
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.826108195861351e-06 of space, bias 1.0, pg target 0.0011478324587584053 quantized to 32 (current 32)
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.968770392745114e-07 of space, bias 4.0, pg target 0.0007162524471294137 quantized to 16 (current 32)
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:42:21 np0005558241 ceph-mgr[76830]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:42:22 np0005558241 nova_compute[248510]: 2025-12-13 09:42:22.080 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:42:22 np0005558241 systemd[1]: var-lib-containers-storage-overlay-0dcf2b9857f9081ec73c67c9c8a295dd3ad8b8531624a1a3de561bc3330f00e4-merged.mount: Deactivated successfully.
Dec 13 04:42:22 np0005558241 podman[424502]: 2025-12-13 09:42:22.387629723 +0000 UTC m=+3.075998182 container remove 6ef1dcc74535acba1d6137ef0c1f12df33158107f6fc6c76dcb2a0cbf71cbeac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_elgamal, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 04:42:22 np0005558241 systemd[1]: libpod-conmon-6ef1dcc74535acba1d6137ef0c1f12df33158107f6fc6c76dcb2a0cbf71cbeac.scope: Deactivated successfully.
Dec 13 04:42:22 np0005558241 podman[424621]: 2025-12-13 09:42:22.492505377 +0000 UTC m=+1.117887547 container health_status a6cf51c21be830d1aed33c779cdcbdea761ae318fdca6a5c685b9cbd37b301c3 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:42:22 np0005558241 podman[424782]: 2025-12-13 09:42:22.905012833 +0000 UTC m=+0.041123732 container create c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 04:42:22 np0005558241 systemd[1]: Started libpod-conmon-c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7.scope.
Dec 13 04:42:22 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23186 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:42:22 np0005558241 podman[424782]: 2025-12-13 09:42:22.885658993 +0000 UTC m=+0.021769912 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:42:22 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:42:23 np0005558241 podman[424782]: 2025-12-13 09:42:23.005181161 +0000 UTC m=+0.141292090 container init c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:42:23 np0005558241 podman[424782]: 2025-12-13 09:42:23.019890737 +0000 UTC m=+0.156001676 container start c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:42:23 np0005558241 podman[424782]: 2025-12-13 09:42:23.02525854 +0000 UTC m=+0.161369479 container attach c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:42:23 np0005558241 hopeful_wescoff[424799]: 167 167
Dec 13 04:42:23 np0005558241 systemd[1]: libpod-c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7.scope: Deactivated successfully.
Dec 13 04:42:23 np0005558241 conmon[424799]: conmon c2d6df58c0bdda285d0f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7.scope/container/memory.events
Dec 13 04:42:23 np0005558241 podman[424782]: 2025-12-13 09:42:23.029825513 +0000 UTC m=+0.165936432 container died c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 04:42:23 np0005558241 systemd[1]: var-lib-containers-storage-overlay-51da354d36afb062bd30b45df001a9991ac848ad7297a6be9c9ca1edc4b67018-merged.mount: Deactivated successfully.
Dec 13 04:42:23 np0005558241 podman[424782]: 2025-12-13 09:42:23.078425101 +0000 UTC m=+0.214536000 container remove c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_wescoff, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 04:42:23 np0005558241 systemd[1]: libpod-conmon-c2d6df58c0bdda285d0f7f187d5c387affd1541ae88786216a9077a9f62d17f7.scope: Deactivated successfully.
Dec 13 04:42:23 np0005558241 podman[424831]: 2025-12-13 09:42:23.28015971 +0000 UTC m=+0.054839372 container create e136ba7128bb3736a7f03b9835425fdf10e531736b872b2d043c2e182505942a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_jackson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:42:23 np0005558241 systemd[1]: Started libpod-conmon-e136ba7128bb3736a7f03b9835425fdf10e531736b872b2d043c2e182505942a.scope.
Dec 13 04:42:23 np0005558241 podman[424831]: 2025-12-13 09:42:23.259426545 +0000 UTC m=+0.034106207 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:42:23 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:42:23 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d294c4bc6fba48d05e4f2e8fef8346340749eb76655dbb461a4d7ab09fe4d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:42:23 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d294c4bc6fba48d05e4f2e8fef8346340749eb76655dbb461a4d7ab09fe4d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:42:23 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d294c4bc6fba48d05e4f2e8fef8346340749eb76655dbb461a4d7ab09fe4d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:42:23 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d294c4bc6fba48d05e4f2e8fef8346340749eb76655dbb461a4d7ab09fe4d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:42:23 np0005558241 podman[424831]: 2025-12-13 09:42:23.389872165 +0000 UTC m=+0.164551857 container init e136ba7128bb3736a7f03b9835425fdf10e531736b872b2d043c2e182505942a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_jackson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:42:23 np0005558241 podman[424831]: 2025-12-13 09:42:23.401980666 +0000 UTC m=+0.176660328 container start e136ba7128bb3736a7f03b9835425fdf10e531736b872b2d043c2e182505942a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_jackson, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 04:42:23 np0005558241 podman[424831]: 2025-12-13 09:42:23.40776028 +0000 UTC m=+0.182439962 container attach e136ba7128bb3736a7f03b9835425fdf10e531736b872b2d043c2e182505942a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:42:23 np0005558241 ceph-mgr[76830]: [devicehealth INFO root] Check health
Dec 13 04:42:23 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23188 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]: {
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:    "0": [
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:        {
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "devices": [
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "/dev/loop3"
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            ],
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "lv_name": "ceph_lv0",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "lv_size": "21470642176",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6879a430-e4fa-44e8-aed5-572a3459a9c1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "lv_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "name": "ceph_lv0",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "tags": {
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.block_uuid": "WlfzVv-OilX-o258-L2vO-g7kg-t9SA-537DsA",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.cluster_name": "ceph",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.crush_device_class": "",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.encrypted": "0",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.objectstore": "bluestore",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.osd_fsid": "6879a430-e4fa-44e8-aed5-572a3459a9c1",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.osd_id": "0",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.type": "block",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.vdo": "0",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.with_tpm": "0"
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            },
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "type": "block",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "vg_name": "ceph_vg0"
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:        }
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:    ],
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:    "1": [
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:        {
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "devices": [
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "/dev/loop4"
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            ],
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "lv_name": "ceph_lv1",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "lv_size": "21470642176",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a9563d22-728f-45a3-a243-89e4c60c7325,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "lv_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "name": "ceph_lv1",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "tags": {
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.block_uuid": "8LcLWZ-q5XL-o6RI-bwVe-pg0L-VxHn-jNZPdd",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.cluster_name": "ceph",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.crush_device_class": "",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.encrypted": "0",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.objectstore": "bluestore",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.osd_fsid": "a9563d22-728f-45a3-a243-89e4c60c7325",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.osd_id": "1",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.type": "block",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.vdo": "0",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.with_tpm": "0"
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            },
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "type": "block",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "vg_name": "ceph_vg1"
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:        }
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:    ],
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:    "2": [
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:        {
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "devices": [
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "/dev/loop5"
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            ],
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "lv_name": "ceph_lv2",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "lv_size": "21470642176",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=18ee9de6-e00b-571b-ab9b-b7aab06174df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=cfcb57a7-9012-4974-af2d-c0c014a6860e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "lv_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "name": "ceph_lv2",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "tags": {
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.block_uuid": "wiI2go-ePsj-LzL1-W0y7-nZ8B-CDsc-UPm65X",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.cephx_lockbox_secret": "",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.cluster_fsid": "18ee9de6-e00b-571b-ab9b-b7aab06174df",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.cluster_name": "ceph",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.crush_device_class": "",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.encrypted": "0",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.objectstore": "bluestore",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.osd_fsid": "cfcb57a7-9012-4974-af2d-c0c014a6860e",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.osd_id": "2",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.type": "block",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.vdo": "0",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:                "ceph.with_tpm": "0"
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            },
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "type": "block",
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:            "vg_name": "ceph_vg2"
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:        }
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]:    ]
Dec 13 04:42:23 np0005558241 lucid_jackson[424867]: }
Dec 13 04:42:23 np0005558241 systemd[1]: libpod-e136ba7128bb3736a7f03b9835425fdf10e531736b872b2d043c2e182505942a.scope: Deactivated successfully.
Dec 13 04:42:23 np0005558241 podman[424831]: 2025-12-13 09:42:23.772717684 +0000 UTC m=+0.547397346 container died e136ba7128bb3736a7f03b9835425fdf10e531736b872b2d043c2e182505942a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:42:23 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4256: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:23 np0005558241 systemd[1]: var-lib-containers-storage-overlay-42d294c4bc6fba48d05e4f2e8fef8346340749eb76655dbb461a4d7ab09fe4d7-merged.mount: Deactivated successfully.
Dec 13 04:42:23 np0005558241 podman[424831]: 2025-12-13 09:42:23.837119264 +0000 UTC m=+0.611798936 container remove e136ba7128bb3736a7f03b9835425fdf10e531736b872b2d043c2e182505942a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:42:23 np0005558241 systemd[1]: libpod-conmon-e136ba7128bb3736a7f03b9835425fdf10e531736b872b2d043c2e182505942a.scope: Deactivated successfully.
Dec 13 04:42:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 13 04:42:24 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3746652238' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 13 04:42:24 np0005558241 podman[424976]: 2025-12-13 09:42:24.366841421 +0000 UTC m=+0.051862549 container create d9b24edf42861f344559711be76186425d16ca27df498a55755aac166af543a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ramanujan, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 04:42:24 np0005558241 systemd[1]: Started libpod-conmon-d9b24edf42861f344559711be76186425d16ca27df498a55755aac166af543a9.scope.
Dec 13 04:42:24 np0005558241 podman[424976]: 2025-12-13 09:42:24.343332757 +0000 UTC m=+0.028353895 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:42:24 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:42:24 np0005558241 podman[424976]: 2025-12-13 09:42:24.464400384 +0000 UTC m=+0.149421532 container init d9b24edf42861f344559711be76186425d16ca27df498a55755aac166af543a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ramanujan, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:42:24 np0005558241 podman[424976]: 2025-12-13 09:42:24.472363162 +0000 UTC m=+0.157384290 container start d9b24edf42861f344559711be76186425d16ca27df498a55755aac166af543a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:42:24 np0005558241 podman[424976]: 2025-12-13 09:42:24.476361921 +0000 UTC m=+0.161383069 container attach d9b24edf42861f344559711be76186425d16ca27df498a55755aac166af543a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ramanujan, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:42:24 np0005558241 serene_ramanujan[424995]: 167 167
Dec 13 04:42:24 np0005558241 systemd[1]: libpod-d9b24edf42861f344559711be76186425d16ca27df498a55755aac166af543a9.scope: Deactivated successfully.
Dec 13 04:42:24 np0005558241 podman[424976]: 2025-12-13 09:42:24.479326215 +0000 UTC m=+0.164347343 container died d9b24edf42861f344559711be76186425d16ca27df498a55755aac166af543a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 04:42:24 np0005558241 systemd[1]: var-lib-containers-storage-overlay-cc4e8b187745e4bdcd783c35ca6d150b224c1e75ce71063a62fca6065cba4fd5-merged.mount: Deactivated successfully.
Dec 13 04:42:24 np0005558241 podman[424976]: 2025-12-13 09:42:24.520224141 +0000 UTC m=+0.205245269 container remove d9b24edf42861f344559711be76186425d16ca27df498a55755aac166af543a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_ramanujan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:42:24 np0005558241 systemd[1]: libpod-conmon-d9b24edf42861f344559711be76186425d16ca27df498a55755aac166af543a9.scope: Deactivated successfully.
Dec 13 04:42:24 np0005558241 podman[425038]: 2025-12-13 09:42:24.761799911 +0000 UTC m=+0.049816408 container create ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:42:24 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:42:24 np0005558241 systemd[1]: Started libpod-conmon-ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e.scope.
Dec 13 04:42:24 np0005558241 podman[425038]: 2025-12-13 09:42:24.739983889 +0000 UTC m=+0.028000406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:42:24 np0005558241 systemd[1]: Started libcrun container.
Dec 13 04:42:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278968dbeb62ffdd9ec2b44afff4d1bbd226bdf941769c108a5220c8ba212936/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:42:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278968dbeb62ffdd9ec2b44afff4d1bbd226bdf941769c108a5220c8ba212936/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:42:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278968dbeb62ffdd9ec2b44afff4d1bbd226bdf941769c108a5220c8ba212936/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:42:24 np0005558241 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278968dbeb62ffdd9ec2b44afff4d1bbd226bdf941769c108a5220c8ba212936/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:42:24 np0005558241 podman[425038]: 2025-12-13 09:42:24.871401224 +0000 UTC m=+0.159417751 container init ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 04:42:24 np0005558241 podman[425038]: 2025-12-13 09:42:24.880174141 +0000 UTC m=+0.168190638 container start ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:42:24 np0005558241 podman[425038]: 2025-12-13 09:42:24.884030457 +0000 UTC m=+0.172046984 container attach ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:42:25 np0005558241 lvm[425140]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:42:25 np0005558241 lvm[425140]: VG ceph_vg0 finished
Dec 13 04:42:25 np0005558241 lvm[425139]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:42:25 np0005558241 lvm[425139]: VG ceph_vg1 finished
Dec 13 04:42:25 np0005558241 lvm[425142]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:42:25 np0005558241 lvm[425142]: VG ceph_vg2 finished
Dec 13 04:42:25 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4257: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:25 np0005558241 bold_wilson[425055]: {}
Dec 13 04:42:25 np0005558241 systemd[1]: libpod-ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e.scope: Deactivated successfully.
Dec 13 04:42:25 np0005558241 systemd[1]: libpod-ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e.scope: Consumed 1.574s CPU time.
Dec 13 04:42:25 np0005558241 podman[425038]: 2025-12-13 09:42:25.877679737 +0000 UTC m=+1.165696474 container died ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 04:42:25 np0005558241 systemd[1]: var-lib-containers-storage-overlay-278968dbeb62ffdd9ec2b44afff4d1bbd226bdf941769c108a5220c8ba212936-merged.mount: Deactivated successfully.
Dec 13 04:42:25 np0005558241 nova_compute[248510]: 2025-12-13 09:42:25.938 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:42:25 np0005558241 podman[425038]: 2025-12-13 09:42:25.97199693 +0000 UTC m=+1.260013427 container remove ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:42:25 np0005558241 systemd[1]: libpod-conmon-ee017a8735464f71d8d36cbb1a22860dfd95a6d99adaa15d479e48a565db878e.scope: Deactivated successfully.
Dec 13 04:42:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:42:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:42:26 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:42:26 np0005558241 ceph-mon[76537]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:42:27 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:42:27 np0005558241 ceph-mon[76537]: from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' 
Dec 13 04:42:27 np0005558241 nova_compute[248510]: 2025-12-13 09:42:27.083 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:42:27 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4258: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:28 np0005558241 ovs-vsctl[425211]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 13 04:42:29 np0005558241 virtqemud[248808]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 13 04:42:29 np0005558241 virtqemud[248808]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 13 04:42:29 np0005558241 virtqemud[248808]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 13 04:42:29 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4259: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:29 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:42:29 np0005558241 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: cache status {prefix=cache status} (starting...)
Dec 13 04:42:30 np0005558241 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: client ls {prefix=client ls} (starting...)
Dec 13 04:42:30 np0005558241 lvm[425545]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:42:30 np0005558241 lvm[425545]: VG ceph_vg1 finished
Dec 13 04:42:30 np0005558241 lvm[425568]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:42:30 np0005558241 lvm[425568]: VG ceph_vg0 finished
Dec 13 04:42:30 np0005558241 lvm[425577]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:42:30 np0005558241 lvm[425577]: VG ceph_vg2 finished
Dec 13 04:42:30 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23192 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:42:30 np0005558241 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: damage ls {prefix=damage ls} (starting...)
Dec 13 04:42:30 np0005558241 nova_compute[248510]: 2025-12-13 09:42:30.940 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:42:30 np0005558241 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: dump loads {prefix=dump loads} (starting...)
Dec 13 04:42:31 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23194 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:42:31 np0005558241 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 13 04:42:31 np0005558241 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 13 04:42:31 np0005558241 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 13 04:42:31 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23196 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:42:31 np0005558241 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 13 04:42:31 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Dec 13 04:42:31 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/596733442' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 13 04:42:31 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4260: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:31 np0005558241 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 13 04:42:31 np0005558241 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 13 04:42:32 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23200 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:42:32 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx[76826]: 2025-12-13T09:42:32.037+0000 7f1f704e3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 04:42:32 np0005558241 ceph-mgr[76830]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 04:42:32 np0005558241 nova_compute[248510]: 2025-12-13 09:42:32.085 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:42:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:42:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3143079818' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:42:32 np0005558241 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: ops {prefix=ops} (starting...)
Dec 13 04:42:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Dec 13 04:42:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3759029824' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 13 04:42:32 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec 13 04:42:32 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2971111549' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 13 04:42:32 np0005558241 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: session ls {prefix=session ls} (starting...)
Dec 13 04:42:33 np0005558241 ceph-mds[98037]: mds.cephfs.compute-0.gfdyct asok_command: status {prefix=status} (starting...)
Dec 13 04:42:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 13 04:42:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/82207280' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 04:42:33 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 13 04:42:33 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3438341048' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 13 04:42:33 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4261: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:34 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23214 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:42:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 13 04:42:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/216254374' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 04:42:34 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23216 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:42:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 13 04:42:34 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/629908501' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 04:42:34 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:42:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Dec 13 04:42:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2820940915' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 13 04:42:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 04:42:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1684076411' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 04:42:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec 13 04:42:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2750013254' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 13 04:42:35 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 13 04:42:35 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3832629167' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 13 04:42:35 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4262: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:35 np0005558241 nova_compute[248510]: 2025-12-13 09:42:35.942 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:42:36 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23228 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:42:36 np0005558241 ceph-mgr[76830]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 13 04:42:36 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx[76826]: 2025-12-13T09:42:36.063+0000 7f1f704e3640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 13 04:42:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 13 04:42:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3980690016' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 04:42:36 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 13 04:42:36 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/175755030' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 13 04:42:37 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23234 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:42:37 np0005558241 nova_compute[248510]: 2025-12-13 09:42:37.087 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 49430528 heap: 337068032 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 49422336 heap: 337068032 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287645696 unmapped: 49422336 heap: 337068032 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3254905 data_alloc: 218103808 data_used: 4010677
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 49414144 heap: 337068032 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 49414144 heap: 337068032 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 49414144 heap: 337068032 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 49414144 heap: 337068032 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287653888 unmapped: 49414144 heap: 337068032 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e8400 session 0x562b54407500
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3254905 data_alloc: 218103808 data_used: 4010677
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980000 session 0x562b552fc1c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b54d6b880
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54dc0c00 session 0x562b552fd340
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 61.573890686s of 61.637538910s, submitted: 40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e8400 session 0x562b553461c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b555c9c00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980000 session 0x562b54d2c000
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287735808 unmapped: 56680448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b566bf180
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b553adc00 session 0x562b54406fc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287735808 unmapped: 56680448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecbfb000/0x0/0x4ffc00000, data 0x1d5298f/0x1f11000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287735808 unmapped: 56680448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287735808 unmapped: 56680448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287735808 unmapped: 56680448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecbfb000/0x0/0x4ffc00000, data 0x1d5298f/0x1f11000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3300675 data_alloc: 218103808 data_used: 4010677
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287735808 unmapped: 56680448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287735808 unmapped: 56680448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56672256 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56672256 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecbfb000/0x0/0x4ffc00000, data 0x1d5298f/0x1f11000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56672256 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3300675 data_alloc: 218103808 data_used: 4010677
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287744000 unmapped: 56672256 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287752192 unmapped: 56664064 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287752192 unmapped: 56664064 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287752192 unmapped: 56664064 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.575314522s of 13.690111160s, submitted: 13
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e8400 session 0x562b54950e00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 56508416 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3302715 data_alloc: 218103808 data_used: 4011205
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecbd7000/0x0/0x4ffc00000, data 0x1d7698f/0x1f35000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 56508416 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecbd7000/0x0/0x4ffc00000, data 0x1d7698f/0x1f35000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecbd7000/0x0/0x4ffc00000, data 0x1d7698f/0x1f35000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339323 data_alloc: 218103808 data_used: 10128069
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339323 data_alloc: 218103808 data_used: 10128069
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 56475648 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecbd7000/0x0/0x4ffc00000, data 0x1d7698f/0x1f35000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.049160004s of 12.053641319s, submitted: 1
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289538048 unmapped: 54878208 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289603584 unmapped: 54812672 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec773000/0x0/0x4ffc00000, data 0x21da98f/0x2399000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3386233 data_alloc: 218103808 data_used: 10975941
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec74f000/0x0/0x4ffc00000, data 0x21fd98f/0x23bc000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec74f000/0x0/0x4ffc00000, data 0x21fd98f/0x23bc000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384497 data_alloc: 218103808 data_used: 10975941
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec74d000/0x0/0x4ffc00000, data 0x220098f/0x23bf000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384753 data_alloc: 218103808 data_used: 10984133
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec74d000/0x0/0x4ffc00000, data 0x220098f/0x23bf000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec74d000/0x0/0x4ffc00000, data 0x220098f/0x23bf000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 54771712 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b54e0da40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b55367a40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a1000 session 0x562b53181c00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384753 data_alloc: 218103808 data_used: 10984133
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d2000 session 0x562b53219c00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.035808563s of 19.402549744s, submitted: 44
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e8400 session 0x562b555c8fc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a1000 session 0x562b51d10fc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289660928 unmapped: 54755328 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b5819ee00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b550fce00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b52946800 session 0x562b55367500
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289660928 unmapped: 54755328 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289660928 unmapped: 54755328 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289660928 unmapped: 54755328 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289669120 unmapped: 54747136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec327000/0x0/0x4ffc00000, data 0x2624a01/0x27e5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414760 data_alloc: 218103808 data_used: 10984133
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289669120 unmapped: 54747136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec327000/0x0/0x4ffc00000, data 0x2624a01/0x27e5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289669120 unmapped: 54747136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289669120 unmapped: 54747136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289677312 unmapped: 54738944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e8400 session 0x562b5375fc00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289677312 unmapped: 54738944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a1000 session 0x562b555c8000
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3414760 data_alloc: 218103808 data_used: 10984133
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec327000/0x0/0x4ffc00000, data 0x2624a01/0x27e5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289677312 unmapped: 54738944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289677312 unmapped: 54738944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b55366a80
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.518766403s of 11.703509331s, submitted: 25
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b5250a1c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec327000/0x0/0x4ffc00000, data 0x2624a01/0x27e5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289677312 unmapped: 54738944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289677312 unmapped: 54738944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289685504 unmapped: 54730752 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415626 data_alloc: 218103808 data_used: 10984133
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289685504 unmapped: 54730752 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec326000/0x0/0x4ffc00000, data 0x2624a11/0x27e6000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec326000/0x0/0x4ffc00000, data 0x2624a11/0x27e6000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3440970 data_alloc: 234881024 data_used: 15234757
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec326000/0x0/0x4ffc00000, data 0x2624a11/0x27e6000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3440970 data_alloc: 234881024 data_used: 15234757
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 290430976 unmapped: 53985280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.469920158s of 14.109190941s, submitted: 2
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 293150720 unmapped: 51265536 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebf42000/0x0/0x4ffc00000, data 0x2a00a11/0x2bc2000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [0,0,0,0,0,1,1])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 293773312 unmapped: 50642944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 293773312 unmapped: 50642944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebcb9000/0x0/0x4ffc00000, data 0x2c83a11/0x2e45000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 293773312 unmapped: 50642944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480778 data_alloc: 234881024 data_used: 15280837
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebcab000/0x0/0x4ffc00000, data 0x2c97a11/0x2e59000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3488080 data_alloc: 234881024 data_used: 15280837
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebcab000/0x0/0x4ffc00000, data 0x2c97a11/0x2e59000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489120 data_alloc: 234881024 data_used: 15358661
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294166528 unmapped: 50249728 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.193249702s of 15.741126060s, submitted: 73
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b53219dc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b54381180
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebcab000/0x0/0x4ffc00000, data 0x2c97a11/0x2e59000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [0,0,0,0,1])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294174720 unmapped: 50241536 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 294174720 unmapped: 50241536 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b5297a1c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289529856 unmapped: 54886400 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396833 data_alloc: 218103808 data_used: 11061957
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289529856 unmapped: 54886400 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec746000/0x0/0x4ffc00000, data 0x220298f/0x23c1000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289529856 unmapped: 54886400 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289529856 unmapped: 54886400 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b5511f340
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980000 session 0x562b566be1c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289529856 unmapped: 54886400 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289529856 unmapped: 54886400 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396833 data_alloc: 218103808 data_used: 11061957
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289529856 unmapped: 54886400 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec746000/0x0/0x4ffc00000, data 0x220298f/0x23c1000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289529856 unmapped: 54886400 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.748074532s of 10.684754372s, submitted: 21
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 289529856 unmapped: 54886400 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57180160 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e8400 session 0x562b55366fc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57180160 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57180160 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57180160 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57180160 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.2 total, 600.0 interval
Cumulative writes: 35K writes, 143K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
Cumulative WAL: 35K writes, 12K syncs, 2.83 writes per sync, written: 0.14 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2470 writes, 10K keys, 2470 commit groups, 1.0 writes per commit group, ingest: 13.34 MB, 0.02 MB/s
Interval WAL: 2470 writes, 940 syncs, 2.63 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.2 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.2 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x562b50f238d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.2 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 mem
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57180160 heap: 344416256 old mem: 2845415832 new mem: 2845415832
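The `prioritycache tune_memory` lines above report byte counts: `target` is the OSD's memory target (4294967296 B = 4 GiB here), and `mapped`/`unmapped` are the allocator's in-use and freed-but-unreturned heap pages, which together add up to `heap`. A small illustrative parser (not part of Ceph; the helper name and sample string are lifted from the log for demonstration) can be used to turn one such line into numbers:

```python
import re

# Sample payload copied verbatim from the log above.
SAMPLE = ("prioritycache tune_memory target: 4294967296 mapped: 287236096 "
          "unmapped: 57180160 heap: 344416256 old mem: 2845415832 new mem: 2845415832")

def parse_tune_memory(line):
    """Return the tune_memory key/value pairs as integer byte counts.

    Keys are 'target', 'mapped', 'unmapped', 'heap', 'old mem', 'new mem'.
    """
    return {key: int(val)
            for key, val in re.findall(r'((?:old |new )?\w+): (\d+)', line)}

stats = parse_tune_memory(SAMPLE)
# mapped + unmapped equals the heap figure, so unmapped/heap is the
# fraction of heap the allocator is holding but not using.
print({k: f"{v / 2**20:.1f} MiB" for k, v in stats.items()})
```

Since `old mem` equals `new mem`, the tuner left the cache budget unchanged on this pass, which is why the same figure repeats across so many consecutive lines.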
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57180160 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
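The `_resize_shards` line splits the overall `cache_size` budget across the kv, kv_onode, meta, and data shards, each with an `_alloc` (budget) and `_used` (actual) figure. A quick sanity check, sketched below with the payload copied from the log (this parser is illustrative, not Ceph code), is that the shard budgets never exceed the total:

```python
import re

# _resize_shards payload copied verbatim from the log above.
SAMPLE = ("bluestore.MempoolThread _resize_shards cache_size: 2845415832 "
          "kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 "
          "kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 "
          "data_alloc: 218103808 data_used: 4014675")

fields = {k: int(v) for k, v in re.findall(r'(\w+): (\d+)', SAMPLE)}

# Sum the per-shard budgets; they should fit inside the overall cache budget
# (the remainder is slack left by shard-size rounding).
allocated = sum(v for k, v in fields.items() if k.endswith('_alloc'))
assert allocated <= fields['cache_size']
print(f"allocated {allocated / 2**20:.0f} MiB of "
      f"{fields['cache_size'] / 2**20:.0f} MiB cache budget")
```

In this capture the `_used` figures are tiny compared with their budgets (e.g. `kv_used: 2144` against `kv_alloc: 1207959552`), consistent with a nearly idle OSD.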
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57180160 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
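The heartbeat lines embed a `store_statfs(...)` triple of hex byte counts. A hedged sketch of decoding it follows; the field order (available / internally reserved / total) is an assumption based on BlueStore's `store_statfs_t` printer and should be checked against the Ceph release in use, and the parser itself is illustrative, not Ceph code:

```python
import re

# heartbeat payload copied verbatim from the log above.
SAMPLE = ("osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, "
          "data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, "
          "meta 0x110a2ba2), peers [0,1] op hist [])")

def parse_store_statfs(line):
    """Return the first three store_statfs fields as integers (bytes).

    Assumed order: available / internally reserved / total.
    """
    m = re.search(r'store_statfs\(0x([0-9a-f]+)/0x([0-9a-f]+)/0x([0-9a-f]+)', line)
    return tuple(int(x, 16) for x in m.groups())

avail, reserved, total = parse_store_statfs(SAMPLE)
print(f"available {avail / 2**30:.2f} GiB of {total / 2**30:.2f} GiB total")
```

Under that assumption, `0x4ffc00000` works out to roughly 20 GiB of total capacity with almost all of it free, which matches the near-idle picture painted by the rest of this log window.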
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287236096 unmapped: 57180160 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 57171968 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 57171968 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 57171968 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 57171968 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287244288 unmapped: 57171968 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57163776 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57163776 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57163776 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57163776 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57163776 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57163776 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57163776 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287252480 unmapped: 57163776 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 57155584 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 57155584 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 57155584 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 57155584 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 57155584 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 57155584 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 57155584 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287260672 unmapped: 57155584 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57147392 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287268864 unmapped: 57147392 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57139200 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57139200 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57139200 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57139200 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57139200 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287277056 unmapped: 57139200 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57131008 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57131008 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57131008 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57131008 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57131008 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57131008 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57131008 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287285248 unmapped: 57131008 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 57122816 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274387 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 57122816 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287293440 unmapped: 57122816 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b552c7c00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b550fcfc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980000 session 0x562b547ff6c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b54380c40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 49.722755432s of 49.969383240s, submitted: 8
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a1000 session 0x562b53218e00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b54d6b880
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b553461c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980000 session 0x562b54d2c000
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b54406fc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287604736 unmapped: 56811520 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc80000/0x0/0x4ffc00000, data 0x1cce97f/0x1e8c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287604736 unmapped: 56811520 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287604736 unmapped: 56811520 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320829 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287604736 unmapped: 56811520 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287604736 unmapped: 56811520 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56803328 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc80000/0x0/0x4ffc00000, data 0x1cce97f/0x1e8c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56803328 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287612928 unmapped: 56803328 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320829 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc80000/0x0/0x4ffc00000, data 0x1cce97f/0x1e8c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287621120 unmapped: 56795136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc80000/0x0/0x4ffc00000, data 0x1cce97f/0x1e8c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287621120 unmapped: 56795136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc80000/0x0/0x4ffc00000, data 0x1cce97f/0x1e8c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287621120 unmapped: 56795136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc80000/0x0/0x4ffc00000, data 0x1cce97f/0x1e8c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287621120 unmapped: 56795136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287621120 unmapped: 56795136 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b531808c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320829 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b550fda40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287629312 unmapped: 56786944 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b55346fc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.317261696s of 14.469831467s, submitted: 12
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b51d11340
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56778752 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287637504 unmapped: 56778752 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc7f000/0x0/0x4ffc00000, data 0x1cce98f/0x1e8d000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [1])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361887 data_alloc: 218103808 data_used: 10613331
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc7f000/0x0/0x4ffc00000, data 0x1cce98f/0x1e8d000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361887 data_alloc: 218103808 data_used: 10613331
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecc7f000/0x0/0x4ffc00000, data 0x1cce98f/0x1e8d000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 287399936 unmapped: 57016320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.383279800s of 12.385676384s, submitted: 1
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 296574976 unmapped: 47841280 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3430299 data_alloc: 218103808 data_used: 10621523
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec089000/0x0/0x4ffc00000, data 0x28c498f/0x2a83000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 48250880 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 48250880 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 296165376 unmapped: 48250880 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfca000/0x0/0x4ffc00000, data 0x298198f/0x2b40000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [0,0,3,0,0,0,0,0,7,1])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3445819 data_alloc: 218103808 data_used: 11854419
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3445819 data_alloc: 218103808 data_used: 11854419
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfb8000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 47144960 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b547ffc00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494fc00 session 0x562b54d2c1c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b54d6a380
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b5511ec40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.426490784s of 15.836899757s, submitted: 131
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297009152 unmapped: 47407104 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b552c7180
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b5250bc00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b57d3e000 session 0x562b555c8c40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3483357 data_alloc: 218103808 data_used: 11854419
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b54406e00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b552fc540
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb918000/0x0/0x4ffc00000, data 0x303499f/0x31f4000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3483357 data_alloc: 218103808 data_used: 11854419
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297025536 unmapped: 47390720 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b5250aa80
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb918000/0x0/0x4ffc00000, data 0x303499f/0x31f4000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297033728 unmapped: 47382528 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b552fce00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297033728 unmapped: 47382528 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb918000/0x0/0x4ffc00000, data 0x303499f/0x31f4000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980c00 session 0x562b566bea80
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b51d101c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 47374336 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3485119 data_alloc: 218103808 data_used: 11854419
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 47374336 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297041920 unmapped: 47374336 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.240238190s of 12.527070999s, submitted: 14
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297066496 unmapped: 47349760 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb917000/0x0/0x4ffc00000, data 0x30349af/0x31f5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb917000/0x0/0x4ffc00000, data 0x30349af/0x31f5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 46686208 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 46686208 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3518799 data_alloc: 234881024 data_used: 17691219
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297730048 unmapped: 46686208 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297738240 unmapped: 46678016 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297754624 unmapped: 46661632 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297771008 unmapped: 46645248 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb917000/0x0/0x4ffc00000, data 0x30349af/0x31f5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297771008 unmapped: 46645248 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3518671 data_alloc: 234881024 data_used: 17687123
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297771008 unmapped: 46645248 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297771008 unmapped: 46645248 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb917000/0x0/0x4ffc00000, data 0x30349af/0x31f5000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297771008 unmapped: 46645248 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.727138519s of 10.814496040s, submitted: 92
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297926656 unmapped: 46489600 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3557167 data_alloc: 234881024 data_used: 18235987
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb419000/0x0/0x4ffc00000, data 0x35319af/0x36f2000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb419000/0x0/0x4ffc00000, data 0x35319af/0x36f2000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3557167 data_alloc: 234881024 data_used: 18235987
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb419000/0x0/0x4ffc00000, data 0x35319af/0x36f2000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3557295 data_alloc: 234881024 data_used: 18240083
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297975808 unmapped: 46440448 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb419000/0x0/0x4ffc00000, data 0x35319af/0x36f2000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297984000 unmapped: 46432256 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb419000/0x0/0x4ffc00000, data 0x35319af/0x36f2000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297984000 unmapped: 46432256 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297984000 unmapped: 46432256 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b52afdc00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b54951a40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.452621460s of 16.708354950s, submitted: 31
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b54e0d180
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298000384 unmapped: 46415872 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3449686 data_alloc: 218103808 data_used: 11854419
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298000384 unmapped: 46415872 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298000384 unmapped: 46415872 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298000384 unmapped: 46415872 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfc0000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298000384 unmapped: 46415872 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980000 session 0x562b55347500
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b50f49c00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b55367880
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebfc0000/0x0/0x4ffc00000, data 0x298d98f/0x2b4c000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3296031 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d45e, meta 0x110a2ba2), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292487168 unmapped: 51929088 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 60.426128387s of 60.458377838s, submitted: 21
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 293036032 unmapped: 51380224 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b54db96c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b5498ee00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54980800 session 0x562b5511efc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b54407dc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b552c6c40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323283 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf48000/0x0/0x4ffc00000, data 0x1a059e1/0x1bc4000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf48000/0x0/0x4ffc00000, data 0x1a059e1/0x1bc4000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323283 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf48000/0x0/0x4ffc00000, data 0x1a059e1/0x1bc4000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323283 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf48000/0x0/0x4ffc00000, data 0x1a059e1/0x1bc4000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf48000/0x0/0x4ffc00000, data 0x1a059e1/0x1bc4000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b550fd340
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3323283 data_alloc: 218103808 data_used: 4014675
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b59f63dc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292519936 unmapped: 51896320 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5497fc00 session 0x562b5819ec40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.514007568s of 16.712116241s, submitted: 33
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b5250b500
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf48000/0x0/0x4ffc00000, data 0x1a059e1/0x1bc4000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf24000/0x0/0x4ffc00000, data 0x1a299e1/0x1be8000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3347951 data_alloc: 218103808 data_used: 7718483
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf24000/0x0/0x4ffc00000, data 0x1a299e1/0x1be8000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3347951 data_alloc: 218103808 data_used: 7718483
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecf24000/0x0/0x4ffc00000, data 0x1a299e1/0x1be8000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 292823040 unmapped: 51593216 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.294775009s of 12.303226471s, submitted: 2
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 45989888 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297967616 unmapped: 46448640 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec3a5000/0x0/0x4ffc00000, data 0x25a09e1/0x275f000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3425143 data_alloc: 218103808 data_used: 7842387
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 47120384 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec383000/0x0/0x4ffc00000, data 0x25ca9e1/0x2789000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429921 data_alloc: 218103808 data_used: 7842387
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec383000/0x0/0x4ffc00000, data 0x25ca9e1/0x2789000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428473 data_alloc: 218103808 data_used: 7846483
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec380000/0x0/0x4ffc00000, data 0x25cd9e1/0x278c000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec380000/0x0/0x4ffc00000, data 0x25cd9e1/0x278c000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 47095808 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3428473 data_alloc: 218103808 data_used: 7846483
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 47087616 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 47087616 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec380000/0x0/0x4ffc00000, data 0x25cd9e1/0x278c000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 47087616 heap: 344416256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.759756088s of 19.683139801s, submitted: 100
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b54db8700
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b52afcfc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54dc1400 session 0x562b59f62540
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d2c00 session 0x562b54d2c380
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b5498e700
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 50765824 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 50765824 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464111 data_alloc: 218103808 data_used: 7846483
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 50765824 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 50765824 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda9000/0x0/0x4ffc00000, data 0x2ba49e1/0x2d63000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 50765824 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464111 data_alloc: 218103808 data_used: 7846483
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda9000/0x0/0x4ffc00000, data 0x2ba49e1/0x2d63000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b543aca80
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54dc1400 session 0x562b54db9340
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda9000/0x0/0x4ffc00000, data 0x2ba49e1/0x2d63000, compress 0x0/0x0/0x0, omap 0x4d67a, meta 0x110a2986), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b566be000
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5ad35400 session 0x562b54d2d500
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464111 data_alloc: 218103808 data_used: 7846483
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 50757632 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298393600 unmapped: 49700864 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.676366806s of 13.795249939s, submitted: 7
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda7000/0x0/0x4ffc00000, data 0x2ba59e1/0x2d64000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500135 data_alloc: 234881024 data_used: 13920851
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda7000/0x0/0x4ffc00000, data 0x2ba59e1/0x2d64000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda7000/0x0/0x4ffc00000, data 0x2ba59e1/0x2d64000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500135 data_alloc: 234881024 data_used: 13920851
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298426368 unmapped: 49668096 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ebda7000/0x0/0x4ffc00000, data 0x2ba59e1/0x2d64000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.781335831s of 10.787143707s, submitted: 3
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302661632 unmapped: 45432832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299909120 unmapped: 48185344 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3547725 data_alloc: 234881024 data_used: 14633555
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fb000/0x0/0x4ffc00000, data 0x31529e1/0x3311000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fb000/0x0/0x4ffc00000, data 0x31529e1/0x3311000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fb000/0x0/0x4ffc00000, data 0x31529e1/0x3311000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3547725 data_alloc: 234881024 data_used: 14633555
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fb000/0x0/0x4ffc00000, data 0x31529e1/0x3311000, compress 0x0/0x0/0x0, omap 0x4d6b0, meta 0x110a2950), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.882456779s of 11.166961670s, submitted: 31
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299941888 unmapped: 48152576 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fb000/0x0/0x4ffc00000, data 0x31529e1/0x3311000, compress 0x0/0x0/0x0, omap 0x4d6e6, meta 0x110a291a), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549005 data_alloc: 234881024 data_used: 14719571
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 48144384 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 48144384 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 48144384 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fb000/0x0/0x4ffc00000, data 0x31529e1/0x3311000, compress 0x0/0x0/0x0, omap 0x4d6e6, meta 0x110a291a), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 48144384 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 48144384 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3547045 data_alloc: 234881024 data_used: 14719571
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b543801c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299950080 unmapped: 48144384 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b555c8380
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 48136192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299958272 unmapped: 48136192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54dc1400 session 0x562b54db96c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb7fa000/0x0/0x4ffc00000, data 0x31539e1/0x3312000, compress 0x0/0x0/0x0, omap 0x4d6e6, meta 0x110a291a), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 50946048 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 50946048 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435449 data_alloc: 218103808 data_used: 7846483
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 50946048 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 50946048 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297148416 unmapped: 50946048 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.982597351s of 14.264299393s, submitted: 9
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b532196c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b54407880
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2c3000/0x0/0x4ffc00000, data 0x168b97f/0x1849000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b555c88c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: mgrc ms_handle_reset ms_handle_reset con 0x562b588df800
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2938807880
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2938807880,v1:192.168.122.100:6801/2938807880]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: mgrc handle_mgr_configure stats_period=5
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3316645 data_alloc: 218103808 data_used: 4018673
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 50905088 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297189376 unmapped: 50905088 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.654140472s of 39.253135681s, submitted: 37
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b5511f500
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 50937856 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b55347c00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b550fcc40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54dc1400 session 0x562b543ac8c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b54d2da40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 50937856 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 50937856 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3335641 data_alloc: 218103808 data_used: 4018673
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297156608 unmapped: 50937856 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecfc9000/0x0/0x4ffc00000, data 0x198597f/0x1b43000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b51d108c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3335897 data_alloc: 218103808 data_used: 4050417
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecfc9000/0x0/0x4ffc00000, data 0x198597f/0x1b43000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecfc9000/0x0/0x4ffc00000, data 0x198597f/0x1b43000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecfc9000/0x0/0x4ffc00000, data 0x198597f/0x1b43000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354329 data_alloc: 218103808 data_used: 7196145
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297172992 unmapped: 50921472 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecfc9000/0x0/0x4ffc00000, data 0x198597f/0x1b43000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297181184 unmapped: 50913280 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354329 data_alloc: 218103808 data_used: 7196145
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.388601303s of 18.978036880s, submitted: 4
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 297549824 unmapped: 50544640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48816128 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48816128 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48816128 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece24000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece24000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48816128 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375597 data_alloc: 218103808 data_used: 7561713
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48816128 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299278336 unmapped: 48816128 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299286528 unmapped: 48807936 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299286528 unmapped: 48807936 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece24000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299286528 unmapped: 48807936 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375613 data_alloc: 218103808 data_used: 7561713
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299294720 unmapped: 48799744 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299294720 unmapped: 48799744 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299302912 unmapped: 48791552 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299302912 unmapped: 48791552 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece24000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece24000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4d8cc, meta 0x110a2734), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299302912 unmapped: 48791552 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375613 data_alloc: 218103808 data_used: 7561713
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299302912 unmapped: 48791552 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.605512619s of 15.843142509s, submitted: 25
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b51d116c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298852352 unmapped: 49242112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d9c00 session 0x562b51d11340
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555de800 session 0x562b5297a000
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b53180000
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b552c7180
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c6000/0x0/0x4ffc00000, data 0x1f879e1/0x2146000, compress 0x0/0x0/0x0, omap 0x4dae8, meta 0x110a2518), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3407519 data_alloc: 218103808 data_used: 7561713
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c6000/0x0/0x4ffc00000, data 0x1f879e1/0x2146000, compress 0x0/0x0/0x0, omap 0x4dae8, meta 0x110a2518), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c6000/0x0/0x4ffc00000, data 0x1f879e1/0x2146000, compress 0x0/0x0/0x0, omap 0x4dae8, meta 0x110a2518), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298868736 unmapped: 49225728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3407519 data_alloc: 218103808 data_used: 7561713
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d9c00 session 0x562b54e0c380
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298885120 unmapped: 49209344 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298893312 unmapped: 49201152 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c5000/0x0/0x4ffc00000, data 0x1f87a04/0x2147000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c5000/0x0/0x4ffc00000, data 0x1f87a04/0x2147000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436133 data_alloc: 218103808 data_used: 11955185
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c5000/0x0/0x4ffc00000, data 0x1f87a04/0x2147000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436133 data_alloc: 218103808 data_used: 11955185
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c5000/0x0/0x4ffc00000, data 0x1f87a04/0x2147000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec9c5000/0x0/0x4ffc00000, data 0x1f87a04/0x2147000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 298934272 unmapped: 49160192 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.689737320s of 20.918289185s, submitted: 50
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 46456832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 46456832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301916160 unmapped: 46178304 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3496455 data_alloc: 218103808 data_used: 12848113
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec174000/0x0/0x4ffc00000, data 0x27d8a04/0x2998000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec174000/0x0/0x4ffc00000, data 0x27d8a04/0x2998000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec174000/0x0/0x4ffc00000, data 0x27d8a04/0x2998000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497911 data_alloc: 218103808 data_used: 12860401
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec174000/0x0/0x4ffc00000, data 0x27d8a04/0x2998000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5523bc00 session 0x562b552c6a80
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 46170112 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec153000/0x0/0x4ffc00000, data 0x27f9a04/0x29b9000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497663 data_alloc: 218103808 data_used: 12934129
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.069880486s of 13.417132378s, submitted: 83
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b54950fc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5ad35800 session 0x562b55346000
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301940736 unmapped: 46153728 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec153000/0x0/0x4ffc00000, data 0x27f9a04/0x29b9000, compress 0x0/0x0/0x0, omap 0x4dcce, meta 0x110a2332), peers [0,1] op hist [1])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b555c8c40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3384144 data_alloc: 218103808 data_used: 7561713
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecaeb000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b53218000
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 45056000 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b552fce00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecaeb000/0x0/0x4ffc00000, data 0x1b2297f/0x1ce0000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [0,0,1,2])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b53234e00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299565056 unmapped: 48529408 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 48521216 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 48521216 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299573248 unmapped: 48521216 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299581440 unmapped: 48513024 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 48504832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 48504832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 48504832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 48504832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299589632 unmapped: 48504832 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299597824 unmapped: 48496640 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 48488448 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 48488448 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 48488448 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299606016 unmapped: 48488448 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 50.231575012s of 50.706127167s, submitted: 63
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 48455680 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299638784 unmapped: 48455680 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299646976 unmapped: 48447488 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334008 data_alloc: 218103808 data_used: 4026732
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [0,0,0,0,0,0,0,0,4])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 44638208 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d9c00 session 0x562b5819ec40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b53181dc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b53180700
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b566bea80
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b532341c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 48308224 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299786240 unmapped: 48308224 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299794432 unmapped: 48300032 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299794432 unmapped: 48300032 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369086 data_alloc: 218103808 data_used: 4026732
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7c000/0x0/0x4ffc00000, data 0x1ad297f/0x1c90000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299794432 unmapped: 48300032 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299794432 unmapped: 48300032 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7c000/0x0/0x4ffc00000, data 0x1ad297f/0x1c90000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.732717514s of 16.663993835s, submitted: 37
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b552fc700
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7b000/0x0/0x4ffc00000, data 0x1ad29a2/0x1c91000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395959 data_alloc: 218103808 data_used: 8353644
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7b000/0x0/0x4ffc00000, data 0x1ad29a2/0x1c91000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3395959 data_alloc: 218103808 data_used: 8353644
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7b000/0x0/0x4ffc00000, data 0x1ad29a2/0x1c91000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7b000/0x0/0x4ffc00000, data 0x1ad29a2/0x1c91000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ece7b000/0x0/0x4ffc00000, data 0x1ad29a2/0x1c91000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 299802624 unmapped: 48291840 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.296160698s of 12.306211472s, submitted: 4
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3411073 data_alloc: 218103808 data_used: 8374124
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec87f000/0x0/0x4ffc00000, data 0x20ce9a2/0x228d000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [1])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302555136 unmapped: 45539328 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 45473792 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec41c000/0x0/0x4ffc00000, data 0x25299a2/0x26e8000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 45473792 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3465651 data_alloc: 218103808 data_used: 8661868
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 45473792 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302620672 unmapped: 45473792 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 46022656 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 46022656 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec405000/0x0/0x4ffc00000, data 0x25489a2/0x2707000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 46022656 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460459 data_alloc: 218103808 data_used: 8661868
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 46022656 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec405000/0x0/0x4ffc00000, data 0x25489a2/0x2707000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec405000/0x0/0x4ffc00000, data 0x25489a2/0x2707000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 46022656 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.590172768s of 12.890510559s, submitted: 92
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 46014464 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec3fa000/0x0/0x4ffc00000, data 0x25539a2/0x2712000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [0,0,0,0,0,0,1])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 46014464 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec3f9000/0x0/0x4ffc00000, data 0x25549a2/0x2713000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 46014464 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3460587 data_alloc: 218103808 data_used: 8661868
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302088192 unmapped: 46006272 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec3f9000/0x0/0x4ffc00000, data 0x25549a2/0x2713000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302088192 unmapped: 46006272 heap: 348094464 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b53219a40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b54406fc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b52afdc00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b54951500
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec3f9000/0x0/0x4ffc00000, data 0x25549a2/0x2713000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [0,0,0,0,0,0,4])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555cec00 session 0x562b555c96c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b555c9880
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b59f62540
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b547ff340
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b5511ec40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 52486144 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 52486144 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 52486144 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532989 data_alloc: 218103808 data_used: 8661868
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 52486144 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 52486144 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 52486144 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4dd65, meta 0x110a229b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 52477952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494fc00 session 0x562b55347500
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 52477952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3532989 data_alloc: 218103808 data_used: 8661868
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b5297a1c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 52469760 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b54d6bdc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 52469760 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.584549904s of 14.069593430s, submitted: 17
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b53219500
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 52461568 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580038 data_alloc: 218103808 data_used: 13113962
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb811000/0x0/0x4ffc00000, data 0x313b9b2/0x32fb000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580542 data_alloc: 218103808 data_used: 13113962
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb80f000/0x0/0x4ffc00000, data 0x313c9b2/0x32fc000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 51453952 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb80f000/0x0/0x4ffc00000, data 0x313c9b2/0x32fc000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.964795113s of 11.985386848s, submitted: 9
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 52191232 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 51101696 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3610428 data_alloc: 234881024 data_used: 14183018
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb37a000/0x0/0x4ffc00000, data 0x35d29b2/0x3792000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617380 data_alloc: 234881024 data_used: 14252650
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eb359000/0x0/0x4ffc00000, data 0x35f39b2/0x37b3000, compress 0x0/0x0/0x0, omap 0x4df78, meta 0x110a2088), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b552fce00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.867029190s of 11.092142105s, submitted: 63
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305348608 unmapped: 50618368 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528d9800 session 0x562b566bf880
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3612536 data_alloc: 234881024 data_used: 14252650
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b555c8e00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 52658176 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec3f8000/0x0/0x4ffc00000, data 0x25559a2/0x2714000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 52658176 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 52658176 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b51d101c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b5250a700
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 52658176 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b553476c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 52649984 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 52641792 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 52633600 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 52625408 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 52625408 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 52617216 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e6000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e18b, meta 0x110a1e75), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 52609024 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354120 data_alloc: 218103808 data_used: 4006490
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5494e000 session 0x562b53218a80
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b552c7c00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b552c6e00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b54d2c1c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 41.048336029s of 41.130916595s, submitted: 44
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 51535872 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b5375ee00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b598e5400 session 0x562b552c6c40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b53234e00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b55346700
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b550fcc40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3371796 data_alloc: 218103808 data_used: 4010551
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 51519488 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b543801c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3371796 data_alloc: 218103808 data_used: 4010551
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3371928 data_alloc: 218103808 data_used: 4010551
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 51511296 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 51503104 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 51503104 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 51503104 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3371928 data_alloc: 218103808 data_used: 4010551
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 51503104 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed189000/0x0/0x4ffc00000, data 0x17c39f1/0x1983000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.683448792s of 20.806020737s, submitted: 41
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 51503104 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 51322880 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306692096 unmapped: 49274880 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 52928512 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3396090 data_alloc: 218103808 data_used: 4084279
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eccef000/0x0/0x4ffc00000, data 0x1c5c9f1/0x1e1c000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eccef000/0x0/0x4ffc00000, data 0x1c5c9f1/0x1e1c000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3404906 data_alloc: 218103808 data_used: 4227639
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402178 data_alloc: 218103808 data_used: 4231735
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6600.2 total, 600.0 interval
Cumulative writes: 37K writes, 150K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
Cumulative WAL: 37K writes, 13K syncs, 2.82 writes per sync, written: 0.15 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1838 writes, 7373 keys, 1838 commit groups, 1.0 writes per commit group, ingest: 8.70 MB, 0.01 MB/s
Interval WAL: 1838 writes, 718 syncs, 2.56 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 52445184 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 52436992 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402178 data_alloc: 218103808 data_used: 4231735
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 52436992 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 52436992 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 52436992 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 52436992 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402178 data_alloc: 218103808 data_used: 4231735
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3402178 data_alloc: 218103808 data_used: 4231735
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ecccf000/0x0/0x4ffc00000, data 0x1c7d9f1/0x1e3d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 52428800 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.124912262s of 30.743280411s, submitted: 64
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 51372032 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b54a8e800 session 0x562b566bea80
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b52afc540
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b59f62a80
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b59f621c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 51372032 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b54381a40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5499b400 session 0x562b5250afc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b59f63880
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b53181c00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b552fcfc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x225aa2a/0x241c000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 51355648 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442101 data_alloc: 218103808 data_used: 4231735
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 51355648 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 51355648 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 51355648 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 51355648 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b566bee00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 51355648 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441205 data_alloc: 218103808 data_used: 4231735
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec6f0000/0x0/0x4ffc00000, data 0x225aa63/0x241c000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5489d800 session 0x562b51d116c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b555c9dc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b528556c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec6ef000/0x0/0x4ffc00000, data 0x225aa73/0x241d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec6ef000/0x0/0x4ffc00000, data 0x225aa73/0x241d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479447 data_alloc: 218103808 data_used: 10342983
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec6ef000/0x0/0x4ffc00000, data 0x225aa73/0x241d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec6ef000/0x0/0x4ffc00000, data 0x225aa73/0x241d000, compress 0x0/0x0/0x0, omap 0x4e3d9, meta 0x110a1c27), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3479447 data_alloc: 218103808 data_used: 10342983
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.493576050s of 19.711193085s, submitted: 33
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 51347456 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305594368 unmapped: 50372608 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308699136 unmapped: 47267840 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511373 data_alloc: 218103808 data_used: 10801735
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec2ed000/0x0/0x4ffc00000, data 0x265ca73/0x281f000, compress 0x0/0x0/0x0, omap 0x4e414, meta 0x110a1bec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec2c9000/0x0/0x4ffc00000, data 0x2680a73/0x2843000, compress 0x0/0x0/0x0, omap 0x4e414, meta 0x110a1bec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3517921 data_alloc: 218103808 data_used: 11022919
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308731904 unmapped: 47235072 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec2c7000/0x0/0x4ffc00000, data 0x2682a73/0x2845000, compress 0x0/0x0/0x0, omap 0x4e414, meta 0x110a1bec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308740096 unmapped: 47226880 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3515601 data_alloc: 218103808 data_used: 11035207
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.758734703s of 13.324839592s, submitted: 61
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308740096 unmapped: 47226880 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 308740096 unmapped: 47226880 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b54380c40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b53180000
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e5800 session 0x562b566bf340
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eca9e000/0x0/0x4ffc00000, data 0x1c839f1/0x1e43000, compress 0x0/0x0/0x0, omap 0x4e44f, meta 0x110a1bb1), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3410514 data_alloc: 218103808 data_used: 4231735
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eca9e000/0x0/0x4ffc00000, data 0x1c839f1/0x1e43000, compress 0x0/0x0/0x0, omap 0x4e44f, meta 0x110a1bb1), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4eccc2000/0x0/0x4ffc00000, data 0x1c8a9f1/0x1e4a000, compress 0x0/0x0/0x0, omap 0x4e44f, meta 0x110a1bb1), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305446912 unmapped: 50520064 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d7000 session 0x562b5511f500
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a0c00 session 0x562b54db8000
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b53234c40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23238 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed126000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370600 data_alloc: 218103808 data_used: 4014549
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305496064 unmapped: 50470912 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.192054749s of 43.299339294s, submitted: 52
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b5250a700
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a4800 session 0x562b54950fc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b55346540
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b5819e000
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a0c00 session 0x562b51d108c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3386400 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0fe000/0x0/0x4ffc00000, data 0x185097f/0x1a0e000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0fe000/0x0/0x4ffc00000, data 0x185097f/0x1a0e000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d7000 session 0x562b552fc8c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385968 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b552fd6c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 50151424 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b555c8700
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b54d2c700
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0fe000/0x0/0x4ffc00000, data 0x185097f/0x1a0e000, compress 0x0/0x0/0x0, omap 0x4e662, meta 0x110a199e), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305971200 unmapped: 49995776 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305971200 unmapped: 49995776 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.878920555s of 10.003678322s, submitted: 50
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397315 data_alloc: 218103808 data_used: 4811123
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x187498f/0x1a33000, compress 0x0/0x0/0x0, omap 0x4e74d, meta 0x110a18b3), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b55367500
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a0c00 session 0x562b55366a80
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397183 data_alloc: 218103808 data_used: 4811123
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x187498f/0x1a33000, compress 0x0/0x0/0x0, omap 0x4e74d, meta 0x110a18b3), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x187498f/0x1a33000, compress 0x0/0x0/0x0, omap 0x4e74d, meta 0x110a18b3), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397443 data_alloc: 218103808 data_used: 4811139
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x187498f/0x1a33000, compress 0x0/0x0/0x0, omap 0x4e74d, meta 0x110a18b3), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x187498f/0x1a33000, compress 0x0/0x0/0x0, omap 0x4e74d, meta 0x110a18b3), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555d7000 session 0x562b55366e00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.139425278s of 16.248653412s, submitted: 60
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5cc12000 session 0x562b552fd180
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397311 data_alloc: 218103808 data_used: 4811139
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed0d9000/0x0/0x4ffc00000, data 0x187498f/0x1a33000, compress 0x0/0x0/0x0, omap 0x4e74d, meta 0x110a18b3), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306020352 unmapped: 49946624 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3397443 data_alloc: 218103808 data_used: 4811139
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b555c8c40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b53219500
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306036736 unmapped: 49930240 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b555c9500
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306044928 unmapped: 49922048 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306053120 unmapped: 49913856 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306069504 unmapped: 49897472 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306069504 unmapped: 49897472 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306069504 unmapped: 49897472 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306069504 unmapped: 49897472 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306077696 unmapped: 49889280 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306077696 unmapped: 49889280 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306077696 unmapped: 49889280 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306077696 unmapped: 49889280 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306077696 unmapped: 49889280 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306085888 unmapped: 49881088 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306094080 unmapped: 49872896 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306102272 unmapped: 49864704 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306102272 unmapped: 49864704 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306102272 unmapped: 49864704 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306102272 unmapped: 49864704 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306110464 unmapped: 49856512 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306110464 unmapped: 49856512 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306110464 unmapped: 49856512 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306110464 unmapped: 49856512 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306118656 unmapped: 49848320 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306118656 unmapped: 49848320 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306118656 unmapped: 49848320 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3375482 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306118656 unmapped: 49848320 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306118656 unmapped: 49848320 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306118656 unmapped: 49848320 heap: 355966976 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 64.132514954s of 64.175086975s, submitted: 21
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b548a0c00 session 0x562b54380c40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b55346a80
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b552c6540
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b53234e00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5cc12000 session 0x562b54e0d880
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53510144 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b555dc000 session 0x562b550fddc0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53510144 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3433682 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b528e1800 session 0x562b5819f880
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5296d800 session 0x562b54407880
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53510144 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b538a8400 session 0x562b532196c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53510144 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306126848 unmapped: 53510144 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec90f000/0x0/0x4ffc00000, data 0x203e98f/0x21fd000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306143232 unmapped: 53493760 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306143232 unmapped: 53493760 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491164 data_alloc: 218103808 data_used: 12978547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306143232 unmapped: 53493760 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b5cc12000 session 0x562b51d101c0
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b52946400 session 0x562b53235c00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ec90f000/0x0/0x4ffc00000, data 0x203e98f/0x21fd000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306143232 unmapped: 53493760 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 ms_handle_reset con 0x562b52946400 session 0x562b52afc540
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 58187776 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 58179584 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 58171392 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 58163200 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 58155008 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 58155008 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 58155008 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 58155008 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 58155008 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 58155008 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301498368 unmapped: 58138624 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301498368 unmapped: 58138624 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301498368 unmapped: 58138624 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 58130432 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 58130432 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 58130432 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 58130432 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 58130432 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301514752 unmapped: 58122240 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301531136 unmapped: 58105856 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed2e7000/0x0/0x4ffc00000, data 0x166797f/0x1825000, compress 0x0/0x0/0x0, omap 0x4e814, meta 0x110a17ec), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301531136 unmapped: 58105856 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301531136 unmapped: 58105856 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301531136 unmapped: 58105856 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3381771 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301531136 unmapped: 58105856 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301531136 unmapped: 58105856 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 79.888259888s of 80.062774658s, submitted: 22
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 58089472 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 313 ms_handle_reset con 0x562b528e1800 session 0x562b54e0c380
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ed2e2000/0x0/0x4ffc00000, data 0x166956f/0x1828000, compress 0x0/0x0/0x0, omap 0x4ecc7, meta 0x110a1339), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 58089472 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ed2e2000/0x0/0x4ffc00000, data 0x166956f/0x1828000, compress 0x0/0x0/0x0, omap 0x4ed3d, meta 0x110a12c3), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385265 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ed2e2000/0x0/0x4ffc00000, data 0x166956f/0x1828000, compress 0x0/0x0/0x0, omap 0x4ed3d, meta 0x110a12c3), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385265 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ed2e2000/0x0/0x4ffc00000, data 0x166956f/0x1828000, compress 0x0/0x0/0x0, omap 0x4ed3d, meta 0x110a12c3), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 58081280 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301572096 unmapped: 58064896 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301572096 unmapped: 58064896 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.884174347s of 11.914915085s, submitted: 19
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 58056704 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 58056704 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 58056704 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 58056704 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 58056704 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 58056704 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 58048512 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 58048512 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 58048512 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 58040320 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301604864 unmapped: 58032128 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301604864 unmapped: 58032128 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301604864 unmapped: 58032128 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301604864 unmapped: 58032128 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 58015744 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301629440 unmapped: 58007552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 57999360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 57999360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 57999360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301645824 unmapped: 57991168 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 57982976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 57974784 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 57974784 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 57974784 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 57974784 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 57974784 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301670400 unmapped: 57966592 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 57958400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 57950208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301694976 unmapped: 57942016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301694976 unmapped: 57942016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301694976 unmapped: 57942016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 57933824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 57933824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301711360 unmapped: 57925632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301711360 unmapped: 57925632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301711360 unmapped: 57925632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 57917440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 57917440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 57917440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 57917440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 57917440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 57917440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301735936 unmapped: 57901056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301735936 unmapped: 57901056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 57892864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 57892864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 57892864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 57892864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 57892864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2df000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ee93, meta 0x110a116d), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 57884672 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301760512 unmapped: 57876480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301760512 unmapped: 57876480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301760512 unmapped: 57876480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 57868288 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3388039 data_alloc: 218103808 data_used: 4018547
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 106.316764832s of 106.328948975s, submitted: 13
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 ms_handle_reset con 0x562b5296d800 session 0x562b54380700
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 ms_handle_reset con 0x562b538a8400 session 0x562b54380540
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 51863552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2e1000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4eece, meta 0x110a1132), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 51863552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 51863552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 51863552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 51863552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387959 data_alloc: 234881024 data_used: 11555187
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2e1000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4eece, meta 0x110a1132), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307773440 unmapped: 51863552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307781632 unmapped: 51855360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307781632 unmapped: 51855360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 heartbeat osd_stat(store_statfs(0x4ed2e1000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4eece, meta 0x110a1132), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307781632 unmapped: 51855360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307781632 unmapped: 51855360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387959 data_alloc: 234881024 data_used: 11555187
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.003993034s of 10.013246536s, submitted: 5
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 307798016 unmapped: 51838976 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 315 heartbeat osd_stat(store_statfs(0x4ed2e1000/0x0/0x4ffc00000, data 0x166afee/0x182b000, compress 0x0/0x0/0x0, omap 0x4ef44, meta 0x110a10bc), peers [0,1] op hist [0,0,1])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 315 ms_handle_reset con 0x562b5cc12000 session 0x562b550fdc00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302653440 unmapped: 56983552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302653440 unmapped: 56983552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302653440 unmapped: 56983552 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 315 heartbeat osd_stat(store_statfs(0x4edf4c000/0x0/0x4ffc00000, data 0x9fcbde/0xbbe000, compress 0x0/0x0/0x0, omap 0x4f41b, meta 0x110a0be5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302661632 unmapped: 56975360 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3303249 data_alloc: 218103808 data_used: 151939
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 316 ms_handle_reset con 0x562b528e1800 session 0x562b54407c00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ee74a000/0x0/0x4ffc00000, data 0x1fe79b/0x3bf000, compress 0x0/0x0/0x0, omap 0x4fd71, meta 0x110a028f), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 316 heartbeat osd_stat(store_statfs(0x4ee74a000/0x0/0x4ffc00000, data 0x1fe79b/0x3bf000, compress 0x0/0x0/0x0, omap 0x4fd71, meta 0x110a028f), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262502 data_alloc: 218103808 data_used: 151923
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 316 ms_handle_reset con 0x562b52946400 session 0x562b5250aa80
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.022649765s of 10.216675758s, submitted: 92
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ee748000/0x0/0x4ffc00000, data 0x200236/0x3c2000, compress 0x0/0x0/0x0, omap 0x4fec5, meta 0x110a013b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302702592 unmapped: 56934400 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 317 heartbeat osd_stat(store_statfs(0x4ee748000/0x0/0x4ffc00000, data 0x200236/0x3c2000, compress 0x0/0x0/0x0, omap 0x4fec5, meta 0x110a013b), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 317 ms_handle_reset con 0x562b5296d800 session 0x562b566bee00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 317 handle_osd_map epochs [318,318], i have 318, src has [1,318]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 56926208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 56926208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3267970 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 56926208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 318 heartbeat osd_stat(store_statfs(0x4ee745000/0x0/0x4ffc00000, data 0x201cb5/0x3c5000, compress 0x0/0x0/0x0, omap 0x503f7, meta 0x1109fc09), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 56926208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 318 heartbeat osd_stat(store_statfs(0x4ee745000/0x0/0x4ffc00000, data 0x201cb5/0x3c5000, compress 0x0/0x0/0x0, omap 0x503f7, meta 0x1109fc09), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302710784 unmapped: 56926208 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 318 ms_handle_reset con 0x562b538a8400 session 0x562b59f63180
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302718976 unmapped: 56918016 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302727168 unmapped: 56909824 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302735360 unmapped: 56901632 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56893440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56893440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56893440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56893440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302743552 unmapped: 56893440 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302751744 unmapped: 56885248 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 56877056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 56877056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 56877056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 56877056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 56877056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302759936 unmapped: 56877056 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302768128 unmapped: 56868864 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 56852480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 56852480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 56852480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302784512 unmapped: 56852480 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 56844288 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 56844288 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 56844288 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 56844288 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302792704 unmapped: 56844288 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302800896 unmapped: 56836096 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 56827904 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302809088 unmapped: 56827904 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302817280 unmapped: 56819712 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302825472 unmapped: 56811520 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56803328 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56803328 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56803328 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302833664 unmapped: 56803328 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 56795136 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 heartbeat osd_stat(store_statfs(0x4ee740000/0x0/0x4ffc00000, data 0x203972/0x3ca000, compress 0x0/0x0/0x0, omap 0x50558, meta 0x1109faa8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 56786944 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 56786944 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 56786944 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3274210 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 56778752 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 116.213493347s of 116.241378784s, submitted: 33
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 56778752 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 320 ms_handle_reset con 0x562b54dc0800 session 0x562b59fc2000
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 56762368 heap: 359636992 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 320 heartbeat osd_stat(store_statfs(0x4edf3d000/0x0/0x4ffc00000, data 0x20550e/0x3cd000, compress 0x0/0x0/0x0, omap 0x5097d, meta 0x1109f683), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 65150976 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 320 handle_osd_map epochs [320,321], i have 320, src has [1,321]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 321 ms_handle_reset con 0x562b548a1000 session 0x562b59fc2380
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 65126400 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 65126400 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 65126400 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324603 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 65077248 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 65077248 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 65077248 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302964736 unmapped: 65069056 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324731 data_alloc: 218103808 data_used: 156000
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 321 heartbeat osd_stat(store_statfs(0x4edf39000/0x0/0x4ffc00000, data 0xa070cd/0xbd1000, compress 0x0/0x0/0x0, omap 0x50f70, meta 0x1109f090), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 65060864 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 65060864 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 65060864 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 321 handle_osd_map epochs [321,322], i have 321, src has [1,322]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 42.045394897s of 42.178756714s, submitted: 18
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 322 ms_handle_reset con 0x562b5497f000 session 0x562b54d6a540
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326012 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 322 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0xa08b9c/0xbd2000, compress 0x0/0x0/0x0, omap 0x510d2, meta 0x1109ef2e), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 322 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0xa08b9c/0xbd2000, compress 0x0/0x0/0x0, omap 0x510d2, meta 0x1109ef2e), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 322 heartbeat osd_stat(store_statfs(0x4edf38000/0x0/0x4ffc00000, data 0xa08b9c/0xbd2000, compress 0x0/0x0/0x0, omap 0x510d2, meta 0x1109ef2e), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3326012 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 65003520 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.889834404s of 10.003436089s, submitted: 19
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303054848 unmapped: 64978944 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64962560 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 64954368 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 64954368 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 64954368 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 64954368 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64937984 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64937984 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64937984 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64937984 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64937984 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303095808 unmapped: 64937984 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 65191936 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 65191936 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 65191936 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 65191936 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302841856 unmapped: 65191936 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 65183744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 65183744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 65183744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.2 total, 600.0 interval#012Cumulative writes: 38K writes, 153K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s#012Cumulative WAL: 38K writes, 13K syncs, 2.80 writes per sync, written: 0.15 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 912 writes, 2666 keys, 912 commit groups, 1.0 writes per commit group, ingest: 1.84 MB, 0.00 MB/s#012Interval WAL: 912 writes, 399 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread fragmentation_score=0.004241 took=0.000088s
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 65183744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 65183744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 65175552 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 65175552 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 65167360 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 65167360 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 65159168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 65159168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 65159168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 65159168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 65150976 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 65150976 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 65142784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 65142784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 65142784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 65142784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 65142784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 65142784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 65134592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 65118208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 65110016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 65093632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 65085440 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302964736 unmapped: 65069056 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302964736 unmapped: 65069056 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302964736 unmapped: 65069056 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 65052672 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 65052672 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 65052672 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 65052672 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 65052672 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 65044480 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303013888 unmapped: 65019904 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303022080 unmapped: 65011712 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303030272 unmapped: 65003520 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64995328 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303038464 unmapped: 64995328 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec 13 04:42:37 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2038311636' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303046656 unmapped: 64987136 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64970752 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64970752 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64970752 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64970752 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303063040 unmapped: 64970752 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64962560 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303071232 unmapped: 64962560 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303079424 unmapped: 64954368 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 ms_handle_reset con 0x562b52941000 session 0x562b54406e00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328786 data_alloc: 218103808 data_used: 155984
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 64946176 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 64946176 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf35000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 64946176 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 64946176 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303087616 unmapped: 64946176 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 126.705642700s of 126.712867737s, submitted: 15
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64897024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64897024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64897024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303136768 unmapped: 64897024 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 64888832 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 64888832 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303144960 unmapped: 64888832 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 303194112 unmapped: 64839680 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63782912 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63774720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63774720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63774720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63774720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63774720 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63758336 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63758336 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63758336 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63758336 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63758336 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63758336 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63750144 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63750144 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63750144 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63741952 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63725568 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 63717376 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 63717376 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 49.510643005s of 50.603378296s, submitted: 108
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328152 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 63717376 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [0,0,0,0,0,1])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 2.568279505s of 10.172575951s, submitted: 10
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328152 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304324608 unmapped: 63709184 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63700992 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63692800 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.948706627s of 10.349843025s, submitted: 6
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304349184 unmapped: 63684608 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304349184 unmapped: 63684608 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304349184 unmapped: 63684608 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304349184 unmapped: 63684608 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328152 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63676416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63676416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63676416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63676416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63676416 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63668224 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63660032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63660032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63660032 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 63651840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 63651840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 63651840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 63651840 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63643648 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63643648 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63643648 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63643648 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63635456 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63619072 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63619072 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63619072 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63619072 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63619072 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63610880 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63610880 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 63602688 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 63602688 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 63602688 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 63602688 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 63602688 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 63602688 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 63586304 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 63586304 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63578112 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63578112 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63578112 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63578112 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63578112 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63578112 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 63569920 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 63569920 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 63569920 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 63561728 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 63561728 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 63561728 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 63561728 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 63561728 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 63545344 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 63545344 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 63537152 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 63528960 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 63528960 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 63528960 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 63528960 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 63528960 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 63504384 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304545792 unmapped: 63488000 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 63471616 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 63463424 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 63455232 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 63455232 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 63455232 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 63455232 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 63455232 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 63438848 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 63406080 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 63406080 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 63406080 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304676864 unmapped: 63356928 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304676864 unmapped: 63356928 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304676864 unmapped: 63356928 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304676864 unmapped: 63356928 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 63340544 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 63340544 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 63340544 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 63332352 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304726016 unmapped: 63307776 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 63299584 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 63299584 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 63299584 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 63299584 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 63299584 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 63291392 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 63291392 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 63291392 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 63291392 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 63291392 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 63275008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 63275008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 63275008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 63275008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 63266816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 63266816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 63266816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 63266816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 63266816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304775168 unmapped: 63258624 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304775168 unmapped: 63258624 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304783360 unmapped: 63250432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304799744 unmapped: 63234048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304807936 unmapped: 63225856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304816128 unmapped: 63217664 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304824320 unmapped: 63209472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304840704 unmapped: 63193088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304840704 unmapped: 63193088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304840704 unmapped: 63193088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304840704 unmapped: 63193088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304848896 unmapped: 63184896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304848896 unmapped: 63184896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304857088 unmapped: 63176704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304857088 unmapped: 63176704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 63168512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 63168512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 63168512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 63168512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 63168512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304865280 unmapped: 63168512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 63160320 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 63160320 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304898048 unmapped: 63135744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304898048 unmapped: 63135744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304898048 unmapped: 63135744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304898048 unmapped: 63135744 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304906240 unmapped: 63127552 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304906240 unmapped: 63127552 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304906240 unmapped: 63127552 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304906240 unmapped: 63127552 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304914432 unmapped: 63119360 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304914432 unmapped: 63119360 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 63111168 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304930816 unmapped: 63102976 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304930816 unmapped: 63102976 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304939008 unmapped: 63094784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304939008 unmapped: 63094784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 heartbeat osd_stat(store_statfs(0x4edf37000/0x0/0x4ffc00000, data 0xa0a61b/0xbd5000, compress 0x0/0x0/0x0, omap 0x5160b, meta 0x1109e9f5), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328066 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304939008 unmapped: 63094784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304939008 unmapped: 63094784 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 254.228393555s of 257.374877930s, submitted: 8
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 63070208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 324 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0xa0c20b/0xbd8000, compress 0x0/0x0/0x0, omap 0x516f8, meta 0x1109e908), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 63070208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304963584 unmapped: 63070208 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 324 heartbeat osd_stat(store_statfs(0x4edf32000/0x0/0x4ffc00000, data 0xa0c20b/0xbd8000, compress 0x0/0x0/0x0, omap 0x5176e, meta 0x1109e892), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331560 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304971776 unmapped: 63062016 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 324 ms_handle_reset con 0x562b52941000 session 0x562b57c07340
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304988160 unmapped: 63045632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304988160 unmapped: 63045632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304988160 unmapped: 63045632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304988160 unmapped: 63045632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3331192 data_alloc: 218103808 data_used: 156129
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304988160 unmapped: 63045632 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 324 heartbeat osd_stat(store_statfs(0x4edf34000/0x0/0x4ffc00000, data 0xa0c20b/0xbd8000, compress 0x0/0x0/0x0, omap 0x517e4, meta 0x1109e81c), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 63569920 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.808628559s of 10.056917191s, submitted: 37
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 63553536 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 63512576 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 63512576 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 326 heartbeat osd_stat(store_statfs(0x4edf2d000/0x0/0x4ffc00000, data 0xa0f857/0xbdd000, compress 0x0/0x0/0x0, omap 0x522af, meta 0x1109dd51), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3336626 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 63512576 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 326 heartbeat osd_stat(store_statfs(0x4edf2f000/0x0/0x4ffc00000, data 0xa0f857/0xbdd000, compress 0x0/0x0/0x0, omap 0x522af, meta 0x1109dd51), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 63496192 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304545792 unmapped: 63488000 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3295730 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 63479808 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 326 heartbeat osd_stat(store_statfs(0x4ee72f000/0x0/0x4ffc00000, data 0x20f857/0x3dd000, compress 0x0/0x0/0x0, omap 0x52325, meta 0x1109dcdb), peers [0,1] op hist [0,0,0,0,0,0,1])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 326 ms_handle_reset con 0x562b538a8400 session 0x562b56c08c40
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 63479808 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 326 heartbeat osd_stat(store_statfs(0x4ee72f000/0x0/0x4ffc00000, data 0x20f857/0x3dd000, compress 0x0/0x0/0x0, omap 0x52538, meta 0x1109dac8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 63479808 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 63479808 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 63479808 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3294910 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 326 handle_osd_map epochs [326,327], i have 326, src has [1,327]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.690998077s of 13.717031479s, submitted: 34
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 63479808 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 63463424 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 63463424 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 327 heartbeat osd_stat(store_statfs(0x4ee72a000/0x0/0x4ffc00000, data 0x2112d6/0x3e0000, compress 0x0/0x0/0x0, omap 0x52708, meta 0x1109d8f8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 63455232 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 327 heartbeat osd_stat(store_statfs(0x4ee72a000/0x0/0x4ffc00000, data 0x2112d6/0x3e0000, compress 0x0/0x0/0x0, omap 0x52708, meta 0x1109d8f8), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 327 handle_osd_map epochs [327,328], i have 327, src has [1,328]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 ms_handle_reset con 0x562b548a1000 session 0x562b56c08e00
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 63430656 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304611328 unmapped: 63422464 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304619520 unmapped: 63414272 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 63406080 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 63397888 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 63389696 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 63381504 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 63373312 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 63365120 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 63365120 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 63365120 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 63365120 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 63365120 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 63365120 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304676864 unmapped: 63356928 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304676864 unmapped: 63356928 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 63348736 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 63332352 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 heartbeat osd_stat(store_statfs(0x4ee727000/0x0/0x4ffc00000, data 0x212e72/0x3e3000, compress 0x0/0x0/0x0, omap 0x52bde, meta 0x1109d422), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 63332352 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 63332352 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301178 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 63332352 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 70.411071777s of 70.590003967s, submitted: 17
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 329 ms_handle_reset con 0x562b54dc0800 session 0x562b54fe8000
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 329 heartbeat osd_stat(store_statfs(0x4ee724000/0x0/0x4ffc00000, data 0x214a62/0x3e6000, compress 0x0/0x0/0x0, omap 0x52d43, meta 0x1109d2bd), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304709632 unmapped: 63324160 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3303952 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 329 heartbeat osd_stat(store_statfs(0x4ee724000/0x0/0x4ffc00000, data 0x214a62/0x3e6000, compress 0x0/0x0/0x0, omap 0x52d43, meta 0x1109d2bd), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 63315968 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 329 handle_osd_map epochs [329,330], i have 329, src has [1,330]
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305782784 unmapped: 62251008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305782784 unmapped: 62251008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305782784 unmapped: 62251008 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305790976 unmapped: 62242816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305790976 unmapped: 62242816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305790976 unmapped: 62242816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305790976 unmapped: 62242816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305790976 unmapped: 62242816 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305807360 unmapped: 62226432 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 62218240 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 62218240 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 62218240 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 62218240 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305815552 unmapped: 62218240 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305823744 unmapped: 62210048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305823744 unmapped: 62210048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305823744 unmapped: 62210048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305823744 unmapped: 62210048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305823744 unmapped: 62210048 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305831936 unmapped: 62201856 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305848320 unmapped: 62185472 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305856512 unmapped: 62177280 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305864704 unmapped: 62169088 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305872896 unmapped: 62160896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305872896 unmapped: 62160896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305872896 unmapped: 62160896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305872896 unmapped: 62160896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305872896 unmapped: 62160896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305872896 unmapped: 62160896 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.2 total, 600.0 interval#012Cumulative writes: 38K writes, 154K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s#012Cumulative WAL: 38K writes, 13K syncs, 2.80 writes per sync, written: 0.15 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 443 writes, 954 keys, 443 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s#012Interval WAL: 443 writes, 192 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305881088 unmapped: 62152704 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305889280 unmapped: 62144512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305889280 unmapped: 62144512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305889280 unmapped: 62144512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305889280 unmapped: 62144512 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305971200 unmapped: 62062592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: do_command 'config diff' '{prefix=config diff}'
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: do_command 'config show' '{prefix=config show}'
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: do_command 'counter dump' '{prefix=counter dump}'
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: do_command 'counter schema' '{prefix=counter schema}'
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306726 data_alloc: 218103808 data_used: 156094
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305971200 unmapped: 62062592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 305971200 unmapped: 62062592 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: osd.2 330 heartbeat osd_stat(store_statfs(0x4ee721000/0x0/0x4ffc00000, data 0x2164e1/0x3e9000, compress 0x0/0x0/0x0, omap 0x53288, meta 0x1109cd78), peers [0,1] op hist [])
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: prioritycache tune_memory target: 4294967296 mapped: 306028544 unmapped: 62005248 heap: 368033792 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:37 np0005558241 ceph-osd[89221]: do_command 'log dump' '{prefix=log dump}'
Dec 13 04:42:37 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4263: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:38 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23240 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:42:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} v 0)
Dec 13 04:42:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} : dispatch
Dec 13 04:42:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 13 04:42:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2716947977' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 04:42:38 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23244 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:42:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} v 0)
Dec 13 04:42:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1108099035' entity='mgr.compute-0.vndjzx' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.tiyxaw", "name": "rgw_frontends"} : dispatch
Dec 13 04:42:38 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 13 04:42:38 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/810040118' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 04:42:38 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23248 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:42:39 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23252 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:42:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 13 04:42:39 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2356441915' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 04:42:39 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4264: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:39 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:42:39 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23254 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:42:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 04:42:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3732311435' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 04:42:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:42:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:42:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:42:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:42:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:42:40 np0005558241 ceph-mgr[76830]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:42:40 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23258 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:42:40 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 13 04:42:40 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1155563914' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 04:42:40 np0005558241 nova_compute[248510]: 2025-12-13 09:42:40.979 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:42:41 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23262 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:42:41 np0005558241 ceph-mon[76537]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 13 04:42:41 np0005558241 ceph-mon[76537]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4254522402' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 13 04:42:41 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23266 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:42:41 np0005558241 ceph-mgr[76830]: log_channel(cluster) log [DBG] : pgmap v4265: 321 pgs: 321 active+clean; 462 KiB data, 990 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:42:42 np0005558241 nova_compute[248510]: 2025-12-13 09:42:42.138 248514 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 13 04:42:42 np0005558241 ceph-mgr[76830]: log_channel(audit) log [DBG] : from='client.23270 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:42:42 np0005558241 ceph-mgr[76830]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 04:42:42 np0005558241 ceph-18ee9de6-e00b-571b-ab9b-b7aab06174df-mgr-compute-0-vndjzx[76826]: 2025-12-13T09:42:42.193+0000 7f1f704e3640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329457664 unmapped: 67485696 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329457664 unmapped: 67485696 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456929 data_alloc: 218103808 data_used: 185903
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329457664 unmapped: 67485696 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a5000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329465856 unmapped: 67477504 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329465856 unmapped: 67477504 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a5000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329465856 unmapped: 67477504 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329465856 unmapped: 67477504 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456929 data_alloc: 218103808 data_used: 185903
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a5000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329465856 unmapped: 67477504 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329465856 unmapped: 67477504 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329474048 unmapped: 67469312 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x561829398a80
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x56182a1fe8c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc1800 session 0x561829cac8c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c5ff000 session 0x56182b708fc0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 61.984951019s of 62.126846313s, submitted: 70
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bc5e700
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x561829b65a40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329793536 unmapped: 67149824 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329793536 unmapped: 67149824 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504826 data_alloc: 218103808 data_used: 185903
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbcf000/0x0/0x4ffc00000, data 0xa3ccc3/0xbfd000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbcf000/0x0/0x4ffc00000, data 0xa3ccc3/0xbfd000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329793536 unmapped: 67149824 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329793536 unmapped: 67149824 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbcf000/0x0/0x4ffc00000, data 0xa3ccc3/0xbfd000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329801728 unmapped: 67141632 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbcf000/0x0/0x4ffc00000, data 0xa3ccc3/0xbfd000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329801728 unmapped: 67141632 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbcf000/0x0/0x4ffc00000, data 0xa3ccc3/0xbfd000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329801728 unmapped: 67141632 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504826 data_alloc: 218103808 data_used: 185903
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329801728 unmapped: 67141632 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329801728 unmapped: 67141632 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329809920 unmapped: 67133440 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329809920 unmapped: 67133440 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbcf000/0x0/0x4ffc00000, data 0xa3ccc3/0xbfd000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329818112 unmapped: 67125248 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504826 data_alloc: 218103808 data_used: 185903
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x561829bf3a40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc1800 session 0x56182a0001c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329818112 unmapped: 67125248 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182cc39800 session 0x56182be2d340
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329818112 unmapped: 67125248 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.380788803s of 13.533100128s, submitted: 29
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182b886c40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbce000/0x0/0x4ffc00000, data 0xa3ccd3/0xbfe000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329818112 unmapped: 67125248 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329818112 unmapped: 67125248 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554588 data_alloc: 218103808 data_used: 8312367
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbce000/0x0/0x4ffc00000, data 0xa3ccd3/0xbfe000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554588 data_alloc: 218103808 data_used: 8312367
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbce000/0x0/0x4ffc00000, data 0xa3ccd3/0xbfe000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbce000/0x0/0x4ffc00000, data 0xa3ccd3/0xbfe000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbce000/0x0/0x4ffc00000, data 0xa3ccd3/0xbfe000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 329826304 unmapped: 67117056 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.119009972s of 12.188241005s, submitted: 1
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 333938688 unmapped: 63004672 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3601308 data_alloc: 218103808 data_used: 8312367
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334135296 unmapped: 62808064 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334012416 unmapped: 62930944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334012416 unmapped: 62930944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb360000/0x0/0x4ffc00000, data 0x12aacd3/0x146c000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334012416 unmapped: 62930944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334012416 unmapped: 62930944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611734 data_alloc: 234881024 data_used: 9410095
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334012416 unmapped: 62930944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb360000/0x0/0x4ffc00000, data 0x12aacd3/0x146c000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334012416 unmapped: 62930944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334012416 unmapped: 62930944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334020608 unmapped: 62922752 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb360000/0x0/0x4ffc00000, data 0x12aacd3/0x146c000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334020608 unmapped: 62922752 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611734 data_alloc: 234881024 data_used: 9410095
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334020608 unmapped: 62922752 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334020608 unmapped: 62922752 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334020608 unmapped: 62922752 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334020608 unmapped: 62922752 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334020608 unmapped: 62922752 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611734 data_alloc: 234881024 data_used: 9410095
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb360000/0x0/0x4ffc00000, data 0x12aacd3/0x146c000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334020608 unmapped: 62922752 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334028800 unmapped: 62914560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334028800 unmapped: 62914560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc1800 session 0x56182b0fc8c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c60c400 session 0x56182a1fe540
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182c330e00
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829363800 session 0x561829ace540
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.916124344s of 19.432558060s, submitted: 79
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x561831c20700
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x561828f05180
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc1800 session 0x561831c21880
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c60c400 session 0x56182bfb8c40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618337d0800 session 0x561829ace1c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334405632 unmapped: 62537728 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334471168 unmapped: 62472192 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674491 data_alloc: 234881024 data_used: 9414191
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334471168 unmapped: 62472192 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eaa13000/0x0/0x4ffc00000, data 0x1bf7cd3/0x1db9000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334471168 unmapped: 62472192 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334471168 unmapped: 62472192 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eaa13000/0x0/0x4ffc00000, data 0x1bf7cd3/0x1db9000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 62464000 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 62464000 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674491 data_alloc: 234881024 data_used: 9414191
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eaa13000/0x0/0x4ffc00000, data 0x1bf7cd3/0x1db9000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 62464000 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 62464000 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 62464000 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 62464000 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334479360 unmapped: 62464000 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674491 data_alloc: 234881024 data_used: 9414191
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.609133720s of 11.786836624s, submitted: 29
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182bc13a40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea9ef000/0x0/0x4ffc00000, data 0x1c1bcd3/0x1ddd000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334635008 unmapped: 62308352 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 62300160 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 62300160 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334643200 unmapped: 62300160 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3727215 data_alloc: 234881024 data_used: 17754159
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea9ed000/0x0/0x4ffc00000, data 0x1c1ccd3/0x1dde000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea9ed000/0x0/0x4ffc00000, data 0x1c1ccd3/0x1dde000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3727215 data_alloc: 234881024 data_used: 17754159
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336232448 unmapped: 60710912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.139926910s of 13.925807953s, submitted: 5
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342376448 unmapped: 54566912 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3795023 data_alloc: 234881024 data_used: 17813551
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9ebb000/0x0/0x4ffc00000, data 0x274fcd3/0x2911000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [0,0,0,0,0,0,0,0,15])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343351296 unmapped: 53592064 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343482368 unmapped: 53460992 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343605248 unmapped: 53338112 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343826432 unmapped: 53116928 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343908352 unmapped: 53035008 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3818345 data_alloc: 234881024 data_used: 19507611
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343908352 unmapped: 53035008 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e13000/0x0/0x4ffc00000, data 0x27f6cd3/0x29b8000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343908352 unmapped: 53035008 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343908352 unmapped: 53035008 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 51978240 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 51978240 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3816841 data_alloc: 234881024 data_used: 19519899
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x281acd3/0x29dc000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x281acd3/0x29dc000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 51978240 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x281acd3/0x29dc000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 51978240 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 51978240 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 51978240 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9df0000/0x0/0x4ffc00000, data 0x281acd3/0x29dc000, compress 0x0/0x0/0x0, omap 0x7463d, meta 0x133bb9c3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.503668785s of 15.253365517s, submitted: 133
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182be2d500
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc1800 session 0x56182bc12c40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 51978240 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3818569 data_alloc: 234881024 data_used: 19545499
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 51978240 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344973312 unmapped: 51970048 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c60c400 session 0x56182b855340
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9de1000/0x0/0x4ffc00000, data 0x2829cd3/0x29eb000, compress 0x0/0x0/0x0, omap 0x747d8, meta 0x133bb828), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341581824 unmapped: 55361536 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341581824 unmapped: 55361536 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341581824 unmapped: 55361536 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623583 data_alloc: 218103808 data_used: 8891291
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341581824 unmapped: 55361536 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x561829cac1c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x56182c331500
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341581824 unmapped: 55361536 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6001.8 total, 600.0 interval
Cumulative writes: 45K writes, 175K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
Cumulative WAL: 45K writes, 16K syncs, 2.74 writes per sync, written: 0.17 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 3990 writes, 15K keys, 3990 commit groups, 1.0 writes per commit group, ingest: 19.11 MB, 0.03 MB/s
Interval WAL: 3990 writes, 1573 syncs, 2.54 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.66              0.00         1    0.660       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6001.8 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.7 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6001.8 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5618277b58d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6001.8 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb35f000/0x0/0x4ffc00000, data 0x12abcd3/0x146d000, compress 0x0/0x0/0x0, omap 0x74a85, meta 0x133bb57b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341581824 unmapped: 55361536 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 55353344 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.131446838s of 10.597537994s, submitted: 44
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 55353344 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623511 data_alloc: 218103808 data_used: 8891291
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341590016 unmapped: 55353344 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x561829398380
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c71/0x428000, compress 0x0/0x0/0x0, omap 0x74ca9, meta 0x133bb357), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338075648 unmapped: 58867712 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338083840 unmapped: 58859520 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338083840 unmapped: 58859520 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338083840 unmapped: 58859520 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338083840 unmapped: 58859520 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338083840 unmapped: 58859520 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 58851328 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 58851328 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 58851328 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 58851328 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 58851328 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338092032 unmapped: 58851328 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338100224 unmapped: 58843136 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec154000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x561830d47c00
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bfb3c00
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc1800 session 0x56182bfb96c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338100224 unmapped: 58843136 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3481992 data_alloc: 218103808 data_used: 165787
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182bb11500
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 49.578239441s of 50.584568024s, submitted: 27
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bb83a40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x56182bd40fc0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x56182b854c40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c60c400 session 0x56182a1fec40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf86000/0x0/0x4ffc00000, data 0x685c71/0x846000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182c08e540
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338100224 unmapped: 58843136 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf86000/0x0/0x4ffc00000, data 0x685c71/0x846000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338100224 unmapped: 58843136 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338100224 unmapped: 58843136 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf03000/0x0/0x4ffc00000, data 0x708c71/0x8c9000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338100224 unmapped: 58843136 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf03000/0x0/0x4ffc00000, data 0x708c71/0x8c9000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338100224 unmapped: 58843136 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513790 data_alloc: 218103808 data_used: 169785
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf03000/0x0/0x4ffc00000, data 0x708c71/0x8c9000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338100224 unmapped: 58843136 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3513790 data_alloc: 218103808 data_used: 169785
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf03000/0x0/0x4ffc00000, data 0x708c71/0x8c9000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338108416 unmapped: 58834944 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf03000/0x0/0x4ffc00000, data 0x708c71/0x8c9000, compress 0x0/0x0/0x0, omap 0x74ecd, meta 0x133bb133), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.418393135s of 14.521083832s, submitted: 6
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182be2d500
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3516668 data_alloc: 218103808 data_used: 169785
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3545088 data_alloc: 218103808 data_used: 4920137
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf02000/0x0/0x4ffc00000, data 0x708c94/0x8ca000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3545088 data_alloc: 218103808 data_used: 4920137
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf02000/0x0/0x4ffc00000, data 0x708c94/0x8ca000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebf02000/0x0/0x4ffc00000, data 0x708c94/0x8ca000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338124800 unmapped: 58818560 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.942356110s of 12.959303856s, submitted: 10
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338231296 unmapped: 58712064 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338903040 unmapped: 58040320 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338903040 unmapped: 58040320 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567036 data_alloc: 218103808 data_used: 4929353
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338911232 unmapped: 58032128 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbcf000/0x0/0x4ffc00000, data 0xa33c94/0xbf5000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573854 data_alloc: 218103808 data_used: 4929353
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbc9000/0x0/0x4ffc00000, data 0xa41c94/0xc03000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.332371712s of 10.059819221s, submitted: 28
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338239488 unmapped: 58703872 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 58695680 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 58695680 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573870 data_alloc: 218103808 data_used: 4929353
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 58695680 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbc9000/0x0/0x4ffc00000, data 0xa41c94/0xc03000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 58695680 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbc9000/0x0/0x4ffc00000, data 0xa41c94/0xc03000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342441984 unmapped: 54501376 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6800 session 0x56182cef4e00
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a09ac00 session 0x56182b8861c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 58695680 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338247680 unmapped: 58695680 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3602181 data_alloc: 218103808 data_used: 4929353
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb762000/0x0/0x4ffc00000, data 0xea7cf6/0x106a000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb762000/0x0/0x4ffc00000, data 0xea7cf6/0x106a000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3602181 data_alloc: 218103808 data_used: 4929353
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338255872 unmapped: 58687488 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338264064 unmapped: 58679296 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb762000/0x0/0x4ffc00000, data 0xea7cf6/0x106a000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338264064 unmapped: 58679296 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3602181 data_alloc: 218103808 data_used: 4929353
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.482851028s of 17.732896805s, submitted: 27
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338272256 unmapped: 58671104 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338436096 unmapped: 58507264 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338436096 unmapped: 58507264 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338444288 unmapped: 58499072 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb762000/0x0/0x4ffc00000, data 0xea7cf6/0x106a000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338452480 unmapped: 58490880 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3627765 data_alloc: 218103808 data_used: 9248073
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338460672 unmapped: 58482688 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb762000/0x0/0x4ffc00000, data 0xea7cf6/0x106a000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 58458112 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 58458112 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 58458112 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 58458112 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3627981 data_alloc: 218103808 data_used: 9248073
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.794153214s of 10.743078232s, submitted: 91
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 338485248 unmapped: 58458112 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb2dc000/0x0/0x4ffc00000, data 0x1325cf6/0x14e8000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341172224 unmapped: 55771136 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340738048 unmapped: 56205312 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340738048 unmapped: 56205312 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340738048 unmapped: 56205312 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3669113 data_alloc: 218103808 data_used: 9273673
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb1c6000/0x0/0x4ffc00000, data 0x1443cf6/0x1606000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340738048 unmapped: 56205312 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb1c6000/0x0/0x4ffc00000, data 0x1443cf6/0x1606000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340738048 unmapped: 56205312 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340877312 unmapped: 56066048 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb1a5000/0x0/0x4ffc00000, data 0x1464cf6/0x1627000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340877312 unmapped: 56066048 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340877312 unmapped: 56066048 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3667489 data_alloc: 218103808 data_used: 9277769
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340877312 unmapped: 56066048 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb1a5000/0x0/0x4ffc00000, data 0x1464cf6/0x1627000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340877312 unmapped: 56066048 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340885504 unmapped: 56057856 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340885504 unmapped: 56057856 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340885504 unmapped: 56057856 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb1a5000/0x0/0x4ffc00000, data 0x1464cf6/0x1627000, compress 0x0/0x0/0x0, omap 0x748c1, meta 0x133bb73f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3667745 data_alloc: 218103808 data_used: 9285961
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340885504 unmapped: 56057856 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340885504 unmapped: 56057856 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.866159439s of 16.156522751s, submitted: 69
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c60dc00 session 0x56182b709a40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x561830d47340
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 56967168 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 56967168 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eba06000/0x0/0x4ffc00000, data 0xa42c94/0xc04000, compress 0x0/0x0/0x0, omap 0x74d01, meta 0x133bb2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eba06000/0x0/0x4ffc00000, data 0xa42c94/0xc04000, compress 0x0/0x0/0x0, omap 0x74d01, meta 0x133bb2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 56967168 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3581947 data_alloc: 218103808 data_used: 4929353
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 56967168 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 339976192 unmapped: 56967168 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x56182bf65dc0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x561831c20700
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x561829b40c40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c84/0x428000, compress 0x0/0x0/0x0, omap 0x750b9, meta 0x133baf47), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334766080 unmapped: 62177280 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 62169088 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 62169088 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 62169088 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334774272 unmapped: 62169088 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334782464 unmapped: 62160896 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334782464 unmapped: 62160896 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334782464 unmapped: 62160896 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502317 data_alloc: 218103808 data_used: 173783
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334782464 unmapped: 62160896 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 334790656 unmapped: 62152704 heap: 396943360 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a09ac00 session 0x56182bb83500
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182cef41c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182ba7cc40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x56182c330380
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 61.001258850s of 61.117053986s, submitted: 73
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 340320256 unmapped: 60293120 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x56182a1fe540
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335036416 unmapped: 65576960 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335036416 unmapped: 65576960 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3576269 data_alloc: 218103808 data_used: 177844
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335036416 unmapped: 65576960 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335036416 unmapped: 65576960 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e4000/0x0/0x4ffc00000, data 0xe28c61/0xfe8000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335036416 unmapped: 65576960 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335036416 unmapped: 65576960 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e4000/0x0/0x4ffc00000, data 0xe28c61/0xfe8000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3576269 data_alloc: 218103808 data_used: 177844
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e4000/0x0/0x4ffc00000, data 0xe28c61/0xfe8000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e4000/0x0/0x4ffc00000, data 0xe28c61/0xfe8000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e4000/0x0/0x4ffc00000, data 0xe28c61/0xfe8000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3576269 data_alloc: 218103808 data_used: 177844
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6800 session 0x561830d46c40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335044608 unmapped: 65568768 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335052800 unmapped: 65560576 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182c330e00
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e4000/0x0/0x4ffc00000, data 0xe28c61/0xfe8000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335052800 unmapped: 65560576 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182ba7c8c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.469144821s of 16.633630753s, submitted: 17
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x56182be2d6c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335060992 unmapped: 65552384 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3579208 data_alloc: 218103808 data_used: 177844
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 335020032 unmapped: 65593344 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e2000/0x0/0x4ffc00000, data 0xe28c94/0xfea000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e2000/0x0/0x4ffc00000, data 0xe28c94/0xfea000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e2000/0x0/0x4ffc00000, data 0xe28c94/0xfea000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3652168 data_alloc: 234881024 data_used: 12416692
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e2000/0x0/0x4ffc00000, data 0xe28c94/0xfea000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb7e2000/0x0/0x4ffc00000, data 0xe28c94/0xfea000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3652168 data_alloc: 234881024 data_used: 12416692
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 336846848 unmapped: 63766528 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.341166496s of 12.350997925s, submitted: 5
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341409792 unmapped: 59203584 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 58343424 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb079000/0x0/0x4ffc00000, data 0x1591c94/0x1753000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342269952 unmapped: 58343424 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3716928 data_alloc: 234881024 data_used: 14595764
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa5000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717184 data_alloc: 234881024 data_used: 14603956
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa5000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717440 data_alloc: 234881024 data_used: 14612148
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa5000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa5000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa5000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343638016 unmapped: 56975360 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343646208 unmapped: 56967168 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717440 data_alloc: 234881024 data_used: 14612148
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182b709500
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b413c00 session 0x56182bfb3880
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182b855340
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343646208 unmapped: 56967168 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x561829cac1c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.109582901s of 19.573698044s, submitted: 104
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x561830d47c00
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182b431180
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6400 session 0x56182bc5ec40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x561829b65340
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x561829b65880
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343670784 unmapped: 56942592 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343678976 unmapped: 56934400 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343678976 unmapped: 56934400 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343678976 unmapped: 56934400 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730452 data_alloc: 234881024 data_used: 14612148
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343678976 unmapped: 56934400 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343678976 unmapped: 56934400 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 56926208 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 56926208 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 56926208 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730452 data_alloc: 234881024 data_used: 14612148
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 56926208 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 56926208 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.862773895s of 11.917412758s, submitted: 10
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343834624 unmapped: 56778752 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x56182bb82380
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadfa000/0x0/0x4ffc00000, data 0x180fca4/0x19d2000, compress 0x0/0x0/0x0, omap 0x75609, meta 0x133ba9f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadd5000/0x0/0x4ffc00000, data 0x1833cc7/0x19f7000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 343842816 unmapped: 56770560 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344391680 unmapped: 56221696 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3740334 data_alloc: 234881024 data_used: 15198916
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344391680 unmapped: 56221696 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadd5000/0x0/0x4ffc00000, data 0x1833cc7/0x19f7000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344391680 unmapped: 56221696 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 56213504 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 56213504 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 56213504 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3740334 data_alloc: 234881024 data_used: 15198916
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 56213504 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 56213504 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eadd5000/0x0/0x4ffc00000, data 0x1833cc7/0x19f7000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344399872 unmapped: 56213504 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344408064 unmapped: 56205312 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344408064 unmapped: 56205312 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3740334 data_alloc: 234881024 data_used: 15198916
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.189497948s of 12.216222763s, submitted: 13
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346177536 unmapped: 54435840 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346439680 unmapped: 54173696 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea779000/0x0/0x4ffc00000, data 0x1e80cc7/0x2044000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3791218 data_alloc: 234881024 data_used: 15420612
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea76e000/0x0/0x4ffc00000, data 0x1e92cc7/0x2056000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea76e000/0x0/0x4ffc00000, data 0x1e92cc7/0x2056000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3791234 data_alloc: 234881024 data_used: 15420612
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea76e000/0x0/0x4ffc00000, data 0x1e92cc7/0x2056000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3792386 data_alloc: 234881024 data_used: 15502532
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea76e000/0x0/0x4ffc00000, data 0x1e92cc7/0x2056000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182be2d340
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.931962967s of 18.925085068s, submitted: 81
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182b1e2e00
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea76e000/0x0/0x4ffc00000, data 0x1e92cc7/0x2056000, compress 0x0/0x0/0x0, omap 0x74fd2, meta 0x133bb02e), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788494 data_alloc: 234881024 data_used: 15502532
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347013120 unmapped: 53600256 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182cef5340
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 53583872 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 53583872 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347029504 unmapped: 53583872 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafb2000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x758c9, meta 0x133ba737), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 53575680 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3724208 data_alloc: 234881024 data_used: 14681780
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347037696 unmapped: 53575680 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x56182ba7c540
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6800 session 0x561829b41340
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341475328 unmapped: 59138048 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafb2000/0x0/0x4ffc00000, data 0x1657c94/0x1819000, compress 0x0/0x0/0x0, omap 0x758c9, meta 0x133ba737), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x5618299ef400 session 0x56182b708700
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c84/0x428000, compress 0x0/0x0/0x0, omap 0x75c7a, meta 0x133ba386), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341483520 unmapped: 59129856 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b081800 session 0x56182c330fc0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: mgrc ms_handle_reset ms_handle_reset con 0x56182a1c1400
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2938807880
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2938807880,v1:192.168.122.100:6801/2938807880]
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: mgrc handle_mgr_configure stats_period=5
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b415c00 session 0x56182bd53a40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a012800 session 0x56182bb836c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341491712 unmapped: 59121664 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341499904 unmapped: 59113472 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3526115 data_alloc: 218103808 data_used: 181905
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341499904 unmapped: 59113472 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341499904 unmapped: 59113472 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 59105280 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x56182ba7c8c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x561831c216c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x561829cac8c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a4000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x56182c330380
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 59105280 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 44.973899841s of 45.842048645s, submitted: 73
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341508096 unmapped: 59105280 heap: 400613376 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3527821 data_alloc: 218103808 data_used: 181905
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182be2d180
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 62676992 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182b855340
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbe8000/0x0/0x4ffc00000, data 0xa23c8a/0xbe4000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 62676992 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 62676992 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341614592 unmapped: 62676992 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 62668800 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3581350 data_alloc: 218103808 data_used: 185966
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbe8000/0x0/0x4ffc00000, data 0xa23cc3/0xbe4000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 62668800 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341622784 unmapped: 62668800 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341630976 unmapped: 62660608 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbe8000/0x0/0x4ffc00000, data 0xa23cc3/0xbe4000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341934080 unmapped: 62357504 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620518 data_alloc: 218103808 data_used: 6809198
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbe8000/0x0/0x4ffc00000, data 0xa23cc3/0xbe4000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620518 data_alloc: 218103808 data_used: 6809198
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341942272 unmapped: 62349312 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.478496552s of 19.308280945s, submitted: 36
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ebbe8000/0x0/0x4ffc00000, data 0xa23cc3/0xbe4000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [0,0,1,0,1])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342933504 unmapped: 61358080 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342376448 unmapped: 61915136 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3668324 data_alloc: 218103808 data_used: 7542382
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 62586880 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 62586880 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 62586880 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb363000/0x0/0x4ffc00000, data 0x1299cc3/0x145a000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 62586880 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341704704 unmapped: 62586880 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3681236 data_alloc: 218103808 data_used: 7419502
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb353000/0x0/0x4ffc00000, data 0x12b8cc3/0x1479000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674556 data_alloc: 218103808 data_used: 7423598
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb353000/0x0/0x4ffc00000, data 0x12b8cc3/0x1479000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb353000/0x0/0x4ffc00000, data 0x12b8cc3/0x1479000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.386922836s of 12.756870270s, submitted: 92
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341573632 unmapped: 62717952 heap: 404291584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x561829b65880
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x56182a1fe540
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182cef41c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c4400 session 0x56182b0b2000
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x561831c20700
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 70705152 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3751276 data_alloc: 218103808 data_used: 7423598
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x561829167dc0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182bd52000
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182bb83a40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcbe000 session 0x56182b0fdc00
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341983232 unmapped: 70705152 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea739000/0x0/0x4ffc00000, data 0x1ed1cd2/0x2093000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3751276 data_alloc: 218103808 data_used: 7423598
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea739000/0x0/0x4ffc00000, data 0x1ed1cd2/0x2093000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6000 session 0x561829b65500
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182ba4bc00 session 0x56182bb708c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x561829bf2c40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.984330177s of 12.172765732s, submitted: 21
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182be2c000
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 341991424 unmapped: 70696960 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 342130688 unmapped: 70557696 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753806 data_alloc: 218103808 data_used: 7520878
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea738000/0x0/0x4ffc00000, data 0x1ed1ce2/0x2094000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3819854 data_alloc: 234881024 data_used: 18727534
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea738000/0x0/0x4ffc00000, data 0x1ed1ce2/0x2094000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349372416 unmapped: 63315968 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3819262 data_alloc: 234881024 data_used: 18731630
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.963133812s of 11.972195625s, submitted: 3
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353280000 unmapped: 59408384 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e91000/0x0/0x4ffc00000, data 0x276ace2/0x292d000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353361920 unmapped: 59326464 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352829440 unmapped: 59858944 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e57000/0x0/0x4ffc00000, data 0x27a9ce2/0x296c000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3884820 data_alloc: 234881024 data_used: 19830382
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e57000/0x0/0x4ffc00000, data 0x27a9ce2/0x296c000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e57000/0x0/0x4ffc00000, data 0x27a9ce2/0x296c000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e5d000/0x0/0x4ffc00000, data 0x27acce2/0x296f000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829499000 session 0x56182bb11880
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3879500 data_alloc: 234881024 data_used: 19830382
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e5d000/0x0/0x4ffc00000, data 0x27acce2/0x296f000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e5d000/0x0/0x4ffc00000, data 0x27acce2/0x296f000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.312377930s of 10.672085762s, submitted: 111
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9e5d000/0x0/0x4ffc00000, data 0x27acce2/0x296f000, compress 0x0/0x0/0x0, omap 0x75d01, meta 0x133ba2ff), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0c00 session 0x561829b41a40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182bc5fdc0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x56182bd53c00
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352837632 unmapped: 59850752 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182bd4d340
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3689771 data_alloc: 218103808 data_used: 7411310
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa8000/0x0/0x4ffc00000, data 0x12cbcc3/0x148c000, compress 0x0/0x0/0x0, omap 0x76139, meta 0x133b9ec7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3689771 data_alloc: 218103808 data_used: 7411310
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eafa8000/0x0/0x4ffc00000, data 0x12cbcc3/0x148c000, compress 0x0/0x0/0x0, omap 0x76139, meta 0x133b9ec7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.554950714s of 10.802752495s, submitted: 39
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182b709a40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 350814208 unmapped: 61874176 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c602400 session 0x56182b0fc8c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 65822720 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 65814528 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 65814528 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 65814528 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 65814528 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 65814528 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 65806336 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 65806336 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 65806336 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 65806336 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 65806336 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 65798144 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 65798144 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 65798144 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 65798144 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553143 data_alloc: 218103808 data_used: 185966
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 65798144 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346890240 unmapped: 65798144 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 65789952 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 65789952 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.951286316s of 42.007965088s, submitted: 30
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346898432 unmapped: 65789952 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec35e000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553079 data_alloc: 218103808 data_used: 189964
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65765376 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65765376 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65765376 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65765376 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a5000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346923008 unmapped: 65765376 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553079 data_alloc: 218103808 data_used: 189964
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 65757184 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 65757184 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a5000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 65757184 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182cef4540
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182cef4a80
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x561831c20c40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182a000e00
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ec3a5000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 65757184 heap: 412688384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.872159004s of 10.219871521s, submitted: 29
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 62029824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x56182bb10380
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x56182b1e3c00
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182b0fddc0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611703 data_alloc: 218103808 data_used: 189980
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x561829acefc0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347250688 unmapped: 69115904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x56182bc12fc0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9f5000/0x0/0x4ffc00000, data 0xc16c71/0xdd7000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347250688 unmapped: 69115904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347250688 unmapped: 69115904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 69107712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9f5000/0x0/0x4ffc00000, data 0xc16c71/0xdd7000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 69107712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9f5000/0x0/0x4ffc00000, data 0xc16c71/0xdd7000, compress 0x0/0x0/0x0, omap 0x76571, meta 0x133b9a8f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3611703 data_alloc: 218103808 data_used: 189980
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347258880 unmapped: 69107712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182bd4c700
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 70713344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9f5000/0x0/0x4ffc00000, data 0xc16c71/0xdd7000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x133b98bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345653248 unmapped: 70713344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345923584 unmapped: 70443008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675816 data_alloc: 234881024 data_used: 10259996
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9d1000/0x0/0x4ffc00000, data 0xc3ac71/0xdfb000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x133b98bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9d1000/0x0/0x4ffc00000, data 0xc3ac71/0xdfb000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x133b98bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675816 data_alloc: 234881024 data_used: 10259996
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9d1000/0x0/0x4ffc00000, data 0xc3ac71/0xdfb000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x133b98bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4eb9d1000/0x0/0x4ffc00000, data 0xc3ac71/0xdfb000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x133b98bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348446720 unmapped: 67919872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.931827545s of 19.378744125s, submitted: 11
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353779712 unmapped: 62586880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759650 data_alloc: 234881024 data_used: 11513372
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9da5000/0x0/0x4ffc00000, data 0x16c6c71/0x1887000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9da5000/0x0/0x4ffc00000, data 0x16c6c71/0x1887000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9da5000/0x0/0x4ffc00000, data 0x16c6c71/0x1887000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3759650 data_alloc: 234881024 data_used: 11513372
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9da3000/0x0/0x4ffc00000, data 0x16c8c71/0x1889000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753666 data_alloc: 234881024 data_used: 11513372
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.650314331s of 12.895412445s, submitted: 88
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9da2000/0x0/0x4ffc00000, data 0x16c9c71/0x188a000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3753866 data_alloc: 234881024 data_used: 11513372
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353943552 unmapped: 62423040 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9da2000/0x0/0x4ffc00000, data 0x16c9c71/0x188a000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x56182cef5c00
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182a1ffdc0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62504960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62504960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62504960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62504960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3794327 data_alloc: 234881024 data_used: 11513372
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62504960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e98f2000/0x0/0x4ffc00000, data 0x1b78cd3/0x1d3a000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353861632 unmapped: 62504960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e98f2000/0x0/0x4ffc00000, data 0x1b78cd3/0x1d3a000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e98f2000/0x0/0x4ffc00000, data 0x1b78cd3/0x1d3a000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3794327 data_alloc: 234881024 data_used: 11513372
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.721897125s of 14.075347900s, submitted: 35
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x56182c08e000
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e98f1000/0x0/0x4ffc00000, data 0x1b78cf6/0x1d3b000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e98f1000/0x0/0x4ffc00000, data 0x1b78cf6/0x1d3b000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3817488 data_alloc: 234881024 data_used: 15285788
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e98f1000/0x0/0x4ffc00000, data 0x1b78cf6/0x1d3b000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x145598bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3817872 data_alloc: 234881024 data_used: 15289884
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353869824 unmapped: 62496768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353878016 unmapped: 62488576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.944444656s of 11.960687637s, submitted: 7
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354533376 unmapped: 61833216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 353632256 unmapped: 62734336 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 60481536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7ef4000/0x0/0x4ffc00000, data 0x23c6cf6/0x2589000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x156f98bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7ef4000/0x0/0x4ffc00000, data 0x23c6cf6/0x2589000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x156f98bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3875166 data_alloc: 234881024 data_used: 15687196
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 60481536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 60481536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 60481536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 60481536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355803136 unmapped: 60563456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7f00000/0x0/0x4ffc00000, data 0x23c9cf6/0x258c000, compress 0x0/0x0/0x0, omap 0x76745, meta 0x156f98bb), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3869110 data_alloc: 234881024 data_used: 15691292
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355803136 unmapped: 60563456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 60555264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 60555264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.615018845s of 11.088137627s, submitted: 103
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c611800 session 0x561830d46c40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 60555264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182d6b0400 session 0x56182cef4380
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 60530688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3765060 data_alloc: 234881024 data_used: 11513372
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 60530688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8c00000/0x0/0x4ffc00000, data 0x16c9c71/0x188a000, compress 0x0/0x0/0x0, omap 0x76b7d, meta 0x156f9483), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 60530688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182b708a80
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182fe8b180
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355835904 unmapped: 60530688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x5618293988c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 64643072 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 64634880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351731712 unmapped: 64634880 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 64626688 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 64618496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 64618496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 64618496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 64618496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 64618496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580092 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 64618496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 64610304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 64610304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 64610304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182b854000
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x56182b431c00
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x56182ba7c540
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182fe8a8c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.068725586s of 41.208274841s, submitted: 78
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352288768 unmapped: 64077824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x561829ace540
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3640912 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 64610304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351756288 unmapped: 64610304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 64602112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 64602112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e96ee000/0x0/0x4ffc00000, data 0xbdec61/0xd9e000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 64602112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3640912 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 64602112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 64602112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351764480 unmapped: 64602112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182bfb9180
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e96ee000/0x0/0x4ffc00000, data 0xbdec61/0xd9e000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 64528384 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 64528384 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3701496 data_alloc: 234881024 data_used: 10021255
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e96ca000/0x0/0x4ffc00000, data 0xc02c61/0xdc2000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3701496 data_alloc: 234881024 data_used: 10021255
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e96ca000/0x0/0x4ffc00000, data 0xc02c61/0xdc2000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e96ca000/0x0/0x4ffc00000, data 0xc02c61/0xdc2000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3702776 data_alloc: 234881024 data_used: 10060167
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 64520192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.024909973s of 21.150117874s, submitted: 14
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 352968704 unmapped: 63397888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e94a6000/0x0/0x4ffc00000, data 0xe26c61/0xfe6000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [0,0,0,0,0,0,0,1,0,27])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354033664 unmapped: 62332928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354344960 unmapped: 62021632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9139000/0x0/0x4ffc00000, data 0x118bc61/0x134b000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9139000/0x0/0x4ffc00000, data 0x118bc61/0x134b000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 62013440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6601.8 total, 600.0 interval#012Cumulative writes: 47K writes, 185K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 47K writes, 17K syncs, 2.73 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2279 writes, 9430 keys, 2279 commit groups, 1.0 writes per commit group, ingest: 11.32 MB, 0.02 MB/s#012Interval WAL: 2279 writes, 877 syncs, 2.60 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768766 data_alloc: 234881024 data_used: 11496839
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 62013440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 62013440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 62013440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 62013440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354353152 unmapped: 62013440 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768782 data_alloc: 234881024 data_used: 11496839
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768782 data_alloc: 234881024 data_used: 11496839
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768782 data_alloc: 234881024 data_used: 11496839
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 60964864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 60956672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769038 data_alloc: 234881024 data_used: 11505031
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 60956672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 60956672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 60956672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 60956672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 60948480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769038 data_alloc: 234881024 data_used: 11505031
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 60948480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 60948480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9125000/0x0/0x4ffc00000, data 0x119fc61/0x135f000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 60948480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.785942078s of 32.167488098s, submitted: 70
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829f62c00 session 0x561831c20540
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61431808 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61431808 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3825944 data_alloc: 234881024 data_used: 11505031
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354934784 unmapped: 61431808 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8846000/0x0/0x4ffc00000, data 0x1a85c61/0x1c45000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61423616 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8846000/0x0/0x4ffc00000, data 0x1a85c61/0x1c45000, compress 0x0/0x0/0x0, omap 0x77189, meta 0x156f8e77), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61423616 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bfb9500
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61423616 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x56182b855880
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61423616 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182b887dc0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182bcc0400 session 0x561828f04fc0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826205 data_alloc: 234881024 data_used: 11505031
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61423616 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 354942976 unmapped: 61423616 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8847000/0x0/0x4ffc00000, data 0x1a85c61/0x1c45000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3873569 data_alloc: 234881024 data_used: 19325831
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8847000/0x0/0x4ffc00000, data 0x1a85c61/0x1c45000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356876288 unmapped: 59490304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8847000/0x0/0x4ffc00000, data 0x1a85c61/0x1c45000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3873569 data_alloc: 234881024 data_used: 19325831
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356892672 unmapped: 59473920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 356892672 unmapped: 59473920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.664230347s of 19.858476639s, submitted: 27
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 55656448 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e8691000/0x0/0x4ffc00000, data 0x1c3bc61/0x1dfb000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 55631872 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 55353344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3930201 data_alloc: 234881024 data_used: 20498311
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 55353344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 55353344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7fe5000/0x0/0x4ffc00000, data 0x22e7c61/0x24a7000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 55353344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 55353344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 55353344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3929153 data_alloc: 234881024 data_used: 20498311
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361021440 unmapped: 55345152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7fc6000/0x0/0x4ffc00000, data 0x2306c61/0x24c6000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361021440 unmapped: 55345152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7fc6000/0x0/0x4ffc00000, data 0x2306c61/0x24c6000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361021440 unmapped: 55345152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e7fc6000/0x0/0x4ffc00000, data 0x2306c61/0x24c6000, compress 0x0/0x0/0x0, omap 0x7735d, meta 0x156f8ca3), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361021440 unmapped: 55345152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 55328768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3929665 data_alloc: 234881024 data_used: 20559751
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 55328768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6c00 session 0x56182bd53180
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.727113724s of 13.411382675s, submitted: 117
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00e000 session 0x561829399c00
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bb83dc0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e912c000/0x0/0x4ffc00000, data 0x11a0c61/0x1360000, compress 0x0/0x0/0x0, omap 0x76e29, meta 0x156f91d7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e912c000/0x0/0x4ffc00000, data 0x11a0c61/0x1360000, compress 0x0/0x0/0x0, omap 0x76e29, meta 0x156f91d7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3776487 data_alloc: 234881024 data_used: 11566471
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e912c000/0x0/0x4ffc00000, data 0x11a0c61/0x1360000, compress 0x0/0x0/0x0, omap 0x76e29, meta 0x156f91d7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 56188928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c611800 session 0x561829398a80
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182bb82540
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00f800 session 0x56182c331880
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3607581 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bfb8c40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00e000 session 0x56182b887dc0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x561829b64700
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c611800 session 0x561831c21180
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.109107971s of 41.198600769s, submitted: 61
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6c00 session 0x56182bd41180
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6c00 session 0x56182b0fcc40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bd4ddc0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00e000 session 0x56182bb708c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182bb83dc0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 67297280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 67297280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c2000/0x0/0x4ffc00000, data 0xb09c71/0xcca000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 67297280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3662357 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 67297280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 67297280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 349069312 unmapped: 67297280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c2000/0x0/0x4ffc00000, data 0xb09c71/0xcca000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c2000/0x0/0x4ffc00000, data 0xb09c71/0xcca000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c611800 session 0x561829ace540
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3663102 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348987392 unmapped: 67379200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.737547874s of 10.004721642s, submitted: 42
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3716094 data_alloc: 218103808 data_used: 9047959
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182bfb3dc0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3716094 data_alloc: 218103808 data_used: 9047959
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3716094 data_alloc: 218103808 data_used: 9047959
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00e000 session 0x56182c08f180
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3716094 data_alloc: 218103808 data_used: 9047959
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e97c1000/0x0/0x4ffc00000, data 0xb09c94/0xccb000, compress 0x0/0x0/0x0, omap 0x77259, meta 0x156f8da7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 348356608 unmapped: 68009984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6c00 session 0x56182bfb3500
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.629669189s of 22.757272720s, submitted: 64
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182c331a40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea064000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77689, meta 0x156f8977), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614988 data_alloc: 218103808 data_used: 198023
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344506368 unmapped: 71860224 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 57.619342804s of 57.669017792s, submitted: 25
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182a001180
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182bd40c40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x56182be2c000
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00e000 session 0x561829399c00
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6c00 session 0x56182c3316c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 70803456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182bfb2000
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 70803456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182c0e9400 session 0x56182b854e00
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9ed5000/0x0/0x4ffc00000, data 0x3f7c61/0x5b7000, compress 0x0/0x0/0x0, omap 0x77891, meta 0x156f876f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x561829b7e000 session 0x561831c21a40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 70803456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a00e000 session 0x56182b4308c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9ed5000/0x0/0x4ffc00000, data 0x3f7c61/0x5b7000, compress 0x0/0x0/0x0, omap 0x77891, meta 0x156f876f), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628047 data_alloc: 218103808 data_used: 202084
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345563136 unmapped: 70803456 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344547328 unmapped: 71819264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344547328 unmapped: 71819264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344547328 unmapped: 71819264 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182a1c6c00 session 0x56182c08ec40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3637451 data_alloc: 218103808 data_used: 1815908
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344580096 unmapped: 71786496 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77c7d, meta 0x156f8383), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 ms_handle_reset con 0x56182b411c00 session 0x56182bb82000
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 71778304 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344596480 unmapped: 71770112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344596480 unmapped: 71770112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344596480 unmapped: 71770112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344596480 unmapped: 71770112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344596480 unmapped: 71770112 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344604672 unmapped: 71761920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344604672 unmapped: 71761920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344604672 unmapped: 71761920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344604672 unmapped: 71761920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344604672 unmapped: 71761920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344604672 unmapped: 71761920 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344621056 unmapped: 71745536 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344629248 unmapped: 71737344 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344637440 unmapped: 71729152 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344645632 unmapped: 71720960 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344653824 unmapped: 71712768 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344662016 unmapped: 71704576 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344678400 unmapped: 71688192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344678400 unmapped: 71688192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344678400 unmapped: 71688192 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344686592 unmapped: 71680000 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 71671808 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 heartbeat osd_stat(store_statfs(0x4ea065000/0x0/0x4ffc00000, data 0x267c61/0x427000, compress 0x0/0x0/0x0, omap 0x77e95, meta 0x156f816b), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 71671808 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617917 data_alloc: 218103808 data_used: 202084
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344694784 unmapped: 71671808 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 78.625038147s of 79.510787964s, submitted: 41
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 71655424 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 313 ms_handle_reset con 0x56182b411c00 session 0x56182ba7c540
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 71655424 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344711168 unmapped: 71655424 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea062000/0x0/0x4ffc00000, data 0x26970d/0x427000, compress 0x0/0x0/0x0, omap 0x77f1b, meta 0x156f80e5), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619786 data_alloc: 218103808 data_used: 202084
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea062000/0x0/0x4ffc00000, data 0x26970d/0x427000, compress 0x0/0x0/0x0, omap 0x77f1b, meta 0x156f80e5), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3619786 data_alloc: 218103808 data_used: 202084
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344719360 unmapped: 71647232 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 313 heartbeat osd_stat(store_statfs(0x4ea062000/0x0/0x4ffc00000, data 0x26970d/0x427000, compress 0x0/0x0/0x0, omap 0x77f1b, meta 0x156f80e5), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 313 ms_handle_reset con 0x561829b7e000 session 0x56182ba7c8c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.686821938s of 11.737939835s, submitted: 28
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344743936 unmapped: 71622656 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344743936 unmapped: 71622656 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344760320 unmapped: 71606272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344760320 unmapped: 71606272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344760320 unmapped: 71606272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344760320 unmapped: 71606272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344760320 unmapped: 71606272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344760320 unmapped: 71606272 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344768512 unmapped: 71598080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344768512 unmapped: 71598080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344768512 unmapped: 71598080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344768512 unmapped: 71598080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344768512 unmapped: 71598080 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344776704 unmapped: 71589888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344776704 unmapped: 71589888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344776704 unmapped: 71589888 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344784896 unmapped: 71581696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344784896 unmapped: 71581696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344784896 unmapped: 71581696 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344793088 unmapped: 71573504 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 71565312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 71565312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 71565312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 71565312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 71565312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344801280 unmapped: 71565312 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344809472 unmapped: 71557120 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344809472 unmapped: 71557120 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344817664 unmapped: 71548928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344817664 unmapped: 71548928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344817664 unmapped: 71548928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344817664 unmapped: 71548928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344817664 unmapped: 71548928 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344825856 unmapped: 71540736 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344842240 unmapped: 71524352 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344858624 unmapped: 71507968 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344866816 unmapped: 71499776 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344875008 unmapped: 71491584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344875008 unmapped: 71491584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344875008 unmapped: 71491584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344875008 unmapped: 71491584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344875008 unmapped: 71491584 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344883200 unmapped: 71483392 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344883200 unmapped: 71483392 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 71475200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 71475200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 71475200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 71475200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 71475200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344891392 unmapped: 71475200 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344899584 unmapped: 71467008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344899584 unmapped: 71467008 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344907776 unmapped: 71458816 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344915968 unmapped: 71450624 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344932352 unmapped: 71434240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344932352 unmapped: 71434240 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 71426048 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 71426048 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 71426048 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 71426048 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 71426048 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344940544 unmapped: 71426048 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206145
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344948736 unmapped: 71417856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344948736 unmapped: 71417856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344948736 unmapped: 71417856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344948736 unmapped: 71417856 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 ms_handle_reset con 0x56182a00e000 session 0x56182bd416c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 ms_handle_reset con 0x56182a1c6c00 session 0x56182bb11340
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344956928 unmapped: 71409664 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206180
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344956928 unmapped: 71409664 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344956928 unmapped: 71409664 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344956928 unmapped: 71409664 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 71401472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 71401472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3622480 data_alloc: 218103808 data_used: 206180
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 71401472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344965120 unmapped: 71401472 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344973312 unmapped: 71393280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344973312 unmapped: 71393280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 heartbeat osd_stat(store_statfs(0x4ea060000/0x0/0x4ffc00000, data 0x26b18c/0x42a000, compress 0x0/0x0/0x0, omap 0x78afb, meta 0x156f7505), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 117.142692566s of 117.148269653s, submitted: 14
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 344973312 unmapped: 71393280 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 315 ms_handle_reset con 0x56182c0e9400 session 0x56182c08efc0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3623367 data_alloc: 218103808 data_used: 206129
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345006080 unmapped: 71360512 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345014272 unmapped: 71352320 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345022464 unmapped: 71344128 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 315 heartbeat osd_stat(store_statfs(0x4ea05e000/0x0/0x4ffc00000, data 0x26cd26/0x42a000, compress 0x0/0x0/0x0, omap 0x78b82, meta 0x156f747e), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 315 heartbeat osd_stat(store_statfs(0x4ea05e000/0x0/0x4ffc00000, data 0x26cd26/0x42a000, compress 0x0/0x0/0x0, omap 0x78b82, meta 0x156f747e), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345022464 unmapped: 71344128 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 315 handle_osd_map epochs [315,316], i have 315, src has [1,316]
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 316 ms_handle_reset con 0x56182bcc0400 session 0x56182bc13340
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345030656 unmapped: 71335936 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3625319 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345030656 unmapped: 71335936 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 316 heartbeat osd_stat(store_statfs(0x4ea05e000/0x0/0x4ffc00000, data 0x26e8f3/0x42c000, compress 0x0/0x0/0x0, omap 0x78c09, meta 0x156f73f7), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345038848 unmapped: 71327744 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345038848 unmapped: 71327744 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345038848 unmapped: 71327744 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.064450264s of 10.179132462s, submitted: 41
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345055232 unmapped: 71311360 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3628093 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345055232 unmapped: 71311360 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 317 heartbeat osd_stat(store_statfs(0x4ea05b000/0x0/0x4ffc00000, data 0x27038e/0x42f000, compress 0x0/0x0/0x0, omap 0x78c90, meta 0x156f7370), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345063424 unmapped: 71303168 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 317 heartbeat osd_stat(store_statfs(0x4ea05b000/0x0/0x4ffc00000, data 0x27038e/0x42f000, compress 0x0/0x0/0x0, omap 0x78c90, meta 0x156f7370), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 71286784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 71286784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 71286784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3630867 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 71286784 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 318 handle_osd_map epochs [318,319], i have 318, src has [1,319]
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345088000 unmapped: 71278592 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 ms_handle_reset con 0x56182a00e000 session 0x56182bb11880
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345104384 unmapped: 71262208 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345104384 unmapped: 71262208 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 71254016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 71254016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 71254016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 71254016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 71254016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345112576 unmapped: 71254016 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 71245824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 71245824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345120768 unmapped: 71245824 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 71237632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 71237632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 71237632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 71237632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345128960 unmapped: 71237632 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345145344 unmapped: 71221248 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345153536 unmapped: 71213056 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 71204864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345161728 unmapped: 71204864 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345169920 unmapped: 71196672 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 71188480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 71188480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 71188480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 71188480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345178112 unmapped: 71188480 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345186304 unmapped: 71180288 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345202688 unmapped: 71163904 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 71155712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345210880 unmapped: 71155712 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 71147520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 71147520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 71147520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 71147520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 71147520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345219072 unmapped: 71147520 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 71131136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 71131136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345235456 unmapped: 71131136 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345243648 unmapped: 71122944 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345243648 unmapped: 71122944 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 71114752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 71114752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 71114752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 71114752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345251840 unmapped: 71114752 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 71106560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 71106560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 71106560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 71106560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 71106560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345260032 unmapped: 71106560 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 71098368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 71098368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345268224 unmapped: 71098368 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 71090176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 71090176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 71090176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 71090176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345276416 unmapped: 71090176 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345284608 unmapped: 71081984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345284608 unmapped: 71081984 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 71073792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 71073792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 71073792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 71073792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 71073792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345292800 unmapped: 71073792 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345300992 unmapped: 71065600 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635198 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 71049216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 71049216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 71049216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345317376 unmapped: 71049216 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345325568 unmapped: 71041024 heap: 416366592 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 116.263618469s of 116.304878235s, submitted: 39
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679712 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 345333760 unmapped: 79429632 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 heartbeat osd_stat(store_statfs(0x4ea055000/0x0/0x4ffc00000, data 0x2739a9/0x435000, compress 0x0/0x0/0x0, omap 0x79500, meta 0x156f6b00), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 319 handle_osd_map epochs [320,320], i have 320, src has [1,320]
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 320 ms_handle_reset con 0x56182a1c6c00 session 0x56182be2c000
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346341376 unmapped: 78422016 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 321 ms_handle_reset con 0x56182b411c00 session 0x56182c08ec40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e9052000/0x0/0x4ffc00000, data 0x127558b/0x143a000, compress 0x0/0x0/0x0, omap 0x7a247, meta 0x156f5db9), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346365952 unmapped: 78397440 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346382336 unmapped: 78381056 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346382336 unmapped: 78381056 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346382336 unmapped: 78381056 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346382336 unmapped: 78381056 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346390528 unmapped: 78372864 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 78356480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 78356480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 78356480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 78356480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 78356480 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346415104 unmapped: 78348288 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728668 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 78340096 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 78340096 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346423296 unmapped: 78340096 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904d000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a2ce, meta 0x156f5d32), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.994350433s of 38.509582520s, submitted: 26
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 78315520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 78315520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728172 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 78315520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 321 heartbeat osd_stat(store_statfs(0x4e904f000/0x0/0x4ffc00000, data 0x1277127/0x143d000, compress 0x0/0x0/0x0, omap 0x7a571, meta 0x156f5a8f), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 346447872 unmapped: 78315520 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 322 ms_handle_reset con 0x56182c0e9400 session 0x56182ba7c000
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730846 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 322 heartbeat osd_stat(store_statfs(0x4e904a000/0x0/0x4ffc00000, data 0x1278d17/0x1440000, compress 0x0/0x0/0x0, omap 0x7ac4c, meta 0x156f53b4), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 322 heartbeat osd_stat(store_statfs(0x4e904a000/0x0/0x4ffc00000, data 0x1278d17/0x1440000, compress 0x0/0x0/0x0, omap 0x7ac4c, meta 0x156f53b4), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730846 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347521024 unmapped: 77242368 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 322 handle_osd_map epochs [322,323], i have 322, src has [1,323]
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.453020096s of 13.245205879s, submitted: 36
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 77209600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e904a000/0x0/0x4ffc00000, data 0x1278d17/0x1440000, compress 0x0/0x0/0x0, omap 0x7ac4c, meta 0x156f53b4), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 77209600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 77209600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 77209600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 77209600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347553792 unmapped: 77209600 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 77201408 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 77201408 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347561984 unmapped: 77201408 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347570176 unmapped: 77193216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347570176 unmapped: 77193216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347570176 unmapped: 77193216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347570176 unmapped: 77193216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347570176 unmapped: 77193216 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347578368 unmapped: 77185024 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347578368 unmapped: 77185024 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7201.8 total, 600.0 interval#012Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 48K writes, 17K syncs, 2.72 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 997 writes, 3370 keys, 997 commit groups, 1.0 writes per commit group, ingest: 2.81 MB, 0.00 MB/s#012Interval WAL: 997 writes, 422 syncs, 2.36 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread fragmentation_score=0.004687 took=0.000062s
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 77176832 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347594752 unmapped: 77168640 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347594752 unmapped: 77168640 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347594752 unmapped: 77168640 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347594752 unmapped: 77168640 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 77160448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 77160448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 77160448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347602944 unmapped: 77160448 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 ms_handle_reset con 0x5618299ef400 session 0x561831c201c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347611136 unmapped: 77152256 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: mgrc ms_handle_reset ms_handle_reset con 0x56182cc39800
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2938807880
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2938807880,v1:192.168.122.100:6801/2938807880]
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: mgrc handle_mgr_configure stats_period=5
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 ms_handle_reset con 0x561829362c00 session 0x56182cef4c40
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 ms_handle_reset con 0x56182b415c00 session 0x56182bd4c1c0
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347619328 unmapped: 77144064 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347627520 unmapped: 77135872 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347635712 unmapped: 77127680 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347635712 unmapped: 77127680 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347635712 unmapped: 77127680 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347635712 unmapped: 77127680 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347643904 unmapped: 77119488 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347643904 unmapped: 77119488 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 77111296 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 77111296 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 77111296 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 77111296 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 77111296 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347652096 unmapped: 77111296 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347660288 unmapped: 77103104 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347660288 unmapped: 77103104 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347668480 unmapped: 77094912 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347676672 unmapped: 77086720 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347676672 unmapped: 77086720 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347684864 unmapped: 77078528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347684864 unmapped: 77078528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347684864 unmapped: 77078528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347684864 unmapped: 77078528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347684864 unmapped: 77078528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347684864 unmapped: 77078528 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347693056 unmapped: 77070336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347693056 unmapped: 77070336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347693056 unmapped: 77070336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347693056 unmapped: 77070336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347693056 unmapped: 77070336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347693056 unmapped: 77070336 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347701248 unmapped: 77062144 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347701248 unmapped: 77062144 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347701248 unmapped: 77062144 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347701248 unmapped: 77062144 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347701248 unmapped: 77062144 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347709440 unmapped: 77053952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347709440 unmapped: 77053952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347709440 unmapped: 77053952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347709440 unmapped: 77053952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347709440 unmapped: 77053952 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 77045760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 77045760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 77045760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 77045760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 77045760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347717632 unmapped: 77045760 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347725824 unmapped: 77037568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347725824 unmapped: 77037568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347725824 unmapped: 77037568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347725824 unmapped: 77037568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347725824 unmapped: 77037568 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347734016 unmapped: 77029376 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347734016 unmapped: 77029376 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347734016 unmapped: 77029376 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 ms_handle_reset con 0x56182a1c6000 session 0x56182fe8a380
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347750400 unmapped: 77012992 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 ms_handle_reset con 0x56182ba4bc00 session 0x561830d47880
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9047000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733620 data_alloc: 218103808 data_used: 210155
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347758592 unmapped: 77004800 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 127.064315796s of 127.072921753s, submitted: 13
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347774976 unmapped: 76988416 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347783168 unmapped: 76980224 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347791360 unmapped: 76972032 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347799552 unmapped: 76963840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347799552 unmapped: 76963840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347799552 unmapped: 76963840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347799552 unmapped: 76963840 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347832320 unmapped: 76931072 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 76914688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 76914688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 76914688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 76914688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 76914688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 76914688 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 76906496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 76906496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347856896 unmapped: 76906496 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 76898304 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347865088 unmapped: 76898304 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347873280 unmapped: 76890112 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 76881920 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 76881920 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 76881920 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 76881920 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 76881920 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 76873728 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 76873728 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347889664 unmapped: 76873728 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 76865536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 76865536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 76865536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 76865536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347897856 unmapped: 76865536 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 76857344 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 48.015445709s of 48.233345032s, submitted: 110
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 76849152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 76849152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 76849152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 76849152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 76849152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347914240 unmapped: 76849152 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 76840960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 76840960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 76840960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347922432 unmapped: 76840960 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 76832768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 76832768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 76832768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 76832768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 76832768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347930624 unmapped: 76832768 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347938816 unmapped: 76824576 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.083396912s of 17.435754776s, submitted: 6
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347938816 unmapped: 76824576 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 76808192 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 76808192 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 76808192 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: osd.1 323 heartbeat osd_stat(store_statfs(0x4e9049000/0x0/0x4ffc00000, data 0x127a796/0x1443000, compress 0x0/0x0/0x0, omap 0x7acd3, meta 0x156f532d), peers [0,2] op hist [])
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 76800000 heap: 424763392 old mem: 2845415832 new mem: 2845415832
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:42:42 np0005558241 ceph-osd[88086]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3733028 data_alloc: 218103808 data_used: 210375
